diff --git a/.gitattributes b/.gitattributes index d9deaa0d70dd7da7a35f835479aef53bbefefe9f..9e4d879928084c005fc7212dac0ed97a54067e8f 100644 --- a/.gitattributes +++ b/.gitattributes @@ -50,3 +50,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text *.jpeg filter=lfs diff=lfs merge=lfs -text *.webp filter=lfs diff=lfs merge=lfs -text formal/setmm/set.mm filter=lfs diff=lfs merge=lfs -text +train filter=lfs diff=lfs merge=lfs -text +test filter=lfs diff=lfs merge=lfs -text +dev filter=lfs diff=lfs merge=lfs -text diff --git a/README.md b/README.md index 361a0b94dd322cfd8db4854b6faaca30ae9ad1a7..7b69e863bd2954eff497a80f3be1711b105e022b 100644 --- a/README.md +++ b/README.md @@ -22,8 +22,8 @@ task_ids: --- # Dataset Description -The `proof-pile` is a 40GB pre-training dataset of mathematical text that comprises roughly 15 billion tokens. The dataset is composed of diverse sources of both informal and formal mathematics, namely -- ArXiv.math (37GB) +The `proof-pile` is a 36GB pre-training dataset of mathematical text that comprises roughly 15 billion tokens. The dataset is composed of diverse sources of both informal and formal mathematics, namely +- ArXiv.math (35GB) - Open-source math textbooks (50MB) - Formal mathematics libraries (500MB) - Lean mathlib and other Lean repositories @@ -38,22 +38,60 @@ The `proof-pile` is a 40GB pre-training dataset of mathematical text that compri - Wikipedia math articles - MATH dataset (6MB) +The construction of the dataset is reproducible using the code and instructions in the [proof-pile GitHub +repo](https://github.com/zhangir-azerbayev/proof-pile). + # Supported Tasks -This dataset is intended to be used for pre-training language models. We envision models pre-trained on the `proof-pile` will have many downstream applications, including informal quantitative reasoning, formal theorem proving, semantic search for formal mathematics, and autoformalization. +This dataset is intended to be used for pre-training and fine-tuning language models. We envision that models trained on the `proof-pile` will have many downstream applications, including informal quantitative reasoning, formal theorem proving, semantic search for formal mathematics, and autoformalization. # Languages All informal mathematics in the `proof-pile` is written in English and LaTeX (arXiv articles in other languages are filtered out using [languagedetect](https://github.com/shuyo/language-detection/blob/wiki/ProjectHome.md)). Formal theorem proving languages represented in this dataset are Lean 3, Isabelle, Coq, HOL Light, Metamath, and Mizar. - - # Configurations - The data is sorted into `"arxiv", "books", "formal", "stack-exchange", "wiki",` and `"math-dataset"` configurations. This is so that it is easy to upsample particular configurations during pre-training with the `datasets.interleave_datasets()` function (a usage sketch appears below). # Evaluation The version of `set.mm` in this dataset has 10% of proofs replaced with the `?` character in order to preserve a validation and test set for Metamath provers pre-trained on the `proof-pile`. The precise split can be found here: [validation](https://github.com/zhangir-azerbayev/mm-extract/blob/main/valid_decls.json) and [test](https://github.com/zhangir-azerbayev/mm-extract/blob/main/test_decls.json). - The Lean mathlib commit used in this dataset is `6313863`. Theorems created in subsequent commits can be used for evaluating Lean theorem provers. This dataset contains only the training set of the [MATH dataset](https://github.com/hendrycks/math).
However, because this dataset contains ProofWiki, the Stacks Project, Trench's Analysis, and Stein's Number Theory, models trained on it cannot be evaluated on the [NaturalProofs dataset](https://github.com/wellecks/naturalproofs). +# Data Preprocessing +This section describes any significant filtering and transformations made to various subsets of the data. + +### arXiv.math +The arXiv.math dataset is large, heterogeneous, and contains a great deal of noise. We used the following heuristics +when choosing which files from arXiv.math source folders to include in the dataset: +- Keep only files with a `.tex` extension. +- Only include files that use either a `utf-8/16/32` or `latin-1` text encoding. +- Discard files that do not contain a part, chapter, section, sub...section, paragraph, or subparagraph heading. +- Delete files that contain the keyword `gnuplot`. Gnuplot-latex is an old command-line utility that generates blocks + of entirely unintelligible source. +- Include only articles in English, as determined by the [langdetect library](https://pypi.org/project/langdetect/). +- Exclude files shorter than 280 characters (characters counted after the substring removal described below). +In addition, we apply the following transformations to arXiv.math texts (a sketch of these steps appears below): +- Delete everything outside of `\begin{document}` and `\end{document}`. +- Delete everything including and after `\Refs`, `\begin{thebibliography}`, or `\begin{bibdiv}`. +- Delete comments. +- Replace any run of more than three consecutive newlines with exactly three newlines. +In [this notebook](https://github.com/zhangir-azerbayev/proof-pile/blob/main/analysis/arxiv_noisedetection.ipynb), we provide an analysis of the prevalence of noisy documents in the arXiv.math subset of the +proof-pile. + +### Stack Exchange +We only include questions that have at least 5 upvotes and an answer. We format Stack Exchange posts as follows (a formatting sketch also appears below): +``` +QUESTION [{num_upvotes} upvotes]: {text of question} + +REPLY [{num_upvotes} votes]: {text of reply} + +REPLY [{num_upvotes} votes]: {text of reply} + +. +. +. +``` + +### set.mm +We converted `set.mm` into human-readable form by following the instructions in the [mm-extract repo](https://github.com/zhangir-azerbayev/mm-extract). + ## Contributions Authors: Zhangir Azerbayev, Edward Ayers, Bartosz Piotrowski.
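As a usage note for the `# Configurations` section above, the following is a minimal sketch of upsampling one configuration with `datasets.interleave_datasets()`. The Hub repository id, the mixing probabilities, and the `text` column name are illustrative assumptions, not something this card prescribes.

```python
from datasets import load_dataset, interleave_datasets

# Load two configurations in streaming mode (repository id is an assumption).
arxiv = load_dataset("hoskinson-center/proof-pile", "arxiv",
                     split="train", streaming=True)
formal = load_dataset("hoskinson-center/proof-pile", "formal",
                      split="train", streaming=True)

# Upsample the much smaller "formal" configuration to 30% of training examples.
mixed = interleave_datasets([arxiv, formal], probabilities=[0.7, 0.3], seed=42)

for example in mixed.take(3):
    print(example["text"][:80])  # assumes a "text" field on each example
```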
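The arXiv.math transformations listed above reduce to a few string operations. The following is a minimal sketch under the assumption that each step is a single regex pass; it is not the exact pipeline in the proof-pile repo.

```python
import re
from typing import Optional

def clean_arxiv_tex(src: str) -> Optional[str]:
    """Approximate the arXiv.math transformations described above."""
    # Keep only the body between \begin{document} and \end{document}.
    m = re.search(r"\\begin\{document\}(.*?)\\end\{document\}", src, re.DOTALL)
    if m:
        src = m.group(1)
    # Delete everything including and after the first bibliography marker.
    src = re.split(r"\\Refs|\\begin\{thebibliography\}|\\begin\{bibdiv\}", src)[0]
    # Delete LaTeX comments (an unescaped % to the end of the line).
    src = re.sub(r"(?<!\\)%.*", "", src)
    # Replace runs of more than three consecutive newlines with exactly three.
    src = re.sub(r"\n{4,}", "\n\n\n", src)
    # Files shorter than 280 characters after the removals above are dropped.
    return src if len(src) >= 280 else None
```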
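The Stack Exchange filtering and formatting can likewise be sketched in a few lines; the `votes` and `text` field names below are assumptions, not the actual schema of the scraping code.

```python
from typing import Optional

def format_thread(question: dict, replies: list[dict]) -> Optional[str]:
    """Render one Stack Exchange thread in the format shown above."""
    # Threads with fewer than 5 upvotes, or without an answer, are dropped.
    if question["votes"] < 5 or not replies:
        return None
    parts = [f"QUESTION [{question['votes']} upvotes]: {question['text']}"]
    parts += [f"REPLY [{r['votes']} votes]: {r['text']}" for r in replies]
    return "\n\n".join(parts)
```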
diff --git a/arxiv/0001.tar.gz b/arxiv/0001.tar.gz deleted file mode 100644 index a445b5b34a5221d44de1f7c1864adb62cec4c000..0000000000000000000000000000000000000000 --- a/arxiv/0001.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:ed7bd3096072c189a874f403b175a3b26b0a3c3dc0a0751fdff44f7323c68225 -size 4005296 diff --git a/arxiv/0002.tar.gz b/arxiv/0002.tar.gz deleted file mode 100644 index 8f80c643ff58b2b7b10ee5566bd068985d8d8e03..0000000000000000000000000000000000000000 --- a/arxiv/0002.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:e47092e8a829e83c6e41d4d66b54f54978196fec2d69f843bea3453e21c41e05 -size 5380511 diff --git a/arxiv/0003.tar.gz b/arxiv/0003.tar.gz deleted file mode 100644 index 2006676b62e52ae0044af8a0acdb7080b364bdf1..0000000000000000000000000000000000000000 --- a/arxiv/0003.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:51743efce3cff5678a69b1e3d3d76d6976502e1db94781f0b384a2fc69bb7cdb -size 5216047 diff --git a/arxiv/0004.tar.gz b/arxiv/0004.tar.gz deleted file mode 100644 index adda4e425da2d3340b83d930ab62ee6db2a03fd2..0000000000000000000000000000000000000000 --- a/arxiv/0004.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f2f2fe534364a7b2101d53f72856c536f54fa071104258199f43aa7ca9baf452 -size 3595686 diff --git a/arxiv/0005.tar.gz b/arxiv/0005.tar.gz deleted file mode 100644 index 3ef394194626dda8e095eaa136092df847b8e142..0000000000000000000000000000000000000000 --- a/arxiv/0005.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:3fa9d6637db3d02b4bcd53644d47873f84e455838bfbcf883cbc450ee55a233c -size 6297457 diff --git a/arxiv/0006.tar.gz b/arxiv/0006.tar.gz deleted file mode 100644 index b30c8001733d26f8a1209acc0616c5930a3acd90..0000000000000000000000000000000000000000 --- a/arxiv/0006.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:da8f1258ea77cab44c00cd074f65ac251a2c40795e35969167641a485ae96a2b -size 5030042 diff --git a/arxiv/0007.tar.gz b/arxiv/0007.tar.gz deleted file mode 100644 index 9b48780b915f43e6d7573081bf841670e66c6fab..0000000000000000000000000000000000000000 --- a/arxiv/0007.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:b5e641db5dec6ada31a9b9d4fefdac382a66efd58069b03a3de0a03aba72cadf -size 4297957 diff --git a/arxiv/0008.tar.gz b/arxiv/0008.tar.gz deleted file mode 100644 index 2f7ae467344acf4e0e13ab2abca1dedfb7a3481f..0000000000000000000000000000000000000000 --- a/arxiv/0008.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:cb93723995dcc25af2dd36e0e5879638e56f2ed9958da25b8d6c7464a83cc196 -size 5009609 diff --git a/arxiv/0009.tar.gz b/arxiv/0009.tar.gz deleted file mode 100644 index 968a3f66f1794da296cbcbeaf3b3d11672a74baf..0000000000000000000000000000000000000000 --- a/arxiv/0009.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:51031b1b99f606be46f7aaa5089f7b434957a446a60f1032a03428178d5ca079 -size 4505207 diff --git a/arxiv/0010.tar.gz b/arxiv/0010.tar.gz deleted file mode 100644 index 1998f920575b30ab75862e0dc674d4b70f8e1cbc..0000000000000000000000000000000000000000 --- a/arxiv/0010.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:37949482d1d4744b1e643274daf3c9562cb738a89f0e7d3f725876c5683b6c8e -size 6599193 diff --git 
a/arxiv/0011.tar.gz b/arxiv/0011.tar.gz deleted file mode 100644 index 2ed38a8d5fc1bb738a5bd37a1fc0ac98a46ae749..0000000000000000000000000000000000000000 --- a/arxiv/0011.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:3083cabadf16e68e33e1f6c4ed89bdf861d3c2a2601afe9fd00b71a1e53e48b8 -size 5943655 diff --git a/arxiv/0012.tar.gz b/arxiv/0012.tar.gz deleted file mode 100644 index c7fc62c0c51fb722fd08109b49f1e8a95899b9cd..0000000000000000000000000000000000000000 --- a/arxiv/0012.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:af5c5dd987d317308869091e4664037ba65df3fa3c165292a416157ddf023fc9 -size 5850984 diff --git a/arxiv/0101.tar.gz b/arxiv/0101.tar.gz deleted file mode 100644 index 79e4f7fbb276d095b8345a6955a9a0869df85570..0000000000000000000000000000000000000000 --- a/arxiv/0101.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f294f4ddb6283d27e197caa7789528715fa090aba2f7d04044fc154074a86920 -size 5438313 diff --git a/arxiv/0102.tar.gz b/arxiv/0102.tar.gz deleted file mode 100644 index 87b0765f5bb1c6396c169132563c1902c0b88dbd..0000000000000000000000000000000000000000 --- a/arxiv/0102.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:b6448060f3588978c4246091ef75c9ceaf16b96bbe99dcc80eab8da14b93d6a1 -size 4609964 diff --git a/arxiv/0103.tar.gz b/arxiv/0103.tar.gz deleted file mode 100644 index 175079e990ce62509de70bb3ec807d52df26c59c..0000000000000000000000000000000000000000 --- a/arxiv/0103.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:98e3280536dcb98c0fb55150b4b1ca89accba310645b8aa4053f82f23c73b55e -size 5173885 diff --git a/arxiv/0104.tar.gz b/arxiv/0104.tar.gz deleted file mode 100644 index 395b2b295ed063dc7c95ab0fd8d8ca4495a931bf..0000000000000000000000000000000000000000 --- a/arxiv/0104.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:5c50d765f2383ff4188953117084478b21c61e015f10da58ff2c8ae07de70b68 -size 5940912 diff --git a/arxiv/0105.tar.gz b/arxiv/0105.tar.gz deleted file mode 100644 index 2b0cf13a50790af2a1fdbd538973bbf45ab2d366..0000000000000000000000000000000000000000 --- a/arxiv/0105.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:26cfd6673bde636012cb1407181b3b7d0015081599023d350a48e4837a288a9c -size 5472958 diff --git a/arxiv/0106.tar.gz b/arxiv/0106.tar.gz deleted file mode 100644 index 32dc207fbdd13407b69e1432efebb6ba14e152ed..0000000000000000000000000000000000000000 --- a/arxiv/0106.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:b408463c19712a892dd56c20bf473a5cbf868c1f9b3798b22633084c1544d257 -size 5197709 diff --git a/arxiv/0107.tar.gz b/arxiv/0107.tar.gz deleted file mode 100644 index 4e6478ce4e9fcd3d5310481400fbb5a99ed99718..0000000000000000000000000000000000000000 --- a/arxiv/0107.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:35534d6882b6e5bfccf2cc7d3023db1da9bb264f9a2cc401adc7d500f9094ce2 -size 4765938 diff --git a/arxiv/0108.tar.gz b/arxiv/0108.tar.gz deleted file mode 100644 index 1f2ea6e8957a3d1602845da2e815c7117ea75db4..0000000000000000000000000000000000000000 --- a/arxiv/0108.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:d744ff9fbb42a0c9cfad109fcdd5985f99329d8dac547ee9141a180151cc23ff -size 4985016 diff --git 
a/arxiv/0109.tar.gz b/arxiv/0109.tar.gz deleted file mode 100644 index 2f9afad9bd0ab90b05efd9a8b1895b11c8fb6bc3..0000000000000000000000000000000000000000 --- a/arxiv/0109.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:a0285470750f7849364e41e2f507288f6c89d9fab8b702625ec7f307c8d77ac1 -size 4547380 diff --git a/arxiv/0110.tar.gz b/arxiv/0110.tar.gz deleted file mode 100644 index 0656029f8f3d8dfb7f1c7188f9dfcc60112a27dc..0000000000000000000000000000000000000000 --- a/arxiv/0110.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:e5758ee6436697218f04018caa3415b39f5a65d5b959aed2bc7d62ce66506f58 -size 7068163 diff --git a/arxiv/0111.tar.gz b/arxiv/0111.tar.gz deleted file mode 100644 index 2bb126f5883e6065ef1f19ed8d4d4e67b1098bb7..0000000000000000000000000000000000000000 --- a/arxiv/0111.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:66b7e2d0116d29a0af8cc30c0f887cf519bfe682f890410a74f4e0374b822368 -size 7879027 diff --git a/arxiv/0112.tar.gz b/arxiv/0112.tar.gz deleted file mode 100644 index b04650925ce5f7504b050de46e8eef1d2c986c3c..0000000000000000000000000000000000000000 --- a/arxiv/0112.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:15a6a0dac00bb61d38da4ba800f3bc38a4c2ea27d09e71696a4ee5688ee23b06 -size 5936310 diff --git a/arxiv/0201.tar.gz b/arxiv/0201.tar.gz deleted file mode 100644 index 10fb480c599eac94ed164569f92f8bf58587cd7e..0000000000000000000000000000000000000000 --- a/arxiv/0201.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:d50eda6c87edfe6639fae52ae6f7c94be85561c7ccd9a0502ee2bdc3957159e3 -size 6783307 diff --git a/arxiv/0202.tar.gz b/arxiv/0202.tar.gz deleted file mode 100644 index 0390c1f7678f66871583461b7f9021f73fe8ddd3..0000000000000000000000000000000000000000 --- a/arxiv/0202.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:677e99ace8476f0bae49f6a7f18f695d2701b20a723025320ba326231d29ccae -size 5664048 diff --git a/arxiv/0203.tar.gz b/arxiv/0203.tar.gz deleted file mode 100644 index ebe55ceec196377250e4a49575b065157d01f4fd..0000000000000000000000000000000000000000 --- a/arxiv/0203.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:68d43ba9116ea84fd702a67d28180e292cb8f204b41f5ce6f9026d3ad7c7e199 -size 6351776 diff --git a/arxiv/0204.tar.gz b/arxiv/0204.tar.gz deleted file mode 100644 index 0847cf29ab6e47b958778e9cc71a1743f89e84a6..0000000000000000000000000000000000000000 --- a/arxiv/0204.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:2a734117cdef39f1e70aa5d2be2f5310320d044aa2b8b30c0f5303d73fc26688 -size 6810959 diff --git a/arxiv/0205.tar.gz b/arxiv/0205.tar.gz deleted file mode 100644 index bafa8e5fd6bb8348754942b53bccb859cecb6d9e..0000000000000000000000000000000000000000 --- a/arxiv/0205.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:1de6229e56baf101b1d0f7b7990cde67131c65fcd47b4824d9a5afeafcb9f579 -size 6534461 diff --git a/arxiv/0206.tar.gz b/arxiv/0206.tar.gz deleted file mode 100644 index 3508fbd463d3434919e2d47d3488bc24cde3cfbc..0000000000000000000000000000000000000000 --- a/arxiv/0206.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f146d0fe96af52c297f014b204789403c6e3d0626c359f18e73769346fb9ca23 -size 5876759 diff --git 
a/arxiv/0207.tar.gz b/arxiv/0207.tar.gz deleted file mode 100644 index 24a917302d1df3b7973f37869dd9891d6a52b743..0000000000000000000000000000000000000000 --- a/arxiv/0207.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:397a3d72c0e8629d1d43a672e01248c0ba2589a75fa33aabac71235479de4634 -size 6297404 diff --git a/arxiv/0208.tar.gz b/arxiv/0208.tar.gz deleted file mode 100644 index e663cbdf021db1518eefc80349d0126a7004b34f..0000000000000000000000000000000000000000 --- a/arxiv/0208.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f88d7518a00d39da8603b03a0c912d20f324d84bdf88b427a136ef9c39a7a2fe -size 5310956 diff --git a/arxiv/0209.tar.gz b/arxiv/0209.tar.gz deleted file mode 100644 index b6842b3990a5763f1f3c32fa650c4d0e99a86b3b..0000000000000000000000000000000000000000 --- a/arxiv/0209.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:ca7a5be1477f60d86fd2c953795bc1033c42792870f3e550e418f5ebc7ca4d6a -size 8501748 diff --git a/arxiv/0210.tar.gz b/arxiv/0210.tar.gz deleted file mode 100644 index 95a4f9758e1ce1f54c12298111860bbd1e189c73..0000000000000000000000000000000000000000 --- a/arxiv/0210.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:d53478d80e33b0160fd725584c6b1644ad47809efc7c47a74eace45195b71dc9 -size 9502140 diff --git a/arxiv/0211.tar.gz b/arxiv/0211.tar.gz deleted file mode 100644 index 17ee92a7428d57ca58c1975da7fff2082ec7a954..0000000000000000000000000000000000000000 --- a/arxiv/0211.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:16d4feca7436cef569071ea3585174422bb84334275b8c131340d272d223d7e9 -size 10025344 diff --git a/arxiv/0212.tar.gz b/arxiv/0212.tar.gz deleted file mode 100644 index deacb62eaedfebe6eaf999a49685a4ba8ce94b69..0000000000000000000000000000000000000000 --- a/arxiv/0212.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:e2ce05505646bcb58cffb00257d4da4077fbae0df0f713bcacdef44472b3acac -size 8283892 diff --git a/arxiv/0301.tar.gz b/arxiv/0301.tar.gz deleted file mode 100644 index 8f367b924125a48e690018d8611718efc161cdd6..0000000000000000000000000000000000000000 --- a/arxiv/0301.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:5110150efcb60a3f402cdeeaf97f0485007084ae84ed6df889c38a9712538570 -size 7498335 diff --git a/arxiv/0302.tar.gz b/arxiv/0302.tar.gz deleted file mode 100644 index cbbb5763829858eb30b3f2f541d5daa2a88dd466..0000000000000000000000000000000000000000 --- a/arxiv/0302.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:743ad88554b0c110632b4f5bf4d150931ffca243f890013f704b83abd8b791ba -size 7053799 diff --git a/arxiv/0303.tar.gz b/arxiv/0303.tar.gz deleted file mode 100644 index a6f3328d899e65d57eb7afa574215b49953b2ec3..0000000000000000000000000000000000000000 --- a/arxiv/0303.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:69d7aac495143d5a9e31a27a444c441d46f93f3c7fabd0375f4e6d9d2ceb85c0 -size 7649426 diff --git a/arxiv/0304.tar.gz b/arxiv/0304.tar.gz deleted file mode 100644 index b8677159dfe644ef107dbae04337ea7eac4caceb..0000000000000000000000000000000000000000 --- a/arxiv/0304.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:55edd88a20121ce671de39c697e68761875bb96a03c4b93f473eef2f1fa599cf -size 8800791 diff --git 
a/arxiv/0305.tar.gz b/arxiv/0305.tar.gz deleted file mode 100644 index aadef0bc858754fbd7bd48a6ade76ade811fa96a..0000000000000000000000000000000000000000 --- a/arxiv/0305.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:b57c6c9eca7f5648ef587152a0c6dcf8cb8446c37791bb2c4df840ee20869317 -size 8513410 diff --git a/arxiv/0306.tar.gz b/arxiv/0306.tar.gz deleted file mode 100644 index 22378f6438d8c3be2bfa7b20a0a24d8dfdc6a7c8..0000000000000000000000000000000000000000 --- a/arxiv/0306.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:eb061fe509e890993de457fd7fa432bd54da34216ba3feba0152e760f48b2285 -size 9239475 diff --git a/arxiv/0307.tar.gz b/arxiv/0307.tar.gz deleted file mode 100644 index 7839430c1077396432aa44737b59fe9a879c6005..0000000000000000000000000000000000000000 --- a/arxiv/0307.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:77478820d37d42ff7b0f0ff39a4f91da871f9583d0193e23232200e71151d4cb -size 7948662 diff --git a/arxiv/0308.tar.gz b/arxiv/0308.tar.gz deleted file mode 100644 index c07bfbf8a6b7c8ebc1ea23d14ce773ba279c4af6..0000000000000000000000000000000000000000 --- a/arxiv/0308.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:bf5dfb9ea726af65dd486a4dc657511235a17c60b20114ab48ad678dbf4eac09 -size 5958916 diff --git a/arxiv/0309.tar.gz b/arxiv/0309.tar.gz deleted file mode 100644 index 60e60d940f9573b43da1934d9e6abd3192b557a3..0000000000000000000000000000000000000000 --- a/arxiv/0309.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:85f24dee9dc530b26596447a71e9bf9d41400981b570d2875c24dea3805154f1 -size 9336472 diff --git a/arxiv/0310.tar.gz b/arxiv/0310.tar.gz deleted file mode 100644 index e1e1e817082e82c1d53cae3fcfe560844d98ae4f..0000000000000000000000000000000000000000 --- a/arxiv/0310.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:ebe6de19ef9a46ceab94d6b3a65673a1e9acacc49882145c7372c8e3d6a40799 -size 9817064 diff --git a/arxiv/0311.tar.gz b/arxiv/0311.tar.gz deleted file mode 100644 index 2a3831bc9ddcd4799465dc97e48643ce816b4212..0000000000000000000000000000000000000000 --- a/arxiv/0311.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:3c56ae685b29088df2f6dc1dab8d6bad365402348e5049382ddbad6e83ec2a42 -size 10125149 diff --git a/arxiv/0312.tar.gz b/arxiv/0312.tar.gz deleted file mode 100644 index ef3c408317bd45658ab60fc3beb9f918c836e985..0000000000000000000000000000000000000000 --- a/arxiv/0312.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:af08a7f89dca533313e2c718b6b6546bea0761cadb45afe2e579c382e8b7047f -size 11562393 diff --git a/arxiv/0401.tar.gz b/arxiv/0401.tar.gz deleted file mode 100644 index 26476acda0bb36f7fd10de451ef5fe69e2ee5fd5..0000000000000000000000000000000000000000 --- a/arxiv/0401.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:be369cbcf0ff6f4c383e87e7a75dc773ef7b4daba66a73449434b2b8c81fa8a5 -size 9201354 diff --git a/arxiv/0402.tar.gz b/arxiv/0402.tar.gz deleted file mode 100644 index 653d29a897f6c53f9afd7ec1bd0fe64842d7a1e4..0000000000000000000000000000000000000000 --- a/arxiv/0402.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:7951246c28c886e3508b4a5004a98bd2143dee985904f42cbbb81dc7dfc0e9fd -size 9504480 diff --git 
a/arxiv/0403.tar.gz b/arxiv/0403.tar.gz deleted file mode 100644 index 5d9e3de984abf26405f699f3d1b111ab945a59b8..0000000000000000000000000000000000000000 --- a/arxiv/0403.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:31439971d32f56801b970b5f3fc71d81a4158f4755d2b8ff445312dba58bcba6 -size 10060651 diff --git a/arxiv/0404.tar.gz b/arxiv/0404.tar.gz deleted file mode 100644 index 36f903da4fa082508425217cf80b3688c3662129..0000000000000000000000000000000000000000 --- a/arxiv/0404.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:ca420077fde98a8bca3ef79dc69588e14aeaecc1bb8c6e92248b0db16911df46 -size 12173458 diff --git a/arxiv/0405.tar.gz b/arxiv/0405.tar.gz deleted file mode 100644 index 0369abf22f8812953ce95ee01243a70db275af41..0000000000000000000000000000000000000000 --- a/arxiv/0405.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:020748571a9450440af552c96de88e9a9ab4f5df2194245ab8cc09776f432d2a -size 11547674 diff --git a/arxiv/0406.tar.gz b/arxiv/0406.tar.gz deleted file mode 100644 index 6b45e12a53dfba67868c48083620ae2787478a7b..0000000000000000000000000000000000000000 --- a/arxiv/0406.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:78c910f4cdd850c44a32bb0def97182676a159a3952f75711741db142789274f -size 11307724 diff --git a/arxiv/0407.tar.gz b/arxiv/0407.tar.gz deleted file mode 100644 index 8538f72d92841bae863c8d54ad6fb826a67f60d6..0000000000000000000000000000000000000000 --- a/arxiv/0407.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:bc8de54a0078eb233f44942904e9e66d0389e81578f6c95800811f63d53bb55a -size 11958357 diff --git a/arxiv/0408.tar.gz b/arxiv/0408.tar.gz deleted file mode 100644 index 67fff87c48354dc09ecfe16d7eeedf3cbe6c1602..0000000000000000000000000000000000000000 --- a/arxiv/0408.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:14a5ef64b2c3519084a6465ca53ef680a9050ec7cc07536d287707fd99b74dd2 -size 8757970 diff --git a/arxiv/0409.tar.gz b/arxiv/0409.tar.gz deleted file mode 100644 index fe06c3dfbf76c39a37f7d193450be9bc247adf39..0000000000000000000000000000000000000000 --- a/arxiv/0409.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:d3067aa35e668475776bce4957a7a51ddd3fedb4015e66dbd524b1669a2f5bf4 -size 12786727 diff --git a/arxiv/0410.tar.gz b/arxiv/0410.tar.gz deleted file mode 100644 index b28f5d8d207b9a78edc689e4cc02fbeaa580fae6..0000000000000000000000000000000000000000 --- a/arxiv/0410.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:fb58b9bf3eb15530141defa6bce97778f58b8972b7116399d9e0a6569bede9d9 -size 10613569 diff --git a/arxiv/0411.tar.gz b/arxiv/0411.tar.gz deleted file mode 100644 index 9b684aff8980911dfc0b77c52b3a85fae6b06098..0000000000000000000000000000000000000000 --- a/arxiv/0411.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:ae1f2264ed101494d1fb9533f3a609bcd69278cd075d6d90a783d1cc1d6d22dc -size 13062894 diff --git a/arxiv/0412.tar.gz b/arxiv/0412.tar.gz deleted file mode 100644 index 18ec04e5a6eb1e3d86c88019d8f75030cb49031e..0000000000000000000000000000000000000000 --- a/arxiv/0412.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:ca36f307f103dc0c22be7e36cb659bdb7fdde0bdc2d9dd1217afacadd879cd44 -size 11975231 diff --git 
a/arxiv/0501.tar.gz b/arxiv/0501.tar.gz deleted file mode 100644 index e094f47b0f377c8458c050c50fd813e846341fa8..0000000000000000000000000000000000000000 --- a/arxiv/0501.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:0dd3e3f442d3729b4ee45dab4ac95a69183d9fcc6466ccf1fc4468f1f84a04b9 -size 10411351 diff --git a/arxiv/0502.tar.gz b/arxiv/0502.tar.gz deleted file mode 100644 index 849d752b40f2a09355001c9f474a88f99b8229e2..0000000000000000000000000000000000000000 --- a/arxiv/0502.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:0752ef98dbc15fb74d69e09a46723f9078f51907dcd4608d502804ff8de8954d -size 10860073 diff --git a/arxiv/0503.tar.gz b/arxiv/0503.tar.gz deleted file mode 100644 index f8dbcb8b9d2e3eb1fda448865fe6ac9e3a13f89b..0000000000000000000000000000000000000000 --- a/arxiv/0503.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:850df270454606e25a0d4c28fc416f338dbd6bd8581e9a6820a0dd5f15e14413 -size 12598531 diff --git a/arxiv/0504.tar.gz b/arxiv/0504.tar.gz deleted file mode 100644 index 551e984e25eba56f1644218294b81f1004e4c98d..0000000000000000000000000000000000000000 --- a/arxiv/0504.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:b20eec70877863d61d9cf268c582e26aca71f9d2c0c46f5690818708e800fe28 -size 11546598 diff --git a/arxiv/0505.tar.gz b/arxiv/0505.tar.gz deleted file mode 100644 index 1372dbc59e09043c404df637f5492d69c0deeb04..0000000000000000000000000000000000000000 --- a/arxiv/0505.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:4b67e50b2d70cf4ee2bc54f7502f2aec263ec55bc97533caa993355590552894 -size 13197360 diff --git a/arxiv/0506.tar.gz b/arxiv/0506.tar.gz deleted file mode 100644 index a5771fe255bdafa2e26a5d828207c65e64936226..0000000000000000000000000000000000000000 --- a/arxiv/0506.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:4528708b0b11d571a72948a7af5c9a81f20ad588d22fa6adb81ddadd18857d59 -size 12287050 diff --git a/arxiv/0507.tar.gz b/arxiv/0507.tar.gz deleted file mode 100644 index c049262353308857e80e907929b0d6f446ae5746..0000000000000000000000000000000000000000 --- a/arxiv/0507.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:d125c40215e89c07eab79ee4ce46eb309cc6cffb42b3570faa7b5a4a52f369d6 -size 11315064 diff --git a/arxiv/0508.tar.gz b/arxiv/0508.tar.gz deleted file mode 100644 index e84cb73d6f03881d3fef90274c9da099d32ccbfa..0000000000000000000000000000000000000000 --- a/arxiv/0508.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:a7db95461680f2e7bbc9c5e919df0d0d826b9866ebdafdac14aef4f7fe597cd0 -size 11271016 diff --git a/arxiv/0509.tar.gz b/arxiv/0509.tar.gz deleted file mode 100644 index 58858d8c4017cb7dd6c98c1f8eb5492934e45a95..0000000000000000000000000000000000000000 --- a/arxiv/0509.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:675458bed7278a88a9936c153e85fd4a4b2c1f329876d69e2dc31ce70d2796cf -size 13076158 diff --git a/arxiv/0510.tar.gz b/arxiv/0510.tar.gz deleted file mode 100644 index 213dfd09bbf5f9dbe844a23edbf09091534fd816..0000000000000000000000000000000000000000 --- a/arxiv/0510.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:1612fb92a5c0ff78c797645de8cf74a8d415c9fac513ade17d98662d6de7c9e7 -size 13167595 diff --git 
a/arxiv/0511.tar.gz b/arxiv/0511.tar.gz deleted file mode 100644 index 0468aad81d7046f1b1f2f6cd8e0aad865da19bc4..0000000000000000000000000000000000000000 --- a/arxiv/0511.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:18ce8648e2352c73f3e1debfd941e21b7b25b4507f094911436a3f1f2f585795 -size 15286973 diff --git a/arxiv/0512.tar.gz b/arxiv/0512.tar.gz deleted file mode 100644 index 8249f2a7b987f90f7e66a71683ff611f59ec2779..0000000000000000000000000000000000000000 --- a/arxiv/0512.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:c2688144918701472480cae5918969ec0aa5cf951f8beb37e4495f889d8b1465 -size 13142689 diff --git a/arxiv/0601.tar.gz b/arxiv/0601.tar.gz deleted file mode 100644 index 874665df5189262695475819b08f03e7d00fa7c4..0000000000000000000000000000000000000000 --- a/arxiv/0601.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:a030effe04427726070fca396bf0beede71417c12d84e869c0c8210c5d257b25 -size 15155731 diff --git a/arxiv/0602.tar.gz b/arxiv/0602.tar.gz deleted file mode 100644 index 246e190a46c9510b540685eef8e415032a641394..0000000000000000000000000000000000000000 --- a/arxiv/0602.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:7b376115b9016badef04273a34b9f2dd4d3a653810dd8b403eff2f3f6686ba6b -size 12989335 diff --git a/arxiv/0603.tar.gz b/arxiv/0603.tar.gz deleted file mode 100644 index 1414a816bcb5b5222305dcec7ab80f5701ca0ef6..0000000000000000000000000000000000000000 --- a/arxiv/0603.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:d54f34024161a2b70f12590efa73ea784a2a4f129cc3b1596d27f6dae0e8cead -size 13557222 diff --git a/arxiv/0604.tar.gz b/arxiv/0604.tar.gz deleted file mode 100644 index eae032b2849052c09bd69e627bf67263b4933771..0000000000000000000000000000000000000000 --- a/arxiv/0604.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:26094091faec1610a0e5dd5f66ef2169432f662d74fd254d650c723bf05eb2db -size 11967709 diff --git a/arxiv/0605.tar.gz b/arxiv/0605.tar.gz deleted file mode 100644 index a72cb378da6da8654216082fefc3bb32e260e7c8..0000000000000000000000000000000000000000 --- a/arxiv/0605.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:c17fb77bbceaf040367b1daa4cc2eec77cbeae6da9cbe040ca8f505282c14b36 -size 16130104 diff --git a/arxiv/0606.tar.gz b/arxiv/0606.tar.gz deleted file mode 100644 index 3e65e6a1f0f7ff5afda9623cdf83d3a027b171ee..0000000000000000000000000000000000000000 --- a/arxiv/0606.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:567572f6add5d385a1309690eee85ec843533216111d26b97c9911b7018aa23b -size 14359534 diff --git a/arxiv/0607.tar.gz b/arxiv/0607.tar.gz deleted file mode 100644 index 95e9cf3d16d9fa4db92375cdc0b7c8fcacda8e4b..0000000000000000000000000000000000000000 --- a/arxiv/0607.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:817a6d52ea4c70defb95612540cc6503aa4f8702af93078f9bc4944e4fffdf42 -size 15687287 diff --git a/arxiv/0608.tar.gz b/arxiv/0608.tar.gz deleted file mode 100644 index 4daa86f314258ed9929edb07d9a137dd7647f318..0000000000000000000000000000000000000000 --- a/arxiv/0608.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:cff35652e9d4bff97d28f22c83750b9558183944ef8e46adba3aede1a960dbc5 -size 14666763 diff --git 
a/arxiv/0609.tar.gz b/arxiv/0609.tar.gz deleted file mode 100644 index b6f0e00c4b8a6a24383e1bfa6226a9bf52054669..0000000000000000000000000000000000000000 --- a/arxiv/0609.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:46fb5ee00b548a313e713e8d11c70485ee379a4d97a77da48031a27dac7ead84 -size 15539453 diff --git a/arxiv/0610.tar.gz b/arxiv/0610.tar.gz deleted file mode 100644 index a5bcda0aca05654726e47d19d5d9d7e925e3f45b..0000000000000000000000000000000000000000 --- a/arxiv/0610.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:5ba209c39bf646b8f8f08f44fbd80ee8a241875e88ad9c93b1b232a681ed47fb -size 18644525 diff --git a/arxiv/0611.tar.gz b/arxiv/0611.tar.gz deleted file mode 100644 index 54bac4731ad6c467043bedaba52c9e566d7bceeb..0000000000000000000000000000000000000000 --- a/arxiv/0611.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:d6a2d6c22db491c09dce85be47a38957d248fa34d4c7656a79b80b8ed6598b9f -size 17188777 diff --git a/arxiv/0612.tar.gz b/arxiv/0612.tar.gz deleted file mode 100644 index da868fa50c678259bd2cc1d7570d26be9af14ea6..0000000000000000000000000000000000000000 --- a/arxiv/0612.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:bd202691373840b33fb1ce9d7c9faa933bc6274dd63d26ce0bbec0b0b138b187 -size 17764916 diff --git a/arxiv/0701.tar.gz b/arxiv/0701.tar.gz deleted file mode 100644 index f7d1a8967cb1e15f4055fcdbbcca2b7b024a6349..0000000000000000000000000000000000000000 --- a/arxiv/0701.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:a606749419dcb03aee5c3ace9cb375f1e423128a903c321cb16cf789603bf813 -size 18114070 diff --git a/arxiv/0702.tar.gz b/arxiv/0702.tar.gz deleted file mode 100644 index 43dce24fbf08b9b03b23e37916553b63faf89d23..0000000000000000000000000000000000000000 --- a/arxiv/0702.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:031e06caca9ed76a4e4316e74f5995661c05e835486c594dd9c0cb4c6152ed85 -size 16917387 diff --git a/arxiv/0703.tar.gz b/arxiv/0703.tar.gz deleted file mode 100644 index 45163f3dc26574d30e17138d3dc0d237c80e1c90..0000000000000000000000000000000000000000 --- a/arxiv/0703.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:142e36248bef60b19d8041e1de6539d9aa76ef67413bac751add88a581072f5f -size 17289115 diff --git a/arxiv/0704.tar.gz b/arxiv/0704.tar.gz deleted file mode 100644 index 2540bee6ff60919a6d9707df9386526028830d6c..0000000000000000000000000000000000000000 --- a/arxiv/0704.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:1655db60325245de19d8066b8cad9b537fdeff677c333fbddc0a2b80d4bd57db -size 17775776 diff --git a/arxiv/0705.tar.gz b/arxiv/0705.tar.gz deleted file mode 100644 index c50b53e96ab4b13a3cc4a7b3b128c2d6c45a73fd..0000000000000000000000000000000000000000 --- a/arxiv/0705.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:335f58ea8ddb5ed10cc740c8b689df024fa588ff6b8e7b46e6845e45ed4fe5bd -size 18747567 diff --git a/arxiv/0706.tar.gz b/arxiv/0706.tar.gz deleted file mode 100644 index 9877f3e14e085c217781b243fa79511c151fc5f0..0000000000000000000000000000000000000000 --- a/arxiv/0706.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:16ddfb91b4f964acafae0fd92f1ff72a44e5858920c308bd319146d1e7b0a4f8 -size 18553231 diff --git 
a/arxiv/0707.tar.gz b/arxiv/0707.tar.gz deleted file mode 100644 index 6dca08bbfba79a448b68540c5c7e42a8eef21272..0000000000000000000000000000000000000000 --- a/arxiv/0707.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f80b0f0c3700fe0e348ff93dcf1a84f07cd4bfc2011f3acc4318f3e9d0a48a88 -size 19353373 diff --git a/arxiv/0708.tar.gz b/arxiv/0708.tar.gz deleted file mode 100644 index 0b6948cb47c614230b892d4b9d3115a82304e551..0000000000000000000000000000000000000000 --- a/arxiv/0708.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:087c34a20dfa2d3c8463a40cfe9f2500c076c65a6b6440319b394d94c7cf839f -size 19251407 diff --git a/arxiv/0709.tar.gz b/arxiv/0709.tar.gz deleted file mode 100644 index 6b4528437a76a2af154505373fa6936d9b20018d..0000000000000000000000000000000000000000 --- a/arxiv/0709.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:32781049e54b0ce5a355a8c7ab3cfb2d319cc494ae7133214c21ce9ac3d27e7e -size 19205606 diff --git a/arxiv/0710.tar.gz b/arxiv/0710.tar.gz deleted file mode 100644 index 6bcd048ab654b64274cd0131d99a4736bf579c12..0000000000000000000000000000000000000000 --- a/arxiv/0710.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:77eef1912f37c4079d7940af46a2f42cc57645fe4257d70f1ad03f0b10585051 -size 25104620 diff --git a/arxiv/0711.tar.gz b/arxiv/0711.tar.gz deleted file mode 100644 index 26c4beafed4c84b7dfb3d0eb9aa9a82864672a88..0000000000000000000000000000000000000000 --- a/arxiv/0711.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:0b91d47df3a688998958412291f32ad1333cb4118ce392c55c46d59e61f2e9a8 -size 22009506 diff --git a/arxiv/0712.tar.gz b/arxiv/0712.tar.gz deleted file mode 100644 index 1cbb7a9ed9696e712efbec3071e06e55f11053eb..0000000000000000000000000000000000000000 --- a/arxiv/0712.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:0d903bc8410cffb813282f39bd9c1d5b933d25f63cbdb6036561a04cfe2a3044 -size 20090068 diff --git a/arxiv/0801.tar.gz b/arxiv/0801.tar.gz deleted file mode 100644 index ec55e99665121bd9278aae4b626746b0be82f536..0000000000000000000000000000000000000000 --- a/arxiv/0801.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:452497812d4cc051b6ddd8e74861064962933e98a6fcd8626f34a8544e3deff8 -size 20853193 diff --git a/arxiv/0802.tar.gz b/arxiv/0802.tar.gz deleted file mode 100644 index 06eeb71091fc2970e2f3e96d51c24c0caf07ea65..0000000000000000000000000000000000000000 --- a/arxiv/0802.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:b3c0db3cfcba83785951dd91893bd783b7f4101dd844efc568e628d45ecedd2f -size 19873484 diff --git a/arxiv/0803.tar.gz b/arxiv/0803.tar.gz deleted file mode 100644 index 83f7bf09d4c132b2e7cf32f9dbc473c1f4ea8cd9..0000000000000000000000000000000000000000 --- a/arxiv/0803.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:168eee8ecec5e7c3454b26120d3a4cf51ea25c44d1f68eddf5fc63a5b4b2a6d2 -size 21469803 diff --git a/arxiv/0804.tar.gz b/arxiv/0804.tar.gz deleted file mode 100644 index dd736d3cb1eb3889be9140a104df0f7c44274a89..0000000000000000000000000000000000000000 --- a/arxiv/0804.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:8eb44e9395eaace3440460dca53487605d1ec3634520fa866abc871de438f66e -size 23309540 diff --git 
a/arxiv/0805.tar.gz b/arxiv/0805.tar.gz deleted file mode 100644 index 789fc0a8baaba31de9d04b7af7a46b303e606a4d..0000000000000000000000000000000000000000 --- a/arxiv/0805.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:3d611cea391988e2ac2eeb76e746ca19b831badcf4c3a81cdc9086aa8f3a859f -size 20269961 diff --git a/arxiv/0806.tar.gz b/arxiv/0806.tar.gz deleted file mode 100644 index aa7a4fc5ea7fc62ef592deee65364b65e5ca1f61..0000000000000000000000000000000000000000 --- a/arxiv/0806.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:2f1e964941e4ad1a284927d486c11e377a7d0d16259abab33195ad44efba6b02 -size 22548668 diff --git a/arxiv/0807.tar.gz b/arxiv/0807.tar.gz deleted file mode 100644 index 8fa97c47ad13cdb539339f057d5cfb667cd9c16e..0000000000000000000000000000000000000000 --- a/arxiv/0807.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:8896931fea8d1214b976f06c68668541c02265ad611df7bebde0b458192da1c8 -size 21692165 diff --git a/arxiv/0808.tar.gz b/arxiv/0808.tar.gz deleted file mode 100644 index d59b3b5066ee50be106edbd2c8e2919e905c043b..0000000000000000000000000000000000000000 --- a/arxiv/0808.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:43f74657e78e8cafae5096d35b5a22719f409638dfd7481e4c0b2a265fdbad0c -size 17531862 diff --git a/arxiv/0809.tar.gz b/arxiv/0809.tar.gz deleted file mode 100644 index 6c041a79c7966603dac546e17e5a135b32b9628d..0000000000000000000000000000000000000000 --- a/arxiv/0809.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:31c03de3d1b32a17fc89a3118bec309440fee58a763d10c4aede0ced65ef8ed7 -size 24679744 diff --git a/arxiv/0810.tar.gz b/arxiv/0810.tar.gz deleted file mode 100644 index e68fb959dc0ec4f3f2c8d31dc8bcee1e5e650b3a..0000000000000000000000000000000000000000 --- a/arxiv/0810.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:5d98a66dc2323cbc88de34d4caaeafb8e4b946361b567537b54ddd30ba603035 -size 26619940 diff --git a/arxiv/0811.tar.gz b/arxiv/0811.tar.gz deleted file mode 100644 index 9d996a6864af63288bba86b6b88348e2a53529fc..0000000000000000000000000000000000000000 --- a/arxiv/0811.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f013e7cecd17276732c5ef247da577004380336afeeb279976efcc864aad740e -size 21862286 diff --git a/arxiv/0812.tar.gz b/arxiv/0812.tar.gz deleted file mode 100644 index 8f4351df6cc9b18836a3b89dc20cd812f907df18..0000000000000000000000000000000000000000 --- a/arxiv/0812.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:fe11d6708df12dbc94846b2c146dd18a0d28128fb8242bff2af4da74ea038968 -size 23423412 diff --git a/arxiv/0901.tar.gz b/arxiv/0901.tar.gz deleted file mode 100644 index c6503d2329a46a0a957e4315c7f3c3694fd3ebf4..0000000000000000000000000000000000000000 --- a/arxiv/0901.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:e6cfaeb63239a36a8fed62f5dca115c7b487fa68f4e988135ae026baf0323e15 -size 22449213 diff --git a/arxiv/0902.tar.gz b/arxiv/0902.tar.gz deleted file mode 100644 index 3e125719dc2d6e56264c92d44ec85018399faa8d..0000000000000000000000000000000000000000 --- a/arxiv/0902.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f12713229aeb51a3d8e9f94e5cb5d288d0f868cfe78279b69ac2766b8fd0f024 -size 23770840 diff --git 
a/arxiv/0903.tar.gz b/arxiv/0903.tar.gz deleted file mode 100644 index 88447219c7407fc5243cc43aaff163529722eaca..0000000000000000000000000000000000000000 --- a/arxiv/0903.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:b59778de8c04b3c5de88c030dd57804118ffe5fbf9010f012f86961a3c3e1ec9 -size 28271472 diff --git a/arxiv/0904.tar.gz b/arxiv/0904.tar.gz deleted file mode 100644 index 487c2cc43c598b1517de3d828e7154c5e5dbafef..0000000000000000000000000000000000000000 --- a/arxiv/0904.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:70d0f2dfa945b4838ab84ccedc8edb7f10669ea4a0a495df8b4aed3c79de16ca -size 24548536 diff --git a/arxiv/0905.tar.gz b/arxiv/0905.tar.gz deleted file mode 100644 index 438595bd5cf05c180ad779f30d76ce2d90a20a78..0000000000000000000000000000000000000000 --- a/arxiv/0905.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:2ee82e4ccd9a2865d607f77cde55dc1364c13fddff087e4c2aab70320e9a4454 -size 24624606 diff --git a/arxiv/0906.tar.gz b/arxiv/0906.tar.gz deleted file mode 100644 index ef16133b5da496f097e86057b630cf5c46eb2df6..0000000000000000000000000000000000000000 --- a/arxiv/0906.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:8851289258a10864421e2ee791e348fa5c40b8d8ee39db59d52148c05ec04ea5 -size 27414283 diff --git a/arxiv/0907.tar.gz b/arxiv/0907.tar.gz deleted file mode 100644 index e601da6cf66ff49206901577b13e341781377d3f..0000000000000000000000000000000000000000 --- a/arxiv/0907.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f54c8390752d29144b7a382b0bcc00d99a8083f2b8da77de6f15836da3aa37fb -size 26497546 diff --git a/arxiv/0908.tar.gz b/arxiv/0908.tar.gz deleted file mode 100644 index fc463e81b01cd7904fb7dbe1e6222471fb19bbff..0000000000000000000000000000000000000000 --- a/arxiv/0908.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:2caf644a751c9066a2f1b5a73d06cc28ee10fa30e962acb7d0e99100c82dd1b0 -size 21174910 diff --git a/arxiv/0909.tar.gz b/arxiv/0909.tar.gz deleted file mode 100644 index 0be77f6b5b5b06493afa01a50351f42c5fb5b22a..0000000000000000000000000000000000000000 --- a/arxiv/0909.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:551364ae7570178e71d60f56c91381f91ab18723f0b1de189773a4b2019cfa5a -size 26729043 diff --git a/arxiv/0910.tar.gz b/arxiv/0910.tar.gz deleted file mode 100644 index 1f106c524141ce407eae8abecc1f6002791b708e..0000000000000000000000000000000000000000 --- a/arxiv/0910.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:35e70399ac537946c842e7e4bfd4f76f4698a599d5783be51edbe70bd040e61e -size 28821759 diff --git a/arxiv/0911.tar.gz b/arxiv/0911.tar.gz deleted file mode 100644 index 760abe34efdcf9f6f33bce48c1afddde782e45fb..0000000000000000000000000000000000000000 --- a/arxiv/0911.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:7a561b468b3ece466e6af59e8420f3d4441f174c4d1c6b016d85324b76900eba -size 27228298 diff --git a/arxiv/0912.tar.gz b/arxiv/0912.tar.gz deleted file mode 100644 index 337e0cdd65997b4314fef547a7cae24eef2a8f2c..0000000000000000000000000000000000000000 --- a/arxiv/0912.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:45b727a040ae0f81169a0118b5017d8c342babade63fa50fea8af94840439666 -size 25815410 diff --git 
a/arxiv/1001.tar.gz b/arxiv/1001.tar.gz deleted file mode 100644 index a55cfb8239f55739e630ed001932271ee9565221..0000000000000000000000000000000000000000 --- a/arxiv/1001.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:4c7fdda99eb0d2c28f6836683a29e4a084c0d346acbb3ee04472cb45d54c4728 -size 27118119 diff --git a/arxiv/1002.tar.gz b/arxiv/1002.tar.gz deleted file mode 100644 index dec38bc28bb0b0bb27e08355be7fe5ccc0aa5d66..0000000000000000000000000000000000000000 --- a/arxiv/1002.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:46db7a43400cea110c72d571763581012ab77098c67d8d6a2b035e7e380d1180 -size 25322748 diff --git a/arxiv/1003.tar.gz b/arxiv/1003.tar.gz deleted file mode 100644 index 792df125a5e6c918e03f88717bdefab18a3aca3d..0000000000000000000000000000000000000000 --- a/arxiv/1003.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:d280009a2a972aff65d748e4da26c1bbc08ce35486bede0b42c60818b4999cbf -size 33967291 diff --git a/arxiv/1004.tar.gz b/arxiv/1004.tar.gz deleted file mode 100644 index 43930825ca041fd2b6f56c6991155c6cd2c6c863..0000000000000000000000000000000000000000 --- a/arxiv/1004.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:c9c2a2ab79556022a020de90cce8c6303593dff6e4c4b37c5011c7a145e992f9 -size 31336640 diff --git a/arxiv/1005.tar.gz b/arxiv/1005.tar.gz deleted file mode 100644 index e6499dea246f395d23f0e6ef403f4a220422bc64..0000000000000000000000000000000000000000 --- a/arxiv/1005.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:7be5e62854a65c88194d1097aef5697bf27986324973f928753b9d73279edac3 -size 30326413 diff --git a/arxiv/1006.tar.gz b/arxiv/1006.tar.gz deleted file mode 100644 index a9c3ae3525ef9dde517559547b8a3d12b6dde7fe..0000000000000000000000000000000000000000 --- a/arxiv/1006.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:e8903d4e8858c0d5c54cf5c64275b3d0407155e45ca1c07f90f52c373730f575 -size 30286501 diff --git a/arxiv/1007.tar.gz b/arxiv/1007.tar.gz deleted file mode 100644 index a347a81d8f8d75b5d34e36f49ce536e354656db6..0000000000000000000000000000000000000000 --- a/arxiv/1007.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f5a08375b73ffa371eceb245be8170d0e49560637e999d6a297a175de3b69c74 -size 26933436 diff --git a/arxiv/1008.tar.gz b/arxiv/1008.tar.gz deleted file mode 100644 index 876f02b7150e65d6267df23b12786a3fdd41db99..0000000000000000000000000000000000000000 --- a/arxiv/1008.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:545fa661d5d1ded3766d3dae96d20501ac2b4e15871fa453cc9838b201c144ad -size 29160176 diff --git a/arxiv/1009.tar.gz b/arxiv/1009.tar.gz deleted file mode 100644 index 7d17b94c8f7f74b25062e50dffe9edcc49d10bef..0000000000000000000000000000000000000000 --- a/arxiv/1009.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:ce27d6b8f55bda974e8fcbdfc62287a5da5caefd956afec888e629d205e3693b -size 34752387 diff --git a/arxiv/1010.tar.gz b/arxiv/1010.tar.gz deleted file mode 100644 index c8900739cf2efbd57896d2e2bbb76cd09331dc31..0000000000000000000000000000000000000000 --- a/arxiv/1010.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:a5dce207b698a789821229532808f71068559ca2e1e2b8489e904320fbe84a9b -size 34795342 diff --git 
[Deleted: Git LFS pointer files for the raw monthly arXiv source tarballs `arxiv/1011.tar.gz` through `arxiv/1112.tar.gz` (2010-11 to 2011-12) and `arxiv/1209.tar.gz` through `arxiv/2204.tar.gz` (2012-09 to 2022-04). Each pointer is three lines: the LFS spec version, a `sha256` object id, and the tarball size in bytes. The tarballs in this range run from roughly 32 MB (`arxiv/1012.tar.gz`) to roughly 98 MB (`arxiv/2006.tar.gz`).]
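Every hunk in this series has the same shape. Restored to its standard layout, the hunk for `arxiv/1011.tar.gz` reads:

```diff
diff --git a/arxiv/1011.tar.gz b/arxiv/1011.tar.gz
deleted file mode 100644
index de38ae9234480b10444c68ab61cd4446ccae138a..0000000000000000000000000000000000000000
--- a/arxiv/1011.tar.gz
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:72e020c8cdc8c62401b0910b6ec5a26a36155b2fe8310fcb1c82f16699e0c0d1
-size 37271860
```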
[Deleted likewise: the pointers for `arxiv/2205.tar.gz` and `arxiv/2206.tar.gz` (2022-05 and 2022-06) and for `arxiv/9107.tar.gz` through `arxiv/9712.tar.gz` (1991-07 to 1997-12); the final hunk, for `arxiv/9801.tar.gz`, is cut off mid-header. The earliest archives, `arxiv/9107.tar.gz` through `arxiv/9112.tar.gz`, are only 187 bytes each; the remaining 1990s tarballs range from roughly 36 KB (`arxiv/9203.tar.gz`) to roughly 2.2 MB (`arxiv/9606.tar.gz`).]
722cd2690878b8adb09401724cee5989fd10cdce..0000000000000000000000000000000000000000 --- a/arxiv/9801.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:7d6df5fa80dd32596255dc87dd40af1d14214d39b5b25f355d721fc0ec07fc90 -size 3067984 diff --git a/arxiv/9802.tar.gz b/arxiv/9802.tar.gz deleted file mode 100644 index 9f2fd546dd0b2c69fd07122da6e5efbc7066ce7c..0000000000000000000000000000000000000000 --- a/arxiv/9802.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:a5a46f208752bc6f13b7ff0131396da8334d6647e41c83fa8c9f522806ab9ff3 -size 2458667 diff --git a/arxiv/9803.tar.gz b/arxiv/9803.tar.gz deleted file mode 100644 index 7841bd617dd3476e129c999fcaeff84a42f50a6e..0000000000000000000000000000000000000000 --- a/arxiv/9803.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:ee2628b96d3d75d78364b83d4a0552b0623349b25825e1a9eb13a9bc3925a93b -size 3300259 diff --git a/arxiv/9804.tar.gz b/arxiv/9804.tar.gz deleted file mode 100644 index 2f308d374659798a2816d2a60174390f4a6a7c50..0000000000000000000000000000000000000000 --- a/arxiv/9804.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:941b698128803726fde0a7a0d8d805b49c2588585613f43767f1c5b97dc1de99 -size 3515462 diff --git a/arxiv/9805.tar.gz b/arxiv/9805.tar.gz deleted file mode 100644 index ecf87badea3b9f131ec9044995cade5b270e449d..0000000000000000000000000000000000000000 --- a/arxiv/9805.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:62e2f7b470e8a7a3c2be0bc93bdd8dfa01bcca2f2e5f5b8a93b6d6bf91573060 -size 3580362 diff --git a/arxiv/9806.tar.gz b/arxiv/9806.tar.gz deleted file mode 100644 index 4a7bcc8622409e5be92c4faeb51ac496ee7adda4..0000000000000000000000000000000000000000 --- a/arxiv/9806.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:bef7b76b3c884d683284032bd586bf53ff8ec69aec8f2c2101798871ec1fb5a7 -size 3276968 diff --git a/arxiv/9807.tar.gz b/arxiv/9807.tar.gz deleted file mode 100644 index 167d4bd3989bb30ca78456e3c1a72aa667a3b793..0000000000000000000000000000000000000000 --- a/arxiv/9807.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f885256b09a15d68d2140904037e8424c9270b5a0821b2b21b72397ba9c95fed -size 4043147 diff --git a/arxiv/9808.tar.gz b/arxiv/9808.tar.gz deleted file mode 100644 index 77cdb2ce9c0a52a2e21b52cc357ffc9bffa8effc..0000000000000000000000000000000000000000 --- a/arxiv/9808.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:b7bd9e8385a78f9d70c62b9b906f061c5fb2c74c33fbc9d92c7d97ff1533fa62 -size 2829127 diff --git a/arxiv/9809.tar.gz b/arxiv/9809.tar.gz deleted file mode 100644 index 0d293c1fe6205595b20e984788bbd7283a613892..0000000000000000000000000000000000000000 --- a/arxiv/9809.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:2cb88ccf1ddb2a9bc58aab7153fa80251537721871facbe483b3aca031a715df -size 4846704 diff --git a/arxiv/9810.tar.gz b/arxiv/9810.tar.gz deleted file mode 100644 index afa30eea963e4783454bb8531985c8f5f0b1169b..0000000000000000000000000000000000000000 --- a/arxiv/9810.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:69c542a2efa20f527e5a5a1fbc3a4507b9d220f57936537b75a8eac546627b7b -size 3467161 diff --git a/arxiv/9811.tar.gz b/arxiv/9811.tar.gz deleted file mode 100644 index 
3ca54cf7d6c4e3563ce1cf2caaf2a843bd4cd719..0000000000000000000000000000000000000000 --- a/arxiv/9811.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:85fc12a292f1433c536c226e0476c89b971e0b9febafa94e908d59a79d5b16af -size 4208217 diff --git a/arxiv/9812.tar.gz b/arxiv/9812.tar.gz deleted file mode 100644 index 9f35c853d451945fd1626aee43db5e40feb14f63..0000000000000000000000000000000000000000 --- a/arxiv/9812.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:5f97e246ca16a2443f4d02e2de78365a7c234484c1adda6e0583cac8502086c0 -size 3498716 diff --git a/arxiv/9901.tar.gz b/arxiv/9901.tar.gz deleted file mode 100644 index 4a254f6c8cf468bbff129f270638938c63312d89..0000000000000000000000000000000000000000 --- a/arxiv/9901.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:c434d2bf00c3f45bf809bc6538b9ad3cfdb348ab85e7b52a4f816771c2c99599 -size 2982250 diff --git a/arxiv/9902.tar.gz b/arxiv/9902.tar.gz deleted file mode 100644 index 1409165730effb0ed9fd2b4303a77299a2aab97c..0000000000000000000000000000000000000000 --- a/arxiv/9902.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:a4850c0e30a3a531cfa80185999e0e722806dfb91cecad9a4a3310fc5ef38c89 -size 3330697 diff --git a/arxiv/9903.tar.gz b/arxiv/9903.tar.gz deleted file mode 100644 index 85f7b843b95cdac33a8bb3baae6a78b4c733e9c0..0000000000000000000000000000000000000000 --- a/arxiv/9903.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:5b685a6f063e1cab900d1ef7a3b0ada1dd7d5b000a72c3bbf56bd4b5e646a1c8 -size 4818640 diff --git a/arxiv/9904.tar.gz b/arxiv/9904.tar.gz deleted file mode 100644 index 5932d84a2a37d59f40924a7dea2b3214e635b607..0000000000000000000000000000000000000000 --- a/arxiv/9904.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:a591379c9d584b05cb0891f1cbbdf2302e8e443a96fcd9409c7daae8104cb778 -size 4199502 diff --git a/arxiv/9905.tar.gz b/arxiv/9905.tar.gz deleted file mode 100644 index 41e8e6b393ff7acf9eabd3f4ac2be0c548c4e63a..0000000000000000000000000000000000000000 --- a/arxiv/9905.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:aba0aee188401e60d4310f5fcc2caaaf14fb10a492421abed9b1b4a920cbc78a -size 4043246 diff --git a/arxiv/9906.tar.gz b/arxiv/9906.tar.gz deleted file mode 100644 index b049ee3d7df0097038a63454b45f988dcde038bb..0000000000000000000000000000000000000000 --- a/arxiv/9906.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:fc1ea918c59f0e583a0fa3ee724678b4f93ad79aae95924ec7bf9b1a3e5b1e6e -size 4641755 diff --git a/arxiv/9907.tar.gz b/arxiv/9907.tar.gz deleted file mode 100644 index 3462aa702cc8511ed1ad44da0454f72d5a8f5d48..0000000000000000000000000000000000000000 --- a/arxiv/9907.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:a5d50d5f8e681ae9b7b9eb548d03ab8682f88b967d2a2441b7873a73040e516b -size 4207788 diff --git a/arxiv/9908.tar.gz b/arxiv/9908.tar.gz deleted file mode 100644 index 2c3da5c76382775c7b31835bfd9f5fb4e383f36b..0000000000000000000000000000000000000000 --- a/arxiv/9908.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:7de177a2a5e35c40e570fd847db2c83da26d08807861b0daad3b6481a729f5ed -size 3854708 diff --git a/arxiv/9909.tar.gz b/arxiv/9909.tar.gz deleted file mode 100644 index 
325ccd6a9466a4fe113ac2498535457f3f724680..0000000000000000000000000000000000000000 --- a/arxiv/9909.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:530a3c5843392f8b4081c8d78a70f525037aac78bfc45444a72b930f6eea4d75 -size 4762217 diff --git a/arxiv/9910.tar.gz b/arxiv/9910.tar.gz deleted file mode 100644 index 9a817f0aba117b3b0ebb19d80c69c489c6273b25..0000000000000000000000000000000000000000 --- a/arxiv/9910.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f6ae5113d4e65473e062dce80a77d304564b37a1afdfbc44c4064d31b93a1527 -size 4514619 diff --git a/arxiv/9911.tar.gz b/arxiv/9911.tar.gz deleted file mode 100644 index e3e137024d0806b51537bb946445939bad3a8105..0000000000000000000000000000000000000000 --- a/arxiv/9911.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:9d270aa39492460c4819c19072ff2e1654ccff1a8dc5a8b5f3bbe422f702547a -size 5126321 diff --git a/arxiv/9912.tar.gz b/arxiv/9912.tar.gz deleted file mode 100644 index 104b9b84571390578d1596b05677037c98d9dd5e..0000000000000000000000000000000000000000 --- a/arxiv/9912.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:d9361f42a99d82772bf5ced4d309cb9da25a52a4285e98fdd980ca378c1da8fc -size 5414956 diff --git a/books/cam_train.jsonl.gz b/books/cam_train.jsonl.gz deleted file mode 100644 index b56a0248d46600892bf38934d498c46669278415..0000000000000000000000000000000000000000 --- a/books/cam_train.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:995ac1f529299c83c4fb0546f777863b1d596c8fe5fd96d612d8597619e68ecb -size 2801217 diff --git a/books/cam_val.jsonl.gz b/books/cam_val.jsonl.gz deleted file mode 100644 index 684485d78655f4bb69115d025a8bb42bdb376825..0000000000000000000000000000000000000000 --- a/books/cam_val.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:5724a80cd30b75aa13e33de317b167fd2ae775ecf652e785c2d58114be0fbec9 -size 189577 diff --git a/books/cring_train.jsonl.gz b/books/cring_train.jsonl.gz deleted file mode 100644 index 76f5031778adf4bf12ab7b8989ce0245aa2c8058..0000000000000000000000000000000000000000 --- a/books/cring_train.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:0b74f72c7d0a4c70df784df72b61ab4b7f43c31753d812438293f0b9c396e12d -size 375138 diff --git a/books/cring_val.jsonl.gz b/books/cring_val.jsonl.gz deleted file mode 100644 index 5e70fe59193c25208d84acfc222057bd9a76310f..0000000000000000000000000000000000000000 --- a/books/cring_val.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:92719a2c81ee363b6982c1286eca6a77a6e13fb27e6b5a5c809d8feb4f89761d -size 36 diff --git a/books/hott_train.jsonl.gz b/books/hott_train.jsonl.gz deleted file mode 100644 index 0417b5c35dd1f6b43bc02e036485438c9fc40a1d..0000000000000000000000000000000000000000 --- a/books/hott_train.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:03d0bdb31f8ef2973d6c4e7ede66928739044bac6c0b0b3475191e2adb1fe255 -size 443503 diff --git a/books/hott_val.jsonl.gz b/books/hott_val.jsonl.gz deleted file mode 100644 index c7db9d159bb397cd7eeb26b6b19e1e24e5b211cd..0000000000000000000000000000000000000000 --- a/books/hott_val.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid 
sha256:e0f6554c7bc83ca835e2303eef2291e22aba41ea55178a6a9ee89fd67970e630 -size 31535 diff --git a/books/napkin_train.jsonl.gz b/books/napkin_train.jsonl.gz deleted file mode 100644 index dd4be8215010205d6d3899a6f81392e9fb6eadd3..0000000000000000000000000000000000000000 --- a/books/napkin_train.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:0567de9213c0e27d928a83af7bfa01b3d776d94de84fb53443aedd76cd2b1b0c -size 580890 diff --git a/books/napkin_val.jsonl.gz b/books/napkin_val.jsonl.gz deleted file mode 100644 index 56e93ca75874c4e8f1f64ee9ddaba9d48a895c45..0000000000000000000000000000000000000000 --- a/books/napkin_val.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:5a37a5ffa98f1ce515c5511a6756157093276a418825da55c34251be69a2050e -size 29307 diff --git a/books/stacks_train.jsonl.gz b/books/stacks_train.jsonl.gz deleted file mode 100644 index 10fed8638c66f1757006b60db781a9d64cf9be3b..0000000000000000000000000000000000000000 --- a/books/stacks_train.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:bc203cb668763b128f9ff0e8c4b3b5764c22a29b8f46332cc1d32040d4d8244f -size 5947108 diff --git a/books/stacks_val.jsonl.gz b/books/stacks_val.jsonl.gz deleted file mode 100644 index bd1609afe7b333d3c1a60b74cc99760ba26f474f..0000000000000000000000000000000000000000 --- a/books/stacks_val.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:88f1f243ffb4fd843edfbf7bf5039b9feb802cd67adde0dbb6fb9b852e39c9c6 -size 241678 diff --git a/books/stein_train.jsonl.gz b/books/stein_train.jsonl.gz deleted file mode 100644 index 2234e10ca52d9aeafc03876b31dbb2d577533050..0000000000000000000000000000000000000000 --- a/books/stein_train.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:31550d6ed6b9b325fa4e62d3367aee153e76a553940390acdf3c64d8fb5621ed -size 113263 diff --git a/books/stein_val.jsonl.gz b/books/stein_val.jsonl.gz deleted file mode 100644 index 28ba66124974a68737c2df35427c87801b83a404..0000000000000000000000000000000000000000 --- a/books/stein_val.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:3c575f4d29aedb4eaf2f9f17c7f04ed834d5aa26b31247b5bc9bd3f82e2ca47f -size 36 diff --git a/books/trench_train.jsonl.gz b/books/trench_train.jsonl.gz deleted file mode 100644 index a084932183eba250f5bd1a7f2274cd68bbedfe3e..0000000000000000000000000000000000000000 --- a/books/trench_train.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:d502a2235606ecd5498b4fa6cd005702b2738791ccb40acc36e7daf9f4ab3dde -size 389739 diff --git a/books/trench_val.jsonl.gz b/books/trench_val.jsonl.gz deleted file mode 100644 index eaed0888b1f82e18f565a4d32c374d1d4341d1bb..0000000000000000000000000000000000000000 --- a/books/trench_val.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:c33a9b8a2200e07a8f0e2fcf4c865ade8a884231fe8663a362a09a11e5385a19 -size 37 diff --git a/dev/proofpile_dev.jsonl.gz b/dev/proofpile_dev.jsonl.gz new file mode 100644 index 0000000000000000000000000000000000000000..a79770c5c2c9f8ceae83dbc0786cf6de6905a620 --- /dev/null +++ b/dev/proofpile_dev.jsonl.gz @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2129d758ca520d8a4ad4eff22a77ab880c603650a796e047e981419a7d0d8f04 +size 144101628 diff --git a/example.py b/example.py new file mode 100644 index 
0000000000000000000000000000000000000000..da25c8a1f5bfbb9fdf50390fce27aa87a1bd6c57 --- /dev/null +++ b/example.py @@ -0,0 +1,26 @@ +from datasets import load_dataset +from itertools import islice +import sys +import time +from tqdm import tqdm + +dataset = load_dataset("./proof-pile.py", "default") + +size = dataset["train"].dataset_size / 2**30 +print(f"{size} GB TRAIN TOTAL") +print(dataset) +for x in tqdm(dataset["train"]): + print("EXAMPLE INSTANCE (trimmed):") + print(x["text"][:100]) + break + +then = time.time() +for x in tqdm(dataset["train"]): + pass +now = time.time() +print(f"{size} GB TRAIN TOTAL") +print(f"TRAVERSED IN {now-then} SECONDS") + +size += dataset["validation"].dataset_size/2**30 + dataset["test"].dataset_size/2**30 +print(f"{size} GB TOTAL (TRAIN, VAL, TEST)") + diff --git a/fetch_arxiv.py b/fetch_arxiv.py deleted file mode 100644 index 55348fcbdde6b14a9546b6a78e977b60c02f1683..0000000000000000000000000000000000000000 --- a/fetch_arxiv.py +++ /dev/null @@ -1,272 +0,0 @@ -import os -import sys -from pathlib import Path -import datetime - -import tarfile -import xml.etree.ElementTree as ET -from tqdm import tqdm -import re -from itertools import chain, islice -import requests -import time - -import shutil - -import arxiv - -import langdetect -from langdetect import detect - -from utils import Loader as Loader -from utils import make_archive - -def batch_loader(seq, size): - """ - Iterator that takes in a list `seq` and returns - chunks of size `size` - """ - return [seq[pos:pos + size] for pos in range(0, len(seq), size)] - - -def _delete_files_except_pattern(path, pattern, transform = lambda x: None, verbose=False): - """ - recursively - """ - for f in os.listdir(path): - f_path = os.path.join(path, f) - if verbose: - print(f_path) - if os.path.isfile(f_path): - if not re.search(pattern, f): - os.chmod(f_path, 0o755) - os.remove(f_path) - else: - transform(f_path) - elif os.path.isdir(f_path): - try: - print(f_path) - except UnicodeEncodeError: - new_path = f_path.encode("utf-8", 'replace').decode() - os.system(f"mv \"{f_path}\" \"{new_path}\"") - f_path = new_path - - _delete_files_except_pattern(f_path, pattern, transform=transform, verbose=verbose) - -def _download_with_progress_bar(url): - response = requests.get(url, stream=True) - total_size_in_bytes = int(response.headers.get("content-length", 0)) - block_size = 1024 # 1 Kibibyte - progress_bar = tqdm(total=total_size_in_bytes, unit="iB", unit_scale=True) - to_return = bytearray() - for data in response.iter_content(block_size): - progress_bar.update(len(data)) - to_return += data - progress_bar.close() - if total_size_in_bytes != 0 and progress_bar.n != total_size_in_bytes: - raise AssertionError("ERROR, something went wrong") - - return to_return - -def get_math_ids(resumption_token="init"): - with Loader(f"fetching metadata shard {resumption_token}..."): - if resumption_token=="init": - resp = requests.get("https://export.arxiv.org/oai2?verb=ListIdentifiers&set=math&metadataPrefix=oai_dc") - else: - time.sleep(5) - resp = requests.get(f"https://export.arxiv.org/oai2?verb=ListIdentifiers&resumptionToken={resumption_token}") - - root = ET.fromstring(resp.content.decode("utf-8")) - articles = root[2] - - math_ids = {} - for article in articles: - if article.tag == "{http://www.openarchives.org/OAI/2.0/}resumptionToken": - if article.text: - return math_ids | get_math_ids(resumption_token=article.text) - else: - return math_ids - - db_id = article[0].text - eyed = db_id[db_id.rindex(":")+1:] - math_ids[eyed] = 
True - -def clean_tex_file(path): - with open(path, encoding="utf-8") as f: - try: - src = f.read() - except (UnicodeDecodeError, UnicodeError): - print(f"Decoding error at {path} with utf-8. Trying latin-1") - try: - with open(path, encoding="latin-1") as fle: - src = fle.read() - #print("latin-1 successful\n") - except (UnicodeDecodeError, UnicodeError): - #print(f"Decoding error at {path} with latin-1. Trying utf-16") - try: - with open(path, encoding="utf-16") as fl: - src = fl.read() - #print("utf-16 successful\n") - except (UnicodeDecodeError, UnicodeError): - #print(f"Decoding error at {path} with utf-16. Trying utf-32") - try: - with open(path, encoding="utf-32") as f: - src = f.read() - except (UnicodeDecodeError, UnicodeError): - print(f"Decoding error at {path} with all of utf-8, 16, 32 and latin-1. Deleting this file") - print("This issue should only occur with a handful of quite old files. Continuing...\n") - return - - end = re.search(r"\\end\{document\}", src) - if end: - src = src[:end.span()[1]] - - bib = re.search(r"\\Refs|\\begin\{thebibliography\}", src) - if bib: - src = src[:bib.span()[0]] - - os.chmod(path, 0o755) - with open(path, "w", encoding="utf-8") as f: - f.write(src) - -def clean_tex_file_some_more(path): - with open(path) as f: - text = f.read() - - text = re.sub(r"(?<!\\)%.*", "", text) - - text = re.sub(r"\n{4,}", "\n\n\n", text) - - if len(text)>280: - try: - print(path) - except UnicodeEncodeError: - path = path.encode('utf-8', 'replace').decode() - - try: - lang = detect(text) - except langdetect.lang_detect_exception.LangDetectException: - # no linguistic features to analyze, delete - return - - if lang=="en": - with open(path, "w") as f: - f.write(text) - else: - print("HIT NONENGLISH ARTICLE") - -def process_tarball_old_scheme(tarball_name, save_dir): - tarball_path = os.path.join(save_dir, tarball_name) - os.system("tar -xf " + tarball_path + " -C " + save_dir) - - last_ = tarball_name.rfind("_") - second_last_ = tarball_name.rfind("_", 0, last_) - subdir = tarball_name[second_last_+1:last_] - - subpath = os.path.join(save_dir, subdir) - zipped_names = os.listdir(subpath) - - for zipped_name in zipped_names: - if zipped_name[-len(".gz"):]==".gz": - zipped_path = os.path.join(subpath, zipped_name) - if re.match(r"math", zipped_name): - eyed = zipped_name[:-len(".gz")] - if tarfile.is_tarfile(zipped_path): - article_dir = os.path.join(subpath, eyed) - Path(article_dir).mkdir() - os.system("tar -xzf " + zipped_path + " -C " + article_dir) - os.remove(zipped_path) - else: - os.system("gzip -d " + zipped_path) - unzipped_path = os.path.join(subpath, eyed) - os.rename(unzipped_path, unzipped_path + ".tex") - else: - os.remove(zipped_path) - - _delete_files_except_pattern(subpath, r".*\.tex", transform=clean_tex_file) - os.remove(tarball_path) - -def process_tarball(tarball_name, save_dir, math_ids): - tarball_path = os.path.join(save_dir, tarball_name) - untar_cmd = "tar -xf " + tarball_path + " -C " + save_dir - os.system(untar_cmd) - - last_ = tarball_name.rfind("_") - second_last_ = tarball_name.rfind("_", 0, last_) - subdir = tarball_name[second_last_+1:last_] - - subpath = os.path.join(save_dir, subdir) - listdir = os.listdir(subpath) - - ids = [x[:-3] for x in listdir if x[-3:]==".gz"] - - for eyed in ids: - if eyed in math_ids: - zipped_path = os.path.join(subpath, eyed + ".gz") - - if tarfile.is_tarfile(zipped_path): - article_dir = os.path.join(subpath, eyed) - Path(article_dir).mkdir() - os.system("tar -xzf " + zipped_path + " -C " + article_dir) - os.remove(zipped_path) - else: - os.system("gzip -d " + zipped_path) -
unzipped_path = os.path.join(subpath, eyed) - os.rename(unzipped_path, unzipped_path + ".tex") - - _delete_files_except_pattern(subpath, r".*\.tex", transform=clean_tex_file) - os.remove(tarball_path) - -def main(): - """ - Warning: this code is *extremely* brittle - """ - math_ids = get_math_ids() - - save_dir = "arxiv" - Path(save_dir).mkdir(exist_ok=True) - manifest_path = os.path.join(save_dir, "manifest.xml") - - os.system(f"s3cmd get s3://arxiv/src/arXiv_src_manifest.xml --requester-pays {manifest_path}") - - tree = ET.parse(manifest_path) - root = tree.getroot() - - shards_and_dates = [] - for child in root: - if child.tag == "file": - shard = child[1].text # the index of filename - yymm = child[9].text # the index of yymm - shards_and_dates.append((shard, yymm)) - - format_cutoff = datetime.datetime(2007, 3, 1) # arXiv switches from old to new format - for shard, yymm in tqdm(shards_and_dates): - print("SHARD: ", shard) - os.system(f"s3cmd get s3://arxiv/" + shard + \ - " --requester-pays " + save_dir) - tarball_name=shard[shard.rindex("/")+1:] - - # nb this code will stop working in 2051 ;) - year = int("19" + yymm[:2]) if int(yymm[:2])>50 else int("20"+yymm[:2]) - if datetime.datetime(year, int(yymm[2:]), 1)<=format_cutoff: - process_tarball_old_scheme(tarball_name, save_dir) - else: - process_tarball(tarball_name, save_dir, math_ids) - - os.remove(manifest_path) - -if __name__=="__main__": - main() - _delete_files_except_pattern("arxiv", r".*\.tex$", transform=clean_tex_file_some_more) - for f in tqdm(os.listdir("arxiv")): - f_path = os.path.join("arxiv", f) - make_archive(f_path) diff --git a/fetch_books_and_formal.py b/fetch_books_and_formal.py deleted file mode 100644 index 6787ba3cef738e06b0d9f717c48c0f1f42b6deb7..0000000000000000000000000000000000000000 --- a/fetch_books_and_formal.py +++ /dev/null @@ -1,601 +0,0 @@ -import sys -import os - -import json -import ndjson -import re - -from pathlib import Path -from tqdm import tqdm - -import requests -from tqdm import tqdm - -import base64 - -def jsonl_of_path(path, jsonl_train_path, jsonl_val_path, - train_split_key, val_split_key): - train_instances = [] - val_instances = [] - print("CREATING JSONL.GZ") - with open("splits.json") as f: - splits = json.load(f) - - for root, dirs, files in tqdm(os.walk(path)): - for name in files: - this_path = os.path.join(root, name) - with open(this_path) as f: - text = f.read() - - instance = {"text": text, - "meta": { - "subset_name": "curated", - "file": os.path.join(root, name) - } - } - - if this_path in splits[train_split_key]: - train_instances.append(instance) - elif this_path in splits[val_split_key]: - val_instances.append(instance) - else: - raise KeyError("key not found in splits.json") - - - with open(jsonl_train_path, "w") as f: - ndjson.dump(train_instances, f) - os.system("gzip " + jsonl_train_path) - - with open(jsonl_val_path, "w") as f: - ndjson.dump(val_instances, f) - os.system("gzip " + jsonl_val_path) - - os.system("rm -r " + path) - print("successful conversion to jsonl") - - -def check_encoding(path): - for f in os.listdir(path): - f_path = os.path.join(path, f) - if os.path.isfile(f_path): - with open(f_path, encoding="utf-8") as fle: - try: - fle.read() - except UnicodeDecodeError: - print(f"{f_path} is not unicode") - elif os.path.isdir(f_path): - check_encoding(f_path) - - -def _get_dir_from_repo(author, repo, sha, repo_dir, save_path, creds): - """ - This super inelegant solution is to get around the github api rate limit - - repo_dir must be top-level in the
repo. - """ - Path(save_path).mkdir(parents=True, exist_ok=True) - archive_path = os.path.join(save_path, "archive.tar.gz") - tarball_url = ( - "https://github.com/" + author + "/" + repo + "/archive/" + sha + ".tar.gz" - ) - - os.system("wget -O " + archive_path + " " + tarball_url) - os.system("tar -xzf " + archive_path + " -C " + save_path) - - export_name = repo + "-" + sha - - os.system( - "cp -r " + os.path.join(save_path, export_name, repo_dir, "*") + " " + save_path - ) - os.system("rm -r " + os.path.join(save_path, export_name) + " " + archive_path) - - -def _delete_files_except_pattern(path, pattern): - """ - recursively - """ - for f in os.listdir(path): - f_path = os.path.join(path, f) - if os.path.isfile(f_path): - if not re.search(pattern, f): - os.remove(f_path) - else: # debugging - with open(f_path, encoding="utf-8") as f: - try: - f.read() - except: - print(f"{f_path} not unicode encoded") - elif os.path.islink(f_path): - os.remove(f_path) - elif os.path.isdir(f_path): - _delete_files_except_pattern(f_path, pattern) - - -def _download_with_progress_bar(url): - response = requests.get(url, stream=True) - total_size_in_bytes = int(response.headers.get("content-length", 0)) - block_size = 1024 # 1 Kibibyte - progress_bar = tqdm(total=total_size_in_bytes, unit="iB", unit_scale=True) - to_return = bytearray() - for data in response.iter_content(block_size): - progress_bar.update(len(data)) - to_return += data - progress_bar.close() - if total_size_in_bytes != 0 and progress_bar.n != total_size_in_bytes: - raise AssertionError("ERROR, something went wrong") - - return to_return - - -def _blob_to_text(blob, creds): - resp = requests.get(blob["url"], auth=creds) - if resp.status_code != 200: - raise AssertionError("Failed to fetch from Github API") - - resp_json = json.loads(resp.content.decode("utf-8")) - return base64.b64decode(resp_json["content"]) - - -def lean(creds): - save_dir = "formal/lean" - Path(save_dir).mkdir(parents=True, exist_ok=True) - - sources = [ - { - "author": "leanprover-community", - "repo": "mathlib", - "sha": "63138639ca195344ae96aa77f3a02b90a3ac5c68", - "repo_dir": "src", - "save_path": os.path.join(save_dir, "mathlib"), - }, - { - "author": "leanprover-community", - "repo": "lean-liquid", - "sha": "9701fc4a29514852b599e9732c2409f34153ce2a", - "repo_dir": "src", - "save_path": os.path.join(save_dir, "liquid"), - }, - { - "author": "leanprover-community", - "repo": "sphere-eversion", - "sha": "cb378966c3c02d9e4ee83040d20c51782fa351ae", - "repo_dir": "src", - "save_path": os.path.join(save_dir, "sphere-eversion"), - }, - { - "author": "leanprover-community", - "repo": "lftcm2020", - "sha": "8b9f7c47b546227b7b6c877315e45eaccc2a0d70", - "repo_dir": "src", - "save_path": os.path.join(save_dir, "lftcm"), - }, - { - "author": "leanprover-community", - "repo": "lean-perfectoid-spaces", - "sha": "95a6520ce578b30a80b4c36e36ab2d559a842690", - "repo_dir": "src", - "save_path": os.path.join(save_dir, "perfectoid"), - }, - { - "author": "leanprover-community", - "repo": "mathzoo", - "sha": "87e9b492daeb929838706942aaa2437621b34a0e", - "repo_dir": "src", - "save_path": os.path.join(save_dir, "mathzoo"), - }, - ] - - for source in sources: - _get_dir_from_repo(**source, creds=creds) - _delete_files_except_pattern(source["save_path"], r".*\.lean") - - # we also don't want meta code - to_delete = ["tactic", "meta"] - os.system( - "rm -r " + " ".join([os.path.join(save_dir, "mathlib", x) for x in to_delete]) - ) - - jsonl_of_path(save_dir, "formal/lean_train.jsonl", 
"formal/lean_val.jsonl", - "formal-train", "formal-valid") - - -def coq(creds): - save_dir = "formal/coq" - Path(save_dir).mkdir(parents=True, exist_ok=True) - - sources = [ - { - "author": "math-comp", - "repo": "analysis", - "sha": "2ae3b628d12cacdc000c4cd70e6f3cae26ecf429", - "repo_dir": "theories", - "save_path": os.path.join(save_dir, "analysis"), - }, - { - "author": "math-comp", - "repo": "math-comp", - "sha": "65519a110ffdad7869b2a7cd08a2ddb51161b377", - "repo_dir": "mathcomp", - "save_path": os.path.join(save_dir, "math-comp"), - }, - { - "author": "math-comp", - "repo": "odd-order", - "sha": "833261a01fd0c62b05ccbadfc0c682e0bc16a5e9", - "repo_dir": "theories", - "save_path": os.path.join(save_dir, "odd-order"), - }, - { - "author": "math-comp", - "repo": "Abel", - "sha": "61d79aeb0acc1855e22882c484b73645df53b746", - "repo_dir": "theories", - "save_path": os.path.join(save_dir, "abel"), - }, - ] - - for source in sources: - _get_dir_from_repo(**source, creds=creds) - _delete_files_except_pattern(source["save_path"], r".*\.v") - - jsonl_of_path(save_dir, "formal/coq_train.jsonl", "formal/coq_val.jsonl", - "formal-train", "formal-valid") - - -def trench(): - save_dir = "books/trench" - archive_path = os.path.join(save_dir, "trench.zip") - Path(save_dir).mkdir(parents=True, exist_ok=True) - - print("DOWNLOADING TRENCH") - os.system( - "wget -O " - + archive_path - + ' "https://digitalcommons.trinity.edu/cgi/viewcontent.cgi?filename=2&article=1006&context=mono&type=additional"' - ) - print("DONE DOWNLOADING TRENCH") - - os.system("unzip " + archive_path + " -d " + save_dir) - to_delete = ["trench.zip", "wtrench.sty", "SETEPS.TEX", "EPS"] - os.system("rm -r " + " ".join([os.path.join(save_dir, f) for f in to_delete])) - - jsonl_of_path(save_dir, "books/trench_train.jsonl", "books/trench_val.jsonl", - "books-train", "books-valid") - - -def setmm(creds): - save_dir = "formal/setmm" - Path(save_dir).mkdir(parents=True, exist_ok=True) - - headers = { - "Accept": "application/vnd.git-lfs+json", - } - - json_data = { - "operation": "download", - "transfer": [ - "basic", - ], - "objects": [ - { - "oid": "ff1a12d49d4c68a05245bfd369af358b93c51b8c141419085bb5cef830f6eb7a", - "size": 182269314, - }, - ], - } - - response = requests.post( - "https://github.com/zhangir-azerbayev/mm-extract.git/info/lfs/objects/batch", - headers=headers, - json=json_data, - ) - - resp_json = response.json() - - download_url = resp_json["objects"][0]["actions"]["download"]["href"] - - encoded_src = _download_with_progress_bar(download_url) - src = encoded_src.decode("utf-8") - - with open(os.path.join(save_dir, "set.mm"), "w") as f: - f.write(src) - - jsonl_of_path(save_dir, "formal/setmm_train.jsonl", "formal/setmm_val.jsonl", - "formal-train", "formal-valid") - - -def stein(creds): - save_dir = "books/stein" - Path(save_dir).mkdir(parents=True, exist_ok=True) - - print("DOWNLOADING STEIN") - resp = _download_with_progress_bar( - "https://api.github.com/repos/williamstein/ent/git/blobs/a70578277b1222c94dc395f7d5baaf9862afd166" - ) - print("DONE DOWNLOADING STEIN") - - resp_json = json.loads(resp.decode("utf-8")) - src_encoded = base64.b64decode(resp_json["content"]) - src = src_encoded.decode("utf-8") - - with open(os.path.join(save_dir, "stein.tex"), "w") as f: - f.write(src) - - jsonl_of_path(save_dir, "books/stein_train.jsonl", "books/stein_val.jsonl", - "books-train", "books-valid") - -def cam(): - save_dir = "books/cam" - archive_path = os.path.join(save_dir, "cam.tar.gz") - Path(save_dir).mkdir(parents=True, 
exist_ok=True) - - os.system("wget -O " + archive_path + " https://github.com/dalcde/cam-notes/archive/06b2239.tar.gz") - - os.system ("tar -xf " + archive_path + " -C " + save_dir) - export_name = "cam-notes-06b2239b006f14d833cca2434190ebbf9a304bc6/" - os.system( - "cp -r " - + os.path.join( - save_dir, export_name, "* ") - + save_dir - ) - os.system("rm -r " + os.path.join(save_dir, export_name)) - os.remove(archive_path) - os.remove(os.path.join(save_dir, "header.tex")) - - _delete_files_except_pattern(save_dir, r".*\.tex") - - jsonl_of_path(save_dir, "books/cam_train.jsonl", "books/cam_val.jsonl", - "books-train", "books-valid") - -def hol(testing=False): - save_dir = "formal/hol" - archive_path = os.path.join(save_dir, "hol.zip") - Path(save_dir).mkdir(parents=True, exist_ok=True) - - if not testing: - os.system( - "wget -O " - + archive_path - + " https://github.com/jrh13/hol-light/archive/538c62f.tar.gz" - ) - - os.system("tar -xvf " + archive_path + " -C " + save_dir) - os.system( - "mv " - + os.path.join( - save_dir, "hol-light-538c62f7cdb0df146752c83f85fa672ae3906b03/* " - ) - + save_dir - ) - os.system( - "rm -r " - + os.path.join(save_dir, "hol-light-538c62f7cdb0df146752c83f85fa672ae3906b03") - ) - os.system("rm " + archive_path) - - # all top level files are metaprogramming, so delete them - for f in os.listdir(save_dir): - f_path = os.path.join(save_dir, f) - if os.path.isfile(f_path): - os.remove(f_path) - - os.system("rm -r formal/hol/Proofrecording") - - _delete_files_except_pattern(save_dir, r".*\.ml|.*\.doc") - - jsonl_of_path(save_dir, "formal/hol_train.jsonl", "formal/hol_val.jsonl", - "formal-train", "formal-valid") - -def afp(testing=False): - save_dir = "formal/afp" - archive_path = os.path.join(save_dir, "afp.zip") - Path(save_dir).mkdir(parents=True, exist_ok=True) - - if not testing: - os.system( - "wget -O " - + archive_path - + " https://github.com/isabelle-prover/mirror-afp-2021-1/archive/5a85b23.tar.gz" - ) - - os.system("tar -xf " + archive_path + " -C " + save_dir) - os.system( - "mv " - + os.path.join( - save_dir, - "mirror-afp-2021-1-5a85b23fb030c472d9a7b2d65a61e428f4eb8233/thys/* ", - ) - + save_dir - ) - os.system( - "rm -r " - + os.path.join( - save_dir, "mirror-afp-2021-1-5a85b23fb030c472d9a7b2d65a61e428f4eb8233" - ) - ) - os.system("rm " + archive_path) - - _delete_files_except_pattern(save_dir, r".*\.thy|.*\.tex") - - jsonl_of_path(save_dir, "formal/afp_train.jsonl", "formal/afp_val.jsonl", - "formal-train", "formal-valid") - -def mizar(creds): - save_dir = "formal/mizar" - Path(save_dir).mkdir(parents=True, exist_ok=True) - - resp = requests.get( - "https://api.github.com/repos/zhangir-azerbayev/mizar-mirror/git/trees/ce8e9735fd7a4d3488069c48da76bc622aec46ec" - ) - if resp.status_code != 200: - raise AssertionError("Failed to fetch mizar from Github API") - - resp_json = resp.json() - tree = resp_json["tree"] - - print("DOWNLOADING MIZAR") - for blob in tqdm(tree): - assert blob["type"] == "blob" - - src = _blob_to_text(blob, creds) - src = src.decode("utf-8") - # mml files have licensing information from lines 2-12 - src = "\n".join( - [x for i, x in enumerate(src.split("\n")) if i not in range(2, 13)] - ) - - save_path = os.path.join(save_dir, blob["path"]) - with open(save_path, "w") as f: - f.write(src) - print("DONE DOWNLOADING MIZAR") - - jsonl_of_path(save_dir, "formal/mizar_train.jsonl", "formal/mizar_val.jsonl", - "formal-train", "formal-valid") - - -def hott(creds): - save_dir = "books/hott" - Path(save_dir).mkdir(parents=True, 
exist_ok=True) - - resp = requests.get( - "https://api.github.com/repos/HoTT/book/git/trees/781565e93979f926001a353bf4ee1284ffa4fcb0", - auth=creds, - ) - if resp.status_code != 200: - raise AssertionError("Failed to fetch HoTT book from Github API") - - resp_json = resp.json() - tree = resp_json["tree"] - blobs = [blob for blob in tree if blob["type"] == "blob"] - - banned = [ - "back.tex", - "bmpsize-hack.tex", - "main.tex", - ] - - banned_rgx = r"opt|cover|front|hott" - - print("DOWNLOADING HOTT BOOK") - for blob in tqdm(blobs): - if ( - blob["path"][-4:] == ".tex" - and blob["path"] not in banned - and not re.match(banned_rgx, blob["path"]) - ): - src_enc = _blob_to_text(blob, creds) - src = src_enc.decode("utf-8") - - save_path = os.path.join(save_dir, blob["path"]) - with open(save_path, "w") as f: - f.write(src) - - print("DONE DOWNLOADING HOTT BOOK") - - jsonl_of_path(save_dir, "books/hott_train.jsonl", "books/hott_val.jsonl", - "books-train", "books-valid") - -def stacks(creds): - save_dir = "books/stacks" - Path(save_dir).mkdir(parents=True, exist_ok=True) - - resp = requests.get( - "https://api.github.com/repos/stacks/stacks-project/git/trees/0a847ff5e41b47795be075e130e7810173b35933", - auth=creds, - ) - resp_json = json.loads(resp.content.decode("utf-8")) - # assumes everything we need is a top level file, which is true for this commit. - blobs = resp_json["tree"] - print("DOWNLOADING STACKS") - for blob in tqdm(blobs): - if ( - blob["type"] == "blob" - and blob["path"][-4:] == ".tex" - and blob["path"] != "fdl.tex" - ): - decoded_content = _blob_to_text(blob, creds) - with open(os.path.join(save_dir, blob["path"]), "wb") as f: - f.write(decoded_content) - print("DONE DOWNLOADING STACKS") - - jsonl_of_path(save_dir, "books/stacks_train.jsonl", "books/stacks_val.jsonl", - "books-train", "books-valid") - -def cring(creds): - save_dir = "books/cring" - Path(save_dir).mkdir(parents=True, exist_ok=True) - - resp = requests.get( - "https://api.github.com/repos/aisejohan/cring/git/trees/2db2618ff70831002aeefbb16885ee42d5198db3", - auth=creds, - ) - if resp.status_code != 200: - raise AssertionError("Failed to fetch cring from Github API") - - trees = json.loads(resp.content.decode("utf-8"))["tree"] - - print("DOWNLOADING CRING") - for blob in tqdm(trees): - if blob["type"] == "blob" and blob["path"] != "license.tex": - decoded_content = _blob_to_text(blob, creds) - with open(os.path.join(save_dir, blob["path"]), "wb") as f: - f.write(decoded_content) - - print("DONE DOWNLOADING CRING") - - jsonl_of_path(save_dir, "books/cring_train.jsonl", "books/cring_val.jsonl", - "books-train", "books-valid") - - -def napkin(creds): - save_dir = "books/napkin" - Path(save_dir).mkdir(parents=True, exist_ok=True) - - resp = requests.get( - "https://api.github.com/repos/vEnhance/napkin/git/trees/4f56c2ef5d0faf132ee14c15d96fb0f134d58bf0", - auth=creds, - ) - - if resp.status_code != 200: - raise AssertionError("Failed to fetch napkin tree from Github API") - - trees = json.loads(resp.content.decode("utf-8"))["tree"] - - # We are assuming that we only want the files exactly two levels deep - - print("DOWNLOADING NAPKIN") - for tree in tqdm(trees): - if tree["type"] == "tree": - resp = requests.get(tree["url"], auth=creds) - blobs = json.loads(resp.content.decode("utf-8"))["tree"] - for blob in blobs: - if blob["type"] == "blob": - decoded_content = _blob_to_text(blob, creds) - with open(os.path.join(save_dir, blob["path"]), "wb") as f: - f.write(decoded_content) - print("DONE DOWNLOADING NAPKIN") - - 
jsonl_of_path(save_dir, "books/napkin_train.jsonl", "books/napkin_val.jsonl", - "books-train", "books-valid") - - -def main(): - creds = ("zhangir-azerbayev", os.environ["GITHUB_TOKEN"]) - napkin(creds) - cring(creds) - stacks(creds) - mizar(creds) - afp(testing=False) - setmm(creds) - trench() - hott(creds) - stein(creds) - coq(creds) - lean(creds) - hol() - cam() - - -if __name__ == "__main__": - main() diff --git a/fetch_math_dataset.py b/fetch_math_dataset.py deleted file mode 100644 index 12eb7250afb4805b0f0ddc3e91451bd71d06446e..0000000000000000000000000000000000000000 --- a/fetch_math_dataset.py +++ /dev/null @@ -1,62 +0,0 @@ -import os -import sys -import json -import ndjson -from pathlib import Path -from fetch_mathoverflow import batch_loader -import random - -from utils import make_archive - -ARCHIVE_URL = "https://people.eecs.berkeley.edu/~hendrycks/MATH.tar" -SAVE_PATH = "math-dataset" - -random.seed(20) - -def main(): - VAL_RATE=5e-2 - Path(SAVE_PATH).mkdir(exist_ok=True) - - archive_path = os.path.join(SAVE_PATH, "archive.tar") - os.system("wget -O " + archive_path + " " + ARCHIVE_URL) - os.system("tar -xf " + archive_path + " -C " + SAVE_PATH) - - cat_dir = os.path.join(SAVE_PATH, "MATH/train") - - for cat_name in os.listdir(cat_dir): - cat_path = os.path.join(cat_dir, cat_name) - if os.path.isdir(cat_path): - cat_texts = [] - for f in os.listdir(cat_path): - f_path = os.path.join(cat_path, f) - - with open(f_path) as fle: - prob_json = json.load(fle) - - text = "{\\bf Problem.} " + prob_json["problem"] + "\n" +\ - "{\\bf Level.} " + prob_json["level"] + "\n" +\ - "{\\bf Type.} " + prob_json["type"] + "\n" +\ - "{\\bf Solution.} " + prob_json["solution"] - - cat_texts.append(text) - - random.shuffle(cat_texts) - instances = [{"text": x.strip(), "meta": {"set_name": "MATH"}} for x in cat_texts] - split = int(VAL_RATE*len(instances)) - train = instances[split:] - val = instances[:split] - - with open(os.path.join(SAVE_PATH, "train.jsonl"), "a+") as f: - f.write(ndjson.dumps(train)) - f.write("\n") - with open(os.path.join(SAVE_PATH, "val.jsonl"), "a+") as f: - f.write(ndjson.dumps(val)) - f.write("\n") - - os.system("gzip " + os.path.join(SAVE_PATH, "train.jsonl")) - os.system("gzip " + os.path.join(SAVE_PATH, "val.jsonl")) - os.system("rm -r " + os.path.join(SAVE_PATH, "MATH")) - os.remove(archive_path) - -if __name__=="__main__": - main() diff --git a/fetch_stack_exchange.py b/fetch_stack_exchange.py deleted file mode 100644 index 89bfc04018fed0a168ffa83deb45280c92cf797d..0000000000000000000000000000000000000000 --- a/fetch_stack_exchange.py +++ /dev/null @@ -1,270 +0,0 @@ -from dataclasses import dataclass, field, fields -from functools import lru_cache -from xml.etree import ElementTree -from datetime import datetime -from enum import Enum -import typing -from typing import List, Optional, Union -import os.path -from itertools import groupby -import dataclasses -from tqdm import tqdm -from bs4 import BeautifulSoup -import sys -from pathlib import Path -import tarfile -import random -import ndjson -import json - -from utils import make_archive - -""" -Author: E.W.Ayers - -This code takes a dump of math overflow XML and produces -a structured set of questions with answers. - -1. Get mathoverflow.net.7z file -2. Extract this to `DATA_DIR = 'data/mathoverflow.net'` -3. Run `questions()` and run it to get a dictionary of mathoverflow questions. - Each question has an `Answers` field that contains a list of answers for the given q. 
-""" - - -def batch_loader(seq, size): - """ - Iterator that takes in a list `seq` and returns - chunks of size `size` - """ - return [seq[pos : pos + size] for pos in range(0, len(seq), size)] - - -DOC_SEP = "<|endoftext|>" - -# source: https://meta.stackexchange.com/questions/2677/database-schema-documentation-for-the-public-data-dump-and-sede -class PostType(Enum): - Question = 1 - Answer = 2 - OrphanedTagWiki = 3 - TagWikiExcerpt = 4 - TagWiki = 5 - ModeratorNomination = 6 - WikiPlaceholder = 7 - PrivilegeWiki = 8 - - -def is_optional(field): - return typing.get_origin(field) is Union and type(None) in typing.get_args(field) - - -def fromXML(cls, element): - out = {} - for field in fields(cls): - field_key = field.name - field_type = field.type - f = field.metadata.get("from_xml") - if f == "skip": - continue - attr_key = f["key"] if (f is not None and f["key"] is not None) else field_key - v = element.attrib.get(attr_key) - if v is None: - if field.default is not dataclasses.MISSING: - out[field_key] = field.default - elif field.default_factory is not dataclasses.MISSING: - out[field_key] = field.default_factory() # type: ignore - elif is_optional(field_type): - out[field_key] = None - else: - raise Exception(f"Missing field {attr_key}") - continue - if is_optional(field_type): - field_type = typing.get_args(field_type)[0] - if f is not None and f["fn"] is not None: - out[field_key] = f["fn"](v) - elif field_type is int: - out[field_key] = int(v) - elif field_type is str: - out[field_key] = str(v) - elif field_type is datetime: - out[field_key] = datetime.fromisoformat(v) - else: - raise Exception(f"Don't know how to decode {field_type}") - return cls(**out) - - -def use(fn, key=None): - return field(metadata={"from_xml": {"fn": fn, "key": key}}) - - -def skip(default): - return field(default=default, metadata={"from_xml": "skip"}) - - -def iter_rows(path): - for [_, element] in ElementTree.iterparse(path, events=["start"]): - if element.tag == "row": - yield element - - -DATA_DIR = "nothing" - - -@dataclass -class Comment: - Id: int - PostId: int - Score: int - Text: str - CreationDate: datetime - UserId: Optional[int] - - -# lru_cache() -def comments(): - path = os.path.join(DATA_DIR, "Comments.xml") - out = {} - for element in iter_rows(path): - x: Comment = fromXML(Comment, element) - out[x.Id] = x - print(f"Processed {len(out)} comments.") - return out - - -@dataclass -class Post: - Id: int - CreationDate: datetime - DeletionDate: Optional[datetime] - Score: int - Body: str # in html; need to parse out? 
- Title: Optional[str] - OwnerUserId: Optional[int] - ViewCount: Optional[int] - AcceptedAnswerId: Optional[int] - ParentId: Optional[int] - PostType: "PostType" = use(lambda x: PostType(int(x)), "PostTypeId") - Comments: List[Comment] = skip(None) - Answers: Optional[List["Post"]] = skip(None) - Tags: str = field(default="") - - -# @lru_cache() -def questions(): - path = os.path.join(DATA_DIR, "Posts.xml") - cs = {} - for k, c in groupby(comments().values(), lambda c: c.PostId): - x = list(c) - x.sort(key=lambda x: -x.Score) - cs[k] = x - qs = {} - answers = {} - for element in iter_rows(path): - post = fromXML(Post, element) - post.Comments = cs.get(post.Id, []) - if post.PostType is PostType.Question: - post.Answers = [] - qs[post.Id] = post - elif post.PostType is PostType.Answer: - answers[post.Id] = post - for qk, aa in groupby(answers.values(), lambda a: a.ParentId): - x = list(aa) - x.sort(key=lambda x: -x.Score) - qs[qk].Answers = x - print(f"Processed {len(qs)} questions with {len(answers)} answers.") - return qs - - -def strip_html(string): - soup = BeautifulSoup(string, "html.parser") - return soup.get_text() - - -def text_of_post(post): - text = "" - if post.Title: - text += "TITLE: " + post.Title - - text += f"\nQUESTION [{post.Score} upvotes]: {strip_html(post.Body).strip()}" - - commented = False - - answered = False - for answer in post.Answers: - if answer.Score >= 2: - answered = True - text += ( - f"\n\nREPLY [{answer.Score} votes]: {strip_html(answer.Body).strip()}" - ) - - return text, post.Score, post.Id, answered - - -def get_and_format(url, save_dir): - VAL_RATE = 0.05 - Path(save_dir).mkdir(exist_ok=True, parents=True) - archive_path = os.path.join(save_dir, "archive.7z") - os.system(f"wget -O {archive_path} {url}") - - global DATA_DIR - DATA_DIR = os.path.join(save_dir, "xml") - print(f"DATA DIR {DATA_DIR}") - os.system(f"7z e {archive_path} -o{DATA_DIR}") - - print("parsing xml...") - qs = questions() - - print("converting xml to text...") - qs_texts = [text_of_post(qs[key]) for key in tqdm(qs.keys())] - - for post, score, eyed, answered in tqdm(qs_texts): - if score >= 5 and answered: - if random.random() > VAL_RATE: - shard_path = os.path.join(save_dir, "train.jsonl") - else: - shard_path = os.path.join(save_dir, "val.jsonl") - - with open(shard_path, "a+") as f: - instance = { - "text": post, - "meta": { - "set_name": "stack_exchange", - "score": score, - "question_id": eyed, - }, - } - f.write(json.dumps(instance)) - f.write("\n") - - os.system("gzip " + " ".join(os.path.join(save_dir, x) - for x in ("train.jsonl", "val.jsonl"))) - - os.system(f"rm -r {DATA_DIR}") - os.remove(archive_path) - - -if __name__ == "__main__": - get_and_format( - "https://archive.org/download/stackexchange/mathoverflow.net.7z", - save_dir="stack-exchange/math_overflow", - ) - get_and_format( - "https://archive.org/download/stackexchange/math.stackexchange.com.7z", - "stack-exchange/math_stack_exchange", - ) - get_and_format( - "https://archive.org/download/stackexchange/physics.stackexchange.com.7z", - "stack-exchange/physics_stack_exchange", - ) - get_and_format( - "https://archive.org/download/stackexchange/cstheory.stackexchange.com.7z", - "stack-exchange/cstheory_stack_exchange", - ) - get_and_format( - "https://archive.org/download/stackexchange/datascience.stackexchange.com.7z", - "stack-exchange/datascience_stack_exchange", - ) - get_and_format( - "https://archive.org/download/stackexchange/proofassistants.stackexchange.com.7z", - 
"stack-exchange/proofassistants_stack_exchange", - ) diff --git a/fetch_wiki.py b/fetch_wiki.py deleted file mode 100644 index 4cb90718f087717c2f431010af800875761fd024..0000000000000000000000000000000000000000 --- a/fetch_wiki.py +++ /dev/null @@ -1,116 +0,0 @@ -from bs4 import BeautifulSoup as bs -import os -import wikipediaapi -import sys -import re -import pypandoc -import json -from pathlib import Path - -from fetch_books_and_formal import _download_with_progress_bar -from utils import make_archive - -import random - -random.seed(20) - - -def page_titles_of_category(cat_page): - """ - recursively - """ - titles = [] - for member in cat_page.categorymembers.values(): - if member.ns == wikipediaapi.Namespace.MAIN: - titles.append(member.title) - elif member.ns == wikipediaapi.Namespace.CATEGORY: - titles += page_titles_of_category(member) - return titles - -def wikipedia(): - """ - this doesnt work dont run it - """ - init_categories = [ - #"Category:Mathematical_theorems", - "Category:Mathematical_proofs", - #"Category:Mathematical_examples", - #"Category:Mathematical_problems", - #"Category:Mathematical_terminology", - ] - - title_set = set() - for cat_name in init_categories: - print(cat_name + "...") - title_set = title_set.union(page_titles_of_category(wiki.page(cat_name))) - - -PROOFWIKI_URL = ( - "https://zenodo.org/record/4902289/files/naturalproofs_proofwiki.json?download=1" -) -def proofwiki(testing=False): - VAL_RATE = 0.025 - save_dir = "wiki/proofwiki" - val_dir = "wiki/proofwiki_val" - Path(save_dir).mkdir(parents=True, exist_ok=True) - Path(val_dir).mkdir(parents=True, exist_ok=True) - - if testing: - with open("naturalproofs/proofwiki.json") as f: - struct = json.load(f) - else: - print("DOWNLOADING PROOFWIKI") - resp = _download_with_progress_bar(PROOFWIKI_URL) - struct = json.loads(resp.decode("utf-8")) - print("DONE DOWNLOADING PROOFWIKI") - - for i, thm in enumerate(struct["dataset"]["theorems"]): - if thm["contents"]: - thm_string = "\\section{" + thm["label"] + "}\n" - thm_string += ( - "Tags: " + ", ".join(thm["categories"]).replace("/", ": ") + "\n\n" - ) - - thm_string += ( - "\\begin{theorem}\n" - + "\n".join(thm["contents"]) - + "\n\\end{theorem}\n\n" - ) - - for proof in thm["proofs"]: - thm_string += ( - "\\begin{proof}\n" - + "\n".join(proof["contents"]) - + "\n\\end{proof}\n\n" - ) - - if random.random()>VAL_RATE: - with open(os.path.join(save_dir, f"""thm_{thm["id"]}.txt"""), "w") as f: - f.write(thm_string) - else: - with open(os.path.join(val_dir, f"""thm_{thm["id"]}.txt"""), "w") as f: - f.write(thm_string) - - defn_strings = [] - for defn in struct["dataset"]["definitions"]: - if defn["contents"]: - defn_string = ( - "\\begin{definition}[" - + defn["label"] - + "]\n" - + "\n".join(defn["contents"]) - + "\n\\end{definition}").strip() - - if random.random()>VAL_RATE: - with open(os.path.join(save_dir, f"""def_{defn["id"]}.txt"""), "w") as f: - f.write(defn_string) - else: - with open(os.path.join(val_dir, f"""def_{defn["id"]}.txt"""), "w") as f: - f.write(defn_string) - - -if __name__=="__main__": - #wikipedia() - proofwiki() - make_archive("wiki/proofwiki") - make_archive("wiki/proofwiki_val") diff --git a/gen_split.py b/gen_split.py deleted file mode 100644 index 6050cc1de82e4d8652debd2678b82f59e95002a1..0000000000000000000000000000000000000000 --- a/gen_split.py +++ /dev/null @@ -1,79 +0,0 @@ -import os -import random -import json - -random.seed(20) - -def _get_filepaths(path): - filepaths = [] - for f in os.listdir(path): - f_path = 
diff --git a/gen_split.py b/gen_split.py deleted file mode 100644 index 6050cc1de82e4d8652debd2678b82f59e95002a1..0000000000000000000000000000000000000000 --- a/gen_split.py +++ /dev/null @@ -1,79 +0,0 @@ -import os -import random -import json - -random.seed(20) - -def _get_filepaths(path): - filepaths = [] - for f in os.listdir(path): - f_path = os.path.join(path, f) - if os.path.isfile(f_path): - filepaths.append(os.path.normpath(f_path)) - elif os.path.isdir(f_path): - filepaths += _get_filepaths(f_path) - - return filepaths - -def get_split(path, train_split: float, must_be_in_train): - filepaths = _get_filepaths(path) - random.shuffle(filepaths) - - boundary = int(train_split * len(filepaths)) - - train_paths = filepaths[:boundary] - valid_paths = filepaths[boundary:] - - print("TRAIN SPLIT (number in train, number in val): ", len(train_paths), len(valid_paths)) - - for path in must_be_in_train: - normed_path = os.path.normpath(path) - assert normed_path in train_paths or normed_path in valid_paths, f"{normed_path} not in paths" - if normed_path in valid_paths: - print(f"MOVING {path} to train set") - valid_paths.remove(normed_path) - train_paths.append(normed_path) - - return train_paths, valid_paths - -def arxiv_split(): - train_paths = [] - val_paths = [] - for f in os.listdir("arxiv"): - if f[-3:] == ".gz": - f_path = os.path.join("./arxiv", f) - # validation set is june of years divisible by 4 - if int(f[1])%4==0 and int(f[3])==6: - val_paths.append(f_path) - else: - train_paths.append(f_path) - - return train_paths, val_paths - - -def main(): - train_rate = 0.95 - splits = {} - - args = [ - ("books", ["books/stein/stein.tex", "books/trench/TRENCH_REAL_ANALYSIS.tex"]), - ("formal", ["formal/setmm/set.mm"]), - ] - - for subdir, must_be_in_train in args: - print(subdir, must_be_in_train) - train, valid = get_split(subdir, train_rate, must_be_in_train) - splits[subdir + "-train"] = train - splits[subdir + "-valid"] = valid - - train, valid = arxiv_split() - splits["arxiv-train"] = train - splits["arxiv-valid"] = valid - print("arxiv", len(train), len(valid)) - - with open("splits.json", "w") as f: - f.write(json.dumps(splits, indent=4)) - - -if __name__=="__main__": - main() diff --git a/math-dataset/train.jsonl.gz b/math-dataset/train.jsonl.gz deleted file mode 100644 index 044b99974099cc9fe53e220b63d3871481f45986..0000000000000000000000000000000000000000 --- a/math-dataset/train.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:841c8fb5476dcb40d985b8f3235df97b50695891177732dfc7c68b4146fc8809 -size 1715133 diff --git a/math-dataset/val.jsonl.gz b/math-dataset/val.jsonl.gz deleted file mode 100644 index 67074d719e280e060904535bc0aa091f873284ec..0000000000000000000000000000000000000000 --- a/math-dataset/val.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:34a33ae62ec3915c8d85710bda89bcb7ed0097cefba09fa976373f49712ea3da -size 96158 diff --git a/proof-pile.py b/proof-pile.py index fbbcd843af302b0fa65f7849f6f6a5ef7ff7d67c..da3f369cd730dc3393f122f9599653077c220009 100644 --- a/proof-pile.py +++ b/proof-pile.py @@ -77,12 +77,7 @@ class ProofPile(datasets.GeneratorBasedBuilder): # data = datasets.load_dataset('my_dataset', 'first_domain') # data = datasets.load_dataset('my_dataset', 'second_domain') BUILDER_CONFIGS = [ - datasets.BuilderConfig(name="arxiv", version=VERSION, description="All of English arxiv.math up to 03/22"), - datasets.BuilderConfig(name="books", version=VERSION, description="Open source math textbooks"), - datasets.BuilderConfig(name="formal", version=VERSION, description="Formal math libraries"), - datasets.BuilderConfig(name="stack-exchange", version=VERSION, description="math overflow and math stack exchange"), - datasets.BuilderConfig(name="wiki", version=VERSION, description="wikipedia articles and proofwiki."), - 
datasets.BuilderConfig(name="math-dataset", version=VERSION, description="the MATH dataset."), + datasets.BuilderConfig(name="default", version=VERSION, description=""), ] @@ -119,30 +114,9 @@ class ProofPile(datasets.GeneratorBasedBuilder): # It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files. # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive - self.archived_configs = ["arxiv", "wiki"] - self.jsonl_configs = ["stack-exchange", "math-dataset", "books", "formal"] - - if self.config.name in self.archived_configs: - if self.config.name=="arxiv": - with open(dl_manager.download("splits.json")) as f: - split = json.load(f) - - train_paths = split["arxiv-train"] - val_paths = split["arxiv-valid"] - if self.config.name=="stack-exchange": - train_paths = [os.path.join("./stack-exchange", x) for x in ["math_overflow.tar.gz", - "math_stack_exchange.tar.gz"]] - val_paths = [os.path.join("./stack-exchange", x) for x in ["math_overflow_val.tar.gz", - "math_stack_exchange_val.tar.gz"]] - if self.config.name=="math-dataset": - train_paths = ["math-dataset/train.tar.gz"] - val_paths = ["math-dataset/val.tar.gz"] - if self.config.name=="wiki": - train_paths = ["wiki/proofwiki.tar.gz", "wiki/wikipedia.tar.gz"] - val_paths = ["wiki/proofwiki_val.tar.gz"] - - train_files = itertools.chain.from_iterable(dl_manager.iter_archive(dl_manager.download(x)) for x in train_paths) - val_files = itertools.chain.from_iterable(dl_manager.iter_archive(dl_manager.download(x)) for x in val_paths) + train_files = [dl_manager.download_and_extract(f"train/proofpile_train_{i}.jsonl.gz") for i in range(8)] + val_files = [dl_manager.download_and_extract("dev/proofpile_dev.jsonl.gz")] + test_files = [dl_manager.download_and_extract("test/proofpile_test.jsonl.gz")] return [ datasets.SplitGenerator( @@ -159,103 +133,22 @@ class ProofPile(datasets.GeneratorBasedBuilder): "data_files": val_files, }, ), - ] - - elif self.config.name in self.jsonl_configs: - if self.config.name=="stack-exchange": - exchanges = ["math_overflow", "math_stack_exchange", "cstheory_stack_exchange", - "physics_stack_exchange", "proofassistants_stack_exchange"] - - train_paths = [os.path.join("./stack-exchange", x, "train.jsonl.gz") for x in exchanges] - val_paths = [os.path.join("./stack-exchange", x, "val.jsonl.gz") for x in exchanges] - elif self.config.name=="math-dataset": - train_paths = ["./math-dataset/train.jsonl.gz"] - val_paths = ["./math-dataset/val.jsonl.gz"] - elif self.config.name=="books": - books = ["cam", "cring", "hott", "napkin", "stacks", "stein", "trench"] - train_paths = [os.path.join("./books", x + "_train.jsonl.gz") for x in books] - val_paths = [os.path.join("./books", x+"_val.jsonl.gz") for x in books] - elif self.config.name=="formal": - libs = ["afp", "coq", "hol", "lean", "mizar", "setmm"] - train_paths = [os.path.join("./formal", x + "_train.jsonl.gz") for x in libs] - val_paths = [os.path.join("./formal", x + "_val.jsonl.gz") for x in libs] - - train_files = itertools.chain.from_iterable([dl_manager.download_and_extract(x)] for x in train_paths) - val_files = itertools.chain.from_iterable([dl_manager.download_and_extract(x)] for x in val_paths) - - return [ datasets.SplitGenerator( - name=datasets.Split.TRAIN, - # These kwargs will be passed to _generate_examples + name=datasets.Split.TEST, gen_kwargs={ - "data_files": train_files, - }, - ), - datasets.SplitGenerator( - 
name=datasets.Split.VALIDATION, - # These kwargs will be passed to _generate_examples - gen_kwargs={ - "data_files": val_files, - }, - ), - ] - else: - with open(dl_manager.download("splits.json")) as f: - splits = json.load(f) - - return [ - datasets.SplitGenerator( - name=datasets.Split.TRAIN, - # These kwargs will be passed to _generate_examples - gen_kwargs={ - "data_files": [dl_manager.download(x) for x in splits[self.config.name + "-train"]], - }, - ), - datasets.SplitGenerator( - name=datasets.Split.VALIDATION, - # These kwargs will be passed to _generate_examples - gen_kwargs={ - "data_files": [dl_manager.download(x) for x in splits[self.config.name + "-valid"]], - }, - ), + "data_files": test_files, + }, + ), ] - # method parameters are unpacked from `gen_kwargs` as given in `_split_generators` def _generate_examples(self, data_files): # TODO: This method handles input defined in _split_generators to yield (key, example) tuples from the dataset. # The `key` is for legacy reasons (tfds) and is not important in itself, but must be unique for each example. key = 0 - if self.config.name in self.archived_configs: - for name, obj in data_files: - text = obj.read().decode() - # Yields examples as (key, example) tuples - yield key, { - "text": text, - "meta": json.dumps({ - "config": self.config.name, - "file": name, - }) - } - key += 1 - elif self.config.name in self.jsonl_configs: - key = 0 - for name in data_files: - with open(name) as f: - instances = ndjson.load(f) - for instance in instances: - yield key, {"text": instance["text"], - "meta": json.dumps(instance["meta"])} - key += 1 - else: - for name in data_files: - with open(name, encoding="utf-8") as f: - text = f.read() - # Yields examples as (key, example) tuples - yield key, { - "text": text, - "meta": json.dumps({ - "config": self.config.name, - "file": name, - }) - } - key += 1 + for name in data_files: + with open(name) as f: + instances = ndjson.load(f) + for instance in instances: + yield key, {"text": instance["text"], + "meta": json.dumps(instance["meta"])} + key += 1 diff --git a/run1_postprocessing/arxiv_analyze.ipynb b/run1_postprocessing/arxiv_analyze.ipynb deleted file mode 100644 index 42350bd64613e6bfcaeb39289e5834d9465bcefe..0000000000000000000000000000000000000000 --- a/run1_postprocessing/arxiv_analyze.ipynb +++ /dev/null @@ -1,352 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": 1, - "id": "144be7eb", - "metadata": {}, - "outputs": [], - "source": [ - "import os\n", - "import sys\n", - "os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"1\"\n", - "\n", - "import gzip\n", - "import ndjson\n", - "from tqdm import tqdm\n", - "\n", - "import matplotlib.pyplot as plt\n", - "\n", - "import torch\n", - "\n", - "from transformers import AutoTokenizer, AutoModelForCausalLM" - ] - }, - { - "cell_type": "markdown", - "id": "d5a4a22d", - "metadata": {}, - "source": [ - "# Analyzing Arxiv Data\n", - "The purpose of this notebook is to make sure I am filtering arXiv.math properly for inclusion in the proof-pile. \n", - "\n", - "We are working on commit `d2ed9e500d8df9db25a6e5c86139a196d700a22e` \n", - "\n", - "In the main data cleaning script `fetch_arxiv.py`, we apply the following preprocessing steps to source files\n", - "- Delete everything outside of `\\begin{document}` and `\\end{document}` \n", - "- Delete everything including and after `\\Refs` or `\\begin{thebibliography}`, `\\begin{references}`. 
\n", - "- Delete comments\n", - "\n", - "After preprocessing, we include a file in the dataset only if it meets the following criteria.\n", - "- `.tex` extension. \n", - "- At least 280 characters. \n", - "- Using `utf-8`, `utf-16`, `utf-32`, or `latin-1` text encoding. \n", - "- Articles is in English, as determined by the [langdetect library](https://pypi.org/project/langdetect/). \n", - "\n", - "The above is the data that is on the current huggingface page. \n", - "\n", - "In the file `run1_postprocessing/make_files.py`, we apply the following additional preprocessing steps to produce the final data that proof-GPTv0.1 is trained on.\n", - "- Delete files that do not contain a part, chapter, section, sub...section, paragraph, or subparagraph heading. \n", - "- Delete files that contain the keyword \"gnuplot\". The Gnuplot-latex utility generates large blocks of basically unintelligible source. \n", - "- Delete everything including and enclosed by `\\begin{bibdiv}` and `\\end{bibdiv}`. \n", - "- `>3` consectuive newlines are replaced with 3 newlines. " - ] - }, - { - "cell_type": "markdown", - "id": "bdedf788", - "metadata": {}, - "source": [ - "### Strategy\n", - "These preprocessing heuristics are informed by $\\LaTeX{}$ expertise and manual inspection of data. However, I am not aware of every single subtelty of $\\LaTeX$, and I can only inspect so many training examples. Therefore, I am still not confident these heuristics yield a clean dataset.\n", - "\n", - "In this notebook I try to detect noise in the dataset by identifying documents that achieve a large loss when processed by an off-the-shelf pre-trained language model, specifically `EleutherAI/gpt-neo-125M`. " - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "id": "107df731", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "loading data batch...\n" - ] - } - ], - "source": [ - "# load subset of data\n", - "print(\"loading data batch...\")\n", - "fle_name = \"/data/corpora/proof-pile/train/proofpile_train_0.jsonl\"\n", - "with open(fle_name) as f: \n", - " data = ndjson.load(f)" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "id": "65488a28", - "metadata": {}, - "outputs": [], - "source": [ - "torch.cuda.empty_cache()\n", - "model = AutoModelForCausalLM.from_pretrained(\n", - " \"EleutherAI/gpt-neo-125M\").cuda()\n", - "\n", - "tokenizer = AutoTokenizer.from_pretrained(\n", - " \"EleutherAI/gpt-neo-125M\")\n", - "\n", - "tokenizer.pad_token = tokenizer.eos_token\n", - "\n", - "context = 2048" - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "id": "2dfa6f09", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "full batch length: 100000\n", - "working subset length: 717\n" - ] - } - ], - "source": [ - "n = 1000\n", - "\n", - "# only look at arxiv. The rest of the data is very high quality and is definitely clean.\n", - "tiny = [x for x in data[:n] if \"config\" in x[\"meta\"] and x[\"meta\"][\"config\"]==\"arxiv\"]\n", - "\n", - "print(f\"full batch length: {len(data)}\")\n", - "print(f\"working subset length: {len(tiny)}\")" - ] - }, - { - "cell_type": "markdown", - "id": "f8b0bc68", - "metadata": {}, - "source": [ - "As a sanity check, we append a random string of alphanumeric characters to our data. This should achieve a very high loss." 
- ] - }, - { - "cell_type": "code", - "execution_count": 5, - "id": "8166e799", - "metadata": {}, - "outputs": [], - "source": [ - "import random\n", - "import string\n", - "tiny.append({\"text\": ''.join(random.choices(string.ascii_uppercase + string.digits, k=8000))})" - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "id": "2340d220", - "metadata": { - "scrolled": false - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "We're going to get an indexing warning, ignore it.\n" - ] - }, - { - "name": "stderr", - "output_type": "stream", - "text": [ - " 0%| | 0/718 [00:00<?, ?it/s]Token indices sequence length is longer than the specified maximum sequence length for this model (… > 2048). Running this sequence through the model will result in indexing errors\n", - "100%|███████████████████████████████████████████████████| 718/718 [08:48<00:00, 1.36it/s]\n" - ] - } - ], - "source": [ - "loss_fn = torch.nn.CrossEntropyLoss(reduction='none')\n", - "\n", - "batch_size = 15\n", - "\n", - "print(\"We're going to get an indexing warning, ignore it.\")\n", - "for i in tqdm(range(len(tiny))): \n", - " example = tiny[i]\n", - " \n", - " tokens = tokenizer([example[\"text\"]], \n", - " return_tensors=\"pt\", \n", - " padding=True, \n", - " pad_to_multiple_of=context)\n", - " \n", - " tokens = {key: tokens[key].reshape((-1, context)).cuda() for key in tokens} \n", - " \n", - " labels = tokens[\"input_ids\"].clone()\n", - " \n", - " unreduced_loss = 0\n", - " num_tokens = 0 \n", - " for j in range(0, tokens[\"input_ids\"].shape[0], batch_size):\n", - " this_ids = tokens[\"input_ids\"][j:j+batch_size, :]\n", - " this_mask = tokens[\"attention_mask\"][j:j+batch_size, :]\n", - " this_labels = labels[j:j+batch_size, :]\n", - " \n", - " with torch.no_grad():\n", - " out = model(input_ids=this_ids, attention_mask=this_mask)\n", - " \n", - " preds = out.logits[:, :-1, :]\n", - " \n", - " \n", - " preds = preds.flatten(end_dim=1)\n", - " flat_labels = this_labels[:, 1:].flatten()\n", - " flat_mask = this_mask[:, 1:].flatten()\n", - " \n", - " unreduced_loss += torch.sum(loss_fn(preds, flat_labels)*flat_mask).item()\n", - " num_tokens += torch.sum(flat_mask).item()\n", - " \n", - " loss = unreduced_loss/num_tokens \n", - " \n", - " tiny[i][\"loss\"] = loss" - ] - },
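- { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The loop above computes a length-normalized, mask-weighted document loss (formula added for clarity; the notation is ours, not the notebook's). For a document tokenized as $x_1, \\dots, x_T$ with attention mask $m_t \\in \\{0, 1\\}$,\n", - "$$\\mathcal{L} = \\frac{\\sum_{t=2}^{T} m_t \\, \\bigl(-\\log p_\\theta(x_t \\mid x_{<t})\\bigr)}{\\sum_{t=2}^{T} m_t},$$\n", - "which is exactly `unreduced_loss/num_tokens`: the one-position shift between logits and labels is handled by dropping the last logit and the first label, and padding is zeroed out by the mask. " - ] - },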
"iVBORw0KGgoAAAANSUhEUgAAAigAAAGzCAYAAAAFROyYAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/NK7nSAAAACXBIWXMAAA9hAAAPYQGoP6dpAAA4QklEQVR4nO3deVhU5/3//xegoCIzFBWRCoiaqKhogglOXOOGiFQrqWsVU6OtQRulMUpr3LJgzGZiXZK2iSaVaPRbTTR1wQ2TilFprUuiUeuW6OBWGCQVFc7vj3yYX0ZwQdE5wvNxXee6mPvc5573mTkwL842HoZhGAIAADART3cXAAAAcC0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCuAmCxculIeHh44dO+buUm65lmnTpsnDw+PeFOVGJ0+eVLVq1fSPf/zD3aXgHmvbtq2ee+45d5cBEVDwI8UfUsVTtWrVFBwcrJiYGL399tvKy8tzd4mmNm/ePC1cuNDdZaAczJgxQ9HR0WrXrl2JeZ9//rn69++vn/70p/L29pbValV0dLRmzJih7Oxsl76dO3d2+Z0KCAjQI488ovfee09FRUXasmWLy/wbTddTPP/1118vMa/4d3rXrl13/qKU0fr16zVixAi1aNFCXl5eatCgQan9Dhw4oOeee06tW7eWn5+f6tWrp7i4uFJrLg7I107VqlVz6Xfs2DHnvBdffLHU5x0yZIg8PDxUs2ZNl/aJEydq7ty5stvtt7fiKDdV3F0AzGfGjBkKDw/XlStXZLfbtWXLFo0bN05vvPGGPv30U0VGRrq7RFOaN2+eateureHDh7u7FNyBs2fPatGiRVq0aFGJeVOmTNELL7yghg0bavjw4WrYsKEuXbqkrKwsvf7661q0aJGOHDniskz9+vWVmprqHPuDDz7QiBEj9M0332j8+PH68MMPXfqnpKSoZs2a+sMf/lCmul999VWNHj1aNWrUKOMa3x1paWlaunSpHn74YQUHB1+335///Gf95S9/UUJCgp5++mnl5ubqnXfeUdu2bbV27Vp169atxDLz5893CRZeXl6ljl2tWjV99NFHmjx5skt7fn6+PvnkkxLBRpL69Okji8WiefPmacaMGbe6urgbDOD/vP/++4YkY+fOnSXmbdy40ahevboRFhZmfP/9926ozvyaN29udOrU6Zb7F7/eR48evWs1lXctU6dONSr6n4033njDqF69upGXl+fSvmTJEkOS0b9/f6OgoKDEcjk5OcbUqVNd2jp16mQ0b97cpS0/P9+oX7++4evra1y+fLnEOGXdjiQZrVu3NiQZr7/+usu8G/1O323fffedc/3i4uKMsLCwUvvt2rWrxGt97tw5o06dOka7du1c2ou3v7Nnz97wuY8ePWpIMvr162dIMnbv3u0yf/HixUbVqlWN+Ph4w9fXt8TyY8aMMcLCwoyioqKbrSbuIg7x4JZ06dJFzz//vI4fP66//vWvLvM2bdqkDh06yNfXV/7+/urTp4++/vrrEmN89913GjFihIKDg+Xj46Pw8HCNHj1aly9flnT98xtKOz+iQYMG6t27t7Zs2aI2bdqoevXqatmypbZs2SJJ+tvf/qaWLVuqWrVqioqK0r/+9a8S4x44cEBPPPGEAgICVK1aNbVp00affvppqc/9j3/8Q8nJyapTp458fX3185//XGfPnnWpZ//+/crIyHDuWu7cufOtvrwu1qxZ43w9/fz8FBcXp/379zvnv/baa/Lw8NDx48dLLJuSkiJvb2/997//dbZ9+eWX6tmzp6xWq2rUqKFOnTqV67kVV69e1QsvvKBGjRrJx8dHDRo00O9//3sVFBS49Nu1a5diYmJUu3ZtVa9eXeHh4frVr37l0mfJkiWKioqSn5+fLBaLWrZsqbfeesulT05OjsaNG6eQkBD5+PiocePGeuWVV1RUVFTmsUqzcuVKRUdHl9j1P2XKFNWuXVt/+ctf5O3tXWI5q9WqadOm3XT8GjVqqG3btsrPz3fZhu5Eu3bt1KVLF82aNUv/+9//btr/VrZ9SfrPf/6jX/ziFwoICHDW/dlnn91STcHBwapatepN+0VFRZV4rWvVqqUOHTqU+ndEkgzDkMPhkGEYNxzbZrMpPDxcaWlpLu2LFy9Wz549FRAQUOpy3bt31/Hjx7V79+6b1o+7h4CCWzZ06FBJPxxbLrZhwwbFxMTozJkzmjZtmpKTk7Vt2za1a9fOJVCcOnVKjz76qJYsWaIBAwbo7bff1tChQ5WRkaHvv//+tuo5fPiwBg8erPj4eKWmpuq///2v4uPjtXjxYo0fP16//OUvNX36dB05ckT9+/d3+QDbv3+/2rZtq6+//lqTJk3S66+/Ll9fX/Xt21crVqwo8Vxjx47Vv//9b02dOlWjR4/WqlWrNGbMGOf82bNnq379+mratKk+/PBDffjhh2XeRS9JH374oeLi4lSzZk298sorev755/XVV1+pffv2ztezf//+8vDw0Mcff1xi+Y8//lg9evTQT37yE0k/hMeOHTvK4XBo6tSpevnll5WTk6MuXbpox44dZa6vNE899ZSmTJmihx9+WG+++aY6deqk1NRUDRw40NnnzJkz6tGjh44dO6ZJkyZpzpw5GjJkiLZv3+7sk56erkGDBuknP/mJXnnlFc2cOVOdO3d2CVPff/+9OnXqpL/+9a8aNmyY3n77bbVr104pKSlKTk4u01iluXLlinbu3KmHH37Ypf2bb77RN998o759+5b4ML0d//nPf+Tl5SV/f/87HqvYtGnTlJ2drfnz59+w361u+9nZ2Xrssce0bt06Pf3003rppZd06dIl/exnPyv1d6S82e121a5du9R5DRs2lNVqlZ+fn375y1+WOPfnxwYNGqQlS5Y4w8y5c+e0fv16DR48+LrLREVFSRInSbubm/fgwERuZXew1Wo1HnroIefj1q1bG4GBgcb58+edbf/+978NT09PY9iwYc62YcOGGZ6enqWOXbwb9XqHD0o7/BAWFmZIMrZt2+ZsW7dunSHJqF69unH8+HFn+zvvvGNIMjZv3uxs69q1q9GyZUvj0qVLLnU89thjxgMPPFDiubt16+ayu3f8+PGGl5eXkZOT42y700M8eXl5hr+/vzFy5EiXfna73bBarS7tNpvNiIqKcum3Y8cOQ5LxwQcfONfngQceMGJiYlxq//77743w8HCje/fu163leq59j3bv3m1IMp566imXfs8++6whydi0aZNhGIaxYsWKm25bzzzzjGGxWIyrV69et88LL7xg+Pr6Gt98841L+6RJkwwvLy/jxIkTtzxWaQ4fPmxIMubMmePS/sknnxiSjNmzZ7u0FxUVGWfPnnWZrly54pzfqVMno2nTps55X3/9tfHb3/7WkGTEx8eXWsPtHOJJSkoyDMMwHn/8cSMoKMh5GLa03+lb3fbHjRtnSDI+//xzZ1teXp4RHh5uNGjQwCgsLLzlGm90iKc0W7duNTw8PIznn3/epX327NnGmDFjjM
WLFxvLly83nnnmGaNKlSrGAw88YOTm5jr7FR/iefXVV419+/a5rMfcuXONmjVrGvn5+UZiYmKph3gMwzC8vb2N0aNH33LNKH/sQUGZ1KxZ03k1z+nTp7V7924NHz7cZVdpZGSkunfvrr///e+SpKKiIq1cuVLx8fFq06ZNiTFv97LViIgI2Ww25+Po6GhJPxyOCg0NLdH+n//8R5J04cIFbdq0Sf3791deXp7OnTunc+fO6fz584qJidGhQ4f03XffuTzXqFGjXOrs0KGDCgsLSz3McrvS09OVk5OjQYMGOWs6d+6cvLy8FB0drc2bNzv7DhgwQFlZWS4nZC5dulQ+Pj7q06ePJGn37t06dOiQBg8erPPnzzvHy8/PV9euXbV169YSh0XKqvg9/vHeC0n63e9+J0nOwwHFewpWr16tK1eulDqWv7+/8vPzlZ6eft3nW7ZsmTp06KCf/OQnLq9Rt27dVFhYqK1bt97yWKU5f/68JDn3QBVzOBySVGLvSW5ururUqeMyXXtY4MCBA855zZo105w5cxQXF6f33nuvTLXdimnTpslut2vBggWlzi/Ltv/3v/9djz76qNq3b+9cvmbNmho1apSOHTumr776qtzrl37Y2zZ48GCFh4eXuNz3mWee0Zw5czR48GAlJCRo9uzZWrRokQ4dOqR58+aVOl7z5s0VGRmpjz76SNIPJ+/26dPnpicTF29jcB8CCsrk4sWL8vPzkyTnh3OTJk1K9GvWrJnzw/Ds2bNyOBxq0aJFudby4xAi/XAOgCSFhISU2l58Xsbhw4dlGIaef/75Eh8uU6dOlfTDH8kbPVfxB9iPz/UoTWFhoex2u8tUfM7NtQ4dOiTph4B1bV3r1693qekXv/iFPD09tXTpUkk/HJNftmyZYmNjZbFYXMZLTEwsMd6f//xnFRQUKDc394b138zx48fl6empxo0bu7QHBQXJ39/fuY106tRJCQkJmj59umrXrq0+ffro/fffdzlP5emnn9aDDz6o2NhY1a9fX7/61a+0du3aEq/R2rVrS6xP8ZUexa/RrYx1I8Y15zYUb/MXL150aa9Zs6bS09OVnp6uCRMmlDpWgwYNlJ6erg0bNuiLL76Q3W7X6tWrr3v4ojQXLlxw2Yau97517NhRjz/++HXPRSnLtn/8+PHr/m4Xzy9v+fn56t27t/Ly8vTJJ5/c0uG0wYMHKygoSBs2bLhhn2XLlunw4cPatm3bDQ/vFDMMo1Lc88fMuMwYt+zbb79Vbm5uiQ+j8nK9PwaFhYWltl/v0sLrtRd/6BTvNXj22WcVExNTat9r1/FmY17PyZMnFR4e7tK2efPmUk+gLa7rww8/VFBQUIn5Var8/7+uwcHB6tChgz7++GP9/ve/1/bt23XixAm98sorJcZ79dVX1bp161LrK4/zKaSb7wXz8PDQ8uXLtX37dq1atUrr1q3Tr371K73++uvavn27atasqcDAQO3evVvr1q3TmjVrtGbNGr3//vsaNmyY85LfoqIide/e/bo30nrwwQcl6ZbGKk2tWrUklQyeTZs2lSTt27fPpb1KlSrOcPTtt9+WOqavr2+pl8qWRb9+/ZSRkeF8nJiYeN177kydOlWdO3fWO++8U+Icl9vZ9u+Vy5cvq1+/ftqzZ4/WrVtXpn9oQkJCdOHChevOHzRokFJSUjRy5EjVqlVLPXr0uOmYOTk5ZQqRKH8EFNyy4vs1FP9hCwsLkyQdPHiwRN8DBw6odu3a8vX1VfXq1WWxWEr8cb9W8V6JnJwclz+s5f2fWsOGDSVJVatWveMPjh8r7UM6KCioxGGGVq1albp8o0aNJP3w4XordQ0YMEBPP/20Dh48qKVLl6pGjRqKj48vMZ7FYinX9fyxsLAwFRUV6dChQ87/rKUfTrDMyclxbiPF2rZtq7Zt2+qll15SWlqahgwZoiVLluipp56SJHl7eys+Pl7x8fEqKirS008/rXfeeUfPP/+8GjdurEaNGunixYu3tD43G6s0oaGhql69uo4ePerS3qRJEz3wwANauXKlZs+eLV9f37K+VHfk9ddfdwlNN7qvSKdOndS5c2e98sormjJlisu8smz7YWFh1/3dLp5fXoqKijRs2DBt3LhRH3/8sTp16nTLyxqGoWPHjumhhx66bp/Q0FC1a9dOW7Zs0ejRo13Cfmm+++47Xb582WWbxr3HIR7ckk2bNumFF15QeHi4hgwZIkmqV6+eWrdurUWLFiknJ8fZd9++fVq/fr169eolSfL09FTfvn21atWqUu8OWbwXovgDtfg8AumHXb43+o/3dgQGBjr/wzx9+nSJ+bd76aevr6/L6yD9cKOobt26uUzXnt9QLCYmRhaLRS+//HKp52lcW1dCQoK8vLz00UcfadmyZerdu7fLB2dUVJQaNWqk1157rcShidLGux3F7/Hs2bNd2t944w1JUlxcnKQf9khcu7epeK9O8WGe4vM/inl6ejpvCljcp3///srMzNS6detK1JKTk6OrV6/e8lilqVq1qtq0aXPdu5ieO3dOI0eOLPX9udnetDsRFRXlsg1FRETcsH/xuSjvvvuuS3tZtv1evXppx44dyszMdLbl5+fr3XffVYMGDW5aQ1mMHTtWS5cu1bx589SvX7/r9ittm50/f77Onj2rnj173vA5XnzxRU2dOlVjx469aT1ZWVmSpMcee+ymfXH3sAcFJaxZs0YHDhzQ1atXlZ2drU2bNik9PV1hYWH69NNPXe6++Oqrryo2NlY2m00jRozQ//73P82ZM6fEPSFefvllrV+/Xp06ddKoUaPUrFkznT59WsuWLdMXX3whf39/9ejRQ6GhoRoxYoQmTJggLy8vvffee6pTp45OnDhRrus4d+5ctW/fXi1bttTIkSPVsGFDZWdnKzMzU99++63+/e9/l3nMqKgozZ8/Xy+++KIaN26swMBAdenS5ZaXt1gsmj9/voYOHaqHH35YAwcOdK77Z599pnbt2umPf/yjs39gYKAef/xxvfHGG8rLy9OAAQNcxvP09NSf//xnxcbGqnnz5nryySf105/+VN999502b94si8WiVatWlXk9f6xVq1ZKTEzUu+++q5ycHHXq1Ek7duzQokWL1LdvXz3++OOSpEWLFmnevHn6+c9/rkaNGikvL09/+tOfZLFYnCHnqaee0oULF9SlSxfVr19fx48f15w5c9S6dWvnf7ITJkzQp59+qt69e2v48OGKiopSfn6+9u7dq+XLl+vYsWOqXbv2LY11PX369NEf/vAHORwO5/k80g/nMezbt0+pqanasWOHBg4cqPDwcOXn52vfvn366KOP5Ofnd90Aei916tRJnTp1cjksVOxWt/1Jkybpo48+UmxsrH77298qICBAixYt0tGjR/X//t//k6fnjf+/3bNnj/PeKocPH1Zubq7ztvOtWrVy7u2bPXu25s2bJ5vNpho1apS4z9LPf/5zZ/AOCwvTgAEDnPc4+uKLL7RkyRK1bt1av/71r2/pNbkV6enpCg0NveFeGdwDbrp6CCZUfEli8eTt7W0EBQUZ3bt3N
9566y3D4XCUutyGDRuMdu3aGdWrVzcsFosRHx9vfPXVVyX6HT9+3Bg2bJhRp04dw8fHx2jYsKGRlJTkclfOrKwsIzo62vD29jZCQ0ONN95447qXGcfFxZV4Dv3okstiP77k8MeOHDliDBs2zAgKCjKqVq1q/PSnPzV69+5tLF++vMRrcu3lsZs3by5x6bLdbjfi4uIMPz8/Q9JNLxW93qW9mzdvNmJiYgyr1WpUq1bNaNSokTF8+HBj165dJcb405/+ZEgy/Pz8jP/973+lPs+//vUvo1+/fkatWrUMHx8fIywszOjfv7+xcePGm9ZyrdIuBb9y5Yoxffp0Izw83KhataoREhJipKSkuFzG+s9//tMYNGiQERoaavj4+BiBgYFG7969XdZp+fLlRo8ePYzAwEDn+//rX//aOH36tMvz5eXlGSkpKUbjxo0Nb29vo3bt2sZjjz1mvPbaa847l97qWKXJzs42qlSpYnz44Yelzt+yZYvxxBNPGPXq1TOqVq1qWCwWo02bNsbUqVNLjF/anWRv5k4uM/6x4m20tO33Vrb94n5PPPGE4e/vb1SrVs149NFHjdWrV99SXdf+PfnxlJiY6OyXmJh43X7XbpNPPfWUERERYfj5+RlVq1Y1GjdubEycOLHE36br/c5fq7TLjAsLC4169eoZkydPvqX1xN3jYRh3cb8kANyHir8r5/PPP3d3KbjHVq5cqcGDB+vIkSOqV6+eu8up1AgoAHCNEydO6MEHH9TGjRtL/UZjVFw2m00dOnTQrFmz3F1KpUdAAQAApsNVPAAAwHQIKAAAwHQIKAAAwHQIKAAAwHTuyxu1FRUV6dSpU/Lz8+PLnAAAuE8YhqG8vDwFBwff9GZ/92VAOXXqVIlvrAUAAPeHkydPqn79+jfsc18GlOKvPj958qTLragBAIB5ORwOhYSEOD/Hb+S+DCjFh3UsFgsBBQCA+8ytnJ7BSbIAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0qri7AJSPBpM+c3cJZXZsZpy7SwAAmBR7UAAAgOkQUAAAgOkQUAAAgOkQUAAAgOkQUAAAgOkQUAAAgOkQUAAAgOkQUAAAgOkQUAAAgOkQUAAAgOkQUAAAgOkQUAAAgOkQUAAAgOkQUAAAgOkQUAAAgOkQUAAAgOmUKaDMnz9fkZGRslgsslgsstlsWrNmjXN+586d5eHh4TL95je/cRnjxIkTiouLU40aNRQYGKgJEybo6tWr5bM2AACgQqhSls7169fXzJkz9cADD8gwDC1atEh9+vTRv/71LzVv3lySNHLkSM2YMcO5TI0aNZw/FxYWKi4uTkFBQdq2bZtOnz6tYcOGqWrVqnr55ZfLaZUAAMD9rkwBJT4+3uXxSy+9pPnz52v79u3OgFKjRg0FBQWVuvz69ev11VdfacOGDapbt65at26tF154QRMnTtS0adPk7e19m6sBAAAqkts+B6WwsFBLlixRfn6+bDabs33x4sWqXbu2WrRooZSUFH3//ffOeZmZmWrZsqXq1q3rbIuJiZHD4dD+/fuv+1wFBQVyOBwuEwAAqLjKtAdFkvbu3SubzaZLly6pZs2aWrFihSIiIiRJgwcPVlhYmIKDg7Vnzx5NnDhRBw8e1N/+9jdJkt1udwknkpyP7Xb7dZ8zNTVV06dPL2upAADgPlXmgNKkSRPt3r1bubm5Wr58uRITE5WRkaGIiAiNGjXK2a9ly5aqV6+eunbtqiNHjqhRo0a3XWRKSoqSk5Odjx0Oh0JCQm57PAAAYG5lPsTj7e2txo0bKyoqSqmpqWrVqpXeeuutUvtGR0dLkg4fPixJCgoKUnZ2tkuf4sfXO29Fknx8fJxXDhVPAACg4rrj+6AUFRWpoKCg1Hm7d++WJNWrV0+SZLPZtHfvXp05c8bZJz09XRaLxXmYCAAAoEyHeFJSUhQbG6vQ0FDl5eUpLS1NW7Zs0bp163TkyBGlpaWpV69eqlWrlvbs2aPx48erY8eOioyMlCT16NFDERERGjp0qGbNmiW73a7JkycrKSlJPj4+d2UFAQDA/adMAeXMmTMaNmyYTp8+LavVqsjISK1bt07du3fXyZMntWHDBs2ePVv5+fkKCQlRQkKCJk+e7Fzey8tLq1ev1ujRo2Wz2eTr66vExESX+6YAAAB4GIZhuLuIsnI4HLJarcrNzeV8lP/TYNJn7i6hzI7NjHN3CQCAe6gsn998Fw8AADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADCdMgWU+fPnKzIyUhaLRRaLRTabTWvWrHHOv3TpkpKSklSrVi3VrFlTCQkJys7OdhnjxIkTiouLU40aNRQYGKgJEybo6tWr5bM2AACgQihTQKlfv75mzpyprKws7dq1S126dFGfPn20f/9+SdL48eO1atUqLVu2TBkZGTp16pT69evnXL6wsFBxcXG6fPmytm3bpkWLFmnhwoWaMmVK+a4VAAC4r3kYhmHcyQABAQF69dVX9cQTT6hOnTpKS0vTE088IUk6cOCAmjVrpszMTLVt21Zr1qxR7969derUKdWtW1eStGDBAk2cOFFnz56Vt7d3qc9RUFCggoIC52OHw6GQkBDl5ubKYrHcSfkVRoNJn7m7hDI7NjPO3SUAAO4hh8Mhq9V6S5/ft30OSmFhoZYsWaL8/HzZbDZlZWXpypUr6tatm7NP06ZNFRoaqszMTElSZmamWrZs6QwnkhQTEyOHw+HcC1Oa1NRUWa1W5xQSEnK7ZQMAgPtAmQPK3r17VbNmTfn4+Og3v/mNVqxYoYiICNntdnl7e8vf39+lf926dWW32yVJdrvdJZwUzy+edz0pKSnKzc11TidPnixr2QAA4D5SpawLNGnSRLt371Zubq6WL1+uxMREZWRk3I3anHx8fOTj43NXnwMAAJhHmQOKt7e3GjduLEmKiorSzp079dZbb2nAgAG6fPmycnJyXPaiZGdnKygoSJIUFBSkHTt2uIxXfJVPcR8AAIA7vg9KUVGRCgoKFBUVpapVq2rjxo3OeQcPHtSJEydks9kkSTabTXv37tWZM2ecfdLT02WxWBQREXGnpQAAgAqiTHtQUlJSFBsbq9DQUOXl5SktLU1btmzRunXrZLVaNWLECCUnJysgIEAWi0Vjx46VzWZT27ZtJUk9evRQRESEhg4dqlmzZslut2vy5MlKSkriEA4AAHAqU0A5c+aMhg0bptOnT8tq
tSoyMlLr1q1T9+7dJUlvvvmmPD09lZCQoIKCAsXExGjevHnO5b28vLR69WqNHj1aNptNvr6+SkxM1IwZM8p3rQAAwH3tju+D4g5luY66suA+KAAAs7sn90EBAAC4WwgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdMoUUFJTU/XII4/Iz89PgYGB6tu3rw4ePOjSp3PnzvLw8HCZfvOb37j0OXHihOLi4lSjRg0FBgZqwoQJunr16p2vDQAAqBCqlKVzRkaGkpKS9Mgjj+jq1av6/e9/rx49euirr76Sr6+vs9/IkSM1Y8YM5+MaNWo4fy4sLFRcXJyCgoK0bds2nT59WsOGDVPVqlX18ssvl8MqAQCA+12ZAsratWtdHi9cuFCBgYHKyspSx44dne01atRQUFBQqWOsX79eX331lTZs2KC6deuqdevWeuGFFzRx4kRNmzZN3t7et7EaAACgIrmjc1Byc3MlSQEBAS7tixcvVu3atdWiRQulpKTo+++/d87LzMxUy5YtVbduXWdbTEyMHA6H9u/fX+rzFBQUyOFwuEwAAKDiKtMelB8rKirSuHHj1K5dO7Vo0cLZPnjwYIWFhSk4OFh79uzRxIkTdfDgQf3tb3+TJNntdpdwIsn52G63l/pcqampmj59+u2WCgAA7jO3HVCSkpK0b98+ffHFFy7to0aNcv7csmVL1atXT127dtWRI0fUqFGj23qulJQUJScnOx87HA6FhITcXuEAAMD0busQz5gxY7R69Wpt3rxZ9evXv2Hf6OhoSdLhw4clSUFBQcrOznbpU/z4euet+Pj4yGKxuEwAAKDiKlNAMQxDY8aM0YoVK7Rp0yaFh4ffdJndu3dLkurVqydJstls2rt3r86cOePsk56eLovFooiIiLKUAwAAKqgyHeJJSkpSWlqaPvnkE/n5+TnPGbFarapevbqOHDmitLQ09erVS7Vq1dKePXs0fvx4dezYUZGRkZKkHj16KCIiQkOHDtWsWbNkt9s1efJkJSUlycfHp/zXEAAA3HfKtAdl/vz5ys3NVefOnVWvXj3ntHTpUkmSt7e3NmzYoB49eqhp06b63e9+p4SEBK1atco5hpeXl1avXi0vLy/ZbDb98pe/1LBhw1zumwIAACq3Mu1BMQzjhvNDQkKUkZFx03HCwsL097//vSxPDQAAKhG+iwcAAJgOAQUAAJgOAQUAAJgOAQUAAJgOAQUAAJgOAQUAAJgOAQUAAJgOAQUAAJgOAQUAAJgOAQUAAJgOAQUAAJgOAQUAAJgOAQUAAJgOAQUAAJgOAQUAAJgOAQUAAJhOFXcXgMqrwaTP3F1CmR2bGefuEgCgUmAPCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMJ0yBZTU1FQ98sgj8vPzU2BgoPr27auDBw+69Ll06ZKSkpJUq1Yt1axZUwkJCcrOznbpc+LECcXFxalGjRoKDAzUhAkTdPXq1TtfGwAAUCGUKaBkZGQoKSlJ27dvV3p6uq5cuaIePXooPz/f2Wf8+PFatWqVli1bpoyMDJ06dUr9+vVzzi8sLFRcXJwuX76sbdu2adGiRVq4cKGmTJlSfmsFAADuax6GYRi3u/DZs2cVGBiojIwMdezYUbm5uapTp47S0tL0xBNPSJIOHDigZs2aKTMzU23bttWaNWvUu3dvnTp1SnXr1pUkLViwQBMnTtTZs2fl7e190+d1OByyWq3Kzc2VxWK53fIrlAaTPnN3CZXCsZlx7i4BAO5bZfn8vqNzUHJzcyVJAQEBkqSsrCxduXJF3bp1c/Zp2rSpQkNDlZmZKUnKzMxUy5YtneFEkmJiYuRwOLR///5Sn6egoEAOh8NlAgAAFddtB5SioiKNGzdO7dq1U4sWLSRJdrtd3t7e8vf3d+lbt25d2e12Z58fh5Pi+cXzSpOamiqr1eqcQkJCbrdsAABwH7jtgJKUlKR9+/ZpyZIl5VlPqVJSUpSbm+ucTp48edefEwAAuE+V21lozJgxWr16tbZu3ar69es724OCgnT58mXl5OS47EXJzs5WUFCQs8+OHTtcxiu+yqe4z7V8fHzk4+NzO6UCAID7UJn2oBiGoTFjxmjFihXatGmTwsPDXeZHRUWpatWq2rhxo7Pt4MGDOnHihGw2myTJZrNp7969OnPmjLNPenq6LBaLIiIi7mRdAABABVGmPShJSUlKS0vTJ598Ij8/P+c5I1arVdWrV5fVatWIESOUnJysgIAAWSwWjR07VjabTW3btpUk9ejRQxERERo6dKhmzZolu92uyZMnKykpib0kAABAUhkDyvz58yVJnTt3dml///33NXz4cEnSm2++KU9PTyUkJKigoEAxMTGaN2+es6+Xl5dWr16t0aNHy2azydfXV4mJiZoxY8adrQkAAKgw7ug+KO7CfVBK4j4o9wb3QQGA23fP7oMCAABwNxBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6RBQAACA6ZQ5oGzdulXx8fEKDg6Wh4eHVq5c6TJ/+PDh8vDwcJl69uzp0ufChQsaMmSILBaL/P39NWLECF28ePGOVgQAAFQcZQ4o+fn5atWqlebOnXvdPj179tTp06ed00cffeQyf8iQIdq/f7/S09O1evVqbd26VaNGjSp79QAAoEKqUtYFYmNjFRsbe8M+Pj4+CgoKKnXe119/rbVr12rnzp1q06aNJGnOnDnq1auXXnvtNQUHB5e1JAAAUMHclXNQtmzZosDAQDVp0kSjR4/W+fPnnfMyMzPl7+/vDCeS1K1bN3l6eurLL78sdbyCggI5HA6XCQAAVFzlHlB69uypDz74QBs3btQrr7yijIwMxcbGqrCwUJJkt9sVGBjoskyVKlUUEBAgu91e6pipqamyWq3OKSQkpLzLBgAAJlLmQzw3M3DgQOfPLVu2VGRkpBo1aqQtW7aoa9eutzVmSkqKkpOTnY8
dDgchBQCACuyuX2bcsGFD1a5dW4cPH5YkBQUF6cyZMy59rl69qgsXLlz3vBUfHx9ZLBaXCQAAVFx3PaB8++23On/+vOrVqydJstlsysnJUVZWlrPPpk2bVFRUpOjo6LtdDgAAuA+U+RDPxYsXnXtDJOno0aPavXu3AgICFBAQoOnTpyshIUFBQUE6cuSInnvuOTVu3FgxMTGSpGbNmqlnz54aOXKkFixYoCtXrmjMmDEaOHAgV/AAAABJt7EHZdeuXXrooYf00EMPSZKSk5P10EMPacqUKfLy8tKePXv0s5/9TA8++KBGjBihqKgoff755/Lx8XGOsXjxYjVt2lRdu3ZVr1691L59e7377rvlt1YAAOC+VuY9KJ07d5ZhGNedv27dupuOERAQoLS0tLI+NQAAqCT4Lh4AAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6BBQAAGA6ZQ4oW7duVXx8vIKDg+Xh4aGVK1e6zDcMQ1OmTFG9evVUvXp1devWTYcOHXLpc+HCBQ0ZMkQWi0X+/v4aMWKELl68eEcrAgAAKo4yB5T8/Hy1atVKc+fOLXX+rFmz9Pbbb2vBggX68ssv5evrq5iYGF26dMnZZ8iQIdq/f7/S09O1evVqbd26VaNGjbr9tQAAABVKlbIuEBsbq9jY2FLnGYah2bNna/LkyerTp48k6YMPPlDdunW1cuVKDRw4UF9//bXWrl2rnTt3qk2bNpKkOXPmqFevXnrttdcUHBx8B6sDAAAqgnI9B+Xo0aOy2+3q1q2bs81qtSo6OlqZmZmSpMzMTPn7+zvDiSR169ZNnp6e+vLLL0sdt6CgQA6Hw2UCAAAVV7kGFLvdLkmqW7euS3vdunWd8+x2uwIDA13mV6lSRQEBAc4+10pNTZXVanVOISEh5Vk2AAAwmfviKp6UlBTl5uY6p5MnT7q7JAAAcBeVa0AJCgqSJGVnZ7u0Z2dnO+cFBQXpzJkzLvOvXr2qCxcuOPtcy8fHRxaLxWUCAAAVV7kGlPDwcAUFBWnjxo3ONofDoS+//FI2m02SZLPZlJOTo6ysLGefTZs2qaioSNHR0eVZDgAAuE+V+Sqeixcv6vDhw87HR48e1e7duxUQEKDQ0FCNGzdOL774oh544AGFh4fr+eefV3BwsPr27StJatasmXr27KmRI0dqwYIFunLlisaMGaOBAwdyBQ8AAJB0GwFl165devzxx52Pk5OTJUmJiYlauHChnnvuOeXn52vUqFHKyclR+/bttXbtWlWrVs25zOLFizVmzBh17dpVnp6eSkhI0Ntvv10OqwMAACoCD8MwDHcXUVYOh0NWq1W5ubmcj/J/Gkz6zN0lVArHZsa5uwQAuG+V5fP7vriKBwAAVC4EFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDoEFAAAYDpV3F2AGTWY9Jm7SwAAoFJjDwoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADAdAgoAADCdcg8o06ZNk4eHh8vUtGlT5/xLly4pKSlJtWrVUs2aNZWQkKDs7OzyLgMAANzH7soelObNm+v06dPO6YsvvnDOGz9+vFatWqVly5YpIyNDp06dUr9+/e5GGQAA4D51V76Lp0qVKgoKCirRnpubq7/85S9KS0tTly5dJEnvv/++mjVrpu3bt6tt27Z3oxwAAHCfuSt7UA4dOqTg4GA1bNhQQ4YM0YkTJyRJWVlZunLlirp16+bs27RpU4WGhiozM/O64xUUFMjhcLhMAACg4ir3gBIdHa2FCxdq7dq1mj9/vo4ePaoOHTooLy9Pdrtd3t7e8vf3d1mmbt26stvt1x0zNTVVVqvVOYWEhJR32QAAwETK/RBPbGys8+fIyEhFR0crLCxMH3/8sapXr35bY6akpCg5Odn52OFwEFIAAKjA7vplxv7+/nrwwQd1+PBhBQUF6fLly8rJyXHpk52dXeo5K8V8fHxksVhcJgAAUHHd9YBy8eJFHTlyRPXq1VNUVJSqVq2qjRs3OucfPHhQJ06ckM1mu9ulAACA+0S5H+J59tlnFR8fr7CwMJ06dUpTp06Vl5eXBg0aJKvVqhEjRig5OVkBAQGyWCwaO3asbDYbV/AAAACncg8o3377rQYNGqTz58+rTp06at++vbZv3646depIkt588015enoqISFBBQUFiomJ0bx588q7DAAAcB/zMAzDcHcRZeVwOGS1WpWbm3tXzkdpMOmzch8TFcOxmXHuLgEA7ltl+fzmu3gAAIDpEFAAAIDpEFAAAIDpEFAAAIDpEFAAAIDpEFAAAIDpEFAAAIDpEFAAAIDpEFAAAIDpEFAAAIDplPt38QAV2f34NQjcnh/A/Yg9KAAAwHQIKAAAwHQIKAAAwHQIKAAAwHQIKAAAwHQIKAAAwHQIKAAAwHQIKAAAwHQIKAAAwHQIKAAAwHQIKAAAwHQIKAAAwHQIKAAAwHQIKAAAwHQIKAAAwHQIKAAAwHSquLsAAHdXg0mfubuEMjs2M87dJQBwM/agAAAA0yGgAAAA0yGgAAAA0yGgAAAA0+EkWQCmw4m9ANiDAgAATIeAAgAATIeAAgAATMetAWXu3Llq0KCBqlWrpujoaO3YscOd5QAAAJNwW0BZunSpkpOTNXXqVP3zn/9Uq1atFBMTozNnzrirJAAAYBJuCyhvvPGGRo4cqSeffFIRERFasGCBatSooffee89dJQEAAJNwy2XGly9fVlZWllJSUpxtnp6e6tatmzIzM0v0LygoUEFBgfNxbm6uJMnhcNyV+ooKvr8r4wKouO7W3yPc/1pMXefuEm7Lvukx5T5m8e+JYRg37euWgHLu3DkVFhaqbt26Lu1169bVgQMHSvRPTU3V9OnTS7SHhITctRoBoCyss91dAVC+7uY2nZeXJ6vVesM+98WN2lJSUpScnOx8XFRUpAsXLqhWrVry8PBwY2V3zuFwKCQkRCdPnpTFYnF3OZUW74M58D64H++BOVTU98EwDOXl5Sk4OPimfd0SUGrXri0vLy9lZ2e7tGdnZysoKKhEfx8fH/n4+Li0+fv7380S7zmLxVKhNsL7Fe+DOfA+uB/vgTlUxPfhZntOirnlJFlvb29FRUVp48aNzraioiJt3L
hRNpvNHSUBAAATcdshnuTkZCUmJqpNmzZ69NFHNXv2bOXn5+vJJ590V0kAAMAk3BZQBgwYoLNnz2rKlCmy2+1q3bq11q5dW+LE2YrOx8dHU6dOLXEIC/cW74M58D64H++BOfA+SB7GrVzrAwAAcA/xXTwAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CCgAAMB0CChusnXrVsXHxys4OFgeHh5auXKlu0uqdFJTU/XII4/Iz89PgYGB6tu3rw4ePOjusiqd+fPnKzIy0nnHTJvNpjVr1ri7rEpv5syZ8vDw0Lhx49xdSqUybdo0eXh4uExNmzZ1d1luQUBxk/z8fLVq1Upz5851dymVVkZGhpKSkrR9+3alp6frypUr6tGjh/Lz891dWqVSv359zZw5U1lZWdq1a5e6dOmiPn36aP/+/e4urdLauXOn3nnnHUVGRrq7lEqpefPmOn36tHP64osv3F2SW9wXXxZYEcXGxio2NtbdZVRqa9eudXm8cOFCBQYGKisrSx07dnRTVZVPfHy8y+OXXnpJ8+fP1/bt29W8eXM3VVV5Xbx4UUOGDNGf/vQnvfjii+4up1KqUqVKqd9LV9mwBwX4P7m5uZKkgIAAN1dSeRUWFmrJkiXKz8/ne7ncJCkpSXFxcerWrZu7S6m0Dh06pODgYDVs2FBDhgzRiRMn3F2SW7AHBdAPX1Y5btw4tWvXTi1atHB3OZXO3r17ZbPZdOnSJdWsWVMrVqxQRESEu8uqdJYsWaJ//vOf2rlzp7tLqbSio6O1cOFCNWnSRKdPn9b06dPVoUMH7du3T35+fu4u754ioAD64b/Gffv2Vdpjve7WpEkT7d69W7m5uVq+fLkSExOVkZFBSLmHTp48qWeeeUbp6emqVq2au8uptH586D8yMlLR0dEKCwvTxx9/rBEjRrixsnuPgIJKb8yYMVq9erW2bt2q+vXru7ucSsnb21uNGzeWJEVFRWnnzp1666239M4777i5ssojKytLZ86c0cMPP+xsKyws1NatW/XHP/5RBQUF8vLycmOFlZO/v78efPBBHT582N2l3HMEFFRahmFo7NixWrFihbZs2aLw8HB3l4T/U1RUpIKCAneXUal07dpVe/fudWl78skn1bRpU02cOJFw4iYXL17UkSNHNHToUHeXcs8RUNzk4sWLLon46NGj2r17twICAhQaGurGyiqPpKQkpaWl6ZNPPpGfn5/sdrskyWq1qnr16m6urvJISUlRbGysQkNDlZeXp7S0NG3ZskXr1q1zd2mVip+fX4nzr3x9fVWrVi3Oy7qHnn32WcXHxyssLEynTp3S1KlT5eXlpUGDBrm7tHuOgOImu3bt0uOPP+58nJycLElKTEzUwoUL3VRV5TJ//nxJUufOnV3a33//fQ0fPvzeF1RJnTlzRsOGDdPp06dltVoVGRmpdevWqXv37u4uDbjnvv32Ww0aNEjnz59XnTp11L59e23fvl116tRxd2n3nIdhGIa7iwAAAPgx7oMCAABMh4ACAABMh4ACAABMh4ACAABMh4ACAABMh4ACAABMh4ACAABMh4ACAABMh4ACAABMh4ACAABMh4ACAABM5/8D59CZGIdowBoAAAAASUVORK5CYII=\n", - "text/plain": [ - "
" - ] - }, - "metadata": {}, - "output_type": "display_data" - } - ], - "source": [ - "losses = [x[\"loss\"] for x in tiny]\n", - "plt.hist(losses)\n", - "plt.title(\"Document-level losses (GPT-Neo 125M)\")\n", - "plt.show()" - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "id": "e96cf901", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "random sequence loss: 5.31893930607105\n", - "WNQ3LEC13MBO5UM7EM34EYH9HJ6V1R08QOCG4C0SZTOL6DCUVACX1IO1PRGZ7GPSN759YA206SWGQXPMG8CR2CEYHJGPZ9YV8HAU ...\n" - ] - } - ], - "source": [ - "print(\"random sequence loss: \", tiny[-1][\"loss\"])\n", - "print(tiny[-1][\"text\"][:100], \"...\")" - ] - }, - { - "cell_type": "code", - "execution_count": 9, - "id": "b7cc80a3", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Index of 10 documents with highest loss\n", - "[717, 46, 238, 352, 197, 33, 520, 502, 139, 403]\n" - ] - } - ], - "source": [ - "ordered_idxs = sorted(list(range(len(tiny))), key = lambda i: -tiny[i][\"loss\"])\n", - "\n", - "print(\"Index of 10 documents with highest loss\")\n", - "print(ordered_idxs[:10])" - ] - }, - { - "cell_type": "code", - "execution_count": 10, - "id": "83762917", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "loss : 3.4214292093012535\n", - "\\section{Discussion}\n", - "\\label{sec:Discussion}\n", - "\n", - "In this paper, we have provided a new method for the featurization of persistence diagrams through the use of template functions; that is, collections of functions compactly supported on the upper half plane away from the diagonal whose induced functions on diagrams separate points.\n", - "To do this, we further gave a complete description of compact sets in persistence diagram space endowed with the bottleneck distance.\n", - "\n", - "This method of featurization allows for a great deal of flexibility for the end user.\n", - "In particular, we have provided two options for template functions, tent functions and interpolating polynomials, but surely there are many other collections of functions which could be tested for optimizing classification and regression tasks.\n", - "We showed these two functions worked quite well on standard experiments, as well as in comparison to other methods available in the literature.\n", - "\n", - "We find the particular results of the SHREC data set (\\cref{sec:UliShapeData}) to be quite fascinating due to the vast improvement seen from tent functions to interpolating polynomials.\n", - "The usual knee-jerk reaction to setting up these featurization methods for persistence diagrams is that localization is key.\n", - "This was the impetus for creation of the tent functions as they have support contained in a small box, so each tent function truly only sees a small window of the diagram.\n", - "Meanwhile, the interpolating polynomials are nonzero well away from their chosen ``basepoint'' so the fact that these functions work at all is surprising to say the least.\n", - "\n", - "We hope to understand this behavior further in the course of future work.\n" - ] - } - ], - "source": [ - "idx = 1\n", - "print(\"loss : \", tiny[ordered_idxs[idx]][\"loss\"])\n", - "print(tiny[ordered_idxs[idx]][\"text\"])" - ] - }, - { - "cell_type": "markdown", - "id": "46c7c419", - "metadata": {}, - "source": [ - "### Discussion\n", - "In the cell above, we can set `idx = n` to view the document that generates `n`th highest loss. 
We can see that even the documents that yield the highest losses look like high-quality, useful data. This means we can be relatively confident that our pre-training data is free of complete noise. " - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.10.4" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -} diff --git a/run1_postprocessing/make_files.py b/run1_postprocessing/make_files.py deleted file mode 100644 index 985b98246bed3b0c925fcf454a7bf2e02063235e..0000000000000000000000000000000000000000 --- a/run1_postprocessing/make_files.py +++ /dev/null @@ -1,88 +0,0 @@ -from datasets import load_dataset -import random -import itertools -from itertools import islice -import sys -import time -from tqdm import tqdm, trange -import re -import json -import ndjson - - -def batch_loader(seq, size): - """ - Take in a list `seq` and return a list of - its chunks of size `size` """ - return [seq[pos:pos + size] for pos in range(0, len(seq), size)] - - -def parse_meta(instance): - instance["meta"] = json.loads(instance["meta"]) - return instance - -def filter_arxiv_text(instance): - keywords = ["\\part{", "\\chapter{", "\\section{", "\\section*{", "\\subsection{", "\\subsection*{", "\\subsubsection{", - "\\subsubsection*{", "\\paragraph{", "\\subparagraph{"] - - return any(x in instance["text"] for x in keywords) and "gnuplot" not in instance["text"] - -def process_arxiv_text(instance): - text = instance["text"] - - rexp = re.compile(r"\\begin{bibdiv}.*?\\end{bibdiv}", re.DOTALL) - text = re.sub(rexp, "", text) - - rexp = re.compile(r"\n{3,}", re.DOTALL) - text = re.sub(rexp, "\n\n\n", text) - - instance["text"] = text - - return instance - - -def main(split): - """ - `split` is `"train"` or `"validation"` - """ - arxiv = load_dataset("hoskinson-center/proof-pile", "arxiv") - - print("PARSING ARXIV") - print("loading into memory...") - data_list = list(tqdm(arxiv[split])) - print("processing...") - data_list = list(filter(filter_arxiv_text, tqdm(data_list))) - data_list = list(map(process_arxiv_text, tqdm(data_list))) - data_list = list(map(parse_meta, tqdm(data_list))) - - #open("arxiv_examples.txt", "w").write("\n".join(["#"*80 + "\n" + x["text"] for x in eval_list[:100]])) - - keywords = ["formal", "books", "wiki", "stack-exchange", "math-dataset"] - - print("LOADING REST OF DATA...") - data_rest = [load_dataset("hoskinson-center/proof-pile", x)[split] for x in keywords] - data_rest_list = list(itertools.chain.from_iterable(data_rest)) - data_rest_list = list(map(parse_meta, tqdm(data_rest_list))) - - data_list = data_list + data_rest_list - print("shuffling...") - random.shuffle(data_list) - - if split=="train": - for i, batch in enumerate(batch_loader(data_list, 100_000)): - with open(f"proofpile_train_{i}.jsonl", "w") as f: - ndjson.dump(batch, f) - - elif split=="validation": - cut_idx = len(data_list)//2 - - with open("proofpile_dev.jsonl", "w") as f: - ndjson.dump(data_list[:cut_idx], f) - with open("proofpile_test.jsonl", "w") as f: - ndjson.dump(data_list[cut_idx:], f) - - print("COMPLETE") - -if __name__=="__main__": - main("train") - main("validation") diff --git a/splits.json b/splits.json deleted file mode 100644 index 
5e884476b5424ed607a059e7039fd0cb37dd62b7..0000000000000000000000000000000000000000 --- a/splits.json +++ /dev/null @@ -1,15071 +0,0 @@ -{ - "books-train": [ - "books/napkin/cohomology.tex", - "books/napkin/bezout.tex", - "books/stacks/defos.tex", - "books/stacks/examples-stacks.tex", - "books/napkin/action.tex", - "books/hott/blurb.tex", - "books/napkin/galois.tex", - "books/stacks/pic.tex", - "books/stacks/introduction.tex", - "books/hott/reals.tex", - "books/cam/III_L/modular_forms_and_l_functions.tex", - "books/stacks/sites-cohomology.tex", - "books/cring/flat.tex", - "books/stacks/decent-spaces.tex", - "books/stacks/sdga.tex", - "books/napkin/grp-intro.tex", - "books/cam/II_L/representation_theory.tex", - "books/napkin/fourier.tex", - "books/napkin/log.tex", - "books/stacks/topology.tex", - "books/stacks/preamble.tex", - "books/stacks/weil.tex", - "books/stacks/spaces-pushouts.tex", - "books/stacks/crystalline.tex", - "books/hott/hlevels.tex", - "books/hott/induction.tex", - "books/napkin/rep-alg.tex", - "books/cam/III_L/algebras.tex", - "books/napkin/spec-zariski.tex", - "books/cam/III_L/ramsey_theory.tex", - "books/napkin/dets.tex", - "books/stacks/formal-spaces.tex", - "books/cam/IB_L/statistics.tex", - "books/napkin/preface.tex", - "books/napkin/characters.tex", - "books/napkin/references.tex", - "books/hott/macros.tex", - "books/napkin/finite-field.tex", - "books/stacks/bootstrap.tex", - "books/cam/IB_M/analysis_ii.tex", - "books/napkin/mor-scheme.tex", - "books/napkin/randvar.tex", - "books/napkin/transpose.tex", - "books/cam/III_L/riemannian_geometry.tex", - "books/cam/III_M/modern_statistical_methods.tex", - "books/stacks/spaces-duality.tex", - "books/hott/homotopy.tex", - "books/napkin/martingale.tex", - "books/napkin/structure.tex", - "books/stacks/spaces-more-morphisms.tex", - "books/cam/IV_L/topics_in_number_theory.tex", - "books/cam/III_L/logic.tex", - "books/stacks/exercises.tex", - "books/stacks/spaces-resolve.tex", - "books/stacks/more-groupoids.tex", - "books/napkin/differentiate.tex", - "books/cam/IB_L/electromagnetism.tex", - "books/cam/III_L/the_standard_model.tex", - "books/stacks/cohomology.tex", - "books/napkin/metric-prop.tex", - "books/napkin/zorn-lemma.tex", - "books/stacks/spaces-cohomology.tex", - "books/napkin/sheaves.tex", - "books/napkin/large-laws.tex", - "books/cam/IB_L/complex_methods.tex", - "books/stacks/stacks-sheaves.tex", - "books/cring/dimension.tex", - "books/cam/III_M/extremal_graph_theory.tex", - "books/napkin/limits.tex", - "books/napkin/vectors.tex", - "books/napkin/numfield.tex", - "books/napkin/functors.tex", - "books/stacks/smoothing.tex", - "books/napkin/applications.tex", - "books/napkin/sheafmod.tex", - "books/cam/IB_M/methods.tex", - "books/stacks/cotangent.tex", - "books/stacks/pione.tex", - "books/cring/homotopical.tex", - "books/stacks/proetale.tex", - "books/stacks/spaces-descent.tex", - "books/stacks/spaces-topologies.tex", - "books/cam/III_M/quantum_computation.tex", - "books/stacks/obsolete.tex", - "books/cam/archive/spinor_techniques_in_general_relativity.tex", - "books/napkin/manifolds.tex", - "books/napkin/vector-space.tex", - "books/napkin/ramification.tex", - "books/napkin/integrate.tex", - "books/stacks/spaces-over-fields.tex", - "books/napkin/title-embellishments.tex", - "books/cring/fields.tex", - "books/stacks/adequate.tex", - "books/stacks/sheaves.tex", - "books/napkin/zfc.tex", - "books/napkin/caratheodory.tex", - "books/hott/equivalences.tex", - "books/napkin/taylor.tex", - "books/napkin/localization.tex", - 
"books/napkin/proj.tex", - "books/stacks/more-algebra.tex", - "books/napkin/models.tex", - "books/stacks/spaces-perfect.tex", - "books/cam/II_M/linear_analysis.tex", - "books/napkin/pontryagin.tex", - "books/cam/II_L/logic_and_set_theory.tex", - "books/stacks/stacks.tex", - "books/stacks/formal-defos.tex", - "books/napkin/eigenvalues.tex", - "books/napkin/sets-functions.tex", - "books/cam/III_M/quantum_field_theory.tex", - "books/napkin/long-exact.tex", - "books/stacks/spaces-more-groupoids.tex", - "books/hott/preliminaries.tex", - "books/cam/III_M/differential_geometry.tex", - "books/stacks/moduli-curves.tex", - "books/napkin/cup-product.tex", - "books/stacks/stacks-geometry.tex", - "books/stacks/perfect.tex", - "books/stacks/morphisms.tex", - "books/napkin/meromorphic.tex", - "books/stacks/conventions.tex", - "books/stacks/sites-modules.tex", - "books/stacks/etale.tex", - "books/napkin/cardinal.tex", - "books/napkin/classgrp.tex", - "books/napkin/dual-trace.tex", - "books/cring/factorization.tex", - "books/cam/III_L/symplectic_geometry.tex", - "books/stacks/injectives.tex", - "books/stacks/spaces-flat.tex", - "books/stacks/varieties.tex", - "books/napkin/dedekind.tex", - "books/napkin/categories.tex", - "books/napkin/holomorphic.tex", - "books/stacks/bibliography.tex", - "books/cam/III_L/positivity_in_algebraic_geometry.tex", - "books/stacks/algebra.tex", - "books/stacks/desirables.tex", - "books/hott/formal.tex", - "books/stacks/restricted.tex", - "books/stacks/stacks-cohomology.tex", - "books/cring/integrality.tex", - "books/stacks/etale-cohomology.tex", - "books/napkin/quasicoh.tex", - "books/stacks/spaces-divisors.tex", - "books/napkin/abelian.tex", - "books/trench/TRENCH_REAL_ANALYSIS.tex", - "books/cam/IV_E/bounded_cohomology.tex", - "books/stacks/chapters.tex", - "books/stacks/flat.tex", - "books/napkin/norm-trace.tex", - "books/stacks/spaces-groupoids.tex", - "books/stacks/criteria.tex", - "books/stacks/dpa.tex", - "books/stacks/more-etale.tex", - "books/stacks/brauer.tex", - "books/stacks/groupoids-quotients.tex", - "books/cam/IB_L/numerical_analysis.tex", - "books/cring/spec.tex", - "books/trench/TRENCH_IMPROPER_FUNCTIONS.tex", - "books/cring/threeimportantfunctors.tex", - "books/cam/II_L/statistical_physics.tex", - "books/stacks/trace.tex", - "books/stacks/stacks-more-morphisms.tex", - "books/stacks/spaces-limits.tex", - "books/hott/logic.tex", - "books/napkin/proj-var.tex", - "books/stacks/spaces-properties.tex", - "books/napkin/spec-sheaf.tex", - "books/napkin/ordinal.tex", - "books/hott/hits.tex", - "books/napkin/quasi-proj.tex", - "books/cam/archive/3-manifolds.tex", - "books/napkin/ideals.tex", - "books/napkin/stokes.tex", - "books/napkin/pell.tex", - "books/napkin/circuits.tex", - "books/stacks/stacks-introduction.tex", - "books/cam/IB_M/linear_algebra.tex", - "books/cring/homological.tex", - "books/napkin/quotient.tex", - "books/cring/dedekind.tex", - "books/napkin/excision.tex", - "books/cam/IA_L/dynamics_and_relativity.tex", - "books/cam/III_M/algebraic_topology_iii.tex", - "books/cam/IB_E/variational_principles.tex", - "books/napkin/zariski.tex", - "books/cam/III_M/hydrodynamic_stability.tex", - "books/cam/IA_L/vector_calculus.tex", - "books/napkin/rings.tex", - "books/napkin/sylow.tex", - "books/napkin/compactness.tex", - "books/stacks/schemes.tex", - "books/cam/IB_M/markov_chains.tex", - "books/stacks/coding.tex", - "books/cam/IB_M/quantum_mechanics.tex", - "books/stacks/spaces-simplicial.tex", - "books/cring/homologicallocal.tex", - "books/hott/introduction.tex", 
- "books/hott/preface.tex", - "books/cam/II_M/algebraic_topology.tex", - "books/napkin/inner-form.tex", - "books/stacks/groupoids.tex", - "books/napkin/salespitch.tex", - "books/napkin/forcing.tex", - "books/cam/III_M/combinatorics.tex", - "books/cam/IB_E/metric_and_topological_spaces.tex", - "books/napkin/CH.tex", - "books/cring/graded.tex", - "books/napkin/forms.tex", - "books/hott/basics.tex", - "books/stein/stein.tex", - "books/cring/noetherian.tex", - "books/stacks/local-cohomology.tex", - "books/cam/IB_L/geometry.tex", - "books/stacks/intersection.tex", - "books/cam/III_M/symmetries_fields_and_particles.tex", - "books/napkin/cover-project.tex", - "books/cam/IA_L/analysis_i.tex", - "books/cam/IA_M/vectors_and_matrices.tex", - "books/cring/etale.tex", - "books/stacks/examples-defos.tex", - "books/cam/IB_L/fluid_dynamics.tex", - "books/cam/III_L/theoretical_physics_of_soft_condensed_matter.tex", - "books/stacks/dga.tex", - "books/stacks/descent.tex", - "books/stacks/spaces-chow.tex", - "books/cam/IA_M/numbers_and_sets.tex", - "books/napkin/multivar.tex", - "books/stacks/spaces-morphisms.tex", - "books/napkin/lebesgue-int.tex", - "books/hott/errata.tex", - "books/cring/various.tex", - "books/cring/smoothness.tex", - "books/cring/foundations.tex", - "books/stacks/stacks-perfect.tex", - "books/stacks/derived.tex", - "books/hott/categories.tex", - "books/cring/categories.tex", - "books/stacks/moduli.tex", - "books/napkin/notation.tex", - "books/napkin/metric-top.tex", - "books/stacks/equiv.tex", - "books/napkin/swapsum.tex", - "books/trench/TRENCH_LAGRANGE_MULTIPLIERS.tex", - "books/stacks/divisors.tex", - "books/napkin/semisimple.tex", - "books/cring/completion.tex", - "books/cam/IA_M/differential_equations.tex", - "books/cam/IB_L/groups_rings_and_modules.tex", - "books/napkin/genscheme.tex", - "books/napkin/cellular.tex", - "books/cam/II_M/integrable_systems.tex", - "books/stacks/more-morphisms.tex", - "books/stacks/topologies.tex", - "books/cam/IA_L/probability.tex", - "books/cam/III_M/percolation_and_random_walks_on_graphs.tex", - "books/stacks/coherent.tex", - "books/stacks/algebraization.tex", - "books/napkin/discriminant.tex", - "books/hott/symbols.tex", - "books/cam/III_M/local_fields.tex", - "books/napkin/fundamental-group.tex", - "books/stacks/simplicial.tex", - "books/stacks/fields.tex", - "books/napkin/shor.tex", - "books/stacks/dualizing.tex", - "books/stacks/stacks-limits.tex", - "books/napkin/measure-space.tex", - "books/napkin/p-adic.tex", - "books/stacks/sites.tex", - "books/stacks/quot.tex", - "books/stacks/spaces-more-cohomology.tex", - "books/cam/III_L/advanced_quantum_field_theory.tex", - "books/cam/IA_M/groups.tex", - "books/stacks/resolve.tex", - "books/stacks/guide.tex", - "books/stacks/spaces.tex", - "books/stacks/curves.tex", - "books/cam/III_L/schramm-loewner_evolutions.tex", - "books/cam/III_M/analysis_of_partial_differential_equations.tex", - "books/stacks/artin.tex", - "books/stacks/properties.tex", - "books/stacks/functors.tex", - "books/stacks/chow.tex", - "books/stacks/examples.tex", - "books/stacks/modules.tex", - "books/cam/II_M/probability_and_measure.tex", - "books/cam/III_E/classical_and_quantum_solitons.tex", - "books/stacks/algebraic.tex", - "books/cam/IB_L/complex_analysis.tex", - "books/stacks/duality.tex", - "books/hott/exercise_solutions.tex", - "books/stacks/hypercovering.tex", - "books/cam/II_M/galois_theory.tex", - "books/napkin/frobenius.tex", - "books/napkin/spec-examples.tex", - "books/napkin/singular.tex", - 
"books/cam/archive/supersymmetry.tex", - "books/napkin/artin.tex", - "books/stacks/derham.tex", - "books/stacks/sets.tex", - "books/stacks/limits.tex", - "books/stacks/constructions.tex", - "books/stacks/stacks-morphisms.tex" - ], - "books-valid": [ - "books/hott/setmath.tex", - "books/napkin/constructions.tex", - "books/napkin/advice.tex", - "books/napkin/top-more.tex", - "books/stacks/stacks-properties.tex", - "books/cam/III_M/advanced_probability.tex", - "books/cam/IV_M/topics_in_geometric_group_theory.tex", - "books/cam/II_L/number_fields.tex", - "books/napkin/affine-var.tex", - "books/napkin/digraph.tex", - "books/cam/III_L/stochastic_calculus_and_applications.tex", - "books/stacks/discriminant.tex", - "books/napkin/hintsol.tex", - "books/stacks/homology.tex", - "books/cam/IB_E/optimisation.tex", - "books/stacks/models.tex", - "books/stacks/categories.tex" - ], - "formal-train": [ - "formal/afp/SpecCheck/SpecCheck.thy", - "formal/lean/liquid/locally_constant/completion.lean", - "formal/lean/liquid/condensed/evaluation_homology.lean", - "formal/mizar/newton05.miz", - "formal/afp/Complx/OG_Syntax.thy", - "formal/afp/SenSocialChoice/Sen.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2020/a/p10.lean", - "formal/lean/liquid/polyhedral_lattice/basic.lean", - "formal/lean/mathlib/topology/algebra/star.lean", - "formal/afp/Closest_Pair_Points/document/root.tex", - "formal/afp/X86_Semantics/State.thy", - "formal/afp/pGCL/Embedding.thy", - "formal/afp/Shadow_SC_DOM/monads/ShadowRootMonad.thy", - "formal/hol/Jordan/float.ml", - "formal/afp/CoCon/document/root.tex", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p43.lean", - "formal/lean/mathlib/algebra/order/monoid_lemmas_zero_lt.lean", - "formal/mizar/graphsp.miz", - "formal/mizar/bkmodel3.miz", - "formal/afp/Word_Lib/Examples.thy", - "formal/lean/mathlib/algebraic_geometry/properties.lean", - "formal/afp/Differential_Dynamic_Logic/Static_Semantics.thy", - "formal/mizar/waybel11.miz", - "formal/afp/Game_Based_Crypto/document/root.tex", - "formal/mizar/series_4.miz", - "formal/lean/perfectoid/for_mathlib/integral_closure.lean", - "formal/lean/mathlib/group_theory/group_action/conj_act.lean", - "formal/afp/Jinja/J/Progress.thy", - "formal/lean/mathlib/analysis/calculus/conformal/normed_space.lean", - "formal/hol/Help/ACCEPT_TAC.doc", - "formal/lean/mathlib/topology/sheaves/presheaf.lean", - "formal/lean/mathlib/data/qpf/multivariate/constructions/quot.lean", - "formal/afp/Timed_Automata/Timed_Automata.thy", - "formal/lean/mathlib/order/filter/interval.lean", - "formal/afp/Category3/FunctorCategory.thy", - "formal/afp/Word_Lib/Norm_Words.thy", - "formal/afp/Diophantine_Eqns_Lin_Hom/Simple_Algorithm.thy", - "formal/mizar/integr11.miz", - "formal/afp/Minkowskis_Theorem/Minkowskis_Theorem.thy", - "formal/afp/HRB-Slicing/StaticInter/CFG.thy", - "formal/lean/mathlib/combinatorics/simple_graph/connectivity.lean", - "formal/afp/Noninterference_Sequential_Composition/Propaedeutics.thy", - "formal/afp/CoSMeDis/System_Specification.thy", - "formal/afp/Nullstellensatz/Univariate_PM.thy", - "formal/hol/Examples/digit_serial_methods.ml", - "formal/mizar/field_9.miz", - "formal/hol/Multivariate/specialtopologies.ml", - "formal/afp/Orbit_Stabiliser/Left_Coset.thy", - "formal/afp/Flyspeck-Tame/PlaneGraphIso.thy", - "formal/afp/CoreC++/TypeRel.thy", - "formal/coq/math-comp/test_suite/output.v.out", - "formal/afp/AWN/OClosed_Lifting.thy", - "formal/afp/Parity_Game/WinningStrategy.thy", - 
"formal/afp/Linear_Recurrences/Solver/Linear_Recurrences_Solver.thy", - "formal/afp/Imperative_Insertion_Sort/Imperative_Loops.thy", - "formal/afp/Poincare_Disc/Poincare_Lines_Axis_Intersections.thy", - "formal/hol/Help/strip_ncomb.doc", - "formal/lean/mathlib/linear_algebra/determinant.lean", - "formal/mizar/group_2.miz", - "formal/lean/mathlib/combinatorics/simple_graph/hasse.lean", - "formal/lean/mathlib/data/fin/succ_pred.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p629.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p208.lean", - "formal/afp/DPRM_Theorem/Diophantine/Binary_Masking.thy", - "formal/mizar/yellow19.miz", - "formal/lean/mathlib/data/fun_like/embedding.lean", - "formal/mizar/fuzzy_5.miz", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1977/p5.lean", - "formal/lean/mathlib/analysis/calculus/formal_multilinear_series.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p114.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p296.lean", - "formal/afp/DPRM_Theorem/Register_Machine/SingleStepRegister.thy", - "formal/hol/Help/is_uexists.doc", - "formal/lean/mathlib/category_theory/category/basic.lean", - "formal/lean/mathlib/algebra/ring/comp_typeclasses.lean", - "formal/lean/mathlib/number_theory/modular.lean", - "formal/lean/liquid/for_mathlib/preserves_finite_limits.lean", - "formal/afp/Kruskal/Graph_Definition_Impl.thy", - "formal/lean/liquid/for_mathlib/snake_lemma_naturality.lean", - "formal/afp/Nested_Multisets_Ordinals/Nested_Multiset.thy", - "formal/afp/SuperCalc/well_founded_continued.thy", - "formal/lean/lftcm/exercises_sources/wednesday/structures.lean", - "formal/afp/Menger/Y_neq_new_last.thy", - "formal/mizar/matrix17.miz", - "formal/mizar/hahnban1.miz", - "formal/hol/Help/EQ_IMP_RULE.doc", - "formal/afp/Shivers-CFA/MapSets.thy", - "formal/hol/Help/exactly.doc", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/sqineq_unitcircatbpabsamblt1.lean", - "formal/afp/Automatic_Refinement/Lib/Prio_List.thy", - "formal/afp/Projective_Geometry/document/root.tex", - "formal/mizar/group_18.miz", - "formal/afp/Jinja/Common/SystemClasses.thy", - "formal/hol/Help/e.doc", - "formal/mizar/tex_4.miz", - "formal/hol/Help/warn.doc", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p327.lean", - "formal/hol/Help/distinctness_store.doc", - "formal/mizar/field_7.miz", - "formal/lean/mathlib/control/ulift.lean", - "formal/mizar/matrix_6.miz", - "formal/afp/Markov_Models/ex/PGCL.thy", - "formal/afp/Van_Emde_Boas_Trees/Imperative_HOL_Time/Ref_Time.thy", - "formal/afp/Physical_Quantities/ISQ_Quantities.thy", - "formal/afp/Jordan_Normal_Form/DL_Submatrix.thy", - "formal/afp/Consensus_Refined/Voting/OneThirdRule_Proofs.thy", - "formal/mizar/ec_pf_1.miz", - "formal/afp/Syntax_Independent_Logic/Prelim.thy", - "formal/afp/Deriving/Derive_Manager.thy", - "formal/afp/BNF_Operations/Lift.thy", - "formal/afp/DiskPaxos/DiskPaxos_Inv2.thy", - "formal/lean/mathlib/data/mv_polynomial/invertible.lean", - "formal/afp/Pi_Calculus/Weak_Early_Bisim.thy", - "formal/lean/lftcm/hints/category_theory/exercise2/hint5.lean", - "formal/lean/perfectoid/sheaves/stalk_of_rings.lean", - "formal/afp/Call_Arity/TTreeImplCardinalitySafe.thy", - "formal/afp/Automatic_Refinement/Tool/Autoref_Fix_Rel.thy", - "formal/afp/Nat-Interval-Logic/IL_IntervalOperators.thy", - "formal/afp/Formal_SSA/Graph_path.thy", - "formal/lean/mathlib/order/initial_seg.lean", - "formal/afp/GoedelGod/GoedelGod.thy", - "formal/afp/Aristotles_Assertoric_Syllogistic/document/root.tex", - 
"formal/afp/Higher_Order_Terms/Find_First.thy", - "formal/afp/Modular_Assembly_Kit_Security/document/root.tex", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2002/b/p3.lean", - "formal/afp/Consensus_Refined/HO_Transition_System.thy", - "formal/afp/GaleStewart_Games/MoreENat.thy", - "formal/lean/sphere-eversion/local/corrugation.lean", - "formal/afp/WebAssembly/Wasm_Printing/Wasm_Printing.thy", - "formal/afp/Stuttering_Equivalence/Samplers.thy", - "formal/afp/JinjaThreads/MM/SC_Completion.thy", - "formal/afp/Collections/ICF/Collections.thy", - "formal/lean/mathlib/analysis/calculus/extend_deriv.lean", - "formal/afp/QR_Decomposition/Examples_QR_Abstract_Float.thy", - "formal/afp/Isabelle_Meta_Model/toy_example/embedding/Init_rbt.thy", - "formal/afp/Pi_Calculus/Late_Semantics1.thy", - "formal/lean/mathlib/category_theory/bicategory/free.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p441.lean", - "formal/hol/Help/term_type_unify.doc", - "formal/afp/LTL_Normal_Form/Normal_Form_Code_Export.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p126.lean", - "formal/afp/Knot_Theory/Knot_Theory.thy", - "formal/afp/Collections/document/conclusion.tex", - "formal/afp/Simpl/Vcg.thy", - "formal/lean/mathlib/algebra/lie/cartan_matrix.lean", - "formal/afp/FO_Theory_Rewriting/FOR_Certificate.thy", - "formal/afp/Consensus_Refined/Infra.thy", - "formal/hol/Help/mk_disj.doc", - "formal/afp/Optimal_BST/Quadrilateral_Inequality.thy", - "formal/afp/WorkerWrapper/Maybe.thy", - "formal/hol/Rqe/rqe_real.ml", - "formal/afp/FocusStreamsCaseStudies/JoinSplitTime.thy", - "formal/mizar/integr20.miz", - "formal/mizar/msscyc_1.miz", - "formal/afp/WHATandWHERE_Security/Parallel_Composition.thy", - "formal/lean/mathlib/logic/function/conjugate.lean", - "formal/lean/mathlib/category_theory/monad/limits.lean", - "formal/lean/mathlib/linear_algebra/matrix/dual.lean", - "formal/afp/Lambda_Free_KBOs/Lambda_Free_KBO_App.thy", - "formal/hol/Help/FIRST.doc", - "formal/afp/Extended_Finite_State_Machine_Inference/efsm2sal.thy", - "formal/afp/Independence_CH/Names.thy", - "formal/lean/mathlib/data/multiset/lattice.lean", - "formal/afp/Delta_System_Lemma/Delta_System.thy", - "formal/afp/Gauss_Jordan/Gauss_Jordan_IArrays.thy", - "formal/mizar/msualg_1.miz", - "formal/afp/Isabelle_Meta_Model/toy_example/embedding/meta_toy/Meta_META.thy", - "formal/afp/Clique_and_Monotone_Circuits/Monotone_Formula.thy", - "formal/afp/Propositional_Proof_Systems/SC_Cut.thy", - "formal/mizar/geomtrap.miz", - "formal/hol/Help/PURE_ONCE_ASM_REWRITE_RULE.doc", - "formal/afp/Design_Theory/Multisets_Extras.thy", - "formal/afp/Van_Emde_Boas_Trees/VEBT_Intf_Imperative.thy", - "formal/afp/Formal_SSA/document/root.tex", - "formal/lean/mathlib/ring_theory/power_series/basic.lean", - "formal/afp/Flow_Networks/Lib/Refine_Add_Fofu.thy", - "formal/lean/mathlib/data/rat/defs.lean", - "formal/afp/SIFPL/OBJ.thy", - "formal/hol/Library/isum.ml", - "formal/lean/mathlib/analysis/normed_space/linear_isometry.lean", - "formal/hol/Help/ss_of_maker.doc", - "formal/afp/FO_Theory_Rewriting/Primitives/LV_to_GTT.thy", - "formal/afp/LatticeProperties/Complete_Lattice_Prop.thy", - "formal/afp/Finitely_Generated_Abelian_Groups/Finite_Product_Extend.thy", - "formal/afp/Pi_Calculus/Strong_Early_Late_Comp.thy", - "formal/afp/Launchbury/HOLCF-Join-Classes.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p780.lean", - "formal/afp/Affine_Arithmetic/Ex_Affine_Approximation.thy", - "formal/lean/mathlib/algebra/order/ring.lean", - 
"formal/afp/Gale_Shapley/Gale_Shapley2.thy", - "formal/afp/Dict_Construction/Test_Lazy_Case.thy", - "formal/afp/Gauss_Sums/Gauss_Sums.thy", - "formal/afp/LLL_Basis_Reduction/Missing_Lemmas.thy", - "formal/lean/lftcm/solutions/thursday/linear_algebra.lean", - "formal/afp/Show/Show_Real.thy", - "formal/afp/Topological_Semantics/topo_alexandrov.thy", - "formal/lean/mathlib/group_theory/submonoid/pointwise.lean", - "formal/afp/Relational_Forests/Algorithms.thy", - "formal/afp/Slicing/While/WEquivalence.thy", - "formal/coq/analysis/altreals/realsum.v", - "formal/lean/mathlib/data/list/basic.lean", - "formal/afp/Polynomials/Poly_Mapping_Finite_Map.thy", - "formal/hol/Multivariate/polytope.ml", - "formal/afp/Circus/document/root.tex", - "formal/afp/Partial_Order_Reduction/Basics/Stuttering.thy", - "formal/afp/BNF_Operations/Permute.thy", - "formal/mizar/conlat_1.miz", - "formal/hol/Minisat/sat_solvers.ml", - "formal/hol/Help/unreserve_words.doc", - "formal/afp/GaleStewart_Games/GaleStewartDefensiveStrategies.thy", - "formal/mizar/numeral2.miz", - "formal/mizar/gate_3.miz", - "formal/afp/Dependent_SIFUM_Refinement/Examples/Eg1Eg2RefinementSimple.thy", - "formal/lean/liquid/condensed/extr/equivalence.lean", - "formal/afp/Decreasing-Diagrams-II/Decreasing_Diagrams_II_Aux.thy", - "formal/afp/Nullstellensatz/Algebraically_Closed_Fields.thy", - "formal/afp/Stream-Fusion/document/root.tex", - "formal/lean/mathlib/analysis/inner_product_space/lax_milgram.lean", - "formal/lean/mathlib/analysis/box_integral/partition/tagged.lean", - "formal/lean/mathlib/analysis/normed_space/is_R_or_C.lean", - "formal/lean/perfectoid/for_mathlib/punit_instances.lean", - "formal/lean/mathlib/order/filter/cofinite.lean", - "formal/afp/Simple_Firewall/Common/List_Product_More.thy", - "formal/lean/mathlib/data/nat/cast.lean", - "formal/lean/mathlib/logic/hydra.lean", - "formal/afp/Linear_Inequalities/Dim_Span.thy", - "formal/afp/DPRM_Theorem/Diophantine/Binary_And.thy", - "formal/hol/Help/listof.doc", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/induction/divisibility_3divnto3m2n.lean", - "formal/afp/JinjaDCI/Compiler/PCompiler.thy", - "formal/coq/analysis/exp.v", - "formal/lean/mathlib/data/nat/lattice.lean", - "formal/mizar/petri_df.miz", - "formal/hol/Help/REAL_INT_MUL_CONV.doc", - "formal/afp/Simpl/ex/VcgExTotal.thy", - "formal/afp/Groebner_Bases/Reduced_GB.thy", - "formal/afp/Collections/Examples/ICF/Exploration.thy", - "formal/lean/liquid/laurent_measures/no_longer_needed_maybe.lean", - "formal/afp/CakeML_Codegen/Terms/Term_as_Value.thy", - "formal/afp/LTL_Master_Theorem/Logical_Characterization/Extra_Equivalence_Relations.thy", - "formal/afp/Prime_Distribution_Elementary/More_Dirichlet_Misc.thy", - "formal/hol/Help/mk_numeral.doc", - "formal/afp/Design_Theory/Design_Theory_Root.thy", - "formal/afp/Simple_Firewall/Service_Matrix.thy", - "formal/hol/Help/atleast.doc", - "formal/afp/Datatype_Order_Generator/Hash_Generator.thy", - "formal/afp/Incredible_Proof_Machine/Incredible_Completeness.thy", - "formal/mizar/projred2.miz", - "formal/afp/Green/CircExample.thy", - "formal/lean/mathlib/group_theory/perm/support.lean", - "formal/afp/IMP2/basic/Semantics.thy", - "formal/afp/Refine_Imperative_HOL/Lib/Named_Theorems_Rev.thy", - "formal/afp/CZH_Foundations/czh_semicategories/CZH_DG_SemiCAT.thy", - "formal/afp/KAT_and_DRA/TwoSorted/KAT2.thy", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_CommunicationPartners_impl.thy", - "formal/afp/FOL_Seq_Calc3/Syntax.thy", - 
"formal/afp/Simple_Firewall/Common/GroupF.thy", - "formal/afp/Conditional_Transfer_Rule/CTR_Introduction.thy", - "formal/afp/Treaps/Random_List_Permutation.thy", - "formal/lean/mathlib/group_theory/perm/cycle/basic.lean", - "formal/hol/Help/insert_prime.doc", - "formal/lean/liquid/for_mathlib/free_abelian_group.lean", - "formal/afp/Program-Conflict-Analysis/document/root.tex", - "formal/mizar/ranknull.miz", - "formal/afp/Slicing/document/root.tex", - "formal/afp/Jordan_Normal_Form/Matrix_Kernel.thy", - "formal/afp/Van_Emde_Boas_Trees/VEBT_SuccPredImperative.thy", - "formal/afp/DPRM_Theorem/Machine_Equations/Constants_Equations.thy", - "formal/lean/mathlib/category_theory/limits/preserves/shapes/pullbacks.lean", - "formal/afp/Formal_SSA/SSA_CFG_code.thy", - "formal/afp/Timed_Automata/DBM_Normalization.thy", - "formal/afp/Game_Based_Crypto/document/fig-5.tex", - "formal/lean/mathlib/ring_theory/subsemiring/basic.lean", - "formal/afp/Applicative_Lifting/Applicative_Filter.thy", - "formal/afp/CAVA_Automata/Step_Conv.thy", - "formal/afp/Linear_Inequalities/Mixed_Integer_Solutions.thy", - "formal/hol/Help/mk_abs.doc", - "formal/afp/PseudoHoops/document/root.tex", - "formal/mizar/radix_1.miz", - "formal/afp/Polynomial_Factorization/Order_Polynomial.thy", - "formal/afp/LLL_Basis_Reduction/LLL_Certification.thy", - "formal/lean/mathlib/geometry/manifold/algebra/left_invariant_derivation.lean", - "formal/mizar/groeb_3.miz", - "formal/afp/Random_BSTs/document/root.tex", - "formal/afp/Lazy_Case/Lazy_Case.thy", - "formal/afp/Regular_Tree_Relations/Horn_Setup/Horn_Inference.thy", - "formal/afp/Refine_Imperative_HOL/Examples/Worklist_Subsumption.thy", - "formal/lean/mathlib/data/finset/powerset.lean", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_Sink.thy", - "formal/lean/mathlib/algebra/symmetrized.lean", - "formal/afp/Propositional_Proof_Systems/SC_Gentzen.thy", - "formal/hol/Help/GENERAL_REWRITE_CONV.doc", - "formal/lean/liquid/laurent_measures/theta.lean", - "formal/afp/CoCon/Review_Confidentiality/Review_RAut.thy", - "formal/afp/JinjaThreads/Execute/JVMExec_Execute2.thy", - "formal/lean/mathlib/geometry/manifold/derivation_bundle.lean", - "formal/lean/mathlib/category_theory/bicategory/natural_transformation.lean", - "formal/afp/Safe_OCL/OCL_Normalization.thy", - "formal/afp/Frequency_Moments/Landau_Ext.thy", - "formal/lean/mathlib/data/sym/sym2.lean", - "formal/afp/BDD/RepointProof.thy", - "formal/afp/Goodstein_Lambda/document/root.tex", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1972/p5.lean", - "formal/afp/WebAssembly/Wasm_Checker_Types.thy", - "formal/afp/Ordinary_Differential_Equations/Refinement/Refine_Info.thy", - "formal/afp/Interpreter_Optimizations/Ubx.thy", - "formal/lean/mathlib/number_theory/modular_forms/slash_actions.lean", - "formal/afp/JinjaThreads/MM/SC.thy", - "formal/afp/Real_Time_Deque/Stack_Proof.thy", - "formal/afp/Types_To_Sets_Extension/Examples/SML_Relativization/Lattices/SML_Semilattices.thy", - "formal/mizar/fintopo2.miz", - "formal/lean/mathlib/analysis/inner_product_space/calculus.lean", - "formal/mizar/euclid_3.miz", - "formal/afp/Forcing/Replacement_Axiom.thy", - "formal/afp/GPU_Kernel_PL/Kernel_programming_language.thy", - "formal/afp/HyperCTL/document/intro.tex", - "formal/lean/mathlib/algebra/category/Group/subobject.lean", - "formal/mizar/number02.miz", - "formal/afp/Groebner_Bases/Algorithm_Schema.thy", - "formal/afp/Possibilistic_Noninterference/Interface.thy", - "formal/afp/Groebner_Bases/Code_Target_Rat.thy", - 
"formal/afp/Deep_Learning/DL_Flatten_Matrix.thy", - "formal/afp/Propositional_Proof_Systems/Substitution.thy", - "formal/afp/GraphMarkingIBP/DSWMark.thy", - "formal/mizar/rlvect_x.miz", - "formal/afp/DPRM_Theorem/Register_Machine/MultipleStepRegister.thy", - "formal/mizar/tietze.miz", - "formal/afp/Heard_Of/document/root.tex", - "formal/hol/100/arithmetic_geometric_mean.ml", - "formal/afp/ConcurrentGC/Local_Invariants.thy", - "formal/hol/Help/mk_const.doc", - "formal/afp/Universal_Turing_Machine/Abacus_Defs.thy", - "formal/afp/Call_Arity/TransformTools.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p301.lean", - "formal/afp/Slicing/JinjaVM/JVMInterpretation.thy", - "formal/afp/Youngs_Inequality/Youngs.thy", - "formal/lean/mathlib/data/finset/pointwise.lean", - "formal/afp/Ordinary_Differential_Equations/Numerics/Abstract_Reachability_Analysis.thy", - "formal/lean/mathlib/number_theory/legendre_symbol/quadratic_char.lean", - "formal/lean/mathlib/category_theory/isomorphism.lean", - "formal/afp/Constructive_Cryptography_CM/Specifications/Channel.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p405.lean", - "formal/afp/Iptables_Semantics/Iptables_Semantics.thy", - "formal/hol/Help/rev.doc", - "formal/mizar/nat_1.miz", - "formal/mizar/zf_lang.miz", - "formal/lean/mathlib/probability/cond_count.lean", - "formal/lean/mathlib/group_theory/complement.lean", - "formal/afp/Ordered_Resolution_Prover/Ordered_Ground_Resolution.thy", - "formal/afp/Projective_Geometry/Pascal_Property.thy", - "formal/afp/Featherweight_OCL/UML_Contracts.thy", - "formal/afp/Isabelle_Meta_Model/Init.thy", - "formal/afp/Pi_Calculus/Strong_Early_Sim_Pres.thy", - "formal/lean/mathlib/logic/equiv/list.lean", - "formal/lean/mathlib/category_theory/limits/constructions/over/products.lean", - "formal/afp/ConcurrentGC/MarkObject.thy", - "formal/mizar/mmlquery.miz", - "formal/lean/mathlib/algebra/category/Mon/basic.lean", - "formal/afp/Security_Protocol_Refinement/Auth_simple/m3_enc.thy", - "formal/lean/mathlib/algebraic_geometry/prime_spectrum/basic.lean", - "formal/afp/Formal_SSA/Construct_SSA_notriv.thy", - "formal/afp/Timed_Automata/Closure.thy", - "formal/lean/sphere-eversion/to_mathlib/analysis/cut_off.lean", - "formal/afp/Knuth_Bendix_Order/Term_Aux.thy", - "formal/lean/mathlib/ring_theory/trace.lean", - "formal/lean/mathlib/category_theory/triangulated/pretriangulated.lean", - "formal/afp/Count_Complex_Roots/Count_Half_Plane.thy", - "formal/mizar/msualg_5.miz", - "formal/lean/mathlib/representation_theory/group_cohomology_resolution.lean", - "formal/lean/mathlib/category_theory/preadditive/functor_category.lean", - "formal/mizar/glib_003.miz", - "formal/afp/Concurrent_Ref_Alg/CRA.thy", - "formal/afp/Real_Time_Deque/RealTimeDeque_Enqueue.thy", - "formal/lean/mathlib/category_theory/limits/shapes/finite_limits.lean", - "formal/lean/mathlib/measure_theory/integral/periodic.lean", - "formal/mizar/polyform.miz", - "formal/lean/mathlib/analysis/specific_limits/normed.lean", - "formal/afp/Design_Theory/Design_Isomorphisms.thy", - "formal/mizar/ltlaxio1.miz", - "formal/hol/100/ratcountable.ml", - "formal/mizar/card_fin.miz", - "formal/afp/Flyspeck-Tame/TameEnumProps.thy", - "formal/afp/Iptables_Semantics/Primitive_Matchers/Common_Primitive_toString.thy", - "formal/lean/perfectoid/Spa/space.lean", - "formal/afp/Store_Buffer_Reduction/Preliminaries.thy", - "formal/afp/Refine_Imperative_HOL/Lib/Structured_Apply.thy", - "formal/hol/Help/CONTRAPOS_CONV.doc", - "formal/hol/Help/CONTR_TAC.doc", - 
"formal/afp/CAVA_LTL_Modelchecker/SM/Impl/SM_Datastructures.thy", - "formal/afp/Coinductive/Coinductive_Nat.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p433.lean", - "formal/lean/mathlib/group_theory/group_action/embedding.lean", - "formal/mizar/goboard5.miz", - "formal/hol/Examples/machin.ml", - "formal/hol/Help/REAL_RAT_DIV_CONV.doc", - "formal/afp/JinjaThreads/MM/JMM_Common.thy", - "formal/afp/Network_Security_Policy_Verification/Examples/Impl_List_Playground_ChairNetwork_statefulpolicy_example.thy", - "formal/hol/Rqe/pdivides_thms.ml", - "formal/afp/Featherweight_OCL/UML_PropertyProfiles.thy", - "formal/lean/sphere-eversion/loops/exists.lean", - "formal/afp/Psi_Calculi/Frame.thy", - "formal/hol/Help/BINOP2_CONV.doc", - "formal/lean/mathlib/measure_theory/function/ae_measurable_order.lean", - "formal/afp/SC_DOM_Components/Core_DOM_DOM_Components.thy", - "formal/afp/Metalogic_ProofChecker/Logic.thy", - "formal/afp/Szpilrajn/document/root.tex", - "formal/afp/Sqrt_Babylonian/Log_Impl.thy", - "formal/afp/Sort_Encodings/Mcalc2.thy", - "formal/afp/CAVA_LTL_Modelchecker/document/root.tex", - "formal/afp/Isabelle_C/C11-FrontEnd/C_Appendices.thy", - "formal/afp/DPRM_Theorem/Machine_Equations/Machine_Equation_Equivalence.thy", - "formal/coq/math-comp/algebra/zmodp.v", - "formal/afp/IsaNet/All_Protocols.thy", - "formal/afp/WebAssembly/Wasm_Printing/Wasm_Interpreter_Printing_Pure.thy", - "formal/lean/liquid/invpoly/ses.lean", - "formal/lean/mathlib/analysis/normed_space/star/complex.lean", - "formal/afp/Skip_Lists/Pi_pmf.thy", - "formal/afp/Ordinary_Differential_Equations/Numerics/Abstract_Reachability_Analysis_C1.thy", - "formal/lean/mathlib/algebra/order/archimedean.lean", - "formal/afp/UTP/utp/utp_usedby.thy", - "formal/lean/mathlib/analysis/special_functions/polynomials.lean", - "formal/lean/mathlib/algebra/char_p/default.lean", - "formal/hol/Help/prove_constructors_injective.doc", - "formal/afp/Jordan_Normal_Form/Matrix_Comparison.thy", - "formal/lean/mathlib/analysis/special_functions/trigonometric/arctan_deriv.lean", - "formal/lean/mathlib/algebraic_geometry/open_immersion.lean", - "formal/lean/perfectoid/valuation/perfection.lean", - "formal/afp/Gauss_Jordan/Inverse_IArrays.thy", - "formal/afp/CoSMed/Friend_Request_Confidentiality/Friend_Request.thy", - "formal/afp/Network_Security_Policy_Verification/Examples/I8_SSH_Landscape.thy", - "formal/afp/Featherweight_OCL/examples/Employee_Model/Analysis/Analysis_OCL.thy", - "formal/lean/mathlib/measure_theory/measure/lebesgue.lean", - "formal/afp/DPRM_Theorem/Register_Machine/SingleStepState.thy", - "formal/afp/Psi_Calculi/Tau_Stat_Imp.thy", - "formal/lean/mathlib/geometry/euclidean/triangle.lean", - "formal/lean/mathlib/linear_algebra/affine_space/affine_subspace.lean", - "formal/afp/Transition_Systems_and_Automata/Automata/NBA/NGBA_Refine.thy", - "formal/lean/mathlib/algebra/jordan/basic.lean", - "formal/afp/Hyperdual/TwiceFieldDifferentiable.thy", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_Sink_impl.thy", - "formal/mizar/qc_lang2.miz", - "formal/afp/AWN/AWN_Main.thy", - "formal/afp/Differential_Game_Logic/Syntax.thy", - "formal/afp/FocusStreamsCaseStudies/document/macros.tex", - "formal/afp/CRDT/Convergence.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/2rootspoly_apatapbeq2asqp2ab.lean", - "formal/mizar/fuznum_1.miz", - "formal/lean/mathlib/topology/stone_cech.lean", - "formal/afp/Gabow_SCC/Find_Path_Impl.thy", - "formal/hol/Library/ringtheory.ml", - 
"formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p12.lean", - "formal/afp/Berlekamp_Zassenhaus/Polynomial_Record_Based.thy", - "formal/afp/JinjaThreads/Common/TypeRel.thy", - "formal/lean/mathlib/probability/independence.lean", - "formal/lean/mathlib/order/well_founded_set.lean", - "formal/mizar/modcat_1.miz", - "formal/lean/mathlib/number_theory/padics/padic_numbers.lean", - "formal/afp/Key_Agreement_Strong_Adversaries/pfslvl3_asymmetric.thy", - "formal/lean/mathlib/category_theory/closed/functor.lean", - "formal/lean/liquid/condensed/extr/basic.lean", - "formal/afp/Independence_CH/Union_Axiom.thy", - "formal/afp/Bicategory/BicategoryOfSpans.thy", - "formal/afp/Key_Agreement_Strong_Adversaries/Import_all.thy", - "formal/afp/IsaNet/document/root.tex", - "formal/lean/mathlib/category_theory/monoidal/center.lean", - "formal/lean/mathlib/linear_algebra/dfinsupp.lean", - "formal/afp/Roth_Arithmetic_Progressions/Roth_Arithmetic_Progressions.thy", - "formal/lean/mathlib/algebra/category/Group/zero.lean", - "formal/afp/Tree_Decomposition/Tree.thy", - "formal/mizar/kolmog01.miz", - "formal/afp/Launchbury/Nominal-Utils.thy", - "formal/lean/mathlib/topology/category/Profinite/default.lean", - "formal/lean/mathlib/topology/sheaves/sheaf_condition/opens_le_cover.lean", - "formal/afp/Combinatorics_Words/CoWBasic.thy", - "formal/afp/Jordan_Normal_Form/Complexity_Carrier.thy", - "formal/afp/Mason_Stothers/Mason_Stothers.thy", - "formal/mizar/polynom2.miz", - "formal/afp/Separation_Algebra/ex/Sep_Tactics_Test.thy", - "formal/afp/Median_Of_Medians_Selection/document/root.tex", - "formal/mizar/realalg2.miz", - "formal/afp/Auto2_Imperative_HOL/Functional/Lists_Ex.thy", - "formal/afp/SPARCv8/SparcModel_MMU/Sparc_Instruction.thy", - "formal/afp/Independence_CH/Powerset_Axiom.thy", - "formal/afp/Differential_Dynamic_Logic/Differential_Axioms.thy", - "formal/afp/Randomised_Social_Choice/Automation/SDS_Automation.thy", - "formal/afp/JinjaThreads/Compiler/Execs.thy", - "formal/hol/Help/basic_ss.doc", - "formal/afp/HOL-CSP/Det.thy", - "formal/lean/liquid/for_mathlib/abelian_sheaves/main.lean", - "formal/afp/HotelKeyCards/document/conclu.tex", - "formal/afp/CakeML/generated/CakeML/LibAuxiliary.thy", - "formal/afp/Program-Conflict-Analysis/Semantics.thy", - "formal/hol/Help/prebroken_binops.doc", - "formal/afp/Hermite/document/root.tex", - "formal/afp/MFODL_Monitor_Optimized/Optimized_MTL.thy", - "formal/hol/Help/pp_print_term.doc", - "formal/afp/CCS/Strong_Sim_Pres.thy", - "formal/lean/mathlib/analysis/normed_space/operator_norm.lean", - "formal/mizar/eqrel_1.miz", - "formal/afp/BytecodeLogicJmlTypes/Cachera.thy", - "formal/afp/CakeML_Codegen/Terms/Strong_Term.thy", - "formal/mizar/pralg_2.miz", - "formal/afp/Integration/Integral.thy", - "formal/afp/QR_Decomposition/QR_Decomposition_IArrays.thy", - "formal/afp/Lowe_Ontological_Argument/document/root.tex", - "formal/afp/Password_Authentication_Protocol/document/root.tex", - "formal/afp/Ordinary_Differential_Equations/Ex/Examples_Poincare_Map.thy", - "formal/lean/mathlib/data/ordmap/ordset.lean", - "formal/afp/JinjaThreads/Framework/FWProgressAux.thy", - "formal/hol/EC/secp256k1.ml", - "formal/afp/Category/document/root.tex", - "formal/lean/liquid/for_mathlib/short_complex.lean", - "formal/afp/Strong_Security/Domain_example.thy", - "formal/mizar/waybel_3.miz", - "formal/afp/Rewrite_Properties_Reduction/Rewriting/Replace_Constant.thy", - "formal/afp/Dedekind_Real/Dedekind_Real.thy", - "formal/afp/Kruskal/Kruskal_Refine.thy", - 
"formal/afp/IMP_Compiler_Reuse/Compiler2.thy", - "formal/lean/mathlib/data/nat/choose/bounds.lean", - "formal/afp/Buildings/Building.thy", - "formal/afp/Affine_Arithmetic/Affine_Arithmetic_Auxiliarities.thy", - "formal/lean/mathlib/category_theory/limits/shapes/binary_products.lean", - "formal/afp/LambdaAuth/Nominal2_Lemmas.thy", - "formal/lean/mathlib/number_theory/zsqrtd/basic.lean", - "formal/afp/Perfect-Number-Thm/Sigma.thy", - "formal/afp/Incredible_Proof_Machine/document/root.tex", - "formal/hol/Help/numdom.doc", - "formal/mizar/lattice8.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p1124.lean", - "formal/afp/AODV/variants/b_fwdrreps/B_Seq_Invariants.thy", - "formal/lean/mathlib/category_theory/limits/shapes/zero_morphisms.lean", - "formal/afp/Order_Lattice_Props/Sup_Lattice.thy", - "formal/afp/Transition_Systems_and_Automata/Automata/NBA/NGBA_Implement.thy", - "formal/afp/Functional-Automata/DA.thy", - "formal/lean/liquid/for_mathlib/snake_lemma2.lean", - "formal/afp/Transition_Systems_and_Automata/Automata/NBA/NBA_Explicit.thy", - "formal/afp/DiskPaxos/document/tlaspec.tex", - "formal/afp/Farkas/Simplex_for_Reals.thy", - "formal/afp/DiskPaxos/DiskPaxos_Inv4.thy", - "formal/afp/UPF/document/root.tex", - "formal/afp/FileRefinement/document/introduction.tex", - "formal/afp/First_Order_Terms/Subsumption.thy", - "formal/afp/IMAP-CRDT/IMAP-proof-independent.thy", - "formal/afp/Jordan_Normal_Form/Show_Arctic.thy", - "formal/hol/Help/mk_conj.doc", - "formal/lean/mathlib/data/opposite.lean", - "formal/hol/Help/set_goal.doc", - "formal/mizar/algseq_1.miz", - "formal/lean/mathlib/topology/urysohns_bounded.lean", - "formal/hol/Help/COND_ELIM_CONV.doc", - "formal/mizar/rcomp_1.miz", - "formal/afp/CakeML_Codegen/Terms/Constructors.thy", - "formal/afp/Hybrid_Systems_VCs/PredicateTransformers/HS_VC_PT_Examples.thy", - "formal/afp/LocalLexing/Limit.thy", - "formal/afp/Types_To_Sets_Extension/Examples/TTS_Foundations/FNDS_Introduction.thy", - "formal/hol/Help/parse_as_infix.doc", - "formal/mizar/net_1.miz", - "formal/afp/KBPs/SPRView.thy", - "formal/lean/sphere-eversion/loops/surrounding.lean", - "formal/mizar/field_8.miz", - "formal/afp/SpecCheck/document/root.tex", - "formal/afp/BytecodeLogicJmlTypes/Logic.thy", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_SMC_CAT.thy", - "formal/afp/Perron_Frobenius/Perron_Frobenius_Irreducible.thy", - "formal/hol/Library/multiplicative.ml", - "formal/afp/Automatic_Refinement/Tool/Autoref_Chapter.thy", - "formal/afp/Refine_Imperative_HOL/Sepref_Rules.thy", - "formal/mizar/ntalgo_2.miz", - "formal/afp/Stateful_Protocol_Composition_and_Typing/Stateful_Typing.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p182.lean", - "formal/afp/Independence_CH/Replacement_Axiom.thy", - "formal/afp/Case_Labeling/document/root.tex", - "formal/afp/Abstract-Hoare-Logics/document/root.tex", - "formal/afp/Pi_Calculus/Weak_Late_Step_Sim_Pres.thy", - "formal/mizar/polnot_1.miz", - "formal/afp/Imperative_Insertion_Sort/Imperative_Insertion_Sort.thy", - "formal/afp/Polynomial_Interpolation/Lagrange_Interpolation.thy", - "formal/afp/Virtual_Substitution/PolyAtoms.thy", - "formal/lean/mathlib/analysis/box_integral/partition/subbox_induction.lean", - "formal/afp/Simpl/HoareTotal.thy", - "formal/afp/CoCon/Reviewer_Assignment_Confidentiality/Reviewer_Assignment_Intro.thy", - "formal/lean/mathlib/ring_theory/polynomial/scale_roots.lean", - "formal/hol/Help/is_exists.doc", - "formal/lean/mathlib/linear_algebra/matrix/nonsingular_inverse.lean", - 
"formal/lean/liquid/hacks_and_tricks/by_exactI_hack.lean", - "formal/lean/mathlib/analysis/convolution.lean", - "formal/lean/mathlib/category_theory/abelian/right_derived.lean", - "formal/afp/DiskPaxos/DiskPaxos_Chosen.thy", - "formal/afp/Minsky_Machines/document/root.tex", - "formal/afp/Polynomials/MPoly_Type_Univariate.thy", - "formal/afp/Refine_Imperative_HOL/Sepref_Intf_Util.thy", - "formal/mizar/vectsp10.miz", - "formal/afp/JinjaThreads/Compiler/JVMTau.thy", - "formal/afp/Types_To_Sets_Extension/Examples/TTS_Foundations/Orders/Set_Simple_Orders.thy", - "formal/lean/mathlib/algebraic_geometry/sheafed_space.lean", - "formal/hol/Help/aconv.doc", - "formal/afp/Simpl/XVcg.thy", - "formal/lean/mathlib/category_theory/monoidal/rigid/functor_category.lean", - "formal/afp/PseudoHoops/SpecialPseudoHoops.thy", - "formal/afp/DPRM_Theorem/Register_Machine/RegisterMachineSpecification.thy", - "formal/hol/Minisat/dimacs_tools.ml", - "formal/lean/mathlib/topology/category/Profinite/projective.lean", - "formal/afp/Forcing/Union_Axiom.thy", - "formal/afp/JinjaThreads/BV/BCVExec.thy", - "formal/afp/KBPs/KBPsAlg.thy", - "formal/lean/liquid/for_mathlib/is_biprod.lean", - "formal/mizar/card_3.miz", - "formal/afp/FeatherweightJava/FJDefs.thy", - "formal/afp/Dict_Construction/Test_Side_Conditions.thy", - "formal/afp/Groebner_Macaulay/Monomial_Module.thy", - "formal/hol/Multivariate/geom.ml", - "formal/afp/HotelKeyCards/document/root.tex", - "formal/afp/Graph_Theory/Euler.thy", - "formal/mizar/rsspace4.miz", - "formal/lean/mathlib/data/pfunctor/multivariate/W.lean", - "formal/hol/Help/REAL_INT_SUB_CONV.doc", - "formal/lean/liquid/for_mathlib/ext.lean", - "formal/afp/Consensus_Refined/Consensus_Types.thy", - "formal/afp/Abs_Int_ITP2012/Abs_Int1_const.thy", - "formal/afp/Prim_Dijkstra_Simple/Prim_Impl.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/xmysqpymzsqpzmxsqeqxyz_xpypzp6dvdx3y3z3.lean", - "formal/afp/Key_Agreement_Strong_Adversaries/sklvl3.thy", - "formal/afp/Simpl/HoarePartial.thy", - "formal/afp/Collections/Lib/Diff_Array.thy", - "formal/lean/mathlib/group_theory/subsemigroup/operations.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p125.lean", - "formal/afp/Ordinary_Differential_Equations/Ex/Lorenz/Result_File_Coarse.thy", - "formal/lean/mathlib/category_theory/limits/comma.lean", - "formal/afp/Incredible_Proof_Machine/Abstract_Formula.thy", - "formal/afp/Probabilistic_Timed_Automata/library/Stream_More.thy", - "formal/lean/mathlib/measure_theory/measure/null_measurable.lean", - "formal/afp/SPARCv8/SparcModel_MMU/RegistersOps.thy", - "formal/afp/Generic_Deriving/tests/Derive_Eq.thy", - "formal/afp/Priority_Search_Trees/PST_General.thy", - "formal/afp/Ordinary_Differential_Equations/Library/Matrix_Exponential.thy", - "formal/afp/JinjaThreads/Compiler/J1WellForm.thy", - "formal/lean/mathlib/computability/tm_to_partrec.lean", - "formal/mizar/roughs_1.miz", - "formal/afp/Combinatorics_Words/document/root.tex", - "formal/afp/Incredible_Proof_Machine/Build_Incredible_Tree.thy", - "formal/afp/Verified_SAT_Based_AI_Planning/SAS_Plus_STRIPS.thy", - "formal/mizar/algstr_0.miz", - "formal/mizar/scmisort.miz", - "formal/afp/Lp/Lp.thy", - "formal/afp/Isabelle_Marries_Dirac/Quantum.thy", - "formal/afp/VerifyThis2018/lib/Array_Map_Default.thy", - "formal/afp/HRB-Slicing/JinjaVM_Inter/JVMCFG_wf.thy", - "formal/afp/Topological_Semantics/sse_boolean_algebra_quantification.thy", - "formal/lean/mathlib/algebra/ring/prod.lean", - "formal/lean/mathlib/algebraic_topology/nerve.lean", - 
"formal/afp/Median_Method/Median.thy", - "formal/afp/Iptables_Semantics/Primitive_Matchers/IpAddresses.thy", - "formal/lean/mathlib/analysis/von_neumann_algebra/basic.lean", - "formal/mizar/simplex1.miz", - "formal/afp/Featherweight_OCL/collection_types/UML_Bag.thy", - "formal/afp/JinjaThreads/Common/StartConfig.thy", - "formal/afp/Concurrent_Ref_Alg/Iteration.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/9onxpypzleqsum2onxpy.lean", - "formal/afp/Jinja/DFA/Product.thy", - "formal/afp/Refine_Imperative_HOL/Userguides/Sepref_Guide_General_Util.thy", - "formal/afp/Slicing/StaticIntra/PDG.thy", - "formal/afp/IMO2019/IMO2019_Q4.thy", - "formal/lean/mathlib/algebra/lie/weights.lean", - "formal/afp/CryptHOL/Computational_Model.thy", - "formal/afp/Linear_Recurrences/Rational_FPS_Solver.thy", - "formal/afp/Native_Word/Code_Target_Word_Base.thy", - "formal/hol/Help/C.doc", - "formal/hol/Help/extend_basic_rewrites.doc", - "formal/lean/mathlib/topology/instances/int.lean", - "formal/mizar/pdiff_9.miz", - "formal/afp/JinjaThreads/J/TypeSafe.thy", - "formal/lean/mathlib/category_theory/sites/dense_subsite.lean", - "formal/afp/Real_Time_Deque/Big.thy", - "formal/hol/100/cantor.ml", - "formal/afp/Jinja/DFA/LBVCorrect.thy", - "formal/afp/Optics/Prisms.thy", - "formal/afp/Propositional_Proof_Systems/HC_Compl_Consistency.thy", - "formal/hol/Help/NUM_FACT_CONV.doc", - "formal/lean/mathlib/topology/algebra/mul_action.lean", - "formal/mizar/topalg_7.miz", - "formal/afp/Gauss_Sums/Periodic_Arithmetic.thy", - "formal/lean/mathlib/data/string/defs.lean", - "formal/afp/Automated_Stateful_Protocol_Verification/trac/ml_yacc_lib.thy", - "formal/afp/ConcurrentIMP/ex/CIMP_locales.thy", - "formal/mizar/mesfunc3.miz", - "formal/mizar/jordan1j.miz", - "formal/hol/Help/mk_string.doc", - "formal/lean/mathlib/ring_theory/finiteness.lean", - "formal/afp/Binding_Syntax_Theory/Univ.thy", - "formal/lean/mathzoo/mathzoo/imports/miniF2F.lean", - "formal/afp/Formal_Puiseux_Series/Formal_Puiseux_Series.thy", - "formal/mizar/circcmb2.miz", - "formal/afp/SIFPL/VDM_OBJ.thy", - "formal/mizar/finseq_6.miz", - "formal/afp/Deep_Learning/DL_Network.thy", - "formal/afp/Hybrid_Systems_VCs/KleeneAlgebraTests/HS_VC_KAT_rel.thy", - "formal/afp/Simpl/ex/ComposeEx.thy", - "formal/lean/mathlib/analysis/p_series.lean", - "formal/afp/Constructive_Cryptography_CM/Construction_Utility.thy", - "formal/afp/Design_Theory/Design_Operations.thy", - "formal/lean/mathlib/data/polynomial/cardinal.lean", - "formal/afp/Conditional_Transfer_Rule/CTR_Tools/CTR_Tools.thy", - "formal/afp/Binomial-Heaps/document/root.tex", - "formal/afp/Isabelle_Meta_Model/toy_example/embedding/core/Floor1_ctxt.thy", - "formal/afp/Iptables_Semantics/Common/Word_Upto.thy", - "formal/afp/List-Index/document/root.tex", - "formal/mizar/radix_6.miz", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/2rootsintpoly_am10tap11eqasqpam110.lean", - "formal/lean/mathlib/measure_theory/integral/divergence_theorem.lean", - "formal/lean/liquid/normed_snake_dual.lean", - "formal/lean/mathlib/order/filter/germ.lean", - "formal/afp/KBPs/MuddyChildren.thy", - "formal/hol/Help/zip.doc", - "formal/lean/lftcm/hints/category_theory/exercise1/hint1.lean", - "formal/hol/Help/gcd_num.doc", - "formal/afp/IsaNet/infrastructure/Abstract_XOR.thy", - "formal/afp/SATSolverVerification/SatSolverCode.thy", - "formal/lean/mathlib/algebra/group_power/identities.lean", - "formal/afp/Wetzels_Problem/Wetzels_Problem.thy", - "formal/afp/IFC_Tracking/PDG.thy", - "formal/afp/Hyperdual/document/root.tex", - 
"formal/lean/mathlib/data/polynomial/iterated_deriv.lean", - "formal/afp/Bicategory/CatBicat.thy", - "formal/lean/mathlib/group_theory/archimedean.lean", - "formal/afp/PLM/TAO_99_SanityTests.thy", - "formal/afp/Falling_Factorial_Sum/document/root.tex", - "formal/lean/mathzoo/mathzoo/olympiads/imo/2005/p4.lean", - "formal/lean/mathlib/ring_theory/ideal/quotient.lean", - "formal/afp/Dynamic_Tables/document/root.tex", - "formal/hol/Help/ONCE_SIMPLIFY_CONV.doc", - "formal/hol/Help/variables.doc", - "formal/afp/AODV/variants/b_fwdrreps/B_Aodv_Message.thy", - "formal/lean/liquid/free_pfpng/main.lean", - "formal/afp/Coinductive/Examples/Koenigslemma.thy", - "formal/lean/mathlib/topology/gluing.lean", - "formal/lean/mathlib/model_theory/elementary_maps.lean", - "formal/mizar/topalg_5.miz", - "formal/afp/WebAssembly/Wasm_Base_Defs.thy", - "formal/afp/UTP/utp/utp_pred_laws.thy", - "formal/lean/liquid/for_mathlib/derived/ext_coproducts.lean", - "formal/hol/Help/NUM_MOD_CONV.doc", - "formal/afp/WebAssembly/Wasm_Axioms.thy", - "formal/afp/Lambda_Free_KBOs/Lambda_Free_KBO_Util.thy", - "formal/lean/mathlib/linear_algebra/dimension.lean", - "formal/afp/Ackermanns_not_PR/Primrec.thy", - "formal/afp/Optics/document/root.tex", - "formal/afp/List_Update/List_Factoring.thy", - "formal/hol/Help/FIND_ASSUM.doc", - "formal/afp/Lowe_Ontological_Argument/LoweOntologicalArgument_7.thy", - "formal/afp/Correctness_Algebras/Domain.thy", - "formal/afp/LocalLexing/TheoremD6.thy", - "formal/afp/Game_Based_Crypto/IND_CCA2_sym.thy", - "formal/mizar/yoneda_1.miz", - "formal/mizar/facirc_1.miz", - "formal/afp/JinjaThreads/J/Deadlocked.thy", - "formal/afp/Word_Lib/Word_64.thy", - "formal/afp/Word_Lib/Least_significant_bit.thy", - "formal/afp/Gabow_SCC/document/conclusion.tex", - "formal/lean/mathlib/order/category/BoundedLattice.lean", - "formal/mizar/arytm_2.miz", - "formal/lean/mathlib/data/polynomial/mirror.lean", - "formal/afp/Key_Agreement_Strong_Adversaries/dhlvl3.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2013/a/p4.lean", - "formal/afp/Refine_Imperative_HOL/benchmarks/Dijkstra/isabelle/Dijkstra_Benchmark.thy", - "formal/afp/LOFT/OpenFlow_Serialize.thy", - "formal/afp/Free-Groups/Isomorphisms.thy", - "formal/afp/UpDown_Scheme/Triangular_Function.thy", - "formal/afp/SIFPL/Lattice.thy", - "formal/afp/Simplicial_complexes_and_boolean_functions/Boolean_functions.thy", - "formal/afp/Jordan_Hoelder/SimpleGroups.thy", - "formal/afp/Psi_Calculi/Simulation.thy", - "formal/lean/mathlib/category_theory/limits/constructions/equalizers.lean", - "formal/mizar/sprect_5.miz", - "formal/mizar/group_11.miz", - "formal/afp/JinjaThreads/MM/Non_Speculative.thy", - "formal/hol/Help/REAL_INT_ADD_CONV.doc", - "formal/mizar/irrat_1.miz", - "formal/afp/Functional-Automata/RegExp2NAe.thy", - "formal/hol/Help/mk_fthm.doc", - "formal/afp/JinjaThreads/Compiler/Correctness.thy", - "formal/afp/Relational_Disjoint_Set_Forests/Disjoint_Set_Forests.thy", - "formal/afp/MFMC_Countable/MFMC_Reduction.thy", - "formal/afp/CakeML/generated/CakeML/Lib.thy", - "formal/afp/Functional_Ordered_Resolution_Prover/document/root.tex", - "formal/afp/JinjaThreads/BV/BVProgressThreaded.thy", - "formal/coq/math-comp/solvable/extremal.v", - "formal/afp/Girth_Chromatic/Ugraphs.thy", - "formal/afp/Dependent_SIFUM_Type_Systems/Language.thy", - "formal/afp/Three_Circles/Bernstein_01.thy", - "formal/afp/Knot_Theory/Example.thy", - "formal/afp/SuperCalc/document/root.tex", - "formal/hol/Help/ABS_TAC.doc", - "formal/lean/mathlib/analysis/complex/conformal.lean", - 
"formal/mizar/scmpds_i.miz", - "formal/mizar/topreal8.miz", - "formal/afp/Hello_World/HelloWorld.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p483.lean", - "formal/hol/Help/REFL.doc", - "formal/afp/Key_Agreement_Strong_Adversaries/Runs.thy", - "formal/lean/mathlib/data/polynomial/eval.lean", - "formal/hol/Examples/reduct.ml", - "formal/afp/IsaNet/infrastructure/Take_While.thy", - "formal/lean/mathlib/data/set/sigma.lean", - "formal/afp/JinjaThreads/Examples/Examples_Main.thy", - "formal/afp/C2KA_DistributedSystems/Communication_C2KA.thy", - "formal/lean/mathlib/measure_theory/function/strongly_measurable.lean", - "formal/lean/mathlib/category_theory/category/PartialFun.lean", - "formal/afp/Strong_Security/document/root.tex", - "formal/afp/Incredible_Proof_Machine/Abstract_Rules_To_Incredible.thy", - "formal/afp/Quick_Sort_Cost/Quick_Sort_Average_Case.thy", - "formal/afp/SPARCv8/lib/wp/DetMonadLemmas.thy", - "formal/afp/Consensus_Refined/Observing/Two_Step_Observing.thy", - "formal/lean/liquid/thm95/polyhedral_iso.lean", - "formal/afp/Linear_Recurrences/Factorizations.thy", - "formal/lean/mathlib/category_theory/monoidal/internal/types.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p42.lean", - "formal/hol/Help/VALID.doc", - "formal/lean/mathlib/linear_algebra/matrix/reindex.lean", - "formal/hol/Help/REWRITE_CONV.doc", - "formal/afp/Refine_Imperative_HOL/Examples/Sepref_Dijkstra.thy", - "formal/afp/Extended_Finite_State_Machines/Value.thy", - "formal/afp/Tail_Recursive_Functions/CaseStudy1.thy", - "formal/afp/Propositional_Proof_Systems/MiniSC_Craig.thy", - "formal/afp/Modal_Logics_for_NTS/Weak_Transition_System.thy", - "formal/afp/Regular_Tree_Relations/Regular_Relation_Abstract_Impl.thy", - "formal/lean/mathlib/category_theory/idempotents/simplicial_object.lean", - "formal/mizar/hahnban.miz", - "formal/lean/lftcm/hints/category_theory/exercise2/hint4.lean", - "formal/afp/CoSMed/Post_Confidentiality/Post_Intro.thy", - "formal/afp/LP_Duality/document/root.tex", - "formal/lean/liquid/normed_spectral.lean", - "formal/hol/Help/SYM.doc", - "formal/afp/Frequency_Moments/Frequency_Moments_Preliminary_Results.thy", - "formal/lean/mathlib/algebra/hom/freiman.lean", - "formal/lean/mathlib/topology/metric_space/shrinking_lemma.lean", - "formal/afp/Van_Emde_Boas_Trees/VEBT_BuildupMemImp.thy", - "formal/afp/Tarskis_Geometry/Action.thy", - "formal/afp/Extended_Finite_State_Machine_Inference/Code_Generation.thy", - "formal/lean/mathlib/data/fun_like/basic.lean", - "formal/afp/Prim_Dijkstra_Simple/Prim_Abstract.thy", - "formal/afp/CryptHOL/Cyclic_Group_SPMF.thy", - "formal/afp/Deriving/Comparator_Generator/Compare.thy", - "formal/hol/EC/edwards.ml", - "formal/afp/HOLCF-Prelude/Data_Bool.thy", - "formal/hol/Jordan/misc_defs_and_lemmas.ml", - "formal/lean/mathlib/model_theory/finitely_generated.lean", - "formal/afp/Incredible_Proof_Machine/Abstract_Rules.thy", - "formal/lean/mathlib/algebra/category/Ring/adjunctions.lean", - "formal/lean/mathlib/data/quot.lean", - "formal/afp/LLL_Factorization/Missing_Dvd_Int_Poly.thy", - "formal/hol/Help/thenc_.doc", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1983/p1.lean", - "formal/hol/Help/REAL_INT_RED_CONV.doc", - "formal/mizar/mcart_1.miz", - "formal/afp/CryptHOL/Misc_CryptHOL.thy", - "formal/afp/Simpl/ex/XVcgEx.thy", - "formal/mizar/sfmastr2.miz", - "formal/hol/Geometric_Algebra/make.ml", - "formal/hol/Help/CONJ.doc", - "formal/afp/Goedel_HFSet_Semanticless/Pseudo_Coding.thy", - 
"formal/lean/mathlib/category_theory/monoidal/coherence.lean", - "formal/afp/Metalogic_ProofChecker/EtaNorm.thy", - "formal/afp/Fourier/Confine.thy", - "formal/lean/sphere-eversion/to_mathlib/partition.lean", - "formal/afp/Polynomial_Interpolation/Ring_Hom.thy", - "formal/afp/Virtual_Substitution/UniAtoms.thy", - "formal/lean/perfectoid/sheaves/covering.lean", - "formal/lean/mathlib/data/fin/tuple/basic.lean", - "formal/lean/mathlib/field_theory/perfect_closure.lean", - "formal/afp/pGCL/Termination.thy", - "formal/mizar/jordan15.miz", - "formal/afp/Iptables_Semantics/Alternative_Semantics.thy", - "formal/afp/BTree/BTree_ImpSet.thy", - "formal/afp/Intro_Dest_Elim/IHOL_IDE.thy", - "formal/afp/MFOTL_Monitor/MFOTL.thy", - "formal/afp/Card_Partitions/Set_Partition.thy", - "formal/mizar/zf_lang1.miz", - "formal/afp/Abstract_Completeness/Propositional_Logic.thy", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_SecGwExt_impl.thy", - "formal/mizar/henmodel.miz", - "formal/mizar/cat_4.miz", - "formal/afp/Complex_Bounded_Operators/document/root.tex", - "formal/afp/Security_Protocol_Refinement/Auth_simple/m3_sig.thy", - "formal/mizar/pzfmisc1.miz", - "formal/afp/JinjaThreads/BV/BVConform.thy", - "formal/lean/liquid/invpoly/basic.lean", - "formal/afp/VolpanoSmith/document/root.tex", - "formal/afp/Poincare_Bendixson/Invariance.thy", - "formal/afp/Completeness/Completeness.thy", - "formal/mizar/dualsp02.miz", - "formal/afp/Aggregation_Algebras/Semigroups_Big.thy", - "formal/mizar/mesfun6c.miz", - "formal/afp/SATSolverVerification/SatSolverVerification.thy", - "formal/lean/mathlib/data/qpf/multivariate/constructions/prj.lean", - "formal/afp/Jinja/Compiler/Hidden.thy", - "formal/hol/Help/INSTANTIATE_UPPERCASE.doc", - "formal/mizar/waybel32.miz", - "formal/afp/Separation_Logic_Imperative_HOL/Examples/Hash_Map.thy", - "formal/mizar/yellow_0.miz", - "formal/lean/lftcm/for_mathlib/manifolds.lean", - "formal/afp/Selection_Heap_Sort/HeapImperative.thy", - "formal/lean/mathlib/category_theory/graded_object.lean", - "formal/hol/100/reciprocity.ml", - "formal/afp/Floyd_Warshall/FW_Code.thy", - "formal/hol/Help/EVERY_ASSUM.doc", - "formal/afp/Isabelle_C/C11-FrontEnd/src/C_Ast.thy", - "formal/afp/Correctness_Algebras/Binary_Iterings_Strict.thy", - "formal/afp/ConcurrentGC/Local_Invariants_Lemmas.thy", - "formal/afp/Matrices_for_ODEs/MTX_Examples.thy", - "formal/lean/mathlib/analysis/asymptotics/superpolynomial_decay.lean", - "formal/afp/Featherweight_OCL/basic_types/UML_Void.thy", - "formal/lean/mathlib/category_theory/conj.lean", - "formal/afp/Prime_Distribution_Elementary/Moebius_Mu_Sum.thy", - "formal/afp/Automatic_Refinement/Tool/Autoref_Gen_Algo.thy", - "formal/afp/Simple_Firewall/Primitives/Primitives_toString.thy", - "formal/afp/Gabow_SCC/Gabow_Skeleton.thy", - "formal/mizar/zf_model.miz", - "formal/afp/Gauss-Jordan-Elim-Fun/Gauss_Jordan_Elim_Fun.thy", - "formal/afp/Containers/Closure_Set.thy", - "formal/lean/mathlib/category_theory/subobject/default.lean", - "formal/afp/SequentInvertibility/ModalSequents.thy", - "formal/afp/List_Update/document/root.tex", - "formal/afp/Coinductive/Examples/Coinductive_Examples.thy", - "formal/lean/perfectoid/for_mathlib/topological_groups.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2008/a/p2.lean", - "formal/mizar/msafree4.miz", - "formal/afp/Constructor_Funs/document/root.tex", - "formal/afp/Deriving/Comparator_Generator/Compare_Order_Instances.thy", - "formal/hol/Help/hyp.doc", - 
"formal/afp/CAVA_LTL_Modelchecker/SM/Analysis/SM_Variables.thy", - "formal/mizar/urysohn3.miz", - "formal/lean/mathlib/category_theory/sites/types.lean", - "formal/afp/Polynomials/More_Modules.thy", - "formal/hol/miz3/Samples/tobias.ml", - "formal/afp/Refine_Imperative_HOL/Lib/User_Smashing.thy", - "formal/afp/Independence_CH/Kappa_Closed_Notions.thy", - "formal/afp/Collections/GenCF/Impl/Impl_Array_Map.thy", - "formal/lean/mathlib/category_theory/core.lean", - "formal/afp/Isabelle_Meta_Model/meta_isabelle/Printer_Pure.thy", - "formal/afp/CoreC++/Annotate.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p102.lean", - "formal/mizar/glib_008.miz", - "formal/mizar/enumset1.miz", - "formal/afp/DFS_Framework/DFS_Chapter_Framework.thy", - "formal/lean/mathlib/category_theory/preadditive/single_obj.lean", - "formal/afp/Formula_Derivatives/WS1S_Presburger_Equivalence.thy", - "formal/afp/Monad_Memo_DP/example/OptBST.thy", - "formal/afp/Polynomial_Factorization/Kronecker_Factorization.thy", - "formal/afp/Network_Security_Policy_Verification/TopoS_Helper.thy", - "formal/afp/MFMC_Countable/MFMC_Unbounded.thy", - "formal/lean/mathlib/algebra/direct_sum/finsupp.lean", - "formal/lean/liquid/pseudo_normed_group/category/strictProFiltPseuNormGrpWithTinv.lean", - "formal/afp/SIFUM_Type_Systems/Language.thy", - "formal/lean/mathlib/algebra/hom/ring.lean", - "formal/afp/Multiset_Ordering_NPC/Multiset_Ordering_in_NP.thy", - "formal/afp/IMP2/doc/Quickstart_Guide.thy", - "formal/lean/mathlib/ring_theory/localization/basic.lean", - "formal/afp/BTree/BTree_Imp.thy", - "formal/lean/mathlib/order/category/BoundedDistribLattice.lean", - "formal/afp/DPRM_Theorem/Diophantine/Binary_Orthogonal.thy", - "formal/hol/Help/REAL_RAT_EQ_CONV.doc", - "formal/afp/Native_Word/Uint64.thy", - "formal/afp/Coinductive/Examples/TLList_CCPO_Examples.thy", - "formal/afp/Safe_OCL/Tuple.thy", - "formal/afp/JinjaDCI/Compiler/Compiler.thy", - "formal/afp/GaleStewart_Games/document/root.tex", - "formal/hol/Tutorial/Sets_and_functions.ml", - "formal/afp/ADS_Functor/document/root.tex", - "formal/afp/Show/Old_Datatype/Old_Show.thy", - "formal/afp/BNF_Operations/Compose.thy", - "formal/afp/CryptHOL/document/root.tex", - "formal/afp/SpecCheck/Examples/SpecCheck_Examples.thy", - "formal/hol/Help/CHANGED_TAC.doc", - "formal/afp/Jinja/BV/BVExec.thy", - "formal/afp/SpecCheck/Shrink/SpecCheck_Shrink.thy", - "formal/afp/LocalLexing/PathLemmas.thy", - "formal/afp/IMAP-CRDT/IMAP-def.thy", - "formal/afp/Bounded_Deducibility_Security/BD_Security_Triggers.thy", - "formal/afp/Complete_Non_Orders/Kleene_Fixed_Point.thy", - "formal/lean/mathlib/algebra/quaternion_basis.lean", - "formal/afp/WebAssembly/Wasm.thy", - "formal/afp/Berlekamp_Zassenhaus/Unique_Factorization.thy", - "formal/afp/Tree-Automata/Ta_impl_codegen.thy", - "formal/afp/Auto2_Imperative_HOL/Imperative/Dijkstra_Impl.thy", - "formal/afp/Proof_Strategy_Language/Try_Hard.thy", - "formal/afp/Bicategory/PseudonaturalTransformation.thy", - "formal/afp/Propositional_Proof_Systems/Resolution_Sound.thy", - "formal/hol/Help/print_to_string.doc", - "formal/afp/Virtual_Substitution/InfinitesimalsUni.thy", - "formal/lean/mathlib/analysis/special_functions/trigonometric/bounds.lean", - "formal/lean/liquid/condensed/ab.lean", - "formal/afp/Metalogic_ProofChecker/Theory.thy", - "formal/lean/perfectoid/for_mathlib/topological_field.lean", - "formal/afp/Power_Sum_Polynomials/document/root.tex", - "formal/lean/mathlib/ring_theory/graded_algebra/homogeneous_localization.lean", - 
"formal/hol/Help/GEN_REWRITE_TAC.doc", - "formal/mizar/sppol_2.miz", - "formal/hol/Unity/mk_comp_unity.ml", - "formal/lean/mathlib/measure_theory/decomposition/lebesgue.lean", - "formal/mizar/lpspacc1.miz", - "formal/lean/mathlib/topology/continuous_function/locally_constant.lean", - "formal/afp/Hyperdual/Hyperdual.thy", - "formal/afp/Forcing/Names.thy", - "formal/mizar/pappus.miz", - "formal/afp/HRB-Slicing/Proc/WellFormProgs.thy", - "formal/afp/Schutz_Spacetime/TemporalOrderOnPath.thy", - "formal/lean/mathlib/order/countable_dense_linear_order.lean", - "formal/lean/mathlib/linear_algebra/tensor_algebra/basic.lean", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/Analysis_Tainting.thy", - "formal/mizar/termord.miz", - "formal/lean/mathlib/order/partition/finpartition.lean", - "formal/mizar/zmodul02.miz", - "formal/afp/Regular_Tree_Relations/Util/Ground_Terms.thy", - "formal/mizar/binari_4.miz", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1984/p1.lean", - "formal/afp/Propositional_Proof_Systems/ND.thy", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1990/p15.lean", - "formal/afp/Word_Lib/Next_and_Prev.thy", - "formal/afp/UTP/utp/examples/factorial.thy", - "formal/lean/mathlib/topology/uniform_space/matrix.lean", - "formal/afp/Poincare_Bendixson/ODE_Misc.thy", - "formal/afp/Relational_Minimum_Spanning_Trees/Kruskal.thy", - "formal/afp/Stateful_Protocol_Composition_and_Typing/examples/Example_TLS.thy", - "formal/afp/Launchbury/HeapSemantics.thy", - "formal/afp/CCS/Strong_Sim_SC.thy", - "formal/hol/Help/REAL_RAT_POW_CONV.doc", - "formal/afp/Pi_Transcendental/Pi_Transcendental.thy", - "formal/lean/mathlib/algebra/continued_fractions/terminated_stable.lean", - "formal/afp/FOL_Seq_Calc1/Sequent.thy", - "formal/afp/Resolution_FOL/Resolution.thy", - "formal/afp/SIFPL/VS_OBJ.thy", - "formal/mizar/ordinal6.miz", - "formal/afp/Groebner_Macaulay/document/root.tex", - "formal/afp/Circus/Var_list.thy", - "formal/lean/liquid/for_mathlib/homological_complex_map_d_to_d_from.lean", - "formal/hol/Multivariate/complex_database.ml", - "formal/lean/mathlib/measure_theory/measure/outer_measure.lean", - "formal/lean/mathlib/topology/uniform_space/compact_convergence.lean", - "formal/hol/EC/computegroup.ml", - "formal/mizar/matrixc1.miz", - "formal/mizar/clvect_2.miz", - "formal/hol/100/circle.ml", - "formal/afp/Network_Security_Policy_Verification/TopoS_Composition_Theory_impl.thy", - "formal/afp/Safe_Distance/document/root.tex", - "formal/afp/CAVA_Automata/CAVA_Base/All_Of_CAVA_Base.thy", - "formal/afp/UPF/document/conclusion.tex", - "formal/afp/Monad_Normalisation/Monad_Normalisation_Test.thy", - "formal/lean/lftcm/solutions/monday/metaprogramming.lean", - "formal/hol/Help/PURE_SIMP_CONV.doc", - "formal/lean/liquid/for_mathlib/endomorphisms/homology.lean", - "formal/lean/mathlib/algebra/char_p/exp_char.lean", - "formal/lean/mathlib/data/zmod/basic.lean", - "formal/lean/mathlib/data/fintype/basic.lean", - "formal/lean/liquid/for_mathlib/AddCommGroup/ab4.lean", - "formal/afp/Types_To_Sets_Extension/Examples/SML_Relativization/Simple_Orders/SML_Simple_Orders.thy", - "formal/afp/JinjaDCI/Compiler/Compiler1.thy", - "formal/afp/FO_Theory_Rewriting/FOR_Check.thy", - "formal/lean/mathlib/algebra/divisibility.lean", - "formal/afp/Modal_Logics_for_NTS/Nominal_Wellfounded.thy", - "formal/lean/mathlib/set_theory/ordinal/cantor_normal_form.lean", - "formal/lean/mathlib/algebra/hom/non_unital_alg.lean", - "formal/afp/Ordinal/OrdinalFix.thy", - "formal/afp/CakeML/generated/Lem_relation.thy", - 
"formal/afp/BenOr_Kozen_Reif/More_Matrix.thy", - "formal/mizar/int_8.miz", - "formal/afp/Prpu_Maxflow/Fifo_Push_Relabel_Impl.thy", - "formal/afp/Quantales/Dioid_Models_New.thy", - "formal/afp/Kruskal/UGraph.thy", - "formal/afp/CakeML_Codegen/Rewriting/Rewriting_Pterm.thy", - "formal/lean/mathlib/data/set/default.lean", - "formal/mizar/lopclset.miz", - "formal/afp/LambdaMu/document/root.tex", - "formal/afp/Clique_and_Monotone_Circuits/Assumptions_and_Approximations.thy", - "formal/afp/DiscretePricing/Geometric_Random_Walk.thy", - "formal/afp/Grothendieck_Schemes/Set_Extras.thy", - "formal/afp/Jinja/document/root.tex", - "formal/mizar/binari_6.miz", - "formal/lean/sphere-eversion/lint.lean", - "formal/afp/InformationFlowSlicing/NonInterferenceWhile.thy", - "formal/lean/mathlib/category_theory/monad/basic.lean", - "formal/afp/GraphMarkingIBP/StackMark.thy", - "formal/lean/mathlib/analysis/normed_space/matrix_exponential.lean", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1965/p2.lean", - "formal/afp/KAT_and_DRA/SingleSorted/Conway_Tests.thy", - "formal/afp/InformationFlowSlicing_Inter/NonInterferenceInter.thy", - "formal/afp/MiniML/W.thy", - "formal/afp/Akra_Bazzi/Akra_Bazzi_Real.thy", - "formal/lean/mathlib/data/matrix/hadamard.lean", - "formal/hol/Rqe/asym.ml", - "formal/hol/Examples/kb.ml", - "formal/mizar/amistd_4.miz", - "formal/hol/Help/orelsec_.doc", - "formal/mizar/ndiff_3.miz", - "formal/lean/mathlib/control/monad/cont.lean", - "formal/lean/mathlib/category_theory/limits/preserves/shapes/products.lean", - "formal/lean/liquid/Lbar/functor.lean", - "formal/hol/Help/REDEPTH_SQCONV.doc", - "formal/mizar/fdiff_9.miz", - "formal/afp/FocusStreamsCaseStudies/Gateway_proof_aux.thy", - "formal/lean/liquid/for_mathlib/pullbacks.lean", - "formal/mizar/convex2.miz", - "formal/hol/Help/meson_prefine.doc", - "formal/hol/Examples/vitali.ml", - "formal/lean/mathlib/category_theory/limits/shapes/finite_products.lean", - "formal/afp/JinjaThreads/Execute/JVMExec_Execute.thy", - "formal/mizar/sppol_1.miz", - "formal/afp/Allen_Calculus/jointly_exhaustive.thy", - "formal/hol/Tutorial/Inductive_datatypes.ml", - "formal/lean/mathlib/category_theory/abelian/left_derived.lean", - "formal/lean/mathlib/category_theory/limits/preserves/functor_category.lean", - "formal/afp/Jordan_Normal_Form/Matrix_Complexity.thy", - "formal/afp/AWN/AWN_Labels.thy", - "formal/afp/Incredible_Proof_Machine/Propositional_Formulas.thy", - "formal/afp/CAVA_LTL_Modelchecker/SM/Lib/SOS_Misc_Add.thy", - "formal/mizar/wellord1.miz", - "formal/afp/Trie/Tries.thy", - "formal/afp/Collections/Examples/Collection_Examples.thy", - "formal/lean/mathlib/topology/algebra/const_mul_action.lean", - "formal/afp/Farkas/Farkas.thy", - "formal/afp/Monad_Memo_DP/heap_monad/Heap_Monad_Ext.thy", - "formal/lean/mathlib/measure_theory/measure/content.lean", - "formal/afp/CAVA_LTL_Modelchecker/CAVA_Abstract.thy", - "formal/afp/HOLCF-Prelude/HOLCF_Prelude.thy", - "formal/hol/Help/.valmod.doc", - "formal/lean/sphere-eversion/to_mathlib/unused/smoothness.lean", - "formal/afp/FOL_Seq_Calc2/Hintikka.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2000/p6.lean", - "formal/hol/Help/map2.doc", - "formal/lean/mathlib/combinatorics/quiver/subquiver.lean", - "formal/afp/Buffons_Needle/document/root.tex", - "formal/afp/GraphMarkingIBP/Graph.thy", - "formal/afp/Rank_Nullity_Theorem/Fundamental_Subspaces.thy", - "formal/mizar/anproj11.miz", - "formal/afp/Sigma_Commit_Crypto/Okamoto_Sigma_Commit.thy", - 
"formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p229.lean", - "formal/coq/odd-order/BGsection4.v", - "formal/lean/mathlib/topology/metric_space/holder.lean", - "formal/mizar/hausdorf.miz", - "formal/lean/mathlib/category_theory/limits/preserves/shapes/kernels.lean", - "formal/afp/TESL_Language/Denotational.thy", - "formal/coq/math-comp/fingroup/action.v", - "formal/afp/Prpu_Maxflow/document/root.tex", - "formal/lean/mathlib/category_theory/monoidal/Mod.lean", - "formal/afp/Eval_FO/FO.thy", - "formal/afp/Containers/Set_Linorder.thy", - "formal/lean/mathlib/analysis/convex/independent.lean", - "formal/afp/Key_Agreement_Strong_Adversaries/sklvl3_asymmetric.thy", - "formal/lean/mathlib/set_theory/game/basic.lean", - "formal/hol/Help/MK_DISJ_UPPERCASE.doc", - "formal/lean/liquid/for_mathlib/sheafification_mono.lean", - "formal/afp/Complex_Geometry/Unit_Circle_Preserving_Moebius.thy", - "formal/afp/Psi_Calculi/Weak_Cong_Pres.thy", - "formal/lean/mathlib/group_theory/double_coset.lean", - "formal/afp/GewirthPGCProof/document/root.tex", - "formal/mizar/hilbert1.miz", - "formal/afp/Finger-Trees/Test.thy", - "formal/coq/abel/xmathcomp/various.v", - "formal/lean/liquid/for_mathlib/split_exact.lean", - "formal/afp/LTL_Master_Theorem/document/root.tex", - "formal/mizar/projred1.miz", - "formal/hol/Help/list_mk_gabs.doc", - "formal/lean/mathlib/model_theory/quotients.lean", - "formal/afp/Algebraic_Numbers/Show_Real_Alg.thy", - "formal/afp/Flyspeck-Tame/Plane.thy", - "formal/lean/mathlib/data/json.lean", - "formal/lean/mathlib/algebra/order/pi.lean", - "formal/mizar/grfunc_1.miz", - "formal/afp/IsaNet/instances/ICING_variant.thy", - "formal/afp/Buchi_Complementation/Alternate.thy", - "formal/afp/Regex_Equivalence/Derivatives_Finite.thy", - "formal/afp/Topological_Semantics/sse_operation_positive_quantification.thy", - "formal/afp/Rewriting_Z/CL_Z.thy", - "formal/hol/Help/print_unambiguous_comprehensions.doc", - "formal/mizar/functor1.miz", - "formal/mizar/waybel10.miz", - "formal/lean/mathlib/data/mv_polynomial/comap.lean", - "formal/afp/Twelvefold_Way/Preliminaries.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2021/a/p19.lean", - "formal/mizar/rfinseq.miz", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1967/p3.lean", - "formal/mizar/lattices.miz", - "formal/afp/JinjaThreads/MM/JMM_Spec.thy", - "formal/afp/Taylor_Models/Taylor_Models_Misc.thy", - "formal/lean/mathlib/group_theory/perm/cycle/type.lean", - "formal/afp/DPRM_Theorem/Diophantine/Exponential_Relation.thy", - "formal/lean/mathlib/data/sum/basic.lean", - "formal/afp/Auto2_Imperative_HOL/Functional/Indexed_PQueue.thy", - "formal/afp/Collections/Iterator/SetIterator.thy", - "formal/coq/math-comp/field/algC.v", - "formal/coq/math-comp/algebra/fraction.v", - "formal/lean/mathlib/data/nat/periodic.lean", - "formal/afp/Auto2_Imperative_HOL/Functional/Interval.thy", - "formal/afp/FOL_Seq_Calc2/Countermodel.thy", - "formal/mizar/reloc.miz", - "formal/lean/liquid/laurent_measures/ses.lean", - "formal/hol/Help/mk_flist.doc", - "formal/lean/mathlib/analysis/special_functions/exp_deriv.lean", - "formal/lean/mathlib/analysis/specific_limits/floor_pow.lean", - "formal/lean/mathlib/algebra/homology/differential_object.lean", - "formal/afp/UTP/toolkit/utp_toolkit.thy", - "formal/afp/EdmondsKarp_Maxflow/Augmenting_Path_BFS.thy", - "formal/afp/LTL_to_DRA/Mojmir.thy", - "formal/mizar/lang1.miz", - "formal/lean/mathlib/combinatorics/additive/ruzsa_covering.lean", - "formal/lean/mathlib/ring_theory/algebraic_independent.lean", - 
"formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p92.lean", - "formal/lean/mathlib/data/mllist.lean", - "formal/afp/CoSMed/Prelim.thy", - "formal/mizar/sheffer2.miz", - "formal/lean/mathlib/group_theory/congruence.lean", - "formal/afp/HOL-CSP/Mprefix.thy", - "formal/lean/mathlib/algebraic_topology/Moore_complex.lean", - "formal/afp/FO_Theory_Rewriting/Primitives/NF.thy", - "formal/afp/Isabelle_Meta_Model/toy_example/Toy_Library.thy", - "formal/lean/mathlib/algebra/homology/single.lean", - "formal/afp/CoSMeDis/Post_Confidentiality/Post_All.thy", - "formal/afp/BenOr_Kozen_Reif/Renegar_Decision.thy", - "formal/afp/BD_Security_Compositional/Composing_Security_Network.thy", - "formal/afp/UPF_Firewall/FWNormalisation/FWNormalisationCore.thy", - "formal/lean/mathlib/analysis/special_functions/gamma.lean", - "formal/lean/mathlib/topology/continuous_function/compact.lean", - "formal/afp/Attack_Trees/document/root.tex", - "formal/afp/Bondy/Bondy.thy", - "formal/afp/Automatic_Refinement/Parametricity/Relators.thy", - "formal/afp/HOLCF-Prelude/Definedness.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p321.lean", - "formal/afp/Fresh_Identifiers/Fresh_Infinite.thy", - "formal/lean/mathlib/data/fun_like/equiv.lean", - "formal/afp/DiskPaxos/DiskPaxos.thy", - "formal/afp/Forcing/Proper_Extension.thy", - "formal/afp/Markov_Models/Discrete_Time_Markov_Chain.thy", - "formal/afp/Sort_Encodings/Sig.thy", - "formal/afp/CoSMed/System_Specification.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/induction/1pxpownlt1pnx.lean", - "formal/lean/liquid/for_mathlib/abelian_sheaves/exact.lean", - "formal/hol/Examples/mizar.ml", - "formal/afp/Incompleteness/Goedel_I.thy", - "formal/afp/Berlekamp_Zassenhaus/Hensel_Lifting_Type_Based.thy", - "formal/hol/Boyer_Moore/testset/list.ml", - "formal/hol/Help/get_infix_status.doc", - "formal/lean/mathlib/data/psigma/order.lean", - "formal/afp/Falling_Factorial_Sum/Falling_Factorial_Sum_Vandermonde.thy", - "formal/afp/Flyspeck-Tame/Plane1.thy", - "formal/afp/Prime_Number_Theorem/Newman_Ingham_Tauberian.thy", - "formal/mizar/ascoli.miz", - "formal/afp/Treaps/Probability_Misc.thy", - "formal/hol/Help/REAL_RAT_GE_CONV.doc", - "formal/afp/Poincare_Bendixson/Affine_Arithmetic_Misc.thy", - "formal/hol/Help/merge_nets.doc", - "formal/afp/Containers/List_Fusion.thy", - "formal/afp/MDP-Rewards/MDP_reward_Util.thy", - "formal/afp/UPF_Firewall/Examples/Transformation/Transformation02.thy", - "formal/hol/Minisat/minisat_prove.ml", - "formal/afp/DPRM_Theorem/Machine_Equations/All_State_Equations.thy", - "formal/afp/InfPathElimination/SubRel.thy", - "formal/lean/mathlib/number_theory/bernoulli.lean", - "formal/mizar/osafree.miz", - "formal/afp/Gromov_Hyperbolicity/Gromov_Boundary.thy", - "formal/afp/Ordered_Resolution_Prover/Proving_Process.thy", - "formal/afp/LocalLexing/TheoremD7.thy", - "formal/afp/Psi_Calculi/Chain.thy", - "formal/afp/JinjaThreads/Framework/FWWellform.thy", - "formal/afp/LTL_Master_Theorem/Quotient_Type.thy", - "formal/hol/Help/SUBST_VAR_TAC.doc", - "formal/lean/liquid/condensed/ab4.lean", - "formal/afp/PAC_Checker/PAC_Assoc_Map_Rel.thy", - "formal/mizar/gr_cy_2.miz", - "formal/afp/Formula_Derivatives/Examples/WS1S_Alt_Examples.thy", - "formal/hol/Help/MK_EXISTS_UPPERCASE.doc", - "formal/afp/ConcurrentIMP/Infinite_Sequences.thy", - "formal/hol/Help/dest_uexists.doc", - "formal/afp/Slicing/Basic/AuxLemmas.thy", - "formal/afp/Auto2_HOL/HOL/Order_Thms.thy", - "formal/afp/DPRM_Theorem/Register_Machine/MachineEquations.thy", - 
"formal/afp/KBPs/List_local.thy", - "formal/afp/Smith_Normal_Form/Admits_SNF_From_Diagonal_Iff_Bezout_Ring.thy", - "formal/afp/Sigma_Commit_Crypto/Sigma_OR.thy", - "formal/mizar/fintopo8.miz", - "formal/afp/CoSMeDis/Friend_Confidentiality/Friend_Network.thy", - "formal/afp/Concurrent_Ref_Alg/Conjunction.thy", - "formal/afp/Farkas/document/root.tex", - "formal/afp/Quantales/document/root.tex", - "formal/lean/mathlib/data/multiset/nat_antidiagonal.lean", - "formal/mizar/bintree2.miz", - "formal/afp/Featherweight_OCL/UML_Tools.thy", - "formal/afp/JinjaThreads/Execute/J_Execute.thy", - "formal/afp/Transition_Systems_and_Automata/Automata/DCA/DCA.thy", - "formal/hol/Help/NUM_PRE_CONV.doc", - "formal/afp/Affine_Arithmetic/Floatarith_Expression.thy", - "formal/mizar/lattice6.miz", - "formal/afp/Security_Protocol_Refinement/Refinement/a0i_agree.thy", - "formal/lean/mathlib/algebraic_topology/fundamental_groupoid/induced_maps.lean", - "formal/afp/Refine_Imperative_HOL/IICF/Impl/IICF_Indexed_Array_List.thy", - "formal/afp/ZFC_in_HOL/Kirby.thy", - "formal/afp/Core_SC_DOM/common/preliminaries/Heap_Error_Monad.thy", - "formal/afp/SIFPL/IMP.thy", - "formal/lean/mathlib/geometry/euclidean/circumcenter.lean", - "formal/afp/Dijkstra_Shortest_Path/Test.thy", - "formal/lean/mathlib/data/nat/nth.lean", - "formal/afp/Factor_Algebraic_Polynomial/Roots_of_Algebraic_Poly.thy", - "formal/hol/Help/occurs_in.doc", - "formal/mizar/measure9.miz", - "formal/mizar/gobrd12.miz", - "formal/lean/mathlib/ring_theory/valuation/valuation_ring.lean", - "formal/afp/Abstract-Rewriting/SN_Order_Carrier.thy", - "formal/lean/mathlib/combinatorics/simple_graph/degree_sum.lean", - "formal/afp/Factor_Algebraic_Polynomial/Roots_via_IA.thy", - "formal/afp/LTL_to_DRA/Auxiliary/Mapping2.thy", - "formal/afp/Slicing/While/Com.thy", - "formal/mizar/topmetr3.miz", - "formal/afp/Slicing/Basic/DynStandardControlDependence.thy", - "formal/lean/mathlib/topology/category/TopCommRing.lean", - "formal/lean/mathlib/topology/sheaves/stalks.lean", - "formal/afp/JinjaThreads/J/WellTypeRT.thy", - "formal/mizar/bhsp_2.miz", - "formal/afp/LinearQuantifierElim/Thys/Logic.thy", - "formal/afp/Boolos_Curious_Inference/document/root.tex", - "formal/afp/Differential_Dynamic_Logic/document/root.tex", - "formal/afp/Dijkstra_Shortest_Path/Introduction.thy", - "formal/afp/Refine_Imperative_HOL/IICF/Impl/IICF_Array.thy", - "formal/afp/Refine_Monadic/examples/Example_Chapter.thy", - "formal/afp/Equivalence_Relation_Enumeration/Equivalence_Relation_Enumeration.thy", - "formal/mizar/euclid10.miz", - "formal/mizar/algnum_1.miz", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1962/p2.lean", - "formal/afp/Algebraic_VCs/P2S2R.thy", - "formal/afp/TortoiseHare/Basis.thy", - "formal/lean/mathlib/topology/algebra/group_with_zero.lean", - "formal/afp/Nullstellensatz/Nullstellensatz.thy", - "formal/mizar/circcmb3.miz", - "formal/hol/Help/shareout.doc", - "formal/hol/Geometric_Algebra/geometricalgebra.ml", - "formal/afp/Partial_Order_Reduction/Basics/Functions.thy", - "formal/hol/Help/rev_itlist.doc", - "formal/afp/Concurrent_Ref_Alg/Galois_Connections.thy", - "formal/afp/KBPs/Examples.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1984/p2.lean", - "formal/lean/mathlib/ring_theory/valuation/valuation_subring.lean", - "formal/mizar/group_7.miz", - "formal/lean/mathlib/linear_algebra/matrix/charpoly/minpoly.lean", - "formal/afp/Gabow_SCC/Find_Path.thy", - "formal/afp/Stern_Brocot/Stern_Brocot_Tree.thy", - "formal/lean/mathlib/measure_theory/function/ess_sup.lean", - 
"formal/afp/Category2/NatTrans.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1993/p5.lean", - "formal/afp/Stirling_Formula/Stirling_Formula.thy", - "formal/afp/Security_Protocol_Refinement/document/session_graph.tex", - "formal/afp/Architectural_Design_Patterns/Blockchain.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p212.lean", - "formal/afp/CoSMed/Automation_Setup.thy", - "formal/mizar/yellow_2.miz", - "formal/lean/mathlib/analysis/complex/phragmen_lindelof.lean", - "formal/afp/Rank_Nullity_Theorem/document/root.tex", - "formal/mizar/complex1.miz", - "formal/lean/liquid/polyhedral_lattice/finsupp.lean", - "formal/afp/Constructive_Cryptography_CM/Asymptotic_Security.thy", - "formal/afp/Topological_Semantics/topo_frontier_algebra.thy", - "formal/lean/lftcm/exercises_sources/thursday/category_theory/exercise4.lean", - "formal/lean/mathlib/ring_theory/integral_domain.lean", - "formal/afp/Lambda_Free_EPO/Chop.thy", - "formal/afp/Random_Graph_Subgraph_Threshold/Ugraph_Lemmas.thy", - "formal/hol/Help/is_cons.doc", - "formal/afp/BytecodeLogicJmlTypes/Sound.thy", - "formal/afp/JinjaThreads/Execute/Java2Jinja.thy", - "formal/hol/Examples/ste.ml", - "formal/afp/SATSolverVerification/Trail.thy", - "formal/mizar/mesfunc1.miz", - "formal/afp/Ordinary_Differential_Equations/Numerics/Transfer_Analysis.thy", - "formal/afp/PLM/TAO_98_ArtificialTheorems.thy", - "formal/lean/mathlib/topology/uniform_space/compare_reals.lean", - "formal/lean/mathlib/data/list/sections.lean", - "formal/afp/Fisher_Yates/Fisher_Yates.thy", - "formal/afp/Call_Arity/CoCallGraph-TTree.thy", - "formal/afp/Psi_Calculi/Sim_Struct_Cong.thy", - "formal/hol/Help/try_user_printer.doc", - "formal/afp/JinjaThreads/Framework/FWBisimulation.thy", - "formal/lean/mathlib/analysis/special_functions/trigonometric/chebyshev.lean", - "formal/afp/Featherweight_OCL/basic_types/UML_Real.thy", - "formal/mizar/lexbfs.miz", - "formal/mizar/ramsey_1.miz", - "formal/lean/mathlib/algebra/big_operators/option.lean", - "formal/lean/mathlib/data/nat/dist.lean", - "formal/afp/Concurrent_Ref_Alg/Conjunctive_Iteration.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p320.lean", - "formal/afp/AxiomaticCategoryTheory/document/root.tex", - "formal/afp/Probabilistic_While/Bernoulli.thy", - "formal/lean/mathlib/measure_theory/decomposition/jordan.lean", - "formal/mizar/bvfunc_6.miz", - "formal/lean/lftcm/solutions/tuesday/sets.lean", - "formal/lean/sphere-eversion/to_mathlib/topology/periodic.lean", - "formal/lean/mathlib/linear_algebra/orientation.lean", - "formal/hol/Help/parse_as_binder.doc", - "formal/lean/mathlib/field_theory/cardinality.lean", - "formal/afp/Boolean_Expression_Checkers/document/root.tex", - "formal/afp/FLP/AsynchronousSystem.thy", - "formal/afp/Collections/ICF/impl/Trie2.thy", - "formal/lean/mathlib/category_theory/monad/products.lean", - "formal/afp/Locally-Nameless-Sigma/Sigma/Sigma.thy", - "formal/lean/mathlib/topology/sheaves/limits.lean", - "formal/afp/IsaNet/instances/Anapaya_SCION.thy", - "formal/mizar/homothet.miz", - "formal/afp/XML/Xml.thy", - "formal/mizar/counters.miz", - "formal/afp/JinjaThreads/Compiler/Correctness1.thy", - "formal/afp/Vickrey_Clarke_Groves/FirstPrice.thy", - "formal/afp/Iptables_Semantics/Examples/Parser_Test/Parser6_Test.thy", - "formal/afp/Deep_Learning/Tensor_Rank.thy", - "formal/lean/mathlib/group_theory/group_action/sub_mul_action/pointwise.lean", - "formal/coq/math-comp/test_suite/test_intro_rw.v", - "formal/afp/Flyspeck-Tame/Worklist.thy", - 
"formal/afp/Real_Time_Deque/Deque.thy", - "formal/lean/mathlib/category_theory/sites/sieves.lean", - "formal/afp/VeriComp/Simulation.thy", - "formal/hol/Arithmetic/definability.ml", - "formal/afp/Hermite_Lindemann/Hermite_Lindemann.thy", - "formal/afp/Jordan_Normal_Form/Jordan_Normal_Form_Uniqueness.thy", - "formal/lean/mathlib/group_theory/group_action/default.lean", - "formal/afp/Interpreter_Optimizations/Ubx_Verification.thy", - "formal/afp/Decl_Sem_Fun_PL/ChangeEnv.thy", - "formal/lean/mathlib/data/finset/sigma.lean", - "formal/afp/Forcing/Foundation_Axiom.thy", - "formal/afp/Iptables_Semantics/Examples/SQRL_Shorewall/SQRL_2015_nospoof.thy", - "formal/mizar/ltlaxio2.miz", - "formal/afp/Iptables_Semantics/Semantics_Goto.thy", - "formal/afp/Pi_Calculus/Weak_Early_Sim_Pres.thy", - "formal/afp/Collections/ICF/impl/ListMapImpl.thy", - "formal/afp/Dependent_SIFUM_Refinement/CompositionalRefinement.thy", - "formal/afp/Higher_Order_Terms/Nterm.thy", - "formal/lean/mathlib/topology/algebra/nonarchimedean/bases.lean", - "formal/mizar/pencil_2.miz", - "formal/hol/Help/is_prefix.doc", - "formal/mizar/taxonom1.miz", - "formal/afp/Skew_Heap/document/root.tex", - "formal/afp/CSP_RefTK/Properties.thy", - "formal/afp/Propositional_Proof_Systems/HCSCND.thy", - "formal/afp/FO_Theory_Rewriting/Closure/GTT_RRn.thy", - "formal/afp/Menger/Separations.thy", - "formal/afp/Independence_CH/Separation_Axiom.thy", - "formal/lean/mathlib/algebra/direct_sum/module.lean", - "formal/hol/Help/NUM_REL_CONV.doc", - "formal/lean/mathlib/linear_algebra/bilinear_map.lean", - "formal/afp/Word_Lib/Most_significant_bit.thy", - "formal/afp/Incredible_Proof_Machine/Incredible_Trees.thy", - "formal/afp/CoreC++/CWellForm.thy", - "formal/afp/Parity_Game/ParityGame.thy", - "formal/afp/Simplicial_complexes_and_boolean_functions/Evasive.thy", - "formal/mizar/bvfunc_1.miz", - "formal/afp/Polynomials/MPoly_Type_Class.thy", - "formal/lean/liquid/pseudo_normed_group/FP2.lean", - "formal/afp/Differential_Game_Logic/Denotational_Semantics.thy", - "formal/afp/Regular_Algebras/Regular_Algebra_Variants.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/induction/sum_1oktkp1.lean", - "formal/afp/Progress_Tracking/Combined.thy", - "formal/mizar/seq_2.miz", - "formal/hol/Unity/mk_gen_induct.ml", - "formal/afp/Psi_Calculi/Weak_Stat_Imp.thy", - "formal/afp/CISC-Kernel/step/Step_vpeq.thy", - "formal/afp/JinjaDCI/BV/BVNoTypeError.thy", - "formal/afp/MiniML/Instance.thy", - "formal/lean/mathlib/algebra/order/euclidean_absolute_value.lean", - "formal/lean/mathlib/analysis/convex/simplicial_complex/basic.lean", - "formal/mizar/matroid0.miz", - "formal/afp/Collections/ICF/gen_algo/SetIteratorCollectionsGA.thy", - "formal/afp/Real_Time_Deque/Current.thy", - "formal/lean/mathlib/data/rat/denumerable.lean", - "formal/mizar/rlvect_2.miz", - "formal/afp/Types_To_Sets_Extension/Examples/TTS_Foundations/Orders/Type_Simple_Orders.thy", - "formal/afp/GaleStewart_Games/GaleStewartGames.thy", - "formal/afp/Modal_Logics_for_NTS/S_Transform.thy", - "formal/afp/Collections/Examples/ICF/Exploration_DFS.thy", - "formal/afp/Syntax_Independent_Logic/Syntax.thy", - "formal/afp/Psi_Calculi/Weak_Bisim_Subst.thy", - "formal/afp/Special_Function_Bounds/Sqrt_Bounds.thy", - "formal/afp/Flow_Networks/Augmenting_Path.thy", - "formal/afp/Tree_Decomposition/TreeDecomposition.thy", - "formal/coq/math-comp/test_suite/imset2_gproduct.v", - "formal/afp/KBPs/SPRViewNonDet.thy", - "formal/afp/HereditarilyFinite/Finitary.thy", - "formal/afp/Correctness_Algebras/Complete_Domain.thy", - 
"formal/hol/Help/NUM_REDUCE_TAC.doc", - "formal/hol/Help/INT_MAX_CONV.doc", - "formal/afp/UTP/document/root.tex", - "formal/lean/mathlib/data/multiset/sections.lean", - "formal/afp/Stirling_Formula/document/root.tex", - "formal/lean/mathlib/algebra/monoid_algebra/to_direct_sum.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p170.lean", - "formal/hol/Help/dest_numeral.doc", - "formal/afp/CZH_Foundations/czh_semicategories/CZH_SMC_Introduction.thy", - "formal/lean/liquid/for_mathlib/preserves_exact.lean", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1974/p3.lean", - "formal/afp/SATSolverVerification/NieuwenhuisOliverasTinelli.thy", - "formal/lean/mathlib/algebra/continued_fractions/computation/correctness_terminating.lean", - "formal/afp/AODV/variants/b_fwdrreps/B_Fwdrreps.thy", - "formal/lean/mathlib/topology/homeomorph.lean", - "formal/afp/Topological_Semantics/topo_derivative_algebra.thy", - "formal/hol/Complex/complex_real.ml", - "formal/lean/mathlib/data/complex/exponential.lean", - "formal/mizar/nomin_1.miz", - "formal/lean/mathlib/data/polynomial/unit_trinomial.lean", - "formal/lean/lftcm/hints/category_theory/exercise5/hint.lean", - "formal/lean/mathlib/topology/metric_space/basic.lean", - "formal/afp/Three_Circles/Three_Circles.thy", - "formal/lean/mathlib/category_theory/limits/shapes/images.lean", - "formal/afp/Prime_Harmonic_Series/Prime_Harmonic_Misc.thy", - "formal/afp/KBPs/Views.thy", - "formal/lean/lftcm/exercises_sources/friday/manifolds.lean", - "formal/afp/Fresh_Identifiers/Fresh_String.thy", - "formal/afp/Padic_Ints/Zp_Compact.thy", - "formal/lean/mathlib/linear_algebra/charpoly/basic.lean", - "formal/afp/UTP/utp/examples/sum_list.thy", - "formal/afp/Allen_Calculus/xor_cal.thy", - "formal/hol/Logic/linear.ml", - "formal/afp/Power_Sum_Polynomials/Power_Sum_Polynomials.thy", - "formal/afp/Functional_Ordered_Resolution_Prover/Executable_FO_Ordered_Resolution_Prover.thy", - "formal/lean/lftcm/hints/category_theory/exercise2/hint6.lean", - "formal/afp/Stuttering_Equivalence/StutterEquivalence.thy", - "formal/lean/mathlib/analysis/normed_space/lp_space.lean", - "formal/afp/Kruskal/Kruskal_Impl.thy", - "formal/lean/mathlib/field_theory/chevalley_warning.lean", - "formal/coq/math-comp/solvable/commutator.v", - "formal/afp/Gaussian_Integers/Gaussian_Integers_Everything.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p247.lean", - "formal/afp/Gauss_Sums/Gauss_Sums_Auxiliary.thy", - "formal/afp/Modular_Assembly_Kit_Security/Verification/Unwinding/AuxiliaryLemmas.thy", - "formal/afp/WebAssembly/Wasm_Properties.thy", - "formal/lean/mathlib/logic/function/basic.lean", - "formal/afp/Algebraic_VCs/AVC_KAT/VC_RKAT.thy", - "formal/afp/Tycon/Coerce.thy", - "formal/hol/Tutorial/Vectors.ml", - "formal/hol/Help/basic_congs.doc", - "formal/lean/mathlib/data/matrix/basis.lean", - "formal/afp/Lifting_the_Exponent/document/root.tex", - "formal/afp/Proof_Strategy_Language/PSL.thy", - "formal/hol/Help/int_ideal_cofactors.doc", - "formal/lean/mathlib/linear_algebra/clifford_algebra/basic.lean", - "formal/mizar/scmfsa_7.miz", - "formal/lean/mathlib/category_theory/abelian/opposite.lean", - "formal/afp/Planarity_Certificates/Planarity/Executable_Permutations.thy", - "formal/afp/Factored_Transition_System_Bounding/HoArithUtils.thy", - "formal/lean/mathlib/order/grade.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2021/a/p12.lean", - "formal/afp/Parity_Game/PositionalDeterminacy.thy", - "formal/hol/Rqe/timers.ml", - "formal/afp/Ordinal_Partitions/Library_Additions.thy", 
- "formal/afp/Groebner_Macaulay/Groebner_Macaulay.thy", - "formal/lean/mathlib/topology/instances/rat_lemmas.lean", - "formal/afp/Regression_Test_Selection/JinjaSuppl/ClassesAbove.thy", - "formal/afp/Native_Word/Uint16.thy", - "formal/mizar/tbsp_1.miz", - "formal/lean/lftcm/exercises_sources/thursday/category_theory/exercise7.lean", - "formal/afp/Gabow_SCC/Gabow_GBG.thy", - "formal/afp/VectorSpace/VectorSpace.thy", - "formal/afp/Generic_Deriving/tests/Derive_Algebra.thy", - "formal/afp/Psi_Calculi/Bisimulation.thy", - "formal/mizar/roughif2.miz", - "formal/hol/Help/MK_BINOP_UPPERCASE.doc", - "formal/lean/mathlib/algebraic_topology/dold_kan/projections.lean", - "formal/lean/mathlib/algebra/order/floor.lean", - "formal/afp/Functional-Automata/Execute.thy", - "formal/mizar/tex_1.miz", - "formal/afp/Security_Protocol_Refinement/Refinement/Refinement.thy", - "formal/afp/Jordan_Normal_Form/Missing_Ring.thy", - "formal/lean/mathlib/linear_algebra/quadratic_form/real.lean", - "formal/afp/Optics/Two.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/2013/p5.lean", - "formal/afp/Jordan_Normal_Form/Jordan_Normal_Form_Existence.thy", - "formal/hol/RichterHilbertAxiomGeometry/inverse_bug_puzzle_read.ml", - "formal/lean/mathlib/geometry/euclidean/inversion.lean", - "formal/lean/mathlib/data/vector3.lean", - "formal/hol/Help/term_match.doc", - "formal/afp/Linear_Inequalities/Integer_Hull.thy", - "formal/lean/mathlib/data/nat/log.lean", - "formal/afp/Graph_Theory/Bidirected_Digraph.thy", - "formal/afp/Dependent_SIFUM_Refinement/Examples/Eg1Eg2.thy", - "formal/hol/Examples/pseudoprime.ml", - "formal/afp/Separation_Algebra/ex/Simple_Separation_Example.thy", - "formal/lean/mathlib/algebra/char_p/algebra.lean", - "formal/afp/RSAPSS/Mod.thy", - "formal/lean/sphere-eversion/local/relation.lean", - "formal/lean/mathlib/data/set/intervals/unordered_interval.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2002/b/p7.lean", - "formal/afp/FunWithTilings/document/root.tex", - "formal/lean/mathlib/analysis/special_functions/trigonometric/complex.lean", - "formal/afp/CZH_Foundations/czh_digraphs/CZH_DG_Small_TDGHM.thy", - "formal/lean/mathlib/analysis/normed/group/SemiNormedGroup/completion.lean", - "formal/lean/mathlib/algebra/order/algebra.lean", - "formal/afp/Complx/SeqCatch_decomp.thy", - "formal/mizar/topgen_2.miz", - "formal/afp/Heard_Of/uv/UvProof.thy", - "formal/afp/Nullstellensatz/Nullstellensatz_Field.thy", - "formal/afp/No_FTL_observers/SpecRel.thy", - "formal/lean/mathlib/linear_algebra/finite_dimensional.lean", - "formal/lean/liquid/for_mathlib/homology_iso_Ab.lean", - "formal/afp/Package_logic/SepAlgebra.thy", - "formal/lean/mathlib/data/set/intervals/proj_Icc.lean", - "formal/coq/abel/xmathcomp/classic_ext.v", - "formal/afp/Generic_Join/Generic_Join.thy", - "formal/hol/Complex/quelim.ml", - "formal/mizar/integra3.miz", - "formal/afp/VerifyThis2018/Snippets.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2021/b/p21.lean", - "formal/afp/Saturation_Framework/Labeled_Lifting_to_Non_Ground_Calculi.thy", - "formal/afp/Interpreter_Optimizations/AList_Extra.thy", - "formal/mizar/funct_2.miz", - "formal/hol/Help/mk_cons.doc", - "formal/lean/mathlib/group_theory/commensurable.lean", - "formal/afp/Hermite/Hermite.thy", - "formal/afp/CoreC++/Determinism.thy", - "formal/coq/analysis/trigo.v", - "formal/lean/mathlib/topology/category/Top/epi_mono.lean", - "formal/lean/mathlib/data/pi/interval.lean", - "formal/afp/Sliding_Window_Algorithm/document/root.tex", - "formal/mizar/mesfunc8.miz", - 
"formal/lean/mathlib/ring_theory/witt_vector/mul_coeff.lean", - "formal/mizar/jordan.miz", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2011/a/p18.lean", - "formal/afp/Collections/Examples/Autoref/Nested_DFS.thy", - "formal/lean/mathlib/ring_theory/ideal/local_ring.lean", - "formal/afp/MuchAdoAboutTwo/MuchAdoAboutTwo.thy", - "formal/afp/Ordinal_Partitions/document/root.tex", - "formal/afp/Heard_Of/ute/UteProof.thy", - "formal/lean/mathlib/ring_theory/rees_algebra.lean", - "formal/afp/Fourier/Square_Integrable.thy", - "formal/hol/Help/DEPTH_SQCONV.doc", - "formal/lean/liquid/for_mathlib/simplicial/complex.lean", - "formal/mizar/ratfunc1.miz", - "formal/afp/Modular_Assembly_Kit_Security/Verification/Unwinding/UnwindingResults.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p341.lean", - "formal/lean/mathlib/combinatorics/set_family/shadow.lean", - "formal/mizar/rfunct_1.miz", - "formal/afp/Transition_Systems_and_Automata/Transition_Systems/Transition_System_Extra.thy", - "formal/afp/Flyspeck-Tame/Maps.thy", - "formal/afp/CakeML/Big_Step_Fun_Equiv.thy", - "formal/hol/ProofTrace/proofs.ml", - "formal/lean/liquid/for_mathlib/product_op.lean", - "formal/mizar/graph_3a.miz", - "formal/hol/RichterHilbertAxiomGeometry/from_topology.ml", - "formal/afp/Groebner_Bases/Syzygy.thy", - "formal/hol/Help/ARITH_RULE.doc", - "formal/afp/Transitive_Models/Nat_Miscellanea.thy", - "formal/lean/mathlib/data/fintype/order.lean", - "formal/mizar/prob_2.miz", - "formal/afp/Prim_Dijkstra_Simple/Chapter_Prim.thy", - "formal/afp/Conditional_Transfer_Rule/IML_UT/IML_UT.thy", - "formal/afp/Smith_Normal_Form/Mod_Type_Connect.thy", - "formal/afp/Markov_Models/ex/Gossip_Broadcast.thy", - "formal/hol/Help/mk_rewrites.doc", - "formal/lean/perfectoid/Huber_pair.lean", - "formal/lean/mathlib/algebraic_geometry/morphisms/quasi_compact.lean", - "formal/afp/Deep_Learning/Tensor_Unit_Vec.thy", - "formal/afp/Kleene_Algebra/Omega_Algebra.thy", - "formal/mizar/pdiffeq1.miz", - "formal/afp/Core_SC_DOM/common/monads/CharacterDataMonad.thy", - "formal/afp/Containers/ITP-2013/Benchmark_RBT.thy", - "formal/afp/Containers/Examples/Containers_DFS_Ex.thy", - "formal/afp/Network_Security_Policy_Verification/SINVAR_Examples.thy", - "formal/coq/math-comp/ssreflect/order.v", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p33.lean", - "formal/lean/mathlib/deprecated/submonoid.lean", - "formal/afp/Statecharts/Update.thy", - "formal/afp/FO_Theory_Rewriting/Closure/Context_Extensions.thy", - "formal/hol/Help/augment.doc", - "formal/afp/Forcing/document/root.tex", - "formal/lean/mathlib/category_theory/subobject/types.lean", - "formal/hol/Help/MP_TAC.doc", - "formal/afp/JinjaThreads/Common/Type.thy", - "formal/afp/Dirichlet_Series/Liouville_Lambda.thy", - "formal/afp/Projective_Geometry/Matroid_Rank_Properties.thy", - "formal/afp/HOL-CSP/document/root.tex", - "formal/afp/List-Index/List_Index.thy", - "formal/afp/SimplifiedOntologicalArgument/KanckosLethenNo2Possibilist.thy", - "formal/afp/Abstract-Rewriting/Relative_Rewriting.thy", - "formal/lean/sphere-eversion/global/one_jet_sec.lean", - "formal/hol/Jordan/jordan_curve_theorem.ml", - "formal/lean/liquid/for_mathlib/fin_functor.lean", - "formal/afp/Containers/RBT_ext.thy", - "formal/afp/Factor_Algebraic_Polynomial/Roots_of_Algebraic_Poly_Impl.thy", - "formal/lean/mathlib/order/copy.lean", - "formal/afp/VerifyThis2018/Challenge1_short.thy", - "formal/afp/FileRefinement/ResizableArrays.thy", - "formal/lean/mathlib/data/buffer/parser/basic.lean", - 
"formal/afp/Jinja/Common/Decl.thy", - "formal/hol/Help/PURE_REWRITE_RULE.doc", - "formal/afp/Native_Word/Native_Word_Test_PolyML2.thy", - "formal/lean/mathlib/topology/G_delta.lean", - "formal/afp/VolpanoSmith/Execute.thy", - "formal/afp/Inductive_Inference/Inductive_Inference_Basics.thy", - "formal/lean/lftcm/exercises_sources/friday/topology.lean", - "formal/afp/Store_Buffer_Reduction/PIMP.thy", - "formal/lean/mathlib/algebra/category/Module/filtered_colimits.lean", - "formal/coq/math-comp/all/all.v", - "formal/lean/liquid/Lbar/sum_nnnorm.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2020/b/p22.lean", - "formal/lean/mathlib/measure_theory/measure/sub.lean", - "formal/lean/mathlib/analysis/normed_space/spectrum.lean", - "formal/afp/Applicative_Lifting/Idiomatic_Terms.thy", - "formal/afp/Error_Function/Error_Function_Asymptotics.thy", - "formal/afp/Well_Quasi_Orders/document/root.tex", - "formal/afp/CakeML/generated/CakeML/Tokens.thy", - "formal/afp/IP_Addresses/IPv6.thy", - "formal/afp/Collections/Iterator/Gen_Iterator.thy", - "formal/afp/Strong_Security/Parallel_Composition.thy", - "formal/hol/miz3/Samples/wishes.ml", - "formal/afp/Selection_Heap_Sort/document/root.tex", - "formal/afp/Jordan_Normal_Form/Matrix_IArray_Impl.thy", - "formal/afp/Stochastic_Matrices/Eigenspace.thy", - "formal/lean/mathlib/ring_theory/simple_module.lean", - "formal/afp/Network_Security_Policy_Verification/Examples/Example.thy", - "formal/lean/mathlib/data/vector/mem.lean", - "formal/afp/Source_Coding_Theorem/document/root.tex", - "formal/afp/Relation_Algebra/Relation_Algebra_Models.thy", - "formal/afp/Core_DOM/common/monads/CharacterDataMonad.thy", - "formal/afp/Planarity_Certificates/Planarity/Planar_Subdivision.thy", - "formal/afp/Optics/Lens_Record_Example.thy", - "formal/hol/Help/is_type.doc", - "formal/afp/Projective_Measurements/CHSH_Inequality.thy", - "formal/afp/Call_Arity/CoCallImplTTreeSafe.thy", - "formal/hol/Help/prioritize_int.doc", - "formal/mizar/clvect_3.miz", - "formal/lean/mathlib/topology/constructions.lean", - "formal/afp/DFS_Framework/Misc/DFS_Framework_Misc.thy", - "formal/lean/mathlib/measure_theory/group/integration.lean", - "formal/afp/KAT_and_DRA/document/root.tex", - "formal/hol/Help/REWR_CONV.doc", - "formal/afp/UTP/toolkit/Total_Recall.thy", - "formal/afp/Simple_Firewall/Common/IP_Partition_Preliminaries.thy", - "formal/coq/odd-order/BGsection11.v", - "formal/afp/Progress_Tracking/Exchange_Abadi.thy", - "formal/afp/Refine_Imperative_HOL/Examples/Sepref_WGraph.thy", - "formal/hol/Help/fix.doc", - "formal/mizar/lp_space.miz", - "formal/mizar/ltlaxio5.miz", - "formal/afp/Splay_Tree/Splay_Tree.thy", - "formal/lean/liquid/for_mathlib/internal_hom.lean", - "formal/afp/Finitely_Generated_Abelian_Groups/document/root.tex", - "formal/mizar/ndiff10.miz", - "formal/afp/Kleene_Algebra/Kleene_Algebra_Models.thy", - "formal/afp/List-Infinite/CommonArith/Util_Nat.thy", - "formal/lean/liquid/normed_group/pseudo_normed_group.lean", - "formal/afp/Shadow_SC_DOM/tests/Shadow_DOM_Document_adoptNode.thy", - "formal/afp/BinarySearchTree/BinaryTree_TacticStyle.thy", - "formal/afp/DiscretePricing/CRR_Model.thy", - "formal/lean/mathlib/algebra/star/module.lean", - "formal/lean/sphere-eversion/to_mathlib/topology/paracompact.lean", - "formal/lean/mathlib/analysis/normed_space/star/exponential.lean", - "formal/lean/mathlib/topology/continuous_function/cocompact_map.lean", - "formal/afp/Regular_Tree_Relations/GTT_Transitive_Closure.thy", - "formal/afp/JinjaDCI/Common/SystemClasses.thy", - 
"formal/hol/Help/mk_exists.doc", - "formal/afp/Show/Show_Instances.thy", - "formal/mizar/nat_4.miz", - "formal/lean/mathlib/measure_theory/function/l1_space.lean", - "formal/afp/Higher_Order_Terms/Pats.thy", - "formal/lean/mathlib/number_theory/cyclotomic/primitive_roots.lean", - "formal/afp/CakeML/Tests/Code_Test_Haskell.thy", - "formal/afp/Card_Multisets/Card_Multisets.thy", - "formal/hol/Help/dest_string.doc", - "formal/afp/InfPathElimination/Graph.thy", - "formal/afp/Metalogic_ProofChecker/Preliminaries.thy", - "formal/mizar/seq_4.miz", - "formal/afp/Types_To_Sets_Extension/Examples/SML_Relativization/Topology/SML_Product_Topology.thy", - "formal/lean/liquid/for_mathlib/horseshoe.lean", - "formal/afp/IP_Addresses/Lib_Numbers_toString.thy", - "formal/afp/Affine_Arithmetic/Ex_Ineqs.thy", - "formal/afp/LTL_to_DRA/Mojmir_Rabin.thy", - "formal/afp/Physical_Quantities/BIS.thy", - "formal/hol/100/triangular.ml", - "formal/lean/lftcm/exercises_sources/monday/metaprogramming.lean", - "formal/lean/mathlib/topology/homotopy/product.lean", - "formal/afp/Automatic_Refinement/Tool/Autoref_Tagging.thy", - "formal/lean/mathlib/dynamics/omega_limit.lean", - "formal/afp/UPF_Firewall/Examples/NAT-FW/NAT-FW.thy", - "formal/lean/mathlib/analysis/box_integral/divergence_theorem.lean", - "formal/afp/Cartan_FP/Cartan.thy", - "formal/afp/Key_Agreement_Strong_Adversaries/pfslvl2.thy", - "formal/mizar/euler_1.miz", - "formal/afp/Native_Word/Native_Word_Test_GHC.thy", - "formal/coq/odd-order/PFsection6.v", - "formal/afp/JinjaDCI/J/Annotate.thy", - "formal/lean/mathlib/ring_theory/nullstellensatz.lean", - "formal/hol/Help/GSYM.doc", - "formal/afp/LTL_Master_Theorem/LTL_to_DRA/DRA_Construction.thy", - "formal/lean/mathlib/category_theory/idempotents/functor_categories.lean", - "formal/afp/Architectural_Design_Patterns/Publisher_Subscriber.thy", - "formal/lean/mathlib/data/set/function.lean", - "formal/afp/Three_Circles/document/root.tex", - "formal/afp/Combinatorics_Words_Graph_Lemma/document/root.tex", - "formal/afp/Grothendieck_Schemes/document/root.tex", - "formal/hol/Multivariate/convex.ml", - "formal/mizar/integr10.miz", - "formal/lean/liquid/for_mathlib/truncation.lean", - "formal/hol/Multivariate/topology.ml", - "formal/afp/Separation_Logic_Imperative_HOL/Examples/To_List_GA.thy", - "formal/afp/POPLmark-deBruijn/Basis.thy", - "formal/afp/Optics/Lens_State.thy", - "formal/lean/mathlib/group_theory/finite_abelian.lean", - "formal/lean/mathlib/topology/metric_space/pi_nat.lean", - "formal/afp/Monad_Memo_DP/heap_monad/Pair_Memory.thy", - "formal/afp/Pi_Calculus/Weak_Late_Cong_Subst.thy", - "formal/afp/Collections/GenCF/Intf/Intf_Map.thy", - "formal/mizar/altcat_6.miz", - "formal/lean/mathlib/number_theory/class_number/finite.lean", - "formal/hol/Help/the_implicit_types.doc", - "formal/afp/Abs_Int_ITP2012/Abs_Int2_ivl.thy", - "formal/afp/Gauss_Jordan/Code_Z2.thy", - "formal/mizar/cqc_the3.miz", - "formal/afp/LocalLexing/Ladder.thy", - "formal/afp/Deriving/Hash_Generator/Hash_Generator.thy", - "formal/mizar/tsp_1.miz", - "formal/afp/Flow_Networks/document/root.tex", - "formal/afp/CryptHOL/Cyclic_Group.thy", - "formal/afp/Collections/Examples/Autoref/Combined_TwoSat.thy", - "formal/afp/UPF_Firewall/Examples/Examples.thy", - "formal/afp/JinjaThreads/Framework/FWProgress.thy", - "formal/lean/liquid/thm95/row_iso.lean", - "formal/afp/EdmondsKarp_Maxflow/EdmondsKarp_Termination_Abstract.thy", - "formal/lean/mathlib/order/filter/ennreal.lean", - "formal/afp/Possibilistic_Noninterference/Syntactic_Criteria.thy", - 
"formal/afp/Catalan_Numbers/Catalan_Auxiliary_Integral.thy", - "formal/afp/Optics/Interp.thy", - "formal/lean/mathlib/algebraic_topology/fundamental_groupoid/product.lean", - "formal/afp/Encodability_Process_Calculi/ProcessCalculi.thy", - "formal/hol/Help/REAL_LET_IMP.doc", - "formal/afp/WebAssembly/Wasm_Printing/Wasm_Type_Abs_Printing.thy", - "formal/afp/CoSMeDis/Post_Confidentiality/DYNAMIC_Post_Value_Setup_ISSUER.thy", - "formal/lean/mathlib/linear_algebra/free_module/finite/rank.lean", - "formal/hol/Help/CONV_TAC.doc", - "formal/hol/Help/length.doc", - "formal/afp/JinjaThreads/Execute/JVM_Execute.thy", - "formal/lean/mathlib/measure_theory/integral/bochner.lean", - "formal/coq/math-comp/ssreflect/tuple.v", - "formal/afp/Propositional_Proof_Systems/MiniSC.thy", - "formal/hol/Help/REPEAT_UPPERCASE.doc", - "formal/afp/Floyd_Warshall/Floyd_Warshall.thy", - "formal/hol/Help/is_eq.doc", - "formal/hol/Jordan/tactics_fix.ml", - "formal/afp/CakeML/generated/Lem_num_extra.thy", - "formal/afp/Physical_Quantities/SI_Pretty.thy", - "formal/mizar/equation.miz", - "formal/afp/CCS/Agent.thy", - "formal/afp/Topological_Semantics/topo_negation_fixedpoints.thy", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1987/p5.lean", - "formal/lean/mathlib/data/num/basic.lean", - "formal/afp/LLL_Factorization/Modern_Computer_Algebra_Problem.thy", - "formal/lean/liquid/breen_deligne/constants.lean", - "formal/afp/Native_Word/Native_Word_Imperative_HOL.thy", - "formal/hol/Help/mk_small_numeral.doc", - "formal/afp/IMP2/automation/IMP2_Var_Abs.thy", - "formal/afp/Transition_Systems_and_Automata/Automata/DCA/DGCA.thy", - "formal/afp/SPARCv8/SparcModel_MMU/Sparc_Execution.thy", - "formal/lean/mathlib/geometry/euclidean/monge_point.lean", - "formal/afp/Multi_Party_Computation/ETP_OT.thy", - "formal/afp/Van_der_Waerden/document/root.tex", - "formal/afp/Correctness_Algebras/N_Semirings.thy", - "formal/afp/Refine_Monadic/Refine_Foreach.thy", - "formal/afp/Collections/ICF/gen_algo/PrioByAnnotatedList.thy", - "formal/afp/X86_Semantics/SymbolicExecution.thy", - "formal/afp/Generalized_Counting_Sort/Stability.thy", - "formal/afp/InfPathElimination/document/summary.tex", - "formal/hol/100/bertrand.ml", - "formal/mizar/pcomps_1.miz", - "formal/hol/Boyer_Moore/definitions.ml", - "formal/afp/Game_Based_Crypto/IND_CPA.thy", - "formal/lean/perfectoid/for_mathlib/padics.lean", - "formal/afp/CakeML/generated/Lem_basic_classes.thy", - "formal/afp/Consensus_Refined/Voting/OneThirdRule_Defs.thy", - "formal/afp/HyperCTL/Deep.thy", - "formal/afp/Tree-Automata/Tree.thy", - "formal/mizar/euclid_2.miz", - "formal/lean/mathlib/linear_algebra/clifford_algebra/conjugation.lean", - "formal/afp/Saturation_Framework/Intersection_Calculus.thy", - "formal/afp/Independence_CH/Infinity_Axiom.thy", - "formal/lean/mathlib/category_theory/additive/basic.lean", - "formal/afp/Security_Protocol_Refinement/Refinement/Agents.thy", - "formal/mizar/lopban_9.miz", - "formal/hol/RichterHilbertAxiomGeometry/miz3/FontHilbertAxiom.ml", - "formal/afp/Forcing/FrecR.thy", - "formal/lean/lftcm/hints/category_theory/exercise2/hint7.lean", - "formal/hol/Help/lhand.doc", - "formal/mizar/entropy1.miz", - "formal/afp/HRB-Slicing/Proc/PCFG.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2020/b/p13.lean", - "formal/afp/Auto2_Imperative_HOL/Functional/Union_Find.thy", - "formal/afp/IP_Addresses/document/root.tex", - "formal/afp/Ordinary_Differential_Equations/ODE_Auxiliarities.thy", - "formal/afp/Finite_Fields/Formal_Polynomial_Derivatives.thy", - 
"formal/lean/liquid/for_mathlib/cech.lean", - "formal/afp/SATSolverVerification/document/root.tex", - "formal/afp/Akra_Bazzi/Akra_Bazzi_Asymptotics.thy", - "formal/afp/Category2/Functors.thy", - "formal/afp/KAT_and_DRA/SingleSorted/KAT.thy", - "formal/lean/liquid/challenge_prerequisites.lean", - "formal/lean/liquid/condensed/tensor_short_exact.lean", - "formal/afp/MDP-Rewards/MDP_cont.thy", - "formal/afp/Jordan_Hoelder/CompositionSeries.thy", - "formal/afp/Universal_Turing_Machine/Abacus_Mopup.thy", - "formal/mizar/funcsdom.miz", - "formal/afp/Network_Security_Policy_Verification/Lib/FiniteGraph.thy", - "formal/afp/LocalLexing/MainTheorems.thy", - "formal/afp/Markov_Models/ex/Zeroconf_Analysis.thy", - "formal/afp/Show/Show_Real_Impl.thy", - "formal/afp/Regex_Equivalence/Examples.thy", - "formal/hol/Help/REAL_LE_IMP.doc", - "formal/afp/Isabelle_C/C11-FrontEnd/examples/C3.thy", - "formal/afp/Iptables_Semantics/Common/Repeat_Stabilize.thy", - "formal/lean/mathlib/algebra/polynomial/group_ring_action.lean", - "formal/afp/AI_Planning_Languages_Semantics/PDDL_STRIPS_Semantics.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p293.lean", - "formal/mizar/hallmar1.miz", - "formal/afp/ResiduatedTransitionSystem/document/root.tex", - "formal/afp/FO_Theory_Rewriting/Primitives/NF_Impl.thy", - "formal/hol/Help/try_user_parser.doc", - "formal/afp/IsaNet/instances/EPIC_L1_SA.thy", - "formal/afp/Category3/HFSetCat.thy", - "formal/afp/Residuated_Lattices/Residuated_Relation_Algebra.thy", - "formal/lean/mathlib/data/bitvec/core.lean", - "formal/mizar/transgeo.miz", - "formal/afp/Projective_Geometry/Projective_Space_Axioms.thy", - "formal/afp/Generic_Deriving/tests/Derive_Eq_Laws.thy", - "formal/afp/Flyspeck-Tame/GraphProps.thy", - "formal/mizar/metric_2.miz", - "formal/hol/Library/modmul_group.ml", - "formal/afp/Szemeredi_Regularity/Szemeredi.thy", - "formal/afp/Metalogic_ProofChecker/EqualityProof.thy", - "formal/lean/mathlib/analysis/box_integral/box/basic.lean", - "formal/afp/Transitive_Models/CardinalArith_Relative.thy", - "formal/afp/JinjaThreads/MM/HB_Completion.thy", - "formal/mizar/cfdiff_2.miz", - "formal/lean/mathlib/algebra/category/Module/basic.lean", - "formal/afp/JinjaThreads/Compiler/Compiler1.thy", - "formal/afp/Propositional_Proof_Systems/SC_Sema.thy", - "formal/afp/Launchbury/Env.thy", - "formal/hol/Library/pocklington.ml", - "formal/lean/lftcm/hints/category_theory/exercise3/hint2.lean", - "formal/lean/mathlib/measure_theory/covering/besicovitch.lean", - "formal/coq/odd-order/PFsection11.v", - "formal/afp/LinearQuantifierElim/Thys/QEpres.thy", - "formal/mizar/qc_lang1.miz", - "formal/lean/mathlib/topology/metric_space/baire.lean", - "formal/lean/perfectoid/for_mathlib/normed_spaces.lean", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_Set.thy", - "formal/mizar/bvfunc14.miz", - "formal/afp/Jordan_Normal_Form/DL_Missing_Sublist.thy", - "formal/hol/Help/REAL_INT_GE_CONV.doc", - "formal/lean/mathlib/group_theory/monoid_localization.lean", - "formal/afp/Binding_Syntax_Theory/Preliminaries.thy", - "formal/mizar/osalg_2.miz", - "formal/afp/Neumann_Morgenstern_Utility/Neumann_Morgenstern_Utility_Theorem.thy", - "formal/lean/mathlib/algebra/homology/homological_complex.lean", - "formal/hol/miz3/Samples/lagrange.ml", - "formal/afp/Pi_Calculus/Strong_Early_Bisim_SC.thy", - "formal/afp/MSO_Regex_Equivalence/WS1S_Normalization.thy", - "formal/afp/Prime_Harmonic_Series/Squarefree_Nat.thy", - "formal/mizar/holder_1.miz", - "formal/afp/Quantales/Quantale_Left_Sided.thy", - 
"formal/lean/mathzoo/mathzoo/olympiads/imo/2006/p6.lean", - "formal/afp/Auto2_HOL/HOL/Auto2_HOL.thy", - "formal/mizar/quofield.miz", - "formal/afp/Registers/Finite_Tensor_Product.thy", - "formal/afp/Frequency_Moments/K_Smallest.thy", - "formal/lean/mathlib/ring_theory/power_series/well_known.lean", - "formal/mizar/ndiff_4.miz", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_Ordinal.thy", - "formal/mizar/rlaffin3.miz", - "formal/afp/Simplicial_complexes_and_boolean_functions/document/root.tex", - "formal/mizar/measure3.miz", - "formal/hol/Help/ONCE_REWRITE_TAC.doc", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p11.lean", - "formal/lean/mathlib/category_theory/sums/basic.lean", - "formal/lean/mathlib/testing/slim_check/sampleable.lean", - "formal/lean/liquid/combinatorial_lemma/partition.lean", - "formal/lean/mathlib/topology/vector_bundle/hom.lean", - "formal/afp/Recursion-Theory-I/document/root.tex", - "formal/lean/mathlib/number_theory/bernoulli_polynomials.lean", - "formal/hol/Logic/canon.ml", - "formal/lean/mathlib/data/num/prime.lean", - "formal/mizar/yellow16.miz", - "formal/afp/UPF_Firewall/StatefulFW/StatefulCore.thy", - "formal/afp/AODV/variants/a_norreqid/A_Fresher.thy", - "formal/lean/mathlib/group_theory/submonoid/inverses.lean", - "formal/lean/mathlib/data/finset/lattice.lean", - "formal/afp/Physical_Quantities/SI_Constants.thy", - "formal/afp/CoSMeDis/Post_Confidentiality/Post_Unwinding_Helper_ISSUER.thy", - "formal/afp/JinjaThreads/Compiler/J1State.thy", - "formal/afp/Separation_Algebra/Map_Extra.thy", - "formal/lean/mathlib/algebraic_geometry/prime_spectrum/noetherian.lean", - "formal/afp/Octonions/Cross_Product_7.thy", - "formal/lean/mathlib/data/real/ennreal.lean", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/induction/sumkexp3eqsumksq.lean", - "formal/lean/mathlib/data/real/cau_seq.lean", - "formal/lean/mathlib/data/nat/totient.lean", - "formal/lean/liquid/breen_deligne/eval.lean", - "formal/afp/Isabelle_Meta_Model/meta_isabelle/Meta_Isabelle.thy", - "formal/lean/mathlib/group_theory/specific_groups/quaternion.lean", - "formal/afp/Goedel_Incompleteness/All_Abstract.thy", - "formal/coq/abel/xmathcomp/cyclotomic_ext.v", - "formal/lean/mathlib/category_theory/limits/constructions/limits_of_products_and_equalizers.lean", - "formal/afp/Flyspeck-Tame/Tame.thy", - "formal/hol/Multivariate/cauchy.ml", - "formal/afp/Buchi_Complementation/Ranking.thy", - "formal/lean/mathlib/data/countable/defs.lean", - "formal/afp/MSO_Regex_Equivalence/document/root.tex", - "formal/afp/Network_Security_Policy_Verification/Lib/Efficient_Distinct.thy", - "formal/afp/JinjaThreads/Compiler/Preprocessor.thy", - "formal/hol/Help/ONCE_REWRITE_RULE.doc", - "formal/afp/Iptables_Semantics/Access_Matrix_Embeddings.thy", - "formal/afp/Completeness/Sequents.thy", - "formal/afp/Progress_Tracking/Propagate.thy", - "formal/hol/Help/NUM_REDUCE_CONV.doc", - "formal/afp/List_Update/Swaps.thy", - "formal/afp/Open_Induction/document/root.tex", - "formal/afp/Independence_CH/CH.thy", - "formal/afp/VYDRA_MDL/Preliminaries.thy", - "formal/afp/Relation_Algebra/Relation_Algebra_Direct_Products.thy", - "formal/afp/UTP/utp/utp_hoare.thy", - "formal/afp/Virtual_Substitution/Heuristic.thy", - "formal/afp/Collections/ICF/impl/ListSetImpl.thy", - "formal/afp/Automatic_Refinement/Tool/Autoref_Id_Ops.thy", - "formal/afp/MiniSail/RCLogic.thy", - "formal/hol/Library/integer.ml", - "formal/hol/Help/REAL_INT_REDUCE_CONV.doc", - "formal/afp/Vickrey_Clarke_Groves/Argmax.thy", - 
"formal/lean/mathlib/category_theory/arrow.lean", - "formal/lean/liquid/for_mathlib/exact_functor.lean", - "formal/afp/Refine_Imperative_HOL/Lib/Pf_Mono_Prover.thy", - "formal/hol/Multivariate/make.ml", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1994/p1.lean", - "formal/mizar/nomin_5.miz", - "formal/afp/Subset_Boolean_Algebras/document/root.tex", - "formal/mizar/hurwitz2.miz", - "formal/hol/Help/pow10.doc", - "formal/mizar/fomodel4.miz", - "formal/afp/Winding_Number_Eval/Missing_Analysis.thy", - "formal/afp/Akra_Bazzi/Master_Theorem_Examples.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p200.lean", - "formal/mizar/csspace.miz", - "formal/lean/mathlib/data/pnat/find.lean", - "formal/mizar/fomodel3.miz", - "formal/afp/FOL_Seq_Calc1/Tableau.thy", - "formal/mizar/tdlat_3.miz", - "formal/coq/math-comp/field/all_field.v", - "formal/mizar/waybel28.miz", - "formal/afp/Berlekamp_Zassenhaus/Gcd_Finite_Field_Impl.thy", - "formal/afp/First_Welfare_Theorem/Microeconomics/Arrow_Debreu_Model.thy", - "formal/hol/Help/bndvar.doc", - "formal/afp/Algebraic_VCs/document/root.tex", - "formal/lean/liquid/breen_deligne/main.lean", - "formal/afp/Echelon_Form/Echelon_Form_IArrays.thy", - "formal/mizar/numpoly1.miz", - "formal/mizar/tops_4.miz", - "formal/hol/Help/F_F.doc", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p109.lean", - "formal/lean/mathlib/data/mv_polynomial/rename.lean", - "formal/mizar/hilb10_2.miz", - "formal/afp/BDD/ShareReduceRepListProof.thy", - "formal/afp/LLL_Basis_Reduction/List_Representation.thy", - "formal/mizar/bvfunc_4.miz", - "formal/lean/mathzoo/mathzoo/olympiads/aime/2020/ii/p6.lean", - "formal/lean/mathlib/data/mv_polynomial/funext.lean", - "formal/mizar/jordan8.miz", - "formal/afp/Adaptive_State_Counting/document/root.tex", - "formal/afp/WOOT_Strong_Eventual_Consistency/DistributedExecution.thy", - "formal/afp/AODV/variants/d_fwdrreqs/D_Seq_Invariants.thy", - "formal/hol/Boyer_Moore/make.ml", - "formal/lean/mathlib/ring_theory/coprime/lemmas.lean", - "formal/afp/Applicative_Lifting/Applicative_Functor.thy", - "formal/afp/LinearQuantifierElim/Thys/FRE.thy", - "formal/mizar/waybel_2.miz", - "formal/afp/Network_Security_Policy_Verification/Lib/ML_GraphViz_Disable.thy", - "formal/afp/Minimal_SSA/document/root.tex", - "formal/afp/Launchbury/AList-Utils-Nominal.thy", - "formal/afp/Propositional_Proof_Systems/ND_Compl_Truthtable.thy", - "formal/afp/Refine_Imperative_HOL/IICF/Intf/IICF_Prio_Bag.thy", - "formal/afp/Game_Based_Crypto/CryptHOL_Tutorial.thy", - "formal/afp/UTP/toolkit/List_Extra.thy", - "formal/afp/Store_Buffer_Reduction/ReduceStoreBuffer.thy", - "formal/afp/Key_Agreement_Strong_Adversaries/Implem_lemmas.thy", - "formal/afp/Modular_Assembly_Kit_Security/SystemSpecification/EventSystems.thy", - "formal/afp/Pi_Calculus/Weak_Late_Step_Semantics.thy", - "formal/lean/mathlib/category_theory/functor/reflects_isomorphisms.lean", - "formal/hol/Help/PROP_ATOM_CONV.doc", - "formal/lean/mathlib/category_theory/adjunction/fully_faithful.lean", - "formal/hol/Help/PURE_ASM_REWRITE_RULE.doc", - "formal/hol/Help/.joinparsers.doc", - "formal/afp/Prim_Dijkstra_Simple/Undirected_Graph_Specs.thy", - "formal/afp/JinjaDCI/JVM/JVMState.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p289.lean", - "formal/afp/AI_Planning_Languages_Semantics/document/root.tex", - "formal/lean/mathlib/category_theory/path_category.lean", - "formal/afp/Coinductive_Languages/Context_Free_Grammar.thy", - "formal/afp/Simple_Firewall/Simple_Packet.thy", - 
"formal/afp/Propositional_Proof_Systems/SC_Depth_Limit.thy", - "formal/afp/Types_To_Sets_Extension/Examples/TTS_Foundations/Foundations/FNDS_Set_Ext.thy", - "formal/afp/Types_To_Sets_Extension/Examples/SML_Relativization/Foundations/Product_Type_Ext.thy", - "formal/afp/Featherweight_OCL/UML_Library.thy", - "formal/afp/Independence_CH/Forcing_Data.thy", - "formal/afp/Isabelle_Marries_Dirac/More_Tensor.thy", - "formal/hol/Logic/birkhoff.ml", - "formal/afp/Consensus_Refined/document/root.tex", - "formal/afp/LambdaAuth/FMap_Lemmas.thy", - "formal/lean/mathlib/group_theory/finiteness.lean", - "formal/lean/mathlib/ring_theory/nakayama.lean", - "formal/afp/Transition_Systems_and_Automata/Automata/DRA/DRA_Nodes.thy", - "formal/afp/JinjaThreads/DFA/Err.thy", - "formal/afp/Modal_Logics_for_NTS/FL_Bisimilarity_Implies_Equivalence.thy", - "formal/afp/BTree/BTree_Height.thy", - "formal/afp/Formula_Derivatives/Examples/WS1S_Nameful_Examples.thy", - "formal/lean/mathlib/category_theory/single_obj.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p169.lean", - "formal/afp/Subresultants/document/root.tex", - "formal/mizar/waybel14.miz", - "formal/afp/BNF_CC/Fixpoints.thy", - "formal/afp/Coinductive/Lazy_LList.thy", - "formal/afp/AWN/ONode_Lifting.thy", - "formal/lean/mathlib/data/finset/pi.lean", - "formal/lean/mathlib/topology/sheaves/sheaf_condition/sites.lean", - "formal/afp/Core_SC_DOM/common/tests/Document_getElementById.thy", - "formal/afp/Twelvefold_Way/Twelvefold_Way_Entry3.thy", - "formal/afp/LLL_Factorization/Factorization_Algorithm_16_22.thy", - "formal/afp/Abstract-Hoare-Logics/Procs/PsLang.thy", - "formal/afp/CAVA_LTL_Modelchecker/BoolProgs/BoolProgs_Extras.thy", - "formal/afp/Completeness/Tree.thy", - "formal/mizar/group_17.miz", - "formal/afp/Localization_Ring/document/root.tex", - "formal/lean/mathlib/algebra/group/commute.lean", - "formal/lean/mathlib/data/polynomial/integral_normalization.lean", - "formal/afp/CAVA_LTL_Modelchecker/SM/RefPoint/SM_Cfg.thy", - "formal/hol/Help/choose.doc", - "formal/afp/Transitive-Closure/Transitive_Closure_RBT_Impl.thy", - "formal/coq/math-comp/algebra/mxpoly.v", - "formal/afp/Goedel_HFSet_Semanticless/document/root.tex", - "formal/afp/Generic_Deriving/tests/Derive_Show.thy", - "formal/hol/Help/CONJUNCTS_THEN2.doc", - "formal/afp/Goedel_Incompleteness/Standard_Model_More.thy", - "formal/afp/FO_Theory_Rewriting/Type_Instances_Impl.thy", - "formal/afp/CakeML/generated/Lem_set.thy", - "formal/lean/mathlib/category_theory/preadditive/default.lean", - "formal/lean/mathlib/algebra/category/Mon/filtered_colimits.lean", - "formal/afp/LocalLexing/TheoremD10.thy", - "formal/mizar/fib_num.miz", - "formal/afp/Delta_System_Lemma/ZF_Library.thy", - "formal/afp/Automated_Stateful_Protocol_Verification/Term_Abstraction.thy", - "formal/afp/Show/Show_Poly.thy", - "formal/lean/mathlib/topology/uniform_space/uniform_convergence.lean", - "formal/mizar/coh_sp.miz", - "formal/mizar/aofa_l00.miz", - "formal/afp/CakeML/generated/Lem_string.thy", - "formal/afp/Abstract_Soundness/document/root.tex", - "formal/afp/Containers/Collection_Order.thy", - "formal/afp/Ordinary_Differential_Equations/Refinement/Refine_Unions.thy", - "formal/afp/BDD/ProcedureSpecs.thy", - "formal/afp/Decl_Sem_Fun_PL/ValueProps.thy", - "formal/mizar/finance4.miz", - "formal/afp/Knot_Theory/Preliminaries.thy", - "formal/lean/mathlib/data/W/cardinal.lean", - "formal/afp/Decreasing-Diagrams-II/document/root.tex", - "formal/afp/Affine_Arithmetic/Affine_Code.thy", - "formal/mizar/uproots.miz", - 
"formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_Small_Order.thy", - "formal/afp/Optics/Lens_Algebra.thy", - "formal/mizar/lattice7.miz", - "formal/afp/Jinja/J/Annotate.thy", - "formal/lean/mathlib/category_theory/idempotents/functor_extension.lean", - "formal/afp/Flyspeck-Tame/document/root.tex", - "formal/lean/mathlib/category_theory/sites/left_exact.lean", - "formal/lean/mathlib/category_theory/monoidal/functor_category.lean", - "formal/afp/KBPs/SPRViewSingle.thy", - "formal/afp/Gauss_Jordan/Code_Generation_IArrays_Haskell.thy", - "formal/afp/CakeML/generated/Lem_function.thy", - "formal/mizar/dblseq_3.miz", - "formal/lean/mathlib/ring_theory/discrete_valuation_ring.lean", - "formal/afp/PLM/TAO_5_MetaSolver.thy", - "formal/afp/Tycon/Monad.thy", - "formal/afp/UPF/ParallelComposition.thy", - "formal/afp/Deriving/Generator_Aux.thy", - "formal/afp/Virtual_Substitution/VSQuad.thy", - "formal/lean/mathlib/group_theory/submonoid/basic.lean", - "formal/mizar/binarith.miz", - "formal/hol/Help/REAL_RAT_RED_CONV.doc", - "formal/mizar/fcont_2.miz", - "formal/afp/Binomial-Heaps/SkewBinomialHeap.thy", - "formal/afp/ConcurrentGC/Proofs.thy", - "formal/afp/First_Order_Terms/Option_Monad.thy", - "formal/afp/Interval_Arithmetic_Word32/Finite_String.thy", - "formal/afp/Modal_Logics_for_NTS/Weak_Expressive_Completeness.thy", - "formal/lean/mathlib/data/multiset/pi.lean", - "formal/afp/InformationFlowSlicing/LiftingIntra.thy", - "formal/mizar/topreal9.miz", - "formal/hol/Help/concl.doc", - "formal/afp/Timed_Automata/Regions_Beta.thy", - "formal/lean/mathlib/category_theory/monoidal/tor.lean", - "formal/afp/Knot_Theory/Kauffman_Invariance.thy", - "formal/afp/SimplifiedOntologicalArgument/SimpleVariant.thy", - "formal/mizar/pardepap.miz", - "formal/hol/Help/dest_imp.doc", - "formal/afp/Inductive_Inference/LIM_BC.thy", - "formal/afp/Probabilistic_Prime_Tests/Miller_Rabin_Test.thy", - "formal/lean/mathlib/data/set/equitable.lean", - "formal/coq/math-comp/character/integral_char.v", - "formal/afp/Ordinary_Differential_Equations/Numerics/Runge_Kutta.thy", - "formal/afp/Probabilistic_Timed_Automata/library/More_List.thy", - "formal/hol/Help/loadt.doc", - "formal/afp/Derangements/Derangements.thy", - "formal/afp/First_Order_Terms/Term.thy", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_Functor.thy", - "formal/lean/mathlib/measure_theory/measure/stieltjes.lean", - "formal/afp/CoreC++/Expr.thy", - "formal/lean/mathlib/category_theory/limits/shapes/multiequalizer.lean", - "formal/lean/mathlib/category_theory/limits/exact_functor.lean", - "formal/afp/Multiset_Ordering_NPC/RPO_NP_Hard.thy", - "formal/afp/MonoidalCategory/FreeMonoidalCategory.thy", - "formal/afp/Depth-First-Search/DFS.thy", - "formal/afp/Relational_Disjoint_Set_Forests/document/root.tex", - "formal/lean/mathlib/algebraic_geometry/prime_spectrum/is_open_comap_C.lean", - "formal/lean/mathlib/field_theory/splitting_field.lean", - "formal/afp/Pi_Calculus/Weak_Early_Cong.thy", - "formal/afp/Abstract-Hoare-Logics/Procs/PsHoareTotal.thy", - "formal/afp/MonoidalCategory/document/root.tex", - "formal/afp/Regression_Test_Selection/JVM_RTS/JVMSemantics.thy", - "formal/afp/Monad_Normalisation/document/root.tex", - "formal/afp/Transition_Systems_and_Automata/Automata/DRA/DRA_Implement.thy", - "formal/lean/mathlib/category_theory/concrete_category/basic.lean", - "formal/afp/Groebner_Bases/Auto_Reduction.thy", - "formal/afp/Abs_Int_ITP2012/Complete_Lattice_ix.thy", - "formal/afp/Iptables_Semantics/Firewall_Common.thy", - 
"formal/afp/Marriage/Marriage.thy", - "formal/afp/Circus/Circus_Actions.thy", - "formal/lean/mathlib/order/synonym.lean", - "formal/afp/Weighted_Path_Order/Precedence.thy", - "formal/hol/Help/LET_TAC.doc", - "formal/hol/Examples/combin.ml", - "formal/mizar/compts_1.miz", - "formal/afp/Incredible_Proof_Machine/Incredible_Signatures.thy", - "formal/afp/Goedel_Incompleteness/Abstract_Encoding.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/ineq_nto1onlt2m1on.lean", - "formal/afp/Circus/Denotational_Semantics.thy", - "formal/afp/Budan_Fourier/Descartes_Roots_Test.thy", - "formal/lean/mathlib/category_theory/limits/yoneda.lean", - "formal/afp/Modular_arithmetic_LLL_and_HNF_algorithms/Storjohann_Mod_Operation.thy", - "formal/afp/MFMC_Countable/Rel_PMF_Characterisation_MFMC.thy", - "formal/mizar/ltlaxio3.miz", - "formal/lean/mathlib/analysis/inner_product_space/pi_L2.lean", - "formal/afp/Category3/SetCat.thy", - "formal/afp/Buchi_Complementation/Complementation_Final.thy", - "formal/afp/Multirelations/Multirelations.thy", - "formal/afp/Pi_Calculus/Strong_Late_Bisim_Pres.thy", - "formal/afp/Prpu_Maxflow/Relabel_To_Front.thy", - "formal/afp/HOL-CSP/CopyBuffer.thy", - "formal/mizar/functor3.miz", - "formal/afp/TESL_Language/Config_Morphisms.thy", - "formal/afp/JinjaDCI/BV/ClassAdd.thy", - "formal/lean/mathlib/combinatorics/additive/salem_spencer.lean", - "formal/afp/Resolution_FOL/Completeness_Instance.thy", - "formal/lean/mathlib/measure_theory/function/floor.lean", - "formal/afp/Slicing/Dynamic/BitVector.thy", - "formal/afp/FOL_Seq_Calc3/Soundness.thy", - "formal/lean/mathlib/algebra/order/to_interval_mod.lean", - "formal/mizar/lattba_1.miz", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/numbertheory/2pownm1prime_nprime.lean", - "formal/hol/Help/dest_fun_ty.doc", - "formal/afp/DPRM_Theorem/Diophantine/Register_Machine_Sums.thy", - "formal/hol/Unity/aux_definitions.ml", - "formal/afp/CryptHOL/Environment_Functor.thy", - "formal/mizar/fdiff_3.miz", - "formal/afp/Kleene_Algebra/Dioid.thy", - "formal/mizar/bkmodel2.miz", - "formal/afp/Projective_Geometry/Desargues_2D.thy", - "formal/afp/MSO_Regex_Equivalence/Pi_Regular_Exp_Dual.thy", - "formal/afp/VeriComp/Language.thy", - "formal/afp/Tycon/document/root.tex", - "formal/afp/Collections/Examples/Autoref/Simple_DFS.thy", - "formal/afp/Aggregation_Algebras/document/root.tex", - "formal/lean/mathlib/ring_theory/fractional_ideal.lean", - "formal/lean/mathlib/ring_theory/coprime/ideal.lean", - "formal/afp/Ordinary_Differential_Equations/Refinement/Enclosure_Operations.thy", - "formal/mizar/group_21.miz", - "formal/afp/Game_Based_Crypto/RP_RF.thy", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_CommunicationPartners.thy", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_PCategory.thy", - "formal/lean/liquid/for_mathlib/CompHaus.lean", - "formal/afp/Featherweight_OCL/examples/Employee_Model/Design/Design_UML.thy", - "formal/afp/Lowe_Ontological_Argument/LoweOntologicalArgument_6.thy", - "formal/afp/Datatype_Order_Generator/Order_Generator.thy", - "formal/lean/mathlib/order/succ_pred/limit.lean", - "formal/afp/Jordan_Normal_Form/Missing_Misc.thy", - "formal/mizar/msualg_9.miz", - "formal/afp/Stone_Algebras/P_Algebras.thy", - "formal/lean/mathlib/topology/instances/ennreal.lean", - "formal/afp/Containers/Equal.thy", - "formal/afp/AVL-Trees/document/root.tex", - "formal/afp/ArrowImpossibilityGS/document/root.tex", - "formal/afp/Berlekamp_Zassenhaus/Hensel_Lifting.thy", - 
"formal/lean/mathlib/algebra/lie/non_unital_non_assoc_algebra.lean", - "formal/afp/Stream_Fusion_Code/Stream_Fusion.thy", - "formal/afp/Design_Theory/Designs_And_Graphs.thy", - "formal/lean/mathlib/algebra/add_torsor.lean", - "formal/afp/Groebner_Bases/Confluence.thy", - "formal/lean/mathlib/topology/algebra/nonarchimedean/adic_topology.lean", - "formal/afp/IMP2/lib/Subgoal_Focus_Some.thy", - "formal/mizar/pre_ff.miz", - "formal/afp/Relational_Minimum_Spanning_Trees/document/root.tex", - "formal/afp/Tarskis_Geometry/Tarski.thy", - "formal/afp/LambdaMu/Peirce.thy", - "formal/afp/Poincare_Disc/Hyperbolic_Functions.thy", - "formal/afp/Pi_Calculus/Rel.thy", - "formal/lean/mathlib/topology/sets/compacts.lean", - "formal/lean/mathlib/data/zmod/defs.lean", - "formal/afp/Simplex/Abstract_Linear_Poly.thy", - "formal/lean/mathlib/data/complex/determinant.lean", - "formal/afp/Roth_Arithmetic_Progressions/document/root.tex", - "formal/afp/Game_Based_Crypto/Game_Based_Crypto.thy", - "formal/afp/Chandy_Lamport/Snapshot.thy", - "formal/afp/LOFT/OpenFlow_Documentation.thy", - "formal/afp/Certification_Monads/Parser_Monad.thy", - "formal/lean/mathlib/topology/sheaves/sheaf_condition/unique_gluing.lean", - "formal/mizar/mesfun7c.miz", - "formal/afp/DiscretePricing/Infinite_Coin_Toss_Space.thy", - "formal/afp/First_Welfare_Theorem/Microeconomics/Common.thy", - "formal/lean/perfectoid/valuation/valuation_field_completion.lean", - "formal/mizar/sublemma.miz", - "formal/lean/mathlib/computability/halting.lean", - "formal/lean/mathlib/algebra/group/default.lean", - "formal/lean/mathlib/data/list/join.lean", - "formal/afp/Tycon/State_Transformer.thy", - "formal/afp/Simple_Firewall/document/root.tex", - "formal/afp/XML/document/root.tex", - "formal/afp/CofGroups/CofGroups.thy", - "formal/lean/mathlib/linear_algebra/coevaluation.lean", - "formal/afp/PLM/TAO_2_Semantics.thy", - "formal/afp/Correctness_Algebras/N_Semirings_Boolean.thy", - "formal/afp/Falling_Factorial_Sum/Falling_Factorial_Sum_Induction.thy", - "formal/afp/Prime_Distribution_Elementary/Prime_Distribution_Elementary_Library.thy", - "formal/hol/Help/remove_interface.doc", - "formal/lean/mathlib/logic/nonempty.lean", - "formal/hol/Help/MP.doc", - "formal/hol/Library/products.ml", - "formal/afp/Binding_Syntax_Theory/Terms.thy", - "formal/afp/Jinja/Compiler/Correctness2.thy", - "formal/afp/JinjaThreads/Framework/FWBisimDeadlock.thy", - "formal/afp/Promela/PromelaLTL.thy", - "formal/lean/mathlib/algebra/category/Ring/default.lean", - "formal/afp/Polynomial_Interpolation/document/root.tex", - "formal/afp/HRB-Slicing/Proc/ProcSDG.thy", - "formal/lean/mathlib/linear_algebra/special_linear_group.lean", - "formal/afp/Extended_Finite_State_Machines/Transition_Lexorder.thy", - "formal/afp/UPF_Firewall/PacketFilter/NetworkModels.thy", - "formal/lean/mathlib/linear_algebra/vandermonde.lean", - "formal/afp/JinjaThreads/Compiler/ListIndex.thy", - "formal/afp/Goedel_Incompleteness/Abstract_Representability.thy", - "formal/lean/mathlib/logic/encodable/basic.lean", - "formal/afp/Automatic_Refinement/Lib/Select_Solve.thy", - "formal/hol/Unity/mk_ensures.ml", - "formal/afp/Collections/ICF/impl/ListSetImpl_NotDist.thy", - "formal/coq/abel/xmathcomp/algR.v", - "formal/lean/mathlib/data/finsupp/fin.lean", - "formal/afp/Bernoulli/Periodic_Bernpoly.thy", - "formal/lean/lftcm/hints/category_theory/exercise1/hint3.lean", - "formal/mizar/amistd_5.miz", - "formal/afp/Applicative_Lifting/Beta_Eta.thy", - "formal/mizar/mesfun10.miz", - 
"formal/afp/Separation_Algebra/Separation_Algebra.thy", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_NoRefl_impl.thy", - "formal/lean/mathlib/algebraic_topology/alternating_face_map_complex.lean", - "formal/lean/mathlib/algebraic_topology/fundamental_groupoid/basic.lean", - "formal/afp/TortoiseHare/Brent.thy", - "formal/afp/Word_Lib/Word_EqI.thy", - "formal/mizar/polyeq_2.miz", - "formal/lean/mathlib/algebra/lie/basic.lean", - "formal/lean/liquid/for_mathlib/SheafOfTypes_sheafification.lean", - "formal/afp/TLA/Sequence.thy", - "formal/afp/Factor_Algebraic_Polynomial/Factor_Real_Poly.thy", - "formal/afp/Collections/ICF/tools/ICF_Tools.thy", - "formal/afp/Call_Arity/CardArityTransformSafe.thy", - "formal/mizar/gobrd14.miz", - "formal/afp/CakeML/generated/Lem_num.thy", - "formal/lean/mathlib/ring_theory/dedekind_domain/integral_closure.lean", - "formal/lean/mathlib/data/rat/cast.lean", - "formal/afp/Partial_Order_Reduction/Ample_Correctness.thy", - "formal/afp/Flow_Networks/Lib/Fofu_Impl_Base.thy", - "formal/mizar/chord.miz", - "formal/afp/FOL-Fitting/document/root.tex", - "formal/mizar/metrizts.miz", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1979/p1.lean", - "formal/afp/CryptHOL/GPV_Bisim.thy", - "formal/afp/Ordinary_Differential_Equations/ODE_Analysis.thy", - "formal/afp/Flyspeck-Tame/ScoreProps.thy", - "formal/lean/mathlib/algebra/star/self_adjoint.lean", - "formal/lean/mathzoo/mathzoo/olympiads/imo/2008/p4.lean", - "formal/lean/mathlib/algebraic_geometry/morphisms/basic.lean", - "formal/afp/Probabilistic_Timed_Automata/library/Sequence.thy", - "formal/afp/Encodability_Process_Calculi/DivergenceReflection.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p73.lean", - "formal/afp/Encodability_Process_Calculi/SimulationRelations.thy", - "formal/afp/Pairing_Heap/Pairing_Heap_List2.thy", - "formal/afp/Finite_Automata_HF/document/root.tex", - "formal/lean/mathlib/topology/algebra/uniform_group.lean", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1984/p15.lean", - "formal/afp/CoreC++/WellType.thy", - "formal/afp/CoCon/System_Specification.thy", - "formal/mizar/fdiff_7.miz", - "formal/afp/Stuttering_Equivalence/document/root.tex", - "formal/mizar/fdiff_11.miz", - "formal/afp/LambdaAuth/Results.thy", - "formal/afp/LocalLexing/ListTools.thy", - "formal/lean/liquid/polyhedral_lattice/cosimplicial.lean", - "formal/lean/mathlib/algebra/lie/base_change.lean", - "formal/lean/mathlib/logic/equiv/nat.lean", - "formal/coq/math-comp/algebra/countalg.v", - "formal/afp/RSAPSS/RSAPSS.thy", - "formal/hol/Help/alphaorder.doc", - "formal/hol/Boyer_Moore/induction.ml", - "formal/mizar/simplex0.miz", - "formal/afp/Separation_Logic_Imperative_HOL/Examples/Hash_Table.thy", - "formal/hol/Formal_ineqs/make.ml", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p142.lean", - "formal/hol/Help/rand.doc", - "formal/afp/Lowe_Ontological_Argument/LoweOntologicalArgument_4.thy", - "formal/hol/Rqe/condense_thms.ml", - "formal/lean/mathlib/linear_algebra/matrix/pos_def.lean", - "formal/hol/Help/ASSUME.doc", - "formal/afp/Word_Lib/Reversed_Bit_Lists.thy", - "formal/afp/Inductive_Inference/CP_FIN_NUM.thy", - "formal/hol/Help/freesin.doc", - "formal/afp/Correctness_Algebras/Recursion_Strict.thy", - "formal/mizar/funct_3.miz", - "formal/afp/VYDRA_MDL/document/root.tex", - "formal/afp/Consensus_Refined/MRU/New_Algorithm_Defs.thy", - "formal/afp/Automatic_Refinement/Tool/Autoref_Translate.thy", - "formal/lean/mathlib/order/conditionally_complete_lattice.lean", - 
"formal/lean/mathlib/category_theory/sites/cover_preserving.lean", - "formal/lean/mathlib/control/bitraversable/basic.lean", - "formal/mizar/modal_1.miz", - "formal/afp/Consensus_Refined/MRU/CT_Defs.thy", - "formal/afp/Promela/PromelaLTLConv.thy", - "formal/afp/Heard_Of/eigbyz/EigbyzProof.thy", - "formal/afp/DPRM_Theorem/Diophantine/Digit_Function.thy", - "formal/afp/Tycon/TypeApp.thy", - "formal/lean/mathlib/set_theory/zfc/basic.lean", - "formal/hol/QBF/mygraph.ml", - "formal/lean/mathlib/topology/category/Top/default.lean", - "formal/afp/AWN/OClosed_Transfer.thy", - "formal/hol/Help/CLAIM_TAC.doc", - "formal/afp/Flow_Networks/Residual_Graph.thy", - "formal/afp/ConcurrentGC/document/root.tex", - "formal/mizar/connsp_3.miz", - "formal/afp/CakeML_Codegen/Rewriting/Big_Step_Value_ML.thy", - "formal/lean/mathlib/algebra/category/Module/images.lean", - "formal/lean/liquid/pseudo_normed_group/category/strictProFiltPseuNormGrp.lean", - "formal/afp/SIFUM_Type_Systems/Compositionality.thy", - "formal/lean/liquid/for_mathlib/types.lean", - "formal/mizar/rat_1.miz", - "formal/afp/Lazy_Case/Test_Lazy_Case.thy", - "formal/afp/Correctness_Algebras/Domain_Recursion.thy", - "formal/lean/mathlib/ring_theory/hahn_series.lean", - "formal/afp/CYK/document/root.tex", - "formal/afp/Jinja/Common/TypeRel.thy", - "formal/mizar/parsp_2.miz", - "formal/afp/AODV/variants/c_gtobcast/C_Gtobcast.thy", - "formal/afp/Bicategory/EquivalenceOfBicategories.thy", - "formal/lean/mathlib/data/list/alist.lean", - "formal/afp/Registers/Laws_Complement.thy", - "formal/afp/Constructive_Cryptography_CM/State_Isomorphism.thy", - "formal/mizar/aff_1.miz", - "formal/afp/Psi_Calculi/Bisim_Pres.thy", - "formal/lean/mathlib/measure_theory/function/lp_order.lean", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/apbon2pownleqapownpbpowon2.lean", - "formal/lean/mathlib/number_theory/padics/padic_integers.lean", - "formal/mizar/sin_cos8.miz", - "formal/afp/Laplace_Transform/Laplace_Transform_Library.thy", - "formal/afp/Virtual_Substitution/HeuristicProofs.thy", - "formal/afp/Neumann_Morgenstern_Utility/PMF_Composition.thy", - "formal/afp/MDP-Algorithms/code/Code_Real_Approx_By_Float_Fix.thy", - "formal/lean/mathlib/control/bitraversable/instances.lean", - "formal/afp/Launchbury/Env-HOLCF.thy", - "formal/afp/CakeML_Codegen/Rewriting/Rewriting_Sterm.thy", - "formal/afp/Call_Arity/List-Interleavings.thy", - "formal/afp/Prime_Harmonic_Series/document/root.tex", - "formal/afp/Dict_Construction/document/root.tex", - "formal/afp/KAT_and_DRA/SingleSorted/KAT_Models.thy", - "formal/afp/Triangle/Angles.thy", - "formal/lean/mathlib/measure_theory/measure/haar.lean", - "formal/afp/CakeML_Codegen/CupCakeML/CupCake_Semantics.thy", - "formal/afp/Store_Buffer_Reduction/Variants.thy", - "formal/hol/Help/W.doc", - "formal/lean/mathlib/linear_algebra/tensor_power.lean", - "formal/afp/Refine_Imperative_HOL/Lib/Pf_Add.thy", - "formal/afp/Refine_Monadic/Refine_Heuristics.thy", - "formal/lean/mathlib/data/list/of_fn.lean", - "formal/mizar/waybel_8.miz", - "formal/mizar/matrlin.miz", - "formal/hol/Help/REAL_RAT_LT_CONV.doc", - "formal/hol/100/stirling.ml", - "formal/lean/mathlib/category_theory/bicategory/End.lean", - "formal/lean/mathlib/category_theory/idempotents/karoubi_karoubi.lean", - "formal/afp/Groebner_Bases/Reduced_GB_Examples.thy", - "formal/afp/Refine_Imperative_HOL/Sepref_HOL_Bindings.thy", - "formal/afp/Transitive_Models/Higher_Order_Constructs.thy", - "formal/mizar/tmap_1.miz", - "formal/afp/Probabilistic_Noninterference/Language_Semantics.thy", 
- "formal/lean/mathlib/ring_theory/polynomial/gauss_lemma.lean", - "formal/lean/mathlib/category_theory/adjunction/lifting.lean", - "formal/afp/Tycon/Maybe_Monad.thy", - "formal/afp/Complex_Bounded_Operators/Cblinfun_Code.thy", - "formal/afp/Myhill-Nerode/Closures2.thy", - "formal/lean/mathlib/group_theory/group_action/sum.lean", - "formal/afp/Order_Lattice_Props/Closure_Operators.thy", - "formal/afp/Residuated_Lattices/document/root.tex", - "formal/mizar/finseq_1.miz", - "formal/afp/Constructive_Cryptography_CM/Concrete_Security.thy", - "formal/lean/mathlib/linear_algebra/free_module/finite/basic.lean", - "formal/afp/CZH_Elementary_Categories/document/root.tex", - "formal/afp/Flyspeck-Tame/Enumerator.thy", - "formal/afp/Allen_Calculus/examples.thy", - "formal/hol/Help/get_type_arity.doc", - "formal/afp/Iptables_Semantics/Examples/Ringofsaturn_com/Analyze_Ringofsaturn_com.thy", - "formal/afp/Consensus_Refined/Observing/Uv_Proofs.thy", - "formal/afp/Priority_Search_Trees/document/root.tex", - "formal/lean/mathlib/category_theory/monoidal/free/basic.lean", - "formal/afp/VectorSpace/FunctionLemmas.thy", - "formal/lean/mathlib/topology/uniform_space/absolute_value.lean", - "formal/afp/Quantales/Quantale_Star.thy", - "formal/lean/mathlib/topology/uniform_space/pi.lean", - "formal/afp/Program-Conflict-Analysis/Interleave.thy", - "formal/lean/liquid/for_mathlib/homological_complex_equiv_functor_category.lean", - "formal/hol/Help/lex.doc", - "formal/afp/Modal_Logics_for_NTS/Transition_System.thy", - "formal/mizar/vectsp_2.miz", - "formal/lean/perfectoid/valuation/basic.lean", - "formal/mizar/matrix13.miz", - "formal/mizar/ltlaxio4.miz", - "formal/afp/JinjaThreads/Compiler/J0Bisim.thy", - "formal/afp/PseudoHoops/LeftComplementedMonoid.thy", - "formal/afp/Transition_Systems_and_Automata/Automata/DBTA/DBTA.thy", - "formal/afp/FocusStreamsCaseStudies/document/root.tex", - "formal/afp/UPF_Firewall/PacketFilter/DatatypeAddress.thy", - "formal/afp/Collections/Iterator/Array_Iterator.thy", - "formal/lean/mathlib/algebraic_geometry/pullbacks.lean", - "formal/afp/VeriComp/Compiler.thy", - "formal/afp/Probabilistic_Prime_Tests/Carmichael_Numbers.thy", - "formal/afp/Automated_Stateful_Protocol_Verification/trac/trac_fp_parser.thy", - "formal/afp/Stern_Brocot/document/root.tex", - "formal/mizar/anproj_1.miz", - "formal/afp/Groebner_Bases/Groebner_Bases.thy", - "formal/afp/Registers/Axioms_Classical.thy", - "formal/afp/Symmetric_Polynomials/document/root.tex", - "formal/afp/Regular-Sets/Equivalence_Checking2.thy", - "formal/afp/Auto2_Imperative_HOL/Functional/Partial_Equiv_Rel.thy", - "formal/afp/Types_To_Sets_Extension/Examples/SML_Relativization/SML_Conclusions.thy", - "formal/lean/mathlib/algebra/algebra/basic.lean", - "formal/lean/mathlib/algebra/free_non_unital_non_assoc_algebra.lean", - "formal/afp/CCS/Struct_Cong.thy", - "formal/afp/Completeness/PermutationLemmas.thy", - "formal/afp/Binding_Syntax_Theory/Pick.thy", - "formal/lean/mathlib/data/set/accumulate.lean", - "formal/lean/mathlib/order/boolean_algebra.lean", - "formal/lean/mathlib/algebra/regular/basic.lean", - "formal/afp/Skip_Lists/Geometric_PMF.thy", - "formal/afp/Constructive_Cryptography_CM/Fused_Resource.thy", - "formal/mizar/fin_topo.miz", - "formal/afp/Dirichlet_Series/Moebius_Mu.thy", - "formal/afp/CakeML_Codegen/Terms/Terms_Extras.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/cubrtrp1oncubrtreq3_rcubp1onrcubeq5778.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p158.lean", - 
"formal/lean/liquid/normed_free_pfpng/compare.lean", - "formal/lean/mathlib/linear_algebra/matrix/charpoly/coeff.lean", - "formal/lean/liquid/for_mathlib/AddCommGroup/tensor_short_exact.lean", - "formal/afp/Lehmer/Lehmer.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p392.lean", - "formal/afp/Nominal2/Eqvt.thy", - "formal/afp/Probabilistic_Timed_Automata/library/Lib.thy", - "formal/afp/DOM_Components/counterexample/fancy_tabs.thy", - "formal/hol/Help/help_path.doc", - "formal/afp/JinjaDCI/Compiler/Correctness2.thy", - "formal/afp/XML/Xmlt.thy", - "formal/lean/mathlib/algebra/homology/image_to_kernel.lean", - "formal/mizar/xreal_1.miz", - "formal/afp/Formal_SSA/While_Combinator_Exts.thy", - "formal/mizar/ec_pf_2.miz", - "formal/hol/Help/is_intconst.doc", - "formal/mizar/rmod_2.miz", - "formal/afp/Collections/ICF/impl/HashMap.thy", - "formal/afp/Logging_Independent_Anonymity/Definitions.thy", - "formal/afp/CZH_Foundations/czh_sets/CZH_Sets_FBRelations.thy", - "formal/afp/Launchbury/AList-Utils.thy", - "formal/lean/mathlib/analysis/asymptotics/theta.lean", - "formal/afp/Nested_Multisets_Ordinals/Duplicate_Free_Multiset.thy", - "formal/afp/JinjaDCI/Common/Conform.thy", - "formal/coq/math-comp/solvable/center.v", - "formal/hol/Help/num_2.doc", - "formal/afp/Randomised_Social_Choice/Random_Dictatorship.thy", - "formal/afp/GaleStewart_Games/FilteredList.thy", - "formal/afp/CoCon/Review_Confidentiality/Review_Value_Setup.thy", - "formal/afp/Linear_Inequalities/Basis_Extension.thy", - "formal/afp/JinjaThreads/BV/BVSpecTypeSafe.thy", - "formal/afp/Conditional_Transfer_Rule/UD/UD.thy", - "formal/afp/Collections/Examples/Refine_Monadic/Foreach_Refine.thy", - "formal/afp/Isabelle_Marries_Dirac/Complex_Vectors.thy", - "formal/lean/mathlib/analysis/complex/basic.lean", - "formal/afp/ROBDD/Conc_Impl.thy", - "formal/lean/mathlib/data/polynomial/degree/default.lean", - "formal/afp/Pi_Calculus/Weak_Late_Sim.thy", - "formal/afp/CakeML_Codegen/Test/Test_Embed_Tree.thy", - "formal/afp/Consensus_Refined/MRU/Paxos_Defs.thy", - "formal/lean/liquid/condensed/rescale.lean", - "formal/afp/Metalogic_ProofChecker/Term_Subst.thy", - "formal/hol/miz3/Samples/irrat2.ml", - "formal/afp/Word_Lib/Word_16.thy", - "formal/afp/Forcing/Recursion_Thms.thy", - "formal/hol/Help/ABBREV_TAC.doc", - "formal/mizar/stacks_1.miz", - "formal/hol/Help/NUM_NORMALIZE_CONV.doc", - "formal/afp/ADS_Functor/Inclusion_Proof_Construction.thy", - "formal/afp/ConcurrentGC/Global_Noninterference.thy", - "formal/afp/UTP/utp/utp_sequent.thy", - "formal/coq/math-comp/algebra/poly.v", - "formal/afp/Splay_Tree/document/root.tex", - "formal/mizar/euclidlp.miz", - "formal/hol/Help/REAL_POLY_SUB_CONV.doc", - "formal/afp/Probabilistic_Prime_Tests/QuadRes.thy", - "formal/hol/Help/alpha.doc", - "formal/hol/Help/repeat.doc", - "formal/mizar/jordan22.miz", - "formal/afp/Regular_Algebras/Dioid_Power_Sum.thy", - "formal/afp/Affine_Arithmetic/Executable_Euclidean_Space.thy", - "formal/lean/mathlib/algebra/direct_sum/ring.lean", - "formal/afp/Goedel_HFSet_Semantic/Instance.thy", - "formal/afp/BenOr_Kozen_Reif/document/root.tex", - "formal/lean/mathlib/ring_theory/polynomial/homogeneous.lean", - "formal/afp/Ordered_Resolution_Prover/document/root.tex", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1987/p8.lean", - "formal/afp/Nash_Williams/document/root.tex", - "formal/mizar/pdiff_1.miz", - "formal/afp/Tycon/Functor.thy", - "formal/afp/Decl_Sem_Fun_PL/DeclSemAsNDInterpFSet.thy", - "formal/hol/Help/LIST_CONV.doc", - 
"formal/lean/mathlib/topology/order/hom/basic.lean", - "formal/hol/Help/PRESIMP_CONV.doc", - "formal/lean/mathlib/analysis/inner_product_space/l2_space.lean", - "formal/hol/Help/dest_exists.doc", - "formal/afp/FileRefinement/CArrays.thy", - "formal/afp/Call_Arity/AEnv.thy", - "formal/afp/Shadow_SC_DOM/tests/Shadow_DOM_Node_insertBefore.thy", - "formal/afp/List_Update/Comb.thy", - "formal/afp/Dirichlet_L/Dirichlet_L_Functions.thy", - "formal/mizar/normsp_2.miz", - "formal/afp/Core_DOM/common/preliminaries/Testing_Utils.thy", - "formal/afp/JiveDataStoreModel/document/root.tex", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_SubnetsInGW_impl.thy", - "formal/lean/mathlib/group_theory/perm/fin.lean", - "formal/afp/Call_Arity/ArityAnalysisSpec.thy", - "formal/lean/mathlib/category_theory/morphism_property.lean", - "formal/afp/Budan_Fourier/document/root.tex", - "formal/afp/Constructive_Cryptography_CM/Specifications/Key.thy", - "formal/coq/math-comp/algebra/all_algebra.v", - "formal/afp/CoCon/Discussion_Confidentiality/Discussion_All.thy", - "formal/afp/Collections/Collections_Entrypoints_Chapter.thy", - "formal/mizar/radix_3.miz", - "formal/mizar/gate_2.miz", - "formal/mizar/waybel_7.miz", - "formal/mizar/ring_4.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p543.lean", - "formal/lean/mathlib/category_theory/monoidal/limits.lean", - "formal/afp/Collections/ICF/impl/HashMap_Impl.thy", - "formal/afp/Simple_Firewall/Common/IP_Addr_WordInterval_toString.thy", - "formal/hol/Arithmetic/make.ml", - "formal/afp/Prime_Number_Theorem/document/root.tex", - "formal/lean/mathlib/topology/homotopy/equiv.lean", - "formal/lean/mathlib/data/int/cast_field.lean", - "formal/hol/Help/is_forall.doc", - "formal/afp/Correctness_Algebras/Capped_Omega_Algebras.thy", - "formal/mizar/glib_014.miz", - "formal/hol/Help/remark.doc", - "formal/lean/liquid/condensed/sheafification_homology.lean", - "formal/afp/Simpl/ex/VcgExSP.thy", - "formal/mizar/extpro_1.miz", - "formal/mizar/niven.miz", - "formal/afp/QHLProver/document/root.tex", - "formal/afp/Certification_Monads/Error_Syntax.thy", - "formal/lean/mathlib/topology/algebra/uniform_mul_action.lean", - "formal/afp/Hybrid_Systems_VCs/HS_VC_Examples.thy", - "formal/lean/mathlib/category_theory/filtered.lean", - "formal/mizar/field_6.miz", - "formal/lean/mathlib/topology/metric_space/emetric_paracompact.lean", - "formal/afp/Falling_Factorial_Sum/Falling_Factorial_Sum_Combinatorics.thy", - "formal/afp/Propositional_Proof_Systems/HC.thy", - "formal/afp/Well_Quasi_Orders/Minimal_Elements.thy", - "formal/afp/Refine_Monadic/Refine_Automation.thy", - "formal/afp/Incompleteness/Sigma.thy", - "formal/lean/mathlib/order/filter/lift.lean", - "formal/afp/Shivers-CFA/SetMap.thy", - "formal/afp/Shadow_DOM/monads/ShadowRootMonad.thy", - "formal/lean/mathlib/algebra/category/Group/limits.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p461.lean", - "formal/hol/RichterHilbertAxiomGeometry/readable.ml", - "formal/afp/Splay_Tree/Splay_Heap.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p517.lean", - "formal/lean/mathlib/order/symm_diff.lean", - "formal/afp/ROBDD/Middle_Impl.thy", - "formal/afp/VerifyThis2018/lib/Synth_Definition.thy", - "formal/lean/mathlib/category_theory/functor/currying.lean", - "formal/mizar/waybel_6.miz", - "formal/afp/Native_Word/Uint_Userguide.thy", - "formal/afp/Automatic_Refinement/Lib/Foldi.thy", - "formal/lean/mathlib/algebra/big_operators/order.lean", - 
"formal/afp/Psi_Calculi/Weak_Bisimulation.thy", - "formal/lean/mathlib/analysis/normed_space/finite_dimension.lean", - "formal/afp/Prpu_Maxflow/Prpu_Common_Inst.thy", - "formal/afp/Incredible_Proof_Machine/Incredible_Everything.thy", - "formal/afp/Closest_Pair_Points/Common.thy", - "formal/afp/JinjaThreads/JVM/JVMDefensive.thy", - "formal/afp/Real_Impl/Real_Unique_Impl.thy", - "formal/mizar/measur12.miz", - "formal/afp/LTL_Master_Theorem/Logical_Characterization/Master_Theorem.thy", - "formal/mizar/stirl2_1.miz", - "formal/afp/Valuation/document/root.tex", - "formal/afp/Nested_Multisets_Ordinals/Signed_Hereditary_Multiset.thy", - "formal/afp/Noninterference_Generic_Unwinding/document/root.tex", - "formal/afp/Ordinary_Differential_Equations/Ex/Lorenz/Lorenz_Approximation.thy", - "formal/lean/mathlib/algebra/ring_quot.lean", - "formal/afp/CakeML_Codegen/Utils/Code_Utils.thy", - "formal/afp/MonoidalCategory/MonoidalCategory.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p44.lean", - "formal/afp/Isabelle_C/C11-FrontEnd/examples/C_paper.thy", - "formal/afp/Bernoulli/Bernoulli_FPS.thy", - "formal/afp/IEEE_Floating_Point/IEEE.thy", - "formal/mizar/csspace2.miz", - "formal/afp/Stone_Relation_Algebras/Relation_Algebras.thy", - "formal/lean/mathlib/data/vector/basic.lean", - "formal/afp/Slicing/While/StaticControlDependences.thy", - "formal/mizar/altcat_5.miz", - "formal/afp/PropResPI/Prime_Implicates.thy", - "formal/afp/Randomised_Social_Choice/Randomised_Social_Choice.thy", - "formal/hol/Help/itlist.doc", - "formal/afp/CAVA_LTL_Modelchecker/SM/RefPoint/Gen_Scheduler.thy", - "formal/afp/Independence_CH/Separation_Rename.thy", - "formal/afp/Regression_Test_Selection/JVM_RTS/JVMCollectionSemantics.thy", - "formal/lean/mathlib/data/list/infix.lean", - "formal/mizar/intpro_1.miz", - "formal/hol/Help/a.doc", - "formal/afp/SpecCheck/Dynamic/SpecCheck_Dynamic.thy", - "formal/afp/X86_Semantics/BitByte.thy", - "formal/afp/LocalLexing/CFG.thy", - "formal/mizar/ami_2.miz", - "formal/afp/Forcing/Renaming_Auto.thy", - "formal/lean/mathlib/analysis/normed_space/ball_action.lean", - "formal/afp/AODV/Aodv.thy", - "formal/hol/Rqe/rqe_num.ml", - "formal/afp/JinjaThreads/BV/BVNoTypeError.thy", - "formal/lean/perfectoid/examples/discrete.lean", - "formal/afp/Poincare_Disc/Poincare_Tarski.thy", - "formal/afp/Sort_Encodings/T_G_Prelim.thy", - "formal/afp/Store_Buffer_Reduction/Abbrevs.thy", - "formal/mizar/matrix15.miz", - "formal/afp/Jinja/J/State.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/others_exirrpowirrrat.lean", - "formal/lean/mathlib/data/multiset/antidiagonal.lean", - "formal/afp/JinjaThreads/Common/Decl.thy", - "formal/afp/Jinja/BV/BVSpecTypeSafe.thy", - "formal/afp/JinjaDCI/J/Expr.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p222.lean", - "formal/lean/mathlib/data/subtype.lean", - "formal/afp/Collections/ICF/gen_algo/PrioUniqueByAnnotatedList.thy", - "formal/lean/mathlib/category_theory/elementwise.lean", - "formal/afp/ZFC_in_HOL/ZFC_Typeclasses.thy", - "formal/lean/mathlib/analysis/hofer.lean", - "formal/lean/mathlib/order/partial_sups.lean", - "formal/mizar/ami_3.miz", - "formal/afp/CakeML_Codegen/Terms/HOL_Datatype.thy", - "formal/afp/CCS/Weak_Sim_Pres.thy", - "formal/afp/Complx/OG_Soundness.thy", - "formal/hol/Help/ASM.doc", - "formal/hol/Help/ONCE_SIMP_RULE.doc", - "formal/mizar/finseq_3.miz", - "formal/afp/MSO_Regex_Equivalence/WS1S.thy", - "formal/lean/liquid/examples/real.lean", - "formal/lean/mathlib/algebra/field/basic.lean", - 
"formal/afp/Coinductive_Languages/Coinductive_Regular_Set.thy", - "formal/afp/SATSolverVerification/FunctionalImplementation.thy", - "formal/afp/JinjaThreads/Basic/JT_ICF.thy", - "formal/afp/Winding_Number_Eval/Missing_Transcendental.thy", - "formal/hol/Help/string_of_thm.doc", - "formal/lean/mathlib/data/sigma/interval.lean", - "formal/afp/LTL_to_DRA/LTL_FGXU.thy", - "formal/afp/DiscretePricing/Martingale.thy", - "formal/afp/Affine_Arithmetic/Counterclockwise.thy", - "formal/hol/Help/NUM_LE_CONV.doc", - "formal/coq/math-comp/algebra/polyXY.v", - "formal/afp/LocalLexing/Derivations.thy", - "formal/mizar/supinf_2.miz", - "formal/afp/FinFun/FinFunPred.thy", - "formal/lean/mathlib/linear_algebra/affine_space/independent.lean", - "formal/afp/Verified-Prover/document/root.tex", - "formal/lean/liquid/for_mathlib/hom_single_iso2.lean", - "formal/afp/DPRM_Theorem/Machine_Equations/Register_Equations.thy", - "formal/afp/Amortized_Complexity/Priority_Queue_ops_merge.thy", - "formal/lean/liquid/condensed/kernel_comparison.lean", - "formal/lean/liquid/Lbar/bounded.lean", - "formal/mizar/multop_1.miz", - "formal/afp/IMP2/lib/Named_Simpsets.thy", - "formal/afp/Word_Lib/Word_8.thy", - "formal/afp/Applicative_Lifting/Applicative_Examples.thy", - "formal/afp/Timed_Automata/Regions.thy", - "formal/lean/mathlib/data/finset/option.lean", - "formal/mizar/functor0.miz", - "formal/lean/mathlib/measure_theory/group/pointwise.lean", - "formal/afp/Menger/DisjointPaths.thy", - "formal/afp/CakeML/generated/Lem_sorting.thy", - "formal/mizar/ideal_2.miz", - "formal/afp/Architectural_Design_Patterns/document/root.tex", - "formal/afp/Concurrent_Revisions/Determinacy.thy", - "formal/afp/Isabelle_Meta_Model/toy_example/embedding/Generator_dynamic_sequential.thy", - "formal/afp/Smith_Normal_Form/Diagonal_To_Smith.thy", - "formal/lean/mathlib/field_theory/subfield.lean", - "formal/afp/Transitive_Models/Cardinal_Library_Relative.thy", - "formal/afp/KAT_and_DRA/SingleSorted/PHL_KAT.thy", - "formal/afp/Launchbury/HasESem.thy", - "formal/lean/mathlib/category_theory/adjunction/whiskering.lean", - "formal/afp/Factor_Algebraic_Polynomial/Is_Int_To_Int.thy", - "formal/hol/100/perfect.ml", - "formal/afp/HRB-Slicing/JinjaVM_Inter/JVMSDG.thy", - "formal/afp/AODV/variants/a_norreqid/A_Seq_Invariants.thy", - "formal/hol/Logic/givensem.ml", - "formal/hol/Help/tryfind.doc", - "formal/afp/Relational_Forests/document/root.tex", - "formal/afp/Types_To_Sets_Extension/Examples/SML_Relativization/Foundations/SML_Relations.thy", - "formal/afp/Shivers-CFA/Computability.thy", - "formal/afp/Ptolemys_Theorem/Ptolemys_Theorem.thy", - "formal/lean/mathlib/linear_algebra/affine_space/slope.lean", - "formal/lean/mathlib/measure_theory/measure/giry_monad.lean", - "formal/mizar/struct_0.miz", - "formal/hol/Complex/cpoly.ml", - "formal/afp/JinjaThreads/BV/TF_JVM.thy", - "formal/afp/Collections/ICF/tools/Locale_Code.thy", - "formal/lean/mathlib/analysis/normed/group/ball_sphere.lean", - "formal/lean/mathlib/data/polynomial/identities.lean", - "formal/afp/IMP2/document/root.tex", - "formal/lean/mathlib/data/set/functor.lean", - "formal/lean/mathlib/data/W/constructions.lean", - "formal/afp/IEEE_Floating_Point/IEEE_Properties.thy", - "formal/hol/Help/REAL_RAT_MAX_CONV.doc", - "formal/afp/WHATandWHERE_Security/WHATWHERE_Security.thy", - "formal/afp/JinjaThreads/MM/JMM_J_Typesafe.thy", - "formal/afp/Deep_Learning/Lebesgue_Functional.thy", - "formal/mizar/midsp_1.miz", - "formal/mizar/semi_af1.miz", - "formal/afp/Subresultants/Subresultant_Gcd.thy", - 
"formal/lean/mathlib/control/traversable/lemmas.lean", - "formal/mizar/chain_1.miz", - "formal/mizar/pua2mss1.miz", - "formal/afp/CZH_Foundations/czh_sets/CZH_Sets_VNHS.thy", - "formal/afp/Safe_Distance/Evaluation.thy", - "formal/afp/Automated_Stateful_Protocol_Verification/Examples.thy", - "formal/lean/mathlib/algebraic_topology/dold_kan/p_infty.lean", - "formal/mizar/srings_2.miz", - "formal/afp/Transition_Systems_and_Automata/Automata/NBA/NBA_Translate.thy", - "formal/afp/BirdKMP/KMP.thy", - "formal/afp/Iptables_Semantics/Common/Ternary.thy", - "formal/afp/Store_Buffer_Reduction/SyntaxTweaks.thy", - "formal/afp/Grothendieck_Schemes/Comm_Ring.thy", - "formal/lean/liquid/condensed/exact.lean", - "formal/afp/Stochastic_Matrices/document/root.tex", - "formal/lean/mathlib/algebra/module/submodule/pointwise.lean", - "formal/lean/mathlib/data/mv_polynomial/expand.lean", - "formal/afp/Laplace_Transform/Laplace_Transform.thy", - "formal/afp/Certification_Monads/Error_Monad.thy", - "formal/mizar/cardfin2.miz", - "formal/lean/mathlib/data/int/absolute_value.lean", - "formal/afp/Jinja/J/execute_WellType.thy", - "formal/lean/mathlib/data/matrix/block.lean", - "formal/hol/Model/syntax.ml", - "formal/mizar/orders_3.miz", - "formal/afp/PLM/TAO_1_Embedding.thy", - "formal/afp/OpSets/Insert_Spec.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1962/p4.lean", - "formal/afp/Factor_Algebraic_Polynomial/MPoly_Container.thy", - "formal/hol/Library/binary.ml", - "formal/afp/Regular-Sets/Regexp_Method.thy", - "formal/afp/Residuated_Lattices/Action_Algebra_Models.thy", - "formal/afp/MDP-Algorithms/code/Code_DP.thy", - "formal/mizar/oppcat_1.miz", - "formal/mizar/rfunct_3.miz", - "formal/mizar/latticea.miz", - "formal/afp/Quaternions/document/root.tex", - "formal/afp/Nominal2/Atoms.thy", - "formal/mizar/cat_7.miz", - "formal/mizar/prsubset.miz", - "formal/hol/Help/unparse_as_binder.doc", - "formal/afp/Collections/ICF/spec/MapSpec.thy", - "formal/afp/Approximation_Algorithms/Approx_BP_Hoare.thy", - "formal/afp/Constructor_Funs/Constructor_Funs.thy", - "formal/afp/Types_To_Sets_Extension/Examples/SML_Relativization/Lattices/SML_Complete_Lattices.thy", - "formal/afp/Projective_Measurements/document/root.tex", - "formal/afp/ClockSynchInst/document/root.tex", - "formal/lean/lftcm/demos/linalg.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2001/p9.lean", - "formal/afp/FLP/Multiset.thy", - "formal/afp/Discrete_Summation/Summation_Conversion.thy", - "formal/afp/Rewrite_Properties_Reduction/Util/Terms_Positions.thy", - "formal/afp/Pi_Calculus/Weak_Late_Cong_Subst_SC.thy", - "formal/afp/Core_SC_DOM/common/monads/BaseMonad.thy", - "formal/afp/Isabelle_Meta_Model/Antiquote_Setup.thy", - "formal/hol/Help/METIS_TAC.doc", - "formal/afp/Collections/ICF/impl/ArraySetImpl.thy", - "formal/lean/mathlib/linear_algebra/linear_pmap.lean", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1995/p7.lean", - "formal/afp/Forcing/Forces_Definition.thy", - "formal/hol/Help/INT_ARITH.doc", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p159.lean", - "formal/afp/Types_To_Sets_Extension/Examples/Vector_Spaces/VS_Conclusions.thy", - "formal/afp/Pop_Refinement/Related_Work.thy", - "formal/hol/Help/NUMSEG_CONV.doc", - "formal/mizar/xxreal_3.miz", - "formal/hol/Minisat/sat_tools.ml", - "formal/afp/Padic_Ints/document/root.tex", - "formal/afp/Gauss_Jordan/Gauss_Jordan_PA_IArrays.thy", - "formal/lean/liquid/condensed/ab5.lean", - "formal/lean/mathlib/topology/metric_space/lipschitz.lean", - 
"formal/lean/mathlib/topology/sheaves/sheaf.lean", - "formal/afp/Transition_Systems_and_Automata/Automata/DBA/DBA_Combine.thy", - "formal/lean/mathlib/algebra/module/prod.lean", - "formal/afp/Modular_arithmetic_LLL_and_HNF_algorithms/Uniqueness_Hermite_JNF.thy", - "formal/afp/Max-Card-Matching/Matching.thy", - "formal/mizar/card_2.miz", - "formal/afp/HRB-Slicing/StaticInter/CFGExit.thy", - "formal/afp/Category3/Limit.thy", - "formal/lean/mathlib/control/monad/writer.lean", - "formal/afp/Word_Lib/More_Sublist.thy", - "formal/afp/Resolution_FOL/Unification_Theorem.thy", - "formal/afp/Hidden_Markov_Models/HMM_Example.thy", - "formal/mizar/graph_4.miz", - "formal/mizar/robbins1.miz", - "formal/mizar/hidden.miz", - "formal/afp/Generalized_Counting_Sort/document/root.tex", - "formal/afp/Landau_Symbols/Landau_Simprocs.thy", - "formal/lean/mathlib/category_theory/limits/creates.lean", - "formal/mizar/filerec1.miz", - "formal/afp/Correctness_Algebras/Test_Iterings.thy", - "formal/afp/JinjaThreads/Basic/Basic_Main.thy", - "formal/afp/Jinja/DFA/Abstract_BV.thy", - "formal/afp/Hoare_Time/document/root.tex", - "formal/afp/HOLCF-Prelude/List_Comprehension.thy", - "formal/afp/Elliptic_Curves_Group_Law/Elliptic_Axclass.thy", - "formal/afp/Word_Lib/Signed_Division_Word.thy", - "formal/afp/Example-Submission/Submission.thy", - "formal/afp/Forcing/Choice_Axiom.thy", - "formal/afp/Goedel_Incompleteness/Rosser_Formula.thy", - "formal/afp/Psi_Calculi/Weaken_Simulation.thy", - "formal/afp/Polynomials/MPoly_Type_Class_FMap.thy", - "formal/afp/IMP2/basic/Annotated_Syntax.thy", - "formal/mizar/rlsub_2.miz", - "formal/lean/mathlib/data/list/count.lean", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1988/p4.lean", - "formal/lean/lftcm/exercises_sources/thursday/category_theory/exercise6.lean", - "formal/mizar/mesfunc5.miz", - "formal/mizar/wallace1.miz", - "formal/afp/Statecharts/Kripke.thy", - "formal/hol/Help/pretype_of_type.doc", - "formal/afp/Stream_Fusion_Code/Stream_Fusion_List.thy", - "formal/afp/Allen_Calculus/allen.thy", - "formal/lean/mathlib/data/list/prime.lean", - "formal/mizar/group_9.miz", - "formal/lean/mathlib/category_theory/monoidal/internal/functor_category.lean", - "formal/lean/mathlib/ring_theory/polynomial/dickson.lean", - "formal/afp/Design_Theory/Sub_Designs.thy", - "formal/afp/Factored_Transition_System_Bounding/FSSublist.thy", - "formal/mizar/schems_1.miz", - "formal/afp/Extended_Finite_State_Machine_Inference/heuristics/Distinguishing_Guards.thy", - "formal/lean/mathlib/data/set/intervals/pi.lean", - "formal/lean/mathlib/data/option/defs.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2001/p2.lean", - "formal/mizar/mazurulm.miz", - "formal/afp/Statecharts/SA.thy", - "formal/afp/pGCL/LoopInduction.thy", - "formal/mizar/zmodul08.miz", - "formal/afp/ConcurrentGC/Tactics.thy", - "formal/afp/Fourier/Lspace.thy", - "formal/afp/Syntax_Independent_Logic/Natural_Deduction.thy", - "formal/afp/PAC_Checker/PAC_Checker_Relation.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p410.lean", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_SecGwExt.thy", - "formal/lean/sphere-eversion/global/gromov.lean", - "formal/afp/Nested_Multisets_Ordinals/Multiset_More.thy", - "formal/lean/mathlib/analysis/normed/group/SemiNormedGroup.lean", - "formal/lean/mathlib/category_theory/concrete_category/bundled.lean", - "formal/afp/Formal_SSA/Construct_SSA.thy", - "formal/lean/mathlib/ring_theory/polynomial/cyclotomic/eval.lean", - "formal/coq/math-comp/algebra/intdiv.v", - 
"formal/hol/Help/parse_preterm.doc", - "formal/hol/Help/retypecheck.doc", - "formal/afp/Jinja/DFA/LBVSpec.thy", - "formal/afp/Automated_Stateful_Protocol_Verification/trac/trac_protocol_parser.thy", - "formal/afp/Tycon/Monad_Zero_Plus.thy", - "formal/afp/Jordan_Normal_Form/Schur_Decomposition.thy", - "formal/afp/Core_DOM/common/Core_DOM.thy", - "formal/afp/Word_Lib/Syntax_Bundles.thy", - "formal/lean/mathlib/data/polynomial/inductions.lean", - "formal/lean/mathlib/analysis/normed/group/basic.lean", - "formal/afp/IMP_Compiler/Compiler.thy", - "formal/lean/mathlib/linear_algebra/affine_space/affine_map.lean", - "formal/afp/Shadow_DOM/tests/Shadow_DOM_Document_getElementById.thy", - "formal/mizar/classes2.miz", - "formal/mizar/scmring1.miz", - "formal/afp/Verified_SAT_Based_AI_Planning/STRIPS_Representation.thy", - "formal/afp/Linear_Inequalities/Decomposition_Theorem.thy", - "formal/hol/Tutorial/Tactics_and_tacticals.ml", - "formal/afp/AWN/Inv_Cterms.thy", - "formal/lean/mathlib/algebra/order/absolute_value.lean", - "formal/lean/mathlib/category_theory/triangulated/basic.lean", - "formal/afp/Count_Complex_Roots/Count_Line.thy", - "formal/afp/Gabow_SCC/All_Of_Gabow_SCC.thy", - "formal/afp/No_FTL_observers/document/root.tex", - "formal/afp/Prime_Number_Theorem/Prime_Counting_Functions.thy", - "formal/afp/Containers/Examples/TwoSat_Ex.thy", - "formal/hol/Help/pow2.doc", - "formal/afp/Stable_Matching/document/root.tex", - "formal/afp/KBPs/KBPs_Main.thy", - "formal/afp/JinjaThreads/Execute/Random_Scheduler.thy", - "formal/afp/PropResPI/Propositional_Resolution.thy", - "formal/afp/Gauss_Jordan/Gauss_Jordan_PA.thy", - "formal/afp/Core_SC_DOM/common/classes/DocumentClass.thy", - "formal/afp/Algebraic_Numbers/Real_Roots.thy", - "formal/hol/Help/report.doc", - "formal/hol/Help/BETA_RULE.doc", - "formal/lean/liquid/for_mathlib/single_coproducts.lean", - "formal/afp/ROBDD/Pointer_Map.thy", - "formal/afp/Abs_Int_ITP2012/Abs_State.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1966/p4.lean", - "formal/afp/Promela/PromelaInvariants.thy", - "formal/lean/liquid/for_mathlib/derived/K_projective.lean", - "formal/mizar/lmod_7.miz", - "formal/lean/mathlib/data/polynomial/basic.lean", - "formal/hol/Help/injectivity.doc", - "formal/hol/Help/AC.doc", - "formal/afp/Iptables_Semantics/Semantics_Ternary/Matching_Ternary.thy", - "formal/afp/Automatic_Refinement/Lib/Attr_Comb.thy", - "formal/afp/Sort_Encodings/M.thy", - "formal/mizar/integra1.miz", - "formal/afp/Sort_Encodings/Preliminaries.thy", - "formal/hol/Permutation/qsort.ml", - "formal/afp/Van_Emde_Boas_Trees/Imperative_HOL_Time/Heap.thy", - "formal/afp/MDP-Algorithms/Blinfun_Matrix.thy", - "formal/afp/Hybrid_Multi_Lane_Spatial_Logic/Sensors.thy", - "formal/afp/Mason_Stothers/document/root.tex", - "formal/lean/mathlib/category_theory/closed/monoidal.lean", - "formal/hol/Help/SUBGOAL_TAC.doc", - "formal/afp/Complex_Geometry/Unitary11_Matrices.thy", - "formal/hol/miz3/miz3_of_hol.ml", - "formal/afp/CoSMed/Safety_Properties.thy", - "formal/lean/mathlib/algebra/module/submodule/basic.lean", - "formal/hol/Multivariate/cvectors.ml", - "formal/hol/Examples/padics.ml", - "formal/afp/DOM_Components/Core_DOM_Components.thy", - "formal/lean/mathlib/algebra/big_operators/basic.lean", - "formal/lean/mathlib/data/mv_polynomial/comm_ring.lean", - "formal/lean/mathlib/data/fintype/list.lean", - "formal/mizar/group_20.miz", - "formal/afp/LinearQuantifierElim/Thys/QEdlo_ex.thy", - "formal/afp/Hybrid_Systems_VCs/HS_Preliminaries.thy", - "formal/hol/Help/dest_forall.doc", - 
"formal/lean/mathlib/algebra/group/inj_surj.lean", - "formal/lean/mathlib/order/category/Frame.lean", - "formal/afp/CoSMeDis/API_Network.thy", - "formal/mizar/treal_1.miz", - "formal/lean/mathlib/algebra/category/Mon/default.lean", - "formal/afp/Dependent_SIFUM_Refinement/Examples/Eg2.thy", - "formal/coq/math-comp/algebra/ring_quotient.v", - "formal/afp/Dependent_SIFUM_Type_Systems/Examples/Example_Swap_Add.thy", - "formal/afp/Isabelle_Meta_Model/toy_example/embedding/core/Floor1_infra.thy", - "formal/hol/Logic/fol_prop.ml", - "formal/lean/mathlib/group_theory/perm/list.lean", - "formal/afp/Collections/ICF/gen_algo/SetIndex.thy", - "formal/hol/Help/nothing.doc", - "formal/hol/GL/decid.ml", - "formal/afp/SPARCv8/SparcModel_MMU/MMU.thy", - "formal/afp/Verified_SAT_Based_AI_Planning/List_Supplement.thy", - "formal/hol/Help/UNDISCH_THEN.doc", - "formal/afp/Network_Security_Policy_Verification/TopoS_Interface_impl.thy", - "formal/hol/Library/pratt.ml", - "formal/afp/Key_Agreement_Strong_Adversaries/AuthenticationI.thy", - "formal/hol/Examples/hol88.ml", - "formal/lean/mathlib/analysis/normed_space/enorm.lean", - "formal/lean/mathlib/data/matrix/basic.lean", - "formal/afp/Promela/document/root.tex", - "formal/afp/Discrete_Summation/document/root.tex", - "formal/afp/Matroids/document/root.tex", - "formal/afp/TLA/document/root.tex", - "formal/hol/Logic/fole.ml", - "formal/lean/mathlib/analysis/convex/caratheodory.lean", - "formal/hol/Help/EXPAND_SUM_CONV.doc", - "formal/lean/liquid/for_mathlib/AddCommGroup/kernels.lean", - "formal/afp/Priority_Search_Trees/Prio_Map_Specs.thy", - "formal/afp/Pairing_Heap/Pairing_Heap_Tree.thy", - "formal/afp/KBPs/ODList.thy", - "formal/afp/Nat-Interval-Logic/document/root.tex", - "formal/lean/mathlib/topology/category/CompHaus/projective.lean", - "formal/afp/Decl_Sem_Fun_PL/ValuesFSet.thy", - "formal/hol/Help/meson_dcutin.doc", - "formal/afp/Goedel_HFSet_Semanticless/Coding.thy", - "formal/afp/Linear_Recurrences/Eulerian_Polynomials.thy", - "formal/hol/Help/DISCH_ALL.doc", - "formal/afp/CakeML/generated/CakeML/TypeSystemAuxiliary.thy", - "formal/afp/CakeML/generated/Lem_string_extra.thy", - "formal/afp/SimplifiedOntologicalArgument/HOML.thy", - "formal/hol/Help/DESTRUCT_TAC.doc", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2020/b/p5.lean", - "formal/hol/Arithmetic/tarski.ml", - "formal/lean/mathlib/data/finsupp/big_operators.lean", - "formal/afp/Skip_Lists/Misc.thy", - "formal/afp/Show/Old_Datatype/Old_Show_Generator.thy", - "formal/afp/SPARCv8/SparcModel_MMU/Sparc_Code_Gen.thy", - "formal/afp/FunWithFunctions/FunWithFunctions.thy", - "formal/afp/JinjaThreads/J/SmallStep.thy", - "formal/coq/math-comp/ssreflect/all_ssreflect.v", - "formal/lean/mathlib/topology/continuous_function/bounded.lean", - "formal/hol/Help/funpow.doc", - "formal/mizar/matrixr1.miz", - "formal/lean/liquid/liquid.lean", - "formal/afp/Tree_Decomposition/Graph.thy", - "formal/afp/Refine_Imperative_HOL/Sepref_Id_Op.thy", - "formal/lean/mathlib/algebra/big_operators/nat_antidiagonal.lean", - "formal/hol/Help/dest_small_numeral.doc", - "formal/afp/LightweightJava/ott-src/lj.tex", - "formal/afp/Kuratowski_Closure_Complement/document/root.tex", - "formal/afp/X86_Semantics/Examples.thy", - "formal/mizar/msualg_7.miz", - "formal/afp/Diophantine_Eqns_Lin_Hom/Linear_Diophantine_Equations.thy", - "formal/afp/Completeness/Base.thy", - "formal/afp/Echelon_Form/Examples_Echelon_Form_IArrays.thy", - "formal/lean/mathlib/data/polynomial/laurent.lean", - "formal/afp/Circus/Designs.thy", - 
"formal/lean/mathlib/combinatorics/quiver/basic.lean", - "formal/afp/Category3/DiscreteCategory.thy", - "formal/afp/Modular_Assembly_Kit_Security/SecuritySpecification/Views.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p221.lean", - "formal/afp/Word_Lib/Word_Names.thy", - "formal/afp/Well_Quasi_Orders/Almost_Full_Relations.thy", - "formal/afp/Collections/ICF/impl/FTPrioImpl.thy", - "formal/mizar/bvfunc_3.miz", - "formal/afp/Automatic_Refinement/Automatic_Refinement.thy", - "formal/afp/Containers/Card_Datatype.thy", - "formal/lean/liquid/condensed/adjunctions_module.lean", - "formal/coq/math-comp/fingroup/automorphism.v", - "formal/afp/Consensus_Refined/MRU/Two_Step_MRU.thy", - "formal/lean/mathlib/ring_theory/polynomial/cyclotomic/basic.lean", - "formal/afp/Pi_Calculus/Weak_Late_Bisim_Pres.thy", - "formal/afp/RSAPSS/Pdifference.thy", - "formal/afp/CAVA_LTL_Modelchecker/Examples/Mulog.thy", - "formal/lean/liquid/pseudo_normed_group/CLC.lean", - "formal/afp/Virtual_Substitution/MPolyExtension.thy", - "formal/afp/MonoBoolTranAlgebra/document/root.tex", - "formal/afp/E_Transcendental/document/root.tex", - "formal/afp/LocalLexing/InductRules.thy", - "formal/afp/TESL_Language/Stuttering.thy", - "formal/hol/Help/is_const.doc", - "formal/afp/GPU_Kernel_PL/KPL_syntax.thy", - "formal/afp/Formula_Derivatives/WS1S_Formula.thy", - "formal/afp/Clean/src/Lens_Laws.thy", - "formal/lean/mathlib/order/category/Lattice.lean", - "formal/lean/mathlib/ring_theory/localization/away.lean", - "formal/afp/AWN/OPnet_Lifting.thy", - "formal/lean/mathlib/set_theory/cardinal/divisibility.lean", - "formal/lean/mathlib/ring_theory/valuation/extend_to_localization.lean", - "formal/hol/Help/freesl.doc", - "formal/afp/Concurrent_Ref_Alg/document/root.tex", - "formal/afp/Lambda_Free_RPOs/Lambda_Free_RPO_App.thy", - "formal/afp/CoreC++/CoreC++.thy", - "formal/hol/Help/COMB2_CONV.doc", - "formal/afp/LinearQuantifierElim/Thys/CertLin.thy", - "formal/afp/Pi_Calculus/Weak_Late_Cong_SC.thy", - "formal/afp/Frequency_Moments/Frequency_Moment_0.thy", - "formal/lean/mathlib/linear_algebra/free_module/rank.lean", - "formal/lean/mathlib/data/finmap.lean", - "formal/afp/Core_DOM/common/monads/NodeMonad.thy", - "formal/hol/Help/pp_print_fpf.doc", - "formal/lean/liquid/for_mathlib/derived/ProjectiveResolution.lean", - "formal/hol/Logic/given.ml", - "formal/afp/Poincare_Disc/Poincare_Distance.thy", - "formal/hol/EC/curve25519.ml", - "formal/afp/Efficient-Mergesort/Efficient_Sort.thy", - "formal/afp/Consensus_Refined/MRU/CT_Proofs.thy", - "formal/afp/Allen_Calculus/axioms.thy", - "formal/lean/mathlib/analysis/locally_convex/weak_dual.lean", - "formal/afp/FinFun/document/root.tex", - "formal/afp/Sturm_Sequences/Examples/Sturm_Ex.thy", - "formal/afp/Modal_Logics_for_NTS/Bisimilarity_Implies_Equivalence.thy", - "formal/lean/mathlib/analysis/special_functions/trigonometric/deriv.lean", - "formal/lean/mathlib/data/analysis/filter.lean", - "formal/lean/mathlib/topology/uniform_space/abstract_completion.lean", - "formal/afp/Coinductive/Coinductive_List_Prefix.thy", - "formal/afp/AWN/Qmsg.thy", - "formal/hol/Help/print_num.doc", - "formal/afp/Foundation_of_geometry/Incidence.thy", - "formal/hol/Help/temp_path.doc", - "formal/lean/mathzoo/mathzoo/olympiads/imo/2008/p3.lean", - "formal/afp/Progress_Tracking/Antichain.thy", - "formal/afp/Coinductive/Coinductive_List.thy", - "formal/lean/mathlib/ring_theory/algebraic.lean", - "formal/afp/Core_DOM/common/Core_DOM_Basic_Datatypes.thy", - "formal/afp/Containers/DList_Set.thy", - 
"formal/hol/Help/strings_of_file.doc", - "formal/afp/Circus/Refinement.thy", - "formal/lean/liquid/for_mathlib/projectives.lean", - "formal/lean/liquid/laurent_measures/functor.lean", - "formal/lean/mathlib/order/well_founded.lean", - "formal/afp/Tree-Automata/Ta_impl.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p150.lean", - "formal/mizar/metric_1.miz", - "formal/afp/Winding_Number_Eval/Winding_Number_Eval.thy", - "formal/lean/mathlib/category_theory/limits/lattice.lean", - "formal/afp/Recursion-Theory-I/PRecFinSet.thy", - "formal/lean/mathlib/category_theory/limits/filtered_colimit_commutes_finite_limit.lean", - "formal/hol/Help/ALL_TAC.doc", - "formal/hol/Help/strip_exists.doc", - "formal/mizar/euclid_6.miz", - "formal/afp/KBPs/Trie2.thy", - "formal/afp/CAVA_LTL_Modelchecker/BoolProgs/Programs/BoolProgs_LeaderFilters.thy", - "formal/afp/Interpreter_Optimizations/Std_to_Inca_compiler.thy", - "formal/lean/mathlib/order/filter/basic.lean", - "formal/mizar/aofa_a00.miz", - "formal/hol/Help/parse_as_prefix.doc", - "formal/lean/mathlib/topology/fiber_bundle.lean", - "formal/afp/Syntax_Independent_Logic/Syntax_Arith.thy", - "formal/mizar/scmfsa_1.miz", - "formal/afp/Transitive_Models/ZF_Miscellanea.thy", - "formal/afp/Matrix/Utility.thy", - "formal/lean/mathlib/data/int/parity.lean", - "formal/afp/Regular-Sets/Regexp_Constructions.thy", - "formal/afp/Virtual_Substitution/QE.thy", - "formal/afp/Generic_Deriving/tests/Derive_Algebra_Laws.thy", - "formal/mizar/conmetr1.miz", - "formal/afp/LTL_to_DRA/DTS.thy", - "formal/afp/Pi_Calculus/Weak_Late_Cong_Pres.thy", - "formal/afp/Algebraic_Numbers/Algebraic_Numbers_Prelim.thy", - "formal/afp/Polynomial_Factorization/document/root.tex", - "formal/mizar/laplace.miz", - "formal/mizar/fdiff_10.miz", - "formal/hol/EC/misc.ml", - "formal/afp/KBPs/Eval.thy", - "formal/afp/Independence_CH/Pairing_Axiom.thy", - "formal/hol/Help/issep.doc", - "formal/lean/mathlib/model_theory/order.lean", - "formal/afp/HOL-CSP/Introduction.thy", - "formal/afp/Psi_Calculi/Weak_Cong_Struct_Cong.thy", - "formal/lean/mathlib/data/list/lex.lean", - "formal/lean/mathlib/data/set/intervals/default.lean", - "formal/hol/Help/DIMINDEX_CONV.doc", - "formal/afp/Stewart_Apollonius/Stewart_Apollonius.thy", - "formal/afp/Automatic_Refinement/Autoref_Bindings_HOL.thy", - "formal/mizar/newton.miz", - "formal/afp/Conditional_Simplification/Reference_Prerequisites.thy", - "formal/afp/Monad_Memo_DP/heap_monad/Heap_Main.thy", - "formal/afp/Refine_Imperative_HOL/benchmarks/NestedDFS/isabelle/NDFS_Benchmark.thy", - "formal/mizar/int_2.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p536.lean", - "formal/afp/Core_DOM/common/pointers/NodePointer.thy", - "formal/lean/mathlib/data/finset/sort.lean", - "formal/lean/liquid/for_mathlib/sheafification_equiv_compatibility.lean", - "formal/lean/mathlib/analysis/inner_product_space/positive.lean", - "formal/lean/mathlib/probability/density.lean", - "formal/coq/odd-order/PFsection4.v", - "formal/coq/math-comp/solvable/extraspecial.v", - "formal/mizar/matrix_8.miz", - "formal/afp/Refine_Monadic/Refine_Misc.thy", - "formal/afp/Frequency_Moments/Frequency_Moment_k.thy", - "formal/afp/Gromov_Hyperbolicity/Library_Complements.thy", - "formal/afp/QHLProver/Matrix_Limit.thy", - "formal/afp/Isabelle_Meta_Model/toy_example/embedding/Printer.thy", - "formal/afp/Binding_Syntax_Theory/Transition_QuasiTerms_Terms.thy", - "formal/afp/JinjaThreads/Execute/JVM_Execute2.thy", - "formal/afp/UPF/Service.thy", - 
"formal/lean/mathlib/algebra/hom/group_instances.lean", - "formal/afp/BNF_Operations/Kill.thy", - "formal/lean/mathlib/category_theory/glue_data.lean", - "formal/afp/Multi_Party_Computation/Secure_Multiplication.thy", - "formal/afp/Jinja/J/WellTypeRT.thy", - "formal/mizar/anproj_8.miz", - "formal/afp/Heard_Of/lastvoting/LastVotingDefs.thy", - "formal/hol/Help/REDEPTH_CONV.doc", - "formal/lean/mathlib/data/nat/interval.lean", - "formal/lean/mathlib/data/nat/fib.lean", - "formal/lean/mathlib/data/multiset/locally_finite.lean", - "formal/lean/mathlib/analysis/asymptotics/asymptotics.lean", - "formal/lean/mathlib/category_theory/limits/has_limits.lean", - "formal/afp/Linear_Inequalities/Cone.thy", - "formal/hol/Help/dest_neg.doc", - "formal/afp/IsaNet/instances/ICING_variant2.thy", - "formal/afp/Independence_CH/Fm_Definitions.thy", - "formal/afp/Graph_Saturation/StandardRules.thy", - "formal/lean/mathlib/algebraic_geometry/presheafed_space/gluing.lean", - "formal/lean/mathlib/data/rbtree/basic.lean", - "formal/afp/CakeML_Codegen/Compiler/Compiler.thy", - "formal/lean/mathlib/algebra/category/Module/change_of_rings.lean", - "formal/afp/WHATandWHERE_Security/document/root.tex", - "formal/mizar/funct_5.miz", - "formal/afp/CoSMeDis/Outer_Friend_Confidentiality/Issuer/Outer_Friend_Issuer_Observation_Setup.thy", - "formal/afp/Core_DOM/common/pointers/CharacterDataPointer.thy", - "formal/mizar/finseq_7.miz", - "formal/afp/BTree/BTree_Split.thy", - "formal/hol/Help/file_of_string.doc", - "formal/afp/Collections/ICF/ICF_Autoref.thy", - "formal/lean/mathlib/data/set/opposite.lean", - "formal/afp/Mereology/MM.thy", - "formal/afp/Statecharts/HASem.thy", - "formal/afp/Stream-Fusion/LazyList.thy", - "formal/hol/Help/top_goal.doc", - "formal/afp/DFS_Framework/Impl/Structural/Tailrec_Impl.thy", - "formal/lean/mathlib/category_theory/essential_image.lean", - "formal/lean/mathlib/category_theory/limits/constructions/over/connected.lean", - "formal/afp/Complex_Bounded_Operators/Complex_Vector_Spaces0.thy", - "formal/afp/Topological_Semantics/topo_operators_derivative.thy", - "formal/afp/WorkerWrapper/WorkerWrapperNew.thy", - "formal/afp/Category3/NaturalTransformation.thy", - "formal/lean/liquid/for_mathlib/kernel_comparison.lean", - "formal/afp/Parity_Game/AttractorInductive.thy", - "formal/hol/100/realsuncountable.ml", - "formal/afp/Goedel_Incompleteness/Jeroslow_Original.thy", - "formal/afp/Menger/Menger.thy", - "formal/lean/liquid/for_mathlib/quotient_map.lean", - "formal/mizar/waybel35.miz", - "formal/afp/Isabelle_Meta_Model/toy_example/Toy_Library_Static.thy", - "formal/lean/mathlib/data/nat/bits.lean", - "formal/lean/perfectoid/examples/empty.lean", - "formal/lean/mathlib/data/complex/basic.lean", - "formal/afp/Linear_Recurrences/Solver/Linear_Recurrences_Test.thy", - "formal/afp/UTP/utp/utp_var.thy", - "formal/afp/CakeML_Codegen/Doc/Doc_Backend.thy", - "formal/mizar/topdim_2.miz", - "formal/lean/mathlib/algebra/pempty_instances.lean", - "formal/afp/Knot_Theory/Kauffman_Matrix.thy", - "formal/afp/Van_Emde_Boas_Trees/VEBT_Example_Setup.thy", - "formal/afp/LLL_Basis_Reduction/FPLLL_Solver.thy", - "formal/afp/CakeML/CakeML_Compiler.thy", - "formal/lean/liquid/laurent_measures/ses2.lean", - "formal/lean/mathlib/analysis/convex/extrema.lean", - "formal/hol/100/four_squares.ml", - "formal/afp/Transition_Systems_and_Automata/Automata/NFA/NFA.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1982/p1.lean", - "formal/mizar/gobrd10.miz", - "formal/hol/Library/transc.ml", - 
"formal/afp/Constructive_Cryptography_CM/Constructions/DH_OTP.thy", - "formal/hol/Help/ANTE_RES_THEN.doc", - "formal/afp/Iptables_Semantics/Semantics_Stateful.thy", - "formal/afp/Padic_Ints/Padic_Integers.thy", - "formal/hol/Examples/prog.ml", - "formal/afp/Transitive_Models/Utils.thy", - "formal/hol/Multivariate/complexes.ml", - "formal/hol/EC/family25519.ml", - "formal/afp/Fishers_Inequality/Matrix_Vector_Extras.thy", - "formal/lean/mathlib/data/sum/order.lean", - "formal/lean/mathlib/analysis/normed_space/basic.lean", - "formal/afp/Sliding_Window_Algorithm/SWA.thy", - "formal/afp/Group-Ring-Module/Algebra8.thy", - "formal/afp/Formula_Derivatives/Examples/WS1S_Examples.thy", - "formal/afp/Safe_OCL/OCL_Examples.thy", - "formal/afp/Berlekamp_Zassenhaus/Berlekamp_Zassenhaus.thy", - "formal/afp/Psi_Calculi/Weak_Sim_Pres.thy", - "formal/mizar/jordan10.miz", - "formal/afp/Planarity_Certificates/Planarity/Kuratowski_Combinatorial.thy", - "formal/lean/mathlib/data/complex/module.lean", - "formal/afp/CakeML/generated/CakeML/SemanticPrimitivesAuxiliary.thy", - "formal/hol/Help/REMOVE_THEN.doc", - "formal/lean/mathlib/analysis/calculus/cont_diff.lean", - "formal/lean/mathlib/category_theory/sites/induced_topology.lean", - "formal/afp/AnselmGod/AnselmGod.thy", - "formal/lean/mathlib/topology/continuous_function/polynomial.lean", - "formal/hol/Boyer_Moore/testset/arith.ml", - "formal/mizar/rusub_4.miz", - "formal/afp/CoSMeDis/Safety_Properties.thy", - "formal/mizar/polyeq_4.miz", - "formal/hol/Help/override_interface.doc", - "formal/lean/mathlib/algebra/field/opposite.lean", - "formal/hol/100/pythagoras.ml", - "formal/lean/mathlib/category_theory/discrete_category.lean", - "formal/afp/Automatic_Refinement/Lib/Refine_Lib.thy", - "formal/hol/Help/define_type.doc", - "formal/hol/Help/DISJ1_TAC.doc", - "formal/afp/Real_Time_Deque/document/root.tex", - "formal/afp/Hybrid_Systems_VCs/KleeneAlgebraTests/HS_VC_KAT_ndfun.thy", - "formal/afp/Fishburn_Impossibility/document/root.tex", - "formal/afp/Forcing/Separation_Rename.thy", - "formal/afp/CoSMed/document/root.tex", - "formal/mizar/cfcont_1.miz", - "formal/hol/Boyer_Moore/counterexample.ml", - "formal/afp/Menger/MengerInduction.thy", - "formal/afp/Hermite/Hermite_IArrays.thy", - "formal/afp/Complex_Geometry/Hermitean_Matrices.thy", - "formal/afp/Extended_Finite_State_Machine_Inference/heuristics/Store_Reuse.thy", - "formal/afp/Lambda_Free_RPOs/Lambda_Free_RPO_Optim.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1963/p5.lean", - "formal/afp/Flow_Networks/Network_Impl.thy", - "formal/hol/Tutorial/Real_analysis.ml", - "formal/lean/mathlib/category_theory/limits/shapes/disjoint_coproduct.lean", - "formal/afp/Smith_Normal_Form/SNF_Missing_Lemmas.thy", - "formal/mizar/integr1c.miz", - "formal/afp/AODV/variants/a_norreqid/A_Norreqid.thy", - "formal/hol/100/combinations.ml", - "formal/hol/Help/pp_print_qterm.doc", - "formal/afp/Relational_Method/Anonymity.thy", - "formal/lean/mathlib/analysis/convex/strict_convex_space.lean", - "formal/lean/mathlib/linear_algebra/matrix/diagonal.lean", - "formal/afp/Game_Based_Crypto/IND_CCA2.thy", - "formal/afp/DataRefinementIBP/Preliminaries.thy", - "formal/lean/mathlib/analysis/normed_space/star/matrix.lean", - "formal/afp/OpSets/document/root.tex", - "formal/lean/mathlib/combinatorics/set_family/compression/uv.lean", - "formal/afp/Bicategory/CanonicalIsos.thy", - "formal/afp/CoCon/Prelim.thy", - "formal/hol/Help/ASM_REWRITE_RULE.doc", - "formal/afp/Bicategory/Pseudofunctor.thy", - "formal/mizar/grnilp_1.miz", - 
"formal/afp/Ackermanns_not_PR/document/root.tex", - "formal/afp/MiniML/Maybe.thy", - "formal/lean/mathlib/data/qpf/multivariate/constructions/sigma.lean", - "formal/afp/JinjaDCI/J/State.thy", - "formal/afp/Category2/SetCat.thy", - "formal/afp/Coinductive/Lazy_TLList.thy", - "formal/afp/Transition_Systems_and_Automata/Automata/NBA/NGBA.thy", - "formal/afp/ROBDD/Option_Helpers.thy", - "formal/afp/Partial_Function_MR/Partial_Function_MR_Examples.thy", - "formal/afp/WorkerWrapper/document/root.tex", - "formal/lean/mathlib/algebra/direct_sum/algebra.lean", - "formal/afp/Word_Lib/Bits_Int.thy", - "formal/afp/Independence_CH/Extensionality_Axiom.thy", - "formal/afp/Applicative_Lifting/Applicative.thy", - "formal/mizar/urysohn1.miz", - "formal/afp/ROBDD/Level_Collapse.thy", - "formal/lean/mathlib/control/traversable/basic.lean", - "formal/lean/mathlib/data/stream/init.lean", - "formal/lean/liquid/system_of_complexes/truncate.lean", - "formal/mizar/borsuk_3.miz", - "formal/lean/mathlib/data/multiset/sort.lean", - "formal/hol/Help/install_parser.doc", - "formal/afp/Slicing/While/Interpretation.thy", - "formal/afp/Core_SC_DOM/common/tests/Node_removeChild.thy", - "formal/afp/Core_SC_DOM/safely_composable/classes/ElementClass.thy", - "formal/hol/Help/hd.doc", - "formal/hol/Help/NOT_ELIM.doc", - "formal/afp/Ordered_Resolution_Prover/Abstract_Substitution.thy", - "formal/afp/CoSMeDis/Friend_Request_Confidentiality/Friend_Request_All.thy", - "formal/mizar/rlvect_4.miz", - "formal/afp/Iptables_Semantics/Example_Semantics.thy", - "formal/mizar/fintopo6.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p132.lean", - "formal/mizar/scmfsa6c.miz", - "formal/afp/Vickrey_Clarke_Groves/Universes.thy", - "formal/afp/LatticeProperties/Conj_Disj.thy", - "formal/afp/Public_Announcement_Logic/document/root.tex", - "formal/afp/Simple_Firewall/Generic_SimpleFw.thy", - "formal/mizar/glib_001.miz", - "formal/hol/Help/REAL_RAT_MUL_CONV.doc", - "formal/mizar/ordinal1.miz", - "formal/hol/Help/the_interface.doc", - "formal/afp/Ordered_Resolution_Prover/Map2.thy", - "formal/lean/liquid/condensed/projective_resolution_module.lean", - "formal/hol/100/konigsberg.ml", - "formal/afp/CakeML/generated/Lem_assert_extra.thy", - "formal/afp/Complex_Geometry/Quadratic.thy", - "formal/lean/mathlib/topology/nhds_set.lean", - "formal/hol/Tutorial/HOLs_number_systems.ml", - "formal/hol/Help/SPEC_ALL.doc", - "formal/afp/DPRM_Theorem/Machine_Equations/Equation_Setup.thy", - "formal/mizar/index_1.miz", - "formal/afp/Flyspeck-Tame/IArray_Syntax.thy", - "formal/afp/Bicategory/InternalEquivalence.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p359.lean", - "formal/afp/Simpl/AlternativeSmallStep.thy", - "formal/afp/Timed_Automata/document/root.tex", - "formal/afp/Routing/Routing_Table.thy", - "formal/mizar/finseq_5.miz", - "formal/afp/Order_Lattice_Props/Fixpoint_Fusion.thy", - "formal/afp/Deep_Learning/Lebesgue_Zero_Set.thy", - "formal/lean/mathlib/data/polynomial/cancel_leads.lean", - "formal/lean/mathlib/algebra/module/basic.lean", - "formal/mizar/fsm_1.miz", - "formal/lean/mathlib/data/polynomial/div.lean", - "formal/mizar/cat_5.miz", - "formal/afp/Tail_Recursive_Functions/Method.thy", - "formal/afp/Isabelle_Meta_Model/toy_example/embedding/Generator_static.thy", - "formal/afp/Zeta_Function/Zeta_Function.thy", - "formal/lean/perfectoid/for_mathlib/rings.lean", - "formal/afp/Call_Arity/CallArityEnd2EndSafe.thy", - "formal/afp/SimplifiedOntologicalArgument/SimpleVariantHF.thy", - "formal/hol/Help/needs.doc", - 
"formal/hol/Help/term_unify.doc", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p338.lean", - "formal/afp/Types_To_Sets_Extension/Examples/SML_Relativization/Algebra/SML_Groups.thy", - "formal/afp/Delta_System_Lemma/Konig.thy", - "formal/afp/Security_Protocol_Refinement/Refinement/Channels.thy", - "formal/lean/mathlib/algebra/homology/quasi_iso.lean", - "formal/afp/Applicative_Lifting/Abstract_AF.thy", - "formal/lean/liquid/for_mathlib/nat_iso_map_homological_complex.lean", - "formal/afp/MFMC_Countable/Max_Flow_Min_Cut_Countable.thy", - "formal/afp/BNF_Operations/document/root.tex", - "formal/lean/sphere-eversion/to_mathlib/unused/parition_of_unity.lean", - "formal/afp/ConcurrentIMP/ex/CIMP_unbounded_buffer.thy", - "formal/afp/CoSMeDis/Post_Confidentiality/Independent_Posts/Independent_Post_Observation_Setup_RECEIVER.thy", - "formal/lean/sphere-eversion/to_mathlib/analysis/normed_space/operator_norm.lean", - "formal/afp/MSO_Regex_Equivalence/Pi_Equivalence_Checking.thy", - "formal/lean/mathlib/algebra/category/Ring/filtered_colimits.lean", - "formal/mizar/relset_2.miz", - "formal/mizar/partpr_2.miz", - "formal/lean/mathlib/category_theory/monoidal/braided.lean", - "formal/afp/Differential_Dynamic_Logic/Uniform_Renaming.thy", - "formal/afp/Szpilrajn/Szpilrajn.thy", - "formal/afp/Liouville_Numbers/Liouville_Numbers.thy", - "formal/afp/Dominance_CHK/Dom_Kildall_Correct.thy", - "formal/afp/AODV/variants/e_all_abcd/E_Fresher.thy", - "formal/afp/Tycon/Lift_Monad.thy", - "formal/afp/Collections/Examples/Refine_Monadic/Refine_Fold.thy", - "formal/afp/Refine_Monadic/Refine_Det.thy", - "formal/afp/Stone_Algebras/Stone_Construction.thy", - "formal/mizar/endalg.miz", - "formal/mizar/fscirc_1.miz", - "formal/coq/math-comp/solvable/jordanholder.v", - "formal/afp/ROBDD/BDT.thy", - "formal/afp/CRDT/Counter.thy", - "formal/afp/Interpreter_Optimizations/Inca_to_Ubx_simulation.thy", - "formal/afp/MFODL_Monitor_Optimized/Regex.thy", - "formal/afp/Inductive_Inference/CONS_LIM.thy", - "formal/afp/MiniSail/Operational.thy", - "formal/afp/Regular-Sets/Equivalence_Checking.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p541.lean", - "formal/afp/Refine_Imperative_HOL/IICF/Intf/IICF_Prio_Map.thy", - "formal/lean/sphere-eversion/to_mathlib/geometry/manifold/misc_manifold.lean", - "formal/afp/Metalogic_ProofChecker/CodeGen.thy", - "formal/afp/Network_Security_Policy_Verification/Examples/Impl_List_Playground_statefulpolicycompliance.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p140.lean", - "formal/afp/Multi_Party_Computation/OT_Functionalities.thy", - "formal/afp/Security_Protocol_Refinement/Refinement/Message.thy", - "formal/hol/Help/RATOR_CONV.doc", - "formal/afp/Dominance_CHK/Dom_Kildall.thy", - "formal/lean/mathlib/topology/category/Top/adjunctions.lean", - "formal/hol/100/constructible.ml", - "formal/afp/JinjaThreads/Compiler/J0J1Bisim.thy", - "formal/hol/Help/name_of.doc", - "formal/mizar/glib_009.miz", - "formal/mizar/bvfunc25.miz", - "formal/lean/mathlib/combinatorics/catalan.lean", - "formal/afp/WOOT_Strong_Eventual_Consistency/IntegrateAlgorithm.thy", - "formal/afp/Gaussian_Integers/Gaussian_Integers_Test.thy", - "formal/lean/mathlib/category_theory/concrete_category/default.lean", - "formal/afp/CCS/Strong_Bisim.thy", - "formal/mizar/topmetr.miz", - "formal/afp/CSP_RefTK/CopyBuffer_props.thy", - "formal/hol/Multivariate/canal.ml", - "formal/afp/RSAPSS/Word.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/numbertheory/fxeq4powxp6powxp9powx_f2powmdvdf2pown.lean", - 
"formal/afp/SimplifiedOntologicalArgument/ScottVariant.thy", - "formal/hol/Help/mk_iff.doc", - "formal/afp/BD_Security_Compositional/Independent_Secrets.thy", - "formal/afp/Free-Groups/PingPongLemma.thy", - "formal/afp/DynamicArchitectures/Configuration_Traces.thy", - "formal/afp/LightweightJava/document/root.tex", - "formal/hol/Help/HINT_EXISTS_TAC.doc", - "formal/afp/AODV/variants/e_all_abcd/E_Global_Invariants.thy", - "formal/lean/mathlib/category_theory/limits/preserves/finite.lean", - "formal/afp/Jordan_Normal_Form/Gauss_Jordan_IArray_Impl.thy", - "formal/lean/mathlib/topology/uniform_space/cauchy.lean", - "formal/coq/math-comp/character/inertia.v", - "formal/afp/IsaNet/Parametrized_Dataplane_3_undirected.thy", - "formal/afp/Secondary_Sylow/document/root.tex", - "formal/afp/Lambda_Free_RPOs/Lambda_Encoding.thy", - "formal/lean/mathlib/category_theory/pi/basic.lean", - "formal/lean/mathlib/model_theory/direct_limit.lean", - "formal/afp/Isabelle_Marries_Dirac/Entanglement.thy", - "formal/afp/Optimal_BST/Optimal_BST.thy", - "formal/afp/Stuttering_Equivalence/PLTL.thy", - "formal/hol/Help/new_constant.doc", - "formal/afp/Gromov_Hyperbolicity/Isometries.thy", - "formal/afp/CakeML/generated/Lem_set_extra.thy", - "formal/afp/First_Order_Terms/Unification.thy", - "formal/afp/Gabow_SCC/document/intro.tex", - "formal/afp/Ordinary_Differential_Equations/Library/Linear_ODE.thy", - "formal/afp/Collections/Userguides/ICF_Userguide.thy", - "formal/afp/Containers/ITP-2013/Benchmark_Default.thy", - "formal/afp/Echelon_Form/Code_Cayley_Hamilton.thy", - "formal/lean/lftcm/solutions/thursday/category_theory/exercise5.lean", - "formal/afp/No_FTL_observers/SomeFunc.thy", - "formal/afp/FOL_Harrison/document/root.tex", - "formal/lean/mathlib/algebra/category/Module/products.lean", - "formal/afp/WorkerWrapper/Nub.thy", - "formal/lean/mathlib/category_theory/monoidal/End.lean", - "formal/afp/LambdaMu/Substitution.thy", - "formal/afp/Security_Protocol_Refinement/Refinement/Keys.thy", - "formal/afp/Count_Complex_Roots/Count_Complex_Roots.thy", - "formal/lean/mathlib/group_theory/subgroup/pointwise.lean", - "formal/afp/Transition_Systems_and_Automata/Automata/NBA/NBA_Implement.thy", - "formal/lean/mathlib/group_theory/group_action/fixing_subgroup.lean", - "formal/afp/Polynomial_Factorization/Explicit_Roots.thy", - "formal/afp/Smith_Normal_Form/Elementary_Divisor_Rings.thy", - "formal/afp/CakeML_Codegen/Doc/Doc_Terms.thy", - "formal/afp/Concurrent_Revisions/Renaming.thy", - "formal/afp/DPRM_Theorem/Diophantine/Modulo_Divisibility.thy", - "formal/afp/Call_Arity/SestoftConf.thy", - "formal/afp/JinjaThreads/Basic/Set_Monad.thy", - "formal/lean/mathlib/data/polynomial/monomial.lean", - "formal/mizar/euclmetr.miz", - "formal/hol/100/pnt.ml", - "formal/afp/TLA/PreFormulas.thy", - "formal/afp/Interpreter_Optimizations/Inca_to_Ubx_compiler.thy", - "formal/afp/Case_Labeling/Examples/Conditionals.thy", - "formal/mizar/orders_2.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p616.lean", - "formal/hol/Help/DISCH.doc", - "formal/afp/UPF_Firewall/PacketFilter/PortCombinators.thy", - "formal/afp/Prime_Distribution_Elementary/Partial_Zeta_Bounds.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1969/p1.lean", - "formal/mizar/ringfrac.miz", - "formal/lean/mathlib/data/real/pi/wallis.lean", - "formal/afp/Probabilistic_Prime_Tests/Solovay_Strassen_Test.thy", - "formal/afp/Core_DOM/document/root.tex", - "formal/afp/Laws_of_Large_Numbers/Laws_of_Large_Numbers.thy", - "formal/lean/mathlib/data/finsupp/to_dfinsupp.lean", - 
"formal/afp/FO_Theory_Rewriting/Closure/Context_RR2.thy", - "formal/lean/mathlib/algebra/big_operators/default.lean", - "formal/hol/miz3/test.ml", - "formal/afp/Real_Time_Deque/Idle.thy", - "formal/afp/Transitive_Models/Internalizations.thy", - "formal/afp/Physical_Quantities/document/root.tex", - "formal/afp/CryptoBasedCompositionalProperties/ListExtras.thy", - "formal/lean/mathlib/topology/metric_space/gluing.lean", - "formal/coq/analysis/summability.v", - "formal/afp/DiskPaxos/DiskPaxos_Inv6.thy", - "formal/afp/Types_To_Sets_Extension/Examples/SML_Relativization/Topology/SML_Topological_Space.thy", - "formal/afp/Stellar_Quorums/Stellar_Quorums.thy", - "formal/afp/Propositional_Proof_Systems/CNF_Sema.thy", - "formal/afp/Constructive_Cryptography/Examples/Examples.thy", - "formal/lean/liquid/for_mathlib/short_complex_colimits.lean", - "formal/hol/100/liouville.ml", - "formal/afp/Differential_Game_Logic/Lib.thy", - "formal/afp/Pi_Calculus/Weak_Early_Bisim_Pres.thy", - "formal/afp/Jinja/J/execute_Bigstep.thy", - "formal/afp/JinjaThreads/Common/WellForm.thy", - "formal/afp/Iptables_Semantics/Examples/Fail/Ports_Fail.thy", - "formal/afp/MSO_Regex_Equivalence/M2L_Equivalence_Checking.thy", - "formal/afp/Stateful_Protocol_Composition_and_Typing/Stateful_Strands.thy", - "formal/afp/AODV/variants/e_all_abcd/E_Seq_Invariants.thy", - "formal/afp/Combinatorics_Words/Submonoids.thy", - "formal/afp/Pi_Calculus/Weak_Late_Bisim_Subst_Pres.thy", - "formal/hol/Help/PURE_SIMP_RULE.doc", - "formal/coq/odd-order/stripped_odd_order_theorem.v", - "formal/hol/Help/ORELSE_TCL.doc", - "formal/afp/Finite_Fields/Finite_Fields_Isomorphic.thy", - "formal/mizar/real_ns3.miz", - "formal/lean/mathlib/analysis/calculus/implicit.lean", - "formal/afp/Nested_Multisets_Ordinals/McCarthy_91.thy", - "formal/afp/Types_To_Sets_Extension/Examples/Vector_Spaces/VS_Prerequisites.thy", - "formal/afp/Key_Agreement_Strong_Adversaries/pfslvl1.thy", - "formal/afp/BytecodeLogicJmlTypes/AssocLists.thy", - "formal/afp/Verified_SAT_Based_AI_Planning/STRIPS_Semantics.thy", - "formal/lean/mathlib/topology/algebra/monoid.lean", - "formal/hol/Help/issymb.doc", - "formal/hol/Logic/herbrand.ml", - "formal/afp/CZH_Universal_Constructions/czh_ucategories/CZH_UCAT_Introduction.thy", - "formal/lean/mathlib/analysis/sum_integral_comparisons.lean", - "formal/afp/AODV/Quality_Increases.thy", - "formal/afp/CAVA_LTL_Modelchecker/SM/Refine/SM_Pid.thy", - "formal/afp/List-Infinite/CommonSet/Util_Set.thy", - "formal/afp/Delta_System_Lemma/Cardinal_Library.thy", - "formal/lean/mathlib/group_theory/submonoid/centralizer.lean", - "formal/afp/Zeta_3_Irrational/document/root.tex", - "formal/afp/Registers/Axioms_Complement.thy", - "formal/afp/Hahn_Jordan_Decomposition/Hahn_Jordan_Decomposition.thy", - "formal/afp/Security_Protocol_Refinement/Key_establish/m3_nssk.thy", - "formal/lean/mathlib/measure_theory/function/strongly_measurable_lp.lean", - "formal/lean/mathlib/category_theory/limits/colimit_limit.lean", - "formal/afp/Twelvefold_Way/Twelvefold_Way_Core.thy", - "formal/afp/Iptables_Semantics/Matching_Embeddings.thy", - "formal/lean/mathlib/algebra/char_p/char_and_card.lean", - "formal/afp/Monad_Memo_DP/Indexing.thy", - "formal/hol/Help/COND_CASES_TAC.doc", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_NTCF.thy", - "formal/lean/mathlib/algebra/module/ulift.lean", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1987/p1.lean", - "formal/afp/Regular_Tree_Relations/Tree_Automata/Tree_Automata_Det.thy", - 
"formal/afp/Actuarial_Mathematics/Preliminaries.thy", - "formal/afp/CZH_Foundations/czh_digraphs/CZH_DG_Introduction.thy", - "formal/afp/Regression_Test_Selection/Common/Semantics.thy", - "formal/afp/Forcing/Interface.thy", - "formal/lean/mathlib/analysis/normed_space/conformal_linear_map.lean", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_Order.thy", - "formal/afp/Call_Arity/ArityEtaExpansion.thy", - "formal/mizar/quatern3.miz", - "formal/afp/Jinja/Common/Value.thy", - "formal/lean/perfectoid/for_mathlib/equiv.lean", - "formal/afp/CoreC++/WellForm.thy", - "formal/afp/Linear_Programming/More_Jordan_Normal_Forms.thy", - "formal/hol/Help/isnum.doc", - "formal/hol/Rqe/rqe_lib.ml", - "formal/afp/CoSMeDis/Outer_Friend_Confidentiality/Outer_Friend_Network.thy", - "formal/afp/Weight_Balanced_Trees/Weight_Balanced_Trees.thy", - "formal/hol/Help/hol_dir.doc", - "formal/lean/mathlib/ring_theory/ideal/prod.lean", - "formal/afp/Abstract-Rewriting/Seq.thy", - "formal/lean/mathlib/data/set/countable.lean", - "formal/mizar/polyvie1.miz", - "formal/hol/Help/CONDS_ELIM_CONV.doc", - "formal/afp/Dependent_SIFUM_Type_Systems/TypeSystem.thy", - "formal/lean/mathlib/group_theory/group_action/units.lean", - "formal/mizar/pboole.miz", - "formal/lean/mathlib/order/category/BoundedOrder.lean", - "formal/afp/Simplicial_complexes_and_boolean_functions/Simplicial_complex.thy", - "formal/hol/Help/MAP_EVERY.doc", - "formal/mizar/altcat_1.miz", - "formal/mizar/anproj_2.miz", - "formal/hol/Examples/update_database.ml", - "formal/afp/Regular_Tree_Relations/Util/Term_Context.thy", - "formal/mizar/tdlat_1.miz", - "formal/afp/Gauss_Jordan/IArray_Addenda.thy", - "formal/afp/FO_Theory_Rewriting/Util/Saturation.thy", - "formal/mizar/funct_4.miz", - "formal/mizar/waybel29.miz", - "formal/afp/Twelvefold_Way/Twelvefold_Way_Entry11.thy", - "formal/afp/Multirelations/document/root.tex", - "formal/lean/mathlib/measure_theory/group/prod.lean", - "formal/lean/mathlib/algebra/char_p/local_ring.lean", - "formal/hol/Help/follow_path.doc", - "formal/afp/FO_Theory_Rewriting/Closure/Lift_Root_Step.thy", - "formal/mizar/glunir00.miz", - "formal/mizar/roughif1.miz", - "formal/afp/CoSMeDis/Outer_Friend_Confidentiality/Outer_Friend.thy", - "formal/mizar/jordan1.miz", - "formal/hol/Help/mk_setenum.doc", - "formal/hol/Boyer_Moore/support.ml", - "formal/mizar/cqc_the1.miz", - "formal/afp/Bell_Numbers_Spivey/document/root.tex", - "formal/lean/sphere-eversion/loops/basic.lean", - "formal/mizar/pre_circ.miz", - "formal/afp/Automatic_Refinement/Lib/Tagged_Solver.thy", - "formal/afp/Strong_Security/Type_System.thy", - "formal/afp/Physical_Quantities/Power_int.thy", - "formal/afp/Quantales/Quantic_Nuclei_Conuclei.thy", - "formal/afp/Green/Green.thy", - "formal/lean/mathlib/measure_theory/function/simple_func_dense_lp.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p328.lean", - "formal/lean/mathlib/category_theory/bicategory/single_obj.lean", - "formal/afp/MonoidalCategory/MonoidalFunctor.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/numbertheory/sqmod3in01d.lean", - "formal/afp/Linear_Inequalities/document/root.tex", - "formal/afp/Slicing/Dynamic/DependentLiveVariables.thy", - "formal/afp/Constructive_Cryptography/Examples/Secure_Channel/System_Construction.thy", - "formal/hol/Help/ITAUT.doc", - "formal/lean/mathzoo/mathzoo/olympiads/amc/8/2020/p6.lean", - "formal/afp/Ordinal/document/root.tex", - "formal/hol/Help/is_undefined.doc", - "formal/lean/mathlib/algebra/category/Module/epi_mono.lean", - 
"formal/afp/Abstract_Soundness/Finite_Proof_Soundness.thy", - "formal/afp/MDP-Rewards/Blinfun_Util.thy", - "formal/lean/perfectoid/sheaves/presheaf_of_topological_rings.lean", - "formal/afp/CoSMeDis/Outer_Friend_Confidentiality/Issuer/Outer_Friend_Issuer_Value_Setup.thy", - "formal/afp/UTP/utp/utp_wp.thy", - "formal/mizar/l_hospit.miz", - "formal/afp/Van_Emde_Boas_Trees/document/root.tex", - "formal/mizar/measure4.miz", - "formal/lean/mathlib/set_theory/cardinal/continuum.lean", - "formal/afp/CoSMeDis/Outer_Friend_Confidentiality/Receiver/Outer_Friend_Receiver_Value_Setup.thy", - "formal/lean/mathlib/number_theory/von_mangoldt.lean", - "formal/lean/mathlib/topology/vector_bundle/basic.lean", - "formal/afp/Network_Security_Policy_Verification/Lib/FiniteListGraph_Impl.thy", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_Small_Category.thy", - "formal/lean/mathlib/data/list/nodup.lean", - "formal/lean/liquid/for_mathlib/snake_lemma3.lean", - "formal/afp/Collections/ICF/spec/PrioSpec.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2009/a/p25.lean", - "formal/afp/Card_Number_Partitions/Number_Partition.thy", - "formal/afp/Category3/Subcategory.thy", - "formal/afp/LatticeProperties/Lattice_Ordered_Group.thy", - "formal/afp/POPLmark-deBruijn/POPLmarkRecord.thy", - "formal/afp/Case_Labeling/Examples/Hoare/Labeled_Hoare_Examples.thy", - "formal/afp/IP_Addresses/IP_Address_Parser.thy", - "formal/afp/CoSMeDis/Friend_Confidentiality/Friend_Value_Setup.thy", - "formal/hol/Help/RECALL_ACCEPT_TAC.doc", - "formal/afp/Collections/document/documentation.tex", - "formal/afp/Goedel_HFSet_Semanticless/Sigma.thy", - "formal/coq/odd-order/BGsection14.v", - "formal/afp/Saturation_Framework_Extensions/Standard_Redundancy_Criterion.thy", - "formal/hol/Library/primitive.ml", - "formal/lean/liquid/for_mathlib/abelian_category.lean", - "formal/hol/Tutorial/Inductive_definitions.ml", - "formal/afp/Core_DOM/common/pointers/ElementPointer.thy", - "formal/lean/mathlib/data/qpf/univariate/basic.lean", - "formal/afp/SenSocialChoice/RPRs.thy", - "formal/lean/mathlib/topology/algebra/order/extend_from.lean", - "formal/afp/Menger/Graph.thy", - "formal/afp/Core_DOM/common/pointers/ObjectPointer.thy", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_SubnetsInGW.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p184.lean", - "formal/afp/Propositional_Proof_Systems/CNF_Formulas.thy", - "formal/mizar/weddwitt.miz", - "formal/hol/Examples/inverse_bug_puzzle_miz3.ml", - "formal/lean/mathlib/algebra/homology/homotopy_category.lean", - "formal/afp/CAVA_Automata/Stuttering_Extension.thy", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1991/p1.lean", - "formal/afp/Abstract-Hoare-Logics/Procs/PsTermi.thy", - "formal/mizar/zmodul04.miz", - "formal/afp/MFODL_Monitor_Optimized/document/root.tex", - "formal/mizar/moebius1.miz", - "formal/lean/mathlib/analysis/special_functions/exponential.lean", - "formal/afp/Probabilistic_Prime_Tests/Legendre_Symbol.thy", - "formal/afp/Datatype_Order_Generator/Derive_Aux.thy", - "formal/lean/sphere-eversion/local/h_principle.lean", - "formal/hol/Help/CONJUNCTS_UPPERCASE.doc", - "formal/afp/Game_Based_Crypto/SUF_CMA.thy", - "formal/mizar/sin_cos.miz", - "formal/lean/mathlib/data/list/chain.lean", - "formal/afp/Separation_Algebra/ex/capDL/Separation_D.thy", - "formal/lean/mathlib/topology/algebra/order/compact.lean", - "formal/afp/Robinson_Arithmetic/document/root.tex", - "formal/afp/Correctness_Algebras/Binary_Iterings_Nonstrict.thy", - 
"formal/afp/Refine_Monadic/examples/Breadth_First_Search.thy", - "formal/mizar/integra5.miz", - "formal/afp/Quantales/Quantales.thy", - "formal/mizar/matrixr2.miz", - "formal/lean/mathlib/field_theory/ratfunc.lean", - "formal/afp/Rank_Nullity_Theorem/Dual_Order.thy", - "formal/afp/CoSMed/Friend_Confidentiality/Friend_Value_Setup.thy", - "formal/afp/JinjaThreads/Common/Value.thy", - "formal/lean/liquid/for_mathlib/equivalence_additive.lean", - "formal/afp/Projective_Measurements/Linear_Algebra_Complements.thy", - "formal/afp/Multi_Party_Computation/document/root.tex", - "formal/lean/mathlib/linear_algebra/smodeq.lean", - "formal/afp/Randomised_Social_Choice/document/root.tex", - "formal/afp/Approximation_Algorithms/Approx_SC_Hoare.thy", - "formal/lean/mathlib/algebraic_geometry/function_field.lean", - "formal/mizar/jgraph_1.miz", - "formal/coq/math-comp/fingroup/gproduct.v", - "formal/afp/Noninterference_Ipurge_Unwinding/IpurgeUnwinding.thy", - "formal/hol/Help/dest_list.doc", - "formal/hol/Minisat/sat_script.ml", - "formal/afp/Applicative_Lifting/Joinable.thy", - "formal/afp/Random_Graph_Subgraph_Threshold/document/root.tex", - "formal/afp/InformationFlowSlicing_Inter/LiftingInter.thy", - "formal/afp/Dirichlet_L/Multiplicative_Characters.thy", - "formal/hol/Help/replicate.doc", - "formal/afp/Automatic_Refinement/Lib/Mpat_Antiquot.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p3.lean", - "formal/mizar/matrix_9.miz", - "formal/afp/Simplicial_complexes_and_boolean_functions/BDD.thy", - "formal/afp/Types_To_Sets_Extension/Examples/Introduction.thy", - "formal/afp/LTL_Normal_Form/Normal_Form.thy", - "formal/afp/Types_To_Sets_Extension/Examples/SML_Relativization/Foundations/Transfer_Ext.thy", - "formal/afp/Algebraic_Numbers/Bivariate_Polynomials.thy", - "formal/lean/mathlib/topology/algebra/open_subgroup.lean", - "formal/afp/Independence_CH/Forcing_Main.thy", - "formal/afp/JinjaThreads/J/Annotate.thy", - "formal/afp/CoSMeDis/Automation_Setup.thy", - "formal/afp/CofGroups/document/root.tex", - "formal/afp/Slicing/While/Semantics.thy", - "formal/afp/Transition_Systems_and_Automata/Automata/DRA/DRA_Refine.thy", - "formal/afp/Actuarial_Mathematics/Interest.thy", - "formal/lean/mathlib/topology/algebra/with_zero_topology.lean", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_Structure_Example.thy", - "formal/lean/mathlib/topology/metric_space/completion.lean", - "formal/afp/Formal_Puiseux_Series/Puiseux_Polynomial_Library.thy", - "formal/afp/Safe_OCL/OCL_Syntax.thy", - "formal/lean/liquid/for_mathlib/nnreal_int_binary.lean", - "formal/afp/Correctness_Algebras/Monotonic_Boolean_Transformers.thy", - "formal/lean/mathlib/order/filter/pointwise.lean", - "formal/afp/Amortized_Complexity/Priority_Queue_ops.thy", - "formal/afp/Correctness_Algebras/Monotonic_Boolean_Transformers_Instances.thy", - "formal/afp/Physical_Quantities/SI_Prefix.thy", - "formal/afp/Clean/examples/SquareRoot_concept.thy", - "formal/hol/Help/num_CONV.doc", - "formal/hol/Rqe/util.ml", - "formal/lean/sphere-eversion/local/dual_pair.lean", - "formal/afp/Kleene_Algebra/DRA.thy", - "formal/lean/mathlib/topology/unit_interval.lean", - "formal/afp/ArrowImpossibilityGS/Thys/Arrow_Order.thy", - "formal/mizar/sincos10.miz", - "formal/lean/mathlib/number_theory/wilson.lean", - "formal/afp/BNF_Operations/N2M.thy", - "formal/lean/mathlib/measure_theory/measure/haar_quotient.lean", - "formal/mizar/trees_9.miz", - "formal/afp/Random_Graph_Subgraph_Threshold/Prob_Lemmas.thy", - 
"formal/afp/Clean/src/Clean_Symbex.thy", - "formal/afp/Winding_Number_Eval/Cauchy_Index_Theorem.thy", - "formal/lean/mathlib/algebra/group_ring_action.lean", - "formal/afp/Metalogic_ProofChecker/Name.thy", - "formal/hol/Help/inst_goal.doc", - "formal/lean/mathlib/data/polynomial/taylor.lean", - "formal/hol/Help/ABS_CONV.doc", - "formal/afp/Separation_Logic_Imperative_HOL/document/root.tex", - "formal/afp/Padic_Ints/Padic_Construction.thy", - "formal/hol/Help/set_basic_rewrites.doc", - "formal/lean/mathlib/topology/connected.lean", - "formal/lean/mathlib/topology/category/Profinite/cofiltered_limit.lean", - "formal/afp/RSAPSS/document/root.tex", - "formal/coq/analysis/derive.v", - "formal/mizar/latwal_1.miz", - "formal/afp/Partial_Order_Reduction/Extensions/CCPO_Extensions.thy", - "formal/mizar/waybel12.miz", - "formal/afp/Finite_Fields/Ring_Characteristic.thy", - "formal/afp/CoCon/Traceback_Properties.thy", - "formal/lean/mathlib/analysis/analytic/linear.lean", - "formal/lean/mathlib/ring_theory/ring_hom/finite_type.lean", - "formal/afp/Green/DiamExample.thy", - "formal/afp/CoSMeDis/Post_Confidentiality/Post_Value_Setup_RECEIVER.thy", - "formal/mizar/seqfunc2.miz", - "formal/hol/Help/SELECT_CONV.doc", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2009/a/p9.lean", - "formal/lean/perfectoid/for_mathlib/submodule.lean", - "formal/afp/Mereology/CEM.thy", - "formal/afp/HRB-Slicing/Proc/ValidPaths.thy", - "formal/mizar/setwop_2.miz", - "formal/lean/mathlib/logic/embedding.lean", - "formal/afp/Generalized_Counting_Sort/Sorting.thy", - "formal/afp/Native_Word/Native_Word_Test.thy", - "formal/afp/HRB-Slicing/JinjaVM_Inter/JVMCFG.thy", - "formal/afp/Count_Complex_Roots/document/root.tex", - "formal/afp/HRB-Slicing/StaticInter/CFG_wf.thy", - "formal/afp/JinjaDCI/JVM/JVMInstructions.thy", - "formal/lean/liquid/for_mathlib/chain_complex_exact.lean", - "formal/lean/liquid/for_mathlib/SemiNormedGroup.lean", - "formal/afp/MFMC_Countable/MFMC_Misc.thy", - "formal/afp/Symmetric_Polynomials/Symmetric_Polynomials_Code.thy", - "formal/afp/List_Update/Prob_Theory.thy", - "formal/afp/Selection_Heap_Sort/HeapFunctional.thy", - "formal/lean/mathlib/data/list/default.lean", - "formal/mizar/wellfnd1.miz", - "formal/lean/mathlib/ring_theory/witt_vector/discrete_valuation_ring.lean", - "formal/afp/MFODL_Monitor_Optimized/Monitor.thy", - "formal/lean/mathlib/topology/algebra/order/floor.lean", - "formal/afp/Real_Power/RealPower.thy", - "formal/afp/Higher_Order_Terms/Term_Utils.thy", - "formal/hol/EC/edwards25519.ml", - "formal/afp/First_Order_Terms/Transitive_Closure_More.thy", - "formal/afp/Parity_Game/Attractor.thy", - "formal/afp/Graph_Theory/Graph_Theory.thy", - "formal/lean/mathlib/order/fixed_points.lean", - "formal/lean/mathlib/data/polynomial/reverse.lean", - "formal/afp/Correctness_Algebras/N_Omega_Binary_Iterings.thy", - "formal/afp/PAC_Checker/PAC_Polynomials_Operations.thy", - "formal/lean/mathlib/analysis/convex/quasiconvex.lean", - "formal/afp/BDD/EvalProof.thy", - "formal/afp/Auto2_HOL/HOL/Set_Thms.thy", - "formal/hol/Help/MATCH_ACCEPT_TAC.doc", - "formal/mizar/integra6.miz", - "formal/mizar/petri_2.miz", - "formal/hol/Help/REPEATC.doc", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_ACLnotCommunicateWith_impl.thy", - "formal/afp/AODV/variants/e_all_abcd/E_All_ABCD.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1997/p5.lean", - "formal/afp/Formula_Derivatives/Abstract_Formula.thy", - "formal/afp/Orbit_Stabiliser/Orbit_Stabiliser.thy", - 
"formal/lean/sphere-eversion/loops/reparametrization.lean", - "formal/lean/mathlib/data/int/nat_prime.lean", - "formal/afp/Hoare_Time/Quant_VCG.thy", - "formal/afp/ZFC_in_HOL/Ordinal_Exp.thy", - "formal/afp/JinjaDCI/J/WellTypeRT.thy", - "formal/afp/Real_Impl/document/root.tex", - "formal/lean/mathlib/data/polynomial/degree/card_pow_degree.lean", - "formal/mizar/ndiff_6.miz", - "formal/lean/mathlib/data/dfinsupp/basic.lean", - "formal/lean/mathlib/topology/instances/real.lean", - "formal/afp/Auto2_Imperative_HOL/Functional/Quicksort.thy", - "formal/afp/Correctness_Algebras/N_Omega_Algebras.thy", - "formal/lean/mathlib/algebra/category/FinVect.lean", - "formal/lean/mathlib/category_theory/abelian/projective.lean", - "formal/lean/mathlib/data/nat/multiplicity.lean", - "formal/afp/AODV/variants/b_fwdrreps/B_Aodv_Loop_Freedom.thy", - "formal/mizar/projdes1.miz", - "formal/mizar/polynom7.miz", - "formal/afp/Recursion-Theory-I/PRecUnGr.thy", - "formal/afp/Isabelle_Meta_Model/toy_example/generator/Design_shallow.thy", - "formal/afp/Tree-Automata/document/root.tex", - "formal/lean/liquid/pseudo_normed_group/splittable.lean", - "formal/hol/Help/INT_ADD_CONV.doc", - "formal/afp/VolpanoSmith/Semantics.thy", - "formal/mizar/scmringi.miz", - "formal/lean/mathlib/algebra/group/ulift.lean", - "formal/afp/Berlekamp_Zassenhaus/Poly_Mod.thy", - "formal/afp/Collections/Iterator/It_to_It.thy", - "formal/mizar/topreala.miz", - "formal/mizar/prvect_4.miz", - "formal/afp/Transition_Systems_and_Automata/Basic/Maps.thy", - "formal/afp/LTL/Equivalence_Relations.thy", - "formal/afp/Probabilistic_Timed_Automata/library/Graphs.thy", - "formal/afp/Shivers-CFA/ExCF.thy", - "formal/lean/mathlib/linear_algebra/free_module/pid.lean", - "formal/afp/UTP/utp/utp_theory.thy", - "formal/afp/FOL_Seq_Calc2/Export.thy", - "formal/hol/miz3/make.ml", - "formal/afp/Topological_Semantics/sse_boolean_algebra.thy", - "formal/afp/Perron_Frobenius/Perron_Frobenius_General.thy", - "formal/afp/Deep_Learning/DL_Shallow_Model.thy", - "formal/hol/Library/frag.ml", - "formal/afp/Metalogic_ProofChecker/Core.thy", - "formal/afp/Perron_Frobenius/HMA_Connect.thy", - "formal/hol/Tutorial/Wellfounded_induction.ml", - "formal/afp/CSP_RefTK/Fix_ind_ext.thy", - "formal/hol/Help/mk_primed_var.doc", - "formal/afp/Green/document/root.tex", - "formal/lean/mathlib/data/real/sign.lean", - "formal/lean/mathlib/analysis/special_functions/bernstein.lean", - "formal/lean/mathlib/algebra/category/Module/monoidal.lean", - "formal/mizar/fuzimpl2.miz", - "formal/afp/Simpl/Language.thy", - "formal/afp/Key_Agreement_Strong_Adversaries/document/session_graph.tex", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1984/p6.lean", - "formal/afp/Collections/GenCF/Intf/Intf_Set.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2002/b/p4.lean", - "formal/afp/Collections/ICF/impl/FTAnnotatedListImpl.thy", - "formal/afp/CZH_Foundations/czh_semicategories/CZH_SMC_NTSMCF.thy", - "formal/afp/Jinja/Common/WellForm.thy", - "formal/afp/Probabilistic_System_Zoo/Vardi.thy", - "formal/lean/mathlib/analysis/complex/liouville.lean", - "formal/afp/Dependent_SIFUM_Type_Systems/Examples/TypeSystemTactics.thy", - "formal/afp/Network_Security_Policy_Verification/vertex_example_simps.thy", - "formal/hol/Help/rotate.doc", - "formal/mizar/seqfunc.miz", - "formal/afp/Deriving/Comparator_Generator/Compare_Generator.thy", - "formal/afp/Call_Arity/ArityTransform.thy", - "formal/afp/CAVA_LTL_Modelchecker/BoolProgs/BoolProgs.thy", - "formal/afp/Hybrid_Multi_Lane_Spatial_Logic/Restriction.thy", - 
"formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p113.lean", - "formal/hol/Help/X_CHOOSE_TAC.doc", - "formal/lean/mathlib/analysis/complex/abs_max.lean", - "formal/lean/mathlib/analysis/mean_inequalities_pow.lean", - "formal/coq/odd-order/PFsection2.v", - "formal/afp/Collections/ICF/ICF_Refine_Monadic.thy", - "formal/hol/Help/PURE_REWRITE_CONV.doc", - "formal/lean/liquid/examples/pBanach.lean", - "formal/lean/mathlib/data/set/intervals/basic.lean", - "formal/afp/Dijkstra_Shortest_Path/document/root.tex", - "formal/mizar/complex3.miz", - "formal/mizar/comseq_1.miz", - "formal/afp/Pi_Transcendental/document/root.tex", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2001/p5.lean", - "formal/afp/Twelvefold_Way/Twelvefold_Way_Entry8.thy", - "formal/afp/Finitely_Generated_Abelian_Groups/DirProds.thy", - "formal/coq/math-comp/solvable/all_solvable.v", - "formal/mizar/leibniz1.miz", - "formal/afp/Collections/ICF/impl/ListSetImpl_Sorted.thy", - "formal/afp/Category2/document/root.tex", - "formal/hol/Rqe/testform_thms.ml", - "formal/hol/Tutorial/Number_theory.ml", - "formal/lean/mathlib/ring_theory/adjoin/fg.lean", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_GRPH.thy", - "formal/hol/Help/CONDS_CELIM_CONV.doc", - "formal/coq/odd-order/PFsection3.v", - "formal/lean/mathlib/data/finset/interval.lean", - "formal/mizar/graph_1.miz", - "formal/afp/Propositional_Proof_Systems/SC_Depth.thy", - "formal/lean/mathlib/category_theory/limits/preserves/shapes/terminal.lean", - "formal/lean/mathlib/probability/martingale/upcrossing.lean", - "formal/afp/Shadow_DOM/Shadow_DOM.thy", - "formal/mizar/scmfsa8b.miz", - "formal/mizar/vectsp11.miz", - "formal/afp/Automated_Stateful_Protocol_Verification/Term_Implication.thy", - "formal/afp/Lambda_Free_EPO/Embeddings.thy", - "formal/afp/Eval_FO/Cluster.thy", - "formal/afp/RefinementReactive/document/root.tex", - "formal/mizar/gate_1.miz", - "formal/lean/liquid/for_mathlib/Fintype.lean", - "formal/hol/Help/GENL.doc", - "formal/afp/Van_Emde_Boas_Trees/VEBT_Space.thy", - "formal/afp/Graph_Saturation/MissingRelation.thy", - "formal/afp/LOFT/document/root.tex", - "formal/afp/SenSocialChoice/Arrow.thy", - "formal/afp/GenClock/GenClock.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p304.lean", - "formal/mizar/euclid_9.miz", - "formal/afp/Collections/ICF/gen_algo/ICF_Gen_Algo_Chapter.thy", - "formal/afp/Stable_Matching/Bossiness.thy", - "formal/afp/UTP/utp/utp_concurrency.thy", - "formal/afp/Regular-Sets/Derivatives.thy", - "formal/lean/mathlib/algebra/geom_sum.lean", - "formal/hol/Help/remove.doc", - "formal/mizar/fintopo4.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p224.lean", - "formal/afp/Applicative_Lifting/Combinators.thy", - "formal/afp/Vickrey_Clarke_Groves/StrictCombinatorialAuction.thy", - "formal/lean/mathlib/measure_theory/measure/hausdorff.lean", - "formal/lean/mathlib/logic/equiv/set.lean", - "formal/lean/mathlib/category_theory/monoidal/skeleton.lean", - "formal/afp/First_Welfare_Theorem/Microeconomics/Consumers.thy", - "formal/afp/Catalan_Numbers/Catalan_Numbers.thy", - "formal/lean/mathlib/geometry/manifold/partition_of_unity.lean", - "formal/afp/Generic_Deriving/tests/Derive_Encode.thy", - "formal/afp/Flyspeck-Tame/ArchStat.thy", - "formal/coq/math-comp/character/vcharacter.v", - "formal/hol/Jordan/parse_ext_override_interface.ml", - "formal/afp/Complete_Non_Orders/Complete_Relations.thy", - "formal/afp/Jordan_Normal_Form/Gauss_Jordan_Elimination.thy", - "formal/afp/Jinja/DFA/Typing_Framework_1.thy", - 
"formal/hol/Help/meson_split_limit.doc", - "formal/afp/Euler_Partition/document/root.tex", - "formal/afp/Refine_Imperative_HOL/Lib/Concl_Pres_Clarification.thy", - "formal/afp/Combinatorics_Words/Arithmetical_Hints.thy", - "formal/afp/Bounded_Deducibility_Security/BD_Security_Unwinding.thy", - "formal/afp/Formal_Puiseux_Series/Puiseux_Laurent_Library.thy", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_BLPbasic.thy", - "formal/lean/mathlib/order/min_max.lean", - "formal/afp/Transitive_Models/Delta_System_Relative.thy", - "formal/afp/Regular_Tree_Relations/Pair_Automaton.thy", - "formal/lean/mathlib/linear_algebra/quadratic_form/prod.lean", - "formal/lean/mathlib/linear_algebra/contraction.lean", - "formal/mizar/ordeq_01.miz", - "formal/afp/Iptables_Semantics/Primitive_Matchers/Ports.thy", - "formal/afp/Refine_Imperative_HOL/Sepref_Frame.thy", - "formal/lean/mathlib/algebra/category/Group/filtered_colimits.lean", - "formal/lean/mathlib/data/zmod/algebra.lean", - "formal/mizar/ami_5.miz", - "formal/afp/MFODL_Monitor_Optimized/Formula.thy", - "formal/afp/Security_Protocol_Refinement/Key_establish/m3_ds.thy", - "formal/afp/Prim_Dijkstra_Simple/Directed_Graph_Impl.thy", - "formal/lean/liquid/condensed/filtered_colimits_commute_with_finite_limits.lean", - "formal/lean/mathlib/data/finset/sym.lean", - "formal/afp/Collections/GenCF/Impl/Impl_List_Set.thy", - "formal/afp/Sort_Encodings/TermsAndClauses.thy", - "formal/lean/mathlib/analysis/box_integral/partition/measure.lean", - "formal/lean/mathlib/category_theory/bicategory/functor.lean", - "formal/coq/analysis/normedtype.v", - "formal/mizar/groeb_1.miz", - "formal/lean/mathlib/ring_theory/nilpotent.lean", - "formal/afp/Gromov_Hyperbolicity/Boundary_Extension.thy", - "formal/hol/Help/PURE_ONCE_REWRITE_CONV.doc", - "formal/mizar/uniform3.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p126.lean", - "formal/afp/Jordan_Hoelder/JordanHolder.thy", - "formal/afp/Amortized_Complexity/Pairing_Heap_Tree_Analysis.thy", - "formal/afp/Algebraic_VCs/AVC_KAD/VC_KAD_dual_Examples.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2010/a/p22.lean", - "formal/afp/Tarskis_Geometry/Linear_Algebra2.thy", - "formal/afp/Psi_Calculi/document/root.tex", - "formal/coq/math-comp/ssreflect/finfun.v", - "formal/afp/WOOT_Strong_Eventual_Consistency/CreateAlgorithms.thy", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_FUNCT.thy", - "formal/afp/CryptHOL/Partial_Function_Set.thy", - "formal/afp/Isabelle_Meta_Model/toy_example/document_generated/Design_generated.thy", - "formal/afp/FOL_Seq_Calc2/document/root.tex", - "formal/lean/mathlib/category_theory/monoidal/coherence_lemmas.lean", - "formal/lean/mathlib/ring_theory/polynomial/chebyshev.lean", - "formal/afp/HOL-CSP/Process.thy", - "formal/lean/mathlib/data/nat/factorization/prime_pow.lean", - "formal/lean/mathlib/combinatorics/colex.lean", - "formal/hol/Boyer_Moore/boyer-moore.ml", - "formal/afp/KAT_and_DRA/SingleSorted/PHL_DRAT.thy", - "formal/lean/liquid/condensed/tensor.lean", - "formal/mizar/topalg_1.miz", - "formal/hol/Help/it.doc", - "formal/hol/Help/NUM_RING.doc", - "formal/hol/EC/montgomery.ml", - "formal/mizar/sin_cos3.miz", - "formal/lean/mathlib/group_theory/group_action/sigma.lean", - "formal/afp/Real_Time_Deque/States_Proof.thy", - "formal/mizar/real_ns1.miz", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2021/a/p14.lean", - "formal/hol/Help/SIMPLIFY_CONV.doc", - "formal/lean/mathlib/control/uliftable.lean", - 
"formal/afp/Ordered_Resolution_Prover/FO_Ordered_Resolution_Prover.thy", - "formal/afp/CZH_Foundations/czh_semicategories/CZH_SMC_Small_NTSMCF.thy", - "formal/afp/Markov_Models/ex/Crowds_Protocol.thy", - "formal/afp/Ordered_Resolution_Prover/Lazy_List_Liminf.thy", - "formal/lean/mathlib/data/polynomial/ring_division.lean", - "formal/afp/Nominal2/Nominal2_Abs.thy", - "formal/afp/Zeta_Function/Hadjicostas_Chapman.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/8/2020/p7.lean", - "formal/hol/Rqe/simplify.ml", - "formal/afp/Akra_Bazzi/Akra_Bazzi_Approximation.thy", - "formal/lean/mathlib/number_theory/bertrand.lean", - "formal/lean/mathlib/field_theory/abel_ruffini.lean", - "formal/lean/mathlib/algebra/lie/abelian.lean", - "formal/afp/Separation_Logic_Imperative_HOL/Examples/Open_List.thy", - "formal/afp/Decreasing-Diagrams/Decreasing_Diagrams.thy", - "formal/lean/mathlib/linear_algebra/linear_independent.lean", - "formal/afp/Dependent_SIFUM_Type_Systems/Security.thy", - "formal/lean/mathlib/computability/ackermann.lean", - "formal/afp/GPU_Kernel_PL/document/root.tex", - "formal/afp/CakeML/generated/Lem_function_extra.thy", - "formal/afp/Random_BSTs/Random_BSTs.thy", - "formal/mizar/qc_lang3.miz", - "formal/afp/CCS/Strong_Bisim_SC.thy", - "formal/afp/Word_Lib/Hex_Words.thy", - "formal/afp/Optimal_BST/Optimal_BST_Examples.thy", - "formal/afp/Dependent_SIFUM_Type_Systems/Preliminaries.thy", - "formal/afp/Types_To_Sets_Extension/Examples/SML_Relativization/Foundations/Lifting_Set_Ext.thy", - "formal/lean/liquid/polyhedral_lattice/quotient.lean", - "formal/mizar/prvect_3.miz", - "formal/hol/Help/ONCE_DEPTH_CONV.doc", - "formal/afp/Core_DOM/common/monads/ElementMonad.thy", - "formal/mizar/bvfunc_2.miz", - "formal/afp/Perron_Frobenius/Cancel_Card_Constraint.thy", - "formal/afp/Subset_Boolean_Algebras/Subset_Boolean_Algebras.thy", - "formal/lean/mathlib/category_theory/linear/linear_functor.lean", - "formal/lean/mathlib/ring_theory/henselian.lean", - "formal/lean/mathlib/category_theory/sites/closed.lean", - "formal/afp/Elliptic_Curves_Group_Law/Elliptic_Test.thy", - "formal/afp/Topological_Semantics/ex_LFUs.thy", - "formal/lean/mathlib/combinatorics/set_family/harris_kleitman.lean", - "formal/afp/Buchi_Complementation/Formula.thy", - "formal/afp/PAC_Checker/PAC_Specification.thy", - "formal/afp/LP_Duality/Minimum_Maximum.thy", - "formal/afp/Ordinals_and_Cardinals/Cardinal_Order_Relation_discontinued.thy", - "formal/afp/Locally-Nameless-Sigma/Locally_Nameless_Sigma.thy", - "formal/mizar/dilworth.miz", - "formal/afp/CakeML_Codegen/Terms/Sterm.thy", - "formal/lean/mathlib/algebra/category/Module/adjunctions.lean", - "formal/mizar/projpl_1.miz", - "formal/afp/LOFT/Sort_Descending.thy", - "formal/afp/Algebraic_Numbers/Resultant.thy", - "formal/afp/Dirichlet_Series/document/root.tex", - "formal/afp/DynamicArchitectures/document/root.tex", - "formal/lean/mathlib/representation_theory/basic.lean", - "formal/afp/Iptables_Semantics/Primitive_Matchers/Code_Interface.thy", - "formal/hol/RichterHilbertAxiomGeometry/miz3/HilbertAxiom.ml", - "formal/afp/DPRM_Theorem/document/root.tex", - "formal/lean/liquid/for_mathlib/homology_iso_datum.lean", - "formal/afp/Banach_Steinhaus/document/root.tex", - "formal/lean/mathlib/algebra/order/field_defs.lean", - "formal/mizar/newton02.miz", - "formal/hol/Multivariate/multivariate_database.ml", - "formal/afp/Correctness_Algebras/N_Algebras.thy", - "formal/mizar/sprect_1.miz", - "formal/afp/Bounded_Deducibility_Security/BD_Security_TS.thy", - 
"formal/afp/Safe_OCL/document/root.tex", - "formal/afp/CoSMed/Friend_Confidentiality/Friend_Intro.thy", - "formal/afp/Quick_Sort_Cost/Randomised_Quick_Sort.thy", - "formal/lean/mathlib/ring_theory/polynomial/bernstein.lean", - "formal/afp/Groebner_Macaulay/Hilbert_Function.thy", - "formal/hol/Examples/dlo.ml", - "formal/afp/SPARCv8/lib/wp/DetMonad.thy", - "formal/hol/Help/is_reserved_word.doc", - "formal/mizar/classes3.miz", - "formal/afp/JinjaDCI/BV/BVSpec.thy", - "formal/lean/mathlib/order/filter/modeq.lean", - "formal/hol/Help/vsubst.doc", - "formal/lean/mathlib/measure_theory/group/measurable_equiv.lean", - "formal/afp/Iptables_Semantics/Primitive_Matchers/L4_Protocol_Flags.thy", - "formal/lean/mathlib/analysis/special_functions/log/deriv.lean", - "formal/afp/Hybrid_Multi_Lane_Spatial_Logic/Traffic.thy", - "formal/mizar/scmring4.miz", - "formal/hol/GL/tests.ml", - "formal/afp/Dict_Construction/Documentation/Termination.thy", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_Par.thy", - "formal/afp/List_Update/MTF2_Effects.thy", - "formal/lean/mathlib/analysis/normed_space/units.lean", - "formal/afp/Jinja/BV/BVExample.thy", - "formal/afp/JinjaDCI/J/EConform.thy", - "formal/lean/mathlib/combinatorics/derangements/basic.lean", - "formal/lean/liquid/for_mathlib/free_abelian_group2.lean", - "formal/afp/Gauss_Jordan/Determinants2.thy", - "formal/coq/math-comp/algebra/matrix.v", - "formal/afp/Berlekamp_Zassenhaus/Chinese_Remainder_Poly.thy", - "formal/lean/mathlib/analysis/special_functions/log/monotone.lean", - "formal/afp/JinjaThreads/Compiler/Compiler_Main.thy", - "formal/afp/Kleene_Algebra/Inf_Matrix.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p245.lean", - "formal/afp/Game_Based_Crypto/Cryptographic_Constructions.thy", - "formal/hol/Help/ABS.doc", - "formal/afp/Auto2_Imperative_HOL/Imperative/SepAuto.thy", - "formal/lean/mathlib/ring_theory/int/basic.lean", - "formal/lean/mathlib/data/sign.lean", - "formal/mizar/waybel30.miz", - "formal/lean/liquid/for_mathlib/wide_pullback_iso.lean", - "formal/afp/LTL_Normal_Form/Normal_Form_Complexity.thy", - "formal/afp/Timed_Automata/DBM.thy", - "formal/hol/Help/CASE_REWRITE_TAC.doc", - "formal/afp/Bounded_Deducibility_Security/Trivia.thy", - "formal/afp/Stateful_Protocol_Composition_and_Typing/Labeled_Stateful_Strands.thy", - "formal/afp/Projective_Geometry/Pappus_Property.thy", - "formal/afp/JinjaThreads/MM/DRF_JVM.thy", - "formal/hol/Help/strip_forall.doc", - "formal/afp/Berlekamp_Zassenhaus/Code_Abort_Gcd.thy", - "formal/afp/Word_Lib/Word_32.thy", - "formal/afp/LLL_Basis_Reduction/LLL_Impl.thy", - "formal/hol/Help/ASM_MESON_TAC.doc", - "formal/afp/Dependent_SIFUM_Refinement/Examples/EgHighBranchRevC.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2004/b/p3.lean", - "formal/lean/mathlib/ring_theory/subring/pointwise.lean", - "formal/mizar/bcialg_5.miz", - "formal/afp/Refine_Monadic/Refine_Mono_Prover.thy", - "formal/afp/Collections/ICF/impl/HashSet.thy", - "formal/hol/Help/IMP_REWRITE_TAC.doc", - "formal/lean/mathlib/number_theory/liouville/liouville_constant.lean", - "formal/afp/Simpl/Simpl.thy", - "formal/afp/WebAssembly/Wasm_Checker_Properties.thy", - "formal/lean/mathlib/category_theory/closed/zero.lean", - "formal/lean/mathlib/algebra/module/default.lean", - "formal/lean/mathlib/group_theory/eckmann_hilton.lean", - "formal/afp/WebAssembly/document/root.tex", - "formal/mizar/glib_015.miz", - "formal/lean/mathlib/algebraic_topology/cech_nerve.lean", - 
"formal/afp/Ordinary_Differential_Equations/Numerics/Refine_Reachability_Analysis.thy", - "formal/lean/sphere-eversion/to_mathlib/data/set/finite.lean", - "formal/afp/MiniSail/Nominal-Utils.thy", - "formal/afp/Formal_SSA/Disjoin_Transform.thy", - "formal/afp/Lower_Semicontinuous/document/root.tex", - "formal/lean/mathlib/algebra/category/Group/biproducts.lean", - "formal/afp/Physical_Quantities/SI.thy", - "formal/lean/mathlib/number_theory/legendre_symbol/mul_character.lean", - "formal/mizar/real_1.miz", - "formal/hol/Help/THEN.doc", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/numbertheory/nckeqnm1ckpnm1ckm1.lean", - "formal/afp/JinjaThreads/J/DefAss.thy", - "formal/afp/CoSMeDis/Post_Confidentiality/Independent_Posts/Independent_DYNAMIC_Post_Value_Setup_ISSUER.thy", - "formal/lean/mathlib/order/category/NonemptyFinLinOrd.lean", - "formal/afp/Linear_Recurrences/Partial_Fraction_Decomposition.thy", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_NonInterference_impl.thy", - "formal/afp/ShortestPath/ShortestPath.thy", - "formal/hol/Help/mk_thm.doc", - "formal/afp/Native_Word/Native_Cast_Uint.thy", - "formal/afp/Partial_Order_Reduction/Traces.thy", - "formal/mizar/absred_0.miz", - "formal/afp/Taylor_Models/Experiments.thy", - "formal/afp/Safe_OCL/OCL_Typing.thy", - "formal/lean/mathlib/analysis/convex/strict.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p388.lean", - "formal/afp/Hybrid_Multi_Lane_Spatial_Logic/document/root.tex", - "formal/afp/CAVA_Automata/document/root.tex", - "formal/hol/Help/pp_print_qtype.doc", - "formal/mizar/radix_4.miz", - "formal/afp/Transitive_Models/Renaming.thy", - "formal/afp/Iptables_Semantics/Primitive_Matchers/Common_Primitive_Lemmas.thy", - "formal/afp/MSO_Regex_Equivalence/Pi_Regular_Exp.thy", - "formal/hol/Help/CACHE_CONV.doc", - "formal/afp/Psi_Calculi/Tau_Sim.thy", - "formal/afp/Partial_Order_Reduction/Basics/List_Prefixes.thy", - "formal/afp/Hybrid_Logic/Hybrid_Logic.thy", - "formal/afp/Hermite_Lindemann/More_Multivariate_Polynomial_HLW.thy", - "formal/afp/Propositional_Proof_Systems/document/fig_tran.tex", - "formal/hol/Help/NUM_MAX_CONV.doc", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p640.lean", - "formal/afp/Hahn_Jordan_Decomposition/document/root.tex", - "formal/afp/Transitive_Models/Synthetic_Definition.thy", - "formal/afp/Interpreter_Optimizations/Unboxed_lemmas.thy", - "formal/lean/mathlib/analysis/calculus/local_extr.lean", - "formal/afp/Pi_Calculus/Weak_Late_Bisim_Subst.thy", - "formal/afp/ArrowImpossibilityGS/Thys/Arrow_Utility.thy", - "formal/afp/Abortable_Linearizable_Modules/Sequences.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p85.lean", - "formal/afp/Smith_Normal_Form/SNF_Algorithm_HOL_Analysis.thy", - "formal/afp/Design_Theory/BIBD.thy", - "formal/afp/Coinductive/Examples/Resumption.thy", - "formal/afp/Bondy/document/root.tex", - "formal/afp/Extended_Finite_State_Machine_Inference/heuristics/Group_By.thy", - "formal/mizar/basel_2.miz", - "formal/lean/mathlib/data/list/perm.lean", - "formal/afp/Network_Security_Policy_Verification/TopoS_Vertices.thy", - "formal/afp/Architectural_Design_Patterns/Blackboard.thy", - "formal/hol/Help/typify_universal_set.doc", - "formal/afp/Multi_Party_Computation/Uniform_Sampling.thy", - "formal/afp/Constructive_Cryptography_CM/document/root.tex", - "formal/afp/Nested_Multisets_Ordinals/Unary_PCF.thy", - "formal/afp/Smooth_Manifolds/Cotangent_Space.thy", - "formal/afp/Automatic_Refinement/Lib/Refine_Util.thy", - 
"formal/afp/WOOT_Strong_Eventual_Consistency/document/root.tex", - "formal/afp/AODV/variants/e_all_abcd/E_Aodv.thy", - "formal/lean/mathlib/linear_algebra/eigenspace.lean", - "formal/lean/mathlib/geometry/manifold/smooth_manifold_with_corners.lean", - "formal/afp/Isabelle_Meta_Model/toy_example/embedding/meta_toy/Printer_Toy.thy", - "formal/afp/SATSolverVerification/ConflictAnalysis.thy", - "formal/afp/Relational_Method/Definitions.thy", - "formal/lean/mathlib/ring_theory/adjoin/default.lean", - "formal/afp/Functional-Automata/RegSet_of_nat_DA.thy", - "formal/afp/Relational_Paths/document/root.tex", - "formal/afp/Forcing/Powerset_Axiom.thy", - "formal/afp/Hybrid_Multi_Lane_Spatial_Logic/Move.thy", - "formal/afp/ROBDD/BDD_Code.thy", - "formal/afp/UPF/document/introduction.tex", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_Category.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p254.lean", - "formal/mizar/waybel_0.miz", - "formal/afp/JinjaThreads/Compiler/J1Deadlock.thy", - "formal/afp/IMP_Compiler_Reuse/Compiler.thy", - "formal/afp/Iptables_Semantics/Primitive_Matchers/Ipassmt.thy", - "formal/afp/Possibilistic_Noninterference/Language_Semantics.thy", - "formal/hol/Help/GEN_REWRITE_RULE.doc", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p293.lean", - "formal/afp/Pi_Calculus/Weak_Early_Bisim_Subst_Pres.thy", - "formal/afp/Containers/Compatibility_Containers_Regular_Sets.thy", - "formal/afp/CAVA_LTL_Modelchecker/BoolProgs/BoolProgs_LTL_Conv.thy", - "formal/afp/Resolution_FOL/document/root.tex", - "formal/afp/Collections/document/intro.tex", - "formal/hol/Help/is_ratconst.doc", - "formal/afp/Regular_Tree_Relations/Tree_Automata/Tree_Automata_Impl.thy", - "formal/afp/Extended_Finite_State_Machine_Inference/examples/Drinks_Subsumption.thy", - "formal/afp/Groebner_Bases/Algorithm_Schema_Impl.thy", - "formal/afp/Types_To_Sets_Extension/ETTS/ETTS_Tools/ETTS_Tools.thy", - "formal/afp/Gauss_Jordan/Rank.thy", - "formal/mizar/polynom6.miz", - "formal/mizar/ring_2.miz", - "formal/afp/Prime_Distribution_Elementary/Primorial.thy", - "formal/afp/Polynomials/MPoly_Type_Class_OAlist.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p176.lean", - "formal/afp/SenSocialChoice/SCFs.thy", - "formal/afp/Design_Theory/Group_Divisible_Designs.thy", - "formal/afp/Category/HomFunctors.thy", - "formal/afp/AutoFocus-Stream/AF_Stream_Exec.thy", - "formal/lean/mathlib/analysis/ODE/picard_lindelof.lean", - "formal/afp/Propositional_Proof_Systems/document/fig_sema.tex", - "formal/lean/mathlib/algebraic_topology/fundamental_groupoid/simply_connected.lean", - "formal/afp/VerifyThis2019/lib/Exc_Nres_Monad.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p484.lean", - "formal/afp/Pi_Calculus/Weak_Early_Semantics.thy", - "formal/afp/Decl_Sem_Fun_PL/DenotCongruenceFSet.thy", - "formal/lean/mathlib/algebra/category/Group/images.lean", - "formal/hol/Library/prime.ml", - "formal/afp/Ergodic_Theory/Shift_Operator.thy", - "formal/mizar/msuhom_1.miz", - "formal/lean/mathlib/combinatorics/simple_graph/trails.lean", - "formal/hol/Jordan/tactics_ext2.ml", - "formal/mizar/hessenbe.miz", - "formal/afp/Verified_SAT_Based_AI_Planning/SAT_Plan_Base.thy", - "formal/mizar/finance1.miz", - "formal/lean/mathlib/data/list/range.lean", - "formal/afp/Planarity_Certificates/Planarity/Digraph_Map_Impl.thy", - "formal/lean/liquid/locally_constant/completion_aux.lean", - "formal/hol/Help/STRING_EQ_CONV.doc", - "formal/lean/mathlib/category_theory/limits/shapes/strong_epi.lean", - 
"formal/afp/CakeML_Codegen/Rewriting/Big_Step_Sterm.thy", - "formal/lean/mathlib/algebra/quaternion.lean", - "formal/hol/Model/modelset.ml", - "formal/afp/Possibilistic_Noninterference/After_Execution.thy", - "formal/afp/pGCL/Expectations.thy", - "formal/afp/Amortized_Complexity/Pairing_Heap_List1_Analysis.thy", - "formal/lean/mathlib/category_theory/subobject/well_powered.lean", - "formal/afp/KBPs/SPRViewDet.thy", - "formal/afp/Conditional_Transfer_Rule/Reference_Prerequisites.thy", - "formal/afp/Multi_Party_Computation/OT14.thy", - "formal/lean/mathlib/computability/primrec.lean", - "formal/afp/Network_Security_Policy_Verification/Lib/TopoS_Util.thy", - "formal/afp/Formal_SSA/RBT_Mapping_Exts.thy", - "formal/afp/Ordinary_Differential_Equations/Refinement/Refine_String.thy", - "formal/afp/Pi_Calculus/Strong_Early_Bisim_Subst_SC.thy", - "formal/lean/mathlib/data/finset/functor.lean", - "formal/afp/LinearQuantifierElim/Thys/QElin.thy", - "formal/afp/Core_SC_DOM/common/Core_DOM_Basic_Datatypes.thy", - "formal/afp/Green/SymmetricR2Shapes.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p37.lean", - "formal/afp/Clean/src/Hoare_Clean.thy", - "formal/afp/Propositional_Proof_Systems/SC.thy", - "formal/hol/Tutorial/Changing_proof_style.ml", - "formal/afp/Dirichlet_Series/Divisor_Count.thy", - "formal/afp/JinjaDCI/Common/WellForm.thy", - "formal/lean/mathlib/algebra/lie/cartan_subalgebra.lean", - "formal/afp/Surprise_Paradox/Surprise_Paradox.thy", - "formal/lean/mathlib/analysis/normed/group/add_torsor.lean", - "formal/mizar/diff_2.miz", - "formal/afp/LTL/Disjunctive_Normal_Form.thy", - "formal/afp/Jinja/BV/Effect.thy", - "formal/mizar/clopban4.miz", - "formal/hol/Rqe/poly_ext.ml", - "formal/hol/Help/RIGHT_BETAS.doc", - "formal/afp/Security_Protocol_Refinement/Auth_simple/m2_auth_chan.thy", - "formal/lean/mathlib/data/buffer/basic.lean", - "formal/afp/Transitive-Closure-II/document/root.tex", - "formal/hol/Help/dest_let.doc", - "formal/afp/Concurrent_Revisions/Data.thy", - "formal/afp/Conditional_Simplification/document/root.tex", - "formal/afp/Presburger-Automata/Exec.thy", - "formal/afp/Adaptive_State_Counting/ASC/ASC_LB.thy", - "formal/afp/Collections/Examples/Autoref/ICF_Test.thy", - "formal/lean/mathlib/group_theory/specific_groups/dihedral.lean", - "formal/lean/liquid/Lbar/ext_preamble.lean", - "formal/afp/Linear_Programming/LP_Preliminaries.thy", - "formal/afp/Syntax_Independent_Logic/Pseudo_Term.thy", - "formal/afp/Allen_Calculus/document/root.tex", - "formal/afp/Recursion-Theory-I/PRecFun.thy", - "formal/mizar/anproj_9.miz", - "formal/afp/BNF_CC/Quotient_Preservation.thy", - "formal/hol/Help/preterm_of_term.doc", - "formal/hol/miz3/Samples/forster.ml", - "formal/hol/Help/MATCH_CONV.doc", - "formal/afp/CZH_Foundations/czh_sets/CZH_Sets_IF.thy", - "formal/afp/Clean/src/Optics.thy", - "formal/lean/mathlib/category_theory/monoidal/transport.lean", - "formal/mizar/integr24.miz", - "formal/afp/QR_Decomposition/Least_Squares_Approximation.thy", - "formal/lean/mathlib/category_theory/comma.lean", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/induction/sum2kp1npqsqm1.lean", - "formal/afp/Interpreter_Optimizations/Op_example.thy", - "formal/afp/Cayley_Hamilton/Cayley_Hamilton.thy", - "formal/hol/Help/prove_monotonicity_hyps.doc", - "formal/lean/liquid/for_mathlib/derived/les2.lean", - "formal/afp/Iptables_Semantics/Primitive_Matchers/Conntrack_State.thy", - "formal/afp/AVL-Trees/AVL.thy", - "formal/lean/mathlib/algebra/direct_sum/decomposition.lean", - "formal/hol/Rqe/condense.ml", - 
"formal/afp/Jinja/Common/Exceptions.thy", - "formal/hol/Help/SIMP_CONV.doc", - "formal/afp/Monad_Memo_DP/heap_monad/State_Heap_Misc.thy", - "formal/afp/ClockSynchInst/ICAInstance.thy", - "formal/afp/Polynomial_Interpolation/Missing_Polynomial.thy", - "formal/afp/Routing/IpRoute_Parser.thy", - "formal/mizar/msualg_3.miz", - "formal/afp/Berlekamp_Zassenhaus/Factor_Bound.thy", - "formal/lean/mathlib/linear_algebra/matrix/spectrum.lean", - "formal/afp/Isabelle_Meta_Model/toy_example/document_generated/Design_generated_generated.thy", - "formal/afp/Transitive_Models/Relativization.thy", - "formal/lean/mathlib/data/sum/interval.lean", - "formal/lean/mathlib/ring_theory/witt_vector/witt_polynomial.lean", - "formal/lean/mathlib/deprecated/group.lean", - "formal/afp/JinjaThreads/J/JHeap.thy", - "formal/afp/Core_DOM/common/classes/ObjectClass.thy", - "formal/hol/100/sqrt.ml", - "formal/lean/liquid/for_mathlib/AddCommGroup_instances.lean", - "formal/lean/liquid/condensed/condensify.lean", - "formal/lean/mathlib/topology/paracompact.lean", - "formal/afp/Knuth_Bendix_Order/Lexicographic_Extension.thy", - "formal/hol/Help/mk_binder.doc", - "formal/afp/Green/Derivs.thy", - "formal/afp/Clean/src/Clean_Main.thy", - "formal/afp/Noninterference_Ipurge_Unwinding/DeterministicProcesses.thy", - "formal/afp/MSO_Regex_Equivalence/M2L.thy", - "formal/afp/Dirichlet_Series/Dirichlet_Misc.thy", - "formal/lean/lftcm/hints/category_theory/exercise3/hint4.lean", - "formal/hol/Help/install_user_printer.doc", - "formal/lean/mathlib/category_theory/category/preorder.lean", - "formal/afp/Relational_Method/document/root.tex", - "formal/afp/Pairing_Heap/document/root.tex", - "formal/lean/lftcm/exercises_sources/thursday/category_theory/exercise3.lean", - "formal/lean/mathlib/measure_theory/integral/integral_eq_improper.lean", - "formal/afp/List_Interleaving/ListInterleaving.thy", - "formal/afp/Incompleteness/Goedel_II.thy", - "formal/afp/Lowe_Ontological_Argument/LoweOntologicalArgument_1.thy", - "formal/afp/Kleene_Algebra/Kleene_Algebra.thy", - "formal/mizar/msualg_2.miz", - "formal/lean/mathlib/algebra/order/hom/ring.lean", - "formal/mizar/xfamily.miz", - "formal/afp/Quantales/Quantale_Modules.thy", - "formal/afp/Signature_Groebner/Signature_Examples.thy", - "formal/lean/mathlib/algebra/group/conj.lean", - "formal/afp/Cubic_Quartic_Equations/Complex_Roots.thy", - "formal/afp/FLP/FLPExistingSystem.thy", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1984/p7.lean", - "formal/lean/mathlib/algebra/module/localized_module.lean", - "formal/lean/mathlib/category_theory/bicategory/basic.lean", - "formal/mizar/pasch.miz", - "formal/lean/mathzoo/mathzoo/olympiads/imo/2001/p6.lean", - "formal/afp/JinjaDCI/J/Equivalence.thy", - "formal/hol/Help/dest_thm.doc", - "formal/hol/Help/ISPEC.doc", - "formal/mizar/jordan5b.miz", - "formal/hol/Help/union_prime.doc", - "formal/afp/Metalogic_ProofChecker/Sorts.thy", - "formal/hol/Help/bty.doc", - "formal/afp/Flow_Networks/Lib/Fofu_Abs_Base.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p24.lean", - "formal/lean/mathlib/analysis/locally_convex/balanced_core_hull.lean", - "formal/afp/Dependent_SIFUM_Refinement/document/root.tex", - "formal/afp/JinjaDCI/BV/StartProg.thy", - "formal/mizar/dualsp05.miz", - "formal/afp/Key_Agreement_Strong_Adversaries/pfslvl3.thy", - "formal/afp/Stone_Relation_Algebras/Semirings.thy", - "formal/afp/Encodability_Process_Calculi/Relations.thy", - "formal/lean/mathlib/data/complex/is_R_or_C.lean", - "formal/afp/Complex_Geometry/Circlines_Angle.thy", - 
"formal/afp/Incompleteness/Pf_Predicates.thy", - "formal/lean/mathlib/measure_theory/function/ae_measurable_sequence.lean", - "formal/afp/AODV/Loop_Freedom.thy", - "formal/lean/mathlib/number_theory/number_field.lean", - "formal/afp/Adaptive_State_Counting/ASC/ASC_Hoare.thy", - "formal/mizar/matrix10.miz", - "formal/afp/Posix-Lexing/document/root.tex", - "formal/afp/PCF/Logical_Relations.thy", - "formal/lean/mathlib/category_theory/structured_arrow.lean", - "formal/afp/Transition_Systems_and_Automata/Automata/NBA/NBA.thy", - "formal/afp/Heard_Of/HOModel.thy", - "formal/lean/mathlib/algebra/category/Semigroup/basic.lean", - "formal/hol/Help/list_mk_exists.doc", - "formal/afp/Slicing/StaticIntra/Slice.thy", - "formal/lean/liquid/for_mathlib/is_quasi_iso.lean", - "formal/afp/Affine_Arithmetic/Polygon.thy", - "formal/afp/CoCon/Safety_Properties.thy", - "formal/afp/Applicative_Lifting/Applicative_Monoid.thy", - "formal/lean/mathlib/analysis/calculus/iterated_deriv.lean", - "formal/afp/RSAPSS/SHA1Padding.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p618.lean", - "formal/mizar/zmatrlin.miz", - "formal/lean/mathlib/data/finsupp/pointwise.lean", - "formal/lean/mathlib/data/qpf/multivariate/constructions/comp.lean", - "formal/mizar/sysrel.miz", - "formal/lean/mathlib/number_theory/cyclotomic/basic.lean", - "formal/afp/Lambda_Free_RPOs/Infinite_Chain.thy", - "formal/hol/Rqe/inferpsign.ml", - "formal/mizar/petri_3.miz", - "formal/afp/Simplicial_complexes_and_boolean_functions/MkIfex.thy", - "formal/afp/DFS_Framework/Examples/Reachable_Nodes.thy", - "formal/afp/Groebner_Macaulay/Poly_Fun.thy", - "formal/lean/mathlib/data/num/bitwise.lean", - "formal/lean/mathlib/ring_theory/ring_hom/finite.lean", - "formal/afp/Jordan_Normal_Form/Determinant_Impl.thy", - "formal/afp/Stochastic_Matrices/Stochastic_Matrix_Markov_Models.thy", - "formal/coq/analysis/altreals/distr.v", - "formal/afp/Real_Time_Deque/Stack.thy", - "formal/lean/mathlib/category_theory/sites/compatible_plus.lean", - "formal/hol/Help/search.doc", - "formal/mizar/seq_1.miz", - "formal/afp/Category2/Universe.thy", - "formal/afp/CakeML/Evaluate_Termination.thy", - "formal/afp/UPF_Firewall/NAT/NAT.thy", - "formal/afp/Applicative_Lifting/Applicative_State.thy", - "formal/afp/Public_Announcement_Logic/PAL.thy", - "formal/lean/mathlib/order/hom/complete_lattice.lean", - "formal/lean/mathlib/algebra/free.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p353.lean", - "formal/hol/Help/basic_prover.doc", - "formal/afp/InfPathElimination/Store.thy", - "formal/afp/CZH_Foundations/czh_sets/CZH_Sets_BRelations.thy", - "formal/afp/Combinatorics_Words/CoWAll.thy", - "formal/mizar/fdiff_6.miz", - "formal/afp/NormByEval/document/root.tex", - "formal/afp/Applicative_Lifting/Applicative_Stream.thy", - "formal/afp/Call_Arity/CoCallAritySig.thy", - "formal/lean/mathlib/analysis/special_functions/trigonometric/inverse_deriv.lean", - "formal/hol/100/thales.ml", - "formal/afp/WebAssembly/Wasm_Type_Abs.thy", - "formal/afp/Isabelle_C/README.thy", - "formal/lean/mathlib/linear_algebra/basis.lean", - "formal/lean/mathlib/algebra/star/pointwise.lean", - "formal/afp/Boolean_Expression_Checkers/Boolean_Expression_Checkers.thy", - "formal/mizar/domain_1.miz", - "formal/lean/mathlib/category_theory/monad/types.lean", - "formal/lean/mathlib/deprecated/subring.lean", - "formal/lean/mathlib/geometry/manifold/diffeomorph.lean", - "formal/afp/Van_der_Waerden/Van_der_Waerden.thy", - 
"formal/afp/Automated_Stateful_Protocol_Verification/trac/trac_term.thy", - "formal/lean/mathlib/algebraic_topology/fundamental_groupoid/punit.lean", - "formal/afp/Weight_Balanced_Trees/document/root.tex", - "formal/lean/mathlib/category_theory/monoidal/of_chosen_finite_products.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p530.lean", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_Discrete.thy", - "formal/mizar/number01.miz", - "formal/afp/Monad_Memo_DP/transform/Transform_Cmd.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p430.lean", - "formal/afp/Foundation_of_geometry/document/root.tex", - "formal/afp/Core_DOM/common/Core_DOM_Functions.thy", - "formal/mizar/card_4.miz", - "formal/hol/Help/dest_abs.doc", - "formal/afp/GraphMarkingIBP/SetMark.thy", - "formal/afp/Interval_Arithmetic_Word32/Interpreter.thy", - "formal/afp/Decl_Sem_Fun_PL/InterTypeSystem.thy", - "formal/hol/Help/SPEC_VAR.doc", - "formal/afp/Linear_Inequalities/Normal_Vector.thy", - "formal/lean/mathlib/field_theory/separable.lean", - "formal/afp/Propositional_Proof_Systems/CNF_To_Formula.thy", - "formal/afp/Jordan_Normal_Form/Gram_Schmidt.thy", - "formal/afp/Functional-Automata/AutoMaxChop.thy", - "formal/afp/Multi_Party_Computation/Number_Theory_Aux.thy", - "formal/lean/mathlib/logic/lemmas.lean", - "formal/afp/Launchbury/HOLCF-Utils.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p466.lean", - "formal/afp/Parity_Game/UniformStrategy.thy", - "formal/lean/mathlib/algebra/ring/opposite.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p107.lean", - "formal/afp/Ergodic_Theory/ME_Library_Complement.thy", - "formal/hol/Complex/fundamental.ml", - "formal/afp/Shadow_DOM/tests/Shadow_DOM_Node_removeChild.thy", - "formal/lean/mathlib/group_theory/submonoid/center.lean", - "formal/afp/Sturm_Sequences/Sturm_Method.thy", - "formal/lean/mathlib/category_theory/limits/shapes/wide_pullbacks.lean", - "formal/afp/Formal_Puiseux_Series/FPS_Hensel.thy", - "formal/mizar/bilinear.miz", - "formal/afp/GaleStewart_Games/MoreCoinductiveList2.thy", - "formal/afp/Ergodic_Theory/Asymptotic_Density.thy", - "formal/afp/Card_Partitions/Card_Partitions.thy", - "formal/afp/Tarskis_Geometry/Euclid_Tarski.thy", - "formal/hol/Help/INT_ABS_CONV.doc", - "formal/lean/mathlib/algebra/star/free.lean", - "formal/lean/mathlib/category_theory/bicategory/locally_discrete.lean", - "formal/afp/Poincare_Bendixson/document/root.tex", - "formal/afp/Collections/GenCF/Gen/Gen_Map2Set.thy", - "formal/afp/Parity_Game/MoreCoinductiveList.thy", - "formal/afp/Polynomial_Factorization/Gcd_Rat_Poly.thy", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1988/p3.lean", - "formal/afp/JiveDataStoreModel/Isabelle/JavaType.thy", - "formal/lean/mathlib/topology/homotopy/path.lean", - "formal/lean/mathlib/measure_theory/measure/ae_measurable.lean", - "formal/hol/Rqe/num_calc_simp.ml", - "formal/afp/UPF_Firewall/Examples/Voice_over_IP/Voice_over_IP.thy", - "formal/afp/Formal_SSA/WhileGraphSSA.thy", - "formal/afp/Collections/Examples/Autoref/Succ_Graph.thy", - "formal/hol/Help/flat.doc", - "formal/afp/Collections/ICF/spec/PrioUniqueSpec.thy", - "formal/lean/mathlib/category_theory/bicategory/coherence.lean", - "formal/coq/abel/xmathcomp/map_gal.v", - "formal/hol/Help/setify.doc", - "formal/afp/Separation_Logic_Imperative_HOL/Examples/From_List_GA.thy", - "formal/afp/Van_Emde_Boas_Trees/VEBT_DeleteCorrectness.thy", - "formal/mizar/rsspace.miz", - "formal/afp/Hoare_Time/SepLogAdd/Sep_Algebra_Add.thy", - 
"formal/afp/Jinja/DFA/Err.thy", - "formal/afp/Partial_Order_Reduction/Extensions/Relation_Extensions.thy", - "formal/mizar/jordan1e.miz", - "formal/lean/mathlib/representation_theory/Rep.lean", - "formal/afp/UPF/Monads.thy", - "formal/afp/Featherweight_OCL/collection_types/UML_Set.thy", - "formal/lean/mathlib/topology/algebra/order/monotone_continuity.lean", - "formal/lean/mathlib/order/succ_pred/interval_succ.lean", - "formal/lean/mathlib/category_theory/sites/sheaf_of_types.lean", - "formal/hol/RichterHilbertAxiomGeometry/TarskiAxiomGeometry_read.ml", - "formal/afp/CoCon/Decision_Confidentiality/Decision_Intro.thy", - "formal/lean/mathlib/algebra/category/Group/epi_mono.lean", - "formal/afp/AODV/variants/a_norreqid/A_Quality_Increases.thy", - "formal/afp/Rewriting_Z/Z.thy", - "formal/afp/BDD/General.thy", - "formal/mizar/vectsp_1.miz", - "formal/lean/mathlib/algebraic_geometry/locally_ringed_space.lean", - "formal/afp/Graph_Theory/Vertex_Walk.thy", - "formal/afp/HOLCF-Prelude/Data_Function.thy", - "formal/afp/LambdaMu/TypePreservation.thy", - "formal/lean/mathlib/topology/algebra/continuous_monoid_hom.lean", - "formal/afp/Free-Groups/FreeGroups.thy", - "formal/coq/math-comp/test_suite/test_regular_conv.v", - "formal/afp/pGCL/document/root.tex", - "formal/lean/mathlib/analysis/calculus/lagrange_multipliers.lean", - "formal/hol/Help/basic_rectype_net.doc", - "formal/lean/mathlib/topology/metric_space/kuratowski.lean", - "formal/afp/Abortable_Linearizable_Modules/Idempotence.thy", - "formal/afp/Generic_Deriving/Derive.thy", - "formal/afp/Grothendieck_Schemes/Scheme.thy", - "formal/lean/mathlib/algebra/category/Module/colimits.lean", - "formal/hol/Help/ONCE_ASM_SIMP_TAC.doc", - "formal/afp/Complex_Geometry/Canonical_Angle.thy", - "formal/lean/mathlib/topology/metric_space/isometry.lean", - "formal/lean/liquid/Lbar/ext.lean", - "formal/afp/Ordinary_Differential_Equations/Ex/Examples_One_Step_Method.thy", - "formal/hol/Help/explode.doc", - "formal/afp/Prim_Dijkstra_Simple/Undirected_Graph_Impl.thy", - "formal/afp/Stochastic_Matrices/Stochastic_Matrix.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1977/p6.lean", - "formal/mizar/facirc_2.miz", - "formal/afp/Goedel_HFSet_Semanticless/Coding_Predicates.thy", - "formal/lean/mathlib/control/traversable/instances.lean", - "formal/afp/Ribbon_Proofs/Proofchain.thy", - "formal/lean/mathlib/model_theory/bundled.lean", - "formal/hol/Help/REAL_POLY_NEG_CONV.doc", - "formal/mizar/bhsp_3.miz", - "formal/afp/PCF/document/root.tex", - "formal/lean/mathlib/topology/metric_space/hausdorff_distance.lean", - "formal/afp/EdmondsKarp_Maxflow/Edka_Checked_Impl.thy", - "formal/lean/liquid/for_mathlib/ab5.lean", - "formal/afp/JinjaThreads/Framework/FWInterrupt.thy", - "formal/afp/Van_Emde_Boas_Trees/Time_Reasoning/Time_Reasoning.thy", - "formal/lean/lftcm/exercises_sources/thursday/groups_rings_fields.lean", - "formal/afp/Abs_Int_ITP2012/Abs_Int1_parity.thy", - "formal/lean/mathlib/group_theory/presented_group.lean", - "formal/afp/Card_Number_Partitions/Additions_to_Main.thy", - "formal/afp/Van_Emde_Boas_Trees/Imperative_HOL_Time/Heap_Time_Monad.thy", - "formal/afp/Perron_Frobenius/Spectral_Radius_Theory.thy", - "formal/afp/Ordered_Resolution_Prover/Inference_System.thy", - "formal/afp/CYK/CYK.thy", - "formal/lean/liquid/breen_deligne/apply_Pow.lean", - "formal/lean/mathlib/topology/algebra/order/intermediate_value.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p119.lean", - "formal/lean/mathlib/data/tree.lean", - 
"formal/afp/UPF_Firewall/PacketFilter/ProtocolPortCombinators.thy", - "formal/afp/CZH_Foundations/czh_sets/CZH_Sets_Sets.thy", - "formal/lean/mathlib/data/int/order.lean", - "formal/afp/Sqrt_Babylonian/NthRoot_Impl.thy", - "formal/afp/Jinja/JVM/JVMDefensive.thy", - "formal/hol/Multivariate/tarski.ml", - "formal/lean/mathlib/algebra/group/units.lean", - "formal/afp/Probabilistic_Noninterference/document/root.tex", - "formal/afp/Posix-Lexing/Simplifying.thy", - "formal/mizar/scmpds_1.miz", - "formal/lean/mathlib/order/complete_lattice_intervals.lean", - "formal/afp/Auto2_Imperative_HOL/Functional/Arrays_Ex.thy", - "formal/afp/Show/Old_Datatype/Old_Show_Examples.thy", - "formal/lean/mathlib/data/sigma/order.lean", - "formal/afp/Independence_CH/Proper_Extension.thy", - "formal/afp/Euler_Partition/Euler_Partition.thy", - "formal/lean/sphere-eversion/to_mathlib/analysis/calculus.lean", - "formal/coq/analysis/altreals/xfinmap.v", - "formal/afp/WHATandWHERE_Security/MWLs.thy", - "formal/afp/Polynomials/Polynomials.thy", - "formal/mizar/mesfunc9.miz", - "formal/afp/pGCL/Sublinearity.thy", - "formal/lean/perfectoid/for_mathlib/data/set/basic.lean", - "formal/lean/mathlib/category_theory/fin_category.lean", - "formal/lean/mathlib/linear_algebra/clifford_algebra/default.lean", - "formal/afp/Triangle/document/root.tex", - "formal/afp/UPF/UPFCore.thy", - "formal/afp/Groebner_Macaulay/Cone_Decomposition.thy", - "formal/lean/perfectoid/sheaves/sheaf_of_topological_rings.lean", - "formal/lean/mathlib/algebra/category/BoolRing.lean", - "formal/afp/Planarity_Certificates/Verification/Check_Non_Planarity_Verification.thy", - "formal/hol/Help/WEAK_DNF_CONV.doc", - "formal/lean/mathlib/combinatorics/configuration.lean", - "formal/afp/Circus/Relations.thy", - "formal/afp/Refine_Monadic/examples/Examples.thy", - "formal/afp/Jinja/Compiler/Compiler2.thy", - "formal/afp/Featherweight_OCL/basic_types/UML_Integer.thy", - "formal/lean/mathlib/algebra/category/Module/simple.lean", - "formal/lean/mathlib/linear_algebra/dual.lean", - "formal/lean/mathlib/data/rbtree/insert.lean", - "formal/afp/Weighted_Path_Order/Multiset_Extension_Pair.thy", - "formal/lean/mathlib/probability/probability_mass_function/monad.lean", - "formal/afp/LocalLexing/TheoremD9.thy", - "formal/afp/CZH_Foundations/czh_semicategories/CZH_SMC_Set.thy", - "formal/afp/IsaNet/instances/EPIC_L2_SA.thy", - "formal/mizar/zfmodel1.miz", - "formal/mizar/yellow_5.miz", - "formal/afp/Architectural_Design_Patterns/Auxiliary.thy", - "formal/afp/Shadow_SC_DOM/classes/ShadowRootClass.thy", - "formal/afp/Forcing/Internal_ZFC_Axioms.thy", - "formal/afp/Featherweight_OCL/examples/Employee_Model/Design/Design_OCL.thy", - "formal/lean/mathlib/analysis/normed/group/hom.lean", - "formal/afp/Collections/Lib/HashCode.thy", - "formal/afp/CakeML_Codegen/Preproc/Eval_Class.thy", - "formal/lean/mathlib/category_theory/limits/unit.lean", - "formal/mizar/fsm_2.miz", - "formal/afp/JinjaThreads/DFA/Semilattices.thy", - "formal/afp/Berlekamp_Zassenhaus/Mahler_Measure.thy", - "formal/afp/Nat-Interval-Logic/IL_TemporalOperators.thy", - "formal/mizar/int_6.miz", - "formal/afp/List-Infinite/CommonSet/SetInterval2.thy", - "formal/lean/liquid/pseudo_normed_group/homotopy.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p323.lean", - "formal/lean/mathlib/computability/regular_expressions.lean", - "formal/afp/CZH_Foundations/czh_semicategories/CZH_SMC_SemiCAT.thy", - "formal/afp/Launchbury/AbstractDenotational.thy", - "formal/lean/mathlib/data/num/lemmas.lean", - 
"formal/afp/GPU_Kernel_PL/KPL_state.thy", - "formal/afp/Call_Arity/CoCallGraph.thy", - "formal/lean/mathlib/ring_theory/polynomial/default.lean", - "formal/afp/Formal_SSA/FormalSSA_Misc.thy", - "formal/hol/Help/define_type_raw.doc", - "formal/setmm/set.mm", - "formal/mizar/isocat_2.miz", - "formal/afp/JinjaThreads/JVM/JVMExec.thy", - "formal/hol/100/two_squares.ml", - "formal/afp/Functional-Automata/NAe.thy", - "formal/lean/mathlib/ring_theory/polynomial_algebra.lean", - "formal/afp/VYDRA_MDL/Trace.thy", - "formal/afp/Encodability_Process_Calculi/SuccessSensitiveness.thy", - "formal/mizar/moebius2.miz", - "formal/lean/sphere-eversion/to_mathlib/topology/nhds_set.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p513.lean", - "formal/afp/DPT-SAT-Solver/DPT_SAT_Solver.thy", - "formal/lean/liquid/rescale/FiltrationPow.lean", - "formal/afp/Kruskal/Graph_Definition_Aux.thy", - "formal/afp/Randomised_Social_Choice/Social_Decision_Schemes.thy", - "formal/afp/JinjaThreads/DFA/SemilatAlg.thy", - "formal/mizar/filter_2.miz", - "formal/lean/mathlib/algebra/gcd_monoid/integrally_closed.lean", - "formal/afp/Transcendence_Series_Hancl_Rucki/Transcendence_Series.thy", - "formal/afp/Datatype_Order_Generator/Derive_Examples.thy", - "formal/afp/Weighted_Arithmetic_Geometric_Mean/Weighted_Arithmetic_Geometric_Mean.thy", - "formal/lean/mathlib/category_theory/monoidal/linear.lean", - "formal/afp/Gromov_Hyperbolicity/Metric_Completion.thy", - "formal/lean/mathlib/category_theory/eq_to_hom.lean", - "formal/afp/Hidden_Markov_Models/Auxiliary.thy", - "formal/hol/Library/agm.ml", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p342.lean", - "formal/afp/Myhill-Nerode/Folds.thy", - "formal/afp/CAVA_LTL_Modelchecker/Nested_DFS/All_Of_Nested_DFS.thy", - "formal/afp/Algebraic_Numbers/document/root.tex", - "formal/afp/Van_der_Waerden/Digits.thy", - "formal/afp/Extended_Finite_State_Machine_Inference/code-targets/Code_Target_List.thy", - "formal/afp/Word_Lib/Generic_set_bit.thy", - "formal/coq/math-comp/test_suite/imset2_finset.v", - "formal/afp/Shadow_DOM/tests/Shadow_DOM_BaseTest.thy", - "formal/afp/LOFT/document/chap3.tex", - "formal/lean/liquid/for_mathlib/embed_preserves_colimits.lean", - "formal/lean/mathlib/category_theory/sigma/basic.lean", - "formal/afp/Virtual_Substitution/QuadraticCase.thy", - "formal/lean/perfectoid/for_mathlib/algebra.lean", - "formal/afp/Probabilistic_While/document/root.tex", - "formal/lean/mathzoo/mathzoo/olympiads/imo/2021/p1.lean", - "formal/afp/Propositional_Proof_Systems/LSC_Resolution.thy", - "formal/afp/Ordinary_Differential_Equations/Numerics/Concrete_Reachability_Analysis.thy", - "formal/lean/mathlib/ring_theory/valuation/basic.lean", - "formal/hol/Help/mk_realintconst.doc", - "formal/hol/Help/NUM_ADD_CONV.doc", - "formal/lean/mathlib/measure_theory/card_measurable_space.lean", - "formal/lean/liquid/for_mathlib/is_locally_constant.lean", - "formal/afp/Collections/Examples/Examples_Chapter.thy", - "formal/afp/MDP-Algorithms/Policy_Iteration.thy", - "formal/lean/mathlib/ring_theory/witt_vector/verschiebung.lean", - "formal/hol/Help/K.doc", - "formal/lean/mathlib/topology/algebra/order/monotone_convergence.lean", - "formal/lean/mathlib/category_theory/abelian/functor_category.lean", - "formal/afp/Promela/Promela.thy", - "formal/mizar/conaffm.miz", - "formal/lean/liquid/rescale/Tinv.lean", - "formal/afp/CZH_Foundations/czh_sets/CZH_Sets_MIF.thy", - "formal/lean/lftcm/hints/category_theory/exercise1/hint2.lean", - "formal/lean/mathlib/order/zorn.lean", - 
"formal/lean/mathlib/algebra/category/Group/Z_Module_equivalence.lean", - "formal/mizar/isocat_1.miz", - "formal/afp/UTP/utp/utp_meta_subst.thy", - "formal/afp/CakeML/Big_Step_Unclocked.thy", - "formal/lean/mathlib/field_theory/polynomial_galois_group.lean", - "formal/afp/ConcurrentGC/concrete/Concrete_heap.thy", - "formal/afp/Complx/lib/Cache_Tactics.thy", - "formal/hol/Help/is_conj.doc", - "formal/mizar/comseq_3.miz", - "formal/lean/liquid/condensed/sheafification_mono.lean", - "formal/afp/PLM/document/external.tex", - "formal/afp/Regular_Tree_Relations/GTT.thy", - "formal/hol/Help/ALL_CONV.doc", - "formal/afp/ArrowImpossibilityGS/Thys/GS.thy", - "formal/afp/Regular_Tree_Relations/Tree_Automata/Myhill_Nerode.thy", - "formal/afp/Differential_Dynamic_Logic/Denotational_Semantics.thy", - "formal/mizar/borsuk_6.miz", - "formal/mizar/pralg_1.miz", - "formal/afp/FocusStreamsCaseStudies/SteamBoiler.thy", - "formal/afp/Factored_Transition_System_Bounding/ListUtils.thy", - "formal/afp/Collections/ICF/tools/Record_Intf.thy", - "formal/lean/mathlib/category_theory/skeletal.lean", - "formal/afp/Inductive_Confidentiality/DolevYao/NS_Public_Bad.thy", - "formal/lean/mathlib/geometry/euclidean/oriented_angle.lean", - "formal/mizar/gtarski1.miz", - "formal/hol/Help/splitlist.doc", - "formal/lean/mathlib/data/multiset/fintype.lean", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/numbertheory/notequiv2i2jasqbsqdiv8.lean", - "formal/afp/JinjaThreads/Common/Common_Main.thy", - "formal/hol/Help/UNDISCH_TAC.doc", - "formal/afp/Laplace_Transform/Existence.thy", - "formal/hol/Jordan/tactics_ext.ml", - "formal/lean/mathlib/category_theory/essentially_small.lean", - "formal/afp/UTP/utp/utp_pred.thy", - "formal/mizar/xxreal_0.miz", - "formal/hol/Help/ETA_CONV.doc", - "formal/afp/Linear_Inequalities/Missing_Matrix.thy", - "formal/mizar/flexary1.miz", - "formal/afp/Collections/Userguides/Refine_Monadic_Userguide.thy", - "formal/lean/mathlib/combinatorics/set_family/kleitman.lean", - "formal/afp/CoSMed/Post_Confidentiality/Post.thy", - "formal/afp/Randomised_Social_Choice/Random_Serial_Dictatorship.thy", - "formal/afp/WOOT_Strong_Eventual_Consistency/Data.thy", - "formal/afp/Adaptive_State_Counting/FSM/FSM.thy", - "formal/afp/Interpreter_Optimizations/Std_to_Inca_simulation.thy", - "formal/afp/Sturm_Sequences/Sturm.thy", - "formal/hol/Help/strip_comb.doc", - "formal/hol/Help/AP_TERM.doc", - "formal/afp/Featherweight_OCL/basic_types/UML_Boolean.thy", - "formal/afp/Iptables_Semantics/Datatype_Selectors.thy", - "formal/afp/Category3/CartesianCategory.thy", - "formal/hol/Help/REAL_RAT_NEG_CONV.doc", - "formal/afp/CZH_Foundations/czh_sets/CZH_Sets_NOP.thy", - "formal/afp/Topological_Semantics/document/root.tex", - "formal/lean/mathlib/category_theory/abelian/injective_resolution.lean", - "formal/afp/CAVA_LTL_Modelchecker/SM/Impl/SM_Ample_Impl.thy", - "formal/lean/mathlib/algebra/hom/group_action.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p462.lean", - "formal/lean/mathlib/algebraic_topology/fundamental_groupoid/fundamental_group.lean", - "formal/lean/mathlib/order/pfilter.lean", - "formal/afp/Van_Emde_Boas_Trees/VEBT_Member.thy", - "formal/afp/Modular_Assembly_Kit_Security/Verification/Compositionality/CompositionBase.thy", - "formal/afp/Word_Lib/Rsplit.thy", - "formal/lean/liquid/condensed/coproducts.lean", - "formal/afp/Van_Emde_Boas_Trees/Separation_Logic_Imperative_HOL/Automation.thy", - "formal/afp/Refine_Imperative_HOL/benchmarks/Sepref_Chapter_Benchmarks.thy", - 
"formal/afp/JiveDataStoreModel/Isa_Counter/TypeIds.thy", - "formal/afp/LLL_Basis_Reduction/Norms.thy", - "formal/afp/BD_Security_Compositional/Transporting_Security.thy", - "formal/afp/Stateful_Protocol_Composition_and_Typing/Stateful_Compositionality.thy", - "formal/afp/IsaNet/instances/ICING.thy", - "formal/hol/Help/b.doc", - "formal/afp/Possibilistic_Noninterference/document/root.tex", - "formal/afp/Launchbury/document/map.tex", - "formal/lean/mathlib/analysis/special_functions/log/base.lean", - "formal/mizar/finance6.miz", - "formal/afp/CZH_Universal_Constructions/czh_ucategories/CZH_UCAT_Limit.thy", - "formal/afp/Poincare_Bendixson/Periodic_Orbit.thy", - "formal/lean/mathlib/topology/algebra/affine.lean", - "formal/mizar/euclid13.miz", - "formal/afp/Fishers_Inequality/Rank_Argument_General.thy", - "formal/afp/Word_Lib/Machine_Word_32.thy", - "formal/afp/Stateful_Protocol_Composition_and_Typing/Parallel_Compositionality.thy", - "formal/mizar/integra8.miz", - "formal/afp/Amortized_Complexity/Splay_Heap_Analysis.thy", - "formal/mizar/finance3.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p81.lean", - "formal/afp/SimplifiedOntologicalArgument/BaseDefs.thy", - "formal/afp/Buffons_Needle/Buffons_Needle.thy", - "formal/afp/Psi_Calculi/Tau_Laws_No_Weak.thy", - "formal/afp/Pi_Calculus/Strong_Late_Axiomatisation.thy", - "formal/lean/mathlib/algebra/free_algebra.lean", - "formal/afp/Decl_Sem_Fun_PL/DeclSemAsDenotFSet.thy", - "formal/afp/Isabelle_C/C11-FrontEnd/C_Main.thy", - "formal/afp/CakeML_Codegen/Terms/Pterm.thy", - "formal/lean/mathlib/topology/continuous_function/ordered.lean", - "formal/afp/CakeML_Codegen/Doc/Doc_Rewriting.thy", - "formal/hol/Help/NO_THEN.doc", - "formal/lean/mathlib/algebra/module/injective.lean", - "formal/lean/mathlib/measure_theory/group/fundamental_domain.lean", - "formal/afp/DFS_Framework/Examples/DFS_Chapter_Examples.thy", - "formal/afp/TLA/Semantics.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/sum1onsqrt2to1onsqrt10000lt198.lean", - "formal/afp/Verified_SAT_Based_AI_Planning/AST_SAS_Plus_Equivalence.thy", - "formal/afp/CryptoBasedCompositionalProperties/document/root.tex", - "formal/afp/Call_Arity/AList-Utils-HOLCF.thy", - "formal/afp/Trie/document/root.tex", - "formal/mizar/toprns_1.miz", - "formal/lean/mathlib/linear_algebra/affine_space/finite_dimensional.lean", - "formal/afp/Poincare_Bendixson/Analysis_Misc.thy", - "formal/hol/100/subsequence.ml", - "formal/hol/100/inclusion_exclusion.ml", - "formal/afp/Automatic_Refinement/Lib/Anti_Unification.thy", - "formal/lean/mathlib/analysis/specific_limits/basic.lean", - "formal/mizar/trees_a.miz", - "formal/mizar/matrix_5.miz", - "formal/afp/Psi_Calculi/Weak_Bisim_Struct_Cong.thy", - "formal/afp/Sigma_Commit_Crypto/Discrete_Log.thy", - "formal/afp/Twelvefold_Way/Card_Bijections.thy", - "formal/afp/Goedel_HFSet_Semanticless/Pf_Predicates.thy", - "formal/afp/Refine_Imperative_HOL/IICF/Impl/Heaps/IICF_Impl_Heap.thy", - "formal/hol/Help/pp_print_type.doc", - "formal/coq/math-comp/solvable/pgroup.v", - "formal/afp/SequentInvertibility/document/root.tex", - "formal/mizar/jordan1f.miz", - "formal/afp/Van_Emde_Boas_Trees/VEBT_Uniqueness.thy", - "formal/afp/Weighted_Path_Order/KBO_Transformation.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p30.lean", - "formal/afp/Pluennecke_Ruzsa_Inequality/document/root.tex", - "formal/afp/Relational_Method/Possibility.thy", - "formal/afp/Simplicial_complexes_and_boolean_functions/Bij_betw_simplicial_complex_bool_func.thy", - 
"formal/afp/CISC-Kernel/step/Step_configuration.thy", - "formal/afp/Binding_Syntax_Theory/QuasiTerms_Environments_Substitution.thy", - "formal/hol/Mizarlight/duality.ml", - "formal/lean/mathlib/ring_theory/noetherian.lean", - "formal/afp/JinjaThreads/JVM/JVMHeap.thy", - "formal/afp/Neumann_Morgenstern_Utility/Expected_Utility.thy", - "formal/afp/DFS_Framework/Param_DFS.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2021/b/p3.lean", - "formal/afp/Collections/Lib/Sorted_List_Operations.thy", - "formal/afp/CakeML/generated/Lem_tuple.thy", - "formal/afp/Constructive_Cryptography_CM/Goodies.thy", - "formal/afp/CoCon/Paper_Confidentiality/Paper_All.thy", - "formal/afp/Differential_Dynamic_Logic/USubst_Lemma.thy", - "formal/afp/Differential_Game_Logic/Ids.thy", - "formal/afp/CoCon/Reviewer_Assignment_Confidentiality/Reviewer_Assignment_NCPC_Aut.thy", - "formal/lean/mathlib/order/rel_iso.lean", - "formal/afp/Pluennecke_Ruzsa_Inequality/Pluennecke_Ruzsa_Inequality.thy", - "formal/afp/Minkowskis_Theorem/document/root.tex", - "formal/lean/mathlib/ring_theory/euclidean_domain.lean", - "formal/afp/Tycon/Error_Monad.thy", - "formal/afp/Poincare_Disc/Poincare_Circles.thy", - "formal/afp/Laplace_Transform/document/root.tex", - "formal/mizar/complex2.miz", - "formal/mizar/fvsum_1.miz", - "formal/mizar/funcop_1.miz", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_DomainHierarchyNG_impl.thy", - "formal/afp/Complex_Bounded_Operators/Complex_Euclidean_Space0.thy", - "formal/hol/Examples/rectypes.ml", - "formal/afp/Projective_Measurements/Projective_Measurements.thy", - "formal/mizar/scmfsa8a.miz", - "formal/mizar/group_5.miz", - "formal/hol/Help/cases.doc", - "formal/afp/Abortable_Linearizable_Modules/document/root.tex", - "formal/afp/Key_Agreement_Strong_Adversaries/Secrecy.thy", - "formal/afp/Zeta_Function/document/root.tex", - "formal/lean/mathlib/algebra/tropical/basic.lean", - "formal/mizar/int_1.miz", - "formal/afp/Hermite_Lindemann/Misc_HLW.thy", - "formal/hol/Help/NUM_MULT_CONV.doc", - "formal/hol/EC/nistp192.ml", - "formal/hol/Help/CHEAT_TAC.doc", - "formal/hol/Help/SKOLEM_CONV.doc", - "formal/afp/VYDRA_MDL/Timestamp_Lex.thy", - "formal/afp/FocusStreamsCaseStudies/Gateway_types.thy", - "formal/afp/Mereology/EM.thy", - "formal/afp/Name_Carrying_Type_Inference/PreSimplyTyped.thy", - "formal/lean/mathlib/field_theory/primitive_element.lean", - "formal/mizar/exchsort.miz", - "formal/mizar/relat_1.miz", - "formal/afp/JinjaThreads/Common/ExternalCallWF.thy", - "formal/afp/Extended_Finite_State_Machines/FSet_Utils.thy", - "formal/mizar/euclid12.miz", - "formal/afp/List_Update/BIT_pairwise.thy", - "formal/afp/Deep_Learning/Tensor_Matricization.thy", - "formal/afp/Kruskal/Kruskal.thy", - "formal/hol/Examples/brunn_minkowski.ml", - "formal/lean/mathlib/topology/instances/discrete.lean", - "formal/hol/Help/mem.doc", - "formal/lean/mathlib/linear_algebra/clifford_algebra/even_equiv.lean", - "formal/afp/HotelKeyCards/Basis.thy", - "formal/afp/Knuth_Morris_Pratt/KMP.thy", - "formal/afp/Network_Security_Policy_Verification/TopoS_generateCode.thy", - "formal/hol/Help/copverb.doc", - "formal/afp/CoSMeDis/Friend_Request_Confidentiality/Friend_Request_Network.thy", - "formal/lean/mathlib/data/finset/fold.lean", - "formal/hol/Help/equals_thm.doc", - "formal/afp/Conditional_Simplification/CS_Tools/CS_Tools.thy", - "formal/afp/PLM/TAO_99_Paradox.thy", - "formal/lean/mathlib/category_theory/sites/adjunction.lean", - "formal/afp/Probabilistic_Noninterference/Interface.thy", - 
"formal/hol/Help/thm_frees.doc", - "formal/afp/Projective_Geometry/Desargues_Property.thy", - "formal/mizar/genealg1.miz", - "formal/coq/math-comp/character/mxrepresentation.v", - "formal/afp/PAC_Checker/WB_Sort.thy", - "formal/afp/Deriving/Comparator_Generator/Compare_Instances.thy", - "formal/afp/FOL_Seq_Calc2/Prover.thy", - "formal/mizar/twoscomp.miz", - "formal/hol/Help/enter.doc", - "formal/afp/AODV/variants/c_gtobcast/C_Loop_Freedom.thy", - "formal/afp/CoSMeDis/Post_Confidentiality/Post_Unwinding_Helper_RECEIVER.thy", - "formal/afp/Virtual_Substitution/ExportProofs.thy", - "formal/afp/Regular_Tree_Relations/Util/Ground_Ctxt.thy", - "formal/lean/mathlib/ring_theory/filtration.lean", - "formal/afp/Optimal_BST/Weighted_Path_Length.thy", - "formal/hol/Help/parse_type.doc", - "formal/afp/JinjaThreads/DFA/LBVCorrect.thy", - "formal/lean/liquid/for_mathlib/module_epi.lean", - "formal/afp/Core_SC_DOM/safely_composable/Core_DOM_Heap_WF.thy", - "formal/afp/JinjaDCI/BV/Effect.thy", - "formal/afp/Collections/Examples/Autoref/Collection_Autoref_Examples_Chapter.thy", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_CSimplicial.thy", - "formal/hol/Mizarlight/make.ml", - "formal/lean/liquid/breen_deligne/eg.lean", - "formal/lean/mathlib/analysis/special_functions/trigonometric/angle.lean", - "formal/lean/mathlib/linear_algebra/exterior_algebra/basic.lean", - "formal/lean/liquid/for_mathlib/endomorphisms/basic.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p116.lean", - "formal/hol/Help/STRUCT_CASES_THEN.doc", - "formal/lean/mathlib/analysis/calculus/dslope.lean", - "formal/lean/mathlib/category_theory/monoidal/internal/limits.lean", - "formal/afp/CoSMeDis/Friend_Confidentiality/Friend_Observation_Setup.thy", - "formal/afp/Shadow_DOM/document/root.tex", - "formal/afp/Goedel_Incompleteness/Diagonalization.thy", - "formal/afp/Projective_Geometry/Pappus_Desargues.thy", - "formal/lean/mathlib/algebra/tropical/lattice.lean", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/induction/pprime_pdvdapowpma.lean", - "formal/lean/mathlib/data/fin/basic.lean", - "formal/afp/Sigma_Commit_Crypto/document/root.tex", - "formal/afp/UPF_Firewall/Examples/PersonalFirewall/PersonalFirewall.thy", - "formal/afp/Statecharts/Contrib.thy", - "formal/lean/mathlib/algebra/hom/equiv.lean", - "formal/lean/liquid/pseudo_normed_group/FP.lean", - "formal/afp/Refine_Monadic/Generic/RefineG_Domain.thy", - "formal/mizar/bagord_2.miz", - "formal/mizar/msafree1.miz", - "formal/hol/Help/is_binop.doc", - "formal/afp/WorkerWrapper/CounterExample.thy", - "formal/lean/liquid/rescale/pseudo_normed_group.lean", - "formal/coq/math-comp/ssreflect/bigop.v", - "formal/afp/BDD/LevellistProof.thy", - "formal/mizar/glibpre1.miz", - "formal/mizar/matrtop3.miz", - "formal/mizar/fdiff_1.miz", - "formal/afp/Certification_Monads/Strict_Sum.thy", - "formal/afp/LinearQuantifierElim/Thys/QElin_opt.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p510.lean", - "formal/lean/perfectoid/perfectoid_space.lean", - "formal/lean/mathlib/topology/algebra/ring.lean", - "formal/hol/Help/print_qtype.doc", - "formal/afp/JinjaThreads/J/ProgressThreaded.thy", - "formal/afp/Eval_FO/Ailamazyan.thy", - "formal/afp/SimplifiedOntologicalArgument/SimpleVariantPG.thy", - "formal/afp/Probabilistic_Timed_Automata/document/root.tex", - "formal/hol/100/ptolemy.ml", - "formal/afp/Security_Protocol_Refinement/Key_establish/m2_nssk.thy", - "formal/afp/Partial_Order_Reduction/Extensions/List_Extensions.thy", - 
"formal/lean/mathlib/geometry/manifold/algebra/smooth_functions.lean", - "formal/lean/mathlib/topology/is_locally_homeomorph.lean", - "formal/mizar/rsspace3.miz", - "formal/afp/QHLProver/Quantum_Hoare.thy", - "formal/afp/Subresultants/Dichotomous_Lazard.thy", - "formal/afp/Logging_Independent_Anonymity/document/root.tex", - "formal/afp/ZFC_in_HOL/ZFC_Library.thy", - "formal/afp/FO_Theory_Rewriting/Util/Utils.thy", - "formal/afp/CoCon/Observation_Setup.thy", - "formal/afp/Belief_Revision/AGM_Remainder.thy", - "formal/afp/InfPathElimination/Bexp.thy", - "formal/afp/CAVA_LTL_Modelchecker/Nested_DFS/NDFS_SI.thy", - "formal/afp/Iptables_Semantics/Semantics_Ternary/Negation_Type_Matching.thy", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_Dependability.thy", - "formal/mizar/sin_cos7.miz", - "formal/afp/Launchbury/Mono-Nat-Fun.thy", - "formal/afp/Multiset_Ordering_NPC/Multiset_Ordering_More.thy", - "formal/afp/CakeML/generated/Lem.thy", - "formal/afp/MFODL_Monitor_Optimized/Monitor_Impl.thy", - "formal/lean/liquid/for_mathlib/triangle_shift.lean", - "formal/mizar/topmetr4.miz", - "formal/mizar/jordan21.miz", - "formal/lean/mathlib/data/prod/tprod.lean", - "formal/hol/Boyer_Moore/clausal_form.ml", - "formal/lean/mathlib/data/multiset/sum.lean", - "formal/hol/Help/SET_RULE.doc", - "formal/lean/mathlib/probability/moments.lean", - "formal/afp/Poincare_Disc/document/root.tex", - "formal/afp/FOL_Seq_Calc3/document/root.tex", - "formal/lean/mathlib/ring_theory/valuation/integers.lean", - "formal/afp/LOFT/Examples/RFC2544/RFC2544.thy", - "formal/afp/LTL_to_DRA/Semi_Mojmir.thy", - "formal/afp/VerifyThis2019/Challenge1B.thy", - "formal/afp/Efficient-Mergesort/Mergesort_Complexity.thy", - "formal/afp/Algebraic_VCs/AVC_KAD/VC_KAD_scratch.thy", - "formal/afp/Binding_Syntax_Theory/Recursion.thy", - "formal/mizar/o_ring_1.miz", - "formal/afp/Priority_Queue_Braun/Priority_Queue_Braun.thy", - "formal/mizar/aescip_1.miz", - "formal/lean/mathlib/topology/instances/irrational.lean", - "formal/hol/Help/.singlefun.doc", - "formal/lean/mathlib/data/polynomial/lifts.lean", - "formal/afp/Regex_Equivalence/Regex_Equivalence.thy", - "formal/afp/IsaNet/infrastructure/Tools.thy", - "formal/afp/ConcurrentGC/Valid_Refs.thy", - "formal/afp/Collections/ICF/spec/ICF_Spec_Chapter.thy", - "formal/afp/Concurrent_Revisions/Executions.thy", - "formal/lean/lftcm/exercises_sources/wednesday/algebraic_hierarchy.lean", - "formal/afp/Card_Equiv_Relations/Card_Partial_Equiv_Relations.thy", - "formal/hol/Help/INT_MIN_CONV.doc", - "formal/afp/Optics/Lens_Statespace_Example.thy", - "formal/afp/CakeML/generated/Lem_maybe.thy", - "formal/hol/Help/null_meta.doc", - "formal/afp/Auto2_Imperative_HOL/Imperative/GCD_Impl.thy", - "formal/afp/Isabelle_C/C11-FrontEnd/examples/C4.thy", - "formal/afp/Progress_Tracking/Graph.thy", - "formal/afp/Ordinary_Differential_Equations/IVP/Reachability_Analysis.thy", - "formal/afp/Parity_Game/WellOrderedStrategy.thy", - "formal/hol/Help/MOD_DOWN_CONV.doc", - "formal/afp/QR_Decomposition/Examples_QR_IArrays_Float.thy", - "formal/lean/mathlib/analysis/normed_space/dual.lean", - "formal/lean/liquid/hacks_and_tricks/asyncI.lean", - "formal/hol/Help/MONO_TAC.doc", - "formal/afp/Separation_Logic_Imperative_HOL/Examples/List_Seg.thy", - "formal/mizar/integr18.miz", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2009/a/p5.lean", - "formal/lean/liquid/pseudo_normed_group/system_of_complexes2.lean", - "formal/afp/HotelKeyCards/Trace.thy", - "formal/mizar/arithm.miz", - 
"formal/afp/Berlekamp_Zassenhaus/Suitable_Prime.thy", - "formal/afp/Octonions/document/root.tex", - "formal/mizar/quin_1.miz", - "formal/lean/mathlib/category_theory/limits/preserves/opposites.lean", - "formal/afp/HOLCF-Prelude/Data_Maybe.thy", - "formal/coq/math-comp/ssreflect/binomial.v", - "formal/lean/mathlib/number_theory/function_field.lean", - "formal/hol/100/bernoulli.ml", - "formal/lean/mathlib/algebra/hom/units.lean", - "formal/hol/Help/POP_ASSUM.doc", - "formal/hol/Help/mk_vartype.doc", - "formal/afp/Graph_Theory/Auxiliary.thy", - "formal/afp/Eval_FO/Infinite.thy", - "formal/afp/SenSocialChoice/FSext.thy", - "formal/lean/mathlib/algebra/big_operators/associated.lean", - "formal/afp/ConcurrentGC/StrongTricolour.thy", - "formal/afp/Core_SC_DOM/common/monads/DocumentMonad.thy", - "formal/hol/Help/DISCH_THEN.doc", - "formal/afp/Menger/document/root.tex", - "formal/afp/Separation_Logic_Imperative_HOL/Examples/Default_Insts.thy", - "formal/afp/Tree-Automata/AbsAlgo.thy", - "formal/afp/Jordan_Normal_Form/Show_Matrix.thy", - "formal/afp/SIFUM_Type_Systems/document/root.tex", - "formal/hol/Help/term_of_rat.doc", - "formal/hol/Help/meson_chatty.doc", - "formal/afp/Padic_Ints/Cring_Poly.thy", - "formal/lean/mathlib/analysis/convex/krein_milman.lean", - "formal/hol/Help/top_thm.doc", - "formal/mizar/frechet.miz", - "formal/afp/Polynomials/document/root.tex", - "formal/afp/AODV/Fresher.thy", - "formal/hol/Multivariate/paths.ml", - "formal/mizar/group_8.miz", - "formal/hol/100/cosine.ml", - "formal/lean/mathlib/number_theory/padics/padic_norm.lean", - "formal/hol/Help/I.doc", - "formal/afp/Heard_Of/uv/UvDefs.thy", - "formal/afp/WebAssembly/Wasm_Printing/Wasm_Checker_Printing.thy", - "formal/afp/Forcing/Forcing_Theorems.thy", - "formal/afp/Parity_Game/AttractingStrategy.thy", - "formal/mizar/ncfcont2.miz", - "formal/lean/mathlib/data/stream/defs.lean", - "formal/hol/Help/TRY.doc", - "formal/afp/Core_SC_DOM/common/preliminaries/Hiding_Type_Variables.thy", - "formal/hol/Help/CHOOSE_UPPERCASE.doc", - "formal/afp/Program-Conflict-Analysis/Misc.thy", - "formal/hol/Tutorial/Recursive_definitions.ml", - "formal/lean/mathlib/category_theory/abelian/subobject.lean", - "formal/lean/liquid/for_mathlib/homological_complex_shift.lean", - "formal/afp/Bernoulli/Bernoulli_Zeta.thy", - "formal/afp/Kleene_Algebra/Matrix.thy", - "formal/afp/Virtual_Substitution/OptimizationProofs.thy", - "formal/afp/DiskPaxos/DiskPaxos_Inv3.thy", - "formal/lean/mathlib/analysis/convex/complex.lean", - "formal/afp/Optics/Dataspace_Example.thy", - "formal/mizar/boolealg.miz", - "formal/afp/Flow_Networks/Augmenting_Flow.thy", - "formal/afp/Transition_Systems_and_Automata/Basic/Sequence.thy", - "formal/afp/Sort_Encodings/document/root.tex", - "formal/hol/Help/help.doc", - "formal/lean/mathlib/ring_theory/non_zero_divisors.lean", - "formal/afp/AWN/Lib.thy", - "formal/afp/Graph_Theory/Stuff.thy", - "formal/afp/Nested_Multisets_Ordinals/Signed_Multiset.thy", - "formal/afp/Progress_Tracking/document/root.tex", - "formal/afp/Density_Compiler/PDF_Target_Semantics.thy", - "formal/afp/MDP-Algorithms/Modified_Policy_Iteration.thy", - "formal/afp/Finitely_Generated_Abelian_Groups/General_Auxiliary.thy", - "formal/afp/Landau_Symbols/document/root.tex", - "formal/afp/UPF_Firewall/document/introduction.tex", - "formal/lean/mathlib/data/pnat/prime.lean", - "formal/afp/Complex_Bounded_Operators/extra/Extra_Pretty_Code_Examples.thy", - "formal/lean/mathlib/analysis/normed_space/star/spectrum.lean", - "formal/mizar/circcomb.miz", - 
"formal/lean/mathlib/data/list/permutation.lean", - "formal/lean/mathlib/data/dlist/basic.lean", - "formal/lean/liquid/for_mathlib/exact_seq.lean", - "formal/hol/Help/list_mk_binop.doc", - "formal/afp/JinjaDCI/Compiler/Compiler2.thy", - "formal/lean/mathlib/ring_theory/localization/integral.lean", - "formal/lean/mathlib/topology/instances/rat.lean", - "formal/afp/MDP-Rewards/document/root.tex", - "formal/afp/X86_Semantics/Example_WC.thy", - "formal/afp/Game_Based_Crypto/document/fig-3.tex", - "formal/afp/LLL_Basis_Reduction/document/root.tex", - "formal/afp/DPRM_Theorem/Machine_Equations/All_Equations_Invariance.thy", - "formal/mizar/nat_d.miz", - "formal/afp/SpecCheck/Show/SpecCheck_Show.thy", - "formal/lean/mathlib/linear_algebra/matrix/transvection.lean", - "formal/mizar/matrix_1.miz", - "formal/lean/mathlib/ring_theory/ore_localization/basic.lean", - "formal/hol/Help/prove_general_recursive_function_exists.doc", - "formal/afp/JinjaThreads/JinjaThreads.thy", - "formal/hol/Help/comment_token.doc", - "formal/afp/Circus/Reactive_Processes.thy", - "formal/afp/Intro_Dest_Elim/Reference_Prerequisites.thy", - "formal/afp/Category3/ProductCategory.thy", - "formal/afp/Independence_CH/Not_CH.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1988/p6.lean", - "formal/mizar/jordan13.miz", - "formal/afp/Algebraic_Numbers/Complex_Roots_Real_Poly.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p275.lean", - "formal/lean/mathlib/category_theory/limits/kan_extension.lean", - "formal/hol/Help/tryapplyd.doc", - "formal/lean/mathlib/ring_theory/unique_factorization_domain.lean", - "formal/mizar/jordan14.miz", - "formal/lean/mathlib/algebra/group_power/lemmas.lean", - "formal/lean/mathlib/order/category/CompleteLattice.lean", - "formal/afp/pGCL/Healthiness.thy", - "formal/lean/mathlib/data/mv_polynomial/cardinal.lean", - "formal/mizar/vectsp_6.miz", - "formal/afp/JinjaThreads/MM/JMM_Compiler_Type2.thy", - "formal/afp/Game_Based_Crypto/Security_Spec.thy", - "formal/lean/mathlib/topology/local_homeomorph.lean", - "formal/afp/Refine_Monadic/Refine_Chapter.thy", - "formal/hol/Help/set_basic_congs.doc", - "formal/afp/EdmondsKarp_Maxflow/document/root.tex", - "formal/afp/AWN/AWN_Cterms.thy", - "formal/lean/liquid/thm95/col_exact_prep.lean", - "formal/afp/Word_Lib/Ancient_Numeral.thy", - "formal/lean/mathlib/data/polynomial/coeff.lean", - "formal/hol/Help/body.doc", - "formal/hol/Mizarlight/pa_f.ml", - "formal/lean/mathlib/algebra/order/rearrangement.lean", - "formal/afp/Count_Complex_Roots/Count_Circle.thy", - "formal/afp/QHLProver/Complex_Matrix.thy", - "formal/mizar/nomin_4.miz", - "formal/afp/Forcing/Pointed_DC.thy", - "formal/afp/Polynomials/Power_Products.thy", - "formal/mizar/waybel19.miz", - "formal/afp/Matrix/Matrix_Legacy.thy", - "formal/lean/mathlib/ring_theory/polynomial/selmer.lean", - "formal/lean/perfectoid/for_mathlib/group.lean", - "formal/afp/Clean/src/Seq_MonadSE.thy", - "formal/lean/mathlib/data/set/basic.lean", - "formal/afp/Ordinary_Differential_Equations/Numerics/Refine_Reachability_Analysis_C1.thy", - "formal/hol/Help/null_inst.doc", - "formal/lean/sphere-eversion/to_mathlib/partition2.lean", - "formal/afp/Dirichlet_Series/Partial_Summation.thy", - "formal/lean/mathlib/ring_theory/witt_vector/isocrystal.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p185.lean", - "formal/afp/CoreC++/SmallStep.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p405.lean", - "formal/afp/Shadow_SC_DOM/Shadow_DOM.thy", - 
"formal/afp/Girth_Chromatic/Girth_Chromatic_Misc.thy", - "formal/hol/Help/dest_binop.doc", - "formal/afp/DPT-SAT-Solver/DPT_SAT_Tests.thy", - "formal/afp/Containers/List_Proper_Interval.thy", - "formal/afp/Regular_Tree_Relations/RR2_Infinite.thy", - "formal/mizar/jordan1a.miz", - "formal/afp/Ordinal/OrdinalVeblen.thy", - "formal/mizar/limfunc4.miz", - "formal/afp/IMP2/automation/IMP2_Specification.thy", - "formal/afp/Consensus_Refined/Same_Vote.thy", - "formal/afp/Graph_Theory/Pair_Digraph.thy", - "formal/mizar/wellset1.miz", - "formal/afp/CoCon/Discussion_Confidentiality/Discussion_Intro.thy", - "formal/afp/Core_SC_DOM/safely_composable/pointers/ShadowRootPointer.thy", - "formal/afp/Partial_Order_Reduction/Basics/Word_Prefixes.thy", - "formal/afp/HotelKeyCards/State.thy", - "formal/afp/Sort_Encodings/E.thy", - "formal/afp/Network_Security_Policy_Verification/Examples/Example_Forte14.thy", - "formal/mizar/tdlat_2.miz", - "formal/afp/Parity_Game/Strategy.thy", - "formal/afp/LTL_to_DRA/Impl/LTL_Rabin_Impl.thy", - "formal/lean/mathlib/probability/hitting_time.lean", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1981/p3.lean", - "formal/afp/Real_Time_Deque/Big_Proof.thy", - "formal/afp/RIPEMD-160-SPARK/RIPEMD_160_SPARK.thy", - "formal/afp/Statecharts/CarAudioSystem.thy", - "formal/mizar/jordan1k.miz", - "formal/hol/Logic/prolog.ml", - "formal/lean/liquid/for_mathlib/AddCommGroup/epi.lean", - "formal/afp/Design_Theory/Design_Basics.thy", - "formal/afp/Ordinary_Differential_Equations/Refinement/Refine_Intersection.thy", - "formal/afp/Buchi_Complementation/Graph.thy", - "formal/lean/mathlib/measure_theory/group/measure.lean", - "formal/afp/Priority_Search_Trees/PST_RBT.thy", - "formal/afp/VerifyThis2018/Challenge3.thy", - "formal/afp/Differential_Dynamic_Logic/USubst.thy", - "formal/afp/Regular_Tree_Relations/document/root.tex", - "formal/mizar/mod_4.miz", - "formal/afp/Delta_System_Lemma/document/root.tex", - "formal/hol/Help/REAL_INT_NEG_CONV.doc", - "formal/hol/Help/POP_ASSUM_LIST.doc", - "formal/afp/Types_To_Sets_Extension/ETTS/Manual/ETTS_Introduction.thy", - "formal/afp/Residuated_Lattices/Residuated_Boolean_Algebras.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p458.lean", - "formal/lean/mathlib/category_theory/closed/cartesian.lean", - "formal/afp/Universal_Hash_Families/document/root.tex", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/induction/prod1p1onk3le3m1onn.lean", - "formal/afp/CoSMeDis/Friend_Request_Confidentiality/Friend_Request_Intro.thy", - "formal/hol/Help/rightbin.doc", - "formal/mizar/comseq_2.miz", - "formal/afp/Transitive_Models/Renaming_Auto.thy", - "formal/lean/mathlib/data/set/Union_lift.lean", - "formal/afp/Special_Function_Bounds/Atan_CF_Bounds.thy", - "formal/lean/lftcm/solutions/thursday/category_theory/exercise7.lean", - "formal/mizar/matrixj2.miz", - "formal/mizar/scm_halt.miz", - "formal/lean/liquid/for_mathlib/universal_delta_functor/Ext.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2001/p21.lean", - "formal/afp/Automatic_Refinement/Lib/Misc.thy", - "formal/afp/Monomorphic_Monad/Monomorphic_Monad.thy", - "formal/afp/Planarity_Certificates/l4v/lib/wp/NonDetMonadLemmas.thy", - "formal/afp/Ordinary_Differential_Equations/Ex/ARCH_COMP/Examples_ARCH_COMP.thy", - "formal/afp/Abstract-Hoare-Logics/While/Lang.thy", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_Dependability_norefl.thy", - "formal/lean/mathlib/analysis/normed_space/ordered.lean", - "formal/afp/Call_Arity/ArityAnalysisCorrDenotational.thy", - 
"formal/afp/AODV/variants/d_fwdrreqs/D_Aodv_Message.thy", - "formal/lean/mathlib/data/mv_polynomial/monad.lean", - "formal/afp/Linear_Inequalities/Sum_Vec_Set.thy", - "formal/afp/Simpl/HoareTotalDef.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2002/b/p19.lean", - "formal/mizar/sprect_2.miz", - "formal/afp/Separation_Algebra/Sep_Tactics.thy", - "formal/afp/Lifting_the_Exponent/LTE.thy", - "formal/afp/MonoBoolTranAlgebra/Mono_Bool_Tran_Algebra.thy", - "formal/mizar/goboard6.miz", - "formal/afp/JinjaThreads/Compiler/Compiler2.thy", - "formal/afp/Three_Circles/Normal_Poly.thy", - "formal/afp/FinFun/FinFun.thy", - "formal/lean/mathlib/linear_algebra/projective_space/independence.lean", - "formal/mizar/matrtop2.miz", - "formal/afp/Shivers-CFA/AbsCFComp.thy", - "formal/afp/Markov_Models/Continuous_Time_Markov_Chain.thy", - "formal/afp/MDP-Algorithms/Matrix_Util.thy", - "formal/afp/Automated_Stateful_Protocol_Verification/PSPSP.thy", - "formal/afp/Polynomials/Term_Order.thy", - "formal/hol/miz3/Samples/lagrange1.ml", - "formal/lean/mathlib/algebra/algebra/unitization.lean", - "formal/afp/Lam-ml-Normalization/document/root.tex", - "formal/lean/mathlib/category_theory/balanced.lean", - "formal/afp/Regular_Tree_Relations/Tree_Automata/Tree_Automata_Complement.thy", - "formal/lean/lftcm/solutions/thursday/category_theory/exercise2.lean", - "formal/coq/odd-order/BGsection6.v", - "formal/afp/Shadow_SC_DOM/tests/Shadow_DOM_Node_removeChild.thy", - "formal/afp/PAC_Checker/PAC_Polynomials.thy", - "formal/mizar/aofa_a01.miz", - "formal/lean/mathlib/linear_algebra/matrix/mv_polynomial.lean", - "formal/lean/mathlib/topology/sets/order.lean", - "formal/lean/mathlib/geometry/manifold/conformal_groupoid.lean", - "formal/lean/mathlib/algebra/algebra/bilinear.lean", - "formal/afp/FOL_Seq_Calc2/Results.thy", - "formal/mizar/scmfsa_i.miz", - "formal/afp/Psi_Calculi/Weaken_Bisimulation.thy", - "formal/lean/mathlib/data/matrix/kronecker.lean", - "formal/afp/Extended_Finite_State_Machines/examples/Drinks_Machine_2.thy", - "formal/lean/mathlib/ring_theory/eisenstein_criterion.lean", - "formal/afp/Twelvefold_Way/Twelvefold_Way_Entry7.thy", - "formal/afp/Dirichlet_L/Dirichlet_Characters.thy", - "formal/afp/Special_Function_Bounds/Bounds_Lemmas.thy", - "formal/afp/FLP/FLPTheorem.thy", - "formal/hol/Help/STRIP_THM_THEN.doc", - "formal/afp/ResiduatedTransitionSystem/LambdaCalculus.thy", - "formal/afp/Algebraic_VCs/KAD_is_KAT.thy", - "formal/mizar/int_4.miz", - "formal/afp/Network_Security_Policy_Verification/TopoS_Stateful_Policy_impl.thy", - "formal/mizar/normform.miz", - "formal/afp/Polynomial_Factorization/Fundamental_Theorem_Algebra_Factorized.thy", - "formal/lean/mathlib/linear_algebra/affine_space/ordered.lean", - "formal/afp/WorkerWrapper/WorkerWrapper.thy", - "formal/lean/mathlib/linear_algebra/finsupp.lean", - "formal/hol/Examples/solovay.ml", - "formal/hol/Help/CHOOSE_THEN.doc", - "formal/afp/Hyperdual/HyperdualFunctionExtension.thy", - "formal/lean/liquid/for_mathlib/snake_lemma_naturality2.lean", - "formal/afp/ROBDD/Bool_Func.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2008/a/p4.lean", - "formal/lean/mathlib/category_theory/adjunction/opposites.lean", - "formal/lean/mathlib/data/mv_polynomial/basic.lean", - "formal/afp/Algebraic_Numbers/Min_Int_Poly.thy", - "formal/afp/Differential_Game_Logic/Differential_Game_Logic.thy", - "formal/afp/MiniSail/BTVSubst.thy", - "formal/lean/mathlib/linear_algebra/matrix/basis.lean", - "formal/afp/Native_Word/Code_Int_Integer_Conversion.thy", - 
"formal/hol/Help/set_eq.doc", - "formal/mizar/scmfsa_5.miz", - "formal/hol/100/lhopital.ml", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p34.lean", - "formal/mizar/groeb_2.miz", - "formal/afp/Elliptic_Curves_Group_Law/document/root.tex", - "formal/lean/mathlib/number_theory/class_number/admissible_abs.lean", - "formal/hol/Library/card.ml", - "formal/afp/Van_Emde_Boas_Trees/VEBT_InsertCorrectness.thy", - "formal/afp/Polynomials/MPoly_Type.thy", - "formal/afp/SIFPL/VDM.thy", - "formal/lean/mathlib/data/set/enumerate.lean", - "formal/lean/mathlib/analysis/calculus/diff_on_int_cont.lean", - "formal/afp/Hoare_Time/Nielson_Examples.thy", - "formal/afp/CZH_Universal_Constructions/czh_ucategories/CZH_UCAT_Universal.thy", - "formal/hol/Permutation/make.ml", - "formal/lean/mathlib/order/complete_lattice.lean", - "formal/mizar/scmpds_7.miz", - "formal/afp/Simpl/Semantic.thy", - "formal/afp/pGCL/Continuity.thy", - "formal/afp/Automated_Stateful_Protocol_Verification/trac/trac.thy", - "formal/hol/Help/prioritize_overload.doc", - "formal/afp/WebAssembly/Wasm_Interpreter.thy", - "formal/afp/Irrationality_J_Hancl/document/root.tex", - "formal/mizar/closure3.miz", - "formal/lean/mathlib/number_theory/ADE_inequality.lean", - "formal/lean/mathlib/measure_theory/integral/circle_integral_transform.lean", - "formal/lean/mathlib/order/bounds.lean", - "formal/lean/liquid/condensed/basic.lean", - "formal/lean/mathlib/control/random.lean", - "formal/afp/CCS/Weak_Cong.thy", - "formal/afp/Card_Partitions/Injectivity_Solver.thy", - "formal/mizar/descip_1.miz", - "formal/afp/Collections/ICF/spec/SetSpec.thy", - "formal/afp/Refine_Monadic/Refine_Transfer.thy", - "formal/lean/perfectoid/valuation_spectrum.lean", - "formal/mizar/complsp1.miz", - "formal/afp/CoSMeDis/Outer_Friend_Confidentiality/Issuer/Outer_Friend_Issuer.thy", - "formal/mizar/urysohn2.miz", - "formal/afp/Core_SC_DOM/common/tests/Node_insertBefore.thy", - "formal/afp/Matrix/Ordered_Semiring.thy", - "formal/afp/JinjaThreads/Framework/FWLockingThread.thy", - "formal/hol/Jordan/lib_ext.ml", - "formal/mizar/nomin_2.miz", - "formal/hol/Help/REAL_POLY_CONV.doc", - "formal/afp/Groebner_Bases/F4_Examples.thy", - "formal/hol/Help/NUMBER_TAC.doc", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p412.lean", - "formal/afp/Pop_Refinement/document/root.tex", - "formal/mizar/waybel34.miz", - "formal/lean/mathzoo/mathzoo/olympiads/imo/2005/p3.lean", - "formal/lean/liquid/polyhedral_lattice/Hom.lean", - "formal/afp/MFMC_Countable/document/root.tex", - "formal/afp/DataRefinementIBP/Diagram.thy", - "formal/afp/Ordinary_Differential_Equations/Refinement/Refine_Invar.thy", - "formal/afp/Psi_Calculi/Weak_Bisim_Pres.thy", - "formal/afp/Irrational_Series_Erdos_Straus/document/root.tex", - "formal/hol/Help/is_disj.doc", - "formal/afp/Regular_Tree_Relations/GTT_Compose.thy", - "formal/afp/Bicategory/Tabulation.thy", - "formal/mizar/ring_5.miz", - "formal/hol/Help/is_gabs.doc", - "formal/coq/math-comp/field/cyclotomic.v", - "formal/mizar/tex_3.miz", - "formal/mizar/roughs_2.miz", - "formal/mizar/roughs_4.miz", - "formal/hol/Help/hide_constant.doc", - "formal/afp/Kruskal/SeprefUF.thy", - "formal/afp/Incredible_Proof_Machine/Incredible_Correctness.thy", - "formal/hol/Rqe/examples.ml", - "formal/afp/Simple_Firewall/Primitives/Iface.thy", - "formal/afp/Clean/document/root.tex", - "formal/lean/mathlib/category_theory/products/bifunctor.lean", - "formal/afp/DataRefinementIBP/Statements.thy", - "formal/afp/Flyspeck-Tame/Graph.thy", - 
"formal/afp/Probabilistic_System_Zoo/Probabilistic_Hierarchy.thy", - "formal/afp/LightweightJava/Lightweight_Java_Proof.thy", - "formal/afp/Rewrite_Properties_Reduction/Ground_Reduction_on_LLRG.thy", - "formal/afp/Multirelations/C_Algebras.thy", - "formal/lean/mathlib/algebra/group_with_zero/defs.lean", - "formal/lean/mathlib/category_theory/full_subcategory.lean", - "formal/afp/Collections/ICF/ICF_Entrypoints_Chapter.thy", - "formal/lean/mathlib/topology/algebra/continuous_affine_map.lean", - "formal/lean/mathlib/analysis/convex/combination.lean", - "formal/lean/liquid/condensed/bd_ses.lean", - "formal/hol/Help/GEN_REWRITE_CONV.doc", - "formal/lean/lftcm/exercises_sources/thursday/category_theory/exercise5.lean", - "formal/hol/Help/CHOOSE_TAC.doc", - "formal/mizar/wellord2.miz", - "formal/lean/lftcm/exercises_sources/thursday/category_theory/exercise8.lean", - "formal/lean/mathlib/analysis/convex/extreme.lean", - "formal/afp/DataRefinementIBP/Hoare.thy", - "formal/afp/Auto2_HOL/HOL/Auto2_Main.thy", - "formal/afp/AODV/variants/d_fwdrreqs/D_Aodv.thy", - "formal/afp/VeriComp/Transfer_Extras.thy", - "formal/mizar/frechet2.miz", - "formal/lean/mathlib/algebra/quandle.lean", - "formal/afp/Free-Groups/C2.thy", - "formal/afp/Hybrid_Multi_Lane_Spatial_Logic/RealInt.thy", - "formal/afp/Group-Ring-Module/Algebra5.thy", - "formal/mizar/rusub_1.miz", - "formal/afp/CCS/Weak_Bisim_Pres.thy", - "formal/hol/Help/implode.doc", - "formal/afp/Hello_World/document/root.tex", - "formal/afp/Example-Submission/document/root.tex", - "formal/afp/Isabelle_Meta_Model/toy_example/embedding/core/Core_init.thy", - "formal/afp/Jinja/document/introduction.tex", - "formal/afp/Complx/SmallStep.thy", - "formal/lean/mathlib/data/list/defs.lean", - "formal/lean/mathlib/category_theory/functor/hom.lean", - "formal/afp/Tarskis_Geometry/Metric.thy", - "formal/afp/PLM/TAO_10_PossibleWorlds.thy", - "formal/afp/IsaNet/document/session_graph.tex", - "formal/mizar/srings_1.miz", - "formal/afp/Lifting_Definition_Option/Lifting_Definition_Option_Examples.thy", - "formal/lean/mathlib/linear_algebra/matrix/adjugate.lean", - "formal/hol/Multivariate/misc.ml", - "formal/afp/Isabelle_C/C11-FrontEnd/examples/C2.thy", - "formal/afp/Incompleteness/Quote.thy", - "formal/hol/QBF/qbf.ml", - "formal/afp/FocusStreamsCaseStudies/ListExtras.thy", - "formal/mizar/zmodul05.miz", - "formal/afp/CakeML/generated/CakeML/Evaluate.thy", - "formal/afp/Separation_Logic_Imperative_HOL/Examples/Idioms.thy", - "formal/lean/mathlib/data/seq/wseq.lean", - "formal/mizar/ens_1.miz", - "formal/afp/Ordinal_Partitions/Omega_Omega.thy", - "formal/afp/Registers/Teleport.thy", - "formal/lean/mathlib/data/mv_polynomial/derivation.lean", - "formal/hol/Help/LE_IMP.doc", - "formal/mizar/rcomp_3.miz", - "formal/lean/mathlib/ring_theory/polynomial/rational_root.lean", - "formal/afp/Sophomores_Dream/document/root.tex", - "formal/afp/Pi_Calculus/Weak_Early_Late_Comp.thy", - "formal/lean/mathlib/control/functor.lean", - "formal/afp/ADS_Functor/ADS_Construction.thy", - "formal/afp/Formal_Puiseux_Series/document/root.tex", - "formal/lean/lftcm/solutions/tuesday/numbers.lean", - "formal/afp/Elliptic_Curves_Group_Law/Elliptic_Locale.thy", - "formal/lean/mathlib/probability/conditional_expectation.lean", - "formal/afp/Category/Functors.thy", - "formal/afp/Safe_OCL/Errorable.thy", - "formal/mizar/extens_1.miz", - "formal/mizar/mesfunc6.miz", - "formal/afp/JinjaDCI/Common/TypeRel.thy", - "formal/mizar/vectsp_9.miz", - "formal/mizar/realalg1.miz", - 
"formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_Subcategory.thy", - "formal/lean/mathlib/order/hom/order.lean", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1983/p3.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/8/2020/p23.lean", - "formal/hol/Help/REAL_RAT_ABS_CONV.doc", - "formal/afp/Prpu_Maxflow/Generated_Code_Test.thy", - "formal/afp/UPF_Firewall/Examples/DMZ/DMZDatatype.thy", - "formal/afp/Psi_Calculi/Bisim_Subst.thy", - "formal/afp/Nash_Williams/Nash_Extras.thy", - "formal/afp/KBPs/document/root.tex", - "formal/mizar/jgraph_6.miz", - "formal/afp/LLL_Basis_Reduction/LLL_Complexity.thy", - "formal/afp/Correctness_Algebras/Pre_Post.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1959/p1.lean", - "formal/mizar/ntalgo_1.miz", - "formal/mizar/turing_1.miz", - "formal/afp/GPU_Kernel_PL/Misc.thy", - "formal/afp/Subresultants/Coeff_Int.thy", - "formal/afp/MSO_Regex_Equivalence/M2L_Examples.thy", - "formal/afp/Call_Arity/ArityConsistent.thy", - "formal/hol/Help/FIRST_CONV.doc", - "formal/mizar/pdiff_2.miz", - "formal/coq/math-comp/ssreflect/ssrnat.v", - "formal/afp/IP_Addresses/CIDR_Split.thy", - "formal/mizar/jgraph_2.miz", - "formal/afp/Compiling-Exceptions-Correctly/document/root.tex", - "formal/lean/liquid/system_of_complexes/completion.lean", - "formal/hol/miz3/Samples/bug1.ml", - "formal/lean/mathlib/analysis/box_integral/partition/split.lean", - "formal/afp/Complex_Geometry/Circlines.thy", - "formal/afp/JinjaThreads/MM/DRF_J.thy", - "formal/afp/LinearQuantifierElim/Thys/DLO.thy", - "formal/lean/mathlib/category_theory/subobject/mono_over.lean", - "formal/afp/Ordinary_Differential_Equations/Refinement/Refine_Vector_List.thy", - "formal/afp/Iptables_Semantics/Semantics_Ternary/Normalized_Matches.thy", - "formal/mizar/lopban11.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p236.lean", - "formal/afp/MSO_Regex_Equivalence/M2L_Normalization.thy", - "formal/lean/mathlib/topology/metric_space/antilipschitz.lean", - "formal/lean/mathlib/algebra/algebra/restrict_scalars.lean", - "formal/mizar/fuzzy_2.miz", - "formal/afp/Auto2_Imperative_HOL/Imperative/LinkedList.thy", - "formal/lean/mathlib/control/lawful_fix.lean", - "formal/lean/mathlib/data/finsupp/antidiagonal.lean", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/induction/nfactltnexpnm1ngt3.lean", - "formal/afp/Polynomial_Interpolation/Missing_Unsorted.thy", - "formal/lean/mathlib/group_theory/solvable.lean", - "formal/afp/Psi_Calculi/Weaken_Transition.thy", - "formal/afp/Collections/ICF/impl/Fifo.thy", - "formal/hol/Arithmetic/pa.ml", - "formal/mizar/lfuzzy_0.miz", - "formal/lean/mathlib/probability/martingale/convergence.lean", - "formal/afp/FO_Theory_Rewriting/Util/Multihole_Context.thy", - "formal/afp/Gauss_Jordan/document/root.tex", - "formal/lean/mathlib/set_theory/surreal/dyadic.lean", - "formal/afp/Deriving/Hash_Generator/Hash_Instances.thy", - "formal/lean/mathlib/algebra/char_p/pi.lean", - "formal/afp/pGCL/Algebra.thy", - "formal/afp/Modal_Logics_for_NTS/Logical_Equivalence.thy", - "formal/afp/Smith_Normal_Form/Smith_Normal_Form_JNF.thy", - "formal/afp/Flyspeck-Tame/Computation/ArchComp.thy", - "formal/lean/sphere-eversion/to_mathlib/convolution.lean", - "formal/afp/DFS_Framework/Misc/Impl_Rev_Array_Stack.thy", - "formal/afp/Deriving/Comparator_Generator/RBT_Comparator_Impl.thy", - "formal/afp/BenOr_Kozen_Reif/BKR_Decision.thy", - "formal/afp/Three_Circles/RRI_Misc.thy", - "formal/afp/Refine_Imperative_HOL/IICF/IICF.thy", - "formal/afp/Independence_CH/Separation_Instances.thy", - 
"formal/hol/Help/lookup.doc", - "formal/lean/mathlib/algebra/order/sub.lean", - "formal/afp/Dijkstra_Shortest_Path/GraphGA.thy", - "formal/afp/Grothendieck_Schemes/Group_Extras.thy", - "formal/afp/LTL_to_DRA/Logical_Characterization.thy", - "formal/afp/Hermite_Lindemann/More_Polynomial_HLW.thy", - "formal/hol/Help/the_inductive_types.doc", - "formal/lean/mathlib/topology/algebra/algebra.lean", - "formal/afp/Stone_Kleene_Relation_Algebras/document/root.tex", - "formal/afp/Incredible_Proof_Machine/Predicate_Formulas.thy", - "formal/lean/mathlib/algebra/homology/flip.lean", - "formal/lean/mathlib/combinatorics/simple_graph/matching.lean", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1991/p6.lean", - "formal/afp/LOFT/Examples/OF_conv_test/OF_conv_test.thy", - "formal/afp/Jinja/J/WellType.thy", - "formal/lean/mathlib/order/partition/equipartition.lean", - "formal/afp/Call_Arity/CoCallAnalysisBinds.thy", - "formal/mizar/csspace4.miz", - "formal/afp/Diophantine_Eqns_Lin_Hom/Sorted_Wrt.thy", - "formal/afp/CoCon/Decision_Confidentiality/Decision_Value_Setup2.thy", - "formal/afp/SIFUM_Type_Systems/Security.thy", - "formal/afp/Security_Protocol_Refinement/Key_establish/m1_keydist_iirn.thy", - "formal/mizar/zf_fund2.miz", - "formal/afp/Decl_Sem_Fun_PL/DenotEqualitiesFSet.thy", - "formal/afp/Robinson_Arithmetic/Instance.thy", - "formal/lean/lftcm/hints/category_theory/exercise3/hint1.lean", - "formal/afp/Ordinary_Differential_Equations/IVP/Flow_Congs.thy", - "formal/afp/Inductive_Confidentiality/DolevYao/Event.thy", - "formal/afp/Knot_Theory/Link_Algebra.thy", - "formal/afp/Pop_Refinement/Definition.thy", - "formal/hol/Help/rhs.doc", - "formal/afp/JinjaDCI/Common/Auxiliary.thy", - "formal/afp/Forcing/Extensionality_Axiom.thy", - "formal/afp/CZH_Foundations/czh_digraphs/CZH_DG_Conclusions.thy", - "formal/afp/Berlekamp_Zassenhaus/Distinct_Degree_Factorization.thy", - "formal/afp/Linear_Programming/document/root.tex", - "formal/afp/Call_Arity/Arity.thy", - "formal/afp/Hoare_Time/QuantK_Sqrt.thy", - "formal/afp/Goedel_Incompleteness/Goedel_Formula.thy", - "formal/hol/EC/make.ml", - "formal/afp/UTP/toolkit/List_Lexord_Alt.thy", - "formal/afp/Correctness_Algebras/Hoare_Modal.thy", - "formal/afp/UpDown_Scheme/Up_Down.thy", - "formal/mizar/diff_1.miz", - "formal/lean/mathlib/category_theory/types.lean", - "formal/mizar/nfcont_4.miz", - "formal/lean/liquid/for_mathlib/abelian_group_object.lean", - "formal/afp/Van_Emde_Boas_Trees/VEBT_Example.thy", - "formal/lean/mathlib/algebra/opposites.lean", - "formal/afp/Deep_Learning/Tensor_Subtensor.thy", - "formal/coq/math-comp/test_suite/test_rat.v", - "formal/lean/liquid/for_mathlib/derived/derived_cat.lean", - "formal/afp/Huffman/document/root.tex", - "formal/afp/Jinja/J/JWellForm.thy", - "formal/afp/Randomised_Social_Choice/SD_Efficiency.thy", - "formal/lean/mathlib/dynamics/fixed_points/topology.lean", - "formal/afp/Smooth_Manifolds/Projective_Space.thy", - "formal/afp/Registers/Classical_Extra.thy", - "formal/lean/sphere-eversion/to_mathlib/unused/misc_manifold.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p552.lean", - "formal/afp/Monad_Memo_DP/util/Ground_Function.thy", - "formal/afp/Hermite_Lindemann/More_Min_Int_Poly.thy", - "formal/afp/Partial_Order_Reduction/Extensions/Basic_Extensions.thy", - "formal/hol/Help/reserved_words.doc", - "formal/afp/Case_Labeling/Examples/Monadic_Language.thy", - "formal/afp/CISC-Kernel/document/root.tex", - "formal/hol/Help/installed_parsers.doc", - 
"formal/lean/mathlib/category_theory/sites/pretopology.lean", - "formal/afp/GenClock/document/root.tex", - "formal/afp/Incompleteness/SyntaxN.thy", - "formal/hol/Help/derive_strong_induction.doc", - "formal/coq/analysis/nsatz_realtype.v", - "formal/afp/Safe_OCL/OCL_Object_Model.thy", - "formal/mizar/rlaffin1.miz", - "formal/lean/mathlib/analysis/normed_space/compact_operator.lean", - "formal/lean/liquid/breen_deligne/universal_map.lean", - "formal/afp/Relation_Algebra/Relation_Algebra_Functions.thy", - "formal/hol/Help/parse_pretype.doc", - "formal/mizar/jordan1h.miz", - "formal/afp/Combinable_Wands/Mask.thy", - "formal/afp/Safe_Distance/Safe_Distance.thy", - "formal/hol/Help/pure_prove_recursive_function_exists.doc", - "formal/afp/ZFC_in_HOL/document/root.tex", - "formal/afp/Discrete_Summation/Examples.thy", - "formal/afp/Extended_Finite_State_Machines/examples/Drinks_Machine_LTL.thy", - "formal/afp/JinjaThreads/DFA/Product.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p144.lean", - "formal/afp/JiveDataStoreModel/Isabelle_Store/Store.thy", - "formal/hol/Help/SPEC.doc", - "formal/afp/WorkerWrapper/Accumulator.thy", - "formal/afp/Game_Based_Crypto/Diffie_Hellman.thy", - "formal/afp/AWN/AWN_SOS_Labels.thy", - "formal/afp/Differential_Game_Logic/Axioms.thy", - "formal/hol/Help/isbra.doc", - "formal/afp/Interpreter_Optimizations/Env.thy", - "formal/afp/Hoare_Time/Nielson_VCGi.thy", - "formal/hol/Help/ONCE_REWRITE_CONV.doc", - "formal/lean/mathlib/data/int/interval.lean", - "formal/afp/Stateful_Protocol_Composition_and_Typing/Messages.thy", - "formal/afp/Goedel_HFSet_Semanticless/SyntaxN.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p509.lean", - "formal/lean/mathlib/category_theory/limits/preserves/filtered.lean", - "formal/afp/Heard_Of/Reduction.thy", - "formal/lean/mathlib/algebraic_geometry/projective_spectrum/structure_sheaf.lean", - "formal/lean/mathlib/data/list/forall2.lean", - "formal/mizar/lopban10.miz", - "formal/afp/Gauss_Jordan/Examples_Gauss_Jordan_IArrays.thy", - "formal/afp/QR_Decomposition/Miscellaneous_QR.thy", - "formal/afp/Multi_Party_Computation/DH_Ext.thy", - "formal/lean/lftcm/exercises_sources/friday/analysis.lean", - "formal/mizar/scmfsa9a.miz", - "formal/afp/LambdaAuth/document/root.tex", - "formal/mizar/yellow_8.miz", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/induction/ineq_nsqlefactn.lean", - "formal/hol/Help/REAL_LINEAR_PROVER.doc", - "formal/hol/Help/WF_INDUCT_TAC.doc", - "formal/afp/AODV/variants/c_gtobcast/C_OAodv.thy", - "formal/afp/Planarity_Certificates/Planarity/Planar_Subgraph.thy", - "formal/afp/Containers/Examples/Map_To_Mapping_Ex.thy", - "formal/lean/mathlib/group_theory/schreier.lean", - "formal/lean/mathlib/data/fintype/fin.lean", - "formal/afp/RSAPSS/Crypt.thy", - "formal/lean/mathlib/number_theory/legendre_symbol/gauss_sum.lean", - "formal/afp/Intro_Dest_Elim/IDE_Reference.thy", - "formal/lean/liquid/Lbar/Lbar_le.lean", - "formal/afp/LocalLexing/TheoremD5.thy", - "formal/afp/Irrationality_J_Hancl/Irrationality_J_Hancl.thy", - "formal/afp/Myhill-Nerode/document/root.tex", - "formal/afp/Regular_Tree_Relations/Tree_Automata/Tree_Automata_Class_Instances_Impl.thy", - "formal/lean/lftcm/solutions/thursday/category_theory/exercise3.lean", - "formal/lean/perfectoid/for_mathlib/quotient_group.lean", - "formal/afp/Planarity_Certificates/Verification/Check_Planarity_Verification.thy", - "formal/lean/liquid/for_mathlib/exact_seq4.lean", - "formal/afp/Jinja/J/BigStep.thy", - "formal/afp/Promela/document/intro.tex", - 
"formal/hol/Help/intersect.doc", - "formal/afp/CoCon/Review_Confidentiality/Review_All.thy", - "formal/hol/Complex/make.ml", - "formal/afp/Launchbury/CValue.thy", - "formal/lean/mathlib/field_theory/laurent.lean", - "formal/afp/Graph_Saturation/LabeledGraphSemantics.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p185.lean", - "formal/lean/mathlib/algebraic_geometry/locally_ringed_space/has_colimits.lean", - "formal/mizar/kurato_1.miz", - "formal/lean/mathlib/data/list/func.lean", - "formal/afp/Collections/Examples/Refine_Monadic/Bfs_Impl.thy", - "formal/afp/Iptables_Semantics/Primitive_Matchers/Ports_Normalize.thy", - "formal/hol/Help/print_all_thm.doc", - "formal/afp/Applicative_Lifting/Applicative_Option.thy", - "formal/afp/UPF/document/example-intro.tex", - "formal/hol/Examples/prover9.ml", - "formal/mizar/matrixj1.miz", - "formal/hol/Help/METIS.doc", - "formal/lean/mathlib/data/sigma/default.lean", - "formal/afp/Collections/Examples/Autoref/Coll_Test.thy", - "formal/afp/CakeML_Codegen/Utils/Test_Utils.thy", - "formal/mizar/hilbasis.miz", - "formal/mizar/mesfun11.miz", - "formal/afp/Stochastic_Matrices/Stochastic_Vector_PMF.thy", - "formal/lean/mathlib/algebra/group/ext.lean", - "formal/afp/Timed_Automata/Normalized_Zone_Semantics.thy", - "formal/lean/mathlib/category_theory/monoidal/rigid/of_equivalence.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2017/a/p2.lean", - "formal/afp/AODV/variants/c_gtobcast/C_Aodv.thy", - "formal/afp/CakeML/Evaluate_Clock.thy", - "formal/afp/Modular_Assembly_Kit_Security/Verification/Compositionality/CompositionSupport.thy", - "formal/mizar/zf_fund1.miz", - "formal/lean/mathlib/measure_theory/integral/circle_integral.lean", - "formal/mizar/lopban_6.miz", - "formal/afp/ROBDD/Abstract_Impl.thy", - "formal/hol/Library/wo.ml", - "formal/lean/mathlib/linear_algebra/quadratic_form/isometry.lean", - "formal/afp/MDP-Algorithms/code/Code_Mod.thy", - "formal/lean/mathlib/data/set_like/basic.lean", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_Introduction.thy", - "formal/lean/mathlib/data/fin/fin2.lean", - "formal/afp/BytecodeLogicJmlTypes/MultiStep.thy", - "formal/afp/Stern_Brocot/Cotree.thy", - "formal/afp/Launchbury/Terms.thy", - "formal/afp/Van_Emde_Boas_Trees/VEBT_MinMax.thy", - "formal/lean/liquid/condensed/bd_ses_aux.lean", - "formal/lean/mathlib/topology/algebra/valued_field.lean", - "formal/afp/Bicategory/IsomorphismClass.thy", - "formal/afp/Polynomials/NZM.thy", - "formal/lean/mathlib/data/rat/default.lean", - "formal/lean/mathlib/model_theory/semantics.lean", - "formal/afp/CakeML/generated/Lem_list.thy", - "formal/lean/mathlib/category_theory/functor/functorial.lean", - "formal/afp/Tarskis_Geometry/Miscellany.thy", - "formal/mizar/neckla_2.miz", - "formal/lean/mathlib/data/nat/cast_field.lean", - "formal/lean/mathlib/logic/equiv/embedding.lean", - "formal/afp/Collections/document/root.tex", - "formal/hol/Help/dest_pair.doc", - "formal/afp/WOOT_Strong_Eventual_Consistency/Sorting.thy", - "formal/afp/BNF_Operations/GFP.thy", - "formal/afp/Affine_Arithmetic/Intersection.thy", - "formal/mizar/cat_8.miz", - "formal/afp/Gauss_Jordan/Code_Rational.thy", - "formal/mizar/substut1.miz", - "formal/afp/CakeML_Codegen/Terms/Value.thy", - "formal/afp/SequentInvertibility/MultiSequents.thy", - "formal/afp/Slicing/StaticIntra/Observable.thy", - "formal/afp/BenOr_Kozen_Reif/Renegar_Proofs.thy", - "formal/afp/QHLProver/Partial_State.thy", - "formal/afp/CSP_RefTK/Conclusion.thy", - "formal/afp/Graph_Theory/Digraph.thy", - 
"formal/mizar/metric_6.miz", - "formal/afp/Relational_Paths/Rooted_Paths.thy", - "formal/afp/Cubic_Quartic_Equations/Cubic_Polynomials.thy", - "formal/lean/mathlib/algebra/category/Module/biproducts.lean", - "formal/lean/liquid/laurent_measures/thm69.lean", - "formal/hol/Rqe/pdivides.ml", - "formal/afp/Collections/ICF/impl/ICF_Impl_Chapter.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p398.lean", - "formal/hol/Help/INT_SUB_CONV.doc", - "formal/afp/LinearQuantifierElim/Thys/PresArith.thy", - "formal/afp/Polynomial_Factorization/Missing_Multiset.thy", - "formal/hol/Help/f_f_.doc", - "formal/hol/Arithmetic/fol.ml", - "formal/lean/mathlib/ring_theory/flat.lean", - "formal/lean/mathlib/algebra/ring/pi.lean", - "formal/afp/Separation_Algebra/Sep_Eq.thy", - "formal/afp/Hoare_Time/Big_StepT.thy", - "formal/lean/mathlib/set_theory/ordinal/principal.lean", - "formal/afp/Isabelle_Meta_Model/meta_isabelle/Meta_SML.thy", - "formal/lean/liquid/for_mathlib/homology.lean", - "formal/afp/Concurrent_Revisions/document/root.tex", - "formal/afp/Simpl/ex/Quicksort.thy", - "formal/mizar/zmodlat3.miz", - "formal/lean/mathlib/category_theory/limits/shapes/comm_sq.lean", - "formal/afp/Slicing/StaticIntra/WeakControlDependence.thy", - "formal/afp/Perron_Frobenius/Spectral_Radius_Theory_2.thy", - "formal/lean/mathlib/analysis/special_functions/complex/circle.lean", - "formal/afp/Network_Security_Policy_Verification/Examples/Distributed_WebApp.thy", - "formal/lean/lftcm/solutions/thursday/order.lean", - "formal/afp/Routing/Linorder_Helper.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/amgm_sumasqdivbgeqsuma.lean", - "formal/afp/Gaussian_Integers/Gaussian_Integers_Sums_Of_Two_Squares.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p568.lean", - "formal/afp/pGCL/Tutorial/Primitives.thy", - "formal/lean/mathlib/combinatorics/simple_graph/inc_matrix.lean", - "formal/afp/VerifyThis2019/Parallel_Multiset_Fold.thy", - "formal/mizar/orders_5.miz", - "formal/lean/mathlib/data/seq/parallel.lean", - "formal/afp/Core_DOM/standard/Core_DOM_Heap_WF.thy", - "formal/afp/Auto2_Imperative_HOL/Imperative/RBTree_Impl.thy", - "formal/hol/Help/RULE_ASSUM_TAC.doc", - "formal/afp/Deep_Learning/document/root.tex", - "formal/lean/mathlib/category_theory/limits/shapes/concrete_category.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p452.lean", - "formal/afp/Ordinary_Differential_Equations/IVP/Flow.thy", - "formal/afp/Factored_Transition_System_Bounding/SystemAbstraction.thy", - "formal/lean/mathlib/data/real/cau_seq_completion.lean", - "formal/mizar/fomodel2.miz", - "formal/lean/mathlib/data/pfunctor/multivariate/basic.lean", - "formal/afp/Monomorphic_Monad/document/root.tex", - "formal/afp/Complex_Geometry/Unitary_Matrices.thy", - "formal/afp/Valuation/Valuation3.thy", - "formal/afp/Word_Lib/document/root.tex", - "formal/afp/UTP/utp/utp_unrest.thy", - "formal/afp/IEEE_Floating_Point/Conversion_IEEE_Float.thy", - "formal/afp/Zeta_Function/Zeta_Library.thy", - "formal/afp/Decl_Sem_Fun_PL/RelationalSemFSet.thy", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_Rel.thy", - "formal/afp/Pairing_Heap/Pairing_Heap_List1.thy", - "formal/lean/mathlib/number_theory/class_number/number_field.lean", - "formal/afp/Bounded_Deducibility_Security/Abstract_BD_Security.thy", - "formal/afp/Network_Security_Policy_Verification/attic.thy", - "formal/afp/Concurrent_Ref_Alg/Rely_Quotient.thy", - "formal/hol/100/fourier.ml", - "formal/afp/IMAP-CRDT/IMAP-proof.thy", - 
"formal/afp/Metalogic_ProofChecker/SortConstants.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p282.lean", - "formal/afp/Van_Emde_Boas_Trees/VEBT_DelImperative.thy", - "formal/lean/liquid/for_mathlib/ennreal.lean", - "formal/lean/mathlib/topology/path_connected.lean", - "formal/lean/mathlib/field_theory/normal.lean", - "formal/afp/Possibilistic_Noninterference/Compositionality.thy", - "formal/lean/mathlib/algebra/continued_fractions/translations.lean", - "formal/mizar/cohsp_1.miz", - "formal/mizar/modelc_1.miz", - "formal/hol/Functionspaces/make.ml", - "formal/afp/Graph_Saturation/GraphRewriting.thy", - "formal/afp/Ordinal/OrdinalInduct.thy", - "formal/afp/Groebner_Macaulay/Groebner_Macaulay_Examples.thy", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1990/p2.lean", - "formal/afp/Blue_Eyes/document/root.tex", - "formal/afp/BTree/Imperative_Loops.thy", - "formal/afp/Sturm_Sequences/document/root.tex", - "formal/lean/mathlib/linear_algebra/cross_product.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p459.lean", - "formal/afp/Hood_Melville_Queue/document/root.tex", - "formal/lean/liquid/for_mathlib/exact_seq2.lean", - "formal/afp/Types_To_Sets_Extension/ETTS/Manual/ETTS_Syntax.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p156.lean", - "formal/afp/PAC_Checker/PAC_More_Poly.thy", - "formal/hol/Help/MK_COMB_TAC.doc", - "formal/afp/Collections/ICF/impl/ArrayMapImpl.thy", - "formal/mizar/cqc_sim1.miz", - "formal/afp/Transition_Systems_and_Automata/Automata/NBTA/NBTA_Combine.thy", - "formal/lean/sphere-eversion/to_mathlib/topology/algebra/module.lean", - "formal/lean/mathlib/geometry/manifold/instances/sphere.lean", - "formal/afp/SC_DOM_Components/Core_DOM_SC_DOM_Components.thy", - "formal/mizar/lattice2.miz", - "formal/afp/CRDT/Util.thy", - "formal/afp/CoCon/Paper_Confidentiality/Paper_Aut.thy", - "formal/afp/Delta_System_Lemma/Cofinality.thy", - "formal/afp/Native_Word/Native_Word_Test_PolyML64.thy", - "formal/lean/mathlib/ring_theory/prime.lean", - "formal/lean/liquid/for_mathlib/Cech/split.lean", - "formal/afp/JinjaThreads/MM/SC_Collections.thy", - "formal/lean/mathlib/algebraic_topology/dold_kan/faces.lean", - "formal/afp/Jinja/DFA/Kildall_1.thy", - "formal/afp/Native_Word/Native_Word_Test_Emu.thy", - "formal/afp/DiskPaxos/DiskPaxos_Invariant.thy", - "formal/afp/Consensus_Refined/Quorums.thy", - "formal/lean/mathlib/measure_theory/lattice.lean", - "formal/afp/Lambda_Free_KBOs/Lambda_Free_TKBO_Coefs.thy", - "formal/hol/Help/lhs.doc", - "formal/afp/Applicative_Lifting/Stream_Algebra.thy", - "formal/afp/HOLCF-Prelude/Data_Tuple.thy", - "formal/afp/MDP-Algorithms/examples/Code_Inventory.thy", - "formal/afp/Impossible_Geometry/Impossible_Geometry.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p136.lean", - "formal/afp/Strong_Security/Up_To_Technique.thy", - "formal/coq/math-comp/ssreflect/fintype.v", - "formal/afp/Randomised_BSTs/Randomised_BSTs.thy", - "formal/afp/Network_Security_Policy_Verification/Examples/Impl_List_Playground.thy", - "formal/coq/analysis/realfun.v", - "formal/afp/Transitive_Models/Pointed_DC_Relative.thy", - "formal/afp/Sigma_Commit_Crypto/Sigma_AND.thy", - "formal/afp/Signature_Groebner/Signature_Groebner.thy", - "formal/afp/Transition_Systems_and_Automata/Automata/DBTA/DGBTA.thy", - "formal/hol/Help/many.doc", - "formal/afp/Combinable_Wands/PartialHeapSA.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p335.lean", - "formal/afp/VerifyThis2018/lib/DRAT_Misc.thy", - 
"formal/afp/InfPathElimination/Labels.thy", - "formal/afp/Affine_Arithmetic/Optimize_Float.thy", - "formal/mizar/catalan2.miz", - "formal/lean/liquid/thm95/default.lean", - "formal/afp/Separation_Logic_Imperative_HOL/Hoare_Triple.thy", - "formal/afp/Auto2_Imperative_HOL/Functional/Mapping_Str.thy", - "formal/afp/Interpreter_Optimizations/OpInl.thy", - "formal/lean/mathlib/measure_theory/measure/complex_lebesgue.lean", - "formal/coq/odd-order/PFsection7.v", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1964/p2.lean", - "formal/afp/Mersenne_Primes/Lucas_Lehmer.thy", - "formal/afp/InfPathElimination/document/intro.tex", - "formal/afp/JinjaThreads/Compiler/J1JVM.thy", - "formal/afp/Simple_Firewall/SimpleFw_Semantics.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p427.lean", - "formal/afp/Padic_Ints/Supplementary_Ring_Facts.thy", - "formal/afp/Buildings/Coxeter.thy", - "formal/afp/Graph_Theory/Subdivision.thy", - "formal/afp/Isabelle_C/C11-FrontEnd/document/generated/ontologies.tex", - "formal/mizar/t_1topsp.miz", - "formal/lean/mathlib/algebra/quotient.lean", - "formal/coq/math-comp/test_suite/output.v", - "formal/afp/Transition_Systems_and_Automata/Automata/DCA/DCA_Combine.thy", - "formal/afp/HRB-Slicing/StaticInter/Distance.thy", - "formal/afp/Ordered_Resolution_Prover/Lazy_List_Chain.thy", - "formal/afp/Weighted_Path_Order/document/root.tex", - "formal/afp/Combinatorics_Words_Lyndon/document/root.tex", - "formal/afp/CoCon/Paper_Confidentiality/Paper_Aut_PC.thy", - "formal/afp/DiscretePricing/Option_Price_Examples.thy", - "formal/afp/Real_Power/RatPower.thy", - "formal/afp/Call_Arity/CoCallFix.thy", - "formal/afp/Group-Ring-Module/Algebra1.thy", - "formal/lean/mathlib/analysis/box_integral/box/subbox_induction.lean", - "formal/afp/Incredible_Proof_Machine/Natural_Deduction.thy", - "formal/afp/Slicing/While/WellFormed.thy", - "formal/afp/Coinductive/Examples/LMirror.thy", - "formal/afp/IEEE_Floating_Point/document/root.tex", - "formal/afp/ConcurrentIMP/LTL.thy", - "formal/afp/Goedel_Incompleteness/Jeroslow_Simplified.thy", - "formal/afp/AODV/Seq_Invariants.thy", - "formal/lean/liquid/for_mathlib/exact_lift_desc.lean", - "formal/afp/Smith_Normal_Form/Smith_Normal_Form.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2020/b/p6.lean", - "formal/afp/Syntax_Independent_Logic/Standard_Model.thy", - "formal/afp/JinjaThreads/J/Progress.thy", - "formal/lean/mathlib/linear_algebra/multilinear/basis.lean", - "formal/afp/Dijkstra_Shortest_Path/HashGraphImpl.thy", - "formal/afp/JinjaThreads/Execute/Round_Robin.thy", - "formal/mizar/waybel_1.miz", - "formal/afp/Padic_Ints/Extended_Int.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/numbertheory/2dvd4expn.lean", - "formal/afp/Encodability_Process_Calculi/SourceTargetRelation.thy", - "formal/hol/Help/TRANS_TAC.doc", - "formal/afp/Signature_Groebner/More_MPoly.thy", - "formal/afp/Hybrid_Multi_Lane_Spatial_Logic/perfect/Perfect_Sensors.thy", - "formal/lean/mathlib/linear_algebra/alternating.lean", - "formal/lean/mathlib/analysis/inner_product_space/gram_schmidt_ortho.lean", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1974/p5.lean", - "formal/afp/Prime_Distribution_Elementary/Primes_Omega.thy", - "formal/afp/Graph_Theory/Digraph_Component_Vwalk.thy", - "formal/lean/mathlib/data/multiset/default.lean", - "formal/afp/Relation_Algebra/Relation_Algebra.thy", - "formal/lean/mathlib/analysis/inner_product_space/rayleigh.lean", - "formal/afp/JinjaThreads/DFA/Semilat.thy", - "formal/hol/EC/nistp521.ml", - 
"formal/afp/DFS_Framework/Impl/Structural/General_DFS_Structure.thy", - "formal/afp/Containers/Examples/Card_Datatype_Ex.thy", - "formal/mizar/ringder1.miz", - "formal/afp/Frequency_Moments/Probability_Ext.thy", - "formal/lean/mathlib/linear_algebra/exterior_algebra/of_alternating.lean", - "formal/afp/Auto2_Imperative_HOL/Functional/Connectivity.thy", - "formal/lean/mathlib/group_theory/schur_zassenhaus.lean", - "formal/afp/Call_Arity/ConstOn.thy", - "formal/afp/Deep_Learning/Tensor_Product.thy", - "formal/hol/Help/NANOCOP_TAC.doc", - "formal/afp/PseudoHoops/PseudoHoopFilters.thy", - "formal/lean/mathlib/probability/martingale/basic.lean", - "formal/afp/Monad_Memo_DP/example/Counting_Tiles.thy", - "formal/afp/Density_Compiler/PDF_Density_Contexts.thy", - "formal/lean/mathlib/algebra/module/linear_map.lean", - "formal/mizar/waybel18.miz", - "formal/afp/Optimal_BST/Optimal_BST2.thy", - "formal/coq/math-comp/field/algnum.v", - "formal/mizar/algstr_1.miz", - "formal/lean/mathlib/topology/compact_open.lean", - "formal/afp/JinjaThreads/BV/LBVJVM.thy", - "formal/afp/Formula_Derivatives/Presburger_Formula.thy", - "formal/afp/Forcing/Ordinals_In_MG.thy", - "formal/hol/Help/finished.doc", - "formal/afp/Simplicial_complexes_and_boolean_functions/Binary_operations.thy", - "formal/afp/CRDT/RGA.thy", - "formal/afp/Modular_arithmetic_LLL_and_HNF_algorithms/Storjohann.thy", - "formal/afp/UTP/utp/utp_expr.thy", - "formal/afp/FO_Theory_Rewriting/Util/Ground_MCtxt.thy", - "formal/afp/Universal_Turing_Machine/Uncomputable.thy", - "formal/coq/odd-order/PFsection10.v", - "formal/afp/HRB-Slicing/StaticInter/Observable.thy", - "formal/afp/Prpu_Maxflow/Prpu_Common_Impl.thy", - "formal/afp/Featherweight_OCL/UML_Types.thy", - "formal/afp/CryptHOL/GPV_Applicative.thy", - "formal/lean/mathlib/category_theory/limits/shapes/zero_objects.lean", - "formal/lean/mathlib/data/finsupp/basic.lean", - "formal/hol/Help/file_on_path.doc", - "formal/afp/Word_Lib/Bitwise.thy", - "formal/mizar/algstr_2.miz", - "formal/afp/Knuth_Bendix_Order/KBO.thy", - "formal/lean/mathlib/data/nat/psub.lean", - "formal/lean/mathlib/data/analysis/topology.lean", - "formal/afp/Complx/OG_Hoare.thy", - "formal/hol/Help/DISJ_CANON_CONV.doc", - "formal/lean/mathlib/analysis/calculus/inverse.lean", - "formal/afp/Count_Complex_Roots/Count_Complex_Roots_Examples.thy", - "formal/afp/Category/Cat.thy", - "formal/lean/liquid/for_mathlib/exact_seq3.lean", - "formal/afp/Ordinary_Differential_Equations/Refinement/GenCF_No_Comp.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/2varlineareq_xpeeq7_2xpeeq3_eeq11_xeqn4.lean", - "formal/afp/Goedel_Incompleteness/document/root.tex", - "formal/lean/mathlib/topology/metric_space/cau_seq_filter.lean", - "formal/lean/mathlib/algebra/category/Group/colimits.lean", - "formal/afp/Cubic_Quartic_Equations/Cardanos_Formula.thy", - "formal/afp/Independence_CH/Cardinal_Preservation.thy", - "formal/lean/liquid/prop_92/concrete.lean", - "formal/afp/Goedel_Incompleteness/Tarski.thy", - "formal/afp/Extended_Finite_State_Machine_Inference/heuristics/Weak_Subsumption.thy", - "formal/mizar/borsuk_2.miz", - "formal/afp/Jordan_Normal_Form/Ring_Hom_Matrix.thy", - "formal/afp/DiskPaxos/DiskPaxos_Model.thy", - "formal/hol/Help/term_order.doc", - "formal/afp/AODV/variants/a_norreqid/A_OAodv.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2008/a/p8.lean", - "formal/afp/Dict_Construction/Documentation/Introduction.thy", - "formal/lean/mathlib/data/zmod/quotient.lean", - "formal/mizar/grzlog_1.miz", - 
"formal/lean/mathlib/algebra/invertible.lean", - "formal/afp/Weighted_Path_Order/Multiset_Extension_Pair_Impl.thy", - "formal/lean/mathlib/data/qpf/multivariate/constructions/const.lean", - "formal/lean/mathlib/combinatorics/partition.lean", - "formal/lean/mathlib/topology/algebra/module/finite_dimension.lean", - "formal/afp/Optimal_BST/Optimal_BST_Code.thy", - "formal/lean/perfectoid/examples/padics.lean", - "formal/afp/Perfect-Number-Thm/Perfect.thy", - "formal/afp/Collections/Lib/RBT_add.thy", - "formal/afp/Bounded_Deducibility_Security/document/intro.tex", - "formal/afp/Auto2_HOL/HOL/Arith_Thms.thy", - "formal/afp/List_Inversions/List_Inversions.thy", - "formal/hol/Help/PINST.doc", - "formal/afp/Stable_Matching/Contracts.thy", - "formal/lean/mathlib/algebraic_geometry/presheafed_space/has_colimits.lean", - "formal/afp/Core_SC_DOM/common/classes/CharacterDataClass.thy", - "formal/afp/Probabilistic_Noninterference/Syntactic_Criteria.thy", - "formal/hol/Help/tyvars.doc", - "formal/afp/Shadow_SC_DOM/tests/slots_fallback.thy", - "formal/lean/mathlib/geometry/manifold/tangent_bundle.lean", - "formal/afp/Smooth_Manifolds/Chart.thy", - "formal/lean/mathlib/category_theory/idempotents/basic.lean", - "formal/lean/mathlib/linear_algebra/free_module/basic.lean", - "formal/afp/Types_Tableaus_and_Goedels_God/FittingProof.thy", - "formal/afp/Word_Lib/Machine_Word_32_Basics.thy", - "formal/lean/mathlib/ring_theory/witt_vector/frobenius_fraction_field.lean", - "formal/afp/Regression_Test_Selection/Common/CollectionSemantics.thy", - "formal/afp/Nominal2/Nominal2.thy", - "formal/afp/Delta_System_Lemma/document/header-delta-system.tex", - "formal/afp/Core_SC_DOM/common/pointers/CharacterDataPointer.thy", - "formal/afp/Flyspeck-Tame/Invariants.thy", - "formal/afp/Pop_Refinement/Second_Example.thy", - "formal/hol/Help/mk_prover.doc", - "formal/hol/Library/bitmatch.ml", - "formal/afp/JinjaThreads/Compiler/Correctness1Threaded.thy", - "formal/hol/Help/INT_LT_CONV.doc", - "formal/afp/Pell/Pell_Algorithm.thy", - "formal/lean/liquid/for_mathlib/nnrat.lean", - "formal/lean/mathlib/order/succ_pred/relation.lean", - "formal/hol/EC/montwe.ml", - "formal/lean/mathlib/linear_algebra/multilinear/basic.lean", - "formal/afp/Approximation_Algorithms/document/root.tex", - "formal/afp/SIFUM_Type_Systems/LocallySoundModeUse.thy", - "formal/lean/mathlib/geometry/manifold/bump_function.lean", - "formal/afp/Knuth_Bendix_Order/document/root.tex", - "formal/afp/JiveDataStoreModel/Isabelle/JML.thy", - "formal/afp/Timed_Automata/DBM_Zone_Semantics.thy", - "formal/lean/mathlib/data/nat/digits.lean", - "formal/afp/Collections/ICF/impl/FTPrioUniqueImpl.thy", - "formal/mizar/sfmastr3.miz", - "formal/afp/Ergodic_Theory/Kohlberg_Neyman_Karlsson.thy", - "formal/mizar/ami_wstd.miz", - "formal/lean/liquid/for_mathlib/pi_induced.lean", - "formal/hol/Help/ASM_ARITH_TAC.doc", - "formal/afp/KBPs/Traces.thy", - "formal/hol/Help/forall2.doc", - "formal/lean/mathlib/representation_theory/fdRep.lean", - "formal/afp/KD_Tree/document/root.tex", - "formal/afp/Smith_Normal_Form/SNF_Uniqueness.thy", - "formal/lean/mathlib/set_theory/ordinal/topology.lean", - "formal/mizar/hilb10_3.miz", - "formal/afp/Iptables_Semantics/Primitive_Matchers/Primitive_Abstract.thy", - "formal/afp/Knot_Theory/Tangle_Moves.thy", - "formal/afp/List-Infinite/ListInf/ListInf_Prefix.thy", - "formal/lean/liquid/pseudo_normed_group/breen_deligne.lean", - "formal/afp/Noninterference_Sequential_Composition/document/root.tex", - "formal/afp/Functional-Automata/MaxChop.thy", - 
"formal/afp/Probabilistic_Timed_Automata/PTA.thy", - "formal/lean/mathlib/algebra/squarefree.lean", - "formal/mizar/latquasi.miz", - "formal/afp/Groebner_Bases/Groebner_PM.thy", - "formal/afp/JinjaThreads/MM/JMM_Compiler.thy", - "formal/afp/Taylor_Models/Float_Topology.thy", - "formal/afp/Applicative_Lifting/document/root.tex", - "formal/afp/Security_Protocol_Refinement/Key_establish/m3_kerberos5.thy", - "formal/afp/IMP2/lib/IMP2_Utils.thy", - "formal/afp/FunWithFunctions/document/root.tex", - "formal/mizar/nbvectsp.miz", - "formal/mizar/catalg_1.miz", - "formal/coq/math-comp/solvable/gseries.v", - "formal/hol/Help/HYP_UPPERCASE.doc", - "formal/mizar/taxonom2.miz", - "formal/lean/mathlib/algebra/direct_sum/basic.lean", - "formal/afp/Iptables_Semantics/Primitive_Matchers/Interface_Replace.thy", - "formal/afp/JinjaThreads/BV/Effect.thy", - "formal/afp/Echelon_Form/Echelon_Form.thy", - "formal/afp/Regular_Tree_Relations/AGTT.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p487.lean", - "formal/mizar/wsierp_1.miz", - "formal/afp/CakeML/Evaluate_Single.thy", - "formal/afp/Slicing/StaticIntra/StandardControlDependence.thy", - "formal/afp/Flyspeck-Tame/ArchCompProps.thy", - "formal/lean/mathlib/category_theory/abelian/basic.lean", - "formal/lean/liquid/condensed/bd_lemma.lean", - "formal/lean/mathlib/linear_algebra/adic_completion.lean", - "formal/lean/mathlib/measure_theory/pi_system.lean", - "formal/mizar/heyting3.miz", - "formal/mizar/waybel25.miz", - "formal/hol/Help/REAL_INT_EQ_CONV.doc", - "formal/afp/Network_Security_Policy_Verification/TopoS_withOffendingFlows.thy", - "formal/lean/mathlib/topology/sheaves/sheaf_condition/equalizer_products.lean", - "formal/afp/Physical_Quantities/SI_Accepted.thy", - "formal/afp/Modular_Assembly_Kit_Security/SecuritySpecification/InformationFlowProperties.thy", - "formal/afp/Lambda_Free_KBOs/Lambda_Free_KBO_Basic.thy", - "formal/lean/mathlib/data/nat/cast/defs.lean", - "formal/lean/mathlib/category_theory/differential_object.lean", - "formal/afp/Pratt_Certificate/document/root.tex", - "formal/afp/Algebraic_Numbers/Show_Real_Precise.thy", - "formal/afp/Independence_CH/ZF_Trans_Interpretations.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p598.lean", - "formal/afp/Simpl/HoareTotalProps.thy", - "formal/afp/Algebraic_VCs/AVC_KAD/VC_KAD_Examples2.thy", - "formal/afp/Zeta_Function/Zeta_Laurent_Expansion.thy", - "formal/mizar/unialg_2.miz", - "formal/mizar/papdesaf.miz", - "formal/mizar/jordan16.miz", - "formal/afp/Stochastic_Matrices/Stochastic_Matrix_Perron_Frobenius.thy", - "formal/afp/Types_To_Sets_Extension/Examples/TTS_Foundations/Algebra/Type_Semigroups.thy", - "formal/lean/mathlib/topology/discrete_quotient.lean", - "formal/afp/Isabelle_Meta_Model/document/root.tex", - "formal/lean/mathlib/computability/partrec_code.lean", - "formal/mizar/mmlquer2.miz", - "formal/afp/Integration/RealRandVar.thy", - "formal/afp/Combinable_Wands/PosRat.thy", - "formal/afp/CryptoBasedCompositionalProperties/Secrecy.thy", - "formal/afp/Gromov_Hyperbolicity/document/root.tex", - "formal/afp/Collections/ICF/impl/ArrayHashSet.thy", - "formal/afp/CakeML/generated/CakeML/Namespace.thy", - "formal/hol/Boyer_Moore/equalities.ml", - "formal/afp/Schutz_Spacetime/Util.thy", - "formal/afp/Types_To_Sets_Extension/Examples/SML_Relativization/Lattices/SML_Linorders.thy", - "formal/afp/AWN/AWN_Invariants.thy", - "formal/afp/Consensus_Refined/MRU/Three_Steps.thy", - "formal/afp/CakeML/generated/CakeML/TypeSystem.thy", - "formal/afp/Simplex/Linear_Poly_Maps.thy", - 
"formal/afp/Budan_Fourier/Budan_Fourier.thy", - "formal/afp/Concurrent_Ref_Alg/Conjunctive_Sequential.thy", - "formal/hol/Help/monotonicity_theorems.doc", - "formal/lean/mathlib/order/hom/bounded.lean", - "formal/afp/Higher_Order_Terms/Term_Class.thy", - "formal/afp/Verified_SAT_Based_AI_Planning/CNF_Supplement.thy", - "formal/hol/Help/list_mk_conj.doc", - "formal/lean/mathlib/representation_theory/maschke.lean", - "formal/afp/Hybrid_Systems_VCs/document/root.tex", - "formal/afp/WorkerWrapper/Nats.thy", - "formal/hol/Help/metisverb.doc", - "formal/afp/Attack_Trees/GDPRhealthcare.thy", - "formal/afp/MFOTL_Monitor/Slicing.thy", - "formal/lean/mathlib/algebra/order/nonneg.lean", - "formal/mizar/bhsp_5.miz", - "formal/afp/CryptHOL/List_Bits.thy", - "formal/lean/mathlib/category_theory/monoidal/free/coherence.lean", - "formal/afp/ConcurrentGC/Model.thy", - "formal/hol/Examples/multiwf.ml", - "formal/afp/Virtual_Substitution/EqualityVS.thy", - "formal/lean/mathlib/analysis/normed_space/banach.lean", - "formal/lean/mathlib/data/int/cast.lean", - "formal/afp/Independence_CH/Forcing_Theorems.thy", - "formal/hol/Library/jacobi.ml", - "formal/afp/Applicative_Lifting/Applicative_Test.thy", - "formal/afp/Collections/GenCF/Gen/Gen_Comp.thy", - "formal/afp/Graph_Saturation/LabeledGraphs.thy", - "formal/lean/mathlib/measure_theory/measure/measure_space.lean", - "formal/lean/liquid/condensed/extr/lift_comphaus.lean", - "formal/lean/liquid/pseudo_normed_group/category/CompHausFiltPseuNormGrpWithTinv.lean", - "formal/afp/CZH_Foundations/czh_semicategories/CZH_SMC_Subsemicategory.thy", - "formal/coq/math-comp/solvable/sylow.v", - "formal/lean/perfectoid/for_mathlib/uniform_space/group_basis.lean", - "formal/afp/QR_Decomposition/Generalizations2.thy", - "formal/afp/Collections/GenCF/Gen/Gen_Map.thy", - "formal/mizar/yellow13.miz", - "formal/afp/Clean/src/MonadSE.thy", - "formal/afp/Dynamic_Tables/Tables_nat.thy", - "formal/afp/IsaNet/infrastructure/Message.thy", - "formal/hol/Help/.pipeparser.doc", - "formal/afp/Differential_Dynamic_Logic/Ids.thy", - "formal/afp/Ordinary_Differential_Equations/Numerics/Refine_Rigorous_Numerics.thy", - "formal/lean/liquid/pseudo_normed_group/category/strictCompHausFiltPseuNormGrp.lean", - "formal/hol/Help/hol_expand_directory.doc", - "formal/coq/odd-order/BGsection10.v", - "formal/afp/Forcing/Utils.thy", - "formal/hol/Multivariate/determinants.ml", - "formal/lean/mathlib/group_theory/nilpotent.lean", - "formal/lean/mathlib/set_theory/game/pgame.lean", - "formal/afp/AWN/AWN.thy", - "formal/hol/Help/assocd.doc", - "formal/afp/Higher_Order_Terms/Name.thy", - "formal/lean/mathlib/model_theory/graph.lean", - "formal/lean/mathlib/dynamics/circle/rotation_number/translation_number.lean", - "formal/afp/IMP2/automation/IMP2_Basic_Decls.thy", - "formal/afp/Psi_Calculi/Weak_Congruence.thy", - "formal/afp/LTL_Master_Theorem/Code_Export.thy", - "formal/lean/mathlib/category_theory/limits/shapes/split_coequalizer.lean", - "formal/afp/LLL_Factorization/document/root.tex", - "formal/mizar/jgraph_4.miz", - "formal/lean/mathlib/algebra/monoid_algebra/basic.lean", - "formal/afp/Correctness_Algebras/Approximation.thy", - "formal/mizar/group_1a.miz", - "formal/mizar/diff_3.miz", - "formal/hol/100/euler.ml", - "formal/lean/mathlib/algebra/continued_fractions/computation/basic.lean", - "formal/afp/LTL_to_DRA/af.thy", - "formal/lean/lftcm/hints/category_theory/exercise6/hint1.lean", - "formal/afp/Completeness/Formula.thy", - "formal/lean/liquid/polyhedral_lattice/topology.lean", - 
"formal/afp/Polynomials/OAlist_Poly_Mapping.thy", - "formal/lean/mathlib/algebra/module/torsion.lean", - "formal/afp/EdmondsKarp_Maxflow/Edka_Benchmark_Export.thy", - "formal/mizar/asympt_2.miz", - "formal/afp/MiniSail/Examples.thy", - "formal/lean/mathlib/algebra/group/prod.lean", - "formal/afp/BinarySearchTree/BinaryTree.thy", - "formal/hol/Help/disjuncts.doc", - "formal/afp/Refine_Imperative_HOL/Sepref_Chapter_Setup.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/2020/p2.lean", - "formal/mizar/topreal6.miz", - "formal/mizar/dblseq_1.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p440.lean", - "formal/afp/Native_Word/Code_Target_Bits_Int.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/sqineq_at2malt1.lean", - "formal/afp/RIPEMD-160-SPARK/document/root.tex", - "formal/lean/mathlib/linear_algebra/matrix/determinant.lean", - "formal/afp/FOL-Fitting/FOL_Fitting.thy", - "formal/afp/Forcing/Forcing_Data.thy", - "formal/afp/Modal_Logics_for_NTS/Weak_Logical_Equivalence.thy", - "formal/afp/Tycon/Writer_Monad.thy", - "formal/lean/mathlib/order/ideal.lean", - "formal/hol/Help/extend_rectype_net.doc", - "formal/lean/mathlib/group_theory/free_abelian_group_finsupp.lean", - "formal/afp/Types_Tableaus_and_Goedels_God/GoedelProof_P1.thy", - "formal/lean/mathlib/computability/tm_computable.lean", - "formal/hol/Help/orelse_tcl_.doc", - "formal/afp/Regular_Tree_Relations/RR2_Infinite_Q_infinity.thy", - "formal/lean/mathlib/geometry/manifold/cont_mdiff_map.lean", - "formal/afp/UTP/toolkit/Map_Extra.thy", - "formal/afp/Probabilistic_Timed_Automata/library/Basic.thy", - "formal/afp/CCS/Weak_Sim.thy", - "formal/afp/Extended_Finite_State_Machine_Inference/document/root.tex", - "formal/afp/SimplifiedOntologicalArgument/DisableKodkodScala.thy", - "formal/afp/DPRM_Theorem/Register_Machine/MachineMasking.thy", - "formal/afp/CCS/Weak_Cong_Semantics.thy", - "formal/afp/ROBDD/Array_List.thy", - "formal/afp/Isabelle_Marries_Dirac/Deutsch.thy", - "formal/afp/Encodability_Process_Calculi/CombinedCriteria.thy", - "formal/lean/mathlib/algebra/ring/boolean_ring.lean", - "formal/afp/Factor_Algebraic_Polynomial/Multivariate_Resultant.thy", - "formal/afp/Gromov_Hyperbolicity/Gromov_Hyperbolicity.thy", - "formal/afp/CakeML_Codegen/Backend/CakeML_Correctness.thy", - "formal/lean/liquid/for_mathlib/topology.lean", - "formal/afp/AODV/Global_Invariants.thy", - "formal/afp/Category3/Adjunction.thy", - "formal/afp/Complex_Geometry/More_Transcendental.thy", - "formal/afp/AODV/Aodv_Loop_Freedom.thy", - "formal/afp/AODV/document/root.tex", - "formal/lean/mathlib/number_theory/class_number/admissible_absolute_value.lean", - "formal/afp/Propositional_Proof_Systems/Resolution_Compl_SC_Full.thy", - "formal/afp/CoreC++/SubObj.thy", - "formal/mizar/gr_cy_3.miz", - "formal/afp/Berlekamp_Zassenhaus/Finite_Field_Record_Based.thy", - "formal/lean/mathlib/analysis/complex/isometry.lean", - "formal/afp/Boolos_Curious_Inference/Boo1.thy", - "formal/lean/mathlib/ring_theory/localization/cardinality.lean", - "formal/afp/Optimal_BST/Optimal_BST_Memo.thy", - "formal/afp/C2KA_DistributedSystems/CKA.thy", - "formal/hol/Help/string_of_term.doc", - "formal/afp/CSP_RefTK/Process_norm.thy", - "formal/afp/AODV/variants/c_gtobcast/C_Fresher.thy", - "formal/afp/CakeML_Codegen/Rewriting/Rewriting_Pterm_Elim.thy", - "formal/afp/CoSMeDis/Outer_Friend_Confidentiality/Issuer/Outer_Friend_Issuer_Openness.thy", - "formal/afp/Virtual_Substitution/VSAlgos.thy", - "formal/afp/Differential_Dynamic_Logic/Lib.thy", - 
"formal/afp/Markov_Models/Markov_Decision_Process.thy", - "formal/mizar/algspec1.miz", - "formal/lean/mathlib/analysis/calculus/deriv.lean", - "formal/hol/Help/EVERY.doc", - "formal/mizar/algstr_4.miz", - "formal/afp/HereditarilyFinite/Finite_Automata.thy", - "formal/hol/EC/nistp384.ml", - "formal/lean/liquid/for_mathlib/unflip.lean", - "formal/afp/Iptables_Semantics/Examples/Synology_Diskstation_DS414/iptables_Ln_tuned_parsed.thy", - "formal/afp/Monad_Memo_DP/example/All_Examples.thy", - "formal/afp/IMP2/automation/IMP2_Var_Postprocessor.thy", - "formal/coq/math-comp/character/all_character.v", - "formal/afp/pGCL/Determinism.thy", - "formal/afp/CoSMeDis/Outer_Friend_Confidentiality/Issuer/Outer_Friend_Issuer_State_Indistinguishability.thy", - "formal/mizar/heyting2.miz", - "formal/mizar/ring_1.miz", - "formal/lean/mathlib/number_theory/legendre_symbol/zmod_char.lean", - "formal/lean/liquid/laurent_measures/bounded.lean", - "formal/afp/Tail_Recursive_Functions/document/root.tex", - "formal/lean/mathlib/set_theory/game/impartial.lean", - "formal/afp/Well_Quasi_Orders/Higman_OI.thy", - "formal/afp/Pi_Calculus/Weak_Late_Sim_Pres.thy", - "formal/afp/Robbins-Conjecture/Robbins_Conjecture.thy", - "formal/afp/Collections/Lib/Code_Target_ICF.thy", - "formal/afp/BTree/Basic_Assn.thy", - "formal/afp/Pi_Calculus/Agent.thy", - "formal/lean/mathlib/ring_theory/witt_vector/init_tail.lean", - "formal/afp/Groebner_Bases/General.thy", - "formal/afp/Optics/Lenses.thy", - "formal/afp/UPF/Normalisation.thy", - "formal/lean/mathlib/ring_theory/roots_of_unity.lean", - "formal/afp/JinjaThreads/DFA/Kildall.thy", - "formal/lean/lftcm/hints/category_theory/exercise3/hint6.lean", - "formal/afp/Menger/Y_eq_new_last.thy", - "formal/hol/Help/tl.doc", - "formal/afp/PLM/Thesis.thy", - "formal/lean/liquid/free_pfpng/basic.lean", - "formal/mizar/jordan5a.miz", - "formal/hol/Examples/borsuk.ml", - "formal/lean/sphere-eversion/global/rotation.lean", - "formal/afp/Extended_Finite_State_Machine_Inference/heuristics/Store_Reuse_Subsumption.thy", - "formal/lean/mathlib/model_theory/encoding.lean", - "formal/lean/mathlib/category_theory/preadditive/projective.lean", - "formal/lean/mathlib/order/iterate.lean", - "formal/afp/Sigma_Commit_Crypto/Chaum_Pedersen_Sigma_Commit.thy", - "formal/hol/Help/INT_NEG_CONV.doc", - "formal/lean/mathlib/category_theory/limits/cones.lean", - "formal/mizar/decomp_1.miz", - "formal/lean/mathlib/topology/instances/nnreal.lean", - "formal/afp/Topological_Semantics/ex_LFIs.thy", - "formal/afp/JinjaThreads/Compiler/J1JVMBisim.thy", - "formal/lean/mathlib/linear_algebra/exterior_algebra/grading.lean", - "formal/hol/100/isosceles.ml", - "formal/lean/mathlib/algebra/lie/solvable.lean", - "formal/hol/Help/dest_binder.doc", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_Dependability_norefl_impl.thy", - "formal/lean/mathlib/algebra/triv_sq_zero_ext.lean", - "formal/afp/JinjaDCI/JVM/JVMDefensive.thy", - "formal/lean/mathlib/data/pequiv.lean", - "formal/afp/Word_Lib/Legacy_Aliases.thy", - "formal/afp/Auto2_HOL/HOL/Auto2_Test.thy", - "formal/mizar/rpr_1.miz", - "formal/hol/Help/ASM_INT_ARITH_TAC.doc", - "formal/afp/Van_Emde_Boas_Trees/VEBT_Pred.thy", - "formal/afp/Containers/Containers_Generator.thy", - "formal/afp/Abs_Int_ITP2012/Abs_Int3.thy", - "formal/lean/mathlib/data/complex/cardinality.lean", - "formal/afp/Polynomials/More_MPoly_Type.thy", - "formal/hol/Help/real_ideal_cofactors.doc", - "formal/afp/Gale_Shapley/Gale_Shapley1.thy", - "formal/mizar/mesfunc7.miz", - 
"formal/lean/mathlib/group_theory/sylow.lean", - "formal/afp/JinjaThreads/Compiler/Exception_Tables.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p67.lean", - "formal/lean/mathlib/algebra/group/with_one.lean", - "formal/mizar/trees_4.miz", - "formal/afp/HOLCF-Prelude/examples/GHC_Rewrite_Rules.thy", - "formal/afp/Word_Lib/Aligned.thy", - "formal/mizar/mesfun12.miz", - "formal/afp/Simple_Firewall/Common/Option_Helpers.thy", - "formal/afp/VectorSpace/SumSpaces.thy", - "formal/lean/mathlib/data/rbtree/find.lean", - "formal/afp/CAVA_Automata/Lasso.thy", - "formal/afp/Optics/Lens_Order.thy", - "formal/afp/CoSMeDis/Post_Confidentiality/Post_Network.thy", - "formal/mizar/jordan24.miz", - "formal/hol/Help/is_numeral.doc", - "formal/afp/MiniSail/SyntaxL.thy", - "formal/afp/JinjaThreads/Framework/FWWait.thy", - "formal/afp/SequentInvertibility/SequentInvertibility.thy", - "formal/lean/mathlib/analysis/normed/field/unit_ball.lean", - "formal/afp/Propositional_Proof_Systems/MiniSC_HC.thy", - "formal/afp/HRB-Slicing/Proc/ProcState.thy", - "formal/lean/mathlib/combinatorics/simple_graph/regularity/energy.lean", - "formal/afp/Van_Emde_Boas_Trees/VEBT_Succ.thy", - "formal/afp/Native_Word/Uint.thy", - "formal/afp/Polynomial_Interpolation/Improved_Code_Equations.thy", - "formal/lean/perfectoid/valuation/linear_ordered_comm_group_with_zero.lean", - "formal/lean/liquid/facts/normed_group.lean", - "formal/afp/CryptHOL/GPV_Expectation.thy", - "formal/afp/AWN/AWN_SOS.thy", - "formal/hol/Boyer_Moore/terms_and_clauses.ml", - "formal/afp/Efficient-Mergesort/document/root.tex", - "formal/afp/Optics/Prisms_Examples.thy", - "formal/lean/mathlib/analysis/complex/removable_singularity.lean", - "formal/afp/UTP/utp/utp_expr_ovld.thy", - "formal/afp/CCS/Strong_Bisim_Pres.thy", - "formal/afp/Group-Ring-Module/Algebra7.thy", - "formal/afp/Jinja/JVM/JVMInstructions.thy", - "formal/afp/CISC-Kernel/step/Step.thy", - "formal/afp/PAC_Checker/PAC_Polynomials_Term.thy", - "formal/afp/Dependent_SIFUM_Refinement/Examples/Eg1.thy", - "formal/afp/Shadow_SC_DOM/tests/slots.thy", - "formal/lean/mathlib/combinatorics/simple_graph/clique.lean", - "formal/hol/Help/SUBGOAL_THEN.doc", - "formal/afp/Complex_Geometry/Riemann_Sphere.thy", - "formal/lean/mathlib/linear_algebra/matrix/bilinear_form.lean", - "formal/lean/mathlib/computability/NFA.lean", - "formal/afp/Relational-Incorrectness-Logic/document/root.tex", - "formal/afp/Ergodic_Theory/Recurrence.thy", - "formal/afp/Word_Lib/Machine_Word_64_Basics.thy", - "formal/afp/Stateful_Protocol_Composition_and_Typing/Examples.thy", - "formal/hol/GL/k4lr.ml", - "formal/afp/Game_Based_Crypto/document/fig-2.tex", - "formal/lean/mathlib/topology/metric_space/metrizable_uniformity.lean", - "formal/afp/SDS_Impossibility/SDS_Impossibility.thy", - "formal/lean/mathlib/data/list/intervals.lean", - "formal/afp/WOOT_Strong_Eventual_Consistency/Example.thy", - "formal/afp/Types_To_Sets_Extension/Examples/Vector_Spaces/VS_Modules.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p127.lean", - "formal/lean/mathlib/ring_theory/mv_polynomial/basic.lean", - "formal/mizar/substut2.miz", - "formal/hol/Help/let_CONV.doc", - "formal/afp/CryptoBasedCompositionalProperties/KnowledgeKeysSecrets.thy", - "formal/hol/Help/dest_char.doc", - "formal/lean/liquid/pseudo_normed_group/category/CompHausFiltPseuNormGrp.lean", - "formal/afp/Poincare_Disc/Poincare_Between.thy", - "formal/afp/Correctness_Algebras/General_Refinement_Algebras.thy", - "formal/afp/Flow_Networks/Graph.thy", - 
"formal/mizar/rusub_2.miz", - "formal/afp/Iptables_Semantics/Examples/Contrived/Contrived_Example.thy", - "formal/mizar/scmring3.miz", - "formal/afp/Pi_Calculus/Strong_Late_Expansion_Law.thy", - "formal/afp/Game_Based_Crypto/Pseudo_Random_Permutation.thy", - "formal/lean/mathlib/geometry/manifold/charted_space.lean", - "formal/afp/Psi_Calculi/Bisim_Struct_Cong.thy", - "formal/lean/mathlib/measure_theory/measure/regular.lean", - "formal/afp/Collections/Examples/ICF/PerformanceTest.thy", - "formal/lean/mathlib/ring_theory/class_group.lean", - "formal/mizar/rsspace2.miz", - "formal/afp/CoreC++/SystemClasses.thy", - "formal/hol/Tutorial/Embedding_of_logics_shallow.ml", - "formal/afp/Containers/ITP-2013/Benchmark_Comparison.thy", - "formal/afp/Frequency_Moments/document/root.tex", - "formal/afp/Launchbury/CValue-Nominal.thy", - "formal/afp/Groebner_Macaulay/Degree_Bound_Utils.thy", - "formal/lean/mathlib/data/sigma/basic.lean", - "formal/hol/Help/is_imp.doc", - "formal/lean/mathlib/category_theory/limits/final.lean", - "formal/lean/mathlib/computability/turing_machine.lean", - "formal/lean/mathlib/geometry/manifold/algebra/lie_group.lean", - "formal/afp/Interpreter_Optimizations/Inca.thy", - "formal/lean/mathlib/algebra/ring/basic.lean", - "formal/lean/perfectoid/for_mathlib/nonarchimedean/basic.lean", - "formal/afp/UPF_Firewall/FWNormalisation/NormalisationIntegerPortProof.thy", - "formal/afp/Multiset_Ordering_NPC/document/root.tex", - "formal/lean/mathlib/number_theory/lucas_primality.lean", - "formal/afp/Separation_Logic_Imperative_HOL/Examples/Hash_Map_Impl.thy", - "formal/afp/SPARCv8/SparcModel_MMU/Sparc_State.thy", - "formal/afp/TESL_Language/document/figures/dilating.tex", - "formal/hol/Help/REWRITE_TAC.doc", - "formal/lean/mathlib/algebraic_geometry/ringed_space.lean", - "formal/afp/Hoare_Time/BExp.thy", - "formal/afp/Statecharts/HAOps.thy", - "formal/afp/Complex_Bounded_Operators/Complex_Inner_Product0.thy", - "formal/afp/Goedel_HFSet_Semanticless/Quote.thy", - "formal/afp/Psi_Calculi/Structural_Congruence.thy", - "formal/afp/MFOTL_Monitor/Examples.thy", - "formal/afp/Goedel_Incompleteness/Abstract_First_Goedel.thy", - "formal/afp/Jordan_Hoelder/GroupIsoClasses.thy", - "formal/afp/Modal_Logics_for_NTS/Equivalence_Implies_Bisimilarity.thy", - "formal/afp/List_Update/BIT.thy", - "formal/mizar/member_1.miz", - "formal/afp/BenOr_Kozen_Reif/BKR_Proofs.thy", - "formal/afp/Inductive_Inference/Universal.thy", - "formal/hol/Help/LEANCOP_TAC.doc", - "formal/afp/Stone_Algebras/document/root.tex", - "formal/lean/liquid/for_mathlib/ab52.lean", - "formal/lean/mathlib/algebra/big_operators/pi.lean", - "formal/lean/mathlib/group_theory/group_action/quotient.lean", - "formal/afp/SIFUM_Type_Systems/Preliminaries.thy", - "formal/mizar/srings_4.miz", - "formal/afp/Collections/Iterator/Iterator.thy", - "formal/afp/Treaps/Treap.thy", - "formal/lean/mathlib/dynamics/fixed_points/basic.lean", - "formal/afp/First_Welfare_Theorem/document/root.tex", - "formal/lean/liquid/for_mathlib/bicartesian4.lean", - "formal/hol/Help/REAL_INT_LE_CONV.doc", - "formal/afp/JinjaThreads/Compiler/J1WellType.thy", - "formal/afp/Program-Conflict-Analysis/Flowgraph.thy", - "formal/afp/DPRM_Theorem/Diophantine/Existential_Quantifier.thy", - "formal/afp/Security_Protocol_Refinement/Key_establish/m3_kerberos4.thy", - "formal/lean/mathlib/algebra/ne_zero.lean", - "formal/coq/math-comp/test_suite/output.v.out.8.8", - "formal/afp/LLL_Basis_Reduction/Cost.thy", - "formal/lean/mathlib/data/array/lemmas.lean", - 
"formal/afp/Smith_Normal_Form/SNF_Algorithm_Two_Steps.thy", - "formal/mizar/yellow18.miz", - "formal/afp/Metalogic_ProofChecker/SortsExe.thy", - "formal/afp/VerifyThis2018/lib/DF_System.thy", - "formal/lean/mathlib/category_theory/preadditive/schur.lean", - "formal/lean/mathlib/algebra/continued_fractions/convergents_equiv.lean", - "formal/afp/Van_Emde_Boas_Trees/VEBT_Delete.thy", - "formal/afp/Robbins-Conjecture/document/root.tex", - "formal/afp/Matrices_for_ODEs/document/root.tex", - "formal/hol/Library/grouptheory.ml", - "formal/lean/mathlib/ring_theory/graded_algebra/homogeneous_ideal.lean", - "formal/lean/mathlib/geometry/manifold/algebra/monoid.lean", - "formal/afp/LambdaMu/ContextFacts.thy", - "formal/afp/Collections/ICF/spec/ICF_Spec_Base.thy", - "formal/afp/BNF_CC/Operation_Examples.thy", - "formal/hol/100/e_is_transcendental.ml", - "formal/afp/Fishers_Inequality/Fishers_Inequality.thy", - "formal/lean/mathlib/control/monad/basic.lean", - "formal/lean/liquid/for_mathlib/Profinite/clopen_limit.lean", - "formal/afp/Types_To_Sets_Extension/ETTS/Manual/ETTS_Examples.thy", - "formal/afp/Abs_Int_ITP2012/Abs_Int0.thy", - "formal/afp/Collections/GenCF/Impl/Impl_Cfun_Set.thy", - "formal/afp/DiskPaxos/DiskPaxos_Inv5.thy", - "formal/afp/Clean/examples/Quicksort_concept.thy", - "formal/afp/Nested_Multisets_Ordinals/Syntactic_Ordinal_Bridge.thy", - "formal/afp/Minsky_Machines/Minsky.thy", - "formal/lean/mathlib/category_theory/limits/preserves/shapes/biproducts.lean", - "formal/hol/miz3/Samples/bug0.ml", - "formal/afp/Slicing/StaticIntra/DataDependence.thy", - "formal/afp/Separation_Logic_Imperative_HOL/Examples/Array_Set_Impl.thy", - "formal/hol/Help/is_hidden.doc", - "formal/lean/liquid/for_mathlib/free_abelian_exact.lean", - "formal/lean/mathlib/group_theory/group_action/option.lean", - "formal/afp/Affine_Arithmetic/document/root.tex", - "formal/afp/CoCon/Decision_Confidentiality/Decision_NCPC_Aut.thy", - "formal/afp/Landau_Symbols/Group_Sort.thy", - "formal/afp/Iptables_Semantics/Primitive_Matchers/Interfaces_Normalize.thy", - "formal/hol/Quaternions/make.ml", - "formal/mizar/integr19.miz", - "formal/afp/Regression_Test_Selection/JinjaSuppl/JVMExecStepInductive.thy", - "formal/afp/Call_Arity/TTreeImplCardinality.thy", - "formal/lean/liquid/for_mathlib/map_to_sheaf_is_iso.lean", - "formal/afp/Isabelle_C/C11-FrontEnd/src/C_Document.thy", - "formal/afp/Verified_SAT_Based_AI_Planning/SAS_Plus_Semantics.thy", - "formal/hol/Help/denominator.doc", - "formal/mizar/basel_1.miz", - "formal/hol/Logic/trs.ml", - "formal/afp/Refine_Imperative_HOL/Sepref_Combinator_Setup.thy", - "formal/mizar/yellow20.miz", - "formal/afp/Real_Time_Deque/Small.thy", - "formal/lean/mathlib/algebra/lie/direct_sum.lean", - "formal/mizar/normsp_1.miz", - "formal/afp/Taylor_Models/Polynomial_Expression.thy", - "formal/hol/Unity/mk_unless.ml", - "formal/afp/RefinementReactive/Reactive.thy", - "formal/afp/ComponentDependencies/DataDependenciesConcreteValues.thy", - "formal/lean/liquid/real_measures/basic.lean", - "formal/lean/mathlib/probability/probability_mass_function/uniform.lean", - "formal/lean/mathlib/algebra/continued_fractions/continuants_recurrence.lean", - "formal/afp/Epistemic_Logic/Epistemic_Logic.thy", - "formal/afp/CakeML/generated/CakeML/SemanticPrimitives.thy", - "formal/mizar/topzari1.miz", - "formal/afp/LLL_Basis_Reduction/Int_Rat_Operations.thy", - "formal/afp/Generalized_Counting_Sort/Algorithm.thy", - "formal/afp/Saturation_Framework/Calculus_Variations.thy", - 
"formal/lean/mathlib/data/countable/small.lean", - "formal/hol/Help/ONCE_SIMP_CONV.doc", - "formal/lean/liquid/for_mathlib/hom_single_iso.lean", - "formal/hol/Help/rev_itlist2.doc", - "formal/afp/Containers/Map_To_Mapping.thy", - "formal/afp/DPRM_Theorem/Machine_Equations/Mask_Equations.thy", - "formal/afp/AODV/variants/c_gtobcast/C_Seq_Invariants.thy", - "formal/hol/Unity/make.ml", - "formal/afp/Jordan_Normal_Form/Matrix_Impl.thy", - "formal/mizar/pdiff_7.miz", - "formal/lean/mathlib/algebra/category/Ring/colimits.lean", - "formal/afp/FLP/document/root.tex", - "formal/coq/odd-order/PFsection8.v", - "formal/lean/liquid/condensed/Qprime_isoms2.lean", - "formal/hol/Help/INDUCT_TAC.doc", - "formal/afp/DPRM_Theorem/Diophantine/Diophantine_Relations.thy", - "formal/lean/mathlib/ring_theory/dedekind_domain/adic_valuation.lean", - "formal/lean/mathlib/ring_theory/ideal/cotangent.lean", - "formal/afp/Transition_Systems_and_Automata/Basic/Degeneralization.thy", - "formal/afp/OpSets/OpSet.thy", - "formal/afp/Van_Emde_Boas_Trees/Separation_Logic_Imperative_HOL/Hoare_Triple.thy", - "formal/afp/Boolean_Expression_Checkers/Boolean_Expression_Checkers_AList_Mapping.thy", - "formal/afp/Ergodic_Theory/Invariants.thy", - "formal/afp/Taylor_Models/Polynomial_Expression_Additional.thy", - "formal/afp/Koenigsberg_Friendship/KoenigsbergBridge.thy", - "formal/lean/liquid/prop_92/prop_92.lean", - "formal/lean/mathlib/set_theory/surreal/basic.lean", - "formal/afp/BNF_CC/document/root.tex", - "formal/afp/Abstract-Hoare-Logics/Proc/PHoare.thy", - "formal/afp/Gauss_Jordan/Bases_Of_Fundamental_Subspaces_IArrays.thy", - "formal/lean/mathlib/data/finite/set.lean", - "formal/lean/mathlib/category_theory/category/Cat.lean", - "formal/lean/mathlib/data/pfun.lean", - "formal/lean/mathlib/measure_theory/category/Meas.lean", - "formal/afp/Regex_Equivalence/document/root.tex", - "formal/lean/liquid/condensed/is_iso_iff_extrdisc.lean", - "formal/mizar/nomin_8.miz", - "formal/lean/mathlib/topology/dense_embedding.lean", - "formal/afp/FLP/ListUtilities.thy", - "formal/afp/Markov_Models/ex/PCTL.thy", - "formal/lean/mathlib/data/rbtree/default.lean", - "formal/afp/Subresultants/More_Homomorphisms.thy", - "formal/mizar/matrix_4.miz", - "formal/lean/mathlib/representation_theory/character.lean", - "formal/lean/mathlib/linear_algebra/charpoly/to_matrix.lean", - "formal/lean/mathlib/logic/unique.lean", - "formal/afp/GaleStewart_Games/AlternatingLists.thy", - "formal/afp/Registers/Laws_Complement_Quantum.thy", - "formal/afp/Winding_Number_Eval/Winding_Number_Eval_Examples.thy", - "formal/afp/LinearQuantifierElim/Thys/Cooper.thy", - "formal/afp/JinjaDCI/BV/TF_JVM.thy", - "formal/hol/Help/graph.doc", - "formal/afp/Ordinary_Differential_Equations/Refinement/Refine_Folds.thy", - "formal/afp/Neumann_Morgenstern_Utility/Lotteries.thy", - "formal/afp/Hoare_Time/Nielson_VCG.thy", - "formal/afp/FO_Theory_Rewriting/Util/Bot_Terms.thy", - "formal/lean/mathlib/combinatorics/simple_graph/prod.lean", - "formal/afp/Incompleteness/Pseudo_Coding.thy", - "formal/afp/Possibilistic_Noninterference/MyTactics.thy", - "formal/lean/mathlib/field_theory/krull_topology.lean", - "formal/mizar/normsp_0.miz", - "formal/afp/Latin_Square/document/root.tex", - "formal/lean/mathlib/ring_theory/bezout.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2010/a/p10.lean", - "formal/afp/Isabelle_Meta_Model/meta_isabelle/Printer_SML.thy", - "formal/afp/Optics/Dataspace.thy", - "formal/afp/Free-Groups/Cancelation.thy", - "formal/hol/Help/mk_fset.doc", - 
"formal/afp/Native_Word/Word_Type_Copies.thy", - "formal/afp/Binding_Syntax_Theory/Equiv_Relation2.thy", - "formal/afp/List-Infinite/CommonArith/Util_NatInf.thy", - "formal/afp/Stateful_Protocol_Composition_and_Typing/Typed_Model.thy", - "formal/mizar/bcialg_1.miz", - "formal/mizar/zmodul01.miz", - "formal/afp/Collections/ICF/gen_algo/ListGA.thy", - "formal/hol/Help/mk_binop.doc", - "formal/afp/Rewrite_Properties_Reduction/Rewriting/Rewriting.thy", - "formal/afp/Iptables_Semantics/Primitive_Matchers/Transform.thy", - "formal/afp/Factored_Transition_System_Bounding/document/root.tex", - "formal/afp/Recursion-Theory-I/PRecFun2.thy", - "formal/lean/mathlib/linear_algebra/matrix/to_lin.lean", - "formal/afp/Fermat3_4/document/root.tex", - "formal/afp/Abs_Int_ITP2012/Collecting.thy", - "formal/hol/Help/unhide_constant.doc", - "formal/afp/Banach_Steinhaus/Banach_Steinhaus_Missing.thy", - "formal/afp/Rewrite_Properties_Reduction/Ground_Reduction_on_LV.thy", - "formal/lean/mathlib/order/minimal.lean", - "formal/lean/mathlib/logic/nontrivial.lean", - "formal/afp/Interpreter_Optimizations/Op.thy", - "formal/afp/Hybrid_Multi_Lane_Spatial_Logic/regular/Safety_Regular.thy", - "formal/afp/Polynomial_Interpolation/Is_Rat_To_Rat.thy", - "formal/afp/Sort_Encodings/Mcalc.thy", - "formal/afp/Locally-Nameless-Sigma/Sigma/TypedSigma.thy", - "formal/hol/100/pascal.ml", - "formal/afp/SDS_Impossibility/document/root.tex", - "formal/afp/Differential_Dynamic_Logic/Proof_Checker.thy", - "formal/afp/CoSMeDis/Outer_Friend_Confidentiality/Outer_Friend_All.thy", - "formal/hol/Help/EVERY_TCL.doc", - "formal/lean/liquid/for_mathlib/rational_cones.lean", - "formal/hol/Help/GEN_TAC.doc", - "formal/lean/mathlib/set_theory/cardinal/ordinal.lean", - "formal/hol/Help/free_in.doc", - "formal/afp/Regression_Test_Selection/document/root.tex", - "formal/lean/mathlib/algebraic_geometry/structure_sheaf.lean", - "formal/hol/Help/mk_cond.doc", - "formal/lean/lftcm/solutions/friday/manifolds.lean", - "formal/afp/Recursion-Addition/document/root.tex", - "formal/afp/CoreC++/TypeSafe.thy", - "formal/afp/JiveDataStoreModel/Isabelle_Store/Location.thy", - "formal/afp/Real_Time_Deque/RealTimeDeque_Dequeue.thy", - "formal/afp/Schutz_Spacetime/TernaryOrdering.thy", - "formal/mizar/newton01.miz", - "formal/afp/Irrational_Series_Erdos_Straus/Irrational_Series_Erdos_Straus.thy", - "formal/hol/Help/TRANS.doc", - "formal/afp/Jinja/BV/JVM_SemiType.thy", - "formal/lean/sphere-eversion/to_mathlib/equivariant.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p69.lean", - "formal/hol/Help/basic_convs.doc", - "formal/afp/Transition_Systems_and_Automata/document/root.tex", - "formal/afp/Linear_Recurrences/Pochhammer_Polynomials.thy", - "formal/lean/mathlib/data/list/destutter.lean", - "formal/mizar/lopban_3.miz", - "formal/mizar/arytm_3.miz", - "formal/afp/Refine_Imperative_HOL/IICF/Impl/Heaps/IICF_Abs_Heapmap.thy", - "formal/afp/Incompleteness/document/root.tex", - "formal/hol/100/birthday.ml", - "formal/afp/Bicategory/InternalAdjunction.thy", - "formal/afp/Higher_Order_Terms/Term.thy", - "formal/mizar/valuat_1.miz", - "formal/lean/mathlib/algebra/category/Module/projective.lean", - "formal/lean/lftcm/solutions/wednesday/algebraic_hierarchy.lean", - "formal/afp/Core_DOM/common/monads/ObjectMonad.thy", - "formal/mizar/limfunc2.miz", - "formal/afp/Linear_Inequalities/Missing_VS_Connect.thy", - "formal/afp/Transition_Systems_and_Automata/Transition_Systems/Transition_System_Refine.thy", - "formal/hol/Help/prioritize_real.doc", - 
"formal/lean/liquid/for_mathlib/derived/homological.lean", - "formal/lean/mathlib/analysis/special_functions/pow.lean", - "formal/afp/Gauss_Jordan/Code_Set.thy", - "formal/lean/mathlib/topology/inseparable.lean", - "formal/afp/Transition_Systems_and_Automata/Automata/NBA/NBA_Combine.thy", - "formal/afp/Ramsey-Infinite/Ramsey.thy", - "formal/afp/Automatic_Refinement/Tool/Autoref_Relator_Interface.thy", - "formal/lean/mathlib/data/nat/sqrt_norm_num.lean", - "formal/lean/mathlib/data/real/cardinality.lean", - "formal/afp/Bounded_Deducibility_Security/Filtermap.thy", - "formal/afp/Circus/CSP_Processes.thy", - "formal/afp/Slicing/Basic/DynDataDependence.thy", - "formal/afp/General-Triangle/document/root.tex", - "formal/lean/liquid/banach.lean", - "formal/afp/Lambda_Free_RPOs/Lambda_Free_RPOs.thy", - "formal/hol/Help/load_on_path.doc", - "formal/afp/Bicategory/Bicategory.thy", - "formal/afp/Binding_Syntax_Theory/Variable.thy", - "formal/afp/FeatherweightJava/document/root.tex", - "formal/afp/Independence_CH/Foundation_Axiom.thy", - "formal/afp/CCS/Weak_Semantics.thy", - "formal/afp/Lifting_Definition_Option/document/root.tex", - "formal/afp/Complex_Bounded_Operators/One_Dimensional_Spaces.thy", - "formal/afp/Extended_Finite_State_Machines/document/root.tex", - "formal/afp/VerifyThis2018/Challenge2.thy", - "formal/lean/liquid/for_mathlib/derived/lemmas.lean", - "formal/lean/mathlib/ring_theory/polynomial/opposites.lean", - "formal/coq/math-comp/test_suite/test_guard.v", - "formal/lean/mathlib/topology/algebra/module/weak_dual.lean", - "formal/lean/mathlib/control/functor/multivariate.lean", - "formal/afp/Stone_Kleene_Relation_Algebras/Matrix_Kleene_Algebras.thy", - "formal/lean/mathlib/analysis/convex/gauge.lean", - "formal/afp/Refine_Monadic/Generic/RefineG_While.thy", - "formal/afp/Simpl/ex/ProcParExSP.thy", - "formal/afp/Quasi_Borel_Spaces/Measure_as_QuasiBorel_Measure.thy", - "formal/afp/Goedel_HFSet_Semanticless/Goedel_I.thy", - "formal/afp/Complex_Bounded_Operators/Complex_Vector_Spaces.thy", - "formal/afp/CakeML/generated/LemExtraDefs.thy", - "formal/afp/Isabelle_Meta_Model/isabelle_home/src/HOL/Isabelle_Main0.thy", - "formal/afp/CZH_Foundations/document/root.tex", - "formal/afp/Regular_Algebras/Regular_Algebra_Models.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/bleqa_apbon2msqrtableqambsqon8b.lean", - "formal/lean/mathlib/data/fintype/card_embedding.lean", - "formal/afp/MiniML/Generalize.thy", - "formal/lean/mathlib/linear_algebra/quadratic_form/complex.lean", - "formal/hol/Help/REPEAT_GTCL.doc", - "formal/lean/mathlib/combinatorics/simple_graph/adj_matrix.lean", - "formal/lean/mathlib/data/part.lean", - "formal/afp/Algebraic_VCs/AVC_KAD/Path_Model_Example.thy", - "formal/afp/Timed_Automata/Approx_Beta.thy", - "formal/mizar/random_3.miz", - "formal/mizar/vectsp_4.miz", - "formal/afp/Optics/Lens_Instances.thy", - "formal/mizar/necklace.miz", - "formal/lean/liquid/condensed/adjunctions.lean", - "formal/mizar/osalg_1.miz", - "formal/afp/Pi_Calculus/Strong_Late_Bisim.thy", - "formal/afp/Higher_Order_Terms/document/root.tex", - "formal/lean/mathlib/algebra/group/defs.lean", - "formal/afp/LTL_to_DRA/Impl/LTL_Impl.thy", - "formal/lean/mathlib/linear_algebra/clifford_algebra/grading.lean", - "formal/afp/Ordinary_Differential_Equations/Refinement/Refine_Dflt_No_Comp.thy", - "formal/hol/Help/X_CHOOSE_THEN.doc", - "formal/afp/Jinja/J/SmallStep.thy", - "formal/afp/Jinja/BV/LBVJVM.thy", - "formal/lean/liquid/pseudo_normed_group/LC.lean", - 
"formal/afp/SpecCheck/Output_Styles/SpecCheck_Output_Style.thy", - "formal/afp/CoreC++/Syntax.thy", - "formal/afp/JinjaThreads/DFA/Typing_Framework_err.thy", - "formal/mizar/ordinal7.miz", - "formal/afp/Finite-Map-Extras/Finite_Map_Extras.thy", - "formal/afp/OpSets/List_Spec.thy", - "formal/lean/mathlib/category_theory/linear/default.lean", - "formal/lean/mathlib/algebra/big_operators/fin.lean", - "formal/afp/VYDRA_MDL/Timestamp_Lex_Total.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p207.lean", - "formal/afp/CryptHOL/Negligible.thy", - "formal/afp/Decl_Sem_Fun_PL/DenotLam5.thy", - "formal/mizar/cfuncdom.miz", - "formal/afp/Extended_Finite_State_Machine_Inference/heuristics/PTA_Generalisation.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p370.lean", - "formal/afp/Ordinal_Partitions/Partitions.thy", - "formal/afp/VerifyThis2018/Challenge1.thy", - "formal/afp/CakeML_Codegen/Preproc/Embed.thy", - "formal/afp/JiveDataStoreModel/Isa_Counter/UnivSpec.thy", - "formal/afp/Deriving/Derive.thy", - "formal/afp/Consensus_Refined/Observing/BenOr_Defs.thy", - "formal/afp/Metalogic_ProofChecker/Instances.thy", - "formal/mizar/polyred.miz", - "formal/afp/Auto2_HOL/HOL/Lists_Thms.thy", - "formal/afp/MDP-Rewards/MDP_disc.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2021/a/p25.lean", - "formal/mizar/fuzimpl1.miz", - "formal/afp/HRB-Slicing/StaticInter/CFGExit_wf.thy", - "formal/afp/Planarity_Certificates/Planarity/Permutations_2.thy", - "formal/afp/IMAP-CRDT/IMAP-proof-commute.thy", - "formal/afp/Polynomial_Factorization/Square_Free_Factorization.thy", - "formal/afp/Planarity_Certificates/l4v/lib/OptionMonadND.thy", - "formal/afp/Factor_Algebraic_Polynomial/MPoly_Divide.thy", - "formal/mizar/rfunct_2.miz", - "formal/hol/Multivariate/moretop.ml", - "formal/mizar/pencil_1.miz", - "formal/afp/Isabelle_Meta_Model/isabelle_home/src/Tools/Code/Isabelle_code_runtime.thy", - "formal/lean/liquid/Lbar/ext_aux3.lean", - "formal/afp/Inductive_Inference/Partial_Recursive.thy", - "formal/lean/mathlib/group_theory/submonoid/operations.lean", - "formal/hol/Help/ASM_REWRITE_TAC.doc", - "formal/afp/Multi_Party_Computation/Malicious_OT.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2008/a/p25.lean", - "formal/afp/Randomised_Social_Choice/Utility_Functions.thy", - "formal/afp/Security_Protocol_Refinement/Key_establish/m2_kerberos.thy", - "formal/mizar/heyting1.miz", - "formal/afp/Vickrey_Clarke_Groves/document/root.tex", - "formal/afp/DPRM_Theorem/Machine_Equations/RM_Sums_Diophantine.thy", - "formal/afp/Myhill-Nerode/Closures.thy", - "formal/afp/HOLCF-Prelude/Data_Integer.thy", - "formal/afp/Launchbury/Pointwise.thy", - "formal/lean/mathlib/order/filter/ultrafilter.lean", - "formal/lean/mathlib/category_theory/products/associator.lean", - "formal/coq/math-comp/character/mxabelem.v", - "formal/lean/mathlib/algebraic_topology/dold_kan/homotopies.lean", - "formal/lean/mathlib/topology/tietze_extension.lean", - "formal/mizar/quantal1.miz", - "formal/afp/Lambda_Free_RPOs/Lambda_Free_Term.thy", - "formal/afp/UPF_Firewall/StatefulFW/StatefulFW.thy", - "formal/afp/AODV/variants/d_fwdrreqs/D_Fresher.thy", - "formal/lean/mathlib/category_theory/monoidal/CommMon_.lean", - "formal/afp/Simpl/HeapList.thy", - "formal/mizar/waybel31.miz", - "formal/afp/Van_Emde_Boas_Trees/VEBT_Intf_Functional.thy", - "formal/afp/Stateful_Protocol_Composition_and_Typing/Typing_Result.thy", - "formal/lean/mathlib/algebra/big_operators/norm_num.lean", - 
"formal/afp/Inductive_Confidentiality/GeneralAttacker/MessageGA.thy", - "formal/hol/Help/REAL_RAT_MIN_CONV.doc", - "formal/afp/Monad_Memo_DP/heap_monad/DP_CRelVH.thy", - "formal/afp/Roy_Floyd_Warshall/document/root.tex", - "formal/afp/JinjaThreads/DFA/LBVComplete.thy", - "formal/afp/Locally-Nameless-Sigma/preliminary/FMap.thy", - "formal/afp/Probabilistic_System_Zoo/Nonempty_Bounded_Set.thy", - "formal/lean/mathlib/group_theory/group_action/prod.lean", - "formal/afp/Stateful_Protocol_Composition_and_Typing/Miscellaneous.thy", - "formal/afp/Psi_Calculi/Agent.thy", - "formal/afp/Topological_Semantics/sse_operation_positive.thy", - "formal/afp/Call_Arity/ArityStack.thy", - "formal/coq/abel/xmathcomp/char0.v", - "formal/mizar/field_3.miz", - "formal/afp/Flyspeck-Tame/Relative_Completeness.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2003/b/p17.lean", - "formal/mizar/arrow.miz", - "formal/afp/Probabilistic_While/Resampling.thy", - "formal/hol/Help/ALL_THEN.doc", - "formal/afp/CakeML/Semantic_Extras.thy", - "formal/afp/LinearQuantifierElim/Thys/QEdlo_fr.thy", - "formal/lean/mathlib/topology/category/Born.lean", - "formal/afp/Van_Emde_Boas_Trees/Separation_Logic_Imperative_HOL/Sep_Main.thy", - "formal/afp/AODV/variants/c_gtobcast/C_Quality_Increases.thy", - "formal/lean/mathlib/field_theory/finiteness.lean", - "formal/hol/Help/NO_TAC.doc", - "formal/coq/math-comp/ssreflect/generic_quotient.v", - "formal/afp/QR_Decomposition/Gram_Schmidt_IArrays.thy", - "formal/afp/Physical_Quantities/SI_Units.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/sqineq_4bap1lt4bsqpap1sq.lean", - "formal/lean/mathlib/category_theory/limits/small_complete.lean", - "formal/afp/Hoare_Time/Big_StepT_Partial.thy", - "formal/lean/liquid/for_mathlib/complex_extend.lean", - "formal/hol/Help/dest_iff.doc", - "formal/afp/InfPathElimination/RB.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p288.lean", - "formal/afp/CZH_Foundations/czh_semicategories/CZH_SMC_Small_Semicategory.thy", - "formal/afp/MuchAdoAboutTwo/document/root.tex", - "formal/mizar/ndiff_1.miz", - "formal/mizar/gtarski4.miz", - "formal/afp/Goedel_Incompleteness/Deduction2.thy", - "formal/afp/DPRM_Theorem/Machine_Equations/All_Equations.thy", - "formal/afp/Isabelle_Meta_Model/toy_example/embedding/core/Floor1_examp.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p188.lean", - "formal/lean/mathlib/data/sym/basic.lean", - "formal/lean/mathlib/data/real/irrational.lean", - "formal/afp/Diophantine_Eqns_Lin_Hom/List_Vector.thy", - "formal/lean/liquid/breen_deligne/eval1half.lean", - "formal/mizar/scpqsort.miz", - "formal/afp/Auto2_Imperative_HOL/Functional/RBTree.thy", - "formal/afp/Pop_Refinement/First_Example.thy", - "formal/hol/Unity/mk_unity_prog.ml", - "formal/hol/Help/INSTANTIATE_ALL.doc", - "formal/lean/liquid/hacks_and_tricks/type_pow.lean", - "formal/afp/AWN/document/root.tex", - "formal/afp/Gaussian_Integers/Gaussian_Integers.thy", - "formal/afp/Physical_Quantities/SI_Astronomical.thy", - "formal/afp/CakeML/generated/Lem_map.thy", - "formal/afp/Partial_Order_Reduction/Ample_Abstract.thy", - "formal/afp/Fishburn_Impossibility/Social_Choice_Functions.thy", - "formal/mizar/compl_sp.miz", - "formal/afp/MiniSail/WellformedL.thy", - "formal/afp/Universal_Turing_Machine/Abacus_Hoare.thy", - "formal/coq/analysis/altreals/realseq.v", - "formal/afp/Refine_Monadic/examples/WordRefine.thy", - "formal/coq/math-comp/fingroup/all_fingroup.v", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p314.lean", - 
"formal/afp/Flyspeck-Tame/Computation/Completeness.thy", - "formal/afp/Containers/Extend_Partial_Order.thy", - "formal/afp/Chord_Segments/document/root.tex", - "formal/lean/mathlib/computability/DFA.lean", - "formal/afp/Auto2_HOL/HOL/Logic_Thms.thy", - "formal/afp/Groebner_Macaulay/Dube_Prelims.thy", - "formal/afp/Refine_Imperative_HOL/Userguides/Sepref_Guide_Quickstart.thy", - "formal/hol/Help/PURE_ASM_REWRITE_TAC.doc", - "formal/lean/mathlib/number_theory/cyclotomic/rat.lean", - "formal/lean/mathlib/algebra/big_operators/intervals.lean", - "formal/afp/CoSMeDis/Post_Confidentiality/Post_Observation_Setup_ISSUER.thy", - "formal/afp/JinjaThreads/document/root.tex", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2000/p11.lean", - "formal/lean/mathlib/category_theory/functor/category.lean", - "formal/mizar/integr15.miz", - "formal/lean/mathlib/data/real/golden_ratio.lean", - "formal/afp/Collections/Iterator/Proper_Iterator.thy", - "formal/afp/Jordan_Normal_Form/Spectral_Radius.thy", - "formal/lean/mathlib/analysis/convex/cone.lean", - "formal/lean/liquid/pseudo_normed_group/with_Tinv.lean", - "formal/afp/Interpolation_Polynomials_HOL_Algebra/Lagrange_Interpolation.thy", - "formal/afp/Complex_Geometry/Chordal_Metric.thy", - "formal/lean/mathlib/control/traversable/equiv.lean", - "formal/afp/Auto2_Imperative_HOL/Functional/BST.thy", - "formal/afp/IsaNet/infrastructure/Agents.thy", - "formal/hol/Help/new_definition.doc", - "formal/lean/lftcm/exercises_sources/thursday/category_theory/exercise1.lean", - "formal/lean/sphere-eversion/to_mathlib/topology/path.lean", - "formal/afp/Stone_Relation_Algebras/Matrix_Relation_Algebras.thy", - "formal/hol/Help/IMP_REWR_CONV.doc", - "formal/afp/Count_Complex_Roots/Count_Rectangle.thy", - "formal/mizar/msafree3.miz", - "formal/afp/Launchbury/Env-Nominal.thy", - "formal/afp/Auto2_Imperative_HOL/Imperative/Quicksort_Impl.thy", - "formal/afp/Digit_Expansions/Binary_Operations.thy", - "formal/afp/Correctness_Algebras/Omega_Algebras.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2000/p1.lean", - "formal/mizar/pre_poly.miz", - "formal/mizar/group_4.miz", - "formal/afp/Algebraic_Numbers/Interval_Arithmetic.thy", - "formal/afp/Containers/document/root.tex", - "formal/lean/mathlib/category_theory/preadditive/injective.lean", - "formal/afp/WOOT_Strong_Eventual_Consistency/SortKeys.thy", - "formal/afp/Separation_Logic_Imperative_HOL/Sep_Main.thy", - "formal/afp/Metalogic_ProofChecker/Term.thy", - "formal/hol/Multivariate/derivatives.ml", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p559.lean", - "formal/afp/X86_Semantics/X86_InstructionSemantics.thy", - "formal/mizar/music_s1.miz", - "formal/afp/Lazy_Case/document/root.tex", - "formal/afp/CoSMed/Observation_Setup.thy", - "formal/afp/Dijkstra_Shortest_Path/Dijkstra_Misc.thy", - "formal/mizar/helly.miz", - "formal/afp/VYDRA_MDL/Window.thy", - "formal/hol/Help/find.doc", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p478.lean", - "formal/afp/Liouville_Numbers/document/root.tex", - "formal/mizar/compos_1.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p206.lean", - "formal/hol/Help/uniq.doc", - "formal/mizar/vectsp_7.miz", - "formal/afp/CoCon/Paper_Confidentiality/Paper_Intro.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/induction/12dvd4expnp1p20.lean", - "formal/afp/Automatic_Refinement/Parametricity/Param_HOL.thy", - "formal/lean/mathlib/data/polynomial/denoms_clearable.lean", - "formal/afp/Consensus_Refined/Voting/Ate_Defs.thy", - 
"formal/afp/FocusStreamsCaseStudies/FR_types.thy", - "formal/afp/Transition_Systems_and_Automata/Basic/Acceptance_Refine.thy", - "formal/lean/liquid/laurent_measures/basic.lean", - "formal/afp/Berlekamp_Zassenhaus/Poly_Mod_Finite_Field.thy", - "formal/hol/Rqe/testform.ml", - "formal/afp/Slicing/Basic/DynWeakControlDependence.thy", - "formal/lean/mathlib/analysis/special_functions/complex/log.lean", - "formal/coq/analysis/prodnormedzmodule.v", - "formal/lean/mathlib/group_theory/group_action/pi.lean", - "formal/hol/Help/EXISTS_EQUATION.doc", - "formal/afp/GraphMarkingIBP/LinkMark.thy", - "formal/afp/Call_Arity/TTreeAnalysisSig.thy", - "formal/afp/Gabow_SCC/Gabow_SCC.thy", - "formal/afp/IsaGeoCoq/document/root.tex", - "formal/mizar/tietze_2.miz", - "formal/afp/BNF_Operations/LFP.thy", - "formal/afp/Regular_Tree_Relations/RRn_Automata.thy", - "formal/mizar/glib_007.miz", - "formal/hol/Help/.upto.doc", - "formal/afp/CZH_Foundations/czh_semicategories/CZH_SMC_PSemicategory.thy", - "formal/afp/Psi_Calculi/Close_Subst.thy", - "formal/afp/AxiomaticCategoryTheory/AxiomaticCategoryTheory.thy", - "formal/afp/Well_Quasi_Orders/Wqo_Multiset.thy", - "formal/afp/Formal_SSA/SSA_Transfer_Rules.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p345.lean", - "formal/afp/Gaussian_Integers/document/root.tex", - "formal/hol/Boyer_Moore/irrelevance.ml", - "formal/mizar/cfdiff_1.miz", - "formal/afp/Psi_Calculi/Tau.thy", - "formal/afp/Belief_Revision/document/root.tex", - "formal/coq/odd-order/BGsection12.v", - "formal/mizar/prob_1.miz", - "formal/afp/ConcurrentGC/Proofs_Basis.thy", - "formal/afp/MiniSail/TypingL.thy", - "formal/afp/CAVA_LTL_Modelchecker/SM/Refine/SM_Visible.thy", - "formal/coq/math-comp/ssreflect/path.v", - "formal/mizar/pdiff_5.miz", - "formal/afp/Pi_Calculus/Weak_Early_Bisim_SC.thy", - "formal/lean/mathlib/topology/local_at_target.lean", - "formal/afp/Valuation/Valuation2.thy", - "formal/mizar/ff_siec.miz", - "formal/afp/InformationFlowSlicing/NonInterferenceIntra.thy", - "formal/afp/IsaNet/instances/EPIC_L1_BA.thy", - "formal/afp/Collections/ICF/impl/MapStdImpl.thy", - "formal/lean/mathlib/analysis/normed_space/lattice_ordered_group.lean", - "formal/mizar/altcat_2.miz", - "formal/afp/PropResPI/document/root.tex", - "formal/afp/Collections/Examples/ICF/ICF_Examples_Chapter.thy", - "formal/afp/Regex_Equivalence/Deriv_PDeriv.thy", - "formal/afp/Program-Conflict-Analysis/AcquisitionHistory.thy", - "formal/hol/Help/mk_type.doc", - "formal/lean/mathlib/order/rel_classes.lean", - "formal/coq/math-comp/field/closed_field.v", - "formal/afp/Rank_Nullity_Theorem/Miscellaneous.thy", - "formal/lean/liquid/breen_deligne/eval2.lean", - "formal/lean/mathlib/topology/sequences.lean", - "formal/afp/Iptables_Semantics/Primitive_Matchers/Conntrack_State_Transform.thy", - "formal/lean/mathlib/measure_theory/measure/complex.lean", - "formal/afp/KBPs/MapOps.thy", - "formal/afp/Marriage/document/root.tex", - "formal/hol/100/wilson.ml", - "formal/mizar/zf_colla.miz", - "formal/afp/HereditarilyFinite/Rank.thy", - "formal/afp/Collections/ICF/impl/Trie_Impl.thy", - "formal/lean/mathlib/ring_theory/jacobson_ideal.lean", - "formal/mizar/rlvect_1.miz", - "formal/lean/liquid/for_mathlib/derived/les_facts.lean", - "formal/lean/mathlib/order/filter/pi.lean", - "formal/coq/math-comp/solvable/burnside_app.v", - "formal/afp/CakeML_Codegen/Utils/CakeML_Utils.thy", - "formal/afp/SPARCv8/SparcModel_MMU/Sparc_Init_State.thy", - "formal/afp/Partial_Order_Reduction/Transition_System_Extensions.thy", - 
"formal/lean/mathlib/analysis/ODE/gronwall.lean", - "formal/lean/lftcm/exercises_sources/tuesday/logic.lean", - "formal/lean/mathlib/linear_algebra/free_algebra.lean", - "formal/lean/liquid/for_mathlib/snake_lemma.lean", - "formal/afp/CoCon/Discussion_Confidentiality/Discussion_Value_Setup.thy", - "formal/lean/liquid/free_pfpng/mono.lean", - "formal/mizar/topreal7.miz", - "formal/mizar/tops_5.miz", - "formal/lean/perfectoid/sheaves/sheaf_of_rings.lean", - "formal/afp/Generic_Join/document/root.tex", - "formal/afp/Pratt_Certificate/Pratt_Certificate_Code.thy", - "formal/afp/PAC_Checker/PAC_Checker_Synthesis.thy", - "formal/afp/DPRM_Theorem/Machine_Equations/State_Unique_Equations.thy", - "formal/mizar/calcul_2.miz", - "formal/hol/Help/HAS_SIZE_CONV.doc", - "formal/afp/Farkas/Matrix_Farkas.thy", - "formal/afp/Skip_Lists/document/root.tex", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p276.lean", - "formal/lean/perfectoid/for_mathlib/nnreal.lean", - "formal/lean/mathlib/data/pi/lex.lean", - "formal/afp/Algebraic_VCs/AVC_KAD/Pointer_Examples.thy", - "formal/afp/LTL_to_GBA/document/root.tex", - "formal/afp/Secondary_Sylow/GroupAction.thy", - "formal/lean/mathlib/algebraic_geometry/Spec.lean", - "formal/lean/mathlib/topology/category/Top/opens.lean", - "formal/mizar/groupp_1.miz", - "formal/afp/DFS_Framework/Impl/Data/Simple_Impl.thy", - "formal/lean/mathlib/data/matrix/dmatrix.lean", - "formal/mizar/vectsp_5.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p13.lean", - "formal/hol/Help/BOOL_CASES_TAC.doc", - "formal/mizar/circtrm1.miz", - "formal/afp/Separation_Logic_Imperative_HOL/Examples/Hash_Set_Impl.thy", - "formal/afp/HyperCTL/Prelim.thy", - "formal/hol/Help/unions_prime.doc", - "formal/lean/mathlib/category_theory/limits/constructions/binary_products.lean", - "formal/afp/Adaptive_State_Counting/ASC/ASC_Suite.thy", - "formal/hol/Help/MESON_TAC.doc", - "formal/afp/Functional-Automata/MaxPrefix.thy", - "formal/afp/Interpolation_Polynomials_HOL_Algebra/document/root.tex", - "formal/mizar/rewrite1.miz", - "formal/afp/Polynomial_Factorization/Missing_Polynomial_Factorial.thy", - "formal/afp/Buildings/Chamber.thy", - "formal/mizar/topreal1.miz", - "formal/mizar/polynom1.miz", - "formal/mizar/polyeq_5.miz", - "formal/mizar/gobrd13.miz", - "formal/lean/mathlib/category_theory/abelian/generator.lean", - "formal/coq/analysis/set_interval.v", - "formal/hol/Boyer_Moore/struct_equal.ml", - "formal/hol/Help/unparse_as_infix.doc", - "formal/afp/Show/Show_Complex.thy", - "formal/afp/WHATandWHERE_Security/Up_To_Technique.thy", - "formal/afp/Van_Emde_Boas_Trees/VEBT_List_Assn.thy", - "formal/afp/CAVA_Automata/All_Of_CAVA_Automata.thy", - "formal/lean/mathlib/topology/order/lattice.lean", - "formal/afp/JinjaThreads/Common/SemiType.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p110.lean", - "formal/afp/Regex_Equivalence/Automaton.thy", - "formal/afp/Correctness_Algebras/document/root.tex", - "formal/afp/Auto2_Imperative_HOL/Imperative/Union_Find_Impl.thy", - "formal/mizar/mathmorp.miz", - "formal/lean/liquid/for_mathlib/AddCommGroup/pt.lean", - "formal/lean/mathlib/topology/uniform_space/separation.lean", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_BLPstrict.thy", - "formal/afp/Laplace_Transform/Piecewise_Continuous.thy", - "formal/lean/liquid/challenge_notations.lean", - "formal/afp/Auto2_Imperative_HOL/document/root.tex", - "formal/afp/Collections/GenCF/GenCF.thy", - 
"formal/afp/CAVA_LTL_Modelchecker/Nested_DFS/NDFS_SI_Statistics.thy", - "formal/mizar/prob_4.miz", - "formal/hol/Help/HAS_SIZE_DIMINDEX_RULE.doc", - "formal/afp/Lowe_Ontological_Argument/Relations.thy", - "formal/afp/Category2/MonadicEquationalTheory.thy", - "formal/afp/Automated_Stateful_Protocol_Verification/examples/Keyserver_Composition.thy", - "formal/hol/Logic/positive.ml", - "formal/lean/mathlib/algebraic_geometry/EllipticCurve.lean", - "formal/lean/mathlib/data/fintype/sort.lean", - "formal/afp/Impossible_Geometry/document/root.tex", - "formal/afp/JinjaDCI/BV/SemiType.thy", - "formal/hol/Help/new_basic_definition.doc", - "formal/afp/Native_Word/Uint32.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1965/p1.lean", - "formal/afp/Probabilistic_Noninterference/Compositionality.thy", - "formal/afp/Logging_Independent_Anonymity/Possibility.thy", - "formal/afp/PSemigroupsConvolution/Unary_Modalities.thy", - "formal/mizar/funct_1.miz", - "formal/afp/Inductive_Inference/TOTAL_CONS.thy", - "formal/mizar/functor2.miz", - "formal/afp/FOL_Seq_Calc2/Soundness.thy", - "formal/hol/Help/GEN_SIMPLIFY_CONV.doc", - "formal/hol/Help/dest_cond.doc", - "formal/afp/Show/Old_Datatype/Old_Show_Instances.thy", - "formal/lean/mathlib/data/list/nat_antidiagonal.lean", - "formal/lean/mathlib/algebra/module/pid.lean", - "formal/afp/Word_Lib/Strict_part_mono.thy", - "formal/afp/Separation_Logic_Imperative_HOL/Examples/Array_Blit.thy", - "formal/afp/Quasi_Borel_Spaces/Monad_QuasiBorel.thy", - "formal/afp/Types_To_Sets_Extension/Examples/SML_Relativization/Topology/SML_Ordered_Topological_Spaces.thy", - "formal/afp/Goedel_HFSet_Semanticless/II_Prelims.thy", - "formal/afp/Containers/Containers_Userguide.thy", - "formal/afp/CZH_Foundations/czh_sets/CZH_Sets_Equipollence.thy", - "formal/lean/mathlib/category_theory/limits/shapes/normal_mono/basic.lean", - "formal/afp/JinjaDCI/Common/Type.thy", - "formal/hol/Boyer_Moore/main.ml", - "formal/afp/Isabelle_C/C11-FrontEnd/examples/C0.thy", - "formal/mizar/fuzimpl3.miz", - "formal/afp/Game_Based_Crypto/PRF_UHF.thy", - "formal/mizar/power.miz", - "formal/lean/perfectoid/Spa/stalk_valuation.lean", - "formal/lean/mathlib/data/fin/tuple/sort.lean", - "formal/afp/Prim_Dijkstra_Simple/Undirected_Graph.thy", - "formal/hol/100/arithmetic.ml", - "formal/afp/Gauss_Jordan/Bases_Of_Fundamental_Subspaces.thy", - "formal/afp/Sqrt_Babylonian/document/root.tex", - "formal/afp/OpSets/RGA.thy", - "formal/coq/math-comp/ssreflect/ssreflect.v", - "formal/afp/Isabelle_Meta_Model/isabelle_home/src/HOL/ex/Isabelle_Cartouche_Examples.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p232.lean", - "formal/hol/Library/bitsize.ml", - "formal/lean/mathlib/algebra/lie/centralizer.lean", - "formal/afp/Automatic_Refinement/Lib/Mk_Term_Antiquot.thy", - "formal/lean/sphere-eversion/to_mathlib/geometry/manifold/vector_bundle/basic_core_constructions.lean", - "formal/afp/Core_DOM/common/classes/BaseClass.thy", - "formal/hol/Minisat/make.ml", - "formal/lean/mathlib/topology/algebra/valuation.lean", - "formal/afp/Berlekamp_Zassenhaus/Factorize_Int_Poly.thy", - "formal/afp/Shadow_SC_DOM/tests/my_get_owner_document.thy", - "formal/afp/Complex_Bounded_Operators/extra/Extra_Ordered_Fields.thy", - "formal/coq/math-comp/test_suite/test_ssrAC.v", - "formal/afp/Collections/Lib/Partial_Equivalence_Relation.thy", - "formal/mizar/mod_3.miz", - "formal/hol/Help/ss_of_thms.doc", - "formal/afp/UTP/toolkit/Partial_Fun.thy", - "formal/lean/mathlib/analysis/normed_space/extend.lean", - 
"formal/lean/mathzoo/mathzoo/olympiads/imo/1969/p2.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2010/a/p11.lean", - "formal/afp/Real_Time_Deque/Type_Classes.thy", - "formal/afp/Multi_Party_Computation/ETP_RSA_OT.thy", - "formal/afp/Akra_Bazzi/Master_Theorem.thy", - "formal/afp/pGCL/Tutorial/LoopExamples.thy", - "formal/afp/Independence_CH/Interface.thy", - "formal/lean/mathlib/linear_algebra/affine_space/basis.lean", - "formal/afp/JinjaThreads/MM/JMM_Type.thy", - "formal/lean/mathlib/testing/slim_check/functions.lean", - "formal/lean/mathlib/analysis/complex/roots_of_unity.lean", - "formal/afp/Game_Based_Crypto/Hashed_Elgamal.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p769.lean", - "formal/afp/RSAPSS/SHA1.thy", - "formal/afp/Virtual_Substitution/EliminateVariable.thy", - "formal/afp/AWN/OAWN_Invariants.thy", - "formal/hol/Help/is_binary.doc", - "formal/afp/Finite_Fields/Finite_Fields_Preliminary_Results.thy", - "formal/afp/Key_Agreement_Strong_Adversaries/pfslvl3_symmetric.thy", - "formal/lean/sphere-eversion/to_mathlib/measure_theory/parametric_interval_integral.lean", - "formal/hol/RichterHilbertAxiomGeometry/miz3/make.ml", - "formal/lean/mathlib/data/rat/basic.lean", - "formal/hol/Help/STRIP_TAC.doc", - "formal/mizar/scmpds_3.miz", - "formal/lean/mathlib/algebra/continued_fractions/computation/default.lean", - "formal/afp/AODV/variants/e_all_abcd/E_Aodv_Data.thy", - "formal/hol/Help/ARITH_TAC.doc", - "formal/mizar/euclid_4.miz", - "formal/lean/mathlib/data/pnat/xgcd.lean", - "formal/afp/GoedelGod/document/root.tex", - "formal/afp/CCS/Strong_Sim.thy", - "formal/mizar/pl_axiom.miz", - "formal/lean/mathlib/order/lattice.lean", - "formal/afp/Network_Security_Policy_Verification/TopoS_Composition_Theory.thy", - "formal/afp/Consensus_Refined/Observing/BenOr_Proofs.thy", - "formal/afp/Complex_Bounded_Operators/extra/Extra_Jordan_Normal_Form.thy", - "formal/afp/Kleene_Algebra/Omega_Algebra_Models.thy", - "formal/mizar/integr13.miz", - "formal/mizar/vectmetr.miz", - "formal/afp/Ordinary_Differential_Equations/Numerics/Concrete_Rigorous_Numerics.thy", - "formal/mizar/msscyc_2.miz", - "formal/lean/mathlib/algebra/covariant_and_contravariant.lean", - "formal/lean/mathlib/category_theory/limits/shapes/regular_mono.lean", - "formal/afp/CoCon/Review_Confidentiality/Review_RAut_NCPC_PAut.thy", - "formal/lean/lftcm/solutions/thursday/category_theory/exercise6.lean", - "formal/afp/Isabelle_Marries_Dirac/Binary_Nat.thy", - "formal/lean/liquid/pseudo_normed_group/category/ProFiltPseuNormGrp.lean", - "formal/afp/Coinductive/Coinductive_Stream.thy", - "formal/afp/Formula_Derivatives/Examples/WS1S_Presburger_Examples.thy", - "formal/afp/Fermat3_4/Quad_Form.thy", - "formal/afp/Word_Lib/More_Arithmetic.thy", - "formal/mizar/seqm_3.miz", - "formal/afp/RSAPSS/EMSAPSS.thy", - "formal/lean/mathlib/algebra/hom/aut.lean", - "formal/afp/Nested_Multisets_Ordinals/Goodstein_Sequence.thy", - "formal/afp/Consensus_Refined/Consensus_Misc.thy", - "formal/afp/Smooth_Manifolds/Sphere.thy", - "formal/afp/Order_Lattice_Props/Galois_Connections.thy", - "formal/afp/Algebraic_VCs/AVC_KAD/VC_KAD_dual.thy", - "formal/hol/Help/DEDUCT_ANTISYM_RULE.doc", - "formal/lean/mathlib/measure_theory/function/continuous_map_dense.lean", - "formal/lean/mathlib/topology/uniform_space/uniform_convergence_topology.lean", - "formal/afp/JinjaThreads/Compiler/J1.thy", - "formal/lean/mathlib/algebra/algebraic_card.lean", - "formal/afp/Key_Agreement_Strong_Adversaries/sklvl2.thy", - "formal/mizar/jgraph_7.miz", - 
"formal/mizar/quaterni.miz", - "formal/afp/Markov_Models/document/root.tex", - "formal/mizar/analmetr.miz", - "formal/afp/MiniML/document/root.tex", - "formal/afp/Hybrid_Systems_VCs/ModalKleeneAlgebra/HS_VC_MKA.thy", - "formal/lean/liquid/locally_constant/SemiNormedGroup.lean", - "formal/afp/Certification_Monads/Misc.thy", - "formal/afp/Abstract-Hoare-Logics/While/HoareTotal.thy", - "formal/afp/Relation_Algebra/More_Boolean_Algebra.thy", - "formal/hol/Help/REAL_RING.doc", - "formal/lean/liquid/rescale/basic.lean", - "formal/hol/EC/formulary_jacobian.ml", - "formal/afp/Factored_Transition_System_Bounding/SetUtils.thy", - "formal/afp/ComponentDependencies/DataDependencies.thy", - "formal/coq/math-comp/ssreflect/eqtype.v", - "formal/lean/mathlib/set_theory/cardinal/schroeder_bernstein.lean", - "formal/afp/SIFPL/ContextVS.thy", - "formal/afp/Quasi_Borel_Spaces/Exponent_QuasiBorel.thy", - "formal/afp/Native_Word/Native_Word_Test_PolyML.thy", - "formal/afp/Transition_Systems_and_Automata/Automata/DRA/DRA_Combine.thy", - "formal/afp/Dict_Construction/Dict_Construction.thy", - "formal/afp/JinjaDCI/J/SmallStep.thy", - "formal/afp/CakeML/generated/CakeML/Ast.thy", - "formal/mizar/combgras.miz", - "formal/afp/VYDRA_MDL/Timestamp.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2000/p15.lean", - "formal/afp/Topology/Topology.thy", - "formal/afp/Affine_Arithmetic/Affine_Arithmetic.thy", - "formal/afp/Regular-Sets/NDerivative.thy", - "formal/mizar/t_0topsp.miz", - "formal/afp/Routing/Linux_Router.thy", - "formal/afp/Probabilistic_System_Zoo/Finitely_Bounded_Set_Counterexample.thy", - "formal/afp/Cotangent_PFD_Formula/document/root.tex", - "formal/lean/mathlib/analysis/locally_convex/with_seminorms.lean", - "formal/mizar/cat_1.miz", - "formal/lean/mathlib/linear_algebra/unitary_group.lean", - "formal/lean/mathlib/algebra/associated.lean", - "formal/hol/Help/IMP_RES_THEN.doc", - "formal/afp/Collections/GenCF/GenCF_Chapter.thy", - "formal/afp/Word_Lib/Bit_Comprehension.thy", - "formal/mizar/jordan1g.miz", - "formal/afp/Gromov_Hyperbolicity/Eexp_Eln.thy", - "formal/afp/Mereology/GM.thy", - "formal/hol/Help/CONJUNCTS_THEN.doc", - "formal/mizar/grcat_1.miz", - "formal/afp/Affine_Arithmetic/Ex_Inter.thy", - "formal/afp/LambdaAuth/Semantics.thy", - "formal/lean/mathlib/ring_theory/graded_algebra/radical.lean", - "formal/afp/Independence_CH/Forces_Definition.thy", - "formal/hol/Help/distinctness.doc", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_Subnets_impl.thy", - "formal/afp/Graph_Theory/Digraph_Isomorphism.thy", - "formal/afp/Hoare_Time/SepLogAdd/Product_Separation_Algebra.thy", - "formal/afp/Partial_Order_Reduction/Transition_System_Traces.thy", - "formal/afp/Verified_SAT_Based_AI_Planning/SAT_Solve_SAS_Plus.thy", - "formal/hol/Help/thenl_.doc", - "formal/mizar/ami_4.miz", - "formal/lean/sphere-eversion/to_mathlib/topology/hausdorff_distance.lean", - "formal/afp/Virtual_Substitution/Reindex.thy", - "formal/afp/CZH_Foundations/czh_digraphs/CZH_DG_Set.thy", - "formal/lean/mathlib/algebra/group_with_zero/power.lean", - "formal/afp/Nested_Multisets_Ordinals/Hereditary_Multiset.thy", - "formal/lean/lftcm/hints/category_theory/exercise6/hint2.lean", - "formal/afp/Flyspeck-Tame/TameEnum.thy", - "formal/afp/Saturation_Framework_Extensions/FO_Ordered_Resolution_Prover_Revisited.thy", - "formal/hol/Help/REAL_INT_GT_CONV.doc", - "formal/afp/MFODL_Monitor_Optimized/Event_Data.thy", - "formal/afp/Transition_Systems_and_Automata/Automata/DBA/DBA.thy", - 
"formal/lean/mathlib/order/order_iso_nat.lean", - "formal/afp/PseudoHoops/Operations.thy", - "formal/afp/CAVA_Automata/CAVA_Base/CAVA_Code_Target.thy", - "formal/mizar/ec_pf_3.miz", - "formal/afp/Winding_Number_Eval/document/root.tex", - "formal/mizar/sprect_4.miz", - "formal/afp/Padic_Ints/Hensels_Lemma.thy", - "formal/hol/Help/defined.doc", - "formal/afp/JinjaThreads/Execute/Execute_Main.thy", - "formal/lean/mathlib/topology/algebra/module/locally_convex.lean", - "formal/afp/Gromov_Hyperbolicity/Isometries_Classification.thy", - "formal/afp/Randomised_Social_Choice/Elections.thy", - "formal/afp/BNF_CC/Axiomatised_BNF_CC.thy", - "formal/afp/Perron_Frobenius/Perron_Frobenius.thy", - "formal/afp/Transitive-Closure/document/root.tex", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/amgm_sumasqdivbsqgeqsumbdiva.lean", - "formal/mizar/tdgroup.miz", - "formal/afp/Refine_Imperative_HOL/IICF/Impl/IICF_List_Mset.thy", - "formal/lean/mathlib/category_theory/monad/coequalizer.lean", - "formal/lean/mathlib/field_theory/finite/basic.lean", - "formal/afp/Core_SC_DOM/document/root.tex", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p235.lean", - "formal/lean/mathlib/category_theory/functor/const.lean", - "formal/afp/Decl_Sem_Fun_PL/DeclSemAsDenot.thy", - "formal/afp/Universal_Turing_Machine/UF.thy", - "formal/afp/CryptoBasedCompositionalProperties/Secrecy_types.thy", - "formal/lean/mathlib/category_theory/limits/shapes/default.lean", - "formal/hol/Help/PURE_SIMP_TAC.doc", - "formal/afp/Clean/examples/LinearSearch.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/numbertheory/3pow2pownm1mod2pownp3eq2pownp2.lean", - "formal/hol/Help/EQT_ELIM.doc", - "formal/afp/Padic_Ints/Padic_Int_Topology.thy", - "formal/afp/Groebner_Macaulay/Dube_Bound.thy", - "formal/lean/mathlib/algebra/algebra/tower.lean", - "formal/afp/Launchbury/ValueSimilarity.thy", - "formal/lean/mathlib/measure_theory/function/egorov.lean", - "formal/afp/Echelon_Form/Rings2.thy", - "formal/lean/mathlib/algebraic_geometry/Gamma_Spec_adjunction.lean", - "formal/afp/Extended_Finite_State_Machine_Inference/Subsumption.thy", - "formal/mizar/newton03.miz", - "formal/afp/Forcing/Infinity_Axiom.thy", - "formal/afp/Sunflowers/Erdos_Rado_Sunflower.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p137.lean", - "formal/afp/Pell/Efficient_Discrete_Sqrt.thy", - "formal/mizar/armstrng.miz", - "formal/afp/MFOTL_Monitor/Abstract_Monitor.thy", - "formal/mizar/msafree2.miz", - "formal/afp/Randomised_Social_Choice/Automation/Preference_Profile_Cmd.thy", - "formal/afp/pGCL/StructuredReasoning.thy", - "formal/hol/Help/quotexpander.doc", - "formal/afp/Verified_SAT_Based_AI_Planning/SAT_Plan_Extensions.thy", - "formal/hol/Help/print_goal.doc", - "formal/afp/HOL-CSP/Mndet.thy", - "formal/afp/Key_Agreement_Strong_Adversaries/Refinement.thy", - "formal/mizar/goboard2.miz", - "formal/lean/mathlib/algebra/quadratic_discriminant.lean", - "formal/mizar/relat_2.miz", - "formal/hol/Help/combine.doc", - "formal/mizar/memstr_0.miz", - "formal/afp/Network_Security_Policy_Verification/Example_BLP.thy", - "formal/afp/CakeML/generated/CakeML/SmallStep.thy", - "formal/lean/mathlib/category_theory/limits/preserves/basic.lean", - "formal/mizar/jordan20.miz", - "formal/afp/Safe_OCL/Finite_Map_Ext.thy", - "formal/afp/Ordinary_Differential_Equations/IVP/Initial_Value_Problem.thy", - "formal/afp/Conditional_Transfer_Rule/document/root.tex", - "formal/lean/mathzoo/mathzoo/olympiads/imo/2011/p3.lean", - 
"formal/lean/mathlib/category_theory/abelian/transfer.lean", - "formal/hol/Help/AP_THM.doc", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1985/p6.lean", - "formal/mizar/hilbert2.miz", - "formal/lean/mathlib/category_theory/sites/compatible_sheafification.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p739.lean", - "formal/lean/mathlib/algebra/char_zero.lean", - "formal/lean/mathlib/topology/sheaves/sheaf_condition/pairwise_intersections.lean", - "formal/afp/HRB-Slicing/Proc/Interpretation.thy", - "formal/coq/analysis/boolp.v", - "formal/hol/Help/BETA_CONV.doc", - "formal/lean/mathlib/data/setoid/partition.lean", - "formal/afp/Lambert_W/Lambert_W_MacLaurin_Series.thy", - "formal/lean/mathlib/data/mv_polynomial/pderiv.lean", - "formal/afp/Card_Multisets/document/root.tex", - "formal/hol/100/heron.ml", - "formal/afp/Attack_Trees/MC.thy", - "formal/afp/Tree-Automata/document/intro.tex", - "formal/lean/liquid/for_mathlib/homotopy_category.lean", - "formal/afp/CoreC++/Decl.thy", - "formal/afp/Stateful_Protocol_Composition_and_Typing/Lazy_Intruder.thy", - "formal/afp/Topological_Semantics/topo_negation_conditions.thy", - "formal/mizar/monoid_0.miz", - "formal/afp/Refine_Imperative_HOL/Userguides/Sepref_Guide_Reference.thy", - "formal/mizar/filter_1.miz", - "formal/afp/JinjaThreads/MM/JMM_DRF.thy", - "formal/afp/BenOr_Kozen_Reif/BKR_Algorithm.thy", - "formal/afp/Nominal2/Nominal2_FCB.thy", - "formal/afp/Winding_Number_Eval/Missing_Algebraic.thy", - "formal/afp/UpDown_Scheme/Grid.thy", - "formal/lean/mathlib/category_theory/sites/spaces.lean", - "formal/lean/mathlib/topology/spectral/hom.lean", - "formal/hol/Help/unzip.doc", - "formal/lean/mathlib/data/nat/factorial/basic.lean", - "formal/afp/MiniSail/Safety.thy", - "formal/lean/liquid/for_mathlib/exact_filtered_colimits.lean", - "formal/lean/mathlib/logic/relator.lean", - "formal/afp/Automatic_Refinement/Lib/Refine_Util_Bootstrap1.thy", - "formal/afp/Regular_Tree_Relations/Horn_Setup/Horn_List.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p343.lean", - "formal/afp/Certification_Monads/document/root.tex", - "formal/afp/LLL_Basis_Reduction/Gram_Schmidt_Int.thy", - "formal/lean/mathlib/data/fin/vec_notation.lean", - "formal/hol/Help/FIRST_TCL.doc", - "formal/lean/sphere-eversion/to_mathlib/data/set/basic.lean", - "formal/afp/Hybrid_Systems_VCs/KleeneAlgebraTests/HS_VC_KAT_Examples_ndfun.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2002/a/p6.lean", - "formal/afp/Weighted_Path_Order/Multiset_Extension2_Impl.thy", - "formal/hol/Help/ONCE_SIMP_TAC.doc", - "formal/afp/Collections/Refine_Dflt_Only_ICF.thy", - "formal/afp/Call_Arity/Arity-Nominal.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2021/b/p9.lean", - "formal/afp/Dirichlet_Series/Dirichlet_Efficient_Code.thy", - "formal/mizar/nat_3.miz", - "formal/afp/Gabow_SCC/Gabow_GBG_Code.thy", - "formal/afp/Differential_Dynamic_Logic/Pretty_Printer.thy", - "formal/hol/Help/last.doc", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p451.lean", - "formal/lean/mathlib/data/nat/choose/vandermonde.lean", - "formal/afp/Frequency_Moments/Frequency_Moments.thy", - "formal/hol/Help/dest_conj.doc", - "formal/afp/Complex_Bounded_Operators/Complex_Bounded_Linear_Function0.thy", - "formal/afp/Core_DOM/common/pointers/Ref.thy", - "formal/afp/Regular_Tree_Relations/Util/Ground_Closure.thy", - "formal/hol/Mizarlight/duality_holby.ml", - "formal/afp/Gauss_Jordan/Elementary_Operations.thy", - "formal/afp/Stone_Kleene_Relation_Algebras/Kleene_Algebras.thy", - 
"formal/afp/Isabelle_C/C11-FrontEnd/src/C_Parser_Annotation.thy", - "formal/lean/mathlib/algebra/hom/iterate.lean", - "formal/lean/mathlib/algebra/group/semiconj.lean", - "formal/afp/Monomorphic_Monad/Just_Do_It_Examples.thy", - "formal/hol/Help/lift_theorem.doc", - "formal/afp/Network_Security_Policy_Verification/TopoS_Library.thy", - "formal/hol/Help/derive_nonschematic_inductive_relations.doc", - "formal/afp/GewirthPGCProof/CJDDLplus.thy", - "formal/lean/mathlib/analysis/normed_space/completion.lean", - "formal/afp/Berlekamp_Zassenhaus/Arithmetic_Record_Based.thy", - "formal/afp/Category3/BinaryFunctor.thy", - "formal/afp/Cayley_Hamilton/Square_Matrix.thy", - "formal/lean/mathlib/geometry/manifold/algebra/structures.lean", - "formal/hol/Help/NUM_EQ_CONV.doc", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p362.lean", - "formal/afp/Finitely_Generated_Abelian_Groups/Generated_Groups_Extend.thy", - "formal/lean/mathlib/topology/semicontinuous.lean", - "formal/afp/DPRM_Theorem/Diophantine/Assignments.thy", - "formal/afp/IsaNet/Parametrized_Dataplane_0.thy", - "formal/mizar/rfunct_4.miz", - "formal/afp/Interpreter_Optimizations/Option_Extra.thy", - "formal/lean/mathlib/data/rel.lean", - "formal/afp/WOOT_Strong_Eventual_Consistency/SEC.thy", - "formal/afp/CoSMeDis/Friend_Confidentiality/Friend_Openness.thy", - "formal/lean/mathlib/algebra/order/monoid_lemmas.lean", - "formal/afp/QHLProver/Grover.thy", - "formal/afp/IP_Addresses/Prefix_Match.thy", - "formal/hol/Help/REAL_RAT_INV_CONV.doc", - "formal/hol/Multivariate/msum.ml", - "formal/mizar/topalg_3.miz", - "formal/afp/Partial_Order_Reduction/Extensions/ESet_Extensions.thy", - "formal/afp/Probabilistic_Prime_Tests/document/root.tex", - "formal/afp/Linear_Recurrences/Linear_Homogenous_Recurrences.thy", - "formal/lean/mathlib/data/finset/preimage.lean", - "formal/lean/liquid/Lbar/kernel_truncate.lean", - "formal/mizar/prefer_1.miz", - "formal/afp/Polynomial_Factorization/Precomputation.thy", - "formal/lean/liquid/condensed/filtered_colimits.lean", - "formal/afp/Bounded_Deducibility_Security/IO_Automaton.thy", - "formal/afp/UTP/toolkit/Sequence.thy", - "formal/afp/Call_Arity/AbstractTransform.thy", - "formal/afp/CAVA_LTL_Modelchecker/SM/Impl/SM_Wrapup.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p141.lean", - "formal/afp/JinjaDCI/BV/EffectMono.thy", - "formal/lean/mathlib/category_theory/preadditive/yoneda.lean", - "formal/hol/Rqe/dedmatrix_thms.ml", - "formal/lean/mathlib/ring_theory/tensor_product.lean", - "formal/afp/Foundation_of_geometry/Order.thy", - "formal/mizar/bspace.miz", - "formal/afp/Refine_Imperative_HOL/Examples/Sepref_NDFS.thy", - "formal/afp/Slicing/Basic/CFGExit_wf.thy", - "formal/afp/JinjaThreads/BV/EffectMono.thy", - "formal/hol/Help/BETA_TAC.doc", - "formal/afp/Topology/LList_Topology.thy", - "formal/afp/Complex_Geometry/Linear_Systems.thy", - "formal/afp/Goedel_Incompleteness/Abstract_First_Goedel_Rosser.thy", - "formal/afp/Encodability_Process_Calculi/Encodings.thy", - "formal/afp/Types_To_Sets_Extension/Examples/SML_Relativization/Algebra/SML_Rings.thy", - "formal/lean/mathlib/analysis/special_functions/log/basic.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p64.lean", - "formal/lean/liquid/for_mathlib/Profinite/extend.lean", - "formal/afp/CoCon/Decision_Confidentiality/Decision_Value_Setup.thy", - "formal/mizar/glib_013.miz", - "formal/afp/Flyspeck-Tame/EnumeratorProps.thy", - "formal/hol/Help/mk_comb.doc", - "formal/hol/Help/MK_FORALL_UPPERCASE.doc", - 
"formal/mizar/lattice3.miz", - "formal/afp/VerifyThis2018/lib/VTcomp.thy", - "formal/afp/Decl_Sem_Fun_PL/ValuesFSetProps.thy", - "formal/mizar/partit1.miz", - "formal/lean/mathlib/analysis/convex/star.lean", - "formal/afp/List_Update/On_Off.thy", - "formal/afp/LocalLexing/LocalLexing.thy", - "formal/afp/CakeML_Codegen/Rewriting/Rewriting_Nterm.thy", - "formal/hol/Help/new_inductive_definition.doc", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_SemiCAT.thy", - "formal/afp/Simplicial_complexes_and_boolean_functions/ListLexorder.thy", - "formal/afp/ConcurrentIMP/ex/CIMP_one_place_buffer.thy", - "formal/afp/UTP/utp/utp_sp.thy", - "formal/mizar/measure7.miz", - "formal/lean/liquid/for_mathlib/AddCommGroup/exact.lean", - "formal/afp/Pi_Calculus/Res_Pres.thy", - "formal/afp/Case_Labeling/Case_Labeling_Examples.thy", - "formal/afp/PseudoHoops/PseudoHoops.thy", - "formal/hol/GL/modal.ml", - "formal/afp/Flyspeck-Tame/Arch.thy", - "formal/afp/Saturation_Framework/Calculus.thy", - "formal/afp/Lazy-Lists-II/LList2.thy", - "formal/afp/Topological_Semantics/sse_operation_negative_quantification.thy", - "formal/afp/Iptables_Semantics/Examples/SQRL_Shorewall/Analyze_SQRL_Shorewall.thy", - "formal/afp/Extended_Finite_State_Machine_Inference/heuristics/Increment_Reset.thy", - "formal/afp/AODV/variants/d_fwdrreqs/D_Aodv_Loop_Freedom.thy", - "formal/lean/mathlib/order/default.lean", - "formal/mizar/freealg.miz", - "formal/lean/mathlib/order/hom/basic.lean", - "formal/lean/mathlib/measure_theory/measurable_space_def.lean", - "formal/lean/sphere-eversion/to_mathlib/analysis/cont_diff.lean", - "formal/hol/Help/basic_rewrites.doc", - "formal/mizar/mfold_0.miz", - "formal/hol/EC/nistp224.ml", - "formal/afp/PseudoHoops/Examples.thy", - "formal/afp/CISC-Kernel/trace/Rushby-with-Control/K.thy", - "formal/afp/Complx/Language.thy", - "formal/afp/Paraconsistency/Paraconsistency.thy", - "formal/mizar/series_3.miz", - "formal/lean/mathlib/data/dfinsupp/interval.lean", - "formal/afp/Group-Ring-Module/Algebra9.thy", - "formal/coq/math-comp/ssreflect/ssrmatching.v", - "formal/afp/Psi_Calculi/Sum.thy", - "formal/lean/mathlib/ring_theory/witt_vector/domain.lean", - "formal/coq/math-comp/solvable/cyclic.v", - "formal/mizar/integra7.miz", - "formal/afp/LightweightJava/Lightweight_Java_Definition.thy", - "formal/lean/mathlib/order/filter/at_top_bot.lean", - "formal/afp/AODV/variants/a_norreqid/A_Aodv_Message.thy", - "formal/hol/100/gcd.ml", - "formal/afp/CCS/Weak_Cong_Sim.thy", - "formal/lean/liquid/condensed/projective_resolution.lean", - "formal/afp/Dominance_CHK/Dom_Semi_List.thy", - "formal/afp/Types_To_Sets_Extension/Examples/SML_Relativization/Foundations/Set_Ext.thy", - "formal/afp/Subresultants/Resultant_Prelim.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p234.lean", - "formal/lean/mathlib/dynamics/minimal.lean", - "formal/mizar/graph_2.miz", - "formal/afp/AODV/variants/e_all_abcd/E_Loop_Freedom.thy", - "formal/lean/mathlib/topology/continuous_function/t0_sierpinski.lean", - "formal/afp/QR_Decomposition/Examples_QR_IArrays_Symbolic.thy", - "formal/afp/Collections/ICF/impl/ListMapImpl_Invar.thy", - "formal/afp/Core_SC_DOM/common/monads/ElementMonad.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p35.lean", - "formal/afp/Pi_Calculus/Strong_Late_Sim.thy", - "formal/afp/CISC-Kernel/step/Step_policies.thy", - "formal/lean/mathlib/set_theory/zfc/ordinal.lean", - "formal/lean/mathlib/linear_algebra/std_basis.lean", - "formal/afp/Probabilistic_System_Zoo/Vardi_Counterexample.thy", - 
"formal/hol/Help/strip_gabs.doc", - "formal/mizar/yellow_9.miz", - "formal/afp/Noninterference_Inductive_Unwinding/document/root.tex", - "formal/afp/Dependent_SIFUM_Type_Systems/document/root.tex", - "formal/mizar/fdiff_5.miz", - "formal/hol/Help/prove_constructors_distinct.doc", - "formal/afp/HereditarilyFinite/Ordinal.thy", - "formal/afp/Applicative_Lifting/Applicative_List.thy", - "formal/lean/mathlib/model_theory/definability.lean", - "formal/afp/Iptables_Semantics/Examples/containern/Analyze_Containern.thy", - "formal/afp/Conditional_Transfer_Rule/CTR/CTR.thy", - "formal/mizar/scmfsa8c.miz", - "formal/afp/Launchbury/Adequacy.thy", - "formal/lean/mathlib/order/category/LinearOrder.lean", - "formal/afp/PAC_Checker/PAC_Checker_MLton.thy", - "formal/coq/odd-order/BGsection5.v", - "formal/hol/Help/some.doc", - "formal/afp/CoSMeDis/Outer_Friend_Confidentiality/Receiver/Outer_Friend_Receiver.thy", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_Yoneda.thy", - "formal/lean/liquid/combinatorial_lemma/default.lean", - "formal/afp/Hood_Melville_Queue/Hood_Melville_Queue.thy", - "formal/mizar/jordan7.miz", - "formal/afp/MiniML/MiniML.thy", - "formal/afp/Quasi_Borel_Spaces/CoProduct_QuasiBorel.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p135.lean", - "formal/afp/Iptables_Semantics/Semantics_Ternary/MatchExpr_Fold.thy", - "formal/lean/mathlib/data/set/intervals/infinite.lean", - "formal/hol/Help/HYP_TAC.doc", - "formal/lean/mathlib/data/fin/interval.lean", - "formal/afp/Regular-Sets/Regular_Set.thy", - "formal/lean/mathlib/set_theory/game/short.lean", - "formal/afp/Slicing/JinjaVM/JVMControlDependences.thy", - "formal/afp/TESL_Language/Corecursive_Prop.thy", - "formal/afp/Mereology/document/root.tex", - "formal/hol/Help/parses_as_binder.doc", - "formal/afp/Lazy-Lists-II/document/root.tex", - "formal/hol/Rqe/rqe_tactics_ext.ml", - "formal/hol/Help/binders.doc", - "formal/afp/Sturm_Sequences/Lib/Sturm_Library.thy", - "formal/hol/Help/REAL_RAT_REDUCE_CONV.doc", - "formal/afp/Stellar_Quorums/document/root.tex", - "formal/afp/Sigma_Commit_Crypto/Uniform_Sampling.thy", - "formal/hol/Help/SELECT_ELIM_TAC.doc", - "formal/afp/SATSolverVerification/UnitPropagate.thy", - "formal/mizar/yellow12.miz", - "formal/afp/Integration/document/intro.tex", - "formal/lean/mathlib/field_theory/separable_degree.lean", - "formal/afp/Virtual_Substitution/LinearCase.thy", - "formal/afp/CoCon/Discussion_Confidentiality/Discussion_NCPC.thy", - "formal/afp/Secondary_Sylow/SndSylow.thy", - "formal/afp/Native_Word/Native_Word_Test_OCaml.thy", - "formal/hol/Help/PRENEX_CONV.doc", - "formal/lean/mathlib/algebra/default.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2021/a/p7.lean", - "formal/lean/mathlib/analysis/normed_space/algebra.lean", - "formal/afp/Abstract-Hoare-Logics/Procs/PsHoare.thy", - "formal/hol/EC/edwards448.ml", - "formal/afp/Collections/Examples/Refine_Monadic/Refine_Monadic_Examples_Chapter.thy", - "formal/afp/Transition_Systems_and_Automata/Basic/Implement.thy", - "formal/mizar/gtarski2.miz", - "formal/mizar/glib_012.miz", - "formal/afp/Linear_Recurrences/Linear_Recurrences_Misc.thy", - "formal/mizar/quatern2.miz", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_Tainting_impl.thy", - "formal/hol/Boyer_Moore/rewrite_rules.ml", - "formal/afp/MDP-Rewards/MDP_reward.thy", - "formal/mizar/pythtrip.miz", - "formal/mizar/convex1.miz", - "formal/afp/KBPs/ClockView.thy", - "formal/afp/AODV/variants/b_fwdrreps/B_Aodv.thy", - 
"formal/afp/UPF_Firewall/PacketFilter/IPv4.thy", - "formal/afp/Nominal2/Nominal2_Base.thy", - "formal/lean/mathlib/algebra/group/to_additive.lean", - "formal/lean/liquid/for_mathlib/sheaf.lean", - "formal/lean/mathlib/geometry/manifold/metrizable.lean", - "formal/afp/MDP-Algorithms/Value_Iteration.thy", - "formal/afp/Factor_Algebraic_Polynomial/document/root.tex", - "formal/afp/Isabelle_C/C11-FrontEnd/src/C_Lexer_Language.thy", - "formal/afp/Ordinary_Differential_Equations/Library/Interval_Integral_HK.thy", - "formal/lean/mathlib/set_theory/ordinal/fixed_point.lean", - "formal/afp/Heard_Of/ate/AteProof.thy", - "formal/afp/TLA/Inc.thy", - "formal/afp/Randomised_Social_Choice/Lotteries.thy", - "formal/afp/Factored_Transition_System_Bounding/RelUtils.thy", - "formal/afp/Markov_Models/ex/MDP_RP_Certification.thy", - "formal/hol/Help/is_neg.doc", - "formal/hol/Help/find_path.doc", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p104.lean", - "formal/mizar/revrot_1.miz", - "formal/mizar/goboard4.miz", - "formal/afp/Nullstellensatz/document/root.tex", - "formal/lean/lftcm/hints/category_theory/exercise3/hint7.lean", - "formal/hol/Help/NNFC_CONV.doc", - "formal/afp/Pi_Calculus/Strong_Late_Bisim_Subst.thy", - "formal/afp/LLL_Factorization/LLL_Factorization.thy", - "formal/afp/Polynomials/Show_Polynomials.thy", - "formal/hol/Logic/resolution.ml", - "formal/afp/Word_Lib/Signed_Words.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/induction/pord1p1on2powklt5on2.lean", - "formal/afp/Van_Emde_Boas_Trees/Separation_Logic_Imperative_HOL/Refine_Imp_Hol.thy", - "formal/lean/mathlib/combinatorics/simple_graph/regularity/uniform.lean", - "formal/afp/Ergodic_Theory/Ergodicity.thy", - "formal/hol/Help/REAL_RAT_ADD_CONV.doc", - "formal/afp/Conditional_Transfer_Rule/CTR/CTR_Reference.thy", - "formal/lean/liquid/for_mathlib/homotopy_category_coproducts.lean", - "formal/lean/mathlib/algebra/group/pi.lean", - "formal/afp/FLP/FLPSystem.thy", - "formal/afp/SPARCv8/SparcModel_MMU/Sparc_Types.thy", - "formal/lean/perfectoid/Spa/localization_Huber.lean", - "formal/lean/mathlib/data/nat/choose/dvd.lean", - "formal/afp/Modular_Assembly_Kit_Security/Verification/Compositionality/CompositionalityResults.thy", - "formal/afp/Core_DOM/common/tests/Node_insertBefore.thy", - "formal/afp/Call_Arity/ArityAnalysisAbinds.thy", - "formal/afp/UTP/utp/utp_expr_funcs.thy", - "formal/mizar/glib_010.miz", - "formal/afp/Laws_of_Large_Numbers/Laws_of_Large_Numbers_Example.thy", - "formal/lean/perfectoid/sheaves/opens.lean", - "formal/afp/Planarity_Certificates/Verification/Check_Non_Planarity_Impl.thy", - "formal/afp/ConcurrentGC/Noninterference.thy", - "formal/lean/perfectoid/Tate_ring.lean", - "formal/mizar/rolle.miz", - "formal/afp/Types_Tableaus_and_Goedels_God/AndersonProof.thy", - "formal/afp/Transition_Systems_and_Automata/Automata/Deterministic.thy", - "formal/afp/IP_Addresses/IP_Address.thy", - "formal/lean/mathlib/ring_theory/laurent_series.lean", - "formal/afp/Partial_Order_Reduction/Extensions/Nondeterminism.thy", - "formal/hol/Help/GEN_ALL.doc", - "formal/lean/mathlib/analysis/calculus/fderiv_analytic.lean", - "formal/lean/mathlib/linear_algebra/invariant_basis_number.lean", - "formal/lean/mathlib/ring_theory/localization/localization_localization.lean", - "formal/afp/Applicative_Lifting/Applicative_Sum.thy", - "formal/afp/Automated_Stateful_Protocol_Verification/examples/PKCS/PKCS_Model07.thy", - "formal/afp/Decl_Sem_Fun_PL/SmallStepLam.thy", - "formal/afp/Timed_Automata/Floyd_Warshall.thy", - 
"formal/afp/JinjaThreads/Framework/FWLTS.thy", - "formal/afp/DiscretePricing/document/root.tex", - "formal/lean/mathlib/data/rbmap/default.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p77.lean", - "formal/afp/CoCon/Reviewer_Assignment_Confidentiality/Reviewer_Assignment_All.thy", - "formal/mizar/finance5.miz", - "formal/afp/Weighted_Path_Order/List_Order.thy", - "formal/afp/Transitive_Models/Recursion_Thms.thy", - "formal/lean/mathlib/data/polynomial/default.lean", - "formal/mizar/algstr_3.miz", - "formal/lean/liquid/for_mathlib/Profinite/compat_discrete_quotient.lean", - "formal/afp/Gauss_Jordan/Inverse.thy", - "formal/hol/Help/TOP_SWEEP_SQCONV.doc", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p455.lean", - "formal/afp/Isabelle_C/C11-FrontEnd/src/C_Environment.thy", - "formal/afp/Stone_Kleene_Relation_Algebras/Iterings.thy", - "formal/afp/Refine_Imperative_HOL/IICF/Impl/IICF_Array_List.thy", - "formal/hol/EC/projective.ml", - "formal/hol/Help/ignore_constant_varstruct.doc", - "formal/afp/JinjaThreads/JVM/JVMState.thy", - "formal/afp/Inductive_Confidentiality/GeneralAttacker/NS_Public_Bad_GA.thy", - "formal/hol/Help/is_comb.doc", - "formal/lean/mathlib/data/set/intervals/surj_on.lean", - "formal/lean/mathlib/category_theory/whiskering.lean", - "formal/afp/Factor_Algebraic_Polynomial/Roots_of_Real_Complex_Poly.thy", - "formal/afp/Probabilistic_Prime_Tests/Jacobi_Symbol.thy", - "formal/hol/100/friendship.ml", - "formal/afp/Paraconsistency/document/root.tex", - "formal/afp/UPF/Analysis.thy", - "formal/afp/Independence_CH/Choice_Axiom.thy", - "formal/coq/analysis/landau.v", - "formal/afp/Nash_Williams/Nash_Williams.thy", - "formal/afp/Graph_Theory/Digraph_Component.thy", - "formal/mizar/openlatt.miz", - "formal/afp/Network_Security_Policy_Verification/Examples/Tainting/IDEM.thy", - "formal/hol/Help/mk_neg.doc", - "formal/afp/DFS_Framework/Examples/DFS_All_Examples.thy", - "formal/mizar/uniroots.miz", - "formal/afp/UTP/utp/utp_recursion.thy", - "formal/lean/mathlib/computability/encoding.lean", - "formal/afp/Featherweight_OCL/document/FOCL_Syntax.tex", - "formal/hol/Help/INT_ARITH_TAC.doc", - "formal/hol/Help/leftbin.doc", - "formal/afp/Propositional_Proof_Systems/MiniFormulas.thy", - "formal/afp/Flyspeck-Tame/ListAux.thy", - "formal/afp/Clean/src/Clean.thy", - "formal/afp/CoSMeDis/Outer_Friend_Confidentiality/Outer_Friend_Intro.thy", - "formal/afp/Integration/document/outro.tex", - "formal/hol/Examples/lagrange_lemma.ml", - "formal/afp/PseudoHoops/RightComplementedMonoid.thy", - "formal/afp/Finitely_Generated_Abelian_Groups/IDirProds.thy", - "formal/afp/TESL_Language/StutteringDefs.thy", - "formal/afp/Optics/Scenes.thy", - "formal/lean/mathlib/probability/notation.lean", - "formal/afp/Minimal_SSA/Irreducible.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p303.lean", - "formal/afp/ComponentDependencies/document/intro.tex", - "formal/lean/mathlib/data/finsupp/multiset.lean", - "formal/afp/Gauss_Jordan/Examples_Gauss_Jordan_Abstract.thy", - "formal/lean/liquid/laurent_measures/condensed.lean", - "formal/afp/Game_Based_Crypto/IND_CPA_PK_Single.thy", - "formal/afp/HOL-CSP/Skip.thy", - "formal/afp/Twelvefold_Way/Twelvefold_Way_Entry9.thy", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1984/p5.lean", - "formal/lean/mathlib/number_theory/liouville/basic.lean", - "formal/lean/perfectoid/valuation/with_zero_topology.lean", - "formal/afp/JinjaDCI/J/Progress.thy", - "formal/lean/mathlib/linear_algebra/span.lean", - 
"formal/afp/Deriving/Countable_Generator/Countable_Generator.thy", - "formal/afp/Transition_Systems_and_Automata/Basic/Basic.thy", - "formal/afp/MFMC_Countable/MFMC_Network.thy", - "formal/hol/Help/NNF_CONV.doc", - "formal/mizar/waybel13.miz", - "formal/afp/Regular_Tree_Relations/Horn_Setup/Horn_Fset.thy", - "formal/afp/Isabelle_Meta_Model/toy_example/embedding/Core.thy", - "formal/lean/mathlib/number_theory/legendre_symbol/add_character.lean", - "formal/afp/Landau_Symbols/Landau_More.thy", - "formal/hol/Tutorial/Linking_external_tools.ml", - "formal/afp/Binomial-Queues/document/root.tex", - "formal/afp/CakeML/generated/Lem_show_extra.thy", - "formal/lean/mathlib/data/typevec.lean", - "formal/afp/Prim_Dijkstra_Simple/Dijkstra_Impl.thy", - "formal/afp/Knot_Theory/Computations.thy", - "formal/afp/Ordered_Resolution_Prover/Standard_Redundancy.thy", - "formal/afp/VYDRA_MDL/Temporal.thy", - "formal/hol/Library/calc_real.ml", - "formal/mizar/matrix11.miz", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1996/p5.lean", - "formal/afp/LTL_to_GBA/LTL_to_GBA_impl.thy", - "formal/afp/Collections/GenCF/Impl/Impl_List_Map.thy", - "formal/afp/AWN/Closed.thy", - "formal/afp/Jinja/Compiler/Correctness1.thy", - "formal/hol/Help/INTEGER_TAC.doc", - "formal/hol/Help/INTRO_TAC.doc", - "formal/afp/CoreC++/HeapExtension.thy", - "formal/afp/Dirichlet_Series/Multiplicative_Function.thy", - "formal/afp/Goedel_HFSet_Semanticless/Instance.thy", - "formal/lean/mathlib/algebra/big_operators/ring.lean", - "formal/afp/First_Welfare_Theorem/Argmax.thy", - "formal/afp/FOL_Seq_Calc1/Common.thy", - "formal/lean/mathlib/analysis/locally_convex/basic.lean", - "formal/afp/CSP_RefTK/Introduction.thy", - "formal/afp/Echelon_Form/document/root.tex", - "formal/afp/Core_SC_DOM/common/monads/NodeMonad.thy", - "formal/afp/Stream-Fusion/Stream.thy", - "formal/mizar/hurwitz.miz", - "formal/afp/Fourier/Periodic.thy", - "formal/coq/odd-order/BGsection13.v", - "formal/afp/InfPathElimination/Aexp.thy", - "formal/afp/Matrices_for_ODEs/MTX_Norms.thy", - "formal/afp/QHLProver/Quantum_Program.thy", - "formal/mizar/calcul_1.miz", - "formal/mizar/integr21.miz", - "formal/afp/X86_Semantics/X86_Parse.thy", - "formal/mizar/glib_002.miz", - "formal/afp/CakeML_Codegen/document/root.tex", - "formal/hol/Help/INT_POLY_CONV.doc", - "formal/afp/Fishers_Inequality/document/root.tex", - "formal/lean/mathlib/probability/variance.lean", - "formal/afp/VerifyThis2018/document/root.tex", - "formal/mizar/triang_1.miz", - "formal/afp/Allen_Calculus/disjoint_relations.thy", - "formal/afp/Finite_Fields/Card_Irreducible_Polynomials_Aux.thy", - "formal/afp/Fishers_Inequality/Incidence_Matrices.thy", - "formal/afp/Probabilistic_While/Geometric.thy", - "formal/mizar/topalg_2.miz", - "formal/afp/Transition_Systems_and_Automata/Automata/NBA/NGBA_Graphs.thy", - "formal/mizar/lopban_8.miz", - "formal/afp/Forcing/Forcing_Main.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/manipexpr_2erprsqpesqeqnrpnesq.lean", - "formal/afp/JinjaThreads/DFA/Abstract_BV.thy", - "formal/afp/Independence_CH/Forcing_Notions.thy", - "formal/lean/mathlib/logic/equiv/transfer_instance.lean", - "formal/mizar/series_1.miz", - "formal/lean/mathlib/combinatorics/hindman.lean", - "formal/mizar/scmpds_8.miz", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_BLPtrusted_impl.thy", - "formal/mizar/aff_4.miz", - "formal/lean/perfectoid/Spa/rational_open_data.lean", - "formal/coq/analysis/lebesgue_measure.v", - "formal/mizar/valued_1.miz", - 
"formal/lean/mathzoo/mathzoo/olympiads/amc/12/2020/a/p9.lean", - "formal/afp/Registers/Axioms_Quantum.thy", - "formal/afp/Multi_Party_Computation/Noar_Pinkas_OT.thy", - "formal/lean/mathlib/data/nat/pairing.lean", - "formal/afp/Correctness_Algebras/Recursion.thy", - "formal/mizar/ckspace1.miz", - "formal/lean/mathlib/data/finsupp/pwo.lean", - "formal/afp/Pi_Calculus/Strong_Early_Sim.thy", - "formal/afp/Iptables_Semantics/Primitive_Matchers/No_Spoof.thy", - "formal/hol/Multivariate/gamma.ml", - "formal/lean/mathlib/category_theory/monoidal/category.lean", - "formal/hol/Help/map.doc", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2021/a/p18.lean", - "formal/afp/Iptables_Semantics/Examples/TUM_Net_Firewall/TUM_Simple_FW.thy", - "formal/afp/CakeML_Codegen/Utils/ML_Utils.thy", - "formal/mizar/finsop_1.miz", - "formal/hol/Rqe/make.ml", - "formal/afp/Simplex/Rel_Chain.thy", - "formal/mizar/fvaluat1.miz", - "formal/hol/Help/GEN_MESON_TAC.doc", - "formal/hol/miz3/Samples/sample.ml", - "formal/afp/Flyspeck-Tame/GeneratorProps.thy", - "formal/afp/Vickrey_Clarke_Groves/RelationProperties.thy", - "formal/afp/HRB-Slicing/document/root.tex", - "formal/lean/mathlib/category_theory/limits/preserves/shapes/images.lean", - "formal/afp/Binding_Syntax_Theory/Semantic_Domains.thy", - "formal/hol/Tutorial/Defining_new_types.ml", - "formal/afp/CoSMed/Traceback_Properties/Post_Visibility_Traceback.thy", - "formal/afp/Amortized_Complexity/document/root.tex", - "formal/afp/Pi_Calculus/Weak_Late_Step_Sim.thy", - "formal/afp/Slicing/While/AdditionalLemmas.thy", - "formal/afp/Types_Tableaus_and_Goedels_God/document/root.tex", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p43.lean", - "formal/afp/DataRefinementIBP/DataRefinement.thy", - "formal/mizar/vectsp_8.miz", - "formal/hol/100/derangements.ml", - "formal/afp/PseudoHoops/PseudoWaisbergAlgebra.thy", - "formal/afp/Simplex/Simplex.thy", - "formal/afp/Call_Arity/TTree.thy", - "formal/lean/mathlib/ring_theory/polynomial/tower.lean", - "formal/afp/Network_Security_Policy_Verification/Examples/Example_NetModel.thy", - "formal/afp/No_FTL_observers/Axioms.thy", - "formal/afp/Root_Balanced_Tree/Time_Monad.thy", - "formal/afp/Types_To_Sets_Extension/Examples/SML_Relativization/Algebra/SML_Monoids.thy", - "formal/lean/mathlib/algebra/star/basic.lean", - "formal/hol/Help/REAL_INT_LT_CONV.doc", - "formal/afp/AODV/variants/d_fwdrreqs/D_OAodv.thy", - "formal/afp/CAVA_LTL_Modelchecker/SM/Refine/Gen_Scheduler_Refine.thy", - "formal/afp/Auto2_Imperative_HOL/Imperative/BST_Impl.thy", - "formal/afp/Collections/GenCF/Impl/Impl_Array_Stack.thy", - "formal/afp/Markov_Models/Markov_Models.thy", - "formal/afp/Iptables_Semantics/Common/List_Misc.thy", - "formal/afp/BNF_CC/DDS.thy", - "formal/mizar/qc_trans.miz", - "formal/afp/Taylor_Models/Taylor_Models.thy", - "formal/afp/Markov_Models/ex/MDP_RP.thy", - "formal/afp/Prim_Dijkstra_Simple/Common.thy", - "formal/lean/mathlib/category_theory/limits/constructions/epi_mono.lean", - "formal/lean/mathlib/ring_theory/adjoin_root.lean", - "formal/afp/First_Order_Terms/Term_Pair_Multiset.thy", - "formal/afp/Sort_Encodings/Encodings.thy", - "formal/coq/odd-order/BGsection7.v", - "formal/lean/mathlib/data/countable/basic.lean", - "formal/lean/mathlib/category_theory/sites/sheafification.lean", - "formal/mizar/fdiff_2.miz", - "formal/afp/Root_Balanced_Tree/Root_Balanced_Tree_Tab.thy", - "formal/afp/CoreC++/ClassRel.thy", - "formal/afp/FocusStreamsCaseStudies/stream.thy", - "formal/mizar/fscirc_2.miz", - "formal/mizar/arytm_0.miz", - 
"formal/hol/Help/LAND_CONV.doc", - "formal/mizar/funct_6.miz", - "formal/afp/Containers/AssocList.thy", - "formal/afp/Refine_Monadic/Generic/RefineG_Assert.thy", - "formal/hol/Help/dest_eq.doc", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/absapbon1pabsapbleqsumabsaon1pabsa.lean", - "formal/afp/Security_Protocol_Refinement/Key_establish/m3_nssk_par.thy", - "formal/hol/Help/refine.doc", - "formal/afp/Dirichlet_Series/Dirichlet_Series_Analysis.thy", - "formal/afp/Quaternions/Quaternions.thy", - "formal/afp/List_Update/Competitive_Analysis.thy", - "formal/hol/Multivariate/homology.ml", - "formal/mizar/rltopsp1.miz", - "formal/afp/Twelvefold_Way/Twelvefold_Way_Entry4.thy", - "formal/mizar/connsp_1.miz", - "formal/hol/Help/AUGMENT_SIMPSET.doc", - "formal/afp/Amortized_Complexity/Splay_Tree_Analysis_Optimal.thy", - "formal/mizar/aofa_000.miz", - "formal/afp/Slicing/StaticIntra/WeakOrderDependence.thy", - "formal/afp/Jinja/Compiler/J1WellForm.thy", - "formal/afp/Pi_Calculus/Strong_Early_Bisim.thy", - "formal/afp/JinjaThreads/Execute/Scheduler.thy", - "formal/afp/MiniML/Type.thy", - "formal/afp/HOL-CSP/Bot.thy", - "formal/mizar/int_7.miz", - "formal/hol/Help/mk_mconst.doc", - "formal/hol/100/ramsey.ml", - "formal/afp/Extended_Finite_State_Machines/AExp.thy", - "formal/lean/mathlib/category_theory/idempotents/karoubi.lean", - "formal/lean/mathlib/data/list/nodup_equiv_fin.lean", - "formal/afp/LambdaMu/Types.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p45.lean", - "formal/afp/ConcurrentIMP/document/root.tex", - "formal/hol/Examples/mccarthy.ml", - "formal/mizar/vsdiff_1.miz", - "formal/hol/Boyer_Moore/shells.ml", - "formal/afp/CakeML/generated/CakeML/AstAuxiliary.thy", - "formal/afp/Constructive_Cryptography_CM/Fold_Spmf.thy", - "formal/mizar/convex4.miz", - "formal/lean/mathlib/analysis/complex/re_im_topology.lean", - "formal/lean/mathlib/topology/vector_bundle/prod.lean", - "formal/mizar/borsuk_1.miz", - "formal/afp/IsaGeoCoq/Tarski_Neutral.thy", - "formal/lean/mathlib/order/filter/n_ary.lean", - "formal/afp/Coinductive/TLList_CCPO.thy", - "formal/afp/Quasi_Borel_Spaces/QuasiBorel.thy", - "formal/mizar/fdiff_8.miz", - "formal/afp/Independence_CH/Succession_Poset.thy", - "formal/afp/CAVA_Automata/Simulation.thy", - "formal/lean/mathlib/category_theory/sites/limits.lean", - "formal/afp/Network_Security_Policy_Verification/Lib/ML_GraphViz.thy", - "formal/coq/analysis/measure.v", - "formal/afp/SIFPL/VS.thy", - "formal/lean/liquid/Lbar/nnnorm_add_class.lean", - "formal/afp/Complex_Geometry/document/root.tex", - "formal/lean/sphere-eversion/to_mathlib/analysis/normed_group.lean", - "formal/mizar/zf_refle.miz", - "formal/hol/Help/THEN_TCL.doc", - "formal/mizar/normsp_3.miz", - "formal/afp/Transformer_Semantics/Kleisli_Quantale.thy", - "formal/lean/mathlib/topology/tactic.lean", - "formal/afp/Ribbon_Proofs/JHelper.thy", - "formal/afp/Gauss_Sums/Ramanujan_Sums.thy", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1994/p4.lean", - "formal/afp/Ordinary_Differential_Equations/IVP/Cones.thy", - "formal/afp/Category3/DualCategory.thy", - "formal/lean/mathlib/category_theory/products/basic.lean", - "formal/afp/Tycon/Monad_Zero.thy", - "formal/mizar/ordinal2.miz", - "formal/afp/LTL_to_GBA/LTL_to_GBA.thy", - "formal/afp/CakeML_Codegen/Preproc/Eval_Instances.thy", - "formal/afp/Simpl/HoarePartialProps.thy", - "formal/lean/mathlib/category_theory/preadditive/Mat.lean", - "formal/afp/CakeML/generated/Lem_bool.thy", - "formal/hol/Help/num_of_string.doc", - 
"formal/afp/Jinja/Compiler/Compiler1.thy", - "formal/afp/Launchbury/Launchbury.thy", - "formal/hol/Help/binops.doc", - "formal/afp/Irrationals_From_THEBOOK/document/root.tex", - "formal/mizar/mboolean.miz", - "formal/mizar/integra4.miz", - "formal/lean/mathlib/topology/uniform_space/complete_separated.lean", - "formal/afp/Pi_Calculus/Strong_Late_Sim_SC.thy", - "formal/afp/VYDRA_MDL/Interval.thy", - "formal/lean/mathlib/algebra/star/unitary.lean", - "formal/afp/Call_Arity/CoCallAnalysisImpl.thy", - "formal/afp/Shadow_SC_DOM/tests/Shadow_DOM_BaseTest.thy", - "formal/afp/Psi_Calculi/Weak_Simulation.thy", - "formal/afp/DPRM_Theorem/Diophantine/Exponentiation.thy", - "formal/afp/JinjaDCI/JVM/JVMExec.thy", - "formal/afp/Stable_Matching/Basis.thy", - "formal/hol/Help/X_META_EXISTS_TAC.doc", - "formal/mizar/dickson.miz", - "formal/afp/UPF_Firewall/PacketFilter/PacketFilter.thy", - "formal/afp/Factored_Transition_System_Bounding/ActionSeqProcess.thy", - "formal/lean/mathlib/topology/metric_space/partition_of_unity.lean", - "formal/afp/FO_Theory_Rewriting/FOL_Extra.thy", - "formal/lean/liquid/system_of_complexes/basic.lean", - "formal/afp/Combinable_Wands/CombinableWands.thy", - "formal/afp/PCF/Basis.thy", - "formal/afp/KAD/Modal_Kleene_Algebra_Models.thy", - "formal/lean/mathlib/algebraic_topology/simplicial_set.lean", - "formal/afp/JinjaDCI/J/TypeSafe.thy", - "formal/afp/WOOT_Strong_Eventual_Consistency/ErrorMonad.thy", - "formal/afp/Pi_Calculus/Weak_Late_Semantics.thy", - "formal/lean/liquid/pseudo_normed_group/bounded_limits.lean", - "formal/afp/Automatic_Refinement/Tool/Autoref_Phases.thy", - "formal/afp/Partial_Function_MR/Partial_Function_MR.thy", - "formal/afp/Schutz_Spacetime/Minkowski.thy", - "formal/afp/Safe_Distance/Safe_Distance_Reaction.thy", - "formal/afp/Types_To_Sets_Extension/Examples/TTS_Foundations/Foundations/FNDS_Definite_Description.thy", - "formal/afp/Pi_Calculus/Late_Hennessy_Subst.thy", - "formal/afp/Berlekamp_Zassenhaus/Finite_Field_Factorization.thy", - "formal/afp/Digit_Expansions/Bits_Digits.thy", - "formal/lean/mathlib/analysis/special_functions/exp.lean", - "formal/lean/mathlib/topology/list.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2021/a/p9.lean", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_Parallel.thy", - "formal/afp/Koenigsberg_Friendship/document/root.tex", - "formal/lean/mathlib/category_theory/monoidal/natural_transformation.lean", - "formal/afp/FOL_Seq_Calc2/Sequent_Calculus_Verifier.thy", - "formal/mizar/rlvect_3.miz", - "formal/lean/mathlib/analysis/special_functions/polar_coord.lean", - "formal/lean/mathlib/category_theory/limits/over.lean", - "formal/afp/Monad_Memo_DP/state_monad/State_Monad_Ext.thy", - "formal/mizar/trees_2.miz", - "formal/mizar/petri.miz", - "formal/lean/liquid/for_mathlib/Profinite/disjoint_union.lean", - "formal/afp/Refine_Imperative_HOL/Lib/Term_Synth.thy", - "formal/lean/mathlib/ring_theory/adjoin/basic.lean", - "formal/lean/liquid/breen_deligne/homotopy.lean", - "formal/lean/perfectoid/valuation/topology.lean", - "formal/afp/Key_Agreement_Strong_Adversaries/document/root.tex", - "formal/afp/MFOTL_Monitor/document/root.tex", - "formal/hol/Help/TOP_DEPTH_SQCONV.doc", - "formal/afp/Isabelle_C/C11-FrontEnd/examples/C1.thy", - "formal/afp/JinjaDCI/document/root.tex", - "formal/afp/Markov_Models/Trace_Space_Equals_Markov_Processes.thy", - "formal/mizar/real_lat.miz", - "formal/afp/Constructive_Cryptography/Distinguisher.thy", - "formal/mizar/vfunct_1.miz", - "formal/hol/Help/SIMP_RULE.doc", - 
"formal/afp/Polynomial_Factorization/Rational_Root_Test.thy", - "formal/lean/mathlib/order/category/BoolAlg.lean", - "formal/afp/Physical_Quantities/ISQ_Units.thy", - "formal/afp/Ordinary_Differential_Equations/Numerics/Refine_Rigorous_Numerics_Aform.thy", - "formal/afp/Median_Of_Medians_Selection/Median_Of_Medians_Selection.thy", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_Tainting.thy", - "formal/afp/Amortized_Complexity/Amortized_Framework0.thy", - "formal/afp/CoSMeDis/Post_Confidentiality/DYNAMIC_Post_COMPOSE2.thy", - "formal/mizar/int_5.miz", - "formal/afp/First_Welfare_Theorem/Microeconomics/Exchange_Economy.thy", - "formal/afp/Isabelle_C/C11-FrontEnd/document/generated/paper.tex", - "formal/afp/Propositional_Proof_Systems/Sema.thy", - "formal/afp/SumSquares/FourSquares.thy", - "formal/lean/mathlib/analysis/convex/function.lean", - "formal/hol/Help/GEN_NNF_CONV.doc", - "formal/afp/Posix-Lexing/Lexer.thy", - "formal/afp/Interval_Arithmetic_Word32/document/root.tex", - "formal/afp/Saturation_Framework/Given_Clause_Architectures.thy", - "formal/lean/mathlib/topology/noetherian_space.lean", - "formal/lean/mathlib/algebra/algebra/operations.lean", - "formal/afp/Resolution_FOL/TermsAndLiterals.thy", - "formal/afp/VeriComp/Fixpoint.thy", - "formal/afp/Applicative_Lifting/Applicative_Set.thy", - "formal/mizar/gaussint.miz", - "formal/afp/Incredible_Proof_Machine/Incredible_Predicate.thy", - "formal/afp/Priority_Queue_Braun/document/root.tex", - "formal/lean/mathlib/algebra/algebra/subalgebra/basic.lean", - "formal/lean/liquid/for_mathlib/homotopy_category_op.lean", - "formal/afp/Universal_Turing_Machine/UTM.thy", - "formal/hol/Logic/prenex.ml", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p709.lean", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/apbpceq2_abpbcpcaeq1_aleq1on3anbleq1ancleq4on3.lean", - "formal/afp/Card_Equiv_Relations/Card_Equiv_Relations.thy", - "formal/coq/abel/abel.v", - "formal/lean/mathlib/topology/algebra/polynomial.lean", - "formal/mizar/mycielsk.miz", - "formal/afp/Ordinal_Partitions/Erdos_Milner.thy", - "formal/afp/AODV/variants/b_fwdrreps/B_Fresher.thy", - "formal/afp/Transition_Systems_and_Automata/Automata/NBA/NGBA_Algorithms.thy", - "formal/afp/SATSolverVerification/MoreList.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p109.lean", - "formal/hol/Help/FIX_TAC.doc", - "formal/lean/mathlib/algebra/hierarchy_design.lean", - "formal/lean/mathlib/topology/continuous_function/units.lean", - "formal/afp/JinjaDCI/J/JWellForm.thy", - "formal/lean/liquid/for_mathlib/homology_iso.lean", - "formal/mizar/jordan6.miz", - "formal/lean/mathlib/category_theory/subterminal.lean", - "formal/hol/Help/dest_gabs.doc", - "formal/afp/CoreC++/Value.thy", - "formal/afp/Decl_Sem_Fun_PL/EquivDenotInterTypes.thy", - "formal/mizar/jordan1i.miz", - "formal/afp/JinjaThreads/Framework/FWDeadlock.thy", - "formal/coq/math-comp/algebra/interval.v", - "formal/lean/mathlib/category_theory/concrete_category/unbundled_hom.lean", - "formal/hol/Help/THENL.doc", - "formal/afp/Regex_Equivalence/Position_Autos.thy", - "formal/lean/liquid/facts/nnreal.lean", - "formal/afp/Call_Arity/AnalBinds.thy", - "formal/hol/Help/report_timing.doc", - "formal/lean/mathlib/analysis/convex/basic.lean", - "formal/afp/Call_Arity/Sestoft.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p267.lean", - "formal/lean/mathlib/data/polynomial/erase_lead.lean", - "formal/afp/Multi_Party_Computation/Cyclic_Group_Ext.thy", - 
"formal/afp/Launchbury/Denotational.thy", - "formal/afp/Amortized_Complexity/Pairing_Heap_Tree_Analysis2.thy", - "formal/afp/KAD/Modal_Kleene_Algebra.thy", - "formal/afp/Dirichlet_Series/Arithmetic_Summatory.thy", - "formal/afp/Sigma_Commit_Crypto/Xor.thy", - "formal/afp/InfPathElimination/SubExt.thy", - "formal/mizar/mfold_2.miz", - "formal/afp/Generic_Deriving/tests/Derive_Datatypes.thy", - "formal/lean/mathlib/number_theory/pythagorean_triples.lean", - "formal/afp/QR_Decomposition/Gram_Schmidt.thy", - "formal/lean/mathlib/linear_algebra/matrix/is_diag.lean", - "formal/afp/Constructive_Cryptography/Resource.thy", - "formal/mizar/unialg_3.miz", - "formal/mizar/bcialg_2.miz", - "formal/lean/mathlib/algebra/dual_number.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p314.lean", - "formal/afp/Optics/Channel_Type.thy", - "formal/lean/mathlib/measure_theory/group/arithmetic.lean", - "formal/mizar/ftacell1.miz", - "formal/mizar/ordinal5.miz", - "formal/mizar/uniform2.miz", - "formal/afp/Featherweight_OCL/UML_Main.thy", - "formal/afp/Bicategory/Modification.thy", - "formal/afp/Propositional_Proof_Systems/ND_Compl_SC.thy", - "formal/lean/mathlib/topology/category/Top/limits.lean", - "formal/afp/Types_Tableaus_and_Goedels_God/IHOML.thy", - "formal/lean/mathlib/topology/category/Locale.lean", - "formal/afp/Finitely_Generated_Abelian_Groups/Miscellaneous_Groups.thy", - "formal/mizar/abcmiz_0.miz", - "formal/afp/DPRM_Theorem/Register_Machine/RegisterMachineProperties.thy", - "formal/lean/mathlib/category_theory/endofunctor/algebra.lean", - "formal/afp/Applicative_Lifting/Applicative_Star.thy", - "formal/hol/Help/mapfilter.doc", - "formal/lean/mathlib/geometry/manifold/local_invariant_properties.lean", - "formal/afp/Call_Arity/ArityTransformSafe.thy", - "formal/afp/Graph_Saturation/RuleSemanticsConnection.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p89.lean", - "formal/lean/mathlib/geometry/manifold/instances/real.lean", - "formal/afp/Lehmer/document/root.tex", - "formal/afp/Lowe_Ontological_Argument/LoweOntologicalArgument_2.thy", - "formal/afp/TLA/Rules.thy", - "formal/afp/Incompleteness/II_Prelims.thy", - "formal/lean/mathlib/data/real/hyperreal.lean", - "formal/mizar/rinfsup2.miz", - "formal/afp/AODV/variants/a_norreqid/A_Aodv_Loop_Freedom.thy", - "formal/lean/mathlib/category_theory/category/ulift.lean", - "formal/mizar/fuzzy_1.miz", - "formal/afp/Iptables_Semantics/Examples/medium-sized-company/Analyze_medium_sized_company.thy", - "formal/afp/Jinja/JVM/JVMExec.thy", - "formal/afp/CakeML/generated/Lem_word.thy", - "formal/afp/Free-Groups/UnitGroup.thy", - "formal/afp/JinjaThreads/MM/JMM_Heap.thy", - "formal/hol/Help/PURE_REWRITE_TAC.doc", - "formal/hol/Rqe/rqe_list.ml", - "formal/afp/Refine_Imperative_HOL/Sepref_ICF_Bindings.thy", - "formal/afp/Hybrid_Multi_Lane_Spatial_Logic/HMLSL.thy", - "formal/lean/mathlib/data/int/least_greatest.lean", - "formal/afp/CISC-Kernel/step/Step_vpeq_locally_respects.thy", - "formal/afp/Show/Show.thy", - "formal/lean/liquid/for_mathlib/bicartesian.lean", - "formal/mizar/scm_1.miz", - "formal/afp/JinjaThreads/Framework/FWState.thy", - "formal/afp/Probabilistic_System_Zoo/document/root_non_bnfs.tex", - "formal/afp/Containers/ITP-2013/Benchmark_Set_Default.thy", - "formal/lean/mathlib/data/bool/set.lean", - "formal/mizar/axioms.miz", - "formal/hol/GL/gl.ml", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/sqineq_36azm9asqle36zsq.lean", - "formal/afp/Ordinal/OrdinalRec.thy", - 
"formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p188.lean", - "formal/lean/mathlib/linear_algebra/matrix/trace.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p48.lean", - "formal/afp/HOL-CSP/Conclusion.thy", - "formal/hol/Help/SUBS.doc", - "formal/coq/math-comp/solvable/primitive_action.v", - "formal/hol/Help/ORDERED_REWR_CONV.doc", - "formal/afp/Encodability_Process_Calculi/OperationalCorrespondence.thy", - "formal/mizar/polydiff.miz", - "formal/lean/liquid/Lbar/pseudo_normed_group.lean", - "formal/hol/Help/type_unify.doc", - "formal/lean/mathlib/analysis/analytic/composition.lean", - "formal/mizar/nfcont_1.miz", - "formal/lean/mathlib/measure_theory/constructions/pi.lean", - "formal/mizar/pdiff_6.miz", - "formal/afp/Poincare_Disc/Poincare_Perpendicular.thy", - "formal/afp/Modal_Logics_for_NTS/FL_Equivalence_Implies_Bisimilarity.thy", - "formal/afp/Category3/EquivalenceOfCategories.thy", - "formal/hol/Help/the_type_definitions.doc", - "formal/afp/KAT_and_DRA/TwoSorted/DRAT2.thy", - "formal/lean/liquid/rescale/CLC.lean", - "formal/afp/Affine_Arithmetic/Affine_Form.thy", - "formal/afp/CoCon/Review_Confidentiality/Review_RAut_NCPC.thy", - "formal/lean/mathlib/measure_theory/integral/exp_decay.lean", - "formal/lean/mathlib/category_theory/category/Bipointed.lean", - "formal/afp/WebAssembly/Wasm_Checker.thy", - "formal/afp/Inductive_Confidentiality/GeneralAttacker/Knowledge.thy", - "formal/mizar/compact1.miz", - "formal/hol/Help/mem_prime.doc", - "formal/afp/Jinja/JVM/JVMExecInstr.thy", - "formal/afp/Hybrid_Multi_Lane_Spatial_Logic/perfect/HMLSL_Perfect.thy", - "formal/lean/mathlib/category_theory/monoidal/functorial.lean", - "formal/mizar/hilbert4.miz", - "formal/afp/CAVA_Automata/CAVA_Base/Code_String.thy", - "formal/afp/Core_SC_DOM/common/preliminaries/Testing_Utils.thy", - "formal/hol/Rqe/basic.ml", - "formal/hol/Help/instantiate_casewise_recursion.doc", - "formal/afp/Regular-Sets/document/root.tex", - "formal/afp/WHATandWHERE_Security/WHATWHERE_Secure_Skip_Assign.thy", - "formal/hol/Help/print_type.doc", - "formal/afp/CakeML/generated/Lem_map_extra.thy", - "formal/afp/Furstenberg_Topology/document/root.tex", - "formal/afp/Three_Circles/Bernstein.thy", - "formal/lean/mathlib/category_theory/adjunction/evaluation.lean", - "formal/afp/Skew_Heap/Skew_Heap.thy", - "formal/hol/Help/get_const_type.doc", - "formal/afp/Isabelle_Meta_Model/toy_example/embedding/core/Floor1_access.thy", - "formal/afp/IMP2/automation/IMP2_VCG.thy", - "formal/lean/mathlib/analysis/inner_product_space/dual.lean", - "formal/afp/Deriving/Comparator_Generator/Compare_Rat.thy", - "formal/afp/Relation_Algebra/Relation_Algebra_Vectors.thy", - "formal/afp/Ordinary_Differential_Equations/Refinement/Refine_Phantom.thy", - "formal/lean/liquid/thm95/constants/spectral_constants.lean", - "formal/afp/Network_Security_Policy_Verification/TopoS_ENF.thy", - "formal/afp/Extended_Finite_State_Machine_Inference/code-targets/Code_Target_Set.thy", - "formal/afp/LatticeProperties/Modular_Distrib_Lattice.thy", - "formal/afp/Constructive_Cryptography/Constructive_Cryptography.thy", - "formal/afp/Isabelle_Meta_Model/toy_example/embedding/meta_toy/Parser_Toy_extended.thy", - "formal/lean/liquid/system_of_complexes/shift_sub_id.lean", - "formal/afp/Rewriting_Z/Lambda_Z.thy", - "formal/afp/CAVA_LTL_Modelchecker/BoolProgs/Programs/BoolProgs_Programs.thy", - "formal/afp/CZH_Foundations/czh_sets/CZH_Sets_Conclusions.thy", - "formal/lean/liquid/for_mathlib/derived/les.lean", - 
"formal/afp/Stewart_Apollonius/document/root.tex", - "formal/hol/Examples/mangoldt.ml", - "formal/afp/Containers/ITP-2013/Benchmark_LC.thy", - "formal/mizar/fuznorm1.miz", - "formal/afp/Vickrey_Clarke_Groves/MiscTools.thy", - "formal/afp/Consensus_Refined/Observing_Quorums.thy", - "formal/lean/mathzoo/mathzoo/olympiads/aime/2000/i/p7.lean", - "formal/afp/Automatic_Refinement/Lib/Mk_Record_Simp.thy", - "formal/hol/Help/mk_imp.doc", - "formal/lean/perfectoid/for_mathlib/with_zero.lean", - "formal/mizar/rlvect_5.miz", - "formal/lean/liquid/for_mathlib/projective_replacement.lean", - "formal/hol/Help/lcm_num.doc", - "formal/afp/Isabelle_C/C11-FrontEnd/document/root.tex", - "formal/afp/Program-Conflict-Analysis/LTS.thy", - "formal/afp/JinjaThreads/MM/MM.thy", - "formal/afp/Transcendence_Series_Hancl_Rucki/document/root.tex", - "formal/afp/BDD/NormalizeTotalProof.thy", - "formal/afp/CakeML/Big_Step_Determ.thy", - "formal/afp/Slicing/JinjaVM/SemanticsWF.thy", - "formal/afp/Propositional_Proof_Systems/Resolution_Compl_SC_Small.thy", - "formal/afp/Stone_Relation_Algebras/document/root.tex", - "formal/lean/mathlib/data/prod/basic.lean", - "formal/lean/mathlib/category_theory/concrete_category/bundled_hom.lean", - "formal/afp/Consensus_Refined/Voting_Opt.thy", - "formal/afp/Ordinary_Differential_Equations/Refinement/Refine_ScaleR2.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p329.lean", - "formal/afp/Formula_Derivatives/Automaton.thy", - "formal/afp/Prpu_Maxflow/Fifo_Push_Relabel.thy", - "formal/afp/Echelon_Form/Code_Cayley_Hamilton_IArrays.thy", - "formal/lean/mathlib/data/finset/pi_induction.lean", - "formal/lean/mathlib/linear_algebra/sesquilinear_form.lean", - "formal/mizar/rusub_5.miz", - "formal/lean/mathlib/measure_theory/measure/with_density_vector_measure.lean", - "formal/mizar/bagorder.miz", - "formal/afp/BDD/BinDag.thy", - "formal/mizar/latsubgr.miz", - "formal/afp/Hoare_Time/SepLogK_Hoare.thy", - "formal/afp/Pi_Calculus/Weak_Early_Step_Sim.thy", - "formal/afp/RefinementReactive/Refinement.thy", - "formal/lean/mathlib/algebra/cubic_discriminant.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p80.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p263.lean", - "formal/afp/Show/document/root.tex", - "formal/lean/mathlib/linear_algebra/quotient.lean", - "formal/afp/TESL_Language/document/root.tex", - "formal/afp/Matrix_Tensor/Matrix_Tensor.thy", - "formal/afp/Isabelle_Meta_Model/toy_example/embedding/meta_toy/Printer_META.thy", - "formal/hol/Help/is_list.doc", - "formal/lean/mathlib/analysis/normed_space/affine_isometry.lean", - "formal/afp/Probabilistic_Prime_Tests/Generalized_Primality_Test.thy", - "formal/lean/mathlib/group_theory/subsemigroup/centralizer.lean", - "formal/afp/CakeML/doc/Doc_Proofs.thy", - "formal/lean/lftcm/hints/category_theory/exercise2/hint1.lean", - "formal/afp/Well_Quasi_Orders/Minimal_Bad_Sequences.thy", - "formal/afp/Core_DOM/common/classes/CharacterDataClass.thy", - "formal/afp/Quasi_Borel_Spaces/Pair_QuasiBorel_Measure.thy", - "formal/lean/mathlib/algebra/module/hom.lean", - "formal/afp/Gauss_Sums/Complex_Roots_Of_Unity.thy", - "formal/afp/Simplex/Simplex_Algebra.thy", - "formal/afp/IEEE_Floating_Point/Double.thy", - "formal/lean/mathlib/topology/subset_properties.lean", - "formal/mizar/waybel15.miz", - "formal/afp/Integration/Measure.thy", - "formal/lean/liquid/condensed/acyclic.lean", - "formal/mizar/nomin_6.miz", - "formal/afp/Isabelle_Meta_Model/document/Rail.thy", - 
"formal/lean/mathlib/combinatorics/simple_graph/metric.lean", - "formal/afp/JiveDataStoreModel/Isabelle_Store/StoreProperties.thy", - "formal/afp/FileRefinement/document/root.tex", - "formal/lean/mathlib/analysis/locally_convex/bounded.lean", - "formal/coq/analysis/altreals/dedekind.v", - "formal/afp/Lucas_Theorem/document/root.tex", - "formal/lean/mathlib/topology/continuous_function/weierstrass.lean", - "formal/lean/mathlib/analysis/normed/group/quotient.lean", - "formal/afp/Stream_Fusion_Code/document/root.tex", - "formal/afp/FO_Theory_Rewriting/Util/Tree_Automata_Derivation_Split.thy", - "formal/afp/FOL_Seq_Calc2/Usemantics.thy", - "formal/afp/List-Infinite/ListInfinite.thy", - "formal/coq/math-comp/solvable/hall.v", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2017/a/p7.lean", - "formal/afp/Jinja/DFA/LBVComplete.thy", - "formal/afp/Noninterference_Sequential_Composition/Counterexamples.thy", - "formal/hol/Rqe/defs.ml", - "formal/hol/Help/SUBST_ALL_TAC.doc", - "formal/afp/Inductive_Confidentiality/DolevYao/Public.thy", - "formal/afp/Real_Impl/Prime_Product.thy", - "formal/mizar/fintopo5.miz", - "formal/lean/mathlib/algebra/smul_with_zero.lean", - "formal/afp/Registers/Laws.thy", - "formal/afp/Shivers-CFA/AbsCFCorrect.thy", - "formal/lean/mathlib/topology/metric_space/algebra.lean", - "formal/lean/liquid/Lbar/torsion_free_profinite.lean", - "formal/afp/Design_Theory/Block_Designs.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p76.lean", - "formal/afp/Ergodic_Theory/Normalizing_Sequences.thy", - "formal/hol/Help/partition.doc", - "formal/afp/CZH_Foundations/czh_semicategories/CZH_SMC_Small_Semifunctor.thy", - "formal/afp/Pi_Calculus/Strong_Early_Bisim_Subst.thy", - "formal/hol/EC/formulary_projective.ml", - "formal/hol/Boyer_Moore/waterfall.ml", - "formal/mizar/xcmplx_1.miz", - "formal/afp/Budan_Fourier/BF_Misc.thy", - "formal/lean/mathlib/data/dfinsupp/order.lean", - "formal/lean/mathlib/group_theory/index.lean", - "formal/afp/Smith_Normal_Form/document/root.tex", - "formal/afp/Deep_Learning/DL_Rank_CP_Rank.thy", - "formal/afp/CZH_Universal_Constructions/czh_ucategories/CZH_UCAT_Adjoints.thy", - "formal/lean/mathlib/measure_theory/integral/vitali_caratheodory.lean", - "formal/afp/Strong_Security/Strongly_Secure_Skip_Assign.thy", - "formal/afp/Concurrent_Ref_Alg/Sequential.thy", - "formal/afp/NormByEval/NBE.thy", - "formal/afp/Pi_Calculus/Weak_Early_Step_Semantics.thy", - "formal/afp/Refine_Monadic/Refine_While.thy", - "formal/afp/Network_Security_Policy_Verification/TopoS_Stateful_Policy_Algorithm.thy", - "formal/afp/Perron_Frobenius/Hom_Gauss_Jordan.thy", - "formal/lean/mathlib/analysis/complex/upper_half_plane/basic.lean", - "formal/lean/mathlib/combinatorics/derangements/exponential.lean", - "formal/afp/Security_Protocol_Refinement/document/root.tex", - "formal/afp/Cubic_Quartic_Equations/document/root.tex", - "formal/afp/Multi_Party_Computation/Semi_Honest_Def.thy", - "formal/afp/SATSolverVerification/CNF.thy", - "formal/lean/mathlib/ring_theory/witt_vector/frobenius.lean", - "formal/afp/Incredible_Proof_Machine/Entailment.thy", - "formal/hol/Help/r.doc", - "formal/afp/UPF_Firewall/Examples/PersonalFirewall/PersonalFirewallInt.thy", - "formal/afp/BTree/BTree_ImpSplit.thy", - "formal/hol/Help/CHANGED_CONV.doc", - "formal/afp/Fisher_Yates/document/root.tex", - "formal/mizar/cardfil2.miz", - "formal/afp/AutoFocus-Stream/AF_Stream.thy", - "formal/afp/Collections/Examples/Autoref/ICF_Only_Test.thy", - "formal/afp/CZH_Foundations/czh_sets/CZH_Sets_Introduction.thy", - 
"formal/afp/Collections/GenCF/Impl/Impl_Array_Hash_Map.thy", - "formal/lean/mathlib/analysis/fourier.lean", - "formal/mizar/field_5.miz", - "formal/afp/JinjaThreads/DFA/LBVSpec.thy", - "formal/lean/mathlib/ring_theory/power_basis.lean", - "formal/afp/Coinductive/Coinductive.thy", - "formal/afp/Iptables_Semantics/Examples/Synology_Diskstation_DS414/Analyze_Synology_Diskstation.thy", - "formal/afp/Group-Ring-Module/Algebra2.thy", - "formal/afp/Banach_Steinhaus/Banach_Steinhaus.thy", - "formal/afp/Pi_Calculus/Weak_Late_Bisim.thy", - "formal/hol/Help/dest_intconst.doc", - "formal/afp/AODV/OAodv.thy", - "formal/mizar/topreal3.miz", - "formal/afp/Floyd_Warshall/document/root.tex", - "formal/afp/Planarity_Certificates/Planarity/Planar_Complete.thy", - "formal/afp/Transition_Systems_and_Automata/Basic/Sequence_LTL.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p202.lean", - "formal/afp/AODV/Aodv_Message.thy", - "formal/afp/Collections/ICF/impl/ArrayHashMap_Impl.thy", - "formal/afp/Isabelle_C/C11-FrontEnd/document/preamble.tex", - "formal/afp/Fresh_Identifiers/Fresh_Nat.thy", - "formal/afp/Interpreter_Optimizations/Inca_Verification.thy", - "formal/afp/Complex_Geometry/Angles.thy", - "formal/hol/Help/basic_net.doc", - "formal/mizar/polyeq_3.miz", - "formal/afp/FO_Theory_Rewriting/document/root.tex", - "formal/lean/mathlib/topology/algebra/constructions.lean", - "formal/afp/JinjaThreads/Examples/BufferExample.thy", - "formal/lean/mathlib/control/fold.lean", - "formal/afp/Call_Arity/ArityAnalysisStack.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2019/a/p9.lean", - "formal/afp/CoSMeDis/Friend_Confidentiality/Friend_All.thy", - "formal/afp/Propositional_Proof_Systems/ND_Sound.thy", - "formal/afp/Strong_Security/Type_System_example.thy", - "formal/lean/mathlib/algebra/algebra/spectrum.lean", - "formal/afp/Sturm_Sequences/Lib/Misc_Polynomial.thy", - "formal/afp/Higher_Order_Terms/Lambda_Free_Compat.thy", - "formal/mizar/bintree1.miz", - "formal/afp/Twelvefold_Way/document/root.tex", - "formal/afp/Ergodic_Theory/Fekete.thy", - "formal/afp/Knot_Theory/document/root.tex", - "formal/afp/Launchbury/Nominal-HOLCF.thy", - "formal/afp/Rewrite_Properties_Reduction/Rewriting/Rewriting_LLRG_LV_Mondaic.thy", - "formal/hol/Help/LAMBDA_ELIM_CONV.doc", - "formal/afp/First_Welfare_Theorem/Syntax.thy", - "formal/hol/Help/ASSUM_LIST.doc", - "formal/hol/Help/UNWIND_CONV.doc", - "formal/afp/No_FTL_observers/SpaceTime.thy", - "formal/afp/Functional-Automata/Automata.thy", - "formal/afp/Lambda_Free_RPOs/Extension_Orders.thy", - "formal/lean/mathlib/analysis/inner_product_space/projection.lean", - "formal/mizar/idea_1.miz", - "formal/afp/SimplifiedOntologicalArgument/document/root.tex", - "formal/afp/JinjaThreads/MM/JMM.thy", - "formal/afp/MiniSail/ContextSubtypingL.thy", - "formal/lean/mathlib/ring_theory/ideal/operations.lean", - "formal/afp/Simple_Firewall/Firewall_Common_Decision_State.thy", - "formal/lean/mathlib/data/fin_enum.lean", - "formal/coq/math-comp/algebra/finalg.v", - "formal/afp/Regular-Sets/Regular_Exp2.thy", - "formal/mizar/dist_1.miz", - "formal/lean/mathlib/order/ord_continuous.lean", - "formal/mizar/card_1.miz", - "formal/afp/Relational_Forests/Forests.thy", - "formal/mizar/nattra_1.miz", - "formal/afp/Resolution_FOL/Completeness.thy", - "formal/lean/mathlib/data/set/intervals/image_preimage.lean", - "formal/afp/Cauchy/CauchysMeanTheorem.thy", - "formal/lean/mathlib/analysis/special_functions/sqrt.lean", - "formal/afp/Complex_Bounded_Operators/Cblinfun_Matrix.thy", - 
"formal/afp/MFOTL_Monitor/Trace.thy", - "formal/afp/Physical_Quantities/ISQ_Conversion.thy", - "formal/afp/Word_Lib/Enumeration_Word.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p171.lean", - "formal/afp/AutoFocus-Stream/document/root.tex", - "formal/hol/Help/loads.doc", - "formal/lean/lftcm/hints/category_theory/exercise2/hint2.lean", - "formal/afp/Types_To_Sets_Extension/Examples/SML_Relativization/SML_Introduction.thy", - "formal/afp/Propositional_Proof_Systems/Formulas.thy", - "formal/afp/Boolos_Curious_Inference/Boo2.thy", - "formal/afp/LambdaAuth/Syntax.thy", - "formal/afp/Isabelle_Marries_Dirac/No_Cloning.thy", - "formal/afp/CryptoBasedCompositionalProperties/CompLocalSecrets.thy", - "formal/afp/CoSMed/Post_Confidentiality/Post_Value_Setup.thy", - "formal/afp/Stone_Algebras/Lattice_Basics.thy", - "formal/afp/SenSocialChoice/May.thy", - "formal/mizar/afinsq_1.miz", - "formal/afp/JinjaThreads/MM/MM_Main.thy", - "formal/mizar/translac.miz", - "formal/afp/CSP_RefTK/Assertions_ext.thy", - "formal/hol/Help/FREEZE_THEN.doc", - "formal/afp/Word_Lib/Word_Syntax.thy", - "formal/afp/Chandy_Lamport/Swap.thy", - "formal/afp/Differential_Game_Logic/Static_Semantics.thy", - "formal/afp/Jordan_Hoelder/document/root.tex", - "formal/afp/Affine_Arithmetic/Float_Real.thy", - "formal/afp/Eval_FO/Ailamazyan_Code.thy", - "formal/lean/mathlib/ring_theory/witt_vector/mul_p.lean", - "formal/afp/Probabilistic_Timed_Automata/library/MDP_Aux.thy", - "formal/hol/Help/dest_comb.doc", - "formal/afp/Tarskis_Geometry/Projective.thy", - "formal/lean/liquid/for_mathlib/preadditive_yoneda.lean", - "formal/mizar/complsp2.miz", - "formal/afp/Topological_Semantics/topo_closure_algebra.thy", - "formal/hol/Help/string_of_type.doc", - "formal/lean/mathlib/analysis/normed/group/SemiNormedGroup/kernels.lean", - "formal/lean/mathlib/algebra/continued_fractions/basic.lean", - "formal/lean/mathlib/ring_theory/witt_vector/teichmuller.lean", - "formal/afp/KAD/document/root.tex", - "formal/lean/mathlib/data/ulift.lean", - "formal/lean/mathlib/analysis/complex/cauchy_integral.lean", - "formal/afp/Generic_Join/Generic_Join_Correctness.thy", - "formal/afp/Sigma_Commit_Crypto/Pedersen.thy", - "formal/afp/Presburger-Automata/Presburger_Automata.thy", - "formal/lean/mathlib/group_theory/subsemigroup/basic.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2016/a/p2.lean", - "formal/lean/mathlib/topology/sets/closeds.lean", - "formal/afp/Modal_Logics_for_NTS/FL_Transition_System.thy", - "formal/afp/Physical_Quantities/SI_Derived.thy", - "formal/hol/Help/HIGHER_REWRITE_CONV.doc", - "formal/lean/mathlib/data/pfunctor/multivariate/M.lean", - "formal/afp/Category3/Yoneda.thy", - "formal/afp/Multiset_Ordering_NPC/Multiset_Ordering_NP_Hard.thy", - "formal/afp/Prpu_Maxflow/Generic_Push_Relabel.thy", - "formal/afp/Call_Arity/document/root.tex", - "formal/lean/lftcm/solutions/wednesday/topological_spaces.lean", - "formal/lean/mathlib/data/multiset/functor.lean", - "formal/hol/Help/NUM_SIMPLIFY_CONV.doc", - "formal/afp/Transitive_Models/Discipline_Base.thy", - "formal/afp/CoSMeDis/Post_Confidentiality/Post_ISSUER.thy", - "formal/hol/Help/find_terms.doc", - "formal/afp/UTP/utp/utp_full.thy", - "formal/afp/CoSMeDis/Post_Confidentiality/Independent_Posts/Independent_DYNAMIC_Post_Network.thy", - "formal/mizar/finance2.miz", - "formal/afp/Slicing/While/DynamicControlDependences.thy", - "formal/hol/Help/do_list.doc", - "formal/afp/Constructive_Cryptography/Converter.thy", - "formal/afp/JinjaThreads/JVM/JVMInstructions.thy", - 
"formal/lean/mathzoo/mathzoo/olympiads/aime/1997/p9.lean", - "formal/afp/Isabelle_Meta_Model/toy_example/embedding/core/Floor2_examp.thy", - "formal/hol/Help/NUM_EVEN_CONV.doc", - "formal/lean/sphere-eversion/to_mathlib/data/real_basic.lean", - "formal/afp/Signature_Groebner/Prelims.thy", - "formal/afp/Refine_Monadic/Refine_Leof.thy", - "formal/afp/SuperCalc/terms.thy", - "formal/afp/Native_Word/Native_Word_Test_MLton.thy", - "formal/lean/mathlib/deprecated/subfield.lean", - "formal/afp/Sigma_Commit_Crypto/Number_Theory_Aux.thy", - "formal/lean/mathlib/category_theory/monad/default.lean", - "formal/hol/Help/BETA.doc", - "formal/afp/CakeML_Codegen/Backend/CakeML_Setup.thy", - "formal/coq/math-comp/test_suite/output.v.out.8.9", - "formal/lean/mathlib/algebra/order/pointwise.lean", - "formal/hol/Help/time.doc", - "formal/lean/mathlib/number_theory/pell.lean", - "formal/afp/Correctness_Algebras/Pre_Post_Modal.thy", - "formal/afp/MiniSail/RCLogicL.thy", - "formal/lean/mathlib/algebra/ring/idempotents.lean", - "formal/mizar/gcd_1.miz", - "formal/lean/liquid/for_mathlib/les_homology.lean", - "formal/afp/HOLCF-Prelude/Type_Classes.thy", - "formal/lean/mathlib/ring_theory/etale.lean", - "formal/afp/Word_Lib/Word_Lib_Sumo.thy", - "formal/afp/Functional_Ordered_Resolution_Prover/Executable_Subsumption.thy", - "formal/afp/SC_DOM_Components/Shadow_DOM_SC_DOM_Components.thy", - "formal/afp/Amortized_Complexity/Splay_Tree_Analysis.thy", - "formal/hol/Help/allpairs.doc", - "formal/afp/Extended_Finite_State_Machines/GExp_Lexorder.thy", - "formal/afp/Landau_Symbols/Landau_Real_Products.thy", - "formal/afp/PSemigroupsConvolution/Partial_Semigroup_Lifting.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/sqineq_2at2pclta2c2p41pc.lean", - "formal/lean/mathlib/algebra/lie/tensor_product.lean", - "formal/lean/mathlib/group_theory/subsemigroup/center.lean", - "formal/afp/Heard_Of/otr/OneThirdRuleProof.thy", - "formal/lean/mathlib/analysis/normed_space/add_torsor.lean", - "formal/lean/mathlib/topology/category/UniformSpace.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2000/p20.lean", - "formal/mizar/clvect_1.miz", - "formal/afp/Refine_Imperative_HOL/Examples/Worklist_Subsumption_Impl.thy", - "formal/afp/Inductive_Confidentiality/GeneralAttacker/EventGA.thy", - "formal/afp/JinjaThreads/JVM/JVMThreaded.thy", - "formal/afp/Fourier/Fourier.thy", - "formal/afp/MFOTL_Monitor/Monitor_Code.thy", - "formal/lean/mathlib/group_theory/is_free_group.lean", - "formal/lean/perfectoid/valuation/canonical.lean", - "formal/afp/Tarskis_Geometry/Hyperbolic_Tarski.thy", - "formal/afp/Architectural_Design_Patterns/Singleton.thy", - "formal/afp/Registers/Quantum_Extra.thy", - "formal/mizar/hilb10_1.miz", - "formal/afp/Game_Based_Crypto/Unpredictable_Function.thy", - "formal/afp/Smooth_Manifolds/Differentiable_Manifold.thy", - "formal/hol/Help/COMB_CONV.doc", - "formal/afp/Transitive-Closure/Transitive_Closure_Impl.thy", - "formal/afp/Native_Word/Native_Word_Test_OCaml2.thy", - "formal/afp/Mereology/PM.thy", - "formal/mizar/asympt_1.miz", - "formal/lean/perfectoid/for_mathlib/filter.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2002/b/p2.lean", - "formal/lean/liquid/for_mathlib/short_exact.lean", - "formal/afp/Lam-ml-Normalization/document/figureCR3.tex", - "formal/afp/Knuth_Morris_Pratt/document/root.tex", - "formal/afp/Abortable_Linearizable_Modules/RDR.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p227.lean", - "formal/afp/Security_Protocol_Refinement/Refinement/Atoms.thy", - 
"formal/afp/Special_Function_Bounds/Log_CF_Bounds.thy", - "formal/lean/mathlib/linear_algebra/direct_sum/finsupp.lean", - "formal/afp/Launchbury/ResourcedDenotational.thy", - "formal/afp/Graph_Theory/Kuratowski.thy", - "formal/lean/mathlib/data/sigma/lex.lean", - "formal/mizar/fib_num2.miz", - "formal/lean/mathlib/probability/conditional_probability.lean", - "formal/lean/mathlib/category_theory/grothendieck.lean", - "formal/lean/mathlib/category_theory/preadditive/eilenberg_moore.lean", - "formal/afp/FFT/FFT.thy", - "formal/afp/BNF_CC/Preliminaries.thy", - "formal/afp/Ordinary_Differential_Equations/Refinement/Refine_Hyperplane.thy", - "formal/afp/UPF_Firewall/PacketFilter/NetworkCore.thy", - "formal/mizar/zmodul03.miz", - "formal/mizar/nelson_1.miz", - "formal/afp/Refine_Monadic/Autoref_Monadic.thy", - "formal/lean/mathlib/data/seq/seq.lean", - "formal/hol/Help/SEQ_IMP_REWRITE_TAC.doc", - "formal/afp/FOL_Axiomatic/document/root.tex", - "formal/lean/perfectoid/Huber_ring/basic.lean", - "formal/afp/Refine_Monadic/Generic/RefineG_Recursion.thy", - "formal/afp/CZH_Foundations/czh_digraphs/CZH_DG_TDGHM.thy", - "formal/afp/JinjaThreads/Execute/PCompilerRefine.thy", - "formal/lean/mathlib/linear_algebra/affine_space/affine_equiv.lean", - "formal/afp/Collections/ICF/gen_algo/SetGA.thy", - "formal/afp/Pi_Calculus/Weak_Early_Cong_Pres.thy", - "formal/afp/Decl_Sem_Fun_PL/BigStepLam.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p155.lean", - "formal/lean/mathlib/logic/denumerable.lean", - "formal/afp/Rewrite_Properties_Reduction/document/root.tex", - "formal/afp/JinjaDCI/BV/BVExec.thy", - "formal/lean/mathlib/data/ordmap/ordnode.lean", - "formal/lean/mathlib/combinatorics/simple_graph/regularity/bound.lean", - "formal/hol/Help/NUMBER_RULE.doc", - "formal/lean/liquid/Lbar/squares.lean", - "formal/lean/mathlib/algebraic_geometry/projective_spectrum/scheme.lean", - "formal/afp/Blue_Eyes/Blue_Eyes.thy", - "formal/lean/mathlib/topology/algebra/order/liminf_limsup.lean", - "formal/afp/SPARCv8/SparcModel_MMU/Sparc_Properties.thy", - "formal/lean/liquid/for_mathlib/limit_flip_comp_iso.lean", - "formal/afp/CakeML_Codegen/Test/Test_Print.thy", - "formal/afp/DynamicArchitectures/Dynamic_Architecture_Calculus.thy", - "formal/lean/liquid/for_mathlib/ab4.lean", - "formal/lean/mathlib/computability/reduce.lean", - "formal/afp/Collections/Refine_Dflt_ICF.thy", - "formal/afp/LightweightJava/Lightweight_Java_Equivalence.thy", - "formal/hol/Help/CONJ_PAIR.doc", - "formal/lean/mathlib/data/finset/pimage.lean", - "formal/afp/Constructive_Cryptography/document/root.tex", - "formal/afp/AI_Planning_Languages_Semantics/SASP_Semantics.thy", - "formal/afp/CAVA_Automata/Digraph_Impl.thy", - "formal/lean/liquid/for_mathlib/homology_map_datum.lean", - "formal/lean/liquid/Lbar/iota.lean", - "formal/lean/lftcm/hints/category_theory/exercise3/hint8.lean", - "formal/afp/Refine_Imperative_HOL/Examples/Snippets/Sepref_Snip_Datatype.thy", - "formal/hol/Help/EQF_INTRO.doc", - "formal/afp/Word_Lib/Type_Syntax.thy", - "formal/afp/Iptables_Semantics/Documentation.thy", - "formal/afp/CAVA_LTL_Modelchecker/BoolProgs/Programs/BoolProgs_ReaderWriter.thy", - "formal/afp/Smith_Normal_Form/SNF_Algorithm.thy", - "formal/hol/Help/ASSOC_CONV.doc", - "formal/hol/Help/is_realintconst.doc", - "formal/lean/mathlib/measure_theory/function/ae_eq_fun.lean", - "formal/lean/mathlib/algebra/category/FinVect/limits.lean", - "formal/lean/mathlib/data/multiset/bind.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2002/a/p13.lean", - 
"formal/afp/Generalized_Counting_Sort/Conservation.thy", - "formal/afp/Ptolemys_Theorem/document/root.tex", - "formal/afp/Bounded_Deducibility_Security/document/root.tex", - "formal/mizar/anproj10.miz", - "formal/afp/Sturm_Tarski/Sturm_Tarski.thy", - "formal/afp/CakeML_Codegen/Compiler/Composition.thy", - "formal/afp/POPLmark-deBruijn/POPLmarkRecordCtxt.thy", - "formal/afp/Weighted_Path_Order/RPO.thy", - "formal/afp/ShortestPath/ShortestPathNeg.thy", - "formal/afp/Noninterference_CSP/document/root.tex", - "formal/afp/Formula_Derivatives/FSet_More.thy", - "formal/afp/LP_Duality/Move_To_Matrix.thy", - "formal/lean/mathlib/category_theory/concrete_category/elementwise.lean", - "formal/mizar/poset_1.miz", - "formal/afp/Jinja/J/TypeSafe.thy", - "formal/afp/Isabelle_C/C11-FrontEnd/src/C_Eval.thy", - "formal/afp/Fresh_Identifiers/Fresh.thy", - "formal/afp/Separation_Logic_Imperative_HOL/Examples/Imp_List_Spec.thy", - "formal/mizar/jordan2c.miz", - "formal/afp/Refine_Imperative_HOL/IICF/Impl/Heaps/IICF_Abs_Heap.thy", - "formal/mizar/bcialg_6.miz", - "formal/mizar/goboard8.miz", - "formal/lean/mathlib/category_theory/limits/shapes/pullbacks.lean", - "formal/afp/Dijkstra_Shortest_Path/Graph.thy", - "formal/mizar/complfld.miz", - "formal/hol/Help/assoc.doc", - "formal/afp/Refine_Monadic/Refine_Basic.thy", - "formal/afp/Skip_Lists/Skip_List.thy", - "formal/lean/mathlib/control/basic.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/10/2021/b/p5.lean", - "formal/afp/Ordinals_and_Cardinals/document/root.tex", - "formal/afp/Correctness_Algebras/Binary_Iterings.thy", - "formal/afp/Propositional_Proof_Systems/Resolution.thy", - "formal/afp/LLL_Basis_Reduction/LLL.thy", - "formal/hol/Multivariate/realanalysis.ml", - "formal/mizar/random_2.miz", - "formal/afp/Iptables_Semantics/Semantics_Ternary/Fixed_Action.thy", - "formal/lean/mathlib/ring_theory/subsemiring/pointwise.lean", - "formal/hol/Help/NUM_LT_CONV.doc", - "formal/afp/Relational_Method/Authentication.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p156.lean", - "formal/afp/LP_Duality/LP_Duality.thy", - "formal/mizar/scmp_gcd.miz", - "formal/hol/Help/frees.doc", - "formal/afp/Abortable_Linearizable_Modules/Consensus.thy", - "formal/lean/perfectoid/for_mathlib/open_embeddings.lean", - "formal/lean/mathlib/data/rat/order.lean", - "formal/lean/mathlib/category_theory/abelian/pseudoelements.lean", - "formal/mizar/waybel26.miz", - "formal/afp/FocusStreamsCaseStudies/FR.thy", - "formal/mizar/yellow17.miz", - "formal/afp/Stable_Matching/Strategic.thy", - "formal/afp/Physical_Quantities/Groups_mult.thy", - "formal/afp/Independence_CH/Cohen_Posets_Relative.thy", - "formal/afp/CZH_Foundations/czh_semicategories/CZH_SMC_Par.thy", - "formal/afp/Isabelle_Meta_Model/toy_example/embedding/meta_toy/Parser_META.thy", - "formal/hol/Examples/gcdrecurrence.ml", - "formal/lean/mathlib/category_theory/Fintype.lean", - "formal/afp/Gauss-Jordan-Elim-Fun/document/root.tex", - "formal/afp/JinjaThreads/Execute/SC_Schedulers.thy", - "formal/lean/mathlib/data/rbtree/min_max.lean", - "formal/afp/Topological_Semantics/ex_subminimal_logics.thy", - "formal/lean/mathlib/ring_theory/ore_localization/ore_set.lean", - "formal/afp/Recursion-Theory-I/PRecList.thy", - "formal/afp/Simpl/StateSpace.thy", - "formal/afp/Gauss_Jordan/Linear_Maps.thy", - "formal/afp/Algebraic_VCs/AVC_KAT/VC_RKAT_Examples.thy", - "formal/afp/Consensus_Refined/MRU/Three_Step_MRU.thy", - "formal/afp/PAC_Checker/PAC_Map_Rel.thy", - "formal/lean/mathlib/category_theory/quotient.lean", - 
"formal/afp/HRB-Slicing/StaticInter/ReturnAndCallNodes.thy", - "formal/lean/liquid/thm95/homotopy.lean", - "formal/lean/lftcm/hints/category_theory/exercise4/hint4.lean", - "formal/mizar/menelaus.miz", - "formal/lean/mathlib/measure_theory/function/jacobian.lean", - "formal/lean/mathlib/topology/instances/sign.lean", - "formal/afp/Physical_Quantities/ISQ_Proof.thy", - "formal/mizar/funct_9.miz", - "formal/afp/UPF_Firewall/PacketFilter/IntegerPort.thy", - "formal/afp/Factored_Transition_System_Bounding/FactoredSystemLib.thy", - "formal/mizar/rvsum_3.miz", - "formal/afp/Hermite_Lindemann/Algebraic_Integer_Divisibility.thy", - "formal/afp/Chandy_Lamport/Example.thy", - "formal/lean/mathlib/algebra/gcd_monoid/basic.lean", - "formal/mizar/abian.miz", - "formal/hol/Help/isalnum.doc", - "formal/hol/RichterHilbertAxiomGeometry/HilbertAxiom_read.ml", - "formal/lean/mathlib/category_theory/adjunction/reflective.lean", - "formal/lean/mathlib/logic/equiv/functor.lean", - "formal/afp/Amicable_Numbers/Amicable_Numbers.thy", - "formal/hol/Help/CONJUNCT1.doc", - "formal/lean/mathlib/data/option/basic.lean", - "formal/lean/mathlib/analysis/calculus/parametric_integral.lean", - "formal/mizar/birkhoff.miz", - "formal/lean/liquid/thm95/constants/default.lean", - "formal/hol/Help/REAL_POLY_ADD_CONV.doc", - "formal/afp/HRB-Slicing/StaticInter/SemanticsCFG.thy", - "formal/afp/Robinson_Arithmetic/Robinson_Arithmetic.thy", - "formal/mizar/ncfcont1.miz", - "formal/afp/Open_Induction/Open_Induction.thy", - "formal/mizar/jordan19.miz", - "formal/afp/InfPathElimination/Conf.thy", - "formal/mizar/euclid_7.miz", - "formal/afp/LTL_Master_Theorem/Logical_Characterization/Advice.thy", - "formal/afp/Topological_Semantics/topo_strict_implication.thy", - "formal/afp/Probabilistic_System_Zoo/document/root.tex", - "formal/afp/Graph_Saturation/RulesAndChains.thy", - "formal/lean/mathlib/category_theory/limits/is_limit.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p422.lean", - "formal/lean/mathlib/model_theory/basic.lean", - "formal/afp/Containers/RBT_Set2.thy", - "formal/afp/Descartes_Sign_Rule/document/root.tex", - "formal/lean/mathlib/category_theory/equivalence.lean", - "formal/lean/mathlib/category_theory/natural_isomorphism.lean", - "formal/afp/Extended_Finite_State_Machine_Inference/SelectionStrategies.thy", - "formal/afp/Isabelle_Marries_Dirac/Tensor.thy", - "formal/afp/JinjaThreads/Compiler/PCompiler.thy", - "formal/afp/Buildings/document/root.tex", - "formal/afp/Universal_Turing_Machine/Rec_Def.thy", - "formal/mizar/autgroup.miz", - "formal/coq/odd-order/PFsection12.v", - "formal/lean/liquid/pseudo_normed_group/system_of_complexes.lean", - "formal/afp/DiscretePricing/Fair_Price.thy", - "formal/afp/LTL_Master_Theorem/Omega_Words_Fun_Stream.thy", - "formal/afp/Kleene_Algebra/Conway.thy", - "formal/afp/IsaNet/instances/EPIC_L1_SA_Example.thy", - "formal/afp/MSO_Regex_Equivalence/List_More.thy", - "formal/lean/mathlib/algebraic_geometry/gluing.lean", - "formal/afp/Jinja/DFA/Typing_Framework_err.thy", - "formal/afp/JinjaThreads/Framework/FWLifting.thy", - "formal/afp/Sort_Encodings/Mcalc2C.thy", - "formal/hol/Help/load_path.doc", - "formal/hol/Help/parse_term.doc", - "formal/afp/Store_Buffer_Reduction/ReduceStoreBufferSimulation.thy", - "formal/lean/mathlib/combinatorics/simple_graph/partition.lean", - "formal/afp/Virtual_Substitution/NegInfinity.thy", - "formal/hol/100/dirichlet.ml", - "formal/afp/JinjaThreads/BV/BV_Main.thy", - "formal/lean/mathlib/data/list/zip.lean", - 
"formal/afp/Noninterference_CSP/GeneralizedNoninterference.thy", - "formal/afp/Ordered_Resolution_Prover/Herbrand_Interpretation.thy", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_Small_NTCF.thy", - "formal/afp/Integration/Sigma_Algebra.thy", - "formal/mizar/srings_3.miz", - "formal/afp/PLM/TAO_3_Quantifiable.thy", - "formal/afp/HOL-CSP/Seq.thy", - "formal/lean/mathlib/measure_theory/measure/mutually_singular.lean", - "formal/afp/Twelvefold_Way/Twelvefold_Way.thy", - "formal/afp/AWN/AWN_Term_Graph.thy", - "formal/lean/mathlib/probability/probability_mass_function/constructions.lean", - "formal/lean/mathlib/set_theory/game/state.lean", - "formal/afp/Refine_Imperative_HOL/Examples/Sepref_Chapter_Examples.thy", - "formal/mizar/sprect_3.miz", - "formal/lean/mathlib/data/finset/sum.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p148.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p24.lean", - "formal/lean/mathlib/analysis/complex/circle.lean", - "formal/lean/mathlib/ring_theory/dedekind_domain/ideal.lean", - "formal/afp/Transitive_Models/Univ_Relative.thy", - "formal/afp/Optics/Dataspaces.thy", - "formal/mizar/diophan2.miz", - "formal/afp/Modal_Logics_for_NTS/Expressive_Completeness.thy", - "formal/afp/Perron_Frobenius/Roots_Unity.thy", - "formal/afp/Polynomials/MPoly_PM.thy", - "formal/hol/Help/butlast.doc", - "formal/lean/mathlib/algebra/char_p/subring.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p209.lean", - "formal/coq/analysis/lebesgue_integral.v", - "formal/afp/Markov_Models/Markov_Models_Auxiliary.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p332.lean", - "formal/afp/Automatic_Refinement/Tool/Autoref_Data.thy", - "formal/lean/mathlib/analysis/normed_space/pointwise.lean", - "formal/afp/Jinja/Compiler/J1.thy", - "formal/afp/Call_Arity/CallArityEnd2End.thy", - "formal/hol/miz3/Samples/samples.ml", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p251.lean", - "formal/lean/mathlib/category_theory/endomorphism.lean", - "formal/lean/mathlib/data/set/prod.lean", - "formal/afp/Graph_Saturation/CombinedCorrectness.thy", - "formal/afp/Interpreter_Optimizations/OpUbx.thy", - "formal/lean/mathlib/data/seq/computation.lean", - "formal/afp/JinjaThreads/BV/JVM_SemiType.thy", - "formal/coq/odd-order/BGsection16.v", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p32.lean", - "formal/lean/liquid/condensed/top_comparison.lean", - "formal/afp/Physical_Quantities/CGS.thy", - "formal/hol/Help/DISJ2.doc", - "formal/hol/Help/unions.doc", - "formal/afp/POPLmark-deBruijn/POPLmark.thy", - "formal/afp/Relational_Paths/More_Relation_Algebra.thy", - "formal/afp/Promela/All_Of_Promela.thy", - "formal/afp/MiniSail/BTVSubstTypingL.thy", - "formal/coq/analysis/esum.v", - "formal/afp/SuperCalc/equational_clausal_logic.thy", - "formal/afp/Concurrent_Revisions/OperationalSemantics.thy", - "formal/mizar/nomin_3.miz", - "formal/afp/Well_Quasi_Orders/Kruskal_Examples.thy", - "formal/lean/mathlib/ring_theory/adjoin/power_basis.lean", - "formal/afp/Design_Theory/document/root.tex", - "formal/afp/Stern_Brocot/Cotree_Algebra.thy", - "formal/lean/mathlib/data/two_pointing.lean", - "formal/mizar/scmfsa_3.miz", - "formal/afp/Types_To_Sets_Extension/ETTS/ETTS_Auxiliary.thy", - "formal/afp/Hoare_Time/Partial_Evaluation.thy", - "formal/afp/Category3/ConcreteCategory.thy", - "formal/lean/mathlib/measure_theory/function/simple_func_dense.lean", - "formal/afp/Topological_Semantics/topo_interior_algebra.thy", - "formal/mizar/measur11.miz", 
- "formal/afp/Goedel_HFSet_Semanticless/Predicates.thy", - "formal/lean/mathlib/analysis/inner_product_space/spectrum.lean", - "formal/afp/Approximation_Algorithms/Approx_VC_Hoare.thy", - "formal/afp/Amortized_Complexity/Splay_Tree_Analysis_Base.thy", - "formal/afp/Special_Function_Bounds/Sin_Cos_Bounds.thy", - "formal/afp/WorkerWrapper/Streams.thy", - "formal/hol/Help/ALPHA_UPPERCASE.doc", - "formal/afp/Separation_Algebra/ex/VM_Example.thy", - "formal/lean/mathlib/analysis/box_integral/partition/filter.lean", - "formal/hol/Help/sort.doc", - "formal/lean/mathlib/category_theory/limits/shapes/types.lean", - "formal/mizar/fdiff_4.miz", - "formal/afp/Case_Labeling/Examples/Hoare/Labeled_Hoare.thy", - "formal/mizar/bkmodel1.miz", - "formal/mizar/collsp.miz", - "formal/afp/KBPs/DFS.thy", - "formal/afp/E_Transcendental/E_Transcendental.thy", - "formal/afp/Poincare_Disc/Tarski.thy", - "formal/lean/mathlib/data/real/pi/bounds.lean", - "formal/afp/PAC_Checker/PAC_Version.thy", - "formal/afp/VYDRA_MDL/Monitor.thy", - "formal/lean/mathlib/analysis/asymptotics/specific_asymptotics.lean", - "formal/lean/mathlib/algebra/lie/of_associative.lean", - "formal/afp/HotelKeyCards/Equivalence.thy", - "formal/afp/CoreC++/Equivalence.thy", - "formal/afp/Pi_Calculus/Weak_Late_Bisim_Subst_SC.thy", - "formal/lean/mathlib/category_theory/limits/shapes/functor_category.lean", - "formal/afp/Jordan_Hoelder/MaximalNormalSubgroups.thy", - "formal/lean/perfectoid/for_mathlib/primes.lean", - "formal/afp/IP_Addresses/Hs_Compat.thy", - "formal/afp/LTL_to_DRA/document/root.tex", - "formal/hol/Help/NUM_MIN_CONV.doc", - "formal/afp/Weighted_Path_Order/Relations.thy", - "formal/afp/C2KA_DistributedSystems/document/root.tex", - "formal/afp/CZH_Universal_Constructions/czh_ucategories/CZH_UCAT_Complete.thy", - "formal/afp/BTree/Partially_Filled_Array.thy", - "formal/afp/GewirthPGCProof/ExtendedDDL.thy", - "formal/afp/Abstract_Completeness/document/root.tex", - "formal/lean/mathlib/control/equiv_functor/instances.lean", - "formal/mizar/rvsum_2.miz", - "formal/afp/Hybrid_Systems_VCs/HS_VC_Spartan.thy", - "formal/hol/Rqe/work_thms.ml", - "formal/lean/liquid/for_mathlib/colim_preserves_colimits.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p961.lean", - "formal/coq/math-comp/algebra/rat.v", - "formal/lean/mathlib/ring_theory/witt_vector/identities.lean", - "formal/lean/mathlib/analysis/calculus/darboux.lean", - "formal/afp/Integration/MonConv.thy", - "formal/afp/JinjaDCI/JVM/JVMExceptions.thy", - "formal/afp/Collections/ICF/ICF_Impl.thy", - "formal/lean/mathlib/category_theory/preadditive/generator.lean", - "formal/afp/Algebraic_VCs/AVC_KAD/VC_KAD_Examples.thy", - "formal/hol/Tutorial/Custom_tactics.ml", - "formal/afp/Abstract-Hoare-Logics/While/Hoare.thy", - "formal/afp/Hoare_Time/Big_Step.thy", - "formal/lean/mathlib/analysis/convex/jensen.lean", - "formal/mizar/amistd_2.miz", - "formal/lean/liquid/thm95/pfpng_iso.lean", - "formal/lean/lftcm/solutions/wednesday/structures.lean", - "formal/afp/MonoidalCategory/CartesianMonoidalCategory.thy", - "formal/afp/Transitive_Models/ZF_Library_Relative.thy", - "formal/lean/mathlib/data/nat/choose/cast.lean", - "formal/afp/Algebraic_Numbers/Algebraic_Numbers_External_Code.thy", - "formal/coq/odd-order/PFsection1.v", - "formal/mizar/cc0sp2.miz", - "formal/afp/Relational_Paths/Paths.thy", - "formal/lean/mathlib/ring_theory/polynomial/content.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p149.lean", - "formal/afp/Concurrent_Revisions/Substitution.thy", - 
"formal/lean/mathlib/analysis/normed_space/add_torsor_bases.lean", - "formal/afp/Collections/ICF/gen_algo/Algos.thy", - "formal/afp/GPU_Kernel_PL/KPL_wellformedness.thy", - "formal/afp/AWN/OPnet.thy", - "formal/hol/Help/SET_TAC.doc", - "formal/afp/Shivers-CFA/Utils.thy", - "formal/afp/Partial_Order_Reduction/Basics/LList_Prefixes.thy", - "formal/lean/mathlib/number_theory/legendre_symbol/quadratic_reciprocity.lean", - "formal/afp/HRB-Slicing/StaticInter/Slice.thy", - "formal/afp/Verified_SAT_Based_AI_Planning/Map_Supplement.thy", - "formal/afp/Hybrid_Systems_VCs/KleeneAlgebraTests/HS_VC_KAT_Examples_rel.thy", - "formal/afp/JinjaThreads/Basic/Auxiliary.thy", - "formal/hol/Help/.orparser.doc", - "formal/afp/UPF_Firewall/StatefulFW/FTP.thy", - "formal/mizar/unialg_1.miz", - "formal/coq/analysis/ereal.v", - "formal/lean/liquid/free_pfpng/epi.lean", - "formal/lean/mathlib/measure_theory/group/action.lean", - "formal/mizar/prepower.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p690.lean", - "formal/lean/mathlib/topology/algebra/order/basic.lean", - "formal/afp/Correctness_Algebras/Complete_Tests.thy", - "formal/afp/Pop_Refinement/General_Remarks.thy", - "formal/afp/Regular_Algebras/Pratts_Counterexamples.thy", - "formal/lean/mathlib/algebra/category/Mon/limits.lean", - "formal/afp/HOLCF-Prelude/Data_List.thy", - "formal/lean/mathlib/linear_algebra/tensor_algebra/grading.lean", - "formal/lean/mathlib/control/traversable/derive.lean", - "formal/mizar/euclid_8.miz", - "formal/afp/Complx/ex/Examples.thy", - "formal/afp/Native_Word/Bits_Integer.thy", - "formal/lean/mathlib/topology/category/Top/open_nhds.lean", - "formal/afp/Formal_SSA/Serial_Rel.thy", - "formal/afp/Monad_Memo_DP/state_monad/Monad.thy", - "formal/afp/Jinja/J/DefAss.thy", - "formal/afp/Goedel_Incompleteness/Loeb.thy", - "formal/afp/CakeML/Tests/Compiler_Test.thy", - "formal/mizar/cc0sp1.miz", - "formal/mizar/scmring2.miz", - "formal/hol/Tutorial/Custom_inference_rules.ml", - "formal/mizar/autalg_1.miz", - "formal/lean/liquid/for_mathlib/preserves_limits.lean", - "formal/afp/SimplifiedOntologicalArgument/SimplifiedOntologicalArgument.thy", - "formal/afp/Iptables_Semantics/Primitive_Matchers/Parser6.thy", - "formal/lean/mathlib/algebra/direct_sum/internal.lean", - "formal/hol/miz3/miz3.ml", - "formal/lean/mathlib/measure_theory/integral/layercake.lean", - "formal/afp/LTL_to_GBA/All_Of_LTL_to_GBA.thy", - "formal/afp/Flow_Networks/NetCheck.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1962/p1.lean", - "formal/lean/mathlib/data/qpf/multivariate/default.lean", - "formal/hol/Help/ONCE_ASM_REWRITE_RULE.doc", - "formal/afp/BD_Security_Compositional/Trivial_Security.thy", - "formal/mizar/nagata_2.miz", - "formal/afp/Isabelle_Meta_Model/toy_example/embedding/meta_toy/Printer_Toy_extended.thy", - "formal/afp/JinjaThreads/Execute/Code_Generation.thy", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_SMC_FUNCT.thy", - "formal/afp/OpSets/Interleaving.thy", - "formal/hol/Boyer_Moore/environment.ml", - "formal/afp/CAVA_LTL_Modelchecker/SM/Refine/Decide_Locality.thy", - "formal/mizar/msualg_8.miz", - "formal/afp/Relational_Paths/Path_Algorithms.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2008/a/p15.lean", - "formal/hol/Help/subtract_prime.doc", - "formal/afp/SimplifiedOntologicalArgument/UFilterVariant.thy", - "formal/afp/Affine_Arithmetic/Counterclockwise_2D_Strict.thy", - "formal/lean/liquid/Lbar/finsupp_instance.lean", - "formal/hol/100/polyhedron.ml", - "formal/afp/CakeML/Big_Step_Unclocked_Single.thy", 
- "formal/hol/Help/orelse_.doc", - "formal/lean/mathlib/category_theory/limits/types.lean", - "formal/afp/Shadow_SC_DOM/document/root.tex", - "formal/lean/mathlib/analysis/calculus/fderiv_symmetric.lean", - "formal/afp/Digit_Expansions/document/root.tex", - "formal/afp/Stream_Fusion_Code/Stream_Fusion_LList.thy", - "formal/lean/liquid/for_mathlib/short_exact_sequence.lean", - "formal/hol/Help/MATCH_MP_TAC.doc", - "formal/hol/Help/subst.doc", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p35.lean", - "formal/lean/mathlib/algebra/char_zero/defs.lean", - "formal/afp/JinjaThreads/Compiler/TypeComp.thy", - "formal/lean/mathlib/control/bitraversable/lemmas.lean", - "formal/lean/mathlib/algebra/homology/homotopy.lean", - "formal/lean/mathlib/algebra/homology/functor.lean", - "formal/afp/Winding_Number_Eval/Missing_Topology.thy", - "formal/afp/Regular-Sets/Regular_Exp.thy", - "formal/afp/IP_Addresses/Lib_List_toString.thy", - "formal/lean/mathlib/group_theory/order_of_element.lean", - "formal/lean/mathlib/order/category/FinBoolAlg.lean", - "formal/afp/LambdaMu/DeBruijn.thy", - "formal/lean/mathlib/analysis/special_functions/non_integrable.lean", - "formal/afp/Verified_SAT_Based_AI_Planning/Set2_Join_RBT.thy", - "formal/afp/Simple_Firewall/Primitives/L4_Protocol.thy", - "formal/afp/Kruskal/document/root.tex", - "formal/afp/Types_To_Sets_Extension/ETTS/Manual/Manual_Prerequisites.thy", - "formal/afp/Tycon/Error_Transformer.thy", - "formal/lean/mathlib/algebra/lie/engel.lean", - "formal/lean/mathlib/ring_theory/local_properties.lean", - "formal/mizar/convex3.miz", - "formal/lean/mathlib/analysis/special_functions/complex/arg.lean", - "formal/mizar/polynom5.miz", - "formal/lean/mathlib/measure_theory/integral/set_integral.lean", - "formal/mizar/monoid_1.miz", - "formal/afp/Transition_Systems_and_Automata/Automata/NBA/NBA_Refine.thy", - "formal/afp/Iptables_Semantics/Examples/IPPartEval/IP_Address_Space_Examples_All_Small.thy", - "formal/afp/Inductive_Inference/Union.thy", - "formal/afp/Datatype_Order_Generator/Derive.thy", - "formal/afp/List_Update/Move_to_Front.thy", - "formal/afp/Attack_Trees/Infrastructure.thy", - "formal/afp/Vickrey_Clarke_Groves/Partitions.thy", - "formal/afp/CAVA_LTL_Modelchecker/SM/Analysis/SM_POR.thy", - "formal/mizar/gobrd11.miz", - "formal/hol/Help/MP_CONV.doc", - "formal/afp/Clique_and_Monotone_Circuits/Clique_Large_Monotone_Circuits.thy", - "formal/afp/Binomial-Heaps/BinomialHeap.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p99.lean", - "formal/mizar/bhsp_7.miz", - "formal/hol/Help/meson_skew.doc", - "formal/hol/Complex/complexnumbers.ml", - "formal/afp/CZH_Foundations/czh_sets/CZH_Sets_Cardinality.thy", - "formal/afp/IMP2/doc/Examples.thy", - "formal/afp/Iptables_Semantics/Semantics_Embeddings.thy", - "formal/mizar/flang_2.miz", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2021/a/p3.lean", - "formal/afp/Dict_Construction/Documentation/Impossibility.thy", - "formal/afp/AODV/variants/e_all_abcd/E_Aodv_Loop_Freedom.thy", - "formal/afp/Dependent_SIFUM_Type_Systems/LocallySoundModeUse.thy", - "formal/afp/Iptables_Semantics/Semantics.thy", - "formal/afp/Proof_Strategy_Language/document/root.tex", - "formal/mizar/symsp_1.miz", - "formal/afp/Relation_Algebra/Relation_Algebra_RTC.thy", - "formal/afp/Binomial-Queues/PQ.thy", - "formal/afp/Vickrey_Clarke_Groves/CombinatorialAuctionCodeExtraction.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p175.lean", - "formal/afp/LatticeProperties/document/root.tex", - 
"formal/afp/Consensus_Refined/Observing_Quorums_Opt.thy", - "formal/afp/Ordinal/OrdinalArith.thy", - "formal/lean/mathlib/algebra/lie/subalgebra.lean", - "formal/mizar/pcs_0.miz", - "formal/afp/CZH_Foundations/czh_sets/ex/CZH_EX_Algebra.thy", - "formal/lean/mathlib/data/pfunctor/univariate/default.lean", - "formal/afp/Concurrent_Ref_Alg/Refinement_Lattice.thy", - "formal/afp/Constructor_Funs/Test_Constructor_Funs.thy", - "formal/mizar/jordan5d.miz", - "formal/hol/Library/floor.ml", - "formal/afp/MFMC_Countable/Matrix_For_Marginals.thy", - "formal/afp/Dedekind_Real/document/root.tex", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2000/p5.lean", - "formal/afp/Kleene_Algebra/Signatures.thy", - "formal/lean/mathlib/order/antichain.lean", - "formal/mizar/jordan1b.miz", - "formal/afp/Well_Quasi_Orders/Well_Quasi_Orders.thy", - "formal/lean/mathlib/measure_theory/integral/set_to_l1.lean", - "formal/afp/Smooth_Manifolds/Tangent_Space.thy", - "formal/hol/Help/META_SPEC_TAC.doc", - "formal/lean/mathlib/measure_theory/measure/ae_disjoint.lean", - "formal/afp/Applicative_Lifting/Applicative_Probability_List.thy", - "formal/afp/Priority_Queue_Braun/Priority_Queue_Braun2.thy", - "formal/lean/mathlib/field_theory/is_alg_closed/basic.lean", - "formal/hol/Help/EQ_MP.doc", - "formal/afp/Core_DOM/common/tests/Document_getElementById.thy", - "formal/lean/mathlib/field_theory/adjoin.lean", - "formal/mizar/binari_2.miz", - "formal/lean/mathlib/analysis/convex/topology.lean", - "formal/afp/First_Order_Terms/Seq_More.thy", - "formal/lean/mathlib/ring_theory/is_tensor_product.lean", - "formal/afp/JinjaThreads/Execute/State_Refinement.thy", - "formal/lean/liquid/for_mathlib/endomorphisms/Ext.lean", - "formal/lean/mathlib/field_theory/galois.lean", - "formal/afp/Foundation_of_geometry/Congruence.thy", - "formal/mizar/bhsp_6.miz", - "formal/afp/Call_Arity/CoCallImplSafe.thy", - "formal/lean/mathlib/order/compactly_generated.lean", - "formal/afp/Automatic_Refinement/Lib/Named_Sorted_Thms.thy", - "formal/afp/KAT_and_DRA/SingleSorted/Test_Dioid.thy", - "formal/afp/Hoare_Time/SepLog_Examples.thy", - "formal/afp/LTL_Master_Theorem/LTL_to_DRA/Transition_Functions.thy", - "formal/afp/Transitive_Models/Aleph_Relative.thy", - "formal/afp/Transitive-Closure/RBT_Map_Set_Extension.thy", - "formal/afp/Regex_Equivalence/Deriv_Autos.thy", - "formal/afp/Combinatorics_Words_Graph_Lemma/Graph_Lemma.thy", - "formal/hol/IsabelleLight/support.ml", - "formal/lean/mathlib/linear_algebra/matrix/symmetric.lean", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1994/p3.lean", - "formal/afp/Tree_Decomposition/TreewidthTree.thy", - "formal/afp/Isabelle_Marries_Dirac/Deutsch_Jozsa.thy", - "formal/lean/mathlib/representation_theory/Action.lean", - "formal/lean/mathlib/algebra/lie/classical.lean", - "formal/lean/mathlib/data/nat/choose/basic.lean", - "formal/mizar/scm_inst.miz", - "formal/hol/Help/equals_goal.doc", - "formal/hol/Help/THENC.doc", - "formal/afp/SATSolverVerification/BasicDPLL.thy", - "formal/coq/math-comp/fingroup/presentation.v", - "formal/afp/Relational-Incorrectness-Logic/RelationalIncorrectness.thy", - "formal/lean/mathlib/order/filter/partial.lean", - "formal/afp/IMP2/automation/IMP2_Program_Analysis.thy", - "formal/afp/JinjaThreads/Compiler/JJ1WellForm.thy", - "formal/afp/Parity_Game/AttractorStrategy.thy", - "formal/afp/LOFT/OpenFlow_Matches.thy", - "formal/afp/Virtual_Substitution/Exports.thy", - "formal/afp/MFODL_Monitor_Optimized/Code_Double.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p296.lean", - 
"formal/afp/CoreC++/Auxiliary.thy", - "formal/lean/mathlib/analysis/calculus/conformal/inner_product.lean", - "formal/afp/Complx/ex/SumArr.thy", - "formal/afp/Bell_Numbers_Spivey/Bell_Numbers.thy", - "formal/lean/mathlib/number_theory/lucas_lehmer.lean", - "formal/lean/mathlib/group_theory/perm/basic.lean", - "formal/lean/mathlib/analysis/convex/join.lean", - "formal/afp/Stable_Matching/Sotomayor.thy", - "formal/afp/Green/General_Utils.thy", - "formal/afp/Hybrid_Multi_Lane_Spatial_Logic/regular/Regular_Sensors.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p22.lean", - "formal/lean/mathlib/linear_algebra/clifford_algebra/equivs.lean", - "formal/afp/Abstract_Completeness/Abstract_Completeness.thy", - "formal/afp/IMP_Compiler_Reuse/document/root.tex", - "formal/afp/HyperCTL/HyperCTL.thy", - "formal/afp/Deriving/Comparator_Generator/Comparator_Generator.thy", - "formal/mizar/tarski.miz", - "formal/afp/JinjaThreads/BV/BVExec.thy", - "formal/afp/FocusStreamsCaseStudies/FR_proof.thy", - "formal/mizar/grsolv_1.miz", - "formal/lean/mathlib/linear_algebra/finsupp_vector_space.lean", - "formal/afp/JiveDataStoreModel/Isabelle_Store/AttributesIndep.thy", - "formal/lean/mathlib/ring_theory/integrally_closed.lean", - "formal/lean/mathlib/linear_algebra/tensor_product_basis.lean", - "formal/lean/mathlib/topology/homotopy/homotopy_group.lean", - "formal/afp/Monad_Memo_DP/example/Longest_Common_Subsequence.thy", - "formal/lean/mathlib/data/prod/lex.lean", - "formal/afp/List-Infinite/document/root.tex", - "formal/hol/Help/define.doc", - "formal/afp/Consensus_Refined/MRU_Vote_Opt.thy", - "formal/afp/CoCon/Paper_Confidentiality/Paper_Value_Setup.thy", - "formal/afp/UTP/utp/utp_state_parser.thy", - "formal/mizar/robbins2.miz", - "formal/afp/Differential_Dynamic_Logic/Syntax.thy", - "formal/lean/mathlib/data/list/palindrome.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p214.lean", - "formal/afp/Approximation_Algorithms/Approx_MIS_Hoare.thy", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1989/p8.lean", - "formal/hol/Multivariate/vectors.ml", - "formal/lean/mathlib/category_theory/limits/preserves/shapes/zero.lean", - "formal/afp/SumSquares/TwoSquares.thy", - "formal/afp/Regular_Tree_Relations/Tree_Automata/Tree_Automata_Abstract_Impl.thy", - "formal/afp/Timed_Automata/Paths_Cycles.thy", - "formal/afp/POPLmark-deBruijn/document/root.tex", - "formal/afp/Probabilistic_Prime_Tests/Residues_Nat.thy", - "formal/mizar/pscomp_1.miz", - "formal/mizar/brouwer3.miz", - "formal/afp/Algebraic_Numbers/Real_Algebraic_Numbers.thy", - "formal/afp/Van_Emde_Boas_Trees/VEBT_Bounds.thy", - "formal/hol/Help/qmap.doc", - "formal/afp/Smooth_Manifolds/document/root.tex", - "formal/hol/Help/rev_splitlist.doc", - "formal/afp/Refine_Monadic/Generic/RefineG_Transfer.thy", - "formal/mizar/series_2.miz", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/METASINVAR_SystemBoundary.thy", - "formal/lean/mathlib/number_theory/padics/hensel.lean", - "formal/afp/Van_Emde_Boas_Trees/Separation_Logic_Imperative_HOL/Tools/Syntax_Match.thy", - "formal/lean/liquid/for_mathlib/homotopy_category_functor_compatibilities.lean", - "formal/mizar/zfrefle1.miz", - "formal/afp/Ordinal/Ordinal.thy", - "formal/mizar/scmpds_6.miz", - "formal/afp/Sigma_Commit_Crypto/Cyclic_Group_Ext.thy", - "formal/afp/Launchbury/Abstract-Denotational-Props.thy", - "formal/afp/Collections/ICF/impl/TrieSetImpl.thy", - "formal/hol/Help/SEMIRING_NORMALIZERS_CONV.doc", - "formal/lean/mathlib/deprecated/subgroup.lean", - 
"formal/afp/Lowe_Ontological_Argument/LoweOntologicalArgument_3.thy", - "formal/afp/Propositional_Proof_Systems/Tseytin_Sema.thy", - "formal/lean/mathlib/algebra/lie/quotient.lean", - "formal/afp/PSemigroupsConvolution/Quantales.thy", - "formal/afp/Buchi_Complementation/Complementation.thy", - "formal/afp/Category/SetCat.thy", - "formal/afp/Hahn_Jordan_Decomposition/Hahn_Jordan_Prelims.thy", - "formal/afp/Hermite_Lindemann/document/root.tex", - "formal/afp/Complx/OG_Annotations.thy", - "formal/afp/Partial_Order_Reduction/Extensions/Word_Extensions.thy", - "formal/lean/mathlib/category_theory/punit.lean", - "formal/afp/UPF_Firewall/PacketFilter/PolicyCore.thy", - "formal/afp/JinjaDCI/J/BigStep.thy", - "formal/afp/LTL_to_DRA/Impl/Mojmir_Rabin_Impl.thy", - "formal/mizar/waybel22.miz", - "formal/mizar/rearran1.miz", - "formal/afp/Core_DOM/common/classes/DocumentClass.thy", - "formal/lean/mathlib/measure_theory/integral/integrable_on.lean", - "formal/lean/mathlib/algebra/continued_fractions/computation/translations.lean", - "formal/afp/Prim_Dijkstra_Simple/Dijkstra_Abstract.thy", - "formal/lean/mathlib/data/fin/tuple/nat_antidiagonal.lean", - "formal/afp/Binding_Syntax_Theory/QuasiTerms_Swap_Fresh.thy", - "formal/hol/Help/mergesort.doc", - "formal/afp/Tarskis_Geometry/document/root.tex", - "formal/afp/Graph_Saturation/StandardModels.thy", - "formal/afp/Relation_Algebra/Relation_Algebra_Tests.thy", - "formal/lean/mathlib/category_theory/adjunction/basic.lean", - "formal/afp/WHATandWHERE_Security/Language_Composition.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/amgm_faxinrrp2msqrt2geq2mxm1div2x.lean", - "formal/mizar/realset1.miz", - "formal/lean/mathlib/algebra/regular/pow.lean", - "formal/afp/Refine_Imperative_HOL/IICF/Impl/IICF_Sepl_Binding.thy", - "formal/mizar/integr14.miz", - "formal/afp/JinjaDCI/BV/JVM_SemiType.thy", - "formal/afp/CoCon/Review_Confidentiality/Review_Intro.thy", - "formal/afp/Modal_Logics_for_NTS/Weak_Validity.thy", - "formal/lean/mathlib/topology/algebra/uniform_ring.lean", - "formal/afp/IP_Addresses/Prefix_Match_toString.thy", - "formal/lean/mathlib/topology/instances/real_vector_space.lean", - "formal/lean/mathlib/analysis/subadditive.lean", - "formal/afp/Automatic_Refinement/Parametricity/Param_Tool.thy", - "formal/afp/Propositional_Proof_Systems/CNF_Formulas_Sema.thy", - "formal/coq/odd-order/BGsection1.v", - "formal/lean/mathlib/data/bitvec/basic.lean", - "formal/afp/Refine_Imperative_HOL/Sepref_Monadify.thy", - "formal/afp/Jinja/BV/TF_JVM.thy", - "formal/afp/Simpl/HoarePartialDef.thy", - "formal/afp/MFMC_Countable/Rel_PMF_Characterisation.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/apbmpcneq0_aeq0anbeq0anceq0.lean", - "formal/hol/Ntrie/ntrie.ml", - "formal/lean/liquid/for_mathlib/homological_complex_op.lean", - "formal/afp/CakeML/generated/CakeML/FpSem.thy", - "formal/mizar/funct_8.miz", - "formal/afp/Interval_Arithmetic_Word32/Interval_Word32.thy", - "formal/lean/mathlib/algebra/category/Mon/colimits.lean", - "formal/afp/Bernoulli/Bernoulli.thy", - "formal/afp/Security_Protocol_Refinement/Refinement/Infra.thy", - "formal/hol/Help/exists.doc", - "formal/afp/DFS_Framework/Examples/Tarjan_LowLink.thy", - "formal/afp/Call_Arity/NoCardinalityAnalysis.thy", - "formal/lean/mathlib/measure_theory/integral/interval_average.lean", - "formal/afp/UPF_Firewall/FWNormalisation/NormalisationGenericProofs.thy", - "formal/lean/mathlib/data/set/intervals/disjoint.lean", - "formal/lean/mathlib/data/rbtree/main.lean", - 
"formal/afp/HRB-Slicing/Proc/Labels.thy", - "formal/afp/Diophantine_Eqns_Lin_Hom/document/root.tex", - "formal/afp/Heard_Of/Majorities.thy", - "formal/mizar/hilb10_5.miz", - "formal/afp/Probabilistic_Prime_Tests/Algebraic_Auxiliaries.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/numbertheory/sqmod4in01d.lean", - "formal/afp/Higher_Order_Terms/Fresh_Class.thy", - "formal/hol/Help/AP_THM_TAC.doc", - "formal/afp/Sturm_Sequences/document/root_userguide.tex", - "formal/lean/mathlib/algebra/module/submodule/bilinear.lean", - "formal/afp/SATSolverVerification/Initialization.thy", - "formal/afp/JinjaThreads/MM/JMM_Typesafe2.thy", - "formal/afp/JinjaDCI/Compiler/J1WellForm.thy", - "formal/afp/Lambda_Free_RPOs/document/root.tex", - "formal/afp/Separation_Algebra/document/root.tex", - "formal/afp/SpecCheck/Generators/SpecCheck_Generators.thy", - "formal/afp/IFC_Tracking/document/root.tex", - "formal/lean/mathlib/category_theory/concrete_category/reflects_isomorphisms.lean", - "formal/afp/Berlekamp_Zassenhaus/Finite_Field.thy", - "formal/lean/mathlib/number_theory/liouville/liouville_with.lean", - "formal/afp/Containers/Examples/Containers_TwoSat_Ex.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p493.lean", - "formal/afp/WOOT_Strong_Eventual_Consistency/StrongConvergence.thy", - "formal/afp/Launchbury/Substitution.thy", - "formal/afp/Auto2_HOL/HOL/Pelletier.thy", - "formal/lean/mathlib/topology/locally_constant/basic.lean", - "formal/lean/mathlib/group_theory/perm/sign.lean", - "formal/lean/mathlib/category_theory/category/pairwise.lean", - "formal/afp/Lambda_Free_KBOs/Lambda_Encoding_KBO.thy", - "formal/lean/mathlib/data/finset/fin.lean", - "formal/afp/Frequency_Moments/Product_PMF_Ext.thy", - "formal/hol/Help/bool_ty.doc", - "formal/lean/mathlib/combinatorics/simple_graph/strongly_regular.lean", - "formal/afp/Fishers_Inequality/Design_Extras.thy", - "formal/lean/liquid/for_mathlib/endomorphisms/functor.lean", - "formal/mizar/integr12.miz", - "formal/lean/mathlib/analysis/inner_product_space/basic.lean", - "formal/afp/FOL_Seq_Calc3/Semantics.thy", - "formal/afp/Incompleteness/Functions.thy", - "formal/afp/CRDT/Ordered_List.thy", - "formal/afp/LOFT/Semantics_OpenFlow.thy", - "formal/afp/First_Welfare_Theorem/Microeconomics/Private_Ownership_Economy.thy", - "formal/lean/mathlib/number_theory/basic.lean", - "formal/afp/AWN/OAWN_Convert.thy", - "formal/afp/Differential_Dynamic_Logic/Bound_Effect.thy", - "formal/afp/Forcing/Pairing_Axiom.thy", - "formal/hol/100/lagrange.ml", - "formal/afp/Psi_Calculi/Subst_Term.thy", - "formal/afp/CZH_Foundations/czh_sets/CZH_Sets_Ordinals.thy", - "formal/mizar/integr25.miz", - "formal/hol/Help/increasing.doc", - "formal/afp/Virtual_Substitution/document/root.tex", - "formal/afp/Circus/Var.thy", - "formal/lean/mathlib/topology/partition_of_unity.lean", - "formal/afp/Iptables_Semantics/Examples/TUM_Net_Firewall/Analyze_TUM_Net_Firewall.thy", - "formal/hol/Help/INST_UPPERCASE.doc", - "formal/lean/liquid/for_mathlib/presieve.lean", - "formal/hol/Help/NUM_SUB_CONV.doc", - "formal/lean/mathlib/data/int/cast/defs.lean", - "formal/afp/Eval_FO/Eval_FO.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p123.lean", - "formal/afp/AVL-Trees/AVL2.thy", - "formal/afp/SpecCheck/SpecCheck_Base.thy", - "formal/afp/Jordan_Normal_Form/Char_Poly.thy", - "formal/hol/Help/p.doc", - "formal/afp/Complex_Bounded_Operators/Complex_L2.thy", - "formal/afp/Functional-Automata/AutoProj.thy", - "formal/afp/Call_Arity/Cardinality-Domain-Lists.thy", - 
"formal/lean/sphere-eversion/global/indexing.lean", - "formal/lean/mathlib/model_theory/skolem.lean", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/induction/sum_odd.lean", - "formal/lean/mathlib/logic/is_empty.lean", - "formal/lean/mathlib/algebra/category/Ring/instances.lean", - "formal/afp/Complex_Bounded_Operators/extra/Extra_Operator_Norm.thy", - "formal/afp/UPF_Firewall/PacketFilter/DatatypePort.thy", - "formal/afp/GaleStewart_Games/MorePrefix.thy", - "formal/afp/Ergodic_Theory/SG_Library_Complement.thy", - "formal/afp/Echelon_Form/Examples_Echelon_Form_Abstract.thy", - "formal/afp/Topological_Semantics/topo_border_algebra.thy", - "formal/mizar/lattice5.miz", - "formal/mizar/huffman1.miz", - "formal/afp/Density_Compiler/PDF_Semantics.thy", - "formal/lean/mathlib/number_theory/padics/ring_homs.lean", - "formal/afp/Density_Compiler/PDF_Compiler_Pred.thy", - "formal/lean/mathlib/ring_theory/localization/module.lean", - "formal/afp/Projective_Geometry/Higher_Projective_Space_Rank_Axioms.thy", - "formal/hol/Help/list_mk_icomb.doc", - "formal/afp/CISC-Kernel/trace/Rushby-with-Control/SK.thy", - "formal/afp/Types_To_Sets_Extension/ETTS/Tests/ETTS_Tests.thy", - "formal/lean/mathlib/linear_algebra/default.lean", - "formal/afp/BinarySearchTree/BinaryTree_Map.thy", - "formal/lean/mathlib/ring_theory/witt_vector/truncated.lean", - "formal/mizar/qc_lang4.miz", - "formal/afp/Jinja/BV/SemiType.thy", - "formal/lean/mathlib/data/pi/algebra.lean", - "formal/lean/liquid/for_mathlib/derived_functor_zero.lean", - "formal/hol/Help/is_cond.doc", - "formal/afp/CZH_Foundations/czh_digraphs/CZH_DG_PDigraph.thy", - "formal/lean/mathlib/algebra/category/Mon/adjunctions.lean", - "formal/lean/mathlib/topology/order.lean", - "formal/afp/Knights_Tour/KnightsTour.thy", - "formal/hol/Help/mk_eq.doc", - "formal/afp/Hoare_Time/QuantK_Hoare.thy", - "formal/mizar/topgrp_1.miz", - "formal/afp/Refine_Imperative_HOL/Sepref.thy", - "formal/afp/Mereology/GEM.thy", - "formal/afp/Simpl/DPC0Expressions.thy", - "formal/afp/Noninterference_Concurrent_Composition/ConcurrentComposition.thy", - "formal/afp/Noninterference_Ipurge_Unwinding/document/root.tex", - "formal/lean/mathlib/order/succ_pred/basic.lean", - "formal/hol/Help/net_of_thm.doc", - "formal/afp/Isabelle_Meta_Model/toy_example/embedding/meta_toy/Meta_Toy_extended.thy", - "formal/hol/Logic/lpo.ml", - "formal/afp/IsaNet/infrastructure/Keys.thy", - "formal/lean/sphere-eversion/global/relation.lean", - "formal/lean/mathlib/number_theory/l_series.lean", - "formal/afp/Slicing/Basic/CFG.thy", - "formal/afp/CakeML/doc/Doc_Generated.thy", - "formal/lean/mathlib/ring_theory/derivation.lean", - "formal/mizar/integra2.miz", - "formal/afp/Game_Based_Crypto/Elgamal.thy", - "formal/afp/Van_Emde_Boas_Trees/VEBT_Definitions.thy", - "formal/afp/UTP/utp/utp_tactics.thy", - "formal/afp/Conditional_Transfer_Rule/UD/Tests/UD_Tests.thy", - "formal/mizar/jordan_a.miz", - "formal/afp/Regression_Test_Selection/RTS.thy", - "formal/afp/Verified_SAT_Based_AI_Planning/document/root.tex", - "formal/afp/HRB-Slicing/StaticInter/BasicDefs.thy", - "formal/lean/mathlib/algebra/ring/aut.lean", - "formal/afp/Types_Tableaus_and_Goedels_God/IHOML_Examples.thy", - "formal/afp/Auto2_Imperative_HOL/Imperative/SepLogic_Base.thy", - "formal/lean/mathlib/topology/sheaves/functors.lean", - "formal/lean/mathlib/category_theory/category/Pointed.lean", - "formal/hol/Help/ANTS_TAC.doc", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p270.lean", - 
"formal/lean/mathlib/analysis/normed_space/hahn_banach/extension.lean", - "formal/afp/TLA/Even.thy", - "formal/mizar/pcomps_2.miz", - "formal/afp/Applicative_Lifting/Applicative_Vector.thy", - "formal/afp/TESL_Language/Run.thy", - "formal/lean/mathlib/algebra/category/Group/basic.lean", - "formal/lean/mathlib/analysis/normed_space/M_structure.lean", - "formal/afp/Transitive_Models/Replacement_Lepoll.thy", - "formal/hol/100/piseries.ml", - "formal/afp/Dominance_CHK/Sorted_Less2.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p412.lean", - "formal/lean/mathzoo/mathzoo/olympiads/imo/2007/p6.lean", - "formal/afp/FeatherweightJava/FJSound.thy", - "formal/afp/Quasi_Borel_Spaces/Bayesian_Linear_Regression.thy", - "formal/coq/odd-order/BGsection9.v", - "formal/afp/Security_Protocol_Refinement/Key_establish/m1_keydist.thy", - "formal/afp/Modal_Logics_for_NTS/Weak_Bisimilarity_Implies_Equivalence.thy", - "formal/afp/Aristotles_Assertoric_Syllogistic/AristotlesAssertoric.thy", - "formal/coq/math-comp/algebra/vector.v", - "formal/afp/InfPathElimination/ArcExt.thy", - "formal/afp/Strong_Security/Language_Composition.thy", - "formal/lean/mathlib/topology/homotopy/basic.lean", - "formal/hol/EC/formulary_xzprojective.ml", - "formal/hol/Help/vfree_in.doc", - "formal/lean/mathlib/measure_theory/measure/vector_measure.lean", - "formal/lean/perfectoid/for_mathlib/uniform_space/uniform_field.lean", - "formal/afp/Affine_Arithmetic/Counterclockwise_2D_Arbitrary.thy", - "formal/lean/sphere-eversion/to_mathlib/measure_theory/interval_integral.lean", - "formal/lean/mathzoo/mathzoo/olympiads/imo/2013/p1.lean", - "formal/afp/Weighted_Arithmetic_Geometric_Mean/document/root.tex", - "formal/afp/MFMC_Countable/MFMC_Bounded.thy", - "formal/afp/Abstract_Soundness/Infinite_Proof_Soundness.thy", - "formal/afp/Pell/Pell_Algorithm_Test.thy", - "formal/mizar/zmodlat2.miz", - "formal/afp/CoSMeDis/Friend_Request_Confidentiality/Friend_Request.thy", - "formal/lean/mathlib/topology/support.lean", - "formal/afp/LocalLexing/TheoremD11.thy", - "formal/lean/liquid/Lbar/ext_aux2.lean", - "formal/lean/mathlib/category_theory/limits/functor_category.lean", - "formal/afp/Real_Power/document/root.tex", - "formal/mizar/pells_eq.miz", - "formal/afp/Generic_Join/Examples_Join.thy", - "formal/mizar/random_1.miz", - "formal/mizar/binari_3.miz", - "formal/afp/Automated_Stateful_Protocol_Verification/Stateful_Protocol_Verification.thy", - "formal/hol/Logic/unif.ml", - "formal/afp/CCS/document/root.tex", - "formal/hol/Library/q.ml", - "formal/afp/JinjaDCI/BV/LBVJVM.thy", - "formal/hol/Help/INT_POW_CONV.doc", - "formal/afp/Pi_Calculus/Weak_Late_Cong_Subst_Pres.thy", - "formal/mizar/newton04.miz", - "formal/lean/mathlib/data/multiset/nodup.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2020/b/p2.lean", - "formal/afp/Polynomial_Factorization/Dvd_Int_Poly.thy", - "formal/hol/Help/PURE_ONCE_REWRITE_TAC.doc", - "formal/afp/Group-Ring-Module/document/root.tex", - "formal/afp/MSO_Regex_Equivalence/Pi_Regular_Operators.thy", - "formal/lean/mathlib/category_theory/adjunction/default.lean", - "formal/afp/Factored_Transition_System_Bounding/FactoredSystem.thy", - "formal/lean/liquid/polyhedral_lattice/int.lean", - "formal/afp/HOLCF-Prelude/document/root.tex", - "formal/afp/Probabilistic_Noninterference/Concrete.thy", - "formal/afp/Complx/document/root.tex", - "formal/afp/CCS/Tau_Chain.thy", - "formal/afp/Probabilistic_While/Fast_Dice_Roll.thy", - "formal/lean/mathlib/category_theory/triangulated/rotate.lean", - 
"formal/afp/Smith_Normal_Form/SNF_Algorithm_Euclidean_Domain.thy", - "formal/afp/Gauss_Jordan/Determinants_IArrays.thy", - "formal/lean/mathlib/linear_algebra/general_linear_group.lean", - "formal/hol/miz3/Samples/icms.ml", - "formal/hol/Help/CONJ_CANON_CONV.doc", - "formal/lean/mathlib/analysis/special_functions/trigonometric/basic.lean", - "formal/lean/mathlib/order/category/FinPartialOrder.lean", - "formal/lean/mathlib/analysis/complex/polynomial.lean", - "formal/afp/Proof_Strategy_Language/Example.thy", - "formal/afp/JinjaThreads/DFA/Listn.thy", - "formal/lean/mathlib/analysis/seminorm.lean", - "formal/afp/DiscretePricing/Filtration.thy", - "formal/hol/Help/INT_REDUCE_CONV.doc", - "formal/hol/Help/rat_of_term.doc", - "formal/lean/mathlib/combinatorics/simple_graph/regularity/equitabilise.lean", - "formal/afp/Formal_SSA/Construct_SSA_code.thy", - "formal/afp/pGCL/Transformers.thy", - "formal/mizar/pencil_3.miz", - "formal/afp/Pi_Calculus/Strong_Early_Bisim_Pres.thy", - "formal/afp/PLM/TAO_4_BasicDefinitions.thy", - "formal/lean/liquid/for_mathlib/random_homological_lemmas.lean", - "formal/afp/Slicing/While/SemanticsWellFormed.thy", - "formal/afp/Universal_Turing_Machine/Recursive.thy", - "formal/afp/CISC-Kernel/step/Step_vpeq_weakly_step_consistent.thy", - "formal/afp/Gauss_Jordan/Code_Matrix.thy", - "formal/mizar/flang_1.miz", - "formal/afp/CAVA_LTL_Modelchecker/SM/Refine/Gen_Cfg_Bisim.thy", - "formal/lean/liquid/thm95/double_complex.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p198.lean", - "formal/lean/mathlib/order/atoms.lean", - "formal/afp/Jinja/DFA/Semilattices.thy", - "formal/mizar/dirort.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p101.lean", - "formal/afp/LambdaMu/Progress.thy", - "formal/mizar/valued_2.miz", - "formal/afp/IsaNet/infrastructure/Take_While_Update.thy", - "formal/afp/Game_Based_Crypto/PRF_IND_CPA.thy", - "formal/lean/mathlib/category_theory/abelian/homology.lean", - "formal/lean/mathlib/geometry/manifold/instances/units_of_normed_algebra.lean", - "formal/afp/CZH_Foundations/czh_semicategories/CZH_SMC_Simple.thy", - "formal/afp/Category3/FreeCategory.thy", - "formal/lean/mathlib/category_theory/limits/shapes/normal_mono/equalizers.lean", - "formal/lean/liquid/for_mathlib/AddCommGroup/explicit_products.lean", - "formal/afp/VeriComp/Semantics.thy", - "formal/afp/Relational_Disjoint_Set_Forests/More_Disjoint_Set_Forests.thy", - "formal/afp/ZFC_in_HOL/ZFC_in_HOL.thy", - "formal/afp/Abstract-Hoare-Logics/While/Termi.thy", - "formal/lean/mathlib/data/set/constructions.lean", - "formal/afp/Call_Arity/TTree-HOLCF.thy", - "formal/hol/Help/filter.doc", - "formal/mizar/membered.miz", - "formal/afp/Coinductive/TLList.thy", - "formal/lean/mathlib/category_theory/sums/associator.lean", - "formal/lean/mathlib/order/hom/lattice.lean", - "formal/afp/Kleene_Algebra/document/root.tex", - "formal/afp/UTP/utp/utp_sym_eval.thy", - "formal/mizar/mod_2.miz", - "formal/afp/LOFT/OpenFlow_Helpers.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2016/a/p3.lean", - "formal/afp/Call_Arity/TrivialArityAnal.thy", - "formal/afp/IP_Addresses/IPv4.thy", - "formal/afp/Intro_Dest_Elim/IDE_Tools/IDE_Tools.thy", - "formal/mizar/matrix12.miz", - "formal/afp/Roy_Floyd_Warshall/Roy_Floyd_Warshall.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p765.lean", - "formal/hol/Help/INT_EQ_CONV.doc", - "formal/hol/Help/INTEGER_RULE.doc", - "formal/mizar/asympt_3.miz", - "formal/mizar/waybel33.miz", - 
"formal/lean/mathlib/measure_theory/decomposition/unsigned_hahn.lean", - "formal/mizar/topgen_4.miz", - "formal/afp/Sigma_Commit_Crypto/Schnorr_Sigma_Commit.thy", - "formal/lean/mathlib/linear_algebra/annihilating_polynomial.lean", - "formal/hol/Library/rstc.ml", - "formal/lean/liquid/for_mathlib/nnreal.lean", - "formal/afp/Key_Agreement_Strong_Adversaries/sklvl3_symmetric.thy", - "formal/afp/ADS_Functor/Canton_Transaction_Tree.thy", - "formal/afp/Collections/GenCF/Gen/Gen_Set.thy", - "formal/afp/Network_Security_Policy_Verification/Examples/Tainting/CryptoDB.thy", - "formal/afp/Ordinary_Differential_Equations/Refinement/Refine_Parallel.thy", - "formal/afp/Refine_Imperative_HOL/Lib/PO_Normalizer.thy", - "formal/hol/Help/delete_parser.doc", - "formal/afp/Functional-Automata/NA.thy", - "formal/afp/CakeML/generated/CakeML/SimpleIO.thy", - "formal/afp/Correctness_Algebras/Preconditions.thy", - "formal/afp/Jinja/DFA/Opt.thy", - "formal/lean/mathlib/data/multiset/fold.lean", - "formal/mizar/mfold_1.miz", - "formal/mizar/midsp_3.miz", - "formal/afp/Iptables_Semantics/Primitive_Matchers/IpAddresses_Normalize.thy", - "formal/afp/Key_Agreement_Strong_Adversaries/IK.thy", - "formal/afp/Separation_Algebra/ex/capDL/Types_D.thy", - "formal/hol/Rqe/inferisign_thms.ml", - "formal/afp/Topological_Semantics/topo_operators_basic.thy", - "formal/afp/Berlekamp_Zassenhaus/Finite_Field_Factorization_Record_Based.thy", - "formal/afp/Prime_Distribution_Elementary/Lcm_Nat_Upto.thy", - "formal/afp/WOOT_Strong_Eventual_Consistency/CreateConsistent.thy", - "formal/lean/mathlib/algebra/module/opposites.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p247.lean", - "formal/lean/mathlib/category_theory/monoidal/internal/Module.lean", - "formal/afp/Hidden_Markov_Models/Hidden_Markov_Model.thy", - "formal/afp/Relation_Algebra/document/root.tex", - "formal/afp/Iptables_Semantics/Primitive_Matchers/Common_Primitive_Syntax.thy", - "formal/afp/Iptables_Semantics/Semantics_Ternary/Primitive_Normalization.thy", - "formal/hol/miz3/Samples/talk.ml", - "formal/mizar/extreal1.miz", - "formal/mizar/aofa_i00.miz", - "formal/mizar/polyeq_1.miz", - "formal/afp/SuperCalc/multisets_continued.thy", - "formal/mizar/matrix_7.miz", - "formal/lean/mathlib/measure_theory/integral/average.lean", - "formal/afp/Key_Agreement_Strong_Adversaries/Implem_asymmetric.thy", - "formal/afp/Ordinary_Differential_Equations/Library/MVT_Ex.thy", - "formal/hol/Help/current_goalstack.doc", - "formal/afp/Monomorphic_Monad/Monad_Overloading.thy", - "formal/hol/Help/overload_interface.doc", - "formal/afp/Rewriting_Z/document/root.tex", - "formal/afp/CISC-Kernel/trace/Rushby-with-Control/List_Theorems.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p149.lean", - "formal/afp/Gromov_Hyperbolicity/Bonk_Schramm_Extension.thy", - "formal/afp/Iptables_Semantics/Examples/Small_Examples.thy", - "formal/afp/CakeML/generated/CakeML/PrimTypes.thy", - "formal/afp/AODV/variants/d_fwdrreqs/D_Global_Invariants.thy", - "formal/afp/Polynomials/Quasi_PM_Power_Products.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p181.lean", - "formal/lean/mathlib/topology/sheaves/sheaf_of_functions.lean", - "formal/afp/Iptables_Semantics/Primitive_Matchers/Routing_IpAssmt.thy", - "formal/afp/JinjaThreads/Common/BinOp.thy", - "formal/lean/liquid/for_mathlib/AddCommGroup/tensor.lean", - "formal/afp/GraphMarkingIBP/document/root.tex", - "formal/afp/Linear_Inequalities/Convex_Hull.thy", - "formal/afp/InformationFlowSlicing/document/root.tex", - 
"formal/afp/Probabilistic_Timed_Automata/library/Sequence_LTL.thy", - "formal/afp/List_Interleaving/document/root.tex", - "formal/afp/Transitive_Models/Least.thy", - "formal/afp/Bounded_Deducibility_Security/Bounded_Deducibility_Security.thy", - "formal/afp/Deep_Learning/DL_Deep_Model.thy", - "formal/lean/mathlib/category_theory/limits/constructions/weakly_initial.lean", - "formal/afp/Ordinary_Differential_Equations/IVP/Upper_Lower_Solution.thy", - "formal/afp/Refine_Imperative_HOL/Examples/Sepref_All_Examples.thy", - "formal/afp/Transitive_Models/Lambda_Replacement.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1966/p5.lean", - "formal/afp/Bicategory/Strictness.thy", - "formal/mizar/sin_cos6.miz", - "formal/hol/Help/SIMPLE_DISJ_CASES.doc", - "formal/lean/mathlib/data/set/pointwise.lean", - "formal/lean/mathlib/ring_theory/localization/submodule.lean", - "formal/mizar/borsuk_4.miz", - "formal/afp/Special_Function_Bounds/Exp_Bounds.thy", - "formal/mizar/lopban_2.miz", - "formal/afp/Stirling_Formula/Gamma_Asymptotics.thy", - "formal/lean/mathlib/category_theory/bicategory/strict.lean", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/numbertheory/4x3m7y3neq2003.lean", - "formal/afp/Word_Lib/Bitwise_Signed.thy", - "formal/afp/PAC_Checker/PAC_Checker.thy", - "formal/lean/mathlib/category_theory/functor/fully_faithful.lean", - "formal/afp/Correctness_Algebras/N_Semirings_Modal.thy", - "formal/hol/Help/CONJUNCT2.doc", - "formal/lean/mathlib/algebra/parity.lean", - "formal/afp/FocusStreamsCaseStudies/Gateway.thy", - "formal/afp/ConcurrentGC/Global_Invariants.thy", - "formal/afp/Density_Compiler/PDF_Compiler.thy", - "formal/hol/Help/print_qterm.doc", - "formal/afp/CoSMeDis/Post_Confidentiality/DYNAMIC_Post_ISSUER.thy", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_NonInterference.thy", - "formal/afp/Collections/ICF/CollectionsV1.thy", - "formal/hol/Help/REAL_RAT_LE_CONV.doc", - "formal/afp/Optics/Lens_Symmetric.thy", - "formal/lean/mathlib/order/modular_lattice.lean", - "formal/afp/Separation_Algebra/Separation_Algebra_Alt.thy", - "formal/afp/IP_Addresses/NumberWang_IPv6.thy", - "formal/afp/Root_Balanced_Tree/document/root.tex", - "formal/afp/Jordan_Normal_Form/Matrix.thy", - "formal/afp/Probabilistic_Prime_Tests/Fermat_Test.thy", - "formal/afp/Slicing/StaticIntra/ControlDependenceRelations.thy", - "formal/afp/Inductive_Inference/Lemma_R.thy", - "formal/lean/liquid/real_measures/default.lean", - "formal/afp/Berlekamp_Zassenhaus/Square_Free_Factorization_Int.thy", - "formal/lean/mathlib/data/finsupp/interval.lean", - "formal/afp/SIFPL/ContextOBJ.thy", - "formal/mizar/jgraph_3.miz", - "formal/afp/CoreC++/Execute.thy", - "formal/afp/AODV/variants/b_fwdrreps/B_Quality_Increases.thy", - "formal/lean/mathlib/data/finset/basic.lean", - "formal/mizar/dblseq_2.miz", - "formal/afp/Pi_Calculus/Strong_Early_Bisim_Subst_Pres.thy", - "formal/afp/DiskPaxos/DiskPaxos_Inv1.thy", - "formal/mizar/cqc_lang.miz", - "formal/hol/Help/mk_var.doc", - "formal/afp/Strong_Security/Strong_Security.thy", - "formal/mizar/ballot_1.miz", - "formal/afp/Flyspeck-Tame/ListSum.thy", - "formal/afp/Cauchy/document/root.tex", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/induction/divisibility_9div10tonm1.lean", - "formal/afp/Poincare_Disc/Poincare.thy", - "formal/afp/TESL_Language/Hygge_Theory.thy", - "formal/afp/CakeML/Big_Step_Total.thy", - "formal/afp/Group-Ring-Module/Algebra6.thy", - "formal/hol/Help/rev_assocd.doc", - "formal/afp/Irrationals_From_THEBOOK/Irrationals_From_THEBOOK.thy", - 
"formal/afp/Separation_Logic_Imperative_HOL/Tools/Syntax_Match.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/manipexpr_apbeq2cceqiacpbceqm2.lean", - "formal/lean/mathlib/algebra/lie/universal_enveloping.lean", - "formal/afp/Berlekamp_Zassenhaus/Unique_Factorization_Poly.thy", - "formal/afp/DPRM_Theorem/Machine_Equations/State_0_Equation.thy", - "formal/afp/Extended_Finite_State_Machines/EFSM_LTL.thy", - "formal/afp/Transitive_Models/Arities.thy", - "formal/lean/mathlib/data/finset/locally_finite.lean", - "formal/afp/DPRM_Theorem/Diophantine/Binomial_Coefficient.thy", - "formal/mizar/fib_num4.miz", - "formal/lean/sphere-eversion/to_mathlib/geometry/manifold/misc.lean", - "formal/mizar/fib_fusc.miz", - "formal/afp/Security_Protocol_Refinement/Auth_simple/m2_confid_chan.thy", - "formal/lean/mathlib/topology/locally_constant/algebra.lean", - "formal/mizar/rusub_3.miz", - "formal/lean/mathlib/linear_algebra/matrix/charpoly/basic.lean", - "formal/afp/Multi_Party_Computation/ETP.thy", - "formal/mizar/lopban_4.miz", - "formal/afp/Locally-Nameless-Sigma/Sigma/ParRed.thy", - "formal/afp/AODV/variants/e_all_abcd/E_Quality_Increases.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p346.lean", - "formal/hol/100/cubic.ml", - "formal/afp/Shivers-CFA/Eval.thy", - "formal/afp/Abstract-Rewriting/SN_Orders.thy", - "formal/afp/Knot_Theory/Linkrel_Kauffman.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p131.lean", - "formal/lean/mathlib/topology/vector_bundle/pullback.lean", - "formal/lean/liquid/Lbar/ext_aux1.lean", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1968/p5.lean", - "formal/hol/Help/type_abbrevs.doc", - "formal/afp/Gauss_Jordan/Rref.thy", - "formal/lean/mathlib/order/imp.lean", - "formal/afp/CoCon/Reviewer_Assignment_Confidentiality/Reviewer_Assignment_NCPC.thy", - "formal/afp/Knot_Theory/Tangles.thy", - "formal/afp/Weighted_Path_Order/Status.thy", - "formal/afp/Coinductive/Examples/LList_CCPO_Topology.thy", - "formal/afp/Koenigsberg_Friendship/MoreGraph.thy", - "formal/lean/mathlib/number_theory/frobenius_number.lean", - "formal/hol/Permutation/permutation.ml", - "formal/mizar/peterson.miz", - "formal/afp/Isabelle_C/C11-FrontEnd/src/C_Parser_Language.thy", - "formal/afp/Factor_Algebraic_Polynomial/MPoly_Divide_Code.thy", - "formal/afp/Order_Lattice_Props/Order_Lattice_Props_Loc.thy", - "formal/hol/Permutation/nummax.ml", - "formal/afp/Statecharts/HA.thy", - "formal/afp/Finite-Map-Extras/document/root.tex", - "formal/lean/mathlib/topology/continuous_function/stone_weierstrass.lean", - "formal/lean/mathlib/data/polynomial/degree/trailing_degree.lean", - "formal/afp/Progress_Tracking/Exchange.thy", - "formal/afp/Lambert_W/Lambert_W.thy", - "formal/afp/Jinja/DFA/Typing_Framework_2.thy", - "formal/lean/mathlib/data/list/cycle.lean", - "formal/hol/100/ceva.ml", - "formal/afp/Dependent_SIFUM_Type_Systems/Examples/Example.thy", - "formal/afp/Integration/Failure.thy", - "formal/afp/Shivers-CFA/CPSUtils.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1987/p6.lean", - "formal/mizar/matrix_0.miz", - "formal/afp/JinjaThreads/Compiler/JVMJ1.thy", - "formal/afp/Ordinary_Differential_Equations/Refinement/Autoref_Misc.thy", - "formal/coq/math-comp/ssreflect/ssrfun.v", - "formal/lean/mathlib/category_theory/monoidal/rigid/basic.lean", - "formal/hol/100/fta.ml", - "formal/afp/Sqrt_Babylonian/Sqrt_Babylonian_Auxiliary.thy", - "formal/lean/mathlib/analysis/box_integral/partition/basic.lean", - "formal/afp/ConcurrentGC/Initial_Conditions.thy", - 
"formal/afp/Virtual_Substitution/ExecutiblePolyProps.thy", - "formal/afp/Verified_SAT_Based_AI_Planning/CNF_Semantics_Supplement.thy", - "formal/mizar/int_3.miz", - "formal/afp/LinearQuantifierElim/document/root.tex", - "formal/afp/Native_Word/Code_Symbolic_Bits_Int.thy", - "formal/lean/mathlib/probability/integration.lean", - "formal/afp/Algebraic_VCs/AVC_KAT/VC_KAT_Examples.thy", - "formal/mizar/waybel24.miz", - "formal/hol/Help/fail.doc", - "formal/afp/Ribbon_Proofs/Ribbons_Basic.thy", - "formal/afp/Design_Theory/Resolvable_Designs.thy", - "formal/afp/Van_Emde_Boas_Trees/Separation_Logic_Imperative_HOL/Tools/Imperative_HOL_Add.thy", - "formal/afp/Parity_Game/document/root.tex", - "formal/lean/mathlib/algebra/module/projective.lean", - "formal/lean/mathlib/algebra/star/prod.lean", - "formal/afp/Real_Impl/Real_Impl.thy", - "formal/afp/Statecharts/Expr.thy", - "formal/afp/Strong_Security/Expr.thy", - "formal/afp/Simpl/DPC0Library.thy", - "formal/lean/mathlib/data/nat/modeq.lean", - "formal/hol/Examples/pell.ml", - "formal/mizar/closure1.miz", - "formal/lean/mathlib/measure_theory/constructions/borel_space.lean", - "formal/lean/mathlib/measure_theory/function/lp_space.lean", - "formal/afp/Algebraic_Numbers/Algebraic_Number_Tests.thy", - "formal/hol/Help/NOT_INTRO.doc", - "formal/afp/Clean/src/Test_Clean.thy", - "formal/mizar/heine.miz", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2021/b/p1.lean", - "formal/afp/Safe_OCL/Transitive_Closure_Ext.thy", - "formal/afp/Types_To_Sets_Extension/document/root.tex", - "formal/afp/ConcurrentGC/concrete/Concrete.thy", - "formal/lean/mathlib/order/concept.lean", - "formal/lean/mathlib/combinatorics/derangements/finite.lean", - "formal/afp/HRB-Slicing/StaticInter/AuxLemmas.thy", - "formal/afp/Arith_Prog_Rel_Primes/document/root.tex", - "formal/afp/LocalLexing/LLEarleyParsing.thy", - "formal/afp/PAL/document/root.tex", - "formal/hol/Help/new_type_abbrev.doc", - "formal/afp/Native_Word/Native_Word_Test_MLton2.thy", - "formal/lean/mathlib/category_theory/category/Kleisli.lean", - "formal/hol/GL/completeness.ml", - "formal/afp/Randomised_BSTs/document/root.tex", - "formal/afp/CakeML/Matching.thy", - "formal/afp/Polynomials/MPoly_Type_Class_Ordered.thy", - "formal/lean/mathlib/topology/order/priestley.lean", - "formal/afp/Stateful_Protocol_Composition_and_Typing/document/root.tex", - "formal/lean/mathlib/number_theory/liouville/measure.lean", - "formal/afp/IMP2/basic/Syntax.thy", - "formal/hol/Logic/support.ml", - "formal/afp/Smith_Normal_Form/SNF_Algorithm_Two_Steps_JNF.thy", - "formal/lean/mathlib/order/category/PartialOrder.lean", - "formal/lean/lftcm/exercises_sources/thursday/category_theory/exercise2.lean", - "formal/afp/Hoare_Time/Nielson_VCGi_complete.thy", - "formal/afp/Completeness/Soundness.thy", - "formal/afp/Formal_SSA/SSA_CFG.thy", - "formal/lean/mathlib/data/nat/parity.lean", - "formal/coq/odd-order/PFsection9.v", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1981/p6.lean", - "formal/afp/LOFT/LinuxRouter_OpenFlow_Translation.thy", - "formal/afp/CoSMed/Friend_Request_Confidentiality/Friend_Request_Value_Setup.thy", - "formal/lean/mathlib/linear_algebra/projection.lean", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1973/p3.lean", - "formal/afp/Ramsey-Infinite/document/root.tex", - "formal/mizar/connsp_2.miz", - "formal/afp/Binding_Syntax_Theory/Well_Sorted_Terms.thy", - "formal/hol/Help/RAND_CONV.doc", - "formal/mizar/procal_1.miz", - "formal/afp/Ordinary_Differential_Equations/Numerics/Example_Utilities.thy", - 
"formal/lean/mathlib/combinatorics/set_family/lym.lean", - "formal/afp/Iptables_Semantics/Examples/sns.ias.edu/SNS_IAS_Eduroam_Spoofing.thy", - "formal/afp/Containers/Lexicographic_Order.thy", - "formal/lean/mathlib/algebra/monoid_algebra/grading.lean", - "formal/lean/mathlib/data/lazy_list.lean", - "formal/lean/lftcm/exercises_sources/thursday/category_theory/exercise9.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2021/b/p4.lean", - "formal/lean/mathlib/algebra/module/submodule/lattice.lean", - "formal/hol/Help/DISJ_CASES.doc", - "formal/afp/Rewrite_Properties_Reduction/Rewriting/Rewriting_GTRS.thy", - "formal/afp/First_Order_Terms/Fun_More.thy", - "formal/afp/Quasi_Borel_Spaces/Measure_QuasiBorel_Adjunction.thy", - "formal/mizar/realset3.miz", - "formal/afp/Bounded_Deducibility_Security/BD_Security.thy", - "formal/afp/ZFC_in_HOL/Cantor_NF.thy", - "formal/afp/Correctness_Algebras/Relative_Domain.thy", - "formal/afp/Forcing/Renaming.thy", - "formal/lean/liquid/normed_group/normed_with_aut.lean", - "formal/lean/mathlib/topology/uniform_space/equiv.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2009/a/p2.lean", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1990/p3.lean", - "formal/lean/mathlib/algebra/category/Module/limits.lean", - "formal/afp/Automated_Stateful_Protocol_Verification/examples/PKCS/PKCS_Model09.thy", - "formal/lean/mathlib/data/multiset/dedup.lean", - "formal/lean/sphere-eversion/loops/delta_mollifier.lean", - "formal/lean/mathlib/ring_theory/free_ring.lean", - "formal/afp/Dirichlet_L/Dirichlet_Theorem.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p129.lean", - "formal/afp/Interpreter_Optimizations/document/root.tex", - "formal/hol/100/chords.ml", - "formal/afp/Locally-Nameless-Sigma/preliminary/Environments.thy", - "formal/afp/JinjaThreads/J/JWellForm.thy", - "formal/afp/Collections/ICF/DatRef.thy", - "formal/afp/Security_Protocol_Refinement/Key_establish/m2_ds.thy", - "formal/afp/GPU_Kernel_PL/KPL_execution_thread.thy", - "formal/lean/mathlib/data/multiset/basic.lean", - "formal/mizar/conlat_2.miz", - "formal/hol/100/leibniz.ml", - "formal/afp/Partial_Order_Reduction/Ample_Analysis.thy", - "formal/afp/C2KA_DistributedSystems/Topology_C2KA.thy", - "formal/afp/Modal_Logics_for_NTS/Weak_Equivalence_Implies_Bisimilarity.thy", - "formal/mizar/analoaf.miz", - "formal/lean/mathlib/topology/metric_space/closeds.lean", - "formal/afp/Virtual_Substitution/DNFUni.thy", - "formal/afp/Lam-ml-Normalization/Lam_ml.thy", - "formal/lean/liquid/laurent_measures/ext.lean", - "formal/lean/liquid/for_mathlib/salamander.lean", - "formal/hol/Help/index.doc", - "formal/afp/Optics/Optics.thy", - "formal/afp/Trie/Trie.thy", - "formal/afp/Refine_Imperative_HOL/IICF/Impl/Heaps/IICF_Impl_Heapmap.thy", - "formal/afp/Coinductive/Quotient_TLList.thy", - "formal/afp/TESL_Language/TESL.thy", - "formal/afp/Heard_Of/lastvoting/LastVotingProof.thy", - "formal/afp/Perron_Frobenius/document/root.tex", - "formal/afp/Graph_Saturation/document/root.tex", - "formal/afp/Selection_Heap_Sort/Sort.thy", - "formal/coq/odd-order/BGsection2.v", - "formal/lean/mathlib/category_theory/subobject/basic.lean", - "formal/afp/LocalLexing/TheoremD13.thy", - "formal/afp/Jordan_Normal_Form/document/root.tex", - "formal/lean/mathlib/data/finite/default.lean", - "formal/afp/Sophomores_Dream/Sophomores_Dream.thy", - "formal/mizar/msalimit.miz", - "formal/hol/Help/type_match.doc", - "formal/lean/mathlib/algebra/group_with_zero/default.lean", - "formal/afp/Inductive_Inference/document/root.tex", - 
"formal/lean/mathlib/algebra/direct_limit.lean", - "formal/afp/LTL_to_GBA/document/intro.tex", - "formal/afp/Special_Function_Bounds/document/root.tex", - "formal/hol/Help/list_mk_forall.doc", - "formal/afp/Prefix_Free_Code_Combinators/Examples.thy", - "formal/hol/Complex/complex_transc.ml", - "formal/lean/mathlib/order/basic.lean", - "formal/afp/Prime_Number_Theorem/Prime_Number_Theorem_Library.thy", - "formal/afp/Isabelle_Meta_Model/meta_isabelle/Parser_Pure.thy", - "formal/afp/Interpreter_Optimizations/Global.thy", - "formal/lean/mathlib/algebra/homology/complex_shape.lean", - "formal/afp/Approximation_Algorithms/Approx_LB_Hoare.thy", - "formal/afp/Call_Arity/CardinalityAnalysisSpec.thy", - "formal/afp/Ordinary_Differential_Equations/Numerics/Abstract_Rigorous_Numerics.thy", - "formal/mizar/rewrite3.miz", - "formal/afp/Flow_Networks/Ford_Fulkerson.thy", - "formal/afp/WOOT_Strong_Eventual_Consistency/IntegrateInsertCommute.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p451.lean", - "formal/afp/Taylor_Models/document/root.tex", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/induction/11div10tonmn1ton.lean", - "formal/afp/Collections/ICF/tools/Ord_Code_Preproc.thy", - "formal/hol/Help/FIRST_X_ASSUM.doc", - "formal/lean/mathlib/ring_theory/perfection.lean", - "formal/afp/Modular_arithmetic_LLL_and_HNF_algorithms/Uniqueness_Hermite.thy", - "formal/lean/mathlib/category_theory/sites/cover_lifting.lean", - "formal/afp/AI_Planning_Languages_Semantics/SASP_Checker.thy", - "formal/afp/Transition_Systems_and_Automata/Automata/DFA/DFA.thy", - "formal/afp/Matrix/document/root.tex", - "formal/afp/Hoare_Time/SepLogK_VCG.thy", - "formal/hol/Help/BINOP_CONV.doc", - "formal/lean/mathlib/algebraic_topology/topological_simplex.lean", - "formal/afp/FFT/document/root.tex", - "formal/mizar/metric_3.miz", - "formal/afp/Treaps/document/root.tex", - "formal/lean/mathlib/group_theory/p_group.lean", - "formal/afp/Certification_Monads/Check_Monad.thy", - "formal/afp/C2KA_DistributedSystems/Stimuli.thy", - "formal/afp/Modal_Logics_for_NTS/Weak_Formula.thy", - "formal/afp/Extended_Finite_State_Machines/examples/Drinks_Machine.thy", - "formal/hol/Functionspaces/cfunspace.ml", - "formal/afp/Linear_Inequalities/Fundamental_Theorem_Linear_Inequalities.thy", - "formal/mizar/robbins3.miz", - "formal/hol/Help/loaded_files.doc", - "formal/lean/perfectoid/sheaves/f_map.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p711.lean", - "formal/afp/Aggregation_Algebras/Aggregation_Algebras.thy", - "formal/afp/Coinductive/document/root.tex", - "formal/mizar/afinsq_2.miz", - "formal/afp/Functional_Ordered_Resolution_Prover/IsaFoR_Term.thy", - "formal/lean/mathlib/topology/algebra/infinite_sum.lean", - "formal/afp/Noninterference_CSP/ClassicalNoninterference.thy", - "formal/afp/Inductive_Inference/Standard_Results.thy", - "formal/mizar/scmfsa_4.miz", - "formal/afp/Factored_Transition_System_Bounding/Acyclicity.thy", - "formal/afp/Core_SC_DOM/common/pointers/DocumentPointer.thy", - "formal/afp/Refine_Imperative_HOL/IICF/Intf/IICF_Matrix.thy", - "formal/lean/liquid/for_mathlib/yoneda.lean", - "formal/afp/Random_Graph_Subgraph_Threshold/Subgraph_Threshold.thy", - "formal/afp/CAVA_LTL_Modelchecker/SM/RefPoint/SM_State.thy", - "formal/coq/math-comp/ssreflect/ssrnotations.v", - "formal/lean/mathlib/analysis/locally_convex/polar.lean", - "formal/lean/mathlib/data/list/sublists.lean", - "formal/hol/Help/self_destruct.doc", - "formal/afp/LTL_to_DRA/Auxiliary/Preliminaries2.thy", - 
"formal/afp/CZH_Universal_Constructions/czh_ucategories/CZH_UCAT_Kan.thy", - "formal/afp/Types_To_Sets_Extension/Examples/SML_Relativization/Algebra/SML_Semirings.thy", - "formal/afp/POPLmark-deBruijn/Execute.thy", - "formal/afp/HyperCTL/Noninterference.thy", - "formal/lean/mathlib/category_theory/subobject/lattice.lean", - "formal/lean/mathlib/number_theory/liouville/residual.lean", - "formal/afp/Symmetric_Polynomials/Symmetric_Polynomials.thy", - "formal/afp/Functional-Automata/Functional_Automata.thy", - "formal/afp/MSO_Regex_Equivalence/PNormalization.thy", - "formal/mizar/isomichi.miz", - "formal/hol/100/pick.ml", - "formal/afp/Lambda_Free_KBOs/document/root.tex", - "formal/lean/lftcm/hints/category_theory/exercise4/hint1.lean", - "formal/afp/HotelKeyCards/NewCard.thy", - "formal/afp/Closest_Pair_Points/Closest_Pair_Alternative.thy", - "formal/hol/Help/NUM_CANCEL_CONV.doc", - "formal/afp/Key_Agreement_Strong_Adversaries/Implem.thy", - "formal/afp/HRB-Slicing/StaticInter/WeakSimulation.thy", - "formal/lean/mathlib/data/finsupp/order.lean", - "formal/afp/SATSolverVerification/SolveLoop.thy", - "formal/afp/CAVA_Automata/CAVA_Base/Lexord_List.thy", - "formal/afp/Isabelle_Marries_Dirac/document/root.tex", - "formal/mizar/hfdiff_1.miz", - "formal/lean/liquid/for_mathlib/derived/les3.lean", - "formal/lean/mathlib/logic/relation.lean", - "formal/afp/Jinja/JVM/JVMListExample.thy", - "formal/afp/Monad_Memo_DP/state_monad/DP_CRelVS_Ext.thy", - "formal/afp/Laws_of_Large_Numbers/document/root.tex", - "formal/lean/mathlib/topology/algebra/module/basic.lean", - "formal/lean/liquid/breen_deligne/category.lean", - "formal/afp/Ordinal/OrdinalDef.thy", - "formal/afp/TESL_Language/Introduction.thy", - "formal/afp/Circus/Refinement_Example.thy", - "formal/afp/Resolution_FOL/Examples.thy", - "formal/afp/Collections/Examples/Refine_Monadic/Refine_Monadic_Examples.thy", - "formal/afp/Smooth_Manifolds/Topological_Manifold.thy", - "formal/hol/Help/INT_LE_CONV.doc", - "formal/afp/CoreC++/WWellForm.thy", - "formal/lean/mathlib/linear_algebra/pi_tensor_product.lean", - "formal/lean/liquid/for_mathlib/homological_complex2.lean", - "formal/lean/mathlib/category_theory/idempotents/biproducts.lean", - "formal/afp/Polynomial_Factorization/Rational_Factorization.thy", - "formal/afp/Eval_FO/document/root.tex", - "formal/afp/Isabelle_Meta_Model/meta_isabelle/Printer_init.thy", - "formal/afp/Aggregation_Algebras/Matrix_Aggregation_Algebras.thy", - "formal/lean/mathlib/algebraic_geometry/Scheme.lean", - "formal/afp/Goedel_Incompleteness/Derivability_Conditions.thy", - "formal/afp/UPF_Firewall/Examples/PersonalFirewall/PersonalFirewallIpv4.thy", - "formal/mizar/taylor_2.miz", - "formal/afp/Lambda_Free_RPOs/Lambda_Free_RPO_Std.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1960/p1.lean", - "formal/afp/Slicing/Basic/CFG_wf.thy", - "formal/lean/mathlib/topology/metric_space/thickened_indicator.lean", - "formal/afp/Name_Carrying_Type_Inference/SimplyTyped.thy", - "formal/afp/Refine_Imperative_HOL/IICF/Intf/IICF_Multiset.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p756.lean", - "formal/afp/Word_Lib/Enumeration.thy", - "formal/lean/mathlib/measure_theory/function/conditional_expectation/real.lean", - "formal/afp/Jinja/JVM/JVMState.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p427.lean", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/numbertheory/exk2powkeqapb2mulbpa2_aeq1.lean", - "formal/afp/CZH_Universal_Constructions/document/root.tex", - "formal/afp/Formal_SSA/Generic_Extract.thy", - 
"formal/afp/Refine_Imperative_HOL/IICF/Impl/IICF_List_SetO.thy", - "formal/afp/Resolution_FOL/Tree.thy", - "formal/lean/mathlib/ring_theory/ideal/over.lean", - "formal/lean/mathlib/group_theory/specific_groups/alternating.lean", - "formal/afp/Linear_Recurrences/Solver/Show_RatFPS.thy", - "formal/lean/mathlib/number_theory/sum_four_squares.lean", - "formal/afp/Simpl/UserGuide.thy", - "formal/lean/mathlib/data/finsupp/default.lean", - "formal/afp/CoreC++/Progress.thy", - "formal/afp/Differential_Game_Logic/Coincidence.thy", - "formal/afp/CZH_Foundations/czh_digraphs/CZH_DG_Small_Digraph.thy", - "formal/afp/Refine_Monadic/Refine_More_Comb.thy", - "formal/mizar/vectsp12.miz", - "formal/mizar/topmetr2.miz", - "formal/afp/Hoare_Time/Quant_Examples.thy", - "formal/hol/Help/FAIL_TAC.doc", - "formal/lean/liquid/polyhedral_lattice/cosimplicial_extra.lean", - "formal/afp/Groebner_Bases/More_MPoly_Type_Class.thy", - "formal/afp/Heard_Of/otr/OneThirdRuleDefs.thy", - "formal/afp/Core_SC_DOM/common/pointers/ElementPointer.thy", - "formal/afp/Jordan_Hoelder/SubgroupsAndNormalSubgroups.thy", - "formal/hol/Help/DISJ_ACI_RULE.doc", - "formal/afp/Refine_Imperative_HOL/Examples/Sepref_Minitests.thy", - "formal/lean/mathlib/data/finite/card.lean", - "formal/afp/Decl_Sem_Fun_PL/document/root.tex", - "formal/lean/sphere-eversion/to_mathlib/topology/local_homeomorph.lean", - "formal/afp/Partial_Order_Reduction/Extensions/ENat_Extensions.thy", - "formal/afp/Grothendieck_Schemes/Topological_Space.thy", - "formal/afp/Collections/GenCF/Impl/GenCF_Impl_Chapter.thy", - "formal/afp/Iptables_Semantics/Matching.thy", - "formal/afp/QR_Decomposition/QR_Efficient.thy", - "formal/afp/Isabelle_Meta_Model/toy_example/embedding/meta_toy/Parser_Toy.thy", - "formal/lean/sphere-eversion/to_mathlib/misc.lean", - "formal/lean/mathlib/category_theory/limits/shapes/equalizers.lean", - "formal/lean/mathlib/category_theory/monad/adjunction.lean", - "formal/lean/mathlib/ring_theory/principal_ideal_domain.lean", - "formal/afp/UPF_Firewall/Examples/Transformation/Transformation.thy", - "formal/hol/Help/dest_type.doc", - "formal/afp/Category2/Yoneda.thy", - "formal/afp/VerifyThis2019/Challenge2B.thy", - "formal/afp/CRDT/document/root.tex", - "formal/hol/Help/new_axiom.doc", - "formal/hol/Help/dpty.doc", - "formal/lean/mathlib/number_theory/primorial.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2021/b/p18.lean", - "formal/lean/mathlib/geometry/manifold/whitney_embedding.lean", - "formal/afp/Conditional_Transfer_Rule/UD/UD_Reference.thy", - "formal/afp/Order_Lattice_Props/document/root.tex", - "formal/hol/Help/dest_const.doc", - "formal/afp/Perron_Frobenius/Bij_Nat.thy", - "formal/afp/Collections/ICF/gen_algo/MapGA.thy", - "formal/afp/Combinatorics_Words/Periodicity_Lemma.thy", - "formal/mizar/scm_comp.miz", - "formal/mizar/waybel23.miz", - "formal/afp/Verified_SAT_Based_AI_Planning/SAS_Plus_Representation.thy", - "formal/afp/Collections/GenCF/Impl/Impl_Bit_Set.thy", - "formal/afp/Inductive_Confidentiality/GeneralAttacker/ConfidentialityGA.thy", - "formal/lean/mathlib/group_theory/group_action/basic.lean", - "formal/afp/Featherweight_OCL/examples/Employee_Model/Analysis/Analysis_UML.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/numbertheory/prmdvsneqnsqmodpeq0.lean", - "formal/afp/CoSMed/Traceback_Properties/Traceback_Intro.thy", - "formal/afp/Gauss_Jordan/Code_Generation_IArrays_SML.thy", - "formal/lean/mathlib/data/vector/zip.lean", - "formal/afp/Ordinal/OrdinalInverse.thy", - "formal/afp/Consensus_Refined/Two_Steps.thy", - 
"formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_ACLnotCommunicateWith.thy", - "formal/mizar/roughs_5.miz", - "formal/afp/Core_SC_DOM/common/classes/BaseClass.thy", - "formal/hol/Help/is_binder.doc", - "formal/afp/Types_Tableaus_and_Goedels_God/GoedelProof_P2.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p234.lean", - "formal/lean/lftcm/exercises_sources/tuesday/numbers.lean", - "formal/mizar/zfmisc_1.miz", - "formal/afp/Key_Agreement_Strong_Adversaries/dhlvl3_asymmetric.thy", - "formal/afp/Zeta_3_Irrational/Zeta_3_Irrational.thy", - "formal/hol/EC/xzprojective.ml", - "formal/afp/JinjaDCI/J/DefAss.thy", - "formal/afp/Goedel_HFSet_Semanticless/Functions.thy", - "formal/mizar/bhsp_4.miz", - "formal/afp/Types_To_Sets_Extension/Examples/SML_Relativization/Algebra/SML_Semigroups.thy", - "formal/coq/math-comp/fingroup/fingroup.v", - "formal/afp/Universal_Hash_Families/Carter_Wegman_Hash_Family.thy", - "formal/lean/lftcm/solutions/thursday/category_theory/exercise1.lean", - "formal/afp/CakeML_Codegen/CupCakeML/CupCake_Env.thy", - "formal/afp/JinjaDCI/Common/Exceptions.thy", - "formal/mizar/waybel_4.miz", - "formal/afp/HOLCF-Prelude/Numeral_Cpo.thy", - "formal/afp/Jinja/Compiler/PCompiler.thy", - "formal/afp/AODV/variants/b_fwdrreps/B_Aodv_Predicates.thy", - "formal/afp/Safe_OCL/OCL_Types.thy", - "formal/mizar/setlim_1.miz", - "formal/afp/CoreC++/BigStep.thy", - "formal/hol/Help/verbose.doc", - "formal/hol/Help/omit.doc", - "formal/afp/Card_Number_Partitions/Card_Number_Partitions.thy", - "formal/hol/Help/print_goalstack.doc", - "formal/lean/mathlib/algebra/big_operators/finsupp.lean", - "formal/lean/mathlib/data/list/indexes.lean", - "formal/afp/Prpu_Maxflow/Relabel_To_Front_Impl.thy", - "formal/afp/Buchi_Complementation/document/root.tex", - "formal/afp/Complete_Non_Orders/Binary_Relations.thy", - "formal/afp/Key_Agreement_Strong_Adversaries/Implem_symmetric.thy", - "formal/mizar/flang_3.miz", - "formal/mizar/aimloop.miz", - "formal/hol/Examples/holby.ml", - "formal/mizar/fraenkel.miz", - "formal/mizar/measure8.miz", - "formal/lean/mathlib/ring_theory/subring/basic.lean", - "formal/afp/Inductive_Inference/R1_BC.thy", - "formal/coq/math-comp/fingroup/morphism.v", - "formal/afp/Launchbury/C-Meet.thy", - "formal/afp/Polynomials/PP_Type.thy", - "formal/mizar/substlat.miz", - "formal/hol/Help/MAP_FIRST.doc", - "formal/hol/EC/jacobian.ml", - "formal/afp/JinjaThreads/Framework/LTS.thy", - "formal/afp/Amicable_Numbers/document/root.tex", - "formal/afp/Extended_Finite_State_Machines/GExp.thy", - "formal/lean/mathlib/order/galois_connection.lean", - "formal/hol/Help/CHAR_EQ_CONV.doc", - "formal/afp/SIFPL/HuntSands.thy", - "formal/afp/Matroids/Matroid.thy", - "formal/afp/Ribbon_Proofs/document/root.tex", - "formal/afp/Priority_Queue_Braun/Sorting_Braun.thy", - "formal/lean/mathlib/algebra/lie/matrix.lean", - "formal/hol/Logic/make.ml", - "formal/afp/Psi_Calculi/Weak_Cong_Sim_Pres.thy", - "formal/lean/mathlib/group_theory/perm/subgroup.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2002/a/p21.lean", - "formal/afp/JinjaThreads/MM/JMM_JVM_Typesafe.thy", - "formal/afp/RefinementReactive/Temporal.thy", - "formal/afp/Store_Buffer_Reduction/document/root.tex", - "formal/lean/liquid/laurent_measures/prop72.lean", - "formal/lean/mathlib/category_theory/bicategory/functor_bicategory.lean", - "formal/lean/sphere-eversion/global/localisation_data.lean", - "formal/lean/mathlib/data/finset/order.lean", - 
"formal/lean/mathlib/number_theory/class_number/function_field.lean", - "formal/lean/mathlib/category_theory/thin.lean", - "formal/afp/Cubic_Quartic_Equations/Quartic_Polynomials.thy", - "formal/afp/Key_Agreement_Strong_Adversaries/AuthenticationN.thy", - "formal/lean/mathlib/group_theory/free_product.lean", - "formal/mizar/yellow15.miz", - "formal/afp/Dirichlet_L/document/root.tex", - "formal/lean/mathlib/order/category/Preorder.lean", - "formal/afp/SPARCv8/lib/WordDecl.thy", - "formal/lean/liquid/system_of_complexes/double.lean", - "formal/afp/JinjaDCI/Compiler/TypeComp.thy", - "formal/hol/Help/EXPAND_NSUM_CONV.doc", - "formal/mizar/aff_3.miz", - "formal/lean/mathlib/category_theory/adjunction/over.lean", - "formal/afp/Bicategory/ConcreteBicategory.thy", - "formal/afp/Architectural_Design_Patterns/RF_LTL.thy", - "formal/afp/MDP-Rewards/Bounded_Functions.thy", - "formal/afp/Slicing/Basic/Postdomination.thy", - "formal/hol/Help/can.doc", - "formal/afp/WebAssembly/Wasm_Soundness.thy", - "formal/lean/lftcm/exercises_sources/thursday/order.lean", - "formal/afp/Van_Emde_Boas_Trees/VEBT_DeleteBounds.thy", - "formal/afp/Dependent_SIFUM_Refinement/Examples/Eg1RefinementTrivial.thy", - "formal/afp/Propositional_Proof_Systems/document/root.tex", - "formal/coq/math-comp/algebra/ssralg.v", - "formal/lean/mathlib/category_theory/limits/shapes/equivalence.lean", - "formal/lean/liquid/locally_constant/Vhat.lean", - "formal/afp/Complex_Geometry/More_Set.thy", - "formal/hol/Multivariate/wlog.ml", - "formal/afp/Metalogic_ProofChecker/ProofTerm.thy", - "formal/afp/Refine_Imperative_HOL/IICF/Intf/IICF_Set.thy", - "formal/afp/Jacobson_Basic_Algebra/document/root.tex", - "formal/mizar/ndiff_8.miz", - "formal/afp/Stream-Fusion/StreamFusion.thy", - "formal/lean/mathlib/combinatorics/composition.lean", - "formal/mizar/waybel17.miz", - "formal/lean/mathlib/linear_algebra/affine_space/pointwise.lean", - "formal/lean/mathlib/data/finset/pairwise.lean", - "formal/afp/Core_DOM/common/pointers/DocumentPointer.thy", - "formal/afp/Complex_Geometry/Elementary_Complex_Geometry.thy", - "formal/afp/Linear_Programming/Matrix_LinPoly.thy", - "formal/lean/mathlib/ring_theory/localization/ideal.lean", - "formal/mizar/ideal_1.miz", - "formal/afp/JinjaThreads/Compiler/J1Heap.thy", - "formal/lean/mathlib/algebra/homology/additive.lean", - "formal/hol/Help/INST_TYPE.doc", - "formal/afp/Containers/ITP-2013/Benchmark_Set_LC.thy", - "formal/hol/Tutorial/HOL_as_a_functional_programming_language.ml", - "formal/mizar/cousin.miz", - "formal/lean/mathlib/order/monovary.lean", - "formal/afp/DPRM_Theorem/Register_Machine/MultipleStepState.thy", - "formal/afp/Safe_OCL/Object_Model.thy", - "formal/lean/mathlib/group_theory/submonoid/membership.lean", - "formal/coq/analysis/forms.v", - "formal/hol/Help/CONJ_TAC.doc", - "formal/afp/Jinja/J/Expr.thy", - "formal/mizar/borsuk_5.miz", - "formal/afp/BirdKMP/HOLCF_ROOT.thy", - "formal/afp/Real_Power/Log.thy", - "formal/afp/Automated_Stateful_Protocol_Verification/Term_Variants.thy", - "formal/lean/mathlib/geometry/euclidean/sphere.lean", - "formal/afp/HyperCTL/Finite_Noninterference.thy", - "formal/afp/Ribbon_Proofs/Ribbons_Interfaces.thy", - "formal/lean/mathlib/data/list/pairwise.lean", - "formal/afp/Stable_Matching/Choice_Functions.thy", - "formal/mizar/yellow21.miz", - "formal/lean/lftcm/solutions/thursday/category_theory/exercise9.lean", - "formal/afp/JinjaDCI/JinjaDCI.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p277.lean", - 
"formal/afp/Binding_Syntax_Theory/QuasiTerms_PickFresh_Alpha.thy", - "formal/hol/Help/TOP_DEPTH_CONV.doc", - "formal/afp/Jinja/J/Equivalence.thy", - "formal/afp/Dominance_CHK/Sorted_List_Operations2.thy", - "formal/lean/lftcm/solutions/thursday/groups_rings_fields.lean", - "formal/afp/Density_Compiler/PDF_Target_Density_Contexts.thy", - "formal/afp/CoCon/Decision_Confidentiality/Decision_All.thy", - "formal/mizar/limfunc3.miz", - "formal/afp/DPRM_Theorem/Register_Machine/MultipleToSingleSteps.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p110.lean", - "formal/lean/mathlib/algebra/module/equiv.lean", - "formal/afp/Topological_Semantics/sse_operation_negative.thy", - "formal/mizar/lmod_6.miz", - "formal/afp/CAVA_LTL_Modelchecker/SM/Lib/LTS.thy", - "formal/afp/Saturation_Framework_Extensions/Given_Clause_Architectures_Revisited.thy", - "formal/afp/CAVA_Automata/CAVA_Base/Statistics.thy", - "formal/afp/X86_Semantics/StateCleanUp.thy", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_Simple.thy", - "formal/afp/Stateful_Protocol_Composition_and_Typing/Strands_and_Constraints.thy", - "formal/lean/liquid/for_mathlib/concrete_equalizer.lean", - "formal/afp/Heard_Of/ute/UteDefs.thy", - "formal/lean/mathlib/dynamics/ergodic/conservative.lean", - "formal/afp/Automated_Stateful_Protocol_Verification/Transactions.thy", - "formal/lean/liquid/for_mathlib/short_complex_projections.lean", - "formal/afp/Registers/Pure_States.thy", - "formal/afp/Auto2_Imperative_HOL/Imperative/Rect_Intersect_Impl.thy", - "formal/mizar/hermitan.miz", - "formal/afp/JiveDataStoreModel/Isa_Counter_Store/Attributes.thy", - "formal/afp/Universal_Turing_Machine/Turing_Hoare.thy", - "formal/afp/Goedel_Incompleteness/Abstract_Jeroslow_Encoding.thy", - "formal/afp/Pi_Calculus/Weak_Early_Cong_Subst.thy", - "formal/hol/Help/NUM_ODD_CONV.doc", - "formal/afp/Nested_Multisets_Ordinals/document/root.tex", - "formal/hol/Rqe/list_rewrites.ml", - "formal/afp/Diophantine_Eqns_Lin_Hom/Minimize_Wrt.thy", - "formal/coq/math-comp/ssreflect/finset.v", - "formal/mizar/lopban_1.miz", - "formal/mizar/partfun4.miz", - "formal/afp/SequentInvertibility/SingleSuccedent.thy", - "formal/lean/mathlib/ring_theory/polynomial/symmetric.lean", - "formal/afp/HRB-Slicing/Proc/WellFormed.thy", - "formal/afp/Iptables_Semantics/Ruleset_Update.thy", - "formal/afp/Markov_Models/ex/Example_B.thy", - "formal/afp/MSO_Regex_Equivalence/Init_Normalization.thy", - "formal/afp/Heard_Of/ate/AteDefs.thy", - "formal/afp/LTL_to_DRA/LTL_Rabin.thy", - "formal/afp/LLL_Factorization/Factor_Bound_2.thy", - "formal/lean/mathlib/order/chain.lean", - "formal/afp/Extended_Finite_State_Machine_Inference/EFSM_Dot.thy", - "formal/afp/Collections/Lib/Dlist_add.thy", - "formal/coq/odd-order/BGsection15.v", - "formal/afp/SumSquares/document/root.tex", - "formal/mizar/zmodul07.miz", - "formal/lean/mathlib/algebra/order/group.lean", - "formal/afp/LocalLexing/TheoremD8.thy", - "formal/afp/Hybrid_Systems_VCs/ModalKleeneAlgebra/HS_VC_MKA_Examples_ndfun.thy", - "formal/lean/mathlib/data/rbmap/basic.lean", - "formal/lean/mathlib/data/polynomial/field_division.lean", - "formal/lean/mathlib/geometry/manifold/cont_mdiff.lean", - "formal/afp/Affine_Arithmetic/Counterclockwise_Vector.thy", - "formal/afp/JinjaThreads/JVM/JVMExecInstr.thy", - "formal/lean/mathlib/order/extension.lean", - "formal/mizar/kurato_0.miz", - "formal/afp/Separation_Logic_Imperative_HOL/Examples/Imp_Map_Spec.thy", - "formal/afp/FO_Theory_Rewriting/FOR_Semantics.thy", - "formal/mizar/prgcor_2.miz", - 
"formal/afp/Free-Groups/Generators.thy", - "formal/lean/mathlib/linear_algebra/affine_space/combination.lean", - "formal/afp/Flyspeck-Tame/ArchCompAux.thy", - "formal/lean/mathlib/category_theory/elements.lean", - "formal/afp/Constructive_Cryptography/Examples/Secure_Channel/Message_Authentication_Code.thy", - "formal/lean/mathlib/information_theory/hamming.lean", - "formal/hol/miz3/Samples/bug3.ml", - "formal/lean/mathlib/combinatorics/simple_graph/subgraph.lean", - "formal/afp/Category2/Category.thy", - "formal/afp/Possibilistic_Noninterference/Concrete.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1960/p2.lean", - "formal/lean/mathlib/ring_theory/jacobson.lean", - "formal/lean/mathlib/topology/metric_space/metrizable.lean", - "formal/afp/Interpolation_Polynomials_HOL_Algebra/Interpolation_Polynomial_Cardinalities.thy", - "formal/mizar/modelc_2.miz", - "formal/afp/VYDRA_MDL/Monitor_Code.thy", - "formal/lean/mathlib/category_theory/isomorphism_classes.lean", - "formal/lean/liquid/for_mathlib/short_complex_homological_complex.lean", - "formal/lean/mathlib/ring_theory/matrix_algebra.lean", - "formal/afp/Sort_Encodings/T.thy", - "formal/lean/mathlib/topology/category/Compactum.lean", - "formal/afp/Deep_Learning/Tensor_Plus.thy", - "formal/hol/Help/empty_ss.doc", - "formal/lean/mathlib/dynamics/flow.lean", - "formal/lean/mathzoo/mathzoo/olympiads/imo/2019/p4.lean", - "formal/mizar/toler_1.miz", - "formal/mizar/pencil_4.miz", - "formal/hol/Help/ss_of_prover.doc", - "formal/lean/mathlib/topology/instances/nat.lean", - "formal/mizar/tsp_2.miz", - "formal/afp/Hybrid_Multi_Lane_Spatial_Logic/regular/HMLSL_Regular.thy", - "formal/mizar/lopban13.miz", - "formal/afp/Clean/src/Symbex_MonadSE.thy", - "formal/afp/Independence_CH/FrecR_Arities.thy", - "formal/afp/Independence_CH/FrecR.thy", - "formal/hol/Logic/fol.ml", - "formal/mizar/instalg1.miz", - "formal/afp/Berlekamp_Zassenhaus/Degree_Bound.thy", - "formal/afp/Prim_Dijkstra_Simple/document/root.tex", - "formal/afp/CoSMed/Friend_Confidentiality/Friend.thy", - "formal/lean/mathlib/data/fin/tuple/monotone.lean", - "formal/lean/mathlib/topology/shrinking_lemma.lean", - "formal/afp/JinjaThreads/J/Expr.thy", - "formal/afp/FOL_Seq_Calc3/Result.thy", - "formal/afp/Jordan_Normal_Form/Missing_VectorSpace.thy", - "formal/afp/Markov_Models/ex/Example_A.thy", - "formal/afp/CCS/Weak_Cong_Pres.thy", - "formal/afp/Separation_Algebra/Sep_Heap_Instance.thy", - "formal/afp/Perron_Frobenius/Perron_Frobenius_Aux.thy", - "formal/lean/mathlib/data/mv_polynomial/supported.lean", - "formal/lean/mathlib/algebraic_geometry/AffineScheme.lean", - "formal/afp/Epistemic_Logic/document/root.tex", - "formal/afp/Isabelle_Meta_Model/isabelle_home/src/Pure/Isar/Isabelle_typedecl.thy", - "formal/hol/Help/end_itlist.doc", - "formal/afp/UTP/utp/utp_easy_parser.thy", - "formal/afp/Differential_Dynamic_Logic/Axioms.thy", - "formal/afp/Category3/CategoryWithFiniteLimits.thy", - "formal/afp/Kruskal/Graph_Definition.thy", - "formal/afp/DFS_Framework/Misc/DFS_Framework_Refine_Aux.thy", - "formal/afp/Shivers-CFA/document/root.tex", - "formal/afp/Comparison_Sort_Lower_Bound/Linorder_Relations.thy", - "formal/afp/Automated_Stateful_Protocol_Verification/Stateful_Protocol_Model.thy", - "formal/afp/CCS/Weak_Cong_Sim_Pres.thy", - "formal/mizar/group_12.miz", - "formal/afp/Deriving/Comparator_Generator/Comparator.thy", - "formal/lean/mathlib/measure_theory/covering/differentiation.lean", - "formal/afp/Prime_Distribution_Elementary/document/root.tex", - 
"formal/afp/CZH_Foundations/czh_digraphs/CZH_DG_Par.thy", - "formal/afp/IMP2/parser/Parser.thy", - "formal/lean/mathlib/data/nat/succ_pred.lean", - "formal/lean/mathlib/analysis/special_functions/trigonometric/arctan.lean", - "formal/afp/Native_Word/Native_Word_Test_SMLNJ.thy", - "formal/afp/KD_Tree/Nearest_Neighbors.thy", - "formal/afp/Pi_Calculus/Weak_Early_Cong_Subst_SC.thy", - "formal/lean/mathlib/category_theory/monoidal/Mon_.lean", - "formal/hol/Help/NUM_TO_INT_CONV.doc", - "formal/lean/mathlib/data/nat/sqrt.lean", - "formal/afp/CZH_Foundations/czh_digraphs/CZH_DG_Digraph.thy", - "formal/afp/Store_Buffer_Reduction/Text.thy", - "formal/afp/MDP-Algorithms/examples/Examples.thy", - "formal/afp/Transition_Systems_and_Automata/Automata/NBTA/NGBTA.thy", - "formal/afp/Berlekamp_Zassenhaus/Berlekamp_Type_Based.thy", - "formal/hol/Help/dest_vartype.doc", - "formal/mizar/cat_2.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p153.lean", - "formal/hol/Help/extend_basic_congs.doc", - "formal/afp/Collections/ICF/impl/SetStdImpl.thy", - "formal/lean/mathlib/probability/strong_law.lean", - "formal/mizar/rmod_3.miz", - "formal/afp/Consensus_Refined/Observing/Uv_Defs.thy", - "formal/hol/100/morley.ml", - "formal/afp/CAVA_LTL_Modelchecker/BoolProgs/Programs/BoolProgs_Philosophers.thy", - "formal/lean/mathlib/algebra/continued_fractions/computation/approximations.lean", - "formal/lean/liquid/for_mathlib/is_quasi_iso_sigma.lean", - "formal/lean/mathlib/algebra/homology/homology.lean", - "formal/afp/Tycon/Binary_Tree_Monad.thy", - "formal/afp/Hybrid_Systems_VCs/ModalKleeneAlgebra/HS_VC_MKA_rel.thy", - "formal/lean/mathlib/measure_theory/function/locally_integrable.lean", - "formal/afp/AODV/Aodv_Data.thy", - "formal/mizar/cat_6.miz", - "formal/lean/mathlib/ring_theory/graded_algebra/basic.lean", - "formal/mizar/dualsp01.miz", - "formal/afp/Security_Protocol_Refinement/Key_establish/m1_nssk.thy", - "formal/lean/mathlib/algebraic_topology/simplicial_object.lean", - "formal/afp/Prime_Distribution_Elementary/Selberg_Asymptotic_Formula.thy", - "formal/lean/liquid/rescale/polyhedral_lattice.lean", - "formal/lean/mathlib/algebra/lie/character.lean", - "formal/afp/JinjaThreads/J/DefAssPreservation.thy", - "formal/afp/LocalLexing/LocalLexingLemmas.thy", - "formal/mizar/topreal5.miz", - "formal/afp/CoCon/All_BD_Security_Instances_for_CoCon.thy", - "formal/afp/Twelvefold_Way/Card_Bijections_Direct.thy", - "formal/afp/Types_To_Sets_Extension/Examples/Vector_Spaces/VS_Groups.thy", - "formal/afp/LOFT/Featherweight_OpenFlow_Comparison.thy", - "formal/mizar/yellow_6.miz", - "formal/mizar/glib_011.miz", - "formal/afp/Simple_Firewall/Shadowed.thy", - "formal/afp/Factored_Transition_System_Bounding/Invariants.thy", - "formal/afp/Factored_Transition_System_Bounding/Dependency.thy", - "formal/mizar/mesfunc2.miz", - "formal/afp/Probabilistic_Timed_Automata/library/Finiteness.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p10.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2020/a/p22.lean", - "formal/lean/mathlib/geometry/manifold/mfderiv.lean", - "formal/afp/Propositional_Proof_Systems/SCND.thy", - "formal/afp/PCF/OpSem.thy", - "formal/afp/Noninterference_Generic_Unwinding/GenericUnwinding.thy", - "formal/hol/Jordan/make.ml", - "formal/afp/Clean/src/Hoare_MonadSE.thy", - "formal/afp/Probabilistic_Prime_Tests/Euler_Witness.thy", - "formal/afp/Girth_Chromatic/Girth_Chromatic.thy", - "formal/afp/Order_Lattice_Props/Order_Lattice_Props_Wenzel.thy", - "formal/afp/Statecharts/HAKripke.thy", - 
"formal/afp/Vickrey_Clarke_Groves/UniformTieBreaking.thy", - "formal/afp/Ordered_Resolution_Prover/Unordered_Ground_Resolution.thy", - "formal/afp/InfPathElimination/document/root.tex", - "formal/afp/Collections/ICF/impl/BinoPrioImpl.thy", - "formal/mizar/matrlin2.miz", - "formal/lean/mathlib/linear_algebra/matrix/to_linear_equiv.lean", - "formal/afp/Virtual_Substitution/Optimizations.thy", - "formal/afp/Combinatorics_Words/Reverse_Symmetry.thy", - "formal/afp/Launchbury/EvalHeap.thy", - "formal/afp/pGCL/Automation.thy", - "formal/afp/ROBDD/BDD_Examples.thy", - "formal/afp/CAVA_LTL_Modelchecker/CAVA_Impl.thy", - "formal/afp/Conditional_Simplification/CS_Reference.thy", - "formal/lean/mathlib/logic/encodable/lattice.lean", - "formal/hol/Help/TAC_PROOF.doc", - "formal/coq/math-comp/field/finfield.v", - "formal/afp/Groebner_Bases/Benchmarks.thy", - "formal/afp/SimplifiedOntologicalArgument/MFilter.thy", - "formal/lean/liquid/pseudo_normed_group/category/default.lean", - "formal/afp/CryptHOL/CryptHOL.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p237.lean", - "formal/lean/mathlib/algebra/bounds.lean", - "formal/afp/WHATandWHERE_Security/Type_System_example.thy", - "formal/afp/Adaptive_State_Counting/ASC/ASC_Sufficiency.thy", - "formal/afp/CoreC++/Exceptions.thy", - "formal/coq/math-comp/field/separable.v", - "formal/hol/Help/variant.doc", - "formal/afp/Residuated_Lattices/Residuated_Lattices.thy", - "formal/mizar/lagra4sq.miz", - "formal/lean/mathlib/analysis/convex/exposed.lean", - "formal/mizar/cardfil3.miz", - "formal/afp/Collections/document/root_userguide.tex", - "formal/afp/Transition_Systems_and_Automata/Automata/NBA/NBA_Graphs.thy", - "formal/mizar/lopban_5.miz", - "formal/afp/Call_Arity/CoCallAnalysisSig.thy", - "formal/afp/Cubic_Quartic_Equations/Ferraris_Formula.thy", - "formal/afp/Jordan_Normal_Form/Derivation_Bound.thy", - "formal/afp/LinearQuantifierElim/Thys/LinArith.thy", - "formal/afp/Separation_Logic_Imperative_HOL/Examples/Imp_Set_Spec.thy", - "formal/lean/mathlib/order/filter/small_sets.lean", - "formal/lean/mathlib/measure_theory/covering/besicovitch_vector_space.lean", - "formal/hol/Help/name.doc", - "formal/afp/Refine_Imperative_HOL/document/root.tex", - "formal/afp/Jordan_Normal_Form/DL_Missing_List.thy", - "formal/lean/liquid/system_of_complexes/rescale.lean", - "formal/afp/Incompleteness/Coding.thy", - "formal/afp/CryptoBasedCompositionalProperties/inout.thy", - "formal/mizar/c0sp1.miz", - "formal/afp/List_Inversions/document/root.tex", - "formal/hol/Help/find_term.doc", - "formal/afp/Security_Protocol_Refinement/document/isapreamble.tex", - "formal/afp/Poincare_Bendixson/Poincare_Bendixson.thy", - "formal/lean/mathlib/data/set/semiring.lean", - "formal/afp/Myhill-Nerode/Myhill_2.thy", - "formal/lean/mathlib/category_theory/abelian/ext.lean", - "formal/afp/LTL_Master_Theorem/LTL_to_DRA/DRA_Implementation.thy", - "formal/hol/Help/ideal_cofactors.doc", - "formal/afp/AODV/variants/d_fwdrreqs/D_Aodv_Data.thy", - "formal/afp/CakeML/generated/CakeML/NamespaceAuxiliary.thy", - "formal/hol/100/cayley_hamilton.ml", - "formal/afp/DiscretePricing/Disc_Cond_Expect.thy", - "formal/afp/Treaps/Random_Treap.thy", - "formal/afp/CoSMeDis/Friend_Confidentiality/Friend.thy", - "formal/afp/CISC-Kernel/trace/Rushby-with-Control/Option_Binders.thy", - "formal/afp/Incompleteness/Coding_Predicates.thy", - "formal/afp/Call_Arity/Env-Set-Cpo.thy", - "formal/afp/Concurrent_Ref_Alg/Infimum_Nat.thy", - "formal/lean/mathlib/algebra/char_p/basic.lean", - 
"formal/mizar/gate_5.miz", - "formal/afp/Transformer_Semantics/Isotone_Transformers.thy", - "formal/afp/CZH_Foundations/czh_digraphs/CZH_DG_Small_DGHM.thy", - "formal/lean/mathlib/data/dlist/instances.lean", - "formal/afp/CoreC++/document/root.tex", - "formal/afp/Universal_Hash_Families/Preliminary_Results.thy", - "formal/afp/IMP2/doc/IMP2_from_IMP.thy", - "formal/lean/mathlib/combinatorics/set_family/intersecting.lean", - "formal/mizar/bcialg_4.miz", - "formal/afp/FileRefinement/FileRefinement.thy", - "formal/afp/Monad_Memo_DP/heap_monad/DP_CRelVH_Ext.thy", - "formal/hol/Help/type_vars_in_term.doc", - "formal/afp/Word_Lib/Bit_Shifts_Infix_Syntax.thy", - "formal/lean/liquid/combinatorial_lemma/lem97.lean", - "formal/afp/KD_Tree/KD_Tree.thy", - "formal/lean/mathlib/category_theory/monoidal/of_has_finite_products.lean", - "formal/hol/Help/reduce_interface.doc", - "formal/afp/Functional-Automata/AutoRegExp.thy", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_Dependability_impl.thy", - "formal/afp/Clean/examples/IsPrime.thy", - "formal/hol/Help/check.doc", - "formal/hol/Help/rev_assoc.doc", - "formal/mizar/bciideal.miz", - "formal/lean/mathlib/algebra/big_operators/multiset.lean", - "formal/lean/mathlib/data/matrix/rank.lean", - "formal/mizar/fsm_3.miz", - "formal/afp/Hybrid_Systems_VCs/HS_ODEs.thy", - "formal/afp/Density_Compiler/PDF_Values.thy", - "formal/afp/List_Update/Phase_Partitioning.thy", - "formal/hol/Help/reserve_words.doc", - "formal/afp/SATSolverVerification/KrsticGoel.thy", - "formal/lean/liquid/for_mathlib/homological_complex_abelian.lean", - "formal/afp/PLM/TAO_6_Identifiable.thy", - "formal/mizar/real.miz", - "formal/afp/Collections/ICF/spec/ListSpec.thy", - "formal/lean/liquid/breen_deligne/suitable.lean", - "formal/afp/Smith_Normal_Form/Cauchy_Binet_HOL_Analysis.thy", - "formal/afp/Poincare_Bendixson/Limit_Set.thy", - "formal/afp/FocusStreamsCaseStudies/BitBoolTS.thy", - "formal/afp/Groebner_Bases/document/root.tex", - "formal/afp/Algebraic_Numbers/Complex_Algebraic_Numbers.thy", - "formal/afp/Word_Lib/More_Word.thy", - "formal/lean/mathlib/field_theory/tower.lean", - "formal/lean/mathlib/category_theory/limits/shapes/wide_equalizers.lean", - "formal/afp/UPF/NormalisationTestSpecification.thy", - "formal/hol/Rqe/rqe_main.ml", - "formal/afp/Power_Sum_Polynomials/Power_Sum_Polynomials_Library.thy", - "formal/afp/Saturation_Framework_Extensions/Clausal_Calculus.thy", - "formal/lean/mathlib/algebra/category/Algebra/basic.lean", - "formal/afp/Prime_Number_Theorem/Mertens_Theorems.thy", - "formal/afp/Padic_Ints/Function_Ring.thy", - "formal/lean/mathlib/analysis/convex/stone_separation.lean", - "formal/lean/mathlib/topology/algebra/nonarchimedean/basic.lean", - "formal/mizar/nfcont_2.miz", - "formal/afp/LinearQuantifierElim/Thys/QEdlo.thy", - "formal/mizar/catalan1.miz", - "formal/lean/liquid/for_mathlib/derived_functor.lean", - "formal/afp/Planarity_Certificates/l4v/lib/wp/WP.thy", - "formal/mizar/binom.miz", - "formal/afp/Auto2_Imperative_HOL/Imperative/Sep_Examples.thy", - "formal/afp/Algebraic_VCs/AVC_KAD/VC_KAD.thy", - "formal/afp/JinjaThreads/J/J_Main.thy", - "formal/afp/HRB-Slicing/JinjaVM_Inter/JVMPostdomination.thy", - "formal/afp/Graph_Theory/Shortest_Path.thy", - "formal/lean/mathlib/data/erased.lean", - "formal/afp/Bicategory/Subbicategory.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2021/a/p22.lean", - "formal/lean/mathlib/data/set/intervals/with_bot_top.lean", - "formal/hol/Help/list_mk_comb.doc", - 
"formal/coq/math-comp/solvable/finmodule.v", - "formal/afp/Algebraic_Numbers/Show_Real_Approx.thy", - "formal/lean/mathlib/algebra/gcd_monoid/multiset.lean", - "formal/hol/GL/misc.ml", - "formal/hol/Help/pp_print_num.doc", - "formal/afp/Prim_Dijkstra_Simple/Directed_Graph.thy", - "formal/afp/Monad_Memo_DP/document/root.tex", - "formal/afp/LTL_to_DRA/Impl/af_Impl.thy", - "formal/afp/First_Welfare_Theorem/Preferences.thy", - "formal/afp/Algebraic_VCs/RKAT_Models.thy", - "formal/afp/Statecharts/DataSpace.thy", - "formal/afp/DPRM_Theorem/Register_Machine/CommutationRelations.thy", - "formal/afp/Modal_Logics_for_NTS/Disjunction.thy", - "formal/lean/mathlib/topology/sheaves/presheaf_of_functions.lean", - "formal/hol/Help/SUBST1_TAC.doc", - "formal/afp/Sort_Encodings/Mono.thy", - "formal/hol/Help/is_select.doc", - "formal/mizar/goedcpuc.miz", - "formal/afp/Correctness_Algebras/Domain_Iterings.thy", - "formal/hol/Help/ORDERED_IMP_REWR_CONV.doc", - "formal/afp/BytecodeLogicJmlTypes/Reachability.thy", - "formal/afp/Core_SC_DOM/common/pointers/Ref.thy", - "formal/hol/100/ballot.ml", - "formal/afp/Gauss_Jordan/Matrix_To_IArray.thy", - "formal/afp/Network_Security_Policy_Verification/Examples/Impl_List_Playground_ChairNetwork.thy", - "formal/hol/Help/insert.doc", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/amgm_sqrtxymulxmyeqxpy_xpygeq4.lean", - "formal/lean/liquid/pseudo_normed_group/profinitely_filtered.lean", - "formal/coq/math-comp/algebra/polydiv.v", - "formal/afp/Collections/Examples/Autoref/Collection_Autoref_Examples.thy", - "formal/afp/UPF_Firewall/StatefulFW/FTP_WithPolicy.thy", - "formal/lean/mathlib/data/real/basic.lean", - "formal/lean/mathlib/logic/equiv/option.lean", - "formal/afp/Physical_Quantities/SI_Imperial.thy", - "formal/afp/HOL-CSP/Stop.thy", - "formal/afp/FOL_Seq_Calc3/Encoding.thy", - "formal/lean/mathlib/ring_theory/coprime/basic.lean", - "formal/afp/Core_DOM/standard/classes/ElementClass.thy", - "formal/lean/mathlib/topology/instances/matrix.lean", - "formal/afp/Transition_Systems_and_Automata/Transition_Systems/Transition_System.thy", - "formal/afp/Ordinary_Differential_Equations/Ex/Lorenz/Result_Elements.thy", - "formal/afp/QR_Decomposition/Projections.thy", - "formal/afp/Lowe_Ontological_Argument/LoweOntologicalArgument_5b.thy", - "formal/afp/Isabelle_Meta_Model/meta_isabelle/Printer_Isabelle.thy", - "formal/lean/mathlib/data/holor.lean", - "formal/lean/perfectoid/for_mathlib/group_with_zero.lean", - "formal/mizar/compos_2.miz", - "formal/lean/sphere-eversion/to_mathlib/topology/separation.lean", - "formal/afp/Order_Lattice_Props/Order_Lattice_Props.thy", - "formal/hol/Jordan/tactics_refine.ml", - "formal/mizar/msinst_1.miz", - "formal/lean/perfectoid/Frobenius.lean", - "formal/afp/Parity_Game/WinningRegion.thy", - "formal/hol/Rqe/signs.ml", - "formal/afp/Regular_Algebras/Regular_Algebras.thy", - "formal/afp/Tree-Automata/document/conclusion.tex", - "formal/afp/Refine_Imperative_HOL/Sepref_Basic.thy", - "formal/mizar/circuit1.miz", - "formal/afp/Kruskal/MinWeightBasis.thy", - "formal/afp/Dijkstra_Shortest_Path/Dijkstra.thy", - "formal/lean/mathlib/algebra/indicator_function.lean", - "formal/afp/Refine_Imperative_HOL/Examples/Sepref_DFS.thy", - "formal/afp/Functional-Automata/RegExp2NA.thy", - "formal/mizar/trees_3.miz", - "formal/lean/mathlib/linear_algebra/lagrange.lean", - "formal/lean/mathlib/set_theory/ordinal/natural_ops.lean", - "formal/lean/liquid/facts/default.lean", - "formal/afp/Depth-First-Search/document/root.tex", - 
"formal/lean/mathlib/topology/algebra/semigroup.lean", - "formal/afp/Verified-Prover/Prover.thy", - "formal/afp/VeriComp/Behaviour.thy", - "formal/mizar/rvsum_1.miz", - "formal/lean/mathlib/algebra/star/pi.lean", - "formal/lean/mathlib/topology/bornology/constructions.lean", - "formal/afp/UpDown_Scheme/Imperative.thy", - "formal/afp/Ordinary_Differential_Equations/document/root.tex", - "formal/afp/Gabow_SCC/Gabow_SCC_Code.thy", - "formal/coq/analysis/sequences.v", - "formal/hol/Minisat/sat_common_tools.ml", - "formal/afp/Vickrey_Clarke_Groves/SetUtils.thy", - "formal/afp/Perfect-Number-Thm/PerfectBasics.thy", - "formal/afp/VerifyThis2019/document/root.tex", - "formal/mizar/comput_1.miz", - "formal/afp/Propositional_Proof_Systems/Sema_Craig.thy", - "formal/lean/mathlib/order/filter/filter_product.lean", - "formal/afp/CoSMeDis/document/root.tex", - "formal/afp/Regular-Sets/pEquivalence_Checking.thy", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_BLPtrusted.thy", - "formal/mizar/tsep_1.miz", - "formal/hol/Rqe/dedmatrix.ml", - "formal/lean/mathlib/data/matrix/pequiv.lean", - "formal/afp/FOL_Seq_Calc1/document/root.tex", - "formal/afp/Featherweight_OCL/collection_types/UML_Pair.thy", - "formal/lean/mathlib/order/prime_ideal.lean", - "formal/hol/Help/isspace.doc", - "formal/afp/CakeML/generated/Lem_machine_word.thy", - "formal/afp/Sunflowers/document/root.tex", - "formal/afp/Well_Quasi_Orders/Almost_Full.thy", - "formal/afp/Pi_Calculus/Weak_Late_Bisim_SC.thy", - "formal/lean/mathlib/ring_theory/polynomial/eisenstein.lean", - "formal/afp/Group-Ring-Module/Algebra4.thy", - "formal/afp/CakeML_Codegen/Test/Test_Embed_Data2.thy", - "formal/afp/Registers/Finite_Tensor_Product_Matrices.thy", - "formal/mizar/lopban_7.miz", - "formal/lean/mathlib/order/filter/default.lean", - "formal/afp/Stateful_Protocol_Composition_and_Typing/examples/Example_Keyserver.thy", - "formal/afp/General-Triangle/GeneralTriangle.thy", - "formal/afp/Separation_Logic_Imperative_HOL/Sep_Examples.thy", - "formal/afp/Polynomial_Factorization/Missing_List.thy", - "formal/afp/Transitive_Models/Cardinal_AC_Relative.thy", - "formal/coq/odd-order/BGappendixC.v", - "formal/afp/LTL_Normal_Form/document/root.tex", - "formal/afp/Max-Card-Matching/document/root.tex", - "formal/lean/mathlib/group_theory/coset.lean", - "formal/lean/mathlib/algebra/category/Module/tannaka.lean", - "formal/lean/perfectoid/for_mathlib/nonarchimedean/is_subgroups_basis.lean", - "formal/afp/DFS_Framework/Misc/On_Stack.thy", - "formal/afp/Registers/Axioms_Complement_Quantum.thy", - "formal/afp/Types_To_Sets_Extension/ETTS/Manual/ETTS_Theory.thy", - "formal/mizar/lattice4.miz", - "formal/mizar/realset2.miz", - "formal/afp/Collections/ICF/gen_algo/SetByMap.thy", - "formal/afp/Polynomials/Utils.thy", - "formal/lean/mathlib/algebra/group/opposite.lean", - "formal/afp/Slicing/Basic/BasicDefs.thy", - "formal/lean/mathlib/category_theory/monad/equiv_mon.lean", - "formal/afp/LTL_to_DRA/Impl/Export_Code.thy", - "formal/hol/Help/REAL_RAT_GT_CONV.doc", - "formal/hol/Help/NUM_GE_CONV.doc", - "formal/hol/Multivariate/measure.ml", - "formal/afp/Propositional_Proof_Systems/HCSC.thy", - "formal/afp/Partial_Order_Reduction/Extensions/Coinductive_List_Extensions.thy", - "formal/lean/liquid/free_pfpng/acyclic.lean", - "formal/coq/math-comp/algebra/mxalgebra.v", - "formal/mizar/e_siec.miz", - "formal/afp/Category3/Category.thy", - "formal/hol/QBF/qbfr.ml", - "formal/mizar/cfunct_1.miz", - "formal/afp/LLL_Factorization/LLL_Factorization_Impl.thy", - 
"formal/hol/Help/el.doc", - "formal/afp/Metalogic_ProofChecker/document/root.tex", - "formal/afp/UTP/utp/utp_dynlog.thy", - "formal/afp/Affine_Arithmetic/Print.thy", - "formal/afp/CoSMeDis/Post_Confidentiality/Independent_Posts/Independent_Posts_Network.thy", - "formal/afp/LTL/document/root.tex", - "formal/lean/mathlib/combinatorics/set_family/compression/down.lean", - "formal/lean/mathlib/ring_theory/witt_vector/basic.lean", - "formal/hol/Library/permutations.ml", - "formal/afp/Ordinary_Differential_Equations/Numerics/Concrete_Reachability_Analysis_C1.thy", - "formal/lean/mathlib/data/int/modeq.lean", - "formal/afp/Jinja/Compiler/Compiler.thy", - "formal/lean/mathlib/data/nat/basic.lean", - "formal/afp/BNF_CC/Subtypes.thy", - "formal/afp/Virtual_Substitution/LuckyFind.thy", - "formal/afp/Simpl/Termination.thy", - "formal/afp/Incredible_Proof_Machine/Incredible_Propositional.thy", - "formal/afp/LTL/LTL.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/2006/p3.lean", - "formal/hol/Help/REPEAT_TCL.doc", - "formal/mizar/ringcat1.miz", - "formal/afp/JinjaThreads/Framework/FWInitFinLift.thy", - "formal/hol/IsabelleLight/isalight.ml", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2020/b/p21.lean", - "formal/lean/mathlib/algebra/category/Ring/limits.lean", - "formal/afp/Aggregation_Algebras/Linear_Aggregation_Algebras.thy", - "formal/mizar/cantor_1.miz", - "formal/lean/mathlib/algebra/category/Algebra/limits.lean", - "formal/lean/mathlib/category_theory/abelian/non_preadditive.lean", - "formal/hol/Help/meson_brand.doc", - "formal/afp/Transition_Systems_and_Automata/Automata/DBTA/DBTA_Combine.thy", - "formal/mizar/radix_5.miz", - "formal/afp/Deriving/document/root.tex", - "formal/afp/JinjaDCI/Common/Objects.thy", - "formal/afp/ROBDD/Pointer_Map_Impl.thy", - "formal/afp/BDD/document/root.tex", - "formal/mizar/latsum_1.miz", - "formal/mizar/graph_5.miz", - "formal/hol/Help/flush_goalstack.doc", - "formal/mizar/euclid11.miz", - "formal/lean/mathlib/category_theory/abelian/diagram_lemmas/four.lean", - "formal/afp/Berlekamp_Zassenhaus/Matrix_Record_Based.thy", - "formal/afp/CakeML_Codegen/Doc/Doc_Preproc.thy", - "formal/lean/liquid/polyhedral_lattice/cech.lean", - "formal/afp/Independence_CH/document/root.tex", - "formal/lean/mathlib/set_theory/ordinal/arithmetic.lean", - "formal/afp/IP_Addresses/WordInterval_Sorted.thy", - "formal/afp/Iptables_Semantics/Primitive_Matchers/Parser.thy", - "formal/hol/Help/subset.doc", - "formal/lean/liquid/for_mathlib/derived/Ext_lemmas.lean", - "formal/hol/Help/mk_forall.doc", - "formal/afp/MFOTL_Monitor/Table.thy", - "formal/hol/Help/dest_disj.doc", - "formal/afp/Nested_Multisets_Ordinals/Syntactic_Ordinal.thy", - "formal/lean/mathlib/field_theory/is_alg_closed/algebraic_closure.lean", - "formal/afp/Refine_Monadic/document/root.tex", - "formal/lean/mathlib/linear_algebra/multilinear/tensor_product.lean", - "formal/afp/JinjaThreads/Examples/ApprenticeChallenge.thy", - "formal/afp/BNF_CC/Composition.thy", - "formal/afp/MFMC_Countable/MFMC_Web.thy", - "formal/lean/mathlib/order/lattice_intervals.lean", - "formal/lean/mathlib/algebra/homology/Module.lean", - "formal/mizar/waybel20.miz", - "formal/mizar/sgraph1.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p432.lean", - "formal/mizar/prgcor_1.miz", - "formal/lean/liquid/for_mathlib/is_iso_neg.lean", - "formal/lean/mathlib/category_theory/functor/epi_mono.lean", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1990/p4.lean", - "formal/afp/Correctness_Algebras/Hoare.thy", - "formal/hol/Help/strip_abs.doc", - 
"formal/afp/Monad_Memo_DP/heap_monad/State_Heap.thy", - "formal/afp/Ordinary_Differential_Equations/Library/Multivariate_Taylor.thy", - "formal/afp/Security_Protocol_Refinement/Auth_simple/m1_auth.thy", - "formal/afp/Complex_Geometry/Matrices.thy", - "formal/mizar/integr16.miz", - "formal/afp/JinjaThreads/Common/Observable_Events.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2009/a/p7.lean", - "formal/afp/Card_Equiv_Relations/document/root.tex", - "formal/afp/Eval_FO/Mapping_Code.thy", - "formal/afp/Wetzels_Problem/document/root.tex", - "formal/afp/Smith_Normal_Form/Finite_Field_Mod_Type_Connection.thy", - "formal/mizar/nomin_7.miz", - "formal/afp/UPF_Firewall/PacketFilter/PolicyCombinators.thy", - "formal/afp/DFS_Framework/Invars/DFS_Invars_SCC.thy", - "formal/afp/Modular_arithmetic_LLL_and_HNF_algorithms/HNF_Mod_Det_Soundness.thy", - "formal/hol/miz3/Samples/robbins.ml", - "formal/mizar/jordan9.miz", - "formal/lean/mathlib/category_theory/connected_components.lean", - "formal/lean/lftcm/exercises_sources/wednesday/topological_spaces.lean", - "formal/afp/Launchbury/C.thy", - "formal/afp/Quasi_Borel_Spaces/Binary_CoProduct_QuasiBorel.thy", - "formal/afp/Ergodic_Theory/document/root.tex", - "formal/afp/CoreC++/Type.thy", - "formal/afp/List_Update/TS.thy", - "formal/lean/liquid/invpoly/functor.lean", - "formal/mizar/measure5.miz", - "formal/afp/Simpl/ex/Compose.thy", - "formal/hol/Help/inductive_type_store.doc", - "formal/afp/Echelon_Form/Echelon_Form_Det.thy", - "formal/lean/mathlib/data/pnat/factors.lean", - "formal/afp/Monad_Memo_DP/state_monad/Memory.thy", - "formal/afp/Kleene_Algebra/Dioid_Models.thy", - "formal/mizar/prob_3.miz", - "formal/afp/Abortable_Linearizable_Modules/document/introduction.tex", - "formal/afp/AODV/variants/c_gtobcast/C_Global_Invariants.thy", - "formal/mizar/topgen_6.miz", - "formal/afp/CakeML/generated/Lem_show.thy", - "formal/lean/mathlib/topology/uniform_space/completion.lean", - "formal/afp/Deep_Learning/DL_Fundamental_Theorem_Network_Capacity.thy", - "formal/hol/Help/ss_of_congs.doc", - "formal/hol/Model/semantics.ml", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1961/p1.lean", - "formal/mizar/xtuple_0.miz", - "formal/hol/Complex/complex_grobner.ml", - "formal/lean/mathlib/category_theory/sums/default.lean", - "formal/afp/ConcurrentIMP/CIMP_vcg_rules.thy", - "formal/afp/Akra_Bazzi/Akra_Bazzi.thy", - "formal/afp/Higher_Order_Terms/Term_to_Nterm.thy", - "formal/hol/Help/use_file.doc", - "formal/afp/HyperCTL/Shallow.thy", - "formal/hol/RichterHilbertAxiomGeometry/UniversalPropCartProd.ml", - "formal/afp/BD_Security_Compositional/document/root.tex", - "formal/afp/QR_Decomposition/Examples_QR_Abstract_Symbolic.thy", - "formal/lean/mathlib/category_theory/limits/shapes/products.lean", - "formal/afp/Tycon/Monad_Plus.thy", - "formal/hol/Help/is_setenum.doc", - "formal/afp/KD_Tree/Range_Search.thy", - "formal/lean/liquid/for_mathlib/Cech/adjunction.lean", - "formal/afp/CZH_Universal_Constructions/czh_ucategories/CZH_UCAT_PWKan_Example.thy", - "formal/hol/Help/reverse_interface_mapping.doc", - "formal/mizar/pnproc_1.miz", - "formal/afp/Van_Emde_Boas_Trees/Separation_Logic_Imperative_HOL/Assertions.thy", - "formal/hol/Help/forall.doc", - "formal/afp/AODV/variants/c_gtobcast/C_Aodv_Predicates.thy", - "formal/afp/Partial_Order_Reduction/Extensions/Refine_Monadic_Extensions.thy", - "formal/afp/HOL-CSP/Assertions.thy", - "formal/hol/Help/striplist.doc", - "formal/afp/TESL_Language/SymbolicPrimitive.thy", - 
"formal/afp/Polynomial_Interpolation/Neville_Aitken_Interpolation.thy", - "formal/afp/Regression_Test_Selection/Common/CollectionBasedRTS.thy", - "formal/lean/liquid/breen_deligne/functorial_map.lean", - "formal/afp/Fishers_Inequality/Linear_Bound_Argument.thy", - "formal/lean/mathlib/ring_theory/chain_of_divisors.lean", - "formal/mizar/jordan12.miz", - "formal/afp/Complex_Bounded_Operators/extra/Extra_Lattice.thy", - "formal/afp/Quantales/Quantale_Models.thy", - "formal/hol/Help/LEANCOP.doc", - "formal/afp/CAVA_Automata/Automata.thy", - "formal/afp/Ergodic_Theory/Measure_Preserving_Transformations.thy", - "formal/afp/Berlekamp_Zassenhaus/Reconstruction.thy", - "formal/hol/Help/print_term.doc", - "formal/lean/lftcm/demos/category_theory.lean", - "formal/lean/mathlib/set_theory/cardinal/basic.lean", - "formal/afp/Key_Agreement_Strong_Adversaries/dhlvl2.thy", - "formal/hol/Help/hol_version.doc", - "formal/afp/Collections/Iterator/Idx_Iterator.thy", - "formal/afp/UPF_Firewall/FWNormalisation/NormalisationIPPProofs.thy", - "formal/mizar/mssubfam.miz", - "formal/afp/C2KA_DistributedSystems/C2KA.thy", - "formal/afp/Jinja/BV/BVSpec.thy", - "formal/lean/mathlib/measure_theory/constructions/prod.lean", - "formal/hol/Minisat/minisat_parse.ml", - "formal/afp/Real_Time_Deque/RTD_Util.thy", - "formal/lean/mathlib/category_theory/abelian/images.lean", - "formal/mizar/lpspace1.miz", - "formal/afp/Possibilistic_Noninterference/During_Execution.thy", - "formal/afp/JinjaThreads/Compiler/Compiler.thy", - "formal/afp/Deriving/Equality_Generator/Equality_Instances.thy", - "formal/lean/mathlib/set_theory/game/nim.lean", - "formal/mizar/polynom8.miz", - "formal/afp/Encodability_Process_Calculi/document/root.tex", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p354.lean", - "formal/afp/Planarity_Certificates/Verification/Setup_AutoCorres.thy", - "formal/lean/mathlib/data/pfunctor/univariate/M.lean", - "formal/afp/Launchbury/Iterative.thy", - "formal/mizar/group_3.miz", - "formal/afp/Simplex/Simplex_Auxiliary.thy", - "formal/lean/mathlib/group_theory/quotient_group.lean", - "formal/afp/Transitive_Models/Discipline_Cardinal.thy", - "formal/coq/analysis/reals.v", - "formal/afp/Ordinary_Differential_Equations/Ex/ODE_Examples.thy", - "formal/afp/Planarity_Certificates/Planarity/Reachablen.thy", - "formal/afp/Simple_Firewall/Common/Lib_Enum_toString.thy", - "formal/lean/mathlib/linear_algebra/ray.lean", - "formal/hol/Help/SYM_CONV.doc", - "formal/afp/Hoare_Time/QuantK_VCG.thy", - "formal/lean/liquid/thm95/modify_complex.lean", - "formal/afp/MSO_Regex_Equivalence/WS1S_Equivalence_Checking.thy", - "formal/afp/CoCon/Decision_Confidentiality/Decision_NCPC.thy", - "formal/afp/Shivers-CFA/ExCFSV.thy", - "formal/afp/QR_Decomposition/QR_Decomposition.thy", - "formal/lean/mathlib/analysis/mean_inequalities.lean", - "formal/afp/First_Order_Terms/document/root.tex", - "formal/afp/Perron_Frobenius/Spectral_Radius_Largest_Jordan_Block.thy", - "formal/afp/UPF_Firewall/Examples/DMZ/DMZInteger.thy", - "formal/lean/mathlib/data/nat/with_bot.lean", - "formal/afp/Knot_Theory/Tangle_Algebra.thy", - "formal/mizar/arytm_1.miz", - "formal/afp/LocalLexing/TheoremD12.thy", - "formal/mizar/bvfunc11.miz", - "formal/afp/List-Infinite/ListInf/ListInf.thy", - "formal/mizar/comptrig.miz", - "formal/afp/Differential_Dynamic_Logic/Frechet_Correctness.thy", - "formal/afp/HOLCF-Prelude/examples/Sieve_Primes.thy", - "formal/mizar/convfun1.miz", - "formal/hol/Help/infixes.doc", - "formal/mizar/glib_000.miz", - 
"formal/afp/Abs_Int_ITP2012/ACom.thy", - "formal/afp/Rep_Fin_Groups/Rep_Fin_Groups.thy", - "formal/afp/Automated_Stateful_Protocol_Verification/document/root.tex", - "formal/afp/Propositional_Proof_Systems/LSC.thy", - "formal/lean/mathlib/topology/sets/opens.lean", - "formal/mizar/scmfsa7b.miz", - "formal/afp/Collections/Examples/ICF/ICF_Examples.thy", - "formal/afp/ConcurrentIMP/CIMP_pred.thy", - "formal/lean/mathlib/topology/algebra/module/character_space.lean", - "formal/hol/Help/dom.doc", - "formal/hol/Help/prioritize_num.doc", - "formal/lean/liquid/for_mathlib/derived/defs.lean", - "formal/lean/mathlib/topology/urysohns_lemma.lean", - "formal/afp/AODV/variants/b_fwdrreps/B_Aodv_Data.thy", - "formal/lean/mathlib/linear_algebra/multilinear/finite_dimensional.lean", - "formal/afp/Automatic_Refinement/document/root.tex", - "formal/lean/mathlib/algebra/monoid_algebra/degree.lean", - "formal/lean/mathlib/data/set/intervals/monotone.lean", - "formal/lean/mathlib/data/mv_polynomial/default.lean", - "formal/afp/Boolean_Expression_Checkers/Boolean_Expression_Example.thy", - "formal/afp/PCF/Continuations.thy", - "formal/afp/Separata/document/root.tex", - "formal/afp/Buchi_Complementation/Complementation_Implement.thy", - "formal/mizar/afvect0.miz", - "formal/hol/Help/INT_GE_CONV.doc", - "formal/afp/Jordan_Normal_Form/Strassen_Algorithm_Code.thy", - "formal/lean/mathlib/linear_algebra/tensor_product.lean", - "formal/lean/mathlib/group_theory/group_action/sub_mul_action.lean", - "formal/lean/mathlib/algebra/order/invertible.lean", - "formal/mizar/msualg_6.miz", - "formal/afp/CZH_Foundations/czh_semicategories/CZH_SMC_Rel.thy", - "formal/mizar/sin_cos4.miz", - "formal/afp/Weighted_Path_Order/Multiset_Extension2.thy", - "formal/afp/Word_Lib/More_Divides.thy", - "formal/afp/Neumann_Morgenstern_Utility/document/root.tex", - "formal/afp/AODV/variants/e_all_abcd/E_OAodv.thy", - "formal/lean/mathlib/field_theory/finite/polynomial.lean", - "formal/hol/Examples/misiurewicz.ml", - "formal/afp/LOFT/List_Group.thy", - "formal/afp/Key_Agreement_Strong_Adversaries/Payloads.thy", - "formal/coq/math-comp/solvable/nilpotent.v", - "formal/afp/AODV/variants/b_fwdrreps/B_Loop_Freedom.thy", - "formal/afp/Density_Compiler/Density_Predicates.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/2varlineareq_fp3zeq11_3tfm1m5zeqn68_feqn10_zeq7.lean", - "formal/mizar/topreal2.miz", - "formal/afp/Shadow_SC_DOM/tests/Shadow_DOM_Document_getElementById.thy", - "formal/afp/HRB-Slicing/StaticInter/FundamentalProperty.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p233.lean", - "formal/hol/Help/ALPHA_CONV.doc", - "formal/afp/VerifyThis2019/Challenge2A.thy", - "formal/afp/List-Infinite/CommonArith/Util_Div.thy", - "formal/afp/Randomised_Social_Choice/Automation/QSOpt_Exact.thy", - "formal/afp/Hybrid_Multi_Lane_Spatial_Logic/Length.thy", - "formal/afp/Formula_Derivatives/WS1S_Nameful.thy", - "formal/afp/CakeML/generated/Lem_set_helpers.thy", - "formal/afp/RSAPSS/Cryptinverts.thy", - "formal/mizar/measure2.miz", - "formal/afp/JinjaThreads/Basic/Set_without_equal.thy", - "formal/afp/AODV/variants/b_fwdrreps/B_OAodv.thy", - "formal/afp/Core_DOM/common/tests/Core_DOM_BaseTest.thy", - "formal/mizar/dynkin.miz", - "formal/afp/DiskPaxos/document/body.tex", - "formal/afp/pGCL/Induction.thy", - "formal/afp/IMO2019/IMO2019_Q5.thy", - "formal/lean/mathlib/algebra/lie/ideal_operations.lean", - "formal/afp/Kleene_Algebra/PHL_KA.thy", - "formal/afp/Finite_Fields/document/root.tex", - 
"formal/afp/Collections/GenCF/Gen/GenCF_Gen_Chapter.thy", - "formal/afp/VectorSpace/LinearCombinations.thy", - "formal/afp/Possibilistic_Noninterference/Bisim.thy", - "formal/mizar/vfunct_2.miz", - "formal/mizar/hilb10_4.miz", - "formal/mizar/oposet_1.miz", - "formal/afp/Nested_Multisets_Ordinals/Hydra_Battle.thy", - "formal/mizar/scmpds_2.miz", - "formal/lean/mathlib/ring_theory/discriminant.lean", - "formal/mizar/field_2.miz", - "formal/afp/Shadow_DOM/tests/slots.thy", - "formal/afp/Fourier/Fourier_Aux2.thy", - "formal/afp/Auto2_Imperative_HOL/Functional/Interval_Tree.thy", - "formal/lean/mathlib/category_theory/sites/canonical.lean", - "formal/afp/Transformer_Semantics/Powerset_Monad.thy", - "formal/afp/Metalogic_ProofChecker/BetaNorm.thy", - "formal/mizar/jordan18.miz", - "formal/hol/Multivariate/clifford.ml", - "formal/afp/Bicategory/Prebicategory.thy", - "formal/coq/analysis/fsbigop.v", - "formal/hol/Help/DEPTH_CONV.doc", - "formal/afp/VeriComp/Well_founded.thy", - "formal/hol/Help/BINOP_TAC.doc", - "formal/hol/Tutorial/Abstractions_and_quantifiers.ml", - "formal/afp/Dirichlet_Series/Euler_Products.thy", - "formal/lean/mathlib/data/bool/basic.lean", - "formal/afp/MiniSail/IVSubstTypingL.thy", - "formal/mizar/alg_1.miz", - "formal/lean/mathlib/topology/metric_space/metric_separated.lean", - "formal/afp/Virtual_Substitution/Infinitesimals.thy", - "formal/afp/Pi_Calculus/Strong_Late_Bisim_Subst_Pres.thy", - "formal/afp/Fermat3_4/Fermat4.thy", - "formal/afp/Combinable_Wands/document/root.tex", - "formal/mizar/interva1.miz", - "formal/lean/perfectoid/valuation/field.lean", - "formal/mizar/group_6.miz", - "formal/afp/Fourier/document/root.tex", - "formal/mizar/fomodel0.miz", - "formal/afp/MDP-Algorithms/document/root.tex", - "formal/afp/Modal_Logics_for_NTS/document/root.tex", - "formal/afp/Derangements/document/root.tex", - "formal/mizar/xcmplx_0.miz", - "formal/afp/Native_Word/Native_Word_Test_SMLNJ2.thy", - "formal/lean/mathlib/topology/sheaves/forget.lean", - "formal/afp/Call_Arity/CoCallAnalysisSpec.thy", - "formal/afp/Isabelle_C/C11-FrontEnd/src/C_Lexer_Annotation.thy", - "formal/afp/Heard_Of/eigbyz/EigbyzDefs.thy", - "formal/lean/mathlib/ring_theory/witt_vector/structure_polynomial.lean", - "formal/afp/Call_Arity/CoCallGraph-Nominal.thy", - "formal/afp/Constructive_Cryptography/Examples/Secure_Channel/Secure_Channel.thy", - "formal/afp/Algebraic_Numbers/Factors_of_Int_Poly.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/numbertheory/x5neqy2p4.lean", - "formal/coq/analysis/numfun.v", - "formal/afp/Launchbury/Denotational-Related.thy", - "formal/lean/mathlib/algebra/category/Group/default.lean", - "formal/afp/Promela/PromelaAST.thy", - "formal/afp/Modular_Assembly_Kit_Security/Verification/Basics/BSPTaxonomy.thy", - "formal/afp/Applicative_Lifting/Applicative_DNEList.thy", - "formal/afp/FocusStreamsCaseStudies/document/intro.tex", - "formal/mizar/series_5.miz", - "formal/lean/mathlib/data/nat/factorization/basic.lean", - "formal/afp/Stone_Kleene_Relation_Algebras/Kleene_Relation_Algebras.thy", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1983/p2.lean", - "formal/afp/Separation_Logic_Imperative_HOL/Tools/Imperative_HOL_Add.thy", - "formal/afp/MFODL_Monitor_Optimized/Monitor_Code.thy", - "formal/afp/Saturation_Framework/Lifting_to_Non_Ground_Calculi.thy", - "formal/afp/Extended_Finite_State_Machines/Transition.thy", - "formal/lean/mathlib/linear_algebra/matrix/polynomial.lean", - "formal/afp/Lowe_Ontological_Argument/QML.thy", - "formal/lean/mathlib/order/monotone.lean", - 
"formal/hol/Help/BETAS_CONV.doc", - "formal/afp/Modular_Assembly_Kit_Security/Verification/Basics/SecureSystems.thy", - "formal/afp/Regular_Algebras/document/root.tex", - "formal/hol/Help/DEPTH_BINOP_CONV.doc", - "formal/afp/FOL_Seq_Calc3/Fair_Stream.thy", - "formal/mizar/knaster.miz", - "formal/afp/FO_Theory_Rewriting/FOR_Check_Impl.thy", - "formal/afp/Functional_Ordered_Resolution_Prover/Deterministic_FO_Ordered_Resolution_Prover.thy", - "formal/afp/Clique_and_Monotone_Circuits/Preliminaries.thy", - "formal/hol/Help/mapf.doc", - "formal/lean/liquid/for_mathlib/bicartesian2.lean", - "formal/afp/FLP/Execution.thy", - "formal/afp/Forcing/Separation_Axiom.thy", - "formal/mizar/afproj.miz", - "formal/afp/Akra_Bazzi/Akra_Bazzi_Method.thy", - "formal/lean/liquid/for_mathlib/composable_morphisms.lean", - "formal/mizar/trees_1.miz", - "formal/afp/Monad_Memo_DP/state_monad/State_Main.thy", - "formal/mizar/ndiff_5.miz", - "formal/lean/lftcm/hints/category_theory/exercise3/hint5.lean", - "formal/afp/UTP/utp/examples/utp_simple_time.thy", - "formal/afp/CAVA_LTL_Modelchecker/All_Of_CAVA_LTL_Modelchecker.thy", - "formal/afp/UTP/toolkit/Countable_Set_Extra.thy", - "formal/afp/Registers/Misc.thy", - "formal/hol/Help/ASSUME_TAC.doc", - "formal/mizar/cgames_1.miz", - "formal/afp/Diophantine_Eqns_Lin_Hom/Algorithm.thy", - "formal/hol/Help/SELECT_RULE.doc", - "formal/hol/Help/term_union.doc", - "formal/hol/Help/REFUTE_THEN.doc", - "formal/hol/Help/unspaced_binops.doc", - "formal/afp/Circus/Circus_Syntax.thy", - "formal/lean/mathlib/algebra/order/monoid.lean", - "formal/afp/List-Infinite/ListInf/List2.thy", - "formal/afp/Constructive_Cryptography/Random_System.thy", - "formal/afp/Consensus_Refined/MRU/New_Algorithm_Proofs.thy", - "formal/lean/mathlib/algebra/homology/augment.lean", - "formal/lean/sphere-eversion/global/smooth_embedding.lean", - "formal/afp/Symmetric_Polynomials/Vieta.thy", - "formal/hol/Minisat/taut.ml", - "formal/afp/CakeML_Codegen/Rewriting/Big_Step_Value.thy", - "formal/afp/Binding_Syntax_Theory/Binding_Syntax.thy", - "formal/afp/MonoBoolTranAlgebra/Assertion_Algebra.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2020/a/p25.lean", - "formal/afp/InfPathElimination/SymExec.thy", - "formal/afp/Quasi_Borel_Spaces/StandardBorel.thy", - "formal/afp/KAD/Domain_Semiring.thy", - "formal/afp/Pi_Calculus/Weak_Late_Cong.thy", - "formal/lean/mathlib/analysis/asymptotics/asymptotic_equivalent.lean", - "formal/lean/mathlib/geometry/euclidean/default.lean", - "formal/mizar/c0sp2.miz", - "formal/lean/mathlib/data/finset/card.lean", - "formal/lean/mathlib/group_theory/commutator.lean", - "formal/mizar/rewrite2.miz", - "formal/lean/mathlib/dynamics/periodic_pts.lean", - "formal/lean/mathlib/ring_theory/localization/num_denom.lean", - "formal/lean/mathlib/data/real/sqrt.lean", - "formal/lean/mathlib/analysis/special_functions/trigonometric/complex_deriv.lean", - "formal/lean/mathlib/measure_theory/tactic.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p252.lean", - "formal/afp/CCS/Weak_Bisim.thy", - "formal/afp/ADS_Functor/Merkle_Interface.thy", - "formal/hol/Help/NUM_RED_CONV.doc", - "formal/afp/Linear_Inequalities/Farkas_Lemma.thy", - "formal/afp/Polynomials/OAlist.thy", - "formal/afp/Collections/GenCF/Gen/Gen_Hash.thy", - "formal/lean/lftcm/hints/category_theory/exercise4/hint2.lean", - "formal/lean/mathlib/topology/alexandroff.lean", - "formal/afp/Differential_Game_Logic/USubst.thy", - "formal/afp/Jordan_Normal_Form/Strassen_Algorithm.thy", - 
"formal/afp/SenSocialChoice/document/root.tex", - "formal/lean/liquid/condensed/is_proetale_sheaf.lean", - "formal/afp/LambdaAuth/Agreement.thy", - "formal/lean/mathlib/measure_theory/measure/open_pos.lean", - "formal/afp/JinjaDCI/J/WWellForm.thy", - "formal/afp/SATSolverVerification/AssertLiteral.thy", - "formal/afp/Deep_Learning/Tensor.thy", - "formal/lean/mathlib/data/multiset/range.lean", - "formal/afp/IMO2019/document/root.tex", - "formal/mizar/jgraph_5.miz", - "formal/lean/liquid/for_mathlib/homotopy_category_pretriangulated.lean", - "formal/lean/mathlib/category_theory/linear/functor_category.lean", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/sqineq_2unitcircatblt1.lean", - "formal/afp/JinjaThreads/JVM/JVMExceptions.thy", - "formal/lean/mathlib/algebra/order/lattice_group.lean", - "formal/mizar/tsep_2.miz", - "formal/mizar/recdef_2.miz", - "formal/afp/Van_Emde_Boas_Trees/VEBT_Insert.thy", - "formal/afp/Transition_Systems_and_Automata/Automata/DRA/DRA_Explicit.thy", - "formal/hol/Help/num_0.doc", - "formal/mizar/fcont_1.miz", - "formal/hol/miz3/Samples/other_mizs.ml", - "formal/lean/mathlib/algebra/hom/group.lean", - "formal/afp/AWN/Invariants.thy", - "formal/lean/mathlib/field_theory/minpoly.lean", - "formal/lean/mathlib/category_theory/monoidal/types.lean", - "formal/afp/JinjaThreads/JVM/JVM_Main.thy", - "formal/afp/Finger-Trees/FingerTree.thy", - "formal/afp/Poincare_Disc/Poincare_Lines.thy", - "formal/afp/Auto2_HOL/document/root.tex", - "formal/afp/Jordan_Normal_Form/DL_Rank_Submatrix.thy", - "formal/mizar/modelc_3.miz", - "formal/lean/mathlib/category_theory/functor/flat.lean", - "formal/afp/Algebraic_Numbers/Algebraic_Numbers.thy", - "formal/afp/First_Order_Terms/Matching.thy", - "formal/lean/mathlib/algebra/module/dedekind_domain.lean", - "formal/afp/AODV/variants/a_norreqid/A_Loop_Freedom.thy", - "formal/lean/liquid/for_mathlib/derived/bounded_homotopy_category.lean", - "formal/lean/sphere-eversion/notations.lean", - "formal/hol/Help/type_of_pretype.doc", - "formal/hol/Jordan/num_ext_nabs.ml", - "formal/afp/Projective_Geometry/Desargues_3D.thy", - "formal/lean/mathlib/data/buffer/parser/numeral.lean", - "formal/afp/Simpl/Simpl_Heap.thy", - "formal/afp/Core_DOM/common/classes/NodeClass.thy", - "formal/lean/mathlib/topology/bases.lean", - "formal/hol/GL/make.ml", - "formal/afp/Cartan_FP/document/root.tex", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p480.lean", - "formal/lean/mathlib/data/finset/default.lean", - "formal/hol/Jordan/num_ext_gcd.ml", - "formal/afp/Dirichlet_Series/Dirichlet_Series.thy", - "formal/lean/mathlib/number_theory/arithmetic_function.lean", - "formal/afp/Forcing/Succession_Poset.thy", - "formal/lean/mathlib/linear_algebra/projective_space/basic.lean", - "formal/mizar/ortsp_1.miz", - "formal/afp/Shivers-CFA/HOLCFUtils.thy", - "formal/hol/Help/pp_print_thm.doc", - "formal/afp/Program-Conflict-Analysis/Normalization.thy", - "formal/hol/Examples/division_algebras.ml", - "formal/hol/Help/applyd.doc", - "formal/lean/mathlib/ring_theory/valuation/integral.lean", - "formal/lean/lftcm/exercises_sources/thursday/linear_algebra.lean", - "formal/lean/mathlib/set_theory/lists.lean", - "formal/afp/Prime_Distribution_Elementary/Elementary_Prime_Bounds.thy", - "formal/lean/mathlib/combinatorics/hall/finite.lean", - "formal/lean/mathlib/ring_theory/algebra_tower.lean", - "formal/afp/UPF_Firewall/StatefulFW/LTL_alike.thy", - "formal/afp/Chord_Segments/Chord_Segments.thy", - "formal/afp/LTL_to_DRA/Auxiliary/Map2.thy", - 
"formal/hol/Help/DISJ_CASES_THEN.doc", - "formal/afp/TortoiseHare/document/root.tex", - "formal/afp/CAVA_Automata/Automata_Impl.thy", - "formal/afp/Monad_Memo_DP/Pure_Monad.thy", - "formal/hol/Help/REAL_INT_RAT_CONV.doc", - "formal/afp/Category3/EpiMonoIso.thy", - "formal/lean/liquid/pseudo_normed_group/basic.lean", - "formal/mizar/scmfsa10.miz", - "formal/hol/Functionspaces/L2.ml", - "formal/hol/Help/MATCH_MP.doc", - "formal/afp/Flyspeck-Tame/RTranCl.thy", - "formal/afp/JiveDataStoreModel/Isabelle/Value.thy", - "formal/hol/Logic/skolem.ml", - "formal/hol/Help/ADD_ASSUM.doc", - "formal/lean/mathlib/data/list/tfae.lean", - "formal/afp/Complex_Bounded_Operators/Complex_Inner_Product.thy", - "formal/afp/Iptables_Semantics/Semantics_Ternary/Optimizing.thy", - "formal/afp/Dijkstra_Shortest_Path/GraphByMap.thy", - "formal/afp/Isabelle_Meta_Model/meta_isabelle/Meta_Pure.thy", - "formal/afp/Ordinary_Differential_Equations/Library/Bounded_Linear_Operator.thy", - "formal/afp/DPT-SAT-Solver/document/root.tex", - "formal/lean/mathlib/algebra/graded_monoid.lean", - "formal/afp/Planarity_Certificates/Planarity/List_Aux.thy", - "formal/afp/Gromov_Hyperbolicity/Busemann_Function.thy", - "formal/afp/KAD/Antidomain_Semiring.thy", - "formal/afp/Iptables_Semantics/document/root.tex", - "formal/afp/CakeML/generated/Lem_maybe_extra.thy", - "formal/afp/Attack_Trees/AT.thy", - "formal/lean/mathlib/order/category/Semilattice.lean", - "formal/afp/BDD/ShareRepProof.thy", - "formal/lean/sphere-eversion/to_mathlib/linear_algebra/basic.lean", - "formal/afp/Featherweight_OCL/document/conclusion.tex", - "formal/lean/mathlib/combinatorics/hall/basic.lean", - "formal/afp/Ordinary_Differential_Equations/Ex/Lorenz/C1/Lorenz_C1.thy", - "formal/hol/Help/nsplit.doc", - "formal/afp/Collections/GenCF/Intf/Intf_Comp.thy", - "formal/afp/Dict_Construction/Test_Dict_Construction.thy", - "formal/afp/PSemigroupsConvolution/Binary_Modalities.thy", - "formal/afp/Ordered_Resolution_Prover/FO_Ordered_Resolution.thy", - "formal/lean/mathlib/data/set_like/fintype.lean", - "formal/mizar/tex_2.miz", - "formal/lean/mathlib/category_theory/limits/bicones.lean", - "formal/afp/Ordered_Resolution_Prover/Clausal_Logic.thy", - "formal/afp/Factor_Algebraic_Polynomial/Poly_Connection.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2019/a/p21.lean", - "formal/lean/mathlib/data/rat/meta_defs.lean", - "formal/afp/Partial_Order_Reduction/Extensions/Set_Extensions.thy", - "formal/hol/Help/REWRITES_CONV.doc", - "formal/afp/Euler_MacLaurin/Euler_MacLaurin_Landau.thy", - "formal/lean/mathlib/data/finset/finsupp.lean", - "formal/hol/IsabelleLight/new_tactics.ml", - "formal/afp/Probabilistic_While/While_SPMF.thy", - "formal/afp/Modular_arithmetic_LLL_and_HNF_algorithms/LLL_Certification_via_HNF.thy", - "formal/lean/mathlib/measure_theory/integral/torus_integral.lean", - "formal/coq/math-comp/fingroup/quotient.v", - "formal/hol/Tutorial/Semantics_of_programming_languages_shallow.ml", - "formal/afp/Applicative_Lifting/Applicative_Environment_Algebra.thy", - "formal/afp/Refine_Imperative_HOL/IICF/Impl/IICF_MS_Array_List.thy", - "formal/afp/Native_Word/document/root.tex", - "formal/lean/mathlib/data/finset/slice.lean", - "formal/afp/UPF_Firewall/Examples/PersonalFirewall/PersonalFirewallDatatype.thy", - "formal/afp/Lambda_Free_RPOs/Lambda_Encoding_RPO.thy", - "formal/afp/AWN/Toy.thy", - "formal/afp/Frequency_Moments/Frequency_Moment_2.thy", - "formal/hol/Arithmetic/derived.ml", - "formal/afp/Myhill-Nerode/Myhill.thy", - "formal/mizar/card_fil.miz", - 
"formal/mizar/brouwer.miz", - "formal/afp/Hoare_Time/Vars.thy", - "formal/afp/Transition_Systems_and_Automata/Automata/NBA/NBA_Algorithms.thy", - "formal/afp/Security_Protocol_Refinement/Key_establish/m3_ds_par.thy", - "formal/afp/Native_Word/Native_Word_Test_Scala.thy", - "formal/lean/liquid/combinatorial_lemma/finite.lean", - "formal/lean/mathlib/ring_theory/witt_vector/defs.lean", - "formal/afp/Flyspeck-Tame/FaceDivisionProps.thy", - "formal/afp/Quasi_Borel_Spaces/Probability_Space_QuasiBorel.thy", - "formal/lean/mathlib/order/max.lean", - "formal/afp/List-Infinite/CommonSet/SetIntervalCut.thy", - "formal/afp/Timed_Automata/Misc.thy", - "formal/lean/mathlib/data/finset/noncomm_prod.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p196.lean", - "formal/lean/mathlib/data/list/dedup.lean", - "formal/lean/mathlib/algebra/euclidean_domain.lean", - "formal/mizar/fomodel1.miz", - "formal/afp/Groebner_Macaulay/Binomial_Int.thy", - "formal/afp/Matrices_for_ODEs/MTX_Preliminaries.thy", - "formal/afp/Refine_Imperative_HOL/Sepref_Foreach.thy", - "formal/afp/CryptHOL/Resumption.thy", - "formal/hol/Help/extend_basic_convs.doc", - "formal/afp/Rewrite_Properties_Reduction/Rewriting/Rewriting_Properties.thy", - "formal/lean/mathlib/measure_theory/integral/mean_inequalities.lean", - "formal/afp/AODV/Aodv_Basic.thy", - "formal/afp/Lam-ml-Normalization/document/figureLemma7.tex", - "formal/afp/Bounded_Deducibility_Security/Transition_System.thy", - "formal/afp/Program-Conflict-Analysis/ConsInterleave.thy", - "formal/afp/Van_Emde_Boas_Trees/Imperative_HOL_Time/Array_Time.thy", - "formal/lean/perfectoid/Spa/presheaf.lean", - "formal/afp/Real_Impl/Real_Impl_Auxiliary.thy", - "formal/mizar/topdim_1.miz", - "formal/afp/Actuarial_Mathematics/document/root.tex", - "formal/afp/Cotangent_PFD_Formula/Cotangent_PFD_Formula.thy", - "formal/afp/Discrete_Summation/Factorials.thy", - "formal/mizar/dualsp04.miz", - "formal/afp/LTL_to_DRA/Rabin.thy", - "formal/afp/List-Infinite/CommonSet/InfiniteSet2.thy", - "formal/hol/100/desargues.ml", - "formal/afp/Deriving/Comparator_Generator/Compare_Real.thy", - "formal/afp/Pi_Calculus/Weak_Early_Bisim_Subst.thy", - "formal/hol/Help/META_EXISTS_TAC.doc", - "formal/hol/Help/REFL_TAC.doc", - "formal/hol/Library/binomial.ml", - "formal/afp/Echelon_Form/Cayley_Hamilton_Compatible.thy", - "formal/mizar/sfmastr1.miz", - "formal/mizar/rlsub_1.miz", - "formal/lean/mathlib/field_theory/is_alg_closed/classification.lean", - "formal/lean/mathlib/topology/locally_finite.lean", - "formal/afp/Factor_Algebraic_Polynomial/Factor_Complex_Poly.thy", - "formal/afp/CakeML/generated/Lem_pervasives.thy", - "formal/hol/Help/apply.doc", - "formal/lean/mathlib/topology/algebra/order/proj_Icc.lean", - "formal/afp/Polynomial_Factorization/Prime_Factorization.thy", - "formal/mizar/osalg_4.miz", - "formal/afp/JinjaDCI/Compiler/Correctness1.thy", - "formal/hol/Help/mk_icomb.doc", - "formal/hol/Help/SUBS_CONV.doc", - "formal/afp/CZH_Foundations/czh_sets/ex/CZH_EX_Replacement.thy", - "formal/afp/Constructive_Cryptography/Converter_Rewrite.thy", - "formal/mizar/sheffer1.miz", - "formal/hol/IsabelleLight/make.ml", - "formal/mizar/jgraph_8.miz", - "formal/lean/mathlib/analysis/convex/contractible.lean", - "formal/lean/mathlib/number_theory/primes_congruent_one.lean", - "formal/mizar/mesfun14.miz", - "formal/hol/Help/mk_finty.doc", - "formal/mizar/dualsp03.miz", - "formal/hol/Help/ORELSE.doc", - "formal/lean/mathlib/category_theory/functor/left_derived.lean", - 
"formal/lean/liquid/polyhedral_lattice/pseudo_normed_group.lean", - "formal/afp/Hermite_Lindemann/Complex_Lexorder.thy", - "formal/afp/Formal_SSA/Generic_Interpretation.thy", - "formal/afp/Jinja/Jinja.thy", - "formal/lean/mathlib/ring_theory/multiplicity.lean", - "formal/hol/Arithmetic/arithprov.ml", - "formal/afp/CISC-Kernel/step/Step_invariants.thy", - "formal/afp/Transition_Systems_and_Automata/Automata/DRA/DRA_Translate.thy", - "formal/afp/MSO_Regex_Equivalence/Pi_Derivatives.thy", - "formal/afp/Modal_Logics_for_NTS/FL_Formula.thy", - "formal/afp/Modular_arithmetic_LLL_and_HNF_algorithms/document/root.tex", - "formal/afp/LTL_to_DRA/Impl/LTL_Compat.thy", - "formal/hol/Help/union.doc", - "formal/lean/mathlib/category_theory/epi_mono.lean", - "formal/lean/mathlib/algebra/category/Ring/constructions.lean", - "formal/afp/Fishers_Inequality/Dual_Systems.thy", - "formal/afp/IsaNet/instances/SCION_variant.thy", - "formal/afp/Sort_Encodings/U.thy", - "formal/mizar/sf_mastr.miz", - "formal/mizar/compos_0.miz", - "formal/mizar/ordinal4.miz", - "formal/afp/Core_SC_DOM/common/monads/ObjectMonad.thy", - "formal/afp/CAVA_LTL_Modelchecker/BoolProgs/Programs/BoolProgs_Simple.thy", - "formal/mizar/circuit2.miz", - "formal/lean/mathlib/category_theory/groupoid.lean", - "formal/lean/mathlib/category_theory/subobject/factor_thru.lean", - "formal/mizar/goboard3.miz", - "formal/afp/PCF/SmallStep.thy", - "formal/afp/Call_Arity/Cardinality-Domain.thy", - "formal/afp/JinjaDCI/BV/BVConform.thy", - "formal/afp/Prime_Harmonic_Series/Prime_Harmonic.thy", - "formal/afp/Inductive_Confidentiality/GeneralAttacker/PublicGA.thy", - "formal/afp/CakeML_Codegen/Doc/Doc_Compiler.thy", - "formal/lean/mathlib/algebra/punit_instances.lean", - "formal/mizar/recdef_1.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p482.lean", - "formal/mizar/fcont_3.miz", - "formal/lean/mathlib/category_theory/monoidal/discrete.lean", - "formal/lean/sphere-eversion/to_mathlib/set_theory/cardinal/basic.lean", - "formal/lean/mathlib/ring_theory/integral_closure.lean", - "formal/afp/Isabelle_Meta_Model/toy_example/embedding/meta_toy/Meta_Toy.thy", - "formal/lean/mathlib/analysis/calculus/specific_functions.lean", - "formal/afp/Slicing/While/Labels.thy", - "formal/afp/BytecodeLogicJmlTypes/Language.thy", - "formal/afp/Physical_Quantities/ISQ_Dimensions.thy", - "formal/hol/Tutorial/all.ml", - "formal/afp/CZH_Foundations/czh_digraphs/CZH_DG_Rel.thy", - "formal/afp/LambdaMu/Reduction.thy", - "formal/afp/Sort_Encodings/G.thy", - "formal/afp/Sigma_Commit_Crypto/Rivest.thy", - "formal/afp/Monad_Memo_DP/state_monad/DP_CRelVS.thy", - "formal/mizar/partfun2.miz", - "formal/lean/liquid/for_mathlib/tsum.lean", - "formal/afp/Linear_Recurrences/RatFPS.thy", - "formal/lean/liquid/challenge.lean", - "formal/mizar/sin_cos9.miz", - "formal/lean/mathlib/combinatorics/simple_graph/density.lean", - "formal/afp/Rewrite_Properties_Reduction/Ground_Reduction_on_GTRS.thy", - "formal/afp/Tail_Recursive_Functions/CaseStudy2.thy", - "formal/afp/Hybrid_Multi_Lane_Spatial_Logic/Cars.thy", - "formal/afp/BTree/BTree.thy", - "formal/afp/Simpl/Hoare.thy", - "formal/lean/mathlib/linear_algebra/affine_space/basic.lean", - "formal/mizar/matrtop1.miz", - "formal/lean/mathlib/algebra/lie/submodule.lean", - "formal/afp/DataRefinementIBP/document/root.tex", - "formal/afp/Belief_Revision/AGM_Revision.thy", - "formal/afp/Isabelle_Marries_Dirac/Basics.thy", - "formal/afp/Akra_Bazzi/Eval_Numeral.thy", - "formal/hol/Help/ASM_METIS_TAC.doc", - 
"formal/afp/Security_Protocol_Refinement/Refinement/a0n_agree.thy", - "formal/afp/Psi_Calculi/Weaken_Stat_Imp.thy", - "formal/afp/RSAPSS/Wordarith.thy", - "formal/hol/IsabelleLight/meta_rules.ml", - "formal/mizar/yellow_1.miz", - "formal/hol/Help/by.doc", - "formal/afp/Collections/ICF/impl/RBTMapImpl.thy", - "formal/afp/CZH_Foundations/czh_sets/ex/CZH_EX_TS.thy", - "formal/afp/Buchi_Complementation/Complementation_Build.thy", - "formal/mizar/orders_1.miz", - "formal/lean/mathlib/linear_algebra/quadratic_form/basic.lean", - "formal/lean/mathlib/category_theory/sites/sheaf.lean", - "formal/lean/mathlib/analysis/normed/group/pointwise.lean", - "formal/lean/mathlib/category_theory/limits/shapes/biproducts.lean", - "formal/lean/mathlib/data/fp/basic.lean", - "formal/lean/mathlib/topology/instances/ereal.lean", - "formal/hol/Help/REAL_RAT_SUB_CONV.doc", - "formal/mizar/fintopo7.miz", - "formal/afp/Jordan_Normal_Form/Conjugate.thy", - "formal/afp/UPF_Firewall/document/root.tex", - "formal/afp/Bicategory/SpanBicategory.thy", - "formal/afp/MiniSail/Syntax.thy", - "formal/afp/Automatic_Refinement/Lib/Indep_Vars.thy", - "formal/hol/Help/theorems.doc", - "formal/afp/List-Infinite/CommonArith/Util_MinMax.thy", - "formal/lean/mathlib/data/finset/n_ary.lean", - "formal/afp/UPF_Firewall/Examples/Transformation/Transformation01.thy", - "formal/afp/Chandy_Lamport/Trace.thy", - "formal/afp/Clean/examples/Quicksort.thy", - "formal/afp/Independence_CH/Ordinals_In_MG.thy", - "formal/afp/HOL-CSP/Sync.thy", - "formal/afp/Inductive_Confidentiality/DolevYao/Message.thy", - "formal/hol/Help/REAL_IDEAL_CONV.doc", - "formal/afp/Adaptive_State_Counting/ATC/ATC.thy", - "formal/afp/CZH_Foundations/czh_semicategories/CZH_SMC_GRPH.thy", - "formal/hol/Help/definitions.doc", - "formal/afp/MSO_Regex_Equivalence/Formula.thy", - "formal/mizar/fuzzy_4.miz", - "formal/afp/Isabelle_Marries_Dirac/Quantum_Teleportation.thy", - "formal/afp/UTP/utp/examples/gcd.thy", - "formal/hol/Help/mk_intconst.doc", - "formal/lean/mathlib/field_theory/mv_polynomial.lean", - "formal/afp/Jordan_Normal_Form/Column_Operations.thy", - "formal/lean/liquid/for_mathlib/has_homology_aux.lean", - "formal/afp/Adaptive_State_Counting/ASC/ASC_Example.thy", - "formal/mizar/zmodul06.miz", - "formal/lean/mathlib/data/multiset/powerset.lean", - "formal/mizar/pdiff_4.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p5.lean", - "formal/lean/mathlib/analysis/normed_space/complemented.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2015/a/p10.lean", - "formal/lean/mathlib/algebra/ring/equiv.lean", - "formal/afp/Call_Arity/SestoftGC.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/8/2020/p22.lean", - "formal/lean/mathlib/category_theory/adjunction/mates.lean", - "formal/afp/Containers/Collection_Enum.thy", - "formal/afp/Applicative_Lifting/Tree_Relabelling.thy", - "formal/afp/Signature_Groebner/document/root.tex", - "formal/afp/Collections/ICF/impl/RBTSetImpl.thy", - "formal/mizar/yellow_4.miz", - "formal/hol/IEEE/make.ml", - "formal/afp/Finitely_Generated_Abelian_Groups/Finite_And_Cyclic_Groups.thy", - "formal/lean/liquid/pseudo_normed_group/Tinv.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2020/a/p4.lean", - "formal/afp/Count_Complex_Roots/CC_Polynomials_Extra.thy", - "formal/afp/Slicing/Basic/CFGExit.thy", - "formal/mizar/goboard9.miz", - "formal/lean/mathzoo/mathzoo/olympiads/imo/2011/p5.lean", - "formal/lean/mathlib/combinatorics/simple_graph/basic.lean", - "formal/afp/Echelon_Form/Echelon_Form_Inverse.thy", - 
"formal/afp/Routing/document/root.tex", - "formal/afp/Constructive_Cryptography/Examples/Secure_Channel/One_Time_Pad.thy", - "formal/afp/ROBDD/document/root.tex", - "formal/afp/Knuth_Bendix_Order/Subterm_and_Context.thy", - "formal/afp/Incompleteness/Predicates.thy", - "formal/hol/Help/FORALL_UNWIND_CONV.doc", - "formal/afp/Modular_Assembly_Kit_Security/SecuritySpecification/PropertyLibrary.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p48.lean", - "formal/afp/Promela/PromelaDatastructures.thy", - "formal/afp/Jinja/Common/Auxiliary.thy", - "formal/afp/Call_Arity/CardinalityAnalysisSig.thy", - "formal/afp/Clique_and_Monotone_Circuits/document/root.tex", - "formal/afp/Constructive_Cryptography/Wiring.thy", - "formal/lean/mathlib/linear_algebra/matrix/dot_product.lean", - "formal/afp/Psi_Calculi/Semantics.thy", - "formal/lean/mathlib/topology/hom/open.lean", - "formal/mizar/goboard1.miz", - "formal/afp/WorkerWrapper/LList.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1964/p1.lean", - "formal/afp/Formula_Derivatives/document/root.tex", - "formal/hol/Help/NUM_SUC_CONV.doc", - "formal/afp/Forcing/Rasiowa_Sikorski.thy", - "formal/afp/AODV/variants/a_norreqid/A_Global_Invariants.thy", - "formal/mizar/real_ns2.miz", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_Comma.thy", - "formal/lean/mathlib/analysis/convex/slope.lean", - "formal/afp/HRB-Slicing/HRBSlicing.thy", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1988/p8.lean", - "formal/afp/Collections/ICF/tools/Locale_Code_Ex.thy", - "formal/lean/mathlib/control/applicative.lean", - "formal/mizar/yellow10.miz", - "formal/hol/Help/o.doc", - "formal/hol/Help/DENUMERAL.doc", - "formal/afp/Lambda_Free_KBOs/Lambda_Free_KBO_Std.thy", - "formal/afp/Featherweight_OCL/document/root.tex", - "formal/afp/UpDown_Scheme/Up.thy", - "formal/afp/Transitive_Models/Cardinal_Relative.thy", - "formal/afp/WOOT_Strong_Eventual_Consistency/BasicAlgorithms.thy", - "formal/afp/Well_Quasi_Orders/Least_Enum.thy", - "formal/afp/AODV/variants/a_norreqid/A_Aodv_Data.thy", - "formal/afp/Category3/InitialTerminal.thy", - "formal/afp/AODV/variants/d_fwdrreqs/D_Fwdrreqs.thy", - "formal/afp/VeriComp/document/root.tex", - "formal/hol/Help/instantiate.doc", - "formal/hol/Help/variants.doc", - "formal/hol/Help/print_fpf.doc", - "formal/lean/mathlib/topology/homotopy/contractible.lean", - "formal/hol/Help/list_mk_disj.doc", - "formal/mizar/finseq_4.miz", - "formal/lean/mathlib/analysis/calculus/lhopital.lean", - "formal/afp/Hoare_Time/Quant_Hoare.thy", - "formal/afp/UTP/toolkit/Positive.thy", - "formal/afp/Shadow_DOM/tests/Shadow_DOM_Document_adoptNode.thy", - "formal/hol/Help/REAL_FIELD.doc", - "formal/afp/ResiduatedTransitionSystem/ResiduatedTransitionSystem.thy", - "formal/afp/Gauss_Sums/Finite_Fourier_Series.thy", - "formal/hol/Help/REAL_RAT_SGN_CONV.doc", - "formal/lean/mathlib/topology/bornology/hom.lean", - "formal/afp/Saturation_Framework_Extensions/Soundness.thy", - "formal/hol/EC/secp224k1.ml", - "formal/lean/mathlib/analysis/normed_space/ray.lean", - "formal/mizar/nat_lat.miz", - "formal/lean/mathlib/topology/algebra/field.lean", - "formal/lean/liquid/condensed/Qprime_isoms.lean", - "formal/lean/mathlib/topology/algebra/localization.lean", - "formal/afp/FOL_Seq_Calc3/Prover.thy", - "formal/afp/Noninterference_Inductive_Unwinding/InductiveUnwinding.thy", - "formal/lean/mathlib/category_theory/limits/concrete_category.lean", - "formal/mizar/diophan1.miz", - "formal/afp/Binomial-Queues/PQ_Implementation.thy", - 
"formal/afp/Ordinary_Differential_Equations/Ex/Examples_Integral.thy", - "formal/hol/Help/EXISTENCE.doc", - "formal/afp/DOM_Components/document/root.tex", - "formal/coq/odd-order/PFsection13.v", - "formal/lean/mathlib/number_theory/padics/default.lean", - "formal/afp/PLM/TAO_9_PLM.thy", - "formal/lean/lftcm/solutions/friday/topology.lean", - "formal/afp/Refine_Imperative_HOL/IICF/Impl/IICF_List_MsetO.thy", - "formal/afp/Registers/Laws_Quantum.thy", - "formal/lean/liquid/for_mathlib/additive_functor.lean", - "formal/hol/Functionspaces/utils.ml", - "formal/mizar/yellow_3.miz", - "formal/lean/liquid/Lbar/torsion_free_condensed.lean", - "formal/hol/Help/itlist2.doc", - "formal/afp/Prpu_Maxflow/Graph_Topological_Ordering.thy", - "formal/lean/mathlib/algebra/group_power/ring.lean", - "formal/lean/mathlib/linear_algebra/matrix/hermitian.lean", - "formal/mizar/square_1.miz", - "formal/afp/CoSMed/Traceback_Properties/Friend_Traceback.thy", - "formal/lean/mathlib/order/closure.lean", - "formal/mizar/jordan4.miz", - "formal/afp/Extended_Finite_State_Machines/EFSM.thy", - "formal/hol/Help/axioms.doc", - "formal/afp/Gromov_Hyperbolicity/Hausdorff_Distance.thy", - "formal/afp/pGCL/Loops.thy", - "formal/afp/Call_Arity/ArityAnalysisFixProps.thy", - "formal/afp/Adaptive_State_Counting/FSM/FSM_Product.thy", - "formal/lean/mathlib/analysis/calculus/tangent_cone.lean", - "formal/hol/Examples/lucas_lehmer.ml", - "formal/afp/Planarity_Certificates/Verification/Simpl_Anno.thy", - "formal/mizar/topreal4.miz", - "formal/lean/mathlib/data/set/lattice.lean", - "formal/afp/Pi_Transcendental/Pi_Transcendental_Polynomial_Library.thy", - "formal/hol/Library/analysis.ml", - "formal/afp/BNF_CC/Concrete_Examples.thy", - "formal/lean/mathlib/category_theory/category/Cat/limit.lean", - "formal/lean/mathlib/order/circular.lean", - "formal/afp/Pi_Calculus/Early_Semantics.thy", - "formal/afp/Independence_CH/Edrel.thy", - "formal/afp/Algebraic_Numbers/Compare_Complex.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p393.lean", - "formal/afp/Gauss_Sums/Polya_Vinogradov.thy", - "formal/afp/SuperCalc/superposition.thy", - "formal/afp/Probabilistic_Noninterference/Trace_Based.thy", - "formal/afp/SATSolverVerification/Decide.thy", - "formal/afp/DPRM_Theorem/DPRM.thy", - "formal/afp/Modular_arithmetic_LLL_and_HNF_algorithms/HNF_Mod_Det_Algorithm.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p582.lean", - "formal/afp/TLA/Liveness.thy", - "formal/mizar/setfam_1.miz", - "formal/hol/Rqe/rewrites.ml", - "formal/afp/Markov_Models/Classifying_Markov_Chain_States.thy", - "formal/lean/liquid/for_mathlib/fintype_induction.lean", - "formal/coq/abel/xmathcomp/mxextra.v", - "formal/afp/AODV/variants/e_all_abcd/E_Aodv_Message.thy", - "formal/afp/Transition_Systems_and_Automata/Basic/Sequence_Zip.thy", - "formal/lean/mathlib/category_theory/limits/presheaf.lean", - "formal/afp/Root_Balanced_Tree/Root_Balanced_Tree.thy", - "formal/afp/CoSMeDis/Post_Confidentiality/Post_RECEIVER.thy", - "formal/afp/IMP2/lib/IMP2_Aux_Lemmas.thy", - "formal/lean/mathlib/computability/epsilon_NFA.lean", - "formal/afp/Decl_Sem_Fun_PL/Lambda.thy", - "formal/afp/QR_Decomposition/Matrix_To_IArray_QR.thy", - "formal/lean/mathlib/data/nat/count.lean", - "formal/afp/Hello_World/IO.thy", - "formal/afp/Prim_Dijkstra_Simple/Chapter_Dijkstra.thy", - "formal/lean/mathlib/order/locally_finite.lean", - "formal/hol/Help/REWRITE_RULE.doc", - "formal/afp/LTL_to_DRA/LTL_Rabin_Unfold_Opt.thy", - "formal/afp/Smooth_Manifolds/Partition_Of_Unity.thy", - 
"formal/afp/Registers/Axioms.thy", - "formal/afp/Isabelle_Marries_Dirac/Measurement.thy", - "formal/afp/FocusStreamsCaseStudies/arith_hints.thy", - "formal/afp/Collections/ICF/impl/SkewPrioImpl.thy", - "formal/afp/VerifyThis2018/lib/Dynamic_Array.thy", - "formal/lean/mathlib/order/upper_lower.lean", - "formal/hol/Help/new_type_definition.doc", - "formal/hol/100/platonic.ml", - "formal/mizar/numeral1.miz", - "formal/afp/PAC_Checker/PAC_Checker_Init.thy", - "formal/mizar/matrix16.miz", - "formal/afp/IP_Addresses/Lib_Word_toString.thy", - "formal/hol/Rqe/lift_qelim.ml", - "formal/afp/Modular_arithmetic_LLL_and_HNF_algorithms/Storjohann_Impl.thy", - "formal/afp/Modal_Logics_for_NTS/Formula.thy", - "formal/afp/Combinatorics_Words_Lyndon/Lyndon.thy", - "formal/lean/mathlib/linear_algebra/clifford_algebra/even.lean", - "formal/mizar/subset_1.miz", - "formal/lean/mathlib/topology/maps.lean", - "formal/afp/Affine_Arithmetic/Optimize_Integer.thy", - "formal/lean/mathlib/category_theory/monad/algebra.lean", - "formal/lean/mathlib/category_theory/comm_sq.lean", - "formal/afp/Extended_Finite_State_Machine_Inference/Inference.thy", - "formal/mizar/card_lar.miz", - "formal/lean/liquid/for_mathlib/Profinite/product.lean", - "formal/hol/Help/the_inductive_definitions.doc", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_DG_FUNCT.thy", - "formal/hol/Help/CNF_CONV.doc", - "formal/afp/CakeML/generated/Lem_pervasives_extra.thy", - "formal/afp/Formula_Derivatives/Examples/Presburger_Examples.thy", - "formal/afp/DiscretePricing/Generated_Subalgebra.thy", - "formal/afp/CoSMeDis/Friend_Request_Confidentiality/Friend_Request_Value_Setup.thy", - "formal/lean/mathlib/algebra/gcd_monoid/finset.lean", - "formal/afp/DFS_Framework/document/root.tex", - "formal/hol/Geometric_Algebra/quaternions.ml", - "formal/afp/Hahn_Jordan_Decomposition/Extended_Reals_Sums_Compl.thy", - "formal/afp/FOL_Seq_Calc2/Sequent1.thy", - "formal/lean/mathlib/algebra/algebra/subalgebra/pointwise.lean", - "formal/afp/Pi_Calculus/document/root.tex", - "formal/lean/lftcm/solutions/thursday/category_theory/exercise4.lean", - "formal/afp/Linear_Recurrences/Linear_Inhomogenous_Recurrences.thy", - "formal/coq/analysis/functions.v", - "formal/lean/mathlib/analysis/convex/hull.lean", - "formal/afp/Coinductive_Languages/document/root.tex", - "formal/coq/math-comp/test_suite/imset2_gproduct.v.out", - "formal/lean/mathlib/number_theory/legendre_symbol/gauss_eisenstein_lemmas.lean", - "formal/hol/Multivariate/flyspeck.ml", - "formal/mizar/jordan11.miz", - "formal/mizar/finsub_1.miz", - "formal/lean/mathlib/topology/sheaves/sheafify.lean", - "formal/hol/Help/SIMPLE_EXISTS.doc", - "formal/lean/mathlib/algebra/continued_fractions/computation/terminates_iff_rat.lean", - "formal/lean/mathlib/algebra/module/bimodule.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p139.lean", - "formal/afp/AutoFocus-Stream/IL_AF_Stream_Exec.thy", - "formal/hol/Help/IMP_ANTISYM_RULE.doc", - "formal/afp/Myhill-Nerode/Non_Regular_Languages.thy", - "formal/lean/mathlib/analysis/normed_space/mazur_ulam.lean", - "formal/afp/Amortized_Complexity/Skew_Heap_Analysis.thy", - "formal/afp/RSAPSS/WordOperations.thy", - "formal/afp/Hyperdual/LogisticFunction.thy", - "formal/afp/UTP/utp/utp_rel.thy", - "formal/lean/mathlib/data/mv_polynomial/equiv.lean", - "formal/afp/Liouville_Numbers/Liouville_Numbers_Misc.thy", - "formal/afp/Berlekamp_Zassenhaus/More_Missing_Multiset.thy", - "formal/afp/Correctness_Algebras/Tests.thy", - "formal/mizar/asympt_0.miz", - 
"formal/afp/Logging_Independent_Anonymity/Anonymity.thy", - "formal/afp/Regular_Tree_Relations/Regular_Relation_Impl.thy", - "formal/hol/Examples/zolotarev.ml", - "formal/lean/liquid/for_mathlib/real.lean", - "formal/afp/MSO_Regex_Equivalence/WS1S_Examples.thy", - "formal/hol/Help/NUM_EXP_CONV.doc", - "formal/mizar/bor_cant.miz", - "formal/mizar/msualg_4.miz", - "formal/afp/Slicing/JinjaVM/JVMCFG.thy", - "formal/mizar/prelamb.miz", - "formal/afp/Locally-Nameless-Sigma/preliminary/ListPre.thy", - "formal/afp/Density_Compiler/PDF_Transformations.thy", - "formal/hol/Rqe/inferpsign_thms.ml", - "formal/hol/Examples/schnirelmann.ml", - "formal/lean/mathlib/order/filter/countable_Inter.lean", - "formal/lean/mathlib/ring_theory/ring_invo.lean", - "formal/mizar/polynom4.miz", - "formal/afp/AnselmGod/document/root.tex", - "formal/lean/mathlib/order/filter/prod.lean", - "formal/hol/100/divharmonic.ml", - "formal/afp/KBPs/KBPsAuto.thy", - "formal/afp/Pi_Calculus/Weak_Early_Cong_Subst_Pres.thy", - "formal/hol/Help/NANOCOP.doc", - "formal/afp/Recursion-Addition/recursion.thy", - "formal/afp/VerifyThis2018/lib/Exc_Nres_Monad.thy", - "formal/lean/mathlib/category_theory/closed/ideal.lean", - "formal/afp/List_Update/Bit_Strings.thy", - "formal/hol/Help/num_1.doc", - "formal/afp/CZH_Foundations/czh_semicategories/CZH_SMC_Conclusions.thy", - "formal/afp/Twelvefold_Way/Twelvefold_Way_Entry1.thy", - "formal/afp/LOFT/OpenFlow_Action.thy", - "formal/afp/Routing/ReversePathFiltering.thy", - "formal/afp/Regression_Test_Selection/Common/RTS_safe.thy", - "formal/lean/mathlib/ring_theory/witt_vector/is_poly.lean", - "formal/afp/Probabilistic_System_Zoo/document/root_bnfs.tex", - "formal/afp/Simplex/document/root.tex", - "formal/hol/Help/types.doc", - "formal/lean/perfectoid/sheaves/presheaf.lean", - "formal/mizar/topgen_3.miz", - "formal/afp/Transitive_Models/Partial_Functions_Relative.thy", - "formal/afp/Collections/ICF/impl/ListSetImpl_Invar.thy", - "formal/hol/Help/INT_RING.doc", - "formal/mizar/integr23.miz", - "formal/lean/liquid/laurent_measures/simpler_laurent_measures.lean", - "formal/afp/AODV/variants/a_norreqid/A_Aodv_Predicates.thy", - "formal/afp/Ordinary_Differential_Equations/IVP/Picard_Lindeloef_Qualitative.thy", - "formal/hol/Help/REAL_ARITH_TAC.doc", - "formal/lean/mathlib/linear_algebra/matrix/finite_dimensional.lean", - "formal/afp/Interpreter_Optimizations/Dynamic.thy", - "formal/lean/mathlib/topology/metric_space/hausdorff_dimension.lean", - "formal/hol/Help/possibly.doc", - "formal/afp/Consensus_Refined/MRU/Paxos_Proofs.thy", - "formal/hol/Help/constants.doc", - "formal/lean/mathlib/analysis/complex/arg.lean", - "formal/lean/mathlib/computability/language.lean", - "formal/afp/JinjaThreads/Execute/TypeRelRefine.thy", - "formal/hol/Unity/mk_state_logic.ml", - "formal/afp/LocalLexing/document/root.tex", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_ACLcommunicateWith_impl.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/apb4leq8ta4pb4.lean", - "formal/lean/mathlib/linear_algebra/prod.lean", - "formal/lean/mathlib/data/list/sort.lean", - "formal/coq/odd-order/BGsection8.v", - "formal/lean/mathzoo/mathzoo/olympiads/imo/2001/p2.lean", - "formal/afp/DiskPaxos/document/root.tex", - "formal/afp/JinjaDCI/Common/Decl.thy", - "formal/afp/Core_DOM/common/tests/Document_adoptNode.thy", - "formal/mizar/waybel16.miz", - "formal/afp/Goodstein_Lambda/Goodstein_Lambda.thy", - "formal/afp/UTP/toolkit/FSet_Extra.thy", - "formal/lean/mathlib/group_theory/torsion.lean", - 
"formal/afp/Hermite_Lindemann/More_Algebraic_Numbers_HLW.thy", - "formal/afp/UPF_Firewall/PacketFilter/Ports.thy", - "formal/lean/mathlib/data/setoid/basic.lean", - "formal/mizar/lukasi_1.miz", - "formal/mizar/jct_misc.miz", - "formal/afp/Multi_Party_Computation/Malicious_Defs.thy", - "formal/afp/Dirichlet_Series/More_Totient.thy", - "formal/afp/First_Order_Terms/Abstract_Unification.thy", - "formal/lean/mathlib/order/semiconj_Sup.lean", - "formal/afp/Category3/CartesianClosedCategory.thy", - "formal/afp/Ribbon_Proofs/More_Finite_Map.thy", - "formal/afp/Collections/ICF/spec/AnnotatedListSpec.thy", - "formal/afp/Abs_Int_ITP2012/Abs_Int2.thy", - "formal/lean/mathlib/data/multiset/finset_ops.lean", - "formal/afp/SPARCv8/document/root.tex", - "formal/afp/Decl_Sem_Fun_PL/DenotSoundFSet.thy", - "formal/hol/Help/decreasing.doc", - "formal/hol/Multivariate/transcendentals.ml", - "formal/afp/Regular_Tree_Relations/Util/FSet_Utils.thy", - "formal/afp/CakeML_Codegen/Terms/Consts.thy", - "formal/coq/math-comp/solvable/frobenius.v", - "formal/afp/Treaps/Treap_Sort_and_BSTs.thy", - "formal/mizar/normsp_4.miz", - "formal/afp/Presburger-Automata/DFS.thy", - "formal/lean/liquid/for_mathlib/int.lean", - "formal/afp/Slicing/JinjaVM/JVMCFG_wf.thy", - "formal/afp/Locally-Nameless-Sigma/document/root.tex", - "formal/lean/mathlib/topology/basic.lean", - "formal/afp/CZH_Foundations/czh_sets/CZH_Sets_FSequences.thy", - "formal/afp/LatticeProperties/Lattice_Prop.thy", - "formal/lean/mathlib/analysis/normed_space/int.lean", - "formal/mizar/polyalg1.miz", - "formal/lean/lftcm/hints/category_theory/exercise4/hint3.lean", - "formal/lean/mathlib/algebra/char_p/invertible.lean", - "formal/lean/mathlib/linear_algebra/affine_space/midpoint.lean", - "formal/coq/math-comp/ssreflect/seq.v", - "formal/afp/Closest_Pair_Points/Closest_Pair.thy", - "formal/afp/Extended_Finite_State_Machine_Inference/heuristics/Least_Upper_Bound.thy", - "formal/lean/mathlib/analysis/inner_product_space/conformal_linear_map.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p728.lean", - "formal/afp/VYDRA_MDL/MDL.thy", - "formal/afp/Isabelle_Meta_Model/toy_example/generator/Design_deep.thy", - "formal/afp/Bounded_Deducibility_Security/BD_Security_IO.thy", - "formal/afp/UPF_Firewall/PacketFilter/IPv4_TCPUDP.thy", - "formal/lean/mathlib/topology/metric_space/gromov_hausdorff.lean", - "formal/afp/Word_Lib/Even_More_List.thy", - "formal/lean/sphere-eversion/to_mathlib/unused/cont_mdiff.lean", - "formal/afp/Generic_Deriving/Tagged_Prod_Sum.thy", - "formal/afp/Szemeredi_Regularity/document/root.tex", - "formal/afp/Coinductive/Quotient_Coinductive_List.thy", - "formal/afp/Poincare_Disc/Poincare_Lines_Ideal_Points.thy", - "formal/mizar/card_5.miz", - "formal/lean/mathlib/order/directed.lean", - "formal/lean/mathlib/topology/local_extr.lean", - "formal/afp/Regular-Sets/Relation_Interpretation.thy", - "formal/mizar/gfacirc2.miz", - "formal/afp/Kruskal/Kruskal_Misc.thy", - "formal/afp/Jinja/BV/BVNoTypeError.thy", - "formal/mizar/setwiseo.miz", - "formal/hol/Help/TAUT.doc", - "formal/afp/CoSMeDis/Outer_Friend_Confidentiality/Receiver/Outer_Friend_Receiver_Observation_Setup.thy", - "formal/afp/QR_Decomposition/document/root.tex", - "formal/hol/Help/UNDISCH_ALL.doc", - "formal/afp/AODV/variants/b_fwdrreps/B_Global_Invariants.thy", - "formal/hol/Help/new_type.doc", - "formal/hol/Rqe/rol.ml", - "formal/afp/KBPs/Extra.thy", - "formal/lean/mathlib/group_theory/nielsen_schreier.lean", - "formal/lean/mathlib/analysis/normed_space/riesz_lemma.lean", - 
"formal/afp/Huffman/Huffman.thy", - "formal/afp/Abstract-Rewriting/document/root.tex", - "formal/afp/Hybrid_Systems_VCs/HS_VC_KA_rel.thy", - "formal/hol/Examples/sylvester_gallai.ml", - "formal/afp/AODV/variants/d_fwdrreqs/D_Loop_Freedom.thy", - "formal/afp/Simpl/ex/ClosureEx.thy", - "formal/lean/mathlib/analysis/calculus/fderiv_measurable.lean", - "formal/afp/Finite_Fields/Finite_Fields_Factorization_Ext.thy", - "formal/afp/Dijkstra_Shortest_Path/Weight.thy", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_DG_CAT.thy", - "formal/afp/Shadow_DOM/classes/ShadowRootClass.thy", - "formal/afp/Launchbury/CorrectnessOriginal.thy", - "formal/afp/Binomial-Queues/Binomial_Queue.thy", - "formal/mizar/limfunc1.miz", - "formal/mizar/glibpre0.miz", - "formal/afp/Quasi_Borel_Spaces/Product_QuasiBorel.thy", - "formal/afp/CoSMeDis/Post_Confidentiality/Post_Observation_Setup_RECEIVER.thy", - "formal/afp/UTP/utp/utp_lift.thy", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_SS.thy", - "formal/lean/mathlib/algebra/tropical/big_operators.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2002/b/p6.lean", - "formal/lean/mathlib/number_theory/dioph.lean", - "formal/afp/Deriving/Equality_Generator/Equality_Generator.thy", - "formal/afp/Incredible_Proof_Machine/Indexed_FSet.thy", - "formal/afp/Iptables_Semantics/Primitive_Matchers/Output_Interface_Replace.thy", - "formal/afp/Jordan_Normal_Form/Determinant.thy", - "formal/afp/Lambda_Free_EPO/Lambda_Free_EPO.thy", - "formal/afp/Dynamic_Tables/Tables_real.thy", - "formal/afp/Independence_CH/Demonstrations.thy", - "formal/afp/HereditarilyFinite/document/root.tex", - "formal/mizar/jordan1c.miz", - "formal/afp/Isabelle_Meta_Model/meta_isabelle/Parser_init.thy", - "formal/lean/mathlib/data/nat/upto.lean", - "formal/afp/UTP/toolkit/Infinity.thy", - "formal/lean/liquid/laurent_measures/aux_lemmas.lean", - "formal/lean/mathlib/model_theory/language_map.lean", - "formal/mizar/conmetr.miz", - "formal/afp/Efficient-Mergesort/Natural_Mergesort.thy", - "formal/afp/Collections/Lib/Robdd.thy", - "formal/afp/PAC_Checker/PAC_Misc.thy", - "formal/afp/Hoare_Time/Hoare_Time.thy", - "formal/hol/Help/mk_binary.doc", - "formal/lean/liquid/rescale/normed_group.lean", - "formal/lean/mathlib/data/set/pairwise.lean", - "formal/mizar/roughs_3.miz", - "formal/afp/CakeML_Codegen/Test/Test_Composition.thy", - "formal/afp/Regex_Equivalence/After2.thy", - "formal/lean/liquid/facts/int.lean", - "formal/afp/Goedel_Incompleteness/Abstract_Second_Goedel.thy", - "formal/afp/Key_Agreement_Strong_Adversaries/dhlvl1.thy", - "formal/lean/mathlib/combinatorics/quiver/path.lean", - "formal/coq/math-comp/field/algebraics_fundamentals.v", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/2complexrootspoly_xsqp49eqxp7itxpn7i.lean", - "formal/lean/liquid/for_mathlib/homology_map.lean", - "formal/lean/mathlib/set_theory/cardinal/finite.lean", - "formal/lean/mathlib/topology/algebra/group_completion.lean", - "formal/afp/LLL_Factorization/Sub_Sums.thy", - "formal/afp/CAVA_LTL_Modelchecker/SM/Refine/Type_System.thy", - "formal/afp/DFS_Framework/Examples/DFS_Find_Path.thy", - "formal/coq/math-comp/field/galois.v", - "formal/afp/Iptables_Semantics/Common/Negation_Type_DNF.thy", - "formal/lean/liquid/prop_92/extension_profinite.lean", - "formal/lean/mathlib/category_theory/localization/construction.lean", - "formal/lean/liquid/for_mathlib/Gordan.lean", - "formal/hol/Help/SIMP_TAC.doc", - "formal/lean/mathlib/data/lazy_list/basic.lean", - 
"formal/afp/Diophantine_Eqns_Lin_Hom/Solver_Code.thy", - "formal/lean/mathlib/algebra/regular/smul.lean", - "formal/lean/mathlib/number_theory/ramification_inertia.lean", - "formal/mizar/pascal.miz", - "formal/lean/mathlib/category_theory/abelian/exact.lean", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_DomainHierarchyNG.thy", - "formal/lean/mathlib/measure_theory/function/special_functions.lean", - "formal/afp/Ordinary_Differential_Equations/Numerics/Transfer_ODE.thy", - "formal/lean/mathlib/group_theory/abelianization.lean", - "formal/afp/pGCL/Misc.thy", - "formal/afp/LambdaMu/Syntax.thy", - "formal/afp/Name_Carrying_Type_Inference/Fresh.thy", - "formal/lean/mathlib/order/sup_indep.lean", - "formal/afp/Algebraic_VCs/RKAT.thy", - "formal/afp/ComponentDependencies/DataDependenciesCaseStudy.thy", - "formal/afp/SimplifiedOntologicalArgument/SimpleVariantSEinT.thy", - "formal/lean/mathlib/control/fix.lean", - "formal/mizar/friends1.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p33.lean", - "formal/mizar/partit_2.miz", - "formal/hol/Help/the_overload_skeletons.doc", - "formal/lean/mathlib/algebraic_geometry/projective_spectrum/topology.lean", - "formal/mizar/cousin2.miz", - "formal/lean/mathlib/linear_algebra/matrix/orthogonal.lean", - "formal/afp/Median_Method/document/root.tex", - "formal/lean/liquid/for_mathlib/is_iso_iota.lean", - "formal/afp/Propositional_Proof_Systems/Resolution_Compl.thy", - "formal/mizar/tarski_a.miz", - "formal/afp/Propositional_Proof_Systems/Tseytin.thy", - "formal/lean/mathlib/category_theory/monad/monadicity.lean", - "formal/afp/JinjaThreads/MM/JMM_Framework.thy", - "formal/lean/mathlib/algebra/category/Module/abelian.lean", - "formal/afp/Berlekamp_Zassenhaus/Poly_Mod_Finite_Field_Record_Based.thy", - "formal/lean/liquid/for_mathlib/sum_str.lean", - "formal/lean/mathlib/data/fintype/small.lean", - "formal/lean/mathlib/ring_theory/dedekind_domain/basic.lean", - "formal/lean/mathlib/data/polynomial/hasse_deriv.lean", - "formal/afp/Collections/GenCF/Impl/Impl_RBT_Map.thy", - "formal/lean/mathlib/category_theory/limits/cone_category.lean", - "formal/afp/MiniSail/SubstMethods.thy", - "formal/afp/Monad_Memo_DP/util/Solve_Cong.thy", - "formal/hol/Help/DNF_CONV.doc", - "formal/afp/Shivers-CFA/CPSScheme.thy", - "formal/hol/Boyer_Moore/generalize.ml", - "formal/afp/Groebner_Bases/Buchberger_Examples.thy", - "formal/afp/Modal_Logics_for_NTS/Nominal_Bounded_Set.thy", - "formal/afp/Category/Yoneda.thy", - "formal/afp/Psi_Calculi/Weakening.thy", - "formal/afp/LTL_Master_Theorem/Logical_Characterization/After.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p326.lean", - "formal/lean/mathlib/group_theory/group_action/defs.lean", - "formal/afp/CakeML_Codegen/Test/Test_Embed_Data.thy", - "formal/afp/Constructive_Cryptography_CM/Observe_Failure.thy", - "formal/lean/mathlib/algebra/star/chsh.lean", - "formal/afp/IEEE_Floating_Point/FP64.thy", - "formal/lean/mathlib/category_theory/is_connected.lean", - "formal/mizar/ndiff_9.miz", - "formal/mizar/osalg_3.miz", - "formal/lean/mathlib/group_theory/perm/cycle/concrete.lean", - "formal/afp/HOL-CSP/Hide.thy", - "formal/afp/Hyperdual/AnalyticTestFunction.thy", - "formal/lean/mathlib/category_theory/over.lean", - "formal/afp/Incredible_Proof_Machine/Incredible_Predicate_Tasks.thy", - "formal/afp/Flyspeck-Tame/Quasi_Order.thy", - "formal/afp/Forcing/Relative_Univ.thy", - "formal/afp/Incredible_Proof_Machine/Incredible_Propositional_Tasks.thy", - 
"formal/afp/Refine_Imperative_HOL/benchmarks/Heapmap/isabelle/Heapmap_Bench.thy", - "formal/lean/mathlib/topology/uniform_space/compact_separated.lean", - "formal/afp/Lambda_Free_RPOs/Lambda_Free_Util.thy", - "formal/lean/mathlib/data/matrix/char_p.lean", - "formal/afp/PLM/TAO_7_Axioms.thy", - "formal/lean/mathlib/data/sym/card.lean", - "formal/lean/mathlib/topology/algebra/module/multilinear.lean", - "formal/afp/Matroids/Indep_System.thy", - "formal/hol/Help/new_basic_type_definition.doc", - "formal/afp/JinjaThreads/Common/SystemClasses.thy", - "formal/mizar/partpr_1.miz", - "formal/afp/Core_SC_DOM/common/classes/ObjectClass.thy", - "formal/lean/mathlib/topology/extend_from.lean", - "formal/hol/Minisat/minisat_resolve.ml", - "formal/hol/Help/UNDISCH.doc", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p320.lean", - "formal/lean/mathlib/category_theory/limits/shapes/kernel_pair.lean", - "formal/afp/FeatherweightJava/Featherweight_Java.thy", - "formal/afp/AODV/variants/c_gtobcast/C_Aodv_Loop_Freedom.thy", - "formal/hol/Help/INT_RED_CONV.doc", - "formal/afp/BTree/BTree_Set.thy", - "formal/lean/mathlib/category_theory/linear/yoneda.lean", - "formal/afp/Interpreter_Optimizations/List_util.thy", - "formal/afp/WHATandWHERE_Security/Type_System.thy", - "formal/afp/LTL/Rewriting.thy", - "formal/afp/Universal_Hash_Families/Definitions.thy", - "formal/afp/Stern_Brocot/Bird_Tree.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p342.lean", - "formal/afp/Applicative_Lifting/Applicative_Open_State.thy", - "formal/lean/liquid/for_mathlib/neg_one_pow.lean", - "formal/lean/liquid/pseudo_normed_group/sum_hom.lean", - "formal/afp/UPF_Firewall/Examples/DMZ/DMZ.thy", - "formal/afp/Graph_Theory/Arc_Walk.thy", - "formal/mizar/finseq_8.miz", - "formal/hol/Examples/forster.ml", - "formal/lean/mathlib/category_theory/category/Groupoid.lean", - "formal/lean/liquid/free_pfpng/lemmas.lean", - "formal/lean/mathlib/measure_theory/covering/vitali_family.lean", - "formal/mizar/amistd_3.miz", - "formal/lean/mathlib/number_theory/prime_counting.lean", - "formal/afp/Algebraic_VCs/AVC_KAD/VC_KAD_wf_Examples.thy", - "formal/lean/sphere-eversion/to_mathlib/linear_algebra/basis.lean", - "formal/hol/Help/merge.doc", - "formal/afp/ConcurrentIMP/CIMP_lang.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2003/b/p6.lean", - "formal/afp/Refine_Imperative_HOL/Userguides/Sepref_Chapter_Userguides.thy", - "formal/mizar/topalg_4.miz", - "formal/afp/Echelon_Form/Echelon_Form_Inverse_IArrays.thy", - "formal/afp/Launchbury/Value.thy", - "formal/afp/CZH_Foundations/czh_semicategories/CZH_SMC_Semifunctor.thy", - "formal/hol/Help/EVERY_CONV.doc", - "formal/afp/Modular_arithmetic_LLL_and_HNF_algorithms/Signed_Modulo.thy", - "formal/afp/Groebner_Bases/Buchberger.thy", - "formal/afp/Jinja/BV/BVConform.thy", - "formal/lean/mathlib/data/nat/choose/sum.lean", - "formal/lean/mathlib/analysis/calculus/mean_value.lean", - "formal/afp/Lambda_Free_EPO/document/root.tex", - "formal/lean/mathlib/model_theory/substructures.lean", - "formal/lean/mathlib/topology/sober.lean", - "formal/afp/Call_Arity/EtaExpansionSafe.thy", - "formal/lean/mathlib/category_theory/monoidal/preadditive.lean", - "formal/afp/Propositional_Proof_Systems/MiniFormulas_Sema.thy", - "formal/afp/Call_Arity/SestoftCorrect.thy", - "formal/afp/Factored_Transition_System_Bounding/AcycSspace.thy", - "formal/afp/Auto2_Imperative_HOL/Imperative/DynamicArray.thy", - "formal/afp/Featherweight_OCL/UML_Logic.thy", - 
"formal/afp/Modular_Assembly_Kit_Security/Basics/Projection.thy", - "formal/hol/Examples/miller_rabin.ml", - "formal/afp/JinjaThreads/Framework/FWSemantics.thy", - "formal/afp/Core_DOM/common/tests/Node_removeChild.thy", - "formal/afp/Belief_Revision/AGM_Contraction.thy", - "formal/afp/Prime_Distribution_Elementary/Summatory_Divisor_Sigma_Bounds.thy", - "formal/lean/mathlib/analysis/convex/uniform.lean", - "formal/afp/Refine_Imperative_HOL/IICF/Intf/IICF_Map.thy", - "formal/lean/mathlib/algebra/polynomial/big_operators.lean", - "formal/afp/Van_Emde_Boas_Trees/Imperative_HOL_Time/Imperative_HOL_Time.thy", - "formal/hol/Help/ONCE_DEPTH_SQCONV.doc", - "formal/hol/Help/EXPAND_CASES_CONV.doc", - "formal/afp/Chandy_Lamport/Co_Snapshot.thy", - "formal/hol/Help/EQF_ELIM.doc", - "formal/mizar/pepin.miz", - "formal/hol/Help/UNIFY_ACCEPT_TAC.doc", - "formal/lean/mathlib/data/nat/choose/factorization.lean", - "formal/afp/Real_Time_Deque/Idle_Proof.thy", - "formal/afp/CAVA_LTL_Modelchecker/SM/RefPoint/SM_LTL.thy", - "formal/afp/Core_DOM/common/preliminaries/Hiding_Type_Variables.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p215.lean", - "formal/hol/Help/define_quotient_type.doc", - "formal/afp/Universal_Turing_Machine/Abacus.thy", - "formal/mizar/boolmark.miz", - "formal/hol/Help/EXISTS_TAC.doc", - "formal/afp/CoSMed/Friend_Request_Confidentiality/Friend_Request_Intro.thy", - "formal/mizar/binop_1.miz", - "formal/afp/TLA/State.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/2008/p2.lean", - "formal/lean/mathlib/algebraic_geometry/stalks.lean", - "formal/afp/Core_SC_DOM/common/tests/Document_adoptNode.thy", - "formal/afp/Interpreter_Optimizations/Map_Extra.thy", - "formal/afp/MiniSail/MiniSail.thy", - "formal/coq/odd-order/BGappendixAB.v", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p400.lean", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_TaintingTrusted_impl.thy", - "formal/afp/JinjaThreads/Common/Conform.thy", - "formal/lean/mathlib/algebra/char_zero/quotient.lean", - "formal/mizar/diff_4.miz", - "formal/hol/100/feuerbach.ml", - "formal/lean/perfectoid/continuous_valuations.lean", - "formal/afp/Auto2_Imperative_HOL/Imperative/Connectivity_Impl.thy", - "formal/afp/Differential_Game_Logic/document/root.tex", - "formal/lean/mathlib/order/cover.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p764.lean", - "formal/afp/CAVA_Automata/Digraph.thy", - "formal/afp/Password_Authentication_Protocol/Propaedeutics.thy", - "formal/afp/Abs_Int_ITP2012/document/root.tex", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/abpbcpcageq3_sumaonsqrtapbgeq3onsqrt2.lean", - "formal/lean/mathlib/category_theory/opposites.lean", - "formal/hol/Help/TARGET_REWRITE_TAC.doc", - "formal/afp/Modular_Assembly_Kit_Security/SystemSpecification/StateEventSystems.thy", - "formal/afp/JinjaThreads/Common/ExternalCall.thy", - "formal/mizar/xxreal_2.miz", - "formal/afp/Free-Groups/document/root.tex", - "formal/afp/Physical_Quantities/ISQ_Algebra.thy", - "formal/afp/GPU_Kernel_PL/KPL_execution_group.thy", - "formal/afp/Group-Ring-Module/Algebra3.thy", - "formal/lean/perfectoid/for_mathlib/topology.lean", - "formal/afp/Transformer_Semantics/Sup_Inf_Preserving_Transformers.thy", - "formal/afp/Tree_Decomposition/ExampleInstantiations.thy", - "formal/afp/LTL/example/Example.thy", - "formal/lean/liquid/for_mathlib/pow_functor.lean", - "formal/mizar/prvect_2.miz", - "formal/afp/Interpreter_Optimizations/Std.thy", - 
"formal/afp/Extended_Finite_State_Machines/Value_Lexorder.thy", - "formal/afp/Auto2_Imperative_HOL/Functional/Dijkstra.thy", - "formal/mizar/altcat_3.miz", - "formal/lean/mathlib/measure_theory/decomposition/signed_hahn.lean", - "formal/coq/math-comp/character/character.v", - "formal/lean/liquid/real_measures/condensed.lean", - "formal/hol/Help/undefine.doc", - "formal/afp/Planarity_Certificates/l4v/lib/OptionMonadWP.thy", - "formal/hol/Help/CONV_RULE.doc", - "formal/afp/Transition_Systems_and_Automata/Basic/Degeneralization_Refine.thy", - "formal/afp/Topology/document/root.tex", - "formal/mizar/topgen_1.miz", - "formal/lean/mathlib/data/qpf/multivariate/basic.lean", - "formal/lean/mathlib/measure_theory/covering/vitali.lean", - "formal/lean/mathlib/field_theory/fixed.lean", - "formal/mizar/csspace3.miz", - "formal/lean/mathlib/algebra/homology/short_exact/abelian.lean", - "formal/afp/Applicative_Lifting/Applicative_Environment.thy", - "formal/coq/math-comp/algebra/ssrnum.v", - "formal/mizar/scpinvar.miz", - "formal/hol/Help/prove.doc", - "formal/mizar/valued_0.miz", - "formal/afp/Propositional_Proof_Systems/Compactness.thy", - "formal/lean/mathlib/analysis/normed_space/continuous_affine_map.lean", - "formal/afp/Probabilistic_Timed_Automata/library/Instantiate_Existentials.thy", - "formal/lean/mathlib/analysis/box_integral/integrability.lean", - "formal/afp/Regular_Tree_Relations/Tree_Automata/Tree_Automata_Pumping.thy", - "formal/afp/Comparison_Sort_Lower_Bound/document/root.tex", - "formal/afp/Error_Function/Error_Function.thy", - "formal/afp/Constructive_Cryptography_CM/Constructions/Diffie_Hellman_CC.thy", - "formal/coq/odd-order/PFsection5.v", - "formal/lean/mathlib/data/int/gcd.lean", - "formal/lean/mathlib/data/polynomial/degree/lemmas.lean", - "formal/afp/Collections/GenCF/Intf/GenCF_Intf_Chapter.thy", - "formal/afp/CoCon/Automation_Setup.thy", - "formal/hol/EC/secp192k1.ml", - "formal/lean/mathlib/data/polynomial/derivative.lean", - "formal/lean/mathlib/data/finset/nat_antidiagonal.lean", - "formal/hol/Help/GEN_PART_MATCH.doc", - "formal/afp/CoSMeDis/Outer_Friend_Confidentiality/Receiver/Outer_Friend_Receiver_State_Indistinguishability.thy", - "formal/mizar/matrprob.miz", - "formal/lean/mathlib/data/list/min_max.lean", - "formal/lean/sphere-eversion/to_mathlib/topology/misc.lean", - "formal/lean/mathlib/algebraic_topology/dold_kan/notations.lean", - "formal/afp/Simpl/ex/VcgEx.thy", - "formal/afp/CRDT/Network.thy", - "formal/hol/Help/REAL_POLY_MUL_CONV.doc", - "formal/afp/AWN/OAWN_SOS_Labels.thy", - "formal/afp/DFS_Framework/Impl/Data/Restr_Impl.thy", - "formal/afp/Verified_SAT_Based_AI_Planning/State_Variable_Representation.thy", - "formal/afp/Integration/document/root.tex", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p435.lean", - "formal/afp/Approximation_Algorithms/Center_Selection.thy", - "formal/afp/Containers/ITP-2013/Clauses.thy", - "formal/mizar/scmfsa_9.miz", - "formal/lean/liquid/for_mathlib/whisker_adjunction.lean", - "formal/lean/mathlib/algebra/homology/short_exact/preadditive.lean", - "formal/hol/Rqe/signs_thms.ml", - "formal/afp/Delta_System_Lemma/Cohen_Posets.thy", - "formal/afp/IMP2/automation/IMP2_Basic_Simpset.thy", - "formal/hol/Help/startup_banner.doc", - "formal/afp/Polynomial_Factorization/Polynomial_Divisibility.thy", - "formal/afp/Finger-Trees/document/root.tex", - "formal/afp/JinjaThreads/J/WWellForm.thy", - "formal/lean/mathlib/data/qpf/multivariate/constructions/fix.lean", - "formal/afp/Featherweight_OCL/UML_State.thy", - 
"formal/afp/Modal_Logics_for_NTS/Residual.thy", - "formal/afp/Auto2_Imperative_HOL/Imperative/Arrays_Impl.thy", - "formal/afp/Free-Boolean-Algebra/document/root.tex", - "formal/afp/Launchbury/Value-Nominal.thy", - "formal/mizar/hilbert3.miz", - "formal/afp/Collections/ICF/impl/TrieMapImpl.thy", - "formal/coq/math-comp/algebra/ssrint.v", - "formal/afp/Launchbury/C-restr.thy", - "formal/lean/mathlib/algebra/lie/nilpotent.lean", - "formal/afp/Linear_Recurrences/Rational_FPS_Asymptotics.thy", - "formal/afp/Flyspeck-Tame/Rotation.thy", - "formal/afp/Metalogic_ProofChecker/TheoryExe.thy", - "formal/lean/mathlib/category_theory/limits/pi.lean", - "formal/lean/mathlib/topology/algebra/order/left_right.lean", - "formal/afp/Progress_Tracking/Auxiliary.thy", - "formal/afp/Linear_Recurrences/document/root.tex", - "formal/afp/JinjaDCI/Compiler/Hidden.thy", - "formal/afp/GPU_Kernel_PL/KPL_execution_kernel.thy", - "formal/afp/HOLCF-Prelude/examples/HLint.thy", - "formal/afp/Recursion-Theory-I/CPair.thy", - "formal/lean/liquid/for_mathlib/bicartesian3.lean", - "formal/afp/Refine_Imperative_HOL/Sepref_Chapter_Tool.thy", - "formal/coq/math-comp/field/fieldext.v", - "formal/mizar/robbins5.miz", - "formal/afp/WOOT_Strong_Eventual_Consistency/Consistency.thy", - "formal/lean/mathlib/analysis/special_functions/complex/log_deriv.lean", - "formal/lean/sphere-eversion/to_mathlib/order/filter/eventually_constant.lean", - "formal/hol/Tutorial/HOL_basics.ml", - "formal/afp/Iptables_Semantics/Primitive_Matchers/Common_Primitive_Matcher.thy", - "formal/afp/Fishers_Inequality/Set_Multiset_Extras.thy", - "formal/hol/LP_arith/make.ml", - "formal/hol/Rqe/inferisign.ml", - "formal/hol/Help/foldr.doc", - "formal/afp/IsaNet/Network_Assumptions.thy", - "formal/hol/miz3/Samples/bug2.ml", - "formal/afp/Jordan_Hoelder/SndIsomorphismGrp.thy", - "formal/mizar/cayldick.miz", - "formal/afp/IMP2_Binary_Heap/IMP2_Binary_Heap.thy", - "formal/afp/Euler_MacLaurin/Euler_MacLaurin.thy", - "formal/lean/mathlib/data/set/intervals/ord_connected.lean", - "formal/hol/Help/tysubst.doc", - "formal/afp/Transformer_Semantics/Kleisli_Transformers.thy", - "formal/afp/Iptables_Semantics/Common/WordInterval_Lists.thy", - "formal/lean/mathlib/order/filter/extr.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2003/a/p23.lean", - "formal/afp/Promela/PromelaStatistics.thy", - "formal/afp/Isabelle_Meta_Model/isabelle_home/src/HOL/Isabelle_Main1.thy", - "formal/lean/liquid/pseudo_normed_group/category/ProFiltPseuNormGrpWithTinv.lean", - "formal/lean/mathlib/field_theory/intermediate_field.lean", - "formal/afp/Hoare_Time/Discussion.thy", - "formal/afp/Random_Graph_Subgraph_Threshold/Ugraph_Misc.thy", - "formal/afp/Gale_Shapley/document/root.tex", - "formal/afp/JinjaThreads/MM/JMM_Interp.thy", - "formal/mizar/cayley.miz", - "formal/hol/Help/remove_type_abbrev.doc", - "formal/afp/Separation_Logic_Imperative_HOL/Examples/Union_Find.thy", - "formal/lean/mathlib/topology/order/hom/esakia.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2000/p12.lean", - "formal/hol/Help/FIRST_ASSUM.doc", - "formal/mizar/numerals.miz", - "formal/afp/Sturm_Sequences/Lib/Sturm_Library_Document.thy", - "formal/lean/mathlib/group_theory/free_abelian_group.lean", - "formal/afp/Featherweight_OCL/collection_types/UML_Sequence.thy", - "formal/afp/KAD/Modal_Kleene_Algebra_Applications.thy", - "formal/mizar/finseq_9.miz", - "formal/lean/lftcm/solutions/tuesday/logic.lean", - "formal/afp/Call_Arity/CoCallImplTTree.thy", - "formal/afp/Weighted_Path_Order/KBO_as_WPO.thy", - 
"formal/lean/mathlib/control/bifunctor.lean", - "formal/hol/Help/DISJ_CASES_TAC.doc", - "formal/afp/Iptables_Semantics/Call_Return_Unfolding.thy", - "formal/afp/Mereology/GMM.thy", - "formal/afp/Inductive_Confidentiality/DolevYao/ConfidentialityDY.thy", - "formal/afp/ConcurrentGC/Global_Invariants_Lemmas.thy", - "formal/afp/Refine_Imperative_HOL/IICF/Sepref_Chapter_IICF.thy", - "formal/afp/Ordinal/OrdinalOmega.thy", - "formal/afp/Jacobson_Basic_Algebra/Ring_Theory.thy", - "formal/lean/mathlib/ring_theory/localization/integer.lean", - "formal/afp/Berlekamp_Zassenhaus/Berlekamp_Hensel.thy", - "formal/afp/UTP/utp/utp_expr_insts.thy", - "formal/hol/Help/print_thm.doc", - "formal/afp/VYDRA_MDL/NFA.thy", - "formal/afp/FOL_Seq_Calc2/ProverLemmas.thy", - "formal/mizar/taylor_1.miz", - "formal/lean/liquid/for_mathlib/endomorphisms/ab4.lean", - "formal/afp/Launchbury/EverythingAdequacy.thy", - "formal/lean/liquid/prop819.lean", - "formal/afp/Weighted_Path_Order/LPO.thy", - "formal/lean/liquid/examples/Ext.lean", - "formal/mizar/nagata_1.miz", - "formal/lean/mathlib/ring_theory/localization/at_prime.lean", - "formal/afp/Registers/Laws_Classical.thy", - "formal/afp/MiniSail/Typing.thy", - "formal/afp/Word_Lib/Typedef_Morphisms.thy", - "formal/afp/KBPs/Kripke.thy", - "formal/afp/AODV/All.thy", - "formal/afp/Akra_Bazzi/Akra_Bazzi_Library.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/numbertheory/xsqpysqintdenomeq.lean", - "formal/lean/mathlib/analysis/convex/specific_functions.lean", - "formal/lean/mathlib/linear_algebra/matrix/invariant_basis_number.lean", - "formal/hol/Help/CONJ_ACI_RULE.doc", - "formal/afp/Modal_Logics_for_NTS/L_Transform.thy", - "formal/coq/analysis/altreals/discrete.v", - "formal/mizar/ndiff_7.miz", - "formal/mizar/jordan3.miz", - "formal/hol/Help/parse_inductive_type_specification.doc", - "formal/lean/mathlib/data/zmod/parity.lean", - "formal/afp/pGCL/pGCL.thy", - "formal/afp/AutoFocus-Stream/IL_AF_Stream.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p302.lean", - "formal/afp/InfPathElimination/LTS.thy", - "formal/afp/Akra_Bazzi/document/root.tex", - "formal/afp/IMP2/IMP2.thy", - "formal/afp/KBPs/KBPs.thy", - "formal/lean/mathlib/data/char.lean", - "formal/afp/Isabelle_C/C11-FrontEnd/src/C_Command.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2020/a/p13.lean", - "formal/lean/mathlib/group_theory/perm/option.lean", - "formal/lean/mathlib/measure_theory/integral/interval_integral.lean", - "formal/hol/Help/WEAK_CNF_CONV.doc", - "formal/afp/Psi_Calculi/Tau_Laws_Weak.thy", - "formal/lean/mathlib/analysis/special_functions/arsinh.lean", - "formal/lean/mathlib/computability/partrec.lean", - "formal/afp/Pratt_Certificate/Pratt_Certificate.thy", - "formal/afp/CISC-Kernel/trace/Rushby-with-Control/ISK.thy", - "formal/mizar/scmpds_9.miz", - "formal/afp/CoreC++/State.thy", - "formal/lean/perfectoid/sheaves/stalk.lean", - "formal/mizar/fintopo3.miz", - "formal/mizar/nat_2.miz", - "formal/afp/Independence_CH/Internal_ZFC_Axioms.thy", - "formal/afp/Mereology/M.thy", - "formal/hol/Help/ASM_CASES_TAC.doc", - "formal/afp/JinjaThreads/MM/JMM_Typesafe.thy", - "formal/hol/Help/mk_fun_ty.doc", - "formal/mizar/yellow14.miz", - "formal/afp/Collections/ICF/impl/ArrayHashMap.thy", - "formal/afp/Regular_Tree_Relations/Util/Basic_Utils.thy", - "formal/afp/Refine_Imperative_HOL/Lib/Sepref_Misc.thy", - "formal/afp/Berlekamp_Zassenhaus/Karatsuba_Multiplication.thy", - "formal/lean/mathlib/analysis/normed/group/infinite_sum.lean", - 
"formal/afp/Ordinary_Differential_Equations/Library/Gronwall.thy", - "formal/afp/Containers/Containers.thy", - "formal/afp/Hoare_Time/Com.thy", - "formal/lean/mathlib/analysis/normed_space/star/basic.lean", - "formal/afp/JinjaThreads/J/State.thy", - "formal/lean/mathlib/set_theory/game/ordinal.lean", - "formal/lean/mathlib/category_theory/limits/preserves/shapes/binary_products.lean", - "formal/afp/CakeML_Codegen/Terms/General_Rewriting.thy", - "formal/lean/mathlib/topology/category/CompHaus/default.lean", - "formal/afp/Extended_Finite_State_Machine_Inference/code-targets/Code_Target_FSet.thy", - "formal/afp/Quasi_Borel_Spaces/Binary_Product_QuasiBorel.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p51.lean", - "formal/afp/Decl_Sem_Fun_PL/DenotCompleteFSet.thy", - "formal/mizar/uniform1.miz", - "formal/lean/mathlib/analysis/analytic/inverse.lean", - "formal/afp/Security_Protocol_Refinement/Key_establish/m1_keydist_inrn.thy", - "formal/mizar/afvect01.miz", - "formal/hol/Help/mk_pair.doc", - "formal/afp/JinjaThreads/MM/JMM_JVM.thy", - "formal/hol/Help/SPEC_TAC.doc", - "formal/lean/liquid/for_mathlib/homological_complex.lean", - "formal/afp/BenOr_Kozen_Reif/Matrix_Equation_Construction.thy", - "formal/afp/CZH_Foundations/czh_semicategories/CZH_SMC_Semicategory.thy", - "formal/afp/UTP/utp/utp_healthy.thy", - "formal/mizar/scmfsa_m.miz", - "formal/lean/perfectoid/for_mathlib/topological_rings.lean", - "formal/afp/LTL_Master_Theorem/Logical_Characterization/Restricted_Master_Theorem.thy", - "formal/afp/Separation_Algebra/ex/capDL/Abstract_Separation_D.thy", - "formal/lean/mathlib/model_theory/fraisse.lean", - "formal/lean/mathlib/analysis/special_functions/pow_deriv.lean", - "formal/afp/Call_Arity/BalancedTraces.thy", - "formal/afp/Syntax_Independent_Logic/Deduction.thy", - "formal/afp/Core_DOM/common/preliminaries/Heap_Error_Monad.thy", - "formal/lean/mathlib/data/bracket.lean", - "formal/afp/Collections/Lib/Assoc_List.thy", - "formal/coq/math-comp/test_suite/test_rat.v.out", - "formal/hol/Help/type_invention_warning.doc", - "formal/lean/mathlib/analysis/normed_space/extr.lean", - "formal/lean/mathlib/category_theory/limits/shapes/terminal.lean", - "formal/afp/Randomised_Social_Choice/Order_Predicates.thy", - "formal/mizar/binpack1.miz", - "formal/afp/Monad_Memo_DP/util/Tracing.thy", - "formal/afp/VerifyThis2019/Challenge1A.thy", - "formal/afp/JinjaThreads/Framework/FWLiftingSem.thy", - "formal/afp/Kleene_Algebra/PHL_DRA.thy", - "formal/lean/mathlib/order/filter/bases.lean", - "formal/afp/Containers/Mapping_Impl.thy", - "formal/lean/mathlib/group_theory/commuting_probability.lean", - "formal/hol/Help/net_of_cong.doc", - "formal/afp/Ribbon_Proofs/Ribbons_Graphical.thy", - "formal/afp/CAVA_LTL_Modelchecker/SM/Refine/SM_Finite_Reachable.thy", - "formal/afp/AODV/variants/c_gtobcast/C_Aodv_Message.thy", - "formal/afp/Hidden_Markov_Models/document/root.tex", - "formal/hol/LP_arith/lp_arith.ml", - "formal/lean/perfectoid/power_bounded.lean", - "formal/lean/mathlib/analysis/normed/group/completion.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p22.lean", - "formal/hol/Help/dest_binary.doc", - "formal/lean/mathlib/data/W/basic.lean", - "formal/afp/Simple_Firewall/SimpleFw_Syntax.thy", - "formal/hol/Unity/mk_leadsto.ml", - "formal/hol/Complex/quelim_examples.ml", - "formal/lean/sphere-eversion/local/ample.lean", - "formal/afp/Optimal_BST/document/root.tex", - "formal/lean/mathlib/analysis/quaternion.lean", - "formal/afp/SIFUM_Type_Systems/TypeSystem.thy", - 
"formal/lean/mathlib/analysis/convex/integral.lean", - "formal/afp/AWN/Qmsg_Lifting.thy", - "formal/afp/Network_Security_Policy_Verification/Network_Security_Policy_Verification.thy", - "formal/coq/math-comp/ssreflect/ssrbool.v", - "formal/afp/Lambda_Free_KBOs/Lambda_Free_KBOs.thy", - "formal/lean/mathlib/testing/slim_check/testable.lean", - "formal/lean/mathlib/algebraic_geometry/presheafed_space.lean", - "formal/afp/Consensus_Refined/Voting.thy", - "formal/hol/Multivariate/metric.ml", - "formal/lean/liquid/Lbar/ext_aux4.lean", - "formal/mizar/measure6.miz", - "formal/afp/Gauss_Jordan/Gauss_Jordan.thy", - "formal/afp/Complete_Non_Orders/Fixed_Points.thy", - "formal/afp/HOLCF-Prelude/examples/Fibs.thy", - "formal/lean/liquid/for_mathlib/universal_delta_functor/basic.lean", - "formal/afp/Propositional_Proof_Systems/Consistency.thy", - "formal/mizar/scheme1.miz", - "formal/afp/Landau_Symbols/Landau_Library.thy", - "formal/afp/Hybrid_Systems_VCs/ModalKleeneAlgebra/HS_VC_MKA_Examples_rel.thy", - "formal/afp/IP_Addresses/IP_Address_toString.thy", - "formal/lean/mathlib/category_theory/yoneda.lean", - "formal/afp/Automatic_Refinement/Parametricity/Param_Chapter.thy", - "formal/lean/mathlib/topology/algebra/filter_basis.lean", - "formal/hol/Help/INT_OF_REAL_THM.doc", - "formal/afp/Pi_Calculus/Late_Semantics.thy", - "formal/afp/Cayley_Hamilton/document/root.tex", - "formal/afp/FOL_Harrison/FOL_Harrison.thy", - "formal/afp/JinjaThreads/BV/JVMDeadlocked.thy", - "formal/lean/mathlib/set_theory/ordinal/basic.lean", - "formal/afp/Gauss_Jordan/Code_Generation_IArrays.thy", - "formal/lean/mathlib/analysis/box_integral/basic.lean", - "formal/lean/mathlib/analysis/complex/schwarz.lean", - "formal/afp/Nat-Interval-Logic/IL_Interval.thy", - "formal/lean/mathlib/data/int/basic.lean", - "formal/afp/Stable_Matching/COP.thy", - "formal/afp/PSemigroupsConvolution/Partial_Semigroups.thy", - "formal/lean/liquid/for_mathlib/monoidal_category.lean", - "formal/afp/Stateful_Protocol_Composition_and_Typing/More_Unification.thy", - "formal/afp/Ordinary_Differential_Equations/Library/Vector_Derivative_On.thy", - "formal/afp/Package_logic/document/root.tex", - "formal/afp/IMP_Compiler/document/root.tex", - "formal/lean/mathlib/algebra/is_prime_pow.lean", - "formal/afp/Propositional_Proof_Systems/ND_Compl_Truthtable_Compact.thy", - "formal/afp/BirdKMP/document/root.tex", - "formal/afp/Iptables_Semantics/Examples/topoS_generated/Analyze_topos_generated.thy", - "formal/afp/Smooth_Manifolds/Bump_Function.thy", - "formal/coq/math-comp/solvable/abelian.v", - "formal/afp/ConcurrentIMP/CIMP.thy", - "formal/afp/Stream_Fusion_Code/Stream_Fusion_Examples.thy", - "formal/lean/liquid/for_mathlib/coprod_op.lean", - "formal/lean/mathlib/data/finite/basic.lean", - "formal/afp/Real_Time_Deque/Current_Proof.thy", - "formal/afp/HotelKeyCards/document/intro.tex", - "formal/afp/HRB-Slicing/Proc/Com.thy", - "formal/afp/Modular_Assembly_Kit_Security/SecuritySpecification/BasicSecurityPredicates.thy", - "formal/lean/mathlib/algebra/category/Group/abelian.lean", - "formal/afp/Shadow_DOM/tests/my_get_owner_document.thy", - "formal/mizar/altcat_4.miz", - "formal/afp/UTP/utp/utp_rel_opsem.thy", - "formal/afp/PAC_Checker/More_Loops.thy", - "formal/afp/Concurrent_Ref_Alg/Parallel.thy", - "formal/lean/mathlib/category_theory/functor/default.lean", - "formal/afp/Slicing/StaticIntra/CDepInstantiations.thy", - "formal/lean/mathlib/analysis/special_functions/trigonometric/inverse.lean", - "formal/lean/mathlib/analysis/inner_product_space/adjoint.lean", - 
"formal/hol/Permutation/permuted.ml", - "formal/afp/FO_Theory_Rewriting/Rewriting/Rewriting.thy", - "formal/coq/analysis/cardinality.v", - "formal/lean/sphere-eversion/to_mathlib/unused/linear_algebra/multilinear.lean", - "formal/afp/Game_Based_Crypto/document/fig-4.tex", - "formal/hol/Help/STRIP_ASSUME_TAC.doc", - "formal/afp/Interpreter_Optimizations/Result.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p55.lean", - "formal/afp/SC_DOM_Components/document/root.tex", - "formal/hol/Help/ASM_SIMP_TAC.doc", - "formal/afp/Gauss_Jordan/System_Of_Equations.thy", - "formal/lean/mathlib/data/list/duplicate.lean", - "formal/afp/Automatic_Refinement/Tool/Autoref_Tool.thy", - "formal/lean/mathlib/order/category/omega_complete_partial_order.lean", - "formal/lean/mathlib/combinatorics/additive/behrend.lean", - "formal/hol/Library/words.ml", - "formal/afp/HOLCF-Prelude/HOLCF_Main.thy", - "formal/afp/Algebraic_VCs/AVC_KAD/VC_KAD_wf.thy", - "formal/lean/mathlib/data/nat/choose/default.lean", - "formal/afp/Category3/CategoryWithPullbacks.thy", - "formal/afp/Deriving/Comparator_Generator/RBT_Compare_Order_Impl.thy", - "formal/hol/Help/DISCH_TAC.doc", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p17.lean", - "formal/afp/Generic_Deriving/document/root.tex", - "formal/afp/LocalLexing/TheoremD2.thy", - "formal/afp/Nullstellensatz/Lex_Order_PP.thy", - "formal/afp/Planarity_Certificates/l4v/lib/Lib.thy", - "formal/afp/Pi_Calculus/Late_Hennessy.thy", - "formal/lean/mathlib/data/int/char_zero.lean", - "formal/afp/Category3/Functor.thy", - "formal/lean/mathlib/algebra/ring/ulift.lean", - "formal/lean/mathlib/analysis/complex/real_deriv.lean", - "formal/afp/Ribbon_Proofs/Ribbons_Stratified.thy", - "formal/afp/Ordinary_Differential_Equations/Ex/Lorenz/C0/Lorenz_C0.thy", - "formal/lean/liquid/for_mathlib/nat_trans.lean", - "formal/lean/mathlib/ring_theory/witt_vector/compare.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p457.lean", - "formal/mizar/scmfsa6b.miz", - "formal/afp/Refine_Imperative_HOL/IICF/Intf/IICF_List.thy", - "formal/mizar/xreal_0.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p190.lean", - "formal/afp/CZH_Universal_Constructions/czh_ucategories/CZH_UCAT_PWKan.thy", - "formal/hol/Help/PAT_CONV.doc", - "formal/afp/Smith_Normal_Form/Diagonal_To_Smith_JNF.thy", - "formal/afp/Psi_Calculi/Weak_Cong_Simulation.thy", - "formal/afp/Consensus_Refined/MRU_Vote.thy", - "formal/coq/odd-order/BGsection3.v", - "formal/afp/Orbit_Stabiliser/Tetrahedron.thy", - "formal/afp/LatticeProperties/WellFoundedTransitive.thy", - "formal/afp/Selection_Heap_Sort/SelectionSort_Functional.thy", - "formal/hol/Examples/dickson.ml", - "formal/afp/Independence_CH/Definitions_Main.thy", - "formal/lean/mathlib/measure_theory/measurable_space.lean", - "formal/lean/mathlib/algebra/abs.lean", - "formal/afp/IMP_Compiler/Compiler2.thy", - "formal/afp/Word_Lib/More_Word_Operations.thy", - "formal/afp/Constructive_Cryptography_CM/More_CC.thy", - "formal/afp/Minsky_Machines/Recursive_Inseparability.thy", - "formal/afp/VolpanoSmith/secTypes.thy", - "formal/lean/mathlib/data/int/range.lean", - "formal/afp/Core_DOM/standard/pointers/ShadowRootPointer.thy", - "formal/afp/Dijkstra_Shortest_Path/Dijkstra_Impl_Adet.thy", - "formal/lean/liquid/for_mathlib/simplicial/iso.lean", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_BLPbasic_impl.thy", - "formal/afp/Monomorphic_Monad/Interpreter.thy", - "formal/afp/Recursion-Theory-I/RecEnSet.thy", - 
"formal/lean/mathlib/category_theory/preadditive/hom_orthogonal.lean", - "formal/afp/Slicing/JinjaVM/JVMPostdomination.thy", - "formal/afp/Noninterference_Concurrent_Composition/document/root.tex", - "formal/lean/mathlib/topology/algebra/order/extr_closure.lean", - "formal/afp/Mersenne_Primes/Lucas_Lehmer_Auxiliary.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2020/a/p7.lean", - "formal/afp/VectorSpace/document/root.tex", - "formal/mizar/bcialg_3.miz", - "formal/afp/JinjaThreads/MM/JMM_Type2.thy", - "formal/afp/CakeML/CakeML_Quickcheck.thy", - "formal/afp/Transition_Systems_and_Automata/Automata/Nondeterministic.thy", - "formal/mizar/simplex2.miz", - "formal/afp/Chandy_Lamport/Util.thy", - "formal/lean/mathlib/topology/algebra/group.lean", - "formal/afp/Network_Security_Policy_Verification/Examples/Imaginary_Factory_Network.thy", - "formal/mizar/radix_2.miz", - "formal/afp/Pi_Calculus/Weak_Early_Bisim_Subst_SC.thy", - "formal/afp/Monad_Memo_DP/heap_monad/Heap_Default.thy", - "formal/afp/Abortable_Linearizable_Modules/SLin.thy", - "formal/afp/Jinja/J/Examples.thy", - "formal/lean/lftcm/hints/category_theory/exercise3/hint3.lean", - "formal/afp/TESL_Language/Operational.thy", - "formal/mizar/gfacirc1.miz", - "formal/hol/Examples/safetyliveness.ml", - "formal/afp/Subresultants/Subresultant.thy", - "formal/afp/Formula_Derivatives/WS1S_Prelim.thy", - "formal/afp/Presburger-Automata/document/root.tex", - "formal/hol/EC/edmont.ml", - "formal/mizar/borsuk_7.miz", - "formal/hol/Help/is_iff.doc", - "formal/lean/mathlib/data/list/rotate.lean", - "formal/mizar/rmod_4.miz", - "formal/afp/CoSMeDis/Post_Confidentiality/Independent_Posts/Independent_Post_RECEIVER.thy", - "formal/afp/Subresultants/Binary_Exponentiation.thy", - "formal/afp/Physical_Quantities/ISQ.thy", - "formal/afp/Key_Agreement_Strong_Adversaries/Message_derivation.thy", - "formal/hol/Help/ISPECL.doc", - "formal/afp/Tree-Automata/Ta.thy", - "formal/afp/Goedel_Incompleteness/Loeb_Formula.thy", - "formal/lean/mathlib/category_theory/limits/preserves/shapes/equalizers.lean", - "formal/afp/Iptables_Semantics/Examples/TUM_Net_Firewall/TUM_Spoofing_new3.thy", - "formal/afp/First_Order_Terms/Unifiers.thy", - "formal/mizar/xboole_0.miz", - "formal/lean/mathlib/algebra/big_operators/part_enat.lean", - "formal/afp/Transitive_Models/M_Basic_No_Repl.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/sqineq_unitcircatbpamblt1.lean", - "formal/afp/Groebner_Bases/Reduction.thy", - "formal/mizar/euclid.miz", - "formal/mizar/euclid_5.miz", - "formal/afp/Iptables_Semantics/Examples/Parser_Test/Parser_Test.thy", - "formal/afp/Refine_Imperative_HOL/Sepref_Definition.thy", - "formal/afp/CAVA_LTL_Modelchecker/SM/Analysis/SM_Indep.thy", - "formal/afp/IsaNet/Parametrized_Dataplane_1.thy", - "formal/coq/math-comp/solvable/alt.v", - "formal/afp/Separation_Logic_Imperative_HOL/Assertions.thy", - "formal/afp/Weighted_Path_Order/WPO.thy", - "formal/afp/Bounded_Deducibility_Security/Compositional_Reasoning.thy", - "formal/lean/mathlib/data/prod/pprod.lean", - "formal/mizar/xboolean.miz", - "formal/mizar/ndiff_2.miz", - "formal/lean/liquid/condensed/adjunctions2.lean", - "formal/afp/Call_Arity/ArityAnalysisFix.thy", - "formal/afp/Affine_Arithmetic/Affine_Approximation.thy", - "formal/mizar/ring_3.miz", - "formal/lean/mathlib/data/fin/tuple/default.lean", - "formal/afp/Fishburn_Impossibility/Fishburn_Impossibility.thy", - "formal/lean/mathlib/topology/category/Top/basic.lean", - "formal/afp/Simple_Firewall/SimpleFw_toString.thy", - 
"formal/lean/liquid/condensed/short_exact.lean", - "formal/mizar/poset_2.miz", - "formal/afp/KD_Tree/Build.thy", - "formal/afp/Jinja/BV/EffectMono.thy", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_Subnets.thy", - "formal/afp/Refine_Imperative_HOL/Sepref_Translate.thy", - "formal/afp/JinjaThreads/Common/Heap.thy", - "formal/lean/mathlib/category_theory/preadditive/projective_resolution.lean", - "formal/lean/mathlib/model_theory/satisfiability.lean", - "formal/lean/mathlib/category_theory/closed/types.lean", - "formal/hol/Help/PATH_CONV.doc", - "formal/afp/Card_Number_Partitions/document/root.tex", - "formal/afp/Monad_Memo_DP/example/Example_Misc.thy", - "formal/afp/CoreC++/WellTypeRT.thy", - "formal/afp/Collections/Iterator/SetIteratorGA.thy", - "formal/afp/Transformer_Semantics/document/root.tex", - "formal/lean/liquid/for_mathlib/AddCommGroup/explicit_limits.lean", - "formal/afp/Affine_Arithmetic/Straight_Line_Program.thy", - "formal/afp/Rank_Nullity_Theorem/Mod_Type.thy", - "formal/afp/KBPs/SPRViewNonDetIndInit.thy", - "formal/lean/mathlib/category_theory/monoidal/opposite.lean", - "formal/afp/Word_Lib/Singleton_Bit_Shifts.thy", - "formal/afp/Partial_Function_MR/document/root.tex", - "formal/lean/mathlib/number_theory/zsqrtd/gaussian_int.lean", - "formal/afp/Decl_Sem_Fun_PL/Values.thy", - "formal/afp/SimplifiedOntologicalArgument/SimpleVariantSE.thy", - "formal/hol/Help/ITAUT_TAC.doc", - "formal/lean/mathlib/testing/slim_check/gen.lean", - "formal/lean/liquid/for_mathlib/homotopy_category_lemmas.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p84.lean", - "formal/hol/Help/INT_MUL_CONV.doc", - "formal/lean/perfectoid/for_mathlib/nonarchimedean/adic_topology.lean", - "formal/hol/Multivariate/degree.ml", - "formal/lean/mathlib/combinatorics/simple_graph/coloring.lean", - "formal/hol/Help/PURE_ASM_SIMP_TAC.doc", - "formal/lean/mathlib/topology/uniform_space/basic.lean", - "formal/hol/Help/then_.doc", - "formal/afp/Hybrid_Multi_Lane_Spatial_Logic/NatInt.thy", - "formal/afp/Featherweight_OCL/basic_types/UML_String.thy", - "formal/afp/Simpl/ex/Closure.thy", - "formal/afp/Deep_Learning/DL_Concrete_Matrices.thy", - "formal/lean/mathlib/category_theory/sites/subsheaf.lean", - "formal/hol/Help/empty_net.doc", - "formal/mizar/rfinseq2.miz", - "formal/lean/mathlib/data/polynomial/algebra_map.lean", - "formal/afp/JinjaDCI/Common/Value.thy", - "formal/afp/Statecharts/Data.thy", - "formal/afp/Real_Time_Deque/RealTimeDeque_Proof.thy", - "formal/lean/mathlib/order/complete_boolean_algebra.lean", - "formal/afp/Separation_Logic_Imperative_HOL/Examples/Circ_List.thy", - "formal/lean/mathlib/algebra/module/pointwise_pi.lean", - "formal/afp/Types_To_Sets_Extension/Examples/TTS_Foundations/Algebra/Set_Semigroups.thy", - "formal/afp/Transformer_Semantics/Kleisli_Quantaloid.thy", - "formal/afp/Virtual_Substitution/DNF.thy", - "formal/afp/Jinja/DFA/Kildall_2.thy", - "formal/hol/Help/prefixes.doc", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p143.lean", - "formal/mizar/prvect_1.miz", - "formal/afp/Twelvefold_Way/Twelvefold_Way_Entry6.thy", - "formal/afp/Gaussian_Integers/Gaussian_Integers_Pythagorean_Triples.thy", - "formal/afp/Coinductive_Languages/Coinductive_Language.thy", - "formal/hol/Help/mk_char.doc", - "formal/lean/liquid/breen_deligne/eval_Pow_functor_nat_trans_compatibility.lean", - "formal/lean/mathlib/analysis/normed_space/indicator_function.lean", - "formal/afp/VerifyThis2019/lib/VTcomp.thy", - 
"formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/absxm1pabsxpabsxp1eqxp2_0leqxleq1.lean", - "formal/afp/First_Welfare_Theorem/Utility_Functions.thy", - "formal/lean/mathlib/ring_theory/localization/fraction_ring.lean", - "formal/afp/Localization_Ring/Localization.thy", - "formal/afp/MFODL_Monitor_Optimized/Optimized_Join.thy", - "formal/coq/analysis/topology.v", - "formal/afp/VYDRA_MDL/Timestamp_Prod.thy", - "formal/lean/mathlib/group_theory/subsemigroup/membership.lean", - "formal/lean/mathlib/data/semiquot.lean", - "formal/afp/Perron_Frobenius/Check_Matrix_Growth.thy", - "formal/afp/CakeML_Codegen/Test/Test_Embed_Simple.thy", - "formal/afp/JinjaThreads/Framework/Bisimulation.thy", - "formal/afp/Extended_Finite_State_Machine_Inference/heuristics/Same_Register.thy", - "formal/afp/Knuth_Bendix_Order/Order_Pair.thy", - "formal/lean/liquid/pseudo_normed_group/QprimeFP.lean", - "formal/afp/CryptHOL/Generative_Probabilistic_Value.thy", - "formal/afp/Launchbury/Vars.thy", - "formal/lean/mathlib/category_theory/limits/constructions/over/default.lean", - "formal/afp/LocalLexing/Validity.thy", - "formal/afp/Core_SC_DOM/common/tests/Core_DOM_BaseTest.thy", - "formal/afp/HRB-Slicing/StaticInter/HRBSlice.thy", - "formal/mizar/nfcont_3.miz", - "formal/lean/mathlib/order/omega_complete_partial_order.lean", - "formal/afp/JinjaThreads/Common/ConformThreaded.thy", - "formal/afp/Decreasing-Diagrams-II/Decreasing_Diagrams_II.thy", - "formal/afp/HotelKeyCards/Notation.thy", - "formal/lean/liquid/for_mathlib/abelian_sheaves/functor_category.lean", - "formal/afp/Linear_Recurrences/Solver/Linear_Recurrences_Pretty.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p521.lean", - "formal/lean/mathlib/linear_algebra/free_module/strong_rank_condition.lean", - "formal/afp/CakeML/generated/CakeML/Ffi.thy", - "formal/lean/mathlib/topology/metric_space/gromov_hausdorff_realized.lean", - "formal/afp/Transitive_Models/Discipline_Function.thy", - "formal/mizar/goboard7.miz", - "formal/mizar/bvfunc_5.miz", - "formal/hol/Help/LIST_INDUCT_TAC.doc", - "formal/hol/100/primerecip.ml", - "formal/lean/liquid/Lbar/basic.lean", - "formal/mizar/partfun1.miz", - "formal/lean/mathlib/algebra/category/Ring/basic.lean", - "formal/afp/Security_Protocol_Refinement/Key_establish/m1_kerberos.thy", - "formal/afp/Budan_Fourier/Sturm_Multiple_Roots.thy", - "formal/afp/ComponentDependencies/document/root.tex", - "formal/lean/mathlib/data/nat/factorial/cast.lean", - "formal/afp/Fishers_Inequality/Fishers_Inequality_Root.thy", - "formal/afp/Case_Labeling/Case_Labeling.thy", - "formal/lean/mathlib/data/matrix/notation.lean", - "formal/afp/Flyspeck-Tame/TameProps.thy", - "formal/afp/PSemigroupsConvolution/document/root.tex", - "formal/afp/Iptables_Semantics/Code_haskell.thy", - "formal/afp/Hoare_Time/Nielson_Sqrt.thy", - "formal/afp/Network_Security_Policy_Verification/document/root.tex", - "formal/lean/mathlib/category_theory/with_terminal.lean", - "formal/mizar/morph_01.miz", - "formal/afp/IP_Addresses/NumberWang_IPv4.thy", - "formal/afp/Algebraic_VCs/AVC_KAT/VC_KAT_scratch.thy", - "formal/afp/List_Update/OPT2.thy", - "formal/mizar/parsp_1.miz", - "formal/mizar/c0sp3.miz", - "formal/afp/Key_Agreement_Strong_Adversaries/Messages.thy", - "formal/hol/Help/USE_THEN.doc", - "formal/afp/CZH_Foundations/czh_digraphs/CZH_DG_DGHM.thy", - "formal/lean/liquid/normed_group/controlled_exactness.lean", - "formal/afp/Propositional_Proof_Systems/CNF.thy", - "formal/mizar/diraf.miz", - "formal/afp/Markov_Models/Discrete_Time_Markov_Process.thy", - 
"formal/afp/Hello_World/RunningCodeFromIsabelle.thy", - "formal/mizar/scmbsort.miz", - "formal/mizar/scmyciel.miz", - "formal/afp/Syntax_Independent_Logic/Deduction_Q.thy", - "formal/afp/UPF_Firewall/StatefulFW/VOIP.thy", - "formal/afp/HyperCTL/document/root.tex", - "formal/afp/HereditarilyFinite/OrdArith.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2009/a/p15.lean", - "formal/afp/Smith_Normal_Form/Diagonalize.thy", - "formal/lean/mathlib/analysis/calculus/fderiv.lean", - "formal/afp/Menger/Helpers.thy", - "formal/hol/Help/mk_uexists.doc", - "formal/lean/mathlib/measure_theory/function/ae_eq_of_integral.lean", - "formal/afp/Prefix_Free_Code_Combinators/document/root.tex", - "formal/afp/Lp/Functional_Spaces.thy", - "formal/afp/Dependent_SIFUM_Type_Systems/Compositionality.thy", - "formal/afp/UTP/utp/utp_alphabet.thy", - "formal/lean/mathlib/data/list/prod_sigma.lean", - "formal/afp/AWN/TransitionSystems.thy", - "formal/afp/LTL_to_DRA/Auxiliary/List2.thy", - "formal/afp/Jinja/DFA/Listn.thy", - "formal/hol/Help/num_10.doc", - "formal/mizar/filter_0.miz", - "formal/mizar/cardfil4.miz", - "formal/afp/Regression_Test_Selection/JinjaSuppl/Subcls.thy", - "formal/afp/Regression_Test_Selection/JinjaSuppl/ClassesChanged.thy", - "formal/mizar/xxreal_1.miz", - "formal/lean/mathlib/topology/algebra/uniform_filter_basis.lean", - "formal/afp/Jordan_Normal_Form/Jordan_Normal_Form.thy", - "formal/lean/mathlib/linear_algebra/matrix/default.lean", - "formal/afp/Refine_Monadic/Refine_Monadic.thy", - "formal/afp/Noninterference_Sequential_Composition/SequentialComposition.thy", - "formal/lean/mathlib/algebra/lie/free.lean", - "formal/afp/Incredible_Proof_Machine/Incredible_Deduction.thy", - "formal/afp/Propositional_Proof_Systems/Compactness_Consistency.thy", - "formal/afp/Jinja/Common/Objects.thy", - "formal/afp/Correctness_Algebras/Base.thy", - "formal/afp/Complex_Geometry/Homogeneous_Coordinates.thy", - "formal/afp/Isabelle_Meta_Model/isabelle_home/src/Tools/Code/Isabelle_code_target.thy", - "formal/hol/Help/INT_REM_DOWN_CONV.doc", - "formal/lean/mathlib/analysis/calculus/affine_map.lean", - "formal/afp/Berlekamp_Zassenhaus/Square_Free_Int_To_Square_Free_GFp.thy", - "formal/lean/mathlib/data/polynomial/induction.lean", - "formal/lean/mathlib/topology/algebra/order/archimedean.lean", - "formal/afp/Modal_Logics_for_NTS/Validity.thy", - "formal/lean/mathlib/order/disjointed.lean", - "formal/afp/Multiset_Ordering_NPC/Propositional_Formula.thy", - "formal/afp/ConcurrentIMP/CIMP_vcg.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/numbertheory/aoddbdiv4asqpbsqmod8eq1.lean", - "formal/hol/QBF/make.ml", - "formal/mizar/gr_cy_1.miz", - "formal/mizar/mesfun9c.miz", - "formal/afp/Independence_CH/Replacement_Instances.thy", - "formal/afp/List_Update/RExp_Var.thy", - "formal/lean/mathlib/group_theory/transfer.lean", - "formal/mizar/boole.miz", - "formal/afp/Complex_Geometry/Moebius.thy", - "formal/lean/mathlib/data/complex/exponential_bounds.lean", - "formal/mizar/scpisort.miz", - "formal/afp/WorkerWrapper/Continuations.thy", - "formal/afp/Call_Arity/ArityAnalysisSig.thy", - "formal/afp/SequentInvertibility/SRCTransforms.thy", - "formal/afp/UTP/utp/utp_dvar.thy", - "formal/mizar/waybel_5.miz", - "formal/afp/Source_Coding_Theorem/Source_Coding_Theorem.thy", - "formal/hol/Help/dest_setenum.doc", - "formal/afp/AODV/variants/d_fwdrreqs/D_Quality_Increases.thy", - "formal/afp/Launchbury/CorrectnessResourced.thy", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1991/p9.lean", - 
"formal/lean/mathlib/data/rat/floor.lean", - "formal/lean/mathlib/algebra/lie/skew_adjoint.lean", - "formal/mizar/gtarski3.miz", - "formal/coq/math-comp/field/falgebra.v", - "formal/afp/JinjaThreads/Execute/ExternalCall_Execute.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p284.lean", - "formal/lean/mathlib/probability/stopping.lean", - "formal/mizar/integra9.miz", - "formal/afp/Order_Lattice_Props/Representations.thy", - "formal/afp/Containers/ITP-2013/Benchmark_ICF.thy", - "formal/lean/mathlib/combinatorics/double_counting.lean", - "formal/coq/math-comp/test_suite/imset2_finset.v.out", - "formal/mizar/incsp_1.miz", - "formal/hol/Help/TRY_CONV.doc", - "formal/mizar/lopban12.miz", - "formal/mizar/zmodlat1.miz", - "formal/lean/mathlib/category_theory/noetherian.lean", - "formal/afp/Transitive_Models/document/root.tex", - "formal/lean/mathlib/category_theory/limits/connected.lean", - "formal/afp/UTP/toolkit/document/root.tex", - "formal/hol/Help/STRUCT_CASES_TAC.doc", - "formal/afp/Gabow_SCC/document/root.tex", - "formal/afp/JinjaThreads/Framework/FWCondAction.thy", - "formal/afp/SC_DOM_Components/Shadow_DOM_DOM_Components.thy", - "formal/afp/DFS_Framework/Examples/Feedback_Arcs.thy", - "formal/lean/lftcm/exercises_sources/tuesday/sets.lean", - "formal/hol/Help/inst.doc", - "formal/mizar/real_3.miz", - "formal/afp/JinjaThreads/Execute/ToString.thy", - "formal/afp/Descartes_Sign_Rule/Descartes_Sign_Rule.thy", - "formal/afp/ConcurrentGC/Worklists.thy", - "formal/hol/Arithmetic/godel.ml", - "formal/afp/Universal_Hash_Families/Field.thy", - "formal/hol/Help/ORELSEC.doc", - "formal/mizar/nomin_9.miz", - "formal/afp/LinearQuantifierElim/Thys/CertDlo.thy", - "formal/mizar/kurato_2.miz", - "formal/afp/Mersenne_Primes/Lucas_Lehmer_Code.thy", - "formal/afp/Core_DOM/common/Core_DOM_Tests.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2003/a/p1.lean", - "formal/afp/Saturation_Framework_Extensions/document/root.tex", - "formal/afp/CAVA_LTL_Modelchecker/SM/RefPoint/SM_Semantics.thy", - "formal/afp/Kruskal/UGraph_Impl.thy", - "formal/afp/Registers/Quantum_Extra2.thy", - "formal/lean/mathlib/category_theory/bicategory/coherence_tactic.lean", - "formal/afp/Stone_Relation_Algebras/Relation_Subalgebras.thy", - "formal/afp/Tycon/Resumption_Transformer.thy", - "formal/afp/Twelvefold_Way/Twelvefold_Way_Entry12.thy", - "formal/lean/mathlib/data/list/sigma.lean", - "formal/afp/Prim_Dijkstra_Simple/Directed_Graph_Specs.thy", - "formal/afp/Containers/Collection_Eq.thy", - "formal/lean/mathlib/number_theory/padics/padic_val.lean", - "formal/afp/Graph_Theory/Rtrancl_On.thy", - "formal/mizar/graph_3.miz", - "formal/afp/pGCL/WellDefined.thy", - "formal/afp/ADS_Functor/Generic_ADS_Construction.thy", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_CAT.thy", - "formal/afp/Abortable_Linearizable_Modules/IOA.thy", - "formal/afp/Psi_Calculi/Weak_Stat_Imp_Pres.thy", - "formal/afp/Lower_Semicontinuous/Lower_Semicontinuous.thy", - "formal/hol/Help/ss_of_provers.doc", - "formal/lean/sphere-eversion/global/immersion.lean", - "formal/mizar/margrel1.miz", - "formal/mizar/toprealc.miz", - "formal/lean/liquid/for_mathlib/AddCommGroup/direct_sum_colimit.lean", - "formal/afp/Formal_SSA/Construct_SSA_notriv_code.thy", - "formal/hol/RichterHilbertAxiomGeometry/Topology.ml", - "formal/hol/Help/dest_select.doc", - "formal/mizar/qmax_1.miz", - "formal/afp/Octonions/Octonions.thy", - "formal/afp/Virtual_Substitution/GeneralVSProofs.thy", - "formal/afp/Psi_Calculi/Tau_Chain.thy", - 
"formal/afp/Fishers_Inequality/Vector_Matrix_Mod.thy", - "formal/afp/Forcing/Internalizations.thy", - "formal/hol/Help/IMP_TRANS.doc", - "formal/afp/Tree_Decomposition/TreewidthCompleteGraph.thy", - "formal/lean/liquid/for_mathlib/FreeAb.lean", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1992/p1.lean", - "formal/lean/mathlib/data/int/log.lean", - "formal/lean/mathlib/analysis/analytic/basic.lean", - "formal/afp/Ergodic_Theory/Gouezel_Karlsson.thy", - "formal/mizar/liouvil1.miz", - "formal/afp/Amortized_Complexity/Pairing_Heap_List2_Analysis.thy", - "formal/afp/UPF/SeqComposition.thy", - "formal/lean/mathlib/algebra/support.lean", - "formal/afp/Applicative_Lifting/Applicative_PMF.thy", - "formal/afp/LinearQuantifierElim/Thys/QElin_inf.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2003/a/p24.lean", - "formal/afp/CakeML/generated/Lem_either.thy", - "formal/afp/Planarity_Certificates/Verification/AutoCorres_Misc.thy", - "formal/lean/mathlib/category_theory/lifting_properties/basic.lean", - "formal/lean/mathlib/number_theory/divisors.lean", - "formal/lean/mathlib/category_theory/adjunction/adjoint_functor_theorems.lean", - "formal/hol/Help/make_overloadable.doc", - "formal/hol/Help/DISJ2_TAC.doc", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2021/a/p8.lean", - "formal/lean/mathlib/model_theory/ultraproducts.lean", - "formal/lean/mathlib/data/nat/prime.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2020/a/p15.lean", - "formal/lean/liquid/locally_constant/analysis.lean", - "formal/afp/Sigma_Commit_Crypto/Commitment_Schemes.thy", - "formal/afp/SIFPL/PBIJ.thy", - "formal/afp/Security_Protocol_Refinement/Refinement/Runs.thy", - "formal/lean/mathlib/dynamics/ergodic/measure_preserving.lean", - "formal/hol/Help/RING_AND_IDEAL_CONV.doc", - "formal/afp/Monad_Normalisation/Monad_Normalisation.thy", - "formal/afp/Weighted_Path_Order/Executable_Orders.thy", - "formal/afp/Groebner_Bases/Macaulay_Matrix.thy", - "formal/afp/Auto2_Imperative_HOL/Imperative/IntervalTree_Impl.thy", - "formal/lean/liquid/for_mathlib/chain_complex_cons.lean", - "formal/coq/analysis/misc/uniform_bigO.v", - "formal/mizar/cat_3.miz", - "formal/afp/Differential_Dynamic_Logic/Differential_Dynamic_Logic.thy", - "formal/afp/TLA/Buffer.thy", - "formal/afp/Amortized_Complexity/Pairing_Heap_List1_Analysis2.thy", - "formal/hol/Help/INT_GT_CONV.doc", - "formal/afp/Hybrid_Systems_VCs/PredicateTransformers/HS_VC_PT.thy", - "formal/afp/Ordinary_Differential_Equations/Numerics/One_Step_Method.thy", - "formal/lean/mathlib/data/real/pointwise.lean", - "formal/afp/KBPs/Robot.thy", - "formal/afp/HRB-Slicing/StaticInter/Postdomination.thy", - "formal/afp/Hoare_Time/DiscussionO.thy", - "formal/afp/Slicing/While/WCFG.thy", - "formal/afp/Pi_Calculus/Strong_Late_Sim_Pres.thy", - "formal/afp/MiniSail/document/root.tex", - "formal/afp/Stone_Algebras/Filters.thy", - "formal/afp/Planarity_Certificates/l4v/lib/OptionMonad.thy", - "formal/mizar/scmpds_4.miz", - "formal/hol/Help/NO_CONV.doc", - "formal/afp/Pi_Calculus/Weak_Early_Sim.thy", - "formal/afp/Isabelle_Marries_Dirac/Quantum_Prisoners_Dilemma.thy", - "formal/hol/Help/type_of.doc", - "formal/lean/mathlib/measure_theory/integral/lebesgue.lean", - "formal/afp/Call_Arity/EtaExpansion.thy", - "formal/lean/mathlib/data/polynomial/monic.lean", - "formal/afp/Goedel_HFSet_Semantic/document/root.tex", - "formal/afp/DFS_Framework/DFS_Framework.thy", - "formal/lean/mathlib/analysis/inner_product_space/orientation.lean", - "formal/afp/Containers/Containers_Auxiliary.thy", - 
"formal/afp/ConcurrentGC/TSO.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p447.lean", - "formal/lean/liquid/for_mathlib/equalizers.lean", - "formal/lean/mathlib/category_theory/sites/whiskering.lean", - "formal/afp/JiveDataStoreModel/Isa_Counter/DirectSubtypes.thy", - "formal/coq/odd-order/PFsection14.v", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/3rootspoly_amdtamctambeqnasqmbpctapcbtdpasqmbpctapcbta.lean", - "formal/lean/mathlib/category_theory/pempty.lean", - "formal/afp/Virtual_Substitution/Debruijn.thy", - "formal/afp/Hoare_Time/QuantK_Examples.thy", - "formal/afp/Ribbon_Proofs/Ribbons_Graphical_Soundness.thy", - "formal/hol/Help/MK_COMB_UPPERCASE.doc", - "formal/mizar/finseq_2.miz", - "formal/lean/mathlib/algebra/order/complete_field.lean", - "formal/lean/mathlib/number_theory/fermat4.lean", - "formal/afp/Word_Lib/More_Misc.thy", - "formal/afp/CakeML_Codegen/Backend/CakeML_Byte.thy", - "formal/afp/UPF_Firewall/FWNormalisation/ElementaryRules.thy", - "formal/afp/CAVA_LTL_Modelchecker/SM/Analysis/SM_Sticky.thy", - "formal/afp/PCF/PCF.thy", - "formal/lean/mathlib/order/liminf_limsup.lean", - "formal/afp/Formal_SSA/Mapping_Exts.thy", - "formal/lean/mathlib/algebra/category/Group/adjunctions.lean", - "formal/afp/WebAssembly/Wasm_Properties_Aux.thy", - "formal/afp/Extended_Finite_State_Machines/Trilean.thy", - "formal/mizar/analort.miz", - "formal/afp/Decl_Sem_Fun_PL/SystemF.thy", - "formal/afp/CoCon/Reviewer_Assignment_Confidentiality/Reviewer_Assignment_Value_Setup.thy", - "formal/lean/mathlib/ring_theory/polynomial/basic.lean", - "formal/lean/mathlib/combinatorics/quiver/connected_component.lean", - "formal/lean/mathlib/data/finset/prod.lean", - "formal/hol/Help/mk_gabs.doc", - "formal/lean/liquid/for_mathlib/AddCommGroup.lean", - "formal/afp/Jacobson_Basic_Algebra/Group_Theory.thy", - "formal/lean/mathlib/data/mv_polynomial/variables.lean", - "formal/hol/Help/dest_finty.doc", - "formal/afp/Amortized_Complexity/Amortized_Framework.thy", - "formal/afp/Iptables_Semantics/Common/Negation_Type.thy", - "formal/afp/Ordinary_Differential_Equations/Refinement/Refine_Interval.thy", - "formal/afp/Sort_Encodings/CU.thy", - "formal/afp/Correctness_Algebras/Extended_Designs.thy", - "formal/afp/Algebraic_VCs/AVC_KAT/VC_KAT.thy", - "formal/afp/Modular_Assembly_Kit_Security/Verification/Compositionality/GeneralizedZippingLemma.thy", - "formal/afp/Program-Conflict-Analysis/ThreadTracking.thy", - "formal/mizar/pralg_3.miz", - "formal/afp/Sort_Encodings/CM.thy", - "formal/afp/CRDT/ORSet.thy", - "formal/afp/Planarity_Certificates/l4v/lib/wp/NonDetMonad.thy", - "formal/afp/WebAssembly/Wasm_Interpreter_Properties.thy", - "formal/afp/CoSMeDis/Post_Confidentiality/DYNAMIC_Post_Network.thy", - "formal/lean/mathlib/model_theory/syntax.lean", - "formal/afp/Compiling-Exceptions-Correctly/Exceptions.thy", - "formal/afp/Ordered_Resolution_Prover/Ground_Resolution_Model.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p269.lean", - "formal/afp/Kuratowski_Closure_Complement/KuratowskiClosureComplementTheorem.thy", - "formal/lean/mathlib/algebra/category/Module/kernels.lean", - "formal/afp/Catalan_Numbers/document/root.tex", - "formal/afp/Graph_Theory/document/root.tex", - "formal/mizar/mesfunc4.miz", - "formal/lean/mathlib/topology/sheaves/local_predicate.lean", - "formal/lean/mathlib/analysis/convex/measure.lean", - "formal/afp/Shivers-CFA/AbsCF.thy", - "formal/lean/mathlib/topology/metric_space/emetric_space.lean", - "formal/lean/liquid/for_mathlib/Ext_quasi_iso.lean", 
- "formal/afp/Lambert_W/document/root.tex", - "formal/afp/Iptables_Semantics/Simple_Firewall/SimpleFw_Compliance.thy", - "formal/lean/mathlib/order/category/DistribLattice.lean", - "formal/afp/Datatype_Order_Generator/document/root.tex", - "formal/afp/Game_Based_Crypto/Guessing_Many_One.thy", - "formal/afp/Hoare_Time/Nielson_Hoare.thy", - "formal/lean/mathlib/probability/ident_distrib.lean", - "formal/lean/liquid/for_mathlib/Cech/homotopy.lean", - "formal/afp/ShortestPath/document/root.tex", - "formal/mizar/toprealb.miz", - "formal/afp/Launchbury/document/root.tex", - "formal/afp/Correctness_Algebras/Relative_Modal.thy", - "formal/afp/CakeML_Codegen/Test/Test_Datatypes.thy", - "formal/afp/Consensus_Refined/Voting/Ate_Proofs.thy", - "formal/afp/Rank_Nullity_Theorem/Dim_Formula.thy", - "formal/mizar/subset.miz", - "formal/hol/Help/foldl.doc", - "formal/afp/Collections/GenCF/Impl/Impl_Uv_Set.thy", - "formal/lean/mathlib/order/jordan_holder.lean", - "formal/afp/Refine_Imperative_HOL/Examples/Snippets/Sepref_Snip_Combinator.thy", - "formal/hol/Help/is_abs.doc", - "formal/hol/RichterHilbertAxiomGeometry/thmFontHilbertAxiom.ml", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2013/a/p7.lean", - "formal/afp/Iptables_Semantics/Common/Remdups_Rev.thy", - "formal/afp/WOOT_Strong_Eventual_Consistency/Psi.thy", - "formal/afp/Regex_Equivalence/Benchmark.thy", - "formal/afp/Iptables_Semantics/Semantics_Ternary/Semantics_Ternary.thy", - "formal/lean/mathlib/category_theory/shift.lean", - "formal/afp/Bertrands_Postulate/Bertrand.thy", - "formal/afp/UTP/utp/utp.thy", - "formal/afp/Stateful_Protocol_Composition_and_Typing/Labeled_Strands.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2013/a/p8.lean", - "formal/afp/UpDown_Scheme/document/root.tex", - "formal/afp/Registers/Quantum.thy", - "formal/mizar/topalg_6.miz", - "formal/afp/Flyspeck-Tame/Generator.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p495.lean", - "formal/afp/AWN/OInvariants.thy", - "formal/afp/WorkerWrapper/Last.thy", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_ACLcommunicateWith.thy", - "formal/mizar/lpspace2.miz", - "formal/afp/Auto2_Imperative_HOL/Imperative/Indexed_PQueue_Impl.thy", - "formal/lean/liquid/free_pfpng/setup.lean", - "formal/lean/mathlib/data/pnat/basic.lean", - "formal/afp/Jordan_Normal_Form/DL_Rank.thy", - "formal/mizar/nat_6.miz", - "formal/afp/VectorSpace/RingModuleFacts.thy", - "formal/lean/mathlib/linear_algebra/matrix/circulant.lean", - "formal/afp/Simplex/QDelta.thy", - "formal/afp/Registers/QHoare.thy", - "formal/lean/liquid/for_mathlib/two_step_resolution.lean", - "formal/afp/List-Infinite/CommonSet/SetIntervalStep.thy", - "formal/hol/Jordan/metric_spaces.ml", - "formal/afp/AODV/variants/c_gtobcast/C_Aodv_Data.thy", - "formal/lean/mathlib/topology/category/Profinite/as_limit.lean", - "formal/afp/Constructive_Cryptography_CM/Constructions/One_Time_Pad.thy", - "formal/lean/mathlib/topology/continuous_function/algebra.lean", - "formal/afp/IsaNet/Parametrized_Dataplane_3_directed.thy", - "formal/afp/Van_Emde_Boas_Trees/Time_Reasoning/Simple_TBOUND_Cond.thy", - "formal/lean/mathlib/set_theory/game/birthday.lean", - "formal/afp/DFS_Framework/Invars/DFS_Invars_Basic.thy", - "formal/lean/mathlib/logic/equiv/basic.lean", - "formal/afp/Bicategory/Coherence.thy", - "formal/mizar/binop_2.miz", - "formal/lean/mathlib/linear_algebra/pi.lean", - "formal/hol/Help/LABEL_TAC.doc", - "formal/lean/liquid/for_mathlib/nnreal_to_nat_colimit.lean", - 
"formal/afp/Probabilistic_Timed_Automata/PTA_Reachability.thy", - "formal/hol/Rqe/main_thms.ml", - "formal/lean/mathlib/analysis/calculus/parametric_interval_integral.lean", - "formal/lean/mathlib/category_theory/limits/shapes/strict_initial.lean", - "formal/afp/Buildings/Simplicial.thy", - "formal/lean/liquid/for_mathlib/derived/example.lean", - "formal/afp/Pop_Refinement/Future_Work.thy", - "formal/lean/mathlib/data/list/big_operators.lean", - "formal/afp/IsaNet/Parametrized_Dataplane_2.thy", - "formal/afp/Auto2_Imperative_HOL/Functional/Rect_Intersect.thy", - "formal/afp/Psi_Calculi/Sim_Pres.thy", - "formal/hol/Help/lift_function.doc", - "formal/afp/Timed_Automata/DBM_Basics.thy", - "formal/afp/Monad_Memo_DP/example/Bellman_Ford.thy", - "formal/afp/BTree/document/root.tex", - "formal/hol/Help/EXISTS_UPPERCASE.doc", - "formal/afp/Transition_Systems_and_Automata/Automata/DBA/DGBA.thy", - "formal/afp/Ergodic_Theory/Kingman.thy", - "formal/hol/Help/is_var.doc", - "formal/afp/Quasi_Borel_Spaces/document/root.tex", - "formal/lean/mathlib/representation_theory/invariants.lean", - "formal/afp/Correctness_Algebras/Lattice_Ordered_Semirings.thy", - "formal/mizar/polynom3.miz", - "formal/afp/Belief_Revision/AGM_Logic.thy", - "formal/afp/Polynomial_Factorization/Gauss_Lemma.thy", - "formal/afp/CAVA_LTL_Modelchecker/SM/Refine/Pid_Scheduler.thy", - "formal/afp/Call_Arity/TTreeAnalysisSpec.thy", - "formal/lean/mathlib/group_theory/semidirect_product.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p59.lean", - "formal/afp/Berlekamp_Zassenhaus/document/root.tex", - "formal/mizar/circled1.miz", - "formal/afp/DOM_Components/Shadow_DOM_Components.thy", - "formal/afp/Security_Protocol_Refinement/Key_establish/m1_ds.thy", - "formal/afp/Mersenne_Primes/document/root.tex", - "formal/lean/mathlib/data/rbtree/default_lt.lean", - "formal/mizar/jordan17.miz", - "formal/afp/Slicing/Dynamic/DynPDG.thy", - "formal/afp/CAVA_Automata/CAVA_Base/CAVA_Base.thy", - "formal/mizar/pdiff_8.miz", - "formal/afp/Modular_Assembly_Kit_Security/SecuritySpecification/FlowPolicies.thy", - "formal/hol/Help/ss_of_conv.doc", - "formal/hol/Help/is_pair.doc", - "formal/mizar/group_10.miz", - "formal/hol/Help/mk_let.doc", - "formal/afp/BytecodeLogicJmlTypes/document/root.tex", - "formal/mizar/numbers.miz", - "formal/afp/Vickrey_Clarke_Groves/RelationOperators.thy", - "formal/afp/Polynomial_Interpolation/Ring_Hom_Poly.thy", - "formal/afp/Collections/Examples/ICF/itp_2010.thy", - "formal/afp/Dominance_CHK/Dom_Kildall_Property.thy", - "formal/lean/mathlib/topology/uniform_space/uniform_embedding.lean", - "formal/mizar/tarski_0.miz", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/amgm_sum1toneqn_prod1tonleq1.lean", - "formal/mizar/jordan5c.miz", - "formal/afp/Tycon/Writer_Transformer.thy", - "formal/afp/Real_Time_Deque/RealTimeDeque.thy", - "formal/afp/Auto2_HOL/HOL/HOL_Base.thy", - "formal/lean/liquid/polyhedral_lattice/category.lean", - "formal/afp/LocalLexing/TheoremD4.thy", - "formal/afp/SPARCv8/lib/Lib.thy", - "formal/lean/mathlib/combinatorics/pigeonhole.lean", - "formal/mizar/eulrpart.miz", - "formal/afp/Taylor_Models/Horner_Eval.thy", - "formal/lean/mathlib/category_theory/action.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p257.lean", - "formal/afp/Sqrt_Babylonian/Sqrt_Babylonian.thy", - "formal/afp/UPF_Firewall/UPF-Firewall.thy", - "formal/hol/Help/elistof.doc", - "formal/afp/Selection_Heap_Sort/Heap.thy", - "formal/mizar/ami_6.miz", - "formal/afp/Smooth_Manifolds/Analysis_More.thy", - 
"formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p28.lean", - "formal/hol/Help/CONTR.doc", - "formal/hol/Help/SIMPLE_CHOOSE.doc", - "formal/afp/GewirthPGCProof/GewirthArgument.thy", - "formal/afp/Dirichlet_Series/Arithmetic_Summatory_Asymptotics.thy", - "formal/afp/LocalLexing/TheoremD14.thy", - "formal/afp/WebAssembly/Wasm_Ast.thy", - "formal/lean/mathlib/algebra/group/type_tags.lean", - "formal/afp/CoSMeDis/Post_Confidentiality/Post_Value_Setup_ISSUER.thy", - "formal/hol/Multivariate/cross.ml", - "formal/afp/LLL_Basis_Reduction/LLL_Number_Bounds.thy", - "formal/mizar/euler_2.miz", - "formal/afp/HRB-Slicing/StaticInter/SDG.thy", - "formal/hol/Help/DIMINDEX_TAC.doc", - "formal/afp/Abortable_Linearizable_Modules/Simulations.thy", - "formal/mizar/msaterm.miz", - "formal/afp/Girth_Chromatic/document/root.tex", - "formal/lean/mathlib/category_theory/preadditive/biproducts.lean", - "formal/coq/math-comp/character/classfun.v", - "formal/afp/Equivalence_Relation_Enumeration/document/root.tex", - "formal/afp/Decl_Sem_Fun_PL/MutableRefProps.thy", - "formal/afp/X86_Semantics/document/root.tex", - "formal/hol/Help/net_of_conv.doc", - "formal/afp/Echelon_Form/Echelon_Form_Det_IArrays.thy", - "formal/afp/TESL_Language/StutteringLemmas.thy", - "formal/afp/ConcurrentGC/Phases.thy", - "formal/lean/mathlib/ring_theory/localization/inv_submonoid.lean", - "formal/afp/Types_To_Sets_Extension/Examples/SML_Relativization/Topology/SML_Topological_Space_Countability.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p403.lean", - "formal/lean/mathlib/algebra/continued_fractions/computation/approximation_corollaries.lean", - "formal/afp/Power_Sum_Polynomials/Power_Sum_Puzzle.thy", - "formal/lean/liquid/Lbar/ses.lean", - "formal/afp/Key_Agreement_Strong_Adversaries/Channels.thy", - "formal/afp/Lp/document/root.tex", - "formal/hol/Help/numerator.doc", - "formal/afp/Automatic_Refinement/Parametricity/Parametricity.thy", - "formal/afp/ZFC_in_HOL/ZFC_Cardinals.thy", - "formal/hol/Help/chop_list.doc", - "formal/mizar/clopban2.miz", - "formal/afp/CoSMeDis/Post_Confidentiality/Independent_Posts/Independent_DYNAMIC_Post_ISSUER.thy", - "formal/afp/FocusStreamsCaseStudies/SteamBoiler_proof.thy", - "formal/afp/EdmondsKarp_Maxflow/EdmondsKarp_Algo.thy", - "formal/afp/Complex_Geometry/More_Complex.thy", - "formal/mizar/group_19.miz", - "formal/lean/mathlib/group_theory/group_action/group.lean", - "formal/hol/Help/SPECL.doc", - "formal/afp/UTP/utp/utp_subst.thy", - "formal/afp/Syntax_Independent_Logic/document/root.tex", - "formal/afp/Deep_Learning/Tensor_Scalar_Mult.thy", - "formal/afp/Relational_Minimum_Spanning_Trees/Prim.thy", - "formal/lean/mathlib/algebra/category/Group/equivalence_Group_AddGroup.lean", - "formal/afp/Differential_Game_Logic/Identifiers.thy", - "formal/afp/BirdKMP/Theory_Of_Lists.thy", - "formal/afp/Tree_Decomposition/document/root.tex", - "formal/afp/Sunflowers/Sunflower.thy", - "formal/afp/Iptables_Semantics/Semantics_Ternary/Unknown_Match_Tacs.thy", - "formal/afp/Random_Graph_Subgraph_Threshold/Ugraph_Properties.thy", - "formal/hol/Help/new_specification.doc", - "formal/afp/CZH_Foundations/czh_sets/HOL_CContinuum.thy", - "formal/lean/mathlib/logic/basic.lean", - "formal/afp/CakeML/generated/CakeML/BigStep.thy", - "formal/afp/Category3/document/root.tex", - "formal/afp/Green/Integrals.thy", - "formal/mizar/glib_005.miz", - "formal/afp/AODV/variants/d_fwdrreqs/D_Aodv_Predicates.thy", - "formal/afp/MDP-Algorithms/examples/Code_Gridworld.thy", - 
"formal/afp/Interpreter_Optimizations/Env_list.thy", - "formal/afp/Jinja/JVM/JVMExceptions.thy", - "formal/lean/mathlib/category_theory/monad/kleisli.lean", - "formal/lean/mathlib/linear_algebra/projective_space/subspace.lean", - "formal/lean/mathlib/number_theory/zsqrtd/to_real.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p192.lean", - "formal/afp/PSemigroupsConvolution/Partial_Semigroup_Models.thy", - "formal/afp/Auto2_HOL/HOL/Primes_Ex.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p96.lean", - "formal/lean/mathlib/algebra/group_power/default.lean", - "formal/afp/Linear_Inequalities/Farkas_Minkowsky_Weyl.thy", - "formal/afp/HRB-Slicing/StaticInter/SCDObservable.thy", - "formal/hol/Examples/cooper.ml", - "formal/lean/liquid/for_mathlib/yoneda_left_exact.lean", - "formal/afp/Decl_Sem_Fun_PL/MutableRef.thy", - "formal/lean/mathlib/linear_algebra/bilinear_form.lean", - "formal/lean/mathlib/analysis/special_functions/gaussian.lean", - "formal/lean/mathlib/data/bool/all_any.lean", - "formal/afp/FOL_Seq_Calc2/EPathHintikka.thy", - "formal/afp/Isabelle_Meta_Model/isabelle_home/src/HOL/Isabelle_Main2.thy", - "formal/lean/mathlib/linear_algebra/matrix/zpow.lean", - "formal/mizar/scmfsa_2.miz", - "formal/afp/Tycon/Lazy_List_Monad.thy", - "formal/afp/Hoare_Time/SepLog_Hoare.thy", - "formal/afp/Noninterference_CSP/CSPNoninterference.thy", - "formal/afp/Shadow_DOM/tests/slots_fallback.thy", - "formal/hol/Permutation/morelist.ml", - "formal/hol/miz3/Samples/drinker.ml", - "formal/mizar/midsp_2.miz", - "formal/afp/Complete_Non_Orders/document/root.tex", - "formal/afp/Extended_Finite_State_Machines/AExp_Lexorder.thy", - "formal/afp/Game_Based_Crypto/IND_CPA_PK.thy", - "formal/lean/liquid/combinatorial_lemma/profinite_setup.lean", - "formal/hol/Help/curry.doc", - "formal/afp/DPRM_Theorem/Diophantine/Parametric_Polynomials.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p101.lean", - "formal/afp/Digit_Expansions/Carries.thy", - "formal/afp/Shadow_SC_DOM/Shadow_DOM_Tests.thy", - "formal/mizar/aff_2.miz", - "formal/afp/Probabilistic_Noninterference/Resumption_Based.thy", - "formal/mizar/tops_2.miz", - "formal/afp/CSP_RefTK/document/root.tex", - "formal/afp/PAC_Checker/document/root.tex", - "formal/afp/FOL_Seq_Calc2/Completeness.thy", - "formal/afp/Universal_Turing_Machine/Recs.thy", - "formal/afp/Correctness_Algebras/N_Relation_Algebras.thy", - "formal/afp/Ordinary_Differential_Equations/Numerics/ODE_Numerics.thy", - "formal/afp/Density_Compiler/document/root.tex", - "formal/afp/PAC_Checker/Finite_Map_Multiset.thy", - "formal/afp/Partial_Order_Reduction/document/root.tex", - "formal/afp/Types_To_Sets_Extension/Examples/TTS_Foundations/Foundations/FNDS_Lifting_Set_Ext.thy", - "formal/afp/Strong_Security/Types.thy", - "formal/mizar/jordan2b.miz", - "formal/afp/Hybrid_Multi_Lane_Spatial_Logic/Views.thy", - "formal/lean/mathlib/algebraic_topology/simplex_category.lean", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_Small_Functor.thy", - "formal/afp/MFMC_Countable/MFMC_Flow_Attainability.thy", - "formal/mizar/liouvil2.miz", - "formal/coq/math-comp/fingroup/perm.v", - "formal/mizar/matrix_3.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p151.lean", - "formal/afp/Separation_Logic_Imperative_HOL/Examples/Array_Map_Impl.thy", - "formal/mizar/measur10.miz", - "formal/afp/Core_SC_DOM/common/classes/NodeClass.thy", - "formal/afp/MSO_Regex_Equivalence/Pi_Regular_Set.thy", - "formal/afp/JinjaThreads/Compiler/CallExpr.thy", - 
"formal/afp/Berlekamp_Zassenhaus/Sublist_Iteration.thy", - "formal/afp/JinjaDCI/JVM/JVMExecInstr.thy", - "formal/lean/mathlib/category_theory/category/Rel.lean", - "formal/mizar/waybel21.miz", - "formal/lean/mathlib/topology/omega_complete_partial_order.lean", - "formal/afp/Modal_Logics_for_NTS/FS_Set.thy", - "formal/hol/Help/mk_select.doc", - "formal/afp/Pi_Calculus/Strong_Late_Bisim_Subst_SC.thy", - "formal/afp/CryptHOL/Generat.thy", - "formal/mizar/incproj.miz", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/binomnegdiscrineq_10alt28asqp1.lean", - "formal/afp/CAVA_LTL_Modelchecker/SM/RefPoint/SM_Syntax.thy", - "formal/afp/Shadow_DOM/tests/Shadow_DOM_Node_insertBefore.thy", - "formal/lean/mathlib/linear_algebra/matrix/nondegenerate.lean", - "formal/hol/Help/dest_var.doc", - "formal/lean/mathlib/category_theory/limits/shapes/kernels.lean", - "formal/afp/Knights_Tour/document/root.tex", - "formal/afp/Automated_Stateful_Protocol_Verification/examples/PKCS/PKCS_Model03.thy", - "formal/mizar/fib_num3.miz", - "formal/afp/Myhill-Nerode/Myhill_1.thy", - "formal/hol/Help/genvar.doc", - "formal/hol/Help/uncurry.doc", - "formal/afp/Berlekamp_Zassenhaus/Factorize_Rat_Poly.thy", - "formal/lean/perfectoid/Huber_ring/localization.lean", - "formal/afp/LTL/Code_Equations.thy", - "formal/afp/KAD/Range_Semiring.thy", - "formal/lean/mathlib/logic/function/iterate.lean", - "formal/afp/Separata/Separata.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p668.lean", - "formal/hol/Help/TOP_SWEEP_CONV.doc", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2002/a/p1.lean", - "formal/hol/Help/then_tcl_.doc", - "formal/lean/mathlib/group_theory/exponent.lean", - "formal/hol/Help/the_specifications.doc", - "formal/lean/mathlib/algebra/group/basic.lean", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1998/p2.lean", - "formal/afp/Real_Time_Deque/Small_Proof.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p547.lean", - "formal/afp/AODV/variants/e_all_abcd/E_Aodv_Predicates.thy", - "formal/afp/Chandy_Lamport/document/root.tex", - "formal/afp/JinjaThreads/Framework/FWLocking.thy", - "formal/lean/perfectoid/valuation/localization.lean", - "formal/lean/mathlib/combinatorics/hales_jewett.lean", - "formal/hol/RichterHilbertAxiomGeometry/error-checking.ml", - "formal/lean/mathlib/group_theory/submonoid/default.lean", - "formal/lean/mathlib/deprecated/ring.lean", - "formal/lean/liquid/for_mathlib/arrow/split.lean", - "formal/afp/Formula_Derivatives/WS1S_Alt_Formula.thy", - "formal/lean/liquid/for_mathlib/SemiNormedGroup_ulift.lean", - "formal/lean/mathlib/algebra/lie/semisimple.lean", - "formal/afp/CZH_Foundations/czh_sets/CZH_Sets_ZQR.thy", - "formal/lean/sphere-eversion/global/one_jet_bundle.lean", - "formal/afp/Flyspeck-Tame/LowerBound.thy", - "formal/afp/CZH_Foundations/czh_digraphs/CZH_DG_GRPH.thy", - "formal/afp/CakeML/document/root.tex", - "formal/lean/mathlib/algebra/group_power/basic.lean", - "formal/afp/Forcing/Nat_Miscellanea.thy", - "formal/afp/Modular_arithmetic_LLL_and_HNF_algorithms/Matrix_Change_Row.thy", - "formal/afp/PLM/TAO_8_Definitions.thy", - "formal/afp/Ordinary_Differential_Equations/IVP/Poincare_Map.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p211.lean", - "formal/coq/math-comp/test_suite/output.v.out.8.7", - "formal/mizar/gate_4.miz", - "formal/lean/mathlib/algebra/star/subalgebra.lean", - "formal/afp/Lucas_Theorem/Lucas_Theorem.thy", - "formal/hol/Help/delete_user_printer.doc", - "formal/afp/Residuated_Lattices/Action_Algebra.thy", - 
"formal/afp/IP_Addresses/WordInterval.thy", - "formal/lean/mathlib/linear_algebra/isomorphisms.lean", - "formal/afp/MFMC_Countable/MFMC_Finite.thy", - "formal/lean/mathlib/data/qpf/multivariate/constructions/cofix.lean", - "formal/afp/Launchbury/HOLCF-Join.thy", - "formal/mizar/clopban3.miz", - "formal/mizar/waybel_9.miz", - "formal/mizar/commacat.miz", - "formal/hol/Help/prove_cases_thm.doc", - "formal/afp/AWN/Pnet.thy", - "formal/hol/Help/DISJ_CASES_THEN2.doc", - "formal/mizar/ali2.miz", - "formal/afp/Algebraic_VCs/Domain_Quantale.thy", - "formal/afp/JinjaThreads/Common/Exceptions.thy", - "formal/afp/AutoFocus-Stream/ListSlice.thy", - "formal/afp/CZH_Universal_Constructions/czh_ucategories/CZH_UCAT_Conclusions.thy", - "formal/afp/CryptHOL/Set_Applicative.thy", - "formal/lean/mathlib/logic/equiv/local_equiv.lean", - "formal/hol/Help/EQT_INTRO.doc", - "formal/afp/Launchbury/HOLCF-Meet.thy", - "formal/lean/mathlib/category_theory/monoidal/subcategory.lean", - "formal/afp/Laplace_Transform/Uniqueness.thy", - "formal/afp/IMO2019/IMO2019_Q1.thy", - "formal/afp/Vickrey_Clarke_Groves/CombinatorialAuctionExamples.thy", - "formal/afp/MFOTL_Monitor/Interval.thy", - "formal/afp/Slicing/Slicing.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1978/p5.lean", - "formal/afp/Deep_Learning/DL_Missing_Finite_Set.thy", - "formal/afp/Simplex/Simplex_Incremental.thy", - "formal/lean/mathlib/analysis/normed_space/pi_Lp.lean", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1999/p11.lean", - "formal/lean/perfectoid/sheaves/sheaf.lean", - "formal/lean/liquid/normed_snake.lean", - "formal/hol/Help/prove_recursive_functions_exist.doc", - "formal/lean/mathlib/data/string/basic.lean", - "formal/hol/Help/new_recursive_definition.doc", - "formal/afp/Network_Security_Policy_Verification/TopoS_Interface.thy", - "formal/lean/liquid/for_mathlib/commsq.lean", - "formal/afp/Imperative_Insertion_Sort/document/root.tex", - "formal/lean/mathlib/group_theory/specific_groups/cyclic.lean", - "formal/mizar/msafree5.miz", - "formal/hol/Model/make.ml", - "formal/afp/Dependent_SIFUM_Type_Systems/Examples/Example_TypeSystem.thy", - "formal/afp/JinjaThreads/Framework/FWThread.thy", - "formal/lean/mathlib/order/filter/indicator_function.lean", - "formal/afp/Factored_Transition_System_Bounding/FmapUtils.thy", - "formal/afp/PLM/document/root.tex", - "formal/afp/Abs_Int_ITP2012/Abs_Int1.thy", - "formal/lean/mathlib/analysis/analytic/radius_liminf.lean", - "formal/afp/Triangle/Triangle.thy", - "formal/hol/Help/is_let.doc", - "formal/afp/Consensus_Refined/Refinement.thy", - "formal/lean/mathlib/measure_theory/function/uniform_integrable.lean", - "formal/hol/Help/g.doc", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_TaintingTrusted.thy", - "formal/afp/Regression_Test_Selection/JVM_RTS/JVMCollectionBasedRTS.thy", - "formal/afp/Containers/Set_Impl.thy", - "formal/afp/Parity_Game/Graph_TheoryCompatibility.thy", - "formal/afp/Propositional_Proof_Systems/ND_FiniteAssms.thy", - "formal/afp/Relational_Minimum_Spanning_Trees/Boruvka.thy", - "formal/lean/mathlib/algebra/char_p/two.lean", - "formal/hol/Examples/harmonicsum.ml", - "formal/afp/Real_Time_Deque/Common.thy", - "formal/afp/Containers/RBT_Mapping2.thy", - "formal/afp/IMAP-CRDT/IMAP-proof-helpers.thy", - "formal/hol/Help/list_mk_abs.doc", - "formal/afp/JiveDataStoreModel/Isabelle/Subtype.thy", - "formal/afp/CoreC++/Objects.thy", - "formal/mizar/relset_1.miz", - "formal/lean/mathlib/order/filter/archimedean.lean", - "formal/afp/Sturm_Sequences/Sturm_Theorem.thy", 
- "formal/hol/Help/GEN.doc", - "formal/lean/mathlib/number_theory/class_number/admissible_card_pow_degree.lean", - "formal/lean/mathlib/group_theory/group_action/opposite.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p419.lean", - "formal/afp/CZH_Foundations/czh_introduction/CZH_Introduction.thy", - "formal/afp/Game_Based_Crypto/document/fig-1.tex", - "formal/hol/Help/compose_insts.doc", - "formal/hol/Help/new_inductive_set.doc", - "formal/afp/Monad_Memo_DP/heap_monad/Memory_Heap.thy", - "formal/mizar/scmfsa6a.miz", - "formal/lean/mathlib/measure_theory/measure/haar_lebesgue.lean", - "formal/afp/Ordinary_Differential_Equations/Refinement/Refine_Default.thy", - "formal/lean/mathlib/group_theory/group_action/big_operators.lean", - "formal/afp/Propositional_Proof_Systems/SC_Compl_Consistency.thy", - "formal/afp/MDP-Algorithms/Splitting_Methods.thy", - "formal/afp/Planarity_Certificates/Planarity_Certificates.thy", - "formal/afp/Tycon/Monad_Transformer.thy", - "formal/hol/Help/top_realgoal.doc", - "formal/afp/HRB-Slicing/JinjaVM_Inter/JVMInterpretation.thy", - "formal/lean/mathlib/set_theory/game/domineering.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2002/a/p12.lean", - "formal/lean/liquid/laurent_measures/int_nat_shifts.lean", - "formal/lean/mathlib/topology/continuous_function/basic.lean", - "formal/lean/mathlib/number_theory/cyclotomic/discriminant.lean", - "formal/hol/Help/prove_inductive_relations_exist.doc", - "formal/afp/Lowe_Ontological_Argument/LoweOntologicalArgument_5.thy", - "formal/afp/Complx/OG_Tactics.thy", - "formal/afp/Refine_Imperative_HOL/IICF/Impl/IICF_Array_Matrix.thy", - "formal/afp/Binomial-Heaps/Test.thy", - "formal/afp/Sturm_Tarski/PolyMisc.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p437.lean", - "formal/mizar/glib_006.miz", - "formal/afp/Finite_Fields/Card_Irreducible_Polynomials.thy", - "formal/lean/mathlib/category_theory/subobject/limits.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p421.lean", - "formal/afp/Hello_World/HelloWorld_Proof.thy", - "formal/afp/FeatherweightJava/Execute.thy", - "formal/mizar/sin_cos5.miz", - "formal/afp/CakeML/Big_Step_Clocked.thy", - "formal/afp/UpDown_Scheme/UpDown_Scheme.thy", - "formal/afp/DPRM_Theorem/Register_Machine/RegisterMachineSimulation.thy", - "formal/lean/mathlib/category_theory/preadditive/injective_resolution.lean", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_Hom.thy", - "formal/mizar/abcmiz_1.miz", - "formal/lean/mathlib/measure_theory/function/convergence_in_measure.lean", - "formal/afp/SequentInvertibility/NominalSequents.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/numbertheory/aneqprodakp4_anmsqrtanp1eq2.lean", - "formal/coq/abel/xmathcomp/real_closed_ext.v", - "formal/lean/liquid/for_mathlib/acyclic.lean", - "formal/mizar/lattad_1.miz", - "formal/lean/mathlib/algebra/category/Group/preadditive.lean", - "formal/lean/mathlib/category_theory/functor/basic.lean", - "formal/lean/mathlib/group_theory/free_group.lean", - "formal/afp/FocusStreamsCaseStudies/Gateway_proof.thy", - "formal/afp/Euler_MacLaurin/document/root.tex", - "formal/mizar/srings_5.miz", - "formal/mizar/jordan23.miz", - "formal/afp/VectorSpace/MonoidSums.thy", - "formal/lean/mathlib/data/bundle.lean", - "formal/afp/CakeML_Codegen/Utils/Compiler_Utils.thy", - "formal/afp/DPRM_Theorem/Machine_Equations/State_d_Equation.thy", - "formal/mizar/hilb10_6.miz", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2009/a/p6.lean", - 
"formal/afp/Automated_Stateful_Protocol_Verification/Eisbach_Protocol_Verification.thy", - "formal/afp/Physical_Quantities/Enum_extra.thy", - "formal/afp/Sturm_Tarski/document/root.tex", - "formal/afp/Secondary_Sylow/SubgroupConjugation.thy", - "formal/afp/EdmondsKarp_Maxflow/FordFulkerson_Algo.thy", - "formal/afp/AWN/OAWN_SOS.thy", - "formal/afp/Types_To_Sets_Extension/Examples/TTS_Foundations/FNDS_Conclusions.thy", - "formal/afp/Finite_Fields/Monic_Polynomial_Factorization.thy", - "formal/lean/mathlib/analysis/special_functions/integrals.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p313.lean", - "formal/afp/LinearQuantifierElim/Thys/QE.thy", - "formal/lean/mathlib/analysis/box_integral/partition/additive.lean", - "formal/mizar/jordan1d.miz", - "formal/afp/Knot_Theory/Tangle_Relation.thy", - "formal/lean/mathlib/data/polynomial/expand.lean", - "formal/afp/Flyspeck-Tame/Plane1Props.thy", - "formal/afp/CakeML/generated/CakeML/BigSmallInvariants.thy", - "formal/afp/Decl_Sem_Fun_PL/EquivRelationalDenotFSet.thy", - "formal/afp/Factored_Transition_System_Bounding/TopologicalProps.thy", - "formal/afp/Types_To_Sets_Extension/ETTS/ETTS.thy", - "formal/afp/Regular_Tree_Relations/Tree_Automata/Tree_Automata.thy", - "formal/afp/Jinja/Common/Type.thy", - "formal/afp/Refine_Imperative_HOL/Sepref_Improper.thy", - "formal/mizar/mesfun13.miz", - "formal/hol/Arithmetic/sigmacomplete.ml", - "formal/afp/Green/Paths.thy", - "formal/afp/FOL_Seq_Calc3/Completeness.thy", - "formal/afp/Count_Complex_Roots/Extended_Sturm.thy", - "formal/afp/CAVA_LTL_Modelchecker/SM/Refine/Rename_Cfg.thy", - "formal/lean/mathlib/ring_theory/polynomial/vieta.lean", - "formal/afp/FOL_Axiomatic/FOL_Axiomatic.thy", - "formal/lean/sphere-eversion/to_mathlib/topology/metric_space.lean", - "formal/afp/Name_Carrying_Type_Inference/document/root.tex", - "formal/afp/Surprise_Paradox/document/root.tex", - "formal/afp/Package_logic/PackageLogic.thy", - "formal/afp/JinjaThreads/J/Threaded.thy", - "formal/hol/Help/BITS_ELIM_CONV.doc", - "formal/mizar/scmfsa_x.miz", - "formal/lean/mathlib/category_theory/limits/essentially_small.lean", - "formal/afp/Abstract-Hoare-Logics/Proc/PTermi.thy", - "formal/afp/AI_Planning_Languages_Semantics/Option_Monad_Add.thy", - "formal/mizar/cqc_the2.miz", - "formal/afp/Refine_Imperative_HOL/IICF/Impl/IICF_HOL_List.thy", - "formal/afp/UPF/UPF.thy", - "formal/afp/Transition_Systems_and_Automata/Automata/NBTA/NBTA.thy", - "formal/afp/Metalogic_ProofChecker/EtaNormProof.thy", - "formal/afp/Weight_Balanced_Trees/Weight_Balanced_Trees_log.thy", - "formal/afp/Shadow_DOM/Shadow_DOM_Tests.thy", - "formal/afp/Combinatorics_Words/Lyndon_Schutzenberger.thy", - "formal/afp/JinjaThreads/Compiler/J0.thy", - "formal/afp/Universal_Turing_Machine/document/root.tex", - "formal/afp/Hybrid_Systems_VCs/ModalKleeneAlgebra/HS_VC_MKA_ndfun.thy", - "formal/hol/Examples/cong.ml", - "formal/coq/math-comp/ssreflect/ssrAC.v", - "formal/afp/Functional-Automata/document/root.tex", - "formal/hol/Help/injectivity_store.doc", - "formal/afp/Regex_Equivalence/Before2.thy", - "formal/mizar/abcmiz_a.miz", - "formal/lean/mathlib/algebra/category/Module/subobject.lean", - "formal/afp/Sigma_Commit_Crypto/Sigma_Protocols.thy", - "formal/afp/CakeML_Codegen/Rewriting/Rewriting_Term.thy", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_Subnets2.thy", - "formal/hol/Help/BINDER_CONV.doc", - "formal/lean/mathlib/topology/separation.lean", - "formal/afp/Jinja/Common/Conform.thy", - 
"formal/afp/Refine_Imperative_HOL/Sepref_Constraints.thy", - "formal/mizar/moebius3.miz", - "formal/afp/JinjaThreads/J/WellType.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p100.lean", - "formal/afp/Orbit_Stabiliser/document/root.tex", - "formal/lean/mathlib/analysis/normed_space/banach_steinhaus.lean", - "formal/afp/DPRM_Theorem/Diophantine/Alpha_Sequence.thy", - "formal/afp/CoSMeDis/Post_Confidentiality/Post_COMPOSE2.thy", - "formal/afp/Real_Time_Deque/Common_Proof.thy", - "formal/afp/Refine_Imperative_HOL/Examples/Sepref_Graph.thy", - "formal/mizar/classes1.miz", - "formal/lean/mathlib/order/bounded.lean", - "formal/afp/Propositional_Proof_Systems/NDHC.thy", - "formal/afp/Core_DOM/common/monads/DocumentMonad.thy", - "formal/afp/Prefix_Free_Code_Combinators/Prefix_Free_Code_Combinators.thy", - "formal/lean/mathlib/data/nat/bitwise.lean", - "formal/afp/CZH_Foundations/czh_digraphs/CZH_DG_Simple.thy", - "formal/afp/TortoiseHare/TortoiseHare.thy", - "formal/afp/VeriComp/Inf.thy", - "formal/afp/CoreC++/Conform.thy", - "formal/afp/Core_DOM/common/monads/BaseMonad.thy", - "formal/afp/Functional_Ordered_Resolution_Prover/Weighted_FO_Ordered_Resolution_Prover.thy", - "formal/mizar/funct_7.miz", - "formal/lean/mathlib/control/equiv_functor.lean", - "formal/lean/mathlib/linear_algebra/clifford_algebra/fold.lean", - "formal/afp/CoSMeDis/Post_Confidentiality/Independent_Posts/Independent_Post_Value_Setup_RECEIVER.thy", - "formal/afp/Randomised_Social_Choice/Preference_Profiles.thy" - ], - "formal-valid": [ - "formal/afp/Forcing/Forcing_Notions.thy", - "formal/afp/Jordan_Normal_Form/VS_Connect.thy", - "formal/afp/Transitive-Closure/Transitive_Closure_List_Impl.thy", - "formal/hol/Help/EXPAND_TAC.doc", - "formal/lean/mathlib/data/int/sqrt.lean", - "formal/afp/Interpolation_Polynomials_HOL_Algebra/Bounded_Degree_Polynomials.thy", - "formal/afp/Flyspeck-Tame/FaceDivision.thy", - "formal/afp/Formal_SSA/SSA_Semantics.thy", - "formal/hol/Library/poly.ml", - "formal/afp/Game_Based_Crypto/Pseudo_Random_Function.thy", - "formal/lean/liquid/prop819/completion.lean", - "formal/mizar/msafree.miz", - "formal/afp/Core_SC_DOM/common/pointers/NodePointer.thy", - "formal/afp/Combinatorics_Words_Lyndon/Lyndon_Addition.thy", - "formal/mizar/glib_004.miz", - "formal/mizar/yellow_7.miz", - "formal/hol/Help/gcd.doc", - "formal/afp/BD_Security_Compositional/Composing_Security.thy", - "formal/afp/Smith_Normal_Form/Rings2_Extended.thy", - "formal/mizar/amistd_1.miz", - "formal/afp/Dirichlet_Series/Dirichlet_Product.thy", - "formal/afp/Pell/document/root.tex", - "formal/mizar/pdiff_3.miz", - "formal/afp/Finitely_Generated_Abelian_Groups/Set_Multiplication.thy", - "formal/lean/liquid/for_mathlib/short_complex_functor_category.lean", - "formal/lean/mathlib/data/fintype/card.lean", - "formal/hol/Help/PURE_ONCE_ASM_REWRITE_TAC.doc", - "formal/lean/mathlib/measure_theory/measure/finite_measure_weak_convergence.lean", - "formal/lean/mathlib/data/polynomial/degree/definitions.lean", - "formal/afp/Forcing/Least.thy", - "formal/afp/Registers/Check_Autogenerated_Files.thy", - "formal/afp/Forcing/Arities.thy", - "formal/afp/Virtual_Substitution/NegInfinityUni.thy", - "formal/lean/mathlib/order/bounded_order.lean", - "formal/lean/mathlib/algebra/group_with_zero/basic.lean", - "formal/afp/Complex_Bounded_Operators/Complex_Bounded_Linear_Function.thy", - "formal/afp/Groebner_Macaulay/Degree_Section.thy", - "formal/afp/Kleene_Algebra/Finite_Suprema.thy", - "formal/afp/Koenigsberg_Friendship/FriendshipTheory.thy", - 
"formal/lean/mathlib/topology/metric_space/contracting.lean", - "formal/lean/mathlib/linear_algebra/direct_sum/tensor_product.lean", - "formal/afp/FocusStreamsCaseStudies/ArithExtras.thy", - "formal/lean/liquid/for_mathlib/mapping_cone.lean", - "formal/hol/Help/REPLICATE_TAC.doc", - "formal/lean/mathlib/linear_algebra/matrix/absolute_value.lean", - "formal/lean/mathlib/algebra/linear_recurrence.lean", - "formal/hol/Help/EQ_TAC.doc", - "formal/hol/Help/string_of_file.doc", - "formal/lean/mathlib/category_theory/preadditive/left_exact.lean", - "formal/lean/mathlib/linear_algebra/basic.lean", - "formal/mizar/integr22.miz", - "formal/afp/Transition_Systems_and_Automata/Basic/Refine.thy", - "formal/afp/CAVA_Automata/Digraph_Basic.thy", - "formal/hol/Help/subtract.doc", - "formal/hol/Help/INT_SGN_CONV.doc", - "formal/mizar/partfun3.miz", - "formal/mizar/orders_4.miz", - "formal/hol/Tutorial/Embedding_of_logics_deep.ml", - "formal/lean/mathlib/algebra/periodic.lean", - "formal/afp/Saturation_Framework/document/root.tex", - "formal/afp/Complex_Bounded_Operators/extra/Extra_General.thy", - "formal/hol/Help/X_GEN_TAC.doc", - "formal/afp/Mereology/CM.thy", - "formal/afp/JinjaThreads/BV/BVSpec.thy", - "formal/afp/Collections/Refine_Dflt.thy", - "formal/afp/Simpl/ex/ProcParEx.thy", - "formal/afp/Finitely_Generated_Abelian_Groups/Group_Hom.thy", - "formal/afp/List_Update/Partial_Cost_Model.thy", - "formal/afp/Matrices_for_ODEs/MTX_Flows.thy", - "formal/afp/MonoBoolTranAlgebra/Statements.thy", - "formal/afp/Ordinary_Differential_Equations/Numerics/Init_ODE_Solver.thy", - "formal/mizar/mssublat.miz", - "formal/lean/mathlib/measure_theory/decomposition/radon_nikodym.lean", - "formal/afp/WorkerWrapper/UnboxedNats.thy", - "formal/hol/Help/dest_cons.doc", - "formal/lean/mathlib/measure_theory/function/l2_space.lean", - "formal/lean/mathlib/probability/probability_mass_function/basic.lean", - "formal/afp/Coinductive/Examples/Hamming_Stream.thy", - "formal/afp/UPF/ServiceExample.thy", - "formal/afp/CZH_Foundations/czh_sets/CZH_Sets_Nat.thy", - "formal/afp/Buildings/Prelim.thy", - "formal/afp/Statecharts/document/root.tex", - "formal/afp/Perfect-Number-Thm/document/root.tex", - "formal/afp/FOL_Seq_Calc3/List_Syntax.thy", - "formal/afp/Schutz_Spacetime/document/root.tex", - "formal/hol/100/div3.ml", - "formal/lean/mathlib/geometry/manifold/complex.lean", - "formal/lean/liquid/for_mathlib/Profinite/quotient_map.lean", - "formal/afp/CakeML/CakeML_Code.thy", - "formal/afp/Twelvefold_Way/Twelvefold_Way_Entry5.thy", - "formal/mizar/clopban1.miz", - "formal/lean/mathlib/category_theory/category/Twop.lean", - "formal/afp/AODV/variants/a_norreqid/A_Aodv.thy", - "formal/afp/Separation_Logic_Imperative_HOL/Automation.thy", - "formal/hol/Jordan/real_ext_geom_series.ml", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2003/b/p9.lean", - "formal/lean/mathlib/category_theory/monoidal/functor.lean", - "formal/afp/Word_Lib/Many_More.thy", - "formal/afp/Jinja/DFA/SemilatAlg.thy", - "formal/afp/Partial_Order_Reduction/Transition_System_Interpreted_Traces.thy", - "formal/afp/FunWithTilings/Tilings.thy", - "formal/afp/Linear_Inequalities/Integral_Bounded_Vectors.thy", - "formal/lean/mathlib/category_theory/limits/constructions/pullbacks.lean", - "formal/hol/Help/ASM_FOL_TAC.doc", - "formal/mizar/measure1.miz", - "formal/lean/liquid/condensed/proetale_site.lean", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/induction/seq_mul2pnp1.lean", - "formal/afp/Automated_Stateful_Protocol_Verification/examples/Keyserver.thy", - 
"formal/afp/SIFPL/document/root.tex", - "formal/afp/Differential_Dynamic_Logic/Coincidence.thy", - "formal/afp/Network_Security_Policy_Verification/TopoS_Impl.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p132.lean", - "formal/lean/mathlib/topology/continuous_on.lean", - "formal/lean/mathlib/analysis/normed/field/basic.lean", - "formal/lean/mathlib/category_theory/limits/fubini.lean", - "formal/hol/Help/ONCE_ASM_REWRITE_TAC.doc", - "formal/afp/BenOr_Kozen_Reif/Renegar_Algorithm.thy", - "formal/afp/Iptables_Semantics/Examples/IPPartEval/IP_Address_Space_Examples_All_Large.thy", - "formal/afp/Abstract-Hoare-Logics/Proc/PHoareTotal.thy", - "formal/afp/DFS_Framework/Examples/Tarjan.thy", - "formal/afp/Transition_Systems_and_Automata/Transition_Systems/Transition_System_Construction.thy", - "formal/lean/mathlib/order/antisymmetrization.lean", - "formal/mizar/brouwer2.miz", - "formal/hol/Help/GABS_CONV.doc", - "formal/afp/Fermat3_4/Fermat3.thy", - "formal/coq/math-comp/solvable/gfunctor.v", - "formal/lean/mathlib/data/nat/pow.lean", - "formal/hol/Examples/sos.ml", - "formal/afp/AI_Planning_Languages_Semantics/PDDL_STRIPS_Checker.thy", - "formal/hol/EC/excluderoots.ml", - "formal/lean/sphere-eversion/to_mathlib/smooth_barycentric.lean", - "formal/lean/mathlib/algebra/order/smul.lean", - "formal/afp/JinjaThreads/Framework/FWBisimLift.thy", - "formal/afp/Jinja/Compiler/TypeComp.thy", - "formal/afp/Modular_Assembly_Kit_Security/Basics/Prefix.thy", - "formal/afp/Binding_Syntax_Theory/document/root.tex", - "formal/afp/Program-Conflict-Analysis/ConstraintSystems.thy", - "formal/afp/Formal_SSA/Minimality.thy", - "formal/afp/Stone_Relation_Algebras/Linear_Order_Matrices.thy", - "formal/afp/Nested_Multisets_Ordinals/Signed_Syntactic_Ordinal.thy", - "formal/lean/mathlib/category_theory/natural_transformation.lean", - "formal/afp/LinearQuantifierElim/Thys/QEdlo_inf.thy", - "formal/afp/LTL_Master_Theorem/Logical_Characterization/Asymmetric_Master_Theorem.thy", - "formal/afp/IMAP-CRDT/document/root.tex", - "formal/afp/Valuation/Valuation1.thy", - "formal/afp/Projective_Geometry/Higher_Projective_Space_Axioms.thy", - "formal/afp/Coinductive/Examples/CCPO_Topology.thy", - "formal/afp/Amortized_Complexity/Amortized_Examples.thy", - "formal/afp/Randomised_Social_Choice/Stochastic_Dominance.thy", - "formal/afp/Linear_Recurrences/Linear_Recurrences_Common.thy", - "formal/hol/Help/MK_CONJ_UPPERCASE.doc", - "formal/mizar/setlim_2.miz", - "formal/mizar/sin_cos2.miz", - "formal/hol/Help/REAL_POLY_POW_CONV.doc", - "formal/afp/Twelvefold_Way/Twelvefold_Way_Entry2.thy", - "formal/lean/mathlib/ring_theory/artinian.lean", - "formal/afp/CoSMeDis/Post_Confidentiality/Independent_Posts/Independent_Post_Observation_Setup_ISSUER.thy", - "formal/afp/List_Update/BIT_2comp_on2.thy", - "formal/afp/Latin_Square/Latin_Square.thy", - "formal/lean/mathlib/algebra/continued_fractions/default.lean", - "formal/afp/Smooth_Manifolds/Product_Manifold.thy", - "formal/lean/liquid/for_mathlib/imker.lean", - "formal/mizar/topgen_5.miz", - "formal/afp/Planarity_Certificates/Planarity/Graph_Genus.thy", - "formal/lean/mathlib/measure_theory/measure/measure_space_def.lean", - "formal/lean/mathlib/category_theory/limits/opposites.lean", - "formal/afp/Stateful_Protocol_Composition_and_Typing/Intruder_Deduction.thy", - "formal/hol/Help/GEN_BETA_CONV.doc", - "formal/lean/mathlib/algebra/homology/exact.lean", - "formal/mizar/tops_1.miz", - "formal/hol/100/minkowski.ml", - "formal/lean/mathlib/set_theory/cardinal/cofinality.lean", - 
"formal/hol/Examples/inverse_bug_puzzle_tac.ml", - "formal/afp/Slicing/Basic/SemanticsCFG.thy", - "formal/afp/Prime_Number_Theorem/Prime_Number_Theorem.thy", - "formal/lean/mathlib/category_theory/enriched/basic.lean", - "formal/afp/Network_Security_Policy_Verification/Lib/FiniteListGraph.thy", - "formal/lean/mathlib/measure_theory/constructions/polish.lean", - "formal/hol/Help/SUB_CONV.doc", - "formal/lean/mathlib/analysis/normed/group/hom_completion.lean", - "formal/afp/IMP2_Binary_Heap/document/root.tex", - "formal/hol/Help/REAL_INT_POW_CONV.doc", - "formal/lean/lftcm/solutions/thursday/category_theory/exercise8.lean", - "formal/lean/mathlib/topology/algebra/uniform_field.lean", - "formal/lean/mathlib/data/rbtree/init.lean", - "formal/afp/Order_Lattice_Props/Order_Duality.thy", - "formal/mizar/xboole_1.miz", - "formal/afp/Cauchy/CauchySchwarz.thy", - "formal/hol/100/independence.ml", - "formal/hol/Help/set_basic_convs.doc", - "formal/mizar/finset_1.miz", - "formal/afp/MiniSail/IVSubst.thy", - "formal/mizar/ordeq_02.miz", - "formal/afp/Iptables_Semantics/Primitive_Matchers/Protocols_Normalize.thy", - "formal/mizar/robbins4.miz", - "formal/afp/Hybrid_Logic/document/root.tex", - "formal/afp/Modal_Logics_for_NTS/FL_Logical_Equivalence.thy", - "formal/lean/mathlib/algebra/ring/default.lean", - "formal/afp/Timed_Automata/DBM_Operations.thy", - "formal/coq/math-comp/ssreflect/fingraph.v", - "formal/afp/MiniSail/Wellformed.thy", - "formal/afp/Correctness_Algebras/Boolean_Semirings.thy", - "formal/afp/AI_Planning_Languages_Semantics/Lifschitz_Consistency.thy", - "formal/afp/CISC-Kernel/trace/Rushby-with-Control/Separation_kernel_model.thy", - "formal/lean/mathlib/algebra/char_p/quotient.lean", - "formal/hol/Help/unparse_as_prefix.doc", - "formal/afp/Dominance_CHK/Cfg.thy", - "formal/lean/mathlib/category_theory/limits/preserves/limits.lean", - "formal/afp/JinjaThreads/Framework/FWLock.thy", - "formal/lean/mathlib/category_theory/preadditive/endo_functor.lean", - "formal/lean/sphere-eversion/to_mathlib/measure_theory/basic.lean", - "formal/lean/perfectoid/sheaves/presheaf_of_rings.lean", - "formal/afp/Linear_Programming/Linear_Programming.thy", - "formal/afp/Abstract-Rewriting/Abstract_Rewriting.thy", - "formal/afp/CakeML_Codegen/Doc/Doc_CupCake.thy", - "formal/lean/mathlib/field_theory/finite/galois_field.lean", - "formal/lean/mathlib/ring_theory/norm.lean", - "formal/mizar/supinf_1.miz", - "formal/afp/Projective_Geometry/Projective_Plane_Axioms.thy", - "formal/hol/Help/PART_MATCH.doc", - "formal/afp/Selection_Heap_Sort/RemoveMax.thy", - "formal/mizar/weierstr.miz", - "formal/lean/mathlib/category_theory/adjunction/limits.lean", - "formal/hol/Help/AP_TERM_TAC.doc", - "formal/lean/mathlib/measure_theory/function/conditional_expectation/basic.lean", - "formal/hol/LP_arith/lp_tests.ml", - "formal/afp/Monad_Memo_DP/state_monad/Bottom_Up_Computation.thy", - "formal/afp/Groebner_Bases/F4.thy", - "formal/lean/liquid/for_mathlib/pid.lean", - "formal/lean/mathlib/category_theory/sites/plus.lean", - "formal/hol/Help/ran.doc", - "formal/afp/Key_Agreement_Strong_Adversaries/dhlvl3_symmetric.thy", - "formal/afp/Security_Protocol_Refinement/Key_establish/m3_kerberos_par.thy", - "formal/lean/liquid/for_mathlib/homology_lift_desc.lean", - "formal/coq/odd-order/wielandt_fixpoint.v", - "formal/lean/mathlib/field_theory/finite/trace.lean", - "formal/afp/JinjaThreads/MM/JMM_J.thy", - "formal/afp/Ordinary_Differential_Equations/Refinement/Weak_Set.thy", - "formal/hol/Help/rator.doc", - 
"formal/afp/Simpl/document/root.tex", - "formal/afp/Pi_Calculus/Late_Tau_Chain.thy", - "formal/afp/Decl_Sem_Fun_PL/Optimizer.thy", - "formal/afp/Containers/ITP-2013/Benchmark_Set.thy", - "formal/lean/mathlib/category_theory/preadditive/opposite.lean", - "formal/afp/Probabilistic_System_Zoo/Bool_Bounded_Set.thy", - "formal/lean/mathlib/analysis/convex/partition_of_unity.lean", - "formal/afp/Core_SC_DOM/common/Core_DOM_Tests.thy", - "formal/afp/Monad_Memo_DP/heap_monad/Bottom_Up_Computation_Heap.thy", - "formal/hol/Help/mk_list.doc", - "formal/mizar/group_14.miz", - "formal/afp/Verified_SAT_Based_AI_Planning/Solve_SASP.thy", - "formal/afp/Hybrid_Systems_VCs/KleeneAlgebraTests/HS_VC_KAT.thy", - "formal/afp/Polynomial_Interpolation/Polynomial_Interpolation.thy", - "formal/lean/mathlib/analysis/matrix.lean", - "formal/mizar/bkmodel4.miz", - "formal/afp/CISC-Kernel/trace/Rushby-with-Control/CISK.thy", - "formal/afp/Stone_Relation_Algebras/Fixpoints.thy", - "formal/afp/Open_Induction/Restricted_Predicates.thy", - "formal/afp/CSP_RefTK/DiningPhilosophers.thy", - "formal/afp/Formula_Derivatives/While_Default.thy", - "formal/lean/mathlib/category_theory/products/default.lean", - "formal/lean/mathlib/ring_theory/ring_hom_properties.lean", - "formal/lean/mathlib/ring_theory/polynomial/pochhammer.lean", - "formal/hol/Help/undefined.doc", - "formal/afp/Youngs_Inequality/document/root.tex", - "formal/afp/Free-Boolean-Algebra/Free_Boolean_Algebra.thy", - "formal/afp/Gauss_Jordan/Code_Real_Approx_By_Float_Haskell.thy", - "formal/mizar/neckla_3.miz", - "formal/lean/mathlib/algebra/order/with_zero.lean", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1983/p6.lean", - "formal/mizar/closure2.miz", - "formal/afp/Virtual_Substitution/PrettyPrinting.thy", - "formal/afp/Conditional_Transfer_Rule/CTR/Tests/CTR_Tests.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/induction/divisibility_3div2tooddnp1.lean", - "formal/lean/mathlib/algebra/free_monoid.lean", - "formal/afp/Call_Arity/Set-Cpo.thy", - "formal/hol/Help/STRIP_GOAL_THEN.doc", - "formal/lean/mathlib/category_theory/generator.lean", - "formal/lean/mathlib/group_theory/subgroup/basic.lean", - "formal/afp/Finitely_Generated_Abelian_Groups/Finitely_Generated_Abelian_Groups.thy", - "formal/afp/Chandy_Lamport/Distributed_System.thy", - "formal/afp/RSAPSS/Productdivides.thy", - "formal/lean/liquid/rescale/LC.lean", - "formal/lean/mathlib/data/nat/squarefree.lean", - "formal/lean/liquid/thm95/col_exact.lean", - "formal/afp/Core_SC_DOM/common/Core_DOM_Functions.thy", - "formal/afp/Hoare_Time/AExp.thy", - "formal/coq/math-comp/ssreflect/choice.v", - "formal/afp/Strong_Security/MWLf.thy", - "formal/afp/Forcing/Synthetic_Definition.thy", - "formal/hol/Help/conjuncts.doc", - "formal/afp/Markov_Models/MDP_Reachability_Problem.thy", - "formal/lean/mathlib/data/hash_map.lean", - "formal/lean/mathlib/algebra/module/pi.lean", - "formal/lean/mathlib/combinatorics/quiver/arborescence.lean", - "formal/afp/Transition_Systems_and_Automata/Automata/DRA/DRA.thy", - "formal/hol/Help/GEN_ALPHA_CONV.doc", - "formal/lean/mathlib/data/real/nnreal.lean", - "formal/afp/Collections/GenCF/Intf/Intf_Hash.thy", - "formal/afp/UPF_Firewall/FWNormalisation/FWNormalisation.thy", - "formal/afp/HOLCF-Prelude/Num_Class.thy", - "formal/afp/Refine_Imperative_HOL/Sepref_Tool.thy", - "formal/afp/UpDown_Scheme/Grid_Point.thy", - "formal/afp/JinjaThreads/DFA/Typing_Framework.thy", - "formal/afp/Matrices_for_ODEs/SQ_MTX.thy", - "formal/afp/Ordinal/OrdinalCont.thy", - 
"formal/lean/mathlib/analysis/normed_space/bounded_linear_maps.lean", - "formal/afp/Probabilistic_Prime_Tests/Fermat_Witness.thy", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/numbertheory/sumkmulnckeqnmul2pownm1.lean", - "formal/lean/mathlib/control/traversable/default.lean", - "formal/afp/Matrix_Tensor/document/root.tex", - "formal/mizar/xregular.miz", - "formal/lean/liquid/for_mathlib/nnreal_nat_binary.lean", - "formal/afp/Jinja/J/WWellForm.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p160.lean", - "formal/coq/analysis/classical_sets.v", - "formal/afp/Gauss_Jordan/System_Of_Equations_IArrays.thy", - "formal/afp/FOL_Seq_Calc3/Export.thy", - "formal/lean/sphere-eversion/to_mathlib/geometry/manifold/charted_space.lean", - "formal/afp/Hybrid_Multi_Lane_Spatial_Logic/perfect/Safety_Perfect.thy", - "formal/afp/Polynomial_Interpolation/Newton_Interpolation.thy", - "formal/afp/Slicing/Dynamic/DynSlice.thy", - "formal/afp/EdmondsKarp_Maxflow/EdmondsKarp_Impl.thy", - "formal/afp/Pi_Calculus/Early_Tau_Chain.thy", - "formal/afp/Prime_Distribution_Elementary/Shapiro_Tauberian.thy", - "formal/afp/QR_Decomposition/IArray_Addenda_QR.thy", - "formal/mizar/field_1.miz", - "formal/lean/mathlib/linear_algebra/matrix/block.lean", - "formal/hol/Help/REAL_INT_ABS_CONV.doc", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2003/a/p25.lean", - "formal/lean/mathlib/algebra/gcd_monoid/nat.lean", - "formal/lean/mathlib/ring_theory/fintype.lean", - "formal/lean/mathlib/algebra/order/field.lean", - "formal/afp/Finite_Automata_HF/Finite_Automata_HF.thy", - "formal/afp/Jinja/DFA/Semilat.thy", - "formal/afp/Bertrands_Postulate/document/root.tex", - "formal/afp/CZH_Foundations/czh_digraphs/CZH_DG_Subdigraph.thy", - "formal/afp/CoSMeDis/Prelim.thy", - "formal/lean/mathlib/category_theory/category/Quiv.lean", - "formal/afp/Transitive-Closure-II/RTrancl.thy", - "formal/afp/IsaNet/instances/SCION.thy", - "formal/afp/Encodability_Process_Calculi/FullAbstraction.thy", - "formal/lean/mathlib/logic/equiv/fin.lean", - "formal/lean/mathlib/data/finsupp/indicator.lean", - "formal/hol/Help/REAL_ARITH.doc", - "formal/afp/KAT_and_DRA/SingleSorted/DRAT.thy", - "formal/afp/CISC-Kernel/trace/Rushby-with-Control/Link_separation_kernel_model_to_CISK.thy", - "formal/afp/Core_SC_DOM/common/pointers/ObjectPointer.thy", - "formal/afp/Category/NatTrans.thy", - "formal/afp/Possibilistic_Noninterference/document/intro.tex", - "formal/afp/Multi_Party_Computation/GMW.thy", - "formal/hol/Help/meson_depth.doc", - "formal/hol/Help/checkpoint.doc", - "formal/afp/Modular_Assembly_Kit_Security/Verification/Unwinding/UnwindingConditions.thy", - "formal/afp/KAT_and_DRA/SingleSorted/DRA_Models.thy", - "formal/afp/Twelvefold_Way/Equiv_Relations_on_Functions.thy", - "formal/afp/JinjaDCI/Compiler/J1.thy", - "formal/afp/BTree/Array_SBlit.thy", - "formal/lean/mathlib/algebra/group_power/order.lean", - "formal/afp/Password_Authentication_Protocol/Protocol.thy", - "formal/afp/Launchbury/ResourcedAdequacy.thy", - "formal/lean/mathlib/algebra/order/module.lean", - "formal/hol/Help/PROVE_HYP.doc", - "formal/afp/JinjaThreads/MM/SC_Legal.thy", - "formal/afp/Network_Security_Policy_Verification/TopoS_Stateful_Policy.thy", - "formal/afp/DFS_Framework/Examples/Cyc_Check.thy", - "formal/afp/IsaNet/Network_Model.thy", - "formal/afp/JinjaThreads/Compiler/Correctness2.thy", - "formal/hol/Help/CCONTR.doc", - "formal/afp/CoSMeDis/Friend_Confidentiality/Friend_State_Indistinguishability.thy", - "formal/mizar/absvalue.miz", - "formal/mizar/tops_3.miz", - 
"formal/lean/mathlib/algebra/big_operators/finprod.lean", - "formal/afp/JinjaDCI/BV/BVSpecTypeSafe.thy", - "formal/lean/mathlib/category_theory/limits/shapes/reflexive.lean", - "formal/afp/GaleStewart_Games/GaleStewartDeterminedGames.thy", - "formal/hol/Help/PURE_ONCE_REWRITE_RULE.doc", - "formal/afp/Metalogic_ProofChecker/BetaNormProof.thy", - "formal/afp/AODV/Aodv_Predicates.thy", - "formal/afp/Twelvefold_Way/Twelvefold_Way_Entry10.thy", - "formal/afp/Complex_Bounded_Operators/Cblinfun_Code_Examples.thy", - "formal/lean/liquid/for_mathlib/arrow_preadditive.lean", - "formal/afp/Residuated_Lattices/Involutive_Residuated.thy", - "formal/hol/miz3/Samples/luxury.ml", - "formal/afp/UPF_Firewall/PacketFilter/IntegerPort_TCPUDP.thy", - "formal/hol/Multivariate/make_complex.ml", - "formal/afp/Finitely_Generated_Abelian_Groups/Group_Relations.thy", - "formal/mizar/group_1.miz", - "formal/lean/liquid/normed_free_pfpng/basic.lean", - "formal/afp/Nominal2/document/root.tex", - "formal/afp/Padic_Ints/Padic_Int_Polynomials.thy", - "formal/lean/liquid/combinatorial_lemma/profinite.lean", - "formal/afp/Planarity_Certificates/document/root.tex", - "formal/lean/sphere-eversion/to_mathlib/unused/misc.lean", - "formal/afp/Smith_Normal_Form/Cauchy_Binet.thy", - "formal/coq/math-comp/solvable/maximal.v", - "formal/afp/Propositional_Proof_Systems/Substitution_Sema.thy", - "formal/hol/Help/MESON.doc", - "formal/afp/Native_Word/Uint8.thy", - "formal/afp/Incredible_Proof_Machine/Rose_Tree.thy", - "formal/afp/Bicategory/document/root.tex", - "formal/afp/Real_Time_Deque/States.thy", - "formal/afp/Key_Agreement_Strong_Adversaries/sklvl1.thy", - "formal/hol/Rqe/matinsert_thms.ml", - "formal/afp/Binding_Syntax_Theory/Iteration.thy", - "formal/lean/mathlib/topology/metric_space/polish.lean", - "formal/hol/Rqe/matinsert.ml", - "formal/afp/VerifyThis2019/Challenge3.thy", - "formal/mizar/dtconstr.miz", - "formal/hol/ProofTrace/fusion.ml.diff", - "formal/hol/Help/term_of_preterm.doc", - "formal/afp/Gabow_SCC/Gabow_Skeleton_Code.thy", - "formal/afp/HereditarilyFinite/HF.thy", - "formal/afp/CakeML_Codegen/Backend/CakeML_Backend.thy", - "formal/afp/Universal_Turing_Machine/Turing.thy", - "formal/afp/Gauss_Sums/document/root.tex", - "formal/hol/Help/NUM_DIV_CONV.doc", - "formal/lean/mathzoo/mathzoo/misc/miniF2F/algebra/amgm_prod1toneq1_sum1tongeqn.lean", - "formal/afp/Safe_OCL/OCL_Basic_Types.thy", - "formal/afp/Smooth_Manifolds/Smooth.thy", - "formal/lean/perfectoid/for_mathlib/ideal_operations.lean", - "formal/afp/Iptables_Semantics/Primitive_Matchers/Tagged_Packet.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1987/p4.lean", - "formal/afp/Types_To_Sets_Extension/Examples/Vector_Spaces/VS_Vector_Spaces.thy", - "formal/afp/Splay_Tree/Splay_Map.thy", - "formal/hol/Library/iter.ml", - "formal/afp/Laplace_Transform/Lerch_Lemma.thy", - "formal/hol/Help/deep_alpha.doc", - "formal/hol/Multivariate/wlog_examples.ml", - "formal/hol/Help/isalpha.doc", - "formal/lean/mathzoo/mathzoo/olympiads/imo/1972/p5_alt1.lean", - "formal/afp/First_Order_Terms/Abstract_Matching.thy", - "formal/afp/Partial_Order_Reduction/Formula.thy", - "formal/afp/Native_Word/Native_Cast.thy", - "formal/afp/Name_Carrying_Type_Inference/Permutation.thy", - "formal/afp/UTP/utp/utp_parser_utils.thy", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1983/p9.lean", - "formal/afp/Transition_Systems_and_Automata/Basic/Acceptance.thy", - "formal/mizar/bhsp_1.miz", - "formal/coq/analysis/Rstruct.v", - "formal/afp/Well_Quasi_Orders/Wqo_Instances.thy", - 
"formal/hol/Multivariate/integration.ml", - "formal/afp/Fresh_Identifiers/document/root.tex", - "formal/lean/mathlib/topology/continuous_function/zero_at_infty.lean", - "formal/hol/Examples/polylog.ml", - "formal/afp/Van_Emde_Boas_Trees/VEBT_Height.thy", - "formal/afp/CryptHOL/SPMF_Applicative.thy", - "formal/mizar/rvsum_4.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p299.lean", - "formal/mizar/ordinal3.miz", - "formal/afp/Flyspeck-Tame/PlaneProps.thy", - "formal/afp/Simpl/SyntaxTest.thy", - "formal/lean/mathlib/number_theory/cyclotomic/gal.lean", - "formal/afp/FO_Theory_Rewriting/Closure/TA_Clousure_Const.thy", - "formal/hol/Complex/grobner_examples.ml", - "formal/lean/mathlib/number_theory/sum_two_squares.lean", - "formal/lean/liquid/for_mathlib/order.lean", - "formal/hol/Help/type_invention_error.doc", - "formal/lean/mathlib/ring_theory/free_comm_ring.lean", - "formal/afp/Registers/document/root.tex", - "formal/afp/LTL_Master_Theorem/LTL_to_DRA/DRA_Instantiation.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p551.lean", - "formal/afp/CakeML/generated/Lem_list_extra.thy", - "formal/afp/Intro_Dest_Elim/document/root.tex", - "formal/lean/liquid/statement.lean", - "formal/afp/Error_Function/document/root.tex", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2021/b/p13.lean", - "formal/afp/Shivers-CFA/FixTransform.thy", - "formal/lean/mathlib/analysis/normed_space/exponential.lean", - "formal/lean/mathlib/linear_algebra/clifford_algebra/star.lean", - "formal/mizar/pre_topc.miz", - "formal/afp/TLA/Intensional.thy", - "formal/afp/Allen_Calculus/nest.thy", - "formal/lean/mathlib/set_theory/ordinal/notation.lean", - "formal/lean/mathlib/analysis/normed_space/weak_dual.lean", - "formal/afp/DFS_Framework/Impl/Structural/Rec_Impl.thy", - "formal/lean/mathlib/data/mv_polynomial/counit.lean", - "formal/afp/Bernoulli/document/root.tex", - "formal/afp/Collections/ICF/ICF_Chapter.thy", - "formal/afp/Word_Lib/Machine_Word_64.thy", - "formal/afp/Higher_Order_Terms/Unification_Compat.thy", - "formal/hol/Mizarlight/miz2a.ml", - "formal/afp/InformationFlowSlicing_Inter/document/root.tex", - "formal/coq/math-comp/ssreflect/prime.v", - "formal/hol/Help/NUM_GT_CONV.doc", - "formal/afp/Deep_Learning/DL_Deep_Model_Poly.thy", - "formal/mizar/latstone.miz", - "formal/lean/mathlib/linear_algebra/matrix/charpoly/finite_field.lean", - "formal/afp/Card_Partitions/document/root.tex", - "formal/afp/Conditional_Simplification/IHOL_CS.thy", - "formal/afp/Floyd_Warshall/Recursion_Combinators.thy", - "formal/afp/Category3/SetCategory.thy", - "formal/mizar/waybel27.miz", - "formal/afp/MFOTL_Monitor/Monitor.thy", - "formal/lean/perfectoid/adic_space.lean", - "formal/afp/Separation_Logic_Imperative_HOL/Run.thy", - "formal/mizar/field_4.miz", - "formal/afp/Featherweight_OCL/document/introduction.tex", - "formal/afp/Program-Conflict-Analysis/MainResult.thy", - "formal/afp/Dijkstra_Shortest_Path/GraphSpec.thy", - "formal/afp/Amortized_Complexity/Lemmas_log.thy", - "formal/lean/mathlib/category_theory/adjunction/comma.lean", - "formal/hol/Help/mk_goalstate.doc", - "formal/lean/lftcm/hints/category_theory/exercise2/hint3.lean", - "formal/coq/analysis/mathcomp_extra.v", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p246.lean", - "formal/lean/mathlib/category_theory/sites/grothendieck.lean", - "formal/lean/mathlib/data/int/succ_pred.lean", - "formal/lean/mathlib/ring_theory/localization/as_subring.lean", - "formal/afp/Groebner_Bases/Syzygy_Examples.thy", - 
"formal/lean/mathlib/data/finite/defs.lean", - "formal/mizar/lfuzzy_1.miz", - "formal/afp/Monad_Memo_DP/example/Min_Ed_Dist0.thy", - "formal/hol/Help/the_definitions.doc", - "formal/afp/Gromov_Hyperbolicity/Morse_Gromov_Theorem.thy", - "formal/lean/liquid/for_mathlib/has_homology.lean", - "formal/afp/Transitive-Closure/Finite_Transitive_Closure_Simprocs.thy", - "formal/afp/CZH_Foundations/czh_sets/CZH_Utilities.thy", - "formal/lean/mathlib/analysis/normed_space/hahn_banach/separation.lean", - "formal/lean/mathlib/data/nat/choose/central.lean", - "formal/afp/Well_Quasi_Orders/Multiset_Extension.thy", - "formal/lean/liquid/prop819/strict_complex_iso.lean", - "formal/mizar/goedelcp.miz", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p37.lean", - "formal/afp/Collections/Userguides/Userguides_Chapter.thy", - "formal/afp/Smith_Normal_Form/Smith_Certified.thy", - "formal/afp/Flow_Networks/Network.thy", - "formal/afp/Hybrid_Systems_VCs/HS_VC_KA_ndfun.thy", - "formal/afp/Monad_Memo_DP/example/CYK.thy", - "formal/afp/Algebraic_VCs/AVC_KAT/VC_KAT_Examples2.thy", - "formal/afp/Jacobson_Basic_Algebra/Set_Theory.thy", - "formal/afp/Monad_Memo_DP/example/Knapsack.thy", - "formal/afp/Types_To_Sets_Extension/Examples/TTS_Foundations/Foundations/FNDS_Auxiliary.thy", - "formal/afp/Hidden_Markov_Models/HMM_Implementation.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2003/a/p5.lean", - "formal/hol/Tutorial/Semantics_of_programming_languages_deep.ml", - "formal/afp/KAT_and_DRA/SingleSorted/FolkTheorem.thy", - "formal/afp/Deriving/Derive_Examples.thy", - "formal/afp/CoSMeDis/Post_Confidentiality/Post_Intro.thy", - "formal/afp/AI_Planning_Languages_Semantics/Error_Monad_Add.thy", - "formal/mizar/rinfsup1.miz", - "formal/lean/mathlib/category_theory/preadditive/of_biproducts.lean", - "formal/afp/Core_SC_DOM/common/Core_DOM.thy", - "formal/afp/IFC_Tracking/IFC.thy", - "formal/coq/analysis/signed.v", - "formal/hol/Help/dest_realintconst.doc", - "formal/afp/Stone_Kleene_Relation_Algebras/Kleene_Relation_Subalgebras.thy", - "formal/afp/Iptables_Semantics/Primitive_Matchers/Common_Primitive_Matcher_Generic.thy", - "formal/afp/Arith_Prog_Rel_Primes/Arith_Prog_Rel_Primes.thy", - "formal/afp/UPF_Firewall/PacketFilter/IntegerAddress.thy", - "formal/hol/Help/type_subst.doc", - "formal/afp/Metalogic_ProofChecker/CheckerExe.thy", - "formal/lean/mathlib/category_theory/limits/constructions/finite_products_of_binary_products.lean", - "formal/lean/mathlib/logic/equiv/fintype.lean", - "formal/lean/mathlib/data/real/ereal.lean", - "formal/afp/Types_To_Sets_Extension/Examples/SML_Relativization/Lattices/SML_Lattices.thy", - "formal/afp/Inductive_Confidentiality/document/root.tex", - "formal/afp/Pi_Calculus/Weak_Early_Step_Sim_Pres.thy", - "formal/afp/Types_Tableaus_and_Goedels_God/Relations.thy", - "formal/hol/Help/make_args.doc", - "formal/hol/Jordan/real_ext.ml", - "formal/mizar/dist_2.miz", - "formal/afp/Key_Agreement_Strong_Adversaries/Infra.thy", - "formal/afp/Propositional_Proof_Systems/Resolution_Compl_Consistency.thy", - "formal/hol/Help/is_vartype.doc", - "formal/afp/QHLProver/Gates.thy", - "formal/afp/LLL_Basis_Reduction/More_IArray.thy", - "formal/afp/CoreC++/DefAss.thy", - "formal/afp/Concurrent_Revisions/Occurrences.thy", - "formal/afp/Collections/Iterator/SetIteratorOperations.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p239.lean", - "formal/lean/mathlib/topology/bornology/basic.lean", - "formal/mizar/rlaffin2.miz", - "formal/afp/Refine_Monadic/Refine_Pfun.thy", - 
"formal/lean/mathlib/algebra/module/algebra.lean", - "formal/hol/Help/ASM_REAL_ARITH_TAC.doc", - "formal/lean/mathzoo/mathzoo/olympiads/aime/1997/p12.lean", - "formal/hol/Multivariate/lpspaces.ml", - "formal/afp/Word_Lib/Guide.thy", - "formal/hol/100/descartes.ml", - "formal/afp/JinjaDCI/J/WellType.thy", - "formal/hol/Help/RING.doc", - "formal/afp/Simpl/SmallStep.thy", - "formal/lean/mathlib/order/compare.lean", - "formal/afp/Comparison_Sort_Lower_Bound/Comparison_Sort_Lower_Bound.thy", - "formal/afp/Vickrey_Clarke_Groves/CombinatorialAuction.thy", - "formal/lean/mathlib/algebra/field_power.lean", - "formal/afp/Automated_Stateful_Protocol_Verification/examples/Keyserver2.thy", - "formal/mizar/scmpds_5.miz", - "formal/afp/Kleene_Algebra/Formal_Power_Series.thy", - "formal/afp/Well_Quasi_Orders/Infinite_Sequences.thy", - "formal/afp/Rep_Fin_Groups/document/root.tex", - "formal/afp/Algebraic_Numbers/Sturm_Rat.thy", - "formal/afp/Modal_Logics_for_NTS/FL_Validity.thy", - "formal/afp/PAL/PAL.thy", - "formal/hol/Tutorial/Propositional_logic.ml", - "formal/lean/liquid/for_mathlib/triangle.lean", - "formal/lean/mathlib/category_theory/simple.lean", - "formal/lean/mathlib/group_theory/noncomm_pi_coprod.lean", - "formal/lean/mathlib/algebra/category/GroupWithZero.lean", - "formal/lean/mathlib/logic/small.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p15.lean", - "formal/afp/WebAssembly/Wasm_Printing/Wasm_Interpreter_Printing.thy", - "formal/afp/FeatherweightJava/FJAux.thy", - "formal/afp/Complex_Bounded_Operators/extra/Extra_Vector_Spaces.thy", - "formal/afp/Independence_CH/Absolute_Versions.thy", - "formal/afp/UTP/utp/utp_rel_laws.thy", - "formal/afp/Interpreter_Optimizations/Unboxed.thy", - "formal/hol/EC/nistp256.ml", - "formal/afp/Game_Based_Crypto/PRF_UPF_IND_CCA.thy", - "formal/afp/DFS_Framework/Examples/Nested_DFS.thy", - "formal/lean/mathlib/data/set/finite.lean", - "formal/afp/Prime_Distribution_Elementary/PNT_Consequences.thy", - "formal/lean/mathlib/data/rat/sqrt.lean", - "formal/afp/MonoBoolTranAlgebra/Mono_Bool_Tran.thy", - "formal/afp/JinjaThreads/MM/SC_Interp.thy", - "formal/hol/Help/DISJ1.doc", - "formal/afp/JinjaThreads/MM/Orders.thy", - "formal/afp/Ergodic_Theory/Transfer_Operator.thy", - "formal/afp/PAC_Checker/PAC_Checker_Specification.thy", - "formal/afp/FOL_Seq_Calc2/SeCaV.thy", - "formal/afp/Optics/Channel_Type_Example.thy", - "formal/afp/Abortable_Linearizable_Modules/document/abstract.tex", - "formal/afp/X86_Semantics/Memory.thy", - "formal/lean/mathlib/data/pfunctor/univariate/basic.lean", - "formal/afp/Collections/Iterator/SetAbstractionIterator.thy", - "formal/hol/100/quartic.ml", - "formal/afp/Decreasing-Diagrams/document/root.tex", - "formal/afp/Dominance_CHK/document/root.tex", - "formal/lean/perfectoid/for_mathlib/linear_ordered_comm_group.lean", - "formal/afp/LLL_Basis_Reduction/Gram_Schmidt_2.thy", - "formal/lean/liquid/for_mathlib/homology_exact.lean", - "formal/hol/Help/aty.doc", - "formal/afp/IsaNet/infrastructure/Event_Systems.thy", - "formal/afp/Furstenberg_Topology/Furstenberg_Topology.thy", - "formal/afp/MDP-Algorithms/Algorithms.thy", - "formal/afp/Slicing/StaticIntra/Distance.thy", - "formal/lean/mathlib/data/real/pi/leibniz.lean", - "formal/hol/Help/apply_prover.doc", - "formal/afp/Simpl/Generalise.thy", - "formal/lean/mathlib/linear_algebra/trace.lean", - "formal/afp/UPF_Firewall/StatefulFW/FTPVOIP.thy", - "formal/afp/Network_Security_Policy_Verification/Security_Invariants/SINVAR_NoRefl.thy", - 
"formal/afp/Security_Protocol_Refinement/Refinement/s0g_secrecy.thy", - "formal/lean/mathzoo/mathzoo/olympiads/aime/2001/i/p3.lean", - "formal/lean/mathlib/analysis/normed_space/multilinear.lean", - "formal/mizar/matrix14.miz", - "formal/afp/Graph_Theory/Weighted_Graph.thy", - "formal/mizar/yellow11.miz", - "formal/coq/math-comp/ssreflect/div.v", - "formal/lean/mathlib/analysis/inner_product_space/euclidean_dist.lean", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/algebra/p31.lean", - "formal/afp/LTL_Master_Theorem/Logical_Characterization/Syntactic_Fragments_and_Stability.thy", - "formal/mizar/finseqop.miz", - "formal/afp/Complex_Geometry/Oriented_Circlines.thy", - "formal/afp/CoSMeDis/Friend_Confidentiality/Friend_Intro.thy", - "formal/lean/mathlib/data/pnat/interval.lean", - "formal/afp/pGCL/Tutorial/Monty.thy", - "formal/afp/Completeness/document/root.tex", - "formal/lean/mathlib/algebra/order/hom/monoid.lean", - "formal/afp/WorkerWrapper/FixedPointTheorems.thy", - "formal/mizar/nat_5.miz", - "formal/hol/EC/wei25519.ml", - "formal/afp/CZH_Elementary_Categories/czh_ecategories/CZH_ECAT_Conclusions.thy", - "formal/afp/Containers/ITP-2013/Benchmark_Bool.thy", - "formal/afp/Call_Arity/ArityEtaExpansionSafe.thy", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2002/b/p11.lean", - "formal/afp/Word_Lib/Word_Lemmas.thy", - "formal/lean/mathlib/geometry/euclidean/basic.lean", - "formal/afp/ClockSynchInst/LynchInstance.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p66.lean", - "formal/afp/Quick_Sort_Cost/document/root.tex", - "formal/lean/mathlib/ring_theory/ideal/basic.lean", - "formal/afp/Dijkstra_Shortest_Path/Dijkstra_Impl.thy", - "formal/afp/UpDown_Scheme/Down.thy", - "formal/afp/Discrete_Summation/Discrete_Summation.thy", - "formal/lean/mathlib/category_theory/preadditive/additive_functor.lean", - "formal/afp/Transitive_Models/FiniteFun_Relative.thy", - "formal/lean/liquid/for_mathlib/truncation_Ext.lean", - "formal/afp/BinarySearchTree/document/root.tex", - "formal/afp/Poincare_Bendixson/Examples.thy", - "formal/lean/mathzoo/mathzoo/olympiads/mathd/numbertheory/p13.lean", - "formal/afp/HOL-CSP/Ndet.thy", - "formal/afp/Types_To_Sets_Extension/ETTS/Manual/ETTS_CR.thy", - "formal/afp/HOL-CSP/CSP.thy", - "formal/afp/Psi_Calculi/Weak_Psi_Congruence.thy", - "formal/hol/Help/GEN_REAL_ARITH.doc", - "formal/afp/Pell/Pell.thy", - "formal/mizar/zfmodel2.miz", - "formal/afp/Well_Quasi_Orders/Kruskal.thy", - "formal/lean/mathlib/data/nat/part_enat.lean", - "formal/lean/liquid/for_mathlib/Profinite/arrow_limit.lean", - "formal/lean/mathzoo/mathzoo/olympiads/amc/12/2019/a/p12.lean", - "formal/lean/mathlib/ring_theory/dedekind_domain/dvr.lean", - "formal/lean/mathlib/data/nat/gcd.lean", - "formal/lean/mathlib/data/real/conjugate_exponents.lean", - "formal/lean/mathlib/analysis/complex/upper_half_plane/topology.lean", - "formal/afp/List_Update/Inversion.thy", - "formal/afp/Ordinary_Differential_Equations/Numerics/Transfer_Euclidean_Space_Vector.thy", - "formal/afp/WorkerWrapper/Backtracking.thy", - "formal/afp/Flow_Networks/Graph_Impl.thy", - "formal/afp/Optics/Lens_Laws.thy", - "formal/afp/Randomised_Social_Choice/SDS_Lowering.thy", - "formal/afp/Polynomial_Interpolation/Divmod_Int.thy", - "formal/lean/mathlib/data/list/lattice.lean", - "formal/afp/Iptables_Semantics/No_Spoof_Embeddings.thy", - "formal/afp/Fishers_Inequality/Fishers_Inequality_Variations.thy", - "formal/afp/UPF/ElementaryPolicies.thy", - "formal/lean/mathzoo/mathzoo/olympiads/imo/2019/p1.lean", - 
"formal/afp/UTP/toolkit/Finite_Fun.thy", - "formal/afp/Buildings/Algebra.thy", - "formal/lean/mathlib/measure_theory/function/conditional_expectation/indicator.lean", - "formal/afp/Extended_Finite_State_Machines/VName.thy", - "formal/hol/EC/weierstrass.ml", - "formal/afp/Abstract-Hoare-Logics/Proc/PLang.thy", - "formal/afp/Pi_Calculus/Strong_Late_Bisim_SC.thy", - "formal/afp/Higher_Order_Terms/Fresh_Monad.thy", - "formal/afp/Network_Security_Policy_Verification/Examples/Tainting/MeasrDroid.thy", - "formal/afp/JinjaThreads/DFA/Opt.thy" - ], - "arxiv-train": [ - "./arxiv/1311.tar.gz", - "./arxiv/0810.tar.gz", - "./arxiv/9211.tar.gz", - "./arxiv/0701.tar.gz", - "./arxiv/2201.tar.gz", - "./arxiv/1903.tar.gz", - "./arxiv/2203.tar.gz", - "./arxiv/9305.tar.gz", - "./arxiv/0203.tar.gz", - "./arxiv/9108.tar.gz", - "./arxiv/0805.tar.gz", - "./arxiv/1610.tar.gz", - "./arxiv/1609.tar.gz", - "./arxiv/9908.tar.gz", - "./arxiv/1312.tar.gz", - "./arxiv/9208.tar.gz", - "./arxiv/1712.tar.gz", - "./arxiv/0404.tar.gz", - "./arxiv/0005.tar.gz", - "./arxiv/0905.tar.gz", - "./arxiv/9511.tar.gz", - "./arxiv/9707.tar.gz", - "./arxiv/1502.tar.gz", - "./arxiv/2001.tar.gz", - "./arxiv/0911.tar.gz", - "./arxiv/0202.tar.gz", - "./arxiv/1608.tar.gz", - "./arxiv/9602.tar.gz", - "./arxiv/9608.tar.gz", - "./arxiv/0608.tar.gz", - "./arxiv/0411.tar.gz", - "./arxiv/1507.tar.gz", - "./arxiv/9403.tar.gz", - "./arxiv/9504.tar.gz", - "./arxiv/1110.tar.gz", - "./arxiv/9207.tar.gz", - "./arxiv/1810.tar.gz", - "./arxiv/2010.tar.gz", - "./arxiv/1905.tar.gz", - "./arxiv/0409.tar.gz", - "./arxiv/2104.tar.gz", - "./arxiv/1409.tar.gz", - "./arxiv/0002.tar.gz", - "./arxiv/9706.tar.gz", - "./arxiv/0605.tar.gz", - "./arxiv/9710.tar.gz", - "./arxiv/9901.tar.gz", - "./arxiv/1009.tar.gz", - "./arxiv/1704.tar.gz", - "./arxiv/0312.tar.gz", - "./arxiv/0603.tar.gz", - "./arxiv/0407.tar.gz", - "./arxiv/1308.tar.gz", - "./arxiv/9302.tar.gz", - "./arxiv/1411.tar.gz", - "./arxiv/1503.tar.gz", - "./arxiv/9906.tar.gz", - "./arxiv/2202.tar.gz", - "./arxiv/9410.tar.gz", - "./arxiv/0110.tar.gz", - "./arxiv/0302.tar.gz", - "./arxiv/1401.tar.gz", - "./arxiv/1107.tar.gz", - "./arxiv/0712.tar.gz", - "./arxiv/9401.tar.gz", - "./arxiv/1405.tar.gz", - "./arxiv/1612.tar.gz", - "./arxiv/1012.tar.gz", - "./arxiv/1410.tar.gz", - "./arxiv/9812.tar.gz", - "./arxiv/1102.tar.gz", - "./arxiv/2007.tar.gz", - "./arxiv/2004.tar.gz", - "./arxiv/9303.tar.gz", - "./arxiv/0405.tar.gz", - "./arxiv/9702.tar.gz", - "./arxiv/1506.tar.gz", - "./arxiv/1008.tar.gz", - "./arxiv/0908.tar.gz", - "./arxiv/9610.tar.gz", - "./arxiv/1210.tar.gz", - "./arxiv/0804.tar.gz", - "./arxiv/2206.tar.gz", - "./arxiv/0402.tar.gz", - "./arxiv/0807.tar.gz", - "./arxiv/2003.tar.gz", - "./arxiv/1005.tar.gz", - "./arxiv/0510.tar.gz", - "./arxiv/9204.tar.gz", - "./arxiv/0107.tar.gz", - "./arxiv/9402.tar.gz", - "./arxiv/9807.tar.gz", - "./arxiv/1705.tar.gz", - "./arxiv/0611.tar.gz", - "./arxiv/0207.tar.gz", - "./arxiv/1304.tar.gz", - "./arxiv/1611.tar.gz", - "./arxiv/1011.tar.gz", - "./arxiv/0709.tar.gz", - "./arxiv/0310.tar.gz", - "./arxiv/0112.tar.gz", - "./arxiv/9310.tar.gz", - "./arxiv/9209.tar.gz", - "./arxiv/9212.tar.gz", - "./arxiv/1512.tar.gz", - "./arxiv/1509.tar.gz", - "./arxiv/0003.tar.gz", - "./arxiv/0607.tar.gz", - "./arxiv/9308.tar.gz", - "./arxiv/1001.tar.gz", - "./arxiv/0802.tar.gz", - "./arxiv/0509.tar.gz", - "./arxiv/0208.tar.gz", - "./arxiv/1909.tar.gz", - "./arxiv/0707.tar.gz", - "./arxiv/9905.tar.gz", - "./arxiv/1711.tar.gz", - "./arxiv/0512.tar.gz", - "./arxiv/0904.tar.gz", - 
"./arxiv/9206.tar.gz", - "./arxiv/2105.tar.gz", - "./arxiv/9306.tar.gz", - "./arxiv/2103.tar.gz", - "./arxiv/1403.tar.gz", - "./arxiv/2009.tar.gz", - "./arxiv/9805.tar.gz", - "./arxiv/9506.tar.gz", - "./arxiv/0106.tar.gz", - "./arxiv/9911.tar.gz", - "./arxiv/9711.tar.gz", - "./arxiv/1209.tar.gz", - "./arxiv/1002.tar.gz", - "./arxiv/0209.tar.gz", - "./arxiv/0501.tar.gz", - "./arxiv/1305.tar.gz", - "./arxiv/9405.tar.gz", - "./arxiv/9109.tar.gz", - "./arxiv/0401.tar.gz", - "./arxiv/0504.tar.gz", - "./arxiv/0702.tar.gz", - "./arxiv/0610.tar.gz", - "./arxiv/1402.tar.gz", - "./arxiv/1309.tar.gz", - "./arxiv/0109.tar.gz", - "./arxiv/0909.tar.gz", - "./arxiv/0303.tar.gz", - "./arxiv/9703.tar.gz", - "./arxiv/9704.tar.gz", - "./arxiv/9904.tar.gz", - "./arxiv/0511.tar.gz", - "./arxiv/1906.tar.gz", - "./arxiv/0801.tar.gz", - "./arxiv/9301.tar.gz", - "./arxiv/9601.tar.gz", - "./arxiv/9803.tar.gz", - "./arxiv/1007.tar.gz", - "./arxiv/0903.tar.gz", - "./arxiv/0009.tar.gz", - "./arxiv/2108.tar.gz", - "./arxiv/0506.tar.gz", - "./arxiv/1112.tar.gz", - "./arxiv/2204.tar.gz", - "./arxiv/0907.tar.gz", - "./arxiv/0507.tar.gz", - "./arxiv/0201.tar.gz", - "./arxiv/0803.tar.gz", - "./arxiv/0808.tar.gz", - "./arxiv/0505.tar.gz", - "./arxiv/1004.tar.gz", - "./arxiv/9809.tar.gz", - "./arxiv/1706.tar.gz", - "./arxiv/0210.tar.gz", - "./arxiv/0408.tar.gz", - "./arxiv/1105.tar.gz", - "./arxiv/0101.tar.gz", - "./arxiv/1812.tar.gz", - "./arxiv/1904.tar.gz", - "./arxiv/1707.tar.gz", - "./arxiv/9407.tar.gz", - "./arxiv/9404.tar.gz", - "./arxiv/1412.tar.gz", - "./arxiv/0502.tar.gz", - "./arxiv/1601.tar.gz", - "./arxiv/2109.tar.gz", - "./arxiv/9112.tar.gz", - "./arxiv/0007.tar.gz", - "./arxiv/1606.tar.gz", - "./arxiv/1211.tar.gz", - "./arxiv/1804.tar.gz", - "./arxiv/0308.tar.gz", - "./arxiv/0703.tar.gz", - "./arxiv/2110.tar.gz", - "./arxiv/0304.tar.gz", - "./arxiv/9110.tar.gz", - "./arxiv/9701.tar.gz", - "./arxiv/0010.tar.gz", - "./arxiv/0111.tar.gz", - "./arxiv/0910.tar.gz", - "./arxiv/9604.tar.gz", - "./arxiv/0311.tar.gz", - "./arxiv/1504.tar.gz", - "./arxiv/0809.tar.gz", - "./arxiv/9802.tar.gz", - "./arxiv/1701.tar.gz", - "./arxiv/9408.tar.gz", - "./arxiv/0410.tar.gz", - "./arxiv/9210.tar.gz", - "./arxiv/9409.tar.gz", - "./arxiv/0301.tar.gz", - "./arxiv/0012.tar.gz", - "./arxiv/0212.tar.gz", - "./arxiv/0403.tar.gz", - "./arxiv/0706.tar.gz", - "./arxiv/0705.tar.gz", - "./arxiv/1809.tar.gz", - "./arxiv/1602.tar.gz", - "./arxiv/2112.tar.gz", - "./arxiv/1902.tar.gz", - "./arxiv/9201.tar.gz", - "./arxiv/0306.tar.gz", - "./arxiv/0602.tar.gz", - "./arxiv/1106.tar.gz", - "./arxiv/9708.tar.gz", - "./arxiv/2012.tar.gz", - "./arxiv/9510.tar.gz", - "./arxiv/1310.tar.gz", - "./arxiv/1708.tar.gz", - "./arxiv/0704.tar.gz", - "./arxiv/2102.tar.gz", - "./arxiv/0004.tar.gz", - "./arxiv/1104.tar.gz", - "./arxiv/9808.tar.gz", - "./arxiv/0104.tar.gz", - "./arxiv/0503.tar.gz", - "./arxiv/1912.tar.gz", - "./arxiv/0601.tar.gz", - "./arxiv/0901.tar.gz", - "./arxiv/1302.tar.gz", - "./arxiv/9507.tar.gz", - "./arxiv/1603.tar.gz", - "./arxiv/9712.tar.gz", - "./arxiv/1605.tar.gz", - "./arxiv/9512.tar.gz", - "./arxiv/0710.tar.gz", - "./arxiv/9811.tar.gz", - "./arxiv/1109.tar.gz", - "./arxiv/1303.tar.gz", - "./arxiv/0902.tar.gz", - "./arxiv/9111.tar.gz", - "./arxiv/1803.tar.gz", - "./arxiv/1510.tar.gz", - "./arxiv/1604.tar.gz", - "./arxiv/2002.tar.gz", - "./arxiv/2107.tar.gz", - "./arxiv/9412.tar.gz", - "./arxiv/1306.tar.gz", - "./arxiv/9203.tar.gz", - "./arxiv/9705.tar.gz", - "./arxiv/9205.tar.gz", - "./arxiv/9612.tar.gz", - "./arxiv/9607.tar.gz", - 
"./arxiv/1010.tar.gz", - "./arxiv/1407.tar.gz", - "./arxiv/9709.tar.gz", - "./arxiv/9304.tar.gz", - "./arxiv/1501.tar.gz", - "./arxiv/1710.tar.gz", - "./arxiv/9503.tar.gz", - "./arxiv/9810.tar.gz", - "./arxiv/1801.tar.gz", - "./arxiv/1802.tar.gz", - "./arxiv/2011.tar.gz", - "./arxiv/0305.tar.gz", - "./arxiv/1101.tar.gz", - "./arxiv/9505.tar.gz", - "./arxiv/1808.tar.gz", - "./arxiv/1911.tar.gz", - "./arxiv/9609.tar.gz", - "./arxiv/0011.tar.gz", - "./arxiv/1408.tar.gz", - "./arxiv/1709.tar.gz", - "./arxiv/9309.tar.gz", - "./arxiv/1805.tar.gz", - "./arxiv/0211.tar.gz", - "./arxiv/2008.tar.gz", - "./arxiv/0711.tar.gz", - "./arxiv/0906.tar.gz", - "./arxiv/0206.tar.gz", - "./arxiv/9804.tar.gz", - "./arxiv/0912.tar.gz", - "./arxiv/9902.tar.gz", - "./arxiv/1108.tar.gz", - "./arxiv/9312.tar.gz", - "./arxiv/0811.tar.gz", - "./arxiv/1003.tar.gz", - "./arxiv/9907.tar.gz", - "./arxiv/0812.tar.gz", - "./arxiv/0604.tar.gz", - "./arxiv/9903.tar.gz", - "./arxiv/9910.tar.gz", - "./arxiv/9508.tar.gz", - "./arxiv/2205.tar.gz", - "./arxiv/9311.tar.gz", - "./arxiv/1212.tar.gz", - "./arxiv/0204.tar.gz", - "./arxiv/0008.tar.gz", - "./arxiv/2101.tar.gz", - "./arxiv/1505.tar.gz", - "./arxiv/0108.tar.gz", - "./arxiv/2106.tar.gz", - "./arxiv/0606.tar.gz", - "./arxiv/9502.tar.gz", - "./arxiv/1807.tar.gz", - "./arxiv/1511.tar.gz", - "./arxiv/9509.tar.gz", - "./arxiv/1103.tar.gz", - "./arxiv/9605.tar.gz", - "./arxiv/9912.tar.gz", - "./arxiv/1607.tar.gz", - "./arxiv/9611.tar.gz", - "./arxiv/1811.tar.gz", - "./arxiv/0103.tar.gz", - "./arxiv/9202.tar.gz", - "./arxiv/1404.tar.gz", - "./arxiv/0412.tar.gz", - "./arxiv/0102.tar.gz", - "./arxiv/9411.tar.gz", - "./arxiv/1908.tar.gz", - "./arxiv/1703.tar.gz", - "./arxiv/1508.tar.gz", - "./arxiv/1702.tar.gz", - "./arxiv/9307.tar.gz", - "./arxiv/9801.tar.gz", - "./arxiv/9603.tar.gz", - "./arxiv/9107.tar.gz", - "./arxiv/1901.tar.gz", - "./arxiv/0205.tar.gz", - "./arxiv/0105.tar.gz", - "./arxiv/0001.tar.gz", - "./arxiv/9501.tar.gz", - "./arxiv/9606.tar.gz", - "./arxiv/0309.tar.gz", - "./arxiv/0612.tar.gz", - "./arxiv/1910.tar.gz", - "./arxiv/9909.tar.gz", - "./arxiv/1301.tar.gz", - "./arxiv/1907.tar.gz", - "./arxiv/1111.tar.gz", - "./arxiv/2005.tar.gz", - "./arxiv/0708.tar.gz", - "./arxiv/0609.tar.gz", - "./arxiv/1307.tar.gz", - "./arxiv/2111.tar.gz", - "./arxiv/0307.tar.gz", - "./arxiv/0508.tar.gz" - ], - "arxiv-valid": [ - "./arxiv/9406.tar.gz", - "./arxiv/0806.tar.gz", - "./arxiv/1806.tar.gz", - "./arxiv/0006.tar.gz", - "./arxiv/9806.tar.gz", - "./arxiv/1406.tar.gz", - "./arxiv/2006.tar.gz", - "./arxiv/0406.tar.gz", - "./arxiv/1006.tar.gz" - ] -} \ No newline at end of file diff --git a/stack-exchange/cstheory_stack_exchange/train.jsonl.gz b/stack-exchange/cstheory_stack_exchange/train.jsonl.gz deleted file mode 100644 index d158a213b730cafbe05dcc7f4ad1c8d895b8d23d..0000000000000000000000000000000000000000 --- a/stack-exchange/cstheory_stack_exchange/train.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f4ea5325f4ceaa5fd6cba23ac0cca3f8e3dc0140be9e70dc99147cf5e65556cb -size 3701404 diff --git a/stack-exchange/cstheory_stack_exchange/val.jsonl.gz b/stack-exchange/cstheory_stack_exchange/val.jsonl.gz deleted file mode 100644 index fd811cc78e9cbeec4fcaef63852823be3218b5ff..0000000000000000000000000000000000000000 --- a/stack-exchange/cstheory_stack_exchange/val.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:98622f0796509680300e97b7843d38a091c2b50880c340a35a3d63d92876a30a -size 215910 diff 
--git a/stack-exchange/datascience_stack_exchange/train.jsonl.gz b/stack-exchange/datascience_stack_exchange/train.jsonl.gz deleted file mode 100644 index b3c4482d79927d190939498b7dd1cae01161a0e3..0000000000000000000000000000000000000000 --- a/stack-exchange/datascience_stack_exchange/train.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:e0ca0836fe99e6b9cbdb05180493dcf464bb44592c00d54c5397d7e12a4f33ac -size 1519372 diff --git a/stack-exchange/datascience_stack_exchange/val.jsonl.gz b/stack-exchange/datascience_stack_exchange/val.jsonl.gz deleted file mode 100644 index 61f1de36d2aab7f533bebacbe7ecd26ec6b4aeeb..0000000000000000000000000000000000000000 --- a/stack-exchange/datascience_stack_exchange/val.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:702e3759dfe377035560f58166bc2b69c6c7afb578140e454c755ff52ae0aa91 -size 94833 diff --git a/stack-exchange/math_overflow/train.jsonl.gz b/stack-exchange/math_overflow/train.jsonl.gz deleted file mode 100644 index 9a60c5eba6c9a592280ce3f64a5fa750c771fc61..0000000000000000000000000000000000000000 --- a/stack-exchange/math_overflow/train.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:0b724b86c6bb934b77618f96339c82d289eacfc5a23638d71497a31f38dbefe5 -size 37213938 diff --git a/stack-exchange/math_overflow/val.jsonl.gz b/stack-exchange/math_overflow/val.jsonl.gz deleted file mode 100644 index 167db75df6bac1eec0eda9f25cdaf851d228570e..0000000000000000000000000000000000000000 --- a/stack-exchange/math_overflow/val.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:7600b12e621b39badb4c5f8c234f34f250bf18758e2164b7d2b2c12fdba64249 -size 1983068 diff --git a/stack-exchange/math_stack_exchange/train.jsonl.gz b/stack-exchange/math_stack_exchange/train.jsonl.gz deleted file mode 100644 index 15889110097282d3ce904e26bac7e300be310540..0000000000000000000000000000000000000000 --- a/stack-exchange/math_stack_exchange/train.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:d181f7beaaa8f81ffa18a1cfb45bddabc63c124320d3d763cc695b3504430bdc -size 69598386 diff --git a/stack-exchange/math_stack_exchange/val.jsonl.gz b/stack-exchange/math_stack_exchange/val.jsonl.gz deleted file mode 100644 index 44f251b897dbcb43adadb26cab4fadba91ebfcfc..0000000000000000000000000000000000000000 --- a/stack-exchange/math_stack_exchange/val.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:378cf6ae6e18351734058d42d61164e28256962f6360202b058cfa9884f4823f -size 3619154 diff --git a/stack-exchange/physics_stack_exchange/train.jsonl.gz b/stack-exchange/physics_stack_exchange/train.jsonl.gz deleted file mode 100644 index 8a702dfd2b70275625ac9b6c16c576e87f4563cc..0000000000000000000000000000000000000000 --- a/stack-exchange/physics_stack_exchange/train.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:850d97fc875677d728c12fe01eda1054298a4868e38a5021bcfb129d62981a33 -size 18267459 diff --git a/stack-exchange/physics_stack_exchange/val.jsonl.gz b/stack-exchange/physics_stack_exchange/val.jsonl.gz deleted file mode 100644 index e4b9563882870f458efa6782cf2fe4cf9eb0dfc0..0000000000000000000000000000000000000000 --- a/stack-exchange/physics_stack_exchange/val.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid 
sha256:ee48095067a23ba97438e1cf4dd894147fbde3ffcea3a6792377c5f711f615b1 -size 961947 diff --git a/stack-exchange/proofassistants_stack_exchange/train.jsonl.gz b/stack-exchange/proofassistants_stack_exchange/train.jsonl.gz deleted file mode 100644 index 53fc9cd4db4b99636dac32d82f5686c9a1120665..0000000000000000000000000000000000000000 --- a/stack-exchange/proofassistants_stack_exchange/train.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:61cbb7ffe82497741a84f630bf07fc590804a65af6c29e1b212b6fbeff82e3cc -size 258713 diff --git a/stack-exchange/proofassistants_stack_exchange/val.jsonl.gz b/stack-exchange/proofassistants_stack_exchange/val.jsonl.gz deleted file mode 100644 index 4adde4e0d5eb6e45ac20871ad36a2b9eead5be7a..0000000000000000000000000000000000000000 --- a/stack-exchange/proofassistants_stack_exchange/val.jsonl.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:b72ae771d6c54abe69507fea91e71d53a53b46dcc70c65d4be5302a81d278580 -size 17195 diff --git a/test.py b/test.py deleted file mode 100644 index d1584f46276f7db7b35fb272fa9ac73d75feca02..0000000000000000000000000000000000000000 --- a/test.py +++ /dev/null @@ -1,27 +0,0 @@ -from datasets import load_dataset -from itertools import islice -import sys -import time -from tqdm import tqdm - -total_size = 0 -for x in ["arxiv", "wiki", "math-dataset", "stack-exchange", "books", "formal"]: - dataset = load_dataset("./proof-pile.py", x) - size = dataset["train"].dataset_size / 2**30 - total_size += size - print(x.upper()) - print(dataset) - for x in tqdm(dataset["train"]): - print("EXAMPLE INSTANCE (trimmed):") - print(x["text"][:100]) - break - - then = time.time() - for x in tqdm(dataset["train"]): - pass - now = time.time() - print(f"{size} GB TRAIN TOTAL") - print(f"TRAVERSED IN {now-then} SECONDS") - -print(f"{total_size} GB ACROSS ALL CONFIGS") - diff --git a/test/proofpile_test.jsonl.gz b/test/proofpile_test.jsonl.gz new file mode 100644 index 0000000000000000000000000000000000000000..90bf61c9bfa9ba067142ad6557fee9e5d49ecc1c --- /dev/null +++ b/test/proofpile_test.jsonl.gz @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3de76bef2067cd06de681bf7bc8391cfd46cff7c964aea4b8be22002881a2cff +size 138257828 diff --git a/train/proofpile_train_0.jsonl.gz b/train/proofpile_train_0.jsonl.gz new file mode 100644 index 0000000000000000000000000000000000000000..f4c6184b0dcc4a7dc7f9618e4e8565392d3745f7 --- /dev/null +++ b/train/proofpile_train_0.jsonl.gz @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ebd551043986f06477bee3af11d0d57180e9e440c3c9d8be0a378ace984ca525 +size 1245144003 diff --git a/train/proofpile_train_1.jsonl.gz b/train/proofpile_train_1.jsonl.gz new file mode 100644 index 0000000000000000000000000000000000000000..0b61e08488b554f618968240d5744b03c45680ca --- /dev/null +++ b/train/proofpile_train_1.jsonl.gz @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b90e03d1dc1df325985af452aa5abc69b5e211adf855914a375c72f1e691153 +size 1248678238 diff --git a/train/proofpile_train_2.jsonl.gz b/train/proofpile_train_2.jsonl.gz new file mode 100644 index 0000000000000000000000000000000000000000..999bd334d4c2ec04d33e3acb6d728198ea3db7a9 --- /dev/null +++ b/train/proofpile_train_2.jsonl.gz @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4ce123e250490a8b86b1df06a871d3f12513d8731d468fcd6443d8b25f358f0c +size 1248942967 diff --git a/train/proofpile_train_3.jsonl.gz 
b/train/proofpile_train_3.jsonl.gz new file mode 100644 index 0000000000000000000000000000000000000000..234accec09b12a7daa22541fe0c9f63b820b2df9 --- /dev/null +++ b/train/proofpile_train_3.jsonl.gz @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:535055dbdc6a0e629d7de6a2a8d4b61d86410f1d939fdba773621fc01c5e8f66 +size 1254135740 diff --git a/train/proofpile_train_4.jsonl.gz b/train/proofpile_train_4.jsonl.gz new file mode 100644 index 0000000000000000000000000000000000000000..83a0791f065d812cdd8050e733cb6dcbf8a2808a --- /dev/null +++ b/train/proofpile_train_4.jsonl.gz @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4ab7d1b0c2c158622da7b2c4f24233b9d225d4fc8113099eefa1abc0fbcf68f0 +size 1243365834 diff --git a/train/proofpile_train_5.jsonl.gz b/train/proofpile_train_5.jsonl.gz new file mode 100644 index 0000000000000000000000000000000000000000..f449117a44f4d32d769a6de920ba16702ad1cd76 --- /dev/null +++ b/train/proofpile_train_5.jsonl.gz @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e58deabc3bf0e47c5e49b5e6a8f42d4f3ab467a3f4cb5d05114004e78324c67b +size 1271854185 diff --git a/train/proofpile_train_6.jsonl.gz b/train/proofpile_train_6.jsonl.gz new file mode 100644 index 0000000000000000000000000000000000000000..c36b6932320fbdde13dd6928e873923b56f2d869 --- /dev/null +++ b/train/proofpile_train_6.jsonl.gz @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d0b301a389f8f050f05485d28d45e62a3cfd838dbc2f044f8cf0eb3da4da2813 +size 1233941494 diff --git a/train/proofpile_train_7.jsonl.gz b/train/proofpile_train_7.jsonl.gz new file mode 100644 index 0000000000000000000000000000000000000000..390823cf6e5fe83d923149eddad736cf7b216942 --- /dev/null +++ b/train/proofpile_train_7.jsonl.gz @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:110db34dfd1878ab9c7f67c14532fb90cf41c72182c2563e39c9ff42384f31da +size 1005422313 diff --git a/wiki/proofwiki.tar.gz b/wiki/proofwiki.tar.gz deleted file mode 100644 index 687edc58a923b95830e0ec5c03793643ca11feca..0000000000000000000000000000000000000000 --- a/wiki/proofwiki.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:d21be26a22abd10e72a15caa428794f4e28e060b659a51468e5dd20c4750de32 -size 6968336 diff --git a/wiki/proofwiki_val.tar.gz b/wiki/proofwiki_val.tar.gz deleted file mode 100644 index 69ce68e22018eacf2ac071b41f19eec813c1ef2d..0000000000000000000000000000000000000000 --- a/wiki/proofwiki_val.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:858108a7d68301916eedd4807033f274512fd13bea9fd66f7e5da671f2448f0d -size 213288 diff --git a/wiki/wikipedia.tar.gz b/wiki/wikipedia.tar.gz deleted file mode 100644 index 1cc3e9565f2637884af119e9296bf3d6fa46d176..0000000000000000000000000000000000000000 --- a/wiki/wikipedia.tar.gz +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:9e28e3a71c55fd4978030804d2d7cb60e3e626ad34b788046e912ee4f336075e -size 7195035 diff --git a/wiki/wikipedia/0.txt b/wiki/wikipedia/0.txt deleted file mode 100644 index f24b1f50af15645f20bfc26f747f54e2279407a0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/0.txt +++ /dev/null @@ -1,398 +0,0 @@ -The Basel problem is a problem in mathematical analysis with relevance to number theory, first posed by Pietro Mengoli in 1650 and solved by Leonhard Euler in 1734, and read on 5 December 1735 in The Saint Petersburg Academy of Sciences. 
Since the problem had withstood the attacks of the leading mathematicians of the day, Euler's solution brought him immediate fame when he was twenty-eight. Euler generalised the problem considerably, and his ideas were taken up years later by Bernhard Riemann in his seminal 1859 paper "On the Number of Primes Less Than a Given Magnitude", in which he defined his zeta function and proved its basic properties. The problem is named after Basel, hometown of Euler as well as of the Bernoulli family who unsuccessfully attacked the problem. - -The Basel problem asks for the precise summation of the reciprocals of the squares of the natural numbers, i.e. the precise sum of the infinite series: -$$ -\sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \cdots. -$$ - -The sum of the series is approximately equal to 1.644934. The Basel problem asks for the exact sum of this series (in closed form), as well as a proof that this sum is correct. Euler found the exact sum to be $\pi^2/6$ and announced this discovery in 1735. His arguments were based on manipulations that were not justified at the time, although he was later proven correct. He produced a truly rigorous proof in 1741. - -The solution to this problem can be used to estimate the probability that two large random numbers are relatively prime. Two random integers in the range from 1 to $n$, in the limit as $n$ goes to infinity, are relatively prime with a probability that approaches $6/\pi^2$, the inverse of the solution to the Basel problem. - -Euler's original derivation of the value $\pi^2/6$ essentially extended observations about finite polynomials and assumed that these same properties hold true for infinite series. - -Of course, Euler's original reasoning requires justification (100 years later, Karl Weierstrass proved that Euler's representation of the sine function as an infinite product is valid, by the Weierstrass factorization theorem), but even without justification, by simply obtaining the correct value, he was able to verify it numerically against partial sums of the series. The agreement he observed gave him sufficient confidence to announce his result to the mathematical community. - -To follow Euler's argument, recall the Taylor series expansion of the sine function -$$ - \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots -$$ - -Dividing through by $x$, we have -$$ - \frac{\sin x}{x} = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \frac{x^6}{7!} + \cdots -$$ - -Using the Weierstrass factorization theorem, it can also be shown that the right-hand side is the product of linear factors given by its roots, just as we do for finite polynomials (which Euler assumed as a heuristic for expanding an infinite degree polynomial in terms of its roots, but in fact is not always true for general $P(x)$): - -\begin{align} - -\frac{\sin x}{x} &= \left(1 - \frac{x}{\pi}\right)\left(1 + \frac{x}{\pi}\right)\left(1 - \frac{x}{2\pi}\right)\left(1 + \frac{x}{2\pi}\right)\left(1 - \frac{x}{3\pi}\right)\left(1 + \frac{x}{3\pi}\right) \cdots \\ - -&= \left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\left(1 - \frac{x^2}{9\pi^2}\right) \cdots - -\end{align} - -If we formally multiply out this product and collect all the $x^2$ terms (we are allowed to do so because of Newton's identities), we see by induction that the $x^2$ coefficient of $\sin x/x$ is -$$ - -\left(\frac{1}{\pi^2} + \frac{1}{4\pi^2} + \frac{1}{9\pi^2} + \cdots \right) = -\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}. -$$
 - -But from the original infinite series expansion of $\sin x/x$, the coefficient of $x^2$ is $-1/3! = -1/6$. These two coefficients must be equal; thus, -$$ --\frac{1}{6} = -\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}. -$$ - -Multiplying both sides of this equation by $-\pi^2$ gives the sum of the reciprocals of the positive square integers. -$$ -\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}. -$$ - -This method of calculating $\zeta(2)$ is detailed in expository fashion most notably in Havil's Gamma book, which details many zeta function and logarithm-related series and integrals, as well as a historical perspective, related to the Euler gamma constant. - -Using formulae obtained from elementary symmetric polynomials, this same approach can be used to enumerate formulae for the even-indexed zeta constants, which have the following known formula expanded by the Bernoulli numbers: -$$ -\zeta(2n) = \frac{(-1)^{n-1} (2\pi)^{2n}}{2 \cdot (2n)!} B_{2n}. -$$ - -For example, let the partial product for $\sin(x)$ expanded as above be defined by $\frac{S_n(x)}{x} := \prod\limits_{k=1}^n \left(1 - \frac{x^2}{k^2 \cdot \pi^2}\right)$. Then using known formulas for elementary symmetric polynomials (a.k.a. Newton's formulas expanded in terms of power sum identities), we can see (for example) that - - - -\begin{align} - -\left[x^4\right] \frac{S_n(x)}{x} & = \frac{1}{2\pi^4}\left(\left(H_n^{(2)}\right)^2 - H_n^{(4)}\right) \qquad \xrightarrow{n \rightarrow \infty} \qquad \frac{1}{2}\left(\zeta(2)^2-\zeta(4)\right) \\ - -& \qquad \implies \zeta(4) = \frac{\pi^4}{90} = -2\pi^2 \cdot [x^4] \frac{\sin(x)}{x} +\frac{\pi^4}{36} \\ - -\left[x^6\right] \frac{S_n(x)}{x} & = -\frac{1}{6\pi^6}\left(\left(H_n^{(2)}\right)^3 - 2H_n^{(2)} H_n^{(4)} + 2H_n^{(6)}\right) \qquad \xrightarrow{n \rightarrow \infty} \qquad \frac{1}{6}\left(\zeta(2)^3-3\zeta(2)\zeta(4) + 2\zeta(6)\right) \\ - -& \qquad \implies \zeta(6) = \frac{\pi^6}{945} = -3 \cdot \pi^6 [x^6] \frac{\sin(x)}{x} - \frac{2}{3} \frac{\pi^2}{6} \frac{\pi^4}{90} + \frac{\pi^6}{216}, - -\end{align} - - - -and so on for subsequent coefficients of $[x^{2k}] \frac{S_n(x)}{x}$. There are other forms of Newton's identities expressing the (finite) power sums $H_n^{(2k)}$ in terms of the elementary symmetric polynomials, $e_i \equiv e_i\left(-\frac{\pi^2}{1^2}, -\frac{\pi^2}{2^2}, -\frac{\pi^2}{3^2}, -\frac{\pi^2}{4^2}, \cdots\right),$ but we can go a more direct route to expressing non-recursive formulas for $\zeta(2k)$ using the method of elementary symmetric polynomials. Namely, we have a recurrence relation between the elementary symmetric polynomials and the power sum polynomials given by -$$ -(-1)^{k}k e_k(x_1,\ldots,x_n) = \sum_{j=1}^k (-1)^{k-j-1} p_j(x_1,\ldots,x_n)e_{k-j}(x_1,\ldots,x_n), -$$ - -which in our situation equates to the limiting recurrence relation (or generating function convolution, or product) expanded as -$$ - \frac{\pi^{2k}}{2}\cdot \frac{(2k) \cdot (-1)^k}{(2k+1)!} = -[x^{2k}] \frac{\sin(\pi x)}{\pi x} \times \sum_{i \geq 1} \zeta(2i) x^{2i}. -$$ - -Then by differentiation and rearrangement of the terms in the previous equation, we obtain that -$$ -\zeta(2k) = [x^{2k}]\frac{1}{2}\left(1-\pi x\cot(\pi x)\right). -$$ - -By the above results, we can conclude that $\zeta(2k)$ is always a rational multiple of $\pi^{2k}$. In particular, since $\pi$ and integer powers of it are transcendental, we can conclude at this point that $\zeta(2k)$ is irrational, and more precisely, transcendental for all $k \geq 1$.
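The closed form just derived is easy to sanity-check symbolically. Below is a minimal sketch (assuming the `sympy` library, which is not part of this repository) that expands $\frac{1}{2}\left(1-\pi x\cot(\pi x)\right)$ as a series and compares each $[x^{2k}]$ coefficient against the Bernoulli-number formula for $\zeta(2n)$ quoted above:

```python
# Sketch: check zeta(2k) = [x^(2k)] (1 - pi*x*cot(pi*x))/2 against the
# Bernoulli-number formula; assumes sympy is installed.
import sympy as sp

x = sp.symbols("x")
# The Laurent expansion of cot cancels the pole, leaving an even power series.
gen = sp.series((1 - sp.pi * x * sp.cot(sp.pi * x)) / 2, x, 0, 10).removeO()
for k in (1, 2, 3, 4):
    coeff = gen.coeff(x, 2 * k)  # the [x^(2k)] coefficient
    bernoulli_form = ((-1) ** (k - 1) * (2 * sp.pi) ** (2 * k)
                      * sp.bernoulli(2 * k) / (2 * sp.factorial(2 * k)))
    assert sp.simplify(coeff - bernoulli_form) == 0
    print(f"zeta({2 * k}) =", coeff)  # pi**2/6, pi**4/90, pi**6/945, pi**8/9450
```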
By contrast, the properties of the odd-indexed zeta constants, including Apéry's constant $\zeta(3)$, are almost completely unknown. - -The Riemann zeta function ζ(s) is one of the most significant functions in mathematics because of its relationship to the distribution of the prime numbers. The zeta function is defined for any complex number s with real part greater than 1 by the following formula: -$$ -\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}. -$$ - -Taking s = 2, we see that ζ(2) is equal to the sum of the reciprocals of the squares of all positive integers: - -\zeta(2) = \sum_{n=1}^\infty \frac{1}{n^2} - -= \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \cdots = \frac{\pi^2}{6} \approx 1.644934. - -Convergence can be proven by the integral test, or by the following inequality: - -\begin{align} - -\sum_{n=1}^N \frac{1}{n^2} & < 1 + \sum_{n=2}^N \frac{1}{n(n-1)} \\ - -& = 1 + \sum_{n=2}^N \left( \frac{1}{n-1} - \frac{1}{n} \right) \\ - -& = 1 + 1 - \frac{1}{N} {\stackrel{N \to \infty}{\longrightarrow}} 2. - -\end{align} - -This gives us the upper bound 2, and because the infinite sum contains no negative terms, it must converge to a value strictly between 0 and 2. It can be shown that ζ(s) has a simple expression in terms of the Bernoulli numbers whenever s is a positive even integer. With s = 2n: -$$ -\zeta(2n) = \frac{(2\pi)^{2n}(-1)^{n+1}B_{2n}}{2\cdot(2n)!}. -$$ - -The normalized sinc function $\text{sinc}(x)=\frac{\sin (\pi x)}{\pi x}$ has a Weierstrass factorization representation as an infinite product: -$$ -\frac{\sin (\pi x)}{\pi x} = \prod_{n=1}^\infty \left(1-\frac{x^2}{n^2}\right). -$$ - -The infinite product is analytic, so taking the natural logarithm of both sides and differentiating yields -$$ -\frac{\pi \cos (\pi x)}{\sin (\pi x)}-\frac{1}{x}=-\sum_{n=1}^\infty \frac{2x}{n^2-x^2}. -$$ - -After dividing the equation by $2x$ and regrouping, one gets -$$ -\frac{1}{2x^2}-\frac{\pi \cot (\pi x)}{2x}=\sum_{n=1}^\infty \frac{1}{n^2-x^2}. -$$ - -We make a change of variables ($x=-it$): -$$ --\frac{1}{2t^2}+\frac{\pi \cot (-\pi it)}{2it}=\sum_{n=1}^\infty \frac{1}{n^2+t^2}. -$$ - -Euler's formula can be used to deduce that -$$ -\frac{\pi \cot (-\pi i t)}{2it}=\frac{\pi}{2it}\frac{i\left(e^{2\pi t}+1\right)}{e^{2\pi t}-1}=\frac{\pi}{2t}+\frac{\pi}{t\left(e^{2\pi t} - 1\right)}, -$$ - -or, using hyperbolic functions, -$$ -\frac{\pi \cot (-\pi i t)}{2it}=\frac{\pi}{2t}{i\cot (\pi i t)}=\frac{\pi}{2t}\coth(\pi t). -$$ - -Then -$$ -\sum_{n=1}^\infty \frac{1}{n^2+t^2}=\frac{\pi \left(te^{2\pi t}+t\right)-e^{2\pi t}+1}{2\left(t^2 e^{2\pi t}-t^2\right)}=-\frac{1}{2t^2} + \frac{\pi}{2t} \coth(\pi t). -$$ - -Now we take the limit as $t$ approaches zero and use L'Hôpital's rule thrice: -$$ -\sum_{n=1}^\infty \frac{1}{n^2}=\lim_{t\to 0}\frac{\pi}{4}\frac{2\pi te^{2\pi t}-e^{2\pi t}+1}{\pi t^2 e^{2\pi t} + te^{2\pi t}-t} -$$ -$$ -\sum_{n=1}^\infty \frac{1}{n^2}=\lim_{t\to 0}\frac{\pi^3 te^{2\pi t}}{2\pi \left(\pi t^2 e^{2\pi t}+2te^{2\pi t} \right)+e^{2\pi t}-1} -$$ -$$ -\sum_{n=1}^\infty \frac{1}{n^2}=\lim_{t\to 0}\frac{\pi^2 (2\pi t+1)}{4\pi^2 t^2+12\pi t+6} -$$ -$$ -\sum_{n=1}^\infty \frac{1}{n^2}=\frac{\pi^2}{6}. -$$
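Because the whole derivation funnels into that final limit, a quick numerical spot check is reassuring. The sketch below uses only the Python standard library (the helper name `f` is ours, not from any source) to confirm that $-\frac{1}{2t^2}+\frac{\pi}{2t}\coth(\pi t)$ approaches $\pi^2/6$ as $t \to 0$:

```python
# Sketch: numerically confirm -1/(2 t^2) + (pi/(2 t)) * coth(pi t) -> pi^2/6
# as t -> 0; standard library only (coth(y) = cosh(y) / sinh(y)).
import math

def f(t):
    coth = math.cosh(math.pi * t) / math.sinh(math.pi * t)
    return -1 / (2 * t ** 2) + math.pi * coth / (2 * t)

for t in (1.0, 0.1, 0.01, 0.001):
    print(f"t = {t:<6} f(t) = {f(t):.10f}")
print("pi^2/6 =", math.pi ** 2 / 6)  # 1.6449340668...
```

(Pushing $t$ much below $10^{-3}$ runs into floating-point cancellation between the two large terms, so the check stops there.)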
 - -Use Parseval's identity (applied to the function f(x) = x) to obtain -$$ -\sum_{n=-\infty}^\infty |c_n|^2 = \frac{1}{2\pi}\int_{-\pi}^\pi x^2 dx, -$$ - -where - -\begin{align} - -c_n &= \frac{1}{2\pi}\int_{-\pi}^\pi x e^{-inx} dx \\[4pt] - -&= \frac{n\pi \cos(n\pi)-\sin(n\pi)}{\pi n^2} i \\[4pt] - -&= \frac{\cos(n\pi)}{n} i \\[4pt] - -&= \frac{(-1)^n}{n} i - -\end{align} - -for n ≠ 0, and $c_0 = 0$. Thus, - -|c_n|^2 = \begin{cases} - -\dfrac{1}{n^2}, & \text{for } n \neq 0, \\ - -0, & \text{for } n = 0, - -\end{cases} - - - -and -$$ -\sum_{n=-\infty}^\infty |c_n|^2 = 2\sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{2\pi} \int_{-\pi}^\pi x^2 dx. -$$ - -Therefore, -$$ -\sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{4\pi}\int_{-\pi}^\pi x^2 dx = \frac{\pi^2}{6} -$$ - -as required. - -Given a complete orthonormal basis in the space $L^2_{\operatorname{per}}(0, 1)$ of $L^2$ periodic functions over $(0, 1)$ (i.e., the subspace of square-integrable functions which are also periodic), denoted by $\{e_i\}_{i=-\infty}^{\infty}$, Parseval's identity tells us that -$$ -\|x\|^2 = \sum_{i=-\infty}^{\infty} |\langle e_i, x\rangle|^2, -$$ - -where $\|x\| := \sqrt{\langle x,x\rangle}$ is defined in terms of the inner product on this Hilbert space given by -$$ -\langle f, g\rangle = \int_0^1 f(x) \overline{g(x)} dx,\ f,g \in L^2_{\operatorname{per}}(0, 1). -$$ - -We can consider the orthonormal basis on this space defined by $e_k \equiv e_k(\vartheta) := \exp(2\pi\imath k \vartheta)$ such that $\langle e_k,e_j\rangle = \int_0^1 e^{2\pi\imath (k-j) \vartheta} d\vartheta = \delta_{k,j}$. Then if we take $f(\vartheta) := \vartheta$, we can compute both that - - - -\begin{align} - -\|f\|^2 & = \int_0^1 \vartheta^2 d\vartheta = \frac{1}{3} \\ - -\langle f, e_k\rangle & = \int_0^1 \vartheta e^{-2\pi\imath k\vartheta} d\vartheta = \Biggl\{\begin{array}{ll} \frac{1}{2}, & k = 0 \\ -\frac{1}{2\pi\imath k} & k \neq 0, \end{array} - -\end{align} - - - -by elementary calculus and integration by parts, respectively. Finally, by Parseval's identity stated in the form above, we obtain that - - - -\begin{align} - -\|f\|^2 = \frac{1}{3} & = \sum_{\stackrel{k=-\infty}{k \neq 0}}^{\infty} \frac{1}{(2\pi k)^2}+ \frac{1}{4} - -= 2 \sum_{k=1}^{\infty} \frac{1}{(2\pi k)^2}+ \frac{1}{4} \\ - -& \implies \frac{\pi^2}{6} = \frac{2 \pi^2}{3} - \frac{\pi^2}{2} = \zeta(2). - -\end{align} - - - -Note that by considering higher-order powers of $f_j(\vartheta) := \vartheta^j \in L^2_{\operatorname{per}}(0, 1)$ we can use integration by parts to extend this method to enumerating formulas for $\zeta(2j)$ when $j > 1$. In particular, suppose we let -$$ -I_{j,k} := \int_0^1 \vartheta^j e^{-2\pi\imath k\vartheta} d\vartheta, -$$ - -so that integration by parts yields the recurrence relation that - - - -\begin{align} - -I_{j,k} & = \Biggl\{\begin{array}{ll} \frac{1}{j+1}, & k=0; \\ -\frac{1}{2\pi\imath \cdot k} + \frac{j}{2\pi\imath \cdot k} I_{j-1,k}, & k \neq 0\end{array} \\ - -& = \Biggl\{\begin{array}{ll} \frac{1}{j+1}, & k=0; \\ -\sum\limits_{m=1}^{j} \frac{j!}{(j+1-m)!} \cdot \frac{1}{(2\pi\imath \cdot k)^{m}}, & k \neq 0\end{array}. - -\end{align} - - - -Then applying Parseval's identity as we did for the first case above, along with the linearity of the inner product, yields that - - - -\begin{align} - -\|f_j\|^2 = \frac{1}{2j+1} & = 2 \sum_{k \geq 1} I_{j,k} \bar{I}_{j,k} + \frac{1}{(j+1)^2} \\ - -& = 2 \sum_{m=1}^j \sum_{r=1}^j \frac{j!^2}{(j+1-m)! (j+1-r)!} \frac{(-1)^r}{\imath^{m+r}} \frac{\zeta(m+r)}{(2\pi)^{m+r}} + \frac{1}{(j+1)^2}. - -\end{align}
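Both the coefficient computation and the Parseval bookkeeping above lend themselves to a direct numerical check. Below is a minimal sketch using only the Python standard library (the crude midpoint Riemann sum and the grid size `N` are arbitrary choices of ours, not from any source):

```python
# Sketch: approximate c_n = (1/(2*pi)) * integral of x*exp(-i*n*x) over (-pi, pi)
# with a midpoint Riemann sum and compare with (-1)^n * i / n; then check Parseval.
import cmath
import math

def c(n, N=100_000):
    h = 2 * math.pi / N
    xs = (-math.pi + (j + 0.5) * h for j in range(N))
    return sum(x * cmath.exp(-1j * n * x) for x in xs) * h / (2 * math.pi)

for n in (1, 2, 3):
    print(n, c(n), complex(0, (-1) ** n / n))  # numerically close

# Parseval: sum over n of |c_n|^2 = 2 * sum 1/n^2 should equal
# (1/(2*pi)) * integral of x^2 over (-pi, pi) = pi^2/3.
print(2 * sum(1 / n ** 2 for n in range(1, 10 ** 6)), math.pi ** 2 / 3)
```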
- -\end{align} - - - -While most proofs use results from advanced mathematics, such as Fourier analysis, complex analysis, and multivariable calculus, the following does not even require single-variable calculus (until a single limit is taken at the end). - -For a proof using the residue theorem, see the linked article. - -The proof goes back to Augustin Louis Cauchy (Cours d'Analyse, 1821, Note VIII). In 1954, this proof appeared in the book of Akiva and Isaak Yaglom "Nonelementary Problems in an Elementary Exposition". Later, in 1982, it appeared in the journal Eureka, attributed to John Scholes, but Scholes claims he learned the proof from Peter Swinnerton-Dyer, and in any case he maintains the proof was "common knowledge at Cambridge in the late 1960s". - -[[File:limit circle FbN.jpeg|thumb|The inequality
-$$ -\tfrac{1}{2}r^2\tan\theta > \tfrac{1}{2}r^2\theta > \tfrac{1}{2}r^2\sin\theta -$$
- -is shown. Taking reciprocals and squaring gives
-$$
-\cot^2\theta<\tfrac{1}{\theta^2}<\csc^2\theta
-$$.]]
-
-The main idea behind the proof is to bound the partial (finite) sums
-$$
-\sum_{k=1}^m \frac{1}{k^2} = \frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{m^2}
-$$
-
-between two expressions, each of which will tend to $\pi^2/6$ as m approaches infinity. The two expressions are derived from identities involving the cotangent and cosecant functions. These identities are in turn derived from de Moivre's formula, and we now turn to establishing these identities.
-
-Let x be a real number with $0 < x < \pi/2$, and let n be a positive odd integer. Then from de Moivre's formula and the definition of the cotangent function, we have
-
-\begin{align}
-\frac{\cos (nx) + i \sin (nx)}{\sin^n x} &= \frac{(\cos x + i\sin x)^n}{\sin^n x} \\[4pt]
-&= \left(\frac{\cos x + i \sin x}{\sin x}\right)^n \\[4pt]
-&= (\cot x + i)^n.
-\end{align}
-
-From the binomial theorem, we have
-
-\begin{align}
-(\cot x + i)^n
-= & {n \choose 0} \cot^n x + {n \choose 1} (\cot^{n - 1} x)i + \cdots + {n \choose {n - 1}} (\cot x)i^{n - 1} + {n \choose n} i^n \\[6pt]
-= & \Bigg( {n \choose 0} \cot^n x - {n \choose 2} \cot^{n - 2} x \pm \cdots \Bigg) + i\Bigg( {n \choose 1} \cot^{n-1} x - {n \choose 3} \cot^{n - 3} x \pm \cdots \Bigg).
-\end{align}
-
-Combining the two equations and equating imaginary parts gives the identity
-$$
-\frac{\sin (nx)}{\sin^n x} = \Bigg( {n \choose 1} \cot^{n - 1} x - {n \choose 3} \cot^{n - 3} x \pm \cdots \Bigg).
-$$
-
-We take this identity, fix a positive integer m, set $n = 2m + 1$, and consider $x_r = \frac{r\pi}{2m + 1}$ for $r = 1, 2, \ldots, m$. Then $nx_r$ is a multiple of $\pi$ and therefore $\sin(nx_r) = 0$. So,
-$$
-0 = {{2m + 1} \choose 1} \cot^{2m} x_r - {{2m + 1} \choose 3} \cot^{2m - 2} x_r \pm \cdots + (-1)^m{{2m + 1} \choose {2m + 1}}
-$$
-
-for every $r = 1, 2, \ldots, m$. The values $x_r = x_1, x_2, \ldots, x_m$ are distinct numbers in the interval $0 < x_r < \pi/2$. Since the function $\cot^2 x$ is one-to-one on this interval, the numbers $t_r = \cot^2 x_r$ are distinct for $r = 1, 2, \ldots, m$. By the above equation, these m numbers are the roots of the mth degree polynomial
-$$
-p(t) = {{2m + 1} \choose 1}t^m - {{2m + 1} \choose 3}t^{m - 1} \pm \cdots + (-1)^m{{2m+1} \choose {2m + 1}}.
-$$
-
-By Vieta's formulas we can calculate the sum of the roots directly by examining the first two coefficients of the polynomial, and this comparison shows that
-$$
-\cot ^2 x_1 + \cot ^2 x_2 + \cdots + \cot ^2 x_m = \frac{\binom{2m + 1}3} {\binom{2m + 1}1} = \frac{2m(2m - 1)}6.
-$$
-
-Substituting the identity $\csc^2 x = \cot^2 x + 1$, we have
-$$
-\csc ^2 x_1 + \csc ^2 x_2 + \cdots + \csc ^2 x_m = \frac{2m(2m - 1)}6 + m = \frac{2m(2m + 2)}6.
-$$
-
-Now consider the inequality $\cot^2 x < \frac{1}{x^2} < \csc^2 x$ (illustrated geometrically above). If we add up all these inequalities for each of the numbers $x_r = \frac{r\pi}{2m + 1}$, and if we use the two identities above, we get
-$$
-\frac{2m(2m - 1)}6 < \left(\frac{2m + 1}{\pi} \right)^2 + \left(\frac{2m + 1}{2\pi} \right)^2 + \cdots + \left(\frac{2m + 1}{m \pi} \right)^2 < \frac{2m(2m + 2)}6.
-$$
-
-Multiplying through by $\left(\frac{\pi}{2m + 1}\right)^2$, this becomes
-$$
-\frac{\pi ^2}{6}\left(\frac{2m}{2m + 1}\right)\left(\frac{2m - 1}{2m + 1}\right) < \frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{m^2} < \frac{\pi ^2}{6}\left(\frac{2m}{2m + 1}\right)\left(\frac{2m + 2}{2m + 1}\right).
-$$ - -As m approaches infinity, the left and right hand expressions each approach pi2/6, so by the squeeze theorem, - -\zeta(2) = \sum_{k=1}^\infty \frac{1}{k^2} = - -\lim_{m \to \infty}\left(\frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{m^2}\right) = \frac{\pi ^2}{6} - -and this completes the proof. - -See the special cases of the identities for the Riemann zeta function when $s = 2.$ Other notably special identities and representations of this constant appear in the sections below. - -The following are series representations of the constant: - -\begin{align} - -\zeta(2) &= 3 \sum_{k=1}^\infty \frac{1}{k^2 \binom{2k}{k}} \\ - -&= \sum_{i=1}^\infty \sum_{j=1}^\infty \frac{(i-1)! (j-1)!}{(i+j)!}. \\ - -\end{align} - -There are also BBP-type series expansions for ζ(2). -$$ -\frac{\zeta(2)}{2} = \cfrac{1}{v_1 - \cfrac{1^4}{v_2-\cfrac{2^4}{v_3-\cfrac{3^4}{v_4-\ddots}}}}, -$$ - -and -$$ -\frac{\zeta(2)}{5} = \cfrac{1}{\widetilde{v}_1 - \cfrac{1^4}{\widetilde{v}_2-\cfrac{2^4}{\widetilde{v}_3-\cfrac{3^4}{\widetilde{v}_4-\ddots}}}}, -$$ - -where $v_n = 2n-1 \mapsto \{1,3,5,7,9,\ldots\}$ and $\widetilde{v}_n = 11n^2-11n+3 \mapsto \{3,25,69,135,\ldots\}$. diff --git a/wiki/wikipedia/1.txt b/wiki/wikipedia/1.txt deleted file mode 100644 index 1bec2ff77f55b2425bcea5fc96c58a80c867ac2d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/1.txt +++ /dev/null @@ -1,37 +0,0 @@ -In real analysis and measure theory, the Vitali convergence theorem, named after the Italian mathematician Giuseppe Vitali, is a generalization of the better-known dominated convergence theorem of Henri Lebesgue. It is a characterization of the convergence in Lp in terms of convergence in measure and a condition related to uniform integrability. - -Let $(X,\mathcal{A},\mu)$ be a measure space, i.e. $\mu : \mathcal{A}\to [0,\infty]$ is a set function such that $\mu(\emptyset)=0$ and $\mu$ is countably-additive. All functions considered in the sequel will be functions $f:X\to \mathbb{K}$, where $\mathbb{K}=\R$ or $\mathbb{C}$. We adopt the following definitions according to Bogachev's terminology. - -* A set of functions $\mathcal{F} \subset L^1(X,\mathcal{A},\mu)$ is called uniformly integrable if $\lim_{M\to+\infty} \sup_{f\in\mathcal{F}} \int_{\{|f|>M\}} |f| d\mu = 0$, i.e \forall\ \varepsilon >0,\ \exists\ M_\varepsilon>0 - -\sup_{f\in\mathcal{F}} \int_{\{|f|\geq M_\varepsilon\}} |f| d\mu < \varepsilon. - -* A set of functions $\mathcal{F} \subset L^1(X,\mathcal{A},\mu)$ is said to have uniformly absolutely continuous integrals if $\lim_{\mu(A)\to 0}\sup_{f\in\mathcal{F}} \int_A |f| d\mu = 0$, i.e. \forall\ \varepsilon>0,\ \exists\ \delta_\varepsilon >0,\ \forall\ A\in\mathcal{A} : - -\mu(A)<\delta_\varepsilon \Rightarrow \sup_{f\in \mathcal{F}} \int_A |f| d\mu < \varepsilon. This definition is sometimes used as a definition of uniform integrability. However, it differs from the definition of uniform integrability given above. - -When $\mu(X)<\infty$, a set of functions $\mathcal{F} \subset L^1(X,\mathcal{A},\mu)$ is uniformly integrable if and only if it is bounded in $L^1(X,\mathcal{A},\mu)$ and has uniformly absolutely continuous integrals. If, in addition, $\mu$ is atomless, then the uniform integrability is equivalent to the uniform absolute continuity of integrals. - -Let $(X,\mathcal{A},\mu)$ be a measure space with $\mu(X)<\infty$. Let $(f_n)\subset L^p(X,\mathcal{A},\mu)$ and $f$ be an $\mathcal{A}$-measurable function. 
Then, the following are equivalent : - -# $f\in L^p(X,\mathcal{A},\mu)$ and $(f_n)$ converges to $f$ in $L^p(X,\mathcal{A},\mu)$ ; - -# The sequence of functions $(f_n)$ converges in $\mu$-measure to $f$ and $(|f_n|^p)_{n\geq 1}$ is uniformly integrable ; - -For a proof, see Bogachev's monograph "Measure Theory, Volume I". - -Let $(X,\mathcal{A},\mu)$ be a measure space and $1\leq p<\infty$. Let $(f_n)_{n\geq 1} \subseteq L^p(X,\mathcal{A},\mu)$ and $f\in L^p(X,\mathcal{A},\mu)$. Then, $(f_n)$ converges to $f$ in $L^p(X,\mathcal{A},\mu)$ if and only if the following holds : - -# The sequence of functions $(f_n)$ converges in $\mu$-measure to $f$ ; - -#$(f_n)$ has uniformly absolutely continuous integrals; - -# For every $\varepsilon>0$, there exists $X_\varepsilon\in \mathcal{A}$ such that $\mu(X_\varepsilon)<\infty$ and $\sup_{n\geq 1}\int_{X\setminus X_\varepsilon} |f_n|^p d\mu <\varepsilon.$ - -When $\mu(X)<\infty$, the third condition becomes superfluous (one can simply take $X_\varepsilon = X$) and the first two conditions give the usual form of Lebesgue-Vitali's convergence theorem originally stated for measure spaces with finite measure. In this case, one can show that conditions 1 and 2 imply that the sequence $(|f_n|^p)_{n\geq 1}$ is uniformly integrable. - -Let $(X,\mathcal{A},\mu)$ be measure space. Let $(f_n)_{n\geq 1} \subseteq L^1(X,\mathcal{A},\mu)$ and assume that $\lim_{n\to\infty}\int_A f_nd\mu$ exists for every $A\in\mathcal{A}$. Then, the sequence $(f_n)$ is bounded in $L^1(X,\mathcal{A},\mu)$ and has uniformly absolutely continuous integrals. In addition, there exists $f\in L^1(X,\mathcal{A},\mu)$ such that $\lim_{n\to\infty}\int_A f_nd\mu = \int_A f d\mu$ for every $A\in\mathcal{A}$. - -When $\mu(X)<\infty$, this implies that $(f_n)$ is uniformly integrable. - -For a proof, see Bogachev's monograph "Measure Theory, Volume I". diff --git a/wiki/wikipedia/10.txt b/wiki/wikipedia/10.txt deleted file mode 100644 index 9a3d736bcabeb65f734dac6ce01b48b117ee734f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/10.txt +++ /dev/null @@ -1,15 +0,0 @@ -Jean-Yves Girard (; born 1947) is a French logician working in proof theory. He is the research director (emeritus) at the mathematical institute of the University of Aix-Marseille, at Luminy. - -Jean-Yves Girard is an alumnus of the École normale supérieure de Saint-Cloud. - -He made a name for himself in the 1970s with his proof of strong normalization in a system of second-order logic called System F. This result gave a new proof of Takeuti's conjecture, which was proven a few years earlier by William W. Tait, Motō Takahashi and Dag Prawitz. For this purpose, he introduced the notion of "reducibility candidate" ("candidat de réducibilité"). He is also credited with the discovery of Girard's paradox, linear logic, the geometry of interaction, ludics, and (satirically) the mustard watch. - -He obtained the CNRS Silver medal in 1983 and is a member of the French Academy of Sciences. - -* - -* - -* - -* diff --git a/wiki/wikipedia/100.txt b/wiki/wikipedia/100.txt deleted file mode 100644 index 2f3b8023f9e3f44a93681ea48fb10a6d7deea6f2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/100.txt +++ /dev/null @@ -1,43 +0,0 @@ -In probability and statistics, an urn problem is an idealized mental exercise in which some objects of real interest (such as atoms, people, cars, etc.) are represented as colored balls in an urn or other container. 
One pretends to remove one or more balls from the urn; the goal is to determine the probability of drawing one color or another, or some other properties. A number of important variations are described below.
-
-An urn model is either a set of probabilities that describe events within an urn problem, or it is a probability distribution, or a family of such distributions, of random variables associated with urn problems.
-
-In Ars Conjectandi (1713), Jacob Bernoulli considered the problem of determining, given a number of pebbles drawn from an urn, the proportions of different colored pebbles within the urn. This problem was known as the inverse probability problem, and was a topic of research in the eighteenth century, attracting the attention of Abraham de Moivre and Thomas Bayes.
-
-Bernoulli used the Latin word urna, which primarily means a clay vessel, but is also the term used in ancient Rome for a vessel of any kind for collecting ballots or lots; the present-day Italian word for ballot box is still urna. Bernoulli's inspiration may have been lotteries, elections, or games of chance which involved drawing balls from a container, and it has been asserted that elections in medieval and renaissance Venice, including that of the doge, often included the choice of electors by lot, using balls of different colors drawn from an urn.
-
-In this basic urn model in probability theory, the urn contains x white and y black balls, well-mixed together. One ball is drawn randomly from the urn and its color observed; it is then placed back in the urn (or not), and the selection process is repeated.
-
-Possible questions that can be answered in this model are:
-
-* Can I infer the proportion of white and black balls from n observations? With what degree of confidence?
-
-* Knowing x and y, what is the probability of drawing a specific sequence (e.g. one white followed by one black)?
-
-* If I only observe n balls, how sure can I be that there are no black balls? (A variation on the first question)
-
-Common urn models and the distributions they give rise to include:
-
-* beta-binomial distribution: as above, except that every time a ball is observed, an additional ball of the same color is added to the urn. Hence, the number of total balls in the urn grows. See Pólya urn model.
-
-* binomial distribution: the distribution of the number of successful draws (trials), i.e. extraction of white balls, given n draws with replacement in an urn with black and white balls.
-
-* Hoppe urn: a Pólya urn with an additional ball called the mutator. When the mutator is drawn it is replaced along with an additional ball of an entirely new colour.
-
-* hypergeometric distribution: the balls are not returned to the urn once extracted. Hence, the number of total balls in the urn decreases. This is referred to as "drawing without replacement", as opposed to "drawing with replacement".
-
-* multivariate hypergeometric distribution: as above, but with balls of more than two colors.
-
-* geometric distribution: number of draws before the first successful (correctly colored) draw.
-
-* multinomial distribution: the urn contains balls in more than two colors.
-
-* negative binomial distribution: number of draws before a certain number of failures (incorrectly colored draws) occurs.
-
-* occupancy problem: the distribution of the number of occupied urns after the random assignment of k balls into n urns, related to the coupon collector's problem and birthday problem.
-
-* Pólya urn: each time a ball of a particular colour is drawn, it is replaced along with an additional ball of the same colour.
- -* Statistical physics: derivation of energy and velocity distributions. - -* The Ellsberg paradox. diff --git a/wiki/wikipedia/1000.txt b/wiki/wikipedia/1000.txt deleted file mode 100644 index f1fb52382e943504372f4b777963cd99831deafa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/1000.txt +++ /dev/null @@ -1,450 +0,0 @@ -In elementary algebra, completing the square is a technique for converting a quadratic polynomial of the form -$$ -ax^2 + bx + c -$$ - -to the form -$$ - a(x-h)^2 + k -$$ - -for some values of h and k. - -Completing the square is used in - -* solving quadratic equations, - -* deriving the quadratic formula, - -* graphing quadratic functions, - -* evaluating integrals in calculus, such as Gaussian integrals with a linear term in the exponent, - -* finding Laplace transforms. - -In mathematics, completing the square is often applied in any computation involving quadratic polynomials. - -The formula in elementary algebra for computing the square of a binomial is: -$$ -(x + p)^2 = x^2 + 2px + p^2. -$$ - -For example: - -\begin{alignat}{2} - -(x+3)^2 &= x^2 + 6x + 9 && (p=3)\\[3pt] - -(x-5)^2 &= x^2 - 10x + 25\qquad && (p=-5). - -\end{alignat} - - - -In any perfect square, the coefficient of x is twice the number p, and the constant term is equal to p2. - -Consider the following quadratic polynomial: -$$ -x^2 + 10x + 28. -$$ - -This quadratic is not a perfect square, since 28 is not the square of 5: -$$ -(x+5)^2 = x^2 + 10x + 25. -$$ - -However, it is possible to write the original quadratic as the sum of this square and a constant: -$$ -x^2 + 10x + 28 = (x+5)^2 + 3. -$$ - -This is called completing the square. - -Given any monic quadratic -$$ -x^2 + bx + c, -$$ - -it is possible to form a square that has the same first two terms: -$$ -\left(x+\tfrac{1}{2} b\right)^2 = x^2 + bx + \tfrac{1}{4}b^2. -$$ - -This square differs from the original quadratic only in the value of the constant - -term. Therefore, we can write -$$ -x^2 + bx + c = \left(x + \tfrac{1}{2}b\right)^2 + k, -$$ - -where $k = c - \frac{b^2}{4}$. This operation is known as completing the square. - -For example: - -\begin{alignat}{1} - -x^2 + 6x + 11 &= (x+3)^2 + 2 \\[3pt] - -x^2 + 14x + 30 &= (x+7)^2 - 19 \\[3pt] - -x^2 - 2x + 7 &= (x-1)^2 + 6. - -\end{alignat} - - - -Given a quadratic polynomial of the form -$$ -ax^2 + bx + c -$$ - -it is possible to factor out the coefficient a, and then complete the square for the resulting monic polynomial. - -Example: - - - -\begin{align} - -3x^2 + 12x + 27 &= 3[x^2+4x+9]\\ - -&{}= 3\left[(x+2)^2 + 5\right]\\ - -&{}= 3(x+2)^2 + 3(5)\\ - -&{}= 3(x+2)^2 + 15 - -\end{align} - -This process of factoring out the coefficient a can further be simplified by only factorising it out of the first 2 terms. The integer at the end of the polynomial does not have to be included. - -Example: - - - -\begin{align} - -3x^2 + 12x + 27 &= 3[x^2+4x] + 27\\ - -&{}= 3\left[(x+2)^2 -4\right] + 27\\ - -&{}= 3(x+2)^2 + 3(-4) + 27\\ - -&{}= 3(x+2)^2 - 12 + 27\\ - -&{}= 3(x+2)^2 + 15 - -\end{align} - -This allows the writing of any quadratic polynomial in the form -$$ -a(x-h)^2 + k. -$$ - -The result of completing the square may be written as a formula. In the general case, one has -$$ -ax^2 + bx + c = a(x-h)^2 + k, -$$ - -with -$$ -h = -\frac{b}{2a} \quad\text{and}\quad k = c - ah^2 = c - \frac{b^2}{4a}. -$$ - -In particular, when a = 1, one has -$$ -x^2 + bx + c = (x-h)^2 + k, -$$ - -with -$$ -h = -\frac{b}{2} \quad\text{and}\quad k = c - h^2 = c - \frac{b^2}{4a}. 
-$$
-
-By solving the equation $a(x-h)^2 + k=0$ in terms of $x-h,$ and reorganizing the resulting expression, one gets the quadratic formula for the roots of the quadratic equation:
-$$
-x=\frac{-b \pm \sqrt{b^2-4ac}}{2a}.
-$$
-
-The matrix case looks very similar:
-$$
-x^{\mathrm{T}}Ax + x^{\mathrm{T}}b + c = (x - h)^{\mathrm{T}}A(x - h) + k \quad\text{where}\quad h = -\frac{1}{2}A^{-1}b \quad\text{and}\quad k = c - \frac{1}{4}b^{\mathrm{T}}A^{-1}b
-$$
-
-where $A$ has to be symmetric.
-
-If $A$ is not symmetric the formulae for $h$ and $k$ have to be generalized to:
-$$
-h = -(A+A^{\mathrm{T}})^{-1}b \quad\text{and}\quad k = c - h^{\mathrm{T}}A h = c - b^{\mathrm{T}} (A+A^{\mathrm{T}})^{-1} A (A+A^{\mathrm{T}})^{-1}b
-$$.
-
-In analytic geometry, the graph of any quadratic function is a parabola in the xy-plane. Given a quadratic polynomial of the form
-$$
-a(x-h)^2 + k
-$$
-
-the numbers h and k may be interpreted as the Cartesian coordinates of the vertex (or stationary point) of the parabola. That is, h is the x-coordinate of the axis of symmetry (i.e. the axis of symmetry has equation x = h), and k is the minimum value (or maximum value, if a < 0) of the quadratic function.
-
-One way to see this is to note that the graph of the function $f(x) = x^2$ is a parabola whose vertex is at the origin (0, 0). Therefore, the graph of the function $f(x - h) = (x - h)^2$ is a parabola shifted to the right by h whose vertex is at (h, 0), as shown in the top figure. In contrast, the graph of the function $f(x) + k = x^2 + k$ is a parabola shifted upward by k whose vertex is at (0, k), as shown in the center figure. Combining both horizontal and vertical shifts, $f(x - h) + k = (x - h)^2 + k$ is a parabola shifted to the right by h and upward by k whose vertex is at (h, k), as shown in the bottom figure.
-
-Completing the square may be used to solve any quadratic equation. For example:
-$$
-x^2 + 6x + 5 = 0.
-$$
-
-The first step is to complete the square:
-$$
-(x+3)^2 - 4 = 0.
-$$
-
-Next we solve for the squared term:
-$$
-(x+3)^2 = 4.
-$$
-
-Then either
-$$
-x+3 = -2 \quad\text{or}\quad x+3 = 2,
-$$
-
-and therefore
-$$
-x = -5 \quad\text{or}\quad x = -1.
-$$
-
-This can be applied to any quadratic equation. When the $x^2$ term has a coefficient other than 1, the first step is to divide out the equation by this coefficient: for an example see the non-monic case below.
-
-Unlike methods involving factoring the equation, which is reliable only if the roots are rational, completing the square will find the roots of a quadratic equation even when those roots are irrational or complex. For example, consider the equation
-$$
-x^2 - 10x + 18 = 0.
-$$
-
-Completing the square gives
-$$
-(x-5)^2 - 7 = 0,
-$$
-
-so
-$$
-(x-5)^2 = 7.
-$$
-
-Then either
-$$
-x-5 = -\sqrt{7} \quad\text{or}\quad x-5 = \sqrt{7}.
-$$
-
-In terser language:
-$$
-x-5 = \pm \sqrt{7},
-$$
-
-so
-$$
-x = 5 \pm \sqrt{7}.
-$$
-
-Equations with complex roots can be handled in the same way. For example:
-
-\begin{array}{c}
-x^2 + 4x + 5 = 0 \\[6pt]
-(x+2)^2 + 1 = 0 \\[6pt]
-(x+2)^2 = -1 \\[6pt]
-x+2 = \pm i \\[6pt]
-x = -2 \pm i.
-\end{array}
-
-For an equation involving a non-monic quadratic, the first step is to divide through by the coefficient of $x^2$.
For example:
-
-\begin{array}{c}
-2x^2 + 7x + 6 = 0 \\[6pt]
-x^2 + \tfrac{7}{2}x + 3 = 0 \\[6pt]
-\left(x+\tfrac{7}{4}\right)^2 - \tfrac{1}{16} = 0 \\[6pt]
-\left(x+\tfrac{7}{4}\right)^2 = \tfrac{1}{16} \\[6pt]
-x+\tfrac{7}{4} = \tfrac{1}{4} \quad\text{or}\quad x+\tfrac{7}{4} = -\tfrac{1}{4} \\[6pt]
-x = -\tfrac{3}{2} \quad\text{or}\quad x = -2.
-\end{array}
-
-Applying this procedure to the general form of a quadratic equation leads to the quadratic formula.
-
-Completing the square may be used to evaluate any integral of the form
-$$
-\int\frac{dx}{ax^2+bx+c}
-$$
-
-using the basic integrals
-
-\int\frac{dx}{x^2 - a^2} = \frac{1}{2a}\ln\left|\frac{x-a}{x+a}\right| +C \quad\text{and}\quad
-\int\frac{dx}{x^2 + a^2} = \frac{1}{a}\arctan\left(\frac{x}{a}\right) +C.
-
-For example, consider the integral
-$$
-\int\frac{dx}{x^2 + 6x + 13}.
-$$
-
-Completing the square in the denominator gives:
-$$
-\int\frac{dx}{(x+3)^2 + 4} = \int\frac{dx}{(x+3)^2 + 2^2}.
-$$
-
-This can now be evaluated by using the substitution u = x + 3, which yields
-$$
-\int\frac{dx}{(x+3)^2 + 4} = \frac{1}{2}\arctan\left(\frac{x+3}{2}\right)+C.
-$$
-
-Consider the expression
-$$
- |z|^2 - b^*z - bz^* + c,
-$$
-
-where z and b are complex numbers, z* and b* are the complex conjugates of z and b, respectively, and c is a real number. Using the identity $|u|^2 = uu^*$ we can rewrite this as
-$$
- |z-b|^2 - |b|^2 + c ,
-$$
-
-which is clearly a real quantity. This is because
-
-\begin{align}
-|z-b|^2 &{}= (z-b)(z-b)^*\\
-&{}= (z-b)(z^*-b^*)\\
-&{}= zz^* - zb^* - bz^* + bb^*\\
-&{}= |z|^2 - zb^* - bz^* + |b|^2 .
-\end{align}
-
-As another example, the expression
-$$
-ax^2 + by^2 + c ,
-$$
-
-where a, b, c, x, and y are real numbers, with a > 0 and b > 0, may be expressed in terms of the square of the absolute value of a complex number. Define
-$$
-z = \sqrt{a}x + i \sqrt{b} y .
-$$
-
-Then
-
-\begin{align}
-|z|^2 &{}= z z^*\\
-&{}= (\sqrt{a}x + i \sqrt{b}y)(\sqrt{a}x - i \sqrt{b}y) \\
-&{}= ax^2 - i\sqrt{ab}xy + i\sqrt{ba}yx - i^2by^2 \\
-&{}= ax^2 + by^2 ,
-\end{align}
-
-so
-$$
- ax^2 + by^2 + c = |z|^2 + c .
-$$
-
-A matrix M is idempotent when $M^2 = M$. Idempotent matrices generalize the idempotent properties of 0 and 1. The completion of the square method of addressing the equation
-$$
-a^2 + b^2 = a ,
-$$
-
-shows that some idempotent 2×2 matrices are parametrized by a circle in the (a,b)-plane:
-
-The matrix $\begin{pmatrix}a & b \\ b & 1-a \end{pmatrix}$ will be idempotent provided $a^2 + b^2 = a ,$ which, upon completing the square, becomes
-$$
-(a - \tfrac{1}{2})^2 + b^2 = \tfrac{1}{4} .
-$$
-
-In the (a,b)-plane, this is the equation of a circle with center (1/2, 0) and radius 1/2.
-
-Consider completing the square for the equation
-$$
-x^2 + bx = a.
-$$
-
-Since $x^2$ represents the area of a square with side of length x, and bx represents the area of a rectangle with sides b and x, the process of completing the square can be viewed as visual manipulation of rectangles.
-
-Simple attempts to combine the $x^2$ and the bx rectangles into a larger square result in a missing corner. The term $(b/2)^2$ added to each side of the above equation is precisely the area of the missing corner, whence derives the terminology "completing the square".
-
-As conventionally taught, completing the square consists of adding the third term, $v^2$, to
-$$
-u^2 + 2uv
-$$
-
-to get a square.
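Because the procedure is entirely mechanical, it is easy to express in code. The following is a minimal sketch (the helper names are hypothetical, not from the article): it computes $h$ and $k$ from the formulas $h = -b/2a$ and $k = c - b^2/4a$ quoted earlier, then solves quadratic equations from the completed-square form, reproducing the worked examples above.

```python
# Completing the square: rewrite a*x^2 + b*x + c as a*(x - h)^2 + k,
# then solve a*x^2 + b*x + c = 0 from the completed-square form.
import cmath

def complete_square(a, b, c):
    """Return (h, k) such that a*x**2 + b*x + c == a*(x - h)**2 + k."""
    h = -b / (2 * a)          # x-coordinate of the vertex
    k = c - b**2 / (4 * a)    # value at the vertex
    return h, k

def solve_quadratic(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 via (x - h)^2 = -k/a."""
    h, k = complete_square(a, b, c)
    r = cmath.sqrt(-k / a)
    return h + r, h - r

print(complete_square(3, 12, 27))   # (-2.0, 15.0), i.e. 3(x + 2)^2 + 15
print(solve_quadratic(1, 6, 5))     # roots -1 and -5
print(solve_quadratic(1, 4, 5))     # complex roots -2 + i and -2 - i
```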
There are also cases in which one can add the middle term, either 2uv or -2uv, to -$$ -u^2 + v^2 -$$ - -to get a square. - -By writing - - - -\begin{align} - -x + {1 \over x} &{} = \left(x - 2 + {1 \over x}\right) + 2\\ - -&{}= \left(\sqrt{x} - {1 \over \sqrt{x}}\right)^2 + 2 - -\end{align} - -we show that the sum of a positive number x and its reciprocal is always greater than or equal to 2. The square of a real expression is always greater than or equal to zero, which gives the stated bound; and here we achieve 2 just when x is 1, causing the square to vanish. - -Consider the problem of factoring the polynomial -$$ -x^4 + 324 . -$$ - -This is -$$ -(x^2)^2 + (18)^2, -$$ - -so the middle term is 2(x2)(18) = 36x2. Thus we get - -\begin{align} x^4 + 324 &{}= (x^4 + 36x^2 + 324 ) - 36x^2 \\ - -&{}= (x^2 + 18)^2 - (6x)^2 =\text{a difference of two squares} \\ - -&{}= (x^2 + 18 + 6x)(x^2 + 18 - 6x) \\ - -&{}= (x^2 + 6x + 18)(x^2 - 6x + 18) - -\end{align} - -(the last line being added merely to follow the convention of decreasing degrees of terms). - -The same argument shows that $x^4 + 4a^4 $ is always factorizable as -$$ -x^4 + 4a^4 =(x^2+2a x + 2a^2)(x^2-2 ax + 2a^2) -$$ - -(Also known as Sophie Germain's identity). diff --git a/wiki/wikipedia/1001.txt b/wiki/wikipedia/1001.txt deleted file mode 100644 index b67b5fab1460cf52c0cfed14f1670a26973163d5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/1001.txt +++ /dev/null @@ -1,15 +0,0 @@ -:A Shadowing lemma is also a fictional creature in the Discworld. - -In the theory of dynamical systems, the shadowing lemma is a lemma describing the behaviour of pseudo-orbits near a hyperbolic invariant set. Informally, the theory states that every pseudo-orbit (which one can think of as a numerically computed trajectory with rounding errors on every step) stays uniformly close to some true trajectory (with slightly altered initial position)—in other words, a pseudo-trajectory is "shadowed" by a true one. - -Given a map f : X → X of a metric space (X, d) to itself, define a ε-pseudo-orbit (or ε-orbit) as a sequence $(x_n)$ of points such that $x_{n+1}$ belongs to a ε-neighborhood of $f(x_n)$. - -Then, near a hyperbolic invariant set, the following statement holds: - -Let Λ be a hyperbolic invariant set of a diffeomorphism f. There exists a neighborhood U of Λ with the following property: for any δ > 0 there exists ε > 0, such that any (finite or infinite) ε-pseudo-orbit that stays in U also stays in a δ-neighborhood of some true orbit. - - - -\forall (x_n), x_n\in U, d(x_{n+1},f(x_n))<\varepsilon \quad \exists (y_n), y_{n+1}=f(y_n),\quad \text{such that} \forall n x_n\in U_{\delta}(y_n). - - diff --git a/wiki/wikipedia/1002.txt b/wiki/wikipedia/1002.txt deleted file mode 100644 index a6796c13fbc2e7c0ef1f6d489ef55d6774ee0d4e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/1002.txt +++ /dev/null @@ -1,22 +0,0 @@ -In mathematics, the Lebedev–Milin inequality is any of several inequalities for the coefficients of the exponential of a power series, found by and . It was used in the proof of the Bieberbach conjecture, as it shows that the Milin conjecture implies the Robertson conjecture. 
- -They state that if -$$ -\sum_{k\ge 0} \beta_kz^k = \exp\left(\sum_{k\ge 1} \alpha_kz^k\right) -$$ - -for complex numbers βk and αk, and n is a positive integer, then - -\sum_{k=0}^{\infty}|\beta_k|^2 \le - -\exp\left(\sum_{k=1}^\infty k|\alpha_k|^2\right), - -\sum_{k=0}^{n}|\beta_k|^2 \le - -(n+1)\exp\left(\frac{1}{n+1}\sum_{m=1}^{n}\sum_{k=1}^m(k|\alpha_k|^2 -1/k)\right), - -|\beta_n|^2 \le - -\exp\left(\sum_{k=1}^n(k|\alpha_k|^2 -1/k)\right). - -See also exponential formula (on exponentiation of power series). diff --git a/wiki/wikipedia/1003.txt b/wiki/wikipedia/1003.txt deleted file mode 100644 index 9ee7391d4083a4ee84bb3718f36100532765fe0c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/1003.txt +++ /dev/null @@ -1,5 +0,0 @@ -Advanced Synchronization Facility (ASF) is a proposed extension to the x86-64 instruction set architecture that adds hardware transactional memory support. It was introduced by AMD; the latest specification was dated March 2009. , it was still in the proposal stage. No released microprocessors implement the extension. - -ASF provides the capability to start, end and abort transactional execution and to mark CPU cache lines for protected memory access in transactional code regions. It contains four new instructions—SPECULATE, COMMIT, ABORT and RELEASE—and turns the otherwise invalid LOCK-prefixed MOVx, PREFETCH and PREFETCHW instructions into valid ones inside transactional code regions. Up to 256 levels of nested transactional code regions is supported. - -The SPECULATE and COMMIT instructions mark the start and end of a transactional code region. Inside transactional code regions, the LOCK-prefixed MOVx reg/xmm, mem, PREFETCH and PREFETCHW instructions can mark up to four cache lines for protected memory access. Accesses from other processor cores to the protected cache lines result in exceptions, which in turn cause transaction aborts. Stores to protected cache lines must be performed using the LOCK MOVx mem, reg/imm/xmm instructions. Marked cache lines can be released from protection with the RELEASE instruction. Transaction aborts generated by hardware or explicitly requested through the ABORT instruction rolls back modifications to the protected cache lines and restarts execution from the instruction following the top-level SPECULATE instruction. diff --git a/wiki/wikipedia/1004.txt b/wiki/wikipedia/1004.txt deleted file mode 100644 index e773bb4c394d8ac61cea3fe7450b44f2d9efe720..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/1004.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematics, the Ahlfors conjecture, now a theorem, states that the limit set of a finitely-generated Kleinian group is either the whole Riemann sphere, or has measure 0. - -The conjecture was introduced by , who proved it in the case that the Kleinian group has a fundamental domain with a finite number of sides. Canary proved the Ahlfors conjecture for topologically tame groups, by showing that a topologically tame Kleinian group is geometrically tame, so the Ahlfors conjecture follows from Marden's tameness conjecture that hyperbolic 3-manifolds with finitely generated fundamental groups are topologically tame (homeomorphic to the interior of compact 3-manifolds). This latter conjecture was proved, independently, by Agol and by Calegari. - -Canary also showed that in the case when the limit set is the whole sphere, the action of the Kleinian group on the limit set is ergodic. 
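Returning to the Lebedev–Milin inequalities quoted above, the first of them is easy to probe numerically. The sketch below is only a sanity check under explicit assumptions: the test coefficients $\alpha_k$ are arbitrary, the truncation order is assumed deep enough that the neglected tail is negligible, and the $\beta_k$ are generated by the standard recurrence obtained from $B'(z) = A'(z)B(z)$.

```python
# Numerical sanity check of the first Lebedev-Milin inequality:
#   sum_{k>=0} |beta_k|^2 <= exp(sum_{k>=1} k*|alpha_k|^2),
# where sum_k beta_k z^k = exp(sum_k alpha_k z^k).
import math

def exp_series(alpha, n_terms):
    """beta_0..beta_{n_terms-1} of exp(sum_{k>=1} alpha[k-1]*z^k).

    Uses the recurrence n*beta_n = sum_{k=1}^{n} k*alpha_k*beta_{n-k},
    obtained by differentiating B(z) = exp(A(z)).
    """
    beta = [1.0 + 0j] + [0j] * (n_terms - 1)
    for n in range(1, n_terms):
        s = sum(k * alpha[k - 1] * beta[n - k]
                for k in range(1, min(n, len(alpha)) + 1))
        beta[n] = s / n
    return beta

alpha = [0.3 + 0.1j, -0.2j, 0.05 + 0j]   # arbitrary test coefficients
beta = exp_series(alpha, 200)            # truncation order: an assumption
lhs = sum(abs(b) ** 2 for b in beta)
rhs = math.exp(sum(k * abs(a) ** 2 for k, a in enumerate(alpha, 1)))
print(f"{lhs:.6f} <= {rhs:.6f}: {lhs <= rhs}")   # expected: True
```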
diff --git a/wiki/wikipedia/1005.txt b/wiki/wikipedia/1005.txt deleted file mode 100644 index c852e9902b752b2515704a8b8a723da053319788..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/1005.txt +++ /dev/null @@ -1,11 +0,0 @@ -In mathematics, in the field of algebraic number theory, a Bauerian extension is a field extension of an algebraic number field which is characterized by the prime ideals with inertial degree one in the extension. - -For a finite degree extension L/K of an algebraic number field K we define P(L/K) to be the set of primes p of K which have a factor P with inertial degree one (that is, the residue field of P has the same order as the residue field of p). - -Bauer's theorem states that if M/K is a finite degree Galois extension, then P(M/K) ⊇ P(L/K) if and only if M ⊆ L. In particular, finite degree Galois extensions N of K are characterised by set of prime ideals which split completely in N. - -An extension F/K is Bauerian if it obeys Bauer's theorem: that is, for every finite extension L of K, we have P(F/K) ⊇ P(L/K) if and only if L contains a subfield K-isomorphic to F. - -All field extensions of degree at most 4 over Q are Bauerian. - -An example of a non-Bauerian extension is the Galois extension of Q by the roots of 2x5 − 32x + 1, which has Galois group S5. diff --git a/wiki/wikipedia/1006.txt b/wiki/wikipedia/1006.txt deleted file mode 100644 index 4be32f59401395d027cf25e3dce298d1ad91462c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/1006.txt +++ /dev/null @@ -1,94 +0,0 @@ -In mathematics, Stone's theorem on one-parameter unitary groups is a basic theorem of functional analysis that establishes a one-to-one correspondence between self-adjoint operators on a Hilbert space $\mathcal{H}$ and one-parameter families -$$ -(U_{t})_{t \in \R} -$$ - -of unitary operators that are strongly continuous, i.e., -$$ -\forall t_0 \in \R, \psi \in \mathcal{H}: \qquad \lim_{t \to t_0} U_t(\psi) = U_{t_0}(\psi), -$$ - -and are homomorphisms, i.e., -$$ -\forall s,t \in \R : \qquad U_{t + s} = U_t U_s. -$$ - -Such one-parameter families are ordinarily referred to as strongly continuous one-parameter unitary groups. - -The theorem was proved by , and Neumann showed that the requirement that $(U_t)_{t \in \R}$ be strongly continuous can be relaxed to say that it is merely weakly measurable, at least when the Hilbert space is separable. - -This is an impressive result, as it allows to define the derivative of the mapping $t \mapsto U_t,$ which is only supposed to be continuous. It is also related to the theory of Lie groups and Lie algebras. - -The statement of the theorem is as follows. - -Theorem. Let $(U_t)_{t \in \R}$ be a strongly continuous one-parameter unitary group. Then there exists a unique (possibly unbounded) operator $A: \mathcal{D}_A \to \mathcal{H}$, that is self-adjoint on $\mathcal{D}_A$ and such that -$$ -\forall t \in \R : \qquad U_t = e^{itA}. -$$ - -The domain of $A$ is defined by -$$ -\mathcal{D}_A = \left \{ \psi \in \mathcal{H} \left | \lim_{\varepsilon \to 0} \frac{-i}{\varepsilon} \left(U_{\varepsilon} (\psi) - \psi \right) \text{ exists} \right. \right \}. -$$ - -Conversely, let $A: \mathcal{D}_A \to \mathcal{H}$ be a (possibly unbounded) self-adjoint operator on $\mathcal{D}_A \subseteq \mathcal{H}.$ Then the one-parameter family $(U_{t})_{t \in \R}$ of unitary operators defined by -$$ -\forall t \in \R : \qquad U_{t} := e^{itA} -$$ - -is a strongly continuous one-parameter group. 
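In the bounded, finite-dimensional case, where a self-adjoint operator is just a Hermitian matrix and the domain subtleties disappear, the correspondence in the theorem can be checked numerically. This is a minimal sketch, assuming numpy and scipy are available; the matrix, seed, and tolerances are arbitrary choices.

```python
# Finite-dimensional illustration of Stone's theorem: for Hermitian A,
# U_t = exp(itA) is a one-parameter unitary group, and A is recovered
# as -i times the derivative of U_t at t = 0.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (X + X.conj().T) / 2                 # Hermitian, i.e. self-adjoint

def U(t):
    return expm(1j * t * A)

s, t = 0.7, -1.3
I = np.eye(4)
print(np.allclose(U(t) @ U(t).conj().T, I))   # unitarity
print(np.allclose(U(s + t), U(s) @ U(t)))     # group law U_{s+t} = U_s U_t
eps = 1e-6
A_recovered = (U(eps) - I) / (1j * eps)       # difference quotient at 0
print(np.allclose(A_recovered, A, atol=1e-4)) # infinitesimal generator
```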
- -In both parts of the theorem, the expression $e^{itA}$ is defined by means of the spectral theorem for unbounded self-adjoint operators. - -The operator $A$ is called the infinitesimal generator of $(U_{t})_{t \in \R}.$ Furthermore, $A$ will be a bounded operator if and only if the operator-valued mapping $t \mapsto U_{t}$ is norm-continuous. - -The infinitesimal generator $A$ of a strongly continuous unitary group $(U_{t})_{t \in \R}$ may be computed as -$$ -A\psi = -i\lim_{\varepsilon\to 0}\frac{U_\varepsilon\psi-\psi}{\varepsilon}, -$$ - -with the domain of $A$ consisting of those vectors $\psi$ for which the limit exists in the norm topology. That is to say, $A$ is equal to $-i$ times the derivative of $U_t$ with respect to $t$ at $t=0$. Part of the statement of the theorem is that this derivative exists—i.e., that $A$ is a densely defined self-adjoint operator. The result is not obvious even in the finite-dimensional case, since $U_t$ is only assumed (ahead of time) to be continuous, and not differentiable. - -The family of translation operators -$$ -\left[ T_t(\psi) \right](x) = \psi(x + t) -$$ - -is a one-parameter unitary group of unitary operators; the infinitesimal generator of this family is an extension of the differential operator -$$ --i \frac{d}{dx} -$$ - -defined on the space of continuously differentiable complex-valued functions with compact support on $\R.$ Thus -$$ -T_{t} = e^{t \frac{d}{dx}}. -$$ - -In other words, motion on the line is generated by the momentum operator. - -Stone's theorem has numerous applications in quantum mechanics. For instance, given an isolated quantum mechanical system, with Hilbert space of states H, time evolution is a strongly continuous one-parameter unitary group on $\mathcal{H}$. The infinitesimal generator of this group is the system Hamiltonian. - -Stone's Theorem can be recast using the language of the Fourier transform. The real line $\R$ is a locally compact abelian group. Non-degenerate *-representations of the group C*-algebra $C^*(\R)$ are in one-to-one correspondence with strongly continuous unitary representations of $\R,$ i.e., strongly continuous one-parameter unitary groups. On the other hand, the Fourier transform is a *-isomorphism from $C^*(\R)$ to $C_0(\R),$ the $C^*$-algebra of continuous complex-valued functions on the real line that vanish at infinity. Hence, there is a one-to-one correspondence between strongly continuous one-parameter unitary groups and *-representations of $C_0(\R).$ As every *-representation of $C_0(\R)$ corresponds uniquely to a self-adjoint operator, Stone's Theorem holds. - -Therefore, the procedure for obtaining the infinitesimal generator of a strongly continuous one-parameter unitary group is as follows: - -* Let $(U_{t})_{t \in \R}$ be a strongly continuous unitary representation of $\R$ on a Hilbert space $\mathcal{H}$. - -* Integrate this unitary representation to yield a non-degenerate *-representation $\rho$ of $C^*(\R)$ on $\mathcal{H}$ by first defining -$$ -\forall f \in C_c(\R): \qquad \rho(f) := \int_{\R} f(t) ~ U_{t} dt, -$$ - -and then extending $\rho$ to all of $C^*(\R)$ by continuity. - -* Use the Fourier transform to obtain a non-degenerate *-representation $\tau$ of $C_0(\R )$ on $\mathcal{H}$. - -* By the Riesz-Markov Theorem, $\tau$ gives rise to a projection-valued measure on $\R$ that is the resolution of the identity of a unique self-adjoint operator $A$, which may be unbounded. 
- -* Then $A$ is the infinitesimal generator of $(U_{t})_{t \in \R }.$ - -The precise definition of $C^*(\R)$ is as follows. Consider the *-algebra $C_c(\R),$ the continuous complex-valued functions on $\R$ with compact support, where the multiplication is given by convolution. The completion of this *-algebra with respect to the $L^1$-norm is a Banach *-algebra, denoted by $(L^1(\R),\star).$ Then $C^*(\R)$ is defined to be the enveloping $C^*$-algebra of $(L^1(\R),\star)$, i.e., its completion with respect to the largest possible $C^*$-norm. It is a non-trivial fact that, via the Fourier transform, $C^*(\R)$ is isomorphic to $C_0(\R).$ A result in this direction is the Riemann-Lebesgue Lemma, which says that the Fourier transform maps $L^1(\R)$ to $C_0(\R).$ - -The Stone–von Neumann theorem generalizes Stone's theorem to a pair of self-adjoint operators, $(P,Q)$, satisfying the canonical commutation relation, and shows that these are all unitarily equivalent to the position operator and momentum operator on $L^2(\R).$ - -The Hille–Yosida theorem generalizes Stone's theorem to strongly continuous one-parameter semigroups of contractions on Banach spaces. diff --git a/wiki/wikipedia/1007.txt b/wiki/wikipedia/1007.txt deleted file mode 100644 index 900b19f1e98d759f22601c8617fed0fd5853a9b8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/1007.txt +++ /dev/null @@ -1,47 +0,0 @@ -In mathematics, the Schwartz kernel theorem is a foundational result in the theory of generalized functions, published by Laurent Schwartz in 1952. It states, in broad terms, that the generalized functions introduced by Schwartz (Schwartz distributions) have a two-variable theory that includes all reasonable bilinear forms on the space $\mathcal{D}$ of test functions. The space $\mathcal{D}$ itself consists of smooth functions of compact support. - -Let $X$ and $Y$ be open sets in $\mathbb{R}^n$. - -Every distribution $k \in \mathcal{D}'(X \times Y)$ defines a - -continuous linear map $K \colon \mathcal{D}(Y) \to \mathcal{D}'(X)$ such that - -for every $u \in \mathcal{D}(X), v \in \mathcal{D}(Y)$. - -Conversely, for every such continuous linear map $K$ - -there exists one and only one distribution $k \in \mathcal{D}'(X \times Y)$ such that () holds. - -The distribution $k$ is the kernel of the map $K$. - -Given a distribution $k \in \mathcal{D}'(X \times Y)$ one can always write the linear map K informally as -$$ -Kv = \int_{Y} k(\cdot,y) v(y) d y -$$ - -so that -$$ -\langle Kv,u \rangle = \int_{X} \int_{Y} k(x,y) v(y) u(x) d y d x -$$. - -The traditional kernel functions $K(x,y)$ of two variables of the theory of integral operators having been expanded in scope to include their generalized function analogues, which are allowed to be more singular in a serious way, a large class of operators from $\mathcal{D}$ to its dual space $\mathcal{D}'$ of distributions can be constructed. The point of the theorem is to assert that the extended class of operators can be characterised abstractly, as containing all operators subject to a minimum continuity condition. A bilinear form on $\mathcal{D}$ arises by pairing the image distribution with a test function. - -A simple example is that the natural embedding of the test function space $\mathcal{D}$ into $\mathcal{D}'$ - sending every test function $f$ into the corresponding distribution $[f]$ - corresponds to the delta distribution -$$ -\delta(x-y) -$$ - -concentrated at the diagonal of the underlined Euclidean space, in terms of the Dirac delta function $\delta$. 
While this is at most an observation, it shows how the distribution theory adds to the scope. Integral operators are not so 'singular'; another way to put it is that for $K$ a continuous kernel, only compact operators are created on a space such as the continuous functions on $[0,1]$. The operator $I$ is far from compact, and its kernel is intuitively speaking approximated by functions on $[0,1]\times[0,1]$ with a spike along the diagonal $x=y$ and vanishing elsewhere. - -This result implies that the formation of distributions has a major property of 'closure' within the traditional domain of functional analysis. It was interpreted (comment of Jean Dieudonné) as a strong verification of the suitability of the Schwartz theory of distributions to mathematical analysis more widely seen. In his Éléments d'analyse volume 7, p. 3 he notes that the theorem includes differential operators on the same footing as integral operators, and concludes that it is perhaps the most important modern result of functional analysis. He goes on immediately to qualify that statement, saying that the setting is too 'vast' for differential operators, because of the property of monotonicity with respect to the support of a function, which is evident for differentiation. Even monotonicity with respect to singular support is not characteristic of the general case; its consideration leads in the direction of the contemporary theory of pseudo-differential operators. - -Dieudonné proves a version of the Schwartz result valid for smooth manifolds, and additional supporting results, in sections 23.9 to 23.12 of that book. - -Much of the theory of nuclear spaces was developed by Alexander Grothendieck while investigating the Schwartz kernel theorem and published in . We have the following generalization of the theorem. - -Schwartz kernel theorem: Suppose that X is nuclear, Y is locally convex, and v is a continuous bilinear form on $X \times Y$. Then v originates from a space of the form $X^{\prime}_{A^{\prime}} \widehat{\otimes}_{\epsilon} Y^{\prime}_{B^{\prime}}$ where $A^{\prime}$ and $B^{\prime}$ are suitable equicontinuous subsets of $X^{\prime}$ and $Y^{\prime}$. Equivalently, v is of the form, -$$ -v(x, y) = \sum_{i=1}^{\infty} \lambda_i \left\langle x, x_i^{\prime} \right\rangle \left\langle y, y_i^{\prime} \right\rangle -$$ for all $(x, y) \in X \times Y$ - -where $\left( \lambda_i \right) \in l^1$ and each of $\{ x^{\prime}_1, x^{\prime}_2, \ldots \}$ and $\{ y^{\prime}_1, y^{\prime}_2, \ldots \}$ are equicontinuous. Furthermore, these sequences can be taken to be null sequences (i.e. converging to 0) in $X^{\prime}_{A^{\prime}}$ and $Y^{\prime}_{B^{\prime}}$, respectively. diff --git a/wiki/wikipedia/1008.txt b/wiki/wikipedia/1008.txt deleted file mode 100644 index 6a42f04eb969cf54c9be1c9605dcb0716de644d2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/1008.txt +++ /dev/null @@ -1,5 +0,0 @@ -Hierarchical clustering is one method for finding community structures in a network. The technique arranges the network into a hierarchy of groups according to a specified weight function. The data can then be represented in a tree structure known as a dendrogram. Hierarchical clustering can either be agglomerative or divisive depending on whether one proceeds through the algorithm by adding links to or removing links from the network, respectively. One divisive technique is the Girvan–Newman algorithm. - - child node in the tree. - -
-
-         j1
-         (2)
-        /  \
-      j2    j3
-      (5)   (4)
-     /  \    |
-   j4    j5  j6
-   (3)   (5) (6)
-
-The due dates of the jobs are shown underneath each node of the tree in parentheses:
-
-* j1: 2
-
-* j2: 5
-
-* j3: 4
-
-* j4: 3
-
-* j5: 5
-
-* j6: 6
-
-Now look at the set of jobs without any successors, find the one with the latest due date, and put it at the front of S:
-
-* The S set would be { j1, j2, j4, j3, j5, j6}
diff --git a/wiki/wikipedia/2249.txt b/wiki/wikipedia/2249.txt
deleted file mode 100644
index 8717347253a9aeab5b3661066be31327db3d6ef6..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2249.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-In mathematics, Busemann's theorem is a theorem in Euclidean geometry and geometric tomography. It was first proved by Herbert Busemann in 1949 and was motivated by his theory of area in Finsler spaces.
-
-Let K be a convex body in n-dimensional Euclidean space $\mathbb{R}^n$ containing the origin in its interior. Let S be an (n - 2)-dimensional linear subspace of $\mathbb{R}^n$. For each unit vector θ in $S^{\perp}$, the orthogonal complement of S, let $S_{\theta}$ denote the (n - 1)-dimensional hyperplane containing θ and S. Define r(θ) to be the (n - 1)-dimensional volume of K ∩ $S_{\theta}$. Let C be the curve $\{\theta r(\theta)\}$ in $S^{\perp}$. Then C forms the boundary of a convex body in $S^{\perp}$.
diff --git a/wiki/wikipedia/225.txt b/wiki/wikipedia/225.txt
deleted file mode 100644
index ebc1d05298bbf8bcff6f38d3489d8aeefc53f03f..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/225.txt
+++ /dev/null
@@ -1,13 +0,0 @@
-A distributed transaction is a database transaction in which two or more network hosts are involved. Usually, hosts provide transactional resources, while the transaction manager is responsible for creating and managing a global transaction that encompasses all operations against such resources. Distributed transactions, like any other transactions, must have all four ACID (atomicity, consistency, isolation, durability) properties, where atomicity guarantees all-or-nothing outcomes for the unit of work (operations bundle).
-
-Open Group, a vendor consortium, proposed the X/Open Distributed Transaction Processing (DTP) Model (X/Open XA), which became a de facto standard for the behavior of transaction model components.
-
-Databases are common transactional resources and, often, transactions span a couple of such databases. In this case, a distributed transaction can be seen as a database transaction that must be synchronized (or provide ACID properties) among multiple participating databases which are distributed among different physical locations. The isolation property (the I of ACID) poses a special challenge for multi-database transactions, since the (global) serializability property could be violated, even if each database provides it (see also global serializability). In practice, most commercial database systems use strong strict two-phase locking (SS2PL) for concurrency control, which ensures global serializability if all the participating databases employ it (see also commitment ordering for multidatabases).
-
-A common algorithm for ensuring correct completion of a distributed transaction is the two-phase commit (2PC). This algorithm is usually applied for updates able to commit in a short period of time, ranging from a couple of milliseconds to a couple of minutes.
-
-There are also long-lived distributed transactions, for example a transaction to book a trip, which consists of booking a flight, a rental car and a hotel. Since booking the flight might take up to a day to get a confirmation, two-phase commit is not applicable here, as it would lock the resources for this long.
In this case, more sophisticated techniques that involve multiple undo levels are used. Just as you can undo the hotel booking by calling the desk and cancelling the reservation, a system can be designed to undo certain operations (unless they are irreversibly finished).
-
-In practice, long-lived distributed transactions are implemented in systems based on Web Services. Usually these transactions utilize the principles of compensating transactions, Optimism and Isolation Without Locking. The X/Open standard does not cover long-lived DTP.
-
-Several modern technologies, including Enterprise Java Beans (EJBs) and Microsoft Transaction Server (MTS), fully support distributed transaction standards.
diff --git a/wiki/wikipedia/2250.txt b/wiki/wikipedia/2250.txt
deleted file mode 100644
index 6baa4ba028b68b94600dff262259172522ce64ef..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2250.txt
+++ /dev/null
@@ -1,15 +0,0 @@
-In mathematics, Belyi's theorem on algebraic curves states that any non-singular algebraic curve C, defined by algebraic number coefficients, represents a compact Riemann surface which is a ramified covering of the Riemann sphere, ramified at three points only.
-
-This is a result of G. V. Belyi from 1979. At the time it was considered surprising, and it spurred Grothendieck to develop his theory of dessins d'enfant, which describes non-singular algebraic curves over the algebraic numbers using combinatorial data.
-
-It follows that the Riemann surface in question can be taken to be the quotient
-
-H/Γ
-
-(where H is the upper half-plane and Γ is a subgroup of finite index in the modular group) compactified by cusps. Since the modular group has non-congruence subgroups, it is not the conclusion that any such curve is a modular curve.
-
-A Belyi function is a holomorphic map from a compact Riemann surface S to the complex projective line $P^1(\mathbb{C})$ ramified only over three points, which after a Möbius transformation may be taken to be $ \{0, 1, \infty\} $. Belyi functions may be described combinatorially by dessins d'enfants.
-
-Belyi functions and dessins d'enfants – but not Belyi's theorem – date at least to the work of Felix Klein; he used them in his article to study an 11-fold cover of the complex projective line with monodromy group PSL(2,11).
-
-Belyi's theorem is an existence theorem for Belyi functions, and has subsequently been much used in the inverse Galois problem.
diff --git a/wiki/wikipedia/2251.txt b/wiki/wikipedia/2251.txt
deleted file mode 100644
index b9302f8a0280aa044aed55c5a8bb9538bf0c8bc2..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2251.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-Thread Level Speculation (TLS) is a technique to speculatively execute a section of computer code that is anticipated to be executed later, in parallel with the normal execution, on a separate independent thread. Such a speculative thread may need to make assumptions about the values of input variables. If these prove to be invalid, then the portions of the speculative thread that rely on these input variables will need to be discarded and squashed. If the assumptions are correct, the program can complete in a shorter time, provided the thread was able to be scheduled efficiently.
-
-It is also known as Speculative Multithreading (SpMT).
-
-TLS extracts threads from serial code and executes them speculatively in parallel with a safe thread. The speculative thread will need to be discarded or re-run if its presumptions on the input state prove to be invalid.
It is a dynamic (runtime) parallelization technique that can uncover parallelism that static (compile-time) parallelization techniques may fail to exploit, because thread independence cannot be guaranteed at compile time. For the technique to achieve the goal of reducing overall execution time, there must be CPU resources available so that the speculative threads can execute efficiently in parallel with the main safe thread.
diff --git a/wiki/wikipedia/2252.txt b/wiki/wikipedia/2252.txt
deleted file mode 100644
index 77672d46743d5731d3c9f3f9eab1eaeb806af29d..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2252.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-In geometry, a dissection problem is the problem of partitioning a geometric figure (such as a polytope or ball) into smaller pieces that may be rearranged into a new figure of equal content. In this context, the partitioning is called simply a dissection (of one polytope into another). It is usually required that the dissection use only a finite number of pieces. Additionally, to avoid set-theoretic issues related to the Banach–Tarski paradox and Tarski's circle-squaring problem, the pieces are typically required to be well-behaved. For instance, they may be restricted to being the closures of disjoint open sets.
-
-The Bolyai–Gerwien theorem states that any polygon may be dissected into any other polygon of the same area, using interior-disjoint polygonal pieces. It is not true, however, that any polyhedron has a dissection into any other polyhedron of the same volume using polyhedral pieces. This process is possible, however, for any two honeycombs (such as the cube) in three dimensions and for any two zonohedra of equal volume (in any dimension).
-
-A dissection into triangles of equal area is called an equidissection. Most polygons cannot be equidissected, and those that can often have restrictions on the possible numbers of triangles. For example, Monsky's theorem states that there is no odd equidissection of a square.
diff --git a/wiki/wikipedia/2253.txt b/wiki/wikipedia/2253.txt
deleted file mode 100644
index 4d458dbc7ddc3da7ffbe1daaeb0979911363de86..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2253.txt
+++ /dev/null
@@ -1,11 +0,0 @@
-In mathematics, Kronecker's congruence, introduced by Kronecker, states that
-$$
- \Phi_p(x,y)\equiv (x-y^p)(x^p-y)\bmod p,
-$$
-
-where p is a prime and $\Phi_p(x,y)$ is the modular polynomial of order p, given by
-$$
-\Phi_n(x,j) = \prod_\tau (x-j(\tau))
-$$
-
-for j the elliptic modular function and τ running through classes of imaginary quadratic integers of discriminant n.
diff --git a/wiki/wikipedia/2254.txt b/wiki/wikipedia/2254.txt
deleted file mode 100644
index 2753f0f98b00675ad386ed8fdddf8c54b5496af5..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2254.txt
+++ /dev/null
@@ -1,19 +0,0 @@
-The elevator paradox is a paradox first noted by Marvin Stern and George Gamow, physicists who had offices on different floors of a multi-story building. Gamow, who had an office near the bottom of the building, noticed that the first elevator to stop at his floor was most often going down, while Stern, who had an office near the top, noticed that the first elevator to stop at his floor was most often going up. This creates the false impression that elevator cars are more likely to be going in one direction than the other depending on which floor the observer is on.
-
-Several attempts (beginning with Gamow and Stern) were made to analyze the reason for this phenomenon: the basic analysis is simple, while detailed analysis is more difficult than it would at first appear.
-
-Put simply, if one is on the top floor of a building, all elevators will come from below (none can come from above), and then depart going down, while if one is on the second from top floor, an elevator going to the top floor will pass first on the way up, and then shortly afterward on the way down – thus, while an equal number will pass going up as going down, downward elevators will generally follow shortly after upward ones (unless the elevator idles on the top floor), and thus the first elevator observed will usually be going up. The first elevator observed will be going down only if one begins observing in the short interval after an elevator has passed going up, while the rest of the time the first elevator observed will be going up.
-
-In more detail, the explanation is as follows: a single elevator spends most of its time in the larger section of the building, and thus is more likely to approach from that direction when the prospective elevator user arrives. An observer who remains by the elevator doors for hours or days, observing every elevator arrival, rather than only observing the first elevator to arrive, would note an equal number of elevators traveling in each direction. This then becomes a sampling problem – the observer is stochastically sampling a non-uniform interval.
-
-To help visualize this, consider a thirty-story building, plus lobby, with only one slow elevator. The elevator is so slow because it stops at every floor on the way up, and then on every floor on the way down. It takes a minute to travel between floors and wait for passengers. The arrival schedule for people unlucky enough to work in this building forms a triangle wave.
-
-If you were on the first floor and walked up to the elevator at a random time, chances are the next elevator would be heading down. The next elevator would be heading up only during the first two minutes at each hour, e.g., at 9:00 and 9:01. The number of elevator stops going upwards and downwards is the same, but the probability that the next elevator is going up is only 2 in 60.
-
-A similar effect can be observed in railway stations where a station near the end of the line will likely have the next train headed for the end of the line.
-
-If there is more than one elevator in a building, the bias decreases – since there is a greater chance that the intending passenger will arrive at the elevator lobby during the time that at least one elevator is below them; with an infinite number of elevators, the probabilities would be equal.
-
-In the example above, if there are 30 floors and 58 elevators, so at every minute there are 2 elevators on each floor, one going up and one going down (save at the top and bottom), the bias is eliminated – every minute, one elevator arrives going up and another going down. This also occurs with 30 elevators spaced 2 minutes apart – on odd floors they alternate up/down arrivals, while on even floors they arrive simultaneously every two minutes.
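The 2-in-60 figure, and the way the bias depends on the observer's floor, can be reproduced with a small simulation. The sketch below is a toy model under stated assumptions: the elevator's position is the triangle wave described above, the round trip is taken as its exact 58 minutes (the schedule rounds this to 60), and the observer arrives at a uniformly random whole minute.

```python
# Toy simulation of the single slow elevator: its position sweeps
# floors 1..30 and back at one minute per floor (a 58-minute round
# trip), and an observer on a given floor arrives at a random minute.
import random

FLOORS = 30
PERIOD = 2 * (FLOORS - 1)          # 58 minutes per round trip

def direction_of_next_visit(floor, t):
    """'up' or 'down' for the first visit to `floor` at a minute >= t."""
    for dt in range(PERIOD):
        phase = (t + dt) % PERIOD
        if phase <= FLOORS - 1:
            pos, direction = 1 + phase, "up"             # ascending leg
        else:
            pos, direction = 1 + PERIOD - phase, "down"  # descending leg
        if pos == floor:
            return direction
    raise AssertionError("the elevator visits every floor each period")

trials = 100_000
for floor in (2, 15, 29):
    ups = sum(direction_of_next_visit(floor, random.randrange(PERIOD)) == "up"
              for _ in range(trials))
    print(f"floor {floor:2d}: P(next elevator is going up) ~ {ups/trials:.3f}")
# Near the bottom this is about 2/58, near the middle about 1/2,
# and near the top it is close to 1.
```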
- -In a real building, there are complicated factors such as: the tendency of elevators to be frequently required on the ground or first floor, and to return there when idle; lopsided demand where everyone wants to go down at the end of the day; people on the lower floors being more willing to take the stairs; or the way full elevators ignore external floor-level calls. These factors tend to shift the frequency of observed arrivals, but do not eliminate the paradox entirely. In particular, a user very near the top floor will perceive the paradox even more strongly, as elevators are infrequently present or required above their floor. diff --git a/wiki/wikipedia/2255.txt b/wiki/wikipedia/2255.txt deleted file mode 100644 index 9193fd4f407153fe087db9f6791796396dcbee47..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2255.txt +++ /dev/null @@ -1,29 +0,0 @@ -In mathematics, the von Neumann paradox, named after John von Neumann, is the idea that one can break a planar figure such as the unit square into sets of points and subject each set to an area-preserving affine transformation such that the result is two planar figures of the same size as the original. This was proved in 1929 by John von Neumann, assuming the axiom of choice. It is based on the earlier Banach–Tarski paradox, which is in turn based on the Hausdorff paradox. - -Banach and Tarski had proved that, using isometric transformations, the result of taking apart and reassembling a two-dimensional figure would necessarily have the same area as the original. This would make creating two unit squares out of one impossible. But von Neumann realized that the trick of such so-called paradoxical decompositions was the use of a group of transformations that include as a subgroup a free group with two generators. The group of area-preserving transformations (whether the special linear group or the special affine group) contains such subgroups, and this opens the possibility of performing paradoxical decompositions using them. - -The following is an informal description of the method found by von Neumann. Assume that we have a free group H of area-preserving linear transformations generated by two transformations, σ and τ, which are not far from the identity element. Being a free group means that all its elements can be expressed uniquely in the form $\sigma^{u_1}\tau^{v_1}\sigma^{u_2}\tau^{v_2} \cdots \sigma^{u_n}\tau^{v_n}$ for some n, where the $u$s and $v$s are all non-zero integers, except possibly the first $u$ and the last $v$. We can divide this group into two parts: those that start on the left with σ to some non-zero power (we call this set A) and those that start with τ to some power (that is, $u_1$ is zero—we call this set B, and it includes the identity). - -If we operate on any point in Euclidean 2-space by the various elements of H we get what is called the orbit of that point. All the points in the plane can thus be classed into orbits, of which there are an infinite number with the cardinality of the continuum. Using the axiom of choice, we can choose one point from each orbit and call the set of these points M. We exclude the origin, which is a fixed point in H. If we then operate on M by all the elements of H, we generate each point of the plane (except the origin) exactly once. If we operate on M by all the elements of A or of B, we get two disjoint sets whose union is all points but the origin. - -Now we take some figure such as the unit square or the unit disk. 
We then choose another figure totally inside it, such as a smaller square, centred at the origin. We can cover the big figure with several copies of the small figure, albeit with some points covered by two or more copies. We can then assign each point of the big figure to one of the copies of the small figure. Let us call the sets corresponding to each copy $C_1, C_2, \dots, C_m$. We shall now make a one-to-one mapping of each point in the big figure to a point in its interior, using only area-preserving transformations. We take the points belonging to $C_1$ and translate them so that the centre of the $C_1$ square is at the origin. We then take those points in it which are in the set A defined above and operate on them by the area-preserving operation $\sigma\tau$. This puts them into set B. We then take the points belonging to B and operate on them with $\sigma^2$. They will now still be in B, but the set of these points will be disjoint from the previous set. We proceed in this manner, using $\sigma^3\tau$ on the A points from $C_2$ (after centring it) and $\sigma^4$ on its B points, and so on. In this way, we have mapped all points from the big figure (except some fixed points) in a one-to-one manner to B type points not too far from the centre, and within the big figure. We can then make a second mapping to A type points. - -At this point we can apply the method of the Cantor–Bernstein–Schroeder theorem. This theorem tells us that if we have an injection from set D to set E (such as from the big figure to the A type points in it), and an injection from E to D (such as the identity mapping from the A type points in the figure to themselves), then there is a one-to-one correspondence between D and E. In other words, having a mapping from the big figure to a subset of the A points in it, we can make a mapping (a bijection) from the big figure to all the A points in it. (In some regions points are mapped to themselves, in others they are mapped using the mapping described in the previous paragraph.) Likewise we can make a mapping from the big figure to all the B points in it. So looking at this the other way round, we can separate the figure into its A and B points, and then map each of these back into the whole figure (that is, containing both kinds of points)! - -This sketch glosses over some things, such as how to handle fixed points. It turns out that more mappings and more sets are necessary to work around this. - -The paradox for the square can be strengthened as follows: - -Any two bounded subsets of the Euclidean plane with non-empty interiors are equidecomposable with respect to the area-preserving affine maps. - -This has consequences concerning the problem of measure. As von Neumann notes, - -"Infolgedessen gibt es bereits in der Ebene kein nichtnegatives additives Maß (wo das Einheitsquadrat das Maß 1 hat), dass [sic] gegenüber allen Abbildungen von $A_2$ invariant wäre." - -"In accordance with this, already in the plane there is no nonnegative additive measure (for which the unit square has a measure of 1), which is invariant with respect to all transformations belonging to $A_2$ [the group of area-preserving affine transformations]." - -To explain this a bit more, the question of whether a finitely additive measure exists, that is preserved under certain transformations, depends on what transformations are allowed. The Banach measure of sets in the plane, which is preserved by translations and rotations, is not preserved by non-isometric transformations even when they do preserve the area of polygons.
As explained above, the points of the plane (other than the origin) can be divided into two dense sets which we may call A and B. If the A points of a given polygon are transformed by a certain area-preserving transformation and the B points by another, both sets can become subsets of the B points in two new polygons. The new polygons have the same area as the old polygon, but the two transformed sets cannot have the same measure as before (since they contain only part of the B points), and therefore there is no measure that "works". - -The class of groups isolated by von Neumann in the course of the study of the Banach–Tarski phenomenon turned out to be very important for many areas of mathematics: these are amenable groups, or groups with an invariant mean, and include all finite and all solvable groups. Generally speaking, paradoxical decompositions arise when the group used for equivalences in the definition of equidecomposability is not amenable. - -Von Neumann's paper left open the possibility of a paradoxical decomposition of the interior of the unit square with respect to the linear group SL(2,R) (Wagon, Question 7.4). In 2000, Miklós Laczkovich proved that such a decomposition exists. More precisely, let A be the family of all bounded subsets of the plane with non-empty interior and at a positive distance from the origin, and B the family of all planar sets with the property that a union of finitely many translates under some elements of SL(2,R) contains a punctured neighbourhood of the origin. Then all sets in the family A are SL(2,R)-equidecomposable, and likewise for the sets in B. It follows that both families consist of paradoxical sets. diff --git a/wiki/wikipedia/2256.txt b/wiki/wikipedia/2256.txt deleted file mode 100644 index 63a417a6f4ed0ea019115c3d5044cef652bfc95e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2256.txt +++ /dev/null @@ -1,9 +0,0 @@ -The Smale conjecture, named after Stephen Smale, is the statement that the diffeomorphism group of the 3-sphere has the homotopy-type of its isometry group, the orthogonal group O(4). It was proved in 1983 by Allen Hatcher. - -There are several equivalent statements of the Smale conjecture. One is that the component of the unknot in the space of smooth embeddings of the circle in 3-space has the homotopy-type of the round circles, equivalently, O(3). Another equivalent statement is that the group of diffeomorphisms of the 3-ball which restrict to the identity on the boundary is contractible. - -Sometimes also the (false) statement that the inclusion $ O(n+1) \to \text{Diff}(S^n) $ is a weak equivalence for all $n$ is meant when referring to the Smale conjecture. For $ n = 1 $ this is easy; for $ n = 2 $, Smale proved it himself. - -For $n\ge5$ the conjecture is false due to the failure of $\text{Diff}(S^n)$ to be contractible. - -In late 2018, Tadayuki Watanabe released a preprint that proves the failure of Smale's conjecture in the remaining 4-dimensional case relying on work around the Kontsevich integral, a generalization of the Gauss linking integral. As of 2021, the proof remains unpublished in a mathematical journal.
diff --git a/wiki/wikipedia/2257.txt b/wiki/wikipedia/2257.txt deleted file mode 100644 index efe8c52bb48562b57caa051bed6865dab3b902e0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2257.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, the Carleson–Jacobs theorem, introduced by Carleson and Jacobs, describes the best approximation to a continuous function on the unit circle by a function in a Hardy space. diff --git a/wiki/wikipedia/2258.txt b/wiki/wikipedia/2258.txt deleted file mode 100644 index 4507ff8fd60554104c1320263114148783a23b5b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2258.txt +++ /dev/null @@ -1,266 +0,0 @@ -In probability theory, Chebyshev's inequality (also called the Bienaymé–Chebyshev inequality) guarantees that, for a wide class of probability distributions, no more than a certain fraction of values can be more than a certain distance from the mean. Specifically, no more than $1/k^2$ of the distribution's values can be k or more standard deviations away from the mean (or equivalently, at least $1 - 1/k^2$ of the distribution's values are less than k standard deviations away from the mean). In statistics, where it concerns the range of standard deviations around the mean, the rule is often called Chebyshev's theorem. The inequality has great utility because it can be applied to any probability distribution in which the mean and variance are defined. For example, it can be used to prove the weak law of large numbers. - -Its practical usage is similar to the 68–95–99.7 rule, which applies only to normal distributions. Chebyshev's inequality is more general, stating that a minimum of just 75% of values must lie within two standard deviations of the mean and 88.89% within three standard deviations for a broad range of different probability distributions. - -The term Chebyshev's inequality may also refer to Markov's inequality, especially in the context of analysis. They are closely related, and some authors refer to Markov's inequality as "Chebyshev's First Inequality," and the similar one referred to on this page as "Chebyshev's Second Inequality." - -The theorem is named after Russian mathematician Pafnuty Chebyshev, although it was first formulated by his friend and colleague Irénée-Jules Bienaymé. The theorem was first stated without proof by Bienaymé in 1853 and later proved by Chebyshev in 1867. His student Andrey Markov provided another proof in his 1884 Ph.D. thesis. - -Chebyshev's inequality is usually stated for random variables, but can be generalized to a statement about measure spaces. - -Let X (integrable) be a random variable with finite expected value μ and finite non-zero variance $\sigma^2$. Then for any real number k > 0, - - - -\Pr(|X-\mu|\geq k\sigma) \leq \frac{1}{k^2}. - - - -Only the case $k > 1$ is useful. When $k \leq 1$ the right-hand side $ \frac{1}{k^2} \geq 1 $ and the inequality is trivial as all probabilities are ≤ 1. - -As an example, using $k = \sqrt{2}$ shows that the probability that values lie outside the interval $(\mu - \sqrt{2}\sigma, \mu + \sqrt{2}\sigma)$ does not exceed $\frac{1}{2}$. - -Because it can be applied to completely arbitrary distributions provided they have a known finite mean and variance, the inequality generally gives a poor bound compared to what might be deduced if more aspects are known about the distribution involved. - -Let (X, Σ, μ) be a measure space, and let f be an extended real-valued measurable function defined on X.
Then for any real number t > 0 and 0 < p < ∞, -$$ -\mu(\{x\in X:|f(x)|\geq t\}) \leq {1\over t^p} \int_{\{|f|\geq t\}} |f|^p \, d\mu. -$$ - -If $X_1, \dots, X_n$ are independent random variables with means $\mu_i$ and variances $\sigma_i^2$, then -$$ -\Pr\left( \bigcap_{i=1}^n \frac{|X_i - \mu_i|}{\sigma_i} \le k_i \right ) \ge \prod_{i=1}^n \left (1 - \frac{1}{k_i^2} \right) -$$ - -Berge derived an inequality for two correlated variables $X_1, X_2$. Let ρ be the correlation coefficient between $X_1$ and $X_2$ and let $\sigma_i^2$ be the variance of $X_i$. Then -$$ - \Pr\left( \bigcap_{ i = 1}^2 \left[ \frac{ | X_i - \mu_i | } { \sigma_i } < k \right] \right) \ge 1 - \frac{ 1 + \sqrt{ 1 - \rho^2 } } { k^2 }. -$$ - -This result can be sharpened to having different bounds for the two random variables and having asymmetric bounds, as in Selberg's inequality. - -Olkin and Pratt derived an inequality for n correlated variables. -$$ - \Pr\left(\bigcap_{i = 1 }^n \frac{|X_i - \mu_i|}{\sigma_i} < k_i \right) \ge 1 - \frac{1}{n^2} \left(\sqrt{u} + \sqrt{n-1} \sqrt{n \sum_i \frac 1 { k_i^2} - u} \right)^2 -$$ - -where the sum is taken over the n variables and -$$ - u = \sum_{i=1}^n \frac{1}{ k_i^2} + 2\sum_{i=1}^n \sum_{j<i} \frac{\rho_{ij}}{k_i k_j} -$$ - -where $\rho_{ij}$ is the correlation between $X_i$ and $X_j$. - -Olkin and Pratt's inequality was subsequently generalised by Godwin. - -Mitzenmacher and Upfal note that by applying Markov's inequality to the nonnegative variable $| X - \operatorname{E}(X) |^n$, one can get a family of tail bounds -$$ - \Pr\left(| X - \operatorname{E}(X) | \ge k \operatorname{E}(|X - \operatorname{E}(X) |^n )^{ \frac{1}{n} }\right) \le \frac{1 } {k^n}, \qquad k >0, n \geq 2. -$$ - -For n = 2 we obtain Chebyshev's inequality. For k ≥ 1, n > 4 and assuming that the nth moment exists, this bound is tighter than Chebyshev's inequality. This strategy, called the method of moments, is often used to prove tail bounds. - -A related inequality sometimes known as the exponential Chebyshev's inequality is the inequality -$$ - \Pr(X \ge \varepsilon) \le e^{ -t \varepsilon }\operatorname{E}\left (e^{ t X } \right), \qquad t > 0. -$$ - -Let K(t) be the cumulant generating function, -$$ - K( t ) = \log \left(\operatorname{E}\left( e^{ t X } \right) \right). -$$ - -Taking the Legendre–Fenchel transformation of K(t) and using the exponential Chebyshev's inequality we have -$$ --\log( \Pr (X \ge \varepsilon )) \ge \sup_t( t \varepsilon - K( t ) ). -$$ - -This inequality may be used to obtain exponential inequalities for unbounded variables. - -If P(x) has finite support based on the interval [a, b], let M = max(|a|, |b|), where |x| is the absolute value of x. If the mean of P(x) is zero then for all k > 0 -$$ -\frac{\operatorname{E}(|X|^r ) - k^r }{M^r} \le \Pr( | X | \ge k ) \le \frac{\operatorname{E}(| X |^r ) }{ k^r }. -$$ - -The second of these inequalities with r = 2 is the Chebyshev bound. The first provides a lower bound for the value of P(x). - -Saw et al extended Chebyshev's inequality to cases where the population mean and variance are not known and may not exist, but the sample mean and sample standard deviation from N samples are to be employed to bound the expected value of a new drawing from the same distribution. The following simpler version of this inequality is given by Kabán. -$$ -P( | X - m | \ge ks ) \le \frac 1 {N + 1} \left\lfloor \frac {N+1} N \left(\frac{N - 1}{k^2} + 1 \right) \right\rfloor -$$ - -where X is a random variable which we have sampled N times, m is the sample mean, k is a constant and s is the sample standard deviation. - -This inequality holds even when the population moments do not exist, and when the sample is only weakly exchangeably distributed; this criterion is met for randomised sampling.
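The sample-based bound lends itself to a quick empirical check. The sketch below is illustrative only: the exponential distribution, the values of N and k, and the trial count are arbitrary assumptions, not choices made by Saw et al or Kabán.

```python
# Monte Carlo check of the sample-based Chebyshev bound (Kabán's version):
# draw N samples plus one fresh observation X, many times over, and compare
# the observed frequency of |X - m| >= k s with the stated bound.
import numpy as np

rng = np.random.default_rng(0)
N, k, trials = 20, 2.0, 200_000

samples = rng.exponential(scale=1.0, size=(trials, N))
fresh = rng.exponential(scale=1.0, size=trials)   # the "new drawing" X
m = samples.mean(axis=1)                          # sample means
s = samples.std(axis=1, ddof=1)                   # sample standard deviations

freq = np.mean(np.abs(fresh - m) >= k * s)
bound = np.floor((N + 1) / N * ((N - 1) / k**2 + 1)) / (N + 1)
print(f"observed tail frequency ~ {freq:.4f}, Kabán bound = {bound:.4f}")
```

For this setup the observed frequency comes out well under the bound, as expected; the bound must hold for any sampling distribution, so it is typically far from tight for any particular one.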
A table of values for the Saw–Yang–Mo inequality for finite sample sizes (N < 100) has been determined by Konijn. The table allows the calculation of various confidence intervals for the mean, based on multiples, C, of the standard error of the mean as calculated from the sample. For example, Konijn shows that for N = 59, the 95 percent confidence interval for the mean m is (m − Cs, m + Cs) where C = 4.447 × 1.006 = 4.47 (this is 2.28 times larger than the value found on the assumption of normality, showing the loss of precision resulting from ignorance of the precise nature of the distribution). - -If the standard deviation is a multiple of the mean then a further inequality can be derived, -$$ - P( | X - m | \ge ks ) \le \frac 1 {N + 1}. -$$ - -Beasley et al have suggested further modifications of this inequality. Sharper one-sided bounds are also available through the lower semivariance $\sigma_-^2$ (the second moment of the deviations below the mean): -$$ - \Pr(x \le m - a \sigma_-) \le \frac { 1 } { a^2 }. -$$ - -Putting -$$ - a = \frac{ k \sigma } { \sigma_- }, -$$ - -Chebyshev's inequality can now be written -$$ - \Pr(x \le m - k \sigma) \le \frac { 1 } { k^2 } \frac { \sigma_-^2 } { \sigma^2 }. -$$ - -A similar result can also be derived for the upper semivariance. - -If we put -$$ - \sigma_u^2 = \max(\sigma_-^2, \sigma_+^2) , -$$ - -Chebyshev's inequality can be written -$$ - \Pr(x \le m - k \sigma) \le \frac 1 {k^2} \frac { \sigma_u^2 } { \sigma^2 } . -$$ - -Because $\sigma_u^2 \le \sigma^2$, use of the semivariance sharpens the original inequality. - -If the distribution is known to be symmetric, then -$$ - \sigma_+^2 = \sigma_-^2 = \frac{ 1 } { 2 } \sigma^2 -$$ - -and -$$ - \Pr(x \le m - k \sigma) \le \frac 1 {2k^2} . -$$ - -This result agrees with that derived using standardised variables. - -Note: the inequality with the lower semivariance has been found to be of use in estimating downside risk in finance and agriculture. - -Cantelli's inequality, due to Francesco Paolo Cantelli, states that for a real random variable (X) with mean (μ) and variance ($\sigma^2$) -$$ - P(X - \mu \ge a) \le \frac{\sigma^2}{ \sigma^2 + a^2 } -$$ - -where a ≥ 0. - -This inequality can be used to prove a one-tailed variant of Chebyshev's inequality with k > 0 -$$ - \Pr(X - \mu \geq k \sigma) \leq \frac{ 1 }{ 1 + k^2 }. -$$ - -The bound on the one-tailed variant is known to be sharp. To see this, consider the random variable X that takes the values -$$ - X = 1 -$$ with probability $ \frac{ \sigma^2 } { 1 + \sigma^2 }$ -$$ - X = - \sigma^2 -$$ with probability $ \frac{ 1 } { 1 + \sigma^2 }.$ - -Then E(X) = 0 and E(X²) = σ² and P(X < 1) = 1 / (1 + σ²). - -The one-sided variant can be used to prove the proposition that for probability distributions having an expected value and a median, the mean and the median can never differ from each other by more than one standard deviation. To express this in symbols let μ, ν, and σ be respectively the mean, the median, and the standard deviation. Then -$$ - \left | \mu - \nu \right | \leq \sigma. -$$ - -There is no need to assume that the variance is finite because this inequality is trivially true if the variance is infinite. - -The proof is as follows. Setting k = 1 in the statement for the one-sided inequality gives: -$$ -\Pr(X - \mu \geq \sigma) \leq \frac{ 1 }{ 2 } \implies \Pr(X \geq \mu + \sigma) \leq \frac{ 1 }{ 2 }. -$$ - -Changing the sign of X and of μ, we get -$$ -\Pr(X \leq \mu - \sigma) \leq \frac{ 1 }{ 2 }.
-$$ - -As the median is by definition any real number m that satisfies the inequalities -$$ -\operatorname{P}(X\leq m) \geq \frac{1}{2}\text{ and }\operatorname{P}(X\geq m) \geq \frac{1}{2} -$$ - -this implies that the median lies within one standard deviation of the mean. A proof using Jensen's inequality also exists. - -Bhattacharyya extended Cantelli's inequality using the third and fourth moments of the distribution. - -Let μ = 0 and $\sigma^2$ be the variance. Let $\gamma = \operatorname{E}(X^3)/\sigma^3$ and $\kappa = \operatorname{E}(X^4)/\sigma^4$. - -If $k^2 - k\gamma - 1 > 0$ then -$$ - P(X > k\sigma) \le \frac{ \kappa - \gamma^2 - 1 }{ (\kappa - \gamma^2 - 1) (1 + k^2) + (k^2 - k\gamma - 1) }. -$$ - -The necessity of $k^2 - k\gamma - 1 > 0$ requires that k be reasonably large. - -In 1823 Gauss showed that for a distribution with a unique mode at zero, -$$ - P( | X | \ge k ) \le \frac{ 4 \operatorname{ E }( X^2 ) } { 9k^2 } \quad\text{if} \quad k^2 \ge \frac{ 4 } { 3 } \operatorname{E} (X^2) , -$$ -$$ - P( | X | \ge k ) \le 1 - \frac{ k } { \sqrt{ 3 \operatorname{ E }( X^2 ) } } \quad \text{if} \quad k^2 \le \frac{ 4 } { 3 } \operatorname{ E }( X^2 ). -$$ - -The Vysochanskij–Petunin inequality generalizes Gauss's inequality, which only holds for deviation from the mode of a unimodal distribution, to deviation from the mean, or more generally, any center. If X is a unimodal distribution with mean μ and variance $\sigma^2$, then the inequality states that -$$ - P( | X - \mu | \ge k \sigma ) \le \frac{ 4 }{ 9k^2 } \quad \text{if} \quad k \ge \sqrt{8/3}, -$$ -$$ - P( | X - \mu | \ge k \sigma ) \le \frac{ 4 }{ 3k^2 } - \frac13 \quad \text{if} \quad k \le \sqrt{8/3}. -$$ - -For symmetrical unimodal distributions, the median and the mode are equal, so both the Vysochanskij–Petunin inequality and Gauss's inequality apply to the same center. Further, for symmetrical distributions, one-sided bounds can be obtained by noticing that -$$ - P( X - \mu \ge k \sigma ) = P( X - \mu \le -k \sigma ) = \frac{1}{2} P( |X - \mu| \ge k \sigma ). -$$ - -The additional fraction of $4/9$ present in these tail bounds leads to better confidence intervals than Chebyshev's inequality. For example, for any symmetrical unimodal distribution, the Vysochanskij–Petunin inequality states that 4/81 (approximately 4.9%) of the distribution lies outside 3 standard deviations of the mode. - -DasGupta has shown that if the distribution is known to be normal, then -$$ - P( | X - \mu | \ge k \sigma ) \le \frac{ 1 }{ 3 k^2 } . -$$ - -From DasGupta's inequality it follows that for a normal distribution at least 95% lies within approximately 2.582 standard deviations of the mean. This is less sharp than the true figure (approximately 1.96 standard deviations of the mean). - -*DasGupta has determined a set of best possible bounds for a normal distribution for this inequality. - -*Grechuk et al. developed a general method for deriving the best possible bounds in Chebyshev's inequality for any family of distributions, and any deviation risk measure in place of standard deviation. In particular, they derived Chebyshev's inequality for distributions with log-concave densities. - -Several other related inequalities are also known. - -The Paley–Zygmund inequality gives a lower bound on tail probabilities, as opposed to Chebyshev's inequality which gives an upper bound. Applying it to the square of a random variable, we get -$$ - \Pr( | Z | > \theta \sqrt{E[Z^2]} ) \ge \frac{ ( 1 - \theta^2 )^2 E[Z^2]^2 }{E[Z^4]}.
-$$ - -One use of Chebyshev's inequality in applications is to create confidence intervals for variates with an unknown distribution. Haldane noted, using an equation derived by Kendall, that if a variate (x) has a zero mean, unit variance and both finite skewness (γ) and kurtosis (κ) then the variate can be converted to a normally distributed standard score (z): -$$ - z = x - \frac{\gamma}{6} (x^2 - 1) + \frac{ x }{ 72 } [ 2 \gamma^2 (4 x^2 - 7) - 3 \kappa (x^2 - 3) ] + \cdots -$$ - -This transformation may be useful as an alternative to Chebyshev's inequality or as an adjunct to it for deriving confidence intervals for variates with unknown distributions. - -While this transformation may be useful for moderately skewed and/or kurtotic distributions, it performs poorly when the distribution is markedly skewed and/or kurtotic. - -For any collection of n non-negative independent random variables $X_i$ with expectation 1 -$$ - \Pr\left ( \frac{\sum_{i=1}^n X_i }{n} - 1 \ge \frac{1}{n} \right) \le \frac{ 7 }{ 8 }. -$$ - -There is a second (less well known) inequality also named after Chebyshev: - -If f, g : [a, b] → R are two monotonic functions of the same monotonicity, then -$$ - \frac{ 1 }{ b - a } \int_a^b \! f(x) g(x) dx \ge \left[ \frac{ 1 }{ b - a } \int_a^b \! f(x) dx \right] \left[ \frac{ 1 }{ b - a } \int_a^b \! g(x) dx \right] . -$$ - -If f and g are of opposite monotonicity, then the above inequality works in the reverse way. - -This inequality is related to Jensen's inequality, Kantorovich's inequality, and the Hermite–Hadamard inequality. - -There are also a number of other inequalities associated with Chebyshev: - -*Chebyshev's sum inequality - -*Chebyshev–Markov–Stieltjes inequalities diff --git a/wiki/wikipedia/2259.txt b/wiki/wikipedia/2259.txt deleted file mode 100644 index b52ddc9d735577d613fb6742af591536a0cdf337..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2259.txt +++ /dev/null @@ -1,107 +0,0 @@ -In mathematics, Hadamard's lemma, named after Jacques Hadamard, is essentially a first-order form of Taylor's theorem, in which we can express a smooth, real-valued function exactly in a convenient manner. - -{{math theorem|name=Hadamard's lemma|note=|style=|math_statement= - -Let $f$ be a smooth, real-valued function defined on an open, star-convex neighborhood $U$ of a point $a$ in $n$-dimensional Euclidean space. Then $f(x)$ can be expressed, for all $x \in U,$ in the form: - -f(x) = f(a) + \sum_{i=1}^n \left(x_i - a_i\right) g_i(x), - -where each $g_i$ is a smooth function on $U,$ $a = \left(a_1, \ldots, a_n\right),$ and $x = \left(x_1, \ldots, x_n\right).$ - -}} - -{{math proof|drop=hidden|proof= - -Let $x \in U.$ Define $h : [0, 1] \to \R$ by - -h(t) = f(a + t(x - a)) \qquad \text{ for all } t \in [0, 1]. - -Then - -h'(t) = \sum_{i=1}^n \frac{\partial f}{\partial x_i}(a + t(x - a)) \left(x_i - a_i\right), - -which implies - -h(1) - h(0) = \int_0^1 h'(t)dt - -= \int_0^1 \sum_{i=1}^n \frac{\partial f}{\partial x_i}(a + t(x - a)) \left(x_i - a_i\right) dt - -= \sum_{i=1}^n \left(x_i - a_i\right)\int_0^1 \frac{\partial f}{\partial x_i}(a + t(x - a)) dt. - -But additionally, $h(1) - h(0) = f(x) - f(a),$ so by letting - -g_i(x) = \int_0^1 \frac{\partial f}{\partial x_i}(a + t(x - a)) dt, - -the theorem has been proven.
-$$ -\blacksquare -$$ - -}} - -{{math theorem|name=Corollary|note=|style=|math_statement= - -If $f : \R \to \R$ is smooth and $f(0) = 0$ then $f(x)/x$ is a smooth function on $\R.$ - -Explicitly, this conclusion means that the function $\R \to \R$ that sends $x \in \R$ to - -\begin{cases} - -f(x)/x & \text{ if } x \neq 0 \\ - -\lim_{t \to 0} f(t)/t & \text{ if } x = 0 \\ - -\end{cases} - -is a well-defined smooth function on $\R.$ - -}} - -{{math proof|drop=hidden|proof= - -By Hadamard's lemma, there exists some $g \in C^{\infty}(\R)$ such that $f(x) = f(0) + x g(x)$ so that $f(0) = 0$ implies $f(x)/x = g(x).$ -$$ -\blacksquare -$$ - -}} - -{{math theorem|name=Corollary|note=|style=|math_statement= - -If $y, z \in \R^n$ are distinct points and $f : \R^n \to \R$ is a smooth function that satisfies $f(z) = 0 = f(y)$ then there exist smooth functions $g_i, h_i \in C^{\infty}\left(\R^n\right)$ ($i = 1, \ldots, 3n - 2$) satisfying $g_i(z) = 0 = h_i(y)$ for every $i$ such that - -f = \sum_{i}^{} g_i h_i. - -}} - -{{math proof|drop=hidden|proof= - -By applying an invertible affine linear change in coordinates, it may be assumed without loss of generality that $z = (0, \ldots, 0)$ and $y = (0, \ldots, 0, 1).$ - -By Hadamard's lemma, there exist $g_1, \ldots, g_n \in C^{\infty}\left(\R^n\right)$ such that -$$ -f(x) = \sum_{i=1}^n x_i g_i(x). -$$ - -For every $i = 1, \ldots, n,$ let $\alpha_i := g_i(y)$ where $0 = f(y) = \sum_{i=1}^n y_i g_i(y) = g_n(y)$ implies $\alpha_n = 0.$ - -Then for any $x = \left(x_1, \ldots, x_n\right) \in \R^n,$ - -\begin{alignat}{8} - -f(x) - -&= \sum_{i=1}^n x_i g_i(x) && \\ - -&= \sum_{i=1}^n \left[x_i\left(g_i(x) - \alpha_i\right)\right] + \sum_{i=1}^{n-1} \left[x_i \alpha_i\right] && \quad \text{ using } g_i(x) = \left(g_i(x) - \alpha_i\right) + \alpha_i \text{ and } \alpha_n = 0 \\ - -&= \left[\sum_{i=1}^n x_i\left(g_i(x) - \alpha_i\right)\right] + \left[\sum_{i=1}^{n-1} x_i x_n \alpha_i\right] + \left[\sum_{i=1}^{n-1} x_i \left(1 - x_n\right) \alpha_i\right] && \quad \text{ using } x_i = x_n x_i + x_i \left(1 - x_n\right). \\ - -\end{alignat} - -Each of the $3 n - 2$ terms above has the desired properties. -$$ -\blacksquare -$$ - -}} diff --git a/wiki/wikipedia/226.txt b/wiki/wikipedia/226.txt deleted file mode 100644 index e5914dc62356bc152840d5f782f0f4210df7f519..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/226.txt +++ /dev/null @@ -1,45 +0,0 @@ -This article contains examples of Markov chains and Markov processes in action. - -All examples are in the countable state space. For an overview of Markov chains in general state space, see Markov chains on a measurable state space. - -A game of snakes and ladders or any other game whose moves are determined entirely by dice is a Markov chain, indeed, an absorbing Markov chain. This is in contrast to card games such as blackjack, where the cards represent a 'memory' of the past moves. To see the difference, consider the probability for a certain event in the game. In the above-mentioned dice games, the only thing that matters is the current state of the board. The next state of the board depends on the current state, and the next roll of the dice. It doesn't depend on how things got to their current state. In a game such as blackjack, a player can gain an advantage by remembering which cards have already been shown (and hence which cards are no longer in the deck), so the next state (or hand) of the game is not independent of the past states. 
- -Consider a random walk on the number line where, at each step, the position (call it x) may change by +1 (to the right) or −1 (to the left) with probabilities: -$$ -P_{\mathrm{move~left}} = \dfrac{1}{2} + \dfrac{1}{2} \left( \dfrac{x}{c+|x|} \right) -$$ -$$ -P_{\mathrm{move~right}} = 1 - P_{\mathrm{move~left}} -$$ - -(where c is a constant greater than 0) - -For example, if the constant, c, equals 1, the probabilities of a move to the left at positions x = −2,−1,0,1,2 are given by $\dfrac{1}{6},\dfrac{1}{4},\dfrac{1}{2},\dfrac{3}{4},\dfrac{5}{6}$ respectively. The random walk has a centering effect that weakens as c increases. - -Since the probabilities depend only on the current position (value of x) and not on any prior positions, this biased random walk satisfies the definition of a Markov chain. - -Suppose that you start with $10, and you repeatedly wager $1 on a fair coin toss, either indefinitely or until you lose all of your money. If $X_n$ represents the number of dollars you have after n tosses, with $X_0 = 10$, then the sequence $\{X_n : n \in \mathbb{N} \}$ is a Markov process. If I know that you have $12 now, then it would be expected that with even odds, you will either have $11 or $13 after the next toss. This guess is not improved by the added knowledge that you started with $10, then went up to $11, down to $10, up to $11, and then to $12. The fact that the guess is not improved by the knowledge of earlier tosses showcases the Markov property, the memoryless property of a stochastic process. - -The probabilities of weather conditions (modeled as either rainy or sunny), given the weather on the preceding day, can be represented by a transition matrix: - - - -P = \begin{bmatrix} - -0.9 & 0.1 \\ - -0.5 & 0.5 - -\end{bmatrix} - - - -The matrix P represents the weather model in which a sunny day is 90% likely to be followed by another sunny day, and a rainy day is 50% likely to be followed by another rainy day. The stationary distribution of this chain (the left eigenvector of P with eigenvalue 1) gives the long-run probabilities of sunny and rainy weather, independently of the initial weather; a short computation is sketched below. - -A finite-state machine can be used as a representation of a Markov chain. Assuming a sequence of independent and identically distributed input signals (for example, symbols from a binary alphabet chosen by coin tosses), if the machine is in state y at time n, then the probability that it moves to state x at time n + 1 depends only on the current state. - -If one pops one hundred kernels of popcorn in an oven, each kernel popping at an independent exponentially-distributed time, then this would be a continuous-time Markov process. If $X_t$ denotes the number of kernels which have popped up to time t, the problem can be defined as finding the number of kernels that will pop within some later time. The only thing one needs to know is the number of kernels that have popped prior to the time "t". It is not necessary to know when they popped, so knowing $X_t$ for previous times "t" is not relevant. - -The process described here is an approximation of a Poisson point process – Poisson processes are also Markov processes.
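The stationary distribution mentioned above is a two-line computation. A minimal sketch with numpy (the iteration count in the cross-check is an arbitrary choice):

```python
# Stationary distribution of the two-state weather chain: solve pi P = pi,
# normalising pi to sum to 1.
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])       # rows: today sunny/rainy; columns: tomorrow

eigvals, eigvecs = np.linalg.eig(P.T)   # left eigenvectors of P
i = np.argmin(np.abs(eigvals - 1.0))    # pick the eigenvalue-1 eigenvector
pi = np.real(eigvecs[:, i])
pi /= pi.sum()
print(pi)                               # ~ [0.8333 0.1667]: 5/6 sunny, 1/6 rainy

# Cross-check: powers of P converge to a matrix whose rows are all pi.
print(np.linalg.matrix_power(P, 50)[0])
```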
diff --git a/wiki/wikipedia/2260.txt b/wiki/wikipedia/2260.txt deleted file mode 100644 index f566f3d34552dfe99f3c85d72425206a04e846d6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2260.txt +++ /dev/null @@ -1,307 +0,0 @@ -In matrix theory, the Perron–Frobenius theorem, proved by Oskar Perron and Georg Frobenius, asserts that a real square matrix with positive entries has a unique largest real eigenvalue and that the corresponding eigenvector can be chosen to have strictly positive components, and also asserts a similar statement for certain classes of nonnegative matrices. This theorem has important applications to probability theory (ergodicity of Markov chains); to the theory of dynamical systems (subshifts of finite type); to economics (Okishio's theorem, Hawkins–Simon condition); - -to demography (Leslie population age distribution model); - -to social networks (DeGroot learning process); to Internet search engines (PageRank); and even to ranking of football teams. The first to discuss the ordering of players within tournaments using Perron–Frobenius eigenvectors is Edmund Landau. - -Let positive and non-negative respectively describe matrices with exclusively positive real numbers as elements and matrices with exclusively non-negative real numbers as elements. The eigenvalues of a real square matrix A are complex numbers that make up the spectrum of the matrix. The exponential growth rate of the matrix powers $A^k$ as k → ∞ is controlled by the eigenvalue of A with the largest absolute value (modulus). The Perron–Frobenius theorem describes the properties of the leading eigenvalue and of the corresponding eigenvectors when A is a non-negative real square matrix. Early results were due to Perron and concerned positive matrices. Later, Frobenius found their extension to certain classes of non-negative matrices. - -Let $A = (a_{ij}) $ be an $ n \times n $ positive matrix: $ a_{ij} > 0 $ for $ 1 \le i,j \le n $. Then the following statements hold. - -# There is a positive real number r, called the Perron root or the Perron–Frobenius eigenvalue (also called the leading eigenvalue or dominant eigenvalue), such that r is an eigenvalue of A and any other eigenvalue λ (possibly complex) is strictly smaller than r in absolute value, |λ| < r. Thus, the spectral radius $\rho(A) $ is equal to r. If the matrix coefficients are algebraic, this implies that the eigenvalue is a Perron number. - -# The Perron–Frobenius eigenvalue is simple: r is a simple root of the characteristic polynomial of A. Consequently, the eigenspace associated to r is one-dimensional. (The same is true for the left eigenspace, i.e., the eigenspace for $A^T$, the transpose of A.) - -# There exists an eigenvector $v = (v_1,\dots,v_n)^T$ of A with eigenvalue r such that all components of v are positive: A v = r v, $v_i > 0$ for 1 ≤ i ≤ n. (Respectively, there exists a positive left eigenvector w : $w^T A = r w^T$, $w_i > 0$.) It is known in the literature under many variations as the Perron vector, Perron eigenvector, Perron–Frobenius eigenvector, leading eigenvector, or dominant eigenvector. - -# There are no other positive (moreover non-negative) eigenvectors except positive multiples of v (respectively, left eigenvectors except w), i.e., all other eigenvectors must have at least one negative or non-real component. - -# $ \lim_{k \rightarrow \infty} A^k/r^k = v w^T$, where the left and right eigenvectors for A are normalized so that $w^T v = 1$. Moreover, the matrix $v w^T$ is the projection onto the eigenspace corresponding to r. This projection is called the Perron projection.
- -# Collatz–Wielandt formula: for all non-negative non-zero vectors x, let f(x) be the minimum value of $[Ax]_i / x_i$ taken over all those i such that $x_i \ne 0$. Then f is a real-valued function whose maximum over all non-negative non-zero vectors x is the Perron–Frobenius eigenvalue. - -# A "Min-max" Collatz–Wielandt formula takes a form similar to the one above: for all strictly positive vectors x, let g(x) be the maximum value of $[Ax]_i / x_i$ taken over i. Then g is a real-valued function whose minimum over all strictly positive vectors x is the Perron–Frobenius eigenvalue. - -# Birkhoff–Varga formula: Let x and y be strictly positive vectors. Then $r = \sup_{x>0} \inf_{y>0} \frac{y^\top A x}{y^\top x} = \inf_{x>0} \sup_{y>0} \frac{y^\top A x}{y^\top x} = \inf_{x>0} \sup_{y>0} \sum_{i,j=1}^n y_i a_{ij} x_j/\sum_{i=1}^n y_i x_i.$ - -# Donsker–Varadhan–Friedland formula: Let p be a probability vector and x a strictly positive vector. Then $r = \sup_p \inf_{x>0} \sum_{i=1}^n p_i[Ax]_i/x_i.$ - -# Fiedler formula: $r = \sup_{z > 0} \ \inf_{x>0, \ y>0,\ x \circ y = z} \frac{y^\top A x}{y^\top x} = \sup_{z > 0} \ \inf_{x>0, \ y>0,\ x \circ y = z}\sum_{i,j=1}^n y_i a_{ij} x_j/\sum_{i=1}^n y_i x_i.$ - -# The Perron–Frobenius eigenvalue satisfies the inequalities -$$ -\min_i \sum_{j} a_{ij} \le r \le \max_i \sum_{j} a_{ij}. -$$ - -All of these properties extend beyond strictly positive matrices to primitive matrices (see below). Facts 1–7 can be found in Meyer (Corollary 2.2, p. 6). - -# Wielandt's theorem. If |B| ≤ A entrywise, then ρ(B) ≤ ρ(A) = r; if, moreover, equality holds (i.e. if $\mu = r e^{i\varphi}$ is an eigenvalue for B), then $B = e^{i\varphi} D A D^{-1}$ for some diagonal unitary matrix D (i.e. the diagonal elements of D equal $e^{i\theta_l}$, the non-diagonal ones are zero). - -# If some power $A^q$ is reducible, then it is completely reducible, i.e. for some permutation matrix P, it is true that: - -P A^q P^{-1}= \begin{pmatrix} - -A_1 & O & O & \dots & O \\ - -O & A_2 & O & \dots & O \\ - -\vdots & \vdots & \vdots & & \vdots \\ - -O & O & O & \dots & A_d \\ - -\end{pmatrix} - -, where $A_i$ are irreducible matrices having the same maximal eigenvalue. The number of these matrices d is the greatest common divisor of q and h, where h is the period of A. - -# If $c(x) = x^n + c_{k_1} x^{n-k_1} + c_{k_2} x^{n-k_2} + \dots + c_{k_s} x^{n-k_s}$ is the characteristic polynomial of A in which only the non-zero terms are listed, then the period of A equals the greatest common divisor of $k_1, k_2, \ldots, k_s$. - -# Cesàro averages: $ \lim_{k \rightarrow \infty} 1/k\sum_{i=0,...,k} A^i/r^i = ( v w^T),$ where the left and right eigenvectors for A are normalized so that $w^T v = 1$. Moreover, the matrix $v w^T$ is the spectral projection corresponding to r, the Perron projection. - -# Let r be the Perron–Frobenius eigenvalue; then the adjoint matrix for (r-A) is positive. - -# If A has at least one non-zero diagonal element, then A is primitive. - -# If 0 ≤ A < B, then $r_A \le r_B$. Moreover, if B is irreducible, then the inequality is strict: $r_A < r_B$. - -A matrix A is primitive provided it is non-negative and $A^m$ is positive for some m, and hence $A^k$ is positive for all k ≥ m. To check primitivity, one needs a bound on how large the minimal such m can be, depending on the size of A: - -* If A is a non-negative primitive matrix of size n, then $A^{n^2 - 2n + 2}$ is positive. Moreover, this is the best possible result, since for the matrix M below, the power $M^k$ is not positive for every $k < n^2 - 2n + 2$, since $(M^{n^2 - 2n + 1})_{11} = 0$.
- -M= - -\left(\begin{smallmatrix} - -0 & 1 & 0 & 0 & \cdots & 0 \\ - -0 & 0 & 1 & 0 & \cdots & 0 \\ - -0 & 0 & 0 & 1 & \cdots & 0 \\ - -\vdots & \vdots & \vdots & \vdots & & \vdots \\ - -0 & 0 & 0 & 0 & \cdots & 1 \\ - -1 & 1 & 0 & 0 & \cdots & 0 - -\end{smallmatrix}\right) - - - -Numerous books have been written on the subject of non-negative matrices, and Perron–Frobenius theory is invariably a central feature. The examples given below only scratch the surface of its vast application domain. - -The Perron–Frobenius theorem does not apply directly to non-negative matrices. Nevertheless, any reducible square matrix A may be written in upper-triangular block form (known as the normal form of a reducible matrix) - -PAP^{-1} = \left( \begin{smallmatrix} - -B_1 & * & * & \cdots & * \\ - -0 & B_2 & * & \cdots & * \\ - -\vdots & \vdots & \vdots & & \vdots \\ - -0 & 0 & 0 & \cdots & * \\ - -0 & 0 & 0 & \cdots & B_h - -\end{smallmatrix} - -\right) - -where P is a permutation matrix and each $B_i$ is a square matrix that is either irreducible or zero. Now if A is non-negative then so too is each block of $PAP^{-1}$; moreover, the spectrum of A is just the union of the spectra of the $B_i$. - -The invertibility of A can also be studied. The inverse of $PAP^{-1}$ (if it exists) must have diagonal blocks of the form $B_i^{-1}$, so if any $B_i$ isn't invertible then neither is $PAP^{-1}$ or A. - -Conversely, let D be the block-diagonal matrix corresponding to $PAP^{-1}$, in other words $PAP^{-1}$ with the asterisks zeroised. If each $B_i$ is invertible then so is D, and $D^{-1}(PAP^{-1})$ is equal to the identity plus a nilpotent matrix. But such a matrix is always invertible (if $N^k = 0$ the inverse of 1 − N is $1 + N + N^2 + \dots + N^{k-1}$), so $PAP^{-1}$ and A are both invertible. - -Therefore, many of the spectral properties of A may be deduced by applying the theorem to the irreducible $B_i$. For example, the Perron root is the maximum of the $\rho(B_i)$. While there will still be eigenvectors with non-negative components, it is quite possible that none of these will be positive. - -A row (column) stochastic matrix is a square matrix each of whose rows (columns) consists of non-negative real numbers whose sum is unity. The theorem cannot be applied directly to such matrices because they need not be irreducible. - -If A is row-stochastic then the column vector with each entry 1 is an eigenvector corresponding to the eigenvalue 1, which is also ρ(A) by the remark above. It might not be the only eigenvalue on the unit circle, and the associated eigenspace can be multi-dimensional. If A is row-stochastic and irreducible then the Perron projection is also row-stochastic and all its rows are equal. - -The theorem has particular use in algebraic graph theory. The "underlying graph" of a nonnegative n-square matrix is the graph with vertices numbered 1, ..., n and an arc from i to j if and only if $A_{ij} \ne 0$. If the underlying graph of such a matrix is strongly connected, then the matrix is irreducible, and thus the theorem applies. In particular, the adjacency matrix of a strongly connected graph is irreducible. - -The theorem has a natural interpretation in the theory of finite Markov chains (where it is the matrix-theoretic equivalent of the convergence of an irreducible finite Markov chain to its stationary distribution, formulated in terms of the transition matrix of the chain; see, for example, the article on the subshift of finite type).
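The graph characterisation gives an immediate computational test. Below is a minimal sketch (plain breadth-first search; the example matrix is an illustrative 3-cycle, not taken from the original article) that checks irreducibility of a nonnegative matrix via strong connectivity of its underlying graph:

```python
# Irreducibility of a nonnegative matrix A == strong connectivity of the
# directed graph with an arc i -> j iff A[i, j] != 0.
import numpy as np
from collections import deque

def reachable(adj, start):
    """Vertices reachable from `start` in the boolean adjacency matrix `adj`."""
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in np.nonzero(adj[u])[0]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def is_irreducible(A):
    adj = A != 0
    n = len(A)
    # Strongly connected iff vertex 0 reaches everyone and everyone reaches 0.
    return len(reachable(adj, 0)) == n and len(reachable(adj.T, 0)) == n

A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])      # a 3-cycle: irreducible but not primitive
print(is_irreducible(A))      # True
```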
- -More generally, it can be extended to the case of non-negative compact operators, which, in many ways, resemble finite-dimensional matrices. These are commonly studied in physics, under the name of transfer operators, or sometimes Ruelle–Perron–Frobenius operators (after David Ruelle). In this case, the leading eigenvalue corresponds to the thermodynamic equilibrium of a dynamical system, and the lesser eigenvalues to the decay modes of a system that is not in equilibrium. Thus, the theory offers a way of discovering the arrow of time in what would otherwise appear to be reversible, deterministic dynamical processes, when examined from the point of view of point-set topology. - -A common thread in many proofs is the Brouwer fixed point theorem. Another popular method is that of Wielandt (1950). He used the Collatz–Wielandt formula described above to extend and clarify Frobenius's work. Another proof is based on spectral theory. - -Suppose v is a strictly positive eigenvector corresponding to r and w is another eigenvector with the same eigenvalue. (The vectors v and w can be chosen to be real, because A and r are both real, so the null space of A − rI has a basis consisting of real vectors.) Assume at least one of the components of w is positive (otherwise multiply w by −1). Take the maximal possible α such that u = v − αw is non-negative; then one of the components of u is zero, otherwise α is not maximal. The vector u is an eigenvector. It is non-negative, hence by the lemma described in the previous section non-negativity implies strict positivity for any eigenvector. On the other hand, as above, at least one component of u is zero. The contradiction implies that w does not exist. - -Case: There are no Jordan cells corresponding to the Perron–Frobenius eigenvalue r and all other eigenvalues which have the same absolute value. - -If there is a Jordan cell, then the infinity norm of $(A/r)^k$ tends to infinity for k → ∞, but that contradicts the existence of the positive eigenvector. - -Assume r = 1 (otherwise consider A/r). Letting v be a Perron–Frobenius strictly positive eigenvector, so Av=v, then: -$$ - \|v\|_{\infty}= \|A^k v\|_{\infty} \ge \|A^k\|_{\infty} \min_i (v_i), ~~\Rightarrow~~ \|A^k\|_{\infty} \le \|v\|_{\infty}/\min_i (v_i) -$$ - -So $A^k$ is bounded for all k. This gives another proof that there are no eigenvalues which have greater absolute value than the Perron–Frobenius one. It also contradicts the existence of a Jordan cell for any eigenvalue which has absolute value equal to 1 (in particular for the Perron–Frobenius one), because existence of the Jordan cell implies that $A^k$ is unbounded. For a two by two matrix: - -J^k= \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix} ^k - -= - -\begin{pmatrix} \lambda^k & k\lambda^{k-1} \\ 0 & \lambda^k \end{pmatrix}, - -hence $\|J^k\|_{\infty} = k + |\lambda|$ (for |λ| = 1), so it tends to infinity when k does so. Since $J^k = C^{-1} A^k C$ for some invertible matrix C, $\|A^k\| \ge \|J^k\| / (\|C^{-1}\| \|C\|)$, so it also tends to infinity. The resulting contradiction implies that there are no Jordan cells for the corresponding eigenvalues. - -Combining the two claims above reveals that the Perron–Frobenius eigenvalue r is a simple root of the characteristic polynomial. In the case of nonprimitive matrices, there exist other eigenvalues which have the same absolute value as r. The same claim is true for them, but requires more work. - -Given a positive (or more generally irreducible non-negative) matrix A, the Perron–Frobenius eigenvector is the only (up to multiplication by constant) non-negative eigenvector for A.
- -Other eigenvectors must contain negative or complex components: eigenvectors for different eigenvalues are orthogonal in some sense, and two positive eigenvectors cannot be orthogonal, so they would have to correspond to the same eigenvalue, but the eigenspace for the Perron–Frobenius eigenvalue is one-dimensional. - -Assume there exists an eigenpair (λ, y) for A such that the vector y is positive, and let (r, x) be the left Perron–Frobenius eigenpair, i.e. x is the Perron–Frobenius eigenvector for $A^T$. Then $r x^T y = (x^T A) y = x^T (Ay) = \lambda x^T y$; also $x^T y > 0$, so one has r = λ. Since the eigenspace for the Perron–Frobenius eigenvalue r is one-dimensional, the non-negative eigenvector y is a multiple of the Perron–Frobenius one. - -Given a positive (or more generally irreducible non-negative) matrix A, one defines the function f on the set of all non-negative non-zero vectors x such that f(x) is the minimum value of $[Ax]_i / x_i$ taken over all those i such that $x_i \ne 0$. Then f is a real-valued function, whose maximum is the Perron–Frobenius eigenvalue r. - -For the proof we denote the maximum of f by the value R. The proof requires showing R = r. Inserting the Perron–Frobenius eigenvector v into f, we obtain f(v) = r and conclude r ≤ R. For the opposite inequality, we consider an arbitrary non-negative non-zero vector x and let ξ=f(x). The definition of f gives 0 ≤ ξx ≤ Ax (componentwise). Now, we use the positive left eigenvector w for A (the right eigenvector for $A^T$) for the Perron–Frobenius eigenvalue r; then $\xi w^T x = w^T \xi x \le w^T (Ax) = (w^T A)x = r w^T x$. Hence f(x) = ξ ≤ r, which implies R ≤ r. - -Let A be a positive (or more generally, primitive) matrix, and let r be its Perron–Frobenius eigenvalue. - -# There exists a limit $A^k/r^k$ for k → ∞; denote it by P. - -# P is a projection operator: $P^2 = P$, which commutes with A: AP = PA. - -# The image of P is one-dimensional and spanned by the Perron–Frobenius eigenvector v (respectively, for $P^T$, by the Perron–Frobenius eigenvector w for $A^T$). - -# $P = vw^T$, where v, w are normalized such that $w^T v = 1$. - -# Hence P is a positive operator. - -Hence P is a spectral projection for the Perron–Frobenius eigenvalue r, and is called the Perron projection. The above assertion is not true for general non-negative irreducible matrices. - -Actually the claims above (except claim 5) are valid for any matrix M such that there exists an eigenvalue r which is strictly greater than the other eigenvalues in absolute value and is a simple root of the characteristic polynomial. (These requirements hold for primitive matrices as above.) - -Given that M is diagonalizable, M is conjugate to a diagonal matrix with eigenvalues $r_1, \dots, r_n$ on the diagonal (denote $r_1 = r$). The matrix $M^k/r^k$ will be conjugate to $\mathrm{diag}(1, (r_2/r)^k, \dots, (r_n/r)^k)$, which tends to $\mathrm{diag}(1,0,0,\dots,0)$ for k → ∞, so the limit exists. The same method works for general M (without assuming that M is diagonalizable). - -The projection and commutativity properties are elementary corollaries of the definition: $M \cdot M^k/r^k = M^k/r^k \cdot M$; $P^2 = \lim M^{2k}/r^{2k} = P$. The third fact is also elementary: $M(Pu) = M \lim M^k/r^k u = \lim r M^{k+1}/r^{k+1} u$, so taking the limit yields M(Pu) = r(Pu), so the image of P lies in the r-eigenspace for M, which is one-dimensional by the assumptions. - -Denote by v the r-eigenvector for M (and by w the one for $M^T$). The columns of P are multiples of v, because the image of P is spanned by it; respectively, the rows of P are multiples of $w^T$. So P takes the form $(a\, v w^T)$ for some a. Hence its trace equals $(a\, w^T v)$. The trace of a projector equals the dimension of its image.
It was proved before that it is not more than one-dimensional. From the definition one sees that P acts identically on the r-eigenvector for M. So it is one-dimensional. So choosing $(w^T v) = 1$ implies $P = vw^T$. - -For any non-negative matrix A its Perron–Frobenius eigenvalue r satisfies the inequality: -$$ - r \le \max_i \sum_j a_{ij}. -$$ - -This is not specific to non-negative matrices: for any matrix A with an eigenvalue $\scriptstyle\lambda$ it is true that $\scriptstyle |\lambda| \le \max_i \sum_j |a_{ij}|$. This is an immediate corollary of the Gershgorin circle theorem. However another proof is more direct: - -Any matrix induced norm satisfies the inequality $\scriptstyle\|A\| \ge |\lambda|$ for any eigenvalue $\scriptstyle\lambda$ because, if $\scriptstyle x$ is a corresponding eigenvector, $\scriptstyle\|A\| \ge |Ax|/|x| = |\lambda x|/|x| = |\lambda|$. The infinity norm of a matrix is the maximum of row sums: $\scriptstyle \left \| A \right \| _\infty = \max \limits _{1 \leq i \leq m} \sum _{j=1} ^n | a_{ij} |. $ Hence the desired inequality is exactly $\scriptstyle\|A\|_\infty \ge |\lambda|$ applied to the non-negative matrix A. - -Another inequality is: -$$ -\min_i \sum_j a_{ij} \le r . -$$ - -This fact is specific to non-negative matrices; for general matrices there is nothing similar. Given that A is positive (not just non-negative), there exists a positive eigenvector w such that Aw = rw and the smallest component of w (say $w_i$) is 1. Then $r = (Aw)_i \ge$ the sum of the numbers in row i of A. Thus the minimum row sum gives a lower bound for r, and this observation can be extended to all non-negative matrices by continuity. - -Another way to argue it is via the Collatz–Wielandt formula. One takes the vector x = (1, 1, ..., 1) and immediately obtains the inequality. - -The proof now proceeds using spectral decomposition. The trick here is to split the Perron root from the other eigenvalues. The spectral projection associated with the Perron root is called the Perron projection and it enjoys the following property: - -The Perron projection of an irreducible non-negative square matrix is a positive matrix. - -Perron's findings and also (1)–(5) of the theorem are corollaries of this result. The key point is that a positive projection always has rank one. This means that if A is an irreducible non-negative square matrix then the algebraic and geometric multiplicities of its Perron root are both one. Also if P is its Perron projection then AP = PA = ρ(A)P, so every column of P is a positive right eigenvector of A and every row is a positive left eigenvector. Moreover, if Ax = λx then PAx = λPx = ρ(A)Px, which means Px = 0 if λ ≠ ρ(A). Thus the only positive eigenvectors are those associated with ρ(A). If A is a primitive matrix with ρ(A) = 1 then it can be decomposed as P ⊕ (1 − P)A so that $A^n = P + (1 - P)A^n$. As n increases the second of these terms decays to zero, leaving P as the limit of $A^n$ as n → ∞. - -The power method is a convenient way to compute the Perron projection of a primitive matrix. If v and w are the positive row and column vectors that it generates then the Perron projection is just wv/vw. The spectral projections aren't neatly blocked as in the Jordan form. Here they are overlaid and each generally has complex entries extending to all four corners of the square matrix. Nevertheless, they retain their mutual orthogonality, which is what facilitates the decomposition. - -The analysis when A is irreducible and non-negative is broadly similar.
The Perron projection is still positive but there may now be other eigenvalues of modulus ρ(A) that negate use of the power method and prevent the powers of (1 − P)A decaying as in the primitive case whenever ρ(A) = 1. So we consider the peripheral projection, which is the spectral projection of A corresponding to all the eigenvalues that have modulus ρ(A). It may then be shown that the peripheral projection of an irreducible non-negative square matrix is a non-negative matrix with a positive diagonal. - -Suppose in addition that ρ(A) = 1 and A has h eigenvalues on the unit circle. If P is the peripheral projection then the matrix R = AP = PA is non-negative and irreducible, $R^h = P$, and the cyclic group $P, R, R^2, \dots, R^{h-1}$ represents the harmonics of A. The spectral projection of A at the eigenvalue λ on the unit circle is given by the formula $\scriptstyle h^{-1}\sum^h_1\lambda^{-k}R^k$. All of these projections (including the Perron projection) have the same positive diagonal, moreover choosing any one of them and then taking the modulus of every entry invariably yields the Perron projection. Some donkey work is still needed in order to establish the cyclic properties (6)–(8) but it's essentially just a matter of turning the handle. The spectral decomposition of A is given by A = R ⊕ (1 − P)A, so the difference between $A^n$ and $R^n$ is $A^n - R^n = (1 - P)A^n$, representing the transients of $A^n$ which eventually decay to zero. P may be computed as the limit of $A^{nh}$ as n → ∞. - -The matrices L = \left( - -\begin{smallmatrix} - -1 & 0 & 0 \\ - -1 & 0 & 0 \\ - -1 & 1 & 1 - -\end{smallmatrix} - -\right), P = \left( - -\begin{smallmatrix} - -1 & 0 & 0 \\ - -1 & 0 & 0 \\ - -\!\!\!-1 & 1 & 1 - -\end{smallmatrix} - -\right), T = \left( - -\begin{smallmatrix} - -0 & 1 & 1 \\ - -1 & 0 & 1 \\ - -1 & 1 & 0 - -\end{smallmatrix} - -\right), M = \left( - -\begin{smallmatrix} - -0 & 1 & 0 & 0 & 0 \\ - -1 & 0 & 0 & 0 & 0 \\ - -0 & 0 & 0 & 1 & 0 \\ - -0 & 0 & 0 & 0 & 1 \\ - -0 & 0 & 1 & 0 & 0 - -\end{smallmatrix} - -\right) provide simple examples of what can go wrong if the necessary conditions are not met. It is easily seen that the Perron and peripheral projections of L are both equal to P; thus when the original matrix is reducible the projections may lose non-negativity and there is no chance of expressing them as limits of its powers. The matrix T is an example of a primitive matrix with zero diagonal. If the diagonal of an irreducible non-negative square matrix is non-zero then the matrix must be primitive, but this example demonstrates that the converse is false. M is an example of a matrix with several missing spectral teeth. If $\omega = e^{i\pi/3}$ then $\omega^6 = 1$ and the eigenvalues of M are $\{1, \omega^2, \omega^3, \omega^4\}$, so $\omega$ and $\omega^5$ are both absent. - -A problem that causes confusion is a lack of standardisation in the definitions. For example, some authors use the terms strictly positive and positive to mean > 0 and ≥ 0 respectively. In this article positive means > 0 and non-negative means ≥ 0. Another vexed area concerns decomposability and reducibility: irreducible is an overloaded term. For avoidance of doubt a non-zero non-negative square matrix A such that 1 + A is primitive is sometimes said to be connected. Then irreducible non-negative square matrices and connected matrices are synonymous. - -The nonnegative eigenvector is often normalized so that the sum of its components is equal to unity; in this case, the eigenvector is the vector of a probability distribution and is sometimes called a stochastic eigenvector.
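Because the Perron root of a primitive matrix strictly dominates the rest of the spectrum, the power method mentioned earlier converges for such matrices. A minimal sketch (the matrix and the fixed iteration count are illustrative assumptions), normalising the iterate into a stochastic eigenvector in the sense just described:

```python
# Power method for the Perron root and a stochastic Perron eigenvector
# of a primitive (here, strictly positive) matrix.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])      # positive, hence primitive (illustrative)

x = np.ones(A.shape[0])
for _ in range(200):            # fixed iteration count, enough to converge here
    x = A @ x
    x /= x.sum()                # keep components summing to 1

r = (A @ x)[0] / x[0]           # componentwise ratio recovers the Perron root
print("Perron root:", r)        # ~3.618..., i.e. (5 + sqrt(5)) / 2
print("stochastic Perron eigenvector:", x)
```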
- -Perron–Frobenius eigenvalue and dominant eigenvalue are alternative names for the Perron root. Spectral projections are also known as spectral projectors and spectral idempotents. The period is sometimes referred to as the index of imprimitivity or the order of cyclicity. diff --git a/wiki/wikipedia/2261.txt b/wiki/wikipedia/2261.txt deleted file mode 100644 index 2b6066af7507f7c49d8f423e0a78a98db38574df..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2261.txt +++ /dev/null @@ -1,5 +0,0 @@ -In differential geometry, the fundamental theorem of space curves states that every regular curve in three-dimensional space, with non-zero curvature, has its shape (and size or scale) completely determined by its curvature and torsion. - -A curve can be described, and thereby defined, by a pair of scalar fields: curvature $\kappa$ and torsion $\tau$, both of which depend on some parameter which parametrizes the curve but which can ideally be the arc length of the curve. From just the curvature and torsion, the vector fields for the tangent, normal, and binormal vectors can be derived using the Frenet–Serret formulas. Then, integration of the tangent field (done numerically, if not analytically) yields the curve. - -If a pair of curves are in different positions but have the same curvature and torsion, then they are congruent to each other. diff --git a/wiki/wikipedia/2262.txt b/wiki/wikipedia/2262.txt deleted file mode 100644 index 13c79c65b4afb1d4b10da5348a3722d112fa04b3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2262.txt +++ /dev/null @@ -1 +0,0 @@ -In differential geometry, the Willmore conjecture is a lower bound on the Willmore energy of a torus. It is named after the English mathematician Tom Willmore, who conjectured it in 1965. A proof by Fernando Codá Marques and André Neves was announced in 2012 and published in 2014. Martin Schmidt claimed a proof in 2002, but it was not accepted for publication in any peer-reviewed mathematical journal (although it did not contain a proof of the Willmore conjecture, he proved some other important conjectures in it). Prior to the proof of Marques and Neves, the Willmore conjecture had already been proved for many special cases, such as tube tori (by Willmore himself), and for tori of revolution (by Langer & Singer). diff --git a/wiki/wikipedia/2263.txt b/wiki/wikipedia/2263.txt deleted file mode 100644 index f4cf9fd95dfe213832cf341d6e06e8972e1292ef..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2263.txt +++ /dev/null @@ -1,300 +0,0 @@ -Faà di Bruno's formula is an identity in mathematics generalizing the chain rule to higher derivatives. It is named after Francesco Faà di Bruno, although he was not the first to state or prove the formula. In 1800, more than 50 years before Faà di Bruno, the French mathematician Louis François Antoine Arbogast had stated the formula in a calculus textbook, which is considered to be the first published reference on the subject. - -Perhaps the most well-known form of Faà di Bruno's formula says that - -{d^n \over dx^n} f(g(x))=\sum \frac{n!}{m_1!1!^{m_1}m_2!2!^{m_2}\cdots m_n!n!^{m_n}}\cdot f^{(m_1+\cdots+m_n)}(g(x))\cdot \prod_{j=1}^n\left(g^{(j)}(x)\right)^{m_j}, - -where the sum is over all n-tuples of nonnegative integers (m1, ..., mn) satisfying the constraint - -1\cdot m_1+2\cdot m_2+3\cdot m_3+\cdots+n\cdot m_n=n.
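The tuples and coefficients in this sum are easy to enumerate mechanically; a small sketch (plain Python, standard library only; the helper name `faa_di_bruno_coefficients` is just illustrative) that lists them for n = 4:

```
from math import factorial
from itertools import product

def faa_di_bruno_coefficients(n):
    """Yield ((m_1, ..., m_n), coefficient) for all tuples with 1*m_1 + ... + n*m_n = n."""
    for m in product(*(range(n // j + 1) for j in range(1, n + 1))):
        if sum(j * mj for j, mj in enumerate(m, start=1)) == n:
            den = 1
            for j, mj in enumerate(m, start=1):
                den *= factorial(mj) * factorial(j) ** mj
            yield m, factorial(n) // den

for m, c in faa_di_bruno_coefficients(4):
    print(m, c)
```

For n = 4 this prints the five coefficients 1, 6, 3, 4, 1, which sum to 15, the number of partitions of a four-element set; this matches the combinatorial interpretation discussed below.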
- -Sometimes, to give it a memorable pattern, it is written in a way in which the coefficients that have the combinatorial interpretation discussed below are less explicit: - -{d^n \over dx^n} f(g(x)) - -=\sum \frac{n!}{m_1!m_2!\cdots m_n!}\cdot - -f^{(m_1+\cdots+m_n)}(g(x))\cdot - -\prod_{j=1}^n\left(\frac{g^{(j)}(x)}{j!}\right)^{m_j}. - -Combining the terms with the same value of m1 + m2 + ... + mn = k and noticing that mj has to be zero for j > n - k + 1 leads to a somewhat simpler formula expressed in terms of Bell polynomials Bn,k(x1,...,xn-k+1): -$$ -{d^n \over dx^n} f(g(x)) = \sum_{k=1}^n f^{(k)}(g(x))\cdot B_{n,k}\left(g'(x),g''(x),\dots,g^{(n-k+1)}(x)\right). -$$ - -The formula has a "combinatorial" form: -$$ -{d^n \over dx^n} f(g(x))=(f\circ g)^{(n)}(x)=\sum_{\pi\in\Pi} f^{(\left|\pi\right|)}(g(x))\cdot\prod_{B\in\pi}g^{(\left|B\right|)}(x) -$$ - -where - -* pi runs through the set Π of all partitions of the set { 1, ..., n }, - -*"B ∈ pi" means the variable B runs through the list of all of the "blocks" of the partition pi, and - -*|A| denotes the cardinality of the set A (so that |pi| is the number of blocks in the partition pi and |B| is the size of the block B). - -The following is a concrete explanation of the combinatorial form for the n = 4 case. - - - -\begin{align} - -(f\circ g)''''(x) - -= {} & f''''(g(x))g'(x)^4 - -+ 6f'''(g(x))g''(x)g'(x)^2 \\[8pt] - -& {} + 3f''(g(x))g''(x)^2 - -+ 4f''(g(x))g'''(x)g'(x) \\[8pt] - -& {} + f'(g(x))g''''(x). - -\end{align} - - - -The pattern is: - - - -\begin{array}{cccccc} - -g'(x)^4 - -& & \leftrightarrow & & 1+1+1+1 - -& & \leftrightarrow & & f''''(g(x)) - -& & \leftrightarrow & & 1 - -\\[12pt] - -g''(x)g'(x)^2 - -& & \leftrightarrow & & 2+1+1 - -& & \leftrightarrow & & f'''(g(x)) - -& & \leftrightarrow & & 6 - -\\[12pt] - -g''(x)^2 - -& & \leftrightarrow & & 2+2 - -& & \leftrightarrow & & f''(g(x)) - -& & \leftrightarrow & & 3 - -\\[12pt] - -g'''(x)g'(x) - -& & \leftrightarrow & & 3+1 - -& & \leftrightarrow & & f''(g(x)) - -& & \leftrightarrow & & 4 - -\\[12pt] - -g''''(x) - -& & \leftrightarrow & & 4 - -& & \leftrightarrow & & f'(g(x)) - -& & \leftrightarrow & & 1 - -\end{array} - - - -The factor $g''(x)g'(x)^2 $ corresponds to the partition 2 + 1 + 1 of the integer 4, in the obvious way. The factor $f'''(g(x))$ that goes with it corresponds to the fact that there are three summands in that partition. The coefficient 6 that goes with those factors corresponds to the fact that there are exactly six partitions of a set of four members that break it into one part of size 2 and two parts of size 1. - -Similarly, the factor $g''(x)^2 $ in the third line corresponds to the partition 2 + 2 of the integer 4, (4, because we are finding the fourth derivative), while $f''(g(x)) $ corresponds to the fact that there are two summands (2 + 2) in that partition. The coefficient 3 corresponds to the fact that there are $\tfrac{1}{2}\tbinom{4}{2}=3$ ways of partitioning 4 objects into groups of 2. The same concept applies to the others.
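The counting claims in this example can be checked directly by generating all 15 partitions of {1, 2, 3, 4} and grouping them by block-size profile; a plain-Python sketch (the recursive generator is an illustrative helper, not part of the source text):

```
from collections import Counter

def set_partitions(elements):
    """Recursively generate all partitions of a list into blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        # Put `first` into each existing block ...
        for i in range(len(partition)):
            yield partition[:i] + [[first] + partition[i]] + partition[i + 1:]
        # ... or into a new block of its own.
        yield [[first]] + partition

profile = Counter(
    tuple(sorted(len(block) for block in p)) for p in set_partitions([1, 2, 3, 4])
)
print(profile)
# Counter({(1, 1, 2): 6, (1, 3): 4, (2, 2): 3, (1, 1, 1, 1): 1, (4,): 1})
```

The six partitions of shape 2 + 1 + 1, four of shape 3 + 1, and three of shape 2 + 2 reproduce exactly the coefficients 6, 4, and 3 above.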
- -A memorizable scheme is as follows: - -\begin{align} & \frac{D^1(f\circ{}g)}{1!} & = \left(f^{(1)}\circ{}g\right)\frac{\frac{g^{(1)} }{1!} }{1!} \\[8pt] - -& \frac{D^2(f\circ g)}{2!} & = \left(f^{(1)}\circ{}g\right)\frac{\frac{g^{(2)} }{2!} }{1!} & {} + \left(f^{(2)}\circ{}g\right)\frac{\frac{g^{(1)} }{1!}\frac{g^{(1)} }{1!} }{2!} \\[8pt] - -& \frac{D^3(f\circ g)}{3!} & = \left(f^{(1)}\circ{}g\right)\frac{\frac{g^{(3)} }{3!} }{1!} & {} + \left(f^{(2)}\circ{}g\right)\frac{\frac{g^{(1)} }{1!} }{1!}\frac{\frac{g^{(2)} }{2!} }{1!} & {} + \left(f^{(3)}\circ{}g\right)\frac{\frac{g^{(1)} }{1!}\frac{g^{(1)} }{1!}\frac{g^{(1)} }{1!} }{3!} \\[8pt] - -& \frac{D^4(f\circ g)}{4!} & = \left(f^{(1)}\circ{}g\right)\frac{\frac{g^{(4)} }{4!} }{1!} & {} + \left(f^{(2)}\circ{}g\right)\left(\frac{\frac{g^{(1)} }{1!} }{1!}\frac{\frac{g^{(3)} }{3!} }{1!}+\frac{\frac{g^{(2)} }{2!}\frac{g^{(2)} }{2!} }{2!}\right) & {} + \left(f^{(3)}\circ{}g\right)\frac{\frac{g^{(1)} }{1!}\frac{g^{(1)} }{1!} }{2!}\frac{\frac{g^{(2)} }{2!} }{1!} & {} + \left(f^{(4)}\circ{}g\right)\frac{\frac{g^{(1)} }{1!}\frac{g^{(1)} }{1!}\frac{g^{(1)} }{1!}\frac{g^{(1)} }{1!} }{4!} - -\end{align} - -These partition-counting Faà di Bruno coefficients have a "closed-form" expression. The number of partitions of a set of size n corresponding to the integer partition - -\displaystyle n=\underbrace{1+\cdots+1}_{m_1} - -+ \underbrace{2+\cdots+2}_{m_2} - -+ \underbrace{3+\cdots+3}_{m_3}+\cdots - -of the integer n is equal to -$$ -\frac{n!}{m_1!m_2!m_3!\cdots 1!^{m_1}2!^{m_2}3!^{m_3}\cdots}. -$$ - -These coefficients also arise in the Bell polynomials, which are relevant to the study of cumulants. - -Let y = g(x1, ..., xn). Then the following identity holds regardless of whether the n variables are all distinct, or all identical, or partitioned into several distinguishable classes of indistinguishable variables (if it seems opaque, see the very concrete example below): - -{\partial^n \over \partial x_1 \cdots \partial x_n}f(y) - -= \sum_{\pi\in\Pi} f^{(\left|\pi\right|)}(y)\cdot\prod_{B\in\pi} - -{\partial^{\left|B\right|}y \over \prod_{j\in B} \partial x_j} - -where (as above) - -* pi runs through the set Π of all partitions of the set { 1, ..., n }, - -*"B ∈ pi" means the variable B runs through the list of all of the "blocks" of the partition pi, and - -*|A| denotes the cardinality of the set A (so that |pi| is the number of blocks in the partition pi and |B| is the size of the block B). - -More general versions hold for cases where all the functions are vector- and even Banach-space-valued. In this case one needs to consider the Fréchet derivative or Gateaux derivative. - -; Example - -The five terms in the following expression correspond in the obvious way to the five partitions of the set { 1, 2, 3 }, and in each case the order of the derivative of f is the number of parts in the partition: - - - -\begin{align} - -{\partial^3 \over \partial x_1 \partial x_2 \partial x_3}f(y) - -= {} & f'(y){\partial^3 y \over \partial x_1 \partial x_2 \partial x_3} \\[10pt] - -& {} + f''(y) \left( {\partial y \over \partial x_1} - -\cdot{\partial^2 y \over \partial x_2 \partial x_3} - -+{\partial y \over \partial x_2} - -\cdot{\partial^2 y \over \partial x_1 \partial x_3} - -+ {\partial y \over \partial x_3} - -\cdot{\partial^2 y \over \partial x_1 \partial x_2}\right) \\[10pt] - -& {} + f'''(y) {\partial y \over \partial x_1} - -\cdot{\partial y \over \partial x_2} - -\cdot{\partial y \over \partial x_3}.
- -\end{align} - - - -If the three variables are indistinguishable from each other, then three of the five terms above are also indistinguishable from each other, and then we have the classic one-variable formula. - -===Formal power series version=== - -Suppose $f(x)=\sum_{n=0}^\infty {a_n} x^n$ - -and $g(x)=\sum_{n=0}^\infty {b_n} x^n$ - -are formal power series and $b_0 = 0$. - -Then the composition $f \circ g$ is again a formal power series, -$$ -f(g(x))=\sum_{n=0}^\infty{c_n}x^n, -$$ - -where $c_0 = a_0$ and the other coefficients $c_n$ for n ≥ 1 - -can be expressed as a sum over compositions of n or as an equivalent sum over partitions of n: -$$ -c_{n} = \sum_{\mathbf{i}\in \mathcal{C}_{n}} a_{k} b_{i_{1}} b_{i_{2}} \cdots b_{i_{k}}, -$$ - -where -$$ -\mathcal{C}_{n}=\{(i_1,i_2,\dots,i_k):\ 1 \le k \le n,\ i_1+i_2+ \cdots + i_k=n\} -$$ - -is the set of compositions of n with k denoting the number of parts, - -or -$$ -c_{n} = \sum_{k=1}^{n} a_{k} \sum_{\mathbf{\pi}\in \mathcal{P}_{n,k}} \binom{k}{\pi_{1},\pi_{2}, ..., \pi_{n}} b_{1}^{\pi_{1}} b_{2}^{\pi_{2}}\cdots b_{n}^{\pi_{n}}, -$$ - -where -$$ -\mathcal{P}_{n,k}=\{(\pi_1,\pi_2,\dots,\pi_n):\ \pi_1+\pi_2+ \cdots + \pi_n=k,\ \pi_{1}\cdot 1+\pi_{2}\cdot 2+ \cdots + \pi_{n}\cdot n = n \} -$$ - -is the set of partitions of n into k parts, in frequency-of-parts form. - -The first form is obtained by picking out the coefficient of $x^n$ - -in $(b_{1}x+b_{2}x^2+ \cdots)^{k} $ "by inspection", and the second form - -is then obtained by collecting like terms, or alternatively, by applying the multinomial theorem. - -The special case $f(x) = e^x$, $g(x) = \sum_{n \ge 1} \frac{a_n}{n!} x^n$ gives the exponential formula. - -The special case $f(x) = \frac{1}{1-x}$, $g(x) = \sum_{n \ge 1} (-a_n) x^n$ gives an expression for the reciprocal of the formal power series $\sum_{n \ge 0} a_n x^n$ in the case $a_0 = 1$. - -Stanley gives a version for exponential power series. - -In the formal power series -$$ -f(x)=\sum_n {\frac{a_n}{n!}}x^n, -$$ - -we have the nth derivative at 0: -$$ -f^{(n)}(0)=a_n. -$$ - -This should not be construed as the value of a function, since these series are purely formal; there is no such thing as convergence or divergence in this context. - -If -$$ -g(x)=\sum_{n=0}^\infty {\frac{b_n}{n!}} x^n -$$ - -and -$$ -f(x)=\sum_{n=1}^\infty {\frac{a_n}{n!}} x^n -$$ - -and -$$ -g(f(x))=h(x)=\sum_{n=0}^\infty{\frac{c_n}{n!}}x^n, -$$ - -then the coefficient $c_n$ (which would be the nth derivative of h evaluated at 0 if we were dealing with convergent series rather than formal power series) is given by -$$ -c_n=\sum_{\pi=\left\{B_1,\ldots,B_k\right\}} a_{\left|B_1\right|}\cdots a_{\left|B_k\right|} b_k -$$ - -where pi runs through the set of all partitions of the set {1, ..., n} and B1, ..., Bk are the blocks of the partition pi, and | Bj | is the number of members of the jth block, for j = 1, ..., k. - -This version of the formula is particularly well suited to the purposes of combinatorics. - -We can also write with respect to the notation above -$$ -g(f(x)) = b_0+ \sum_{n=1}^\infty \frac{\sum_{k=1}^n b_k B_{n,k}(a_1,\ldots,a_{n-k+1})}{n!} x^n, -$$ - -where Bn,k(a1,...,an-k+1) are Bell polynomials. - -If $f(x) = e^x$, then all of the derivatives of f are the same and are a factor common to every term. In case g(x) is a cumulant-generating function, then f(g(x)) is a moment-generating function, and the polynomial in various derivatives of g is the polynomial that expresses the moments as functions of the cumulants.
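This last relation can be checked symbolically; a sketch assuming SymPy is available, taking the outer function to be exp (so, in the notation of the display above, every b_k = 1, and the c_n are the "moments" produced from the "cumulants" a_n):

```
import sympy as sp

n = 4
a = sp.symbols('a1:6')   # a1, ..., a5: coefficients of the inner exponential series
x = sp.symbols('x')

# Inner series (truncated): sum a_m x^m / m!, with outer function exp.
inner = sum(a[m] * x**(m + 1) / sp.factorial(m + 1) for m in range(n))
h = sp.series(sp.exp(inner), x, 0, n + 1).removeO()

for m in range(1, n + 1):
    direct = sp.factorial(m) * h.coeff(x, m)          # c_m read off the composition
    via_bell = sum(sp.bell(m, k, a[:m - k + 1]) for k in range(1, m + 1))
    assert sp.expand(direct - via_bell) == 0
print("c_n = sum_k B_{n,k}(a_1, ...) verified for n <= 4")
```

For n = 4 this reproduces the classical expression of the fourth moment in terms of cumulants, a1^4 + 6 a1^2 a2 + 4 a1 a3 + 3 a2^2 + a4.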
diff --git a/wiki/wikipedia/2264.txt b/wiki/wikipedia/2264.txt deleted file mode 100644 index 2ebb79259c02f1ea0c9d007b63bf763205c62f5f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2264.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, the Artin–Zorn theorem, named after Emil Artin and Max Zorn, states that any finite alternative division ring is necessarily a finite field. It was first published in 1930 by Zorn, but in his publication Zorn credited it to Artin. - -The Artin–Zorn theorem is a generalization of the Wedderburn theorem, which states that finite associative division rings are fields. As a geometric consequence, every finite Moufang plane is the classical projective plane over a finite field. diff --git a/wiki/wikipedia/2265.txt b/wiki/wikipedia/2265.txt deleted file mode 100644 index f35a2ede49d2cf6600b44e12c959b9618af448b0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2265.txt +++ /dev/null @@ -1,10 +0,0 @@ -In mathematics, the Fulton–Hansen connectedness theorem is a result from intersection theory in algebraic geometry, for the case of subvarieties of projective space with codimension large enough to make the intersection have components of dimension at least 1. It is named after William Fulton and Johan Hansen, who proved it in 1979. - -The formal statement is that if V and W are irreducible algebraic subvarieties of a projective space P, all over an algebraically closed field, and if -$$ -\dim(V) + \dim (W) > \dim (P) -$$ - -in terms of the dimension of an algebraic variety, then the intersection U of V and W is connected. - -More generally, the theorem states that if $Z$ is a projective variety and $f\colon Z \to P^n \times P^n$ is any morphism such that $\dim f(Z) > n$, then $f^{-1}\Delta$ is connected, where $\Delta$ is the diagonal in $P^n \times P^n$. The special case of intersections is recovered by taking $Z = V \times W$, with $f$ the natural inclusion. diff --git a/wiki/wikipedia/2266.txt b/wiki/wikipedia/2266.txt deleted file mode 100644 index 1519bb82e6b713258dd8dd1efcb47c47f10bdccf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2266.txt +++ /dev/null @@ -1,155 +0,0 @@ -In algebra, the Leibniz formula, named in honor of Gottfried Leibniz, expresses the determinant of a square matrix in terms of permutations of the matrix elements. If $A$ is an $n \times n$ matrix, where $a_{ij}$ is the entry in the $i$-th row and $j$-th column of $A$, the formula is -$$ -\det(A) = \sum_{\tau \in S_n} \sgn(\tau) \prod_{i = 1}^n a_{i, \tau(i)} = \sum_{\sigma \in S_n} \sgn(\sigma) \prod_{i = 1}^n a_{\sigma(i), i} -$$ - -where $\sgn$ is the sign function of permutations in the permutation group $S_n$, which returns $+1$ and $-1$ for even and odd permutations, respectively. - -Another common notation used for the formula is in terms of the Levi-Civita symbol and makes use of the Einstein summation notation, where it becomes -$$ -\det(A) = \epsilon_{i_1\cdots i_n} {a}_{1i_1} \cdots {a}_{ni_n}, -$$ - -which may be more familiar to physicists. - -Directly evaluating the Leibniz formula from the definition requires $\Omega(n! \cdot n)$ operations in general—that is, a number of operations asymptotically proportional to $n$ factorial—because $n!$ is the number of order-$n$ permutations. This is impractically difficult for even relatively small $n$. 
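To make the contrast with practical methods concrete before turning to them, a brief sketch (assuming NumPy; the matrix is a random sample) that evaluates the Leibniz sum directly and compares it with a library determinant for n = 6:

```
import numpy as np
from itertools import permutations

def sign(perm):
    """Sign of a permutation (given as a tuple), via inversion counting."""
    inv = sum(perm[i] > perm[j]
              for i in range(len(perm)) for j in range(i + 1, len(perm)))
    return -1 if inv % 2 else 1

def leibniz_det(A):
    """Determinant straight from the Leibniz formula: a sum over all n! permutations."""
    n = len(A)
    return sum(sign(p) * np.prod([A[i][p[i]] for i in range(n)])
               for p in permutations(range(n)))

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
# The two values agree, but the left-hand computation already sums 6! = 720 terms.
print(leibniz_det(A), np.linalg.det(A))
```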
Instead, the determinant can be evaluated in $O(n^3)$ operations by forming the LU decomposition $A = LU$ (typically via Gaussian elimination or similar methods), in which case $\det A = \det L \cdot \det U$ and the determinants of the triangular matrices $L$ and $U$ are simply the products of their diagonal entries. (In practical applications of numerical linear algebra, however, explicit computation of the determinant is rarely required.) See, for example, Trefethen. The determinant can also be evaluated in fewer than $O(n^3)$ operations by reducing the problem to matrix multiplication, but most such algorithms are not practical. - -Theorem. - -There exists exactly one function $ F : M_n (\mathbb K) \rightarrow \mathbb K$ which is alternating multilinear w.r.t. columns and such that $F(I) = 1$. - -Proof. - -Uniqueness: Let $F$ be such a function, and let $A = (a_i^j)_{i = 1, \dots, n}^{j = 1, \dots , n}$ be an $n \times n$ matrix. Call $A^j$ the $j$-th column of $A$, i.e. $A^j = (a_i^j)_{i = 1, \dots , n}$, so that $A = \left(A^1, \dots, A^n\right).$ - -Also, let $E^k$ denote the $k$-th column vector of the identity matrix. - -Now one writes each of the $A^j$'s in terms of the $E^k$, i.e. -$$ -A^j = \sum_{k = 1}^n a_k^j E^k -$$. - -As $F$ is multilinear, one has - - - -\begin{align} - -F(A)& = F\left(\sum_{k_1 = 1}^n a_{k_1}^1 E^{k_1}, \dots, \sum_{k_n = 1}^n a_{k_n}^n E^{k_n}\right) = \sum_{k_1, \dots, k_n = 1}^n \left(\prod_{i = 1}^n a_{k_i}^i\right) F\left(E^{k_1}, \dots, E^{k_n}\right). - -\end{align} - - - -From alternation it follows that any term with repeated indices is zero. The sum can therefore be restricted to tuples with non-repeating indices, i.e. permutations: -$$ -F(A) = \sum_{\sigma \in S_n} \left(\prod_{i = 1}^n a_{\sigma(i)}^i\right) F(E^{\sigma(1)}, \dots , E^{\sigma(n)}). -$$ - -Because F is alternating, the columns $E$ can be swapped until it becomes the identity. The sign function $\sgn(\sigma)$ is defined to count the number of swaps necessary and account for the resulting sign change. One finally gets: - - - -\begin{align} - -F(A)& = \sum_{\sigma \in S_n} \sgn(\sigma) \left(\prod_{i = 1}^n a_{\sigma(i)}^i\right) F(I)\\ - -& = \sum_{\sigma \in S_n} \sgn(\sigma) \prod_{i = 1}^n a_{\sigma(i)}^i - -\end{align} - - - -as $F(I)$ is required to be equal to $1$. - -Therefore no function besides the function defined by the Leibniz Formula can be a multilinear alternating function with $F\left(I\right)=1$. - -Existence: We now show that F, where F is the function defined by the Leibniz formula, has these three properties. 
- -Multilinear: - - - -\begin{align} - -F(A^1, \dots, cA^j, \dots) & = \sum_{\sigma \in S_n} \sgn(\sigma) ca_{\sigma(j)}^j\prod_{i = 1, i \neq j}^n a_{\sigma(i)}^i\\ - -& = c \sum_{\sigma \in S_n} \sgn(\sigma) a_{\sigma(j)}^j\prod_{i = 1, i \neq j}^n a_{\sigma(i)}^i\\ - -&=c F(A^1, \dots, A^j, \dots)\\ - -\\ - -F(A^1, \dots, b+A^j, \dots) & = \sum_{\sigma \in S_n} \sgn(\sigma)\left(b_{\sigma(j)} + a_{\sigma(j)}^j\right)\prod_{i = 1, i \neq j}^n a_{\sigma(i)}^i\\ - -& = \sum_{\sigma \in S_n} \sgn(\sigma) - -\left( \left(b_{\sigma(j)}\prod_{i = 1, i \neq j}^n a_{\sigma(i)}^i\right) + \left(a_{\sigma(j)}^j\prod_{i = 1, i \neq j}^n a_{\sigma(i)}^i\right)\right)\\ - -& = \left(\sum_{\sigma \in S_n} \sgn(\sigma) b_{\sigma(j)}\prod_{i = 1, i \neq j}^n a_{\sigma(i)}^i\right) - -+ \left(\sum_{\sigma \in S_n} \sgn(\sigma) \prod_{i = 1}^n a_{\sigma(i)}^i\right)\\ - -&= F(A^1, \dots, b, \dots) + F(A^1, \dots, A^j, \dots)\\ - -\\ - -\end{align} - - - -Alternating: - - - -\begin{align} - -F(\dots, A^{j_1}, \dots, A^{j_2}, \dots) - -& = \sum_{\sigma \in S_n} \sgn(\sigma) \left(\prod_{i = 1, i \neq j_1, i\neq j_2}^n a_{\sigma(i)}^i\right) a_{\sigma(j_1)}^{j_1} a_{\sigma(j_2)}^{j_2}\\ - -\end{align} - - - -For any $\sigma \in S_n$ let $\sigma'$ be the permutation equal to $\sigma$ with the $j_1$ and $j_2$ indices switched. - - - -\begin{align} - -F(A) & = \sum_{\sigma\in S_{n},\sigma(j_{1})<\sigma(j_{2})}\left[\sgn(\sigma)\left(\prod_{i = 1, i \neq j_1, i\neq j_2}^na_{\sigma(i)}^{i}\right)a_{\sigma(j_{1})}^{j_{1}}a_{\sigma(j_{2})}^{j_{2}}+\sgn(\sigma')\left(\prod_{i = 1, i \neq j_1, i\neq j_2}^na_{\sigma'(i)}^{i}\right)a_{\sigma'(j_{1})}^{j_{1}}a_{\sigma'(j_{2})}^{j_{2}}\right]\\ - -& =\sum_{\sigma\in S_{n},\sigma(j_{1})<\sigma(j_{2})}\left[\sgn(\sigma)\left(\prod_{i = 1, i \neq j_1, i\neq j_2}^na_{\sigma(i)}^{i}\right)a_{\sigma(j_{1})}^{j_{1}}a_{\sigma(j_{2})}^{j_{2}}-\sgn(\sigma)\left(\prod_{i = 1, i \neq j_1, i\neq j_2}^na_{\sigma(i)}^{i}\right)a_{\sigma(j_{2})}^{j_{1}}a_{\sigma(j_{1})}^{j_{2}}\right]\\ - -& =\sum_{\sigma\in S_{n},\sigma(j_{1})<\sigma(j_{2})}\sgn(\sigma)\left(\prod_{i = 1, i \neq j_1, i\neq j_2}^na_{\sigma(i)}^{i}\right)\underbrace{\left(a_{\sigma(j_{1})}^{j_{1}}a_{\sigma(j_{2})}^{j_{2}}-a_{\sigma(j_{1})}^{j_{2}}a_{\sigma(j_{2})}^{j_{1}}\right)}_{=0\text{, if }A^{j_1}=A^{j_2}}\\ - -\\ - -\end{align} - - - -Thus if $A^{j_1} = A^{j_2}$ then $F(\dots, A^{j_1}, \dots, A^{j_2}, \dots)=0$. - -Finally, $F(I)=1$: - - - -\begin{align} - -F(I) & = \sum_{\sigma \in S_n} \sgn(\sigma) \prod_{i = 1}^n I^i_{\sigma(i)} - -= \sum_{\sigma \in S_n} \sgn(\sigma) \prod_{i = 1}^n \operatorname{\delta}_{i,\sigma(i)}\\ - -& = \sum_{\sigma \in S_n} \sgn(\sigma) \operatorname{\delta}_{\sigma,\operatorname{id}_{\{1\ldots n\}}} - -= \sgn(\operatorname{id}_{\{1\ldots n\}}) - -= 1 - -\end{align} - - - -Thus the function defined by the Leibniz formula is the unique alternating multilinear function with $F(I)=1$, and it does indeed have these three properties. Hence the determinant can be defined as the only function $ \det : M_n (\mathbb K) \rightarrow \mathbb K $ with these three properties.
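The three defining properties are easy to observe numerically; a minimal sketch assuming NumPy, with a random sample matrix:

```
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

# Linearity in a single column: scaling column 0 scales the determinant.
B = A.copy(); B[:, 0] *= 3.0
assert np.isclose(np.linalg.det(B), 3.0 * np.linalg.det(A))

# Alternating: swapping two columns flips the sign.
C = A.copy(); C[:, [0, 1]] = C[:, [1, 0]]
assert np.isclose(np.linalg.det(C), -np.linalg.det(A))

# Normalization: det(I) = 1.
assert np.isclose(np.linalg.det(np.eye(4)), 1.0)
print("multilinear, alternating, det(I) = 1: the three defining properties")
```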
diff --git a/wiki/wikipedia/2267.txt b/wiki/wikipedia/2267.txt deleted file mode 100644 index 7a51ad3934f7ae4f3014b62ae8835875f33e8396..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2267.txt +++ /dev/null @@ -1,35 +0,0 @@ -In mathematics, Cohn's theorem states that an nth-degree self-inversive polynomial $p(z)$ has as many roots in the open unit disk $D =\{z \in \mathbb{C}: |z|<1\}$ as the reciprocal polynomial of its derivative. Cohn's theorem is useful for studying the distribution of the roots of self-inversive and self-reciprocal polynomials in the complex plane. - -An nth-degree polynomial, -$$ -p(z) = p_0 + p_1 z + \cdots + p_n z^n -$$ - -is called self-inversive if there exists a fixed complex number $\omega$ of modulus 1 such that -$$ -p(z) = \omega p^*(z),\qquad \left(|\omega|=1\right), -$$ - -where -$$ -p^*(z)=z^n \bar{p}\left(\frac{1}{z}\right) =\bar{p}_n + \bar{p}_{n-1} z + \cdots + \bar{p}_0 z^n -$$ - -is the reciprocal polynomial associated with $p(z)$ and the bar means complex conjugation. Self-inversive polynomials have many interesting properties. For instance, their roots are all symmetric with respect to the unit circle and a polynomial whose roots are all on the unit circle is necessarily self-inversive. The coefficients of self-inversive polynomials satisfy the relations: -$$ -p_k = \omega \bar{p}_{n-k}, \qquad 0 \leqslant k \leqslant n. -$$ - -In the case where $\omega = 1, $ a self-inversive polynomial becomes a complex-reciprocal polynomial (also known as a self-conjugate polynomial). If its coefficients are real then it becomes a real self-reciprocal polynomial. - -The formal derivative of $p(z)$ is an (n − 1)th-degree polynomial given by -$$ -q(z) =p'(z) = p_1 + 2p_2 z + \cdots + n p_n z^{n-1}. -$$ - -Therefore, Cohn's theorem states that both $p(z)$ and the polynomial -$$ -q^*(z) =z^{n-1}\bar{q}\left(\frac{1}{z}\right) = z^{n-1} \bar{p}' \left(\frac{1}{z}\right) = n \bar{p}_n + (n-1)\bar{p}_{n-1} z + \cdots + \bar{p}_1 z^{n-1} -$$ - -have the same number of roots in $|z|<1.$ diff --git a/wiki/wikipedia/2268.txt b/wiki/wikipedia/2268.txt deleted file mode 100644 index f0847ce4de4d7909425c3827000f873c9500c40b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2268.txt +++ /dev/null @@ -1,113 +0,0 @@ -In mathematics, the second moment method is a technique used in probability theory and analysis to show that a random variable has positive probability of being positive. More generally, the "moment method" consists of bounding the probability that a random variable fluctuates far from its mean, by using its moments. - -The method is often quantitative, in that one can often deduce a lower bound on the probability that the random variable is larger than some constant times its expectation. The method involves comparing the second moment of random variables to the square of the first moment. - -The first moment method is a simple application of Markov's inequality for integer-valued variables. For a non-negative, integer-valued random variable X, we may want to prove that X = 0 with high probability. To obtain an upper bound for P(X > 0), and thus a lower bound for P(X = 0), we first note that since X takes only integer values, P(X > 0) = P(X ≥ 1). Since X is non-negative we can now apply Markov's inequality to obtain P(X ≥ 1) ≤ E[X]. Combining these we have P(X > 0) ≤ E[X]; the first moment method is simply the use of this inequality. - -In the other direction, E[X] being "large" does not directly imply that P(X = 0) is small.
However, we can often use the second moment to derive such a conclusion, using the Cauchy–Schwarz inequality. - -Theorem: If X ≥ 0 is a random variable with - -finite variance, then - - - -\operatorname{P}( X > 0 ) - -\ge \frac{(\operatorname{E}[X])^2}{\operatorname{E}[X^2]}. - - - -Proof: Using the Cauchy–Schwarz inequality, we have - - - -\operatorname{E}[X] = \operatorname{E}[ X \mathbf{1}_{\{ X > 0 \}} ] \le \operatorname{E}[X^2]^{1/2} \operatorname{P}( X > 0)^{1/2}. - - - -Solving for $\operatorname{P}( X > 0)$, the desired inequality then follows. ∎ - -The method can also be used on distributional limits of random variables. Furthermore, the estimate of the previous theorem can be refined by means of the so-called Paley–Zygmund inequality. Suppose that Xn is a sequence of non-negative real-valued random variables which converge in law to a random variable X. If there are finite positive constants c1, c2 such that -$$ -\operatorname{E} \left [X_n^2 \right ] \le c_1 \operatorname{E}[X_n]^2 -$$ -$$ -\operatorname{E} \left [X_n \right ]\ge c_2 -$$ - -hold for every n, then it follows from the Paley–Zygmund inequality that for every n and θ in (0, 1) -$$ - \operatorname{P} (X_n \geq c_2 \theta) \geq \frac{(1-\theta)^2}{c_1}. -$$ - -Consequently, the same inequality is satisfied by X. - -The Bernoulli bond percolation subgraph of a graph G at parameter p is a random subgraph obtained from G by deleting every edge of G with probability 1−p, independently. The infinite complete binary tree T is an infinite tree where one vertex (called the root) has two neighbors and every other vertex has three neighbors. The second moment method can be used to show that at every parameter p ∈ (1/2, 1] with positive probability the connected component of the root in the percolation subgraph of T is infinite. - -Let K be the percolation component of the root, and let Tn be the set of vertices of T that are at distance n from the root. Let Xn be the number of vertices in Tn ∩ K. To prove that K is infinite with positive probability, it is enough to show that $\limsup_{n\to\infty}1_{X_n>0}>0$ with positive probability. By the reverse Fatou lemma, it suffices to show that $\inf_n P(X_n>0)>0$. The Cauchy–Schwarz inequality gives -$$ -E[X_n]^2\le E[X_n^2] E\left [(1_{X_n>0})^2\right ]=E[X_n^2]P(X_n>0). -$$ - -Therefore, it is sufficient to show that -$$ - \inf_n \frac{E \left[ X_n \right ]^2}{E \left[ X_n^2 \right ]}>0, -$$ - -that is, that the second moment is bounded from above by a constant times the first moment squared (and both are nonzero). In many applications of the second moment method, one is not able to calculate the moments precisely, but can nevertheless establish this inequality. - -In this particular application, these moments can be calculated. For every specific v in Tn, -$$ -P(v\in K)=p^n. -$$ - -Since $|T_n|=2^n$, it follows that -$$ -E[X_n]=2^np^n -$$ - -which is the first moment. Now comes the second moment calculation. -$$ -E\!\left[X_n^2 \right ] = E\!\left[\sum_{v\in T_n} \sum_{u\in T_n}1_{v\in K}1_{u\in K}\right] = \sum_{v\in T_n} \sum_{u\in T_n} P(v,u\in K). -$$ - -For each pair v, u in Tn let w(v, u) denote the vertex in T that is farthest away from the root and lies on the simple path in T to each of the two vertices v and u, and let k(v, u) denote the distance from w to the root. In order for v, u to both be in K, it is necessary and sufficient for the three simple paths from w(v, u) to v, u and the root to be in K.
Since the number of edges contained in the union of these three paths is 2n − k(v, u), we obtain -$$ -P(v,u\in K)= p^{2n-k(v,u)}. -$$ - -The number of pairs (v, u) such that k(v, u) = s is equal to $2^s2^{n-s}2^{n-s-1}=2^{2n-s-1}$, for s = 0, 1, ..., n. Hence, -$$ -E\!\left[X_n^2 \right ] = \sum_{s=0}^n 2^{2n-s-1} p^{2n-s} = \frac 12(2p)^n \sum_{s=0}^n (2p)^s = \frac12(2p)^n \frac{(2p)^{n+1}-1}{2p-1} \le \frac p{2p-1} E[X_n]^2, -$$ - -which completes the proof. - -*The choice of the random variables Xn was rather natural in this setup. In some more difficult applications of the method, some ingenuity might be required in order to choose the random variables Xn for which the argument can be carried through. - -*The Paley–Zygmund inequality is sometimes used instead of the Cauchy–Schwarz inequality and may occasionally give more refined results. - -*Under the (incorrect) assumption that the events v, u in K are always independent, one has $P(v,u\in K)=P(v\in K)P(u\in K)$, and the second moment is equal to the first moment squared. The second moment method typically works in situations in which the corresponding events or random variables are "nearly independent". - -*In this application, the random variables Xn are given as sums -$$ -X_n=\sum_{v\in T_n}1_{v\in K}. -$$ - -In other applications, the corresponding useful random variables are integrals -$$ -X_n=\int f_n(t)d\mu(t), -$$ - -where the functions fn are random. In such a situation, one considers the product measure μ × μ and calculates - - \begin{align} - -E \left[X_n^2 \right ] & = E\left[\int\int f_n(x)f_n(y)d\mu(x)d\mu(y)\right ] \\ - -& = E\left[ \int\int E\left[f_n(x)f_n(y)\right]d\mu(x)d\mu(y)\right ], - -\end{align} - -where the last step is typically justified using Fubini's theorem. diff --git a/wiki/wikipedia/2269.txt b/wiki/wikipedia/2269.txt deleted file mode 100644 index f3cc6dab29a9aee713e42adcc213c04f124244b5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2269.txt +++ /dev/null @@ -1,6 +0,0 @@ -In queueing theory, Bartlett's theorem gives the distribution of the number of customers in a given part of a system at a fixed time. - -Suppose that customers arrive according to a non-stationary Poisson process with rate A(t), and that subsequently they move independently around a system of nodes. Write E for some particular part of the system and p(s,t) the probability that a customer who arrives at time s is in E at time t. Then the number of customers in E at time t has a Poisson distribution with mean -$$ -\mu(t) = \int_{-\infty}^t A(s) p(s,t) \mathrm{d}s. -$$ diff --git a/wiki/wikipedia/227.txt b/wiki/wikipedia/227.txt deleted file mode 100644 index ae9bbc8f06898276e18766afbd1c078e7b63e68d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/227.txt +++ /dev/null @@ -1,51 +0,0 @@ -Herbrand's theorem is a fundamental result of mathematical logic obtained by Jacques Herbrand (1930). It essentially allows a certain kind of reduction of first-order logic to propositional logic. Although Herbrand originally proved his theorem for arbitrary formulas of first-order logic, the simpler version shown here, restricted to formulas in prenex form containing only existential quantifiers, became more popular. - -Let -$$ -(\exists y_1,\ldots,y_n)F(y_1,\ldots,y_n) -$$ - -be a formula of first-order logic with $F(y_1,\ldots,y_n)$ quantifier-free, - -though it may contain additional free variables.
- -This version of Herbrand's theorem states that the above formula is valid - -if and only if there exists a finite sequence of terms $t_{ij}$, - -possibly in an expansion of the language, with -$$ -1 \le i \le k -$$ and $1 \le j \le n$, - -such that -$$ -F(t_{11},\ldots,t_{1n}) \vee \ldots \vee F(t_{k1},\ldots,t_{kn}) -$$ - -is valid. If it is valid, it is called a Herbrand disjunction for -$$ -(\exists y_1,\ldots,y_n)F(y_1,\ldots,y_n). -$$ - -Informally: a formula $A$ in prenex form containing only existential quantifiers is provable (valid) in first-order logic if and only if a disjunction composed of substitution instances of the quantifier-free subformula of $A$ is a tautology (propositionally derivable). - -The restriction to formulas in prenex form containing only existential quantifiers does not limit the generality of the theorem, because formulas can be converted to prenex form and their universal quantifiers can be removed by Herbrandization. Conversion to prenex form can be avoided, if structural Herbrandization is performed. Herbrandization can be avoided by imposing additional restrictions on the variable dependencies allowed in the Herbrand disjunction. - -A proof of the non-trivial direction of the theorem can be constructed according to the following steps: - -# If the formula $(\exists y_1,\ldots,y_n)F(y_1,\ldots,y_n)$ is valid, then by completeness of cut-free sequent calculus, which follows from Gentzen's cut-elimination theorem, there is a cut-free proof of $\vdash (\exists y_1,\ldots,y_n)F(y_1,\ldots,y_n)$. - -# Starting from above downwards, remove the inferences that introduce existential quantifiers. - -# Remove contraction-inferences on previously existentially quantified formulas, since the formulas (now with terms substituted for previously quantified variables) might not be identical anymore after the removal of the quantifier inferences. - -# The removal of contractions accumulates all the relevant substitution instances of $F(y_1,\ldots,y_n)$ in the right side of the sequent, thus resulting in a proof of $\vdash F(t_{11},\ldots,t_{1n}), \ldots, F(t_{k1},\ldots,t_{kn})$, from which the Herbrand disjunction can be obtained. - -However, sequent calculus and cut-elimination were not known at the time of Herbrand's theorem, and Herbrand had to prove his theorem in a more complicated way. - -* Herbrand's theorem has been extended to higher-order logic by using expansion-tree proofs. The deep representation of expansion-tree proofs corresponds to a Herbrand disjunction, when restricted to first-order logic. - -* Herbrand disjunctions and expansion-tree proofs have been extended with a notion of cut. Due to the complexity of cut-elimination, Herbrand disjunctions with cuts can be non-elementarily smaller than a standard Herbrand disjunction. - -* Herbrand disjunctions have been generalized to Herbrand sequents, allowing Herbrand's theorem to be stated for sequents: "a Skolemized sequent is derivable iff it has a Herbrand sequent". diff --git a/wiki/wikipedia/2270.txt b/wiki/wikipedia/2270.txt deleted file mode 100644 index 7697b0edc109a8a7bb766ad89ea17e9f1a769c39..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2270.txt +++ /dev/null @@ -1,67 +0,0 @@ -Richards' theorem is a mathematical result due to Paul I. Richards in 1947. The theorem states that, for -$$ -R(s) = \frac {kZ(s)-sZ(k)}{kZ(k)-sZ(s)}, -$$ - -if $Z(s)$ is a positive-real function (PRF) then $R(s)$ is a PRF for all real, positive values of $k$.
- -The theorem has applications in electrical network synthesis. The PRF property of an impedance function determines whether or not a passive network can be realised having that impedance. Richards' theorem led to a new method of realising such networks in the 1940s. -$$ - R(s) = \frac {kZ(s)-sZ(k)}{kZ(k)-sZ(s)} -$$ - -where $Z(s)$ is a PRF, $k$ is a positive real constant, and $s= \sigma + i \omega$ is the complex frequency variable, can be written as -$$ - R(s) = \dfrac {1-W(s)}{1+W(s)} -$$ - -where, -$$ - W(s) = {1 - \dfrac {Z(s)}{Z(k)} \over 1 + \dfrac {Z(s)}{Z(k)}} \left ( \frac {k+s}{k-s} \right ) -$$ - -Since $Z(s)$ is a PRF then -$$ - 1 + \dfrac {Z(s)}{Z(k)} -$$ - -is also a PRF. The zeroes of this function are the poles of $W(s)$. Since a PRF can have no zeroes in the right-half s-plane, then $W(s)$ can have no poles in the right-half s-plane and hence is analytic in the right-half s-plane. - -Let -$$ - Z(i \omega) = r (\omega) + ix(\omega) -$$ - -Then the magnitude of $W(i \omega)$ is given by, -$$ - \left | W(i \omega) \right | = \sqrt { \dfrac { (Z(k) - r(\omega))^2 +x(\omega)^2 }{ (Z(k) + r(\omega))^2 +x(\omega)^2 }} -$$ - -Since the PRF condition requires that $r(\omega) \ge 0$ for all $\omega$ then $\left | W(i \omega) \right | \le 1$ for all $\omega$. The maximum magnitude of $W(s)$ occurs on the $i \omega$ axis because $W(s)$ is analytic in the right-half s-plane. Thus $|W(s)| \le 1$ for $\sigma \ge 0$. - -Let $ W(s) = u( \sigma, \omega) + iv( \sigma, \omega)$, then the real part of $R(s)$ is given by, -$$ - \Re (R(s)) = \dfrac {1 - |W(s)|^2}{ (1 + u( \sigma, \omega))^2 + v^2(\sigma, \omega)} -$$ - -Because $|W(s)| \le 1$ for $\sigma \ge 0$ then $\Re (R(s)) \ge 0$ for $\sigma \ge 0$ and consequently $R(s)$ must be a PRF. - -Richards' theorem can also be derived from Schwarz's lemma. - -The theorem was introduced by Paul I. Richards as part of his investigation into the properties of PRFs. The term PRF was coined by Otto Brune who proved that the PRF property was a necessary and sufficient condition for a function to be realisable as a passive electrical network, an important result in network synthesis. Richards gave the theorem in his 1947 paper in the reduced form, -$$ -R(s) = \frac {Z(s)-sZ(1)}{Z(1)-sZ(s)} -$$ - -that is, the special case where $k=1$. - -The theorem (with the more general case of $k$ being able to take on any positive value) formed the basis of the network synthesis technique presented by Raoul Bott and Richard Duffin in 1949. In the Bott-Duffin synthesis, $Z(s)$ represents the electrical network to be synthesised and $R(s)$ is another (unknown) network incorporated within it ($R(s)$ is unitless, but $R(s)Z(k)$ has units of impedance and $R(s)/Z(k)$ has units of admittance). Making $Z(s)$ the subject gives -$$ - Z(s) = \left ( \frac {R(s)}{Z(k)} + \frac {k}{sZ(k)} \right )^{-1} + \left ( \frac {1}{Z(k) R(s)} + \frac {s}{k Z(k)} \right )^{-1} -$$ - -Since $Z(k)$ is merely a positive real number, $Z(s)$ can be synthesised as a new network proportional to $R(s)$ in parallel with a capacitor all in series with a network proportional to the inverse of $R(s)$ in parallel with an inductor. By a suitable choice for the value of $k$, a resonant circuit can be extracted from $R(s)$ leaving a function $Z'(s)$ two degrees lower than $Z(s)$. The whole process can then be applied iteratively to $Z'(s)$ until the degree of the function is reduced to something that can be realised directly.
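The decomposition identity above can be verified symbolically; a minimal sketch assuming SymPy, with $Z(s) = (s+1)/(s+2)$ chosen as an arbitrary sample PRF and $k$ left symbolic:

```
import sympy as sp

s, k = sp.symbols('s k', positive=True)

Z = (s + 1) / (s + 2)    # an arbitrary sample PRF (an assumption for this demo)
Zk = Z.subs(s, k)        # Z(k), a positive real number for k > 0

R = (k * Z - s * Zk) / (k * Zk - s * Z)    # Richards' transformation

term1 = 1 / (R / Zk + k / (s * Zk))         # R(s)/Z(k) branch in parallel with a capacitor
term2 = 1 / (1 / (Zk * R) + s / (k * Zk))   # 1/R(s) branch in parallel with an inductor

assert sp.simplify(term1 + term2 - Z) == 0
print("Z(s) is recovered exactly from the Bott-Duffin decomposition")
```

The identity holds for any rational $Z(s)$, which is what makes the iterative extraction step well defined.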
- -The advantage of the Bott-Duffin synthesis is that, unlike other methods, it is able to synthesise any PRF. Other methods have limitations such as only being able to deal with two kinds of element in any single network. Its major disadvantage is that it does not result in the minimal number of elements in a network. The number of elements grows exponentially with each iteration. After the first iteration there are two $Z'$ and associated elements; after the second, there are four $Z''$, and so on. - -Hubbard notes that Bott and Duffin appeared not to know the relationship of Richards' theorem to Schwarz's lemma and offers it as his own discovery, but it was certainly known to Richards who used it in his own proof of the theorem. diff --git a/wiki/wikipedia/2271.txt b/wiki/wikipedia/2271.txt deleted file mode 100644 index 3f950de976679e34cbad1d20bd135fc3741ea368..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2271.txt +++ /dev/null @@ -1,81 +0,0 @@ -In mathematics, specifically the field of transcendental number theory, the four exponentials conjecture is a conjecture which, given the right conditions on the exponents, would guarantee the transcendence of at least one of four exponentials. The conjecture, along with two related, stronger conjectures, is at the top of a hierarchy of conjectures and theorems concerning the arithmetic nature of a certain number of values of the exponential function. - -If x1, x2 and y1, y2 are two pairs of complex numbers, with each pair being linearly independent over the rational numbers, then at least one of the following four numbers is transcendental: -$$ -e^{x_1y_1}, e^{x_1y_2}, e^{x_2y_1}, e^{x_2y_2}. -$$ - -An alternative way of stating the conjecture in terms of logarithms is the following. For 1 ≤ i, j ≤ 2 let λij be complex numbers such that exp(λij) are all algebraic. Suppose λ11 and λ12 are linearly independent over the rational numbers, and λ11 and λ21 are also linearly independent over the rational numbers, then -$$ -\lambda_{11}\lambda_{22}\neq\lambda_{12}\lambda_{21}. -$$ - -An equivalent formulation in terms of linear algebra is the following. Let M be the 2×2 matrix -$$ -M=\begin{pmatrix}\lambda_{11}&\lambda_{12} \\ \lambda_{21}&\lambda_{22}\end{pmatrix}, -$$ - -where exp(λij) is algebraic for 1 ≤ i, j ≤ 2. Suppose the two rows of M are linearly independent over the rational numbers, and the two columns of M are linearly independent over the rational numbers. Then the rank of M is 2. - -While a 2×2 matrix having linearly independent rows and columns usually means it has rank 2, in this case we require linear independence over a smaller field so the rank isn't forced to be 2. For example, the matrix -$$ -\begin{pmatrix}1&\pi \\ \pi&\pi^2\end{pmatrix} -$$ - -has rows and columns that are linearly independent over the rational numbers, since π is irrational. But the rank of the matrix is 1. So in this case the conjecture would imply that at least one of $e$, $e^{\pi}$, and $e^{\pi^2}$ is transcendental (which in this case is already known since $e$ is transcendental). - -The conjecture was considered as early as the early 1940s by Atle Selberg who never formally stated the conjecture. A special case of the conjecture is mentioned in a 1944 paper of Leonidas Alaoglu and Paul Erdős who suggested that it had been considered by Carl Ludwig Siegel. An equivalent statement was first mentioned in print by Theodor Schneider who set it as the first of eight important open problems in transcendental number theory in 1957.
- -The related six exponentials theorem was first explicitly mentioned in the 1960s by Serge Lang and Kanakanahalli Ramachandra, and both also explicitly conjecture the above result. Indeed, after proving the six exponentials theorem Lang mentions the difficulty in dropping the number of exponents from six to four - the proof used for six exponentials "just misses" when one tries to apply it to four. - -Using Euler's identity this conjecture implies the transcendence of many numbers involving e and π. For example, taking $x_1 = 1$, $x_2 = \sqrt{2}$, $y_1 = i\pi$, and $y_2 = i\pi\sqrt{2}$, the conjecture - if true - implies that one of the following four numbers is transcendental: -$$ -e^{i\pi}, e^{i\pi\sqrt{2}}, e^{i\pi\sqrt{2}}, e^{2i\pi}. -$$ - -The first of these is just -1, and the fourth is 1, so the conjecture implies that $e^{i\pi\sqrt{2}}$ is transcendental (which is already known, by consequence of the Gelfond–Schneider theorem). - -An open problem in number theory settled by the conjecture is the question of whether there exists a non-integer real number t such that both $2^t$ and $3^t$ are integers, or indeed such that $a^t$ and $b^t$ are both integers for some pair of integers a and b that are multiplicatively independent over the integers. Values of t such that $2^t$ is an integer are all of the form $t = \log_2 m$ for some integer m, while for $3^t$ to be an integer, t must be of the form $t = \log_3 n$ for some integer n. By setting $x_1 = 1$, $x_2 = t$, $y_1 = \log(2)$, and $y_2 = \log(3)$, the four exponentials conjecture implies that if t is irrational then one of the following four numbers is transcendental: -$$ -2, 3, 2^t, 3^t. -$$ - -So if $2^t$ and $3^t$ are both integers then the conjecture implies that t must be a rational number. Since the only rational numbers t for which $2^t$ is also rational are the integers, this implies that there are no non-integer real numbers t such that both $2^t$ and $3^t$ are integers. It is this consequence, for any two primes not just 2 and 3, that Alaoglu and Erdős desired in their paper as it would imply the conjecture that the quotient of two consecutive colossally abundant numbers is prime, extending Ramanujan's results on the quotients of consecutive superior highly composite numbers. - -The four exponentials conjecture reduces the pair and triplet of complex numbers in the hypotheses of the six exponentials theorem to two pairs. It is conjectured that this is also possible with the sharp six exponentials theorem, and this is the sharp four exponentials conjecture. Specifically, this conjecture claims that if x1, x2, and y1, y2 are two pairs of complex numbers with each pair being linearly independent over the rational numbers, and if βij are four algebraic numbers for 1 ≤ i, j ≤ 2 such that the following four numbers are algebraic: -$$ -e^{x_1 y_1-\beta_{11}}, e^{x_1 y_2-\beta_{12}}, e^{x_2 y_1-\beta_{21}}, e^{x_2 y_2-\beta_{22}}, -$$ - -then xi yj = βij for 1 ≤ i, j ≤ 2. So all four exponentials are in fact 1. - -This conjecture implies both the sharp six exponentials theorem, which requires a third x value, and the as yet unproven sharp five exponentials conjecture that requires a further exponential to be algebraic in its hypotheses. - -The strongest result that has been conjectured in this circle of problems is the strong four exponentials conjecture. This result would imply both aforementioned conjectures concerning four exponentials as well as all the five and six exponentials conjectures and theorems, and all the three exponentials conjectures detailed below.
The statement of this conjecture deals with the vector space over the algebraic numbers generated by 1 and all logarithms of non-zero algebraic numbers, denoted here as L. So L is the set of all complex numbers of the form -$$ -\beta_0+\sum_{i=1}^n \beta_i\log\alpha_i, -$$ - -for some n ≥ 0, where all the βi and αi are algebraic and every branch of the logarithm is considered. The statement of the strong four exponentials conjecture is then as follows. Let x1, x2, and y1, y2 be two pairs of complex numbers with each pair being linearly independent over the algebraic numbers, then at least one of the four numbers xi yj for 1 ≤ i, j ≤ 2 is not in L. - -The four exponentials conjecture rules out a special case of non-trivial, homogeneous, quadratic relations between logarithms of algebraic numbers. But a conjectural extension of Baker's theorem implies that there should be no non-trivial algebraic relations between logarithms of algebraic numbers at all, homogeneous or not. One case of non-homogeneous quadratic relations is covered by the still open three exponentials conjecture. In its logarithmic form it is the following conjecture. Let λ1, λ2, and λ3 be any three logarithms of algebraic numbers and γ be a non-zero algebraic number, and suppose that λ1λ2 = γλ3. Then λ1λ2 = γλ3 = 0. - -The exponential form of this conjecture is the following. Let x1, x2, and y be non-zero complex numbers and let γ be a non-zero algebraic number. Then at least one of the following three numbers is transcendental: -$$ -e^{x_1y}, e^{x_2y}, e^{\gamma x_1/x_2}. -$$ - -There is also a sharp three exponentials conjecture which claims that if x1, x2, and y are non-zero complex numbers and α, β1, β2, and γ are algebraic numbers such that the following three numbers are algebraic -$$ -e^{x_1 y-\beta_1}, e^{x_2 y-\beta_2}, e^{(\gamma x_1/x_2)-\alpha}, -$$ - -then either x2y = β2 or γx1 = αx2. - -The strong three exponentials conjecture meanwhile states that if x1, x2, and y are non-zero complex numbers with x1y, x2y, and x1/x2 all transcendental, then at least one of the three numbers x1y, x2y, x1/x2 is not in L. - -As with the other results in this family, the strong three exponentials conjecture implies the sharp three exponentials conjecture which implies the three exponentials conjecture. However, the strong and sharp three exponentials conjectures are implied by their four exponentials counterparts, bucking the usual trend. And the three exponentials conjecture is neither implied by nor implies the four exponentials conjecture. - -The three exponentials conjecture, like the sharp five exponentials conjecture, would imply the transcendence of $e^{\pi^2}$ by letting (in the logarithmic version) λ1 = iπ, λ2 = −iπ, and γ = 1. - -Many of the theorems and results in transcendental number theory concerning the exponential function have analogues involving the modular function j. Writing $q = e^{2\pi i\tau}$ for the nome and j(τ) = J(q), Daniel Bertrand conjectured that if q1 and q2 are non-zero algebraic numbers in the complex unit disc that are multiplicatively independent, then J(q1) and J(q2) are algebraically independent over the rational numbers. Although not obviously related to the four exponentials conjecture, Bertrand's conjecture in fact implies a special case known as the weak four exponentials conjecture. This conjecture states that if x1 and x2 are two positive real algebraic numbers, neither of them equal to 1, then $\pi^2$ and the product $\log(x_1)\log(x_2)$ are linearly independent over the rational numbers.
This corresponds to the special case of the four exponentials conjecture whereby y1 = iπ, y2 = -iπ, and x1 and x2 are real. Perhaps surprisingly, though, it is also a corollary of Bertrand's conjecture, suggesting there may be an approach to the full four exponentials conjecture via the modular function j. diff --git a/wiki/wikipedia/2272.txt b/wiki/wikipedia/2272.txt deleted file mode 100644 index 38512cb76da68796b47fc93e3905e3456e1828a0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2272.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics - specifically, in differential topology - Berger's inequality for Einstein manifolds is the statement that any 4-dimensional Einstein manifold (M, g) has non-negative Euler characteristic χ(M) ≥ 0. The inequality is named after the French mathematician Marcel Berger. diff --git a/wiki/wikipedia/2273.txt b/wiki/wikipedia/2273.txt deleted file mode 100644 index f7e50d419818d62d6c0f08343df362731a379fac..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2273.txt +++ /dev/null @@ -1,7 +0,0 @@ -In differential geometry, the slice theorem states: given a manifold M on which a Lie group G acts as diffeomorphisms, for any x in M, the map $G/G_x \to M, [g] \mapsto g \cdot x$ extends to an invariant neighborhood of $G/G_x$ (viewed as a zero section) in $G \times_{G_x} T_x M / T_x(G \cdot x)$ so that it defines an equivariant diffeomorphism from the neighborhood to its image, which contains the orbit of x. - -An important application of the theorem is a proof of the fact that the quotient $M/G$ admits a manifold structure when G is compact and the action is free. - -In algebraic geometry, there is an analog of the slice theorem; it is called Luna's slice theorem. - -Since G is compact, there exists an invariant metric; i.e., G acts as isometries. One then adopts the usual proof of the existence of a tubular neighborhood using this metric. diff --git a/wiki/wikipedia/2274.txt b/wiki/wikipedia/2274.txt deleted file mode 100644 index d6586c6dc6ffd90157bba8af89e3dbbc989830b9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2274.txt +++ /dev/null @@ -1,32 +0,0 @@ -In propositional logic, disjunction elimination (sometimes named proof by cases, case analysis, or or elimination) is the valid argument form and rule of inference that allows one to eliminate a disjunctive statement from a logical proof. It is the inference that if a statement $P$ implies a statement $Q$ and a statement $R$ also implies $Q$, then if either $P$ or $R$ is true, then $Q$ has to be true. The reasoning is simple: since at least one of the statements P and R is true, and since either of them would be sufficient to entail Q, Q is certainly true. - -An example in English: - -If I'm inside, I have my wallet on me. - -If I'm outside, I have my wallet on me. - -It is true that either I'm inside or I'm outside. - -Therefore, I have my wallet on me. - -The rule can be stated as: -$$ -\frac{P \to Q, R \to Q, P \lor R}{\therefore Q} -$$ - -where the rule is that whenever instances of "$P \to Q$", "$R \to Q$", and "$P \lor R$" appear on lines of a proof, "$Q$" can be placed on a subsequent line.
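The rule's validity can be confirmed by brute force over all eight truth assignments; a plain-Python sketch:

```
from itertools import product

def implies(a, b):
    return (not a) or b

# Exhaustive truth-table check of disjunction elimination:
# (((P -> Q) and (R -> Q)) and (P or R)) -> Q
assert all(
    implies(implies(P, Q) and implies(R, Q) and (P or R), Q)
    for P, Q, R in product([False, True], repeat=3)
)
print("disjunction elimination is a tautology")
```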
- -The disjunction elimination rule may be written in sequent notation: -$$ -(P \to Q), (R \to Q), (P \lor R) \vdash Q -$$ - -where $\vdash$ is a metalogical symbol meaning that $Q$ is a syntactic consequence of $P \to Q$, and $R \to Q$ and $P \lor R$ in some logical system; - -and expressed as a truth-functional tautology or theorem of propositional logic: -$$ -(((P \to Q) \land (R \to Q)) \land (P \lor R)) \to Q -$$ - -where $P$, $Q$, and $R$ are propositions expressed in some formal system. diff --git a/wiki/wikipedia/2275.txt b/wiki/wikipedia/2275.txt deleted file mode 100644 index 4755058fac852ef1b1433ce0c0d1d63b628520c7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2275.txt +++ /dev/null @@ -1,14 +0,0 @@ -In mathematics, Sendov's conjecture, sometimes also called Ilieff's conjecture, concerns the relationship between the locations of roots and critical points of a polynomial function of a complex variable. It is named after Blagovest Sendov. - -The conjecture states that for a polynomial -$$ - f(z) = (z - r_1)\cdots (z-r_n),\qquad (n\ge 2) -$$ - -with all roots r1, ..., rn inside the closed unit disk |z| ≤ 1, each of the n roots is at a distance no more than 1 from at least one critical point. - -The Gauss–Lucas theorem says that all of the critical points lie within the convex hull of the roots. It follows that the critical points must be within the unit disk, since the roots are. - -The conjecture has been proven for n < 9 by Brown-Xiang and for n sufficiently large by Tao. - -The conjecture was first mooted by Blagovest Sendov in 1959; he described the conjecture to his colleague Nikola Obreshkov. In 1967 the conjecture was misattributed to Ljubomir Iliev by Walter Hayman. In 1969 Meir and Sharma proved the conjecture for polynomials with n < 6. In 1991 Brown proved the conjecture for n < 7. Borcea extended the proof to n < 8 in 1996. Brown and Xiang proved the conjecture for n < 9 in 1999. Terence Tao proved the conjecture for sufficiently large n in 2020. diff --git a/wiki/wikipedia/2276.txt b/wiki/wikipedia/2276.txt deleted file mode 100644 index f02956ca934bb6937f6cbbc51d27bf94b3665467..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2276.txt +++ /dev/null @@ -1,25 +0,0 @@ -Hashiwokakero (橋をかけろ Hashi o kakero; lit. "build bridges!") is a type of logic puzzle published by Nikoli. It has also been published in English under the name Bridges or Chopsticks (based on a mistranslation: the hashi of the title, 橋, means bridge; hashi written with another character, 箸, means chopsticks). It has also appeared in The Times under the name Hashi. In France, Denmark, the Netherlands, and Belgium it is published under the name Ai-Ki-Ai. - -Hashiwokakero is played on a rectangular grid with no standard size, although the grid itself is not usually drawn. Some cells start out with (usually encircled) numbers from 1 to 8 inclusive; these are the "islands". The rest of the cells are empty. - -The goal is to connect all of the islands by drawing a series of bridges between the islands. The bridges must follow certain criteria: - -* They must begin and end at distinct islands, travelling a straight line in between. - -* They must not cross any other bridges or islands. - -* They may only run orthogonally (i.e. they may not run diagonally). - -* At most two bridges connect a pair of islands. - -* The number of bridges connected to each island must match the number on that island. - -* The bridges must connect the islands into a single connected group. 
- -Solving a Hashiwokakero puzzle is a matter of procedural force: having determined where a bridge must be placed, placing it there can eliminate other possible places for bridges, forcing the placement of another bridge, and so on. - -An island showing '3' in a corner, '5' along the outside edge, or '7' anywhere must have at least one bridge radiating from it in each valid direction, for if one direction did not have a bridge, even if all other directions sported two bridges, not enough will have been placed. Obviously, a '4' in a corner, '6' along the border, or '8' anywhere must have two bridges in each direction. This can be generalized as added bridges obstruct routes: a '3' that can only be travelled from vertically must have at least one bridge each for up and down, for example. - -It is common practice to cross off or fill in islands whose bridge quota has been reached. - -Hashiwokakero first appeared in Puzzle Communication Nikoli in issue #31 (September 1990), although an earlier form of the puzzle appeared in issue #28 (December 1989). diff --git a/wiki/wikipedia/2277.txt b/wiki/wikipedia/2277.txt deleted file mode 100644 index 34b31c0745d5adf0aac8d5dbbe1695252b542aed..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2277.txt +++ /dev/null @@ -1,290 +0,0 @@ -In combinatorics, a branch of mathematics, the inclusion–exclusion principle is a counting technique which generalizes the familiar method of obtaining the number of elements in the union of two finite sets; symbolically expressed as -$$ - |A \cup B| = |A| + |B| - |A \cap B|, -$$ - -where A and B are two finite sets and |S| indicates the cardinality of a set S (which may be considered as the number of elements of the set, if the set is finite). The formula expresses the fact that the sum of the sizes of the two sets may be too large since some elements may be counted twice. The double-counted elements are those in the intersection of the two sets and the count is corrected by subtracting the size of the intersection. - -The inclusion-exclusion principle, being a generalization of the two-set case, is perhaps more clearly seen in the case of three sets, which for the sets A, B and C is given by -$$ -|A \cup B \cup C| = |A| + |B| + |C| - |A \cap B| - |A \cap C| - |B \cap C| + |A \cap B \cap C|. -$$ - -This formula can be verified by counting how many times each region in the Venn diagram figure is included in the right-hand side of the formula. In this case, when removing the contributions of over-counted elements, the number of elements in the mutual intersection of the three sets has been subtracted too often, so must be added back in to get the correct total. - -Generalizing the results of these examples gives the principle of inclusion–exclusion. To find the cardinality of the union of n sets: - -# Include the cardinalities of the sets. - -# Exclude the cardinalities of the pairwise intersections. - -# Include the cardinalities of the triple-wise intersections. - -# Exclude the cardinalities of the quadruple-wise intersections. - -# Include the cardinalities of the quintuple-wise intersections. - -# Continue, until the cardinality of the n-tuple-wise intersection is included (if n is odd) or excluded (n even). - -The name comes from the idea that the principle is based on over-generous inclusion, followed by compensating exclusion. - -This concept is attributed to Abraham de Moivre (1718); but it first appears in a paper of Daniel da Silva (1854), and later in a paper by J. J. Sylvester (1883). 
Sometimes the principle is referred to as the formula of Da Silva or Sylvester, owing to these publications. The principle is an example of the sieve method extensively used in number theory and is sometimes referred to as the sieve formula, though Legendre already used a similar device in a sieve context in 1808. - -As finite probabilities are computed as counts relative to the cardinality of the probability space, the formulas for the principle of inclusion–exclusion remain valid when the cardinalities of the sets are replaced by finite probabilities. More generally, both versions of the principle can be put under the common umbrella of measure theory. - -In a very abstract setting, the principle of inclusion–exclusion can be expressed as the calculation of the inverse of a certain matrix. This inverse has a special structure, making the principle an extremely valuable technique in combinatorics and related areas of mathematics. As Gian-Carlo Rota put it: - -
"One of the most useful principles of enumeration in discrete probability and combinatorial theory is the celebrated principle of inclusion–exclusion. When skillfully applied, this principle has yielded the solution to many a combinatorial problem."
- -In its general form, the principle of inclusion–exclusion states that for finite sets A1, …, An, one has the identity - -{{NumBlk|:|$\left|\bigcup_{i=1}^n A_i\right| = \sum_{i=1}^n |A_i| - \sum_{1 \leqslant i < j \leqslant n} |A_i\cap A_j| + \sum_{1 \leqslant i < j < k \leqslant n} |A_i \cap A_j\cap A_k| - \cdots + (-1)^{n-1} \left|A_1\cap\cdots\cap A_n\right|.$|}} - -This can be compactly written as -$$ -\left|\bigcup_{i=1}^n A_i\right| = \sum_{k=1}^n (-1)^{k+1} \left( \sum_{1 \leqslant i_1 < \cdots < i_k \leqslant n} | A_{i_1} \cap \cdots \cap A_{i_k} | \right) -$$ - -or -$$ -\left| \bigcup_{i=1}^n A_i\right| = \sum_{\emptyset\neq J\subseteq\{1,\ldots,n\}}(-1)^{|J|+1} |A_J|, -$$ - -where $A_J := \bigcap_{j \in J} A_j$. - -If I is a fixed subset of the index set N, then the number of elements which belong to Ai for all i in I and for no other values is: -$$ - \sum_{I \subseteq J} (-1)^{|J|-|I|} |A_J|. -$$ - -Define the sets -$$ -B_k = A_{I \cup \{ k \}} \text{ for } k \in N \setminus I. -$$ - -We seek the number of elements in none of the Bk which, by the principle of inclusion–exclusion (with $B_\emptyset = A_I$), is -$$ -\sum_{K \subseteq N \setminus I} (-1)^{|K|}|B_K|. -$$ - -The correspondence K ↔ J = I ∪ K between subsets of N \ I and subsets of N containing I is a bijection and if J and K correspond under this map then BK = AJ, showing that the result is valid. - -In probability, for events A1, ..., An in a probability space $(\Omega,\mathcal{F},\mathbb{P})$, the inclusion–exclusion principle becomes for n = 2 -$$ -\mathbb{P}(A_1\cup A_2)=\mathbb{P}(A_1)+\mathbb{P}(A_2)-\mathbb{P}(A_1\cap A_2), -$$ - -for n = 3 -$$ -\mathbb{P}(A_1\cup A_2\cup A_3)=\mathbb{P}(A_1)+\mathbb{P}(A_2)+\mathbb{P}(A_3)-\mathbb{P}(A_1\cap A_2)-\mathbb{P}(A_1\cap A_3)-\mathbb{P}(A_2\cap A_3)+\mathbb{P}(A_1\cap A_2\cap A_3) -$$ - -and in general -$$ -\mathbb{P}\left(\bigcup_{i=1}^n A_i\right)=\sum_{i=1}^n \mathbb{P}(A_i) -\sum_{1 \leqslant i < j \leqslant n} \mathbb{P}(A_i\cap A_j) +\sum_{1 \leqslant i < j < k \leqslant n} \mathbb{P}(A_i\cap A_j\cap A_k) - \cdots + (-1)^{n-1}\mathbb{P}\left(\bigcap_{i=1}^n A_i\right), -$$ - -which can be written in closed form as -$$ -\mathbb{P}\left(\bigcup_{i=1}^n A_i\right) = \sum_{k=1}^n (-1)^{k-1}\sum_{I\subseteq\{1,\ldots,n\} \atop |I|=k} \mathbb{P}(A_I), -$$ - -where the last sum runs over all subsets I of the indices 1, …, n which contain exactly k elements, and $A_I := \bigcap_{i\in I} A_i$ denotes the intersection of all those Ai with index in I. - -According to the Bonferroni inequalities, the sum of the first terms in the formula is alternately an upper bound and a lower bound for the LHS. This can be used in cases where the full formula is too cumbersome. - -For a general measure space (S,Σ,μ) and measurable subsets A1, …, An of finite measure, the above identities also hold when the probability measure $\mathbb{P}$ is replaced by the measure μ. - -If, in the probabilistic version of the inclusion–exclusion principle, the probability of the intersection AI only depends on the cardinality of I, meaning that for every k in {1, …, n} there is an ak such that -$$ -a_k=\mathbb{P}(A_I) \text{ for every } I\subset\{1,\ldots,n\} \text{ with } |I|=k, -$$ - -then the above formula simplifies to -$$ -\mathbb{P}\left(\bigcup_{i=1}^n A_i\right) =\sum_{k=1}^n (-1)^{k-1}\binom n k a_k -$$ - -due to the combinatorial interpretation of the binomial coefficient $\binom nk$. For example, if the events $A_i$ are independent and identically distributed, then $\mathbb{P}(A_i) = p$ for all i, and we have $a_k = p^k$, in which case the expression above simplifies to -$$ -\mathbb{P}\left(\bigcup_{i=1}^n A_i\right) = 1 - (1-p)^n. -$$ - -(This result can also be derived more simply by considering the intersection of the complements of the events $A_i$.) - -An analogous simplification is possible in the case of a general measure space (S, Σ, μ) and measurable subsets A1, …, An of finite measure.
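The identity is easy to test numerically. A brute-force Python sketch (an added illustration) compares the alternating sum of intersection sizes with a directly computed union on random sets:

```python
from itertools import combinations
import random

random.seed(0)
sets = [set(random.sample(range(20), 8)) for _ in range(4)]

direct = len(set().union(*sets))
inclusion_exclusion = sum(
    (-1) ** (k + 1)
    * sum(len(set.intersection(*combo)) for combo in combinations(sets, k))
    for k in range(1, len(sets) + 1)
)
assert direct == inclusion_exclusion
print(direct)
```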
- -The principle is sometimes stated in the form that says that if -$$ -g(A)=\sum_{S \subseteq A}f(S) -$$ - -then - -{{NumBlk|:|$f(A)=\sum_{S \subseteq A}(-1)^{|A|-|S|}g(S)$|}} - -The combinatorial and the probabilistic version of the inclusion–exclusion principle are instances of (). - -{{math proof|proof= - -Take $\underline{m} = \{1,2,\ldots,m\}$, $f(\underline{m}) = 0$, and -$$ -f(S)=\left|\bigcap_{i \in \underline{m} \setminus S} A_i \setminus \bigcup_{i \in S} A_i\right| \text{ and } f(S) = \mathbb{P} \left(\bigcap_{i \in \underline{m} \setminus S} A_i \setminus \bigcup_{i \in S} A_i\right) -$$ - -respectively for all sets $S$ with $S \subsetneq \underline{m}$. Then we obtain -$$ -g(A)=\left|\bigcap_{i \in \underline{m} \setminus A} A_i\right|, \quad g(\underline{m}) = \left|\bigcup_{i \in \underline{m}} A_i \right| \text{ and } g(A) = \mathbb{P} \left( \bigcap_{i \in \underline{m} \setminus A} A_i \right),~~ g(\underline{m}) = \mathbb{P} \left(\bigcup_{i \in \underline{m}} A_i\right) -$$ - -respectively for all sets $A$ with $A \subsetneq \underline{m}$. This is because elements $a$ of $\cap_{i \in \underline{m} \setminus A} A_i$ can be contained in other $A_i$ ($A_i$ with $i \in A$) as well, and the $\cap \setminus \cup$-formula runs exactly through all possible extensions of the sets $\{A_i \mid i \in \underline{m} \setminus A\}$ with other $A_i$, counting $a$ only for the set that matches the membership behavior of $a$, if $S$ runs through all subsets of $A$ (as in the definition of $g(A)$). - -Since $f(\underline{m}) = 0$, we obtain from () with $A = \underline{m}$ that -$$ -\left|\bigcup_{i \in \underline{m}} A_i\right| = \sum_{\underline{m} \supseteq T \supsetneq \varnothing}(-1)^{|T|-1}\left|\bigcap_{i \in T} A_i\right|, -$$ - -and correspondingly for the probabilistic version, which is the inclusion–exclusion principle.}} - -The principle generalizes to multisets. If $g(A)=\sum_{S \subseteq A}f(S)$ for multisets $S \subseteq A$, the corresponding inversion is - -{{NumBlk|:|$f(A)=\sum_{S \subseteq A}\mu(A - S)g(S)$|}} - -where $A - S$ is the multiset for which $(A - S) \uplus S = A$, and - -* μ(S) = 1 if S is a set (i.e. a multiset without double elements) of even cardinality. - -* μ(S) = −1 if S is a set (i.e. a multiset without double elements) of odd cardinality. - -* μ(S) = 0 if S is a proper multiset (i.e. S has double elements). - -Notice that $\mu(A - S)$ is just the $(-1)^{|A|-|S|}$ of () in case $A - S$ is a set. - -{{proof|title=Proof of ()|proof= - -Substitute - -g(S)=\sum_{T\subseteq S}f(T) - -on the right hand side of (). Notice that $f(A)$ appears once on both sides of (). So we must show that for all $T$ with $T\subsetneq A$, the terms $f(T)$ cancel out on the right hand side of (). For that purpose, take a fixed $T$ such that $T\subsetneq A$ and take an arbitrary fixed $a \in A$ such that $a \notin T$. - -Notice that $A - S$ must be a set for each positive or negative appearance of $f(T)$ on the right hand side of () that is obtained by way of the multiset $S$ such that $T \subseteq S \subseteq A$. Now each appearance of $f(T)$ on the right hand side of () that is obtained by way of $S$ such that $A - S$ is a set that contains $a$ cancels out with the one that is obtained by way of the corresponding $S$ such that $A - S$ is a set that does not contain $a$. This gives the desired result. - -}} - -The inclusion–exclusion principle is widely used and only a few of its applications can be mentioned here. - -A well-known application of the inclusion–exclusion principle is to the combinatorial problem of counting all derangements of a finite set. A derangement of a set A is a bijection from A into itself that has no fixed points. Via the inclusion–exclusion principle one can show that if the cardinality of A is n, then the number of derangements is [n!
/ e] where [x] denotes the nearest integer to x; a detailed proof is given in the article on derangements. - -The problem of counting the number of derangements first occurs in an early book on games of chance, Essai d'analyse sur les jeux de hazard by P. R. de Montmort (1678 – 1719), and was known as either "Montmort's problem" or, by the name he gave it, "problème des rencontres." The problem is also known as the hatcheck problem. - -The number of derangements is also known as the subfactorial of n, written !n. It follows that if all bijections are assigned the same probability then the probability that a random bijection is a derangement quickly approaches 1/e as n grows. - -The principle of inclusion–exclusion, combined with De Morgan's law, can be used to count the cardinality of the intersection of sets as well. Let $\overline{A_k}$ represent the complement of Ak with respect to some universal set A such that $A_k \subseteq A$ for each k. Then we have -$$ -\bigcap_{i=1}^n A_i = \overline{\bigcup_{i=1}^n \overline{A_i}} -$$ - -thereby turning the problem of finding an intersection into the problem of finding a union. - -The inclusion–exclusion principle forms the basis of algorithms for a number of NP-hard graph partitioning problems, such as graph coloring. - -A well-known application of the principle is the construction of the chromatic polynomial of a graph. - -The number of perfect matchings of a bipartite graph can be calculated using the principle. - -Given finite sets A and B, how many surjective functions (onto functions) are there from A to B? Without any loss of generality we may take A = {1, ..., k} and B = {1, ..., n}, since only the cardinalities of the sets matter. By using S as the set of all functions from A to B, and defining, for each i in B, the property Pi as "the function misses the element i in B" (i is not in the image of the function), the principle of inclusion-exclusion gives the number of onto functions between A and B as: -$$ -\sum_{j=0}^{n} \binom{n}{j} (-1)^j (n-j)^k. -$$ - -A permutation of the set S = {1, ..., n} where each element of S is restricted to not being in certain positions (here the permutation is considered as an ordering of the elements of S) is called a permutation with forbidden positions. For example, with S = {1,2,3,4}, the permutations with the restriction that the element 1 can not be in positions 1 or 3, and the element 2 can not be in position 4 are: 2134, 2143, 3124, 4123, 2341, 2431, 3241, 3421, 4231 and 4321. By letting Ai be the set of positions that the element i is not allowed to be in, and the property Pi to be the property that a permutation puts element i into a position in Ai, the principle of inclusion–exclusion can be used to count the number of permutations which satisfy all the restrictions. - -In the given example, there are 12 = 2(3!) permutations with property P1, 6 = 3! permutations with property P2 and no permutations have properties P3 or P4 as there are no restrictions for these two elements. The number of permutations satisfying the restrictions is thus: - -4! − (12 + 6 + 0 + 0) + (4) = 24 − 18 + 4 = 10. - -The final 4 in this computation is the number of permutations having both properties P1 and P2. There are no other non-zero contributions to the formula. - -The Stirling numbers of the second kind, S(n,k), count the number of partitions of a set of n elements into k non-empty subsets (indistinguishable boxes).
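As a quick sanity check of the surjection formula above, and a preview of the Stirling-number formula derived next, one can count surjections exhaustively for small k and n (an added sketch):

```python
from itertools import product
from math import comb, factorial

def surjections_brute(k, n):
    # enumerate all n**k functions {1..k} -> {1..n} and keep the onto ones
    return sum(1 for f in product(range(n), repeat=k) if set(f) == set(range(n)))

def surjections_ie(k, n):
    return sum((-1) ** j * comb(n, j) * (n - j) ** k for j in range(n + 1))

k, n = 5, 3
assert surjections_brute(k, n) == surjections_ie(k, n) == 150
print(surjections_ie(k, n) // factorial(n))  # 25 = S(5, 3), as derived below
```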
An explicit formula for them can be obtained by applying the principle of inclusion–exclusion to a very closely related problem, namely, counting the number of partitions of an n-set into k non-empty but distinguishable boxes (ordered non-empty subsets). Using the universal set consisting of all partitions of the n-set into k (possibly empty) distinguishable boxes, A1, A2, …, Ak, and the properties Pi meaning that the partition has box Ai empty, the principle of inclusion–exclusion gives an answer for the related result. Dividing by k! to remove the artificial ordering gives the Stirling number of the second kind: -$$ -S(n,k) = \frac{1}{k!}\sum_{t=0}^k (-1)^t \binom k t (k-t)^n. -$$ - -A rook polynomial is the generating function of the number of ways to place non-attacking rooks on a board B that looks like a subset of the squares of a checkerboard; that is, no two rooks may be in the same row or column. The board B is any subset of the squares of a rectangular board with n rows and m columns; we think of it as the squares in which one is allowed to put a rook. The coefficient rk(B) of xk in the rook polynomial RB(x) is the number of ways k rooks, none of which attacks another, can be arranged in the squares of B. For any board B, there is a complementary board $B'$ consisting of the squares of the rectangular board that are not in B. This complementary board also has a rook polynomial $R_{B'}(x)$ with coefficients $r_k(B').$ - -It is sometimes convenient to be able to calculate the highest coefficient of a rook polynomial in terms of the coefficients of the rook polynomial of the complementary board. Without loss of generality we can assume that n ≤ m, so this coefficient is rn(B). The number of ways to place n non-attacking rooks on the complete n × m "checkerboard" (without regard as to whether the rooks are placed in the squares of the board B) is given by the falling factorial: -$$ -(m)_n = m(m-1)(m-2) \cdots (m-n+1). -$$ - -Letting Pi be the property that an assignment of n non-attacking rooks on the complete board has a rook in column i which is not in a square of the board B, then by the principle of inclusion–exclusion we have: -$$ - r_n(B) = \sum_{t=0}^n (-1)^t (m-t)_{n-t} r_t(B'). -$$ - -Euler's totient or phi function, φ(n), is an arithmetic function that counts the number of positive integers less than or equal to n that are relatively prime to n. That is, if n is a positive integer, then φ(n) is the number of integers k in the range 1 ≤ k ≤ n which have no common factor with n other than 1. The principle of inclusion–exclusion is used to obtain a formula for φ(n). Let S be the set {1, …, n} and define the property Pi to be that a number in S is divisible by the prime number pi, for 1 ≤ i ≤ r, where the prime factorization of $n$ is -$$ -n = p_1^{a_1} p_2^{a_2} \cdots p_r^{a_r}. -$$ - -Then, -$$ -\varphi(n) = n - \sum_{i=1}^r \frac{n}{p_i} + \sum_{1 \leqslant i < j \leqslant r} \frac{n}{p_i p_j} - \cdots = n \prod_{i=1}^r \left (1 - \frac{1}{p_i} \right ). -$$ - -In many cases where the principle could give an exact formula (in particular, counting prime numbers using the sieve of Eratosthenes), the formula arising doesn't offer useful content because the number of terms in it is excessive. If each term individually can be estimated accurately, the accumulation of errors may imply that the inclusion–exclusion formula isn't directly applicable. In number theory, this difficulty was addressed by Viggo Brun.
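The totient formula above can likewise be checked against a direct count (an added sketch; the small trial-division factorizer stands in for a real one):

```python
from math import gcd

def prime_factors(n):
    # distinct prime factors by trial division; fine for small n
    factors, p = [], 2
    while p * p <= n:
        if n % p == 0:
            factors.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

def phi_product(n):
    result = n
    for p in prime_factors(n):
        result = result * (p - 1) // p
    return result

def phi_direct(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

for n in (12, 97, 360):
    assert phi_product(n) == phi_direct(n)
print(phi_product(360))  # 96
```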
After a slow start, Brun's ideas were taken up by others, and a large variety of sieve methods developed. These, for example, may try to find upper bounds for the "sieved" sets, rather than an exact formula. - -Let A1, ..., An be arbitrary sets and p1, …, pn real numbers in the closed unit interval [0, 1]. Then, for every even number k in {0, …, n}, the indicator functions satisfy the inequality: -$$ -1_{A_1\cup\cdots\cup A_n} \ge \sum_{j=1}^k (-1)^{j-1}\sum_{1\le i_1<\cdots<i_j\le n} 1_{A_{i_1}\cap\cdots\cap A_{i_j}}. -$$ - -To prove the inclusion–exclusion principle by counting, choose an element contained in the union of all the sets and let $A_1, A_2, \dots, A_t$ be the individual sets containing it. (Note that t > 0.) Since the element is counted precisely once by the left-hand side of equation (), we need to show that it is counted precisely once by the right-hand side. On the right-hand side, the only non-zero contributions occur when all the subsets in a particular term contain the chosen element, that is, all the subsets are selected from $A_1, A_2, \dots, A_t$. The contribution is one for each of these sets (plus or minus depending on the term) and therefore is just the (signed) number of these subsets used in the term. We then have: - -\begin{align} - -|\{A_i \mid 1 \leqslant i \leqslant t\}| &- |\{A_i \cap A_j \mid 1 \leqslant i < j \leqslant t\}| + \cdots + (-1)^{t+1}|\{A_1 \cap A_2 \cap \cdots \cap A_t\}| = \binom{t}{1} - \binom{t}{2} + \cdots + (-1)^{t+1}\binom{t}{t}. - -\end{align} - -By the binomial theorem, -$$ - 0 = (1-1)^t= \binom{t}{0} - \binom{t}{1} + \binom{t}{2} - \cdots + (-1)^t\binom{t}{t}. -$$ - -Using the fact that $\binom{t}{0} = 1$ and rearranging terms, we have -$$ -1 = \binom{t}{1} - \binom{t}{2} + \cdots + (-1)^{t+1}\binom{t}{t}, -$$ - -and so, the chosen element is counted only once by the right-hand side of equation (). - -An algebraic proof can be obtained using indicator functions (also known as characteristic functions). The indicator function of a subset S of a set X is the function - -\begin{align} - -&\mathbf{1}_S: X \to \{0,1\} \\ - -&\mathbf{1}_S(x) = \begin{cases} 1 & x \in S\\ 0 & x \notin S \end{cases} - -\end{align} - -If $A$ and $B$ are two subsets of $X$, then -$$ -\mathbf{1}_A \cdot\mathbf{1}_B = \mathbf{1}_{A\cap B}. -$$ - -Let A denote the union $\bigcup_{i=1}^n A_i$ of the sets A1, …, An. To prove the inclusion–exclusion principle in general, we first verify the identity - -{{NumBlk|:|$\mathbf{1}_A =\sum_{k=1}^n (-1)^{k-1} \sum_{I\subset\{1,\ldots,n\} \atop|I| = k} \mathbf{1}_{A_I}$|}} - -for indicator functions, where: -$$ -A_I = \bigcap_{i\in I} A_i. -$$ - -The following function -$$ -\left (\mathbf{1}_A-\mathbf{1}_{A_1} \right )\left (\mathbf{1}_A-\mathbf{1}_{A_2} \right )\cdots \left (\mathbf{1}_A-\mathbf{1}_{A_n} \right ), -$$ - -is identically zero because: if x is not in A, then all factors are 0 − 0 = 0; and otherwise, if x does belong to some Am, then the corresponding mth factor is 1 − 1 = 0. By expanding the product on the left-hand side, equation () follows. - -To prove the inclusion–exclusion principle for the cardinality of sets, sum the equation () over all x in the union of A1, …, An. To derive the version used in probability, take the expectation in (). In general, integrate the equation () with respect to μ. Always use linearity in these derivations. diff --git a/wiki/wikipedia/2278.txt b/wiki/wikipedia/2278.txt deleted file mode 100644 index 7ec270ccc5ccbf6f1ed5254eb4c28e318def3bff..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2278.txt +++ /dev/null @@ -1,7 +0,0 @@ -A quasi-triangulation is a subdivision of a geometric object into simplices, where vertices are not points but arbitrary sloped line segments.
This division is not a triangulation in the geometric sense. It is a topological triangulation, however. A quasi-triangulation may have some of the characteristics of a Delaunay triangulation. - -[[File:Quasitriangulation.png|thumb|right|250px|Quasi-triangulation. Line segments of the topology (quasi-vertices) are shown in black, gray — quasi-edges, white — faces. a — a convex quadrangular edge, - -b — a nonconvex quadrangular edge, c — a triangular edge, d — a degenerate edge, a and e — parallel edges, - -f — a quasi-edge contains a part of the line segment.]] diff --git a/wiki/wikipedia/2279.txt b/wiki/wikipedia/2279.txt deleted file mode 100644 index 3dd5286c434dd20181fc8522afd69877497a2101..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2279.txt +++ /dev/null @@ -1,230 +0,0 @@ -In algebra, Lagrange's identity, named after Joseph Louis Lagrange, is: - - - -\begin{align} - -\biggl( \sum_{k=1}^n a_k^2\biggr) \biggl(\sum_{k=1}^n b_k^2\biggr) - \biggl(\sum_{k=1}^n a_k b_k\biggr)^2 & = \sum_{i=1}^{n-1} \sum_{j=i+1}^n (a_i b_j - a_j b_i)^2 \\ - -& \biggl(= \frac{1}{2} \sum_{i=1}^n \sum_{j=1,j\neq i}^n (a_i b_j - a_j b_i)^2\biggr), - -\end{align} - - - -which applies to any two sets {a1, a2, . . ., an} and {b1, b2, . . ., bn} of real or complex numbers (or more generally, elements of a commutative ring). This identity is a generalisation of the Brahmagupta–Fibonacci identity and a special form of the Binet–Cauchy identity. - -In a more compact vector notation, Lagrange's identity is expressed as: -$$ -\| \mathbf a \|^2 \ \| \mathbf b \|^2 - (\mathbf {a \cdot b } )^2 = \sum_{1 \le i < j \le n} \left(a_ib_j-a_jb_i \right)^2 \ , -$$ - -where a and b are n-dimensional vectors with components that are real numbers. The extension to complex numbers requires the interpretation of the dot product as an inner product or Hermitian dot product. Explicitly, for complex numbers, Lagrange's identity can be written in the form: -$$ -\biggl( \sum_{k=1}^n |a_k|^2\biggr) \biggl(\sum_{k=1}^n |b_k|^2\biggr) - \biggl|\sum_{k=1}^n a_k b_k\biggr|^2 = \sum_{i=1}^{n-1} \sum_{j=i+1}^n |a_i \overline{b}_j - a_j \overline{b}_i|^2 -$$ - -involving the absolute value. - -Since the right-hand side of the identity is clearly non-negative, it implies Cauchy's inequality in the finite-dimensional real coordinate space Rn and its complex counterpart Cn. - -Geometrically, the identity asserts that the square of the volume of the parallelepiped spanned by a set of vectors is the Gram determinant of the vectors. - -In terms of the wedge product, Lagrange's identity can be written -$$ -(a \cdot a)(b \cdot b) - (a \cdot b)^2 = (a \wedge b) \cdot (a \wedge b). -$$ - -Hence, it can be seen as a formula which gives the length of the wedge product of two vectors, which is the area of the parallelogram they define, in terms of the dot products of the two vectors, as -$$ -\|a \wedge b\| = \sqrt{(a \cdot a)(b \cdot b) - (a \cdot b)^2} = \sqrt{\|a\|^2\|b\|^2 - (a \cdot b)^2}. 
-$$ - -In three dimensions, Lagrange's identity asserts that if a and b are vectors in R3 with lengths |a| and |b|, then Lagrange's identity can be written in terms of the cross product and dot product: -$$ - |\mathbf{a}|^2 |\mathbf{b}|^2 - (\mathbf {a \cdot b})^2 = |\mathbf {a \times b}|^2 -$$ - -Using the definition of angle based upon the dot product (see also Cauchy–Schwarz inequality), the left-hand side is -$$ -\left|\mathbf{a}\right|^2 \left|\mathbf{b}\right|^2 \left( 1- \cos^2\theta\right) = \left|\mathbf{a}\right|^2 \left|\mathbf{b}\right|^2\sin^2\theta -$$ - -where θ is the angle formed by the vectors a and b. The area of a parallelogram with sides |a| and |b| and angle θ is known in elementary geometry to be -$$ -\left|\mathbf{a}\right| \left|\mathbf{b}\right| \left|\sin\theta\right|, -$$ - -so the left-hand side of Lagrange's identity is the squared area of the parallelogram. The cross product appearing on the right-hand side is defined by -$$ -\mathbf{a}\times\mathbf{b} = (a_2b_3-a_3b_2)\mathbf{i} + (a_3b_1-a_1b_3)\mathbf{j} + (a_1b_2-a_2b_1)\mathbf{k} -$$ - -which is a vector whose components are equal in magnitude to the areas of the projections of the parallelogram onto the yz, zx, and xy planes, respectively. - -For a and b as vectors in R7, Lagrange's identity takes on the same form as in the case of R3: -$$ -|\mathbf{a}|^2 |\mathbf{b}|^2 -|\mathbf{a} \cdot \mathbf{b}|^2 = |\mathbf{a} \times \mathbf{b}|^2 \ , -$$ - -However, the cross product in 7 dimensions does not share all the properties of the cross product in 3 dimensions. For example, the direction of a × b in 7 dimensions may be the same as c × d even though c and d are linearly independent of a and b. Also the seven-dimensional cross product is not compatible with the Jacobi identity. - -For quaternions p and q, Lagrange's identity reflects the multiplicativity of the quaternion norm, -$$ -|pq| = |p| |q|. -$$ - -The quaternions p and q are called imaginary if their scalar part is zero; equivalently, if -$$ -p = \mathbf{v},\quad q=\mathbf{w}. -$$ - -Lagrange's identity is just the multiplicativity of the norm of imaginary quaternions, -$$ -|\mathbf{v}\mathbf{w}|^2 = |\mathbf{v}|^2|\mathbf{w}|^2, -$$ - -since, by definition, -$$ -|\mathbf{v}\mathbf{w}|^2 = (\mathbf{v}\cdot\mathbf{w})^2 + |\mathbf{v}\times\mathbf{w}|^2. -$$ - -The vector form follows from the Binet–Cauchy identity by setting ci = ai and di = bi. The second version follows by letting ci and di denote the complex conjugates of ai and bi, respectively. - -Here is also a direct proof. The expansion of the first term on the left side is: - -{{NumBlk|:| \left( \sum_{k=1}^n a_k^2\right) \left(\sum_{k=1}^n b_k^2\right) = - -\sum_{i=1}^n \sum_{j=1}^n a_i^2 b_j^2 - -= \sum_{k=1}^n a_k^2 b_k^2 - -+ \sum_{i=1}^{n-1} \sum_{j=i+1}^n a_i^2 b_j^2 - -+ \sum_{j=1}^{n-1} \sum_{i=j+1}^n a_i^2 b_j^2 \ ,|}} - -which means that the product of a column of as and a row of bs yields (a sum of elements of) a square of abs, which can be broken up into a diagonal and a pair of triangles on either side of the diagonal. - -The second term on the left side of Lagrange's identity can be expanded as: - -{{NumBlk|:| \left(\sum_{k=1}^n a_k b_k\right)^2 = - -\sum_{k=1}^n a_k^2 b_k^2 + 2\sum_{i=1}^{n-1} \sum_{j=i+1}^n a_i b_i a_j b_j \ ,|}} - -which means that a symmetric square can be broken up into its diagonal and a pair of equal triangles on either side of the diagonal.
- -To expand the summation on the right side of Lagrange's identity, first expand the square within the summation: -$$ - \sum_{i=1}^{n-1} \sum_{j=i+1}^n (a_i b_j - a_j b_i)^2 = \sum_{i=1}^{n-1} \sum_{j=i+1}^n \left(a_i^2 b_j^2 + a_j^2 b_i^2 - 2 a_i b_j a_j b_i\right). -$$ - -Distribute the summation on the right side, -$$ - \sum_{i=1}^{n-1} \sum_{j=i+1}^n (a_i b_j - a_j b_i)^2 = \sum_{i=1}^{n-1} \sum_{j=i+1}^n a_i^2 b_j^2 + \sum_{i=1}^{n-1} \sum_{j=i+1}^n a_j^2 b_i^2 - 2 \sum_{i=1}^{n-1} \sum_{j=i+1}^n a_i b_j a_j b_i . -$$ - -Now exchange the indices i and j of the second term on the right side, and permute the b factors of the third term, yielding: - -{{NumBlk|:|$ \sum_{i=1}^{n-1} \sum_{j=i+1}^n (a_i b_j - a_j b_i)^2 = \sum_{i=1}^{n-1} \sum_{j=i+1}^n a_i^2 b_j^2 + \sum_{j=1}^{n-1} \sum_{i=j+1}^n a_i^2 b_j^2 - 2 \sum_{i=1}^{n-1} \sum_{j=i+1}^n a_i b_i a_j b_j \ .$|}} - -Back to the left side of Lagrange's identity: it has two terms, given in expanded form by Equations () and (). The first term on the right side of Equation () ends up canceling out the first term on the right side of Equation (), yielding - -{{NumBlk|:|\sum_{i=1}^{n-1} \sum_{j=i+1}^n a_i^2 b_j^2 - -+ \sum_{j=1}^{n-1} \sum_{i=j+1}^n a_i^2 b_j^2 - 2\sum_{i=1}^{n-1} \sum_{j=i+1}^n a_i b_i a_j b_j - -|3=() − ()|RawN=yes}} - -which is the same as Equation (), so Lagrange's identity is indeed an identity, Q.E.D.. - -Normed division algebras require that the norm of the product is equal to the product of the norms. Lagrange's identity exhibits this equality. - -The product identity used as a starting point here, is a consequence of the norm of the product equality with the product of the norm for scator algebras. This proposal, originally presented in the context of a deformed Lorentz metric, is based on a transformation stemming from the product operation and magnitude definition in hyperbolic scator algebra. - -Lagrange's identity can be proved in a variety of ways. - -Most derivations use the identity as a starting point and prove in one way or another that the equality is true. In the present approach, Lagrange's identity is actually derived without assuming it a priori. - -Let $a_{i},b_{i}\in\mathbb{C}$ be complex numbers and the overbar - -represents complex conjugate. - -The product identity $\prod_{i=1}^{n}\left(1-a_{i}\bar{a}_{i}-b_{i}\bar{b}_{i}+a_{i}\bar{a}_{i}b_{i}\bar{b}_{i}\right)=\prod_{i=1}^{n}\left(1-a_{i}\bar{a}_{i}\right)\prod_{i=1}^{n}\left(1-b_{i}\bar{b}_{i}\right)$ reduces to the complex Lagrange's identity when fourth order terms, in a series expansion, are considered. - -In order to prove it, expand the product on the LHS of the product identity in terms of - -series up to fourth order. 
To this end, recall that products of the form $\left(1+x_{i}\right)$ can be expanded in terms of sums as -$$ -\prod_{i=1}^{n}\left(1+x_{i}\right) = 1+\sum_{i=1}^{n}x_{i}+\sum_{1\le i<j\le n}x_{i}x_{j}+\cdots, -$$ - -where the dots stand for higher-order terms. The expansion of the LHS of the product identity up to fourth order is -$$ -\prod_{i=1}^{n}\left(1-a_{i}\bar{a}_{i}-b_{i}\bar{b}_{i}+a_{i}\bar{a}_{i}b_{i}\bar{b}_{i}\right) = 1-\sum_{i=1}^{n}\left(a_{i}\bar{a}_{i}+b_{i}\bar{b}_{i}\right)+\sum_{i=1}^{n}a_{i}\bar{a}_{i}b_{i}\bar{b}_{i} +\sum_{1\le i<j\le n}\left(a_{i}\bar{a}_{i}+b_{i}\bar{b}_{i}\right)\left(a_{j}\bar{a}_{j}+b_{j}\bar{b}_{j}\right)+\cdots -$$ - -The two factors on the RHS are also written in terms of series -$$ -\prod_{i=1}^{n}\left(1-a_{i}\bar{a}_{i}\right)\prod_{i=1}^{n}\left(1-b_{i}\bar{b}_{i}\right)=\left(1-\sum_{i=1}^{n}a_{i}\bar{a}_{i}+\sum_{1\le i<j\le n}a_{i}\bar{a}_{i}a_{j}\bar{a}_{j}-\cdots\right)\left(1-\sum_{i=1}^{n}b_{i}\bar{b}_{i}+\sum_{1\le i<j\le n}b_{i}\bar{b}_{i}b_{j}\bar{b}_{j}-\cdots\right). -$$ - -The product of this expression up to fourth order is -$$ -\prod_{i=1}^{n}\left(1-a_{i}\bar{a}_{i}\right)\prod_{i=1}^{n}\left(1-b_{i}\bar{b}_{i}\right)=1-\sum_{i=1}^{n}\left(a_{i}\bar{a}_{i}+b_{i}\bar{b}_{i}\right) +\left(\sum_{i=1}^{n}a_{i}\bar{a}_{i}\right)\left(\sum_{i=1}^{n}b_{i}\bar{b}_{i}\right)+\sum_{1\le i<j\le n}a_{i}\bar{a}_{i}a_{j}\bar{a}_{j}+\sum_{1\le i<j\le n}b_{i}\bar{b}_{i}b_{j}\bar{b}_{j}+\cdots -$$ - -Substitution of these two results in the product identity gives -$$ -\sum_{i=1}^{n}a_{i}\bar{a}_{i}b_{i}\bar{b}_{i}+\sum_{1\le i<j\le n}\left(a_{i}\bar{a}_{i}b_{j}\bar{b}_{j}+a_{j}\bar{a}_{j}b_{i}\bar{b}_{i}\right)=\left(\sum_{i=1}^{n}a_{i}\bar{a}_{i}\right)\left(\sum_{i=1}^{n}b_{i}\bar{b}_{i}\right). -$$ - -The product of two conjugate series can be expressed as series involving the product of conjugate terms. The conjugate series product is $\left(\sum_{i=1}^{n}x_{i}\right)\left(\sum_{i=1}^{n}\bar{x}_{i}\right)=\sum_{i=1}^{n}x_{i}\bar{x}_{i}+\sum_{1\le i<j\le n}\left(x_{i}\bar{x}_{j}+\bar{x}_{i}x_{j}\right)$, so with $x_{i}=a_{i}b_{i}$ the first sum on the LHS becomes -$$ -\left(\sum_{i=1}^{n}a_{i}b_{i}\right)\left(\sum_{i=1}^{n}\overline{a_{i}b_{i}}\right)-\sum_{1\le i<j\le n}\left(a_{i}b_{i}\bar{a}_{j}\bar{b}_{j}+\bar{a}_{i}\bar{b}_{i}a_{j}b_{j}\right)+\sum_{1\le i<j\le n}\left(a_{i}\bar{a}_{i}b_{j}\bar{b}_{j}+a_{j}\bar{a}_{j}b_{i}\bar{b}_{i}\right)=\left(\sum_{i=1}^{n}a_{i}\bar{a}_{i}\right)\left(\sum_{i=1}^{n}b_{i}\bar{b}_{i}\right). -$$ - -The terms of the last two series on the LHS are grouped as -$$ -a_{i}\bar{a}_{i}b_{j}\bar{b}_{j}+a_{j}\bar{a}_{j}b_{i}\bar{b}_{i}-a_{i}b_{i}\bar{a}_{j}\bar{b}_{j}-\bar{a}_{i}\bar{b}_{i}a_{j}b_{j}=\left(a_{i}\bar{b}_{j}-a_{j}\bar{b}_{i}\right)\left(\bar{a}_{i}b_{j}-\bar{a}_{j}b_{i}\right), -$$ in order to obtain the complex Lagrange's identity: -$$ -\left(\sum_{i=1}^{n}a_{i}b_{i}\right)\left(\sum_{i=1}^{n}\overline{a_{i}b_{i}}\right)+\sum_{1\le i<j\le n}\left(a_{i}\bar{b}_{j}-a_{j}\bar{b}_{i}\right)\left(\bar{a}_{i}b_{j}-\bar{a}_{j}b_{i}\right)=\left(\sum_{i=1}^{n}a_{i}\bar{a}_{i}\right)\left(\sum_{i=1}^{n}b_{i}\bar{b}_{i}\right). -$$ - -In terms of the moduli, -$$ -\left|\sum_{i=1}^{n}a_{i}b_{i}\right|^{2}+\sum_{1\le i<j\le n}\left|a_{i}\bar{b}_{j}-a_{j}\bar{b}_{i}\right|^{2}=\left(\sum_{i=1}^{n}|a_{i}|^{2}\right)\left(\sum_{i=1}^{n}|b_{i}|^{2}\right). -$$ - -Lagrange's identity for complex numbers has been obtained from a straightforward - -product identity. A derivation for the reals is obviously even more succinct. Since the Cauchy–Schwarz inequality is a particular case of Lagrange's identity, this - -proof is yet another way to obtain the CS inequality. Higher order terms in the series produce novel identities. diff --git a/wiki/wikipedia/228.txt b/wiki/wikipedia/228.txt deleted file mode 100644 index 7ca244c766875b975df09123c9002dd99669a59c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/228.txt +++ /dev/null @@ -1,32 +0,0 @@ -In probability theory, an event is said to happen almost surely (sometimes abbreviated as a.s.) if it happens with probability 1 (or Lebesgue measure 1). In other words, the set of possible exceptions may be non-empty, but it has probability 0. The concept is analogous to the concept of "almost everywhere" in measure theory. - -In probability experiments on a finite sample space, there is often no difference between almost surely and surely (since having a probability of 1 often entails including all the sample points). However, this distinction becomes important when the sample space is an infinite set, because an infinite set can have non-empty subsets of probability 0. - -Some examples of the use of this concept include the strong and uniform versions of the law of large numbers, and the continuity of the paths of Brownian motion. - -The terms almost certainly (a.c.) and almost always (a.a.) are also used.
Almost never describes the opposite of almost surely: an event that happens with probability zero happens almost never. - -Let $(\Omega,\mathcal{F},P)$ be a probability space. An event $E \in \mathcal{F}$ happens almost surely if $P(E)=1$. Equivalently, $E$ happens almost surely if the probability of $E$ not occurring is zero: $P(E^C) = 0$. More generally, any event $E \subseteq \Omega$ (not necessarily in $\mathcal{F}$) happens almost surely if $E^C$ is contained in a null set: a subset $N$ in $\mathcal F$ such that $P(N)=0$. The notion of almost sureness depends on the probability measure $P$. If it is necessary to emphasize this dependence, it is customary to say that the event $E$ occurs P-almost surely, or almost surely $\left(\!P\right)$. - -In general, an event can happen "almost surely", even if the probability space in question includes outcomes which do not belong to the event—as the following examples illustrate. - -Imagine throwing a dart at a unit square (a square with an area of 1) so that the dart always hits an exact point in the square, in such a way that each point in the square is equally likely to be hit. Since the square has area 1, the probability that the dart will hit any particular subregion of the square is equal to the area of that subregion. For example, the probability that the dart will hit the right half of the square is 0.5, since the right half has area 0.5. - -Next, consider the event that the dart hits exactly a point in the diagonals of the unit square. Since the area of the diagonals of the square is 0, the probability that the dart will land exactly on a diagonal is 0. That is, the dart will almost never land on a diagonal (equivalently, it will almost surely not land on a diagonal), even though the set of points on the diagonals is not empty, and a point on a diagonal is no less possible than any other point. - -Consider the case where a (possibly biased) coin is tossed, corresponding to the probability space $(\{H,T\}, 2^{\{H, T\}}, P)$, where the event $\{H\}$ occurs if a head is flipped, and $\{T\}$ if a tail is flipped. For this particular coin, it is assumed that the probability of flipping a head is $P(H) = p\in (0,1)$, from which it follows that the complement event, that of flipping a tail, has probability $P(T) = 1 - p$. - -Now, suppose an experiment were conducted where the coin is tossed repeatedly, with outcomes $\omega_1,\omega_2,\ldots$ and the assumption that each flip's outcome is independent of all the others (i.e., they are independent and identically distributed; i.i.d.). Define the sequence of random variables on the coin toss space, $(X_i)_{i\in\mathbb{N}}$, where $X_i(\omega)=\omega_i$; i.e., each $X_i$ records the outcome of the $i$th flip. - -In this case, any infinite sequence of heads and tails is a possible outcome of the experiment. However, any particular infinite sequence of heads and tails has probability 0 of being the exact outcome of the (infinite) experiment. This is because the i.i.d. assumption implies that the probability of flipping all heads over $n$ flips is simply $P(X_i = H, \ i=1,2,\dots,n)=\left(P(X_1 = H)\right)^n = p^n$. Letting $n\rightarrow\infty$ yields 0, since $p\in (0,1)$ by assumption. The result is the same no matter how much we bias the coin towards heads, so long as we constrain $p$ to be strictly between 0 and 1. In fact, the same result even holds in non-standard analysis—where infinitesimal probabilities are not allowed.
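The vanishing of $p^n$ is quick to see numerically (an added illustration):

```python
# probability of an all-heads run of length n, for a coin biased towards heads
p = 0.9
for n in (10, 100, 1000):
    print(n, p ** n)
# prints roughly 0.35, 2.7e-05, and 1.7e-46: "at least one tails"
# becomes almost sure as the number of flips grows without bound
```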
- -Moreover, the event "the sequence of tosses contains at least one $T$" will also happen almost surely (i.e., with probability 1). - -But if instead of an infinite number of flips, flipping stops after some finite time, say 1,000,000 flips, then the probability of getting an all-heads sequence, $p^{1,000,000}$, would no longer be 0, while the probability of getting at least one tails, $1 - p^{1,000,000}$, would no longer be 1 (i.e., the event is no longer almost sure). - -In asymptotic analysis, a property is said to hold asymptotically almost surely (a.a.s.) if, over a sequence of sets, the probability converges to 1. For instance, in number theory, a large number is asymptotically almost surely composite, by the prime number theorem; and in random graph theory, the statement "$G(n,p_n)$ is connected" (where $G(n,p)$ denotes the graphs on $n$ vertices with edge probability $p$) is true a.a.s. when, for some $\varepsilon > 0$ -$$ -p_n > \frac{(1+\varepsilon) \ln n} n. -$$ - -In number theory, this is referred to as "almost all", as in "almost all numbers are composite". Similarly, in graph theory, this is sometimes referred to as "almost surely". diff --git a/wiki/wikipedia/2280.txt b/wiki/wikipedia/2280.txt deleted file mode 100644 index 34b31c0745d5adf0aac8d5dbbe1695252b542aed..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2280.txt +++ /dev/null @@ -1,26 +0,0 @@ -In mathematics, Blattner's conjecture or Blattner's formula is a description of the discrete series representations of a general semisimple group G in terms of their restricted representations to a maximal compact subgroup K (their so-called K-types). It is named after Robert James Blattner, despite not being formulated as a conjecture by him. - -Blattner's formula says that if a discrete series representation with infinitesimal character λ is restricted to a maximal compact subgroup K, then the representation of K with highest weight μ occurs with multiplicity -$$ -\sum_{w\in W_K}\epsilon(w)Q(w(\mu+\rho_c)-\lambda-\rho_n) -$$ - -where - -Q is the number of ways a vector can be written as a sum of non-compact positive roots - -WK is the Weyl group of K - -ρc is half the sum of the compact roots - -ρn is half the sum of the non-compact roots - -ε is the sign character of WK. - -Blattner's formula is what one gets by formally restricting the Harish-Chandra character formula for a discrete series representation to the maximal torus of a maximal compact group. The problem in proving the Blattner formula is that this only gives the character on the regular elements of the maximal torus, and one also needs to control its behavior on the singular elements. For non-discrete irreducible representations the formal restriction of Harish-Chandra's character formula need not give the decomposition under the maximal compact subgroup: for example, for the principal series representations of SL2 the character is identically zero on the non-singular elements of the maximal compact subgroup, but the representation is not zero on this subgroup. In this case the character is a distribution on the maximal compact subgroup with support on the singular elements. - -Harish-Chandra orally attributed the conjecture to Robert James Blattner as a question Blattner raised, not a conjecture made by Blattner. Blattner did not publish it in any form.
It first appeared in print in Schmid, where it was first referred to as "Blattner's Conjecture," despite the results of that paper having been obtained without knowledge of Blattner's question and notwithstanding Blattner's not having made such a conjecture. Okamoto mentioned a special case of it slightly earlier. - -Schmid proved Blattner's formula in some special cases. - -Schmid showed that Blattner's formula gave an upper bound for the multiplicities of K-representations and proved Blattner's conjecture for groups whose symmetric space is Hermitian; Hecht and Schmid proved Blattner's conjecture for linear semisimple groups. Blattner's conjecture (formula) was also proved by Enright by infinitesimal methods which were totally new and completely different from those of Hecht and Schmid (1975). Part of the impetus for Enright's paper (1979) came from several sources. In Enright (1979), multiplicity formulae are also given for the so-called mock-discrete series representations. Enright's ideas were also used to obtain results on the construction and classification of irreducible Harish-Chandra modules of any real semisimple Lie algebra. diff --git a/wiki/wikipedia/2281.txt b/wiki/wikipedia/2281.txt deleted file mode 100644 index b56a46c1aa2d7333e9a21b7676e6d37382579a12..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2281.txt +++ /dev/null @@ -1,421 +0,0 @@ -The adiabatic theorem is a concept in quantum mechanics. Its original form, due to Max Born and Vladimir Fock (1928), was stated as follows: - -A physical system remains in its instantaneous eigenstate if a given perturbation is acting on it slowly enough and if there is a gap between the eigenvalue and the rest of the Hamiltonian's spectrum. - -In simpler terms, a quantum mechanical system subjected to gradually changing external conditions adapts its functional form, but when subjected to rapidly varying conditions there is insufficient time for the functional form to adapt, so the spatial probability density remains unchanged. - -At some initial time $t_0$ a quantum-mechanical system has an energy given by the Hamiltonian $\hat{H}(t_0)$; the system is in an eigenstate of $\hat{H}(t_0)$ labelled $\psi(x,t_0)$. Changing conditions modify the Hamiltonian in a continuous manner, resulting in a final Hamiltonian $\hat{H}(t_1)$ at some later time $t_1$. The system will evolve according to the time-dependent Schrödinger equation, to reach a final state $\psi(x,t_1)$. The adiabatic theorem states that the modification to the system depends critically on the time $\tau = t_1 - t_0$ during which the modification takes place. - -For a truly adiabatic process we require $\tau \to \infty$; in this case the final state $\psi(x,t_1)$ will be an eigenstate of the final Hamiltonian $\hat{H}(t_1) $, with a modified configuration: -$$ -|\psi(x,t_1)|^2 \neq |\psi(x,t_0)|^2 . -$$ - -The degree to which a given change approximates an adiabatic process depends on both the energy separation between $\psi(x,t_0)$ and adjacent states, and the ratio of the interval $\tau$ to the characteristic time-scale of the evolution of $\psi(x,t_0)$ for a time-independent Hamiltonian, $\tau_{int} = 2\pi\hbar/E_0$, where $E_0$ is the energy of $\psi(x,t_0)$. - -Conversely, in the limit $\tau \to 0$ we have infinitely rapid, or diabatic passage; the configuration of the state remains unchanged:
-$$ - -The so-called "gap condition" included in Born and Fock's original definition given above refers to a requirement that the spectrum of $\hat{H}$ is discrete and nondegenerate, such that there is no ambiguity in the ordering of the states (one can easily establish which eigenstate of $\hat{H}(t_1)$ corresponds to $\psi(t_0)$). In 1999 J. E. Avron and A. Elgart reformulated the adiabatic theorem to adapt it to situations without a gap. - -Note that the term "adiabatic" is traditionally used in thermodynamics to describe processes without the exchange of heat between system and environment (see adiabatic process), more precisely these processes are usually faster than the timescale of heat exchange. (For example, a pressure wave is adiabatic with respect to a heat wave, which is not adiabatic.) Adiabatic in the context of thermodynamics is often used as a synonym for fast process. - -The Classical and Quantum mechanics definition is closer instead to the thermodynamical concept of a quasistatic process, which are processes that are almost always at equilibrium (i.e. that are slower than the internal energy exchange interactions time scales, namely a "normal" atmospheric heat wave is quasi-static and a pressure wave is not). Adiabatic in the context of Mechanics is often used as a synonym for slow process. - -In the quantum world adiabatic means for example that the time scale of electrons and photon interactions is much faster or almost instantaneous with respect to the average time scale of electrons and photon propagation. Therefore, we can model the interactions as a piece of continuous propagation of electrons and photons (i.e. states at equilibrium) plus a quantum jump between states (i.e. instantaneous). - -The adiabatic theorem in this heuristic context tells essentially that quantum jumps are preferably avoided and the system tries to conserve the state and the quantum numbers. - -The Quantum mechanical concept of adiabatic is related to Adiabatic invariant, it is often used in the Old quantum theory and has no direct relation with heat exchange. - -As an example, consider a pendulum oscillating in a vertical plane. If the support is moved, the mode of oscillation of the pendulum will change. If the support is moved sufficiently slowly, the motion of the pendulum relative to the support will remain unchanged. A gradual change in external conditions allows the system to adapt, such that it retains its initial character. The detailed classical example is available in the Adiabatic invariant page and here. - -The classical nature of a pendulum precludes a full description of the effects of the adiabatic theorem. As a further example consider a quantum harmonic oscillator as the spring constant $k$ is increased. Classically this is equivalent to increasing the stiffness of a spring; quantum-mechanically the effect is a narrowing of the potential energy curve in the system Hamiltonian. - -If $k$ is increased adiabatically $\left(\frac{dk}{dt} \to 0\right)$ then the system at time $t$ will be in an instantaneous eigenstate $\psi(t)$ of the current Hamiltonian $\hat{H}(t)$, corresponding to the initial eigenstate of $\hat{H}(0)$. For the special case of a system like the quantum harmonic oscillator described by a single quantum number, this means the quantum number will remain unchanged. 
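To put numbers on the spring-constant example, the following sketch (assuming $\hbar = m = 1$ and NumPy available) computes the overlap between the old ground state and the new one after an instantaneous quadrupling of $k$. A truly adiabatic change would keep the system in the instantaneous ground state with probability 1, whereas after this sudden change part of the population necessarily ends up in excited states of the new Hamiltonian, as discussed next:

```python
import numpy as np

def ground_state(x, omega):
    # harmonic-oscillator ground state with hbar = m = 1
    return (omega / np.pi) ** 0.25 * np.exp(-omega * x ** 2 / 2)

x = np.linspace(-10, 10, 20001)
psi_old = ground_state(x, omega=1.0)  # spring constant k, omega = sqrt(k/m)
psi_new = ground_state(x, omega=2.0)  # quadrupling k doubles omega
survival = np.trapz(psi_new * psi_old, x) ** 2
print(survival)  # ~0.943; the remaining ~5.7% occupies excited states
```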
Figure 1 shows how a harmonic oscillator, initially in its ground state, $n = 0$, remains in the ground state as the potential energy curve is compressed; the functional form of the state adapting to the slowly varying conditions. - -For a rapidly increased spring constant, the system undergoes a diabatic process $\left(\frac{dk}{dt} \to \infty\right)$ in which the system has no time to adapt its functional form to the changing conditions. While the final state must look identical to the initial state $\left(|\psi(t)|^2 = |\psi(0)|^2\right)$ for a process occurring over a vanishing time period, there is no eigenstate of the new Hamiltonian, $\hat{H}(t)$, that resembles the initial state. The final state is composed of a linear superposition of many different eigenstates of $\hat{H}(t)$ which sum to reproduce the form of the initial state. - -For a more widely applicable example, consider a 2-level atom subjected to an external magnetic field. The states, labelled $|1\rangle$ and $|2\rangle$ using bra–ket notation, can be thought of as atomic angular-momentum states, each with a particular geometry. For reasons that will become clear these states will henceforth be referred to as the diabatic states. The system wavefunction can be represented as a linear combination of the diabatic states: -$$ -|\Psi\rangle = c_1(t)|1\rangle + c_2(t)|2\rangle. -$$ - -With the field absent, the energetic separation of the diabatic states is equal to $\hbar\omega_0$; the energy of state $|1\rangle$ increases with increasing magnetic field (a low-field-seeking state), while the energy of state $|2\rangle$ decreases with increasing magnetic field (a high-field-seeking state). Assuming the magnetic-field dependence is linear, the Hamiltonian matrix for the system with the field applied can be written - -\mathbf{H} = \begin{pmatrix} - -\mu B(t)-\hbar\omega_0/2 & a \\ - -a^* & \hbar\omega_0/2-\mu B(t) - -\end{pmatrix} - -where $\mu$ is the magnetic moment of the atom, assumed to be the same for the two diabatic states, and $a$ is some time-independent coupling between the two states. The diagonal elements are the energies of the diabatic states ($E_1(t)$ and $E_2(t)$), however, as $\mathbf{H}$ is not a diagonal matrix, it is clear that these states are not eigenstates of the new Hamiltonian that includes the magnetic field contribution. - -The eigenvectors of the matrix $\mathbf{H}$ are the eigenstates of the system, which we will label $|\phi_1(t)\rangle$ and $|\phi_2(t)\rangle$, with corresponding eigenvalues - -\begin{align} - -\varepsilon_1(t) &= -\frac{1}{2}\sqrt{4a^2 + (\hbar\omega_0 - 2\mu B(t))^2} \\[4pt] - -\varepsilon_2(t) &= +\frac{1}{2}\sqrt{4a^2 + (\hbar\omega_0 - 2\mu B(t))^2}. - -\end{align} - -It is important to realise that the eigenvalues $\varepsilon_1(t)$ and $\varepsilon_2(t)$ are the only allowed outputs for any individual measurement of the system energy, whereas the diabatic energies $E_1(t)$ and $E_2(t)$ correspond to the expectation values for the energy of the system in the diabatic states $|1\rangle$ and $|2\rangle$. - -Figure 2 shows the dependence of the diabatic and adiabatic energies on the value of the magnetic field; note that for non-zero coupling the eigenvalues of the Hamiltonian cannot be degenerate, and thus we have an avoided crossing. 
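The avoided crossing is easy to reproduce by diagonalizing the matrix $\mathbf{H}$ above over a range of fields (an added sketch in natural units, $\hbar\omega_0 = \mu = 1$):

```python
import numpy as np

hbar_omega0, mu, a = 1.0, 1.0, 0.1
for B in np.linspace(0.0, 1.0, 5):
    H = np.array([[mu * B - hbar_omega0 / 2, a],
                  [a, hbar_omega0 / 2 - mu * B]])
    lo, hi = np.linalg.eigvalsh(H)
    # the gap equals sqrt(4 a^2 + (hbar_omega0 - 2 mu B)^2) and never closes
    print(f"B = {B:.2f}   gap = {hi - lo:.4f}")
# the minimum gap 2|a| occurs at the crossing field mu B = hbar_omega0 / 2
```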
If an atom is initially in state $|\phi_2(t_0)\rangle$ in zero magnetic field (on the red curve, at the extreme left), an adiabatic increase in magnetic field $\left(\frac{dB}{dt} \to 0\right)$ will ensure the system remains in an eigenstate of the Hamiltonian $|\phi_2(t)\rangle$ throughout the process (follows the red curve). A diabatic increase in magnetic field $\left(\frac{dB}{dt}\to \infty\right)$ will ensure the system follows the diabatic path (the dotted blue line), such that the system undergoes a transition to state $|\phi_1(t_1)\rangle$. For finite magnetic field slew rates $\left(0 < \frac{dB}{dt} < \infty\right)$ there will be a finite probability of finding the system in either of the two eigenstates. See below for approaches to calculating these probabilities. - -These results are extremely important in atomic and molecular physics for control of the energy-state distribution in a population of atoms or molecules. - -Under a slowly changing Hamiltonian $H(t)$ with instantaneous eigenstates $| n(t) \rangle$ and corresponding energies $E_n(t)$, a quantum system evolves from the initial state - -| \psi(0) \rangle = \sum_n c_n(0) | n(0) \rangle - -to the final state - -| \psi(t) \rangle = \sum_n c_n(t) | n(t) \rangle , - -where the coefficients undergo the change of phase - -c_n(t) = c_n(0) e^{i \theta_n(t)} e^{i \gamma_n(t)} - -with the dynamical phase - -\theta_m(t) = \frac{-1}{\hbar} \int_0^t E_m(t') dt' - -and geometric phase - -\gamma_m(t) = i \int_0^t \langle m(t') | \dot{m}(t') \rangle dt' . - -In particular, $|c_n(t)|^2 = |c_n(0)|^2$, so if the system begins in an eigenstate of $H(0)$, it remains in an eigenstate of $H(t)$ during the evolution with a change of phase only. - -This proof is partly inspired by one given by Sakurai in Modern Quantum Mechanics. - -The instantaneous eigenstates $| n(t) \rangle$ and energies $E_n(t)$, by assumption, satisfy the time-independent Schrödinger equation - -H(t) | n(t) \rangle = E_n(t) | n(t) \rangle - -at all times $t$. Thus, they constitute a basis that can be used to expand the state - -| \psi(t) \rangle = \sum_n c_n(t) | n(t) \rangle - -at any time $t$. The evolution of the system is governed by the time-dependent Schrödinger equation - -i \hbar |\dot{\psi}(t) \rangle = H(t) | \psi(t) \rangle, - -where $\dot{} = d / dt$ (see ). Insert the expansion of $| \psi(t) \rangle$, use $H(t) | n(t) \rangle = E_n(t) | n(t) \rangle$, differentiate with the product rule, take the inner product with $| m(t) \rangle$ and use orthonormality of the eigenstates to obtain - -i \hbar \dot{c}_m(t) + i \hbar \sum_n c_n(t) \langle m(t) | \dot{n}(t) \rangle = c_m(t) E_m(t) . - -This coupled first-order differential equation is exact and expresses the time-evolution of the coefficients in terms of inner products $\langle m(t) | \dot{n} (t) \rangle$ between the eigenstates and the time-differentiated eigenstates. But it is possible to re-express the inner products for $m \neq n$ in terms of matrix elements of the time-differentiated Hamiltonian $\dot{H}(t)$. To do so, differentiate both sides of the time-independent Schrödinger equation with respect to time using the product rule to get - -\dot{H}(t)|n(t)\rangle + H(t)|\dot{n}(t)\rangle = \dot{E}_n(t) |n(t)\rangle + E_n(t) |\dot{n}(t)\rangle . - -Again take the inner product with $| m(t) \rangle$ and use $\langle m(t) | H(t) = E_m(t) \langle m(t) |$ and orthonormality to find - -\langle m(t) | \dot{n}(t) \rangle = - \frac{\langle m(t) | \dot{H}(t) | n(t) \rangle}{E_m(t) - E_n(t)} - -\qquad (m \neq n). 
- -Insert this into the differential equation for the coefficients to obtain - -\dot{c}_m(t) + \left(\frac{i}{\hbar} E_m(t) + \langle m(t) | \dot{m}(t) \rangle \right) c_m(t) = \sum_{n \neq m} \frac{\langle m(t) | \dot{H} | n(t) \rangle}{E_m(t) - E_n(t)} c_n(t). - -This differential equation describes the time-evolution of the coefficients, but now in terms of matrix elements of $\dot{H}(t)$. To arrive at the adiabatic theorem, neglect the right hand side. This is valid if the rate of change of the Hamiltonian $\dot{H}(t)$ is small and there is a finite gap $E_m(t) - E_n(t) \neq 0$ between the energies. This is known as the adiabatic approximation. Under the adiabatic approximation, - -\dot{c}_m(t) = i \left(-\frac{E_m(t)}{\hbar} + i \langle m(t) | \dot{m}(t) \rangle \right) c_m(t) - -which integrates precisely to the adiabatic theorem - -c_m(t) = c_m(0) e^{i \theta_m(t)} e^{i \gamma_m(t)} - -with the phases defined in the statement of the theorem. - -The dynamical phase $\theta_m(t)$ is real because it involves an integral over a real energy. To see that the geometric phase $\gamma_m(t)$ is real, too, differentiate the normalization $\langle m(t) | m(t) \rangle = 1$ of the eigenstates and use the product rule to find that - -0 - -= \frac{d}{dt} \Bigl ( \langle m(t) | m(t) \rangle \Bigr ) - -= \langle \dot{m}(t) | m(t) \rangle + \langle m(t) | \dot{m}(t) \rangle - -= \langle m(t) | \dot{m}(t) \rangle^* + \langle m(t) | \dot{m}(t) \rangle - -= 2 \operatorname{Re} \Bigl ( \langle m(t) | \dot{m}(t) \rangle \Bigr ) - -. - -Thus, $\langle m(t) | \dot{m}(t) \rangle $ is purely imaginary, so $i \langle m(t) | \dot{m}(t) \rangle $ and thus the geometric phase $\gamma_m(t) $ are purely real. - -Proof with the details of the adiabatic approximation - -We are going to formulate the statement of the theorem as follows: - -For a slowly varying Hamiltonian $\hat{H}$ in the time range T the solution of the Schrödinger equation $\Psi(t)$ with initial conditions $\Psi(0) = \psi_{n}(0)$ - -where $\psi_{n}(t)$ is the eigenvector of the instantaneous Schrödinger equation $\hat{H}(t)\psi_{n}(t)=E_{n}(t)\psi_{n}(t)$ can be approximated as: \left\| {\Psi(t)-\psi_\text{adiabatic}(t)} \right\| \approx O(\frac{1}{T}) where the adiabatic approximation is: |\psi_\text{adiabatic}(t)\rangle = e^{i\theta_{n}(t)}e^{i\gamma_{n}(t)}|\psi_n(t)\rangle and \theta_{n}(t) = - \frac{1}{\hbar} \int_{0}^{t}E_{n}(t') dt' \gamma_{n}(t) = \int_{0}^{t}\nu_{n}(t')dt' also called the Berry phase \nu_{n}(t) = i \langle\psi_{n}(t) | \dot{\psi}_{n}(t)\rangle - -And now we are going to prove the theorem. - -Consider the time-dependent Schrödinger equation - -i \hbar{\partial \over \partial t} |\psi(t)\rangle = \hat{H}(\tfrac{t}{T}) |\psi(t)\rangle - -with Hamiltonian $\hat{H}(t).$ - -We would like to know the relation between an initial state $|\psi(0)\rangle$ and its final state $|\psi(T)\rangle$ at $t = T$ in the adiabatic limit $T \to \infty.$ - -First redefine time as $\lambda = \tfrac{t}{T} \in [0,1]$: - -i \hbar{\partial \over \partial \lambda} |\psi(\lambda)\rangle = T \hat{H}(\lambda) |\psi(\lambda)\rangle. - -At every point in time $\hat{H}(\lambda)$ can be diagonalized $\hat H(\lambda)|\psi_n(\lambda)\rangle = E_n(\lambda)|\psi_n(\lambda)\rangle$ with eigenvalues $E_n$ and eigenvectors $|\psi_n(\lambda)\rangle$.
Since the eigenvectors form a complete basis at any time we can expand $|\psi(\lambda)\rangle$ as: - - |\psi(\lambda)\rangle = \sum_n c_n(\lambda)|\psi_n(\lambda)\rangle e^{iT\theta_n(\lambda)}, where \theta_n(\lambda) = -\frac{1}{\hbar}\int_0^\lambda E_n(\lambda')d\lambda'. - -The phase $\theta_n(t)$ is called the dynamic phase factor. By substitution into the Schrödinger equation, another equation for the variation of the coefficients can be obtained: - -i \hbar \sum_n (\dot{c}_n|\psi_n\rangle + c_n|\dot{\psi}_n\rangle + i c_n|\psi_n\rangle T\dot{\theta}_n)e^{iT\theta_n} = \sum_n c_n T E_n|\psi_n\rangle e^{iT\theta_n}. - -The term $\dot{\theta}_n$ gives $-E_n/\hbar$, and so the third term on the left side cancels out with the right side, leaving - -\sum_n \dot{c}_n|\psi_n\rangle e^{iT\theta_n} = -\sum_n c_n|\dot{\psi}_n\rangle e^{iT\theta_n}. - -Now taking the inner product with an arbitrary eigenfunction $\langle\psi_m|$, the $\langle\psi_m|\psi_n\rangle$ on the left gives $\delta_{nm}$, which is 1 only for m = n and otherwise vanishes. The remaining part gives - -\dot{c}_m = -\sum_n c_n\langle\psi_m|\dot{\psi}_n\rangle e^{iT(\theta_n-\theta_m)}. - -For $T \to \infty$ the $e^{iT(\theta_n-\theta_m)}$ will oscillate faster and faster and intuitively will eventually suppress nearly all terms on the right side. The only exceptions are when $\theta_n-\theta_m$ has a critical point, i.e. $E_n(\lambda) = E_m(\lambda)$. This is trivially true for $m = n$. Since the adiabatic theorem assumes a gap between the eigenenergies at any time this cannot hold for $m \neq n$. Therefore, only the $m = n$ term will remain in the limit $T \to \infty$. - -In order to show this more rigorously we first need to remove the $m = n$ term. - -This can be done by defining d_m(\lambda) = c_m(\lambda) e^{\int_0^\lambda\langle\psi_m|\dot{\psi}_m\rangle d\lambda} = c_m(\lambda) e^{-i\gamma_m(\lambda)}. - -We obtain: - -\dot{d}_m = -\sum_{n\neq m} d_n\langle\psi_m|\dot{\psi}_n\rangle e^{iT(\theta_n-\theta_m)-i(\gamma_m-\gamma_n)}. - -This equation can be integrated: - -\begin{align} - -d_m(1)-d_m(0) &= -\int_0^1 \sum_{n\neq m} d_n\langle\psi_m|\dot{\psi}_n\rangle e^{iT(\theta_n-\theta_m)-i(\gamma_m-\gamma_n)} d\lambda\\ - -&= -\int_0^1 \sum_{n\neq m} (d_n-d_n(0))\langle\psi_m|\dot{\psi}_n\rangle e^{iT(\theta_n-\theta_m)-i(\gamma_m-\gamma_n)} d\lambda - \int_0^1 \sum_{n\neq m} d_n(0)\langle\psi_m|\dot{\psi}_n\rangle e^{iT(\theta_n-\theta_m)-i(\gamma_m-\gamma_n)}d\lambda - -\end{align} - -or written in vector notation - -\vec{d}(1)-\vec{d}(0) = -\int_0^1 \hat{A}(T, \lambda) (\vec{d}(\lambda)-\vec{d}(0)) d\lambda - \vec{\alpha}(T). - -Here $\hat{A}(T, \lambda)$ is a matrix and - -\alpha_m(T) = \int_0^1 \sum_{n\neq m} d_n(0)\langle\psi_m|\dot{\psi}_n\rangle e^{iT(\theta_n-\theta_m)- i(\gamma_m-\gamma_n)}d\lambda - -is basically a Fourier transform. - -It follows from the Riemann-Lebesgue lemma that $\vec{\alpha}(T) \to 0 $ as $T \to \infty$. As a last step, take the norm on both sides of the above equation: - -\Vert\vec{d}(1)- \vec{d}(0)\Vert \leq \Vert\vec{\alpha}(T)\Vert + \int_0^1 \Vert\hat{A}(T, \lambda)\Vert \Vert\vec{d}(\lambda)-\vec{d}(0)\Vert d\lambda - -and apply Grönwall's inequality to obtain - -\Vert\vec{d}(1)-\vec{d}(0)\Vert \leq \Vert\vec{\alpha}(T)\Vert e^{\int_0^1 \Vert\hat{A}(T, \lambda)\Vert d\lambda}. - -Since $\vec{\alpha}(T) \to 0$ it follows that $\Vert\vec{d}(1)-\vec{d}(0)\Vert \to 0$ for $T \to \infty$. This concludes the proof of the adiabatic theorem.
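The $T \to \infty$ limit can also be checked numerically. The sketch below is illustrative only and not from the original text: it integrates the time-dependent Schrödinger equation for a two-level Hamiltonian with a constant off-diagonal coupling (all parameter values are arbitrary choices, with $\hbar = 1$) and compares a slow sweep with a fast one.

```
# Minimal numerical check of the adiabatic theorem for a two-level
# Hamiltonian H(t) = [[v*t, a], [a, -v*t]] (hbar = 1, illustrative values).
import numpy as np
from scipy.integrate import solve_ivp

a = 1.0  # constant off-diagonal coupling; the gap at the crossing is 2*a

def H(t, v):
    return np.array([[v * t, a], [a, -v * t]], dtype=complex)

def ground_state(t, v):
    # np.linalg.eigh returns eigenvalues in ascending order, so column 0
    # is the instantaneous ground state of H(t)
    return np.linalg.eigh(H(t, v))[1][:, 0]

def ground_state_population(v, T=40.0):
    # start in the instantaneous ground state at t = -T, evolve to t = +T
    psi0 = ground_state(-T, v).astype(complex)
    sol = solve_ivp(lambda t, psi: -1j * (H(t, v) @ psi),
                    (-T, T), psi0, rtol=1e-8, atol=1e-10)
    # overlap with the instantaneous ground state at the final time
    return abs(np.vdot(ground_state(T, v), sol.y[:, -1])) ** 2

print(ground_state_population(v=0.05))  # slow sweep: close to 1 (adiabatic)
print(ground_state_population(v=5.0))   # fast sweep: well below 1 (diabatic)
```

For these illustrative values the Landau–Zener formula discussed below predicts a final ground-state population of roughly $1 - e^{-2\pi\Gamma}$ with $\Gamma = a^2/(2\hbar v)$, i.e. essentially 1 for the slow sweep and about 0.47 for the fast one.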
- -In the adiabatic limit the eigenstates of the Hamiltonian evolve independently of each other. If the system is prepared in an eigenstate $|\psi(0)\rangle = |\psi_n(0)\rangle$ its time evolution is given by: - -|\psi(\lambda)\rangle = |\psi_n(\lambda)\rangle e^{iT\theta_n(\lambda)}e^{i \gamma_n(\lambda)}. - -So, for an adiabatic process, a system starting in the nth eigenstate remains in the nth eigenstate, just as it does in the time-independent case, picking up only a couple of phase factors. The new phase factor $\gamma_n(t)$ can be canceled out by an appropriate choice of gauge for the eigenfunctions. However, if the adiabatic evolution is cyclic, then $\gamma_n(t)$ becomes a gauge-invariant physical quantity, known as the Berry phase. - -Let us start from a parametric Hamiltonian $H(\vec{R}(t))$, where the parameters vary slowly in time; "slow" here is defined essentially by the energy separation of the eigenstates (through the uncertainty principle, the separation defines a timescale that must always remain much shorter than the timescale of the variation being considered). - -In this way we also require that, during the slow variation, the eigenstates remain clearly separated in energy (e.g., when this is generalized to the case of bands, as in the TKNN formula, the bands must remain clearly separated). Since they do not intersect, the states are ordered, and in this sense this is also one of the meanings of the name topological order. - -We have the instantaneous Schrödinger equation: - -H(\vec{R}(t))| \psi_m(t)\rangle = E_m(t)| \psi_m(t)\rangle - -And instantaneous eigenstates: - -\langle\psi_m(t)|\psi_n(t)\rangle = \delta_{mn} - -The generic solution: - -\Psi(t) = \sum a_n(t)|\psi_n(t)\rangle - -Plugging this into the full Schrödinger equation and multiplying by a generic eigenvector gives: - -\langle \psi_m(t)|i\hbar\partial_t|\Psi(t)\rangle = \langle \psi_m(t)|H(\vec{R}(t))|\Psi(t)\rangle - -\dot{a}_m + \sum_n\langle \psi_m(t)|\partial_{\vec{R}} |\psi_n(t)\rangle\dot{\vec{R}}a_n = -\frac{i}{\hbar}E_m(t)a_m - -If we now introduce the adiabatic approximation: - - | \langle \psi_m(t)|\partial_{\vec{R}} |\psi_n(t)\rangle\dot{\vec{R}}a_n | \ll |a_m| for each $m\ne n$ - -we have - -\dot{a}_m = - \langle \psi_m(t)|\partial_{\vec{R}} |\psi_m(t)\rangle\dot{\vec{R}}a_m -\frac{i}{\hbar}E_m(t)a_m - -and - -a_m(t) = e^{-\frac{i}{\hbar} \int_{t_0}^t E_m(t')dt'} e^{i\gamma_m(t)}a_m(t_0) - -where - -\gamma_m(t) = i \int_{t_0}^t \langle \psi_m(t')|\partial_{\vec{R}} |\psi_m(t')\rangle\dot{\vec{R}}dt' = i \int_C \langle \psi_m(\vec{R})|\partial_{\vec{R}} |\psi_m(\vec{R})\rangle d\vec{R} - -and C is the path in the parameter space. - -This is the same as the statement of the theorem, but in terms of the coefficients of the total wave function and its initial state. - -This is slightly more general than the other proofs, since we consider a generic set of parameters, and we see that the Berry phase acts as a local geometric quantity in the parameter space. - -Finally, integrals of local geometric quantities can give topological invariants, as in the case of the Gauss–Bonnet theorem. - -In fact, if the path C is closed, then the Berry phase is invariant under gauge transformations and becomes a physical quantity. - -Often a solid crystal is modeled as a set of independent valence electrons moving in a mean perfectly periodic potential generated by a rigid lattice of ions.
With the adiabatic theorem we can instead also include the motion of the valence electrons across the crystal and the thermal motion of the ions, as in the Born–Oppenheimer approximation. - -This explains many phenomena in the scope of: - -* thermodynamics: the temperature dependence of specific heat, thermal expansion, melting - -* transport phenomena: the temperature dependence of electric resistivity of conductors, the temperature dependence of electric conductivity in insulators, some properties of low-temperature superconductivity - -* optics: optical absorption in the infrared for ionic crystals, Brillouin scattering, Raman scattering - -We will now pursue a more rigorous analysis. Making use of bra–ket notation, the state vector of the system at time $t$ can be written -$$ -|\psi(t)\rangle = \sum_n c^A_n(t)e^{-iE_nt/\hbar}|\phi_n\rangle , -$$ - -where the spatial wavefunction alluded to earlier is the projection of the state vector onto the eigenstates of the position operator -$$ -\psi(x,t) = \langle x|\psi(t)\rangle . -$$ - -It is instructive to examine the limiting cases, in which $\tau$ is very large (adiabatic, or gradual change) and very small (diabatic, or sudden change). - -Consider a system Hamiltonian undergoing continuous change from an initial value $\hat{H}_0$, at time $t_0$, to a final value $\hat{H}_1$, at time $t_1$, where $\tau = t_1 - t_0$. The evolution of the system can be described in the Schrödinger picture by the time-evolution operator, defined by the integral equation -$$ -\hat{U}(t,t_0) = 1 - \frac{i}{\hbar}\int_{t_0}^t\hat{H}(t')\hat{U}(t',t_0)dt' , -$$ - -which is equivalent to the Schrödinger equation -$$ -i\hbar\frac{\partial}{\partial t}\hat{U}(t,t_0) = \hat{H}(t)\hat{U}(t,t_0), -$$ - -along with the initial condition $\hat{U}(t_0,t_0) = 1$. Given knowledge of the system wave function at $t_0$, the evolution of the system up to a later time $t$ can be obtained using -$$ -|\psi(t)\rangle = \hat{U}(t,t_0)|\psi(t_0)\rangle. -$$ - -The problem of determining the adiabaticity of a given process is equivalent to establishing the dependence of $\hat{U}(t_1,t_0)$ on $\tau$. - -To determine the validity of the adiabatic approximation for a given process, one can calculate the probability of finding the system in a state other than that in which it started. Using bra–ket notation and using the definition $|0\rangle \equiv |\psi(t_0)\rangle$, we have: -$$ -\zeta = \langle 0|\hat{U}^\dagger(t_1,t_0)\hat{U}(t_1,t_0)|0\rangle - \langle 0|\hat{U}^\dagger(t_1,t_0)|0\rangle\langle 0 | \hat{U}(t_1,t_0) | 0 \rangle. -$$ - -We can expand $\hat{U}(t_1,t_0)$ -$$ -\hat{U}(t_1,t_0) = 1 + {1 \over i\hbar} \int_{t_0}^{t_1}\hat{H}(t)dt + {1 \over (i\hbar)^2} \int_{t_0}^{t_1}dt' \int_{t_0}^{t'}dt \hat{H}(t')\hat{H}(t) + \cdots. -$$ - -In the perturbative limit we can take just the first two terms and substitute them into our equation for $\zeta$. Recognizing that -$$ -{1 \over \tau}\int_{t_0}^{t_1}\hat{H}(t)dt \equiv \bar{H} -$$ - -is the system Hamiltonian averaged over the interval $t_0 \to t_1$, we have: -$$ -\zeta = \langle 0|(1 + \tfrac{i}{\hbar}\tau\bar{H})(1 - \tfrac{i}{\hbar}\tau\bar{H})|0\rangle - \langle 0|(1 + \tfrac{i}{\hbar}\tau\bar{H})|0\rangle \langle 0|(1 - \tfrac{i}{\hbar}\tau\bar{H})|0\rangle .
-$$ - -After expanding the products and making the appropriate cancellations, we are left with: -$$ -\zeta = \frac{\tau^2}{\hbar^2}\left(\langle 0|\bar{H}^2|0\rangle - \langle 0|\bar{H}|0\rangle\langle 0|\bar{H}|0\rangle\right) , -$$ - -giving -$$ -\zeta = \frac{\tau^2\Delta\bar{H}^2}{\hbar^2} , -$$ - -where $\Delta\bar{H}$ is the root mean square deviation of the system Hamiltonian averaged over the interval of interest. - -The sudden approximation is valid when $\zeta \ll 1$ (the probability of finding the system in a state other than that in which it started approaches zero), thus the validity condition is given by -$$ -\tau \ll {\hbar \over \Delta\bar{H}} , -$$ - -which is a statement of the time-energy form of the Heisenberg uncertainty principle. - -In the limit $\tau \to 0$ we have infinitely rapid, or diabatic passage: -$$ -\lim_{\tau \to 0}\hat{U}(t_1,t_0) = 1 . -$$ - -The functional form of the system remains unchanged: -$$ -|\langle x|\psi(t_1)\rangle|^2 = \left|\langle x|\psi(t_0)\rangle\right|^2 . -$$ - -This is sometimes referred to as the sudden approximation. The validity of the approximation for a given process can be characterized by the probability that the state of the system remains unchanged: -$$ -P_D = 1 - \zeta. -$$ - -In the limit $\tau \to \infty$ we have infinitely slow, or adiabatic passage. The system evolves, adapting its form to the changing conditions, -$$ -|\langle x|\psi(t_1)\rangle|^2 \neq |\langle x|\psi(t_0)\rangle|^2 . -$$ - -If the system is initially in an eigenstate of $\hat{H}(t_0)$, after a period $\tau$ it will have passed into the corresponding eigenstate of $\hat{H}(t_1)$. - -This is referred to as the adiabatic approximation. The validity of the approximation for a given process can be determined from the probability that the final state of the system is different from the initial state: -$$ -P_A = \zeta . -$$ - -In 1932 an analytic solution to the problem of calculating adiabatic transition probabilities was published separately by Lev Landau and Clarence Zener, for the special case of a linearly changing perturbation in which the time-varying component does not couple the relevant states (hence the coupling in the diabatic Hamiltonian matrix is independent of time). - -The key figure of merit in this approach is the Landau–Zener velocity: - -v_\text{LZ} = {\frac{\partial}{\partial t}|E_2 - E_1| \over \frac{\partial}{\partial q}|E_2 - E_1|} \approx \frac{dq}{dt} , - -where $q$ is the perturbation variable (electric or magnetic field, molecular bond-length, or any other perturbation to the system), and $E_1$ and $E_2$ are the energies of the two diabatic (crossing) states. A large $v_\text{LZ}$ results in a large diabatic transition probability and vice versa. - -Using the Landau–Zener formula the probability, $P_{\rm D}$, of a diabatic transition is given by - -\begin{align} - -P_{\rm D} &= e^{-2\pi\Gamma}\\ - -\Gamma &= {a^2/\hbar \over \left|\frac{\partial}{\partial t}(E_2 - E_1)\right|} = {a^2/\hbar \over \left|\frac{dq}{dt}\frac{\partial}{\partial q}(E_2 - E_1)\right|}\\ - -&= {a^2 \over \hbar|\alpha|}\\ - -\end{align} - -For a transition involving a nonlinear change in perturbation variable or time-dependent coupling between the diabatic states, the equations of motion for the system dynamics cannot be solved analytically. The diabatic transition probability can still be obtained using one of the wide variety of numerical solution algorithms for ordinary differential equations.
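In the linear case, the formula just quoted can be evaluated directly. A minimal sketch (all numbers are illustrative, with $\hbar = 1$ and $a$ the constant diabatic coupling entering $\Gamma$):

```
# Evaluates the Landau-Zener diabatic transition probability quoted above:
# P_D = exp(-2*pi*Gamma), Gamma = a^2 / (hbar * |d(E2 - E1)/dt|), hbar = 1.
import math

def p_diabatic(a, energy_slew_rate):
    gamma = a ** 2 / abs(energy_slew_rate)
    return math.exp(-2.0 * math.pi * gamma)

print(p_diabatic(a=1.0, energy_slew_rate=50.0))  # fast passage: P_D ~ 0.88
print(p_diabatic(a=1.0, energy_slew_rate=0.5))   # slow passage: P_D ~ 3e-6
```

As the comments indicate, a fast sweep leaves the system mostly on the diabatic path, while a slow sweep makes a diabatic transition exponentially unlikely.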
- -The equations to be solved can be obtained from the time-dependent Schrödinger equation: - -i\hbar\dot{\underline{c}}^A(t) = \mathbf{H}_A(t)\underline{c}^A(t) , - -where $\underline{c}^A(t)$ is a vector containing the adiabatic state amplitudes, $\mathbf{H}_A(t)$ is the time-dependent adiabatic Hamiltonian, and the overdot represents a time derivative. - -Comparison of the initial conditions used with the values of the state amplitudes following the transition can yield the diabatic transition probability. In particular, for a two-state system: - -P_D = |c^A_2(t_1)|^2 - -for a system that began with $|c^A_1(t_0)|^2 = 1$. diff --git a/wiki/wikipedia/2282.txt b/wiki/wikipedia/2282.txt deleted file mode 100644 index dfb835b5c3f1db77cc0aedbdacb9e6901a380c95..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2282.txt +++ /dev/null @@ -1,81 +0,0 @@ -In mathematics, the Bourbaki–Witt theorem in order theory, named after Nicolas Bourbaki and Ernst Witt, is a basic fixed point theorem for partially ordered sets. It states that if X is a non-empty chain complete poset, and -$$ -f : X \to X -$$ - -such that -$$ -f (x) \geq x -$$ for all $x,$ - -then f has a fixed point. Such a function f is called inflationary or progressive. - -If the poset X is finite then the statement of the theorem has a clear interpretation that leads to the proof. The sequence of successive iterates, -$$ - x_{n+1}=f(x_n), n=0,1,2,\ldots, -$$ - -where $x_0$ is any element of X, is monotone increasing. By the finiteness of X, it stabilizes: -$$ - x_n=x_{\infty}, -$$ for n sufficiently large. - -It follows that $x_{\infty}$ is a fixed point of f. - -Pick some $y \in X$. Define a function K recursively on the ordinals as follows: -$$ -K(0) = y -$$ -$$ -K( \alpha+1 ) = f( K( \alpha ) ). -$$ - -If $ \beta $ is a limit ordinal, then by construction -$$ -\{ K( \alpha ) \ : \ \alpha < \beta \} -$$ - -is a chain in X. Define -$$ -K( \beta ) = \sup \{ K( \alpha ) \ : \ \alpha < \beta \}. -$$ - -This is now an increasing function from the ordinals into X. It cannot be strictly increasing, as if it were we would have an injective function from the ordinals into a set, violating Hartogs' lemma. Therefore the function must be eventually constant, so for some -$$ - \alpha , \ \ K( \alpha+1 ) = K ( \alpha ); -$$ - -that is, -$$ -f( K( \alpha ) ) = K ( \alpha ). -$$ - -So letting -$$ -x = K ( \alpha ), -$$ - -we have our desired fixed point. Q.E.D. - -The Bourbaki–Witt theorem has various important applications. One of the most common is in the proof that the axiom of choice implies Zorn's lemma. We first prove it for the case where X is chain complete and has no maximal element. Let g be a choice function on -$$ -P(X) - \{ \varnothing \}. -$$ - -Define a function -$$ -f : X \to X -$$ - -by -$$ -f(x) = g( \{ y \ : \ y > x \} ). -$$ - -This is allowed as, by assumption, the set is non-empty. Then f(x) > x, so f is an inflationary function with no fixed point, contradicting the theorem. - -This special case of Zorn's lemma is then used to prove the Hausdorff maximality principle, that every poset has a maximal chain, which is easily seen to be equivalent to Zorn's Lemma. - -Bourbaki–Witt has other applications. In particular in computer science, it is used in the theory of computable functions. - -It is also used to define recursive data types, e.g. linked lists, in domain theory.
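The finite case described above is easy to mimic in code. A toy sketch (the poset, the inflationary map f and the starting point are all invented for illustration):

```
# Toy illustration of the finite case of the Bourbaki-Witt theorem:
# iterating an inflationary map f on a finite poset reaches a fixed point.
# Poset: divisors of 12 ordered by divisibility; f(x) is always a multiple
# of x, so f(x) >= x in this order.
def iterate_to_fixed_point(f, x):
    while f(x) != x:
        x = f(x)
    return x

f = {1: 2, 2: 6, 3: 6, 4: 12, 6: 12, 12: 12}.get
print(iterate_to_fixed_point(f, 1))  # -> 12, a fixed point of f
```

Each iterate dominates the previous one, so on a finite poset the loop must stabilize, mirroring the stabilization argument in the proof.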
diff --git a/wiki/wikipedia/2283.txt b/wiki/wikipedia/2283.txt deleted file mode 100644 index 6631cdc671cf987a3f76c89d1e6231de84047632..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2283.txt +++ /dev/null @@ -1,30 +0,0 @@ -In mathematics and in particular the field of complex analysis, Hurwitz's theorem is a theorem associating the zeroes of a sequence of holomorphic functions, converging locally uniformly on compact subsets, with those of their limit. The theorem is named after Adolf Hurwitz. - -Let {fk} be a sequence of holomorphic functions on a connected open set G that converge uniformly on compact subsets of G to a holomorphic function f which is not constantly zero on G. If f has a zero of order m at z0 then for every small enough ρ > 0 and for sufficiently large k ∈ N (depending on ρ), fk has precisely m zeroes in the disk defined by |z − z0| < ρ, counting multiplicity. Furthermore, these zeroes converge to z0 as k → ∞. - -The theorem does not guarantee that the result will hold for arbitrary disks. Indeed, if one chooses a disk such that f has zeroes on its boundary, the theorem fails. An explicit example is to consider the unit disk D and the sequence defined by -$$ -f_n(z) = z-1+\frac 1 n, \qquad z \in \mathbb C -$$ - -which converges uniformly to f(z) = z − 1. The function f(z) contains no zeroes in D; however, each fn has exactly one zero in the disk, at the real value 1 − (1/n). - -Hurwitz's theorem is used in the proof of the Riemann mapping theorem, and also has the following two corollaries as an immediate consequence: - -* Let G be a connected, open set and {fn} a sequence of holomorphic functions which converge uniformly on compact subsets of G to a holomorphic function f. If each fn is nonzero everywhere in G, then f is either identically zero or also is nowhere zero. - -* If {fn} is a sequence of univalent functions on a connected open set G that converge uniformly on compact subsets of G to a holomorphic function f, then either f is univalent or constant. - -Let f be an analytic function on an open subset of the complex plane with a zero of order m at z0, and suppose that {fk} is a sequence of functions converging uniformly on compact subsets to f. Fix some ρ > 0 such that f(z) ≠ 0 in 0 < |z − z0| ≤ ρ. Choose δ such that |f(z)| > δ for z on the circle |z − z0| = ρ. Since fk(z) converges uniformly on the disc we have chosen, we can find N such that |fk(z)| ≥ δ/2 for every k ≥ N and every z on the circle, ensuring that the quotient fk′(z)/fk(z) is well defined for all z on the circle |z − z0| = ρ. By Weierstrass's theorem we have $f_k' \to f'$ uniformly on the disc, and hence we have another uniform convergence: -$$ -\frac{f_k'(z)}{f_k(z)} \to \frac{f'(z)}{f(z)}. -$$ - -Denoting the number of zeros of fk(z) in the disk by Nk, we may apply the argument principle to find -$$ - m = \frac 1 {2\pi i}\int_{\vert z -z_0\vert = \rho} \frac{f'(z)}{f(z)} dz = \lim_{k\to\infty} \frac 1 {2\pi i} \int_{\vert z -z_0\vert = \rho} \frac{f'_k(z)}{f_k(z)} dz = \lim_{k\to\infty} N_k -$$ - -In the above step, we were able to interchange the integral and the limit because of the uniform convergence of the integrand. We have shown that Nk → m as k → ∞. Since the Nk are integer valued, Nk must equal m for large enough k.
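The zero count Nk in the proof can be checked numerically on the article's example fn(z) = z − 1 + 1/n. The sketch below approximates the argument-principle integral with a Riemann sum; the disk centre, radius and sample count are illustrative choices, not from the original text:

```
# Numerically evaluates (1 / (2*pi*i)) * contour integral of f'/f over the
# circle |z - z0| = rho, which counts the zeros of f inside the disk.
import numpy as np

def count_zeros(f, df, z0, rho, samples=4000):
    theta = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    z = z0 + rho * np.exp(1j * theta)
    dz = 1j * (z - z0) * (2.0 * np.pi / samples)  # dz/dtheta * dtheta
    return (np.sum(df(z) / f(z) * dz) / (2.0j * np.pi)).real

n = 10
f = lambda z: z - 1.0 + 1.0 / n        # single zero at 1 - 1/n = 0.9
df = lambda z: np.ones_like(z)
print(count_zeros(f, df, z0=1.0, rho=0.5))  # ~1.0: one zero inside |z-1| < 0.5
```

Shrinking the circle so that the zero at 1 − 1/n falls outside it drives the computed count to ~0, matching the statement that the theorem only applies to disks whose boundary avoids the zeros of the limit.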
- -*Rouché's theorem diff --git a/wiki/wikipedia/2284.txt b/wiki/wikipedia/2284.txt deleted file mode 100644 index d8eb30ab514b47827e9b46718336935ecf566502..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2284.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, the Ravenel conjectures are a set of mathematical conjectures in the field of stable homotopy theory posed by Douglas Ravenel at the end of a paper published in 1984. It had earlier circulated in preprint form. The problems involved have largely been resolved, with all but the "telescope conjecture" being proved in later papers by others. The telescope conjecture is now generally believed not to be true, though there are some conflicting claims concerning it in the published literature; it is taken to be an open problem. Ravenel's conjectures exerted influence on the field through the founding of chromatic homotopy theory. - -The first of the seven conjectures, then called the nilpotence conjecture, is now the nilpotence theorem. The telescope conjecture, which was #4 on the original list, remains of substantial interest because of its connection with the convergence of an Adams–Novikov spectral sequence. While opinion is against the truth of the original statement, investigations of associated phenomena (for a triangulated category in general) have become a research area in their own right. diff --git a/wiki/wikipedia/2285.txt b/wiki/wikipedia/2285.txt deleted file mode 100644 index bb35b07d31fb58c1b6b1a5f6cebd7062d4e21895..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2285.txt +++ /dev/null @@ -1,17 +0,0 @@ -In geometric graph theory, the Hadwiger–Nelson problem, named after Hugo Hadwiger and Edward Nelson, asks for the minimum number of colors required to color the plane such that no two points at distance 1 from each other have the same color. The answer is unknown, but has been narrowed down to one of the numbers 5, 6 or 7. The correct value may depend on the choice of axioms for set theory. - -The question can be phrased in graph theoretic terms as follows. Let G be the unit distance graph of the plane: an infinite graph with all points of the plane as vertices and with an edge between two vertices if and only if the distance between the two points is 1. The Hadwiger–Nelson problem is to find the chromatic number of G. As a consequence, the problem is often called "finding the chromatic number of the plane". By the de Bruijn–Erdős theorem, a result of de Bruijn and Erdős, the problem is equivalent (under the assumption of the axiom of choice) to that of finding the largest possible chromatic number of a finite unit distance graph. - -According to Jensen, the problem was first formulated by Nelson in 1950, and first published by Gardner. Hadwiger had earlier published a related result, showing that any cover of the plane by five congruent closed sets contains a unit distance in one of the sets, and he also mentioned the problem in a later paper. Soifer discusses the problem and its history extensively. - -The fact that the chromatic number of the plane must be at least four follows from the existence of a seven-vertex unit distance graph with chromatic number four, named the Moser spindle after its discovery in 1961 by the brothers William and Leo Moser. This graph consists of two unit equilateral triangles joined at a common vertex, x. Each of these triangles is joined along another edge to another equilateral triangle; the vertices y and z of these joined triangles are at unit distance from each other.
If the plane could be three-colored, the coloring within the triangles would force y and z to both have the same color as x, but then, since y and z are at unit distance from each other, we would not have a proper coloring of the unit distance graph of the plane. Therefore, at least four colors are needed to color this graph and the plane containing it. An alternative lower bound in the form of a ten-vertex four-chromatic unit distance graph, the Golomb graph, was discovered at around the same time by Solomon W. Golomb. - -The lower bound was raised to five in 2018, when computer scientist and biologist Aubrey de Grey found a 1581-vertex, non-4-colourable unit-distance graph. The proof is computer assisted. Mathematician Gil Kalai and computer scientist Scott Aaronson posted discussion of de Grey's finding, with Aaronson reporting independent verifications of de Grey's result using SAT solvers. Kalai linked additional posts by Jordan Ellenberg and Noam Elkies, with Elkies and (separately) de Grey proposing a Polymath project to find non-4-colorable unit distance graphs with fewer vertices than the one in de Grey's construction. As of 2021, the smallest known unit distance graph with chromatic number 5 has 509 vertices. The page of the Polymath project contains further research, media citations and verification data. - -The upper bound of seven on the chromatic number follows from the existence of a tessellation of the plane by regular hexagons, with diameter slightly less than one, that can be assigned seven colors in a repeating pattern to form a 7-coloring of the plane. According to Soifer, this upper bound was first observed by John R. Isbell. - -The problem can easily be extended to higher dimensions. Finding the chromatic number of 3-space is a particularly interesting problem. As with the version on the plane, the answer is not known, but has been shown to be at least 6 and at most 15. - -In the n-dimensional case of the problem, an easy upper bound on the number of required colors found from tiling n-dimensional cubes is $\lfloor2+\sqrt{n}\rfloor^n$. A lower bound from simplexes is $n+1$. For $n>1$, a lower bound of $n+2$ is available using a generalization of the Moser spindle: a pair of the objects (each two simplexes glued together on a facet) which are joined on one side by a point and on the other side by a line. An exponential lower bound was proved by Frankl and Wilson in 1981. - -One can also consider colorings of the plane in which the sets of points of each color are restricted to sets of some particular type. Such restrictions may cause the required number of colors to increase, as they prevent certain colorings from being considered acceptable. For instance, if a coloring of the plane consists of regions bounded by Jordan curves, then at least six colors are required. diff --git a/wiki/wikipedia/2286.txt b/wiki/wikipedia/2286.txt deleted file mode 100644 index 9f39f32e024ef1bca9362e6405d761ee22737cc6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2286.txt +++ /dev/null @@ -1,212 +0,0 @@ -Metamath is a formal language and an associated computer program (a proof checker) for archiving, verifying, and studying mathematical proofs. Several databases of proved theorems have been developed using Metamath covering standard results in logic, set theory, number theory, algebra, topology and analysis, among others.
- -As of December 2020, the set of proved theorems using Metamath is one of the largest bodies of formalized mathematics, containing in particular proofs of 74 of the 100 theorems of the "Formalizing 100 Theorems" challenge, making it third after HOL Light and Isabelle, but before Coq, Mizar, Lean, Nqthm, ACL2, and Nuprl. There are at least 17 proof verifiers for databases that use the Metamath format. - -This project is the first one of its kind that allows for interactive browsing of its formalized theorems database in the form of an ordinary website. - -The Metamath language is a metalanguage, suitable for developing a wide variety of formal systems. The Metamath language has no specific logic embedded in it. Instead, it can simply be regarded as a way to prove that inference rules (asserted as axioms or proven later) can be applied. - -The largest database of proved theorems follows conventional ZFC set theory and classical logic, but other databases exist and others can be created. - -The Metamath language design is focused on simplicity; the language, employed to state the definitions, axioms, inference rules and theorems, is composed of only a handful of keywords, and all the proofs are checked using one simple algorithm based on the substitution of variables (with optional provisos for what variables must remain distinct after a substitution is made). - -The set of symbols that can be used for constructing formulas is declared using $c (constant symbols) and $v (variable symbols) statements; for example:
-
-$( Declare the constant symbols we will use $)
-
-$c 0 + = -> ( ) term wff |- $.
-
-$( Declare the metavariables we will use $)
-
-$v t r s P Q $.
-
-
- -The grammar for formulas is specified using a combination of $f (floating (variable-type) - -hypotheses) and $a (axiomatic assertion) statements; for example: - -
-
-$( Specify properties of the metavariables $)
-
-tt $f term t $.
-
-tr $f term r $.
-
-ts $f term s $.
-
-wp $f wff P $.
-
-wq $f wff Q $.
-
-$( Define "wff" (part 1) $)
-
-weq $a wff t = r $.
-
-$( Define "wff" (part 2) $)
-
-wim $a wff ( P -> Q ) $.
-
-
- -Axioms and rules of inference are specified with $a statements along with ${ and $} for block scoping and optional $e (essential hypotheses) statements; for example: - -
-
-$( State axiom a1 $)
-
-a1 $a |- ( t = r -> ( t = s -> r = s ) ) $.
-
-$( State axiom a2 $)
-
-a2 $a |- ( t + 0 ) = t $.
-
-${
-
-min $e |- P $.
-
-maj $e |- ( P -> Q ) $.
-
-$( Define the modus ponens inference rule $)
-
-mp $a |- Q $.
-
-$}
-
-
- -Using one construct, $a statements, to capture syntactic rules, axiom schemas, and rules of inference is intended to provide a level of flexibility similar to higher order logical frameworks without a dependency on a complex type system. - -Theorems (and derived rules of inference) are written with $p statements; for example: - -
-
-$( Prove a theorem $)
-
-th1 $p |- t = t $=
-
-$( Here is its proof: $)
-
-tt tze tpl tt weq tt tt weq tt a2 tt tze tpl
-
-tt weq tt tze tpl tt weq tt tt weq wim tt a2
-
-tt tze tpl tt tt a1 mp mp
-
-$.
-
-
- -Note the inclusion of the proof in the $p statement. It abbreviates the following detailed proof: - - - -tt $f term t - -tze $a term 0 - -1,2 tpl $a term ( t + 0 ) - -3,1 weq $a wff ( t + 0 ) = t - -1,1 weq $a wff t = t - -1 a2 $a |- ( t + 0 ) = t - -1,2 tpl $a term ( t + 0 ) - -7,1 weq $a wff ( t + 0 ) = t - -1,2 tpl $a term ( t + 0 ) - -9,1 weq $a wff ( t + 0 ) = t - -1,1 weq $a wff t = t - -10,11 wim $a wff ( ( t + 0 ) = t -> t = t ) - -1 a2 $a |- ( t + 0 ) = t - -1,2 tpl $a term ( t + 0 ) - -14,1,1 a1 $a |- ( ( t + 0 ) = t -> ( ( t + 0 ) = t -> t = t ) ) - -8,12,13,15 mp $a |- ( ( t + 0 ) = t -> t = t ) - -4,5,6,16 mp $a |- t = t - - - -The "essential" form of the proof elides syntactic details, leaving a more conventional presentation: - - - -a2 $a |- ( t + 0 ) = t - -a2 $a |- ( t + 0 ) = t - -a1 $a |- ( ( t + 0 ) = t -> ( ( t + 0 ) = t -> t = t ) ) - -2,3 mp $a |- ( ( t + 0 ) = t -> t = t ) - -1,4 mp $a |- t = t - - - -All Metamath proof steps use a single substitution rule, which is just the simple replacement of a variable with an expression and not the proper substitution described in works on predicate calculus. Proper substitution, in Metamath databases that support it, is a derived construct instead of one built into the Metamath language itself. - -The substitution rule makes no assumption about the logic system in use and only requires that the substitutions of variables are correctly done. - -Here is a detailed example of how this algorithm works. Consider steps 1 and 2 of the theorem 2p2e4 in the Metamath Proof Explorer (set.mm). Let's explain how Metamath uses its substitution algorithm to check that step 2 is the logical consequence of step 1 when you use the theorem opreq2i. Step 2 states that ( 2 + 2 ) = ( 2 + ( 1 + 1 ) ). It is the conclusion of the theorem opreq2i. The theorem opreq2i states that if A = B, then ( C F A ) = ( C F B ). This theorem would never appear under this cryptic form in a textbook but its literate formulation is banal: when two quantities are equal, one can replace one by the other in an operation. To check the proof Metamath attempts to unify ( C F A ) = ( C F B ) with ( 2 + 2 ) = ( 2 + ( 1 + 1 ) ). There is only one way to do so: unifying C with 2, F with +, A with 2 and B with ( 1 + 1 ). So now Metamath uses the premise of opreq2i. This premise states that A = B. As a consequence of its previous computation, Metamath knows that A should be substituted by 2 and B by ( 1 + 1 ). The premise becomes 2 = ( 1 + 1 ) and thus step 1 is generated. In its turn step 1 is unified with df-2. df-2 is the definition of the number 2 and states that 2 = ( 1 + 1 ). Here the unification is simply a matter of constants and is straightforward (no problem of variables to substitute). So the verification is finished and these two steps of the proof of 2p2e4 are correct. - -When Metamath unifies ( 2 + 2 ) with ( C F A ) it has to check that the syntactical rules are respected. In fact ( C F A ) has the type class, thus Metamath has to check that ( 2 + 2 ) is also typed class. - -The Metamath program is the original program created to manipulate databases written using the Metamath language. It has a text (command line) interface and is written in C. It can read a Metamath database into memory, verify the proofs of a database, modify the database (in particular by adding proofs), and write them back out to storage. - -It has a prove command that enables users to enter a proof, along with mechanisms to search for existing proofs.
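The substitution check just described can be mimicked with a toy sketch. This is an illustration only, not the actual Metamath verifier; it takes the schematic conclusion of opreq2i and the variable-to-expression map from the example above as given:

```
# Toy version of Metamath's single substitution rule: replace each variable
# token in a pattern by its assigned token sequence, then compare the result
# with the target statement.
def substitute(pattern, assignment):
    result = []
    for token in pattern:
        result.extend(assignment.get(token, [token]))
    return result

conclusion = "( C F A ) = ( C F B )".split()   # schematic conclusion of opreq2i
assignment = {"C": ["2"], "F": ["+"], "A": ["2"],
              "B": ["(", "1", "+", "1", ")"]}
target = "( 2 + 2 ) = ( 2 + ( 1 + 1 ) )".split()
print(substitute(conclusion, assignment) == target)  # True

premise = "A = B".split()                      # opreq2i's hypothesis
print(" ".join(substitute(premise, assignment)))     # 2 = ( 1 + 1 ), i.e. step 1
```

The real verifier additionally checks the type (e.g. class or wff) of every substituted expression and the distinct-variable provisos, which this sketch omits.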
- -The Metamath program can convert statements to HTML or TeX notation; - -for example, it can output the modus ponens axiom from set.mm as: -$$ -\vdash \varphi\quad\&\quad \vdash ( \varphi \rightarrow \psi )\quad\Rightarrow\quad \vdash \psi -$$ - -Many other programs can process Metamath databases; in particular, there are at least 17 proof verifiers for databases that use the Metamath format. - -The Metamath website hosts several databases that store theorems derived from various axiomatic systems. Most databases (.mm files) have an associated interface, called an "Explorer", which allows one to navigate the statements and proofs interactively on the website, in a user-friendly way. Most databases use a Hilbert system of formal deduction though this is not a requirement. - -The Metamath Proof Explorer (recorded in set.mm) is the main and by far the largest database, with over 23,000 proofs in its main part as of July 2019. It is based on classical first-order logic and ZFC set theory (with the addition of Tarski-Grothendieck set theory when needed, for example in category theory). The database has been maintained for over twenty years (the first proofs in set.mm are dated August 1993). The database contains developments, among other fields, of set theory (ordinals and cardinals, recursion, equivalents of the axiom of choice, the continuum hypothesis...), the construction of the real and complex number systems, order theory, graph theory, abstract algebra, linear algebra, general topology, real and complex analysis, Hilbert spaces, number theory, and elementary geometry. This database was first created by Norman Megill, but as of 2019-10-04 there have been 48 contributors (including Norman Megill). - -The Metamath Proof Explorer references many textbooks that can be used in conjunction with Metamath. Thus, people interested in studying mathematics can use Metamath in connection with these books and verify that the proved assertions match the literature. - -The Intuitionistic Logic Explorer's database develops mathematics from a constructive point of view, starting with the axioms of intuitionistic logic and continuing with axiom systems of constructive set theory. - -The New Foundations Explorer's database develops mathematics from Quine's New Foundations set theory. - -The Higher-Order Logic Explorer's database starts with higher-order logic and derives equivalents to axioms of first-order logic and of ZFC set theory. - -The Metamath website hosts a few other databases which are not associated with explorers but are nonetheless noteworthy. The database peano.mm written by Robert Solovay formalizes Peano arithmetic. The database nat.mm formalizes natural deduction. The database miu.mm formalizes the MU puzzle based on the formal system MIU presented in Gödel, Escher, Bach. - -The Metamath website also hosts a few older databases which are not maintained anymore, such as the "Hilbert Space Explorer", which presents theorems pertaining to Hilbert space theory that have now been merged into the Metamath Proof Explorer, and the "Quantum Logic Explorer", which develops quantum logic starting with the theory of orthomodular lattices. - -Because Metamath has a very generic concept of what a proof is (namely a tree of formulas connected by inference rules) and no specific logic is embedded in the software, Metamath can be used with logics as different as Hilbert-style logics, sequent-based logics, or even the lambda calculus. - -However, Metamath provides no direct support for natural deduction systems.
As noted earlier, the database nat.mm formalizes natural deduction. The Metamath Proof Explorer (with its database set.mm) instead uses a set of conventions that allow the use of natural deduction approaches within a Hilbert-style logic. - -Using the design ideas implemented in Metamath, Raph Levien has implemented a very small proof checker, mmverify.py, in only 500 lines of Python code. - -Ghilbert is a similar though more elaborate language based on mmverify.py. Levien would like to implement a system where several people could collaborate, and his work emphasizes modularity and the connection between small theories. - -Building on Levien's seminal work, the Metamath design principles have been implemented for a broad variety of languages. Juha Arpiainen has implemented his own proof checker in Common Lisp called Bourbaki and Marnix Klooster has coded a proof checker in Haskell called Hmm. - -Although they all use the overall Metamath approach to formal system checker coding, they also implement new concepts of their own. - -Mel O'Cat designed a system called Mmj2, which provides a graphical user interface for proof entry. The initial aim of Mel O'Cat was to allow the user to enter proofs by simply typing the formulas and letting Mmj2 find the appropriate inference rules to connect them. In Metamath, on the contrary, you may only enter theorem names; you may not enter formulas directly. Mmj2 also allows proofs to be entered forward or backward (Metamath only allows entering proofs backward). Moreover, Mmj2 has a real grammar parser (unlike Metamath). This technical difference brings more comfort to the user. In particular, Metamath sometimes hesitates between several formulas it analyzes (most of them being meaningless) and asks the user to choose. In Mmj2 this limitation no longer exists. - -There is also a project by William Hale to add a graphical user interface to Metamath called Mmide. Paul Chapman, in turn, is working on a new proof browser, which has highlighting that allows you to see the referenced theorem before and after the substitution was made. - -Milpgame is a proof assistant and a checker (it shows a message only if something goes wrong) with a graphical user interface for the Metamath language (set.mm). Written by Filip Cernatescu, it is an open-source (MIT License) Java application (cross-platform: Windows, Linux, Mac OS). The user can enter the demonstration (proof) in two modes: forward and backward, relative to the statement to prove. Milpgame checks whether a statement is well formed (it has a syntactic verifier). It can save unfinished proofs without the use of the dummylink theorem. The demonstration is shown as a tree, and the statements are shown using HTML definitions (defined in the typesetting chapter). Milpgame is distributed as a Java .jar (JRE version 6 update 24, written in the NetBeans IDE). diff --git a/wiki/wikipedia/2287.txt b/wiki/wikipedia/2287.txt deleted file mode 100644 index 82a4b2faf8315331138f99aae0334aa1ade592b1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2287.txt +++ /dev/null @@ -1,94 +0,0 @@ -In mathematics, the uniformization theorem says that every simply connected Riemann surface is conformally equivalent to one of three Riemann surfaces: the open unit disk, the complex plane, or the Riemann sphere. In particular it implies that every Riemann surface admits a Riemannian metric of constant curvature.
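For reference, a standard fact complementing this statement (the following formulas are an addition, not from the original text): the three model surfaces carry complete conformal metrics of constant curvature, namely
$$
ds^2 = \frac{4|dz|^2}{(1+|z|^2)^2}
$$
on the Riemann sphere (curvature +1, in a stereographic coordinate),
$$
ds^2 = |dz|^2
$$
on the complex plane (curvature 0), and
$$
ds^2 = \frac{4|dz|^2}{(1-|z|^2)^2}
$$
on the open unit disk (the Poincaré metric, curvature −1). Pulling these back along the universal covering map is what endows an arbitrary Riemann surface with a constant-curvature metric.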
For compact Riemann surfaces, those with universal cover the unit disk are precisely the hyperbolic surfaces of genus greater than 1, all with non-abelian fundamental group; those with universal cover the complex plane are the Riemann surfaces of genus 1, namely the complex tori or elliptic curves with fundamental group $\mathbb{Z}^2$; and those with universal cover the Riemann sphere are those of genus zero, namely the Riemann sphere itself, with trivial fundamental group. - -The uniformization theorem is a generalization of the Riemann mapping theorem from proper simply connected open subsets of the plane to arbitrary simply connected Riemann surfaces. The uniformization theorem also has an equivalent statement in terms of closed Riemannian 2-manifolds: each such manifold has a conformally equivalent Riemannian metric with constant curvature. - -Many classical proofs of the uniformization theorem rely on constructing a real-valued harmonic function on the simply connected Riemann surface, possibly with a singularity at one or two points and often corresponding to a form of Green's function. Four methods of constructing the harmonic function are widely employed: the Perron method; the Schwarz alternating method; Dirichlet's principle; and Weyl's method of orthogonal projection. In the context of closed Riemannian 2-manifolds, several modern proofs invoke nonlinear differential equations on the space of conformally equivalent metrics. These include the Beltrami equation from Teichmüller theory and an equivalent formulation in terms of harmonic maps; Liouville's equation, already studied by Poincaré; and Ricci flow along with other nonlinear flows. - -Felix Klein and Henri Poincaré conjectured the uniformization theorem for (the Riemann surfaces of) algebraic curves. Poincaré extended this to arbitrary multivalued analytic functions and gave informal arguments in its favor. The first rigorous proofs of the general uniformization theorem were given by Poincaré and Koebe. Paul Koebe later gave several more proofs and generalizations. The history is described in Gray; a complete account of uniformization up to the 1907 papers of Koebe and Poincaré is given with detailed proofs in de Saint-Gervais (the Bourbaki-type pseudonym of the group of fifteen mathematicians who jointly produced this publication). - -Every Riemann surface is the quotient of a free, proper and holomorphic action of a discrete group on its universal covering, and this universal covering is holomorphically isomorphic (one also says: "conformally equivalent" or "biholomorphic") to one of the following: - -#the Riemann sphere - -#the complex plane - -#the unit disk in the complex plane. - -Radó's theorem shows that every Riemann surface is automatically second-countable. Although Radó's theorem is often used in proofs of the uniformization theorem, some proofs have been formulated so that Radó's theorem becomes a consequence. Second countability is automatic for compact Riemann surfaces. - -On an oriented 2-manifold, a Riemannian metric induces a complex structure using the passage to isothermal coordinates. If the Riemannian metric is given locally as -$$ - ds^2 = E dx^2 + 2F dx dy + G dy^2, -$$ - -then in the complex coordinate z = x + iy, it takes the form -$$ - ds^2 = \lambda|dz +\mu d\overline{z}|^2, -$$ - -where - -\lambda = \frac14 \left( E + G + 2\sqrt{EG - F^2} \right),\ \ - -\mu = \frac1{4\lambda} (E - G + 2iF), - -so that λ and μ are smooth with λ > 0 and |μ| < 1.
In isothermal coordinates (u, v) the metric should take the form -$$ - ds^2 = \rho (du^2 + dv^2) -$$ - -with ρ > 0 smooth. The complex coordinate w = u + i v satisfies -$$ -\rho |dw|^2 = \rho |w_z|^2 \left| dz + {w_{\overline{z}}\over w_z} d\overline{z}\right|^2, -$$ - -so that the coordinates (u, v) will be isothermal locally provided the Beltrami equation -$$ - {\partial w\over \partial \overline{z}} = \mu {\partial w\over \partial z} -$$ - -has a locally diffeomorphic solution, i.e. a solution with non-vanishing Jacobian. - -These conditions can be phrased equivalently in terms of the exterior derivative and the Hodge star operator ∗. - -u and v will be isothermal coordinates if ∗du = dv, where ∗ is defined - -on differentials by ∗(p dx + q dy) = −q dx + p dy. - -Let ∆ = ∗d∗d be the Laplace-Beltrami operator. By standard elliptic theory, u can be chosen to be harmonic near a given point, i.e. Δ u = 0, with du non-vanishing. By the Poincaré lemma dv = ∗du has a local solution v exactly when d(∗du) = 0. This condition is equivalent to Δ u = 0, so can always be solved locally. Since du is non-zero and the square of the Hodge star operator is -1 on 1-forms, du and dv must be linearly independent, so that u and v give local isothermal coordinates. - -The existence of isothermal coordinates can be proved by other methods, for example using the general theory of the Beltrami equation, as in Ahlfors, or by direct elementary methods, as in Chern and Jost. - -From this correspondence with compact Riemann surfaces, a classification of closed orientable Riemannian 2-manifolds follows. Each such manifold is conformally equivalent to a unique closed 2-manifold of constant curvature, so a quotient of one of the following by a free action of a discrete subgroup of an isometry group: - -#the sphere (curvature +1) - -#the Euclidean plane (curvature 0) - -#the hyperbolic plane (curvature -1). - -(Image gallery: closed orientable surfaces of genus 0, 1, 2 and 3.) - -The first case gives the 2-sphere, the unique 2-manifold with constant positive curvature and hence positive Euler characteristic (equal to 2). The second gives all flat 2-manifolds, i.e. the tori, which have Euler characteristic 0. The third case covers all 2-manifolds of constant negative curvature, i.e. the hyperbolic 2-manifolds all of which have negative Euler characteristic. The classification is consistent with the Gauss–Bonnet theorem, which implies that for a closed surface with constant curvature, the sign of that curvature must match the sign of the Euler characteristic. The Euler characteristic is equal to 2 – 2g, where g is the genus of the 2-manifold, i.e. the number of "holes". - -In 1913 Hermann Weyl published his classic textbook "Die Idee der Riemannschen Fläche" based on his Göttingen lectures from 1911 to 1912. It was the first book to present the theory of Riemann surfaces in a modern setting and through its three editions has remained influential. Dedicated to Felix Klein, the first edition incorporated Hilbert's treatment of the Dirichlet problem using Hilbert space techniques; Brouwer's contributions to topology; and Koebe's proof of the uniformization theorem and its subsequent improvements.
Much later Weyl developed his method of orthogonal projection which gave a streamlined approach to the Dirichlet problem, also based on Hilbert space; that theory, which included Weyl's lemma on elliptic regularity, was related to Hodge's theory of harmonic integrals; and both theories were subsumed into the modern theory of elliptic operators and L2 Sobolev spaces. In the third edition of his book from 1955, later translated into English, Weyl adopted the modern definition of differential manifold, in preference to triangulations, but decided not to make use of his method of orthogonal projection. Springer followed Weyl's account of the uniformisation theorem, but used the method of orthogonal projection to treat the Dirichlet problem. This approach will be outlined below. Kodaira describes the approach in Weyl's book and also how to shorten it using the method of orthogonal projection. A related account can be found in Donaldson. - -In introducing the Ricci flow, Richard S. Hamilton showed that the Ricci flow on a closed surface uniformizes the metric (i.e., the flow converges to a constant curvature metric). However, his proof relied on the uniformization theorem. The missing step involved Ricci flow on the 2-sphere: a method for avoiding an appeal to the uniformization theorem (for genus 0) was provided by Chen; a short self-contained account of Ricci flow on the 2-sphere was given in Andrews. - -Koebe proved the general uniformization theorem that if a Riemann surface is homeomorphic to an open subset of the complex sphere (or equivalently if every Jordan curve separates it), then it is conformally equivalent to an open subset of the complex sphere. - -In 3 dimensions, there are 8 geometries, called the eight Thurston geometries. Not every 3-manifold admits a geometry, but Thurston's geometrization conjecture, proved by Grigori Perelman, states that every 3-manifold can be cut into pieces that are geometrizable. - -The simultaneous uniformization theorem of Lipman Bers shows that it is possible to simultaneously uniformize two compact Riemann surfaces of the same genus >1 with the same quasi-Fuchsian group. - -The measurable Riemann mapping theorem shows more generally that the map to an open subset of the complex sphere in the uniformization theorem can be chosen to be a quasiconformal map with any given bounded measurable Beltrami coefficient. diff --git a/wiki/wikipedia/2288.txt b/wiki/wikipedia/2288.txt deleted file mode 100644 index 8055e50d4d2b78e1537b0f15196a673465aadb53..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2288.txt +++ /dev/null @@ -1,77 +0,0 @@ -In the theory of stochastic processes in discrete time, a part of the mathematical theory of probability, the Doob decomposition theorem gives a unique decomposition of every adapted and integrable stochastic process as the sum of a martingale and a predictable process (or "drift") starting at zero. The theorem was proved by, and is named for, Joseph L. Doob. - -The analogous theorem in the continuous-time case is the Doob–Meyer decomposition theorem. - -Let (Ω, $\mathcal{F}$, $\mathbb{P}$) be a probability space, I = {0, 1, 2, . . . , N} with N ∈ $\mathbb{N}$ or I = $\mathbb{N}_0$ a finite or an infinite index set, ($\mathcal{F}$n)n∈I a filtration of $\mathcal{F}$, and X = (Xn)n∈I an adapted stochastic process with $\mathbb{E}[|X_n|]<\infty$ for all n ∈ I. Then there exists a martingale M = (Mn)n∈I and an integrable predictable process A = (An)n∈I starting with A0 = 0 such that Xn = Mn + An for every n ∈ I.
Here predictable means that An is $\mathcal{F}$n−1-measurable for every n ∈ I \ {0}. - -This decomposition is almost surely unique. - -The theorem is valid word for word also for stochastic processes X taking values in the d-dimensional Euclidean space $\mathbb{R}$d or the complex vector space $\mathbb{C}$d. This follows from the one-dimensional version by considering the components individually. - -Using conditional expectations, define the processes A and M, for every n ∈ I, explicitly by - -$A_n=\sum_{k=1}^n\bigl(\mathbb{E}[X_k|\mathcal{F}_{k-1}]-X_{k-1}\bigr)\qquad(1)$ - -and - -$M_n=X_0+\sum_{k=1}^n\bigl(X_k-\mathbb{E}[X_k|\mathcal{F}_{k-1}]\bigr),\qquad(2)$ - -where the sums for n = 0 are empty and defined as zero. Here A adds up the expected increments of X, and M adds up the surprises, i.e., the part of every Xk that is not known one time step before. - -Due to these definitions, An+1 (if n + 1 ∈ I) and Mn are Fn-measurable because the process X is adapted, $\mathbb{E}[|A_{n+1}|]<\infty$ and $\mathbb{E}[|M_n|]<\infty$ because the process X is integrable, and the decomposition Xn = Mn + An is valid for every n ∈ I. The martingale property - -$\mathbb{E}[M_n-M_{n-1}|\mathcal{F}_{n-1}]=0$ a.s. - -also follows from the above definition (2), for every n ∈ I \ {0}. - -To prove uniqueness, let $X = M' + A'$ be an additional decomposition. Then the process $Y := M - M' = A' - A$ is a martingale, implying that - -$\mathbb{E}[Y_n|\mathcal{F}_{n-1}]=Y_{n-1}$ a.s., - -and also predictable, implying that - -$\mathbb{E}[Y_n|\mathcal{F}_{n-1}]= Y_n$ a.s. - -for any n ∈ I \ {0}. Since $Y_0 = A'_0 - A_0 = 0$ by the convention about the starting point of the predictable processes, this implies iteratively that Yn = 0 almost surely for all n ∈ I, hence the decomposition is almost surely unique. - -A real-valued stochastic process X is a submartingale if and only if it has a Doob decomposition into a martingale M and an integrable predictable process A that is almost surely increasing. It is a supermartingale, if and only if A is almost surely decreasing. - -If X is a submartingale, then - -$\mathbb{E}[X_k|\mathcal{F}_{k-1}]\ge X_{k-1}$ a.s. - -for all k ∈ I \ {0}, which is equivalent to saying that every term in definition (1) of A is almost surely positive, hence A is almost surely increasing. The equivalence for supermartingales is proved similarly. - -Let X = (Xn)n∈$\mathbb{N}_0$ be a sequence of independent, integrable, real-valued random variables. They are adapted to the filtration generated by the sequence, i.e. Fn = σ(X0, . . . , Xn) for all n ∈ $\mathbb{N}_0$. By (1) and (2), the Doob decomposition is given by - -$A_n=\sum_{k=1}^{n}\bigl(\mathbb{E}[X_k]-X_{k-1}\bigr),\quad n\in\mathbb{N}_0,$ - -and - -$M_n=X_0+\sum_{k=1}^{n}\bigl(X_k-\mathbb{E}[X_k]\bigr),\quad n\in\mathbb{N}_0.$ - -If the random variables of the original sequence X have mean zero, this simplifies to - -$A_n=-\sum_{k=0}^{n-1}X_k$ and $M_n=\sum_{k=0}^{n}X_k,\quad n\in\mathbb{N}_0,$ - -hence both processes are (possibly time-inhomogeneous) random walks. If the sequence X = (Xn)n∈$\mathbb{N}_0$ consists of symmetric random variables taking the values +1 and −1, then X is bounded, but the martingale M and the predictable process A are unbounded simple random walks (and not uniformly integrable), and Doob's optional stopping theorem might not be applicable to the martingale M unless the stopping time has a finite expectation. - -In mathematical finance, the Doob decomposition theorem can be used to determine the largest optimal exercise time of an American option.
Let X = (X0, X1, . . . , XN) denote the non-negative, discounted payoffs of an American option in an N-period financial market model, adapted to a filtration - -(F0, F1, . . . , FN), and let $\mathbb{Q}$ denote an equivalent martingale measure. Let U = (U0, U1, . . . , UN) denote the Snell envelope of X with respect to $\mathbb{Q}$. The Snell envelope is the smallest $\mathbb{Q}$-supermartingale dominating X and in a complete financial market it represents the minimal amount of capital necessary to hedge the American option up to maturity. Let U = M + A denote the Doob decomposition with respect to $\mathbb{Q}$ of the Snell envelope U into a martingale M = (M0, M1, . . . , MN) and a decreasing predictable process A = (A0, A1, . . . , AN) with A0 = 0. Then the largest stopping time to exercise the American option in an optimal way is -$$ -\tau_{\text{max}}:=\begin{cases}N&\text{if }A_N=0,\\\min\{n\in\{0,\dots,N-1\}\mid A_{n+1}<0\}&\text{if } A_N<0.\end{cases} -$$ - -Since A is predictable, the event {τmax = n} = {An = 0, An+1 < 0} is in Fn for every n ∈ {0, 1, . . . , N − 1}, hence τmax is indeed a stopping time. It gives the last moment before the discounted value of the American option will drop in expectation; up to time τmax the discounted value process U is a martingale with respect to $\mathbb{Q}$. - -The Doob decomposition theorem can be generalized from probability spaces to σ-finite measure spaces. diff --git a/wiki/wikipedia/2289.txt b/wiki/wikipedia/2289.txt deleted file mode 100644 index c8dcbb5a9820ee0976ec9facd9fa7cf3e64361e8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2289.txt +++ /dev/null @@ -1,37 +0,0 @@ -In proof theory, a branch of mathematical logic, elementary function arithmetic (EFA), also called elementary arithmetic and exponential function arithmetic, is the system of arithmetic with the usual elementary properties of 0, 1, +, ×, $x^y$, together with induction for formulas with bounded quantifiers. - -EFA is a very weak logical system, whose proof theoretic ordinal is $\omega^3$, but still seems able to prove much of ordinary mathematics that can be stated in the language of first-order arithmetic. - -EFA is a system in first order logic (with equality). Its language contains: - -*two constants 0, 1, - -*three binary operations +, ×, exp, with exp(x,y) usually written as $x^y$, - -*a binary relation symbol < (This is not really necessary as it can be written in terms of the other operations and is sometimes omitted, but is convenient for defining bounded quantifiers). - -Bounded quantifiers are those of the form ∀(x < y) and ∃(x < y) which are abbreviations for ∀ x (x < y) → ... and ∃x(x < y)∧... in the usual way. - -The axioms of EFA are - -*The axioms of Robinson arithmetic for 0, 1, +, ×, < - -*The axioms for exponentiation: $x^0 = 1$, $x^{y+1} = x^y \times x$. - -*Induction for formulas all of whose quantifiers are bounded (but which may contain free variables). - -Harvey Friedman's grand conjecture implies that many mathematical theorems, such as Fermat's Last Theorem, can be proved in very weak systems such as EFA. - -The original statement of the conjecture from Friedman is: - -"Every theorem published in the Annals of Mathematics whose statement involves only finitary mathematical objects (i.e., what logicians call an arithmetical statement) can be proved in EFA.
EFA is the weak fragment of Peano Arithmetic based on the usual quantifier-free axioms for 0, 1, +, ×, exp, together with the scheme of induction for all formulas in the language all of whose quantifiers are bounded."
-
-While it is easy to construct artificial arithmetical statements that are true but not provable in EFA, the point of Friedman's conjecture is that natural examples of such statements in mathematics seem to be rare. Some natural examples include consistency statements from logic and several statements related to Ramsey theory, such as the Szemerédi regularity lemma and the graph minor theorem.
-
-Several related computational complexity classes have similar properties to EFA:
-
-*One can omit the binary function symbol exp from the language, by taking Robinson arithmetic together with induction for all formulas with bounded quantifiers and an axiom stating roughly that exponentiation is a function defined everywhere. This is similar to EFA and has the same proof-theoretic strength, but is more cumbersome to work with.
-
-*There are weak fragments of second-order arithmetic called RCA_0^* and WKL_0^* that have the same consistency strength as EFA and are conservative over it for Π^0_2 sentences, which are sometimes studied in reverse mathematics.
-
-*Elementary recursive arithmetic (ERA) is a subsystem of primitive recursive arithmetic (PRA) in which recursion is restricted to bounded sums and products. This also has the same Π^0_2 sentences as EFA, in the sense that whenever EFA proves ∀x∃y P(x,y), with P quantifier-free, ERA proves the open formula P(x,T(x)), with T a term definable in ERA. Like PRA, ERA can be defined in an entirely logic-free manner, with just the rules of substitution and induction, and defining equations for all elementary recursive functions. Unlike PRA, however, the elementary recursive functions can be characterized by the closure under composition and projection of a finite number of basis functions, and thus only a finite number of defining equations are needed.
diff --git a/wiki/wikipedia/229.txt b/wiki/wikipedia/229.txt
deleted file mode 100644
index 0af9293c3738fa509c824dbc15705d1dbc8b0be5..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/229.txt
+++ /dev/null
@@ -1,49 +0,0 @@
-In computer science, particularly the field of databases, the Thomas write rule is a rule in timestamp-based concurrency control. It can be summarized as ignore outdated writes.
-
-It states that, if a more recent transaction has already written the value of an object, then a less recent transaction does not need to perform its own write, since it will eventually be overwritten by the more recent one.
-
-The Thomas write rule is applied in situations where a predefined logical order is assigned to transactions when they start. For example, a transaction might be assigned a monotonically increasing timestamp when it is created. The rule prevents changes in the order in which the transactions are executed from creating different outputs: the outputs will always be consistent with the predefined logical order.
-
-For example, consider a database with 3 variables (A, B, C), and two atomic operations C := A (T1), and C := B (T2). Each transaction involves a read (A or B) and a write (C). The only conflict between these transactions is the write on C.
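Before working through a concrete schedule, here is a minimal Python sketch of the rule itself (the class and method names are invented for illustration): each object carries a write timestamp, and a write from a transaction older than the last writer is silently dropped.

```python
class VersionedCell:
    """A database cell guarded by the Thomas write rule: writes arriving
    with a timestamp older than the cell's write timestamp (WTS) are
    ignored as outdated."""

    def __init__(self):
        self.value = None
        self.wts = float("-inf")  # WTS: timestamp of the last write kept

    def write(self, value, ts):
        if ts < self.wts:
            return False          # outdated write: ignore it
        self.value, self.wts = value, ts
        return True

c = VersionedCell()
assert c.write("B", ts=2)        # T2 (newer) writes C first; WTS(C) = 2
assert not c.write("A", ts=1)    # T1 (older) arrives late: discarded
assert c.value == "B"            # outcome matches the logical order T1 < T2
```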
-The following is one possible schedule for the operations of these transactions; under the Thomas write rule, T1's outdated write of C is dropped, yielding the equivalent schedule on the right:
-$$
-\begin{bmatrix}
-T_1 & T_2 \\
-& Read(A) \\
-Read(B) & \\
-& Write(C) \\
-Write(C) & \\
-Commit & \\
-& Commit
-\end{bmatrix}
-\Longleftrightarrow
-\begin{bmatrix}
-T_1 & T_2 \\
-& Read(A) \\
-Read(B) & \\
-& Write(C) \\
-& \\
-Commit & \\
-& Commit
-\end{bmatrix}
-$$
-
-If (when the transactions are created) T1 is assigned a timestamp that precedes T2 (i.e., according to the logical order, T1 comes first), then only T2's write should be visible. If, however, T1's write is executed after T2's write, then we need a way to detect this and discard the write.
-
-One practical approach to this is to label each value with a write timestamp (WTS) that indicates the timestamp of the last transaction to modify the value. Enforcing the Thomas write rule only requires checking whether the write timestamp of the object is greater than the timestamp of the transaction performing the write. If so, the write is discarded.
-
-In the example above, if we call TS(T) the timestamp of transaction T, and WTS(O) the write timestamp of object O, then T2's write sets WTS(C) to TS(T2). When T1 tries to write C, it sees that TS(T1) < WTS(C), and discards the write. If a third transaction T3 (with TS(T3) > TS(T2)) were to then write to C, it would get TS(T3) > WTS(C), and the write would be allowed.
diff --git a/wiki/wikipedia/2290.txt b/wiki/wikipedia/2290.txt
deleted file mode 100644
index 77a3a97664067845a80e5647c6bfb77782b60d5e..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2290.txt
+++ /dev/null
@@ -1,163 +0,0 @@
-In category theory, a branch of mathematics, a universal property is an important property which is satisfied by a universal morphism (see Formal Definition).
-
-Universal morphisms can also be thought of more abstractly as initial or terminal objects of a comma category (see Connection with comma categories). Universal properties occur almost everywhere in mathematics, and hence the precise category-theoretic concept helps point out similarities between different branches of mathematics, some of which may even seem unrelated.
-
-Universal properties may be used implicitly in other areas of mathematics, but the abstract and more precise definition can be studied in category theory.
-
-This article gives a general treatment of universal properties. To understand the concept, it is useful to study several examples first, of which there are many: all free objects, direct product and direct sum, free group, free lattice, Grothendieck group, Dedekind–MacNeille completion, product topology, Stone–Čech compactification, tensor product, inverse limit and direct limit, kernel and cokernel, pullback, pushout and equalizer.
-
-Before giving a formal definition of universal properties, we offer some motivation for studying such constructions.
-
-* The concrete details of a given construction may be messy, but if the construction satisfies a universal property, one can forget all those details: all there is to know about the construction is already contained in the universal property. Proofs often become short and elegant if the universal property is used rather than the concrete details. For example, the tensor algebra of a vector space is slightly painful to actually construct, but using its universal property makes it much easier to deal with.
-
-* Universal properties define objects uniquely up to a unique isomorphism.
Therefore, one strategy to prove that two objects are isomorphic is to show that they satisfy the same universal property.
-
-* Universal constructions are functorial in nature: if one can carry out the construction for every object in a category C then one obtains a functor on C. Furthermore, this functor is a right or left adjoint to the functor U used in the definition of the universal property.
-
-* Universal properties occur everywhere in mathematics. By understanding their abstract properties, one obtains information about all these constructions and can avoid repeating the same analysis for each individual instance.
-
-To understand the definition of a universal construction, it is important to look at examples. Universal constructions were not defined out of thin air, but were rather defined after mathematicians began noticing a pattern in many mathematical constructions (see Examples below). Hence, the definition may not make sense at first, but will become clear when one reconciles it with concrete examples.
-
-Let $F: C \to D$ be a functor between categories $C$ and $D$. In what follows, let $X$ be an object of $D$, let $A$ and $A'$ be objects of $C$, and let $h: A \to A'$ be a morphism in $C$.
-
-Thus, the functor $F$ maps $A$, $A'$ and $h$ in $C$ to $F(A)$, $F(A')$ and $F(h)$ in $D$.
-
-A universal morphism from $X$ to $F$ is a pair $(A, u: X \to F(A))$ in $D$ which has the following property, commonly referred to as a universal property. For any morphism of the form
-$$
-f: X \to F(A')
-$$ in $D$, there exists a unique morphism $h: A \to A'$ in $C$ such that the following diagram commutes:
-
-We can dualize this categorical concept. A universal morphism from $F$ to $X$ is a pair $(A, u: F(A) \to X)$ that satisfies the following universal property. For any morphism of the form $f: F(A') \to X$ in $D$, there exists a unique morphism $h: A' \to A$ in $C$ such that the following diagram commutes:
-
-Note that in each definition, the arrows are reversed. Both definitions are necessary to describe universal constructions which appear in mathematics; but they also arise due to the inherent duality present in category theory.
-
-In either case, we say that the pair $(A, u)$ which behaves as above satisfies a universal property. Such a pair is unique up to a unique isomorphism, as discussed below.
-
-Universal morphisms can be described more concisely as initial and terminal objects in a comma category.
-
-Let $F: C \to D$ be a functor and $X$ an object of $D$. Then recall that the comma category $(X \downarrow F)$ is the category where
-
-* Objects are pairs of the form $(B, f: X \to F(B))$, where $B$ is an object in $C$
-
-* A morphism from $(B, f: X \to F(B))$ to $(B', f': X \to F(B'))$ is given by a morphism $h: B \to B'$ in $C$ such that the diagram commutes:
-
-Now suppose that the object $(A, u: X \to F(A))$ in $(X \downarrow F)$ is initial. Then for every object $(A', f: X \to F(A'))$, there exists a unique morphism $h: A \to A'$ such that the following diagram commutes.
-
-Note that the equality here simply means the diagrams are the same. Also note that the diagram on the right side of the equality is exactly the same as the one offered in defining a universal morphism from $X$ to $F$. Therefore, we see that a universal morphism from $X$ to $F$ is equivalent to an initial object in the comma category $(X \downarrow F)$.
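Before the dual case, a concrete instance may make the pattern tangible: the free monoid on a set (modelled below by Python lists under concatenation) gives a universal morphism from the set to the forgetful functor from monoids to sets. The following is a minimal sketch with invented names; the unique morphism h of the definition is the fold built by `extend`.

```python
from functools import reduce

def extend(f, op, unit):
    """Universal property of the free monoid: any map f from generators
    into a monoid (operation `op`, identity `unit`) extends uniquely to
    a monoid homomorphism on lists of generators."""
    return lambda xs: reduce(op, (f(x) for x in xs), unit)

f = ord                               # generators -> the monoid (int, +, 0)
h = extend(f, lambda a, b: a + b, 0)  # the unique mediating homomorphism
assert h([]) == 0                             # unit goes to unit
assert h(["a", "b"]) == h(["a"]) + h(["b"])   # preserves the operation
assert h(["a"]) == f("a")                     # h extends f along x -> [x]
```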
-
-Conversely, recall that the comma category $(F \downarrow X)$ is the category where
-
-*Objects are pairs of the form $(B, f: F(B) \to X)$ where $B$ is an object in $C$
-
-*A morphism from $(B, f:F(B) \to X)$ to $(B', f':F(B') \to X)$ is given by a morphism $h: B \to B'$ in $C$ such that the diagram commutes:
-
-Suppose $(A, u:F(A) \to X)$ is a terminal object in $(F \downarrow X)$. Then for every object $(A', f: F(A') \to X)$, there exists a unique morphism $h: A' \to A$ such that the following diagrams commute.
-
-The diagram on the right side of the equality is the same diagram pictured when defining a universal morphism from $F$ to $X$. Hence, a universal morphism from $F$ to $X$ corresponds with a terminal object in the comma category $(F \downarrow X)$.
-
-Below are a few examples, to highlight the general idea. The reader can construct numerous other examples by consulting the articles mentioned in the introduction.
-
-Let $C$ be the category of vector spaces $K$-Vect over a field $K$ and let $D$ be the category of algebras $K$-Alg over $K$ (assumed to be unital and associative). Let $U$: $K$-Alg → $K$-Vect be the forgetful functor which assigns to each algebra its underlying vector space.
-
-Given any vector space $V$ over $K$ we can construct the tensor algebra $T(V)$. The tensor algebra is characterized by the fact:
-
-"Any linear map from $V$ to an algebra $A$ can be uniquely extended to an algebra homomorphism from $T(V)$ to $A$."
-
-This statement is an initial property of the tensor algebra since it expresses the fact that the pair $(T(V),i)$, where $i:V \to U(T(V))$ is the inclusion map, is a universal morphism from the vector space $V$ to the functor $U$.
-
-Since this construction works for any vector space $V$, we conclude that $T$ is a functor from $K$-Vect to $K$-Alg. This means that $T$ is left adjoint to the forgetful functor $U$ (see the section below on relation to adjoint functors).
-
-A categorical product can be characterized by a universal construction. For concreteness, one may consider the Cartesian product in Set, the direct product in Grp, or the product topology in Top, where products exist.
-
-Let $X$ and $Y$ be objects of a category $C$ with finite products. The product of $X$ and $Y$ is an object $X \times Y$ together with two morphisms $\pi_1: X \times Y \to X$ and $\pi_2: X \times Y \to Y$
-
-such that for any other object $Z$ of $C$ and morphisms $f: Z \to X$ and $g: Z \to Y$ there exists a unique morphism $h: Z \to X \times Y$ such that $f = \pi_1 \circ h$ and $g = \pi_2 \circ h$.
-
-To understand this characterization as a universal property, take the category $D$ to be the product category $C \times C$ and define the diagonal functor
-$$
-\Delta: C \to C \times C
-$$
-
-by $\Delta(X) = (X, X)$ and $\Delta(f: X \to Y) = (f, f)$. Then $(X \times Y, (\pi_1, \pi_2))$ is a universal morphism from $\Delta$ to the object $(X, Y)$ of $C \times C$: if $(f, g)$ is any morphism from $(Z, Z)$ to $(X, Y)$, then it must equal a morphism $\Delta(h: Z \to X \times Y) = (h,h)$ from $\Delta(Z) = (Z, Z)$ to $\Delta(X \times Y) = (X \times Y, X \times Y)$ followed by $(\pi_1, \pi_2)$.
-
-Categorical products are a particular kind of limit in category theory. One can generalize the above example to arbitrary limits and colimits.
-
-Let $J$ and $C$ be categories with $J$ a small index category and let $C^J$ be the corresponding functor category.
The diagonal functor
-$$
-\Delta: C \to C^J
-$$
-
-is the functor that maps each object $N$ in $C$ to the constant functor $\Delta(N): J \to C$ with value $N$ (i.e. $\Delta(N)(X) = N$ for each $X$ in $J$).
-
-Given a functor $F: J \to C$ (thought of as an object in $C^J$), the limit of $F$, if it exists, is nothing but a universal morphism from $\Delta$ to $F$. Dually, the colimit of $F$ is a universal morphism from $F$ to $\Delta$.
-
-Defining a quantity does not guarantee its existence. Given a functor $F: C \to D$ and an object $X$ of $D$, there may or may not exist a universal morphism from $X$ to $F$. If, however, a universal morphism $(A, u)$ does exist, then it is essentially unique. Specifically, it is unique up to a unique isomorphism: if $(A', u')$ is another such pair, then there exists a unique isomorphism $k: A \to A'$ such that $u' = F(k) \circ u$. This is easily seen by substituting $(A', u')$ in the definition of a universal morphism.
-
-It is the pair $(A, u)$ which is essentially unique in this fashion. The object $A$ itself is only unique up to isomorphism. Indeed, if $(A, u)$ is a universal morphism and $k: A \to A'$ is any isomorphism, then the pair $(A', u')$, where $u' = F(k) \circ u$, is also a universal morphism.
-
-The definition of a universal morphism can be rephrased in a variety of ways. Let $F: C \to D$ be a functor and let $X$ be an object of $D$. Then the following statements are equivalent:
-
-* $(A, u)$ is a universal morphism from $X$ to $F$
-
-* $(A, u)$ is an initial object of the comma category $(X \downarrow F)$
-
-* $(A, u)$ is a representation of $\text{Hom}_D(X, F(-))$
-
-The dual statements are also equivalent:
-
-* $(A, u)$ is a universal morphism from $F$ to $X$
-
-* $(A, u)$ is a terminal object of the comma category $(F \downarrow X)$
-
-* $(A, u)$ is a representation of $\text{Hom}_D(F(-), X)$
-
-Suppose $(A_1, u_1)$ is a universal morphism from $X_1$ to $F$ and $(A_2, u_2)$ is a universal morphism from $X_2$ to $F$. By the universal property of universal morphisms, given any morphism $h: X_1 \to X_2$ there exists a unique morphism $g: A_1 \to A_2$ such that the following diagram commutes:
-
-If every object $X_i$ of $D$ admits a universal morphism to $F$, then the assignment $X_i \mapsto A_i$ and $h \mapsto g$ defines a functor $G: D \to C$. The maps $u_i$ then define a natural transformation from $1_D$ (the identity functor on $D$) to $F\circ G$. The functors $(F, G)$ are then a pair of adjoint functors, with $G$ left-adjoint to $F$ and $F$ right-adjoint to $G$.
-
-Similar statements apply to the dual situation of terminal morphisms from $F$. If such morphisms exist for every $X$ in $D$ one obtains a functor $G: D \to C$ which is right-adjoint to $F$ (so $F$ is left-adjoint to $G$).
-
-Indeed, all pairs of adjoint functors arise from universal constructions in this manner. Let $F$ and $G$ be a pair of adjoint functors with unit $\eta$ and co-unit $\epsilon$ (see the article on adjoint functors for the definitions). Then we have a universal morphism for each object in $C$ and $D$:
-
-*For each object $X$ in $C$, $(F(X), \eta_X)$ is a universal morphism from $X$ to $G$. That is, for all $f: X \to G(Y)$ there exists a unique $g: F(X) \to Y$ for which the following diagrams commute.
-
-*For each object $Y$ in $D$, $(G(Y), \epsilon_Y)$ is a universal morphism from $F$ to $Y$. That is, for all $g: F(X) \to Y$ there exists a unique $f: X \to G(Y)$ for which the following diagrams commute.
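As a down-to-earth instance of the product characterization above: in the category of sets, the unique mediating morphism h: Z → X × Y is the pairing of f and g. A minimal Python sketch (names invented for illustration):

```python
def pi1(p): return p[0]   # projection X x Y -> X
def pi2(p): return p[1]   # projection X x Y -> Y

def pairing(f, g):
    """The unique h: Z -> X x Y with pi1 . h = f and pi2 . h = g,
    witnessing the universal property of the Cartesian product in Set."""
    return lambda z: (f(z), g(z))

f = lambda z: 2 * z       # f: Z -> X
g = str                   # g: Z -> Y
h = pairing(f, g)
assert pi1(h(21)) == f(21) and pi2(h(21)) == g(21)
# Uniqueness: any h' satisfying both equations agrees with h pointwise,
# since h'(z) = (pi1(h'(z)), pi2(h'(z))) = (f(z), g(z)).
```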
- -Universal constructions are more general than adjoint functor pairs: a universal construction is like an optimization problem; it gives rise to an adjoint pair if and only if this problem has a solution for every object of $C$ (equivalently, every object of $D$). - -Universal properties of various topological constructions were presented by Pierre Samuel in 1948. They were later used extensively by Bourbaki. The closely related concept of adjoint functors was introduced independently by Daniel Kan in 1958. diff --git a/wiki/wikipedia/2291.txt b/wiki/wikipedia/2291.txt deleted file mode 100644 index 960aee5c5a9a8b8c6d8e52cff477aacc3d7576b0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2291.txt +++ /dev/null @@ -1,74 +0,0 @@ -The Titchmarsh convolution theorem is named after Edward Charles Titchmarsh, - -a British mathematician. The theorem describes the properties of the support of the convolution of two functions. - -E. C. Titchmarsh proved the following theorem, known as the Titchmarsh convolution theorem, in 1926: - -
-
-If $\varphi(t)$ and $\psi(t)$ are integrable functions, such that
-$$
-\int_0^x \varphi(t)\psi(x-t)dt=0
-$$
-
-almost everywhere in the interval $0<x<\kappa$, then there exist $\lambda\ge0$ and $\mu\ge0$ satisfying $\lambda+\mu\ge\kappa$ such that $\varphi(t)=0$ almost everywhere in $(0,\lambda)$ and $\psi(t)=0$ almost everywhere in $(0,\mu)$.
-
-A corollary follows:
-
-
-If the integral above is 0 for all $x>0$, then either $\varphi$ or $\psi$ is almost everywhere 0 in the interval $[0,+\infty)$.
-
-
-Thus the convolution of two functions on $[0,+\infty)$ cannot be identically zero unless at least one of the two functions is identically zero.
-
-The theorem can be restated in the following form:
-
-Let $\varphi, \psi\in L^1(\mathbb{R})$. Then
-$$
-\inf\operatorname{supp} \varphi\ast \psi =\inf\operatorname{supp} \varphi+\inf\operatorname{supp} \psi
-$$
-if the right-hand side is finite.
-
-Similarly,
-$$
-\sup\operatorname{supp} \varphi\ast\psi = \sup\operatorname{supp}\varphi + \sup\operatorname{supp} \psi
-$$
-if the right-hand side is finite.
-
-This theorem essentially states that the well-known inclusion
-$$
-\operatorname{supp}\varphi\ast \psi \subset \operatorname{supp}\varphi+\operatorname{supp}\psi
-$$
-is sharp at the boundary.
-
-The higher-dimensional generalization in terms of the convex hull of the supports was proved by J.-L. Lions in 1951:
-
-If $\varphi, \psi\in\mathcal{E}'(\mathbb{R}^n)$, then $\operatorname{c.h.} \operatorname{supp} \varphi\ast \psi=\operatorname{c.h.} \operatorname{supp} \varphi+\operatorname{c.h.}\operatorname{supp} \psi.$
-
-Above, $\operatorname{c.h.}$ denotes the convex hull of the set, and $\mathcal{E}'(\mathbb{R}^n)$ denotes the space of distributions with compact support.
-
-The theorem lacks an elementary proof. The original proof by Titchmarsh is based on the Phragmén–Lindelöf principle, Jensen's inequality, the theorem of Carleman, and the theorem of Valiron.
-
-Further proofs are contained in the literature, in harmonic-analysis style ("Theorem 4.3.3"), real-analysis style ("Chapter VI"), and complex-analysis style ("Lecture 16").
diff --git a/wiki/wikipedia/2292.txt b/wiki/wikipedia/2292.txt
deleted file mode 100644
index 960aee5c5a9a8b8c6d8e52cff477aacc3d7576b0..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2292.txt
+++ /dev/null
@@ -1,74 +0,0 @@
-The Valiant–Vazirani theorem is a theorem in computational complexity theory stating that if there is a polynomial-time algorithm for Unambiguous-SAT, then NP = RP. It was proven by Leslie Valiant and Vijay Vazirani in their paper titled "NP is as easy as detecting unique solutions" published in 1986. The proof is based on the Mulmuley–Vazirani–Vazirani isolation lemma, which was subsequently used for a number of important applications in theoretical computer science.
-
-The Valiant–Vazirani theorem implies that the Boolean satisfiability problem, which is NP-complete, remains a computationally hard problem even if the input instances are promised to have at most one satisfying assignment.
-
-Unambiguous-SAT is the promise problem of deciding whether a given Boolean formula that has at most one satisfying assignment is unsatisfiable or has exactly one satisfying assignment. In the first case, an algorithm for Unambiguous-SAT should reject, and in the second it should accept the formula. If the formula has more than one satisfying assignment, then there is no condition on the behavior of the algorithm.
-
-The promise problem Unambiguous-SAT can be decided by a nondeterministic Turing machine that has at most one accepting computation path. In this sense, this promise problem belongs to the complexity class UP (which is usually only defined for languages).
-
-The proof of the Valiant–Vazirani theorem consists of a probabilistic reduction from SAT to SAT such that, with probability at least $\Omega(1/n)$, the output formula has at most one satisfying assignment, and thus satisfies the promise of the Unambiguous-SAT problem.
-
-More precisely, the reduction is a randomized polynomial-time algorithm that maps a Boolean formula $F(x_1,\dots,x_n)$ with $n$ variables $x_1,\dots,x_n$ to a Boolean formula $F'(x_1,\dots,x_n)$ such that
-
-* every satisfying assignment of $F'$ also satisfies $F$, and
-
-* if $F$ is satisfiable, then, with probability at least $\Omega(1/n)$, $F'$ has a unique satisfying assignment $(a_1,\dots,a_n)$.
-
-By running the reduction a polynomial number $t$ of times, each time with fresh independent random bits, we get formulas $F'_1,\dots,F'_t$. Choosing $t=O(n)$, we get that the probability that at least one formula $F'_i$ is uniquely satisfiable is at least $1/2$ if $F$ is satisfiable.
-
-This gives a Turing reduction from SAT to Unambiguous-SAT, since an assumed algorithm for Unambiguous-SAT can be invoked on the $F'_i$. Then the random self-reducibility of SAT can be used to compute a satisfying assignment, should it exist.
-
-Overall, this proves that NP = RP if Unambiguous-SAT can be solved in RP.
-
-The idea of the reduction is to intersect the solution space of the formula $F$ with $k$ random affine hyperplanes over $\text{GF}(2)^n$, where $k\in\{1,\dots,n\}$ is chosen uniformly at random.
-
-An alternative proof is based on the isolation lemma by Mulmuley, Vazirani, and Vazirani. They consider a more general setting, and applied to the setting here this gives an isolation probability of only $\Omega(1/n^8)$.
diff --git a/wiki/wikipedia/2293.txt b/wiki/wikipedia/2293.txt
deleted file mode 100644
index 2cd9efd1510ee867a092a8cc5e9a321eecc3edbe..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2293.txt
+++ /dev/null
@@ -1,51 +0,0 @@
-In mathematics, two sequences of numbers, often experimental data, are proportional or directly proportional if their corresponding elements have a constant ratio, which is called the coefficient of proportionality or proportionality constant. Two sequences are inversely proportional if corresponding elements have a constant product, also called the coefficient of proportionality.
-
-This definition is commonly extended to related varying quantities, which are often called variables. This meaning of variable is not the common meaning of the term in mathematics (see variable (mathematics)); these two different concepts share the same name for historical reasons.
-
-Two functions $f(x)$ and $g(x)$ are proportional if their ratio $\frac{f(x)}{g(x)}$ is a constant function.
-
-If several pairs of variables share the same direct proportionality constant, the equation expressing the equality of these ratios is called a proportion, e.g., a/b = x/y = ⋯ = k (for details see Ratio).
-
-Proportionality is closely related to linearity.
-
-Given two variables x and y, y is directly proportional to x if there is a non-zero constant k such that
-$$
-y = kx.
-$$
-
-The relation is often denoted using the symbols "∝" (not to be confused with the Greek letter alpha) or "~":
-$$
-y \propto x,
-$$ or $y \sim x.$
-
-For $x \ne 0$ the proportionality constant can be expressed as the ratio
-$$
-k = \frac{y}{x}.
-$$
-
-It is also called the constant of variation or constant of proportionality.
-
-A direct proportionality can also be viewed as a linear equation in two variables with a y-intercept of 0 and a slope of k. This corresponds to linear growth.
-
-* If an object travels at a constant speed, then the distance traveled is directly proportional to the time spent traveling, with the speed being the constant of proportionality.
-
-* The circumference of a circle is directly proportional to its diameter, with the constant of proportionality equal to π.
-
-* On a map of a sufficiently small geographical area drawn to scale, the distance between any two points on the map is directly proportional to the beeline distance between the two locations represented by those points; the constant of proportionality is the scale of the map.
-
-* The force acting on a small object with small mass by a nearby large extended mass due to gravity is directly proportional to the object's mass; the constant of proportionality between the force and the mass is known as gravitational acceleration.
-
-* The net force acting on an object is proportional to the acceleration of that object with respect to an inertial frame of reference. The constant of proportionality in this law, Newton's second law, is the classical mass of the object.
-
-The concept of inverse proportionality can be contrasted with direct proportionality. Consider two variables said to be "inversely proportional" to each other. If all other variables are held constant, the magnitude or absolute value of one inversely proportional variable decreases if the other variable increases, while their product (the constant of proportionality k) is always the same. As an example, the time taken for a journey is inversely proportional to the speed of travel.
-
-Formally, two variables are inversely proportional (also called varying inversely, in inverse variation, in inverse proportion) if each of the variables is directly proportional to the multiplicative inverse (reciprocal) of the other, or equivalently if their product is a constant. It follows that the variable y is inversely proportional to the variable x if there exists a non-zero constant k such that
-$$
-y = \frac{k}{x},
-$$
-
-or equivalently, $xy = k.$ Hence the constant k is the product of x and y.
-
-The graph of two variables varying inversely on the Cartesian coordinate plane is a rectangular hyperbola. The product of the x and y values of each point on the curve equals the constant of proportionality (k). Since neither x nor y can equal zero (because k is non-zero), the graph never crosses either axis.
-
-The concepts of direct and inverse proportion lead to the location of points in the Cartesian plane by hyperbolic coordinates; the two coordinates correspond to the constant of direct proportionality that specifies a point as being on a particular ray and the constant of inverse proportionality that specifies a point as being on a particular hyperbola.
diff --git a/wiki/wikipedia/2294.txt b/wiki/wikipedia/2294.txt
deleted file mode 100644
index f6234572f2676ddafbb471e6f9b6a91dd980d2f2..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2294.txt
+++ /dev/null
@@ -1,40 +0,0 @@
-The Beal conjecture is the following conjecture in number theory:
-
-If
-$$
-A^x +B^y = C^z,
-$$
-
-where A, B, C, x, y, and z are non-zero integers with x, y, z ≥ 3, then A, B, and C have a common prime factor.
-
-Equivalently,
-
-The equation $A^x + B^y = C^z$ has no solutions in non-zero pairwise coprime integers A, B, C if x, y, z ≥ 3.
-
-The conjecture was formulated in 1993 by Andrew Beal, a banker and amateur mathematician, while investigating generalizations of Fermat's Last Theorem. Since 1997, Beal has offered a monetary prize for a peer-reviewed proof of this conjecture or a counterexample. The conjecture is also known as the Mauldin conjecture and the Tijdeman–Zagier conjecture.
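Small instances of the conjecture are easy to explore by brute force. The following minimal Python sketch (the function name and search bounds are arbitrary choices for illustration) enumerates solutions of A^x + B^y = C^z with all exponents at least 3, and checks that every solution found, such as 3^3 + 6^3 = 3^5, has a common prime factor. Known partial results are listed next.

```python
from math import gcd

def beal_hits(max_base=20, max_exp=6):
    """Enumerate small solutions of A^x + B^y = C^z with x, y, z >= 3."""
    powers = {}  # map each perfect power C^z to its possible bases C
    for c in range(2, max_base + 1):
        for z in range(3, max_exp + 1):
            powers.setdefault(c ** z, []).append(c)
    hits = []
    for a in range(2, max_base + 1):
        for x in range(3, max_exp + 1):
            for b in range(2, max_base + 1):
                for y in range(3, max_exp + 1):
                    for c in powers.get(a ** x + b ** y, ()):
                        hits.append((a, b, c))
    return hits

hits = beal_hits()
assert hits  # e.g. (3, 6, 3) from 3^3 + 6^3 = 3^5
# Consistent with the conjecture: every hit shares a common factor.
assert all(gcd(gcd(a, b), c) > 1 for a, b, c in hits)
```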
-
-* The case (x, y, z) = (2, 3, 10) and all its permutations were proven by David Brown in 2009 to have only the Catalan solution.
-
-* The case (x, y, z) = (2, 3, 11) and all its permutations were proven by Freitas, Naskręcki and Stoll to have only the Catalan solution.
-
-* The case (x, y, z) = (2, 3, 15) and all its permutations were proven by Samir Siksek and Michael Stoll in 2013.
-
-* The case (x, y, z) = (2, 4, 4) and all its permutations were proven to have no solutions by the combined work of Pierre de Fermat in the 1640s and Euler in 1738.
-
-* The case (x, y, z) = (2, 4, 5) and all its permutations are known, by Nils Bruin in 2003, to have only one non-Catalan solution, which does not contradict the Beal conjecture.
-
-* The case (x, y, z) = (3, 6, n) and all its permutations were proven for n ≥ 3 by Bennett, Chen, Dahmen and Yazdani in 2014.
-
-* Assuming the validity of Beal's conjecture, there exists an upper bound for any common divisor of x, y and z in the expression $ax^m+by^n = z^r$.
-
-For a published proof or counterexample, banker Andrew Beal initially offered a prize of US $5,000 in 1997, raising it to $50,000 over ten years, but has since raised it to US $1,000,000.
-
-The American Mathematical Society (AMS) holds the $1 million prize in a trust until the Beal conjecture is solved. It is supervised by the Beal Prize Committee (BPC), which is appointed by the AMS president.
-
-The counterexamples $7^3 + 13^2 = 2^9$ and $1^m + 2^3 = 3^2$ show that the conjecture would be false if one of the exponents were allowed to be 2. The Fermat–Catalan conjecture is an open conjecture dealing with such cases (the condition of this conjecture is that the sum of the reciprocals is less than 1). If we allow at most one of the exponents to be 2, then there may be only finitely many solutions (except the case $1^m + 2^3 = 3^2$).
-
-If A, B, C are allowed to have a common prime factor, then the conjecture is not true; a classic counterexample is $2^{10} + 2^{10} = 2^{11}$.
-
-A variation of the conjecture asserting that x, y, z (instead of A, B, C) must have a common prime factor is not true. A counterexample is $27^4 +162^3 = 9^7,$ in which 4, 3, and 7 have no common prime factor. (In fact, the maximum common prime factor of the exponents that can be valid is 2; a common factor greater than 2 would be a counterexample to Fermat's Last Theorem.)
-
-The conjecture is not valid over the larger domain of Gaussian integers. After a prize of $50 was offered for a counterexample, Fred W. Helenius provided $(-2+i)^3 + (-2-i)^3 = (1+i)^4$.
diff --git a/wiki/wikipedia/2295.txt b/wiki/wikipedia/2295.txt
deleted file mode 100644
index 81ccd3ac13638c1bc8194ac753f75bcb3b48c0a7..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2295.txt
+++ /dev/null
@@ -1,41 +0,0 @@
-In the field of mathematics known as functional analysis, the invariant subspace problem is a partially unresolved problem asking whether every bounded operator on a complex Banach space sends some non-trivial closed subspace to itself. Many variants of the problem have been solved, by restricting the class of bounded operators considered or by specifying a particular class of Banach spaces. The problem is still open for separable Hilbert spaces (in other words, all the examples found of operators with no non-trivial invariant subspaces act on Banach spaces which are not separable Hilbert spaces).
-
-The problem seems to have been stated in the mid-1900s after work by Beurling and von Neumann, who found (but never published) a positive solution for the case of compact operators. It was then posed by Paul Halmos for the case of operators $T$ such that $T^2$ is compact. This was resolved affirmatively, for the more general class of polynomially compact operators (operators $T$ such that $p(T)$ is a compact operator for a suitably chosen non-zero polynomial $p$), by Allen R. Bernstein and Abraham Robinson in 1966.
-
-For Banach spaces, the first example of an operator without an invariant subspace was constructed by Per Enflo. He proposed a counterexample to the invariant subspace problem in 1975, publishing an outline in 1976. Enflo submitted the full article in 1981, and the article's complexity and length delayed its publication to 1987. Enflo's long manuscript had a world-wide circulation among mathematicians, and some of its ideas were described in publications besides Enflo (1976). Enflo's works inspired a similar construction of an operator without an invariant subspace, for example by Beauzamy, who acknowledged Enflo's ideas.
-
-In the 1990s, Enflo developed a "constructive" approach to the invariant subspace problem on Hilbert spaces.
-
-Formally, the invariant subspace problem for a complex Banach space $H$ of dimension > 1 is the question whether every bounded linear operator $T: H \to H$ has a non-trivial closed $T$-invariant subspace: a closed linear subspace $W$ of $H$, which is different from $\{0\}$ and from $H$, such that $T(W)\subset W$.
-
-A negative answer to the problem is closely related to properties of the orbits of $T$. If $x$ is an element of the Banach space $H$, the orbit of $x$ under the action of $T$, denoted by $[x]$, is the subspace generated by the sequence $\{ T^{n}(x): n \ge 0\}$. This is also called the $T$-cyclic subspace generated by $x$. From the definition it follows that $[x]$ is a $T$-invariant subspace. Moreover, it is the minimal $T$-invariant subspace containing $x$: if $W$ is another invariant subspace containing $x$, then necessarily $T^n(x) \in W$ for all $n \ge 0$ (since $W$ is $T$-invariant), and so $[x]\subset W$. If $x$ is non-zero, then $[x]$ is not equal to $\{0\}$, so its closure is either the whole space $H$ (in which case $x$ is said to be a cyclic vector for $T$) or it is a non-trivial $T$-invariant subspace. Therefore, a counterexample to the invariant subspace problem would be a Banach space $H$ and a bounded operator $T: H \to H$ for which every non-zero vector $x\in H$ is a cyclic vector for $T$. (Here a "cyclic vector" $x$ for an operator $T$ on a Banach space $H$ means one for which the orbit $[x]$ of $x$ is dense in $H$.)
-
-While the case of the invariant subspace problem for separable Hilbert spaces is still open, several other cases have been settled for topological vector spaces (over the field of complex numbers):
-
-*For finite-dimensional complex vector spaces of dimension greater than one, every operator admits an eigenvector, so it has a 1-dimensional invariant subspace.
-
-* The conjecture is true if the Hilbert space $H$ is not separable (i.e. if it has an uncountable orthonormal basis). In fact, if $x$ is a non-zero vector in $H$, the norm closure of the linear orbit $[x]$ is separable (by construction) and hence a proper subspace and also invariant.
-
-*von Neumann showed that any compact operator on a Hilbert space of dimension at least 2 has a non-trivial invariant subspace.
-
-* The spectral theorem shows that all normal operators admit invariant subspaces.
-
-* Aronszajn and Smith proved that every compact operator on any Banach space of dimension at least 2 has an invariant subspace.
-
-* Bernstein and Robinson proved using non-standard analysis that if the operator $T$ on a Hilbert space is polynomially compact (in other words $p(T)$ is compact for some non-zero polynomial $p$) then $T$ has an invariant subspace. Their proof uses the original idea of embedding the infinite-dimensional Hilbert space in a hyperfinite-dimensional Hilbert space (see Non-standard analysis#Invariant subspace problem).
-
-* Halmos, after having seen Robinson's preprint, eliminated the non-standard analysis from it and provided a shorter proof in the same issue of the same journal.
-
-* Lomonosov gave a very short proof using the Schauder fixed point theorem that if the operator $T$ on a Banach space commutes with a non-zero compact operator then $T$ has a non-trivial invariant subspace. This includes the case of polynomially compact operators, because an operator commutes with any polynomial in itself. More generally, he showed that if $S$ commutes with a non-scalar operator $T$ that commutes with a non-zero compact operator, then $S$ has an invariant subspace.
-
-*The first example of an operator on a Banach space with no non-trivial invariant subspaces was found by Enflo, and his example was simplified by Beauzamy.
-
-*The first counterexample on a "classical" Banach space was found by Charles Read, who described an operator on the classical Banach space $l_1$ with no invariant subspaces.
-
-*Read later constructed an operator on $l_1$ without even a non-trivial closed invariant subset, that is, such that for every vector $x$ the set $\{ T^{n}(x): n \ge 0\}$ is dense, in which case the vector is called hypercyclic (the difference with the case of cyclic vectors is that we are not taking the subspace generated by the points $\{ T^{n}(x): n \ge 0\}$ in this case).
-
-*Atzmon gave an example of an operator without invariant subspaces on a nuclear Fréchet space.
-
-*Śliwa proved that any infinite-dimensional Banach space of countable type over a non-Archimedean field admits a bounded linear operator without a non-trivial closed invariant subspace. This completely solves the non-Archimedean version of this problem, posed by van Rooij and Schikhof in 1992.
-
-*Argyros and Haydon gave the construction of an infinite-dimensional Banach space such that every continuous operator is the sum of a compact operator and a scalar operator, so in particular every operator has an invariant subspace.
diff --git a/wiki/wikipedia/2296.txt b/wiki/wikipedia/2296.txt
deleted file mode 100644
index a0a7e0a06003e42c3a4a4d1cccaed938511d2d7b..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2296.txt
+++ /dev/null
@@ -1 +0,0 @@
-In mathematics, the Mestre bound is a bound on the analytic rank of an elliptic curve in terms of its conductor, introduced by Jean-François Mestre.
diff --git a/wiki/wikipedia/2297.txt b/wiki/wikipedia/2297.txt
deleted file mode 100644
index dd52f00f4f9d5470cba376e81cb8b4d50552cc7a..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2297.txt
+++ /dev/null
@@ -1,552 +0,0 @@
-In number theory, the law of quadratic reciprocity is a theorem about modular arithmetic that gives conditions for the solvability of quadratic equations modulo prime numbers.
Due to its subtlety, it has many formulations, but the most standard statement is:
-
-Let p and q be distinct odd prime numbers, and define the Legendre symbol as:
-$$
-\left(\frac{q}{p}\right)
-=\begin{cases}
-1 & \text{if } n^2 \equiv q \bmod p \text{ for some integer } n\\
--1 & \text{otherwise}
-\end{cases}
-$$
-
-Then:
-$$
-\left(\frac{p}{q}\right) \left(\frac{q}{p}\right) = (-1)^{\frac{p-1}{2}\frac{q-1}{2}}.
-$$
-
-This law, together with its supplements, allows the easy calculation of any Legendre symbol, making it possible to determine whether there is an integer solution for any quadratic equation of the form $x^2\equiv a \bmod p$ for an odd prime $p$; that is, to determine the "perfect squares" modulo $p$. However, this is a non-constructive result: it gives no help at all for finding a specific solution; for this, other methods are required. For example, in the case $p\equiv 3 \bmod 4$ using Euler's criterion one can give an explicit formula for the "square roots" modulo $p$ of a quadratic residue $a$, namely,
-$$
-\pm a^{\frac{p+1}{4}};
-$$
-
-indeed,
-$$
-\left (\pm a^{\frac{p+1}{4}} \right )^2=a^{\frac{p+1}{2}}=a\cdot a^{\frac{p-1}{2}}\equiv a\left(\frac{a}{p}\right)=a \bmod p.
-$$
-
-This formula only works if it is known in advance that $a$ is a quadratic residue, which can be checked using the law of quadratic reciprocity.
-
-The quadratic reciprocity theorem was conjectured by Euler and Legendre and first proved by Gauss, who referred to it as the "fundamental theorem" in his Disquisitiones Arithmeticae and his papers, writing
-
-The fundamental theorem must certainly be regarded as one of the most elegant of its type. (Art. 151)
-
-Privately, Gauss referred to it as the "golden theorem". He published six proofs for it, and two more were found in his posthumous papers. There are now over 240 published proofs. The shortest known proof is included below, together with short proofs of the law's supplements (the Legendre symbols of −1 and 2).
-
-Generalizing the reciprocity law to higher powers has been a leading problem in mathematics, and has been crucial to the development of much of the machinery of modern algebra, number theory, and algebraic geometry, culminating in Artin reciprocity, class field theory, and the Langlands program.
-
-Quadratic reciprocity arises from certain subtle factorization patterns involving perfect square numbers. In this section, we give examples which lead to the general case.
-
-Consider the polynomial $f(n) = n^2 - 5$ and its values for $n \in \N.$ The prime factorizations of these values begin
-
-f(1) = −4 = −2², f(2) = −1, f(3) = 4 = 2², f(4) = 11, f(5) = 20 = 2²·5, f(6) = 31, f(7) = 44 = 2²·11, f(8) = 59, f(9) = 76 = 2²·19, f(10) = 95 = 5·19, f(11) = 116 = 2²·29, f(12) = 139, . . .
-
-The prime factors $p$ dividing $f(n)$ are $p=2,5$, and every prime whose final digit is $1$ or $9$; no primes ending in $3$ or $7$ ever appear. Now, $p$ is a prime factor of some $n^2-5$ whenever $n^2 - 5 \equiv 0 \bmod p$, i.e. whenever $n^2 \equiv 5 \bmod p,$ i.e. whenever 5 is a quadratic residue modulo $p$. This happens for $p= 2, 5$ and those primes with $p\equiv 1, 4 \bmod 5,$ and the latter numbers $1=(\pm1)^2$ and $4=(\pm2)^2$ are precisely the quadratic residues modulo $5$. Therefore, except for $p = 2,5$, we have that $5$ is a quadratic residue modulo $p$ iff $p$ is a quadratic residue modulo $5$.
-
-The law of quadratic reciprocity gives a similar characterization of prime divisors of $f(n)=n^2 - q$ for any prime q, which leads to a characterization for any integer $q$.
-
-Let p be an odd prime.
A number modulo p is a quadratic residue whenever it is congruent to a square (mod p); otherwise it is a quadratic non-residue. ("Quadratic" can be dropped if it is clear from the context.) Here we exclude zero as a special case. Then as a consequence of the fact that the multiplicative group of a finite field of order p is cyclic of order p-1, the following statements hold: - -*There are an equal number of quadratic residues and non-residues; and - -*The product of two quadratic residues is a residue, the product of a residue and a non-residue is a non-residue, and the product of two non-residues is a residue. - -For the avoidance of doubt, these statements do not hold if the modulus is not prime. - -For example, there are only 3 quadratic residues (1, 4 and 9) in the multiplicative group modulo 15. - -Moreover, although 7 and 8 are quadratic non-residues, their product 7x8 = 11 is also a quadratic non-residue, in contrast to the prime case. - -Quadratic residues are entries in the following table: - -This table is complete for odd primes less than 50. To check whether a number m is a quadratic residue mod one of these primes p, find a ≡ m (mod p) and 0 ≤ a < p. If a is in row p, then m is a residue (mod p); if a is not in row p of the table, then m is a nonresidue (mod p). - -The quadratic reciprocity law is the statement that certain patterns found in the table are true in general. - -Another way to organize the data is to see which primes are residues mod which other primes, as illustrated in the following table. The entry in row p column q is R if q is a quadratic residue (mod p); if it is a nonresidue the entry is N. - -If the row, or the column, or both, are ≡ 1 (mod 4) the entry is blue or green; if both row and column are ≡ 3 (mod 4), it is yellow or orange. - -The blue and green entries are symmetric around the diagonal: The entry for row p, column q is R (resp N) if and only if the entry at row q, column p, is R (resp N). - -The yellow and orange ones, on the other hand, are antisymmetric: The entry for row p, column q is R (resp N) if and only if the entry at row q, column p, is N (resp R). - -The reciprocity law states that these patterns hold for all p and q. - -The supplements provide solutions to specific cases of quadratic reciprocity. They are often quoted as partial results, without having to resort to the complete theorem. - -Trivially 1 is a quadratic residue for all primes. The question becomes more interesting for −1. Examining the table, we find −1 in rows 5, 13, 17, 29, 37, and 41 but not in rows 3, 7, 11, 19, 23, 31, 43 or 47. The former set of primes are all congruent to 1 modulo 4, and the latter are congruent to 3 modulo 4. - -First Supplement to Quadratic Reciprocity. The congruence $x^2 \equiv -1 \bmod{p}$ is solvable if and only if $p$ is congruent to 1 modulo 4. - -Examining the table, we find 2 in rows 7, 17, 23, 31, 41, and 47, but not in rows 3, 5, 11, 13, 19, 29, 37, or 43. The former primes are all ≡ ±1 (mod 8), and the latter are all ≡ ±3 (mod 8). This leads to - -Second Supplement to Quadratic Reciprocity. The congruence $x^2 \equiv 2 \bmod{p}$ is solvable if and only if $p$ is congruent to ±1 modulo 8. - -−2 is in rows 3, 11, 17, 19, 41, 43, but not in rows 5, 7, 13, 23, 29, 31, 37, or 47. The former are ≡ 1 or ≡ 3 (mod 8), and the latter are ≡ 5, 7 (mod 8). - -3 is in rows 11, 13, 23, 37, and 47, but not in rows 5, 7, 17, 19, 29, 31, 41, or 43. The former are ≡ ±1 (mod 12) and the latter are all ≡ ±5 (mod 12). 
- -−3 is in rows 7, 13, 19, 31, 37, and 43 but not in rows 5, 11, 17, 23, 29, 41, or 47. The former are ≡ 1 (mod 3) and the latter ≡ 2 (mod 3). - -Since the only residue (mod 3) is 1, we see that −3 is a quadratic residue modulo every prime which is a residue modulo 3. - -5 is in rows 11, 19, 29, 31, and 41 but not in rows 3, 7, 13, 17, 23, 37, 43, or 47. The former are ≡ ±1 (mod 5) and the latter are ≡ ±2 (mod 5). - -Since the only residues (mod 5) are ±1, we see that 5 is a quadratic residue modulo every prime which is a residue modulo 5. - -−5 is in rows 3, 7, 23, 29, 41, 43, and 47 but not in rows 11, 13, 17, 19, 31, or 37. The former are ≡ 1, 3, 7, 9 (mod 20) and the latter are ≡ 11, 13, 17, 19 (mod 20). - -The observations about −3 and 5 continue to hold: −7 is a residue modulo p if and only if p is a residue modulo 7, −11 is a residue modulo p if and only if p is a residue modulo 11, 13 is a residue (mod p) if and only if p is a residue modulo 13, etc. The more complicated-looking rules for the quadratic characters of 3 and −5, which depend upon congruences modulo 12 and 20 respectively, are simply the ones for −3 and 5 working with the first supplement. - -Example. For −5 to be a residue (mod p), either both 5 and −1 have to be residues (mod p) or they both have to be non-residues: i.e., p ≡ ±1 (mod 5) and p ≡ 1 (mod 4) or p ≡ ±2 (mod 5) and p ≡ 3 (mod 4). Using the Chinese remainder theorem these are equivalent to p ≡ 1, 9 (mod 20) or p ≡ 3, 7 (mod 20). - -The generalization of the rules for −3 and 5 is Gauss's statement of quadratic reciprocity. - -Quadratic Reciprocity (Gauss's statement). If $q \equiv 1 \bmod{4}$, then the congruence $x^2 \equiv p \bmod{q}$ is solvable if and only if $x^2 \equiv q \bmod{p}$ is solvable. If $q \equiv 3 \bmod{4}$ and $p \equiv 3 \bmod{4}$, then the congruence $x^2 \equiv p \bmod{q}$ is solvable if and only if $x^2 \equiv -q \bmod{p}$ is solvable. - -Quadratic Reciprocity (combined statement). Define $q^* = (-1)^{\frac{q-1}{2}}q$. Then the congruence $x^2 \equiv p \bmod{q}$ is solvable if and only if $x^2 \equiv q^* \bmod{p}$ is solvable. - -Quadratic Reciprocity (Legendre's statement). If p or q are congruent to 1 modulo 4, then: $x^2 \equiv q \bmod{p}$ is solvable if and only if $x^2 \equiv p \bmod{q}$ is solvable. If p and q are congruent to 3 modulo 4, then: $x^2 \equiv q \bmod{p}$ is solvable if and only if $x^2 \equiv p \bmod{q}$ is not solvable. - -The last is immediately equivalent to the modern form stated in the introduction above. It is a simple exercise to prove that Legendre's and Gauss's statements are equivalent – it requires no more than the first supplement and the facts about multiplying residues and nonresidues. - -Apparently, the shortest known proof yet was published by B. Veklych in the American Mathematical Monthly. - -The value of the Legendre symbol of $-1$ (used in the proof above) follows directly from Euler's criterion: -$$ - \left(\frac{-1}{p}\right)\equiv (-1)^{\frac{p-1}{2}} \bmod p -$$ - -by Euler's criterion, but both sides of this congruence are numbers of the form $\pm 1$, so they must be equal. - -Whether $2$ is a quadratic residue can be concluded if we know the number of solutions of the equation $x^2+y^2=2$ with $x, y \in \Z_p,$ which can be solved by standard methods. 
Namely, all its solutions where $xy\neq 0, x\neq\pm y$ can be grouped into octuplets of the form $(\pm x, \pm y), (\pm y, \pm x)$, and what is left are four solutions of the form $(\pm 1, \pm 1)$ and possibly four additional solutions where $x^2=2, y=0$ and $x=0, y^2=2$, which exist precisely if $2$ is a quadratic residue. That is, $2$ is a quadratic residue precisely if the number of solutions of this equation is divisible by $8$. And this equation can be solved in just the same way here as over the rational numbers: substitute $x=a+1, y=at+1$, where we demand that $a\neq 0$ (leaving out the two solutions $(1,\pm 1)$); then the original equation transforms into
-$$
-a=-\frac{2(t+1)}{(t^2+1)}.
-$$
-
-Here $t$ can have any value that does not make the denominator zero - for which there are $1+\left(\frac{-1}{p}\right)$ possibilities (i.e. $2$ if $-1$ is a residue, $0$ if not) - and also does not make $a$ zero, which excludes one more option, $t=-1$. Thus there are
-$$
-p-\left(1+\left(\frac{-1}{p}\right)\right)-1
-$$
-
-possibilities for $t$, and so together with the two excluded solutions there are overall $p-\left(\frac{-1}{p}\right)$ solutions of the original equation. Therefore, $2$ is a residue modulo $p$ if and only if $8$ divides $p-(-1)^{\frac{p-1}{2}}$. This is a reformulation of the condition stated above.
-
-The theorem was formulated in many ways before its modern form: Euler and Legendre did not have Gauss's congruence notation, nor did Gauss have the Legendre symbol.
-
-In this article p and q always refer to distinct positive odd primes, and x and y to unspecified integers.
-
-Fermat proved (or claimed to have proved) a number of theorems about expressing a prime by a quadratic form:
-
-\begin{align}
-p=x^2+ y^2 \qquad &\Longleftrightarrow \qquad p=2 \quad \text{ or } \quad p\equiv 1 \bmod{4} \\
-p=x^2+2y^2 \qquad &\Longleftrightarrow \qquad p=2 \quad \text{ or } \quad p\equiv 1, 3 \bmod{8} \\
-p=x^2+3y^2 \qquad &\Longleftrightarrow \qquad p=3 \quad \text{ or } \quad p\equiv 1 \bmod{3}
-\end{align}
-
-He did not state the law of quadratic reciprocity, although the cases −1, ±2, and ±3 are easy deductions from these and other of his theorems.
-
-He also claimed to have a proof that if the prime number p ends with 7 (in base 10) and the prime number q ends in 3, and p ≡ q ≡ 3 (mod 4), then
-$$
-pq=x^2+5y^2.
-$$
-
-Euler conjectured, and Lagrange proved, that
-
-\begin{align}
-p &\equiv 1, 9 \bmod{20}\quad \Longrightarrow \quad p = x^2+5y^2 \\
-p, q &\equiv 3, 7 \bmod{20} \quad \Longrightarrow \quad pq=x^2+5y^2
-\end{align}
-
-Proving these and other statements of Fermat was one of the things that led mathematicians to the reciprocity theorem.
-
-Translated into modern notation, Euler stated that for distinct odd primes p and q:
-
-# If q ≡ 1 (mod 4) then q is a quadratic residue (mod p) if and only if there exists some integer b such that p ≡ b^2 (mod q).
-
-# If q ≡ 3 (mod 4) then q is a quadratic residue (mod p) if and only if there exists some integer b which is odd and not divisible by q such that p ≡ ±b^2 (mod 4q).
-
-This is equivalent to quadratic reciprocity.
-
-He could not prove it, but he did prove the second supplement.
-
-Fermat proved that if p is a prime number and a is an integer,
-$$
-a^p\equiv a \bmod{p}.
-$$
-
-Thus if p does not divide a, then $a^{p-1}\equiv 1 \bmod{p}$, and, using the non-obvious fact (see for example Ireland and Rosen below) that the residues modulo p form a field and therefore in particular the multiplicative group is cyclic, so that a quadratic equation has at most two solutions, it follows that
-$$
-a^{\frac{p-1}{2}} \equiv \pm 1 \bmod{p}.
-$$
-
-Legendre lets a and A represent positive primes ≡ 1 (mod 4) and b and B positive primes ≡ 3 (mod 4), and sets out a table of eight theorems that together are equivalent to quadratic reciprocity.
-
-He says that since expressions of the form
-$$
-N^{\frac{c-1}{2}}\bmod{c}, \qquad \gcd(N, c) = 1
-$$
-
-will come up so often he will abbreviate them as:
-$$
-\left(\frac{N}{c}\right) \equiv N^{\frac{c-1}{2}} \bmod{c} = \pm 1.
-$$
-
-This is now known as the Legendre symbol, and an equivalent definition is used today: for all integers a and all odd primes p
-$$
-\left(\frac{a}{p}\right) = \begin{cases} 0 & a \equiv 0 \bmod{p} \\ 1 & a \not\equiv 0\bmod{p} \text{ and } \exists x : a\equiv x^2\bmod{p} \\-1 &a \not\equiv 0\bmod{p} \text{ and there is no such } x. \end{cases}
-$$
-
-Legendre's version of quadratic reciprocity:
-$$
-\left(\frac{p}{q}\right) = \begin{cases}
-\left(\tfrac{q}{p}\right) & p\equiv 1 \bmod{4} \quad \text{ or } \quad q \equiv 1 \bmod{4} \\
--\left(\tfrac{q}{p}\right) & p \equiv 3 \bmod{4} \quad \text{ and } \quad q \equiv 3 \bmod{4}
-\end{cases}
-$$
-
-He notes that these can be combined:
-$$
-\left(\frac{p}{q}\right) \left(\frac{q}{p}\right) = (-1)^{\frac{p-1}{2}\frac{q-1}{2}}.
-$$
-
-A number of proofs, especially those based on Gauss's Lemma, explicitly calculate this formula.
-
-The supplementary laws in Legendre symbols:
-
-\begin{align}
-\left(\frac{-1}{p}\right) &= (-1)^{\frac{p-1}{2}} = \begin{cases} 1 &p \equiv 1 \bmod{4}\\ -1 & p \equiv 3 \bmod{4}\end{cases} \\
-\left(\frac{2}{p}\right) &= (-1)^{\frac{p^2-1}{8}} = \begin{cases} 1 & p \equiv 1, 7 \bmod{8}\\ -1 & p \equiv 3, 5\bmod{8}\end{cases}
-\end{align}
-
-From these two supplements, we can obtain a third reciprocity law for the quadratic character −2 as follows:
-
-For −2 to be a quadratic residue mod p, −1 and 2 must either both be quadratic residues or both be non-residues mod p.
-
-So $\frac{p-1}{2}$ and $\frac{p^2-1}{8}$ are either both even or both odd. The sum of these two expressions is
-$$
-\frac{p^2+4p-5}{8},
-$$ which is an integer. Therefore,
-
-\begin{align}
-\left(\frac{-2}{p}\right) &= (-1)^{\frac{p^2+4p-5}{8}} = \begin{cases} 1 & p \equiv 1, 3 \bmod{8}\\ -1 & p \equiv 5, 7\bmod{8}\end{cases}
-\end{align}
-
-Legendre's attempt to prove reciprocity is based on a theorem of his:
-
-Legendre's Theorem. Let a, b and c be integers where any pair of the three are relatively prime. Moreover assume that at least one of ab, bc or ca is negative (i.e. they don't all have the same sign). If
-
-\begin{align}
-u^2 &\equiv -bc \bmod{a} \\
-v^2 &\equiv -ca \bmod{b} \\
-w^2 &\equiv -ab \bmod{c}
-\end{align}
-
-are solvable then the following equation has a nontrivial solution in integers:
-$$
-ax^2 + by^2 + cz^2=0.
-$$
-
-Example. Theorem I is handled by letting a ≡ 1 and b ≡ 3 (mod 4) be primes and assuming that $\left (\tfrac{b}{a} \right) = 1$ and, contrary to the theorem, that $\left (\tfrac{a}{b} \right ) = -1.$ Then $x^2+ay^2-bz^2=0$ has a solution, and taking congruences (mod 4) leads to a contradiction.
-
-This technique doesn't work for Theorem VIII. Let b ≡ B ≡ 3 (mod 4), and assume
-$$
-\left (\frac{B}{b} \right ) = \left (\frac{b}{B} \right ) = -1.
-$$
-
-Then if there is another prime p ≡ 1 (mod 4) such that
-$$
-\left (\frac{p}{b} \right ) = \left (\frac{p}{B} \right ) = -1,
-$$
-
-the solvability of $Bx^2+by^2-pz^2=0$ leads to a contradiction (mod 4). But Legendre was unable to prove there has to be such a prime p; he was later able to show that all that is required is:
-
-Legendre's Lemma. If p is a prime that is congruent to 1 modulo 4 then there exists an odd prime q such that $\left (\tfrac{p}{q} \right ) = -1,$
-
-but he couldn't prove that either. The section on the Hilbert symbol (below) discusses how techniques based on the existence of solutions to $ax^2+by^2+cz^2=0$ can be made to work.
-
-Gauss first proves the supplementary laws. He sets the basis for induction by proving the theorem for ±3 and ±5. Noting that it is easier to state for −3 and +5 than it is for +3 or −5, he states the general theorem in the form:
-
-If p is a prime of the form 4n + 1 then p, but if p is of the form 4n + 3 then −p, is a quadratic residue (resp. nonresidue) of every prime, which, with a positive sign, is a residue (resp. nonresidue) of p. In the next sentence, he christens it the "fundamental theorem" (Gauss never used the word "reciprocity").
-
-Introducing the notation a R b (resp. a N b) to mean a is a quadratic residue (resp. nonresidue) (mod b), and letting a, a′, etc. represent positive primes ≡ 1 (mod 4) and b, b′, etc. positive primes ≡ 3 (mod 4), he breaks it out into the same 8 cases as Legendre.
-
-In the next Article he generalizes this to what are basically the rules for the Jacobi symbol (below), letting A, A′, etc. represent any (prime or composite) positive numbers ≡ 1 (mod 4) and B, B′, etc. positive numbers ≡ 3 (mod 4).
-
-All of these cases take the form "if a prime is a residue (mod a composite), then the composite is a residue or nonresidue (mod the prime), depending on the congruences (mod 4)". He proves that these follow from cases 1) - 8).
-
-Gauss needed, and was able to prove, a lemma similar to the one Legendre needed:
-
-Gauss's Lemma. If p is a prime congruent to 1 modulo 8 then there exists an odd prime q such that:
-$$
-q <2\sqrt p+1 \quad \text{and} \quad \left(\frac{p}{q}\right) = -1.
-$$
-
-The proof of quadratic reciprocity uses complete induction.
-
-Gauss's Version in Legendre Symbols.
-$$
-\left(\frac{p}{q}\right) = \begin{cases} \left(\frac{q}{p}\right) & q \equiv 1 \bmod{4} \\ \left(\frac{-q}{p}\right) & q \equiv 3 \bmod{4} \end{cases}
-$$
-
-These can be combined:
-
-Gauss's Combined Version in Legendre Symbols. Let
-$$
-q^* = (-1)^{\frac{q-1}{2}}q.
-$$
-
-In other words:
-$$
-|q^*|=|q| \quad \text{and} \quad q^*\equiv 1 \bmod{4}.
-$$
-
-Then:
-$$
-\left(\frac{p}{q}\right) = \left(\frac{q^*}{p}\right).
-$$
-
-A number of proofs of the theorem, especially those based on Gauss sums or the splitting of primes in algebraic number fields, derive this formula.
-
-The statements in this section are equivalent to quadratic reciprocity: if, for example, Euler's version is assumed, the Legendre-Gauss version can be deduced from it, and vice versa.
-
-Euler's Formulation of Quadratic Reciprocity. If $p \equiv \pm q \bmod{4a}$ then $\left(\tfrac{a}{p}\right)=\left(\tfrac{a}{q}\right).$
-
-This can be proven using Gauss's lemma.
-
-Quadratic Reciprocity (Gauss; Fourth Proof). Let a, b, c, ... be unequal positive odd primes, whose product is n, and let m be the number of them that are ≡ 3 (mod 4); check whether n/a is a residue of a, whether n/b is a residue of b, and so on.
The number of nonresidues found will be even when m ≡ 0, 1 (mod 4), and it will be odd if m ≡ 2, 3 (mod 4).
-
-Gauss's fourth proof consists of proving this theorem (by comparing two formulas for the value of Gauss sums) and then restricting it to two primes. He then gives an example: Let a = 3, b = 5, c = 7, and d = 11. Three of these, 3, 7, and 11 ≡ 3 (mod 4), so m ≡ 3 (mod 4). 5×7×11 R 3; 3×7×11 R 5; 3×5×11 R 7; and 3×5×7 N 11, so there are an odd number of nonresidues.
-
-Eisenstein's Formulation of Quadratic Reciprocity. Assume
-$$
-p\ne q, \quad p'\ne q', \quad p \equiv p' \bmod{4}, \quad q \equiv q' \bmod{4}.
-$$
-
-Then
-$$
- \left(\frac{p}{q}\right) \left(\frac{q}{p}\right) =\left(\frac{p'}{q'}\right) \left(\frac{q'}{p'}\right).
-$$
-
-Mordell's Formulation of Quadratic Reciprocity. Let a, b and c be integers. For every prime p dividing abc, if the congruence
-$$
-ax^2 + by^2 + cz^2 \equiv 0 \bmod{\tfrac{4abc}{p}}
-$$
-
-has a nontrivial solution, then so does:
-$$
-ax^2 + by^2 + cz^2 \equiv 0 \bmod{4abc}.
-$$
-
-Zeta function formulation
-
-As mentioned in the article on Dedekind zeta functions, quadratic reciprocity is equivalent to the zeta function of a quadratic field being the product of the Riemann zeta function and a certain Dirichlet L-function.
-
-The Jacobi symbol is a generalization of the Legendre symbol; the main difference is that the bottom number has to be positive and odd, but does not have to be prime. If it is prime, the two symbols agree. It obeys the same rules of manipulation as the Legendre symbol. In particular
-
-\begin{align}
-
-\left(\frac{-1}{n}\right) = (-1)^{\frac{n-1}{2}} &= \begin{cases} 1 & n \equiv 1 \bmod{4}\\ -1 & n \equiv 3 \bmod{4}\end{cases} \\
-
-\left( \frac{2}{n}\right) = (-1)^{\frac{n^2-1}{8}} &= \begin{cases} 1 & n \equiv 1, 7 \bmod{8}\\ -1 & n \equiv 3, 5\bmod{8}\end{cases} \\
-
-\left( \frac{-2}{n}\right) = (-1)^{\frac{n^2+4n-5}{8}} &= \begin{cases} 1 & n \equiv 1, 3 \bmod{8}\\ -1 & n \equiv 5, 7\bmod{8}\end{cases}
-
-\end{align}
-
-and if both numbers are positive and odd (this is sometimes called "Jacobi's reciprocity law"):
-$$
- \left(\frac{m}{n}\right) = (-1)^{\frac{(m-1)(n-1)}{4}}\left(\frac{n}{m}\right).
-$$
-
-However, if the Jacobi symbol is 1 but the denominator is not a prime, it does not necessarily follow that the numerator is a quadratic residue of the denominator. Gauss's cases 9) - 14) above can be expressed in terms of Jacobi symbols:
-$$
-\left (\frac{M}{p}\right) = (-1)^{\frac{(p-1)(M-1)}{4}} \left(\frac{p}{M}\right ),
-$$
-
-and since p is prime the left hand side is a Legendre symbol, and we know whether M is a residue modulo p or not.
-
-The formulas listed in the preceding section are true for Jacobi symbols as long as the symbols are defined. Euler's formula may be written
-$$
-\left(\frac{a}{m}\right) =\left(\frac{a}{m \pm 4an}\right), \qquad n \in \Z, m\pm4an>0.
-$$
-
-Example.
-$$
-\left (\frac{2}{7} \right ) = \left (\frac{2}{15} \right ) = \left (\frac{2}{23} \right ) = \left (\frac{2}{31} \right ) = \cdots = 1.
-$$
-
-2 is a residue modulo the primes 7, 23 and 31:
-$$
-3^2 \equiv 2 \bmod{7}, \quad 5^2 \equiv 2 \bmod{23}, \quad 8^2 \equiv 2 \bmod{31}.
-$$
-
-But 2 is not a quadratic residue modulo 5, so it can't be one modulo 15. This is related to the problem Legendre had: if $\left (\tfrac{a}{m} \right) = -1,$ then a is a non-residue modulo every prime in the arithmetic progression m + 4a, m + 8a, ..., if there are any primes in this series, but that wasn't proved until decades after Legendre.
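These manipulation rules can be made operational. The following minimal Python sketch (the function name `jacobi` and the test line are ours, not from any source) computes the Jacobi symbol using only the second supplement and Jacobi's reciprocity law quoted above, and reproduces the example $(2/7) = (2/15) = (2/23) = (2/31) = 1$:

```python
def jacobi(m: int, n: int) -> int:
    """Jacobi symbol (m/n) for positive odd n, computed via reciprocity."""
    assert n > 0 and n % 2 == 1
    m %= n
    result = 1
    while m != 0:
        while m % 2 == 0:          # pull out factors of 2 with (2/n) = (-1)^((n^2-1)/8)
            m //= 2
            if n % 8 in (3, 5):
                result = -result
        m, n = n, m                # Jacobi's reciprocity law for odd arguments
        if m % 4 == 3 and n % 4 == 3:
            result = -result
        m %= n
    return result if n == 1 else 0  # a common factor forces the symbol to be 0

print([jacobi(2, n) for n in (7, 15, 23, 31)])  # [1, 1, 1, 1]
```

Note that the computation never factors the denominator, which is what makes the Jacobi symbol practical for evaluating Legendre symbols quickly.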
-
-Eisenstein's formula requires relative primality conditions (which hold automatically if the numbers are prime):
-
-Let $a, b, a', b'$ be positive odd integers such that:
-
-\begin{align}
-
-&\gcd(a,b) =\gcd(a',b')= 1 \\
-
-&a \equiv a' \bmod{4} \\
-
-&b \equiv b' \bmod{4}
-
-\end{align}
-
-Then
-$$
- \left(\frac{a}{b}\right) \left(\frac{b}{a}\right) =\left(\frac{a'}{b'}\right) \left(\frac{b'}{a'}\right).
-$$
-
-The quadratic reciprocity law can be formulated in terms of the Hilbert symbol $(a,b)_v$ where a and b are any two nonzero rational numbers and v runs over all the non-trivial absolute values of the rationals (the Archimedean one and the p-adic absolute values for primes p). The Hilbert symbol $(a,b)_v$ is 1 or −1. It is defined to be 1 if and only if the equation $ax^2+by^2=z^2$ has a solution in the completion of the rationals at v other than $x=y=z=0$. The Hilbert reciprocity law states that $(a,b)_v$, for fixed a and b and varying v, is 1 for all but finitely many v and the product of $(a,b)_v$ over all v is 1. (This formally resembles the residue theorem from complex analysis.)
-
-The proof of Hilbert reciprocity reduces to checking a few special cases, and the non-trivial cases turn out to be equivalent to the main law and the two supplementary laws of quadratic reciprocity for the Legendre symbol. There is no kind of reciprocity in the Hilbert reciprocity law; its name simply indicates the historical source of the result in quadratic reciprocity. Unlike quadratic reciprocity, which requires sign conditions (namely positivity of the primes involved) and a special treatment of the prime 2, the Hilbert reciprocity law treats all absolute values of the rationals on an equal footing. Therefore, it is a more natural way of expressing quadratic reciprocity with a view towards generalization: the Hilbert reciprocity law extends with very few changes to all global fields and this extension can rightly be considered a generalization of quadratic reciprocity to all global fields.
-
-The early proofs of quadratic reciprocity are relatively unilluminating. The situation changed when Gauss used Gauss sums to show that quadratic fields are subfields of cyclotomic fields, and implicitly deduced quadratic reciprocity from a reciprocity theorem for cyclotomic fields. His proof was cast in modern form by later algebraic number theorists. This proof served as a template for class field theory, which can be viewed as a vast generalization of quadratic reciprocity.
-
-Robert Langlands formulated the Langlands program, which gives a conjectural vast generalization of class field theory. He wrote:
-
-I confess that, as a student unaware of the history of the subject and unaware of the connection with cyclotomy, I did not find the law or its so-called elementary proofs appealing. I suppose, although I would not have (and could not have) expressed myself in this way that I saw it as little more than a mathematical curiosity, fit more for amateurs than for the attention of the serious mathematician that I then hoped to become. It was only in Hermann Weyl's book on the algebraic theory of numbers that I appreciated it as anything more.
-
-There are also quadratic reciprocity laws in rings other than the integers.
-
-In his second monograph on quartic reciprocity Gauss stated quadratic reciprocity for the ring $\Z[i]$ of Gaussian integers, saying that it is a corollary of the biquadratic law in $\Z[i],$ but did not provide a proof of either theorem.
Dirichlet showed that the law in $\Z[i]$ can be deduced from the law for $\Z$ without using quartic reciprocity.
-
-For an odd Gaussian prime $\pi$ and a Gaussian integer $\alpha$ relatively prime to $\pi,$ define the quadratic character for $\Z[i]$ by:
-
-\left[\frac{\alpha}{\pi}\right]_2 \equiv \alpha^\frac{\mathrm{N} \pi - 1}{2}\bmod{\pi} =
-
-\begin{cases}
-
-1 & \exists \eta \in \Z[i]: \alpha \equiv \eta^2 \bmod{\pi} \\
-
--1 & \text{otherwise}
-
-\end{cases}
-
-Let $\lambda = a + b i, \mu = c + d i$ be distinct Gaussian primes where a and c are odd and b and d are even. Then
-$$
- \left [\frac{\lambda}{\mu}\right ]_2 = \left [\frac{\mu}{\lambda}\right ]_2, \qquad \left [\frac{i}{\lambda}\right ]_2 =(-1)^\frac{b}{2}, \qquad \left [\frac{1+i}{\lambda}\right ]_2 =\left(\frac{2}{a+b}\right).
-$$
-
-Consider the following cube root of unity:
-$$
-\omega = \frac{-1 + \sqrt{-3}}{2}=e^\frac{2\pi \imath}{3}.
-$$
-
-The ring of Eisenstein integers is $\Z[\omega].$ For an Eisenstein prime $\pi, \mathrm{N} \pi \neq 3,$ and an Eisenstein integer $\alpha$ with $\gcd(\alpha, \pi) = 1,$ define the quadratic character for $\Z[\omega]$ by the formula
-
-\left[\frac{\alpha}{\pi}\right]_2 \equiv \alpha^\frac{\mathrm{N} \pi - 1}{2}\bmod{\pi} = \begin{cases}
-
-1 &\exists \eta \in \Z[\omega]: \alpha \equiv \eta^2 \bmod{\pi} \\
-
--1 &\text{otherwise}
-
-\end{cases}
-
-Let λ = a + bω and μ = c + dω be distinct Eisenstein primes where a and c are not divisible by 3 and b and d are divisible by 3. Eisenstein proved
-$$
- \left[\frac{\lambda}{\mu}\right]_2 \left [\frac{\mu}{\lambda}\right ]_2 = (-1)^{\frac{\mathrm{N} \lambda - 1}{2}\frac{\mathrm{N} \mu-1}{2}}, \qquad \left [\frac{1-\omega}{\lambda}\right ]_2 =\left(\frac{a}{3}\right), \qquad \left [\frac{2}{\lambda}\right ]_2 =\left (\frac{2}{\mathrm{N} \lambda }\right).
-$$
-
-The above laws are special cases of more general laws that hold for the ring of integers in any imaginary quadratic number field. Let k be an imaginary quadratic number field with ring of integers $\mathcal{O}_k.$ For a prime ideal $\mathfrak{p} \subset \mathcal{O}_k $ with odd norm $\mathrm{N} \mathfrak{p}$ and $\alpha\in \mathcal{O}_k,$ define the quadratic character for $\mathcal{O}_k $ as
-
-\left[\frac{\alpha}{\mathfrak{p} }\right]_2 \equiv \alpha^{\frac{\mathrm{N} \mathfrak{p} - 1}{2}} \bmod{\mathfrak{p}} =
-
-\begin{cases}
-
-1 &\alpha\not\in \mathfrak{p} \text{ and } \exists \eta \in \mathcal{O}_k \text{ such that } \alpha - \eta^2 \in \mathfrak{p} \\
-
--1 & \alpha\not\in \mathfrak{p} \text{ and there is no such } \eta \\
-
-0 & \alpha\in \mathfrak{p}
-
-\end{cases}
-
-For an arbitrary ideal $\mathfrak{a} \subset \mathcal{O}_k$ factored into prime ideals $\mathfrak{a} = \mathfrak{p}_1 \cdots \mathfrak{p}_n$ define
-$$
-\left [\frac{\alpha}{\mathfrak{a}}\right ]_2 = \left[\frac{\alpha}{\mathfrak{p}_1 }\right]_2\cdots \left[\frac{\alpha}{\mathfrak{p}_n }\right]_2,
-$$
-
-and for $\beta \in \mathcal{O}_k$ define
-$$
-\left [\frac{\alpha}{\beta}\right ]_2 = \left [\frac{\alpha}{\beta \mathcal{O}_k}\right ]_2.
-$$
-
-Let $\mathcal{O}_k = \Z \omega_1\oplus \Z \omega_2,$ i.e. $\left\{\omega_1, \omega_2\right\}$ is an integral basis for $\mathcal{O}_k.$ For $\nu \in \mathcal{O}_k$ with odd norm $\mathrm{N}\nu,$ define (ordinary) integers a, b, c, d by the equations,
-
-\begin{align}
-
-\nu\omega_1&=a\omega_1+b\omega_2\\
-
-\nu\omega_2&=c\omega_1+d\omega_2
-
-\end{align}
-
-and a function
-$$
-\chi(\nu) := \imath^{(b^2-a+2)c+(a^2-b+2)d+ad}.
-$$
-
-If m = Nμ and n = Nν are both odd, Herglotz proved
-$$
- \left [\frac{\mu}{\nu}\right ]_2 \left[\frac{\nu}{\mu}\right]_2 = (-1)^{\frac{m-1}{2}\frac{n-1}{2}} \chi(\mu)^{m\frac{n-1}{2}} \chi(\nu)^{-n\frac{m-1}{2}}.
-$$
-
-Also, if
-$$
- \mu \equiv\mu' \bmod{4} \quad \text{and} \quad \nu \equiv\nu' \bmod{4},
-$$
-
-then
-$$
- \left [\frac{\mu}{\nu}\right ]_2 \left[\frac{\nu}{\mu}\right]_2 = \left [\frac{\mu'}{\nu'}\right ]_2 \left[\frac{\nu'}{\mu'}\right]_2.
-$$
-
-Let F be a finite field with $q = p^n$ elements, where p is an odd prime number and n is positive, and let F[x] be the ring of polynomials in one variable with coefficients in F. If $f,g \in F[x]$ and f is irreducible, monic, and has positive degree, define the quadratic character for F[x] in the usual manner:
-
-\left(\frac{g}{f}\right) = \begin{cases}
-
-1 & \gcd(f,g)=1 \text{ and } \exists h,k \in F[x] \text{ such that }g-h^2 = kf \\
-
--1 & \gcd(f,g)=1 \text{ and } g \text{ is not a square}\bmod{f}\\
-
-0 & \gcd(f,g)\ne 1
-
-\end{cases}
-
-If $f=f_1 \cdots f_n$ is a product of monic irreducibles let
-$$
-\left(\frac{g}{f}\right) = \left(\frac{g}{f_1}\right) \cdots \left(\frac{g}{f_n}\right).
-$$
-
-Dedekind proved that if $f,g \in F[x]$ are monic and have positive degrees,
-$$
-\left(\frac{g}{f}\right) \left(\frac{f}{g}\right) = (-1)^{\frac{q-1}{2}(\deg f)(\deg g)}.
-$$
-
-The attempt to generalize quadratic reciprocity for powers higher than the second was one of the main goals that led 19th century mathematicians, including Carl Friedrich Gauss, Peter Gustav Lejeune Dirichlet, Carl Gustav Jakob Jacobi, Gotthold Eisenstein, Richard Dedekind, Ernst Kummer, and David Hilbert to the study of general algebraic number fields and their rings of integers; specifically Kummer invented ideals in order to state and prove higher reciprocity laws.
-
-The ninth in the list of 23 unsolved problems which David Hilbert proposed to the Congress of Mathematicians in 1900 asked for the "Proof of the most general reciprocity law [f]or an arbitrary number field". Building upon work by Philipp Furtwängler, Teiji Takagi, Helmut Hasse and others, Emil Artin discovered Artin reciprocity in 1923, a general theorem for which all known reciprocity laws are special cases, and proved it in 1927.
diff --git a/wiki/wikipedia/2298.txt b/wiki/wikipedia/2298.txt
deleted file mode 100644
index 085f72f776d5ed7262f5a7ee7c78c22d35903436..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2298.txt
+++ /dev/null
@@ -1,40 +0,0 @@
-The following inequality is known as Sedrakyan's inequality, Engel's form or Titu's lemma, respectively, referring to the article “About the applications of one useful inequality” of Nairi Sedrakyan published in 1997, to the book Problem-solving strategies of Arthur Engel (mathematician) published in 1998 and to the book Mathematical Olympiad Treasures of Titu Andreescu published in 2003.
-
-It is a direct consequence of the Cauchy–Bunyakovsky–Schwarz inequality. Nevertheless, in his article (1997) Sedrakyan noticed that, written in this form, this inequality can be used as a mathematical proof technique, and it has very useful new applications. Several generalizations of this inequality are provided in the book Algebraic Inequalities (Sedrakyan).
-
-For any reals $a_1, a_2, a_3, \ldots, a_n$ and positive reals $b_1, b_2, b_3,\ldots, b_n,$ we have $\frac{a^2_1}{b_1} + \frac{a^2_2}{b_2} + \cdots + \frac{a^2_n}{b_n} \geq \frac{\left(a_1 + a_2 + \cdots + a_n\right)^2}{b_1 + b_2 + \cdots + b_n}.$
-
-Example 1. Nesbitt's inequality.
-
-For positive real numbers $a, b, c:$
-$$
-\frac{a}{b+c} + \frac{b}{a+c} + \frac{c}{a+b} \geq \frac{3}{2}.
-$$
-
-Example 2. International Mathematical Olympiad (IMO) 1995.
-
-For positive real numbers $ a,b,c $, where $ abc=1 $ we have that $ \frac{1}{a^3(b+c)}+\frac{1}{b^3(a+c)}+\frac{1}{c^3(a+b)} \geq \frac{3}{2}. $
-
-Example 3.
-
-For positive real numbers $ a,b $ we have that $ 8(a^4+b^4) \geq (a+b)^4. $
-
-Example 4.
-
-For positive real numbers $ a,b,c $ we have that $ \frac{1}{a+b}+\frac{1}{b+c}+\frac{1}{a+c} \geq \frac{9}{2(a+b+c)}. $
-
-Example 1.
-
-Proof: Use $n = 3,$ $\left(a_1, a_2, a_3\right) := (a, b, c),$ and $\left(b_1, b_2, b_3\right) := (a(b + c), b(c + a), c(a + b))$ to conclude: \frac{a^2}{a(b + c)} + \frac{b^2}{b(c + a)} + \frac{c^2}{c(a + b)} \geq \frac{(a + b + c)^2}{a(b + c) + b(c + a) + c(a + b)} = \frac{a^2 + b^2 + c^2 + 2(ab + bc + ca)}{2(ab + bc + ca)} = \frac{a^2 + b^2 + c^2}{2(ab + bc + ca)} + 1 \geq \frac{1}{2} (1) + 1 = \frac{3}{2}. \blacksquare
-
-Example 2.
-
-We have that $\frac{\Big(\frac{1}{a}\Big)^2}{a(b+c)} + \frac{\Big(\frac{1}{b}\Big)^2}{b(a+c)} + \frac{\Big(\frac{1}{c}\Big)^2}{c(a+b)} \geq \frac{\Big(\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\Big)^2}{2(ab+bc+ac)} = \frac{ab+bc+ac}{2} \geq \frac{3 \sqrt[3]{a^2 b^2 c^2}}{2} = \frac{3}{2}.$
-
-Example 3.
-
-We have $\frac{a^2}{1} + \frac{b^2}{1} \geq \frac{(a + b)^2}{2}$ so that $a^4 + b^4 = \frac{\left(a^2\right)^2}{1} + \frac{\left(b^2\right)^2}{1} \geq \frac{\left(a^2 + b^2\right)^2}{2} \geq \frac{\left(\frac{(a+b)^2}{2}\right)^2}{2} = \frac{(a + b)^4}{8}.$
-
-Example 4.
-
-We have that $\frac{1}{a+b} + \frac{1}{b+c} + \frac{1}{a+c} \geq \frac{(1+1+1)^2}{2(a+b+c)} = \frac{9}{2(a+b+c)}.$
diff --git a/wiki/wikipedia/2299.txt b/wiki/wikipedia/2299.txt
deleted file mode 100644
index 63b808cfded4d65c70820e37e8178e2dbd5adf6b..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2299.txt
+++ /dev/null
@@ -1,34 +0,0 @@
-In statistics, Wold's decomposition or the Wold representation theorem (not to be confused with the Wold theorem that is the discrete-time analog of the Wiener–Khinchin theorem), named after Herman Wold, says that every covariance-stationary time series $Y_{t}$ can be written as the sum of two time series, one deterministic and one stochastic.
-
-Formally
-$$
-Y_t=\sum_{j=0}^\infty b_j \varepsilon_{t-j}+\eta_t,
-$$
-
-where:
-
-*$Y_t $ is the time series being considered,
-
-*$\varepsilon_t$ is an uncorrelated sequence which is the innovation process to the process $Y_t $ – that is, a white noise process that is input to the linear filter $\{b_j \} $.
-
-*$b $ is the possibly infinite vector of moving average weights (coefficients or parameters)
-
-*$\eta_t $ is a deterministic time series, such as one represented by a sine wave.
-
-The moving average coefficients have these properties:
-
-# Stable, that is, square summable: $\sum_{j=1}^{\infty}|b_{j}|^2 < \infty$
-
-# Causal (i.e. there are no terms with j < 0)
-
-# Minimum delay
-
-# Constant ($ b_j $ independent of t)
-
-# It is conventional to define $b_0 = 1$
-
-This theorem can be considered as an existence theorem: any stationary process has this seemingly special representation. Not only is the existence of such a simple linear and exact representation remarkable, but even more so is the special nature of the moving average model. Imagine creating a process that is a moving average but not satisfying these properties 1–4. For example, the coefficients $b_j$ could define an acausal and non-minimum-delay model.
Nevertheless, the theorem assures the existence of a causal minimum-delay moving average that exactly represents this process. How this all works for the case of causality and the minimum delay property is discussed in Scargle (1981), where an extension of the Wold Decomposition is discussed.
-
-The usefulness of the Wold Theorem is that it allows the dynamic evolution of a variable $Y_{t}$ to be approximated by a linear model. If the innovations $\varepsilon_{t}$ are independent, then the linear model is the only possible representation relating the observed value of $Y_{t}$ to its past evolution. However, when $\varepsilon_{t}$ is merely an uncorrelated but not independent sequence, then the linear model exists but it is not the only representation of the dynamic dependence of the series. In this latter case, it is possible that the linear model may not be very useful, and there would be a nonlinear model relating the observed value of $Y_{t}$ to its past evolution. However, in practical time series analysis, it is often the case that only linear predictors are considered, partly on the grounds of simplicity, in which case the Wold decomposition is directly relevant.
-
-The Wold representation depends on an infinite number of parameters, although in practice they usually decay rapidly. The autoregressive model is an alternative that may have only a few coefficients if the corresponding moving average has many. These two models can be combined into an autoregressive-moving average (ARMA) model, or an autoregressive-integrated-moving average (ARIMA) model if non-stationarity is involved. See Scargle and references there; in addition this paper gives an extension of the Wold Theorem that allows more generality for the moving average (not necessarily stable, causal, or minimum delay) accompanied by a sharper characterization of the innovation (identically and independently distributed, not just uncorrelated). This extension allows the possibility of models that are more faithful to physical or astrophysical processes, and in particular can sense "the arrow of time."
diff --git a/wiki/wikipedia/23.txt b/wiki/wikipedia/23.txt
deleted file mode 100644
index b2e38dd3eea3e75ce5e426bd5c8fa5748cbaec55..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/23.txt
+++ /dev/null
@@ -1,13 +0,0 @@
-A black hole firewall is a hypothetical phenomenon where an observer falling into a black hole encounters high-energy quanta at (or near) the event horizon. The "firewall" phenomenon was proposed in 2012 by physicists Ahmed Almheiri, Donald Marolf, Joseph Polchinski, and James Sully as a possible solution to an apparent inconsistency in black hole complementarity. The proposal is sometimes referred to as the AMPS firewall, an acronym for the names of the authors of the 2012 paper. The use of a firewall to resolve this inconsistency remains controversial, with physicists divided as to the solution to the paradox.
-
-According to quantum field theory in curved spacetime, a single emission of Hawking radiation involves two mutually entangled particles. The outgoing particle escapes and is emitted as a quantum of Hawking radiation; the infalling particle is swallowed by the black hole. Assume a black hole formed a finite time in the past and will fully evaporate away in some finite time in the future. Then, it will only emit a finite amount of information encoded within its Hawking radiation. Assume that at time $t$, more than half of the information had already been emitted.
-
-According to widely accepted research by physicists like Don Page and Leonard Susskind, an outgoing particle emitted at time $t$ must be entangled with all the Hawking radiation the black hole has previously emitted. This creates a paradox: a principle called "monogamy of entanglement" requires that, like any quantum system, the outgoing particle cannot be fully entangled with two independent systems at the same time; yet here the outgoing particle appears to be entangled with both the infalling particle and, independently, with past Hawking radiation.
-
-Some scientists suggest that the entanglement must somehow get immediately broken between the infalling particle and the outgoing particle. Breaking this entanglement would release large amounts of energy, thus creating a searing "black hole firewall" at the black hole event horizon. This resolution requires a violation of Einstein's equivalence principle, which states that free-falling is indistinguishable from floating in empty space. This violation has been characterized as "outrageous"; theoretical physicist Raphael Bousso has complained that "a firewall simply can't appear in empty space, any more than a brick wall can suddenly appear in an empty field and smack you in the face."
-
-The fuzzball picture resolves the dilemma by replacing the 'no-hair' vacuum with a stringy quantum state, thus explicitly coupling any outgoing Hawking radiation with the formation history of the black hole.
-
-Stephen Hawking received widespread mainstream media coverage in January 2014 with an informal proposal to replace the event horizon of a black hole with an "apparent horizon" where infalling matter is suspended and then released; however, some scientists have expressed confusion about what precisely is being proposed and how the proposal would solve the paradox.
-
-The firewall would exist at the black hole's event horizon, and would be invisible to observers outside the event horizon. Matter passing through the event horizon into the black hole would immediately be "burned to a crisp" by an arbitrarily hot "seething maelstrom of particles" at the firewall. Some analyses of gravitational-wave data have claimed to detect "echoes" that could be evidence of such structure at the horizon, but more recent work has argued there is no statistically significant evidence for such echoes in the data.
diff --git a/wiki/wikipedia/230.txt b/wiki/wikipedia/230.txt
deleted file mode 100644
index 1ff9c3656f60622d13ab766fd998f99a0e4a6718..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/230.txt
+++ /dev/null
@@ -1,17 +0,0 @@
-In mathematics, Kolmogorov's normability criterion is a theorem that provides a necessary and sufficient condition for a topological vector space to be normable; that is, for the existence of a norm on the space that generates the given topology. The normability criterion can be seen as a result in the same vein as the Nagata–Smirnov metrization theorem and the Bing metrization theorem, which give necessary and sufficient conditions for a topological space to be metrizable. The result was proved by the Russian mathematician Andrey Nikolayevich Kolmogorov in 1934.
-
-A topological vector space is normable if and only if it is a T1 space and admits a bounded convex neighbourhood of the origin.
-
-Because translation (that is, vector addition) by a constant preserves the convexity, boundedness, and openness of sets, the words "of the origin" can be replaced with "of some point" or even with "of every point".
-
-It may be helpful to first recall the following terms:
-
-* A topological vector space (TVS) is a vector space $X$ equipped with a topology $\tau$ such that the vector space operations of scalar multiplication and vector addition are continuous.
-
-* A topological vector space $(X, \tau)$ is called normable if there is a norm $\|\cdot\|: X \to \R$ on $X$ such that the open balls of the norm $\|\cdot\|$ generate the given topology $\tau.$ (Note well that a given normable topological vector space might admit multiple such norms.)
-
-* A topological space $X$ is called a T1 space if, for every two distinct points $x, y \in X,$ there is an open neighbourhood $U_x$ of $x$ that does not contain $y.$ In a topological vector space, this is equivalent to requiring that, for every $x \neq 0,$ there is an open neighbourhood of the origin not containing $x.$ Note that being T1 is weaker than being a Hausdorff space, in which every two distinct points $x, y \in X$ admit open neighbourhoods $U_x$ of $x$ and $U_y$ of $y$ with $U_x \cap U_y = \varnothing$; since normed and normable spaces are always Hausdorff, it is a "surprise" that the theorem only requires T1.
-
-* A subset $A$ of a vector space $X$ is a convex set if, for any two points $x, y \in A,$ the line segment joining them lies wholly within $A,$ that is, for all $0 \leq t \leq 1,$ $(1 - t) x + t y \in A.$
-
-* A subset $A$ of a topological vector space $(X, \tau)$ is a bounded set if, for every open neighbourhood $U$ of the origin, there exists a scalar $\lambda$ so that $A \subseteq \lambda U.$ (One can think of $U$ as being "small" and $\lambda$ as being "big enough" to inflate $U$ to cover $A.$)
diff --git a/wiki/wikipedia/2300.txt b/wiki/wikipedia/2300.txt
deleted file mode 100644
index 17aefbee70af372379a3dff6438ffbeb6fefb348..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2300.txt
+++ /dev/null
@@ -1,12 +0,0 @@
-Adam Adamandy Kochański (5 August 1631 – 17 May 1700) was a Polish mathematician, physicist, clock-maker, pedagogue and librarian. He was the Court Mathematician of John III Sobieski.
-
-Kochański was born in Dobrzyń nad Wisłą. He began his education in Toruń, and in 1652 he entered the Society of Jesus in Vilnius. He studied philosophy at Vilnius University (then called Vilnius Academy). He also studied mathematics, physics and theology. He went on to lecture on those subjects at several European universities: in Florence, Prague, Olomouc, Wrocław, Mainz and Würzburg. In 1680 he accepted an offer from John III Sobieski, the king of Poland, returning to Poland and taking the position of the king's chaplain, mathematician, clock maker, librarian, and tutor of the king's son, Jakub.
-
-He wrote many scientific papers, mainly on mathematics and mechanics, but also on physics, astronomy and philosophy. The best known of his works, Observationes Cyclometricae ad facilitandam Praxin accommodatae, is devoted to squaring the circle (the quadrature of the circle) and was published in 1685 in the leading scientific periodical of the time, Acta Eruditorum. He also found a famous approximation of π today called Kochański's approximation:
-$$
-\sqrt{{40 \over 3} - 2 \sqrt{3}\ } = 3.14153333870509461863 \dots
-$$
-
-Kochański cooperated and corresponded with many scientists, Johannes Hevelius and Gottfried Leibniz among them. He was apparently the only Pole of his time to know elements of the newly invented calculus. As a mechanic he was a renowned clock maker.
He suggested replacing the clock's pendulum with a spring, and standardizing the number of escapements per hour. - -He died in Teplice in Bohemia. diff --git a/wiki/wikipedia/2301.txt b/wiki/wikipedia/2301.txt deleted file mode 100644 index 34fef5896c6d40076870255ea8e9f1dc63fa7430..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2301.txt +++ /dev/null @@ -1,91 +0,0 @@ -The assignment problem is a fundamental combinatorial optimization problem. In its most general form, the problem is as follows: - -The problem instance has a number of agents and a number of tasks. Any agent can be assigned to perform any task, incurring some cost that may vary depending on the agent-task assignment. It is required to perform as many tasks as possible by assigning at most one agent to each task and at most one task to each agent, in such a way that the total cost of the assignment is minimized. - -Alternatively, describing the problem using graph theory: - -The assignment problem consists of finding, in a weighted bipartite graph, a matching of a given size, in which the sum of weights of the edges is minimum. - -If the numbers of agents and tasks are equal, then the problem is called balanced assignment. Otherwise, it is called unbalanced assignment. If the total cost of the assignment for all tasks is equal to the sum of the costs for each agent (or the sum of the costs for each task, which is the same thing in this case), then the problem is called linear assignment. Commonly, when speaking of the assignment problem without any additional qualification, then the linear balanced assignment problem is meant. - -Suppose that a taxi firm has three taxis (the agents) available, and three customers (the tasks) wishing to be picked up as soon as possible. The firm prides itself on speedy pickups, so for each taxi the "cost" of picking up a particular customer will depend on the time taken for the taxi to reach the pickup point. This is a balanced assignment problem. Its solution is whichever combination of taxis and customers results in the least total cost. - -Now, suppose that there are four taxis available, but still only three customers. This is an unbalanced assignment problem. One way to solve it is to invent a fourth dummy task, perhaps called "sitting still doing nothing", with a cost of 0 for the taxi assigned to it. This reduces the problem to a balanced assignment problem, which can then be solved in the usual way and still give the best solution to the problem. - -Similar adjustments can be done in order to allow more tasks than agents, tasks to which multiple agents must be assigned (for instance, a group of more customers than will fit in one taxi), or maximizing profit rather than minimizing cost. - -The formal definition of the assignment problem (or linear assignment problem) is - -Given two sets, A and T, of equal size, together with a weight function C : A × T → R. Find a bijection f : A → T such that the cost function: -$$ -\sum_{a\in A}C(a,f(a)) -$$ - -is minimized. - -Usually the weight function is viewed as a square real-valued matrix C, so that the cost function is written down as: -$$ -\sum_{a\in A}C_{a,f(a)} -$$ - -The problem is "linear" because the cost function to be optimized as well as all the constraints contain only linear terms. - -A naive solution for the assignment problem is to check all the assignments and calculate the cost of each one. This may be very inefficient since, with n agents and n tasks, there are n! (factorial of n) different assignments. 
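To make the cost of the naive method concrete, here is a brute-force sketch in Python (the 3×3 cost matrix is an invented taxi example, not from any source) that enumerates all n! bijections exactly as described above:

```python
from itertools import permutations

def brute_force_assignment(cost):
    """Try all n! agent-to-task bijections of a square cost matrix;
    return (best_total, assignment) where assignment[i] is agent i's task."""
    n = len(cost)
    best = None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if best is None or total < best[0]:
            best = (total, perm)
    return best

cost = [[4, 9, 8],   # pickup times: taxi 0 to customers 0, 1, 2
        [7, 5, 3],   # taxi 1
        [6, 2, 6]]   # taxi 2
print(brute_force_assignment(cost))  # (9, (0, 2, 1)): costs 4 + 3 + 2
```

Already at n = 15 this loop would have to inspect 15! ≈ 1.3 × 10^12 assignments.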
Fortunately, there are many algorithms for solving the problem in time polynomial in n.
-
-The assignment problem is a special case of the transportation problem, which is a special case of the minimum cost flow problem, which in turn is a special case of a linear program. While it is possible to solve any of these problems using the simplex algorithm, each specialization has a smaller solution space and thus more efficient algorithms designed to take advantage of its special structure.
-
-In the balanced assignment problem, both parts of the bipartite graph have the same number of vertices, denoted by n.
-
-One of the first polynomial-time algorithms for balanced assignment was the Hungarian algorithm. It is a global algorithm – it is based on improving a matching along augmenting paths (alternating paths between unmatched vertices). Its run-time complexity, when using Fibonacci heaps, is $O(mn + n^2\log n)$, where m is a number of edges. This is currently the fastest run-time of a strongly polynomial algorithm for this problem. If all weights are integers, then the run-time can be improved to $O(mn + n^2\log \log n)$, but the resulting algorithm is only weakly-polynomial. If the weights are integers, and all weights are at most C (where C>1 is some integer), then the problem can be solved in $O(m\sqrt{n} \log(n\cdot C))$ weakly-polynomial time in a method called weight scaling.
-
-In addition to the global methods, there are local methods which are based on finding local updates (rather than full augmenting paths). These methods have worse asymptotic runtime guarantees, but they often work better in practice. These algorithms are called auction algorithms, push-relabel algorithms, or preflow-push algorithms. Some of these algorithms were shown to be equivalent.
-
-Some of the local methods assume that the graph admits a perfect matching; if this is not the case, then some of these methods might run forever. A simple technical way to solve this problem is to extend the input graph to a complete bipartite graph, by adding artificial edges with very large weights. These weights should exceed the weights of all existing matchings, to prevent appearance of artificial edges in the possible solution.
-
-As shown by Mulmuley, Vazirani and Vazirani, the problem of minimum weight perfect matching is converted to finding minors in the adjacency matrix of a graph. Using the isolation lemma, a minimum weight perfect matching in a graph can be found with probability at least ½. For a graph with n vertices, it requires $ O(\log^2(n)) $ time.
-
-In the unbalanced assignment problem, the larger part of the bipartite graph has n vertices and the smaller part has r < n vertices. There is also a constant s, the maximum cardinality of a matching in the graph; the goal is to find a minimum-cost matching of size s (in the common case that the graph admits a one-sided-perfect matching, s = r). One reduction to the balanced case is the doubling technique, in which a new graph is built from two copies of the original graph G: a forward copy Gf and a backward copy Gb. The backward copy is "flipped", so that, in each side of G, there are now n+r vertices. Between the copies, we need to add two kinds of linking edges:
-
-* Large-to-large: from each vertex in the larger part of Gf, add a zero-cost edge to the corresponding vertex in Gb.
-
-* Small-to-small: if the original graph does not have a one-sided-perfect matching, then from each vertex in the smaller part of Gf, add a very-high-cost edge to the corresponding vertex in Gb.
-
-All in all, at most $n+r$ new edges are required. The resulting graph always has a perfect matching of size $n+r$. A minimum-cost perfect matching in this graph must consist of minimum-cost maximum-cardinality matchings in Gf and Gb.
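As an aside, practical library solvers accept the unbalanced (rectangular) case directly, so the reduction is rarely coded by hand; a small sketch using SciPy's `linear_sum_assignment` (assuming a recent SciPy; the 3-agent, 4-task cost matrix is invented for illustration):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4, 1, 3, 2],
                 [2, 0, 5, 3],
                 [3, 2, 2, 7]])
rows, cols = linear_sum_assignment(cost)   # min-cost matching saturating the smaller side
print(list(zip(rows, cols)), cost[rows, cols].sum())  # total cost 4 for this instance
```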
The main problem with the doubling technique described above is that there is no speed gain when $r\ll n$.
-
-Instead of using reduction, the unbalanced assignment problem can be solved by directly generalizing existing algorithms for balanced assignment. The Hungarian algorithm can be generalized to solve the problem in $O(ms + s^2\log r)$ strongly-polynomial time. In particular, if s=r then the runtime is $O(mr + r^2\log r)$. If the weights are integers, then Thorup's method can be used to get a runtime of $O(ms + s^2\log \log r)$.
-
-The assignment problem can be solved by presenting it as a linear program. For convenience we will present the maximization problem. Each edge (i,j), where i is in A and j is in T, has a weight $w_{ij}$. For each edge (i,j) we have a variable $x_{ij}$. The variable is 1 if the edge is contained in the matching and 0 otherwise, so we set the domain constraints:
-$$
-0\le x_{ij}\le 1\text{ for }i\in A, j\in T, \qquad x_{ij}\in \mathbb{Z}\text{ for }i\in A, j\in T.
-$$
-
-The total weight of the matching is: $\sum_{(i,j)\in A\times T} w_{ij}x_{ij}$. The goal is to find a maximum-weight perfect matching.
-
-To guarantee that the variables indeed represent a perfect matching, we add constraints saying that each vertex is adjacent to exactly one edge in the matching, i.e.,
-$$
-\sum_{j\in T}x_{ij}=1\text{ for }i\in A, \qquad \sum_{i\in A}x_{ij}=1\text{ for }j\in T.
-$$
-
-All in all we have the following LP:
-$$
-\begin{align}
-\text{maximize} \quad &\sum_{(i,j)\in A\times T} w_{ij}x_{ij} \\
-\text{subject to} \quad &\sum_{j\in T}x_{ij}=1\text{ for }i\in A, \\
-&\sum_{i\in A}x_{ij}=1\text{ for }j\in T, \\
-&0\le x_{ij}\le 1\text{ for }i\in A, j\in T, \\
-&x_{ij}\in \mathbb{Z}\text{ for }i\in A, j\in T.
-\end{align}
-$$
-
-This is an integer linear program. However, we can solve it without the integrality constraints (i.e., drop the last constraint), using standard methods for solving continuous linear programs. While this formulation also allows fractional variable values, in this special case, the LP always has an optimal solution where the variables take integer values. This is because the constraint matrix of the fractional LP is totally unimodular – it satisfies the four conditions of Hoffman and Gale.
-
-This can also be proved directly. Let x be an optimal solution of the fractional LP, w(x) be its total weight, and k(x) be the number of non-integral variables. If $k(x)=0$ we are done. Otherwise, there is a fractional variable, say $x_{i_1,j_2}$. Because the sum of the variables adjacent to $j_2$ is 1, which is an integer, there must be another variable adjacent to $j_2$ with a fractional value, say $x_{i_3,j_2}$. By similar considerations on $i_3$, there must be another variable adjacent to $i_3$ with a fractional value, say $x_{i_3,j_4}$. By similar considerations we move from one vertex to another, collecting edges with fractional values. Since the graph is finite, at some point we must have a cycle. Without loss of generality we can assume that the cycle ends at vertex $i_1$, so the last fractional variable in the cycle is $x_{i_1,j_{2m}}$. So the number of edges in the cycle is 2m – it must be even since the graph is bipartite.
-
-Suppose we add a certain constant e to all even variables in the cycle, and remove the same constant e from all odd variables in the cycle. For any such e, the sum of variables near each vertex remains the same (1), so the vertex constraints are still satisfied. Moreover, if e is sufficiently small, all variables remain between 0 and 1, so the domain constraints are still satisfied too.
It is easy to find a largest e that maintains the domain constraints: it is either the smallest difference between an odd variable and 0, or the smallest difference between an even variable and 1. Now, we have one less fractional variable, so k(x) decreases by 1. The objective value remains the same, since otherwise we could increase it by selecting e to be positive or negative, in contradiction to the assumption that it is maximal. - -By repeating the cycle-removal process we arrive, after at most n steps, at a solution in which all variables are integral. - -Other approaches for the assignment problem exist and are reviewed by Duan and Pettie (see Table II). Their work proposes an approximation algorithm for the assignment problem (and the more general maximum weight matching problem), which runs in linear time for any fixed error bound. - -When phrased as a graph theory problem, the assignment problem can be extended from bipartite graphs to arbitrary graphs. The corresponding problem, of finding a matching in a weighted graph where the sum of weights is maximized, is called the maximum weight matching problem. diff --git a/wiki/wikipedia/2302.txt b/wiki/wikipedia/2302.txt deleted file mode 100644 index b19dfdc437cb69997358039c4ef670d76e4d86cb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2302.txt +++ /dev/null @@ -1,7 +0,0 @@ -In algebraic geometry, the Chasles–Cayley–Brill formula, also known as the Cayley–Brill formula, states that a correspondence T of valence k from an algebraic curve C of genus g to itself has d + e + 2kg united points, where d and e are the degrees of T and its inverse. - -Michel Chasles introduced the formula for genus g = 0, Arthur Cayley stated the general formula without proof, and Alexander von Brill gave the first proof. - -The number of united points of the correspondence is the intersection number of the correspondence with the diagonal Δ of C×C. - -The correspondence has valence k if and only if it is homologous to a linear combination a(C×1) + b(1×C) – kΔ where Δ is the diagonal of C×C. The Chasles–Cayley–Brill formula follows easily from this together with the fact that the self-intersection number of the diagonal is 2 – 2g. diff --git a/wiki/wikipedia/2303.txt b/wiki/wikipedia/2303.txt deleted file mode 100644 index 87b0c473a5e6ccce5bff31234203fa12763aba1f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2303.txt +++ /dev/null @@ -1,58 +0,0 @@ -In computational complexity theory, Savitch's theorem, proved by Walter Savitch in 1970, gives a relationship between deterministic and non-deterministic space complexity. It states that for any function $f\in\Omega(\log(n))$, -$$ -\mathsf{NSPACE}\left(f\left(n\right)\right) \subseteq \mathsf{DSPACE}\left(f\left(n\right)^2\right). -$$ - -In other words, if a nondeterministic Turing machine can solve a problem using $f(n)$ space, a deterministic Turing machine can solve the same problem in the square of that space bound. Although it seems that nondeterminism may produce exponential gains in time, this theorem shows that it has a markedly more limited effect on space requirements. - -The proof relies on an algorithm for STCON, the problem of determining whether there is a path between two vertices in a directed graph, which runs in $O\left((\log n)^2\right)$ space for $n$ vertices. 
The basic idea of the algorithm is to solve recursively a somewhat more general problem, testing the existence of a path from a vertex s to another vertex t that uses at most k edges, where k is a parameter that is given as input to the recursive algorithm. STCON may be solved from this problem by setting k to n. To test for a k-edge path from s to t, one may test whether each vertex u may be the midpoint of the s-t path, by recursively searching for paths of half the length from s to u and u to t.
-
-Using pseudocode (in Python syntax) we can express this algorithm as follows:
-
-from math import ceil, floor
-
-def k_edge_path(s, t, k) -> bool:
-    """Is there a path from s to t using at most k edges?
-    k initially equals n (which is the number of vertices); the graph is
-    given by the globals `vertices` (all vertices) and `edges` (a set of
-    ordered pairs)."""
-    if k == 0:
-        return s == t
-    if k == 1:
-        return s == t or (s, t) in edges
-    for u in vertices:
-        if k_edge_path(s, u, floor(k / 2)) and k_edge_path(u, t, ceil(k / 2)):
-            return True
-    return False
-
-This search calls itself to a recursion depth of $O(\log n)$ levels, each of which requires $O(\log n)$ bits to store the function arguments and local variables at that level: k and all vertices (s, t, u) require $O(\log n)$ bits each. The total auxiliary space complexity is thus $O\left((\log n)^2\right)$. Although described above in the form of a program in a high-level language, the same algorithm may be implemented with the same asymptotic space bound on a Turing machine.
-
-To see why this algorithm implies the theorem, consider the following. For any language $L \in \mathsf{NSPACE}(f(n))$, there is a Turing machine $M$ which decides $L$ in space $f(n)$. Assume w.l.o.g. the alphabet is a binary alphabet $\{0,1\}$ (viz. $L \subseteq \{0,1\}^*$). For any input word $x \in \{0,1\}^*$, there is a directed graph $G^M_x$ whose vertices are the configurations of $M$ when running on the input $x$. There may be infinitely many such configurations; e.g. when $M$ keeps writing a symbol on the tape and moving the head to the right in a loop, ad infinitum. The configurations then grow arbitrarily large. However, we know that at most $f\left(n\right)$ space is needed to decide whether $x \in L$, so we only care about configurations of size at most $f\left(n\right)$; call any such configuration admissible. There are finitely many admissible configurations; namely $2^{O\left(f(n)\right)}$.
-
-Therefore, the induced subgraph $[G^M_x]$ of $G^M_x$ containing (exactly) the admissible configurations has $2^{O\left(f(n)\right)}$ vertices. For each input $x \in \{0,1\}^n$, $[G^M_x]$ has a path from the starting configuration to an accepting configuration if and only if $x \in L$. Thus by deciding connectivity in $[G^M_x]$, we can decide membership of $x$ in $L$. By the above algorithm this can be done deterministically in space $O\left(\left(\log 2^{O\left(f(n)\right)}\right)^2\right) = O\left(f\left(n\right)^2\right)$; hence $L$ is in $\mathsf{DSPACE}\left(f\left(n\right)^2\right)$.
-
-Since this holds for all $f \in \Omega(\log n)$ and all $L \in \mathsf{NSPACE}\left(f\left(n\right)\right)$, we get the statement of the theorem:
-
-For all functions $f \in \Omega(\log n)$, $\mathsf{NSPACE}\left(f\left(n\right)\right) \subseteq \mathsf{DSPACE}\left(f\left(n\right)^2\right)$.
-
--------
-
-Some important corollaries of the theorem include:
-
-* PSPACE = NPSPACE
-
-** This follows directly from the fact that the square of a polynomial function is still a polynomial function. It is believed that a similar relationship does not exist between the polynomial time complexity classes, P and NP, although this is still an open question.
-
-* NL ⊆ L2
-
-** STCON is NL-complete, and so all languages in NL are also in the complexity class $\mathsf{L}^2 =\mathsf{DSPACE}\left(\left(\log n\right)^2\right)$.
diff --git a/wiki/wikipedia/2304.txt b/wiki/wikipedia/2304.txt
deleted file mode 100644
index 7dbaa5ce8cecbfc2da7ef9ed722b4f4a6e4e528d..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2304.txt
+++ /dev/null
@@ -1,36 +0,0 @@
-In mathematics, the phrases arbitrarily large, arbitrarily small and arbitrarily long are used in statements to make clear that an object can be taken to be large, small or long with little limitation or restraint, respectively. The use of "arbitrarily" often occurs in the context of real numbers (and subsets thereof), though its meaning can differ from that of "sufficiently" and "infinitely".
-
-The statement
-
-"$f(x)$ is non-negative for arbitrarily large $x$."
-
-is a shorthand for:
-
-"For every real number $n$, $f(x)$ is non-negative for some value of $x$ greater than $n$."
-
-In the common parlance, the term "arbitrarily long" is often used in the context of sequences of numbers. For example, to say that there are "arbitrarily long arithmetic progressions of prime numbers" does not mean that there exists any infinitely long arithmetic progression of prime numbers (there is not), nor that there exists any particular arithmetic progression of prime numbers that is in some sense "arbitrarily long". Rather, the phrase is used to refer to the fact that no matter how large a number $n$ is, there exists some arithmetic progression of prime numbers of length at least $n$.
-
-Similar to arbitrarily large, one can also define the phrase "$P(x)$ holds for arbitrarily small real numbers", as follows:
-$$
-\forall \epsilon \in \mathbb{R}_{+}, \exists x \in \mathbb{R} : |x|<\epsilon \land P(x)
-$$
-
-In other words:
-
-However small a number, there will be a number $x$ smaller than it such that $P(x)$ holds.
-
-While similar, "arbitrarily large" is not equivalent to "sufficiently large". For instance, while it is true that prime numbers can be arbitrarily large (since there are infinitely many of them due to Euclid's theorem), it is not true that all sufficiently large numbers are prime.
-
-As another example, the statement "$f(x)$ is non-negative for arbitrarily large $x$." could be rewritten as:
-$$
-\forall n \in \mathbb{R} \mbox{, } \exists x \in \mathbb{R} \mbox{ such that } x > n \land f(x) \ge 0
-$$
-
-However, using "sufficiently large", the same phrase becomes:
-$$
-\exists n \in \mathbb{R} \mbox{ such that } \forall x \in \mathbb{R} \mbox{, } x > n \Rightarrow f(x) \ge 0
-$$
-
-Furthermore, "arbitrarily large" also does not mean "infinitely large". For example, although prime numbers can be arbitrarily large, an infinitely large prime number does not exist—since all prime numbers (as well as all other integers) are finite.
-
-In some cases, phrases such as "the proposition $P(x)$ is true for arbitrarily large $x$" are used primarily for emphasis, as in "$P(x)$ is true for all $x$, no matter how large $x$ is." In these cases, the phrase "arbitrarily large" does not have the meaning indicated above (i.e., "however large a number, there will be some larger number for which $P(x)$ still holds."). Instead, the usage in this case is in fact logically synonymous with "all".
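As a toy illustration of the quantifier order in the two displays above, the following Python sketch (taking $f = \cos$ as an assumed example) produces, for any bound n, a witness $x > n$ with $f(x) \ge 0$; yet "$\cos(x) \ge 0$ for all sufficiently large $x$" is false, since arbitrarily large $x$ with $\cos(x) = -1$ also exist:

```python
from math import cos, pi, floor

def witness(n: float) -> float:
    """Return some x > n with cos(x) >= 0 (the next multiple of 2*pi)."""
    return 2 * pi * (floor(n / (2 * pi)) + 1)

assert all(witness(n) > n and cos(witness(n)) >= 0 for n in (10.0, 1e3, 1e6))
```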
diff --git a/wiki/wikipedia/2305.txt b/wiki/wikipedia/2305.txt
deleted file mode 100644
index f51fb85e4e699e1e21803c27f11be5774c6b2c64..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2305.txt
+++ /dev/null
@@ -1,22 +0,0 @@
-In additive and algebraic number theory, the Skolem–Mahler–Lech theorem states that if a sequence of numbers satisfies a linear difference equation, then with finitely many exceptions the positions at which the sequence is zero form a regularly repeating pattern. This result is named after Thoralf Skolem (who proved the theorem for sequences of rational numbers), Kurt Mahler (who proved it for sequences of algebraic numbers), and Christer Lech (who proved it for sequences whose elements belong to any field of characteristic 0). Its known proofs use p-adic analysis and are non-constructive.
-
-Let $s(n)_{n \ge 0}$ be a sequence of complex numbers satisfying $s(n) = c_1 s(n-1) + c_2 s(n-2) + \cdots + c_d s(n-d)$ for all $n \ge d$, where $c_i$ are complex number constants (i.e., a constant-recursive sequence of order $d$). Then the set of zeros $\{n \mid s(n) = 0\}$ is equal to the symmetric difference of a finite set and finitely many full arithmetic progressions. Here, an infinite arithmetic progression is full if there exist integers a and b such that the progression consists of all positive integers equal to b modulo a. If additionally $c_d \ne 0$ (excluding sequences such as 1, 0, 0, 0, ...), then the symmetric difference can be replaced by a union.
-
-Consider the sequence
-
-0, 0, 1, 0, 1, 0, 2, 0, 3, 0, 5, 0, 8, 0, ...
-
-that alternates between zeros and the Fibonacci numbers.
-
-This sequence can be generated by the linear recurrence relation
-$$
-F(i) = F(i-2) + F(i-4)
-$$
-
-(a modified form of the Fibonacci recurrence), starting from the base cases F(1) = F(2) = F(4) = 0 and F(3) = 1. For this sequence, F(i) = 0 if and only if i is either one or even. Thus, the positions at which the sequence is zero can be partitioned into a finite set (the singleton set {1}) and a full arithmetic progression (the positive even numbers).
-
-In this example, only one arithmetic progression was needed, but other recurrence sequences may have zeros at positions forming multiple arithmetic progressions.
-
-The Skolem problem is the problem of determining whether a given recurrence sequence has a zero. There exists an algorithm to test whether there are infinitely many zeros, and if so to find the decomposition of these zeros into periodic sets guaranteed to exist by the Skolem–Mahler–Lech theorem. However, it is unknown whether there exists an algorithm to determine whether a recurrence sequence has any non-periodic zeros.
diff --git a/wiki/wikipedia/2306.txt b/wiki/wikipedia/2306.txt
deleted file mode 100644
index da033f02fc7a8061c0fc91aee4a495ede88c48c9..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2306.txt
+++ /dev/null
@@ -1,45 +0,0 @@
-VIATRA is an open-source model transformation framework based on the Eclipse Modeling Framework (EMF) and hosted by the Eclipse Foundation.
-
-VIATRA supports the development of model transformations with specific focus on event-driven, reactive transformations, i.e., rule-based scenarios where transformations occur as reactions to certain external changes in the model.
-
-Building upon an incremental query support for locating patterns and changes in the model, VIATRA offers a language (the VIATRA Query Language, VQL) to define transformations and a reactive transformation engine to execute certain transformations upon changes in the underlying model.
-
-VIATRA, as an open-source framework, serves as a central integration point and enabler engine in various applications, both in an industrial and in an academic context. Earlier versions of the framework have been intensively used for providing tool support for developing and verifying critical embedded systems in numerous European research projects.
-
-As a major industrial application, VIATRA is utilized as the underlying model querying and transformation engine of the IncQuery Suite (https://incquery.io). Thus, VIATRA is a key technical component in several industrial collaborations around model-based systems engineering (MBSE), fostering innovative systems engineering practices in domains like aerospace, manufacturing, industrial automation and automotive. Furthermore, via the applications of the IncQuery Suite, VIATRA serves as the foundation for model-based endeavors of ongoing, large-scale European industrial digitalization projects.
-
-VIATRA is well integrated with Eclipse Modeling tools (see, e.g., Ujhelyi et al.). However, VIATRA works outside the Eclipse environment as well, as demonstrated by integrations with the JetBrains MPS platform.
-
-VIATRA provides the following main services:
-
-* An incremental query engine together with a graph-pattern-based language to specify and execute model queries efficiently.
-
-* An internal DSL over the Xtend language to specify both batch and event-driven, reactive transformations.
-
-* A model obfuscator to remove sensitive information from a confidential model (e.g., to create bug reports).
-
-The current VIATRA project is a full rewrite of the previous VIATRA2 framework, coming with full compatibility and support for EMF models. The project features a History wiki page that describes the main differences between the different versions. The evolution of the framework has also been the subject of a paper by Dániel Varró et al.
-
-As for applications of the earlier VIATRA2 framework, it served as the underlying model transformation engine of the DECOS European IP in the field of dependable embedded systems. Furthermore, a traditional application area for VIATRA2 – starting as early as 1998 – was to support the analysis of system models taken from various application areas (safety-critical and/or embedded systems, robust e-business applications, middleware, service-oriented architecture) described using various modeling languages (SysML, UML, BPMN, etc.) during a model-driven systems engineering process. Such a model analysis typically also includes the verification and validation, the testing, the safety and security analysis as well as the early assessment of non-functional characteristics (such as reliability, availability, responsiveness, throughput, etc.) of the system under design.
-
-These use-cases and application fields still constitute focal areas for VIATRA, mostly addressed via the IncQuery Suite as an interface on the user's end.
-
-Since precise model-based systems development is the primary application area of VIATRA, it necessitates that (i) the model transformations are specified in a mathematically precise way, and (ii) these transformations are automated so that the target mathematical models can be derived fully automatically. To achieve this, VIATRA relies upon a mathematically precise rule-based specification formalism, namely, graph transformation (GT). VIATRA aims at invisible formal methods: here, formal details are hidden by automated model transformations projecting system models into various mathematical domains (and, preferably, vice versa).
-
-The basic concept in defining model transformations within VIATRA is the (graph) pattern. A pattern is a collection of model elements arranged into a certain structure fulfilling additional constraints (as defined by attribute conditions or other patterns). Patterns can be matched on certain model instances, and upon successful pattern matching, elementary model manipulation is specified by graph transformation rules. Like OCL, graph transformation rules describe pre- and postconditions to the transformations, but graph transformation rules are guaranteed to be executable, which is a main conceptual difference.
-
-In particular, as reactive, event-driven transformations are the current focus of VIATRA, VIATRA includes a rule execution engine which monitors changes (interpreted as events) in the model, and fires a rule whenever a change leads to the fulfillment of the precondition for that rule (and, potentially, if some further control conditions are also met).
diff --git a/wiki/wikipedia/2307.txt b/wiki/wikipedia/2307.txt
deleted file mode 100644
index afe07554db9bb50853f0b55ec173389c305a07fe..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2307.txt
+++ /dev/null
@@ -1,96 +0,0 @@
-In mathematics, for given real numbers a and b, the logarithm $\log_b a$ is a number x such that $b^x = a$. Analogously, in any group G, powers $b^k$ can be defined for all integers k, and the discrete logarithm $\log_b a$ is an integer k such that $b^k = a$. In number theory, the more commonly used term is index: we can write $x = \operatorname{ind}_r a \pmod{m}$ (read "the index of a to the base r modulo m") for $r^x \equiv a \pmod{m}$ if r is a primitive root of m and $\gcd(a,m) = 1$.
-
-Discrete logarithms are quickly computable in a few special cases. However, no efficient method is known for computing them in general. Several important algorithms in public-key cryptography base their security on the assumption that the discrete logarithm problem over carefully chosen groups has no efficient solution.
-
-Let G be any group. Denote its group operation by multiplication and its identity element by 1. Let b be any element of G. For any positive integer k, the expression $b^k$ denotes the product of b with itself k times:
-$$
-b^k = \underbrace{b \cdot b \cdots b}_{k\ \text{factors}}.
-$$
-
-Similarly, let $b^{-k}$ denote the product of $b^{-1}$ with itself k times. For k = 0, the kth power is the identity: $b^0 = 1$.
-
-Let a also be an element of G. An integer k that solves the equation $b^k = a$ is termed a discrete logarithm (or simply logarithm, in this context) of a to the base b. One writes $k = \log_b a$.
-
-The powers of 10 form an infinite subset G = {…, 0.001, 0.01, 0.1, 1, 10, 100, 1000, …} of the rational numbers. This set G is a cyclic group under multiplication, and 10 is a generator. For any element a of the group, one can compute $\log_{10} a$. For example, $\log_{10} 10000 = 4$, and $\log_{10} 0.001 = -3$.
These are instances of the discrete logarithm problem. - -Other base-10 logarithms in the real numbers are not instances of the discrete logarithm problem, because they involve non-integer exponents. For example, the equation log10 53 = 1.724276… means that 10^1.724276… = 53. While integer exponents can be defined in any group using products and inverses, arbitrary exponents in the real numbers require other concepts such as the exponential function. - -A similar example holds for any non-zero real number b. The powers form a multiplicative subgroup G = {…, b^−3, b^−2, b^−1, 1, b^1, b^2, b^3, …} of the non-zero real numbers. For any element a of G, one can compute logb a. - -One of the simplest settings for discrete logarithms is the group (Zp)×, the multiplicative group of integers modulo the prime p. Its elements are congruence classes modulo p, and the group product of two elements may be obtained by ordinary integer multiplication of the elements followed by reduction modulo p. - -The kth power of one of the numbers in this group may be computed by finding its kth power as an integer and then finding the remainder after division by p. When the numbers involved are large, it is more efficient to reduce modulo p multiple times during the computation. Regardless of the specific algorithm used, this operation is called modular exponentiation. For example, consider (Z17)×. To compute 3^4 in this group, compute 3^4 = 81, and then divide 81 by 17, obtaining a remainder of 13. Thus 3^4 = 13 in the group (Z17)×. - -The discrete logarithm is just the inverse operation. For example, consider the equation 3^k ≡ 13 (mod 17) for k. From the example above, one solution is k = 4, but it is not the only solution. Since 3^16 ≡ 1 (mod 17)—as follows from Fermat's little theorem—it also follows that if n is an integer then 3^(4+16n) ≡ 3^4 × (3^16)^n ≡ 13 × 1^n ≡ 13 (mod 17). Hence the equation has infinitely many solutions of the form 4 + 16n. Moreover, because 16 is the smallest positive integer m satisfying 3^m ≡ 1 (mod 17), these are the only solutions. Equivalently, the set of all possible solutions can be expressed by the constraint that k ≡ 4 (mod 16). - -In the special case where b is the identity element 1 of the group G, the discrete logarithm logb a is undefined for a other than 1, and every integer k is a discrete logarithm for a = 1. - -Powers obey the usual algebraic identity b^(k+l) = b^k b^l. In other words, the function -$$ -f \colon \mathbf{Z} \to G -$$ - -defined by f(k) = b^k is a group homomorphism from the integers Z under addition onto the subgroup H of G generated by b. For all a in H, logb a exists. Conversely, logb a does not exist for a that are not in H. - -If H is infinite, then logb a is also unique, and the discrete logarithm amounts to a group isomorphism -$$ -\log_b \colon H \to \mathbf{Z}. -$$ - -On the other hand, if H is finite of order n, then logb a is unique only up to congruence modulo n, and the discrete logarithm amounts to a group isomorphism -$$ -\log_b\colon H \to \mathbf{Z}_n, -$$ - -where Zn denotes the additive group of integers modulo n. - -The familiar base change formula for ordinary logarithms remains valid: If c is another generator of H, then -$$ -\log_c a = \log_c b \cdot \log_b a. -$$ - -The discrete logarithm problem is considered to be computationally intractable. That is, no efficient classical algorithm is known for computing discrete logarithms in general.
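As a concrete illustration of the two operations just described, here is a minimal sketch (the function names are mine, not from the article; Python's built-in pow(b, k, p) performs the same modular exponentiation) that reproduces the worked example in (Z17)×:

```python
def mod_exp(b, k, p):
    """Modular exponentiation: b^k mod p, reducing mod p at every step."""
    result = 1
    for _ in range(k):
        result = (result * b) % p
    return result

def discrete_log(b, a, p):
    """Trial multiplication: smallest k >= 0 with b^k ≡ a (mod p), or None."""
    value, a = 1, a % p
    for k in range(p):  # the order of b divides p - 1, so p steps suffice
        if value == a:
            return k
        value = (value * b) % p
    return None  # a is not a power of b modulo p

print(mod_exp(3, 4, 17))        # 13, as in the worked example
print(discrete_log(3, 13, 17))  # 4, the smallest solution of 3^k ≡ 13 (mod 17)
```

Both loops take time proportional to k or p, i.e., exponential in the number of digits of p; this is exactly the asymmetry between exponentiation and the discrete logarithm discussed below.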
- -A general algorithm for computing logb a in finite groups G is to raise b to larger and larger powers k until the desired a is found. This algorithm is sometimes called trial multiplication. It requires running time linear in the size of the group G and thus exponential in the number of digits in the size of the group. Therefore, it is an exponential-time algorithm, practical only for small groups G. - -More sophisticated algorithms exist, usually inspired by similar algorithms for integer factorization. These algorithms run faster than the naïve algorithm, some of them proportional to the square root of the size of the group, and thus exponential in half the number of digits in the size of the group. However, none of them runs in polynomial time (in the number of digits in the size of the group). - -* Baby-step giant-step - -* Function field sieve - -* Index calculus algorithm - -* Number field sieve - -* Pohlig–Hellman algorithm - -* Pollard's rho algorithm for logarithms - -* Pollard's kangaroo algorithm (aka Pollard's lambda algorithm) - -There is an efficient quantum algorithm due to Peter Shor. - -Efficient classical algorithms also exist in certain special cases. For example, in the group of the integers modulo p under addition, the power b^k becomes the product b·k, and equality means congruence modulo p in the integers. The extended Euclidean algorithm finds k quickly. - -With Diffie–Hellman, a cyclic group modulo a prime p is used, allowing an efficient computation of the discrete logarithm with Pohlig–Hellman if the order of the group (being p−1) is sufficiently smooth, i.e. has no large prime factors. - -While computing discrete logarithms and factoring integers are distinct problems, they share some properties: - -*both are special cases of the hidden subgroup problem for finite abelian groups, - -*both problems seem to be difficult (no efficient algorithms are known for non-quantum computers), - -*for both problems efficient algorithms on quantum computers are known, - -*algorithms from one problem are often adapted to the other, and - -*the difficulty of both problems has been used to construct various cryptographic systems. - -There exist groups for which computing discrete logarithms is apparently difficult. In some cases (e.g. large prime order subgroups of groups (Zp)×) there is not only no efficient algorithm known for the worst case, but the average-case complexity can be shown to be about as hard as the worst case using random self-reducibility. - -At the same time, the inverse problem of discrete exponentiation is not difficult (it can be computed efficiently using exponentiation by squaring, for example). This asymmetry is analogous to the one between integer factorization and integer multiplication. Both asymmetries (and other possibly one-way functions) have been exploited in the construction of cryptographic systems. - -Popular choices for the group G in discrete logarithm cryptography (DLC) are the cyclic groups (Zp)× (e.g. ElGamal encryption, Diffie–Hellman key exchange, and the Digital Signature Algorithm) and cyclic subgroups of elliptic curves over finite fields (see Elliptic curve cryptography). - -While there is no publicly known algorithm for solving the discrete logarithm problem in general, the first three steps of the number field sieve algorithm only depend on the group G, not on the specific elements of G whose finite log is desired.
By precomputing these three steps for a specific group, one need only carry out the last step, which is much less computationally expensive than the first three, to obtain a specific logarithm in that group. diff --git a/wiki/wikipedia/2308.txt b/wiki/wikipedia/2308.txt deleted file mode 100644 index 80b3cd3f72a399940bde77bc55440a5e0fdb7f6f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2308.txt +++ /dev/null @@ -1,5 +0,0 @@ -Harald Ganzinger (31 October 1950, Werneck – 3 June 2004, Saarbrücken) was a German computer scientist who together with Leo Bachmair developed the superposition calculus, which is (as of 2007) used in most of the state-of-the-art automated theorem provers for first-order logic. - -He received his Ph.D. from the Technical University of Munich in 1978. Before 1991 he was a Professor of Computer Science at the University of Dortmund. He then joined the Max Planck Institute for Computer Science in Saarbrücken shortly after it was founded in 1991. Until 2004 he was the Director of the Programming Logics department of the Max Planck Institute for Computer Science and honorary professor at Saarland University. His research group created the SPASS automated theorem prover. - -He received the Herbrand Award in 2004 (posthumously) for his important contributions to automated theorem proving. diff --git a/wiki/wikipedia/2309.txt b/wiki/wikipedia/2309.txt deleted file mode 100644 index 56ebbd2c3f9053d07b23d32baff0bee3611e8183..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2309.txt +++ /dev/null @@ -1,11 +0,0 @@ -In mathematics, Euler's idoneal numbers (also called suitable numbers or convenient numbers) are the positive integers D such that any integer expressible in only one way as x^2 ± Dy^2 (where x^2 is relatively prime to Dy^2) is a prime power or twice a prime power. In particular, a number that has two distinct representations as a sum of two squares is composite. Every idoneal number generates a set containing infinitely many primes and missing infinitely many other primes. - -A positive integer n is idoneal if and only if it cannot be written as ab + bc + ac for distinct positive integers a, b, and c. - -It is sufficient to consider the set { n + k^2 }; if all these numbers are of the form p, p^2, 2·p or 2^s for some integer s, where p is a prime, then n is idoneal. - -The 65 idoneal numbers found by Leonhard Euler and Carl Friedrich Gauss and conjectured to be the only such numbers are - -1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 15, 16, 18, 21, 22, 24, 25, 28, 30, 33, 37, 40, 42, 45, 48, 57, 58, 60, 70, 72, 78, 85, 88, 93, 102, 105, 112, 120, 130, 133, 165, 168, 177, 190, 210, 232, 240, 253, 273, 280, 312, 330, 345, 357, 385, 408, 462, 520, 760, 840, 1320, 1365, and 1848. - -Results of Peter J. Weinberger from 1973 imply that at most two other idoneal numbers exist, and that the list above is complete if the generalized Riemann hypothesis holds (some sources incorrectly claim that Weinberger's results imply that there is at most one other idoneal number). diff --git a/wiki/wikipedia/231.txt b/wiki/wikipedia/231.txt deleted file mode 100644 index d3924300970211fcc28f308842526a27f24e86c1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/231.txt +++ /dev/null @@ -1,17 +0,0 @@ -In algebra, the Milnor–Moore theorem, introduced by John Milnor and John Moore (1965), classifies an important class of Hopf algebras, of the sort that often show up as cohomology rings in algebraic topology.
- -The theorem states: given a connected, graded, cocommutative Hopf algebra A over a field of characteristic zero with $\dim A_n < \infty$ for all n, the natural Hopf algebra homomorphism -$$ -U(P(A)) \to A -$$ - -from the universal enveloping algebra of the graded Lie algebra $P(A)$ of primitive elements of A to A is an isomorphism. Here we say A is connected if $A_0$ is the field and $A_n = 0$ for negative n. The universal enveloping algebra of a graded Lie algebra L is the quotient of the tensor algebra of L by the two-sided ideal generated by all elements of the form $xy - (-1)^yx - [x,y]$. - -In algebraic topology, the term usually refers to the corollary of the aforementioned result, that for a pointed, simply connected space X, the following isomorphism holds: -$$ -U(\pi_{\ast}(\Omega X) \otimes \Q) \cong H_{\ast}(\Omega X;\Q), -$$ - -where $\Omega X$ denotes the loop space of X. diff --git a/wiki/wikipedia/2310.txt b/wiki/wikipedia/2310.txt deleted file mode 100644 index 3b3b11a425e72f6a498f42f650f0ebef6c4b697d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2310.txt +++ /dev/null @@ -1,23 +0,0 @@ -The Helmholtz theorem of classical mechanics reads as follows: - -Let $H(x,p;V) = K(p) + \varphi(x;V)$ be the Hamiltonian of a one-dimensional system, where $K = \frac{p^2}{2m}$ is the kinetic energy and $\varphi(x;V)$ is a "U-shaped" potential energy profile which depends on a parameter $V$. - -Let $\left\langle \cdot \right\rangle _{t}$ denote the time average. Let - -*$E = K + \varphi, $ - -*$T = 2\left\langle K\right\rangle _{t},$ - -*$P = \left\langle -\frac{\partial \varphi }{\partial V}\right\rangle _{t},$ - -*$S(E,V)=\log \oint \sqrt{2m\left( E-\varphi \left( x,V\right) \right) }dx.$ - -Then -$$ -dS = \frac{dE+PdV}{T}. -$$ - -The conclusion of this theorem of classical mechanics reads exactly like the heat theorem of thermodynamics. This fact shows that thermodynamic-like relations exist between certain mechanical quantities, which in turn allows one to define the "thermodynamic state" of a one-dimensional mechanical system. In particular, the temperature $T$ is given by the time average of the kinetic energy, and the entropy $S$ by the logarithm of the action (i.e., $\oint dx \sqrt{2m\left( E - \varphi \left( x, V\right) \right) }$).
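Since everything above is a one-dimensional quadrature and a time average, the identity dS = (dE + P dV)/T can be checked numerically. The following sketch is not from the source: it assumes the particular "U-shaped" profile φ(x;V) = Vx²/2 and unit mass m, computes S by the trapezoid rule (with the substitution x = x_t sin θ taming the square-root endpoint singularity), and compares finite differences of S against 1/T and P/T.

```python
import numpy as np

m = 1.0  # unit mass (an assumption of this sketch)

def S(E, V, n=200001):
    """log of the action integral for phi(x;V) = V x^2 / 2."""
    x_t = np.sqrt(2.0 * E / V)                      # turning point
    theta = np.linspace(-np.pi / 2, np.pi / 2, n)   # x = x_t sin(theta)
    x = x_t * np.sin(theta)
    f = np.sqrt(2.0 * m * np.maximum(0.0, E - 0.5 * V * x**2)) * x_t * np.cos(theta)
    loop = 2.0 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))  # factor 2 closes the loop
    return np.log(loop)

def time_averages(E, V, n=200001):
    """T = 2<K>_t and P = <-dphi/dV>_t = <-x^2/2>_t over one exact period."""
    omega = np.sqrt(V / m)
    A = np.sqrt(2.0 * E / V)                        # amplitude at energy E
    t = np.linspace(0.0, 2.0 * np.pi / omega, n, endpoint=False)
    x = A * np.sin(omega * t)
    p = m * A * omega * np.cos(omega * t)
    return 2.0 * np.mean(p**2 / (2.0 * m)), np.mean(-0.5 * x**2)

E, V, h = 1.3, 0.7, 1e-5
T, P = time_averages(E, V)
print((S(E + h, V) - S(E - h, V)) / (2 * h), 1.0 / T)  # both ≈ 1/E ≈ 0.7692
print((S(E, V + h) - S(E, V - h)) / (2 * h), P / T)    # both ≈ -1/(2V) ≈ -0.7143
```

For this quadratic profile the action is 2πE√(m/V), so ∂S/∂E = 1/E and ∂S/∂V = −1/(2V) can also be verified by hand, which is what the two printed comparisons confirm.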
- -The importance of this theorem was recognized by Ludwig Boltzmann, who saw how to apply it to macroscopic systems (i.e. multidimensional systems) in order to provide a mechanical foundation of equilibrium thermodynamics. This research activity was closely related to his formulation of the ergodic hypothesis. - -A multidimensional version of the Helmholtz theorem, based on the ergodic theorem of George David Birkhoff, is known as the generalized Helmholtz theorem. diff --git a/wiki/wikipedia/2311.txt b/wiki/wikipedia/2311.txt deleted file mode 100644 index f1514fb818d444c3dd4d73f85b0506f424d4909b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2311.txt +++ /dev/null @@ -1,20 +0,0 @@ -In combinatorial mathematics, the Lubell–Yamamoto–Meshalkin inequality, more commonly known as the LYM inequality, is an inequality on the sizes of sets in a Sperner family, proved by Bollobás, Lubell, Meshalkin, and Yamamoto. It is named for the initials of three of its discoverers. To include the initials of all four discoverers, it is sometimes referred to as the YBLM inequality. - -This inequality belongs to the field of combinatorics of sets, and has many applications in combinatorics. In particular, it can be used to prove Sperner's theorem. Its name is also used for similar inequalities. - -Let U be an n-element set, let A be a family of subsets of U such that no set in A is a subset of another set in A, and let ak denote the number of sets of size k in A. Then -$$ -\sum_{k=0}^n\frac{a_k}{\binom{n}{k}} \le 1. -$$ - -Lubell proves the Lubell–Yamamoto–Meshalkin inequality by a double counting argument in which he counts the permutations of U in two different ways. First, by counting all permutations of U identified with {1, …, n} directly, one finds that there are n! of them. But secondly, one can generate a permutation (i.e., an ordering) of the elements of U by selecting a set S in A and choosing a map that sends {1, …, |S|} to S. If |S| = k, the set S is associated in this way with k!(n − k)! permutations, and in each of them the image of the first k elements of U is exactly S. Each permutation may only be associated with a single set in A, for if two prefixes of a permutation both formed sets in A then one would be a subset of the other. Therefore, the number of permutations that can be generated by this procedure is -$$ -\sum_{S\in A}|S|!(n-|S|)!=\sum_{k=0}^n a_k k! (n-k)!. -$$ - -Since this number is at most the total number of all permutations, -$$ -\sum_{k=0}^n a_k k! (n-k)!\le n!. -$$ - -Finally, dividing the above inequality by n! leads to the result. diff --git a/wiki/wikipedia/2312.txt b/wiki/wikipedia/2312.txt deleted file mode 100644 index 31b5c20fa2b704e138a059b41ace816a3fb37bab..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2312.txt +++ /dev/null @@ -1,75 +0,0 @@ -Proof theory is a major branch of mathematical logic that represents proofs as formal mathematical objects, facilitating their analysis by mathematical techniques. Proofs are typically presented as inductively defined data structures such as plain lists, boxed lists, or trees, which are constructed according to the axioms and rules of inference of the logical system. Consequently, proof theory is syntactic in nature, in contrast to model theory, which is semantic in nature. - -Some of the major areas of proof theory include structural proof theory, ordinal analysis, provability logic, reverse mathematics, proof mining, automated theorem proving, and proof complexity.
Much research also focuses on applications in computer science, linguistics, and philosophy. - -Although the formalisation of logic was much advanced by the work of such figures as Gottlob Frege, Giuseppe Peano, Bertrand Russell, and Richard Dedekind, the story of modern proof theory is often seen as being established by David Hilbert, who initiated what is called Hilbert's program in the foundations of mathematics. The central idea of this program was that if we could give finitary proofs of consistency for all the sophisticated formal theories needed by mathematicians, then we could ground these theories by means of a metamathematical argument showing that all of their purely universal assertions (more technically their provable $\Pi^0_1$ sentences) are finitarily true; once so grounded we do not care about the non-finitary meaning of their existential theorems, regarding these as pseudo-meaningful stipulations of the existence of ideal entities. - -The failure of the program was brought about by Kurt Gödel's incompleteness theorems, which showed that any ω-consistent theory that is sufficiently strong to express certain simple arithmetic truths cannot prove its own consistency, which on Gödel's formulation is a $\Pi^0_1$ sentence. However, modified versions of Hilbert's program emerged and research has been carried out on related topics. This has led, in particular, to: - -*Refinement of Gödel's result, particularly J. Barkley Rosser's refinement, weakening the above requirement of ω-consistency to simple consistency; - -*Axiomatisation of the core of Gödel's result in terms of a modal language, provability logic; - -*Transfinite iteration of theories, due to Alan Turing and Solomon Feferman; - -*The discovery of self-verifying theories, systems strong enough to talk about themselves, but too weak to carry out the diagonal argument that is the key to Gödel's unprovability argument. - -In parallel to the rise and fall of Hilbert's program, the foundations of structural proof theory were being laid. Jan Łukasiewicz suggested in 1926 that one could improve on Hilbert systems as a basis for the axiomatic presentation of logic if one allowed the drawing of conclusions from assumptions in the inference rules of the logic. In response to this, Stanisław Jaśkowski (1929) and Gerhard Gentzen (1934) independently provided such systems, called calculi of natural deduction, with Gentzen's approach introducing the idea of symmetry between the grounds for asserting propositions, expressed in introduction rules, and the consequences of accepting propositions in the elimination rules, an idea that has proved very important in proof theory. Gentzen (1934) further introduced the idea of the sequent calculus, a calculus advanced in a similar spirit that better expressed the duality of the logical connectives, and went on to make fundamental advances in the formalisation of intuitionistic logic, and provide the first combinatorial proof of the consistency of Peano arithmetic. Together, the presentation of natural deduction and the sequent calculus introduced the fundamental idea of analytic proof to proof theory. - -Structural proof theory is the subdiscipline of proof theory that studies the specifics of proof calculi.
The three most well-known styles of proof calculi are: - -*The Hilbert calculi - -*The natural deduction calculi - -*The sequent calculi - -Each of these can give a complete and axiomatic formalization of propositional or predicate logic of either the classical or intuitionistic flavour, almost any modal logic, and many substructural logics, such as relevance logic or linear logic. Indeed, it is unusual to find a logic that resists being represented in one of these calculi. - -Proof theorists are typically interested in proof calculi that support a notion of analytic proof. The notion of analytic proof was introduced by Gentzen for the sequent calculus; there the analytic proofs are those that are cut-free. Much of the interest in cut-free proofs comes from the subformula property: every formula in the end sequent of a cut-free proof is a subformula of one of the premises. This allows one to show consistency of the sequent calculus easily; if the empty sequent were derivable it would have to be a subformula of some premise, which it is not. Gentzen's midsequent theorem, the Craig interpolation theorem, and Herbrand's theorem also follow as corollaries of the cut-elimination theorem. - -Gentzen's natural deduction calculus also supports a notion of analytic proof, as shown by Dag Prawitz. The definition is slightly more complex: we say the analytic proofs are the normal forms, which are related to the notion of normal form in term rewriting. More exotic proof calculi such as Jean-Yves Girard's proof nets also support a notion of analytic proof. - -A particular family of analytic proofs arising in reductive logic are focused proofs, which characterise a large family of goal-directed proof-search procedures. The ability to transform a proof system into a focused form is a good indication of its syntactic quality, in a manner similar to how admissibility of cut shows that a proof system is syntactically consistent. - -Structural proof theory is connected to type theory by means of the Curry–Howard correspondence, which observes a structural analogy between the process of normalisation in the natural deduction calculus and beta reduction in the typed lambda calculus. This provides the foundation for the intuitionistic type theory developed by Per Martin-Löf, and is often extended to a three-way correspondence, the third leg of which is the cartesian closed categories. - -Other research topics in structural proof theory include analytic tableaux, which apply the central idea of analytic proof from structural proof theory to provide decision procedures and semi-decision procedures for a wide range of logics, and the proof theory of substructural logics. - -Ordinal analysis is a powerful technique for providing combinatorial consistency proofs for subsystems of arithmetic, analysis, and set theory. Gödel's second incompleteness theorem is often interpreted as demonstrating that finitistic consistency proofs are impossible for theories of sufficient strength. Ordinal analysis allows one to measure precisely the infinitary content of the consistency of theories. For a consistent recursively axiomatized theory T, one can prove in finitistic arithmetic that the well-foundedness of a certain transfinite ordinal implies the consistency of T. Gödel's second incompleteness theorem implies that the well-foundedness of such an ordinal cannot be proved in the theory T.
- -Consequences of ordinal analysis include (1) consistency of subsystems of classical second order arithmetic and set theory relative to constructive theories, (2) combinatorial independence results, and (3) classifications of provably total recursive functions and provably well-founded ordinals. - -Ordinal analysis was originated by Gentzen, who proved the consistency of Peano Arithmetic using transfinite induction up to ordinal ε0. Ordinal analysis has been extended to many fragments of first and second order arithmetic and set theory. One major challenge has been the ordinal analysis of impredicative theories. The first breakthrough in this direction was Takeuti's proof of the consistency of $\Pi^1_1$-CA0 using the method of ordinal diagrams. - -Provability logic is a modal logic, in which the box operator is interpreted as 'it is provable that'. The point is to capture the notion of a proof predicate of a reasonably rich formal theory. As basic axioms of the provability logic GL (Gödel–Löb), which captures what is provable in Peano Arithmetic, one takes modal analogues of the Hilbert–Bernays derivability conditions and Löb's theorem (if it is provable that the provability of A implies A, then A is provable). - -Some of the basic results concerning the incompleteness of Peano Arithmetic and related theories have analogues in provability logic. For example, it is a theorem in GL that if a contradiction is not provable then it is not provable that a contradiction is not provable (Gödel's second incompleteness theorem). There are also modal analogues of the fixed-point theorem. Robert Solovay proved that the modal logic GL is complete with respect to Peano Arithmetic. That is, the propositional theory of provability in Peano Arithmetic is completely represented by the modal logic GL. This straightforwardly implies that propositional reasoning about provability in Peano Arithmetic is complete and decidable. - -Other research in provability logic has focused on first-order provability logic, polymodal provability logic (with one modality representing provability in the object theory and another representing provability in the meta-theory), and interpretability logics intended to capture the interaction between provability and interpretability. Some very recent research has involved applications of graded provability algebras to the ordinal analysis of arithmetical theories. - -Reverse mathematics is a program in mathematical logic that seeks to determine which axioms are required to prove theorems of mathematics. The field was founded by Harvey Friedman. Its defining method can be described as "going backwards from the theorems to the axioms", in contrast to the ordinary mathematical practice of deriving theorems from axioms. The reverse mathematics program was foreshadowed by results in set theory such as the classical theorem that the axiom of choice and Zorn's lemma are equivalent over ZF set theory. The goal of reverse mathematics, however, is to study possible axioms of ordinary theorems of mathematics rather than possible axioms for set theory. - -In reverse mathematics, one starts with a framework language and a base theory—a core axiom system—that is too weak to prove most of the theorems one might be interested in, but still powerful enough to develop the definitions necessary to state these theorems. For example, to study the theorem "Every bounded sequence of real numbers has a supremum" it is necessary to use a base system that can speak of real numbers and sequences of real numbers.
- -For each theorem that can be stated in the base system but is not provable in the base system, the goal is to determine the particular axiom system (stronger than the base system) that is necessary to prove that theorem. To show that a system S is required to prove a theorem T, two proofs are required. The first proof shows T is provable from S; this is an ordinary mathematical proof along with a justification that it can be carried out in the system S. The second proof, known as a reversal, shows that T itself implies S; this proof is carried out in the base system. The reversal establishes that no axiom system S′ that extends the base system can be weaker than S while still proving T. - -One striking phenomenon in reverse mathematics is the robustness of the Big Five axiom systems. In order of increasing strength, these systems are named by the initialisms RCA0, WKL0, ACA0, ATR0, and $\Pi^1_1$-CA0. Nearly every theorem of ordinary mathematics that has been reverse mathematically analyzed has been proven equivalent to one of these five systems. Much recent research has focused on combinatorial principles that do not fit neatly into this framework, like RT^2_2 (Ramsey's theorem for pairs). - -Research in reverse mathematics often incorporates methods and techniques from recursion theory as well as proof theory. - -Functional interpretations are interpretations of non-constructive theories in functional ones. Functional interpretations usually proceed in two stages. First, one "reduces" a classical theory C to an intuitionistic one I. That is, one provides a constructive mapping that translates the theorems of C to the theorems of I. Second, one reduces the intuitionistic theory I to a quantifier-free theory of functionals F. These interpretations contribute to a form of Hilbert's program, since they prove the consistency of classical theories relative to constructive ones. Successful functional interpretations have yielded reductions of infinitary theories to finitary theories and impredicative theories to predicative ones. - -Functional interpretations also provide a way to extract constructive information from proofs in the reduced theory. As a direct consequence of the interpretation one usually obtains the result that any recursive function whose totality can be proven either in I or in C is represented by a term of F. If one can provide an additional interpretation of F in I, which is sometimes possible, this characterization is in fact usually shown to be exact. It often turns out that the terms of F coincide with a natural class of functions, such as the primitive recursive or polynomial-time computable functions. Functional interpretations have also been used to provide ordinal analyses of theories and classify their provably recursive functions. - -The study of functional interpretations began with Kurt Gödel's interpretation of intuitionistic arithmetic in a quantifier-free theory of functionals of finite type. This interpretation is commonly known as the Dialectica interpretation. Together with the double-negation interpretation of classical logic in intuitionistic logic, it provides a reduction of classical arithmetic to intuitionistic arithmetic. - -==Formal and informal proof== - -The informal proofs of everyday mathematical practice are unlike the formal proofs of proof theory. They are rather like high-level sketches that would allow an expert to reconstruct a formal proof at least in principle, given enough time and patience.
For most mathematicians, fully formal proofs are too pedantic and long-winded to be in common use. - -Formal proofs are constructed with the help of computers in interactive theorem proving. - -Significantly, these proofs can also be checked automatically by computer. Checking formal proofs is usually simple, whereas finding proofs (automated theorem proving) is generally hard. An informal proof in the mathematics literature, by contrast, requires weeks of peer review to be checked, and may still contain errors. - -In linguistics, type-logical grammar, categorial grammar and Montague grammar apply formalisms based on structural proof theory to give a formal natural language semantics. diff --git a/wiki/wikipedia/2313.txt b/wiki/wikipedia/2313.txt deleted file mode 100644 index 38b0a0aaf79267d16d0aeaca16b6409cf5723718..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2313.txt +++ /dev/null @@ -1,31 +0,0 @@ -The Bernstein–Kushnirenko theorem (or Bernstein–Khovanskii–Kushnirenko (BKK) theorem), proven by David Bernstein and Anatoli Kushnirenko in 1975, is a theorem in algebra. It states that the number of non-zero complex solutions of a system of Laurent polynomial equations $f_1= \cdots = f_n=0$ is equal to the mixed volume of the Newton polytopes of the polynomials $f_1, \ldots, f_n$, assuming that the non-zero coefficients of $f_1, \ldots, f_n$ are generic. A more precise statement is as follows: - -Let $A$ be a finite subset of $\Z^n.$ Consider the subspace $L_A$ of the Laurent polynomial algebra $\Complex \left [ x_1^{\pm 1}, \ldots, x_n^{\pm 1} \right ]$ consisting of Laurent polynomials whose exponents are in $A$. That is: -$$ -L_A = \left\{ f \ \middle|\ f(x) = \sum_{\alpha \in A} c_\alpha x^\alpha,\ c_\alpha \in \Complex \right\}, -$$ - -where for each $\alpha = (a_1, \ldots, a_n) \in \Z^n $ we have used the shorthand notation $x^\alpha$ to denote the monomial $ x_1^{a_1} \cdots x_n^{a_n}.$ - -Now take $n$ finite subsets $ A_1, \ldots, A_n$ of $\Z^n $, with the corresponding subspaces of Laurent polynomials, $L_{A_1}, \ldots, L_{A_n}.$ Consider a generic system of equations from these subspaces, that is: -$$ -f_1(x) = \cdots = f_n(x) = 0, -$$ - -where each $f_i$ is a generic element in the finite-dimensional vector space $L_{A_i}.$ - -The Bernstein–Kushnirenko theorem states that the number of solutions $x \in (\Complex \setminus 0)^n $ of such a system is equal to -$$ - n!V(\Delta_1, \ldots, \Delta_n), -$$ - -where $V$ denotes the Minkowski mixed volume and for each $i, \Delta_i$ is the convex hull of the finite set of points $A_i$. Clearly, $\Delta_i$ is a convex lattice polytope; it can be interpreted as the Newton polytope of a generic element of the subspace $L_{A_i}$. - -In particular, if all the sets $A_i$ are the same, $A = A_1 = \cdots = A_n,$ then the number of solutions of a generic system of Laurent polynomials from $L_A$ is equal to -$$ -n! \operatorname{vol} (\Delta), -$$ - -where $\Delta$ is the convex hull of $A$ and vol is the usual $n$-dimensional Euclidean volume. Note that even though the volume of a lattice polytope is not necessarily an integer, it becomes an integer after multiplying by $n!$. - -Kushnirenko's name is also spelt Kouchnirenko. David Bernstein is a brother of Joseph Bernstein. Askold Khovanskii has found about 15 different proofs of this theorem.
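For n = 2 the bound is easy to compute directly from areas: with the normalization used above (where the number of solutions is n!V), one has 2!V(Δ1, Δ2) = area(Δ1 + Δ2) − area(Δ1) − area(Δ2). The following is a minimal sketch (the helper names are mine, not from the article) that evaluates the Bernstein–Kushnirenko count for two Newton polygons; for a generic line and conic it reproduces the Bézout number 1 · 2 = 2.

```python
from itertools import product

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(points):
    """Convex hull via the monotone chain algorithm (counter-clockwise vertices)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for chain, seq in ((lower, pts), (upper, pts[::-1])):
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
    return lower[:-1] + upper[:-1]

def area(poly):
    """Shoelace area of a convex polygon."""
    return abs(sum(poly[i - 1][0] * poly[i][1] - poly[i][0] * poly[i - 1][1]
                   for i in range(len(poly)))) / 2.0

def bkk_count(A1, A2):
    """2! V(D1, D2) = area(D1 + D2) - area(D1) - area(D2) for Newton polygons."""
    d1, d2 = hull(A1), hull(A2)
    msum = hull([(p[0] + q[0], p[1] + q[1]) for p, q in product(d1, d2)])
    return area(msum) - area(d1) - area(d2)

# Exponent sets of a generic line (1, x, y) and a generic conic (1, x^2, y^2):
print(bkk_count([(0, 0), (1, 0), (0, 1)], [(0, 0), (2, 0), (0, 2)]))  # 2.0
```

Taking both exponent sets equal recovers the special case above: for the full degree-2 triangle the sketch returns 2!·vol(Δ) = 4, the Bézout number of two generic conics.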
diff --git a/wiki/wikipedia/2314.txt b/wiki/wikipedia/2314.txt deleted file mode 100644 index cad333c01603db4a944c49c937fa135bf53b708d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2314.txt +++ /dev/null @@ -1,28 +0,0 @@ -In mathematics, the Plancherel theorem (sometimes called the Parseval–Plancherel identity) is a result in harmonic analysis, proven by Michel Plancherel in 1910. It states that the integral of a function's squared modulus is equal to the integral of the squared modulus of its frequency spectrum. That is, if $f(x) $ is a function on the real line, and $\widehat{f}(\xi)$ is its frequency spectrum, then -$$ -\int_{-\infty}^\infty |f(x)|^2 dx = \int_{-\infty}^\infty |\widehat{f}(\xi)|^2 d\xi. -$$ - -A more precise formulation is that if a function is in both Lp spaces $L^1(\mathbb{R})$ and $L^2(\mathbb{R})$, then its Fourier transform is in $L^2(\mathbb{R})$, and the Fourier transform map is an isometry with respect to the L2 norm. This implies that the Fourier transform map restricted to $L^1(\mathbb{R}) \cap L^2(\mathbb{R})$ has a unique extension to a linear isometric map $L^2(\mathbb{R}) \to L^2(\mathbb{R})$, sometimes called the Plancherel transform. This isometry is actually a unitary map. In effect, this makes it possible to speak of Fourier transforms of quadratically integrable functions. - -Plancherel's theorem remains valid as stated on n-dimensional Euclidean space $\mathbb{R}^n$. The theorem also holds more generally in locally compact abelian groups. There is also a version of the Plancherel theorem which makes sense for non-commutative locally compact groups satisfying certain technical assumptions. This is the subject of non-commutative harmonic analysis. - -The unitarity of the Fourier transform is often called Parseval's theorem in science and engineering fields, based on an earlier (but less general) result that was used to prove the unitarity of the Fourier series. - -Due to the polarization identity, one can also apply Plancherel's theorem to the $L^2(\mathbb{R})$ inner product of two functions. That is, if $f(x)$ and $g(x)$ are two $L^2(\mathbb{R})$ functions, and $ \mathcal P$ denotes the Plancherel transform, then -$$ -\int_{-\infty}^\infty f(x)\overline{g(x)} dx = \int_{-\infty}^\infty (\mathcal P f)(\xi) \overline{(\mathcal P g)(\xi)} d\xi, -$$ - -and if $f(x)$ and $g(x)$ are furthermore $L^1(\mathbb{R})$ functions, then -$$ - (\mathcal P f)(\xi) = \widehat{f}(\xi) = \int_{-\infty}^\infty f(x) e^{-2\pi i \xi x} dx , -$$ - -and -$$ - (\mathcal P g)(\xi) = \widehat{g}(\xi) = \int_{-\infty}^\infty g(x) e^{-2\pi i \xi x} dx , -$$ - -so -$$ -\int_{-\infty}^\infty f(x)\overline{g(x)} dx = \int_{-\infty}^\infty \widehat{f}(\xi) \overline{\widehat{g}(\xi)} d\xi. -$$ diff --git a/wiki/wikipedia/2315.txt b/wiki/wikipedia/2315.txt deleted file mode 100644 index a65975b580ea32c2470284ef2d435f4eb9c0309d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2315.txt +++ /dev/null @@ -1,80 +0,0 @@ -In complex analysis, a complex-valued function ƒ of a complex variable z: - -* is said to be holomorphic at a point a if it is differentiable at every point within some open disk centered at a, and - -* is said to be analytic at a if in some open disk centered at a it can be expanded as a convergent power series -$$ -f(z)=\sum_{n=0}^\infty c_n(z-a)^n -$$ - -(this implies that the radius of convergence is positive). - -One of the most important theorems of complex analysis is that holomorphic functions are analytic.
Among the corollaries of this theorem are - -* the identity theorem that two holomorphic functions that agree at every point of an infinite set S with an accumulation point inside the intersection of their domains also agree everywhere in every connected open subset of their domains that contains the set S, and - -* the fact that, since power series are infinitely differentiable, so are holomorphic functions (this is in contrast to the case of real differentiable functions), and - -* the fact that the radius of convergence is always the distance from the center a to the nearest singularity; if there are no singularities (i.e., if ƒ is an entire function), then the radius of convergence is infinite. Strictly speaking, this is not a corollary of the theorem but rather a by-product of the proof. - -* the fact that no bump function on the complex plane can be entire. In particular, on any connected open subset of the complex plane, there can be no bump function defined on that set which is holomorphic on the set. This has important ramifications for the study of complex manifolds, as it precludes the use of partitions of unity. In contrast, partitions of unity are a tool that can be used on any real manifold. - -The argument, first given by Cauchy, hinges on Cauchy's integral formula and the power series expansion of the expression -$$ -\frac 1 {w-z} . -$$ - -Let D be an open disk centered at a and suppose ƒ is differentiable everywhere within an open neighborhood containing the closure of D. Let C be the positively oriented (i.e., counterclockwise) circle which is the boundary of D and let z be a point in D. Starting with Cauchy's integral formula, we have - -\begin{align}f(z) &{}= {1 \over 2\pi i}\int_C {f(w) \over w-z}\mathrm{d}w \\[10pt] - -&{}= {1 \over 2\pi i}\int_C {f(w) \over (w-a)-(z-a)} \mathrm{d}w \\[10pt] - -&{}={1 \over 2\pi i}\int_C {1 \over w-a}\cdot{1 \over 1-{z-a \over w-a}}f(w)\mathrm{d}w \\[10pt] - -&{}={1 \over 2\pi i}\int_C {1 \over w-a}\cdot{\sum_{n=0}^\infty\left({z-a \over w-a}\right)^n} f(w)\mathrm{d}w \\[10pt] - -&{}=\sum_{n=0}^\infty{1 \over 2\pi i}\int_C {(z-a)^n \over (w-a)^{n+1}} f(w)\mathrm{d}w.\end{align} - -Interchange of the integral and infinite sum is justified by observing that $f(w)/(w-a)$ is bounded on C by some positive number M, while for all w in C -$$ -\left|\frac{z-a}{w-a}\right|\leq r < 1 -$$ - -for some positive r as well. We therefore have -$$ -\left| {(z-a)^n \over (w-a)^{n+1} }f(w) \right| \le Mr^n, -$$ - -on C, and since the Weierstrass M-test shows that the series converges uniformly on C, the sum and the integral may be interchanged. - -As the factor (z − a)^n does not depend on the variable of integration w, it may be factored out to yield -$$ -f(z)=\sum_{n=0}^\infty (z-a)^n {1 \over 2\pi i}\int_C {f(w) \over (w-a)^{n+1}} \mathrm{d}w, -$$ - -which has the desired form of a power series in z: -$$ -f(z)=\sum_{n=0}^\infty c_n(z-a)^n -$$ - -with coefficients -$$ -c_n={1 \over 2\pi i}\int_C {f(w) \over (w-a)^{n+1}} \mathrm{d}w. -$$ - -* Since power series can be differentiated term-wise, applying the above argument in the reverse direction, together with the power series expression for -$$ - \frac 1 {(w-z)^{n+1}} -$$ - -gives -$$ -f^{(n)}(a) = {n! \over 2\pi i} \int_C {f(w) \over (w-a)^{n+1}} dw. -$$ - -This is a Cauchy integral formula for derivatives. Therefore the power series obtained above is the Taylor series of ƒ. - -* The argument works if z is any point that is closer to the center a than is any singularity of ƒ.
Therefore, the radius of convergence of the Taylor series cannot be smaller than the distance from a to the nearest singularity (nor can it be larger, since power series have no singularities in the interiors of their circles of convergence). - -* A special case of the identity theorem follows from the preceding remark. If two holomorphic functions agree on a (possibly quite small) open neighborhood U of a, then they coincide on the open disk Bd(a), where d is the distance from a to the nearest singularity. diff --git a/wiki/wikipedia/2316.txt b/wiki/wikipedia/2316.txt deleted file mode 100644 index 32594bfc3c3dbd1acdb3adad92ac226167394f40..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2316.txt +++ /dev/null @@ -1,240 +0,0 @@ -In additive number theory, Fermat's theorem on sums of two squares states that an odd prime p can be expressed as: -$$ -p = x^2 + y^2, -$$ - -with x and y integers, if and only if -$$ -p \equiv 1 \pmod{4}. -$$ - -The prime numbers for which this is true are called Pythagorean primes. - -For example, the primes 5, 13, 17, 29, 37 and 41 are all congruent to 1 modulo 4, and they can be expressed as sums of two squares in the following ways: -$$ -5 = 1^2 + 2^2, \quad 13 = 2^2 + 3^2, \quad 17 = 1^2 + 4^2, \quad 29 = 2^2 + 5^2, \quad 37 = 1^2 + 6^2, \quad 41 = 4^2 + 5^2. -$$ - -On the other hand, the primes 3, 7, 11, 19, 23 and 31 are all congruent to 3 modulo 4, and none of them can be expressed as the sum of two squares. This is the easier part of the theorem, and follows immediately from the observation that all squares are congruent to 0 or 1 modulo 4. - -Since the Diophantus identity implies that the product of two integers each of which can be written as the sum of two squares is itself expressible as the sum of two squares, by applying Fermat's theorem to the prime factorization of any positive integer n, we see that if all the prime factors of n congruent to 3 modulo 4 occur to an even exponent, then n is expressible as a sum of two squares. The converse also holds. This generalization of Fermat's theorem is known as the sum of two squares theorem. - -Albert Girard was the first to make the observation, describing all positive integers (not necessarily primes) expressible as the sum of two squares of positive integers; this was published in 1625. The statement that every prime p of the form 4n+1 is the sum of two squares is sometimes called Girard's theorem. For his part, Fermat wrote an elaborate version of the statement (in which he also gave the number of possible expressions of the powers of p as a sum of two squares) in a letter to Marin Mersenne dated December 25, 1640: for this reason this version of the theorem is sometimes called Fermat's Christmas theorem. - -Fermat's theorem on sums of two squares is closely related to the theory of Gaussian primes. - -A Gaussian integer is a complex number $a+ib$ such that a and b are integers. The norm $N(a+ib)=a^2+b^2$ of a Gaussian integer is an integer equal to the square of the absolute value of the Gaussian integer. The norm of a product of Gaussian integers is the product of their norms. This is the Diophantus identity, which results immediately from the similar property of the absolute value. - -Gaussian integers form a principal ideal domain. This implies that Gaussian primes can be defined similarly to prime numbers, that is as those Gaussian integers that are not the product of two non-units (here the units are 1, −1, i and −i).
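The statement is easy to test by brute force for small primes. The following minimal sketch is not from the article (the function names are mine): it searches for a representation p = x² + y² and checks that, for odd primes, one exists exactly when p ≡ 1 (mod 4), reproducing the examples above.

```python
def is_prime(n):
    """Trial division, adequate for this small demonstration."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def two_square_rep(p):
    """Return (x, y) with p = x^2 + y^2 and x <= y, or None if none exists."""
    x = 0
    while 2 * x * x <= p:
        y2 = p - x * x
        y = int(round(y2 ** 0.5))
        if y * y == y2:
            return (x, y)
        x += 1
    return None

for p in range(3, 43):
    if is_prime(p):
        rep = two_square_rep(p)
        assert (rep is not None) == (p % 4 == 1)  # Fermat's criterion, odd primes
        print(p, rep)  # 5 (1, 2), 13 (2, 3), 17 (1, 4), 29 (2, 5), 37 (1, 6), 41 (4, 5)
```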
- -The multiplicative property of the norm implies that a prime number p is either a Gaussian prime or the norm of a Gaussian prime. Fermat's theorem asserts that the first case occurs when $p=4k+3,$ and that the second case occurs when $p=4k+1$ or $p=2.$ The last case is not considered in Fermat's statement, but is trivial, as $2 = 1^2+1^2 =N(1+i).$ - -The above point of view on Fermat's theorem is a special case of the theory of factorization of ideals in rings of quadratic integers. In summary, if $\mathcal O_{\sqrt{d}}$ is the ring of algebraic integers in the quadratic field, then an odd prime number p, not dividing d, is either a prime element in $\mathcal O_{\sqrt{d}},$ or the ideal norm of an ideal of $\mathcal O_{\sqrt{d}},$ which is necessarily prime. Moreover, the law of quadratic reciprocity allows distinguishing the two cases in terms of congruences. If $\mathcal O_{\sqrt{d}}$ is a principal ideal domain, then p is an ideal norm if and only if -$$ -4p=a^2-db^2, -$$ - -with a and b both integers. - -In a letter to Blaise Pascal dated September 25, 1654, Fermat announced the following two results that are essentially the special cases $d=-2$ and $d=-3.$ If p is an odd prime, then -$$ -p = x^2 + 2y^2 \iff p\equiv 1\mbox{ or }p\equiv 3\pmod{8}, -$$ -$$ -p= x^2 + 3y^2 \iff p\equiv 1 \pmod{3}. -$$ - -Fermat also wrote: - -If two primes which end in 3 or 7 and surpass by 3 a multiple of 4 are multiplied, then their product will be composed of a square and the quintuple of another square. - -In other words, if p, q are of the form 20k + 3 or 20k + 7, then pq = x^2 + 5y^2. Euler later extended this to the conjecture that -$$ -p = x^2 + 5y^2 \iff p\equiv 1\mbox{ or }p\equiv 9\pmod{20}, -$$ -$$ -2p = x^2 + 5y^2 \iff p\equiv 3\mbox{ or }p\equiv 7\pmod{20}. -$$ - -Both Fermat's assertion and Euler's conjecture were established by Joseph-Louis Lagrange. This more complicated formulation relies on the fact that $\mathcal O_{\sqrt{-5}}$ is not a principal ideal domain, in contrast to $\mathcal O_{\sqrt{-2}}$ and $\mathcal O_{\sqrt{-3}}.$ - -There is a trivial algorithm for decomposing a prime of the form $p=4k+1$ into a sum of two squares: For all n such that $1\le n<\sqrt p $, test whether the square root of $p-n^2$ is an integer. If this is the case, one has the decomposition. - -However, the input size of the algorithm is $\log p,$ the number of digits of p (up to a constant factor that depends on the numeral base). The number of needed tests is of the order of $\sqrt p = \exp \left (\frac {\log p}2\right),$ and thus exponential in the input size. So the computational complexity of this algorithm is exponential. - -An algorithm with polynomial complexity has been described by Stan Wagon in 1990, based on work by Serret and Hermite (1848), and Cornacchia (1908). - -Given an odd prime $p$ in the form $4k+1$, first find $x$ such that $x^2\equiv-1 \pmod{p}$. This can be done by finding a quadratic non-residue modulo $p$, say $q$, and letting -$$ -x=q^\frac{p-1}{4} \pmod{p}. -$$ - -Such an $x$ will satisfy the condition since quadratic non-residues satisfy $q^\frac{p-1}{2}\equiv -1 \pmod{p}$. - -Once $x$ is determined, one can apply the Euclidean algorithm with $p$ and $x$. Denote the first two remainders that are less than the square root of $p$ as $a$ and $b$. Then it will be the case that $a^2+b^2=p$. - -Take $p = 97$. A possible quadratic non-residue for 97 is 13, since $13^\frac{97-1}{2}\equiv-1 \pmod{97}$. So we let $x=13^\frac{97-1}{4}\equiv 22 \pmod{97}$.
- -The Euclidean algorithm applied to 97 and 22 yields: - -97 = 22(4) + 9, - -22 = 9(2) + 4, - -9 = 4(2) + 1, - -4 = 1(4). - -The first two remainders smaller than the square root of 97 are 9 and 4, and indeed we have $97 = 9^2 + 4^2$, as expected. - -Fermat usually did not write down proofs of his claims, and he did not provide a proof of this statement. The first proof was found by Euler after much effort and is based on infinite descent. He announced it in two letters to Goldbach, on May 6, 1747 and on April 12, 1749; he published the detailed proof in two articles (between 1752 and 1755). Lagrange gave a proof in 1775 that was based on his study of quadratic forms. This proof was simplified by Gauss in his Disquisitiones Arithmeticae (art. 182). Dedekind gave at least two proofs based on the arithmetic of the Gaussian integers. There is an elegant proof using Minkowski's theorem about convex sets. Simplifying an earlier short proof due to Heath-Brown (who was inspired by Liouville's idea), Zagier presented a non-constructive one-sentence proof in 1990. - -More recently, Christopher gave a partition-theoretic proof. - -Euler succeeded in proving Fermat's theorem on sums of two squares in 1749, when he was forty-two years old. He communicated this in a letter to Goldbach dated 12 April 1749. The proof relies on infinite descent, and is only briefly sketched in the letter. The full proof consists of five steps and was published in two papers. The first four steps are Propositions 1 to 4 of the first paper and do not correspond exactly to the four steps below. The fifth step below is from the second paper. - -For the avoidance of ambiguity, zero will always be a valid possible constituent of "sums of two squares", so for example every square of an integer is trivially expressible as the sum of two squares by setting one of them to be zero. - -1. The product of two numbers, each of which is a sum of two squares, is itself a sum of two squares. - -This is a well-known property, based on the identity -$$ -(a^2+b^2)(p^2+q^2) = (ap+bq)^2 + (aq-bp)^2 -$$ - -due to Diophantus. - -2. If a number which is a sum of two squares is divisible by a prime which is a sum of two squares, then the quotient is a sum of two squares. - -(This is Euler's first Proposition). - -Indeed, suppose for example that $a^2 + b^2$ is divisible by $p^2+q^2$ and that this latter is a prime. Then $p^2 + q^2$ divides -$$ -(pb-aq)(pb+aq) = p^2b^2 - a^2q^2 = p^2(a^2+b^2) - a^2(p^2+q^2). -$$ - -Since $p^2+q^2$ is a prime, it divides one of the two factors. Suppose that it divides $pb-aq$. Since -$$ -(a^2+b^2)(p^2+q^2) = (ap+bq)^2 + (aq-bp)^2 -$$ - -(Diophantus's identity) it follows that $p^2+q^2$ must divide $(ap+bq)^2$. So the equation can be divided by the square of $p^2+q^2$. Dividing the expression by $(p^2+q^2)^2$ yields: -$$ -\frac{a^2+b^2}{p^2+q^2} = \left(\frac{ap+bq}{p^2+q^2}\right)^2 + \left(\frac{aq-bp}{p^2+q^2}\right)^2 -$$ - -and thus expresses the quotient as a sum of two squares, as claimed. - -On the other hand, if $p^2+q^2$ divides $pb+aq$, a similar argument holds by using the following variant of Diophantus's identity: -$$ -(a^2+b^2)(q^2+p^2) = (aq+bp)^2 + (ap-bq)^2 . -$$ - -3. If a number which can be written as a sum of two squares is divisible by a number which is not a sum of two squares, then the quotient has a factor which is not a sum of two squares. (This is Euler's second Proposition). - -Suppose $q$ is a number not expressible as a sum of two squares, which divides $a^2+b^2$.
Write the quotient, factored into its (possibly repeated) prime factors, as $p_1p_2\cdots p_n$ so that $a^2+b^2 = q p_1p_2\cdots p_n$. If all factors $p_i$ can be written as sums of two squares, then we can divide $a^2+b^2$ successively by $p_1$, $p_2$, etc., and applying step (2.) above we deduce that each successive, smaller, quotient is a sum of two squares. If we could get all the way down to $q$, then $q$ itself would have to be a sum of two squares, which is a contradiction. So at least one of the primes $p_i$ is not the sum of two squares. - -4. If $a$ and $b$ are relatively prime positive integers, then every factor of $a^2 + b^2$ is a sum of two squares. - -(This is the step that uses step (3.) to produce an 'infinite descent' and was Euler's Proposition 4. The proof sketched below also includes the proof of his Proposition 3). - -Let $a,b$ be relatively prime positive integers: without loss of generality $a^2+b^2$ is not itself prime, otherwise there is nothing to prove. Let $q$ therefore be a proper factor of $a^2+b^2$, not necessarily prime: we wish to show that $q$ is a sum of two squares. Again, we lose nothing by assuming $ q > 2 $ since the case $ q = 2 = 1^2 + 1^2 $ is obvious. - -Let $m,n$ be non-negative integers such that $mq,nq$ are the closest multiples of $q$ (in absolute value) to $a,b$ respectively. Notice that the differences $c = a-mq$ and $d = b-nq$ are integers of absolute value strictly less than $q/2$: indeed, when $ q > 2 $ is even, gcd$(a,q/2)=1$; otherwise since gcd$(a,q/2) \mid q/2 \mid q \mid a^2+b^2$, we would also have gcd$(a,q/2) \mid b$. - -Multiplying out we obtain -$$ -a^2 + b^2 = m^2q^2 + 2mqc + c^2 + n^2q^2 + 2nqd + d^2 = Aq + (c^2+d^2) -$$ - -uniquely defining a non-negative integer $A$. Since $q$ divides both ends of this equation sequence, it follows that $c^2+d^2$ must also be divisible by $q$: say $c^2+d^2 = qr$. Let $g$ be the gcd of $c$ and $d$, which, by the co-primeness of $a$ and $b$, is relatively prime to $q$. Thus $g^2$ divides $r$, so writing $e = c/g$, $f = d/g$ and $s = r/g^2$, we obtain the expression $e^2+f^2 = qs$ for relatively prime $e$ and $f$, and with $ s < q/2 $, since -$$ -qs = e^2 + f^2 \leq c^2+d^2 < \left(\frac{q}{2}\right)^2 + \left(\frac{q}{2}\right)^2 = q^2/2. -$$ - -Now finally, the descent step: if $q$ is not the sum of two squares, then by step (3.) there must be a factor $q_1$ say of $s$ which is not the sum of two squares. But $q_1 \leq s < q/2 < q $ and so repeating these steps (initially with $e,f;q_1$ in place of $a,b;q$, and so on ad infinitum) we shall be able to find a strictly decreasing infinite sequence $q, q_1, q_2, \ldots $ of positive integers which are not themselves the sums of two squares but which divide into a sum of two relatively prime squares. Since such an infinite descent is impossible, we conclude that $q$ must be expressible as a sum of two squares, as claimed. - -5. Every prime of the form $4n+1$ is a sum of two squares. - -(This is the main result of Euler's second paper). - -If $p=4n+1$, then by Fermat's Little Theorem each of the numbers $1, 2^{4n}, 3^{4n},\dots, (4n)^{4n}$ is congruent to one modulo $p$. The differences $2^{4n}-1, 3^{4n}-2^{4n},\dots,(4n)^{4n}-(4n-1)^{4n}$ are therefore all divisible by $p$. Each of these differences can be factored as -$$ -a^{4n}-b^{4n} = \left(a^{2n}+b^{2n}\right)\left(a^{2n}-b^{2n}\right). -$$ - -Since $p$ is prime, it must divide one of the two factors.
If in any of the $4n-1$ cases it divides the first factor, then by the previous step we conclude that $p$ is itself a sum of two squares (since $a$ and $b$ differ by $1$, they are relatively prime). So it is enough to show that $p$ cannot always divide the second factor. If it divides all $4n-1$ differences $2^{2n}-1, 3^{2n}-2^{2n},\dots,(4n)^{2n}-(4n-1)^{2n}$, then it would divide all $4n-2$ differences of successive terms, all $4n-3$ differences of the differences, and so forth. Since the $k$th differences of the sequence $1^k, 2^k, 3^k,\dots$ are all equal to $k!$ (see finite differences), the $2n$th differences would all be constant and equal to $(2n)!$, which is certainly not divisible by $p$. Therefore, $p$ cannot divide all the second factors, which proves that $p$ is indeed the sum of two squares. - -Lagrange completed a proof in 1775 based on his general theory of integral quadratic forms. The following presentation incorporates a slight simplification of his argument, due to Gauss, which appears in article 182 of the Disquisitiones Arithmeticae. - -An (integral binary) quadratic form is an expression of the form $ax^2 + bxy + cy^2$ with $a,b,c$ integers. A number $n$ is said to be represented by the form if there exist integers $x,y$ such that $n = ax^2 + bxy + cy^2$. Fermat's theorem on sums of two squares is then equivalent to the statement that a prime $p$ is represented by the form $x^2 + y^2$ (i.e., $a=c=1$, $b=0$) exactly when $p$ is congruent to $1$ modulo $4$. - -The discriminant of the quadratic form is defined to be $b^2 - 4ac$. The discriminant of $x^2 + y^2$ is then equal to $-4$. - -Two forms $ ax^2 + bxy + cy^2 $ and $ a'x'^2 + b'x'y' + c'y'^2 $ are equivalent if and only if there exist substitutions with integer coefficients -$$ - x = \alpha x' + \beta y' -$$ -$$ - y = \gamma x' + \delta y' -$$ - -with $\alpha\delta - \beta\gamma = \pm 1$ that, when substituted into the first form, yield the second. Equivalent forms are readily seen to have the same discriminant, and hence also the same parity for the middle coefficient $ b $, which coincides with the parity of the discriminant. Moreover, it is clear that equivalent forms will represent exactly the same integers, because these kinds of substitutions can be reversed by substitutions of the same kind. - -Lagrange proved that all positive definite forms of discriminant −4 are equivalent. Thus, to prove Fermat's theorem it is enough to find any positive definite form of discriminant −4 that represents $p$. For example, one can use a form -$$ - px^2 + 2mxy + \left(\frac{m^2+1}{p}\right)y^2, -$$ - -where the first coefficient a = $p$ was chosen so that the form represents $p$ by setting x = 1, and y = 0, the coefficient b = 2m is an arbitrary even number (as it must be, to get an even discriminant), and finally $ c=\frac{m^2+1}{p} $ is chosen so that the discriminant $ b^2-4ac=4m^2-4pc $ is equal to −4, which guarantees that the form is indeed equivalent to $ x^2+y^2 $. Of course, the coefficient $ c=\frac{m^2+1}{p} $ must be an integer, so the problem is reduced to finding some integer m such that $p$ divides $m^2+1$: or in other words, a 'square root of −1 modulo $p$'. - -We claim such a square root of $-1$ is given by $ K = \prod_{k=1}^\frac{p-1}{2} k $. Firstly, it follows from Euclid's Fundamental Theorem of Arithmetic that $ ab \equiv 0 \pmod p \iff a \equiv 0 \pmod p \ \ \hbox{or}\ \ b \equiv 0 \pmod p $.
Consequently, $ a^2 \equiv 1 \pmod p \iff a \equiv \pm 1 \pmod p $: that is, $ \pm 1 $ are their own inverses modulo $p$ and this property is unique to them. It then follows from the validity of Euclidean division in the integers, and the fact that $p$ is prime, that for every $ 2 \leq a \leq p-2 $ the gcd of $a$ and $p$ may be expressed via the Euclidean algorithm, yielding a unique and distinct inverse $ a^{-1} \neq a $ of $a$ modulo $p$. In particular, therefore, the product of all non-zero residues modulo $p$ is $-1$. Let $ L = \prod_{l=\frac{p+1}{2}}^{p-1} l $: from what has just been observed, $ KL \equiv -1 \pmod p $. But by definition, since each term in $K$ may be paired with its negative in $L$, $ L = (-1)^\frac{p-1}{2}K $, which since $p$ is odd shows that $ K^2 \equiv -1 \pmod p \iff p \equiv 1 \pmod 4 $, as required.
-
-Richard Dedekind gave at least two proofs of Fermat's theorem on sums of two squares, both using the arithmetical properties of the Gaussian integers, which are numbers of the form $a + bi$, where $a$ and $b$ are integers, and $i$ is the square root of −1. One appears in section 27 of his exposition of ideals published in 1877; the second appeared in Supplement XI to Peter Gustav Lejeune Dirichlet's Vorlesungen über Zahlentheorie, and was published in 1894.
-
-1. First proof. If $p$ is an odd prime number, then we have $i^{p-1} = (-1)^{\frac{p-1}{2}}$ in the Gaussian integers. Consequently, writing a Gaussian integer ω = x + iy with x,y ∈ Z and applying the Frobenius automorphism in Z[i]/(p), one finds
-$$
-\omega^p = (x+yi)^p \equiv x^p+y^pi^p \equiv x + (-1)^{\frac{p-1}{2}}yi \pmod{p},
-$$
-
-since the automorphism fixes the elements of Z/(p). In the current case, $p=4n+1$ for some integer $n$, and so in the above expression for $\omega^p$, the exponent $(p-1)/2$ of $-1$ is even. Hence the right hand side equals ω, so in this case the Frobenius endomorphism of Z[i]/(p) is the identity.
-
-Kummer had already established that if f ∈ {1,2} is the order of the Frobenius automorphism of Z[i]/(p), then the ideal $(p)$ in Z[i] would be a product of 2/f distinct prime ideals. (In fact, Kummer had established a much more general result for any extension of Z obtained by adjoining a primitive m-th root of unity, where m was any positive integer; this is the case m = 4 of that result.) Therefore, the ideal (p) is the product of two different prime ideals in Z[i]. Since the Gaussian integers are a Euclidean domain for the norm function $N(x + iy)=x^2+y^2$, every ideal is principal and generated by a nonzero element of the ideal of minimal norm. Since the norm is multiplicative, the norm of a generator $\alpha$ of one of the ideal factors of (p) must be a strict divisor of $N(p) = p^2$, so that we must have $ p = N(\alpha) = N(a+bi) = a^2 + b^2$, which gives Fermat's theorem.
-
-2. Second proof. This proof builds on Lagrange's result that if $p=4n+1$ is a prime number, then there must be an integer $m$ such that $m^2 + 1$ is divisible by $p$ (we can also see this by Euler's criterion); it also uses the fact that the Gaussian integers are a unique factorization domain (because they are a Euclidean domain). Since $p \in \mathbb{Z}$ does not divide either of the Gaussian integers $m + i$ and $m-i$ (as it does not divide their imaginary parts), but it does divide their product $m^2 + 1$, it follows that $p$ cannot be a prime element in the Gaussian integers.
We must therefore have a nontrivial factorization of $p$ in the Gaussian integers, which in view of the norm can have only two factors (since the norm is multiplicative, and $p^2 = N(p)$, there can only be up to two factors of $p$), so it must be of the form $p = (x+yi)(x-yi)$ for some integers $x$ and $y$. This immediately yields that $p = x^2 + y^2$.
-
-For $p$ congruent to $1$ mod $4$ a prime, $-1$ is a quadratic residue mod $p$ by Euler's criterion. Therefore, there exists an integer $m$ such that $p$ divides $m^2+1$. Let $\hat{i}, \hat{j}$ be the standard basis elements for the vector space $\mathbb{R}^2$ and set $\vec{u} = \hat{i} + m\hat{j}$ and $\vec{v} = 0\hat{i} + p\hat{j}$. Consider the lattice $S = \{a\vec{u} + b\vec{v} \mid a, b \in \mathbb Z\}$. If $\vec{w} = a\vec{u} + b\vec{v} = a \hat{i} + (am + bp)\hat{j} \in S$ then $\|\vec{w}\|^2 \equiv a^2 + (am+bp)^2 \equiv a^2(1 + m^2) \equiv 0\pmod{p}$. Thus $p$ divides $\|\vec{w}\|^2$ for any $\vec{w} \in S$.
-
-The area of the fundamental parallelogram of the lattice is $p$. The area of the open disk, $D$, of radius $\sqrt{2p}$ centered around the origin is $2 \pi p > 4p$. Furthermore, $D$ is convex and symmetrical about the origin. Therefore, by Minkowski's theorem there exists a nonzero vector $\vec{w} \in S$ such that $\vec{w} \in D$. Both $\|\vec{w}\|^2 < 2p$ and $p \mid \|\vec{w}\|^2$ so $p = \|\vec{w}\|^2$. Hence $p$ is the sum of the squares of the components of $\vec{w}$.
-
-Let $p=4k+1$ be prime, let $\mathbb{N}$ denote the natural numbers (with or without zero), and consider the finite set $S=\{(x,y,z)\in\mathbb{N}^3: x^2+4yz=p\}$ of triples of numbers.
-
-Then $S$ has two involutions: an obvious one $(x,y,z)\mapsto (x,z,y)$ whose fixed points $(x,y,y)$ correspond to representations of $p$ as a sum of two squares, and a more complicated one,
-$$
-(x,y,z)\mapsto \begin{cases} (x+2z,~z,~y-x-z) & \text{if } x < y-z, \\ (2y-x,~y,~x-y+z) & \text{if } y-z < x < 2y, \\ (x-2y,~x-y+z,~y) & \text{if } x > 2y, \end{cases}
-$$
-
-which has exactly one fixed point, $(1,1,k)$. Two involutions over the same finite set must have sets of fixed points with the same parity, and since the second involution has an odd number of fixed points, so does the first.
-
-Zero is even, so the first involution has a nonzero number of fixed points, any one of which gives a representation of $p$ as a sum of two squares.
-
-This proof, due to Zagier, is a simplification of an earlier proof by Heath-Brown, which in turn was inspired by a proof of Liouville. The technique of the proof is a combinatorial analogue of the topological principle that the Euler characteristics of a topological space with an involution and of its fixed-point set have the same parity, and is reminiscent of the use of sign-reversing involutions in the proofs of combinatorial bijections.
-
-This proof is equivalent to a geometric or "visual" proof using "windmill" figures, given by Alexander Spivak in 2006 and described in Mathologer YouTube videos.
-
-In 2016, A. David Christopher gave a partition-theoretic proof by considering partitions of the odd prime $n$ having exactly two sizes $a_i (i=1,2)$, each occurring exactly $a_i$ times, and by showing that at least one such partition exists if $n$ is congruent to 1 modulo 4.
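The ingredients above assemble into a short, effective procedure. The following Python sketch is our own illustration, not part of any of the proofs: it uses Wilson's theorem to produce a square root of $-1$ modulo $p$, as in Lagrange's argument, and then descends with the Euclidean algorithm (a method classically attributed to Serret and Hermite) to recover the two squares.

```python
from math import factorial, isqrt

def sum_of_two_squares(p):
    """Return (a, b) with a*a + b*b == p for a prime p with p % 4 == 1.

    m = ((p-1)/2)! is a square root of -1 mod p by Wilson's theorem;
    running the Euclidean algorithm on (p, m) until the remainder falls
    below sqrt(p) produces one leg of the representation.
    """
    assert p % 4 == 1
    m = factorial((p - 1) // 2) % p   # m*m = -1 (mod p); slow but explicit
    r0, r1 = p, m
    while r1 * r1 > p:                # descend until r1 < sqrt(p)
        r0, r1 = r1, r0 % r1
    a = r1
    b = isqrt(p - a * a)
    assert a * a + b * b == p
    return a, b

print(sum_of_two_squares(13))   # (3, 2), since 13 = 9 + 4
print(sum_of_two_squares(29))   # (5, 2), since 29 = 25 + 4
```

For large $p$ one would obtain the square root of $-1$ by exponentiating a quadratic non-residue instead, but the factorial keeps the connection to the $K = \prod_{k=1}^{(p-1)/2} k$ of the text explicit.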
diff --git a/wiki/wikipedia/2317.txt b/wiki/wikipedia/2317.txt deleted file mode 100644 index c0e158f24afab51b63be7e87f47909f80f29c180..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2317.txt +++ /dev/null @@ -1,19 +0,0 @@
-The tetralemma is a figure that features prominently in the logic of India.
-
-It states that with reference to any logical proposition X, there are four possibilities:
-$$
-X
-$$ (affirmation)
-$$
-\neg X
-$$ (negation)
-$$
-X \land\neg X
-$$ (both)
-$$
-\neg (X \lor \neg X)
-$$ (neither)
-
-The history of fourfold negation, the Catuskoti (Sanskrit), is evident in the logico-epistemological tradition of India, given the categorical nomenclature Indian logic in Western discourse. Subsumed within the auspices of Indian logic, 'Buddhist logic' has been particularly focused in its employment of the fourfold negation, as evidenced by the traditions of Nagarjuna and the Madhyamaka, particularly the school of Madhyamaka given the retroactive nomenclature of Prasangika by the Tibetan Buddhist logico-epistemological tradition. The tetralemma was also used as a form of inquiry, rather than of logic, in the Nasadiya Sukta of the Rigveda (the creation hymn), though it seems to have been rarely used as a tool of logic before Buddhism.
-
-A variant of the tetralemma is used in the Ancient Greek philosophical schools of Democritus and Pyrrhonism. Pyrrho includes it in his summary of his teachings, and Sextus Empiricus includes it among the Pyrrhonist Maxims. diff --git a/wiki/wikipedia/2318.txt b/wiki/wikipedia/2318.txt deleted file mode 100644 index 510edd29f3db5c9317597ac13be6cefa74c4f749..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2318.txt +++ /dev/null @@ -1,53 +0,0 @@
-In number theory, Vinogradov's theorem is a result which implies that any sufficiently large odd integer can be written as a sum of three prime numbers. It is a weaker form of Goldbach's weak conjecture, which would imply the existence of such a representation for all odd integers greater than five. It is named after Ivan Matveyevich Vinogradov, who proved it in the 1930s. Hardy and Littlewood had shown earlier that this result followed from the generalized Riemann hypothesis, and Vinogradov was able to remove this assumption. The full statement of Vinogradov's theorem gives asymptotic bounds on the number of representations of an odd integer as a sum of three primes. The notion of "sufficiently large" was ill-defined in Vinogradov's original work, but in 2002 it was shown that $10^{1346}$ is sufficiently large. Additionally, numbers up to $10^{20}$ have been checked via brute force methods, thus only a finite number of cases to check remain before the odd Goldbach conjecture is proven or disproven.
-
-Let A be a positive real number. Then
-$$
-r(N)={1\over 2}G(N)N^2+O\left(N^2\log^{-A}N\right),
-$$
-
-where
-$$
-r(N)=\sum_{k_1+k_2+k_3=N}\Lambda(k_1)\Lambda(k_2)\Lambda(k_3),
-$$
-
-using the von Mangoldt function $\Lambda$, and
-$$
-G(N)=\left(\prod_{p\mid N}\left(1-{1\over{\left(p-1\right)}^2}\right)\right)\left(\prod_{p\nmid N}\left(1+{1\over{\left(p-1\right)}^3}\right)\right).
-$$
-
-If N is odd, then G(N) is roughly 1, hence $N^2 \ll r(N)$ for all sufficiently large N. By showing that the contribution made to r(N) by proper prime powers is $O\left(N^{3\over 2}\log^2N\right)$, one sees that
-$$
-N^2\log^{-3}N\ll\left(\hbox{number of ways N can be written as a sum of three primes}\right).
-$$
-
-This means in particular that any sufficiently large odd integer can be written as a sum of three primes, thus showing Goldbach's weak conjecture for all but finitely many cases. In 2013, Harald Helfgott proved Goldbach's weak conjecture for all cases.
-
-The proof of the theorem follows the Hardy–Littlewood circle method. Define the exponential sum
-$$
-S(\alpha)=\sum_{n=1}^N\Lambda(n)e(\alpha n)
-$$.
-
-Then we have
-$$
-S(\alpha)^3 = \sum_{n_1, n_2, n_3\leq N}\Lambda(n_1)\Lambda(n_2)\Lambda(n_3)e(\alpha(n_1+n_2+n_3)) = \sum_{n\leq 3N} \tilde{r}(n)e(\alpha n),
-$$
-
-where $\tilde{r}$ denotes the number of representations restricted to prime powers $\leq N$. Hence
-$$
- r(N) = \int_0^1 S(\alpha)^3 e(-\alpha N)d\alpha
-$$.
-
-If $\alpha$ is a rational number $\frac{p}{q}$, then $S(\alpha)$ can be given by the distribution of prime numbers in residue classes modulo $q$. Hence, using the Siegel-Walfisz theorem we can compute the contribution of the above integral in small neighbourhoods of rational points with small denominator. The set of real numbers close to such rational points is usually referred to as the major arcs; the complement forms the minor arcs. The major arcs dominate the integral, hence to prove the theorem one has to give an upper bound for $S(\alpha)$ for $\alpha$ contained in the minor arcs. This estimate is the most difficult part of the proof.
-
-If we assume the Generalized Riemann Hypothesis, the argument used for the major arcs can be extended to the minor arcs. This was done by Hardy and Littlewood in 1923. In 1937 Vinogradov gave an unconditional upper bound for $|S(\alpha)|$. His argument began with a simple sieve identity; the resulting terms were then rearranged in a complicated way to obtain some cancellation. In 1977 R. C. Vaughan found a much simpler argument, based on what later became known as Vaughan's identity. He proved that if $|\alpha-\frac{a}{q}|<\frac{1}{q^2}$, then
-$$
- |S(\alpha)|\ll \left(\frac{N}{\sqrt{q}} + N^{4/5}+\sqrt{Nq}\right)\log^4 N
-$$.
-
-Using the Siegel-Walfisz theorem we can deal with $q$ up to arbitrary powers of $\log N$; using Dirichlet's approximation theorem we obtain $|S(\alpha)|\ll\frac{N}{\log^A N}$ on the minor arcs. Hence the integral over the minor arcs can be bounded above by
-$$
-\frac{CN}{\log^A N}\int_0^1|S(\alpha)|^2d\alpha \ll \frac{N^2}{\log^{A-1} N}
-$$,
-
-which gives the error term in the theorem. diff --git a/wiki/wikipedia/2319.txt b/wiki/wikipedia/2319.txt deleted file mode 100644 index 816310d81ccbc17bcff55d01925d9c39055a31fe..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2319.txt +++ /dev/null @@ -1,22 +0,0 @@
-In mathematics, Clifford's theorem on special divisors is a result of W. K. Clifford on algebraic curves, showing the constraints on special linear systems on a curve C.
-
-A divisor on a Riemann surface C is a formal sum $\textstyle D = \sum_P m_P P$ of points P on C with integer coefficients. One considers a divisor as a set of constraints on meromorphic functions in the function field of C, defining $L(D)$ as the vector space of functions having poles only at points of D with positive coefficient, at most as bad as the coefficient indicates, and having zeros at points of D with negative coefficient, with at least that multiplicity. The dimension of $L(D)$ is finite, and denoted $\ell(D)$. The linear system of divisors attached to D is the corresponding projective space of dimension $\ell(D)-1$.
-
-The other significant invariant of D is its degree d, which is the sum of all its coefficients.
-
-A divisor is called special if $\ell(K - D) > 0$, where K is the canonical divisor.
-
-Clifford's theorem states that for an effective special divisor D, one has:
-$$
-2(\ell(D)- 1) \le d
-$$,
-
-and that equality holds only if D is zero or a canonical divisor, or if C is a hyperelliptic curve and D is linearly equivalent to an integral multiple of a hyperelliptic divisor.
-
-The Clifford index of C is then defined as the minimum of $d - 2(\ell(D) - 1)$ taken over all special divisors (except canonical and trivial), and Clifford's theorem states this is non-negative. It can be shown that the Clifford index for a generic curve of genus g is equal to $\lfloor\tfrac{g-1}{2}\rfloor$.
-
-The Clifford index measures how far the curve is from being hyperelliptic. It may be thought of as a refinement of the gonality: in many cases the Clifford index is equal to the gonality minus 2.
-
-A conjecture of Mark Green states that the Clifford index for a curve over the complex numbers that is not hyperelliptic should be determined by the extent to which C as canonical curve has linear syzygies. In detail, one defines the invariant a(C) in terms of the minimal free resolution of the homogeneous coordinate ring of C in its canonical embedding, as the largest index i for which the graded Betti number $\beta_{i, i + 2}$ is zero. Green and Robert Lazarsfeld showed that a(C) + 1 is a lower bound for the Clifford index, and Green's conjecture states that equality always holds. There are numerous partial results.
-
-Claire Voisin was awarded the Ruth Lyttle Satter Prize in Mathematics for her solution of the generic case of Green's conjecture in two papers. The case of Green's conjecture for generic curves had attracted a huge amount of effort by algebraic geometers over twenty years before finally being laid to rest by Voisin. The conjecture for arbitrary curves remains open. diff --git a/wiki/wikipedia/232.txt b/wiki/wikipedia/232.txt deleted file mode 100644 index c6e373ed8433bfaac2a8785d7dbb0993d3134733..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/232.txt +++ /dev/null @@ -1,39 +0,0 @@
-The fallacies of distributed computing are a set of assertions made by L. Peter Deutsch and others at Sun Microsystems describing false assumptions that programmers new to distributed applications invariably make.
-
-The fallacies are:
-
-# The network is reliable;
-
-# Latency is zero;
-
-# Bandwidth is infinite;
-
-# The network is secure;
-
-# Topology doesn't change;
-
-# There is one administrator;
-
-# Transport cost is zero;
-
-# The network is homogeneous.
-
-* Software applications are written with little error-handling on networking errors. During a network outage, such applications may stall or infinitely wait for an answer packet, permanently consuming memory or other resources. When the failed network becomes available, those applications may also fail to retry any stalled operations or require a (manual) restart.
-
-* Ignorance of network latency, and of the packet loss it can cause, induces application- and transport-layer developers to allow unbounded traffic, greatly increasing dropped packets and wasting bandwidth.
-
-* Ignorance of bandwidth limits on the part of traffic senders can result in bottlenecks.
-
-* Complacency regarding network security results in being blindsided by malicious users and programs that continually adapt to security measures.
-
-* Changes in network topology can affect both bandwidth and latency, and can therefore cause similar problems.
-
-* Multiple administrators, as with subnets for rival companies, may institute conflicting policies of which senders of network traffic must be aware in order to complete their desired paths.
-
-* The "hidden" costs of building and maintaining a network or subnet are non-negligible and must consequently be noted in budgets to avoid vast shortfalls.
-
-* If a system assumes a homogeneous network, then it can lead to the same problems that result from the first three fallacies.
-
-The list of fallacies generally came about at Sun Microsystems. L. Peter Deutsch, one of the original Sun "Fellows", is credited with penning the first seven fallacies in 1994; however, Bill Joy and Tom Lyon had already identified the first four as "The Fallacies of Networked Computing"
-
-(the article claims "Dave Lyon", but this is a mistake). Around 1997, James Gosling, another Sun Fellow and the inventor of Java, added the eighth fallacy. diff --git a/wiki/wikipedia/2320.txt b/wiki/wikipedia/2320.txt deleted file mode 100644 index 450197f4ae988333cb3624b1cdd73702664b1e3b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2320.txt +++ /dev/null @@ -1,13 +0,0 @@
-In graph theory, a degree-constrained spanning tree is a spanning tree where the maximum vertex degree is limited to a certain constant k. The degree-constrained spanning tree problem is to determine whether a particular graph has such a spanning tree for a particular k.
-
-Input: n-node undirected graph G(V,E); positive integer k < n.
-
-Question: Does G have a spanning tree in which no node has degree greater than k?
-
-This problem is NP-complete. This can be shown by a reduction from the Hamiltonian path problem. It remains NP-complete even if k is fixed to a value ≥ 2. If the problem is defined as the degree must be ≤ k, the k = 2 case of the degree-constrained spanning tree is the Hamiltonian path problem.
-
-On a weighted graph, a degree-constrained minimum spanning tree (DCMST) is a degree-constrained spanning tree in which the sum of the edge weights is as small as possible. Finding a DCMST is an NP-hard problem.
-
-Heuristic algorithms that can solve the problem in polynomial time have been proposed, including genetic and ant-based algorithms.
-
-Fürer and Raghavachari give an iterative polynomial time algorithm which, given a graph $G$, returns a spanning tree with maximum degree no larger than $\Delta^* + 1$, where $\Delta^*$ is the minimum possible maximum degree over all spanning trees. Thus, if $k = \Delta^*$, such an algorithm will either return a spanning tree of maximum degree $k$ or $k+1$. diff --git a/wiki/wikipedia/2321.txt b/wiki/wikipedia/2321.txt deleted file mode 100644 index 83ec66f1a30e41695be5a7fc0c7d37c9bfa90782..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2321.txt +++ /dev/null @@ -1,53 +0,0 @@
-Carleman's inequality is an inequality in mathematics, named after Torsten Carleman, who proved it in 1923 and used it to prove the Denjoy-Carleman theorem on quasi-analytic classes.
-
-Let $a_1, a_2, a_3, \dots$ be a sequence of non-negative real numbers; then
-$$
- \sum_{n=1}^\infty \left(a_1 a_2 \cdots a_n\right)^{1/n} \le e \sum_{n=1}^\infty a_n.
-$$
-
-The constant e in the inequality is optimal, that is, the inequality does not always hold if e is replaced by a smaller number. The inequality is strict (it holds with "<" instead of "≤") if some element in the sequence is non-zero.
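A numerical illustration may help. The sketch below is ours, with an arbitrarily chosen sequence $a_n = 2^{-n}$; it compares truncated versions of both sides, accumulating the geometric means through logarithms so the running product does not underflow.

```python
import math

# Truncated Carleman sums for a_n = 2^(-n): the left side converges to
# about 1.707, comfortably below e * sum(a_n) = e.
N = 200
lhs, log_prod, rhs_sum = 0.0, 0.0, 0.0
for n in range(1, N + 1):
    a_n = 2.0 ** (-n)
    log_prod += math.log(a_n)        # log(a_1 * ... * a_n)
    lhs += math.exp(log_prod / n)    # (a_1 * ... * a_n)^(1/n)
    rhs_sum += a_n
print(lhs, "<=", math.e * rhs_sum)   # 1.7071... <= 2.7182...
```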
-
-Carleman's inequality has an integral version, which states that
-$$
- \int_0^\infty \exp\left\{ \frac{1}{x} \int_0^x \ln f(t) dt \right\} dx \leq e \int_0^\infty f(x) dx
-$$
-
-for any $f \ge 0$.
-
-A generalisation, due to Lennart Carleson, states the following:
-
-for any convex function g with g(0) = 0, and for any -1 < p < ∞,
-$$
- \int_0^\infty x^p e^{-g(x)/x} dx \leq e^{p+1} \int_0^\infty x^p e^{-g'(x)} dx.
-$$
-
-Carleman's inequality follows from the case p = 0.
-
-An elementary proof is sketched below. From the inequality of arithmetic and geometric means applied to the numbers $1\cdot a_1,2\cdot a_2,\dots,n \cdot a_n$,
-$$
-\mathrm{MG}(a_1,\dots,a_n)=\mathrm{MG}(1a_1,2a_2,\dots,na_n)(n!)^{-1/n}\le \mathrm{MA}(1a_1,2a_2,\dots,na_n)(n!)^{-1/n}
-$$
-
-where MG stands for the geometric mean and MA for the arithmetic mean. The Stirling-type inequality $n!\ge \sqrt{2\pi n} n^n e^{-n}$ applied to $n+1$ implies
-$$
-(n!)^{-1/n} \le \frac{e}{n+1}
-$$ for all $n\ge1.$
-
-Therefore,
-$$
-\mathrm{MG}(a_1,\dots,a_n) \le \frac{e}{n(n+1)} \sum_{1\le k \le n} k a_k ,
-$$
-
-whence
-$$
-\sum_{n\ge1}\mathrm{MG}(a_1,\dots,a_n) \le e \sum_{k\ge1} \bigg( \sum_{n\ge k} \frac{1}{n(n+1)}\bigg) k a_k = e \sum_{k\ge1} a_k ,
-$$
-
-proving the inequality. Moreover, the inequality of arithmetic and geometric means of $n$ non-negative numbers is known to be an equality if and only if all the numbers coincide, that is, in the present case, if and only if $a_k= C/k$ for $k=1,\dots,n$. As a consequence, Carleman's inequality is never an equality for a convergent series, unless all $a_n$ vanish, simply because the harmonic series is divergent.
-
-One can also prove Carleman's inequality by starting with Hardy's inequality
-$$
-\sum_{n=1}^\infty \left (\frac{a_1+a_2+\cdots +a_n}{n}\right )^p\le \left (\frac{p}{p-1}\right )^p\sum_{n=1}^\infty a_n^p
-$$
-
-for the non-negative numbers $a_1, a_2, \dots$ and $p > 1$, replacing each $a_n$ with $a_n^{1/p}$, and letting $p \to \infty$. diff --git a/wiki/wikipedia/2322.txt b/wiki/wikipedia/2322.txt deleted file mode 100644 index 74074a9798c6e7406da56c424e73679f2b4e5bb6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2322.txt +++ /dev/null @@ -1,11 +0,0 @@
-In mathematics, a space form is a complete Riemannian manifold M of constant sectional curvature K. The three obvious examples are Euclidean n-space, the n-dimensional sphere, and hyperbolic space, although a space form need not be simply connected.
-
-The Killing–Hopf theorem of Riemannian geometry states that the universal cover of an n-dimensional space form $M^n$ with curvature $K = -1$ is isometric to $H^n$, hyperbolic space, with curvature $K = 0$ is isometric to $R^n$, Euclidean n-space, and with curvature $K = +1$ is isometric to $S^n$, the n-dimensional sphere of points distance 1 from the origin in $R^{n+1}$.
-
-By rescaling the Riemannian metric on $H^n$, we may create a space $M_K$ of constant curvature $K$ for any $K < 0$. Similarly, by rescaling the Riemannian metric on $S^n$, we may create a space $M_K$ of constant curvature $K$ for any $K > 0$. Thus the universal cover of a space form $M$ with constant curvature $K$ is isometric to $M_K$.
-
-This reduces the problem of studying space forms to studying discrete groups of isometries $\Gamma$ of $M_K$ which act properly discontinuously. Note that the fundamental group of $M$, $\pi_1(M)$, will be isomorphic to $\Gamma$. Groups acting in this manner on $R^n$ are called crystallographic groups.
Groups acting in this manner on $H^2$ and $H^3$ are called Fuchsian groups and Kleinian groups, respectively.
-
-The space form problem is a conjecture stating that any two compact aspherical Riemannian manifolds with isomorphic fundamental groups are homeomorphic.
-
-The possible extensions are limited. One might wish to conjecture that the manifolds are isometric, but rescaling the Riemannian metric on a compact aspherical Riemannian manifold preserves the fundamental group and shows this to be false. One might also wish to conjecture that the manifolds are diffeomorphic, but John Milnor's exotic spheres are all homeomorphic and hence have isomorphic fundamental group, showing this to be false. diff --git a/wiki/wikipedia/2323.txt b/wiki/wikipedia/2323.txt deleted file mode 100644 index 79ba82edefa0c2134dd3c336ba09d3af1159fc94..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2323.txt +++ /dev/null @@ -1,297 +0,0 @@
-In mathematics, many logarithmic identities exist. The following is a compilation of the most notable of these, many of which are used for computational purposes.
-
-Logarithms and exponentials with the same base cancel each other. This is true because logarithms and exponentials are inverse operations, just as multiplication and division are inverse operations, and addition and subtraction are inverse operations.
-$$
-b^{\log_b(x)} = x\text{ because }\mbox{antilog}_b(\log_b(x)) = x
-$$
-$$
-\log_b(b^x) = x\text{ because }\log_b(\mbox{antilog}_b(x)) = x
-$$
-
-Both of the above are derived from the following two equations that define a logarithm:
-$$
-b^c = x \iff \log_b(x) = c
-$$
-
-Substituting $c$ in the left equation gives $b^{\log_b(x)} = x$, and substituting $x$ in the right gives $\log_b(b^c) = c$. Finally, replace $c$ with $x$.
-
-Logarithms can be used to make calculations easier. For example, two numbers can be multiplied just by using a logarithm table and adding. These are often known as logarithmic properties, which are documented below. The first three operations below assume that $x = b^c$ and/or $y = b^d$, so that $\log_b(x) = c$ and $\log_b(y) = d$. Derivations also use the log definitions $x = b^{\log_b(x)}$ and $x = \log_b(b^x)$.
-
-Where $b$, $x$, and $y$ are positive real numbers and $b \ne 1$, and $c$ and $d$ are real numbers.
-
-The laws result from canceling exponentials and the appropriate law of indices. Starting with the first law:
-$$
-xy = b^{\log_b(x)} b^{\log_b(y)} = b^{\log_b(x) + \log_b(y)} \Rightarrow \log_b(xy) = \log_b(b^{\log_b(x) + \log_b(y)}) = \log_b(x) + \log_b(y)
-$$
-
-The law for powers exploits another of the laws of indices:
-$$
-x^y = (b^{\log_b(x)})^y = b^{y \log_b(x)} \Rightarrow \log_b(x^y) = y \log_b(x)
-$$
-
-The law relating to quotients then follows:
-$$
-\log_b \bigg(\frac{x}{y}\bigg) = \log_b(x y^{-1}) = \log_b(x) + \log_b(y^{-1}) = \log_b(x) - \log_b(y)
-$$
-$$
-\log_b \bigg(\frac{1}{y}\bigg) = \log_b(y^{-1}) = - \log_b(y)
-$$
-
-Similarly, the root law is derived by rewriting the root as a reciprocal power:
-$$
-\log_b(\sqrt[y]x) = \log_b(x^{\frac{1}{y}}) = \frac{1}{y}\log_b(x)
-$$
-$$
-\log_b a=\frac{\log_{10}(a)}{\log_{10}(b)}
-$$
-
-This identity is useful to evaluate logarithms on calculators. For instance, most calculators have buttons for ln and for log10, but not all calculators have buttons for the logarithm of an arbitrary base.
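In code, this change-of-base identity is exactly how logarithms to an arbitrary base are usually obtained from a built-in natural logarithm; a minimal sketch:

```python
import math

def log_base(b, a):
    """log_b(a) via the change-of-base identity log_b(a) = ln(a) / ln(b)."""
    return math.log(a) / math.log(b)

print(log_base(2, 8))   # 3.0, up to floating-point rounding
```

(Python's two-argument math.log(a, b) performs the same division internally; the helper is shown only to make the identity visible.)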
-
-Consider the equation $ b^c=a$.
-
-Take the logarithm base $d$ of both sides: $ \log_d b^c=\log_d a$
-
-Simplify and solve for $c$: $ c\log_d b=\log_d a$
-$$
-c=\frac{\log_d a}{\log_d b}
-$$
-
-Since $c=\log_b a$, then $\log_b a=\frac{\log_d a}{\log_d b}$
-
-This formula has several consequences:
-$$
- \log_b a = \frac 1 {\log_a b}
-$$
-$$
- \log_{b^n} a = {\log_b a \over n}
-$$
-$$
- b^{\log_a d} = d^{\log_a b}
-$$
-$$
- -\log_b a = \log_b \left({1 \over a}\right) = \log_{1/b} a
-$$
-$$
- \log_{b_1}a_1 \cdots \log_{b_n}a_n = \log_{b_{\pi(1)}}a_1 \cdots \log_{b_{\pi(n)}}a_n,
-$$
-
-where $\pi$ is any permutation of the subscripts 1, ..., n. For example,
-$$
- \log_b w\cdot \log_a x\cdot \log_d c\cdot \log_d z = \log_d w\cdot \log_b x\cdot \log_a c\cdot \log_d z.
-$$
-
-The following summation/subtraction rule is especially useful in probability theory when one is dealing with a sum of log-probabilities:
-$$
-\log_b (a+c) = \log_b a + \log_b \left(1 + b^{\log_b c - \log_b a}\right)
-$$
-$$
-\log_b (a-c) = \log_b a + \log_b \left(1 - b^{\log_b c - \log_b a}\right)
-$$
-
-Note that the subtraction identity is not defined if $a=c$, since the logarithm of zero is not defined.
-
-Also note that, when programming, $a$ and $c$ may have to be switched on the right hand side of the equations if $c \gg a$ to avoid losing the "1 +" due to rounding errors. Many programming languages have a specific log1p(x) function that calculates $\log_e (1+x)$ without loss of precision (when $x$ is small).
-
-More generally:
-$$
-\log _b \sum\limits_{i=0}^N a_i = \log_b a_0 + \log_b \left( 1+\sum\limits_{i=1}^N \frac{a_i}{a_0} \right) = \log _b a_0 + \log_b \left( 1+\sum\limits_{i=1}^N b^{\left( \log_b a_i - \log _b a_0 \right)} \right)
-$$
-
-A useful identity involving exponents:
-$$
- x^{\frac{\log(\log(x))}{\log(x)}} = \log(x)
-$$
-
-or more universally:
-$$
- x^{\frac{\log(a)}{\log(x)}} = a
-$$
-$$
- \frac{1}{\frac{1}{\log_x(a)}+\frac{1}{\log_y(a)}} = \log_{xy}(a)
-$$
-$$
- \frac{1}{\frac{1}{\log_x(a)}-\frac{1}{\log_y(a)}} = \log_{\frac{x}{y}}(a)
-$$
-
-The natural logarithm satisfies the following bounds:
-$$
-\frac{x}{1+x} \leq \ln(1+x) \leq \frac{x(6+x)}{6+4x} \leq x \mbox{ for all } {-1} < x
-$$
-$$
-\begin{align} \frac{2x}{2+x}&\leq3-\sqrt{\frac{27}{3+2x}}\leq\frac{x}{\sqrt{1+x+x^2/12}} \\[4pt] &\leq \ln(1+x)\leq \frac{x}{\sqrt{1+x}}\leq \frac{x}{2}\frac{2+x}{1+x} \\[4pt] &\text{ for } 0 \le x \text{, reverse for } {-1} < x \le 0 \end{align}
-$$
-
-All are accurate around $x=0$, but not for large numbers.
-$$
-\lim_{x\to 0^+}\log_a(x)=-\infty\quad \mbox{if } a > 1
-$$
-$$
-\lim_{x\to 0^+}\log_a(x)=\infty\quad \mbox{if } 0 < a < 1
-$$
-$$
-\lim_{x\to\infty}\log_a(x)=\infty\quad \mbox{if } a > 1
-$$
-$$
-\lim_{x\to\infty}\log_a(x)=-\infty\quad \mbox{if } 0 < a < 1
-$$
-$$
-\lim_{x\to 0^+}x^b\log_a(x)=0\quad \mbox{if } b > 0
-$$
-$$
-\lim_{x\to\infty}\frac{\log_a(x)}{x^b}=0\quad \mbox{if } b > 0
-$$
-
-The last limit is often summarized as "logarithms grow more slowly than any power or root of x".
-$$
-{d \over dx} \ln x = {1 \over x },
-$$
-$$
-{d \over dx} \log_b x = {1 \over x \ln b},
-$$
-
-where $x > 0$, $b > 0$, and $b \ne 1$.
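Returning to the summation rule for log-probabilities discussed above: the following sketch shows how it is typically coded, with the operand swap and the log1p call that the text recommends (the function name log_add is our own):

```python
import math

def log_add(log_a, log_c):
    """Return log(a + c) given log(a) and log(c), never forming a or c."""
    if log_a < log_c:                  # keep the larger term first so the
        log_a, log_c = log_c, log_a    # exponent below is non-positive
    return log_a + math.log1p(math.exp(log_c - log_a))

# Adding two probabilities of 1e-300 without underflow:
print(log_add(math.log(1e-300), math.log(1e-300)))   # about -690.08
print(math.log(2e-300))                               # same value
```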
-$$
-\ln x = \int_1^x \frac {1}{t} dt
-$$
-$$
-\int \log_a x dx = x(\log_a x - \log_a e) + C
-$$
-
-To remember higher integrals, it is convenient to define
-$$
-x^{\left [n \right]} = x^{n}(\log(x) - H_n)
-$$
-
-where $H_n$ is the nth harmonic number:
-$$
-x^{\left [ 0 \right ]} = \log x
-$$
-$$
-x^{\left [ 1 \right ]} = x \log(x) - x
-$$
-$$
-x^{\left [ 2 \right ]} = x^2 \log(x) - \begin{matrix} \frac{3}{2} \end{matrix}x^2
-$$
-$$
-x^{\left [ 3 \right ]} = x^3 \log(x) - \begin{matrix} \frac{11}{6} \end{matrix}x^3
-$$
-
-Then
-$$
-\frac{d}{dx} x^{\left[ n \right]} = nx^{\left[ n-1 \right]}
-$$
-$$
-\int x^{\left[ n \right]}dx = \frac{x^{\left [ n+1 \right ]}}{n+1} + C
-$$
-
-The identities of logarithms can be used to approximate large numbers. Note that $\log_b(a) + \log_b(c) = \log_b(ac)$, where $a$, $b$, and $c$ are arbitrary constants. Suppose that one wants to approximate the 44th Mersenne prime, $2^{32,582,657}-1$. To get the base-10 logarithm, we would multiply 32,582,657 by $\log_{10}(2)$, getting $9,808,357.09543 = 9,808,357 + 0.09543$. We can then get $10^{9,808,357} \times 10^{0.09543} \approx 1.25 \times 10^{9,808,357}$.
-
-Similarly, factorials can be approximated by summing the logarithms of the terms.
-
-The complex logarithm is the complex number analogue of the logarithm function. No single valued function on the complex plane can satisfy the normal rules for logarithms. However, a multivalued function can be defined which satisfies most of the identities. It is usual to consider this as a function defined on a Riemann surface. A single valued version, called the principal value of the logarithm, can be defined which is discontinuous on the negative x axis, and is equal to the multivalued version on a single branch cut.
-
-In what follows, a capital first letter is used for the principal value of functions, and the lower case version is used for the multivalued function. The single valued version of definitions and identities is always given first, followed by a separate section for the multiple valued versions.
-
-ln(r) is the standard natural logarithm of the real number r.
-
-Arg(z) is the principal value of the arg function; its value is restricted to (−π, π]. It can be computed using Arg(x + iy) = atan2(y, x).
-
-Log(z) is the principal value of the complex logarithm function and has imaginary part in the range (−π, π].
-$$
-\operatorname{Log}(z) = \ln(|z|) + i \operatorname{Arg}(z)
-$$
-$$
-e^{\operatorname{Log}(z)} = z
-$$
-
-The multiple valued version of log(z) is a set, but it is easier to write it without braces, and using it in formulas follows obvious rules.
-
-log(z) is the set of complex numbers v which satisfy $e^v = z$.
-
-arg(z) is the set of possible values of the arg function applied to z.
-
-When k is any integer:
-$$
-\log(z) = \ln(|z|) + i \arg(z)
-$$
-$$
-\log(z) = \operatorname{Log}(z) + 2 \pi i k
-$$
-$$
-e^{\log(z)} = z
-$$
-
-Principal value forms:
-$$
-\operatorname{Log}(1) = 0
-$$
-$$
-\operatorname{Log}(e) = 1
-$$
-
-Multiple value forms, for any k an integer:
-$$
-\log(1) = 0 + 2 \pi i k
-$$
-$$
-\log(e) = 1 + 2 \pi i k
-$$
-
-Principal value forms:
-$$
-\operatorname{Log}(z_1) + \operatorname{Log}(z_2) = \operatorname{Log}(z_1 z_2) \pmod {2 \pi i}
-$$
-$$
-\operatorname{Log}(z_1) + \operatorname{Log}(z_2) = \operatorname{Log}(z_1 z_2)\quad (-\pi <\operatorname{Arg}(z_1)+\operatorname{Arg}(z_2)\leq \pi; \text{ e.g., } \operatorname{Re}z_1\geq 0 \text{ and } \operatorname{Re}z_2 > 0)
-$$
-$$
-\operatorname{Log}(z_1) - \operatorname{Log}(z_2) = \operatorname{Log}(z_1 / z_2) \pmod {2 \pi i}
-$$
-$$
-\operatorname{Log}(z_1) - \operatorname{Log}(z_2) = \operatorname{Log}(z_1 / z_2) \quad (-\pi <\operatorname{Arg}(z_1)-\operatorname{Arg}(z_2)\leq \pi; \text{ e.g., } \operatorname{Re}z_1\geq 0 \text{ and } \operatorname{Re}z_2 > 0)
-$$
-
-Multiple value forms:
-$$
-\log(z_1) + \log(z_2) = \log(z_1 z_2)
-$$
-$$
-\log(z_1) - \log(z_2) = \log(z_1 / z_2)
-$$
-
-A complex power of a complex number can have many possible values.
-
-Principal value form:
-$$
-{z_1}^{z_2} = e^{z_2 \operatorname{Log}(z_1)}
-$$
-$$
-\operatorname{Log}{\left({z_1}^{z_2}\right)} = z_2 \operatorname{Log}(z_1) \pmod {2 \pi i}
-$$
-
-Multiple value forms:
-$$
-{z_1}^{z_2} = e^{z_2 \log(z_1)}
-$$
-
-Where $k_1$, $k_2$ are any integers:
-$$
-\log{\left({z_1}^{z_2}\right)} = z_2 \log(z_1) + 2 \pi i k_2
-$$
-$$
-\log{\left({z_1}^{z_2}\right)} = z_2 \operatorname{Log}(z_1) + z_2 2 \pi i k_1 + 2 \pi i k_2
-$$ diff --git a/wiki/wikipedia/2324.txt b/wiki/wikipedia/2324.txt deleted file mode 100644 index d88a922d8ed394384e8a05cc5719556d60a73226..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2324.txt +++ /dev/null @@ -1 +0,0 @@
-Web Sudoku is an online sudoku website which was rated as one of the best 50 fun and games websites by Time. It was founded by Gideon Greenspan and Rachel Lee. The website was rated as the 7265th best website in the world by Jonathan Harchick in his book The World's Best Websites. Greenspan claimed that about three million people play on the site, adding that the numbers "are still growing very rapidly from week to week". He added that some of the players solve dozens of puzzles every day. diff --git a/wiki/wikipedia/2325.txt b/wiki/wikipedia/2325.txt deleted file mode 100644 index 7907ac662c3836b74e268bf5f0e78d8c54d3ff3d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2325.txt +++ /dev/null @@ -1,53 +0,0 @@
-In geometry, the statement that the angles opposite the equal sides of an isosceles triangle are themselves equal is known as the pons asinorum, typically translated as "bridge of asses". This statement is Proposition 5 of Book 1 in Euclid's Elements, and is also known as the isosceles triangle theorem. Its converse is also true: if two angles of a triangle are equal, then the sides opposite them are also equal. The term is also applied to the Pythagorean theorem.
-
-Pons asinorum is also used metaphorically for a problem or challenge which acts as a test of critical thinking in a field, separating capable and incapable reasoners; it represents a test of ability or understanding. Its first known usage in this sense was in 1645.
- -A persistent piece of mathematical folklore claims that an artificial intelligence program discovered an original and more elegant proof of this theorem. In fact, Marvin Minsky recounts that he had rediscovered the Pappus proof (which he was not aware of) by simulating what a mechanical theorem prover might do. - -Euclid's statement of the pons asinorum includes a second conclusion that if the equal sides of the triangle are extended below the base, then the angles between the extensions and the base are also equal. Euclid's proof involves drawing auxiliary lines to these extensions. But, as Euclid's commentator Proclus points out, Euclid never uses the second conclusion and his proof can be simplified somewhat by drawing the auxiliary lines to the sides of the triangle instead, the rest of the proof proceeding in more or less the same way. - -There has been much speculation and debate as to why Euclid added the second conclusion to the theorem, given that it makes the proof more complicated. One plausible explanation, given by Proclus, is that the second conclusion can be used in possible objections to the proofs of later propositions where Euclid does not cover every case. The proof relies heavily on what is today called side-angle-side, the previous proposition in the Elements. - -Proclus' variation of Euclid's proof proceeds as follows:
- -Let ABC be an isosceles triangle with AB and AC being the equal sides. Pick an arbitrary point D on side AB and construct E on AC so that AD = AE. Draw the lines BE, DC and DE.
- -Consider the triangles BAE and CAD; BA = CA, AE = AD, and $\angle A$ is equal to itself, so by side-angle-side, the triangles are congruent and corresponding sides and angles are equal.
- -Therefore $\angle ABE = \angle ACD$ and $\angle ADC = \angle AEB$, and BE = CD.
- -Since AB = AC and AD = AE, BD = CE by subtraction of equal parts.
- -Now consider the triangles DBE and ECD; BD = CE, BE = CD, and $\angle DBE = \angle ECD$ have just been shown, so applying side-angle-side again, the triangles are congruent.
- -Therefore $\angle BDE = \angle CED$ and $\angle BED = \angle CDE$.
- -Since $\angle BDE = \angle CED$ and $\angle CDE = \angle BED$, $\angle BDC = \angle CEB$ by subtraction of equal parts.
- -Consider a third pair of triangles, BDC and CEB; DB = EC, DC = EB, and $\angle BDC = \angle CEB$, so applying side-angle-side a third time, the triangles are congruent.
- -In particular, angle CBD = BCE, which was to be proved. - -Proclus gives a much shorter proof attributed to Pappus of Alexandria. This is not only simpler but it requires no additional construction at all. The method of proof is to apply side-angle-side to the triangle and its mirror image. More modern authors, in imitation of the method of proof given for the previous proposition have described this as picking up the triangle, turning it over and laying it down upon itself. - -Similarly, the name Dulcarnon was given to the 47th proposition of Book I of Euclid, better known as the Pythagorean theorem, after the Arabic Dhū 'l qarnain ذُو ٱلْقَرْنَيْن, meaning "the owner of the two horns", because diagrams of the theorem showed two smaller squares like horns at the top of the figure. The term is also used as a metaphor for a dilemma. The theorem was also sometimes called "the Windmill" for similar reasons. - -Uses of the pons asinorum as a metaphor for a test of critical thinking include: - -*Richard Aungerville's Philobiblon contains the passage "Quot Euclidis discipulos retrojecit Elefuga quasi scopulos eminens et abruptus, qui nullo scalarum suffragio scandi posset! Durus, inquiunt, est his sermo; quis potest eum audire?", which compares the theorem to a steep cliff that no ladder may help scale and asks how many would-be geometers have been turned away. - -*The term pons asinorum, in both its meanings as a bridge and as a test, is used as a metaphor for finding the middle term of a syllogism. - -*The 18th-century poet Thomas Campbell wrote a humorous poem called "Pons asinorum" where a geometry class assails the theorem as a company of soldiers might charge a fortress; the battle was not without casualties. - -*Economist John Stuart Mill called Ricardo's Law of Rent the pons asinorum of economics. - -*Pons Asinorum is the name given to a particular configuration of a Rubik's Cube. - -*Eric Raymond referred to the issue of syntactically-significant whitespace in the Python programming language as its pons asinorum. - -*The Finnish aasinsilta and Swedish åsnebrygga is a literary technique where a tenuous, even contrived connection between two arguments or topics, which is almost but not quite a non sequitur, is used as an awkward transition between them. In serious text, it is considered a stylistic error, since it belongs properly to the stream of consciousness- or causerie-style writing. Typical examples are ending a section by telling what the next section is about, without bothering to explain why the topics are related, expanding a casual mention into a detailed treatment, or finding a contrived connection between the topics (e.g. "We bought some red wine; speaking of red liquids, tomorrow is the World Blood Donor Day"). - -*In Dutch, ezelsbruggetje ('little bridge of asses') is the word for a mnemonic. The same is true for the German Eselsbrücke. - -*In Czech, oslí můstek has two meanings – it can describe either a contrived connection between two topics or a mnemonic. diff --git a/wiki/wikipedia/2326.txt b/wiki/wikipedia/2326.txt deleted file mode 100644 index 213e4d486b6e595d72cbba41afda794e083ea56a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2326.txt +++ /dev/null @@ -1,51 +0,0 @@ -PGF/TikZ is a pair of languages for producing vector graphics (e.g., technical illustrations and drawings) from a geometric/algebraic description, with standard features including the drawing of points, lines, arrows, paths, circles, ellipses and polygons. 
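A complete LaTeX document exercising a few of these primitives might look as follows (this example is ours, not from the original article):

```latex
\documentclass{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
  \draw[->] (0,0) -- (2,1);            % a line with an arrow tip
  \draw (3,0.5) circle (0.5);          % a circle of radius 0.5
  \draw (4,0) rectangle (5,1);         % a rectangle
  \node[above] at (1,0.5) {a label};   % text placed at a coordinate
\end{tikzpicture}
\end{document}
```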
PGF is a lower-level language, while TikZ is a set of higher-level macros that use PGF. The top-level PGF and TikZ commands are invoked as TeX macros, but in contrast with PSTricks, the PGF/TikZ graphics themselves are described in a language that resembles MetaPost. Till Tantau is the designer of the PGF and TikZ languages. He is also the main developer of the only known interpreter for PGF and TikZ, which is written in TeX. PGF is an acronym for "Portable Graphics Format". TikZ was introduced in version 0.95 of PGF, and it is a recursive acronym for "TikZ ist kein Zeichenprogramm" (German for "TikZ is not a drawing program").
-
-The PGF/TikZ interpreter can be used from the popular LaTeX and ConTeXt macro packages, and also directly from the original TeX. Since TeX itself is not concerned with graphics, the interpreter supports multiple TeX output backends: dvips, dvipdfm/dvipdfmx/xdvipdfmx, TeX4ht, and pdftex's internal PDF output driver. One of the major new features of version 3.0 was graph drawing using the graphdrawing package, which however requires LuaTeX. This version also added a new data visualization method and support for direct SVG output via the new dvisvgm driver.
-
-Several graphical editors can produce output for PGF/TikZ, such as the KDE program Cirkuit and the math drawing program GeoGebra. Export to TikZ is also available as extensions for Inkscape, Blender, MATLAB, matplotlib, Gnuplot, and R. The circuit-macros package of m4 macros exports circuit diagrams to TikZ using the dpic -g command line option. The dot2tex program can convert files in the DOT graph description language to PGF/TikZ.
-
-TikZ features libraries for easy drawing of many kinds of diagrams, such as the following (alphabetized by library name):
-
-* 3D drawing (3d)
-
-* Finite automata and Turing machines (automata)
-
-* Coordinate system calculations (calc)
-
-* Calendars (calendar)
-
-* Chains: nodes typically connected by edges and arranged in rows and columns (chain)
-
-* Logic circuit and electrical circuit diagrams (circuits.logic and circuits.ee)
-
-* Entity–relationship diagrams (er)
-
-* Polygon folding diagrams (folding)
-
-* Graph drawing with automatic layout options (graphdrawing)
-
-* L-system drawings (lindenmayersystems)
-
-* Sequences of basic math operations (math)
-
-* Matrices (matrix)
-
-* Mind maps (mindmap)
-
-* Three-point perspective drawings (perspective)
-
-* Petri nets (petri)
-
-* RDF semantic annotations, only in SVG output (rdf)
-
-* Special shapes and symbols (shapes.geometric and shapes.symbols)
-
-* Magnification of part of a graphic in an inset (spy)
-
-* Paths in SVG syntax (svg.path)
-
-* Trees (trees)
-
-* Turtle graphics (turtle)
-
-* Zooming and panning graphics (views) diff --git a/wiki/wikipedia/2327.txt b/wiki/wikipedia/2327.txt deleted file mode 100644 index 2d4aad8d4e5e456a91ea849d23aa9e612d02e873..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2327.txt +++ /dev/null @@ -1,13 +0,0 @@
-The Carathéodory–Jacobi–Lie theorem is a theorem in symplectic geometry which generalizes Darboux's theorem.
-
-Let M be a 2n-dimensional symplectic manifold with symplectic form ω. For p ∈ M and r ≤ n, let $f_1, f_2, \ldots, f_r$ be smooth functions defined on an open neighborhood V of p whose differentials are linearly independent at each point, or equivalently
-$$
-df_1(p) \wedge \ldots \wedge df_r(p) \neq 0,
-$$
-
-where $\{f_i, f_j\} = 0$. (In other words, they are pairwise in involution.) Here $\{-,-\}$ is the Poisson bracket.
Then there are functions $f_{r+1}, \ldots, f_n, g_1, g_2, \ldots, g_n$ defined on an open neighborhood U ⊂ V of p such that $(f_i, g_i)$ is a symplectic chart of M, i.e., ω is expressed on U as
-$$
-\omega = \sum_{i=1}^n df_i \wedge dg_i.
-$$
-
-As a direct application we have the following. Given a Hamiltonian system $(M,\omega,H)$, where M is a symplectic manifold with symplectic form $\omega$ and H is the Hamiltonian function, around every point where $dH \neq 0$ there is a symplectic chart such that one of its coordinates is H. diff --git a/wiki/wikipedia/2328.txt b/wiki/wikipedia/2328.txt deleted file mode 100644 index 024fb2e1d670664d4996df0c8917de0d17d9954e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2328.txt +++ /dev/null @@ -1,25 +0,0 @@
-Hilbert's epsilon calculus is an extension of a formal language by the epsilon operator, where the epsilon operator substitutes for quantifiers in that language as a method leading to a proof of consistency for the extended formal language. The epsilon operator and epsilon substitution method are typically applied to a first-order predicate calculus, followed by a showing of consistency. The epsilon-extended calculus is further extended and generalized to cover those mathematical objects, classes, and categories for which there is a desire to show consistency, building on previously-shown consistency at earlier levels.
-
-For any formal language L, extend L by adding the epsilon operator to redefine quantification:
-
-*$ (\exists x) A(x)\ \equiv \ A(\epsilon x\ A) $
-
-*$ (\forall x) A(x)\ \equiv \ A(\epsilon x\ (\neg A)) $
-
-The intended interpretation of ϵx A is some x that satisfies A, if it exists. In other words, ϵx A returns some term t such that A(t) is true; otherwise it returns some default or arbitrary term. If more than one term can satisfy A, then any one of these terms (which make A true) can be chosen, non-deterministically. Equality is required to be defined under L, and the only rules required for L extended by the epsilon operator are modus ponens and the substitution of A(t) to replace A(x) for any term t.
-
-In tau-square notation from N. Bourbaki's Theory of Sets, the quantifiers are defined as follows:
-
-*$ (\exists x) A(x)\ \equiv \ (\tau_x(A)|x)A $
-
-*$ (\forall x) A(x)\ \equiv \ \neg (\tau_x(\neg A)|x)\neg A\ \equiv \ (\tau_x(\neg A)|x)A$
-
-where A is a relation in L, x is a variable, and $\tau_x(A)$ juxtaposes a $\tau$ at the front of A, replaces all instances of x with $\square$, and links them back to $\tau$. Then, if Y is an assembly, $(Y|x)A$ denotes the replacement of all variables x in A with Y.
-
-This notation is equivalent to the Hilbert notation and is read the same. It is used by Bourbaki to define cardinal assignment since they do not use the axiom of replacement.
-
-Defining quantifiers in this way leads to great inefficiencies. For instance, the expansion of Bourbaki's original definition of the number one, using this notation, has length approximately $4.5 \times 10^{12}$, and for a later edition of Bourbaki that combined this notation with the Kuratowski definition of ordered pairs, this number grows to approximately $2.4 \times 10^{54}$.
-
-Hilbert's program for mathematics was to justify those formal systems as consistent in relation to constructive or semi-constructive systems.
While Gödel's results on incompleteness mooted Hilbert's Program to a great extent, modern researchers find the epsilon calculus to provide alternatives for approaching proofs of systemic consistency as described in the epsilon substitution method.
-
-A theory to be checked for consistency is first embedded in an appropriate epsilon calculus. Second, a process is developed for re-writing quantified theorems to be expressed in terms of epsilon operations via the epsilon substitution method. Finally, the process must be shown to normalize the re-writing process, so that the re-written theorems satisfy the axioms of the theory. diff --git a/wiki/wikipedia/2329.txt b/wiki/wikipedia/2329.txt deleted file mode 100644 index baac1f7657c27219915671c7ba702e5d535d951e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2329.txt +++ /dev/null @@ -1,81 +0,0 @@
-In the theory of probability and statistics, the Dvoretzky–Kiefer–Wolfowitz–Massart inequality bounds how close an empirically determined distribution function will be to the distribution function from which the empirical samples are drawn. It is named after Aryeh Dvoretzky, Jack Kiefer, and Jacob Wolfowitz, who in 1956 proved the inequality
-
-
-
-\Pr\Bigl(\sup_{x\in\mathbb R} |F_n(x) - F(x)| > \varepsilon \Bigr) \le Ce^{-2n\varepsilon^2}\qquad \text{for every }\varepsilon>0
-
-
-
-with an unspecified multiplicative constant C in front of the exponent on the right-hand side.
-
-In 1990, Pascal Massart proved the inequality with the sharp constant C = 2, confirming a conjecture due to Birnbaum and McCarty. In 2021, Michael Naaman proved the multivariate version of the DKW inequality and generalized Massart's tightness result to the multivariate case, which results in a sharp constant of twice the number of variables, C = 2k.
-
-Given a natural number n, let $X_1, X_2, \ldots, X_n$ be real-valued independent and identically distributed random variables with cumulative distribution function F(·). Let $F_n$ denote the associated empirical distribution function defined by
-
-
-
-F_n(x) = \frac1n \sum_{i=1}^n \mathbf{1}_{\{X_i\leq x\}},\qquad x\in\mathbb{R}.
-
-
-
-So $F(x)$ is the probability that a single random variable $X$ is smaller than $x$, and $F_n(x)$ is the fraction of random variables that are smaller than $x$.
-
-The Dvoretzky–Kiefer–Wolfowitz inequality bounds the probability that the random function $F_n$ differs from $F$ by more than a given constant $\varepsilon > 0$ anywhere on the real line. More precisely, there is the one-sided estimate
-
-
-
-\Pr\Bigl(\sup_{x\in\mathbb R} \bigl(F_n(x) - F(x)\bigr) > \varepsilon \Bigr) \le e^{-2n\varepsilon^2}\qquad \text{for every }\varepsilon\geq\sqrt{\tfrac{1}{2n}\ln2},
-
-
-
-which also implies a two-sided estimate
-
-
-
-\Pr\Bigl(\sup_{x\in\mathbb R} |F_n(x) - F(x)| > \varepsilon \Bigr) \le 2e^{-2n\varepsilon^2}\qquad \text{for every }\varepsilon>0.
-
-
-
-This strengthens the Glivenko–Cantelli theorem by quantifying the rate of convergence as n tends to infinity. It also estimates the tail probability of the Kolmogorov–Smirnov statistic.
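Solving the two-sided estimate for $\varepsilon$ at a prescribed violation probability $\alpha$ gives the confidence-band half-width that appears later in the article; a minimal sketch (our own):

```python
import math

def dkw_halfwidth(n, alpha):
    """eps such that 2*exp(-2*n*eps**2) == alpha; then F_n(x) +/- eps is a
    level-(1 - alpha) confidence band for the true CDF F."""
    return math.sqrt(math.log(2.0 / alpha) / (2.0 * n))

print(dkw_halfwidth(1000, 0.05))   # about 0.0430
```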
The inequalities above follow from the case where F corresponds to the uniform distribution on [0,1], in view of the fact
-
-that $F_n$ has the same distribution as $G_n(F)$, where $G_n$ is the empirical distribution of
-
-$U_1, U_2, \ldots, U_n$, where these are independent and Uniform(0,1), and noting that
-
-
-
-\sup_{x\in\mathbb R} |F_n(x) - F(x)| \stackrel{d}{=} \sup_{x \in \mathbb R} | G_n (F(x)) - F(x) | \le \sup_{0 \le t \le 1} | G_n (t) -t | ,
-
-
-
-with equality if and only if F is continuous.
-
-In the multivariate case, $X_1, X_2, \ldots, X_n$ is an i.i.d. sequence of k-dimensional vectors. If $F_n$ is the multivariate empirical cdf, then
-
-
-
-\Pr\Bigl(\sup_{t\in\mathbb R^k} |F_n(t) - F(t)| > \varepsilon \Bigr) \le (n+1)ke^{-2n\varepsilon^2}
-
-
-
-for every ε, n, k>0. The (n+1) term can be replaced with a 2 for any sufficiently large n.
-
-The Dvoretzky–Kiefer–Wolfowitz inequality is one method for generating CDF-based confidence bounds and producing a confidence band. The purpose of this confidence interval is to contain the entire CDF at the specified confidence level, while alternative approaches attempt to only achieve the confidence level on each individual point, which can allow for a tighter bound. The DKW bound runs parallel to, and is equally above and below, the empirical CDF. The equally spaced confidence interval around the empirical CDF allows for different rates of violations across the support of the distribution. In particular, it is more common for a CDF to be outside of the CDF bound estimated using the DKW inequality near the median of the distribution than near the endpoints of the distribution.
-
-The interval that contains the true CDF, $F(x)$, with probability $1-\alpha$ is often specified as
-
-
-
-F_n(x) - \varepsilon \le F(x) \le F_n(x) + \varepsilon \text{ where } \varepsilon = \sqrt{\frac{\ln{\frac{2}{\alpha}}}{2n}},
-
-
-
-which is also a special case of the asymptotic procedure for the multivariate case, whereby one uses the following critical value
-
-
-
-\frac{d(\alpha,k)}{\sqrt{n}} = \sqrt{\frac{\ln{\frac{2k}{\alpha}}}{2n}}
-
-
-
-for the multivariate test; one may replace 2k with k(n+1) for a test that holds for all n; moreover, the multivariate test described by Naaman can be generalized to account for heterogeneity and dependence. diff --git a/wiki/wikipedia/233.txt b/wiki/wikipedia/233.txt deleted file mode 100644 index 9c9780d4059b4fd3e6279550a81267742112044b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/233.txt +++ /dev/null @@ -1,79 +0,0 @@
-A knight's tour is a sequence of moves of a knight on a chessboard such that the knight visits every square exactly once. If the knight ends on a square that is one knight's move from the beginning square (so that it could tour the board again immediately, following the same path), the tour is closed; otherwise, it is open.
-
-The knight's tour problem is the mathematical problem of finding a knight's tour. Creating a program to find a knight's tour is a common problem given to computer science students. Variations of the knight's tour problem involve chessboards of different sizes than the usual 8 × 8, as well as irregular (non-rectangular) boards.
-
-The knight's tour problem is an instance of the more general Hamiltonian path problem in graph theory. The problem of finding a closed knight's tour is similarly an instance of the Hamiltonian cycle problem. Unlike the general Hamiltonian path problem, the knight's tour problem can be solved in linear time.
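As the article notes, writing such a program is a standard exercise; one classical approach is the greedy heuristic discussed later in this article (Warnsdorff's rule). A minimal sketch, assuming an n × n board and breaking ties by list order:

```python
def knights_tour(n=8, start=(0, 0)):
    """Attempt an open knight's tour on an n x n board via Warnsdorff's
    rule: always move to the square with the fewest onward moves."""
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
              (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    visited = {start}

    def onward(square):
        r, c = square
        return [(r + dr, c + dc) for dr, dc in deltas
                if 0 <= r + dr < n and 0 <= c + dc < n
                and (r + dr, c + dc) not in visited]

    tour = [start]
    while len(tour) < n * n:
        candidates = onward(tour[-1])
        if not candidates:          # heuristic got stuck
            return None
        nxt = min(candidates, key=lambda sq: len(onward(sq)))
        visited.add(nxt)
        tour.append(nxt)
    return tour

tour = knights_tour()
print(len(tour) if tour else "stuck")   # 64 on success
```

Warnsdorff's heuristic is not guaranteed to finish from every starting square, which is why the sketch returns None when stuck; backtracking or the tie-breaking refinements described later repair such failures.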
-
-The earliest known reference to the knight's tour problem dates back to the 9th century AD. In Rudraṭa's Kavyalankara (5.15), a Sanskrit work on poetics, the pattern of a knight's tour on a half-board is presented as an elaborate poetic figure called the turagapadabandha or 'arrangement in the steps of a horse'. The same verse in four lines of eight syllables each can be read from left to right or by following the path of the knight on tour. Since the Indic writing systems used for Sanskrit are syllabic, each syllable can be thought of as representing a square on a chessboard. In Rudrata's example, the first line can be read from left to right or by moving from the first square to the second line, third syllable (2.3) and then to 1.5 to 2.7 to 4.8 to 3.6 to 4.4 to 3.2.
-
-The Sri Vaishnava poet and philosopher Vedanta Desika, in the 14th century, in his 1,008-verse magnum opus praising Lord Ranganatha's divine sandals of Srirangam, i.e. Paduka Sahasram (in chapter 30: Chitra Paddhati), composed two consecutive Sanskrit verses containing 32 letters each (in Anushtubh meter) where the second verse can be derived from the first verse by performing a knight's tour on a 4 × 8 board, starting from the top-left corner. The 20th verse, which can be obtained by performing the knight's tour on the transliterated 19th verse, is as follows:
-
-sThi thA sa ma ya rA ja thpA
-
-ga tha rA mA dha kE ga vi |
-
-dhu ran ha sAm sa nna thA dhA
-
-sA dhyA thA pa ka rA sa rA ||
-
-It is believed that Desika composed all 1008 verses (including the special Chaturanga Turanga Padabandham mentioned above) in a single night as a challenge.
-
-A tour reported in the fifth book of the Bhagavantabhaskara by Bhat Nilakantha, a cyclopedic work in Sanskrit on ritual, law and politics written either about 1600 or about 1700, describes three knight's tours. The tours are not only reentrant but also symmetrical, and the verses are based on the same tour, starting from different squares. The work by Bhat Nilakantha is an extraordinary achievement, being a fully symmetric closed tour, predating the work of Euler (1759) by at least 60 years.
-
-One of the first mathematicians to investigate the knight's tour was Leonhard Euler. The first procedure for completing the knight's tour was Warnsdorff's rule, first described in 1823 by H. C. von Warnsdorff.
-
-In the 20th century, the Oulipo group of writers, among others, used it. The most notable example is the 10 × 10 knight's tour which sets the order of the chapters in Georges Perec's novel Life a User's Manual.
-
-The sixth game of the World Chess Championship 2010 between Viswanathan Anand and Veselin Topalov saw Anand making 13 consecutive knight moves (albeit using both knights); online commentators jested that Anand was trying to solve the knight's tour problem during the game.
-
-Schwenk proved that for any m × n board with m ≤ n, a closed knight's tour is always possible unless one or more of these three conditions are met:
-
-# m and n are both odd
-
-# m = 1, 2, or 4
-
-# m = 3 and n = 4, 6, or 8.
-
-Cull et al. and Conrad et al. proved that on any rectangular board whose smaller dimension is at least 5, there is a (possibly open) knight's tour. A brute-force search for a knight's tour is impractical on all but the smallest boards; for example, there are approximately $4 \times 10^{51}$ possible move sequences on an 8 × 8 board, and it is well beyond the capacity of modern computers (or networks of computers) to perform operations on such a large set.
However, the size of this number is not indicative of the difficulty of the problem, which can be solved "by using human insight and ingenuity ... without much difficulty." - -Warnsdorff's rule is a heuristic for finding a single knight's tour. The knight is moved so that it always proceeds to the square from which the knight will have the fewest onward moves. When calculating the number of onward moves for each candidate square, we do not count moves that revisit any square already visited. It is possible to have two or more choices for which the number of onward moves is equal; there are various methods for breaking such ties, including one devised by Pohl. Although the Hamiltonian path problem is NP-hard in general, on many graphs that occur in practice this heuristic is able to successfully locate a solution in linear time. The knight's tour is such a special case. - -The heuristic was first described in "Des Rösselsprungs einfachste und allgemeinste Lösung" by H. C. von Warnsdorff in 1823. - -A computer program that finds a knight's tour for any starting position using Warnsdorff's rule was written by Gordon Horsington and published in 1984 in the book Century/Acorn User Book of Computer Puzzles. - -The knight's tour problem also lends itself to being solved by a neural network implementation. The network is set up such that every legal knight's move is represented by a neuron, and each neuron is initialized randomly to be either "active" or "inactive" (output of 1 or 0), with 1 implying that the neuron is part of the solution. Each neuron also has a state function (described below) which is initialized to 0. - -When the network is allowed to run, each neuron can change its state and output based on the states and outputs of its neighbors (those exactly one knight's move away) according to the following transition rules: -$$ -U_{t+1} (N_{i,j}) = U_t(N_{i,j}) + 2 - \sum_{N \in G(N_{i,j})} V_t(N) -$$ -$$ -V_{t+1} (N_{i,j}) = \left\{ \begin{array}{ll} 1 & \mbox{if } U_{t+1}(N_{i,j}) > 3 \\ 0 & \mbox{if } U_{t+1}(N_{i,j}) < 0 \\ V_t(N_{i,j}) & \mbox{otherwise}, \end{array} \right. -$$ - -where $t$ represents discrete intervals of time, $U(N_{i,j})$ is the state of the neuron connecting square $i$ to square $j$, $V(N_{i,j})$ is the output of the neuron from $i$ to $j$, and $G(N_{i,j})$ is the set of neighbors of the neuron. - -Although divergent cases are possible, the network should eventually converge, which occurs when no neuron changes its state from time $t$ to $t+1$. When the network converges, either the network encodes a knight's tour or a series of two or more independent circuits within the same board. diff --git a/wiki/wikipedia/2330.txt b/wiki/wikipedia/2330.txt deleted file mode 100644 index 2641284c4bfb8d2caded844b7bd7fcbd5fdc8ced..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2330.txt +++ /dev/null @@ -1,251 +0,0 @@ -In mathematical logic, a deduction theorem is a metatheorem that justifies doing conditional proofs — to prove an implication A → B, assume A as a hypothesis and then proceed to derive B — in systems that do not have an explicit inference rule for this. Deduction theorems exist for both propositional logic and first-order logic. The deduction theorem is an important tool in Hilbert-style deduction systems because it permits one to write more comprehensible and usually much shorter proofs than would be possible without it.
In certain other formal proof systems the same convenience is provided by an explicit inference rule; for example, natural deduction calls it implication introduction. - -In more detail, the propositional logic deduction theorem states that if a formula $B$ is deducible from a set of assumptions $\Delta \cup \{A\}$ then the implication $ A \to B $ is deducible from $\Delta $; in symbols, $\Delta \cup \{A\} \vdash B $ implies $\Delta \vdash A \to B $. In the special case where $\Delta $ is the empty set, the deduction theorem claim can be more compactly written as: $A \vdash B$ implies $\vdash A \to B$. The deduction theorem for predicate logic is similar, but comes with some extra constraints (that would for example be satisfied if $A$ is a closed formula). In general a deduction theorem needs to take into account all logical details of the theory under consideration, so each logical system technically needs its own deduction theorem, although the differences are usually minor. - -The deduction theorem holds for all first-order theories with the usual deductive systems for first-order logic. However, there are first-order systems in which new inference rules are added for which the deduction theorem fails. Most notably, the deduction theorem fails to hold in Birkhoff–von Neumann quantum logic, because the linear subspaces of a Hilbert space form a non-distributive lattice. - -#"Prove" axiom 1: - -#:*P 1. hypothesis - -#:**Q 2. hypothesis - -#:**P 3. reiteration of 1 - -#:*Q→P 4. deduction from 2 to 3 - -#:P→(Q→P) 5. deduction from 1 to 4 QED - -#"Prove" axiom 2: - -#:*P→(Q→R) 1. hypothesis - -#:**P→Q 2. hypothesis - -#:***P 3. hypothesis - -#:***Q 4. modus ponens 3,2 - -#:***Q→R 5. modus ponens 3,1 - -#:***R 6. modus ponens 4,5 - -#:**P→R 7. deduction from 3 to 6 - -#:*(P→Q)→(P→R) 8. deduction from 2 to 7 - -#:(P→(Q→R))→((P→Q)→(P→R)) 9. deduction from 1 to 8 QED - -#Using axiom 1 to show ((P→(Q→P))→R)→R: - -#:*(P→(Q→P))→R 1. hypothesis - -#:*P→(Q→P) 2. axiom 1 - -#:*R 3. modus ponens 2,1 - -#:((P→(Q→P))→R)→R 4. deduction from 1 to 3 QED - -From the examples, you can see that we have added three virtual (or extra and temporary) rules of inference to our normal axiomatic logic. These are "hypothesis", "reiteration", and "deduction". The normal rules of inference (i.e. "modus ponens" and the various axioms) remain available. - -1. Hypothesis is a step where one adds an additional premise to those already available. So, if your previous step S was deduced as: -$$ - E_1, E_2, ... , E_{n-1}, E_n \vdash S, -$$ - -then one adds another premise H and gets: -$$ - E_1, E_2, ... , E_{n-1}, E_n, H \vdash H. -$$ - -This is symbolized by moving from the n-th level of indentation to the n+1-th level and saying - -*S previous step - -**H hypothesis - -2. Reiteration is a step where one re-uses a previous step. In practice, this is only necessary when one wants to take a hypothesis which is not the most recent hypothesis and use it as the final step before a deduction step. - -3. Deduction is a step where one removes the most recent hypothesis (still available) and prefixes it to the previous step. This is shown by unindenting one level as follows: - -*H hypothesis - -*.........
(other steps) - -*C (conclusion drawn from H) - -*H→C deduction - -In axiomatic versions of propositional logic, one usually has among the axiom schemas (where P, Q, and R are replaced by any propositions): - -*Axiom 1 is: P→(Q→P) - -*Axiom 2 is: (P→(Q→R))→((P→Q)→(P→R)) - -*Modus ponens is: from P and P→Q infer Q - -These axiom schemas are chosen to enable one to derive the deduction theorem from them easily. So it might seem that we are begging the question. However, they can be justified by checking that they are tautologies using truth tables and that modus ponens preserves truth. - -From these axiom schemas one can quickly deduce the theorem schema P→P (reflexivity of implication) which is used below: - -# (P→((Q→P)→P))→((P→(Q→P))→(P→P)) from axiom schema 2 with P, (Q→P), P - -# P→((Q→P)→P) from axiom schema 1 with P, (Q→P) - -# (P→(Q→P))→(P→P) from modus ponens applied to step 2 and step 1 - -# P→(Q→P) from axiom schema 1 with P, Q - -# P→P from modus ponens applied to step 4 and step 3 - -Suppose that we have that Γ and H prove C, and we wish to show that Γ proves H→C. For each step S in the deduction which is a premise in Γ (a reiteration step) or an axiom, we can apply modus ponens to the axiom 1, S→(H→S), to get H→S. If the step is H itself (a hypothesis step), we apply the theorem schema to get H→H. If the step is the result of applying modus ponens to A and A→S, we first make sure that these have been converted to H→A and H→(A→S) and then we take the axiom 2, (H→(A→S))→((H→A)→(H→S)), and apply modus ponens to get (H→A)→(H→S) and then again to get H→S. At the end of the proof we will have H→C as required, except that now it only depends on Γ, not on H. So the deduction step will disappear, consolidated into the previous step which was the conclusion derived from H. - -To minimize the complexity of the resulting proof, some preprocessing should be done before the conversion. Any steps (other than the conclusion) which do not actually depend on H should be moved up before the hypothesis step and unindented one level. And any other unnecessary steps (which are not used to get the conclusion or can be bypassed), such as reiterations which are not the conclusion, should be eliminated. - -During the conversion, it may be useful to put all the applications of modus ponens to axiom 1 at the beginning of the deduction (right after the H→H step). - -When converting a modus ponens, if A is outside the scope of H, then it will be necessary to apply axiom 1, A→(H→A), and modus ponens to get H→A. Similarly, if A→S is outside the scope of H, apply axiom 1, (A→S)→(H→(A→S)), and modus ponens to get H→(A→S). It should not be necessary to do both of these, unless the modus ponens step is the conclusion, because if both are outside the scope, then the modus ponens should have been moved up before H and thus be outside the scope also. - -Under the Curry–Howard correspondence, the above conversion process for the deduction meta-theorem is analogous to the conversion process from lambda calculus terms to terms of combinatory logic, where axiom 1 corresponds to the K combinator, and axiom 2 corresponds to the S combinator. Note that the I combinator corresponds to the theorem schema P→P. 
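The correspondence is easy to check concretely. In the sketch below (illustrative only, using Python lambdas as a stand-in for combinatory terms), axiom 1 plays the role of K, axiom 2 the role of S, and the derived schema P→P appears as I = S K K:

```python
K = lambda a: lambda b: a                     # axiom 1: P -> (Q -> P)
S = lambda f: lambda g: lambda x: f(x)(g(x))  # axiom 2: (P->(Q->R))->((P->Q)->(P->R))
I = S(K)(K)                                   # theorem schema P -> P

assert K("p")("q") == "p"
assert I("p") == "p"  # S K K x reduces to K x (K x) = x, mirroring the derivation of P -> P
```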
- -If one intends to convert a complicated proof using the deduction theorem to a straight-line proof not using the deduction theorem, then it would probably be useful to prove these theorems once and for all at the beginning and then use them to help with the conversion: -$$ -A \to A -$$ - -helps convert the hypothesis steps. -$$ -(B \to C) \to ((A \to B) \to (A \to C)) -$$ - -helps convert modus ponens when the major premise is not dependent on the hypothesis; it replaces axiom 2 while avoiding a use of axiom 1. -$$ -(A \to (B \to C)) \to (B \to (A \to C)) -$$ - -helps convert modus ponens when the minor premise is not dependent on the hypothesis; it replaces axiom 2 while avoiding a use of axiom 1. - -These two theorems jointly can be used in lieu of axiom 2, although the converted proof would be more complicated: -$$ -(A \to B) \to ((B \to C) \to (A \to C)) -$$ -$$ -(A \to (A \to C)) \to (A \to C) -$$ - -Peirce's law is not a consequence of the deduction theorem, but it can be used with the deduction theorem to prove things which one might not otherwise be able to prove. -$$ -((A \to B) \to A) \to A -$$ - -It can also be used to get the second of the two theorems which can be used in lieu of axiom 2. - -We prove the deduction theorem in a Hilbert-style deductive system of propositional calculus. - -Let $\Delta $ be a set of formulas and $A$ and $B$ formulas, such that $\Delta \cup \{A\} \vdash B $. We want to prove that $\Delta \vdash A \to B $. - -Since $\Delta \cup \{A\} \vdash B $, there is a proof of $B$ from $\Delta \cup \{A\}$. We prove the theorem by induction on the proof length n; thus the induction hypothesis is that for any $\Delta$, $A$ and $B$ such that there is a proof of $B$ from $\Delta \cup \{A\}$ of length up to n, $\Delta \vdash A \to B $ holds. - -If n = 1 then $B$ is a member of the set of formulas $\Delta \cup \{A\}$. Thus either $B=A$, in which case $A \to B $ is simply $A \to A $, which is derivable by substitution from p → p, which is in turn derivable from the axioms, hence also $\Delta \vdash A \to B $; or $B$ is in $\Delta$, in which case $\Delta \vdash B $; it follows from axiom p → (q → p) with substitution that $\Delta \vdash B \to (A \to B) $ and hence by modus ponens that $\Delta \vdash A \to B $. - -Now let us assume the induction hypothesis for proofs of length up to n, and let $B$ be a formula provable from $\Delta \cup \{A\}$ with a proof of length n+1. Then there are three possibilities: - -#$B$ is a member of the set of formulas $\Delta \cup \{A\}$; in this case we proceed as in the base case n = 1. - -#$B$ is arrived at by a substitution on a formula φ. Then φ is proven from $\Delta \cup \{A\}$ with at most n steps, hence by the induction hypothesis $\Delta \vdash A \to \varphi $, where we may write A and φ with different variables. But then we may arrive from $A \to \varphi $ at $A \to B $ by the same substitution which is used to derive $B$ from φ; thus $\Delta \vdash A \to B $. - -#$B$ is arrived at by using modus ponens. Then there is a formula C such that $\Delta \cup \{A\}$ proves $C$ and $C \to B $, and modus ponens is then used to prove $B$. The proofs of $C$ and $C \to B $ are with at most n steps, and by the induction hypothesis we have $\Delta \vdash A \to C $ and $\Delta \vdash A \to (C \to B) $. By the axiom (p → (q → r)) → ((p → q) → (p → r)) with substitution it follows that $\Delta \vdash (A \to (C \to B)) \to ((A \to C) \to (A \to B))$, and by using modus ponens twice we have $\Delta \vdash A \to B $.
- -Thus in all cases the theorem holds also for n+1, and by induction the deduction theorem is proven. - -The deduction theorem is also valid in first-order logic in the following form: - -*If T is a theory and F, G are formulas with F closed, and $T \cup \{ F \} \vdash G$, then $T \vdash F \rightarrow G$. - -Here, the symbol $\vdash$ means "is a syntactical consequence of." We indicate below how the proof of this deduction theorem differs from that of the deduction theorem in propositional calculus. - -In the most common versions of the notion of formal proof, there are, in addition to the axiom schemes of propositional calculus (or the understanding that all tautologies of propositional calculus are to be taken as axiom schemes in their own right), quantifier axioms, and in addition to modus ponens, one additional rule of inference, known as the rule of generalization: "From K, infer ∀vK." - -In order to convert a proof of G from T∪{F} to one of F→G from T, one deals with steps of the proof of G which are axioms or result from application of modus ponens in the same way as for proofs in propositional logic. Steps which result from application of the rule of generalization are dealt with via the following quantifier axiom (valid whenever the variable v is not free in formula H): - -*(∀v(H→K))→(H→∀vK). - -Since in our case F is assumed to be closed, we can take H to be F. This axiom allows one to deduce F→∀vK from F→K and generalization, which is just what is needed whenever the rule of generalization is applied to some K in the proof of G. - -In first-order logic, the restriction that F be a closed formula can be relaxed, given that the free variables in F have not been varied in the deduction of G from $T \cup \{ F \}$. In the case that a free variable v in F has been varied in the deduction, we write $T \cup \{ F \} \vdash^v G$ (the superscript in the turnstile indicating that v has been varied) and the corresponding form of the deduction theorem is $T \vdash (\forall vF) \rightarrow G$. - -To illustrate how one can convert a natural deduction to the axiomatic form of proof, we apply it to the tautology Q→((Q→R)→R). In practice, it is usually enough to know that we could do this. We normally use the natural-deductive form in place of the much longer axiomatic proof. - -First, we write a proof using a natural-deduction like method: - -*Q 1. hypothesis - -**Q→R 2. hypothesis - -**R 3. modus ponens 1,2 - -*(Q→R)→R 4. deduction from 2 to 3 - -*Q→((Q→R)→R) 5. deduction from 1 to 4 QED - -Second, we convert the inner deduction to an axiomatic proof: - -*(Q→R)→(Q→R) 1. theorem schema (A→A) - -*((Q→R)→(Q→R))→(((Q→R)→Q)→((Q→R)→R)) 2. axiom 2 - -*((Q→R)→Q)→((Q→R)→R) 3. modus ponens 1,2 - -*Q→((Q→R)→Q) 4. axiom 1 - -**Q 5. hypothesis - -**(Q→R)→Q 6. modus ponens 5,4 - -**(Q→R)→R 7. modus ponens 6,3 - -*Q→((Q→R)→R) 8. deduction from 5 to 7 QED - -Third, we convert the outer deduction to an axiomatic proof: - -*(Q→R)→(Q→R) 1. theorem schema (A→A) - -*((Q→R)→(Q→R))→(((Q→R)→Q)→((Q→R)→R)) 2. axiom 2 - -*((Q→R)→Q)→((Q→R)→R) 3. modus ponens 1,2 - -*Q→((Q→R)→Q) 4. axiom 1 - -*[((Q→R)→Q)→((Q→R)→R)]→[Q→(((Q→R)→Q)→((Q→R)→R))] 5. axiom 1 - -*Q→(((Q→R)→Q)→((Q→R)→R)) 6. modus ponens 3,5 - -*[Q→(((Q→R)→Q)→((Q→R)→R))]→([Q→((Q→R)→Q)]→[Q→((Q→R)→R)]) 7. axiom 2 - -*[Q→((Q→R)→Q)]→[Q→((Q→R)→R)] 8. modus ponens 6,7 - -*Q→((Q→R)→R) 9. modus ponens 4,8 QED - -These three steps can be stated succinctly using the Curry–Howard correspondence: - -*first, in lambda calculus, the function f = λa.
λb. b a has type q → (q → r) → r - -*second, by lambda elimination on b, f = λa. s i (k a) - -*third, by lambda elimination on a, f = s (k (s i)) k diff --git a/wiki/wikipedia/2331.txt b/wiki/wikipedia/2331.txt deleted file mode 100644 index c764491a390fbd456f7e34c4c9d90a5fb03c7fd3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2331.txt +++ /dev/null @@ -1,5 +0,0 @@ -Proof of Fermat's last theorem may refer to: - -* Wiles's proof of Fermat's Last Theorem - -* Proof of Fermat's Last Theorem for specific exponents diff --git a/wiki/wikipedia/2332.txt b/wiki/wikipedia/2332.txt deleted file mode 100644 index 12c92bfe1fed69d6222cebb530bdb2712d7e5b6b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2332.txt +++ /dev/null @@ -1,11 +0,0 @@ -In number theory, Sophie Germain's theorem is a statement about the divisibility of solutions to the equation $x^p + y^p = z^p$ of Fermat's Last Theorem for an odd prime $p$. - -Specifically, Sophie Germain proved that at least one of the numbers $x$, $y$, $z$ must be divisible by $p^2$ if an auxiliary prime $q$ can be found such that two conditions are satisfied: - -# No two nonzero $p^{\mathrm{th}}$ powers differ by one modulo $q$; and - -# $p$ is itself not a $p^{\mathrm{th}}$ power modulo $q$. - -Conversely, the first case of Fermat's Last Theorem (the case in which $p$ does not divide $xyz$) must hold for every prime $p$ for which even one auxiliary prime can be found. - -Germain identified such an auxiliary prime $q$ for every prime less than 100. The theorem and its application to primes $p$ less than 100 were attributed to Germain by Adrien-Marie Legendre in 1823. diff --git a/wiki/wikipedia/2333.txt b/wiki/wikipedia/2333.txt deleted file mode 100644 index 6f82a746baf6cc41a8f9aed0855186c963c054ac..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2333.txt +++ /dev/null @@ -1,15 +0,0 @@ -In algebraic geometry, the Reiss relation, introduced by Reiss, is a condition on the second-order elements of the points of a plane algebraic curve meeting a given line. - -If C is a complex plane curve given by the zeros of a polynomial f(x,y) of two variables, and L is a line meeting C transversely and not meeting C at infinity, then -$$ -\sum\frac{f_{xx}f_y^2-2f_{xy}f_xf_y+f_{yy}f_x^2}{f_y^3}=0 -$$ - -where the sum is over the points of intersection of C and L, and fx, fxy and so on stand for partial derivatives of f. - -This can also be written as -$$ -\sum\frac{\kappa}{\sin(\theta)^3}=0 -$$ - -where κ is the curvature of the curve C and θ is the angle its tangent line makes with L, and the sum is again over the points of intersection of C and L. diff --git a/wiki/wikipedia/2334.txt b/wiki/wikipedia/2334.txt deleted file mode 100644 index 26749219c3e096536b44f3b53f601f7b0f989afe..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2334.txt +++ /dev/null @@ -1,128 +0,0 @@ -In mathematics, Choi's theorem on completely positive maps is a result that classifies completely positive maps between finite-dimensional (matrix) C*-algebras. An infinite-dimensional algebraic generalization of Choi's theorem is known as Belavkin's "Radon-Nikodym" theorem for completely positive maps. - -Choi's theorem. Let $\Phi : \mathbb{C}^{n\times n} \to \mathbb{C}^{m\times m}$ be a linear map. The following are equivalent: - -(i) Φ is n-positive (i.e. $(\operatorname{id}_n\otimes\Phi)(A)$ is positive whenever $A\in\mathbb{C}^{n\times n}\otimes\mathbb{C}^{n\times n}$ is positive).
- -(ii) The matrix with operator entries -$$ -C_\Phi= \left (\operatorname{id}_n\otimes\Phi \right ) \left (\sum_{ij}E_{ij}\otimes E_{ij} \right ) = \sum_{ij}E_{ij}\otimes\Phi(E_{ij}) \in \mathbb{C} ^{nm \times nm} -$$ - -is positive, where $E_{ij} \in \mathbb{C}^{n\times n}$ is the matrix with 1 in the ij-th entry and 0s elsewhere. (The matrix $C_\Phi$ is sometimes called the Choi matrix of Φ.) - -(iii) Φ is completely positive. - -(i) ⇒ (ii): We observe that if -$$ -E=\sum_{ij}E_{ij}\otimes E_{ij}, -$$ - -then $E=E^*$ and $E^2=nE$, so $E=n^{-1}EE^*$, which is positive. Therefore $C_\Phi =(\operatorname{id}_n \otimes \Phi)(E)$ is positive by the n-positivity of Φ. - -(iii) ⇒ (i): This holds trivially. - -(ii) ⇒ (iii): This mainly involves chasing the different ways of looking at $\mathbb{C}^{nm\times nm}$: -$$ -\mathbb{C}^{nm\times nm} \cong\mathbb{C}^{nm}\otimes(\mathbb{C}^{nm})^* \cong\mathbb{C}^n\otimes\mathbb{C}^m\otimes(\mathbb{C}^n\otimes\mathbb{C}^m)^* \cong\mathbb{C}^n\otimes(\mathbb{C}^n)^*\otimes\mathbb{C}^m\otimes(\mathbb{C}^m)^* \cong\mathbb{C}^{n\times n}\otimes\mathbb{C}^{m\times m}. -$$ - -Let the eigenvector decomposition of $C_\Phi$ be -$$ -C_\Phi = \sum _{i = 1} ^{nm} \lambda_i v_i v_i ^*, -$$ - -where the vectors $v_i$ lie in $\mathbb{C}^{nm}$. By assumption, each eigenvalue $\lambda_i$ is non-negative so we can absorb the eigenvalues in the eigenvectors and redefine $v_i$ so that -$$ - C_\Phi = \sum _{i = 1} ^{nm} v_i v_i ^* . -$$ - -The vector space $\mathbb{C}^{nm}$ can be viewed as the direct sum $\textstyle \oplus_{i=1}^n \mathbb{C}^m$ compatibly with the above identification $\textstyle\mathbb{C}^{nm}\cong\mathbb{C}^n\otimes\mathbb{C}^m$ and the standard basis of $\mathbb{C}^n$. - -If $P_k \in \mathbb{C}^{m \times nm}$ is the projection onto the k-th copy of $\mathbb{C}^m$, then $P_k^* \in \mathbb{C}^{nm\times m}$ is the inclusion of $\mathbb{C}^m$ as the k-th summand of the direct sum and -$$ - \Phi (E_{kl}) = P_k \cdot C_\Phi \cdot P_l^* = \sum _{i = 1} ^{nm} P_k v_i ( P_l v_i )^*. -$$ - -Now if the operators $V_i \in \mathbb{C}^{m\times n}$ are defined on the k-th standard basis vector $e_k$ of $\mathbb{C}^n$ by -$$ - V_i e_k = P_k v_i, -$$ - -then -$$ -\Phi (E_{kl}) = \sum _{i = 1} ^{nm} P_k v_i ( P_l v_i )^* = \sum _{i = 1} ^{nm} V_i e_k e_l ^* V_i ^* = \sum _{i = 1} ^{nm} V_i E_{kl} V_i ^*. -$$ - -Extending by linearity gives us -$$ -\Phi(A) = \sum_{i=1}^{nm} V_i A V_i^* -$$ - -for any $A \in \mathbb{C}^{n\times n}$. Any map of this form is manifestly completely positive: the map $A \mapsto V_i A V_i^*$ is completely positive, and the sum (across $i$) of completely positive maps is again completely positive. Thus $\Phi$ is completely positive, the desired result. - -The above is essentially Choi's original proof. Alternative proofs have also been known. - -In the context of quantum information theory, the operators $\{V_i\}$ are called the Kraus operators (after Karl Kraus) of Φ. Notice, given a completely positive Φ, its Kraus operators need not be unique. For example, any "square root" factorization of the Choi matrix $C_\Phi = B^*B$ gives a set of Kraus operators. - -Let -$$ -B^* = [b_1, \ldots, b_{nm}], -$$ - -where the $b_i^*$'s are the row vectors of B; then -$$ -C_\Phi = \sum _{i = 1} ^{nm} b_i b_i ^*. -$$ - -The corresponding Kraus operators can be obtained by exactly the same argument from the proof. - -When the Kraus operators are obtained from the eigenvector decomposition of the Choi matrix, because the eigenvectors form an orthogonal set, the corresponding Kraus operators are also orthogonal in the Hilbert–Schmidt inner product. This is not true in general for Kraus operators obtained from square root factorizations. (Positive semidefinite matrices do not generally have a unique square-root factorization.)
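The eigendecomposition construction in the proof is short enough to run numerically. The following NumPy sketch (the helper name, the tolerance, and the test map are illustrative assumptions, not part of Choi's paper) builds the Choi matrix of a map given as a Python function and reads off Kraus operators from the positive eigenvalues:

```python
import numpy as np

def kraus_from_choi(phi, n, m):
    """Build C_Phi = sum_ij E_ij (x) phi(E_ij) and extract Kraus operators
    from its eigendecomposition, following the proof above."""
    C = np.zeros((n * m, n * m), dtype=complex)
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n)); E[i, j] = 1.0
            C += np.kron(E, phi(E))
    eigvals, eigvecs = np.linalg.eigh(C)  # C is Hermitian (and PSD for CP maps)
    kraus = []
    for lam, v in zip(eigvals, eigvecs.T):
        if lam > 1e-12:  # absorb the eigenvalue into the eigenvector
            kraus.append(np.sqrt(lam) * v.reshape(n, m).T)  # V_i e_k = P_k v_i
    return kraus

# sanity check on the identity map Phi(A) = A, with n = m = 2
ops = kraus_from_choi(lambda A: A, 2, 2)
A = np.array([[1.0, 2.0j], [0.0, 1.0]])
assert np.allclose(sum(V @ A @ V.conj().T for V in ops), A)
```

For the identity map the Choi matrix has rank one, so a single Kraus operator (the identity, up to phase) is recovered.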
- -If two sets of Kraus operators $\{A_i\}_{i=1}^{nm}$ and $\{B_i\}_{i=1}^{nm}$ represent the same completely positive map Φ, then there exists a unitary operator matrix -$$ -\{U_{ij}\}_{ij} \in \mathbb{C}^{nm^2 \times nm^2} \quad \text{such that} \quad A_i = \sum _{j} U_{ij} B_j. -$$ - -This can be viewed as a special case of the result relating two minimal Stinespring representations. - -Alternatively, there is an isometry scalar matrix $\{u_{ij}\}_{ij} \in \mathbb{C}^{nm \times nm}$ such that -$$ -A_i = \sum _{j} u_{ij} B_j. -$$ - -This follows from the fact that for two square matrices M and N, $M M^* = N N^*$ if and only if $M = N U$ for some unitary U. - -It follows immediately from Choi's theorem that Φ is completely copositive if and only if it is of the form -$$ -\Phi(A) = \sum _i V_i A^T V_i ^* . -$$ - -Choi's technique can be used to obtain a similar result for a more general class of maps. Φ is said to be Hermitian-preserving if A Hermitian implies that Φ(A) is also Hermitian. One can show Φ is Hermitian-preserving if and only if it is of the form -$$ -\Phi (A) = \sum_{i=1} ^{nm} \lambda_i V_i A V_i ^* -$$ - -where $\lambda_i$ are real numbers, the eigenvalues of $C_\Phi$, and each $V_i$ corresponds to an eigenvector of $C_\Phi$. Unlike the completely positive case, $C_\Phi$ may fail to be positive. Since Hermitian matrices do not admit factorizations of the form $B^*B$ in general, the Kraus representation is no longer possible for a given Φ. diff --git a/wiki/wikipedia/2335.txt b/wiki/wikipedia/2335.txt deleted file mode 100644 index b16c8b6b80f72373daae9a0f06da0ce66538bde9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2335.txt +++ /dev/null @@ -1,5 +0,0 @@ -Helen C. Purchase is a researcher in information visualization, graph drawing, and human–computer interaction who works as a senior lecturer in the School of Computing Science of the University of Glasgow. - -Purchase earned a doctorate at the University of Cambridge in 1992. Before moving to Glasgow, she was a senior lecturer at the University of Queensland, Australia, where she won a teaching award in 1999. - -She is the author of the book Experimental Human-Computer Interaction (Cambridge University Press, 2012), and was a keynote speaker at the 7th International Symposium on Visual Information Communication & Interaction (VINCI 2014) in Sydney, Australia. diff --git a/wiki/wikipedia/2336.txt b/wiki/wikipedia/2336.txt deleted file mode 100644 index 2b7bb8fd6ec85598faf1efd08cd21ab41672fb82..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2336.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, a univariate object is an expression, equation, function or polynomial involving only one variable. Objects involving more than one variable are multivariate. In some cases the distinction between the univariate and multivariate cases is fundamental; for example, the fundamental theorem of algebra and Euclid's algorithm for polynomials are fundamental properties of univariate polynomials that cannot be generalized to multivariate polynomials. - -In statistics, a univariate distribution characterizes one variable, although it can be applied in other ways as well. For example, univariate data are composed of a single scalar component. In time series analysis, the whole time series is the "variable": a univariate time series is the series of values over time of a single quantity. Correspondingly, a "multivariate time series" characterizes the changing values over time of several quantities.
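A small, hypothetical illustration of the time-series usage: a univariate series is a single sequence of values, while a multivariate series carries several quantities per time step.

```python
import numpy as np

temps = np.array([19.1, 19.4, 20.0, 20.3])           # univariate: one quantity over time
weather = np.array([[19.1, 1012.0], [19.4, 1011.0],  # multivariate: temperature and
                    [20.0, 1011.0], [20.3, 1009.0]]) # pressure evolving together
print(temps.shape, weather.shape)  # (4,) versus (4, 2)
```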
In some cases, the terminology is ambiguous, since the values within a univariate time series may be treated using certain types of multivariate statistical analyses and may be represented using multivariate distributions. - -In addition to the question of scaling, a criterion (variable) in univariate statistics can be described by two important kinds of measures (also called key figures or parameters): location and variation. - -* Measures of location (e.g. mode, median, arithmetic mean) describe where the data are centered. - -* Measures of variation (e.g. range, interquartile range, standard deviation) describe how widely the data are scattered. diff --git a/wiki/wikipedia/2337.txt b/wiki/wikipedia/2337.txt deleted file mode 100644 index 8915d0cc50bfd77c1bccd8befbc4e00e5d15d3c7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2337.txt +++ /dev/null @@ -1,20 +0,0 @@ -In mathematics, the Brauer–Siegel theorem, named after Richard Brauer and Carl Ludwig Siegel, is an asymptotic result on the behaviour of algebraic number fields. It attempts to generalise the results known on the class numbers of imaginary quadratic fields to a more general sequence of number fields -$$ -K_1, K_2, \ldots.\ -$$ - -In all cases other than the rational field Q and imaginary quadratic fields, the regulator Ri of Ki must be taken into account, because Ki then has units of infinite order by Dirichlet's unit theorem. The quantitative hypothesis of the standard Brauer–Siegel theorem is that if Di is the discriminant of Ki, then -$$ - \frac{[K_i : \mathbf Q]}{\log|D_i|} \to 0\text{ as }i \to\infty. -$$ - -Assuming that, and the algebraic hypothesis that Ki is a Galois extension of Q, the conclusion is that -$$ - \frac{ \log(h_i R_i) }{ \log\sqrt{|D_i|} } \to 1\text{ as }i \to\infty -$$ - -where hi is the class number of Ki. If one assumes that all the degrees $[K_i : \mathbf Q]$ are bounded above by a uniform constant N, then one may drop the assumption of normality; this is what is actually proved in Brauer's paper. - -This result is ineffective, as indeed was the result on quadratic fields on which it built. Effective results in the same direction were initiated in work of Harold Stark from the early 1970s. diff --git a/wiki/wikipedia/2338.txt b/wiki/wikipedia/2338.txt deleted file mode 100644 index fb81db0ed80f3c20e2e4ba1ce682086cc06ae18b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2338.txt +++ /dev/null @@ -1,122 +0,0 @@ -The fluctuation–dissipation theorem (FDT) or fluctuation–dissipation relation (FDR) is a powerful tool in statistical physics for predicting the behavior of systems that obey detailed balance. Given that a system obeys detailed balance, the theorem is a general proof that thermodynamic fluctuations in a physical variable predict the response quantified by the admittance or impedance (to be intended in their general sense, not only in electromagnetic terms) of the same physical variable (like voltage, temperature difference, etc.), and vice versa. The fluctuation–dissipation theorem applies both to classical and quantum mechanical systems. - -The fluctuation–dissipation theorem was proven by Herbert Callen and Theodore Welton in 1951 and expanded by Ryogo Kubo. There are antecedents to the general theorem, including Einstein's explanation of Brownian motion during his annus mirabilis and Harry Nyquist's explanation in 1928 of Johnson noise in electrical resistors.
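Before the qualitative examples, a minimal numerical sketch (in illustrative units; gamma, kT, and m are arbitrary choices) shows the quantitative content of the theorem in its simplest setting: when the strength of the random thermal force is tied to the drag coefficient, the simulated velocity settles at the equipartition value ⟨v²⟩ = k_BT/m.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, kT, m, dt = 1.0, 1.0, 1.0, 1e-3  # drag rate, temperature, mass, time step
v, samples = 0.0, []
for step in range(200_000):
    # drag dissipates kinetic energy; the matched noise term restores it
    v += -gamma * v * dt + np.sqrt(2.0 * gamma * kT / m * dt) * rng.normal()
    if step > 50_000:  # discard the initial transient
        samples.append(v * v)
print(np.mean(samples), kT / m)  # both approximately 1.0
```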
- -The fluctuation–dissipation theorem says that when there is a process that dissipates energy, turning it into heat (e.g., friction), there is a reverse process related to thermal fluctuations. This is best understood by considering some examples: - -* Drag and Brownian motion - -If an object is moving through a fluid, it experiences drag (air resistance or fluid resistance). Drag dissipates kinetic energy, turning it into heat. The corresponding fluctuation is Brownian motion. An object in a fluid does not sit still, but rather moves around with a small and rapidly-changing velocity, as molecules in the fluid bump into it. Brownian motion converts heat energy into kinetic energy—the reverse of drag. - -* Resistance and Johnson noise - -If electric current is running through a wire loop with a resistor in it, the current will rapidly go to zero because of the resistance. Resistance dissipates electrical energy, turning it into heat (Joule heating). The corresponding fluctuation is Johnson noise. A wire loop with a resistor in it does not actually have zero current, it has a small and rapidly-fluctuating current caused by the thermal fluctuations of the electrons and atoms in the resistor. Johnson noise converts heat energy into electrical energy—the reverse of resistance. - -* Light absorption and thermal radiation - -When light impinges on an object, some fraction of the light is absorbed, making the object hotter. In this way, light absorption turns light energy into heat. The corresponding fluctuation is thermal radiation (e.g., the glow of a "red hot" object). Thermal radiation turns heat energy into light energy—the reverse of light absorption. Indeed, Kirchhoff's law of thermal radiation confirms that the more effectively an object absorbs light, the more thermal radiation it emits. - -The fluctuation–dissipation theorem is a general result of statistical thermodynamics that quantifies the relation between the fluctuations in a system that obeys detailed balance and the response of the system to applied perturbations. - -For example, Albert Einstein noted in his 1905 paper on Brownian motion that the same random forces that cause the erratic motion of a particle in Brownian motion would also cause drag if the particle were pulled through the fluid. In other words, the fluctuation of the particle at rest has the same origin as the dissipative frictional force one must do work against, if one tries to perturb the system in a particular direction. - -From this observation Einstein was able to use statistical mechanics to derive the Einstein–Smoluchowski relation -$$ - D = {\mu k_{\rm B} T} -$$ - -which connects the diffusion constant D and the particle mobility μ, the ratio of the particle's terminal drift velocity to an applied force. kB is the Boltzmann constant, and T is the absolute temperature. - -In 1928, John B. Johnson discovered and Harry Nyquist explained Johnson–Nyquist noise. With no applied current, the mean-square voltage depends on the resistance $R$, $k_{\rm B}T$, and the bandwidth $\Delta\nu$ over which the voltage is measured: -$$ - \langle V^2 \rangle \approx 4Rk_{\rm B}T\Delta\nu. -$$ - -This observation can be understood through the lens of the fluctuation-dissipation theorem. Take, for example, a simple circuit consisting of a resistor with a resistance $R$ and a capacitor with a small capacitance $C$. 
Kirchhoff's law yields -$$ -V=-R\frac{dQ}{dt}+\frac{Q}{C} -$$ - -and so the response function for this circuit is -$$ -\chi(\omega)\equiv\frac{Q(\omega)}{V(\omega)}=\frac{1}{\frac{1}{C}-i\omega R} -$$ - -In the low-frequency limit $\omega\ll (RC)^{-1}$, its imaginary part is simply -$$ -\text{Im}\left[\chi(\omega)\right]\approx \omega RC^2 -$$ - -which can then be linked to the power spectral density $S_V(\omega)$ of the voltage via the fluctuation-dissipation theorem -$$ -S_V(\omega)=\frac{S_Q(\omega)}{C^2}\approx \frac{2k_{\rm B}T}{C^2\omega}\text{Im}\left[\chi(\omega)\right]=2Rk_{\rm B}T -$$ - -The Johnson–Nyquist voltage noise $\langle V^2 \rangle$ was observed within a small frequency bandwidth $\Delta \nu=\Delta\omega/(2\pi)$ centered around $\omega=\pm \omega_0$. Hence -$$ -\langle V^2 \rangle\approx S_V(\omega)\times 2\Delta \nu\approx 4Rk_{\rm B}T\Delta \nu -$$ - -The fluctuation–dissipation theorem can be formulated in many ways; one particularly useful form is the following: - -Let $x(t)$ be an observable of a dynamical system with Hamiltonian $H_0(x)$ subject to thermal fluctuations. The observable $x(t)$ will fluctuate around its mean value $\langle x\rangle_0$ with fluctuations characterized by a power spectrum $S_x(\omega) = \langle \hat{x}(\omega)\hat{x}^*(\omega) \rangle$. Suppose that we can switch on a time-varying, spatially constant field $f(t)$ which alters the Hamiltonian to $H(x)=H_0(x)-f(t)x$. - -The response of the observable $x(t)$ to a time-dependent field $f(t)$ is characterized to first order by the susceptibility or linear response function $\chi(t)$ of the system -$$ - \langle x(t) \rangle = \langle x \rangle_0 + \int\limits_{-\infty}^{t} \! f(\tau) \chi(t-\tau)d\tau, -$$ - -where the perturbation is adiabatically (very slowly) switched on at $\tau =-\infty$. - -The fluctuation–dissipation theorem relates the two-sided power spectrum (i.e. both positive and negative frequencies) of $x$ to the imaginary part of the Fourier transform $\hat{\chi}(\omega)$ of the susceptibility $\chi(t)$: -$$ -S_x(\omega) = \frac{2 k_\mathrm{B} T}{\omega} \mathrm{Im}\hat{\chi}(\omega). -$$ - -The left-hand side describes fluctuations in $x$; the right-hand side is closely related to the energy dissipated by the system when pumped by an oscillatory field $f(t) = F \sin(\omega t + \phi)$. - -This is the classical form of the theorem; quantum fluctuations are taken into account by replacing $2 k_\mathrm{B} T/\omega$ with ${\hbar}\coth(\hbar\omega/2k_\mathrm{B}T)$ (whose limit for $\hbar\to 0$ is $2 k_\mathrm{B} T/\omega$). A proof can be found by means of the LSZ reduction, an identity from quantum field theory. - -The fluctuation–dissipation theorem can be generalized in a straightforward way to the case of space-dependent fields, to the case of several variables, or to a quantum-mechanics setting. - -To study the violation of the fluctuation-dissipation relation in glassy systems, particularly spin glasses, numerical simulations of macroscopic systems (i.e. large compared to their correlation lengths) described by the three-dimensional Edwards-Anderson model have been performed using supercomputers. In these simulations, the system is initially prepared at a high temperature, rapidly cooled to a temperature $T=0.64 T_{\rm g}$ below the glass temperature $T_{\rm g}$, and left to equilibrate for a very long time $t_{\rm w}$ under a magnetic field $H$.
Then, at a later time $t+t_{\rm w}$, two dynamical observables are probed, namely the response function -$$ -\chi(t+t_{\rm w},t_{\rm w})\equiv\left.\frac{\partial m(t+t_{\rm w})}{\partial H}\right|_{H=0} -$$ - -and the spin-temporal correlation function -$$ -C(t+t_{\rm w},t_{\rm w})\equiv \frac{1}{V}\left.\sum_{x}\langle S_x(t_{\rm w}) S_x(t+t_{\rm w})\rangle\right|_{H=0} -$$ - -where $S_x=\pm 1$ is the spin living on the node $x$ of the cubic lattice of volume $V$, and $m(t)\equiv \frac{1}{V}\sum_{x} \langle S_{x}(t)\rangle$ is the magnetization density. The fluctuation-dissipation relation in this system can be written in terms of these observables as -$$ -T\chi(t+t_{\rm w}, t_{\rm w})=1-C(t+t_{\rm w}, t_{\rm w}) -$$ - -The results confirm the expectation that, as the system is left to equilibrate for longer times, the fluctuation-dissipation relation comes closer to being satisfied. - -In the mid-1990s, in the study of the dynamics of spin glass models, a generalization of the fluctuation–dissipation theorem was discovered that holds for asymptotic non-stationary states, where the temperature appearing in the equilibrium relation is substituted by an effective temperature with a non-trivial dependence on the time scales. This relation is proposed to hold in glassy systems beyond the models for which it was initially found. - -The Rényi entropy as well as the von Neumann entropy in quantum physics are not observables, since they depend nonlinearly on the density matrix. Recently, Mohammad H. Ansari and Yuli V. Nazarov proved an exact correspondence that reveals the physical meaning of the Rényi entropy flow in time. This correspondence is similar to the fluctuation-dissipation theorem in spirit and allows the measurement of quantum entropy using the full counting statistics (FCS) of energy transfers. diff --git a/wiki/wikipedia/2339.txt b/wiki/wikipedia/2339.txt deleted file mode 100644 index 88926b4d9e2e30c137221d78926a16d7e0868a17..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2339.txt +++ /dev/null @@ -1,57 +0,0 @@ -In online transaction processing (OLTP), information systems typically facilitate and manage transaction-oriented applications. - -The term "transaction" can have two different meanings, both of which might apply: in the realm of computers or database transactions it denotes an atomic change of state, whereas in the realm of business or finance, the term typically denotes an exchange of economic entities (as used by, e.g., the Transaction Processing Performance Council) or commercial transactions. OLTP may use transactions of the first type to record transactions of the second. - -OLTP has also been used to refer to processing in which the system responds immediately to user requests. An automated teller machine (ATM) for a bank is an example of a commercial transaction processing application. Online transaction processing applications have high throughput and are insert- or update-intensive in database management. These applications are used concurrently by hundreds of users. The key goals of OLTP applications are availability, speed, concurrency and recoverability. Reduced paper trails and faster, more accurate forecasts of revenues and expenses are both examples of how OLTP makes things simpler for businesses. However, like many modern online information technology solutions, some systems require offline maintenance, which further affects the cost-benefit analysis of an online transaction processing system.
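Anticipating the atomicity and concurrency controls discussed below, here is a minimal sketch of the core OLTP requirement, using Python's built-in sqlite3 module as a stand-in for a production database (the schema and amounts are invented for illustration): a transfer either commits as a whole or not at all.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
con.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 0)])

try:
    with con:  # one atomic transaction: both updates commit, or neither does
        con.execute("UPDATE accounts SET balance = balance - 60 WHERE id = 1")
        con.execute("UPDATE accounts SET balance = balance + 60 WHERE id = 2")
except sqlite3.Error:
    pass  # on any failure, the context manager rolls back both steps

print(con.execute("SELECT balance FROM accounts ORDER BY id").fetchall())
# [(40,), (60,)]
```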
- -OLTP is typically contrasted to OLAP (online analytical processing), which is generally characterized by much more complex queries, in a smaller volume, for the purpose of business intelligence or reporting rather than to process transactions. Whereas OLTP systems process all kinds of queries (read, insert, update and delete), OLAP is generally optimized for read only and might not even support other kinds of queries. OLTP also operates differently from batch processing and grid computing. - -In addition, OLTP is often contrasted to OLEP (online event processing), which is based on distributed event logs to offer strong consistency in large-scale heterogeneous systems. Whereas OLTP is associated with short atomic transactions, OLEP allows for more flexible distribution patterns and higher scalability, but with increased latency and without a guaranteed upper bound on the processing time. - -An OLTP system is an accessible data processing system in today's enterprises. Some examples of OLTP systems include order entry, retail sales, and financial transaction systems. Online transaction processing systems increasingly require support for transactions that span a network and may include more than one company. For this reason, modern online transaction processing software uses client or server processing and brokering software that allows transactions to run on different computer platforms in a network. - -In large applications, efficient OLTP may depend on sophisticated transaction management software (such as CICS) and/or database optimization tactics to facilitate the processing of large numbers of concurrent updates to an OLTP-oriented database. - -For even more demanding decentralized database systems, OLTP brokering programs can distribute transaction processing among multiple computers on a network. OLTP is often integrated into service-oriented architecture (SOA) and Web services. - -Online transaction processing (OLTP) involves gathering input information, processing the data and updating existing data to reflect the collected and processed information. As of today, most organizations use a database management system to support OLTP. OLTP is carried out in a client-server system. - -Online transaction processing is concerned with concurrency and atomicity. Concurrency controls guarantee that two users accessing the same data in the database system at the same time cannot both change it: one user has to wait until the other user has finished processing before changing that piece of data. Atomicity controls guarantee that all the steps in a transaction are completed successfully as a group. That is, if any step of the transaction fails, all other steps must fail also. - -To build an OLTP system, a designer must ensure that a large number of concurrent users does not interfere with the system's performance. To increase the performance of an OLTP system, a designer must avoid excessive use of indexes and clusters. - -The following elements are crucial for the performance of OLTP systems: - -* Rollback segments - -Rollback segments are the portions of the database that record the actions of transactions in the event that a transaction is rolled back. Rollback segments provide read consistency, rollback transactions, and recovery of the database. - -* Clusters - -A cluster is a schema that contains one or more tables that have one or more columns in common. Clustering tables in a database improves the performance of join operations.
- -* Discrete transactions - -A discrete transaction defers all change to the data until the transaction is committed. It can improve the performance of short, non-distributed transactions. - -* Block size - -The data block size should be a multiple of the operating system's block size within the maximum limit to avoid unnecessary I/O. - -* Buffer cache size - -SQL statements should be tuned to use the database buffer cache to avoid unnecessary resource consumption. - -* Dynamic allocation of space to tables and rollback segments - -* Transaction processing monitors and the multi-threaded server - -A transaction processing monitor is used for coordination of services. It is like an operating system and does the coordination at a high level of granularity and can span multiple computing devices. - -* Partition (database) - -Partition use increases performance for sites that have regular transactions while still maintaining availability and security. - -* Database tuning - -With database tuning, an OLTP system can maximize its performance as efficiently and rapidly as possible. diff --git a/wiki/wikipedia/234.txt b/wiki/wikipedia/234.txt deleted file mode 100644 index bd9799030b52ad42b3bf84d0d136d8cb8017c845..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/234.txt +++ /dev/null @@ -1,36 +0,0 @@ -In computational complexity theory, a function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem. For function problems, the output is not simply 'yes' or 'no'. - -A functional problem $P$ is defined as a relation $R$ over strings of an arbitrary alphabet $\Sigma$: -$$ -R \subseteq \Sigma^* \times \Sigma^*. -$$ - -An algorithm solves $P$ if for every input $x$ such that there exists a $y$ satisfying $(x, y) \in R$, the algorithm produces one such $y$. - -A well-known function problem is given by the Functional Boolean Satisfiability Problem, FSAT for short. The problem, which is closely related to the SAT decision problem, can be formulated as follows: - -Given a boolean formula $\varphi$ with variables $x_1, \ldots, x_n$, find an assignment $x_i \rightarrow \{ \text{TRUE}, \text{FALSE} \}$ such that $\varphi$ evaluates to $\text{TRUE}$ or decide that no such assignment exists. - -In this case the relation $R$ is given by tuples of suitably encoded boolean formulas and satisfying assignments. - -While a SAT algorithm, fed with a formula $\varphi$, only needs to return "unsatisfiable" or "satisfiable", an FSAT algorithm needs to return some satisfying assignment in the latter case. - -Other notable examples include the travelling salesman problem, which asks for the route taken by the salesman, and the integer factorization problem, which asks for the list of factors. - -Consider an arbitrary decision problem $L$ in the class NP. By the definition of NP, each problem instance $x$ that is answered 'yes' has a polynomial-size certificate $y$ which serves as a proof for the 'yes' answer. Thus, the set of these tuples $(x,y)$ forms a relation, representing the function problem "given $x$ in $L$, find a certificate $y$ for $x$". This function problem is called the function variant of $L$; it belongs to the class FNP. - -FNP can be thought of as the function class analogue of NP, in that solutions of FNP problems can be efficiently (i.e., in polynomial time in terms of the length of the input) verified, but not necessarily efficiently found. 
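As a small illustration of the "efficiently verified" half (the clause encoding below is an assumption made for this sketch, not a standard convention): checking a candidate certificate for FSAT against a CNF formula takes time linear in the size of the formula.

```python
def verify_fsat(clauses, assignment):
    """clauses: list of clauses, each a list of nonzero ints, where a negative
    integer denotes a negated variable; assignment: dict var -> bool."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

# (x1 or not x2) and (x2 or x3), with x1 = x2 = True and x3 = False
print(verify_fsat([[1, -2], [2, 3]], {1: True, 2: True, 3: False}))  # True
```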
In contrast, the class FP, which can be thought of as the function class analogue of P, consists of function problems whose solutions can be found in polynomial time. - -Observe that the problem FSAT introduced above can be solved using only polynomially many calls to a subroutine which decides the SAT problem: An algorithm can first ask whether the formula $\varphi$ is satisfiable. After that the algorithm can fix variable $x_1$ to TRUE and ask again. If the resulting formula is still satisfiable, the algorithm keeps $x_1$ fixed to TRUE and continues to fix $x_2$; otherwise it decides that $x_1$ has to be FALSE and continues. Thus, FSAT is solvable in polynomial time using an oracle deciding SAT. In general, a problem in NP is called self-reducible if its function variant can be solved in polynomial time using an oracle deciding the original problem. Every NP-complete problem is self-reducible. It is conjectured that the integer factorization problem is not self-reducible. - -Function problems can be reduced much like decision problems: Given function problems $\Pi_R$ and $\Pi_S$, we say that $\Pi_R$ reduces to $\Pi_S$ if there exist polynomial-time computable functions $f$ and $g$ such that for all instances $x$ of $R$ and possible solutions $y$ of $S$, it holds that - -*If $x$ has an $R$-solution, then $f(x)$ has an $S$-solution. - -*$(f(x), y) \in S \implies (x, g(x,y)) \in R.$ - -It is therefore possible to define FNP-complete problems analogous to NP-complete problems: - -A problem $\Pi_R$ is FNP-complete if every problem in FNP can be reduced to $\Pi_R$. The complexity class of FNP-complete problems is denoted by FNP-C or FNPC. Hence the problem FSAT is also an FNP-complete problem, and it holds that $\mathbf{P} = \mathbf{NP}$ if and only if $\mathbf{FP} = \mathbf{FNP}$. - -The relation $R(x, y)$ used to define function problems has the drawback of being incomplete: Not every input $x$ has a counterpart $y$ such that $(x, y) \in R$. Therefore the question of computability of proofs is not separated from the question of their existence. To overcome this problem it is convenient to consider the restriction of function problems to total relations, yielding the class TFNP as a subclass of FNP. This class contains problems such as the computation of pure Nash equilibria in certain strategic games where a solution is guaranteed to exist. In addition, if TFNP contains any FNP-complete problem it follows that $\mathbf{NP} = \textbf{co-NP}$. diff --git a/wiki/wikipedia/2340.txt b/wiki/wikipedia/2340.txt deleted file mode 100644 index 5a3e78b3bc41219ef656db8b9fd9f31e63b84443..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2340.txt +++ /dev/null @@ -1,50 +0,0 @@ -In algebra and algebraic geometry, the multi-homogeneous Bézout theorem is a generalization to multi-homogeneous polynomials of Bézout's theorem, which counts the number of isolated common zeros of a set of homogeneous polynomials. This generalization is due to Igor Shafarevich. - -Given a polynomial equation or a system of polynomial equations it is often useful to compute or to bound the number of solutions without computing explicitly the solutions. - -In the case of a single equation, this problem is solved by the fundamental theorem of algebra, which asserts that the number of complex solutions is bounded by the degree of the polynomial, with equality if the solutions are counted with their multiplicities.
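A quick numerical illustration of the single-equation case (using NumPy's root finder on an invented example polynomial): a degree-4 polynomial has exactly 4 complex roots once multiplicities are counted.

```python
import numpy as np

# p(x) = (x - 1)^2 (x^2 + 1) = x^4 - 2x^3 + 2x^2 - 2x + 1
roots = np.roots([1, -2, 2, -2, 1])
print(len(roots))              # 4 = deg p, counted with multiplicity
print(np.sort_complex(roots))  # approximately -i, i, 1, 1
```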
- -In the case of a system of n polynomial equations in n unknowns, the problem is solved by Bézout's theorem, which asserts that, if the number of complex solutions is finite, their number is bounded by the product of the degrees of the polynomials. Moreover, if the number of solutions at infinity is also finite, then the product of the degrees equals the number of solutions counted with multiplicities and including the solutions at infinity. - -However, it is rather common that the number of solutions at infinity is infinite. In this case, the product of the degrees of the polynomials may be much larger than the number of roots, and better bounds are useful. - -The multi-homogeneous Bézout theorem provides such a better bound when the unknowns may be split into several subsets such that the degree of each polynomial in each subset is lower than the total degree of the polynomial. For example, let $p_1, \ldots, p_{2n}$ be polynomials of degree two which are of degree one in the n indeterminates $x_1, \ldots, x_n,$ and also of degree one in $y_1, \ldots, y_n$ (that is, the polynomials are bilinear). In this case, Bézout's theorem bounds the number of solutions by -$$ -2^{2n}, -$$ - -while the multi-homogeneous Bézout theorem gives the bound (using Stirling's approximation) -$$ -\binom{2n}{n}= \frac{(2n)!}{(n!)^2}\sim \frac{2^{2n}}{\sqrt{\pi n}}. -$$ - -A multi-homogeneous polynomial is a polynomial that is homogeneous with respect to several sets of variables. - -More precisely, consider k positive integers $n_1, \ldots, n_k$, and, for i = 1, ..., k, the $n_i+1$ indeterminates $x_{i,0}, x_{i,1}, \ldots, x_{i,n_i}.$ A polynomial in all these indeterminates is multi-homogeneous of multi-degree $d_1, \ldots, d_k,$ if it is homogeneous of degree $d_i$ in $x_{i,0}, x_{i,1}, \ldots, x_{i,n_i}.$ - -A multi-projective variety is a projective subvariety of the product of projective spaces -$$ -\mathbb P_{n_1}\times \cdots\times \mathbb P_{n_k}, -$$ - -where $\mathbb P_n$ denotes the projective space of dimension n. A multi-projective variety may be defined as the set of the common nontrivial zeros of an ideal of multi-homogeneous polynomials, where "nontrivial" means that $x_{i,0}, x_{i,1}, \ldots, x_{i,n_i}$ are not simultaneously 0, for each i. - -Bézout's theorem asserts that n homogeneous polynomials of degrees $d_1, \ldots, d_n$ in n + 1 indeterminates define either an algebraic set of positive dimension, or a zero-dimensional algebraic set consisting of $d_1\cdots d_n$ points counted with their multiplicities. - -For stating the generalization of Bézout's theorem, it is convenient to introduce new indeterminates $t_1, \ldots, t_k,$ and to represent the multi-degree $d_1, \ldots, d_k$ by the linear form $\mathbf d=d_1t_1+\cdots + d_kt_k.$ In the following, "multi-degree" will refer to this linear form rather than to the sequence of degrees. - -Setting $n=n_1+\cdots +n_k,$ the multi-homogeneous Bézout theorem is the following. - -With the above notation, n multi-homogeneous polynomials of multi-degrees $\mathbf d_1, \ldots, \mathbf d_n$ define either a multi-projective algebraic set of positive dimension, or a zero-dimensional algebraic set consisting of B points, counted with multiplicities, where B is the coefficient of -$$ -t_1^{n_1}\cdots t_k^{n_k} -$$ - -in the product of linear forms -$$ -\mathbf d_1 \cdots \mathbf d_n.
-$$ - -The multi-homogeneous Bézout bound on the number of solutions may be used for non-homogeneous systems of equations, when the polynomials may be (multi)-homogenized without increasing the total degree. However, in this case, the bound may not be sharp if there are solutions "at infinity". - -Without insight into the problem that is studied, it may be difficult to group the variables for a "good" multi-homogenization. Fortunately, there are many problems where such a grouping results directly from the problem that is modeled. For example, in mechanics, equations are generally homogeneous or almost homogeneous in the lengths and in the masses. diff --git a/wiki/wikipedia/2341.txt b/wiki/wikipedia/2341.txt deleted file mode 100644 index 2a9066b81275fe26902b35942dd14bb217f8e1b7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2341.txt +++ /dev/null @@ -1,25 +0,0 @@ -In computer science, a lease is a contract that gives its holder specified rights to some resource for a limited period. Because it is time-limited, a lease is an alternative to a lock for resource serialization. - -A traditional resource lock is granted until it is explicitly released by the locking client process. Reasons why a lock might not be released include: - -* The client failed before releasing the resources - -* The client deadlocked while attempting to allocate another resource - -* The client was blocked or delayed for an unreasonable period - -* The client neglected to free the resource, perhaps due to a bug - -* The request to free the resource was lost - -* The resource manager failed or lost track of the resource state - -Any of these could end the availability of an important reusable resource until the system is reset. By contrast, a lease is valid for a limited period, after which it automatically expires, making the resource available for reallocation by a new client. - -The term 'lease' was applied to this concept in a 1989 paper by Cary G. Gray and David R. Cheriton, but similar concepts (expiring tokens and breakable locks with timeouts) had been used in prior systems. - -Leases are commonly used in distributed systems for applications ranging from DHCP address allocation to file locking, but they are not (by themselves) a complete solution: - -* There must be some means of notifying the lease holder of the expiration and preventing that agent from continuing to rely on the resource. Often, this is done by requiring all requests to be accompanied by an access token, which is invalidated if the associated lease has expired. - -* If a lease is revoked after the lease holder has started operating on the resource, revocation may leave the resource in a compromised state. In such situations, it is common to use atomic transactions to ensure that updates that do not complete have no effect. diff --git a/wiki/wikipedia/2342.txt b/wiki/wikipedia/2342.txt deleted file mode 100644 index 5287517bc255e09381618eef8ba024f281c1a284..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2342.txt +++ /dev/null @@ -1,5 +0,0 @@ -In geometric topology, the spherical space form conjecture states that a finite group acting on the 3-sphere is conjugate to a group of isometries of the 3-sphere. - -The conjecture was posed by Heinz Hopf in 1926, after determining the fundamental groups of three-dimensional spherical space forms, as a generalization of the Poincaré conjecture to the non-simply connected case.
- -The conjecture is implied by Thurston's geometrization conjecture, which was proven by Grigori Perelman in 2003. The conjecture was independently proven for groups whose actions have fixed points—this special case is known as the Smith conjecture. It is also proven for various groups acting without fixed points, such as cyclic groups whose orders are a power of two (George Livesay, Robert Myers) and cyclic groups of order 3 (J. Hyam Rubinstein). diff --git a/wiki/wikipedia/2343.txt b/wiki/wikipedia/2343.txt deleted file mode 100644 index 0c79503bca06ce7d296a946c8b73b501a9e78528..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2343.txt +++ /dev/null @@ -1,21 +0,0 @@ -In gravitation, Chasles' theorem says that the Newtonian gravitational attraction of a spherical shell, outside of that shell, is equivalent mathematically to the attraction of a point mass.
-
-The theorem is attributed to Michel Chasles (1793–1880).
-
-Benjamin Peirce followed up on Chasles' work, which developed an analogy between the conduction of heat and gravitational attraction:
-
-A single current in a direction perpendicular to the level surfaces, and having a velocity proportionate to the decrease in density … is the law of propagation of heat, when there is no radiation, and hence arise the analogies between the levels and isothermal surfaces, and the identity of the mathematical investigations of the attraction of bodies and the propagation of heat which have been developed by Chasles.
-
-The Chaslesian shell is the figure that Peirce exploits:
-
-If an infinitely thin homogeneous shell is formed upon each level surface, of a system of bodies, having at each point a thickness proportional to the attraction at that point, the portion of either of these shells, which is included in a canal formed by trajectories, bears the same ratio to the whole shell, which the portion of another shell included in the same canal bears to that shell, provided there is no mass included between the shells.
-
-The conception of these shells, and the investigation of their acting and reacting properties was original with Chasles, and it will be convenient, as it is appropriate, to designate them as Chaslesian shells.
-
-Chasles' theorem as expressed by Peirce:
-
-The external level surfaces of a shell are the same with those of the original masses, and the attraction of the shell upon an external point has the same direction with the attraction of the original masses, and is normal to the level surface passing through the point. This theorem is due to Chasles.
-
-The ellipsoid is recruited to bound the Chaslesian shells:
-
-An infinitely thin homogeneous shell, of which the inner and outer surfaces are those of similar, and similarly placed, concentric ellipsoids, is a Chaslesian shell. diff --git a/wiki/wikipedia/2344.txt b/wiki/wikipedia/2344.txt deleted file mode 100644 index b9e97dff37c5284077a465740e51825763bd39c8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2344.txt +++ /dev/null @@ -1,11 +0,0 @@ -Cirquent calculus is a proof calculus that manipulates graph-style constructs termed cirquents, as opposed to the traditional tree-style objects such as formulas or sequents. Cirquents come in a variety of forms, but they all share one main characteristic feature, making them different from the more traditional objects of syntactic manipulation. This feature is the ability to explicitly account for possible sharing of subcomponents between different components.
For instance, it is possible to write an expression where two subexpressions F and E, while neither one is a subexpression of the other, still have a common occurrence of a subexpression G (as opposed to having two different occurrences of G, one in F and one in E).
-
-The approach was introduced by G. Japaridze as an alternative proof theory capable of “taming” various nontrivial fragments of his computability logic, which had otherwise resisted all axiomatization attempts within the traditional proof-theoretic frameworks. The origin of the term “cirquent” is CIRcuit+seQUENT, as the simplest form of cirquents, while resembling circuits rather than formulas, can be thought of as collections of one-sided sequents (for instance, sequents of a given level of a Gentzen-style proof tree) where some sequents may have shared elements.
-
-[Figure: Cirquent for the "two out of three" combination of resources, inexpressible in linear logic.] The basic version of cirquent calculus was accompanied by an "abstract resource semantics" and the claim that the latter was an adequate formalization of the resource philosophy traditionally associated with linear logic. Based on that claim and the fact that the semantics induced a logic properly stronger than (affine) linear logic, Japaridze argued that linear logic was incomplete as a logic of resources. Furthermore, he argued that not only the deductive power but also the expressive power of linear logic was weak, for it, unlike cirquent calculus, failed to capture the ubiquitous phenomenon of resource sharing.
-
-[Figure: Linear logic understands the displayed formula as the left cirquent, while classical logic understands it as the right cirquent.] The resource philosophy of cirquent calculus sees the approaches of linear logic and classical logic as two extremes: the former does not allow any sharing at all, while in the latter “everything is shared that can be shared”. Unlike cirquent calculus, neither approach thus permits mixed cases where some identical subformulas are shared and some not.
-
-Among the later-found applications of cirquent calculus was its use to define a semantics for purely propositional independence-friendly logic. The corresponding logic was axiomatized by W. Xu.
-
-Syntactically, cirquent calculi are deep inference systems with the unique feature of subformula-sharing. This feature has been shown to provide speedup for certain proofs. For instance, polynomial size analytic proofs have been constructed for the propositional pigeonhole principle. Only quasipolynomial analytic proofs have been found for this principle in other deep inference systems. In resolution or analytic Gentzen-style systems, the pigeonhole principle is known to have only exponential size proofs. diff --git a/wiki/wikipedia/2345.txt b/wiki/wikipedia/2345.txt deleted file mode 100644 index 744d6df1ad87858f32142e76fd83146eaeb6d70b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2345.txt +++ /dev/null @@ -1,35 +0,0 @@ -The analyst's traveling salesman problem is an analog of the traveling salesman problem in combinatorial optimization. In its simplest and original form, it asks under what conditions a set E in two-dimensional Euclidean space $\mathbb{R}^2$ may be contained inside a rectifiable curve of finite length. So while in the original traveling salesman problem, one asks for the shortest way to visit every vertex in a graph with a discrete path, this analytical version requires the curve to visit perhaps infinitely many points.
- -A rectifiable curve has tangents at almost all of its points, where in this case "almost all" means all but a subset whose one-dimensional Hausdorff measure is zero. Accordingly, if a set is contained in a rectifiable curve, the set must look flat when zooming in on almost all of its points. This suggests that testing whether a set could be contained in a rectifiable curve must somehow incorporate information about how flat it is when one zooms in on its points at different scales.
-
-This discussion motivates the definition of the following quantity:
-
-$$\beta_{E}(Q)=\frac{1}{\ell(Q)}\inf\{\delta : \text{there is a line } L \text{ so that for every } x\in E\cap Q,\ \operatorname{dist}(x,L)<\delta\},$$
-
-where $E$ is the set that is to be contained in a rectifiable curve, $Q$ is any square, $\ell(Q)$ is the side length of $Q$, and $\operatorname{dist}(x,L)$ measures the distance from $x$ to the line $L$. Intuitively, $2\beta_E(Q)\ell(Q)$ is the width of the smallest rectangle containing the portion of $E$ inside $Q$, and hence $\beta_E(Q)$ gives a scale-invariant notion of flatness.
-
-Let Δ denote the collection of dyadic squares, that is,
-$$
-\Delta=\{[i2^{k},(i+1)2^{k}]\times[j2^{k},(j+1)2^{k}]: i,j,k\in\mathbb{Z}\},
-$$
-
-where $\mathbb{Z}$ denotes the set of integers. For a set $E\subseteq\mathbb{R}^2$, define
-$$
-\beta(E)=\text{diam} E+ \sum_{Q\in\Delta}\beta_{E}(3Q)^2 \ell(Q)
-$$
-
-where diam E is the diameter of E. Then Peter Jones's analyst's traveling salesman theorem may be stated as follows:
-
-* There is a number C > 0 such that whenever E is a set such that β(E) < ∞, E can be contained in a curve with length no more than Cβ(E).
-
-* Conversely (and substantially more difficult to prove), if Γ is a rectifiable curve, then $\beta(\Gamma) < C\mathcal{H}^1(\Gamma)$, where $\mathcal{H}^1$ denotes one-dimensional Hausdorff measure.
-
-The Traveling Salesman Theorem was shown to hold in general Euclidean spaces by Kate Okikiolu, that is, the same theorem above holds for sets $E\subseteq\mathbb{R}^d$, d > 1, where Δ is now the collection of dyadic cubes in $\mathbb{R}^d$ defined in a similar way as dyadic squares. In her proof, the constant C grows exponentially with the dimension d.
-
-With some slight modifications to the definition of β(E), Raanan Schul showed that the Traveling Salesman Theorem also holds for sets E that lie in any Hilbert space, and in particular, implies the theorems of Jones and Okikiolu, where now the constant C is independent of dimension. (In particular, this involves using β-numbers of balls instead of cubes).
-
-Hahlomaa further adjusted the definition of β(E) to get a condition for when a set E of an arbitrary metric space may be contained in the Lipschitz-image of a subset $A\subseteq\mathbb{R}$ of positive measure. For this, he had to redefine the β-numbers using Menger curvature (since in a metric space there isn't necessarily a notion of a cube or a straight line).
-
-Menger curvature, as in the previous example, can be used to give numerical estimates that determine whether a set contains a rectifiable subset, and the proofs of these results frequently depend on β-numbers.
-
-The Denjoy–Riesz theorem gives general conditions under which a point set can be covered by the homeomorphic image of a curve. This is true, in particular, for every compact totally disconnected subset of the Euclidean plane. However, it may be necessary for such an arc to have infinite length, failing to meet the conditions of the analyst's traveling salesman theorem.
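The β-numbers above lend themselves to direct computation. The following sketch (an illustration added here, not part of the original article; all names and parameter values are assumptions) estimates $\beta_E(Q)$ for a finite sample of a curve, standing in for the infimum over all lines with the best least-squares (PCA) line, which over-estimates the true infimal width by at most a bounded factor:

```python
import numpy as np

def beta_number(points, square_center, side):
    """Approximate the Jones beta-number of `points` inside the axis-aligned
    square with the given center and side length, using the best
    least-squares (PCA) line in place of the exact infimum over all lines."""
    c = np.asarray(square_center, dtype=float)
    inside = points[np.all(np.abs(points - c) <= side / 2.0, axis=1)]
    if len(inside) < 2:
        return 0.0  # zero or one point always lies on a line
    centered = inside - inside.mean(axis=0)
    # Principal direction of the point cloud; distances to this line
    # stand in for the infimal width in the definition of beta.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = np.array([-vt[0][1], vt[0][0]])
    return np.abs(centered @ normal).max() / side

# Points sampled from a circular arc: the printed beta-values shrink as the
# square shrinks, reflecting the flatness of a rectifiable curve at small scales.
t = np.linspace(0.0, np.pi / 2, 400)
pts = np.column_stack([np.cos(t), np.sin(t)])
for side in [1.0, 0.5, 0.25, 0.125]:
    print(side, beta_number(pts, (np.cos(0.4), np.sin(0.4)), side))
```

For a smooth curve the β-values decay roughly in proportion to the side length, which is why the quadratic sum $\sum_{Q}\beta_E(3Q)^2\ell(Q)$ in the theorem converges.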
diff --git a/wiki/wikipedia/2346.txt b/wiki/wikipedia/2346.txt deleted file mode 100644 index 744d6df1ad87858f32142e76fd83146eaeb6d70b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2346.txt +++ /dev/null @@ -1,17 +0,0 @@ -Candido's identity, named after the Italian mathematician Giacomo Candido, is an identity for real numbers. It states that for two arbitrary real numbers $x$ and $y$ the following equality holds:
-$$
-\left[x^2+y^2+(x+y)^2\right]^2=2[x^4+y^4+(x+y)^4]
-$$
-
-The identity however is not restricted to real numbers but holds in every commutative ring. In particular, applied to three consecutive Fibonacci numbers $f_n, f_{n+1}, f_{n+2}$, for which $f_n + f_{n+1} = f_{n+2}$, it yields:
-$$
-(f_n^2+f_{n+1}^2+f_{n+2}^2)^2=2(f_n^4+f_{n+1}^4+f_{n+2}^4)
-$$
-
-A straightforward algebraic proof can be attained by simply completely expanding both sides of the equation. The identity however can also be interpreted geometrically. In this case it states that the area of a square with side length $x^2+y^2+(x+y)^2$ equals twice the sum of the areas of three squares with side lengths $x^2$, $y^2$ and $(x+y)^2$. This allows for a visual proof due to Roger B. Nelsen.
-
-*S. Melham. In: Fibonacci Quarterly, 2004, 2, pp. 155–160
-
-*Claudi Alsina, Roger B. Nelsen: On Candido's Identity. In: Mathematics Magazine, Vol. 80, No. 3 (June 2007), pp. 226–228 diff --git a/wiki/wikipedia/2347.txt b/wiki/wikipedia/2347.txt deleted file mode 100644 index 8522d2a63df7595edeec90c2ffe3b7f94eb5b8c8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2347.txt +++ /dev/null @@ -1,40 +0,0 @@ -Greenberg's conjecture is either of two conjectures in algebraic number theory proposed by Ralph Greenberg. Both are still unsolved as of 2021.
-
-The first conjecture was proposed in 1976 and concerns Iwasawa invariants. This conjecture is related to Vandiver's conjecture, Leopoldt's conjecture, and the Birch–Tate conjecture, all of which are also unsolved.
-
-The conjecture, also referred to as Greenberg's invariants conjecture, first appeared in Greenberg's Princeton University thesis of 1971 and originally stated that, assuming that $F$ is a totally real number field and that $F_\infty/F$ is the cyclotomic $\mathbb{Z}_p$-extension, $\lambda(F_\infty/F) = \mu(F_\infty/F) = 0$, i.e. the power of $p$ dividing the class number of $F_n$ is bounded as $n \rightarrow \infty$. Note that if Leopoldt's conjecture holds for $F$ and $p$, the only $\mathbb{Z}_p$-extension of $F$ is the cyclotomic one (since it is totally real).
-
-In 1976, Greenberg expanded the conjecture by providing more examples for it and slightly reformulated it as follows: given that $k$ is a finite extension of $\mathbf{Q}$ and that $\ell$ is a fixed prime, with consideration of subfields of cyclotomic extensions of $k$, one can define a tower of number fields
-$$
-k = k_0 \subset k_1 \subset k_2 \subset \cdots \subset k_n \subset \cdots
-$$ such that $k_n$ is a cyclic extension of $k$ of degree $\ell^n$. If $k$ is totally real, is the power of $\ell$ dividing the class number of $k_n$ bounded as $n \rightarrow \infty$? Now, if $k$ is an arbitrary number field, then there exist integers $\lambda$, $\mu$ and $\nu$ such that the power of $\ell$ dividing the class number of $k_n$ is $\ell^{e_n}$, where $e_n = \lambda n + \mu \ell^{n} + \nu$ for all sufficiently large $n$. The integers $\lambda$, $\mu$, $\nu$ depend only on $k$ and $\ell$. Then, we ask: is $\lambda = \mu = 0$ for $k$ totally real?
- -Simply speaking, the conjecture asks whether we have $\mu_\ell(k) = \lambda_\ell(k) = 0$ for any totally real number field $k$ and any prime number $\ell$. Equivalently, it asks whether both invariants λ and µ associated to the cyclotomic $\mathbb{Z}_p$-extension of a totally real number field vanish.
-
-In 2001, Greenberg generalized the conjecture (thus making it known as Greenberg's pseudo-null conjecture or, sometimes, as Greenberg's generalized conjecture):
-
-Supposing that $F$ is a totally real number field and that $p$ is a prime, let $\tilde{F}$ denote the compositum of all $\mathbb{Z}_p$-extensions of $F$. (Recall that if Leopoldt's conjecture holds for $F$ and $p$, then $\tilde F=F$.) Let $\tilde{L}$ denote the pro-$p$ Hilbert class field of $\tilde{F}$ and let $\tilde{X} = \operatorname{Gal}(\tilde{L}/\tilde{F})$, regarded as a module over the Iwasawa algebra $\tilde{\Lambda} = \mathbb{Z}_p[[\operatorname{Gal}(\tilde{F}/F)]]$. Then $\tilde{X}$ is a pseudo-null $\tilde{\Lambda}$-module.
-
-A possible reformulation: Let $\tilde{k}$ be the compositum of all the $\mathbb{Z}_p$-extensions of $k$ and let $\operatorname{Gal}(\tilde{k}/k) \simeq \mathbb{Z}^n_p$; then $Y_{\tilde{k}}$ is a pseudo-null $\Lambda_n$-module.
-
-Another related conjecture (also still unsolved) exists:
-
-We have $\mu_\ell(k) = 0$ for any number field $k$ and any prime number $\ell$.
-
-This related conjecture was confirmed in the abelian case by Bruce Ferrero and Larry Washington, who proved (see: Ferrero–Washington theorem) that $\mu_\ell(k) = 0$ for any abelian extension $k$ of the rational number field $\mathbb{Q}$ and any prime number $\ell$.
-
-Another conjecture, which can be referred to as Greenberg's conjecture, was proposed by Greenberg in 2016, and is known as Greenberg's $p$-rationality conjecture. It states that for any odd prime $p$ and for any $t$, there exists a $p$-rational field $K$ such that $\operatorname{Gal}(K/\mathbb{Q}) \cong (\mathbb{Z}/2\mathbb{Z})^t$. This conjecture is related to the inverse Galois problem.
-
-*R. Greenberg, On some questions concerning the Iwasawa invariants, Princeton University thesis (1971)
-
-*R. Greenberg, "On the Iwasawa invariants of totally real number fields", American Journal of Mathematics, issue 98 (1976), pp. 263–284
-
-*R. Greenberg, "Iwasawa Theory — Past and Present", Advanced Studies in Pure Mathematics, issue 30 (2001), pp. 335–385
-
-*R. Greenberg, "Galois representations with open image", Annales mathématiques du Québec, volume 40, number 1 (2016), pp. 83–119
-
-*B. Ferrero and L. C. Washington, "The Iwasawa Invariant $\mu_p$ Vanishes for Abelian Number Fields", Annals of Mathematics (Second Series), volume 109, number 2 (May, 1979), pp. 377–395
-
-Category:Algebraic number theory
-
-Category:Conjectures diff --git a/wiki/wikipedia/2348.txt b/wiki/wikipedia/2348.txt deleted file mode 100644 index e8b540cba91180926ee923e912dd05fcd98858ad..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2348.txt +++ /dev/null @@ -1,25 +0,0 @@ -A version vector is a mechanism for tracking changes to data in a distributed system, where multiple agents might update the data at different times. The version vector allows the participants to determine if one update preceded another (happened-before), followed it, or if the two updates happened concurrently (and therefore might conflict with each other). In this way, version vectors enable causality tracking among data replicas and are a basic mechanism for optimistic replication.
In mathematical terms, the version vector generates a preorder that tracks the events that precede, and may therefore influence, later updates.
-
-Version vectors maintain state identical to that in a vector clock, but the update rules differ slightly; in this presentation, replicas can either experience local updates (e.g., the user editing a file on the local node), or can synchronize with another replica:
-
-* Initially all vector counters are zero.
-
-* Each time a replica experiences a local update event, it increments its own counter in the vector by one.
-
-* Each time two replicas a and b synchronize, they both set the elements in their copy of the vector to the maximum of the element across both counters: $V_a[x] = V_b[x] = \max(V_a[x], V_b[x])$. After synchronization, the two replicas have identical version vectors.
-
-Pairs of replicas, a, b, can be compared by inspecting their version vectors and determined to be either: identical ($a=b$), concurrent ($a \parallel b$), or ordered ($a < b$ or $b < a$). The ordered relation is defined as: Vector $a < b$ if and only if every element of $V_a$ is less than or equal to its corresponding element in $V_b$, and at least one element is strictly smaller. If neither $a < b$ nor $b < a$ holds, but the vectors are not identical, then the two vectors must be concurrent.
-
-Version vectors or variants are used to track updates in many distributed file systems, such as Coda and Ficus, and are the main data structure behind optimistic replication. Several variants and refinements exist:
-
-* Hash Histories avoid the use of counters by keeping a set of hashes of each updated version and comparing those sets by set inclusion. However, this mechanism can only give probabilistic guarantees.
-
-* Concise Version Vectors allow significant space savings when handling multiple replicated items, such as in directory structures in filesystems.
-
-* Version Stamps allow tracking of a variable number of replicas and do not resort to counters. This mechanism can exhibit scalability problems in some settings, but can be replaced by Interval Tree Clocks.
-
-* Interval Tree Clocks generalize version vectors and vector clocks and allow dynamic numbers of replicas/processes.
-
-* Bounded Version Vectors allow a bounded implementation, with bounded size counters, as long as replica pairs can be atomically synchronized.
-
-* Dotted Version Vectors address scalability with a small set of servers mediating replica access by a large number of concurrent clients. diff --git a/wiki/wikipedia/2349.txt b/wiki/wikipedia/2349.txt deleted file mode 100644 index 65897db4198a657d9de68f3152886c98f96157a6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2349.txt +++ /dev/null @@ -1,13 +0,0 @@ -In mathematics, more precisely, in the theory of simplicial sets, the Dold–Kan correspondence (named after Albrecht Dold and Daniel Kan) states that there is an equivalence between the category of (nonnegatively graded) chain complexes and the category of simplicial abelian groups. Moreover, under the equivalence, the $n$th homology group of a chain complex is the $n$th homotopy group of the corresponding simplicial abelian group, and a chain homotopy corresponds to a simplicial homotopy. (In fact, the correspondence preserves the respective standard model structures.)
-
-Example: Let C be a chain complex that has an abelian group A in degree n and zero in other degrees. Then the corresponding simplicial group is the Eilenberg–MacLane space $K(A, n)$.
- -There is also an ∞-categorical version of the Dold–Kan correspondence.
-
-The book "Nonabelian Algebraic Topology" cited below has a Section 14.8 on cubical versions of the Dold–Kan theorem, and relates them to a previous equivalence of categories between cubical omega-groupoids and crossed complexes, which is fundamental to the work of that book.
-
-The Dold-Kan correspondence between simplicial abelian groups and chain complexes can be constructed explicitly through an adjunction of functors (pg. 149). The first functor is the normalized chain complex functor
$N:s\textbf{Ab} \to \text{Ch}_{\geq 0}(\textbf{Ab})$
and the second functor is the "simplicialization" functor
$\Gamma:\text{Ch}_{\geq 0}(\textbf{Ab}) \to s\textbf{Ab}$
constructing a simplicial abelian group from a chain complex. - -Given a simplicial abelian group $A_\bullet \in \text{Ob}(\text{s}\textbf{Ab})$ there is a chain complex $NA_\bullet$ called the normalized chain complex with terms
$NA_n = \bigcap^{n-1}_{i=0}\ker(d_i) \subset A_n$
and differentials given by
$NA_n \xrightarrow{(-1)^nd_n} NA_{n-1}$
These differentials are well defined because of the simplicial identity
$d_i \circ d_n = d_{n-1}\circ d_i : A_n \to A_{n-2}$
showing that the image of $d_n : NA_n \to A_{n-1}$ is in the kernel of each $d_i : A_{n-1} \to A_{n-2}$ for $i \leq n-2$, hence in $NA_{n-1}$. This is because the definition of $NA_n$ gives $d_i(NA_n) = 0$, so $d_i \circ d_n = d_{n-1} \circ d_i$ vanishes on $NA_n$.
-
-Now, composing these differentials gives a commutative diagram
$NA_n \xrightarrow{(-1)^nd_n} NA_{n-1} \xrightarrow{(-1)^{n-1}d_{n-1}} NA_{n-2}$
and the composition map $(-1)^n(-1)^{n-1}d_{n-1}\circ d_n$. This composition is the zero map because of the simplicial identity
$d_{n-1}\circ d_n = d_{n-1}\circ d_{n-1}$
and the inclusion $\text{Im}(d_n) \subset NA_{n-1}$, hence the normalized chain complex is a chain complex in $\text{Ch}_{\geq 0 }(\textbf{Ab})$. Because a simplicial abelian group is a functor
$A_\bullet : \text{Ord}^{op} \to \textbf{Ab}$
and morphisms $A_\bullet \to B_\bullet$ are given by natural transformations, meaning the simplicial identities still hold; hence the normalized chain complex construction is functorial. diff --git a/wiki/wikipedia/235.txt b/wiki/wikipedia/235.txt deleted file mode 100644 index 243bc0777bfc236ba08fe0bc4edb3a70fd7b8e57..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/235.txt +++ /dev/null @@ -1,126 +0,0 @@ -Squaring the circle is a problem proposed by ancient geometers. It is the challenge of constructing a square with the same area as a given circle by using only a finite number of steps with compass and straightedge. The difficulty of the problem raised the question of whether specified axioms of Euclidean geometry concerning the existence of lines and circles implied the existence of such a square.
-
-In 1882, the task was proven to be impossible, as a consequence of the Lindemann–Weierstrass theorem, which proves that pi is a transcendental, rather than an algebraic, irrational number; that is, it is not the root of any polynomial with rational coefficients. It had been known for decades that the construction would be impossible if pi were transcendental, but pi was not proven transcendental until 1882. Approximate squaring to any given non-perfect accuracy, in contrast, is possible in a finite number of steps, since there are rational numbers arbitrarily close to pi.
-
-The expression "squaring the circle" is sometimes used as a metaphor for trying to do the impossible.
-
-The term quadrature of the circle is sometimes used to mean the same thing as squaring the circle, but it may also refer to approximate or numerical methods for finding the area of a circle.
-
-Methods to approximate the area of a given circle with a square, which can be thought of as a precursor problem to squaring the circle, were known already to Babylonian mathematicians. The Egyptian Rhind papyrus of 1800 BC gives the area of a circle as $(64/81)d^2$, where d is the diameter of the circle. In modern terms, this is equivalent to approximating pi as 256/81 (approximately 3.1605), a number that appears in the older Moscow Mathematical Papyrus and is used for volume approximations (i.e. hekat). Indian mathematicians also found an approximate method, though less accurate, documented in the Shulba Sutras. Archimedes proved the formula for the area of a circle ($A = \pi r^2$, where r is the radius of the circle) and showed that the value of pi lay between 3 1/7 (approximately 3.1429) and 3 10/71 (approximately 3.1408). See Numerical approximations of pi for more on the history.
-
-The first known Greek to be associated with the problem was Anaxagoras, who worked on it while in prison. Hippocrates of Chios squared certain lunes, in the hope that it would lead to a solution – see Lune of Hippocrates. Antiphon the Sophist believed that inscribing regular polygons within a circle and doubling the number of sides would eventually fill up the area of the circle, and since a polygon can be squared, it means the circle can be squared. Even then there were skeptics—Eudemus argued that magnitudes cannot be divided up without limit, so the area of the circle will never be used up. The problem was even mentioned in Aristophanes's play The Birds.
-
-It is believed that Oenopides was the first Greek who required a plane solution (that is, using only a compass and straightedge).
James Gregory attempted a proof of its impossibility in Vera Circuli et Hyperbolae Quadratura (The True Squaring of the Circle and of the Hyperbola) in 1667. Although his proof was faulty, it was the first paper to attempt to solve the problem using algebraic properties of pi. It was not until 1882 that Ferdinand von Lindemann rigorously proved its impossibility.
-
-[Figure: A partial history by Florian Cajori of attempts at the problem.] The Victorian-age mathematician, logician, and writer Charles Lutwidge Dodgson, better known by the pseudonym Lewis Carroll, also expressed interest in debunking illogical circle-squaring theories. In one of his diary entries for 1855, Dodgson listed books he hoped to write including one called "Plain Facts for Circle-Squarers". In the introduction to "A New Theory of Parallels", Dodgson recounted an attempt to demonstrate logical errors to a couple of circle-squarers, stating:
-
The first of these two misguided visionaries filled me with a great ambition to do a feat I have never heard of as accomplished by man, namely to convince a circle squarer of his error! The value my friend selected for Pi was 3.2: the enormous error tempted me with the idea that it could be easily demonstrated to BE an error. More than a score of letters were interchanged before I became sadly convinced that I had no chance. - -
- -A ridiculing of circle-squaring appears in Augustus de Morgan's A Budget of Paradoxes published posthumously by his widow in 1872. Having originally published the work as a series of articles in the Athenæum, he was revising it for publication at the time of his death. Circle squaring was very popular in the nineteenth century, but hardly anyone indulges in it today and it is believed that de Morgan's work helped bring this about.
-
-The two other classical problems of antiquity, famed for their impossibility, were doubling the cube and trisecting the angle. Like squaring the circle, these cannot be solved by compass-and-straightedge methods. However, unlike squaring the circle, they can be solved by the slightly more powerful construction method of origami, as described at mathematics of paper folding.
-
-The solution of the problem of squaring the circle by compass and straightedge requires the construction of the number $\sqrt{\pi}$. If $\sqrt{\pi}$ is constructible, it follows from standard constructions that pi would also be constructible. In 1837, Pierre Wantzel showed that lengths that could be constructed with compass and straightedge had to be solutions of certain polynomial equations with rational coefficients. Thus, constructible lengths must be algebraic numbers. If the problem of the quadrature of the circle could be solved using only compass and straightedge, then pi would have to be an algebraic number. Johann Heinrich Lambert conjectured that pi was not algebraic, that is, a transcendental number, in 1761. He did this in the same paper in which he proved its irrationality, even before the general existence of transcendental numbers had been proven. It was not until 1882 that Ferdinand von Lindemann proved the transcendence of pi and so showed the impossibility of this construction.
-
-The transcendence of pi implies the impossibility of exactly "circling" the square, as well as of squaring the circle.
-
-It is possible to construct a square with an area arbitrarily close to that of a given circle. If a rational number is used as an approximation of pi, then squaring the circle becomes possible, depending on the values chosen. However, this is only an approximation and does not meet the constraints of the ancient rules for solving the problem. Several mathematicians have demonstrated workable procedures based on a variety of approximations.
-
-Bending the rules by introducing a supplemental tool, allowing an infinite number of compass-and-straightedge operations or by performing the operations in certain non-Euclidean geometries also makes squaring the circle possible in some sense. For example, the quadratrix of Hippias provides the means to square the circle and also to trisect an arbitrary angle, as does the Archimedean spiral. Although the circle cannot be squared in Euclidean space, it sometimes can be in hyperbolic geometry under suitable interpretations of the terms. As there are no squares in the hyperbolic plane, their role needs to be taken by regular quadrilaterals, meaning quadrilaterals with all sides congruent and all angles congruent (but these angles are strictly smaller than right angles).
-
-There exist, in the hyperbolic plane, (countably) infinitely many pairs of constructible circles and constructible regular quadrilaterals of equal area, which, however, are constructed simultaneously.
- -There is no method for starting with a regular quadrilateral and constructing the circle of equal area, and there is no method for starting with a circle and constructing a regular quadrilateral of equal area (even when the circle has small enough radius such that a regular quadrilateral of equal area exists).
-
-Though squaring the circle with perfect accuracy is an impossible problem using only compass and straightedge, approximations to squaring the circle can be given by constructing lengths close to pi.
-
-It takes only minimal knowledge of elementary geometry to convert any given rational approximation of pi into a corresponding compass-and-straightedge construction, but constructions made in this way tend to be very long-winded in comparison to the accuracy they achieve. After the exact problem was proven unsolvable, some mathematicians applied their ingenuity to finding elegant approximations to squaring the circle, defined roughly and informally as constructions that are particularly simple among other imaginable constructions that give similar precision.
-
-One of the early historical approximations is Kochański's approximation, which diverges from pi only in the 5th decimal place. It was very precise for the time of its discovery (1685).
-
-In the left diagram
-$$
-|P_3 P_9|=|P_1 P_2|\sqrt{\frac{40}{3}-2\sqrt{3}}\approx 3.1415{\color{red}33338}\cdot|P_1 P_2|\approx \pi r.
-$$
-
-In 1849 an elegant and simple construction by Jacob de Gelder (1765–1848) was published in Grünert's Archiv. That was 64 years earlier than the comparable construction by Ramanujan. This was a fairly accurate construction which was based on constructing the approximate value of 3.14164079..., which is accurate to three decimal places (i.e. it differs from pi by about $4.8\times 10^{-5}$).
-
-"We find that GH = r · 1.77246..., and since $\sqrt{\pi} = 1.77245\ldots$ we see that GH is greater than the side of the square whose area is equal to that of the circle by less than two hundred thousandths of the radius."
-
-Hobson does not mention the formula for the approximation of pi in his construction. The above illustration shows Hobson's construction with continuation.
-
-The Indian mathematician Srinivasa Ramanujan in 1913, Carl Olds in 1963, Martin Gardner in 1966, and Benjamin Bold in 1982 all gave geometric constructions for
-$$
-\frac{355}{113} = 3.141592{\color{red}920\ldots}
-$$
-
-which is accurate to six decimal places of pi.
-
-In 1914, Ramanujan gave a ruler-and-compass construction which was equivalent to taking the approximate value for pi to be
-$$
-\left(9^2 + \frac{19^2}{22}\right)^\frac14 = \sqrt[4]{\frac{2143}{22}} = 3.14159265{\color{red}2582\ldots}
-$$
-
-giving eight decimal places of pi.
-
-He describes his construction up to line segment OS as follows.
-
-In this quadrature, Ramanujan did not construct the side length of the square; it was enough for him to show the line segment OS. In the following continuation of the construction, the line segment OS is used together with the line segment OB to represent the mean proportional (red line segment OE).
-
-Continuation of the construction up to the desired side length a of the square:
-
-Extend AB beyond A and draw the circular arc b1 around O with radius OS, resulting in S′. Bisect the line segment BS′ at D and draw the semicircle b2 over D. Draw a straight line from O through C up to the semicircle b2; it cuts b2 at E. The line segment OE is the mean proportional between OS′ and OB, also called the geometric mean.
Extend the line segment EO beyond O and lay off EO twice more, giving F and A1, so that the line segment EA1 has the above-described approximate value of pi, the half-circumference of the circle, as its length. Bisect the line segment EA1 at G and draw the semicircle b3 over G. Mark off the distance OB from A1 along the line segment EA1, giving H. Erect a perpendicular to EA1 at H up to the semicircle b3, giving B1. Connect A1 to B1; thus the sought side a of the square A1B1C1D1 is constructed, which has nearly the same area as the given circle.
-
-Examples to illustrate the errors:
-
-* In a circle of radius r = 10,000 km the error of the side length a ≈ −2.8 mm
-
-* In the case of a circle with radius r = 10 m the error of the area A ≈ −0.1 mm²
-
-* In 1991, Robert Dixon gave a construction for
-$$
-\frac{6}{5}\cdot \left( 1 + \varphi\right) = 3.141{\color{red}640\ldots},
-$$
-
-where $\varphi$ is the golden ratio. Three decimal places are equal to those of pi.
-
-* If the radius $r = 1$ and the side of the square
-$$
-a =\sqrt{\frac{6}{5}\cdot\left(1+\varphi \right)} = \sqrt{\varphi +1 +\frac{1}{5}\cdot\left(1+\varphi \right)} = 1.7724{\color{red}67\ldots}
-$$
-
-then the expanded second formula shows the sequence of the steps for an alternative construction (see the following illustration). Four decimal places are equal to those of $\sqrt{\pi}$.
-
-Finding the area under a curve, known as integration in calculus, or quadrature in numerical analysis, was known as squaring before the invention of calculus. Since the techniques of calculus were unknown, it was generally presumed that a squaring should be done via geometric constructions, that is, by compass and straightedge. For example, Newton wrote to Oldenburg in 1676 "I believe M. Leibnitz will not dislike the Theorem towards the beginning of my letter pag. 4 for squaring Curve lines Geometrically" (emphasis added). After Newton and Leibniz invented calculus, they still referred to this integration problem as squaring a curve.
-
-The mathematical proof that the quadrature of the circle is impossible using only compass and straightedge has not proved to be a hindrance to the many people who have invested years in this problem anyway. Having squared the circle is a famous crank assertion. (See also pseudomathematics.) In his old age, the English philosopher Thomas Hobbes convinced himself that he had succeeded in squaring the circle, a claim that was refuted by John Wallis as part of the Hobbes–Wallis controversy.
-
-During the 18th and 19th century, the notion that the problem of squaring the circle was somehow related to the longitude problem seems to have become prevalent among would-be circle squarers. Using "cyclometer" for circle-squarer, Augustus de Morgan wrote in 1872:
-
- -Montucla says, speaking of France, that he finds three notions prevalent among cyclometers: 1. That there is a large reward offered for success; 2. That the longitude problem depends on that success; 3. That the solution is the great end and object of geometry. The same three notions are equally prevalent among the same class in England. No reward has ever been offered by the government of either country.
Although from 1714 to 1828 the British government did indeed sponsor a £20,000 prize for finding a solution to the longitude problem, exactly why the connection was made to squaring the circle is not clear, especially since two non-geometric methods (the astronomical method of lunar distances and the mechanical chronometer) had been found by the late 1760s. The Board of Longitude received many proposals, including determining longitude by "squaring the circle", though the board did not take "any notice" of it. De Morgan goes on to say that "[t]he longitude problem in no way depends upon perfect solution; existing approximations are sufficient to a point of accuracy far beyond what can be wanted." In his book, de Morgan also mentions receiving many threatening letters from would-be circle squarers, accusing him of trying to "cheat them out of their prize".
-
-Even after it had been proved impossible, in 1894, amateur mathematician Edwin J. Goodwin claimed that he had developed a method to square the circle. The technique he developed did not accurately square the circle, and provided an incorrect area of the circle which essentially redefined pi as equal to 3.2. Goodwin then proposed the Indiana Pi Bill in the Indiana state legislature allowing the state to use his method in education without paying royalties to him. The bill passed with no objections in the state house, but the bill was tabled and never voted on in the Senate, amid increasing ridicule from the press.
-
-The mathematical crank Carl Theodore Heisel also claimed to have squared the circle in his 1934 book, "Behold! : the grand problem no longer unsolved: the circle squared beyond refutation." Paul Halmos referred to the book as a "classic crank book."
-
-In 1851, John Parker published a book Quadrature of the Circle in which he claimed to have squared the circle. His method actually produced an approximation of pi accurate to six digits.
-
-The problem also appears in poetry; Alexander Pope mocked the circle-squarers in the fourth book of his 1743 poem Dunciad:
-
-Mad Mathesis alone was unconfined,
-
-Too mad for mere material chains to bind,
-
-Now to pure space lifts her ecstatic stare,
-
-Now, running round the circle, finds it square.
-
-Similarly, the Gilbert and Sullivan comic opera Princess Ida features a song which satirically lists the impossible goals of the women's university run by the title character, such as finding perpetual motion. One of these goals is "And the circle – they will square it/Some fine day."
-
-The sestina, a poetic form first used in the 12th century by Arnaut Daniel, has been said to square the circle in its use of a square number of lines (six stanzas of six lines each) with a circular scheme of six repeated words. Spanos writes that this form invokes a symbolic meaning in which the circle stands for heaven and the square stands for the earth. A similar metaphor was used in "Squaring the Circle", a 1908 short story by O. Henry, about a long-running family feud. In the title of this story, the circle represents the natural world, while the square represents the city, the world of man.
-
-In later works, circle-squarers such as Leopold Bloom in James Joyce's novel Ulysses and Lawyer Paravant in Thomas Mann's The Magic Mountain are seen as sadly deluded or as unworldly dreamers, unaware of its mathematical impossibility and making grandiose plans for a result they will never attain.
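The approximate values quoted in this article are easy to check numerically. The following sketch (added for illustration; the labels merely name the constructions mentioned above) compares each approximation with pi:

```python
from math import pi, sqrt

# Approximations to pi discussed above, labeled by their constructions.
approximations = {
    "355/113 (de Gelder 1849; Ramanujan 1913)": 355 / 113,
    "(9^2 + 19^2/22)^(1/4) (Ramanujan 1914)": (9**2 + 19**2 / 22) ** 0.25,
    "(6/5)(1 + phi) (Dixon 1991)": (6 / 5) * (1 + (1 + sqrt(5)) / 2),
}

for label, value in approximations.items():
    # Absolute error against the true value of pi.
    print(f"{label}: {value:.10f}, error {abs(value - pi):.1e}")
```

The errors come out near $2.7\times10^{-7}$, $1.0\times10^{-9}$, and $4.8\times10^{-5}$ respectively, matching the six, eight, and three correct decimal places claimed above.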
diff --git a/wiki/wikipedia/2350.txt b/wiki/wikipedia/2350.txt deleted file mode 100644 index b881a85f1666668bed07b9560ee18813e923e298..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2350.txt +++ /dev/null @@ -1,7 +0,0 @@ -In mathematics, the Lickorish–Wallace theorem in the theory of 3-manifolds states that any closed, orientable, connected 3-manifold may be obtained by performing Dehn surgery on a framed link in the 3-sphere with ±1 surgery coefficients. Furthermore, each component of the link can be assumed to be unknotted.
-
-The theorem was proved in the early 1960s by W. B. R. Lickorish and Andrew H. Wallace, independently and by different methods. Lickorish's proof rested on the Lickorish twist theorem, which states that any orientation-preserving automorphism of a closed orientable surface is generated by Dehn twists along 3g - 1 specific simple closed curves in the surface, where g denotes the genus of the surface. Wallace's proof was more general and involved adding handles to the boundary of a higher-dimensional ball.
-
-A corollary of the theorem is that every closed, orientable 3-manifold bounds a simply-connected compact 4-manifold.
-
-By using his work on automorphisms of non-orientable surfaces, Lickorish also showed that every closed, non-orientable, connected 3-manifold is obtained by Dehn surgery on a link in the non-orientable 2-sphere bundle over the circle. Similar to the orientable case, the surgery can be done in a special way which allows the conclusion that every closed, non-orientable 3-manifold bounds a compact 4-manifold. diff --git a/wiki/wikipedia/2351.txt b/wiki/wikipedia/2351.txt deleted file mode 100644 index b92fb3c05e591266a99eee6ee0dd5b0019df6cf8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2351.txt +++ /dev/null @@ -1,73 +0,0 @@ -In mathematics, an identity is an equality relating one mathematical expression A to another mathematical expression B, such that A and B (which might contain some variables) produce the same value for all values of the variables within a certain range of validity. In other words, A = B is an identity if A and B define the same functions, and an identity is an equality between functions that are differently defined. For example, $(a+b)^2 = a^2 + 2ab + b^2$ and $\cos^2\theta + \sin^2\theta =1$ are identities.
-
-Certain identities, such as $a+0=a$ and $a+(-a)=0$, form the basis of algebra, while other identities, such as $(a+b)^2 = a^2 + 2ab +b^2$ and $a^2 - b^2 = (a+b)(a-b)$, can be useful in simplifying algebraic expressions and expanding them.
-
-Geometrically, trigonometric identities are identities involving certain functions of one or more angles. They are distinct from triangle identities, which are identities involving both angles and side lengths of a triangle. Only the former are covered in this article.
-
-These identities are useful whenever expressions involving trigonometric functions need to be simplified. Another important application is the integration of non-trigonometric functions: a common technique which involves first using the substitution rule with a trigonometric function, and then simplifying the resulting integral with a trigonometric identity.
-
-One of the most prominent examples of trigonometric identities involves the equation $ \sin ^2 \theta + \cos ^2 \theta = 1, $ which is true for all complex values of $\theta$ (since the complex numbers $\mathbb{C}$ form the domain of sine and cosine).
On the other hand, the equation
-$$
-\cos \theta = 1
-$$
-
-is only true for certain values of $\theta$, not all (nor for all values in a neighborhood). For example, this equation is true when $ \theta = 0,$ but false when $\theta = 2$.
-
-Another group of trigonometric identities concerns the so-called addition/subtraction formulas (e.g. the double-angle identity $\sin (2\theta) = 2\sin\theta \cos\theta$, the addition formula for $\tan (x + y)$), which can be used to break down expressions of larger angles into those with smaller constituents.
-
-The following identities hold for all integer exponents, provided that the base is non-zero:
-
-\begin{align}
-
-b^{m + n} &= b^m \cdot b^n \\
-
-(b^m)^n &= b^{m\cdot n} \\
-
-(b \cdot c)^n &= b^n \cdot c^n
-
-\end{align}
-
-Unlike addition and multiplication, exponentiation is not commutative. For example, $2 + 3 = 3 + 2 = 5$ and $2 \cdot 3 = 3 \cdot 2 = 6$, but $2^3 = 8$, whereas $3^2 = 9$.
-
-And unlike addition and multiplication, exponentiation is not associative either. For example, $(2 + 3) + 4 = 2 + (3 + 4) = 9$ and $(2 \cdot 3) \cdot 4 = 2 \cdot (3 \cdot 4) = 24$, but $(2^3)^4 = 8^4$ (or 4,096), whereas $2^{(3^4)} = 2^{81}$ (or 2,417,851,639,229,258,349,412,352). Without parentheses to modify the order of calculation, by convention the order is top-down, not bottom-up:
-$$
-b^{p^q} = b^{(p^q)} \ne (b^p)^q = b^{(p \cdot q)} = b^{p \cdot q} .
-$$
-
-Several important formulas, sometimes called logarithmic identities or log laws, relate logarithms to one another.
-
-The logarithm of a product is the sum of the logarithms of the numbers being multiplied; the logarithm of the ratio of two numbers is the difference of the logarithms. The logarithm of the pth power of a number is p times the logarithm of the number itself; the logarithm of a pth root is the logarithm of the number divided by p. The following identities, with examples, illustrate this. Each of the identities can be derived after substitution of the logarithm definitions $x=b^{\log_b x},$ and/or $y=b^{\log_b y},$ in the left-hand sides.
- -Product: $\log_b(xy) = \log_b(x) + \log_b(y)$; for example, $\log_3(243) = \log_3(9 \cdot 27) = \log_3(9) + \log_3(27) = 2 + 3 = 5$.
-
-Quotient: $\log_b(x/y) = \log_b(x) - \log_b(y)$; for example, $\log_2(16) = \log_2(64/4) = \log_2(64) - \log_2(4) = 6 - 2 = 4$.
-
-Power: $\log_b(x^p) = p \log_b(x)$; for example, $\log_2(64) = \log_2(2^6) = 6 \log_2(2) = 6$.
-
-Root: $\log_b\sqrt[p]{x} = \frac{\log_b(x)}{p}$; for example, $\log_{10}\sqrt{1000} = \frac{1}{2}\log_{10}(1000) = 1.5$.
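These identities are straightforward to verify numerically; a minimal sketch (the values of b, x, y, and p are chosen arbitrarily for illustration):

```python
import math

# Spot-check the product, quotient, power, and root laws numerically.
b, x, y, p = 3.0, 9.0, 27.0, 6.0

assert math.isclose(math.log(x * y, b), math.log(x, b) + math.log(y, b))
assert math.isclose(math.log(x / y, b), math.log(x, b) - math.log(y, b))
assert math.isclose(math.log(x ** p, b), p * math.log(x, b))
assert math.isclose(math.log(x ** (1.0 / p), b), math.log(x, b) / p)
print("all four log laws hold to floating-point accuracy")
```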
- -The logarithm $\log_b(x)$ can be computed from the logarithms of x and b with respect to an arbitrary base k using the following formula:
-
-$ \log_b(x) = \frac{\log_k(x)}{\log_k(b)}.$
-
-Typical scientific calculators calculate the logarithms to bases 10 and e. Logarithms with respect to any base b can be determined using either of these two logarithms by the previous formula:
-$$
- \log_b (x) = \frac{\log_{10} (x)}{\log_{10} (b)} = \frac{\log_{e} (x)}{\log_{e} (b)}.
-$$
-
-Given a number x and its logarithm $\log_b(x)$ to an unknown base b, the base is given by:
-$$
- b = x^\frac{1}{\log_b(x)}.
-$$
-
-The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. In fact, Osborn's rule states that one can convert any trigonometric identity into a hyperbolic identity by expanding it completely in terms of integral powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term which contains a product of an even number of hyperbolic sines.
-
-The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic ones that does not involve complex numbers.
-
-Formally, an identity is a true universally quantified formula of the form $\forall x_1,\ldots,x_n; s=t,$ where s and t are terms with no other free variables than $x_1,\ldots,x_n.$ The quantifier prefix $\forall x_1,\ldots,x_n$ is often left implicit when it is stated that the formula is an identity. For example, the axioms of a monoid are often given as the formulas
-$$
-\forall x,y,z; x*(y*z)=(x*y)*z,\quad \forall x; x*1=x, \quad \forall x; 1*x=x,
-$$
-
-or, shortly,
-$$
-x*(y*z)=(x*y)*z,\qquad x*1=x, \qquad 1*x=x.
-$$
-
-So, these formulas are identities in every monoid. As with any equality, formulas without quantifiers are often called equations. In other words, an identity is an equation that is true for all values of the variables. diff --git a/wiki/wikipedia/2352.txt b/wiki/wikipedia/2352.txt deleted file mode 100644 index d7f37a43265baec92f392c4ed8b320404465138c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2352.txt +++ /dev/null @@ -1,32 +0,0 @@ -Disjunction introduction or addition (also called or introduction) is a rule of inference of propositional logic and almost every other deduction system. The rule makes it possible to introduce disjunctions to logical proofs. It is the inference that if P is true, then P or Q must be true.
-
-An example in English:
-
-Socrates is a man.
-
-Therefore, Socrates is a man or pigs are flying in formation over the English Channel.
-
-The rule can be expressed as:
-$$
-\frac{P}{\therefore P \lor Q}
-$$
-
-where the rule is that whenever instances of "$P$" appear on lines of a proof, "$P \lor Q$" can be placed on a subsequent line.
-
-More generally, it is also a simple valid argument form: if the premise is true, then the conclusion is also true, as with any rule of inference; and it is an immediate inference, as it has a single proposition in its premises.
-
-Disjunction introduction is not a rule in some paraconsistent logics because, in combination with other rules of logic, it leads to explosion (i.e. everything becomes provable), and paraconsistent logic tries to avoid explosion and to be able to reason with contradictions. One of the solutions is to introduce disjunction with over rules.
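The rule is also directly expressible in a proof assistant. A minimal sketch in Lean 3 (added for illustration; `or.inl` is the core-library constructor for left disjunction introduction, with `or.inr` its right-hand counterpart):

```lean
-- From a proof h : P we may infer P ∨ Q, for an arbitrary Q.
example (P Q : Prop) (h : P) : P ∨ Q := or.inl h

-- The corresponding tautology P → (P ∨ Q).
example (P Q : Prop) : P → (P ∨ Q) := λ h, or.inl h
```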
The disjunction introduction rule may be written in sequent notation:
-$$
-P \vdash (P \lor Q)
-$$
-
-where $\vdash$ is a metalogical symbol meaning that $P \lor Q$ is a syntactic consequence of $P$ in some logical system;
-
-and expressed as a truth-functional tautology or theorem of propositional logic:
-$$
-P \to (P \lor Q)
-$$
-
-where $P$ and $Q$ are propositions expressed in some formal system. diff --git a/wiki/wikipedia/2353.txt b/wiki/wikipedia/2353.txt deleted file mode 100644 index 3d78632ff6053a952212ca6cd81681902e60d746..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2353.txt +++ /dev/null @@ -1,20 +0,0 @@ -The Stewart–Walker lemma provides necessary and sufficient conditions for the linear perturbation of a tensor field to be gauge-invariant. $\Delta \delta T = 0$ if and only if one of the following holds:
-
-1. $T_{0} = 0$
-
-2. $T_{0}$ is a constant scalar field
-
-3. $T_{0}$ is a linear combination of products of Kronecker delta symbols $\delta_{a}^{b}$
-
-A 1-parameter family of manifolds denoted by $\mathcal{M}_{\epsilon}$ with $\mathcal{M}_{0} = \mathcal{M}^{4}$ has metric $g_{ik} = \eta_{ik} + \epsilon h_{ik}$. These manifolds can be put together to form a 5-manifold $\mathcal{N}$. A smooth curve $\gamma$ can be constructed through $\mathcal{N}$ with tangent 5-vector $X$, transverse to $\mathcal{M}_{\epsilon}$. With $X$ defined so that, if $h_{\epsilon}$ is the 1-parameter family of maps $\mathcal{N} \to \mathcal{N}$ and $p_{0} \in \mathcal{M}_{0}$, then a point $p_{\epsilon} \in \mathcal{M}_{\epsilon}$ can be written as $h_{\epsilon}(p_{0})$. This also defines a pullback $h_{\epsilon}^{*}$ that maps a tensor field $T_{\epsilon}$ on $\mathcal{M}_{\epsilon}$ back onto $\mathcal{M}_{0}$. Given sufficient smoothness, a Taylor expansion can be defined:
-$$
-h_{\epsilon}^{*}(T_{\epsilon}) = T_{0} + \epsilon h_{\epsilon}^{*}(\mathcal{L}_{X}T_{\epsilon}) + O(\epsilon^{2})
-$$
-$$
-\delta T = \epsilon h_{\epsilon}^{*}(\mathcal{L}_{X}T_{\epsilon}) \equiv \epsilon (\mathcal{L}_{X}T_{\epsilon})_{0}
-$$ is the linear perturbation of $T$. However, since the choice of $X$ depends on the choice of gauge, another gauge can be taken. Therefore, the difference between the two gauges becomes $\Delta \delta T = \epsilon(\mathcal{L}_{X}T_{\epsilon})_0 - \epsilon(\mathcal{L}_{Y}T_{\epsilon})_0 = \epsilon(\mathcal{L}_{X-Y}T_\epsilon)_0$. Picking a chart where $X^{a} = (\xi^\mu,1)$ and $Y^a = (0,1)$, then $X^{a}-Y^{a} = (\xi^{\mu},0)$, which is a well-defined vector in any $\mathcal{M}_\epsilon$ and gives the result
-$$
-\Delta \delta T = \epsilon \mathcal{L}_{\xi}T_0.
-$$
-
-The only three possible ways this can be satisfied are those of the lemma. diff --git a/wiki/wikipedia/2354.txt b/wiki/wikipedia/2354.txt deleted file mode 100644 index 01e97334ee2c9a209d2efb67b80af6e4f6dc40ab..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2354.txt +++ /dev/null @@ -1,13 +0,0 @@ -In mathematics, the tameness theorem states that every complete hyperbolic 3-manifold with finitely generated fundamental group is topologically tame, in other words homeomorphic to the interior of a compact 3-manifold.
-
-The tameness theorem was conjectured by Marden. It was proved by Agol and, independently, by Danny Calegari and David Gabai. It is one of the fundamental properties of geometrically infinite hyperbolic 3-manifolds, together with the density theorem for Kleinian groups and the ending lamination theorem.
-
-It also implies the Ahlfors measure conjecture.
Topological tameness may be viewed as a property of the ends of the manifold, namely, having a local product structure. An analogous statement is well known in two dimensions, that is, for surfaces. However, as the example of the Alexander horned sphere shows, there are wild embeddings among 3-manifolds, so this property is not automatic.
-
-The conjecture was raised in the form of a question by Albert Marden, who proved that any geometrically finite hyperbolic 3-manifold is topologically tame. The conjecture was also called the Marden conjecture or the tame ends conjecture.
-
-There had been steady progress in understanding tameness before the conjecture was resolved. Partial results had been obtained by Thurston, Brock, Bromberg, Canary, Evans, Minsky, and Ohshika. An important sufficient condition for tameness in terms of splittings of the fundamental group had been obtained by Bonahon.
-
-The conjecture was proved in 2004 by Ian Agol and, independently, by Danny Calegari and David Gabai. Agol's proof relies on the use of manifolds of pinched negative curvature and on Canary's trick of "diskbusting" that allows one to replace a compressible end with an incompressible end, for which the conjecture has already been proved. The Calegari–Gabai proof is centered on the existence of certain closed, non-positively curved surfaces that they call "shrinkwrapped". diff --git a/wiki/wikipedia/2355.txt b/wiki/wikipedia/2355.txt deleted file mode 100644 index f091932756b3db7f8000aefcad813a976328a1ef..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2355.txt +++ /dev/null @@ -1,76 +0,0 @@ -The finite volume method (FVM) is a numerical method. In computational fluid dynamics, FVM is used to solve, by discretisation, the partial differential equations that arise from physical conservation laws. Convection is always accompanied by diffusion, and hence where convection is considered we have to consider the combined effect of convection and diffusion. But in places where fluid flow plays a negligible role, we can neglect the convective effect of the flow and consider the simpler case of pure diffusion. The general equation for steady convection-diffusion can easily be derived from the general transport equation for a property $\phi$ by deleting the transient term.
-
-The general transport equation is defined as:
-$$
-\frac{\partial \rho \phi }{\partial t} + \operatorname{div}(\rho \phi \upsilon) = \operatorname{div}(\Gamma \operatorname{grad} \phi) + S_\phi
-$$ (1)
-
-where
-$$
-\phi
-$$ is the conserved property of the fluid flow,
-$$
-\rho
-$$ is the density,
-$$
-\operatorname{div}(\rho \phi \upsilon)
-$$ is the net rate of flow of $\phi$ out of the fluid element (the convective term),
-$$
-\frac{\partial \rho \phi }{\partial t}
-$$ is the transient term,
-$$
-\operatorname{div}(\Gamma \operatorname{grad} \phi)
-$$ is the rate of change of $\phi$ due to diffusion (the diffusive term), and
-$$
- S_\phi
-$$ is the rate of increase of $\phi$ due to sources (the source term).
Due to the steady-state condition the transient term becomes zero, and due to the absence of convection the convective term becomes zero; therefore the steady-state three-dimensional diffusion equation becomes:
-$$
-\operatorname{div}(\Gamma \operatorname{grad} \phi) + S_\phi = 0
-$$ (2)
-
-Therefore,
-$$
-\frac{\partial}{\partial x}\left (\Gamma \frac{\partial \phi}{\partial x}\right )+\frac{\partial}{\partial y}\left (\Gamma \frac{\partial \phi}{\partial y}\right )+\frac{\partial}{\partial z}\left (\Gamma \frac{\partial \phi}{\partial z}\right )+S_\phi = 0
-$$ (3)
-
-The flow must also satisfy the continuity equation; therefore,
-$$
-\nabla \cdot (\rho \mathbf{u}) = 0
-$$ (4)
-
-Grid formation:
-
-1. Divide the domain into discrete control volumes.
-
-2. Place the nodal points between the end points defining the physical boundaries. Boundaries/faces of the control volume are created midway between adjacent nodes.
-
-3. Set up the control volumes near the edge of the domain such that the physical and control-volume boundaries coincide.
-
-4. Consider a general nodal point P accompanied by six neighbouring nodal points 'E' (east), 'W' (west), 'N' (north), 'S' (south), 'T' (top), and 'B' (bottom). In the considered control volume, the east, west, north, south, top, and bottom faces are referred to as 'e', 'w', 'n', 's', 't', and 'b' respectively.
-
-5. The distances between nodes W and P, between nodes P and E, between nodes P and N, between nodes S and P, between nodes P and T, and between nodes B and P are denoted by $ {\delta x_{WP}} , {\delta x_{PE}} , {\delta x_{PN}} , {\delta x_{SP}} , {\delta x_{PT}} , {\delta x_{BP}} $ respectively.
-
-Discretisation:
-
-Integrating equation (3) over the general control volume gives:
-
-$\left[\left ( \Gamma A \frac{\partial\phi }{\partial x} \right )_e - \left ( \Gamma A \frac{\partial\phi }{\partial x} \right )_w\right] + \left[\left ( \Gamma A \frac{\partial\phi }{\partial y} \right )_n - \left ( \Gamma A \frac{\partial\phi }{\partial y} \right )_s\right] + \left[\left ( \Gamma A \frac{\partial\phi }{\partial z} \right )_t - \left ( \Gamma A \frac{\partial\phi }{\partial z} \right )_b\right] + \bar{S} \Delta V = 0$
-
-Now, using the central differencing method, we can rewrite the above equation as
-
-$\left[\Gamma_eA_e\left ( \frac{\phi_E - \phi_P}{\delta x_{PE}}\right ) - \Gamma_wA_w\left ( \frac{\phi_P - \phi_W}{\delta x_{WP}}\right )\right] + \left[\Gamma_nA_n\left ( \frac{\phi_N - \phi_P}{\delta x_{PN}}\right ) - \Gamma_s A_s\left ( \frac{\phi_P - \phi_S}{\delta x_{SP}}\right )\right] + \left[\Gamma_tA_t\left ( \frac{\phi_T - \phi_P}{\delta x_{PT}}\right ) - \Gamma_bA_b\left ( \frac{\phi_P - \phi_B}{\delta x_{BP}}\right )\right] + (S_u + S_p\phi_P) = 0 $
-
-This can be rearranged to give the discretised equation for interior nodes:
-$$
-a_P\phi_P = a_W\phi_W + a_E\phi_E + a_S\phi_S + a_N\phi_N + a_B\phi_B + a_T\phi_T + S_u
-$$
-
-where $a_W = \frac{\Gamma_w A_w}{\delta x_{WP}}$, $a_E = \frac{\Gamma_e A_e}{\delta x_{PE}}$, $a_S = \frac{\Gamma_s A_s}{\delta x_{SP}}$, $a_N = \frac{\Gamma_n A_n}{\delta x_{PN}}$, $a_B = \frac{\Gamma_b A_b}{\delta x_{BP}}$, $a_T = \frac{\Gamma_t A_t}{\delta x_{PT}}$, and $a_P = a_W + a_E + a_S + a_N + a_B + a_T - S_p$.
-
-Solution of the equations:
-
-1. To solve the diffusion problem, we express the discretised equation above at all the grid nodes.
-
-2. The resulting set of algebraic equations is then solved to obtain the distribution of the transported property $\phi$.
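To make the discretisation concrete, here is a minimal sketch of the one-dimensional analogue (steady diffusion with no volumetric source and Dirichlet boundary values); the grid size, Γ, area, and boundary values are illustrative assumptions, not taken from the article:

```python
import numpy as np

def solve_diffusion_1d(n=5, length=0.5, gamma=1000.0, area=0.01,
                       phi_a=100.0, phi_b=500.0):
    """Steady 1-D diffusion d/dx(Gamma dphi/dx) = 0 on n control volumes,
    discretised with central differencing and fixed boundary values."""
    dx = length / n
    D = gamma * area / dx            # diffusion conductance of an interior face
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        aW = D if i > 0 else 0.0     # no west neighbour at the boundary cell
        aE = D if i < n - 1 else 0.0
        sp = su = 0.0
        if i == 0:                   # west boundary: node sits dx/2 from the wall
            sp, su = -2.0 * D, 2.0 * D * phi_a
        if i == n - 1:               # east boundary
            sp, su = -2.0 * D, 2.0 * D * phi_b
        A[i, i] = aW + aE - sp       # a_P = a_W + a_E - S_p
        if i > 0:
            A[i, i - 1] = -aW
        if i < n - 1:
            A[i, i + 1] = -aE
        b[i] = su
    return np.linalg.solve(A, b)

# The computed nodal values lie on the exact linear profile between 100 and 500.
print(solve_diffusion_1d())
```

With the default five cells this returns 140, 220, 300, 380, 460, the exact linear solution, since central differencing is exact for a constant-coefficient, source-free problem.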
diff --git a/wiki/wikipedia/2356.txt b/wiki/wikipedia/2356.txt deleted file mode 100644 index bc6e836e9efb4d8e4291a20e0e5c42d501729e39..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2356.txt +++ /dev/null @@ -1,189 +0,0 @@ -Curry's paradox is a paradox in which an arbitrary claim F is proved from the mere existence of a sentence C that says of itself "If C, then F", requiring only a few apparently innocuous logical deduction rules. Since F is arbitrary, any logic having these rules allows one to prove everything. The paradox may be expressed in natural language and in various logics, including certain forms of set theory, lambda calculus, and combinatory logic. - -The paradox is named after the logician Haskell Curry. It has also been called Löb's paradox after Martin Hugo Löb, due to its relationship to Löb's theorem. - -Claims of the form "if A, then B" are called conditional claims. Curry's paradox uses a particular kind of self-referential conditional sentence, as demonstrated in this example: - -If this sentence is true, then Germany borders China. - -Even though Germany does not border China, the example sentence certainly is a natural-language sentence, and so the truth of that sentence can be analyzed. The paradox follows from this analysis. The analysis consists of two steps. - -# First, common natural-language proof techniques can be used to prove that the example sentence is true. - -# Second, the truth of the example sentence can be used to prove that Germany borders China. Because Germany does not border China, this suggests that there has been an error in one of the proofs. - -The claim "Germany borders China" could be replaced by any other claim, and the sentence would still be provable. Thus every sentence appears to be provable. Because the proof uses only well-accepted methods of deduction, and because none of these methods appears to be incorrect, this situation is paradoxical. - -The standard method for proving conditional sentences (sentences of the form "if A, then B") is called a "conditional proof". In this method, in order to prove "if A, then B", first A is assumed and then with that assumption B is shown to be true. - -To produce Curry's paradox, as described in the two steps above, apply this method to the sentence "if this sentence is true, then Germany borders China". Here A, "this sentence is true", refers to the overall sentence, while B is "Germany borders China". So, assuming A is the same as assuming "If A, then B". Therefore, in assuming A, we have assumed both A and "If A, then B". Therefore, B is true, by modus ponens, and we have proven "If this sentence is true, then 'Germany borders China' is true" in the usual way, by assuming the hypothesis and deriving the conclusion. - -Now, because we have proved "If this sentence is true, then 'Germany borders China' is true", we can again apply modus ponens, because we know that the claim "this sentence is true" is correct. In this way, we can deduce that Germany borders China. - -The example in the previous section used unformalized, natural-language reasoning. Curry's paradox also occurs in some varieties of formal logic. In this context, it shows that if we assume there is a formal sentence (X → Y), where X itself is equivalent to (X → Y), then we can prove Y with a formal proof. One example of such a formal proof is as follows. For an explanation of the logic notation used in this section, refer to the list of logic symbols. - -# X := (X → Y)
assumption, the starting point, equivalent to "If this sentence is true, then Y" - -# X → X
law of identity - -# X → (X → Y)
substitute right side of 2, since X is equivalent to X → Y by 1 - -# X → Y
from 3 by contraction - -# X
substitute 4, by 1 - -# Y
from 5 and 4 by modus ponens - -An alternative proof is via Peirce's law. If X = X → Y then (X → Y) → X. This together with Peirce's law ((X → Y) → X) → X and modus ponens implies X and subsequently Y (as in above proof). - -Therefore, if Y is an unprovable statement in a formal system, there is no statement X in that system such that X is equivalent to the implication (X → Y). By contrast, the previous section shows that in natural (unformalized) language, for every natural language statement Y there is a natural language statement Z such that Z is equivalent to (Z → Y) in natural language. Namely, Z is "If this sentence is true then Y". - -In specific cases where the classification of Y is already known, few steps are needed to reveal the contradiction. For example, when Y is "Germany borders China," it is known that Y is false. - -# X = (X → Y)
assumption - -# X = (X → false)
substitute known value of Y - -# X = (¬X ∨ false)
implication - -# X = ¬X
identity - -Even if the underlying mathematical logic does not admit any self-referential sentences, certain forms of naive set theory are still vulnerable to Curry's paradox. In set theories that allow unrestricted comprehension, we can nevertheless prove any logical statement Y by examining the set - -X \ \stackrel{\mathrm{def}}{=}\ \left\{ x \mid x \in x \to Y \right\}. - -Assuming that $\in$ takes precedence over both $\to$ and $\leftrightarrow$, the proof proceeds as follows: - -# $ X = \left\{ x \mid x \in x \to Y \right\} $
Definition of X - -# $ x = X \to (x \in x \leftrightarrow X \in X) $
Substitution of equal sets in membership - -# $ x = X \to ( (x \in x \to Y) \leftrightarrow (X \in X \to Y))$
Addition of a consequent to both sides of a biconditional (from 2) - -# $ X \in X \leftrightarrow (X \in X \to Y) $
Law of concretion (from 1 and 3) - -# $ X \in X \to (X \in X \to Y) $
Biconditional elimination (from 4) - -# $ X \in X \to Y $
Contraction (from 5) - -# $(X \in X \to Y) \to X \in X $
Biconditional elimination (from 4) - -# $ X \in X $
Modus ponens (from 6 and 7) - -# $ Y $
Modus ponens (from 8 and 6) - -Step 4 is the only step invalid in a consistent set theory. In Zermelo–Fraenkel set theory, an extra hypothesis stating X is a set would be required, which is not provable in ZF or in its extension ZFC (with the axiom of choice). - -Therefore, in a consistent set theory, the set $\left\{ x \mid x \in x \to Y \right\}$ does not exist for false Y. This can be seen as a variant on Russell's paradox, but is not identical. Some proposals for set theory have attempted to deal with Russell's paradox not by restricting the rule of comprehension, but by restricting the rules of logic so that it tolerates the contradictory nature of the set of all sets that are not members of themselves. The existence of proofs like the one above shows that such a task is not so simple, because at least one of the deduction rules used in the proof above must be omitted or restricted. - -Curry's paradox may be expressed in untyped lambda calculus, enriched by restricted minimal logic. - -To cope with the lambda calculus's syntactic restrictions, $m$ shall denote the implication function taking two parameters, that is, the lambda term $((m A) B)$ shall be equivalent to the usual infix notation $A \to B$. - -An arbitrary formula $Z$ can be proved by defining a lambda function $N := \lambda p.((m p) Z)$, and $X := (\textsf{Y} N)$, where $\textsf{Y}$ denotes Curry's fixed-point combinator. Then $X = (N X) = ((m X) Z)$ by definition of $\textsf{Y}$ and $N$, hence the above sentential logic proof can be duplicated in the calculus: - - - -\begin{array}{cll} - -\vdash & ((m X) X) & \mbox{ by the minimal logic axiom } A \to A \\ - -\vdash & ((m X) ((m X) Z)) & \mbox{ since } X = ((m X) Z) \\ - -\vdash & ((m X) Z) & \mbox{ by the theorem } (A \to (A \to B)) \vdash (A \to B) \mbox{ of minimal logic } \\ - -\vdash & X & \mbox{ since } X = ((m X) Z) \\ - -\vdash & Z & \mbox{ by modus ponens } A, (A \to B) \vdash B \mbox{ from } X \mbox{ and } ((m X) Z) \\ - -\end{array} - - - -In simply typed lambda calculus, fixed-point combinators cannot be typed and hence are not admitted. - -Curry's paradox may also be expressed in combinatory logic, which has equivalent expressive power to lambda calculus. Any lambda expression may be translated into combinatory logic, so a translation of the implementation of Curry's paradox in lambda calculus would suffice. - -The above term $X$ translates to $(r \ r)$ in combinatory logic, where - -r = \textsf{S} \ (\textsf{S} (\textsf{K} m) (\textsf{S} \textsf{I} \textsf{I})) \ (\textsf{K} Z); - -hence - -(r \ r) = ((m (r r)) \ Z). - -Curry's paradox can be formulated in any language supporting basic logic operations that also allows a self-recursive function to be constructed as an expression. Two mechanisms that support the construction of the paradox are self-reference (the ability to refer to "this sentence" from within a sentence) and unrestricted comprehension in naive set theory. Natural languages nearly always contain many features that could be used to construct the paradox, as do many other languages. Usually the addition of meta programming capabilities to a language will add the features needed. Mathematical logic generally does not allow explicit reference to its own sentences. However the heart of Gödel's incompleteness theorems is the observation that a different form of self-reference can be added; see Gödel number. - -The axiom of unrestricted comprehension adds the ability to construct a recursive definition in set theory. 
This axiom is not supported by modern set theory. - -The logic rules used in the construction of the proof are the rule of assumption for conditional proof, the rule of contraction, and modus ponens. These are included in the most common logical systems, such as first-order logic. - -In the 1930s, Curry's paradox and the related Kleene–Rosser paradox played a major role in showing that formal logic systems based on self-recursive expressions are inconsistent. - -These include some versions of lambda calculus and combinatory logic. - -Curry began with the Kleene–Rosser paradox and deduced that the core problem could be expressed as this simpler paradox. His conclusion may be stated as saying that combinatory logic and lambda calculus cannot be made consistent as deductive languages while still allowing recursion. - -In the study of illative (deductive) combinatory logic, Curry in 1941 recognized the paradox as implying that, without restrictions, the following properties of a combinatory logic are incompatible: - -# Combinatorial completeness. This means that an abstraction operator is definable (or primitive) in the system, which is a requirement on the expressive power of the system. - -# Deductive completeness. This is a requirement on derivability, namely, the principle that in a formal system with material implication and modus ponens, if Y is provable from the hypothesis X, then there is also a proof of X → Y. - -Note that unlike the liar paradox or Russell's paradox, Curry's paradox does not depend on what model of negation is used, as it is completely negation-free. Thus paraconsistent logics can still be vulnerable to this paradox, even if they are immune to the liar paradox. - -The origin of Alonzo Church's lambda calculus may have been the question, "How can you solve an equation, to provide a definition of a function?". This is expressed in the equivalence - -f\ x = y \iff f = \lambda x.y. - -This definition is valid if there is one and only one function $f$ that satisfies the equation $f\ x = y $, but invalid otherwise. This is the core of the problem that Stephen Cole Kleene and then Haskell Curry discovered with combinatory logic and lambda calculus. - -The situation may be compared to defining - -y = x^2 \iff x = \sqrt{y}. - -This definition is fine as long as only positive values are allowed for the square root. In mathematics an existentially quantified variable may represent multiple values, but only one at a time. Existential quantification is the disjunction of many instances of an equation; in each instance the variable takes a single value. - -However, in mathematics, an expression with no free variables must have one and only one value. So $ \sqrt{4} $ can only represent $+2$. However, there is no convenient way to restrict the lambda abstraction to one value or to assure that there is a value. - -Lambda calculus allows recursion by passing the same function that is called as a parameter. This allows situations where $f\ x = y $ has multiple or no solutions for $f$. - -Lambda calculus may be considered as part of mathematics if only lambda abstractions that represent a single solution to an equation are allowed. Other lambda abstractions are incorrect in mathematics. - -Curry's paradox and other paradoxes arise in lambda calculus because of the inconsistency of lambda calculus considered as a deductive system. See also deductive lambda calculus. - -Lambda calculus is a consistent theory in its own domain.
However, it is not consistent to add the lambda abstraction definition to general mathematics. Lambda terms describe values from the lambda calculus domain. Each lambda term has a value in that domain. - -When translating expressions from mathematics to lambda calculus, the domain of lambda calculus terms is not always isomorphic to the domain of the mathematical expressions. This lack of isomorphism is the source of the apparent contradictions. - -There are many language constructs that implicitly invoke an equation that may have none or many solutions. The sound resolution to this problem is to syntactically link these expressions to an existentially quantified variable. The variable represents the multiple values in a way that is meaningful in common human reasoning, but is also valid in mathematics. - -For example, a natural language that allows the Eval function is not mathematically consistent. But each call to Eval in that natural language may be translated into mathematics in a way that is consistent. The translation of Eval(s) into mathematics is - -let x = Eval(s) in x. - -So where s = "Eval(s) → y", - -let x = x → y in x. - -If y is false, then x = x → y is false, but this is a falsehood, not a paradox. - -The existence of the variable x was implicit in the natural language. The variable x is created when the natural language is translated into mathematics. This allows us to use natural language, with natural semantics, while maintaining mathematical integrity. - -The argument in formal logic starts with assuming the validity of naming (X → Y) as X. However, this is not a valid starting point. First we must deduce the validity of the naming. The following theorem is easily proved and represents such a naming: - - \forall A, \exists X, X = A. - -In the above statement the formula A is named as X. Now attempt to instantiate the formula with (X → Y) for A. However, this is not possible, as the scope of $ \exists X $ is inside the scope of $ \forall A $. The order of the quantifiers may be reversed using Skolemization: - - \exists f, \forall A, f(A) = A. - -However, now instantiation gives - - f(X \to Y) = X \to Y, - -which is not the starting point for the proof and does not lead to a contradiction. There are no other instantiations for A that lead to the starting point of the paradox. - -In Zermelo–Fraenkel set theory (ZFC), the axiom of unrestricted comprehension is replaced with a group of axioms that allow construction of sets. So Curry's paradox cannot be stated in ZFC. ZFC evolved in response to Russell's paradox. diff --git a/wiki/wikipedia/2357.txt b/wiki/wikipedia/2357.txt deleted file mode 100644 index 70cd749bd155e2d0ead533177b670a78527edc77..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2357.txt +++ /dev/null @@ -1,54 +0,0 @@ -In coding theory, the weight enumerator polynomial of a binary linear code specifies the number of words of each possible Hamming weight. - -Let $C \subset \mathbb{F}_2^n$ be a binary linear code of length $n$. The weight distribution is the sequence of numbers -$$ - A_t = \#\{c \in C \mid w(c) = t \} -$$ - -giving the number of codewords c in C having weight t as t ranges from 0 to n. The weight enumerator is the bivariate polynomial -$$ - W(C;x,y) = \sum_{w=0}^n A_w x^w y^{n-w}.
-$$ - -The weight enumerator satisfies the following basic properties: - -#$ W(C;0,1) = A_{0}=1 $ - -#$ W(C;1,1) = \sum_{w=0}^{n}A_{w}=|C| $ - -#$ W(C;1,0) = A_{n}= 1 \mbox{ if } (1,\ldots,1)\in C\ \mbox{ and } 0 \mbox{ otherwise} $ - -#$ W(C;1,-1) = \sum_{w=0}^{n}A_{w}(-1)^{n-w} = A_{n}+(-1)^{1}A_{n-1}+\ldots+(-1)^{n-1}A_{1}+(-1)^{n}A_{0} $ - -Denote the dual code of $C \subset \mathbb{F}_2^n$ by -$$ -C^\perp = \{x \in \mathbb{F}_2^n \mid \langle x,c\rangle = 0 \mbox{ }\forall c \in C \} -$$ - -(where $\langle\ ,\ \rangle$ denotes the vector dot product and which is taken over $\mathbb{F}_2$). - -The MacWilliams identity states that -$$ -W(C^\perp;x,y) = \frac{1}{\mid C \mid} W(C;y-x,y+x). -$$ - -The identity is named after Jessie MacWilliams. - -The distance distribution or inner distribution of a code C of size M and length n is the sequence of numbers -$$ - A_i = \frac{1}{M} \# \left\lbrace (c_1,c_2) \in C \times C \mid d(c_1,c_2) = i \right\rbrace -$$ - -where i ranges from 0 to n. The distance enumerator polynomial is -$$ - A(C;x,y) = \sum_{i=0}^n A_i x^i y^{n-i} -$$ - -and when C is linear this is equal to the weight enumerator. - -The outer distribution of C is the $2^n$-by-$(n+1)$ matrix B with rows indexed by elements of $\mathrm{GF}(2)^n$ and columns indexed by integers 0...n, and entries -$$ - B_{x,i} = \# \left\lbrace c \in C \mid d(c,x) = i \right\rbrace . -$$ - -The sum of the rows of B is M times the inner distribution vector $(A_0,\ldots,A_n)$. - -A code C is regular if the rows of B corresponding to the codewords of C are all equal. diff --git a/wiki/wikipedia/2358.txt b/wiki/wikipedia/2358.txt deleted file mode 100644 index 9e9f1dbc9e7c3b94926d7bd6a6d4e75540d37580..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2358.txt +++ /dev/null @@ -1,13 +0,0 @@ -In geometry, tetrahedron packing is the problem of arranging identical regular tetrahedra throughout three-dimensional space so as to fill the maximum possible fraction of space. - -Currently, the best lower bound achieved on the optimal packing fraction of regular tetrahedra is 85.63%. Tetrahedra do not tile space, and an upper bound below 100% (namely, $1 - (2.6\ldots)\cdot 10^{-25}$) has been reported. - -Aristotle claimed that tetrahedra could fill space completely. - -In 2006, Conway and Torquato showed that a packing fraction of about 72% can be obtained by constructing a non-Bravais lattice packing of tetrahedra (with multiple particles with generally different orientations per repeating unit), and thus they showed that the best tetrahedron packing cannot be a lattice packing (with one particle per repeating unit such that each particle has a common orientation). These packing constructions almost doubled the optimal Bravais-lattice-packing fraction 36.73% obtained by Hoylman. In 2007 and 2010, Chaikin and coworkers experimentally showed that tetrahedron-like dice can randomly pack in a finite container up to a packing fraction between 75% and 76%. In 2008, Chen was the first to propose a packing of hard, regular tetrahedra that packed more densely than spheres, demonstrating numerically a packing fraction of 77.86%. A further improvement was made in 2009 by Torquato and Jiao, who compressed Chen's structure using a computer algorithm to a packing fraction of 78.2021%. - -In mid-2009 Haji-Akbari et al. showed, using MC simulations of initially random systems, that at packing densities >50% an equilibrium fluid of hard tetrahedra spontaneously transforms to a dodecagonal quasicrystal, which can be compressed to 83.24%. They also reported a glassy, disordered packing at densities exceeding 78%.
For a periodic approximant to a quasicrystal with an 82-tetrahedron unit cell, they obtained a packing density as high as 85.03%. - -In late 2009, a new, much simpler family of packings with a packing fraction of 85.47% was discovered by Kallus, Elser, and Gravel. These packings were also the basis of a slightly improved packing obtained by Torquato and Jiao at the end of 2009 with a packing fraction of 85.55%, and by Chen, Engel, and Glotzer in early 2010 with a packing fraction of 85.63%. The Chen, Engel and Glotzer result currently stands as the densest known packing of hard, regular tetrahedra. Surprisingly, the quasicrystal approximant packs more densely than this double lattice of triangular bipyramids when the tetrahedra are slightly rounded (the Minkowski sum of a tetrahedron and a sphere), making the 82-tetrahedron quasicrystal approximant the largest unit cell for a densest packing of identical particles to date. - -Because the earliest lower bound known for packings of tetrahedra was less than that of spheres, it was suggested that the regular tetrahedra might be a counterexample to Ulam's conjecture that the optimal density for packing congruent spheres is smaller than that for any other convex body. However, the more recent results have shown that this is not the case. diff --git a/wiki/wikipedia/2359.txt b/wiki/wikipedia/2359.txt deleted file mode 100644 index 9fb2876b62df0194a74f5f1d9449b5413d1f4fcf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2359.txt +++ /dev/null @@ -1,17 +0,0 @@ -The disparity filter is a network reduction algorithm (a.k.a. graph sparsification algorithm) to extract the backbone structure of an undirected weighted network. Many real-world networks, such as citation networks, food webs, and airport networks, display heavy-tailed statistical distributions of edge weights and node strengths. The disparity filter can substantially reduce the network size without destroying its multi-scale nature. The algorithm was developed by M. Angeles Serrano, Marian Boguna and Alessandro Vespignani. - -k-core decomposition is an algorithm that reduces a graph into a maximal connected subgraph of vertices with at least degree k. This algorithm can only be applied to unweighted graphs. - -A minimum spanning tree is a tree-like subgraph of a given graph G which keeps all the nodes of graph G but minimizes the total weight of the subgraph. A minimum spanning tree is the least expensive way to maintain the size of a connected component. The significant limitation of this algorithm is that it overly simplifies the structure of the network (graph). The minimum spanning tree destroys local cycles and clustering coefficients, which are usually present in real networks and are considered important in network measurement. - -A weighted graph can be easily reduced to a subgraph in which every edge weight is larger than a given threshold $w_c$. This technique has been applied to study the resistance of food webs and functional networks that connect correlated human brain sites. The shortcoming of this method is that it disregards nodes with small strength. In real networks, both strength and weight distributions generally follow heavy-tailed distributions which span several orders of magnitude. Applying a simple cutoff on weight removes all the information below the cutoff. - -In network science, the strength $s_i$ of a node $i$ is defined as $s_i = \sum_j w_{ij}$, where $w_{ij}$ is the weight of the link between $i$ and $j$.
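Anticipating the significance test derived in the following paragraphs (the p-value $\alpha_{ij}=(1-p_{ij})^{k-1}$ compared against a chosen level $\alpha$), the filter can be sketched in a few lines of Python. This is a minimal illustration assuming the networkx library; the function and variable names are our own, and degree-one nodes, which cannot be tested under the null model, are simply skipped here.

```python
import networkx as nx  # assumed dependency; any adjacency structure would do

def disparity_filter(G, alpha=0.05):
    """Minimal sketch of the disparity filter on an undirected weighted graph.

    An edge is kept if it is statistically significant for at least one of
    its endpoints, using the null-model p-value (1 - p_ij)**(k - 1) derived
    in the text below.
    """
    backbone = nx.Graph()
    for u, v, w in G.edges(data="weight", default=1.0):
        for node in (u, v):
            k = G.degree(node)
            if k > 1:
                s = G.degree(node, weight="weight")  # strength s_i = sum_j w_ij
                p = w / s                            # normalized weight p_ij
                if (1.0 - p) ** (k - 1) < alpha:     # keep significant links
                    backbone.add_edge(u, v, weight=w)
                    break
    return backbone
```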
- -In order to apply the disparity filter algorithm without overlooking nodes with low strength, a normalized weight $p_{ij}$ is defined as $p_{ij} = w_{ij}/s_i$. In the null model, the normalized weights of a certain node with degree k are generated as follows: $k - 1$ pins are randomly assigned in the interval between 0 and 1. The interval is then divided into k subintervals. The length of each subinterval represents the normalized weight of one link in the null model. - -Consequently, based on the null model, one can derive that the normalized weight distribution of a node with degree $k$ follows $\rho(x)dx=(k-1)(1-x)^{k-2}dx$. For a given normalized weight $p_{ij}$, the p-value $\alpha_{ij}$ of $p_{ij}$ under this null model is given by $\alpha_{ij}=1-(k-1)\int_0^{p_{ij}}(1-x)^{k-2}dx$, which reduces to $\alpha_{ij}=(1-p_{ij})^{k-1}$. The meaning of $\alpha_{ij}$ is the probability of observing a normalized weight larger than or equal to $p_{ij}$ in the framework of the given null model. By setting a significance level $\alpha$ (between 0 and 1), any link whose $\alpha_{ij}$ is larger than $\alpha$ is filtered out. By changing $\alpha$ we can progressively remove irrelevant links, thus effectively extracting the backbone structure of the weighted network. - -The disparity filter algorithm has been shown to be a particular case of the Pólya Filter (built around the famous combinatorial scheme known as the Pólya Urn). The Pólya Filter is able to adapt the filtering procedure to the network's own heterogeneity by using a Maximum Likelihood procedure to set its free parameter $a$, which represents the strength of the self-reinforcing mechanism ruling the underlying urn scheme. Given a significance level $\alpha$, the Pólya Filter has been shown to produce backbones much sparser than the disparity filter while still retaining the most salient links of the system. diff --git a/wiki/wikipedia/236.txt b/wiki/wikipedia/236.txt deleted file mode 100644 index c57fde82d2ce5e0acc8cffb23c01f59a2ee3b5a7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/236.txt +++ /dev/null @@ -1,23 +0,0 @@ -The Kolmogorov–Arnold–Moser (KAM) theorem is a result in dynamical systems about the persistence of quasiperiodic motions under small perturbations. The theorem partly resolves the small-divisor problem that arises in the perturbation theory of classical mechanics. - -The problem is whether or not a small perturbation of a conservative dynamical system results in a lasting quasiperiodic orbit. The original breakthrough to this problem was given by Andrey Kolmogorov in 1954. This was rigorously proved and extended by Jürgen Moser in 1962 (for smooth twist maps) and Vladimir Arnold in 1963 (for analytic Hamiltonian systems), and the general result is known as the KAM theorem. - -Arnold originally thought that this theorem could apply to the motions of the Solar System or other instances of the n-body problem, but it turned out to work only for the three-body problem because of a degeneracy in his formulation of the problem for larger numbers of bodies. Later, Gabriella Pinzari showed how to eliminate this degeneracy by developing a rotation-invariant version of the theorem. - -The KAM theorem is usually stated in terms of trajectories in phase space of an integrable Hamiltonian system. - -The motion of an integrable system is confined to an invariant torus (a doughnut-shaped surface). Different initial conditions of the integrable Hamiltonian system will trace different invariant tori in phase space.
Plotting the coordinates of an integrable system would show that they are quasiperiodic. - -The KAM theorem states that if the system is subjected to a weak nonlinear perturbation, some of the invariant tori are deformed and survive, while others are destroyed. Surviving tori meet the non-resonance condition, i.e., they have "sufficiently irrational" frequencies. This implies that the motion continues to be quasiperiodic, with the independent periods changed (as a consequence of the non-degeneracy condition). The KAM theorem quantifies the level of perturbation that can be applied for this to be true. - -Those KAM tori that are destroyed by perturbation become invariant Cantor sets, named Cantori by Ian C. Percival in 1979. - -The non-resonance and non-degeneracy conditions of the KAM theorem become increasingly difficult to satisfy for systems with more degrees of freedom. As the number of dimensions of the system increases, the volume occupied by the tori decreases. - -As the perturbation increases and the smooth curves disintegrate, we move from KAM theory to Aubry–Mather theory, which requires less stringent hypotheses and works with the Cantor-like sets. - -The existence of a KAM theorem for perturbations of quantum many-body integrable systems is still an open question, although it is believed that arbitrarily small perturbations will destroy integrability in the infinite size limit. - -An important consequence of the KAM theorem is that for a large set of initial conditions the motion remains perpetually quasiperiodic. - -The methods introduced by Kolmogorov, Arnold, and Moser have developed into a large body of results related to quasiperiodic motions, now known as KAM theory. Notably, it has been extended to non-Hamiltonian systems (starting with Moser), to non-perturbative situations (as in the work of Michael Herman) and to systems with fast and slow frequencies (as in the work of Mikhail B. Sevryuk). diff --git a/wiki/wikipedia/2360.txt b/wiki/wikipedia/2360.txt deleted file mode 100644 index 556fffe77ee9d8c75ae4a4c1b8271ff7009eb7ec..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2360.txt +++ /dev/null @@ -1,121 +0,0 @@ -In information theory, Fano's inequality (also known as the Fano converse and the Fano lemma) relates the average information lost in a noisy channel to the probability of the categorization error. It was derived by Robert Fano in the early 1950s while teaching a Ph.D. seminar in information theory at MIT, and later recorded in his 1961 textbook. - -It is used to find a lower bound on the error probability of any decoder as well as lower bounds for minimax risks in density estimation. - -Let the random variables $X$ and $Y$ represent input and output messages with a joint probability $P(x,y)$. Let $e$ represent an occurrence of error; i.e., that $X\neq \tilde{X}$, with $\tilde{X}=f(Y)$ being an approximate version of $X$. Fano's inequality is -$$ -H(X|Y)\leq H_b(e)+P(e)\log(|\mathcal{X}|-1), -$$ - -where $\mathcal{X}$ denotes the support of $X$, -$$ -H\left(X|Y\right)=-\sum_{i,j} P(x_i, y_j)\log P\left(x_i|y_j\right) -$$ - -is the conditional entropy, -$$ -P(e)=P(X\neq \tilde{X}) -$$ - -is the probability of the communication error, and -$$ -H_b(e)=-P(e)\log P(e)-(1-P(e))\log(1-P(e)) -$$ - -is the corresponding binary entropy. - -Define an indicator random variable $E$ that indicates the event that our estimate $\tilde{X}=f(Y)$ is in error, - - E := - -\begin{cases} - -1 ~&\text{ if }~ \tilde{X} \neq X~, \\ - -0 ~&\text{ if }~ \tilde{X} = X~.
\end{cases} - -Consider $H(E,X|\tilde{X})$. We can use the chain rule for entropies to expand this in two different ways: - -\begin{align} - -H(E,X|\tilde{X}) &= H(X|\tilde{X}) + \underbrace{H(E|X,\tilde{X})}_{=0} \\ - -&= H(E|\tilde{X}) + H(X|E,\tilde{X}) - -\end{align} - -Equating the two, - -H(X|\tilde{X}) = H(E|\tilde{X}) + H(X|E,\tilde{X}) - -Expanding the rightmost term, $H(X|E,\tilde{X})$: - -\begin{align} - -H(X|E,\tilde{X}) &= \underbrace{H(X|E=0,\tilde{X})}_{=0}\cdot P(E=0) + H(X|E=1,\tilde{X})\cdot\underbrace{P(E=1)}_{=P(e)} \\ - -&= H(X|E=1,\tilde{X})\cdot P(e) - -\end{align} - -Since $E=0$ means $X=\tilde{X}$, being given the value of $\tilde{X}$ allows us to know the value of $X$ with certainty. This makes the term $H(X|E=0,\tilde{X}) = 0$. - -On the other hand, $E=1$ means that $\tilde{X}\neq X$, hence given the value of $\tilde{X}$, we can narrow down $X$ to one of $|\mathcal{X}|-1$ different values, allowing us to upper bound the conditional entropy $H(X|E=1,\tilde{X}) \leq \log(|\mathcal{X}|-1)$. Hence -$$ - H(X|E,\tilde{X}) \leq \log(|\mathcal{X}|-1)\cdot P(e) -$$ - -The other term satisfies $H(E|\tilde{X}) \leq H(E)$, because conditioning reduces entropy. Because of the way $E$ is defined, $H(E) = H_b(e)$, meaning that $H(E|\tilde{X}) \leq H_b(e)$. Putting it all together, -$$ - H(X|\tilde{X}) \leq H_b(e) + P(e)\log(|\mathcal{X}|-1) -$$ - -Because $ X \rightarrow Y \rightarrow \tilde{X} $ is a Markov chain, we have $I(X;\tilde{X}) \leq I(X;Y)$ by the data processing inequality, and hence $H(X|\tilde{X}) \geq H(X|Y)$, giving us -$$ - H(X|Y) \leq H_b(e) + P(e)\log(|\mathcal{X}|-1) -$$ - -Let $X$ be a random variable with density equal to one of $r+1$ possible densities $f_1,\ldots,f_{r+1}$. Furthermore, the Kullback–Leibler divergence between any pair of densities cannot be too large, -$$ - D_{KL}(f_i\|f_j)\leq \beta -$$ for all $i\not = j.$ - -Let $\psi(X)\in\{1,\ldots, r+1\}$ be an estimate of the index. Then -$$ -\sup_i P_i(\psi(X)\not = i) \geq 1-\frac{\beta+\log 2}{\log r} -$$ - -where $P_i$ is the probability induced by $f_i$. - -The following generalization is due to Ibragimov and Khasminskii (1979), Assouad and Birge (1983). - -Let F be a class of densities with a subclass of r + 1 densities $f_\theta$ such that for any $\theta \neq \theta'$ -$$ -\|f_{\theta}-f_{\theta'}\|_{L_1}\geq \alpha, -$$ -$$ -D_{KL}(f_\theta\|f_{\theta'})\leq \beta. -$$ - -Then in the worst case the expected value of the error of estimation is bounded from below, -$$ -\sup_{f\in \mathbf{F}} E \|f_n-f\|_{L_1}\geq \frac{\alpha}{2}\left(1-\frac{n\beta+\log 2}{\log r}\right) -$$ - -where $f_n$ is any density estimator based on a sample of size n. diff --git a/wiki/wikipedia/2361.txt b/wiki/wikipedia/2361.txt deleted file mode 100644 index d38fdd545c8ffafbcffb0228a97730ca2883f427..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2361.txt +++ /dev/null @@ -1,307 +0,0 @@ -The Hahn–Banach theorem is a central tool in functional analysis. - -It allows the extension of bounded linear functionals defined on a subspace of some vector space to the whole space, and it also shows that there are "enough" continuous linear functionals defined on every normed vector space to make the study of the dual space "interesting". Another version of the Hahn–Banach theorem is known as the Hahn–Banach separation theorem or the hyperplane separation theorem, and has numerous uses in convex geometry. - -The theorem is named for the mathematicians Hans Hahn and Stefan Banach, who proved it independently in the late 1920s.
- -The special case of the theorem for the space $C{\left[a, b \right]}$ of continuous functions on an interval was proved earlier (in 1912) by Eduard Helly, and a more general extension theorem, the M. Riesz extension theorem, from which the Hahn–Banach theorem can be derived, was proved in 1923 by Marcel Riesz. - -The first Hahn–Banach theorem was proved by Eduard Helly in 1921, who showed that certain linear functionals defined on a subspace of a certain type of normed space ($\mathbb{C}^{\mathbb{N}}$) had an extension of the same norm. Helly did this through the technique of first proving that a one-dimensional extension exists (where the linear functional has its domain extended by one dimension) and then using induction. In 1927, Hahn defined general Banach spaces and used Helly's technique to prove a norm-preserving version of the Hahn–Banach theorem for Banach spaces (where a bounded linear functional on a subspace has a bounded linear extension of the same norm to the whole space). In 1929, Banach, who was unaware of Hahn's result, generalized it by replacing the norm-preserving version with the dominated extension version that uses sublinear functions. Whereas Helly's proof used mathematical induction, Hahn and Banach both used transfinite induction. - -The Hahn–Banach theorem arose from attempts to solve infinite systems of linear equations. This is needed to solve problems such as the moment problem, whereby given all the potential moments of a function one must determine if a function having these moments exists, and, if so, find it in terms of those moments. Another such problem is the Fourier cosine series problem, whereby given all the potential Fourier cosine coefficients one must determine if a function having those coefficients exists, and, again, find it if so. - -Riesz and Helly solved the problem for certain classes of spaces (such as $L^p([0, 1])$ and $C([a, b])$), where they discovered that the existence of a solution was equivalent to the existence and continuity of certain linear functionals. In effect, they needed to solve the following problem: - -(The vector problem) Given a collection $\left(f_i\right)_{i \in I}$ of bounded linear functionals on a normed space X and a collection of scalars $\left(c_i\right)_{i \in I},$ determine if there is an $x \in X$ such that $f_i(x) = c_i$ for all $i \in I.$ - -To solve this, if X is reflexive then it suffices to solve the following dual problem: - -(The functional problem) Given a collection $\left(x_i\right)_{i \in I}$ of vectors in a normed space X and a collection of scalars $\left(c_i\right)_{i \in I},$ determine if there is a bounded linear functional $f$ on X such that $f\left(x_i\right) = c_i$ for all $i \in I.$ - -Riesz went on to define the space $L^p([0, 1])$ ($1 < p < \infty$) in 1910 and the $\ell^p$ spaces in 1913. While investigating these spaces he proved a special case of the Hahn–Banach theorem. Helly also proved a special case of the Hahn–Banach theorem in 1912. In 1910, Riesz solved the functional problem for some specific spaces and in 1912, Helly solved it for a more general class of spaces. It wasn't until 1932 that Banach, in one of the first important applications of the Hahn–Banach theorem, solved the general functional problem. The following theorem states the general functional problem and characterizes its solution.
{{Math theorem|name=Theorem|note=The functional problem|math_statement= - -Let X be a real or complex normed space, I a non-empty set, $\left(c_i\right)_{i \in I}$ a family of scalars, and $\left(x_i\right)_{i \in I}$ a family of vectors in X. - -There exists a continuous linear functional $f$ on X such that $f\left(x_i\right) = c_i$ for all $i \in I$ if and only if there exists a $K > 0$ such that for any choice of scalars $\left(s_i\right)_{i \in I}$ where all but finitely many $s_i$ are $0,$ we necessarily have - -\left|\sum_{i \in I} s_i c_i\right| \leq K \left\|\sum_{i \in I} s_i x_i\right\|. - -}} - -One can use the above theorem to deduce the Hahn–Banach theorem. If X is reflexive, then this theorem solves the vector problem. - -A real-valued function $f : M \to \R$ defined on a subset $M$ of $X$ is said to be dominated by a function $p : X \to \R$ if $f(m) \leq p(m)$ for every $m \in M.$ - -Hence the reason why the following version of the Hahn-Banach theorem is called the dominated extension theorem. - -{{Math theorem - -| name = Hahn–Banach dominated extension theorem - -| math_statement = - -Suppose $p : X \to \R$ is a sublinear function defined on a real vector space $X,$ which by definition means that for all $x, y \in X$ and all non-negative real $t \geq 0,$ - -p(x + y) \leq p(x) + p(y) \qquad \text{ and } \qquad p(t x) = t p(x). - -Then any linear functional defined on a vector subspace of $X$ that is dominated by $p$ has a linear extension to all of $X$ that is also dominated by $p.$ - -Explicitly, if $f : M \to \R$ is a linear functional defined on a vector subspace $M$ of $X$ that is dominated by $p : X \to \R$ (meaning that $f \leq p$ on $M$), - -then there exists a linear functional $F : X \to \R$ such that - -F(m) = f(m) \quad \text{ for all } m \in M, - --p(-x) \leq F(x) \leq p(x) \quad \text{ for all } x \in X. - -}} - -The extension $F$ is in general not uniquely determined by $f.$ - -The dominated extension theorem implies the following alternative statement of the Hahn–Banach theorem that can be applied to linear functionals on real or complex vector spaces. - -{{Math theorem - -| name = Hahn–Banach theorem - -| math_statement = Suppose $X$ is a real or complex vector space over the field $\mathbf{K},$ which is either $\R$ or $\Complex.$ - -If $f : M \to \mathbf{K}$ is a $\mathbf{K}$-linear functional on a $\mathbf{K}$-linear subspace M and $p : X \to \R$ a nonnegative sublinear function such that - -|f(m)| \leq p(m) \quad \text{ for all } m \in M, - -then there exists a $\mathbf{K}$-linear functional $F : X \to \mathbf{K}$ such that - -F(m) = f(m) \quad \text{ for all } m \in M, - -|F(x)| \leq p(x) \quad \text{ for all } x \in X. - -}} - -A complex-valued functional $f$ is said to be dominated by $p$ if $|f(m)| \leq p(m)$ for all $m \in M.$ - -So the above statements of the Hahn–Banach theorem are variations of the following succinct statement: - -Hahn–Banach: If $p : X \to \R$ is a sublinear function defined on a vector space $X,$ then every dominated linear functional defined on a vector subspace of $X$ has a dominated linear extension to all of $X.$ - -It is possible to relax slightly the subadditivity condition on p, requiring only that for all $x, y \in X$ and all scalars a and b satisfying $|a| + |b| \leq 1,$ - -p(a x + b y) \leq |a| p(x) + |b| p(y). - -It is further possible to relax the positive homogeneity and the subadditivity conditions on p, requiring only that p is convex. - -The Mizar project has completely formalized and automatically checked the proof of the Hahn–Banach theorem in the HAHNBAN file.
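As a simple concrete illustration of dominated extension, and of the non-uniqueness of the extension $F$ (an added worked example, using only the definitions above): take $X = \R^2$ with the sublinear function $p(x, y) = |x| + |y|,$ the subspace $M = \{(t, 0) : t \in \R\},$ and the functional $f(t, 0) = t,$ which satisfies $|f| \leq p$ on $M.$ For every scalar $b$ with $|b| \leq 1,$ the linear functional $F_b(x, y) = x + b y$ restricts to $f$ on $M$ and satisfies $|F_b(x, y)| \leq |x| + |b| |y| \leq p(x, y),$ so every such $F_b$ is a dominated linear extension of $f.$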
- -In the complex case, the $\Complex$-linearity assumptions demand that $M = N + N i$ for some real vector space $N.$ Moreover, for every vector $x \in N,$ $f(i x) = i f(x).$ Thus the real part of a linear functional already determines the behavior of the linear functional as a whole, and proving the real case will suffice. - -We first prove Helly's initial result for the case where $M$ has codimension 1, from which the Hahn–Banach theorem follows. - -{{Math theorem - -| name = Lemma - -| note = - -| math_statement = Let $p : X \to \R$ be a sublinear function on a real vector space $X,$ let $f : M \to \R$ be a linear functional on a proper vector subspace $M \subsetneq X$ such that $f \leq p$ on $M$ (meaning $f(m) \leq p(m)$ for all $m \in M$), and let $x \in X$ be a vector not in $M$ (so $M \oplus \R x = \operatorname{span} \{M, x\}$). - -There exists a linear extension $F : M \oplus \R x \to \R$ of $f$ such that $F \leq p$ on $M \oplus \R x.$ - -}} - -{{Math proof|drop=hidden|proof= - -Given any real number $b,$ the map $F_b : M \oplus \R x \to \R$ defined by $F_b(m + r x) = f(m) + r b$ is always a linear extension of $f$ to $M \oplus \R x$ but it might not satisfy $F_b \leq p.$ - -It will be shown that $b$ can always be chosen so as to guarantee that $F_b \leq p,$ which will complete the proof. - -If $m, n \in M$ then - -f(m) - f(n) = f(m - n) \leq p(m - n) = p(m + x - x - n) \leq p(m + x) + p(- x - n) - -which implies - --p(-x - n) - f(n) ~\leq~ p(m + x) - f(m). - -So define - -a = \sup_{n \in M}[-p(-x - n) - f(n)] \qquad \text{ and } \qquad c = \inf_{m \in M} [p(m + x) - f(m)] - -where because $a \leq c$ it is possible to pick $b$ such that $a \leq b \leq c.$ - -This value of $b$ satisfies "the decisive inequality" - --p(-x - n) - f(n) ~\leq~ b ~\leq~ p(m + x) - f(m) \qquad \text{ for all } m, n \in M - -from which the desired inequality $F_b(m + r x) \leq p(m + r x)$ follows. - -For if $r > 0$ (respectively, if $r < 0$) then substituting $\tfrac{1}{r} m$ in for $m$ (respectively, for $n$) in this inequality leads to the conclusion $r b \leq p(m + r x) - f(m).$ -$$ -\blacksquare -$$ - -}} - -The lemma above is the key step in deducing the dominated extension theorem from Zorn's lemma. - -In the above form, the functional to be extended must already be bounded by a sublinear function. In some applications, this might come close to begging the question. However, in locally convex spaces, any continuous functional is already bounded by the norm, which is sublinear. One thus has: - -{{Math theorem - -| name = Continuous extensions on locally convex spaces - -| math_statement = Let X be a locally convex topological vector space over $\mathbf{K}$ (either $\R$ or $\Complex$), M a vector subspace of X, and $f$ a continuous linear functional on M. Then $f$ has a continuous linear extension to all of X. If the topology on X arises from a norm, then the norm of $f$ is preserved by this extension. - -}} - -In category-theoretic terms, the field $\mathbf{K}$ is an injective object in the category of locally convex vector spaces. - -The above proof uses Zorn's lemma, which in the axiomatic framework of Zermelo–Fraenkel set theory (ZF) is equivalent to the axiom of choice (AC). It is now known (see below) that the ultrafilter lemma (or equivalently, the Boolean prime ideal theorem), which is strictly weaker than the axiom of choice, is sufficient to prove the Hahn–Banach theorem for real vector spaces (HB).
- -The ultrafilter lemma is equivalent (under ZF) to the Banach–Alaoglu theorem, which is another foundational theorem in functional analysis. Although the Banach–Alaoglu theorem implies HB, it is not equivalent to it (said differently, the Banach–Alaoglu theorem is strictly stronger than HB). - -However, HB is equivalent to a certain weakened version of the Banach–Alaoglu theorem for normed spaces. - -The Hahn–Banach theorem is also equivalent to the following statement: - -(∗): On every Boolean algebra B there exists a "probability charge", that is: a nonconstant finitely additive map from B into $[0, 1].$ - -(The Boolean prime ideal theorem is equivalent to the statement that there are always nonconstant probability charges which take only the values 0 and 1.) - -In ZF, the Hahn–Banach theorem suffices to derive the existence of a non-Lebesgue measurable set. Moreover, the Hahn–Banach theorem implies the Banach–Tarski paradox. - -For separable Banach spaces, D. K. Brown and S. G. Simpson proved that the Hahn–Banach theorem follows from WKL0, a weak subsystem of second-order arithmetic that takes a form of Kőnig's lemma restricted to binary trees as an axiom. In fact, they prove that under a weak set of assumptions, the two are equivalent, an example of reverse mathematics. - -The key element of the Hahn–Banach theorem is fundamentally a result about the separation of two convex sets: $\{-p(- x - n) - f(n) : n \in M\},$ and $\{p(m + x) - f(m) : m \in M\}.$ This sort of argument appears widely in convex geometry, optimization theory, and economics. Lemmas to this end derived from the original Hahn–Banach theorem are known as the Hahn–Banach separation theorems. - -Let X be a real locally convex topological vector space and let A and B be non-empty convex subsets. - -If $\operatorname{Int} A \neq \varnothing$ and $B \cap \operatorname{Int} A = \varnothing$ then there exists a continuous linear functional $f$ on X such that $\sup f(A) \leq \inf f(B)$ and $f(a) < \inf f(B)$ for all $a \in \operatorname{Int} A$ (such an $f$ is necessarily non-zero). - -Often one assumes that the convex sets have additional structure; i.e. that they are open or compact. In that case, the conclusion can be substantially strengthened: - -{{Math theorem - -| name = Theorem - -| math_statement = Let X be a real topological vector space and let $A, B$ be convex, non-empty, disjoint subsets of X. - -* If A is open then A and B are separated by a (closed) hyperplane. Explicitly, this means that there exists a continuous linear map $f : X \to \mathbf{K}$ and $s \in \R$ such that $f(a) < s \leq f(b)$ for all $a \in A, b \in B.$ If both A and B are open then the right-hand side may be taken strict as well. - -* If X is locally convex, A is compact, and B closed, then A and B are strictly separated: there exists a continuous linear map $f : X \to \mathbf{K}$ and $s, t \in \R$ such that $f(a) < t < s < f(b)$ for all $a \in A, b \in B.$ - -}} - -If X is complex, then the same claims hold, but for the real part of $f.$ - -One important corollary is known as the Geometric Hahn–Banach theorem or Mazur's theorem. - -To see that Mazur's theorem follows from the Hahn-Banach separation theorems, note that M is convex and apply the first bullet. Mazur's theorem clarifies that vector subspaces (even ones that are not closed) can be characterized by linear functionals.
{{Math theorem - -| name = Corollary - -| note = Separation of a subspace and an open convex set - -| math_statement = Let X be a locally convex vector space, M a vector subspace, and U a non-empty open convex subset disjoint from M. Then there exists a continuous linear functional $f$ on X such that $f(m) = 0$ for all $m \in M$ and $\operatorname{Re} f > 0$ on U - -}} - -Since points are trivially convex, geometric Hahn-Banach implies that functionals can detect the boundary of a set. In particular, let X be a real topological vector space and $A \subseteq X$ be convex with $\operatorname{Int} A \neq \varnothing.$ If $a_0 \in A \setminus \operatorname{Int} A$ then there is a functional that is vanishing at $a_0,$ but supported on the interior of A. - -Call a normed space X smooth if at each point x in its unit ball there exists a unique closed hyperplane supporting the unit ball at x. Köthe showed in 1983 that a normed space is smooth at a point x if and only if the norm is Gateaux differentiable at that point. - -Let U be a convex balanced neighborhood of the origin in a locally convex topological vector space X and suppose $x \in X$ is not an element of U. Then there exists a continuous linear functional $f$ on X such that -$$ -\sup |f(U)| \leq |f(x)|. -$$ - -The Hahn–Banach theorem is the first sign of an important philosophy in functional analysis: to understand a space, one should understand its continuous functionals. - -For example, linear subspaces are characterized by functionals: if X is a normed vector space with linear subspace M (not necessarily closed) and if $z$ is an element of X not in the closure of M, then there exists a continuous linear map $f : X \to \mathbf{K}$ with $f(x) = 0$ for all x in M, $f(z) = 1,$ and $\|f\| = \operatorname{dist}(z, M)^{-1}.$ (To see this, note that $\operatorname{dist}(\cdot, M)$ is a sublinear function.) Moreover, if $z$ is an element of X, then there exists a continuous linear map $f : X \to \mathbf{K}$ such that $f(z) = \|z\|$ and $\|f\| \leq 1.$ This implies that the natural injection $J$ from a normed space X into its double dual $X^{**}$ is isometric. - -That last result also suggests that the Hahn–Banach theorem can often be used to locate a "nicer" topology in which to work. For example, many results in functional analysis assume that a space is Hausdorff or locally convex. However, suppose X is a topological vector space, not necessarily Hausdorff or locally convex, but with a nonempty, proper, convex, open set M. Then geometric Hahn-Banach implies that there is a hyperplane separating M from any other point. In particular, there must exist a nonzero functional on X — that is, the continuous dual space $X^*$ is non-trivial. Considering X with the weak topology induced by $X^*,$ X becomes locally convex; by the second bullet of geometric Hahn-Banach, the weak topology on this new space separates points. Thus X with this weak topology becomes Hausdorff. This sometimes allows some results from locally convex topological vector spaces to be applied to non-Hausdorff and non-locally convex spaces. - -The Hahn–Banach theorem is often useful when one wishes to apply the method of a priori estimates. Suppose that we wish to solve the linear differential equation $P u = f$ for u, with $f$ given in some Banach space X.
If we have control on the size of $u$ in terms of $\|f\|_X$ and we can think of u as a bounded linear functional on some suitable space of test functions $g,$ then we can view $f$ as a linear functional by adjunction: $(f, g) = (u, P^*g).$ At first, this functional is only defined on the image of $P,$ but using the Hahn–Banach theorem, we can try to extend it to the entire codomain X. The resulting functional is often defined to be a weak solution to the equation. - -To illustrate an actual application of the Hahn–Banach theorem, we will now prove a result that follows almost entirely from the Hahn–Banach theorem. - -Suppose X is a Hausdorff locally convex TVS over the field $\mathbf{K}$ and Y is a vector subspace of X that is TVS-isomorphic to $\mathbf{K}^I$ for some set I. - -Then Y is a closed and complemented vector subspace of X. - -{{Math proof|drop=hidden|proof= - -Since $\mathbf{K}^I$ is a complete TVS, so is Y, and since any complete subset of a Hausdorff TVS is closed, Y is a closed subset of X. - -Let $f = \left(f_i\right)_{i \in I} : Y \to \mathbf{K}^I$ be a TVS isomorphism, so that each $f_i : Y \to \mathbf{K}$ is a continuous surjective linear functional. - -By the Hahn–Banach theorem, we may extend each $f_i$ to a continuous linear functional $F_i : X \to \mathbf{K}$ on X. - -Let $F := \left(F_i\right)_{i \in I} : X \to \mathbf{K}^I$ so $F$ is a continuous linear surjection such that its restriction to Y is $F\big\vert_Y = \left(F_i\big\vert_Y\right)_{i \in I} = \left(f_i\right)_{i \in I} = f.$ - -It follows that if we define $P := f^{-1} \circ F : X \to Y$ then the restriction to Y of this continuous linear map $P\big\vert_Y : Y \to Y$ is the identity map $\mathbb{1}_Y$ on Y, for $P\big\vert_Y = f^{-1} \circ F\big\vert_Y = f^{-1} \circ f = \mathbb{1}_Y.$ - -So in particular, $P$ is a continuous linear projection onto Y (that is, $P \circ P = P$). - -Thus Y is complemented in X and $X = Y \oplus \ker P$ in the category of TVSs. $\blacksquare$ - -}} - -One may use the above result to show that every closed vector subspace of $\R^{\N}$ is complemented and either finite dimensional or else TVS-isomorphic to $\R^{\N}.$ - -General template - -There are now many other versions of the Hahn–Banach theorem. The general template for the various versions of the Hahn–Banach theorem presented in this article is as follows: - -X is a vector space, p is a sublinear function on X (possibly a seminorm), M is a vector subspace of X (possibly closed), and $f$ is a linear functional on M satisfying $|f| \leq p$ on M (and possibly some other conditions). One then concludes that there exists a linear extension F of $f$ to X such that $|F| \leq p$ on X (possibly with additional properties). - -For instance, the template covers extensions of seminorms: if p is a seminorm on a vector subspace M of X and q is a seminorm on X with $p \leq q$ on M, then there exists a seminorm P on X such that $P = p$ on M and $P \leq q$ on X. A proof runs as follows: - -let S be the convex hull of $\{m \in M : p(m) \leq 1\} \cup \{x \in X : q(x) \leq 1\}.$ Note that S is an absorbing disk in X, and call its Minkowski functional P. Then $p = P$ on M and $P \leq q$ on X. - -{{Math theorem - -| name = Hahn–Banach sandwich theorem - -| math_statement = Let S be any subset of a real vector space X, let p be a sublinear function on X, and let $f : S \to \R$ be any map. - -If there exist positive numbers a and b such that for all $x, y \in S,$ - -0 \geq \inf_{s \in S} [p(s - a x - b y) - f(s) - a f(x) - b f(y)] - -then there exists a linear functional F on X such that $F \leq p$ on X and $f \leq F$ on S. - -}} - -The following theorem of Mazur–Orlicz (1953) is equivalent to the Hahn–Banach theorem.
{{Math theorem - -| name = Mazur–Orlicz theorem - -| math_statement = Let T be any set, $r : T \to \R$ be any real-valued map, X be a real or complex vector space, $v : T \to X$ be any map, and p be a sublinear function on X. Then the following are equivalent: - -# there exists a real-valued linear functional F on X such that $F \leq p$ on X and $r \leq F \circ v$ on T; - -# for any positive integer n, any sequence $s_1, \ldots, s_n$ of non-negative real numbers, and any sequence $t_1, \ldots, t_n$ of elements of T, \sum_{i=1}^n s_i r(t_i) \leq p\left(\sum_{i=1}^n s_i v(t_i)\right). - -}} - -The following theorem characterizes when any scalar function on X (not necessarily linear) has a continuous linear extension to all of X. - -{{Math theorem - -| name = Theorem - -| note = The extension principle - -| math_statement = Let $f$ be a scalar-valued function on a subset S of a topological vector space X. - -Then there exists a continuous linear functional F on X extending $f$ if and only if there exists a continuous seminorm p on X such that - -\left|\sum_{i=1}^n a_i f(s_i)\right| \leq p\left(\sum_{i=1}^n a_is_i\right) - -for all positive integers n and all finite sequences $a_1, \ldots, a_n$ of scalars and elements $s_1, \ldots, s_n$ of S. - -}} - -Let X be a topological vector space. A vector subspace M of X has the extension property if any continuous linear functional on M can be extended to a continuous linear functional on X, and we say that X has the Hahn–Banach extension property (HBEP) if every vector subspace of X has the extension property. - -The Hahn–Banach theorem guarantees that every Hausdorff locally convex space has the HBEP. For complete metrizable topological vector spaces there is a converse, due to Kalton: every complete metrizable TVS with the Hahn–Banach extension property is locally convex. On the other hand, if X is a vector space of uncountable dimension endowed with the finest vector topology, then X is a topological vector space with the Hahn-Banach extension property that is neither locally convex nor metrizable. - -A vector subspace M of a TVS X has the separation property if for every element $x \in X$ such that $x \not\in M,$ there exists a continuous linear functional $f$ on X such that $f(x) \neq 0$ and $f(m) = 0$ for all $m \in M.$ Clearly, the continuous dual space of a TVS X separates points on X if and only if $\{0\}$ has the separation property. In 1992, Kakol proved that for any infinite dimensional vector space X, there exist TVS-topologies on X that do not have the HBEP despite having enough continuous linear functionals for the continuous dual space to separate points on X. However, if X is a TVS then every vector subspace of X has the extension property if and only if every vector subspace of X has the separation property. diff --git a/wiki/wikipedia/2362.txt b/wiki/wikipedia/2362.txt deleted file mode 100644 index 4e6de34eb698e7cf804af9fdeb1d98b7d515e1df..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2362.txt +++ /dev/null @@ -1,45 +0,0 @@ -In functional analysis, a branch of mathematics, the Goldstine theorem, named after Herman Goldstine, is stated as follows: - -Goldstine theorem. Let $X$ be a Banach space. Then the image of the closed unit ball $B \subseteq X$ under the canonical embedding into the closed unit ball $B^{\prime\prime}$ of the bidual space $X^{\prime\prime}$ is a weak*-dense subset.
The conclusion of the theorem is not true for the norm topology, which can be seen by considering the Banach space $c_0$ of real sequences that converge to zero and its bidual space $\ell^{\infty}.$

For all $x^{\prime\prime} \in B^{\prime\prime},$ $\varphi_1, \ldots, \varphi_n \in X^{\prime}$ and $\delta > 0,$ there exists an $x \in (1+\delta)B$ such that $\varphi_i(x) = x^{\prime\prime}(\varphi_i)$ for all $1 \leq i \leq n.$

By the surjectivity of

$$\begin{cases} \Phi : X \to \Complex^{n}, \\ x \mapsto \left(\varphi_1(x), \cdots, \varphi_n(x) \right) \end{cases}$$

it is possible to find $x \in X$ with $\varphi_i(x) = x^{\prime\prime}(\varphi_i)$ for $1 \leq i \leq n.$

Now let

$$Y := \bigcap_i \ker \varphi_i = \ker \Phi.$$

Every element $z \in (x + Y) \cap (1 + \delta)B$ satisfies $z \in (1+\delta)B$ and $\varphi_i(z) = \varphi_i(x)= x^{\prime\prime}(\varphi_i),$ so it suffices to show that the intersection is nonempty.

Assume for contradiction that it is empty. Then $\operatorname{dist}(x, Y) \geq 1 + \delta$ and by the Hahn–Banach theorem there exists a linear form $\varphi \in X^{\prime}$ such that $\varphi\big\vert_Y = 0, \varphi(x) \geq 1 + \delta$ and $\|\varphi\|_{X^{\prime}} = 1.$ Then $\varphi \in \operatorname{span} \left\{ \varphi_1, \ldots, \varphi_n \right\}$ and therefore

$$1+\delta \leq \varphi(x) = x^{\prime\prime}(\varphi) \leq \|\varphi\|_{X^{\prime}} \left\|x^{\prime\prime}\right\|_{X^{\prime\prime}} \leq 1,$$

which is a contradiction.

Fix $x^{\prime\prime} \in B^{\prime\prime},$ $\varphi_1, \ldots, \varphi_n \in X^{\prime}$ and $\epsilon > 0.$ Examine the set

$$U := \left\{ y^{\prime\prime} \in X^{\prime\prime} : |(x^{\prime\prime} - y^{\prime\prime})(\varphi_i)| < \epsilon, 1 \leq i \leq n \right\}.$$

Let $J : X \rightarrow X^{\prime\prime}$ be the embedding defined by $J(x) = \text{Ev}_x,$ where $\text{Ev}_x(\varphi) = \varphi(x)$ is the evaluation at $x$ map. Sets of the form $U$ form a base for the weak* topology, so density follows once it is shown $J(B) \cap U \neq \varnothing$ for all such $U.$ The lemma above says that for any $\delta > 0$ there exists an $x \in (1+\delta)B$ such that $x^{\prime\prime}(\varphi_i)=\varphi_i(x),$ $1\leq i\leq n,$ and in particular $\text{Ev}_x \in U.$ Since $J(B) \subset B^{\prime\prime},$ we have $\text{Ev}_x \in (1+\delta)J(B) \cap U.$ We can scale to get $\frac{1}{1+\delta} \text{Ev}_x \in J(B).$ The goal is to show that for a sufficiently small $\delta > 0,$ we have $\frac{1}{1+\delta} \text{Ev}_x \in J(B) \cap U.$

Directly checking, we have

$$\left|\left[x^{\prime\prime} - \frac{1}{1+\delta} \text{Ev}_x\right](\varphi_i)\right| = \left|\varphi_i(x) - \frac{1}{1+\delta}\varphi_i(x)\right| = \frac{\delta}{1+\delta} |\varphi_i(x)|.$$

Note that we can choose $M$ sufficiently large so that $\|\varphi_i\|_{X^{\prime}} \leq M$ for $1 \leq i \leq n.$ Note as well that $\|x\|_{X} \leq (1+\delta).$ If we choose $\delta$ so that $\delta M < \epsilon,$ then we have that

$$\frac{\delta}{1+\delta} \left|\varphi_i(x)\right| \leq \frac{\delta}{1+\delta} \|\varphi_i\|_{X^{\prime}} \|x\|_{X} \leq \delta \|\varphi_i\|_{X^{\prime}} \leq \delta M < \epsilon.$$

Hence we get $\frac{1}{1+\delta} \text{Ev}_x \in J(B) \cap U$ as desired.
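To make the norm-topology remark above concrete (this worked example is an addition, not part of the original article): consider the constant sequence $\mathbb{1} = (1, 1, 1, \ldots),$ which lies in the closed unit ball of $\ell^{\infty} = c_0^{\prime\prime}.$ For every $x = (x_n) \in c_0$ we have $x_n \to 0,$ hence

$$\|x - \mathbb{1}\|_{\infty} = \sup_n |x_n - 1| \geq \limsup_{n \to \infty} |x_n - 1| = 1,$$

so all of $J(c_0),$ and in particular $J(B),$ stays at norm-distance at least 1 from $\mathbb{1},$ and $J(B)$ cannot be norm-dense in $B^{\prime\prime}.$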
diff --git a/wiki/wikipedia/2363.txt b/wiki/wikipedia/2363.txt deleted file mode 100644 index 33ee2192b21781908f0f85cd983e29479b54250f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2363.txt +++ /dev/null @@ -1,26 +0,0 @@ In complex analysis, a field within mathematics, Bloch's theorem gives a lower bound on the size of a disc in which an inverse to a holomorphic function exists. It is named after André Bloch.

Let f be a holomorphic function in the unit disk |z| ≤ 1. Suppose that |f′(0)| = 1. Then there exists a disc of radius b and an analytic function φ in this disc, such that f(φ(z)) = z for all z in this disc. Here b > 1/72 is an absolute constant.

If f is a holomorphic function in the unit disc with the property |f′(0)| = 1, then the image of f contains a disc of radius l, where l ≥ b is an absolute constant.

This theorem is named after Edmund Landau.

Bloch's theorem was inspired by the following theorem of Georges Valiron:

Theorem. If f is a non-constant entire function then there exist discs D of arbitrarily large radius and analytic functions φ in D such that f(φ(z)) = z for z in D.

Bloch's theorem corresponds to Valiron's theorem via the so-called Bloch's Principle.

The lower bound 1/72 in Bloch's theorem is not the best possible. The number B, defined as the supremum of all b for which this theorem holds, is called Bloch's constant. Bloch's theorem tells us B ≥ 1/72, but the exact value of B is still unknown.

The similarly defined optimal constant L in Landau's theorem is called Landau's constant. Its exact value is also unknown.

The best known bounds for B at present are

$$0.4332\approx\frac{\sqrt{3}}{4}+2\times10^{-4}\leq B\leq \sqrt{\frac{\sqrt{3}-1}{2}} \cdot \frac{\Gamma(\frac{1}{3})\Gamma(\frac{11}{12})}{\Gamma(\frac{1}{4})}\approx 0.4719,$$

where Γ is the Gamma function. The lower bound was proved by Chen and Gauthier, and the upper bound dates back to Ahlfors and Grunsky. They also gave an upper bound for the Landau constant.

In their paper, Ahlfors and Grunsky conjectured that their upper bounds are actually the true values of B and L. diff --git a/wiki/wikipedia/2364.txt b/wiki/wikipedia/2364.txt deleted file mode 100644 index 6e59d94c623cfd5a679cf06870a81436016e26bf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2364.txt +++ /dev/null @@ -1,118 +0,0 @@ In mathematics, the Fortuin–Kasteleyn–Ginibre (FKG) inequality is a correlation inequality, a fundamental tool in statistical mechanics and probabilistic combinatorics (especially random graphs and the probabilistic method), due to Fortuin, Kasteleyn and Ginibre (1971). Informally, it says that in many random systems, increasing events are positively correlated, while an increasing and a decreasing event are negatively correlated. It was obtained by studying the random cluster model.

An earlier version, for the special case of i.i.d. variables, called the Harris inequality, is due to Harris (1960); see below. One generalization of the FKG inequality is the Holley inequality (1974) below, and an even further generalization is the Ahlswede–Daykin "four functions" theorem (1978). Furthermore, it has the same conclusion as the Griffiths inequalities, but the hypotheses are different.

Let $X$ be a finite distributive lattice, and μ a nonnegative function on it, that is assumed to satisfy the (FKG) lattice condition (sometimes a function satisfying this condition is called log supermodular), i.e.,

$$\mu(x\wedge y)\mu(x\vee y) \ge \mu(x)\mu(y)$$

for all x, y in the lattice $X$.
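On a finite product lattice the lattice condition can be checked mechanically. The following small Python sketch (an illustration added here, with function names of our choosing, not code from any cited source) verifies the condition on $\{0,1\}^n$ with componentwise meet and join, and confirms it for a product measure, for which it holds with equality:

```
from itertools import product

def meet(x, y):
    return tuple(min(a, b) for a, b in zip(x, y))

def join(x, y):
    return tuple(max(a, b) for a, b in zip(x, y))

def satisfies_lattice_condition(mu, n, tol=1e-12):
    """Check mu(x ^ y) * mu(x v y) >= mu(x) * mu(y) for all x, y in {0,1}^n."""
    points = list(product((0, 1), repeat=n))
    return all(mu[meet(x, y)] * mu[join(x, y)] >= mu[x] * mu[y] - tol
               for x in points for y in points)

# A product (i.i.d. Bernoulli) measure satisfies the condition:
p = 0.3
mu = {x: p ** sum(x) * (1 - p) ** (len(x) - sum(x))
      for x in product((0, 1), repeat=3)}
print(satisfies_lattice_condition(mu, 3))  # True
```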
The FKG inequality then says that for any two monotonically increasing functions ƒ and g on $X$, the following positive correlation inequality holds:

$$ \left(\sum _{x\in X}f(x)g(x)\mu(x)\right)\left(\sum _{x\in X}\mu(x)\right) \ge \left(\sum _{x\in X}f(x)\mu(x)\right)\left(\sum _{x\in X}g(x)\mu(x)\right).$$

The same inequality (positive correlation) is true when both ƒ and g are decreasing. If one is increasing and the other is decreasing, then they are negatively correlated and the above inequality is reversed.

Similar statements hold more generally, when $X$ is not necessarily finite, not even countable. In that case, μ has to be a finite measure, and the lattice condition has to be defined using cylinder events; see, e.g., Section 2.2 of Grimmett.

For proofs, see Fortuin, Kasteleyn and Ginibre (1971) or the Ahlswede–Daykin inequality (1978). Also, a rough sketch is given below, due to Holley, using a Markov chain coupling argument.

The lattice condition for μ is also called multivariate total positivity, and sometimes the strong FKG condition; the term (multiplicative) FKG condition is also used in older literature.

The property of μ that increasing functions are positively correlated is also called having positive associations, or the weak FKG condition.

Thus, the FKG theorem can be rephrased as "the strong FKG condition implies the weak FKG condition".

If the lattice $X$ is totally ordered, then the lattice condition is satisfied trivially for any measure μ. In case the measure μ is uniform, the FKG inequality is Chebyshev's sum inequality: if the two increasing functions take on values $a_1\leq a_2 \leq \cdots \leq a_n$ and $b_1\leq b_2 \leq \cdots \leq b_n$, then

$$\frac{a_1b_1+\cdots+a_nb_n}{n} \geq \frac{a_1+\cdots+a_n}{n} \frac{b_1+\cdots+b_n}{n}.$$

More generally, for any probability measure μ on $\R$ and increasing functions ƒ and g,

$$ \int_\R f(x)g(x) d\mu(x) \geq \int_\R f(x)d\mu(x) \int_\R g(x)d\mu(x),$$

which follows immediately from

$$\int_\R\int_\R [f(x)-f(y)][g(x)-g(y)]d\mu(x)d\mu(y) \geq 0.$$

The lattice condition is trivially satisfied also when the lattice is the product of totally ordered lattices, $X=X_1\times\cdots\times X_n$, and $\mu=\mu_1\otimes\cdots\otimes\mu_n$ is a product measure. Often all the factors (both the lattices and the measures) are identical, i.e., μ is the probability distribution of i.i.d. random variables.

The FKG inequality for the case of a product measure is known also as the Harris inequality after Harris (1960), who found and used it in his study of percolation in the plane. A proof of the Harris inequality that uses the above double integral trick on $\R$ can be found, e.g., in Section 2.2 of Grimmett.

A typical example is the following. Color each hexagon of the infinite honeycomb lattice black with probability $p$ and white with probability $1-p$, independently of each other. Let a, b, c, d be four hexagons, not necessarily distinct. Let $a \leftrightarrow b$ and $c\leftrightarrow d$ be the events that there is a black path from a to b, and a black path from c to d, respectively. Then the Harris inequality says that these events are positively correlated: $\Pr(a \leftrightarrow b,\ c\leftrightarrow d) \geq \Pr(a \leftrightarrow b)\Pr(c\leftrightarrow d)$. In other words, assuming the presence of one path can only increase the probability of the other.
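The positive correlation promised by the Harris inequality is easy to observe empirically. Here is a small Monte Carlo sketch in Python (our illustration, not from the original article; i.i.d. Bernoulli bits stand in for the hexagon colors, and the two events are increasing and share some coordinates):

```
import random

def harris_demo(trials=100_000, p=0.5, seed=0):
    rng = random.Random(seed)
    count_a = count_b = count_ab = 0
    for _ in range(trials):
        bits = [1 if rng.random() < p else 0 for _ in range(20)]
        a = sum(bits[:12]) >= 7   # increasing event in the bits
        b = sum(bits[8:]) >= 7    # increasing event, overlaps a on bits 8..11
        count_a += a
        count_b += b
        count_ab += a and b
    pr_a, pr_b, pr_ab = count_a / trials, count_b / trials, count_ab / trials
    print(f"P(A and B) = {pr_ab:.4f} >= P(A) P(B) = {pr_a * pr_b:.4f}")

harris_demo()
```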
Similarly, if we randomly color the hexagons inside an $n\times n$ rhombus-shaped hex board, then the event that there is a black crossing from the left side of the board to the right side is positively correlated with the event that there is a black crossing from the top side to the bottom. On the other hand, having a left-to-right black crossing is negatively correlated with having a top-to-bottom white crossing, since the first is an increasing event (in the amount of blackness), while the second is decreasing. In fact, in any coloring of the hex board exactly one of these two events happens – this is why hex is a well-defined game.

In the Erdős–Rényi random graph, the existence of a Hamiltonian cycle is negatively correlated with the 3-colorability of the graph, since the first is an increasing event, while the latter is decreasing.

In statistical mechanics, the usual source of measures that satisfy the lattice condition (and hence the FKG inequality) is the following:

If $S$ is an ordered set (such as $\{-1,+1\}$), and $\Gamma$ is a finite or infinite graph, then the set $S^\Gamma$ of $S$-valued configurations is a poset that is a distributive lattice.

Now, if $\Phi$ is a submodular potential (i.e., a family of functions

$$\Phi_\Lambda: S^\Lambda \longrightarrow \R\cup\{\infty\},$$

one for each finite $\Lambda \subset \Gamma$, such that each $\Phi_\Lambda$ is submodular), then one defines the corresponding Hamiltonians as

$$H_\Lambda(\varphi):=\sum_{\Delta\cap\Lambda\not=\emptyset} \Phi_\Delta(\varphi).$$

If μ is an extremal Gibbs measure for this Hamiltonian on the set of configurations $\varphi$, then it is easy to show that μ satisfies the lattice condition, see Sheffield.

A key example is the Ising model on a graph $\Gamma$. Let $S=\{-1,+1\}$, called spins, and $\beta\in [0,\infty]$. Take the following potential:

$$\Phi_\Lambda(\varphi)=\begin{cases} \beta 1_{\{\varphi(x)\not=\varphi(y)\}} & \text{if }\Lambda=\{x,y\}\text{ is a pair of adjacent vertices of }\Gamma;\\ 0 & \text{otherwise.}\end{cases}$$

Submodularity is easy to check; intuitively, taking the min or the max of two configurations tends to decrease the number of disagreeing spins. Then, depending on the graph $\Gamma$ and the value of $\beta$, there could be one or more extremal Gibbs measures, see, e.g., Georgii and Lyons.

The Holley inequality, due to Holley (1974), states that the expectations

$$ \langle f\rangle_i = \frac{\sum _{x\in X}f(x)\mu_i(x)}{\sum_{x\in X}\mu_i(x)}$$

of a monotonically increasing function ƒ on a finite distributive lattice $X$ with respect to two positive functions μ1, μ2 on the lattice satisfy the condition

$$ \langle f\rangle_1 \ge \langle f\rangle_2,$$

provided the functions satisfy the Holley condition (criterion)

$$\mu_2(x\wedge y)\mu_1(x\vee y) \ge \mu_1(x)\mu_2(y)$$

for all x, y in the lattice.

To recover the FKG inequality: If μ satisfies the lattice condition and ƒ and g are increasing functions on $X$, then μ1(x) = g(x)μ(x) and μ2(x) = μ(x) will satisfy the lattice-type condition of the Holley inequality. Then the Holley inequality states that

$$ \frac{ \langle fg\rangle_\mu }{\langle g\rangle_\mu} = \langle f\rangle_1 \ge \langle f\rangle_2 =\langle f\rangle_\mu,$$

which is just the FKG inequality.

As for FKG, the Holley inequality follows from the Ahlswede–Daykin inequality.

Consider the usual case of $X$ being a product $\R^V$ for some finite set $V$.
The lattice condition on μ is easily seen to imply the following monotonicity, which has the virtue that it is often easier to check than the lattice condition:

Whenever one fixes a vertex $v \in V$ and two configurations φ and ψ outside v such that $\varphi(w) \geq \psi(w)$ for all $w\not=v$, the μ-conditional distribution of φ(v) given $\{\varphi(w) : w\not=v\}$ stochastically dominates the μ-conditional distribution of ψ(v) given $\{\psi(w) : w\not=v\}$.

Now, if μ satisfies this monotonicity property, that is already enough for the FKG inequality (positive associations) to hold.

Here is a rough sketch of the proof, due to Holley: starting from any initial configuration on $V$, one can run a simple Markov chain (the Metropolis algorithm) that uses independent Uniform[0,1] random variables to update the configuration in each step, such that the chain has a unique stationary measure, the given μ. The monotonicity of μ implies that the configuration at each step is a monotone function of independent variables, hence the product measure version of Harris implies that it has positive associations. Therefore, the limiting stationary measure μ also has this property.

The monotonicity property has a natural version for two measures, saying that μ1 conditionally pointwise dominates μ2. It is again easy to see that if μ1 and μ2 satisfy the lattice-type condition of the Holley inequality, then μ1 conditionally pointwise dominates μ2. On the other hand, a Markov chain coupling argument similar to the above, but now without invoking the Harris inequality, shows that conditional pointwise domination, in fact, implies stochastic domination. Stochastic domination is equivalent to saying that $ \langle f\rangle_1 \ge \langle f\rangle_2$ for all increasing ƒ, thus we get a proof of the Holley inequality. (And thus also a proof of the FKG inequality, without using the Harris inequality.)

See Holley and Georgii for details. diff --git a/wiki/wikipedia/2365.txt b/wiki/wikipedia/2365.txt deleted file mode 100644 index 3f96a1158a910bd0efe06314e58ab4f3ca763d32..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2365.txt +++ /dev/null @@ -1,71 +0,0 @@ A* (pronounced "A-star") is a graph traversal and path search algorithm, which is often used in many fields of computer science due to its completeness, optimality, and optimal efficiency. One major practical drawback is its $O(b^d)$ space complexity, as it stores all generated nodes in memory. Thus, in practical travel-routing systems, it is generally outperformed by algorithms which can pre-process the graph to attain better performance, as well as memory-bounded approaches; however, A* is still the best solution in many cases.

Peter Hart, Nils Nilsson and Bertram Raphael of Stanford Research Institute (now SRI International) first published the algorithm in 1968. It can be seen as an extension of Dijkstra's algorithm. A* achieves better performance by using heuristics to guide its search.

A* was created as part of the Shakey project, which had the aim of building a mobile robot that could plan its own actions. Nils Nilsson originally proposed using the Graph Traverser algorithm for Shakey's path planning. Graph Traverser is guided by a heuristic function h(n), the estimated distance from node n to the goal node: it entirely ignores g(n), the distance from the start node to n. Bertram Raphael suggested using the sum, g(n) + h(n).
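The evaluation function g(n) + h(n) suggested by Raphael is essentially all one needs to implement the algorithm. The following is a minimal Python sketch (an illustration added here, not the historical implementation; the helper names are ours, it assumes a `neighbors` callback yielding successors with edge costs and an admissible heuristic `h`, and it omits the refinements discussed below):

```
import heapq

def a_star(start, goal, neighbors, h):
    """Minimal A*: expand nodes in increasing order of f = g + h."""
    g_score = {start: 0.0}
    came_from = {}
    open_heap = [(h(start), 0.0, start)]      # entries are (g + h, g, node)
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:                      # reconstruct the path
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1], g
        if g > g_score.get(node, float("inf")):
            continue                          # stale heap entry, skip it
        for nxt, cost in neighbors(node):
            tentative = g + cost
            if tentative < g_score.get(nxt, float("inf")):
                g_score[nxt] = tentative
                came_from[nxt] = node
                heapq.heappush(open_heap, (tentative + h(nxt), tentative, nxt))
    return None, float("inf")                 # goal unreachable

# Usage on a 10x10 grid with unit steps and the Manhattan-distance heuristic:
def grid_neighbors(p):
    x, y = p
    for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= q[0] < 10 and 0 <= q[1] < 10:
            yield q, 1

path, cost = a_star((0, 0), (9, 9), grid_neighbors,
                    lambda p: abs(p[0] - 9) + abs(p[1] - 9))
print(cost)  # 18
```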
General depth-first search can be implemented using A* by considering that there is a global counter C initialized with a very large value. Every time we process a node we assign C to all of its newly discovered neighbors. After each single assignment, we decrease the counter C by one. Thus the earlier a node is discovered, the higher its h(x) value. Both Dijkstra's algorithm and depth-first search can be implemented more efficiently without including an h(x) value at each node.

On finite graphs with non-negative edge weights A* is guaranteed to terminate and is complete, i.e. it will always find a solution (a path from start to goal) if one exists. On infinite graphs with a finite branching factor and edge costs that are bounded away from zero ($d(x,y)>\varepsilon>0$ for some fixed $\varepsilon$), A* is guaranteed to terminate only if there exists a solution.

Dechter and Pearl considered a variety of definitions of Alts and P (the set of alternative algorithms and the set of problem instances under consideration) in combination with A*'s heuristic being merely admissible or being both consistent and admissible. The most interesting positive result they proved is that A*, with a consistent heuristic, is optimally efficient with respect to all admissible A*-like search algorithms on all ″non-pathological″ search problems. Roughly speaking, their notion of non-pathological problem is what we now mean by ″up to tie-breaking″. This result does not hold if A*'s heuristic is admissible but not consistent. In that case, Dechter and Pearl showed there exist admissible A*-like algorithms that can expand arbitrarily fewer nodes than A* on some non-pathological problems.

Optimal efficiency is about the set of nodes expanded, not the number of node expansions (the number of iterations of A*'s main loop). When the heuristic being used is admissible but not consistent, it is possible for a node to be expanded by A* many times, an exponential number of times in the worst case.

In such circumstances Dijkstra's algorithm could outperform A* by a large margin. However, more recent research found that this pathological case only occurs in certain contrived situations where the edge weight of the search graph is exponential in the size of the graph, and that certain inconsistent (but admissible) heuristics can lead to a reduced number of node expansions in A* searches.

While the admissibility criterion guarantees an optimal solution path, it also means that A* must examine all equally meritorious paths to find the optimal path. To compute approximate shortest paths, it is possible to speed up the search at the expense of optimality by relaxing the admissibility criterion. Oftentimes we want to bound this relaxation, so that we can guarantee that the solution path is no worse than (1 + ε) times the optimal solution path. This new guarantee is referred to as ε-admissible.

There are a number of ε-admissible algorithms:

*Weighted A*/static weighting. If ha(n) is an admissible heuristic function, in the weighted version of the A* search one uses hw(n) = ε ha(n), ε > 1, as the heuristic function, and performs the A* search as usual (which generally runs faster than using ha since fewer nodes are expanded). The path hence found by the search algorithm can have a cost of at most ε times that of the least cost path in the graph.

The quality of a heuristic can be summarized by its effective branching factor b*: if A* expands N nodes while finding a solution at depth d, then b* is the branching factor that a uniform tree of depth d would need in order to contain N + 1 nodes; that is,

$$N + 1 = 1 + b^* + (b^*)^2 + \dots + (b^*)^d.$$

Good heuristics are those with low effective branching factor (the optimal being b* = 1).
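Because the defining equation above has no closed form for b* in general, one solves it numerically. Here is a small bisection sketch (ours, not from the article; it assumes N ≥ 2, d ≥ 1 and a root b* ≥ 1, which holds whenever N + 1 ≥ d + 1):

```
def effective_branching_factor(n_expanded, depth, tol=1e-9):
    """Solve N + 1 = 1 + b + b**2 + ... + b**depth for b by bisection."""
    def total(b):
        return sum(b ** i for i in range(depth + 1))
    lo, hi = 1.0, float(n_expanded)   # total(1) <= N + 1 <= total(N) here
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < n_expanded + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A commonly quoted textbook example: a search that expanded N = 52 nodes
# to find a solution at depth d = 5 has an effective branching factor of
# about 1.92.
print(round(effective_branching_factor(52, 5), 2))  # 1.92
```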
- -The time complexity is polynomial when the search space is a tree, there is a single goal state, and the heuristic function h meets the following condition: -$$ -|h(x) - h^*(x)| = O(\log h^*(x)) -$$ - -where h* is the optimal heuristic, the exact cost to get from x to the goal. In other words, the error of h will not grow faster than the logarithm of the "perfect heuristic" h* that returns the true distance from x to the goal. - -The space complexity of A* is roughly the same as that of all other graph search algorithms, as it keeps all generated nodes in memory. - -What sets A* apart from a greedy best-first search algorithm is that it takes the cost/distance already traveled, g(n), into account. - -Some common variants of Dijkstra's algorithm can be viewed as a special case of A* where the heuristic $h(n) = 0$ for all nodes; in turn, both Dijkstra and A* are special cases of dynamic programming. - -A* itself is a special case of a generalization of branch and bound. - -*Anytime A* - -*Block A* - -*D* - -*Field D* - -*Fringe - -*Fringe Saving A* (FSA*) - -*Generalized Adaptive A* (GAA*) - -*Incremental heuristic search - -*Reduced A* - -*Iterative deepening A* (IDA*) - -*Jump point search - -*Lifelong Planning A* (LPA*) - -*New Bidirectional A* (NBA*) - -*Simplified Memory bounded A* (SMA*) - -*Theta* - -A* can also be adapted to a bidirectional search algorithm. Special care needs to be taken for the stopping criterion. diff --git a/wiki/wikipedia/2366.txt b/wiki/wikipedia/2366.txt deleted file mode 100644 index 56cc46ea66864a041bdfa12d5ddd2225894b4d01..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2366.txt +++ /dev/null @@ -1,81 +0,0 @@ -Bar induction is a reasoning principle used in intuitionistic mathematics, introduced by L. E. J. Brouwer. Bar induction's main use is the intuitionistic derivation of the fan theorem, a key result used in the derivation of the uniform continuity theorem. - -It is also useful in giving constructive alternatives to other classical results. - -The goal of the principle is to prove properties for all infinite sequences of natural numbers (called choice sequences in intuitionistic terminology), by inductively reducing them to properties of finite lists. Bar induction can also be used to prove properties about all choice sequences in a spread (a special kind of set). - -Given a choice sequence $x_0,x_1,x_2,x_3,\ldots$, any finite sequence of elements $x_0,x_1,x_2,x_3,\ldots,x_i$ of this sequence is called an initial segment of this choice sequence. - -There are three forms of bar induction currently in the literature, each one places certain restrictions on a pair of predicates and the key differences are highlighted using bold font. - -Given two predicates $R$ and $A$ on finite sequences of natural numbers such that all of the following conditions hold: - -* every choice sequence contains at least one initial segment satisfying $R$ at some point (this is expressed by saying that $R$ is a bar); - -* $R$ is decidable (i.e. our bar is decidable); - -* every finite sequence satisfying $R$ also satisfies $A$ (so $A$ holds for every choice sequence beginning with the aforementioned finite sequence); - -* if all extensions of a finite sequence by one element satisfy $A$, then that finite sequence also satisfies $A$ (this is sometimes referred to as $A$ being upward hereditary); - -then we can conclude that $A$ holds for the empty sequence (i.e. A holds for all choice sequences starting with the empty sequence). 
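In symbols, the decidable bar induction schema just stated (the $BI_D$ of the results below) is commonly rendered as follows; this formalization is added here for reference and follows standard notation, with $\bar\alpha n$ denoting the initial segment $\langle \alpha(0), \ldots, \alpha(n-1)\rangle$ of the choice sequence $\alpha$ and $s * \langle x \rangle$ the extension of the finite sequence $s$ by the element $x$:

$$\Bigl[\forall\alpha \exists n R(\bar\alpha n)\Bigr] \wedge \Bigl[\forall s \bigl(R(s) \vee \neg R(s)\bigr)\Bigr] \wedge \Bigl[\forall s \bigl(R(s) \to A(s)\bigr)\Bigr] \wedge \Bigl[\forall s \bigl(\forall x A(s * \langle x \rangle) \to A(s)\bigr)\Bigr] \to A(\langle\rangle).$$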
This principle of bar induction is favoured in the works of A. S. Troelstra, S. C. Kleene and Dragalin.

Given two predicates $R$ and $A$ on finite sequences of natural numbers such that all of the following conditions hold:

* every choice sequence contains a unique initial segment satisfying $R$ at some point (i.e. our bar is thin);

* every finite sequence satisfying $R$ also satisfies $A$;

* if all extensions of a finite sequence by one element satisfy $A$, then that finite sequence also satisfies $A$;

then we can conclude that $A$ holds for the empty sequence.

This principle of bar induction is favoured in the works of Joan Moschovakis and is (intuitionistically) provably equivalent to decidable bar induction.

Given two predicates $R$ and $A$ on finite sequences of natural numbers such that all of the following conditions hold:

* every choice sequence contains at least one initial segment satisfying $R$ at some point;

* once a finite sequence satisfies $R$, then every possible extension of that finite sequence also satisfies $R$ (i.e. our bar is monotonic);

* every finite sequence satisfying $R$ also satisfies $A$;

* if all extensions of a finite sequence by one element satisfy $A$, then that finite sequence also satisfies $A$;

then we can conclude that $A$ holds for the empty sequence.

This principle of bar induction is used in the works of A. S. Troelstra, S. C. Kleene, Dragalin and Joan Moschovakis.

The following results about these schemata can be intuitionistically proved:

$$\begin{align} BI_M & \vdash BI_D \\[3pt] BI_D & \vdash BI_T \\[3pt] BI_T & \vdash BI_D \end{align}$$

(The symbol "$\vdash$" is a "turnstile".)

An additional schema of bar induction was originally given as a theorem by Brouwer (1975) containing no "extra" restriction on R under the name The Bar Theorem. However, the proof for this theorem was erroneous, and unrestricted bar induction is not considered to be intuitionistically valid (see Dummett 1977 pp 94–104 for a summary of why this is so). The schema of unrestricted bar induction is given below for completeness.

Given two predicates $R$ and $A$ on finite sequences of natural numbers such that all of the following conditions hold:

* every choice sequence contains at least one initial segment satisfying $R$ at some point;

* every finite sequence satisfying $R$ also satisfies $A$;

* if all extensions of a finite sequence by one element satisfy $A$, then that finite sequence also satisfies $A$;

then we can conclude that $A$ holds for the empty sequence.

In classical reverse mathematics, "bar induction" ($BI_{D}$) denotes the related principle stating that if a relation $R$ is a well-order, then we have the schema of transfinite induction over $R$ for arbitrary formulas. diff --git a/wiki/wikipedia/2367.txt b/wiki/wikipedia/2367.txt deleted file mode 100644 index 678d071533119cab90551c6042d8eb54bf902a72..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2367.txt +++ /dev/null @@ -1 +0,0 @@ CmapTools is concept mapping software developed by the Florida Institute for Human and Machine Cognition (IHMC). It allows users to easily create graphical nodes representing concepts, and to connect nodes using lines and linking words to form a network of interrelated propositions that represent knowledge of a topic.
The software has been used in classrooms and research labs, and in corporate training. diff --git a/wiki/wikipedia/2368.txt b/wiki/wikipedia/2368.txt deleted file mode 100644 index 08b4aac8291f780cf865ba8ed643dd1b563bbbd8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2368.txt +++ /dev/null @@ -1,15 +0,0 @@ In mathematics, Liouville's theorem, proved by Joseph Liouville in 1850, is a rigidity theorem about conformal mappings in Euclidean space. It states that any smooth conformal mapping on a domain of Rn, where n > 2, can be expressed as a composition of translations, similarities, orthogonal transformations and inversions: they are Möbius transformations (in n dimensions). This theorem severely limits the variety of possible conformal mappings in R3 and higher-dimensional spaces. By contrast, conformal mappings in R2 can be much more complicated – for example, all simply connected planar domains are conformally equivalent, by the Riemann mapping theorem.

Generalizations of the theorem hold for transformations that are only weakly differentiable. The focus of such a study is the non-linear Cauchy–Riemann system that is a necessary and sufficient condition for a smooth mapping ƒ : Ω → Rn to be conformal:

$$Df^T Df = \left|\det Df\right|^{2/n} I$$

where Df is the Jacobian derivative, T is the matrix transpose, and I is the identity matrix. A weak solution of this system is defined to be an element ƒ of the Sobolev space W1,nloc(Ω, Rn) with non-negative Jacobian determinant almost everywhere, such that the Cauchy–Riemann system holds at almost every point of Ω. Liouville's theorem is then that every weak solution (in this sense) is a Möbius transformation, meaning that it has the form

$$f(x) = b + \frac{\alpha A (x-a)}{|x-a|^\epsilon}$$

where a, b are vectors in Rn, α is a scalar, A is a rotation matrix, and ε = 0 or 2. Equivalently stated, any quasiconformal map of a domain in Euclidean space that is also conformal is a Möbius transformation. This equivalent statement justifies using the Sobolev space W1,n, since ƒ ∈ W1,nloc(Ω, Rn) then follows from the geometrical condition of conformality and the ACL characterization of Sobolev space. The result is not optimal however: in even dimensions n = 2k, the theorem also holds for solutions that are only assumed to be in the space W1,kloc, and this result is sharp in the sense that there are weak solutions of the Cauchy–Riemann system in W1,p for any p < k which are not Möbius transformations. In odd dimensions, it is known that W1,n is not optimal, but a sharp result is not known.

Similar rigidity results (in the smooth case) hold on any conformal manifold. The group of conformal isometries of an n-dimensional conformal Riemannian manifold always has dimension that cannot exceed that of the full conformal group SO(n+1,1). Equality of the two dimensions holds exactly when the conformal manifold is isometric with the n-sphere or projective space. Local versions of the result also hold: The Lie algebra of conformal Killing fields in an open set has dimension less than or equal to that of the conformal group, with equality holding if and only if the open set is locally conformally flat.
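As a quick sanity check of the non-linear Cauchy–Riemann system above (an added computation, not part of the original text), take the inversion $f(x) = x / |x|^2$ on $\R^n \setminus \{0\}.$ Its Jacobian is

$$Df(x) = \frac{1}{|x|^2}\left(I - 2 \frac{x x^T}{|x|^2}\right),$$

and since $I - 2xx^T/|x|^2$ is an orthogonal reflection, $Df^T Df = |x|^{-4} I,$ while $\det Df = \pm |x|^{-2n},$ so that $\left|\det Df\right|^{2/n} = |x|^{-4}$ and the system holds. This matches the Möbius normal form above with $\alpha = 1,$ $A = I,$ $a = b = 0,$ and $\epsilon = 2.$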
diff --git a/wiki/wikipedia/2369.txt b/wiki/wikipedia/2369.txt deleted file mode 100644 index 0342eaf08f032f9d695cab8bd1059d4ca452aa83..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2369.txt +++ /dev/null @@ -1,211 +0,0 @@ -In mathematics, particularly in functional analysis and topology, the closed graph theorem is a fundamental result stating that a linear operator with a closed graph will, under certain conditions, be continuous. - -The original result has been generalized many times so there are now many theorems referred to as "closed graph theorems." - -The graph of a function $f : X \to Y$ is the set - - - -
$$\operatorname{Gr} f = \{(x, f(x)) : x \in X\}.$$

A function $f : X \to Y$ between topological spaces is said to have a closed graph if the following equivalent conditions hold:

  • (Definition): The graph $\operatorname{Gr} f$ of $f$ is a closed subset of $X \times Y.$
  • For every $x \in X$ and net $x_{\bull} = \left(x_i\right)_{i \in I}$ in $X$ such that $x_{\bull} \to x$ in $X,$ if $y \in Y$ is such that the net $f\left(x_{\bull}\right) = \left(f\left(x_i\right)\right)_{i \in I} \to y$ in $Y$ then $y = f(x).$

* Compare this to the definition of continuity in terms of nets, which recall is the following: for every $x \in X$ and net $x_{\bull} = \left(x_i\right)_{i \in I}$ in $X$ such that $x_{\bull} \to x$ in $X,$ $f\left(x_{\bull}\right) \to f(x)$ in $Y.$

* Thus to show that the function $f$ has a closed graph, it may be assumed that $f\left(x_{\bull}\right)$ converges in $Y$ to some $y \in Y$ (and then show that $y = f(x)$) while to show that $f$ is continuous, it may not be assumed that $f\left(x_{\bull}\right)$ converges in $Y$ to some $y \in Y$ and instead, it must be proven that this is true (and moreover, it must more specifically be proven that $f\left(x_{\bull}\right)$ converges to $f(x)$ in $Y$).
and if $Y$ is a Hausdorff compact space then we may add to this list:
    1. $f$ is continuous.
and if both $X$ and $Y$ are first-countable spaces then we may add to this list:
    1. $f$ has a sequentially closed graph in $X \times Y.$
Function with a sequentially closed graph

If $f : X \to Y$ is a function then the following are equivalent:
    1. $f$ has a sequentially closed graph in $X \times Y.$
    2. Definition: the graph of $f$ is a sequentially closed subset of $X \times Y.$
    3. For every $x \in X$ and sequence $x_{\bull} = \left(x_i\right)_{i=1}^{\infty}$ in $X$ such that $x_{\bull} \to x$ in $X,$ if $y \in Y$ is such that the sequence $f\left(x_{\bull}\right) := \left(f\left(x_i\right)\right)_{i=1}^{\infty} \to y$ in $Y$ then $y = f(x).$
    - -Suppose $f : D(f) \subseteq X \to Y$ is a linear operator between Banach spaces. - -
  • If $f$ is closed then $f - s \operatorname{Id}_{D(f)}$ is closed where $s$ is a scalar and $\operatorname{Id}_{D(f)}$ is the identity function.
    • If $f$ is closed, then its kernel (or nullspace) is a closed vector subspace of $X.$
    • If $f$ is closed and injective then its inverse $f^{-1}$ is also closed.
    • A linear operator $f$ admits a closure if and only if for every $x \in X$ and every pair of sequences $x_{\bull} = \left(x_i\right)_{i=1}^{\infty}$ and $z_{\bull} = \left(z_i\right)_{i=1}^{\infty}$ in $D(f)$ both converging to $x$ in $X,$ such that both $f\left(x_{\bull}\right) = \left(f\left(x_i\right)\right)_{i=1}^{\infty}$ and $f\left(z_{\bull}\right) = \left(f\left(z_i\right)\right)_{i=1}^{\infty}$ converge in $Y,$ one has $\lim_{i \to \infty} f\left(x_i\right) = \lim_{i \to \infty} f\left(z_i\right).$
  • Let $X$ denote the real numbers $\R$ with the usual Euclidean topology and let $Y$ denote $\R$ with the indiscrete topology (so that $Y$ is not Hausdorff and every function valued in $Y$ is continuous). Let $f : X \to Y$ be defined by $f(0) = 1$ and $f(x) = 0$ for all $x \neq 0.$ Then $f : X \to Y$ is continuous but its graph is not closed in $X \times Y.$
    • If $X$ is any space then the identity map $\operatorname{Id} : X \to X$ is continuous but its graph, which is the diagonal $\operatorname{Gr} \operatorname{Id} = \{ (x, x) : x \in X \},$ is closed in $X \times X$ if and only if $X$ is Hausdorff. In particular, if $X$ is not Hausdorff then $\operatorname{Id} : X \to X$ is continuous but not closed.
    • If $f : X \to Y$ is a continuous map whose graph is not closed then $Y$ is not a Hausdorff space.
  • If $(X, \tau)$ is a Hausdorff TVS and $\nu$ is a vector topology on $X$ that is strictly finer than $\tau,$ then the identity map $\operatorname{Id} : (X, \tau) \to (X, \nu)$ is a closed discontinuous linear operator.
  • Consider the derivative operator $A = \frac{d}{d x}$ where $X = Y = C([a, b])$ is the Banach space of all continuous functions on an interval $[a, b].$ If one takes its domain $D(A)$ to be $C^1([a, b]),$ then $A$ is a closed operator which is not bounded. On the other hand, if $D(A) = C^{\infty}([a, b]),$ then $A$ will no longer be closed, but it will be closable, with the closure being its extension defined on $C^1([a, b]).$
    • - -Let $X$ and $Y$ both denote the real numbers $\R$ with the usual Euclidean topology. Let $f : X \to Y$ be defined by $f(0) = 0$ and $f(x) = \frac{1}{x}$ for all $x \neq 0.$ Then $f : X \to Y$ has a closed graph (and a sequentially closed graph) in $X \times Y = \R^2$ but it is not continuous (since it has a discontinuity at $x = 0$).
    • Let $X$ denote the real numbers $\R$ with the usual Euclidean topology, let $Y$ denote $\R$ with the discrete topology, and let $\operatorname{Id} : X \to Y$ be the identity map (i.e. $\operatorname{Id}(x) := x$ for every $x \in X$). Then $\operatorname{Id} : X \to Y$ is a linear map whose graph is closed in $X \times Y$ but it is clearly not continuous (since singleton sets are open in $Y$ but not in $X$). - -
If $T : X \to Y$ is an everywhere-defined linear operator between Banach spaces, then the following are equivalent:

# $T$ is continuous.

# $T$ is closed (that is, the graph of $T$ is closed in the product topology on $X \times Y$).

# If $x_{\bull} = \left(x_i\right)_{i=1}^{\infty} \to x$ in $X$ then $T\left(x_{\bull}\right) := \left(T\left(x_i\right)\right)_{i=1}^{\infty} \to T(x)$ in $Y.$

# If $x_{\bull} = \left(x_i\right)_{i=1}^{\infty} \to 0$ in $X$ then $T\left(x_{\bull}\right) \to 0$ in $Y.$

# If $x_{\bull} = \left(x_i\right)_{i=1}^{\infty} \to x$ in $X$ and if $T\left(x_{\bull}\right)$ converges in $Y$ to some $y \in Y,$ then $y = T(x).$

# If $x_{\bull} = \left(x_i\right)_{i=1}^{\infty} \to 0$ in $X$ and if $T\left(x_{\bull}\right)$ converges in $Y$ to some $y \in Y,$ then $y = 0.$

The operator is required to be everywhere-defined, that is, the domain $D(T)$ of $T$ is $X.$ This condition is necessary, as there exist closed linear operators that are unbounded (not continuous); a prototypical example is provided by the derivative operator on $C([0, 1]),$ whose domain is a strict subset of $C([0, 1]).$

The usual proof of the closed graph theorem employs the open mapping theorem.

In fact, the closed graph theorem, the open mapping theorem and the bounded inverse theorem are all equivalent.

This equivalence also serves to demonstrate the importance of $X$ and $Y$ being Banach; one can construct linear maps that have unbounded inverses in this setting, for example, by using either continuous functions with compact support or by using sequences with finitely many non-zero terms along with the supremum norm.

The closed graph theorem can be generalized from Banach spaces to more abstract topological vector spaces in the following ways.

A linear operator from a barrelled space $X$ to a Fréchet space $Y$ is continuous if and only if its graph is closed.

There are versions that do not require $Y$ to be locally convex.

A linear map between two F-spaces is continuous if and only if its graph is closed.

This theorem can be restated, with some added conditions that can be used to determine whether a graph is closed:

If $T : X \to Y$ is a linear map between two F-spaces, then the following are equivalent:

# $T$ is continuous.

# $T$ has a closed graph.

# If $x_{\bull} = \left(x_i\right)_{i=1}^{\infty} \to x$ in $X$ and if $T\left(x_{\bull}\right) := \left(T\left(x_i\right)\right)_{i=1}^{\infty}$ converges in $Y$ to some $y \in Y,$ then $y = T(x).$

# If $x_{\bull} = \left(x_i\right)_{i=1}^{\infty} \to 0$ in $X$ and if $T\left(x_{\bull}\right)$ converges in $Y$ to some $y \in Y,$ then $y = 0.$

Every metrizable topological space is pseudometrizable. A pseudometrizable space is metrizable if and only if it is Hausdorff.

A closed and bounded linear map from a locally convex infrabarreled space into a complete pseudometrizable locally convex space is continuous.

The Borel graph theorem, proved by L. Schwartz, is an even more general version of the closed graph theorem: it shows that the closed graph theorem is valid for linear maps defined on and valued in most spaces encountered in analysis.

Recall that a topological space is called a Polish space if it is a separable complete metrizable space and that a Souslin space is the continuous image of a Polish space. The weak dual of a separable Fréchet space and the strong dual of a separable Fréchet-Montel space are Souslin spaces.
Also, the space of distributions and all Lp-spaces over open subsets of Euclidean space as well as many other spaces that occur in analysis are Souslin spaces.

The Borel graph theorem states:

Let $u : X \to Y$ be a linear map between two locally convex Hausdorff spaces $X$ and $Y.$ If $X$ is the inductive limit of an arbitrary family of Banach spaces, if $Y$ is a Souslin space, and if the graph of u is a Borel set in $X \times Y,$ then u is continuous.

An improvement upon this theorem, proved by A. Martineau, uses K-analytic spaces.

A topological space $X$ is called a Kσδ if it is the countable intersection of countable unions of compact sets.

A Hausdorff topological space $Y$ is called K-analytic if it is the continuous image of a Kσδ space (that is, if there is a Kσδ space $X$ and a continuous map of $X$ onto $Y$).

Every compact set is K-analytic so that there are non-separable K-analytic spaces. Also, every Polish, Souslin, and reflexive Fréchet space is K-analytic as is the weak dual of a Fréchet space.

The generalized Borel graph theorem states:

If $F : X \to Y$ is a closed linear operator from a Hausdorff locally convex TVS $X$ into a Hausdorff finite-dimensional TVS $Y$ then F is continuous. diff --git a/wiki/wikipedia/237.txt b/wiki/wikipedia/237.txt deleted file mode 100644 index e6699ff4c28c571810ae6d0e1598fcce54702e0d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/237.txt +++ /dev/null @@ -1,27 +0,0 @@ In computer science, in the field of databases, read–write conflict, also known as unrepeatable reads, is a computational anomaly associated with interleaved execution of transactions.

Given a schedule S

$$S = \begin{bmatrix} T1 & T2 \\ R(A) & \\ & R(A) \\ & W(A)\\ & Com. \\ R(A) & \\ W(A) & \\ Com. & \end{bmatrix}$$

In this example, T1 has read the original value of A, and is waiting for T2 to finish. T2 also reads the original value of A, overwrites A, and commits.

However, when T1 reads A a second time, it sees a different value of A than the one it first read, and T1 would be forced to abort, because it would not know which value of A to use. This is an unrepeatable read. This could never occur in a serial schedule. Strict two-phase locking (Strict 2PL) prevents this conflict.

Alice and Bob are using a website to book tickets for a specific show. Only one ticket is left for the specific show. Alice signs on first to see that only one ticket is left, and finds it expensive. Alice takes time to decide. Bob signs on and also finds one ticket left, and orders it instantly. Bob purchases and logs off. Alice decides to buy a ticket, only to find that there are no tickets left. This is a typical read-write conflict situation. diff --git a/wiki/wikipedia/2370.txt b/wiki/wikipedia/2370.txt deleted file mode 100644 index 4e1589e21f422a23a3ef7ac0069e8f136820c473..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2370.txt +++ /dev/null @@ -1,65 +0,0 @@ The Classic Tetris World Championship (CTWC) is a video game competition series, hosted by the Portland Retro Gaming Expo.

The competition launched in 2010, during the filming of Ecstasy of Order: The Tetris Masters to determine the world's greatest Tetris player. In its first two years, the competition was held in Los Angeles, California, but was moved to Portland, Oregon, in 2012, and has been held there annually since (with the exceptions of the 2020 and 2021 tournaments, held online due to the COVID-19 pandemic).
- -The contestants play the 1989 Nintendo version of Tetris on actual Nintendo Entertainment System consoles and CRT televisions. The final rounds are streamed online with live-edited screens and heads-up display to improve viewer experience. The tournament was initially dominated by Jonas Neubauer, who reached the finals in the first nine iterations of the tournament and won seven titles. - -Since Neubauer's final win in 2017, the tournament has been dominated by hypertapping, a style of playing in which the player rapidly taps the controller's D-pad to move pieces. This is in contrast to the delayed auto-shift (DAS) technique, in which the player simply holds down the D-pad to move the piece. Hypertapping is especially prevalent among a recent influx of younger players. Joseph Saelee won back-to-back titles while in high school, including a win against Neubauer in the 2018 final and one against Koji "Koryan" Nishio in the 2019 final. Thirteen-year-old Michael "dogplayingtetris" Artiaga won the 2020 edition of the tournament, beating his brother Andrew "PixelAndy" Artiaga in the final. The younger Artiaga then successfully defended his title in 2021, defeating Jacob Huff three games to one. - -In addition to classic Tetris, related games have been variously exhibited in side tournaments at CTWC events, including Tetris & Dr. Mario (1994), Tetris: The Grand Master (1998), Tetris for PS3 (2011), Tetris Ultimate (2014), and Tetris Effect (2018). The 2020 game Tetris Effect: Connected features a "Classic Score Attack" multiplayer mode allowing players to closely replicate the tournament's match format on modern consumer machines. The game mode was designed by Tomohiro "Greentea" Tatejima, a strong Classic Tetris player and CTWC participant. - -The competition takes place over two days, with the Qualifying Round on the first day and the Main Event on the second. Contestants are allowed to bring their own controller, but it must be either an original, unmodified NES Controller or an aftermarket unit that is deemed a faithful enough reproduction of one. At the conclusion of the competition, the champion and 2nd-place finisher are awarded a golden and silver T-piece trophy respectively. - -Qualifying takes place on a fixed number of NES stations. Entrants play "Type A" Tetris, starting on level 9 or higher, and are seeded based on their final score. Once an entrant's game ends for any reason, his/her score must be recorded by a tournament scorekeeper in order to be valid. Entrants may make as many qualifying attempts as they wish, but must return to the back of the waiting line for each one. Entrants may also pay a fee to rent a station for one hour, which allows unlimited qualifying attempts. - -The top 32 scorers are seeded into a tournament bracket for the Main Event. In 2018, 40 players were allowed to qualify, with a "Round Zero" play-off held among the bottom 16 seeds to reduce the field to 32. Forty-eight players qualified in 2016; the top 16 seeds automatically advanced, while the remaining 32 competed in "Round Zero" to fill the other 16 slots. In the event of multiple players maxing out (scoring 999,999 or higher), their second highest score is recorded to determine their seeding. This was especially utilized in 2018, when seven players maxed out, four of whom (Koji "Koryan" Nishio, Tomohiro "Green Tea" Tatejima, Jonas Neubauer and Harry Hong) maxed out twice. Thus, the officials needed their third highest scores just to determine the 1st to 4th seeding. 
- -The Main Event is a single-elimination tournament consisting of five rounds of head-to-head matches, with seeds from opposite ends of the rankings pitted against each other in the first round (i.e. #1 vs. #32, #2 vs. #31, etc.). Matches are played with specially modified cartridges that can display seven-figure scores and give both players the same sequence of randomly determined blocks. Prior to the 2016 tournament, the Main Event was played using unmodified cartridges. - -Both players begin to play "Type A" Tetris at the same time on separate systems, and the game continues until one of the following occurs: - -* Trailing player "tops-out," or allows the blocks to reach the top of the screen (leader wins) - -* Leader tops-out; trailing player fails to match that score before topping-out (leader wins) - -* Leader tops-out; trailing player passes that score (trailing player wins) - -During the first round, the higher-seeded player in a match chooses whether the first game will start at level 15 or 18. The lower seed chooses for the second game, and the higher seed for the third (if necessary). Starting with the second round, all games begin at level 18. - -The inaugural edition of CTWC was held at the Downtown Independent theater in Los Angeles, California on August 8, 2010. - -The 2010 championship had the flavor of an invitational tournament due to its original concept; five of the eight seats in the semifinals were automatically issued to certain distinguished players. The top two Tetris score recordholders Jonas Neubauer and Harry Hong, who had each achieved the maximum score of 999,999 points, were invited. Also included were the top two recordholders for the most lines cleared in a single game, Ben Mullen (296 lines) and Jesse Kelkar (291 lines). The final reserved seat was given to Thor Aackerlund, the champion of the 1990 Nintendo World Championships. Three spots were remaining for qualifiers: the top 3 players in the "Type B" games (on level 18-0) in a certain period could join the semifinal. - -The concept of assigned spots in the semifinals did not carry over to the 2011 championship. In the qualifying, the top 8 scorers of "Type B" games advanced to the main tournament. An additional 100,000 points were awarded for completing Level 19. - -A slight change was applied in determining the rankings: if players are tied for rounds advanced and games won in a losing match, the sum of two games in the losing match plus qualification score was used. However, this rule was used only in 2015 and 2016. - -*First double max-out in CTWC tournament: Joseph Saelee and Tomohiro Tatejima, 2019 - -*First double kill screen in CTWC tournament: Joseph Saelee and Tomohiro Tatejima, 2019 - -*First double 1.1 million score in CTWC tournament: Michael Artiaga and Koji Nishio, 2020 - -*First double 1.3 million score in CTWC tournament: Michael Artiaga and Minjun Kim ("Pokenerd"), 2021 - -Since 2018, global CTWC stops have been officially added, many of which are directly linked to the CTWC main event in Portland. Other than prizes, the winner of each global stop is sponsored to fly to Portland and try to qualify for the finals. - -During the expo there have been several tournaments on other systems over the years. - -*Tetris on the PlayStation 3: 4-player 2-vs-2 team battle with no items (2011) - -*Tetris Ultimate on the PlayStation 4: versus mode (2015) - -*Tetris & Dr. 
Mario on SNES: Tetris versus mode, held as a tournament for those who didn't participate in the main event (2016-2017) - -*Tetris: The Grand Master 2 on Arcade: versus mode with no items (2016) - -*Tetris: The Grand Master on Arcade: regular games racing for the fastest time (2017) - -*Tetris Effect on the PlayStation 4: separate gameplays on Journey mode and Mystery mode (2018) - -*Nintendo NES Tetris with extra rules: no next preview Level 18 and race from Level 0 to Level 19 (2018) - -There is a once-a-month online tournament called Classic Tetris Monthly (CTM) that was previously hosted on the same Twitch channel as the CTWC, but now is hosted on MonthlyTetris. Competitors routinely compete from around the world in CTM, which is streamed remotely and thus allows for great flexibility on the part of the competitors. CTM is overseen and commentated chiefly by Keith "vandweller" Didion, who took over for Jessica "fridaywitch" Starr, the tournament's founder, in the Summer of 2018. Starr premiered the tournament on December 3, 2017 on her personal Twitch channel, with 16 participants that had qualified in the few weeks leading up to the event. Harry Hong, the 2014 CTWC champion, was the tournament's first victor. Didion opened a Twitch account dedicated to CTM, called MonthlyTetris, shortly after he began hosting. - -Since 2015, a Classic Tetris European Championship has been played annually in Copenhagen. The tournament follows a similar structure, but is played on the PAL version of NES Tetris rather than the NTSC version. Due to the difference in framerates, the two versions of the game (both of which are designed for the NES) are balanced differently; pieces do not fall at identical speeds on the same level between the two versions. In addition, Delay Auto Shift (DAS) is faster in PAL compared to NTSC. At higher level play, this leads to significant differences in strategy and outcome. In particular, players who employ DAS as their primary strategy are able to play at the highest level. diff --git a/wiki/wikipedia/2371.txt b/wiki/wikipedia/2371.txt deleted file mode 100644 index e57c798843e7e40d7006af4b7a548b79e32cb127..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2371.txt +++ /dev/null @@ -1,7 +0,0 @@ -In mathematics, a random minimum spanning tree may be formed by assigning random weights from some distribution to the edges of an undirected graph, and then constructing the minimum spanning tree of the graph. - -When the given graph is a complete graph on n vertices, and the edge weights have a continuous distribution function whose derivative at zero is D > 0, then the expected weight of its random minimum spanning trees is bounded by a constant, rather than growing as a function of n. More precisely, this constant tends in the limit (as n goes to infinity) to ζ(3)/D, where ζ is the Riemann zeta function and ζ(3) is Apéry's constant. For instance, for edge weights that are uniformly distributed on the unit interval, the derivative is D = 1, and the limit is just ζ(3). - -In contrast to uniformly random spanning trees of complete graphs, for which the typical diameter is proportional to the square root of the number of vertices, random minimum spanning trees of complete graphs have typical diameter proportional to the cube root. - -Random minimum spanning trees of grid graphs may be used for invasion percolation models of liquid flow through a porous medium, and for maze generation. 
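The ζ(3) limit mentioned above is easy to observe numerically. The sketch below (an added illustration, not from the original article; it implements Kruskal's algorithm with a small union–find structure, which the article does not prescribe) averages the minimum spanning tree weight of complete graphs with i.i.d. Uniform(0,1) edge weights:

```
import random

def random_mst_weight(n, rng):
    """Kruskal on K_n with i.i.d. Uniform(0,1) weights; returns MST weight."""
    edges = sorted((rng.random(), i, j)
                   for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    weight, used = 0.0, 0
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            weight += w
            used += 1
            if used == n - 1:
                break
    return weight

rng = random.Random(0)
trials = [random_mst_weight(200, rng) for _ in range(25)]
print(sum(trials) / len(trials))   # close to zeta(3) ~ 1.2020569
```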
diff --git a/wiki/wikipedia/2372.txt b/wiki/wikipedia/2372.txt deleted file mode 100644 index ad9855274116425d14cbc0ebf37d5cb1e37efea5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2372.txt +++ /dev/null @@ -1,40 +0,0 @@ In mathematics, the Dirichlet conditions are sufficient conditions for a real-valued, periodic function f to be equal to the sum of its Fourier series at each point where f is continuous. Moreover, the behavior of the Fourier series at points of discontinuity is determined as well (it is the midpoint of the values of the discontinuity). These conditions are named after Peter Gustav Lejeune Dirichlet.

The conditions are:

#f must be absolutely integrable over a period.

#f must be of bounded variation in any given bounded interval.

#f must have a finite number of discontinuities in any given bounded interval, and the discontinuities cannot be infinite.

We state Dirichlet's theorem assuming f is a periodic function of period 2π with Fourier series expansion $f(x) \sim \sum_{n = -\infty}^\infty a_n e^{inx},$ where

$$ a_n = \frac{1}{2\pi} \int_{-\pi}^\pi f(x) e^{-inx} dx.$$

The analogous statement holds irrespective of what the period of f is, or which version of the Fourier expansion is chosen (see Fourier series).
Dirichlet's theorem: If f satisfies Dirichlet conditions, then for all x, we have that the series obtained by plugging x into the Fourier series is convergent, and is given by

$$ \sum_{n = -\infty}^\infty a_n e^{inx} = \frac{f(x^+) + f(x^-)}{2},$$

where the notation

$$ f(x^+) = \lim_{y \to x^+} f(y)$$

$$ f(x^-) = \lim_{y \to x^-} f(y)$$

denotes the right/left limits of f.

A function satisfying Dirichlet's conditions must have right and left limits at each point of discontinuity, or else the function would need to oscillate at that point, violating the condition on maxima/minima. Note that at any point where f is continuous,

$$ \frac{f(x^+) + f(x^-)}{2} = f(x).$$

Thus Dirichlet's theorem says in particular that under the Dirichlet conditions the Fourier expansion for f converges to f(x) wherever f is continuous. diff --git a/wiki/wikipedia/2373.txt b/wiki/wikipedia/2373.txt deleted file mode 100644 index ffb2d9d06a9d2e20f9ecaaef7fe5986ee2070cd3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2373.txt +++ /dev/null @@ -1,10 +0,0 @@ In mathematical representation theory, Steinberg's formula, introduced by Robert Steinberg (1961), describes the multiplicity of an irreducible representation of a semisimple complex Lie algebra in a tensor product of two irreducible representations.

It is a consequence of the Weyl character formula, and for the Lie algebra sl2 it is essentially the Clebsch–Gordan formula.

Steinberg's formula states that the multiplicity of the irreducible representation of highest weight ν in the tensor product of the irreducible representations with highest weights λ and μ is given by

$$ \sum_{w,w^\prime\in W} \epsilon(ww^\prime)P(w(\lambda+\rho)+w^\prime(\mu+\rho)-(\nu+2\rho))$$

where W is the Weyl group, ε is the determinant of an element of the Weyl group, ρ is the Weyl vector, and P is the Kostant partition function giving the number of ways of writing a vector as a sum of positive roots. diff --git a/wiki/wikipedia/2374.txt b/wiki/wikipedia/2374.txt deleted file mode 100644 index c8f6a051148d9eec7b88bb6b0ed17a3ea3a8b31e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2374.txt +++ /dev/null @@ -1,60 +0,0 @@ In mathematics, Vieta's formulas are formulas that relate the coefficients of a polynomial to sums and products of its roots. Named after François Viète (more commonly referred to by the Latinised form of his name, "Franciscus Vieta"), the formulas are used specifically in algebra.

Any general polynomial of degree n

$$P(x) = a_nx^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0$$

(with the coefficients being real or complex numbers and an ≠ 0) has n (not necessarily distinct) complex roots r1, r2, ..., rn by the fundamental theorem of algebra. Vieta's formulas relate the polynomial's coefficients to signed sums of products of the roots r1, r2, ..., rn as follows:

$$\begin{cases} r_1 + r_2 + \dots + r_{n-1} + r_n = -\dfrac{a_{n-1}}{a_n} \\ (r_1 r_2 + r_1 r_3+\cdots + r_1 r_n) + (r_2r_3 + r_2r_4+\cdots + r_2r_n)+\cdots + r_{n-1}r_n = \dfrac{a_{n-2}}{a_{n}} \\ {} \quad \vdots \\ r_1 r_2 \dots r_n = (-1)^n \dfrac{a_0}{a_n}. \end{cases}$$

Vieta's formulas can equivalently be written as

$$\sum_{1\le i_1 < i_2 < \cdots < i_k\le n} \left(\prod_{j = 1}^k r_{i_j}\right)=(-1)^k\frac{a_{n-k}}{a_n}$$

for k = 1, 2, ..., n (the indices ik are sorted in increasing order to ensure each product of k roots is used exactly once).
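As a quick numerical check of the general formulas above (an added illustration, not part of the original text; it uses numpy's root finder on a sample cubic of our choosing):

```
from itertools import combinations
import numpy as np

coeffs = [2.0, -12.0, 22.0, -12.0]   # 2x^3 - 12x^2 + 22x - 12 = 2(x-1)(x-2)(x-3)
roots = np.roots(coeffs)
n = len(coeffs) - 1
for k in range(1, n + 1):
    # k-th elementary symmetric polynomial of the computed roots
    e_k = sum(np.prod(c) for c in combinations(roots, k))
    vieta = (-1) ** k * coeffs[k] / coeffs[0]   # coeffs[k] is a_{n-k}
    print(k, round(e_k.real, 10), vieta)        # the two columns agree
```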
- -The left-hand sides of Vieta's formulas are the elementary symmetric polynomials of the roots. - -Vieta's formulas are frequently used with polynomials with coefficients in any integral domain R. Then, the quotients $a_i/a_n$ belong to the field of fractions of R (and possibly are in R itself if $a_n$ happens to be invertible in R) and the roots $r_i$ are taken in an algebraically closed extension. Typically, R is the ring of integers, the field of fractions is the field of rational numbers, and the algebraically closed field is the field of complex numbers. - -Vieta's formulas are then useful because they provide relations between the roots without having to compute them. - -For polynomials over a commutative ring that is not an integral domain, Vieta's formulas are only valid when $a_n$ is not a zero-divisor and $P(x)$ factors as $a_n(x-r_1)(x-r_2)\dots(x-r_n)$. For example, in the ring of integers modulo 8, the quadratic polynomial $P(x) = x^2-1$ has four roots: 1, 3, 5, and 7. Vieta's formulas are not true if, say, $r_1=1$ and $r_2=3$, because $P(x)\neq (x-1)(x-3)$. However, $P(x)$ does factor as $(x-1)(x-7)$ and also as $(x-3)(x-5)$, and Vieta's formulas hold if we set either $r_1=1$ and $r_2=7$ or $r_1=3$ and $r_2=5$. - -Vieta's formulas applied to quadratic and cubic polynomials: - -The roots $r_1, r_2$ of the quadratic polynomial $P(x) = ax^2 + bx + c$ satisfy -$$ - r_1 + r_2 = -\frac{b}{a}, \quad r_1 r_2 = \frac{c}{a}. -$$ - -The first of these equations can be used to find the minimum (or maximum) of P. - -The roots $r_1, r_2, r_3$ of the cubic polynomial $P(x) = ax^3 + bx^2 + cx + d$ satisfy -$$ - r_1 + r_2 + r_3 = -\frac{b}{a}, \quad r_1 r_2 + r_1 r_3 + r_2 r_3 = \frac{c}{a}, \quad r_1 r_2 r_3 = -\frac{d}{a}. -$$ - -Vieta's formulas can be proved by expanding the equality -$$ -a_nx^n + a_{n-1}x^{n-1} +\cdots + a_1 x+ a_0 = a_n(x-r_1)(x-r_2)\cdots (x-r_n) -$$ - -(which is true since $r_1, r_2, \dots, r_n$ are all the roots of this polynomial), multiplying the factors on the right-hand side, and identifying the coefficients of each power of $x.$ - -Formally, if one expands $(x-r_1) (x-r_2) \cdots (x-r_n),$ the terms are precisely $(-1)^{n-k}r_1^{b_1}\cdots r_n^{b_n} x^k,$ where $b_i$ is either 0 or 1, according to whether $r_i$ is included in the product or not, and k is the number of $r_i$ that are excluded, so the total number of factors in the product is n (counting $x^k$ with multiplicity k); as there are n binary choices (include $r_i$ or x), there are $2^n$ terms; geometrically, these can be understood as the vertices of a hypercube. Grouping these terms by degree yields the elementary symmetric polynomials in $r_i$: for $x^k$, all distinct k-fold products of $r_i.$ - -As an example, consider the quadratic $f(x) = a_2x^2 + a_1x + a_0 = a_2(x-r_1)(x-r_2) = a_2\left(x^2 - (r_1+r_2)x + r_1r_2\right)$. Comparing identical powers of $x$, we find $a_2=a_2$, $a_1=-a_2 (r_1+r_2) $ and $ a_0 = a_2 (r_1r_2) $, from which we can identify, for example, $ r_1+r_2 = - a_1/a_2 $ and $ r_1r_2 = a_0/a_2 $, which are Vieta's formulas for $n=2$. - -As reflected in the name, the formulas were discovered by the 16th-century French mathematician François Viète, for the case of positive roots. - -In the opinion of the 18th-century British mathematician Charles Hutton, as quoted by Funkhouser, the general principle (not only for positive real roots) was first understood by the 17th-century French mathematician Albert Girard: - -
    ...[Girard was] the first person who understood the general doctrine of the formation of the coefficients of the powers from the sum of the roots and their products. He was the first who discovered the rules for summing the powers of the roots of any equation.
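Girard's "rules for summing the powers of the roots" mentioned in the quote are formalized today as Newton's identities. The sketch below is not part of the original article; it recovers the power sums $p_k = r_1^k + \cdots + r_n^k$ from the coefficients alone, using the elementary symmetric values $e_k = (-1)^k a_{n-k}/a_n$ that Vieta's formulas provide:

```python
# Newton's identities: p_k = e_1 p_{k-1} - e_2 p_{k-2} + ... + (-1)^{k-1} k e_k,
# with e_i = 0 for i > n, so power sums follow from the coefficients alone.
def power_sums(a, kmax):
    n = len(a) - 1                               # a = [a_0, a_1, ..., a_n]
    e = [1.0] + [(-1) ** k * a[n - k] / a[n] for k in range(1, n + 1)]
    e += [0.0] * max(0, kmax - n)                # e_k = 0 for k > n
    p = [float(n)]                               # p_0 = n
    for k in range(1, kmax + 1):
        s = (-1) ** (k - 1) * k * e[k]
        s += sum((-1) ** (i - 1) * e[i] * p[k - i] for i in range(1, k))
        p.append(s)
    return p[1:]

# x^2 - 5x + 6 = (x-2)(x-3): p_1 = 5, p_2 = 4 + 9 = 13, p_3 = 8 + 27 = 35.
print(power_sums([6.0, -5.0, 1.0], 3))           # [5.0, 13.0, 35.0]
```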
diff --git a/wiki/wikipedia/2375.txt b/wiki/wikipedia/2375.txt deleted file mode 100644 index 380de10f9cfcc366ae3e11406fff09633e291aa8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2375.txt +++ /dev/null @@ -1,23 +0,0 @@ -Nielsen theory is a branch of mathematical research with its origins in topological fixed-point theory. Its central ideas were developed by Danish mathematician Jakob Nielsen, and bear his name. - -The theory developed in the study of the so-called minimal number of a map f from a compact space to itself, denoted MF[f]. This is defined as: -$$ -\mathit{MF}[f] = \min \{ \# \mathrm{Fix}(g) \mid g \sim f \}, -$$ - -where ~ indicates homotopy of mappings, and #Fix(g) indicates the number of fixed points of g. The minimal number was very difficult to compute in Nielsen's time, and remains so today. Nielsen's approach is to group the fixed-point set into classes, which are judged "essential" or "nonessential" according to whether or not they can be "removed" by a homotopy. - -Nielsen's original formulation is equivalent to the following: - -We define an equivalence relation on the set of fixed points of a self-map f on a space X. We say that x is equivalent to y if and only if there exists a path c from x to y with f(c) homotopic to c as paths. The equivalence classes with respect to this relation are called the Nielsen classes of f, and the Nielsen number N(f) is defined as the number of Nielsen classes having non-zero fixed-point index sum. - -Nielsen proved that -$$ -N(f) \le \mathit{MF}[f], -$$ - -making his invariant a good tool for estimating the much more difficult MF[f]. This leads immediately to what is now known as the Nielsen fixed-point theorem: Any map f has at least N(f) fixed points. - -Because of its definition in terms of the fixed-point index, the Nielsen number is closely related to the Lefschetz number. Indeed, shortly after Nielsen's initial work, the two invariants were combined into a single "generalized Lefschetz number" (more recently called the Reidemeister trace) by Wecken and Reidemeister. diff --git a/wiki/wikipedia/2376.txt b/wiki/wikipedia/2376.txt deleted file mode 100644 index 949ffe745f3901a4686cec3fc6d16db60a553681..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2376.txt +++ /dev/null @@ -1,43 +0,0 @@ -Proof by exhaustion, also known as proof by cases, proof by case analysis, complete induction or the brute force method, is a method of mathematical proof in which the statement to be proved is split into a finite number of cases or sets of equivalent cases, and where each type of case is checked to see if the proposition in question holds. This is a method of direct proof. A proof by exhaustion typically contains two stages: - -# A proof that the set of cases is exhaustive; i.e., that each instance of the statement to be proved matches the conditions of (at least) one of the cases. - -# A proof of each of the cases. - -The prevalence of digital computers has greatly increased the convenience of using the method of exhaustion (e.g., the first computer-assisted proof of the four color theorem in 1976), though such approaches can also be challenged on the basis of mathematical elegance. Expert systems can be used to arrive at answers to many of the questions posed to them. In theory, the proof by exhaustion method can be used whenever the number of cases is finite. However, because most mathematical sets are infinite, this method is rarely used to derive general mathematical results.
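When the case split is finite, the two stages above lend themselves to a machine check. As a toy preview of the perfect-cube fact proved below (this sketch is not part of the original article), note that $n^3 \bmod 9$ depends only on $n \bmod 9$, so nine residues form an exhaustive set of cases:

```python
# Stage 1 (exhaustiveness): n^3 mod 9 depends only on n mod 9, so the nine
# residues 0..8 cover every integer. Stage 2 (case check): cube each residue.
residues = {(n ** 3) % 9 for n in range(9)}
assert residues == {0, 1, 8}, residues
print("every perfect cube is 0, 1, or 8 (i.e., -1) modulo 9:", sorted(residues))
```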
- -In the Curry–Howard isomorphism, proof by exhaustion and case analysis are related to ML-style pattern matching. - -Proof by exhaustion can be used to prove that if an integer is a perfect cube, then it must be either a multiple of 9, 1 more than a multiple of 9, or 1 less than a multiple of 9. - -Proof: - -
Each perfect cube is the cube of some integer n, where n is either a multiple of 3, 1 more than a multiple of 3, or 1 less than a multiple of 3. So these three cases are exhaustive: - -*Case 1: If n = 3p, then $n^3 = 27p^3$, which is a multiple of 9. - -*Case 2: If n = 3p + 1, then $n^3 = 27p^3 + 27p^2 + 9p + 1$, which is 1 more than a multiple of 9. For instance, if n = 4 then $n^3 = 64 = 9 \times 7 + 1$. - -*Case 3: If n = 3p − 1, then $n^3 = 27p^3 - 27p^2 + 9p - 1$, which is 1 less than a multiple of 9. For instance, if n = 5 then $n^3 = 125 = 9 \times 14 - 1$. Q.E.D. - -Mathematicians prefer to avoid proofs by exhaustion with large numbers of cases, which are viewed as inelegant. An illustration of how such proofs can be inelegant is given by the following two proofs that all modern Summer Olympic Games are held in years which are divisible by 4: - -Proof: The first modern Summer Olympics were held in 1896, and then every 4 years thereafter (neglecting exceptions such as when the games were not held due to World War I and World War II, along with the 2020 Tokyo Olympics being postponed to 2021 due to the COVID-19 pandemic). Since 1896 = 474 × 4 is divisible by 4, the next Olympics would be in year 474 × 4 + 4 = (474 + 1) × 4, which is also divisible by four, and so on (this is a proof by mathematical induction). Therefore the statement is proved. - -The statement can also be proved by exhaustion by listing out every year in which the Summer Olympics were held, and checking that every one of them can be divided by four. With 28 total Summer Olympics as of 2016, this is a proof by exhaustion with 28 cases. - -In addition to being less elegant, the proof by exhaustion will also require an extra case each time a new Summer Olympics is held. This is to be contrasted with the proof by mathematical induction, which proves the statement indefinitely into the future. - -There is no upper limit to the number of cases allowed in a proof by exhaustion. Sometimes there are only two or three cases. Sometimes there may be thousands or even millions. For example, rigorously solving a chess endgame puzzle might involve considering a very large number of possible positions in the game tree of that problem. - -The first proof of the four colour theorem was a proof by exhaustion with 1834 cases. This proof was controversial because the majority of the cases were checked by a computer program, not by hand. The shortest known proof of the four colour theorem today still has over 600 cases. - -In general the probability of an error in the whole proof increases with the number of cases. A proof with a large number of cases leaves an impression that the theorem is only true by coincidence, and not because of some underlying principle or connection. Other types of proofs, such as proof by mathematical induction, are considered more elegant. However, there are some important theorems for which no other method of proof has been found, such as - -* The proof that there is no finite projective plane of order 10. - -* The classification of finite simple groups. - -* The Kepler conjecture. - -* The Boolean Pythagorean triples problem. diff --git a/wiki/wikipedia/2377.txt b/wiki/wikipedia/2377.txt deleted file mode 100644 index af5779984c31944c9927e9fd16c0e3f6ceb501fc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2377.txt +++ /dev/null @@ -1,104 +0,0 @@ -Goldbach's conjecture is one of the oldest and best-known unsolved problems in number theory and all of mathematics.
It states that every even whole number greater than 2 is the sum of two prime numbers. - -The conjecture has been shown to hold for all integers less than $4 \times 10^{18}$, but remains unproven despite considerable effort. - -On 7 June 1742, the German mathematician Christian Goldbach wrote a letter to Leonhard Euler (letter XLIII), in which he proposed the following conjecture: - -Every integer that can be written as the sum of two primes can also be written as the sum of as many primes as one wishes, until all terms are units. - -He also proposed a second conjecture in the margin of his letter: every integer greater than 2 can be written as the sum of three primes. Euler replied, reminding Goldbach of an earlier conversation in which Goldbach had remarked that this marginal conjecture would follow from the statement that every even integer greater than 2 can be written as the sum of two primes. Goldbach was following the now-abandoned convention of considering 1 to be a prime number, so that a sum of units would indeed be a sum of primes. - -Each of the three conjectures above has a natural analog in terms of the modern definition of a prime, under which 1 is excluded. - -A modern version of the first conjecture is: - -Every integer that can be written as the sum of two primes can also be written as the sum of as many primes as one wishes, until either all terms are two (if the integer is even) or one term is three and all other terms are two (if the integer is odd). - -A modern version of the marginal conjecture is: - -Every integer greater than 5 can be written as the sum of three primes. - -And a modern version of Goldbach's older conjecture of which Euler reminded him is: - -Every even integer greater than 2 can be written as the sum of two primes. - -These modern versions might not be entirely equivalent to the corresponding original statements. For example, if there were an even integer $ N=p+1 $ larger than 4, for $ p $ a prime, that could not be expressed as the sum of two primes in the modern sense, then it would be a counterexample to the modern version of the third conjecture (without being a counterexample to the original version). The modern version is thus probably stronger (but in order to confirm that, one would have to prove that the first version, freely applied to any positive even integer $n$, could not possibly rule out the existence of such a specific counterexample $N$). In any case, the modern statements have the same relationships with each other as the older statements did. That is, the second and third modern statements are equivalent, and either implies the first modern statement. - -The third modern statement (equivalent to the second) is the form in which the conjecture is usually expressed today. It is also known as the "strong", "even", or "binary" Goldbach conjecture. A weaker form of the second modern statement, known as "Goldbach's weak conjecture", the "odd Goldbach conjecture", or the "ternary Goldbach conjecture," asserts that - -Every odd integer greater than 7 can be written as the sum of three odd primes. - -A proof for the weak conjecture was proposed in 2013 by Harald Helfgott. Helfgott's proof has not yet appeared in a peer-reviewed publication, though it was accepted for publication in the Annals of Mathematics Studies series in 2015, and has been undergoing further review and revision since. The weak conjecture would be a corollary of the strong conjecture: if n – 3 is a sum of two primes, then n is a sum of three primes. However, the converse implication and thus the strong Goldbach conjecture remain unproven. - -For small values of n, the strong Goldbach conjecture (and hence the weak Goldbach conjecture) can be verified directly. For instance, in 1938, Nils Pipping laboriously verified the conjecture up to $n \leq 10^5$. With the advent of computers, many more values of n have been checked; T.
Oliveira e Silva ran a distributed computer search that has verified the conjecture for $n \leq 4 \times 10^{18}$ (and double-checked up to $4 \times 10^{17}$) as of 2013. One record from this search is that 3,325,581,707,333,960,528 is the smallest number that cannot be written as a sum of two primes where one is smaller than 9781. - -Statistical considerations that focus on the probabilistic distribution of prime numbers present informal evidence in favour of the conjecture (in both the weak and strong forms) for sufficiently large integers: the greater the integer, the more ways there are available for that number to be represented as the sum of two or three other numbers, and the more "likely" it becomes that at least one of these representations consists entirely of primes. - -A very crude version of the heuristic probabilistic argument (for the strong form of the Goldbach conjecture) is as follows. The prime number theorem asserts that an integer m selected at random has roughly a $1/\ln m$ chance of being prime. Thus if n is a large even integer and m is a number between 3 and n/2, then one might expect the probability of m and n - m simultaneously being prime to be $1 \big/ \big[\ln m \ln(n - m)\big]$. If one pursues this heuristic, one might expect the total number of ways to write a large even integer n as the sum of two odd primes to be roughly -$$ -\sum_{m=3}^{n/2} \frac{1}{\ln m} \frac{1}{\ln(n - m)} \approx \frac{n}{2 (\ln n)^2}. -$$ - -Since $\ln n \ll \sqrt n$, this quantity goes to infinity as n increases, and we would expect that every large even integer has not just one representation as the sum of two primes, but in fact very many such representations. - -This heuristic argument is actually somewhat inaccurate, because it assumes that the events of m and n − m being prime are statistically independent of each other. For instance, if m is odd, then n − m is also odd, and if m is even, then n − m is even, a non-trivial relation because, besides the number 2, only odd numbers can be prime. Similarly, if n is divisible by 3, and m was already a prime distinct from 3, then n − m would also be coprime to 3 and thus be slightly more likely to be prime than a general number. Pursuing this type of analysis more carefully, G. H. Hardy and John Edensor Littlewood in 1923 conjectured (as part of their Hardy–Littlewood prime tuple conjecture) that for any fixed c ≥ 2, the number of representations of a large integer n as the sum of c primes $n = p_1 + \cdots + p_c$ with $p_1 \leq \cdots \leq p_c$ should be asymptotically equal to -$$ -\left(\prod_p \frac{p \gamma_{c,p}(n)}{(p - 1)^c}\right) \int_{2 \leq x_1 \leq \cdots \leq x_c \colon x_1 + \cdots + x_c = n} \frac{dx_1 \cdots dx_{c-1}}{\ln x_1 \cdots \ln x_c}, -$$ - -where the product is over all primes p, and $\gamma_{c,p}(n)$ is the number of solutions to the equation -$$ -n \equiv q_1 + \cdots + q_c \pmod p -$$ in modular arithmetic, subject to the constraints $q_1, \ldots, q_c \not\equiv 0 \pmod p$. This formula has been rigorously proven to be asymptotically valid for c ≥ 3 from the work of Ivan Matveevich Vinogradov, but is still only a conjecture when $c = 2$.
In the latter case, the above formula simplifies to 0 when n is odd, and to -$$ -2 \Pi_2 \left(\prod_{p \mid n; p \geq 3} \frac{p - 1}{p - 2}\right) \int_2^n \frac{dx}{(\ln x)^2} \approx 2 \Pi_2 \left(\prod_{p \mid n; p \geq 3} \frac{p - 1}{p - 2}\right) \frac{n}{(\ln n)^2} -$$ - -when n is even, where $\Pi_2$ is Hardy–Littlewood's twin prime constant -$$ -\Pi_2 := \prod_{\substack{p\ \mathrm{prime} \\ p \ge 3}} \left(1 - \frac{1}{(p-1)^2}\right) \approx 0.66016 18158 46869 57392 78121 10014\dots -$$ - -This is sometimes known as the extended Goldbach conjecture. The strong Goldbach conjecture is in fact very similar to the twin prime conjecture, and the two conjectures are believed to be of roughly comparable difficulty. - -The Goldbach partition function can be displayed as a histogram, which illustrates the above equations. See Goldbach's comet for more information. - -Goldbach's comet also suggests that there are tight upper and lower bounds on the number of representations, and that the residue of 2n modulo 6 plays a part in the number of representations. - -The number of representations is about $n\ln n$, from $2n = p + c$ and the Prime Number Theorem. If each c is composite, then it must have a prime factor less than or equal to the square root of $2n$, by the method outlined in trial division. - -This leads to an expectation of $\frac{n\ln n}{\sqrt{2n}} = \sqrt \frac{n}{2}\ln n$ representations. - -The strong Goldbach conjecture is much more difficult than the weak Goldbach conjecture. Using Vinogradov's method, Nikolai Chudakov, Johannes van der Corput, and Theodor Estermann showed that almost all even numbers can be written as the sum of two primes (in the sense that the fraction of even numbers which can be so written tends towards 1). In 1930, Lev Schnirelmann proved that any natural number greater than 1 can be written as the sum of not more than C prime numbers, where C is an effectively computable constant; see Schnirelmann density. Schnirelmann's constant is the lowest number C with this property. Schnirelmann himself obtained C < 800,000. This result was subsequently enhanced by many authors, such as Olivier Ramaré, who in 1995 showed that every even number n ≥ 4 is in fact the sum of at most 6 primes. The best known result currently stems from the proof of the weak Goldbach conjecture by Harald Helfgott, which directly implies that every even number n ≥ 4 is the sum of at most 4 primes. - -In 1924 Hardy and Littlewood showed under the assumption of the generalized Riemann hypothesis that the number of even numbers up to X violating the Goldbach conjecture is much less than $X^{(1/2)+c}$ for small c. - -Chen Jingrun showed in 1973 using the methods of sieve theory that every sufficiently large even number can be written as the sum of either two primes, or a prime and a semiprime (the product of two primes). See Chen's theorem for further information. - -In 1975, Hugh Montgomery and Robert Charles Vaughan showed that "most" even numbers are expressible as the sum of two primes. More precisely, they showed that there exist positive constants c and C such that for all sufficiently large numbers N, every even number less than N is the sum of two primes, with at most $C N^{1-c}$ exceptions. In particular, the set of even integers that are not the sum of two primes has density zero. - -In 1951, Yuri Linnik proved the existence of a constant K such that every sufficiently large even number is the sum of two primes and at most K powers of 2.
Roger Heath-Brown and Jan-Christoph Schlage-Puchta found in 2002 that K = 13 works. - -Although Goldbach's conjecture implies that every positive integer greater than one can be written as a sum of at most three primes, it is not always possible to find such a sum using a greedy algorithm that uses the largest possible prime at each step. The Pillai sequence tracks the numbers requiring the largest number of primes in their greedy representations. - -Similar problems to Goldbach's conjecture exist in which primes are replaced by other particular sets of numbers, such as the squares: - -* It was proven by Lagrange that every positive integer is the sum of four squares. See Waring's problem and the related Waring–Goldbach problem on sums of powers of primes. - -* Hardy and Littlewood listed as their Conjecture I: "Every large odd number (n > 5) is the sum of a prime and the double of a prime" (Mathematics Magazine, 66.1 (1993): 45–47). This conjecture is known as Lemoine's conjecture and is also called Levy's conjecture. - -* The Goldbach conjecture for practical numbers, a prime-like sequence of integers, was stated by Margenstern in 1984, and proved by Melfi in 1996: every even number is a sum of two practical numbers. - -* A strengthening of the Goldbach conjecture proposed by Harvey Dubner states that every even integer greater than 4,208 is the sum of two twin primes. Only 33 even integers less than 4,208 are not the sum of two twin primes. Dubner has verified computationally that this list is complete up to $2 \times 10^{10}$. A proof of this stronger conjecture would not only imply Goldbach's conjecture, but also the twin prime conjecture. - -Goldbach's Conjecture is the title of the biography of Chinese mathematician and number theorist Chen Jingrun, written by Xu Chi. - -The conjecture is a central point in the plot of the 1992 novel Uncle Petros and Goldbach's Conjecture by Greek author Apostolos Doxiadis, in the short story "Sixty Million Trillion Combinations" by Isaac Asimov and also in the 2008 mystery novel No One You Know by Michelle Richmond. - -Goldbach's conjecture is part of the plot of the 2007 Spanish film Fermat's Room. diff --git a/wiki/wikipedia/2378.txt b/wiki/wikipedia/2378.txt deleted file mode 100644 index 12e3821087cd0cc618953f11c3ac655b9a09f54f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2378.txt +++ /dev/null @@ -1,23 +0,0 @@ -In mathematics, the max–min inequality is as follows: for any function $f : Z \times W \to \mathbb{R} $, -$$ -\sup_{z \in Z} \inf_{w \in W} f(z, w) \leq \inf_{w \in W} \sup_{z \in Z} f(z, w). -$$ - -When equality holds one says that f, W, and Z satisfy a strong max–min property (or a saddle-point property). As the function $f(z,w)=\sin(z+w)$ illustrates, this equality does not always hold. A theorem giving conditions on f, W, and Z in order to guarantee the saddle point property is called a minimax theorem. - -Define $ g(z) \triangleq \inf_{w \in W} f(z, w) $.
-$$ -\forall w\in W, \forall z\in Z, g(z) \leq f(z, w) -$$ -$$ -\Longrightarrow \forall w\in W, \sup_{z\in Z} g(z) \leq \sup_{z\in Z} f(z, w) -$$ -$$ -\Longrightarrow \sup_{z\in Z} g(z) \leq \inf_{w\in W} \sup_{z\in Z} f(z, w) -$$ -$$ -\Longrightarrow \sup_{z\in Z} \inf_{w\in W} f(z,w) \leq \inf_{w\in W} \sup_{z\in Z} f(z, w) \qquad \square -$$ diff --git a/wiki/wikipedia/2379.txt b/wiki/wikipedia/2379.txt deleted file mode 100644 index c519250686e0c99dc35b66738295bdfba9951887..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2379.txt +++ /dev/null @@ -1,105 +0,0 @@ -Linear programming (LP, also called linear optimization) is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming (also known as mathematical optimization). - -More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope, which is a set defined as the intersection of finitely many half spaces, each of which is defined by a linear inequality. Its objective function is a real-valued affine (linear) function defined on this polyhedron. A linear programming algorithm finds a point in the polytope where this function has the smallest (or largest) value if such a point exists. - -Linear programs are problems that can be expressed in canonical form as - - \begin{align} - -& \text{Find a vector} && \mathbf{x} \\ - -& \text{that maximizes} && \mathbf{c}^T \mathbf{x}\\ - -& \text{subject to} && A \mathbf{x} \leq \mathbf{b} \\ - -& \text{and} && \mathbf{x} \ge \mathbf{0}. - -\end{align} - -Here the components of x are the variables to be determined, c and b are given vectors (with $\mathbf{c}^T$ indicating that the coefficients of c are used as a single-row matrix for the purpose of forming the matrix product), and A is a given matrix. The function whose value is to be maximized or minimized ($\mathbf x\mapsto\mathbf{c}^T\mathbf{x}$ in this case) is called the objective function. The inequalities Ax ≤ b and x ≥ 0 are the constraints which specify a convex polytope over which the objective function is to be optimized. In this context, two vectors are comparable when they have the same dimensions. If every entry in the first is less-than or equal-to the corresponding entry in the second, then it can be said that the first vector is less-than or equal-to the second vector. - -Linear programming can be applied to various fields of study. It is widely used in mathematics, and to a lesser extent in business, economics, and for some engineering problems. Industries that use linear programming models include transportation, energy, telecommunications, and manufacturing. It has proven useful in modeling diverse types of problems in planning, routing, scheduling, assignment, and design. - -The problem of solving a system of linear inequalities dates back at least as far as Fourier, who in 1827 published a method for solving them, and after whom the method of Fourier–Motzkin elimination is named. - -In 1939 a linear programming formulation of a problem that is equivalent to the general linear programming problem was given by the Soviet mathematician and economist Leonid Kantorovich, who also proposed a method for solving it. 
Linear programming is a method he developed, during World War II, to plan expenditures and returns in order to reduce costs to the army and to increase losses imposed on the enemy. Kantorovich's work was initially neglected in the USSR. About the same time as Kantorovich, the Dutch-American economist T. C. Koopmans formulated classical economic problems as linear programs. Kantorovich and Koopmans later shared the 1975 Nobel prize in economics. In 1947, George Dantzig invented the simplex method, which for the first time efficiently tackled the linear programming problem in most cases. A larger theoretical and practical breakthrough in the field came in 1984, when Narendra Karmarkar introduced a new interior-point method for solving linear-programming problems. - -Linear programming is a widely used field of optimization for several reasons. Many practical problems in operations research can be expressed as linear programming problems. In rare practical problems, the usual versions of the simplex algorithm may actually "cycle". - -However, the simplex algorithm has poor worst-case behavior: Klee and Minty constructed a family of linear programming problems for which the simplex method takes a number of steps exponential in the problem size. - -In contrast to the simplex algorithm, which finds an optimal solution by traversing the edges between vertices on a polyhedral set, interior-point methods move through the interior of the feasible region. - -Khachiyan's ellipsoid method, introduced in 1979, was the first worst-case polynomial-time algorithm found for linear programming. To solve a problem which has n variables and can be encoded in L input bits, this algorithm runs in $ O(n^6 L) $ time. Since Karmarkar's discovery, many interior-point methods have been proposed and analyzed. - -In 1987, Vaidya proposed an algorithm that runs in $ O(n^3) $ time. - -In 1989, Vaidya developed an algorithm that runs in $O(n^{2.5})$ time. Formally speaking, the algorithm takes $O( (n+d)^{1.5} n L)$ arithmetic operations in the worst case, where $d$ is the number of constraints, $ n $ is the number of variables, and $L$ is the number of bits. - -In 2015, Lee and Sidford showed that it can be solved in $\tilde O((\mathrm{nnz}(A) + d^2)\sqrt{d}L)$ time, where $\mathrm{nnz}(A)$ represents the number of non-zero elements, and it remains taking $O(n^{2.5}L)$ in the worst case. - -In 2019, Cohen, Lee and Song improved the running time to $\tilde O( ( n^{\omega} + n^{2.5-\alpha/2} + n^{2+1/6} ) L)$ time, where $ \omega $ is the exponent of matrix multiplication and $ \alpha $ is the dual exponent of matrix multiplication. $ \alpha $ is (roughly) defined to be the largest number such that one can multiply an $ n \times n $ matrix by an $ n \times n^\alpha $ matrix in $ O(n^2) $ time. In a follow-up work, Lee, Song and Zhang reproduced the same result via a different method. These two algorithms remain $\tilde O( n^{2+1/6} L ) $ when $ \omega = 2 $ and $ \alpha = 1 $. The result due to Jiang, Song, Weinstein and Zhang improved $ \tilde O ( n^{2+1/6} L) $ to $ \tilde O ( n^{2+1/18} L) $. - -The current opinion is that the efficiencies of good implementations of simplex-based methods and interior point methods are similar for routine applications of linear programming.
However, for specific types of LP problems, it may be that one type of solver is better than another (sometimes much better), and that the structure of the solutions generated by interior point methods versus simplex-based methods is significantly different, with the support set of active variables typically smaller for the latter. - -There are several open problems in the theory of linear programming, the solution of which would represent fundamental breakthroughs in mathematics and potentially major advances in our ability to solve large-scale linear programs. - -* Does LP admit a strongly polynomial-time algorithm? - -* Does LP admit a strongly polynomial-time algorithm to find a strictly complementary solution? - -* Does LP admit a polynomial-time algorithm in the real number (unit cost) model of computation? - -This closely related set of problems has been cited by Stephen Smale as among the 18 greatest unsolved problems of the 21st century. In Smale's words, the third version of the problem "is the main unsolved problem of linear programming theory." While algorithms exist to solve linear programming in weakly polynomial time, such as the ellipsoid methods and interior-point techniques, no algorithms have yet been found that allow strongly polynomial-time performance in the number of constraints and the number of variables. The development of such algorithms would be of great theoretical interest, and perhaps allow practical gains in solving large LPs as well. - -Although the Hirsch conjecture was recently disproved for higher dimensions, it still leaves the following questions open. - -* Are there pivot rules which lead to polynomial-time simplex variants? - -* Do all polytopal graphs have polynomially bounded diameter? - -These questions relate to the performance analysis and development of simplex-like methods. The immense efficiency of the simplex algorithm in practice despite its exponential-time theoretical performance hints that there may be variations of simplex that run in polynomial or even strongly polynomial time. It would be of great practical and theoretical significance to know whether any such variants exist, particularly as an approach to deciding if LP can be solved in strongly polynomial time. - -The simplex algorithm and its variants fall in the family of edge-following algorithms, so named because they solve linear programming problems by moving from vertex to vertex along edges of a polytope. This means that their theoretical performance is limited by the maximum number of edges between any two vertices on the LP polytope. As a result, we are interested in knowing the maximum graph-theoretical diameter of polytopal graphs. It has been proved that all polytopes have subexponential diameter. The recent disproof of the Hirsch conjecture is the first step to prove whether any polytope has superpolynomial diameter. If any such polytopes exist, then no edge-following variant can run in polynomial time. Questions about polytope diameter are of independent mathematical interest. - -Simplex pivot methods preserve primal (or dual) feasibility. On the other hand, criss-cross pivot methods do not preserve (primal or dual) feasibility; they may visit primal feasible, dual feasible or primal-and-dual infeasible bases in any order. Pivot methods of this type have been studied since the 1970s. Essentially, these methods attempt to find the shortest pivot path on the arrangement polytope under the linear programming problem.
In contrast to polytopal graphs, graphs of arrangement polytopes are known to have small diameter, allowing the possibility of a strongly polynomial-time criss-cross pivot algorithm without resolving questions about the diameter of general polytopes. - -If all of the unknown variables are required to be integers, then the problem is called an integer programming (IP) or integer linear programming (ILP) problem. In contrast to linear programming, which can be solved efficiently in the worst case, integer programming problems are in many practical situations (those with bounded variables) NP-hard. 0–1 integer programming or binary integer programming (BIP) is the special case of integer programming where variables are required to be 0 or 1 (rather than arbitrary integers). This problem is also classified as NP-hard, and in fact the decision version was one of Karp's 21 NP-complete problems. - -If only some of the unknown variables are required to be integers, then the problem is called a mixed integer programming (MIP) problem. These are generally also NP-hard because they are even more general than ILP programs. - -There are however some important subclasses of IP and MIP problems that are efficiently solvable, most notably problems where the constraint matrix is totally unimodular and the right-hand sides of the constraints are integers or, more generally, where the system has the total dual integrality (TDI) property. - -Advanced algorithms for solving integer linear programs include: - -* cutting-plane method - -* Branch and bound - -* Branch and cut - -* Branch and price - -* if the problem has some extra structure, it may be possible to apply delayed column generation. - -Such integer-programming algorithms are discussed by Padberg and in Beasley. - -A linear program in real variables is said to be integral if it has at least one optimal solution which is integral. Likewise, a polyhedron $P = \{x \mid Ax \ge 0\}$ is said to be integral if for all bounded feasible objective functions c, the linear program $\{\max cx \mid x \in P\}$ has an optimum $x^*$ with integer coordinates. As observed by Edmonds and Giles in 1977, one can equivalently say that the polyhedron $P$ is integral if for every bounded feasible integral objective function c, the optimal value of the linear program $\{\max cx \mid x \in P\}$ is an integer. - -Integral linear programs are of central importance in the polyhedral aspect of combinatorial optimization since they provide an alternate characterization of a problem. Specifically, for any problem, the convex hull of the solutions is an integral polyhedron; if this polyhedron has a nice/compact description, then we can efficiently find the optimal feasible solution under any linear objective. Conversely, if we can prove that a linear programming relaxation is integral, then it is the desired description of the convex hull of feasible (integral) solutions.
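To make the integrality phenomenon concrete, here is a small sketch that is not from the original article; it assumes `scipy` and solves the LP relaxation of a maximum-weight bipartite matching instance. The incidence constraints form a totally unimodular matrix, so the relaxation returns a 0/1 vertex even though no integrality constraint is imposed:

```python
import numpy as np
from scipy.optimize import linprog

# LP relaxation of maximum-weight bipartite matching on a 3x3 instance.
w = np.array([[4.0, 1.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 3.0, 6.0]])         # w[i][j]: weight of edge (i, j)

c = -w.flatten()                         # linprog minimizes, so negate
A_ub = []
for i in range(3):                       # each left vertex matched <= once
    row = np.zeros(9); row[3 * i: 3 * i + 3] = 1; A_ub.append(row)
for j in range(3):                       # each right vertex matched <= once
    row = np.zeros(9); row[j::3] = 1; A_ub.append(row)
b_ub = [1.0] * 6

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 9)
print(res.x.reshape(3, 3))               # 0/1 matrix despite no integrality
print("matching weight:", -res.fun)      # 4 + 5 + 6 = 15
```

Replacing the bipartite structure with an arbitrary constraint matrix would in general produce fractional optima; the integrality here is a property of the polyhedron, not of the solver.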
- -Terminology is not consistent throughout the literature, so one should be careful to distinguish the following two concepts: - -* in an integer linear program, described in the previous section, variables are forcibly constrained to be integers, and this problem is NP-hard in general, - -* in an integral linear program, described in this section, variables are not constrained to be integers but rather one has proven somehow that the continuous problem always has an integral optimal value (assuming c is integral), and this optimal value may be found efficiently since all polynomial-size linear programs can be solved in polynomial time. - -One common way of proving that a polyhedron is integral is to show that it is totally unimodular. There are other general methods including the integer decomposition property and total dual integrality. Other specific well-known integral LPs include the matching polytope, lattice polyhedra, submodular flow polyhedra, and the intersection of two generalized polymatroids/g-polymatroids – e.g. see Schrijver 2003. - -Solvers are available under permissive, copyleft (reciprocal), and proprietary licenses. MINTO (Mixed Integer Optimizer, an integer programming solver which uses a branch and bound algorithm) has publicly available source code but is not open source. diff --git a/wiki/wikipedia/238.txt b/wiki/wikipedia/238.txt deleted file mode 100644 index 59941550bac44cfd604df15ee3d5e3b7cc246aff..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/238.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, the Brauer–Suzuki theorem, proved by Richard Brauer and Michio Suzuki, states that if a finite group has a generalized quaternion Sylow 2-subgroup and no non-trivial normal subgroups of odd order, then the group has a center of order 2. In particular, such a group cannot be simple. - -A generalization of the Brauer–Suzuki theorem is given by Glauberman's Z* theorem. diff --git a/wiki/wikipedia/2380.txt b/wiki/wikipedia/2380.txt deleted file mode 100644 index 844b83bc2c87ae072f126a47475c96ce23a6026a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2380.txt +++ /dev/null @@ -1,36 +0,0 @@ -#redirect Kinoshita–Lee–Nauenberg theorem diff --git a/wiki/wikipedia/2381.txt b/wiki/wikipedia/2381.txt deleted file mode 100644 index 5ab60ed0f8d2508ce37e6678eb89a16fb8ab3dad..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2381.txt +++ /dev/null @@ -1,19 +0,0 @@ -Syncplicity is a file sharing and synchronization service developed by Syncplicity Inc. The service lets users store and synchronize files between computers. It supports Microsoft Windows and macOS. - -The service was initially available as a beta test, and became public in 2008. - -In late 2010, a client for Intel-based Macintosh computers running Mac OS X version 10.6 or later was released. - -On May 21, 2012, Syncplicity, Inc. was acquired by EMC Corporation. - -In July 2015, Skyview Capital, a global private investment firm, purchased Syncplicity from EMC. - -In February 2017, Axway purchased Syncplicity from Skyview Capital. - -Syncplicity offers both free and paid accounts. - -Several file synchronization and backup services launched around the same time as Syncplicity, including Live Mesh, Dropbox, and SugarSync.
Syncplicity allows synchronization with other online services including Google Docs, Zoho, and Facebook. Documents can be synchronized with an associated Google Docs account from Windows or Macintosh computers; however, documents uploaded to free Google Docs accounts will be converted to Google Docs file formats where conversion is supported, and otherwise ignored. Photos can be synchronized with Facebook albums. Online services including Scribd and Picnik are supported by Syncplicity. - -In 2008, Syncplicity was rated the second best synchronization software behind Dropbox in a Lifehacker reader poll, and PCWorld's reviewer called Syncplicity "my top pick among sync services". - -A later review (under EMC ownership) found that Syncplicity might not be able to compete well with Dropbox and SugarSync on price and storage, but has features, including security and availability, that might be attractive to business users. The free version offers less storage than other free services. diff --git a/wiki/wikipedia/2382.txt b/wiki/wikipedia/2382.txt deleted file mode 100644 index fc217344a6be6096cebc9043841c4159a23877a9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2382.txt +++ /dev/null @@ -1,61 +0,0 @@ -Microsoft OneDrive (formerly SkyDrive) is a file hosting service that Microsoft operates. First launched in August 2007, it enables registered users to share and synchronize their files. OneDrive also works as the storage back-end of the web version of Microsoft Office. OneDrive offers 5 GB of storage space free of charge, with 100 GB, 1 TB, and 6 TB storage options available either separately or with Office 365 subscriptions. - -The OneDrive client app adds file synchronization and cloud backup features to the device on which it is installed. The app comes bundled with Microsoft Windows and is available for macOS, Android, iOS, Windows Phone, Xbox 360, Xbox One, and Xbox Series X and S. In addition, Microsoft Office apps directly integrate with OneDrive. - -At its launch the service, known as Windows Live Folders at the time (with a codename of SkyDrive), was provided as a limited beta available to a few testers in the United States. On August 1, 2007, the service was expanded to a wider audience. Shortly thereafter, on August 9, 2007, the service was renamed Windows Live SkyDrive and made available to testers in the United Kingdom and India. SkyDrive was initially available in 38 countries and regions, later expanded to 62. On December 2, 2008, the capacity of an individual SkyDrive account was upgraded from 5 GB to 25 GB, and Microsoft added a separate entry point called Windows Live Photos which allowed users to access their photos and videos stored on SkyDrive. This entry point allowed users to add "People tags" to their photos, download photos into Windows Photo Gallery or as a ZIP file, and view Exif metadata such as camera information for the photos uploaded. Microsoft also added the ability to have full-screen slide shows for photos using Silverlight. - -SkyDrive was updated to "Wave 4" release on June 7, 2010, and added the ability to work with Office Web Apps (now known as Office Online), with versioning. In this update, due to the discontinuation of Windows Live Toolbar, the ability to synchronize and share bookmarked web links between users via SkyDrive was also discontinued. However, users were still able to use Windows Live Mesh, which replaced the previous Windows Live Favorites, to synchronize their favorites between computers until its discontinuation in February 2013.
- -In June 2010, users of Office Live Workspace, released in October 2007, were migrated to Windows Live Office. The migration included all existing workspaces, documents, and sharing permissions. The merger of the two services was a result of Microsoft's decision to merge its Office Live team into Windows Live in January 2009, as well as several deficiencies with Office Live Workspace, which lacked high-fidelity document viewing and did not allow files to be edited from within the web browser. Office Live Workspace also did not offer offline collaboration and co-authoring functionality – instead documents were "checked out" and "checked in", though the service did integrate with SharedView for real-time screen sharing. - -On June 20, 2011, Microsoft overhauled the user interface for SkyDrive, built using HTML5 technologies. The updated version featured caching, hardware acceleration, HTML5 video, quick views, cleaner arrangement of photos and infinite scrolling. Microsoft also doubled the file size limit from 50 MB to 100 MB per file. With this update, Microsoft consolidated the different entry points for SkyDrive, such as Windows Live Photos and Windows Live Office, into one single interface. Files and folders shared with a user, including those in Windows Live Groups, were also accessible in the new interface. On November 29, 2011, Microsoft updated SkyDrive to make sharing and file management easier, along with HTML5 and other updates. This update also allowed users to see how much storage they had (and how much they had used), a feature that had been removed in the previous update as part of the redesign. - -On December 3, 2011, Microsoft released SkyDrive apps for iOS and Windows Phone, which are available in the App Store and Windows Phone Store respectively. On April 22, 2012, Microsoft released a SkyDrive desktop app for Windows Vista, 7 and 8, as well as macOS, allowing users to synchronize files on SkyDrive, much like Windows Live Mesh, and to "fetch" files on their computer via the web browser. In addition, SkyDrive also provided additional storage available for purchase and reduced the free storage space for new users to 7 GB (from 25 GB). Existing users were offered a free upgrade to retain their 25 GB of free storage. The updated SkyDrive also allowed files up to 2 GB in size (uploaded via the SkyDrive desktop app). The update also brought additional features such as Open Document Format (ODF) capability, URL shortening services and direct sharing of files to Twitter. - -On August 14, 2012, Microsoft announced a new update for SkyDrive which brought changes and improvements to SkyDrive.com, SkyDrive for Windows desktop and OS X, and the SkyDrive API as part of Live Connect. For SkyDrive.com, the updates brought a new "modern" design for the web service consistent with Outlook.com, and along with the UI update the service also received improvements such as instant search, contextual toolbar, multi-select in thumbnail view, drag-and-drop files into folders, and sorting improvements. For the SkyDrive for Windows desktop and macOS applications, the update brought new performance improvements to photo uploads and the sync experience. The update also improved the SkyDrive API with the removal of file type restrictions, the ability to upload images in their full resolution, as well as a new SkyDrive file picker for opening and saving files. On August 28, 2012, Microsoft released a SkyDrive app for Android on the Google Play store.
On September 18, 2012, Microsoft also introduced a recycle bin feature on SkyDrive and announced that SkyDrive would allow users to create online surveys via Excel Web App. - -Microsoft became involved in a lawsuit with British television broadcaster Sky UK for using the word "Sky", resulting in a High Court ruling in June 2013 that the service's brand breached Sky's trademark. On July 31, 2013, in a joint press release between Sky and Microsoft, it was announced that a settlement had been reached and as a result the 'SkyDrive' name would be changed to 'OneDrive'. Sky allowed Microsoft to continue using the brand "for a reasonable period of time to allow for an orderly transition to a new brand". The change was made on most platforms on February 19, 2014, following an announcement on January 27. - -Upon the re-launch as OneDrive, monthly payment plans were introduced, along with the ability to earn up to 5 GB of free storage for referring new users to OneDrive (500 MB each), and 3 GB if users enabled automatic uploads of photos using the OneDrive mobile apps on smartphones. Subscribers to Office 365's home-oriented plans also receive additional storage for use with the service, with 20 GB per user. - -In June 2014 it was announced that OneDrive's default storage would increase to 15 GB, putting it in line with its competitor Google Drive. An additional 15 GB were offered for activating camera roll backup on a mobile device, putting it ahead of Google Drive until November 2015, when this bonus was cancelled. The amount of additional storage for Office 365 subscribers also increased to 1 TB. Following calls for Microsoft to reverse the reduction decision, Microsoft announced on December 11 of the same year that it would allow existing users to request to have up to 30 GB of free storage unaffected by the reduction, and said it would fully refund customers of Office 365 not satisfied with the 1 TB cap, among other redress. - -In June 2019, alongside the announcement of the Personal Vault, Microsoft announced that it would increase the OneDrive standalone storage plan from 50 GB to 100 GB at no additional charge, and that it would be giving Office 365 subscribers a new option to add more storage as they need it. - -OneDrive initially did not store previous versions of files, except for Microsoft Office formats. In July 2017, however, the Microsoft OneDrive team announced that version history support for all file types was the top requested feature; as such, OneDrive would keep older versions of all files for up to 30 days. - -OneDrive implements a "recycle bin"; files the user chooses to delete are stored there for a time, without counting as part of the user's allocation, and can be reinstated until they are ultimately purged from OneDrive. - -OneDrive allows the viewing of documents in Portable Document Format (PDF). Client apps for Windows 8, Windows 10, Windows 10 Mobile, and Windows Phone allow users to synchronize their entire OneDrive storage with their computers for offline access, as well as between multiple computers. - -In an update on July 4, 2017, the OneDrive desktop client started showing an error message to the effect that the local OneDrive folder must be located on an NTFS volume only. Other file systems, including the older FAT32 and exFAT, as well as the newer ReFS, were not supported. Microsoft further commented that this was always the requirement; it had merely fixed a bug in which the warning was not displayed.
Microsoft also denied that this change had anything to do with the forthcoming OneDrive Files On-Demand. - -Microsoft Office, starting with Microsoft Office 2010 and Microsoft Office for Mac 2011, allows users to directly open or save documents to OneDrive, or simultaneously edit shared documents with other users. Changes are synchronized when a document is saved and, where conflicts occur, the saving user can choose which version to keep; users can also use several different desktop and web programs to edit the same shared document. - -Microsoft OneNote users can sync one or more of their notebooks using OneDrive. Once a notebook is selected for sharing, OneDrive copies the notebook from the user's computer to OneDrive, and that online copy then becomes the original for all future changes. The originating copy remains on the user's hard drive but is no longer updated by OneNote. Users can switch back to an offline-only version of the notebook by manually changing its location in OneNote, but unpredictable results may occur, including the OneNote application crashing and loss of notebook data under certain conditions. Under such circumstances, re-sharing the notebook to OneDrive may result in recovery of the lost data. - -In September 2019 Microsoft announced Personal Vault. It is a protected area in OneDrive where users can store their most important or sensitive files and photos without sacrificing the convenience of anywhere access. Personal Vault has a strong authentication method or a second step of identity verification, such as fingerprint, face, PIN, or a code sent via email or SMS. Personal Vault is not available in the macOS app. - -OneDrive allows users to embed their Word, Excel and PowerPoint documents into other web pages. These embedded documents allow anyone who visits these web pages to interact with them, such as browsing an embedded PowerPoint slideshow or performing calculations within an embedded Excel spreadsheet. In addition, Microsoft has released a set of APIs for OneDrive via Live Connect to enable developers to develop web services and client apps utilizing OneDrive's cloud storage. This allows users of these web services and client apps to browse, view, upload or edit files stored on OneDrive. A software development kit (SDK) is available for .NET Framework, iOS, Android and Python, with a limited set of APIs for web apps and Windows. - -OneDrive is already interoperable with a host of web services, including: - -* Outlook.com: Allows users to: - -** Directly upload Office documents and photos within Outlook.com, store them on OneDrive and share them with other users. - -** Directly save Office documents within Outlook.com to OneDrive, and view or edit these documents directly within the web browser. - -** Edit Office documents within the web browser using Office Online and reply directly back to the sender with the edits made. - -* Facebook, Twitter and LinkedIn: Enables users to quickly share their files with their contacts on these social networks. OneDrive maintains an access control list of all users with permissions to view or edit the files, including those users on social networks. - -* Bing: Save & Share feature allows users to save search histories into a OneDrive folder. - -* Windows Live Groups: Before being discontinued, Windows Live Groups provided each group with 1 GB of storage space on OneDrive to be shared between the group members.
Group members were allowed to access, create, modify and delete files within the group's OneDrive folders, along with the other functionality that OneDrive provides. However, these features eventually became native to OneDrive. - -* Samsung Gallery: Users can sync their photos and videos from the gallery of Samsung devices to OneDrive through the partnership of Microsoft and Samsung. - -Data stored on OneDrive is subject to monitoring by Microsoft, and any content that is in violation of Microsoft's Code of Conduct is subject to removal and may lead to temporary or permanent shutdown of the account. This has led to privacy concerns in relation to data stored on OneDrive. Microsoft has responded by indicating that "strict internal policies [are] in place to limit access to a user's data", and that advanced mechanisms, such as Microsoft's automated PhotoDNA scanning tool, are utilized to ensure users abide by the Code of Conduct and that their account does not contain files in contravention thereof, such as partial human nudity (including art or drawings). - -Microsoft has a similarly named but unrelated software plus services offering called OneDrive for Business (previously SkyDrive Pro). While OneDrive is a personal storage service on the web, OneDrive for Business is managed cloud storage for business users that replaces SharePoint Workspace. The physical medium on which the information is stored can be either hosted on-premises or purchased as a service subscription from Microsoft. diff --git a/wiki/wikipedia/2383.txt b/wiki/wikipedia/2383.txt deleted file mode 100644 index 4dba26f723cd2e2c86a58e6ba6068d6b27b808ba..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2383.txt +++ /dev/null @@ -1,33 +0,0 @@ -In fault-tolerant distributed computing, an atomic broadcast or total order broadcast is a broadcast where all correct processes in a system of multiple processes receive the same set of messages in the same order; that is, the same sequence of messages. The broadcast is termed "atomic" because it either eventually completes correctly at all participants, or all participants abort without side effects. Atomic broadcasts are an important distributed computing primitive. - -The following properties are usually required from an atomic broadcast protocol: - -# Validity: if a correct participant broadcasts a message, then all correct participants will eventually receive it. - -# Uniform Agreement: if one correct participant receives a message, then all correct participants will eventually receive that message. - -# Uniform Integrity: a message is received by each participant at most once, and only if it was previously broadcast. - -# Uniform Total Order: the messages are totally ordered in the mathematical sense; that is, if any correct participant receives message 1 first and message 2 second, then every other correct participant must receive message 1 before message 2. - -Rodrigues and Raynal, and Schiper et al., define the integrity and validity properties of atomic broadcast slightly differently. - -Note that total order is not equivalent to FIFO order, which requires that if a process sent message 1 before it sent message 2, then all participants must receive message 1 before receiving message 2. It is also not equivalent to "causal order", where if message 2 "depends on" or "occurs after" message 1 then all participants must receive message 2 after receiving message 1.
While a strong and useful condition, total order requires only that all participants receive the messages in the same order, but does not place other constraints on that order. - -Designing an algorithm for atomic broadcasts is relatively easy if it can be assumed that computers will not fail. For example, if there are no failures, atomic broadcast can be achieved simply by having all participants communicate with one "leader" which determines the order of the messages, with the other participants following the leader. - -However, real computers are faulty; they fail and recover from failure at unpredictable, possibly inopportune, times. For example, in the follow-the-leader algorithm, what if the leader fails at the wrong time? In such an environment achieving atomic broadcasts is difficult. A number of protocols have been proposed for performing atomic broadcast, under various assumptions about the network, failure models, availability of hardware support for multicast, and so forth. - -In order for the conditions for atomic broadcast to be satisfied, the participants must effectively "agree" on the order of receipt of the messages. Participants recovering from failure, after the other participants have "agreed" an order and started to receive the messages, must be able to learn and comply with the agreed order. Such considerations indicate that in systems with crash failures, atomic broadcast and consensus are equivalent problems. - -A value can be proposed by a process for consensus by atomically broadcasting it, and a process can decide a value by selecting the value of the first message which it atomically receives. Thus, consensus can be reduced to atomic broadcast. - -Conversely, a group of participants can atomically broadcast messages by achieving consensus regarding the first message to be received, followed by achieving consensus on the next message, and so forth until all the messages have been received. Thus, atomic broadcast reduces to consensus. This was demonstrated more formally and in greater detail by Xavier Défago, et al. - -A fundamental result in distributed computing is that achieving consensus in asynchronous systems in which even one crash failure can occur is impossible in the most general case. This was shown in 1985 by Michael J. Fischer, Nancy Lynch, and Mike Paterson, and is sometimes called the FLP result. Since consensus and atomic broadcast are equivalent, FLP applies also to atomic broadcast. The FLP result does not prohibit the implementation of atomic broadcast in practice, but it does require making less stringent assumptions than FLP in some respect, such as about processor and communication timings. - -The Chandra-Toueg algorithm is a consensus-based solution to atomic broadcast. Another solution has been put forward by Rodrigues and Raynal. - -The Zookeeper Atomic Broadcast (ZAB) protocol is the basic building block for Apache ZooKeeper, a fault-tolerant distributed coordination service which underpins Hadoop and many other important distributed systems. - -Ken Birman has proposed the virtual synchrony execution model for distributed systems, the idea of which is that all processes observe the same events in the same order. A total ordering of the messages being received, as in atomic broadcast, is one (though not the only) method for attaining virtually synchronous message receipt. 
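To make the failure-free special case above concrete, here is a minimal Python sketch of a sequencer-based ("follow the leader") total order broadcast. The class and method names are invented for illustration, and the sketch deliberately ignores exactly what the protocols discussed above must handle: a crashing leader.

```python
import queue

class Sequencer:
    """Failure-free atomic broadcast: one leader assigns sequence numbers,
    so all processes deliver the same messages in the same order."""

    def __init__(self, processes):
        self.processes = processes
        self.inbox = queue.Queue()
        self.seq = 0

    def broadcast(self, sender, msg):
        self.inbox.put((sender, msg))   # every broadcast is funneled to the leader

    def run_once(self):
        sender, msg = self.inbox.get()  # the leader picks the next message...
        self.seq += 1                   # ...fixes its global position...
        for p in self.processes:        # ...and delivers it to everyone
            p.deliver(self.seq, sender, msg)

class Process:
    def __init__(self, name):
        self.name = name
        self.log = []                   # sequence of delivered messages

    def deliver(self, seq, sender, msg):
        self.log.append((seq, sender, msg))

procs = [Process(f"p{i}") for i in range(3)]
leader = Sequencer(procs)
leader.broadcast("p0", "hello")
leader.broadcast("p2", "world")
leader.run_once()
leader.run_once()
# Uniform total order: every process ends up with an identical log.
print(all(p.log == procs[0].log for p in procs))  # True
```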
diff --git a/wiki/wikipedia/2384.txt b/wiki/wikipedia/2384.txt deleted file mode 100644 index 2cffb07e1e9b9b468a94c0213e8909b1bfdac8b1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2384.txt +++ /dev/null @@ -1,11 +0,0 @@ -The longest uncrossed (or nonintersecting) knight's path is a mathematical problem involving a knight on the standard 8×8 chessboard or, more generally, on a square n×n board. The problem is to find the longest path the knight can take on the given board, such that the path does not intersect itself. A further distinction can be made between a closed path, which ends on the same field as where it begins, and an open path, which ends on a different field from where it begins. - -The longest open paths on an n×n board are known only for n ≤ 9. Their lengths for n = 1, 2, …, 9 are: - -0, 0, 2, 5, 10, 17, 24, 35, 47 - -The longest closed paths are known only for n ≤ 10. Their lengths for n = 1, 2, …, 10 are: - -0, 0, 0, 4, 8, 12, 24, 32, 42, 54 - -The problem can be further generalized to rectangular n×m boards, or even to boards in the shape of any polyomino. Other standard chess pieces than the knight are less interesting, but fairy chess pieces like the camel ((3,1)-leaper), giraffe ((4,1)-leaper) and zebra ((3,2)-leaper) lead to problems of comparable complexity. diff --git a/wiki/wikipedia/2385.txt b/wiki/wikipedia/2385.txt deleted file mode 100644 index cddf2d6b312bc0bbaa1da6900e6f11d3a8f1a0a3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2385.txt +++ /dev/null @@ -1,13 +0,0 @@ -Wordtris is a Tetris offshoot designed by Sergei Utkin, Vyacheslav Tsoy and Armen Sarkissian (later President of Armenia) and published by Spectrum Holobyte in 1991 for the IBM PC platform. The game was released for the Game Boy (ported by Realtime Associates) and Super NES in 1992. - -The object of the game is to build words of three letters or more using the tiles that fall from the top of the playing area. Words can be constructed horizontally or vertically, and multiple words can overlap each other. If the player manages to construct the magic word at the top of the screen, the well will be cleared of all tiles and the player will receive a large bonus. - -Occasionally, a free tile (denoted by a "?") will drop. Its letter can be selected by the player (either by typing it in the PC version, or scrolling through letters with a button on the console versions). If the player does not choose a letter, the block will become a random letter when it stops. Eraser blocks will fall and remove whatever letter that they land on (in the SNES version, the eraser is replaced with bombs and vials of acid). - -In the Super NES version, players advance from levels "A" to "J." There is no level after "J." - -The background pictures (except the title screen) were taken from an earlier Tetris game by Spectrum Holobyte known as Super Tetris. The in-game music is composed by Paul Mogg who did the PC version of Super Tetris with Ed Bogas. - -The PC, Game Boy, and Macintosh versions of Wordtris feature original music by Ed Bogas, while the SNES version features music by Paul Mogg. Paul also worked on the computer versions but only worked on sound effects design. While the Game Boy and SNES versions contain looping music, the other ports do not. Ed composed the soundtrack for Wordtris using his own music software Super Studio Session for the Macintosh, in which his MIDI files were converted to the game in MIDI format. 
For the SNES version, Paul composed his music using Studio Vision Pro, also for the Macintosh. David Warhol provided sound engines and musical arrangements for both the Game Boy and SNES versions. - -Computer Gaming World stated that "Wordtris, like its predecessors, is as infuriating as it is incredibly addictive ... Tetris is a classic game. Wordtris does it one better." The SNES version of the game received a score of 65% from N-Force Magazine. diff --git a/wiki/wikipedia/2386.txt b/wiki/wikipedia/2386.txt deleted file mode 100644 index 6ddcc5ff9e64794b91ecad64b23b6947f39925a6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2386.txt +++ /dev/null @@ -1,7 +0,0 @@ -In mathematics, the Narasimhan–Seshadri theorem, proved by M. S. Narasimhan and C. S. Seshadri in 1965, says that a holomorphic vector bundle over a Riemann surface is stable if and only if it comes from an irreducible projective unitary representation of the fundamental group. - -The main case to understand is that of topologically trivial bundles, i.e. those of degree zero (the other cases are a minor technical extension of this case). This case of the Narasimhan–Seshadri theorem says that a degree zero holomorphic vector bundle over a Riemann surface is stable if and only if it comes from an irreducible unitary representation of the fundamental group of the Riemann surface. - -Donaldson (1983) gave another proof using differential geometry, and showed that the stable vector bundles have an essentially unique unitary connection of constant (scalar) curvature. In the degree zero case, Donaldson's version of the theorem says that a degree zero holomorphic vector bundle over a Riemann surface is stable if and only if it admits a flat unitary connection compatible with its holomorphic structure. Then the fundamental group representation appearing in the original statement is just the monodromy representation of this flat unitary connection. diff --git a/wiki/wikipedia/2387.txt b/wiki/wikipedia/2387.txt deleted file mode 100644 index 1a45996ad0b68fd9ed6d978466079062be3d68be..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2387.txt +++ /dev/null @@ -1,7 +0,0 @@ -In topology, the Bing metrization theorem, named after R. H. Bing, characterizes when a topological space is metrizable. - -The theorem states that a topological space $X$ is metrizable if and only if it is regular and T0 and has a σ-discrete basis. A family of sets is called σ-discrete when it is a union of countably many discrete collections, where a family $\mathcal{F}$ of subsets of a space $X$ is called discrete when every point of $X$ has a neighborhood that intersects at most one member of $\mathcal{F}.$ - -The theorem was proven by Bing in 1951, and was discovered independently of the Nagata–Smirnov metrization theorem, which was proved independently by Nagata (1950) and Smirnov (1951). The two theorems are often merged into the Bing–Nagata–Smirnov metrization theorem. It is a common tool for proving other metrization theorems; e.g. the Moore metrization theorem – a collectionwise normal Moore space is metrizable – is a direct consequence. - -Unlike Urysohn's metrization theorem, which provides only a sufficient condition for metrization, this theorem provides both a necessary and sufficient condition for a topological space to be metrizable.
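Restating the criterion compactly in symbols (this adds nothing beyond the definitions above): a topological space $X$ is metrizable if and only if $X$ is regular, $T_0$, and has a basis of the form
$$
\mathcal{B} = \bigcup_{n \in \mathbb{N}} \mathcal{B}_n,
$$
where each collection $\mathcal{B}_n$ is discrete, i.e. every point of $X$ has a neighborhood intersecting at most one member of $\mathcal{B}_n$.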
diff --git a/wiki/wikipedia/2388.txt b/wiki/wikipedia/2388.txt deleted file mode 100644 index 068865396294a706707c37cde4b147329f9eea43..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2388.txt +++ /dev/null @@ -1,11 +0,0 @@ -(Image: example of a Gephi network visualization.) Gephi is an open-source network analysis and visualization software package written in Java on the NetBeans platform. - -Initially developed by students of the University of Technology of Compiègne (UTC) in France, Gephi was selected for the Google Summer of Code in 2009, 2010, 2011, 2012, and 2013. - -Its most recent version, 0.9.0, was launched in December 2015, with updates in February 2016 (0.9.1) and September 2017 (0.9.2). Previous versions are 0.6.0 (2008), 0.7.0 (2010), 0.8.0 (2011), 0.8.1 (2012) and 0.8.2 (2013). - -The Gephi Consortium, created in 2010, is a French non-profit corporation which supports development of future releases of Gephi. Members include SciencesPo, Linkfluence, WebAtlas, and Quid. Gephi is also supported by a large community of users, organized around a discussion group and a forum and producing numerous blog posts, papers and tutorials. - -Gephi has been used in a number of research projects in academia, journalism and elsewhere, for instance in visualizing the global connectivity of New York Times content and examining Twitter network traffic during social unrest, along with more traditional network analysis topics. Gephi is widely used within the digital humanities (in history, literature, political sciences, etc.), a community where many of its developers are involved. - -Gephi inspired the LinkedIn InMaps and was used for the network visualizations for Truthy. diff --git a/wiki/wikipedia/2389.txt b/wiki/wikipedia/2389.txt deleted file mode 100644 index 8c7a6ea8177ab3d19b281f124e295bb224acbd71..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2389.txt +++ /dev/null @@ -1,7 +0,0 @@ -In computer science, the method of contraction hierarchies is a speed-up technique for finding the shortest path in a graph. The most intuitive applications are car-navigation systems: a user wants to drive from $A$ to $B$ using the quickest possible route. The metric optimized here is the travel time. Intersections are represented by vertices, the road sections connecting them by edges. The edge weights represent the time it takes to drive along this segment of the road. A path from $A$ to $B$ is a sequence of edges (road sections); the shortest path is the one with the minimal sum of edge weights among all possible paths. The shortest path in a graph can be computed using Dijkstra's algorithm but, given that road networks consist of tens of millions of vertices, this is impractical. Contraction hierarchies is a speed-up method optimized to exploit properties of graphs representing road networks. This is based on the observation that road networks are highly hierarchical. Some intersections, for example highway junctions, are "more important" and higher up in the hierarchy than for example a junction leading into a dead end. Shortcuts can be used to save the precomputed distance between two important junctions such that the algorithm doesn't have to consider the full path between these junctions at query time. Contraction hierarchies do not know which roads humans consider "important" (e.g. highways), but they are provided with the graph as input and are able to assign importance to vertices using heuristics.
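The following minimal Python sketch is illustrative only, not an optimized implementation: the contraction order is supplied by hand rather than computed by the importance heuristics just mentioned, and the graph is undirected. It shows the two key ingredients: inserting shortcuts when a vertex is contracted, and answering queries with a bidirectional search that only moves upward in the hierarchy.

```python
import heapq

INF = float("inf")

def dijkstra(adj, source, excluded=frozenset(), cutoff=INF):
    """Textbook Dijkstra over an adjacency dict {u: {v: weight}},
    skipping vertices in `excluded` and pruning paths longer than `cutoff`."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, INF) or d > cutoff:
            continue
        for v, w in adj[u].items():
            if v not in excluded and d + w < dist.get(v, INF):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def build_ch(graph, order):
    """Preprocessing: contract vertices in `order` (least important first),
    adding a shortcut u-x whenever the contracted vertex v lay on the only
    short-enough u-x path (checked by a 'witness' search that avoids v)."""
    g = {u: dict(nbrs) for u, nbrs in graph.items()}  # working copy with shortcuts
    contracted = set()
    for v in order:
        nbrs = [(u, w) for u, w in g[v].items() if u not in contracted]
        for u, wu in nbrs:
            limit = max((wu + wx for _, wx in nbrs), default=0)
            witness = dijkstra(g, u, excluded=contracted | {v}, cutoff=limit)
            for x, wx in nbrs:
                if x != u and witness.get(x, INF) > wu + wx:
                    g[u][x] = g[x][u] = min(g[u].get(x, INF), wu + wx)
        contracted.add(v)
    return g, {v: i for i, v in enumerate(order)}

def ch_query(g, rank, s, t):
    """Query: bidirectional Dijkstra from s and t that only relaxes edges
    leading to more important (higher-ranked) vertices."""
    up = {u: {v: w for v, w in g[u].items() if rank[v] > rank[u]} for u in g}
    ds, dt = dijkstra(up, s), dijkstra(up, t)
    return min((ds[u] + dt[u] for u in ds.keys() & dt.keys()), default=INF)

# Toy undirected road network; weights are travel times.
roads = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 1, "D": 5},
    "C": {"A": 4, "B": 1, "D": 1},
    "D": {"B": 5, "C": 1},
}
g, rank = build_ch(roads, order=["B", "C", "A", "D"])
print(ch_query(g, rank, "A", "D"))  # 3.0, the travel time of A-B-C-D
```

Contracting "B" first inserts the shortcut A-C with weight 2, so the query never has to descend back into the full graph; this is the mechanism the paragraph above describes.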
- -Contraction hierarchies are applied not only to speed up queries in car-navigation systems but also in web-based route planners, traffic simulation, and logistics optimization. Implementations of the algorithm are publicly available as open source software. - -The contraction hierarchies (CH) algorithm is a two-phase approach to the shortest path problem consisting of a preprocessing phase and a query phase. As road networks change rather infrequently, more time (seconds to hours) can be spent once on precomputation before queries are answered. Using this precomputed data, many queries can be answered taking very little time (microseconds) each. Theoretical guarantees on the query time are usually stated in terms of the highway dimension $h$ of the network, which is believed to be small for real road networks; for grids, however, it is known that the highway dimension satisfies $h\in\Theta(\sqrt{n})$. - -An alternative analysis was presented in the Customizable Contraction Hierarchy line of work. Query running times can be bounded by $O(td^2)$, where $td$ denotes the tree-depth of the network. As the tree-depth can be bounded in terms of the tree-width, $O((tw \log n)^2)$ is also a valid upper bound. diff --git a/wiki/wikipedia/239.txt b/wiki/wikipedia/239.txt deleted file mode 100644 index e7dd3513c70c1807e2aa62803f0b8c8b1b179d0d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/239.txt +++ /dev/null @@ -1,35 +0,0 @@ -FreeMind is a free mind mapping application written in Java; its fork, Freeplane, continues to be developed to this day (2021), while FreeMind itself was last updated in 2014. FreeMind is licensed under the GNU General Public License Version 2. It provides extensive export capabilities. It runs on Microsoft Windows, Linux, and macOS via the Java Runtime Environment. - -As with other mind mapping software packages, FreeMind allows the user to edit a hierarchical set of ideas around a central concept. The non-linear approach assists in brainstorming new outlines and projects as ideas are added around the mind map. - -FreeMind was a finalist for Best Project in SourceForge.net's Community Choice Awards for 2008, which featured open-source software projects. A Flash-based export of its documentation is available online and can be viewed from Flash-enabled web browsers. - -FreeMind's most significant features are as follows: - -* Folding branches - -* Save files as XML, in the mm file format - -* Export hypertext to HTML and XHTML - -* Export documents to PDF and OpenDocument - -* Export images to PNG, JPEG and SVG - -* Icons on nodes - -* Clouds around branches - -* Graphical links connecting nodes - -* Search restricted to single branches - -* Web and file hyperlinks from nodes - -* FreeMind browser/player for web in Java or Flash - -* Transform maps using XSLT - -FreeMind uses the Swing GUI toolkit for Java. - -FreeMind developers or developers of other projects have made plugins for various wiki and content management system software so that FreeMind files can be viewed and in some cases created via the web interface. diff --git a/wiki/wikipedia/2390.txt b/wiki/wikipedia/2390.txt deleted file mode 100644 index ecf98f6c1587fafbea006ee8dd9b008f8e304433..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2390.txt +++ /dev/null @@ -1,56 +0,0 @@ -In the mathematical field of graph theory, a prism graph is a graph that has one of the prisms as its skeleton.
- -The individual graphs may be named after the associated solid: - -* Triangular prism graph – 6 vertices, 9 edges - -* Cubical graph – 8 vertices, 12 edges - -* Pentagonal prism graph – 10 vertices, 15 edges - -* Hexagonal prism graph – 12 vertices, 18 edges - -* Heptagonal prism graph – 14 vertices, 21 edges - -* Octagonal prism graph – 16 vertices, 24 edges - -* ... - -Although geometrically the star polygons also form the faces of a different sequence of (self-intersecting and non-convex) prismatic polyhedra, the graphs of these star prisms are isomorphic to the prism graphs, and do not form a separate sequence of graphs. - -Prism graphs are examples of generalized Petersen graphs, with parameters GP(n,1). - -They may also be constructed as the Cartesian product of a cycle graph with a single edge. - -The n-gonal prism graphs with odd values of n may be constructed as circulant graphs $C_{2n}^{2,n}$. - -However, this construction does not work for even values of n. - -The graph of an n-gonal prism has 2n vertices and 3n edges. They are regular, cubic graphs. - -Since the prism has symmetries taking each vertex to each other vertex, the prism graphs are vertex-transitive graphs. - -As polyhedral graphs, they are also 3-vertex-connected planar graphs. Every prism graph has a Hamiltonian cycle. - -Among all biconnected cubic graphs, the prism graphs have within a constant factor of the largest possible number of 1-factorizations. A 1-factorization is a partition of the edge set of the graph into three perfect matchings, or equivalently an edge coloring of the graph with three colors. Every biconnected n-vertex cubic graph has $O(2^{n/2})$ 1-factorizations, and the prism graphs have $\Omega(2^{n/2})$ 1-factorizations. - -The number of spanning trees of an n-gonal prism graph is given by the formula -$$ -\frac{n}{2}\bigl((2+\sqrt{3})^n+(2-\sqrt{3})^n-2\bigr). -$$ - -For n = 3, 4, 5, ... these numbers are - -75, 384, 1805, 8100, 35287, 150528, ... . - -The n-gonal prism graphs for even values of n are partial cubes. They form one of the few known infinite families of cubic partial cubes, and (except for four sporadic examples) the only vertex-transitive cubic partial cubes. - -The pentagonal prism is one of the forbidden minors for the graphs of treewidth three. The triangular prism and cube graph have treewidth exactly three, but all larger prism graphs have treewidth four. - -Other infinite sequences of polyhedral graphs formed in a similar way from polyhedra with regular-polygon bases include the antiprism graphs (graphs of antiprisms) and wheel graphs (graphs of pyramids). Other vertex-transitive polyhedral graphs include the Archimedean graphs. - -If the two cycles of a prism graph are broken by the removal of a single edge in the same position in both cycles, the result is a ladder graph. If these two removed edges are replaced by two crossed edges, the result is a non-planar graph called a Möbius ladder. - -Category:Graph families - -Category:Regular graphs diff --git a/wiki/wikipedia/2391.txt b/wiki/wikipedia/2391.txt deleted file mode 100644 index 8fabe1a2edd603926f44e294ad5afd365d5c53a6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2391.txt +++ /dev/null @@ -1,7 +0,0 @@ -In combinatorial optimization, the stacker crane problem is an optimization problem closely related to the traveling salesperson problem.
Its input consists of a collection of ordered pairs of points in a metric space, and the goal is to connect these points into a cycle of minimum total length that includes all of the pairs, oriented consistently with each other. It models problems of scheduling the pickup and delivery of individual loads of cargo, by a stacker crane, construction crane or (in drayage) a truck, in a simplified form without constraints on the timing of these deliveries. It was introduced by Frederickson, with an equivalent formulation in terms of mixed graphs with directed edges modeling the input pairs and undirected edges modeling their distances. Frederickson et al. credit its formulation to a personal communication of Daniel J. Rosenkrantz. - -The stacker crane problem can be viewed as a generalization of the traveling salesperson problem in metric spaces: any instance of the traveling salesperson problem can be transformed into an instance of the stacker crane problem, having a pair $(p,p)$ for each point in the travelling salesman instance. In the other direction, the stacker crane problem can be viewed as a special case of the asymmetric traveling salesperson problem, where the points of the asymmetric traveling salesperson problem are the pairs of a stacker crane instance and the distance from one pair to another is taken as the distance from the delivery point of the first pair, through its pickup point, to the delivery point of the second pair. Because it generalizes the traveling salesperson problem, it inherits the same computational complexity: it is NP-hard, and at least as hard to approximate. - -An approximation algorithm based on the Christofides algorithm for the traveling salesperson problem can approximate the solution of the stacker crane problem to within an approximation ratio of 9/5. - -The problem of designing the back side of an embroidery pattern to minimize the total amount of thread used is closely related to the stacker crane problem, but it allows each of its pairs of points (the ends of the visible stitches on the front side of the pattern) to be traversed in either direction, rather than requiring the traversal to go through all pairs in a consistent direction. It is NP-hard by the same transformation from the traveling salesperson problem, and can be approximated to within an approximation ratio of 2. Another variation of the stacker crane problem, called the dial-a-ride problem, asks for the minimum route for a vehicle to perform a collection of pickups and deliveries while allowing it to hold some number k > 1 of loads at any point along its route. diff --git a/wiki/wikipedia/2392.txt b/wiki/wikipedia/2392.txt deleted file mode 100644 index 744342333cde6f95abadd2111c8e1849047a3a54..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2392.txt +++ /dev/null @@ -1,25 +0,0 @@ -In computer science, in the field of databases, write–write conflict, also known as overwriting uncommitted data is a computational anomaly associated with interleaved execution of transactions. - -Given a schedule S - -S = \begin{bmatrix} - -T1 & T2 \\ - -W(A) & \\ - -& W(B) \\ - -W(B) & \\ - -Com. & \\ - -& W(A)\\ - -& Com. \end{bmatrix} - -note that there is no read in this schedule. The writes are called blind writes. - -We have a lost update. Any attempts to make this schedule serial would give off two different results (either T1's version of A and B is shown, or T2's version of A and B is shown), and would not be the same as the above schedule. This schedule would not be serializable. 
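To see concretely why no serial order reproduces the interleaved result, here is a small Python sketch (illustrative only; the written values are invented) that replays the blind writes of the schedule S above against both possible serial orders:

```python
def run(schedule):
    """Replay a list of (transaction, variable, value) blind writes."""
    db = {}
    for txn, var, val in schedule:
        db[var] = (txn, val)  # a write simply overwrites; nothing is read
    return db

# Interleaved schedule from the matrix above:
# T1 writes A, T2 writes B, T1 writes B and commits, T2 writes A and commits.
interleaved  = [("T1", "A", 1), ("T2", "B", 2), ("T1", "B", 1), ("T2", "A", 2)]
serial_t1_t2 = [("T1", "A", 1), ("T1", "B", 1), ("T2", "B", 2), ("T2", "A", 2)]
serial_t2_t1 = [("T2", "B", 2), ("T2", "A", 2), ("T1", "A", 1), ("T1", "B", 1)]

print(run(interleaved))   # {'A': ('T2', 2), 'B': ('T1', 1)} -- a mix of both
print(run(serial_t1_t2))  # {'A': ('T2', 2), 'B': ('T2', 2)} -- all T2
print(run(serial_t2_t1))  # {'B': ('T1', 1), 'A': ('T1', 1)} -- all T1
```

The interleaved execution leaves A from T2 but B from T1, a state neither serial execution can produce, which is exactly the lost update described above.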
- -Strict 2PL overcomes this inconsistency by locking T1 out from B. Unfortunately, deadlocks are something Strict 2PL does not overcome all the time. diff --git a/wiki/wikipedia/2393.txt b/wiki/wikipedia/2393.txt deleted file mode 100644 index bd849546f6522245cde148a6fba80c8cb6a97e52..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2393.txt +++ /dev/null @@ -1,21 +0,0 @@ -The Tait conjectures are three conjectures made by 19th-century mathematician Peter Guthrie Tait in his study of knots. The Tait conjectures involve concepts in knot theory such as alternating knots, chirality, and writhe. All of the Tait conjectures have been solved, the most recent being the Flyping conjecture. - -Tait came up with his conjectures after his attempt to tabulate all knots in the late 19th century. As a founder of the field of knot theory, his work lacks a mathematically rigorous framework, and it is unclear whether he intended the conjectures to apply to all knots, or just to alternating knots. It turns out that most of them are only true for alternating knots. - -A geometric proof, not using knot polynomials, was given in 2017 by Joshua Greene. - -A second conjecture of Tait: - -
    An amphicheiral (or acheiral) alternating link has zero writhe.
    - -This conjecture was also proved by Kauffman and Thistlethwaite.
- -The Tait flyping conjecture states that, given any two reduced alternating diagrams of an oriented, prime alternating link, one may be transformed into the other by a sequence of simple moves called flypes. It was proved by Thistlethwaite and William Menasco in 1991. - -The Tait flyping conjecture implies some more of Tait's conjectures: - -
    Any two reduced diagrams of the same alternating knot have the same writhe.
- -This follows because flyping preserves writhe. This was proved earlier by Murasugi and Thistlethwaite. It also follows from Greene's work. - -The flyping conjecture also implies Tait's second conjecture above, that amphicheiral alternating links have zero writhe: this follows because a knot's mirror image has opposite writhe. This conjecture is again only true for alternating knots: non-alternating amphichiral knots with crossing number 15 exist. diff --git a/wiki/wikipedia/2394.txt b/wiki/wikipedia/2394.txt deleted file mode 100644 index 633d5e34a6ef40b7687b03d39b246e8e3f52ef1e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2394.txt +++ /dev/null @@ -1,78 +0,0 @@ - - -In statistics, the Gauss–Markov theorem (or simply Gauss theorem for some authors) states that the ordinary least squares (OLS) estimator has the lowest sampling variance within the class of linear unbiased estimators, if the errors in the linear regression model are uncorrelated, have equal variances and expectation value of zero. The errors do not need to be normal, nor do they need to be independent and identically distributed (only uncorrelated with mean zero and homoscedastic with finite variance). The requirement that the estimator be unbiased cannot be dropped, since biased estimators exist with lower variance. See, for example, the James–Stein estimator (which also drops linearity), ridge regression, or simply any degenerate estimator. - -The theorem was named after Carl Friedrich Gauss and Andrey Markov, although Gauss' work significantly predates Markov's. But while Gauss derived the result under the assumption of independence and normality, Markov reduced the assumptions to the form stated above. A further generalization to non-spherical errors was given by Alexander Aitken, who extended the Gauss–Markov theorem to the case where the error vector has a non-scalar covariance matrix. The Aitken estimator is also a BLUE. - -In most treatments of OLS, the regressors (parameters of interest) in the design matrix $\mathbf{X}$ are assumed to be fixed in repeated samples. This assumption is considered inappropriate for a predominantly nonexperimental science like econometrics. Instead, the assumptions of the Gauss–Markov theorem are stated conditional on $\mathbf{X}$. - -The dependent variable is assumed to be a linear function of the variables specified in the model. The specification must be linear in its parameters. This does not mean that there must be a linear relationship between the independent and dependent variables. The independent variables can take non-linear forms as long as the parameters are linear. The equation $ y = \beta_{0} + \beta_{1} x^2 $ qualifies as linear, while $ y = \beta_{0} + \beta_{1}^2 x$ can be transformed to be linear by replacing $\beta_{1}^2$ by another parameter, say $\gamma$. An equation with a parameter dependent on an independent variable does not qualify as linear, for example $y = \beta_{0} + \beta_{1}(x) \cdot x$, where $\beta_{1}(x)$ is a function of $x$. - -Data transformations are often used to convert an equation into a linear form. For example, the Cobb–Douglas function, often used in economics, is nonlinear: -$$ -Y = A L^\alpha K^{1 - \alpha} e^\varepsilon -$$ - -But it can be expressed in linear form by taking the natural logarithm of both sides: -$$ -\ln Y=\ln A + \alpha \ln L + (1 - \alpha) \ln K + \varepsilon = \beta_0 + \beta_1 \ln L + \beta_2 \ln K + \varepsilon -$$ - -This assumption also covers specification issues: it is assumed that the proper functional form has been selected and there are no omitted variables.
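To make the log-linearization concrete, here is a minimal Python sketch; the data-generating numbers are invented for illustration. It simulates Cobb–Douglas data and recovers the parameters by running OLS on the log-transformed equation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical Cobb-Douglas data: Y = A * L^alpha * K^(1-alpha) * exp(eps)
A_true, alpha_true = 2.0, 0.7
L = rng.uniform(1, 10, n)
K = rng.uniform(1, 10, n)
eps = rng.normal(0, 0.1, n)
Y = A_true * L**alpha_true * K**(1 - alpha_true) * np.exp(eps)

# Log-transform: ln Y = beta0 + beta1*ln L + beta2*ln K + eps, linear in parameters
X = np.column_stack([np.ones(n), np.log(L), np.log(K)])
beta, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)

print(np.exp(beta[0]), beta[1], beta[2])  # estimates of A, alpha, 1 - alpha
```

Note that the fit is carried out entirely in the transformed (log) equation, which is the point of the caveat that follows.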
- -One should be aware, however, that the parameters that minimize the residuals of the transformed equation do not necessarily minimize the residuals of the original equation. - -For all $n$ observations, the expectation—conditional on the regressors—of the error term is zero: -$$ -\operatorname{E}[\varepsilon_{i}\mid \mathbf{X} ] = \operatorname{E}[\varepsilon_{i}\mid \mathbf{x}_{1}, \dots, \mathbf{x}_{n} ] = 0. -$$ - -where $\mathbf{x}_i = \begin{bmatrix} x_{i1} & x_{i2} & \cdots & x_{ik} \end{bmatrix}^{\mathsf{T}}$ is the data vector of regressors for the ith observation, and consequently $\mathbf{X} = \begin{bmatrix} \mathbf{x}_{1}^{\mathsf{T}} & \mathbf{x}_{2}^{\mathsf{T}} & \cdots & \mathbf{x}_{n}^{\mathsf{T}} \end{bmatrix}^{\mathsf{T}}$ is the data matrix or design matrix. - -Geometrically, this assumption implies that $\mathbf{x}_{i}$ and $\varepsilon_{i}$ are orthogonal to each other, so that their inner product (i.e., their cross moment) is zero. -$$ -\operatorname{E}[\mathbf{x}_{j} \cdot \varepsilon_{i}] = \begin{bmatrix} \operatorname{E}[{x}_{j1} \cdot \varepsilon_{i}] \\ \operatorname{E}[{x}_{j2} \cdot \varepsilon_{i}] \\ \vdots \\ \operatorname{E}[{x}_{jk} \cdot \varepsilon_{i}] \end{bmatrix} = \mathbf{0} \quad \text{for all } i, j \in n -$$ - -This assumption is violated if the explanatory variables are stochastic, for instance when they are measured with error, or are endogenous. Endogeneity can be the result of simultaneity, where causality flows back and forth between both the dependent and independent variable. Instrumental variable techniques are commonly used to address this problem. - -The sample data matrix $\mathbf{X}$ must have full column rank. -$$ -\operatorname{rank}(\mathbf{X}) = k -$$ - -Otherwise $\mathbf{X}'\mathbf{X}$ is not invertible and the OLS estimator cannot be computed. - -A violation of this assumption is perfect multicollinearity, i.e. some explanatory variables are linearly dependent. One scenario in which this will occur is called "dummy variable trap," when a base dummy variable is not omitted resulting in perfect correlation between the dummy variables and the constant term. - -Multicollinearity (as long as it is not "perfect") can be present resulting in a less efficient, but still unbiased estimate. The estimates will be less precise and highly sensitive to particular sets of data. Multicollinearity can be detected from condition number or the variance inflation factor, among other tests. - -The outer product of the error vector must be spherical. - -\operatorname{E}[\boldsymbol{\varepsilon} \boldsymbol{\varepsilon^{\mathsf{T}}} \mid \mathbf{X} ] - -= \operatorname{Var}[\boldsymbol{\varepsilon} \mid \mathbf{X} ] - -= \begin{bmatrix} - -\sigma^{2} & 0 & \cdots & 0 \\ - -0 & \sigma^{2} & \cdots & 0 \\ - -\vdots & \vdots & \ddots & \vdots \\ - -0 & 0 & \cdots & \sigma^{2} - -\end{bmatrix} - -= \sigma^{2} \mathbf{I} - -\quad \text{with } \sigma^{2} > 0 - -This implies the error term has uniform variance (homoscedasticity) and no serial dependence. If this assumption is violated, OLS is still unbiased, but inefficient. The term "spherical errors" will describe the multivariate normal distribution: if $\operatorname{Var}[\boldsymbol{\varepsilon}\mid \mathbf{X} ] = \sigma^{2} \mathbf{I}$ in the multivariate normal density, then the equation $f(\varepsilon)=c$ is the formula for a ball centered at μ with radius σ in n-dimensional space. - -Heteroskedasticity occurs when the amount of error is correlated with an independent variable. 
For example, in a regression on food expenditure and income, the error is correlated with income. Low income people generally spend a similar amount on food, while high income people may spend a very large amount or as little as low income people spend. Heteroskedasticity can also be caused by changes in measurement practices. For example, as statistical offices improve their data, measurement error decreases, so the error term declines over time. - -This assumption is violated when there is autocorrelation. Autocorrelation can be visualized on a data plot when a given observation is more likely to lie above a fitted line if adjacent observations also lie above the fitted regression line. Autocorrelation is common in time series data, where a data series may experience "inertia", as when a dependent variable takes a while to fully absorb a shock. Spatial autocorrelation can also occur: geographic areas are likely to have similar errors. Autocorrelation may be the result of misspecification, such as choosing the wrong functional form. In these cases, correcting the specification is one possible way to deal with autocorrelation. - -In the presence of non-spherical errors, the generalized least squares estimator can be shown to be BLUE. diff --git a/wiki/wikipedia/2395.txt b/wiki/wikipedia/2395.txt deleted file mode 100644 index 91ba9f55f830547cfe1fb824d410ce96e4e41420..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2395.txt +++ /dev/null @@ -1,21 +0,0 @@ -In mathematics, a palindromic prime (sometimes called a palprime) is a prime number that is also a palindromic number. Palindromicity depends on the base of the number system and its notational conventions, while primality is independent of such concerns. The first few decimal palindromic primes are: - -2, 3, 5, 7, 11, 101, 131, 151, 181, 191, 313, 353, 373, 383, 727, 757, 787, 797, 919, 929, … - -Except for 11, all palindromic primes have an odd number of digits, because the divisibility test for 11 tells us that every palindromic number with an even number of digits is a multiple of 11. It is not known if there are infinitely many palindromic primes in base 10. The largest known is - -$10^{1888529} - 10^{944264} - 1$, - -which has 1,888,529 digits, and was found on 18 October 2021 by Ryan Propper and Serge Batalov. On the other hand, it is known that, for any base, almost all palindromic numbers are composite, i.e. the ratio between palindromic composites and all palindromes below n tends to 1. - -In binary, the palindromic primes include the Mersenne primes and the Fermat primes. All binary palindromic primes except binary 11 (decimal 3) have an odd number of digits; those palindromes with an even number of digits are divisible by 3. The sequence of binary palindromic primes begins (in binary): - -11, 101, 111, 10001, 11111, 1001001, 1101011, 1111111, 100000001, 100111001, 110111011, ... - -The palindromic primes in base 12 are (using reversed two and three for ten and eleven, respectively): - -2, 3, 5, 7, Ɛ, 11, 111, 131, 141, 171, 181, 1Ɛ1, 535, 545, 565, 575, 585, 5Ɛ5, 727, 737, 747, 767, 797, Ɛ1Ɛ, Ɛ2Ɛ, Ɛ6Ɛ, ... - -Due to the superstitious significance of the numbers it contains, the palindromic prime 1000000000000066600000000000001 is known as Belphegor's Prime, named after Belphegor, one of the seven princes of Hell. Belphegor's Prime consists of the number 666, enclosed on either side by thirteen zeroes and a one. Belphegor's Prime is an example of a beastly palindromic prime, in which a prime p is palindromic with 666 in the center.
Another beastly palindromic prime is 700666007. - -Ribenboim defines a triply palindromic prime as a prime p for which: p is a palindromic prime with q digits, where q is a palindromic prime with r digits, where r is also a palindromic prime. For example, $p = 10^{11310} + 4661664 \cdot 10^{5652} + 1$, which has q = 11311 digits, and 11311 has r = 5 digits. The first (base-10) triply palindromic prime is the 11-digit number 10000500001. It is possible that a triply palindromic prime in base 10 may also be palindromic in another base, such as base 2, but it would be highly remarkable if it were also a triply palindromic prime in that base as well. diff --git a/wiki/wikipedia/2396.txt b/wiki/wikipedia/2396.txt deleted file mode 100644 index 82fbe9e8d9258942e419997262c7a02e9199767a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2396.txt +++ /dev/null @@ -1,25 +0,0 @@ -In mathematics, the Balian–Low theorem in Fourier analysis is named for Roger Balian and Francis E. Low. - -The theorem states that there is no well-localized window function (or Gabor atom) g either in time or frequency for an exact Gabor frame (Riesz basis). - -Suppose g is a square-integrable function on the real line, and consider the so-called Gabor system -$$ -g_{m,n}(x) = e^{2\pi i m b x} g(x - n a), -$$ - -for integers m and n, and a,b>0 satisfying ab=1. The Balian–Low theorem states that if -$$ -\{g_{m,n}: m, n \in \mathbb{Z}\} -$$ - -is an orthonormal basis for the Hilbert space -$$ -L^2(\mathbb{R}), -$$ - -then either -$$ - \int_{-\infty}^\infty x^2 | g(x)|^2 dx = \infty \quad \textrm{or} \quad \int_{-\infty}^\infty \xi^2|\hat{g}(\xi)|^2 d\xi = \infty. -$$ - -The Balian–Low theorem has been extended to exact Gabor frames. diff --git a/wiki/wikipedia/2397.txt b/wiki/wikipedia/2397.txt deleted file mode 100644 index 8f63222ca7835ba5ee9bb9c482e55140aa2a3dba..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2397.txt +++ /dev/null @@ -1,5 +0,0 @@ -In the mathematical field of graph theory, the Goldner–Harary graph is a simple undirected graph with 11 vertices and 27 edges. It is named after A. Goldner and Frank Harary, who proved in 1975 that it was the smallest non-Hamiltonian maximal planar graph. The same graph had already been given as an example of a non-Hamiltonian simplicial polyhedron by Branko Grünbaum in 1967. The dual graph of the Goldner–Harary graph is represented geometrically by the truncation of the triangular prism. - -The automorphism group of the Goldner–Harary graph is of order 12 and is isomorphic to the dihedral group D6, the group of symmetries of a regular hexagon, including both rotations and reflections. - -The characteristic polynomial of the Goldner–Harary graph is $-(x-1)^2 x^2 (x+2)^3 (x^2-3) (x^2-4 x-9)$. diff --git a/wiki/wikipedia/2398.txt b/wiki/wikipedia/2398.txt deleted file mode 100644 index bd4d20f6e255575152b3bd93182e20076b51d50f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2398.txt +++ /dev/null @@ -1,7 +0,0 @@ -System on TPTP is an online interface to several automated theorem proving systems and other automated reasoning tools. - -It allows users to run the systems either on problems from the latest releases of the TPTP problem library or on user-supplied problems in the TPTP syntax. - -The system is maintained by Geoff Sutcliffe at the University of Miami. In November 2010, it featured more than 50 systems, including both theorem provers and model finders.
System on TPTP can either run user-selected systems, or pick systems automatically based on problem features, and run them in parallel. - -
    diff --git a/wiki/wikipedia/2399.txt b/wiki/wikipedia/2399.txt deleted file mode 100644 index 4ec09c53dd3e199e1982166c4d05cf313f0843a6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2399.txt +++ /dev/null @@ -1,23 +0,0 @@ -In number theory, Tunnell's theorem gives a partial resolution to the congruent number problem, and under the Birch and Swinnerton-Dyer conjecture, a full resolution. - -The congruent number problem asks which positive integers can be the area of a right triangle with all three sides rational. Tunnell's theorem relates this to the number of integral solutions of a few fairly simple Diophantine equations. - -For a given square-free integer n, define - -\begin{align} - -A_n & = \#\{ (x,y,z) \in \mathbb{Z}^3 \mid n = 2x^2 + y^2 + 32z^2 \}, \\ - -B_n & = \#\{ (x,y,z) \in \mathbb{Z}^3 \mid n = 2x^2 + y^2 + 8z^2 \}, \\ - -C_n & = \#\{ (x,y,z) \in \mathbb{Z}^3 \mid n = 8x^2 + 2y^2 + 64z^2 \}, \\ - -D_n & = \#\{ (x,y,z) \in \mathbb{Z}^3 \mid n = 8x^2 + 2y^2 + 16z^2 \}. - -\end{align} - -Tunnell's theorem states that supposing n is a congruent number, if n is odd then 2An = Bn and if n is even then 2Cn = Dn. Conversely, if the Birch and Swinnerton-Dyer conjecture holds true for elliptic curves of the form $y^2 = x^3 - n^2x$, these equalities are sufficient to conclude that n is a congruent number. - -The theorem is named for Jerrold B. Tunnell, a number theorist at Rutgers University, who proved it in Tunnell. - -The importance of Tunnell's theorem is that the criterion it gives is testable by a finite calculation. For instance, for a given $n$, the numbers $A_n,B_n,C_n,D_n$ can be calculated by exhaustively searching through $x,y,z$ in the range $-\sqrt{n},\ldots,\sqrt{n}$. diff --git a/wiki/wikipedia/24.txt b/wiki/wikipedia/24.txt deleted file mode 100644 index a26ccb1b39bd4c6ec7b74ede491e7489789c25f4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/24.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematical logic, Lindström's theorem (named after Swedish logician Per Lindström, who published it in 1969) states that first-order logic is the strongest logic (satisfying certain conditions, e.g. closure under classical negation) having both the (countable) compactness property and the (downward) Löwenheim–Skolem property. - -Lindström's theorem is perhaps the best known result of what later became known as abstract model theory, the basic notion of which is an abstract logic; the more general notion of an institution was later introduced, which advances from a set-theoretical notion of model to a category-theoretical one. Lindström had previously obtained a similar result in studying first-order logics extended with Lindström quantifiers. - -Lindström's theorem has been extended to various other systems of logic, in particular modal logics by Johan van Benthem and Sebastian Enqvist. diff --git a/wiki/wikipedia/240.txt b/wiki/wikipedia/240.txt deleted file mode 100644 index 33a8012d08f4157a733f9aaf95a7484be3b9dd8c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/240.txt +++ /dev/null @@ -1,20 +0,0 @@ -In mathematics, Birch's theorem, named for Bryan John Birch, is a statement about the representability of zero by odd degree forms. - -Let K be an algebraic number field, k, l and n be natural numbers, r1, ..., rk be odd natural numbers, and f1, ..., fk be homogeneous polynomials with coefficients in K of degrees r1, ..., rk respectively in n variables. 
Then there exists a number ψ(r1, ..., rk, l, K) such that if -$$ -n \ge \psi(r_1,\ldots,r_k,l,K) -$$ - -then there exists an l-dimensional vector subspace V of Kn such that -$$ -f_1(x) = \cdots = f_k(x) = 0 \text{ for all } x \in V. -$$ - -The proof of the theorem is by induction over the maximal degree of the forms f1, ..., fk. Essential to the proof is a special case, which can be proved by an application of the Hardy–Littlewood circle method, of the theorem which states that if n is sufficiently large and r is odd, then the equation -$$ -c_1x_1^r+\cdots+c_nx_n^r=0,\quad c_i \in \mathbb{Z},\ i=1,\ldots,n -$$ - -has a solution in integers x1, ..., xn, not all of which are 0. - -The restriction to odd r is necessary, since even degree forms, such as positive definite quadratic forms, may take the value 0 only at the origin. diff --git a/wiki/wikipedia/2400.txt b/wiki/wikipedia/2400.txt deleted file mode 100644 index 98957f43d704c26d097d8a3ce1f6d5980e5c3db0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2400.txt +++ /dev/null @@ -1,3 +0,0 @@ -#redirectSign sequence#Erd.C5.91s_discrepancy_problem - -Category:Mathematical problems diff --git a/wiki/wikipedia/2401.txt b/wiki/wikipedia/2401.txt deleted file mode 100644 index 4cd21176733df7ae6ce3633c9bb9529c29c5fbd1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2401.txt +++ /dev/null @@ -1,102 +0,0 @@ -In mathematics, there are two different results that share the common name of the Ky Fan inequality. One is an inequality involving the geometric mean and arithmetic mean of two sets of real numbers of the unit interval. The result was published on page 5 of the book Inequalities by Edwin F. Beckenbach and Richard E. Bellman (1961), who refer to an unpublished result of Ky Fan. They mention the result in connection with the inequality of arithmetic and geometric means and Augustin Louis Cauchy's proof of this inequality by forward-backward-induction; a method which can also be used to prove the Ky Fan inequality. - -This Ky Fan inequality is a special case of Levinson's inequality and also the starting point for several generalizations and refinements; some of them are given in the references below. - -The second Ky Fan inequality is used in game theory to investigate the existence of an equilibrium. - -If xi with 0 ≤ xi ≤ $ \frac{1}{2} $ for i = 1, ..., n are real numbers, then - - \frac{ \bigl(\prod_{i=1}^n x_i\bigr)^{1/n} } - -{ \bigl(\prod_{i=1}^n (1-x_i)\bigr)^{1/n} } - -\le - -\frac{ \frac1n \sum_{i=1}^n x_i } - -{ \frac1n \sum_{i=1}^n (1-x_i) } - - - -with equality if and only if x1 = x2 = . . . = xn. - -Let -$$ -A_n:=\frac1n\sum_{i=1}^n x_i,\qquad G_n=\biggl(\prod_{i=1}^n x_i\biggr)^{1/n} -$$ - -denote the arithmetic and geometric mean, respectively, of x1, . . ., xn, and let -$$ -A_n':=\frac1n\sum_{i=1}^n (1-x_i),\qquad G_n'=\biggl(\prod_{i=1}^n (1-x_i)\biggr)^{1/n} -$$ - -denote the arithmetic and geometric mean, respectively, of 1 - x1, . . ., 1 - xn. Then the Ky Fan inequality can be written as -$$ -\frac{G_n}{G_n'}\le\frac{A_n}{A_n'}, -$$ - -which shows the similarity to the inequality of arithmetic and geometric means given by Gn ≤ An. - -If xi ∈ [0,½] and γi ∈ [0,1] for i = 1, . . ., n are real numbers satisfying γ1 + . . . + γn = 1, then - - \frac{ \prod_{i=1}^n x_i^{\gamma_i} } - -{ \prod_{i=1}^n (1-x_i)^{\gamma_i} } - -\le - -\frac{ \sum_{i=1}^n \gamma_i x_i } - -{ \sum_{i=1}^n \gamma_i (1-x_i) } - - - -with the convention 00 := 0. Equality holds if and only if either - -*γixi = 0 for all i = 1, . 
. ., n or - -*all xi > 0 and there exists x ∈ (0,½] such that x = xi for all i = 1, . . ., n with γi > 0. - -The classical version corresponds to γi = 1/n for all i = 1, . . ., n. - -Idea: Apply Jensen's inequality to the strictly concave function -$$ -f(x):= \ln x-\ln(1-x) = \ln\frac x{1-x},\qquad x\in(0,\tfrac12]. -$$ - -Detailed proof: (a) If at least one xi is zero, then the left-hand side of the Ky Fan inequality is zero and the inequality is proved. Equality holds if and only if the right-hand side is also zero, which is the case when γixi = 0 for all i = 1, . . ., n. - -(b) Assume now that all xi > 0. If there is an i with γi = 0, then the corresponding xi > 0 has no effect on either side of the inequality, hence the ith term can be omitted. Therefore, we may assume that γi > 0 for all i in the following. If x1 = x2 = . . . = xn, then equality holds. It remains to show strict inequality if not all xi are equal. - -The function f is strictly concave on (0,½], because we have for its second derivative -$$ -f''(x)=-\frac1{x^2}+\frac1{(1-x)^2}<0,\qquad x\in(0,\tfrac12). -$$ - -Using the functional equation for the natural logarithm and Jensen's inequality for the strictly concave f, we obtain that - - \begin{align} \ln\frac{ \prod_{i=1}^n x_i^{\gamma_i} }{ \prod_{i=1}^n (1-x_i)^{\gamma_i} } &=\ln\prod_{i=1}^n\Bigl(\frac{x_i}{1-x_i}\Bigr)^{\gamma_i}\\ &=\sum_{i=1}^n \gamma_i f(x_i)\\ &<f\Bigl(\sum_{i=1}^n \gamma_i x_i\Bigr)=\ln\frac{\sum_{i=1}^n \gamma_i x_i}{1-\sum_{i=1}^n \gamma_i x_i}=\ln\frac{\sum_{i=1}^n \gamma_i x_i}{\sum_{i=1}^n \gamma_i (1-x_i)}, \end{align} - -where we used, in the second-to-last step, Jensen's inequality for the strictly concave f (the inequality is strict because not all xi are equal), and in the last step that the γi sum to one. Taking the exponential of both sides gives the Ky Fan inequality. - -A second inequality is also called the Ky Fan Inequality, because of a 1972 paper, "A minimax inequality and its applications". This second inequality is equivalent to the Brouwer Fixed Point Theorem, but is often more convenient. Let S be a compact convex subset of a finite-dimensional vector space V, and let $f(x,y)$ be a function from $S \times S$ to the real numbers that is lower semicontinuous in x, concave in y and has $f(z,z) \le 0$ for all z in S. Then there exists $x^* \in S$ such that $f( x^*, y ) \le 0 $ for all $y \in S$. This Ky Fan Inequality is used to establish the existence of equilibria in various games studied in economics. diff --git a/wiki/wikipedia/2402.txt b/wiki/wikipedia/2402.txt deleted file mode 100644 index dc36cc104fa8f96f32b594c35d94566718d2a836..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2402.txt +++ /dev/null @@ -1,129 +0,0 @@ -The language of mathematics has a vast vocabulary of specialist and technical terms. It also has a certain amount of jargon: commonly used phrases which are part of the culture of mathematics, rather than of the subject. Jargon often appears in lectures, and sometimes in print, as informal shorthand for rigorous arguments or precise ideas. Much of this is common English, but with a specific non-obvious meaning when used in a mathematical sense. - -Some phrases, like "in general", appear below in more than one section. - -; abstract nonsense:A tongue-in-cheek reference to category theory, by means of which one can employ arguments that establish a (possibly concrete) result without reference to any specifics of the problem at hand. For that reason, it's also known as general abstract nonsense or generalized abstract nonsense. - -; canonical:A reference to a standard or choice-free presentation of some mathematical object (e.g., canonical map, canonical form, or canonical ordering). The same term can also be used more informally to refer to something "standard" or "classic".
For example, one might say that Euclid's proof is the "canonical proof" of the infinitude of primes. - -; deep:A result is called "deep" if its proof requires concepts and methods that are advanced beyond the concepts needed to formulate the result. For example, the prime number theorem, originally proved using techniques of complex analysis, was once thought to be a deep result until elementary proofs were found. On the other hand, the fact that π is irrational is usually considered a deep result, because it requires a considerable development of real analysis before the proof can be established, even though the claim itself can be stated in terms of simple number theory and geometry. - -; elegant:An aesthetic term referring to the ability of an idea to provide insight into mathematics, whether by unifying disparate fields, introducing a new perspective on a single field, or by providing a technique of proof which is either particularly simple, or which captures the intuition or imagination as to why the result it proves is true. On some occasions, the term "beautiful" can also be used to the same effect, though Gian-Carlo Rota distinguished between elegance of presentation and beauty of concept, saying that for example, some topics could be written about elegantly although the mathematical content is not beautiful, and some theorems or proofs are beautiful but may be written about inelegantly. - -; elementary:A proof or a result is called "elementary" if it only involves basic concepts and methods in the field, and is to be contrasted with deep results which require more development within or outside the field. The concept of "elementary proof" is used specifically in number theory, where it usually refers to a proof that does not resort to methods from complex analysis. - -; folklore:A result is called "folklore" if it is non-obvious and non-published, yet somehow generally known to the specialists within a field. In many scenarios, it is unclear who first obtained the result, though if the result is significant, it may eventually find its way into the textbooks, whereupon it ceases to be folklore. - -; natural:Similar to "canonical" but more specific: a description (almost exclusively in the context of transformations) which holds independently of any choices. Though long used informally, this term has found a formal definition in category theory. - -; pathological:An object behaves pathologically (or, somewhat more broadly used, in a degenerate way) if it either fails to conform to the generic behavior of such objects, fails to satisfy certain context-dependent regularity properties, or simply disobeys mathematical intuition. These can be, and often are, contradictory requirements; in other cases, the term is more deliberately used to refer to an object artificially constructed as a counterexample to these properties. A simple example is that, from the definition of a triangle as having angles which sum to π radians, a single straight line conforms to this definition pathologically. - -Note, however, that since the differentiable functions are meagre in the space of continuous functions, as Banach found out in 1931, differentiable functions are, colloquially speaking, a rare exception among the continuous ones; it can thus hardly be defended any more to call non-differentiable continuous functions pathological.
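A classical concrete witness of this point, added here for illustration, is the Weierstrass function
$$
W(x) = \sum_{n=0}^{\infty} a^n \cos(b^n \pi x), \qquad 0 < a < 1,\quad b \text{ an odd integer},\quad ab > 1 + \tfrac{3\pi}{2},
$$
which is continuous everywhere yet differentiable nowhere; by the meagreness observation above, such "pathological" functions are in fact the typical case among continuous functions.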
- -; rigor (rigour):The act of establishing a mathematical result using indisputable logic, rather than informal descriptive argument. Rigor is a cornerstone quality of mathematics, and can play an important role in preventing mathematics from degenerating into fallacies. - -; well-behaved:An object is well-behaved (in contrast with being pathological) if it satisfies certain prevailing regularity properties, or if it conforms to mathematical intuition (even though intuition can often suggest opposite behaviors as well). In some occasions (e.g., analysis), the term "smooth" can also be used to the same effect. - -Although ultimately every mathematical argument must meet a high standard of precision, mathematicians use descriptive but informal statements to discuss recurring themes or concepts with unwieldy formal statements. Note that many of the terms are completely rigorous in context. - -; almost all: A shorthand term for "all except for a set of measure zero", when there is a measure to speak of. For example, "almost all real numbers are transcendental" because the algebraic real numbers form a countable subset of the real numbers with measure zero. One can also speak of "almost all" integers having a property to mean "all except finitely many", despite the integers not admitting a measure for which this agrees with the previous usage. For example, "almost all prime numbers are odd". There is a more complicated meaning for integers as well, discussed in the main article. Finally, this term is sometimes used synonymously with generic, below. - -; arbitrarily large: Notions which arise mostly in the context of limits, referring to the recurrence of a phenomenon as the limit is approached. A statement such as that predicate P is satisfied by arbitrarily large values, can be expressed in more formal notation by ∀x : ∃y ≥ x : P(y). See also frequently. The statement that quantity f(x) depending on x "can be made" arbitrarily large, corresponds to ∀y : ∃x : f(x) ≥ y. - -; arbitrary: A shorthand for the universal quantifier. An arbitrary choice is one which is made unrestrictedly, or alternatively, a statement holds of an arbitrary element of a set if it holds of any element of that set. Also much in general-language use among mathematicians: "Of course, this problem can be arbitrarily complicated". - -; eventually:In the context of limits, this is shorthand meaning for sufficiently large arguments; the relevant argument(s) are implicit in the context. As an example, the function log(log(x)) eventually becomes larger than 100"; in this context, "eventually" means "for sufficiently large x." - -; factor through: A term in category theory referring to composition of morphisms. If we have three objects A, B, and C and a map $f \colon A \to C$ which is written as a composition $f = h \circ g$ with $g \colon A \to B$ and $h \colon B \to C$, then f is said to factor through any (and all) of $B$, $g$, and $h$. - -; finite: "Not infinite". For example, if the variance of a random variable is said to be finite, this implies it is a non-negative real number. - -; frequently: In the context of limits, this is shorthand for arbitrarily large arguments and its relatives; as with eventually, the intended variant is implicit. As an example, the sequence $(-1)^n$ is frequently in the interval (1/2, 3/2), because there are arbitrarily large n for which the value of the sequence is in the interval. 
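Since "eventually" and "frequently" are quantifier duals of each other, it may help to place the two patterns, already given informally in the entries above, side by side:
$$
\text{eventually: } \exists x : \forall y \ge x : P(y), \qquad \text{frequently: } \forall x : \exists y \ge x : P(y).
$$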
- -; generic: This term has connotations similar to almost all but is used particularly for concepts outside the purview of measure theory. A property holds "generically" on a set if the set satisfies some (context-dependent) notion of density, or perhaps if its complement satisfies some (context-dependent) notion of smallness. For example, a property which holds on a dense Gδ (intersection of countably many open sets) is said to hold generically. In algebraic geometry, one says that a property of points on an algebraic variety that holds on a dense Zariski open set is true generically; however, it is usually not said that a property which holds merely on a dense set (which is not Zariski open) is generic in this situation. - -; in general: In a descriptive context, this phrase introduces a simple characterization of a broad class of objects, with an eye towards identifying a unifying principle. This term introduces an "elegant" description which holds for "arbitrary" objects. Exceptions to this description may be mentioned explicitly, as "pathological" cases. - -; left-hand side, right-hand side (LHS, RHS): Most often, these refer simply to the left-hand or the right-hand side of an equation; for example, $x = y + 1$ has $x$ on the LHS and $y + 1$ on the RHS. Occasionally, these are used in the sense of lvalue and rvalue: an RHS is primitive, and an LHS is derivative. - -; nice: A mathematical object is colloquially called nice or sufficiently nice if it satisfies hypotheses or properties, sometimes unspecified or even unknown, that are especially desirable in a given context. It is an informal antonym for pathological. For example, one might conjecture that a differential operator ought to satisfy a certain boundedness condition "for nice test functions," or one might state that some interesting topological invariant should be computable "for nice spaces X." - -; onto: A function (which in mathematics is generally defined as mapping the elements of one set A to elements of another B) is called "A onto B" (instead of "A to B" or "A into B") only if it is surjective; it may even be said that "f is onto" (i.e. surjective). Not translatable (without circumlocutions) to some languages other than English. - -; proper: If, for some notion of substructure, objects are substructures of themselves (that is, the relationship is reflexive), then the qualification proper requires the objects to be different. For example, a proper subset of a set S is a subset of S that is different from S, and a proper divisor of a number n is a divisor of n that is different from n. This overloaded word is also non-jargon for a proper morphism. - -; regular: A function is called regular if it satisfies satisfactory continuity and differentiability properties, which are often context-dependent. These properties might include possessing a specified number of derivatives, with the function and its derivatives exhibiting some nice property (see nice above), such as Hölder continuity. Informally, this term is sometimes used synonymously with smooth, below. These imprecise uses of the word regular are not to be confused with the notion of a regular topological space, which is rigorously defined. - -; resp.: (Respectively) A convention to shorten parallel expositions. "A (resp. B) [has some relationship to] X (resp. Y)" means that A [has some relationship to] X and also that B [has (the same) relationship to] Y. For example, squares (resp. triangles) have 4 sides (resp. 3 sides); or compact (resp.
Lindelöf) spaces are ones where every open cover has a finite (resp. countable) open subcover. - -; sharp: Often, a mathematical theorem will establish constraints on the behavior of some object; for example, a function will be shown to have an upper or lower bound. The constraint is sharp (sometimes optimal) if it cannot be made more restrictive without failing in some cases. For example, for arbitrary non-negative real numbers x, the exponential function $e^x$, where e = 2.7182818..., gives an upper bound on the values of the quadratic function $x^2$. This is not sharp; the gap between the functions is everywhere at least 1. Among the exponential functions of the form $\alpha^x$, setting $\alpha = e^{2/e} = 2.0870652...$ results in a sharp upper bound; the slightly smaller choice $\alpha = 2$ fails to produce an upper bound, since then $\alpha^3 = 8 < 3^2 = 9$. In applied fields the word "tight" is often used with the same meaning. - -; smooth: Smoothness is a concept which mathematics has endowed with many meanings, from simple differentiability to infinite differentiability to analyticity, and still others which are more complicated. Each such usage attempts to invoke the physically intuitive notion of smoothness. - -; strong, stronger: A theorem is said to be strong if it deduces restrictive results from general hypotheses. One celebrated example is Donaldson's theorem, which puts tight restraints on what would otherwise appear to be a large class of manifolds. This (informal) usage reflects the opinion of the mathematical community: not only should such a theorem be strong in the descriptive sense (below) but it should also be definitive in its area. A theorem, result, or condition is further called stronger than another one if a proof of the second can be easily obtained from the first but not conversely. An example is the sequence of theorems: Fermat's little theorem, Euler's theorem, Lagrange's theorem, each of which is stronger than the last; another is that a sharp upper bound (see sharp above) is a stronger result than a non-sharp one. Finally, the adjective strong or the adverb strongly may be added to a mathematical notion to indicate a related stronger notion; for example, a strong antichain is an antichain satisfying certain additional conditions, and likewise a strongly regular graph is a regular graph meeting stronger conditions. When used in this way, the stronger notion (such as "strong antichain") is a technical term with a precisely defined meaning; the nature of the extra conditions cannot be derived from the definition of the weaker notion (such as "antichain"). - -; sufficiently large, suitably small, sufficiently close: In the context of limits, these terms refer to some (unspecified, even unknown) point at which a phenomenon prevails as the limit is approached. A statement such as "predicate P holds for sufficiently large values" can be expressed in more formal notation by ∃x : ∀y ≥ x : P(y). See also eventually. - -; upstairs, downstairs: A descriptive term referring to notation in which two objects are written one above the other; the upper one is upstairs and the lower, downstairs. For example, in a fiber bundle, the total space is often said to be upstairs, with the base space downstairs. In a fraction, the numerator is occasionally referred to as upstairs and the denominator downstairs, as in "bringing a term upstairs". - -; up to, modulo, mod out by: An extension to mathematical discourse of the notions of modular arithmetic.
A statement is true up to a condition if the establishment of that condition is the only impediment to the truth of the statement. Also used when working with members of equivalence classes, especially in category theory, where the equivalence relation is (categorical) isomorphism; for example, "The tensor product in a weak monoidal category is associative and unital up to a natural isomorphism." - -; vanish: To assume the value 0. For example, "The function sin(x) vanishes for those values of x that are integer multiples of π." This can also apply to limits: see Vanish at infinity. - -; weak, weaker: The converse of strong. - -; well-defined: Accurately and precisely described or specified. For example, sometimes a definition relies on a choice of some object; the result of the definition must then be independent of this choice. - -The formal language of proof draws repeatedly from a small pool of ideas, many of which are invoked through various lexical shorthands in practice. - -; aliter: An obsolescent term which is used to announce to the reader an alternative method or proof of a result. In a proof, it therefore flags a piece of reasoning that is superfluous from a logical point of view, but has some other interest. - -; by way of contradiction (BWOC), or "for, if not, ...": The rhetorical prelude to a proof by contradiction, preceding the negation of the statement to be proved. - -; if and only if (iff): An abbreviation for logical equivalence of statements. - -; in general: In the context of proofs, this phrase is often seen in induction arguments when passing from the base case to the induction step, and similarly, in the definition of sequences whose first few terms are exhibited as examples of the formula giving every term of the sequence. - -; necessary and sufficient: A minor variant on "if and only if"; "A is necessary (sufficient) for B" means "A if (only if) B". For example, "For a field K to be algebraically closed it is necessary and sufficient that it have no finite field extensions" means "K is algebraically closed if and only if it has no finite extensions". Often used in lists, as in "The following conditions are necessary and sufficient for a field to be algebraically closed...". - -; need to show (NTS), required to prove (RTP), wish to show, want to show (WTS): Proofs sometimes proceed by enumerating several conditions whose satisfaction will together imply the desired theorem; thus, one needs to show just these statements. - -; one and only one: A statement of the existence and uniqueness of an object; the object exists, and furthermore, no other such object exists. - -; Q.E.D.: (Quod erat demonstrandum): A Latin abbreviation, meaning "which was to be demonstrated", historically placed at the end of proofs, but less common currently, having been supplanted by the Halmos end-of-proof mark, a square sign ∎. - -; sufficiently nice: A condition on objects in the scope of the discussion, to be specified later, that will guarantee that some stated property holds for them. When working out a theorem, the use of this expression in the statement of the theorem indicates that the conditions involved may not yet be known to the speaker, and that the intent is to collect the conditions that will be found to be needed in order for the proof of the theorem to go through.
- -; the following are equivalent (TFAE): Often several equivalent conditions (especially for a definition, such as normal subgroup) are equally useful in practice; one introduces a theorem stating an equivalence of more than two statements with TFAE. - -; transport of structure: It is often the case that two objects are shown to be equivalent in some way, and that one of them is endowed with additional structure. Using the equivalence, we may define such a structure on the second object as well, via transport of structure. For example, any two vector spaces of the same dimension are isomorphic; if one of them is given an inner product and if we fix a particular isomorphism, then we may define an inner product on the other space by factoring through the isomorphism. - -; without (any) loss of generality (WLOG, WOLOG, WALOG), we may assume (WMA): Sometimes a proposition can be more easily proved with additional assumptions on the objects it concerns. If the proposition as stated follows from this modified one with a simple and minimal explanation (for example, if the remaining special cases are identical but for notation), then the modified assumptions are introduced with this phrase and the altered proposition is proved. - -Mathematicians have several phrases to describe proofs or proof techniques. These are often used as hints for filling in tedious details. - -; angle chasing: Used to describe a geometrical proof that involves finding relationships between the various angles in a diagram. - -; back-of-the-envelope calculation: An informal computation omitting much rigor without sacrificing correctness. Often this computation is "proof of concept" and treats only an accessible special case. - -; brute force: Rather than finding underlying principles or patterns, this is a method where one evaluates as many cases as needed to prove, or to provide convincing evidence, that the thing in question is true. Sometimes this involves evaluating every possible case (where it is also known as proof by exhaustion). - -; by example: A proof by example is an argument whereby a statement is not proved but instead illustrated by an example. If done well, the specific example would easily generalize to a general proof. - -; by inspection: A rhetorical shortcut made by authors who invite the reader to verify, at a glance, the correctness of a proposed expression or deduction. If an expression can be evaluated by straightforward application of simple techniques and without recourse to extended calculation or general theory, then it can be evaluated by inspection. It is also applied to solving equations; for example to find roots of a quadratic equation by inspection is to 'notice' them, or mentally check them. 'By inspection' can play a kind of gestalt role: the answer or solution simply clicks into place. - -; by intimidation: Style of proof where claims believed by the author to be easily verifiable are labelled as 'obvious' or 'trivial', which often results in the reader being confused. - -; clearly, can be easily shown: A term which shortcuts around calculation the mathematician perceives to be tedious or routine, accessible to any member of the audience with the necessary expertise in the field; Laplace used obvious (French: évident). - -; complete intuition: Commonly reserved for jokes (puns on complete induction).
- -; diagram chasing: Given a commutative diagram of objects and morphisms between them, if one wishes to prove some property of the morphisms (such as injectivity) which can be stated in terms of elements, then the proof can proceed by tracing the path of elements of various objects around the diagram as successive morphisms are applied to it. That is, one chases elements around the diagram, or does a diagram chase. - -; handwaving: A non-technique of proof mostly employed in lectures, where formal argument is not strictly necessary. It proceeds by omission of details or even significant ingredients, and is merely a plausibility argument. - -; in general: In a context not requiring rigor, this phrase often appears as a labor-saving device when the technical details of a complete argument would outweigh the conceptual benefits. The author gives a proof in a simple enough case that the computations are reasonable, and then indicates that "in general" the proof is similar. - -; index battle: For proofs involving objects with multiple indices which can be solved by going to the bottom (if anyone wishes to take up the effort). Similar to diagram chasing. - -; trivial: Similar to clearly. A concept is trivial if it holds by definition, is an immediate corollary to a known statement, or is a simple special case of a more general concept. diff --git a/wiki/wikipedia/2403.txt b/wiki/wikipedia/2403.txt deleted file mode 100644 index 9c0e240b9c14e7f7e96eda5fccb6cbbf399ca253..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2403.txt +++ /dev/null @@ -1,28 +0,0 @@ -In mathematics, a universal space is a certain metric space that contains all metric spaces whose dimension is bounded by some fixed constant. A similar definition exists in topological dynamics. - -Given a class $\textstyle \mathcal{C}$ of topological spaces, $\textstyle \mathbb{U}\in\mathcal{C}$ is universal for $\textstyle \mathcal{C}$ if each member of $\textstyle \mathcal{C}$ embeds in $\textstyle \mathbb{U}$. Menger stated and proved the case $\textstyle d=1$ of the following theorem. The theorem in full generality was proven by Nöbeling. - -Theorem: - -The $\textstyle (2d+1)$-dimensional cube $\textstyle [0,1]^{2d+1}$ is universal for the class of compact metric spaces whose Lebesgue covering dimension is less than $\textstyle d$. - -Nöbeling went further and proved: - -Theorem: The subspace of $\textstyle [0,1]^{2d+1}$ consisting of the set of points, at most $\textstyle d$ of whose coordinates are rational, is universal for the class of separable metric spaces whose Lebesgue covering dimension is less than $\textstyle d$. - -The last theorem was generalized by Lipscomb to the class of metric spaces of weight $\textstyle \alpha$, $\textstyle \alpha>\aleph_{0}$: There exists a one-dimensional metric space $\textstyle J_{\alpha}$ such that the subspace of $\textstyle J_{\alpha}^{2d+1}$ consisting of the set of points, at most $\textstyle d$ of whose coordinates are "rational" (suitably defined), is universal for the class of metric spaces whose Lebesgue covering dimension is less than $\textstyle d$ and whose weight is less than $\textstyle \alpha$. - -Consider the category of topological dynamical systems $\textstyle (X,T)$ consisting of a compact metric space $\textstyle X$ and a homeomorphism $\textstyle T:X\rightarrow X$. The topological dynamical system $\textstyle (X,T)$ is called minimal if it has no proper non-empty closed $\textstyle T$-invariant subsets. It is called infinite if $\textstyle |X|=\infty$.
A topological dynamical system $\textstyle (Y,S)$ is called a factor of $\textstyle (X,T)$ if there exists a continuous surjective mapping $\textstyle \varphi:X\rightarrow Y$ which is equivariant, i.e. $\textstyle \varphi(Tx)=S\varphi(x)$ for all $\textstyle x\in X$. - -Similarly to the definition above, given a class $\textstyle \mathcal{C}$ of topological dynamical systems, $\textstyle \mathbb{U}\in\mathcal{C}$ is universal for $\textstyle \mathcal{C}$ if each member of $\textstyle \mathcal{C}$ embeds in $\textstyle \mathbb{U}$ through an equivariant continuous mapping. Lindenstrauss proved the following theorem: - -Theorem: Let $\textstyle d\in\mathbb{N}$. The compact metric topological dynamical system $\textstyle (X,T)$ where $\textstyle X=([0,1]^{d})^{\mathbb{Z}}$ and $\textstyle T:X\rightarrow X$ is the shift homeomorphism -$$ -\textstyle (\ldots,x_{-2},x_{-1},\mathbf{x_{0}},x_{1},x_{2},\ldots)\rightarrow(\ldots,x_{-1},x_{0},\mathbf{x_{1}},x_{2},x_{3},\ldots) -$$ - -is universal for the class of compact metric topological dynamical systems whose mean dimension is strictly less than $\textstyle \frac{d}{36}$ and which possess an infinite minimal factor. - -In the same article Lindenstrauss asked what is the largest constant $\textstyle c$ such that a compact metric topological dynamical system whose mean dimension is strictly less than $\textstyle cd$ and which possesses an infinite minimal factor embeds into $\textstyle ([0,1]^{d})^{\mathbb{Z}}$. The results above imply $\textstyle c \geq \frac{1}{36}$. The question was answered by Lindenstrauss and Tsukamoto, who showed that $\textstyle c \leq \frac{1}{2}$, and by Gutman and Tsukamoto, who showed that $\textstyle c \geq \frac{1}{2}$. Thus the answer is $\textstyle c=\frac{1}{2}$. diff --git a/wiki/wikipedia/2404.txt b/wiki/wikipedia/2404.txt deleted file mode 100644 index 17faec833ff14d83c5ad4a9eff8644425c6b6fb2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2404.txt +++ /dev/null @@ -1,51 +0,0 @@ -In mathematics, Carathéodory's theorem is a theorem in complex analysis, named after Constantin Carathéodory, which extends the Riemann mapping theorem. The theorem, first proved in 1913, states that the conformal mapping sending the unit disk to the region in the complex plane bounded by a Jordan curve extends continuously to a homeomorphism from the unit circle onto the Jordan curve. The result is one of Carathéodory's results on prime ends and the boundary behaviour of univalent holomorphic functions. - -The first proof of Carathéodory's theorem presented here is a summary of the short self-contained account in Garnett; there are related proofs in Pommerenke and Krantz. - -Clearly if f admits an extension to a homeomorphism, then ∂U must be a Jordan curve. - -Conversely if ∂U is a Jordan curve, the first step is to prove f extends continuously to the closure of D. In fact this will hold if and only if f is uniformly continuous on D: for this is true if it has a continuous extension to the closure of D; and, if f is uniformly continuous, it is easy to check f has limits on the unit circle and the same inequalities for uniform continuity hold on the closure of D. - -Suppose that f is not uniformly continuous. In this case there must be an ε > 0 and a point ζ on the unit circle and sequences zn, wn tending to ζ with |f(zn) − f(wn)| ≥ 2ε. This is shown below to lead to a contradiction, so that f must be uniformly continuous and hence has a continuous extension to the closure of D.
- -For 0 < r < 1, let γr be the curve given by the arc of the circle | z − ζ | = r lying within D. Then f ∘ γr is a Jordan arc. Its length can be estimated using the Cauchy–Schwarz inequality: -$$ -\displaystyle{\ell(f\circ \gamma_r) = \int_{\gamma_r} |f^\prime(z)| |dz|\le \left(\int_{\gamma_r}|dz|\right)^{1/2}\cdot \left(\int_{\gamma_r}|f^\prime(z)|^2|dz|\right)^{1/2}\le (2\pi r)^{1/2} \cdot\left(\int_{\{\theta:|\zeta +re^{i\theta}|<1\}} |f^\prime(\zeta + re^{i\theta})|^2rd\theta\right)^{1/2}.} -$$ - -Hence there is a "length-area estimate": -$$ -\displaystyle{\int_0^{1} \ell(f\circ \gamma_r)^2 {dr\over r} \le 2\pi \int_{|z|<1} |f^\prime(z)|^2 dx dy = 2\pi \cdot {\rm Area}f(D)<\infty.} -$$ - -The finiteness of the integral on the left hand side implies that there is a sequence rn decreasing to 0 with $\ell(f\circ \gamma_{r_n})$ tending to 0. But the length of a curve g(t) for t in (a, b) is given by -$$ -\displaystyle{\ell(g) =\sup_{a< t_1< t_2 < \cdots < t_k < b} \sum_{i=1}^{k-1} |g(t_{i+1})-g(t_i)|.} -$$ - -Since $\ell(f\circ \gamma_{r_n})$ tends to 0, the curve f ∘ γrn has limit points an, bn at its two ends with |an – bn| ≤ $\ell(f\circ \gamma_{r_n})$, so this difference tends to 0. These two limit points must lie on ∂U, because f is a homeomorphism between D and U and thus a sequence converging in U has to be the image under f of a sequence converging in D. Since ∂U is a homeomorphic image of the circle ∂D, the distance between the two corresponding parameters ξn and ηn in ∂U must tend to 0. So eventually the smallest circular arc in ∂D joining ξn and ηn is defined and, by uniform continuity, the diameter of its image τn tends to 0. Together τn and f ∘ γrn form a simple Jordan curve. Its interior Un is contained in U by the Jordan curve theorem for ∂U and ∂Un: to see this, notice that U is the interior of ∂U, as it is bounded, connected and it is both open and closed in the complement of ∂U; so the exterior region of ∂U is unbounded, connected and does not intersect ∂Un, hence its closure is contained in the closure of the exterior of ∂Un; taking complements, we get the desired inclusion. The diameter of ∂Un tends to 0 because the diameters of τn and f ∘ γrn tend to 0. Hence the diameter and the area of Un tend to 0. - -Now if Vn denotes the intersection of D with the disk |z − ζ| < rn, then f(Vn) = Un. Indeed, the arc γrn divides D into Vn and a complementary region; Un is a connected component of U \ f ∘ γrn, as it is connected and is both open and closed in this set, so under the conformal homeomorphism f the curve f ∘ γrn divides U into Un and a complementary region Un′, one of which equals f(Vn). Since the areas of f(Vn) and Un tend to 0, while the sum of the areas of Un and Un′ is fixed, it follows that f(Vn) = Un. - -So the diameter of f(Vn) tends to 0. On the other hand, passing to subsequences of (zn) and (wn) if necessary, it may be assumed that zn and wn both lie in Vn. But this gives a contradiction since |f(zn) − f(wn)| ≥ 2ε. So f must be uniformly continuous on D. - -Thus f extends continuously to the closure of D. Since f(D) = U, by compactness f carries the closure of D onto the closure of U and hence ∂D onto ∂U. If f is not one-one, there are points u, v on ∂D with u ≠ v and f(u) = f(v). Let X and Y be the radial lines from 0 to u and v. Then f(X ∪ Y) is a Jordan curve. Arguing as before, its interior V is contained in U and is a connected component of U \ f(X ∪ Y). On the other hand, D \ (X ∪ Y) is the disjoint union of two open sectors W1 and W2. Hence, for one of them, W1 say, f(W1) = V.
Let Z be the portion of ∂W1 on the unit circle, so that Z is a closed arc and f(Z) is a subset of both ∂U and the closure of V. But their intersection is a single point and hence f is constant on Z. By the Schwarz reflection principle, f can be analytically continued by conformal reflection across the circular arc. Since non-constant holomorphic functions have isolated zeros, this forces f to be constant, a contradiction. So f is one-one and hence a homeomorphism on the closure of D. - -Two further proofs of Carathéodory's theorem were given by Carathéodory himself. The first follows his original method of proof from 1913 using properties of Lebesgue measure on the circle: the continuous extension of the inverse function g of f to ∂U is justified by Fatou's theorem on the boundary behaviour of bounded harmonic functions on the unit disk. The second is based on the method of Lindelöf, where a sharpening of the maximum modulus inequality was established for bounded holomorphic functions h defined on a bounded domain V: if a lies in V, then - -|h(a)| ≤ $m^t \cdot M^{1-t}$, - -where 0 ≤ t ≤ 1, M is the maximum modulus of h for sequential limits on ∂V and m is the maximum modulus of h for sequential limits on ∂V lying in a sector centred on a subtending an angle 2πt at a. - -An extension of the theorem states that a conformal isomorphism -$$ - g\colon \mathbb{D} \to U -$$, - -where $U$ is a simply connected subset of the Riemann sphere, extends continuously to the unit circle if and only if the boundary of $U$ is locally connected. - -This result is often also attributed to Carathéodory, but was first stated and proved by Marie Torhorst in her 1918 thesis, under the supervision of Hans Hahn, using Carathéodory's theory of prime ends. More precisely, Torhorst proved that local connectivity is equivalent to the domain having only prime ends of the first kind. By the theory of prime ends, the latter property, in turn, is equivalent to $g$ having a continuous extension. diff --git a/wiki/wikipedia/2405.txt b/wiki/wikipedia/2405.txt deleted file mode 100644 index 7c29596f3d93886feaa5e0e2b6c35c95e36cca6b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2405.txt +++ /dev/null @@ -1,11 +0,0 @@ -In mathematics, the Dawson-Gärtner theorem is a result in large deviations theory. Heuristically speaking, the Dawson-Gärtner theorem allows one to transport a large deviation principle on a “smaller” topological space to a “larger” one. - -Let (Yj)j∈J be a projective system of Hausdorff topological spaces with maps pij : Yj → Yi. Let X be the projective limit (also known as the inverse limit) of the system (Yj, pij)i,j∈J, i.e. -$$ -X = \varprojlim_{j \in J} Y_{j} = \left\{ \left. y = (y_{j})_{j \in J} \in Y = \prod_{j \in J} Y_{j} \right| i < j \implies y_{i} = p_{ij} (y_{j}) \right\}. -$$ - -Let (με)ε>0 be a family of probability measures on X. Assume that, for each j ∈ J, the push-forward measures (pj∗με)ε>0 on Yj satisfy the large deviation principle with good rate function Ij : Yj → R ∪ {+∞}. Then the family (με)ε>0 satisfies the large deviation principle on X with good rate function I : X → R ∪ {+∞} given by -$$ -I(x) = \sup_{j \in J} I_{j}(p_{j}(x)).
-$$ diff --git a/wiki/wikipedia/2406.txt b/wiki/wikipedia/2406.txt deleted file mode 100644 index cc1b29145bbff4893fcd0452570ec1d9b18e54f4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2406.txt +++ /dev/null @@ -1,151 +0,0 @@ -In geometry, a Heronian triangle is a triangle that has side lengths and area that are all integers. Heronian triangles are named after Hero of Alexandria. The term is sometimes applied more widely to triangles whose sides and area are all rational numbers, since one can rescale the sides by a common multiple to obtain a triangle that is Heronian in the above sense. - -Any right-angled triangle whose side lengths form a Pythagorean triple is a Heronian triangle, as the side lengths of such a triangle are integers, and its area is also an integer, being half of the product of the two shorter sides of the triangle, at least one of which must be even. - -An example of a Heronian triangle which is not right-angled is the isosceles triangle with side lengths 5, 5, and 6, whose area is 12. This triangle is obtained by joining two copies of the right-angled triangle with sides 3, 4, and 5 along the sides of length 4. This approach works in general: one takes a Pythagorean triple (a, b, c), with c being largest, then another one (a, d, e), with e being largest, constructs the triangles with these side lengths, and joins them together along the sides of length a, to obtain a triangle with integer side lengths c, e, and b + d, and with area -$$ -A=\frac{1}{2}(b+d)a -$$ (one half times the base times the height). - -If a is even then the area A is an integer. Less obviously, if a is odd then A is still an integer, as b and d must both be even, making b + d even too. - -Some Heronian triangles cannot be obtained by joining together two right-angled triangles with integer sides as described above. For example, a 5, 29, 30 Heronian triangle with area 72 cannot be constructed from two integer Pythagorean triangles since none of its altitudes are integers. Also no primitive Pythagorean triangle can be constructed from two smaller integer Pythagorean triangles. Such Heronian triangles are known as indecomposable. Thus every Heronian triangle has an odd number of sides of even length, and every primitive Heronian triangle has exactly one even side. - -*The semiperimeter s of a Heronian triangle with sides a, b and c can never be prime. This can be seen from the fact that s(s−a)(s−b)(s−c) has to be a perfect square; if s were prime, then one of the other factors would have to have s as a factor, but this is impossible, as these factors are all less than s. - -*The area of a Heronian triangle is always divisible by 6. This can be seen from the fact that the area of a triangle is half of one side times its altitude from that side, and a Heronian triangle has integer sides and area. Some Heronian triangles have three non-integer altitudes, for example the acute (15, 34, 35) with area 252 and the obtuse (5, 29, 30) with area 72. Any Heronian triangle with one or more non-integer altitudes can be scaled up by a factor equalling the least common multiple of the altitudes' denominators in order to obtain a similar Heronian triangle with three integer altitudes. - -*Heronian triangles that have no integer altitude (indecomposable and non-Pythagorean) have sides that are all divisible by primes of the form 4k+1. - -*There exist infinitely many primitive Heronian triangles with one side length equal to a, provided that a > 2.
- -*If any two sides (but not three) of a Heronian triangle have a common factor, that factor must be the sum of two squares. - -*Every angle of a Heronian triangle has a rational sine. This follows from the area formula Area = (1/2)ab sin C, in which the area and the sides a and b are integers, and equivalently for the other angles. - -*Every angle of a Heronian triangle has a rational cosine. This follows from the law of cosines, $c^2 = a^2 + b^2 - 2ab\cos C$, in which the sides a, b, and c are integers, and equivalently for the other angles. - -*Since all Heronian triangles have all angles' sines and cosines rational, this implies that each oblique angle of a Heronian triangle has a rational tangent, cotangent, secant, and cosecant. Furthermore, half of each angle has a rational tangent because tan C/2 = sin C / (1 + cos C), and equivalently for the other angles. - -*There are no Heronian triangles whose three internal angles form an arithmetic progression. This is because all plane triangles with angles in an arithmetic progression must have one angle of 60°, which does not have a rational sine. - -*Any square inscribed in a Heronian triangle has rational sides: For a general triangle the inscribed square on side of length a has length $\tfrac{2Aa}{a^2+2A}$ where A is the triangle's area; in a Heronian triangle, both A and a are integers. - -*Every Heronian triangle has a rational inradius (radius of its inscribed circle): For a general triangle the inradius is the ratio of the area to half the perimeter, and both of these are rational in a Heronian triangle. - -*Every Heronian triangle has a rational circumradius (the radius of its circumscribed circle): For a general triangle the circumradius equals one-fourth the product of the sides divided by the area; in a Heronian triangle the sides and area are integers. - -*In a Heronian triangle the distance from the centroid to each side is rational, because for all triangles this distance is the ratio of twice the area to three times the side length. This can be generalized by stating that all centers associated with Heronian triangles whose barycentric coordinates are rational ratios have a rational distance to each side. These centers include the circumcenter, orthocenter, nine-point center, symmedian point, Gergonne point and Nagel point. - -*All Heronian triangles can be placed on a lattice with each vertex at a lattice point. - -The Indian mathematician Brahmagupta (598-668 A.D.) derived the parametric solution such that every Heronian triangle has sides proportional to: -$$ -a=n(m^{2}+k^{2}) -$$ -$$ -b=m(n^{2}+k^{2}) -$$ -$$ -c=(m+n)(mn-k^{2}) -$$ -$$ -\text{Semiperimeter}=s=(a+b+c)/2=mn(m+n) -$$ -$$ -\text{Area}=mnk(m+n)(mn-k^{2}) -$$ -$$ -\text{Inradius}=k(mn-k^{2}) -$$ -$$ -s-a=n(mn-k^{2}) -$$ -$$ -s-b=m(mn-k^{2}) -$$ -$$ -s-c=(m+n)k^{2} -$$ - -for integers m, n and k where: -$$ -\gcd{(m,n,k)}=1 -$$ -$$ -mn > k^2 \ge 1 -$$ -$$ - m \ge n \ge 1 -$$. - -The proportionality factor is generally a rational number p/q, where q = gcd(a, b, c) reduces the generated Heronian triangle to its primitive and p scales this primitive up to the required size. For example, taking m = 36, n = 4 and k = 3 produces a triangle with a = 5220, b = 900 and c = 5400, which is similar to the 5, 29, 30 Heronian triangle, the proportionality factor having p = 1 and q = 180. - -The obstacle for a computational use of Brahmagupta's parametric solution is the denominator q of the proportionality factor.
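In code, this generate-and-reduce process can be sketched as follows (a minimal Python sketch; the function names are ours, not from any library):

```python
from math import gcd

def brahmagupta_sides(m, n, k):
    """Sides from Brahmagupta's parametrization.
    Assumes gcd(m, n, k) = 1, m >= n >= 1 and m*n > k*k >= 1."""
    a = n * (m * m + k * k)
    b = m * (n * n + k * k)
    c = (m + n) * (m * n - k * k)
    return a, b, c

def reduce_to_primitive(a, b, c):
    """Divide out q = gcd(a, b, c) to obtain the primitive triangle."""
    q = gcd(a, b, c)  # math.gcd accepts several arguments in Python 3.9+
    return a // q, b // q, c // q

# The example from the text: m = 36, n = 4, k = 3.
sides = brahmagupta_sides(36, 4, 3)
print(sides)                        # (5220, 900, 5400)
print(reduce_to_primitive(*sides))  # (29, 5, 30), i.e. the 5, 29, 30 triangle
```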
q can only be determined by calculating the greatest common divisor of the three sides (gcd(a, b, c)) and introduces an element of unpredictability into the generation process. One explicit one-parameter family of Heronian triangles, given together with its area A, inradius r, and exradii $r_a$, $r_b$, $r_c$, is: -$$ -a= 25n^2 +5n-5=5(5n^2+n-1), -$$ -$$ -b= 25n^3 -5n^2 -7n+3=(5n+3)(5n^2 -4n+1), -$$ -$$ -c= 25n^3 +20n^2 -2n-4=(5n-2)(5n^2+6n+2), -$$ -$$ -A = (5n-2)(5n+3)(5n^2 +n-1), -$$ -$$ -r =5n-2, -$$ -$$ -r_a=5n+3, -$$ -$$ -r_b=5n^2+n-1, -$$ -$$ - r_c= A =(5n-2)(5n+3)(5n^2 +n-1). -$$ - -There are infinitely many Heronian triangles that can be placed on a lattice such that not only are the vertices at lattice points, as holds for all Heronian triangles, but additionally the centers of the incircle and excircles are at lattice points. - -See also formulas for Heronian triangles with one angle equal to twice another, Heronian triangles with sides in arithmetic progression, and isosceles Heronian triangles. - -The tangent of half of any interior angle of a Heronian triangle is necessarily rational; see properties above. These half angles are positive, and they sum to 90° (π/2 radians) because the interior angles (A, B, C) of any triangle sum to 180° (π radians). We start by choosing r = tan(A/2) and s = tan(B/2) to be any positive rational numbers satisfying rs < 1. The limit of 1 ensures that angle A/2 + B/2 is less than 90° and thus that the angle C/2 will be positive. The value t = tan(C/2) will also be a positive rational number because -$$ -t = \tan(C/2) = \cot(90^\circ - C/2) = \cot(A/2 + B/2) = \frac{1 - \tan(A/2) \tan(B/2)}{\tan(A/2) + \tan(B/2)} = \frac{1 - rs}{r+s}. -$$ - -We can compute the sine of any angle using the formula $\sin \theta = \frac{2 \tan(\theta/2)}{1 + \tan^2(\theta/2)}$, so the sines of $A, B, C$ are $\frac{2r}{1+r^2}, \frac{2s}{1+s^2}, \frac{2t}{1+t^2},$ respectively. These values are rational because the values of r, s, and t are rational. - -We use the Law of sines to conclude that a triangle with angles A, B, and C can be scaled to have side lengths a, b, and c equal to these sines. The area of such a triangle is (ab sin C) / 2, which is one half the product of the three sines and will necessarily be rational. Integer values for the side lengths and area are obtained by multiplying the side-length values by the least common multiple of their denominators and then by dividing out by the greatest common factor of the results. Thus, we have computed the side lengths and area of a primitive Heronian triangle from its half-angle tangents. - -When it is also the case that r, s, or t equals 1 then the corresponding interior angle will be a right angle and the three sides will also define a Pythagorean triple. - -The list of primitive integer Heronian triangles, sorted by area and, if this is the same, by perimeter, starts as in the following table ("primitive" means that the greatest common divisor of the three side lengths equals 1): -Area 6: perimeter 12, sides (3, 4, 5) -Area 12: perimeter 16, sides (5, 5, 6) -Area 12: perimeter 18, sides (5, 5, 8) -Area 24: perimeter 32, sides (4, 13, 15) -Area 30: perimeter 30, sides (5, 12, 13) - -Lists of primitive Heronian triangles whose sides do not exceed 6,000,000 have been computed and published. - -A shape is called equable if its area equals its perimeter. There are exactly five equable Heronian triangles: the ones with side lengths (5,12,13), (6,8,10), (6,25,29), (7,15,20), and (9,10,17). - -Since the area of an equilateral triangle with rational sides is an irrational number, no equilateral triangle is Heronian. However, there is a unique sequence of Heronian triangles that are "almost equilateral" because the three sides are of the form n − 1, n, n + 1.
A method for generating all solutions to this problem based on continued fractions was described in 1864 by Edward Sang, and in 1880 Reinhold Hoppe gave a closed-form expression for the solutions. The first few examples of these almost-equilateral triangles are listed in the following table: -n = 4: sides (3, 4, 5), area 6 -n = 14: sides (13, 14, 15), area 84 -n = 52: sides (51, 52, 53), area 1170 -n = 194: sides (193, 194, 195), area 16296 - -Subsequent values of n can be found by multiplying the previous value by 4, then subtracting the value prior to that one (52 = 4 × 14 − 4, 194 = 4 × 52 − 14, etc.), thus: -$$ -n_t = 4n_{t-1} - n_{t-2} , -$$ - -where t denotes any row in the table. This is a Lucas sequence. Alternatively, the formula $(2 + \sqrt{3})^t + (2 - \sqrt{3})^t$ generates all n. Equivalently, let A = area and y = inradius, then, -$$ -\big((n-1)^2+n^2+(n+1)^2\big)^2-2\big((n-1)^4+n^4+(n+1)^4\big) = (6n y)^2 = (4A)^2 -$$ - -where {n, y} are solutions to $n^2 - 12y^2 = 4$. A small transformation n = 2x yields a conventional Pell equation $x^2 - 3y^2 = 1$, the solutions of which can then be derived from the regular continued fraction expansion for $\sqrt{3}$. - -The variable n is of the form $n=\sqrt{2 + 2 k}$, where k is 7, 97, 1351, 18817, …. The numbers in this sequence have the property that k consecutive integers have integral standard deviation. diff --git a/wiki/wikipedia/2407.txt b/wiki/wikipedia/2407.txt deleted file mode 100644 index 744dc536dfbc0e607f29706b99676b46986e02df..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2407.txt +++ /dev/null @@ -1,11 +0,0 @@ -The necktie paradox is a puzzle or paradox within the subjectivistic interpretation of probability theory, in which two men wager over whose necktie is cheaper, with the owner of the more expensive tie giving it to the other man. Both men reason that they stand to win more than they would lose, in such a bet. The necktie paradox is a variation (and historically, the origin) of the two-envelope paradox. - -Two men are each given a necktie by their respective wives as a Christmas present. Over drinks they start arguing over who has the cheaper necktie. They agree to have a wager over it. They will consult their wives and find out the prices of the neckties. The terms of the bet are that the man with the more expensive necktie has to give it to the other as the prize. - -The first man reasons as follows: winning and losing are equally likely. If I lose, then I lose the value of my necktie. But if I win, then I win more than the value of my necktie. Therefore, the wager is to my advantage. The second man can consider the wager in exactly the same way; thus, paradoxically, it seems both men have the advantage in the bet. This is obviously not possible. - -The paradox can be resolved by giving more careful consideration to what is lost in one scenario ("the value of my necktie") and what is won in the other ("more than the value of my necktie"). If one assumes for simplicity that the only possible necktie prices are $20 and $40, and that a man has equal chances of having a $20 or $40 necktie, then four outcomes (all equally likely) are possible: -* Both neckties cost $20: no exchange takes place. -* The first man's necktie costs $20 and the second man's costs $40: the first man wins a $40 necktie. -* The first man's necktie costs $40 and the second man's costs $20: the first man loses his $40 necktie. -* Both neckties cost $40: no exchange takes place. - -The first man has a 50% chance of a neutral outcome, a 25% chance of gaining a necktie worth $40, and a 25% chance of losing a necktie worth $40. Turning to the losing and winning scenarios: if the man loses $40, then it is true that he has lost the value of his necktie; and if he gains $40, then it is true that he has gained more than the value of his necktie.
The win and the loss are equally likely, but what we call "the value of his necktie" in the losing scenario is the same amount as what we call "more than the value of his necktie" in the winning scenario. Accordingly, neither man has the advantage in the wager. - -This paradox is a rephrasing of the simplest case of the two envelopes problem, and the explanation of the resolution is essentially the same. diff --git a/wiki/wikipedia/2408.txt b/wiki/wikipedia/2408.txt deleted file mode 100644 index 10bcc3bf90be734cbdcbeaad0028718579f4b71a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2408.txt +++ /dev/null @@ -1,41 +0,0 @@ -In abstract algebra, multiplicity theory concerns the multiplicity of a module M at an ideal I (often a maximal ideal) -$$ -\mathbf{e}_I(M). -$$ - -The notion of the multiplicity of a module is a generalization of the degree of a projective variety. By Serre's intersection formula, it is linked to an intersection multiplicity in the intersection theory. - -The main focus of the theory is to detect and measure a singular point of an algebraic variety (cf. resolution of singularities). Because of this aspect, valuation theory, Rees algebras and integral closure are intimately connected to multiplicity theory. - -Let R be a positively graded ring such that R is generated as an R0-algebra and R0 is Artinian. Note that R has finite Krull dimension d. Let M be a finitely generated R-module and FM(t) its Hilbert–Poincaré series. This series is a rational function of the form -$$ -\frac{P(t)}{(1-t)^d}, -$$ - -where $P(t)$ is a polynomial. By definition, the multiplicity of M is -$$ -\mathbf{e}(M) = P(1). -$$ - -The series may be rewritten -$$ -F(t) = \sum_{i=1}^d {a_{d-i} \over (1 - t)^i} + r(t). -$$ - -where r(t) is a polynomial. Note that $a_{d-i}$ are the coefficients of the Hilbert polynomial of M expanded in binomial coefficients. We have -$$ -\mathbf{e}(M) = a_0. -$$ - -As Hilbert–Poincaré series are additive on exact sequences, the multiplicity is additive on exact sequences of modules of the same dimension. - -The following theorem, due to Christer Lech, gives a priori bounds for multiplicity. - -Theorem (Lech): Suppose R is local with maximal ideal $\mathfrak{m}$. If I is an $\mathfrak{m}$-primary ideal, then -$$ -e(I) \le d! \deg(R) \lambda(R/\overline{I}). -$$ diff --git a/wiki/wikipedia/2409.txt b/wiki/wikipedia/2409.txt deleted file mode 100644 index 4a0f8ef429fdd236f18e823000e73b214cfa24be..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2409.txt +++ /dev/null @@ -1,29 +0,0 @@ -In mathematical logic, a conservative extension is a supertheory of a theory which is often convenient for proving theorems, but proves no new theorems about the language of the original theory. Similarly, a non-conservative extension is a supertheory which is not conservative, and can prove more theorems than the original. - -More formally stated, a theory $T_2$ is a (proof theoretic) conservative extension of a theory $T_1$ if every theorem of $T_1$ is a theorem of $T_2$, and any theorem of $T_2$ in the language of $T_1$ is already a theorem of $T_1$. - -More generally, if $\Gamma$ is a set of formulas in the common language of $T_1$ and $T_2$, then $T_2$ is $\Gamma$-conservative over $T_1$ if every formula from $\Gamma$ provable in $T_2$ is also provable in $T_1$. - -Note that a conservative extension of a consistent theory is consistent.
If it were not, then by the principle of explosion, every formula in the language of $T_2$ would be a theorem of $T_2$; in particular, every formula in the language of $T_1$ would be a theorem of $T_2$ and hence, by conservativity, a theorem of $T_1$, so $T_1$ would not be consistent. Hence, conservative extensions do not bear the risk of introducing new inconsistencies. This can also be seen as a methodology for writing and structuring large theories: start with a theory, $T_0$, that is known (or assumed) to be consistent, and successively build conservative extensions $T_1$, $T_2$, ... of it. - -Recently, conservative extensions have been used for defining a notion of module for ontologies: if an ontology is formalized as a logical theory, a subtheory is a module if the whole ontology is a conservative extension of the subtheory. - -An extension which is not conservative may be called a proper extension. - -* $\mathrm{ACA}_0$, a subsystem of second-order arithmetic studied in reverse mathematics, is a conservative extension of first-order Peano arithmetic. - -* Von Neumann–Bernays–Gödel set theory is a conservative extension of Zermelo–Fraenkel set theory with the axiom of choice (ZFC). - -* Internal set theory is a conservative extension of Zermelo–Fraenkel set theory with the axiom of choice (ZFC). - -* Extensions by definitions are conservative. - -* Extensions by unconstrained predicate or function symbols are conservative. - -* $I\Sigma_1$ (a subsystem of Peano arithmetic with induction only for $\Sigma^0_1$-formulas) is a $\Pi^0_2$-conservative extension of primitive recursive arithmetic (PRA). - -* ZFC is a $\Sigma^1_3$-conservative extension of ZF by Shoenfield's absoluteness theorem. - -* ZFC with the continuum hypothesis is a $\Pi^2_1$-conservative extension of ZFC. - -With model-theoretic means, a stronger notion is obtained: an extension $T_2$ of a theory $T_1$ is model-theoretically conservative if $T_1 \subseteq T_2$ and every model of $T_1$ can be expanded to a model of $T_2$. Each model-theoretic conservative extension also is a (proof-theoretic) conservative extension in the above sense. The model-theoretic notion has the advantage over the proof-theoretic one that it does not depend so much on the language at hand; on the other hand, it is usually harder to establish model-theoretic conservativity. diff --git a/wiki/wikipedia/241.txt b/wiki/wikipedia/241.txt deleted file mode 100644 index 0398d5386e4a5b5050eca3bcf497209b167ec7b3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/241.txt +++ /dev/null @@ -1,5 +0,0 @@ -Distributed concurrency control is the concurrency control of a system distributed over a computer network (Bernstein et al. 1987, Weikum and Vossen 2001). - -In database systems and transaction processing (transaction management), distributed concurrency control refers primarily to the concurrency control of a distributed database. It also refers to concurrency control in multidatabase (and other multi-transactional object) environments (e.g., federated database, grid computing, and cloud computing environments). A major goal for distributed concurrency control is distributed serializability (or global serializability for multidatabase systems). Distributed concurrency control poses special challenges beyond those of centralized concurrency control, primarily due to communication and computer latency. It often requires special techniques, like a distributed lock manager, over fast computer networks with low latency, like switched fabric (e.g., InfiniBand).
Commitment ordering (or commit ordering) is a general serializability technique that achieves distributed serializability (and global serializability in particular) effectively on a large scale, without concurrency control information distribution (e.g., local precedence relations, locks, timestamps, or tickets), and thus without the performance penalties that are typical of other serializability techniques (Raz 1992). - -The most common distributed concurrency control technique is strong strict two-phase locking (SS2PL, also named rigorousness), which is also a common centralized concurrency control technique. SS2PL provides the serializability, strictness, and commitment ordering properties. Strictness, a special case of recoverability, is utilized for effective recovery from failure, and commitment ordering allows participating in a general solution for global serializability. For large-scale distribution and complex transactions, the typically heavy performance penalty of distributed locking (due to delays and latency) can be avoided by using the atomic commitment protocol, which is needed in a distributed database for (distributed) transactions' atomicity anyway (e.g., two-phase commit, or a simpler one in a reliable system), together with some local commitment ordering variant (e.g., local SS2PL) instead of distributed locking, to achieve global serializability in the entire system. All the commitment ordering theoretical results are applicable whenever atomic commitment is utilized over partitioned, distributed recoverable (transactional) data, including automatic distributed deadlock resolution. Such a technique can also be utilized for a large-scale parallel database, where a single large database, residing on many nodes and using a distributed lock manager, is replaced with a (homogeneous) multidatabase comprising many relatively small databases (loosely defined; any process that supports transactions over partitioned data and participates in atomic commitment complies), each fitting into a single node, and using commitment ordering (e.g., SS2PL, strict CO) together with some appropriate atomic commitment protocol (without using a distributed lock manager). diff --git a/wiki/wikipedia/2410.txt b/wiki/wikipedia/2410.txt deleted file mode 100644 index 1419cc64e1de3ffd918bdadf99a2dd1eb916bdf9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2410.txt +++ /dev/null @@ -1,33 +0,0 @@ -Johnson's algorithm is a way to find the shortest paths between all pairs of vertices in an edge-weighted directed graph. It allows some of the edge weights to be negative numbers, but no negative-weight cycles may exist. It works by using the Bellman–Ford algorithm to compute a transformation of the input graph that removes all negative weights, allowing Dijkstra's algorithm to be used on the transformed graph. It is named after Donald B. Johnson, who first published the technique in 1977. - -A similar reweighting technique is also used in Suurballe's algorithm for finding two disjoint paths of minimum total length between the same two vertices in a graph with non-negative edge weights. - -Johnson's algorithm consists of the following steps: - -#First, a new node q is added to the graph, connected by zero-weight edges to each of the other nodes. - -#Second, the Bellman–Ford algorithm is used, starting from the new vertex q, to find for each vertex v the minimum weight h(v) of a path from q to v. If this step detects a negative cycle, the algorithm is terminated.
- -#Next the edges of the original graph are reweighted using the values computed by the Bellman–Ford algorithm: an edge from u to v, having length w(u,v), is given the new length w(u,v) + h(u) - h(v). - -#Finally, q is removed, and Dijkstra's algorithm is used to find the shortest paths from each node s to every other vertex in the reweighted graph. The distance in the original graph is then computed, for each pair of vertices u, v, by adding h(v) - h(u) to the distance D(u, v) returned by Dijkstra's algorithm. - -The first three stages of Johnson's algorithm are depicted in the illustration below. - -The graph on the left of the illustration has two negative edges, but no negative cycles. The center graph shows the new vertex q, a shortest path tree as computed by the Bellman–Ford algorithm with q as starting vertex, and the values h(v) computed at each other node as the length of the shortest path from q to that node. Note that these values are all non-positive, because q has a length-zero edge to each vertex and the shortest path can be no longer than that edge. On the right is shown the reweighted graph, formed by replacing each edge weight w(u,v) by w(u,v) + h(u) - h(v). In this reweighted graph, all edge weights are non-negative, but the shortest path between any two nodes uses the same sequence of edges as the shortest path between the same two nodes in the original graph. The algorithm concludes by applying Dijkstra's algorithm to each of the four starting nodes in the reweighted graph. - -In the reweighted graph, all paths between a pair s and t of nodes have the same quantity h(s) - h(t) added to them. The previous statement can be proven as follows: Let p be an s-t path. Its weight W in the reweighted graph is given by the following expression: -$$ -\bigl(w(s, p_1) + h(s) - h(p_1)\bigr) + \bigl(w(p_1, p_2) + h(p_1) - h(p_2)\bigr) + ... + \bigl(w(p_n, t) + h(p_n) - h(t)\bigr). -$$ - -Every $+h(p_i)$ is cancelled by $-h(p_i)$ in the previous bracketed expression; therefore, we are left with the following expression for W: -$$ -\bigl(w(s, p_1) + w(p_1, p_2) + ... + w(p_n, t)\bigr)+ h(s) - h(t) -$$ - -The bracketed expression is the weight of p in the original weighting. - -Since the reweighting adds the same amount to the weight of every s-t path, a path is a shortest path in the original weighting if and only if it is a shortest path after reweighting. The weight of edges that belong to a shortest path from q to any node is zero, and therefore the lengths of the shortest paths from q to every node become zero in the reweighted graph; however, they still remain shortest paths. Therefore, there can be no negative edges: if edge uv had a negative weight after the reweighting, then the zero-length path from q to u together with this edge would form a negative-length path from q to v, contradicting the fact that all vertices have zero distance from q. The non-existence of negative edges ensures the optimality of the paths found by Dijkstra's algorithm. The distances in the original graph may be calculated from the distances calculated by Dijkstra's algorithm in the reweighted graph by reversing the reweighting transformation. - -The time complexity of this algorithm, using Fibonacci heaps in the implementation of Dijkstra's algorithm, is $O(|V|^2\log |V| + |V||E|)$: the algorithm uses $O(|V||E|)$ time for the Bellman–Ford stage of the algorithm, and $O(|V|\log |V| + |E|)$ for each of the $|V|$ instantiations of Dijkstra's algorithm.
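The four steps translate directly into code. The following is a minimal sketch, not an optimized implementation: it assumes the graph is given as a dict mapping every vertex to a dict of weighted out-edges, and it uses a binary heap rather than the Fibonacci heap assumed in the bound above.

```python
import heapq
from itertools import count

def johnson(graph):
    """All-pairs shortest paths. graph[u][v] is the weight of edge u -> v;
    every vertex must appear as a key. Negative weights are allowed,
    negative cycles are not."""
    # Step 1: add a new node q with a zero-weight edge to every vertex.
    # An object() sentinel cannot collide with any existing vertex name.
    q = object()
    g = {u: dict(vs) for u, vs in graph.items()}
    g[q] = {u: 0 for u in graph}

    # Step 2: Bellman-Ford from q computes h(v) and detects negative cycles.
    h = {u: float("inf") for u in g}
    h[q] = 0
    for _ in range(len(g) - 1):
        for u, vs in g.items():
            for v, w in vs.items():
                if h[u] + w < h[v]:
                    h[v] = h[u] + w
    if any(h[u] + w < h[v] for u, vs in g.items() for v, w in vs.items()):
        raise ValueError("graph contains a negative-weight cycle")

    # Step 3: reweight; w'(u, v) = w(u, v) + h(u) - h(v) is non-negative.
    rw = {u: {v: w + h[u] - h[v] for v, w in vs.items()}
          for u, vs in graph.items()}

    # Step 4: Dijkstra from every source on the reweighted graph, then
    # recover each original distance by adding h(v) - h(u).
    order = count()  # tie-breaker so the heap never compares vertices
    dist = {}
    for s in graph:
        d = {}
        heap = [(0, next(order), s)]
        while heap:
            du, _, u = heapq.heappop(heap)
            if u in d:
                continue
            d[u] = du
            for v, w in rw[u].items():
                if v not in d:
                    heapq.heappush(heap, (du + w, next(order), v))
        dist[s] = {v: dv + h[v] - h[s] for v, dv in d.items()}
    return dist
```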
Thus, when the graph is sparse, the total time can be faster than the Floyd–Warshall algorithm, which solves the same problem in time $O(|V|^3)$. diff --git a/wiki/wikipedia/2411.txt b/wiki/wikipedia/2411.txt deleted file mode 100644 index 628107c37f7dc1c389491af214241a2aadc44c6c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2411.txt +++ /dev/null @@ -1,3 +0,0 @@ -In algebraic geometry, the Virasoro conjecture states that a certain generating function encoding Gromov–Witten invariants of a smooth projective variety is fixed by an action of half of the Virasoro algebra. The Virasoro conjecture is named after theoretical physicist Miguel Ángel Virasoro. - -Eguchi, Hori, and Xiong proposed the Virasoro conjecture as a generalization of Witten's conjecture, and surveys of the conjecture have since been given. diff --git a/wiki/wikipedia/2412.txt b/wiki/wikipedia/2412.txt deleted file mode 100644 index be8e174407fb797ecd46e477161c3c25902f430d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2412.txt +++ /dev/null @@ -1,7 +0,0 @@ -In absolute geometry, the Saccheri–Legendre theorem states that the sum of the angles in a triangle is at most 180°. Absolute geometry is the geometry obtained from assuming all the axioms that lead to Euclidean geometry with the exception of the axiom that is equivalent to the parallel postulate of Euclid. - -The theorem is named after Giovanni Girolamo Saccheri and Adrien-Marie Legendre. - -The existence of at least one triangle with angle sum of 180 degrees in absolute geometry implies Euclid's parallel postulate. Similarly, the existence of at least one triangle with angle sum of less than 180 degrees implies the characteristic postulate of hyperbolic geometry. - -Max Dehn gave an example of a non-Legendrian geometry where the angle sum of a triangle is greater than 180 degrees, and a semi-Euclidean geometry where there is a triangle with an angle sum of 180 degrees but Euclid's parallel postulate fails. In Dehn's geometries the Archimedean axiom does not hold. diff --git a/wiki/wikipedia/2413.txt b/wiki/wikipedia/2413.txt deleted file mode 100644 index 1e02761f54c93d5fd056cc61751ec0624d80e2f3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2413.txt +++ /dev/null @@ -1 +0,0 @@ -The packing constant of a geometric body is the largest average density achieved by packing arrangements of congruent copies of the body. For most bodies the value of the packing constant is unknown. The following is a list of bodies in Euclidean spaces whose packing constant is known. Fejes Tóth proved that, in the plane, every centrally symmetric convex body has a packing constant equal to its lattice packing constant. Therefore, any such body for which the lattice packing constant was previously known, such as any ellipse, consequently has a known packing constant. In addition to these bodies, the packing constants of hyperspheres in 8 and 24 dimensions are almost exactly known. diff --git a/wiki/wikipedia/2414.txt b/wiki/wikipedia/2414.txt deleted file mode 100644 index 5613960b797e9890f01550bfaf84ebb2d98249e1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2414.txt +++ /dev/null @@ -1,48 +0,0 @@ -In mathematics, Artin–Verdier duality is a duality theorem for constructible abelian sheaves over the spectrum of a ring of algebraic numbers, introduced by Michael Artin and Jean-Louis Verdier, that generalizes Tate duality. - -It shows that, as far as étale (or flat) cohomology is concerned, the ring of integers in a number field behaves like a 3-dimensional mathematical object. - -Let X be the spectrum of the ring of integers in a totally imaginary number field K, and F a constructible étale abelian sheaf on X.
Then the Yoneda pairing
$$
H^r(X,F)\times \operatorname{Ext}^{3-r}(F,\mathbb{G}_m)\to H^3(X,\mathbb{G}_m)=\Q/\Z
$$
is a non-degenerate pairing of finite abelian groups, for every integer r.

Here, Hr(X,F) is the r-th étale cohomology group of the scheme X with values in F, and Extr(F,G) is the group of r-extensions of the étale sheaf G by the étale sheaf F in the category of étale abelian sheaves on X. Moreover, Gm denotes the étale sheaf of units in the structure sheaf of X.

Artin–Verdier duality has also been proved for constructible, but not necessarily torsion, sheaves. For such a sheaf F, the above pairing induces isomorphisms

\begin{align}
H^r(X, F)^* &\cong \operatorname{Ext}^{3-r}(F, \mathbb{G}_m) && r = 0, 1 \\
H^r(X, F) &\cong \operatorname{Ext}^{3-r}(F, \mathbb{G}_m)^* && r = 2, 3
\end{align}

where
$$
(-)^* = \operatorname{Hom}(-, \Q /\Z).
$$

Let U be an open subscheme of the spectrum of the ring of integers in a number field K, and F a finite flat commutative group scheme over U. Then the cup product defines a non-degenerate pairing
$$
H^r(U,F^D)\times H_c^{3-r}(U,F)\to H_c^3(U,{\mathbb G}_m)=\Q/\Z
$$
of finite abelian groups, for all integers r.

Here FD denotes the Cartier dual of F, which is another finite flat commutative group scheme over U. Moreover, $H^r(U,F)$ is the r-th flat cohomology group of the scheme U with values in the flat abelian sheaf F, and $H_c^r(U,F)$ is the r-th flat cohomology with compact supports of U with values in the flat abelian sheaf F.

The flat cohomology with compact supports is defined to give rise to a long exact sequence
$$
\cdots\to H^r_c(U,F)\to H^r(U,F)\to \bigoplus\nolimits_{v\notin U} H^r(K_v,F)\to H^{r+1}_c(U,F) \to\cdots
$$

The sum is taken over all places of K which are not in U, including the archimedean ones. The local contribution Hr(Kv, F) is the Galois cohomology of the Henselization Kv of K at the place v, modified à la Tate:
$$
H^r(K_v,F)=H^r_T(\mathrm{Gal}(K_v^s/K_v),F(K_v^s)).
$$

Here $K_v^s$ is a separable closure of $K_v.$

diff --git a/wiki/wikipedia/2415.txt b/wiki/wikipedia/2415.txt deleted file mode 100644 index 8048e001167703a6abe36b65a3ccbbf2cad4b095..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2415.txt +++ /dev/null @@ -1,43 +0,0 @@

The Cauchy formula for repeated integration, named after Augustin Louis Cauchy, allows one to compress n antidifferentiations of a function into a single integral (cf. Cauchy's formula).

Let f be a continuous function on the real line. Then the nth repeated integral of f with basepoint a,
$$
f^{(-n)}(x) = \int_a^x \int_a^{\sigma_1} \cdots \int_a^{\sigma_{n-1}} f(\sigma_{n}) \mathrm{d}\sigma_{n} \cdots \mathrm{d}\sigma_2 \mathrm{d}\sigma_1,
$$
is given by the single integral
$$
f^{(-n)}(x) = \frac{1}{(n-1)!} \int_a^x\left(x-t\right)^{n-1} f(t)\mathrm{d}t.
$$

A proof is given by induction. Since f is continuous, the base case follows from the fundamental theorem of calculus:
$$
\frac{\mathrm{d}}{\mathrm{d}x} f^{(-1)}(x) = \frac{\mathrm{d}}{\mathrm{d}x}\int_a^x \frac{(x-t)^0}{0!}f(t)\mathrm{d}t = \frac{\mathrm{d}}{\mathrm{d}x}\int_a^x f(t)\mathrm{d}t = f(x);
$$
where
$$
f^{(-1)}(a) = \int_a^a f(t)\mathrm{d}t = 0.
$$

Now, suppose this is true for n, and let us prove it for n+1. Firstly, using the Leibniz integral rule, note that
$$
\frac{\mathrm{d}}{\mathrm{d} x} \left[ \frac{1}{n!} \int_a^x \left(x-t\right)^n f(t)\mathrm{d}t \right] = \frac{1}{(n-1)!} \int_a^x\left(x-t\right)^{n-1} f(t)\mathrm{d}t .
$$
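To see this step in detail, write $g(x,t)=\frac{(x-t)^n}{n!}f(t)$. The Leibniz integral rule for a variable upper limit gives
$$
\frac{\mathrm{d}}{\mathrm{d}x}\int_a^x g(x,t)\mathrm{d}t = g(x,x) + \int_a^x \frac{\partial g}{\partial x}(x,t)\mathrm{d}t,
$$
and here $g(x,x)=0$ for $n \ge 1$, while $\frac{\partial g}{\partial x}(x,t) = \frac{(x-t)^{n-1}}{(n-1)!}f(t)$, which is exactly the identity displayed above.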
Then, applying the induction hypothesis,

\begin{align}
f^{(-(n+1))}(x) &= \int_a^x \int_a^{\sigma_1} \cdots \int_a^{\sigma_{n}} f(\sigma_{n+1}) \mathrm{d}\sigma_{n+1} \cdots \mathrm{d}\sigma_2 \mathrm{d}\sigma_1 \\
&= \int_a^x \frac{1}{(n-1)!} \int_a^{\sigma_1}\left(\sigma_1-t\right)^{n-1} f(t)\mathrm{d}t\mathrm{d}\sigma_1 \\
&= \int_a^x \frac{\mathrm{d}}{\mathrm{d}\sigma_1} \left[\frac{1}{n!} \int_a^{\sigma_1} \left(\sigma_1-t\right)^n f(t)\mathrm{d}t\right] \mathrm{d}\sigma_1 \\
&= \frac{1}{n!} \int_a^x \left(x-t\right)^n f(t)\mathrm{d}t.
\end{align}

This completes the proof.

The Cauchy formula is generalized to non-integer parameters by the Riemann–Liouville integral, where $n \in \Z_{\geq 0}$ is replaced by $\alpha \in \Complex,\ \Re(\alpha)>0$, and the factorial is replaced by the gamma function. The two formulas agree when $\alpha \in \Z_{\geq 0}$.

Both the Cauchy formula and the Riemann–Liouville integral are generalized to arbitrary dimension by the Riesz potential.

In fractional calculus, these formulae can be used to construct a differintegral, allowing one to differentiate or integrate a fractional number of times. Differentiating a fractional number of times can be accomplished by fractional integration, then differentiating the result.

diff --git a/wiki/wikipedia/2416.txt b/wiki/wikipedia/2416.txt deleted file mode 100644 index 33ed9caf5df71ca537fe4156ae5b859d9790fc38..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2416.txt +++ /dev/null @@ -1,73 +0,0 @@

Epigram is a functional programming language with dependent types, and the integrated development environment (IDE) usually packaged with the language. Epigram's type system is strong enough to express program specifications. The goal is to support a smooth transition from ordinary programming to integrated programs and proofs whose correctness can be checked and certified by the compiler. Epigram exploits the Curry–Howard correspondence, also termed the propositions as types principle, and is based on intuitionistic type theory.

The Epigram prototype was implemented by Conor McBride based on joint work with James McKinna. Its development is continued by the Epigram group in Nottingham, Durham, St Andrews, and Royal Holloway, University of London in the United Kingdom (UK). The current experimental implementation of the Epigram system is freely available together with a user manual, a tutorial and some background material. The system has been used under Linux, Windows, and macOS.

It is currently unmaintained, and version 2, which was intended to implement Observational Type Theory, was never officially released but exists on GitHub.

Epigram uses a two-dimensional, natural deduction style syntax, with versions in LaTeX and ASCII. Here are some examples from The Epigram Tutorial.

The first example is the declaration of the natural numbers (shown in its LaTeX rendering below). The declaration says that Nat is a type with kind * (i.e., it is a simple type) and two constructors: zero and suc. The constructor suc takes a single Nat argument and returns a Nat. This is equivalent to the Haskell declaration "data Nat = Zero | Suc Nat".
In LaTeX, the code is displayed as:
$$
\underline{\mathrm{data}} \left(\frac{}{\mathsf{Nat} : \star}\right) \underline{\mathrm{where}} \left(\frac{}{\mathsf{zero} : \mathsf{Nat}}\right) ; \left(\frac{n : \mathsf{Nat}}{\mathsf{suc}\ n : \mathsf{Nat}}\right)
$$

The horizontal-line notation can be read as "assuming (what is on the top) is true, we can infer that (what is on the bottom) is true." For example, "assuming n is of type Nat, then suc n is of type Nat." If nothing is on the top, then the bottom statement is always true: "zero is of type Nat (in all cases)."

Induction over the natural numbers is given by the eliminator NatInd, with the following type and computation rules:
$$
\mathsf{NatInd} : \begin{matrix}
\forall P : \mathsf{Nat} \rightarrow \star \Rightarrow P\ \mathsf{zero} \rightarrow \\
(\forall n : \mathsf{Nat} \Rightarrow P\ n \rightarrow P\ (\mathsf{suc}\ n)) \rightarrow\\
\forall n : \mathsf{Nat} \Rightarrow P\ n
\end{matrix}
$$
$$
\mathsf{NatInd}\ P\ mz\ ms\ \mathsf{zero} \equiv mz
$$
$$
\mathsf{NatInd}\ P\ mz\ ms\ (\mathsf{suc}\ n) \equiv ms\ n\ (\mathsf{NatInd}\ P\ mz\ ms\ n)
$$

Addition can then be defined recursively. In LaTeX, it is displayed as:
$$
\mathsf{plus}\ x\ y \Leftarrow \underline{\mathrm{rec}}\ x\ \{
$$
$$
\mathsf{plus}\ x\ y \Leftarrow \underline{\mathrm{case}}\ x\ \{
$$
$$
\mathsf{plus\ zero}\ y \Rightarrow y
$$
$$
\quad\quad \mathsf{plus}\ (\mathsf{suc}\ x)\ y \Rightarrow \mathsf{suc} (\mathsf{plus}\ x\ y)\ \}\ \}
$$

...And in ASCII:

```
plus x y <= rec x {
  plus x y <= case x {
    plus zero y => y
    plus (suc x) y => suc (plus x y)
  }
}
```

Epigram is essentially a typed lambda calculus with generalized algebraic data types, together with two further extensions. First, types are first-class entities, of type $\star$; types are arbitrary expressions of type $\star$, and type equivalence is defined in terms of the types' normal forms. Second, it has a dependent function type; instead of $P \rightarrow Q$, it has $\forall x : P \Rightarrow Q$, where $x$ is bound in $Q$ to the value that the function's argument (of type $P$) eventually takes.

Full dependent types, as implemented in Epigram, are a powerful abstraction. (Unlike in Dependent ML, the value(s) depended upon may be of any valid type.) A sample of the new formal specification capabilities dependent types bring may be found in The Epigram Tutorial.

diff --git a/wiki/wikipedia/2417.txt b/wiki/wikipedia/2417.txt deleted file mode 100644 index ee8b25f91d684d798fbf0912c7d2c813c9840aaf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2417.txt +++ /dev/null @@ -1,445 +0,0 @@

This article lists Wikipedia articles about named mathematical inequalities.
- -* Agmon's inequality - -* Askey–Gasper inequality - -* Babenko–Beckner inequality - -* Bernoulli's inequality - -* Bernstein's inequality (mathematical analysis) - -* Bessel's inequality - -* Bihari–LaSalle inequality - -* Bohnenblust–Hille inequality - -* Borell–Brascamp–Lieb inequality - -* Brezis–Gallouet inequality - -* Carleman's inequality - -* Chebyshev–Markov–Stieltjes inequalities - -* Chebyshev's sum inequality - -* Clarkson's inequalities - -* Eilenberg's inequality - -* Fekete–Szegő inequality - -* Fenchel's inequality - -* Friedrichs's inequality - -* Gagliardo–Nirenberg interpolation inequality - -* Gårding's inequality - -* Grothendieck inequality - -* Grunsky's inequalities - -* Hanner's inequalities - -* Hardy's inequality - -* Hardy–Littlewood inequality - -* Hardy–Littlewood–Sobolev inequality - -* Harnack's inequality - -* Hausdorff–Young inequality - -* Hermite–Hadamard inequality - -* Hilbert's inequality - -* Hölder's inequality - -* Jackson's inequality - -* Jensen's inequality - -* Khabibullin's conjecture on integral inequalities - -* Kantorovich inequality - -* Karamata's inequality - -* Korn's inequality - -* Ladyzhenskaya's inequality - -* Landau–Kolmogorov inequality - -* Lebedev–Milin inequality - -* Lieb–Thirring inequality - -* Littlewood's 4/3 inequality - -* Markov brothers' inequality - -* Mashreghi–Ransford inequality - -* Max–min inequality - -* Minkowski's inequality - -* Poincaré inequality - -* Popoviciu's inequality - -* Prékopa–Leindler inequality - -* Rayleigh–Faber–Krahn inequality - -* Remez inequality - -* Riesz rearrangement inequality - -* Schur test - -* Shapiro inequality - -* Sobolev inequality - -* Steffensen's inequality - -* Szegő inequality - -* Three spheres inequality - -* Trace inequalities - -* Trudinger's theorem - -* Turán's inequalities - -* Von Neumann's inequality - -* Wirtinger's inequality for functions - -* Young's convolution inequality - -* Young's inequality for products - -* Hardy–Littlewood maximal inequality - -* Inequality of arithmetic and geometric means - -* Ky Fan inequality - -* Levinson's inequality - -* Maclaurin's inequality - -* Mahler's inequality - -* Muirhead's inequality - -* Newton's inequalities - -* Stein–Strömberg theorem - -* Binomial coefficient bounds - -* Factorial bounds - -* XYZ inequality - -* Fisher's inequality - -* Ingleton's inequality - -* Lubell–Yamamoto–Meshalkin inequality - -* Nesbitt's inequality - -* Rearrangement inequality - -* Schur's inequality - -* Shapiro inequality - -* Stirling's formula (bounds) - -* Grönwall's inequality - -* Alexandrov–Fenchel inequality - -* Aristarchus's inequality - -* Barrow's inequality - -* Berger–Kazdan comparison theorem - -* Blaschke–Lebesgue inequality - -* Blaschke–Santaló inequality - -* Bishop–Gromov inequality - -* Bogomolov–Miyaoka–Yau inequality - -* Bonnesen's inequality - -* Brascamp–Lieb inequality - -* Brunn–Minkowski inequality - -* Castelnuovo–Severi inequality - -* Cheng's eigenvalue comparison theorem - -* Clifford's theorem on special divisors - -* Cohn-Vossen's inequality - -* Erdős–Mordell inequality - -* Euler's theorem in geometry - -* Gromov's inequality for complex projective space - -* Gromov's systolic inequality for essential manifolds - -* Hadamard's inequality - -* Hadwiger–Finsler inequality - -* Hinge theorem - -* Hitchin–Thorpe inequality - -* Isoperimetric inequality - -* Jordan's inequality - -* Jung's theorem - -* Loewner's torus inequality - -* Łojasiewicz inequality - -* Loomis–Whitney inequality - -* 
Melchior's inequality - -* Milman's reverse Brunn–Minkowski inequality - -* Milnor–Wood inequality - -* Minkowski's first inequality for convex bodies - -* Myers's theorem - -* Noether inequality - -* Ono's inequality - -* Pedoe's inequality - -* Ptolemy's inequality - -* Pu's inequality - -* Riemannian Penrose inequality - -* Toponogov's theorem - -* Triangle inequality - -* Weitzenböck's inequality - -* Wirtinger inequality (2-forms) - -* Inequalities in information theory - -* Kraft's inequality - -* Log sum inequality - -* Welch bounds - -* Abhyankar's inequality - -* Pisier–Ringrose inequality - -* Abel's inequality - -* Bregman–Minc inequality - -* Cauchy–Schwarz inequality - -* Golden–Thompson inequality - -* Hadamard's inequality - -* Hoffman-Wielandt inequality - -*Peetre's inequality - -* Sylvester's rank inequality - -* Triangle inequality - -*Trace inequalities - -* Bendixson's inequality - -* Weyl's inequality in matrix theory - -* Cauchy interlacing theorem - -* Poincaré separation theorem - -* Bonse's inequality - -* Large sieve inequality - -* Pólya–Vinogradov inequality - -* Turán–Kubilius inequality - -* Weyl's inequality - -* Azuma's inequality - -* Bennett's inequality, an upper bound on the probability that the sum of independent random variables deviates from its expected value by more than any specified amount - -* Bhatia–Davis inequality, an upper bound on the variance of any bounded probability distribution - -* Bernstein inequalities (probability theory) - -* Boole's inequality - -* Borell–TIS inequality - -* Burkholder's inequality - -* Burkholder–Davis–Gundy inequalities - -* Cantelli's inequality - -* Chebyshev's inequality - -* Chernoff's inequality - -* Chung–Erdős inequality - -* Concentration inequality - -* Cramér–Rao inequality - -* Doob's martingale inequality - -* Dvoretzky–Kiefer–Wolfowitz inequality - -* Eaton's inequality, a bound on the largest absolute value of a linear combination of bounded random variables - -* Emery's inequality - -* Entropy power inequality - -* Etemadi's inequality - -* Fannes–Audenaert inequality - -* Fano's inequality - -* Fefferman's inequality - -* Fréchet inequalities - -* Gauss's inequality - -* Gauss–Markov theorem, the statement that the least-squares estimators in certain linear models are the best linear unbiased estimators - -* Gaussian correlation inequality - -* Gaussian isoperimetric inequality - -* Gibbs's inequality - -* Hoeffding's inequality - -* Hoeffding's lemma - -* Jensen's inequality - -* Khintchine inequality - -* Kolmogorov's inequality - -* Kunita–Watanabe inequality - -* Le Cam's theorem - -* Lenglart's inequality - -* Marcinkiewicz–Zygmund inequality - -* Markov's inequality - -* McDiarmid's inequality - -* Paley–Zygmund inequality - -* Pinsker's inequality - -* Popoviciu's inequality on variances - -* Rao–Blackwell theorem - -* Ross's conjecture, a lower bound on the average waiting time in certain queues - -* Samuelson's inequality - -* Shearer's inequality - -* Stochastic Gronwall inequality - -* Talagrand's concentration inequality - -* Vitale's random Brunn–Minkowski inequality - -* Vysochanskiï–Petunin inequality - -* Berger's inequality for Einstein manifolds - -* Ahlswede–Daykin inequality - -* Bell's inequality - see Bell's theorem - -** Bell's original inequality - -* CHSH inequality - -* Clausius–Duhem inequality - -* Correlation inequality – any of several inequalities - -* FKG inequality - -* Ginibre inequality - -* Griffiths inequality - -* Heisenberg's inequality - -* Holley 
inequality

* Leggett–Garg inequality

* Riemannian Penrose inequality

* Rushbrooke inequality

* Tsirelson's inequality

diff --git a/wiki/wikipedia/2418.txt b/wiki/wikipedia/2418.txt deleted file mode 100644 index f072f0dc052b669e2129388d41bccecda2c82fdd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2418.txt +++ /dev/null @@ -1,9 +0,0 @@

In mathematics, $\psi_0(\Omega_\omega)$, widely known as Buchholz's ordinal, is a large countable ordinal that is used to measure the proof-theoretic strength of some mathematical systems. In particular, it is the proof-theoretic ordinal of the subsystem $\Pi_1^1$-CA$_0$ of second-order arithmetic; this is one of the "big five" subsystems studied in reverse mathematics (Simpson 1999). It is also the proof-theoretic ordinal of $\mathsf{ID_{<\omega}}$, the theory of finitely iterated inductive definitions. Buchholz's ordinal is also the order type of the segment bounded by $D_0 D_\omega 0$ in Buchholz's ordinal notation $\mathsf{(OT, <)}$. Lastly, it can be expressed as the limit of the sequence: $\varepsilon_0 = \psi_0(\Omega)$, $\mathsf{BHO} = \psi_0(\Omega_2)$, ... Here:

* $\Omega_0 = 1$, and $\Omega_n = \aleph_n$ for n > 0.

* $C_i(\alpha)$ is the closure of $\Omega_i$ under addition and the $\psi_\eta(\mu)$ function itself (the latter of which only for $\mu < \alpha$ and $\eta \leq \omega$).

* $\psi_i(\alpha)$ is the smallest ordinal not in $C_i(\alpha)$.

* Thus, $\psi_0(\Omega_\omega)$ is the smallest ordinal not in the closure of $1$ under addition and the $\psi_\eta(\mu)$ function itself (the latter of which only for $\mu < \Omega_\omega$ and $\eta \leq \omega$).

diff --git a/wiki/wikipedia/2419.txt b/wiki/wikipedia/2419.txt deleted file mode 100644 index b599730f15be1ba94ae15251909305048f84e708..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2419.txt +++ /dev/null @@ -1,65 +0,0 @@

The objective of the Thomson problem is to determine the minimum electrostatic potential energy configuration of N electrons constrained to the surface of a unit sphere that repel each other with a force given by Coulomb's law. The physicist J. J. Thomson posed the problem in 1904 after proposing an atomic model, later called the plum pudding model, based on his knowledge of the existence of negatively charged electrons within neutrally-charged atoms.

Related problems include the study of the geometry of the minimum energy configuration and the study of the large N behavior of the minimum energy.

The physical system embodied by the Thomson problem is a special case of one of eighteen unsolved mathematics problems proposed by the mathematician Steve Smale — "Distribution of points on the 2-sphere". The solution of each N-electron problem is obtained when the N-electron configuration constrained to the surface of a sphere of unit radius, $r=1$, yields a global electrostatic potential energy minimum, $U(N)$.

The electrostatic interaction energy occurring between each pair of electrons of equal charges ($e_i = e_j = e$, with $e$ the elementary charge of an electron) is given by Coulomb's Law,
$$
U_{ij}(N)=k_e{e_i e_j \over r_{ij}}.
$$

Here, $k_e$ is Coulomb's constant and $r_{ij}=|\mathbf{r}_i - \mathbf{r}_j|$ is the distance between each pair of electrons located at points on the sphere defined by vectors $\mathbf{r}_i$ and $\mathbf{r}_j$, respectively.

Simplified units of $e=1$ and $k_e=1$ are used without loss of generality. Then,
$$
U_{ij}(N) = {1 \over r_{ij}}.
$$

The total electrostatic potential energy of each N-electron configuration may then be expressed as the sum of all pair-wise interactions
$$
U(N) = \sum_{i < j} \frac{1}{r_{ij}}.
$$

The global minimization of $U(N)$ over all possible collections of N distinct points is typically found by numerical minimization algorithms.

The solution of the Thomson problem for two electrons is obtained when both electrons are as far apart as possible on opposite sides of the origin, $r_{ij} = 2r = 2$, or
$$
U(2) = {1 \over 2}.
$$

Minimum energy configurations have been rigorously identified in only a handful of cases.

* For N = 1, the solution is trivial as the electron may reside at any point on the surface of the unit sphere. The total energy of the configuration is defined as zero as the electron is not subject to the electric field due to any other sources of charge.

* For N = 2, the optimal configuration consists of electrons at antipodal points.

* For N = 3, electrons reside at the vertices of an equilateral triangle about a great circle.

* For N = 4, electrons reside at the vertices of a regular tetrahedron.

* For N = 5, a mathematically rigorous computer-aided solution was reported in 2010 with electrons residing at vertices of a triangular dipyramid.

* For N = 6, electrons reside at vertices of a regular octahedron.

* For N = 12, electrons reside at the vertices of a regular icosahedron.

The geometric solutions of the Thomson problem for N = 4, 6, and 12 electrons are Platonic solids whose faces are congruent equilateral triangles. Numerical solutions for N = 8 and 20 are not the regular convex polyhedral configurations of the remaining two Platonic solids, whose faces are square and pentagonal, respectively.

One can also ask for ground states of particles interacting with arbitrary potentials.

To be mathematically precise, let f be a decreasing real-valued function, and define the energy functional $\sum_{i < j} f(|x_i-x_j|)$.

Traditionally, one considers $f(x)=x^{-\alpha}$, also known as Riesz $\alpha$-kernels. For integrable Riesz kernels see the 1972 work of Landkof. For non-integrable Riesz kernels, the Poppy-seed bagel theorem holds; see the 2004 work of Hardin and Saff. Notable cases include:

* α = ∞, the Tammes problem (packing);

* α = 1, the Thomson problem;

* α = 0, to maximize the product of distances, latterly known as Whyte's problem;

* α = −1, the maximum average distance problem.

One may also consider configurations of N points on a sphere of higher dimension. See spherical design.

Several algorithms have been applied to this problem. The focus since the millennium has been on local optimization methods applied to the energy function, although random walks have also made their appearance.

diff --git a/wiki/wikipedia/242.txt b/wiki/wikipedia/242.txt deleted file mode 100644 index b158fc3408eedcc9b201156852a58f086aa19172..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/242.txt +++ /dev/null @@ -1,98 +0,0 @@

The Penrose–Hawking singularity theorems (after Roger Penrose and Stephen Hawking) are a set of results in general relativity that attempt to answer the question of when gravitation produces singularities. The Penrose singularity theorem is a theorem in semi-Riemannian geometry and its general relativistic interpretation predicts a gravitational singularity in black hole formation.
The Hawking singularity theorem is based on the Penrose theorem, and is interpreted as predicting a gravitational singularity in the Big Bang situation. Penrose was awarded the Nobel Prize in Physics in 2020 "for the discovery that black hole formation is a robust prediction of the general theory of relativity", which he shared with Reinhard Genzel and Andrea Ghez.

A singularity in solutions of the Einstein field equations is one of two things:

# a situation where matter is forced to be compressed to a point (a space-like singularity)

# a situation where certain light rays come from a region with infinite curvature (a time-like singularity)

Space-like singularities are a feature of non-rotating uncharged black holes as described by the Schwarzschild metric, while time-like singularities are those that occur in charged or rotating black hole exact solutions. Both of them have the property of geodesic incompleteness, in which either some light-path or some particle-path cannot be extended beyond a certain proper time or affine parameter (affine parameter being the null analog of proper time).

The Penrose theorem guarantees that some sort of geodesic incompleteness occurs inside any black hole whenever matter satisfies reasonable energy conditions. The energy condition required for the black-hole singularity theorem is weak: it says that light rays are always focused together by gravity, never drawn apart, and this holds whenever the energy of matter is non-negative.

Hawking's singularity theorem is for the whole universe, and works backwards in time: it guarantees that the (classical) Big Bang has infinite density. This theorem is more restricted and only holds when matter obeys a stronger energy condition, called the dominant energy condition, in which the energy is larger than the pressure. All ordinary matter, with the exception of a vacuum expectation value of a scalar field, obeys this condition. During inflation, the universe violates the dominant energy condition, and it was initially argued (e.g. by Starobinsky) that inflationary cosmologies could avoid the initial big-bang singularity. However, it has since been shown that inflationary cosmologies are still past-incomplete, and thus require physics other than inflation to describe the past boundary of the inflating region of spacetime.

It is still an open question whether (classical) general relativity predicts time-like singularities in the interior of realistic charged or rotating black holes, or whether these are artefacts of high-symmetry solutions and turn into spacelike singularities when perturbations are added.

In general relativity, a singularity is a place that objects or light rays can reach in a finite time where the curvature becomes infinite, or space-time stops being a manifold. Singularities can be found in all the black-hole spacetimes, the Schwarzschild metric, the Reissner–Nordström metric, the Kerr metric and the Kerr–Newman metric, and in all cosmological solutions that do not have a scalar field energy or a cosmological constant.

One cannot predict what might come "out" of a big-bang singularity in our past, or what happens to an observer that falls "in" to a black-hole singularity in the future, so singularities require a modification of physical law. Before Penrose, it was conceivable that singularities only form in contrived situations.
For example, in the collapse of a star to form a black hole, if the star is spinning and thus possesses some angular momentum, maybe the centrifugal force partly counteracts gravity and keeps a singularity from forming. The singularity theorems prove that this cannot happen, and that a singularity will always form once an event horizon forms.

In the collapsing star example, since all matter and energy is a source of gravitational attraction in general relativity, the additional angular momentum only pulls the star together more strongly as it contracts: the part outside the event horizon eventually settles down to a Kerr black hole (see No-hair theorem). The part inside the event horizon necessarily has a singularity somewhere. The proof is somewhat constructive: it shows that the singularity can be found by following light-rays from a surface just inside the horizon. But the proof does not say what type of singularity occurs: spacelike, timelike, orbifold, or a jump discontinuity in the metric. It only guarantees that if one follows the time-like geodesics into the future, it is impossible for the boundary of the region they form to be generated by the null geodesics from the surface. This means that the boundary must either come from nowhere or the whole future ends at some finite extension.

An interesting "philosophical" feature of general relativity is revealed by the singularity theorems. Because general relativity predicts the inevitable occurrence of singularities, the theory is not complete without a specification for what happens to matter that hits the singularity. One can extend general relativity to a unified field theory, such as the Einstein–Maxwell–Dirac system, where no such singularities occur.

Historically, there is a deep connection between the curvature of a manifold and its topology. The Bonnet–Myers theorem states that a complete Riemannian manifold that has Ricci curvature everywhere greater than a certain positive constant must be compact. The condition of positive Ricci curvature is most conveniently stated in the following way: for every geodesic there is a nearby initially parallel geodesic that will bend toward it when extended, and the two will intersect at some finite length.

When two nearby parallel geodesics intersect, the extension of either one is no longer the shortest path between the endpoints. The reason is that two parallel geodesic paths necessarily collide after an extension of equal length, and if one path is followed to the intersection and then continued along the other, the endpoints are connected by a non-geodesic path of equal length. This means that for a geodesic to be a shortest length path, it must never intersect neighboring parallel geodesics.

Starting with a small sphere and sending out parallel geodesics from the boundary, assuming that the manifold has a Ricci curvature bounded below by a positive constant, none of the geodesics are shortest paths after a while, since they all collide with a neighbor. This means that after a certain amount of extension, all potentially new points have been reached. If all points in a connected manifold are at a finite geodesic distance from a small sphere, the manifold must be compact.

Roger Penrose argued analogously in relativity. If null geodesics, the paths of light rays, are followed into the future, points in the future of the region are generated.
If a point is on the boundary of the future of the region, it can only be reached by going at the speed of light, no slower, so null geodesics include the entire boundary of the proper future of a region. When the null geodesics intersect, they are no longer on the boundary of the future; they are in the interior of the future. So, if all the null geodesics collide, there is no boundary to the future.

In relativity, the Ricci curvature, which determines the collision properties of geodesics, is determined by the energy tensor, and its projection on light rays is equal to the null-projection of the energy–momentum tensor and is always non-negative. This implies that, once the volume of a congruence of parallel null geodesics starts decreasing, it will reach zero in a finite time. Once the volume is zero, there is a collapse in some direction, so every geodesic intersects some neighbor.

Penrose concluded that whenever there is a sphere where all the outgoing (and ingoing) light rays are initially converging, the boundary of the future of that region will end after a finite extension, because all the null geodesics will converge. This is significant, because the outgoing light rays for any sphere inside the horizon of a black hole solution are all converging, so the boundary of the future of this region is either compact or comes from nowhere. The future of the interior either ends after a finite extension, or has a boundary that is eventually generated by new light rays that cannot be traced back to the original sphere.

The singularity theorems use the notion of geodesic incompleteness as a stand-in for the presence of infinite curvatures. Geodesic incompleteness is the notion that there are geodesics, paths of observers through spacetime, that can only be extended for a finite time as measured by an observer traveling along one. Presumably, at the end of the geodesic the observer has fallen into a singularity or encountered some other pathology at which the laws of general relativity break down.

Typically a singularity theorem has three ingredients:

# An energy condition on the matter,

# A condition on the global structure of spacetime,

# Gravity is strong enough (somewhere) to trap a region.

There are various possibilities for each ingredient, and each leads to different singularity theorems.

A key tool used in the formulation and proof of the singularity theorems is the Raychaudhuri equation, which describes the divergence $\theta$ of a congruence (family) of geodesics. The divergence of a congruence is defined as the derivative of the log of the determinant of the congruence volume. The Raychaudhuri equation is
$$
\dot{\theta} = - \sigma_{ab}\sigma^{ab} - \frac{1}{3}\theta^2 - {E[\vec{X}]^a}_a
$$
where $\sigma_{ab}$ is the shear tensor of the congruence and ${E[\vec{X}]^a}_{a} = R_{mn} X^m X^n$ is also known as the Raychaudhuri scalar (see the congruence page for details). The key point is that ${E[\vec{X}]^a}_a$ will be non-negative provided that the Einstein field equations hold and

* the null energy condition holds and the geodesic congruence is null, or

* the strong energy condition holds and the geodesic congruence is timelike.

When these hold, the divergence becomes infinite at some finite value of the affine parameter. Thus all geodesics leaving a point will eventually reconverge after a finite time, provided the appropriate energy condition holds, a result also known as the focusing theorem.
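A standard back-of-the-envelope estimate makes the focusing quantitative. Dropping the two non-negative terms $\sigma_{ab}\sigma^{ab}$ and ${E[\vec{X}]^a}_a$ from the right-hand side of the Raychaudhuri equation gives the inequality $\dot{\theta} \le -\frac{1}{3}\theta^2$, and therefore
$$
\frac{\mathrm{d}}{\mathrm{d}\tau}\left(\frac{1}{\theta}\right) = -\frac{\dot{\theta}}{\theta^2} \ge \frac{1}{3},
\qquad\text{so}\qquad
\frac{1}{\theta(\tau)} \ge \frac{1}{\theta_0} + \frac{\tau}{3}.
$$
If the congruence is initially converging, $\theta_0 < 0$, the right-hand side reaches zero at $\tau = 3/|\theta_0|$, forcing $\theta \to -\infty$ at some parameter value $\tau \le 3/|\theta_0|$: the geodesics focus within a finite affine distance.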
This is relevant for singularities thanks to the following argument:

# Suppose we have a spacetime that is globally hyperbolic, and two points $p$ and $q$ that can be connected by a timelike or null curve. Then there exists a geodesic of maximal length connecting $p$ and $q$. Call this geodesic $\gamma$.

# The geodesic $\gamma$ can be varied to a longer curve if another geodesic from $p$ intersects $\gamma$ at another point, called a conjugate point.

# From the focusing theorem, we know that all geodesics from $p$ have conjugate points at finite values of the affine parameter. In particular, this is true for the geodesic of maximal length. But this is a contradiction; one can therefore conclude that the spacetime is geodesically incomplete.

In general relativity, there are several versions of the Penrose–Hawking singularity theorem. Most versions state, roughly, that if there is a trapped null surface and the energy density is nonnegative, then there exist geodesics of finite length that cannot be extended.

These theorems, strictly speaking, prove that there is at least one non-spacelike geodesic that is only finitely extendible into the past, but there are cases in which the conditions of these theorems obtain in such a way that all past-directed spacetime paths terminate at a singularity.

There are many versions. Here is the null version:

Assume

# The null energy condition holds.

# We have a noncompact connected Cauchy surface.

# We have a closed trapped null surface $\mathcal{T}$.

Then, we either have null geodesic incompleteness, or closed timelike curves.

Sketch of proof: Proof by contradiction. The boundary of the future of $\mathcal{T}$, $\dot{J}(\mathcal{T})$, is generated by null geodesic segments originating from $\mathcal{T}$ with tangent vectors orthogonal to it. Since $\mathcal{T}$ is a trapped null surface, by the null Raychaudhuri equation both families of null rays emanating from $\mathcal{T}$ will encounter caustics. (A caustic by itself is unproblematic. For instance, the boundary of the future of two spacelike separated points is the union of two future light cones with the interior parts of the intersection removed. Caustics occur where the light cones intersect, but no singularity lies there.) The null geodesics generating $\dot{J}(\mathcal{T})$ have to terminate, however, i.e. reach their future endpoints at or before the caustics. Otherwise, we can take two null geodesic segments, changing at the caustic, and then deform them slightly to get a timelike curve connecting a point on the boundary to a point on $\mathcal{T}$, a contradiction. But as $\mathcal{T}$ is compact, given a continuous affine parameterization of the geodesic generators, there exists a lower bound to the absolute value of the expansion parameter. So, we know caustics will develop for every generator before a uniform bound in the affine parameter has elapsed. As a result, $\dot{J}(\mathcal{T})$ has to be compact. Either we have closed timelike curves, or we can construct a congruence by timelike curves, and every single one of them has to intersect the noncompact Cauchy surface exactly once. Consider all such timelike curves passing through $\dot{J}(\mathcal{T})$ and look at their image on the Cauchy surface. Being the image under a continuous map of a compact set, the image also has to be compact. Being a timelike congruence, the timelike curves can't intersect, and so, the map is injective. If the Cauchy surface were noncompact, then the image has a boundary. We're assuming spacetime comes in one connected piece.
But $\dot{J}(\mathcal{T})$ is compact and boundaryless because the boundary of a boundary is empty. A continuous injective map can't create a boundary, giving us our contradiction.

Loopholes: If closed timelike curves exist, then timelike curves don't have to intersect the partial Cauchy surface. If the Cauchy surface were compact, i.e. space is compact, the null geodesic generators of the boundary can intersect everywhere because they can intersect on the other side of space.

Other versions of the theorem involving the weak or strong energy condition also exist.

In modified gravity, the Einstein field equations do not hold and so these singularities do not necessarily arise. For example, in Infinite Derivative Gravity, it is possible for ${E[\vec{X}]^a}_a$ to be negative even if the Null Energy Condition holds.

diff --git a/wiki/wikipedia/2420.txt b/wiki/wikipedia/2420.txt deleted file mode 100644 index 6fb0bf63c784bb6cbe9667ea4131d5e3674637e3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2420.txt +++ /dev/null @@ -1,137 +0,0 @@

Thomas Michael "Tim" Scanlon (born 1940), usually cited as T. M. Scanlon, is an American philosopher. At the time of his retirement in 2016, he was the Alford Professor of Natural Religion, Moral Philosophy, and Civil Polity in Harvard University's Department of Philosophy, where he had taught since 1984. He was made a MacArthur Fellow in 1993.

His teaching in the department has included courses on theories of justice, equality, and recent ethical theory. His book, What We Owe to Each Other, was published by Harvard University Press in 1998; a collection of papers on political theory, The Difficulty of Tolerance, was published by Cambridge University Press in 2003.

Scanlon is the father-in-law of philosopher and African-American studies scholar Tommie Shelby.

His dissertation and some of his first papers were in mathematical logic, where his main concern was proof theory, but he turned to ethics and political philosophy, where he developed a version of contractualism in the line of John Rawls, Immanuel Kant, and Jean-Jacques Rousseau. Scanlon has also published important work on freedom of speech, equality, tolerance, foundations of contract law, human rights, conceptions of welfare, and theories of justice, as well as on foundational questions in moral theory.

Contractualism is a constructivist attempt at providing a unified account of the subject matter of a central part of morality which Scanlon calls "what we owe to each other." The normative domain of what we owe to each other is meant to encompass those duties to other people which we bear in virtue of their standing as rational creatures. A broader conception of morality includes whatever else we may owe to specific people, such as the special obligations we bear in relations with friends and family, or whatever else morality may require of us, such as the way in which we treat ourselves or nature. Scanlon believes that what we owe to each other, or what we could loosely call "the morality of right and wrong", is distinct from this broader conception of morality in that contractualism provides a unified account of its content.

In this form of contractualism, judgements about right and wrong, unlike empirical judgements, are not theoretical claims about the nature of the spatiotemporal world but rather practical claims about what we have reason to do.
Further, they are a particularly important class of practical claims in that the judgement that an action is wrong is taken to provide reasons not to do that action which are most often considered to be decisive against competing reasons. Following this point, Scanlon takes questions about the reason-giving force of moral judgements to be prior to questions about the subject matter of the morality of right and wrong. More explicitly, he thinks that if we provide an account of the extraordinary reason-giving force of moral judgements then this account could largely form the basis for a characterisation of the subject matter of what we owe to each other.

Scanlon grounds the reason-giving force of judgements about right and wrong in "the positive value of a way of living with others". This way of living with others is typified by an ideal of mutual recognition between rational agents, where mutual recognition demands that moral agents acknowledge the value of human life and respond to this value in the right ways.

On the question of how we ought to value human, or rational, life, Scanlon argues that different valuable things require different ways of valuing. In contrast to teleological accounts of value, often to take something to be of value is not only to see reason to bring about a maximal amount of that thing. This is especially true when considering the value of human life. When we value human life, he writes, we do not see this as a reason to create as much human life as we can. Rather, we tend to see reason to respect other human beings, to protect them from death and other forms of harm and, in general, to want their lives to go well. More important for Scanlon, to value rational life is to recognize the features which distinguish rational life from other valuable things, specifically, the ability of rational creatures to assess reasons and judgements, and to govern their lives in accordance with these assessments. Scanlon asserts that the proper response to the recognition of these distinctive features is to treat rational creatures in terms of principles which they could not reasonably reject.

From this point, Scanlon's account of the value of rational life provides a focus around which his account of the reason-giving force of moral judgements dovetails quite neatly with a characterization of the method of reasoning which we use to arrive at judgements of right and wrong, a method, moreover, which seems to be phenomenologically plausible. The reason-giving force of moral judgements is grounded in an ideal of mutual recognition which requires treating others in accordance with principles that they could not reasonably reject. Because mutual recognition requires that these other people are also appropriately motivated, this entails Scanlon's formulation of wrongness: "An act is wrong if and only if any principle that permitted it would be one that could reasonably be rejected by people moved to find principles for the general regulation of behaviour that others, similarly motivated, could not reasonably reject". An act is right, quite simply, if a principle permitting it could not reasonably be rejected in terms of this contractualist formulation.

Regarding how moral principles are derived from the contractualist formulation, when considering whether a principle can be rejected we must take into account the consequences, in general, of its being accepted, not only the consequences of the particular actions that it allows.
Because we cannot be sure about who will be affected by a principle, and how they will be affected, we must draw on our experience of life and consider the "generic reasons" which individuals are likely to have, as a result of their general circumstances, to reject a principle. In order to determine whether a principle is reasonably rejectable, we must impartially weigh these generic reasons against each other, and exercising our judgement, draw a conclusion about what the weight of reasons supports. Given the motivation of finding principles for the general regulation of society that no-one could reasonably reject, if the weight of reasons supports a certain conclusion then it would be unreasonable to reject that conclusion. Importantly, principles can only be rejected by individuals; aggregation of reasons across individuals is not allowed. So if the generic reasons of an individual carry more weight than any other individual's generic reasons, then his generic reasons are (for the most part) decisive in determining principles.

The generic reasons which are open to consideration under the contractualist formulation are any reasons which we judge as relevant to reasonable rejectability. This requires that we exercise our judgement in determining whether such reasons would be suitable grounds for mutual recognition. Therefore, that a principle would negatively affect a person's well-being is not the only kind of reason which may be brought against a principle. Other considerations, such as how a burden would be imposed by a principle, can serve as reasonable grounds for rejection.

While contractualism only provides an account of that central part of morality which deals with what we owe to each other, Scanlon writes that this part of morality is related to the broader realm of morality in complex ways. There is pressure for the morality of what we owe to each other to acknowledge the values included in the broader realm of morality insofar as principles which don't make room for these values could be reasonably rejected. In turn, these values must accommodate the dictates of what we owe to each other to the extent that they involve relations with others, who have separate moral standing.

In his 2009 John Locke Lectures at Oxford, Scanlon argued in favor of what he calls "Reasons Fundamentalism." This is "the thesis that there are irreducibly normative truths about reasons for action." Scanlon refined and published this material in a subsequent book.

Scanlon's What We Owe to Each Other is referenced several times in the American television series The Good Place, serving as the initial text used to instruct the protagonist Eleanor, who has apparently ended up in Heaven by mistake. The phrase "What We Owe to Each Other" is used as the title of the sixth episode of the first season, and that episode features a summary of Scanlon's ideas, as does the season two finale. Scanlon's ideas play a prominent role in the series finale, in which Eleanor finally finishes reading Scanlon's book and uses the principles of contractualism to explain a crucial decision that she makes.
diff --git a/wiki/wikipedia/2421.txt b/wiki/wikipedia/2421.txt deleted file mode 100644 index cac3ddbc5b43d5fa36837a22af3c02da71f46993..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2421.txt +++ /dev/null @@ -1,11 +0,0 @@

Mrs. Miniver's problem is a geometry problem about circles. Given a circle A, find a circle B such that the area of the lens formed by intersecting their two interiors is equal to the area of the symmetric difference of A and B (the sum of the areas contained in one but not both circles).

The problem derives from "A Country House Visit", one of Jan Struther's newspaper articles featuring her character Mrs. Miniver. According to the story:
    She saw every relationship as a pair of intersecting circles. It would seem at first glance that the more they overlapped the better the relationship; but this is not so. Beyond a certain point the law of diminishing returns sets in, and there are not enough private resources left on either side to enrich the life that is shared. Probably perfection is reached when the area of the two outer crescents, added together, is exactly equal to that of the leaf-shaped piece in the middle. On paper there must be some neat mathematical formula for arriving at this; in life, none.
    - -Alan Wachtel writes of the problem: - -
    It seems that certain mathematicians took this literary challenge literally, and Fadiman follows it with an excerpt from "Ingenious Mathematical Problems and Methods," by L. A. Graham, who had evidently posed the problem in a mathematics journal. Graham gives a solution by William W. Johnson of Cleveland for the general case of unequal circles. The analysis isn't difficult, but the resulting transcendental equation is messy and can't be solved exactly. When the circles are of equal size, the equation is much simpler, but it still can be solved only approximately.
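For two unit circles whose centers are a distance $d$ apart, the lens has area $A(d) = 2\cos^{-1}(d/2) - \tfrac{d}{2}\sqrt{4-d^2}$, and each circle has area $\pi$. The story's condition, lens equal to the two crescents combined, becomes $A(d) = 2\pi - 2A(d)$, i.e. $A(d) = 2\pi/3$, while three equal areas means $A(d) = \pi - A(d)$, i.e. $A(d) = \pi/2$. A minimal sketch of the numerical solution with SciPy (the function and variable names here are our own):

```python
import numpy as np
from scipy.optimize import brentq

def lens_area(d):
    """Area of the lens cut out by two unit circles whose centers are d apart."""
    return 2 * np.arccos(d / 2) - (d / 2) * np.sqrt(4 - d**2)

# Story's condition: lens = the two outer crescents combined, i.e. lens = 2*pi/3.
d_story = brentq(lambda d: lens_area(d) - 2 * np.pi / 3, 1e-9, 2 - 1e-9)

# Three equal areas: lens = each crescent, i.e. lens = pi/2.
d_equal = brentq(lambda d: lens_area(d) - np.pi / 2, 1e-9, 2 - 1e-9)

print(d_story)  # approximately 0.529864
print(d_equal)  # approximately 0.807946
```

Both roots match the values quoted below.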
In the case of two circles of equal size, the ratio of the distance between their centers and their radius is often quoted as approximately 0.807946. However, that actually describes the case when the three areas each are of equal size. The solution for the problem as stated in the story ("when the area of the two outer crescents, added together, is exactly equal to that of the leaf-shaped piece in the middle") is approximately 0.529864.

diff --git a/wiki/wikipedia/2422.txt b/wiki/wikipedia/2422.txt deleted file mode 100644 index 328d1f11d35e66a65f3df9fe7e3d7f3ee1ad3b69..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2422.txt +++ /dev/null @@ -1,55 +0,0 @@

In number theory and computer science, the partition problem, or number partitioning, is the task of deciding whether a given multiset S of positive integers can be partitioned into two subsets S1 and S2 such that the sum of the numbers in S1 equals the sum of the numbers in S2. Although the partition problem is NP-complete, there is a pseudo-polynomial time dynamic programming solution, and there are heuristics that solve the problem in many instances, either optimally or approximately. For this reason, it has been called "the easiest hard problem".

There is an optimization version of the partition problem, which is to partition the multiset S into two subsets S1, S2 such that the difference between the sum of elements in S1 and the sum of elements in S2 is minimized. The optimization version is NP-hard, but can be solved efficiently in practice.

The partition problem is a special case of two related problems:

* In the subset sum problem, the goal is to find a subset of S whose sum is a certain target number T given as input (the partition problem is the special case in which T is half the sum of S).

* In multiway number partitioning, there is an integer parameter k, and the goal is to decide whether S can be partitioned into k subsets of equal sum (the partition problem is the special case in which k = 2).

* However, it is quite different from the 3-partition problem: in that problem, the number of subsets is not fixed in advance - it should be |S|/3, where each subset must have exactly 3 elements. 3-partition is much harder than partition - it has no pseudo-polynomial time algorithm unless P = NP.

Given S = {3,1,1,2,2,1}, a valid solution to the partition problem is the two sets S1 = {1,1,1,2} and S2 = {2,3}. Both sets sum to 5, and they partition S. Note that this solution is not unique. S1 = {3,1,1} and S2 = {2,2,1} is another solution.

Not every multiset of positive integers has a partition into two subsets with equal sum. An example of such a set is S = {2,5}.

The partition problem is NP-hard. This can be proved by reduction from the subset sum problem. An instance of SubsetSum consists of a set S of positive integers and a target sum T < sum(S); the goal is to decide if there is a subset of S with sum exactly T.

Given such an instance, construct an instance of Partition in which the input set contains the original set plus two elements: z1 and z2, with z1 = sum(S) and z2 = 2T. The sum of this input set is sum(S) + z1 + z2 = 2 sum(S) + 2T, so the target sum for Partition is sum(S) + T.

* Suppose there exists a solution S' to the SubsetSum instance. Then sum(S') = T, so sum(S' ∪ {z1}) = sum(S) + T, so S' ∪ {z1} is a solution to the Partition instance.

* Conversely, suppose there exists a solution S'' to the Partition instance.
Then, S'' must contain either z1 or z2, but not both, since their sum is more than sum(S) + T. If S'' contains z1, then it must contain elements from S with a sum of exactly T, so S'' minus z1 is a solution to the SubsetSum instance. If S'' contains z2, then it must contain elements from S with a sum of exactly sum(S) - T, so the other objects in S are a solution to the SubsetSum instance.

As mentioned above, the partition problem is a special case of multiway-partitioning and of subset-sum. Therefore, it can be solved by algorithms developed for each of these problems. Algorithms developed for multiway number partitioning include:

* Greedy number partitioning - loops over the numbers, and puts each number in the set whose current sum is smallest. If the numbers are not sorted, then the runtime is O(n) and the approximation ratio is at most 3/2 ("approximation ratio" means the larger sum in the algorithm output, divided by the larger sum in an optimal partition). Sorting the numbers increases the runtime to O(n log n) and improves the approximation ratio to 7/6. If the numbers are distributed uniformly in [0,1], then the approximation ratio is at most $1 + O(\log{\log{n}}/n)$ almost surely, and $1 + O(1/n)$ in expectation.

* Largest Differencing Method (also called the Karmarkar-Karp algorithm) sorts the numbers in descending order and repeatedly replaces numbers by their differences. The runtime complexity is O(n log n). In the worst case, its approximation ratio is similar - at most 7/6. However, in the average case it performs much better than the greedy algorithm: when numbers are distributed uniformly in [0,1], its approximation ratio is at most $1 + 1/n^{\Theta(\log{n})}$ in expectation. It also performs better in simulation experiments.

* The Multifit algorithm uses binary search combined with an algorithm for bin packing. In the worst case, its approximation ratio is 8/7.

* The subset sum problem has an FPTAS which can be used for the partition problem as well, by setting the target sum to sum(S)/2.

There are exact algorithms that always find the optimal partition. Since the problem is NP-hard, such algorithms might take exponential time in general, but may be practically usable in certain cases. Algorithms developed for multiway number partitioning include:

* The pseudopolynomial time number partitioning takes $O(n m)$ memory, where m is the largest number in the input.

* The Complete Greedy Algorithm (CGA) considers all partitions by constructing a binary tree. Each level in the tree corresponds to an input number, where the root corresponds to the largest number, the level below to the next-largest number, etc. Each branch corresponds to a different set in which the current number can be put. Traversing the tree in depth-first order requires only $O(n)$ space, but might take $O(2^n)$ time. The runtime can be improved by using a greedy heuristic: in each level, develop first the branch in which the current number is put in the set with the smallest sum. This algorithm finds first the solution found by greedy number partitioning, but then proceeds to look for better solutions. Some variations of this idea are fully polynomial-time approximation schemes for the subset-sum problem, and hence for the partition problem as well.

* The Complete Karmarkar-Karp algorithm (CKK) considers all partitions by constructing a binary tree. Each level corresponds to a pair of numbers.
The left branch corresponds to putting them in different subsets (i.e., replacing them by their difference), and the right branch corresponds to putting them in the same subset (i.e., replacing them by their sum). This algorithm finds first the solution found by the largest differencing method, but then proceeds to find better solutions. It runs substantially faster than CGA on random instances. Its advantage is much larger when an equal partition exists, and can be of several orders of magnitude. In practice, problems of arbitrary size can be solved by CKK if the numbers have at most 12 significant digits. CKK can also run as an anytime algorithm: it finds the KK solution first, and then finds progressively better solutions as time allows (possibly requiring exponential time to reach optimality, for the worst instances). It requires $O(n)$ space, but in the worst case might take $O(2^n)$ time.

Algorithms developed for subset sum include:

* Horowitz and Sahni - runs in time $O( 2^{n/2}\cdot (n/2))$, but requires $O( 2^{n/2})$ space.

* Schroeppel and Shamir - runs in time $O( 2^{n/2}\cdot (n/4))$, and requires much less space - $O(2^{n/4})$.

* Howgrave-Graham and Joux - runs in time $O(2^{n/3})$, but it is a randomized algorithm that only solves the decision problem (not the optimization problem).

Sets with only one, or no partitions tend to be hardest (or most expensive) to solve compared to their input sizes. When the values are small compared to the size of the set, perfect partitions are more likely. The problem is known to undergo a "phase transition"; being likely for some sets and unlikely for others. If m is the number of bits needed to express any number in the set and n is the size of the set then $m/n < 1$ tends to have many solutions and $m/n > 1$ tends to have few or no solutions. As n and m get larger, the probability of a perfect partition goes to 1 or 0 respectively. This was originally argued based on empirical evidence by Gent and Walsh, then using methods from statistical physics by Mertens, and later proved by Borgs, Chayes, and Pittel.

A related problem, somewhat similar to the Birthday paradox, is that of determining the size of the input set so that we have a probability of one half that there is a solution, under the assumption that each element in the set is randomly selected with uniform distribution between 1 and some given value. The solution to this problem can be counter-intuitive, like the birthday paradox.

Equal-cardinality partition is a variant in which both parts should have an equal number of items, in addition to having an equal sum. This variant is also NP-hard.

diff --git a/wiki/wikipedia/2423.txt b/wiki/wikipedia/2423.txt deleted file mode 100644 index 5209d33b4bbd84e0925b7871e90be6d62317fba2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2423.txt +++ /dev/null @@ -1,156 +0,0 @@

Céa's lemma is a lemma in mathematics. Introduced by Jean Céa in his Ph.D. dissertation, it is an important tool for proving error estimates for the finite element method applied to elliptic partial differential equations.

Let $V$ be a real Hilbert space with the norm $\|\cdot\|.$ Let $a:V\times V\to \mathbb R$ be a bilinear form with the properties

* $|a(v, w)| \le \gamma \|v\|\|w\|$ for some constant $\gamma>0$ and all $v, w $ in $V$ (continuity)

* $a(v, v) \ge \alpha \|v\|^2$ for some constant $\alpha>0$ and all $v$ in $V$ (coercivity or $V$-ellipticity).

Let $L:V\to \mathbb R$ be a bounded linear operator.
Consider the problem of finding an element $u$ in $V$ such that
$$
a(u, v)=L(v)
$$
for all $v$ in $V.$

Consider the same problem on a finite-dimensional subspace $V_h$ of $V,$ so that $u_h$ in $V_h$ satisfies
$$
a(u_h, v)=L(v)
$$
for all $v$ in $V_h.$

By the Lax–Milgram theorem, each of these problems has exactly one solution. Céa's lemma states that
$$
\|u-u_h\|\le \frac{\gamma}{\alpha}\|u-v\|
$$
for all $v$ in $V_h.$

That is to say, the subspace solution $u_h$ is "the best" approximation of $u$ in $V_h,$ up to the constant $\gamma/\alpha.$

The proof is straightforward:
$$
\alpha\|u-u_h\|^2 \le a(u-u_h,u-u_h) = a(u-u_h,u-v) + a(u-u_h,v - u_h) = a(u-u_h,u-v) \le \gamma\|u-u_h\|\|u-v\|
$$
for all $v$ in $V_h$; dividing through by $\|u-u_h\|$ gives the stated inequality.

We used the $a$-orthogonality of $u-u_h$ and $V_h$,
$$
a(u-u_h,v) = 0, \ \forall \ v \in V_h,
$$
which follows directly from $V_h \subset V$:
$$
a(u, v) = L(v) = a(u_h, v)
$$
for all $v$ in $V_h$.

Note: Céa's lemma holds on complex Hilbert spaces also; one then uses a sesquilinear form $a(\cdot, \cdot)$ instead of a bilinear one. The coercivity assumption then becomes $|a(v, v)| \ge \alpha \|v\|^2$ for all $v$ in $V$ (notice the absolute value sign around $a(v, v)$).

In many applications, the bilinear form $a:V\times V\to \mathbb R$ is symmetric, so
$$
a(v, w) =a(w, v)
$$
for all $v, w$ in $V.$

This, together with the above properties of this form, implies that $a(\cdot, \cdot)$ is an inner product on $V.$ The resulting norm
$$
\|v\|_a=\sqrt{a(v, v)}
$$

is called the energy norm, since it corresponds to a physical energy in many problems. This norm is equivalent to the original norm $\|\cdot\|.$

Using the $a$-orthogonality of $u-u_h$ and $V_h$ and the Cauchy–Schwarz inequality,
$$
\|u-u_h\|_a^2 = a(u-u_h,u-u_h) = a(u-u_h,u-v) \le \|u-u_h\|_a \cdot \|u-v\|_a
$$
for all $v$ in $V_h$.

Hence, in the energy norm, the inequality in Céa's lemma becomes
$$
\|u-u_h\|_a\le \|u-v\|_a
$$
for all $v$ in $V_h$

(notice that the constant $\gamma/\alpha$ on the right-hand side is no longer present).

This states that the subspace solution $u_h$ is the best approximation to the full-space solution $u$ with respect to the energy norm. Geometrically, this means that $u_h$ is the projection of the solution $u$ onto the subspace $V_h$ with respect to the inner product $a(\cdot, \cdot)$ (see the adjacent picture).

Using this result, one can also derive a sharper estimate in the norm $\| \cdot \|$. Since
$$
\alpha \|u-u_h\|^2 \le a(u-u_h,u-u_h) = \|u-u_h\|_a^2 \le \|u - v\|_a^2 \le \gamma \|u-v\|^2
$$
for all $v$ in $V_h$,

it follows that
$$
\|u-u_h\| \le \sqrt{\frac{\gamma}{\alpha}} \|u-v\|
$$
for all $v$ in $V_h$.

We will apply Céa's lemma to estimate the error of calculating the solution to an elliptic differential equation by the finite element method.

Consider the problem of finding a function $u:[a, b]\to \mathbb R$ satisfying the conditions
$$
\begin{cases}
-u'' = f \mbox{ in } [a, b] \\
u(a)=u(b)=0
\end{cases}
$$

where $f:[a, b]\to \mathbb R$ is a given continuous function.

Physically, the solution $u$ to this two-point boundary value problem represents the shape taken by a string under the influence of a force such that at every point $x$ between $a$ and $b$ the force density is $f(x)\mathbf{e}$ (where $\mathbf{e}$ is a unit vector pointing vertically, while the endpoints of the string are on a horizontal line, see the adjacent picture).
For example, the force may be gravity, in which case $f$ is a constant function (since the gravitational force is the same at all points).

Let the Hilbert space $V$ be the Sobolev space $H^1_0(a, b),$ which is the space of all square-integrable functions $v$ defined on $[a, b]$ that have a weak derivative on $[a, b]$ with $v'$ also being square integrable, and that satisfy the conditions $v(a)=v(b)=0.$ The inner product on this space is
$$
(v, w)=\int_a^b\! \left( v(x)w(x) + v'(x) w'(x)\right)dx
$$
for all $v$ and $w$ in $V.$

After multiplying the original boundary value problem by $v$ in this space and performing an integration by parts, one obtains the equivalent problem
$$
a(u, v)=L(v)
$$
for all $v$ in $V$,

with
$$
a(u, v)=\int_a^b\! u'(x) v'(x)dx,
$$

and
$$
L(v) = \int_a^b\! f(x) v(x) dx.
$$

It can be shown that the bilinear form $a(\cdot, \cdot)$ and the operator $L$ satisfy the assumptions of Céa's lemma.

In order to determine a finite-dimensional subspace $V_h$ of $V,$ consider a partition
$$
a=x_0< x_1 < \cdots < x_{n-1} < x_n = b
$$

of the interval $[a, b],$ and let $V_h$ be the space of all continuous functions that are affine on each subinterval in the partition (such functions are called piecewise-linear). In addition, assume that any function in $V_h$ takes the value 0 at the endpoints of $[a, b].$ It follows that $V_h$ is a vector subspace of $V$ whose dimension is $n-1$ (the number of points in the partition that are not endpoints).

Let $u_h$ be the solution to the subspace problem
$$
a(u_h, v)=L(v)
$$
for all $v$ in $V_h,$

so one can think of $u_h$ as a piecewise-linear approximation to the exact solution $u.$ By Céa's lemma, there exists a constant $C>0$ dependent only on the bilinear form $a(\cdot, \cdot),$ such that
$$
\|u-u_h\|\le C \|u-v\|
$$
for all $v$ in $V_h.$

To explicitly calculate the error between $u$ and $u_h,$ consider the function $\pi u$ in $V_h$ that has the same values as $u$ at the nodes of the partition (so $\pi u$ is obtained by linear interpolation on each interval $[x_i, x_{i+1}]$ from the values of $u$ at the interval's endpoints). It can be shown using Taylor's theorem that there exists a constant $K$ that depends only on the endpoints $a$ and $b,$ such that
$$
|u'(x)-(\pi u)'(x)|\le K h \|u''\|_{L^2(a, b)}
$$

for all $x$ in $[a, b],$ where $h$ is the largest length of the subintervals $[x_i, x_{i+1}]$ in the partition, and the norm on the right-hand side is the L2 norm.

This inequality then yields an estimate for the error
$$
\|u-\pi u\|.
$$

Then, by substituting $v=\pi u$ in Céa's lemma, it follows that
$$
\|u-u_h\|\le C h \|u''\|_{L^2(a, b)},
$$

where $C$ is a different constant from the above (it depends only on the bilinear form, which implicitly depends on the interval $[a, b]$).
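To make the construction above concrete, here is a minimal Python sketch (an illustration added here, not part of the original article) of the piecewise-linear finite element method for this boundary value problem on a uniform grid. The test problem $f(x)=\pi^2\sin(\pi x)$ on $[0,1]$, whose exact solution is $u(x)=\sin(\pi x)$, is made up for illustration; the printed errors shrink as $h$ does, in line with the error estimate above.

```
# Sketch: piecewise-linear FEM for -u'' = f on [a,b] with u(a) = u(b) = 0,
# on a uniform grid. Example data (made up): f(x) = pi^2 sin(pi x), whose
# exact solution is u(x) = sin(pi x).
import numpy as np

def fem_solve(f, a, b, n):
    """Solve -u'' = f with zero boundary values using n subintervals."""
    h = (b - a) / n
    nodes = np.linspace(a, b, n + 1)
    # Stiffness matrix of a(u, v) = \int u'v' dx for the hat basis functions:
    # tridiagonal, 2/h on the diagonal and -1/h off the diagonal.
    K = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h
    # Load vector L(v_i) = \int f v_i dx, approximated by simple nodal
    # quadrature h * f(x_i) at the interior nodes.
    F = h * f(nodes[1:-1])
    u = np.zeros(n + 1)                    # boundary values u(a) = u(b) = 0
    u[1:-1] = np.linalg.solve(K, F)
    return nodes, u

f = lambda x: np.pi**2 * np.sin(np.pi * x)
exact = lambda x: np.sin(np.pi * x)
for n in (8, 16, 32):
    xs, uh = fem_solve(f, 0.0, 1.0, n)
    print(n, np.max(np.abs(uh - exact(xs))))   # error decreases with h
```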
This result is of fundamental importance, as it states that the finite element method can be used to approximately calculate the solution of our problem, and that the error in the computed solution decreases proportionately to the partition size $h.$ Céa's lemma can be applied along the same lines to derive error estimates for finite element problems in higher dimensions (here the domain of $u$ was in one dimension), and when using higher-order polynomials for the subspace $V_h.$

diff --git a/wiki/wikipedia/2424.txt b/wiki/wikipedia/2424.txt
deleted file mode 100644
index d4c258405e7470f4643d2779c99001f0e44ed1dd..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2424.txt
+++ /dev/null
@@ -1,21 +0,0 @@
Gaisi Takeuti was a Japanese mathematician, known for his work in proof theory.

After graduating from Tokyo University, he went to Princeton to study under Kurt Gödel.

He later became a professor at the University of Illinois at Urbana–Champaign. Takeuti was president (2003–2009) of the Kurt Gödel Society, having worked on the book Memoirs of a Proof Theorist: Godel and Other Logicians. His goal was to prove the consistency of the real numbers. To this end, Takeuti's conjecture speculates that a sequent formalisation of second-order logic has cut-elimination. He is also known for his work on ordinal diagrams with Akiko Kino.

diff --git a/wiki/wikipedia/2425.txt b/wiki/wikipedia/2425.txt
deleted file mode 100644
index ac120b23a5e0122682de627932f9b0ea39334d48..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2425.txt
+++ /dev/null
@@ -1,27 +0,0 @@
In science and especially in mathematical studies, a variational principle is one that enables a problem to be solved using calculus of variations, which concerns finding functions that optimize the values of quantities that depend on those functions. For example, the problem of determining the shape of a hanging chain suspended at both ends—a catenary—can be solved using variational calculus, and in this case, the variational principle is the following: The solution is a function that minimizes the gravitational potential energy of the chain.

Any physical law which can be expressed as a variational principle describes a self-adjoint operator. These expressions are also called Hermitian. Such an expression describes an invariant under a Hermitian transformation.

Felix Klein's Erlangen program attempted to identify such invariants under a group of transformations. In what is referred to in physics as Noether's theorem, the Poincaré group of transformations (what is now called a gauge group) for general relativity defines symmetries under a group of transformations which depend on a variational principle, or action principle.

Examples of variational principles include:

* The Rayleigh–Ritz method for solving boundary-value problems approximately

* Ekeland's variational principle in mathematical optimization

* The finite element method

* The variational principle relating topological entropy and Kolmogorov-Sinai entropy.

* Fermat's principle in geometrical optics

* Maupertuis' principle in classical mechanics

* The principle of least action in mechanics, electromagnetic theory, and quantum mechanics

* The variational method in quantum mechanics

* Gauss's principle of least constraint and Hertz's principle of least curvature

* Hilbert's action principle in general relativity, leading to the Einstein field equations.
* Palatini variation

diff --git a/wiki/wikipedia/2426.txt b/wiki/wikipedia/2426.txt
deleted file mode 100644
index b6f2ada5b70cdc3717888f2378593b9b4348e451..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2426.txt
+++ /dev/null
@@ -1,17 +0,0 @@
In mathematics, the HM-GM-AM-QM inequalities state the relationship between the harmonic mean, geometric mean, arithmetic mean, and quadratic mean (aka root mean square, RMS). Suppose that $x_1, x_2, \ldots, x_n$ are positive real numbers. Then
$$
0<\frac{n}{1/x_1+1/x_2+\cdots+1/x_n}\leq\sqrt[n]{x_1x_2\cdots x_n}\leq\frac{x_1+x_2+\cdots+x_n}{n} \leq\sqrt{\frac{x_1^2+x_2^2+\cdots+x_n^2}{n}}.
$$

These inequalities often appear in mathematical competitions and have applications in many fields of science.

There are three inequalities between means to prove. There are various methods to prove the inequalities, including mathematical induction, the Cauchy–Schwarz inequality, Lagrange multipliers, and Jensen's inequality. For several proofs that GM ≤ AM, see Inequality of arithmetic and geometric means.

From the Cauchy–Schwarz inequality on real numbers, setting one vector to (1, 1, ...):
$$
\left( \sum_{i=1}^n u_i \cdot 1 \right)^2 \leq \left( \sum_{i=1}^n u_i^2 \right) \left( \sum_{i=1}^n 1^2 \right) = n \sum_{i=1}^n u_i^2,
$$
hence $\left( \frac{\sum_{i=1}^n u_i}{n} \right)^2 \leq \frac{\sum_{i=1}^n u_i^2}{n}$. For positive $u_i$, taking the square root of both sides gives the AM-QM inequality.

When n = 2, the inequalities become $\frac 2 {\frac{1}{x_1}+\frac{1}{x_2}} \leq \sqrt{x_1x_2} \leq \frac{x_1+x_2}{2}\leq\sqrt{\frac{x_1^2+x_2^2}{2}}$ for all $x_1, x_2 > 0,$ which can be visualized in a semi-circle whose diameter is [AB] and whose center is D.

Suppose AC = x1 and BC = x2. Construct perpendiculars to [AB] at D and C, meeting the semi-circle at E and F respectively. Join [CE] and [DF] and further construct a perpendicular [CG] to [DF] at G. Then the length of GF can be calculated to be the harmonic mean, CF to be the geometric mean, DE to be the arithmetic mean, and CE to be the quadratic mean. The inequalities then follow easily by the Pythagorean theorem.

diff --git a/wiki/wikipedia/2427.txt b/wiki/wikipedia/2427.txt
deleted file mode 100644
index 0a26e5ac7991a9f238e79712148d3b74e041ee0c..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2427.txt
+++ /dev/null
@@ -1,3 +0,0 @@
This is a comparison of freeware (proprietary software released free of charge) file synchronization software.

This is a comparison of commercial software in the field of file synchronization. These programs only provide full functionality with a payment. As indicated, some are trialware and provide functionality during a trial period; some are freemium, meaning that they have freeware editions.

diff --git a/wiki/wikipedia/2428.txt b/wiki/wikipedia/2428.txt
deleted file mode 100644
index f6afaf0665df4451031de9542214192067c289ea..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2428.txt
+++ /dev/null
@@ -1,42 +0,0 @@
In the mathematical theory of automorphic forms, the fundamental lemma relates orbital integrals on a reductive group over a local field to stable orbital integrals on its endoscopic groups. It was conjectured by Robert Langlands in the course of developing the Langlands program. The fundamental lemma was proved by Gérard Laumon and Ngô Bảo Châu in the case of unitary groups and then by Ngô for general reductive groups, building on a series of important reductions made by Jean-Loup Waldspurger to the case of Lie algebras.
Time magazine placed Ngô's proof on the list of the "Top 10 scientific discoveries of 2009". In 2010, Ngô was awarded the Fields Medal for this proof.

Langlands outlined a strategy for proving local and global Langlands conjectures using the Arthur–Selberg trace formula, but in order for this approach to work, the geometric sides of the trace formula for different groups must be related in a particular way. This relationship takes the form of identities between orbital integrals on reductive groups G and H over a nonarchimedean local field F, where the group H, called an endoscopic group of G, is constructed from G and some additional data.

The first case considered was $G = {\rm SL}_2$. Langlands and Shelstad then developed the general framework for the theory of endoscopic transfer and formulated specific conjectures. However, during the next two decades only partial progress was made towards proving the fundamental lemma. Harris called it a "bottleneck limiting progress on a host of arithmetic questions". Langlands himself, writing on the origins of endoscopy, commented on the difficulty of the problem.

The fundamental lemma states that an orbital integral O for a group G is equal to a stable orbital integral SO for an endoscopic group H, up to a transfer factor Δ:
$$
SO_{\gamma_H}(1_{K_H}) = \Delta(\gamma_H,\gamma_G)O^\kappa_{\gamma_G}(1_{K_G})
$$

where

*F is a local field

*G is an unramified group defined over F, in other words a quasi-split reductive group defined over F that splits over an unramified extension of F

*H is an unramified endoscopic group of G associated to κ

*KG and KH are hyperspecial maximal compact subgroups of G and H, which means roughly that they are the subgroups of points with coefficients in the ring of integers of F.

*1KG and 1KH are the characteristic functions of KG and KH.

*Δ(γH, γG) is a transfer factor, a certain elementary expression depending on γH and γG

*γH and γG are elements of H and G, respectively, representing stable conjugacy classes, such that the stable conjugacy class of G is the transfer of the stable conjugacy class of H.

*κ is a character of the group of conjugacy classes in the stable conjugacy class of γG

*SO and O are stable orbital integrals and orbital integrals depending on their parameters.

Shelstad proved the fundamental lemma for Archimedean fields.

Waldspurger verified the fundamental lemma for general linear groups.

Kottwitz and Blasius verified some cases of the fundamental lemma for 3-dimensional unitary groups.

Hales and Weissauer verified the fundamental lemma for the symplectic and general symplectic groups Sp4, GSp4.

A paper of George Lusztig and David Kazhdan pointed out that orbital integrals could be interpreted as counting points on certain algebraic varieties over finite fields. Further, the integrals in question can be computed in a way that depends only on the residue field of F; and the issue can be reduced to the Lie algebra version of the orbital integrals. Then the problem was restated in terms of the Springer fiber of algebraic groups. The circle of ideas was connected to a purity conjecture; Laumon gave a conditional proof based on such a conjecture, for unitary groups. Laumon and Ngô then proved the fundamental lemma for unitary groups, using the Hitchin fibration introduced by Ngô, which is an abstract geometric analogue of the Hitchin system of complex algebraic geometry.
- -Waldspurger showed for Lie algebras that the function field case implies the fundamental lemma over all local fields, and Waldspurger showed that the fundamental lemma for Lie algebras implies the fundamental lemma for groups. diff --git a/wiki/wikipedia/2429.txt b/wiki/wikipedia/2429.txt deleted file mode 100644 index 8d68476d57a51de8dc0bf163c26546f5d4cf46ec..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2429.txt +++ /dev/null @@ -1,5 +0,0 @@ -Paraconsistent mathematics, sometimes called inconsistent mathematics, represents an attempt to develop the classical infrastructure of mathematics (e.g. analysis) based on a foundation of paraconsistent logic instead of classical logic. A number of reformulations of analysis can be developed, for example functions which both do and do not have a given value simultaneously. - -Chris Mortensen claims (see references): - -One could hardly ignore the examples of analysis and its special case, the calculus. There prove to be many places where there are distinctive inconsistent insights; see Mortensen (1995) for example. (1) Robinson's non-standard analysis was based on infinitesimals, quantities smaller than any real number, as well as their reciprocals, the infinite numbers. This has an inconsistent version, which has some advantages for calculation in being able to discard higher-order infinitesimals. The theory of differentiation turned out to have these advantages, while the theory of integration did not. (2) diff --git a/wiki/wikipedia/243.txt b/wiki/wikipedia/243.txt deleted file mode 100644 index 63ab3b15637b887272324370b9e0906ab2be1304..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/243.txt +++ /dev/null @@ -1,15 +0,0 @@ -Rectangle packing is a packing problem where the objective is to determine whether a given set of small rectangles can be placed inside a given large polygon, such that no two small rectangles overlap. Several variants of this problem have been studied. - -In this variant, there are multiple instances of a single rectangle of size (l,w), and a bigger rectangle of size (L,W). The goal is to pack as many small rectangles as possible into the big rectangle. It is allowed to rotate some of the small rectangles by multiples of 90°. - -This problem has some applications such as loading of boxes on pallets and, specifically, woodpulp stowage. As an example result: it is possible to pack 147 small rectangles of size (137,95) in a big rectangle of size (1600,1230). - -Given a rectilinear polygon R in the plane, a set S of points in R, and a set of identical squares, the goal is to find the largest number of non-overlapping squares that can be packed in points of S. - -Suppose that, for each point p in S, we put a square centered at p. Let GS be the intersection graph of these squares. A square-packing is equivalent to an independent set in GS. Finding a largest square-packing is NP-hard; one may prove this by reducing from 3SAT. - -In this variant, the small rectangles can have varying lengths and widths, and they should be packed in a given large rectangle. The decision problem of whether such a packing exists is NP-hard. This can be proved by a reduction from 3-partition. Given an instance of 3-partition with 3m positive integers: a1, ..., a3m, with a total sum of m T, we construct 3m small rectangles, all with a width of 1, such that the length of rectangle i is ai + m. The big rectangle has width m and length T + 3m. 
Every solution to the 3-partition instance induces a packing of the rectangles into m subsets such that the total length in each subset is exactly T, so they exactly fit into the big rectangle. Conversely, in any packing of the big rectangle, there must be no "holes", so the rectangles must not be rotated. Therefore, the packing must involve exactly m rows where each row contains rectangles with a total length of exactly T. This corresponds to a solution of the 3-partition instance.

When there is an additional restriction that the packing must be exact (with no wasted space), the small rectangles may be rotated only by multiples of 90°. In this case, the problem is in NP. Without this requirement, the small rectangles may be rotated by arbitrary angles. In this more general case, it is not clear if the problem is in NP, since it is much harder to verify a solution.

In this variant, the small rectangles can have varying lengths and widths, and their orientation is fixed (they cannot be rotated). The goal is to pack them in an enclosing rectangle of minimum area, with no bounds on the enclosing rectangle's width or height. This problem has an important application in combining images into a single larger image. A web page that loads a single larger image often renders faster in the browser than the same page loading multiple small images, due to the overhead involved in requesting each image from the web server. The problem is NP-complete in general, but there are fast algorithms for solving small instances.

diff --git a/wiki/wikipedia/2430.txt b/wiki/wikipedia/2430.txt
deleted file mode 100644
index e353ea7e4937083c08d62fd25e84b9c9642aa34d..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2430.txt
+++ /dev/null
@@ -1,21 +0,0 @@
In discrete mathematics and theoretical computer science, reconfiguration problems are computational problems involving reachability or connectivity of state spaces.

Here, a state space is a discrete set of configurations of a system or solutions of a combinatorial problem, called states, together with a set of allowed moves linking one state to another. Reconfiguration problems may ask:

*For a given class of problems, is the state space always connected? That is, can one transform every pair of states into each other with a sequence of moves? If not, what is the computational complexity of determining whether the state space for a particular problem is connected?

*What is the diameter of the state space, the smallest number D such that every two states can be transformed into each other with at most D moves?

*Given two states, what is the complexity of determining whether they can be transformed into each other, or of finding the shortest sequence of moves for transforming one into another?

*If moves are chosen randomly with a carefully chosen probability distribution so that the resulting Markov chain converges to a discrete uniform distribution, how many moves are needed in a random walk in order to ensure that the state at the end of the walk is nearly uniformly distributed? That is, what is the Markov chain mixing time?

Examples of problems studied in reconfiguration include:

*Games or puzzles such as the 15 puzzle or Rubik's cube. This type of puzzle can often be modeled mathematically using the theory of permutation groups, leading to fast algorithms for determining whether states are connected; however, finding the state space diameter or the shortest path between two states may be more difficult.
For instance, for $n\times n\times n$ versions of the Rubik's cube, the state space diameter is $\Theta(n^2/\log n)$, and the complexity of finding shortest solutions is unknown, but for a generalized version of the puzzle (in which some cube faces are unlabeled) it is NP-hard. Other reconfiguration puzzles such as Sokoban may be modeled as token reconfiguration but lack a group-theoretic structure. For such problems, the complexity can be higher; in particular, testing reachability for Sokoban is PSPACE-complete.

*Rotation distance in binary trees and related problems of flip distance in flip graphs. A rotation is an operation that changes the structure of a binary tree without affecting the left-to-right ordering of its nodes, often used to rebalance binary search trees. Rotation distance is the minimum number of rotations needed to transform one tree into another. The same state space also models the triangulations of a convex polygon, and moves that "flip" one triangulation into another by removing one diagonal of the polygon and replacing it by another; similar problems have also been studied on other kinds of triangulation. The maximum possible rotation distance between two trees with a given number of nodes is known, but it remains an open problem whether the rotation distance between two arbitrary trees can be found in polynomial time. The analogous problems for flip distance between triangulations of point sets or non-convex polygons are NP-hard.

*Reconfiguration of graph colorings. The moves that have been considered for coloring reconfiguration include changing the color of a single vertex, or swapping the colors of a Kempe chain. When the number of colors is at least two plus the degeneracy of a graph, then the state space of single-vertex recolorings is connected, and Cereceda's conjecture suggests that it has polynomial diameter. For fewer colors, some graphs have disconnected state spaces. For 3-colorings, testing global connectivity of the single-vertex recoloring state space is co-NP-complete, but when two colorings can be reconfigured to each other, the shortest reconfiguration sequence can be found in polynomial time. For more than three colors, single-vertex reconfiguration is PSPACE-complete.

*Nondeterministic constraint logic is a combinatorial problem on orientations of cubic graphs whose edges are colored red and blue. In a valid state of the system, each vertex must have at least one blue edge or at least two edges coming into it. A move in this state space reverses the orientation of a single edge while preserving these constraints. It is PSPACE-complete to test whether the resulting state space is connected or whether two states are reachable from each other, even when the underlying graph has bounded bandwidth. These hardness results are often used as the basis of reductions proving that other reconfiguration problems, such as the ones arising from games and puzzles, are also hard.

diff --git a/wiki/wikipedia/2431.txt b/wiki/wikipedia/2431.txt
deleted file mode 100644
index 7401c179f2d761cadb413c8712f9244a51fdcaf4..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2431.txt
+++ /dev/null
@@ -1,112 +0,0 @@
In mathematics, Sylvester's criterion is a necessary and sufficient criterion to determine whether a Hermitian matrix is positive-definite. It is named after James Joseph Sylvester.
Sylvester's criterion states that an n × n Hermitian matrix M is positive-definite if and only if all the following matrices have a positive determinant:

* the upper left 1-by-1 corner of M,

* the upper left 2-by-2 corner of M,

* the upper left 3-by-3 corner of M,

* ${}\quad\vdots$

* M itself.

In other words, all of the leading principal minors must be positive. By using appropriate permutations of rows and columns of M, it can also be shown that the positivity of any nested sequence of n principal minors of M is equivalent to M being positive-definite.

An analogous theorem holds for characterizing positive-semidefinite Hermitian matrices, except that it is no longer sufficient to consider only the leading principal minors: a Hermitian matrix M is positive-semidefinite if and only if all principal minors of M are nonnegative.

Suppose $M_n$ is an $n\times n$ Hermitian matrix, $M_n^{\dagger}=M_n$. Let $M_k, k=1,\ldots,n$ be the leading principal submatrices, the $k\times k$ upper left corner matrices. Let us first show that if $M_n$ is positive definite, then the leading principal minors are positive, that is, $\det M_k>0$. It is easy to see that each $M_k$ is positive definite by choosing
$$
x=\left(\begin{array}{c} x_1\\ \vdots\\ x_k\\ 0\\ \vdots\\ 0\end{array}\right) = \left(\begin{array}{c} \vec{x}\\ 0\\ \vdots\\ 0 \end{array}\right)
$$
and noticing that $0 < x^{\dagger}M_n x = \vec{x}^{\dagger}M_k\vec{x}$ for every nonzero $\vec{x}$. Since $M_k$ is positive definite, its eigenvalues are all positive, and $\det M_k$, being their product, satisfies $\det M_k>0$. This ends part 1.

For the other direction, we use induction. Suppose the criterion holds for $M_n$; we show that it then holds for $M_{n+1}$. We write
$$
M_{n+1}= \left( \begin{array}{cc}M_n&\vec{v}\\ \vec{v}^{\dagger}&d\end{array}\right)
$$
where $\vec{v}$ is a vector and $d$ is a real constant.

Let us show the induction step: if $\det M_{n+1}>0$ and $M_n$ is positive definite, then $M_{n+1}$ is positive definite. Denote
$$
x=\left( \begin{array}{c}\vec{x}\\ x_{n+1}\end{array} \right),
$$
so that
$$
x^{\dagger}M_{n+1}x=\vec{x}^{\dagger}M_n\vec{x}+x_{n+1}\vec{x}^{\dagger}\vec{v}+\bar{x}_{n+1}\vec{v}^{\dagger}\vec{x}+d|x_{n+1}|^2.
$$

We make the translation $\vec{x}\rightarrow \vec{x}+\vec{c}$ to eliminate the terms linear in $\vec{x}$; this happens exactly when $\vec{c}=x_{n+1}M_n^{-1}\vec{v}$. We obtain
$$
x^{\dagger}M_{n+1}x=(\vec{x}^{\dagger}+\vec{v}^{\dagger}M_n^{-1}\bar{x}_{n+1})M_n(\vec{x}+x_{n+1}M_n^{-1}\vec{v})-|x_{n+1}|^2\vec{v}^{\dagger}M_n^{-1}\vec{v}+d|x_{n+1}|^2=(\vec{x}+\vec{c})^{\dagger}M_n(\vec{x}+\vec{c})+|x_{n+1}|^2(d-\vec{v}^{\dagger}M_n^{-1}\vec{v}).
$$

Then we use the block matrix determinant formula, which is valid whenever the block $A$ is invertible:
    $\det \left(\begin{array}{cc}A&B\\C&D\end{array} \right)=\det A\det(D-CA^{-1}B).$
We have
$$
\det M_{n+1}=\det M_n(d-\vec{v}^{\dagger}M_n^{-1}\vec{v})>0,
$$
which, since $\det M_n>0$, implies $d-\vec{v}^{\dagger}M_n^{-1}\vec{v}>0$; by the expression above, it follows that $x^{\dagger}M_{n+1}x>0$ for every nonzero $x$. $\Box$

The proof above applies only to nonsingular Hermitian matrices with coefficients in $\mathbb{R}$, and therefore only to nonsingular real-symmetric matrices.

Positive definite or semidefinite matrix: A symmetric matrix A whose eigenvalues are positive (λ > 0) is called positive definite, and when the eigenvalues are just nonnegative (λ ≥ 0), A is said to be positive semidefinite.

Theorem I: A real-symmetric matrix A has nonnegative eigenvalues if and only if A can be factored as $A=B^TB$, and all eigenvalues are positive if and only if B is nonsingular.

Theorem II (The Cholesky decomposition): The symmetric matrix A possesses positive pivots if and only if A can be uniquely factored as $A=R^TR$, where R is an upper-triangular matrix with positive diagonal entries. This is known as the Cholesky decomposition of A, and R is called the Cholesky factor of A.

Theorem III: Let Ak be the k × k leading principal submatrix of An×n. If A has an LU factorization A = LU, where L is a lower triangular matrix with a unit diagonal, then det(Ak) = u11u22 · · · ukk, and the k-th pivot is ukk = det(A1) = a11 for k = 1, ukk = det(Ak)/det(Ak−1) for k = 2, 3, . . . , n, where ukk is the (k, k)-th entry of U for all k = 1, 2, . . . , n.

Combining Theorem II with Theorem III yields:

Statement I: If the symmetric matrix A can be factored as $A=R^TR$ where R is an upper-triangular matrix with positive diagonal entries, then all the pivots of A are positive (by Theorem II), therefore all the leading principal minors of A are positive (by Theorem III).

Statement II: If the nonsingular n × n symmetric matrix A can be factored as $A=B^TB$, then the QR decomposition (closely related to the Gram–Schmidt process) of B (B = QR) yields $A=B^TB=R^TQ^TQR=R^TR$, where Q is an orthogonal matrix and R is an upper-triangular matrix.

As A is nonsingular and $A=R^TR$, it follows that all the diagonal entries of R are non-zero. Let rjj be the (j, j)-th entry of R for all j = 1, 2, . . . , n. Then rjj ≠ 0 for all j = 1, 2, . . . , n.

Let F be a diagonal matrix, and let fjj be the (j, j)-th entry of F for all j = 1, 2, . . . , n. For all j = 1, 2, . . . , n, we set fjj = 1 if rjj > 0, and we set fjj = -1 if rjj < 0. Then $F^TF=I_n$, the n × n identity matrix.

Let S=FR. Then S is an upper-triangular matrix with all diagonal entries being positive. Hence we have $A=R^TR=R^TF^TFR=S^TS$, for some upper-triangular matrix S with all diagonal entries being positive.

Note that Statement II requires the non-singularity of the symmetric matrix A.

Combining Theorem I with Statement I and Statement II yields:

Statement III: If the real-symmetric matrix A is positive definite, then A possesses a factorization of the form $A=B^TB$, where B is nonsingular (Theorem I); the expression $A=B^TB$ then implies that A possesses a factorization of the form $A=R^TR$ where R is an upper-triangular matrix with positive diagonal entries (Statement II), and therefore all the leading principal minors of A are positive (Statement I).

In other words, Statement III proves the "only if" part of Sylvester's Criterion for non-singular real-symmetric matrices.

Sylvester's Criterion: The real-symmetric matrix A is positive definite if and only if all the leading principal minors of A are positive.
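As a quick numerical illustration of the criterion (a sketch added here, not part of the original article; the example matrix and the use of numpy are my own choices), the following Python snippet computes all leading principal minors of a real symmetric matrix and compares the verdict with the eigenvalue characterization of positive definiteness:

```
# Sketch: check Sylvester's criterion against the eigenvalue definition
# of positive definiteness, for a made-up real symmetric example matrix.
import numpy as np

def sylvester_positive_definite(A):
    """True iff every leading principal minor det(A[:k, :k]) is positive."""
    n = A.shape[0]
    return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, n + 1))

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])    # example matrix (made up)

print([np.linalg.det(A[:k, :k]) for k in (1, 2, 3)])   # minors 2, 3, 4: all positive
print(sylvester_positive_definite(A))                  # True
print(bool(np.all(np.linalg.eigvalsh(A) > 0)))         # agrees: all eigenvalues positive
```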
diff --git a/wiki/wikipedia/2432.txt b/wiki/wikipedia/2432.txt deleted file mode 100644 index 16faa68351aa7320e3d39e83fb3a1451e92a8fbb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2432.txt +++ /dev/null @@ -1,32 +0,0 @@ -[[Image:Friends strangers graph.gif|right|frame| - -78 of the 156 possible friends-strangers graphs with 6 nodes. The other 78 can be obtained by reversing the red and blue colours of each graph. For each graph the red/blue nodes - -shows a sample triplet of mutual friends/strangers.]] - -The theorem on friends and strangers is a mathematical theorem in an area of mathematics called Ramsey theory. - -Suppose a party has six people. Consider any two of them. They might be meeting for the first time—in which case we will call them mutual strangers; or they might have met before—in which case we will call them mutual acquaintances. The theorem says: - -In any party of six people either at least three of them are (pairwise) mutual strangers or at least three of them are (pairwise) mutual acquaintances. - -A proof of the theorem requires nothing but a three-step logic. It is convenient to phrase the problem in graph-theoretic language. - -Suppose a graph has 6 vertices and every pair of (distinct) vertices is joined by an edge. Such a graph is called a complete graph (because there cannot be any more edges). A complete graph on $n$ vertices is denoted by the symbol $K_n$. - -Now take a $K_6$. It has 15 edges in all. Let the 6 vertices stand for the 6 people in our party. Let the edges be coloured red or blue depending on whether the two people represented by the vertices connected by the edge are mutual strangers or mutual acquaintances, respectively. The theorem now asserts: - -No matter how you colour the 15 edges of a $K_6$ with red and blue, you cannot avoid having either a red triangle—that is, a triangle all of whose three sides are red, representing three pairs of mutual strangers—or a blue triangle, representing three pairs of mutual acquaintances. In other words, whatever colours you use, there will always be at least one monochromatic triangle ( that is, a triangle all of whose edges have the same color ). - -Choose any one vertex; call it P. There are five edges leaving P. They are each coloured red or blue. The pigeonhole principle says that at least three of them must be of the same colour; for if there are less than three of one colour, say red, then there are at least three that are blue. - -Let A, B, C be the other ends of these three edges, all of the same colour, say blue. If any one of AB, BC, CA is blue, then that edge together with the two edges from P to the edge's endpoints forms a blue triangle. If none of AB, BC, CA is blue, then all three edges are red and we have a red triangle, namely, ABC. - -The utter simplicity of this argument, which so powerfully produces a very interesting conclusion, is what makes the theorem appealing. In 1930, in a paper entitled 'On a Problem in Formal Logic,' Frank P. Ramsey proved a very general theorem (now known as Ramsey's theorem) of which this theorem is a simple case. This theorem of Ramsey forms the foundation of the area known as Ramsey theory in combinatorics. - -The conclusion to the theorem does not hold if we replace the party of six people by a party of less than six. To show this, we give a coloring of K5 with red and blue that does not contain a triangle with all edges the same color. We draw K5 as a pentagon surrounding a star (a pentagram). 
We color the edges of the pentagon red and the edges of the star blue.

Thus, 6 is the smallest number for which we can claim the conclusion of the theorem. In Ramsey theory, we write this fact as:
$$
R(3,3;2) = 6.
$$

diff --git a/wiki/wikipedia/2433.txt b/wiki/wikipedia/2433.txt
deleted file mode 100644
index 75785c7fff8893c5b772c61e3214db929b764244..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2433.txt
+++ /dev/null
@@ -1,6 +0,0 @@
In queueing theory, a discipline within the mathematical theory of probability, Ross's conjecture gives a lower bound for the average waiting-time experienced by a customer when arrivals to the queue do not follow the simplest model for random arrivals. It was proposed by Sheldon M. Ross in 1978 and proved in 1981 by Tomasz Rolski. The conjecture states that the average amount of time that a customer spends waiting in a queue is greater than or equal to
$$
\frac{\lambda \operatorname E (S^2)}{2 \{1-\lambda \operatorname E (S) \}}
$$

where S is the service time and λ is the average arrival rate (in the limit as the length of the time period increases).

diff --git a/wiki/wikipedia/2434.txt b/wiki/wikipedia/2434.txt
deleted file mode 100644
index acb8d361160c2dd998650afc93ab8cec7d9f798f..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2434.txt
+++ /dev/null
@@ -1,95 +0,0 @@
In mathematics, the Lebesgue differentiation theorem is a theorem of real analysis, which states that for almost every point, the value of an integrable function is the limit of infinitesimal averages taken about the point. The theorem is named for Henri Lebesgue.

For a Lebesgue integrable real or complex-valued function f on Rn, the indefinite integral is a set function which maps a measurable set A to the Lebesgue integral of $f \cdot \mathbf{1}_A$, where $\mathbf{1}_{A}$ denotes the characteristic function of the set A. It is usually written
$$
A \mapsto \int_{A}f\ \mathrm{d}\lambda,
$$

with λ the n-dimensional Lebesgue measure.

The derivative of this integral at x is defined to be
$$
\lim_{B \rightarrow x} \frac{1}{|B|} \int_{B}f\ \mathrm{d}\lambda,
$$

where |B| denotes the volume (i.e., the Lebesgue measure) of a ball B centered at x, and B → x means that the diameter of B tends to 0.

The Lebesgue differentiation theorem states that this derivative exists and is equal to f(x) at almost every point x ∈ Rn. In fact a slightly stronger statement is true. Note that:
$$
\left|\frac{1}{|B|} \int_{B}f(y) \mathrm{d}\lambda(y) - f(x)\right| = \left|\frac{1}{|B|} \int_{B}(f(y) - f(x)) \mathrm{d}\lambda(y)\right| \le \frac{1}{|B|} \int_{B}|f(y) -f(x)| \mathrm{d}\lambda(y).
$$

The stronger assertion is that the right hand side tends to zero for almost every point x. The points x for which this is true are called the Lebesgue points of f.

A more general version also holds. One may replace the balls B by a family $\mathcal{V}$ of sets U of bounded eccentricity. This means that there exists some fixed c > 0 such that each set U from the family is contained in a ball B with $|U| \ge c |B|$. It is also assumed that every point x ∈ Rn is contained in arbitrarily small sets from $\mathcal{V}$. When these sets shrink to x, the same result holds: for almost every point x,
$$
f(x) = \lim_{U \rightarrow x, U \in \mathcal{V}} \frac{1}{|U|} \int_U f\ \mathrm{d}\lambda.
$$

The family of cubes is an example of such a family $\mathcal{V}$, as is the family $\mathcal{V}(m)$ of rectangles in R2 such that the ratio of sides stays between $m^{-1}$ and $m$, for some fixed m ≥ 1. If an arbitrary norm is given on Rn, the family of balls for the metric associated to the norm is another example.

The one-dimensional case was proved earlier by Lebesgue. If f is integrable on the real line, the function
$$
F(x) = \int_{(-\infty,x]} f(t)\ \mathrm{d} t
$$

is almost everywhere differentiable, with $F'(x) = f(x).$ Were $F$ defined by a Riemann integral this would be essentially the fundamental theorem of calculus, but Lebesgue proved that it remains true when using the Lebesgue integral.

The theorem in its stronger form—that almost every point is a Lebesgue point of a locally integrable function f—can be proved as a consequence of the weak-L1 estimates for the Hardy–Littlewood maximal function. The proof below follows the standard treatment that can be found in Benedetto, , and Rudin.

Since the statement is local in character, f can be assumed to be zero outside some ball of finite radius and hence integrable. It is then sufficient to prove that the set
$$
E_\alpha = \Bigl\{ x \in \mathbf{R}^n : \limsup_{|B| \to 0,\ x \in B} \Bigl|\frac{1}{|B|}\int_B f(y)\ \mathrm{d}y - f(x)\Bigr| > 2\alpha \Bigr\}
$$

has measure 0 for all α > 0.

Let ε > 0 be given. Using the density of continuous functions of compact support in L1(Rn), one can find such a function g satisfying
$$
\|f - g\|_{L^1} = \int_{\mathbf{R}^n} |f(x) - g(x)| \mathrm{d}x < \varepsilon.
$$

It is then helpful to rewrite the main difference as
$$
\frac{1}{|B|} \int_B f(y) \mathrm{d}y - f(x) = \Bigl(\frac{1}{|B|} \int_B \bigl(f(y) - g(y)\bigr) \mathrm{d}y \Bigr) + \Bigl(\frac{1}{|B|}\int_B g(y) \mathrm{d}y - g(x) \Bigr)+ \bigl(g(x) - f(x)\bigr).
$$

The first term can be bounded by the value at x of the maximal function for f − g, denoted here by $(f-g)^*(x)$:
$$
\frac{1}{|B|} \int_B |f(y) - g(y)| \mathrm{d}y \leq \sup_{r>0} \frac{1}{|B_r(x)|}\int_{B_r(x)} |f(y)-g(y)| \mathrm{d}y = (f-g)^*(x).
$$

The second term disappears in the limit since g is a continuous function, and the third term is bounded by |f(x) − g(x)|. For the absolute value of the original difference to be greater than 2α in the limit, at least one of the first or third terms must be greater than α in absolute value.
However, the estimate on the Hardy–Littlewood function says that -$$ - \Bigl| \left \{ x : (f-g)^*(x) > \alpha \right \} \Bigr| \leq \frac{A_n}{\alpha} \|f - g\|_{L^1} < \frac{A_n}{\alpha} \varepsilon, -$$ - -for some constant An depending only upon the dimension n. The Markov inequality (also called Tchebyshev's inequality) says that -$$ - \Bigl|\left\{ x : |f(x) - g(x)| > \alpha \right \}\Bigr| \leq \frac{1}{\alpha} \|f - g\|_{L^1} < \frac{1}{\alpha} \varepsilon -$$ - -whence -$$ - |E_\alpha| \leq \frac{A_n+1}{\alpha} \varepsilon. -$$ - -Since ε was arbitrary, it can be taken to be arbitrarily small, and the theorem follows. - -The Vitali covering lemma is vital to the proof of this theorem; its role lies in proving the estimate for the Hardy–Littlewood maximal function. - -The theorem also holds if balls are replaced, in the definition of the derivative, by families of sets with diameter tending to zero satisfying the Lebesgue's regularity condition, defined above as family of sets with bounded eccentricity. This follows since the same substitution can be made in the statement of the Vitali covering lemma. - -This is an analogue, and a generalization, of the fundamental theorem of calculus, which equates a Riemann integrable function and the derivative of its (indefinite) integral. It is also possible to show a converse – that every differentiable function is equal to the integral of its derivative, but this requires a Henstock–Kurzweil integral in order to be able to integrate an arbitrary derivative. - -A special case of the Lebesgue differentiation theorem is the Lebesgue density theorem, which is equivalent to the differentiation theorem for characteristic functions of measurable sets. The density theorem is usually proved using a simpler method (e.g. see Measure and Category). - -This theorem is also true for every finite Borel measure on Rn instead of Lebesgue measure (a proof can be found in e.g. ). More generally, it is true of any finite Borel measure on a separable metric space such that at least one of the following holds: - -* the metric space is a Riemannian manifold, - -* the metric space is a locally compact ultrametric space, - -* the measure is doubling. - -A proof of these results can be found in sections 2.8–2.9 of (Federer 1969). diff --git a/wiki/wikipedia/2435.txt b/wiki/wikipedia/2435.txt deleted file mode 100644 index 8e507caf2791e1019d8e6f30cd8d559b82c405f0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2435.txt +++ /dev/null @@ -1,177 +0,0 @@ -In number theory, a perfect number is a positive integer that is equal to the sum of its positive divisors, excluding the number itself. For instance, 6 has divisors 1, 2 and 3 (excluding itself), and 1 + 2 + 3 = 6, so 6 is a perfect number. - -The sum of divisors of a number, excluding the number itself, is called its aliquot sum, so a perfect number is one that is equal to its aliquot sum. Equivalently, a perfect number is a number that is half the sum of all of its positive divisors including itself; in symbols, σ1(n) = 2n where σ1 is the sum-of-divisors function. For instance, 28 is perfect as 1 + 2 + 4 + 7 + 14 + 28 = 56 = 2 × 28. - -This definition is ancient, appearing as early as Euclid's Elements (VII.22) where it is called (perfect, ideal, or complete number). Euclid also proved a formation rule (IX.36) whereby $q(q+1)/2$ is an even perfect number whenever $q$ is a prime of the form $2^p -1$ for positive integer $p$—what is now called a Mersenne prime. 
Two millennia later, Leonhard Euler proved that all even perfect numbers are of this form. This is known as the Euclid–Euler theorem.

It is not known whether there are any odd perfect numbers, nor whether infinitely many perfect numbers exist. The first few perfect numbers are 6, 28, 496 and 8128.

In about 300 BC Euclid showed that if $2^p-1$ is prime then $2^{p-1}(2^p-1)$ is perfect.

The first four perfect numbers were the only ones known to early Greek mathematics, and the mathematician Nicomachus noted 8128 as early as around AD 100. In modern language, Nicomachus states without proof that every perfect number is of the form $2^{n-1}(2^n-1)$ where $2^n-1$ is prime. He seems to be unaware that n itself has to be prime. He also says (wrongly) that the perfect numbers end in 6 or 8 alternately. (The first 5 perfect numbers end with digits 6, 8, 6, 8, 6; but the sixth also ends in 6.) Philo of Alexandria in his first-century book "On the creation" mentions perfect numbers, claiming that the world was created in 6 days and the moon orbits in 28 days because 6 and 28 are perfect. Philo is followed by Origen, and by Didymus the Blind, who adds the observation that there are only four perfect numbers that are less than 10,000. (Commentary on Genesis 1. 14–19). St Augustine defines perfect numbers in City of God (Book XI, Chapter 30) in the early 5th century AD, repeating the claim that God created the world in 6 days because 6 is the smallest perfect number. The Egyptian mathematician Ismail ibn Fallūs (1194–1252) mentioned the next three perfect numbers (33,550,336; 8,589,869,056; and 137,438,691,328) and listed a few more which are now known to be incorrect. The first known European mention of the fifth perfect number is a manuscript written between 1456 and 1461 by an unknown mathematician. In 1588, the Italian mathematician Pietro Cataldi identified the sixth (8,589,869,056) and the seventh (137,438,691,328) perfect numbers, and also proved that every perfect number obtained from Euclid's rule ends with a 6 or an 8.

Euclid proved that $2^{p-1}(2^p-1)$ is an even perfect number whenever $2^p-1$ is prime (Elements, Prop. IX.36).

For example, the first four perfect numbers are generated by the formula $2^{p-1}(2^p-1)$, with p a prime number, as follows:

for p = 2: $2^1(2^2-1) = 2 \times 3 = 6$

for p = 3: $2^2(2^3-1) = 4 \times 7 = 28$

for p = 5: $2^4(2^5-1) = 16 \times 31 = 496$

for p = 7: $2^6(2^7-1) = 64 \times 127 = 8128.$

Prime numbers of the form $2^p-1$ are known as Mersenne primes, after the seventeenth-century monk Marin Mersenne, who studied number theory and perfect numbers. For $2^p-1$ to be prime, it is necessary that p itself be prime. However, not all numbers of the form $2^p-1$ with a prime p are prime; for example, $2^{11}-1 = 2047 = 23 \times 89$ is not a prime number. In fact, Mersenne primes are very rare—of the 2,610,944 prime numbers p up to 43,112,609, $2^p-1$ is prime for only 47 of them.

Although Nicomachus had stated (without proof) that all perfect numbers were of the form $2^{n-1}\left(2^n - 1\right)$ where $2^n - 1$ is prime (though he stated this somewhat differently), Ibn al-Haytham (Alhazen) circa AD 1000 conjectured only that every even perfect number is of that form. It was not until the 18th century that Leonhard Euler proved that the formula $2^{p-1}(2^p-1)$ will yield all the even perfect numbers. Thus, there is a one-to-one correspondence between even perfect numbers and Mersenne primes; each Mersenne prime generates one even perfect number, and vice versa.
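To see this correspondence in action, here is a small Python sketch (added for illustration; it is not part of the original article) that recovers the first few even perfect numbers by searching for Mersenne primes:

```
# Sketch: generate even perfect numbers 2^(p-1) * (2^p - 1) from Mersenne
# primes 2^p - 1, illustrating the correspondence described above.

def is_prime(m):
    """Trial division; adequate for the small numbers tested here."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

for p in range(2, 14):
    if is_prime(p) and is_prime(2**p - 1):   # p prime and 2^p - 1 a Mersenne prime
        print(p, 2**(p - 1) * (2**p - 1))    # prints 6, 28, 496, 8128, 33550336
```

Note how p = 11 produces no output: $2^{11}-1 = 2047 = 23 \times 89$ is not prime, as remarked above.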
This result is often referred to as the Euclid–Euler theorem.

An exhaustive search by the GIMPS distributed computing project has shown that the first 48 even perfect numbers are $2^{p-1}(2^p-1)$ for

p = 2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607, 1279, 2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213, 19937, 21701, 23209, 44497, 86243, 110503, 132049, 216091, 756839, 859433, 1257787, 1398269, 2976221, 3021377, 6972593, 13466917, 20996011, 24036583, 25964951, 30402457, 32582657, 37156667, 42643801, 43112609 and 57885161.

Three higher perfect numbers have also been discovered, namely those for which p = 74207281, 77232917, and 82589933, though there may be others within this range. As of December 2018, 51 Mersenne primes are known, and therefore 51 even perfect numbers (the largest of which is $2^{82589932} \times (2^{82589933}-1)$ with 49,724,095 digits). It is not known whether there are infinitely many perfect numbers, nor whether there are infinitely many Mersenne primes.

As well as having the form $2^{p-1}(2^p-1)$, each even perfect number is the $(2^p-1)$th triangular number (and hence equal to the sum of the integers from 1 to $2^p-1$) and the $2^{p-1}$th hexagonal number. Furthermore, each even perfect number except for 6 is the $((2^p+1)/3)$th centered nonagonal number and is equal to the sum of the first $2^{(p-1)/2}$ odd cubes:

\begin{align}
6 = 2^1\left(2^2 - 1\right) & = 1 + 2 + 3, \\[8pt]
28 = 2^2\left(2^3 - 1\right) & = 1 + 2 + 3 + 4 + 5 + 6 + 7 = 1^3 + 3^3, \\[8pt]
496 = 2^4\left(2^5 - 1\right) & = 1 + 2 + 3 + \cdots + 29 + 30 + 31 \\
& = 1^3 + 3^3 + 5^3 + 7^3, \\[8pt]
8128 = 2^6\left(2^7 - 1\right) & = 1 + 2 + 3 + \cdots + 125 + 126 + 127 \\
& = 1^3 + 3^3 + 5^3 + 7^3 + 9^3 + 11^3 + 13^3 + 15^3, \\[8pt]
33550336 = 2^{12}\left(2^{13} - 1\right) & = 1 + 2 + 3 + \cdots + 8189 + 8190 + 8191 \\
& = 1^3 + 3^3 + 5^3 + \cdots + 123^3 + 125^3 + 127^3.
\end{align}

Even perfect numbers (except 6) are of the form
$$
T_{2^p - 1} = 1 + \frac{\left(2^p - 2\right) \times \left(2^p + 1\right)}{2} = 1 + 9 \times T_{\left(2^p - 2\right)/3}
$$

with each resulting triangular number $T_7 = 28$, $T_{31} = 496$, $T_{127} = 8128$ (after subtracting 1 from the perfect number and dividing the result by 9) ending in 3 or 5, the sequence starting with $T_2 = 3$, $T_{10} = 55$, $T_{42} = 903$, $T_{2730} = 3727815$, ... This can be reformulated as follows: adding the digits of any even perfect number (except 6), then adding the digits of the resulting number, and repeating this process until a single digit (called the digital root) is obtained, always produces the number 1. For example, the digital root of 8128 is 1, because 8 + 1 + 2 + 8 = 19, 1 + 9 = 10, and 1 + 0 = 1. This works with all perfect numbers $2^{p-1}(2^p-1)$ with odd prime p and, in fact, with all numbers of the form $2^{m-1}(2^m-1)$ for odd integer (not necessarily prime) m.

Owing to their form, $2^{p-1}(2^p-1)$, every even perfect number is represented in binary form as p ones followed by p − 1 zeros; for example,
$$
6_{10} = 2^2 + 2^1 = 110_2,
$$
$$
28_{10} = 2^4 + 2^3 + 2^2 = 11100_2,
$$
$$
496_{10} = 2^8 + 2^7 + 2^6 + 2^5 + 2^4 = 111110000_2,
$$
and
$$
8128_{10} = 2^{12} + 2^{11} + 2^{10} + 2^9 + 2^8 + 2^7 + 2^6 = 1111111000000_2.
$$

Thus every even perfect number is a pernicious number.

Every even perfect number is also a practical number (cf. Related concepts).

==Odd perfect numbers==

It is unknown whether any odd perfect numbers exist, though various results have been obtained.
In 1496, Jacques Lefèvre stated that Euclid's rule gives all perfect numbers, thus implying that no odd perfect number exists. Euler stated: "Whether ... there are any odd perfect numbers is a most difficult question". More recently, Carl Pomerance has presented a heuristic argument suggesting that indeed no odd perfect number should exist. All perfect numbers are also Ore's harmonic numbers, and it has been conjectured as well that there are no odd Ore's harmonic numbers other than 1.

Any odd perfect number N must satisfy the following conditions:

* $N > 10^{1500}$.

* N is not divisible by 105.

* N is of the form N ≡ 1 (mod 12) or N ≡ 117 (mod 468) or N ≡ 81 (mod 324).

* N is of the form
$$
N=q^{\alpha} p_1^{2e_1} \cdots p_k^{2e_k},
$$

where:

* q, p1, ..., pk are distinct odd primes (Euler).

* q ≡ α ≡ 1 (mod 4) (Euler).

* The smallest prime factor of N is at most $\frac{k-1}{2}.$

* Either $q^{\alpha} > 10^{62}$, or $p_j^{2e_j} > 10^{62}$ for some j.

* $\alpha + 2e_1 + 2e_2 + 2e_3 + \cdots + 2e_k \geq \frac{66k-191}{25}$.

* $qp_1p_2p_3 \cdots p_k < 2N^{\frac{17}{26}}$.

* The largest prime factor of N is greater than $10^8$ and less than $(3N)^{1/3}.$

* The second largest prime factor is greater than $10^4$, and is less than $(2N)^{1/5}$.

* The third largest prime factor is greater than 100, and less than $(2N)^{\frac16}.$

* N has at least 101 prime factors and at least 10 distinct prime factors. If 3 is not one of the factors of N, then N has at least 12 distinct prime factors.

Furthermore, several minor results are known about the exponents e1, ..., ek.

* Not all ei ≡ 1 (mod 3).

* Not all ei ≡ 2 (mod 5).

* If all ei ≡ 1 (mod 3) or 2 (mod 5), then the smallest prime factor of N must lie between $10^8$ and $10^{1000}$.

* (e1, ..., ek) ≠ (1, ..., 1, 3), (1, ..., 1, 5), (1, ..., 1, 6).

* If e1 = ... = ek = e, then

** e cannot be 3, 5, 24, 6, 8, 11, 14 or 18.

In 1888, Sylvester stated that the existence of an odd perfect number would be "little short of a miracle".

Many of the properties proved about odd perfect numbers also apply to Descartes numbers, and Pace Nielsen has suggested that sufficient study of those numbers may lead to a proof that no odd perfect numbers exist.

All even perfect numbers have a very precise form; odd perfect numbers either do not exist or are rare. There are a number of results on perfect numbers that are actually quite easy to prove but nevertheless superficially impressive; some of them also come under Richard Guy's strong law of small numbers:

* The only even perfect number of the form $x^3 + 1$ is 28.

* 28 is also the only even perfect number that is a sum of two positive cubes of integers.

* The reciprocals of the divisors of a perfect number N must add up to 2 (to get this, take the definition of a perfect number, $\sigma_1(n) = 2n$, and divide both sides by n):

** For 6, we have $1/6 + 1/3 + 1/2 + 1/1 = 2$;

** For 28, we have $1/28 + 1/14 + 1/7 + 1/4 + 1/2 + 1/1 = 2$, etc.

* The number of divisors of a perfect number (whether even or odd) must be even, because N cannot be a perfect square.

** From these two results it follows that every perfect number is an Ore's harmonic number.

* The even perfect numbers are not trapezoidal numbers; that is, they cannot be represented as the difference of two positive non-consecutive triangular numbers.
There are only three types of non-trapezoidal numbers: even perfect numbers, powers of two, and the numbers of the form $2^{n-1}(2^n+1)$ formed as the product of a Fermat prime $2^n+1$ with a power of two in a similar way to the construction of even perfect numbers from Mersenne primes. - -* The number of perfect numbers less than n is less than $c\sqrt{n}$, where c > 0 is a constant. In fact it is $o(\sqrt{n})$, using little-o notation. - -* Every even perfect number ends in 6 or 28, base ten; and, with the only exception of 6, ends in 1, base 9. Therefore, in particular the digital root of every even perfect number other than 6 is 1. - -* The only square-free perfect number is 6. - -The sum of proper divisors gives various other kinds of numbers. Numbers where the sum is less than the number itself are called deficient, and where it is greater than the number, abundant. These terms, together with perfect itself, come from Greek numerology. A pair of numbers which are the sum of each other's proper divisors are called amicable, and larger cycles of numbers are called sociable. A positive integer such that every smaller positive integer is a sum of distinct divisors of it is a practical number. - -By definition, a perfect number is a fixed point of the restricted divisor function 1=s(n) = σ(n) − n, and the aliquot sequence associated with a perfect number is a constant sequence. All perfect numbers are also $\mathcal{S}$-perfect numbers, or Granville numbers. - -A semiperfect number is a natural number that is equal to the sum of all or some of its proper divisors. A semiperfect number that is equal to the sum of all its proper divisors is a perfect number. Most abundant numbers are also semiperfect; abundant numbers which are not semiperfect are called weird numbers. diff --git a/wiki/wikipedia/2436.txt b/wiki/wikipedia/2436.txt deleted file mode 100644 index 363732d986a3356c8f90d7a62863adcf455effa3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2436.txt +++ /dev/null @@ -1,56 +0,0 @@ -Hansen's problem is a problem in planar surveying, named after the astronomer Peter Andreas Hansen (1795-1874), who worked on the geodetic survey of Denmark. There are two known points A and B, and two unknown points P1 and P2. From P1 and P2 an observer measures the angles made by the lines of sight to each of the other three points. The problem is to find the positions of P1 and P2. See figure; the angles measured are (α1, β1, α2, β2). - -Since it involves observations of angles made at unknown points, the problem is an example of resection (as opposed to intersection). - -Define the following angles: - -γ = P1AP2, δ = P1BP2, φ = P2AB, ψ = P1BA. - -As a first step we will solve for φ and ψ. - -The sum of these two unknown angles is equal to the sum of β1 and β2, yielding the equation -$$ -\phi+\psi=\beta_1+\beta_2. -$$ - -A second equation can be found more laboriously, as follows. The law of sines yields -$$ -\frac{AB}{P_2 B}=\frac{\sin \alpha_2}{\sin \phi} -$$ and -$$ -\frac{P_2 B}{P_1 P_2}=\frac{\sin \beta_1}{\sin \delta}. -$$ - -Combining these, we get -$$ -\frac{AB}{P_1 P_2}=\frac{\sin \alpha_2 \sin \beta_1}{\sin \phi \sin \delta}. -$$ - -Entirely analogous reasoning on the other side yields -$$ -\frac{AB}{P_1 P_2}=\frac{\sin \alpha_1 \sin \beta_2}{\sin \psi \sin \gamma}. -$$ - -Setting these two equal gives -$$ -\frac{\sin \phi}{\sin \psi}=\frac{\sin \gamma \sin \alpha_2 \sin \beta_1}{\sin \delta \sin \alpha_1 \sin \beta_2} = k. 
diff --git a/wiki/wikipedia/2437.txt b/wiki/wikipedia/2437.txt
deleted file mode 100644
index 4e8231103a57aea0329d2217813416d2a1609453..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2437.txt
+++ /dev/null
@@ -1,181 +0,0 @@
-The quadratic knapsack problem (QKP), first introduced in the 19th century, is an extension of the knapsack problem that allows for quadratic terms in the objective function: Given a set of items, each with a weight, a value, and an extra profit that can be earned if two items are selected, determine the number of items to include in a collection without exceeding the capacity of the knapsack, so as to maximize the overall profit. Usually, quadratic knapsack problems come with a restriction on the number of copies of each kind of item: either 0, or 1. This special type of QKP forms the 0-1 quadratic knapsack problem, which was first discussed by Gallo et al.
-
-The 0-1 quadratic knapsack problem is a variation of knapsack problems, combining the features of the unbounded knapsack problem, the 0-1 knapsack problem and the quadratic knapsack problem.
-
-Specifically, the 0–1 quadratic knapsack problem has the following form:
-$$
-\text{maximize } \left\{\sum_{i=1}^n p_i x_i + \sum_{i=1}^n\sum_{j=1,i\neq j}^n P_{ij}x_ix_j: x\in X, x \text{ binary} \right\}
-$$
-$$
- \text{subject to } X\equiv\left\{x \in \{0,1\}^n: \sum_{i=1}^n w_i x_i \leq W; x_i \in \{0,1\} \text{ for } i=1,\ldots, n \right\}.
-$$
-
-While the binary variable xi represents whether item i is included in the knapsack, $p_i$ is the profit earned by selecting item i and $P_{ij}$ is the profit achieved if both item i and item j are added.
-
-Informally, the problem is to maximize the sum of the values of the items in the knapsack so that the sum of the weights is less than or equal to the knapsack's capacity.
-
-As one might expect, QKP has a wide range of applications including telecommunication, transportation networks, computer science and economics. In fact, Witzgall first discussed QKP when selecting sites for satellite stations in order to maximize the global traffic with respect to a budget constraint. A similar model applies to problems like considering the location of airports, railway stations, or freight handling terminals.
-Applications of QKP in the field of computer science became more common after the early days: the compiler design problem, the clique problem, and very large scale integration (VLSI) design. Additionally, pricing problems appear to be an application of QKP as described by Johnson et al.
-
-In general, the decision version of the knapsack problem (Can a value of at least V be achieved under a restriction of a certain capacity W?) is NP-complete. Thus, a given solution can be verified in polynomial time, while no known algorithm can find an optimal solution in polynomial time.
-
-The optimization knapsack problem is NP-hard and there is no known algorithm that can solve the problem in polynomial time.
-
-As a particular variation of the knapsack problem, the 0-1 quadratic knapsack problem is also NP-hard.
-
-While no efficient exact algorithm is available in the literature, there is a pseudo-polynomial time algorithm based on dynamic programming, as well as heuristic algorithms that can always generate "good" solutions.
-
-While the knapsack problem is one of the most commonly solved operations research (OR) problems, there are limited efficient algorithms that can solve 0-1 quadratic knapsack problems. Available algorithms include but are not limited to brute force, linearization, and convex reformulation. Just like for other NP-hard problems, it is usually enough to find a workable solution even if it is not necessarily optimal. Heuristic algorithms based on the greedy algorithm and dynamic programming can give a relatively "good" solution to the 0-1 QKP efficiently.
-
-The brute-force algorithm to solve this problem is to identify all possible subsets of the items without exceeding the capacity and select the one with the optimal value. The pseudo-code is provided as follows:
-
-// Input:
-// Profits (stored in array p)
-// Quadratic profits (stored in matrix P)
-// Weights (stored in array w)
-// Number of items (n)
-// Knapsack capacity (W)
-
-int max = 0
-for all subsets S of the n items do:
-    int value = 0, weight = 0
-    for i from 0 to S.size-1 do:
-        value = value + p[S[i]]
-        weight = weight + w[S[i]]
-        for j from i+1 to S.size-1 do:
-            value = value + P[S[i]][S[j]] + P[S[j]][S[i]]
-    if weight <= W and value > max then:
-        max = value
-
-Given n items, there will be at most $2^n$ subsets and for each legal candidate set, the running time of computing the value earned is $O(n^2)$. Thus, the efficiency class of the brute-force algorithm is $O(2^n n^2)$, being exponential.
-
-Problems of this form are difficult to solve directly using standard solvers and thus people try to reformulate them as linear programs using auxiliary variables and constraints so that the problem can be readily solved using commercial packages. Two well-known linearization approaches for the 0-1 QKP are the standard linearization and Glover's linearization.
-
-The first one is the standard linearization strategy, as shown below:
-
-LP1: maximize
-$$
-\sum_{i=1}^n p_i x_i + \sum_{i=1}^n \sum_{j>i} (P_{ij} + P_{ji}) z_{ij}.
-$$
-
-subject to
-$$
-z_{ij}\leq x_i, \quad z_{ij}\leq x_j
-$$ for all $(i,j),\ i<j$
-$$
-x_i + x_j - 1 \leq z_{ij}, \quad z_{ij} \geq 0
-$$ for all $(i,j),\ i<j$
-$$
-x \in X, x \text{ binary}
-$$
-
-In this formulation, we have replaced each product $x_ix_j$ with a continuous variable $z_{ij}$. This reformulates the QKP into a 0-1 linear program, which we can then solve optimally using standard solvers.
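As a sanity check on this construction, the following sketch (illustrative only, and dependency-free) enumerates all binary vectors of a small random instance and verifies that $z_{ij} = x_i x_j$ satisfies the LP1 constraints and reproduces the quadratic objective:

```
import itertools, random

n = 5
random.seed(0)
p = [random.randint(1, 100) for _ in range(n)]
P = [[random.randint(1, 100) if i != j else 0 for j in range(n)] for i in range(n)]

for x in itertools.product([0, 1], repeat=n):
    quad = sum(p[i] * x[i] for i in range(n)) \
         + sum(P[i][j] * x[i] * x[j] for i in range(n) for j in range(n) if i != j)
    # setting z_ij = x_i * x_j for i < j linearizes the objective exactly
    z = {(i, j): x[i] * x[j] for i in range(n) for j in range(i + 1, n)}
    lin = sum(p[i] * x[i] for i in range(n)) \
        + sum((P[i][j] + P[j][i]) * z[i, j] for (i, j) in z)
    assert lin == quad
    for (i, j), zij in z.items():
        # the standard linearization constraints hold at every binary point
        assert zij <= x[i] and zij <= x[j] and x[i] + x[j] - 1 <= zij and zij >= 0
```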
-The second reformulation, which is more concise, is called Glover's linearization. The Glover formulation is shown below, where Li and Ui are lower and upper bounds on $\sum_{j=1,i\neq j}^n P_{ij}x_j$, respectively:
-
-LP2: maximize
-$$
-\sum_{i=1}^n p_i x_i + \sum_{i=1}^nz_i
-$$
-
-subject to
-$$
-L_ix_i\leq z_i \leq U_ix_i
-$$ for $i=1,\ldots, n$
-$$
-\sum_{j=1,i\neq j}^nP_{ij} x_j - U_i(1-x_i)\leq z_i\leq \sum_{j=1,i\neq j}^nP_{ij} x_j-L_i(1-x_i)
-$$ for $i=1,\ldots, n$
-$$
-x \in X, x \text{ binary}
-$$
-
-In the formulation LP2, we have replaced the expression $\sum_{j=1,i\neq j}^n P_{ij} x_i x_j$ with a continuous variable zi. Similarly, we can use standard solvers to solve the linearized problem. Note that Glover's linearization only includes $n$ auxiliary variables with $2n$ constraints, while the standard linearization requires ${n \choose 2}$ auxiliary variables and $3{n \choose 2}$ constraints to achieve linearity.
-
-Note that nonlinear programs are hard to solve due to the possibility of being stuck at a local maximum. However, when the program is convex, any local maximum is the global maximum. A convex program is to maximize a concave function or minimize a convex function on a convex set. A set S is convex if $\forall u,v\in S$, $\lambda u+(1-\lambda)v\in S$ where $\lambda \in [0,1]$. That is to say, any point between two points in the set must also be an element of the set. A function f is concave if $f[\lambda u+(1-\lambda)v]\geq \lambda f(u)+(1-\lambda)f(v)$, and convex if the reverse inequality holds. Informally, a function is concave if the line segment connecting two points on the graph lies below or on the graph, while a function is convex if the segment lies above or on the graph. Thus, by rewriting the objective function into an equivalent concave function, we can reformulate the program to be convex, which can be solved using optimization packages.
-
-The objective function can be written as $p^Tx+x^TPx$ using linear algebra notation. We need the quadratic form to be concave in order to obtain a convex program. In this case, we modify the objective function to be $p^Tx+x^TPx-\sum_{i=1}^n \left(\sum_{j=1,j\neq i}^n|P_{ij}|\right)(x_i^2-x_i)$, which agrees with the original objective on binary vectors since $x_i^2 = x_i$ there; by applying results from linear algebra, the modified quadratic form is given by a diagonally dominant matrix with non-positive diagonal, which is negative semi-definite, so the objective is concave. This reformulation can be solved using a standard commercial mixed-integer quadratic package.
-
-George Dantzig proposed a greedy approximation algorithm for the unbounded knapsack problem which can also be adapted to the 0-1 QKP. The algorithm consists of two phases: identify an initial solution and improve it; a sketch of the first phase follows below.
-
-First compute, for each item, the total objective contribution realizable by selecting it, $p_i+\sum_{j\neq i}^n P_{ij}$, and sort the items in decreasing order of the potential value per unit of weight, $(p_i+\sum_{j\neq i}^nP_{ij})/w_i$. Then select the items with the maximal value-weight ratio into the knapsack until there is no space for more, which forms the initial solution.
-
-Starting with the initial solution, the improvement is conducted by pairwise exchange. For each item in the solution set, identify the items not in the set where swapping results in an improved objective. Select the pair with maximal improvement and swap. There are also possibilities that removing one item from the set or adding one to the set will produce the greatest contribution. Repeat until there is no improving swap.
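A minimal Python sketch of the construction phase just described (the function name is illustrative; the improvement phase by pairwise exchange is omitted for brevity):

```
def greedy_qkp_initial(p, P, w, W):
    """Phase one of the greedy heuristic: sort items by potential value per
    unit of weight, then fill the knapsack while capacity remains."""
    n = len(p)
    # total objective contribution realizable by selecting item i
    contrib = [p[i] + sum(P[i][j] for j in range(n) if j != i) for i in range(n)]
    order = sorted(range(n), key=lambda i: contrib[i] / w[i], reverse=True)
    chosen, weight = [], 0
    for i in order:
        if weight + w[i] <= W:
            chosen.append(i)
            weight += w[i]
    return chosen  # phase two would now try pairwise exchanges on this set
```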
- -Quadknap is an exact branch-and-bound algorithm proposed by Caprara et al., where upper bounds are computed by considering a Lagrangian relaxation which approximate a difficult problem by a simpler problem and penalizes violations of constraints using Lagrange multiplier to impost a cost on violations. Quadknap releases the integer requirement when computing the upper bounds. Suboptimal Lagrangian multipliers are derived from sub-gradient optimization and provide a convenient reformulation of the problem. This algorithm is quite efficient since Lagrangian multipliers are stable, and suitable data structures are adopted to compute a tight upper bound in linear expected time in the number of variables. This algorithm was reported to generate exact solutions of instances with up to 400 binary variables, i.e., significantly larger than those solvable by other approaches. The code was written in C and is available online. - -While dynamic programming can generate optimal solutions to knapsack problems, dynamic programming approaches for QKP can only yield a relatively good quality solution, which can serve as a lower bound to the optimal objectives. While it runs in pseudo-polynomial time, it has a large memory requirement. - -For simplicity, assume all weights are non-negative. The objective is to maximize total value subject to the constraint: that the total weight is less than or equal to W. Then for each $w\leq W$, define $f(m,w)$ to be the value of the most profitable packing of the first m items found with a total weight of w. That is, let -$$ -f(m,w)=\max\left\{\sum_{i=1}^m p_i x_i + \sum_{i=1}^m\sum_{j=1,i\neq j}^m P_{ij}x_ix_j: \sum_{i=1}^m w_i=w,1\leq i\leq m \right\}. -$$ - -Then, $f(m,w)$ is the solution to the problem. Note that by dynamic programming, the solution to a problem arises from the solution to its smaller sub-problems. In this particular case, start with the first item and try to find a better packing by considering adding items with an expected weight of 𝑤. If the weight of the item to be added exceeds 𝑤, then $f(m,w)$ is the same with $f(m-1,w)$. Given that the item has a smaller weight compared with the desired weight, $f(m,w)$ is either the same as $f(m-1,w)$ if adding makes no contribution, or the same as the solution for a knapsack with smaller capacity, specifically one with the capacity reduced by the weight of that chosen item, plus the value of one correct item, i.e. $f(m-1,w-w_m)+p_m+\sum_{i=1}^{m-1}P_{im}x_i$. To conclude, we have that - -f(m,w)= - -\begin{cases} - -\max f(m-1,w),f(m-1,w-w_m)+p_m+\sum_{i=1}^{m-1}P_{im}x_i & \text{if } w_m\leq w\\ - -f(m-1,w) & \text{otherwise} - -\end{cases} - - - -Note on efficiency class: Clearly the running time of this algorithm is $O(Wn^2)$, based on the nested loop and the computation of the profit of new packing. This does not contradict the fact the QKP is NP-hard since W is not polynomial in the length of the input. - -Note that the previous algorithm requires $O(Wn^2)$ space for storing the current packing of items for all m,w, which may not be able to handle large-size problems. In fact, this can be easily improved by dropping the index m from $f(m,w)$ since all the computations depend only on the results from the preceding stage. - -Redefine $f(w)$ to be the current value of the most profitable packing found by the heuristic. That is, -$$ -f(w)=\max\left\{\sum_{i=1}^m p_i x_i + \sum_{i=1}^m \sum_{j=1,i\neq j}^m P_{ij}x_ix_j: \sum_{i=1}^m w_i=w,m\leq n \right\}. 
-
-Accordingly, by dynamic programming we have that
-$$
-f(w)=
-\begin{cases}
-\max\left\{f(w),\ f(w-w_m)+p_m+\sum_{i=1}^{m-1}P_{im}x_i\right\} & \text{if } w_m\leq w, \\
-f(w) & \text{otherwise.}
-\end{cases}
-$$
-
-Note this revised algorithm still runs in $O(Wn^2)$ while only taking up $O(Wn)$ memory compared to the previous $O(Wn^2)$.
-
-Researchers have studied 0-1 quadratic knapsack problems for decades. One focus is to find effective algorithms or effective heuristics, especially those with outstanding performance on real-world problems. The relationship between the decision version and the optimization version of the 0-1 QKP should not be ignored when working with either one. On one hand, if the decision problem can be solved in polynomial time, then one can find the optimal solution by applying this algorithm iteratively. On the other hand, if there exists an algorithm that can solve the optimization problem efficiently, then it can be utilized in solving the decision problem by comparing the input with the optimal value.
-
-Another theme in the literature is to identify what the "hard" problems are. Researchers who study the 0-1 QKP often perform computational studies to show the superiority of their strategies. Such studies can also be conducted to assess the performance of different solution methods. For the 0-1 QKP, those computational studies often rely on randomly generated data, introduced by Gallo et al. Essentially every computational study of the 0-1 QKP utilizes data that is randomly generated as follows. The weights are integers taken from a uniform distribution over the interval [1, 50], and the capacity constraint is an integer taken from a uniform distribution between 50 and the sum of the item weights. The objective coefficients, i.e. the values, are randomly chosen from [1, 100]. It has been observed that generating instances of this form yields problems with highly variable and unpredictable difficulty. Therefore, the computational studies presented in the literature may be unsound. Thus some researchers aim to develop a methodology to generate instances of the 0-1 QKP with a predictable and consistent level of difficulty.
diff --git a/wiki/wikipedia/2438.txt b/wiki/wikipedia/2438.txt
deleted file mode 100644
index 287713db7e3021879807d945386ea46a74d2044b..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2438.txt
+++ /dev/null
@@ -1,87 +0,0 @@
-In optimization, 3-opt is a simple local search algorithm for solving the travelling salesperson problem and related network optimization problems. Compared to the simpler 2-opt algorithm, it is slower but can generate higher-quality solutions.
-
-3-opt analysis involves deleting 3 connections (or edges) in a network (or tour), to create 3 sub-tours. Then the 7 different ways of reconnecting the network are analysed to find the optimum one. This process is then repeated for a different set of 3 connections, until all possible combinations have been tried in a network. A single execution of 3-opt has a time complexity of $O(n^3)$. Iterated 3-opt has a higher time complexity.
-
-This is the mechanism by which the 3-opt swap manipulates a given route:
-
-def reverse_segment_if_better(tour, i, j, k):
-    """If reversing tour[i:j] would make the tour shorter, then do it."""
-    # Given tour [...A-B...C-D...E-F...]
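-    # The indices i < j < k split the tour into three segments, and A..F
-    # below are the endpoints of the three removed edges (k is taken modulo
-    # len(tour) so that the third edge may wrap around to the start).
-    # d0 is the combined length of the three current edges; d1, d2 and d4
-    # correspond to reversing a single stretch (2-opt style moves), while d3
-    # exchanges the two inner segments, the move that is specific to 3-opt.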
-    A, B, C, D, E, F = tour[i-1], tour[i], tour[j-1], tour[j], tour[k-1], tour[k % len(tour)]
-    d0 = distance(A, B) + distance(C, D) + distance(E, F)
-    d1 = distance(A, C) + distance(B, D) + distance(E, F)
-    d2 = distance(A, B) + distance(C, E) + distance(D, F)
-    d3 = distance(A, D) + distance(E, B) + distance(C, F)
-    d4 = distance(F, B) + distance(C, D) + distance(E, A)
-
-    if d0 > d1:
-        tour[i:j] = reversed(tour[i:j])
-        return -d0 + d1
-    elif d0 > d2:
-        tour[j:k] = reversed(tour[j:k])
-        return -d0 + d2
-    elif d0 > d4:
-        tour[i:k] = reversed(tour[i:k])
-        return -d0 + d4
-    elif d0 > d3:
-        tmp = tour[j:k] + tour[i:j]
-        tour[i:k] = tmp
-        return -d0 + d3
-    return 0
-
-The principle is pretty simple: compute the original distance $d_0$ and the cost of each modification. If a better cost is found, apply the modification and return the relative improvement $\delta$.
-
-This is the complete 3-opt swap making use of the above mechanism:
-
-def three_opt(tour):
-    """Iterative improvement based on 3-opt exchanges."""
-    while True:
-        delta = 0
-        for (a, b, c) in all_segments(len(tour)):
-            delta += reverse_segment_if_better(tour, a, b, c)
-        if delta >= 0:
-            break
-    return tour
-
-def all_segments(n: int):
-    """Generate all combinations of segment boundaries."""
-    return ((i, j, k)
-            for i in range(n)
-            for j in range(i + 2, n)
-            for k in range(j + 2, n + (i > 0)))
-
-For the given tour, you generate all segment combinations, and for each combination you try to improve the tour by reversing segments. While a better result is found, the process restarts; otherwise it finishes.
diff --git a/wiki/wikipedia/2439.txt b/wiki/wikipedia/2439.txt
deleted file mode 100644
index 4bf3b430f6c6f2e8667122f2e668451df3196fa4..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2439.txt
+++ /dev/null
@@ -1,17 +0,0 @@
-A differential backup is a type of data backup that preserves data, saving only the difference in the data since the last full backup. The rationale is that, since changes to data are generally few compared to the entire amount of data in the data repository, the amount of time required to complete the backup will be smaller than if a full backup was performed every time that the organization or data owner wishes to back up changes since the last full backup. Another advantage, at least as compared to the incremental backup method, is that at data restoration time, at most two backup media are ever needed to restore all the data. This simplifies data restores as well as increases the likelihood of shortening data restoration time.
-
-A differential backup is a cumulative backup of all changes made since the last full backup, i.e., the differences since the last full backup. The advantage of this is the quicker recovery time, requiring only a full backup and the last differential backup to restore the entire data repository. The disadvantage is that for each day elapsed since the last full backup, more data needs to be backed up, especially if a significant proportion of the data has changed, thus increasing backup time as compared to the incremental backup method.
-
-It is important to use the terms "differential backup" and "incremental backup" correctly. The two terms are widely used in the industry, but their use is not universally standard. A differential backup refers to a backup made to include the differences since the last full backup, while an incremental backup contains only the changes since the last incremental backup. (Or, of course, since the last full backup if the incremental backup in question is the first incremental backup immediately after the last full backup.) All the major data backup vendors have standardized on these definitions.
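The distinction is easy to state in code. A minimal Python sketch (the file records and timestamps are invented for illustration):

```
from datetime import datetime

# (file, last-modified) records, plus the times of the last full backup and
# the last backup of any kind -- all illustrative values
files = [("a.txt", datetime(2023, 1, 10)), ("b.txt", datetime(2023, 1, 14))]
last_full = datetime(2023, 1, 8)
last_any = datetime(2023, 1, 12)   # e.g. the most recent incremental

# differential: everything changed since the last FULL backup
differential = [f for f, mtime in files if mtime > last_full]

# incremental: everything changed since the last backup of ANY kind
incremental = [f for f, mtime in files if mtime > last_any]

print(differential)  # ['a.txt', 'b.txt']
print(incremental)   # ['b.txt']
```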
-
-The difference between incremental and differential backups can be illustrated as follows:
-
-;Incremental backups:
-
-The above assumes that backups are done daily. Otherwise, the "Changes since" entry must be modified to refer to the last backup (whether such last backup was full or incremental). It also assumes a weekly rotation.
-
-;Differential backups:
-
-It is important to remember the industry standard meaning of these two terms because, while the terms above are in very wide use, some writers have been known to reverse their meaning. For example, Oracle Corporation uses a backward description of differential backups in their DB product as of May 14, 2015:
-
-"Differential Incremental Backups - In a differential level 1 backup, RMAN backs up all blocks that have changed since the most recent cumulative or differential incremental backup, whether at level 1 or level 0. RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the level 0 backup."
diff --git a/wiki/wikipedia/244.txt b/wiki/wikipedia/244.txt
deleted file mode 100644
index 990bb22611b87c40d988ba2ff30b48d69391358f..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/244.txt
+++ /dev/null
@@ -1 +0,0 @@
-In the mathematical classification of finite simple groups, the component theorem of Aschbacher shows that if G is a simple group of odd type, and various other assumptions are satisfied, then G has a centralizer of an involution with a "standard component" with small centralizer.
diff --git a/wiki/wikipedia/2440.txt b/wiki/wikipedia/2440.txt
deleted file mode 100644
index 93a7d509a7e801f3b90649200c4789f7da386543..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2440.txt
+++ /dev/null
@@ -1,399 +0,0 @@
-Stochastic portfolio theory (SPT) is a mathematical theory for analyzing stock market structure and portfolio behavior introduced by E. Robert Fernholz in 2002. It is descriptive as opposed to normative, and is consistent with the observed behavior of actual markets. Normative assumptions, which serve as a basis for earlier theories like modern portfolio theory (MPT) and the capital asset pricing model (CAPM), are absent from SPT.
-
-SPT uses continuous-time random processes (in particular, continuous semi-martingales) to represent the prices of individual securities. Processes with discontinuities, such as jumps, have also been incorporated into the theory.
-
-SPT considers stocks and stock markets, but its methods can be applied to other classes of assets as well. A stock is represented by its price process, usually in the logarithmic representation. In this case the market is a collection of stock-price processes $X_i,$ for $ i=1, \dots, n,$ each defined by a continuous semimartingale
-$$
- d \log X_i(t) = \gamma_i(t) dt + \sum_{\nu=1}^d \xi_{i \nu}(t) dW_{\nu}(t)
-$$
-
-where $ W := (W_1, \dots, W_d)$ is a $d$-dimensional Brownian motion (Wiener) process with $d \geq n$, and the processes $\gamma_i$ and $\xi_{i \nu}$ are progressively measurable with respect to the Brownian filtration
-$$
-\{\mathcal{F}_t\} = \{\mathcal{F}^W_t\}
-$$.
-In this representation $\gamma_i(t)$ is called the (compound) growth rate of $X_i,$ and the covariance between $\log X_i$ and $\log X_j$ is $\sigma_{ij}(t)=\sum_{\nu=1}^d \xi_{i \nu}(t) \xi_{j \nu}(t).$ It is frequently assumed that, for all $i,$ the process $\xi_{i,1}^2(t) + \cdots + \xi_{i d}^2(t)$ is positive, locally square-integrable, and does not grow too rapidly as $t \rightarrow\infty.$
-
-The logarithmic representation is equivalent to the classical arithmetic representation, which uses the rate of return $\alpha_i(t)$; however, the growth rate can be a meaningful indicator of long-term performance of a financial asset, whereas the rate of return has an upward bias. The relation between the rate of return and the growth rate is
-$$
-\alpha_{i}(t) = \gamma_i(t) + \frac{\sigma_{ii}(t)}{2}
-$$
-
-The usual convention in SPT is to assume that each stock has a single share outstanding, so $X_i(t)$ represents the total capitalization of the $i$-th stock at time $t,$ and
-$$
-X(t) = X_1(t) + \cdots + X_n(t)
-$$ is the total capitalization of the market.
-
-Dividends can be included in this representation, but are omitted here for simplicity.
-
-An investment strategy $\pi = (\pi_1 , \cdots, \pi_n)$ is a vector of bounded, progressively measurable processes; the quantity $\pi_i(t)$ represents the proportion of total wealth invested in the $i$-th stock at time $t$, and $\pi_0(t) := 1 - \sum_{i=1}^n \pi_i(t)$ is the proportion hoarded (invested in a money market with zero interest rate). Negative weights correspond to short positions. The cash strategy $\kappa \equiv 0$ (with $\kappa_0 \equiv 1$) keeps all wealth in the money market. A strategy $\pi$ is called a portfolio if it is fully invested in the stock market, that is, if $\pi_1(t) + \cdots + \pi_n (t) = 1$ holds at all times.
-
-The value process $Z_{\pi}$ of a strategy $\pi$ is always positive and satisfies
-
-d \log Z_{\pi}(t) = \sum_{i=1}^n \pi_i(t) d\log X_i(t) + \gamma_\pi^*(t) dt
-
-where the process $\gamma_{\pi}^*$ is called the excess growth rate process and is given by
-
-\gamma_{\pi}^*(t) := \frac{1}{2} \sum_{i=1}^n \pi_i(t) \sigma_{ii}(t) - \frac{1}{2} \sum_{i,j=1}^n \pi_i(t) \pi_j(t) \sigma_{ij}(t)
-
-This expression is non-negative for a portfolio with non-negative weights $\pi_i(t)$ and has been used in quadratic optimization of stock portfolios, a special case of which is optimization with respect to the logarithmic utility function.
-
-The market weight processes,
-
-\mu_i(t) := \frac{X_i(t)}{X_1(t) + \cdots + X_n(t)}
-
-where $i=1, \dots, n$, define the market portfolio $\mu$. With the initial condition $Z_\mu(0) = X(0),$ the associated value process will satisfy $Z_{\mu}(t) = X(t)$ for all $t.$
-
-A number of conditions can be imposed on a market, sometimes to model actual markets and sometimes to emphasize certain types of hypothetical market behavior. Some commonly invoked conditions are:
-
-# A market is nondegenerate if the eigenvalues of the covariance matrix $(\sigma_{ij} (t))_{1 \leq i,j \leq n}$ are bounded away from zero. It has bounded variance if the eigenvalues are bounded.
-
-# A market is coherent if $\operatorname{lim}_{t\rightarrow\infty} t^{-1} \log(\mu_i(t)) = 0 $ for all $i = 1, \dots, n.$
-
-# A market is diverse on $[0, T]$ if there exists $\varepsilon > 0$ such that $\mu_{\max}(t) \leq 1 -\varepsilon$ for $t \in [0, T].$
-
-# A market is weakly diverse on $[0, T]$ if there exists $\varepsilon > 0$ such that
-$$
-\frac{1}{T}\int_0^T \mu_{\max}(t) dt \leq 1 - \varepsilon
-$$
-
-Diversity and weak diversity are rather weak conditions, and markets are generally far more diverse than would be tested by these extremes. A measure of market diversity is market entropy, defined by
-$$
-S(\mu(t)) = -\sum_{i=1}^{n} \mu_i(t) \log(\mu_i(t)).
-$$
-
-(Figure 3, omitted: the "cumulative turnover" processes at various ranks for cap-ranked US stocks in the 1990s, from CRSP data. As expected, the amount of turnover increases as one goes down the capitalization ladder, and there is a pronounced linear growth in time across all displayed ranks.)
-
-We consider the vector process $(\mu_{(1)}(t), \dots, \mu_{(n)}(t)),$ with $0 \leq t < \infty$, of ranked market weights
-
-\max_{1\leq i \leq n} \mu_i(t) =: \mu_{(1)}(t) \geq \mu_{(2)}(t) \geq \cdots \geq \mu_{(n)}(t) = \min_{1\leq i \leq n} \mu_i(t)
-
-where ties are resolved "lexicographically", always in favor of the lowest index. The log-gaps
-$$
- G^{(k,k+1)}(t) := \log(\mu_{(k)}(t)/\mu_{(k+1)}(t)),
-$$
-
-where $ 0 \leq t < \infty$ and $k=1, \dots, n-1$, are continuous, non-negative semimartingales; we denote by $\Lambda^{(k,k+1)}(t) = L^{G^{(k, k+1)}}(t ; 0)$ their local times at the origin. These quantities measure the amount of turnover between ranks $k$ and $k+1$ during the time-interval $[0, t]$.
-
-A market is called stochastically stable, if $(\mu_{(1)}(t), \cdots, \mu_{(n)}(t))$ converges in distribution as $t \rightarrow\infty$ to a random vector $(M_{(1)} , \cdots, M_{(n)})$ with values in the Weyl chamber
-$$
-\{(x_1, \dots, x_n ) \mid x_1 > x_2 > \dots > x_n \text{ and }\sum_{i=1}^{n} x_i = 1\}
-$$
-
-of the unit simplex, and if the strong law of large numbers
-
-\lim_{t \rightarrow \infty}\frac{\Lambda^{(k,k+1)}(t)}{t} = \lambda^{(k,k+1)} > 0
-
-holds for suitable real constants $\lambda^{(1,2)}, \dots, \lambda^{(n-1, n)}.$
-
-Given any two investment strategies $\pi,\rho$ and a real number $T > 0$, we say that $\pi$ is arbitrage relative to $\rho$ over the time-horizon $[0,T]$, if $\mathbb{P}(Z_\pi(T) \geq Z_\rho (T)) = 1$ and $\mathbb{P}(Z_\pi(T) > Z_\rho (T)) > 0$ both hold; this relative arbitrage is called "strong" if $\mathbb{P}(Z_\pi (T) > Z_\rho (T)) = 1.$ When $\rho$ is $\kappa \equiv 0,$ we recover the usual definition of arbitrage relative to cash.
-
-We say that a given strategy $\nu$ has the numeraire property, if for any strategy $\pi$ the ratio $Z_\pi/Z_\nu$ is a $\mathbb{P}$-supermartingale. In such a case, the process $1/Z_\nu$ is called a "deflator" for the market.
-
-No arbitrage is possible, over any given time horizon, relative to a strategy $\nu$ that has the numeraire property (either with respect to the underlying probability measure $\mathbb{P}$, or with respect to any other probability measure which is equivalent to $\mathbb{P}$).
-A strategy $\nu$ with the numeraire property maximizes the asymptotic growth rate from investment, in the sense that
-
-\limsup_{T\rightarrow\infty} \frac{1}{T}\log\left(\frac{Z_\pi(T)}{Z_\nu(T)}\right) \leq 0
-
-holds for any strategy $\pi$; it also maximizes the expected log-utility from investment, in the sense that for any strategy $\pi$ and real number $T > 0$ we have
-
-\mathbb{E}[\log(Z_\pi(T))] \leq \mathbb{E}[\log(Z_\nu(T))].
-
-If the vector $\alpha(t) = (\alpha_1(t), \cdots, \alpha_n(t))'$ of instantaneous rates of return, and the matrix $\sigma(t) = (\sigma_{ij}(t))_{1\leq i,j \leq n}$ of instantaneous covariances, are known, then the strategy
-
-\nu(t) = \arg\max_{p \in \mathbb{R}^n} \left(p' \alpha(t) - \tfrac{1}{2}p' \sigma(t) p\right)
-\qquad \text{ for all } 0 \leq t < \infty
-
-has the numeraire property whenever the indicated maximum is attained.
-
-The study of the numeraire portfolio links SPT to the so-called Benchmark approach to Mathematical Finance, which takes such a numeraire portfolio as given and provides a way to price contingent claims, without any further assumptions.
-
-A probability measure $\mathbb{Q}$ is called an equivalent martingale measure (EMM) on a given time-horizon $[0,T]$, if it has the same null sets as $\mathbb{P}$ on $\mathcal{F}_T$, and if the processes $X_1(t), \dots, X_n(t)$, $0 \leq t \leq T$, are all $\mathbb{Q}$-martingales. Assuming that such an EMM exists, arbitrage is not possible on $[0,T]$ relative to either cash $\kappa$ or to the market portfolio $\mu$ (or more generally, relative to any strategy $\rho$ whose wealth process $Z_\rho$ is a martingale under some EMM). Conversely, if $\pi, \rho$ are portfolios and one of them is arbitrage relative to the other on $[0,T]$, then no EMM can exist on this horizon.
-
-Suppose we are given a smooth function $ G : U \rightarrow (0, \infty)$ on some neighborhood $U$ of the unit simplex in $\mathbb{R}^n$. We call
-
-\pi_i^{\mathbb{G}}(t) := \mu_i(t) \left(
-D_i \log(\mathbb{G}(\mu(t))) + 1
-- \sum_{j=1}^n \mu_j(t) D_j \log(\mathbb{G}(\mu(t)))
-\right)
-\qquad \text{ for } 1 \leq i \leq n
-
-the portfolio generated by the function $\mathbb{G}$. It can be shown that all the weights of this portfolio are non-negative if its generating function $\mathbb{G}$ is concave. Under mild conditions, the relative performance of this functionally-generated portfolio $\pi^\mathbb{G}$ with respect to the market portfolio $\mu$ is given by the F-G decomposition
-
-\log\left(
-\frac{Z_{\pi^\mathbb{G}}(T)}{Z_{\mu}(T)}
-\right)
-= \log\left(
-\frac{\mathbb{G}(\mu(T))}{\mathbb{G}(\mu(0))}
-\right)
-+ \int_0^T g(t) dt
-
-which involves no stochastic integrals. Here the expression
-
-g(t) := \frac{-1}{2\mathbb{G}(\mu(t))}
-\sum_{i=1}^n \sum_{j=1}^n
-D_{ij}^2 \mathbb{G}(\mu(t)) \mu_i(t) \mu_j(t) \tau_{ij}^\mu(t)
-
-is called the drift process of the portfolio (and it is a non-negative quantity if the generating function $\mathbb{G}$ is concave); and the quantities
-
-\tau_{ij}^\mu(t) := \sum_{\nu=1}^d (\xi_{i\nu}(t) - \xi_\nu^\mu(t))(\xi_{j\nu}(t) - \xi_\nu^\mu(t)),\qquad
-\xi_\nu^\mu(t) := \sum_{i=1}^n \mu_i(t) \xi_{i\nu}(t)
-
-with $1 \leq i, j \leq n$ are called the relative covariances between $\log (X_i)$ and $\log (X_j)$ with respect to the market.
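Before listing examples, here is a minimal numerical sketch of the weight formula above (illustrative only; finite-difference derivatives stand in for the exact partials). It checks that the geometric-mean function, which appears in the example list below, generates the equal-weighted portfolio:

```
import numpy as np

def fg_weights(G, mu, h=1e-6):
    """Portfolio generated by G: pi_i = mu_i * (D_i log G + 1 - sum_j mu_j D_j log G)."""
    mu = np.asarray(mu, dtype=float)
    n = len(mu)
    dlogG = np.empty(n)
    for i in range(n):
        e = np.zeros(n); e[i] = h
        dlogG[i] = (np.log(G(mu + e)) - np.log(G(mu - e))) / (2 * h)
    return mu * (dlogG + 1.0 - mu @ dlogG)

geometric_mean = lambda x: np.prod(x) ** (1.0 / len(x))
mu = np.array([0.5, 0.3, 0.2])
print(fg_weights(geometric_mean, mu))  # approximately [1/3, 1/3, 1/3]
```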
-
-# The constant function $\mathbb{G} := w > 0$ generates the market portfolio $\mu$,
-
-# The geometric mean function $\mathbb{H}(x) := (x_1 \cdots x_n)^\frac{1}{n}$ generates the equal-weighted portfolio $\varphi_i(t) = \frac{1}{n}$ for all $ 1 \leq i \leq n $,
-
-# The modified entropy function $\mathbb{S}^c(x) = c - \sum_{i=1}^n x_i \cdot \log(x_i)$ for any $c > 0$ generates the modified entropy-weighted portfolio,
-
-# The function $\mathbb{D}^{(p)}(x) := (\sum_{i=1}^n x_i^p)^\frac{1}{p}$ with $0 < p < 1$ generates the diversity-weighted portfolio $\delta_i^{(p)}(t) = \frac{(\mu_i(t))^p}{\sum_{i=1}^n(\mu_i(t))^p} $ with drift process $(1-p)\gamma_{\delta^{(p)}}^*(t)$.
-
-The excess growth rate of the market portfolio admits the representation $2\gamma_\mu^*(t) = \sum_{i=1}^n \mu_i(t) \tau_{ii}^\mu(t)$ as a capitalization-weighted average relative stock variance. This quantity is nonnegative; if it happens to be bounded away from zero, namely
-
-\gamma_\mu^*(t) = \frac{1}{2}\sum_{i=1}^n \mu_i(t) \tau_{ii}^\mu(t) \geq h > 0,
-
-for all $0 \leq t < \infty$ for some real constant $h$, then it can be shown using the F-G decomposition that, for every $T > \mathbb{S}(\mu(0))/h,$ there exists a constant $c > 0$ for which the modified entropic portfolio $\Theta^{(c)}$ is strict arbitrage relative to the market $\mu$ over $[0, T]$; see Fernholz and Karatzas (2005) for details. It is an open question whether such arbitrage exists over arbitrary time horizons (for two special cases, in which the answer to this question turns out to be affirmative, please see the paragraph below and the next section).
-
-If the eigenvalues of the covariance matrix $(\sigma_{ij}(t))_{1\leq i,j \leq n}$ are bounded away from both zero and infinity, the condition $\gamma_\mu^*\geq h > 0$ can be shown to be equivalent to diversity, namely $\mu_{\max} \leq 1 - \varepsilon$ for a suitable $\varepsilon \in (0, 1).$ Then the diversity-weighted portfolio $\delta^{(p)}$ leads to strict arbitrage relative to the market portfolio over sufficiently long time horizons; whereas suitable modifications of this diversity-weighted portfolio realize such strict arbitrage over arbitrary time horizons.
-
-We consider the example of a system of stochastic differential equations
-
-d\log(X_i(t)) = \frac{\alpha}{2\mu_i(t)} dt + \frac{1}{\sqrt{\mu_i(t)}} dW_i(t)
-
-with $1\leq i \leq n$, a given real constant $\alpha \geq 0$ and an $n$-dimensional Brownian motion
-$$
-(W_1 , \dots, W_n).
-$$ It follows from the work of Bass and Perkins (2002) that this system has a weak solution, which is unique in distribution. Fernholz and Karatzas (2005) show how to construct this solution in terms of scaled and time-changed squared Bessel processes, and prove that the resulting system is coherent.
-
-The total market capitalization $X$ behaves here as geometric Brownian motion with drift, and has the same constant growth rate as the largest stock; whereas the excess growth rate of the market portfolio is a positive constant. On the other hand, the relative market weights $\mu_i$ with $1 \leq i \leq n$ have the dynamics of multi-allele Wright-Fisher processes.
-
-This model is an example of a non-diverse market with unbounded variances, in which strong arbitrage opportunities with respect to the market portfolio $\mu$ exist over arbitrary time horizons, as was shown by Banner and Fernholz (2008). Moreover, Pal (2012) derived the joint density of market weights at fixed times and at certain stopping times.
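A minimal Euler-type simulation of this system (an illustrative sketch; the step size, horizon and parameter values are arbitrary choices, not from the literature) shows the market weights fluctuating while remaining strictly positive:

```
import numpy as np

rng = np.random.default_rng(0)
n, alpha, dt, steps = 5, 0.5, 1e-4, 20000

logX = np.zeros(n)  # start all stocks at capitalization 1
for _ in range(steps):
    X = np.exp(logX)
    mu = X / X.sum()                      # market weights
    dW = rng.normal(0.0, np.sqrt(dt), n)
    logX += alpha / (2 * mu) * dt + dW / np.sqrt(mu)
print(np.exp(logX) / np.exp(logX).sum())  # final market weights
```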
-
-We fix an integer $m \in \{2, \dots, n - 1\}$ and construct two capitalization-weighted portfolios: one consisting of the top $m$ stocks, denoted $\zeta$, and one consisting of the bottom $n - m$ stocks, denoted $\eta$. More specifically,
-
-\zeta_i(t) = \frac{\sum_{k=1}^m \mu_{(k)}(t) \mathbf{1}_{\{\mu_i(t) = \mu_{(k)}(t)\}}}
-{\sum_{l=1}^m \mu_{(l)}(t)}
-\qquad \text{ and } \qquad
-\eta_i(t) = \frac{\sum_{k=m+1}^n \mu_{(k)}(t) \mathbf{1}_{\{\mu_i(t) = \mu_{(k)}(t)\}}}
-{\sum_{l=m+1}^n \mu_{(l)}(t)}
-
-for $1 \leq i \leq n.$ Fernholz (1999), (2002) showed that the relative performance of the large-stock portfolio with respect to the market is given as
-
-\log\left(\frac{Z_\zeta(T)}{Z_\mu(T)}\right)
-= \log\left(\frac{\mu_{(1)}(T) + \cdots + \mu_{(m)}(T)}
-{\mu_{(1)}(0) + \cdots + \mu_{(m)}(0)} \right)
-- \frac{1}{2}\int_0^T \frac{\mu_{(m)}(t)}{\mu_{(1)}(t) + \cdots + \mu_{(m)}(t)}
-d\Lambda^{(m, m+1)}(t).
-
-Indeed, if there is no turnover at the $m$-th rank during the interval $[0, T]$, the fortunes of $\zeta$ relative to the market are determined solely on the basis of how the total capitalization of this sub-universe of the $m$ largest stocks fares, at time $T$ versus time 0; whenever there is turnover at the $m$-th rank, though, $\zeta$ has to sell at a loss a stock that gets "relegated" to the lower league, and buy a stock that has risen in value and been promoted. This accounts for the "leakage" that is evident in the last term, an integral with respect to the cumulative turnover process $\Lambda^{(m,m+1)}$ of the relative weight in the large-cap portfolio $\zeta$ of the stock that occupies the $m$-th rank.
-
-The reverse situation prevails with the portfolio $\eta$ of small stocks, which gets to sell at a profit stocks that are being promoted to the "upper capitalization" league, and buy relatively cheaply stocks that are being relegated:
-
-\log\left(\frac{Z_\eta(T)}{Z_\mu(T)}\right)
-= \log\left(\frac{\mu_{(m+1)}(T) + \cdots + \mu_{(n)}(T)}
-{\mu_{(m+1)}(0) + \cdots + \mu_{(n)}(0)} \right)
-+ \frac{1}{2}\int_0^T \frac{\mu_{(m+1)}(t)}{\mu_{(m+1)}(t)
-+ \cdots + \mu_{(n)}(t)} d\Lambda^{(m, m+1)}(t).
-
-It is clear from these two expressions that, in a coherent and stochastically stable market, the small-stock cap-weighted portfolio $\eta$ will tend to outperform its large-stock counterpart $\zeta$, at least over large time horizons; in particular, we have under those conditions
-
-\lim_{T\rightarrow\infty} \frac{1}{T} \log\left(\frac{Z_\eta(T)}{Z_\zeta(T)}\right)
-= \frac{\lambda^{(m, m+1)}}{2} \mathbb{E}
-\left(
-\frac{M_{(m)}}{M_{(1)} + \cdots + M_{(m)}}
-+ \frac{M_{(m+1)}}{M_{(m+1)} + \cdots + M_{(n)}}
-\right) > 0.
-
-This quantifies the so-called size effect. In Fernholz (1999, 2002), constructions such as these are generalized to include functionally generated portfolios based on ranked market weights.
-
-First- and second-order models are hybrid Atlas models that reproduce some of the structure of real stock markets. First-order models have only rank-based parameters, and second-order models have both rank-based and name-based parameters.
-
-Suppose that $X_1,\ldots,X_n$ is a coherent market, and that the limits
-
-\mathbf{\sigma}_k^2 = \lim_{t\to\infty}
-t^{-1}\langle\log\mu_{(k)}\rangle (t)
-
-and
-
-\mathbf{g}_k = \lim_{T\to\infty}\frac{1}{T}\int_0^T\sum_{i=1}^n \mathbf{1}_{\{r_t(i)=k\}}d\log\mu_i(t)
-
-exist for $k=1,\ldots,n$, where $r_t(i)$ is the rank of $ X_i(t)$.
-Then the Atlas model ${\widehat X}_1,\ldots,{\widehat X}_n$ defined by
-
-d\log {\widehat X}_i(t) = \sum_{k=1}^n \mathbf{g}_k\mathbf{1}_{\{{\hat r}_t(i)=k\}}dt
-+ \sum_{k=1}^n\mathbf{\sigma}_k \mathbf{1}_{\{{\hat r}_t(i)=k\}}dW_i(t),
-
-where ${\hat r}_t(i)$ is the rank of $\widehat X_i(t)$ and $(W_1,\ldots,W_n)$ is an $n$-dimensional Brownian motion process, is the first-order model for the original market, $X_1,\ldots,X_n$.
-
-Under reasonable conditions, the capital distribution curve for a first-order model will be close to that of the original market. However, a first-order model is ergodic in the sense that each stock asymptotically spends $(1/n)$-th of its time at each rank, a property that is not present in actual markets. In order to vary the proportion of time that a stock spends at each rank, it is necessary to use some form of hybrid Atlas model with parameters that depend on both rank and name. An effort in this direction was made by Fernholz, Ichiba, and Karatzas (2013), who introduced a second-order model for the market with rank- and name-based growth parameters, and variance parameters that depended on rank alone.
diff --git a/wiki/wikipedia/2441.txt b/wiki/wikipedia/2441.txt
deleted file mode 100644
index 0a693b72b09f4e91349ad8e4d2749d704f169060..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2441.txt
+++ /dev/null
@@ -1,12 +0,0 @@
-Self-verifying theories are consistent first-order systems of arithmetic, much weaker than Peano arithmetic, that are capable of proving their own consistency. Dan Willard was the first to investigate their properties, and he has described a family of such systems. According to Gödel's incompleteness theorem, these systems cannot contain the theory of Peano arithmetic nor its weak fragment Robinson arithmetic; nonetheless, they can contain strong theorems.
-
-In outline, the key to Willard's construction of his system is to formalise enough of the Gödel machinery to talk about provability internally without being able to formalise diagonalisation. Diagonalisation depends upon being able to prove that multiplication is a total function (and in the earlier versions of the result, addition also). Addition and multiplication are not function symbols of Willard's language; instead, subtraction and division are, with the addition and multiplication predicates being defined in terms of these. Here, one cannot prove the $\Pi^0_2$ sentence expressing totality of multiplication:
-$$
-(\forall x,y)\ (\exists z)\ {\rm multiply}(x,y,z).
-$$
-
-where ${\rm multiply}$ is the three-place predicate which stands for $z/y=x$.
-
-When the operations are expressed in this way, provability of a given sentence can be encoded as an arithmetic sentence describing termination of an analytic tableau. Provability of consistency can then simply be added as an axiom. The resulting system can be proven consistent by means of a relative consistency argument with respect to ordinary arithmetic.
-
-One can further add any true $\Pi^0_1$ sentence of arithmetic to the theory while still retaining consistency of the theory.
diff --git a/wiki/wikipedia/2442.txt b/wiki/wikipedia/2442.txt
deleted file mode 100644
index ad87b81b2347741c08cd580a265eb61bd22dda75..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2442.txt
+++ /dev/null
@@ -1,14 +0,0 @@
-A tetradic number, also known as a four-way number, is a number that remains the same when flipped back to front, flipped front to back, mirrored up-down, or flipped up-down. The only digits that remain the same when turned upside down or mirrored are 0, 1, and 8, so a tetradic number is a palindromic number containing only 0, 1, and 8 as digits. (This is dependent on the use of a handwriting style or font in which these digits are symmetrical, as well as on the use of Arabic numerals in the first place.) The first few tetradic numbers are 1, 8, 11, 88, 101, 111, 181, 808, 818, ... (OEIS A006072).
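The definition translates directly into a few lines of Python (an illustrative sketch; the function name is made up):

```
def is_tetradic(n: int) -> bool:
    """A tetradic number reads the same in all four orientations, which for
    Arabic numerals means: palindromic and using only the digits 0, 1, 8."""
    s = str(n)
    return s == s[::-1] and set(s) <= {"0", "1", "8"}

print([n for n in range(1, 200) if is_tetradic(n)])  # [1, 8, 11, 88, 101, 111, 181]
```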
-Tetradic numbers are also known as four-way numbers due to the fact that they have four-way symmetry: they can be flipped back to front, flipped front to back, mirrored up-down, or flipped up-down and always stay the same. The four-way symmetry explains the name, due to tetra- being the Greek prefix for four. Tetradic numbers are both strobogrammatic and palindromic.
-
-A larger tetradic number can always be generated by adding another tetradic number to each end, retaining the symmetry.
-
-Tetradic primes are a specific type of tetradic number, defined as tetradic numbers that are also prime numbers. The first few tetradic primes are 11, 101, 181, 18181, 1008001, 1180811, 1880881, 1881881, ... (OEIS A068188).
-
-The largest known tetradic prime is
-$$
-10^{180054} + 8 R_{58567} \cdot 10^{60744} + 1,
-$$
-
-where $R_n$ is a repunit, that is, a number which contains only the digit 1 repeated $n$ times. The prime has 180,055 decimal digits.
diff --git a/wiki/wikipedia/2443.txt b/wiki/wikipedia/2443.txt
deleted file mode 100644
index 014d5d29621562c3cada80616d8049a74a20348c..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2443.txt
+++ /dev/null
@@ -1,11 +0,0 @@
-Go! Sudoku is a sudoku puzzle game for the PlayStation Portable, released in Europe on December 2, 2005 and in North America on March 21, 2006. It was later released in Japan on April 27, 2006 under the name of Kazuo (カズオ), as Nikoli holds the trademark for Sudoku. The game was mildly received by gamers and professional critics.
-
-The base game features 1000 Sudoku puzzles grouped by difficulty, as well as various modes, customizable grids and multiplayer. 200 more puzzles are available for download from the Go! Sudoku official website, and players can add their own custom backgrounds to the game.
-
-Many players as well as professional reviewers have reported a major glitch in the PSP title, in which a mysterious message dialog pops up at semi-regular intervals throughout the game. The dialog box covers the entire playing board and interrupts the gameplay by preventing players from seeing the board or making changes to it. The dialog box is titled "Attention!" and only contains the words "Please wait..." There is no way to close the dialog box, though it will vanish on its own, usually after about 15 seconds.
-
-The problem is exacerbated by Go! Sudoku being a single-player game that is played against the clock. Since the object of the game is to complete each puzzle in as little time as possible, with points being awarded based on how quickly the player finishes, the frequent appearance of the message dialog hinders any attempt at beating one's previous high scores, largely negating the competitive element of the single-player gameplay. The bug is even more obtrusive during multiplayer competitive play, as it conveys a significant advantage to the player who does not encounter it. Additionally, the frequency of the dialog box popping up appears to increase with time, forcing users to eventually exit out of the game completely and restart.
-The nature and severity of the bug is considered game-breaking by many, and though many users have notified Ubisoft about the bug, no official acknowledgment of the problem has been made, nor have any solutions been offered by the publisher as of January 2010. However, there are reports that keeping the PSP fully charged and playing the game with the charger plugged in minimizes the bug's occurrence.
-
-Go! Sudoku was also released on the PlayStation Network for the PlayStation 3, as an initial Starter Pack with 12 puzzles and a level pack of more than 1200 puzzles.
diff --git a/wiki/wikipedia/2444.txt b/wiki/wikipedia/2444.txt
deleted file mode 100644
index 6180bb308b7965581fd7806a79f0ccbefc50d0cd..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2444.txt
+++ /dev/null
@@ -1,128 +0,0 @@
-In operator theory, Naimark's dilation theorem is a result that characterizes positive operator valued measures. It can be viewed as a consequence of Stinespring's dilation theorem.
-
-In the mathematical literature, one may also find other results that bear Naimark's name.
-
-In the physics literature, it is common to see the spelling "Neumark" instead of "Naimark." The latter variant is according to the romanization of Russian used in translation of Soviet journals, with diacritics omitted (originally Naĭmark). The former is according to the etymology of the surname.
-
-Let X be a compact Hausdorff space, H be a Hilbert space, and L(H) the Banach space of bounded operators on H. A mapping E from the Borel σ-algebra on X to $L(H)$ is called an operator-valued measure if it is weakly countably additive, that is, for any disjoint sequence of Borel sets $\{ B_i \}$, we have
-
-\langle E (\cup _i B_i) x, y \rangle = \sum_i \langle E (B_i) x, y \rangle
-
-for all x and y. Some terminology for describing such measures is:
-
-* E is called regular if the scalar valued measure
-
-B \rightarrow \langle E (B) x, y \rangle
-
-is a regular Borel measure, meaning all compact sets have finite total variation and the measure of a set can be approximated by those of open sets.
-
-* E is called bounded if $|E| = \sup_B \|E(B) \| < \infty$.
-
-* E is called positive if E(B) is a positive operator for all B.
-
-* E is called self-adjoint if E(B) is self-adjoint for all B.
-
-* E is called spectral if it is self-adjoint and $E (B_1 \cap B_2) = E(B_1) E(B_2)$ for all $ B_1, B_2 $.
-
-We will assume throughout that E is regular.
-
-Let C(X) denote the abelian C*-algebra of continuous functions on X. If E is regular and bounded, it induces a map $\Phi _E : C(X) \rightarrow L(H)$ in the obvious way:
-$$
-\langle \Phi _E (f) h_1 , h_2 \rangle = \int _X f(x) \langle E(dx) h_1, h_2 \rangle
-$$
-
-The boundedness of E implies, for all h of unit norm,
-
-\langle \Phi _E (f) h , h \rangle = \int _X f(x) \langle E(dx) h, h \rangle \leq \| f \|_\infty \cdot |E| .
-
-This shows $ \Phi _E (f)$ is a bounded operator for all f, and $\Phi _E$ itself is a bounded linear map as well.
-
-The properties of $\Phi_E$ are directly related to those of E:
-
-* If E is positive, then $\Phi_E$, viewed as a map between C*-algebras, is also positive.
-
-* $\Phi_E$ is a homomorphism if, by definition, for all continuous f and g on X and $h_1, h_2 \in H$,
-
-\langle \Phi_E (fg) h_1, h_2 \rangle = \int _X f(x) \cdot g(x) \langle E(dx) h_1, h_2 \rangle
-= \langle \Phi_E (f) \Phi_E (g) h_1 , h_2 \rangle.
-
-Take f and g to be indicator functions of Borel sets and we see that $\Phi _E$ is a homomorphism if and only if E is spectral.
-
-* Similarly, to say $\Phi_E$ respects the * operation means
-$$
-\langle \Phi_E ( {\bar f} ) h_1, h_2 \rangle = \langle \Phi_E (f) ^* h_1 , h_2 \rangle.
-$$
-
-The LHS is
-$$
-\int _X {\bar f} \langle E(dx) h_1, h_2 \rangle,
-$$
-
-and the RHS is
-$$
-\langle h_1, \Phi_E (f) h_2 \rangle = \overline{\langle \Phi_E(f) h_2, h_1 \rangle} = \int _X {\bar f}(x) \overline{\langle E(dx) h_2, h_1 \rangle} = \int _X {\bar f}(x) \langle h_1, E(dx) h_2 \rangle
-$$
-
-So, taking f a sequence of continuous functions increasing to the indicator function of B, we get $\langle E(B) h_1, h_2 \rangle = \langle h_1, E(B) h_2 \rangle$, i.e. E(B) is self-adjoint.
-
-* Combining the previous two facts gives the conclusion that $\Phi _E$ is a *-homomorphism if and only if E is spectral and self-adjoint. (When E is spectral and self-adjoint, E is said to be a projection-valued measure or PVM.)
-
-The theorem reads as follows: Let E be a positive L(H)-valued measure on X. There exists a Hilbert space K, a bounded operator $V: K \rightarrow H$, and a self-adjoint, spectral L(K)-valued measure on X, F, such that
-$$
- E(B) = V F(B) V^*.
-$$
-
-We now sketch the proof. The argument passes E to the induced map $\Phi_E$ and uses Stinespring's dilation theorem. Since E is positive, so is $\Phi_E$ as a map between C*-algebras, as explained above. Furthermore, because the domain of $\Phi _E$, C(X), is an abelian C*-algebra, we have that $\Phi_E$ is completely positive. By Stinespring's result, there exists a Hilbert space K, a *-homomorphism $\pi : C(X) \rightarrow L(K)$, and an operator $V: K \rightarrow H$ such that
-$$
- \Phi_E(f) = V \pi (f) V^*.
-$$
-
-Since π is a *-homomorphism, its corresponding operator-valued measure F is spectral and self-adjoint. It is easily seen that F has the desired properties.
-
-In the finite-dimensional case, there is a somewhat more explicit formulation.
-
-Suppose now $X = \{1, \dotsc, n \}$, therefore C(X) is the finite-dimensional algebra $\mathbb{C}^n$, and H has finite dimension m. A positive operator-valued measure E then assigns each i a positive semidefinite m × m matrix $E_i$. Naimark's theorem now states that there is a projection-valued measure on X whose restriction is E.
-
-Of particular interest is the special case when $\sum_i E_i = I$ where I is the identity operator. (See the article on POVM for relevant applications.) In this case, the induced map $\Phi_E$ is unital. It can be assumed with no loss of generality that each $E_i$ is a rank-one projection onto some $x_i \in \mathbb{C}^m$. Under such assumptions, the case $n < m$ is excluded and we must have either
-
-# $n = m$ and E is already a projection-valued measure (because $\sum_{i=1}^n x_i x_i^* = I$ if and only if $\{x_i\}$ is an orthonormal basis),
-
-# $n > m$ and $\{ E_i \}$ does not consist of mutually orthogonal projections.
-
-For the second possibility, the problem of finding a suitable projection-valued measure now becomes the following problem. By assumption, the non-square matrix
-$$
- M = \begin{bmatrix} x_1 \cdots x_n \end{bmatrix}
-$$
-
-is a co-isometry, that is $M M^* = I$. If we can find a $(n-m) \times n$ matrix N where
-$$
-U = \begin{bmatrix} M \\ N \end{bmatrix}
-$$
-
-is an n × n unitary matrix, the projection-valued measure whose elements are projections onto the column vectors of U will then have the desired properties. In principle, such an N can always be found; one concrete construction is sketched below.
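One way to produce such an N numerically (a sketch using NumPy's SVD, not part of the original article): take the rows of N to be an orthonormal basis of the orthogonal complement of the row space of M.

```
import numpy as np

def complete_to_unitary(M):
    """Given an m x n matrix M with M M* = I, return a unitary U stacking M on
    top of a suitable N."""
    m, n = M.shape
    # Full SVD: M = P @ diag(s) @ Vh with Vh an n x n unitary; the last
    # n - m rows of Vh span the orthogonal complement of the row space of M.
    _, _, Vh = np.linalg.svd(M, full_matrices=True)
    N = Vh[m:, :]
    return np.vstack([M, N])

# Example: a 2 x 3 co-isometry completed to a 3 x 3 unitary.
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 3/5, 4/5]])
U = complete_to_unitary(M)
assert np.allclose(U @ U.conj().T, np.eye(3))
```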
diff --git a/wiki/wikipedia/2445.txt b/wiki/wikipedia/2445.txt
deleted file mode 100644
index 30b763ca54e1de8029fba98faec66134ede6c2fb..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2445.txt
+++ /dev/null
@@ -1,52 +0,0 @@
-In computer science, the Akra–Bazzi method, or Akra–Bazzi theorem, is used to analyze the asymptotic behavior of the mathematical recurrences that appear in the analysis of divide-and-conquer algorithms where the sub-problems have substantially different sizes. It is a generalization of the master theorem for divide-and-conquer recurrences, which assumes that the sub-problems have equal size. It is named after mathematicians Mohamad Akra and Louay Bazzi.
-$$
-T(x)=g(x) + \sum_{i=1}^k a_i T(b_i x + h_i(x))\qquad \text{for }x \geq x_0.
-$$
-
-The conditions for usage are:
-
-* sufficient base cases are provided
-
-* $a_i$ and $b_i$ are constants for all $i$
-
-* $a_i > 0$ for all $i$
-
-* $0 < b_i < 1$ for all $i$
-
-* $\left|g(x)\right| \in O(x^c)$, where c is a constant and O notates Big O notation
-
-* $\left| h_i(x) \right| \in O\left(\frac{x}{(\log x)^2}\right)$ for all $i$
-
-* $x_0$ is a constant
-
-The asymptotic behavior of $T(x)$ is found by determining the value of $p$ for which $\sum_{i=1}^k a_i b_i^p = 1$ and plugging that value into the equation
-$$
-T(x) \in \Theta \left( x^p\left( 1+\int_1^x \frac{g(u)}{u^{p+1}}du \right)\right)
-$$
-
-(see Big Theta notation). Intuitively, $h_i(x)$ represents a small perturbation in the index of $T$. By noting that $\lfloor b_i x \rfloor = b_i x + (\lfloor b_i x \rfloor - b_i x)$ and that the absolute value of $\lfloor b_i x \rfloor - b_i x$ is always between 0 and 1, $h_i(x)$ can be used to ignore the floor function in the index. Similarly, one can also ignore the ceiling function. For example, $T(n) = n + T \left(\frac{1}{2} n \right)$ and $T(n) = n + T \left(\left\lfloor \frac{1}{2} n \right\rfloor \right)$ will, as per the Akra–Bazzi theorem, have the same asymptotic behavior.
-
-Suppose $T(n)$ is defined as 1 for integers $0 \leq n \leq 3$ and $n^2 + \frac{7}{4} T \left( \left\lfloor \frac{1}{2} n \right\rfloor \right) + T \left( \left\lceil \frac{3}{4} n \right\rceil \right)$ for integers $n > 3$. In applying the Akra–Bazzi method, the first step is to find the value of $p$ for which $\frac{7}{4} \left(\frac{1}{2}\right)^p + \left(\frac{3}{4} \right)^p = 1$. In this example, $p=2$. Then, using the formula, the asymptotic behavior can be determined as follows:
-
-\begin{align}
-T(x) & \in \Theta \left( x^p\left( 1+\int_1^x \frac{g(u)}{u^{p+1}}du \right)\right) \\
-& = \Theta \left( x^2 \left( 1+\int_1^x \frac{u^2}{u^3}du \right)\right) \\
-& = \Theta(x^2(1 + \ln x)) \\
-& = \Theta(x^2\log x).
-\end{align}
-
-The Akra–Bazzi method is more useful than most other techniques for determining asymptotic behavior because it covers such a wide variety of cases. Its primary application is the approximation of the running time of many divide-and-conquer algorithms. For example, in merge sort, the number of comparisons required in the worst case, which is roughly proportional to its runtime, is given recursively as $T(1) = 0$ and
-$$
-T(n) = T\left(\left\lfloor \frac{1}{2} n \right\rfloor \right) + T\left(\left\lceil \frac{1}{2} n \right\rceil \right) + n - 1
-$$
-
-for integers $n > 0$, and can thus be computed using the Akra–Bazzi method to be $\Theta(n \log n)$.
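Since every $b_i$ lies in $(0, 1)$, the map $p \mapsto \sum_i a_i b_i^p$ is strictly decreasing, so the exponent $p$ can be found by bisection. A minimal Python sketch (illustrative only) recovers $p = 2$ for the example above and $p = 1$ for the merge-sort recurrence:

```
def akra_bazzi_p(a, b, lo=-10.0, hi=10.0, tol=1e-12):
    """Solve sum(a_i * b_i**p) = 1 for p by bisection; the left side is
    strictly decreasing in p because every b_i lies in (0, 1)."""
    f = lambda p: sum(ai * bi**p for ai, bi in zip(a, b)) - 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

print(akra_bazzi_p([7/4, 1], [1/2, 3/4]))   # ~2.0
print(akra_bazzi_p([1, 1], [1/2, 1/2]))     # ~1.0 (merge sort)
```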
diff --git a/wiki/wikipedia/2446.txt b/wiki/wikipedia/2446.txt deleted file mode 100644 index 7b45925df930f8d3842e2b8728e42f60ba201f0f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2446.txt +++ /dev/null @@ -1,39 +0,0 @@ -In mathematics, Ribet's theorem (earlier called the epsilon conjecture or ε-conjecture) is a statement in number theory concerning properties of Galois representations associated with modular forms. It was proposed by Jean-Pierre Serre and proven by Ken Ribet. The proof of the epsilon conjecture was a significant step towards the proof of Fermat's Last Theorem. As shown by Serre and Ribet, the Taniyama–Shimura conjecture (whose status was unresolved at the time) and the epsilon conjecture together imply that Fermat's Last Theorem is true. - -In mathematical terms, Ribet's theorem shows that if the Galois representation associated with an elliptic curve has certain properties, then that curve cannot be modular (in the sense that there cannot exist a modular form which gives rise to the same Galois representation). - -Let f be a weight 2 newform on $\Gamma_0(qN)$ (i.e. of level qN, where q does not divide N) with absolutely irreducible 2-dimensional mod p Galois representation $\rho_{f,p}$, unramified at q if q ≠ p and finite flat at q = p. Then there exists a weight 2 newform g of level N such that
$$
 \rho_{f,p} \simeq \rho_{g,p}.
$$ - -In particular, if E is an elliptic curve over $\mathbb{Q}$ with conductor qN, then the modularity theorem guarantees that there exists a weight 2 newform f of level qN such that the 2-dimensional mod p Galois representation $\rho_{f,p}$ of f is isomorphic to the 2-dimensional mod p Galois representation $\rho_{E,p}$ of E. To apply Ribet's theorem to $\rho_{E,p}$, it suffices to check the irreducibility and ramification of $\rho_{E,p}$. Using the theory of the Tate curve, one can prove that $\rho_{E,p}$ is unramified at q ≠ p and finite flat at q = p if p divides the power to which q appears in the minimal discriminant $\Delta_E$. Then Ribet's theorem implies that there exists a weight 2 newform g of level N such that $\rho_{g,p} \simeq \rho_{E,p}$. - -Note that Ribet's theorem does not guarantee that if one begins with an elliptic curve E of conductor qN, there exists an elliptic curve E' of conductor N such that $\rho_{E,p} \simeq \rho_{E',p}$. The newform g of level N may not have rational Fourier coefficients, and hence may be associated to a higher-dimensional abelian variety, not an elliptic curve. For example, the elliptic curve 4171a1 in the Cremona database, given by the equation
$$
E: y^2 + xy + y = x^3 - 663204x + 206441595
$$ - -with conductor $43 \times 97$ and discriminant $43^7 \times 97^3$, does not level-lower mod 7 to an elliptic curve of conductor 97. Rather, the mod p Galois representation is isomorphic to the mod p Galois representation of an irrational newform g of level 97. - -However, for p large enough compared to the level N of the level-lowered newform, a rational newform (e.g. an elliptic curve) must level-lower to another rational newform (e.g. an elliptic curve). In particular, for $p \gg N^{N^{1+\varepsilon}}$ the mod p Galois representation of a rational newform cannot be isomorphic to that of an irrational newform of level N. - -Similarly, the Frey–Mazur conjecture predicts that for p large enough (independent of the conductor N), elliptic curves with isomorphic mod p Galois representations are in fact isogenous, and hence have the same conductor. Thus non-trivial level-lowering between rational newforms is not predicted to occur for large p (in particular for p > 17).
- -In his thesis, Yves Hellegouarch came up with the idea of associating solutions (a,b,c) of Fermat's equation with a completely different mathematical object: an elliptic curve. - -If p is an odd prime and a, b, and c are positive integers such that
$$
a^p + b^p = c^p,
$$ - -then a corresponding Frey curve is an algebraic curve given by the equation
$$
y^2 = x(x - a^p)(x + b^p).
$$ - -This is a nonsingular algebraic curve of genus one defined over $\mathbb{Q}$, and its projective completion is an elliptic curve over $\mathbb{Q}$. - -In 1982 Gerhard Frey called attention to the unusual properties of the same curve as Hellegouarch, now called a Frey curve. This provided a bridge between Fermat and Taniyama by showing that a counterexample to Fermat's Last Theorem would create such a curve that would not be modular. The conjecture attracted considerable interest when Frey (1986) suggested that the Taniyama–Shimura–Weil conjecture implies Fermat's Last Theorem. However, his argument was not complete. In 1985 Jean-Pierre Serre proposed that a Frey curve could not be modular and provided a partial proof of this. This showed that a proof of the semistable case of the Taniyama–Shimura conjecture would imply Fermat's Last Theorem. Serre did not provide a complete proof and what was missing became known as the epsilon conjecture or ε-conjecture. In the summer of 1986, Kenneth Alan Ribet proved the epsilon conjecture, thereby proving that the Taniyama–Shimura–Weil conjecture implied Fermat's Last Theorem. - -Suppose that the Fermat equation with exponent p ≥ 5 had a solution in non-zero integers a, b, c. Let us form the corresponding Frey curve $E_{a^p,b^p,c^p}$. It is an elliptic curve and one can show that its minimal discriminant Δ is equal to $2^{-8}(abc)^{2p}$ and its conductor N is the radical of abc, i.e. the product of all distinct primes dividing abc. By an elementary consideration of the equation $a^p + b^p = c^p$, it is clear that one of a, b, c is even and hence so is N. By the Taniyama–Shimura conjecture, E is a modular elliptic curve. Since all odd primes dividing a, b, c in N appear to a pth power in the minimal discriminant Δ, by Ribet's theorem one can perform level descent modulo p repetitively to strip off all odd primes from the conductor. However, there are no newforms of level 2 as the genus of the modular curve $X_0(2)$ is zero (and newforms of level N are differentials on $X_0(N)$). diff --git a/wiki/wikipedia/2447.txt b/wiki/wikipedia/2447.txt deleted file mode 100644 index 1c42c5dc8b7717c2ea3f235f5a93e59e4bd8aca6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2447.txt +++ /dev/null @@ -1,43 +0,0 @@ -In information technology, a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction. Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems. - -By the everyday usage definition of the phrase, all computer systems are reasoning systems in that they all automate some type of logic or decision. In typical use in the Information Technology field, however, the phrase is usually reserved for systems that perform more complex kinds of reasoning: for example, not for systems that do fairly straightforward types of reasoning such as calculating a sales tax or a customer discount, but for systems that make logical inferences about a medical diagnosis or a mathematical theorem. Reasoning systems come in two modes: interactive and batch processing.
Interactive systems interface with the user to ask clarifying questions or otherwise allow the user to guide the reasoning process. Batch systems take in all the available information at once and generate the best answer possible without user feedback or guidance. - -Reasoning systems have a wide field of application that includes scheduling, business rule processing, problem solving, complex event processing, intrusion detection, predictive analytics, robotics, computer vision, and natural language processing. - -The first reasoning systems were theorem provers, systems that represent axioms and statements in First Order Logic and then use rules of logic such as modus ponens to infer new statements. Another early type of reasoning system were general problem solvers. These were systems such as the General Problem Solver designed by Newell and Simon. General problem solvers attempted to provide a generic planning engine that could represent and solve structured problems. They worked by decomposing problems into smaller more manageable sub-problems, solving each sub-problem and assembling the partial answers into one final answer. Another example general problem solver was the SOAR family of systems. - -In practice these theorem provers and general problem solvers were seldom useful for practical applications and required specialized users with knowledge of logic to utilize. The first practical application of automated reasoning were expert systems. Expert systems focused on much more well defined domains than general problem solving such as medical diagnosis or analyzing faults in an aircraft. Expert systems also focused on more limited implementations of logic. Rather than attempting to implement the full range of logical expressions they typically focused on modus-ponens implemented via IF-THEN rules. Focusing on a specific domain and allowing only a restricted subset of logic improved the performance of such systems so that they were practical for use in the real world and not merely as research demonstrations as most previous automated reasoning systems had been. The engine used for automated reasoning in expert systems were typically called inference engines. Those used for more general logical inferencing are typically called theorem provers. - -With the rise in popularity of expert systems many new types of automated reasoning were applied to diverse problems in government and industry. Some such as case-based reasoning were off shoots of expert systems research. Others such as constraint satisfaction algorithms were also influenced by fields such as decision technology and linear programming. Also, a completely different approach, one not based on symbolic reasoning but on a connectionist model has also been extremely productive. This latter type of automated reasoning is especially well suited to pattern matching and signal detection types of problems such as text searching and face matching. - -The term reasoning system can be used to apply to just about any kind of sophisticated decision support system as illustrated by the specific areas described below. However, the most common use of the term reasoning system implies the computer representation of logic. Various implementations demonstrate significant variation in terms of systems of logic and formality. Most reasoning systems implement variations of propositional and symbolic (predicate) logic. 
These variations may be mathematically precise representations of formal logic systems (e.g., FOL), or extended and hybrid versions of those systems (e.g., Courteous logic). Reasoning systems may explicitly implement additional logic types (e.g., modal, deontic, temporal logics). However, many reasoning systems implement imprecise and semi-formal approximations to recognised logic systems. These systems typically support a variety of procedural and semi-declarative techniques in order to model different reasoning strategies. They emphasise pragmatism over formality and may depend on custom extensions and attachments in order to solve real-world problems. - -Many reasoning systems employ deductive reasoning to draw inferences from available knowledge. These inference engines support forward reasoning or backward reasoning to infer conclusions via modus ponens. The recursive reasoning methods they employ are termed ‘forward chaining’ and ‘backward chaining’, respectively (a minimal code sketch of forward chaining appears below). Although reasoning systems widely support deductive inference, some systems employ abductive, inductive, defeasible and other types of reasoning. Heuristics may also be employed to determine acceptable solutions to intractable problems. - -Reasoning systems may employ the closed world assumption (CWA) or open world assumption (OWA). The OWA is often associated with ontological knowledge representation and the Semantic Web. Different systems exhibit a variety of approaches to negation. As well as logical or bitwise complement, systems may support existential forms of strong and weak negation including negation-as-failure and ‘inflationary’ negation (negation of non-ground atoms). Different reasoning systems may support monotonic or non-monotonic reasoning, stratification and other logical techniques. - -Many reasoning systems provide capabilities for reasoning under uncertainty. This is important when building situated reasoning agents which must deal with uncertain representations of the world. There are several common approaches to handling uncertainty. These include the use of certainty factors, probabilistic methods such as Bayesian inference or Dempster–Shafer theory, multi-valued (‘fuzzy’) logic and various connectionist approaches. - -This section provides a non-exhaustive and informal categorisation of common types of reasoning system. These categories are not absolute. They overlap to a significant degree and share a number of techniques, methods and algorithms. - -Constraint solvers solve constraint satisfaction problems (CSPs). They support constraint programming. A constraint is a condition which must be met by any valid solution to a problem. Constraints are defined declaratively and applied to variables within given domains. Constraint solvers use search, backtracking and constraint propagation techniques to find solutions and to determine optimal solutions. They may employ forms of linear and nonlinear programming. They are often used to perform optimization within highly combinatorial problem spaces. For example, they may be used to calculate optimal scheduling, design efficient integrated circuits or maximise productivity in a manufacturing process. - -Theorem provers use automated reasoning techniques to determine proofs of mathematical theorems. They may also be used to verify existing proofs. In addition to academic use, typical applications of theorem provers include verification of the correctness of integrated circuits, software programs, engineering designs, etc.
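The forward-chaining strategy described above is easy to sketch in code. The rules and facts below are invented toy data, and the function is a minimal illustration rather than a production rule engine:

```
# Toy forward-chaining engine: a rule fires via modus ponens when all of its
# antecedents are among the known facts (all data here is invented).
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "high_risk"}, "recommend_test"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                        # iterate to a fixed point
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)     # "fire" the rule
                changed = True
    return facts

print(forward_chain({"fever", "cough", "high_risk"}, rules))
# {'fever', 'cough', 'high_risk', 'flu_suspected', 'recommend_test'}
```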
- -Logic programs (LPs) are software programs written using programming languages whose primitives and expressions provide direct representations of constructs drawn from mathematical logic. An example of a general-purpose logic programming language is Prolog. LPs represent the direct application of logic programming to solve problems. Logic programming is characterised by highly declarative approaches based on formal logic, and has wide application across many disciplines. - -Rule engines represent conditional logic as discrete rules. Rule sets can be managed and applied separately to other functionality. They have wide applicability across many domains. Many rule engines implement reasoning capabilities. A common approach is to implement production systems to support forward or backward chaining. Each rule (‘production’) binds a conjunction of predicate clauses to a list of executable actions. - -At run-time, the rule engine matches productions against facts and executes (‘fires’) the associated action list for each match. If those actions remove or modify any facts, or assert new facts, the engine immediately re-computes the set of matches. Rule engines are widely used to model and apply business rules, to control decision-making in automated processes and to enforce business and technical policies. - -Deductive classifiers arose slightly later than rule-based systems and were a component of a new type of artificial intelligence knowledge representation tool known as frame languages. A frame language describes the problem domain as a set of classes, subclasses, and relations among the classes. It is similar to the object-oriented model. Unlike object-oriented models however, frame languages have a formal semantics based on first order logic. - -They utilize this semantics to provide input to the deductive classifier. The classifier in turn can analyze a given model (known as an ontology) and determine if the various relations described in the model are consistent. If the ontology is not consistent the classifier will highlight the declarations that are inconsistent. If the ontology is consistent the classifier can then do further reasoning and draw additional conclusions about the relations of the objects in the ontology. - -For example, it may determine that an object is actually a subclass or instance of additional classes as those described by the user. Classifiers are an important technology in analyzing the ontologies used to describe models in the Semantic web. - -Machine learning systems evolve their behavior over time based on experience. This may involve reasoning over observed events or example data provided for training purposes. For example, machine learning systems may use inductive reasoning to generate hypotheses for observed facts. Learning systems search for generalised rules or functions that yield results in line with observations and then use these generalisations to control future behavior. - -Case-based reasoning (CBR) systems provide solutions to problems by analysing similarities to other problems for which known solutions already exist. They use analogical reasoning to infer solutions based on case histories. CBR systems are commonly used in customer/technical support and call centre scenarios and have applications in industrial manufacture, agriculture, medicine, law and many other areas. - -A procedural reasoning system (PRS) uses reasoning techniques to select plans from a procedural knowledge base. 
Each plan represents a course of action for the achievement of a given goal. The PRS implements a belief-desire-intention model by reasoning over facts (‘beliefs’) to select appropriate plans (‘intentions’) for given goals (‘desires’). Typical applications of PRS include management, monitoring and fault detection systems. diff --git a/wiki/wikipedia/2448.txt b/wiki/wikipedia/2448.txt deleted file mode 100644 index 91b7e67339dd5ba75081c1b1eab4bc4406914813..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2448.txt +++ /dev/null @@ -1,85 +0,0 @@ -Inequalities are very important in the study of information theory. There are a number of different contexts in which these inequalities appear. - -Consider a tuple $X_1,X_2,\dots,X_n$ of $n$ finitely (or at most countably) supported random variables on the same probability space. There are $2^n$ subsets for which (joint) entropies can be computed. For example, when n = 2, we may consider the entropies $H(X_1),$ $H(X_2),$ and $H(X_1, X_2)$. They satisfy the following inequalities (which together characterize the range of the marginal and joint entropies of two random variables): - -* $H(X_1) \ge 0$ - -* $H(X_2) \ge 0$ - -* $H(X_1) \le H(X_1, X_2)$ - -* $H(X_2) \le H(X_1, X_2)$ - -* $H(X_1, X_2) \le H(X_1) + H(X_2).$ - -In fact, these can all be expressed as special cases of a single inequality involving the conditional mutual information, namely
$$
I(A;B|C) \ge 0,
$$ - -where $A$, $B$, and $C$ each denote the joint distribution of some arbitrary (possibly empty) subset of our collection of random variables. Inequalities that can be derived as linear combinations of this are known as Shannon-type inequalities. - -For larger $n$ there are further restrictions on the possible values of entropy. - -To make this precise, a vector $h$ in $\mathbb{R}^{2^n}$ indexed by subsets of $\{1,\dots,n\}$ is said to be entropic if there is a joint discrete distribution of n random variables $X_1,\dots,X_n$ such that $h_I = H(X_i \colon i \in I)$ is their joint entropy, for each subset $I$. - -The set of entropic vectors is denoted $\Gamma^*_n$, following the notation of Yeung. - -It is neither closed nor convex for $n \geq 3$, but its topological closure $\overline{\Gamma^*_n}$ is known to be convex and hence it can be characterized by the (infinitely many) linear inequalities satisfied by all entropic vectors, called entropic inequalities. - -The set of all vectors that satisfy Shannon-type inequalities (but not necessarily other entropic inequalities) contains $\overline{\Gamma^*_n}$. - -This containment is strict for $n \geq 4$, and the further inequalities are known as non-Shannon-type inequalities. - -Zhang and Yeung reported the first non-Shannon-type inequality. - -Matúš proved that no finite set of inequalities can characterize (by linear combinations) all entropic inequalities. In other words, the region $\overline{\Gamma^*_n}$ is not a polytope. - -A great many important inequalities in information theory are actually lower bounds for the Kullback–Leibler divergence. Even the Shannon-type inequalities can be considered part of this category, since the interaction information can be expressed as the Kullback–Leibler divergence of the joint distribution with respect to the product of the marginals, and thus these inequalities can be seen as a special case of Gibbs' inequality. - -On the other hand, it seems to be much more difficult to derive useful upper bounds for the Kullback–Leibler divergence.
This is because the Kullback–Leibler divergence $D_{KL}(P\|Q)$ depends very sensitively on events that are very rare in the reference distribution Q. $D_{KL}(P\|Q)$ increases without bound as an event of finite non-zero probability in the distribution P becomes exceedingly rare in the reference distribution Q, and in fact $D_{KL}(P\|Q)$ is not even defined if an event of non-zero probability in P has zero probability in Q. (Hence the requirement that P be absolutely continuous with respect to Q.) - -Gibbs' inequality, the fundamental inequality in this family, states that the Kullback–Leibler divergence is non-negative. - -Another inequality concerning the Kullback–Leibler divergence is known as Kullback's inequality. If P and Q are probability distributions on the real line with P absolutely continuous with respect to Q, and whose first moments exist, then
$$
D_{KL}(P\|Q) \ge \Psi_Q^*(\mu'_1(P)),
$$ - -where $\Psi_Q^*$ is the large deviations rate function, i.e. the convex conjugate of the cumulant-generating function, of Q, and $\mu'_1(P)$ is the first moment of P. - -The Cramér–Rao bound is a corollary of this result. - -Pinsker's inequality relates the Kullback–Leibler divergence and the total variation distance. It states that if P, Q are two probability distributions, then
$$
\sqrt{\frac{1}{2}D_{KL}^{(e)}(P\|Q)} \ge \sup \{ |P(A) - Q(A)| : A\text{ is an event to which probabilities are assigned} \},
$$ - -where
$$
D_{KL}^{(e)}(P\|Q)
$$ - -is the Kullback–Leibler divergence in nats and
$$
 \sup_A |P(A) - Q(A)|
$$ - -is the total variation distance. - -In 1957, Hirschman showed that for a (reasonably well-behaved) function $f:\mathbb R \rightarrow \mathbb C$ such that $\int_{-\infty}^\infty |f(x)|^2dx = 1,$ and its Fourier transform $g(y)=\int_{-\infty}^\infty f(x) e^{-2 \pi i x y}dx,$ the sum of the differential entropies of $|f|^2$ and $|g|^2$ is non-negative, i.e.
$$
-\int_{-\infty}^\infty |f(x)|^2 \log |f(x)|^2 dx -\int_{-\infty}^\infty |g(y)|^2 \log |g(y)|^2 dy \ge 0.
$$ - -Hirschman conjectured, and it was later proved, that a sharper bound of $\log(e/2),$ which is attained in the case of a Gaussian distribution, could replace the right-hand side of this inequality. This is especially significant since it implies, and is stronger than, Weyl's formulation of Heisenberg's uncertainty principle. - -Given discrete random variables $X$, $Y$, and $Y'$, such that $X$ takes values only in the interval [-1, 1] and $Y'$ is determined by $Y$ (such that $H(Y'|Y)=0$), we have
$$
\mathbb E \big( \big| \mathbb E(X|Y') - \mathbb E(X|Y) \big| \big) \le \sqrt {I(X;Y|Y') 2 \log 2 },
$$ - -relating the conditional expectation to the conditional mutual information. This is a simple consequence of Pinsker's inequality. (Note: the correction factor log 2 inside the radical arises because we are measuring the conditional mutual information in bits rather than nats.) - -Several machine-based proof checker algorithms are now available. ITIP is a Matlab-based proof checker for all Shannon-type inequalities. Xitip is an open source, faster version of the same algorithm implemented in C with a graphical front end. Xitip also has a built-in language parsing feature which supports a broader range of random variable descriptions as input. AITIP and oXitip are cloud-based implementations for validating Shannon-type inequalities. oXitip uses the GLPK optimizer and has a C++ backend based on Xitip with a web-based user interface. AITIP uses the Gurobi solver for optimization and a mix of Python and C++ in the backend implementation.
It can also provide the canonical break down of the inequalities in terms of basic Information measures. diff --git a/wiki/wikipedia/2449.txt b/wiki/wikipedia/2449.txt deleted file mode 100644 index dc1e49439a6740ba6042be8bd8dca66be538da8b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2449.txt +++ /dev/null @@ -1,149 +0,0 @@ -Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. The algorithm starts at the root node (selecting some arbitrary node as the root node in the case of a graph) and explores as far as possible along each branch before backtracking. - -A version of depth-first search was investigated in the 19th century by French mathematician Charles Pierre Trémaux as a strategy for solving mazes. - -The time and space analysis of DFS differs according to its application area. In theoretical computer science, DFS is typically used to traverse an entire graph, and takes time 1=$O(|V| + |E|)$, where |V| is the number of vertices and |E| the number of edges. This is linear in the size of the graph. In these applications it also uses space $O(|V|)$ in the worst case to store the stack of vertices on the current search path as well as the set of already-visited vertices. Thus, in this setting, the time and space bounds are the same as for breadth-first search and the choice of which of these two algorithms to use depends less on their complexity and more on the different properties of the vertex orderings the two algorithms produce. - -For applications of DFS in relation to specific domains, such as searching for solutions in artificial intelligence or web-crawling, the graph to be traversed is often either too large to visit in its entirety or infinite (DFS may suffer from non-termination). In such cases, search is only performed to a limited depth; due to limited resources, such as memory or disk space, one typically does not use data structures to keep track of the set of all previously visited vertices. When search is performed to a limited depth, the time is still linear in terms of the number of expanded vertices and edges (although this number is not the same as the size of the entire graph because some vertices may be searched more than once and others not at all) but the space complexity of this variant of DFS is only proportional to the depth limit, and as a result, is much smaller than the space needed for searching to the same depth using breadth-first search. For such applications, DFS also lends itself much better to heuristic methods for choosing a likely-looking branch. When an appropriate depth limit is not known a priori, iterative deepening depth-first search applies DFS repeatedly with a sequence of increasing limits. In the artificial intelligence mode of analysis, with a branching factor greater than one, iterative deepening increases the running time by only a constant factor over the case in which the correct depth limit is known due to the geometric growth of the number of nodes per level. - -DFS may also be used to collect a sample of graph nodes. However, incomplete DFS, similarly to incomplete BFS, is biased towards nodes of high degree. - -For the following graph: - -a depth-first search starting at the node A, assuming that the left edges in the shown graph are chosen before right edges, and assuming the search remembers previously visited nodes and will not repeat them (since this is a small graph), will visit the nodes in the following order: A, B, D, F, E, C, G. 
The edges traversed in this search form a Trémaux tree, a structure with important applications in graph theory. - -Performing the same search without remembering previously visited nodes results in visiting the nodes in the order A, B, D, F, E, A, B, D, F, E, etc. forever, caught in the A, B, D, F, E cycle and never reaching C or G. - -Iterative deepening is one technique to avoid this infinite loop and would reach all nodes. - -A convenient description of a depth-first search of a graph is in terms of a spanning tree of the vertices reached during the search. Based on this spanning tree, the edges of the original graph can be divided into three classes: forward edges, which point from a node of the tree to one of its descendants, back edges, which point from a node to one of its ancestors, and cross edges, which do neither. Sometimes tree edges, edges which belong to the spanning tree itself, are classified separately from forward edges. If the original graph is undirected then all of its edges are tree edges or back edges. - -It is also possible to use depth-first search to linearly order the vertices of a graph or tree. There are four possible ways of doing this: - -* A preordering is a list of the vertices in the order that they were first visited by the depth-first search algorithm. This is a compact and natural way of describing the progress of the search, as was done earlier in this article. A preordering of an expression tree is the expression in Polish notation. - -* A postordering is a list of the vertices in the order that they were last visited by the algorithm. A postordering of an expression tree is the expression in reverse Polish notation. - -* A reverse preordering is the reverse of a preordering, i.e. a list of the vertices in the opposite order of their first visit. Reverse preordering is not the same as postordering. - -* A reverse postordering is the reverse of a postordering, i.e. a list of the vertices in the opposite order of their last visit. Reverse postordering is not the same as preordering. - -For binary trees there are additionally in-ordering and reverse in-ordering. - -For example, when searching the directed graph below beginning at node A, the sequence of traversals is either A B D B A C A or A C D C A B A (choosing to first visit B or C from A is up to the algorithm). Note that repeat visits in the form of backtracking to a node, to check if it still has unvisited neighbors, are included here (even if it is found to have none). Thus the possible preorderings are A B D C and A C D B, while the possible postorderings are D B C A and D C B A, and the possible reverse postorderings are A C B D and A B C D. - -Reverse postordering produces a topological sorting of any directed acyclic graph. This ordering is also useful in control-flow analysis as it often represents a natural linearization of the control flows. The graph above might represent the flow of control in the code fragment below, and it is natural to consider this code in the order A B C D or A C B D but not natural to use the order A B D C or A C D B. - -if (A) then { - -B - -} else { - -C - -} - -D - -Input: A graph G and a vertex v of G. - -Output: All vertices reachable from v labeled as discovered. - -A recursive implementation of DFS: - -procedure DFS(G, v) is - -label v as discovered - -for all directed edges from v to w that are in G.adjacentEdges(v) do - -if vertex w is not labeled as discovered then - -recursively call DFS(G, w) - -The order in which the vertices are discovered by this algorithm is called the lexicographic order.
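A direct Python transcription of this pseudocode is given below. Since the example figure is not reproduced in this text, the adjacency lists are a guess chosen only to be consistent with the visit order A, B, D, F, E, C, G stated earlier:

```
# Hypothetical adjacency lists; neighbors are listed left-to-right, matching
# the convention that left edges are chosen before right edges.
graph = {
    "A": ["B", "C"], "B": ["D", "F"], "C": ["G"],
    "D": [], "E": [], "F": ["E"], "G": [],
}

def dfs(graph, v, discovered=None):
    if discovered is None:
        discovered = []
    discovered.append(v)                  # label v as discovered
    for w in graph[v]:                    # edges in adjacency order
        if w not in discovered:
            dfs(graph, w, discovered)     # recursively call DFS(G, w)
    return discovered

print(dfs(graph, "A"))                    # ['A', 'B', 'D', 'F', 'E', 'C', 'G']
```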
- -A non-recursive implementation of DFS with worst-case space complexity $O(|E|)$, with the possibility of duplicate vertices on the stack: - -procedure DFS_iterative(G, v) is - -let S be a stack - -S.push(v) - -while S is not empty do - -v = S.pop() - -if v is not labeled as discovered then - -label v as discovered - -for all edges from v to w in G.adjacentEdges(v) do - -S.push(w) - -These two variations of DFS visit the neighbors of each vertex in the opposite order from each other: the first neighbor of v visited by the recursive variation is the first one in the list of adjacent edges, while in the iterative variation the first visited neighbor is the last one in the list of adjacent edges. The recursive implementation will visit the nodes from the example graph in the following order: A, B, D, F, E, C, G. The non-recursive implementation will visit the nodes as: A, E, F, B, D, C, G. - -The non-recursive implementation is similar to breadth-first search but differs from it in two ways: - -# it uses a stack instead of a queue, and - -# it delays checking whether a vertex has been discovered until the vertex is popped from the stack rather than making this check before adding the vertex. - -If G is a tree, replacing the queue of the breadth-first search algorithm with a stack will yield a depth-first search algorithm. For general graphs, replacing the stack of the iterative depth-first search implementation with a queue would also produce a breadth-first search algorithm, although a somewhat nonstandard one. - -Another possible implementation of iterative depth-first search uses a stack of iterators of the list of neighbors of a node, instead of a stack of nodes. This yields the same traversal as recursive DFS. - -procedure DFS_iterative(G, v) is - -let S be a stack - -S.push(iterator of G.adjacentEdges(v)) - -while S is not empty do - -if S.peek().hasNext() then - -w = S.peek().next() - -if w is not labeled as discovered then - -label w as discovered - -S.push(iterator of G.adjacentEdges(w)) - -else - -S.pop() - -Algorithms that use depth-first search as a building block include: - -* Finding connected components. - -* Topological sorting. - -* Finding 2-(edge or vertex)-connected components. - -* Finding 3-(edge or vertex)-connected components. - -* Finding the bridges of a graph. - -* Generating words in order to plot the limit set of a group. - -* Finding strongly connected components. - -* Determining whether a species is closer to one species or another in a phylogenetic tree. - -* Planarity testing. - -* Solving puzzles with only one solution, such as mazes. (DFS can be adapted to find all solutions to a maze by only including nodes on the current path in the visited set.) - -* Maze generation may use a randomized depth-first search. - -* Finding biconnectivity in graphs. - -The computational complexity of DFS was investigated by John Reif. More precisely, given a graph $G$, let $O=(v_1,\dots,v_n)$ be the ordering computed by the standard recursive DFS algorithm. This ordering is called the lexicographic depth-first search ordering. John Reif considered the complexity of computing the lexicographic depth-first search ordering, given a graph and a source. A decision version of the problem (testing whether some vertex u occurs before some vertex v in this order) is P-complete, meaning that it is "a nightmare for parallel processing". 
- -A depth-first search ordering (not necessarily the lexicographic one), can be computed by a randomized parallel algorithm in the complexity class RNC. As of 1997, it remained unknown whether a depth-first traversal could be constructed by a deterministic parallel algorithm, in the complexity class NC. diff --git a/wiki/wikipedia/245.txt b/wiki/wikipedia/245.txt deleted file mode 100644 index f786942a9b5f67a8634e603bc183e9c7c45cf41d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/245.txt +++ /dev/null @@ -1,75 +0,0 @@ -The modified due-date (MDD) scheduling heuristic is a greedy heuristic used to solve the single machine total weighted tardiness problem (SMTWTP). - -The modified due date scheduling is a scheduling heuristic, created in 1982 by Baker and Bertrand, used to solve the NP-hard single machine total-weighted tardiness problem. This problem is centered around reducing the global tardiness of a list of tasks which are characterized by their processing time, due date and weight by re-ordering them. - -This heuristic works the same way as other greedy algorithms. At each iteration, it finds the next job to schedule and add it to the list. This operation is repeated until no jobs are left unscheduled. MDD is similar to the earliest due date (EDD) heuristic except that MDD takes into account the partial sequence of job that have been already constructed, whereas EDD only looks at the jobs' due dates. - -Here is an implementation of the MDD algorithm in pseudo-code. It takes in an unsorted list of tasks and return the list sorted by increasing modified due date: - -function mdd(processed, task) - -return max(processed + task.processTime, task.dueDate) - -function mddSort(tasks) - -unsortedTasks = copy(tasks) - -sortedTasks = list - -processed = 0 - -while unsortedTasks isn't empty - -bestTask = unsortedTasks.getFirst() - -bestMdd = mdd(processed, bestTask) - -for task in unsortedTasks - -mdd = mdd(processed, task) - -if mdd < bestMdd then - -bestMdd = mdd - -bestTask = task - -sortedTasks.pushBack(bestTask) - -unsortedTasks.remove(bestTask) - -processed += bestTask.processTime - -return sortedTasks - -In this example we will schedule flight departures. Each flight is characterized by: - -* a due date: The time after which the plane is expected to have taken off - -* a processing time: The amount of time the plane takes to take off - -* a weight: An arbitrary value to specify the priority of the flight. - -We need to find an order for the flight to take off that will result in the smallest total weighted tardiness. For this example we will use the following values: - -In the default order, the total weighted tardiness is 136. - -The first step is to compute the modified due date for each flight. Since the current time is 0 and, in our example, we don’t have any flight whose due date is smaller than its processing time, the mdd of each flight is equal to its due date: - -The flight with the smallest MDD (Flight n° 3) is then processed, and the new modified due date is computed. The current time is now 5. - -The operation is repeated until no more flights are left unscheduled.
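The pseudocode above translates directly into Python. The flight values below are placeholders (the article's table of processing times and due dates is not reproduced in this text), so only the mechanics, not the exact numbers, match the worked example:

```
# Minimal Python version of mddSort; the task data is invented placeholder data.
flights = [
    {"name": 1, "process": 4, "due": 10},
    {"name": 2, "process": 8, "due": 13},
    {"name": 3, "process": 5, "due": 5},
]

def mdd(processed, task):
    return max(processed + task["process"], task["due"])

def mdd_sort(tasks):
    unscheduled, schedule, processed = list(tasks), [], 0
    while unscheduled:
        # greedily pick the task with the smallest modified due date
        best = min(unscheduled, key=lambda t: mdd(processed, t))
        unscheduled.remove(best)
        schedule.append(best)
        processed += best["process"]
    return schedule

print([t["name"] for t in mdd_sort(flights)])   # [3, 1, 2] for this data
```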
    - -We obtain the following results: - -In this order, the total weighted tardiness is 92. - -This example can be generalized to schedule any list of job characterized by a due date and a processing time. - -Applying this heuristic will result in a sorted list of tasks which tardiness cannot be reduced by adjacent pair-wise interchange. MDD’s complexity is $O(n)$. - -There is a version of MDD called weighted modified due date (WMDD) which takes into account the weights. In such a case, the evaluation function is replaced by: - -function wmdd(processed, task) - -return (1 / task.weight) * max(task.processTime, task.dueDate - processed) diff --git a/wiki/wikipedia/2450.txt b/wiki/wikipedia/2450.txt deleted file mode 100644 index 7ef9d18f15e914f92a7f7d9dcbf23bf0c2ad8638..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2450.txt +++ /dev/null @@ -1,34 +0,0 @@ -In mathematics, the Askey–Gasper inequality is an inequality for Jacobi polynomials proved by and used in the proof of the Bieberbach conjecture. - -It states that if β ≥ 0, α + β ≥ −2, and −1 ≤ x ≤ 1 then -$$ -\sum_{k=0}^n \frac{P_k^{(\alpha,\beta)}(x)}{P_k^{(\beta,\alpha)}(1)} \ge 0 -$$ - -where -$$ -P_k^{(\alpha,\beta)}(x) -$$ - -is a Jacobi polynomial. - -The case when β = 0 can also be written as -$$ -{}_3F_2 \left (-n,n+\alpha+2,\tfrac{1}{2}(\alpha+1);\tfrac{1}{2}(\alpha+3),\alpha+1;t \right)>0, \qquad 0\leq t<1, \quad \alpha>-1. -$$ - -In this form, with α a non-negative integer, the inequality was used by Louis de Branges in his proof of the Bieberbach conjecture. - -gave a short proof of this inequality, by combining the identity - -\begin{align} - -\frac{(\alpha+2)_n}{n!} &\times {}_3F_2 \left (-n,n+\alpha+2,\tfrac{1}{2}(\alpha+1);\tfrac{1}{2}(\alpha+3),\alpha+1;t \right) = \\ - -&= \frac{\left(\tfrac{1}{2} \right)_j\left (\tfrac{\alpha}{2}+1 \right )_{n-j} \left (\tfrac{\alpha}{2}+\tfrac{3}{2} \right )_{n-2j}(\alpha+1)_{n-2j}}{j!\left (\tfrac{\alpha}{2}+\tfrac{3}{2} \right )_{n-j}\left (\tfrac{\alpha}{2}+\tfrac{1}{2} \right )_{n-2j}(n-2j)!} \times {}_3F_2\left (-n+2j,n-2j+\alpha+1,\tfrac{1}{2}(\alpha+1);\tfrac{1}{2}(\alpha+2),\alpha+1;t \right ) - -\end{align} - -with the Clausen inequality. - -Gasper give some generalizations of the Askey–Gasper inequality to basic hypergeometric series. diff --git a/wiki/wikipedia/2451.txt b/wiki/wikipedia/2451.txt deleted file mode 100644 index 49a7ce76d12f6e7df4f9bc37c84069628b27da5d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2451.txt +++ /dev/null @@ -1,7 +0,0 @@ -A savepoint is a way of implementing subtransactions (also known as nested transactions) within a relational database management system by indicating a point within a transaction that can be "rolled back to" without affecting any work done in the transaction before the savepoint was created. Multiple savepoints can exist within a single transaction. Savepoints are useful for implementing complex error recovery in database applications. If an error occurs in the midst of a multiple-statement transaction, the application may be able to recover from the error (by rolling back to a savepoint) without needing to abort the entire transaction. - -A savepoint can be declared by issuing a SAVEPOINT name statement. All changes made after a savepoint has been declared can be undone by issuing a ROLLBACK TO SAVEPOINT name command. Issuing RELEASE SAVEPOINT name will cause the named savepoint to be discarded, but will not otherwise affect anything. 
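A minimal sketch of this savepoint flow, using Python's built-in sqlite3 module (SQLite's savepoint support is noted below; the table, data, and savepoint name are invented for illustration):

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None            # manage transactions explicitly
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")

cur.execute("BEGIN")
cur.execute("INSERT INTO accounts VALUES ('alice', 100)")
cur.execute("SAVEPOINT after_alice")
cur.execute("INSERT INTO accounts VALUES ('bob', -999)")
# suppose validation fails here: undo work back to the savepoint only
cur.execute("ROLLBACK TO SAVEPOINT after_alice")
cur.execute("RELEASE SAVEPOINT after_alice")
cur.execute("COMMIT")                  # alice's row survives, bob's does not

print(cur.execute("SELECT * FROM accounts").fetchall())   # [('alice', 100)]
```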
Issuing the commands ROLLBACK or COMMIT will also discard any savepoints created since the start of the main transaction. - -Savepoints are defined in the SQL standard and are supported by all established SQL relational databases, including PostgreSQL, Oracle Database, Microsoft SQL Server, MySQL, DB2, SQLite (since 3.6.8), Firebird, H2 Database Engine, and Informix (since version 11.50xC3). - -Category:Data management diff --git a/wiki/wikipedia/2452.txt b/wiki/wikipedia/2452.txt deleted file mode 100644 index 7c7c3f5aff2b39e1c01add781badd0d8a5816622..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2452.txt +++ /dev/null @@ -1,57 +0,0 @@ -Z3, also known as the Z3 Theorem Prover, is a cross-platform satisfiability modulo theories (SMT) solver by Microsoft. - -Z3 was developed in the Research in Software Engineering (RiSE) group at Microsoft Research and is targeted at solving problems that arise in software verification and software analysis. Z3 supports arithmetic, fixed-size bit-vectors, extensional arrays, datatypes, uninterpreted functions, and quantifiers. Its main applications are extended static checking, test case generation, and predicate abstraction. - -In 2015, it received the Programming Languages Software Award from ACM SIGPLAN. In 2018, Z3 received the Test of Time Award from the European Joint Conferences on Theory and Practice of Software (ETAPS). Microsoft researchers Nikolaj Bjørner and Leonardo de Moura received the 2019 Herbrand Award for Distinguished Contributions to Automated Reasoning in recognition of their work in advancing theorem proving with Z3. - -Z3 was open sourced in the beginning of 2015. The source code is licensed under MIT License and hosted on GitHub. - -The solver can be built using Visual Studio, a Makefile or using CMake and runs on Windows, FreeBSD, Linux, and macOS. - -It has bindings for various programming languages including C, C++, Java, Julia, Haskell, OCaml, Python, WebAssembly, and .NET/Mono. The default input format is SMTLIB2. - -In this example propositional logic assertions are checked using functions to represent the propositions a and b. The following Z3 script checks to see if $\overline{a \land b} \equiv \overline{a} \lor \overline{b}$: - -(declare-fun a () Bool) - -(declare-fun b () Bool) - -(assert (not (= (not (and a b)) (or (not a)(not b))))) - -(check-sat) - -Result: - -unsat - -Note that the script asserts the negation of the proposition of interest. The unsat result means that the negated proposition is not satisfiable, thus proving the desired result (De Morgan's laws). - -The following script solves the two given equations, finding suitable values for the variables a and b: - -(declare-const a Int) - -(declare-const b Int) - -(assert (= (+ a b) 20)) - -(assert (= (+ a (* 2 b)) 10)) - -(check-sat) - -(get-model) - -Result: - -sat - -(model - -(define-fun b () Int - --10) - -(define-fun a () Int - -30) - -) diff --git a/wiki/wikipedia/2453.txt b/wiki/wikipedia/2453.txt deleted file mode 100644 index ebffe6f8b4cd90fa10bb1285d3d8e7eb9d983c34..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2453.txt +++ /dev/null @@ -1,25 +0,0 @@ -Picross S is a series of nonogram puzzle game developed and published by Jupiter for the Nintendo Switch game console. The games are a compilation of nonogram puzzles. As with all past entries in the series, the game involves the player completing nonograms in the shortest time possible. Picross S is the successor to Jupiter's Picross e series on Nintendo 3DS. 
- -The first game in the series, Picross S was released in Japan, Europe, and North America on 28 September 2017. The game received a mixed reception, with reviewers citing its presence of new puzzles as a positive but its blandness and lack of creativity in comparison to past entries as a negative. It released on the Nintendo Switch worldwide on 28 September 2017. - -Picross S2 was released in Japan, Europe, and North America on 2 August 2018. This was the first game to add the Clip Picross feature, which allows players to complete a larger picture by filling out small puzzles. These are unlocked by completing every 5th, 10th, and 15th level of a page. The game was received better than Picross S1, but was still criticized for its lack of innovation. - -Picross S3 was released in Japan, Europe, and North America on 25 April 2019. It is the highest rated game in the series based on review scores. The game was heavily praised for the new Color Picross mode despite only having 30 of the puzzle type. In Color Picross, the picture is determined by the numbers in the top and left of the puzzle, as well as each of the colors being listed in order, leading to more complex and innovative puzzles. - -Picross S4 was released in Japan, Europe, and North America on 23 April 2020. Picross S4 was generally liked and positively reviewed by critics due to the simple and classic nonogram gameplay, but was criticized for adding no new gameplay modes. - -The only new feature is the Extra menu, which gives the player two large 30x30 puzzles. If you have save files for Picross S1, Picross S2, or Picross S3 on your system, you will receive a special 40x30 puzzle for each game you own. The only way to play every puzzle in the game is to own all of the previous games in the series. - -Picross S5 was released in Japan, Europe, and North America on 26 November 2020. This game retains all the same modes and features of Picross S4, and adds two small "quality of life" features. A High Contrast mode was added for Color Picross, which changes the colors to make them easier to differentiate for colorblind players. The total completion time is displayed when all puzzles in a mode are completed, and for the full game once all modes are completed. These features were added to the four previous games in the series via patches released on the same day. - -Picross S6 was released in Japan, Europe, and North America on 22 April 2021. This game retains all the same modes and features of Picross S4 and S5. The puzzle grids in Extra Mode now feature alternating colour lines every 5 squares, to make the larger puzzle size easier to parse; this feature was added to Picross S4 and S5 via patches released around the same time. - -As with their Picross e series, Jupiter has used the mechanics and UI of Picross S as the basis for a number of licensed spin-off picross games. - -Kemono Friends Picross was released in Japan, Europe, and North America on 4 October 2018. Based on the Kemono Friends media franchise. - -Picross Lord of the Nazarick was released in Japan, Europe, and North America on 25 July 2019. Based on the Overlord anime. - -A Sega-themed entry featuring puzzles based on 59 classic Sega Genesis and Master System games including Sonic the Hedgehog, Alex Kidd and Phantasy Star. It was announced on 26 June 2020, and was released on 5 August 2021 in Japan, Europe and North America. 
Also known as Picross S Mega Drive & Mark III Edition in Japan and Picross S Mega Drive & Master System Edition in Europe, in line with the branding of Sega's 8- and 16-bit consoles in those regions. - -In the same fashion as past entries in the series, the games involve the player completing nonograms. diff --git a/wiki/wikipedia/2454.txt b/wiki/wikipedia/2454.txt deleted file mode 100644 index 020d8dbde3b7b91cd8765fc5d8296cc4d63c7762..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2454.txt +++ /dev/null @@ -1,31 +0,0 @@ -In the Oracle RDBMS environment, redo logs comprise files in a proprietary format which log a history of all changes made to the database. Each redo log file consists of redo records. A redo record, also called a redo entry, holds a group of change vectors, each of which describes or represents a change made to a single block in the database. - -For example, if a user UPDATEs a salary-value in a table containing employee-related data, the DBMS generates a redo record containing change-vectors that describe changes to the data segment block for the table. And if the user then COMMITs the update, Oracle generates another redo record and assigns the change a "system change number" (SCN). - -Whenever something changes in a datafile, Oracle records the change in the redo log. The name redo log indicates its purpose: If the database crashes, the RDBMS can redo (re-process) all changes on datafiles which will take the database data back to the state it was when the last redo record was written. DBAs use the views V$LOG, V$LOGFILE, V$LOG_HISTORY and V$THREAD to find information about the redo log of the database. Each redo log file belongs to exactly one group (of which at least two must exist). Exactly one of these groups is the CURRENT group (can be queried using the column status of v$log). Oracle uses that current group to write the redo log entries. When the group is full, a log switch occurs, making another group the current one. Each log switch causes checkpoint, however, the converse is not true: a checkpoint does not cause a redo log switch. One can also manually cause a redo-log switch using the ALTER SYSTEM SWITCH LOGFILE command. - -Redo log files occur in two types: - -* online redo logs ("ORL" or "redo logs" for short) - -* archived redo logs ("archive logs") - -Before a user receives a "Commit complete" message, the system must first successfully write the new or changed data to a redo log file. - -The RDBMS first writes all changes included in the transaction into the log buffer in the System Global Area (SGA). Using memory in this way for the initial capture aims to reduce disk IO. Of course, when a transaction commits, the redo log buffer must be flushed to disk, because otherwise the recovery for that commit could not be guaranteed. The LGWR (Log Writer) process does that flushing. - -Having a redo log makes it possible to replay SQL statements. Before an Oracle database changes data in a datafile it writes changes to the redo log. If something happens to one of the datafiles, a recovery procedure can restore a backed-up datafile and then replay the redo written since backup-time; this brings the datafile to the state it had before it became unavailable. Standby databases in an Oracle Data Guard environment use the same technique: one database (the primary database) records all changes and sends them to the standby database(s). Each standby database applies (replays) the arrived redo, resulting in synchronization with the primary database. 
- -If a database crashes, the recovery process has to apply all transactions, both uncommitted as well as committed, to the data-files on disk, using the information in the redo log files. Oracle must re-do all redo-log transactions that have both a BEGIN and a COMMIT entry (roll forward), and it must undo all transactions that have a BEGIN entry but no COMMIT entry (roll back). (Re-doing a transaction in this context simply means applying the information in the redo log files to the database; the system does not re-run the transaction itself.) The system thus re-creates committed transactions by applying the “after image” records in the redo log files to the database, and undoes incomplete transactions by using the "before image" records in the undo tablespace. - -Change data capture can read the redo logs. - -In Oracle Data Guard configurations, standby redo logs resemble their equivalent online redo logs, but serve to store redo data transmitted from a different database. - -Given the verbosity of the logging, Oracle Corporation provides methods for archiving redo logs (archive-logs), and this in turn can feed into data-backup scenarios and standby databases. - -The existence of a detailed series of individually logged transactions and actions provides the basis of several data-management enhancements such as Oracle Flashback, log-mining and point-in-time recovery. The concept of a database incarnation - -can influence the use of redo in database recovery. - -For database tuning purposes, efficiently coping with redo logs requires plentiful and fast-access disk. diff --git a/wiki/wikipedia/2455.txt b/wiki/wikipedia/2455.txt deleted file mode 100644 index d46224bc86a001034bf8cb6d7d547d9d3504d7ec..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2455.txt +++ /dev/null @@ -1,37 +0,0 @@ -{{Infobox video game - -| title = Tetris 2 - -| image = Tetris 2 NES.jpg - -| caption = North American NES cover art - -| developer = Nintendo R&D1 (NES, Game Boy)
    TOSE (NES, Game Boy)
    Bullet Proof Software (Super NES) - -| designer = - -| producer = Gunpei Yokoi - -| composer = Mitsuhiko Takano
    Miyuki Uemura - -| publisher = Nintendo - -| released = - -| modes = Single-player, multiplayer - -| series = Tetris - -| genre = Puzzle - -| platforms = Game Boy, Nintendo Entertainment System, Super Nintendo Entertainment System - -}} - -Tetris 2 (known in Japan as Tetris Flash ) is a video game published in 1993 and 1994 by Nintendo for the Game Boy, Nintendo Entertainment System and Super Nintendo Entertainment System. - -As a variation of the Tetris concept, rather than having the objective of matching horizontal lines of blocks that descend from the top of the screen as tetrominos, instead the player matches the colors of the descending blocks (which include irregular tetromino shapes) to blocks already fixed on the game board, which causes blocks to disappear from the board when three blocks of the same color are matched, in a manner similar to the game Dr. Mario. - -In the United States, it was the top-selling NES and Game Boy game in January 1994, and the top Game Boy game in February. In the United Kingdom, it was the top-selling NES game for eight months in 1994, in March and then from May through summer and autumn to November. It was also the top-selling Game Boy game in August 1994. - -Reviews of the NES version were more mixed. The magazine Game Players, who reviewed the NES released in February 1994, called Tetris 2 "a disappointing attempt for puzzle fans who have patiently waited for this sequel." Famitsu gave it a score of 21 out of 40. Famitsu also gave the Game Boy version a 23 out of 40 score. diff --git a/wiki/wikipedia/2456.txt b/wiki/wikipedia/2456.txt deleted file mode 100644 index 8434f27100070b33659184f3e3fa4d241c338a04..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2456.txt +++ /dev/null @@ -1,51 +0,0 @@ -Cytoscape is an open source bioinformatics software platform for visualizing molecular interaction networks and integrating with gene expression profiles and other state data. Additional features are available as plugins. Plugins are available for network and molecular profiling analyses, new layouts, additional file format support and connection with databases and searching in large networks. Plugins may be developed using the Cytoscape open Java software architecture by anyone and plugin community development is encouraged. Cytoscape also has a JavaScript-centric sister project named that can be used to analyse and visualise graphs in JavaScript environments, like a browser. - -Cytoscape was originally created at the Institute of Systems Biology in Seattle in 2002. Now, it is developed by an international consortium of open source developers. Cytoscape was initially made public in July, 2002 (v0.8); the second release (v0.9) was in November, 2002, and v1.0 was released in March 2003. Version 1.1.1 is the last stable release for the 1.0 series. Version 2.0 was initially released in 2004; Cytoscape 2.83, the final 2.xx version, was released in May 2012. Version 3.0 was released Feb 1, 2013, and the latest version, 3.4.0, was released in May 2016. - -The Cytoscape core developer team continues to work on this project and released Cytoscape 3.0 in 2013. This represented a major change in the Cytoscape architecture; it is a more modularized, expandable and maintainable version of the software. - -While Cytoscape is most commonly used for biological research applications, it is agnostic in terms of usage. Cytoscape can be used to visualize and analyze network graphs of any kind involving nodes and edges (e.g., social networks). 
A key aspect of the software architecture of Cytoscape is the use of plugins for specialized features. Plugins are developed by the core developers and the greater user community. - -Input - -* Input and construct molecular interaction networks from raw interaction files (SIF format) containing lists of protein–protein and/or protein–DNA interaction pairs; a minimal parsing sketch appears after this article. For yeast and other model organisms, large sources of pairwise interactions are available through the BIND and TRANSFAC databases. User-defined interaction types are also supported. - -* Load and save previously-constructed interaction networks in GML format (Graph Modelling Language). - -* Load and save networks and node/edge attributes in an XML document format called XGMML (eXtensible Graph Markup and Modeling Language). - -* Input mRNA expression profiles from tab- or space-delimited text files. - -* Load and save arbitrary attributes on nodes and edges. For example, input a set of custom annotation terms for your proteins, or create a set of confidence values for your protein–protein interactions. - -* Import gene functional annotations from the Gene Ontology (GO) and KEGG databases. - -* Directly import GO terms and annotations from OBO and Gene Association files. - -* Load and save the state of a Cytoscape session in a session (.cys) file. A session file includes networks, attributes (for nodes, edges and networks), desktop states (selected/hidden nodes and edges, window sizes), properties, and visual styles. - -Visualization - -* Customize network data display using powerful visual styles. - -* View a superposition of gene expression ratios and p-values on the network. Expression data can be mapped to node color, label, border thickness, or border color according to user-configurable visualization schemes. - -* Layout networks in two dimensions. A variety of layout algorithms are available, including cyclic and spring-embedded layouts. - -* Zoom in/out and pan for browsing the network. - -* Use the network manager to easily organize multiple networks; this organization can be saved in a session file. - -* Use the bird's eye view to easily navigate large networks. - -* Easily navigate large networks (100,000+ nodes and edges) thanks to an efficient rendering engine. - -Analysis - -* Plugins are available for network and molecular profile analysis. For example: - -** Filter the network to select subsets of nodes and/or interactions based on the current data. For instance, users may select nodes involved in a threshold number of interactions, nodes that share a particular GO annotation, or nodes whose gene expression levels change significantly in one or more conditions according to p-values loaded with the gene expression data. - -** Find active subnetworks/pathway modules. The network is screened against gene expression data to identify connected sets of interactions, i.e. interaction subnetworks, whose genes show particularly high levels of differential expression. The interactions contained in each subnetwork provide hypotheses for the regulatory and signaling interactions in control of the observed expression changes. - -** Find clusters (highly interconnected regions) in any network loaded into Cytoscape. Depending on the type of network, clusters may mean different things. For instance, clusters in a protein–protein interaction network have been shown to be protein complexes and parts of pathways. Clusters in a protein similarity network represent protein families.
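The SIF convention mentioned above is simple enough to parse by hand. The following Python sketch is not part of the original article: the three-record yeast-style network is invented for illustration, and only the standard whitespace-delimited `source interaction-type target` layout of SIF is assumed.

```python
# Minimal SIF (Simple Interaction Format) reader.
# Each record is "source interaction_type target [target2 ...]";
# the tiny example network below is invented for illustration.
SIF_TEXT = """\
YPL031C pd YHR071W
YNL199C pp YPR110C YDL075W
YGR218W pp YGL097W
"""

def parse_sif(text):
    """Return a list of (source, interaction_type, target) edges."""
    edges = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) < 3:
            continue  # skip blank lines and isolated nodes
        source, kind, targets = fields[0], fields[1], fields[2:]
        for target in targets:
            edges.append((source, kind, target))
    return edges

print(parse_sif(SIF_TEXT))
# [('YPL031C', 'pd', 'YHR071W'), ('YNL199C', 'pp', 'YPR110C'),
#  ('YNL199C', 'pp', 'YDL075W'), ('YGR218W', 'pp', 'YGL097W')]
```

Here `pp` and `pd` are the conventional abbreviations for protein–protein and protein–DNA interactions; user-defined types would be parsed the same way.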
diff --git a/wiki/wikipedia/2457.txt b/wiki/wikipedia/2457.txt deleted file mode 100644 index 91fb1b1ce6520c81f54b8955b2b34e74245c344a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2457.txt +++ /dev/null @@ -1,47 +0,0 @@ -Enterprise file synchronization and sharing (also known as EFSS and enterprise file sync and share) refers to software services that enable organizations to securely synchronize and share documents, photos, videos and files from multiple devices with employees and with external customers and partners. Organizations often adopt these technologies to prevent employees from using consumer-based file sharing apps to store, access and manage corporate data that is outside of the IT department’s control and visibility. - -EFSS applications are often characterized by having most or all of the following features and capabilities: - -* Sync files stored in corporate storage to user desktops and devices - -* Send links to large files with support for multiple file extensions and protocols - -* Integration with existing business applications via APIs, plugins and mobile apps - -* Built-in file creation, editing and previewing - -* User access permissions to files and folders - -* Protection of files stored and transferred by encryption, antivirus scanning, and DLP (data loss prevention) - -* Publish links to files with the ability to set a login requirement to access data - -* Authentication options for Active Directory, SAML, Azure Active Directory, etc. - -* Schedule and automate file transfers from automated systems and repositories - -* Audit and report file activities and system actions - -Depending on what an EFSS provider offers, services can be deployed using cloud computing, on-premises, or hybrid approaches. According to Forrester Research, some EFSS providers can lock down data in certain geographies for companies that are required to store content and metadata in specific jurisdictions. - -Box, one of the first EFSS products, was originally developed as a college project by Aaron Levie while he was a student at the University of Southern California in 2004. Levie left school to run the company full-time in 2005. - -Dropbox was founded in 2007 and officially launched at 2008's TechCrunch Disrupt conference. The same year, Microsoft began beta testing of Windows Live Folders, a predecessor of OneDrive. - -Around 2010, the EFSS market emerged with over 100 vendors from a variety of technology backgrounds, including backup and cloud storage (Citrix ShareFile, Syncplicity), managed file transfer (Accellion, Biscom, Box, Hightail, Thru), enterprise content management and more. Many were developed as alternatives to consumer file sync and sharing services that had neither the security features needed to protect company information nor the flexibility to integrate with existing content repositories and business applications. - -In October 2011, the software company Citrix Systems announced that it had acquired the private enterprise file sync and share service ShareFile, adding it to the Citrix product line. ShareFile was a competitor of Box and Dropbox but focused on selling its product to the IT departments of large organizations. - -In 2012, CTERA Networks entered the EFSS market. - -In July 2013, Forrester Research released the first “Forrester Wave” report on the EFSS market, in which it identified and scored products from the most significant providers.
- -On June 25, 2014, Google announced at its I/O Conference that it was entering the enterprise file sharing market with the release of “Google Drive for Work.” - -In July 2014, Gartner Research released its first “Magic Quadrant” report on the EFSS market. The study evaluates the strengths and cautions of the most notable vendors in the industry. - -In October 2014, the encryption-focused vendor Tresorit entered the EFSS market with Tresorit for Business. Tresorit is a competitor of Dropbox and Box, promising businesses stronger security and privacy compliance through end-to-end encryption. - -In April 2015, BlackBerry Limited paid between $100 million and $150 million to buy WatchDox Ltd. for its enterprise file sync and sharing capabilities. - -In July 2015, the EFSS vendor Syncplicity was sold by its previous owner, EMC Corporation, to the private equity firm Skyview Capital. diff --git a/wiki/wikipedia/2458.txt b/wiki/wikipedia/2458.txt deleted file mode 100644 index 65edf8eb828d38dbad5fa052e6f10381dd7c78b5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2458.txt +++ /dev/null @@ -1,55 +0,0 @@ -Berkson's paradox, also known as Berkson's bias, collider bias, or Berkson's fallacy, is a result in conditional probability and statistics which is often found to be counterintuitive, and hence a veridical paradox. It is a complicating factor arising in statistical tests of proportions. Specifically, it arises when there is an ascertainment bias inherent in a study design. The effect is related to the explaining away phenomenon in Bayesian networks, and to conditioning on a collider in graphical models. - -It is often described in the fields of medical statistics or biostatistics, as in the original description of the problem by Joseph Berkson. - -The most common example of Berkson's paradox is a false observation of a negative correlation between two positive traits, i.e., that members of a population which have some positive trait tend to lack a second one. Berkson's paradox occurs when this observation appears true when in reality the two properties are unrelated (or even positively correlated), because members of the population in which both are absent are not equally observed. For example, a person may observe from their experience that fast food restaurants in their area which serve good hamburgers tend to serve bad fries and vice versa; but because they would likely not eat anywhere where both were bad, they fail to allow for the large number of restaurants in this category, which would weaken or even flip the correlation. - -Berkson's original illustration involves a retrospective study examining a risk factor for a disease in a statistical sample from a hospital in-patient population. Because samples are taken from a hospital in-patient population, rather than from the general public, this can result in a spurious negative association between the disease and the risk factor. For example, if the risk factor is diabetes and the disease is cholecystitis, a hospital patient without diabetes is more likely to have cholecystitis than a member of the general population, since the patient must have had some non-diabetes (possibly cholecystitis-causing) reason to enter the hospital in the first place. That result will be obtained regardless of whether there is any association between diabetes and cholecystitis in the general population. - -An example presented by Jordan Ellenberg: Suppose Alex will only date a man if his niceness plus his handsomeness exceeds some threshold.
Then nicer men do not have to be as handsome to qualify for Alex's dating pool. So, among the men that Alex dates, Alex may observe that the nicer ones are less handsome on average (and vice versa), even if these traits are uncorrelated in the general population. Note that this does not mean that men in the dating pool compare unfavorably with men in the population. On the contrary, Alex's selection criterion means that Alex has high standards. The average nice man that Alex dates is actually more handsome than the average man in the population (since even among nice men, the ugliest portion of the population is skipped). Berkson's negative correlation is an effect that arises within the dating pool: the rude men that Alex dates must have been even more handsome to qualify. - -As a quantitative example, suppose a collector has 1000 postage stamps, of which 300 are pretty and 100 are rare, with 30 being both pretty and rare. 10% of all his stamps are rare and 10% of his pretty stamps are rare, so prettiness tells nothing about rarity. He puts the 370 stamps which are pretty or rare on display. Just over 27% of the stamps on display are rare (100/370), but still only 10% of the pretty stamps are rare (and 100% of the 70 not-pretty stamps on display are rare). If an observer only considers stamps on display, they will observe a spurious negative relationship between prettiness and rarity as a result of the selection bias (that is, not-prettiness strongly indicates rarity in the display, but not in the total collection). - -Two independent events become conditionally dependent (negatively dependent) given that at least one of them occurs. Symbolically: - -If $0 < P(A) < 1$, $0 < P(B) < 1$, and $P(A|B) = P(A)$, then $P(A|B,A \cup B) < P(A|A \cup B)$. - -* Event $A$ and event $B$ may or may not occur - -* $P(A|B)$, a conditional probability, is the probability of observing event $A$ given that $B$ is true. - -* Explanation: Events $A$ and $B$ are independent of each other - -* $P(A|B,A \cup B)$ is the probability of observing event $A$ given that $B$ and ($A$ or $B$) occur. This can also be written as $P(A|B \cap (A \cup B))$ - -* Explanation: The probability of $A$ given both $B$ and ($A$ or $B$) is smaller than the probability of $A$ given ($A$ or $B$) - -In other words, given two independent events, if you consider only outcomes where at least one occurs, then they become negatively dependent, as shown above. - -The cause is that the conditional probability of event $A$ occurring, given that it or $B$ occurs, is inflated: it is higher than the unconditional probability, because we have excluded cases where neither occurs: -$$ -P(A|A \cup B) > P(A) -$$ - -One can see this in tabular form as follows: of the four cells $A \cap B$, $A \cap {\sim}B$, ${\sim}A \cap B$ and ${\sim}A \cap {\sim}B$ (where $\sim A$ means "not $A$"), conditioning on $A \cup B$ keeps the first three and excludes only ${\sim}A \cap {\sim}B$. - -For instance, if one has a sample of $100$, and both $A$ and $B$ occur independently half the time ($P(A) = P(B) = 1 / 2$), one obtains $25$ outcomes in each of the four cells. So in $75$ outcomes, either $A$ or $B$ occurs, of which $50$ have $A$ occurring. By comparing the conditional probability of $A$ to the unconditional probability of $A$: -$$ -P(A|A \cup B) = 50 / 75 = 2 / 3 > P(A) = 50 / 100 = 1 / 2 -$$ - -We see that the probability of $A$ is higher ($2 / 3$) in the subset of outcomes where ($A$ or $B$) occurs, than in the overall population ($1 / 2$).
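The arithmetic in this example is easy to check mechanically. The short Python enumeration below is a sketch added for illustration (not part of the original article); it reproduces both the inflated probability computed here and the conditional probability discussed next.

```python
# Brute-force check of the numerical example: 100 equally likely outcomes,
# 25 in each cell of the 2x2 table for independent A and B with P = 1/2.
outcomes = [(a, b) for a in (0, 1) for b in (0, 1) for _ in range(25)]

union = [(a, b) for (a, b) in outcomes if a or b]   # condition on "A or B"
print(sum(a for a, _ in union) / len(union))        # P(A | A or B) = 50/75 = 2/3

given_b = [(a, b) for (a, b) in union if b]         # additionally require B
print(sum(a for a, _ in given_b) / len(given_b))    # P(A | B, A or B) = 25/50 = 1/2
```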
On the other hand, the probability of $A$ given both $B$ and ($A$ or $B$) is simply the unconditional probability of $A$, $P(A)$, since $A$ is independent of $B$. In the numerical example, we have conditioned on the top row of the table, that is, on the $50$ outcomes in which $B$ occurs; there the probability of $A$ is $25 / 50 = 1 / 2$. - -Berkson's paradox arises because the conditional probability of $A$ given $B$ within the three-cell subset equals the conditional probability in the overall population, but the unconditional probability within the subset is inflated relative to the unconditional probability in the overall population; hence, within the subset, the presence of $B$ decreases the conditional probability of $A$ (back to its overall unconditional probability): -$$ -P(A|B, A \cup B) = P(A|B) = P(A) -$$ -$$ -P(A|A \cup B) > P(A) -$$ diff --git a/wiki/wikipedia/2459.txt b/wiki/wikipedia/2459.txt deleted file mode 100644 index 04a5468b8a45cd9d75516d6a336f421192530b26..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2459.txt +++ /dev/null @@ -1,70 +0,0 @@ -In mathematics, computer science and economics, an optimization problem is the problem of finding the best solution from all feasible solutions. - -Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete: - -* An optimization problem with discrete variables is known as a discrete optimization problem, in which an object such as an integer, permutation or graph must be found from a countable set. - -* A problem with continuous variables is known as a continuous optimization problem, in which an optimal value of a continuous function must be found. They can include constrained problems and multimodal problems. - -The standard form of a continuous optimization problem is - -\begin{align} - -&\underset{x}{\operatorname{minimize}}& & f(x) \\ - -&\operatorname{subject\ to} - -& &g_i(x) \leq 0, \quad i = 1,\dots,m \\ - -&&&h_j(x) = 0, \quad j = 1, \dots,p - -\end{align} - -where - -* $f : \mathbb{R}^n \to \mathbb{R}$ is the objective function to be minimized over the $n$-variable vector $x$, - -* $g_i(x) \leq 0$ are called inequality constraints, - -* $h_j(x) = 0$ are called equality constraints, and - -* $m \geq 0$ and $p \geq 0$. - -If m = p = 0, the problem is an unconstrained optimization problem. By convention, the standard form defines a minimization problem. A maximization problem can be treated by negating the objective function. - -Formally, a combinatorial optimization problem A is a quadruple (I, f, m, g), where - -* I is a set of instances; - -* given an instance x ∈ I, f(x) is the set of feasible solutions; - -* given an instance x and a feasible solution y of x, m(x, y) denotes the measure of y, which is usually a positive real number; - -* g is the goal function, and is either min or max. - -The goal is then to find for some instance x an optimal solution, that is, a feasible solution y with -$$ -m(x, y) = g \bigl\{ m(x, y') \mid y' \in f(x) \bigr\} . -$$ - -For each combinatorial optimization problem, there is a corresponding decision problem that asks whether there is a feasible solution for some particular measure $m_0$. For example, if there is a graph G which contains vertices u and v, an optimization problem might be "find a path from u to v that uses the fewest edges". This problem might have an answer of, say, 4. A corresponding decision problem would be "is there a path from u to v that uses 10 or fewer edges?" This problem can be answered with a simple 'yes' or 'no'. A small programmatic sketch of both versions appears below.
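To make the optimization/decision correspondence concrete, here is a self-contained Python sketch (the toy graph and helper names are invented for illustration): the optimization version computes the fewest-edges path length by breadth-first search, and the decision version is answered by comparing that optimum against the bound.

```python
from collections import deque

# Toy undirected graph, invented for illustration.
GRAPH = {
    "u": ["a", "b"], "a": ["u", "c"], "b": ["u", "c"],
    "c": ["a", "b", "v"], "v": ["c"],
}

def fewest_edges(graph, start, goal):
    """Optimization version: length of a shortest path, by BFS."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # no feasible solution at all

def path_within(graph, start, goal, k):
    """Decision version: is there a path from start to goal with <= k edges?"""
    best = fewest_edges(graph, start, goal)
    return best is not None and best <= k

print(fewest_edges(GRAPH, "u", "v"))     # 3
print(path_within(GRAPH, "u", "v", 10))  # True
```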
- -In the field of approximation algorithms, algorithms are designed to find near-optimal solutions to hard problems. The usual decision version is then an inadequate definition of the problem, since it only specifies acceptable solutions. Even though we could introduce suitable decision problems, the problem is more naturally characterized as an optimization problem. - -==See also== - -*Counting problem (complexity) - -*Design optimization - -*Function problem - -*Glove problem - -*Operations research - -*Satisficing: the optimum need not be found, just a "good enough" solution. - -*Search problem - -*Semi-infinite programming diff --git a/wiki/wikipedia/246.txt b/wiki/wikipedia/246.txt deleted file mode 100644 index a5952c4d572c422fb2aa0afb519f39ab0316b938..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/246.txt +++ /dev/null @@ -1,188 +0,0 @@ -In mathematics, the Weil conjectures were highly influential proposals by André Weil. They led to a successful multi-decade program to prove them, in which many leading researchers developed the framework of modern algebraic geometry and number theory. - -The conjectures concern the generating functions (known as local zeta functions) derived from counting points on algebraic varieties over finite fields. A variety V over a finite field with q elements has a finite number of rational points (with coordinates in the original field), as well as points with coordinates in any finite extension of the original field. The generating function has coefficients derived from the numbers $N_k$ of points over the extension field with $q^k$ elements. - -Weil conjectured that such zeta functions for smooth varieties are rational functions, satisfy a certain functional equation, and have their zeros in restricted places. The last two parts were quite consciously modeled on the Riemann zeta function, a kind of generating function for prime integers, which obeys a functional equation and (conjecturally) has its zeros restricted by the Riemann hypothesis. The rationality was proved by Bernard Dwork, the functional equation by Alexander Grothendieck, and the analogue of the Riemann hypothesis by Pierre Deligne. - -The earliest antecedent of the Weil conjectures is by Carl Friedrich Gauss and appears in section VII of his Disquisitiones Arithmeticae, concerned with roots of unity and Gaussian periods. In article 358, he moves on from the periods that build up towers of quadratic extensions, for the construction of regular polygons; and assumes that p is a prime number congruent to 1 modulo 3. Then there is a cyclic cubic field inside the cyclotomic field of $p$th roots of unity, and a normal integral basis of periods for the integers of this field (an instance of the Hilbert–Speiser theorem). Gauss constructs the order-3 periods, corresponding to the cyclic group $(\mathbb{Z}/p\mathbb{Z})^\times$ of non-zero residues modulo p under multiplication and its unique subgroup of index three. Gauss lets $\mathfrak{R}$, $\mathfrak{R}'$, and $\mathfrak{R}''$ be its cosets. Taking the periods (sums of roots of unity) corresponding to these cosets applied to exp(2πi/p), he notes that these periods have a multiplication table that is accessible to calculation. Products are linear combinations of the periods, and he determines the coefficients. He sets, for example, $(\mathfrak{R}\mathfrak{R})$ equal to the number of elements of $\mathbb{Z}/p\mathbb{Z}$ which are in $\mathfrak{R}$ and which, after being increased by one, are also in $\mathfrak{R}$. He proves that this number and related ones are the coefficients of the products of the periods.
To see the relation of these sets to the Weil conjectures, notice that if α and α + 1 are both in $\mathfrak{R}$, then there exist x and y in $\mathbb{Z}/p\mathbb{Z}$ such that $x^3 = \alpha$ and $y^3 = \alpha + 1$; consequently, $x^3 + 1 = y^3$. Therefore $(\mathfrak{R}\mathfrak{R})$ is the number of solutions to $x^3 + 1 = y^3$ in the finite field $\mathbb{Z}/p\mathbb{Z}$. The other coefficients have similar interpretations. Gauss's determination of the coefficients of the products of the periods therefore counts the number of points on these elliptic curves, and as a byproduct he proves the analog of the Riemann hypothesis. - -The Weil conjectures in the special case of algebraic curves were conjectured by Emil Artin. The case of curves over finite fields was proved by Weil, finishing the project started by Hasse's theorem on elliptic curves over finite fields. Their interest was obvious enough from within number theory: they implied upper bounds for exponential sums, a basic concern in analytic number theory. - -What was really eye-catching, from the point of view of other mathematical areas, was the proposed connection with algebraic topology. Given that finite fields are discrete in nature, and topology speaks only about the continuous, the detailed formulation of Weil (based on working out some examples) was striking and novel. It suggested that geometry over finite fields should fit into well-known patterns relating to Betti numbers, the Lefschetz fixed-point theorem and so on. - -The analogy with topology suggested that a new homological theory be set up applying within algebraic geometry. This took two decades (it was a central aim of the work and school of Alexander Grothendieck), building on initial suggestions from Serre. The rationality part of the conjectures was proved first by Bernard Dwork, using p-adic methods. Grothendieck and his collaborators established the rationality conjecture, the functional equation and the link to Betti numbers by using the properties of étale cohomology, a new cohomology theory developed by Grothendieck and Michael Artin for attacking the Weil conjectures, as outlined by Grothendieck. - -Of the four conjectures the analogue of the Riemann hypothesis was the hardest to prove. Motivated by Serre's proof of an analogue of the Weil conjectures for Kähler manifolds, Grothendieck envisioned a proof based on his standard conjectures on algebraic cycles. However, Grothendieck's standard conjectures remain open (except for the hard Lefschetz theorem, which was proved by Deligne by extending his work on the Weil conjectures), and the analogue of the Riemann hypothesis was proved by Deligne, using the étale cohomology theory but circumventing the use of standard conjectures by an ingenious argument. - -Deligne found and proved a generalization of the Weil conjectures, bounding the weights of the pushforward of a sheaf. - -Suppose that X is a non-singular n-dimensional projective algebraic variety over the field $\mathbb{F}_q$ with q elements. The zeta function ζ(X, s) of X is by definition -$$ -\zeta(X, s) = \exp\left(\sum_{m = 1}^\infty \frac{N_m}{m} q^{-ms}\right) -$$ - -where $N_m$ is the number of points of X defined over the degree m extension $\mathbb{F}_{q^m}$ of $\mathbb{F}_q$. - -The Weil conjectures state: - -1. (Rationality) ζ(X, s) is a rational function of $T = q^{-s}$. More precisely, ζ(X, s) can be written as a finite alternating product -$$ -\prod_{i=0}^{2n} P_i(q^{-s})^{(-1)^{i+1}} = \frac{P_1(T)\dotsb P_{2n-1}(T)}{P_0(T)\dotsb P_{2n}(T)}, -$$ - -where each $P_i(T)$ is an integral polynomial.
Furthermore, $P_0(T) = 1 - T$, $P_{2n}(T) = 1 - q^n T$, and for $1 \leq i \leq 2n - 1$, $P_i(T)$ factors over $\mathbb{C}$ as $\textstyle\prod_j (1 - \alpha_{ij}T)$ for some numbers $\alpha_{ij}$. - -2. (Functional equation and Poincaré duality) The zeta function satisfies -$$ -\zeta(X,n-s)=\pm q^{\frac{nE}{2}-Es}\zeta(X,s) -$$ - -or equivalently -$$ -\zeta(X,q^{-n}T^{-1})=\pm q^{\frac{nE}{2}}T^E\zeta(X,T) -$$ - -where E is the Euler characteristic of X. In particular, for each i, the numbers $\alpha_{2n-i,1}, \alpha_{2n-i,2}, \ldots$ equal the numbers $q^n/\alpha_{i,1}, q^n/\alpha_{i,2}, \ldots$ in some order. - -3. (Riemann hypothesis) $|\alpha_{i,j}| = q^{i/2}$ for all $1 \leq i \leq 2n - 1$ and all j. This implies that all zeros of $P_k(T)$ lie on the "critical line" of complex numbers s with real part $k/2$. - -4. (Betti numbers) If X is a (good) "reduction mod p" of a non-singular projective variety Y defined over a number field embedded in the field of complex numbers, then the degree of $P_i$ is the i-th Betti number of the space of complex points of Y. - -The simplest example (other than a point) is to take X to be the projective line. The number of points of X over a field with $q^m$ elements is just $N_m = q^m + 1$ (where the "+ 1" comes from the "point at infinity"). The zeta function is just -$$ -\frac{1}{(1 - q^{-s})(1 - q^{1-s})}. -$$ - -It is easy to check all parts of the Weil conjectures directly. For example, the corresponding complex variety is the Riemann sphere and its initial Betti numbers are 1, 0, 1. - -It is not much harder to do n-dimensional projective space. The number of points of X over a field with $q^m$ elements is just $N_m = 1 + q^m + q^{2m} + \cdots + q^{nm}$. The zeta function is just -$$ -\frac{1}{(1 - q^{-s})(1 - q^{1-s})(1 - q^{2-s})\cdots(1 - q^{n-s})}. -$$ - -It is again easy to check all parts of the Weil conjectures directly. (Complex projective space gives the relevant Betti numbers, which nearly determine the answer.) - -The numbers of points on the projective line and projective space are so easy to calculate because they can be written as disjoint unions of a finite number of copies of affine spaces. It is also easy to prove the Weil conjectures for other spaces, such as Grassmannians and flag varieties, which have the same "paving" property. - -Elliptic curves give the first non-trivial cases of the Weil conjectures (proved by Hasse). If E is an elliptic curve over a finite field with q elements, then the number of points of E defined over the field with $q^m$ elements is $1 - \alpha^m - \beta^m + q^m$, where α and β are complex conjugates with absolute value $\sqrt{q}$. The zeta function is -$$ -\zeta(E, s) = \frac{(1 - \alpha q^{-s})(1 - \beta q^{-s})}{(1 - q^{-s})(1 - q^{1-s})}. -$$ - -(A numerical sanity check of the curve case for a concrete example is sketched below.)
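The curve case is easy to verify numerically. The Python sketch below is an added illustration (the curve $y^2 = x^3 + x + 1$ is an arbitrary example, not from the article): it counts the points over several small prime fields and checks that the trace $a = q + 1 - N$ satisfies $a^2 \leq 4q$, which is equivalent to the conjugate pair α, β having absolute value $\sqrt{q}$.

```python
import math

def count_points(p, a4=1, a6=1):
    """Points on y^2 = x^3 + a4*x + a6 over F_p, plus the point at infinity."""
    square_counts = {}
    for y in range(p):
        s = y * y % p
        square_counts[s] = square_counts.get(s, 0) + 1
    n = 1  # the point at infinity
    for x in range(p):
        rhs = (x * x * x + a4 * x + a6) % p
        n += square_counts.get(rhs, 0)  # number of y with y^2 = rhs
    return n

for p in [5, 7, 11, 101, 1009]:
    n = count_points(p)
    a = p + 1 - n  # trace of Frobenius: alpha + beta
    assert a * a <= 4 * p  # equivalent to |alpha| = |beta| = sqrt(p)
    print(p, n, a, round(2 * math.sqrt(p), 2))
```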
- -Weil suggested that the conjectures would follow from the existence of a suitable "Weil cohomology theory" for varieties over finite fields, similar to the usual cohomology with rational coefficients for complex varieties. His idea was that if F is the Frobenius automorphism over the finite field, then the number of points of the variety X over the field of order $q^m$ is the number of fixed points of $F^m$ (acting on all points of the variety X defined over the algebraic closure). In algebraic topology the number of fixed points of an automorphism can be worked out using the Lefschetz fixed-point theorem, given as an alternating sum of traces on the cohomology groups. So if there were similar cohomology groups for varieties over finite fields, then the zeta function could be expressed in terms of them. - -The first problem with this is that the coefficient field for a Weil cohomology theory cannot be the rational numbers. To see this, consider the case of a supersingular elliptic curve over a finite field of characteristic p. The endomorphism ring of this is an order in a quaternion algebra over the rationals, and should act on the first cohomology group, which should be a 2-dimensional vector space over the coefficient field by analogy with the case of a complex elliptic curve. However a quaternion algebra over the rationals cannot act on a 2-dimensional vector space over the rationals. The same argument eliminates the possibility of the coefficient field being the reals or the p-adic numbers, because the quaternion algebra is still a division algebra over these fields. However it does not eliminate the possibility that the coefficient field is the field of l-adic numbers for some prime l ≠ p, because over these fields the division algebra splits and becomes a matrix algebra, which can act on a 2-dimensional vector space. Grothendieck and Michael Artin managed to construct suitable cohomology theories over the field of l-adic numbers for each prime l ≠ p, called l-adic cohomology. - -By the end of 1964 Grothendieck together with Artin and Jean-Louis Verdier (and the earlier 1960 work by Dwork) proved the Weil conjectures apart from the most difficult third conjecture above (the "Riemann hypothesis" conjecture) (Grothendieck 1965). The general theorems about étale cohomology allowed Grothendieck to prove an analogue of the Lefschetz fixed-point formula for the l-adic cohomology theory, and by applying it to the Frobenius automorphism F he was able to prove the conjectured formula for the zeta function: -$$ -\zeta(s)=\frac{P_1(T)\cdots P_{2n-1}(T)}{P_0(T)P_2(T)\cdots P_{2n}(T)} -$$ - -where each polynomial $P_i$ is the determinant of $I - TF$ on the l-adic cohomology group $H^i$. - -The rationality of the zeta function follows immediately. The functional equation for the zeta function follows from Poincaré duality for l-adic cohomology, and the relation with complex Betti numbers of a lift follows from a comparison theorem between l-adic and ordinary cohomology for complex varieties. - -More generally, Grothendieck proved a similar formula for the zeta function (or "generalized L-function") of a sheaf $F_0$: -$$ -Z(X_0, F_0, t) = \prod_{x\in |X_0|}\det(1-F^*_xt^{\deg(x)}|F_0)^{-1} -$$ - -as a product over cohomology groups: -$$ -Z(X_0, F_0, t) = \prod_{i}\det(1-F^* t|H^i_c(F))^{(-1)^{i+1}} -$$ - -The special case of the constant sheaf gives the usual zeta function. - -Verdier, Serre, Katz and Freitag gave expository accounts of the first proof of Deligne. Much of the background in l-adic cohomology is described in the book of Freitag and Kiehl. - -Deligne's first proof of the remaining third Weil conjecture (the "Riemann hypothesis conjecture") used the following steps: - -*Grothendieck expressed the zeta function in terms of the trace of Frobenius on l-adic cohomology groups, so the Weil conjectures for a d-dimensional variety V over a finite field with q elements depend on showing that the eigenvalues α of Frobenius acting on the i-th l-adic cohomology group $H^i(V)$ of V have absolute values $|\alpha| = q^{i/2}$ (for an embedding of the algebraic elements of $\mathbb{Q}_l$ into the complex numbers). - -*After blowing up V and extending the base field, one may assume that the variety V has a morphism onto the projective line $\mathbf{P}^1$, with a finite number of singular fibers with very mild (quadratic) singularities.
The theory of monodromy of Lefschetz pencils, introduced for complex varieties (and ordinary cohomology) by Lefschetz, and extended by Grothendieck and Deligne to l-adic cohomology, relates the cohomology of V to that of its fibers. The relation depends on the space $E_x$ of vanishing cycles, the subspace of the cohomology $H^{d-1}(V_x)$ of a non-singular fiber $V_x$ spanned by classes that vanish on singular fibers. - -*The Leray spectral sequence relates the middle cohomology group of V to the cohomology of the fiber and base. The hard part to deal with is more or less a group $H^1(\mathbf{P}^1, j_*E) = H^1_c(U, E)$, where U is the set of points of the projective line with non-singular fibers, j is the inclusion of U into the projective line, and E is the sheaf with fibers the spaces $E_x$ of vanishing cycles. - -The heart of Deligne's proof is to show that the sheaf E over U is pure, in other words to find the absolute values of the eigenvalues of Frobenius on its stalks. This is done by studying the zeta functions of the even powers $E^k$ of E and applying Grothendieck's formula for the zeta functions as alternating products over cohomology groups. The crucial idea of considering even powers $E^k$ of E was inspired by a paper of Rankin, who used a similar idea with k = 2 for bounding the Ramanujan tau function. Langlands pointed out that a generalization of Rankin's result for higher even values of k would imply the Ramanujan conjecture, and Deligne realized that in the case of zeta functions of varieties, Grothendieck's theory of zeta functions of sheaves provided an analogue of this generalization. - -*The poles of the zeta function of $E^k$ are found using Grothendieck's formula -$$ -Z(U,E^k,T) = \frac{\det(1-F^* T|H^1_c(E^k))}{\det(1-F^* T|H^0_c(E^k))\det(1-F^* T|H^2_c(E^k))} -$$ - -and calculating the cohomology groups in the denominator explicitly. The $H^0_c$ term is usually just 1, as U is usually not compact, and the $H^2_c$ term can be calculated explicitly as follows. Poincaré duality relates $H^2_c(E^k)$ to $H^0(E^k)$, which is in turn the space of covariants of the monodromy group, which is the geometric fundamental group of U acting on the fiber of $E^k$ at a point. The fiber of E has a bilinear form induced by cup product, which is antisymmetric if d is even, and makes E into a symplectic space. (This is a little inaccurate: Deligne did later show that $E^\perp \cap E = 0$ by using the hard Lefschetz theorem; this requires the Weil conjectures, and the proof of the Weil conjectures really has to use a slightly more complicated argument with $E/(E^\perp \cap E)$ rather than E.) An argument of Kazhdan and Margulis shows that the image of the monodromy group acting on E, given by the Picard–Lefschetz formula, is Zariski dense in a symplectic group and therefore has the same invariants, which are well known from classical invariant theory. Keeping track of the action of Frobenius in this calculation shows that its eigenvalues are all $q^{k(d-1)/2+1}$, so the zeta function $Z(E^k,T)$ has poles only at $T = 1/q^{k(d-1)/2+1}$. - -*The Euler product for the zeta function of $E^k$ is -$$ -Z(E^k,T) = \prod_x \frac{1}{Z(E^k_x,T)} -$$ - -If k is even then all the coefficients of the factors on the right (considered as power series in T) are non-negative; this follows by writing -$$ -\frac{1}{\det(1-T^{\deg(x)}F_x|E^k)} =\exp\left(\sum_{n>0}\frac{T^n}{n}\text{Trace}(F_x^n|E)^k\right) -$$ - -and using the fact that the traces of powers of F are rational, so their k-th powers are non-negative as k is even.
Deligne proves the rationality of the traces by relating them to numbers of points of varieties, which are always (rational) integers. - -*The power series for $Z(E^k, T)$ converges for $|T|$ less than the absolute value $1/q^{k(d-1)/2+1}$ of its only possible pole. When k is even the coefficients of all its Euler factors are non-negative, so that each of the Euler factors has coefficients bounded by a constant times the coefficients of $Z(E^k, T)$ and therefore converges on the same region and has no poles in this region. So for k even the polynomials $Z(E^k_x, T)$ have no zeros in this region, or in other words the eigenvalues of Frobenius on the stalks of $E^k$ have absolute value at most $q^{k(d-1)/2+1}$. - -*This estimate can be used to find the absolute value of any eigenvalue α of Frobenius on a fiber of E as follows. For any integer k, $\alpha^k$ is an eigenvalue of Frobenius on a stalk of $E^k$, which for k even is bounded by $q^{1+k(d-1)/2}$. So -$$ -|\alpha^k|\le q^{k(d-1)/2 +1} -$$ - -As this is true for arbitrarily large even k, this implies that -$$ -|\alpha|\le q^{(d-1)/2}. -$$ - -Poincaré duality then implies that -$$ -|\alpha|=q^{(d-1)/2}. -$$ - -The deduction of the Riemann hypothesis from this estimate is mostly a fairly straightforward use of standard techniques and is done as follows. - -*The eigenvalues of Frobenius on $H^1_c(U, E)$ can now be estimated, as they are the zeros of the zeta function of the sheaf E. This zeta function can be written as an Euler product of zeta functions of the stalks of E, and using the estimate for the eigenvalues on these stalks shows that this product converges for $|T| < q^{-d/2-1/2}$, so that there are no zeros of the zeta function in this region. This implies that the eigenvalues of Frobenius on $H^1_c(U, E)$ are at most $q^{d/2+1/2}$ in absolute value (in fact it will soon be seen that they have absolute value exactly $q^{d/2}$). This step of the argument is very similar to the usual proof that the Riemann zeta function has no zeros with real part greater than 1, by writing it as an Euler product. - -*The conclusion of this is that the eigenvalues α of the Frobenius of a variety of even dimension d on the middle cohomology group satisfy -$$ - |\alpha| \le q^{d/2+1/2} -$$ - -To obtain the Riemann hypothesis one needs to eliminate the 1/2 from the exponent. This can be done as follows. Applying this estimate to any even power $V^k$ of V and using the Künneth formula shows that the eigenvalues of Frobenius on the middle cohomology of a variety V of any dimension d satisfy -$$ - |\alpha^k| \le q^{kd/2+1/2} -$$ - -As this is true for arbitrarily large even k, this implies that -$$ -|\alpha| \le q^{d/2} -$$ - -Poincaré duality then implies that -$$ -|\alpha| = q^{d/2}. -$$ - -*This proves the Weil conjectures for the middle cohomology of a variety. The Weil conjectures for the cohomology below the middle dimension follow from this by applying the weak Lefschetz theorem, and the conjectures for cohomology above the middle dimension then follow from Poincaré duality. - -Deligne found and proved a generalization of the Weil conjectures, bounding the weights of the pushforward of a sheaf. In practice it is this generalization rather than the original Weil conjectures that is mostly used in applications, such as the hard Lefschetz theorem. Much of the second proof is a rearrangement of the ideas of his first proof.
The main extra idea needed is an argument closely related to the theorem of Jacques Hadamard and Charles Jean de la Vallée Poussin, used by Deligne to show that various L-series do not have zeros with real part 1. - -A constructible sheaf on a variety over a finite field is called pure of weight β if for all points x the eigenvalues of the Frobenius at x all have absolute value $N(x)^{\beta/2}$, and is called mixed of weight ≤ β if it can be written as repeated extensions by pure sheaves with weights ≤ β. - -Deligne's theorem states that if f is a morphism of schemes of finite type over a finite field, then $R^if_!$ takes mixed sheaves of weight ≤ β to mixed sheaves of weight ≤ β + i. - -The original Weil conjectures follow by taking f to be a morphism from a smooth projective variety to a point and considering the constant sheaf $\mathbb{Q}_l$ on the variety. This gives an upper bound on the absolute values of the eigenvalues of Frobenius, and Poincaré duality then shows that this is also a lower bound. - -In general $R^if_!$ does not take pure sheaves to pure sheaves. However it does when a suitable form of Poincaré duality holds, for example if f is smooth and proper, or if one works with perverse sheaves rather than sheaves as in Beilinson. - -Inspired by the work of Witten on Morse theory, Laumon found another proof, using Deligne's l-adic Fourier transform, which allowed him to simplify Deligne's proof by avoiding the use of the method of Hadamard and de la Vallée Poussin. His proof generalizes the classical calculation of the absolute value of Gauss sums using the fact that the norm of a Fourier transform has a simple relation to the norm of the original function. Kiehl and Weissauer used Laumon's proof as the basis for their exposition of Deligne's theorem. Katz gave a further simplification of Laumon's proof, using monodromy in the spirit of Deligne's first proof. Kedlaya gave another proof using the Fourier transform, replacing étale cohomology with rigid cohomology. - -*Deligne was able to prove the hard Lefschetz theorem over finite fields using his second proof of the Weil conjectures. - -*Deligne had previously shown that the Ramanujan–Petersson conjecture follows from the Weil conjectures. - -*Deligne used the Weil conjectures to prove estimates for exponential sums. - -*Katz and Messing were able to prove the Künneth-type standard conjecture over finite fields using Deligne's proof of the Weil conjectures. diff --git a/wiki/wikipedia/2460.txt b/wiki/wikipedia/2460.txt deleted file mode 100644 index f9abd906f947e30b398e51ea334e0f882afd6c05..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2460.txt +++ /dev/null @@ -1,8 +0,0 @@ -The Kato theorem, or Kato's cusp condition (after the Japanese mathematician Tosio Kato), is used in computational quantum physics. It states that for generalized Coulomb potentials, the electron density has a cusp at the position of the nuclei, where it satisfies -$$ - Z_k = - \frac{a_0}{2n(\mathbf{r})} \frac{dn(\mathbf{r})}{dr} \bigg|_{r \rightarrow \mathbf{R_k}} -$$ - -Here $ \mathbf{R_k} $ denotes the positions of the nuclei, $ Z_k $ their atomic numbers and $ a_0 $ is the Bohr radius. - -For a Coulombic system one can thus, in principle, read off all information necessary for completely specifying the Hamiltonian directly from examining the density distribution. This is also known as E. Bright Wilson's argument within the framework of density functional theory (DFT).
The electron density of the ground state of a molecular system contains cusps at the locations of the nuclei, and by identifying these from the total electron density of the system, the positions are thus established. From Kato's theorem, one also obtains the nuclear charges of the nuclei, and thus the external potential is fully defined. Finally, integrating the electron density over space gives the number of electrons, and the (electronic) Hamiltonian is defined. This is valid in a non-relativistic treatment within the Born–Oppenheimer approximation, and assuming point-like nuclei. diff --git a/wiki/wikipedia/2461.txt b/wiki/wikipedia/2461.txt deleted file mode 100644 index 7dc82c946309cfb389ba49efd3f98f1dfc304c3a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2461.txt +++ /dev/null @@ -1,13 +0,0 @@ -In mathematics, in the theory of rewriting systems, Newman's lemma, also commonly called the diamond lemma, states that a terminating (or strongly normalizing) abstract rewriting system (ARS), that is, one in which there are no infinite reduction sequences, is confluent if it is locally confluent. In fact a terminating ARS is confluent precisely when it is locally confluent. - -Equivalently, for every binary relation with no decreasing infinite chains and satisfying a weak version of the diamond property, there is a unique minimal element in every connected component of the relation considered as a graph. - -Today, this is seen as a purely combinatorial result based on well-foundedness, due to a proof of Gérard Huet in 1980. Newman's original proof was considerably more complicated. - -In general, Newman's lemma can be seen as a combinatorial result about binary relations → on a set A (written backwards, so that a → b means that b is below a) with the following two properties: - -* → is a well-founded relation: every non-empty subset X of A has a minimal element (an element a of X such that a → b for no b in X). Equivalently, there is no infinite chain $a_0 \to a_1 \to a_2 \to a_3 \to \cdots$. In the terminology of rewriting systems, → is terminating. - -* Every covering is bounded below. That is, if an element a in A covers elements b and c in A in the sense that a → b and a → c, then there is an element d in A such that b →* d and c →* d, where →* denotes the reflexive transitive closure of →. In the terminology of rewriting systems, → is locally confluent. - -If the above two conditions hold, then the lemma states that → is confluent: whenever a →* b and a →* c, there is an element d such that b →* d and c →* d. In view of the termination of →, this implies that every connected component of → as a graph contains a unique minimal element a; moreover b →* a for every element b of the component. (A brute-force check of the lemma on a small finite system is sketched below.)
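Newman's lemma can be checked exhaustively on a small finite system. The Python sketch below is an added illustration (the four-element rewriting relation is invented for the example): it verifies local confluence and then full confluence by brute-force search; termination holds by inspection, since the toy relation is finite and acyclic.

```python
from itertools import product

# A toy terminating rewriting relation ->, invented for illustration.
STEP = {"a": {"b", "c"}, "b": {"d"}, "c": {"d"}, "d": set()}

def reachable(x):
    """All y with x ->* y (reflexive transitive closure of ->)."""
    seen, frontier = {x}, [x]
    while frontier:
        for y in STEP[frontier.pop()]:
            if y not in seen:
                seen.add(y)
                frontier.append(y)
    return seen

# Two elements are joinable iff some element is ->*-reachable from both.
# Local confluence: every one-step peak b <- a -> c can be joined.
locally_confluent = all(
    reachable(b) & reachable(c)
    for a in STEP for b, c in product(STEP[a], repeat=2)
)

# Confluence: every peak b <-* a ->* c can be joined, as the lemma predicts.
confluent = all(
    reachable(b) & reachable(c)
    for a in STEP for b, c in product(reachable(a), repeat=2)
)

print(locally_confluent, confluent)  # True True
```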
diff --git a/wiki/wikipedia/2462.txt b/wiki/wikipedia/2462.txt deleted file mode 100644 index 9b1e1b6f8ac7104d4194a7b898d37fb8d932622e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2462.txt +++ /dev/null @@ -1,9 +0,0 @@ -In geometry, the Japanese theorem states that the centers of the incircles of certain triangles inside a cyclic quadrilateral are vertices of a rectangle. - -Triangulating an arbitrary cyclic quadrilateral by its diagonals yields four overlapping triangles (each diagonal creates two triangles). The centers of the incircles of those triangles form a rectangle. - -Specifically, let □ABCD be an arbitrary cyclic quadrilateral and let $M_1$, $M_2$, $M_3$, $M_4$ be the incenters of the triangles △ABD, △ABC, △BCD, △ACD. Then the quadrilateral formed by $M_1$, $M_2$, $M_3$, $M_4$ is a rectangle. - -Note that this theorem is easily extended to prove the Japanese theorem for cyclic polygons. To prove the quadrilateral case, simply construct the parallelogram tangent to the corners of the constructed rectangle, with sides parallel to the diagonals of the quadrilateral. The construction shows that the parallelogram is a rhombus, which is equivalent to showing that the sums of the radii of the incircles tangent to each diagonal are equal. - -The quadrilateral case immediately proves the general case by induction on the set of triangulating partitions of a general polygon. diff --git a/wiki/wikipedia/2463.txt b/wiki/wikipedia/2463.txt deleted file mode 100644 index 7a0217bc378e486e1d971d59ad446518d9d2e116..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2463.txt +++ /dev/null @@ -1 +0,0 @@ -In number theory, Waring's prime number conjecture is a conjecture related to Vinogradov's theorem, named after the English mathematician Edward Waring. It states that every odd number exceeding 3 is either a prime number or the sum of three prime numbers. It follows from the generalized Riemann hypothesis, and (trivially) from Goldbach's weak conjecture. diff --git a/wiki/wikipedia/2464.txt b/wiki/wikipedia/2464.txt deleted file mode 100644 index 8604f454792961197b76aec2b542ad22e0f4b025..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2464.txt +++ /dev/null @@ -1,15 +0,0 @@ -In mathematics, in the field of harmonic analysis, an oscillatory integral operator is an integral operator of the form -$$ -T_\lambda u(x)=\int_{\R^n}e^{i\lambda S(x, y)} a(x, y) u(y)dy, \qquad x\in\R^m, \quad y\in\R^n, -$$ - -where the function S(x,y) is called the phase of the operator and the function a(x,y) is called the symbol of the operator. λ is a parameter. One often considers S(x,y) to be real-valued and smooth, and a(x,y) smooth and compactly supported. Usually one is interested in the behavior of $T_\lambda$ for large values of λ. - -Oscillatory integral operators often appear in many fields of mathematics (analysis, partial differential equations, integral geometry, number theory) and in physics. Properties of oscillatory integral operators have been studied by Elias Stein and his school. - -The following bound on the $L^2 \to L^2$ action of oscillatory integral operators (that is, on the $L^2 \to L^2$ operator norm) was obtained by Lars Hörmander in his paper on Fourier integral operators: - -Assume that $x, y \in \mathbb{R}^n$, $n \geq 1$. Let S(x,y) be real-valued and smooth, and let a(x,y) be smooth and compactly supported. If $\det_{j,k} \frac{\partial^2 S}{\partial x_j \partial y_k}(x,y)\ne 0$ everywhere on the support of a(x,y), then there is a constant C such that $T_\lambda$, which is initially defined on smooth functions, extends to a continuous operator from $L^2(\mathbb{R}^n)$ to $L^2(\mathbb{R}^n)$, with the norm bounded by $C \lambda^{-n/2}$, for any λ ≥ 1: -$$ -\|T_\lambda\|_{L^2(\mathbf{R}^n)\to L^2(\mathbf{R}^n)}\le C\lambda^{-n/2}. -$$ - -(A numerical illustration of this decay rate is sketched below.)
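Hörmander's decay rate can be observed numerically by discretizing the operator. The sketch below is an added illustration, not from the article: it takes the model phase S(x, y) = xy in dimension n = 1 with an invented smooth compactly supported bump as the symbol, so the predicted norm decay is $C\lambda^{-1/2}$, and the rescaled values in the last output column should stay roughly constant.

```python
import numpy as np

n = 1200
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]

# Smooth, compactly supported bump b; the symbol is a(x, y) = b(x) * b(y).
with np.errstate(divide="ignore"):
    b = np.where(np.abs(x) < 1.0, np.exp(-1.0 / (1.0 - x**2)), 0.0)

for lam in [10.0, 40.0, 160.0]:
    # Discretize T_lam u(x) = integral of exp(i*lam*x*y) a(x, y) u(y) dy.
    kernel = np.exp(1j * lam * np.outer(x, x)) * np.outer(b, b) * dx
    # Largest singular value approximates the L2 -> L2 operator norm.
    norm = np.linalg.norm(kernel, 2)
    print(lam, norm, norm * np.sqrt(lam))  # last column ~ constant C
```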
diff --git a/wiki/wikipedia/2465.txt b/wiki/wikipedia/2465.txt deleted file mode 100644 index 030c4629df55ba1c95544e72e63a159ba4a2f502..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2465.txt +++ /dev/null @@ -1,147 +0,0 @@ -The Feynman–Kac formula, named after Richard Feynman and Mark Kac, establishes a link between parabolic partial differential equations (PDEs) and stochastic processes. In 1947, when Kac and Feynman were both on the Cornell faculty, Kac attended a presentation of Feynman's and remarked that the two of them were working on the same thing from different directions. The Feynman–Kac formula resulted, which rigorously proves the real case of Feynman's path integrals. The complex case, which occurs when a particle's spin is included, is still unproven. - -It offers a method of solving certain partial differential equations by simulating random paths of a stochastic process. Conversely, an important class of expectations of random processes can be computed by deterministic methods. - -Consider the partial differential equation -$$ -\frac{\partial u}{\partial t}(x,t) + \mu(x,t) \frac{\partial u}{\partial x}(x,t) + \tfrac{1}{2} \sigma^2(x,t) \frac{\partial^2 u}{\partial x^2}(x,t) -V(x,t) u(x,t) + f(x,t) = 0, -$$ - -defined for all $x \in \mathbb{R}$ and $t \in [0, T]$, subject to the terminal condition -$$ -u(x,T)=\psi(x), -$$ - -where μ, σ, ψ, V, f are known functions, T is a parameter and $ u:\mathbb{R}\times[0,T]\to\mathbb{R}$ is the unknown. Then the Feynman–Kac formula tells us that the solution can be written as a conditional expectation -$$ - u(x,t) = E^Q\left[ \int_t^T e^{- \int_t^r V(X_\tau,\tau) d\tau}f(X_r,r)dr + e^{-\int_t^T V(X_\tau,\tau) d\tau}\psi(X_T) \Bigg| X_t=x \right] -$$ - -under the probability measure Q such that X is an Itô process driven by the equation -$$ -dX = \mu(X,t)dt + \sigma(X,t)dW^Q, -$$ - -where $W^Q(t)$ is a Wiener process (also called Brownian motion) under Q, and the initial condition for X(t) is X(t) = x. - -A proof that the above formula is a solution of the differential equation is long, difficult and not presented here. It is, however, reasonably straightforward to show that, if a solution exists, it must have the above form. The proof of that lesser result is as follows: - -Let u(x, t) be the solution to the above partial differential equation. Applying the product rule for Itô processes to the process -$$ - Y(s) = e^{-\int_t^s V(X_\tau,\tau) d\tau} u(X_s,s)+ \int_t^s e^{-\int_t^r V(X_\tau,\tau) d\tau}f(X_r,r) dr -$$ - -one gets - -\begin{align} -dY = {} & d\left(e^{- \int_t^s V(X_\tau,\tau) d\tau}\right) u(X_s,s) + e^{- \int_t^s V(X_\tau,\tau) d\tau}du(X_s,s) \\[6pt] -& {} + d\left(e^{- \int_t^s V(X_\tau,\tau) d\tau}\right)du(X_s,s) + d\left(\int_t^s e^{- \int_t^r V(X_\tau,\tau) d\tau} f(X_r,r) dr\right) -\end{align} - -Since -$$ -d\left(e^{- \int_t^s V(X_\tau,\tau) d\tau}\right) =-V(X_s,s) e^{- \int_t^s V(X_\tau,\tau) d\tau} ds, -$$ - -the third term is $ O(dt du) $ and can be dropped. We also have that -$$ - d\left(\int_t^s e^{- \int_t^r V(X_\tau,\tau) d\tau}f(X_r,r)dr\right) = e^{- \int_t^s V(X_\tau,\tau) d\tau} f(X_s,s) ds. -$$ - -Applying Itô's lemma to $du(X_s,s)$, it follows that - -\begin{align} -dY= {} & e^{-\int_t^s V(X_\tau,\tau) d\tau}\left(-V(X_s,s) u(X_s,s) +f(X_s,s)+\mu(X_s,s)\frac{\partial u}{\partial X}+\frac{\partial u}{\partial s}+\tfrac{1}{2}\sigma^2(X_s,s)\frac{\partial^2 u}{\partial X^2}\right)ds \\[6pt] -& {} + e^{- \int_t^s V(X_\tau,\tau) d\tau}\sigma(X,s)\frac{\partial u}{\partial X}dW. -\end{align} - -The first term contains, in parentheses, the above partial differential equation and is therefore zero. What remains is -$$ -dY=e^{-\int_t^s V(X_\tau,\tau) d\tau}\sigma(X,s)\frac{\partial u}{\partial X}dW. -$$ - -Integrating this equation from t to T, one concludes that -$$ - Y(T) - Y(t) = \int_t^T e^{- \int_t^s V(X_\tau,\tau) d\tau}\sigma(X,s)\frac{\partial u}{\partial X}dW.
-$$ - -Upon taking expectations, conditioned on $X_t = x$, and observing that the right side is an Itô integral, which has expectation zero, it follows that -$$ -E[Y(T)\mid X_t=x] = E[Y(t)\mid X_t=x] = u(x,t). -$$ - -The desired result is obtained by observing that -$$ -E[Y(T)\mid X_t=x] = E \left [e^{-\int_t^T V(X_\tau,\tau) d\tau} u(X_T,T) + \int_t^T e^{- \int_t^r V(X_\tau,\tau) d\tau}f(X_r,r)dr \Bigg| X_t=x \right ] -$$ - -and finally -$$ - u(x,t) = E \left [e^{- \int_t^T V(X_\tau,\tau) d\tau} \psi(X_T) + \int_t^T e^{-\int_t^s V(X_\tau,\tau)d\tau} f(X_s,s)ds \Bigg| X_t=x \right ] -$$ - -* The proof above that a solution must have the given form is essentially the classical argument, with modifications to account for $f(x,t)$. - -* The expectation formula above is also valid for N-dimensional Itô diffusions. The corresponding partial differential equation for $ u:\mathbb{R}^N\times[0,T]\to\mathbb{R}$ becomes: -$$ -\frac{\partial u}{\partial t} + \sum_{i=1}^N \mu_i(x,t)\frac{\partial u}{\partial x_i} + \frac{1}{2} \sum_{i=1}^N\sum_{j=1}^N\gamma_{ij}(x,t) \frac{\partial^2 u}{\partial x_i \partial x_j} -r(x,t)u = f(x,t), -$$ - -where -$$ - \gamma_{ij}(x,t) = \sum_{k=1}^N\sigma_{ik}(x,t)\sigma_{jk}(x,t), -$$ - -i.e. $\gamma = \sigma \sigma^{\mathrm{T}}$, where $\sigma^{\mathrm{T}}$ denotes the transpose of $\sigma$. - -* This expectation can then be approximated using Monte Carlo or quasi-Monte Carlo methods. - -* When originally published by Kac in 1949, the Feynman–Kac formula was presented as a formula for determining the distribution of certain Wiener functionals. Suppose we wish to find the expected value of the function -$$ - e^{-\int_0^t V(x(\tau)) d\tau} -$$ - -in the case where x(τ) is some realization of a diffusion process starting at x(0) = 0. The Feynman–Kac formula says that this expectation is equivalent to the integral of a solution to a diffusion equation. Specifically, under the conditions that $u V(x) \geq 0$, -$$ - E\left[ e^{- u \int_0^t V(x(\tau)) d\tau} \right] = \int_{-\infty}^{\infty} w(x,t) dx -$$ - -where w(x, 0) = δ(x) and -$$ -\frac{\partial w}{\partial t} = \frac{1}{2} \frac{\partial^2 w}{\partial x^2} - u V(x) w. -$$ - -The Feynman–Kac formula can also be interpreted as a method for evaluating functional integrals of a certain form. If -$$ - I = \int f(x(0)) e^{-u\int_0^t V(x(t)) dt} g(x(t)) Dx -$$ - -where the integral is taken over all random walks, then -$$ - I = \int w(x,t) g(x) dx -$$ - -where w(x, t) is a solution to the parabolic partial differential equation -$$ - \frac{\partial w}{\partial t} = \frac{1}{2} \frac{\partial^2 w}{\partial x^2} - u V(x) w -$$ - -with initial condition w(x, 0) = f(x). - -In quantitative finance, the Feynman–Kac formula is used to efficiently calculate solutions to the Black–Scholes equation to price options on stocks. - -In quantum chemistry, it is used to solve the Schrödinger equation with the Pure Diffusion Monte Carlo method. diff --git a/wiki/wikipedia/2466.txt b/wiki/wikipedia/2466.txt deleted file mode 100644 index efe2b7681f4b28e04dba60085e194f02209e1202..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2466.txt +++ /dev/null @@ -1,30 +0,0 @@ -In complex analysis, a branch of mathematics, the Schwarz integral formula, named after Hermann Schwarz, allows one to recover a holomorphic function, up to an imaginary constant, from the boundary values of its real part. - -Let f be a function holomorphic on the closed unit disc {z ∈ C | |z| ≤ 1}.
Then -$$ -f(z) = \frac{1}{2\pi i} \oint_{|\zeta| = 1} \frac{\zeta + z}{\zeta - z} \operatorname{Re}(f(\zeta)) \frac{d\zeta}{\zeta}+ i\operatorname{Im}(f(0)) -$$ - -for all |z| < 1. - -Let f be a function holomorphic on the closed upper half-plane {z ∈ C | Im(z) ≥ 0} such that, for some α > 0, $|z^\alpha f(z)|$ is bounded on the closed upper half-plane. Then -$$ -f(z) = \frac{1}{\pi i} \int_{-\infty}^\infty \frac{u(\zeta,0)}{\zeta - z} d\zeta = \frac{1}{\pi i} \int_{-\infty}^\infty \frac{\operatorname{Re}(f)(\zeta+0i)}{\zeta - z} d\zeta -$$ - -for all Im(z) > 0. - -Note that, as compared to the version on the unit disc, this formula does not have an arbitrary constant added to the integral; this is because the additional decay condition makes the conditions for this formula more stringent. - -The formula follows from the Poisson integral formula applied to u: -$$ -u(z) = \frac{1}{2\pi}\int_0^{2\pi} u(e^{i\psi}) \operatorname{Re} {e^{i\psi} + z \over e^{i\psi} - z} d\psi \qquad \text{for } |z| < 1. -$$ - -By means of conformal maps, the formula can be generalized to any simply connected open set. diff --git a/wiki/wikipedia/2467.txt b/wiki/wikipedia/2467.txt deleted file mode 100644 index b7ea7c52f3fdad163b894d43c548164cb44b7078..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2467.txt +++ /dev/null @@ -1,79 +0,0 @@ -See List of things named after Gottfried Leibniz for other formulas known under the same name. - -In mathematics, the Leibniz formula for π, named after Gottfried Leibniz, states that -$$ -1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \cdots = \frac{\pi}{4}, -$$ - -an alternating series. It is also called the Leibniz–Madhava series, as it is a special case of a more general series expansion for the inverse tangent function, first discovered by the Indian mathematician Madhava of Sangamagrama in the 14th century; the specific case was first published by Leibniz around 1676. The series for the inverse tangent function, which is also known as Gregory's series, can be given by: -$$ -\arctan x = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots -$$ - -The Leibniz formula for $\pi/4$ can be obtained by putting x = 1 into this series. - -It also is the Dirichlet L-series of the non-principal Dirichlet character of modulus 4 evaluated at s = 1, and therefore the value β(1) of the Dirichlet beta function. - -\begin{align} - -\frac{\pi}{4} &= \arctan(1) \\ &= \int_0^1 \frac 1{1+x^2} dx \\[8pt] - -& = \int_0^1\left(\sum_{k=0}^n (-1)^k x^{2k}+\frac{(-1)^{n+1}x^{2n+2} }{1+x^2}\right) dx \\[8pt] - -& = \left(\sum_{k=0}^n \frac{(-1)^k}{2k+1}\right) - -+(-1)^{n+1} \left(\int_0^1\frac{x^{2n+2}}{1+x^2} dx\right). - -\end{align} - -Considering only the integral in the last term, we have: -$$ -0 \le \int_0^1 \frac{x^{2n+2}}{1+x^2}dx \le \int_0^1 x^{2n+2}dx = \frac{1}{2n+3} \rightarrow 0 \text{ as } n \rightarrow \infty.
-$$ - -Therefore, by the squeeze theorem, as n → ∞ we are left with the Leibniz series: -$$ -\frac{\pi}4 = \sum_{k=0}^\infty\frac{(-1)^k}{2k+1} -$$ - -\begin{align} - -\frac{\pi}{4} &= \arctan(1) \\ &= \int_0^1 \frac 1{1+z^2} dz \\[8pt] - -\end{align} - -When $|z|<1$, $ \sum_{k=0}^\infty (-1)^k z^{2k}$ converges uniformly on compact subsets, therefore -$$ -f(z) = \arctan(z) = \int_{0}^{z} \frac {1}{1+z^2} dz =\sum_{n=0}^{\infty}\frac{(-1)^n}{2n+1}z^{2n+1}\ (|z|<1) -$$ - -It remains to let $z$ approach $1$. By Leibniz's test, $\sum_{n=0}^{\infty}\frac{(-1)^n}{2n+1}$ converges, and since $z$ approaches $1$ from within the Stolz angle, Abel's theorem gives that $f(z)$ approaches $f(1)$, so the series representation also holds at $z = 1$; this completes the proof. - -Leibniz's formula converges extremely slowly: it exhibits sublinear convergence. Calculating pi to 10 correct decimal places using direct summation of the series requires about five billion terms because $\frac{4}{2k+1} < 10^{-10}$ for $k > 2 \times 10^{10} - \frac{1}{2}$. - -However, the Leibniz formula can be used to calculate pi to high precision (hundreds of digits or more) using various convergence acceleration techniques. For example, the Shanks transformation, Euler transform or Van Wijngaarden transformation, which are general methods for alternating series, can be applied effectively to the partial sums of the Leibniz series. Further, combining terms pairwise gives the non-alternating series -$$ -\frac{\pi}{4} = \sum_{n=0}^{\infty} \left(\frac{1}{4n+1}-\frac{1}{4n+3}\right) = \sum_{n=0}^{\infty} \frac{2}{(4n+1)(4n+3)} -$$ - -which can be evaluated to high precision from a small number of terms using Richardson extrapolation or the Euler–Maclaurin formula. This series can also be transformed into an integral by means of the Abel–Plana formula and evaluated using techniques for numerical integration. - -If the series is truncated at the right time, the decimal expansion of the approximation will agree with that of pi for many more digits, except for isolated digits or digit groups. For example, taking five million terms yields -$$ -3.141592\underline{4}5358979323846\underline{4}643383279502\underline{7}841971693993\underline{873}058... -$$ - -where the underlined digits are wrong. The errors can in fact be predicted; they are generated by the Euler numbers $E_n$ according to the asymptotic formula -$$ -\frac{\pi}{2} - 2 \sum_{k=1}^\frac{N}{2} \frac{(-1)^{k-1}}{2k-1} \sim \sum_{m=0}^\infty \frac{E_{2m}}{N^{2m+1}} -$$ - -where N is an integer divisible by 4. If N is chosen to be a power of ten, each term in the right sum becomes a finite decimal fraction. The formula is a special case of the Boole summation formula for alternating series, providing yet another example of a convergence acceleration technique that can be applied to the Leibniz series. In 1992, Jonathan Borwein and Mark Limber used the first thousand Euler numbers to calculate pi to 5,263 decimal places with the Leibniz formula. - -The Leibniz formula can be interpreted as a Dirichlet series using the unique non-principal Dirichlet character modulo 4. As with other Dirichlet series, this allows the infinite sum to be converted to an infinite product with one term for each prime number. Such a product is called an Euler product.
It is: - -\begin{align}\frac\pi4&=\left(\prod_{p\equiv 1\pmod 4}\frac{p}{p-1}\right) \left( \prod_{p\equiv 3\pmod 4}\frac{p}{p+1}\right)\\ - -&=\frac{3}{4} \cdot \frac{5}{4} \cdot \frac{7}{8} \cdot \frac{11}{12} \cdot \frac{13}{12}\cdot\frac{17}{16}\cdot\frac{19}{20}\cdot\frac{23}{24}\cdot\frac{29}{28} \cdots\end{align} - -In this product, each term is a superparticular ratio, each numerator is an odd prime number, and each denominator is the nearest multiple of 4 to the numerator. diff --git a/wiki/wikipedia/2468.txt b/wiki/wikipedia/2468.txt deleted file mode 100644 index c5bdc3607c0592b5476cbc2612105e50966118c0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2468.txt +++ /dev/null @@ -1,97 +0,0 @@ -In mathematics and computer science, a canonical, normal, or standard form of a mathematical object is a standard way of presenting that object as a mathematical expression. Often, it is one which provides the simplest representation of an object and which allows it to be identified in a unique way. The distinction between "canonical" and "normal" forms varies from subfield to subfield. In most fields, a canonical form specifies a unique representation for every object, while a normal form simply specifies its form, without the requirement of uniqueness. - -The canonical form of a positive integer in decimal representation is a finite sequence of digits that does not begin with zero. More generally, for a class of objects on which an equivalence relation is defined, a canonical form consists in the choice of a specific object in each class. For example: - -* Jordan normal form is a canonical form for matrix similarity. - -* The row echelon form is a canonical form, when one considers as equivalent a matrix and its left product by an invertible matrix. - -In computer science, and more specifically in computer algebra, when representing mathematical objects in a computer, there are usually many different ways to represent the same object. In this context, a canonical form is a representation such that every object has a unique representation (with canonicalization being the process through which a representation is put into its canonical form). Thus, the equality of two objects can easily be tested by testing the equality of their canonical forms. - -Despite this advantage, canonical forms frequently depend on arbitrary choices (like ordering the variables), which introduce difficulties for testing the equality of two objects resulting from independent computations. Therefore, in computer algebra, normal form is a weaker notion: A normal form is a representation such that zero is uniquely represented. This allows testing for equality by putting the difference of two objects in normal form. - -Canonical form can also mean a differential form that is defined in a natural (canonical) way. - -Given a set S of objects with an equivalence relation R on S, a canonical form is given by designating some objects of S to be "in canonical form", such that every object under consideration is equivalent to exactly one object in canonical form. In other words, the canonical forms in S represent the equivalence classes, once and only once. To test whether two objects are equivalent, it then suffices to test equality on their canonical forms. - -A canonical form thus provides a classification theorem and more, in that it not only classifies every class, but also gives a distinguished (canonical) representative for each object in the class.
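This equality-via-canonical-forms idea is easy to make concrete. The following Python sketch (ours, not the article's) canonicalizes fractions to lowest terms with a positive denominator, so that two fractions represent the same rational number exactly when their canonical forms are equal:

```python
from math import gcd

def canonical_fraction(num, den):
    """Return the canonical form of num/den: lowest terms, positive denominator."""
    if den == 0:
        raise ValueError("denominator must be nonzero")
    g = gcd(num, den)
    num, den = num // g, den // g
    if den < 0:  # normalize the sign so the denominator is positive
        num, den = -num, -den
    return (num, den)

# Equality of rationals reduces to equality of canonical forms:
assert canonical_fraction(2, 4) == canonical_fraction(-3, -6)   # both (1, 2)
assert canonical_fraction(1, -2) == canonical_fraction(-1, 2)   # both (-1, 2)
```

The same pattern, mapping each object to a distinguished representative of its equivalence class before comparing, underlies the formal definition that follows.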
- -Formally, a canonicalization with respect to an equivalence relation R on a set S is a mapping c:S→S such that for all s, s1, s2 ∈ S: - -# c(s) = c(c(s)) (idempotence), - -# s1 R s2 if and only if c(s1) = c(s2) (decisiveness), and - -# s R c(s) (representativeness). - -Property 3 is redundant; it follows by applying 2 to 1. - -In practical terms, it is often advantageous to be able to recognize the canonical forms. There is also a practical, algorithmic question to consider: how to pass from a given object s in S to its canonical form s*? Canonical forms are generally used to make operating with equivalence classes more effective. For example, in modular arithmetic, the canonical form for a residue class is usually taken as the least non-negative integer in it. Operations on classes are carried out by combining these representatives, and then reducing the result to its least non-negative residue. - -The uniqueness requirement is sometimes relaxed, allowing the forms to be unique up to some finer equivalence relation, such as allowing for reordering of terms (if there is no natural ordering on terms). - -A canonical form may simply be a convention, or a deep theorem. For example, polynomials are conventionally written with the terms in descending powers: it is more usual to write $x^2 + x + 30$ than $x + 30 + x^2$, although the two forms define the same polynomial. By contrast, the existence of Jordan canonical form for a matrix is a deep theorem. - -Note: in this section, "up to" some equivalence relation E means that the canonical form is not unique in general, but that if one object has two different canonical forms, they are E-equivalent. - -Standard form is used by many mathematicians and scientists to write extremely large numbers in a more concise and understandable way, the most prominent example of which is scientific notation. - -* Canonical representation of a positive integer - -* Canonical form of a continued fraction - -In analytic geometry: - -*The equation of a line: Ax + By = C, with $A^2 + B^2 = 1$ and C ≥ 0 - -*The equation of a circle: $(x - h)^2 + (y - k)^2 = r^2$ - -By contrast, there are alternative forms for writing equations. For example, the equation of a line may be written as a linear equation in point-slope and slope-intercept form. - -Convex polyhedra can be put into canonical form such that: - -* All faces are flat, - -* All edges are tangent to the unit sphere, and - -* The centroid of the polyhedron is at the origin. - -Every differentiable manifold has a cotangent bundle. That bundle can always be endowed with a certain differential form, called the canonical one-form. This form gives the cotangent bundle the structure of a symplectic manifold, and allows vector fields on the manifold to be integrated by means of the Euler-Lagrange equations, or by means of Hamiltonian mechanics. Such systems of integrable differential equations are called integrable systems. - -The study of dynamical systems overlaps with that of integrable systems; there one has the idea of a normal form (dynamical systems). - -In the study of manifolds in three dimensions, one has the first fundamental form, the second fundamental form and the third fundamental form.
- -Notable normal forms in mathematical logic, set theory, game theory, and proof theory include: - -* Negation normal form - -* Conjunctive normal form - -* Disjunctive normal form - -* Algebraic normal form - -* Prenex normal form - -* Skolem normal form - -* Blake canonical form, also known as the complete sum of prime implicants, the complete sum, or the disjunctive prime form - -* Cantor normal form of an ordinal number - -* Normal form game - -* Normal form (natural deduction) - -The symbolic manipulation of a formula from one form to another is called a "rewriting" of that formula. One can study the abstract properties of rewriting generic formulas, by studying the collection of rules by which formulas can be validly manipulated. These are the "rewriting rules", an integral part of an abstract rewriting system. A common question is whether it is possible to bring some generic expression to a single, common form, the normal form. If different sequences of rewrites still result in the same form, then that form can be termed a normal form, and the rewriting system is called confluent. It is not always possible to obtain a normal form. - -* A lambda term is in beta normal form if no beta reduction is possible; lambda calculus is a particular case of an abstract rewriting system. In the untyped lambda calculus, for example, the term $(\lambda x.(x x)) (\lambda x.(x x))$ doesn't have a normal form. In the typed lambda calculus, every well-formed term can be rewritten to its normal form. - -In graph theory, a branch of mathematics, graph canonization is the problem of finding a canonical form of a given graph G. A canonical form is a labeled graph Canon(G) that is isomorphic to G, such that every graph that is isomorphic to G has the same canonical form as G. Thus, from a solution to the graph canonization problem, one could also solve the problem of graph isomorphism: to test whether two graphs G and H are isomorphic, compute their canonical forms Canon(G) and Canon(H), and test whether these two canonical forms are identical. - -In computing, the reduction of data to any kind of canonical form is commonly called data normalization. - -For instance, database normalization is the process of organizing the fields and tables of a relational database to minimize redundancy and dependency. - -In the field of software security, a common vulnerability is unchecked malicious input (see Code injection). The mitigation for this problem is proper input validation. Before input validation is performed, the input is usually normalized by eliminating encoding (e.g., HTML encoding) and reducing the input data to a single common character set. - -Other forms of data, typically associated with signal processing (including audio and imaging) or machine learning, can be normalized in order to provide a limited range of values. diff --git a/wiki/wikipedia/2469.txt b/wiki/wikipedia/2469.txt deleted file mode 100644 index 14290472716bce5b736ee661dd30d6a12e248898..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2469.txt +++ /dev/null @@ -1,52 +0,0 @@ -In mathematical analysis, Wiener's tauberian theorem is any of several related results proved by Norbert Wiener in 1932. They provide a necessary and sufficient condition under which any function in L1 or L2 can be approximated by linear combinations of translations of a given function. - -Informally, if the Fourier transform of a function f vanishes on a certain set Z, the Fourier transform of any linear combination of translations of f also vanishes on Z.
Therefore, the linear combinations of translations of f cannot approximate a function whose Fourier transform does not vanish on Z. - -Wiener's theorems make this precise, stating that linear combinations of translations of f are dense if and only if the zero set of the Fourier transform of f is empty (in the case of L1) or of Lebesgue measure zero (in the case of L2). - -Gelfand reformulated Wiener's theorem in terms of commutative C*-algebras; in that form it states that the spectrum of the L1 group ring L1(R) of the group R of real numbers is the dual group of R. A similar result is true when R is replaced by any locally compact abelian group. - -Let f ∈ L1(R) be an integrable function. The span of translations $f_a(x) = f(x + a)$ is dense in L1(R) if and only if the Fourier transform of f has no real zeros. - -The following statement is equivalent to the previous result, and explains why Wiener's result is a Tauberian theorem: - -Suppose the Fourier transform of f ∈ L1 has no real zeros, and suppose the convolution f * h tends to zero at infinity for some h ∈ L∞. Then the convolution g * h tends to zero at infinity for any g ∈ L1. - -More generally, if -$$ - \lim_{x \to \infty} (f*h)(x) = A \int f(x) dx -$$ - -for some f ∈ L1 the Fourier transform of which has no real zeros, then also -$$ - \lim_{x \to \infty} (g*h)(x) = A \int g(x) dx -$$ - -for any g ∈ L1. - -Wiener's theorem has a counterpart in l1(Z): the span of the translations of f ∈ l1(Z) is dense if and only if the Fourier transform -$$ - \varphi(\theta) = \sum_{n \in \mathbb{Z}} f(n) e^{-in\theta} -$$ - -has no real zeros. The following statements are equivalent versions of this result: - -* Suppose the Fourier transform of f ∈ l1(Z) has no real zeros, and for some bounded sequence h the convolution f * h tends to zero at infinity. Then g * h also tends to zero at infinity for any g ∈ l1(Z). - -* Let φ be a function on the unit circle with absolutely convergent Fourier series. Then 1/φ has absolutely convergent Fourier series if and only if φ has no zeros. - -Gelfand showed that this is equivalent to the following property of the Wiener algebra A(T), which he proved using the theory of Banach algebras, thereby giving a new proof of Wiener's result: - -* The maximal ideals of A(T) are all of the form -$$ - M_x = \left\{ f \in A(\mathbb{T}) \mid f(x) = 0 \right\}, \quad x \in \mathbb{T}. -$$ - -Let f ∈ L2(R) be a square-integrable function. The span of translations $f_a(x) = f(x + a)$ is dense in L2(R) if and only if the real zeros of the Fourier transform of f form a set of zero Lebesgue measure. - -The parallel statement in l2(Z) is as follows: the span of translations of a sequence f ∈ l2(Z) is dense if and only if the zero set of the Fourier transform -$$ - \varphi(\theta) = \sum_{n \in \mathbb{Z}} f(n) e^{-in\theta} -$$ - -has zero Lebesgue measure. diff --git a/wiki/wikipedia/247.txt b/wiki/wikipedia/247.txt deleted file mode 100644 index 2922f66c3b93319180651e3469cadbd77d82067c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/247.txt +++ /dev/null @@ -1,11 +0,0 @@ -The World Sudoku Championship (WSC) is an annual international puzzle competition organised by a member of the World Puzzle Federation. The first event was held in Lucca in 2006. National teams are determined by local affiliates of the World Puzzle Federation.
The competition typically consists of 100 or more puzzles solved by all competitors over multiple timed rounds, including classic sudoku, variations and other puzzle types, normally followed by a playoff for the top qualifiers to determine a champion. Examples of rounds include the Relay round, where an answer from one puzzle contributes digits to the start of the next sudoku, and the "World Record" round, in which solvers competed to set a Guinness World Record for the fastest sudoku solution. - -Of the 13 championships held so far, Kota Morinishi of Japan (2014, 2015, 2017 and 2018) has been the most successful winner with four individual titles, ahead of Thomas Snyder of the United States (2007, 2008 and 2011) and Jan Mrozowski of Poland (2009, 2010 and 2012), who have each won three. - -From 2007 there has also been a team competition. Japan is the most successful team, having won the title in 2007, 2012, 2014, 2015 and 2018; the Czech Republic (2008, 2016), Germany (2010 and 2011) and China (2013, 2017) have won this title twice; Slovakia (2009) also won a title. - -Starting from 2011, the event has been held alongside the World Puzzle Championship. - -Currently, 30 countries are official members of the World Puzzle Federation. Individuals may also take part if their country is not already represented by a national team. - -Starting from 2013, titles have been awarded also for the best players in two age groups, Under 18 and Over 50 years of age. diff --git a/wiki/wikipedia/2470.txt b/wiki/wikipedia/2470.txt deleted file mode 100644 index 9f630ccf0a1419a95f8ff16ebb08db05f184fea3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2470.txt +++ /dev/null @@ -1,17 +0,0 @@ -In abstract algebra, the Eakin–Nagata theorem states: given commutative rings $A \subset B$ such that $B$ is finitely generated as a module over $A$, if $B$ is a Noetherian ring, then $A$ is a Noetherian ring. (Note the converse is also true and is easier.) - -The theorem is similar to the Artin–Tate lemma, which says that the same statement holds with "Noetherian" replaced by "finitely generated algebra" (assuming the base ring is a Noetherian ring). - -The theorem was first proved in Paul M. Eakin's thesis and later independently by Masayoshi Nagata. The theorem can also be deduced from the characterization of a Noetherian ring in terms of injective modules, as done for example by David Eisenbud; this approach is useful for a generalization to non-commutative rings. - -The following more general result is due to Edward W. Formanek and is proved by an argument rooted in the original proofs by Eakin and Nagata; this formulation is likely the most transparent one: if $M$ is a faithful finitely generated module over a commutative ring $A$ such that the ascending chain condition holds on submodules of the form $IM$ for ideals $I \subset A$, then $M$ is a Noetherian module. - - Suppose otherwise. By assumption, the set of all $IM$, where $I$ is an ideal of $A$ such that $M/IM$ is not Noetherian, has a maximal element, $I_0 M$. Replacing $M$ and $A$ by $M/I_0 M$ and $A/\operatorname{Ann}(M/I_0 M)$, we can assume - -*for each nonzero ideal $I \subset A$, the module $M/IM$ is Noetherian. - -Next, consider the set $S$ of submodules $N \subset M$ such that $M/N$ is faithful. Choose a set of generators $\{ x_1, \dots, x_n \}$ of $M$ and then note that $M/N$ is faithful if and only if for each $a \in A$, the inclusion $\{ a x_1, \dots, a x_n \} \subset N$ implies $a = 0$. Thus, it is clear that Zorn's lemma applies to the set $S$, and so the set has a maximal element, $N_0$. Now, if $M/N_0$ is Noetherian, then it is a faithful Noetherian module over A and, consequently, A is a Noetherian ring, a contradiction.
Hence, $M/N_0$ is not Noetherian and replacing $M$ by $M/N_0$, we can also assume - -*each nonzero submodule $N \subset M$ is such that $M/N$ is not faithful. - -Let a submodule $0 \ne N \subset M$ be given. Since $M/N$ is not faithful, there is a nonzero element $a \in A$ such that $aM \subset N$. By assumption, $M/aM$ is Noetherian and so $N/aM$ is finitely generated. Since $aM$ is also finitely generated, it follows that $N$ is finitely generated; i.e., $M$ is Noetherian, a contradiction. $\square$ diff --git a/wiki/wikipedia/2471.txt b/wiki/wikipedia/2471.txt deleted file mode 100644 index 3322b0cf185649f514a66a8afe932c7045dfc1c7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2471.txt +++ /dev/null @@ -1,95 +0,0 @@ -In database systems, isolation determines how transaction integrity is visible to other users and systems. - -A lower isolation level increases the ability of many users to access the same data at the same time, but increases the number of concurrency effects (such as dirty reads or lost updates) users might encounter. Conversely, a higher isolation level reduces the types of concurrency effects that users may encounter, but requires more system resources and increases the chances that one transaction will block another. - -Isolation is typically defined at database level as a property that defines how or when the changes made by one operation become visible to others. On older systems, it may be implemented systemically, for example through the use of temporary tables. In two-tier systems, a transaction processing (TP) manager is required to maintain isolation. In n-tier systems (such as multiple websites attempting to book the last seat on a flight), a combination of stored procedures and transaction management is required to commit the booking and send confirmation to the customer. - -Isolation is one of the four ACID properties, along with atomicity, consistency and durability. - -Concurrency control comprises the underlying mechanisms in a DBMS which handle isolation and guarantee related correctness. It is heavily used by the database and storage engines both to guarantee the correct execution of concurrent transactions, and (via different mechanisms) the correctness of other DBMS processes. The transaction-related mechanisms typically constrain the database data access operations' timing (transaction schedules) to certain orders characterized as the serializability and recoverability schedule properties. Constraining database access operation execution typically means reduced performance (measured by rates of execution), and thus concurrency control mechanisms are typically designed to provide the best performance possible under the constraints. Often, when possible without harming correctness, the serializability property is compromised for better performance. However, recoverability cannot be compromised, since such typically results in a quick database integrity violation. - -Two-phase locking is the most common transaction concurrency control method in DBMSs, used to provide both serializability and recoverability for correctness. In order to access a database object a transaction first needs to acquire a lock for this object. Depending on the access operation type (e.g., reading or writing an object) and on the lock type, acquiring the lock may be blocked and postponed, if another transaction is holding a lock for that object. 
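As a concrete illustration of the locking discipline just described, here is a minimal Python sketch of strict two-phase locking. All names are hypothetical, and a real DBMS would additionally distinguish shared from exclusive locks and detect deadlocks:

```python
import threading

class LockManager:
    """Toy strict two-phase-locking manager (illustration only)."""
    def __init__(self):
        self._locks = {}                 # object id -> Lock
        self._guard = threading.Lock()

    def _lock_for(self, obj):
        with self._guard:
            return self._locks.setdefault(obj, threading.Lock())

    def acquire(self, txn, obj):
        # Growing phase: block until the lock on `obj` is available.
        if obj in txn.held:              # already locked by this transaction
            return
        self._lock_for(obj).acquire()
        txn.held.append(obj)

    def release_all(self, txn):
        # Shrinking phase: under strict 2PL, locks are released only at
        # commit or abort, never while the transaction is still running.
        for obj in txn.held:
            self._locks[obj].release()
        txn.held.clear()

class Transaction:
    def __init__(self, mgr):
        self.mgr, self.held = mgr, []

    def read(self, obj):
        self.mgr.acquire(self, obj)      # lock before every access
        # ... perform the read ...

    def write(self, obj):
        self.mgr.acquire(self, obj)
        # ... perform the write ...

    def commit(self):
        self.mgr.release_all(self)
```

A transaction that cannot obtain a lock simply blocks, which is exactly the "blocked and postponed" behavior described above.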
- -The ANSI/ISO standard SQL 92 refers to three different read phenomena when Transaction 1 reads data that Transaction 2 might have changed. - -In the following examples, two transactions take place. In the first, Query 1 is performed. Then, in the second transaction, Query 2 is performed and committed. Finally, in the first transaction, Query 1 is performed again. - -The queries operate on a simple table whose rows carry an id and an age; the examples below concern the row with id 1. - -A dirty read (aka uncommitted dependency) occurs when a transaction is allowed to read data from a row that has been modified by another running transaction and not yet committed. - -Dirty reads work similarly to non-repeatable reads; however, the second transaction would not need to be committed for the first query to return a different result. The only thing that may be prevented in the READ UNCOMMITTED isolation level is updates appearing out of order in the results; that is, earlier updates will always appear in a result set before later updates. - -In our example, Transaction 2 changes a row, but does not commit the changes. Transaction 1 then reads the uncommitted data. Now if Transaction 2 rolls back its changes (already read by Transaction 1) or writes different changes to the database, then the view of the data may be wrong in the records of Transaction 1. - -If, for example, Transaction 2 had changed the age in the row with id 1 to 21 before rolling back, Transaction 1 would have read a row with an id of 1 and an age of 21 even though no such row was ever committed. - -A non-repeatable read occurs when, during the course of a transaction, a row is retrieved twice and the values within the row differ between reads. - -The non-repeatable reads phenomenon may occur in a lock-based concurrency control method when read locks are not acquired when performing a SELECT, or when the acquired locks on affected rows are released as soon as the SELECT operation is performed. Under the multiversion concurrency control method, non-repeatable reads may occur when the requirement that a transaction affected by a commit conflict must roll back is relaxed. - -In this example, Transaction 2 commits successfully, which means that its changes to the row with id 1 should become visible. However, Transaction 1 has already seen a different value for age in that row. At the SERIALIZABLE and REPEATABLE READ isolation levels, the DBMS must return the old value for the second SELECT. At READ COMMITTED and READ UNCOMMITTED, the DBMS may return the updated value; this is a non-repeatable read. - -There are two basic strategies used to prevent non-repeatable reads. The first is to delay the execution of Transaction 2 until Transaction 1 has committed or rolled back. This method is used when locking is used, and produces the serial schedule T1, T2. A serial schedule exhibits repeatable reads behaviour. - -In the other strategy, as used in multiversion concurrency control, Transaction 2 is permitted to commit first, which provides for better concurrency. However, Transaction 1, which commenced prior to Transaction 2, must continue to operate on a past version of the database, a snapshot of the moment it was started. When Transaction 1 eventually tries to commit, the DBMS checks if the result of committing Transaction 1 would be equivalent to the schedule T1, T2. If it is, then Transaction 1 can proceed. If it cannot be seen to be equivalent, however, Transaction 1 must roll back with a serialization failure. - -Using a lock-based concurrency control method, in REPEATABLE READ isolation mode, the row with ID = 1 would be locked, thus blocking Query 2 until the first transaction was committed or rolled back.
In READ COMMITTED mode, the second time Query 1 was executed, the age would have changed. - -Under multiversion concurrency control, at the SERIALIZABLE isolation level, both SELECT queries see a snapshot of the database taken at the start of Transaction 1. Therefore, they return the same data. However, if Transaction 2 then attempted to UPDATE that row as well, a serialization failure would occur and Transaction 1 would be forced to roll back. - -At the READ COMMITTED isolation level, each query sees a snapshot of the database taken at the start of each query. Therefore, they each see different data for the updated row. No serialization failure is possible in this mode (because no promise of serializability is made), and Transaction 1 will not have to be retried. - -A phantom read occurs when, in the course of a transaction, new rows are added to or removed from the records being read by another transaction. - -This can occur when range locks are not acquired when performing a SELECT ... WHERE operation. - -The phantom reads anomaly is a special case of non-repeatable reads: it occurs when Transaction 1 repeats a ranged SELECT ... WHERE query and, between both operations, Transaction 2 creates (i.e. INSERTs) new rows (in the target table) which fulfill that WHERE clause. - -Note that Transaction 1 executed the same query twice. If the highest level of isolation were maintained, the same set of rows should be returned both times, and indeed that is what is mandated to occur in a database operating at the SQL SERIALIZABLE isolation level. However, at the lesser isolation levels, a different set of rows may be returned the second time. - -In the SERIALIZABLE isolation mode, Query 1 would result in all records with age in the range 10 to 30 being locked, thus Query 2 would block until the first transaction was committed. In REPEATABLE READ mode, the range would not be locked, allowing the record to be inserted. Therefore, the second statement of Query 1 would not return the same result as the first one. - -Of the four ACID properties in a DBMS (Database Management System), the isolation property is the one most often relaxed. When attempting to maintain the highest level of isolation, a DBMS usually acquires locks on data which may result in a loss of concurrency, or implements multiversion concurrency control. This requires adding logic for the application to function correctly. - -Most DBMSs offer a number of transaction isolation levels, which control the degree of locking that occurs when selecting data. For many database applications, the majority of database transactions can be constructed to avoid requiring high isolation levels (e.g. SERIALIZABLE level), thus reducing the locking overhead for the system. The programmer must carefully analyze database access code to ensure that any relaxation of isolation does not cause software bugs that are difficult to find. Conversely, if higher isolation levels are used, the possibility of deadlock is increased, which also requires careful analysis and programming techniques to avoid. - -Since each isolation level is stronger than those below, in that no higher isolation level allows an action forbidden by a lower one, the standard permits a DBMS to run a transaction at an isolation level stronger than that requested (e.g., a "Read committed" transaction may actually be performed at a "Repeatable read" isolation level). - -The isolation levels defined by the ANSI/ISO SQL standard are, from highest to lowest, serializable, repeatable reads, read committed, and read uncommitted; each is described below. - -Serializable is the highest isolation level.
- -With a lock-based concurrency control DBMS implementation, serializability requires read and write locks (acquired on selected data) to be released at the end of the transaction. Also range-locks must be acquired when a SELECT query uses a ranged WHERE clause, especially to avoid the phantom reads phenomenon. - -When using non-lock based concurrency control, no locks are acquired; however, if the system detects a write collision among several concurrent transactions, only one of them is allowed to commit. See snapshot isolation for more details on this topic. - -From (Second Informal Review Draft) ISO/IEC 9075:1992, Database Language SQL, July 30, 1992: - -The execution of concurrent SQL-transactions at isolation level SERIALIZABLE is guaranteed to be serializable. A serializable execution is defined to be an execution of the operations of concurrently executing SQL-transactions that produces the same effect as some serial execution of those same SQL-transactions. A serial execution is one in which each SQL-transaction executes to completion before the next SQL-transaction begins. - -In the repeatable reads isolation level, a lock-based concurrency control DBMS implementation keeps read and write locks (acquired on selected data) until the end of the transaction. However, range-locks are not managed, so phantom reads can occur. - -Write skew is possible at this isolation level in some systems. Write skew is a phenomenon where two writes are allowed to the same column(s) in a table by two different writers (who have previously read the columns they are updating), resulting in the column having data that is a mix of the two transactions. - -In the read committed isolation level, a lock-based concurrency control DBMS implementation keeps write locks (acquired on selected data) until the end of the transaction, but read locks are released as soon as the SELECT operation is performed (so the non-repeatable reads phenomenon can occur in this isolation level). As in the previous level, range-locks are not managed. - -Putting it in simpler words, read committed is an isolation level that guarantees that any data read is committed at the moment it is read. It simply restricts the reader from seeing any intermediate, uncommitted, 'dirty' read. It makes no promise whatsoever that if the transaction re-issues the read, it will find the same data; data is free to change after it is read. - -Read uncommitted is the lowest isolation level. In this level, dirty reads are allowed, so one transaction may see not-yet-committed changes made by other transactions. - -The default isolation level of different DBMS's varies quite widely. Most databases that feature transactions allow the user to set any isolation level. Some DBMS's also require additional syntax when performing a SELECT statement to acquire locks (e.g. SELECT ... FOR UPDATE to acquire exclusive write locks on accessed rows). - -However, the definitions above have been criticized as being ambiguous, and as not accurately reflecting the isolation provided by many databases: - -This paper shows a number of weaknesses in the anomaly approach to defining isolation levels. The three ANSI phenomena are ambiguous, and even in their loosest interpretations do not exclude some anomalous behavior ... This leads to some counter-intuitive results. In particular, lock-based isolation levels have different characteristics than their ANSI equivalents. This is disconcerting because commercial database systems typically use locking implementations.
Additionally, the ANSI phenomena do not distinguish between a number of types of isolation level behavior that are popular in commercial systems. - -There are also other criticisms concerning ANSI SQL's isolation definition, in that it encourages implementors to do "bad things": - -... it relies in subtle ways on an assumption that a locking schema is used for concurrency control, as opposed to an optimistic or multi-version concurrency scheme. This implies that the proposed semantics are ill-defined. - -The standard's definitions are conventionally summarized in a table that marks, for each isolation level, whether each of the three read phenomena is possible or not possible.
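That summary can be encoded directly. The following Python snippet (an illustrative sketch of ours, not any DBMS's API) records which of the three read phenomena are possible at each ANSI level, matching the level-by-level descriptions above:

```python
# Which read phenomena are possible at each ANSI SQL isolation level,
# per the level definitions discussed above.
ISOLATION_MATRIX = {
    "READ UNCOMMITTED": {"dirty read": True,  "non-repeatable read": True,  "phantom read": True},
    "READ COMMITTED":   {"dirty read": False, "non-repeatable read": True,  "phantom read": True},
    "REPEATABLE READ":  {"dirty read": False, "non-repeatable read": False, "phantom read": True},
    "SERIALIZABLE":     {"dirty read": False, "non-repeatable read": False, "phantom read": False},
}

def possible(level, phenomenon):
    """Return True if the given phenomenon can occur at the given level."""
    return ISOLATION_MATRIX[level][phenomenon]

assert possible("REPEATABLE READ", "phantom read")
assert not possible("SERIALIZABLE", "dirty read")
```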
Anomaly Serializable is not the same as Serializable. That is, it is necessary, but not sufficient that a Serializable schedule should be free of all three phenomena types. diff --git a/wiki/wikipedia/2472.txt b/wiki/wikipedia/2472.txt deleted file mode 100644 index 2a21fb8444770067a1d7f6c17de32ff05157acd2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2472.txt +++ /dev/null @@ -1,45 +0,0 @@ -In set theory, a field of mathematics, the Burali-Forti paradox demonstrates that constructing "the set of all ordinal numbers" leads to a contradiction and therefore shows an antinomy in a system that allows its construction. It is named after Cesare Burali-Forti, who, in 1897, published a paper proving a theorem which, unknown to him, contradicted a previously proved result by Cantor. Bertrand Russell subsequently noticed the contradiction, and when he published it in his 1903 book Principles of Mathematics, he stated that it had been suggested to him by Burali-Forti's paper, with the result that it came to be known by Burali-Forti's name. - -We will prove this by reductio ad absurdum. - -# Let $\Omega$ be a set consisting of all ordinal numbers. - -# $\Omega$ is transitive because for every element $x$ of $\Omega$ (which is an ordinal number and can be any ordinal number) and every element $y$ of $x$ (i.e. under the definition of von Neumann ordinals, for every ordinal number $y < x$), we have that $y$ is an element of $\Omega$ because any ordinal number contains only ordinal numbers, by the definition of this ordinal construction. - -# $\Omega$ is well ordered by the membership relation because all its elements are also well ordered by this relation. - -# So, by steps 2 and 3, we have that $\Omega$ is an ordinal class and also, by step 1, an ordinal number, because all ordinal classes that are sets are also ordinal numbers. - -# This implies that $\Omega$ is an element of $\Omega$. - -# Under the definition of von Neumann ordinals, $\Omega < \Omega$ is the same as $\Omega$ being an element of $\Omega$. This latter statement is proven by step 5. - -# But no ordinal class is less than itself, including $\Omega$ because of step 4 ($\Omega$ is an ordinal class), i.e. $\Omega \nless \Omega$. - -We have deduced two contradictory propositions ($\Omega < \Omega$ and $\Omega \nless \Omega$) from the sethood of $\Omega$ and, therefore, disproved that $\Omega$ is a set. - -The version of the paradox above is anachronistic, because it presupposes the definition of the ordinals due to John von Neumann, under which each ordinal is the set of all preceding ordinals, which was not known at the time the paradox was framed by Burali-Forti. - -Here is an account with fewer presuppositions: suppose that we associate with each well-ordering - -an object called its order type in an unspecified way (the order types are the ordinal numbers). The order types (ordinal numbers) themselves are well-ordered in a natural way, - -and this well-ordering must have an order type $\Omega$. It is easily shown in - -naïve set theory (and remains true in ZFC but not in New Foundations) that the order - -type of all ordinal numbers less than a fixed $\alpha$ is $\alpha$ itself. - -So the order - -type of all ordinal numbers less than $\Omega$ is $\Omega$ itself. But - -this means that $\Omega$, being the order type of a proper initial segment of the ordinals, is strictly less than the order type of all the ordinals, - -but the latter is $\Omega$ itself by definition. This is a contradiction.
- -If we use the von Neumann definition, under which each ordinal is identified as the set of all preceding ordinals, the paradox is unavoidable: the offending proposition that the order type of all ordinal numbers less than a fixed $\alpha$ is $\alpha$ itself must be true. The collection of von Neumann ordinals, like the collection in the Russell paradox, cannot be a set in any set theory with classical logic. But the collection of order types in New Foundations (defined as equivalence classes of well-orderings under similarity) is actually a set, and the paradox is avoided because the order type of the ordinals less than $\Omega$ - -turns out not to be $\Omega$. - -Modern axioms for formal set theory such as ZF and ZFC circumvent this antinomy by not allowing the construction of sets using terms like "all sets with the property $P$", as is possible in naive set theory and as is possible with Gottlob Frege's axioms (specifically Basic Law V) in the Grundgesetze der Arithmetik. Quine's system New Foundations (NF) uses a different solution. Rosser showed that in the original version of Quine's system "Mathematical Logic" (ML), an extension of New Foundations, it is possible to derive the Burali-Forti paradox, showing that this system was contradictory. Quine's revision of ML following Rosser's discovery does not suffer from this defect, and indeed was subsequently proved equiconsistent with NF by Hao Wang. diff --git a/wiki/wikipedia/2473.txt b/wiki/wikipedia/2473.txt deleted file mode 100644 index 0f0a2643df33347204aab4438dceb943e0c7e4f1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2473.txt +++ /dev/null @@ -1,194 +0,0 @@ -In number theory, the law of quadratic reciprocity, like the Pythagorean theorem, has lent itself to an unusually large number of proofs. Several hundred proofs of the law of quadratic reciprocity have been published. - -Of the elementary combinatorial proofs, there are two which apply types of double counting. One by Gotthold Eisenstein counts lattice points. Another applies Zolotarev's lemma to $(\mathbb{Z}/pq\mathbb{Z})^{\times} $, expressed by the Chinese remainder theorem as $(\mathbb{Z} /p \mathbb{Z})^{\times} \times (\mathbb{Z} /q \mathbb{Z})^{\times}$ and calculates the signature of a permutation. The shortest known proof also uses a simplified version of double counting, namely double counting modulo a fixed prime. - -Eisenstein's proof of quadratic reciprocity is a simplification of Gauss's third proof. It is more geometrically intuitive and requires less technical manipulation. - -The point of departure is "Eisenstein's lemma", which states that for distinct odd primes p, q, -$$ -\left(\frac qp\right) = (-1)^{\sum_u \left \lfloor qu/p \right \rfloor}, -$$ - -where $\left \lfloor x \right \rfloor$ denotes the floor function (the largest integer less than or equal to x), and where the sum is taken over the even integers u = 2, 4, 6, ..., p−1. For example, -$$ -\left(\frac 7{11}\right) = (-1)^{ \left \lfloor 14/11 \right \rfloor + \left \lfloor 28/11 \right \rfloor + \left \lfloor 42/11 \right \rfloor + \left \lfloor 56/11 \right \rfloor + \left \lfloor 70/11 \right \rfloor } = (-1)^{1 + 2 + 3 + 5 + 6} = (-1)^{17} = -1. -$$ - -This result is very similar to Gauss's lemma, and can be proved in a similar fashion (proof given below). - -Using this representation of (q/p), the main argument is quite elegant.
The sum $\sum_u \left \lfloor qu/p \right \rfloor$ counts the number of lattice points with even x-coordinate in the interior of the triangle ABC in the following diagram: - -Because each column has an even number of points (namely q−1 points), the number of such lattice points in the region BCYX is the same modulo 2 as the number of such points in the region CZY: - -Then by flipping the diagram in both axes, we see that the number of points with even x-coordinate inside CZY is the same as the number of points inside AXY having odd x-coordinates. This can be justified mathematically by noting that $\textstyle q-1-\left\lfloor \frac{2kq}{p}\right\rfloor = \left\lfloor\frac{(p-2k)q}{p}\right\rfloor$. - -The conclusion is that -$$ -\left(\frac qp\right) = (-1)^\mu, -$$ - -where μ is the total number of lattice points in the interior of AXY. - -Switching p and q, the same argument shows that -$$ -\left(\frac pq\right) = (-1)^\nu, -$$ - -where ν is the number of lattice points in the interior of WYA. Since there are no lattice points on the line AY itself (because p and q are relatively prime), and since the total number of points in the rectangle WYXA is -$$ -\left(\frac{p-1}2\right) \left(\frac{q-1}2\right), -$$ - -we obtain -$$ -\left(\frac qp\right) \left(\frac pq\right) = (-1)^{\mu + \nu} = (-1)^{(p-1)(q-1)/4}. -$$ - -For an even integer u in the range 1 ≤ u ≤ p−1, denote by r(u) the least positive residue of qu modulo p. (For example, for p = 11, q = 7, we allow u = 2, 4, 6, 8, 10, and the corresponding values of r(u) are 3, 6, 9, 1, 4.) The numbers $(-1)^{r(u)}r(u)$, again treated as least positive residues modulo p, are all even (in our running example, they are 8, 6, 2, 10, 4.) Furthermore, they are all distinct, because if $(-1)^{r(u)}r(u) \equiv (-1)^{r(t)}r(t) \pmod{p}$, then we may divide out by q to obtain u ≡ ±t (mod p). This forces u ≡ t (mod p), because both u and t are even, whereas p is odd. Since there are exactly (p−1)/2 of them and they are distinct, they must be simply a rearrangement of the even integers 2, 4, ..., p−1. Multiplying them together, we obtain -$$ -(-1)^{r(2)}2q \cdot (-1)^{r(4)}4q \cdot \cdots \cdot (-1)^{r(p-1)}(p-1)q \equiv 2 \cdot 4 \cdot \cdots \cdot (p-1)\pmod{p}. -$$ - -Dividing out successively by 2, 4, ..., p−1 on both sides (which is permissible since none of them are divisible by p) and rearranging, we have -$$ -q^{(p-1)/2} \equiv (-1)^{r(2) + r(4) + \cdots + r(p-1)}\pmod{p}. -$$ - -On the other hand, by the definition of r(u) and the floor function, -$$ -\frac{qu}p = \left \lfloor \frac{qu}p\right \rfloor + \frac{r(u)}p, -$$ - -and since p is odd and u is even, -$$ -qu = p\left\lfloor\frac{qu}p\right\rfloor + r(u) -$$ - -implies that $\left \lfloor qu/p \right \rfloor$ and r(u) are congruent modulo 2. - -Finally this shows that -$$ -q^{(p-1)/2} \equiv (-1)^{\sum_u \left \lfloor qu/p \right \rfloor} \pmod{p}. -$$ - -We are finished because the left hand side is just an alternative expression for (q/p). - -This lemma essentially states that the number of least residues after doubling that are odd gives the value of (q/p). This follows easily from Gauss' lemma. - -Also, $qu = p\left\lfloor\frac{qu}p\right\rfloor + r(u)$ implies that $\left \lfloor qu/p \right \rfloor$ and r(u) are either congruent modulo 2, or incongruent, depending solely on the parity of u.
- -This means that the residues $1,2,\dots,\frac{p-1}{2}$ are (in)congruent to $\left \lfloor qu/p \right \rfloor$, and so -$$ -(-1)^{\frac{p-1}{2}}\equiv (-1)^{\sum_u \left \lfloor qu/p \right \rfloor}\equiv (-1)^{\sum_u r(u)+u} -$$ - -where $\textstyle 1\le u\le \frac{p-1}{2}$. - -For example, using the previous example of $p=11, q=7$, the residues are $7,3,10,6,2$ and the floor function gives $0,1,1,2,3$. The pattern of congruence is $1,0,1,0,1$. - -The proof of Quadratic Reciprocity using Gauss sums is one of the more common and classic proofs. These proofs work by comparing computations of single values in two different ways, one using Euler's Criterion and the other using the Binomial theorem. As an example of how Euler's criterion is used, we can use it to give a quick proof of the first supplemental case of determining $\left(\frac{-1}{p}\right)$ for an odd prime p: By Euler's criterion $\left(\frac{-1}{p}\right) \equiv (-1)^{\frac{p-1}{2}} \pmod{p}$, but since both sides of the equivalence are ±1 and p is odd, we can deduce that $\left(\frac{-1}{p}\right) = (-1)^{\frac{p-1}{2}}$. - -Let $\zeta_8 = e^{2\pi i/8}$, a primitive 8th root of unity, and set $\tau = \zeta_8+\zeta_8^{-1}$. Since $\zeta_8^2 = i$ and $\zeta_8^{-2}=-i$ we see that $\tau^{2} = 2$. Because $\tau$ is an algebraic integer, if p is an odd prime it makes sense to talk about it modulo p. (Formally we are considering the commutative ring formed by factoring the algebraic integers $ \mathbf A $ with the ideal generated by p. Because $ p^{-1} $ is not an algebraic integer, 1, 2, ..., p are distinct elements of $ {\mathbf A}/p{\mathbf A} $.) Using Euler's criterion, it follows that -$$ -\tau^{p-1} =(\tau^2)^{\frac{p-1}{2}} = 2^{\frac{p-1}{2}} \equiv \left(\frac{2}{p}\right) \pmod{p} -$$ - -We can then say that -$$ -\tau^p\equiv \left(\frac{2}{p}\right)\tau\pmod{p} -$$ - -But we can also compute $\tau^p \pmod{p}$ using the binomial theorem. Because the cross terms in the binomial expansion all contain factors of p, we find that $\tau^p\equiv \zeta_8^p+\zeta_8^{-p} \pmod{p}$. We can evaluate this more exactly by breaking this up into two cases: - -* $p\equiv \pm 1\pmod{8} \Rightarrow \zeta_8^p+\zeta_8^{-p} = \zeta_8+\zeta_8^{-1}$. - -* $p\equiv \pm 3 \pmod{8} \Rightarrow \zeta_8^p+\zeta_8^{-p} = -\zeta_8-\zeta_8^{-1}$. - -These are the only options for a prime modulo 8 and both of these cases can be computed using the exponential form $\zeta_8=e^{\frac{2\pi i}{8}}$. We can write this succinctly for all odd primes p as -$$ -\tau^p \equiv (-1)^{\frac{p^2-1}{8}}\tau \pmod{p} -$$ - -Combining these two expressions for $\tau^p\pmod{p}$ and multiplying through by $\tau$ we find that $2\cdot \left(\frac{2}{p}\right) \equiv 2\cdot(-1)^{\frac{p^2-1}{8}}\pmod{p}$. Since both $\left(\frac{2}{p}\right)$ and $(-1)^{\frac{p^2-1}{8}}$ are ±1 and 2 is invertible modulo p, we can conclude that -$$ -\left(\frac{2}{p}\right) = (-1)^{\frac{p^2-1}{8}} -$$ - -The idea for the general proof follows the above supplemental case: Find an algebraic integer that somehow encodes the Legendre symbols for p, then find a relationship between Legendre symbols by computing the qth power of this algebraic integer modulo q in two different ways, one using Euler's criterion and the other using the binomial theorem. - -Let -$$ -g_p = \sum_{k=1}^{p-1}\left(\frac{k}{p}\right)\zeta_p^k -$$ - -where $\zeta_p=e^{2\pi i/p}$ is a primitive pth root of unity. This is a quadratic Gauss sum. A fundamental property of these Gauss sums is that -$$ -g_p^2 = p^* -$$ - -where $p^*=\left(\frac{-1}{p}\right) p$.
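This identity is easy to check numerically. The following Python sketch (ours, not part of the proof) evaluates $g_p$ as a complex number and compares $g_p^2$ with $p^*$ for a few small primes, computing the Legendre symbol via Euler's criterion:

```python
import cmath

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def gauss_sum(p):
    zeta = cmath.exp(2j * cmath.pi / p)  # primitive p-th root of unity
    return sum(legendre(k, p) * zeta**k for k in range(1, p))

# Check g_p^2 = p* = (-1/p) * p for a few small odd primes.
for p in (3, 5, 7, 11, 13):
    p_star = legendre(-1, p) * p
    assert abs(gauss_sum(p)**2 - p_star) < 1e-6, p
```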
To put this in context of the next proof, the individual elements of the Gauss sum are in the cyclotomic field $L = \mathbb{Q}(\zeta_p)$ but the above formula shows that the sum itself is a generator of the unique quadratic field contained in L. Again, since the quadratic Gauss sum is an algebraic integer, we can use modular arithmetic with it. Using this fundamental formula and Euler's criterion we find that -$$ -g_p^{q-1} = (g_p^2)^{\frac{q-1}{2}} = (p^*)^{\frac{q-1}{2}} \equiv \left(\frac{p^*}{q}\right)\pmod{q} -$$ - -Therefore -$$ -g_p^q \equiv \left(\frac{p^*}{q}\right)g_p\pmod{q} -$$ - -Using the binomial theorem, we also find that $g_p^q\equiv\sum_{k=1}^{p-1}\left(\frac{k}{p}\right)\zeta_p^{qk} \pmod{q}$. If we let a be a multiplicative inverse of $q\pmod{p}$, then we can rewrite this sum as $\left(\frac{a}{p}\right)\sum_{t=1}^{p-1}\left(\frac{t}{p}\right)\zeta_p^t$ using the substitution $t=qk$, which doesn't affect the range of the sum. Since $\left(\frac{a}{p}\right)=\left(\frac{q}{p}\right)$, we can then write -$$ -g_p^q\equiv\left(\frac{q}{p}\right)g_p\pmod{q} -$$ - -Using these two expressions for $g_p^q \pmod{q}$, and multiplying through by $g_p$ gives -$$ -\left(\frac{q}{p}\right)p^*\equiv \left(\frac{p^*}{q}\right)p^*\pmod{q} -$$ - -Since $p^*$ is invertible modulo q, and the Legendre symbols are either ±1, we can then conclude that -$$ -\left(\frac{q}{p}\right) = \left(\frac{p^*}{q}\right) -$$ - -The proof presented here is by no means the simplest known; however, it is quite a deep one, in the sense that it motivates some of the ideas of Artin reciprocity. - -Suppose that p is an odd prime. The action takes place inside the cyclotomic field -$$ -L = \mathbb Q(\zeta_p), -$$ - -where $\zeta_p$ is a primitive pth root of unity. The basic theory of cyclotomic fields informs us that there is a canonical isomorphism -$$ -G = \operatorname{Gal}(L/\mathbb Q) \cong (\Z/p\Z)^\times -$$ - -which sends the automorphism $\sigma_a$ satisfying $\sigma_a(\zeta_p) = \zeta_p^a$ to the element $a \in (\Z/p\Z)^\times.$ In particular, this isomorphism is injective because the multiplicative group of a finite field is a cyclic group: $F^\times\cong C_{p-1}$. - -Now consider the subgroup H of squares of elements of G. Since G is cyclic, H has index 2 in G, so the subfield corresponding to H under the Galois correspondence must be a quadratic extension of Q. (In fact it is the unique quadratic extension of Q contained in L.) The Gaussian period theory determines which one; it turns out to be $\mathbb Q(\sqrt{p^*})$, where -$$ -p^* = \left\{\begin{array}{rl} p & \text{if } p \equiv 1 \pmod{4}, \\ -p & \text{if } p \equiv 3 \pmod{4}. \end{array}\right. -$$ - -At this point we start to see a hint of quadratic reciprocity emerging from our framework. On one hand, the image of H in $(\mathbb Z/p\mathbb Z)^\times$ consists precisely of the (nonzero) quadratic residues modulo p. On the other hand, H is related to an attempt to take the square root of p (or possibly of -p). In other words, if now q is a prime (different from p), we have shown that -$$ -\left(\frac qp\right) =1 \quad \iff \quad \sigma_q \in H \quad \iff \quad \sigma_q \mbox{ fixes } \mathbb Q(\sqrt{p^*}). -$$ - -In the ring of integers $\mathcal O_L=\mathbb Z[\zeta_p]$, choose any unramified prime ideal β of $\mathcal O_L$ lying over q, and let $\phi \in \operatorname{Gal}(L/\mathbb Q)$ be the Frobenius automorphism associated to β; the characteristic property of $\phi$ is that -$$ -\phi(x) \equiv x^q\!\!\! \pmod{\beta}\ \text{ for any } x\in \mathcal O_L.
-$$ - -(The existence of such a Frobenius element depends on quite a bit of algebraic number theory machinery.) - -The key fact about $\phi$ that we need is that for any subfield K of L, -$$ -\phi\mbox{ fixes } K \quad \iff \quad q\mbox{ splits completely in } K. -$$ - -Indeed, let δ be any ideal of $\mathcal O_K$ below β (and hence above q). Then, since $\phi(x) \equiv x^q\!\!\! \pmod{\delta} $ for any $x\in \mathcal O_K$, we see that $\phi\vert_K \in \operatorname{Gal}(K/\mathbb Q)$ is a Frobenius for δ. A standard result concerning $\phi$ is that its order is equal to the corresponding inertial degree; that is, -$$ -\operatorname{ord}(\phi\vert_K) = [O_K/\delta O_K : \mathbb Z/q\mathbb Z]. -$$ - -The left hand side is equal to 1 if and only if φ fixes K, and the right hand side is equal to one if and only if q splits completely in K, so we are done. - -Now, since the pth roots of unity are distinct modulo β (i.e. the polynomial $X^p - 1$ is separable in characteristic q), we must have -$$ -\phi(\zeta_p) = \zeta_p^q; -$$ - -that is, $\phi$ coincides with the automorphism $\sigma_q$ defined earlier. Taking K to be the quadratic field in which we are interested, we obtain the equivalence -$$ -\left(\frac qp\right) =1 \quad \iff \quad q\mbox{ splits completely in } \mathbb Q(\sqrt{p^*}). -$$ - -Finally we must show that -$$ -q\mbox{ splits completely in } \mathbb Q(\sqrt{p^*}) \quad \iff \quad \left(\frac{p^*}q\right) = 1. -$$ - -Once we have done this, the law of quadratic reciprocity falls out immediately since -$$ -\left(\frac{p^*}q\right) = \left(\frac pq\right)\ \text{ for } p\equiv 1\!\!\! \pmod 4, -$$ - -and - -\begin{align}\left(\frac{p^*}q\right) = \left(\frac{-p}q\right) = \left(\frac{-1}q\right)\left(\frac pq\right) = \begin{cases} +\left(\frac pq \right) & \mbox{if } q \equiv 1 \pmod{4}, \\ -\left(\frac pq\right) & \mbox{if } q \equiv 3 \pmod{4}\end{cases}\end{align} - -for $p\equiv 3\!\!\! \pmod 4$. - -To show the last equivalence, suppose first that $\left(\frac{p^*}q\right) = 1.$ In this case, there is some integer x (not divisible by q) such that $ x^2 \equiv p^* \pmod{q}, $ say $ x^2 - p^* = cq $ for some integer c. Let -$$ -K = \mathbb Q(\sqrt{p^*}), -$$ and consider the ideal $(x-\sqrt{p^*},q)$ of K. It certainly divides the principal ideal (q). It cannot be equal to (q), since $x-\sqrt{p^*}$ is not divisible by q. It cannot be the unit ideal, because then -$$ -(x+\sqrt{p^*}) = (x+\sqrt{p^*})(x-\sqrt{p^*},q) = (cq, q(x+\sqrt{p^*})) -$$ - -is divisible by q, which is again impossible. Therefore (q) must split in K. - -Conversely, suppose that (q) splits, and let β be a prime of K above q. Then $(q) \subsetneq \beta,$ so we may choose some -$$ -a+b\sqrt{p^*} \in \beta\setminus(q), \text{ where } a,b\in \mathbb Q. -$$ - -Actually, since $p^* \equiv 1\!\!\! \pmod{4},$ elementary theory of quadratic fields implies that the ring of integers of K is precisely $\mathbb Z\left[\frac{1+\sqrt{p^*}}2\right],$ so the denominators of a and b are at worst equal to 2. Since q ≠ 2, we may safely multiply a and b by 2, and assume that $a+b\sqrt{p^*} \in \beta\setminus(q),$ where now a and b are in Z. In this case we have -$$ -(a+b\sqrt{p^*})(a-b\sqrt{p^*}) = a^2 - b^2p^* \in \beta \cap \mathbb Z = (q), -$$ - -so $q \mid a^2 - b^2p^*.$ However, q cannot divide b, since then also q divides a, which contradicts our choice of $a+b\sqrt{p^*}.$ Therefore, we may divide by b modulo q, to obtain $p^* \equiv (ab^{-1})^2\!\!\! \pmod{q}$ as desired.
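Although no computation is needed for the proof, the law itself is easy to test numerically. The following Python sketch (illustrative only, not part of the article) verifies $\left(\frac qp\right)\left(\frac pq\right) = (-1)^{(p-1)(q-1)/4}$ for all distinct odd primes below 60, with the Legendre symbol computed via Euler's criterion:

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def odd_primes(bound):
    # trial division is fine for a check this small
    return [n for n in range(3, bound) if all(n % d for d in range(2, int(n**0.5) + 1))]

# (q/p)(p/q) should equal (-1)^((p-1)(q-1)/4) for distinct odd primes.
for p in odd_primes(60):
    for q in odd_primes(60):
        if p != q:
            assert legendre(q, p) * legendre(p, q) == (-1)**((p - 1) * (q - 1) // 4)
```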
diff --git a/wiki/wikipedia/2474.txt b/wiki/wikipedia/2474.txt deleted file mode 100644 index 8f6672c9969354fdccc01a74c3f36c3b9e272a60..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2474.txt +++ /dev/null @@ -1,35 +0,0 @@ -In mathematics, the Malgrange–Ehrenpreis theorem states that every non-zero linear differential operator with constant coefficients has a Green's function. It was first proved independently by Leon Ehrenpreis and Bernard Malgrange. - -This means that the differential equation -$$ -P\left(\frac{\partial}{\partial x_1}, \ldots, \frac{\partial}{\partial x_\ell} \right) u(\mathbf{x}) = \delta(\mathbf{x}), -$$ - -where P is a polynomial in several variables and δ is the Dirac delta function, has a distributional solution u. It can be used to show that -$$ -P\left(\frac{\partial}{\partial x_1}, \ldots, \frac{\partial}{\partial x_\ell} \right) u(\mathbf{x}) = f(\mathbf{x}) -$$ - -has a solution for any compactly supported distribution f. The solution is not unique in general. - -The analogue for differential operators whose coefficients are polynomials (rather than constants) is false: see Lewy's example. - -The original proofs of Malgrange and Ehrenpreis were non-constructive as they used the Hahn–Banach theorem. Since then several constructive proofs have been found. - -There is a very short proof using the Fourier transform and the Bernstein–Sato polynomial, as follows. By taking Fourier transforms the Malgrange–Ehrenpreis theorem is equivalent to the fact that every non-zero polynomial P has a distributional inverse. By replacing P by the product with its complex conjugate, one can also assume that P is non-negative. For non-negative polynomials P the existence of a distributional inverse follows from the existence of the Bernstein–Sato polynomial, which implies that $P^s$ can be analytically continued as a meromorphic distribution-valued function of the complex variable s; the constant term of the Laurent expansion of $P^s$ at s = −1 is then a distributional inverse of P. - -Other proofs, often giving better bounds on the growth of a solution, are also known, and detailed discussions of the regularity properties of the fundamental solutions are available in the literature. - -A short constructive proof gives the explicit fundamental solution -$$ - E=\frac{1}{\overline{P_m(2\eta)}} \sum_{j=0}^m a_j e^{\lambda_j\eta x} \mathcal{F}^{-1}_{\xi}\left(\frac{\overline{P(i\xi+\lambda_j\eta)}}{P(i \xi + \lambda_j \eta)}\right) -$$ - -which is a fundamental solution of P(∂), i.e., P(∂)E = δ, if $P_m$ is the principal part of P, $\eta \in \mathbb{R}^n$ with $P_m(\eta) \neq 0$, the real numbers $\lambda_0, \dots, \lambda_m$ are pairwise different, and -$$ -a_j=\prod_{k=0,k\neq j}^m(\lambda_j-\lambda_k)^{-1}. -$$ diff --git a/wiki/wikipedia/2475.txt b/wiki/wikipedia/2475.txt deleted file mode 100644 index a8e88e5125a5726fba0131ddffd6b598355cdb9b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2475.txt +++ /dev/null @@ -1,33 +0,0 @@ -Connes' embedding problem, formulated by Alain Connes in the 1970s, is a major problem in von Neumann algebra theory. Since then, the problem has been reformulated in several different areas of mathematics. Dan Voiculescu, developing his free entropy theory, found that Connes' embedding problem is related to the existence of microstates. Some results of von Neumann algebra theory can be obtained by assuming a positive solution to the problem. The problem is connected to some basic questions in quantum theory, which led to the realization that it also has important implications in computer science. - -The problem admits a number of equivalent formulations.
Notably, it is equivalent to the following long-standing problems: - -* Kirchberg's QWEP conjecture in C*-algebra theory - -* Tsirelson's problem in quantum information theory - -* The predual of any (separable) von Neumann algebra is finitely representable in the trace class. - -In January 2020, Ji, Natarajan, Vidick, Wright, and Yuen announced a result in quantum complexity theory that implies a negative answer to Connes' embedding problem. - -Let $\omega$ be a free ultrafilter on the natural numbers and let R be the hyperfinite type II1 factor with trace $\tau$. One can construct the ultrapower $R^\omega$ as follows: let $l^\infty(R)=\{(x_n)_n\subseteq R:\sup_n||x_n||<\infty\}$ be the von Neumann algebra of norm-bounded sequences and let $I_\omega=\{(x_n)\in l^\infty(R):\lim_{n\rightarrow\omega}\tau(x_n^*x_n)^{\frac{1}{2}}=0\}$. The quotient $R^\omega = l^\infty(R)/I_\omega$ turns out to be a II1 factor with trace $\tau_{R^\omega}((x_n)_n+I_\omega)=\lim_{n\rightarrow\omega}\tau(x_n)$, where $(x_n)_n$ is any representative sequence of the equivalence class. - -Connes' embedding problem asks whether every type II1 factor on a separable Hilbert space can be embedded into some $R^\omega$. - -A positive solution to the problem would imply that invariant subspaces exist for a large class of operators in type II1 factors (Uffe Haagerup), and that all countable discrete groups are hyperlinear. A positive solution to the problem would be implied by equality between free entropy $\chi^*$ and free entropy defined by microstates (Dan Voiculescu). In January 2020, a group of researchers (see above) claimed to have resolved the problem in the negative. Conferences and workshops dedicated to the problem include: - -*Connes' embedding problem and quantum information theory workshop; Vanderbilt University in Nashville, Tennessee; May 1-7, 2020 - -* The many faceted Connes' Embedding Problem; BIRS, Canada; July 14-19, 2019 - -* Winter school: Connes' embedding problem and quantum information theory; University of Oslo, January 07-11, 2019 - -* Workshop on Sofic and Hyperlinear Groups and the Connes Embedding Conjecture; UFSC Florianopolis, Brazil; June 10-21, 2018 - -* Approximation Properties in Operator Algebras and Ergodic Theory; UCLA; April 30 - May 5, 2018 - -* Operator Algebras and Quantum Information Theory; Institut Henri Poincare, Paris; December 2017 - -* Workshop on Operator Spaces, Harmonic Analysis and Quantum Probability; ICMAT, Madrid; May 20-June 14, 2013 - -* Fields Workshop around Connes Embedding Problem - University of Ottawa, May 16-18, 2008 diff --git a/wiki/wikipedia/2476.txt b/wiki/wikipedia/2476.txt deleted file mode 100644 index f5c6cfbe0794d21a3957fb6051628c7df544661f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2476.txt +++ /dev/null @@ -1,7 +0,0 @@ -Circle packing in a square is a packing problem in applied mathematics, where the aim is to pack n unit circles into the smallest possible square; or, equivalently, to arrange n points in a unit square aiming to get the greatest minimal separation, $d_n$, between points. To convert between these two formulations of the problem, the square side for unit circles will be $L=2+\frac{2}{d_n}$. - -Solutions (not necessarily optimal) have been computed for every N≤10,000. - -The obvious square packing is optimal for 1, 4, 9, 16, 25, and 36 circles (the smallest six square numbers), but ceases to be optimal for larger squares from 49 onwards. - -Dense packings of circles in squares and other rectangles have been the subject of many investigations.
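The conversion between the two formulations can be illustrated with a short sketch (the function name below is ours):

```python
import math

# Convert the point-separation formulation to the unit-circle formulation:
# n points with minimal pairwise separation d_n in a unit square correspond
# to n unit circles packed in a square of side L = 2 + 2/d_n.

def side_for_unit_circles(d_n: float) -> float:
    """Square side needed for n unit circles, given the point separation d_n."""
    return 2 + 2 / d_n

# Example: for n = 2 the optimal separation is the unit-square diagonal,
# d_2 = sqrt(2), so two unit circles need a square of side 2 + sqrt(2).
print(side_for_unit_circles(math.sqrt(2)))  # ~3.414
```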
diff --git a/wiki/wikipedia/2477.txt b/wiki/wikipedia/2477.txt deleted file mode 100644 index d7eb434a1622d66630c206a53f5c41d013931a5d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2477.txt +++ /dev/null @@ -1,21 +0,0 @@ -sync is a standard system call in the Unix operating system, which commits all data in the kernel filesystem buffers to non-volatile storage, i.e., data which has been scheduled for writing via low-level I/O system calls. Higher-level I/O layers such as stdio may maintain separate buffers of their own. - -As a function in C, the sync() call is typically declared as void sync(void) in <unistd.h>. The system call is also available via a command line utility also called sync, and similarly named functions in other languages such as Perl and Node.js (in the fs module). - -The related system call fsync() commits just the buffered data relating to a specified file descriptor. fdatasync() is also available to write out just the changes made to the data in the file, and not necessarily the file's related metadata. - -Some Unix systems run a kind of flush or update daemon, which calls the sync function on a regular basis. On some systems, the cron daemon does this, and on Linux it was handled by the pdflush daemon which was replaced by a new implementation and finally removed from the Linux kernel in 2012. Buffers are also flushed when filesystems are unmounted or remounted read-only, for example prior to system shutdown. - -In order to provide proper durability, databases need to use some form of sync in order to make sure the information written has made it to non-volatile storage rather than just being stored in a memory-based write cache that would be lost if power failed. PostgreSQL for example may use a variety of different sync calls, including fsync() and fdatasync(), in order for commits to be durable. Unfortunately, for any single client writing a series of records, a rotating hard drive can only commit once per rotation, which makes for at best a few hundred such commits per second. Turning off the fsync requirement can therefore greatly improve commit performance, but at the expense of potentially introducing database corruption after a crash. - -Databases also employ transaction log files (typically much smaller than the main data files) that have information about recent changes, such that changes can be reliably redone in case of crash; then the main data files can be synced less often. - -To avoid any data loss, the return values of fsync() should be checked, because when performing I/O operations that are buffered by the library or the kernel, errors may not be reported at the time of using the write() system call or the fflush() call, since the data may not be written to non-volatile storage but only be written to the memory page cache. Errors from writes are instead often reported during system calls to fsync(), msync() or close(). Prior to 2018, Linux's fsync() behavior under certain circumstances failed to report error status; a change in behavior was proposed on 23 April 2018. - -Hard disks may default to using their own volatile write cache to buffer writes, which greatly improves performance while introducing a potential for lost writes. Tools such as hdparm -F will instruct the HDD controller to flush the on-drive write cache buffer. The performance impact of turning caching off is so large that even the normally conservative FreeBSD community rejected disabling write caching by default in FreeBSD 4.3.
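A minimal sketch of the durable-write pattern described above, in Python (the file name is arbitrary; os.fsync wraps the fsync() system call, so errors deferred by the page cache surface here rather than at write time):

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data to path and force it to non-volatile storage."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)   # may raise OSError; this is where write-back errors appear
    finally:
        os.close(fd)

durable_write("journal.log", b"committed record\n")
```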
- -In SCSI and in SATA with Native Command Queuing (but not in plain ATA, even with TCQ) the host can specify whether it wants to be notified of completion when the data hits the disk's platters or when it hits the disk's buffer (on-board cache). Assuming a correct hardware implementation, this feature allows the disk's on-board cache to be used while guaranteeing correct semantics for system calls like fsync. This hardware feature is called Force Unit Access (FUA) and it allows consistency with less overhead than flushing the entire cache as done for ATA (or SATA non-NCQ) disks. Although Linux enabled NCQ around 2007, it did not enable SATA/NCQ FUA until 2012, citing lack of support in the early drives. - -Firefox 3.0, released in 2008, introduced fsync system calls that were found to degrade its performance; the call was introduced in order to guarantee the integrity of the embedded SQLite database. - -Linux Foundation chief technical officer Theodore Ts'o claims there is no need to "fear fsync", and that the real cause of Firefox 3 slowdown is the excessive use of fsync. He also concedes however (quoting Mike Shaver) that "On some rather common Linux configurations, especially using the ext3 filesystem in the “data=ordered” mode, calling fsync doesn't just flush out the data for the file it's called on, but rather on all the buffered data for that filesystem." diff --git a/wiki/wikipedia/2478.txt b/wiki/wikipedia/2478.txt deleted file mode 100644 index 48c4ecf777b621807657d2513ee158a4612eb381..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2478.txt +++ /dev/null @@ -1,35 +0,0 @@ -
In computer science, the planar 3-satisfiability problem (abbreviated PLANAR 3SAT or PL3SAT) is an extension of the classical Boolean 3-satisfiability problem to a planar incidence graph. In other words, it asks whether the variables of a given Boolean formula (whose incidence graph, consisting of variables and clauses, can be embedded on a plane) can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable. - -Like 3SAT, PLANAR 3SAT is NP-complete, and is commonly used in reductions. - -A planar graph is a graph that can be drawn on the plane in a way such that no two of its edges cross each other. Every 3SAT problem can be converted to an incidence graph in the following manner: For every variable $v_i$, the graph has one corresponding node $v_i$, and for every clause $c_j$, the graph has one corresponding node $c_j.$ An edge $(v_i, c_j)$ is created between variable $v_i$ and clause $c_j$ whenever $v_i$ or $\bar{v_i}$ is in $c_j$. Positive and negative literals are distinguished using edge colorings. - -Planar 3SAT is a subset of 3SAT in which the incidence graph of the variables and the clauses of a Boolean formula is planar. It is important because it is a restricted variant, and is still NP-complete. Many problems (for example games and puzzles) cannot represent non-planar graphs. Hence, Planar 3SAT provides a way to prove those games to be NP-hard. - -The following proof sketch follows the proof of D. Lichtenstein. - -Trivially, PLANAR 3SAT is in NP. It is thus sufficient to show that it is NP-hard via reduction from 3SAT. - -First, draw the incidence graph of the 3SAT formula. Since no two variables or clauses are connected, the resulting graph will be bipartite. The resulting graph may not be planar. Replace every crossing of edges with a crossover gadget. However, the gadget introduces a minor error: some clauses contain 4 variables and some contain only 2 variables, so the premises of 3SAT are not followed exactly. This glitch is easily fixable: For a clause that only contains 2 variables, either create parallel edges from one variable to the clause or create a separate false variable to include in the constraint. - -For the 4-variable clause, borrow the reduction from 4SAT to 3SAT to create a gadget that involves introducing an extra variable set to false which represents whether the left or right side of the original clause contained the satisfying literal. This completes the reduction. - -* Planar 3SAT with a variable-cycle: Here, in addition to the incidence graph being planar, the following graph should also be planar: there is a cycle going through all variables, each clause is either inside the cycle or outside it, and linked to its corresponding variables. This problem is NP-complete. - -* Planar positive rectilinear 1-in-3SAT: This is the planar equivalent of positive 1-in-3SAT. It is NP-complete. - -* Planar NAE 3SAT: This problem is the planar equivalent of NAE 3SAT. Surprisingly, it can be solved in polynomial time. The proof is by reduction to planar maximum cut.
- -*Planar circuit SAT: This is a variant of circuit SAT in which the circuit, computing the SAT formula, is a planar directed acyclic graph. Note that this is a different graph than the adjacency graph of the formula. This problem is NP-complete. - -Shakashaka is a logic puzzle board game developed by publisher Nikoli. The objective is to fill the white squares in a given grid with a pattern of triangles such that each white area in the resulting grid has a rectangular shape. Furthermore, each black square in the grid marked with a number must be orthogonally adjacent to the specified number of triangles. It has been proven to be NP-complete via a reduction from Planar 3SAT. - -Flat folding is the problem of deciding whether a polygonal chain with fixed edge lengths and angles has a planar configuration without crossings. It has been proven to be strongly NP-hard via a reduction from planar monotone rectilinear 3SAT. - -Minimum edge-length partition is the problem of partitioning a polygon into simpler polygons such that the total length of all edges used in the partition is as small as possible. - -When the figure is a rectilinear polygon and it should be partitioned into rectangles, and the polygon is hole-free, then the problem is polynomial. But if it contains holes (even degenerate holes - single points), the problem is NP-hard, by reduction from Planar SAT. The same holds if the figure is any polygon and it should be partitioned into convex figures. - -A related problem is minimum-weight triangulation - finding a triangulation of minimal total edge length. The decision version of this problem is proven to be NP-complete via a reduction from a variant of Planar 1-in-3SAT. diff --git a/wiki/wikipedia/2479.txt b/wiki/wikipedia/2479.txt deleted file mode 100644 index e7f2a9965918174687c0dd7fb54fe8e95daad09f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2479.txt +++ /dev/null @@ -1,27 +0,0 @@ -In geometry, Jung's theorem is an inequality between the diameter of a set of points in any Euclidean space and the radius of the minimum enclosing ball of that set. It is named after Heinrich Jung, who first studied this inequality in 1901. Algorithms also exist to solve the smallest-circle problem explicitly. - -Consider a compact set -$$ -K \subset \mathbb{R}^n -$$ - -and let -$$ -d = \max_{p,q\in K} \| p - q \|_2 -$$ - -be the diameter of K, that is, the largest Euclidean distance between any two of its points. Jung's theorem states that there exists a closed ball with radius -$$ -r \leq d \sqrt{\frac{n}{2(n+1)}} -$$ - -that contains K. The boundary case of equality is attained by the regular n-simplex. - -The most common case of Jung's theorem is in the plane, that is, when n = 2. In this case the theorem states that there exists a circle enclosing all points whose radius satisfies -$$ -r \leq \frac{d}{\sqrt{3}}, -$$ - -and this bound is as tight as possible since when K is an equilateral triangle (or its three vertices) one has $r = \frac{d}{\sqrt{3}}.$ - -For any bounded set S in any metric space, d/2 ≤ r ≤ d. The first inequality is implied by the triangle inequality for the center of the ball and the two diametral points, and the second inequality follows since a ball of radius d centered at any point of S will contain all of S. In a uniform metric space, that is, a space in which all distances are equal, r = d.
At the other end of the spectrum, in an injective metric space such as the Manhattan distance in the plane, r = d/2: any two closed balls of radius d/2 centered at points of S have a non-empty intersection, therefore all such balls have a common intersection, and a radius d/2 ball centered at a point of this intersection contains all of S. Versions of Jung's theorem for various non-Euclidean geometries are also known (see e.g. Dekster 1995, 1997). diff --git a/wiki/wikipedia/248.txt b/wiki/wikipedia/248.txt deleted file mode 100644 index 321ea55847b338cbeb80af7d4a8304d333a4318a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/248.txt +++ /dev/null @@ -1,17 +0,0 @@ -In mathematics, a Littlewood polynomial is a polynomial all of whose coefficients are +1 or -1. - -Littlewood's problem asks how large the values of such a polynomial must be on the unit circle in the complex plane. The answer to this would yield information about the autocorrelation of binary sequences. - -They are named for J. E. Littlewood who studied them in the 1950s. - -A polynomial -$$ - p(x) = \sum_{i=0}^n a_i x^i -$$ - -is a Littlewood polynomial if all the $a_i = \pm 1$. Littlewood's problem asks for constants $c_1$ and $c_2$ such that there are infinitely many Littlewood polynomials $p_n$, of increasing degree n, satisfying -$$ -c_1 \sqrt{n+1} \le | p_n(z) | \le c_2 \sqrt{n+1} -$$ - -for all $z$ on the unit circle. The Rudin–Shapiro polynomials provide a sequence satisfying the upper bound with $c_2 = \sqrt 2$. In 2019, an infinite family of Littlewood polynomials satisfying both the upper and lower bound was constructed by Paul Balister, Béla Bollobás, Robert Morris, Julian Sahasrabudhe, and Marius Tiba. diff --git a/wiki/wikipedia/2480.txt b/wiki/wikipedia/2480.txt deleted file mode 100644 index 77f0ddbaeb96f4c64a95696aeab14f74aab09425..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2480.txt +++ /dev/null @@ -1,83 +0,0 @@ -In group theory, Cayley's theorem, named in honour of Arthur Cayley, states that every group G is isomorphic to a subgroup of a symmetric group. - -More specifically, G is isomorphic to a subgroup of the symmetric group $\operatorname{Sym}(G)$ whose elements are the permutations of the underlying set of G. - -Explicitly, - -* for each $g \in G$, the left-multiplication-by-g map $\ell_g \colon G \to G$ sending each element x to gx is a permutation of G, and - -* the map $G \to \operatorname{Sym}(G)$ sending each element g to $\ell_g$ is an injective homomorphism, so it defines an isomorphism from G onto a subgroup of $\operatorname{Sym}(G)$. - -The homomorphism $G \to \operatorname{Sym}(G)$ can also be understood as arising from the left translation action of G on the underlying set G. - -When G is finite, $\operatorname{Sym}(G)$ is finite too. The proof of Cayley's theorem in this case shows that if G is a finite group of order n, then G is isomorphic to a subgroup of the standard symmetric group $S_n$. But G might also be isomorphic to a subgroup of a smaller symmetric group, $S_m$ for some $m<n$. - -To prove the theorem, let g be any element of a group G with operation ∗, and consider the function $f_g : G \to G$, defined by $f_g(x) = g*x$. By the existence of inverses, this function has a two-sided inverse, $f_{g^{-1}}$. So multiplication by g acts as a bijective function. Thus, $f_g$ is a permutation of G, and so is a member of Sym(G). - -The set $K = \{f_g : g \in G\}$ is a subgroup of Sym(G) that is isomorphic to G. The fastest way to establish this is to consider the function T : G → Sym(G) with $T(g) = f_g$ for every g in G.
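As a concrete illustration of the map T (a sketch, not part of the proof; it realizes each $f_g$ for the cyclic group of order 6 under addition mod 6):

```python
from itertools import product

# Realize T for Z_6: each f_g is stored as a tuple mapping x -> g + x mod 6,
# i.e., a permutation of {0, ..., 5}.
n = 6

def f(g):
    return tuple((g + x) % n for x in range(n))

def compose(p, q):  # (p . q)(x) = p(q(x)), composition in Sym(G)
    return tuple(p[q[x]] for x in range(n))

# T(g) . T(h) == T(g + h) for all g, h -- T is a homomorphism -- and
# distinct g give distinct permutations -- T is injective.
assert all(compose(f(g), f(h)) == f((g + h) % n)
           for g, h in product(range(n), repeat=2))
assert len({f(g) for g in range(n)}) == n
```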
T is a group homomorphism because (using · to denote composition in Sym(G)): -$$ - (f_g \cdot f_h)(x) = f_g(f_h(x)) = f_g(h*x) = g*(h*x) = (g*h)*x = f_{g*h}(x) , -$$ - -for all x in G, and hence: -$$ - T(g) \cdot T(h) = f_g \cdot f_h = f_{g*h} = T(g*h) . -$$ - -The homomorphism T is injective since $T(g) = \operatorname{id}_G$ (the identity element of Sym(G)) implies that $g*x = x$ for all x in G, and taking x to be the identity element e of G yields $g = g*e = e$, i.e. the kernel is trivial. Alternatively, T is also injective since $g*x = g'*x$ implies that $g = g'$ (because every group is cancellative). - -Thus G is isomorphic to the image of T, which is the subgroup K. - -T is sometimes called the regular representation of G. - -An alternative setting uses the language of group actions. We consider the group $G$ as acting on itself by left multiplication, i.e. $g \cdot x = gx$, which has a permutation representation, say $\phi : G \to \mathrm{Sym}(G)$. - -The representation is faithful if $\phi$ is injective, that is, if the kernel of $\phi$ is trivial. Suppose $g\in\ker\phi$. Then, $g\cdot e = ge = g = e$. Thus, $\ker\phi$ is trivial. The result follows by use of the first isomorphism theorem, from which we get $\mathrm{Im} \phi \cong G$. - -The identity element of the group corresponds to the identity permutation. All other group elements correspond to derangements: permutations that do not leave any element unchanged. Since this also applies to powers of a group element lower than the order of that element, each element corresponds to a permutation that consists of cycles all of the same length: this length is the order of that element. The elements in each cycle form a right coset of the subgroup generated by the element. - -$\mathbb Z_2 = \{0,1\}$ with addition modulo 2; group element 0 corresponds to the identity permutation e, group element 1 to permutation (12). E.g. 0 + 1 = 1 and 1 + 1 = 0, so 1 → 0 and 0 → 1, as they would under a permutation. - -$\mathbb Z_3 = \{0,1,2\}$ with addition modulo 3; group element 0 corresponds to the identity permutation e, group element 1 to permutation (123), and group element 2 to permutation (132). E.g. 1 + 1 = 2 corresponds to (123)(123) = (132). - -$\mathbb Z_4 = \{0,1,2,3\}$ with addition modulo 4; the elements correspond to e, (1234), (13)(24), (1432). - -The elements of the Klein four-group {e, a, b, c} correspond to e, (12)(34), (13)(24), and (14)(23). - -$S_3$ (dihedral group of order 6) is the group of all permutations of 3 objects, but also a permutation group of the 6 group elements, and the latter is how it is realized by its regular representation. - -Theorem: - -Let G be a group, and let H be a subgroup. - -Let $G/H$ be the set of left cosets of H in G. - -Let N be the normal core of H in G, defined to be the intersection of the conjugates of H in G. - -Then the quotient group $G/N$ is isomorphic to a subgroup of $\operatorname{Sym}(G/H)$. - -The special case $H=1$ is Cayley's original theorem. diff --git a/wiki/wikipedia/2481.txt b/wiki/wikipedia/2481.txt deleted file mode 100644 index 273bebcac537e0ec16cf2f6f14390ec0e7d15c50..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2481.txt +++ /dev/null @@ -1,13 +0,0 @@ -In proof theory, a structural rule is an inference rule that does not refer to any logical connective, but instead operates on the judgment or sequents directly. Structural rules often mimic intended meta-theoretic properties of the logic. Logics that deny one or more of the structural rules are classified as substructural logics.
- -Three common structural rules are: - -* Weakening, where the hypotheses or conclusion of a sequent may be extended with additional members. In symbolic form weakening rules can be written as $\frac{\Gamma \vdash \Sigma}{\Gamma, A \vdash \Sigma}$ on the left of the turnstile, and $\frac{\Gamma \vdash \Sigma}{\Gamma \vdash \Sigma, A}$ on the right. - -* Contraction, where two equal (or unifiable) members on the same side of a sequent may be replaced by a single member (or common instance). Symbolically: $\frac{\Gamma, A, A \vdash \Sigma}{\Gamma, A \vdash \Sigma}$ and $\frac{\Gamma \vdash A, A, \Sigma}{\Gamma \vdash A, \Sigma}$. Also known as factoring in automated theorem proving systems using resolution. Known as idempotency of entailment in classical logic. - -* Exchange, where two members on the same side of a sequent may be swapped. Symbolically: $\frac{\Gamma_1, A, \Gamma_2, B, \Gamma_3 \vdash \Sigma}{\Gamma_1, B, \Gamma_2, A, \Gamma_3 \vdash \Sigma}$ and $\frac{\Gamma \vdash \Sigma_1, A, \Sigma_2, B, \Sigma_3}{\Gamma \vdash \Sigma_1, B, \Sigma_2, A, \Sigma_3}$. (This is also known as the permutation rule.) - -A logic without any of the above structural rules would interpret the sides of a sequent as pure sequences; with exchange, they are multisets; and with both contraction and exchange they are sets. - -These are not the only possible structural rules. A famous structural rule is known as cut. Considerable effort is spent by proof theorists in showing that cut rules are superfluous in various logics. More precisely, what is shown is that cut is only (in a sense) a tool for abbreviating proofs, and does not add to the theorems that can be proved. The successful 'removal' of cut rules, known as cut elimination, is directly related to the philosophy of computation as normalization (see Curry–Howard correspondence); it often gives a good indication of the complexity of deciding a given logic. diff --git a/wiki/wikipedia/2482.txt b/wiki/wikipedia/2482.txt deleted file mode 100644 index b060b4b53d1cd323ba993eefab010db6436f5556..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2482.txt +++ /dev/null @@ -1,75 +0,0 @@ -Gödel's completeness theorem is a fundamental theorem in mathematical logic that establishes a correspondence between semantic truth and syntactic provability in first-order logic. - -The completeness theorem applies to any first order theory: If T is such a theory, and φ is a sentence (in the same language) and any model of T is a model of φ, then there is a (first-order) proof of φ using the statements of T as axioms. One sometimes says this as "anything true is provable." - -It makes a close link between model theory that deals with what is true in different models, and proof theory that studies what can be formally proven in particular formal systems. - -It was first proved by Kurt Gödel in 1929. It was then simplified in 1947, when Leon Henkin observed in his Ph.D. thesis that the hard part of the proof can be presented as the Model Existence Theorem (published in 1949). Henkin's proof was simplified by Gisbert Hasenjaeger in 1953. - -There are numerous deductive systems for first-order logic, including systems of natural deduction and Hilbert-style systems. Common to all deductive systems is the notion of a formal deduction. This is a sequence (or, in some cases, a finite tree) of formulae with a specially designated conclusion. 
The definition of a deduction is such that it is finite and that it is possible to verify algorithmically (by a computer, for example, or by hand) that a given sequence (or tree) of formulae is indeed a deduction. - -A first-order formula is called logically valid if it is true in every structure for the language of the formula (i.e. for any assignment of values to the variables of the formula). To formally state, and then prove, the completeness theorem, it is necessary to also define a deductive system. A deductive system is called complete if every logically valid formula is the conclusion of some formal deduction, and the completeness theorem for a particular deductive system is the theorem that it is complete in this sense. Thus, in a sense, there is a different completeness theorem for each deductive system. A converse to completeness is soundness, the fact that only logically valid formulas are provable in the deductive system. - -If some specific deductive system of first-order logic is sound and complete, then it is "perfect" (a formula is provable if and only if it is logically valid), thus equivalent to any other deductive system with the same quality (any proof in one system can be converted into the other). - -We first fix a deductive system of first-order predicate calculus, choosing any of the well-known equivalent systems. Gödel's original proof assumed the Hilbert-Ackermann proof system. - -The completeness theorem says that if a formula is logically valid then there is a finite deduction (a formal proof) of the formula. - -Thus, the deductive system is "complete" in the sense that no additional inference rules are required to prove all the logically valid formulae. A converse to completeness is soundness, the fact that only logically valid formulae are provable in the deductive system. Together with soundness (whose verification is easy), this theorem implies that a formula is logically valid if and only if it is the conclusion of a formal deduction. - -The theorem can be expressed more generally in terms of logical consequence. We say that a sentence s is a syntactic consequence of a theory T, denoted $T\vdash s$, if s is provable from T in our deductive system. We say that s is a semantic consequence of T, denoted $T\models s$, if s holds in every model of T. The completeness theorem then says that for any first-order theory T with a well-orderable language, and any sentence s in the language of T, - -if $T\models s$, then $T\vdash s$. - -Since the converse (soundness) also holds, it follows that $T\models s$ if and only if $T\vdash s$, and thus that syntactic and semantic consequence are equivalent for first-order logic. - -This more general theorem is used implicitly, for example, when a sentence is shown to be provable from the axioms of group theory by considering an arbitrary group and showing that the sentence is satisfied by that group. - -Gödel's original formulation is deduced by taking the particular case of a theory without any axiom. - -The completeness theorem can also be understood in terms of consistency, as a consequence of Henkin's model existence theorem. We say that a theory T is syntactically consistent if there is no sentence s such that both s and its negation ¬s are provable from T in our deductive system. The model existence theorem says that for any first-order theory T with a well-orderable language, - -if $T$ is syntactically consistent, then $T$ has a model. 
- -Another version, with connections to the Löwenheim–Skolem theorem, says: - -Every syntactically consistent, countable first-order theory has a finite or countable model. - -Given Henkin's theorem, the completeness theorem can be proved as follows: If $T \models s$, then $T\cup\lnot s$ does not have models. By the contrapositive of Henkin's theorem, then $T\cup\lnot s$ is syntactically inconsistent. So a contradiction ($\bot$) is provable from $T\cup\lnot s$ in the deductive system. Hence $(T\cup\lnot s) \vdash \bot$, and then by the properties of the deductive system, $T\vdash s$. - -The Model Existence Theorem and its proof can be formalized in the framework of Peano arithmetic. Precisely, we can systematically define a model of any consistent effective first-order theory T in Peano arithmetic by interpreting each symbol of T by an arithmetical formula whose free variables are the arguments of the symbol. (In many cases, we will need to assume, as a hypothesis of the construction, that T is consistent, since Peano arithmetic may not prove that fact.) However, the definition expressed by this formula is not recursive (but is, in general, $\Delta_2$). - -An important consequence of the completeness theorem is that it is possible to recursively enumerate the semantic consequences of any effective first-order theory, by enumerating all the possible formal deductions from the axioms of the theory, and using this to produce an enumeration of their conclusions. - -This comes in contrast with the direct meaning of the notion of semantic consequence, which quantifies over all structures in a particular language, which is clearly not a recursive definition. - -Also, it makes the concept of "provability", and thus of "theorem", a clear concept that only depends on the chosen system of axioms of the theory, and not on the choice of a proof system. - -Gödel's incompleteness theorems show that there are inherent limitations to what can be proven within any given first-order theory in mathematics. The "incompleteness" in their name refers to another meaning of complete (see model theory – Using the compactness and completeness theorems): A theory $T$ is complete (or decidable) if every sentence $S$ in the language of $T$ is either provable ($T\vdash S$) or disprovable ($T\vdash \neg S$). - -The first incompleteness theorem states that any $T$ which is consistent, effective and contains Robinson arithmetic ("Q") must be incomplete in this sense, by explicitly constructing a sentence $S_T$ which is demonstrably neither provable nor disprovable within $T$. The second incompleteness theorem extends this result by showing that $S_T$ can be chosen so that it expresses the consistency of $T$ itself. - -Since $S_T$ cannot be disproven in $T$, the completeness theorem implies the existence of a model of $T$ in which $S_T$ is false. In fact, $S_T$ is a $\Pi_1$ sentence, i.e. it states that some finitistic property is true of all natural numbers; so if it is false, then some natural number is a counterexample. If this counterexample existed within the standard natural numbers, its existence would disprove $S_T$ within $T$; but the incompleteness theorem showed this to be impossible, so the counterexample must not be a standard number, and thus any model of $T$ in which $S_T$ is false must include non-standard numbers.
- -In fact, the model of any theory containing Q obtained by the systematic construction of the arithmetical model existence theorem is always non-standard, with a non-equivalent provability predicate and a non-equivalent way to interpret its own construction, so that this construction is non-recursive (as recursive definitions would be unambiguous). - -Also, if $T$ is at least slightly stronger than Q (e.g. if it includes induction for bounded existential formulas), then Tennenbaum's theorem shows that it has no recursive non-standard models. - -The completeness theorem and the compactness theorem are two cornerstones of first-order logic. While neither of these theorems can be proven in a completely effective manner, each one can be effectively obtained from the other. - -The compactness theorem says that if a formula φ is a logical consequence of a (possibly infinite) set of formulas Γ then it is a logical consequence of a finite subset of Γ. This is an immediate consequence of the completeness theorem, because only a finite number of axioms from Γ can be mentioned in a formal deduction of φ, and the soundness of the deductive system then implies φ is a logical consequence of this finite set. This proof of the compactness theorem is originally due to Gödel. - -Conversely, for many deductive systems, it is possible to prove the completeness theorem as an effective consequence of the compactness theorem. - -The ineffectiveness of the completeness theorem can be measured along the lines of reverse mathematics. When considered over a countable language, the completeness and compactness theorems are equivalent to each other and equivalent to a weak form of choice known as weak König's lemma, with the equivalence provable in $\mathsf{RCA}_0$ (a second-order variant of Peano arithmetic restricted to induction over $\Sigma^0_1$ formulas). Weak König's lemma is provable in ZF, the system of Zermelo–Fraenkel set theory without the axiom of choice, and thus the completeness and compactness theorems for countable languages are provable in ZF. However, the situation is different when the language is of arbitrarily large cardinality since then, though the completeness and compactness theorems remain provably equivalent to each other in ZF, they are also provably equivalent to a weak form of the axiom of choice known as the ultrafilter lemma. In particular, no theory extending ZF can prove either the completeness or compactness theorems over arbitrary (possibly uncountable) languages without also proving the ultrafilter lemma on a set of the same cardinality. - -The completeness theorem is a central property of first-order logic that does not hold for all logics. Second-order logic, for example, does not have a completeness theorem for its standard semantics (but does have the completeness property for Henkin semantics), and the set of logically-valid formulas in second-order logic is not recursively enumerable. The same is true of all higher-order logics. It is possible to produce sound deductive systems for higher-order logics, but no such system can be complete. - -Lindström's theorem states that first-order logic is the strongest (subject to certain constraints) logic satisfying both compactness and completeness. - -A completeness theorem can be proved for modal logic or intuitionistic logic with respect to Kripke semantics. - -Gödel's original proof of the theorem proceeded by reducing the problem to a special case for formulas in a certain syntactic form, and then handling this form with an ad hoc argument.
- -In modern logic texts, Gödel's completeness theorem is usually proved with Henkin's proof, rather than with Gödel's original proof. Henkin's proof directly constructs a term model for any consistent first-order theory. James Margetson (2004) developed a computerized formal proof using the Isabelle theorem prover. Other proofs are also known. diff --git a/wiki/wikipedia/2483.txt b/wiki/wikipedia/2483.txt deleted file mode 100644 index 02db946a315ea228f5df5f1cba08df12057f8d00..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2483.txt +++ /dev/null @@ -1,159 +0,0 @@ -The MaxCliqueDyn algorithm is an algorithm for finding a maximum clique in an undirected graph. It is based on a basic algorithm (the MaxClique algorithm) which finds a maximum clique of bounded size. The bound is found using an improved coloring algorithm. The MaxCliqueDyn algorithm extends the MaxClique algorithm to include dynamically varying bounds. This algorithm was designed by Janez Konc and its description was published in 2007. Compared to the earlier algorithms described in the published article, the MaxCliqueDyn algorithm adds an improved approximate coloring algorithm (the ColorSort algorithm) and applies tighter, more computationally expensive upper bounds on a fraction of the search space. Both improvements reduce the time needed to find a maximum clique. In addition to reducing the time, the improved coloring algorithm also reduces the number of steps needed to find a maximum clique. - -The MaxClique algorithm is the basic algorithm of the MaxCliqueDyn algorithm. The pseudocode of the algorithm is: - -procedure MaxClique(R, C) is - -Q = Ø; Qmax = Ø; - -while R ≠ Ø do - -choose a vertex p with a maximum color C(p) from set R; - -R := R\{p}; - -if |Q| + C(p) > |Qmax| then - -Q := Q ⋃ {p}; - -if R ⋂ Γ(p) ≠ Ø then - -obtain a vertex-coloring C' of G(R ⋂ Γ(p)); - -MaxClique(R ⋂ Γ(p), C'); - -else if |Q| > |Qmax| then Qmax := Q; - -Q := Q\{p}; - -else - -return - -end while - -where Q is a set of vertices of the currently growing clique, Qmax is a set of vertices of the largest clique currently found, R is a set of candidate vertices and C its corresponding set of color classes. The MaxClique algorithm recursively searches for a maximum clique by adding and removing vertices to and from Q. - -In the MaxClique algorithm the approximate coloring algorithm is used to obtain the set of color classes C. The ColorSort algorithm is an improvement of the approximate coloring algorithm. In the approximate coloring algorithm, vertices are colored one by one in the same order as they appear in the set of candidate vertices R, so that if the next vertex p is non-adjacent to all vertices in some color class, it is added to this class, and if p is adjacent to at least one vertex in every one of the existing color classes, it is put into a new color class. - -The MaxClique algorithm returns vertices R ordered by their colors. By looking at the MaxClique algorithm it is clear that vertices v ∈ R with colors C(v) < |Qmax| − |Q| + 1 will never be added to the current clique Q. Therefore, sorting those vertices by color is of no use to the MaxClique algorithm. - -The improved coloring with the ColorSort algorithm takes into consideration the above observation. Each vertex is assigned to a color class Ck. If k < |Qmax| − |Q| + 1, the vertex is moved to R (behind the last vertex in R). If k ≥ |Qmax| − |Q| + 1, then the vertex stays in the color class Ck and is not copied to the set R.
At the end, all of the vertices in color classes Ck where k ≥ |Qmax| − |Q| + 1 are added to the back of set R as they appear in each color class Ck and in increasing order with respect to index k. In the ColorSort algorithm only these vertices are assigned colors C(v) = k. - -ColorSort algorithm - -procedure ColorSort(R, C) is - -max_no := 1; - -kmin := |Qmax| − |Q| + 1; - -if kmin ≤ 0 then kmin := 1; - -j := 0; - -C1 := Ø; C2 := Ø; - -for i := 0 to |R| − 1 do - -p := R[i]; {the i-th vertex in R} - -k := 1; - -while Ck ⋂ Γ(p) ≠ Ø do - -k := k+1; - -if k > max_no then - -max_no := k; - -Cmax_no+1 := Ø; - -end if - -Ck := Ck ⋃ {p}; - -if k < kmin then - -R[j] := R[i]; - -j := j+1; - -end if - -end for - -C[j−1] := 0; - -for k := kmin to max_no do - -for i := 1 to |Ck| do - -R[j] := Ck[i]; - -C[j] := k; - -j := j+1; - -end for - -end for - -Example - -Consider a graph whose candidate set of vertices, listed with their degrees in parentheses, is R = {7(5), 1(4), 4(4), 2(3), 3(3), 6(3), 5(2), 8(2)}. The set of vertices R can now be used as input for both the approximate coloring algorithm and the ColorSort algorithm, which assign the vertices to color classes as follows. - -The approximate coloring algorithm returns the set of vertices R = {7(5), 5(2), 1(4), 6(3), 8(2), 4(4), 2(3), 3(3)} and its corresponding set of color classes C = {1,1,2,2,2,3,3,3}. The ColorSort algorithm returns the set of vertices R = {7(5), 1(4), 6(3), 5(2), 8(2), 4(4), 2(3), 3(3)} and its corresponding set of color classes C = {–,–,–,–,–,3,3,3}, where – represents an unknown color class with k < 3. - -The MaxCliqueDyn algorithm is the basic MaxClique algorithm with the ColorSort algorithm used in place of the approximate coloring algorithm for determining color classes. At each step of the MaxClique algorithm, the algorithm also recalculates the degrees of the vertices in R, relative to the subgraph it is currently exploring. These vertices are then sorted in decreasing order with respect to their degrees in the induced graph G(R). The ColorSort algorithm then considers the vertices in R sorted by their degrees in the induced graph G(R) rather than in G. By doing so, the number of steps needed to find a maximum clique is reduced. Even so, the overall running time of the MaxClique algorithm is not improved, because the computational expense $O(|R|^2)$ of determining the degrees and sorting the vertices in R stays the same. - -MaxCliqueDyn algorithm - -procedure MaxCliqueDyn(R, C, level) is - -S[level] := S[level] + S[level−1] − Sold[level]; - -Sold[level] := S[level−1]; - -while R ≠ Ø do - -choose a vertex p with maximum C(p) (last vertex) from R; - -R := R\{p}; - -if |Q| + C[index of p in R] > |Qmax| then - -Q := Q ⋃ {p}; - -if R ⋂ Γ(p) ≠ Ø then - -if S[level]/ALL STEPS < Tlimit then - -calculate the degrees of vertices in G(R ⋂ Γ(p)); - -sort vertices in R ⋂ Γ(p) in a descending order - -with respect to their degrees; - -end if - -ColorSort(R ⋂ Γ(p), C') - -S[level] := S[level] + 1; - -ALL STEPS := ALL STEPS + 1; - -MaxCliqueDyn(R ⋂ Γ(p), C', level + 1); - -else if |Q| > |Qmax| then Qmax := Q; - -Q := Q\{p}; - -else - -return - -end while
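A sketch of the ColorSort procedure above in Python (a generic reimplementation under stated assumptions, not the published code: the graph is supplied as a dict mapping each vertex to its set of neighbours, and kmin = |Qmax| − |Q| + 1 is passed in; the example graph from the original article is not reproduced here):

```python
def color_sort(R, adj, kmin):
    """Return the reordered candidate list and the colors of its tail."""
    kmin = max(kmin, 1)
    classes = []          # classes[k] holds color class C_{k+1}, in insertion order
    head = []             # vertices that received a color < kmin, original order
    for p in R:
        k = 0             # greedily find the first class with no neighbour of p
        while k < len(classes) and any(v in adj[p] for v in classes[k]):
            k += 1
        if k == len(classes):
            classes.append([])
        classes[k].append(p)
        if k + 1 < kmin:
            head.append(p)
    # vertices with color >= kmin go to the back, by increasing color index
    new_R, colors = list(head), [None] * len(head)
    for k in range(kmin - 1, len(classes)):
        for p in classes[k]:
            new_R.append(p)
            colors.append(k + 1)
    return new_R, colors
```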
diff --git a/wiki/wikipedia/2484.txt b/wiki/wikipedia/2484.txt deleted file mode 100644 index b60d45a976de5367c17bfce100a0914b61f67235..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2484.txt +++ /dev/null @@ -1 +0,0 @@ -In Riemannian geometry, a branch of mathematics, the prescribed Ricci curvature problem is as follows: given a smooth manifold M and a symmetric 2-tensor h, construct a metric on M whose Ricci curvature tensor equals h. diff --git a/wiki/wikipedia/2485.txt b/wiki/wikipedia/2485.txt deleted file mode 100644 index 681ab5736975c05eeb3ee384630fef8c8824deea..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2485.txt +++ /dev/null @@ -1,17 +0,0 @@ -In graph theory, Wagner's theorem is a mathematical forbidden graph characterization of planar graphs, named after Klaus Wagner, stating that a finite graph is planar if and only if its minors include neither K5 (the complete graph on five vertices) nor K3,3 (the utility graph, a complete bipartite graph on six vertices). This was one of the earliest results in the theory of graph minors and can be seen as a forerunner of the Robertson–Seymour theorem. - -A planar embedding of a given graph is a drawing of the graph in the Euclidean plane, with points for its vertices and curves for its edges, in such a way that the only intersections between pairs of edges are at a common endpoint of the two edges. A minor of a given graph is another graph formed by deleting vertices, deleting edges, and contracting edges. When an edge is contracted, its two endpoints are merged to form a single vertex. In some versions of graph minor theory the graph resulting from a contraction is simplified by removing self-loops and multiple adjacencies, while in other version multigraphs are allowed, but this variation makes no difference to Wagner's theorem. - -Wagner's theorem states that every graph has either a planar embedding, or a minor of one of two types, the complete graph K5 or the complete bipartite graph K3,3. (It is also possible for a single graph to have both types of minor.) - -If a given graph is planar, so are all its minors: vertex and edge deletion obviously preserve planarity, and edge contraction can also be done in a planarity-preserving way, by leaving one of the two endpoints of the contracted edge in place and routing all of the edges that were incident to the other endpoint along the path of the contracted edge. - -A minor-minimal non-planar graph is a graph that is not planar, but in which all proper minors (minors formed by at least one deletion or contraction) are planar. Another way of stating Wagner's theorem is that there are only two minor-minimal non-planar graphs, K5 and K3,3. - -Another result also sometimes known as Wagner's theorem states that a four-connected graph is planar if and only if it has no K5 minor. That is, by assuming a higher level of connectivity, the graph K3,3 can be made unnecessary in the characterization, leaving only a single forbidden minor, K5. Correspondingly, the Kelmans–Seymour conjecture states that a 5-connected graph is planar if and only if it does not have K5 as a topological minor. - -Wagner published both theorems in 1937, subsequent to the 1930 publication of Kuratowski's theorem, according to which a graph is planar if and only if it does not contain as a subgraph a subdivision of one of the same two forbidden graphs K5 and K3,3. 
In a sense, Kuratowski's theorem is stronger than Wagner's theorem: a subdivision can be converted into a minor of the same type by contracting all but one edge in each path formed by the subdivision process, but converting a minor into a subdivision of the same type is not always possible. However, in the case of the two graphs K5 and K3,3, it is straightforward to prove that a graph that has at least one of these two graphs as a minor also has at least one of them as a subdivision, so the two theorems are equivalent. - -One consequence of the stronger version of Wagner's theorem for four-connected graphs is to characterize the graphs that do not have a K5 minor. The theorem can be rephrased as stating that every such graph is either planar or it can be decomposed into simpler pieces. Using this idea, the K5-minor-free graphs may be characterized as the graphs that can be formed as combinations of planar graphs and the eight-vertex Wagner graph, glued together by clique-sum operations. For instance, K3,3 can be formed in this way as a clique-sum of three planar graphs, each of which is a copy of the tetrahedral graph K4. - -Wagner's theorem is an important precursor to the theory of graph minors, which culminated in the proofs of two deep and far-reaching results: the graph structure theorem (a generalization of Wagner's clique-sum decomposition of K5-minor-free graphs) and the Robertson–Seymour theorem (a generalization of the forbidden minor characterization of planar graphs, stating that every graph family closed under the operation of taking minors has a characterization by a finite number of forbidden minors). Analogues of Wagner's theorem can also be extended to the theory of matroids: in particular, the same two graphs K5 and K3,3 (along with three other forbidden configurations) appear in a characterization of the graphic matroids by forbidden matroid minors. diff --git a/wiki/wikipedia/2486.txt b/wiki/wikipedia/2486.txt deleted file mode 100644 index d4398df241bfbe27286ea4ecc5433d62c419256b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2486.txt +++ /dev/null @@ -1,57 +0,0 @@ -In mathematics, economics, and computer science, the stable marriage problem (also stable matching problem or SMP) is the problem of finding a stable matching between two equally sized sets of elements given an ordering of preferences for each element. A matching is a bijection from the elements of one set to the elements of the other set. A matching is not stable if: - -* there is an element A of the first matched set which prefers some given element B of the second matched set over the element to which A is already matched, and - -* B also prefers A over the element to which B is already matched. - -In other words, a matching is stable when there does not exist any match (A, B) which both prefer each other to their current partner under the matching. - -The stable marriage problem has been stated as follows: Given n men and n women, where each person has ranked all members of the opposite sex in order of preference, marry the men and women together such that there are no two people of opposite sex who would both rather have each other than their current partners. When there are no such pairs of people, the set of marriages is deemed stable. - -The existence of two classes that need to be paired with each other (heterosexual men and women in this example) distinguishes this problem from the stable roommates problem. - -Algorithms for finding solutions to the stable marriage problem have applications in a variety of real-world situations, perhaps the best known of these being in the assignment of graduating medical students to their first hospital appointments. In 2012, the Nobel Memorial Prize in Economic Sciences was awarded to Lloyd S. Shapley and Alvin E. Roth "for the theory of stable allocations and the practice of market design." - -An important and large-scale application of stable marriage is in assigning users to servers in a large distributed Internet service.
Billions of users access web pages, videos, and other services on the Internet, requiring each user to be matched to one of (potentially) hundreds of thousands of servers around the world that offer that service. A user prefers servers that are proximal enough to provide a faster response time for the requested service, resulting in a (partial) preferential ordering of the servers for each user. Each server prefers to serve users that it can with a lower cost, resulting in a (partial) preferential ordering of users for each server. Content delivery networks that distribute much of the world's content and services solve this large and complex stable marriage problem between users and servers every tens of seconds to enable billions of users to be matched up with their respective servers that can provide the requested web pages, videos, or other services. - -The Gale–Shapley algorithm (also known as the deferred acceptance algorithm) involves a number of "rounds" (or "iterations"): - -* In the first round, first a) each unengaged man proposes to the woman he prefers most, and then b) each woman replies "maybe" to the suitor she most prefers and "no" to all other suitors. She is then provisionally "engaged" to the suitor she most prefers so far, and that suitor is likewise provisionally engaged to her. - -* In each subsequent round, first a) each unengaged man proposes to the most-preferred woman to whom he has not yet proposed (regardless of whether the woman is already engaged), and then b) each woman replies "maybe" if she is currently not engaged or if she prefers this man over her current provisional partner (in this case, she rejects her current provisional partner who becomes unengaged). The provisional nature of engagements preserves the right of an already-engaged woman to "trade up" (and, in the process, to "jilt" her until-then partner). - -* This process is repeated until everyone is engaged. - -This algorithm is guaranteed to produce a stable marriage for all participants in time $O(n^2)$ where $n$ is the number of men or women. - -Among all possible stable matchings, it always yields the one that is best for all men, and worst for all women. - -It is a truthful mechanism from the point of view of men (the proposing side). That is, no man can get a better matching for himself by misrepresenting his preferences. Moreover, the GS algorithm is even group-strategyproof for men, i.e., no coalition of men can coordinate a misrepresentation of their preferences such that all men in the coalition are strictly better off. However, it is possible for some coalition to misrepresent their preferences such that some men are better off and the other men retain the same partner. - -The GS algorithm is non-truthful for the women (the reviewing side): each woman may be able to misrepresent her preferences and get a better match. - -The rural hospitals theorem concerns a more general variant of the stable matching problem, like the one arising in the problem of matching doctors to positions at hospitals, differing in the following ways from the basic n-to-n form of the stable marriage problem: - -*Each participant may only be willing to be matched to a subset of the participants on the other side of the matching. - -*The participants on one side of the matching (the hospitals) may have a numerical capacity, specifying the number of doctors they are willing to hire.
- -*The total number of participants on one side might not equal the total capacity to which they are to be matched on the other side. - -*The resulting matching might not match all of the participants. - -In this case, the condition of stability is that no unmatched pair prefer each other to their situation in the matching (whether that situation is another partner or being unmatched). With this condition, a stable matching will still exist, and can still be found by the Gale–Shapley algorithm. - -For this kind of stable matching problem, the rural hospitals theorem states that: - -* The set of assigned doctors, and the number of filled positions in each hospital, are the same in all stable matchings. - -* Any hospital that has some empty positions in some stable matching, receives exactly the same set of doctors in all stable matchings. - -In stable matching with indifference, some men might be indifferent between two or more women and vice versa. - -The stable roommates problem is similar to the stable marriage problem, but differs in that all participants belong to a single pool (instead of being divided into equal numbers of "men" and "women"). - -The hospitals/residents problem – also known as the college admissions problem – differs from the stable marriage problem in that a hospital can take multiple residents, or a college can take an incoming class of more than one student. Algorithms to solve the hospitals/residents problem can be hospital-oriented (as the NRMP was before 1995) or resident-oriented. This problem was solved, with an algorithm, in the same original paper by Gale and Shapley, in which the stable marriage problem was solved. - -The hospitals/residents problem with couples allows the set of residents to include couples who must be assigned together, either to the same hospital or to a specific pair of hospitals chosen by the couple (e.g., a married couple want to ensure that they will stay together and not be stuck in programs that are far away from each other). The addition of couples to the hospitals/residents problem renders the problem NP-complete. - -The assignment problem seeks to find a matching in a weighted bipartite graph that has maximum weight. Maximum weighted matchings do not have to be stable, but in some applications a maximum weighted matching is better than a stable one. - -The matching with contracts problem is a generalization of the matching problem, in which participants can be matched with different terms of contracts. An important special case of contracts is matching with flexible wages. diff --git a/wiki/wikipedia/2487.txt b/wiki/wikipedia/2487.txt deleted file mode 100644 index 76fa51f82797b8519c41081f4f9ab3bf3b99cc01..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2487.txt +++ /dev/null @@ -1,174 +0,0 @@ -In mathematics, the isoperimetric inequality is a geometric inequality involving the perimeter of a set and its volume. In $n$-dimensional space $\R^n$ the inequality lower bounds the surface area or perimeter $\operatorname{per}(S)$ of a set $S\subset\R^n$ by its volume $\operatorname{vol}(S)$, -$$ -\operatorname{per}(S)\geq n \operatorname{vol}(S)^{\frac{n-1}{n}} \operatorname{vol}(B_1)^{\frac{1}{n}} -$$, - -where $B_1\subset\R^n$ is the unit ball. The equality holds only when $S$ is a ball in $\R^n$. - -On a plane, i.e. when $n=2$, the isoperimetric inequality relates the square of the circumference of a closed curve and the area of a plane region it encloses. Isoperimetric literally means "having the same perimeter".
Specifically in $\R ^2$, the isoperimetric inequality states, for the length L of a closed curve and the area A of the planar region that it encloses, that -$$ - L^2 \ge 4\pi A, -$$ - -and that equality holds if and only if the curve is a circle. - -The isoperimetric problem is to determine a plane figure of the largest possible area whose boundary has a specified length. The closely related Dido's problem asks for a region of the maximal area bounded by a straight line and a curvilinear arc whose endpoints belong to that line. It is named after Dido, the legendary founder and first queen of Carthage. The solution to the isoperimetric problem is given by a circle and was known already in Ancient Greece. However, the first mathematically rigorous proof of this fact was obtained only in the 19th century. Since then, many other proofs have been found. - -The isoperimetric problem has been extended in multiple ways, for example, to curves on surfaces and to regions in higher-dimensional spaces. Perhaps the most familiar physical manifestation of the 3-dimensional isoperimetric inequality is the shape of a drop of water. Namely, a drop will typically assume a symmetric round shape. Since the amount of water in a drop is fixed, surface tension forces the drop into a shape which minimizes the surface area of the drop, namely a round sphere. - -The classical isoperimetric problem dates back to antiquity. The problem can be stated as follows: Among all closed curves in the plane of fixed perimeter, which curve (if any) maximizes the area of its enclosed region? This question can be shown to be equivalent to the following problem: Among all closed curves in the plane enclosing a fixed area, which curve (if any) minimizes the perimeter? - -This problem is conceptually related to the principle of least action in physics, in that it can be restated: what is the principle of action which encloses the greatest area, with the greatest economy of effort? The 15th-century philosopher and scientist, Cardinal Nicholas of Cusa, considered rotational action, the process by which a circle is generated, to be the most direct reflection, in the realm of sensory impressions, of the process by which the universe is created. German astronomer and astrologer Johannes Kepler invoked the isoperimetric principle in discussing the morphology of the solar system, in Mysterium Cosmographicum (The Sacred Mystery of the Cosmos, 1596). - -Although the circle appears to be an obvious solution to the problem, proving this fact is rather difficult. The first progress toward the solution was made by Swiss geometer Jakob Steiner in 1838, using a geometric method later named Steiner symmetrisation. Steiner showed that if a solution existed, then it must be the circle. Steiner's proof was completed later by several other mathematicians. - -Steiner begins with some geometric constructions which are easily understood; for example, it can be shown that any closed curve enclosing a region that is not fully convex can be modified to enclose more area, by "flipping" the concave areas so that they become convex. It can further be shown that any closed curve which is not fully symmetrical can be "tilted" so that it encloses more area. The one shape that is perfectly convex and symmetrical is the circle, although this, in itself, does not represent a rigorous proof of the isoperimetric theorem (see external links). 
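A numerical illustration of this optimality (a sketch; it evaluates the ratio $4\pi A/L^2$, which the inequality below bounds by 1, for regular polygons of unit perimeter):

```python
import math

# Regular polygons with more sides enclose more area for a fixed perimeter,
# approaching the circle's optimum: 4*pi*A / L^2 increases toward 1.
for n in (3, 4, 6, 12, 96):
    # regular n-gon with unit perimeter: side 1/n, area = 1/(4*n*tan(pi/n))
    area = 1 / (4 * n * math.tan(math.pi / n))
    print(n, 4 * math.pi * area)   # increases with n, tending to 1
```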
- -The solution to the isoperimetric problem is usually expressed in the form of an inequality that relates the length L of a closed curve and the area A of the planar region that it encloses. The isoperimetric inequality states that -$$ -4\pi A \le L^2, -$$ - -and that the equality holds if and only if the curve is a circle. The area of a disk of radius R is $\pi R^2$ and the circumference of the circle is $2\pi R$, so both sides of the inequality are equal to $4\pi^2 R^2$ in this case. - -Dozens of proofs of the isoperimetric inequality have been found. In 1902, Hurwitz published a short proof using Fourier series that applies to arbitrary rectifiable curves (not assumed to be smooth). An elegant direct proof based on comparison of a smooth simple closed curve with an appropriate circle was given by E. Schmidt in 1938. It uses only the arc length formula, the expression for the area of a plane region from Green's theorem, and the Cauchy–Schwarz inequality. - -For a given closed curve, the isoperimetric quotient is defined as the ratio of its area and that of the circle having the same perimeter. This is equal to -$$ -Q=\frac{4\pi A}{L^2} -$$ - -and the isoperimetric inequality says that Q ≤ 1. Equivalently, the isoperimetric ratio $L^2/A$ is at least $4\pi$ for every curve. - -The isoperimetric quotient of a regular n-gon is -$$ -Q_n=\frac{\pi}{n \tan \tfrac{\pi}{n}}. -$$ - -Let $C$ be a smooth regular convex closed curve. Then the improved isoperimetric inequality states the following -$$ -L^2\geqslant 4\pi A+8\pi\left|\widetilde{A}_{0.5}\right|, -$$ - -where $L, A, \widetilde{A}_{0.5}$ denote the length of $C$, the area of the region bounded by $C$ and the oriented area of the Wigner caustic of $C$, respectively, and the equality holds if and only if $C$ is a curve of constant width. - -Let C be a simple closed curve on a sphere of radius 1. Denote by L the length of C and by A the area enclosed by C. The spherical isoperimetric inequality states that -$$ -L^2 \ge A (4\pi - A), -$$ - -and that the equality holds if and only if the curve is a circle. There are, in fact, two ways to measure the spherical area enclosed by a simple closed curve, but the inequality is symmetric with respect to taking the complement. - -This inequality was discovered by Paul Lévy (1919), who also extended it to higher dimensions and general surfaces. - -In the more general case of arbitrary radius R, it is known that -$$ -L^2\ge 4\pi A - \frac{A^2}{R^2}. -$$ - -The isoperimetric inequality states that a sphere has the smallest surface area per given volume. Given a bounded set $S\subset\R ^n$ with surface area $\operatorname{per}(S)$ and volume $\operatorname{vol}(S)$, the isoperimetric inequality states -$$ -\operatorname{per}(S)\geq n \operatorname{vol}(S)^{\frac{n-1}{n}} \operatorname{vol}(B_1)^{\frac{1}{n}} -$$, - -where $B_1\subset\R ^n$ is a unit ball. The equality holds when $S$ is a ball in $\R ^n$. Under additional restrictions on the set (such as convexity, regularity, smooth boundary), the equality holds for a ball only. But in full generality the situation is more complicated. The relevant result of Schmidt (for a simpler proof see Baebler) is clarified in Hadwiger as follows. An extremal set consists of a ball and a "corona" that contributes neither to the volume nor to the surface area.
That is, the equality holds for a compact set $S$ if and only if $S$ contains a closed ball $B$ such that $\operatorname{vol}(B)=\operatorname{vol}(S)$ and $\operatorname{per}(B)=\operatorname{per}(S).$ For example, the "corona" may be a curve. - -The proof of the inequality follows directly from the Brunn–Minkowski inequality between a set $S$ and a ball with radius $\epsilon$, i.e. $B_\epsilon=\epsilon B_1$: one takes the Brunn–Minkowski inequality to the power $n$, subtracts $\operatorname{vol}(S)$ from both sides, divides by $\epsilon$, and takes the limit as $\epsilon\to 0$ (Osserman; Federer). - -In full generality, the isoperimetric inequality states that for any set $S\subset\R^n$ whose closure has finite Lebesgue measure -$$ -n\omega_n^{\frac{1}{n}} L^n(\bar{S})^{\frac{n-1}{n}} \le M^{n-1}_*(\partial S) -$$ - -where $M_*^{n-1}$ is the $(n-1)$-dimensional Minkowski content, $L^n$ is the $n$-dimensional Lebesgue measure, and $\omega_n$ is the volume of the unit ball in $\R^n$. If the boundary of S is rectifiable, then the Minkowski content is the $(n-1)$-dimensional Hausdorff measure. - -The n-dimensional isoperimetric inequality is equivalent (for sufficiently smooth domains) to the Sobolev inequality on $\R^n$ with optimal constant: -$$ -\left( \int_{\R^n} |u|^{\frac{n}{n-1}}\right)^{\frac{n-1}{n}} \le n^{-1}\omega_{n}^{-\frac{1}{n}}\int_{\R^n}|\nabla u| -$$ - -for all $u\in W^{1,1}(\R^n)$. - -Hadamard manifolds are complete simply connected manifolds with nonpositive curvature. Thus they generalize the Euclidean space $\R^n$, which is a Hadamard manifold with curvature zero. In the 1970s and early 1980s, Thierry Aubin, Misha Gromov, Yuri Burago, and Viktor Zalgaller conjectured that the Euclidean isoperimetric inequality -$$ -\operatorname{per}(S)\geq n \operatorname{vol}(S)^{\frac{n-1}{n}}\operatorname{vol}(B_1)^{\frac{1}{n}} -$$ - -holds for bounded sets $S$ in Hadamard manifolds; this has become known as the Cartan–Hadamard conjecture. - -In dimension 2 this had already been established in 1926 by André Weil, who was a student of Hadamard at the time. - -In dimension 3 the conjecture was proved by Bruce Kleiner in 1992, and in dimension 4 by Chris Croke in 1984. - -Most of the work on the isoperimetric problem has been done in the context of smooth regions in Euclidean spaces, or more generally, in Riemannian manifolds. However, the isoperimetric problem can be formulated in much greater generality, using the notion of Minkowski content. Let $(X, \mu, d)$ be a metric measure space: X is a metric space with metric d, and μ is a Borel measure on X. The boundary measure, or Minkowski content, of a measurable subset A of X is defined as the lim inf -$$ -\mu^+(A) = \liminf_{\varepsilon \to 0+} \frac{\mu(A_\varepsilon) - \mu(A)}{\varepsilon}, -$$ - -where -$$ -A_\varepsilon = \{ x \in X | d(x, A) \leq \varepsilon \} -$$ - -is the ε-extension of A. - -The isoperimetric problem in X asks how small $\mu^+(A)$ can be for a given $\mu(A)$. If X is the Euclidean plane with the usual distance and the Lebesgue measure then this question generalizes the classical isoperimetric problem to planar regions whose boundary is not necessarily smooth, although the answer turns out to be the same. - -The function -$$ -I(a) = \inf \{ \mu^+(A) | \mu(A) = a\} -$$ - -is called the isoperimetric profile of the metric measure space $(X, \mu, d)$.
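The ε-extension definition can be explored numerically. Below is a minimal Monte Carlo sketch (an added illustration, not part of the article; the function names are invented for this example) that estimates the difference quotient $(\mu(A_\varepsilon) - \mu(A))/\varepsilon$ for the closed unit disk in the plane; as ε shrinks, the estimates approach the perimeter $2\pi$, up to sampling noise:

```python
import math
import random

def boundary_quotient(dist_to_A, mu_A, eps, samples=200_000, box=2.0, seed=0):
    """Monte Carlo estimate of (mu(A_eps) - mu(A)) / eps in the plane,
    where A_eps = {x : d(x, A) <= eps}, sampling the square [-box, box]^2."""
    rng = random.Random(seed)
    hits = sum(
        dist_to_A((rng.uniform(-box, box), rng.uniform(-box, box))) <= eps
        for _ in range(samples)
    )
    mu_A_eps = (2 * box) ** 2 * hits / samples
    return (mu_A_eps - mu_A) / eps

# A = closed unit disk: d(p, A) = max(|p| - 1, 0) and mu(A) = pi.
dist_disk = lambda p: max(math.hypot(*p) - 1.0, 0.0)
for eps in (0.2, 0.1, 0.05):
    print(eps, boundary_quotient(dist_disk, math.pi, eps))
# The exact quotient is pi*(2 + eps), which tends to 2*pi, the perimeter.
```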
Isoperimetric profiles have been studied for Cayley graphs of discrete groups and for special classes of Riemannian manifolds (where usually only regions A with regular boundary are considered). - -In graph theory, isoperimetric inequalities are at the heart of the study of expander graphs, which are sparse graphs that have strong connectivity properties. Expander constructions have spawned research in pure and applied mathematics, with several applications to complexity theory, design of robust computer networks, and the theory of error-correcting codes. - -Isoperimetric inequalities for graphs relate the size of vertex subsets to the size of their boundary, which is usually measured by the number of edges leaving the subset (edge expansion) or by the number of neighbouring vertices (vertex expansion). For a graph $G$ and a number $k$, the following are two standard isoperimetric parameters for graphs. - -*The edge isoperimetric parameter: -$$ -\Phi_E(G,k)=\min_{S\subseteq V} \left\{|E(S,\overline{S})| : |S|=k \right\} -$$ - -*The vertex isoperimetric parameter: -$$ -\Phi_V(G,k)=\min_{S\subseteq V} \left\{|\Gamma(S)\setminus S| : |S|=k \right\} -$$ - -Here $E(S,\overline{S})$ denotes the set of edges leaving $S$ and $\Gamma(S)$ denotes the set of vertices that have a neighbour in $S$. The isoperimetric problem consists of understanding how the parameters $\Phi_E$ and $\Phi_V$ behave for natural families of graphs. - -The $d$-dimensional hypercube $Q_d$ is the graph whose vertices are all Boolean vectors of length $d$, that is, the set $\{0,1\}^d$. Two such vectors are connected by an edge in $Q_d$ if they are equal up to a single bit flip, that is, their Hamming distance is exactly one. - -The following are the isoperimetric inequalities for the Boolean hypercube. - -The edge isoperimetric inequality of the hypercube is $\Phi_E(Q_d,k) \geq k(d-\log_2 k)$. This bound is tight, as is witnessed by each set $S$ that is the set of vertices of any subcube of $Q_d$. - -Harper's theorem says that Hamming balls have the smallest vertex boundary among all sets of a given size. Hamming balls are sets that contain all points of Hamming weight at most $r$ and no points of Hamming weight larger than $r+1$ for some integer $r$. This theorem implies that any set $S\subseteq V$ with -$$ -|S|\geq\sum_{i=0}^{r} {d\choose i} -$$ - -satisfies -$$ -|S\cup\Gamma(S)|\geq \sum_{i=0}^{r+1}{d\choose i}. -$$ - -As a special case, consider set sizes $k=|S|$ of the form -$$ -k={d \choose 0} + {d \choose 1} + \dots + {d \choose r} -$$ - -for some integer $r$. Then the above implies that the exact vertex isoperimetric parameter is -$$ -\Phi_V(Q_d,k) = {d\choose r+1}. -$$ - -The isoperimetric inequality for triangles in terms of perimeter p and area T states that -$$ -p^2 \ge 12\sqrt{3} \cdot T, -$$ - -with equality for the equilateral triangle. This is implied, via the AM–GM inequality, by a stronger inequality which has also been called the isoperimetric inequality for triangles: -$$ -T \le \frac{\sqrt{3}}{4}(abc)^{\frac{2}{3}}. -$$ diff --git a/wiki/wikipedia/2488.txt b/wiki/wikipedia/2488.txt deleted file mode 100644 index c871e076c7b4d5d7b989cda85098954928ae6907..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2488.txt +++ /dev/null @@ -1,18 +0,0 @@ -In combinatorial mathematics, the Albertson conjecture is an unproven relationship between the crossing number and the chromatic number of a graph. It is named after Michael O. 
Albertson, a professor at Smith College, who stated it as a conjecture in 2007; it is one of his many conjectures in graph coloring theory. The conjecture states that, among all graphs requiring $n$ colors, the complete graph $K_n$ is the one with the smallest crossing number. - -Equivalently, if a graph can be drawn with fewer crossings than $K_n$, then, according to the conjecture, it may be colored with fewer than $n$ colors. - -It is straightforward to show that graphs with bounded crossing number have bounded chromatic number: one may assign distinct colors to the endpoints of all crossing edges and then 4-color the remaining planar graph. Albertson's conjecture replaces this qualitative relationship between crossing number and coloring by a more precise quantitative relationship. - -Specifically, a different conjecture, due to Richard K. Guy, states that the crossing number of the complete graph $K_n$ is -$$ -\textrm{cr}(K_n)=\frac14\biggl\lfloor\frac{n}{2}\biggr\rfloor\left\lfloor\frac{n-1}{2}\right\rfloor\left\lfloor\frac{n-2}{2}\right\rfloor\left\lfloor\frac{n-3}{2}\right\rfloor. -$$ - -It is known how to draw complete graphs with this many crossings, by placing the vertices in two concentric circles; what is unknown is whether there exists a better drawing with fewer crossings. Therefore, a strengthened formulation of the Albertson conjecture is that every $n$-chromatic graph has crossing number at least as large as the right hand side of this formula. This strengthened conjecture would be true if and only if both Guy's conjecture and the Albertson conjecture are true. - -A weaker form of the conjecture, proven by M. Schaefer, states that every graph with chromatic number $n$ has crossing number $\Omega(n^4)$ (using big omega notation), or equivalently that every graph with crossing number $k$ has chromatic number $O(k^{1/4})$. Albertson and his coauthors published a simple proof of these bounds, by combining the fact that every minimal $n$-chromatic graph has minimum degree at least $n-1$ (because otherwise greedy coloring would use fewer colors) with the crossing number inequality, according to which every graph $G=(V,E)$ with $|E|/|V|\ge 4$ has crossing number $\Omega(|E|^3/|V|^2)$. Using the same reasoning, they show that a counterexample to Albertson's conjecture for the chromatic number $n$ (if it exists) must have fewer than $4n$ vertices. - -The Albertson conjecture is vacuously true for $n\le 4$. In these cases, $K_n$ has crossing number zero, so the conjecture states only that the $n$-chromatic graphs have crossing number greater than or equal to zero, something that is true of all graphs. The case $n=5$ of Albertson's conjecture is equivalent to the four color theorem, that any planar graph can be colored with four or fewer colors, for the only graphs requiring fewer crossings than the one crossing of $K_5$ are the planar graphs, and the conjecture implies that these should all be at most 4-chromatic. Through the efforts of several groups of authors the conjecture is now known to hold for all $n\le 18$. For every integer $c\ge 6$, Luiz and Richter presented a family of $(c+1)$-color-critical graphs that do not contain a subdivision of the complete graph $K_{c+1}$ but have crossing number at least that of $K_{c+1}$. - -There is also a connection to the Hadwiger conjecture, an important open problem in combinatorics concerning the relationship between chromatic number and the existence of large cliques as minors in a graph.
A variant of the Hadwiger conjecture, stated by György Hajós, is that every $n$-chromatic graph contains a subdivision of $K_n$; if this were true, the Albertson conjecture would follow, because the crossing number of the whole graph is at least as large as the crossing number of any of its subdivisions. However, counterexamples to the Hajós conjecture are now known, so this connection does not provide an avenue for proof of the Albertson conjecture. diff --git a/wiki/wikipedia/2489.txt b/wiki/wikipedia/2489.txt deleted file mode 100644 index 094ede6550e626427f5d5ed5e4a4612780c5284f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2489.txt +++ /dev/null @@ -1,12 +0,0 @@ -In mathematics, the Hodge index theorem for an algebraic surface V determines the signature of the intersection pairing on the algebraic curves C on V. It says, roughly speaking, that the space spanned by such curves (up to linear equivalence) has a one-dimensional subspace on which it is positive definite (not uniquely determined), and decomposes as a direct sum of some such one-dimensional subspace and a complementary subspace on which it is negative definite. - -In a more formal statement, specify that V is a non-singular projective surface, and let H be the divisor class on V of a hyperplane section of V in a given projective embedding. Then the intersection -$$ -H \cdot H = d -$$ - -where d is the degree of V (in that embedding). Let D be the vector space of rational divisor classes on V, up to algebraic equivalence. The dimension of D is finite and is usually denoted by ρ(V). The Hodge index theorem says that the subspace spanned by H in D has a complementary subspace on which the intersection pairing is negative definite. Therefore, the signature (often also called index) is (1,ρ(V)-1). - -The abelian group of divisor classes up to algebraic equivalence is now called the Néron-Severi group; it is known to be a finitely-generated abelian group, and the result is about its tensor product with the rational number field. Therefore, ρ(V) is equally the rank of the Néron-Severi group (which can have a non-trivial torsion subgroup, on occasion). - -This result was proved in the 1930s by W. V. D. Hodge, for varieties over the complex numbers, after it had for some time been a conjecture of the Italian school of algebraic geometry (in particular, Francesco Severi, who in this case showed that ρ < ∞). Hodge's methods were the topological ones brought in by Lefschetz. The result holds over general (algebraically closed) fields. diff --git a/wiki/wikipedia/249.txt b/wiki/wikipedia/249.txt deleted file mode 100644 index 2d6ab3954145aef4c932206ff507a12383f9b811..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/249.txt +++ /dev/null @@ -1,30 +0,0 @@ -In mathematics, the Riemann–Siegel formula is an asymptotic formula for the error of the approximate functional equation of the Riemann zeta function, an approximation of the zeta function by a sum of two finite Dirichlet series. It was found by Siegel in unpublished manuscripts of Bernhard Riemann dating from the 1850s. Siegel derived it from the Riemann–Siegel integral formula, an expression for the zeta function involving contour integrals. It is often used to compute values of the Riemann zeta function, sometimes in combination with the Odlyzko–Schönhage algorithm, which speeds it up considerably. When used along the critical line, it is often useful to use it in a form where it becomes a formula for the Z function.
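As a quick illustration of the Z function mentioned above (an added sketch, not from the article): the mpmath library exposes $Z(t)$ and the Riemann–Siegel theta function directly, so one can check numerically that $Z(t) = e^{i\theta(t)}\zeta(\tfrac12 + it)$ is real on the critical line. Note that this verifies the defining identity of Z, not the Riemann–Siegel asymptotic formula itself.

```python
from mpmath import mp, mpc, exp, zeta, siegelz, siegeltheta

mp.dps = 30  # working precision in decimal digits

for t in (18, 25, 100):
    direct = siegelz(t)  # mpmath's built-in Riemann-Siegel Z function
    via_zeta = exp(mpc(0, siegeltheta(t))) * zeta(mpc(0.5, t))
    # Z(t) is real; the two computations agree to working precision.
    print(t, direct, via_zeta.real)
```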
- -If M and N are non-negative integers, then the zeta function is equal to -$$ -\zeta(s) = \sum_{n=1}^N\frac{1}{n^s} + \gamma(1-s)\sum_{n=1}^M\frac{1}{n^{1-s}} + R(s) -$$ - -where -$$ -\gamma(s) = \pi^{\tfrac{1}{2}-s} \frac{\Gamma \left (\tfrac{s}{2} \right)}{\Gamma \left(\tfrac{1}{2}(1-s)\right)} -$$ - -is the factor appearing in the functional equation ζ(s) = γ(1 − s) ζ(1 − s), and -$$ -R(s) = \frac{-\Gamma(1-s)}{2\pi i}\int \frac{(-x)^{s-1}e^{-Nx}}{e^x-1}dx -$$ - -is a contour integral whose contour starts and ends at +∞ and circles the singularities of absolute value at most 2πM. The approximate functional equation gives an estimate for the size of the error term. Siegel and Edwards derive the Riemann–Siegel formula from this by applying the method of steepest descent to this integral to give an asymptotic expansion for the error term R(s) as a series of negative powers of Im(s). In applications s is usually on the critical line, and the positive integers M and N are chosen to be about $(2\pi \operatorname{Im}(s))^{1/2}$. Gabcke found good bounds for the error of the Riemann–Siegel formula. - -Riemann showed that -$$ -\int_{0 \searrow 1} \frac{e^{-i\pi u^2+2\pi i pu}}{e^{\pi i u}-e^{-\pi i u}} du = \frac{e^{i\pi p^2}-e^{i\pi p}}{e^{i\pi p} - e^{-i \pi p}} -$$ - -where the contour of integration is a line of slope -1 passing between 0 and 1. - -He used this to give the following integral formula for the zeta function: -$$ -\pi^{-\tfrac{s}{2}}\Gamma\left (\tfrac{s}{2} \right)\zeta(s)= \pi^{-\tfrac{s}{2}} \Gamma \left (\tfrac{s}{2} \right) \int_{0\swarrow 1}\frac{x^{-s}e^{\pi i x^2}}{e^{\pi i x}-e^{-\pi i x}}dx +\pi^{-\frac{1-s}{2}}\Gamma \left (\tfrac{1-s}{2} \right)\int_{0\searrow 1}\frac{x^{s-1}e^{-\pi i x^2}}{e^{\pi i x}-e^{-\pi i x}}dx -$$ diff --git a/wiki/wikipedia/2490.txt b/wiki/wikipedia/2490.txt deleted file mode 100644 index 77d71c5e97247902eb484192454b9d871764dd08..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2490.txt +++ /dev/null @@ -1,25 +0,0 @@ -In mathematics, the concept of irreducibility is used in several ways. - -* A polynomial over a field may be an irreducible polynomial if it cannot be factored over that field. - -* In abstract algebra, irreducible can be an abbreviation for irreducible element of an integral domain; for example an irreducible polynomial. - -* In representation theory, an irreducible representation is a nontrivial representation with no nontrivial proper subrepresentations. Similarly, an irreducible module is another name for a simple module. - -* Absolutely irreducible is a term applied to mean irreducible, even after any finite extension of the field of coefficients. It applies in various situations, for example to irreducibility of a linear representation, or of an algebraic variety; where it means just the same as irreducible over an algebraic closure. - -* In commutative algebra, a commutative ring R is irreducible if its prime spectrum, that is, the topological space Spec R, is an irreducible topological space. - -* A matrix is irreducible if it is not similar via a permutation to a block upper triangular matrix (that has more than one block of positive size). (Replacing non-zero entries in the matrix by one, and viewing the matrix as the adjacency matrix of a directed graph, the matrix is irreducible if and only if this directed graph is strongly connected.)
- -* Also, a Markov chain is irreducible if there is a non-zero probability of transitioning (even if in more than one step) from any state to any other state. - -* In the theory of manifolds, an n-manifold is irreducible if any embedded (n − 1)-sphere bounds an embedded n-ball. Implicit in this definition is the use of a suitable category, such as the category of differentiable manifolds or the category of piecewise-linear manifolds. The notions of irreducibility in algebra and manifold theory are related. An n-manifold is called prime, if it cannot be written as a connected sum of two n-manifolds (neither of which is an n-sphere). An irreducible manifold is thus prime, although the converse does not hold. From an algebraist's perspective, prime manifolds should be called "irreducible"; however, the topologist (in particular the 3-manifold topologist) finds the definition above more useful. The only compact, connected 3-manifolds that are prime but not irreducible are the trivial 2-sphere bundle over S1 and the twisted 2-sphere bundle over S1. See, for example, Prime decomposition (3-manifold). - -* A topological space is irreducible if it is not the union of two proper closed subsets. This notion is used in algebraic geometry, where spaces are equipped with the Zariski topology; it is not of much significance for Hausdorff spaces. See also irreducible component, algebraic variety. - -* In universal algebra, irreducible can refer to the inability to represent an algebraic structure as a composition of simpler structures using a product construction; for example subdirectly irreducible. - -* A 3-manifold is P²-irreducible if it is irreducible and contains no 2-sided $\mathbb RP^2$ (real projective plane). - -* An irreducible fraction (or fraction in lowest terms) is a vulgar fraction in which the numerator and denominator are smaller than those in any other equivalent fraction. diff --git a/wiki/wikipedia/2491.txt b/wiki/wikipedia/2491.txt deleted file mode 100644 index 3a0818ca5308eb531fc1de62aa34599e0b583ea9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2491.txt +++ /dev/null @@ -1,73 +0,0 @@ -In graph theory, an independent set, stable set, coclique or anticlique is a set of vertices in a graph, no two of which are adjacent. That is, it is a set $S$ of vertices such that for every two vertices in $S$, there is no edge connecting the two. Equivalently, each edge in the graph has at most one endpoint in $S$. A set is independent if and only if it is a clique in the graph's complement. The size of an independent set is the number of vertices it contains. Independent sets have also been called "internally stable sets", of which "stable set" is a shortening. - -A maximal independent set is an independent set that is not a proper subset of any other independent set. - -A maximum independent set is an independent set of largest possible size for a given graph $G$. This size is called the independence number of $G$ and is usually denoted by $\alpha(G)$. The optimization problem of finding such a set is called the maximum independent set problem. It is a strongly NP-hard problem. As such, it is unlikely that there exists an efficient algorithm for finding a maximum independent set of a graph. - -Every maximum independent set also is maximal, but the converse implication does not necessarily hold. - -A set is independent if and only if it is a clique in the graph’s complement, so the two concepts are complementary. 
In fact, sufficiently large graphs with no large cliques have large independent sets, a theme that is explored in Ramsey theory. - -A set is independent if and only if its complement is a vertex cover. Therefore, the sum of the size of the largest independent set $\alpha(G)$ and the size of a minimum vertex cover $\beta(G)$ is equal to the number of vertices in the graph. - -A vertex coloring of a graph $G$ corresponds to a partition of its vertex set into independent subsets. Hence the minimal number of colors needed in a vertex coloring, the chromatic number $\chi(G)$, is at least the quotient of the number of vertices in $G$ and the independence number $\alpha(G)$. - -In a bipartite graph with no isolated vertices, the number of vertices in a maximum independent set equals the number of edges in a minimum edge covering; this is Kőnig's theorem. - -An independent set that is not a proper subset of another independent set is called maximal. Such sets are dominating sets. Every graph contains at most $3^{n/3}$ maximal independent sets, but many graphs have far fewer. - -The number of maximal independent sets in n-vertex cycle graphs is given by the Perrin numbers, and the number of maximal independent sets in n-vertex path graphs is given by the Padovan sequence. Therefore, both numbers are proportional to powers of 1.324718..., the plastic number. - -In computer science, several computational problems related to independent sets have been studied. - -*In the maximum independent set problem, the input is an undirected graph, and the output is a maximum independent set in the graph. If there are multiple maximum independent sets, only one need be output. This problem is sometimes referred to as "vertex packing". - -*In the maximum-weight independent set problem, the input is an undirected graph with weights on its vertices and the output is an independent set with maximum total weight. The maximum independent set problem is the special case in which all weights are one. - -*In the maximal independent set listing problem, the input is an undirected graph, and the output is a list of all its maximal independent sets. The maximum independent set problem may be solved using as a subroutine an algorithm for the maximal independent set listing problem, because the maximum independent set must be included among all the maximal independent sets. - -*In the independent set decision problem, the input is an undirected graph and a number k, and the output is a Boolean value: true if the graph contains an independent set of size k, and false otherwise. - -The first three of these problems are all important in practical applications; the independent set decision problem is not, but is necessary in order to apply the theory of NP-completeness to problems related to independent sets. - -The independent set problem and the clique problem are complementary: a clique in G is an independent set in the complement graph of G and vice versa. Therefore, many computational results may be applied equally well to either problem. For example, the results related to the clique problem have the following corollaries: - -* The independent set decision problem is NP-complete, and hence it is not believed that there is an efficient algorithm for solving it. - -* The maximum independent set problem is NP-hard and it is also hard to approximate.
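As a concrete (added) illustration of the decision problem just described, here is a brute-force Python sketch; the example graph is made up, and the exponential running time is exactly why this approach is feasible only for tiny inputs:

```python
from itertools import combinations

def has_independent_set(n, edges, k):
    """Decide whether the graph on vertices 0..n-1 with the given edge
    list contains an independent set of k vertices, by trying all
    C(n, k) subsets -- exponential, as expected for an NP-complete problem."""
    edge_set = {frozenset(e) for e in edges}
    return any(
        all(frozenset(pair) not in edge_set for pair in combinations(s, 2))
        for s in combinations(range(n), k)
    )

# The 5-cycle C5 has independence number 2.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(has_independent_set(5, c5, 2))  # True
print(has_independent_set(5, c5, 3))  # False
```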
- -Despite the close relationship between maximum cliques and maximum independent sets in arbitrary graphs, the independent set and clique problems may be very different when restricted to special classes of graphs. For instance, for sparse graphs (graphs in which the number of edges is at most a constant times the number of vertices in any subgraph), the maximum clique has bounded size and may be found exactly in linear time; however, for the same classes of graphs, or even for the more restricted class of bounded degree graphs, finding the maximum independent set is MAXSNP-complete, implying that, for some constant c (depending on the degree) it is NP-hard to find an approximate solution that comes within a factor of c of the optimum. - -The maximum independent set problem is NP-hard. However, it can be solved more efficiently than the $O(n^2 2^n)$ time that would be given by a naive brute force algorithm that examines every vertex subset and checks whether it is an independent set. - -As of 2017 it can be solved in time $O(1.1996^n)$ using polynomial space. When restricted to graphs with maximum degree 3, it can be solved in time $O(1.0836^n)$. - -For many classes of graphs, a maximum weight independent set may be found in polynomial time. - -Famous examples are claw-free graphs, - -$P_5$-free graphs - -and perfect graphs. - -For chordal graphs, a maximum weight independent set can be found in linear time. - -Modular decomposition is a good tool for solving the maximum weight independent set problem; the linear time algorithm on cographs is the basic example for that. Another important tool is clique separators, as described by Tarjan. - -Kőnig's theorem implies that in a bipartite graph the maximum independent set can be found in polynomial time using a bipartite matching algorithm. - -In general, the maximum independent set problem cannot be approximated to a constant factor in polynomial time (unless P = NP). In fact, Max Independent Set in general is Poly-APX-complete, meaning it is as hard as any problem that can be approximated to a polynomial factor. However, there are efficient approximation algorithms for restricted classes of graphs. - -In planar graphs, the maximum independent set may be approximated to within any approximation ratio c < 1 in polynomial time; similar polynomial-time approximation schemes exist in any family of graphs closed under taking minors. - -In bounded degree graphs, effective approximation algorithms are known with approximation ratios that are constant for a fixed value of the maximum degree; for instance, a greedy algorithm that forms a maximal independent set by, at each step, choosing the minimum degree vertex in the graph and removing its neighbors, achieves an approximation ratio of (Δ+2)/3 on graphs with maximum degree Δ. Approximation hardness bounds for such instances were proven in Berman. Indeed, even Max Independent Set on 3-regular 3-edge-colorable graphs is APX-complete. - -An interval graph is a graph in which the nodes are 1-dimensional intervals (e.g. time intervals) and there is an edge between two intervals if and only if they intersect. An independent set in an interval graph is just a set of non-overlapping intervals. The problem of finding maximum independent sets in interval graphs has been studied, for example, in the context of job scheduling: given a set of jobs that has to be executed on a computer, find a maximum set of jobs that can be executed without interfering with each other.
This problem can be solved exactly in polynomial time using earliest deadline first scheduling. - -A geometric intersection graph is a graph in which the nodes are geometric shapes and there is an edge between two shapes if and only if they intersect. An independent set in a geometric intersection graph is just a set of disjoint (non-overlapping) shapes. The problem of finding maximum independent sets in geometric intersection graphs has been studied, for example, in the context of Automatic label placement: given a set of locations in a map, find a maximum set of disjoint rectangular labels near these locations. - -Finding a maximum independent set in intersection graphs is still NP-complete, but it is easier to approximate than the general maximum independent set problem. A recent survey can be found in the introduction of Chan. - -The problem of finding a maximal independent set can be solved in polynomial time by a trivial greedy algorithm. All maximal independent sets can be found in time $O(3^{n/3}) = O(1.4423^n)$. - -The maximum independent set problem and its dual, the minimum vertex cover problem, are involved in proving the computational complexity of many theoretical problems. They also serve as useful models for real world optimization problems; for example, maximum independent set is a useful model for discovering stable genetic components for designing engineered genetic systems. diff --git a/wiki/wikipedia/2492.txt b/wiki/wikipedia/2492.txt deleted file mode 100644 index 148a5d23a9dde25deda3e89accf716c6347478a6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2492.txt +++ /dev/null @@ -1,117 +0,0 @@ -In mathematical logic, Russell's paradox (also known as Russell's antinomy) is a set-theoretic paradox discovered by the British philosopher and mathematician Bertrand Russell in 1901. Russell's paradox shows that every set theory that contains an unrestricted comprehension principle leads to contradictions. The paradox had already been discovered independently in 1899 by the German mathematician Ernst Zermelo. However, Zermelo did not publish the idea, which remained known only to David Hilbert, Edmund Husserl, and other academics at the University of Göttingen. At the end of the 1890s, Georg Cantor – considered the founder of modern set theory – had already realized that his theory would lead to a contradiction, which he told Hilbert and Richard Dedekind by letter. - -According to the unrestricted comprehension principle, for any sufficiently well-defined property, there is the set of all and only the objects that have that property. Let R be the set of all sets that are not members of themselves. If R is not a member of itself, then its definition entails that it is a member of itself; if it is a member of itself, then it is not a member of itself, since it is the set of all sets that are not members of themselves. The resulting contradiction is Russell's paradox. In symbols: -$$ -\text{Let } R = \{ x \mid x \not \in x \} \text{, then } R \in R \iff R \not \in R -$$ - -Russell also showed that a version of the paradox could be derived in the axiomatic system constructed by the German philosopher and mathematician Gottlob Frege, hence undermining Frege's attempt to reduce mathematics to logic and questioning the logicist programme. Two influential ways of avoiding the paradox were both proposed in 1908: Russell's own type theory and the Zermelo set theory. In particular, Zermelo's axioms restricted the unlimited comprehension principle.
With the additional contributions of Abraham Fraenkel, Zermelo set theory developed into the now-standard Zermelo–Fraenkel set theory (commonly known as ZFC when including the axiom of choice). The main difference between Russell's and Zermelo's solution to the paradox is that Zermelo modified the axioms of set theory while maintaining a standard logical language, while Russell modified the logical language itself. The language of ZFC, with the help of Thoralf Skolem, turned out to be that of first-order logic. - -Most sets commonly encountered are not members of themselves. For example, consider the set of all squares in the plane. This set is not itself a square in the plane, thus it is not a member of itself. Let us call a set "normal" if it is not a member of itself, and "abnormal" if it is a member of itself. Clearly every set must be either normal or abnormal. The set of squares in the plane is normal. In contrast, the complementary set that contains everything which is not a square in the plane is itself not a square in the plane, and so it is one of its own members and is therefore abnormal. - -Now we consider the set of all normal sets, R, and try to determine whether R is normal or abnormal. If R were normal, it would be contained in the set of all normal sets (itself), and therefore be abnormal; on the other hand if R were abnormal, it would not be contained in the set of all normal sets (itself), and therefore be normal. This leads to the conclusion that R is neither normal nor abnormal: Russell's paradox. - -The term "naive set theory" is used in various ways. In one usage, naive set theory is a formal theory, which we may call NST, that is formulated in a first-order language with a binary non-logical predicate $\in$, and that includes the Axiom of extensionality: -$$ -\forall x \forall y ( \forall z (z \in x \iff z \in y) \implies x = y) -$$ - -and the axiom schema of unrestricted comprehension: -$$ - \exists y \forall x (x \in y \iff \varphi(x)) -$$ - -for any formula $\varphi$ with the variable x as a free variable inside $\varphi$. Substitute $x \notin x$ for $\varphi(x)$. Then by existential instantiation (reusing the symbol $y$) and universal instantiation we have -$$ -y \in y \iff y \notin y -$$ - -a contradiction. Therefore, NST is inconsistent. - -From the principle of explosion of classical logic, any proposition can be proved from a contradiction. Therefore, the presence of contradictions like Russell's paradox in an axiomatic set theory is disastrous; since if any formula can be proven true it destroys the conventional meaning of truth and falsity. Further, since set theory was seen as the basis for an axiomatic development of all other branches of mathematics, Russell's paradox threatened the foundations of mathematics as a whole. This motivated a great deal of research around the turn of the 20th century to develop a consistent (contradiction free) set theory. - -In 1908, Ernst Zermelo proposed an axiomatization of set theory that avoided the paradoxes of naive set theory by replacing arbitrary set comprehension with weaker existence axioms, such as his axiom of separation (Aussonderung). Modifications to this axiomatic theory proposed in the 1920s by Abraham Fraenkel, Thoralf Skolem, and by Zermelo himself resulted in the axiomatic set theory called ZFC. This theory became widely accepted once Zermelo's axiom of choice ceased to be controversial, and ZFC has remained the canonical axiomatic set theory down to the present day. 
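As a loose computational analogy, added here for illustration and not a formal model of set theory, one can mimic the comprehension-schema contradiction above in Python by encoding "sets" as predicates, reading membership $x \in y$ as y(x):

```python
# "Sets" as predicates: membership x in y is encoded as y(x).
# R is the "set" of all x that are not members of themselves.
R = lambda x: not x(x)

# Asking whether R is a member of itself evaluates R(R) = not R(R):
# the definition chases its own tail and evaluation never terminates.
try:
    R(R)
except RecursionError:
    print("R(R) cannot be assigned a consistent truth value")
```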
- -ZFC does not assume that, for every property, there is a set of all things satisfying that property. Rather, it asserts that given any set X, any subset of X definable using first-order logic exists. The object R discussed above cannot be constructed in this fashion, and is therefore not a ZFC set. In some extensions of ZFC, objects like R are called proper classes. - -ZFC is silent about types, although the cumulative hierarchy has a notion of layers that resemble types. Zermelo himself never accepted Skolem's formulation of ZFC using the language of first-order logic. As José Ferreirós notes, Zermelo insisted instead that "propositional functions (conditions or predicates) used for separating off subsets, as well as the replacement functions, can be 'entirely arbitrary' [ganz beliebig]"; the modern interpretation given to this statement is that Zermelo wanted to include higher-order quantification in order to avoid Skolem's paradox. Around 1930, Zermelo also introduced (apparently independently of von Neumann) the axiom of foundation, thus—as Ferreirós observes—"by forbidding 'circular' and 'ungrounded' sets, it [ZFC] incorporated one of the crucial motivations of TT [type theory]—the principle of the types of arguments". This second-order ZFC preferred by Zermelo, including the axiom of foundation, allowed a rich cumulative hierarchy. Ferreirós writes that "Zermelo's 'layers' are essentially the same as the types in the contemporary versions of simple TT [type theory] offered by Gödel and Tarski. One can describe the cumulative hierarchy into which Zermelo developed his models as the universe of a cumulative TT in which transfinite types are allowed. (Once we have adopted an impredicative standpoint, abandoning the idea that classes are constructed, it is not unnatural to accept transfinite types.) Thus, simple TT and ZFC could now be regarded as systems that 'talk' essentially about the same intended objects. The main difference is that TT relies on a strong higher-order logic, while Zermelo employed second-order logic, and ZFC can also be given a first-order formulation. The first-order 'description' of the cumulative hierarchy is much weaker, as is shown by the existence of denumerable models (Skolem paradox), but it enjoys some important advantages." - -In ZFC, given a set A, it is possible to define a set B that consists of exactly the sets in A that are not members of themselves. B cannot be in A, by the same reasoning as in Russell's paradox. This variation of Russell's paradox shows that no set contains everything. - -Through the work of Zermelo and others, especially John von Neumann, the structure of what some see as the "natural" objects described by ZFC eventually became clear; they are the elements of the von Neumann universe, V, built up from the empty set by transfinitely iterating the power set operation. It is thus now possible again to reason about sets in a non-axiomatic fashion without running afoul of Russell's paradox, namely by reasoning about the elements of V. Whether it is appropriate to think of sets in this way is a point of contention among the rival points of view on the philosophy of mathematics. - -Other solutions to Russell's paradox, with an underlying strategy closer to that of type theory, include Quine's New Foundations and Scott-Potter set theory. Yet another approach is to define a multiple membership relation with an appropriately modified comprehension scheme, as in the Double extension set theory. - -Russell discovered the paradox in May or June 1901.
By his own account in his 1919 Introduction to Mathematical Philosophy, he "attempted to discover some flaw in Cantor's proof that there is no greatest cardinal". In a 1902 letter, he announced the discovery to Gottlob Frege of the paradox in Frege's 1879 Begriffsschrift and framed the problem in terms of both logic and set theory, and in particular in terms of Frege's definition of function. - -Russell would go on to cover it at length in his 1903 The Principles of Mathematics, where he repeated his first encounter with the paradox. - -Russell wrote to Frege about the paradox just as Frege was preparing the second volume of his Grundgesetze der Arithmetik. Frege responded to Russell very quickly; his letter dated 22 June 1902 appeared, with van Heijenoort's commentary, in Heijenoort 1967:126–127. Frege then wrote an appendix admitting to the paradox, and proposed a solution that Russell would endorse in his Principles of Mathematics, but which was later considered by some to be unsatisfactory. For his part, Russell had his work at the printers and he added an appendix on the doctrine of types. - -Ernst Zermelo, in his (1908) A new proof of the possibility of a well-ordering (published at the same time he published "the first axiomatic set theory"), laid claim to prior discovery of the antinomy in Cantor's naive set theory. He states: "And yet, even the elementary form that Russell9 gave to the set-theoretic antinomies could have persuaded them [J. König, Jourdain, F. Bernstein] that the solution of these difficulties is not to be sought in the surrender of well-ordering but only in a suitable restriction of the notion of set". Footnote 9 is where he stakes his claim. - -Frege sent a copy of his Grundgesetze der Arithmetik to Hilbert; as noted above, Frege's last volume mentioned the paradox that Russell had communicated to Frege. After receiving Frege's last volume, on 7 November 1903, Hilbert wrote a letter to Frege in which he said, referring to Russell's paradox, "I believe Dr. Zermelo discovered it three or four years ago". A written account of Zermelo's actual argument was discovered in the Nachlass of Edmund Husserl. - -In 1923, Ludwig Wittgenstein proposed to "dispose" of Russell's paradox as follows: - -
- -The reason why a function cannot be its own argument is that the sign for a function already contains the prototype of its argument, and it cannot contain itself. For let us suppose that the function F(fx) could be its own argument: in that case there would be a proposition F(F(fx)), in which the outer function F and the inner function F must have different meanings, since the inner one has the form φ(fx) and the outer one has the form ψ(φ(fx)). Only the letter 'F' is common to the two functions, but the letter by itself signifies nothing. This immediately becomes clear if instead of F(Fu) we write (∃φ) : F(φu) . φu = Fu. That disposes of Russell's paradox. (Tractatus Logico-Philosophicus, 3.333)
- -Russell and Alfred North Whitehead wrote their three-volume Principia Mathematica hoping to achieve what Frege had been unable to do. They sought to banish the paradoxes of naive set theory by employing a theory of types they devised for this purpose. While they succeeded in grounding arithmetic in a fashion, it is not at all evident that they did so by purely logical means. While Principia Mathematica avoided the known paradoxes and allows the derivation of a great deal of mathematics, its system gave rise to new problems. - -In any event, Kurt Gödel in 1930–31 proved that while the logic of much of Principia Mathematica, now known as first-order logic, is complete, Peano arithmetic is necessarily incomplete if it is consistent. This is very widely—though not universally—regarded as having shown the logicist program of Frege to be impossible to complete. - -In 2001, a centenary international conference celebrating the first hundred years of Russell's paradox was held in Munich, and its proceedings have been published. - -There are some versions of this paradox that are closer to real-life situations and may be easier to understand for non-logicians. For example, the barber paradox supposes a barber who shaves all men who do not shave themselves and only men who do not shave themselves. When one thinks about whether the barber should shave himself or not, the paradox begins to emerge. - -An easy refutation of the "layman's versions" such as the barber paradox seems to be that no such barber exists, or that the barber has alopecia, or is a woman, and in the latter two cases the barber doesn't shave, and so can exist without paradox. The whole point of Russell's paradox is that the answer "such a set does not exist" means the definition of the notion of set within a given theory is unsatisfactory. Note the difference between the statements "such a set does not exist" and "it is an empty set". It is like the difference between saying "There is no bucket" and saying "The bucket is empty". - -A notable exception to the above may be the Grelling–Nelson paradox, in which words and meaning are the elements of the scenario rather than people and hair-cutting. Though it is easy to refute the barber's paradox by saying that such a barber does not (and cannot) exist, it is impossible to say something similar about a meaningfully defined word. - -As illustrated above for the barber paradox, Russell's paradox is not hard to extend. Take: - -* A transitive verb ⟨V⟩ that can be applied to its substantive form. - -Form the sentence: - -The ⟨V⟩er that ⟨V⟩s all (and only those) who don't ⟨V⟩ themselves. - -Sometimes the "all" is replaced by "all ⟨V⟩ers". - -An example would be "paint": - -The painter that paints all (and only those) that don't paint themselves. - -or "elect": - -The elector (representative), that elects all that don't elect themselves. - -Paradoxes that fall in this scheme include: - -* The barber with "shave". - -* The original Russell's paradox with "contain": The container (Set) that contains all (containers) that don't contain themselves. - -* The Grelling–Nelson paradox with "describer": The describer (word) that describes all words, that don't describe themselves. - -* Richard's paradox with "denote": The denoter (number) that denotes all denoters (numbers) that don't denote themselves. (In this paradox, all descriptions of numbers get an assigned number. The term "that denotes all denoters (numbers) that don't denote themselves" is here called Richardian.)
- -* "I am lying.", namely the liar paradox and Epimenides paradox, whose origins are ancient - -* Russell–Myhill paradox - -* The Burali-Forti paradox, about the order type of all well-orderings - -* The Kleene–Rosser paradox, showing that the original lambda calculus is inconsistent, by means of a self-negating statement - -* Curry's paradox (named after Haskell Curry), which does not require negation - -* The smallest uninteresting integer paradox - -* Girard's paradox in type theory diff --git a/wiki/wikipedia/2493.txt b/wiki/wikipedia/2493.txt deleted file mode 100644 index d5628aa79a2b93d136cfe6ae90871b79b62f9c85..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2493.txt +++ /dev/null @@ -1,5 +0,0 @@ -Sue Hays Whitesides is a Canadian mathematician and computer scientist, a professor emeritus of computer science and the chair of the computer science department at the University of Victoria in British Columbia, Canada. Her research specializations include computational geometry and graph drawing. - -Whitesides received her Ph.D. in mathematics in 1975 from the University of Wisconsin–Madison, under the supervision of Richard Bruck. Before joining the University of Victoria faculty, she taught at Dartmouth College and McGill University; at McGill, she was director of the School of Computer Science from 2005 to 2008. - -Whitesides was the program chair for the 1998 International Symposium on Graph Drawing and program co-chair for the 2012 Symposium on Computational Geometry. diff --git a/wiki/wikipedia/2494.txt b/wiki/wikipedia/2494.txt deleted file mode 100644 index ec49859f51e2459439561904843d90f928c6b577..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2494.txt +++ /dev/null @@ -1,12 +0,0 @@ -In mathematics, particularly functional analysis, the Dunford–Schwartz theorem, named after Nelson Dunford and Jacob T. Schwartz, states that the averages of powers of certain norm-bounded operators on L1 converge in a suitable sense. -$$ -\text{Let }T \text{ be a linear operator from }L^1 \text{ to } L^1 \text{ with } \|T\|_1\leq 1\text{ and }\|T\|_\infty\leq 1 \text{. Then} -$$ -$$ -\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{k=0}^{n-1}T^kf -$$ -$$ - \text{exists almost everywhere for all }f\in L^1\text{.} -$$ - -The statement is no longer true when the boundedness condition is relaxed to even $\|T\|_\infty\le 1+\varepsilon$. diff --git a/wiki/wikipedia/2495.txt b/wiki/wikipedia/2495.txt deleted file mode 100644 index 87cb3753a7ce849ac9592b249be3e45304da19ad..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2495.txt +++ /dev/null @@ -1,9 +0,0 @@ -In probability theory, the Feldman–Hájek theorem or Feldman–Hájek dichotomy is a fundamental result in the theory of Gaussian measures. It states that two Gaussian measures $\mu$ and $\nu$ on a locally convex space $X$ are either equivalent measures or else mutually singular: there is no possibility of an intermediate situation in which, for example, $\mu$ has a density with respect to $\nu$ but not vice versa. 
In the special case that $X$ is a Hilbert space, it is possible to give an explicit description of the circumstances under which $\mu$ and $\nu$ are equivalent: writing $m_{\mu}$ and $m_{\nu}$ for the means of $\mu$ and $\nu$, and $C_\mu$ and $C_\nu$ for their covariance operators, equivalence of $\mu$ and $\nu$ holds if and only if - -* $\mu$ and $\nu$ have the same Cameron–Martin space $H = C_{\mu}^{1/2}(X) = C_{\nu}^{1/2}(X)$; - -* the difference in their means lies in this common Cameron–Martin space, i.e. $m_{\mu} - m_{\nu} \in H$; and - -* the operator $(C_{\mu}^{-1/2} C_{\nu}^{1/2}) (C_{\mu}^{-1/2} C_{\nu}^{1/2})^{\ast} - I$ is a Hilbert–Schmidt operator on $\bar{H}$. - -A simple consequence of the Feldman–Hájek theorem is that dilating a Gaussian measure on an infinite-dimensional Hilbert space $X$ (i.e. taking $C_{\nu} = s C_{\mu}$ for some scale factor $s \geq 0$) always yields two mutually singular Gaussian measures, except for the trivial dilation with $s = 1$, since $(s^{2} - 1) I$ is Hilbert–Schmidt only when $s = 1$. diff --git a/wiki/wikipedia/2496.txt b/wiki/wikipedia/2496.txt deleted file mode 100644 index c41c2de260f31d9c35bd56067edbf026168aee6c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2496.txt +++ /dev/null @@ -1,226 +0,0 @@ -In mathematics, a transcendental number is a number that is not algebraic—that is, not the root of a non-zero polynomial of finite degree with rational coefficients. The best known transcendental numbers are π and e. - -Though only a few classes of transcendental numbers are known, in part because it can be extremely difficult to show that a given number is transcendental, transcendental numbers are not rare. Indeed, almost all real and complex numbers are transcendental, since the algebraic numbers compose a countable set, while the set of real numbers and the set of complex numbers are both uncountable sets, and therefore larger than any countable set. All transcendental real numbers (also known as real transcendental numbers or transcendental irrational numbers) are irrational numbers, since all rational numbers are algebraic. The converse is not true: not all irrational numbers are transcendental. Hence, the set of real numbers consists of non-overlapping rational, algebraic non-rational and transcendental real numbers. Joseph Liouville first proved the existence of transcendental numbers in 1844, and in 1851 gave the first decimal examples such as the Liouville constant - - - -\begin{align} - -L_b &= \sum_{n=1}^\infty 10^{-n!} \\ &= 10^{-1} + 10^{-2} + 10^{-6} + 10^{-24} + 10^{-120} + 10^{-720} + 10^{-5040} + 10^{-40320} + \ldots \\ - -&= 0.\textbf{1}\textbf{1}000\textbf{1}00000000000000000\textbf{1}00000000000000000000000000000000000000000000000000000\ldots \\ - -\end{align} - -in which the nth digit after the decimal point is 1 if n is equal to k! (k factorial) for some k and 0 otherwise. In other words, the nth digit of this number is 1 only if n is one of the numbers 1! = 1, 2! = 2, 3! = 6, 4! = 24, etc. Liouville showed that this number belongs to a class of transcendental numbers that can be more closely approximated by rational numbers than can any irrational algebraic number, and this class of numbers is called Liouville numbers, named in his honour. Liouville showed that all Liouville numbers are transcendental. - -The first number to be proven transcendental without having been specifically constructed for the purpose of proving transcendental numbers' existence was e, by Charles Hermite in 1873.
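The digit pattern of the Liouville constant above is easy to reproduce. Here is a small added Python sketch, using only the standard library, that prints a partial sum of $\sum_{n\ge 1} 10^{-n!}$ with enough precision to show the 1s at positions 1, 2, 6, 24, and 120:

```python
from decimal import Decimal, getcontext
from math import factorial

getcontext().prec = 130  # enough digits to reach the 1 at position 5! = 120

# Partial sum of sum_{n>=1} 10^(-n!); each term places a 1 at a
# factorial position 1, 2, 6, 24, 120, ...
L = sum(Decimal(10) ** -factorial(n) for n in range(1, 6))
print(L)  # 0.110001000...0001...0001, with 1s exactly at factorial positions
```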
- -In 1874, Georg Cantor proved that the algebraic numbers are countable and the real numbers are uncountable. He also gave a new method for constructing transcendental numbers. Although this was already implied by his proof of the countability of the algebraic numbers, Cantor also published a construction that proves there are as many transcendental numbers as there are real numbers. Cantor's work established the ubiquity of transcendental numbers. - -In 1882, Ferdinand von Lindemann published the first complete proof of the transcendence of π. He first proved that $e^a$ is transcendental if $a$ is a non-zero algebraic number. Then, since $e^{i\pi} = -1$ is algebraic (see Euler's identity), $i\pi$ must be transcendental. But since $i$ is algebraic, $\pi$ therefore must be transcendental. This approach was generalized by Karl Weierstrass to what is now known as the Lindemann–Weierstrass theorem. The transcendence of π allowed the proof of the impossibility of several ancient geometric constructions involving compass and straightedge, including the most famous one, squaring the circle. - -In 1900, David Hilbert posed an influential question about transcendental numbers, Hilbert's seventh problem: If $a$ is an algebraic number that is not zero or one, and $b$ is an irrational algebraic number, is $a^b$ necessarily transcendental? The affirmative answer was provided in 1934 by the Gelfond–Schneider theorem. This work was extended by Alan Baker in the 1960s in his work on lower bounds for linear forms in any number of logarithms (of algebraic numbers). - -A transcendental number is a (possibly complex) number that is not the root of any integer polynomial, meaning that it is not an algebraic number of any degree. Every real transcendental number must also be irrational, since a rational number is, by definition, an algebraic number of degree one. The set of transcendental numbers is uncountably infinite. Since the polynomials with rational coefficients are countable, and since each such polynomial has a finite number of zeroes, the algebraic numbers must also be countable. However, Cantor's diagonal argument proves that the real numbers (and therefore also the complex numbers) are uncountable. Since the real numbers are the union of algebraic and transcendental numbers, it is impossible for both subsets to be countable. This makes the transcendental numbers uncountable. - -No rational number is transcendental and all real transcendental numbers are irrational. The irrational numbers contain all the real transcendental numbers and a subset of the algebraic numbers, including the quadratic irrationals and other forms of algebraic irrationals. - -Any non-constant algebraic function of a single variable yields a transcendental value when applied to a transcendental argument. For example, from knowing that $\pi$ is transcendental, it can be immediately deduced that numbers such as $5\pi$, $\tfrac{\pi-3}{\sqrt{2}}$, $(\sqrt{\pi}-\sqrt{3})^8$, and $\sqrt[4]{\pi^5+7}$ are transcendental as well. - -However, an algebraic function of several variables may yield an algebraic number when applied to transcendental numbers if these numbers are not algebraically independent. For example, $\pi$ and $1 - \pi$ are both transcendental, but $\pi + (1 - \pi) = 1$ is obviously not. It is unknown whether $e + \pi$, for example, is transcendental, though at least one of $e + \pi$ and $e\pi$ must be transcendental. More generally, for any two transcendental numbers $a$ and $b$, at least one of $a + b$ and $ab$ must be transcendental. To see this, consider the polynomial $(x - a)(x - b) = x^2 - (a + b)x + ab$.
If $(a + b)$ and $ab$ were both algebraic, then this would be a polynomial with algebraic coefficients. Because algebraic numbers form an algebraically closed field, this would imply that the roots of the polynomial, $a$ and $b$, must be algebraic. But this is a contradiction, and thus it must be the case that at least one of the coefficients is transcendental. - -The non-computable numbers are a strict subset of the transcendental numbers. - -All Liouville numbers are transcendental, but not vice versa. Any Liouville number must have unbounded partial quotients in its continued fraction expansion. Using a counting argument one can show that there exist transcendental numbers which have bounded partial quotients and hence are not Liouville numbers. - -Using the explicit continued fraction expansion of $e$, one can show that $e$ is not a Liouville number (although the partial quotients in its continued fraction expansion are unbounded). Kurt Mahler showed in 1953 that π is also not a Liouville number. It is conjectured that all infinite continued fractions with bounded terms that are not eventually periodic are transcendental (eventually periodic continued fractions correspond to quadratic irrationals). - -Numbers proven to be transcendental: - -* $e^a$ if $a$ is algebraic and nonzero (by the Lindemann–Weierstrass theorem). - -* π (by the Lindemann–Weierstrass theorem). - -* $e^\pi$, Gelfond's constant, as well as $e^{-\pi/2} = i^i$ (by the Gelfond–Schneider theorem). - -* $a^b$ where $a$ is algebraic but not 0 or 1, and $b$ is irrational algebraic (by the Gelfond–Schneider theorem), in particular $2^{\sqrt{2}}$, the Gelfond–Schneider constant (or Hilbert number). - -* $\sin a$, $\cos a$, $\tan a$, $\csc a$, $\sec a$, and $\cot a$, and their hyperbolic counterparts, for any nonzero algebraic number $a$, expressed in radians (by the Lindemann–Weierstrass theorem). - -* The fixed point of the cosine function (also referred to as the Dottie number $d$) – the unique real solution to the equation $\cos x = x$, where $x$ is in radians (by the Lindemann–Weierstrass theorem). - -* $\ln a$ if $a$ is algebraic and not equal to 0 or 1, for any branch of the logarithm function (by the Lindemann–Weierstrass theorem). - -* $\log_b a$ if $a$ and $b$ are positive integers not both powers of the same integer (by the Gelfond–Schneider theorem). - -* $W(a)$ if $a$ is algebraic and nonzero, for any branch of the Lambert W function (by the Lindemann–Weierstrass theorem), in particular $\Omega$, the omega constant. - -* $\sqrt{x}_s$, the square super-root of any natural number $x$, which is either an integer or transcendental (by the Gelfond–Schneider theorem). - -* $\Gamma(1/3)$, $\Gamma(1/4)$, and $\Gamma(1/6)$. - -* The sum $$\sum_{n=0}^\infty 10^{-2^n} = 0.\textbf{1}\textbf{1}0\textbf{1}000\textbf{1}0000000\textbf{1}\ldots,$$ which remains transcendental when 10 is replaced by any algebraic $b > 1$. - -* The Gauss constant. - -* The two lemniscate constants $L_1$ (sometimes denoted as $\varpi$) and $L_2$. - -* The aforementioned Liouville constant for any algebraic $b \in (0, 1)$. - -* The Prouhet–Thue–Morse constant. - -* The Komornik–Loreti constant. - -* Any number for which the digits with respect to some fixed base form a Sturmian word. - -* For $\beta > 1$, $$\sum_{k=0}^\infty 10^{-\left\lfloor \beta^{k} \right\rfloor},$$ where $\beta\mapsto\lfloor \beta \rfloor$ is the floor function. - -* 3.300330000000000330033... and its reciprocal 0.30300000303..., two numbers with only two different decimal digits whose nonzero digit positions are given by the Moser–de Bruijn sequence and its double.
- -* The number $\frac{\pi}{2}\cdot\frac{Y_0(2)}{J_0(2)}-\gamma$, where $Y_\alpha(x)$ and $J_\alpha(x)$ are Bessel functions and $\gamma$ is the Euler–Mascheroni constant. - -Numbers which have yet to be proven to be either transcendental or algebraic: - -* Most sums, products, powers, etc. of the number π and the number $e$, e.g. $e\pi$, $e + \pi$, $\pi - e$, $\pi/e$, $\pi^\pi$, $e^e$, $\pi^e$, $\pi^{\sqrt{2}}$, $e^{\pi^2}$, are not known to be rational, algebraic, irrational or transcendental. A notable exception is $e^{\pi\sqrt{n}}$ (for any positive integer $n$), which has been proven transcendental. - -* The Euler–Mascheroni constant $\gamma$: In 2010 M. Ram Murty and N. Saradha considered an infinite list of numbers also containing $\gamma/4$ and showed that all but at most one of them have to be transcendental. In 2012 it was shown that at least one of $\gamma$ and the Euler–Gompertz constant $\delta$ is transcendental. - -* Catalan's constant, not even proven to be irrational. - -* Khinchin's constant, also not proven to be irrational. - -* Apéry's constant $\zeta(3)$ (which Apéry proved is irrational). - -* The Riemann zeta function at other odd integers, $\zeta(5)$, $\zeta(7)$, ... (not proven to be irrational). - -* The Feigenbaum constants $\delta$ and $\alpha$, also not proven to be irrational. - -* Mills' constant, also not proven to be irrational. - -* The Copeland–Erdős constant, formed by concatenating the decimal representations of the prime numbers. - -Conjectures: - -* Schanuel's conjecture, - -* Four exponentials conjecture. - -The first proof that the base of the natural logarithms, $e$, is transcendental dates from 1873. We will now follow the strategy of David Hilbert (1862–1943), who gave a simplification of the original proof of Charles Hermite. The idea is the following: - -Assume, for the purpose of finding a contradiction, that $e$ is algebraic. Then there exists a finite set of integer coefficients $c_0, c_1, \ldots, c_n$ satisfying the equation: $$c_{0}+c_{1}e+c_{2}e^{2}+\cdots+c_{n}e^{n}=0, \qquad c_0, c_n \neq 0.$$ Now for a positive integer $k$, we define the following polynomial: $$f_k(x) = x^{k} \left [(x-1)\cdots(x-n) \right ]^{k+1},$$ and multiply both sides of the above equation by $$\int^{\infty}_{0} f_k e^{-x}dx,$$ to arrive at the equation: $$c_{0} \left (\int^{\infty}_{0} f_k e^{-x}dx\right )+ c_1e\left ( \int^{\infty}_{0}f_k e^{-x}dx\right )+\cdots+ c_{n}e^{n} \left (\int^{\infty}_{0}f_k e^{-x}dx\right ) = 0.$$ By splitting respective domains of integration, this equation can be written in the form $$P+Q=0$$ where $$\begin{align} P &= c_{0}\left ( \int^{\infty}_{0}f_k e^{-x}dx\right )+ c_{1}e\left (\int^{\infty}_{1}f_k e^{-x}dx\right )+ c_{2}e^{2}\left (\int^{\infty}_{2}f_k e^{-x}dx\right ) +\cdots+ c_{n}e^{n}\left (\int^{\infty}_{n}f_k e^{-x}dx\right ) \\ Q &= c_{1}e\left (\int^{1}_{0} f_k e^{-x}dx\right )+c_{2}e^{2} \left (\int^{2}_{0} f_k e^{-x}dx\right )+\cdots+c_{n}e^{n}\left (\int^{n}_{0} f_k e^{-x}dx \right ) \end{align}$$ - -Lemma 1. For an appropriate choice of $k$, $\tfrac{P}{k!}$ is a non-zero integer.
    Proof. Each term in $P$ is an integer times a sum of factorials, which results from the relation $$\int^{\infty}_{0}x^{j}e^{-x}dx=j!$$ which is valid for any positive integer $j$ (consider the Gamma function). - -It is non-zero because for every $a$ satisfying $0 < a \leq n$, the integrand in $$c_{a}e^{a}\int^{\infty}_{a} f_k e^{-x}dx$$ is $e^{-x}$ times a sum of terms whose lowest power of $x$ is $k+1$ after substituting $x+a$ for $x$ in the integral. Then this becomes a sum of integrals of the form $$A_{j-k}\int^{\infty}_{0}x^{j}e^{-x}dx,$$ where $A_{j-k}$ is an integer and $k+1 \leq j$, and it is therefore an integer divisible by $(k+1)!$. After dividing by $k!$, each of these terms is zero modulo $k+1$. However, we can write: $$\int^{\infty}_{0} f_k e^{-x}dx = \int^{\infty}_{0} \left ( \left [(-1)^{n}(n!) \right ]^{k+1}e^{-x}x^k + \cdots \right ) dx$$ and thus $${\frac {1}{k!}}c_{0}\int _{0}^{\infty }f_{k}e^{-x}dx\equiv c_{0}[(-1)^{n}(n!)]^{k+1}\not\equiv 0{\pmod {k+1}}.$$ So when dividing each integral in $P$ by $k!$, the initial one is not divisible by $k+1$, but all the others are, as long as $k+1$ is prime and larger than $n$ and $|c_0|$. It follows that $\tfrac{P}{k!}$ itself is not divisible by the prime $k+1$ and therefore cannot be zero.
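For completeness, the factorial identity used in this step can be derived by integration by parts (a standard argument, restated here as a reminder): $$\int_0^\infty x^j e^{-x}\,dx = \Big[-x^j e^{-x}\Big]_0^\infty + j\int_0^\infty x^{j-1}e^{-x}\,dx = j\int_0^\infty x^{j-1}e^{-x}\,dx,$$ and since $\int_0^\infty e^{-x}\,dx = 1 = 0!$, iterating the reduction gives $\int_0^\infty x^j e^{-x}\,dx = j!$.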
    - -Lemma 2. $\left|\tfrac{Q}{k!}\right|<1$ for sufficiently large $k$. - -
    Proof. Note that - -\begin{align} - -f_k e^{-x} &= x^{k}[(x-1)(x-2)\cdots(x-n)]^{k+1}e^{-x}\\ - -&= \left (x(x-1)\cdots(x-n) \right)^k \cdot \left ((x-1)\cdots(x-n)e^{-x}\right)\\ - -&= u(x)^k \cdot v(x) - -\end{align} - -where $u(x)$ and $v(x)$ are continuous functions of $x$ for all $x$, so are bounded on the interval $[0,n]$. That is, there are constants $G, H > 0$ such that -$$ - \left |f_k e^{-x} \right | \leq |u(x)|^k \cdot |v(x)| < G^k H \quad \text{ for } 0 \leq x \leq n. -$$ - -So each of those integrals composing $Q$ is bounded, the worst case being -$$ -\left|\int_{0}^{n}f_{k}e^{-x}dx\right| \leq \int_{0}^{n} \left |f_{k}e^{-x} \right |dx \leq \int_{0}^{n}G^k Hdx = nG^k H. -$$ - -It is now possible to bound the sum $Q$ as well: -$$ -|Q| < G^{k} \cdot nH \left (|c_1|e+|c_2|e^2+\cdots+|c_n|e^{n} \right ) = G^k \cdot M, -$$ - -where $M$ is a constant not depending on $k$. It follows that -$$ -\left| \frac{Q}{k!} \right| < M \cdot \frac{G^k}{k!} \to 0 \quad \text{ as } k \to \infty, -$$ - -finishing the proof of this lemma.
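To get a feeling for how fast the bound in Lemma 2 decays, one can tabulate $G^k/k!$ for a sample value of $G$; the constant in the following C sketch is an arbitrary placeholder, not a value derived in the proof:

#include <stdio.h>

/* Illustrate that G^k / k! -> 0 as k grows, for a sample G. */
int main(void) {
    const double G = 50.0;  /* placeholder constant, chosen for illustration */
    double term = 1.0;      /* holds G^k / k! after k iterations */
    for (int k = 1; k <= 200; k++) {
        term *= G / k;
        if (k % 50 == 0)
            printf("k = %3d   G^k/k! = %.3e\n", k, term);
    }
    return 0;
}

The printed values rise at first but then collapse toward zero once $k$ exceeds $G$, which is exactly the behaviour the lemma exploits.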
    - -Choosing a value of $k$ that satisfies both lemmas leads to a non-zero integer ($P/k!$) added to a vanishingly small quantity ($Q/k!$) equalling zero: an impossibility. It follows that the original assumption, that $e$ can satisfy a polynomial equation with integer coefficients, is also impossible; that is, $e$ is transcendental. - -A similar strategy, different from Lindemann's original approach, can be used to show that the number π is transcendental. Besides the gamma-function and some estimates as in the proof for $e$, facts about symmetric polynomials play a vital role in the proof. - -For detailed information concerning the proofs of the transcendence of π and $e$, see the references and external links. diff --git a/wiki/wikipedia/2497.txt b/wiki/wikipedia/2497.txt deleted file mode 100644 index 86d8a43df447cfd0956e5942e239da92c79a948d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2497.txt +++ /dev/null @@ -1,13 +0,0 @@ -In universal algebra and mathematical logic, a term algebra is a freely generated algebraic structure over a given signature. For example, in a signature consisting of a single binary operation, the term algebra over a set X of variables is exactly the free magma generated by X. Other synonyms for the notion include absolutely free algebra and anarchic algebra. - -From a category theory perspective, a term algebra is the initial object for the category of all X-generated algebras of the same signature, and this object, unique up to isomorphism, is called an initial algebra; it generates by homomorphic projection all algebras in the category. - -A similar notion is that of a Herbrand universe in logic, usually used under this name in logic programming, which is (absolutely freely) defined starting from the set of constants and function symbols in a set of clauses. That is, the Herbrand universe consists of all ground terms: terms that have no variables in them. - -An atomic formula or atom is commonly defined as a predicate applied to a tuple of terms; a ground atom is then a predicate in which only ground terms appear. The Herbrand base is the set of all ground atoms that can be formed from predicate symbols in the original set of clauses and terms in its Herbrand universe. These two concepts are named after Jacques Herbrand. - -Term algebras also play a role in the semantics of abstract data types, where an abstract data type declaration provides the signature of a multi-sorted algebraic structure and the term algebra is a concrete model of the abstract declaration. - -Term algebras can be shown decidable using quantifier elimination. The complexity of the decision problem is in NONELEMENTARY. - -The signature σ of a language is a triple consisting of the alphabet of constants O, function symbols F, and predicates P. The Herbrand base of a signature σ consists of all ground atoms of σ: of all formulas of the form R(t1, …, tn), where t1, …, tn are terms containing no variables (i.e. elements of the Herbrand universe) and R is an n-ary relation symbol (i.e. predicate). In the case of logic with equality, it also contains all equations of the form t1 = t2, where t1 and t2 contain no variables.
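As an informal illustration of ground terms, the following C sketch (added here; the one-constant, one-function signature {0, s} is a hypothetical choice, not taken from the article) prints the first few elements of the Herbrand universe over that signature:

#include <stdio.h>

/* Print the first few ground terms over the signature {0, s(.)}:
   0, s(0), s(s(0)), ...  Each term nests the constant 0 inside
   `depth` applications of the unary function symbol s. */
static void print_term(int depth) {
    for (int i = 0; i < depth; i++) printf("s(");
    printf("0");
    for (int i = 0; i < depth; i++) printf(")");
}

int main(void) {
    for (int depth = 0; depth < 5; depth++) {
        print_term(depth);
        printf("\n");
    }
    return 0;
}

Because this signature has a single constant and a single unary function symbol, the Herbrand universe is a simple linear sequence; richer signatures give a tree-shaped universe enumerated in the same spirit.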
diff --git a/wiki/wikipedia/2498.txt b/wiki/wikipedia/2498.txt deleted file mode 100644 index 2725336a56e422d980a8dacf5dcb9a904a4802fd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2498.txt +++ /dev/null @@ -1,28 +0,0 @@ -In mathematics, the Lie–Kolchin theorem is a theorem in the representation theory of linear algebraic groups; Lie's theorem is the analog for linear Lie algebras. - -It states that if G is a connected and solvable linear algebraic group defined over an algebraically closed field and $$\rho\colon G \to GL(V)$$ a representation on a nonzero finite-dimensional vector space V, then there is a one-dimensional linear subspace L of V such that $$\rho(G)(L) = L.$$ That is, ρ(G) has an invariant line L, on which G therefore acts through a one-dimensional representation. This is equivalent to the statement that V contains a nonzero vector v that is a common (simultaneous) eigenvector for all $ \rho(g), g \in G $. - -It follows directly that every irreducible finite-dimensional representation of a connected and solvable linear algebraic group G has dimension one. In fact, this is another way to state the Lie–Kolchin theorem. - -The result for Lie algebras was proved by Sophus Lie, and the result for algebraic groups was proved by Ellis Kolchin (1948). - -The Borel fixed point theorem generalizes the Lie–Kolchin theorem. - -Sometimes the theorem is also referred to as the Lie–Kolchin triangularization theorem because by induction it implies that with respect to a suitable basis of V the image $\rho(G)$ has a triangular shape; in other words, the image group $\rho(G)$ is conjugate in GL(n,K) (where n = dim V) to a subgroup of the group T of upper triangular matrices, the standard Borel subgroup of GL(n,K): the image is simultaneously triangularizable. - -The theorem applies in particular to a Borel subgroup of a semisimple linear algebraic group G. - -If the field K is not algebraically closed, the theorem can fail. The standard unit circle, viewed as the set of complex numbers $ \{ x+iy \in \mathbb{C} \mid x^2+y^2=1 \} $ of absolute value one, is a one-dimensional commutative (and therefore solvable) linear algebraic group over the real numbers which has a two-dimensional representation into the special orthogonal group SO(2) without an invariant (real) line. Here the image $ \rho(z)$ of $ z=x+iy $ is the orthogonal matrix $$\begin{pmatrix} x & y \\ -y & x \end{pmatrix}.$$ diff --git a/wiki/wikipedia/2499.txt b/wiki/wikipedia/2499.txt deleted file mode 100644 index 0c699e6f6f1353cc4f3682a3e88638a833b1d65b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2499.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, Oka's lemma, proved by Kiyoshi Oka, states that in a domain of holomorphy in $\Complex^n$, the function $-\log d(z)$ is plurisubharmonic, where $d$ is the distance to the boundary. This property shows that the domain is pseudoconvex. Historically, this lemma was first shown for Hartogs domains in the case of two variables; moreover, Oka's lemma is the converse of the Levi problem (for unramified Riemann domains over $\Complex^n$). For this reason, Oka himself called the Levi problem the "problème inverse de Hartogs", and the Levi problem is occasionally called Hartogs' Inverse Problem.
diff --git a/wiki/wikipedia/25.txt b/wiki/wikipedia/25.txt deleted file mode 100644 index 5390f46764f6afff2be32bec1cc1af2b68da3257..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/25.txt +++ /dev/null @@ -1,9 +0,0 @@ -In graph theory, the rectilinear minimum spanning tree (RMST) of a set of n points in the plane (or more generally, in $\mathbb{R}^d$) is a minimum spanning tree of that set, where the weight of the edge between each pair of points is the rectilinear distance between those two points. - -By explicitly constructing the complete graph on n vertices, which has n(n-1)/2 edges, a rectilinear minimum spanning tree can be found using existing algorithms for finding a minimum spanning tree. In particular, using Prim's algorithm with an adjacency matrix yields time complexity $O(n^2)$. - -In the planar case, more efficient algorithms exist. They are based on the idea that connections may only happen with the nearest neighbour of a point in each octant, that is, each of the eight regions of the plane delimited by the coordinate axes through this point and their bisectors. - -The resulting graph has only a linear number of edges and can be constructed in $O(n \log n)$ time using a divide and conquer algorithm or a sweep line algorithm. - -The problem commonly arises in physical design of electronic circuits. In modern high-density integrated circuits, wire routing is performed with wires that consist of segments running horizontally in one layer of metal and vertically in another metal layer. As a result, the wire length between two points is naturally measured with rectilinear distance. Although the routing of a whole net with multiple nodes is better represented by the rectilinear Steiner tree, the RMST provides a reasonable approximation and wire length estimate. diff --git a/wiki/wikipedia/250.txt b/wiki/wikipedia/250.txt deleted file mode 100644 index 0bfa111c7e33ba74012de5c31644171082c7784d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/250.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, a simple subcubic graph (SSCG) is a finite simple graph in which each vertex has degree at most three. Suppose we have a sequence of simple subcubic graphs $G_1, G_2, \ldots$ such that each graph $G_i$ has at most $i + k$ vertices (for some integer k) and for no $i < j$ is $G_i$ homeomorphically embeddable into (i.e. is a graph minor of) $G_j$. - -The Robertson–Seymour theorem proves that subcubic graphs (simple or not) are well-founded by homeomorphic embeddability, implying such a sequence cannot be infinite. So, for each value of k, there is a sequence with maximal length. The function SSCG(k) denotes that length for simple subcubic graphs. The function SCG(k) denotes that length for (general) subcubic graphs. - -The SCG sequence begins SCG(0) = 6, but then explodes to a value equivalent to $f_{\varepsilon_2 \cdot 2}$ in the fast-growing hierarchy. - -The SSCG sequence begins SSCG(0) = 2, SSCG(1) = 5, but then grows rapidly. SSCG(2) = $3 \times 2^{3 \times 2^{95}} - 8 \approx 3.241704 \cdot 10^{35775080127201286522908640066}$ and its decimal expansion ends in ...11352349133049430008. - -SSCG(3) is much larger than both TREE(3) and $\text{TREE}^{\text{TREE}(3)}(3)$. Adam P. Goucher claims there is no qualitative difference between the asymptotic growth rates of SSCG and SCG. He writes "It's clear that SCG(n) ≥ SSCG(n), but I can also prove SSCG(4n + 3) ≥ SCG(n)."
diff --git a/wiki/wikipedia/2500.txt b/wiki/wikipedia/2500.txt deleted file mode 100644 index 8da9ee2855dc93e32bca007d5ece8cf48ddc0174..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2500.txt +++ /dev/null @@ -1,28 +0,0 @@ -In field theory, the primitive element theorem is a result characterizing the finite degree field extensions that can be generated by a single element. Such a generating element is called a primitive element of the field extension, and the extension is called a simple extension in this case. The theorem states that a finite extension is simple if and only if there are only finitely many intermediate fields. An older result, also often called "primitive element theorem", states that every finite separable extension is simple; it can be seen as a consequence of the former theorem. These theorems imply in particular that all algebraic number fields over the rational numbers, and all extensions in which both fields are finite, are simple. - -Let $E/F$ be a field extension. An element $\alpha\in E$ is a primitive element for $E/F$ if $E=F(\alpha),$ i.e. if every element of $E$ can be written as a rational function in $\alpha$ with coefficients in $F$. If there exists such a primitive element, then $E/F$ is referred to as a simple extension. - -If the field extension $E/F$ has primitive element $\alpha$ and is of finite degree $n = [E:F]$, then every element x of E can be written uniquely in the form $$x=f_{n-1}{\alpha}^{n-1}+\cdots+f_1{\alpha}+f_0,$$ where $f_i\in F$ for all i. That is, the set $$\{1,\alpha,\ldots,{\alpha}^{n-1}\}$$ is a basis for E as a vector space over F. - -If one adjoins to the rational numbers $F = \mathbb{Q}$ the two irrational numbers $\sqrt{2}$ and $\sqrt{3}$ to get the extension field $E=\mathbb{Q}(\sqrt{2},\sqrt{3})$ of degree 4, one can show this extension is simple, meaning $E=\mathbb{Q}(\alpha)$ for a single $\alpha\in E$. Taking $\alpha = \sqrt{2} + \sqrt{3}$, the powers $1, \alpha, \alpha^2, \alpha^3$ can be expanded as linear combinations of $1$, $\sqrt{2}$, $\sqrt{3}$, $\sqrt{6}$ with integer coefficients. One can solve this system of linear equations for $\sqrt{2}$ and $\sqrt{3}$ over $\mathbb{Q}(\alpha)$, to obtain $\sqrt{2} = \tfrac12(\alpha^3-9\alpha)$ and $\sqrt{3} = -\tfrac12(\alpha^3-11\alpha)$. This shows α is indeed a primitive element: $$\mathbb{Q}(\sqrt 2, \sqrt 3)=\mathbb{Q}(\sqrt2 + \sqrt3).$$ - -The classical primitive element theorem states: - -Every separable field extension of finite degree is simple. - -This theorem applies to algebraic number fields, i.e. finite extensions of the rational numbers Q, since Q has characteristic 0 and therefore every finite extension over Q is separable. - -The more general primitive element theorem, stated in terms of intermediate fields, is due to Ernst Steinitz (1910); the classical result can be proved using an argument of Joseph-Louis Lagrange from 1771, which Galois certainly knew. It is likely that Lagrange had already been aware of the primitive element theorem for splitting fields. Steinitz called the "classical" one Theorem of the primitive elements and the other one Theorem of the intermediate fields. Emil Artin reformulated Galois theory in the 1930s without the use of the primitive element theorems.
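The expressions for $\sqrt{2}$ and $\sqrt{3}$ in terms of $\alpha$ can be checked numerically; the following C sketch (an added illustration, not part of the original article) compares both sides in floating point:

#include <stdio.h>
#include <math.h>

/* Check that sqrt(2) = (a^3 - 9a)/2 and sqrt(3) = -(a^3 - 11a)/2
   for the primitive element a = sqrt(2) + sqrt(3). */
int main(void) {
    double a = sqrt(2.0) + sqrt(3.0);
    double lhs1 = sqrt(2.0), rhs1 = (a*a*a - 9.0*a) / 2.0;
    double lhs2 = sqrt(3.0), rhs2 = -(a*a*a - 11.0*a) / 2.0;
    printf("sqrt(2): %.15f vs %.15f\n", lhs1, rhs1);
    printf("sqrt(3): %.15f vs %.15f\n", lhs2, rhs2);
    return 0;
}

Both pairs agree to machine precision, confirming that $\sqrt{2}$ and $\sqrt{3}$, and hence all of $E$, lie in $\mathbb{Q}(\alpha)$.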
diff --git a/wiki/wikipedia/2501.txt b/wiki/wikipedia/2501.txt deleted file mode 100644 index 928498df524ad865530a4480b6a43a30cddbc0cf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2501.txt +++ /dev/null @@ -1,5 +0,0 @@ -In functional analysis, a branch of mathematics, the Ryll-Nardzewski fixed-point theorem states that if $E$ is a normed vector space and $K$ is a nonempty convex subset of $E$ that is compact under the weak topology, then every group (or equivalently: every semigroup) of affine isometries of $K$ has at least one fixed point. (Here, a fixed point of a set of maps is a point that is fixed by each map in the set.) - -This theorem was announced by Czesław Ryll-Nardzewski. Later Namioka and Asplund gave a proof based on a different approach. Ryll-Nardzewski himself gave a complete proof in the original spirit. - -The Ryll-Nardzewski theorem yields the existence of a Haar measure on compact groups. diff --git a/wiki/wikipedia/2502.txt b/wiki/wikipedia/2502.txt deleted file mode 100644 index b3982c9af3a5ec8f1660226808650e497e8422ec..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2502.txt +++ /dev/null @@ -1,147 +0,0 @@ -In computer science, the test-and-set instruction is an instruction used to write (set) 1 to a memory location and return its old value as a single atomic (i.e., non-interruptible) operation. The caller can then "test" the result to see if the state was changed by the call. If multiple processes may access the same memory location, and if a process is currently performing a test-and-set, no other process may begin another test-and-set until the first process's test-and-set is finished. A CPU may use a test-and-set instruction offered by another electronic component, such as dual-port RAM; a CPU itself may also offer a test-and-set instruction. - -A lock can be built using an atomic test-and-set instruction as follows: - -function Lock(boolean *lock) { - -while (test_and_set(lock) == 1); - -} - -This code assumes that the memory location was initialized to 0 at some point prior to the first test-and-set. The calling process obtains the lock if the old value was 0; otherwise the while-loop spins waiting to acquire the lock. This is called a spinlock. At any point, the holder of the lock can simply set the memory location back to 0 to release the lock for acquisition by another; this does not require any special handling as the holder "owns" this memory location. "Test and test-and-set" is another example. - -Maurice Herlihy (1991) proved that test-and-set (1-bit comparand) has a finite consensus number and can solve the wait-free consensus problem for at most two concurrent processes. In contrast, compare-and-swap (32-bit comparand) offers a more general solution to this problem, and in some implementations compare-double-and-swap (64-bit comparand) is also available for extended utility. - -DPRAM test-and-set instructions can work in many ways. Here are two variations, both of which describe a DPRAM which provides exactly 2 ports, allowing 2 separate electronic components (such as 2 CPUs) access to every memory location on the DPRAM. - -When CPU 1 issues a test-and-set instruction, the DPRAM first makes an "internal note" of this by storing the address of the memory location in a special place. If at this point CPU 2 happens to issue a test-and-set instruction for the same memory location, the DPRAM first checks its "internal note", recognizes the situation, and issues a BUSY interrupt, which tells CPU 2 that it must wait and retry.
This is an implementation of a busy waiting or spinlock using the interrupt mechanism. Since all this happens at hardware speeds, CPU 2's wait to get out of the spin-lock is very short. - -Whether or not CPU 2 was trying to access the memory location, the DPRAM performs the test given by CPU 1. If the test succeeds, the DPRAM sets the memory location to the value given by CPU 1. Then the DPRAM wipes out its "internal note" that CPU 1 was writing there. At this point, CPU 2 could issue a test-and-set, which would succeed. - -CPU 1 issues a test-and-set instruction to write to "memory location A". The DPRAM does not immediately store the value in memory location A, but instead simultaneously moves the current value to a special register, while setting the contents of memory location A to a special "flag value". If at this point, CPU 2 issues a test-and-set to memory location A, the DPRAM detects the special flag value, and as in Variation 1, issues a BUSY interrupt. - -Whether or not CPU 2 was trying to access the memory location, the DPRAM now performs CPU 1's test. If the test succeeds, the DPRAM sets memory location A to the value specified by CPU 1. If the test fails, the DPRAM copies the value back from the special register to memory location A. Either operation wipes out the special flag value. If CPU 2 now issues a test-and-set, it will succeed. - -Some instruction sets have an atomic test-and-set machine language instruction. Examples include x86 and IBM System/360 and its successors (including z/Architecture). - -Those that do not can still implement an atomic test-and-set using a read-modify-write or compare-and-swap instruction. - -The test and set instruction, when used with boolean values, uses logic like that shown in the following function, except that the function must execute atomically. That is, no other process must be able to interrupt the function mid-execution, thereby seeing a state that only exists while the function executes. That requires hardware support; it cannot be implemented as shown. Nevertheless, the code shown helps to explain the behaviour of test-and-set. NOTE: In this example, 'lock' is assumed to be passed by reference (or by name) but the assignment to 'initial' creates a new value (not just copying a reference). - -function TestAndSet(boolean_ref lock) { - -boolean initial = lock; - -lock = true; - -return initial; - -} - -Not only is the code shown not atomic, in the sense of the test-and-set instruction, it also differs from the descriptions of DPRAM hardware test-and-set above. Here, the value being set and the test are fixed and invariant, and the value is updated regardless of the outcome of the test, whereas for the DPRAM test-and-set, the memory is set only when the test succeeds, and the value to set and the test condition are specified by the CPU. Here, the value to set can only be 1, but if 0 and 1 are considered the only valid values for the memory location, and "value is nonzero" is the only allowed test, then this equates to the case described for DPRAM hardware (or, more specifically, the DPRAM case reduces to this under these constraints). From that viewpoint, this can, correctly, be called "test-and-set" in the full, conventional sense of that term. The essential point to note is the general intent and principle of test-and-set: a value is both tested and set in one atomic operation such that no other program thread or process can change the target memory location after it is tested but before it is set. 
(This is because the location must only be set if it currently has a certain value, not if it had that value sometime earlier.) - -In the C programming language, the implementation would be like: - - - -#define LOCKED 1 - -int test_and_set(int* lockPtr) { - -int oldValue; - -// -- Start of atomic segment -- - -// This should be interpreted as pseudocode for illustrative purposes only. - -// Traditional compilation of this code will not guarantee atomicity, the - -// use of shared memory (i.e., non-cached values), protection from compiler - -// optimizations, or other required properties. - -oldValue = *lockPtr; - -*lockPtr = LOCKED; - -// -- End of atomic segment -- - -return oldValue; - -} - - - -The code also shows that there are really two operations: an atomic read-modify-write and a test. Only the read-modify-write needs to be atomic. (This is true because delaying the value comparison by any amount of time will not change the result of the test once the value to test has been obtained. Once the code writes the initial value, the result of the test has been established, even if it has not been computed yet — e.g., by the == operator.) - -One way to implement mutual exclusion is by using a test-and-set based lock as follows: - - - -volatile int lock = 0; - -void critical() { - -while (test_and_set(&lock) == 1); - -critical section // only one process can be in this section at a time - -lock = 0; // release lock when finished with the critical section - -} - - - -The lock variable is a shared variable i.e. it can be accessed by all processors/threads. Note the volatile keyword. In absence of volatile, the compiler and/or the CPU(s) may optimize access to lock and/or use cached values, thus rendering the above code erroneous. Conversely, and unfortunately, the presence of volatile does not guarantee that reads and writes are committed to memory. Some compilers issue memory barriers to ensure that operations are committed to memory, but since the semantics of volatile in C/C++ is quite vague, not all compilers will do that. Consult your compiler's documentation to determine if it does. - -This function can be called by multiple processes, but it is guaranteed that only one process will be in the critical section at a time. The rest of the processes will keep spinning until they get the lock. It is possible that a process is never granted the lock. In such a case it will loop endlessly. This is a drawback of this implementation as it doesn't ensure fairness. These issues are further elaborated in the performance section. - - - -enter_region: ; A "jump to" tag; function entry point. - -tsl reg, flag ; Test and Set Lock; flag is the - -; shared variable; it is copied - -; into the register reg and flag - -; then atomically set to 1. - -cmp reg, #0 ; Was flag zero on entry_region? - -jnz enter_region ; Jump to enter_region if - -; reg is non-zero; i.e., - -; flag was non-zero on entry. - -ret ; Exit; i.e., flag was zero on - -; entry. If we get here, tsl - -; will have set it non-zero; thus, - -; we have claimed the resource - -; associated with flag. - -leave_region: - -move flag, #0 ; store 0 in flag - -ret ; return to caller - - - -Here tsl is an atomic instruction and flag is the lock variable. The process doesn't return unless it acquires the lock. - -The four major evaluation metrics for locks in general are uncontended lock-acquisition latency, bus traffic, fairness, and storage. - -Test-and-set scores low on two of them, namely, high bus traffic and unfairness. 
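As a modern supplement to the C implementation above (not part of the original text), the same spinlock pattern can be written portably with the C11 atomics header, whose atomic_flag type guarantees an atomic test-and-set without relying on volatile:

#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void acquire(void) {
    /* atomic_flag_test_and_set returns the previous state of the flag,
       so the loop spins while the lock is already held. */
    while (atomic_flag_test_and_set(&lock_flag))
        ;  /* busy-wait */
}

void release(void) {
    atomic_flag_clear(&lock_flag);
}

The default sequentially consistent ordering of these operations also removes the need for explicit memory barriers in this simple case, addressing the volatile caveats discussed above.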
- -When processor P1 has obtained a lock and processor P2 is also waiting for the lock, P2 will keep incurring bus transactions in attempts to acquire the lock. When a processor has obtained a lock, all other processors which also wish to obtain the same lock keep trying to obtain the lock by initiating bus transactions repeatedly until they get hold of the lock. This increases the bus traffic requirement of test-and-set significantly. This slows down all other traffic from cache and coherence misses. It slows down the overall system, since the bus traffic is saturated by failed lock acquisition attempts. Test-and-test-and-set is an improvement over TSL since it does not initiate lock acquisition requests continuously. - -When we consider fairness, we consider whether a processor gets a fair chance of acquiring the lock when it is set free. In an extreme situation the processor might starve, i.e., it might not be able to acquire the lock for an extended period of time even though it has become free during that time. - -Storage overhead for TSL is next to nothing since only one lock is required. Uncontended latency is also low since only one atomic instruction and branch are needed. diff --git a/wiki/wikipedia/2503.txt b/wiki/wikipedia/2503.txt deleted file mode 100644 index 51ffdd6cba307a8cd3874d6aa00d6edc42df253a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2503.txt +++ /dev/null @@ -1,7 +0,0 @@ -An error account is a type of account used for storing compensation for errors in trading, such as a transaction that is not posted in a timely manner because of inconsistencies, such as an incorrect account or routing number or the wrong name on the account, producing a claim that needs to be resolved as soon as possible so payments can be made. - -When many hundreds or thousands of transactions are being done each day, and whenever there is human input involved, error accounts are necessary to keep the audit trail intact. Error accounts also play a role in improving customer service. GAAP recommends daily or weekly monitoring of error accounts depending on volume and transaction size. It is typically up to the company or applicable government department's accounting department to monitor the error accounts that it has in place. - -In 1994, Nick Leeson used a poorly monitored error account at Barings Bank in an attempt to cover up evidence of his trading losses and place ever larger unauthorized trades to win back the money. In doing so, he lost over £800 million and bankrupted his employer. - -Error accounts can be implemented in manual accounting as well, but this is much less common in the developed world since personal computers became pervasive. diff --git a/wiki/wikipedia/2504.txt b/wiki/wikipedia/2504.txt deleted file mode 100644 index 5ac989054c7a63fff5dadea3cc46011b88810dad..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2504.txt +++ /dev/null @@ -1,13 +0,0 @@ -In physics and mathematics, an ansatz (German, meaning "initial placement of a tool at a work piece"; plural Ansätze) is an educated guess or an additional assumption made to help solve a problem, and which may later be verified to be part of the solution by its results. - -An ansatz is the establishment of the starting equation(s), the theorem(s), or the value(s) describing a mathematical or physical problem or solution.
It typically provides an initial estimate or framework for the solution of a mathematical problem, and can also take into consideration the boundary conditions (in fact, an ansatz is sometimes thought of as a "trial answer" and an important technique in solving differential equations). - -After an ansatz, which constitutes nothing more than an assumption, has been established, the equations are solved more precisely for the general function of interest, which then constitutes a confirmation of the assumption. In essence, an ansatz makes assumptions about the form of the solution to a problem so as to make the solution easier to find. - -It has been demonstrated that machine learning techniques can be applied to provide initial estimates similar to those invented by humans and to discover new ones in case no ansatz is available. - -Given a set of experimental data that looks to be clustered about a line, a linear ansatz could be made to find the parameters of the line by a least squares curve fit. Variational approximation methods use ansätze and then fit the parameters. - -Another example could be the mass, energy, and entropy balance equations that, considered simultaneously for purposes of the elementary operations of linear algebra, are the ansatz to most basic problems of thermodynamics. - -Another example of an ansatz is to suppose the solution of a homogeneous linear differential equation to take an exponential form, or a power form in the case of a difference equation. More generally, one can guess a particular solution of a system of equations, and test such an ansatz by directly substituting the solution into the system of equations. In many cases, the assumed form of the solution is general enough that it can represent arbitrary functions, in such a way that the set of solutions found this way is a full set of all the solutions. diff --git a/wiki/wikipedia/2505.txt b/wiki/wikipedia/2505.txt deleted file mode 100644 index 3b34b78a054024322e27f0e57e3a50cc6f6f970f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2505.txt +++ /dev/null @@ -1,25 +0,0 @@ -The Berry paradox is a self-referential paradox arising from an expression like "The smallest positive integer not definable in under sixty letters" (a phrase with fifty-seven letters). - -Bertrand Russell, the first to discuss the paradox in print, attributed it to G. G. Berry (1867–1928), a junior librarian at Oxford's Bodleian Library. Russell called Berry "the only person in Oxford who understood mathematical logic." - -Consider the expression: - -"The smallest positive integer not definable in under sixty letters." - -Since there are only twenty-six letters in the English alphabet, there are finitely many phrases of under sixty letters, and hence finitely many positive integers that are defined by phrases of under sixty letters. Since there are infinitely many positive integers, this means that there are positive integers that cannot be defined by phrases of under sixty letters. If there are positive integers that satisfy a given property, then there is a smallest positive integer that satisfies that property; therefore, there is a smallest positive integer satisfying the property "not definable in under sixty letters". This is the integer to which the above expression refers. But the above expression is only fifty-seven letters long; therefore it is definable in under sixty letters, and is not the smallest positive integer not definable in under sixty letters, and is not defined by this expression.
This is a paradox: there must be an integer defined by this expression, but since the expression is self-contradictory (any integer it defines is definable in under sixty letters), there cannot be any integer defined by it. - -Perhaps another helpful analogy to Berry's Paradox would be the phrase "indescribable feeling". If the feeling is indeed indescribable, then no description of the feeling would be true. But if the word "indescribable" communicates something about the feeling, then it may be considered a description: this is self-contradictory. - -Mathematician and computer scientist Gregory J. Chaitin in The Unknowable (1999) adds this comment: "Well, the Mexican mathematical historian Alejandro Garcidiego has taken the trouble to find that letter [of Berry's from which Russell penned his remarks], and it is rather a different paradox. Berry’s letter actually talks about the first ordinal that can’t be named in a finite number of words. According to Cantor’s theory such an ordinal must exist, but we’ve just named it in a finite number of words, which is a contradiction." - -The Berry paradox as formulated above arises because of systematic ambiguity in the word "definable". In other formulations of the Berry paradox, such as one that instead reads: "...not nameable in less..." the term "nameable" is also one that has this systematic ambiguity. Terms of this kind give rise to vicious circle fallacies. Other terms with this type of ambiguity are: satisfiable, true, false, function, property, class, relation, cardinal, and ordinal. To resolve one of these paradoxes means to pinpoint exactly where our use of language went wrong and to provide restrictions on the use of language which may avoid them. - -This family of paradoxes can be resolved by incorporating stratifications of meaning in language. Terms with systematic ambiguity may be written with subscripts denoting that one level of meaning is considered a higher priority than another in their interpretation. "The number not nameable$_0$ in less than eleven words" may be nameable$_1$ in less than eleven words under this scheme. - -Using programs or proofs of bounded lengths, it is possible to construct an analogue of the Berry expression in a formal mathematical language, as has been done by Gregory Chaitin. Though the formal analogue does not lead to a logical contradiction, it does prove certain impossibility results. - -George Boolos (1989) built on a formalized version of Berry's paradox to prove Gödel's incompleteness theorem in a new and much simpler way. The basic idea of his proof is that a proposition that holds of x if and only if x = n for some natural number n can be called a definition for n, and that the set {(n, k): n has a definition that is k symbols long} can be shown to be representable (using Gödel numbers). Then the proposition "m is the first number not definable in less than k symbols" can be formalized and shown to be a definition in the sense just stated. - -It is not possible in general to unambiguously define what is the minimal number of symbols required to describe a given string (given a specific description mechanism). In this context, the terms string and number may be used interchangeably, since a number is actually a string of symbols, e.g. an English word (like the word "eleven" used in the paradox) while, on the other hand, it is possible to refer to any word with a number, e.g. by the number of its position in a given dictionary or by suitable encoding.
Some long strings can be described exactly using fewer symbols than those required by their full representation, as is often achieved using data compression. The complexity of a given string is then defined as the minimal length that a description requires in order to (unambiguously) refer to the full representation of that string. - -The Kolmogorov complexity is defined using formal languages or Turing machines, which avoids ambiguities about which string results from a given description. It can be proven that the Kolmogorov complexity is not computable. The proof by contradiction shows that if it were possible to compute the Kolmogorov complexity, then it would also be possible to systematically generate paradoxes similar to this one, i.e. descriptions shorter than what the complexity of the described string implies. That is to say, the definition of the Berry number is paradoxical because it is not actually possible to compute how many words are required to define a number, and we know that such computation is not possible because of the paradox. diff --git a/wiki/wikipedia/2506.txt b/wiki/wikipedia/2506.txt deleted file mode 100644 index 2e68b1f881c5914c9d669f07634bea7cf1277327..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2506.txt +++ /dev/null @@ -1,9 +0,0 @@ -The analogy of the divided line is presented by the Greek philosopher Plato in the Republic (509d–511e). It is written as a dialogue between Glaucon and Socrates, in which the latter further elaborates upon the immediately preceding Analogy of the Sun at the former's request. Socrates asks Glaucon to not only envision this unequally bisected line but to imagine further bisecting each of the two segments. Socrates explains that the four resulting segments represent four separate 'affections' (παθήματα) of the psyche. The lower two sections are said to represent the visible while the higher two are said to represent the intelligible. These affections are described in succession as corresponding to increasing levels of reality and truth from conjecture (εἰκασία) to belief (πίστις) to thought (διάνοια) and finally to understanding (νόησις). Furthermore, this analogy not only elaborates a theory of the psyche but also presents metaphysical and epistemological views. - -In The Republic (509d–510a), Plato describes the divided line this way: - -Thus AB represents shadows and reflections of physical things, and BC the physical things themselves. These correspond to two kinds of knowledge, the illusion (εἰκασία eikasia) of our ordinary, everyday experience, and belief (πίστις pistis) about discrete physical objects which cast their shadows. In the Timaeus, the category of illusion includes all the "opinions of which the minds of ordinary people are full," while the natural sciences are included in the category of belief. The divided line also became Aristotle's metaphysical model. The third level might be a Pythagorean level of mathematics. The fourth level is Plato's ideal Parmenidean reality, the world of highest level Ideas. - -Plato holds a very strict notion of knowledge. For example, he does not accept expertise about a subject, nor direct perception (see Theaetetus), nor true belief about the physical world (the Meno) as knowledge. It is not enough for the philosopher to understand the Ideas (Forms); he must also understand the relation of Ideas to all four levels of the structure to be able to know anything at all.
For this reason, in most of the earlier Socratic dialogues, Socrates denies knowledge both to himself and others. - -For the first level, "the world of becoming and passing away," Plato expressly denies the possibility of knowledge. The world of constant change never stays the same; therefore, properties of objects must refer to different Ideas at different times. Note that, for knowledge to be possible (which Plato believed it was), the other three levels must be unchanging. The third and fourth level, mathematics and Ideas, are already eternal and unchanging. However, to ensure that the second level, the objective, physical world, is also unchanging, Plato, in Book 4 of the Republic, introduces empirically derived axiomatic restrictions that prohibit both motion and shifting perspectives. diff --git a/wiki/wikipedia/2507.txt b/wiki/wikipedia/2507.txt deleted file mode 100644 index d813f3f9945eb418c7719c7e361d5da550bdf8b9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2507.txt +++ /dev/null @@ -1,101 +0,0 @@ -Bézout's theorem is a statement in algebraic geometry concerning the number of common zeros of n polynomials in n indeterminates. In its original form the theorem states that in general the number of common zeros equals the product of the degrees of the polynomials. It is named after Étienne Bézout. - -In some elementary texts, Bézout's theorem refers only to the case of two variables, and asserts that, if two plane algebraic curves of degrees $d_1$ and $d_2$ have no component in common, they have $d_1d_2$ intersection points, counted with their multiplicity, and including points at infinity and points with complex coordinates. - -In its modern formulation, the theorem states that, if N is the number of common points over an algebraically closed field of n projective hypersurfaces defined by homogeneous polynomials in n + 1 indeterminates, then N is either infinite, or equals the product of the degrees of the polynomials. Moreover, the finite case occurs almost always. - -In the case of two variables and in the case of affine hypersurfaces, if multiplicities and points at infinity are not counted, this theorem provides only an upper bound on the number of points, which is almost always reached. This bound is often referred to as the Bézout bound. - -Bézout's theorem is fundamental in computer algebra and effective algebraic geometry, by showing that most problems have a computational complexity that is at least exponential in the number of variables. It follows that in these areas, the best complexity that can be hoped for will occur with algorithms that have a complexity which is polynomial in the Bézout bound. - -In the case of plane curves, Bézout's theorem was essentially stated by Isaac Newton in his proof of lemma 28 of volume 1 of his Principia in 1687, where he claims that two curves have a number of intersection points given by the product of their degrees. - -The general theorem was later published in 1779 in Étienne Bézout's Théorie générale des équations algébriques. He supposed the equations to be "complete", which in modern terminology would translate to generic. Since with generic polynomials there are no points at infinity, and all multiplicities equal one, Bézout's formulation is correct, although his proof does not follow the modern requirements of rigor. - -This, and the fact that the concept of intersection multiplicity was outside the knowledge of his time, led to a sentiment expressed by some authors that his proof was neither correct nor the first proof to be given.
- -The proof of the statement that includes multiplicities was not possible before the 20th century, when abstract algebra and algebraic geometry were introduced. - -Suppose that X and Y are two plane projective curves defined over a field F that do not have a common component (this condition means that X and Y are defined by polynomials which are not multiples of a common non-constant polynomial; in particular, it holds for a pair of "generic" curves). Then the total number of intersection points of X and Y with coordinates in an algebraically closed field E which contains F, counted with their multiplicities, is equal to the product of the degrees of X and Y. - -The generalization in higher dimension may be stated as: - -Let n projective hypersurfaces be given in a projective space of dimension n over an algebraically closed field, which are defined by n homogeneous polynomials in n + 1 variables, of degrees $d_1, \ldots,d_n.$ Then either the number of intersection points is infinite, or the number of intersection points, counted with multiplicity, is equal to the product $d_1 \cdots d_n.$ If the hypersurfaces are irreducible and in relative general position, then there are $d_1 \cdots d_n$ intersection points, all with multiplicity 1. - -There are various proofs of this theorem, which are either expressed in purely algebraic terms or use the language of algebraic geometry. Three algebraic proofs are sketched below. - -Bézout's theorem has been generalized as the so-called multi-homogeneous Bézout theorem. - -The equation of a line in a Euclidean plane is linear, that is, it equates to zero a polynomial of degree one. So, the Bézout bound for two lines is 1, meaning that two lines either intersect at a single point, or do not intersect. In the latter case, the lines are parallel and meet at a point at infinity. - -One can verify this with equations. The equation of a first line can be written in slope-intercept form $y=sx+m$ or, in projective coordinates, $y=sx+mt$ (if the line is vertical, one may exchange x and y). If the equation of a second line is (in projective coordinates) $ax+by+ct=0,$ by substituting $sx+mt$ for y in it, one gets $(a+bs)x + (c+bm)t=0.$ If $a+bs\ne 0,$ one gets the x-coordinate of the intersection point by solving the latter equation in x and putting t = 1. - -If $a+bs= 0,$ that is $s=-a/b,$ the two lines are parallel, as they have the same slope. If $m\ne -c/b,$ they are distinct, and the substituted equation gives t = 0. This gives the point at infinity of projective coordinates (1, s, 0). - -As above, one may write the equation of the line in projective coordinates as $y=sx+mt.$ If the curve is defined in projective coordinates by a homogeneous polynomial $p(x,y,t)$ of degree n, the substitution of y provides a homogeneous polynomial of degree n in x and t. The fundamental theorem of algebra implies that it can be factored in linear factors. Each factor gives the ratio of the x and t coordinates of an intersection point, and the multiplicity of the factor is the multiplicity of the intersection point. - -If t is viewed as the coordinate of infinity, a factor equal to t represents an intersection point at infinity. - -If at least one partial derivative of the polynomial p is not zero at an intersection point, then the tangent of the curve at this point is defined, and the intersection multiplicity is greater than one if and only if the line is tangent to the curve.
If all partial derivatives are zero, the intersection point is a singular point, and the intersection multiplicity is at least two. - -Two conic sections generally intersect in four points, some of which may coincide. To properly account for all intersection points, it may be necessary to allow complex coordinates and include the points on the infinite line in the projective plane. For example: - -*Two circles never intersect in more than two points in the plane, while Bézout's theorem predicts four. The discrepancy comes from the fact that every circle passes through the same two complex points on the line at infinity. Writing the circle $$(x-a)^2+(y-b)^2 = r^2$$ in homogeneous coordinates, we get $$(x-az)^2+(y-bz)^2 - r^2z^2 = 0,$$ from which it is clear that the two points (1 : i : 0) and (1 : –i : 0) lie on every circle. When two circles don't meet at all in the real plane, the two other intersections have non-zero imaginary parts, or if they are concentric then they meet at exactly the two points on the line at infinity with an intersection multiplicity of two. - -*Any conic should meet the line at infinity at two points according to the theorem. A hyperbola meets it at two real points corresponding to the two directions of the asymptotes. An ellipse meets it at two complex points which are conjugate to one another; in the case of a circle, these are the points (1 : i : 0) and (1 : –i : 0). A parabola meets it at only one point, but it is a point of tangency and therefore counts twice. - -*The circle $x^2 + y^2 - 1 = 0$ can meet another ellipse in fewer intersection points when at least one of the intersections has multiplicity greater than one. - -The concept of multiplicity is fundamental for Bézout's theorem, as it allows having an equality instead of a much weaker inequality. - -Intuitively, the multiplicity of a common zero of several polynomials is the number of zeros into which it can split when the coefficients are slightly changed. For example, a tangent to a curve is a line that cuts the curve at a point that splits in several points if the line is slightly moved. This number is two in general (ordinary points), but may be higher (three for inflection points, four for undulation points, etc.). This number is the "multiplicity of contact" of the tangent. - -This definition of multiplicities by deformation was sufficient until the end of the 19th century, but has several problems that led to more convenient modern definitions: Deformations are difficult to manipulate; for example, in the case of a root of a univariate polynomial, for proving that the multiplicity obtained by deformation equals the multiplicity of the corresponding linear factor of the polynomial, one has to know that the roots are continuous functions of the coefficients. Deformations cannot be used over fields of positive characteristic. Moreover, there are cases where a convenient deformation is difficult to define (as in the case of more than two plane curves having a common intersection point), and even cases where no deformation is possible. - -Presently, following Jean-Pierre Serre, a multiplicity is generally defined as the length of a local ring associated with the point where the multiplicity is considered. Most specific definitions can be shown to be special cases of Serre's definition.
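A minimal worked example of an intersection multiplicity greater than one (added here for illustration): intersect the circle $x^2+y^2=1$ with its horizontal tangent line $y=1$. Substituting gives $$x^2 + 1 - 1 = x^2 = 0,$$ so $x=0$ is a double root: the single point of tangency $(0,1)$ counts with multiplicity two, and the Bézout bound $2 \cdot 1 = 2$ is attained.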
- -In the case of Bézout's theorem, the general intersection theory can be avoided, as there are proofs (see below) that associate to each input data for the theorem a polynomial in the coefficients of the equations, which factorizes into linear factors, each corresponding to a single intersection point. So, the multiplicity of an intersection point is the multiplicity of the corresponding factor. The proof that this multiplicity equals the one that is obtained by deformation results then from the fact that the intersection points and the factored polynomial depend continuously on the roots. - -Let P and Q be two homogeneous polynomials in the indeterminates x, y, t of respective degrees p and q. Their zeros are the homogeneous coordinates of two projective curves. Thus the homogeneous coordinates of their intersection points are the common zeros of P and Q. - -By collecting together the powers of one indeterminate, say y, one gets univariate polynomials whose coefficients are homogeneous polynomials in x and t. - -For technical reasons, one must change coordinates in order that the degrees in y of P and Q equal their total degrees (p and q), and each line passing through two intersection points does not pass through the point (0, 1, 0) (this means that no two points have the same Cartesian x-coordinate). - -The resultant R(x, t) of P and Q with respect to y is a homogeneous polynomial in x and t that has the following property: $R(\alpha,\tau)=0$ with $(\alpha, \tau)\ne (0,0)$ if and only if there exists $\beta$ such that $\alpha, \beta, \tau$ is a common zero of P and Q. The above technical condition ensures that $\beta$ is unique. The first above technical condition means that the degrees used in the definition of the resultant are p and q; this implies that the degree of R is pq. - -As R is a homogeneous polynomial in two indeterminates, the fundamental theorem of algebra implies that R is a product of pq linear polynomials. If one defines the multiplicity of a common zero of P and Q as the number of occurrences of the corresponding factor in the product, Bézout's theorem is thus proved. - -For proving that the intersection multiplicity that has just been defined equals the definition in terms of a deformation, it suffices to remark that the resultant, and thus its linear factors, are continuous functions of the coefficients of P and Q. - -Proving the equality with other definitions of intersection multiplicities relies on the technicalities of these definitions and is therefore outside the scope of this article. - -In the early 20th century, Francis Sowerby Macaulay introduced the multivariate resultant (also known as Macaulay's resultant) of n homogeneous polynomials in n indeterminates, which is a generalization of the usual resultant of two polynomials. Macaulay's resultant is a polynomial function of the coefficients of n homogeneous polynomials that is zero if and only if the polynomials have a nontrivial (that is, some component is nonzero) common zero in an algebraically closed field containing the coefficients. - -The U-resultant is a particular instance of Macaulay's resultant, introduced also by Macaulay. Given n homogeneous polynomials $f_1,\ldots,f_n$ in n + 1 indeterminates $x_0, \ldots, x_n,$ the U-resultant is the resultant of $f_1,\ldots,f_n,$ and $U_0x_0+\cdots +U_nx_n,$ where the coefficients $U_0, \ldots, U_n$ are auxiliary indeterminates.
The U-resultant is a homogeneous polynomial in $U_0, \ldots, U_n,$ whose degree is the product of the degrees of the $f_i.$ - -Although a multivariate polynomial is generally irreducible, the U-resultant can be factorized into polynomials that are linear in the $U_i$ over an algebraically closed field containing the coefficients of the $f_i.$ These linear factors correspond to the common zeros of the $f_i$ in the following way: to each common zero $(\alpha_0, \ldots, \alpha_n)$ corresponds a linear factor $(\alpha_0 U_0 + \cdots + \alpha_n U_n),$ and conversely. - -This proves Bézout's theorem, if the multiplicity of a common zero is defined as the multiplicity of the corresponding linear factor of the U-resultant. As for the preceding proof, the equality of this multiplicity with the definition by deformation results from the continuity of the U-resultant as a function of the coefficients of the $f_i.$ - -This proof of Bézout's theorem seems to be the oldest proof that satisfies the modern criteria of rigor. - -Bézout's theorem can be proved by induction on the number of polynomials, using the following theorem. - -Let V be a projective algebraic set of dimension $\delta$ and degree $d_1$, and H be a hypersurface (defined by a single polynomial) of degree $d_2$ that does not contain any irreducible component of V; under these hypotheses, the intersection of V and H has dimension $\delta-1$ and degree $d_1d_2.$ - -A (sketched) proof can be given using Hilbert series. - -Besides allowing a conceptually simple proof of Bézout's theorem, this theorem is fundamental for intersection theory, since that theory is essentially devoted to the study of intersection multiplicities when the hypotheses of the above theorem do not apply. diff --git a/wiki/wikipedia/2508.txt b/wiki/wikipedia/2508.txt deleted file mode 100644 index 562b842a6b0bc60fe57fdd621e35b3c86caec80a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2508.txt +++ /dev/null @@ -1,8 +0,0 @@ -: For other inequalities named after Wirtinger, see Wirtinger's inequality. - -In mathematics, the Wirtinger inequality for 2-forms, named after Wilhelm Wirtinger, states that on a Kähler manifold M, the exterior kth power of the symplectic form (Kähler form) ω, when evaluated on a simple (decomposable) 2k-vector ζ of unit volume, is bounded above by k!. That is, -$$ - \omega^k(\zeta) \leq k!. -$$ - -In other words, $\omega^k/k!$ is a calibration on M. An important corollary is that every complex submanifold of a Kähler manifold is volume minimizing in its homology class. diff --git a/wiki/wikipedia/2509.txt b/wiki/wikipedia/2509.txt deleted file mode 100644 index ccb5ba0245b19b69eafc3a4ad102225142c53188..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2509.txt +++ /dev/null @@ -1,212 +0,0 @@ -In mathematics, the Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. Consequently, the periodic summation of a function is completely defined by discrete samples of the original function's Fourier transform. And conversely, the periodic summation of a function's Fourier transform is completely defined by discrete samples of the original function. The Poisson summation formula was discovered by Siméon Denis Poisson and is sometimes called Poisson resummation.
- -Consider an aperiodic function $s(x)$ with Fourier transform $S(f) \triangleq \int_{-\infty}^{\infty} s(x)\ e^{-i2\pi f x} dx,$ alternatively designated by $\hat s(f)$ and $\mathcal{F}\{s\}(f).$ - -The basic Poisson summation formula is: -$$ -\sum_{n=-\infty}^\infty s(n)=\sum_{k=-\infty}^\infty S(k). \tag{Eq.1} -$$ - -Also consider periodic functions, where parameters $T>0$ and $P>0$ are in the same units as $x$: -$$ -s_{_P}(x) \triangleq \sum_{n=-\infty}^{\infty} s(x \pm nP) \quad \text{and} \quad S_{1/T}(f) \triangleq \sum_{k=-\infty}^{\infty} S(f \pm k/T). -$$ - -Then Eq.1 is the special case ($P=1,$ $x=0$) of this generalization: -$$ -s_{_P}(x) = \sum_{k=-\infty}^{\infty} \underbrace{\frac{1}{P}\cdot S\left(\frac{k}{P}\right)}_{S[k]}\ e^{i 2\pi \frac{k}{P} x }, \tag{Eq.2} -$$ - -which is a Fourier series expansion with coefficients that are samples of the function $S(f).$ Similarly: -$$ -S_{1/T}(f) = \sum_{n=-\infty}^{\infty} \underbrace{T\cdot s(nT)}_{s[n]}\ e^{-i 2\pi n Tf}, \tag{Eq.3} -$$ - -also known as the discrete-time Fourier transform. - -Derivations - -A proof may be found in either Pinsky or Zygmund. Eq.2, for instance, holds in the sense that if $s(x) \in L_1(\mathbb{R})$, then the right-hand side is the (possibly divergent) Fourier series of the left-hand side. It follows from the dominated convergence theorem that $s_{_P}(x)$ exists and is finite for almost every $x$. Furthermore it follows that $s_{_P}$ is integrable on any interval of length $P.$ So it is sufficient to show that the Fourier series coefficients of $s_{_P}(x)$ are $ \frac{1}{P} S\left(\frac{k}{P}\right).$ Proceeding from the definition of the Fourier coefficients we have: -\begin{align} -S[k]\ &\triangleq \ \frac{1}{P}\int_0^{P} s_{_P}(x)\cdot e^{-i 2\pi \frac{k}{P} x} dx\\ -&=\ \frac{1}{P}\int_0^{P} \left(\sum_{n=-\infty}^{\infty} s(x \pm nP)\right) \cdot e^{-i 2\pi\frac{k}{P} x} dx\\ -&=\ \frac{1}{P} \sum_{n=-\infty}^{\infty} \int_0^{P} s(x \pm nP)\cdot e^{-i 2\pi\frac{k}{P} x} dx, -\end{align} - -where the interchange of summation with integration is once again justified by dominated convergence. With a change of variables ($\tau = x + nP$) this becomes: -\begin{align} -S[k] = \frac{1}{P} \sum_{n=-\infty}^{\infty} \int_{nP}^{nP + P} s(\tau) \ e^{-i 2\pi \frac{k}{P} \tau} \ \underbrace{e^{i 2\pi k n}}_{1} d\tau \ =\ \frac{1}{P} \int_{-\infty}^{\infty} s(\tau) \ e^{-i 2\pi \frac{k}{P} \tau} d\tau \triangleq \frac{1}{P}\cdot S\left(\frac{k}{P}\right). -\end{align} - -Distributional formulation - -These equations can be interpreted in the language of distributions for a function $s$ whose derivatives are all rapidly decreasing (see Schwartz function). The Poisson summation formula arises as a particular case of the convolution theorem on tempered distributions, using the Dirac comb distribution and its Fourier series: -$$ -\sum_{n=-\infty}^\infty \delta(x \pm nT) \equiv \sum_{k=-\infty}^\infty \frac{1}{T}\cdot e^{\pm i 2\pi \frac{k}{T} x} \quad\stackrel{\mathcal{F}}{\Longleftrightarrow}\quad \frac{1}{T}\cdot \sum_{k=-\infty}^{\infty} \delta (f \pm k/T). -$$ - -In other words, the periodization of a Dirac delta $\delta,$ resulting in a Dirac comb, corresponds to the discretization of its spectrum, which is constantly one. Hence, this again is a Dirac comb but with reciprocal increments.
- -For the case $T=1,$ Eq.1 readily follows: -\begin{align} -\sum_{k=-\infty}^\infty S(k) &= \sum_{k=-\infty}^\infty \left(\int_{-\infty}^{\infty} s(x)\ e^{-i 2\pi k x} dx \right) = \int_{-\infty}^{\infty} s(x) \underbrace{\left(\sum_{k=-\infty}^\infty e^{-i 2\pi k x}\right)}_{\sum_{n=-\infty}^\infty \delta(x-n)} dx \\ -&= \sum_{n=-\infty}^\infty \left(\int_{-\infty}^{\infty} s(x)\ \delta(x-n)\ dx \right) = \sum_{n=-\infty}^\infty s(n). -\end{align} - -Similarly: -\begin{align} -\sum_{k=-\infty}^{\infty} S(f + k/T) &= \sum_{k=-\infty}^{\infty} \mathcal{F}\left \{ s(x)\cdot e^{-i 2\pi\frac{k}{T}x}\right \}\\ -&= \mathcal{F} \bigg \{s(x)\underbrace{\sum_{k=-\infty}^{\infty} e^{-i 2\pi\frac{k}{T}x}}_{T \sum_{n=-\infty}^{\infty} \delta(x-nT)}\bigg \} = \mathcal{F}\left \{\sum_{n=-\infty}^{\infty} T\cdot s(nT) \cdot \delta(x-nT)\right \}\\ -&= \sum_{n=-\infty}^{\infty} T\cdot s(nT) \cdot \mathcal{F}\left \{\delta(x-nT)\right \} = \sum_{n=-\infty}^{\infty} T\cdot s(nT) \cdot e^{-i 2\pi nT f}. -\end{align} - -Or: -\begin{align} -\sum_{k=-\infty}^{\infty} S(f - k/T) &= S(f) * \sum_{k=-\infty}^{\infty} \delta(f - k/T) \\ -&= S(f) * \mathcal{F}\left \{T \sum_{n=-\infty}^{\infty} \delta(x-nT)\right \} \\ -&= \mathcal{F}\left \{s(x)\cdot T \sum_{n=-\infty}^{\infty} \delta(x-nT)\right \} = \mathcal{F}\left \{\sum_{n=-\infty}^{\infty} T\cdot s(nT) \cdot \delta(x-nT)\right \} \quad \text{as above}. -\end{align} - -The Poisson summation formula can also be proved quite conceptually using the compatibility of Pontryagin duality with short exact sequences such as -$$ -0 \to \Z \to \R \to \R / \Z \to 0. -$$ - -Eq.2 holds provided $s(x)$ is a continuous integrable function which satisfies -$$ -|s(x)| + |S(x)| \le C (1+|x|)^{-1-\delta} -$$ - -for some $C > 0,\delta > 0$ and every $x.$ Note that such an $s(x)$ is uniformly continuous; this, together with the decay assumption on $s$, shows that the series defining $s_{_P}$ converges uniformly to a continuous function. Eq.2 then holds in the strong sense that both sides converge uniformly and absolutely to the same limit. - -Eq.2 also holds in a pointwise sense under the strictly weaker assumption that $s$ has bounded variation and -$$ -2\cdot s(x)=\lim_{\varepsilon\to 0} s(x+\varepsilon) + \lim_{\varepsilon\to 0} s(x-\varepsilon). -$$ - -The Fourier series on the right-hand side of Eq.2 is then understood as a (conditionally convergent) limit of symmetric partial sums. - -As shown above, Eq.2 holds under the much less restrictive assumption that $s(x)$ is in $L^1(\mathbb{R})$, but then it is necessary to interpret it in the sense that the right-hand side is the (possibly divergent) Fourier series of $s_{_P}(x).$ In this case, one may extend the region where equality holds by considering summability methods such as Cesàro summability. When interpreting convergence in this way, Eq.1 (the case $x=0$) holds under the less restrictive conditions that $s(x)$ is integrable and 0 is a point of continuity of $s_{_P}(x)$. However, the summation formula may fail to hold even when both $s$ and $S$ are integrable and continuous, and the sums converge absolutely. - -In partial differential equations, the Poisson summation formula provides a rigorous justification for the fundamental solution of the heat equation with absorbing rectangular boundary by the method of images. Here the heat kernel on $\mathbb{R}^2$ is known, and that of a rectangle is determined by taking the periodization.
The Poisson summation formula similarly provides a connection between Fourier analysis on Euclidean spaces and on the tori of the corresponding dimensions. In one dimension, the resulting solution is called a theta function. - -In the statistical study of time-series, if $s$ is a function of time, then looking only at its values at equally spaced points of time is called "sampling." In applications, typically the function $s$ is band-limited, meaning that there is some cutoff frequency $f_o$ such that $S(f)$ is zero for frequencies exceeding the cutoff: $S(f)=0$ for $|f|>f_o.$ For band-limited functions, choosing the sampling rate $\tfrac{1}{T} > 2 f_o$ guarantees that no information is lost, since $S$ can be reconstructed from these sampled values; then, by Fourier inversion, so can $s.$ This leads to the Nyquist–Shannon sampling theorem. - -Computationally, the Poisson summation formula is useful since a slowly converging summation in real space is guaranteed to be converted into a quickly converging equivalent summation in Fourier space. (A broad function in real space becomes a narrow function in Fourier space and vice versa.) This is the essential idea behind Ewald summation. - -The Poisson summation formula is also useful to bound the errors obtained when an integral is approximated by a (Riemann) sum. Consider an approximation of $S(0)=\int_{-\infty}^\infty s(x) dx$ as $\delta \sum_{n=-\infty}^\infty s(n \delta)$, where $ \delta \ll 1 $ is the size of the bin. Then, according to Eq.2, this approximation coincides with $ \sum_{k=-\infty}^\infty S(k/ \delta)$. The error in the approximation can then be bounded as $\left| \sum_{k \ne 0} S(k/ \delta) \right| \le \sum_{k \ne 0} | S(k/ \delta)|$. This is particularly useful when the Fourier transform of $ s(x) $ decays rapidly, so that this bound is small when $1/\delta \gg 1 $. - -The Poisson summation formula may be used to derive Landau's asymptotic formula for the number of lattice points in a large Euclidean sphere. It can also be used to show that if an integrable function $s$ and its Fourier transform $S$ both have compact support, then $s = 0.$ - -In number theory, Poisson summation can also be used to derive a variety of functional equations including the functional equation for the Riemann zeta function. - -One important such use of Poisson summation concerns theta functions: periodic summations of Gaussians. Put $ q= e^{i\pi \tau } $, for $ \tau$ a complex number in the upper half plane, and define the theta function: -$$ - \theta ( \tau) = \sum_n q^{n^2}. -$$ - -The relation between $ \theta (-1/\tau)$ and $ \theta (\tau)$ turns out to be important for number theory, since this kind of relation is one of the defining properties of a modular form. By choosing $s(x)= e^{-\pi x^2}$ and using the fact that $S(f) = e^{-\pi f ^2},$ one can conclude: -$$ -\theta \left({-1\over\tau}\right) = \sqrt{\tau \over i} \theta (\tau), -$$ by applying the summation formula to the rescaled Gaussian $s(\lambda x)$, where ${1/\lambda} = \sqrt{\tau / i}.$ - -It follows from this that $\theta^8$ has a simple transformation property under $\tau \mapsto {-1/ \tau}$ and this can be used to prove Jacobi's formula for the number of different ways to express an integer as the sum of eight perfect squares. - -Cohn & Elkies proved an upper bound on the density of sphere packings using the Poisson summation formula, which subsequently led to a proof of optimal sphere packings in dimensions 8 and 24. - -* Let $s(x) = e^{-ax}$ for $0 \leq x$ and $s(x) = 0$ for $x < 0$ to get -$$ -\coth(x) = x\sum_{n \in \Z} \frac{1}{x^2+\pi^2n^2} = \frac{1}{x}+ 2x \sum_{n \in \Z_+} \frac{1}{x^2+\pi^2n^2}. -$$
- -* It can be used to prove the functional equation for the theta function. - -* Poisson's summation formula appears in Ramanujan's notebooks and can be used to prove some of his formulas, in particular it can be used to prove one of the formulas in Ramanujan's first letter to Hardy. - -* It can be used to calculate the quadratic Gauss sum. - -The Poisson summation formula holds in Euclidean space of arbitrary dimension. Let $\Lambda$ be the lattice in $\mathbb{R}^d$ consisting of points with integer coordinates; $\Lambda$ is the character group, or Pontryagin dual, of the torus $\mathbb{R}^d/\Lambda$. For a function $s$ in $L^1(\mathbb{R}^d)$, consider the series given by summing the translates of $s$ by elements of $\Lambda$: -$$ -\sum_{\nu\in\Lambda} s(x+\nu). -$$ - -Theorem. For $s$ in $L^1(\mathbb{R}^d)$, the above series converges pointwise almost everywhere, and thus defines a periodic function $\mathbb{P}s$ on $\mathbb{R}^d/\Lambda.$ $\mathbb{P}s$ lies in $L^1(\mathbb{R}^d/\Lambda)$ with $\| \mathbb{P}s \|_1 \le \| s \|_1.$
- -Moreover, for all $\nu$ in $\Lambda,$ the Fourier coefficient $\widehat{\mathbb{P}s}(\nu)$ (Fourier transform on $\mathbb{R}^d/\Lambda$) equals $S(\nu)$ (Fourier transform on $\mathbb{R}^d$). - -When $s$ is in addition continuous, and both $s$ and $S$ decay sufficiently fast at infinity, then one can "invert" the domain back to $\mathbb{R}^d$ and make a stronger statement. More precisely, if -$$ -|s(x)| + |S(x)| \le C (1+|x|)^{-d-\delta} -$$ - -for some C, δ > 0, then -$$ -\sum_{\nu\in\Lambda} s(x+\nu) = \sum_{\nu\in\Lambda}S(\nu)e^{i 2\pi x\cdot\nu}, -$$ - -where both series converge absolutely and uniformly in $x$. When d = 1 and x = 0, this gives Eq.1 above. - -More generally, a version of the statement holds if Λ is replaced by a more general lattice in $\mathbb{R}^d$. The dual lattice Λ′ can be defined as a subset of the dual vector space or alternatively by Pontryagin duality. Then the statement is that the sums of delta-functions at each point of Λ and at each point of Λ′ are again Fourier transforms of each other as distributions, subject to correct normalization. - -This is applied in the theory of theta functions, and is a possible method in geometry of numbers. In fact in more recent work on counting lattice points in regions it is routinely used: summing the indicator function of a region D over lattice points is exactly the question, so that the LHS of the summation formula is what is sought and the RHS something that can be attacked by mathematical analysis. - -Further generalization to locally compact abelian groups is required in number theory. In non-commutative harmonic analysis, the idea is taken even further in the Selberg trace formula, but takes on a much deeper character. - -A series of mathematicians applying harmonic analysis to number theory, most notably Martin Eichler, Atle Selberg, Robert Langlands, and James Arthur, have generalised the Poisson summation formula to the Fourier transform on non-commutative locally compact reductive algebraic groups $ G$ with a discrete subgroup $ \Gamma$ such that $ G/\Gamma$ has finite volume. For example, $ G$ can be the real points of $ SL_n$ and $ \Gamma$ can be the integral points of $ SL_n$. In this setting, $ G$ plays the role of the real number line in the classical version of Poisson summation, and $ \Gamma$ plays the role of the integers $ n$ that appear in the sum. The generalised version of Poisson summation is called the Selberg trace formula, and has played a role in proving many cases of Artin's conjecture and in Wiles's proof of Fermat's Last Theorem. The left-hand side of Eq.1 becomes a sum over irreducible unitary representations of $ G$, and is called "the spectral side," while the right-hand side becomes a sum over conjugacy classes of $ \Gamma$, and is called "the geometric side." - -The Poisson summation formula is the archetype for vast developments in harmonic analysis and number theory. - -The Poisson summation formula is a particular case of the convolution theorem on tempered distributions. If one of the two factors is the Dirac comb, one obtains periodic summation on one side and sampling on the other side of the equation. Applied to the Dirac delta function and its Fourier transform, the function that is constantly 1, this yields the Dirac comb identity.
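Eq.1 is easy to test numerically. The following sketch (an illustration added for this entry, not from the article) uses the Gaussian pair $s(x)=e^{-2\pi x^2}$ and $S(f)=e^{-\pi f^2/2}/\sqrt{2}$, for which both sides of Eq.1 converge after a handful of terms.

import math

def s(x):
    # s(x) = exp(-2*pi*x^2); under the convention
    # S(f) = integral of s(x)*exp(-i*2*pi*f*x) dx, its transform is S below.
    return math.exp(-2 * math.pi * x * x)

def S(f):
    return math.exp(-math.pi * f * f / 2) / math.sqrt(2)

lhs = sum(s(n) for n in range(-20, 21))
rhs = sum(S(k) for k in range(-20, 21))
print(lhs, rhs)  # both print 1.003734..., as Eq.1 predicts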
diff --git a/wiki/wikipedia/251.txt b/wiki/wikipedia/251.txt deleted file mode 100644 index 067bf3fb6107373c650a7bf4d23c8e6459a254ea..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/251.txt +++ /dev/null @@ -1,43 +0,0 @@ -In mathematics, the Atiyah–Bott fixed-point theorem, proven by Michael Atiyah and Raoul Bott in the 1960s, is a general form of the Lefschetz fixed-point theorem for smooth manifolds M, which uses an elliptic complex on M. This is a system of elliptic differential operators on vector bundles, generalizing the de Rham complex constructed from smooth differential forms which appears in the original Lefschetz fixed-point theorem. - -The idea is to find the correct replacement for the Lefschetz number, which in the classical result is an integer counting the correct contribution of a fixed point of a smooth mapping -$$ - f\colon M \to M. -$$ - -Intuitively, the fixed points are the points of intersection of the graph of f with the diagonal (graph of the identity mapping) in $M\times M$, and the Lefschetz number thereby becomes an intersection number. The Atiyah–Bott theorem is an equation in which the LHS must be the outcome of a global topological (homological) calculation, and the RHS a sum of the local contributions at fixed points of f. - -Counting codimensions in $M\times M$, a transversality assumption for the graph of f and the diagonal should ensure that the fixed point set is zero-dimensional. Assuming M a closed manifold should ensure then that the set of intersections is finite, yielding a finite summation as the RHS of the expected formula. Further data needed relates to the elliptic complex of vector bundles $E_j$, namely a bundle map -$$ -\varphi_j \colon f^{-1}(E_j) \to E_j -$$ - -for each j, such that the resulting maps on sections give rise to an endomorphism $T$ of the elliptic complex. Such an endomorphism $T$ has Lefschetz number -$$ -L(T), -$$ - -which by definition is the alternating sum of its traces on each graded part of the homology of the elliptic complex. - -The form of the theorem is then -$$ -L(T) = \sum_x \left(\sum_j (-1)^j \operatorname{trace} \varphi_{j,x}\right)/\delta(x). -$$ - -Here $\operatorname{trace} \varphi_{j,x}$ means the trace of $\varphi_{j}$ at a fixed point x of f, and $\delta(x)$ is the determinant of the endomorphism $I - Df$ at x, with $Df$ the derivative of f (the non-vanishing of this is a consequence of transversality). The outer summation is over the fixed points x, and the inner summation over the index j in the elliptic complex. - -Specializing the Atiyah–Bott theorem to the de Rham complex of smooth differential forms yields the original Lefschetz fixed-point formula. A famous application of the Atiyah–Bott theorem is a simple proof of the Weyl character formula in the theory of Lie groups. - -The early history of this result is entangled with that of the Atiyah–Singer index theorem. There was other input, as is suggested by the alternate name Woods Hole fixed-point theorem that was used in the past (referring properly to the case of isolated fixed points). A 1964 meeting at Woods Hole brought together a varied group:
    Eichler started the interaction between fixed-point theorems and automorphic forms. Shimura played an important part in this development by explaining this to Bott at the Woods Hole conference in 1964.
- -As Atiyah puts it:
    [at the conference]...Bott and I learnt of a conjecture of Shimura concerning a generalization of the Lefschetz formula for holomorphic maps. After much effort we convinced ourselves that there should be a general formula of this type [...]
- -and they were led to a version for elliptic complexes. - -In the recollection of William Fulton, who was also present at the conference, the first to produce a proof was Jean-Louis Verdier. - -In the context of algebraic geometry, the statement applies for smooth and proper varieties over an algebraically closed field. This variant of the Atiyah–Bott fixed point formula was proved by Kondyrev by expressing both sides of the formula as appropriately chosen categorical traces. diff --git a/wiki/wikipedia/2510.txt b/wiki/wikipedia/2510.txt deleted file mode 100644 index 4c9fd8f28c4ed954e67c9ef1b494e22294c91cec..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2510.txt +++ /dev/null @@ -1,177 +0,0 @@ -DOT is a graph description language. DOT graphs are typically files with the filename extension gv or dot. The extension gv is preferred, to avoid confusion with the extension dot used by versions of Microsoft Word before 2007. - -Various programs can process DOT files. Some, such as dot, neato, twopi, circo, fdp, and sfdp, can read a DOT file and render it in graphical form. Others, such as gvpr, gc, acyclic, ccomps, sccmap, and tred, read DOT files and perform calculations on the represented graph. Finally, others, such as lefty, dotty, and grappa, provide an interactive interface. The GVedit tool combines a text editor with a noninteractive image viewer. Most programs are part of the Graphviz package or use it internally. - -At its simplest, DOT can be used to describe an undirected graph. An undirected graph shows simple relations between objects, such as friendship between people. The graph keyword is used to begin a new graph, and nodes are described within curly braces. A double-hyphen (--) is used to show relations between the nodes. - - - -// The graph name and the semicolons are optional - -graph graphname { - -a -- b -- c; - -b -- d; - -} - - - -Similar to undirected graphs, DOT can describe directed graphs, such as flowcharts and dependency trees. The syntax is the same as for undirected graphs, except the digraph keyword is used to begin the graph, and an arrow (->) is used to show relationships between nodes. - - - -digraph graphname { - -a -> b -> c; - -b -> d; - -} - - - -Various attributes can be applied to graphs, nodes and edges in DOT files. These attributes can control aspects such as color, shape, and line styles. For nodes and edges, one or more attribute–value pairs are placed in square brackets ([]) after a statement and before the semicolon (which is optional). Graph attributes are specified as direct attribute–value pairs under the graph element, where multiple attributes are separated by a comma or using multiple sets of square brackets, while node attributes are placed after a statement containing only the name of the node, but not the relations between the nodes. - - - -graph graphname { - -// This attribute applies to the graph itself - -size="1,1"; - -// The label attribute can be used to change the label of a node - -a [label="Foo"]; - -// Here, the node shape is changed. - -b [shape=box]; - -// These edges both have different line properties - -a -- b -- c [color=blue]; - -b -- d [style=dotted]; - -// [style=invis] hides a node. - -} - - - -HTML-like labels are only available on versions of Graphviz that are newer than mid-November 2003; in particular, they are not part of release 1.10. - -DOT supports C and C++ style single-line and multi-line comments.
In addition, it ignores lines with a number sign symbol (#) as their first character. - - - -// This is a single line comment. - -/* This is a - -multiple line - -comment. */ - -# Lines like this are also ignored. - - - -Following is an example script that describes the bonding structure of an ethane molecule. This is an undirected graph and contains node attributes as explained above. - - - -graph ethane { - -C_0 -- H_0 [type=s]; - -C_0 -- H_1 [type=s]; - -C_0 -- H_2 [type=s]; - -C_0 -- C_1 [type=s]; - -C_1 -- H_3 [type=s]; - -C_1 -- H_4 [type=s]; - -C_1 -- H_5 [type=s]; - -} - - - -The DOT language defines a graph, but does not provide facilities for rendering the graph. There are several programs that can be used to render, view, and manipulate graphs in the DOT language: - -* Graphviz – a collection of CLI utilities and libraries to manipulate and render graphs into different formats like SVG, PDF, PNG etc. - -** dot – a CLI tool for conversion between DOT and other formats - -* Canviz – a JavaScript library for rendering DOT files - -* d3-graphviz – a JavaScript library based on Viz.js and D3.js that renders DOT graphs and supports animated transitions between graphs and interactive graph manipulation - -* Vis.js – a JavaScript library that accepts DOT as input for network graphs - -* Viz.js – a JavaScript port of Graphviz that provides a simple wrapper for using it in the browser - -* hpcc-js/wasm Graphviz – a fast WASM library for Graphviz similar to Viz.js - -* Gephi – an interactive visualization and exploration platform for all kinds of networks and complex systems, dynamic and hierarchical graphs - -* Grappa – a partial port of Graphviz to Java - -* graphviz-java – an open source partial port of Graphviz to Java available from github.com - -* ZGRViewer – a DOT viewer - -* Beluging – a Python- & Google Cloud Platform-based viewer of DOT and Beluga extensions - -* dot2tex – a program to convert files from DOT to PGF/TikZ or PSTricks, both of which are rendered in LaTeX - -* OmniGraffle – a digital illustration application for macOS that can import a subset of DOT, producing an editable document (but the result cannot be exported back to DOT) - -* Tulip – a software framework in C++ that can import DOT files for analysis - -* VizierFX – an Apache Flex graph rendering library in ActionScript - -It is possible to specify layout details with DOT, although not all tools that implement the DOT language pay attention to the position attributes. Thus, depending on the tools used, users must rely on automated layout algorithms (potentially resulting in unexpected output) or tediously hand-position nodes. - -For example: - - - -digraph g { - - node [shape=plaintext]; - - A1 -> B1; - - A2 -> B2; - - A3 -> B3; - - - - A1 -> A2 [label=f]; - - A2 -> A3 [label=g]; - - B2 -> B3 [label="g'"]; - - B1 -> B3 [label="(g o f)'" tailport=s headport=s]; - - { rank=same; A1 A2 A3 } - - { rank=same; B1 B2 B3 } - -} - - - -A rendering of this example (titled "An image that seems improperly rendered" in the original article) exhibits two problems: the square on the right is not a perfect square, and some edge labels, such as (g o f)', are not next to the related arrow or overlap the arrows. - -This can be fixed with Inkscape or other SVG editors. In some cases, this can also be fixed by using the pos attribute to specify a position, and the weight attribute to square the graph.
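As a usage sketch (an added illustration; it assumes the Python graphviz bindings and the Graphviz binaries are installed, neither of which the article mandates), the directed example above can be rendered programmatically:

import graphviz

# Wrap the DOT source shown earlier and render it to a PNG file.
src = graphviz.Source("""
digraph graphname {
    a -> b -> c;
    b -> d;
}
""")
src.render('graphname', format='png', cleanup=True)  # writes graphname.png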
diff --git a/wiki/wikipedia/2511.txt b/wiki/wikipedia/2511.txt deleted file mode 100644 index 713aa400435844afeca63b04a0ae49c77cc56acd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2511.txt +++ /dev/null @@ -1,67 +0,0 @@ -In polyhedral combinatorics, a branch of mathematics, Steinitz's theorem is a characterization of the undirected graphs formed by the edges and vertices of three-dimensional convex polyhedra: they are exactly the 3-vertex-connected planar graphs. That is, every convex polyhedron forms a 3-connected planar graph, and every 3-connected planar graph can be represented as the graph of a convex polyhedron. For this reason, the 3-connected planar graphs are also known as polyhedral graphs. - -This result provides a classification theorem for the three-dimensional convex polyhedra, something that is not known in higher dimensions. It provides a complete and purely combinatorial description of the graphs of these polyhedra, allowing other results on them, such as Eberhard's theorem on the realization of polyhedra with given types of faces, to be proven more easily, without reference to the geometry of these shapes. Additionally, it has been applied in graph drawing, as a way to construct three-dimensional visualizations of abstract graphs. Branko Grünbaum has called this theorem "the most important and deepest known result on 3-polytopes." - -The theorem appears in a 1922 publication of Ernst Steinitz, after whom it is named. It can be proven by mathematical induction (as Steinitz did), by finding the minimum-energy state of a two-dimensional spring system and lifting the result into three dimensions, or by using the circle packing theorem. - -Several extensions of the theorem are known, in which the polyhedron that realizes a given graph has additional constraints; for instance, every polyhedral graph is the graph of a convex polyhedron with integer coordinates, or the graph of a convex polyhedron all of whose edges are tangent to a common midsphere. - -An undirected graph is a system of vertices and edges, each edge connecting two of the vertices. From any polyhedron one can form a graph, by letting the vertices of the graph correspond to the vertices of the polyhedron and by connecting any two graph vertices by an edge whenever the corresponding two polyhedron vertices are the endpoints of an edge of the polyhedron. This graph is known as the skeleton of the polyhedron. - -A graph is planar if it can be drawn with its vertices as points in the Euclidean plane, and its edges as curves that connect these points, such that no two edge curves cross each other and such that the point representing a vertex lies on the curve representing an edge only when the vertex is an endpoint of the edge. By Fáry's theorem, every planar drawing can be straightened so that the curves representing the edges are line segments. A graph is 3-connected if it has more than three vertices and, after the removal of any two of its vertices, any other pair of vertices remain connected by a path. - -Steinitz's theorem states that these two conditions are both necessary and sufficient to characterize the skeletons of three-dimensional convex polyhedra: a given graph $G$ is the graph of a convex three-dimensional polyhedron, if and only if $G$ is planar and 3-vertex-connected. - -One direction of Steinitz's theorem (the easier direction to prove) states that the graph of every convex polyhedron is planar and 3-connected. 
As shown in the illustration, planarity can be shown by using a Schlegel diagram: if one places a light source near one face of the polyhedron, and a plane on the other side, the shadows of the polyhedron edges will form a planar graph, embedded in such a way that the edges are straight line segments. The 3-connectivity of a polyhedral graph is a special case of Balinski's theorem that the graph of any $k$-dimensional convex polytope is $k$-connected. The connectivity of the graph of a polytope, after removing any $k-1$ of its vertices, can be proven by choosing one more vertex $v$, finding a linear function that is zero on the resulting set of $k$ vertices, and following the paths generated by the simplex method to connect every vertex to one of two extreme vertices of the linear function, with the chosen vertex $v$ connected to both. - -The other, more difficult, direction of Steinitz's theorem states that every planar 3-connected graph is the graph of a convex polyhedron. There are three standard approaches for this part: proofs by induction, lifting two-dimensional Tutte embeddings into three dimensions using the Maxwell–Cremona correspondence, and methods using the circle packing theorem to generate a canonical polyhedron. - -Although Steinitz's original proof was not expressed in terms of graph theory, it can be rewritten in those terms, and involves finding a sequence of Δ-Y and Y-Δ transforms that reduce any 3-connected planar graph to $K_4$, the graph of the tetrahedron. A Y-Δ transform removes a degree-three vertex from a graph, adding edges between all of its former neighbors if those edges did not already exist; the reverse transformation, a Δ-Y transform, removes the edges of a triangle from a graph and replaces them by a new degree-three vertex adjacent to the same three vertices. Once such a sequence is found, it can be reversed and converted into geometric operations that build up the desired polyhedron step by step starting from a tetrahedron. Each Y-Δ transform in the reversed sequence can be performed geometrically by slicing off a degree-three vertex from a polyhedron. A Δ-Y transform in the reversed sequence can be performed geometrically by removing a triangular face from a polyhedron and extending its neighboring faces until the point where they meet, but only when that triple intersection point of the three neighboring faces is on the far side of the removed face from the polyhedron. When the triple intersection point is not on the far side of this face, a projective transformation of the polyhedron suffices to move it to the correct side. Therefore, by induction on the number of Δ-Y and Y-Δ transforms needed to reduce a given graph to $K_4$, every polyhedral graph can be realized as a polyhedron. - -A later work by Epifanov strengthened Steinitz's proof that every polyhedral graph can be reduced to $K_4$ by Δ-Y and Y-Δ transforms. Epifanov proved that if two vertices are specified in a planar graph, then the graph can be reduced to a single edge between those terminals by combining Δ-Y and Y-Δ transforms with series–parallel reductions. Epifanov's proof was complicated and non-constructive, but it was simplified by Truemper using methods based on graph minors. Truemper observed that every grid graph is reducible by Δ-Y and Y-Δ transforms in this way, that this reducibility is preserved by graph minors, and that every planar graph is a minor of a grid graph. This idea can be used to replace Steinitz's lemma that a reduction sequence exists. 
After this replacement, the rest of the proof can be carried out using induction in the same way as Steinitz's original proof. For these proofs, carried out using any of the ways of finding sequences of Δ-Y and Y-Δ transforms, there exist polyhedral graphs that require a nonlinear number of steps. More precisely, infinitely many graphs require a number of steps at least proportional to $n^{3/2}$, where $n$ is the number of vertices in the graph, and the best known upper bound on the number of steps that suffice is larger, proportional to $n^2$. - -An alternative form of induction proof is based on removing edges (and compressing out the degree-two vertices that might be left after this removal) or contracting edges and forming a minor of the given planar graph. Any polyhedral graph can be reduced to $K_4$ by a linear number of these operations, and again the operations can be reversed and the reversed operations performed geometrically, giving a polyhedral realization of the graph. However, while it is simpler to prove that a reduction sequence exists for this type of argument, and the reduction sequences are shorter, the geometric steps needed to reverse the sequence are more complicated. - -If a graph is drawn in the plane with straight line edges, then an equilibrium stress is defined as an assignment of nonzero real numbers (weights) to the edges, with the property that each vertex is in the position given by the weighted average of its neighbors. According to the Maxwell–Cremona correspondence, an equilibrium stress can be lifted to a piecewise linear continuous three-dimensional surface such that the edges forming the boundaries between the flat parts of the surface project to the given drawing. The weight and length of each edge determines the difference in slopes of the surface on either side of the edge, and the condition that each vertex is in equilibrium with its neighbors is equivalent to the condition that these slope differences cause the surface to meet up with itself correctly in the neighborhood of the vertex. Positive weights translate to convex dihedral angles between two faces of the piecewise linear surface, and negative weights translate to concave dihedral angles. Conversely, every continuous piecewise-linear surface comes from an equilibrium stress in this way. If a finite planar graph is drawn and given an equilibrium stress in such a way that all interior edges of the drawing have positive weights, and all exterior edges have negative weights, then by translating this stress into a three-dimensional surface in this way, and then replacing the flat surface representing the exterior of the graph by its complement in the same plane, one obtains a convex polyhedron, with the additional property that its perpendicular projection onto the plane has no crossings. - -The Maxwell–Cremona correspondence has been used to obtain polyhedral realizations of polyhedral graphs by combining it with a planar graph drawing method of W. T. Tutte, the Tutte embedding. Tutte's method begins by fixing one face of a polyhedral graph into convex position in the plane. This face will become the outer face of a drawing of a graph. The method continues by setting up a system of linear equations in the vertex coordinates, according to which each remaining vertex should be placed at the average of its neighbors. Then as Tutte showed, this system of equations will have a unique solution in which each face of the graph is drawn as a convex polygon. 
Intuitively, this solution describes the pattern that would be obtained by replacing the interior edges of the graph by ideal springs and letting them settle to their minimum-energy state. The result is almost an equilibrium stress: if one assigns weight one to each interior edge, then each interior vertex of the drawing is in equilibrium. However, it is not always possible to assign negative numbers to the exterior edges so that they, too, are in equilibrium. Such an assignment is always possible when the outer face is a triangle, and so this method can be used to realize any polyhedral graph that has a triangular face. If a polyhedral graph does not contain a triangular face, its dual graph does contain a triangle and is also polyhedral, so one can realize the dual in this way and then realize the original graph as the polar polyhedron of the dual realization. An alternative method for realizing polyhedra using liftings avoids duality by choosing any face with at most five vertices as the outer face. Every polyhedral graph has such a face, and by choosing the fixed shape of this face more carefully, the Tutte embedding of the rest of the graph can be lifted. - -According to one variant of the circle packing theorem, for every polyhedral graph, there exists a system of circles in the plane or on any sphere, representing the vertices and faces of the graph, so that: - -*each two adjacent vertices of the graph are represented by tangent circles, - -*each two adjacent faces of the graph are represented by tangent circles, - -*each pair of a vertex and a face that it touches are represented by circles that cross at a right angle, and - -*all other pairs of circles are separated from each other. - -The same system of circles forms a representation of the dual graph by swapping the roles of circles that represent vertices, and circles that represent faces. From any such representation on a sphere, embedded into three-dimensional Euclidean space, one can form a convex polyhedron that is combinatorially equivalent to the given graph, as an intersection of half-spaces whose boundaries pass through the face circles. From each vertex of this polyhedron, the horizon on the sphere, seen from that vertex, is the circle that represents it. This horizon property determines the three-dimensional position of each vertex, and the polyhedron can be equivalently defined as the convex hull of the vertices, positioned in this way. The sphere becomes the midsphere of the realization: each edge of the polyhedron is tangent to the sphere, at a point where two tangent vertex circles cross two tangent face circles. - -It is possible to prove a stronger form of Steinitz's theorem, that any polyhedral graph can be realized by a convex polyhedron whose coordinates are integers. For instance, Steinitz's original induction-based proof can be strengthened in this way. However, the integers that would result from Steinitz's construction are doubly exponential in the number of vertices of the given polyhedral graph. Writing down numbers of this magnitude in binary notation would require an exponential number of bits. Geometrically, this means that some features of the polyhedron may have size doubly exponentially larger than others, making the realizations derived from this method problematic for applications in graph drawing. - -Subsequent researchers have found lifting-based realization algorithms that use only a linear number of bits per vertex.
It is also possible to relax the requirement that the coordinates be integers, and assign coordinates in such a way that the $x$-coordinates of the vertices are distinct integers in the range from 0 to $2n-4$ and the other two coordinates are real numbers in the unit interval, so that each edge has length at least one while the overall polyhedron has linear volume. Some polyhedral graphs are known to be realizable on grids of only polynomial size; in particular this is true for the pyramids (realizations of wheel graphs), prisms (realizations of prism graphs), and stacked polyhedra (realizations of Apollonian networks). - -Another way of stating the existence of integer realizations is that every three-dimensional convex polyhedron has a combinatorially equivalent integer polyhedron. For instance, the regular dodecahedron is not itself an integer polyhedron, because of its regular pentagon faces, but it can be realized as an equivalent integer pyritohedron. This is not always possible in higher dimensions, where there exist polytopes (such as the ones constructed from the Perles configuration) that have no integer equivalent. - -A Halin graph is a special case of a polyhedral graph, formed from a planar-embedded tree (with no degree-two vertices) by connecting the leaves of the tree into a cycle. For Halin graphs, one can choose polyhedral realizations of a special type: the outer cycle forms a horizontal convex base face, and every other face lies directly above the base face (as in the polyhedra realized through lifting), with all of these upper faces having the same slope. Polyhedral surfaces with equal-slope faces over any base polygon (not necessarily convex) can be constructed from the polygon's straight skeleton, and an equivalent way of describing this realization is that the two-dimensional projection of the tree onto the base face forms its straight skeleton. The proof of this result uses induction: any rooted tree may be reduced to a smaller tree by removing the leaves from an internal node whose children are all leaves, the Halin graph formed from the smaller tree has a realization by the induction hypothesis, and it is possible to modify this realization in order to add any number of leaf children to the tree node whose children were removed. - -In any polyhedron that represents a given polyhedral graph $G$, the faces of $G$ are exactly the cycles in $G$ that do not separate $G$ into two components: that is, removing a facial cycle from $G$ leaves the rest of $G$ as a connected subgraph. Such cycles are called peripheral cycles. Thus, the combinatorial structure of the faces (but not their geometric shapes) is uniquely determined from the graph structure. Another strengthening of Steinitz's theorem, by Barnette and Grünbaum, states that for any polyhedral graph, any face of the graph, and any convex polygon representing that face, it is possible to find a polyhedral realization of the whole graph that has the specified shape for the designated face. This is related to a theorem of Tutte, that any polyhedral graph can be drawn in the plane with all faces convex and any specified shape for its outer face. However, the planar graph drawings produced by Tutte's method do not necessarily lift to convex polyhedra. Instead, Barnette and Grünbaum prove this result using an inductive method. It is also always possible, given a polyhedral graph $G$ and an arbitrary cycle $C$ in $G$, to find a realization for which $C$ forms the silhouette of the realization under parallel projection.
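The Tutte spring method described earlier is short enough to sketch in full (an added illustration, not from the article; the octahedron and the pinned outer triangle are choices made here for concreteness): pin the vertices of one face, then solve the linear system that places every interior vertex at the average of its neighbours.

import numpy as np

# Octahedron graph: vertices 0..5, each adjacent to all but its antipode.
antipode = {0: 5, 1: 4, 2: 3, 3: 2, 4: 1, 5: 0}
neighbours = {v: [u for u in range(6) if u not in (v, antipode[v])]
              for v in range(6)}

outer = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.5, 1.0)}  # a triangular face, pinned
inner = [v for v in range(6) if v not in outer]
idx = {v: i for i, v in enumerate(inner)}

# Each interior vertex v satisfies deg(v)*p(v) - (interior neighbours)
# = (sum of pinned neighbour positions), a nonsingular linear system.
A = np.zeros((len(inner), len(inner)))
b = np.zeros((len(inner), 2))
for v in inner:
    A[idx[v], idx[v]] = len(neighbours[v])
    for u in neighbours[v]:
        if u in outer:
            b[idx[v]] += outer[u]
        else:
            A[idx[v], idx[u]] -= 1.0

for v, p in zip(inner, np.linalg.solve(A, b)):
    print(v, p.round(3))   # 3 [0.5 0.2], 4 [0.4 0.4], 5 [0.6 0.4]

Every face of the resulting drawing is convex, and, per the discussion above, because the pinned outer face is a triangle the drawing carries an equilibrium stress and lifts to a three-dimensional polyhedron.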
- -The realization of polyhedra using the circle packing theorem provides another strengthening of Steinitz's theorem: every 3-connected planar graph may be represented as a convex polyhedron in such a way that all of its edges are tangent to the same unit sphere, the midsphere of the polyhedron. By performing a carefully chosen Möbius transformation of a circle packing before transforming it into a polyhedron, it is possible to find a polyhedral realization that realizes all the symmetries of the underlying graph, in the sense that every graph automorphism is a symmetry of the polyhedral realization. More generally, if $G$ is a polyhedral graph and $K$ is any smooth three-dimensional convex body, it is possible to find a polyhedral representation of $G$ in which all edges are tangent to $K$. - -Circle packing methods can also be used to characterize the graphs of polyhedra that have a circumsphere through all their vertices, or an insphere tangent to all of their faces. (The polyhedra with a circumsphere are also significant in hyperbolic geometry as the ideal polyhedra.) In both cases, the existence of a sphere is equivalent to the solvability of a system of linear inequalities on positive real variables associated with each edge of the graph. In the case of the insphere, these variables must sum to exactly one on each face cycle of the graph, and to more than one on each non-face cycle. Dually, for the circumsphere, the variables must sum to one at each vertex, and more than one across each cut with two or more vertices on each side of the cut. Although there may be exponentially many linear inequalities to satisfy, a solution (if one exists) can be found in polynomial time using the ellipsoid method. The values of the variables from a solution determine the angles between pairs of circles in a circle packing whose corresponding polyhedron has the desired relation to its sphere. - -In any dimension higher than three, the algorithmic Steinitz problem consists of determining whether a given lattice is the face lattice of a convex polytope. It is unlikely to have polynomial time complexity, as it is NP-hard and more strongly complete for the existential theory of the reals, even for four-dimensional polytopes, by Richter-Gebert's universality theorem. Here, the existential theory of the reals is a class of computational problems that can be formulated in terms of finding real variables that satisfy a given system of polynomial equations and inequalities. For the algorithmic Steinitz problem, the variables of such a problem can be the vertex coordinates of a polytope, and the equations and inequalities can be used to specify the flatness of each face in the given face lattice and the convexity of each angle between faces. Completeness means that every other problem in this class can be transformed into an equivalent instance of the algorithmic Steinitz problem, in polynomial time. The existence of such a transformation implies that, if the algorithmic Steinitz problem has a polynomial time solution, then so does every problem in the existential theory of the reals, and every problem in NP. However, because a given graph may correspond to more than one face lattice, it is difficult to extend this completeness result to the problem of recognizing the graphs of 4-polytopes. Determining the computational complexity of this graph recognition problem remains open. 
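Returning to the insphere characterization above: the system of linear inequalities is small enough to write down completely for the tetrahedron graph $K_4$. The sketch below (an added illustration; the strict inequalities are approximated with a hypothetical slack eps, and SciPy is assumed available, neither of which comes from the article) checks feasibility with a linear program; no ellipsoid method is needed at this size.

import numpy as np
from scipy.optimize import linprog

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
e_idx = {e: i for i, e in enumerate(edges)}

def row(cycle):
    # incidence vector of the edges traversed by a closed cycle
    r = np.zeros(len(edges))
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        r[e_idx[tuple(sorted((a, b)))]] = 1.0
    return r

faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]    # facial triangles: sum = 1
nonfacial = [(0, 1, 2, 3), (0, 1, 3, 2), (0, 2, 1, 3)]  # the three 4-cycles: sum > 1

eps = 1e-3  # proxy for strictness
res = linprog(np.zeros(len(edges)),
              A_ub=-np.array([row(c) for c in nonfacial]),
              b_ub=-(1 + eps) * np.ones(len(nonfacial)),
              A_eq=np.array([row(c) for c in faces]),
              b_eq=np.ones(len(faces)),
              bounds=[(eps, None)] * len(edges))
print(res.status, res.x)  # status 0: feasible (x_e = 1/3 on every edge works)

Feasibility here matches the fact that the tetrahedron has an insphere; for the circumsphere one would instead encode the dual (vertex and cut) constraints described above.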
- -Researchers have also found graph-theoretic characterizations of the graphs of certain special classes of three-dimensional non-convex polyhedra and four-dimensional convex polytopes. However, in both cases, the general problem remains unsolved. Indeed, even the problem of determining which complete graphs are the graphs of non-convex polyhedra (other than $K_4$ for the tetrahedron and $K_7$ for the Császár polyhedron) remains unsolved. - -Eberhard's theorem partially characterizes the multisets of polygons that can be combined to form the faces of a convex polyhedron. It can be proven by forming a 3-connected planar graph with the given set of polygon faces, and then applying Steinitz's theorem to find a polyhedral realization of that graph. - -László Lovász has shown a correspondence between polyhedral representations of graphs and matrices realizing the Colin de Verdière graph invariants of the same graphs. The Colin de Verdière invariant is the maximum corank of a weighted adjacency matrix of the graph, under some additional conditions that are irrelevant for polyhedral graphs. These are square symmetric matrices indexed by the vertices, with the weight of vertex $i$ in the diagonal coefficient $M_{i,i}$ and with the weight of edge $i,j$ in the off-diagonal coefficients $M_{i,j}$ and $M_{j,i}$. When vertices $i$ and $j$ are not adjacent, the coefficient $M_{i,j}$ is required to be zero. This invariant is at most three if and only if the graph is a planar graph. As Lovász shows, when the graph is polyhedral, a representation of it as a polyhedron can be obtained by finding a weighted adjacency matrix of corank three, finding three vectors forming a basis for its nullspace, using the coefficients of these vectors as coordinates for the vertices of a polyhedron, and scaling these vertices appropriately. - -The history of Steinitz's theorem is described by Grünbaum, who notes its first appearance in a cryptic form in a publication of Ernst Steinitz, originally written in 1916. Steinitz provided more details in later lecture notes, published after his 1928 death. Although modern treatments of Steinitz's theorem state it as a graph-theoretic characterization of polyhedra, Steinitz did not use the language of graphs. The graph-theoretic formulation of the theorem was introduced in the early 1960s by Branko Grünbaum and Theodore Motzkin, with its proof also converted to graph theory in Grünbaum's 1967 text Convex Polytopes. The work of Epifanov on Δ-Y and Y-Δ transforms, strengthening Steinitz's proof, was motivated by other problems than the characterization of polyhedra. Truemper credits Grünbaum with observing the relevance of this work for Steinitz's theorem. - -The Maxwell–Cremona correspondence between stress diagrams and polyhedral liftings was developed in a series of papers by James Clerk Maxwell from 1864 to 1870, based on earlier work of Pierre Varignon, William Rankine, and others, and was popularized in the late 19th century by Luigi Cremona. The observation that this correspondence can be used with the Tutte embedding to prove Steinitz's theorem comes from Eades. - -The circle packing theorem was proved by Paul Koebe in 1936 and (independently) by E. M. Andreev in 1970; it was popularized in the mid-1980s by William Thurston, who (despite citing Koebe and Andreev) is often credited as one of its discoverers. 
Andreev's version of the theorem was already formulated as a Steinitz-like characterization for certain polyhedra in hyperbolic space, and the use of circle packing to realize polyhedra with midspheres comes from the work of Thurston. The problem of characterizing polyhedra with inscribed or circumscribed spheres, eventually solved using a method based on circle packing realizations, goes back to unpublished work of René Descartes circa 1630 and to Jakob Steiner in 1832; the first examples of polyhedra that have no realization with a circumsphere or insphere were given by Steinitz in 1928. diff --git a/wiki/wikipedia/2512.txt b/wiki/wikipedia/2512.txt deleted file mode 100644 index 5a67b699d2f1d1516ca1bc3d9194d81c9fc62816..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2512.txt +++ /dev/null @@ -1,15 +0,0 @@ -Refinement is a generic term of computer science that encompasses various approaches for producing correct computer programs and simplifying existing programs to enable their formal verification. - -In formal methods, program refinement is the verifiable transformation of an abstract (high-level) formal specification into a concrete (low-level) executable program. Stepwise refinement allows this process to be done in stages. Logically, refinement normally involves implication, but there can be additional complications. - -The progressive just-in-time preparation of the product backlog (requirements list) in agile software development approaches, such as Scrum, is also commonly described as refinement. - -Data refinement is used to convert an abstract data model (in terms of sets for example) into implementable data structures (such as arrays). Operation refinement converts a specification of an operation on a system into an implementable program (e.g., a procedure). The postcondition can be strengthened and/or the precondition weakened in this process. This reduces any nondeterminism in the specification, typically to a completely deterministic implementation. - -For example, x′ ∈ {1,2,3} (where x′ is the value of the variable x after an operation) could be refined to x′ ∈ {1,2}, then x′ ∈ {1}, and implemented as x := 1. Implementations of x := 2 and x := 3 would be equally acceptable in this case, using a different route for the refinement. However, we must be careful not to refine to x′ ∈ {} (equivalent to false) since this is unimplementable; it is impossible to select a member from the empty set. - -The term reification is also sometimes used (coined by Cliff Jones). Retrenchment is an alternative technique when formal refinement is not possible. The opposite of refinement is abstraction. - -Refinement calculus is a formal system (inspired by Hoare logic) that promotes program refinement. The FermaT Transformation System is an industrial-strength implementation of refinement. The B-Method is also a formal method that extends refinement calculus with a component language: it has been used in industrial developments. - -In type theory, a refinement type is a type endowed with a predicate which is assumed to hold for any element of the refined type. Refinement types can express preconditions when used as function arguments or postconditions when used as return types: for instance, the type of a function which accepts natural numbers and returns natural numbers greater than 5 may be written as $f: \mathbb{N} \to \{n: \mathbb{N} \mid n > 5\}$. Refinement types are thus related to behavioral subtyping.
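A minimal sketch of the operation-refinement example above (added here for illustration; the function names are hypothetical): the specification is nondeterministic, the implementation deterministic, and the refinement is valid because every implementation outcome is allowed by the specification.

import random

def op_spec():
    # abstract specification: the final value x' may be any of {1, 2, 3}
    return random.choice([1, 2, 3])

def op_impl():
    # refined, deterministic implementation: x' in {1}, i.e. x := 1
    return 1

# Check the refinement relation on outcomes:
# the implementation's postcondition {1} is a subset of the spec's {1, 2, 3}.
assert {op_impl()} <= {1, 2, 3}
print('op_impl refines op_spec')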
diff --git a/wiki/wikipedia/2513.txt b/wiki/wikipedia/2513.txt deleted file mode 100644 index 120187630b1af19c1e948aa211c65efb7490e54e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2513.txt +++ /dev/null @@ -1,6 +0,0 @@ -In general relativity, the peeling theorem describes the asymptotic behavior of the Weyl tensor as one goes to null infinity. Let $\gamma$ be a null geodesic in a spacetime $(M, g_{ab})$ from a point p to null infinity, with affine parameter $\lambda$. Then the theorem states that, as $\lambda$ tends to infinity: -$$ -C_{abcd} = \frac{C^{(1)}_{abcd}}{\lambda}+\frac{C^{(2)}_{abcd}}{\lambda^2}+\frac{C^{(3)}_{abcd}}{\lambda^3}+\frac{C^{(4)}_{abcd}}{\lambda^4}+O\left(\frac{1}{\lambda^5}\right) -$$ - -where $C_{abcd}$ is the Weyl tensor, and we used the abstract index notation. Moreover, in the Petrov classification, $C^{(1)}_{abcd}$ is type N, $C^{(2)}_{abcd}$ is type III, $C^{(3)}_{abcd}$ is type II (or II-II) and $C^{(4)}_{abcd}$ is type I. diff --git a/wiki/wikipedia/2514.txt b/wiki/wikipedia/2514.txt deleted file mode 100644 index d0116aec2ebce190e784a7f31662d749413d3f55..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2514.txt +++ /dev/null @@ -1,17 +0,0 @@ -In the mathematical discipline of graph theory, the Tutte–Berge formula is a characterization of the size of a maximum matching in a graph. It is a generalization of Tutte's theorem on perfect matchings, and is named after W. T. Tutte (who proved Tutte's theorem) and Claude Berge (who proved its generalization). - -The theorem states that the size of a maximum matching of a graph $G=(V,E)$ equals -$$ -\frac{1}{2} \min_{U\subseteq V} \left(|U|-\operatorname{odd}(G-U)+|V|\right), -$$ - -where $\operatorname{odd}(H)$ counts how many of the connected components of the graph $H$ have an odd number of vertices. - -Equivalently, the number of unmatched vertices in a maximum matching equals
-$$
-\max_{U\subseteq V} \left(\operatorname{odd}(G-U)-|U|\right).
-$$
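As a sanity check on the statement, here is a small brute-force Python sketch (exponential in the number of vertices and edges, so only suitable for tiny graphs; the helper names are ours). On the star $K_{1,3}$ the maximum matching has size 1 and leaves two vertices unmatched, in agreement with both forms of the formula.

```python
from itertools import combinations

def components(verts, edges):
    """Connected components of the subgraph induced on `verts`."""
    verts = set(verts)
    adj = {v: set() for v in verts}
    for a, b in edges:
        if a in verts and b in verts:
            adj[a].add(b)
            adj[b].add(a)
    seen, comps = set(), []
    for v in verts:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def max_matching_size(verts, edges):
    """Largest k for which k pairwise vertex-disjoint edges exist."""
    for k in range(len(verts) // 2, 0, -1):
        for m in combinations(edges, k):
            used = [v for e in m for v in e]
            if len(used) == len(set(used)):
                return k
    return 0

def tutte_berge(verts, edges):
    """(1/2) * min over U of (|U| - odd(G-U) + |V|)."""
    verts = list(verts)
    values = []
    for r in range(len(verts) + 1):
        for U in combinations(verts, r):
            rest = set(verts) - set(U)
            odd = sum(len(c) % 2 for c in components(rest, edges))
            values.append((len(U) - odd + len(verts)) // 2)
    return min(values)

V = [0, 1, 2, 3]
E = [(0, 1), (0, 2), (0, 3)]   # the star K_{1,3}
assert max_matching_size(V, E) == tutte_berge(V, E) == 1
```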
- -Intuitively, for any subset $U$ of the vertices, the only way to completely cover an odd component of $G-U$ by a matching is for one of the matched edges covering the component to be incident to $U$. If, instead, some odd component had no matched edge connecting it to $U$, then the part of the matching that covered the component would cover its vertices in pairs, but since the component has an odd number of vertices it would necessarily include at least one leftover and unmatched vertex. Therefore, if some choice of $U$ has few vertices but its removal creates a large number of odd components, then there will be many unmatched vertices, implying that the matching itself will be small. This reasoning can be made precise by stating that the size of a maximum matching is at most equal to the value given by the Tutte–Berge formula. - -The characterization of Tutte and Berge proves that this is the only obstacle to creating a large matching: the size of the optimal matching will be determined by the subset $U$ with the biggest difference between its numbers of odd components outside $U$ and vertices inside $U$. That is, there always exists a subset $U$ such that deleting $U$ creates the correct number of odd components needed to make the formula true. One way to find such a set $U$ is to choose any maximum matching $M$, and to let $X$ be the set of vertices that are either unmatched in $M$, or that can be reached from an unmatched vertex by an alternating path that ends with a matched edge. Then, let $U$ be the set of vertices that are matched by $M$ to vertices in $X$. No two vertices in $X$ can be adjacent, for if they were then their alternating paths could be concatenated to give a path by which the matching could be increased, contradicting the maximality of $M$. Every neighbor of a vertex $x$ in $X$ must belong to $U$, for otherwise we could extend an alternating path to $x$ by one more pair of edges, through the neighbor, causing the neighbor to become part of $U$. Therefore, in $G-U$, every vertex of $X$ forms a single-vertex component, which is odd. There can be no other odd components, because all other vertices remain matched after deleting $U$. So with this construction the size of $U$ and the number of odd components created by deleting $U$ are what they need to be to make the formula true. - -Tutte's theorem characterizes the graphs with perfect matchings as being the ones for which deleting any subset $U$ of vertices creates at most $|U|$ odd components. (A subset $U$ that creates at least $|U|$ odd components can always be found as the empty set.) In this case, by the Tutte–Berge formula, the size of the matching is $|V|/2$; that is, the maximum matching is a perfect matching. Thus, Tutte's theorem can be derived as a corollary of the Tutte–Berge formula, and the formula can be seen as a generalization of Tutte's theorem. diff --git a/wiki/wikipedia/2515.txt b/wiki/wikipedia/2515.txt deleted file mode 100644 index 1a13be5e29d53f2fd05b172a35fcfa20e9da7e75..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2515.txt +++ /dev/null @@ -1,13 +0,0 @@ -In theoretical physics, the Haag–Łopuszański–Sohnius theorem shows that the possible symmetries of a consistent 4-dimensional quantum field theory do not only consist of internal symmetries and Poincaré symmetry, but can also include supersymmetry with central charges (CCs) as a nontrivial extension of the Poincaré algebra. Supersymmetry without CCs was discovered in 1971 by Yuri Golfand and E. P.
Likhtman, who generalized the Coleman–Mandula theorem. - -One of the important results is that the fermionic generators of the Lie superalgebra must have spin 1/2 (spin 3/2 or higher is ruled out). - -Prior to the Haag–Łopuszański–Sohnius theorem, the Coleman–Mandula theorem was the strongest of a series of no-go theorems, stating that the symmetry group of a consistent 4-dimensional quantum field theory is the direct product of the internal symmetry group and the Poincaré group. - -In 1971 Yuri Golfand and E. P. Likhtman published the first paper on four-dimensional supersymmetry, which presented (in modern notation) N=1 superalgebra and N=1 super-QED with charged matter and a mass term for the photon field. They proved that conserved supercharges can exist in four dimensions by allowing both commuting and anticommuting symmetry generators, thus providing a nontrivial extension of the Poincaré algebra, namely the supersymmetry algebra. In 1975, Rudolf Haag, Jan Łopuszański, and Martin Sohnius further generalized superalgebras by analyzing extended supersymmetries (e.g. N=2) and introducing additional central charges. - -What is most fundamental in this result (and thus in supersymmetry) is that there can be an interplay of spacetime symmetry with internal symmetry (in the sense of "mixing particles"): the supersymmetry generators transform bosonic particles into fermionic ones and vice versa, but the anticommutator of two such transformations yields a translation in spacetime. Precisely such an interplay seemed excluded by the Coleman–Mandula theorem, which stated that (bosonic) internal symmetries cannot interact non-trivially with spacetime symmetry. - -This theorem was also an important justification of the previously found Wess–Zumino model, an interacting four-dimensional quantum field theory with supersymmetry that is renormalizable. - -The theorem only deals with "visible symmetries, i.e., with symmetries of the S-matrix", and thus it is still possible that "the fundamental equations may have a higher symmetry". Expressed differently, this means the theorem does not restrict broken symmetries, but only unbroken ones. diff --git a/wiki/wikipedia/2516.txt b/wiki/wikipedia/2516.txt deleted file mode 100644 index a468aaa4d029ef1178ecf4483f33f2f09054f39a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2516.txt +++ /dev/null @@ -1,3 +0,0 @@ -Encina was a DCE-based transaction processing system developed by Transarc, which was later acquired by IBM. - -It was used as the basis of IBM TXSeries, which is a variant of CICS for non-mainframe platforms (however, in newer versions of TXSeries, the Encina component has been removed). diff --git a/wiki/wikipedia/2517.txt b/wiki/wikipedia/2517.txt deleted file mode 100644 index a506e578e537bd797e88cfa35208ffbaba78194b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2517.txt +++ /dev/null @@ -1,27 +0,0 @@ -Hex is a two-player abstract strategy board game in which players attempt to connect opposite sides of a hexagonal board. Hex was invented by mathematician and poet Piet Hein in 1942 and independently by John Nash in 1948. - -It is traditionally played on an 11×11 rhombus board, although 13×13 and 19×19 boards are also popular. Each player is assigned a pair of opposite sides of the board which they must try to connect by taking turns placing a stone of their color onto any empty space. Once placed, stones cannot be moved or removed.
A player wins when they successfully connect their sides together through a chain of adjacent stones. Draws are impossible in Hex due to the topology of the game board. - -The game has deep strategy, sharp tactics and a profound mathematical underpinning related to the Brouwer fixed-point theorem. The game was first marketed as a board game in Denmark under the name Con-tac-tix, and Parker Brothers marketed a version of it in 1952 called Hex; they are no longer in production. Hex can also be played with paper and pencil on hexagonally ruled graph paper. - -Hex-related research is current in the areas of topology, graph and matroid theory, combinatorics, combinatorial game theory and artificial intelligence. - -Hex is a connection game, a particular type of positional game, and can never end in a draw (tie). It became known in Denmark under the name Polygon due to an article by Hein in the 26 December 1942 edition of the Danish newspaper Politiken, the first published description of the game, in which he used that name. The game was independently re-invented in 1948 by the mathematician John Nash at Princeton University. According to Martin Gardner, who featured Hex in his July 1957 Mathematical Games column, Nash's fellow players called the game either Nash or John, with the latter name referring to the fact that the game could be played on hexagonal bathroom tiles. Various paradigms resulting from research into the game have been used to create Hex-playing computer programs starting about 2000. The first implementations used evaluation functions that emulated Shannon and Moore's electrical circuit model embedded in an alpha-beta search framework with hand-crafted knowledge-based patterns. Starting about 2006, Monte Carlo tree search methods borrowed from successful computer implementations of Go were introduced and soon dominated the field. Later, hand-crafted patterns were supplemented by machine learning methods for pattern discovery. These programs are now competitive against skilled human players. Elo-based ratings have been assigned to the various programs and can be used to measure technical progress as well as assess playing strength against Elo-rated humans. Current research is often published in either the quarterly ICGA Journal or the annual Advances in Computer Games series (van den Herik et al. eds.). - -Each player has an allocated color, conventionally Red and Blue or White and Black. - -A "safely connected" pattern is composed of stones of the player's color and open spaces which can be joined into a chain, an unbroken sequence of edge-wise adjacent stones, no matter how the opponent plays. One of the simplest such patterns is the bridge (see diagram 1), which consists of two stones of the same color (A and C), and a pair of open spaces (B and D). If the opponent plays in either space, the player plays in the other, creating a contiguous chain. There are also safely connected patterns which connect stones to edges. There are many more safely connected patterns, some quite complex, built up of simpler ones like those shown. Patterns and paths can be disrupted by the opponent before they are complete, so the configuration of the board during an actual game often looks like a patchwork rather than something planned or designed. The middle part of the game consists of creating a network of such weakly connected stones and patterns. On boards with unequal dimensions, the player whose sides are further apart can win regardless of who plays first.
On boards with equal dimensions, the first player has a winning strategy. This follows from John Nash's strategy-stealing argument: an extra stone of one's own color can never be a disadvantage, so if the second player had a winning strategy, the first player could appropriate it and win first, a contradiction. The argument is non-constructive, and explicit winning strategies are known only for small boards. - -Hex had an incarnation as the question board from the television game show Blockbusters. In order to play a "move", contestants had to answer a question correctly. The board had 5 alternating columns of 4 hexagons; the solo player could connect top-to-bottom in 4 moves, while the team of two could connect left-to-right in 5 moves. - -The game of Y is Hex played on a triangular grid of hexagons; the object is for either player to connect all three sides of the triangle. Y is a generalization of Hex to the extent that any position on a Hex board can be represented as an equivalent position on a larger Y board. - -Havannah is a game based on Hex. It differs from Hex in that it is played on a hexagonal grid of hexagons and a win is achieved by forming one of three patterns. - -Projex is a variation of Hex played on a real projective plane, where the players have the goal of creating a noncontractible loop. Like in Hex, there are no ties, and there is no position in which both players have a winning connection. - -As of 2016, there were tournaments reported from Brazil, Czech Republic, Denmark, France, Germany, Italy, Netherlands, Norway, Poland, Portugal, Spain, UK and the US. - -One of the largest Hex tourneys is organized by the International Committee of Mathematical Games in Paris, France, and has been held annually since 2013. - -Hex is also part of the Computer Olympiad. diff --git a/wiki/wikipedia/2518.txt b/wiki/wikipedia/2518.txt deleted file mode 100644 index 9b148e3cf0d294a1173c189be1b931a8dd0efba7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2518.txt +++ /dev/null @@ -1,65 +0,0 @@ -In concurrent computing, a deadlock is a state in which each member of a group waits for another member, including itself, to take action, such as sending a message or more commonly releasing a lock. Deadlocks are a common problem in multiprocessing systems, parallel computing, and distributed systems, where software and hardware locks are used to arbitrate shared resources and implement process synchronization. - -In an operating system, a deadlock occurs when a process or thread enters a waiting state because a requested system resource is held by another waiting process, which in turn is waiting for another resource held by another waiting process. If a process is unable to change its state indefinitely because the resources requested by it are being used by another waiting process, then the system is said to be in a deadlock. - -In a communications system, deadlocks occur mainly due to lost or corrupt signals rather than resource contention. - -[[File:Two processes, two resources.gif|thumbnail|Two processes competing for two resources in opposite order.]] - -A deadlock situation on a resource can arise if and only if all of the following conditions occur simultaneously in a system: - -# Mutual exclusion: At least one resource must be held in a non-shareable mode. Otherwise, the processes would not be prevented from using the resource when necessary. Only one process can use the resource at any given instant of time. - -# Hold and wait or resource holding: a process is currently holding at least one resource and requesting additional resources which are being held by other processes.
- -# No preemption: a resource can be released only voluntarily by the process holding it. - -# Circular wait: each process must be waiting for a resource which is being held by another process, which in turn is waiting for the first process to release the resource. In general, there is a set of waiting processes, P = {P1, P2, …, PN}, such that P1 is waiting for a resource held by P2, P2 is waiting for a resource held by P3 and so on until PN is waiting for a resource held by P1. - -These four conditions are known as the Coffman conditions from their first description in a 1971 article by Edward G. Coffman, Jr. The major approaches to handling deadlocks are as follows. - -In the deadlock-ignorance approach, it is assumed that a deadlock will never occur. This is also an application of the Ostrich algorithm. This approach was initially used by MINIX and UNIX. It is used when the time intervals between occurrences of deadlocks are large and the data loss incurred each time is tolerable. - -Ignoring deadlocks can be safely done if deadlocks are formally proven to never occur. An example is the RTIC framework. - -Under deadlock detection, deadlocks are allowed to occur. Then the state of the system is examined to detect that a deadlock has occurred, and subsequently it is corrected. An algorithm is employed that tracks resource allocation and process states; it rolls back and restarts one or more of the processes in order to remove the detected deadlock. Detecting a deadlock that has already occurred is easily possible since the resources that each process has locked and/or currently requested are known to the resource scheduler of the operating system. - -After a deadlock is detected, it can be corrected by using one of the following methods: - -# Process termination: one or more processes involved in the deadlock may be aborted. One could choose to abort all competing processes involved in the deadlock. This ensures that deadlock is resolved with certainty and speed. But the expense is high as partial computations will be lost. Or, one could choose to abort one process at a time until the deadlock is resolved. This approach has a high overhead because after each abort an algorithm must determine whether the system is still in deadlock. Several factors must be considered while choosing a candidate for termination, such as priority and age of the process. - -# Resource preemption: resources allocated to various processes may be successively preempted and allocated to other processes until the deadlock is broken. - -[[File:Avoiding deadlock.gif|380px|thumbnail|right| - -(A) Two processes competing for one resource, following a first-come, first-served policy. - -(B) Deadlock occurs when both processes lock the resource simultaneously. - -(C) The deadlock can be resolved by breaking the symmetry of the locks. - -(D) The deadlock can be prevented by breaking the symmetry of the locking mechanism.]] - -Deadlock prevention works by preventing one of the four Coffman conditions from occurring. - -* Removing the mutual exclusion condition means that no process will have exclusive access to a resource. This proves impossible for resources that cannot be spooled. But even with spooled resources, deadlock could still occur. Algorithms that avoid mutual exclusion are called non-blocking synchronization algorithms. - -* The hold and wait or resource holding conditions may be removed by requiring processes to request all the resources they will need before starting up (or before embarking upon a particular set of operations).
This advance knowledge is frequently difficult to satisfy and, in any case, is an inefficient use of resources. Another way is to require processes to request resources only when they hold none: a process must first release all its currently held resources before requesting all the resources it will need from scratch. This too is often impractical, because resources may be allocated and remain unused for long periods. Also, a process requiring a popular resource may have to wait indefinitely, as such a resource may always be allocated to some process, resulting in resource starvation. (These algorithms, such as serializing tokens, are known as the all-or-none algorithms.) - -* The no preemption condition may also be difficult or impossible to avoid as a process has to be able to have a resource for a certain amount of time, or the processing outcome may be inconsistent or thrashing may occur. However, the inability to enforce preemption may interfere with a priority algorithm. Preemption of a "locked out" resource generally implies a rollback, and is to be avoided since it is very costly in overhead. Algorithms that allow preemption include lock-free and wait-free algorithms and optimistic concurrency control. If a process that is holding some resources requests another resource that cannot be immediately allocated to it, the condition may be removed by requiring the process to release all of its currently held resources. - -* The final condition is the circular wait condition. Approaches that avoid circular waits include disabling interrupts during critical sections and using a hierarchy to determine a partial ordering of resources. If no obvious hierarchy exists, even the memory address of resources has been used to determine ordering, and resources are requested in the increasing order of the enumeration. Dijkstra's solution can also be used. - -A livelock is similar to a deadlock, except that the states of the processes involved in the livelock constantly change with regard to one another, none progressing. - -The term was coined by Edward A. Ashcroft in a 1975 paper in connection with an examination of airline booking systems. Livelock is a special case of resource starvation; the general definition only states that a specific process is not progressing. - -Livelock is a risk with some algorithms that detect and recover from deadlock. If more than one process takes action, the deadlock detection algorithm can be repeatedly triggered. This can be avoided by ensuring that only one process (chosen arbitrarily or by priority) takes action. - -Distributed deadlocks can occur in distributed systems when distributed transactions or concurrency control is being used. - -Distributed deadlocks can be detected either by constructing a global wait-for graph from local wait-for graphs at a deadlock detector or by a distributed algorithm like edge chasing. - -Phantom deadlocks are deadlocks that are falsely detected in a distributed system due to system internal delays but do not actually exist. - -For example, if a process releases a resource R1 and issues a request for R2, and the first message is lost or delayed, a coordinator (detector of deadlocks) could falsely conclude a deadlock (if the request for R2 while having R1 would cause a deadlock).
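To illustrate the resource-ordering approach to breaking circular wait, here is a minimal Python sketch using only the standard `threading` module (the helper names are ours). Both threads acquire the same two locks, but always in one globally fixed order, so a cycle in the wait-for graph, and hence a deadlock, cannot form.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def acquire_in_order(*locks):
    """Acquire locks in a fixed global order (id() stands in for an
    agreed resource hierarchy), preventing the circular-wait condition."""
    ordered = sorted(locks, key=id)
    for lock in ordered:
        lock.acquire()
    return ordered

def worker(name):
    held = acquire_in_order(lock_a, lock_b)  # same order in every thread
    try:
        print(f"{name}: holding both locks, no deadlock possible")
    finally:
        for lock in reversed(held):
            lock.release()

threads = [threading.Thread(target=worker, args=(n,)) for n in ("t1", "t2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Had the two threads instead acquired `lock_a` and `lock_b` in opposite orders, the hold-and-wait and circular-wait conditions could be satisfied simultaneously, which is exactly the two-resource deadlock pictured above.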
diff --git a/wiki/wikipedia/2519.txt b/wiki/wikipedia/2519.txt deleted file mode 100644 index 345fc946486e29581e4542095781d5564094dda3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2519.txt +++ /dev/null @@ -1,60 +0,0 @@ -In mathematics, the Cauchy–Kovalevskaya theorem (also written as the Cauchy–Kowalevski theorem) is the main local existence and uniqueness theorem for analytic partial differential equations associated with Cauchy initial value problems. A special case was proven by Augustin-Louis Cauchy (1842), and the full result by Sofya Kovalevskaya (1875). - -This theorem is about the existence of solutions to a system of m differential equations in n dimensions when the coefficients are analytic functions. The theorem and its proof are valid for analytic functions of either real or complex variables. - -Let K denote either the field of real or complex numbers, and let V = Km and W = Kn. Let A1, ..., An−1 be analytic functions defined on some neighbourhood of (0, 0) in W × V and taking values in the m × m matrices, and let b be an analytic function with values in V defined on the same neighbourhood. Then there is a neighbourhood of 0 in W on which the quasilinear Cauchy problem -$$ - \partial_{x_n}f = A_1(x,f) \partial_{x_1} f + \cdots + A_{n-1}(x,f)\partial_{x_{n-1}}f + b(x,f) -$$ - -with initial condition -$$ - f(x) = 0 -$$ - -on the hypersurface -$$ - x_n = 0 -$$ - -has a unique analytic solution ƒ : W → V near 0. - -Lewy's example shows that the theorem is not more generally valid for all smooth functions. - -The theorem can also be stated in abstract (real or complex) vector spaces. Let V and W be finite-dimensional real or complex vector spaces, with n = dim W. Let A1, ..., An−1 be analytic functions with values in End (V) and b an analytic function with values in V, defined on some neighbourhood of (0, 0) in W × V. In this case, the same result holds. - -Both sides of the partial differential equation can be expanded as formal power series and give recurrence relations for the coefficients of the formal power series for f that uniquely determine the coefficients. The Taylor series coefficients of the Ai's and b are majorized in matrix and vector norm by a simple scalar rational analytic function. The corresponding scalar Cauchy problem involving this function instead of the Ai's and b has an explicit local analytic solution. The absolute values of its coefficients majorize the norms of those of the original problem; so the formal power series solution must converge where the scalar solution converges. - -If F and fj are analytic functions near 0, then the non-linear Cauchy problem -$$ - \partial_t^k h = F\left(x,t,\partial_t^j\partial_x^\alpha h \right), \text{ where } j < k \text{ and } |\alpha| \leq k, -$$ - -with initial conditions -$$ - \partial_t^j h(x,0) = f_j(x), \qquad 0 \leq j < k, -$$ - -has a unique analytic solution near 0. diff --git a/wiki/wikipedia/2520.txt b/wiki/wikipedia/2520.txt deleted file mode 100644 --- a/wiki/wikipedia/2520.txt +++ /dev/null -In probability theory, the law of total variance (also known as the variance decomposition formula or Eve's law) states that if $X$ and $Y$ are random variables on the same probability space, and the variance of $Y$ is finite, then -$$ -\operatorname{Var}(Y) = \operatorname{E}[\operatorname{Var}(Y \mid X)] + \operatorname{Var}(\operatorname{E}[Y \mid X]). -$$ - -In language perhaps better known to statisticians than to probability theorists, the two terms are the "unexplained" and the "explained" components of the variance respectively (cf. fraction of variance unexplained, explained variation). In actuarial science, specifically credibility theory, the first component is called the expected value of the process variance (EVPV) and the second is called the variance of the hypothetical means (VHM). These two components are also the source of the term "Eve's law", from the initials EV VE for "expectation of variance" and "variance of expectation". - -There is a general variance decomposition formula for $c \geq 2$ components (see below).
For example, with two conditioning random variables:
-$$
-\operatorname{Var}[Y] = \operatorname{E}\left[\operatorname{Var}\left(Y \mid X_1, X_2\right)\right] + \operatorname{E}[\operatorname{Var}(\operatorname{E}\left[Y \mid X_1, X_2\right] \mid X_1)] + \operatorname{Var}(\operatorname{E}\left[Y \mid X_1\right]),
-$$ - -which follows from the law of total conditional variance:
-$$
-\operatorname{Var}(Y \mid X_1) = \operatorname{E} \left[\operatorname{Var}(Y \mid X_1, X_2) \mid X_1\right] + \operatorname{Var} \left(\operatorname{E}\left[Y \mid X_1, X_2 \right] \mid X_1\right).
-$$ - -Note that the conditional expected value $\operatorname{E}(Y \mid X)$ is a random variable in its own right, whose value depends on the value of $X.$ Notice that the conditional expected value of $Y$ given the event $X = x$ is a function of $x$ (this is where adherence to the conventional and rigidly case-sensitive notation of probability theory becomes important!). If we write $\operatorname{E}(Y \mid X = x) = g(x)$ then the random variable $\operatorname{E}(Y \mid X)$ is just $g(X).$ Similar comments apply to the conditional variance. - -One special case (similar to the law of total expectation) states that if $A_1, \ldots, A_n$ is a partition of the whole outcome space, that is, these events are mutually exclusive and exhaustive, then - -\begin{align} - -\operatorname{Var} (X) = {} & \sum_{i=1}^n \operatorname{Var}(X\mid A_i) \Pr(A_i) + \sum_{i=1}^n \operatorname{E}[X\mid A_i]^2 (1-\Pr(A_i))\Pr(A_i) \\[4pt] - -& {} - 2\sum_{i=2}^n \sum_{j=1}^{i-1} \operatorname{E}[X \mid A_i] \Pr(A_i)\operatorname{E}[X\mid A_j] \Pr(A_j). - -\end{align} - -In this formula, the first component is the expectation of the conditional variance; the other two components are the variance of the conditional expectation. - -The law of total variance can be proved using the law of total expectation. First,
-$$
-\operatorname{Var}[Y] = \operatorname{E}\left[Y^2\right] - \operatorname{E}[Y]^2
-$$ - -from the definition of variance. Again, from the definition of variance, and applying the law of total expectation, we have
-$$
-\operatorname{E}\left[Y^2\right] = \operatorname{E}\left[\operatorname{E}[Y^2\mid X]\right] = \operatorname{E} \left[\operatorname{Var}[Y \mid X] + [\operatorname{E}[Y \mid X]]^2\right].
-$$ - -Now we rewrite the conditional second moment of $Y$ in terms of its variance and first moment, and apply the law of total expectation on the right hand side:
-$$
-\operatorname{E}\left[Y^2\right] - \operatorname{E}[Y]^2 = \operatorname{E} \left[\operatorname{Var}[Y \mid X] + [\operatorname{E}[Y \mid X]]^2\right] - [\operatorname{E} [\operatorname{E}[Y \mid X]]]^2.
-$$ - -Since the expectation of a sum is the sum of expectations, the terms can now be regrouped:
-$$
-= \left(\operatorname{E} [\operatorname{Var}[Y \mid X]]\right) + \left(\operatorname{E} \left[\operatorname{E}[Y \mid X]^2\right] - [\operatorname{E} [\operatorname{E}[Y \mid X]]]^2\right).
-$$ - -Finally, we recognize the terms in the second set of parentheses as the variance of the conditional expectation $\operatorname{E}[Y \mid X]$:
-$$
-= \operatorname{E} [\operatorname{Var}[Y \mid X]] + \operatorname{Var} [\operatorname{E}[Y \mid X]].
-$$
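As a quick numerical companion to the proof above, the following sketch (Python with numpy; the toy mixture model and all parameter values are our own choices, not from the article) compares a sampled $\operatorname{Var}(Y)$ against the exact EVPV and VHM terms for a model where $X \sim \mathrm{Bernoulli}(0.3)$ and $Y \mid X = x$ is normal with mean $\mu_x$ and standard deviation $\sigma_x$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
p = 0.3
mu = {0: 1.0, 1: 4.0}      # conditional means E[Y | X = x]
sigma = {0: 1.0, 1: 2.0}   # conditional standard deviations

x = rng.random(n) < p
y = np.where(x,
             rng.normal(mu[1], sigma[1], n),
             rng.normal(mu[0], sigma[0], n))

evpv = (1 - p) * sigma[0]**2 + p * sigma[1]**2   # E[Var(Y | X)]
vhm = p * (1 - p) * (mu[1] - mu[0]) ** 2         # Var(E[Y | X])

print(y.var())     # close to 3.79 (Monte Carlo estimate)
print(evpv + vhm)  # exactly 3.79: the two terms recover the total variance
```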
The following formula shows how to apply the general, measure-theoretic variance decomposition formula to stochastic dynamic systems. Let $Y(t)$ be the value of a system variable at time $t.$ Suppose we have the internal histories (natural filtrations) $H_{1t},H_{2t},\ldots,H_{c-1,t}$, each one corresponding to the history (trajectory) of a different collection of system variables. The collections need not be disjoint. The variance of $Y(t)$ can be decomposed, for all times $t,$ into $c \geq 2$ components as follows: - -\begin{align} - -\operatorname{Var}[Y(t)] = {} & \operatorname{E}(\operatorname{Var}[Y(t)\mid H_{1t},H_{2t},\ldots,H_{c-1,t}]) \\[4pt] - -& {} + \sum_{j=2}^{c-1}\operatorname{E}(\operatorname{Var}[\operatorname{E}[Y(t)\mid H_{1t},H_{2t},\ldots,H_{jt}] \mid H_{1t},H_{2t},\ldots,H_{j-1,t}]) \\[4pt] - -& {} + \operatorname{Var}(\operatorname{E}[Y(t)\mid H_{1t}]). - -\end{align} - -The decomposition is not unique. It depends on the order of the conditioning in the sequential decomposition. - -In cases where $(Y, X)$ are such that the conditional expected value is linear; that is, in cases where
-$$
-\operatorname{E}(Y \mid X) = a X + b,
-$$ - -it follows from the bilinearity of covariance that
-$$
-a={\operatorname{Cov}(Y, X) \over \operatorname{Var}(X)}
-$$ - -and
-$$
-b = \operatorname{E}(Y)-{\operatorname{Cov}(Y, X) \over \operatorname{Var}(X)} \operatorname{E}(X),
-$$ - -and the explained component of the variance divided by the total variance is just the square of the correlation between $Y$ and $X;$ that is, in such cases,
-$$
-{\operatorname{Var}(\operatorname{E}(Y \mid X)) \over \operatorname{Var}(Y)} = \operatorname{Corr}(X, Y)^2.
-$$ - -One example of this situation is when $(X, Y)$ have a bivariate normal (Gaussian) distribution. - -More generally, when the conditional expectation $\operatorname{E}(Y \mid X)$ is a non-linear function of $X,$ the explained fraction of variance is
-$$
-\iota_{Y\mid X} = {\operatorname{Var}(\operatorname{E}(Y \mid X)) \over \operatorname{Var}(Y)} = \operatorname{Corr}(\operatorname{E}(Y \mid X), Y)^2,
-$$ - -which can be estimated as the $R^2$ from a non-linear regression of $Y$ on $X,$ using data drawn from the joint distribution of $(X, Y).$ When $\operatorname{E}(Y \mid X)$ has a Gaussian distribution (and is an invertible function of $X$), or $Y$ itself has a (marginal) Gaussian distribution, this explained component of variation sets a lower bound on the mutual information:
-$$
-\operatorname{I}(Y; X) \geq \ln \left([1 - \iota_{Y \mid X}]^{-1/2}\right).
-$$ - -A similar law for the third central moment $\mu_3$ says
-$$
-\mu_3(Y)=\operatorname{E}\left(\mu_3(Y \mid X)\right) + \mu_3(\operatorname{E}(Y \mid X)) + 3\operatorname{cov}(\operatorname{E}(Y \mid X), \operatorname{var}(Y \mid X)).
-$$ - -For higher cumulants, a generalization exists. See law of total cumulance. diff --git a/wiki/wikipedia/2521.txt b/wiki/wikipedia/2521.txt deleted file mode 100644 index bc642da9ad26ad00dea47980306cd244da9520b3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2521.txt +++ /dev/null @@ -1,143 +0,0 @@ -The following database management systems and other software use multiversion concurrency control.
- -* Altibase - -* ArangoDB - -* Berkeley DB - -* Cloudant - -* Cloud Spanner - -* Clustrix - -* CockroachDB - -* Couchbase - -* CouchDB - -* CUBRID - -* IBM Db2 – since IBM DB2 9.7 LUW ("Cobra") under CS isolation level – in currently committed mode - -* IBM Cognos TM1 – in versions 9.5.2 and up - -* Drizzle - -* Druid - -* etcd - -* EXASOL - -* eXtremeDB - -* Firebird - -* FLAIM - -* FoundationDB - -* GE Smallworld Version Managed Data Store - -* H2 Database Engine – experimental since version 1.0.57 (2007-08-25) - -* HBase - -* HSQLDB – starting with version 2.0 - -* IBM Netezza - -* InfiniDB - -* Ingres - -* InterBase – all versions - -* LeanXcale - -* LMDB and MDBX - -* MariaDB (MySQL fork) – when used with XtraDB, an InnoDB fork that is included in MariaDB sources and binaries - -* MarkLogic Server - -* MemSQL - -* Meronymy SPARQL Database Server - -* Microsoft SQL Server – when using READ_COMMITTED_SNAPSHOT, starting with SQL Server 2005 - -* MonetDB - -* MongoDB – when used with the WiredTiger storage engine - -* MySQL – when used with InnoDB, Falcon, or Archive storage engines - -* NuoDB - -* ObjectDB - -* ObjectStore - -* Oracle database – all versions since Oracle 4 - -* Oracle (née DEC) Rdb - -* OrientDB - -* PostgreSQL - -* Postgres-XL - -* Rdb/ELN - -* RDM Embedded - -* REAL Server - -* Realm - -* RethinkDB - -* SAP HANA - -* SAP IQ - -* sones GraphDB - -* Splice Machine - -* Sybase SQL Anywhere - -* Sybase ASE 16 - -* Teradata - -* TerminusDB - -* Tibero – all versions since Tibero 3 - -* TokuMX - -* Actian Vector - -* YugabyteDB - -* Zope Object Database - -* JBoss Cache – v 3.0 - -* Ehcache – v 1.6.0-beta4 - -* Clojure – language software transactional memory - -* pojo-mvcc – a lightweight MVCC implementation written in Java - -* JVSTM – software transactional memory that implements the concept of versioned boxes - -* Apache Jackrabbit Oak diff --git a/wiki/wikipedia/2522.txt b/wiki/wikipedia/2522.txt deleted file mode 100644 index d3954d094e09a1cb345cb41171acc3b1021052b9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2522.txt +++ /dev/null @@ -1,61 +0,0 @@ -The uniqueness theorem for Poisson's equation states that, for a large class of boundary conditions, the equation may have many solutions, but the gradient of every solution is the same. In the case of electrostatics, this means that there is a unique electric field derived from a potential function satisfying Poisson's equation under the boundary conditions. - -The general expression for Poisson's equation in electrostatics is
-$$
-\mathbf{\nabla}^2 \varphi = -\frac{\rho_f}{\epsilon_0},
-$$ - -where $\varphi$ is the electric potential and $\rho_f$ is the charge distribution over some region $V$ with boundary surface $S$. - -The uniqueness of the solution can be proven for a large class of boundary conditions as follows. - -Suppose that we claim to have two solutions of Poisson's equation. Let us call these two solutions $\varphi_1$ and $\varphi_2$. Then
-$$
-\mathbf{\nabla}^2 \varphi_1 = - \frac{\rho_f}{\epsilon_0},
-$$ and
-$$
-\mathbf{\nabla}^2 \varphi_2 = - \frac{\rho_f}{\epsilon_0}.
-$$ - -It follows that the difference $\varphi=\varphi_2-\varphi_1$ is a solution of Laplace's equation, the special case of Poisson's equation with zero source term. Subtracting the two equations above gives - -{{NumBlk||\mathbf{\nabla}^2 \varphi = \mathbf{\nabla}^2 \varphi_2 - \mathbf{\nabla}^2 \varphi_1 = 0.
|(1)}} - -By applying the vector differential identity we know that
-$$
-\nabla \cdot (\varphi \nabla \varphi )= (\nabla \varphi )^2 + \varphi \nabla^2 \varphi.
-$$ - -However, from (1) we also know that throughout the region $\nabla^2 \varphi = 0.$ Consequently, the second term goes to zero and we find that
-$$
-\nabla \cdot (\varphi \nabla \varphi )= (\nabla \varphi )^2.
-$$ - -By taking the volume integral over the region $V$, we find that
-$$
-\int_V \mathbf{\nabla}\cdot(\varphi \mathbf{\nabla}\varphi) \mathrm{d}V = \int_V (\mathbf{\nabla}\varphi)^2 \mathrm{d}V.
-$$ - -By applying the divergence theorem, we rewrite the expression above as - -{{NumBlk||\int_{S} (\varphi \mathbf{\nabla}\varphi) \cdot \mathrm{d}\mathbf{S}= \int_V (\mathbf{\nabla}\varphi)^2 \mathrm{d}V. |(2)}} - -We now sequentially consider three distinct boundary conditions: a Dirichlet boundary condition, a Neumann boundary condition, and a mixed boundary condition. - -First, we consider the case where Dirichlet boundary conditions are specified as $\varphi = 0$ on the boundary of the region. If the Dirichlet boundary condition is satisfied on $S$ by both solutions (i.e., if $\varphi = 0$ on the boundary), then the left-hand side of (2) is zero. Consequently, we find that
-$$
-\int_V (\mathbf{\nabla}\varphi)^2 \mathrm{d}V = 0.
-$$ - -Since this is the volume integral of a non-negative quantity (the squared gradient), we must have $\nabla \varphi = 0$ at all points. Further, because the gradient of $\varphi$ is everywhere zero and $\varphi$ is zero on the boundary, $\varphi$ must be zero throughout the whole region. Finally, since $\varphi = 0$ throughout the whole region, and since $\varphi = \varphi_2 - \varphi_1$ throughout the whole region, therefore $\varphi_1 = \varphi_2$ throughout the whole region. This completes the proof that the solution of Poisson's equation with a Dirichlet boundary condition is unique. - -Second, we consider the case where Neumann boundary conditions are specified as $\nabla\varphi \cdot \mathbf{n} = 0$ on the boundary of the region. If the Neumann boundary condition is satisfied on $S$ by both solutions, then the left-hand side of (2) is zero again. Consequently, as before, we find that
-$$
-\int_V (\mathbf{\nabla}\varphi)^2 \mathrm{d}V = 0.
-$$ - -As before, since this is the volume integral of a non-negative quantity, we must have $\nabla \varphi = 0$ at all points, so $\varphi$ must be constant (but not necessarily zero) throughout the whole region. Finally, since $\varphi$ equals some constant $k$ throughout the whole region, and since $\varphi = \varphi_2 - \varphi_1$, therefore $\varphi_1 = \varphi_2 - k$ throughout the whole region. This completes the proof that the solution of Poisson's equation with a Neumann boundary condition is unique up to an additive constant. - -Mixed boundary conditions could be given as long as either the normal derivative or the potential is specified at each point of the boundary. Boundary conditions at infinity also work: this results from the fact that the surface integral in (2) still vanishes at large distances because the integrand decays faster than the surface area grows.
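The dichotomy between the Dirichlet and Neumann cases can be seen in a discrete analogue (a sketch in Python with numpy; the 1-D grid and the matrix construction are our own illustration, not part of the theorem). With Dirichlet conditions the discrete Laplacian is nonsingular, so the discrete solution is unique; with pure Neumann conditions its null space is spanned by the constant vector, so solutions are unique only up to an additive constant.

```python
import numpy as np

n = 50
main = -2.0 * np.eye(n)
off = np.eye(n, k=1) + np.eye(n, k=-1)

L_dirichlet = main + off   # potential pinned to zero just outside the grid

L_neumann = main + off     # zero-flux (reflecting) endpoints
L_neumann[0, 0] = -1.0
L_neumann[-1, -1] = -1.0

print(np.linalg.matrix_rank(L_dirichlet))  # n: full rank -> unique solution
print(np.linalg.matrix_rank(L_neumann))    # n-1: one-dimensional null space
print(np.allclose(L_neumann @ np.ones(n), 0.0))  # True: constants are in it
```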
diff --git a/wiki/wikipedia/2523.txt b/wiki/wikipedia/2523.txt deleted file mode 100644 index 5134cb5f4aa91f0b5f19f3244e3eb1d14d18f4c3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2523.txt +++ /dev/null @@ -1,13 +0,0 @@ -Open-shop scheduling or open-shop scheduling problem (OSSP) is an optimization problem in computer science and operations research. It is a variant of optimal job scheduling. In a general job-scheduling problem, we are given n jobs J1, J2, ..., Jn of varying processing times, which need to be scheduled on m machines with varying processing power, while trying to minimize the makespan – the total length of the schedule (that is, when all the jobs have finished processing). In the specific variant known as open-shop scheduling, each job consists of a set of operations O1, O2, ..., Om which need to be processed in an arbitrary order. The problem was first studied by Teofilo F. Gonzalez and Sartaj Sahni in 1976. - -In the standard three-field notation for optimal job-scheduling problems, the open-shop variant is denoted by O in the first field. For example, the problem denoted by "O3|$p_{ij}=1$|$C_\max$" is a 3-machine open-shop problem with unit processing times, where the goal is to minimize the maximum completion time. - -The input to the open-shop scheduling problem consists of a set of n jobs, another set of m workstations, and a two-dimensional table of the amount of time each job should spend at each workstation (possibly zero). Each job may be processed only at one workstation at a time, and each workstation can process only one job at a time. However, unlike the job-shop problem, the order in which the processing steps happen can vary freely. The goal is to assign a time for each job to be processed by each workstation, so that no two jobs are assigned to the same workstation at the same time, no job is assigned to two workstations at the same time, and every job is assigned to each workstation for the desired amount of time. The usual measure of quality of a solution is its makespan, the amount of time from the start of the schedule (the first assignment of a job to a workstation) to its end (the finishing time of the last job at the last workstation). - -The open-shop scheduling problem can be solved in polynomial time for instances that have only two workstations or only two jobs. It may also be solved in polynomial time when all nonzero processing times are equal: in this case the problem becomes equivalent to edge coloring a bipartite graph that has the jobs and workstations as its vertices, and that has an edge for every job-workstation pair that has a nonzero processing time. The color of an edge in the coloring corresponds to the segment of time at which a job-workstation pair is scheduled to be processed. Because the line graphs of bipartite graphs are perfect graphs, bipartite graphs may be edge-colored in polynomial time. - -For three or more workstations, or three or more jobs, with varying processing times, open-shop scheduling is NP-hard. - -* Job-shop scheduling is a similar problem but with an additional constraint: the operations must be done in a specific order. - -* Flow-shop scheduling is a job-shop problem with an additional flow constraint: each operation must be done on a specific machine.
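To make the makespan objective concrete, here is a tiny brute-force Python sketch (our own illustration; this is not the polynomial-time algorithm of Gonzalez and Sahni). It enumerates the orders in which the four operations of a 2-job, 2-machine instance can be dispatched, schedules each operation greedily at the earliest time its job and machine are both free, and compares the best makespan found with the trivial lower bound, the larger of the busiest machine load and the longest job.

```python
from itertools import permutations

# p[(job, machine)] = processing time
p = {(0, 0): 1, (0, 1): 2, (1, 0): 2, (1, 1): 1}

def makespan(order):
    """List-schedule the operations in `order`, each at the earliest
    time at which both its job and its machine are idle."""
    job_free = {0: 0, 1: 0}
    mach_free = {0: 0, 1: 0}
    for job, mach in order:
        start = max(job_free[job], mach_free[mach])
        job_free[job] = mach_free[mach] = start + p[(job, mach)]
    return max(job_free.values())

best = min(makespan(order) for order in permutations(p))
lower_bound = max(
    max(p[j, 0] + p[j, 1] for j in (0, 1)),   # longest job
    max(p[0, m] + p[1, m] for m in (0, 1)),   # busiest machine
)
print(best, lower_bound)   # 3 3 -- the bound is met, so 3 is optimal here
```

Because each job may visit the machines in any order, the two jobs can traverse the machines in opposite orders and keep both machines busy for the whole horizon, which is exactly the schedule the search finds.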
diff --git a/wiki/wikipedia/2524.txt b/wiki/wikipedia/2524.txt deleted file mode 100644 index 94f219064c086ba5fd0948932194298329ab142f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2524.txt +++ /dev/null @@ -1,19 +0,0 @@ -In probability theory, the Helly–Bray theorem relates the weak convergence of cumulative distribution functions to the convergence of expectations of certain measurable functions. It is named after Eduard Helly and Hubert Evelyn Bray. - -Let F and F1, F2, ... be cumulative distribution functions on the real line. The Helly–Bray theorem states that if Fn converges weakly to F, then
-$$
-\int_\mathbb{R} g(x)dF_n(x) \quad\xrightarrow[n\to\infty]{}\quad \int_\mathbb{R} g(x)dF(x)
-$$ - -for each bounded, continuous function g: R → R, where the integrals involved are Riemann–Stieltjes integrals. - -Note that if X and X1, X2, ... are random variables corresponding to these distribution functions, then the Helly–Bray theorem does not imply that E(Xn) → E(X), since g(x) = x is not a bounded function. - -In fact, a stronger and more general theorem holds. Let P and P1, P2, ... be probability measures on some metric space S. Then Pn converges weakly to P if and only if
-$$
-\int_S g dP_n \quad\xrightarrow[n\to\infty]{}\quad \int_S g dP,
-$$ - -for all bounded, continuous, real-valued functions g on S. (The integrals in this version of the theorem are Lebesgue–Stieltjes integrals.) - -The more general theorem above is sometimes taken as defining weak convergence of measures (see Billingsley, 1999, p. 3). diff --git a/wiki/wikipedia/2525.txt b/wiki/wikipedia/2525.txt deleted file mode 100644 index 345fc946486e29581e4542095781d5564094dda3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2525.txt +++ /dev/null @@ -1,11 +0,0 @@ -The International Symposium on Graph Drawing (GD) is an annual academic conference in which researchers present peer-reviewed papers on graph drawing, information visualization of network information, geometric graph theory, and related topics. - -The Graph Drawing symposia have been central to the growth and development of graph drawing as a research area: as Herman et al. write, "the Graph Drawing community grew around the yearly Symposia." Nguyen lists Graph Drawing as one of "several good conferences which directly or indirectly concern with information visualization", and Wong et al. report that its proceedings "provide a wealth of information". In a 2003 study the symposium was among the top 30% of computer science research publication venues, ranked by impact factor. - -The first symposium was held in Marino, near Rome, Italy, in 1992, organized by Giuseppe Di Battista, Peter Eades, Pierre Rosenstiehl, and Roberto Tamassia. The first two symposia did not publish proceedings, but reports are available online. Since 1994, the proceedings of the symposia have been published by Springer-Verlag's Lecture Notes in Computer Science series. - -Countries in which the conference has been held include Australia, Austria, Canada, the Czech Republic, France, Germany (twice), Greece, Ireland, Italy (three times), and the United States (five times). - -A citation graph having vertices representing the papers in the 1994–2000 Graph Drawing symposia and having edges representing citations between these papers was made available as part of the graph drawing contest associated with the 2001 symposium.
- -The largest connected component of this graph consists of 249 vertices and 642 edges; clustering analysis reveals several prominent subtopics within graph drawing that are more tightly connected, including three-dimensional graph drawing and orthogonal graph drawing. diff --git a/wiki/wikipedia/2526.txt b/wiki/wikipedia/2526.txt deleted file mode 100644 index e2a30b5f59a063a9dc235ce566d9ac576ee68bfb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2526.txt +++ /dev/null @@ -1,249 +0,0 @@ -In mathematics, the Fourier inversion theorem says that for many types of functions it is possible to recover a function from its Fourier transform. Intuitively it may be viewed as the statement that if we know all frequency and phase information about a wave then we may reconstruct the original wave precisely. - -The theorem says that if we have a function $f:\R \to \C$ satisfying certain conditions, and we use the convention for the Fourier transform that -$$ -(\mathcal{F}f)(\xi):=\int_{\mathbb{R}} e^{-2\pi iy\cdot\xi} f(y)dy, -$$ - -then -$$ -f(x)=\int_{\mathbb{R}} e^{2\pi ix\cdot\xi} (\mathcal{F}f)(\xi)d\xi. -$$ - -In other words, the theorem says that -$$ -f(x)=\iint_{\mathbb{R}^2} e^{2\pi i(x-y)\cdot\xi} f(y)dyd\xi. -$$ - -This last equation is called the Fourier integral theorem. - -Another way to state the theorem is that if $R$ is the flip operator i.e. $(Rf)(x) := f(-x)$, then -$$ -\mathcal{F}^{-1}=\mathcal{F}R=R\mathcal{F}. -$$ - -The theorem holds if both $f$ and its Fourier transform are absolutely integrable (in the Lebesgue sense) and $f$ is continuous at the point $x$. However, even under more general conditions versions of the Fourier inversion theorem hold. In these cases the integrals above may not converge in an ordinary sense. - -In this section we assume that $f$ is an integrable continuous function. Use the convention for the Fourier transform that -$$ -(\mathcal{F}f)(\xi):=\int_{\mathbb{R}^n} e^{-2\pi iy\cdot\xi} f(y)dy. -$$ - -Furthermore, we assume that the Fourier transform is also integrable. - -The most common statement of the Fourier inversion theorem is to state the inverse transform as an integral. For any integrable function $g$ and all $x \in \mathbb R^n$ set -$$ -\mathcal{F}^{-1}g(x):=\int_{\mathbb{R}^n} e^{2\pi ix\cdot\xi} g(\xi)d\xi. -$$ - -Then for all $x \in \mathbb R^n$ we have -$$ -\mathcal{F}^{-1}(\mathcal{F}f)(x)=f(x). -$$ - -The theorem can be restated as -$$ -f(x)=\int_{\mathbb{R}^n} \int_{\mathbb{R}^n} e^{2\pi i(x-y)\cdot\xi} f(y)dyd\xi. -$$ - -If f is real valued then by taking the real part of each side of the above we obtain -$$ -f(x)=\int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \cos (2\pi (x-y)\cdot\xi) f(y)dyd\xi. -$$ - -For any function $g$ define the flip operator $R$ by -$$ -Rg(x):=g(-x). -$$ - -Then we may instead define -$$ -\mathcal{F}^{-1}f := R\mathcal{F}f = \mathcal{F}Rf. -$$ - -It is immediate from the definition of the Fourier transform and the flip operator that both $R\mathcal{F}f$ and $\mathcal{F}Rf$ match the integral definition of $\mathcal{F}^{-1}f$, and in particular are equal to each other and satisfy $\mathcal{F}^{-1}(\mathcal{F}f)(x)=f(x)$. - -Since $Rf=R\mathcal{F}^{-1}\mathcal{F}f =RR \mathcal{FF}f$ we have $R=\mathcal{F}^2$ and -$$ -\mathcal{F}^{-1}=\mathcal{F}^3. -$$ - -The form of the Fourier inversion theorem stated above, as is common, is that -$$ -\mathcal{F}^{-1}(\mathcal{F}f)(x) = f(x). -$$ - -In other words, $\mathcal{F}^{-1}$ is a left inverse for the Fourier transform. 
However, it is also a right inverse for the Fourier transform, i.e. -$$ -\mathcal{F}(\mathcal{F}^{-1}f)(\xi) = f(\xi). -$$ - -Since $\mathcal{F}^{-1}$ is so similar to $\mathcal{F}$, this follows very easily from the Fourier inversion theorem (changing variables $\zeta := -\xi$): - -\begin{align} - -f & =\mathcal{F}^{-1}(\mathcal{F}f)(x)\\[6pt] - -& =\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}e^{2\pi ix\cdot\xi}e^{-2\pi iy\cdot\xi} f(y) dy d\xi\\[6pt] - -& =\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}e^{-2\pi ix\cdot\zeta}e^{2\pi iy\cdot\zeta} f(y) dy d\zeta\\[6pt] - -& =\mathcal{F}(\mathcal{F}^{-1}f)(x). - -\end{align} - -Alternatively, this can be seen from the relation between $\mathcal{F}^{-1}f$ and the flip operator and the associativity of function composition, since -$$ -f = \mathcal{F}^{-1}(\mathcal{F}f) = \mathcal{F}R\mathcal{F}f = \mathcal{F} (\mathcal{F}^{-1}f). -$$ - -When used in physics and engineering, the Fourier inversion theorem is often used under the assumption that everything "behaves nicely". In mathematics such heuristic arguments are not permitted, and the Fourier inversion theorem includes an explicit specification of what class of functions is being allowed. However, there is no "best" class of functions to consider, so several variants of the Fourier inversion theorem exist, albeit with compatible conclusions. - -The Fourier inversion theorem holds for all Schwartz functions (roughly speaking, smooth functions that decay quickly and whose derivatives all decay quickly). This condition has the benefit that it is an elementary direct statement about the function (as opposed to imposing a condition on its Fourier transform), and the integrals that define the Fourier transform and its inverse are absolutely convergent. This version of the theorem is used in the proof of the Fourier inversion theorem for tempered distributions (see below). - -The Fourier inversion theorem holds for all continuous functions that are absolutely integrable (i.e. $L^1(\mathbb R^n)$) with absolutely integrable Fourier transform. This includes all Schwartz functions, so is a strictly stronger form of the theorem than the previous one mentioned. This condition is the one used above in the statement section. - -A slight variant is to drop the condition that the function $f$ be continuous but still require that it and its Fourier transform be absolutely integrable. Then $f = g$ almost everywhere, where $g$ is a continuous function, and $\mathcal{F}^{-1}(\mathcal{F}f)(x)=g(x)$ for every $x \in \mathbb R^n$. - -; Piecewise smooth; one dimension - -If the function is absolutely integrable in one dimension (i.e. $f \in L^1(\mathbb R)$) and is piecewise smooth then a version of the Fourier inversion theorem holds. In this case we define -$$ -\mathcal{F}^{-1}g(x):=\lim_{R\to\infty}\int_{-R}^R e^{2\pi ix\xi}g(\xi)d\xi. -$$ - -Then for all $x \in \mathbb R$ -$$ -\mathcal{F}^{-1}(\mathcal{F}f)(x) = \frac{1}{2}(f(x_-) + f(x_+)), -$$ - -i.e. $\mathcal{F}^{-1}(\mathcal{F}f)(x)$ equals the average of the left and right limits of $f$ at $x$. At points where $f$ is continuous this simply equals $f(x)$. - -A higher-dimensional analogue of this form of the theorem also holds, but according to Folland (1992) is "rather delicate and not terribly useful". - -; Piecewise continuous; one dimension - -If the function is absolutely integrable in one dimension (i.e. $f \in L^1(\mathbb R)$) but merely piecewise continuous then a version of the Fourier inversion theorem still holds.
In this case the integral in the inverse Fourier transform is defined with the aid of a smooth rather than a sharp cut off function; specifically we define -$$ -\mathcal{F}^{-1}g(x):=\lim_{R\to\infty}\int_{\mathbb{R}} \varphi(\xi/R)e^{2\pi ix\xi}g(\xi)d\xi,\qquad\varphi(\xi):=e^{-\xi^2}. -$$ - -The conclusion of the theorem is then the same as for the piecewise smooth case discussed above. - -; Continuous; any number of dimensions - -If $f$ is continuous and absolutely integrable on $\mathbb R^n$ then the Fourier inversion theorem still holds so long as we again define the inverse transform with a smooth cut off function i.e. -$$ -\mathcal{F}^{-1}g(x):=\lim_{R\to\infty}\int_{\mathbb{R}^n} \varphi(\xi/R)e^{2\pi ix\cdot\xi}g(\xi)d\xi,\qquad\varphi(\xi):=e^{-\vert\xi\vert^2}. -$$ - -The conclusion is now simply that for all $x \in \mathbb R^n$ -$$ -\mathcal{F}^{-1}(\mathcal{F}f)(x)=f(x). -$$ - -; No regularity condition; any number of dimensions - -If we drop all assumptions about the (piecewise) continuity of $f$ and assume merely that it is absolutely integrable, then a version of the theorem still holds. The inverse transform is again defined with the smooth cut off, but with the conclusion that -$$ -\mathcal{F}^{-1}(\mathcal{F}f)(x) = f(x) -$$ - -for almost every $x \in \mathbb R^n.$ - -In this case the Fourier transform cannot be defined directly as an integral since it may not be absolutely convergent, so it is instead defined by a density argument (see the Fourier transform article). For example, putting -$$ -g_k(\xi):=\int_{\{y\in\mathbb{R}^n:\left\vert y\right\vert\leq k\}} e^{-2\pi iy\cdot\xi} f(y)dy,\qquad k\in\mathbb{N}, -$$ - -we can set $\textstyle\mathcal{F}f := \lim_{k\to\infty}g_k$ where the limit is taken in the $L^2$-norm. The inverse transform may be defined by density in the same way or by defining it in terms of the Fourier transform and the flip operator. We then have -$$ -f(x)=\mathcal{F}(\mathcal{F}^{-1}f)(x)=\mathcal{F}^{-1}(\mathcal{F}f)(x) -$$ - -in the mean squared norm. In one dimension (and one dimension only), it can also be shown that it converges for almost every $x\in\mathbb{R}$; this is Carleson's theorem, but it is much harder to prove than convergence in the mean squared norm. - -The Fourier transform may be defined on the space of tempered distributions $\mathcal{S}'(\mathbb{R}^n)$ by duality of the Fourier transform on the space of Schwartz functions. Specifically for $f\in\mathcal{S}'(\mathbb{R}^n)$ and for all test functions $\varphi\in\mathcal S(\mathbb{R}^n)$ we set -$$ -\langle \mathcal{F}f,\varphi\rangle := \langle f,\mathcal{F}\varphi\rangle, -$$ - -where $\mathcal{F}\varphi$ is defined using the integral formula. If $f \in L^1(\mathbb R^n) \cap L^2(\mathbb R^n)$ then this agrees with the usual definition. We may define the inverse transform $\mathcal{F}^{-1}\colon\mathcal{S}'(\mathbb{R}^n)\to\mathcal{S}'(\mathbb{R}^n)$, either by duality from the inverse transform on Schwartz functions in the same way, or by defining it in terms of the flip operator (where the flip operator is defined by duality). We then have -$$ -\mathcal{F}\mathcal{F}^{-1} = \mathcal{F}^{-1}\mathcal{F} = \operatorname{Id}_{\mathcal{S}'(\mathbb{R}^n)}. -$$
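Numerically, the inversion identity can be watched in action with a discrete transform pair (an illustration only, not part of the theory above: numpy's `fft`/`ifft` play the roles of $\mathcal{F}$ and $\mathcal{F}^{-1}$ on a finite sample grid, and the Gaussian is chosen because it is smooth and rapidly decaying).

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 1024, endpoint=False)
f = np.exp(-np.pi * x**2)        # a Schwartz function sampled on a grid

recovered = np.fft.ifft(np.fft.fft(f))
print(np.max(np.abs(recovered - f)))  # ~1e-16: F^{-1}(F f) = f numerically
```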
The Fourier inversion theorem is analogous to the convergence of Fourier series. In the Fourier transform case we have -$$ -f\colon\mathbb{R}^n\to\mathbb{C},\quad\hat f\colon\mathbb{R}^n\to\mathbb{C}, -$$ -$$ -\hat f(\xi):=\int_{\mathbb{R}^n} e^{-2\pi iy\cdot\xi} f(y)dy, -$$ -$$ -f(x)=\int_{\mathbb{R}^n} e^{2\pi ix\cdot\xi} \hat f(\xi)d\xi. -$$ - -In the Fourier series case we instead have -$$ -f\colon[0,1]^n\to\mathbb{C},\quad\hat f\colon\mathbb{Z}^n\to\mathbb{C}, -$$ -$$ -\hat f(k):=\int_{[0,1]^n} e^{-2\pi iy\cdot k} f(y)dy, -$$ -$$ -f(x)=\sum_{k\in\mathbb{Z}^n} e^{2\pi ix\cdot k} \hat f(k). -$$ - -In particular, in one dimension $k \in \mathbb Z$ and the sum runs from $- \infty$ to $\infty$. - -In applications of the Fourier transform the Fourier inversion theorem often plays a critical role. In many situations the basic strategy is to apply the Fourier transform, perform some operation or simplification, and then apply the inverse Fourier transform. - -More abstractly, the Fourier inversion theorem is a statement about the Fourier transform as an operator (see Fourier transform on function spaces). For example, the Fourier inversion theorem on $f \in L^2(\mathbb R^n)$ shows that the Fourier transform is a unitary operator on $L^2(\mathbb R^n)$. - -The inverse Fourier transform is extremely similar to the original Fourier transform: as discussed above, it differs only in the application of a flip operator. For this reason the properties of the Fourier transform hold for the inverse Fourier transform, such as the Convolution theorem and the Riemann–Lebesgue lemma. - -Tables of Fourier transforms may easily be used for the inverse Fourier transform by composing the looked-up function with the flip operator. For example, looking up the Fourier transform of the rect function we see that -$$ -f(x) = \operatorname{rect}(a x) \quad \Rightarrow \quad (\mathcal{F}f)(\xi)=\frac{1}{|a|} \operatorname{sinc}\left(\frac{\xi}{a}\right), -$$ - -so the corresponding fact for the inverse transform is -$$ -g(\xi)=\operatorname{rect}(a \xi) \quad \Rightarrow \quad (\mathcal{F}^{-1}g)(x)=\frac{1}{|a|} \operatorname{sinc}\left(-\frac{x}{a}\right). -$$ - -The proof uses a few facts, given $f(y)$ and $\mathcal{F}f (\xi) = \int_{\mathbb{R}^n} e^{-2\pi i y\cdot\xi} f(y)dy$. - -# If $x \in \mathbb R^n$ and $g(\xi) = e^{2 \pi \mathrm{i}x \cdot \xi} \psi(\xi)$, then $(\mathcal{F}g)(y) = (\mathcal{F}\psi)(y - x)$. - -# If $\varepsilon \in \mathbb R$ and $\psi(\xi) = \varphi(\varepsilon\xi)$, then $(\mathcal{F}\psi)(y) = (\mathcal{F}\varphi)(y/\varepsilon)/|\varepsilon|$. - -# For $f, g \in L^1(\mathbb R^n)$, Fubini's theorem implies that $\textstyle\int g(\xi) \cdot (\mathcal{F}f)(\xi)d\xi = \int(\mathcal{F}g)(y) \cdot f(y)dy$. - -# Define $\varphi(\xi) = e^{-\pi \vert \xi \vert^2}$; then $(\mathcal{F}\varphi)(y) = \varphi(y)$. - -# Define $\varphi_\varepsilon(y) = \varphi(y/\varepsilon)/\varepsilon^n$. Then with $\ast$ denoting convolution, $\varphi_\varepsilon$ is an approximation to the identity: for any continuous $f \in L^1(\mathbb R^n)$ and point $x \in \mathbb R^n$, $\lim_{\varepsilon \to 0} (\varphi_\varepsilon \ast f)(x) = f(x)$ (where the convergence is pointwise). - -Since, by assumption, $\mathcal{F}f\in L^1(\mathbb{R}^n)$, it follows by the dominated convergence theorem that -$$ -\int_{\mathbb{R}^n} e^{2\pi i x\cdot\xi}(\mathcal{F}f)(\xi)d\xi = \lim_{\varepsilon \to 0}\int_{\mathbb{R}^n} e^{-\pi\varepsilon^2|\xi|^2 + 2\pi i x\cdot\xi}(\mathcal{F}f)(\xi)d\xi. -$$ - -Define $g_x(\xi) = e^{-\pi\varepsilon^2\vert \xi \vert^2 + 2 \pi \mathrm{i} x \cdot \xi}$.
Applying facts 1, 2 and 4, repeatedly for multiple integrals if necessary, we obtain -$$ -(\mathcal{F}g_x)(y) = \frac{1}{\varepsilon^n}e^{-\frac{\pi}{\varepsilon^2}|x - y|^2}=\varphi_\varepsilon(x-y). -$$ - -Using fact 3 on $f$ and $g_x$, for each $x\in\mathbb R^n$, we have -$$ -\int_{\mathbb{R}^n} e^{-\pi\varepsilon^2|\xi|^2 + 2\pi i x\cdot\xi}(\mathcal{F}f)(\xi)d\xi = \int_{\mathbb{R}^n} \frac{1}{\varepsilon^n}e^{-\frac{\pi}{\varepsilon^2}|x - y|^2} f(y)dy = (\varphi_\varepsilon * f)(x), -$$ - -the convolution of $f$ with an approximate identity. But since $f \in L^1(\mathbb R^n)$, fact 5 says that -$$ -\lim_{\varepsilon\to 0}(\varphi_{\varepsilon} * f) (x) = f(x). -$$ - -Putting together the above we have shown that -$$ -\int_{\mathbb{R}^n} e^{2\pi i x\cdot\xi}(\mathcal{F}f)(\xi)d\xi = f(x). \qquad\square -$$ diff --git a/wiki/wikipedia/2527.txt b/wiki/wikipedia/2527.txt deleted file mode 100644 index d4464e7a42756643d6e71326f8111e455c8e74d9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2527.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, the Grothendieck existence theorem, introduced by Alexander Grothendieck, gives conditions that enable one to lift infinitesimal deformations of a scheme to a deformation, and to lift schemes over infinitesimal neighborhoods over a subscheme of a scheme S to schemes over S. - -The theorem can be viewed as an instance of formal GAGA. diff --git a/wiki/wikipedia/2528.txt b/wiki/wikipedia/2528.txt deleted file mode 100644 index 5a50cf015c7ced05bf194ec57fcf707e1d6eca5f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2528.txt +++ /dev/null @@ -1,300 +0,0 @@ -[Infobox figure omitted: an isosceles right triangle with legs of length 1, whose hypotenuse has length equal to the square root of 2; the infobox also gives the continued fraction $1 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \ddots}}}}$.] - -The square root of 2 (approximately 1.4142) is a positive real number that, when multiplied by itself, equals the number 2. It may be written in mathematics as $\sqrt{2}$ or $2^{1/2}$, and is an algebraic number. Technically, it should be called the principal square root of 2, to distinguish it from the negative number with the same property. - -Geometrically, the square root of 2 is the length of a diagonal across a square with sides of one unit of length; this follows from the Pythagorean theorem. It was probably the first number known to be irrational. The fraction 99/70 (≈ 1.4142857) is sometimes used as a good rational approximation with a reasonably small denominator. - -The digits in the decimal expansion of the square root of 2 form a sequence in the On-Line Encyclopedia of Integer Sequences. - -The Babylonian clay tablet YBC 7289 (c. 1800–1600 BC) gives an approximation of $\sqrt{2}$ in four sexagesimal figures, 1 24 51 10, which is accurate to about six decimal digits, and is the closest possible three-place sexagesimal representation of $\sqrt{2}$: -$$ -1 + \frac{24}{60} + \frac{51}{60^2} + \frac{10}{60^3} = \frac{305470}{216000} = 1.41421\overline{296}. -$$ - -Another early approximation is given in ancient Indian mathematical texts, the Sulbasutras (c. 800–200 BC), as follows: Increase the length [of the side] by its third and this third by its own fourth less the thirty-fourth part of that fourth.
That is, -$$ -1 + \frac{1}{3} + \frac{1}{3 \times 4} - \frac{1}{3 \times4 \times 34} = \frac{577}{408} = 1.41421\overline{56862745098039}. -$$ - -This approximation is the seventh in a sequence of increasingly accurate approximations based on the sequence of Pell numbers, which can be derived from the continued fraction expansion of $\sqrt{2}$. Despite having a smaller denominator, it is only slightly less accurate than the Babylonian approximation. - -Pythagoreans discovered that the diagonal of a square is incommensurable with its side, or in modern language, that the square root of two is irrational. Little is known with certainty about the time or circumstances of this discovery, but the name of Hippasus of Metapontum is often mentioned. For a while, the Pythagoreans treated as an official secret the discovery that the square root of two is irrational, and, according to legend, Hippasus was murdered for divulging it. - -A short proof of the irrationality of $\sqrt{2}$ can be obtained from the rational root theorem, that is, if $p(x)$ is a monic polynomial with integer coefficients, then any rational root of $p(x)$ is necessarily an integer. Applying this to the polynomial $p(x) = x^2 - 2$, it follows that $\sqrt{2}$ is either an integer or irrational. Because $\sqrt{2}$ is not an integer (2 is not a perfect square), $\sqrt{2}$ must therefore be irrational. This proof can be generalized to show that any square root of any natural number that is not a perfect square is irrational. - -For other proofs that the square root of any non-square natural number is irrational, see Quadratic irrational number or Infinite descent. - -One proof of the number's irrationality is the following proof by infinite descent. It is also a proof by contradiction, also known as an indirect proof, in that the proposition is proved by assuming that the opposite of the proposition is true and showing that this assumption is false, thereby implying that the proposition must be true. - -# Assume that $\sqrt{2}$ is a rational number, meaning that there exists a pair of integers whose ratio is exactly $\sqrt{2}$. - -# If the two integers have a common factor, it can be eliminated using the Euclidean algorithm. - -# Then $\sqrt{2}$ can be written as an irreducible fraction $a/b$ such that $a$ and $b$ are coprime integers (having no common factor), which additionally means that at least one of $a$ or $b$ must be odd. - -# It follows that $a^2/b^2 = 2$ and $a^2 = 2b^2$ (since $(a/b)^2 = a^2/b^2$, and $a^2$ and $b^2$ are integers). - -# Therefore, $a^2$ is even because it is equal to $2b^2$. ($2b^2$ is necessarily even because it is 2 times another whole number.) - -# It follows that $a$ must be even (as squares of odd integers are never even). - -# Because $a$ is even, there exists an integer $k$ that fulfills $a = 2k$. - -# Substituting $2k$ from step 7 for $a$ in the second equation of step 4: $2b^2 = (2k)^2$ is equivalent to $2b^2 = 4k^2$, which is equivalent to $b^2 = 2k^2$. - -# Because $2k^2$ is divisible by two and therefore even, and because $2k^2 = b^2$, it follows that $b^2$ is also even, which means that $b$ is even. - -# By steps 6 and 9, $a$ and $b$ are both even, which contradicts the fact that $a/b$ is irreducible as stated in step 3. - -Q.E.D. - -Because there is a contradiction, the assumption (1) that $\sqrt{2}$ is a rational number must be false. This means that $\sqrt{2}$ is not a rational number. That is, $\sqrt{2}$ is irrational. - -This proof was hinted at by Aristotle, in his Analytica Priora, §I.23. It appeared first as a full proof in Euclid's Elements, as proposition 117 of Book X.
However, since the early 19th century, historians have agreed that this proof is an interpolation and not attributable to Euclid. - -As with the proof by infinite descent, we obtain $a^2 = 2b^2$. Being the same quantity, each side has the same prime factorization by the fundamental theorem of arithmetic, and in particular, would have to have the factor 2 occur the same number of times. However, the factor 2 appears an odd number of times on the right, but an even number of times on the left—a contradiction. - -A simple proof is attributed by John Horton Conway to Stanley Tennenbaum when the latter was a student in the early 1950s and whose most recent appearance is in an article by Noson Yanofsky in the May–June 2016 issue of American Scientist. Given two squares with integer sides respectively $a$ and $b$, one of which has twice the area of the other, place two copies of the smaller square in the larger as shown in Figure 1. The square overlap region in the middle ($(2b - a)^2$) must equal the sum of the two uncovered squares ($2(a - b)^2$). However, these squares on the diagonal have positive integer sides that are smaller than the original squares. Repeating this process yields pairs of arbitrarily small squares, one twice the area of the other, yet both having positive integer sides, which is impossible since positive integers cannot be less than 1. - -Another geometric reductio ad absurdum argument showing that $\sqrt{2}$ is irrational appeared in 2000 in the American Mathematical Monthly. It is also an example of proof by infinite descent. It makes use of classic compass and straightedge construction, proving the theorem by a method similar to that employed by ancient Greek geometers. It is essentially the same algebraic proof as in the previous paragraph, viewed geometrically in another way. - -Let △ ABC be a right isosceles triangle with hypotenuse length $m$ and legs $n$ as shown in Figure 2. By the Pythagorean theorem, $m/n = \sqrt{2}$. Suppose $m$ and $n$ are integers. Let $m:n$ be a ratio given in its lowest terms. - -Draw the arcs BD and CE with centre A. Join DE. It follows that AB = AD, AC = AE and ∠BAC and ∠DAE coincide. Therefore, the triangles ABC and ADE are congruent by SAS. - -Because ∠EBF is a right angle and ∠BEF is half a right angle, △ BEF is also a right isosceles triangle. Hence BE = m − n implies BF = m − n. By symmetry, DF = m − n, and △ FDC is also a right isosceles triangle. It also follows that FC = n − (m − n) = 2n − m. - -Hence, there is an even smaller right isosceles triangle, with hypotenuse length 2n − m and legs m − n. These values are integers even smaller than m and n and in the same ratio, contradicting the hypothesis that m:n is in lowest terms. Therefore, m and n cannot be both integers; hence, $\sqrt{2}$ is irrational. - -In a constructive approach, one distinguishes between on the one hand not being rational, and on the other hand being irrational (i.e., being quantifiably apart from every rational), the latter being a stronger property. Given positive integers $a$ and $b$, because the valuation (i.e., highest power of 2 dividing a number) of $2b^2$ is odd, while the valuation of $a^2$ is even, they must be distinct integers; thus $|2b^2 - a^2| \geq 1$. Then -$$ -\left|\sqrt2 - \frac{a}{b}\right| = \frac{\left|2b^2 - a^2\right|}{b^2\!\left(\sqrt{2}+\frac{a}{b}\right)} \ge \frac{1}{b^2\!\left(\sqrt2 + \frac{a}{b}\right)} \ge \frac{1}{3b^2}, -$$ - -the latter inequality being true because it is assumed that $a/b \leq 3 - \sqrt{2}$ (otherwise the quantitative apartness can be trivially established).
This gives a lower bound of $\frac{1}{3b^2}$ for the difference $\left|\sqrt{2} - a/b\right|$, yielding a direct proof of irrationality not relying on the law of excluded middle; see Errett Bishop (1985, p. 18). This proof constructively exhibits a discrepancy between $\sqrt{2}$ and any rational. - -* Lemma: For the Diophantine equation $x^2+y^2=z^2$ in its primitive (simplest) form, integer solutions exist if and only if either $x$ or $y$ is odd, but never when both $x$ and $y$ are odd. - -Proof: For the given equation, there are only six possible combinations of oddness and evenness for whole-number values of $x$ and $y$ that produce a whole-number value for $z$. A simple enumeration of all six possibilities shows why four of these six are impossible. Of the two remaining possibilities, one can be proven to not contain any solutions using modular arithmetic, leaving the sole remaining possibility as the only one to contain solutions, if any. - -The fifth possibility (both $x$ and $y$ odd and $z$ even) can be shown to contain no solutions as follows. - -Since $z$ is even, $z^2$ must be divisible by $4$, hence -$$ -x^2+y^2 \equiv 0 \mod4 -$$ - -The square of any odd number is always congruent to 1 modulo 4. The square of any even number is always congruent to 0 modulo 4. Since both $x$ and $y$ are odd and $z$ is even: -$$ -1+1 \equiv 0 \mod 4 -$$ -$$ -2 \equiv 0 \mod 4 -$$ - -which is impossible. Therefore, the fifth possibility is also ruled out, leaving the sixth to be the only possible combination to contain solutions, if any. - -An extension of this lemma is the result that two identical whole-number squares can never be added to produce another whole-number square, even when the equation is not in its simplest form. - -* Theorem: $\sqrt2$ is irrational. - -Proof: Suppose to the contrary that $\sqrt2$ is rational. Then -$$ -\sqrt2 = {a \over b} -$$ - -where $a,b \in \mathbb{Z}$ and $b \neq 0$. - -Squaring both sides, -$$ -2 = {a^2 \over b^2} -$$ -$$ -2b^2 = a^2 -$$ -$$ -b^2+b^2 = a^2 -$$ - -But the lemma proves that the sum of two identical whole-number squares cannot produce another whole-number square. Therefore, the assumption that $\sqrt2$ is rational is contradicted, and $\sqrt2$ is irrational. Q.E.D. - -The multiplicative inverse (reciprocal) of the square root of two (i.e., the square root of 1/2) is a widely used constant: -$$ -\frac1{\sqrt{2}} = \frac{\sqrt{2}}{2} = \sin 45^\circ = \cos 45^\circ = 0.70710678118654752440084436210484903928483593768847\ldots -$$ - -One-half of $\sqrt{2}$, also the reciprocal of $\sqrt{2}$, is a common quantity in geometry and trigonometry because the unit vector that makes a 45° angle with the axes in a plane has the coordinates -$$ -\left(\frac{\sqrt{2}}{2}, \frac{\sqrt{2}}{2}\right)\!. -$$ - -This number satisfies -$$ -\tfrac{1}{2}\sqrt{2} = \sqrt{\tfrac{1}{2}} = \frac{1}{\sqrt{2}} = \cos 45^{\circ} = \sin 45^{\circ}. -$$ - -One interesting property of $\sqrt{2}$ is -$$ -\!\ {1 \over {\sqrt{2} - 1}} = \sqrt{2} + 1 -$$ - -since -$$ -\left(\sqrt{2}+1\right)\!\left(\sqrt{2}-1\right) = 2-1 = 1. -$$ - -This is related to the property of silver ratios. - -$\sqrt{2}$ can also be expressed in terms of copies of the imaginary unit $i$ using only the square root and arithmetic operations, if the square root symbol is interpreted suitably for the complex numbers $i$ and $-i$: -$$ -\frac{\sqrt{i}+i \sqrt{i}}{i}\text{ and }\frac{\sqrt{-i}-i \sqrt{-i}}{-i} -$$ - -$\sqrt{2}$ is also the only real number other than 1 whose infinite tetrate (i.e., infinite exponential tower) is equal to its square.
In other words: if for $c > 1$ we set $x_1 = c$ and $x_{n+1} = c^{x_n}$ for $n > 1$, the limit of $x_n$ as $n \to \infty$ will be called (if this limit exists) $f(c)$. Then $\sqrt{2}$ is the only number $c > 1$ for which $f(c) = c^2$. Or symbolically: -$$ -\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{~\cdot^{~\cdot^{~\cdot}}}}} = 2. -$$ - -$\sqrt{2}$ appears in Viète's formula for $\pi$: -$$ -2^m\sqrt{2-\sqrt{2+\sqrt{2+\cdots+\sqrt{2}}}} \to \pi\text{ as }m \to \infty -$$ - -for $m$ square roots and only one minus sign. - -Similar in appearance but with a finite number of terms, $\sqrt{2}$ appears in various trigonometric constants: -$$
-\begin{align}
-\sin\frac{\pi}{32} &= \tfrac12\sqrt{2-\sqrt{2+\sqrt{2+\sqrt{2}}}} &\quad
-\sin\frac{3\pi}{16} &= \tfrac12\sqrt{2-\sqrt{2-\sqrt{2}}} &\quad
-\sin\frac{11\pi}{32} &= \tfrac12\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2}}}} \\[6pt]
-\sin\frac{\pi}{16} &= \tfrac12\sqrt{2-\sqrt{2+\sqrt{2}}} &\quad
-\sin\frac{7\pi}{32} &= \tfrac12\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2}}}} &\quad
-\sin\frac{3\pi}{8} &= \tfrac12\sqrt{2+\sqrt{2}} \\[6pt]
-\sin\frac{3\pi}{32} &= \tfrac12\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}} &\quad
-\sin\frac{\pi}{4} &= \tfrac12\sqrt{2} &\quad
-\sin\frac{13\pi}{32} &= \tfrac12\sqrt{2+\sqrt{2+\sqrt{2-\sqrt{2}}}} \\[6pt]
-\sin\frac{\pi}{8} &= \tfrac12\sqrt{2-\sqrt{2}} &\quad
-\sin\frac{9\pi}{32} &= \tfrac12\sqrt{2+\sqrt{2-\sqrt{2+\sqrt{2}}}} &\quad
-\sin\frac{7\pi}{16} &= \tfrac12\sqrt{2+\sqrt{2+\sqrt{2}}} \\[6pt]
-\sin\frac{5\pi}{32} &= \tfrac12\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2}}}} &\quad
-\sin\frac{5\pi}{16} &= \tfrac12\sqrt{2+\sqrt{2-\sqrt{2}}} &\quad
-\sin\frac{15\pi}{32} &= \tfrac12\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2}}}}
-\end{align}
-$$ - -It is not known whether $\sqrt{2}$ is a normal number, which is a stronger property than irrationality, but statistical analyses of its binary expansion are consistent with the hypothesis that it is normal to base two. - -The identity $\cos\frac{\pi}{4} = \sin\frac{\pi}{4} = \frac{1}{\sqrt{2}}$, along with the infinite product representations for the sine and cosine, leads to products such as -$$ -\frac{1}{\sqrt 2} = \prod_{k=0}^\infty \left(1-\frac{1}{(4k+2)^2}\right) = \left(1-\frac{1}{4}\right)\!\left(1-\frac{1}{36}\right)\!\left(1-\frac{1}{100}\right) \cdots -$$ - -and -$$ -\sqrt{2} = \prod_{k=0}^\infty\frac{(4k+2)^2}{(4k+1)(4k+3)} = \left(\frac{2 \cdot 2}{1 \cdot 3}\right)\!\left(\frac{6 \cdot 6}{5 \cdot 7}\right)\!\left(\frac{10 \cdot 10}{9 \cdot 11}\right)\!\left(\frac{14 \cdot 14}{13 \cdot 15}\right) \cdots -$$ - -or equivalently, -$$ -\sqrt{2} = \prod_{k=0}^\infty\left(1+\frac{1}{4k+1}\right)\left(1-\frac{1}{4k+3}\right) = \left(1+\frac{1}{1}\right)\!\left(1-\frac{1}{3}\right)\!\left(1+\frac{1}{5}\right)\!\left(1-\frac{1}{7}\right) \cdots. -$$ - -The number can also be expressed by taking the Taylor series of a trigonometric function. For example, the series for $\cos\frac{\pi}{4}$ gives -$$ -\frac{1}{\sqrt{2}} = \sum_{k=0}^\infty \frac{(-1)^k \left(\frac{\pi}{4}\right)^{2k}}{(2k)!}. -$$ - -The Taylor series of $\sqrt{1 + x}$ with $x = 1$ and using the double factorial $n!!$ gives -$$ -\sqrt{2} = \sum_{k=0}^\infty (-1)^{k+1} \frac{(2k-3)!!}{(2k)!!} = 1 + \frac{1}{2} - \frac{1}{2\cdot4} + \frac{1\cdot3}{2\cdot4\cdot6} - \frac{1\cdot3\cdot5}{2\cdot4\cdot6\cdot8} + \cdots = 1 + \frac{1}{2} - \frac{1}{8} + \frac{1}{16} - \frac{5}{128} + \frac{7}{256} + \cdots. -$$ - -The convergence of this series can be accelerated with an Euler transform, producing -$$ -\sqrt{2} = \sum_{k=0}^\infty \frac{(2k+1)!}{2^{3k+1}(k!)^2 } = \frac{1}{2} +\frac{3}{8} + \frac{15}{64} + \frac{35}{256} + \frac{315}{4096} + \frac{693}{16384} + \cdots.
-$$ - -It is not known whether $\sqrt{2}$ can be represented with a BBP-type formula. BBP-type formulas are known for $\pi\sqrt{2}$ and $\sqrt{2}\ln(1+\sqrt{2})$, however. - -The number can be represented by an infinite series of Egyptian fractions, with denominators defined by the $2^n$th terms of a Fibonacci-like recurrence relation $a(n) = 34a(n-1) - a(n-2)$, $a(0) = 0$, $a(1) = 6$: -$$ -\sqrt{2}=\frac{3}{2}-\frac{1}{2}\sum_{n=0}^\infty \frac{1}{a(2^n)}=\frac{3}{2}-\frac{1}{2}\left(\frac{1}{6}+\frac{1}{204}+\frac{1}{235416}+\dots \right) -$$ - -The square root of two has the following continued fraction representation: -$$ - \!\ \sqrt{2} = 1 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \ddots}}}}. -$$ - -The convergents formed by truncating this representation form a sequence of fractions that approximate the square root of two to increasing accuracy, and that are described by the Pell numbers (known as side and diameter numbers to the ancient Greeks because of their use in approximating the ratio between the sides and diagonal of a square). The first convergents are: 1/1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, 577/408. The convergent $p/q$ differs from $\sqrt{2}$ by almost exactly $\frac{1}{2q^2\sqrt{2}}$, and the next convergent is $\frac{p + 2q}{p + q}$. - -The following nested square expressions converge to $\sqrt{2}$: -$$
-\begin{align} \sqrt{2}
-&=\tfrac{3}{2} - 2 \left( \tfrac{1}{4}- \left( \tfrac{1}{4}-\left( \tfrac{1}{4}- \left( \tfrac{1}{4}- \cdots \right)^2 \right)^2 \right)^2 \right)^2\\
-&=\tfrac{3}{2} - 4 \left( \tfrac{1}{8}+ \left( \tfrac{1}{8}+\left( \tfrac{1}{8}+ \left( \tfrac{1}{8}+ \cdots \right)^2 \right)^2 \right)^2 \right)^2.
-\end{align}
-$$ - -In 1786, German physics professor Georg Christoph Lichtenberg found that any sheet of paper whose long edge is $\sqrt{2}$ times longer than its short edge could be folded in half and aligned with its shorter side to produce a sheet with exactly the same proportions as the original. Today, the (approximate) aspect ratio of paper sizes under ISO 216 (A4, A0, etc.) is $1:\sqrt{2}$. - -Proof:
    - -Let $S = $ shorter length and $L = $ longer length of the sides of a sheet of paper, with
    -$$ -R = \frac{L}{S} = \sqrt{2} -$$ as required by ISO 216. - -Let $R' = \frac{L'}{S'}$ be the analogous ratio of the halved sheet, then
    -$$ -R' = \frac{S}{L/2} = \frac{2S}{L} = \frac{2}{(L/S)} = \frac{2}{\sqrt{2}} = \sqrt{2} = R -$$. - -There are some interesting properties involving the square root of 2 in the physical sciences: - -* The square root of two is the frequency ratio of a tritone interval in twelve-tone equal temperament music. - -* The square root of two forms the relationship of f-stops in photographic lenses, which in turn means that the ratio of areas between two successive apertures is 2. - -* The celestial latitude (declination) of the Sun during a planet's astronomical cross-quarter day points equals the tilt of the planet's axis divided by sqrt 2. diff --git a/wiki/wikipedia/2529.txt b/wiki/wikipedia/2529.txt deleted file mode 100644 index 306ba84a996a0c9b9023a7d59f408571ca3ac4f7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2529.txt +++ /dev/null @@ -1,134 +0,0 @@ -Minimax (sometimes MinMax, MM or saddle point) is a decision rule used in artificial intelligence, decision theory, game theory, statistics, and philosophy for minimizing the possible loss for a worst case (maximum loss) scenario. When dealing with gains, it is referred to as "maximin"—to maximize the minimum gain. Originally formulated for n-player zero-sum game theory, covering both the cases where players take alternate moves and those where they make simultaneous moves, it has also been extended to more complex games and to general decision-making in the presence of uncertainty. - -The maximin value is the highest value that the player can be sure to get without knowing the actions of the other players; equivalently, it is the lowest value the other players can force the player to receive when they know the player's action. Its formal definition is: -$$ -\underline{v_i} = \max_{a_i} \min_{a_{-i}} {v_i(a_i,a_{-i})} -$$ - -Where: - -* i is the index of the player of interest. - -* $-i$ denotes all other players except player i. - -* $a_i$ is the action taken by player i. - -* $a_{-i}$ denotes the actions taken by all other players. - -* $v_i$ is the value function of player i. - -Calculating the maximin value of a player is done in a worst-case approach: for each possible action of the player, we check all possible actions of the other players and determine the worst possible combination of actions—the one that gives player i the smallest value. Then, we determine which action player i can take in order to make sure that this smallest value is the highest possible. - -For example, consider the following game for two players, where the first player ("row player") may choose any of three moves, labelled T, M, or B, and the second player ("column" player) may choose either of two moves, L or R. The result of the combination of both moves is expressed in a payoff table: - -(where the first number in each cell is the pay-out of the row player and the second number is the pay-out of the column player). - -For the sake of example, we consider only pure strategies. Check each player in turn: - -* The row player can play T, which guarantees them a payoff of at least 2 (playing B is risky since it can lead to payoff -100, and playing M can result in a payoff of -10). Hence: $\underline{v_{row}} = 2$. - -* The column player can play L and secure a payoff of at least 0 (playing R puts them in the risk of getting $-20$). Hence: $\underline{v_{col}} = 0$. - -If both players play their respective maximin strategies $(T,L)$, the payoff vector is $(3,1)$. 
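To make the worst-case reasoning above concrete, here is a minimal Python sketch that computes both players' maximin values from a payoff table. The specific payoffs are an assumption: the article's own table is not preserved in this copy, so the numbers below were chosen only to be consistent with the surrounding description (T secures 2 for the row player, L secures 0 for the column player, and (T, L) pays out (3, 1)).

    # Hypothetical payoff table: each entry is (row payoff, column payoff).
    payoffs = {
        ("T", "L"): (3, 1),    ("T", "R"): (2, -20),
        ("M", "L"): (5, 0),    ("M", "R"): (-10, 1),
        ("B", "L"): (-100, 2), ("B", "R"): (4, 4),
    }
    rows, cols = ("T", "M", "B"), ("L", "R")

    # Row player's maximin: best guaranteed row payoff over all column replies.
    row_maximin = max(min(payoffs[r, c][0] for c in cols) for r in rows)
    # Column player's maximin: best guaranteed column payoff over all row replies.
    col_maximin = max(min(payoffs[r, c][1] for r in rows) for c in cols)

    print(row_maximin, col_maximin)  # 2 0

The nested max-min mirrors the formal definition $\underline{v_i} = \max_{a_i} \min_{a_{-i}} v_i(a_i,a_{-i})$ given above: the inner minimum is the worst case over the other player's actions, and the outer maximum picks the action that makes that worst case as good as possible.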
- -The minimax value of a player is the smallest value that the other players can force the player to receive, without knowing the player's actions; equivalently, it is the largest value the player can be sure to get when they know the actions of the other players. Its formal definition is: -$$ -\overline{v_i} = \min_{a_{-i}} \max_{a_i} {v_i(a_i,a_{-i})} -$$ - -For zero-sum games, the minimax theorem states:
    For every two-person, zero-sum game with finitely many strategies, there exists a value V and a mixed strategy for each player, such that - -(a) Given player 2's strategy, the best payoff possible for player 1 is V, and - -(b) Given player 1's strategy, the best payoff possible for player 2 is −V. - -
    - -Equivalently, Player 1's strategy guarantees them a payoff of V regardless of Player 2's strategy, and similarly Player 2 can guarantee themselves a payoff of −V. The name minimax arises because each player minimizes the maximum payoff possible for the other—since the game is zero-sum, they also minimize their own maximum loss (i.e. maximize their minimum payoff). - -See also example of a game without a value. - -The following example of a zero-sum game, where A and B make simultaneous moves, illustrates maximin solutions. Suppose each player has three choices and consider the payoff matrix for A displayed on the right. Assume the payoff matrix for B is the same matrix with the signs reversed (i.e. if the choices are A1 and B1 then B pays 3 to A). Then, the maximin choice for A is A2 since the worst possible result is then having to pay 1, while the simple maximin choice for B is B2 since the worst possible result is then no payment. However, this solution is not stable, since if B believes A will choose A2 then B will choose B1 to gain 1; then if A believes B will choose B1 then A will choose A1 to gain 3; and then B will choose B2; and eventually both players will realize the difficulty of making a choice. So a more stable strategy is needed. - -Some choices are dominated by others and can be eliminated: A will not choose A3 since either A1 or A2 will produce a better result, no matter what B chooses; B will not choose B3 since some mixtures of B1 and B2 will produce a better result, no matter what A chooses. - -A can avoid having to make an expected payment of more than 1∕3 by choosing A1 with probability 1∕6 and A2 with probability 5∕6: The expected payoff for A would be 3 × (1∕6) − 1 × (5∕6) = −1∕3 in case B chose B1 and −2 × (1∕6) + 0 × (5∕6) = −1/3 in case B chose B2. Similarly, B can ensure an expected gain of at least 1/3, no matter what A chooses, by using a randomized strategy of choosing B1 with probability 1∕3 and B2 with probability 2∕3. These mixed minimax strategies are now stable and cannot be improved. - -Frequently, in game theory, maximin is distinct from minimax. Minimax is used in zero-sum games to denote minimizing the opponent's maximum payoff. In a zero-sum game, this is identical to minimizing one's own maximum loss, and to maximizing one's own minimum gain. - -"Maximin" is a term commonly used for non-zero-sum games to describe the strategy which maximizes one's own minimum payoff. In non-zero-sum games, this is not generally the same as minimizing the opponent's maximum gain, nor the same as the Nash equilibrium strategy. - -The minimax values are very important in the theory of repeated games. One of the central theorems in this theory, the folk theorem, relies on the minimax values. - -In combinatorial game theory, there is a minimax algorithm for game solutions. - -A simple version of the minimax algorithm, stated below, deals with games such as tic-tac-toe, where each player can win, lose, or draw. If player A can win in one move, their best move is that winning move. If player B knows that one move will lead to the situation where player A can win in one move, while another move will lead to the situation where player A can, at best, draw, then player B's best move is the one leading to a draw. Late in the game, it's easy to see what the "best" move is. The Minimax algorithm helps find the best move, by working backwards from the end of the game. 
At each step it assumes that player A is trying to maximize the chances of A winning, while on the next turn player B is trying to minimize the chances of A winning (i.e., to maximize B's own chances of winning). - -=== Minimax algorithm with alternate moves === - -A minimax algorithm is a recursive algorithm for choosing the next move in an n-player game, usually a two-player game. A value is associated with each position or state of the game. This value is computed by means of a position evaluation function and it indicates how good it would be for a player to reach that position. The player then makes the move that maximizes the minimum value of the position resulting from the opponent's possible following moves. If it is A's turn to move, A gives a value to each of their legal moves. - -A possible allocation method consists in assigning a certain win for A as +1 and for B as −1. This leads to combinatorial game theory as developed by John Horton Conway. An alternative is using a rule that if the result of a move is an immediate win for A it is assigned positive infinity and if it is an immediate win for B, negative infinity. The value to A of any other move is the maximum of the values resulting from each of B's possible replies. For this reason, A is called the maximizing player and B is called the minimizing player, hence the name minimax algorithm. The above algorithm will assign a value of positive or negative infinity to any position since the value of every position will be the value of some final winning or losing position. In practice, this is only possible at the very end of complicated games such as chess or go, since it is not computationally feasible to look ahead as far as the completion of the game, except towards the end, and instead, positions are given finite values as estimates of the degree of belief that they will lead to a win for one player or another. - -This can be extended if we can supply a heuristic evaluation function which gives values to non-final game states without considering all possible following complete sequences. We can then limit the minimax algorithm to look only at a certain number of moves ahead. This number is called the "look-ahead", measured in "plies". For example, the chess computer Deep Blue (the first chess computer to beat a reigning world champion, Garry Kasparov) looked ahead at least 12 plies, then applied a heuristic evaluation function. - -The algorithm can be thought of as exploring the nodes of a game tree. The effective branching factor of the tree is the average number of children of each node (i.e., the average number of legal moves in a position). The number of nodes to be explored usually increases exponentially with the number of plies (it is less than exponential if evaluating forced moves or repeated positions). The number of nodes to be explored for the analysis of a game is therefore approximately the branching factor raised to the power of the number of plies. It is therefore impractical to completely analyze games such as chess using the minimax algorithm. - -The performance of the naïve minimax algorithm may be improved dramatically, without affecting the result, by the use of alpha–beta pruning. Other heuristic pruning methods can also be used, but not all of them are guaranteed to give the same result as the un-pruned search. - -A naïve minimax algorithm may be trivially modified to additionally return an entire Principal Variation along with a minimax score.
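As a runnable companion to the reference pseudocode given below, here is a minimal Python sketch of depth-limited minimax. The children and heuristic callables, and the toy game tree in the usage example, are hypothetical stand-ins for a real game's move generator and evaluation function.

    def minimax(node, depth, maximizing_player, children, heuristic):
        # Leaf: either the depth budget is exhausted or the node has no successors.
        succ = children(node)
        if depth == 0 or not succ:
            return heuristic(node)
        if maximizing_player:
            return max(minimax(c, depth - 1, False, children, heuristic) for c in succ)
        return min(minimax(c, depth - 1, True, children, heuristic) for c in succ)

    # Toy two-ply game: the maximizer moves first, then the minimizer.
    tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
    scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
    best = minimax("root", 2, True,
                   children=lambda n: tree.get(n, []),
                   heuristic=lambda n: scores.get(n, 0))
    print(best)  # 3 = max(min(3, 5), min(2, 9))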
- -The pseudocode for the depth-limited minimax algorithm is given below. - 
-function minimax(node, depth, maximizingPlayer) is
-    if depth = 0 or node is a terminal node then
-        return the heuristic value of node
-    if maximizingPlayer then
-        value := -∞
-        for each child of node do
-            value := max(value, minimax(child, depth - 1, FALSE))
-        return value
-    else (* minimizing player *)
-        value := +∞
-        for each child of node do
-            value := min(value, minimax(child, depth - 1, TRUE))
-        return value
-
-(* Initial call *)
-minimax(origin, depth, TRUE)
- -The minimax function returns a heuristic value for leaf nodes (terminal nodes and nodes at the maximum search depth). Non-leaf nodes inherit their value from a descendant leaf node. The heuristic value is a score measuring the favorability of the node for the maximizing player. Hence nodes resulting in a favorable outcome, such as a win, for the maximizing player have higher scores than nodes more favorable for the minimizing player. The heuristic values for terminal (game-ending) leaf nodes are scores corresponding to win, loss, or draw, for the maximizing player. For non-terminal leaf nodes at the maximum search depth, an evaluation function estimates a heuristic value for the node. The quality of this estimate and the search depth determine the quality and accuracy of the final minimax result. - -Minimax treats the two players (the maximizing player and the minimizing player) separately in its code. Based on the observation that $\max(a,b) = -\min(-a,-b)$, minimax may often be simplified into the negamax algorithm. - -Suppose the game being played only has a maximum of two possible moves per player each turn. The algorithm generates the tree on the right, where the circles represent the moves of the player running the algorithm (maximizing player), and squares represent the moves of the opponent (minimizing player). Because of the limitation of computation resources, as explained above, the tree is limited to a look-ahead of 4 moves. - -The algorithm evaluates each leaf node using a heuristic evaluation function, obtaining the values shown. The moves where the maximizing player wins are assigned with positive infinity, while the moves that lead to a win of the minimizing player are assigned with negative infinity. At level 3, the algorithm will choose, for each node, the smallest of the child node values, and assign it to that same node (e.g. the node on the left will choose the minimum between "10" and "+∞", therefore assigning the value "10" to itself). The next step, in level 2, consists of choosing for each node the largest of the child node values. Once again, the values are assigned to each parent node. The algorithm continues evaluating the maximum and minimum values of the child nodes alternately until it reaches the root node, where it chooses the move with the largest value (represented in the figure with a blue arrow). This is the move that the player should make in order to minimize the maximum possible loss. - -Minimax theory has been extended to decisions where there is no other player, but where the consequences of decisions depend on unknown facts. For example, deciding to prospect for minerals entails a cost which will be wasted if the minerals are not present, but will bring major rewards if they are.
One approach is to treat this as a game against nature (see move by nature), and using a mindset similar to Murphy's law or resistentialism, take an approach which minimizes the maximum expected loss, using the same techniques as in the two-person zero-sum games. - -In addition, expectiminimax trees have been developed, for two-player games in which chance (for example, dice) is a factor. - -In classical statistical decision theory, we have an estimator $\delta$ that is used to estimate a parameter $\theta \in \Theta$. We also assume a risk function $R(\theta,\delta)$, usually specified as the integral of a loss function. In this framework, $\tilde{\delta}$ is called minimax if it satisfies -$$ -\sup_\theta R(\theta,\tilde{\delta}) = \inf_\delta \sup_\theta R(\theta,\delta). -$$ - -An alternative criterion in the decision theoretic framework is the Bayes estimator in the presence of a prior distribution $\Pi$. An estimator is Bayes if it minimizes the average risk -$$ -\int_\Theta R(\theta,\delta)d\Pi(\theta). -$$ - -A key feature of minimax decision making is being non-probabilistic: in contrast to decisions using expected value or expected utility, it makes no assumptions about the probabilities of various outcomes, just scenario analysis of what the possible outcomes are. It is thus robust to changes in the assumptions, as these other decision techniques are not. Various extensions of this non-probabilistic approach exist, notably minimax regret and Info-gap decision theory. - -Further, minimax only requires ordinal measurement (that outcomes be compared and ranked), not interval measurements (that outcomes include "how much better or worse"), and returns ordinal data, using only the modeled outcomes: the conclusion of a minimax analysis is: "this strategy is minimax, as the worst case is (outcome), which is less bad than any other strategy". Compare to expected value analysis, whose conclusion is of the form: "this strategy yields E(X)=n." Minimax thus can be used on ordinal data, and can be more transparent. - -In philosophy, the term "maximin" is often used in the context of John Rawls's A Theory of Justice, where he refers to it (Rawls 1971, p. 152) in the context of The Difference Principle. Rawls defined this principle as the rule which states that social and economic inequalities should be arranged so that "they are to be of the greatest benefit to the least-advantaged members of society". diff --git a/wiki/wikipedia/253.txt b/wiki/wikipedia/253.txt deleted file mode 100644 index 4a0c53ab4055c36daf561e8ff9355897dde2cc47..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/253.txt +++ /dev/null @@ -1,109 +0,0 @@ -In computer science, the longest increasing subsequence problem is to find a subsequence of a given sequence in which the subsequence's elements are in sorted order, lowest to highest, and in which the subsequence is as long as possible. This subsequence is not necessarily contiguous, or unique. - -Longest increasing subsequences are studied in the context of various disciplines related to mathematics, including algorithmics, random matrix theory, representation theory, and physics. The longest increasing subsequence problem is solvable in time O(n log n), where n denotes the length of the input sequence. - -The algorithm outlined below solves the longest increasing subsequence problem efficiently with arrays and binary searching. - -It processes the sequence elements in order, maintaining the longest increasing subsequence found so far.
Denote the sequence values as $X[0], X[1], \ldots,$ etc. Then, after processing $X[i],$ the algorithm will have stored values in two arrays: - -* $M[j]$ — stores the index $k \leq i$ at which the increasing subsequence of length $j$ with the smallest possible final value $X[k]$ ends. More precisely, if $K_{i,j}$ denotes the set of all indices $k$ such that $k \leq i$ and there exists an increasing subsequence of length $j$ ending at $X[k]$ (that is, there exist $j$ indices $l_1 < l_2 < \cdots < l_j = k$ ending at $k$ such that $X\left[l_1\right] \leq X\left[l_2\right] \leq \cdots \leq X\left[k\right]$), then $M[j]$ is the index for which the following holds: $M[j] \in K_{i,j}$ and $X[M[j]] = \min_{k \in K_{i,j}} X[k]$ (or equivalently, $M[j] \in K_{i,j}$ and for every $k \in K_{i,j},$ $X[M[j]] \leq X[k]$). Note that $j \leq i + 1,$ because $j \geq 1$ represents the length of the increasing subsequence, and $k \geq 0$ represents the index of its termination. - -* $P[k]$ — stores the index of the predecessor of $X[k]$ in the longest increasing subsequence ending at $X[k].$ - -In addition the algorithm stores a variable L representing the length of the longest increasing subsequence found so far. Because the algorithm below uses zero-based numbering, for clarity $M$ is padded with $M[0],$ which goes unused so that $M[j]$ corresponds to a subsequence of length $j.$ A real implementation can skip $M[0]$ and adjust the indices accordingly. - -Note that, at any point in the algorithm, the sequence -$$ -X[M[1]], X[M[2]], \ldots, X[M[L]] -$$ -is increasing. For, if there is an increasing subsequence of length $j \geq 2$ ending at $X[M[j]],$ then there is also a subsequence of length $j - 1$ ending at a smaller value: namely the one ending at $X[P[M[j]]].$ Thus, we may do binary searches in this sequence in logarithmic time. - -The algorithm, then, proceeds as follows: - 
-P = array of length N
-M = array of length N + 1
-L = 0
-for i in range 0 to N-1:
-    // Binary search for the largest positive j ≤ L
-    // such that X[M[j]] < X[i]
-    lo = 1
-    hi = L + 1
-    while lo < hi:
-        mid = lo + floor((hi-lo)/2)
-        if X[M[mid]] < X[i]:
-            lo = mid+1
-        else:
-            hi = mid
-    // After searching, lo is 1 greater than the
-    // length of the longest prefix of X[i]
-    newL = lo
-    // The predecessor of X[i] is the last index of
-    // the subsequence of length newL-1
-    P[i] = M[newL-1]
-    M[newL] = i
-    if newL > L:
-        // If we found a subsequence longer than any we've
-        // found yet, update L
-        L = newL
-
-// Reconstruct the longest increasing subsequence
-S = array of length L
-k = M[L]
-for i in range L-1 to 0:
-    S[i] = X[k]
-    k = P[k]
-return S
- -Because the algorithm performs a single binary search per sequence element, its total time can be expressed using Big O notation as O(n log n). Fredman discusses a variant of this algorithm, which he credits to Donald Knuth; in the variant that he studies, the algorithm tests whether each value $X[i]$ can be used to extend the current longest increasing sequence, in constant time, prior to doing the binary search. With this modification, the algorithm uses at most $n \log_2 n - n \log_2 \log_2 n + O(n)$ comparisons in the worst case, which is optimal for a comparison-based algorithm up to the constant factor in the O(n) term. - -According to the Erdős–Szekeres theorem, any sequence of $n^2+1$ distinct integers has an increasing or a decreasing subsequence of length $n + 1$.
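Returning to the algorithm above, the following is a direct Python translation of the pseudocode; the sample input at the end is a hypothetical example, not one from the original article.

    def longest_increasing_subsequence(X):
        # O(n log n): binary search over M, the indices of the smallest
        # tails of increasing subsequences of each length; P stores
        # predecessor links for the final reconstruction.
        N = len(X)
        if N == 0:
            return []
        P = [0] * N
        M = [0] * (N + 1)
        L = 0
        for i in range(N):
            # Binary search for the largest positive j <= L with X[M[j]] < X[i].
            lo, hi = 1, L + 1
            while lo < hi:
                mid = lo + (hi - lo) // 2
                if X[M[mid]] < X[i]:
                    lo = mid + 1
                else:
                    hi = mid
            new_L = lo
            P[i] = M[new_L - 1]
            M[new_L] = i
            if new_L > L:
                L = new_L
        # Walk the predecessor links backwards to recover the subsequence.
        S = [0] * L
        k = M[L]
        for j in range(L - 1, -1, -1):
            S[j] = X[k]
            k = P[k]
        return S

    print(longest_increasing_subsequence([0, 8, 4, 12, 2, 10, 6, 14, 1, 9]))  # [0, 2, 6, 9]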
- -In the limit as n approaches infinity, the length of the longest increasing subsequence of a randomly permuted sequence of n items has a distribution approaching the Tracy–Widom distribution, the distribution of the largest eigenvalue of a random matrix in the Gaussian unitary ensemble. - -The longest increasing subsequence has also been studied in the setting of online algorithms, in which the elements of a sequence of independent random variables with continuous distribution F – or alternatively the elements of a random permutation – are presented one at a time to an algorithm that must decide whether to include or exclude each element, without knowledge of the later elements. In this variant of the problem, which allows for interesting applications in several contexts, it is possible to devise an optimal selection procedure that, given a random sample of size n as input, will generate an increasing sequence with maximal expected length of approximately $\sqrt{2n}$. - -The length of the increasing subsequence selected by this optimal procedure has variance approximately equal to $\sqrt{2n}/3$, and its limiting distribution is asymptotically normal after the usual centering and scaling. - -The same asymptotic results hold with more precise bounds for the corresponding problem in the setting of a Poisson arrival process. - -A further refinement in the Poisson process setting is given through the proof of a central limit theorem for the optimal selection process, which holds, with a suitable normalization, in a more complete sense than one would expect. The proof yields not only the "correct" functional limit theorem but also the (singular) covariance matrix of the three-dimensional process summarizing all interacting processes. - -*Part of MUMmer (Maximum Unique Match finder) system for aligning entire genomes. - -*Used in version control systems such as Git. - -*Used in Patience Diff, a diffing algorithm that computes and displays the differences between the contents of files, which is used in the Bazaar version control system. diff --git a/wiki/wikipedia/2530.txt b/wiki/wikipedia/2530.txt deleted file mode 100644 index 306ba84a996a0c9b9023a7d59f408571ca3ac4f7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2530.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, the simplicial approximation theorem is a foundational result for algebraic topology, guaranteeing that continuous mappings can be (by a slight deformation) approximated by ones that are piecewise of the simplest kind. It applies to mappings between spaces that are built up from simplices, that is, finite simplicial complexes. The general continuous mapping between such spaces can be represented approximately by the type of mapping that is (affine-) linear on each simplex into another simplex, at the cost (i) of sufficient barycentric subdivision of the simplices of the domain, and (ii) replacement of the actual mapping by a homotopic one. - -This theorem was first proved by L.E.J. Brouwer, by use of the Lebesgue covering theorem (a result based on compactness). It served to put the homology theory of the time (the first decade of the twentieth century) on a rigorous basis, since it showed that the topological effect (on homology groups) of continuous mappings could in a given case be expressed in a finitary way.
This must be seen against the background of a realisation at the time that continuity was in general compatible with the pathological, in some other areas. This initiated, one could say, the era of combinatorial topology. - -There is a further simplicial approximation theorem for homotopies, stating that a homotopy between continuous mappings can likewise be approximated by a combinatorial version. - -Let $ K $ and $ L $ be two simplicial complexes. A simplicial mapping $ f : K \to L $ is called a simplicial approximation of a continuous function $ F : |K| \to |L| $ if for every point $ x \in |K| $, $ |f|(x) $ belongs to the minimal closed simplex of $ L $ containing the point $ F(x) $. If $ f $ is a simplicial approximation to a continuous map $ F $, then the geometric realization of $ f $, $ |f| $, is necessarily homotopic to $ F $. - -The simplicial approximation theorem states that given any continuous map $ F : |K| \to |L| $ there exists a natural number $ n_0 $ such that for all $ n \ge n_0 $ there exists a simplicial approximation $ f : \mathrm{Bd}^n K \to L $ to $ F $ (where $ \mathrm{Bd} K $ denotes the barycentric subdivision of $ K $, and $ \mathrm{Bd}^n K $ denotes the result of applying barycentric subdivision $ n $ times.) diff --git a/wiki/wikipedia/2531.txt b/wiki/wikipedia/2531.txt deleted file mode 100644 index ebf64ef4b0c461d89a63f92f4f44b5169d56519e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2531.txt +++ /dev/null @@ -1,61 +0,0 @@ -Ramon Llull (c. 1232 – c. 1315/16) was a philosopher, theologian, poet, missionary, and Christian apologist from the Kingdom of Majorca. - -He invented a philosophical system known as the Art, conceived as a type of universal logic to prove the truth of Christian doctrine to interlocutors of all faiths and nationalities. The Art consists of a set of general principles and combinatorial operations. It is illustrated with diagrams. - -Considered one of the fathers of Catalan literature, he is thought to be the first to use a vernacular language to express philosophical, scientific, and technical ideas. He wrote in Catalan, Latin, and possibly Arabic (although no texts in Arabic survive). Some of his books were translated into Occitan, French, and Castilian during his lifetime. - -Posthumously he has enjoyed a varied reputation. In Catalonia he is considered a saint, but he has also been condemned as a heretic. In the 20th century he figured in a great deal of literature and art, and became known as a precursor of the computer and pioneer of computation theory. - -Llull was born in Palma into a wealthy family of Barcelona patricians who had come to the Kingdom of Majorca in 1229 with the conquering armies of James I of Aragon. James I had conquered the formerly Almohad-ruled Majorca as part of a larger move to integrate the territories of the Balearic Islands (now part of Spain) into the Crown of Aragon. Llull was born there a few years later, in 1232 or 1233. Muslims still constituted a large part of the population of Majorca and Jews were present in cultural and economic affairs. - -In 1257 Llull married Blanca Picany, with whom he had two children, Domènec and Magdalena. Although he formed a family, he lived what he would later call the licentious and worldly life of a troubadour. - -In 1263 Llull experienced a series of visions.
He narrates the event in his autobiography Vita coaetanea ("A Contemporary Life"). - -The vision came to Llull five times in all and inspired in him three intentions: to give up his soul for the sake of God’s love and honor, to convert the ‘Saracens’ to Christianity, and to write the best book in the world against the errors of the unbelievers. - -Following his visions he sold his possessions on the model of Saint Francis of Assisi and set out on pilgrimages to the shrines of Saint Mary of Rocamadour, Saint James, and other places, never to come back to his family and profession. When he returned to Majorca he purchased a Muslim slave in order to learn Arabic from him. For the next nine years, until 1274, he engaged in study and contemplation in relative solitude. He read extensively in both Latin and Arabic, learning both Christian and Muslim theological and philosophical thought. - -Between 1271 and 1274 Llull wrote his first works, a compendium of the Muslim thinker Al-Ghazali's logic and the Llibre de contemplació en Déu (Book on the Contemplation of God), a lengthy guide to finding truth through contemplation. - -In 1274, while staying at a hermitage on Puig de Randa, the form of the great book Llull was to write was finally given to him through divine revelation: a complex system that he named his Art, which would become the motivation behind most of his life's efforts. - -Llull urged the study of Arabic and other then-insufficiently studied languages in Europe, along with most of his works, to convert Muslims and schismatic Christians. He travelled through Europe to meet with popes, kings, and princes, trying to establish special colleges to prepare future missionaries. In 1276 a language school for Franciscan missionaries was founded at Miramar, funded by the King of Majorca. - -About 1291 he went to Tunis, preached to the Saracens, disputed with them in philosophy, and after another brief sojourn in Paris, returned to the East as a missionary. Llull travelled to Tunis a second time in about 1304, and wrote numerous letters to the king of Tunis, but little else is known about this part of his life. - -He returned in 1308, reporting that the conversion of Muslims should be achieved through prayer, not through military force. He finally achieved his goal of linguistic education at major universities in 1311 when the Council of Vienne ordered the creation of chairs of Hebrew, Arabic and Chaldean (Aramaic) at the universities of Bologna, Oxford, Paris, and Salamanca as well as at the Papal Court. The Art's reliance on divine attributes also has a certain similarity to the contemplation of the ninety-nine Names of God in the Muslim tradition. Llull's familiarity with the Islamic intellectual tradition is evidenced by the fact that his first work (1271-2) was a compendium of Al-Ghazali's logic. - -From early in his career Llull composed dialogues to enact the procedure of the Art. This is linked to the missionary aspect of the Art. Llull conceived it as an instrument to convert all peoples of the world to Christianity and experimented with more popular genres to make it easier to understand. His earliest and best-known dialogue is the Book of the Gentile and the Three Wise Men, written in Catalan in the 1270s and later translated into Latin. It is framed as a meeting of three wise men (a Muslim, a Jew, and a Christian) and a Gentile in the woods. They learn about the Lullian method when they encounter a set of trees with leaves inscribed with Lullian principles.
Lady Intelligence appears and informs them of the properties of the trees and the rules for implementing the leaves. The wise men use the trees to prove their respective Articles of Faith to the Gentile (although some of the Islamic tenets cannot be proved with the Lullian procedure) and in the end the Gentile is converted to Christianity. - -Llull also composed many other dialogues. Later in his career when he became concerned with heretical activity in the Arts Faculty of the University of Paris, he wrote "disputations" with philosophers as interlocutors. He also created a character for himself and he stars in many of these dialogues as the Christian wise man (for instance: Liber de quaestione valde alta et profunda, composed in 1311). - -Llull structured many of his works around trees. In some, like the Book of the Gentile and the Three Wise Men, the "leaves" of the trees stand for the combinatorial elements (principles) of the Art. In other works a series of trees shows how the Art generates all ("encyclopedic") knowledge. The Tree of Science (1295-6) comprises sixteen trees ranging from earthly and moral to divine and pedagogical. Each tree is divided into seven parts (roots, trunk, branches, twigs, leaves, flowers, fruits). The roots always consist of the Lullian divine principles and from there the tree grows into the differentiated aspects of its respective category of reality. - -Llull also wrote narrative prose drawing on the literary traditions of his time (epic, romance) to express the Art. These works were intended to communicate the potentially complex operations of the Art to a lay audience. Blanquerna (c.1276-83) is his best-known novel. Felix (1287-9) is also notable, although it was not widely circulated during his lifetime and only available in Catalan. It is formulated as a sort of Bildungsroman in which Felix, the main character, begins on a journey at the instigation of his father who has written the "Book of Wonders". The book is divided into ten chapters (echoing the encyclopedic range of the Tree of Science) as Felix gains knowledge: God, angels, heavens, elements, plants, minerals, animals, man, Paradise, and Hell. It turns out to be a metafiction, as Felix's journey ends at a monastery where he relates the "Book of Wonders" now embellished and fused with the account of his own adventures. - -According to Llull's autobiographical Vita, his Art was not received well at the University of Paris when he first presented it there in the 1280s. This experience supposedly is what led him to revise the Art (creating the tertiary version). Llull's Art was never adopted by mainstream academia of the thirteenth and early-fourteenth centuries, but it did accrue quite a bit of interest. A significant number of Lullian manuscripts were collected by the Carthusian monks of Paris at Vauvert and by several theologians who donated their manuscripts to the Sorbonne Library. One disciple, Thomas Le Myésier, went so far as to create elaborate compilations of Llull's works, including a manuscript dedicated to the queen of France. - -In the 1360s the inquisitor Nicholas Eymerich condemned Lullism in Aragon. He obtained a Papal Bull in 1376 to prohibit Lullian teaching, although it proved ineffective. In Paris Jean Gerson also issued a series of polemical writings against Lullism. There was an official document issued to prohibit the Lullian Art from being taught in the Faculty of Theology. - -Llull's most significant early modern proponent was Nicholas of Cusa.
He collected many works by Llull and adapted many aspects of Lullian thought for his own mystical theology. There was also growing interest in Lullism in Catalonia, Italy, and France. Jacques Lefèvre d'Étaples published eight of Llull's books in 1499, 1505, and 1516. Lefèvre was therefore responsible for the first significant circulation of Llull's work in print outside of Catalonia. It is thought that the influence of Lullian works in Renaissance Italy (coinciding with the rise of neoplatonism) contributed to a development in metaphysics, from a static Aristotelian notion of being to reality as a dynamic process. In Northern and Central Europe Lullism was adopted by Lutherans and Calvinists interested in promoting programs of theological humanism. Gottfried Leibniz was exposed to these currents during his years in Mainz, and Llull's Art clearly informed his De Arte Combinatoria. - -There is a significant body of alchemical treatises falsely attributed to Llull. The two fundamental works of the corpus are the Testamentum and the Liber de secretis naturae seu de quinta essentia, which both date to the fourteenth century. Occultists such as Heinrich Cornelius Agrippa and Giordano Bruno were drawn to these works. Despite Llull's growing identification with alchemy and Neoplatonic mysticism, others (such as Giulio Pace and Johann Heinrich Alsted) were still interested in the Lullian Art as a universal logic, even in the seventeenth century when Descartes and Ramus proposed competing systems. - -Meanwhile, in Spain, Cardinal Francisco Jiménez de Cisneros, Archbishop of Toledo, had taken up Lullism for his project of reform. Cisneros mobilized various intellectuals and editors, founding chairs at universities and publishing Llull's works. Founded in 1633, the Pontifical College of La Sapiencia on Majorca became the epicenter for teaching Lullism. The Franciscans from La Sapiencia were the ones to seek Llull's canonization at Rome in the seventeenth century. These efforts were renewed in the eighteenth century, but never succeeded. Llull was beatified in 1847 by Pope Pius IX. His feast day was assigned to 30 June and is celebrated by the Third Order of St. Francis. - -Llull is now recognized by scholars as significant in both the history of Catalan literature as well as intellectual history. From 1906 to 1950 the Comissió Editora Lul·liana led a project to edit Llull's works written in Catalan. This series was called the Obres de Ramon Llull (ORL). In 1957 the Raimundus-Lullus-Institut was founded in Freiburg, Germany to begin the work of editing Llull's Latin works. This series is called the Raimundi Lulli Opera Latina (ROL) and is still ongoing. In 1990 the work on the Catalan texts was restarted with the Nova Edició de les Obres de Ramon Llull (NEORL). In the world of English-language scholarship the work of Frances Yates on memory systems (The Art of Memory, published 1966) brought new interest to Ramon Llull as a figure in the history of cognitive systems. - -Llull has appeared in the art and literature of the last century, especially in the genres of surrealism, philosophical fantasy, and metafiction. Salvador Dalí's alchemical thought was influenced by Ramon Llull and Dalí incorporated the diagrams from the Lullian Art into his work called Alchimie des Philosophes. In 1937 Jorge Luis Borges wrote a snippet called "Ramon Llull's Thinking Machine" proposing the Lullian Art as a device to produce poetry.
- -Other notable references to Ramon Llull include: Aldous Huxley's short story "The Death of Lully", a fictionalized account of the aftermath of Llull's stoning in Tunis, set aboard the Genoese ship that returned him to Mallorca. Paul Auster refers to Llull (as Raymond Lull) in his memoir The Invention of Solitude, in the second part, The Book of Memory. Llull is also a major character in The Box of Delights, a children's novel by poet John Masefield. - -Llull's Art is sometimes recognized as a precursor to computer science and computation theory. - -With the discovery in 2001 of his lost manuscripts, Ars notandi, Ars eleccionis, and Alia ars eleccionis, Llull is also given credit for devising electoral methods now known as the Borda count and the Condorcet criterion, which Jean-Charles de Borda and Nicolas de Condorcet independently proposed centuries later. - -* Ramon Llull's New Rhetoric, text and translation of Llull's 'Rethorica Nova', edited and translated by Mark D. Johnston, Davis, California: Hermagoras Press, 1994 - -* Selected Works of Ramon Llull (1232‑1316), edited and translated by Anthony Bonner, Princeton, N.J.: Princeton University Press 1985, two volumes XXXI + 1330 pp. (Contents: vol. 1: The Book of the Gentile and the Three Wise Men, pp. 93–305; Ars Demonstrativa, pp. 317–567; Ars Brevis, pp. 579–646; vol. 2: Felix: or the Book of Wonders, pp. 659–1107; Principles of Medicine pp. 1119–1215; Flowers of Love and Flowers of Intelligence, pp. 1223–1256) - -* Doctor Illuminatus: A Ramon Llull Reader, edited and translated by Anthony Bonner, with a new translation of The Book of the Lover and the Beloved by Eve Bonner, Princeton, N.J.: Princeton University Press 1994 diff --git a/wiki/wikipedia/2532.txt b/wiki/wikipedia/2532.txt deleted file mode 100644 index 7bf6102c281790e03f183ac1306eed79b9ee5270..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2532.txt +++ /dev/null @@ -1,60 +0,0 @@ -In computational complexity theory and computability theory, a search problem is a type of computational problem represented by a binary relation. If R is a binary relation such that field(R) ⊆ Γ+ and T is a Turing machine, then T calculates R if: - -* If x is such that there is some y such that R(x, y), then T accepts x with output z such that R(x, z) (there may be multiple y, and T need only find one of them) - -* If x is such that there is no y such that R(x, y), then T rejects x - -Intuitively, the problem consists in finding a structure "y" in a given object "x". An algorithm is said to solve the problem if, whenever at least one corresponding structure exists, it outputs one occurrence of this structure; otherwise, it stops with an appropriate output ("item not found" or some similar message). - -Such problems occur very frequently in graph theory, for example, where searching a graph for structures such as particular matchings, cliques, or independent sets is a subject of interest. - -Note that the graph of a partial function is a binary relation, and if T calculates a partial function then there is at most one possible output. - -A relation R can be viewed as a search problem, and a Turing machine which calculates R is also said to solve it. Every search problem has a corresponding decision problem, namely -$$ -L(R)=\{x\mid \exists y R(x,y)\}. -$$ - -This definition may be generalized to n-ary relations using any suitable encoding which allows multiple strings to be compressed into one string (for instance by listing them consecutively with a delimiter).
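As a concrete illustration of this definition, the following sketch encodes the relation R(x, y) = "y is a nontrivial divisor of x", a brute-force solver in the sense above, and the induced decision problem L(R); the function names and the choice of relation are illustrative assumptions, not part of the original text.

```python
# Illustrative search problem: R(x, y) holds iff y is a nontrivial divisor of x.
def R(x: int, y: int) -> bool:
    return 1 < y < x and x % y == 0

# A brute-force solver: on input x it outputs some z with R(x, z) if one
# exists (there may be several; one suffices), and rejects otherwise.
def solve(x: int):
    for z in range(2, x):
        if R(x, z):
            return z      # accept with output z
    return None           # reject: "item not found"

# The corresponding decision problem L(R) = {x | there exists y with R(x, y)},
# here the set of composite numbers.
def in_L(x: int) -> bool:
    return solve(x) is not None

print(solve(91), in_L(97))  # 7 (since 91 = 7 * 13), False (97 is prime)
```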
- -A search problem is defined by: - -* A set of states - -* A start state - -* A goal state or goal test: a Boolean function which tells us whether a given state is a goal state - -* A successor function: a mapping from a state to a set of new states - -The task is to find a solution when we are not given an algorithm to solve the problem, but only a specification of what a solution looks like. - -* Generic search algorithm: given a graph, start nodes, and goal nodes, incrementally explore paths from the start nodes. - -* Maintain a frontier of paths from the start node that have been explored. - -* As search proceeds, the frontier expands into the unexplored nodes until a goal node is encountered. - -* The way in which the frontier is expanded defines the search strategy. - -Input: a graph, - -a set of start nodes, - -Boolean procedure goal(n) that tests if n is a goal node. - -frontier := {⟨s⟩ : s is a start node}; - -while frontier is not empty: - -select and remove path ⟨n0, ..., nk⟩ from frontier; - -if goal(nk) - -return ⟨n0, ..., nk⟩; - -for every neighbor n of nk - -add ⟨n0, ..., nk, n⟩ to frontier; - -end while diff --git a/wiki/wikipedia/2533.txt b/wiki/wikipedia/2533.txt deleted file mode 100644 index d3b5d175dd0eb6510e91b0ef0c882315e5015b41..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2533.txt +++ /dev/null @@ -1,7 +0,0 @@ -William Thurston's elliptization conjecture states that a closed 3-manifold with finite fundamental group is spherical, i.e. has a Riemannian metric of constant positive sectional curvature. - -A 3-manifold with a Riemannian metric of constant positive sectional curvature is covered by the 3-sphere; moreover, the group of covering transformations consists of isometries of the 3-sphere. - -If the original 3-manifold in fact has a trivial fundamental group, then it is homeomorphic to the 3-sphere (via the covering map). Thus, proving the elliptization conjecture would prove the Poincaré conjecture as a corollary. In fact, the elliptization conjecture is logically equivalent to two simpler conjectures: the Poincaré conjecture and the spherical space form conjecture. - -The elliptization conjecture is a special case of Thurston's geometrization conjecture, which was proved in 2003 by G. Perelman. diff --git a/wiki/wikipedia/2534.txt b/wiki/wikipedia/2534.txt deleted file mode 100644 index 996c968c1227b31be17d70f9813ff2bc5f1c8344..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2534.txt +++ /dev/null @@ -1,122 +0,0 @@ -In mathematics, the Peter–Weyl theorem is a basic result in the theory of harmonic analysis, applying to topological groups that are compact, but are not necessarily abelian. It was initially proved by Hermann Weyl, with his student Fritz Peter, in the setting of a compact topological group G. The theorem is a collection of results generalizing the significant facts about the decomposition of the regular representation of any finite group, as discovered by Ferdinand Georg Frobenius and Issai Schur. - -Let G be a compact group. The theorem has three parts. The first part states that the matrix coefficients of irreducible representations of G are dense in the space C(G) of continuous complex-valued functions on G, and thus also in the space L2(G) of square-integrable functions. The second part asserts the complete reducibility of unitary representations of G. The third part then asserts that the regular representation of G on L2(G) decomposes as the direct sum of all irreducible unitary representations.
Moreover, the matrix coefficients of the irreducible unitary representations form an orthonormal basis of L2(G). In the case that G is the group of unit complex numbers, this last result is simply a standard fact from the theory of Fourier series. - -A matrix coefficient of the group G is a complex-valued function $\varphi$ on G given as the composition -$$ -\varphi = L\circ \pi -$$ - -where π : G → GL(V) is a finite-dimensional (continuous) group representation of G, and L is a linear functional on the vector space of endomorphisms of V (e.g. trace), which contains GL(V) as an open subset. Matrix coefficients are continuous, since representations are by definition continuous, and linear functionals on finite-dimensional spaces are also continuous. - -The first part of the Peter–Weyl theorem asserts:
    Peter–Weyl Theorem (Part I). The set of matrix coefficients of G is dense in the space of continuous complex functions C(G) on G, equipped with the uniform norm.
- -This first result resembles the Stone–Weierstrass theorem in that it indicates the density of a set of functions in the space of all continuous functions, subject only to an algebraic characterization. In fact, the matrix coefficients form a unital algebra invariant under complex conjugation, because the product of two matrix coefficients is a matrix coefficient of the tensor product representation, and the complex conjugate is a matrix coefficient of the dual representation. Hence the theorem follows directly from the Stone–Weierstrass theorem if the matrix coefficients separate points, which is obvious if G is a matrix group. Conversely, it is a consequence of the theorem that any compact Lie group is isomorphic to a matrix group. - -A corollary of this result is that the matrix coefficients of G are dense in L2(G). - -The second part of the theorem gives the existence of a decomposition of a unitary representation of G into finite-dimensional representations. Now, groups were intuitively conceived as rotations of geometric objects, so it is only natural to study representations which essentially arise from continuous actions on Hilbert spaces. (For those who were first introduced to dual groups consisting of characters, which are the continuous homomorphisms into the circle group, this approach is similar except that the circle group is (ultimately) generalised to the group of unitary operators on a given Hilbert space.) - -Let G be a topological group and H a complex Hilbert space. - -A continuous action ∗ : G × H → H gives rise to a continuous map ρ : G → $H^H$ (functions from H to H with the strong topology) defined by ρ(g)(v) = ∗(g,v). This map is clearly a homomorphism from G into GL(H), the homeomorphic automorphisms of H. Conversely, given such a map, we can uniquely recover the action in the obvious way. - -Thus we define the representations of G on a Hilbert space H to be those group homomorphisms, ρ, which arise from continuous actions of G on H. We say that a representation ρ is unitary if ρ(g) is a unitary operator for all g ∈ G; i.e., $\langle \rho(g)v,\rho(g)w\rangle = \langle v,w\rangle$ for all v, w ∈ H. (I.e. it is unitary if ρ : G → U(H). Notice how this generalises the special case of the one-dimensional Hilbert space, where U(C) is just the circle group.) - -Given these definitions, we can state the second part of the Peter–Weyl theorem:
    Peter–Weyl Theorem (Part II). Let ρ be a unitary representation of a compact group G on a complex Hilbert space H. Then H splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of G.
- -To state the third and final part of the theorem, there is a natural Hilbert space over G consisting of square-integrable functions, $L^2(G)$; this makes sense because Haar measure exists on G. The group G has a unitary representation ρ on $L^2(G)$ given by acting on the left, via -$$ -\rho(h)f(g) = f(h^{-1}g). -$$ - -The final statement of the Peter–Weyl theorem gives an explicit orthonormal basis of $L^2(G)$. Roughly speaking, it asserts that the matrix coefficients for G, suitably renormalized, are an orthonormal basis of L2(G). In particular, $L^2(G)$ decomposes into an orthogonal direct sum of all the irreducible unitary representations, in which the multiplicity of each irreducible representation is equal to its degree (that is, the dimension of the underlying space of the representation). Thus, -$$ -L^2(G) = \underset{\pi\in\Sigma}{\widehat{\bigoplus}} E_\pi^{\oplus\dim E_\pi} -$$ - -where Σ denotes the set of (isomorphism classes of) irreducible unitary representations of G, and the summation denotes the closure of the direct sum of the total spaces Eπ of the representations π. - -We may also regard $L^2(G)$ as a representation of the direct product group $G\times G$, with the two factors acting by translation on the left and the right, respectively. Fix a representation $(\pi,E_\pi)$ of $G$. The space of matrix coefficients for the representation may be identified with $\operatorname{End}(E_\pi)$, the space of linear maps of $E_\pi$ to itself. The natural left and right action of $G\times G$ on the matrix coefficients corresponds to the action on $\operatorname{End}(E_\pi)$ given by -$$ -(g,h)\cdot A=\pi(g)A\pi(h)^{-1}. -$$ - -Then we may decompose $L^2(G)$ as a unitary representation of $G\times G$ in the form -$$ -L^2(G) = \underset{\pi\in\Sigma}{\widehat{\bigoplus}} E_\pi\otimes E_\pi^*. -$$ - -Finally, we may form an orthonormal basis for $L^2(G)$ as follows. Suppose that a representative π is chosen for each isomorphism class of irreducible unitary representation, and denote the collection of all such π by Σ. Let $u_{ij}^{(\pi)}$ be the matrix coefficients of π in an orthonormal basis, in other words -$$ -u^{(\pi)}_{ij}(g) = \langle \pi(g)e_j, e_i\rangle, -$$ - -for each g ∈ G. Finally, let d(π) be the degree of the representation π. The theorem now asserts that the set of functions -$$ -\left\{\sqrt{d^{(\pi)}}u^{(\pi)}_{ij}\mid \pi\in\Sigma, 1\le i,j\le d^{(\pi)}\right\} -$$ - -is an orthonormal basis of $L^2(G).$ - -A function $f$ on G is called a class function if $f(hgh^{-1})=f(g)$ for all $g$ and $h$ in G. The space of square-integrable class functions forms a closed subspace of $L^2(G)$, and therefore a Hilbert space in its own right. Within the space of matrix coefficients for a fixed representation $\pi$ is the character $\chi_\pi$ of $\pi$, defined by -$$ -\chi_\pi(g)=\operatorname{trace}(\pi(g)). -$$ - -In the notation above, the character is the sum of the diagonal matrix coefficients: -$$ -\chi_\pi=\sum_{i=1}^{d^{(\pi)}}u_{ii}^{(\pi)}. -$$ - -An important consequence of the preceding result is the following: - -Theorem: The characters of the irreducible representations of G form an orthonormal basis for the space of square-integrable class functions on G. - -This result plays an important part in Weyl's classification of the representations of a connected compact Lie group. - -A simple but helpful example is the case of the group of complex numbers of magnitude 1, $G=S^1$.
In this case, the irreducible representations are one-dimensional and given by -$$ -\pi_n(e^{i\theta})=e^{in\theta}. -$$ - -There is then a single matrix coefficient for each representation, the function -$$ -u_n(e^{i\theta})=e^{in\theta}. -$$ - -The last part of the Peter–Weyl theorem then asserts in this case that these functions form an orthonormal basis for $L^2(S^1)$. Here, the theorem reduces to a standard result from the theory of Fourier series. - -For any compact group G, we can regard the decomposition of $L^2(G)$ in terms of matrix coefficients as a generalization of the theory of Fourier series. Indeed, this decomposition is often referred to as a Fourier series. - -We use the standard representation of the group SU(2) as -$$ - \operatorname{SU}(2) = \left \{ \begin{pmatrix} \alpha&-\overline{\beta}\\ \beta & \overline{\alpha} \end{pmatrix}: \ \ \alpha,\beta\in\mathbb{C}, |\alpha|^2 + |\beta|^2 = 1\right \} ~. -$$ - -Thus, SU(2) is represented as the 3-sphere $S^3$ sitting inside $\mathbb{C}^2=\mathbb{R}^4$. - -The irreducible representations of SU(2), meanwhile, are labeled by a non-negative integer $m$ and can be realized as the natural action of SU(2) on the space of homogeneous polynomials of degree $m$ in two complex variables. The matrix coefficients of the $m$th representation are hyperspherical harmonics of degree $m$, that is, the restrictions to $S^3$ of homogeneous harmonic polynomials of degree $m$ in $\alpha$ and $\beta$. The key to verifying this claim is to compute that for any two complex numbers $z_1$ and $z_2$, the function -$$ -(\alpha,\beta)\mapsto (z_1\alpha+z_2\beta)^m -$$ - -is harmonic as a function of $(\alpha,\beta)\in\mathbb{C}^2=\mathbb{R}^4$. - -In this case, finding an orthonormal basis for $L^2(\operatorname{SU}(2))=L^2(S^3)$ consisting of matrix coefficients amounts to finding an orthonormal basis consisting of hyperspherical harmonics, which is a standard construction in analysis on spheres. - -The Peter–Weyl theorem, specifically the assertion that the characters form an orthonormal basis for the space of square-integrable class functions, plays a key role in the classification of the irreducible representations of a connected compact Lie group. The argument also depends on the Weyl integral formula (for class functions) and the Weyl character formula. - -One important consequence of the Peter–Weyl theorem is the following: - -Theorem: Every compact Lie group has a faithful finite-dimensional representation and is therefore isomorphic to a closed subgroup of $\operatorname{GL}(n;\mathbb{C})$ for some $n$. - -From the Peter–Weyl theorem, one can deduce a significant general structure theorem. Let G be a compact topological group, which we assume to be Hausdorff. For any finite-dimensional G-invariant subspace V in L2(G), where G acts on the left, we consider the image of G in GL(V). This image is a closed subgroup of the Lie group GL(V), since G is compact. It follows by a theorem of Élie Cartan that the image of G is a Lie group also. - -If we now take the limit (in the sense of category theory) over all such spaces V, we get a result about G: Because G acts faithfully on L2(G), G is an inverse limit of Lie groups. It may of course not itself be a Lie group: it may for example be a profinite group.
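The circle-group case lends itself to a quick numerical sanity check. The sketch below (Python with numpy; the grid size and the sampled indices are assumptions of this illustration) approximates the $L^2(S^1)$ inner product with respect to normalized Haar measure $d\theta/2\pi$ and confirms that the matrix coefficients $u_n(e^{i\theta}) = e^{in\theta}$ are orthonormal.

```python
# Numerical check that u_n(e^{i*theta}) = e^{i*n*theta} form an orthonormal
# family in L^2(S^1) with normalized Haar measure d(theta)/(2*pi).
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)

def u(n):
    return np.exp(1j * n * theta)

def inner(f, g):
    # <f, g> = (1/2pi) * integral of f * conj(g), approximated by a Riemann sum
    return np.mean(f * np.conj(g))

print(abs(inner(u(3), u(3))))  # ~ 1.0: unit norm
print(abs(inner(u(3), u(5))))  # ~ 0.0: orthogonality of distinct coefficients
```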
diff --git a/wiki/wikipedia/2535.txt b/wiki/wikipedia/2535.txt deleted file mode 100644 index b9e6711c24c892f2b7d261fffb5f024152772a9a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2535.txt +++ /dev/null @@ -1 +0,0 @@ -#Redirect Little's law diff --git a/wiki/wikipedia/2536.txt b/wiki/wikipedia/2536.txt deleted file mode 100644 index 3784916aab6e87b50b3614f2711b9dd855b7e879..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2536.txt +++ /dev/null @@ -1,68 +0,0 @@ -Hilbert's 16th problem was posed by David Hilbert at the Paris conference of the International Congress of Mathematicians in 1900, as part of his list of 23 problems in mathematics. - -The original problem was posed as the Problem of the topology of algebraic curves and surfaces (Problem der Topologie algebraischer Kurven und Flächen). - -In fact, the problem consists of two similar problems in different branches of mathematics: - -* An investigation of the relative positions of the branches of real algebraic curves of degree n (and similarly for algebraic surfaces). - -* The determination of the upper bound for the number of limit cycles in two-dimensional polynomial vector fields of degree n and an investigation of their relative positions. - -The first problem is still unsolved for n = 8. Therefore, it is what is usually meant when talking about Hilbert's sixteenth problem in real algebraic geometry. The second problem also remains unsolved: no upper bound for the number of limit cycles is known for any n > 1, and this is what is usually meant by Hilbert's sixteenth problem in the field of dynamical systems. - -The Spanish Royal Society for Mathematics published an explanation of Hilbert's sixteenth problem. - -In 1876 Harnack investigated algebraic curves in the real projective plane and found that curves of degree n could have no more than -$$ - {n^2-3n+4 \over 2} -$$ - -separate connected components. Furthermore, he showed how to construct curves that attained that upper bound, and thus that it was the best possible bound. Curves with that number of components are called M-curves. - -Hilbert had investigated the M-curves of degree 6 and found that the 11 components were always grouped in a certain way. His challenge to the mathematical community now was to completely investigate the possible configurations of the components of the M-curves. - -Furthermore, he requested a generalization of Harnack's curve theorem to algebraic surfaces and a similar investigation of surfaces with the maximum number of components. - -Here we consider polynomial vector fields in the real plane, that is, systems of differential equations of the form -$$ - {dx \over dt}=P(x,y), \qquad {dy \over dt}=Q(x,y) -$$ - -where both P and Q are real polynomials of degree n. - -These polynomial vector fields were studied by Poincaré, who had the idea of abandoning the search for exact solutions to the system and instead attempting to study the qualitative features of the collection of all possible solutions. - -Among many important discoveries, he found that the limit sets of such solutions need not be a stationary point, but could rather be a periodic solution. Such solutions are called limit cycles. - -The second part of Hilbert's 16th problem is to decide an upper bound for the number of limit cycles in polynomial vector fields of degree n and, similar to the first part, to investigate their relative positions.
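Limit cycles are easy to observe numerically. The following sketch (Python with numpy/scipy; the van der Pol system, the parameter value, and the time horizon are assumptions chosen for illustration) integrates a degree-3 polynomial vector field from two different initial conditions and shows both trajectories settling onto the same periodic orbit.

```python
# Trajectories of the van der Pol system, a polynomial vector field of
# degree 3, spiral onto a single limit cycle of amplitude close to 2.
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, z, mu=1.0):
    x, y = z
    # P(x, y) = y and Q(x, y) = mu*(1 - x^2)*y - x are polynomials
    return [y, mu * (1.0 - x**2) * y - x]

for z0 in ([0.1, 0.0], [3.0, 0.0]):   # one start inside, one outside the cycle
    sol = solve_ivp(van_der_pol, (0.0, 100.0), z0, dense_output=True)
    x_late = sol.sol(np.linspace(90.0, 100.0, 500))[0]   # late-time x values
    print(round(float(x_late.max()), 2))                 # both print ~2.0
```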
- -It was shown in 1991/1992 by Yulii Ilyashenko and Jean Écalle that every polynomial vector field in the plane has only finitely many limit cycles (a 1923 article by Henri Dulac claiming a proof of this statement had been shown to contain a gap in 1981). This statement is not obvious, since it is easy to construct smooth ($C^\infty$) vector fields in the plane with infinitely many concentric limit cycles. - -The question of whether there exists a finite upper bound H(n) for the number of limit cycles of planar polynomial vector fields of degree n remains unsolved for any n > 1. (H(1) = 0, since linear vector fields do not have limit cycles.) Evgenii Landis and Ivan Petrovsky claimed a solution in the 1950s, but it was shown wrong in the early 1960s. Quadratic plane vector fields with four limit cycles are known. In general, the difficulties in estimating the number of limit cycles by numerical integration are due to nested limit cycles with very narrow regions of attraction, which are hidden attractors, and to semi-stable limit cycles. - -In his speech, after presenting the first, algebraic part of the problem, Hilbert continues: - -Following this purely algebraic problem I would like to raise a question that, it seems to me, can be attacked by the same method of continuous coefficient changing, and whose answer is of similar importance to the topology of the families of curves defined by differential equations – that is the question of the upper bound and position of the Poincaré boundary cycles (cycles limites) for a differential equation of first order of the form -$$ - {dy \over dx} = {Y \over X} -$$ - -where X, Y are integral rational functions of the nth degree in x, y respectively, or written homogeneously: -$$ -X \left( y {dz \over dt} - z {dy \over dt} \right) + Y\left(z {dx \over dt} - x {dz \over dt} \right) + Z\left(x {dy \over dt} - y {dx \over dt} \right) = 0 -$$ - -where X, Y, Z are integral rational homogeneous functions of the nth degree in x, y, z, and the latter are to be considered functions of the parameter t. diff --git a/wiki/wikipedia/2537.txt b/wiki/wikipedia/2537.txt deleted file mode 100644 index 3f7cb7228bb530f180f118c2078939c4cb646fae..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2537.txt +++ /dev/null @@ -1,19 +0,0 @@ -Holland's schema theorem, also called the fundamental theorem of genetic algorithms, is an inequality that results from coarse-graining an equation for evolutionary dynamics. The Schema Theorem says that short, low-order schemata with above-average fitness increase exponentially in frequency in successive generations. The theorem was proposed by John Holland in the 1970s. It was initially widely taken to be the foundation for explanations of the power of genetic algorithms. However, this interpretation of its implications has been criticized in several publications, in which the Schema Theorem is shown to be a special case of the Price equation with the schema indicator function as the macroscopic measurement. - -A schema is a template that identifies a subset of strings with similarities at certain string positions. Schemata are a special case of cylinder sets, and hence form a topological space. - -Consider binary strings of length 6. The schema 1*10*1 describes the set of all strings of length 6 with 1's at positions 1, 3 and 6 and a 0 at position 4. The * is a wildcard symbol, which means that positions 2 and 5 can have a value of either 1 or 0.
The order of a schema $ o(H)$ is defined as the number of fixed positions in the template, while the defining length $ \delta(H) $ is the distance between the first and last fixed positions. The order of 1*10*1 is 4 and its defining length is 5. The fitness of a schema is the average fitness of all strings matching the schema. The fitness of a string is a measure of the value of the encoded problem solution, as computed by a problem-specific evaluation function. Using the established methods and genetic operators of genetic algorithms, the schema theorem states that short, low-order schemata with above-average fitness increase exponentially in successive generations. Expressed as a formula: -$$ -\operatorname{E}(m(H,t+1)) \geq {m(H,t) f(H) \over a_t}[1-p]. -$$ - -Here $m(H,t)$ is the number of strings belonging to schema $H$ at generation $t$, $f(H)$ is the observed average fitness of schema $H$ and $a_t$ is the observed average fitness at generation $t$. The probability of disruption $p$ is the probability that crossover or mutation will destroy the schema $H$. It can be expressed as: -$$ -p = {\delta(H) \over l-1}p_c + o(H) p_m -$$ - -where $ o(H)$ is the order of the schema, $l$ is the length of the code, $ p_m$ is the probability of mutation and $ p_c $ is the probability of crossover. So a schema with a shorter defining length $ \delta(H) $ is less likely to be disrupted.
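These quantities are simple to compute. The sketch below (Python; the function names and the crossover/mutation rates are illustrative assumptions) evaluates the order, the defining length, and the disruption probability $p$ for the running example H = 1*10*1.

```python
# Schema quantities for H = 1*10*1, following the definitions above.
def order(H):
    # o(H): number of fixed (non-wildcard) positions
    return sum(c != '*' for c in H)

def defining_length(H):
    # delta(H): distance between the first and last fixed positions
    fixed = [i for i, c in enumerate(H) if c != '*']
    return fixed[-1] - fixed[0]

def disruption_probability(H, p_c, p_m):
    # p = delta(H)/(l-1) * p_c + o(H) * p_m, with l the length of the code
    l = len(H)
    return defining_length(H) / (l - 1) * p_c + order(H) * p_m

H = '1*10*1'
print(order(H), defining_length(H))                  # 4 5
print(disruption_probability(H, p_c=0.7, p_m=0.01))  # 5/5*0.7 + 4*0.01 = 0.74
```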
An often misunderstood point is why the Schema Theorem is an inequality rather than an equality. The answer is in fact simple: the Theorem neglects the small, yet non-zero, probability that a string belonging to the schema $H$ will be created "from scratch" by mutation of a single string (or recombination of two strings) that did not belong to $H$ in the previous generation. Moreover, the expression for $p$ is clearly pessimistic: depending on the mating partner, recombination may not disrupt the schema even when a cross point is selected between the first and the last fixed position of $H$. - -The schema theorem holds under the assumption of a genetic algorithm that maintains an infinitely large population, but does not always carry over to (finite) practice: due to sampling error in the initial population, genetic algorithms may converge on schemata that have no selective advantage. This happens in particular in multimodal optimization, where a function can have multiple peaks: the population may drift to prefer one of the peaks, ignoring the others. - -The reason that the Schema Theorem cannot explain the power of genetic algorithms is that it holds for all problem instances, and cannot distinguish between problems in which genetic algorithms perform poorly and problems for which genetic algorithms perform well. diff --git a/wiki/wikipedia/2538.txt b/wiki/wikipedia/2538.txt deleted file mode 100644 index f65f9e04401ac1c4072a2c7b03d28211d4ed85a9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2538.txt +++ /dev/null @@ -1,9 +0,0 @@ -In convex analysis, the Fenchel–Moreau theorem (named after Werner Fenchel and Jean Jacques Moreau) or Fenchel biconjugation theorem (or just biconjugation theorem) is a theorem which gives necessary and sufficient conditions for a function to be equal to its biconjugate. This is in contrast to the general property that $f^{**} \leq f$ holds for any function $f$. The theorem can be seen as a generalization of the bipolar theorem. It is used in duality theory to prove strong duality (via the perturbation function). - -Let $(X,\tau)$ be a Hausdorff locally convex space. For any extended real-valued function $f: X \to \mathbb{R} \cup \{\pm \infty\}$, it follows that $f = f^{**}$ if and only if one of the following is true: - -# $f$ is a proper, lower semi-continuous, and convex function, - -# $f \equiv +\infty$, or - -# $f \equiv -\infty$. diff --git a/wiki/wikipedia/2539.txt b/wiki/wikipedia/2539.txt deleted file mode 100644 index 593ba48138a8883bcfa941e04f8b5f4bb88046a9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2539.txt +++ /dev/null @@ -1,5 +0,0 @@ -Moral Machine is an online platform, developed by Iyad Rahwan's Scalable Cooperation group at the Massachusetts Institute of Technology, that generates moral dilemmas and collects information on the decisions that people make between two destructive outcomes. The platform is the idea of Iyad Rahwan and social psychologists Azim Shariff and Jean-François Bonnefon, who conceived of it ahead of the publication of their article about the ethics of self-driving cars. The key contributors to building the platform were MIT Media Lab graduate students Edmond Awad and Sohan Dsouza. - -The presented scenarios are often variations of the trolley problem, and the information collected would be used for further research regarding the decisions that machine intelligence must make in the future.
For example, as artificial intelligence plays an increasingly significant role in autonomous driving technology, research projects like Moral Machine help to find solutions for the challenging life-and-death decisions that will face self-driving vehicles. - -Analysis of the data collected through Moral Machine showed broad differences in relative preferences among different countries, and correlations between these preferences and various national metrics. diff --git a/wiki/wikipedia/254.txt b/wiki/wikipedia/254.txt deleted file mode 100644 index c37886508c6191b40de72c594ed19bd5ebbdc0c1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/254.txt +++ /dev/null @@ -1,3 +0,0 @@ -In automata theory (a branch of theoretical computer science), NFA minimization is the task of transforming a given nondeterministic finite automaton (NFA) into an equivalent NFA that has a minimum number of states, transitions, or both. While efficient algorithms exist for DFA minimization, NFA minimization is PSPACE-complete. No efficient (polynomial time) algorithms are known, and under the standard assumption P ≠ PSPACE, none exist. The most efficient known algorithm is the Kameda‒Weiner algorithm. - -Unlike deterministic finite automata, minimal NFAs may not be unique. There may be multiple NFAs of the same size which accept the same regular language, but for which there is no equivalent NFA or DFA with fewer states. diff --git a/wiki/wikipedia/2540.txt b/wiki/wikipedia/2540.txt deleted file mode 100644 index b55db6a6beb5ebd409a277599221dd6c1dd3ad78..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2540.txt +++ /dev/null @@ -1,376 +0,0 @@ -The strip packing problem is a 2-dimensional geometric minimization problem. - -Given a set of axis-aligned rectangles and a strip of bounded width and infinite height, determine an overlap-free packing of the rectangles into the strip that minimizes its height. - -This problem is a cutting and packing problem and is classified as an Open Dimension Problem according to Wäscher et al. - -This problem arises in the area of scheduling, where it models jobs that require a contiguous portion of memory over a given time period. Another example is the area of industrial manufacturing, where rectangular pieces need to be cut out of a sheet of material (e.g., cloth or paper) that has a fixed width but infinite length, and one wants to minimize the wasted material. - -This problem was first studied in 1980. It is strongly NP-hard, and there exists no polynomial time approximation algorithm with a ratio smaller than $3/2$ unless $P = NP$; this lower bound can be proven by a reduction from the strongly NP-complete 3-partition problem. The best approximation ratio achieved so far, by a polynomial time algorithm due to Harren et al., is $(5/3+\varepsilon)$. - -Note that both lower bounds $3/2$ and $5/4$ also hold for the case that a rotation of the items by 90 degrees is allowed. - -Additionally, it was proven by Ashok et al. that strip packing is W[1]-hard when parameterized by the height of the optimal packing. - -There are two trivial lower bounds on optimal solutions. - -The first is the height of the largest item. - -Define $h_{\max}(I) := \max\{h(i) | i \in \mathcal{I}\}$. - -Then it holds that -$$ -OPT(I) \geq h_{\max}(I) -$$. - -Another lower bound is given by the total area of the items. - -Define $\mathrm{AREA}(\mathcal{I}) := \sum_{i \in \mathcal{I}}h(i)w(i)$; then it holds that -$$ -OPT(I) \geq \mathrm{AREA}(\mathcal{I})/W -$$.
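Both trivial bounds are one-liners to compute. The sketch below (Python; items as (width, height) pairs, with the example instance an assumption of this illustration) returns the larger of the two.

```python
# The two trivial lower bounds on OPT: the tallest item and total area / W.
def trivial_lower_bound(items, W):
    h_max = max(h for (w, h) in items)          # OPT >= h_max(I)
    area = sum(w * h for (w, h) in items) / W   # OPT >= AREA(I) / W
    return max(h_max, area)

items = [(4, 3), (3, 2), (2, 2), (5, 1)]        # hypothetical instance
print(trivial_lower_bound(items, W=6))          # max(3, 27/6) = 4.5
```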
- -The following two lower bounds take notice of the fact that certain items cannot be placed next to each other in the strip, and can be computed in $\mathcal{O}(n \log(n))$. - -For the first lower bound, assume that the items are sorted by non-increasing height. Define $k := \max \{i : \sum_{j = 1}^{i} w(j) \leq W\}$. For each $l > k $, define $i(l) \leq k$ as the first index such that $ w(l) + \sum_{j = 1}^{i(l)} w(j) > W$. Then it holds that -$$ -OPT(I) \geq \max \{h(l) + h(i(l))| l >k \wedge w(l) + \sum_{j = 1}^{i(l)} w(j) > W\} -$$. - -For the second lower bound, partition the set of items into three sets. Let $\alpha \in [1, W/2]\cap \mathbb{N}$ and define $\mathcal{I}_1(\alpha) := \{i \in \mathcal{I} | w(i) > W - \alpha\}$, $\mathcal{I}_2(\alpha) := \{i \in \mathcal{I} | W - \alpha \geq w(i) > W/2\}$, and $\mathcal{I}_3(\alpha) := \{i \in \mathcal{I} | W/2 \geq w(i) > \alpha \}$. Then it holds that -$$ -OPT(I) \geq \max_{\alpha \in [1, W/2]\cap \mathbb{N}} \Bigg\{ \sum_{i \in \mathcal{I}_1(\alpha) \cup \mathcal{I}_2(\alpha)} h(i) + \left(\frac{\sum_{i \in \mathcal{I}_3(\alpha)} h(i)w(i) - \sum_{i \in \mathcal{I}_2(\alpha)}(W -w(i))h(i)}{W}\right)_+ \Bigg\}, -$$ where $(x)_+ := \max\{x,0\}$ for each $x \in \mathbb{R}$. - -On the other hand, Steinberg has shown that the height of an optimal solution can be upper bounded by -$$ -OPT(I) \leq 2\max\{h_{\max}(I),\mathrm{AREA}(\mathcal{I})/W\}. -$$ - -More precisely, he showed that, given $W \geq w_{\max}(\mathcal{I})$ and $H \geq h_{\max}(I)$, the items $\mathcal{I}$ can be placed inside a box with width $W$ and height $H$ if -$$ - WH \geq 2\mathrm{AREA}(\mathcal{I}) + (2w_{\max}(\mathcal{I}) - W)_+(2h_{\max}(I) - H)_+ -$$, where $(x)_+ := \max\{x,0\}$. - -Since this problem is NP-hard, approximation algorithms have been studied for it. Most of the heuristic approaches have an approximation ratio between $2$ and $3$. Finding an algorithm with a ratio below $2$ seems to be hard, and the algorithms achieving it become more complex in both their running time and their description. The smallest approximation ratio achieved so far is $(5/3+\varepsilon)$. - -This algorithm, known as Bottom-Left (BL), was first described by Baker et al. It works as follows: - -Let $ L $ be a sequence of rectangular items. The algorithm iterates the sequence in the given order. For each considered item $ r \in L $, it searches for the bottom-most position to place it and then shifts it as far to the left as possible. Hence, it places $ r $ at the bottom-most, left-most possible coordinate $ (x,y)$ in the strip. - -This algorithm has the following properties: - -* The approximation ratio of this algorithm cannot be bounded by a constant. More precisely, Baker et al. showed that for each $ M > 0 $ there exists a list $ L $ of rectangular items ordered by increasing width such that $ BL(L)/ OPT(L) > M $, where $ BL(L) $ is the height of the packing created by the BL algorithm and $ OPT(L) $ is the height of the optimal solution for $ L $. - -* If the items are ordered by decreasing widths, then $ BL(L)/ OPT(L) \leq 3 $. - -* If the items are all squares and are ordered by decreasing widths, then $ BL(L)/ OPT(L) \leq 2 $. - -* For any $ \delta > 0 $, there exists a list $ L $ of rectangles ordered by decreasing widths such that $ BL(L)/ OPT(L) > 3 - \delta $. - -* For any $ \delta > 0 $, there exists a list $ L $ of squares ordered by decreasing widths such that $ BL(L)/ OPT(L) > 2 - \delta $.
* For each $ \varepsilon \in (0,1] $, there exists an instance containing only squares where each order of the squares $ L $ has a ratio of $ BL(L)/ OPT(L) > \frac{12}{11 +\varepsilon} $, i.e., there exist instances where BL does not find the optimum even when iterating all possible orders of the items. - -This algorithm, known as Next-Fit Decreasing-Height (NFDH), was first described by Coffman et al. in 1980 and works as follows: - -Let $ \mathcal{I} $ be the given set of rectangular items. First, the algorithm sorts the items by order of nonincreasing height. Then, starting at position $ (0,0) $, the algorithm places the items next to each other in the strip until the next item would overlap the right border of the strip. At this point, the algorithm defines a new level at the top of the tallest item in the current level and places the items next to each other in this new level. - -This algorithm has the following properties: - -* The running time can be bounded by $ \mathcal{O}(|\mathcal{I}| \log(|\mathcal{I}|))$, and if the items are already sorted, even by $\mathcal{O}(|\mathcal{I}|)$. - -* For every set of items $ \mathcal{I} $, it produces a packing of height $ NFDH(\mathcal{I}) \leq 2 OPT(\mathcal{I}) + h_{\max} \leq 3 OPT(\mathcal{I})$, where $ h_{\max} $ is the largest height of an item in $ \mathcal{I} $. - -* For every $ \varepsilon > 0 $, there exists a set of rectangles $ \mathcal{I} $ such that $ NFDH(\mathcal{I}) > (2-\varepsilon) OPT(\mathcal{I}).$ - -* The packing generated is a guillotine packing. This means the items can be obtained through a sequence of horizontal or vertical edge-to-edge cuts. - -This algorithm, known as First-Fit Decreasing-Height (FFDH) and first described by Coffman et al. in 1980, works similarly to the NFDH algorithm. However, when placing the next item, the algorithm scans the levels from bottom to top and places the item in the first level on which it will fit. A new level is only opened if the item does not fit in any previous ones. - -This algorithm has the following properties: - -* The running time can be bounded by $ \mathcal{O}(|\mathcal{I}|^2)$, since there are at most $ |\mathcal{I}|$ levels. - -* For every set of items $ \mathcal{I} $ it produces a packing of height $ FFDH(\mathcal{I}) \leq 1.7 OPT(\mathcal{I}) + h_{\max} \leq 2.7 OPT(\mathcal{I})$, where $ h_{\max} $ is the largest height of an item in $ \mathcal{I} $. - -* Let $ m \geq 2 $. For any set of items $ \mathcal{I} $ and strip with width $W$ such that $ w(i) \leq W/m $ for each $ i \in \mathcal{I} $, it holds that $ FFDH(\mathcal{I}) \leq \left(1 + 1/m\right) OPT(\mathcal{I}) + h_{\max}$. Furthermore, for each $ \varepsilon > 0 $, there exists such a set of items $ \mathcal{I} $ with $ FFDH(\mathcal{I}) > \left(1 + 1/m -\varepsilon\right)OPT(\mathcal{I})$. - -* If all the items in $ \mathcal{I} $ are squares, it holds that $ FFDH(\mathcal{I}) \leq (3/2) OPT(\mathcal{I}) + h_{\max}$. Furthermore, for each $ \varepsilon >0$, there exists a set of squares $ \mathcal{I} $ such that $ FFDH(\mathcal{I}) > \left(3/2-\varepsilon\right)OPT(\mathcal{I})$. - -* The packing generated is a guillotine packing. This means the items can be obtained through a sequence of horizontal or vertical edge-to-edge cuts. - -This algorithm, known as Split-Fit (SF), was first described by Coffman et al. For a given set of items $ \mathcal{I} $ and strip with width $ W$, it works as follows:
- -# Divide $ \mathcal{I} $ into two sets $ \mathcal{I}_{wide} $ and $ \mathcal{I}_{narrow} $, such that $ \mathcal{I}_{wide} $ contains all the items $ i \in \mathcal{I}$ with a width $ w(i) > W/(m+1) $ while $ \mathcal{I}_{narrow} $ contains all the items with $ w(i) \leq W/(m+1) $. - -# Order $ \mathcal{I}_{wide} $ and $ \mathcal{I}_{narrow} $ by nonincreasing height. - -# Pack the items in $ \mathcal{I}_{wide} $ with the FFDH algorithm. - -# Reorder the levels/shelves constructed by FFDH such that all the shelves with a total width larger than $ W(m+1)/(m+2) $ are below the more narrow ones. - -# This leaves a rectangular area $ R $ of with $ W/(m+2) $, next to more narrow levels/shelves, that contains no item. - -# Use the FFDH algorithm to pack the items in $ \mathcal{I}_{narrow} $ using the area $ R $ as well. - -This algorithm has the following properties: - -* For every set of items $ \mathcal{I} $ and the corresponding $ m $, it holds that $ SF(\mathcal{I}) \leq (m+2)/(m+1)OPT(\mathcal{I}) + 2h_{\max}$. Note that for $ m=1 $, it holds that $ SF(\mathcal{I}) \leq (3/2) OPT(\mathcal{I}) + 2h_{\max}$ - -* For each $ \varepsilon >0$, there is a set of items $ \mathcal{I} $ such that $ SF(\mathcal{I}) > \left((m+2)/(m+1) -\varepsilon\right)OPT(\mathcal{I})$. - -For a given set of items $ \mathcal{I} $ and strip with width $ W$, it works as follows: - -# Find all the items with a width larger than $ W/2 $ and stack them at the bottom of the strip (in random order). Call the total height of these items $ h_0 $. All the other items will be placed above $ h_0 $. - -# Sort all the remaining items in nonincreasing order of height. The items will be placed in this order. - -# Consider the horizontal line at $ h_0 $ as a shelf. The algorithm places the items on this shelf in nonincreasing order of height until no item is left or the next one does not fit. - -# Draw a vertical line at $ W/2 $, which cuts the strip into two equal halves. - -# Let $ h_l $ be the highest point covered by any item in the left half and $ h_r $ the corresponding point on the right half. Draw two horizontal line segments of length $ W/2 $ at $ h_l $ and $ h_r $ across the left and the right half of the strip. These two lines build new shelves on which the algorithm will place the items, as in step 3. Choose the half which has the lower shelf and place the items on this shelf until no other item fits. Repeat this step until no item is left. - -This algorithm has the following properties: - -* The running time can be bounded by $ \mathcal{O}(|\mathcal{I}| \log(|\mathcal{I}|))$ and if the items are already sorted even by $\mathcal{O}(|\mathcal{I}|)$. - -* For every set of items $ \mathcal{I} $ it produces a packing of height $ A(\mathcal{I}) \leq 2 OPT(\mathcal{I}) + h_{\max}/2 \leq 2.5 OPT(\mathcal{I})$, where $ h_{\max} $ is the largest height of an item in $ \mathcal{I} $. - -This algorithm is an extension of Sleator's approach and was first described by Golan. - -It places the items in nonincreasing order of width. - -The intuitive idea is to split the strip into sub-strips while placing some items. - -Whenever possible, the algorithm places the current item $ i $ side-by-side of an already placed item $ j $. - -In this case, it splits the corresponding sub-strip into two pieces: one containing the first item $ j $ and the other containing the current item $ i $. - -If this is not possible, it places $ i $ on top of an already placed item and does not split the sub-strip. 
This algorithm creates a set S of sub-strips. For each sub-strip s ∈ S we know its lower left corner s.xposition and s.yposition, its width s.width, the horizontal lines parallel to the upper and lower border of the item placed last inside this sub-strip s.upper and s.lower, as well as the width of that item s.itemWidth. - -function Split Algorithm (SP) is - -input: items I, width of the strip W - -output: A packing of the items - -Sort I in nonincreasing order of widths; - -Define empty list S of sub-strips; - -Define a new sub-strip s with s.xposition = 0, s.yposition = 0, s.width = W, s.lower = 0, s.upper = 0, s.itemWidth = W; - -Add s to S; - -while I not empty do - -i := I.pop(); // removes the widest item from I - -Define new list S_2 containing all the sub-strips with s.width - s.itemWidth ≥ i.width; - -// S_2 contains all sub-strips where i fits next to the already placed item - -if S_2 is empty then - -// in this case, place the item on top of another one - -Find the sub-strip s in S with smallest s.upper; // i.e., the least filled sub-strip - -Place i at position (s.xposition, s.upper); - -Update s: s.lower := s.upper; s.upper := s.upper+i.height; s.itemWidth := i.width; - -else - -// in this case, place the item next to another one at the same level and split the corresponding sub-strip at this position - -Find s ∈ S_2 with the smallest s.lower; - -Place i at position (s.xposition + s.itemWidth, s.lower); - -Remove s from S; - -Define two new sub-strips s1 and s2 with - -s1.xposition = s.xposition, s1.yposition = s.upper, s1.width = s.itemWidth, s1.lower = s.upper, s1.upper = s.upper, s1.itemWidth = s.itemWidth; - -s2.xposition = s.xposition+s.itemWidth, s2.yposition = s.lower, s2.width = s.width - s.itemWidth, s2.lower = s.lower, s2.upper = i.height, s2.itemWidth = i.width; - -S.add(s1,s2); - -return the packing; - -end function - -This algorithm has the following properties: - -* The running time can be bounded by $ \mathcal{O}(|\mathcal{I}|^2)$ since the number of sub-strips is bounded by $ |\mathcal{I}|$. - -* For any set of items $\mathcal{I}$ it holds that $ SP(\mathcal{I}) \leq 2 OPT(\mathcal{I}) + h_{\max} \leq 3 OPT(\mathcal{I})$. - -* For any $\varepsilon >0$, there exists a set of items $\mathcal{I}$ such that $ SP(\mathcal{I}) > (3-\varepsilon) OPT(\mathcal{I})$. - -* For any $\varepsilon >0$ and $C>0$, there exists a set of items $\mathcal{I}$ such that $ SP(\mathcal{I}) > (2-\varepsilon) OPT(\mathcal{I})+C$. - -This algorithm, known as Reverse-Fit (RF), was first described by Schiermeyer. The description of this algorithm needs some additional notation. For a placed item $i \in \mathcal{I}$, its lower left corner is denoted by $(a_i,c_i)$ and its upper right corner by $(b_i,d_i)$. - -Given a set of items $\mathcal{I}$ and a strip of width $W$, it works as follows: - -# Stack all the rectangles of width greater than $W/2$ on top of each other (in random order) at the bottom of the strip. Denote by $H_0$ the height of this stack. All other items will be packed above $H_0$. - -# Sort the remaining items in order of nonincreasing height and consider the items in this order in the following steps. Let $h_{\max}$ be the height of the tallest of these remaining items. - -# Place the items one by one left aligned on a shelf defined by $H_0$ until no other item fits on this shelf or there is no item left. Call this shelf the first level. - -# Let $h_1$ be the height of the tallest unpacked item. Define a new shelf at $H_0 + h_{\max} + h_1$.
The algorithm will fill this shelf from right to left, aligning the items to the right, such that the items touch this shelf with their top. Call this shelf the second reverse-level. - -# Place the items into the two shelves due to First-Fit, i.e., placing the items in the first level where they fit and in the second one otherwise. Proceed until there are no items left, or the total width of the items in the second shelf is at least $W/2$. - -# Shift the second reverse-level down until an item from it touches an item from the first level. Define $H_1$ as the new vertical position of the shifted shelf. Let $f$ and $s$ be the right-most pair of touching items with $f$ placed on the first level and $s$ on the second reverse-level. Define $x_r := \max(b_f,b_s)$. - -# If $x_r < W/2$ then $s$ is the last rectangle placed in the second reverse-level. Shift all the other items from this level further down (all by the same amount) until the first one touches an item from the first level. Again the algorithm determines the right-most pair of touching items $f'$ and $s'$. Define $h_2$ as the amount by which the shelf was shifted down. - -## If $h_2 \leq h(s)$ then shift $s$ to the left until it touches another item or the border of the strip. Define the third level at the top of $s'$. - -## If $h_2 > h(s)$ then define the third level at the top of $s'$, and place $s$ left-aligned in this third level, such that it touches an item from the first level on its left. - -# Continue packing the items using the First-Fit heuristic. Each following level (starting at level three) is defined by a horizontal line through the top of the largest item on the previous level. Note that the first item placed in the next level might not touch the border of the strip with its left side, but an item from the first level or the item $s$. - -This algorithm has the following properties: - -* The running time can be bounded by $ \mathcal{O}(|\mathcal{I}|^2)$, since there are at most $ |\mathcal{I}|$ levels. - -* For every set of items $ \mathcal{I} $ it produces a packing of height $ RF(\mathcal{I}) \leq 2 OPT(\mathcal{I})$. - -Steinberg's algorithm is recursive. Given a set of rectangular items $ \mathcal{I}$ and a rectangular target region with width $ W$ and height $ H$, it proposes four reduction rules, each of which places some of the items and leaves a smaller rectangular region with the same properties as before regarding the residual items. - -Consider the following notation: given a set of items $ \mathcal{I}$, we denote by $ h_{\max}(\mathcal{I})$ the tallest item height in $ \mathcal{I}$, by $ w_{\max}(\mathcal{I})$ the largest item width appearing in $ \mathcal{I}$, and by $ \mathrm{AREA}(\mathcal{I}) := \sum_{i \in \mathcal{I}} w(i)h(i)$ the total area of these items. - -Steinberg shows that if -$$ - h_{\max}(\mathcal{I}) \leq H -$$, $ w_{\max}(\mathcal{I}) \leq W $, and $ \mathrm{AREA}(\mathcal{I}) \leq W\cdot H - (2h_{\max}(\mathcal{I}) -H)_+(2w_{\max}(\mathcal{I}) - W)_+ $, where $(a)_+ := \max\{0,a\}$, - -then all the items can be placed inside the target region of size $ W \times H $. - -Each reduction rule will produce a smaller target area and a subset of items that have to be placed. When the condition from above holds before the procedure starts, then the created subproblem will have this property as well. - -Procedure 1: It can be applied if $ w_{\max}(\mathcal{I}) \geq W/2$. - -# Find all the items $i \in \mathcal{I}$ with width $ w(i) \geq W/2$ and remove them from $\mathcal{I}$.
- -# Sort them by nonincreasing width and place them left-aligned at the bottom of the target region. Let $h_0$ be their total height. - -# Find all the items $i \in \mathcal{I}$ with height $ h(i) > H-h_0$. Remove them from $\mathcal{I}$ and place them in a new set $\mathcal{I}_H$. - -# If $\mathcal{I}_H$ is empty, define the new target region as the area above $h_0$, i.e. it has height $H-h_0$ and width $W$. Solve the problem consisting of this new target region and the reduced set of items with one of the procedures. - -# If $\mathcal{I}_H$ is not empty, sort it by nonincreasing height and place the items right-aligned one by one in the upper right corner of the target area. Let $w_0$ be the total width of these items. Define a new target area with width $W-w_0$ and height $H - h_0$ in the upper left corner. Solve the problem consisting of this new target region and the reduced set of items with one of the procedures. - -Procedure 2: It can be applied if the following conditions hold: $ w_{\max}(\mathcal{I}) \leq W/2$, $ h_{\max}(\mathcal{I}) \leq H/2$, and there exist two different items $i,i' \in \mathcal{I}$ with $ w(i) \geq W/4$, $ w(i') \geq W/4$, $ h(i) \geq H/4$, $ h(i') \geq H/4$ and $ 2(\mathrm{AREA}(\mathcal{I}) - w(i)h(i) -w(i')h(i')) \leq (W- \max\{w(i),w(i')\})H$. - -# Find $i$ and $i'$ and remove them from $\mathcal{I}$. - -# Place the wider one in the lower-left corner of the target area and the more narrow one left-aligned on the top of the first. - -# Define a new target area on the right of both these items, such that it has the width $ W- \max\{w(i),w(i')\}$ and height $ H$. - -# Place the residual items in $\mathcal{I}$ into the new target area using one of the procedures. - -Procedure 3: It can be applied if the following conditions hold: $ w_{\max}(\mathcal{I}) \leq W/2$, $ h_{\max}(\mathcal{I}) \leq H/2$, $ |\mathcal{I}| > 1$, and when sorting the items by decreasing width there exists an index $m $ such that, when defining $\mathcal{I'}$ as the first $m $ items, it holds that -$$ - \mathrm{AREA}(\mathcal{I})- WH/4 \leq \mathrm{AREA}(\mathcal{I'}) \leq 3WH/8 -$$ as well as $w(i_{m+1})\leq W/4 $. - -# Set $ W_1 := \max\{W/2, 2\mathrm{AREA}(\mathcal{I'})/H\}$. - -# Define two new rectangular target areas: one at the lower-left corner of the original one with height $H$ and width $W_1$, and the other right of it with height $H$ and width $W-W_1$. - -# Use one of the procedures to place the items in $\mathcal{I'}$ into the first new target area and the items in $\mathcal{I}\setminus\mathcal{I'}$ into the second one. - -Note that procedures 1 to 3 have a symmetric version when swapping the height and the width of the items and the target region. - -Procedure 4: It can be applied if the following conditions hold: $ w_{\max}(\mathcal{I}) \leq W/2$, $ h_{\max}(\mathcal{I}) \leq H/2$, and there exists an item $i \in \mathcal{I}$ such that $w(i) h(i) \geq \mathrm{AREA}(\mathcal{I}) - WH/4$. - -# Place the item $i$ in the lower-left corner of the target area and remove it from $\mathcal{I}$. - -# Define a new target area right of this item such that it has the width $W-w(i)$ and height $H$, and place the residual items inside this area using one of the procedures. - -This algorithm has the following properties: - -* The running time can be bounded by $ \mathcal{O}(|\mathcal{I}| \log(|\mathcal{I}|)^2/\log(\log(|\mathcal{I}|)))$. - -* For every set of items $ \mathcal{I} $ it produces a packing of height $ ST(\mathcal{I}) \leq 2 OPT(\mathcal{I})$.
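To make the level-based heuristics described above concrete, here is a minimal sketch of NFDH (Python; items as (width, height) pairs, and returning only the packing height rather than placement coordinates is a simplification of this illustration, not a reference implementation). FFDH differs only in scanning all open levels for the first one with enough horizontal room.

```python
# Next-Fit Decreasing-Height (NFDH): sort by nonincreasing height, fill the
# current level left to right, and open a new level when an item would
# overlap the right border of the strip.
def nfdh_height(items, W):
    items = sorted(items, key=lambda wh: wh[1], reverse=True)
    total_height = 0.0   # height of all closed levels
    level_height = 0.0   # height of the first (tallest) item on the open level
    x = W                # current fill position; W forces a new first level
    for w, h in items:
        if x + w > W:                     # item would overlap the right border:
            total_height += level_height  # close the current level
            level_height = h              # the new level is as tall as this item
            x = 0.0
        x += w
    return total_height + level_height

print(nfdh_height([(4, 3), (3, 2), (2, 2), (5, 1)], W=6))  # 3 + 2 + 1 = 6
```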
- -To improve upon the lower bound of $3/2$ for polynomial-time algorithms, pseudo-polynomial time algorithms for the strip packing problem have been considered. - -When considering this type of algorithm, all the sizes of the items and the strip are given as integers. Furthermore, the width of the strip $W$ is allowed to appear polynomially in the running time. - -Note that this is no longer considered a polynomial running time since, in the given instance, the width of the strip needs an encoding size of only $ \log(W)$. - -The pseudo-polynomial time algorithms that have been developed mostly use the same approach. It is shown that each optimal solution can be simplified and transformed into one that has one of a constant number of structures. The algorithm then iterates over all these structures and places the items inside using linear and dynamic programming. The best ratio accomplished so far is $(5/4 +\varepsilon) OPT(I) $, while there cannot be a pseudo-polynomial time algorithm with ratio better than $5/4 $ unless $P = NP$. - -In the online variant of strip packing, the items arrive over time. When an item arrives, it has to be placed immediately, before the next item is known. There are two types of online algorithms that have been considered. In the first variant, it is not allowed to alter the packing once an item is placed. In the second, items may be repacked when another item arrives. This variant is called the migration model. - -The quality of an online algorithm is measured by the (absolute) competitive ratio -$$ -\mathrm{sup}_I A(I)/OPT(I), -$$ - -where $ A(I) $ corresponds to the solution generated by the online algorithm and $ OPT(I) $ corresponds to the size of the optimal solution. - -In addition to the absolute competitive ratio, the asymptotic competitive ratio of online algorithms has been studied. For instances $I$ with $h_{\max}(I)\leq 1 $ it is defined as -$$ -\limsup_{OPT(I) \rightarrow \infty} A(I)/OPT(I). -$$ - -Note that all the instances can be scaled such that $h_{\max}(I)\leq 1 $. - -The framework of Han et al. is applicable in the online setting if the online bin packing algorithm belongs to the class Super Harmonic. Thus, Seiden's online bin packing algorithm Harmonic++ implies an algorithm for online strip packing with asymptotic ratio 1.58889. diff --git a/wiki/wikipedia/2541.txt b/wiki/wikipedia/2541.txt deleted file mode 100644 index 63c9481ab4d6a95c387c6a49275d2c53d0f2b6a8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2541.txt +++ /dev/null @@ -1,117 +0,0 @@ -A race condition or race hazard is the condition of an electronics, software, or other system where the system's substantive behavior is dependent on the sequence or timing of other uncontrollable events. It becomes a bug when one or more of the possible behaviors is undesirable. - -The term race condition was already in use by 1954, for example in David A. Huffman's doctoral thesis "The synthesis of sequential switching circuits". - -Race conditions can occur especially in logic circuits, multithreaded, or distributed software programs. - -A typical example of a race condition may occur when a logic gate combines signals that have traveled along different paths from the same source. The inputs to the gate can change at slightly different times in response to a change in the source signal. The output may, for a brief period, change to an unwanted state before settling back to the designed state.
Certain systems can tolerate such glitches, but if this output functions as a clock signal for further systems that contain memory, for example, the system can rapidly depart from its designed behaviour (in effect, the temporary glitch becomes a permanent glitch). - -Consider, for example, a two-input AND gate fed with the following logic:
    $output = A \wedge \overline{A}$
    A logic signal $A$ on one input and its negation, $\neg A$ (the ¬ is a boolean negation), on another input should in theory never output a true value: $A \wedge \overline{A} \ne 1$. If, however, changes in the value of $A$ take longer to propagate to the second input than to the first when $A$ changes from false to true, then a brief period will ensue during which both inputs are true, and so the gate's output will also be true. - -A critical race condition occurs when the order in which internal variables are changed determines the eventual state that the state machine will end up in. - -A non-critical race condition occurs when the order in which internal variables are changed does not determine the eventual state that the state machine will end up in. - -A static race condition occurs when a signal and its complement are combined. - -A dynamic race condition occurs when a single input change results in multiple output transitions when only one is intended. Dynamic races are due to interaction between gates and can be eliminated by using no more than two levels of gating. - -An essential race condition occurs when an input has two transitions in less than the total feedback propagation time. Sometimes they are cured using inductive delay line elements to effectively increase the time duration of an input signal. - -Design techniques such as Karnaugh maps encourage designers to recognize and eliminate race conditions before they cause problems. Often logic redundancy can be added to eliminate some kinds of races. - -As well as these problems, some logic elements can enter metastable states, which create further problems for circuit designers. - -A race condition arises in software when a computer program, to operate properly, depends on the sequence or timing of the program's processes or threads. Critical race conditions cause invalid execution and software bugs. Critical race conditions often happen when the processes or threads depend on some shared state. Operations upon shared states are done in critical sections that must be mutually exclusive. Failure to obey this rule can corrupt the shared state. - -A data race is a type of race condition. Data races are important parts of various formal memory models. The memory models defined in the C11 and C++11 standards specify that a C or C++ program containing a data race has undefined behavior. - -A race condition can be difficult to reproduce and debug because the end result is nondeterministic and depends on the relative timing between interfering threads. Problems of this nature can therefore disappear when running in debug mode, adding extra logging, or attaching a debugger. A bug that disappears like this during debugging attempts is often referred to as a "Heisenbug". It is therefore better to avoid race conditions by careful software design. - -Assume that two threads each increment the value of a global integer variable by 1. Ideally, the following sequence of operations would take place: each thread in turn reads the value, increments it, and writes the result back before the other thread begins. - -In the case shown above, the final value is 2, as expected. However, if the two threads run simultaneously without locking or synchronization, the outcome of the operation could be wrong. The alternative sequence of operations below demonstrates this scenario: both threads read the same initial value, each increments its local copy, and one write overwrites the other. - -In this case, the final value is 1 instead of the expected result of 2. This occurs because here the increment operations are not mutually exclusive. Mutually exclusive operations are those that cannot be interrupted while accessing some resource such as a memory location.
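As a minimal sketch of this scenario in Python (the worker names are hypothetical; whether the unsynchronized version actually loses updates in a given run depends on the interpreter version and the thread scheduler, but nothing in the code forbids the bad interleaving):

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1          # read, add, write: three steps that can interleave

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:            # critical section: mutually exclusive
            counter += 1

for worker in (unsafe_increment, safe_increment):
    counter = 0
    threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(worker.__name__, counter)   # the unsafe run may print less than 200000
```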
- -Not all regard data races as a subset of race conditions. The precise definition of data race is specific to the formal concurrency model being used, but typically it refers to a situation where a memory operation in one thread could potentially attempt to access a memory location at the same time that a memory operation in another thread is writing to that memory location, in a context where this is dangerous. This implies that a data race is different from a race condition as it is possible to have nondeterminism due to timing even in a program without data races, for example, in a program in which all memory accesses use only atomic operations. - -This can be dangerous because on many platforms, if two threads write to a memory location at the same time, it may be possible for the memory location to end up holding a value that is some arbitrary and meaningless combination of the bits representing the values that each thread was attempting to write; this could result in memory corruption if the resulting value is one that neither thread attempted to write (sometimes this is called a 'torn write'). Similarly, if one thread reads from a location while another thread is writing to it, it may be possible for the read to return a value that is some arbitrary and meaningless combination of the bits representing the value that the memory location held before the write, and of the bits representing the value being written. - -On many platforms, special memory operations are provided for simultaneous access; in such cases, typically simultaneous access using these special operations is safe, but simultaneous access using other memory operations is dangerous. Sometimes such special operations (which are safe for simultaneous access) are called atomic or synchronization operations, whereas the ordinary operations (which are unsafe for simultaneous access) are called data operations. This is probably why the term is data race; on many platforms, where there is a race condition involving only synchronization operations, such a race may be nondeterministic but otherwise safe; but a data race could lead to memory corruption or undefined behavior. - -The precise definition of data race differs across formal concurrency models. This matters because concurrent behavior is often non-intuitive and so formal reasoning is sometimes applied. - -The C++ standard, in draft N4296 (2014-11-19), defines data race as follows in section 1.10.23 (page 14) - -
    - -Two actions are potentially concurrent if - -* they are performed by different threads, or - -* they are unsequenced, and at least one is performed by a signal handler. - -The execution of a program contains a data race if it contains two potentially concurrent conflicting actions, at least one of which is not atomic, and neither happens before the other, except for the special case for signal handlers described below [omitted]. Any such data race results in undefined behavior. - -
    - -The parts of this definition relating to signal handlers are idiosyncratic to C++ and are not typical of definitions of data race. - -The paper Detecting Data Races on Weak Memory Systems provides a different definition: - -
    "two memory operations conflict if they access the same location and at least one of them is a write operation... - -"Two memory operations, x and y, in a sequentially consistent execution form a race 〈x,y〉, iff x and y conflict, and they are not ordered by the hb1 relation of the execution. The race 〈x,y〉, is a data race iff at least one of x or y is a data operation.
    - -Here we have two memory operations accessing the same location, one of which is a write. - -The hb1 relation is defined elsewhere in the paper, and is an example of a typical "happens-before" relation; intuitively, if we can prove that we are in a situation where one memory operation X is guaranteed to be executed to completion before another memory operation Y begins, then we say that "X happens-before Y". If neither "X happens-before Y" nor "Y happens-before X", then we say that X and Y are "not ordered by the hb1 relation". So, the clause "...and they are not ordered by the hb1 relation of the execution" can be intuitively translated as "...and X and Y are potentially concurrent". - -The paper considers dangerous only those situations in which at least one of the memory operations is a "data operation"; in other parts of this paper, the paper also defines a class of "synchronization operations" which are safe for potentially simultaneous use, in contrast to "data operations". - -The Java Language Specification provides a different definition: - -
    Two accesses to (reads of or writes to) the same variable are said to be conflicting if at least one of the accesses is a write...When a program contains two conflicting accesses (§17.4.1) that are not ordered by a happens-before relationship, it is said to contain a data race...a data race cannot cause incorrect behavior such as returning the wrong length for an array.
    - -A critical difference between the C++ approach and the Java approach is that in C++, a data race is undefined behavior, whereas in Java, a data race merely affects "inter-thread actions". For example, in Java, the guarantee quoted above (that a data race cannot cause incorrect behavior such as returning the wrong length for an array) is directly specified. Race conditions in software can also create security vulnerabilities, with effects such as privilege escalation. - -A specific kind of race condition involves checking for a predicate (e.g. for authentication), then acting on the predicate, while the state can change between the time of check and the time of use. When this kind of bug exists in security-sensitive code, a security vulnerability called a time-of-check-to-time-of-use (TOCTTOU) bug is created. - -Race conditions are also intentionally used to create hardware random number generators and physically unclonable functions. PUFs can be created by designing circuit topologies with identical paths to a node and relying on manufacturing variations to randomly determine which paths will complete first. By measuring each manufactured circuit's specific set of race condition outcomes, a profile can be collected for each circuit and kept secret in order to later verify a circuit's identity. - -Two or more programs may collide in their attempts to modify or access a file system, which can result in data corruption or privilege escalation. File locking provides a commonly used solution. A more cumbersome remedy involves organizing the system in such a way that one unique process (running a daemon or the like) has exclusive access to the file, and all other processes that need to access the data in that file do so only via interprocess communication with that one process. This requires synchronization at the process level. - -A different form of race condition exists in file systems where unrelated programs may affect each other by suddenly using up available resources such as disk space, memory space, or processor cycles. Software not carefully designed to anticipate and handle this race situation may then become unpredictable. Such a risk may be overlooked for a long time in a system that seems very reliable. But eventually enough data may accumulate or enough other software may be added to critically destabilize many parts of a system. An example of this occurred with the near loss of the Mars Rover "Spirit" not long after landing. A solution is for software to request and reserve all the resources it will need before beginning a task; if this request fails then the task is postponed, avoiding the many points where failure could have occurred. Alternatively, each of those points can be equipped with error handling, or the success of the entire task can be verified afterwards, before continuing. A more common approach is to simply verify that enough system resources are available before starting a task; however, this may not be adequate because in complex systems the actions of other running programs can be unpredictable. - -In networking, consider a distributed chat network like IRC, where a user who starts a channel automatically acquires channel-operator privileges. If two users on different servers, on different ends of the same network, try to start the same-named channel at the same time, each user's respective server will grant channel-operator privileges to each user, since neither server will yet have received the other server's signal that it has allocated that channel. (This problem has been largely solved by various IRC server implementations.)
- -In this case of a race condition, the concept of the "shared resource" covers the state of the network (what channels exist, as well as what users started them and therefore have what privileges), which each server can freely change as long as it signals the other servers on the network about the changes so that they can update their conception of the state of the network. However, the latency across the network makes possible the kind of race condition described. In this case, heading off race conditions by imposing a form of control over access to the shared resource—say, appointing one server to control who holds what privileges—would mean turning the distributed network into a centralized one (at least for that one part of the network operation). - -Race conditions can also exist when a computer program is written with non-blocking sockets, in which case the performance of the program can be dependent on the speed of the network link. - -Software flaws in life-critical systems can be disastrous. Race conditions were among the flaws in the Therac-25 radiation therapy machine, which led to the death of at least three patients and injuries to several more. - -Another example is the energy management system provided by GE Energy and used by Ohio-based FirstEnergy Corp (among other power facilities). A race condition existed in the alarm subsystem; when three sagging power lines were tripped simultaneously, the condition prevented alerts from being raised to the monitoring technicians, delaying their awareness of the problem. This software flaw eventually led to the North American Blackout of 2003. GE Energy later developed a software patch to correct the previously undiscovered error. - -Many software tools exist to help detect race conditions in software. They can be largely categorized into two groups: static analysis tools and dynamic analysis tools. - -Thread Safety Analysis is an annotation-based, intra-procedural static analysis tool, originally implemented as a branch of gcc and now reimplemented in Clang, supporting PThreads. - -Dynamic analysis tools include: - -* Intel Inspector, a memory and thread checking and debugging tool to increase the reliability, security, and accuracy of C/C++ and Fortran applications, and Intel Advisor, a sampling-based SIMD vectorization optimization and shared-memory threading assistance tool for C, C++, C#, and Fortran software developers and architects; - -* ThreadSanitizer, which uses binary (Valgrind-based) or source (LLVM-based) instrumentation and supports PThreads, and Helgrind, a Valgrind tool for detecting synchronisation errors in C, C++ and Fortran programs that use the POSIX pthreads threading primitives; - -* Data Race Detector, designed to find data races in the Go programming language. - -There are several benchmarks designed to evaluate the effectiveness of data race detection tools: - -* DataRaceBench is a benchmark suite designed to systematically and quantitatively evaluate data race detection tools which analyze multi-threaded applications written in OpenMP. - -Neuroscience is demonstrating that race conditions can occur in mammal (rat) brains as well. - -In UK railway signalling, a race condition would arise in the carrying out of Rule 55. According to this rule, if a train was stopped on a running line by a signal, the locomotive fireman would walk to the signal box in order to remind the signalman that the train was present.
In at least one case, at Winwick in 1934, an accident occurred because the signalman accepted another train before the fireman arrived. Modern signalling practice removes the race condition by making it possible for the driver to instantaneously contact the signal box by radio. diff --git a/wiki/wikipedia/2542.txt b/wiki/wikipedia/2542.txt deleted file mode 100644 index 1ca8da1102b79a70f39672c4ba8b037cdf08816c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2542.txt +++ /dev/null @@ -1,7 +0,0 @@ -In mathematics, or specifically, in differential topology, Ehresmann's lemma or Ehresmann's fibration theorem states that if a smooth mapping $ f\colon M \rightarrow N$, where $ M $ and $N$ are smooth manifolds, is - -# a surjective submersion, and - -# a proper map (in particular, this condition is always satisfied if M is compact), - -then it is a locally trivial fibration. This is a foundational result in differential topology due to Charles Ehresmann, and has many variants. diff --git a/wiki/wikipedia/2543.txt b/wiki/wikipedia/2543.txt deleted file mode 100644 index 061579c95d74ea5a7a5105ffd46e2a4fd26db01d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2543.txt +++ /dev/null @@ -1,23 +0,0 @@ -In mathematics, the Ax–Grothendieck theorem is a result about injectivity and surjectivity of polynomials that was proved independently by James Ax and Alexander Grothendieck. - -The theorem is often given as this special case: If P is an injective polynomial function from an n-dimensional complex vector space to itself then P is bijective. That is, if P always maps distinct arguments to distinct values, then the values of P cover all of Cn. - -The full theorem generalizes to any algebraic variety over an algebraically closed field. - -Grothendieck's proof of the theorem is based on proving the analogous theorem for finite fields and their algebraic closures. That is, for any field F that is itself finite or that is the algebraic closure of a finite field, if a polynomial P from Fn to itself is injective then it is bijective. - -If F is a finite field, then Fn is finite. In this case the theorem is true for trivial reasons having nothing to do with the representation of the function as a polynomial: any injection of a finite set to itself is a bijection. When F is the algebraic closure of a finite field, the result follows from Hilbert's Nullstellensatz. The Ax–Grothendieck theorem for complex numbers can therefore be proven by showing that a counterexample over C would translate into a counterexample in some algebraic extension of a finite field. - -This method of proof is noteworthy in that it is an example of the idea that finitistic algebraic relations in fields of characteristic 0 translate into algebraic relations over finite fields with large characteristic. Thus, one can use the arithmetic of finite fields to prove a statement about C even though there is no homomorphism from any finite field to C. The proof thus uses model-theoretic principles such as the compactness theorem to prove an elementary statement about polynomials. The proof for the general case uses a similar method. - -There are other proofs of the theorem. Armand Borel gave a proof using topology. The case of n = 1 and field C follows since C is algebraically closed and can also be thought of as a special case of the result that for any analytic function f on C, injectivity of f implies surjectivity of f. This is a corollary of Picard's theorem.
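As a toy illustration of the finite-field case discussed above, injectivity and surjectivity of a self-map of the finite set F_p^n can be compared by brute force; the polynomial map below over F_7 is a hypothetical example chosen arbitrarily, and on a finite set the two properties always coincide.

```python
p = 7  # work over the finite field F_p

def P(x, y):
    # A sample polynomial map F_p^2 -> F_p^2 (chosen arbitrarily).
    return ((x**3 + 2 * y) % p, (y**3 + x * y + 1) % p)

domain = [(x, y) for x in range(p) for y in range(p)]
images = [P(x, y) for x, y in domain]

injective = len(set(images)) == len(domain)
surjective = set(images) == set(domain)
print(injective, surjective)
# For any self-map of a finite set, injective <=> surjective:
assert injective == surjective
```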
- -Another example of reducing theorems about morphisms of finite type to finite fields can be found in EGA IV: There, it is proved that a radicial S-endomorphism of a scheme X of finite type over S is bijective (10.4.11), and that if X/S is of finite presentation, and the endomorphism is a monomorphism, then it is an automorphism (17.9.6). Therefore, a scheme of finite presentation over a base S is a cohopfian object in the category of S-schemes. - -The Ax–Grothendieck theorem may also be used to prove the Garden of Eden theorem, a result that like the Ax–Grothendieck theorem relates injectivity with surjectivity but in cellular automata rather than in algebraic fields. Although direct proofs of this theorem are known, the proof via the Ax–Grothendieck theorem extends more broadly, to automata acting on amenable groups. - -Some partial converses to the Ax–Grothendieck theorem: - -*A generically surjective polynomial map of n-dimensional affine space over a finitely generated extension of Z or Z/pZ[t] is bijective with a polynomial inverse rational over the same ring (and therefore bijective on affine space of the algebraic closure). - -*A generically surjective rational map of n-dimensional affine space over a Hilbertian field is generically bijective with a rational inverse defined over the same field. ("Hilbertian field" being defined here as a field for which Hilbert's Irreducibility Theorem holds, such as the rational numbers and function fields.) diff --git a/wiki/wikipedia/2544.txt b/wiki/wikipedia/2544.txt deleted file mode 100644 index 75a39947f82aa7995c700bfbb86aa034124752da..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2544.txt +++ /dev/null @@ -1,170 +0,0 @@ -In probability theory, concentration inequalities provide bounds on the probability that a random variable deviates from some value (typically, its expected value). The law of large numbers of classical probability theory states that sums of independent random variables are, under very mild conditions, close to their expectation with a large probability. Such sums are the most basic examples of random variables concentrated around their mean. Recent results show that such behavior is shared by other functions of independent random variables. - -Concentration inequalities can be sorted according to how much information about the random variable is needed in order to use them. - -The most basic one is Markov's inequality: let $X$ be a random variable that is non-negative (almost surely). Then, for every constant $a > 0$, -$$ -\Pr(X \geq a) \leq \frac{\operatorname{E}(X)}{a}. -$$ - -Note the following extension to Markov's inequality: if $\Phi$ is a strictly increasing and non-negative function, then -$$ -\Pr(X \geq a) = \Pr(\Phi (X) \geq \Phi (a)) \leq \frac{\operatorname{E}(\Phi(X))}{\Phi (a)}. -$$ - -Chebyshev's inequality requires the following information on a random variable $X$: - -* The expected value $\operatorname{E}[X]$ is finite. - -* The variance $\operatorname{Var}[X] = \operatorname{E}[(X - \operatorname{E}[X] )^2]$ is finite. - -Then, for every constant $a > 0$, -$$ -\Pr(|X-\operatorname{E}[X]| \geq a) \leq \frac{\operatorname{Var}[X]}{a^2}, -$$ - -or equivalently, -$$ -\Pr(|X-\operatorname{E}[X]| \geq a\cdot \operatorname{Std}[X]) \leq \frac{1}{a^2}, -$$ - -where $\operatorname{Std}[X]$ is the standard deviation of $X$. - -Chebyshev's inequality can be seen as a special case of the generalized Markov's inequality applied to the random variable $|X-\operatorname{E}[X]|$ with $\Phi(x) = x^2$.
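As a quick empirical sanity check of Chebyshev's inequality, one can compare observed tail frequencies of a sample against the bound; the exponential distribution and the sample size in this sketch are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(scale=1.0, size=1_000_000)  # mean 1, variance 1
mu, var = X.mean(), X.var()

for a in (1.0, 2.0, 3.0):
    empirical = np.mean(np.abs(X - mu) >= a)    # observed tail frequency
    bound = var / a**2                          # Chebyshev's bound
    print(f"a={a}: P(|X-E[X]| >= a) ~= {empirical:.4f} <= {bound:.4f}")
```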
    - -The generic Chernoff bound requires only the moment generating function of $X$, defined as $M_X(t) := \operatorname{E}\!\left[e^{tX}\right]$, provided it exists. Based on Markov's inequality, for every $t>0$: -$$ -\Pr(X \geq a) \leq \frac{\operatorname{E}[e^{t\cdot X}]}{e^{t\cdot a}}, -$$ - -and for every $t<0$: -$$ -\Pr(X \leq a) \leq \frac{\operatorname{E}[e^{t\cdot X}]}{e^{t\cdot a}}. -$$ - -There are various Chernoff bounds for different distributions and different values of the parameter $t$. See the literature for a compilation of more concentration inequalities. - -Let $X_1, X_2,\dots,X_n$ be independent random variables such that, for all $i$: -$$ -a_i\leq X_i\leq b_i -$$ almost surely, and define -$$ -c_i := b_i-a_i, -$$ -$$ -\forall i: c_i \leq C. -$$ - -Let $S_n$ be their sum, $E_n$ its expected value and $V_n$ its variance: -$$ -S_n := \sum_{i=1}^n X_i, -$$ -$$ -E_n := \operatorname{E}[S_n] = \sum_{i=1}^n \operatorname{E}[X_i], -$$ -$$ -V_n := \operatorname{Var}[S_n] = \sum_{i=1}^n \operatorname{Var}[X_i]. -$$ - -It is often interesting to bound the difference between the sum and its expected value. Several inequalities can be used. - -1. Hoeffding's inequality says that: -$$ -\Pr\left[|S_n-E_n|>t\right] < 2 \exp \left(-\frac{2t^2}{\sum_{i=1}^n c_i^2} \right)< 2 \exp \left(-\frac{2t^2}{n C^2} \right) -$$ - -2. The random variable $S_n-E_n$ is a special case of a martingale, and $S_0-E_0=0$. Hence, the general form of Azuma's inequality can also be used and it yields a similar bound: -$$ -\Pr\left[|S_n-E_n|>t\right] < 2 \exp \left(-\frac{2t^2}{\sum_{i=1}^n c_i^2}\right)< 2 \exp \left(-\frac{2t^2}{n C^2} \right) -$$ - -This is a generalization of Hoeffding's since it can handle other types of martingales, as well as supermartingales and submartingales. Note that if the simpler form of Azuma's inequality is used, the exponent in the bound is worse by a factor of 4. - -3. The sum function, $S_n=f(X_1,\dots,X_n)$, is a special case of a function of n variables. This function changes in a bounded way: if variable $i$ is changed, the value of $f$ changes by at most $b_i-a_i \leq C$. Hence, McDiarmid's inequality can also be used and it yields a similar bound: -$$ -\Pr\left[|S_n-E_n|>t\right] < 2 \exp \left(-\frac{2t^2}{\sum_{i=1}^n c_i^2} \right)< 2 \exp \left(-\frac{2t^2}{n C^2} \right) -$$ - -This is a different generalization of Hoeffding's since it can handle other functions besides the sum function, as long as they change in a bounded way. - -4. Bennett's inequality offers some improvement over Hoeffding's when the variances of the summands are small compared to their almost-sure bounds $C$. It says that: -$$ -\Pr\left[|S_n-E_n| > t \right] \leq 2\exp\left[-\frac{V_n}{C^2} h\left(\frac{C t}{V_n} \right)\right], -$$ where $h(u) = (1+u)\log(1+u)-u$. - -5. The first of Bernstein's inequalities says that: -$$ -\Pr\left[|S_n-E_n|>t\right] < 2 \exp \left(-\frac{t^2/2}{V_n + C\cdot t/3} \right) -$$ - -This is a generalization of Hoeffding's since it can handle random variables with not only an almost-sure bound but both an almost-sure bound and a variance bound. - -6. Chernoff bounds have a particularly simple form in the case of sums of independent variables, since $\operatorname{E}[e^{t\cdot S_n}] = \prod_{i=1}^n {\operatorname{E}[e^{t\cdot X_i}]}$. - -For example, suppose the variables $X_i$ satisfy $X_i \geq E(X_i)-a_i-M$, for $1 \leq i \leq n$.
Then we have the lower tail inequality: -$$ -\Pr[S_n - E_n < -\lambda]\leq \exp\left(-\frac{\lambda^2}{2(V_n+\sum_{i=1}^n a_i^2+M\lambda/3)}\right) -$$ - -If the $X_i$ satisfy $X_i \leq E(X_i)+a_i+M$, we have the upper tail inequality: -$$ -\Pr[S_n - E_n > \lambda]\leq \exp\left(-\frac{\lambda^2}{2(V_n + \sum_{i=1}^n a_i^2+M\lambda/3)}\right) -$$ - -If the $X_i$ are i.i.d., $|X_i| \leq 1$ and $\sigma^2$ is the variance of $X_i$, a typical version of the Chernoff inequality is: -$$ -\Pr[|S_n| \geq k\sigma]\leq 2e^{-k^2/4n} \text{ for } 0 \leq k\leq 2\sigma. -$$ - -7. Similar bounds can be found in: Rademacher distribution#Bounds on sums - -The Efron–Stein inequality (or influence inequality, or MG bound on variance) bounds the variance of a general function. - -Suppose that $X_1 \dots X_n$, $X_1' \dots X_n'$ are independent with $X_i'$ and $X_i$ having the same distribution for all $i$. - -Let $X = (X_1,\dots , X_n), X^{(i)} = (X_1, \dots , X_{i-1}, X_i',X_{i+1}, \dots , X_n).$ Then -$$ -\mathrm{Var}(f(X)) \leq \frac{1}{2} \sum_{i=1}^{n} E[(f(X)-f(X^{(i)}))^2]. -$$ - -The Dvoretzky–Kiefer–Wolfowitz inequality bounds the difference between the real and the empirical cumulative distribution function. - -Given a natural number $n$, let $X_1, X_2,\dots,X_n$ be real-valued independent and identically distributed random variables with cumulative distribution function F(·). Let $F_n$ denote the associated empirical distribution function defined by -$$ -F_n(x) = \frac1n \sum_{i=1}^n \mathbf{1}_{\{X_i\leq x\}},\qquad x\in\mathbb{R}. -$$ - -So $F(x)$ is the probability that a single random variable $X$ is smaller than $x$, and $F_n(x)$ is the fraction of the random variables that are smaller than $x$. - -Then -$$ -\Pr\left(\sup_{x\in\mathbb R} \bigl(F_n(x) - F(x)\bigr) > \varepsilon \right) \le e^{-2n\varepsilon^2} \text{ for every } \varepsilon \geq \sqrt{\tfrac 1 {2n} \ln2}. -$$ - -Anti-concentration inequalities, on the other hand, provide an upper bound on how much a random variable can concentrate around a quantity. - -For example, Rao and Yehudayoff show that there exists some $C > 0$ such that, for most directions of the hypercube $x \in \{\pm 1\}^n$, the following is true: -$$ -\Pr\left(\langle x, Y\rangle = k\right) \le \frac{C}{\sqrt{n}}, -$$ - -where $Y$ is drawn uniformly from a subset $B \subseteq \{\pm 1\}^n$ of large enough size. - -Such inequalities are of importance in several fields, including communication complexity (e.g., in proofs of the gap Hamming problem) and graph theory. - -An interesting anti-concentration inequality for weighted sums of independent Rademacher random variables can be obtained using the Paley–Zygmund and the Khintchine inequalities. diff --git a/wiki/wikipedia/2545.txt b/wiki/wikipedia/2545.txt deleted file mode 100644 index 3c97ba191bcb3570d4f420b9e9152afe831a09cc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2545.txt +++ /dev/null @@ -1,248 +0,0 @@ -In probability theory, Wald's equation, Wald's identity or Wald's lemma is an important identity that simplifies the calculation of the expected value of the sum of a random number of random quantities. In its simplest form, it relates the expectation of a sum of randomly many finite-mean, independent and identically distributed random variables to the expected number of terms in the sum and the random variables' common expectation under the condition that the number of terms in the sum is independent of the summands. - -The equation is named after the mathematician Abraham Wald.
An identity for the second moment is given by the Blackwell–Girshick equation. - -Let (Xn)n∈$\mathbb{N}$ be a sequence of real-valued, independent and identically distributed random variables and let N be a nonnegative integer-valued random variable that is independent of the sequence (Xn)n∈$\mathbb{N}$. Suppose that N and the Xn have finite expectations. Then -$$ -\operatorname{E}[X_1+\dots+X_N]=\operatorname{E}[N] \operatorname{E}[X_1]. -$$ - -Roll a six-sided die. Take the number on the die (call it N) and roll that number of six-sided dice to get the numbers X1, . . . , XN, and add up their values. By Wald's equation, the resulting value on average is -$$ -\operatorname{E}[N] \operatorname{E}[X] = \frac{1+2+3+4+5+6}6\cdot\frac{1+2+3+4+5+6}6 = \frac{441}{36} = \frac{49}{4} = 12.25. -$$ - -Let (Xn)n∈$\mathbb{N}$ be an infinite sequence of real-valued random variables and let N be a nonnegative integer-valued random variable. - -Assume that: - -. (Xn)n∈$\mathbb{N}$ are all integrable (finite-mean) random variables, - -. E[Xn1{N ≥ n}] = E[Xn] P(N ≥ n) for every natural number n, and - -. the infinite series satisfies -$$ -\sum_{n=1}^\infty\operatorname{E}\!\bigl[|X_n| 1_{\{N\ge n\}}\bigr]<\infty. -$$ - -Then the random sums -$$ -S_N:=\sum_{n=1}^NX_n,\qquad T_N:=\sum_{n=1}^N\operatorname{E}[X_n] -$$ - -are integrable and -$$ -\operatorname{E}[S_N]=\operatorname{E}[T_N]. -$$ - -If, in addition, - -. (Xn)n∈$\mathbb{N}$ all have the same expectation, and - -. N has finite expectation, - -then -$$ -\operatorname{E}[S_N]=\operatorname{E}[N] \operatorname{E}[X_1]. -$$ - -Remark: Usually, the name Wald's equation refers to this last equality. - -Clearly, assumption () is needed to formulate assumption () and Wald's equation. Assumption () controls the amount of dependence allowed between the sequence (Xn)n∈$\mathbb{N}$ and the number N of terms; see the counterexample below for the necessity. Note that assumption () is satisfied when N is a stopping time for the sequence (Xn)n∈$\mathbb{N}$. Assumption () is of a more technical nature, implying absolute convergence and therefore allowing arbitrary rearrangement of an infinite series in the proof. - -If assumption () is satisfied, then assumption () can be strengthened to the simpler condition - -. there exists a real constant C such that E[|Xn| 1{N ≥ n}] ≤ C P(N ≥ n) for all natural numbers n. - -Indeed, using assumption (), -$$ -\sum_{n=1}^\infty\operatorname{E}\!\bigl[|X_n|1_{\{N\ge n\}}\bigr]\le C\sum_{n=1}^\infty\operatorname{P}(N\ge n), -$$ - -and the last series equals the expectation of N, which is finite by assumption (). Therefore, () and () imply assumption (). - -Assume in addition to () and () that - -. N is independent of the sequence (Xn)n∈$\mathbb{N}$ and - -. there exists a constant C such that E[|Xn|] ≤ C for all natural numbers n. - -Then all the assumptions (), (), () and (), hence also () are satisfied. In particular, the conditions () and () are satisfied if - -. the random variables (Xn)n∈$\mathbb{N}$ all have the same distribution. - -Note that the random variables of the sequence (Xn)n∈$\mathbb{N}$ don't need to be independent.
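A quick Monte Carlo check of the dice example above (a sketch; the number of trials is an arbitrary choice):

```python
import random

def trial():
    n = random.randint(1, 6)  # roll N
    # sum of N further independent die rolls
    return sum(random.randint(1, 6) for _ in range(n))

samples = 100_000
mean = sum(trial() for _ in range(samples)) / samples
print(mean)  # should be close to E[N] * E[X] = 3.5 * 3.5 = 12.25
```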
- -The interesting point is to admit some dependence between the random number N of terms and the sequence (Xn)n∈$\mathbb{N}$. A standard version is to assume (), (), () and the existence of a filtration (Fn)n∈$\mathbb{N}$0 such that - -. N is a stopping time with respect to the filtration, and - -. Xn and Fn–1 are independent for every n ∈ $\mathbb{N}$. - -Then () implies that the event {N ≥ n} = {N ≤ n – 1}c is in Fn–1, hence by () independent of Xn. This implies (), and together with () it implies (). - -For convenience (see the proof below using the optional stopping theorem) and to specify the relation of the sequence (Xn)n∈$\mathbb{N}$ and the filtration (Fn)n∈$\mathbb{N}$0, the following additional assumption is often imposed: - -. the sequence (Xn)n∈$\mathbb{N}$ is adapted to the filtration (Fn)n∈$\mathbb{N}$, meaning that Xn is Fn-measurable for every n ∈ $\mathbb{N}$. - -Note that () and () together imply that the random variables (Xn)n∈$\mathbb{N}$ are independent. - -An application is in actuarial science, when considering a total claim amount that follows a compound Poisson process -$$ -S_N=\sum_{n=1}^NX_n -$$ - -within a certain time period, say one year, arising from a random number N of individual insurance claims, whose sizes are described by the random variables (Xn)n∈$\mathbb{N}$. Under the above assumptions, Wald's equation can be used to calculate the expected total claim amount when information about the average claim number per year and the average claim size is available. Under stronger assumptions and with more information about the underlying distributions, Panjer's recursion can be used to calculate the distribution of SN. - -Let N be an integrable, $\mathbb{N}$0-valued random variable, which is independent of the integrable, real-valued random variable Z with E[Z] = 0. Define Xn = (–1)^n Z for all n ∈ $\mathbb{N}$. Then assumptions (), (), (), and () with C := E[|Z|] are satisfied, hence also () and (), and Wald's equation applies. If the distribution of Z is not symmetric, then () does not hold. Note that, when Z is not almost surely equal to the zero random variable, then () and () cannot hold simultaneously for any filtration (Fn)n∈$\mathbb{N}$, because Z cannot be independent of itself as E[Z^2] = (E[Z])^2 = 0 is impossible. - -Let (Xn)n∈$\mathbb{N}$ be a sequence of independent, symmetric, and {–1, +1}-valued random variables. For every n ∈ $\mathbb{N}$ let Fn be the σ-algebra generated by X1, . . . , Xn and define N = n when Xn is the first random variable taking the value +1. Note that P(N = n) = 1/2^n, hence E[N] < ∞ by the ratio test. The assumptions (), () and (), hence () and () with C = 1, (), (), and () hold, hence also () and () and Wald's equation applies. However, () does not hold, because N is defined in terms of the sequence (Xn)n∈$\mathbb{N}$. Intuitively, one might expect to have E[SN] > 0 in this example, because the summation stops right after a one, thereby apparently creating a positive bias. However, Wald's equation shows that this intuition is misleading. - -Consider a sequence (Xn)n∈$\mathbb{N}$ of i.i.d. random variables, taking each of the two values 0 and 1 with probability 1/2 (actually, only X1 is needed in the following). Define N = 1 – X1. Then SN is identically equal to zero, hence E[SN] = 0, but E[X1] = 1/2 and E[N] = 1/2 and therefore Wald's equation does not hold. Indeed, the assumptions (), (), () and () are satisfied; however, the equation in assumption () holds for all n ∈ $\mathbb{N}$ except for n = 1.
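The failure of Wald's equation in this last example is easy to see numerically; the following sketch estimates E[SN] and E[N]E[X1] by simulation (the sample size is an arbitrary choice):

```python
import random

samples = 100_000
s_total = n_total = x1_total = 0
for _ in range(samples):
    x1 = random.randint(0, 1)   # X1 uniform on {0, 1}
    n = 1 - x1                  # the number of terms depends on the first summand
    s = x1 if n >= 1 else 0     # S_N = X1 when N = 1, else the empty sum 0
    s_total += s
    n_total += n
    x1_total += x1

print(s_total / samples)                            # E[S_N] = 0
print((n_total / samples) * (x1_total / samples))   # E[N] * E[X1] = 0.25
```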
- -Very similar to the second example above, let (Xn)n∈$\mathbb{N}$ be a sequence of independent, symmetric random variables, where Xn takes each of the values 2^n and –2^n with probability 1/2. Let N be the first n ∈ $\mathbb{N}$ such that Xn = 2^n. Then, as above, N has finite expectation, hence assumption () holds. Since E[Xn] = 0 for all n ∈ $\mathbb{N}$, assumptions () and () hold. However, since SN = 1 almost surely, Wald's equation cannot hold. - -Since N is a stopping time with respect to the filtration generated by (Xn)n∈$\mathbb{N}$, assumption () holds, see above. Therefore, only assumption () can fail, and indeed, since -$$ -\{N\ge n\}=\{X_i=-2^{i} \text{ for } i=1,\ldots,n-1\} -$$ - -and therefore P(N ≥ n) = 1/2^{n–1} for every n ∈ $\mathbb{N}$, it follows that -$$ -\sum_{n=1}^\infty\operatorname{E}\!\bigl[|X_n|1_{\{N\ge n\}}\bigr] =\sum_{n=1}^\infty 2^n\operatorname{P}(N\ge n) =\sum_{n=1}^\infty 2=\infty. -$$ - -Assume (), (), (), (), () and (). Using assumption (), define the sequence of random variables -$$ -M_n = \sum_{i=1}^n (X_i - \operatorname{E}[X_i]),\quad n\in{\mathbb N}_0. -$$ - -Assumption () implies that the conditional expectation of Xn given Fn–1 equals E[Xn] almost surely for every n ∈ $\mathbb{N}$, hence (Mn)n∈$\mathbb{N}$0 is a martingale with respect to the filtration (Fn)n∈$\mathbb{N}$0 by assumption (). Assumptions (), () and () make sure that we can apply the optional stopping theorem, hence MN = SN – TN is integrable and -$$ -\operatorname{E}[S_N-T_N] = \operatorname{E}[M_0] = 0. -$$ - -Due to assumption (), -$$ -|T_N|=\biggl|\sum_{i=1}^N\operatorname{E}[X_i]\biggr| \le \sum_{i=1}^N\operatorname{E}[|X_i|]\le CN, -$$ - -and due to assumption () this upper bound is integrable. Hence we can add the expectation of TN to both sides of the equation above and obtain by linearity -$$ -\operatorname{E}[S_N] =\operatorname{E}[T_N]. -$$ - -Remark: Note that this proof does not cover the above example with dependent terms. - -This proof uses only Lebesgue's monotone and dominated convergence theorems. We prove the statement as given above in three steps. - -We first show that the random sum SN is integrable. Define the partial sums -$$ -S_i=\sum_{n=1}^iX_n,\quad i\in{\mathbb N}_0. -$$ - -Since N takes its values in $\mathbb{N}$0 and since S0 = 0, it follows that -$$ -|S_N|=\sum_{i=1}^\infty|S_i|1_{\{N=i\}}. -$$ - -The Lebesgue monotone convergence theorem implies that -$$ -\operatorname{E}[|S_N|]=\sum_{i=1}^\infty\operatorname{E}[|S_i|1_{\{N=i\}}]. -$$ - -By the triangle inequality, -$$ -|S_i|\le\sum_{n=1}^i|X_n|,\quad i\in{\mathbb N}. -$$ - -Using this upper estimate and changing the order of summation (which is permitted because all terms are non-negative), we obtain -$$ -\operatorname{E}[|S_N|]\le\sum_{n=1}^\infty\sum_{i=n}^\infty\operatorname{E}[|X_n|1_{\{N=i\}}]=\sum_{n=1}^\infty\operatorname{E}[|X_n|1_{\{N\ge n\}}], -$$ - -where the second inequality follows using the monotone convergence theorem. By assumption (), the infinite series on the right-hand side converges, hence SN is integrable. - -We now show that the random sum TN is integrable. Define the partial sums -$$ -T_i=\sum_{n=1}^i\operatorname{E}[X_n],\quad i\in{\mathbb N}_0, -$$ - -of real numbers. Since N takes its values in $\mathbb{N}$0 and since T0 = 0, it follows that -$$ -|T_N|=\sum_{i=1}^\infty|T_i|1_{\{N=i\}}. -$$ - -As in step 1, the Lebesgue monotone convergence theorem implies that -$$ -\operatorname{E}[|T_N|]=\sum_{i=1}^\infty |T_i|\operatorname{P}(N=i).
-$$ - -By the triangle inequality, -$$ -|T_i|\le\sum_{n=1}^i\bigl|\!\operatorname{E}[X_n]\bigr|,\quad i\in{\mathbb N}. -$$ - -Using this upper estimate and changing the order of summation (which is permitted because all terms are non-negative), we obtain -$$ -\operatorname{E}[|T_N|]\le\sum_{n=1}^\infty\bigl|\!\operatorname{E}[X_n]\bigr|\underbrace{\sum_{i=n}^\infty\operatorname{P}(N=i)}_{=\operatorname{P}(N\ge n)}. -$$ - -By assumption (), -$$ -\bigl|\!\operatorname{E}[X_n]\bigr|\operatorname{P}(N\ge n) =\bigl|\!\operatorname{E}[X_n1_{\{N\ge n\}}]\bigr| \le\operatorname{E}[|X_n|1_{\{N\ge n\}}],\quad n\in{\mathbb N}. -$$ - -Substituting this into the previous estimate yields -$$ -\operatorname{E}[|T_N|]\le\sum_{n=1}^\infty\operatorname{E}[|X_n|1_{\{N\ge n\}}], -$$ - -which is finite by assumption (), hence TN is integrable. - -To prove Wald's equation, we essentially go through the same steps again without the absolute value, making use of the integrability of the random sums SN and TN in order to show that they have the same expectation. - -Using the dominated convergence theorem with dominating random variable |SN| and the definition of the partial sum Si given above, it follows that -$$ -\operatorname{E}[S_N]=\sum_{i=1}^\infty\operatorname{E}[S_i1_{\{N=i\}}] =\sum_{i=1}^\infty\sum_{n=1}^i\operatorname{E}[X_n1_{\{N=i\}}]. -$$ - -Due to the absolute convergence proved above using assumption (), we may rearrange the summation and obtain that -$$ -\operatorname{E}[S_N]=\sum_{n=1}^\infty\sum_{i=n}^\infty\operatorname{E}[X_n1_{\{N=i\}}]=\sum_{n=1}^\infty\operatorname{E}[X_n1_{\{N\ge n\}}], -$$ - -where we used assumption () and the dominated convergence theorem for the second equality. Due to assumption () and the σ-additivity of the probability measure, -$$ -\begin{align}\operatorname{E}[X_n1_{\{N\ge n\}}] &=\operatorname{E}[X_n]\operatorname{P}(N\ge n)\\ &=\operatorname{E}[X_n]\sum_{i=n}^\infty\operatorname{P}(N=i) =\sum_{i=n}^\infty\operatorname{E}\!\bigl[\operatorname{E}[X_n]1_{\{N=i\}}\bigr].\end{align} -$$ - -Substituting this result into the previous equation, rearranging the summation (which is permitted due to absolute convergence, see above), using linearity of expectation and the definition of the partial sum Ti of expectations given above, -$$ -\operatorname{E}[S_N]=\sum_{i=1}^\infty\sum_{n=1}^i\operatorname{E}\!\bigl[\operatorname{E}[X_n]1_{\{N=i\}}\bigr]=\sum_{i=1}^\infty\operatorname{E}[\underbrace{T_i1_{\{N=i\}}}_{=T_N1_{\{N=i\}}}]. -$$ - -By using dominated convergence again with dominating random variable |TN|, -$$ -\operatorname{E}[S_N]=\operatorname{E}\!\biggl[T_N\underbrace{\sum_{i=1}^\infty1_{\{N=i\}}}_{=1_{\{N\ge1\}}}\biggr]=\operatorname{E}[T_N]. -$$ - -If assumptions () and () are satisfied, then by linearity of expectation, -$$ -\operatorname{E}[T_N]=\operatorname{E}\!\biggl[\sum_{n=1}^N \operatorname{E}[X_n]\biggr]=\operatorname{E}[X_1]\operatorname{E}\!\biggl[\underbrace{\sum_{n=1}^N 1}_{=N}\biggr]=\operatorname{E}[N]\operatorname{E}[X_1]. -$$ - -This completes the proof. - -* Wald's equation can be transferred to Rd-valued random variables (Xn)n∈$\mathbb{N}$ by applying the one-dimensional version to every component. - -* If (Xn)n∈$\mathbb{N}$ are Bochner-integrable random variables taking values in a Banach space, then the general proof above can be adjusted accordingly.
diff --git a/wiki/wikipedia/2546.txt b/wiki/wikipedia/2546.txt deleted file mode 100644 index d5394852737ea173c459085a262dc91fc5538364..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2546.txt +++ /dev/null @@ -1,7 +0,0 @@ -Discovr is a series of apps for the Apple iOS platform by David McKinney, which is characterized by using nodes to represent relationships between media in a tactile and visual way. There are apps for browsing music and movies. - -In October 2011, Discovr announced that the apps had passed one million downloads and $1.1 million in revenue. - -== Operation == - -The user interface is based on radial layouts where a node is connected to 1-6 other nodes. It displays an interactive, graphic map. The user begins by searching or browsing a pre-selected set of recommended items. Tapping causes a node to expand, to display related items. Double-tapping takes the user to a page of content about the artist or app. The interface uses a force-based layout algorithm which causes the new child nodes to pop out of the parent node, repelling nearby nodes, and quickly settle into positions that minimize overlap. The graph algorithm was developed by Tamás Nepusz, who holds a PhD in graph theory and previously worked at Last.fm as a research engineer. diff --git a/wiki/wikipedia/2547.txt b/wiki/wikipedia/2547.txt deleted file mode 100644 index 69851e52a60f927c195d769db2e5b3c4d6dfbbab..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2547.txt +++ /dev/null @@ -1,5 +0,0 @@ -In graph theory, the unproven Erdős–Gyárfás conjecture, made in 1995 by the prolific mathematician Paul Erdős and his collaborator András Gyárfás, states that every graph with minimum degree 3 contains a simple cycle whose length is a power of two. Erdős offered a prize of $100 for proving the conjecture, or $50 for a counterexample; it is one of many conjectures of Erdős. - -If the conjecture is false, a counterexample would take the form of a graph with minimum degree three having no power-of-two cycles. It is known through computer searches of Gordon Royle and Klas Markström that any counterexample must have at least 17 vertices, and any cubic counterexample must have at least 30 vertices. Markström's searches found four graphs on 24 vertices in which the only power-of-two cycles have 16 vertices. One of these four graphs is planar; however, the Erdős–Gyárfás conjecture is now known to be true for the special case of 3-connected cubic planar graphs. - -Weaker results relating the degree of a graph to unavoidable sets of cycle lengths are known: there is a set S of lengths, with |S| = O(n^{0.99}), such that every graph with average degree ten or more contains a cycle with its length in S, and every graph whose average degree is exponential in the iterated logarithm of n necessarily contains a cycle whose length is a power of two. The conjecture is also known to be true for planar claw-free graphs and for graphs that avoid large induced stars and satisfy additional constraints on their degrees. diff --git a/wiki/wikipedia/2548.txt b/wiki/wikipedia/2548.txt deleted file mode 100644 index 1a6f4d8ecbb3eb4bbb092db280959c3ef269fb67..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2548.txt +++ /dev/null @@ -1,70 +0,0 @@ -In probability theory, the central limit theorem says that, under certain conditions, the sum of many independent identically-distributed random variables, when scaled appropriately, converges in distribution to a standard normal distribution.
The martingale central limit theorem generalizes this result for random variables to martingales, which are stochastic processes where the change in the value of the process from time t to time t + 1 has expectation zero, even conditioned on previous outcomes. - -Here is a simple version of the martingale central limit theorem: Let -$$ -X_1, X_2, \dots -$$ be a martingale with bounded increments; that is, suppose -$$ -\operatorname{E}[X_{t+1} - X_t \vert X_1,\dots, X_t]=0, -$$ - -and -$$ -|X_{t+1} - X_t| \le k -$$ - -almost surely for some fixed bound k and all t. Also assume that $|X_1|\le k$ almost surely. - -Define -$$ -\sigma_t^2 = \operatorname{E}[(X_{t+1}-X_t)^2|X_1, \ldots, X_t], -$$ - -and let -$$ -\tau_\nu = \min\left\{t : \sum_{i=1}^{t} \sigma_i^2 \ge \nu\right\}. -$$ - -Then -$$ -\frac{X_{\tau_\nu}}{\sqrt{\nu}} -$$ - -converges in distribution to the normal distribution with mean 0 and variance 1 as $\nu \to +\infty$. More explicitly, -$$ -\lim_{\nu \to +\infty} \operatorname{P} \left(\frac{X_{\tau_\nu}}{\sqrt{\nu}} < x\right) = \Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x \exp\left(-\frac{u^2}{2}\right) du, \quad x\in\mathbb{R}. -$$ - -The statement of the above result implicitly assumes that the variances sum to infinity, so that the following holds with probability 1: -$$ -\sum_{t=1}^{\infty} \sigma_t^2 = \infty. -$$ - -This ensures that with probability 1: -$$ -\tau_\nu < \infty, \quad \forall \nu \geq 0. -$$ - -This condition is violated, for example, by a martingale that is defined to be zero almost surely for all time. - -The result can be intuitively understood by writing the ratio as a summation: -$$ -\frac{X_{\tau_\nu}}{\sqrt{\nu}} = \frac{X_1}{\sqrt{\nu}} + \frac{1}{\sqrt{\nu}} \sum_{i=1}^{\tau_\nu-1} (X_{i+1}-X_i), \quad \forall \tau_\nu \geq 1. -$$ - -The first term on the right-hand side asymptotically converges to zero, while the second term is qualitatively similar to the summation formula for the central limit theorem in the simpler case of i.i.d. random variables. While the terms in the above expression are not necessarily i.i.d., they are uncorrelated and have zero mean. Indeed: -$$ -E[(X_{i+1}-X_i)] = 0, \quad \forall i \in \{1, 2, 3, \ldots\} -$$ -$$ -E[(X_{i+1}-X_i)(X_{j+1}-X_j)] = 0, \quad \forall i \neq j,\ i, j \in \{1, 2, 3, \ldots\} -$$ diff --git a/wiki/wikipedia/2549.txt b/wiki/wikipedia/2549.txt deleted file mode 100644 index 6fa00aee627b8583785f35b768ce98053ab77f84..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2549.txt +++ /dev/null @@ -1,29 +0,0 @@ -In the mathematical theory of Kleinian groups, Jørgensen's inequality is an inequality involving the traces of elements of a Kleinian group, proved by Troels Jørgensen. - -The inequality states that if A and B generate a non-elementary discrete subgroup of SL2(C), then -$$ - \left|\operatorname{Tr}(A)^2 -4\right| + \left|\operatorname{Tr}\left(ABA^{-1}B^{-1}\right)-2\right|\ge 1. -$$ - -The inequality gives a quantitative estimate of the discreteness of the group: many of the standard corollaries bound elements of the group away from the identity. For instance, if A is parabolic, then -$$ - \left\|A - I\right\|\ \left\|B - I\right\|\ge 1 -$$ - -where $\|\cdot\|$ denotes the usual norm on SL2(C). - -Another consequence in the parabolic case is the existence of cusp neighborhoods in hyperbolic 3-manifolds: if G is a Kleinian group and j is a parabolic element of G with fixed point w, then there is a horoball based at w which projects to a cusp neighborhood in the quotient space $ \mathbb{H}^3/G $.
Jørgensen's inequality is used to prove that every element of G which does not have a fixed point at w moves the horoball entirely off itself and so does not affect the local geometry of the quotient at w; intuitively, the geometry is entirely determined by the parabolic element. diff --git a/wiki/wikipedia/255.txt b/wiki/wikipedia/255.txt deleted file mode 100644 index f29f17821674c59ca90a516ca3e94c36586b1ed0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/255.txt +++ /dev/null @@ -1,37 +0,0 @@ -A data-flow diagram is a way of representing a flow of data through a process or a system (usually an information system). The DFD also provides information about the outputs and inputs of each entity and the process itself. A data-flow diagram has no control flow: there are no decision rules and no loops. Specific operations based on the data can be represented by a flowchart. - -There are several notations for displaying data-flow diagrams. The notation presented above was described in 1979 by Tom DeMarco as part of structured analysis. - -For each data flow, at least one of the endpoints (source and/or destination) must exist in a process. The refined representation of a process can be done in another data-flow diagram, which subdivides this process into sub-processes. - -The data-flow diagram is a tool that is part of structured analysis and data modeling. When using UML, the activity diagram typically takes over the role of the data-flow diagram. A special form of data-flow plan is a site-oriented data-flow plan. - -Data-flow diagrams can be regarded as inverted Petri nets, because places in such networks correspond to the semantics of data memories. Analogously, the semantics of transitions from Petri nets and of data flows and functions from data-flow diagrams should be considered equivalent. - -The DFD notation draws on graph theory, originally used in operational research to model workflow in organizations. DFD originated from the activity diagram used in the structured analysis and design technique methodology at the end of the 1970s. DFD popularizers include Edward Yourdon, Larry Constantine, Tom DeMarco, Chris Gane and Trish Sarson. - -Data-flow diagrams (DFD) quickly became a popular way to visualize the major steps and data involved in software-system processes. DFDs were usually used to show data flow in a computer system, although they could in theory be applied to business process modeling. DFDs were useful to document the major data flows or to explore a new high-level design in terms of data flow. - -A DFD consists of processes, flows, warehouses, and terminators. There are several ways to view these DFD components. - -Process - -The process (function, transformation) is part of a system that transforms inputs to outputs. The symbol of a process is a circle, an oval, a rectangle or a rectangle with rounded corners (according to the type of notation). The process is named in one word, a short sentence, or a phrase that clearly expresses its essence. - -Data flow - -Data flow (flow, dataflow) shows the transfer of information (sometimes also material) from one part of the system to another. The symbol of the flow is the arrow. The flow should have a name that determines what information (or what material) is being moved. Exceptions are flows where it is clear what information is transferred through the entities that are linked to these flows. Material shifts are modeled in systems that are not merely informative.
Flow should only transmit one type of information (material). The arrow shows the flow direction (it can also be bi-directional if the information to/from the entity is logically dependent, e.g. question and answer). Flows link processes, warehouses and terminators. - -Warehouse - -The warehouse (datastore, data store, file, database) is used to store data for later use. The symbol of the store is two horizontal lines; other representations are shown in the DFD notation. The name of the warehouse is a plural noun (e.g. orders); it derives from the input and output streams of the warehouse. The warehouse does not have to be just a data file; it can also be, for example, a folder with documents, a filing cabinet, or a set of optical discs. Therefore, viewing the warehouse in DFD is independent of implementation. The flow from the warehouse usually represents the reading of the data stored in the warehouse, and the flow to the warehouse usually expresses data entry or updating (sometimes also deleting data). The warehouse is represented by two parallel lines between which the memory name is located (it can be modeled as a UML buffer node). - -Terminator - -The terminator is an external entity that communicates with the system and stands outside of the system. It can be, for example, various organizations (e.g. a bank), groups of people (e.g. customers), authorities (e.g. a tax office) or a department (e.g. a human-resources department) of the same organization, which does not belong to the model system. The terminator may be another system with which the modeled system communicates. - -Entity names should be comprehensible without further comments. A DFD is a system model created by analysts based on interviews with system users. It is intended for system developers, on one hand, and the project contractor on the other, so entity names should be adapted to the model domain and to its amateur or professional users. Entity names should be general (independent of, e.g., the specific individuals carrying out the activity), but should clearly specify the entity. Processes should be numbered for easier mapping and referral to specific processes. The numbering is arbitrary; however, it is necessary to maintain consistency across all DFD levels (see DFD Hierarchy). A DFD should be clear; the recommended maximum number of processes in one DFD is 6 to 9, and the minimum is 3 processes in one DFD. The exception is the so-called contextual diagram, where the only process symbolizes the model system and all terminators with which the system communicates. - -The DFD must be consistent with other models of the system: the entity relationship diagram, state-transition diagram, data dictionary, and process specification models. Each process must have its name, inputs and outputs. Each flow should have its name (for the exception, see Data flow above). Each data store must have input and output flows. Input and output flows do not have to be displayed in one DFD, but they must exist in another DFD describing the same system. An exception is a warehouse standing outside the system (external storage) with which the system communicates. - -To make the DFD more transparent (i.e. not too many processes), multi-level DFDs can be created. DFDs that are at a higher level are less detailed (they aggregate the more detailed DFDs at lower levels). The contextual DFD is the highest in the hierarchy (see DFD Creation Rules). The so-called zero level is followed by DFD 0, starting with process numbering (e.g., process 1, process 2). In the next, so-called first level, DFD 1, the numbering continues. For example,
process 1 is decomposed at the first level into three sub-processes, numbered 1.1, 1.2 and 1.3. Similarly, processes in the second level (DFD 2) are numbered e.g. 2.1.1, 2.1.2, 2.1.3 and 2.1.4. The number of levels depends on the size of the modeled system. The processes of DFD 0 need not all have the same number of decomposition levels. DFD 0 contains the most important (aggregated) system functions. The lowest level should include processes whose process specification fits on roughly one A4 page. If the mini-specification would be longer, it is appropriate to create an additional level for the process, where it is decomposed into multiple processes. For a clear overview of the entire DFD hierarchy, a vertical (cross-sectional) diagram can be created. A warehouse is displayed at the highest level where it is first used and at every lower level as well. diff --git a/wiki/wikipedia/2550.txt b/wiki/wikipedia/2550.txt deleted file mode 100644 index d34d823fe0ca859173fbb46d30d0067686a8487c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2550.txt +++ /dev/null @@ -1,55 +0,0 @@ -Clustering is the problem of partitioning data points into groups based on their similarity. Correlation clustering provides a method for clustering a set of objects into the optimum number of clusters without specifying that number in advance. - -In machine learning, correlation clustering or cluster editing operates in a scenario where the relationships between the objects are known instead of the actual representations of the objects. For example, given a weighted graph $G=(V,E)$ where the edge weight indicates whether two nodes are similar (positive edge weight) or different (negative edge weight), the task is to find a clustering that either maximizes agreements (the sum of positive edge weights within clusters plus the absolute value of the sum of negative edge weights between clusters) or minimizes disagreements (the absolute value of the sum of negative edge weights within clusters plus the sum of positive edge weights across clusters). Unlike other clustering algorithms, this does not require choosing the number of clusters $k$ in advance, because the objective - to minimize the sum of weights of the cut edges - is independent of the number of clusters. - -It may not be possible to find a perfect clustering, where all similar items are in one cluster while all dissimilar ones are in different clusters. If the graph indeed admits a perfect clustering, then simply deleting all the negative edges and finding the connected components in the remaining graph will return the required clusters. - -But, in general, a graph may not have a perfect clustering. For example, given nodes a,b,c such that a,b and a,c are similar while b,c are dissimilar, a perfect clustering is not possible. In such cases, the task is to find a clustering that maximizes the number of agreements (the number of + edges inside clusters plus the number of − edges between clusters) or minimizes the number of disagreements (the number of − edges inside clusters plus the number of + edges between clusters). This problem of maximizing the agreements is NP-complete (the multiway cut problem reduces to maximizing weighted agreements, and the problem of partitioning into triangles can be reduced to the unweighted version). - -Bansal et al. discuss the NP-completeness proof and also present both a constant-factor approximation algorithm and a polynomial-time approximation scheme to find the clusters in this setting. Ailon et al.
propose a randomized 3-approximation algorithm for the same problem. - -CC-Pivot(G=(V,E+,E−)) - -Pick random pivot i ∈ V - -Set $C=\{i\}$, V'=Ø - -For all j ∈ V, j ≠ i: - -If (i,j) ∈ E+ then - -Add j to C - -Else (if (i,j) ∈ E−) - -Add j to V' - -Let G' be the subgraph induced by V' - -Return clustering C,CC-Pivot(G') - -The authors show that the above algorithm is a 3-approximation algorithm for correlation clustering. The best polynomial-time approximation algorithm known at the moment for this problem achieves a ~2.06 approximation by rounding a linear program, as shown by Chawla, Makarychev, Schramm, and Yaroslavtsev. - -Karpinski and Schudy proved the existence of a polynomial-time approximation scheme (PTAS) for that problem on complete graphs with a fixed number of clusters. - -In 2011, it was shown by Bagon and Galun - -that the optimization of the correlation clustering functional is closely related to well-known discrete optimization methods. - -In their work they proposed a probabilistic analysis of the underlying implicit model that allows the correlation clustering functional to estimate the underlying number of clusters. - -This analysis suggests the functional assumes a uniform prior over all possible partitions regardless of their number of clusters. - -Thus, a non-uniform prior over the number of clusters emerges. - -Several discrete optimization algorithms are proposed in this work that scale gracefully with the number of elements (experiments show results with more than 100,000 variables). - -The work of Bagon and Galun also evaluated the effectiveness of the recovery of the underlying number of clusters in several applications. - -Correlation clustering also relates to a different task, where correlations among attributes of feature vectors in a high-dimensional space are assumed to exist, guiding the clustering process. These correlations may be different in different clusters, so a global decorrelation cannot reduce this to traditional (uncorrelated) clustering. - -Correlations among subsets of attributes result in different spatial shapes of clusters. Hence, the similarity between cluster objects is defined by taking into account the local correlation patterns. With this notion, the term was introduced simultaneously with the notion discussed above. Different methods for correlation clustering of this type, and their relationship to other types of clustering, are discussed in the literature. See also clustering of high-dimensional data. - -Correlation clustering (according to this definition) can be shown to be closely related to biclustering. As in biclustering, the goal is to identify groups of objects that share a correlation in some of their attributes, where the correlation is usually typical for the individual clusters. diff --git a/wiki/wikipedia/2551.txt b/wiki/wikipedia/2551.txt deleted file mode 100644 index e0eb0acd002a2505b35c1c748804c96210a5a72d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2551.txt +++ /dev/null @@ -1,53 +0,0 @@ -Data preservation is the act of conserving and maintaining both the safety and integrity of data. Preservation is done through formal activities that are governed by policies, regulations and strategies directed towards protecting and prolonging the existence and authenticity of data and its metadata. Data can be described as the elements or units in which knowledge and information are created, - - and metadata are the summarizing subsets of the elements of data, or the data about the data.
The main goal of data preservation is to protect data from being lost or destroyed and to contribute to the reuse and progression of the data. - -Most historical data collected over time has been lost or destroyed. War and natural disasters, combined with the lack of materials and necessary practices to preserve and protect data, have caused this. Usually, only the most important data sets were saved, such as government records and statistics, legal contracts and economic transactions. Scientific research and doctoral theses data have mostly been destroyed through improper storage and a lack of data-preservation awareness and practice. Over time, data preservation has evolved and has gained importance and awareness. We now have many different ways to preserve data and many different important organizations involved in doing so. - -The first digital data preservation storage solutions appeared in the 1950s; they were usually flat or hierarchically structured. While there were still issues with these solutions, they made storing data much cheaper and more easily accessible. In the 1970s, relational databases as well as spreadsheets appeared. Relational databases structure data into tables using structured query languages, which made them more efficient than the preceding storage solutions, while spreadsheets hold high volumes of numeric data which can be applied to these relational databases to produce derivative data. More recently, non-relational (non-structured-query-language) databases have appeared as complements to relational databases; they hold high volumes of unstructured or semi-structured data. - -In contrast, data holdings are collections of gathered data that are informally kept and not necessarily prepared for long-term preservation - for example, a collection or back-up of personal files. Data holdings were generally the storage method used in the past, when data was often lost to environmental and other historical disasters. - -Digital preservation is similar to data preservation, but is mainly concerned with technological threats, and solely with digital data. Essentially, digital preservation is a set of formal activities to enable ongoing or persistent use of and access to digital data beyond the occurrence of technological malfunction or change. Digital preservation acknowledges the inevitable change in technology and protocols, and prepares for the fact that data will need to be accessible across new types of technologies and platforms while the integrity of the data and metadata is conserved. - -Technology, while providing great progress in conserving data in ways that may not have been possible in the past, is also changing at such a quick rate that digital data may become inaccessible when its format is incompatible with new software. Without the use of data preservation, much of our existing digital data is at risk. - -The majority of methods used for data preservation today are digital methods, which are so far the most effective methods that exist. - -=== Archives === - -Archives are collections of historical documents and records. Archives contribute to the preservation of data by collecting data that is well organized, while providing the appropriate metadata to document it. - -An example of an important data archive is The LONI Image Data Archive, which collects data regarding clinical trials and clinical research studies.
- -=== Catalogues, directories and portals === - -Catalogues, directories and portals are consolidated resources which are kept by individual institutions and are associated with data archives and holdings. In other words, the data itself is not presented on the site; the site instead acts as a metadata aggregator and may administer thorough inventories. - -=== Repositories === - -Repositories are places where data archives and holdings can be accessed and stored. The goal of repositories is to make sure that all requirements and protocols of archives and holdings are being met, and that data is certified, to ensure data integrity and user trust. - -Single-site Repositories - -A repository that holds all data sets on a single site. - -An example of a major single-site repository is Data Archiving and Networked Services (DANS), which provides ongoing access to digital research resources for the Netherlands. - -Multi-Site Repositories - -A repository that hosts data sets on multiple institutional sites. - -An example of a well-known multi-site repository is OpenAIRE, which hosts research data and publications in collaboration with all of the EU countries and more. OpenAIRE promotes open scholarship and seeks to improve the discoverability and reusability of data. - -Trusted Digital Repository - -A repository that seeks to provide reliable, trusted access over a long period of time. The repository can be single- or multi-sited but must conform to the Reference Model for an Open Archival Information System, as well as adhere to a set of rules or attributes that contribute to its trustworthiness, such as persistent financial responsibility, organizational stability, and administrative responsibility for security and safety. - -An example of a trusted digital repository is The Digital Repository of Ireland (DRI), a multi-site repository that hosts Ireland's humanities and social-science data sets. - -=== Cyber Infrastructures === - -Cyber infrastructures consist of archive collections which are made available through a system of hardware, technologies, software, policies, services and tools. Cyber infrastructures are geared towards the sharing of data, supporting peer-to-peer collaborations and a cultural community. - -An example of a major cyber-infrastructure is The Canadian Geospatial Data Infrastructure, which provides access to spatial data in Canada. diff --git a/wiki/wikipedia/2552.txt b/wiki/wikipedia/2552.txt deleted file mode 100644 index bea80b5b4b72d03d8ff761f70f5c018a8a93ad82..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2552.txt +++ /dev/null @@ -1,33 +0,0 @@ -Solomon Wolf Golomb (May 30, 1932 – May 1, 2016) was an American mathematician, engineer, and professor of electrical engineering at the University of Southern California, best known for his works on mathematical games. Most notably, he invented Cheskers in 1948 and coined the name. He also fully described polyominoes and pentominoes in 1953. He specialized in problems of combinatorial analysis, number theory, coding theory, and communications. His game of pentominoes inspired Tetris. - -Golomb, a graduate of the Baltimore City College high school, received his bachelor's degree from Johns Hopkins University and his master's and doctoral degrees in mathematics from Harvard University in 1957, with a dissertation on "Problems in the Distribution of the Prime Numbers". - -While working at the Glenn L.
Martin Company he became interested in communications theory and began his work on shift register sequences. He spent his Fulbright year at the University of Oslo and then joined the Jet Propulsion Laboratory at Caltech, where he researched military and space communications. He joined the faculty of USC in 1963 and was awarded full tenure two years later. - -Golomb pioneered the identification of the characteristics and merits of maximum-length shift register sequences, also known as pseudorandom or pseudonoise sequences, which have extensive military, industrial and consumer applications (a minimal software sketch of such a sequence generator is given at the end of this article). Today, millions of cordless and cellular phones employ pseudorandom direct-sequence spread spectrum implemented with shift register sequences. His efforts made USC a center for communications research. - -Golomb was the inventor of Golomb coding, a form of entropy encoding. Golomb rulers, used in astronomy and in data encryption, are also named for him, as is one of the main generation techniques of Costas arrays, the Lempel-Golomb generation method. - -He was a regular columnist, writing Golomb's Puzzle Column in the IEEE Information Society Newsletter. He was also a frequent contributor to Scientific American's Mathematical Games column (the column did much to publicize his discoveries about polyominoes and pentominoes) and a frequent participant in Gathering 4 Gardner conferences. Among his contributions to recreational mathematics are rep-tiles. He also contributed a puzzle to each issue of the Johns Hopkins Magazine, a monthly publication of his undergraduate alma mater, for a column called "Golomb's Gambits", and was a frequent contributor to Word Ways: The Journal of Recreational Linguistics. - -Golomb was a member of both the National Academy of Engineering and the National Academy of Sciences. - -In 1985, he received the Shannon Award of the Information Theory Society of the IEEE. - -In 1992, he received the medal of the U.S. National Security Agency for his research, and he was also the recipient of the Lomonosov Medal of the Russian Academy of Science and the Kapitsa Medal of the Russian Academy of Natural Sciences. - -In 2000, he was awarded the IEEE Richard W. Hamming Medal for his exceptional contributions to information sciences and systems. He was singled out as a major figure of coding and information theory for over four decades, specifically for his ability to apply advanced mathematics to problems in digital communications. - -Golomb was one of the first high-profile professors to attempt the Ronald K. Hoeflin Mega IQ power test, which originally appeared in Omni Magazine. He scored at least IQ 176, a level reached by roughly one in a million of the unselected population. - -In 2012, he became a fellow of the American Mathematical Society. That same year, it was announced that he had been selected to receive the National Medal of Science. In 2014, he was elected as a fellow of the Society for Industrial and Applied Mathematics "for contributions to coding theory, data encryption, communications, and mathematical games." - -In 2013, he was presented with the 2011 National Medal of Science. - -In 2016, he was awarded the Benjamin Franklin Medal in Electrical Engineering "for pioneering work in space communications and the design of digital spread spectrum signals, transmissions that provide security, interference suppression, and precise location for cryptography; missile guidance; defense, space, and cellular communications; radar; sonar; and GPS."
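The maximum-length shift-register sequences mentioned above are easy to generate in software. The following is a minimal sketch (an illustration added here, not code from Golomb's work or any cited source; the function name and interface are invented): a Fibonacci linear-feedback shift register whose taps correspond to the primitive polynomial x^4 + x + 1, so the output repeats with the maximum possible period 2^4 − 1 = 15.

```python
# A minimal sketch of a Fibonacci linear-feedback shift register (LFSR).
# Tap positions (4, 1) correspond to the primitive polynomial x^4 + x + 1,
# so the output is a maximum-length (period-15) pseudorandom bit sequence.
def lfsr_sequence(taps, state, length):
    """Return `length` output bits from a Fibonacci LFSR.

    taps:  1-based register positions XOR-ed together as feedback
    state: initial register contents (must not be all zeros)
    """
    state = list(state)
    out = []
    for _ in range(length):
        out.append(state[-1])            # the output bit is the last register cell
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]     # XOR of the tapped cells
        state = [feedback] + state[:-1]  # shift right and insert the feedback bit
    return out

bits = lfsr_sequence(taps=(4, 1), state=(1, 0, 0, 0), length=30)
print(bits)  # the same 15-bit pattern appears twice: a maximum-length sequence
```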
- -*Polyominoes, Princeton University Press; 2nd edition, 1996. - -*Shift Register Sequences, San Francisco, Holden-Day, 1967. diff --git a/wiki/wikipedia/2553.txt b/wiki/wikipedia/2553.txt deleted file mode 100644 index 77f1e1d616266bf1375892c2e6b5b943847efb83..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2553.txt +++ /dev/null @@ -1,11 +0,0 @@ -In number theory, the Stark conjectures, introduced by Harold Stark and later expanded by John Tate, give conjectural information about the coefficient of the leading term in the Taylor expansion of an Artin L-function associated with a Galois extension K/k of algebraic number fields. The conjectures generalize the analytic class number formula expressing the leading coefficient of the Taylor series for the Dedekind zeta function of a number field as the product of a regulator related to S-units of the field and a rational number. When K/k is an abelian extension and the order of vanishing of the L-function at s = 0 is one, Stark gave a refinement of his conjecture, predicting the existence of certain S-units, called Stark units. Karl Rubin and Cristian Dumitru Popescu gave extensions of this refined conjecture to higher orders of vanishing. - -The Stark conjectures, in the most general form, predict that the leading coefficient of an Artin L-function is the product of a type of regulator, the Stark regulator, with an algebraic number. When the extension is abelian and the order of vanishing of an L-function at s = 0 is one, Stark's refined conjecture predicts the existence of the Stark units, whose roots generate Kummer extensions of K that are abelian over the base field k (and not just abelian over K, as Kummer theory implies). As such, this refinement of his conjecture has theoretical implications for solving Hilbert's twelfth problem. Also, it is possible to compute Stark units in specific examples, allowing verification of his refined conjecture as well as providing an important computational tool for generating abelian extensions of number fields. In fact, some standard algorithms for computing abelian extensions of number fields involve producing Stark units that generate the extensions (see below). - -The first-order zero conjectures are used in recent versions of the PARI/GP computer algebra system to compute Hilbert class fields of totally real number fields, and the conjectures provide one solution to Hilbert's twelfth problem, which challenged mathematicians to show how class fields may be constructed over any number field by the methods of complex analysis. - -Stark's principal conjecture has been proven in various special cases, including the case where the character defining the L-function takes on only rational values. Except when the base field is the field of rational numbers or an imaginary quadratic field, the abelian Stark conjectures are still unproved in number fields, and more progress has been made in function fields of an algebraic variety. - -Manin related Stark's conjectures to the noncommutative geometry of Alain Connes. This provides a conceptual framework for studying the conjectures, although at the moment it is unclear whether Manin's techniques will yield the actual proof. - -Recent progress has been made by Dasgupta and Kakde.
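For orientation, the prototype that the Stark conjectures generalize can be written out explicitly. The following display is the standard statement of the class number formula at s = 0 (a well-known fact, added here for concreteness rather than taken from this article's sources):

$$\zeta_K(s) \sim -\frac{h_K R_K}{w_K}\, s^{\,r_1+r_2-1} \qquad (s \to 0),$$

where $h_K$ is the class number, $R_K$ the regulator, $w_K$ the number of roots of unity in $K$, and $r_1, r_2$ the numbers of real and complex places of $K$. The Stark regulator plays the role of $R_K$ for a general Artin L-function.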
diff --git a/wiki/wikipedia/2554.txt b/wiki/wikipedia/2554.txt deleted file mode 100644 index d88ead5948e29828d5456fd7985e1e6150b01e12..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2554.txt +++ /dev/null @@ -1,39 +0,0 @@ -Balanced number partitioning is a variant of multiway number partitioning in which there are constraints on the number of items allocated to each set. The input to the problem is a set of n items of different sizes, and two integers m, k. The output is a partition of the items into m subsets, such that the number of items in each subset is at most k. Subject to this, it is required that the sums of sizes in the m subsets are as similar as possible. - -An example application is identical-machines scheduling where each machine has a job-queue that can hold at most k jobs. The problem has applications also in manufacturing of VLSI chips, and in assigning tools to machines in flexible manufacturing systems. - -In the standard three-field notation for optimal job scheduling problems, the problem of minimizing the largest sum is sometimes denoted by " P | #≤k | $C_\max$". The middle field "#≤k" denotes that the number of jobs in each machine should be at most k. This is in contrast to the unconstrained version, which is denoted by " P||$C_\max$" and which contains the classic partition problem (problem [SP12] in Garey and Johnson's list of NP-complete problems). There are many algorithms that aim to find a balanced partition in which the sum is as nearly-equal as possible. - -* Coffman, Frederickson and Lueker present a restricted version of the LPT algorithm (called RLPT), in which inputs are assigned in pairs. When inputs are uniformly-distributed random variables, the expected largest sum of RLPT is exactly $\frac{n}{4}+\frac{1}{2n+2}$. The expected work-difference (difference between largest and smallest sum) is $\Theta(1/n)$. A balanced variant of the LDM algorithm for m=2, called BLDM, has expected work-difference $n^{-\Theta(\log n)}$. - -*Mertens presents a complete anytime algorithm for balanced two-way partitioning. It combines the BLDM algorithm with the complete-Karmarkar-Karp algorithm. - -Another special case, called 3-partitioning, is when the number of items in each subset should be at most 3 (k=3). Deciding whether there exists such a partition with equal sums is exactly the 3-partition problem, which is known to be strongly NP-hard. There are approximation algorithms that aim to find a partition in which the sum is as nearly-equal as possible. - -* Kellerer and Woeginger adapt the LPT algorithm to triplet partitioning (where there are at most 3*m items, and each subset should contain at most 3 items). Their algorithm is called modified-LPT or MLPT. It orders the items from large to small, and puts each item in turn into the bin with the smallest sum among those bins that contain fewer than 3 items (this greedy rule is sketched in code below). They show that the MLPT algorithm attains at most $\frac{4 m-1}{3 m}$ of the minimum largest sum, which is the same approximation ratio that LPT attains for the unconstrained problem. The bound is tight for MLPT. - -* Chen, He and Lin show that, for the same problem, MLPT attains at least $\frac{3 m-1}{4 m-2}$ of the maximum smallest sum, which is again the same ratio that LPT attains for the unconstrained problem. - -* Kellerer and Kotov present a different algorithm (for the case with exactly 3*m items), that attains at most $7/6$ of the minimum largest sum. - -A more general case, called k-partitioning, is the variant in which there are k*m items (for some integer k), and each of the m sets must contain exactly k items.
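The capacity-constrained greedy rule used by MLPT above (and by the Kernel-LPT variant discussed further below) is simple to state in code. The following is a minimal sketch of that rule only - not the authors' implementation; the function name and interface are invented for the example:

```python
# A sketch of the MLPT-style greedy rule: sort items in decreasing order and
# put each item on the subset with the smallest current sum among those
# subsets that still hold fewer than k items.
def balanced_greedy(sizes, m, k):
    """Heuristically partition `sizes` into m subsets of at most k items each,
    trying to minimize the largest subset sum."""
    assert len(sizes) <= m * k, "not enough capacity for all items"
    subsets = [[] for _ in range(m)]
    sums = [0] * m
    for x in sorted(sizes, reverse=True):
        # among subsets below the cardinality cap k, pick the one with least sum
        j = min((i for i in range(m) if len(subsets[i]) < k), key=lambda i: sums[i])
        subsets[j].append(x)
        sums[j] += x
    return subsets, max(sums)

parts, largest = balanced_greedy([8, 7, 6, 5, 4, 3], m=2, k=3)
print(parts, largest)  # [[8, 5, 4], [7, 6, 3]] with largest sum 17
```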
Several heuristic algorithms for approximating the minimum largest sum in this setting have been proposed: - -** Folding algorithm: optimal for m=2, and in general has tight approximation ratio $2-\frac{1}{m}$. - -** Exchange algorithm: tight approximation ratio $2-\frac{2}{m+1}$. It is not known if it runs in polynomial time. - -** Primal-dual algorithm (a combination of LPT and MultiFit): approximation ratio at most $4/3$. It is tight for k=4 when m is sufficiently large (the precise lower bound is $\frac{4 m}{3 m+1}$). - -A further variant requires each of the m sets to contain either ceiling(n/m) or floor(n/m) items (so k=ceiling(n/m)). For this variant, the Balanced-LDM (BLDM) algorithm has been extended from m=2 to general m. The generalized algorithm runs in time $O(n\log n)$. Its approximation ratio for the minimum largest sum is exactly 4/3 for k=3, 19/12 for k=4, 103/60 for k=5, 643/360 for k=6, and 4603/2520 for k=7. The ratios were found by solving a mixed integer linear program. In general (for any k), the approximation ratio is at least $2-\sum_{j=0}^{k-1}\frac{j!}{k!}$ and at most $2-\frac{1}{k-1}$. The exact MILP results for k = 3, 4, 5, 6, 7 correspond to the lower bound. For k>7, no exact results are known, but the difference between the lower and upper bound is less than 0.3%. When the parameter is the number of subsets (m), the approximation ratio is exactly $2-\frac{1}{m}$. - -*An algorithm of Zhang, Mouratidis and Pang can also be used for balanced partitioning. For the variant in which the number of items in each set is at most k, the linear-program relaxation has the same optimal value as the LP relaxation of the unconstrained variant. The expression $\max((\sum x_i)/m, x_1)$, where the $x_i$ are the inputs ordered from large to small, is a lower bound for the optimal largest sum, and its worst-case ratio is 1/2 in both variants. The improved expression $\max((\sum x_i)/m, x_1, x_k+x_{m+1})$ has worst-case ratio 2/3 in the unconstrained variant and 1/2 in the constrained variant. The approximation ratio of modified list scheduling is 1/2 for the unconstrained variant, but it is 0 for the constrained variant (it can be arbitrarily bad). The approximation ratio of the modified LPT algorithm is at most 2. - -For the problem of maximizing the smallest sum, the FOLDING algorithm has tight approximation ratio $\max\left(\frac{2}{k}, \frac{1}{m}\right)$. A newer algorithm, HARMONIC1, has worst-case ratio at least $\max\left(\frac{1}{k}, \frac{1}{\lceil \sum_{i=1}^m \frac{1}{i}\rceil+1}\right)$. Both these algorithms are ordinal - they partition the items based only on the order between them rather than their exact values. Any ordinal algorithm has ratio at most $O(1/\ln{m})$ for maximizing the smallest sum, which indicates that HARMONIC1 is asymptotically optimal. For any fixed k, any ordinal algorithm has ratio at most the smallest root of the equation $\sum_{i=1}^m \left\lfloor\left\lfloor \frac{k+i-1}{i}\right\rfloor x \right\rfloor = k$. When k tends to infinity, this upper bound approaches 0. - -There are some general relations between approximations to the balanced partition problem and the standard (unconstrained) partition problem. - -* Babel, Kellerer and Kotov prove that the problem is NP-hard even for k=3 (for k=2 it can be solved efficiently by finding a maximum weight matching).
They then present an algorithm called Kernel-LPT (KLPT): it assigns a kernel to each subset, and then runs the modified LPT algorithm (which puts each item into the subset with the smallest sum among those that have fewer than k items). They prove that, with k=3, KLPT has an approximation ratio of $\frac{4 m-1}{3 m}$ for the minimum largest sum. However, Chen, He and Lin claim that its tight approximation ratio is $\frac{3 m-1}{2 m}$ for the minimum largest sum, and $\frac{2 m-1}{3 m-2}$ for the maximum smallest sum. - -In another variant of this problem, there are k categories of size m, and each subset should contain exactly one item from each category (that is, $k_h = 1$ for each category h). - -* Wu and Yao presented the layered LPT algorithm - a variant of the LPT algorithm. They prove that its approximation ratio is $2-\frac{1}{m}$ for minimizing the largest sum and $\frac{1}{m}$ for maximizing the smallest sum in the general case; in some special cases, it can be improved to $\frac{m}{2 m-1}$ for general k and $\frac{m-1}{2 m-3}$ for k=3. - -* Li and Li presented different algorithms for the same problem. For minimizing the largest sum, they present an EPTAS for constant k, and an FPTAS for constant m. For maximizing the smallest sum, they present a 1/(k-1)-approximation algorithm for the general case, and an EPTAS for constant k. They also study a more general objective: minimizing the lp-norm of the vector of sums. They prove that the layered-LPT algorithm is a 2-approximation algorithm for all norms. - -*Dell'Olmo, Hansen, Pallottino and Storchi study 32 different objectives for this problem. For each of the four operators max, min, sum, diff, one operator can be applied to the k items inside each subset, and then one operator can be applied to the m results for the different subsets. Each of these 16 objectives can be either maximized or minimized, for a total of 32. They show that 21 of these problems can be solved in linear time; 7 require more complex, but still polynomial-time, algorithms; 3 are NP-hard: maximizing (min,sum), minimizing (max,sum) and minimizing (diff,sum). They left open the status of minimizing (diff,diff). diff --git a/wiki/wikipedia/2555.txt b/wiki/wikipedia/2555.txt deleted file mode 100644 index 15f5bea6e1acb1658cd0bc183209422b98d61984..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2555.txt +++ /dev/null @@ -1,19 +0,0 @@ -The vertex k-center problem is a classical NP-hard problem in computer science. It has applications in facility location and clustering. Basically, the vertex k-center problem models the following real problem: given a city with $n$ facilities, find the best $k$ facilities at which to build fire stations. Since firefighters must respond to any emergency as quickly as possible, the distance from the farthest facility to its nearest fire station has to be as small as possible. In other words, the position of the fire stations must be such that every possible fire is attended to as quickly as possible. - -The problem was first proposed by Hakimi in 1964. Formally, the vertex k-center problem consists of the following: given a complete undirected graph $G=(V,E)$ in a metric space, and a positive integer $k$, find a subset $C \subseteq V$ such that $|C|\le k$ and the objective function $r(C)=\max_{v \in V}\{d(v,C)\}$ is minimized. The distance $d(v,C)$ is defined as the distance from the vertex $v$ to its nearest center in $C$.
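To make the definition concrete, here is a minimal sketch (assuming the metric is given explicitly as a distance matrix; the function name and interface are invented for this illustration) of the farthest-point rule that underlies the Gon algorithm described below, together with the computation of the covering radius $r(C)$:

```python
# A sketch of the farthest-first (Gon) heuristic for the vertex k-center
# problem: repeatedly add as a new center the vertex farthest from the
# current set of centers.
def gon_k_center(dist, k):
    """dist: symmetric n x n matrix of metric distances.
    Returns (centers, radius), where radius is the covering radius r(C)."""
    n = len(dist)
    centers = [0]                  # first center: an arbitrary vertex
    d_to_c = list(dist[0])         # distance of each vertex to its nearest center
    for _ in range(k - 1):
        v = max(range(n), key=lambda i: d_to_c[i])   # farthest vertex
        centers.append(v)
        d_to_c = [min(d_to_c[i], dist[v][i]) for i in range(n)]
    return centers, max(d_to_c)    # max distance to a center = r(C)

# Example: 4 points on a line at coordinates 0, 1, 5, 6.
pts = [0, 1, 5, 6]
dist = [[abs(a - b) for b in pts] for a in pts]
print(gon_k_center(dist, k=2))  # ([0, 3], 1): centers at 0 and 6, radius 1
```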
If $P \neq NP$, the vertex k-center problem cannot be solved optimally in polynomial time. However, there are polynomial-time algorithms that obtain near-optimal solutions, specifically 2-approximate solutions; in fact, if $P \neq NP$, a 2-approximation is the best that a polynomial-time algorithm can guarantee. In the context of a minimization problem, such as the vertex k-center problem, a 2-approximate solution is any solution $C'$ such that $r(C') \le 2 \times r(\text{OPT})$, where $r(\text{OPT})$ is the size of an optimal solution. An algorithm that is guaranteed to generate 2-approximate solutions is known as a 2-approximation algorithm. The main 2-approximation algorithms for the vertex k-center problem reported in the literature are the Sh algorithm, the HS algorithm, and the Gon algorithm. - -Formally characterized by David Shmoys in 1995, the Sh algorithm takes as input a complete undirected graph $G=(V,E)$, a positive integer $k$, and an assumption $r$ on what the optimal solution size is. The Sh algorithm works as follows: it selects the first center $c_1$ at random. So far, the solution consists of only one vertex, $C=\{c_1\}$. Next, it selects center $c_2$ at random from the set containing all the vertices whose distance from $C$ is greater than $2 \times r$. At this point, $C=\{c_1,c_2\}$. Finally, it selects the remaining $k-2$ centers the same way $c_2$ was selected. The complexity of the Sh algorithm is $O(kn)$, where $n$ is the number of vertices. - -Proposed by Dorit Hochbaum and David Shmoys in 1985, the HS algorithm takes the Sh algorithm as its basis. By noticing that the value of $r(\text{OPT})$ must equal the cost of some edge in $E$, and since there are $O(n^2)$ edges in $E$, the HS algorithm basically repeats the Sh algorithm with every edge cost. The complexity of the HS algorithm is $O(n^4)$. However, by running a binary search over the ordered set of edge costs, its complexity is reduced to $O(n^2 \log n)$. - -Proposed independently by Teofilo Gonzalez, and by Martin Dyer and Alan Frieze in 1985, the Gon algorithm is basically a more powerful version of the Sh algorithm. While the Sh algorithm requires a guess $r$ on $r(\text{OPT})$, the Gon algorithm dispenses with such a guess by noticing that if any set of vertices at distance greater than $2 \times r(\text{OPT})$ exists, then the farthest vertex must be inside such a set. Therefore, instead of computing at each iteration the set of vertices at distance greater than $2 \times r$ and then selecting a random vertex, the Gon algorithm simply selects the farthest vertex from every partial solution $C'$. The complexity of the Gon algorithm is $O(kn)$, where $n$ is the number of vertices. - -Proposed by García Díaz et al. in 2017, the CDS algorithm is a 3-approximation algorithm that takes ideas from the Gon algorithm (farthest-point heuristic), the HS algorithm (parametric pruning), and the relationship between the vertex k-center problem and the dominating set problem. The CDS algorithm has a complexity of $O(n^4)$. However, by performing a binary search over the ordered set of edge costs, a more efficient heuristic named CDSh is proposed. The CDSh algorithm's complexity is $O(n^2 \log n)$. Despite the suboptimal guarantee of the CDS algorithm, and the purely heuristic nature of CDSh, both perform much better than the Sh, HS, and Gon algorithms in practice. - -Some of the most widely used benchmark datasets for the vertex k-center problem are the pmed instances from OR-Lib, and some instances from TSP-Lib.
Table 1 shows the mean and standard deviation of the experimental approximation factors of the solutions generated by each algorithm over the 40 pmed instances from OR-Lib. - -The greedy pure algorithm (or Gr) follows the core idea of greedy algorithms: to make locally optimal decisions. In the case of the vertex k-center problem, the locally optimal decision consists of selecting each center in such a way that the size of the solution (covering radius) is minimum at each iteration. In other words, the first center selected is the one that solves the 1-center problem. The second center selected is the one that, along with the previous center, generates a solution with minimum covering radius. The remaining centers are selected the same way. The complexity of the Gr algorithm is $O(kn^2)$. The empirical performance of the Gr algorithm is poor on most benchmark instances. - -The Scoring algorithm (or Scr) was introduced by Jurij Mihelič and Borut Robič in 2005. This algorithm takes advantage of the reduction from the vertex k-center problem to the minimum dominating set problem. The problem is solved by pruning the input graph with every possible value of the optimal solution size and then solving the minimum dominating set problem heuristically. This heuristic follows the lazy principle, which delays every decision as long as possible (as opposed to the greedy strategy). The complexity of the Scr algorithm is $O(n^4)$. The empirical performance of the Scr algorithm is very good on most benchmark instances. However, its running time rapidly becomes impractical as the input grows, so it seems to be a good algorithm only for small instances. diff --git a/wiki/wikipedia/2556.txt b/wiki/wikipedia/2556.txt deleted file mode 100644 index adad019913bb380a2265827cdfdd824843660585..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2556.txt +++ /dev/null @@ -1,133 +0,0 @@ -Freeplane is a free, open source software application for creating mind maps (diagrams of connections between ideas) and electronic outlines. Written in Java, Freeplane is supported on Windows, Mac OS X and Linux, and is licensed under the GNU GPL version "2 or later". - -In 2007, Freeplane was forked from the FreeMind project. Freeplane maintains partial file-format compatibility with FreeMind - Freeplane fully supports the FreeMind file format, but adds features and tags not supported by FreeMind, which are ignored when FreeMind loads such files. - -New features of the Freeplane stable release (June 2010) include: - -* Export to PNG, JPEG, SVG (in addition to HTML / XHTML and PDF) - -* Find / Replace in all open maps - -* Paste HTML as node structure - -* Outline mode - -* Portable version (run from a USB flash drive) - -* Scripting via Groovy - -* Spell checker - -The first stable Freeplane 1.2.x release was 1.2.20, released on October 20, 2012. It includes the following new features: - -* Text-processor-like node styles - -* Conditional node styles - -* Map templates for new maps - -* Formatting panel - -* Add-ons: Installable enhancements - -* Hyperlinks for menu items - -* Keyboard shortcut documentation: Map and HTML table generation added for the documentation map - -* Check for newer auto save files on opening of a map - -* Single instance mode: open files in an existing program instance instead of opening a new one.
- -* Node-level-dependent filters - -* Improvements in search and replace functions - -* Different cloud shapes - -* New icons for rating - -* Automatic edge color - -* "Grid" for moving of nodes (Preferences->Behaviour->Grid gap size) - -* Copy and paste attributes - -* Named filter conditions - -* Different shapes, line types, widths and transparency for connectors - -* Freeplane portable version (download and install the file named FreeplanePortable_xxx.paf.exe) - -* File -> Properties... dialog showing facts about the map such as total nodes, branches and leaf nodes - -* New icons added to facilitate speedy use of main and contextual menus - -* Formulas: Use of formulas as node text and attributes (like in spreadsheet processors) - -* Node numbering and formats/templates as style attributes - -* Added progress icons to show incremental completion in 10% or 25% steps - -* Summaries: Create graphical and textual summaries by "bracketing" nodes. See example map - -* Menu and command structure refactored both to integrate new features and to make Freeplane more intuitive and easier to learn - -* Dates and numbers: Parsing and formatting improved - -* Digital post-its: freely positionable and free-floating nodes - -* Dates and numbers: Improved scripting support - -Version 1.3 was published ((date)). - -New features of 1.3.x included: - -* Expansion of the LaTeX feature to both formulae and text - -* Integration of OpenStreetMap - -New features of Freeplane 1.5 include: - -* New options for creating mind maps with high homogeneity and symmetry - -* Clones - -* Init scripts - -* Background images - -* References to other mind maps from formulas and scripts - -* PDF and SVG export enhancements - -* Java 9 support - -* JLatexMath update - -* Bug fixes - -* Dark UI mode support (look and feel and map template "Darcula") - -* User interface enhancements - -* Nodes and aliases enhancements - -* Java 13 support, Java 11 compatibility, Java 7 support dropped, Java 8 is required - -* Bug fixes - -The latest stable release is . - -One feature of Freeplane is its support for installable enhancements. Add-ons are a way to extend and customize Freeplane, similar to how plug-ins and extensions can be used to extend and customize well-known applications like Firefox or LibreOffice. Freeplane add-ons can be used to provide a single function, a bundle of multiple functions, bind those functions to a menu item, etc. - -Available add-ons include: - -* GTD support - -* study planner - -* more icons - -* versioning and collaborative work - -See more on the add-ons page. diff --git a/wiki/wikipedia/2557.txt b/wiki/wikipedia/2557.txt deleted file mode 100644 index b66e724bd2dd9159beb4c936b65ff685bae6016b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2557.txt +++ /dev/null @@ -1,29 +0,0 @@ -In Euclidean plane geometry, a pseudotriangle (pseudo-triangle) is a simply connected subset of the plane that lies between any three mutually tangent convex sets. A pseudotriangulation (pseudo-triangulation) is a partition of a region of the plane into pseudotriangles, and a pointed pseudotriangulation is a pseudotriangulation in which at each vertex the incident edges span an angle of less than π. - -Although the words "pseudotriangle" and "pseudotriangulation" have been used with various meanings in mathematics for much longer, the terms as used here were introduced in 1993 by Michel Pocchiola and Gert Vegter in connection with the computation of visibility relations and bitangents among convex obstacles in the plane.
Pointed pseudotriangulations were first considered by Ileana Streinu (2000, 2005) as part of her solution to the carpenter's ruler problem, a proof that any simple polygonal path in the plane can be straightened out by a sequence of continuous motions. Pseudotriangulations have also been used for collision detection among moving objects and for dynamic graph drawing and shape morphing. Pointed pseudotriangulations arise in rigidity theory as examples of minimally rigid planar graphs, and in methods for placing guards in connection with the art gallery theorem. The shelling antimatroid of a planar point set gives rise to pointed pseudotriangulations, although not all pointed pseudotriangulations can arise in this way. - -For a detailed survey of much of the material discussed here, see Rote, Santos, and Streinu (2008). - -Pocchiola and Vegter (1996a,b,c) originally defined a pseudotriangle to be a simply-connected region of the plane bounded by three smooth convex curves that are tangent at their endpoints. However, subsequent work has settled on a broader definition that applies more generally to polygons as well as to regions bounded by smooth curves, and that allows nonzero angles at the three vertices. In this broader definition, a pseudotriangle is a simply-connected region of the plane, having three convex vertices. The three boundary curves connecting these three vertices must be convex, in the sense that any line segment connecting two points on the same boundary curve must lie entirely outside or on the boundary of the pseudotriangle. Thus, the pseudotriangle is the region between the convex hulls of these three curves, and more generally any three mutually tangent convex sets form a pseudotriangle that lies between them. - -For algorithmic applications it is of particular interest to characterize pseudotriangles that are polygons. In a polygon, a vertex is convex if it spans an interior angle of less than π, and concave otherwise (in particular, we consider an angle of exactly π to be concave). Any polygon must have at least three convex angles, because the total exterior angle of a polygon is 2π, the convex angles contribute less than π each to this total, and the concave angles contribute zero or negative amounts. A polygonal pseudotriangle is a polygon that has exactly three convex vertices. In particular, any triangle, and any nonconvex quadrilateral, is a pseudotriangle. - -The convex hull of any pseudotriangle is a triangle. The curves along the pseudotriangle boundary between each pair of convex vertices either lie within the triangle or coincide with one of its edges. - -A pseudotriangulation is a partition of a region of the plane into pseudotriangles. Any triangulation of a region of the plane is a pseudotriangulation. While any two triangulations of the same region must have the same numbers of edges and triangles, the same is not true of pseudotriangulations; for instance, if the region is itself an n-vertex polygonal pseudotriangle, then a pseudotriangulation of it may have as few as one pseudotriangle and n edges, or as many as n − 2 pseudotriangles and 2n − 3 edges. - -A minimal pseudotriangulation is a pseudotriangulation T such that no subgraph of T is a pseudotriangulation covering the same convex region of the plane. A minimal pseudotriangulation with n vertices must have at least 2n − 3 edges; if it has exactly 2n − 3 edges, it must be a pointed pseudotriangulation, but there exist minimal pseudotriangulations with 3n − O(1) edges. - -Agarwal et al. 
(2002) describe data structures for maintaining pseudotriangulations of moving points or moving polygons. They show that using pseudotriangulations in place of triangulations allows their algorithms to maintain these structures with relatively few combinatorial changes as the inputs move, and they use these dynamic pseudotriangulations to perform collision detection among the moving objects. - -Gudmundsson et al. (2004) consider the problem of finding a pseudotriangulation of a point set or polygon with minimum total edge length, and provide approximation algorithms for this problem. - -A pointed pseudotriangulation can be defined as a finite non-crossing collection of line segments, such that at each vertex the incident line segments span an angle of at most π, and such that no line segments can be added between any two existing vertices while preserving this property. It is not hard to see that a pointed pseudotriangulation is a pseudotriangulation of its convex hull: all convex hull edges may be added while preserving the angle-spanning property, and all interior faces must be pseudotriangles, else a bitangent line segment could be added between two vertices of the face. - -A pointed pseudotriangulation with v vertices must have exactly 2v − 3 edges. This follows by a simple double counting argument involving the Euler characteristic: as each face but the outer one is a pseudotriangle, with three convex angles, the pseudotriangulation must have 3f − 3 convex angles between adjacent edges. Each edge is the clockwise edge for two angles, so there are a total of 2e angles, of which all but v are convex. Thus, 3f − 3 = 2e − v. Combining this with the Euler equation f − e + v = 2 and solving the resulting simultaneous linear equations gives e = 2v − 3. The same argument also shows that f = v − 1 (including the convex hull as one of the faces), so the pseudotriangulation must have exactly v − 2 pseudotriangles. - -Similarly, since any k-vertex subgraph of a pointed pseudotriangulation can be completed to form a pointed pseudotriangulation of its vertices, the subgraph must have at most 2k − 3 edges. Thus, pointed pseudotriangulations satisfy the conditions defining Laman graphs: they have exactly 2v − 3 edges, and their k-vertex subgraphs have at most 2k − 3 edges. Laman graphs, and therefore also pointed pseudotriangulations, are minimally rigid graphs in two dimensions. Every planar Laman graph can be drawn as a pointed pseudotriangulation, although not every planar drawing of a planar Laman graph is a pseudotriangulation. - -Another way of finding a pointed pseudotriangulation is to shell a point set; that is, to remove convex hull vertices one by one until all points have been removed. The family of sequences of removals that can be formed in this way is the shelling antimatroid of the point set, and the set of edges of convex hulls of the sequence of point sets formed by this removal process forms a pseudotriangulation. However, not all pointed pseudotriangulations can be formed in this way. - -Aichholzer et al. (2004) show that a set of n points, h of which belong to the convex hull of the set, must have at least $C_{h-2} \cdot 3^{n-h}$ different pointed pseudotriangulations, where $C_i$ denotes the $i$th Catalan number. As a consequence, they show that the point sets with the fewest pointed pseudotriangulations are the vertex sets of convex polygons. Aichholzer et al. (2006) investigate point sets with large numbers of pointed pseudotriangulations.
Computational geometry researchers have also provided algorithms for listing all pointed pseudotriangulations of a point set in a small amount of time per pseudotriangulation. diff --git a/wiki/wikipedia/2558.txt b/wiki/wikipedia/2558.txt deleted file mode 100644 index 6caef3357829e280592d175ed16601628bd3826a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2558.txt +++ /dev/null @@ -1,170 +0,0 @@ -In vector calculus and differential geometry the generalized Stokes theorem (sometimes with apostrophe as Stokes' theorem or Stokes's theorem), also called the Stokes–Cartan theorem, is a statement about the integration of differential forms on manifolds, which both simplifies and generalizes several theorems from vector calculus. It generalizes Isaac Newton's fundamental theorem of calculus and includes as a special case the classical theorem relating a line integral around the boundary curve of a surface in three-dimensional space to a surface integral over that surface. - -Stokes' theorem says that the integral of a differential form ω over the boundary of some orientable manifold Ω is equal to the integral of its exterior derivative dω over the whole of Ω, i.e., -$$ -\int_{\partial \Omega}\omega=\int_\Omega d\omega. -$$ - -Stokes' theorem was formulated in its modern form by Élie Cartan in 1945, following earlier work on the generalization of the theorems of vector calculus by Vito Volterra, Édouard Goursat, and Henri Poincaré. - -This modern form of Stokes' theorem is a vast generalization of a classical result that Lord Kelvin communicated to George Stokes in a letter dated July 2, 1850. Stokes set the theorem as a question on the 1854 Smith's Prize exam, which led to the result bearing his name. It was first published by Hermann Hankel in 1861. In this classical case, if Γ is the boundary curve of a smooth oriented surface S in R3 and F is a smooth vector field on R3, then: -$$ -\oint_\Gamma \mathbf{F} \cdot d{\mathbf{\Gamma}} = \iint_S \nabla\times\mathbf{F} \cdot d\mathbf{S} -$$ - -This classical statement is a special case of the general formulation stated above after making an identification of the vector field with a 1-form and its curl with a 2-form through - -\begin{pmatrix} - -F_x \\ - -F_y \\ - -F_z \\ - -\end{pmatrix}\cdot d\Gamma \to F_x dx + F_y dy + F_z dz - -\nabla \times \begin{pmatrix}F_x \\ F_y \\ F_z\end{pmatrix} \cdot d\mathbf{S} = - -\begin{pmatrix} - -\partial_y F_z - \partial_z F_y \\ - -\partial_z F_x - \partial_x F_z \\ - -\partial_x F_y - \partial_y F_x \\ - -\end{pmatrix} \cdot d\mathbf{S} \to -$$ -d(F_x dx + F_y dy + F_z dz) = (\partial_y F_z - \partial_z F_y)dy\wedge dz + (\partial_z F_x -\partial_x F_z)dz\wedge dx + (\partial_x F_y - \partial_y F_x)dx\wedge dy -$$. - -Other classical generalisations of the fundamental theorem of calculus, like the divergence theorem and Green's theorem, are special cases of the general formulation stated above after making a standard identification of vector fields with differential forms (different for each of the classical theorems). - -The fundamental theorem of calculus states that the integral of a function f over the interval [a, b] can be calculated by finding an antiderivative F of f: -$$ -\int_a^b f(x)dx = F(b) - F(a). -$$ - -Stokes' theorem is a vast generalization of this theorem in the following sense. - -* By the choice of F, dF/dx = f(x). In the parlance of differential forms, this is saying that f(x) dx is the exterior derivative of the 0-form, i.e. function, F: in other words, that dF = f dx. The general Stokes theorem applies to higher differential forms ω instead of just 0-forms such as F.
- -* A closed interval [a, b] is a simple example of a one-dimensional manifold with boundary. Its boundary is the set consisting of the two points a and b. Integrating f over the interval may be generalized to integrating forms on a higher-dimensional manifold. Two technical conditions are needed: the manifold has to be orientable, and the form has to be compactly supported in order to give a well-defined integral. - -* The two points a and b form the boundary of the closed interval. More generally, Stokes' theorem applies to oriented manifolds M with boundary. The boundary ∂M of M is itself a manifold and inherits a natural orientation from that of M. For example, the natural orientation of the interval gives an orientation of the two boundary points. Intuitively, a inherits the opposite orientation as b, as they are at opposite ends of the interval. So, "integrating" F over two boundary points a, b is taking the difference F(b) − F(a). - -In even simpler terms, one can consider the points as boundaries of curves, that is as 0-dimensional boundaries of 1-dimensional manifolds. So, just as one can find the value of an integral (f dx = dF) over a 1-dimensional manifold ([a, b]) by considering the anti-derivative (F) at the 0-dimensional boundaries ({a, b}), one can generalize the fundamental theorem of calculus, with a few additional caveats, to deal with the value of integrals (dω) over n-dimensional manifolds (Ω) by considering the antiderivative (ω) at the (n − 1)-dimensional boundaries (∂Ω) of the manifold. - -So the fundamental theorem reads: -$$ -\int_{[a, b]} f(x)dx = \int_{[a, b]} dF = \int_{\{a\}^- \cup \{b\}^+} F = F(b) - F(a). -$$ - -Let Ω be an oriented smooth manifold with boundary of dimension n and let α be a smooth n-differential form that is compactly supported on Ω. First, suppose that α is compactly supported in the domain of a single, oriented coordinate chart {U, φ}. In this case, we define the integral of α over Ω as - -\int_\Omega \alpha = \int_{\varphi(U)} (\varphi^{-1})^* \alpha, - -i.e., via the pullback of α to Rn. - -More generally, the integral of α over Ω is defined as follows: Let {ψi} be a partition of unity associated with a locally finite cover {Ui, φi} of (consistently oriented) coordinate charts, then define the integral - -\int_\Omega \alpha \equiv \sum_i \int_{U_i} \psi_i \alpha, - -where each term in the sum is evaluated by pulling back to Rn as described above. This quantity is well-defined; that is, it does not depend on the choice of the coordinate charts, nor the partition of unity. - -The generalized Stokes theorem reads: - -If $\omega$ is a smooth $(n-1)$-form with compact support on smooth $n$-dimensional manifold-with-boundary $\Omega$, $\partial\Omega$ denotes the boundary of $\Omega$ given the induced orientation, and $i:\partial \Omega\hookrightarrow \Omega$ is the inclusion map, then - -\int_\Omega d\omega = \int_{\partial \Omega} i^*\omega. - -Conventionally, $\int_{\partial\Omega} i^*\omega$ is abbreviated as $\int_{\partial\Omega} \omega$, since the pullback of a differential form by the inclusion map is simply its restriction to its domain: $i^*\omega=\omega|_{\partial\Omega}$. Here $d$ is the exterior derivative, which is defined using the manifold structure only. The right-hand side is sometimes written as $\oint_{\partial\Omega} \omega$ to stress the fact that the $(n-1)$-manifold $\partial\Omega$ has no boundary. 
(This fact is also an implication of Stokes' theorem, since for a given smooth $n$-dimensional manifold $\Omega$, application of the theorem twice gives $\int_{\partial(\partial \Omega)}\omega=\int_\Omega d(d\omega)=0$ for any $(n-2)$-form $\omega$, which implies that $\partial(\partial\Omega)=\emptyset$.) The right-hand side of the equation is often used to formulate integral laws; the left-hand side then leads to equivalent differential formulations (see below). - -The theorem is often used in situations where $\Omega$ is an embedded oriented submanifold of some bigger manifold, often $\mathbf{R}^k$, on which the form $\omega$ is defined. - -Let M be a smooth manifold. A (smooth) singular k-simplex in M is defined as a smooth map from the standard simplex in Rk to M. The group $C_k(M, \mathbf{Z})$ of singular k-chains on M is defined to be the free abelian group on the set of singular k-simplices in M. These groups, together with the boundary map, ∂, define a chain complex. The corresponding homology (resp. cohomology) group is isomorphic to the usual singular homology group $H_k(M, \mathbf{Z})$ (resp. the singular cohomology group $H^k(M, \mathbf{Z})$), defined using continuous rather than smooth simplices in M. - -On the other hand, the differential forms, with exterior derivative, d, as the connecting map, form a cochain complex, which defines the de Rham cohomology groups $H^k_{\mathrm{dR}}(M, \mathbf{R})$. - -Differential k-forms can be integrated over a k-simplex in a natural way, by pulling back to Rk. Extending by linearity allows one to integrate over chains. This gives a linear map from the space of k-forms to the kth group of singular cochains, $C^k(M, \mathbf{Z})$, the linear functionals on $C_k(M, \mathbf{Z})$. In other words, a k-form ω defines a functional - -I(\omega)(c) = \oint_c \omega. - -on the k-chains. Stokes' theorem says that this is a chain map from de Rham cohomology to singular cohomology with real coefficients; the exterior derivative, d, behaves like the dual of ∂ on forms. This gives a homomorphism from de Rham cohomology to singular cohomology. On the level of forms, this means: - -#closed forms, i.e., dω = 0, have zero integral over boundaries, i.e. over manifolds that can be written as $\partial \textstyle\sum_c M_c$, and - -#exact forms, i.e., ω = dσ, have zero integral over cycles, i.e. over manifolds whose boundaries sum up to the empty set: $\textstyle\sum_c \partial M_c = \emptyset$. - -De Rham's theorem shows that this homomorphism is in fact an isomorphism. So the converse to 1 and 2 above holds true. In other words, if $\{c_i\}$ are cycles generating the kth homology group, then for any corresponding real numbers, $\{a_i\}$, there exists a closed form, ω, such that - -\oint_{c_i} \omega = a_i, - -and this form is unique up to exact forms. - -Stokes' theorem on smooth manifolds can be derived from Stokes' theorem for chains in smooth manifolds, and vice versa. Formally stated, the latter reads: - -If c is a smooth k-chain in a smooth manifold M, and ω is a smooth (k − 1)-form on M, then - -\int_{\partial c}\omega = \int_c d\omega. - -To simplify these topological arguments, it is worthwhile to examine the underlying principle by considering an example for d = 2 dimensions. The essential idea is that, in an oriented tiling of a manifold, the interior paths are traversed in opposite directions; their contributions to the path integral thus cancel each other pairwise. As a consequence, only the contribution from the boundary remains.
It thus suffices to prove Stokes' theorem for sufficiently fine tilings (or, equivalently, simplices), which usually is not difficult. - -The formulation above, in which Ω is a smooth manifold with boundary, does not suffice in many applications. For example, if the domain of integration is defined as the plane region between two x-coordinates and the graphs of two functions, it will often happen that the domain has corners. In such a case, the corner points mean that Ω is not a smooth manifold with boundary, and so the statement of Stokes' theorem given above does not apply. Nevertheless, it is possible to check that the conclusion of Stokes' theorem is still true. This is because Ω and its boundary are well-behaved away from a small set of points (a measure zero set). - -A version of Stokes' theorem that allows for roughness was proved by Whitney. Assume that D is a connected bounded open subset of Rn. Call D a standard domain if it satisfies the following property: There exists a subset P of ∂D, open in ∂D, whose complement in ∂D has Hausdorff (n − 1)-measure zero; and such that every point of P has a generalized normal vector. This is a vector v(x) such that, if a coordinate system is chosen so that v(x) is the first basis vector, then, in an open neighborhood around x, there exists a smooth function f(x2, ..., xn) such that P is the graph { x1 = f(x2, ..., xn) } and D is the region {x1 : x1 < f(x2, ..., xn) } . Whitney remarks that the boundary of a standard domain is the union of a set of zero Hausdorff (n − 1)-measure and a finite or countable union of smooth (n − 1)-manifolds, each of which has the domain on only one side. He then proves that if D is a standard domain in Rn, ω is an (n − 1)-form which is defined, continuous, and bounded on D ∪ P, smooth on D, integrable on P, and such that dω is integrable on D, then Stokes' theorem holds, that is, -$$ -\int_P \omega = \int_D d\omega. -$$ - -The study of measure-theoretic properties of rough sets leads to geometric measure theory. Even more general versions of Stokes' theorem have been proved by Federer and by Harrison. - -The general form of the Stokes theorem using differential forms is more powerful and easier to use than the special cases. The traditional versions can be formulated using Cartesian coordinates without the machinery of differential geometry, and thus are more accessible. Further, they are older and their names are more familiar as a result. The traditional forms are often considered more convenient by practicing scientists and engineers but the non-naturalness of the traditional formulation becomes apparent when using other coordinate systems, even familiar ones like spherical or cylindrical coordinates. There is potential for confusion in the way names are applied, and the use of dual formulations. - -This is a (dualized) (1 + 1)-dimensional case, for a 1-form (dualized because it is a statement about vector fields). This special case is often just referred to as Stokes' theorem in many introductory university vector calculus courses and is used in physics and engineering. It is also sometimes known as the curl theorem. - -The classical Kelvin–Stokes theorem relates the surface integral of the curl of a vector field over a surface Σ in Euclidean three-space to the line integral of the vector field over its boundary. It is a special case of the general Stokes theorem (with n = 2) once we identify a vector field with a 1-form using the metric on Euclidean 3-space. 
The curve of the line integral, ∂Σ, must have positive orientation, meaning that ∂Σ points counterclockwise when the surface normal, n, points toward the viewer. - -One consequence of the Kelvin–Stokes theorem is that the field lines of a vector field with zero curl cannot be closed contours. The formula can be rewritten as follows: - -Suppose F = (P(x,y,z), Q(x,y,z), R(x,y,z)) is defined in a region with smooth surface Σ and has continuous first-order partial derivatives. Then -$$ -\begin{align} & \iint_\Sigma \Bigg(\left(\frac{\partial R}{\partial y}-\frac{\partial Q}{\partial z} \right)dydz +\left(\frac{\partial P}{\partial z}-\frac{\partial R}{\partial x} \right) dzdx +\left (\frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y}\right)dxdy\Bigg) \\[4pt] = {} & \oint_{\partial\Sigma} \Big(Pdx+Qdy+Rdz\Big), \end{align} -$$
where P, Q, and R are the components of F, and ∂Σ is the boundary of the region Σ. - -Green's theorem is immediately recognizable as the special case given by the third integrand on both sides of the equation above. - -Two of the four Maxwell equations involve curls of 3-D vector fields, and their differential and integral forms are related by the Kelvin–Stokes theorem. Caution must be taken to avoid cases with moving boundaries: the partial time derivatives are intended to exclude such cases. If moving boundaries are included, interchange of integration and differentiation introduces terms related to boundary motion not included in the results below (see Differentiation under the integral sign): - -The above listed subset of Maxwell's equations is valid for electromagnetic fields expressed in SI units. In other systems of units, such as CGS or Gaussian units, the scaling factors for the terms differ. For example, in Gaussian units, Faraday's law of induction and Ampère's law take the forms: -$$ -\begin{align} \nabla \times \mathbf{E} &= -\frac{1}{c} \frac{\partial \mathbf{B}} {\partial t}, \\ \nabla \times \mathbf{H} &= \frac{1}{c} \frac{\partial \mathbf{D}} {\partial t} + \frac{4\pi}{c} \mathbf{J}, \end{align} -$$ - -respectively, where c is the speed of light in vacuum. - -Likewise, the divergence theorem -$$ -\int_\mathrm{Vol} \nabla \cdot \mathbf{F} \, d\mathrm{Vol} = \oint_{\partial \operatorname{Vol}} \mathbf{F} \cdot d\boldsymbol{\Sigma} -$$ - -is a special case if we identify a vector field with the (n − 1)-form obtained by contracting the vector field with the Euclidean volume form. An application of this is the case F = fc where c is an arbitrary constant vector. Working out the divergence of the product gives -$$ -\mathbf c \cdot \int_\mathrm{Vol} \nabla f \, d\mathrm{Vol} = \mathbf c \cdot \oint_{\partial \mathrm{Vol}} f \, d\boldsymbol{\Sigma}. -$$ - -Since this holds for all c we find -$$ -\int_\mathrm{Vol} \nabla f \, d\mathrm{Vol} = \oint_{\partial \mathrm{Vol}} f \, d\boldsymbol{\Sigma}. -$$ diff --git a/wiki/wikipedia/2559.txt b/wiki/wikipedia/2559.txt deleted file mode 100644 index 93785195fd8e358c77ce4ff4b93c25067a24a9ba..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2559.txt +++ /dev/null @@ -1,15 +0,0 @@ -Zarafa was an open-source groupware application that originated in the city of Delft in the Netherlands. The company that developed Zarafa, previously known as Connectux, is also called Zarafa. The Zarafa groupware provided email storage on the server side and offered its own Ajax-based mail client called WebAccess and an HTML5-based WebApp. Advanced features were available in commercially supported versions ("Small Business", "Professional" and "Enterprise" (different feature levels)). Zarafa has been superseded by Kopano. - -Zarafa was originally designed to integrate with Microsoft Office Outlook and was intended as an alternative to the Microsoft Exchange Server. Connectivity with Microsoft Outlook was provided via a proprietary client-side plugin. Support for the plugin was discontinued after Q1/2016, though Outlook can from then on use its own ActiveSync implementation instead. The WebApp (and WebAccess) has the same "look-and-feel" as the Outlook OWA. The software handles a personal address-book, calendar, notes and tasks, "Public Folders", a shared calendar (inviting internal and external users, resource management), exchange of files, and video chat.
The open source edition does not support any MAPI-based Outlook users, while the community edition supports three Outlook users. - -All server-side components and the WebApp/WebAccess of Zarafa are published under the Affero General Public License (AGPL), based on the GNU General Public License, version 2 (GPLv2). Introducing and maintaining a dual-licensing strategy, Zarafa released the full core software, that is, the server-side software stack, under the GNU Affero General Public License, version 3 (AGPLv3) on 18 September 2008. - -Zarafa provides its groupware functionality by connecting the Linux-based server with Outlook clients using MAPI. The communication between server and client is based upon SOAP technology. The connection to Outlook clients can be secured using TLS/SSL, either directly between the Zarafa server program and the client, or via an HTTPS proxy. - -All data is generally stored in a MySQL database, although attachments can be saved on the filesystem. The Zarafa server can get its user information from LDAP, Active Directory, Unix user accounts or the MySQL database. - -The webmail is based on HTML5 (WebApp) and AJAX technology (WebAccess), with a PHP backend using a MAPI PHP extension. - -Other clients can connect via POP3, IMAP and iCalendar/CalDAV. - -Zarafa initiated a project called Z-push in October 2007. It supports Exchange ActiveSync compatible devices (Symbian, Pocket PC, iPhone (firmware 2.0 and higher), Android (version 2.1 and higher), Nokia (mail4Exchange)) implementing the ActiveSync protocol and using the Incremental Change System (ICS) provided by the PHP-MAPI extension. diff --git a/wiki/wikipedia/256.txt b/wiki/wikipedia/256.txt deleted file mode 100644 index 36bcf376aef8e83a0159caae2b852ac273bbc115..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/256.txt +++ /dev/null @@ -1,33 +0,0 @@ -Broadwell is the fifth generation of the Intel Core Processor. It is Intel's codename for the 14 nanometer die shrink of its Haswell microarchitecture. It is a "tick" in Intel's tick–tock principle, as the next step in semiconductor fabrication. Like some of the previous tick–tock iterations, Broadwell did not completely replace the full range of CPUs from the previous microarchitecture (Haswell), as there were no low-end desktop CPUs based on Broadwell. - -Some of the processors based on the Broadwell microarchitecture are marketed as "5th-generation Core" i3, i5 and i7 processors. This moniker is, however, not used for marketing of the Broadwell-based Celeron, Pentium or Xeon chips. This microarchitecture also introduced the Core M processor branding. - -Broadwell is the last Intel platform on which Windows 7 is supported by either Intel or Microsoft; however, third-party hardware vendors have offered limited Windows 7 support on more recent platforms. - -Broadwell's H and C variants are used in conjunction with Intel 9 Series chipsets (Z97, H97 and HM97), in addition to retaining backward compatibility with some of the Intel 8 Series chipsets. - -Broadwell has been launched in three major variants: - -* BGA package: - -** Broadwell-Y: system on a chip (SoC); 4.5 W and 3.5 W thermal design power (TDP) classes, for tablets and certain ultrabook-class implementations. A GT2 GPU was used, while the maximum supported memory is 8 GB of LPDDR3-1600. - -** Broadwell-U: SoC; two TDP classes: 15 W for 2+2 and 2+3 configurations (two cores with a GT2 or GT3 GPU), as well as 28 W for 2+3 configurations.
Designed to be used on motherboards with the PCH-LP chipset for Intel's ultrabook and NUC platforms. The maximum supported memory is 16 GB of DDR3 or LPDDR3, with DDR3-1600 and LPDDR3-1866 as the maximum memory speeds. The 2+2 configuration was scheduled for Q4 2014, while the 2+3 was estimated for Q1 2015. - -* PREFETCHW instruction - -Intel was expected to release 17 Broadwell U-series microprocessors at CES 2015. Also, according to a leak posted on vr-zone, Broadwell-E chips would be available in 2016. - -On August 11, 2014, Intel formally unveiled its 14 nm manufacturing process, and indicated that mobile variants of the processor would be known as Core M products. Additionally, Core M products were announced to be shipping during the end of 2014, with desktop variants shipping shortly after. - -With Broadwell, Intel focused mainly on laptops, miniature desktops, and all-in-one systems. This left traditional desktop users with no new socketed CPU options beyond fourth-generation Haswell, which first arrived in 2013. Even though the company finally introduced two Broadwell desktop chips in the summer of 2015, it launched its high-end sixth-generation Skylake CPUs very shortly thereafter. In September 2015, Kirk Skaugen, senior vice president and general manager of Intel's Client Computing Group, admitted that skipping desktops with Broadwell was a poor decision. Between the end-of-life for Windows XP in 2014 and the lack of new desktop chips, Intel had not given desktop PC users any good reasons to upgrade in 2015. - -On September 5, 2014, Intel launched the first three Broadwell-based processors that belong to the low-TDP Core M family: Core M 5Y10, Core M 5Y10a and Core M 5Y70. - -On October 9, 2014, the first laptop with a Broadwell Intel Core M 5Y70 CPU, the Lenovo Yoga 3 Pro, was launched. - -On October 31, 2014, four more Broadwell-based CPUs were launched belonging to the Core M family, increasing the number of launched Broadwell CPUs to seven. - -On January 5, 2015, 17 additional Broadwell laptop CPUs were launched for the Celeron, Pentium and Core i3, i5 and i7 series. - -On March 31, 2016, Intel officially launched 14 nm Broadwell-EP Xeon E5 V4 CPUs. - -On May 30, 2016, Intel officially launched the 14 nm Broadwell-E Core i7 69xx/68xx processor family. diff --git a/wiki/wikipedia/2560.txt b/wiki/wikipedia/2560.txt deleted file mode 100644 index 175c66161a71ca05a7aa9fbb8c041e84a9932bdb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2560.txt +++ /dev/null @@ -1,55 +0,0 @@ -Flow-shop scheduling is an optimization problem in computer science and operations research. It is a variant of optimal job scheduling. In a general job-scheduling problem, we are given n jobs J1, J2, ..., Jn of varying processing times, which need to be scheduled on m machines with varying processing power, while trying to minimize the makespan – the total length of the schedule (that is, when all the jobs have finished processing). In the specific variant known as flow-shop scheduling, each job contains exactly m operations. The i-th operation of the job must be executed on the i-th machine. No machine can perform more than one operation simultaneously. For each operation of each job, execution time is specified. - -Flow-shop scheduling is a special case of job-shop scheduling where there is a strict order of all operations to be performed on all jobs. Flow-shop scheduling may apply to production facilities as well as to computing designs.
A special type of flow-shop scheduling problem is the permutation flow-shop scheduling problem, in which the processing order of the jobs on the resources is the same for each subsequent step of processing. - -In the standard three-field notation for optimal-job-scheduling problems, the flow-shop variant is denoted by F in the first field. For example, the problem denoted by "F3|$p_{ij}=1$|$C_\max$" is a 3-machine flow-shop problem with unit processing times, where the goal is to minimize the maximum completion time. - -There are m machines and n jobs. Each job contains exactly m operations. The i-th operation of the job must be executed on the i-th machine. No machine can perform more than one operation simultaneously. For each operation of each job, execution time is specified. - -Operations within one job must be performed in the specified order. The first operation gets executed on the first machine, then (as the first operation is finished) the second operation on the second machine, and so on until the m-th operation. Jobs can be executed in any order, however. The problem definition implies that this job order is exactly the same for each machine. The problem is to determine the optimal such arrangement, i.e. the one with the shortest possible total job execution makespan. - -The sequencing problem can be stated as determining a sequence S such that one or several sequencing objectives are optimized. - -1. (Average) flow time, $\sum (w_i) F_i $ - -2. Makespan, Cmax - -3. (Average) tardiness, $\sum (w_i) T_i $ - -and so on. A detailed discussion of performance measurement can be found in Malakooti (2013). - -As presented by Garey et al. (1976), most extensions of the flow-shop scheduling problem are NP-hard, and few of them can be solved optimally in O(n log n); for example, F2|prmu|Cmax can be solved optimally by using Johnson's rule. - -Taillard provides substantial benchmark problems for scheduling flow shops, open shops, and job shops. - -The proposed methods to solve flow-shop scheduling problems can be classified as exact algorithms, such as branch and bound, and heuristic algorithms, such as genetic algorithms. - -F2|prmu|Cmax can be solved optimally by using Johnson's rule, but for the general case there is no known efficient algorithm that guarantees the optimality of the solution. - -The flow shop contains n jobs simultaneously available at time zero and to be processed by two machines arranged in series with unlimited storage in between them. The processing times of all jobs are known with certainty. It is required to schedule the n jobs on the machines so as to minimize the makespan. Johnson's rule for scheduling jobs in a two-machine flow shop is given below. - -In an optimal schedule, job i precedes job j if min{p1i, p2j} < min{p1j, p2i}, where p1i is the processing time of job i on machine 1 and p2i is the processing time of job i on machine 2; similarly, p1j and p2j are the processing times of job j on machine 1 and machine 2 respectively. - -For Johnson's algorithm: - -Let p1j be the processing time of job j on machine 1 - -and p2j the processing time of job j on machine 2. - -Johnson's algorithm: - -1. Form set1 containing all the jobs with p1j < p2j. - -2. Form set2 containing all the jobs with p1j > p2j; the jobs with p1j = p2j may be put in either set. - -3. Form the sequence as follows (a minimal implementation sketch is given below): - -(i) The jobs in set1 go first in the sequence, and they go in increasing order of p1j (SPT); - -(ii) The jobs in set2 follow, in decreasing order of p2j (LPT). Ties are broken arbitrarily.
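Below is a minimal, hypothetical Python sketch of the rule as just stated (function names and data layout are illustrative, not from the original article); it builds the SPT(1)–LPT(2) sequence and evaluates its makespan:

```python
# Johnson's rule for the two-machine flow shop (F2||Cmax).
# jobs is a list of (p1, p2) processing-time pairs, indexed by job id.
def johnson_schedule(jobs):
    set1 = [j for j, (p1, p2) in enumerate(jobs) if p1 <= p2]  # ties may go in either set
    set2 = [j for j, (p1, p2) in enumerate(jobs) if p1 > p2]
    set1.sort(key=lambda j: jobs[j][0])                 # SPT on machine 1
    set2.sort(key=lambda j: jobs[j][1], reverse=True)   # LPT on machine 2
    return set1 + set2

def makespan(jobs, order):
    t1 = t2 = 0
    for j in order:
        t1 += jobs[j][0]               # machine 1 finishes job j at t1
        t2 = max(t2, t1) + jobs[j][1]  # machine 2 starts once both are free
    return t2

jobs = [(3, 2), (5, 1), (1, 4), (6, 6), (2, 3)]
order = johnson_schedule(jobs)
print(order, makespan(jobs, order))
```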
- -This type of schedule is referred to as an SPT(1)–LPT(2) schedule. - -A detailed discussion of the available solution methods is provided by Malakooti (2013). diff --git a/wiki/wikipedia/2561.txt b/wiki/wikipedia/2561.txt deleted file mode 100644 index 67c3623baf3ca6756c09e2905c8f4dfb42830061..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2561.txt +++ /dev/null @@ -1,59 +0,0 @@ -In graph theory, Edmonds' algorithm or Chu–Liu/Edmonds' algorithm is an algorithm for finding a spanning arborescence of minimum weight (sometimes called an optimum branching). - -It is the directed analog of the minimum spanning tree problem. - -The algorithm was proposed independently first by Yoeng-Jin Chu and Tseng-Hong Liu (1965) and then by Jack Edmonds (1967). - -The algorithm takes as input a directed graph $D = \langle V, E \rangle$ where $V$ is the set of nodes and $E$ is the set of directed edges, a distinguished vertex $r \in V$ called the root, and a real-valued weight $w(e)$ for each edge $e \in E$. - -It returns a spanning arborescence $A$ rooted at $r$ of minimum weight, where the weight of an arborescence is defined to be the sum of its edge weights, $w(A) = \sum_{e \in A}{w(e)}$. - -The algorithm has a recursive description. - -Let $f(D, r, w)$ denote the function which returns a spanning arborescence rooted at $r$ of minimum weight. - -We first remove any edge from $E$ whose destination is $r$. - -We may also replace any set of parallel edges (edges between the same pair of vertices in the same direction) by a single edge with weight equal to the minimum of the weights of these parallel edges. - -Now, for each node $v$ other than the root, find the edge incoming to $v$ of lowest weight (with ties broken arbitrarily). - -Denote the source of this edge by $\pi(v)$. - -If the set of edges $P = \{(\pi(v),v) \mid v \in V \setminus \{ r \} \}$ does not contain any cycles, then $f(D,r,w) = P$. - -Otherwise, $P$ contains at least one cycle. - -Arbitrarily choose one of these cycles and call it $C$. - -We now define a new weighted directed graph $D^\prime = \langle V^\prime, E^\prime \rangle$ in which the cycle $C$ is "contracted" into one node as follows: - -The nodes of $V^\prime$ are the nodes of $V$ not in $C$ plus a new node denoted $v_C$. - -* If $(u,v)$ is an edge in $E$ with $u\notin C$ and $v\in C$ (an edge coming into the cycle), then include in $E^\prime$ a new edge $e = (u, v_C)$, and define $w^\prime(e) = w(u,v) - w(\pi(v),v)$. - -* If $(u,v)$ is an edge in $E$ with $u\in C$ and $v\notin C$ (an edge going away from the cycle), then include in $E^\prime$ a new edge $e = (v_C, v)$, and define $w^\prime(e) = w(u,v) $. - -* If $(u,v)$ is an edge in $E$ with $u\notin C$ and $v\notin C$ (an edge unrelated to the cycle), then include in $E^\prime$ a new edge $e = (u, v)$, and define $w^\prime(e) = w(u,v) $. - -For each edge in $E^\prime$, we remember which edge in $E$ it corresponds to. - -Now find a minimum spanning arborescence $A^\prime$ of $D^\prime$ using a call to $f(D^\prime, r,w^\prime)$. - -Since $A^\prime$ is a spanning arborescence, each vertex has exactly one incoming edge. - -Let $(u, v_C)$ be the unique incoming edge to $v_C$ in $A^\prime$. - -This edge corresponds to an edge $(u,v) \in E$ with $v \in C$. - -Remove the edge $(\pi(v),v)$ from $C$, breaking the cycle. - -Mark each remaining edge in $C$. - -For each edge in $A^\prime$, mark its corresponding edge in $E$. - -Now we define $f(D, r, w)$ to be the set of marked edges, which form a minimum spanning arborescence.
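The recursive procedure above translates directly into code. The following is a compact, hypothetical Python sketch (names are illustrative; it assumes every node is reachable from the root and that parallel edges have already been merged to their minimum):

```python
# Chu-Liu/Edmonds: edges maps (u, v) -> weight; returns a set of edges
# forming a minimum-weight spanning arborescence rooted at root.
def min_arborescence(nodes, edges, root):
    edges = {(u, v): w for (u, v), w in edges.items() if v != root}
    pi = {}                                  # cheapest incoming edge source
    for (u, v), w in edges.items():
        if v not in pi or w < edges[(pi[v], v)]:
            pi[v] = u
    cycle = None                             # look for a cycle in P
    for start in nodes:
        path, v = [], start
        while v in pi and v not in path:
            path.append(v)
            v = pi[v]
        if v in path:
            cycle = set(path[path.index(v):])
            break
    if cycle is None:                        # P is already an arborescence
        return {(pi[v], v) for v in pi}
    vc = ("contracted", id(cycle))           # fresh node representing the cycle
    new_nodes = [n for n in nodes if n not in cycle] + [vc]
    new_edges, origin = {}, {}
    for (u, v), w in edges.items():
        if u not in cycle and v in cycle:
            e, nw = (u, vc), w - edges[(pi[v], v)]   # edge entering the cycle
        elif u in cycle and v not in cycle:
            e, nw = (vc, v), w                       # edge leaving the cycle
        elif u not in cycle and v not in cycle:
            e, nw = (u, v), w                        # edge unrelated to the cycle
        else:
            continue                                 # edge inside the cycle
        if e not in new_edges or nw < new_edges[e]:
            new_edges[e], origin[e] = nw, (u, v)
    A = min_arborescence(new_nodes, new_edges, root)
    result = {origin[e] for e in A}          # translate edges back to E
    u, v = next((u, v) for (u, v) in result if v in cycle)
    result |= {(pi[x], x) for x in cycle if x != v}  # cycle minus (pi(v), v)
    return result
```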
- -Observe that $f(D, r, w)$ is defined in terms of $f(D^\prime, r, w^\prime)$, with $D^\prime$ having strictly fewer vertices than $D$. Finding $f(D, r, w)$ for a single-vertex graph is trivial (it is just $D$ itself), so the recursive algorithm is guaranteed to terminate. - -The running time of this algorithm is $O(EV)$. A faster implementation of the algorithm due to Robert Tarjan runs in time $O(E \log V)$ for sparse graphs and $O(V^2)$ for dense graphs. This is as fast as Prim's algorithm for an undirected minimum spanning tree. In 1986, Gabow, Galil, Spencer, and Tarjan produced a faster implementation, with running time $O(E + V \log V)$. diff --git a/wiki/wikipedia/2562.txt b/wiki/wikipedia/2562.txt deleted file mode 100644 index 8c5c9cd4b248116a0faaf8ef609480c885d48c34..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2562.txt +++ /dev/null @@ -1,173 +0,0 @@ -In mathematics, quaternionic analysis is the study of functions with quaternions as the domain and/or range. Such functions can be called functions of a quaternion variable, just as functions of a real variable or of a complex variable are so called. - -As with complex and real analysis, it is possible to study the concepts of analyticity, holomorphy, harmonicity and conformality in the context of quaternions. Unlike the complex numbers and like the reals, the four notions do not coincide. - -The projections of a quaternion onto its scalar part or onto its vector part, as well as the modulus and versor functions, are examples that are basic to understanding quaternion structure. - -An important example of a function of a quaternion variable is -$$ -f_1(q) = u q u^{-1} -$$ - -which rotates the vector part of q by twice the angle represented by u. - -The quaternion multiplicative inverse $f_2(q) = q^{-1}$ is another fundamental function, but as with other number systems, $f_2(0)$ and related problems are generally excluded due to the nature of dividing by zero. - -Affine transformations of quaternions have the form -$$ -f_3(q) = aq + b, \quad a, b, q \in \mathbb{H}. -$$ - -Linear fractional transformations of quaternions can be represented by elements of the matrix ring $M_2(\mathbb{H})$ operating on the projective line over $\mathbb{H}$. For instance, the mappings $q \mapsto u q v,$ where $u$ and $v$ are fixed versors, serve to produce the motions of elliptic space. - -Quaternion variable theory differs in some respects from complex variable theory. For example: the complex conjugate mapping of the complex plane is a central tool but requires the introduction of a non-arithmetic, non-analytic operation. Indeed, conjugation changes the orientation of plane figures, something that arithmetic functions do not change. - -In contrast to the complex conjugate, the quaternion conjugation can be expressed arithmetically, as $f_4(q) = - \tfrac{1}{2} (q + iqi + jqj + kqk)$. - -This equation can be proven, starting with the basis {1, i, j, k}: - -f_4(1) = -\tfrac{1}{2}(1 - 1 - 1 - 1) = 1, \quad - -f_4(i) = -\tfrac{1}{2}(i - i + i + i) = -i, \quad - -f_4(j) = -j, \quad - -f_4(k) = -k . - -Consequently, since $f_4$ is linear, -$$ -f_4(q) = f_4(w + x i + y j + z k) = w f_4(1) + x f_4(i) + y f_4(j) + z f_4(k) = w - x i - y j - z k = q^*. -$$ - -The success of complex analysis in providing a rich family of holomorphic functions for scientific work has engaged some workers in efforts to extend the planar theory, based on complex numbers, to a 4-space study with functions of a quaternion variable.
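The conjugation identity $f_4(q) = q^*$ derived above is easy to spot-check numerically. The following hypothetical Python sketch (with quaternions represented as plain 4-tuples; all names illustrative) does so for a sample quaternion:

```python
# Check that f4(q) = -(q + iqi + jqj + kqk)/2 equals the conjugate q*.
# A quaternion w + xi + yj + zk is represented as the tuple (w, x, y, z).
def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qadd(*qs):
    return tuple(sum(cs) for cs in zip(*qs))

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
q = (1.0, 2.0, 3.0, 4.0)                         # q = 1 + 2i + 3j + 4k

s = qadd(q, qmul(qmul(i, q), i), qmul(qmul(j, q), j), qmul(qmul(k, q), k))
f4 = tuple(-0.5*c for c in s)
print(f4)    # (1.0, -2.0, -3.0, -4.0), i.e. q* = 1 - 2i - 3j - 4k
```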
Efforts of this kind were summarized by Deavours, who recalls a 1935 issue of Commentarii Mathematici Helvetici in which an alternative theory of "regular functions" was initiated by Fueter through the idea of Morera's theorem: a quaternion function $F$ is "left regular at $q$" when the integral of $F$ vanishes over any sufficiently small hypersurface containing $q$. Then the analogue of Liouville's theorem holds: the only regular quaternion function with bounded norm in $\mathbb{E}^4$ is a constant. One approach to construct regular functions is to use power series with real coefficients. Deavours also gives analogues for the Poisson integral, the Cauchy integral formula, and the presentation of Maxwell's equations of electromagnetism with quaternion functions. - -Though $\mathbb{H}$ appears as a union of complex planes, the following proposition shows that extending complex functions requires special care: - -Let $f_5(z) = u(x,y) + i v(x,y)$ be a function of a complex variable, $z = x + i y$. Suppose also that $u$ is an even function of $y$ and that $v$ is an odd function of $y$. Then $f_5(q) = u(x,y) + rv(x,y)$ is an extension of $f_5$ to a quaternion variable $q = x + yr$ where $r^2 = -1$ and $r \in \mathbb{H}$. - -Then, let $r^*$ represent the conjugate of $r$, so that $q = x - yr^*$. The extension to $\mathbb{H}$ will be complete when it is shown that $f_5(q) = f_5(x - yr^*)$. Indeed, by hypothesis -$$ -u(x,y) = u(x,-y), \quad v(x,y) = -v(x,-y) \quad -$$ one obtains -$$ -f_5(x - y r^*) = u(x,-y) + r^* v(x,-y) = u(x,y) + r v(x,y) = f_5(q). -$$ - -In the following, colons and square brackets are used to denote homogeneous vectors. - -The rotation about axis r is a classical application of quaternions to space mapping. - -In terms of a homography, the rotation is expressed -$$ -[q:1] \begin{pmatrix}u & 0\\0 & u \end{pmatrix} = [qu : u] \thicksim [u^{-1}qu : 1] , -$$ - -where $u = \exp(\theta r) = \cos \theta + r \sin \theta$ is a versor. If $p^* = -p$, then the translation $q \mapsto q + p$ is expressed by -$$ -[q : 1]\begin{pmatrix}1 & 0 \\ p & 1 \end{pmatrix} = [q + p : 1]. -$$ - -Rotation and translation xr along the axis of rotation is given by -$$ -[q : 1]\begin{pmatrix}u & 0 \\ uxr & u \end{pmatrix} = [qu + uxr : u] \thicksim [u^{-1}qu + xr : 1]. -$$ - -Such a mapping is called a screw displacement. In classical kinematics, Chasles' theorem states that any rigid body motion can be displayed as a screw displacement. Just as the representation of a Euclidean plane isometry as a rotation is a matter of complex number arithmetic, so Chasles' theorem, and the screw axis required, is a matter of quaternion arithmetic with homographies: Let s be a right versor, or square root of minus one, perpendicular to r, with t = rs. - -Consider the axis passing through s and parallel to r.
Rotation about it is expressed by the homography composition -$$ -\begin{pmatrix}1 & 0 \\ -s & 1 \end{pmatrix} \begin{pmatrix}u & 0 \\ 0 & u \end{pmatrix} \begin{pmatrix}1 & 0 \\ s & 1 \end{pmatrix} = \begin{pmatrix}u & 0 \\ z & u \end{pmatrix}, -$$ - -where $z = u s - s u = \sin \theta (rs - sr) = 2 t \sin \theta .$ - -Now in the (s,t)-plane the parameter θ traces out a circle $u^{-1} z = u^{-1}(2 t \sin \theta) = 2 \sin \theta ( t \cos \theta + s \sin \theta) $ in the half-plane $\lbrace wt + xs : x > 0 \rbrace .$ - -Any p in this half-plane lies on a ray from the origin through the circle $\lbrace u^{-1} z : 0 < \theta < \pi \rbrace$ and can be written $p = a u^{-1} z , \ \ a > 0 .$ - -Then up = az, with $\begin{pmatrix}u & 0 \\ az & u \end{pmatrix} $ as the homography expressing conjugation of a rotation by a translation p. - -Since the time of Hamilton, it has been realized that requiring the independence of the derivative from the path that a differential follows toward zero is too restrictive: it excludes even $f(q) = q^2$ from differentiation. Therefore, a direction-dependent derivative is necessary for functions of a quaternion variable. - -Considering the increment of a polynomial function of a quaternionic argument shows that the increment is a linear map of the increment of the argument. From this, a definition can be made: - -A continuous map -$$ -f: \mathbb H \rightarrow \mathbb H -$$ - -is called differentiable on the set $U \subset \mathbb H$ if, at every point $x \in U$, the increment of the map $f$ can be represented as -$$ -f(x+h)-f(x)=\frac{d f(x)}{d x}\circ h+o(h) -$$ - -where -$$ -\frac{d f(x)}{d x}:\mathbb H\rightarrow\mathbb H -$$ - -is a linear map of the quaternion algebra $\mathbb H$ and -$$ -o:\mathbb H\rightarrow \mathbb H -$$ - -is a continuous map such that -$$ -\lim_{a\rightarrow 0}\frac{|o(a)|}{|a|}=0. -$$ - -The linear map -$$ -\frac{d f(x)}{d x} -$$ - -is called the derivative of the map $f$. - -On the quaternions, the derivative may be expressed as -$$ -\frac{d f(x)}{d x} = \sum_s \frac{d_{s0} f(x)}{d x} \otimes \frac{d_{s1} f(x)}{d x} -$$ - -Therefore, the differential of the map $f$ may be expressed as follows, with the increment $dx$ placed between the left and right components of each term: -$$ -\frac{d f(x)}{d x}\circ dx = \left(\sum_s \frac{d_{s0} f(x)}{d x} \otimes \frac{d_{s1} f(x)}{d x}\right)\circ dx = \sum_s \frac{d_{s0} f(x)}{d x} dx \frac{d_{s1} f(x)}{d x} -$$ - -The number of terms in the sum will depend on the function f. The expressions -$$ -\frac{d_{sp} f(x)}{d x}, \quad p = 0,1, -$$ are called the components of the derivative.
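As a concrete numeric illustration (a hypothetical Python sketch, with quaternions as 4-tuples and all names illustrative), one can check that for $f(x) = x^2$ the increment is governed by the linear map $h \mapsto xh + hx$, i.e. $\frac{df}{dx} = x \otimes 1 + 1 \otimes x$:

```python
# Check that for f(x) = x^2 the derivative acts as h -> x h + h x:
# the difference quotient (f(x + t h) - f(x)) / t tends to x h + h x.
def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

x = (0.5, -1.0, 2.0, 0.25)
h = (1.0, 0.5, -0.75, 2.0)
t = 1e-6

xt = tuple(xi + t*hi for xi, hi in zip(x, h))          # x + t h
quotient = tuple((a - b)/t for a, b in zip(qmul(xt, xt), qmul(x, x)))
exact = tuple(a + b for a, b in zip(qmul(x, h), qmul(h, x)))
print(max(abs(a - b) for a, b in zip(quotient, exact)))  # ~1e-6, shrinking with t
```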
- -The derivative of a quaternionic function satisfies the following equalities -$$ -\frac{df(x)}{d x}\circ h=\lim_{t\to 0}(t^{-1}(f(x+th)-f(x))) -$$ -$$ -\frac{d(f(x)+g(x))}{d x} = \frac{df(x)}{d x}+\frac{dg(x)}{d x} -$$ -$$ -\frac{d(f(x)g(x))}{d x} = \frac{df(x)}{d x}\ g(x)+f(x)\ \frac{dg(x)}{d x} -$$ -$$ -\frac{d(f(x)g(x))}{d x} \circ h = \left(\frac{df(x)}{d x}\circ h\right )\ g(x)+f(x)\left(\frac{dg(x)}{d x}\circ h\right) -$$ -$$ -\frac{d(af(x)b)}{d x} = a\ \frac{df(x)}{d x}\ b -$$ -$$ -\frac{d(af(x)b)}{d x}\circ h = a\left(\frac{df(x)}{d x}\circ h\right) b -$$ - -For the function f(x) = axb, the derivative is -$$ -\frac{df(x)}{d x} = a\otimes b, \qquad \frac{df(x)}{d x}\circ h = a h b, -$$ - -and so the components are: -$$ -\frac{d_{00} f(x)}{d x} = a, \qquad \frac{d_{01} f(x)}{d x} = b. -$$ - -Similarly, for the function f(x) = x2, the derivative is -$$ -\frac{df(x)}{d x} = x\otimes 1 + 1\otimes x, \qquad \frac{df(x)}{d x}\circ h = x h + h x, -$$ - -and the components are: -$$ -\frac{d_{00} f(x)}{d x} = x, \quad \frac{d_{01} f(x)}{d x} = 1, \quad \frac{d_{10} f(x)}{d x} = 1, \quad \frac{d_{11} f(x)}{d x} = x. -$$ - -Finally, for the function f(x) = x−1, the derivative is -$$ -\frac{df(x)}{d x} = -x^{-1}\otimes x^{-1}, \qquad \frac{df(x)}{d x}\circ h = -x^{-1} h x^{-1}, -$$ - -and the components are: -$$ -\frac{d_{00} f(x)}{d x} = -x^{-1}, \qquad \frac{d_{01} f(x)}{d x} = x^{-1}. -$$ diff --git a/wiki/wikipedia/2563.txt b/wiki/wikipedia/2563.txt deleted file mode 100644 index b1bcd07c446510984b77502db50106901afe82cf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2563.txt +++ /dev/null @@ -1,15 +0,0 @@ -BioFabric is an open-source software application for graph drawing. It presents graphs as a node-link diagram, but unlike other graph drawing tools that depict the nodes using discrete symbols, it represents nodes using horizontal lines. - -Traditional node-link methods for visualizing networks deteriorate in terms of legibility when dealing with large networks, due to the proliferation of edge crossings amassing as what are disparagingly termed 'hairballs'. BioFabric is one of a number of alternative approaches designed explicitly to tackle this scalability issue, choosing to do so by depicting nodes as lines on the horizontal axis, one per row, and edges as lines on the vertical axis, one per column, terminating at the two rows associated with the endpoint nodes. As such, nodes and edges are each provided their own dimension (as opposed to solely the edges, with nodes being non-dimensional points). BioFabric exploits the additional degree of freedom thus produced to place the ends of incident edges in groups. This placement can potentially carry semantic information, whereas in node-link graphics the placement is often arbitrarily generated within constraints for aesthetics, such as during force-directed graph drawing, and may result in apparently informative artifacts. - -Edges are drawn (vertically) in a darker shade than (horizontal) nodes, creating visual distinction. Additional edges increase the width of the graph. - -Both ends of a link are represented as a square to reinforce the above effect even at small scales. Directed graphs also incorporate arrowheads. - -The first version, 1.0.0, was released in July 2012. Development work on BioFabric is ongoing. An open source R implementation, RBioFabric, was released in 2013 for use with the igraph package, and subsequently described on the project weblog. - -Input - -* Networks can be imported using SIF files as input. - -Blakley et al. have described how the technique used by BioFabric, which they refer to as a cartographic representation, can be used to compare the networks A and B by juxtaposing the edges in (A ∖ B), (A ∩ B), and (B ∖ A), a technique that is evocative of a Venn diagram. Rossi and Magnani have developed ranked sociograms, a BioFabric-like presentation where the node ordering is based upon a ranking metric. This approach attaches semantic meaning to the length of the edge lines, and can be used to visualize the assortativity or dissortativity of a network.
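To make the row-and-column layout idea concrete, here is a small, hypothetical Python/matplotlib sketch (illustrative only, not affiliated with the BioFabric project) that draws nodes as horizontal lines, one per row, and edges as darker vertical segments, one per column, with squares at both ends:

```python
# A toy BioFabric-style rendering: nodes are rows, edges are columns.
import matplotlib.pyplot as plt

nodes = ["A", "B", "C", "D"]
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]

row = {n: i for i, n in enumerate(nodes)}       # node -> row index
fig, ax = plt.subplots()
for col, (u, v) in enumerate(edges):            # edge -> column index
    lo, hi = sorted((row[u], row[v]))
    ax.plot([col, col], [lo, hi], color="0.2")  # darker vertical edge line
    ax.plot([col, col], [lo, hi], "s", color="0.2")  # squares at both ends
for n, y in row.items():
    cols = [c for c, e in enumerate(edges) if n in e]
    ax.plot([min(cols), max(cols)], [y, y], color="0.7")  # lighter node line
    ax.text(min(cols) - 0.3, y, n, ha="right", va="center")
ax.invert_yaxis()
ax.axis("off")
plt.show()
```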
diff --git a/wiki/wikipedia/2564.txt b/wiki/wikipedia/2564.txt deleted file mode 100644 index 0ccc16aa6d58a97a93ff8945f8dadb5d88d15ea9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2564.txt +++ /dev/null @@ -1,19 +0,0 @@ -In mathematics, particularly in asymptotic convex geometry, Milman's reverse Brunn–Minkowski inequality is a result due to Vitali Milman that provides a reverse inequality to the famous Brunn–Minkowski inequality for convex bodies in n-dimensional Euclidean space Rn. Namely, it bounds the volume of the Minkowski sum of two bodies from above in terms of the volumes of the bodies. - -Let K and L be convex bodies in Rn. The Brunn–Minkowski inequality states that -$$ - \mathrm{vol}(K+L)^{1/n} \geq \mathrm{vol}(K)^{1/n} + \mathrm{vol}(L)^{1/n}~, -$$ - -where vol denotes n-dimensional Lebesgue measure and the + on the left-hand side denotes Minkowski addition. - -In general, no reverse bound is possible, since one can find convex bodies K and L of unit volume so that the volume of their Minkowski sum is arbitrarily large: in the plane, for instance, a long thin rectangle of unit area and the same rectangle rotated by 90° have a Minkowski sum whose area grows linearly with the aspect ratio. Milman's theorem states that one can replace one of the bodies by its image under a properly chosen volume-preserving linear map so that the left-hand side of the Brunn–Minkowski inequality is bounded by a constant multiple of the right-hand side. - -The result is one of the main structural theorems in the local theory of Banach spaces. - -There is a constant C, independent of n, such that for any two centrally symmetric convex bodies K and L in Rn, there are volume-preserving linear maps φ and ψ from Rn to itself such that for any real numbers s, t > 0 -$$ -\mathrm{vol} ( s \varphi K + t \psi L )^{1/n} \leq C \left( s \mathrm{vol} ( \varphi K )^{1/n} + t \mathrm{vol} ( \psi L )^{1/n} \right)~. -$$ - -One of the maps may be chosen to be the identity. diff --git a/wiki/wikipedia/2565.txt b/wiki/wikipedia/2565.txt deleted file mode 100644 index 0a6945b01a67429ead2b78275bb34bd6e90a5af5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2565.txt +++ /dev/null @@ -1,7 +0,0 @@ -Ulrik Brandes is a German computer scientist, social scientist, and network scientist known for his work on centrality, cluster analysis, and graph drawing. He is Professor for Social Networks at ETH Zurich in Switzerland, in the Department of Humanities, Social and Political Sciences. - -Brandes earned a diploma from RWTH Aachen University in 1994, and a PhD from the University of Konstanz in 1999. His dissertation, Layout of Graph Visualizations, concerned graph drawing, and was jointly supervised by Dorothea Wagner and Michael Kaufmann. - -After completing a habilitation in 2002, and taking an assistant professorship at the University of Passau, he returned to Konstanz as a professor in 2003, before moving to ETH Zurich. - -Brandes is the coauthor of the book Studying Social Networks: A Guide to Empirical Research (Campus Verlag / Chicago University Press, 2012, with Marina Hennig, Jürgen Pfeffer, and Ines Mergel). His edited volumes include Network Analysis: Methodological Foundations (Springer, 2005, edited with Thomas Erlebach) as well as several conference proceedings.
diff --git a/wiki/wikipedia/2566.txt b/wiki/wikipedia/2566.txt deleted file mode 100644 index ed0c6771214d1744783e1831dbe1072a77b581d0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2566.txt +++ /dev/null @@ -1,5 +0,0 @@ -Seok-Hee Hong is a Korean-Australian computer scientist known for her research in graph drawing, including on the effects of crossings and other features of graph drawings on human readability, on 1-planar graphs, and on the layout of transit maps. She is a professor of computer science at the University of Sydney. - -Hong studied computer science and engineering at Ewha Womans University, earning bachelor's, master's, and Ph.D. degrees there. She came to Australia in 1999–2000 as a Korea Science and Engineering Foundation Postdoctoral Fellow at the University of Newcastle, followed by another postdoctoral position at the University of Sydney, where she became a lecturer in 2001. She became a senior lecturer in 2006, Australian Research Fellow in 2008, associate professor in 2009, and ARC Future Fellow and full professor in 2013. She has also been a visiting researcher at many institutions in Canada, Japan, Taiwan, Italy, and (as a Humboldt Fellow) in Germany. - -She was named to the Australian Research Council College of Experts in 2016. diff --git a/wiki/wikipedia/2567.txt b/wiki/wikipedia/2567.txt deleted file mode 100644 index d1311f7fea2715e54018501a370b5afeb2cabc68..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2567.txt +++ /dev/null @@ -1,36 +0,0 @@ -The Andreotti–Norguet formula, first introduced by Aldo Andreotti and François Norguet, is a higher-dimensional analogue of the Cauchy integral formula for expressing the derivatives of a holomorphic function. Precisely, this formula expresses the value of the partial derivative of any multiindex order of a holomorphic function of several variables, at any interior point of a given bounded domain, as a hypersurface integral of the values of the function on the boundary of the domain itself. In this respect, it is analogous to and generalizes the Bochner–Martinelli formula, reducing to it when the absolute value of the multiindex order of differentiation is 0. When considered for functions of n = 1 complex variable, it reduces to the ordinary Cauchy formula for the derivative of a holomorphic function: however, when n > 1, its integral kernel is not obtainable by simple differentiation of the Bochner–Martinelli kernel. - -The Andreotti–Norguet formula was first published in a research announcement; however, its full proof was only published later, in a subsequent paper. Another, different proof of the formula was given by Martinelli. In 1977 and 1978, Lev Aizenberg gave still another proof and a generalization of the formula based on the Cauchy–Fantappiè–Leray kernel instead of the Bochner–Martinelli kernel. - -The notation adopted in the following description of the integral representation formula is the one used by Kytmanov: the notations used in the original works and in other references, though equivalent, are significantly different. Precisely, it is assumed that - -* $n > 1$ is a fixed natural number, - -* $\zeta, z \in \mathbb{C}^n$ are complex vectors, - -* $\alpha = (\alpha_1,\dots,\alpha_n) \in \mathbb{N}^n$ is a multiindex whose absolute value is $|\alpha|$, - -* $D \subset \mathbb{C}^n$ is a bounded domain whose closure is $\bar D$, - -* $A(\bar D)$ is the function space of functions holomorphic on the interior of $D$ and continuous on its boundary $\partial D$.
- -* the iterated Wirtinger derivatives of order $\alpha$ of a given complex valued function $f \in A(\bar D)$ are expressed using the following simplified notation: -$$ -\partial^\alpha f = \frac{\partial^{|\alpha|} f}{\partial z_1^{\alpha_1} \cdots \partial z_n^{\alpha_n}}. -$$ - -For every multiindex $\alpha$, the Andreotti–Norguet kernel $\omega_\alpha(\zeta, z)$ is the following differential form in $\zeta$ of bidegree $(n, n-1)$: -$$ -\omega_\alpha(\zeta,z) = \frac{(n-1)!\alpha_1!\cdots\alpha_n!}{(2\pi i)^n} \sum_{j=1}^n\frac{(-1)^{j-1}(\bar\zeta_j-\overline z_j)^{\alpha_j+1} d\bar\zeta^{\alpha+I}[j] \land d\zeta}{\left(|z_1-\zeta_1|^{2(\alpha_1+1)} + \cdots + |z_n-\zeta_n|^{2(\alpha_n+1)}\right)^n}, -$$ - -where $I = (1,\dots,1) \in \mathbb{N}^n$ and -$$ - d\bar\zeta^{\alpha+I}[j] = d\bar\zeta_1^{\alpha_1+1} \land \cdots \land d\bar\zeta_{j-1}^{\alpha_{j-1}+1} \land d\bar\zeta_{j+1}^{\alpha_{j+1}+1} \land \cdots \land d\bar\zeta_n^{\alpha_n+1}. -$$ - -For every function $f \in A(\bar D)$, every point $z \in D$, and every multiindex $\alpha$, the following integral representation formula holds: -$$ -\partial^\alpha f(z) = \int_{\partial D} f(\zeta)\omega_\alpha(\zeta,z). -$$ diff --git a/wiki/wikipedia/2568.txt b/wiki/wikipedia/2568.txt deleted file mode 100644 index c9c279b75ff38b7b16044773d445a1e4f287ccd6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2568.txt +++ /dev/null @@ -1,5 +0,0 @@ -The Katětov–Tong insertion theorem is a theorem of point-set topology proved independently by Miroslav Katětov and Hing Tong in the 1950s. The theorem states the following: - -Let $X$ be a normal topological space and let $g, h\colon X \to \mathbb{R}$ be functions with $g$ upper semicontinuous, $h$ lower semicontinuous, and $g \leq h$. Then there exists a continuous function $f\colon X \to \mathbb{R}$ with $g \leq f \leq h.$ - -This theorem has a number of applications and is the first of many classical insertion theorems. In particular it implies the Tietze extension theorem and consequently Urysohn's lemma, and so the conclusion of the theorem is equivalent to normality. diff --git a/wiki/wikipedia/2569.txt b/wiki/wikipedia/2569.txt deleted file mode 100644 index 9815bdc3663c6cc5e70a1257de6ba3e37839a1c3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2569.txt +++ /dev/null @@ -1,62 +0,0 @@ -In number theory, Brun's theorem states that the sum of the reciprocals of the twin primes (pairs of prime numbers which differ by 2) converges to a finite value known as Brun's constant, usually denoted by B2. Brun's theorem was proved by Viggo Brun in 1919, and it has historical importance in the introduction of sieve methods. - -The convergence of the sum of reciprocals of twin primes follows from bounds on the density of the sequence of twin primes. - -Let $\pi_2(x)$ denote the number of primes p ≤ x for which p + 2 is also prime (i.e. $\pi_2(x)$ is the number of twin primes with the smaller at most x). Then, for x ≥ 3, we have -$$ - \pi_2(x) =O\left(\frac {x(\log\log x)^2}{(\log x)^2} \right). -$$ - -That is, twin primes are less frequent than prime numbers by nearly a logarithmic factor. - -It follows from this bound that the sum of the reciprocals of the twin primes converges, or stated in other words, the twin primes form a small set.
In explicit terms the sum -$$ - \sum\limits_{ p : p + 2 \in \mathbb{P} } {\left( {\frac{1}{p} + \frac{1}{p + 2}} \right)} = \left( {\frac{1}{3} + \frac{1}{5}} \right) + \left( {\frac{1}{5} + \frac{1}{7}} \right) + \left( {\frac{1}{11} + \frac{1}{13}} \right) + \cdots -$$ - -either has finitely many terms or has infinitely many terms but is convergent: its value is known as Brun's constant. - -If it were the case that the sum diverged, then that fact would imply that there are infinitely many twin prime numbers. Because the sum of the reciprocals of the twin primes instead converges, it is not possible to conclude from this result that there are finitely many or infinitely many twin primes. Brun's constant could be an irrational number only if there are infinitely many twin primes. - -The series converges extremely slowly. Thomas Nicely remarks that after summing the first one billion ($10^9$) terms, the relative error is still more than 5%. - -By calculating the twin primes up to $10^{14}$ (and discovering the Pentium FDIV bug along the way), Nicely heuristically estimated Brun's constant to be 1.902160578. Nicely has extended his computation to $1.6\times10^{15}$ as of 18 January 2010, but this is not the largest computation of its type. - -In 2002, Pascal Sebah and Patrick Demichel used all twin primes up to $10^{16}$ to give the estimate that B2 ≈ 1.902160583104. This estimate is based on extrapolation from the sum 1.830484424658..., taken over the twin primes below $10^{16}$. Dominic Klyve showed conditionally (in an unpublished thesis) that B2 < 2.1754 (assuming the extended Riemann hypothesis). It has been shown unconditionally that B2 < 2.347. - -There is also a Brun's constant for prime quadruplets. A prime quadruplet is a pair of two twin prime pairs, separated by a distance of 4 (the smallest possible distance). The first prime quadruplets are (5, 7, 11, 13), (11, 13, 17, 19), (101, 103, 107, 109). Brun's constant for prime quadruplets, denoted by B4, is the sum of the reciprocals of all prime quadruplets: -$$ -B_4 = \left(\frac{1}{5} + \frac{1}{7} + \frac{1}{11} + \frac{1}{13}\right) + \left(\frac{1}{11} + \frac{1}{13} + \frac{1}{17} + \frac{1}{19}\right) + \left(\frac{1}{101} + \frac{1}{103} + \frac{1}{107} + \frac{1}{109}\right) + \cdots -$$ - -with value: - -B4 = 0.87058 83800 ± 0.00000 00005, the error range having a 99% confidence level according to Nicely. - -This constant should not be confused with Brun's constant for cousin primes, i.e. prime pairs of the form (p, p + 4), which is also written as B4. Wolf derived an estimate for the Brun-type sums Bn of 4/n. - -Let $C_2=0.6601\ldots$ be the twin prime constant. Then it is conjectured that -$$ -\pi_2(x)\sim2C_2\frac{x}{(\log x)^2}. -$$ - -In particular, -$$ -\pi_2(x)<(2C_2+\varepsilon)\frac{x}{(\log x)^2} -$$ - -for every $\varepsilon>0$ and all sufficiently large x. - -Many special cases of the above have been proved. Most recently, Jie Wu proved that for sufficiently large x, -$$ - \pi_2(x) < 4.5 \frac {x}{(\log x)^2} -$$ - -where 4.5 corresponds to $\varepsilon\approx3.18$ in the above. - -The digits of Brun's constant were used in a bid of $1,902,160,540 in the Nortel patent auction. The bid was posted by Google and was one of three Google bids based on mathematical constants.
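The extremely slow convergence noted above is easy to observe directly. The following hypothetical Python sketch (illustrative only) sums the twin-prime reciprocal pairs below a modest bound; the partial sum is still far from the estimated value of B2:

```python
# Partial sum of (1/p + 1/(p+2)) over twin primes p with p + 2 below N.
def primes_below(n):
    sieve = bytearray([1]) * n           # simple sieve of Eratosthenes
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(sieve[p*p::p]))
    return [i for i in range(n) if sieve[i]]

N = 10**6
ps = set(primes_below(N))
partial = sum(1.0/p + 1.0/(p + 2) for p in ps if p + 2 in ps)
print(partial)   # roughly 1.7 -- still well below B2 ~ 1.9021605...
```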
diff --git a/wiki/wikipedia/257.txt b/wiki/wikipedia/257.txt deleted file mode 100644 index 4e275acc2ad4cfeab0f5933be08ae4c4ba961c8b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/257.txt +++ /dev/null @@ -1 +0,0 @@ -In linear algebra and projective geometry, Gerbaldi's theorem, proved by Francesco Gerbaldi, states that one can find six pairwise apolar linearly independent nondegenerate ternary quadratic forms. These are permuted by the Valentiner group. diff --git a/wiki/wikipedia/2570.txt b/wiki/wikipedia/2570.txt deleted file mode 100644 index 2593a87dc3abae1f12b65bf2af615f7209711ee7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2570.txt +++ /dev/null @@ -1,73 +0,0 @@ -In quantum mechanics, the Kochen–Specker (KS) theorem, also known as the Bell–Kochen–Specker theorem, is a "no-go" theorem proved by John S. Bell in 1966 and by Simon B. Kochen and Ernst Specker in 1967. It places certain constraints on the permissible types of hidden-variable theories, which try to explain the predictions of quantum mechanics in a context-independent way. The version of the theorem proved by Kochen and Specker also gave an explicit example for this constraint in terms of a finite number of state vectors. - -The theorem is a complement to Bell's theorem (to be distinguished from the (Bell–)Kochen–Specker theorem of this article). While Bell's theorem established nonlocality to be a feature of any hidden variable theory that recovers the predictions of quantum mechanics, the KS theorem established contextuality to be an inevitable feature of such theories. - -The theorem proves that there is a contradiction between two basic assumptions of the hidden-variable theories intended to reproduce the results of quantum mechanics: that all hidden variables corresponding to quantum-mechanical observables have definite values at any given time, and that the values of those variables are intrinsic and independent of the device used to measure them. The contradiction is caused by the fact that quantum-mechanical observables need not be commutative. It turns out to be impossible to simultaneously embed all the commuting subalgebras of the algebra of these observables in one commutative algebra, assumed to represent the classical structure of the hidden-variables theory, if the Hilbert space dimension is at least three. - -The Kochen–Specker theorem excludes hidden-variable theories that assume that elements of physical reality can all be consistently represented simultaneously by the quantum mechanical Hilbert space formalism disregarding the context of a particular framework (technically a projective decomposition of the identity operator) related to the experiment or analytical viewpoint under consideration. As succinctly worded by Isham and Butterfield, (under the assumption of a universal probabilistic sample space as in non-contextual hidden variable theories) the Kochen–Specker theorem "asserts the impossibility of assigning values to all physical quantities whilst, at the same time, preserving the functional relations between them". - -The KS theorem is an important step in the debate on the (in)completeness of quantum mechanics, boosted in 1935 by the criticism of the Copenhagen assumption of completeness in the article by Einstein, Podolsky and Rosen, creating the so-called EPR paradox.
This paradox is derived from the assumption that a quantum-mechanical measurement result is generated in a deterministic way as a consequence of the existence of an element of physical reality assumed to be present before the measurement as a property of the microscopic object. In the EPR article it was assumed that the measured value of a quantum-mechanical observable can play the role of such an element of physical reality. As a consequence of this metaphysical supposition, the EPR criticism was not taken very seriously by the majority of the physics community. Moreover, in his answer Bohr had pointed to an ambiguity in the EPR article, to the effect that it assumes the value of a quantum-mechanical observable is non-contextual (i.e. is independent of the measurement arrangement). Taking into account the contextuality stemming from the measurement arrangement would, according to Bohr, make obsolete the EPR reasoning. It was subsequently observed by Einstein that Bohr's reliance on contextuality implies nonlocality ("spooky action at a distance"), and that, in consequence, one would have to accept incompleteness if one wanted to avoid nonlocality. - -In the 1950s and 1960s two lines of development were open for those not averse to metaphysics, both lines improving on a "no-go" theorem presented by von Neumann, purporting to prove the impossibility of the hidden-variable theories yielding the same results as quantum mechanics. First, Bohm developed an interpretation of quantum mechanics, generally accepted as a hidden-variable theory underpinning quantum mechanics. The nonlocality of Bohm's theory induced Bell to assume that quantum reality is nonlocal, and that probably only local hidden-variable theories are in disagreement with quantum mechanics. More importantly, Bell managed to lift the problem from the level of metaphysics to physics by deriving an inequality, the Bell inequality, that is capable of being experimentally tested. - -A second line is the Kochen–Specker one. The essential difference from Bell's approach is that the possibility of underpinning quantum mechanics by a hidden-variable theory is dealt with independently of any reference to locality or nonlocality, but instead a stronger restriction than locality is made, namely that hidden variables are exclusively associated with the quantum system being measured; none are associated with the measurement apparatus. This is called the assumption of non-contextuality. Contextuality is related here with incompatibility of quantum-mechanical observables, incompatibility being associated with mutual exclusiveness of measurement arrangements. The Kochen–Specker theorem states that no non-contextual hidden-variable model can reproduce the predictions of quantum theory when the dimension of the Hilbert space is three or more. - -Bell published a proof of the Kochen–Specker theorem in 1966, in an article which had been submitted to a journal earlier than his famous Bell-inequality article, but was lost on an editor's desk for two years. Considerably simpler proofs than the Kochen–Specker one were given later, amongst others, by Mermin and by Peres. However, many simpler proofs only establish the theorem for Hilbert spaces of higher dimension, e.g., from dimension four. - -The KS theorem explores whether it is possible to embed the set of quantum-mechanical observables into a set of classical quantities, in spite of the fact that all classical quantities are mutually compatible. 
- -The first observation made in the Kochen–Specker article is that this is possible in a trivial way, namely, by ignoring the algebraic structure of the set of quantum-mechanical observables. Indeed, let pA(ak) be the probability that observable A has value ak; then the product ΠA pA(ak), taken over all possible observables A, is a valid joint probability distribution, yielding all probabilities of quantum-mechanical observables by taking marginals. Kochen and Specker note that this joint probability distribution is not acceptable, however, since it ignores all correlations between the observables. Thus, in quantum mechanics $A^2$ has value $a_k^2$ if $A$ has value $a_k$, implying that the values of $A$ and $A^2$ are highly correlated. - -More generally, it is required by Kochen and Specker that for an arbitrary function $f$ the value $v\big(f(\mathbf A)\big)$ of observable $f(\mathbf A)$ satisfies -$$ -v\big(f(\mathbf A)\big) = f\big(v(\mathbf A)\big). -$$ - -If A1 and A2 are compatible (commeasurable) observables, then, by the same token, we should have the following two equalities: -$$ -v(c_1 \mathbf A_1 + c_2 \mathbf A_2) = c_1 v(\mathbf A_1) + c_2 v(\mathbf A_2), -$$ - -with $c_1$ and $c_2$ real, and -$$ -v(\mathbf A_1 \mathbf A_2) = v(\mathbf A_1) v(\mathbf A_2). -$$ - -The first of these is a considerable weakening compared to von Neumann's assumption that this equality should hold independently of whether A1 and A2 are compatible or incompatible. Kochen and Specker were capable of proving that a value assignment is not possible even on the basis of these weaker assumptions. In order to do so, they restricted the observables to a special class, namely, so-called yes–no observables, having only values 0 and 1, corresponding to projection operators on the eigenvectors of certain orthogonal bases of a Hilbert space. - -As long as the Hilbert space is at least three-dimensional, they were able to find a set of 117 such projection operators for which it is not possible to attribute to each of them, in an unambiguous way, either the value 0 or 1. Instead of the rather involved proof by Kochen and Specker, it is more illuminating to reproduce here one of the much simpler proofs given much later, which employs a smaller number of projection operators, but only proves the theorem when the dimension of the Hilbert space is at least 4. It turns out that it is possible to obtain a similar result on the basis of a set of only 18 projection operators. - -In order to do so, it is sufficient to realize that if u1, u2, u3 and u4 are the four orthogonal vectors of an orthogonal basis in the four-dimensional Hilbert space, then the projection operators P1, P2, P3, P4 on these vectors are all mutually commuting (and, hence, correspond to compatible observables, allowing a simultaneous attribution of values 0 or 1). Since -$$ -\mathbf P_1 + \mathbf P_2 + \mathbf P_3 + \mathbf P_4 = \mathbf I, -$$ - -it follows that -$$ -v(\mathbf P_1 + \mathbf P_2 + \mathbf P_3 + \mathbf P_4) = v(\mathbf I) = 1. -$$ - -But since -$$ -v(\mathbf P_1 + \mathbf P_2 + \mathbf P_3 + \mathbf P_4) = v(\mathbf P_1) + v(\mathbf P_2) + v(\mathbf P_3) + v(\mathbf P_4), -$$ - -it follows from $v(\mathbf P_i)$ = 0 or 1, $i = 1, \ldots, 4$, that out of the four values $v(\mathbf P_1), v(\mathbf P_2), v(\mathbf P_3), v(\mathbf P_4)$ one must be 1, while the other three must be 0. - -Cabello, extending an argument developed by Kernaghan, considered 9 orthogonal bases, each basis corresponding to a column of the following table, in which the basis vectors are explicitly displayed.
The bases are chosen in such a way that each projector appears in exactly two contexts, thus establishing functional relations between contexts. - -Now the "no-go" theorem follows by making sure that the following is impossible: to place a value, either a 1 or a 0, into each compartment of the table above in such a way that: - -(a) the value 1 appears exactly once per column, the other entries in the column being 0; - -(b) equally colored compartments contain the same value – either both contain 1 or both contain 0. - -As it happens, all we have to do now is ask the question: how many times should the value 1 appear in the table? On the one hand, (a) implies that 1 should appear 9 times: there are 9 columns and (a) says that 1 should appear exactly once per column. On the other hand, (b) implies that 1 should appear an even number of times: the compartments all come in equally colored pairs, and (b) says that if one member of a pair contains 1, then the other member must contain 1 as well. To repeat, (a) says that 1 appears 9 times, while (b) says that it appears an even number of times. Since 9 is not even, it follows that (a) and (b) are mutually contradictory; no distribution of 1s and 0s into the compartments could possibly satisfy both (a brute-force version of this counting argument is sketched below). - -The usual proof of Bell's theorem (CHSH inequality) can also be converted into a simple proof of the KS theorem in dimension at least 4. Bell's setup involves four measurements with four outcomes (four pairs of a simultaneous binary measurement in each wing of the experiment) and four with two outcomes (the two binary measurements in each wing of the experiment, unaccompanied), thus 4 × 4 + 4 × 2 = 24 projection operators. - -In the Kochen–Specker article the possibility is discussed that the value attribution $v(\mathbf A)$ may be context-dependent, i.e. observables corresponding to equal vectors in different columns of the table need not have equal values because different columns correspond to different measurement arrangements. Since subquantum reality (as described by the hidden-variable theory) may be dependent on the measurement context, it is possible that relations between quantum-mechanical observables and hidden variables are just homomorphic rather than isomorphic. This would make obsolete the requirement of a context-independent value attribution. Hence, the KS theorem only excludes noncontextual hidden-variable theories. The possibility of contextuality has given rise to the so-called modal interpretations of quantum mechanics. - -The KS theorem thus proves the impossibility of Einstein's assumption that an element of physical reality is represented by a value of a quantum-mechanical observable. The value of a quantum-mechanical observable refers in the first place to the final position of the pointer of a measuring instrument, which comes into being only during the measurement, and which, for this reason, cannot play the role of an element of physical reality. Elements of physical reality, if existing, would seem to need a subquantum (hidden-variable) theory for their description rather than quantum mechanics. In later publications the Bell inequalities are discussed on the basis of hidden-variable theories in which the hidden variable is supposed to refer to a subquantum property of the microscopic object different from the value of a quantum-mechanical observable. This opens up the possibility of distinguishing different levels of reality described by different theories, which had already been practised by Louis de Broglie.
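The counting argument referenced above uses only the combinatorial structure of Cabello's table: 9 contexts of 4 projectors each, with every projector occurring in exactly two contexts. Here is a minimal brute-force sketch of it; the incidence table below is a toy one with that same structure, not Cabello's actual basis vectors, which are not reproduced in this article.

```python
from itertools import product

# Toy incidence: 9 contexts of 4 projectors, 18 projectors (labels 0..17),
# each projector appearing in exactly two distinct contexts.
labels = list(range(18)) * 2
contexts = [labels[4 * i: 4 * i + 4] for i in range(9)]

def admits_noncontextual_valuation(contexts):
    # Requirement (b) is automatic here: a value is attached to a projector
    # label, so both of its occurrences agree. It remains to check (a):
    # exactly one projector valued 1 in every context.
    for v in product((0, 1), repeat=18):
        if all(sum(v[p] for p in ctx) == 1 for ctx in contexts):
            return True
    return False

# False: a valid valuation would need 9 (odd) ones counted per context,
# but each label is counted twice, forcing an even total.
print(admits_noncontextual_valuation(contexts))
```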
For such more general theories the KS theorem is applicable only if the measurement is assumed to be a faithful one, in the sense that there is a deterministic relation between a subquantum element of physical reality and the value of the observable found on measurement. diff --git a/wiki/wikipedia/2571.txt b/wiki/wikipedia/2571.txt deleted file mode 100644 index 739d632f336033fd42c63691cca72308bdfdf982..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2571.txt +++ /dev/null @@ -1,50 +0,0 @@ -In the mathematical areas of number theory and analysis, an infinite sequence or a function is said to eventually have a certain property if the property need not hold for all its ordered instances, but does hold for all instances beyond some point. The use of the term "eventually" can often be rephrased as "for sufficiently large numbers", and can also be extended to the class of properties that apply to elements of any ordered set (such as sequences and subsets of $\mathbb{R}$). - -The general form in which the phrase eventually (or sufficiently large) appears is the following: $P$ is eventually true for $x$ ($P$ is true for sufficiently large $x$). This is shorthand for: there exists $a \in \mathbb{R}$ such that $P$ is true for all $x \ge a$, or somewhat more formally: -$$ -\exists a \in \mathbb{R}: \forall x \in \mathbb{R}:x \ge a \Rightarrow P(x), -$$ -where $\forall$ and $\exists$ are the universal and existential quantifiers. - -This does not necessarily mean that any particular value for $a$ is known, but only that such an $a$ exists. The phrase "sufficiently large" should not be confused with the phrases "arbitrarily large" or "infinitely large". For more, see Arbitrarily large#Arbitrarily large vs. sufficiently large vs. infinitely large. - -For an infinite sequence, one is often more interested in the long-term behaviors of the sequence than the behaviors it exhibits early on. In that case, one way to formally capture this concept is to say that the sequence possesses a certain property eventually, or equivalently, that the property is satisfied by one of its subsequences $(a_n)_{n \geq N}$, for some $N \in \mathbb{N}$. - -For example, the definition of a sequence of real numbers $(a_n)$ converging to some limit $a$ is: - -For each positive number $\varepsilon$, there exists a natural number $N$ such that for all $n > N$, $\left\vert a_n - a \right\vert<\varepsilon$. - -When the term "eventually" is used as a shorthand for "there exists a natural number $N$ such that for all $n > N$", the convergence definition can be restated more simply as: - -For each $\varepsilon>0$, eventually $\left\vert a_n-a \right\vert<\varepsilon$. - -Here, notice that the set of natural numbers that do not satisfy this property is a finite set; that is, the set is empty or has a maximum element. As a result, the use of "eventually" in this case is synonymous with the expression "for all but a finite number of terms" – a special case of the expression "for almost all terms" (although "almost all" can also be used to allow for infinitely many exceptions). - -At the basic level, a sequence can be thought of as a function with natural numbers as its domain, and the notion of "eventually" applies to functions on more general sets as well—in particular to those that have an ordering with no greatest element.
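Before turning to such general ordered sets, here is a concrete illustration of the threshold $N$ in the convergence definition above. This is a minimal sketch, not part of the article; the sequence $a_n = 1/n$ and the search bound are assumptions chosen only for the example, and a finite search can of course only exhibit a candidate threshold, not prove eventual behavior.

```python
def eventual_threshold(seq, limit, eps, search_bound=10**6):
    """Return N such that |seq(n) - limit| < eps for all N < n <= search_bound."""
    N = 0
    for n in range(1, search_bound + 1):
        if abs(seq(n) - limit) >= eps:
            N = n  # the property fails at n, so any valid threshold is >= n
    return N

# a_n = 1/n converges to 0: |a_n - 0| < 1e-3 holds eventually,
# namely for all n > 1000.
print(eventual_threshold(lambda n: 1 / n, 0.0, 1e-3))  # 1000
```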
- -More specifically, if $S$ is such a set and there is an element $s$ in $S$ such that the function $f$ is defined for all elements greater than $s$, then $f$ is said to have some property eventually if there is an element $x_0$ such that whenever $x>x_0$, $f(x)$ has the said property. This notion is used, for example, in the study of Hardy fields, which are fields made up of real functions, each of which has certain properties eventually. - -* "All primes greater than 2 are odd" can be written as "Eventually, all primes are odd." - -* Eventually, all primes are congruent to ±1 modulo 6. - -* The square of a prime is eventually congruent to 1 mod 24 (specifically, this is true for all primes greater than 3). - -* The factorial of a natural number eventually ends in the digit 0 (specifically, this is true for all natural numbers greater than 4). - -When a sequence or a function has a property eventually, this can have useful implications when proving statements about that sequence. For example, in studying the asymptotic behavior of certain functions, it can be useful to know whether a function eventually behaves differently than could be observed computationally, since such behavior would otherwise go unnoticed. - -The term "eventually" can also be incorporated into many mathematical definitions to make them more concise. These include the definitions of some types of limits (as seen above), and the big O notation for describing asymptotic behavior. - -*A 3-manifold is called sufficiently large if it contains a properly embedded 2-sided incompressible surface. This property is the main requirement for a 3-manifold to be called a Haken manifold. - -* Temporal logic introduces an operator that can be used to express statements interpretable as: a certain property will eventually hold at a future moment in time. diff --git a/wiki/wikipedia/2572.txt b/wiki/wikipedia/2572.txt deleted file mode 100644 index 054850417583f0106f12a5fe806e54b488e3587f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2572.txt +++ /dev/null @@ -1,142 +0,0 @@ -In mathematics, Buffon's needle problem is a question first posed in the 18th century by Georges-Louis Leclerc, Comte de Buffon: - -Suppose we have a floor made of parallel strips of wood, each the same width, and we drop a needle onto the floor. What is the probability that the needle will lie across a line between two strips? - -Buffon's needle was the earliest problem in geometric probability to be solved; it can be solved using integral geometry. The solution for the sought probability p, in the case where the needle length l is not greater than the width t of the strips, is -$$ -p=\frac{2}{\pi}\frac{l}{t}. -$$ - -This can be used to design a Monte Carlo method for approximating the number π, although that was not the original motivation for de Buffon's question. - -The problem in more mathematical terms is: Given a needle of length $l$ dropped on a plane ruled with parallel lines t units apart, what is the probability that the needle will lie across a line upon landing? - -Let x be the distance from the center of the needle to the closest parallel line, and let θ be the acute angle between the needle and one of the parallel lines.
- -The uniform probability density function of $x$ between 0 and $t/2$ is -$$ -\begin{cases} \frac{2}{t} &:\ 0 \le x \le \frac{t}{2}\\ 0 &: \text{elsewhere.} \end{cases} -$$ - -Here, x = 0 represents a needle that is centered directly on a line, and x = t/2 represents a needle that is perfectly centered between two lines. The uniform PDF assumes the needle is equally likely to fall anywhere in this range, but could not fall outside of it. - -The uniform probability density function of $\theta$ between 0 and $\pi/2$ is -$$ -\begin{cases} \frac{2}{\pi} &:\ 0 \le \theta \le \frac{\pi}{2}\\ 0 &: \text{elsewhere.} \end{cases} -$$ - -Here, θ = 0 radians represents a needle that is parallel to the marked lines, and θ = π/2 radians represents a needle that is perpendicular to the marked lines. Any angle within this range is assumed an equally likely outcome. - -The two random variables, x and θ, are independent, so the joint probability density function is the product -$$ -\begin{cases} \frac{4}{t\pi} &:\ 0 \le x \le \frac{t}{2}, \ 0 \le \theta \le \frac{\pi}{2}\\ 0 &: \text{elsewhere.} \end{cases} -$$ - -The needle crosses a line if -$$ -x \le \frac{l}{2}\sin\theta. -$$ - -Now there are two cases. - -First, suppose $l \le t$ (the short-needle case). Integrating the joint probability density function gives the probability that the needle will cross a line: -$$ -P = \int_{\theta=0}^{\frac{\pi}{2}} \int_{x=0}^{(l/2)\sin \theta} \frac{4}{t\pi}dxd\theta = \frac{2 l}{t\pi}. -$$ - -Suppose instead $l > t$ (the long-needle case). Integrating the joint probability density function, we obtain: -$$ -\int_{\theta=0}^{\frac{\pi}{2}} \int_{x=0}^{m(\theta)} \frac{4}{t\pi}dxd\theta , -$$ - -where $m(\theta)$ is the minimum of $(l/2)\sin\theta$ and $t/2$. - -Thus, performing the above integration, we see that, when $t < l$, the probability that the needle will cross a line is -$$ -\frac{2 l}{t\pi} - \frac{2}{t\pi}\left\{\sqrt{l^2 - t^2} + t\sin^{-1}\left(\frac{t}{l}\right)\right\}+1 -$$ - -or -$$ - \frac{2}{\pi} \cos^{-1}\frac{t}{l} + \frac{2}{\pi} \frac{l}{t} \left\{1 - \sqrt{1 - \left( \frac{t}{l} \right)^2 } \right\}. -$$ - -In the second expression, the first term represents the probability of the angle of the needle being such that it will always cross at least one line. The second term represents the probability that the needle falls at an angle where its position matters, and that it crosses the line. - -Alternatively, notice that whenever $\theta$ has a value such that $l \sin\theta \le t$, that is, in the range $0 \le \theta \le \sin^{-1}(t/l)$, the probability of crossing is the same as in the short needle case. However, if $l \sin\theta > t$, that is, $\sin^{-1}(t/l) < \theta \le \frac{\pi}{2}$, the probability is constant and is equal to 1. -$$ -\left(\int_{\theta=0}^{\sin^{-1}(t/l)} \int_{x=0}^{(l/2)\sin\theta} \frac{4}{t\pi}dxd\theta\right) + \left(\int_{\sin^{-1}(t/l)}^{\frac{\pi}{2}} \frac{2}{\pi}d\theta\right) = \frac{2 l}{t\pi} - \frac{2}{t\pi}\left\{\sqrt{l^2 - t^2} + t\sin^{-1}\left(\frac{t}{l}\right)\right\}+1 -$$ - -The following solution for the "short needle" case, while equivalent to the one above, has a more visual flavor, and avoids iterated integrals. - -We can calculate the probability $P$ as the product of 2 probabilities: $P = P_1 \cdot P_2$, where $P_1$ is the probability that the center of the needle falls close enough to a line for the needle to possibly cross it, and $P_2$ is the probability that the needle actually crosses the line, given that the center is within reach. A numerical check of the integral formulas above is sketched below.
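The following is a minimal numerical sanity check of the two closed forms just derived, not part of the original article; the grid resolution and the sample values of l and t are arbitrary choices, and a plain Riemann sum over the joint density stands in for the exact integration.

```python
import numpy as np

def crossing_probability(l, t, n=2001):
    """Riemann-sum approximation of the integral of the joint density
    4/(t*pi) over the crossing region x <= (l/2)*sin(theta)."""
    theta = np.linspace(0.0, np.pi / 2, n)
    x = np.linspace(0.0, t / 2, n)
    X, TH = np.meshgrid(x, theta)
    crosses = X <= (l / 2) * np.sin(TH)  # min with t/2 is implicit in the x-range
    cell = (np.pi / 2 / (n - 1)) * (t / 2 / (n - 1))  # grid cell area
    return 4 / (t * np.pi) * crosses.sum() * cell

# Short needle (l <= t): closed form 2l/(t*pi).
l, t = 1.0, 2.0
print(crossing_probability(l, t), 2 * l / (t * np.pi))

# Long needle (l > t): the second closed-form expression above.
l, t = 3.0, 2.0
closed = 2 / np.pi * np.arccos(t / l) + 2 / np.pi * (l / t) * (1 - np.sqrt(1 - (t / l) ** 2))
print(crossing_probability(l, t), closed)
```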
- -Looking at the illustration in the above section, it is apparent that the needle can cross a line if the center of the needle is within $l / 2$ units of either side of the strip. Adding $\frac{l}{2}+\frac{l}{2}$ from the two sides and dividing by the whole width $t$, we obtain $P_1 = \frac{l}{t}.$ - -Now, we assume that the center is within reach of the edge of the strip, and calculate $P_2$. To simplify the calculation, we can assume that $l = 2$. - -Let x and θ be as in the illustration in this section. Placing a needle's center at x, the needle will cross the vertical axis if it falls within a range of 2θ radians, out of π radians of possible orientations. This represents the gray area to the left of x in the figure. For a fixed x, we can express θ as a function of x: $\theta\left(x\right) = \cos^{-1}\left(x\right)$. Now we can let x move from 0 to 1, and integrate: -$$ -P_2 = \int_0^1 \frac{2\theta(x)}{\pi}dx = \frac{2}{\pi}\int_0^1 \cos^{-1}(x)dx = \frac{2}{\pi}\cdot 1 = \frac{2}{\pi}. -$$ - -Multiplying both results, we obtain $P = P_1\cdot P_2 = \frac{l}{t}\frac{2}{\pi} = \frac{2 l}{t\pi}$, as above. - -There is an even simpler and more elegant method of calculating the "short needle" case. The end of the needle farthest away from any one of the two lines bordering its region must be located within a horizontal (perpendicular to the bordering lines) distance of $l\cos\theta$ (where $\theta$ is the angle between the needle and the horizontal) from this line in order for the needle to cross it. The farthest this end of the needle can move away from this line horizontally in its region is $t$. The probability that the farthest end of the needle is located no more than a distance $l\cos\theta$ away from the line (and thus that the needle crosses the line) out of the total distance $t$ it can move in its region for $0 \le \theta \le \pi/2$ is given by -$$ -P = \frac{\int_0^{\frac{\pi}{2}} l\cos\theta d\theta}{\int_0^{\frac{\pi}{2}} t d\theta} = \frac{l}{t}\frac{\int_0^{\frac{\pi}{2}} \cos\theta d\theta}{\int_0^{\frac{\pi}{2}} d\theta} = \frac{l}{t}\frac{1}{\frac{\pi}{2}}=\frac{2l}{t\pi}, -$$ -as above. - -The short-needle problem can also be solved without any integration, in a way that explains the formula for p from the geometric fact that a circle of diameter t will always (i.e. with probability 1) cross the lines bounding strips of width t in exactly two points. This solution was given by Joseph-Émile Barbier in 1860 and is also referred to as "Buffon's noodle". - -In the first, simpler case above, the formula obtained for the probability $P$ can be rearranged to -$$ -\pi = \frac{2 l}{t P}. -$$ - -Thus, if we conduct an experiment to estimate $P$, we will also have an estimate for π. - -Suppose we drop n needles and find that h of those needles cross lines, so $P$ is approximated by the fraction $h / n$. This leads to the formula: -$$ -\pi \approx \frac{2l\cdot n}{t h}. -$$ - -In 1901, Italian mathematician Mario Lazzarini performed Buffon's needle experiment. Tossing a needle 3408 times, he obtained the well-known approximation 355/113 for π, accurate to six decimal places.
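Such an experiment is easy to simulate. The sketch below is an illustration, not Lazzarini's procedure; the values l = 1 and t = 2 are arbitrary choices, and note that the simulation already uses the value of π to draw the random angle, so it is a demonstration of the formula rather than an honest computation of π.

```python
import math
import random

def buffon_estimate(n, l=1.0, t=2.0):
    """Estimate pi from n simulated needle drops (short-needle case, l <= t)."""
    hits = 0
    for _ in range(n):
        x = random.uniform(0.0, t / 2)            # center-to-nearest-line distance
        theta = random.uniform(0.0, math.pi / 2)  # acute angle with the lines
        if x <= (l / 2) * math.sin(theta):
            hits += 1
    return 2 * l * n / (t * hits)

print(buffon_estimate(1_000_000))  # typically within about 0.01 of pi
```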
- -Lazzarini's "experiment" is an example of confirmation bias, as it was set up to replicate the already well-known approximation of 355/113 (in fact, there is no better rational approximation with fewer than five digits in the numerator and denominator), yielding a more accurate "prediction" of π than would be expected from the number of trials, as follows: - -Lazzarini chose needles whose length was 5/6 of the width of the strips of wood. In this case, the probability that the needles will cross the lines is $\frac{5}{3 \pi}$. Thus if one were to drop n needles and get x crossings, one would estimate π as -$$ -\pi \approx \frac 53 \cdot \frac nx. -$$ - -So if Lazzarini was aiming for the result 355/113, he needed n and x such that -$$ -\frac{355}{113} = \frac 53 \cdot \frac nx, -$$ -or equivalently, -$$ -x = \frac{113 n}{213}. -$$ - -To do this, one should pick n as a multiple of 213, because then $\frac{113 n}{213}$ is an integer; one then drops n needles, and hopes for exactly $x = \frac{113 n}{213}$ successes. - -If one drops 213 needles and happens to get 113 successes, then one can triumphantly report an estimate of π accurate to six decimal places. If not, one can just do 213 more trials and hope for a total of 226 successes; if not, just repeat as necessary. Lazzarini performed 3408 = 213 · 16 trials, making it seem likely that this is the strategy he used to obtain his "estimate." - -The above description of strategy might even be considered charitable to Lazzarini. A statistical analysis of intermediate results he reported for fewer tosses leads to a very low probability of achieving such close agreement to the expected value all through the experiment. This makes it very possible that the "experiment" itself was never physically performed, but was instead based on numbers concocted from imagination to match statistical expectations, though, as it turns out, they matched too well. diff --git a/wiki/wikipedia/2573.txt b/wiki/wikipedia/2573.txt deleted file mode 100644 index 765774b2a6716b5e3629fe756fdd9b2c20dbd4d1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2573.txt +++ /dev/null @@ -1,31 +0,0 @@ -The Moschovakis coding lemma is a lemma from descriptive set theory involving sets of real numbers under the axiom of determinacy (the principle — incompatible with choice — that every two-player integer game is determined). The lemma was developed by, and named after, the mathematician Yiannis N. Moschovakis. - -The lemma may be expressed generally as follows: - -Let Γ be a non-selfdual pointclass closed under real quantification and ∧, and ≺ a Γ-well-founded relation on ωω of rank θ ∈ ON. Let R ⊆ dom(≺) × ωω be such that (∀x∈dom(≺))(∃y)(x R y). Then there is a Γ-set A ⊆ dom(≺) × ωω which is a choice set for R, that is: - -# (∀α<θ)(∃x∈dom(≺),y)(|x|_≺ = α ∧ x A y), where |x|_≺ denotes the ≺-rank of x. - -# (∀x,y)(x A y → x R y). - -A proof runs as follows: suppose for contradiction θ is a minimal counterexample, and fix ≺, R, and a good universal set U ⊆ (ωω)3 for the Γ-subsets of (ωω)2. Easily, θ must be a limit ordinal. For δ < θ, we say u ∈ ωω codes a δ-choice set provided the property (1) holds for α ≤ δ using A = U_u and property (2) holds for A = U_u where we replace x ∈ dom(≺) with x ∈ dom(≺) ∧ |x|_≺ ≤ δ. By minimality of θ, for all δ < θ, there are δ-choice sets. - -Now, play a game where players I, II select points u,v ∈ ωω and II wins when u coding a δ1-choice set for some δ1 < θ implies v codes a δ2-choice set for some δ2 > δ1. A winning strategy for I defines a $\Sigma^1_1$ set B of reals encoding δ-choice sets for arbitrarily large δ < θ.
Define then - -x A y ↔ (∃w∈B)U(w,x,y), - -which easily works. On the other hand, suppose τ is a winning strategy for II. From the s-m-n theorem, let s:(ωω)2 → ωω be continuous such that for all ϵ, x, t, and w, - -U(s(ϵ,x),t,w) ↔ (∃y,z)(y ≺ x ∧ U(ϵ,y,z) ∧ U(z,t,w)). - -By the recursion theorem, there exists ϵ0 such that U(ϵ0,x,z) ↔ z = τ(s(ϵ0,x)). A straightforward induction on the ≺-rank of x, for x ∈ dom(≺), shows that - -(∀x∈dom(≺))(∃!z)U(ϵ0,x,z), - -and - -(∀x∈dom(≺),z)(U(ϵ0,x,z) → z encodes a choice set of ordinal ≥ |x|_≺). - -So let - -x A y ↔ (∃z∈dom(≺),w)(U(ϵ0,z,w) ∧ U(w,x,y)). diff --git a/wiki/wikipedia/2574.txt b/wiki/wikipedia/2574.txt deleted file mode 100644 index 20d4777ca3002a47b4f7476f7ebe4fa4c5b65c4e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2574.txt +++ /dev/null @@ -1,16 +0,0 @@ -In the mathematical theory of automorphic representations, a multiplicity-one theorem is a result about the representation theory of an adelic reductive algebraic group. The multiplicity in question is the number of times a given abstract group representation is realised in a certain space of square-integrable functions, given in a concrete way. - -A multiplicity one theorem may also refer to a result about the restriction of a representation of a group G to a subgroup H. In that context, the pair (G, H) is called a strong Gelfand pair. - -Let G be a reductive algebraic group over a number field K and let A denote the adeles of K. Let Z denote the centre of G and let ω be a continuous unitary character from Z(K)\Z(A) to C×. Let $L^2_0(G(K)\backslash G(\mathbf{A}), \omega)$ denote the space of cusp forms with central character ω on G(A). This space decomposes into a direct sum of Hilbert spaces -$$ -L^2_0(G(K)\backslash G(\mathbf{A}),\omega)=\widehat{\bigoplus}_{(\pi,V_\pi)}m_\pi V_\pi -$$ - -where the sum is over irreducible subrepresentations and the $m_\pi$ are non-negative integers. - -The group of adelic points of G, G(A), is said to satisfy the multiplicity-one property if any smooth irreducible admissible representation of G(A) occurs with multiplicity at most one in the space of cusp forms of central character ω, i.e. $m_\pi$ is 0 or 1 for all such π. - -The fact that the general linear group, GL(n), has the multiplicity-one property was proved by Jacquet and Langlands for n = 2 and independently by Piatetski-Shapiro and by Shalika for n > 2, using the uniqueness of the Whittaker model. Multiplicity-one also holds for SL(2), but not for SL(n) for n > 2. - -The strong multiplicity one theorem of Piatetski-Shapiro and of Jacquet and Shalika states that two cuspidal automorphic representations of the general linear group are isomorphic if their local components are isomorphic for all but a finite number of places. diff --git a/wiki/wikipedia/2575.txt b/wiki/wikipedia/2575.txt deleted file mode 100644 index a8d65802d587eb5ed485226bbd762fe5ec0e1b1c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2575.txt +++ /dev/null @@ -1,23 +0,0 @@ -The Ragsdale conjecture is a mathematical conjecture that concerns the possible arrangements of real algebraic curves embedded in the projective plane. It was proposed by Virginia Ragsdale in her dissertation in 1906 and was disproved in 1979. It has been called "the oldest and most famous conjecture on the topology of real algebraic curves". - -Ragsdale's dissertation, "On the Arrangement of the Real Branches of Plane Algebraic Curves," was published by the American Journal of Mathematics in 1906.
The dissertation was a treatment of Hilbert's sixteenth problem, which had been proposed by Hilbert in 1900, along with 22 other unsolved problems of the 19th century; it is one of the handful of Hilbert's problems that remains wholly unresolved. Ragsdale formulated a conjecture that provided an upper bound on the number of topological circles of a certain type, together with evidence supporting it. - -Ragsdale's main conjecture is as follows. - -Assume that an algebraic curve of degree 2k contains p even and n odd ovals. Ragsdale conjectured that -$$ - p \le \tfrac32 k(k-1) + 1 \quad\text{and}\quad n \le \tfrac32 k(k-1). -$$ - -She also posed the inequality -$$ - | 2(p-n)-1 | \le 3k^2 - 3k + 1, -$$ - -and showed that the inequality could not be further improved. This inequality was later proved by Petrovsky. - -The conjecture was regarded as being of very high importance in the field of real algebraic geometry for most of the twentieth century. Later, in 1980, Oleg Viro introduced a technique known as "patchworking algebraic curves" and used it to generate a counterexample to the conjecture. - -In 1993, Ilia Itenberg produced additional counterexamples to the Ragsdale conjecture, so Viro and Itenberg wrote a paper in 1996 discussing their work on disproving the conjecture using the "patchworking" technique. - -The problem of finding a sharp upper bound remains unsolved. diff --git a/wiki/wikipedia/2576.txt b/wiki/wikipedia/2576.txt deleted file mode 100644 index e0a26c86332c2765b837f59e0cf497e91a8d6e71..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2576.txt +++ /dev/null @@ -1,49 +0,0 @@ -P. Oxy. 29 (Papyrus Oxyrhynchus 29), one of the oldest surviving fragments of Euclid's Elements, a textbook used for millennia to teach proof-writing techniques. The diagram accompanies Book II, Proposition 5. - -A mathematical proof is an inferential argument for a mathematical statement, showing that the stated assumptions logically guarantee the conclusion. The argument may use other previously established statements, such as theorems; but every proof can, in principle, be constructed using only certain basic or original assumptions known as axioms, along with the accepted rules of inference. Proofs are examples of exhaustive deductive reasoning which establish logical certainty, to be distinguished from empirical arguments or non-exhaustive inductive reasoning which establish "reasonable expectation". Presenting many cases in which the statement holds is not enough for a proof, which must demonstrate that the statement is true in all possible cases. A proposition that has not been proved but is believed to be true is known as a conjecture, or a hypothesis if frequently used as an assumption for further mathematical work. - -Proofs employ logic expressed in mathematical symbols, along with natural language which usually admits some ambiguity. In most mathematical literature, proofs are written in terms of rigorous informal logic. Purely formal proofs, written fully in symbolic language without the involvement of natural language, are considered in proof theory. The distinction between formal and informal proofs has led to much examination of current and historical mathematical practice, quasi-empiricism in mathematics, and so-called folk mathematics, oral traditions in the mainstream mathematical community or in other cultures. The philosophy of mathematics is concerned with the role of language and logic in proofs, and mathematics as a language.
- -The word "proof" comes from the Latin probare (to test). Related modern words are English "probe", "probation", and "probability", Spanish probar (to smell or taste, or sometimes touch or test), Italian provare (to try), and German probieren (to try). The legal term "probity" means authority or credibility, the power of testimony to prove facts when given by persons of reputation or status. - -Plausibility arguments using heuristic devices such as pictures and analogies preceded strict mathematical proof. However, computers are now used both to prove theorems and to carry out calculations that are too long for any human or team of humans to check; the first proof of the four color theorem is an example of a computer-assisted proof. Some mathematicians are concerned that the possibility of an error in a computer program or a run-time error in its calculations calls the validity of such computer-assisted proofs into question. In practice, the chances of an error invalidating a computer-assisted proof can be reduced by incorporating redundancy and self-checks into calculations, and by developing multiple independent approaches and programs. Errors can never be completely ruled out in the case of verification of a proof by humans either, especially if the proof contains natural language and requires deep mathematical insight to uncover the potential hidden assumptions and fallacies involved. - -A statement that is neither provable nor disprovable from a set of axioms is called undecidable (from those axioms). One example is the parallel postulate, which is neither provable nor refutable from the remaining axioms of Euclidean geometry. - -Mathematicians have shown there are many statements that are neither provable nor disprovable in Zermelo–Fraenkel set theory with the axiom of choice (ZFC), the standard system of set theory in mathematics (assuming that ZFC is consistent); see List of statements undecidable in ZFC. - -Gödel's (first) incompleteness theorem shows that many axiom systems of mathematical interest will have undecidable statements. - -While early mathematicians such as Eudoxus of Cnidus did not use proofs, from Euclid to the foundational mathematics developments of the late 19th and 20th centuries, proofs were an essential part of mathematics. With the increase in computing power in the 1960s, significant work began to be done investigating mathematical objects outside of the proof-theorem framework, in experimental mathematics. Early pioneers of these methods intended the work ultimately to be embedded in a classical proof-theorem framework, e.g. the early development of fractal geometry, which was ultimately so embedded. - -Although not a formal proof, a visual demonstration of a mathematical theorem is sometimes called a "proof without words". The left-hand picture below is an example of a historic visual proof of the Pythagorean theorem in the case of the (3,4,5) triangle. - -Visual proof for the (3,4,5) triangle as in the Zhoubi Suanjing 500–200 BCE. - -Animated visual proof for the Pythagorean theorem by rearrangement. - -A second animated proof of the Pythagorean theorem.
- -Some illusory visual proofs, such as the missing square puzzle, can be constructed in a way that appears to prove a supposed mathematical fact but only does so in the presence of tiny errors (for example, supposedly straight lines which actually bend slightly) that are unnoticeable until the entire picture is closely examined, with lengths and angles precisely measured or calculated. - -An elementary proof is a proof which only uses basic techniques. More specifically, the term is used in number theory to refer to proofs that make no use of complex analysis. For some time it was thought that certain theorems, like the prime number theorem, could only be proved using "higher" mathematics. However, over time, many of these results have been reproved using only elementary techniques. - -A particular way of organising a proof using two parallel columns is often used as a mathematical exercise in elementary geometry classes in the United States. The proof is written as a series of lines in two columns. In each line, the left-hand column contains a proposition, while the right-hand column contains a brief explanation of how the corresponding proposition in the left-hand column is either an axiom, a hypothesis, or can be logically derived from previous propositions. The left-hand column is typically headed "Statements" and the right-hand column is typically headed "Reasons". - -The expression "mathematical proof" is used by lay people to refer to using mathematical methods or arguing with mathematical objects, such as numbers, to demonstrate something about everyday life, or when data used in an argument is numerical. It is sometimes also used to mean a "statistical proof" (below), especially when used to argue from data. - -"Statistical proof" from data refers to the application of statistics, data analysis, or Bayesian analysis to infer propositions regarding the probability of data. While mathematical proof is used to establish theorems in statistics, a statistical proof is usually not a mathematical proof, in that the assumptions from which probability statements are derived require empirical evidence from outside mathematics to verify. In physics, in addition to statistical methods, "statistical proof" can refer to the specialized mathematical methods of physics applied to analyze data in a particle physics experiment or observational study in physical cosmology. "Statistical proof" may also refer to raw data or a convincing diagram involving data, such as scatter plots, when the data or diagram is adequately convincing without further analysis. - -Proofs using inductive logic, while considered mathematical in nature, seek to establish propositions with a degree of certainty, which acts in a similar manner to probability, and may be less than full certainty. Inductive logic should not be confused with mathematical induction. - -Bayesian analysis uses Bayes' theorem to update a person's assessment of likelihoods of hypotheses when new evidence or information is acquired. - -Psychologism views mathematical proofs as psychological or mental objects. Mathematician philosophers, such as Leibniz, Frege, and Carnap, have variously criticized this view and attempted to develop a semantics for what they considered to be the language of thought, whereby standards of mathematical proof might be applied to empirical science.
- -Philosopher-mathematicians such as Spinoza have attempted to formulate philosophical arguments in an axiomatic manner, whereby mathematical proof standards could be applied to argumentation in general philosophy. Other mathematician-philosophers have tried to use standards of mathematical proof and reason, without empiricism, to arrive at statements outside of mathematics, but having the certainty of propositions deduced in a mathematical proof, such as Descartes' cogito argument. - -Sometimes, the abbreviation "Q.E.D." is written to indicate the end of a proof. This abbreviation stands for "quod erat demonstrandum", which is Latin for "that which was to be demonstrated". A more common alternative is to use a square or a rectangle, such as □ or ∎, known as a "tombstone" or "halmos" after its eponym Paul Halmos. Often, "which was to be shown" is verbally stated when writing "QED", "□", or "∎" during an oral presentation. diff --git a/wiki/wikipedia/2577.txt b/wiki/wikipedia/2577.txt deleted file mode 100644 index b3016de65d582b6eaf27e38ff684b4f842398767..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2577.txt +++ /dev/null @@ -1,103 +0,0 @@ -In mathematics, an orthogonal polynomial sequence is a family of polynomials such that any two different polynomials in the sequence are orthogonal to each other under some inner product. - -The most widely used orthogonal polynomials are the classical orthogonal polynomials, consisting of the Hermite polynomials, the Laguerre polynomials and the Jacobi polynomials. The Gegenbauer polynomials form the most important class of Jacobi polynomials; they include the Chebyshev polynomials, and the Legendre polynomials as special cases. - -The field of orthogonal polynomials developed in the late 19th century from a study of continued fractions by P. L. Chebyshev and was pursued by A. A. Markov and T. J. Stieltjes. They appear in a wide variety of fields: numerical analysis (quadrature rules), probability theory, representation theory (of Lie groups, quantum groups, and related objects), enumerative combinatorics, algebraic combinatorics, mathematical physics (the theory of random matrices, integrable systems, etc.), and number theory. Some of the mathematicians who have worked on orthogonal polynomials include Gábor Szegő, Sergei Bernstein, Naum Akhiezer, Arthur Erdélyi, Yakov Geronimus, Wolfgang Hahn, Theodore Seio Chihara, Mourad Ismail, Waleed Al-Salam, and Richard Askey. - -Given any non-decreasing function α on the real numbers, we can define the Lebesgue–Stieltjes integral - -\int f(x) d\alpha(x) - -of a function f. If this integral is finite for all polynomials f, we can define an inner product on pairs of polynomials f and g by - -\langle f, g \rangle = \int f(x) g(x) d\alpha(x). - -This operation is a positive semidefinite inner product on the vector space of all polynomials, and is positive definite if the function α has an infinite number of points of growth. It induces a notion of orthogonality in the usual way, namely that two polynomials are orthogonal if their inner product is zero. - -Then the sequence $(P_n)_{n=0}^{\infty}$ of orthogonal polynomials is defined by the relations - - \deg P_n = n~, \quad \langle P_m, P_n \rangle = 0 \quad \text{for} \quad m \neq n~. - -In other words, the sequence is obtained from the sequence of monomials $1, x, x^2, \ldots$ by the Gram–Schmidt process with respect to this inner product.
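As a minimal sketch of this construction (assuming, purely for illustration, the weight dα(x) = dx on [−1, 1], which yields the Legendre polynomials up to scaling), the Gram–Schmidt process can be run symbolically:

```python
import sympy as sp

x = sp.symbols('x')

def inner(f, g):
    # <f, g> = integral of f(x) g(x) dα(x), here with dα(x) = dx on [-1, 1]
    return sp.integrate(f * g, (x, -1, 1))

# Gram-Schmidt applied to the monomials 1, x, x^2, x^3
polys = []
for n in range(4):
    p = x ** n
    for q in polys:
        p -= inner(p, q) / inner(q, q) * q
    polys.append(sp.expand(p))

# [1, x, x**2 - 1/3, x**3 - 3*x/5]: proportional to the Legendre polynomials
print(polys)
```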
- -Usually the sequence is required to be orthonormal, namely, - - \langle P_n, P_n \rangle = 1, - -however, other normalisations are sometimes used. - -Sometimes we have - - d\alpha(x) = W(x) dx - -where - -W : [x_1, x_2] \to \mathbb{R} - -is a non-negative function with support on some interval [x1, x2] in the real line (where x1 = −∞ and x2 = ∞ are allowed). Such a W is called a weight function. Then the inner product is given by - -\langle f, g \rangle = \int_{x_1}^{x_2} f(x) g(x) W(x) dx. - -However, there are many examples of orthogonal polynomials where the measure dα(x) has points with non-zero measure where the function α is discontinuous, so cannot be given by a weight function W as above. - -The most commonly used orthogonal polynomials are orthogonal for a measure with support in a real interval. This includes: - -*The classical orthogonal polynomials (Jacobi polynomials, Laguerre polynomials, Hermite polynomials, and their special cases Gegenbauer polynomials, Chebyshev polynomials and Legendre polynomials). - -*The Wilson polynomials, which generalize the Jacobi polynomials. They include many orthogonal polynomials as special cases, such as the Meixner–Pollaczek polynomials, the continuous Hahn polynomials, the continuous dual Hahn polynomials, and the classical polynomials, described by the Askey scheme. - -*The Askey–Wilson polynomials introduce an extra parameter q into the Wilson polynomials. - -Discrete orthogonal polynomials are orthogonal with respect to some discrete measure. Sometimes the measure has finite support, in which case the family of orthogonal polynomials is finite, rather than an infinite sequence. The Racah polynomials are examples of discrete orthogonal polynomials, and include as special cases the Hahn polynomials and dual Hahn polynomials, which in turn include as special cases the Meixner polynomials, Krawtchouk polynomials, and Charlier polynomials. - -Meixner classified all the orthogonal Sheffer sequences: there are only Hermite, Laguerre, Charlier, Meixner, and Meixner–Pollaczek. In some sense Krawtchouk should be on this list too, but the Krawtchouk polynomials form a finite sequence. These six families correspond to the NEF-QVFs and are martingale polynomials for certain Lévy processes. - -Sieved orthogonal polynomials, such as the sieved ultraspherical polynomials, sieved Jacobi polynomials, and sieved Pollaczek polynomials, have modified recurrence relations. - -One can also consider orthogonal polynomials for some curve in the complex plane. The most important case (other than real intervals) is when the curve is the unit circle, giving orthogonal polynomials on the unit circle, such as the Rogers–Szegő polynomials. - -There are some families of orthogonal polynomials that are orthogonal on plane regions such as triangles or disks. They can sometimes be written in terms of Jacobi polynomials. For example, Zernike polynomials are orthogonal on the unit disk. - -The orthogonality between different orders of Hermite polynomials is exploited in the generalized frequency division multiplexing (GFDM) structure, where more than one symbol can be carried in each grid point of the time–frequency lattice. - -Orthogonal polynomials of one variable defined by a non-negative measure on the real line have the following properties.
- -The orthogonal polynomials Pn can be expressed in terms of the moments -$$ - m_n = \int x^n d\alpha(x) -$$ - -as follows: - - P_n(x) = c_n \det \begin{bmatrix} - -m_0 & m_1 & m_2 &\cdots & m_n \\ - -m_1 & m_2 & m_3 &\cdots & m_{n+1} \\ - -\vdots&\vdots&\vdots&\ddots& \vdots \\ - -m_{n-1} &m_n& m_{n+1} &\cdots &m_{2n-1}\\ - -1 & x & x^2 & \cdots & x^n - -\end{bmatrix}~, - -where the constants cn are arbitrary (they depend on the normalization of Pn). - -This comes directly from applying the Gram–Schmidt process to the monomials, requiring each polynomial to be orthogonal to the previous ones. For example, taking $P_0 = 1$ and a measure normalized so that $m_0 = 1$, orthogonality with $P_0$ prescribes that $P_1$ must have the form -$$ -P_1(x) = c_1 \left(x- \frac{\langle P_0, x\rangle P_0}{\langle P_0,P_0\rangle} \right) = c_1 ( x - m_1), -$$ -which can be seen to be consistent with the previously given expression with the determinant. - -The polynomials Pn satisfy a recurrence relation of the form -$$ - P_n(x) = (A_n x + B_n) P_{n-1}(x) + C_n P_{n-2}(x)~. -$$ - -See Favard's theorem for a converse result. - -If the measure dα is supported on an interval [a, b], all the zeros of Pn lie in [a, b]. Moreover, the zeros have the following interlacing property: if m < n, there is a zero of Pn between any two zeros of Pm. Electrostatic interpretations of the zeros can be given. - -From the 1980s, with the work of X. G. Viennot, J. Labelle, Y.-N. Yeh, D. Foata, and others, combinatorial interpretations were found for all the classical orthogonal polynomials. - -The Macdonald polynomials are orthogonal polynomials in several variables, depending on the choice of an affine root system. They include many other families of multivariable orthogonal polynomials as special cases, including the Jack polynomials, the Hall–Littlewood polynomials, the Heckman–Opdam polynomials, and the Koornwinder polynomials. The Askey–Wilson polynomials are the special case of Macdonald polynomials for a certain non-reduced root system of rank 1. diff --git a/wiki/wikipedia/2578.txt b/wiki/wikipedia/2578.txt deleted file mode 100644 index 458eecbc03d97cae7f9094e9db01fb9fee0e14d7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2578.txt +++ /dev/null @@ -1,119 +0,0 @@ -In complex analysis, the residue theorem, sometimes called Cauchy's residue theorem, is a powerful tool to evaluate line integrals of analytic functions over closed curves; it can often be used to compute real integrals and infinite series as well. It generalizes the Cauchy integral theorem and Cauchy's integral formula. From a geometrical perspective, it can be seen as a special case of the generalized Stokes' theorem. - -The statement is as follows: - -Let U be a simply connected open subset of the complex plane containing a finite list of points a1, ..., an, - -U0 = U \ {a1, …, an}, - -and a function f defined and holomorphic on U0. Let γ be a closed rectifiable curve in U0, and denote the winding number of γ around ak by I(γ, ak). The line integral of f around γ is equal to 2πi times the sum of residues of f at the points, each counted as many times as γ winds around the point: - -\oint_\gamma f(z) dz = 2\pi i \sum_{k=1}^n \operatorname{I}(\gamma, a_k) \operatorname{Res}( f, a_k ). - -If γ is a positively oriented simple closed curve, I(γ, ak) = 1 if ak is in the interior of γ, and 0 if not, therefore - -\oint_\gamma f(z) dz = 2\pi i \sum \operatorname{Res}( f, a_k ) - -with the sum over those ak inside γ.
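As a quick numerical illustration of the statement (a sketch only; the function, the contour, and the discretization are arbitrary choices, not taken from the article), one can compare a discretized contour integral against 2πi times the sum of residues:

```python
import numpy as np

# f(z) = (3z + 2)/(z(z - 1)) = -2/z + 5/(z - 1) has simple poles at 0 and 1
# with residues -2 and 5, so for a curve winding once around both poles the
# residue theorem predicts the integral 2*pi*i*(-2 + 5) = 6*pi*i.
f = lambda z: (3 * z + 2) / (z * (z - 1))

t = np.linspace(0.0, 2 * np.pi, 200001)
gamma = 0.5 + 2.0 * np.exp(1j * t)   # circle of radius 2 centered at 1/2
dgamma_dt = 2.0j * np.exp(1j * t)    # derivative of the parametrization

integral = np.trapz(f(gamma) * dgamma_dt, t)
print(integral, 6j * np.pi)          # both approximately 18.84956j
```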
- -The relationship of the residue theorem to Stokes' theorem is given by the Jordan curve theorem. The general plane curve γ must first be reduced to a set of simple closed curves {γi} whose total is equivalent to γ for integration purposes; this reduces the problem to finding the integral of f dz along a Jordan curve γi with interior V. The requirement that f be holomorphic on U0 = U \ {ak} is equivalent to the statement that the exterior derivative d(f dz) = 0 on U0. Thus if two planar regions V and W of U enclose the same subset {aj} of {ak}, the regions V \ W and W \ V lie entirely in U0, and hence - -\int_{V \setminus W} d(f dz) - \int_{W \setminus V} d(f dz) - -is well-defined and equal to zero. Consequently, the contour integral of f dz along γj = ∂V is equal to the sum of a set of integrals along paths λj, each enclosing an arbitrarily small region around a single aj — the residues of f (up to the conventional factor 2πi) at {aj}. Summing over {γj}, we recover the final expression of the contour integral in terms of the winding numbers {I(γ, ak)}. - -In order to evaluate real integrals, the residue theorem is used in the following manner: the integrand is extended to the complex plane and its residues are computed (which is usually easy), and a part of the real axis is extended to a closed curve by attaching a half-circle in the upper or lower half-plane, forming a semicircle. The integral over this curve can then be computed using the residue theorem. Often, the half-circle part of the integral will tend towards zero as the radius of the half-circle grows, leaving only the real-axis part of the integral, the one we were originally interested in. - -The integral - -\int_{-\infty}^\infty \frac{e^{itx}}{x^2+1}dx - -arises in probability theory when calculating the characteristic function of the Cauchy distribution. It resists the techniques of elementary calculus but can be evaluated by expressing it as a limit of contour integrals. - -Suppose t > 0 and define the contour C that goes along the real line from −a to a and then counterclockwise along a semicircle centered at 0 from a to −a. Take a to be greater than 1, so that the imaginary unit i is enclosed within the curve. Now consider the contour integral - -\int_C {f(z)}dz = \int_C \frac{e^{itz}}{z^2+1}dz. - -Since $e^{itz}$ is an entire function (having no singularities at any point in the complex plane), this function has singularities only where the denominator $z^2 + 1$ is zero. Since $z^2 + 1 = (z + i)(z - i)$, that happens only where z = i or z = −i. Only one of those points is in the region bounded by this contour. Because f(z) is - -\begin{align} - -\frac{e^{itz}}{z^2+1} & =\frac{e^{itz}}{2i}\left(\frac{1}{z-i}-\frac{1}{z+i}\right) \\ - -& =\frac{e^{itz}}{2i(z-i)} -\frac{e^{itz}}{2i(z+i)} , - -\end{align} - -the residue of f(z) at z = i is - -\operatorname{Res}_{z=i}f(z)=\frac{e^{-t}}{2i}. - -According to the residue theorem, then, we have - -\int_C f(z)dz=2\pi i\cdot\operatorname{Res}\limits_{z=i}f(z)=2\pi i \frac{e^{-t}}{2i} = \pi e^{-t}. - -The contour C may be split into a straight part and a curved arc, so that - -\int_{\mathrm{straight}} f(z)dz+\int_{\mathrm{arc}} f(z)dz=\pi e^{-t} - -and thus - -\int_{-a}^a f(z)dz =\pi e^{-t}-\int_{\mathrm{arc}} f(z)dz.
- -Using some estimations, we have - -\left|\int_{\mathrm{arc}}\frac{e^{itz}}{z^2+1}dz\right| \leq \pi a \cdot \sup_{\text{arc}} \left| \frac{e^{itz}}{z^2+1} \right| \leq \pi a \cdot \sup_{\text{arc}} \frac{1}{\left|z^2+1\right|} \leq \frac{\pi a}{a^2 - 1}, - -and - -\lim_{a \to \infty} \frac{\pi a}{a^2-1} = 0. - -The estimate on the numerator follows since t > 0, and for complex numbers z along the arc (which lies in the upper half-plane), the argument φ of z lies between 0 and π. So, - -\left|e^{itz}\right| = \left|e^{it|z|(\cos\varphi + i\sin\varphi)}\right|=\left|e^{-t|z|\sin\varphi + it|z|\cos\varphi}\right|=e^{-t|z| \sin\varphi} \le 1. - -Therefore, - -\int_{-\infty}^\infty \frac{e^{itz}}{z^2+1}dz=\pi e^{-t}. - -If t < 0 then a similar argument with an arc that winds around −i rather than i shows that - -\int_{-\infty}^\infty\frac{e^{itz}}{z^2+1}dz=\pi e^t, - -and finally we have - -\int_{-\infty}^\infty\frac{e^{itz}}{z^2+1}dz=\pi e^{-\left|t\right|}. - -(If t = 0 then the integral yields immediately to elementary calculus methods and its value is π.) - -The fact that π cot(πz) has simple poles with residue 1 at each integer can be used to compute the sum - - \sum_{n=-\infty}^\infty f(n). - -Consider, for example, $f(z) = z^{-2}$. Let ΓN be the rectangle that is the boundary of $[-N - \tfrac12, N + \tfrac12]^2$ with positive orientation, with an integer N. By the residue formula, - -\frac{1}{2 \pi i} \int_{\Gamma_N} f(z) \pi \cot(\pi z) dz = \operatorname{Res}\limits_{z = 0} \frac{\pi \cot(\pi z)}{z^2} + \sum_{n = -N \atop n\ne 0}^N n^{-2}. - -The left-hand side goes to zero as N → ∞ since the integrand has order $O(N^{-2})$ on the contour (cot(πz) being uniformly bounded there), while the length of the contour grows only as $O(N)$. On the other hand, - -\frac{z}{2} \cot\left(\frac{z}{2}\right) = 1 - B_2 \frac{z^2}{2!} + \cdots where the Bernoulli number $B_2 = \frac{1}{6}.$ - -(In fact, $\frac{z}{2} \cot\left(\frac{z}{2}\right) = \frac{iz}{1 - e^{-iz}} - \frac{iz}{2}$.) Thus, the residue $\operatorname{Res}_{z=0} \frac{\pi \cot(\pi z)}{z^2}$ is $-\frac{\pi^2}{3}$. We conclude: - -\sum_{n = 1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6} - -which is a proof of the Basel problem. - -The same trick can be used to establish the sum of the Eisenstein series: - -\pi \cot(\pi z) = \lim_{N \to \infty} \sum_{n=-N}^N (z - n)^{-1}. - -We take $f(z) = (w - z)^{-1}$ with w a non-integer and we shall show the above for w. The difficulty in this case is to show the vanishing of the contour integral at infinity. We have: - -\int_{\Gamma_N} \frac{\pi \cot(\pi z)}{z} dz = 0 - -since the integrand is an even function and so the contributions from the contour in the left-half plane and the contour in the right cancel each other out. Thus, - -\int_{\Gamma_N} f(z) \pi \cot(\pi z) dz = \int_{\Gamma_N} \left(\frac{1}{w - z} + \frac{1}{z}\right) \pi \cot(\pi z) dz - -goes to zero as N → ∞. diff --git a/wiki/wikipedia/2579.txt b/wiki/wikipedia/2579.txt deleted file mode 100644 index 7d80491bc8981ab8cc0695ce7a69b8d46e498987..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2579.txt +++ /dev/null @@ -1,18 +0,0 @@ -In number theory, the Pólya conjecture stated that "most" (i.e., 50% or more) of the natural numbers less than any given number have an odd number of prime factors. The conjecture was posited by the Hungarian mathematician George Pólya in 1919, and proved false in 1958 by C. Brian Haselgrove. - -The size of the smallest counterexample is often used to show how a conjecture can be true for many cases, and still be false, providing an illustration for the strong law of small numbers.
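The "true for many cases, false eventually" character of the conjecture is easy to probe empirically. The following is a minimal sketch, not from the article; the bound 10^4 is an arbitrary small choice, far below the counterexample region near 9.06 × 10^8, and the running tally it computes is exactly the summatory Liouville function defined just below.

```python
from sympy import factorint

def big_omega(k):
    # Omega(k): number of prime factors of k counted with multiplicity
    return sum(factorint(k).values())

# Running total equals L(n) = sum_{k<=n} (-1)^Omega(k); L(n) <= 0 means the
# odd-Omega numbers up to n are at least as numerous as the even-Omega ones.
total, holds = 0, True
for n in range(1, 10**4 + 1):
    total += (-1) ** big_omega(n)
    if n > 1 and total > 0:
        holds = False

print(holds)  # True: the conjecture survives every n in this small range
```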
- -The Pólya conjecture states that for any n (> 1), if the natural numbers less than or equal to n (excluding 0) are partitioned into those with an odd number of prime factors, and those with an even number of prime factors, then the former set has at least as many members as the latter set. (Repeated prime factors are counted the requisite number of times: thus $18 = 2^1 \times 3^2$ has 1 + 2 = 3 prime factors, i.e. an odd number, while $60 = 2^2 \times 3 \times 5$ has 4 prime factors, i.e. an even number.) - -Equivalently, it can be stated in terms of the summatory Liouville function, with the conjecture being that -$$ -L(n) = \sum_{k=1}^n \lambda(k) \leq 0 -$$ - -for all n > 1. Here, $\lambda(k) = (-1)^{\Omega(k)}$ is positive if the number of prime factors of the integer k is even, and is negative if it is odd. The big Omega function counts the total number of prime factors of an integer, with multiplicity. - -The Pólya conjecture was disproved by C. Brian Haselgrove in 1958. He showed that the conjecture has a counterexample, which he estimated to be around $1.845 \times 10^{361}$. - -An explicit counterexample, n = 906,180,359, was given by R. Sherman Lehman in 1960; the smallest counterexample is n = 906,150,257, found by Minoru Tanaka in 1980. - -The conjecture fails to hold for most values of n in the region of 906,150,257 ≤ n ≤ 906,488,079. In this region, the summatory Liouville function reaches a maximum value of 829 at n = 906,316,571. diff --git a/wiki/wikipedia/2580.txt b/wiki/wikipedia/2580.txt deleted file mode 100644 index ad95eb80573145cbf46fb07667e79a2de17d18c7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2580.txt +++ /dev/null @@ -1,13 +0,0 @@ -The intersecting secant theorem or just secant theorem describes the relation of line segments created by two intersecting secants and the associated circle. - -For two lines AD and BC that intersect each other at P and intersect a circle at A and D and at B and C respectively, the following equation holds: -$$ -|PA|\cdot|PD|=|PB|\cdot|PC| -$$ - -The theorem follows directly from the fact that the triangles PAC and PBD are similar. They share the angle $\angle DPC$, and $\angle ADB=\angle ACB$ holds because these are inscribed angles over AB. The similarity yields an equation for ratios which is equivalent to the equation of the theorem given above: -$$ -\frac{PA}{PC}=\frac{PB}{PD} \Leftrightarrow |PA|\cdot|PD|=|PB|\cdot|PC| -$$ - -Next to the intersecting chords theorem and the tangent–secant theorem, the intersecting secants theorem represents one of the three basic cases of a more general theorem about two intersecting lines and a circle, the power of a point theorem. diff --git a/wiki/wikipedia/258.txt b/wiki/wikipedia/258.txt deleted file mode 100644 index 6c2d2ecb38970e1b70aad00518b3b442d7ca5153..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/258.txt +++ /dev/null @@ -1,11 +0,0 @@ -In mathematics, abstract nonsense, general abstract nonsense, generalized abstract nonsense, and general nonsense are terms used by mathematicians to describe abstract methods related to category theory and homological algebra. More generally, "abstract nonsense" may refer to a proof that relies on category-theoretic methods, or even to the study of category theory itself. - -Roughly speaking, category theory is the study of the general form, that is, categories of mathematical theories, without regard to their content. As a result, mathematical proofs that rely on category-theoretic ideas often seem out-of-context, somewhat akin to a non sequitur. Authors sometimes dub these proofs "abstract nonsense" as a light-hearted way of alerting readers to their abstract nature. Labeling an argument "abstract nonsense" is usually not intended to be derogatory. For example, one might say that "By abstract nonsense, products are unique up to isomorphism when they exist", instead of arguing about how these isomorphisms can be derived from the universal property that defines the product. This allows one to skip proof details that can be considered trivial or not providing much insight, focusing instead on genuinely innovative parts of a larger proof. - -The term predates the foundation of category theory as a subject itself. Referring to a joint paper with Samuel Eilenberg that introduced the notion of a "category" in 1942, Saunders Mac Lane wrote the subject was 'then called "general abstract nonsense"'. The term is often used to describe the application of category theory and its techniques to less abstract domains. - -The term is believed to have been coined by the mathematician Norman Steenrod, himself one of the developers of the categorical point of view. - -Consider the example of showing that a 3-manifold M admits a map to the 2-sphere that is non-trivial (i.e.
non-homotopic to a constant map), when the 2nd Betti number of M is positive. This means the 2nd cohomology group has positive rank (by the universal coefficient theorem for cohomology), so it has a non-zero element. The properties of Eilenberg–MacLane spaces then give a corresponding non-trivial map f from M to the infinite-dimensional complex projective space CP^∞, since it is a K(Z,2) Eilenberg–MacLane space. The space CP^∞ can be realized as a CW complex with exactly one cell in each even dimension and no cells in odd dimension, while M can be realized with no cells in dimensions above 3, so by the cellular approximation theorem there is a map homotopic to f that maps M into the 3-skeleton of CP^∞, which is the 2-sphere. - -Though this proof establishes the truth of the statement in question, the proof technique has little to do with the topology or geometry of the 2-sphere, let alone 3-manifolds, as it relies on more general categorical principles. Because of the reliance on these abstract principles, the result is independent of subtler geometric details, so offers little geometric insight into the nature of such a map. On the other hand, the proof is surprisingly short and clean, and a "hands-on" approach involving the explicit construction of such a map would be potentially laborious. diff --git a/wiki/wikipedia/2581.txt b/wiki/wikipedia/2581.txt deleted file mode 100644 index 3e3a34c5225cd032d30e6f762a1549bf3cbde702..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2581.txt +++ /dev/null @@ -1,45 +0,0 @@ -{{infobox graph - -| name = Wheel graph - -| image = -| image_caption = Several examples of wheel graphs - -| girth = 3 - -| diameter = 2 if n > 4
    1 if n = 4 - -| vertices = n - -| edges = 2(n − 1) - -| chromatic_number = 4 if n is even
3 if n is odd - -| chromatic_index = - -| spectrum = $\{2\cos(2k\pi/(n-1))^{1};\ k = 1, \ldots, n - 2\} \cup \{1 \pm \sqrt{n}\}$ - -| properties = Hamiltonian
    Self-dual
Planar - -| notation = $W_n$ - -}} - -In the mathematical discipline of graph theory, a wheel graph is a graph formed by connecting a single universal vertex to all vertices of a cycle. A wheel graph with n vertices can also be defined as the 1-skeleton of an (n-1)-gonal pyramid. Some authors write $W_n$ to denote a wheel graph with n vertices (n ≥ 4); other authors instead use $W_n$ to denote a wheel graph with n+1 vertices (n ≥ 3), which is formed by connecting a single vertex to all vertices of a cycle of length n. In the rest of this article we use the former notation. - -Given a vertex set of {1, 2, 3, …, v}, the edge set of the wheel graph can be represented in set-builder notation by {{1, 2}, {1, 3}, …, {1, v}, {2, 3}, {3, 4}, …, {v − 1, v}, {v, 2}}. - -Wheel graphs are planar graphs, and as such have a unique planar embedding. More specifically, every wheel graph is a Halin graph. They are self-dual: the planar dual of any wheel graph is an isomorphic graph. Every maximal planar graph, other than $K_4 = W_4$, contains as a subgraph either $W_5$ or $W_6$. - -There is always a Hamiltonian cycle in the wheel graph and there are $n^2-3n+3$ cycles in $W_n$. - -For odd values of n, $W_n$ is a perfect graph with chromatic number 3: the vertices of the cycle can be given two colors, and the center vertex given a third color. For even n, $W_n$ has chromatic number 4, and (when n ≥ 6) is not perfect. $W_7$ is the only wheel graph that is a unit distance graph in the Euclidean plane. - -The chromatic polynomial of the wheel graph $W_n$ is: -$$ -P_{W_n}(x) = x((x - 2)^{(n - 1)} - (-1)^n(x - 2)). -$$ - -In matroid theory, two particularly important special classes of matroids are the wheel matroids and the whirl matroids, both derived from wheel graphs. The k-wheel matroid is the graphic matroid of a wheel $W_{k+1}$, while the k-whirl matroid is derived from the k-wheel by considering the outer cycle of the wheel, as well as all of its spanning trees, to be independent. - -The wheel $W_6$ supplied a counterexample to a conjecture of Paul Erdős on Ramsey theory: he had conjectured that the complete graph has the smallest Ramsey number among all graphs with the same chromatic number, but Faudree and McKay (1993) showed $W_6$ has Ramsey number 17 while the complete graph with the same chromatic number, $K_4$, has Ramsey number 18. That is, for every 17-vertex graph G, either G or its complement contains $W_6$ as a subgraph, while neither the 17-vertex Paley graph nor its complement contains a copy of $K_4$. diff --git a/wiki/wikipedia/2582.txt b/wiki/wikipedia/2582.txt deleted file mode 100644 index a66fbdd2ec3d7494fcb083a4513910b2b61fdd09..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2582.txt +++ /dev/null @@ -1,23 +0,0 @@ -In geometry, the tennis ball theorem states that any smooth curve on the surface of a sphere that divides the sphere into two equal-area subsets without touching or crossing itself must have at least four inflection points, points at which the curve does not consistently bend to only one side of its tangent line. - -The tennis ball theorem was first published under this name by Vladimir Arnold in 1994, and is often attributed to Arnold, but a closely related result appears earlier in a 1968 paper by Beniamino Segre, and the tennis ball theorem itself is a special case of a theorem in a 1977 paper by Joel L. Weiner.
The name of the theorem comes from the standard shape of a tennis ball, whose seam forms a curve that meets the conditions of the theorem; the same kind of curve is also used for the seams on baseballs. - -The tennis ball theorem can be generalized to any curve that is not contained in a closed hemisphere. A centrally symmetric curve on the sphere must have at least six inflection points. The theorem is analogous to the four-vertex theorem according to which any smooth closed plane curve has at least four points of extreme curvature. - -Precisely, an inflection point of a doubly continuously differentiable ($C^2$) curve on the surface of a sphere is a point $p$ with the following property: let $I$ be the connected component containing $p$ of the intersection of the curve with its tangent great circle at $p$. (For most curves $I$ will just be $p$ itself, but it could also be an arc of the great circle.) Then, for $p$ to be an inflection point, every neighborhood of $I$ must contain points of the curve that belong to both of the hemispheres separated by this great circle. - -The theorem states that every $C^2$ curve that partitions the sphere into two equal-area components has at least four inflection points in this sense. - -The tennis ball and baseball seams can be modeled mathematically by a curve made of four semicircular arcs, with exactly four inflection points where pairs of these arcs meet. - -A great circle also bisects the sphere's surface, and has infinitely many inflection points, one at each point of the curve. However, the condition that the curve divide the sphere's surface area equally is a necessary part of the theorem. Other curves that do not divide the area equally, such as circles that are not great circles, may have no inflection points at all. - -One proof of the tennis ball theorem uses the curve-shortening flow, a process for continuously moving the points of the curve towards their local centers of curvature. Applying this flow to the given curve can be shown to preserve the smoothness and area-bisecting property of the curve. Additionally, as the curve flows, its number of inflection points never increases. This flow eventually causes the curve to transform into a great circle, and the convergence to this circle can be approximated by a Fourier series. Because curve-shortening does not change any other great circle, the first term in this series is zero, and combining this with a theorem of Sturm on the number of zeros of Fourier series shows that, as the curve nears this great circle, it has at least four inflection points. Therefore, the original curve also has at least four inflection points. - -A generalization of the tennis ball theorem applies to any simple smooth curve on the sphere that is not contained in a closed hemisphere. As in the original tennis ball theorem, such curves must have at least four inflection points. If a curve on the sphere is centrally symmetric, it must have at least six inflection points. - -A closely related theorem of Segre also concerns simple closed spherical curves, on spheres embedded into three-dimensional space. If, for such a curve, $o$ is any point of the three-dimensional convex hull of a smooth curve on the sphere that is not a vertex of the curve, then at least four points of the curve have osculating planes passing through $o$. In particular, for a curve not contained in a hemisphere, this theorem can be applied with $o$ at the center of the sphere.
Every inflection point of a spherical curve has an osculating plane that passes through the center of the sphere, but this might also be true of some other points. - -This theorem is analogous to the four-vertex theorem, that every smooth simple closed curve in the plane has four vertices (extreme points of curvature). It is also analogous to a theorem of August Ferdinand Möbius that every non-contractible smooth curve in the projective plane has at least three inflection points. diff --git a/wiki/wikipedia/2583.txt b/wiki/wikipedia/2583.txt deleted file mode 100644 index 997b19ca5d230b0cc6770f452167b146d57f23b2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2583.txt +++ /dev/null @@ -1,177 +0,0 @@ -In mathematics, the dot product or scalar product is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors), and returns a single number. In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used. It is often called "the" inner product (or rarely projection product) of Euclidean space, even though it is not the only inner product that can be defined on Euclidean space (see Inner product space for more). - -Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them. These definitions are equivalent when using Cartesian coordinates. In modern geometry, Euclidean spaces are often defined by using vector spaces. In this case, the dot product is used for defining lengths (the length of a vector is the square root of the dot product of the vector by itself) and angles (the cosine of the angle of two vectors is the quotient of their dot product by the product of their lengths). - -The name "dot product" is derived from the centered dot " · ", that is often used to designate this operation; the alternative name "scalar product" emphasizes that the result is a scalar, rather than a vector, as is the case for the vector product in three-dimensional space. - -The dot product may be defined algebraically or geometrically. The geometric definition is based on the notions of angle and distance (magnitude of vectors). The equivalence of these two definitions relies on having a Cartesian coordinate system for Euclidean space. - -In modern presentations of Euclidean geometry, the points of space are defined in terms of their Cartesian coordinates, and Euclidean space itself is commonly identified with the real coordinate space Rn. In such a presentation, the notions of length and angles are defined by means of the dot product. The length of a vector is defined as the square root of the dot product of the vector by itself, and the cosine of the (non-oriented) angle of two vectors of length one is defined as their dot product. So the equivalence of the two definitions of the dot product is a part of the equivalence of the classical and the modern formulations of Euclidean geometry. - -The dot product of two vectors $\mathbf{a} = [a_1, a_2, \ldots, a_n]$ and $\mathbf{b} = [b_1, b_2, \ldots, b_n]$ is defined as: -$$ -\mathbf{\color{red}a}\cdot\mathbf{\color{blue}b}=\sum_{i=1}^n {\color{red}a}_i{\color{blue}b}_i={\color{red}a}_1{\color{blue}b}_1+{\color{red}a}_2{\color{blue}b}_2+\cdots+{\color{red}a}_n{\color{blue}b}_n -$$ - -where Σ denotes summation and n is the dimension of the vector space.
For instance, in three-dimensional space, the dot product of vectors [1, 3, −5] and [4, −2, −1] is: - - \begin{align} - -\ [{\color{red}1, 3, -5}] \cdot [{\color{blue}4, -2, -1}] &= ({\color{red}1} \times {\color{blue}4}) + ({\color{red}3}\times{\color{blue}-2}) + ({\color{red}-5}\times{\color{blue}-1}) \\ - -&= 4 - 6 + 5 \\ - -&= 3 - -\end{align} - - - -If vectors are identified with row matrices, the dot product can also be written as a matrix product -$$ -\mathbf{\color{red}a} \cdot \mathbf{\color{blue}b} = \mathbf{\color{red}a}\mathbf{\color{blue}b}^\mathsf T, -$$ - -where $\mathbf{\color{blue}b}^\mathsf T$ denotes the transpose of $\mathbf{\color{blue}b}$. - -Expressing the above example in this way, a 1 × 3 matrix (row vector) is multiplied by a 3 × 1 matrix (column vector) to get a 1 × 1 matrix that is identified with its unique entry: - - - -\begin{bmatrix} - -\color{red}1 & \color{red}3 & \color{red}-5 - -\end{bmatrix} - -\begin{bmatrix} - -\color{blue}4 \\ \color{blue}-2 \\ \color{blue}-1 - -\end{bmatrix} = \color{purple}3 . - -In Euclidean space, a Euclidean vector is a geometric object that possesses both a magnitude and a direction. A vector can be pictured as an arrow. Its magnitude is its length, and its direction is the direction to which the arrow points. The magnitude of a vector a is denoted by $ \left\| \mathbf{a} \right\| $. The dot product of two Euclidean vectors a and b is defined by -$$ -\mathbf{a} \cdot \mathbf{b} = \left\| \mathbf{a} \right\| \left\| \mathbf{b} \right\| \cos \theta , -$$ - -where θ is the angle between a and b. The dot product has the following properties. - -# Scalar multiplication: - -#: c (a ⋅ b) = (c a) ⋅ b = a ⋅ (c b), or one can say that "the dot product is associative with respect to scalar multiplication". - -# Orthogonal: - -#: Two non-zero vectors a and b are orthogonal if and only if a ⋅ b = 0. - -# No cancellation: - -#: Unlike multiplication of ordinary numbers, where if ab = ac, then b always equals c unless a is zero, the dot product does not obey the cancellation law: - -#: If a ⋅ b = a ⋅ c and a ≠ 0, then we can write a ⋅ (b − c) = 0 by the distributive law; the result above says this just means that a is perpendicular to (b − c), which still allows (b − c) ≠ 0, and therefore allows b ≠ c. - -# Product Rule: - -#: If a and b are (vector-valued) differentiable functions, then the derivative (denoted by a prime ′) of a ⋅ b is given by the rule (a ⋅ b)′ = a′ ⋅ b + a ⋅ b′. - -Given two vectors a and b separated by angle θ, they form a triangle with a third side c = a − b. The dot product of this with itself is: - - - -\begin{align} - -\mathbf{\color{orange}c} \cdot \mathbf{\color{orange}c} & = ( \mathbf{\color{red}a} - \mathbf{\color{blue}b}) \cdot ( \mathbf{\color{red}a} - \mathbf{\color{blue}b} ) \\ - -& = \mathbf{\color{red}a} \cdot \mathbf{\color{red}a} - \mathbf{\color{red}a} \cdot \mathbf{\color{blue}b} - \mathbf{\color{blue}b} \cdot \mathbf{\color{red}a} + \mathbf{\color{blue}b} \cdot \mathbf{\color{blue}b} \\ - -& = \mathbf{\color{red}a}^2 - \mathbf{\color{red}a} \cdot \mathbf{\color{blue}b} - \mathbf{\color{red}a} \cdot \mathbf{\color{blue}b} + \mathbf{\color{blue}b}^2 \\ - -& = \mathbf{\color{red}a}^2 - 2 \mathbf{\color{red}a} \cdot \mathbf{\color{blue}b} + \mathbf{\color{blue}b}^2 \\ - -\mathbf{\color{orange}c}^2 & = \mathbf{\color{red}a}^2 + \mathbf{\color{blue}b}^2 - 2 \mathbf{\color{red}a} \mathbf{\color{blue}b} \cos \mathbf{\color{purple}\theta} \\ - -\end{align} - - - -which is the law of cosines. - -There are two ternary operations involving dot product and cross product.
- -The scalar triple product of three vectors is defined as -$$ - \mathbf{a} \cdot ( \mathbf{b} \times \mathbf{c} ) = \mathbf{b} \cdot ( \mathbf{c} \times \mathbf{a} )=\mathbf{c} \cdot ( \mathbf{a} \times \mathbf{b} ). -$$ - -Its value is the determinant of the matrix whose columns are the Cartesian coordinates of the three vectors. It is the signed volume of the parallelepiped defined by the three vectors, and is isomorphic to the three-dimensional special case of the exterior product of three vectors. - -The vector triple product is defined by -$$ - \mathbf{a} \times ( \mathbf{b} \times \mathbf{c} ) = \mathbf{b} ( \mathbf{a} \cdot \mathbf{c} ) - \mathbf{c} ( \mathbf{a} \cdot \mathbf{b} ). -$$ - -Applications of the dot product in physics include: - -* Mechanical work is the dot product of force and displacement vectors, - -* Power is the dot product of force and velocity. - -For vectors with complex entries, using the given definition of the dot product would lead to quite different properties. For instance, the dot product of a vector with itself would be an arbitrary complex number, and could be zero without the vector being the zero vector (such vectors are called isotropic); this in turn would have consequences for notions like length and angle. Properties such as the positive-definite norm can be salvaged at the cost of giving up the symmetric and bilinear properties of the scalar product, through the alternative definition -$$ - \mathbf{a} \cdot \mathbf{b} = \sum_i {{a_i}\overline{b_i}} , -$$ - -where $\overline{b_i}$ is the complex conjugate of $b_i$. When vectors are represented by row vectors, the dot product can be expressed as a matrix product involving a conjugate transpose, denoted with the superscript H: -$$ - \mathbf{a} \cdot \mathbf{b} = \mathbf{b}^\mathsf{H} \mathbf{a} . -$$ - -In the case of vectors with real components, this definition is the same as in the real case. The scalar product of any vector with itself is a non-negative real number, and it is nonzero except for the zero vector. However, the complex scalar product is sesquilinear rather than bilinear, as it is conjugate linear and not linear in a. The scalar product is not symmetric, since -$$ - \mathbf{a} \cdot \mathbf{b} = \overline{\mathbf{b} \cdot \mathbf{a}} . -$$ - -The angle between two complex vectors is then given by -$$ - \cos \theta = \frac{\operatorname{Re} ( \mathbf{a} \cdot \mathbf{b} )}{ \left\| \mathbf{a} \right\| \left\| \mathbf{b} \right\| } . -$$ - -The complex scalar product leads to the notions of Hermitian forms and general inner product spaces, which are widely used in mathematics and physics. - -The self dot product of a complex vector $\mathbf{a} \cdot \mathbf{a} = \mathbf{a}^\mathsf{H} \mathbf{a} $, involving the conjugate transpose of a row vector, is a generalization of the scalar absolute square of a complex number. - -The inner product generalizes the dot product to abstract vector spaces over a field of scalars, being either the field of real numbers $ \R $ or the field of complex numbers $ \Complex $. It is usually denoted using angular brackets by $ \left\langle \mathbf{a} , \mathbf{b} \right\rangle $. - -The inner product of two vectors over the field of complex numbers is, in general, a complex number, and is sesquilinear instead of bilinear. An inner product space is a normed vector space, and the inner product of a vector with itself is real and positive-definite. - -The dot product is defined for vectors that have a finite number of entries. Thus these vectors can be regarded as discrete functions: a length-n vector u is, then, a function with domain $\{k \in \mathbb{N} \mid 1 \le k \le n\}$, and $u_i$ is a notation for the image of $i$ by the function/vector $u$.
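As a quick numerical illustration before the passage to functions, here is a minimal Python sketch of the real and complex definitions above (NumPy is an assumed dependency here; the variable names are chosen for exposition):

```python
import numpy as np

# Real case: the worked example from the text above.
a = np.array([1, 3, -5])
b = np.array([4, -2, -1])
assert np.dot(a, b) == 3            # (1)(4) + (3)(-2) + (-5)(-1) = 4 - 6 + 5 = 3

# Complex case: conjugate the second factor, as in a.b = sum_i a_i * conj(b_i),
# so that the self product a.a is a non-negative real number.
u = np.array([1 + 2j, 3 - 1j])
v = np.array([2 - 1j, 1j])
dot_uv = np.sum(u * np.conj(v))
assert np.isclose(dot_uv, np.vdot(v, u))   # np.vdot conjugates its *first* argument

self_product = np.sum(u * np.conj(u))      # squared Euclidean norm of u
assert self_product.imag == 0 and self_product.real >= 0
```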
This notion can be generalized to continuous functions: just as the inner product on vectors uses a sum over corresponding components, the inner product on functions is defined as an integral over some interval a ≤ x ≤ b (also denoted [a, b]): -$$ - \left\langle u , v \right\rangle = \int_a^b u(x) v(x) d x -$$ - -Generalizing further to complex functions ψ(x) and χ(x), by analogy with the complex inner product above, gives -$$ - \left\langle \psi , \chi \right\rangle = \int_a^b \psi(x) \overline{\chi(x)} d x . -$$ - -Inner products can have a weight function (i.e., a function which weights each term of the inner product with a value). Explicitly, the inner product of functions $u(x)$ and $v(x)$ with respect to the weight function $r(x)>0$ is -$$ - \left\langle u , v \right\rangle = \int_a^b r(x) u(x) v(x) d x. -$$ - -A double-dot product for matrices is the Frobenius inner product, which is analogous to the dot product on vectors. It is defined as the sum of the products of the corresponding components of two matrices A and B of the same size: -$$ - \mathbf{A} : \mathbf{B} = \sum_i \sum_j A_{ij} \overline{B_{ij}} = \operatorname{tr} ( \mathbf{B}^\mathsf{H} \mathbf{A} ) = \operatorname{tr} ( \mathbf{A} \mathbf{B}^\mathsf{H} ) . -$$ -For real matrices, -$$ - \mathbf{A} : \mathbf{B} = \sum_i \sum_j A_{ij} B_{ij} = \operatorname{tr} ( \mathbf{B}^\mathsf{T} \mathbf{A} ) = \operatorname{tr} ( \mathbf{A} \mathbf{B}^\mathsf{T} ) = \operatorname{tr} ( \mathbf{A}^\mathsf{T} \mathbf{B} ) = \operatorname{tr} ( \mathbf{B} \mathbf{A}^\mathsf{T} ) . -$$ - -Writing a matrix as a dyadic, we can define a different double-dot product; however, it is not an inner product. - -The inner product between a tensor of order n and a tensor of order m is a tensor of order n + m − 2, see Tensor contraction for details. - -The straightforward algorithm for calculating a floating-point dot product of vectors can suffer from catastrophic cancellation. To avoid this, approaches such as the Kahan summation algorithm are used. - -A dot product function is included in: - -* BLAS level 1 real SDOT, DDOT; complex CDOTU, ZDOTU = X^T * Y, CDOTC ZDOTC = X^H * Y - -* Matlab as A' * B or conj(transpose(A)) * B or sum( conj(A) .* B) - -* GNU Octave as sum (conj (X) .* Y, dim) - -* Intel oneAPI Math Kernel Library real p?dot dot = sub(x)'*sub(y); complex p?dotc dotc = conjg(sub(x)')*sub(y) diff --git a/wiki/wikipedia/2584.txt b/wiki/wikipedia/2584.txt deleted file mode 100644 index db80a071ed8e3e2d4f5bce0dba36debe4d1d1cec..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2584.txt +++ /dev/null @@ -1,106 +0,0 @@ -In computational complexity theory, the time hierarchy theorems are important statements about time-bounded computation on Turing machines. Informally, these theorems say that given more time, a Turing machine can solve more problems. For example, there are problems that can be solved with $n^2$ time but not $n$ time. - -The time hierarchy theorem for deterministic multi-tape Turing machines was first proven by Richard E. Stearns and Juris Hartmanis in 1965. It was strengthened a year later when F. C. Hennie and Richard E. Stearns improved the efficiency of the universal Turing machine. Consequent to the theorem, for every deterministic time-bounded complexity class, there is a strictly larger time-bounded complexity class, and so the time-bounded hierarchy of complexity classes does not completely collapse.
More precisely, the time hierarchy theorem for deterministic Turing machines states that for all time-constructible functions f(n), -$$ -\mathsf{DTIME}\left(o\left(\frac{f(n)}{\log f(n)}\right)\right) \subsetneq \mathsf{DTIME}(f(n)) -$$. - -The time hierarchy theorem for nondeterministic Turing machines was originally proven by Stephen Cook in 1972. It was improved to its current form via a complex proof by Joel Seiferas, Michael Fischer, and Albert Meyer in 1978. Finally in 1983, Stanislav Žák achieved the same result with the simple proof taught today. The time hierarchy theorem for nondeterministic Turing machines states that if g(n) is a time-constructible function, and f(n+1) = o(g(n)), then -$$ -\mathsf{NTIME}(f(n)) \subsetneq \mathsf{NTIME}(g(n)) -$$. - -The analogous theorems for space are the space hierarchy theorems. A similar theorem is not known for time-bounded probabilistic complexity classes, unless the class also has one bit of advice. - -Both theorems use the notion of a time-constructible function. A function $f:\mathbb{N}\rightarrow\mathbb{N}$ is time-constructible if there exists a deterministic Turing machine such that for every $n\in\mathbb{N}$, if the machine is started with an input of n ones, it will halt after precisely f(n) steps. All polynomials with non-negative integer coefficients are time-constructible, as are exponential functions such as $2^n$. - -We need to prove that some time class TIME(g(n)) is strictly larger than some time class TIME(f(n)). We do this by constructing a machine which cannot be in TIME(f(n)), by diagonalization. We then show that the machine is in TIME(g(n)), using a simulator machine.
Time Hierarchy Theorem. If f(n) is a time-constructible function, then there exists a decision problem which cannot be solved in worst-case deterministic time f(n) but can be solved in worst-case deterministic time $f(n)^2$. In other words, -$$ -\mathsf{DTIME}(f(n)) \subsetneq \mathsf{DTIME}\left (f(n)^2 \right). -$$
    - -Note 1. f(n) is at least n, since smaller functions are never time-constructible.
Note 2. Even more generally, it can be shown that if f(n) is time-constructible, then -$$ -\mathsf{DTIME}\left(o\left(\frac{f(n)}{\log f(n)}\right)\right)\subsetneq \mathsf{DTIME}\left (f(n) \right). -$$ - -For example, there are problems solvable in time $n^2$ but not time $n$, since $n$ is in -$$ -o\left(\frac{n^2}{\log {n^2}}\right). -$$ - -We include here a proof that DTIME(f(n)) is a strict subset of DTIME($f(2n + 1)^3$) as it is simpler. See the bottom of this section for information on how to extend the proof to $f(n)^2$. - -To prove this, we first define the language of the encodings of machines and their inputs which cause them to halt within time f: -$$ - H_f = \left\{ ([M], x)\ |\ M \ \text{accepts}\ x \ \text{in}\ f(|x|) \ \text{steps} \right\}. -$$ - -Notice here that this is a time-class. It is the set of pairs ([M], x) of machines and inputs to those machines such that the machine M accepts x within f(|x|) steps. - -Here, M is a deterministic Turing machine, and x is its input (the initial contents of its tape). [M] denotes an input that encodes the Turing machine M. Let m be the size of the tuple ([M], x). - -We know that we can decide membership of $H_f$ by way of a deterministic Turing machine R, that simulates M for f(|x|) steps by first calculating f(|x|) and then writing out a row of 0s of that length, and then using this row of 0s as a "clock" or "counter" to simulate M for at most that many steps. At each step, the simulating machine needs to look through the definition of M to decide what the next action would be. It is safe to say that this takes at most $f(m)^3$ operations (since it is known that a simulation of a machine of time complexity T(n) can be achieved in time $O(T \log T)$), so we have that: -$$ - H_f \in \mathsf{TIME}\left(f(m)^3\right). -$$ - -The rest of the proof will show that -$$ - H_f \notin \mathsf{TIME}\left(f\left( \left\lfloor \frac{m}{2} \right\rfloor \right)\right) -$$ - -so that if we substitute 2n + 1 for m, we get the desired result. Let us assume that $H_f$ is in this time complexity class, and we will reach a contradiction. - -If $H_f$ is in this time complexity class, then there exists a machine K which, given some machine description [M] and input x, decides whether the tuple ([M], x) is in $H_f$ within -$$ -\mathsf{TIME}\left(f\left( \left\lfloor \frac{m}{2} \right\rfloor \right)\right). -$$ - -We use this K to construct another machine, N, which takes a machine description [M] and runs K on the tuple ([M], [M]), i.e., M is simulated on its own code by K, and then N accepts if K rejects, and rejects if K accepts. - -If n is the length of the input to N, then m (the length of the input to K) is twice n plus some delimiter symbol, so m = 2n + 1. N's running time is thus -$$ - \mathsf{TIME}\left(f\left( \left\lfloor \frac{m}{2} \right\rfloor \right)\right) = \mathsf{TIME}\left(f\left( \left\lfloor \frac{2n+1}{2} \right\rfloor \right)\right) = \mathsf{TIME}\left(f(n)\right). -$$ - -Now if we feed [N] as input into N itself (which makes n the length of [N]) and ask the question whether N accepts its own description as input, we get: - -* If N accepts [N] (which we know it does in at most f(n) operations since K halts on ([N], [N]) in f(n) steps), this means that K rejects ([N], [N]), so ([N], [N]) is not in $H_f$, and so by the definition of $H_f$, this implies that N does not accept [N] in f(n) steps. Contradiction.
* If N rejects [N] (which we know it does in at most f(n) operations), this means that K accepts ([N], [N]), so ([N], [N]) is in $H_f$, and thus N does accept [N] in f(n) steps. Contradiction. - -We thus conclude that the machine K does not exist, and so -$$ - H_f \notin \mathsf{TIME}\left(f\left( \left\lfloor \frac{m}{2} \right\rfloor \right)\right). -$$ - -The reader may have realised that the proof is simpler because we have chosen a simple Turing machine simulation for which we can be certain that -$$ - H_f \in \mathsf{TIME}(f(m)^3). -$$ - -It has been shown that a more efficient model of simulation exists which establishes that -$$ - H_f \in \mathsf{TIME}(f(m) \log f(m)) -$$ - -but since this model of simulation is rather involved, it is not included here. - -Observe, however, that the same argument as above then implies that $DTIME(r(f(2m+1)))$ is not contained within $DTIME(f(m))$ if $r(f(|M|+|x|))$ is a function which gives a time within which it is possible to simulate a machine $M$ with time complexity $f(n)$ on input $x$. - -If g(n) is a time-constructible function, and f(n+1) = o(g(n)), then there exists a decision problem which cannot be solved in non-deterministic time f(n) but can be solved in non-deterministic time g(n). In other words, the complexity class NTIME(f(n)) is a strict subset of NTIME(g(n)). - -The time hierarchy theorems guarantee that the deterministic and non-deterministic versions of the exponential hierarchy are genuine hierarchies: in other words P ⊊ EXPTIME ⊊ 2-EXP ⊊ ... and NP ⊊ NEXPTIME ⊊ 2-NEXP ⊊ .... - -For example, $\mathsf{P} \subsetneq \mathsf{EXPTIME}$ since $\mathsf{P} \subseteq \mathsf{DTIME} (2^n)\subsetneq \mathsf{DTIME} (2^{2n}) \subseteq \mathsf{EXPTIME}$. Indeed, $\mathsf{DTIME}\left(2^n\right) \subseteq \mathsf{DTIME}\left(o\left(\frac{2^{2n}}{2n}\right)\right) \subsetneq \mathsf{DTIME}(2^{2n})$ from the time hierarchy theorem. - -The theorem also guarantees that there are problems in P requiring arbitrarily large exponents to solve; in other words, P does not collapse to DTIME($n^k$) for any fixed k. For example, there are problems solvable in $n^{5000}$ time but not $n^{4999}$ time. This is one argument against Cobham's thesis, the convention that P is a practical class of algorithms. If such a collapse did occur, we could deduce that P ≠ PSPACE, since it is a well-known theorem that DTIME(f(n)) is strictly contained in DSPACE(f(n)). - -However, the time hierarchy theorems provide no means to relate deterministic and non-deterministic complexity, or time and space complexity, so they cast no light on the great unsolved questions of computational complexity theory: whether P and NP, NP and PSPACE, PSPACE and EXPTIME, or EXPTIME and NEXPTIME are equal or not. diff --git a/wiki/wikipedia/2585.txt b/wiki/wikipedia/2585.txt deleted file mode 100644 index fd941c6653282e50798a0730c63d0b3132c52eec..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2585.txt +++ /dev/null @@ -1,16 +0,0 @@ -In mathematics, Weyl's lemma, named after Hermann Weyl, states that every weak solution of Laplace's equation is a smooth solution. This contrasts with the wave equation, for example, which has weak solutions that are not smooth solutions. Weyl's lemma is a special case of elliptic or hypoelliptic regularity. - -Let $\Omega$ be an open subset of $n$-dimensional Euclidean space $\mathbb{R}^{n}$, and let $\Delta$ denote the usual Laplace operator.
Weyl's lemma states that if a locally integrable function $u \in L_{\mathrm{loc}}^{1}(\Omega)$ is a weak solution of Laplace's equation, in the sense that -$$ -\int_\Omega u(x) \Delta \varphi (x) dx = 0 -$$ - -for every smooth test function $\varphi \in C_c^\infty(\Omega)$ with compact support, then (up to redefinition on a set of measure zero) $u \in C^{\infty}(\Omega)$ is smooth and satisfies $\Delta u = 0$ pointwise in $\Omega$. - -This result implies the interior regularity of harmonic functions in $\Omega$, but it does not say anything about their regularity on the boundary $\partial\Omega$. - -To prove Weyl's lemma, one convolves the function $u$ with an appropriate mollifier $\varphi_\varepsilon$ and shows that the mollification $u_\varepsilon = \varphi_\varepsilon\ast u$ satisfies Laplace's equation, which implies that $u_\varepsilon$ has the mean value property. Taking the limit as $\varepsilon\to 0$ and using the properties of mollifiers, one finds that $u$ also has the mean value property, which implies that it is a smooth solution of Laplace's equation. Alternative proofs use the smoothness of the fundamental solution of the Laplacian or suitable a priori elliptic estimates. - -More generally, the same result holds for every distributional solution of Laplace's equation: If $T\in D'(\Omega)$ satisfies $\langle T, \Delta \varphi\rangle = 0$ for every $\varphi\in C_c^\infty(\Omega)$, then $T= T_u$ is a regular distribution associated with a smooth solution $u\in C^\infty(\Omega)$ of Laplace's equation. - -Weyl's lemma follows from more general results concerning the regularity properties of elliptic or hypoelliptic operators. A linear partial differential operator $P$ with smooth coefficients is hypoelliptic if the singular support of $P u$ is equal to the singular support of $u$ for every distribution $u$. The Laplace operator is hypoelliptic, so if $\Delta u = 0$, then the singular support of $u$ is empty since the singular support of $0$ is empty, meaning that $u\in C^\infty(\Omega)$. In fact, since the Laplacian is elliptic, a stronger result is true, and solutions of $\Delta u = 0$ are real-analytic. diff --git a/wiki/wikipedia/2586.txt b/wiki/wikipedia/2586.txt deleted file mode 100644 index a35a18b8088dbd69c800dd4fbeec8cf2f198b913..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2586.txt +++ /dev/null @@ -1,13 +0,0 @@ -Vampire is an automatic theorem prover for first-order classical logic developed in the Department of Computer Science at the University of Manchester. Up to Version 3, it was developed by Andrei Voronkov together with Kryštof Hoder and previously with Alexandre Riazanov. Since Version 4, the development has involved a wider international team including Laura Kovacs, Giles Reger, and Martin Suda. Since 1999 it has won at least 53 trophies in the "world cup for theorem provers" (the CADE ATP System Competition) including the most prestigious FOF division and the theory-reasoning TFA division. - -Vampire's kernel implements the calculi of ordered binary resolution and superposition for handling equality. The splitting rule and negative equality splitting can be simulated by the introduction of new predicate definitions and dynamic folding of such definitions. A DPLL-style algorithm splitting is also supported. 
A number of standard redundancy criteria and simplification techniques are used for pruning the search space: tautology deletion, subsumption resolution, rewriting by ordered unit equalities, basicness restrictions and irreducibility of substitution terms. - -The reduction ordering used is the standard Knuth–Bendix ordering. - -A number of efficient indexing techniques are used to implement all major operations on sets of terms and clauses. Run-time algorithm specialisation is used to accelerate forward matching. - -Although the kernel of the system works only with clausal normal forms, the preprocessor component accepts a problem in full first-order logic syntax and performs a number of useful transformations before passing the result to the kernel. When a theorem is proven, the system produces a verifiable proof, which validates both the clausification phase and the refutation of the conjunctive normal form. - -Along with proving theorems, Vampire has other related functionalities such as generating interpolants. - -Executables can be obtained from the system website. A somewhat outdated version is available under the GNU Lesser General Public License as part of Sigma KEE. diff --git a/wiki/wikipedia/2587.txt b/wiki/wikipedia/2587.txt deleted file mode 100644 index 0797cab3bb16b5a595ec684e9d7845cae01029e7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2587.txt +++ /dev/null @@ -1,9 +0,0 @@ -Fagin's theorem is the oldest result of descriptive complexity theory, a branch of computational complexity theory that characterizes complexity classes in terms of logic-based descriptions of their problems rather than by the behavior of algorithms for solving those problems. - -The theorem states that the set of all properties expressible in existential second-order logic is precisely the complexity class NP. - -It was proven by Ronald Fagin in 1973 in his doctoral thesis, and appears in his 1974 paper. The arity required by the second-order formula was improved (in one direction) in Lynch's theorem, and several results of Grandjean have provided tighter bounds on nondeterministic random-access machines. - -In addition to Fagin's 1974 paper, Immerman 1999 provides a detailed proof of the theorem. It is straightforward to show that every existential second-order formula can be recognized in NP, by nondeterministically choosing the value of all existentially quantified variables, so the main part of the proof is to show that every language in NP can be described by an existential second-order formula. To do so, one can use second-order existential quantifiers to arbitrarily choose a computation tableau. In more detail, for every timestep of an execution trace of a non-deterministic Turing machine, this tableau encodes the state of the Turing machine, its position in the tape, the contents of every tape cell, and which nondeterministic choice the machine makes at that step. Constraining this encoded information so that it describes a valid execution trace in which the tape contents and Turing machine state and position at each timestep follow from the previous timestep can then be done with a first-order formula. - -A key lemma used in the proof is that it is possible to encode a linear order of length $n^k$ (such as the linear orders of timesteps and tape contents at any timestep) as a 2k-ary relation R on a universe A of size n. One way to achieve this is to choose a linear ordering L of A and then define R to be the lexicographical ordering of k-tuples from A with respect to L.
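To make the lemma concrete, here is a minimal Python sketch (the helper name lex_relation is hypothetical, chosen for exposition) that builds the 2k-ary relation R from a linear order L on a universe A:

```python
from itertools import product

def lex_relation(A, k):
    """Return the 2k-ary relation R encoding the lexicographic order on
    k-tuples over A, where the list A gives the linear order L.  R contains
    the concatenation s + t exactly when s <=_lex t, so R linearly orders
    all len(A)**k tuples: an order of length n**k over a universe of size n."""
    rank = {a: i for i, a in enumerate(A)}        # position of each element in L
    key = lambda s: tuple(rank[x] for x in s)     # tuple of positions, compared lexicographically
    tuples = list(product(A, repeat=k))
    return {s + t for s in tuples for t in tuples if key(s) <= key(t)}

R = lex_relation(['a', 'b', 'c'], 2)              # n = 3, k = 2: an order of length 9
assert ('a', 'c', 'b', 'a') in R                  # (a, c) precedes (b, a) lexicographically
assert ('b', 'a', 'a', 'c') not in R
```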
diff --git a/wiki/wikipedia/2588.txt b/wiki/wikipedia/2588.txt deleted file mode 100644 index 2eb135369df2b9ae96351627a48c94398cb30f5c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2588.txt +++ /dev/null @@ -1,19 +0,0 @@ -A percentage point or percent point is the unit for the arithmetic difference of two percentages. For example, moving up from 40 percent to 44 percent is an increase of 4 percentage points, but a 10-percent increase in the quantity being measured. In literature, the unit is usually either written out, or abbreviated as pp or p.p. to avoid ambiguity. After the first occurrence, some writers abbreviate by using just "point" or "points". - -Consider the following hypothetical example: In 1980, 50 percent of the population smoked, and in 1990 only 40 percent of the population smoked. One can thus say that from 1980 to 1990, the prevalence of smoking decreased by 10 percentage points (or by 10 percent of the population) or by 20 percent when talking about smokers only: percentages indicate a proportionate part of a total. - -Percentage-point differences are one way to express a risk or probability. Consider a drug that cures a given disease in 70 percent of all cases, while without the drug, the disease heals spontaneously in only 50 percent of cases. The drug reduces absolute risk by 20 percentage points. Alternatives may be more meaningful to consumers of statistics, such as the reciprocal, also known as the number needed to treat (NNT). In this case, the reciprocal transform of the percentage-point difference would be 1/(20pp) = 1/0.20 = 5. Thus if 5 patients are treated with the drug, one could expect to heal one more case of the disease than would have occurred in the absence of the drug. - -For measurements involving percentages as a unit, such as growth, yield, or ejection fraction, statistical deviations and related descriptive statistics, including the standard deviation and root-mean-square error, should be expressed in units of percentage points instead of percentage. Mistakenly using percentage as the unit for the standard deviation is confusing, since percentage is also used as a unit for the relative standard deviation, i.e. standard deviation divided by average value (coefficient of variation). - -* Percentage (%) 1 part in 100 - -* Per mille (‰) 1 part in 1,000 - -* Basis point (bp) difference of 1 part in 10,000 - -* Permyriad (‱) 1 part in 10,000 - -* Per cent mille (pcm) 1 part in 100,000 - -* Baker percentage diff --git a/wiki/wikipedia/2589.txt b/wiki/wikipedia/2589.txt deleted file mode 100644 index 251a5d89d339e693082a2c55cc33e4ddd969986c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2589.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, the bagpipe theorem of Peter Nyikos describes the structure of the connected (but possibly non-paracompact) ω-bounded surfaces by showing that they are "bagpipes": the connected sum of a compact "bag" with several "long pipes". - -A space is called ω-bounded if the closure of every countable set is compact. For example, the long line and the closed long ray are ω-bounded but not compact. When restricted to metric spaces, ω-boundedness is equivalent to compactness. - -The bagpipe theorem states that every ω-bounded connected surface is the connected sum of a compact connected surface and a finite number of long pipes.
A space P is called a long pipe if there exist subspaces $\{U_n: n \in \mathbb{N}\}$, each of which is homeomorphic to $S^1 \times \mathbb{R}$, such that for $n < m$ the closure of $U_n$ is contained in $U_m$ and the union of all the $U_n$ is $P$. The simplest example of a long pipe is the product $S^1 \times L^{+}$ of the circle with the closed long ray $L^{+}$, which is obtained by gluing together $\omega_1$ copies of the half-open interval $[0,1)$, pasted together with the lexicographic ordering. Here $\omega_1$ denotes the first uncountable ordinal number, which is the set of all countable ordinals. Another (non-isomorphic) example is given by removing a single point from the "long plane" $L \times L$ where $L$ is the long line, formed by gluing together two copies of $L^+$ at their endpoints to get a space which is "long at both ends". There are in fact $2^{\aleph_1}$ different isomorphism classes of long pipes. - -The bagpipe theorem does not describe all surfaces since there are many examples of surfaces that are not ω-bounded, such as the Prüfer manifold. diff --git a/wiki/wikipedia/259.txt b/wiki/wikipedia/259.txt deleted file mode 100644 index aa115ab3f74fad6de938dfe844ed5b955822b931..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/259.txt +++ /dev/null @@ -1,66 +0,0 @@ -ω-consistent theory - -In mathematical logic, an ω-consistent (or omega-consistent, also called numerically segregative) theory is a theory (collection of sentences) that is not only (syntactically) consistent (that is, does not prove a contradiction), but also avoids proving certain infinite combinations of sentences that are intuitively contradictory. The name is due to Kurt Gödel, who introduced the concept in the course of proving the incompleteness theorem. - -A theory T is said to interpret the language of arithmetic if there is a translation of formulas of arithmetic into the language of T so that T is able to prove the basic axioms of the natural numbers under this translation. - -A theory T that interprets arithmetic is ω-inconsistent if, for some property P of natural numbers (defined by a formula in the language of T), T proves P(0), P(1), P(2), and so on (that is, for every standard natural number n, T proves that P(n) holds), but T also proves that there is some natural number n (necessarily nonstandard in any model for T) such that P(n) fails. This may not generate a contradiction within T because T may not be able to prove for any specific value of n that P(n) fails, only that there is such an n. - -T is ω-consistent if it is not ω-inconsistent. - -There is a weaker but closely related property of $\Sigma_1$-soundness. A theory T is $\Sigma_1$-sound (or 1-consistent, in another terminology) if every $\Sigma^0_1$-sentence provable in T is true in the standard model of arithmetic N (i.e., the structure of the usual natural numbers with addition and multiplication). - -If T is strong enough to formalize a reasonable model of computation, $\Sigma_1$-soundness is equivalent to demanding that whenever T proves that a Turing machine C halts, then C actually halts. Every ω-consistent theory is $\Sigma_1$-sound, but not vice versa. - -More generally, we can define an analogous concept for higher levels of the arithmetical hierarchy. If Γ is a set of arithmetical sentences (typically $\Sigma^0_n$ for some n), a theory T is Γ-sound if every Γ-sentence provable in T is true in the standard model. When Γ is the set of all arithmetical formulas, Γ-soundness is called just (arithmetical) soundness. - -If the language of T consists only of the language of arithmetic (as opposed to, for example, set theory), then a sound system is one whose model can be thought of as the set ω, the usual set of mathematical natural numbers. The case of general T is different, see ω-logic below.
$\Sigma_n$-soundness has the following computational interpretation: if the theory proves that a program C using a $\Sigma_{n-1}$-oracle halts, then C actually halts. - -Write PA for the theory Peano arithmetic, and Con(PA) for the statement of arithmetic that formalizes the claim "PA is consistent". Con(PA) could be of the form "For every natural number n, n is not the Gödel number of a proof from PA that 0=1". (This formulation uses 0=1 instead of a direct contradiction; that gives the same result, because PA certainly proves ¬0=1, so if it proved 0=1 as well we would have a contradiction, and on the other hand, if PA proves a contradiction, then it proves anything, including 0=1.) - -Now, assuming PA is really consistent, it follows that PA + ¬Con(PA) is also consistent, for if it were not, then PA would prove Con(PA) (reductio), contradicting Gödel's second incompleteness theorem. However, PA + ¬Con(PA) is not ω-consistent. This is because, for any particular natural number n, PA + ¬Con(PA) proves that n is not the Gödel number of a proof that 0=1 (PA itself proves that fact; the extra assumption ¬Con(PA) is not needed). However, PA + ¬Con(PA) proves that, for some natural number n, n is the Gödel number of such a proof (this is just a direct restatement of the claim ¬Con(PA)). - -In this example, the axiom ¬Con(PA) is $\Sigma_1$, hence the system PA + ¬Con(PA) is in fact $\Sigma_1$-unsound, not just ω-inconsistent. - -Let T be PA together with the axioms c ≠ n for each natural number n, where c is a new constant added to the language. Then T is arithmetically sound (as any nonstandard model of PA can be expanded to a model of T), but ω-inconsistent (as it proves $\exists x\ c=x$, and c ≠ n for every number n). - -$\Sigma_1$-sound ω-inconsistent theories using only the language of arithmetic can be constructed as follows. Let $I\Sigma_n$ be the subtheory of PA with the induction schema restricted to $\Sigma_n$-formulas, for any n > 0. The theory $I\Sigma_{n+1}$ is finitely axiomatizable; let A be its single axiom, and consider the theory T = $I\Sigma_n$ + ¬A. We can assume that A is an instance of the induction schema, which has the form -$$ -\forall w[B(0,w)\land\forall x(B(x,w)\to B(x+1,w))\to\forall xB(x,w)]. -$$ - -If we denote the formula -$$ -\forall w[B(0,w)\land\forall x(B(x,w)\to B(x+1,w))\to B(n,w)] -$$ - -by P(n), then for every natural number n, the theory T (actually, even the pure predicate calculus) proves P(n). On the other hand, T proves the formula $\exists x\neg P(x)$, because it is logically equivalent to the axiom ¬A. Therefore, T is ω-inconsistent. - -It is possible to show that T is $\Pi_{n+3}$-sound. In fact, it is $\Pi_{n+3}$-conservative over the (obviously sound) theory $I\Sigma_n$. The argument is more complicated (it relies on the provability of the $\Sigma_{n+2}$-reflection principle for $I\Sigma_n$ in $I\Sigma_{n+1}$). - -Let ω-Con(PA) be the arithmetical sentence formalizing the statement "PA is ω-consistent". Then the theory PA + ¬ω-Con(PA) is unsound ($\Sigma_3$-unsound, to be precise), but ω-consistent. The argument is similar to the first example: a suitable version of the Hilbert–Bernays–Löb derivability conditions holds for the "provability predicate" ω-Prov(A) = ¬ω-Con(PA + ¬A), hence it satisfies an analogue of Gödel's second incompleteness theorem. - -The concept of theories of arithmetic whose integers are the true mathematical integers is captured by ω-logic.
Let T be a theory in a countable language that includes a unary predicate symbol N intended to hold just of the natural numbers, as well as specified names 0, 1, 2, ..., one for each (standard) natural number (which may be separate constants, or constant terms such as 0, 1, 1+1, 1+1+1, ..., etc.). Note that T itself could be referring to more general objects, such as real numbers or sets; thus in a model of T the objects satisfying N(x) are those that T interprets as natural numbers, not all of which need be named by one of the specified names. - -The system of ω-logic includes all axioms and rules of the usual first-order predicate logic, together with, for each T-formula P(x) with a specified free variable x, an infinitary ω-rule of the form: - -From $P(0),P(1),P(2),\ldots$ infer $\forall x(N(x)\to P(x))$. - -That is, if the theory asserts (i.e. proves) P(n) separately for each natural number n given by its specified name, then it also asserts P collectively for all natural numbers at once via the evident finite universally quantified counterpart of the infinitely many antecedents of the rule. For a theory of arithmetic, meaning one with intended domain the natural numbers such as Peano arithmetic, the predicate N is redundant and may be omitted from the language, with the consequent of the rule for each P simplifying to $\forall xP(x)$. - -An ω-model of T is a model of T whose domain includes the natural numbers and whose specified names and symbol N are standardly interpreted, respectively as those numbers and the predicate having just those numbers as its domain (whence there are no nonstandard numbers). If N is absent from the language then what would have been the domain of N is required to be that of the model, i.e. the model contains only the natural numbers. (Other models of T may interpret these symbols nonstandardly; the domain of N need not even be countable, for example.) These requirements make the ω-rule sound in every ω-model. As a corollary to the omitting types theorem, the converse also holds: the theory T has an ω-model if and only if it is consistent in ω-logic. - -There is a close connection of ω-logic to ω-consistency. A theory consistent in ω-logic is also ω-consistent (and arithmetically sound). The converse is false, as consistency in ω-logic is a much stronger notion than ω-consistency. However, the following characterization holds: a theory is ω-consistent if and only if its closure under unnested applications of the ω-rule is consistent. - -If the theory T is recursively axiomatizable, ω-consistency has the following characterization, due to Craig Smoryński: - -T is ω-consistent if and only if $T+\mathrm{RFN}_T+\mathrm{Th}_{\Pi^0_2}(\mathbb N)$ is consistent. - -Here, $\mathrm{Th}_{\Pi^0_2}(\mathbb N)$ is the set of all $\Pi^0_2$-sentences valid in the standard model of arithmetic, and $\mathrm{RFN}_T$ is the uniform reflection principle for T, which consists of the axioms -$$ -\forall x(\mathrm{Prov}_T(\ulcorner\varphi(\dot x)\urcorner)\to\varphi(x)) -$$ - -for every formula $\varphi$ with one free variable. In particular, a finitely axiomatizable theory T in the language of arithmetic is ω-consistent if and only if T + PA is $\Sigma^0_2$-sound.
diff --git a/wiki/wikipedia/2590.txt b/wiki/wikipedia/2590.txt deleted file mode 100644 index 9e46e875a9162a6a188a64c47c32f9e61dd8b2e3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2590.txt +++ /dev/null @@ -1,5 +0,0 @@ -In geometry, Barrow's inequality is an inequality relating the distances between an arbitrary point within a triangle, the vertices of the triangle, and certain points on the sides of the triangle. It is named after David Francis Barrow. - -Let P be an arbitrary point inside the triangle ABC. From P and ABC, define U, V, and W as the points where the angle bisectors of BPC, CPA, and APB intersect the sides BC, CA, AB, respectively. Then Barrow's inequality states that -$$ -|PA|+|PB|+|PC|\geq 2(|PU|+|PV|+|PW|), -$$ -with equality holding only in the case of an equilateral triangle with P as its center. This result was named "Barrow's inequality" as early as 1961. - -A simpler proof was later given by Louis J. Mordell. diff --git a/wiki/wikipedia/2591.txt b/wiki/wikipedia/2591.txt deleted file mode 100644 index b6fe15ff691ab3243342f0b0dd15668a36a1ea4b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2591.txt +++ /dev/null @@ -1,53 +0,0 @@ -In number theory, the n conjecture is a conjecture stated by Browkin as a generalization of the abc conjecture to more than three integers. - -Given $ {n \ge 3}$, let $ {a_1,a_2,...,a_n \in \mathbb{Z}}$ satisfy three conditions: - -(i) $\gcd(a_1,a_2,...,a_n)=1$ - -(ii) $ {a_1 + a_2 + ... + a_n=0}$ - -(iii) no proper subsum of $ {a_1,a_2,...,a_n}$ equals $ {0}$ - -First formulation - -The n conjecture states that for every $ {\varepsilon >0}$, there is a constant $ C $, depending on $ {n} $ and $ {\varepsilon} $, such that:
    $ \operatorname{max}(|a_1|,|a_2|,...,|a_n|)< C_{n,\varepsilon}\operatorname{rad}(|a_1| \cdot |a_2| \cdot ... \cdot |a_n|)^{2n - 5 + \varepsilon} $
    - -where $ \operatorname{rad}(m)$ denotes the radical of the integer $ {m} $, defined as the product of the distinct prime factors of $ {m} $. - -Second formulation - -Define the quality of $ {a_1,a_2,...,a_n}$ as -$$ - q(a_1,a_2,...,a_n)= \frac{\log(\operatorname{max}(|a_1|,|a_2|,...,|a_n|))}{\log(\operatorname{rad}(|a_1| \cdot |a_2| \cdot ... \cdot |a_n|))} -$$ - -The n conjecture states that $\limsup q(a_1,a_2,...,a_n)= 2n-5 $. - -Vojta proposed a stronger variant of the n conjecture, where setwise coprimeness of $ {a_1,a_2,...,a_n}$ is replaced by pairwise coprimeness of $ {a_1,a_2,...,a_n}$. - -There are two different formulations of this strong n conjecture. - -Given $ {n \ge 3}$, let $ {a_1,a_2,...,a_n \in \mathbb{Z}}$ satisfy three conditions: - -(i) $ {a_1,a_2,...,a_n}$ are pairwise coprime - -(ii) $ {a_1 + a_2 + ... + a_n=0}$ - -(iii) no proper subsum of $ {a_1,a_2,...,a_n}$ equals $ {0}$ - -First formulation - -The strong n conjecture states that for every $ {\varepsilon >0}$, there is a constant $ C $, depending on $ {n} $ and $ {\varepsilon} $, such that: - -
    $ \operatorname{max}(|a_1|,|a_2|,...,|a_n|)< C_{n,\varepsilon}\operatorname{rad}(|a_1| \cdot |a_2| \cdot ... \cdot |a_n|)^{1 + \varepsilon} $
Second formulation - -Define the quality of $ {a_1,a_2,...,a_n}$ as -$$ - q(a_1,a_2,...,a_n)= \frac{\log(\operatorname{max}(|a_1|,|a_2|,...,|a_n|))}{\log(\operatorname{rad}(|a_1| \cdot |a_2| \cdot ... \cdot |a_n|))} -$$ - -The strong n conjecture states that $\limsup q(a_1,a_2,...,a_n)= 1 $. diff --git a/wiki/wikipedia/2592.txt b/wiki/wikipedia/2592.txt deleted file mode 100644 index bdfad3bec4511188e9b9cf373093a49c4e2d50e2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2592.txt +++ /dev/null @@ -1,38 +0,0 @@ -In convex geometry, the Mahler volume of a centrally symmetric convex body is a dimensionless quantity that is associated with the body and is invariant under linear transformations. It is named after German-English mathematician Kurt Mahler. It is known that the shapes with the largest possible Mahler volume are the balls and solid ellipsoids; this is now known as the Blaschke–Santaló inequality. The still-unsolved Mahler conjecture states that the minimum possible Mahler volume is attained by a hypercube. - -A convex body in Euclidean space is defined as a compact convex set with non-empty interior. If B is a centrally symmetric convex body in n-dimensional Euclidean space, the polar body $B^\circ$ is another centrally symmetric body in the same space, defined as the set -$$ -\left\{ x\mid x\cdot y\le 1 \text{ for all } y\in B \right\}. -$$ - -The Mahler volume of B is the product of the volumes of B and $B^\circ$. - -If T is an invertible linear transformation, then $(TB)^\circ = (T^{-1})^\ast B^\circ$; thus applying T to B changes its volume by $\det T$ and changes the volume of $B^\circ$ by $\det (T^{-1})^\ast$. Thus the overall Mahler volume of B is preserved by linear transformations. - -The polar body of an n-dimensional unit sphere is itself another unit sphere. Thus, its Mahler volume is just the square of its volume, -$$ -\frac{\Gamma(3/2)^{2n}4^n}{\Gamma(\frac{n}{2}+1)^2}. -$$ - -Here Γ represents the Gamma function. - -By affine invariance, any ellipsoid has the same Mahler volume. - -The polar body of a polyhedron or polytope is its dual polyhedron or dual polytope. In particular, the polar body of a cube or hypercube is an octahedron or cross polytope. Its Mahler volume can be calculated as -$$ -\frac{4^n}{\Gamma(n+1)}. -$$ - -The Mahler volume of the sphere is larger than the Mahler volume of the hypercube by a factor of approximately $\left(\tfrac{\pi}{2}\right)^n$. - -The Blaschke–Santaló inequality states that the shapes with maximum Mahler volume are the spheres and ellipsoids. The three-dimensional case of this result was proven by Wilhelm Blaschke; the full result was proven much later by using a technique known as Steiner symmetrization by which any centrally symmetric convex body can be replaced with a more sphere-like body without decreasing its Mahler volume. - -The shapes with the minimum known Mahler volume are hypercubes, cross polytopes, and more generally the Hanner polytopes which include these two types of shapes, as well as their affine transformations. The Mahler conjecture states that the Mahler volume of these shapes is the smallest of any n-dimensional symmetric convex body; it remains unsolved when $n\geq4$. - -Bourgain and Milman proved that the Mahler volume is bounded below by $c^n$ times the volume of a sphere for some absolute constant $c > 0$, matching the scaling behavior of the hypercube volume but with a smaller constant. Kuperberg proved that, more concretely, one can take $c=\tfrac{1}{2}$ in this bound.
A result of this type is known as a reverse Santaló inequality. - -* The 2-dimensional case of the Mahler conjecture has been solved by Mahler, and the 3-dimensional case by Iriyeh and Shibata. - -* Nazarov, Petrov, Ryabogin, and Zvavitch proved that the unit cube is a strict local minimizer for the Mahler volume in the class of origin-symmetric convex bodies endowed with the Banach–Mazur distance. - -The Mahler volume can be defined in the same way, as the product of the volume and the polar volume, for convex bodies whose interior contains the origin regardless of symmetry. Mahler conjectured that, for this generalization, the minimum volume is obtained by a simplex, with its centroid at the origin. As with the symmetric Mahler conjecture, reverse Santaló inequalities are known showing that the minimum volume is at least within an exponential factor of the simplex. diff --git a/wiki/wikipedia/2593.txt b/wiki/wikipedia/2593.txt deleted file mode 100644 index 8c8dc790d96383f32350ad6d3667d9d5cc586ccd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2593.txt +++ /dev/null @@ -1,64 +0,0 @@ -In mathematics, the Dyson conjecture is a conjecture about the constant term of certain Laurent polynomials, proved independently in 1962 by Wilson and Gunson. Andrews generalized it to the q-Dyson conjecture, proved by Zeilberger and Bressoud and sometimes called the Zeilberger–Bressoud theorem. Macdonald generalized it further to more general root systems with the Macdonald constant term conjecture, proved by Cherednik. - -The Dyson conjecture states that the Laurent polynomial -$$ -\prod _{1\le i\ne j\le n}(1-t_i/t_j)^{a_i} -$$ - -has constant term -$$ -\frac{(a_1+a_2+\cdots+a_n)!}{a_1!a_2!\cdots a_n!}. -$$ - -The conjecture was first proved independently by Wilson and Gunson. Good later found a short proof, by observing that the Laurent polynomials, and therefore their constant terms, satisfy the recursion relations -$$ -F(a_1,\dots,a_n) = \sum_{i=1}^nF(a_1,\dots,a_i-1,\dots,a_n). -$$ - -The case n = 3 of Dyson's conjecture follows from the Dixon identity. - -Sills used a computer to find expressions for non-constant coefficients of Dyson's Laurent polynomial. - -When all the values $a_i$ are equal to β/2, the constant term in Dyson's conjecture is the value of Dyson's integral -$$ -\frac{1}{(2\pi)^n}\int_0^{2\pi}\cdots\int_0^{2\pi}\prod_{1\le j<k\le n}\left|e^{i\theta_j}-e^{i\theta_k}\right|^{\beta} d\theta_1\cdots d\theta_n = \frac{\Gamma(1+\beta n/2)}{\Gamma(1+\beta/2)^n}. -$$ - -Andrews's q-Dyson conjecture states that the constant term of -$$ -\prod_{1\le i<j\le n}\left(\frac{x_i}{x_j};q\right)_{a_i}\left(\frac{qx_j}{x_i};q\right)_{a_j} -$$ - -is -$$ -\frac{(q;q)_{a_1+a_2+\cdots+a_n}}{(q;q)_{a_1}(q;q)_{a_2}\cdots(q;q)_{a_n}}, -$$ - -where $(a;q)_n$ is the q-Pochhammer symbol. - -This conjecture reduces to Dyson's conjecture for q=1, and was proved by Zeilberger and Bressoud, using a combinatorial approach inspired by previous work of Ira Gessel and Dominique Foata. A shorter proof, using formal Laurent series, was given in 2004 by Ira Gessel and Guoce Xin, and an even shorter proof, using a quantitative form of Noga Alon's Combinatorial Nullstellensatz (due to Karasev and Petrov, and independently to Lason), was given in 2012 by Gyula Karolyi and Zoltan Lorant Nagy. - -The latter method was extended, in 2013, by Shalosh B. Ekhad and Doron Zeilberger to derive explicit expressions of any specific coefficient, not just the constant term; see http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/qdyson.html for detailed references. - -Macdonald extended the conjecture to arbitrary finite or affine root systems, with Dyson's original conjecture corresponding to the case of the $A_{n-1}$ root system and Andrews's conjecture corresponding to the affine $A_{n-1}$ root system. Macdonald reformulated these conjectures as conjectures about the norms of Macdonald polynomials.
Macdonald's conjectures were proved by Cherednik using double affine Hecke algebras. - -Macdonald's form of Dyson's conjecture for root systems of type BC is closely related to Selberg's integral. diff --git a/wiki/wikipedia/2594.txt b/wiki/wikipedia/2594.txt deleted file mode 100644 index 67824fed151a9c9fc3222b6835c535238aab7be3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2594.txt +++ /dev/null @@ -1,229 +0,0 @@ -The multifit algorithm is an algorithm for multiway number partitioning, originally developed for the problem of identical-machines scheduling. It was developed by Coffman, Garey and Johnson. Its novelty comes from the fact that it uses an algorithm for another famous problem - the bin packing problem - as a subroutine. - -The input to the algorithm is a set S of numbers, and a parameter n. The required output is a partition of S into n subsets, such that the largest subset sum (also called the makespan) is as small as possible. - -The algorithm uses, as a subroutine, an algorithm called first-fit-decreasing bin packing (FFD). The FFD algorithm takes as input the same set S of numbers, and a bin capacity C. It heuristically packs numbers into bins such that the sum of numbers in each bin is at most C, aiming to use as few bins as possible. Multifit runs FFD multiple times, each time with a different capacity C, until it finds some C such that FFD with capacity C packs S into at most n bins. To find it, it uses binary search as follows. - -# Let L := max ( sum(S) / n, max(S) ). Note, with bin-capacity smaller than L, every packing must use more than n bins. - -# Let U := max ( 2 sum(S) / n, max(S) ). Note, with bin-capacity at least U, FFD uses at most n bins. Proof: suppose by contradiction that some input si did not fit into any of the first n bins. Clearly this is possible only if i ≥ n+1. If si > C/2, then, since the inputs are ordered in descending order, the same inequality holds for all the first n+1 inputs in S. This means that sum(S) > (n+1)C/2 > n U/2, a contradiction to the definition of U. Otherwise, si ≤ C/2. So the sum of each of the first n bins is more than C/2. This again implies sum(S) > n C/2 > n U/2, contradiction. - -# Iterate k times (where k is a precision parameter): - -#* Let C := (L+U)/2. Run FFD on S with capacity C. - -#** If FFD needs at most n bins, then decrease U by letting U := C. - -#** If FFD needs more than n bins, then increase L by letting L := C. - -# Finally, run FFD with capacity U. It is guaranteed to use at most n bins. Return the resulting scheduling. - -Multifit is a constant-factor approximation algorithm. It always finds a partition in which the makespan is at most a constant factor larger than the optimal makespan. To find this constant, we must first analyze FFD. While the standard analysis of FFD considers approximation w.r.t. number of bins when the capacity is constant, here we need to analyze approximation w.r.t. capacity when the number of bins is constant. Formally, for every input set S and integer n, let $OPT(S,n)$ be the smallest capacity such that S can be packed into n bins of this capacity. Note that $OPT(S,n)$ is the value of the optimal solution to the original scheduling instance. - -Let $r_n$ be the smallest real number such that, for every input S, FFD with capacity $r_n\cdot OPT(S,n)$ uses at most n bins. - -Coffman, Garey and Johnson proved the upper bound $r_n\le 1.22$; later work lowered this to at most 13/11≈1.182. The original proof of the 13/11 bound missed some cases; a complete and simpler proof was presented later.
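The binary-search procedure described above is short enough to sketch directly. The following minimal Python sketch mirrors the steps L, U, and the k halving iterations (the names ffd and multifit are ours; k = 20 is an arbitrary choice for the precision parameter):

```python
# A sketch of MultiFit: binary search on the capacity, with
# first-fit-decreasing (FFD) as the subroutine, following the steps above.

def ffd(items, capacity):
    """First-fit-decreasing: place each item, in decreasing order, into the
    first bin where it fits, opening a new bin when none fits."""
    bins = []
    for x in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + x <= capacity:
                b.append(x)
                break
        else:
            bins.append([x])
    return bins

def multifit(items, n, k=20):
    # lo: below this capacity, more than n bins are always needed;
    # hi: at this capacity, FFD provably uses at most n bins.
    lo = max(sum(items) / n, max(items))
    hi = max(2 * sum(items) / n, max(items))
    for _ in range(k):  # k is the precision parameter from step 3
        c = (lo + hi) / 2
        if len(ffd(items, c)) <= n:
            hi = c
        else:
            lo = c
    return ffd(items, hi)

# Example: partition {9,7,6,5,5,4,...,4} into n = 4 subsets.
print(multifit([9, 7, 6, 5, 5] + [4] * 9, 4))
```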
The 13/11 bound cannot be improved: there is an instance with n=13 in which the approximation ratio is exactly 13/11. A further example shows that $r_n\geq 20/17$, which is tight. The inputs are 9,7,6,5,5, 4,4,4,4,4,4,4,4,4. They can be packed into 4 bins of capacity 17 as follows: - -* 9, 4, 4 - -* 7, 6, 4 - -* 5, 4, 4, 4 - -* 5, 4, 4, 4 - -But if we run FFD with bin capacity smaller than 20, then the filled bins are: - -* 9,7 [4 does not fit] - -* 6,5,5 [4 does not fit] - -* 4,4,4,4 [4 does not fit] - -* 4,4,4,4 - -* 4 - -Note that the sum in each of the first 4 bins is 16, so we cannot put another 4 inside it. Therefore, 4 bins are not sufficient. - -MultiFit has also been considered for the dual objective of maximizing the smallest sum, for which its direct use is unsatisfactory; in the words of one study of that problem:
    "In the solution of the makespan problem using MULTIFIT, it is easy to construct examples where one processor is never used. Such a solution is tolerable for the makespan problem, but is totally unacceptable for our problem [since the smallest sum is 0]. Modifications of MULTIFIT can be devised which would be more suitable for our problem, but we could find none which produces a better worst-case bound than that of LPT."
- -The upper bounds on $r_n$ are proved by contradiction. For any integers p ≥ q, if $r_n > p/q$, then there exists a (p/q)-counterexample, defined as an instance S and a number n of bins such that - -* S can be packed into n bins with capacity q; - -* FFD does not manage to pack S into n bins with capacity p. - -If there exists such a counterexample, then there also exists a minimal (p/q)-counterexample, which is a (p/q)-counterexample with a smallest number of items in S and a smallest number of bins n. In a minimal (p/q)-counterexample, FFD packs all items in S except the last (smallest) one into n bins with capacity p. Given a minimal (p/q)-counterexample, denote by P1,...,Pn the (incomplete) FFD packing into these n bins with capacity p, by Pn+1 the bin containing the single smallest item, and by Q1,...,Qn the (complete) optimal packing into n bins with capacity q. The following lemmas can be proved: - -* No union of k subsets from {Q1,...,Qn} is dominated by a union of k subsets from {P1,...,Pn+1} ("dominated" means that each item in the dominated subset is mapped to a weakly-larger item in the dominating subset). Otherwise we could get a smaller counterexample as follows. [1] Delete all items in the k dominating subsets Pi. Clearly, the incomplete FFD packing now needs n-k bins, and still the smallest item (or an entire bin) remains unpacked. [2] In the k subsets Qi of the optimal packing, exchange each item with its dominating item. Now, the k subsets Qi are larger (probably larger than q), but all other n-k subsets are smaller (in particular, at most q). Therefore, after deleting all items in the k subsets Pi, the remaining items can be packed into at most n-k bins of size q. - -* Each of Q1,...,Qn contains at least 3 items. This is because [a] each Qi with a single item is dominated by the Pj that contains that item; [b] for each Qi with two items x and y, if both x and y are in the same Pj, then Qi is dominated by this Pj; [c] Suppose x≥y, x is in some Pj, and y is in some Pk to its right. This means that y did not fit into Pj. But x+y ≤ q. This means that Pj must contain another item z ≥ y. So Qi is dominated by Pj. [d] Suppose x≥y, x is in some Pj, and y is in some Pk to its left. This means that Pk must contain a previous item z ≥ x. So Qi is dominated by Pk. In every case we have a domination and, by the previous lemma, could get a smaller counterexample. - -* Each of P1,...,Pn contains at least 2 items. This is because, if some Pi contains only a single item, this implies that the last (smallest) item does not fit into it. This means that this single item must be alone in an optimal bundle, contradicting the previous lemma. - -* Let s be the size of the smallest item. Then $s > \frac{n}{n-1}(p-q)$. Proof: Since s does not fit into the first n bundles, we have $\mathrm{sum}(P_i) + s > p$ for each i, so $\sum_{i=1}^n \mathrm{sum}(P_i) + n\cdot s > n\cdot p$. On the other hand, since all items fit into n bins of capacity q, we have $\sum_{i=1}^n \mathrm{sum}(P_i) + s \leq n\cdot q$. Subtracting the inequalities gives $s > \frac{n}{n-1}(p-q)$. - -* The size of every item is at most $q - 2s$. This is because there are at least 3 items in each optimal bin (with capacity q). - -* The sum of items in every bin P1,...,Pn is larger than $p - s$; otherwise we could add the smallest item. - -From the above lemmas, it is already possible to prove a loose upper bound $r_n\leq 5/4 = 1.25$. Proof. Let S, n be a minimal (5/4)-counterexample. The above lemmas imply that - - -* $s > \frac{n}{n-1}(5-4) > 1$. Since the optimal capacity is 4, no optimal bin can contain 4 or more items.
Therefore, each optimal bin must contain at most 3 items, and the number of items is at most 3n. - -* The size of each item is at most $4 - 2s$, and the size of each FFD bin is more than $5-s$. If some FFD bin contained only two items, its sum would be at most $8 - 4s = 5 + (3-3s) - s < 5 - s$; so each FFD bin must contain at least 3 items. But then the first n FFD bins would already contain at least 3n items, i.e. all the items, contradicting the fact that the smallest item remains unpacked. - -To prove tighter bounds, one needs to take a closer look at the FFD packing of the minimal (p/q)-counterexample. The items and FFD bins P1,...,Pn are termed as follows: - -* A regular item is an item added to some bin Pi, before the next bin Pi+1 was opened. Equivalently, a regular item is an item in Pi which is at least as large as every item in every bin Pj for j>i. - -* A fallback item is an item added to some bin Pi, after the next bin Pi+1 was opened. Equivalently, a fallback item is an item in Pi which is smaller than the largest item in Pi+1. - -* A regular k-bin is a bin that contains k regular items and no fallback items. - -* A fallback k-bin is a bin that contains k regular items and some fallback items. - -The following lemmas follow immediately from these definitions and the operation of FFD. - -* If k1 < k2, then all k1-bins are to the left of all k2-bins. This is because all bins have the same capacity, so if more regular items fit into a bin, these items must be smaller, so they must be allocated later. - -* If Pi is a k-bin, then the sum of the k regular items in Pi is larger than $\frac{k}{k+1}\cdot p$, since otherwise we could add another item before opening a new bin. - -* If Pi and Pi+1 are both k-bins, then the sum of the k regular items in Pi is at least as large as in Pi+1 (this is because the items are ordered by decreasing size). - -* All regular k-bins are to the left of all fallback k-bins. This is because all bins have the same capacity, so if more fallback items fit into a bin, these items must be smaller, so they must be allocated later. - -In a minimal counterexample, there are no regular 1-bins (since each bin contains at least 2 items), so by the above lemmas, the FFD bins P1,...,Pn are ordered by type: - -* Zero or more fallback 1-bins; - -* Then, zero or more regular 2-bins; - -* Then, zero or more fallback 2-bins; - -* Then, zero or more regular 3-bins; - -* Then, zero or more fallback 3-bins; - -* and so on. - -The upper bound $r_n\leq 1.22$ is proved by assuming a minimal (122/100)-counterexample. Each item is given a weight based on its size and its bin in the FFD packing. The weights are determined such that the total weight in each FFD bin is at least x, and the total weight in almost each optimal bin is at most x (for some predetermined x). This implies that the number of FFD bins is at most the number of optimal bins, which contradicts the assumption that it is a counterexample. - -By the lemmas above, we know that: - -* The size of the smallest item satisfies s > p-q = 22, so s = 22+D for some D>0. - -* Each optimal bin contains at most 4 items (floor(100/22)), and each FFD bin contains at most 5 items (floor(122/22)). - -* The size of every item is at most q-2s = 56-2D. - -* The sum in each FFD bin is larger than p-s = 100-D. - -* There are no 1-bins, since in a 1-bin, the size of the regular item must be at least p/2=61, while here the size of every item is less than 56. - -If D>4, the size of each item is larger than 26, so each optimal bin (with capacity 100) must contain at most 3 items.
Each item is smaller than 56-2D and each FFD bin has a sum larger than 100-D, so each FFD bin must contain at least 3 items (two items would sum to less than 2(56-2D) = 112-4D, which is at most 100-D when D ≥ 4). Therefore, there are at most n FFD bins - contradiction. So from now on, we assume D≤4. The items are assigned types and weights as follows. - -* The two items in each regular 2-bin except maybe the last one have a size larger than (100-D)/2 each. All such items are called type-X2, and assigned a weight of (100-D)/2. The last 2-regular bin is a special case: if both its items have a size larger than (100-D)/2, then they are type-X2 too; otherwise, they are called type-Z, and their weight equals their size. - -* The two regular items in each fallback 2-bin have a total size larger than 2*122/3; they are called type-Y2, and their weight equals their size minus D. - -* The three items in each regular 3-bin except maybe the last one have a size larger than (100-D)/3 each. All such items are called type-X3, and assigned a weight of (100-D)/3. The last 3-regular bin is a special case: if all items in it have a size larger than (100-D)/3, then they are type-X3 too; otherwise, they are called type-Z and their weight equals their size. - -* The three regular items in each fallback 3-bin have a total size larger than 3*122/4; they are called type-Y3, and their weight equals their size minus D. - -* The four items in each regular 4-bin except maybe the last one have a size larger than (100-D)/4 each. All such items are called type-X4, and assigned a weight of (100-D)/4. The last 4-regular bin is a special case: if all items in it have a size larger than (100-D)/4, then they are type-X4 too; otherwise, they are called type-Z and their weight equals their size. - -* The remaining items (including all fallback items in fallback 2-bins and 3-bins, all fallback 4-bins, and all other 5-item bins) are all called type-X5, and their weight equals 22 (if D ≤ 12/5) or (100-D)/4 (otherwise). The threshold 12/5 was computed such that the weight is always at most 22+D, so that the weight is always smaller than the size. - -Note that the weight of each item is at most its size (the weight can be seen as the size "rounded down"). Still, the total weight of items in every FFD bin is at least 100-D: - -* For regular 2-bins, regular 3-bins and regular 4-bins: - -** For the non-last ones, this is immediate. - -** The last such bins contain only Z-type items, whose weight equals their size, so the total weight of these bins equals their total size, which is more than 100-D. - -* Fallback 2-bins contain two type-Y2 items with total weight larger than 2*122/3-2D, plus at least one type-X5 item with weight at least 22 (if D ≤ 12/5) or (100-D)/4 (otherwise). In both cases the total weight is more than 100-D. - -* Fallback 3-bins contain three type-Y3 items with total weight larger than 3*122/4-3D, plus at least one type-X5 item with weight at least 22. So the total weight is more than 3*122/4+22-3D = 113.5-3D ≥ 105.5-D > 100-D, since D≤4. - -* 5-item bins contain 5 items with size at least 22+D and weight at least 22, so their total weight is obviously more than 100-D. - -The total weight of items in most optimal bins is at most 100-D: - -* This is clear for any optimal bin containing a type-Y2 item or a type-Y3 item, since their weight is their size minus D, the weights of the other items are at most their sizes, and the total size of an optimal bin is at most 100.
- -* For optimal bins containing only type-X2, type-X3, type-X4 and type-X5 items, it is possible to check all possible configurations (all combinations that fit into an optimal bin of size 100), and verify that the total weight in each configuration is at most 100-D. - -* Optimal bins containing type-Z items might have a total weight larger than 100-D. Since the total weight is at most 100, there is an "excess weight" of at most D for each such bin. However, the number of type-Z items is limited: - -**If D > 12/5, then there are at most 5 type-Z items (2 in the last regular 2-bin and 3 in the last regular 3-bin; the items in the last regular 4-bin are all type-X4). Therefore, the excess weight is at most 5D. Comparing the total weight of FFD vs. optimal bins yields s < 5D ≤ 20 < 22, a contradiction. - -**Otherwise, there are at most 9 type-Z items (2+3+4). Therefore, the excess weight is at most 9D. Comparing the total weight of FFD vs. optimal bins yields s < 9D ≤ 108/5 < 22, a contradiction. - -The upper bound $r_n\leq 13/11\approx 1.182$ is proved by assuming a minimal ((120-3d)/100)-counterexample, with some d<20/33, and deriving a contradiction. - -MultiFit is not monotone in the following sense: it is possible that one of the inputs decreases while the max-sum in the partition returned by MultiFit increases. As an example, suppose n=3 and the input numbers are:
    44, 24, 24, 22, 21, 17, 8, 8, 6, 6.
FFD packs these inputs into 3 bins of capacity 60 (which is optimal): - -* 44, 8, 8; - -* 24, 24, 6, 6; - -* 22, 21, 17. - -But if the "17" becomes "16", then FFD with capacity 60 needs 4 bins: - -* 44, 16; - -* 24, 24, 8; - -* 22, 21, 8, 6; - -* 6. - -so MultiFit must increase the capacity, for example, to 62: - -* 44, 16; - -* 24, 24, 8, 6; - -* 22, 21, 8, 6. - -Multifit has been extended to the more general problem of maximin-share allocation of chores. In this problem, S is a set of chores, and there are n agents who assign potentially different valuations to the chores. The goal is to give each agent i a set of chores worth at most r times the maximum value in an optimal scheduling based on i's valuations. A naive approach is to let each agent in turn use the MultiFit algorithm to calculate the threshold, and then run the algorithm with each agent using his own threshold. If this approach worked, we would get an approximation of 13/11. However, this approach fails due to the non-monotonicity of FFD. - -Here is an example. Suppose there are four agents, and they have valuations of two types: - -Both types can partition the chores into 4 parts of total value 75. Type A: - -* 51, 12, 12 - -* 27.5, 27.5, 10, 10 - -* 27.5, 27.5, 10, 10 - -* 25, 10, 10, 10, 10, 10 - -Type B: - -* 51, 24 - -* 27.5, 27.5, 20 - -* 27.5, 27.5, 20 - -* 8.33 {9 times} - -If all four agents are of the same type, then FFD with threshold 75 fills the 4 optimal bins. But suppose there is one agent of type B, and the others are of type A. Then, in the first round, the agent of type B takes the bundle 51, 24 (the other agents cannot take it since for them the values are 51,25 whose sum is more than 75). In the following rounds, the following bundles are filled for the type A agents: - -* 27.5, 27.5, 12 [the sum is 67 - there is no room for another 10] - -* 27.5, 27.5, 12 [the sum is 67 - there is no room for another 10] - -* 10, 10, 10, 10, 10, 10, 10 [the sum is 70 - there is no room for another 10] - -so the last two chores remain unallocated. - -Using a more sophisticated threshold calculation, it is possible to guarantee to each agent at most 11/9≈1.22 of his optimal value if the optimal value is known, and at most 5/4≈1.25 of his optimal value (using a polynomial time algorithm) if the optimal value is not known. diff --git a/wiki/wikipedia/2595.txt b/wiki/wikipedia/2595.txt deleted file mode 100644 index 062e475bc4a9a34ceaa2b49a6ce2a2157c833e8c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2595.txt +++ /dev/null @@ -1,3 +0,0 @@ -GroupLogic, Inc., founded in 1988 and headquartered in Arlington, Virginia, USA, is an enterprise software company that develops, sells and supports software for moving and storing data including activEcho, mobilEcho, ArchiveConnect, MassTransit and ExtremeZ-IP. GroupLogic’s products are used by information technology organizations to allow employees to access and manage corporate files regardless of the type of computing platform the employee is using to access the network. - -On September 13, 2012, GroupLogic announced that it became a subsidiary of Acronis, a software company specializing in backup and disaster recovery products and services.
diff --git a/wiki/wikipedia/2596.txt b/wiki/wikipedia/2596.txt deleted file mode 100644 index 1d9251da2e8b4482e35d65acdde7629bc71fa231..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2596.txt +++ /dev/null @@ -1 +0,0 @@ -#redirect Kefeng Liu#Mariño–Vafa conjecture and string duality diff --git a/wiki/wikipedia/2597.txt b/wiki/wikipedia/2597.txt deleted file mode 100644 index 54784db93b6f1cf1a0d163c2407c778fd3a91973..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2597.txt +++ /dev/null @@ -1,39 +0,0 @@ -Water pouring puzzles (also called water jug problems, decanting problems, measuring puzzles, or Die Hard with a Vengeance puzzles) are a class of puzzle involving a finite collection of water jugs of known integer capacities (in terms of a liquid measure such as liters or gallons). - -Initially each jug contains a known integer volume of liquid, not necessarily equal to its capacity. - -Puzzles of this type ask how many steps of pouring water from one jug to another (until either one jug becomes empty or the other becomes full) are needed to reach a goal state, specified in terms of the volume of liquid that must be present in some jug or jugs. - -By Bézout's identity, such puzzles have a solution if and only if the desired volume is a multiple of the greatest common divisor of all the integer volume capacities of the jugs. - -It is a common assumption, stated as part of these puzzles, that the jugs in the puzzle are irregularly shaped and unmarked, so that it is impossible to accurately measure - -any quantity of water that does not completely fill a jug. Other assumptions of these problems may include that no water can be spilled, and that each step pouring water from a source - -jug to a destination jug stops when either the source jug - -is empty or the destination jug is full, whichever happens first. - -The standard puzzle of this kind works with three jugs of capacity 8, 5 and 3 liters. These are initially filled with 8, 0 and 0 liters. In the goal state they should be filled with 4, 4 and 0 liters. - -The puzzle may be solved in seven steps, passing through the following sequence of states (denoted as a bracketed triple of the three volumes of water in the three jugs): - -[8,0,0] → [3,5,0] → [3,2,3] → [6,2,0] → [6,0,2] → [1,5,2] → [1,4,3] → [4,4,0]. - -Cowley writes that this particular puzzle "dates back to mediaeval times" and notes its occurrence in Bachet's 17th-century mathematics textbook. - -The rules are sometimes formulated by adding a tap (source) and a drain (sink) which provide an infinite amount of additional water and an opportunity to pour all liquid from any jug into the sink. Filling a jug to the rim from the tap or pouring the entire contents of a jug into the drain each count as one step while solving the problem. This version of the puzzle was featured in a scene of the 1995 movie Die Hard with a Vengeance. - -This variant is identical to the original, as a third container capable of holding the contents of the first two is mathematically equivalent to a tap or drain capable of filling or emptying both containers. An optimal solution can be easily obtained using a billiard-shape barycentric plot (or a mathematical billiard). - -The graph shows two ways to obtain 4 litres using 3-litre and 5-litre jugs, and a water source and sink on a Cartesian grid with diagonal lines of slope -1. The x and y axes represent the amounts in the 5 and 3 L jugs, respectively.
Starting from (0, 0), we traverse the grid along the line segments, turning only on its boundaries, until we reach the black line denoting 4 L in the 5 L jug. Solid lines denote pouring between jugs, dashed lines denote filling a jug and dotted lines denote emptying a jug. - -Concatenating either solution, traversal of the 4 L line and the reverse of the other solution returns to (0, 0), yielding a cycle graph. If and only if the jugs' volumes are co-prime, every boundary point is visited, giving an algorithm to measure any integer amount up to the sum of the volumes. - -Another variant is when one of the jugs has a known volume of water to begin with; in that case, the achievable volumes are either a multiple of the greatest common divisor between the two containers away from the existing known volume, or from zero. For example, if one jug that holds 8 liters is empty and the other jug that holds 12 liters has 9 liters of water in it to begin with, then with a source (tap) and a drain (sink), these two jugs can measure volumes of 9 liters, 5 liters, 1 liter, as well as 12 liters, 8 liters, 4 liters and 0 liters. The simplest solution for 5 liters is [9,0] → [9,8] → [12,5]; the simplest solution for 4 liters is [9,0] → [12,0] → [4,8]. These solutions can be visualized by red and blue arrows in a Cartesian plot like the one described above. - -If the number of jugs is three, the filling status after each step can be described in a diagram of barycentric coordinates, because the sum of all three integers stays the same throughout all steps. In consequence the steps can be visualized as billiard moves in the (clipped) coordinate system on a triangular lattice. - -A barycentric plot of this kind gives two solutions to the 8, 5 and 3 L puzzle. The yellow area denotes combinations achievable with the jugs. Starting at the square, solid red and dashed blue paths show pourable transitions. When a vertex lands on the dotted black triangle, 4 L has been measured. Another pour to the diamond yields 4 L in each of the 8 and 5 L jugs. - -The blue path is one step shorter than the path for the two-jug puzzle with tap and drain, as we can accumulate 4 L in the 8 L jug, absent in the two-jug variant. diff --git a/wiki/wikipedia/2598.txt b/wiki/wikipedia/2598.txt deleted file mode 100644 index e917677e18d7d74fff5cb880b54b03cbecea8081..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2598.txt +++ /dev/null @@ -1,216 +0,0 @@ -Brouwer's fixed-point theorem is a fixed-point theorem in topology, named after L. E. J. (Bertus) Brouwer. It states that for any continuous function $f$ mapping a compact convex set to itself there is a point $x_0$ such that $f(x_0)=x_0$. The simplest forms of Brouwer's theorem are for continuous functions $f$ from a closed interval $I$ in the real numbers to itself or from a closed disk $D$ to itself. A more general form than the latter is for continuous functions from a convex compact subset K of Euclidean space to itself. - -Among hundreds of fixed-point theorems, Brouwer's is particularly well known, due in part to its use across numerous fields of mathematics. - -In its original field, this result is one of the key theorems characterizing the topology of Euclidean spaces, along with the Jordan curve theorem, the hairy ball theorem and the Borsuk–Ulam theorem. - -This gives it a place among the fundamental theorems of topology. The theorem is also used for proving deep results about differential equations and is covered in most introductory courses on differential geometry.
- -It appears in unlikely fields such as game theory. In economics, Brouwer's fixed-point theorem and its extension, the Kakutani fixed-point theorem, play a central role in the proof of existence of general equilibrium in market economies as developed in the 1950s by economics Nobel prize winners Kenneth Arrow and Gérard Debreu. - -The theorem was first studied in view of work on differential equations by the French mathematicians around Henri Poincaré and Charles Émile Picard. Proving results such as the Poincaré–Bendixson theorem requires the use of topological methods. This work at the end of the 19th century led to several successive versions of the theorem. The general case was first proved in 1910 by Jacques Hadamard and by Luitzen Egbertus Jan Brouwer. - -The theorem has several formulations, depending on the context in which it is used and its degree of generalization. The simplest is sometimes given as follows: - -;In the plane: Every continuous function from a closed disk to itself has at least one fixed point. - -This can be generalized to an arbitrary finite dimension: - -;In Euclidean space: Every continuous function from a closed ball of a Euclidean space into itself has a fixed point. - -A slightly more general version is as follows: - -;Convex compact set: Every continuous function from a convex compact subset K of a Euclidean space to K itself has a fixed point. - -An even more general form is better known under a different name: - -;Schauder fixed point theorem: Every continuous function from a convex compact subset K of a Banach space to K itself has a fixed point. - -The theorem holds only for sets that are compact (thus, in particular, bounded and closed) and convex (or homeomorphic to convex). The following examples show why the pre-conditions are important. - -Consider the function -$$ -f(x) = x+1, -$$ - -which is a continuous function from $\mathbb{R}$ to itself. As it shifts every point to the right, it cannot have a fixed point. The space $\mathbb{R}$ is convex and closed, but not bounded. - -Consider the function -$$ -f(x) = \frac{x+1}{2}, -$$ - -which is a continuous function from the open interval (−1,1) to itself. In this interval, it shifts every point to the right, so it cannot have a fixed point. The space (−1,1) is convex and bounded, but not closed. The function f does have a fixed point for the closed interval [−1,1], namely f(1) = 1. - -Convexity is not strictly necessary for BFPT. Because the properties involved (continuity, being a fixed point) are invariant under homeomorphisms, BFPT is equivalent to forms in which the domain is required to be a closed unit ball $D^n$. For the same reason it holds for every set that is homeomorphic to a closed ball (and therefore also closed, bounded, connected, without holes, etc.). - -The following example shows that BFPT doesn't work for domains with holes. Consider the function $f(x)=-x$, - -which is a continuous function from the unit circle to itself. Since -x≠x holds for any point of the unit circle, f has no fixed point. The analogous example works for the n-dimensional sphere (or any symmetric domain that does not contain the origin). The unit circle is closed and bounded, but it has a hole (and so it is not convex). The function f does have a fixed point for the unit disc, since it takes the origin to itself. - -A formal generalization of BFPT for "hole-free" domains can be derived from the Lefschetz fixed-point theorem.
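In dimension one the theorem is equivalent to the intermediate value theorem applied to g(x) = f(x) - x, and a fixed point can then be located by bisection. A minimal sketch (the example map cos indeed sends [0, 1] into itself):

```python
# Bisection on g(x) = f(x) - x: if f maps [0, 1] into itself, then
# g(0) >= 0 and g(1) <= 0, so a sign change brackets a fixed point.
from math import cos

def fixed_point(f, a, b, tol=1e-12):
    while b - a > tol:
        m = (a + b) / 2
        if f(m) - m >= 0:
            a = m
        else:
            b = m
    return (a + b) / 2

print(fixed_point(cos, 0.0, 1.0))  # ~0.7390851, the unique fixed point of cos
```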
- -The continuous function in this theorem is not required to be bijective or even surjective. - -The theorem has several "real world" illustrations. Here are some examples. - -# Take two sheets of graph paper of equal size with coordinate systems on them, lay one flat on the table and crumple up (without ripping or tearing) the other one and place it, in any fashion, on top of the first so that the crumpled paper does not reach outside the flat one. There will then be at least one point of the crumpled sheet that lies directly above its corresponding point (i.e. the point with the same coordinates) of the flat sheet. This is a consequence of the n = 2 case of Brouwer's theorem applied to the continuous map that assigns to the coordinates of every point of the crumpled sheet the coordinates of the point of the flat sheet immediately beneath it. - -# Take an ordinary map of a country, and suppose that that map is laid out on a table inside that country. There will always be a "You are Here" point on the map which represents that same point in the country. - -# In three dimensions a consequence of the Brouwer fixed-point theorem is that, no matter how much you stir a cocktail in a glass (or think about a milkshake), when the liquid has come to rest, some point in the liquid will end up in exactly the same place in the glass as before you took any action, assuming that the final position of each point is a continuous function of its original position, that the liquid after stirring is contained within the space originally taken up by it, and that the glass (and stirred surface shape) maintain a convex volume. Ordering a cocktail shaken, not stirred defeats the convexity condition ("shaking" being defined as a dynamic series of non-convex inertial containment states in the vacant headspace under a lid). In that case, the theorem would not apply, and thus all points of the liquid disposition are potentially displaced from the original state. - -The theorem is supposed to have originated from Brouwer's observation of a cup of coffee. - -If one stirs to dissolve a lump of sugar, it appears there is always a point without motion. - -He drew the conclusion that at any moment, there is a point on the surface that is not moving. - -The fixed point is not necessarily the point that seems to be motionless, since the centre of the turbulence moves a little bit. - -The result is not intuitive, since the original fixed point may become mobile when another fixed point appears. - -Brouwer is said to have added: "I can formulate this splendid result different, I take a horizontal sheet, and another identical one which I crumple, flatten and place on the other. Then a point of the crumpled sheet is in the same place as on the other sheet." It was later proved by L. E. J. Brouwer in 1909. Jacques Hadamard proved the general case in 1910. - -To understand the prehistory of Brouwer's fixed point theorem one needs to pass through differential equations. At the end of the 19th century, the old problem of the stability of the solar system returned to the focus of the mathematical community. - -Its solution required new methods. As noted by Henri Poincaré, who worked on the three-body problem, there is no hope to find an exact solution: "Nothing is more proper to give us an idea of the hardness of the three-body problem, and generally of all problems of Dynamics where there is no uniform integral and the Bohlin series diverge."
- -He also noted that the search for an approximate solution is no more efficient: "the more we seek to obtain precise approximations, the more the result will diverge towards an increasing imprecision". - -He studied a question analogous to that of the surface movement in a cup of coffee. What can we say, in general, about the trajectories on a surface animated by a constant flow? Poincaré discovered that the answer can be found in what we now call the topological properties in the area containing the trajectory. If this area is compact, i.e. both closed and bounded, then the trajectory either becomes stationary, or it approaches a limit cycle. Poincaré went further; if the area is of the same kind as a disk, as is the case for the cup of coffee, there must necessarily be a fixed point. This fixed point is invariant under all functions which associate to each point of the original surface its position after a short time interval t. If the area is a circular band, or if it is not closed, then this is not necessarily the case. - -To understand differential equations better, a new branch of mathematics was born. Poincaré called it analysis situs. The French Encyclopædia Universalis defines it as the branch which "treats the properties of an object that are invariant if it is deformed in any continuous way, without tearing". In 1886, Poincaré proved a result that is equivalent to Brouwer's fixed-point theorem, although the connection with the subject of this article was not yet apparent. A little later, he developed one of the fundamental tools for better understanding the analysis situs, now known as the fundamental group or sometimes the Poincaré group. This method can be used for a very compact proof of the theorem under discussion. - -Poincaré's method was analogous to that of Émile Picard, a contemporary mathematician who generalized the Cauchy–Lipschitz theorem. Picard's approach is based on a result that would later be formalised by another fixed-point theorem, named after Banach. Instead of the topological properties of the domain, this theorem uses the fact that the function in question is a contraction. - -At the dawn of the 20th century, the interest in analysis situs did not stay unnoticed. However, the necessity of a theorem equivalent to the one discussed in this article was not yet evident. Piers Bohl, a Latvian mathematician, applied topological methods to the study of differential equations. In 1904 he proved the three-dimensional case of our theorem, but his publication was not noticed. - -It was Brouwer, finally, who gave the theorem its first patent of nobility. His goals were different from those of Poincaré. This mathematician was inspired by the foundations of mathematics, especially mathematical logic and topology. His initial interest lay in an attempt to solve Hilbert's fifth problem. In 1909, during a voyage to Paris, he met Henri Poincaré, Jacques Hadamard, and Émile Borel. The ensuing discussions convinced Brouwer of the importance of a better understanding of Euclidean spaces, and were the origin of a fruitful exchange of letters with Hadamard. For the next four years, he concentrated on the proof of certain great theorems on this question. In 1912 he proved the hairy ball theorem for the two-dimensional sphere, as well as the fact that every continuous map from the two-dimensional ball to itself has a fixed point. These two results in themselves were not really new. As Hadamard observed, Poincaré had shown a theorem equivalent to the hairy ball theorem. 
The revolutionary aspect of Brouwer's approach was his systematic use of recently developed tools such as homotopy, the underlying concept of the Poincaré group. In the following year, Hadamard generalised the theorem under discussion to an arbitrary finite dimension, but he employed different methods. Hans Freudenthal comments on the respective roles as follows: "Compared to Brouwer's revolutionary methods, those of Hadamard were very traditional, but Hadamard's participation in the birth of Brouwer's ideas resembles that of a midwife more than that of a mere spectator." - -Brouwer's approach yielded its fruits, and in 1910 he also found a proof that was valid for any finite dimension, as well as other key theorems such as the invariance of dimension. In the context of this work, Brouwer also generalized the Jordan curve theorem to arbitrary dimension and established the properties connected with the degree of a continuous mapping. This branch of mathematics, originally envisioned by Poincaré and developed by Brouwer, changed its name. In the 1930s, analysis situs became algebraic topology. - -The theorem proved its worth in more than one way. During the 20th century numerous fixed-point theorems were developed, and even a branch of mathematics called fixed-point theory. - -Brouwer's theorem is probably the most important. It is also among the foundational theorems on the topology of topological manifolds and is often used to prove other important results such as the Jordan curve theorem. - -Besides the fixed-point theorems for more or less contracting functions, there are many that have emerged directly or indirectly from the result under discussion. A continuous map from a closed ball of Euclidean space to its boundary cannot be the identity on the boundary. Similarly, the Borsuk–Ulam theorem says that a continuous map from the n-dimensional sphere to Rn has a pair of antipodal points that are mapped to the same point. In the finite-dimensional case, the Lefschetz fixed-point theorem provided from 1926 a method for counting fixed points. In 1930, Brouwer's fixed-point theorem was generalized to Banach spaces. This generalization is known as Schauder's fixed-point theorem, a result generalized further by S. Kakutani to multivalued functions. One also meets the theorem and its variants outside topology. It can be used to prove the Hartman-Grobman theorem, which describes the qualitative behaviour of certain differential equations near certain equilibria. Similarly, Brouwer's theorem is used for the proof of the Central Limit Theorem. The theorem can also be found in existence proofs for the solutions of certain partial differential equations. - -Other areas are also touched. In game theory, John Nash used the theorem to prove that in the game of Hex there is a winning strategy for white. In economics, P. Bich explains that certain generalizations of the theorem show that its use is helpful for certain classical problems in game theory and generally for equilibria (Hotelling's law), financial equilibria and incomplete markets. - -Brouwer's celebrity is not exclusively due to his topological work. The proofs of his great topological theorems are not constructive, and Brouwer's dissatisfaction with this is partly what led him to articulate the idea of constructivity. He became the originator and zealous defender of a way of formalising mathematics that is known as intuitionism, which at the time made a stand against set theory. Brouwer disavowed his original proof of the fixed-point theorem. 
The first algorithm to approximate a fixed point was proposed by Herbert Scarf. A subtle aspect of Scarf's algorithm is that it finds a point that is almost fixed by a function f, but in general cannot find a point that is close to an actual fixed point. In mathematical language, if ε is chosen to be very small, Scarf's algorithm can be used to find a point x such that f(x) is very close to x, i.e., $d(f(x),x) < \varepsilon $. But Scarf's algorithm cannot be used to find a point x such that x is very close to a fixed point: we cannot guarantee $d(x,y) < \varepsilon,$ where $f(y)=y.$ Often this latter condition is what is meant by the informal phrase "approximating a fixed point". - -Brouwer's original 1911 proof relied on the notion of the degree of a continuous mapping. Modern accounts of the proof can also be found in the literature. - -Let $K=\overline{B(0)}$ denote the closed unit ball in $\mathbb R^n$ centered at the origin. Suppose for simplicity that $f:K\to K$ is continuously differentiable. A regular value of $f$ is a point $p\in B(0)$ such that the Jacobian of $f$ is non-singular at every point of the preimage of $p$. In particular, by the inverse function theorem, every point of the preimage of $p$ lies in $B(0)$ (the interior of $K$). The degree of $f$ at a regular value $p\in B(0)$ is defined as the sum of the signs of the Jacobian determinant of $f$ over the preimages of $p$ under $f$: -$$ -\operatorname{deg}_p(f) = \sum_{x\in f^{-1}(p)} \operatorname{sign}\left(\det(Df(x))\right). -$$ - -The degree is, roughly speaking, the number of "sheets" of the preimage of f lying over a small open set around p, with sheets counted oppositely if they are oppositely oriented. This is thus a generalization of winding number to higher dimensions. - -The degree satisfies the property of homotopy invariance: let $f$ and $g$ be two continuously differentiable functions, and $H_t(x)=tf+(1-t)g$ for $0\le t\le 1$. Suppose that the point $p$ is a regular value of $H_t$ for all t. Then $\deg_p f = \deg_p g$. - -If there is no fixed point on the boundary of $K$, then the function -$$ -g(x)=\frac{x-f(x)}{\sup_{x\in K}\left|x-f(x)\right|} -$$ - -is well-defined, and -$$ -H(t,x) = \frac{x-tf(x)}{\sup_{x\in K}\left|x-tf(x)\right|} -$$ - -defines a homotopy from the identity function to it. The identity function has degree one at every point. In particular, the identity function has degree one at the origin, so $g$ also has degree one at the origin. As a consequence, the preimage $g^{-1}(0)$ is not empty. The elements of $g^{-1}(0)$ are precisely the fixed points of the original function f. - -This requires some work to make fully general. The definition of degree must be extended to singular values of f, and then to continuous functions. The more modern advent of homology theory simplifies the construction of the degree, and so has become a standard proof in the literature. - -The proof uses the observation that the boundary of the n-disk Dn is Sn−1, the (n − 1)-sphere. - -Suppose, for contradiction, that a continuous function f : Dn → Dn has no fixed point. This means that, for every point x in Dn, the points x and f(x) are distinct. Because they're distinct, for every point x in Dn, we can construct a unique ray from f(x) to x and follow the ray until it intersects the boundary Sn−1. By calling this intersection point F(x), we define a function F : Dn → Sn−1 sending each point in the disk to its corresponding intersection point on the boundary.
As a special case, whenever x itself is on the boundary, then the intersection point F(x) must be x. - -Consequently, F is a special type of continuous function known as a retraction: every point of the codomain (in this case Sn−1) is a fixed point of F. - -Intuitively it seems unlikely that there could be a retraction of Dn onto Sn−1, and in the case n = 1, the impossibility is more basic, because S0 (i.e., the endpoints of the closed interval D1) is not even connected. The case n = 2 is less obvious, but can be proven by using basic arguments involving the fundamental groups of the respective spaces: the retraction would induce a surjective group homomorphism from the fundamental group of D2 to that of S1, but the latter group is isomorphic to Z while the first group is trivial, so this is impossible. The case n = 2 can also be proven by contradiction based on a theorem about non-vanishing vector fields. - -For n > 2, however, proving the impossibility of the retraction is more difficult. One way is to make use of homology groups: the homology Hn−1(Dn) is trivial, while Hn−1(Sn−1) is infinite cyclic. This shows that the retraction is impossible, because again the retraction would induce an injective group homomorphism from the latter to the former group. - -To prove that a continuous map $F$ has fixed points, one can assume that it is smooth, because if a map has no fixed points then $F_\epsilon := F \star \varphi_\epsilon$, its convolution with an appropriate mollifier (a smooth function of sufficiently small support and integral one), will produce a smooth function with no fixed points. As in the proof using homology, the problem is reduced to proving that there is no smooth retraction $F$ from the ball $B$ onto its boundary $\partial B$. If $\omega$ is a volume form on the boundary then by Stokes' Theorem, -$$ -0<\int_{\partial B}\omega = \int_{\partial B}F^*(\omega) = \int_BdF^*(\omega)= \int_BF^*(d\omega)=\int_BF^*(0) = 0 -$$ - -giving a contradiction. - -More generally, this shows that there is no smooth retraction from any non-empty smooth orientable compact manifold onto its boundary. The proof using Stokes' theorem is closely related to the proof using homology, because the form $\omega$ generates the de Rham cohomology group $H^{n-1}(\partial B)$ which is isomorphic to the homology group $H_{n-1}(\partial B)$ by de Rham's Theorem. - -The BFPT can be proved using Sperner's lemma. We now give an outline of the proof for the special case in which f is a function from the standard n-simplex, $\Delta^n,$ to itself, where -$$ -\Delta^n = \left\{P\in\mathbb{R}^{n+1}\mid\sum_{i = 0}^{n}{P_i} = 1 \text{ and } P_i \ge 0 \text{ for all } i\right\}. -$$ - -For every point $P\in \Delta^n,$ also $f(P)\in \Delta^n.$ Hence the sum of their coordinates is equal: -$$ -\sum_{i = 0}^{n}{P_i} = 1 = \sum_{i = 0}^{n}{f(P)_i} -$$ - -Hence, by the pigeonhole principle, for every $P\in \Delta^n,$ there must be an index $j \in \{0, \ldots, n\}$ such that the $j$th coordinate of $P$ is greater than or equal to the $j$th coordinate of its image under f: -$$ -P_j \geq f(P)_j. -$$ - -Moreover, if $P$ lies on a k-dimensional sub-face of $\Delta^n,$ then by the same argument, the index $j$ can be selected from among the k + 1 coordinates which are not zero on this sub-face. - -We now use this fact to construct a Sperner coloring. For every triangulation of $\Delta^n,$ the color of every vertex $P$ is an index $j$ such that $f(P)_j \leq P_j.$ - -By construction, this is a Sperner coloring. 
Hence, by Sperner's lemma, there is an n-dimensional simplex whose vertices are colored with the entire set of n + 1 available colors. - -Because f is continuous, this simplex can be made arbitrarily small by choosing an arbitrarily fine triangulation. Hence, there must be a point $P$ which satisfies the labeling condition in all coordinates: $f(P)_j \leq P_j$ for all $j.$ - -Because the sum of the coordinates of $P$ and $f(P)$ must be equal, all these inequalities must actually be equalities. But this means that: -$$ -f(P) = P. -$$ - -That is, $P$ is a fixed point of $f.$ - -There is also a quick proof, by Morris Hirsch, based on the impossibility of a differentiable retraction. The indirect proof starts by noting that the map f can be approximated by a smooth map retaining the property of not fixing a point; this can be done by using the Weierstrass approximation theorem, for example. One then defines a retraction as above which must now be differentiable. Such a retraction must have a non-singular value, by Sard's theorem, which is also non-singular for the restriction to the boundary (which is just the identity). Thus the inverse image would be a 1-manifold with boundary. The boundary would have to contain at least two end points, both of which would have to lie on the boundary of the original ball—which is impossible in a retraction. - -R. Bruce Kellogg, Tien-Yien Li, and James A. Yorke turned Hirsch's proof into a computable proof by observing that the retraction is in fact defined everywhere except at the fixed points. For almost any point q on the boundary (assuming it is not a fixed point), the 1-manifold with boundary mentioned above does exist, and the only possibility is that it leads from q to a fixed point. It is an easy numerical task to follow such a path from q to the fixed point, so the method is essentially computable. Later authors gave a conceptually similar path-following version of the homotopy proof which extends to a wide variety of related problems. - -A variation of the preceding proof does not employ Sard's theorem, and goes as follows. If $r\colon B\to \partial B$ is a smooth retraction, one considers the smooth deformation $g^t(x):=t r(x)+(1-t)x,$ and the smooth function -$$ -\varphi(t):=\int_B \det D g^t(x) dx. -$$ - -Differentiating under the sign of integral it is not difficult to check that $\varphi'(t) = 0$ for all t, so φ is a constant function, which is a contradiction because φ(0) is the n-dimensional volume of the ball, while φ(1) is zero. The geometric idea is that φ(t) is the oriented area of gt(B) (that is, the Lebesgue measure of the image of the ball via gt, taking into account multiplicity and orientation), and should remain constant (as it is very clear in the one-dimensional case). On the other hand, as the parameter t passes from 0 to 1 the map gt transforms continuously from the identity map of the ball, to the retraction r, which is a contradiction since the oriented area of the identity coincides with the volume of the ball, while the oriented area of r is necessarily 0, as its image is the boundary of the ball, a set of null measure. - -A quite different proof given by David Gale is based on the game of Hex. The basic theorem about Hex is that no game can end in a draw. This is equivalent to the Brouwer fixed-point theorem for dimension 2. By considering n-dimensional versions of Hex, one can prove in general that Brouwer's theorem is equivalent to the determinacy theorem for Hex.
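As a crude illustration of the weaker guarantee discussed earlier (finding a point x with d(f(x), x) < ε rather than a point near a true fixed point), one can simply scan a fine grid. This naive sketch is not Scarf's algorithm, and the map f below is an arbitrary illustrative choice:

```python
# Naive epsilon-fixed-point search on the unit square: scan a grid and
# report the point that f moves the least.
from math import cos, sin, dist

def f(p):
    x, y = p
    return (0.5 + 0.4 * cos(y), 0.5 + 0.4 * sin(x))  # image lies in [0.1, 0.9]^2

N = 400  # grid resolution
best = min(((i / N, j / N) for i in range(N + 1) for j in range(N + 1)),
           key=lambda p: dist(p, f(p)))
print(best, dist(best, f(best)))  # a point moved by only a small epsilon
```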
- -The Lefschetz fixed-point theorem says that if a continuous map f from a finite simplicial complex B to itself has only isolated fixed points, then the number of fixed points counted with multiplicities (which may be negative) is equal to the Lefschetz number -$$ -\displaystyle \sum_n(-1)^n\operatorname{Tr}(f|H_n(B)) -$$ - -and in particular if the Lefschetz number is nonzero then f must have a fixed point. If B is a ball (or more generally is contractible) then the Lefschetz number is one because the only non-zero homology group is $H_0(B)$, and f acts as the identity on this group, so f has a fixed point. - -In reverse mathematics, Brouwer's theorem can be proved in the system WKL0, and conversely, over the base system RCA0, Brouwer's theorem for a square implies the weak König's lemma, so this gives a precise description of the strength of Brouwer's theorem. - -The Brouwer fixed-point theorem forms the starting point of a number of more general fixed-point theorems. - -The straightforward generalization to infinite dimensions, i.e. using the unit ball of an arbitrary Hilbert space instead of Euclidean space, is not true. The main problem here is that the unit balls of infinite-dimensional Hilbert spaces are not compact. For example, in the Hilbert space ℓ2 of square-summable real (or complex) sequences, consider the map f : ℓ2 → ℓ2 which sends a sequence (xn) from the closed unit ball of ℓ2 to the sequence (yn) defined by -$$ -y_0 = \sqrt{1 - \|x\|_2^2}\quad\text{ and}\quad y_n = x_{n-1} \text{ for } n \geq 1. -$$ - -It is not difficult to check that this map is continuous, has its image in the unit sphere of ℓ2, but does not have a fixed point. (Indeed, a fixed point would satisfy $x_n = x_{n-1}$ for all $n \geq 1$, so it would be a constant sequence; the only constant sequence in ℓ2 is zero, which contradicts $x_0 = \sqrt{1 - \|x\|_2^2} = 1$.) - -The generalizations of the Brouwer fixed-point theorem to infinite dimensional spaces therefore all include a compactness assumption of some sort, and also often an assumption of convexity. See fixed-point theorems in infinite-dimensional spaces for a discussion of these theorems. - -There is also a finite-dimensional generalization to a larger class of spaces: If $X$ is a product of finitely many chainable continua, then every continuous function $f:X\rightarrow X$ has a fixed point, where a chainable continuum is a (usually but in this case not necessarily metric) compact Hausdorff space of which every open cover has a finite open refinement $\{U_1,\ldots,U_m\}$, such that $U_i \cap U_j \neq \emptyset$ if and only if $|i-j| \leq 1$. Examples of chainable continua include compact connected linearly ordered spaces and in particular closed intervals of real numbers. - -The Kakutani fixed point theorem generalizes the Brouwer fixed-point theorem in a different direction: it stays in Rn, but considers upper hemi-continuous set-valued functions (functions that assign to each point of the set a subset of the set). It also requires compactness and convexity of the set. - -The Lefschetz fixed-point theorem applies to (almost) arbitrary compact topological spaces, and gives a condition in terms of singular homology that guarantees the existence of fixed points; this condition is trivially satisfied for any map in the case of Dn.
diff --git a/wiki/wikipedia/2599.txt b/wiki/wikipedia/2599.txt deleted file mode 100644 index b4755476cb5bf6f6446e93a13901596581a0d28b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2599.txt +++ /dev/null @@ -1,49 +0,0 @@ -In mathematics, the Reeb sphere theorem, named after Georges Reeb, states that - -A closed oriented connected manifold $M^n$ that admits a singular foliation having only centers is homeomorphic to the sphere $S^n$ and the foliation has exactly two singularities. - -A singularity of a foliation F is of Morse type if in its small neighborhood all leaves of the foliation are level sets of a Morse function, the singularity being a critical point of the function. The singularity is a center if it is a local extremum of the function; otherwise, the singularity is a saddle. - -The number of centers c and the number of saddles $s$, and specifically the difference $c-s$, are tightly connected with the manifold topology. - -We denote by $\operatorname{ind} p = \min(k,n-k)$ the index of a singularity $p$, where k is the index of the corresponding critical point of a Morse function. In particular, a center has index 0, and the index of a saddle is at least 1. - -A Morse foliation F on a manifold M is a singular transversely oriented codimension one foliation of class $C^2$ with isolated singularities such that: - -* each singularity of F is of Morse type, - -* each singular leaf L contains a unique singularity p; in addition, if $\operatorname{ind} p = 1$ then $L\setminus p$ is not connected. - -This is the case $c>s=0$, the case without saddles. - -Theorem: Let $M^n$ be a closed oriented connected manifold of dimension $n\ge 2$. Assume that $M^n$ admits a $C^1$-transversely oriented codimension one foliation $F$ with a non empty set of singularities, all of them centers. Then the singular set of $F$ consists of two points and $M^n$ is homeomorphic to the sphere $S^n$. - -It is a consequence of the Reeb stability theorem. - -The more general case is $c>s\ge 0.$ - -In 1978, Edward Wagneur generalized the Reeb sphere theorem to Morse foliations with saddles. He showed that the number of centers cannot be too large compared with the number of saddles: notably, $c\le s+2$. So there are exactly two cases when $c>s$: - -(1) $c=s+2, $ - -(2) $c=s+1. $ - -He obtained a description of the manifolds admitting a foliation with singularities that satisfy (1). - -Theorem: Let $M^n$ be a compact connected manifold admitting a Morse foliation $F$ with $c$ centers and $s$ saddles. Then $c\le s+2$. - -In case $c=s+2$, - -* $M$ is homeomorphic to $S^n$, - -* all saddles have index 1, - -* each regular leaf is diffeomorphic to $S^{n-1}$. - -Finally, in 2008, César Camacho and Bruno Scardua considered the case (2), $c=s+1$. This is possible in a small number of low dimensions. - -Theorem: Let $M^n$ be a compact connected manifold and $F$ a Morse foliation on $M$. If $c = s + 1$, then - -* $n=2,4,8$ or $16$, - -* $M^n$ is an Eells–Kuiper manifold. diff --git a/wiki/wikipedia/26.txt b/wiki/wikipedia/26.txt deleted file mode 100644 index b3eb78c55005f7beb0c766e8232ee66ce78e2e5f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/26.txt +++ /dev/null @@ -1,13 +0,0 @@ -In computer science, E-LOTOS (Enhanced LOTOS) is a formal specification language designed between 1993 and 1999, and standardized by ISO in 2001. - -E-LOTOS was initially intended to be a revision of the LOTOS language standardized by ISO 8807 in 1989, but the revision turned out to be profound, leading to a new specification language.
- -The starting point for the revision of LOTOS was the PhD thesis of Ed Brinksma, who had been the Rapporteur at ISO of the LOTOS standard. - -In 1993, the initial goals of the definition of E-LOTOS were stated in the ISO/IEC JTC1/N2802 announcement. - -In 1997, when the language definition reached the maturity level of an ISO Committee Draft, an announcement was posted describing the main features of E-LOTOS. - -The following document recalls the milestones of the E-LOTOS definition project. - -E-LOTOS has inspired descendant languages, among them LOTOS NT and LNT. diff --git a/wiki/wikipedia/260.txt b/wiki/wikipedia/260.txt deleted file mode 100644 index 7e6002d7d61515cbf80b3ec04560079d2ad82e66..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/260.txt +++ /dev/null @@ -1,25 +0,0 @@ -Syncdocs is backup and file synchronization software for Google Drive. Syncdocs uses cloud computing to enable users to back up and synchronize Windows computer files to Google Documents and Google Drive accounts. - -* Full data migration to and from the Google Drive cloud. - -* The program supports editing of local Microsoft Office documents online using Google Docs. - -* The Windows folder structure is replicated online. - -* Ability to sync network folders and external USB drives to Google Drive. - -* Up to 16 accounts can be synced to Google Drive concurrently. - -* Synchronization of file and folder changes and contacts between Google and local computers. - -* Compression support. - -* End-to-end Google Drive encryption using the 256-bit Advanced Encryption Standard. - -* File versioning and Unicode filename support. - -* File sharing. - -* Drive mapping of Google Drive to a local drive letter. - -The Google Drive client shares some of the same basic features as Syncdocs. The main differences are Syncdocs' ability to sync multiple Google accounts concurrently and its ability to sync any folder on the PC or network. However, Syncdocs does not have an OS X or Android client, which Google Drive does. diff --git a/wiki/wikipedia/2600.txt b/wiki/wikipedia/2600.txt deleted file mode 100644 index db96581588ee47c997443ed15ceca0718e9cc114..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2600.txt +++ /dev/null @@ -1,13 +0,0 @@ -The Bottleneck traveling salesman problem (bottleneck TSP) is a problem in discrete or combinatorial optimization. The problem is to find the Hamiltonian cycle (visiting each node exactly once) in a weighted graph which minimizes the weight of the highest-weight edge of the cycle. It was first formulated by Gilmore with some additional constraints, and in its full generality by Garfinkel. - -Another reduction, from the bottleneck TSP to the usual TSP (where the goal is to minimize the sum of edge lengths), allows any algorithm for the usual TSP to also be used to solve the bottleneck TSP. - -If the edge weights of the bottleneck TSP are replaced by any other numbers that have the same relative order, then the bottleneck solution remains unchanged. - -If, in addition, each number in the sequence exceeds the sum of all smaller numbers, then the bottleneck solution will also equal the usual TSP solution. - -For instance, such a result may be attained by resetting each weight to $n^i$, where n is the number of vertices in the graph and i is the rank of the original weight of the edge in the sorted sequence of weights. Following this transformation, for example, the Held–Karp algorithm could be used to solve the bottleneck TSP in time $O(n^2 2^n)$. 
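The rank-based weight substitution just described is mechanical enough to sketch in code. The following Python snippet is my own illustration, not part of the original article; the helper name and the toy weights are invented, and the transformed weights grow far too quickly for this to be practical beyond tiny instances.

```python
def bottleneck_transform(weights):
    """Replace each edge weight by n**i, where i is the edge's rank in the
    sorted sequence of all edge weights (ties broken arbitrarily) and n is
    the number of vertices. weights maps frozenset({u, v}) -> weight."""
    vertices = set().union(*weights)
    n = len(vertices)
    transformed = {}
    for i, (edge, _) in enumerate(sorted(weights.items(), key=lambda kv: kv[1]), start=1):
        # n**i exceeds n**1 + ... + n**(i - 1), so a tour's total weight is
        # dominated by its highest-ranked edge: a minimum-sum tour under the
        # new weights is a bottleneck-optimal tour under the old ones.
        transformed[edge] = n ** i
    return transformed

# invented 4-vertex example
w = {frozenset({0, 1}): 5, frozenset({1, 2}): 2, frozenset({2, 3}): 5,
     frozenset({0, 3}): 7, frozenset({0, 2}): 9, frozenset({1, 3}): 2}
print(bottleneck_transform(w))
```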
- -This approximation ratio is best possible. Indeed, any unweighted graph can be transformed into a metric space by setting its edge weights to 1 and setting the distance between all nonadjacent pairs of vertices to 2. An approximation with ratio better than 2 in this metric space could be used to determine whether the original graph contains a Hamiltonian cycle, an NP-complete problem. - -Without the assumption that the input is a metric space, no finite approximation ratio is possible. diff --git a/wiki/wikipedia/2601.txt b/wiki/wikipedia/2601.txt deleted file mode 100644 index 18fdaa219aa27ff670236396ba4736ad193345cc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2601.txt +++ /dev/null @@ -1,23 +0,0 @@ -In mathematics, a Newman–Shanks–Williams prime (NSW prime) is a prime number p which can be written in the form -$$ -S_{2m+1}=\frac{\left(1 + \sqrt{2}\right)^{2m+1} + \left(1 - \sqrt{2}\right)^{2m+1}}{2}. -$$ - -NSW primes were first described by Morris Newman, Daniel Shanks and Hugh C. Williams in 1981 during the study of finite simple groups with square order. - -The first few NSW primes are 7, 41, 239, 9369319, 63018038201, …, corresponding to the indices 3, 5, 7, 19, 29, …. - -The sequence S alluded to in the formula can be described by the following recurrence relation: -$$ -S_0=1 -$$ -$$ -S_1=1 -$$ -$$ -S_n=2S_{n-1}+S_{n-2}\qquad\text{for all }n\geq 2. -$$ - -The first few terms of the sequence are 1, 1, 3, 7, 17, 41, 99, …. Each term in this sequence is half the corresponding term in the sequence of companion Pell numbers. These numbers also appear in the continued fraction convergents to $\sqrt{2}$. diff --git a/wiki/wikipedia/2602.txt b/wiki/wikipedia/2602.txt deleted file mode 100644 index 45b8eb9ac2a6d2d9f2a86eacfae9d47e8b3b0b27..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2602.txt +++ /dev/null @@ -1 +0,0 @@ -Simpath is an algorithm introduced by Donald Knuth that constructs a zero-suppressed decision diagram (ZDD) representing all simple paths between two vertices in a given graph. diff --git a/wiki/wikipedia/2603.txt b/wiki/wikipedia/2603.txt deleted file mode 100644 index 1defc4eea96c5b44ac3055cc76d7db2a91672928..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2603.txt +++ /dev/null @@ -1,67 +0,0 @@ -The cokernel of a linear mapping of vector spaces f : X → Y is the quotient space Y / im(f) of the codomain of f by the image of f. The dimension of the cokernel is called the corank of f. - -Cokernels are dual to the kernels of category theory, hence the name: the kernel is a subobject of the domain (it maps to the domain), while the cokernel is a quotient object of the codomain (it maps from the codomain). - -Intuitively, given an equation f(x) = y that one is seeking to solve, the cokernel measures the constraints that y must satisfy for this equation to have a solution – the obstructions to a solution – while the kernel measures the degrees of freedom in a solution, if one exists. This is elaborated below. - -More generally, the cokernel of a morphism f : X → Y in some category (e.g. a homomorphism between groups or a bounded linear operator between Hilbert spaces) is an object Q and a morphism q : Y → Q such that the composition q f is the zero morphism of the category, and furthermore q is universal with respect to this property. Often the map q is understood, and Q itself is called the cokernel of f. 
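A standard concrete instance may help fix ideas (this worked example is mine, not part of the original text). For the doubling map on the integers, the image is the subgroup of even integers, so
$$
\operatorname{coker}(\mathbb{Z} \xrightarrow{\cdot 2} \mathbb{Z}) = \mathbb{Z}/2\mathbb{Z},
$$
while the kernel is trivial: the map is injective but not surjective, and the cokernel records exactly the failure of surjectivity.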
- -In many situations in abstract algebra, such as for abelian groups, vector spaces or modules, the cokernel of the homomorphism f : X → Y is the quotient of Y by the image of f. In topological settings, such as with bounded linear operators between Hilbert spaces, one typically has to take the closure of the image before passing to the quotient. - -One can define the cokernel in the general framework of category theory. In order for the definition to make sense, the category in question must have zero morphisms. The cokernel of a morphism f : X → Y is defined as the coequalizer of f and the zero morphism $0_{XY} : X \to Y$. - -Explicitly, this means the following. The cokernel of f : X → Y is an object Q together with a morphism q : Y → Q such that the diagram
    [Diagram: the composite q ∘ f : X → Y → Q equals the zero morphism]
    - -commutes. Moreover, the morphism q must be universal for this diagram, i.e. any other such q′ : Y → Q′ can be obtained by composing q with a unique morphism u : Q → Q′: - -
    [Diagram: any q′ : Y → Q′ with q′ ∘ f = 0 factors uniquely through q via u : Q → Q′]
    - -As with all universal constructions, the cokernel, if it exists, is unique up to a unique isomorphism, or more precisely: if q : Y → Q and q′ : Y → Q′ are two cokernels of f : X → Y, then there exists a unique isomorphism u : Q → Q′ with q′ = u q. - -Like all coequalizers, the cokernel q : Y → Q is necessarily an epimorphism. Conversely, an epimorphism is called normal (or conormal) if it is the cokernel of some morphism. A category is called conormal if every epimorphism is normal (e.g. the category of groups is conormal). - -In the category of groups, the cokernel of a group homomorphism f : G → H is the quotient of H by the normal closure of the image of f. In the case of abelian groups, since every subgroup is normal, the cokernel is just H modulo the image of f: -$$ -\operatorname{coker}(f) = H / \operatorname{im}(f). -$$ - -In a preadditive category, it makes sense to add and subtract morphisms. In such a category, the coequalizer of two morphisms f and g (if it exists) is just the cokernel of their difference: -$$ -\operatorname{coeq}(f, g) = \operatorname{coker}(g - f). -$$ - -In an abelian category (a special kind of preadditive category) the image and coimage of a morphism f are given by -$$ -\begin{align} -\operatorname{im}(f) &= \ker(\operatorname{coker} f), \\ -\operatorname{coim}(f) &= \operatorname{coker}(\ker f). -\end{align} -$$ - -In particular, every abelian category is normal (and conormal as well). That is, every monomorphism m can be written as the kernel of some morphism. Specifically, m is the kernel of its own cokernel: -$$ -m = \ker(\operatorname{coker}(m)) -$$ - -The cokernel can be thought of as the space of constraints that an equation must satisfy, as the space of obstructions, just as the kernel is the space of solutions. - -Formally, one may connect the kernel and the cokernel of a map T: V → W by the exact sequence -$$ -0 \to \ker T \to V \overset T \longrightarrow W \to \operatorname{coker} T \to 0. -$$ - -These can be interpreted thus: given a linear equation T(v) = w to solve, - -* the kernel is the space of solutions to the homogeneous equation T(v) = 0, and its dimension is the number of degrees of freedom in solutions to T(v) = w, if they exist; - -* the cokernel is the space of constraints on w that must be satisfied if the equation is to have a solution, and its dimension is the number of independent constraints that must be satisfied for the equation to have a solution. - -The dimension of the cokernel plus the dimension of the image (the rank) add up to the dimension of the target space, as the dimension of the quotient space W / T(V) is simply the dimension of the space minus the dimension of the image. - -As a simple example, consider the map $T: \mathbb{R}^2 \to \mathbb{R}^2$ given by T(x, y) = (0, y). Then for an equation T(x, y) = (a, b) to have a solution, we must have a = 0 (one constraint), and in that case the solution space is (x, b), or equivalently (0, b) + (x, 0) (one degree of freedom). The kernel may be expressed as the subspace (x, 0) ⊆ V: the value of x is the freedom in a solution. The cokernel may be expressed via the real-valued map (a, b) ↦ a on W: given a vector (a, b), the value of a is the obstruction to there being a solution. - -Additionally, the cokernel can be thought of as something that "detects" surjections in the same way that the kernel "detects" injections. A map is injective if and only if its kernel is trivial, and a map is surjective if and only if its cokernel is trivial, or in other words, if W = im(T). 
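In finite dimensions these dimension counts are just rank computations. Here is a minimal numpy sketch of my own (the matrix encodes the example T(x, y) = (0, y) above; the library choice is mine, not the article's):

```python
import numpy as np

T = np.array([[0.0, 0.0],   # T(x, y) = (0, y) as a matrix acting on column vectors
              [0.0, 1.0]])

rank = np.linalg.matrix_rank(T)
dim_ker = T.shape[1] - rank    # degrees of freedom in solutions (here: the x-axis)
dim_coker = T.shape[0] - rank  # independent constraints on (a, b), i.e. the corank (here: a = 0)

print(dim_ker, dim_coker)      # 1 1
# dim coker + dim im equals the dimension of the target, matching the exact sequence
assert dim_coker + rank == T.shape[0]
```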
diff --git a/wiki/wikipedia/2604.txt b/wiki/wikipedia/2604.txt deleted file mode 100644 index 5ded9a39a069c49c9f0581b042ba9024e4547d8a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2604.txt +++ /dev/null @@ -1,65 +0,0 @@ -In logic and related fields such as mathematics and philosophy, "if and only if" (shortened as "iff") is a biconditional logical connective between statements, where either both statements are true or both are false. - -The connective is biconditional (a statement of material equivalence), and can be likened to the standard material conditional ("only if", equal to "if ... then") combined with its reverse ("if"); hence the name. The result is that the truth of either one of the connected statements requires the truth of the other (i.e. either both statements are true, or both are false), though it is controversial whether the connective thus defined is properly rendered by the English "if and only if"—with its pre-existing meaning. For example, P if and only if Q means that P is true whenever Q is true, and the only case in which P is true is if Q is also true, whereas in the case of P if Q, there could be other scenarios where P is true and Q is false. - -In writing, phrases commonly used as alternatives to P "if and only if" Q include: Q is necessary and sufficient for P, P is equivalent (or materially equivalent) to Q (compare with material implication), P precisely if Q, P precisely (or exactly) when Q, P exactly in case Q, and P just in case Q. Some authors regard "iff" as unsuitable in formal writing; others consider it a "borderline case" and tolerate its use. - -In logical formulae, logical symbols such as $\leftrightarrow$ and $\Leftrightarrow$ are used instead of these phrases; see below. - -The truth table of P $\Leftrightarrow$ Q is as follows: it is true when P and Q have the same truth value (P = T, Q = T gives T; P = F, Q = F gives T) and false when they differ (P = T, Q = F gives F; P = F, Q = T gives F). - -It is equivalent to that produced by the XNOR gate, and opposite to that produced by the XOR gate. - -The corresponding logical symbols are "↔", "$\Leftrightarrow$", and "$\equiv$". - -It is somewhat unclear how "iff" was meant to be pronounced. In current practice, the single 'word' "iff" is almost always read as the four words "if and only if". However, in the preface of General Topology, Kelley suggests that it should be read differently: "In some cases where mathematical content requires 'if and only if' and euphony demands something less I use Halmos' 'iff'". The authors of one discrete mathematics textbook suggest: "Should you need to pronounce iff, really hang on to the 'ff' so that people hear the difference from 'if'", implying that "iff" could be pronounced with a lengthened "f". - -Technically, definitions are always "if and only if" statements; some texts — such as Kelley's General Topology — follow the strict demands of logic, and use "if and only if" or iff in definitions of new terms. However, this logically correct usage of "if and only if" is relatively uncommon, as the majority of textbooks, research papers and articles (including English Wikipedia articles) follow the special convention to interpret "if" as "if and only if" whenever a mathematical definition is involved (as in "a topological space is compact if every open cover has a finite subcover"). - -* "Madison will eat the fruit if it is an apple." (equivalent to "Only if Madison will eat the fruit, can it be an apple" or "Madison will eat the fruit ← the fruit is an apple") - -*: This states that Madison will eat fruits that are apples. It does not, however, exclude the possibility that Madison might also eat bananas or other types of fruit. 
All that is known for certain is that she will eat any and all apples that she happens upon. That the fruit is an apple is a sufficient condition for Madison to eat the fruit. - -* "Madison will eat the fruit only if it is an apple." (equivalent to "If Madison will eat the fruit, then it is an apple" or "Madison will eat the fruit → the fruit is an apple") - -*: This states that the only fruit Madison will eat is an apple. It does not, however, exclude the possibility that Madison will refuse an apple if it is made available, in contrast with (1), which requires Madison to eat any available apple. In this case, that a given fruit is an apple is a necessary condition for Madison to be eating it. It is not a sufficient condition, since Madison might not eat all the apples she is given. - -* "Madison will eat the fruit if and only if it is an apple." (equivalent to "Madison will eat the fruit ↔ the fruit is an apple") - -*: This statement makes it clear that Madison will eat all and only those fruits that are apples. She will not leave any apple uneaten, and she will not eat any other type of fruit. That a given fruit is an apple is both a necessary and a sufficient condition for Madison to eat the fruit. - -Sufficiency is the converse of necessity. That is to say, given P→Q (i.e. if P then Q), P would be a sufficient condition for Q, and Q would be a necessary condition for P. Also, given P→Q, it is true that ¬Q→¬P (where ¬ is the negation operator, i.e. "not"). This means that the relationship between P and Q, established by P→Q, can be expressed in the following, all equivalent, ways: - -P is sufficient for Q - -Q is necessary for P - -¬Q is sufficient for ¬P - -¬P is necessary for ¬Q - -As an example, take the first example above, which states P→Q, where P is "the fruit in question is an apple" and Q is "Madison will eat the fruit in question". The following are four equivalent ways of expressing this very relationship: - -If the fruit in question is an apple, then Madison will eat it. - -Only if Madison will eat the fruit in question, is it an apple. - -If Madison will not eat the fruit in question, then it is not an apple. - -Only if the fruit in question is not an apple, will Madison not eat it. - -Here, the second example can be restated in the form of if...then as "If Madison will eat the fruit in question, then it is an apple"; taking this in conjunction with the first example, we find that the third example can be stated as "If the fruit in question is an apple, then Madison will eat it; and if Madison will eat the fruit, then it is an apple". - -[Figure: A is a proper subset of B. A number is in A only if it is in B; a number is in B if it is in A.] - -[Figure: C is a subset but not a proper subset of B. A number is in B if and only if it is in C, and a number is in C if and only if it is in B.] - -Euler diagrams show logical relationships among events, properties, and so forth. "P only if Q", "if P then Q", and "P→Q" all mean that P is a subset, either proper or improper, of Q. "P if Q", "if Q then P", and Q→P all mean that Q is a proper or improper subset of P. "P if and only if Q" and "Q if and only if P" both mean that the sets P and Q are identical to each other. - -Iff is used outside the field of logic as well. 
Wherever logic is applied, especially in mathematical discussions, it has the same meaning as above: it is an abbreviation for if and only if, indicating that one statement is both necessary and sufficient for the other. This is an example of mathematical jargon (although, as noted above, if is more often used than iff in statements of definition). - -The elements of X are all and only the elements of Y means: "For any z in the domain of discourse, z is in X if and only if z is in Y." diff --git a/wiki/wikipedia/2605.txt b/wiki/wikipedia/2605.txt deleted file mode 100644 index dfa238e637ab00bb8b3f53b83cf17487a4156edc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2605.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, Selberg's conjecture, also known as Selberg's eigenvalue conjecture, conjectured by Atle Selberg, states that the eigenvalues of the Laplace operator on Maass wave forms of congruence subgroups are at least 1/4. Selberg showed that the eigenvalues are at least 3/16. - -The generalized Ramanujan conjecture for the general linear group implies Selberg's conjecture. More precisely, Selberg's conjecture is essentially the generalized Ramanujan conjecture for the group $\operatorname{GL}_2$ over the rationals at the infinite place, and says that the component at infinity of the corresponding representation is a principal series representation of $\operatorname{GL}_2(\mathbb{R})$ (rather than a complementary series representation). The generalized Ramanujan conjecture in turn follows from the Langlands functoriality conjecture, and this has led to some progress on Selberg's conjecture. diff --git a/wiki/wikipedia/2606.txt b/wiki/wikipedia/2606.txt deleted file mode 100644 index 74073b7d3bafefeb1fea11be16ff8f80741a7ea1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2606.txt +++ /dev/null @@ -1,31 +0,0 @@ -yEd is a general-purpose diagramming program with a multi-document interface. - -It is a cross-platform application written in Java that runs on Windows, Linux, Mac OS, and other platforms that support the Java Virtual Machine. - -It is released under a proprietary software license that allows using a single copy gratis. - -An online version of yEd, yEd Live, also exists, and there is a Confluence version of yEd, Graphity for Confluence. - -yEd can be used to draw many different types of diagrams, including flowcharts, network diagrams, UML diagrams, BPMN diagrams, mind maps, organization charts, and entity-relationship diagrams. yEd also allows the use of custom vector and raster graphics as diagram elements. - -yEd loads and saves diagrams from/to GraphML, an XML-based format. It can also print very large diagrams that span multiple pages. - -yEd can automatically arrange diagram elements using a variety of graph layout algorithms, including force-based layout, hierarchical layout (for flowcharts), orthogonal layout (for UML class diagrams), and tree layout (for organization charts). - -yEd can import data in various formats to generate diagrams out of it. - -Import formats include the Microsoft Excel .xls format for spreadsheet data, the Gedcom format for genealogical data, and also arbitrary XML-based file formats, which are transformed by means of XSLT stylesheets. - -Predefined XSLT stylesheets provided by yEd can process the Apache Ant build script format used to define dependency information in software build processes and the OWL file format for the description of ontologies. - -Other XML-based data is processed in a generic way. 
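Since GraphML is plain XML, the structural information such tools read and write can be inspected with any XML tooling. The following is a minimal sketch of my own (the tiny embedded document is invented example data, and tool-specific styling extensions are ignored):

```python
import xml.etree.ElementTree as ET

# a tiny hand-written GraphML document (invented example data)
GRAPHML = """<graphml xmlns="http://graphml.graphdrawing.org/xmlns">
  <graph id="G" edgedefault="undirected">
    <node id="n0"/><node id="n1"/><node id="n2"/>
    <edge source="n0" target="n1"/><edge source="n1" target="n2"/>
  </graph>
</graphml>"""

NS = {"g": "http://graphml.graphdrawing.org/xmlns"}  # the standard GraphML namespace
root = ET.fromstring(GRAPHML)
nodes = [n.get("id") for n in root.findall(".//g:node", NS)]
edges = [(e.get("source"), e.get("target")) for e in root.findall(".//g:edge", NS)]
print(nodes, edges)  # ['n0', 'n1', 'n2'] [('n0', 'n1'), ('n1', 'n2')]
```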
- -yEd can export diagrams to various raster and vector formats, including GIF, JPEG, PNG, EMF, BMP, PDF, EPS, and SVG. It can also export to the SWF (Shockwave Flash) file format and to HTML image maps. - -The structural information of a diagram can be exported as GML (Graph Modeling Language) and TGF (Trivial Graph Format). - -yEd is a product of yWorks GmbH, a German software company. diff --git a/wiki/wikipedia/2607.txt b/wiki/wikipedia/2607.txt deleted file mode 100644 index ec35dc68d4eec12c576b0703848c25b7f215bfc3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2607.txt +++ /dev/null @@ -1,21 +0,0 @@ -Blue Gene is an IBM project aimed at designing supercomputers that can reach operating speeds in the petaFLOPS (PFLOPS) range, with low power consumption. - -The project created three generations of supercomputers, Blue Gene/L, Blue Gene/P, and Blue Gene/Q. During their deployment, Blue Gene systems often led the TOP500 and Green500 rankings of the most powerful and most power-efficient supercomputers, respectively. Blue Gene systems have also consistently scored top positions in the Graph500 list. The project was awarded the 2009 National Medal of Technology and Innovation. - -As of 2015, IBM seems to have ended the development of the Blue Gene family, though no public announcement has been made. IBM's continuing efforts on the supercomputer scene seem to be concentrated around OpenPower, using accelerators such as FPGAs and GPUs to battle the end of Moore's law. - -In December 1999, IBM announced a US$100 million research initiative for a five-year effort to build a massively parallel computer, to be applied to the study of biomolecular phenomena such as protein folding. The project had two main goals: to advance our understanding of the mechanisms behind protein folding via large-scale simulation, and to explore novel ideas in massively parallel machine architecture and software. Major areas of investigation included: how to use this novel platform to effectively meet its scientific goals, how to make such massively parallel machines more usable, and how to achieve performance targets at a reasonable cost, through novel machine architectures. The initial design for Blue Gene was based on an early version of the Cyclops64 architecture, designed by Monty Denneau. The initial research and development work was pursued at IBM T.J. Watson Research Center and led by William R. Pulleyblank. - -At IBM, Alan Gara started working on an extension of the QCDOC architecture into a more general-purpose supercomputer: the 4D nearest-neighbor interconnection network was replaced by a network supporting routing of messages from any node to any other, and a parallel I/O subsystem was added. DOE started funding the development of this system and it became known as Blue Gene/L (L for Light); development of the original Blue Gene system continued under the name Blue Gene/C (C for Cyclops) and, later, Cyclops64. - -In November 2004 a 16-rack system, with each rack holding 1,024 compute nodes, achieved first place in the TOP500 list, with a Linpack performance of 70.72 TFLOPS. On the later Blue Gene/Q chip, L2 cache misses are handled by two built-in DDR3 memory controllers running at 1.33 GHz. The chip also integrates logic for chip-to-chip communications in a 5D torus configuration, with 2 GB/s chip-to-chip links. The Blue Gene/Q chip is manufactured on IBM's copper SOI process at 45 nm. It delivers a peak performance of 204.8 GFLOPS at 1.6 GHz, drawing about 55 watts. 
The chip measures 19×19 mm (359.5 mm²) and comprises 1.47 billion transistors. The chip is mounted on a compute card along with 16 GB of DDR3 DRAM (i.e., 1 GB for each user processor core). - -A Q32 compute drawer contains 32 compute cards, each water-cooled. - -A "midplane" (crate) contains 16 Q32 compute drawers, for a total of 512 compute nodes electrically interconnected in a 5D torus configuration (4x4x4x4x2). Beyond the midplane level, all connections are optical. Racks have two midplanes, thus 32 compute drawers, for a total of 1,024 compute nodes, 16,384 user cores and 16 TB RAM. - -* A 209 TFLOPS peak (172 TFLOPS LINPACK) Blue Gene/Q system called Lemanicus was installed at the EPFL in March 2013. This system belongs to the Center for Advanced Modeling Science CADMOS, which is a collaboration between the three main research institutions on the shore of Lake Geneva in the French-speaking part of Switzerland: the University of Lausanne, the University of Geneva and EPFL. The system consists of a single rack (1,024 compute nodes) with 2.1 PB of IBM GPFS-GSS storage. - -* A half-rack Blue Gene/Q system, with about 100 TFLOPS (peak), called Cumulus was installed at the A*STAR Computational Resource Centre, Singapore, in early 2011. - -Record-breaking science applications have been run on the BG/Q, the first to cross 10 petaflops of sustained performance. The cosmology simulation framework HACC achieved almost 14 petaflops with a 3.6 trillion particle benchmark run, while the Cardioid code, which models the electrophysiology of the human heart, achieved nearly 12 petaflops with a near real-time simulation, both on Sequoia. A fully compressible flow solver has also achieved 14.4 PFLOP/s (originally 11 PFLOP/s) on Sequoia, 72% of the machine's nominal peak performance. diff --git a/wiki/wikipedia/2608.txt b/wiki/wikipedia/2608.txt deleted file mode 100644 index fd6f6898cf49c81e54150f3886fe0109d054ad70..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2608.txt +++ /dev/null @@ -1,93 +0,0 @@ -The Knaster–Kuratowski–Mazurkiewicz lemma is a basic result in mathematical fixed-point theory published in 1929 by Knaster, Kuratowski and Mazurkiewicz. - -The KKM lemma can be proved from Sperner's lemma and can be used to prove the Brouwer fixed-point theorem. - -Let $\Delta_{n-1}$ be an $(n-1)$-dimensional simplex with n vertices labeled as $1,\ldots,n$. - -A KKM covering is defined as a set $C_1,\ldots,C_n$ of closed sets such that for any $I \subseteq \{1,\ldots,n\}$, the convex hull of the vertices corresponding to $I$ is covered by $\bigcup_{i\in I}C_i$. - -The KKM lemma says that in every KKM covering, the common intersection of all n sets is nonempty, i.e., -$$ -\bigcap_{i=1}^n C_i \neq \emptyset. -$$ - -When $n=3$, the KKM lemma considers the simplex $\Delta_2$, which is a triangle whose vertices can be labeled 1, 2 and 3. We are given three closed sets $C_1,C_2,C_3$ such that: - -* $C_1$ covers vertex 1, $C_2$ covers vertex 2, $C_3$ covers vertex 3. - -* The edge 12 (from vertex 1 to vertex 2) is covered by the sets $C_1$ and $C_2$, the edge 23 is covered by the sets $C_2$ and $C_3$, the edge 31 is covered by the sets $C_3$ and $C_1$. - -* The union of all three sets covers the entire triangle. - -The KKM lemma states that the sets $C_1, C_2, C_3$ have at least one point in common. - -The lemma is illustrated by the picture on the right, in which set #1 is blue, set #2 is red and set #3 is green. The KKM requirements are satisfied, since: - -* Each vertex is covered by a unique color. 
- -* Each edge is covered by the two colors of its two vertices. - -* The triangle is covered by all three colors. - -The KKM lemma states that there is a point covered by all three colors simultaneously; such a point is clearly visible in the picture. - -Note that it is important that all sets are closed, i.e., contain their boundary. If, for example, the red set is not closed, then it is possible that the central point is contained only in the blue and green sets, and then the intersection of all three sets may be empty. - -David Gale proved the following generalization of the KKM lemma. Suppose that, instead of one KKM covering, we have n different KKM coverings: $C^1_1,\ldots,C^1_n,\ldots,C^n_1,\ldots,C^n_n$. Then, there exists a permutation $\pi$ of the coverings with a non-empty intersection, i.e., -$$ -\bigcap_{i=1}^{n}C^{\pi(i)}_i \neq \emptyset -$$. - -The name "rainbow KKM lemma" is inspired by Gale's description of his lemma:
    "A colloquial statement of this result is... if each of three people paint a triangle red, white and blue according to the KKM rules, then there will be a point which is in the red set of one person, the white set of another, the blue of the third". - -The original KKM lemma follows from the rainbow KKM lemma by simply picking n identical coverings. - -A connector of a simplex is a connected set that touches all n faces of the simplex. - -A connector-free covering is a covering $C_1,\ldots,C_n$ in which no $C_i$ contains a connector. - -Any KKM covering is a connector-free covering, since in a KKM covering, no $C_i$ even touches all n faces. However, there are connector-free coverings that are not KKM coverings. An example is illustrated at the right. There, the red set touches all three faces, but it does not contain any connector, since no connected component of it touches all three faces. - -A theorem of Ravindra Bapat, generalizing Sperner's lemma, implies the KKM lemma extends to connector-free coverings (he proved his theorem for $n=3$). - -The connector-free variant also has a permutation variant, so that both these generalizations can be used simultaneously. - -The KKMS theorem is a generalization of the KKM lemma by Lloyd Shapley. It is useful in economics, especially in cooperative game theory. - -While a KKM covering contains n closed sets, a KKMS covering contains $2^n-1$ closed sets - indexed by the nonempty subsets of $[n]$ (equivalently: by nonempty faces of $\Delta_{n-1}$). For any $I \subseteq [n]$, the convex hull of the vertices corresponding to $I$ should be covered by the union of sets corresponding to subsets of $I$ , that is:
    $\operatorname{conv}(\{v_i : i\in I\}) \subseteq \bigcup_{J \subseteq I}C_J$.
    Any KKM covering is a special case of a KKMS covering. In a KKM covering, the n sets corresponding to singletons are nonempty, while the other sets are empty. However, there are many other KKMS coverings. - -In general, it is not true that the common intersection of all $2^n-1$ sets in a KKMS covering is nonempty; this is illustrated by the special case of a KKM covering, in which most sets are empty. - -The KKMS theorem says that, in every KKMS covering, there is a balanced collection $B$ of subsets of $[n]$, such that the intersection of the sets indexed by $B$ is nonempty: -$$ -\bigcap_{J\in B} C_J \neq \emptyset -$$ - -It remains to explain what a "balanced collection" is. A collection $B$ of subsets of $[n]$ is called balanced if there is a weight function on $B$ (assigning a weight $w_J\geq 0$ to every $J\in B$), such that, for each element $i\in [n]$, the sum of the weights of all subsets containing $i$ is exactly 1. For example, suppose $n=3$. Then: - -* The collection {{1}, {2}, {3}} is balanced: choose all weights to be 1. The same is true for any collection in which each element appears exactly once, such as the collection {{1,2},{3}} or the collection { {1,2,3} }. - -* The collection {{1,2}, {2,3}, {3,1}} is balanced: choose all weights to be 1/2. The same is true for any collection in which each element appears exactly twice. - -* The collection {{1,2}, {2,3}} is not balanced, since for any choice of positive weights, the sum for element 2 will be larger than the sum for element 1 or 3, so it is not possible that all sums equal 1. - -* The collection {{1,2}, {2,3}, {1}} is balanced: choose $w_{1,2}=0,w_{2,3}=1,w_{1}=1$. - -In hypergraph terminology, a collection B is balanced with respect to its ground set V iff the hypergraph with vertex-set V and edge-set B admits a perfect fractional matching. - -The KKMS theorem implies the KKM lemma. - -Reny and Wooders proved that the balanced collection can also be chosen to be partnered. - -Zhou proved a variant of the KKMS theorem where the covering consists of open sets rather than closed sets. - -Hidetoshi Komiya generalized the KKMS theorem from simplices to polytopes. Let P be any compact convex polytope. Let $\textrm{Faces}(P)$ be the set of nonempty faces of P. A Komiya covering of P is a family of closed sets $\{C_F: F\in \textrm{Faces}(P)\}$ such that for every face $F\in \textrm{Faces}(P)$:
    $F\subseteq \bigcup_{G\subseteq F, ~ G \in \textrm{Faces}(P)} C_G$.
    Komiya's theorem says that for every Komiya covering of P, there is a balanced collection $B\subseteq \textrm{Faces}(P)$, such that the intersection of sets indexed by $B$ is nonempty: -$$ -\bigcap_{F\in B} C_F \neq \emptyset -$$ - -Komiya's theorem also generalizes the definition of a balanced collection: instead of requiring that there is a weight function on $B$ such that the sum of weights near each vertex of P is 1, we start by choosing any set of points $\textbf{b} = \{b^F: F\in \textrm{Faces}(P), b^F\in F \}$. A collection $B\subseteq \textrm{Faces}(P)$ is called balanced with respect to $\textbf{b}$ iff $b^P \in \operatorname{conv} \{ b^F: F\in B \}$, that is, the point assigned to the entire polytope P is a convex combination of the points assigned to the faces in the collection B. - -The KKMS theorem is a special case of Komiya's theorem in which the polytope $P = \Delta_{n-1}$ and $b^F$ is the barycenter of the face F (in particular, $b^P$ is the barycenter of $\Delta_{n-1}$, which is the point $(1/n, \ldots, 1/n)$). - -Oleg R. Musin proved several generalizations of the KKM lemma and KKMS theorem, with boundary conditions on the coverings. The boundary conditions are related to homotopy. diff --git a/wiki/wikipedia/2609.txt b/wiki/wikipedia/2609.txt deleted file mode 100644 index 16fcc193b02fca05b12ef603f006e0b1b0945f41..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2609.txt +++ /dev/null @@ -1,12 +0,0 @@ -In mathematics, the Schwarz reflection principle is a way to extend the domain of definition of a complex analytic function, i.e., it is a form of analytic continuation. It states that if an analytic function is defined on the upper half-plane, and has well-defined (non-singular) real values on the real axis, then it can be extended to the conjugate function on the lower half-plane. In notation, if $F(z)$ is a function that satisfies the above requirements, then its extension to the rest of the complex plane is given by the formula -$$ -F(\bar{z})=\overline{F(z)}. -$$ - -That is, we make the definition that agrees along the real axis. - -The result proved by Hermann Schwarz is as follows. Suppose that F is a continuous function on the closed upper half plane $\left\{ z \in \mathbb{C}\ |\ \mathrm{Im}(z) \geq 0 \right\}$, holomorphic on the open upper half plane $\left\{ z \in \mathbb{C}\ |\ \mathrm{Im}(z) > 0 \right\}$, which takes real values on the real axis. Then the extension formula given above is an analytic continuation to the whole complex plane. - -In practice it would be better to have a theorem that allows F certain singularities, for example F a meromorphic function. To understand such extensions, one needs a proof method that can be weakened. In fact Morera's theorem is well adapted to proving such statements. Contour integrals involving the extension of F clearly split into two, using part of the real axis. So, given that the principle is rather easy to prove in the special case from Morera's theorem, understanding the proof is enough to generate other results. - -The principle also adapts to apply to harmonic functions. 
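For a function with real Taylor coefficients (hence real on the real axis) the reflection identity can be checked numerically. A small sketch of my own; the sample function and the test points are arbitrary choices:

```python
import cmath

def F(z):
    # entire function with real Taylor coefficients, hence real on the real axis
    return z * cmath.exp(z) + z**3 - 2 * z

for z in [1 + 2j, -0.5 + 0.1j, 3 - 4j]:
    # Schwarz reflection: F(conj(z)) and conj(F(z)) agree
    print(abs(F(z.conjugate()) - F(z).conjugate()))  # ~0 for each sample point
```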
diff --git a/wiki/wikipedia/261.txt b/wiki/wikipedia/261.txt deleted file mode 100644 index 8fd51ad281b54a555517acbdf401204ee3169771..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/261.txt +++ /dev/null @@ -1,61 +0,0 @@ -In mathematics, the term modulo ("with respect to a modulus of", the Latin ablative of modulus, which itself means "a small measure") is often used to assert that two distinct mathematical objects can be regarded as equivalent—if their difference is accounted for by an additional factor. It was initially introduced into mathematics in the context of modular arithmetic by Carl Friedrich Gauss in 1801. Since then, the term has gained many meanings—some exact and some imprecise (such as equating "modulo" with "except for"). For the most part, the term often occurs in statements of the form: - -A is the same as B modulo C - -which means - -A and B are the same—except for differences accounted for or explained by C. - -Modulo is mathematical jargon that was introduced into mathematics in the book Disquisitiones Arithmeticae by Carl Friedrich Gauss in 1801. Given the integers a, b and n, the expression "a ≡ b (mod n)", pronounced "a is congruent to b modulo n", means that a − b is an integer multiple of n, or equivalently, that a and b both share the same remainder when divided by n. It is the Latin ablative of modulus, which itself means "a small measure." - -The term has gained many meanings over the years—some exact and some imprecise. The most general precise definition is simply in terms of an equivalence relation R, where a is equivalent (or congruent) to b modulo R if aRb. More informally, the term is found in statements of the form: - -A is the same as B modulo C - -which means - -A and B are the same—except for differences accounted for or explained by C. - -Gauss originally intended to use "modulo" as follows: given the integers a, b and n, the expression a ≡ b (mod n) (pronounced "a is congruent to b modulo n") means that a − b is an integer multiple of n, or equivalently, a and b both leave the same remainder when divided by n. For example: - -13 is congruent to 63 modulo 10 - -means that - -13 − 63 is a multiple of 10 (equivalently, 13 and 63 differ by a multiple of 10). - -In computing and computer science, the term can be used in several ways: - -* In computing, it is typically the modulo operation: given two numbers (either integer or real), a and n, a modulo n is the remainder of the numerical division of a by n, under certain constraints. - -* In category theory as applied to functional programming, "operating modulo" is special jargon which refers to mapping a functor to a category by highlighting or defining remainders. - -The term "modulo" can be used differently—when referring to different mathematical structures. For example: - -* Two members a and b of a group are congruent modulo a normal subgroup, if and only if $ab^{-1}$ is a member of the normal subgroup (see quotient group and isomorphism theorem for more). - -* Two members of a ring or an algebra are congruent modulo an ideal, if the difference between them is in the ideal. - -** Used as a verb, the act of factoring out a normal subgroup (or an ideal) from a group (or ring) is often called "modding out the..." or "we now mod out the...". - -* Two subsets of an infinite set are equal modulo finite sets precisely if their symmetric difference is finite, that is, you can remove a finite piece from the first subset, then add a finite piece to it, and get the second subset as a result. 
- -* A short exact sequence of maps leads to the definition of a quotient space as being one space modulo another; thus, for example, a cohomology group is the space of closed forms modulo exact forms. - -In general, modding out is a somewhat informal term that means declaring things equivalent that otherwise would be considered distinct. For example, suppose the sequence 1 4 2 8 5 7 is to be regarded as the same as the sequence 7 1 4 2 8 5, because each is a cyclically shifted version of the other: - - - -\begin{array}{ccccccccccccc} - -& 1 & & 4 & & 2 & & 8 & & 5 & & 7 \\ - -\searrow & & \searrow & & \searrow & & \searrow & & \searrow & & \searrow & & \searrow \\ - -& 7 & & 1 & & 4 & & 2 & & 8 & & 5 - -\end{array} - - - -In that case, the phrase "modding out by cyclic shifts" can also be used. diff --git a/wiki/wikipedia/2610.txt b/wiki/wikipedia/2610.txt deleted file mode 100644 index 2b06f809955bcef0cc14cc70024bda7c307a97b8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2610.txt +++ /dev/null @@ -1,32 +0,0 @@ -In algebraic geometry, a closed immersion $i: X \hookrightarrow Y$ of schemes is a regular embedding of codimension r if each point x in X has an open affine neighborhood U in Y such that the ideal of $X \cap U$ is generated by a regular sequence of length r. A regular embedding of codimension one is precisely an effective Cartier divisor. - -For example, if X and Y are smooth over a scheme S and if i is an S-morphism, then i is a regular embedding. In particular, every section of a smooth morphism is a regular embedding. If $\operatorname{Spec}B$ is regularly embedded into a regular scheme, then B is a complete intersection ring. - -The notion is used, for instance, in an essential way in Fulton's approach to intersection theory. The important fact is that when i is a regular embedding, if I is the ideal sheaf of X in Y, then the normal sheaf, the dual of $I/I^2$, is locally free (thus a vector bundle) and the natural map $\operatorname{Sym}(I/I^2) \to \oplus_0^\infty I^n/I^{n+1}$ is an isomorphism: the normal cone $\operatorname{Spec}(\oplus_0^\infty I^n/I^{n+1})$ coincides with the normal bundle. - -A morphism of finite type $f:X \to Y$ is called a (local) complete intersection morphism if each point x in X has an open affine neighborhood U so that f|U factors as $U \overset{j}\to V \overset{g}\to Y$ where j is a regular embedding and g is smooth. For example, if f is a morphism between smooth varieties, then f factors as $X \to X \times Y \to Y$ where the first map is the graph morphism and so is a complete intersection morphism. - -One non-example arises from a scheme that is not equidimensional. For example, the scheme - - - -X = \text{Spec}\left( \frac{\mathbb{C}[x,y,z]}{(xz,yz)}\right) - - - -is the union of $\mathbb{A}^2$ and $\mathbb{A}^1$. The embedding $X \hookrightarrow \mathbb{A}^3$ is then not regular, since any non-origin point on the $z$-axis has local dimension $1$ while any non-origin point on the $xy$-plane has local dimension $2$. - -Let $f: X \to Y$ be a local-complete-intersection morphism that admits a global factorization: it is a composition $X \overset{i}\hookrightarrow P \overset{p}\to Y$ where $i$ is a regular embedding and $p$ a smooth morphism. Then the virtual tangent bundle is an element of the Grothendieck group of vector bundles on X given as: -$$ -T_f = [i^* T_{P/Y}] - [N_{X/P}] -$$. - -The notion is used for instance in the Riemann–Roch-type theorem. 
- -SGA 6, Exposé VII, uses the following weakened form of the notion of a regular embedding, which agrees with the usual one for Noetherian schemes. - -First, given a projective module E over a commutative ring A, an A-linear map $u: E \to A$ is called Koszul-regular if the Koszul complex determined by it is acyclic in dimension > 0 (consequently, it is a resolution of the cokernel of u). - -Then a closed immersion $X \hookrightarrow Y$ is called Koszul-regular if the ideal sheaf determined by it is such that, locally, there are a finite free A-module E and a Koszul-regular surjection from E to the ideal sheaf. - -(This complication arises because the discussion of zero-divisors is tricky for non-Noetherian rings, in that one cannot use the theory of associated primes.) diff --git a/wiki/wikipedia/2611.txt b/wiki/wikipedia/2611.txt deleted file mode 100644 index 4b4839c2396f66e02f3a7ec32952c94efb4484a9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2611.txt +++ /dev/null @@ -1,73 +0,0 @@ -In computer science, Tarjan's off-line lowest common ancestors algorithm is an algorithm for computing lowest common ancestors for pairs of nodes in a tree, based on the union-find data structure. The lowest common ancestor of two nodes d and e in a rooted tree T is the node g that is an ancestor of both d and e and that has the greatest depth in T. It is named after Robert Tarjan, who discovered the technique in 1979. Tarjan's algorithm is an offline algorithm; that is, unlike other lowest common ancestor algorithms, it requires that all pairs of nodes for which the lowest common ancestor is desired must be specified in advance. The simplest version of the algorithm uses the union-find data structure, which unlike other lowest common ancestor data structures can take more than constant time per operation when the number of pairs of nodes is similar in magnitude to the number of nodes. A later refinement by Gabow speeds the algorithm up to linear time. - -The pseudocode below determines the lowest common ancestor of each pair in P, given the root r of a tree in which the children of node n are in the set n.children. For this offline algorithm, the set P must be specified in advance. It uses the MakeSet, Find, and Union functions of a disjoint-set forest. MakeSet(u) places u in a fresh singleton set, Find(u) returns the standard representative of the set containing u, and Union(u,v) merges the set containing u with the set containing v. - -TarjanOLCA(r) is first called on the root r. - -function TarjanOLCA(u) is - -MakeSet(u) - -u.ancestor := u - -for each v in u.children do - -TarjanOLCA(v) - -Union(u, v) - -Find(u).ancestor := u - -u.color := black - -for each v such that {u, v} in P do - -if v.color == black then - -print "Tarjan's Lowest Common Ancestor of " + u + - -" and " + v + " is " + Find(v).ancestor + "." - -Each node is initially white, and is colored black after it and all its children have been visited. - -For each node pair {u,v} to be investigated: - -* When v is already black (viz. when v comes before u in a post-order traversal of the tree): After u is colored black, the lowest common ancestor of this pair is available as Find(v).ancestor, but only while the LCA of u and v is not colored black. - -* Otherwise: Once v is colored black, the LCA will be available as Find(u).ancestor, as long as the LCA is not yet colored black. 
- -For reference, here are optimized versions of MakeSet, Find, and Union for a disjoint-set forest: - -function MakeSet(x) is - -x.parent := x - -x.rank := 1 - -function Union(x, y) is - -xRoot := Find(x) - -yRoot := Find(y) - -if xRoot.rank > yRoot.rank then - -yRoot.parent := xRoot - -else if xRoot.rank < yRoot.rank then - -xRoot.parent := yRoot - -else if xRoot.rank == yRoot.rank then - -yRoot.parent := xRoot - -xRoot.rank := xRoot.rank + 1 - -function Find(x) is - -if x.parent != x then - -x.parent := Find(x.parent) - -return x.parent diff --git a/wiki/wikipedia/2612.txt b/wiki/wikipedia/2612.txt deleted file mode 100644 index 0867e5267260c42f85ead3c21b43127465060b2e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2612.txt +++ /dev/null @@ -1,207 +0,0 @@ -In mathematics, a Fermat number, named after Pierre de Fermat, who first studied them, is a positive integer of the form -$$ -F_{n} = 2^{2^n} + 1, -$$ - -where n is a non-negative integer. The first few Fermat numbers are: - -3, 5, 17, 257, 65537, 4294967297, 18446744073709551617, ... . - -If $2^k + 1$ is prime and k > 0, then k must be a power of 2, so $2^k + 1$ is a Fermat number; such primes are called Fermat primes. As of 2021, the only known Fermat primes are $F_0 = 3$, $F_1 = 5$, $F_2 = 17$, $F_3 = 257$, and $F_4 = 65537$; heuristics suggest that there are no more. - -The Fermat numbers satisfy the following recurrence relations: -$$ -F_{n} = (F_{n-1}-1)^{2}+1 -$$ -$$ -F_{n} = F_{0} \cdots F_{n-1} + 2 -$$ - -for n ≥ 1, -$$ -F_{n} = F_{n-1} + 2^{2^{n-1}}F_{0} \cdots F_{n-2} -$$ -$$ -F_{n} = F_{n-1}^2 - 2(F_{n-2}-1)^2 -$$ - -for n ≥ 2. Each of these relations can be proved by mathematical induction. From the second equation, we can deduce Goldbach's theorem (named after Christian Goldbach): no two Fermat numbers share a common integer factor greater than 1. To see this, suppose that 0 ≤ i < j and $F_i$ and $F_j$ have a common factor a > 1. Then a divides both -$$ -F_{0} \cdots F_{j-1} -$$ - -and $F_j$; hence a divides their difference, 2. Since a > 1, this forces a = 2. This is a contradiction, because each Fermat number is clearly odd. As a corollary, we obtain another proof of the infinitude of the prime numbers: for each $F_n$, choose a prime factor $p_n$; then the sequence $\{p_n\}$ is an infinite sequence of distinct primes. - -* No Fermat prime can be expressed as the difference of two $p$th powers, where p is an odd prime. - -* With the exception of $F_0$ and $F_1$, the last digit of a Fermat number is 7. - -* The sum of the reciprocals of all the Fermat numbers is irrational. (Solomon W. Golomb, 1963) - -Fermat numbers and Fermat primes were first studied by Pierre de Fermat, who conjectured that all Fermat numbers are prime. Indeed, the first five Fermat numbers $F_0, \ldots, F_4$ are easily shown to be prime. Fermat's conjecture was refuted by Leonhard Euler in 1732 when he showed that -$$ - F_{5} = 2^{2^5} + 1 = 2^{32} + 1 = 4294967297 = 641 \times 6700417. -$$ - -Euler proved that every factor of $F_n$ must have the form $k2^{n+1} + 1$ (later improved to $k2^{n+2} + 1$ by Lucas). - -That 641 is a factor of $F_5$ can be deduced from the equalities $641 = 2^7 \times 5 + 1$ and $641 = 2^4 + 5^4$. It follows from the first equality that $2^7 \times 5 \equiv -1 \pmod{641}$ and therefore (raising to the fourth power) that $2^{28} \times 5^4 \equiv 1 \pmod{641}$. On the other hand, the second equality implies that $5^4 \equiv -2^4 \pmod{641}$. These congruences imply that $2^{32} \equiv -1 \pmod{641}$. 
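Euler's factorization is easy to confirm with modular exponentiation. A one-line check of my own (not from the article):

```python
# 2^32 ≡ -1 (mod 641), so 641 divides F_5 = 2^32 + 1
assert pow(2, 32, 641) == 641 - 1
assert 2**32 + 1 == 641 * 6700417  # Euler's full factorization of F_5
print("641 divides F_5:", (2**32 + 1) % 641 == 0)
```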
- -Fermat was probably aware of the form of the factors later proved by Euler, so it seems curious that he failed to follow through on the straightforward calculation to find the factor. One common explanation is that Fermat made a computational mistake. - -There are no other known Fermat primes $F_n$ with n > 4, but little is known about Fermat numbers for large n. In fact, each of the following is an open problem: - -* Is $F_n$ composite for all n > 4? - -* Are there infinitely many Fermat primes? (Eisenstein 1844) - -* Are there infinitely many composite Fermat numbers? - -* Does a Fermat number exist that is not square-free? - -It is known that $F_n$ is composite for 5 ≤ n ≤ 32, although of these, complete factorizations of $F_n$ are known only for 0 ≤ n ≤ 11, and there are no known prime factors for n = 20 and n = 24. Several new Fermat factors are found each year. - -Like composite numbers of the form $2^p - 1$, every composite Fermat number is a strong pseudoprime to base 2. This is because all strong pseudoprimes to base 2 are also Fermat pseudoprimes, i.e., -$$ -2^{F_n-1} \equiv 1 \pmod{F_n} -$$ - -for all Fermat numbers. - -In 1904, Cipolla showed that the product of at least two distinct prime or composite Fermat numbers $F_{a} F_{b} \dots F_{s},$ $a > b > \dots > s > 1$ will be a Fermat pseudoprime to base 2 if and only if $2^s > a$. - -Lemma: If n is a positive integer, -$$ -a^n-b^n=(a-b)\sum_{k=0}^{n-1} a^kb^{n-1-k}. -$$ - -Proof: -$$ -\begin{align} -(a-b)\sum_{k=0}^{n-1}a^kb^{n-1-k} &=\sum_{k=0}^{n-1}a^{k+1}b^{n-1-k}-\sum_{k=0}^{n-1}a^kb^{n-k}\\ -&=a^n+\sum_{k=1}^{n-1}a^kb^{n-k}-\sum_{k=1}^{n-1}a^kb^{n-k}-b^n\\ -&=a^n-b^n -\end{align} -$$ - -Theorem: If $2^k+1$ is an odd prime, then $k$ is a power of 2. - -Proof: If $k$ is a positive integer but not a power of 2, it must have an odd prime factor $s > 2$, and we may write $k = rs$ where $1 \le r < k$. - -By the preceding lemma, for positive integer $m$, -$$ -(a-b) \mid (a^m-b^m) -$$ - -where $\mid$ means "evenly divides". Substituting $a = 2^r, b = -1$, and $m = s$ and using that $s$ is odd, -$$ -(2^r+1) \mid (2^{rs}+1), -$$ - -and thus -$$ -(2^r+1) \mid (2^k+1). -$$ - -Because $1 < 2^r+1 < 2^k+1$, it follows that $2^k+1$ is not prime. Therefore, by contraposition $k$ must be a power of 2. - -Theorem: A Fermat prime cannot be a Wieferich prime. - -Proof: We show that if $p=2^m+1$ is a Fermat prime (and hence by the above, m is a power of 2), then the congruence $2^{p-1} \equiv 1 \bmod {p^2}$ does not hold. - -Since $2m \mid p-1$ we may write $p-1=2m\lambda$. If the given congruence holds, then $p^2 \mid 2^{2m\lambda}-1$, and therefore -$$ -0 \equiv \frac{2^{2m\lambda}-1}{2^m+1}=(2^m-1)\left(1+2^{2m}+2^{4m}+\cdots +2^{2(\lambda-1)m} \right) \equiv -2\lambda \pmod {2^m+1}. -$$ - -Hence $2^m+1 \mid 2\lambda$, and therefore $2\lambda \geq 2^m+1$. This leads to $p-1 \geq m(2^m+1)$, which is impossible since $m \geq 2$. - -Theorem: Any prime divisor p of $F_n = 2^{2^n}+1$ is of the form $k2^{n+2}+1$ whenever n > 1. - -Sketch of proof: Let $G_p$ denote the group of non-zero integers modulo p under multiplication, which has order p−1. Notice that 2 (strictly speaking, its image modulo p) has multiplicative order equal to $2^{n+1}$ in $G_p$ (since $2^{2^{n+1}}$ is the square of $2^{2^n}$, which is −1 modulo $F_n$), so that, by Lagrange's theorem, p − 1 is divisible by $2^{n+1}$ and p has the form $k2^{n+1} + 1$ for some integer k, as Euler knew. Édouard Lucas went further. 
Since n > 1, the prime p above is congruent to 1 modulo 8. Hence (as was known to Carl Friedrich Gauss), 2 is a quadratic residue modulo p, that is, there is an integer a such that $p|a^2-2.$ Then the image of a has order $2^{n+2}$ in the group $G_p$ and (using Lagrange's theorem again), p − 1 is divisible by $2^{n+2}$ and p has the form $s2^{n+2} + 1$ for some integer s. - -In fact, it can be seen directly that 2 is a quadratic residue modulo p, since -$$ -\left(1 +2^{2^{n-1}} \right)^{2} \equiv 2^{1+2^{n-1}} \pmod p. -$$ - -Since an odd power of 2 is a quadratic residue modulo p, so is 2 itself. - -A Fermat number cannot be a perfect number or part of a pair of amicable numbers. - -The series of reciprocals of all prime divisors of Fermat numbers is convergent. - -If $n^n + 1$ is prime, there exists an integer m such that $n = 2^{2^m}$. The equation -$$ -n^n + 1 = F_{2^m+m} -$$ - -holds in that case. - -Let the largest prime factor of the Fermat number $F_n$ be $P(F_n)$. Then, -$$ -P(F_n) \ge 2^{n+2}(4n+9) + 1. -$$ - -Carl Friedrich Gauss developed the theory of Gaussian periods in his Disquisitiones Arithmeticae and formulated a sufficient condition for the constructibility of regular polygons. Gauss stated that this condition was also necessary, but never published a proof. Pierre Wantzel gave a full proof of necessity in 1837. The result is known as the Gauss–Wantzel theorem: - -An n-sided regular polygon can be constructed with compass and straightedge if and only if n is the product of a power of 2 and distinct Fermat primes: in other words, if and only if n is of the form $n = 2^k p_1 p_2 \cdots p_s$, where k, s are nonnegative integers and the $p_i$ are distinct Fermat primes. - -A positive integer n is of the above form if and only if its totient φ(n) is a power of 2. - -Fermat primes are particularly useful in generating pseudo-random sequences of numbers in the range 1 ... N, where N is a power of 2. The most common method used is to take any seed value between 1 and P − 1, where P is a Fermat prime. Now multiply this by a number A, which is greater than the square root of P and is a primitive root modulo P (i.e., it is not a quadratic residue). Then take the result modulo P. The result is the new value for the RNG. -$$ -V_{j+1} = \left( A \times V_j \right) \bmod P -$$ (see linear congruential generator, RANDU) - -This is useful in computer science since most data structures have members with $2^X$ possible values. For example, a byte has 256 ($2^8$) possible values (0–255). Therefore, to fill a byte or bytes with random values, a random number generator which produces values 1–256 can be used, the byte taking the output value minus 1. Very large Fermat primes are of particular interest in data encryption for this reason. This method produces only pseudorandom values as, after P − 1 repetitions, the sequence repeats. A poorly chosen multiplier can result in the sequence repeating sooner than P − 1. - -Numbers of the form $a^{2^n} + b^{2^n}$ with a, b any coprime integers, a > b > 0, are called generalized Fermat numbers. An odd prime p is a generalized Fermat number if and only if p is congruent to 1 (mod 4). (Here we consider only the case n > 0, so 3 = $2^{2^{0}} \!+ 1$ is not a counterexample.) - -An example of a probable prime of this form is $1246^{65536} + 576^{65536}$ (found by Valeryi Kuryshev). - -By analogy with the ordinary Fermat numbers, it is common to write generalized Fermat numbers of the form $a^{2^n} + 1$ as $F_n(a)$. 
- -Numbers of the form $a^{2^n} + b^{2^n}$ with a, b any coprime integers, a > b > 0, are called generalized Fermat numbers. An odd prime p is a generalized Fermat number if and only if p is congruent to 1 (mod 4). (Here we consider only the case n > 0, so $3 = 2^{2^0} + 1$ is not a counterexample.) - -An example of a probable prime of this form is $124^{65536} + 57^{65536}$ (found by Valeryi Kuryshev). - -By analogy with the ordinary Fermat numbers, it is common to write generalized Fermat numbers of the form $a^{2^n} + 1$ as Fn(a). In this notation, for instance, the number 100,000,001 would be written as F3(10). In the following we shall restrict ourselves to primes of this form, $a^{2^n} + 1$; such primes are called "Fermat primes base a". Of course, these primes exist only if a is even. - -If we require n > 0, then Landau's fourth problem asks if there are infinitely many generalized Fermat primes Fn(a). - -Because of the ease of proving their primality, generalized Fermat primes have become in recent years a topic for research within the field of number theory. Many of the largest known primes today are generalized Fermat primes. - -Generalized Fermat numbers can be prime only for even a, because if a is odd then every generalized Fermat number will be divisible by 2. The smallest prime number $F_n(a)$ with $n>4$ is $F_5(30)$, or $30^{32}+1$. Besides, we can define "half generalized Fermat numbers" for an odd base: a half generalized Fermat number to base a (for odd a) is $\frac{a^{2^n} \!+ 1}{2}$, and it is also to be expected that there will be only finitely many half generalized Fermat primes for each odd base. - -(Here, for even a the generalized Fermat numbers $F_n(a)$ are taken to be $a^{2^n} \!+ 1$, while for odd a they are $\frac{a^{2^n} \!+ 1}{2}$. If a is a perfect power with an odd exponent, then every generalized Fermat number can be algebraically factored, so it cannot be prime.) - -The smallest base b such that $b^{2^n} + 1$ is prime are - -2, 2, 2, 2, 2, 30, 102, 120, 278, 46, 824, 150, 1534, 30406, 67234, 70906, 48594, 62722, 24518, 75898, 919444, ... - -The smallest k such that $(2^n)^k + 1$ is prime are - -1, 1, 1, 0, 1, 1, 2, 1, 1, 2, 1, 2, 2, 1, 1, 0, 4, 1, ... (The next term is unknown.) - -A more elaborate theory can be used to predict the number of bases for which $F_n(a)$ will be prime for fixed $n$. The number of generalized Fermat primes can be roughly expected to halve as $n$ is increased by 1. - -The five largest known generalized Fermat primes are all megaprimes, and all five were discovered by participants in the PrimeGrid project. diff --git a/wiki/wikipedia/2613.txt b/wiki/wikipedia/2613.txt deleted file mode 100644 index 182eb9e0a6d4aa1420ba14c195de6ae817089df8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2613.txt +++ /dev/null @@ -1,3 +0,0 @@ -In probability theory, the Bapat–Beg theorem gives the joint probability distribution of order statistics of independent but not necessarily identically distributed random variables in terms of the cumulative distribution functions of the random variables. Ravindra Bapat and Beg published the theorem in 1989, though they did not offer a proof. A simple proof was offered by Hande in 1994. - -Often, all elements of the sample are obtained from the same population and thus have the same probability distribution. The Bapat–Beg theorem describes the order statistics when each element of the sample is obtained from a different statistical population and therefore has its own probability distribution. In general, evaluating the Bapat–Beg formula is computationally expensive; however, when the random variables have only two possible distributions, the complexity can be reduced to $O(m^{2k})$.
Thus, in the case of two populations, the complexity is polynomial in $m$ for any fixed number of statistics $k$. diff --git a/wiki/wikipedia/2614.txt b/wiki/wikipedia/2614.txt deleted file mode 100644 index d7d0dfcaa7930d1e8430f48be1230f9e12216318..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2614.txt +++ /dev/null @@ -1,37 +0,0 @@ -In mathematics, the Rellich–Kondrachov theorem is a compact embedding theorem concerning Sobolev spaces. It is named after the Austrian-German mathematician Franz Rellich and the Russian mathematician Vladimir Iosifovich Kondrashov. Rellich proved the L2 theorem and Kondrashov the Lp theorem. - -Let Ω ⊆ Rn be an open, bounded Lipschitz domain, and let 1 ≤ p < n. Set -$$ -p^{*} := \frac{n p}{n - p}. -$$ - -Then the Sobolev space W1,p(Ω; R) is continuously embedded in the Lebesgue space Lp*(Ω; R) and is compactly embedded in Lq(Ω; R) for every 1 ≤ q < p*. In symbols, -$$ -W^{1, p} (\Omega) \hookrightarrow L^{p^{*}} (\Omega) -$$ - -and -$$ -W^{1, p} (\Omega) \subset \subset L^{q} (\Omega) \text{ for } 1 \leq q < p^{*}. -$$ - -On a compact manifold with C1 boundary, the Kondrachov embedding theorem states that if k > ℓ and k − n/p > ℓ − n/q then the Sobolev embedding -$$ -W^{k,p}(M)\subset W^{\ell,q}(M) -$$ - -is completely continuous (compact). - -Since an embedding is compact if and only if the inclusion (identity) operator is a compact operator, the Rellich–Kondrachov theorem implies that any uniformly bounded sequence in W1,p(Ω; R) has a subsequence that converges in Lq(Ω; R). Stated in this form, in the past the result was sometimes referred to as the Rellich–Kondrachov selection theorem, since one "selects" a convergent subsequence. (However, today the customary name is "compactness theorem", whereas "selection theorem" has a precise and quite different meaning, referring to multifunctions.) - -The Rellich–Kondrachov theorem may be used to prove the Poincaré inequality, which states that for u ∈ W1,p(Ω; R) (where Ω satisfies the same hypotheses as above), -$$ -\| u - u_\Omega \|_{L^p (\Omega)} \leq C \| \nabla u \|_{L^p (\Omega)} -$$ - -for some constant C depending only on p and the geometry of the domain Ω, where -$$ -u_\Omega := \frac{1}{\operatorname{meas} (\Omega)} \int_\Omega u(x) \mathrm{d} x -$$ - -denotes the mean value of u over Ω. diff --git a/wiki/wikipedia/2615.txt b/wiki/wikipedia/2615.txt deleted file mode 100644 index a66f438f26d369065e3f4254e4d3bea8f0d4e113..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2615.txt +++ /dev/null @@ -1,25 +0,0 @@ -In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation is a book on the travelling salesman problem, by William J. Cook, published in 2012 by the Princeton University Press, with a paperback reprint in 2014. The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries. - -The travelling salesman problem asks to find the shortest cyclic tour of a collection of points, in the plane or in more abstract mathematical spaces. - -Because the problem is NP-hard, algorithms that take polynomial time are unlikely to be guaranteed to find its optimal solution; on the other hand, a brute-force search of all permutations would always solve the problem exactly but would take far too long to be usable for all but the smallest problems.
Threading a middle ground between these too-fast and too-slow running times, and developing a practical system that can find the exact solution of larger instances, raises difficult questions of algorithm engineering, which have sparked the development of "many of the concepts and techniques of combinatorial optimization". - -The introductory chapter of the book explores the limits of calculation on the problem, from 49-point problems solved by hand in the mid-1950s by George Dantzig, D. R. Fulkerson, and Selmer M. Johnson to a problem with 85,900 points solved optimally in 2006 by the Concorde TSP Solver, which Cook helped develop. The next chapters cover the early history of the problem and of related problems, including Leonhard Euler's work on the Seven Bridges of Königsberg, William Rowan Hamilton's Icosian game, and Julia Robinson first naming the problem in 1949. Another chapter describes real-world applications of the problem, ranging "from genome sequencing and designing computer processors to arranging music and hunting for planets". Reviewer Brian Hayes cites "the most charming revelation" of the book as being the fact that one of those real-world applications has been route planning for actual traveling salesmen in the early 20th century. - -Chapters four through seven, the "core of the book", discuss methods for solving the problem, leading from heuristics and metaheuristics, linear programming relaxation, and cutting-plane methods, up to the branch and bound method that combines these techniques and is used by Concorde. The next two chapters also cover technical material, on the performance of computer implementations and on the computational complexity of the problem. - -The remaining chapters are more human-centered, covering human and animal problem-solving strategies, and the incorporation of TSP solutions into the artworks of Julian Lethbridge, Robert Bosch, and others. A short final summary chapter suggests possible future directions, including the possibility of progress on the P versus NP problem. - -The book is intended for a non-specialist audience, avoids technical detail, and is written "in an easy to understand style". It includes many historical asides, examples, applications, and biographical information and photographs of key players in the story, making it accessible to readers without a mathematical background. - -Although In Pursuit of the Traveling Salesman is not a textbook, reviewer Christopher Thompson suggests that some of its material on the use of linear programming and on applications of the problem "would be well-suited for classroom use", citing in particular the way it links multiple fields including numerical analysis, graph theory, algorithm design, logic, and statistics. - -Reviewer Stan Wagon writes that "any reader with an interest in combinatorial algorithms will find much of value in this book". Jan Karel Lenstra and David Shmoys write that "The writing is relaxed and entertaining; the presentation is excellent. We greatly enjoyed reading it." And reviewer Haris Aziz concludes "The book is highly recommended to any one with a mathematical curiosity and interest in the development of ideas." - -More details of Cook's work with Concorde, suitable for more serious researchers on the problem and on related topics, can be found in an earlier book by Cook with David Applegate, Robert E. Bixby and Václav Chvátal, The Traveling Salesman Problem: A Computational Study (2007).
- -Other books on the travelling salesman problem, also more technical than In Pursuit of the Traveling Salesman, include The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization (by Lawler, Lenstra, Rinnooy Kan, and Shmoys, 1985) and The Traveling Salesman Problem and Its Variations (by Gutin and Punnen, 2006). diff --git a/wiki/wikipedia/2616.txt b/wiki/wikipedia/2616.txt deleted file mode 100644 index 2fb03a84e6224d1328e073211a94d43ef0264641..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2616.txt +++ /dev/null @@ -1,22 +0,0 @@ -Bhaskara's Lemma is an identity used as a lemma in the chakravala method. It states that: -$$ - Nx^2 + k = y^2\implies N\left(\frac{mx + y}{k}\right)^2 + \frac{m^2 - N}{k} = \left(\frac{my + Nx}{k}\right)^2 -$$ - -for integers $m, x, y, N,$ and non-zero integer $k$. - -The proof follows from simple algebraic manipulations as follows: multiply both sides of the equation by $m^2-N$, add $N^2x^2+2Nmxy+Ny^2$, factor, and divide by $k^2$. -$$ - Nx^2 + k = y^2\implies Nm^2x^2-N^2x^2+k(m^2-N) = m^2y^2-Ny^2 -$$ -$$ -\implies Nm^2x^2+2Nmxy+Ny^2+k(m^2-N) = m^2y^2+2Nmxy+N^2x^2 -$$ -$$ -\implies N(mx+y)^2+k(m^2-N) = (my+Nx)^2 -$$ -$$ -\implies N\left(\frac{mx + y}{k}\right)^2 + \frac{m^2 - N}{k} = \left(\frac{my + Nx}{k}\right)^2. -$$ - -So long as neither $k$ nor $m^2-N$ is zero, the implication goes in both directions. (The lemma holds for real or complex numbers as well as integers.)
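A quick numeric sanity check of the lemma (the values N = 7, x = 1, y = 3, k = 2, m = 3 are our illustrative choices, with m picked, as in the chakravala method, so that the new triple is integral):

```python
# Verify Bhaskara's lemma on a small example: N*x^2 + k = y^2 implies
# the corresponding identity for the transformed triple (x2, k2, y2).
from fractions import Fraction

N, x, y, k, m = 7, 1, 3, 2, 3
assert N * x**2 + k == y**2                # hypothesis: 7 + 2 = 9

x2 = Fraction(m * x + y, k)                # new x = (m*x + y)/k
k2 = Fraction(m**2 - N, k)                 # new k = (m^2 - N)/k
y2 = Fraction(m * y + N * x, k)            # new y = (m*y + N*x)/k

assert N * x2**2 + k2 == y2**2             # conclusion of the lemma
print(x2, k2, y2)                          # 3, 1, 8:  7*3^2 + 1 = 8^2
```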
diff --git a/wiki/wikipedia/2617.txt b/wiki/wikipedia/2617.txt deleted file mode 100644 index 45b6b085c8f06912b469a5503185d26e233046a5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2617.txt +++ /dev/null @@ -1,23 +0,0 @@ -In mathematics, Mahler's inequality, named after Kurt Mahler, states that the geometric mean of the term-by-term sum of two finite sequences of positive numbers is greater than or equal to the sum of their two separate geometric means: -$$ -\prod_{k=1}^n (x_k + y_k)^{1/n} \ge \prod_{k=1}^n x_k^{1/n} + \prod_{k=1}^n y_k^{1/n} -$$ - -when xk, yk > 0 for all k. - -By the inequality of arithmetic and geometric means, we have: -$$ -\prod_{k=1}^n \left({x_k \over x_k + y_k}\right)^{1/n} \le {1 \over n} \sum_{k=1}^n {x_k \over x_k + y_k}, -$$ - -and -$$ -\prod_{k=1}^n \left({y_k \over x_k + y_k}\right)^{1/n} \le {1 \over n} \sum_{k=1}^n {y_k \over x_k + y_k}. -$$ - -Hence, -$$ -\prod_{k=1}^n \left({x_k \over x_k + y_k}\right)^{1/n} + \prod_{k=1}^n \left({y_k \over x_k + y_k}\right)^{1/n} \le {1 \over n} n = 1. -$$ - -Clearing denominators then gives the desired result. diff --git a/wiki/wikipedia/2618.txt b/wiki/wikipedia/2618.txt deleted file mode 100644 index 16dd3ee1e9c367031fb7597e8abf650fe556096d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2618.txt +++ /dev/null @@ -1,13 +0,0 @@ -Ulam's packing conjecture, named for Stanislaw Ulam, is a conjecture about the highest possible packing density of identical convex solids in three-dimensional Euclidean space. The conjecture says that the optimal density for packing congruent spheres is smaller than that for any other convex body. That is, according to the conjecture, the ball is the convex solid which forces the largest fraction of space to remain empty in its optimal packing structure. This conjecture is therefore related to the Kepler conjecture about sphere packing. Since the solution to the Kepler conjecture establishes that identical balls must leave ≈25.95% of the space empty, Ulam's conjecture is equivalent to the statement that no other convex solid forces that much space to be left empty. - -This conjecture was attributed posthumously to Ulam by Martin Gardner, who remarks in a postscript added to one of his Mathematical Games columns that Ulam communicated this conjecture to him in 1972. Though the original reference to the conjecture states only that Ulam "suspected" the ball to be the worst case for packing, the statement has been subsequently taken as a conjecture. - -Numerical experiments with a large variety of convex solids have resulted in each case in the construction of packings that leave less empty space than is left by close-packing of equal spheres, and so many solids have been ruled out as counterexamples to Ulam's conjecture. - -Nevertheless, there is an infinite space of possible shapes that have not been ruled out. - -Yoav Kallus has shown that at least among point-symmetric bodies, the ball constitutes a local maximum of the fraction of empty space forced. - -That is, any point-symmetric solid that does not deviate too much from a ball can be packed with greater efficiency than can balls. - -The analog of Ulam's packing conjecture in two dimensions would say that no convex shape forces more than ≈9.31% of the plane to remain uncovered, since that is the fraction of empty space left uncovered in the densest packing of disks. However, the regular octagon and smoothed octagon give counter-examples. It is conjectured that regular heptagons force the largest fraction of the plane to remain uncovered. In dimensions above four (excluding 8 and 24), the situation is complicated by the fact that the analogs of the Kepler conjecture remain open. diff --git a/wiki/wikipedia/2619.txt b/wiki/wikipedia/2619.txt deleted file mode 100644 index ff98cf397f83dcd9c2e007645a3153f2cc020cb3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2619.txt +++ /dev/null @@ -1,143 +0,0 @@ -Hilbert's tenth problem is the tenth on the list of mathematical problems that the German mathematician David Hilbert posed in 1900. It is the challenge to provide a general algorithm which, for any given Diophantine equation (a polynomial equation with integer coefficients and a finite number of unknowns), can decide whether the equation has a solution with all unknowns taking integer values. - -For example, the Diophantine equation $3x^2-2xy-y^2z-7=0$ has an integer solution: $x=1,\ y=2,\ z=-2$. By contrast, the Diophantine equation $x^2+y^2+1=0$ has no such solution. - -Hilbert's tenth problem has been solved, and it has a negative answer: such a general algorithm does not exist. This is the result of combined work of Martin Davis, Yuri Matiyasevich, Hilary Putnam and Julia Robinson which spans 21 years, with Matiyasevich completing the theorem in 1970. The theorem is now known as Matiyasevich's theorem or the MRDP theorem. - -Hilbert formulated the problem as follows:
    Given a Diophantine equation with any number of unknown quantities and with rational integral numerical coefficients: To devise a process according to which it can be determined in a finite number of operations whether the equation is solvable in rational integers.
- -The words "process" and "finite number of operations" have been taken to mean that Hilbert was asking for an algorithm. The term "rational integer" simply refers to the integers, positive, negative or zero: 0, ±1, ±2, ... . So Hilbert was asking for a general algorithm to decide whether a given polynomial Diophantine equation with integer coefficients has a solution in integers. - -Hilbert's problem is not concerned with finding the solutions. It only asks whether, in general, we can decide whether one or more solutions exist. The answer to this question is negative, in the sense that no "process can be devised" for answering that question. In modern terms, Hilbert's 10th problem is an undecidable problem. Although it is unlikely that Hilbert had conceived of such a possibility, before going on to list the problems, he did presciently remark:
    Occasionally it happens that we seek the solution under insufficient hypotheses or in an incorrect sense, and for this reason do not succeed. The problem then arises: to show the impossibility of the solution under the given hypotheses or in the sense contemplated.
- -Proving the 10th problem undecidable is then a valid answer even in Hilbert's terms, since it is a proof about "the impossibility of the solution". - -In a Diophantine equation, there are two kinds of variables: the parameters and the unknowns. The Diophantine set consists of the parameter assignments for which the Diophantine equation is solvable. A typical example is the linear Diophantine equation in two unknowns, -$$ -a_1x_1 + a_2x_2 = a_3 -$$, - -where the equation is solvable if and only if the greatest common divisor of $a_1, a_2$ divides $a_3$. The values for $a_1, a_2, a_3$ that satisfy this restriction are a Diophantine set and the equation above is its Diophantine definition. - -Diophantine definitions can be provided by simultaneous systems of equations as well as by individual equations because the system -$$ -p_1=0,\ldots,p_k=0 -$$ - -is equivalent to the single equation -$$ -p_1^2+\cdots+p_k^2=0. -$$ - -Sets of natural numbers, of pairs of natural numbers (or even of n-tuples of natural numbers) that have Diophantine definitions are called Diophantine sets. In these terms, Hilbert's tenth problem asks whether there is an algorithm to determine if a given Diophantine set is non-empty. - -The work on the problem has been in terms of solutions in natural numbers (understood as the non-negative integers) rather than arbitrary integers. The two problems are equivalent: any general algorithm that can decide whether a given Diophantine equation has an integer solution could be modified into an algorithm that decides whether a given Diophantine equation has a natural number solution, and vice versa. This can be seen as follows: The requirement that solutions be natural numbers can be expressed with the help of Lagrange's four-square theorem: every natural number is the sum of the squares of four integers, so we simply replace every parameter with the sum of squares of four extra parameters. Similarly, since every integer can be written as the difference of two natural numbers, we can replace every parameter that ranges in integers with the difference of two parameters that range in natural numbers. - -A recursively enumerable set can be characterized as one for which there exists an algorithm that will ultimately halt when a member of the set is provided as input, but may continue indefinitely when the input is a non-member. It was the development of computability theory (also known as recursion theory) that provided a precise explication of the intuitive notion of algorithmic computability, thus making the notion of recursive enumerability perfectly rigorous. It is evident that Diophantine sets are recursively enumerable (a.k.a. semi-decidable): one can arrange all possible tuples of values of the unknowns in a sequence and then, for a given value of the parameter(s), test these tuples, one after another, to see whether they are solutions of the corresponding equation, as in the sketch below.
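The contrast between the decidable linear case and the merely semi-decidable general case can be made concrete. In the Python sketch below (function names are ours), the linear test applies the gcd criterion above, while the general search enumerates tuples exactly as just described and halts only if a solution exists. The sample equation is the one from the introduction, which also has the natural-number solution (2, 1, 1):

```python
# Decidable linear case vs. semi-decision by enumeration.
from math import gcd
from itertools import count, product

def linear_solvable(a1: int, a2: int, a3: int) -> bool:
    """a1*x1 + a2*x2 = a3 has an integer solution iff gcd(a1, a2) | a3."""
    return a3 % gcd(a1, a2) == 0

def search(p, num_vars: int):
    """Semi-decision for p(x1, ..., xn) = 0 over the natural numbers:
    halts (returning a solution) iff one exists; otherwise runs forever."""
    for bound in count():
        for xs in product(range(bound + 1), repeat=num_vars):
            if p(*xs) == 0:
                return xs

print(linear_solvable(6, 10, 104))   # True:  gcd(6, 10) = 2 divides 104
print(linear_solvable(6, 10, 103))   # False: 2 does not divide 103
print(search(lambda x, y, z: 3*x**2 - 2*x*y - y**2*z - 7, 3))  # (2, 1, 1)
```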
The unsolvability of Hilbert's tenth problem is a consequence of the surprising fact that the converse is true: - -    Every recursively enumerable set is Diophantine.
- -This result is variously known as Matiyasevich's theorem (because he provided the crucial step that completed the proof) and the MRDP theorem (for Yuri Matiyasevich, Julia Robinson, Martin Davis, and Hilary Putnam). Because there exists a recursively enumerable set that is not computable, the unsolvability of Hilbert's tenth problem is an immediate consequence. In fact, more can be said: there is a polynomial -$$ -p(a,x_1,\ldots,x_n) -$$ - -with integer coefficients such that the set of values of $a$ for which the equation -$$ -p(a,x_1,\ldots,x_n)=0 -$$ - -has solutions in natural numbers is not computable. So, not only is there no general algorithm for testing Diophantine equations for solvability; even for this one-parameter family of equations, there is no algorithm. - -The Matiyasevich/MRDP theorem relates two notions - one from computability theory, the other from number theory - and has some surprising consequences. Perhaps the most surprising is the existence of a universal Diophantine equation: - -There exists a polynomial $p(a,n,x_1,\ldots,x_k)$ such that, given any Diophantine set $S$, there is a number $n_0$ such that -$$ - S = \{a \mid \exists x_1, \ldots, x_k[p(a,n_0,x_1,\ldots,x_k)=0]\}. -$$ - -This is true simply because Diophantine sets coincide with recursively enumerable sets, which in turn correspond to Turing machines, and it is a well-known property of Turing machines that there exist universal Turing machines, capable of executing any algorithm. - -Hilary Putnam has pointed out that for any Diophantine set $S$ of positive integers, there is a polynomial -$$ -q(x_0,x_1,\ldots,x_n) -$$ - -such that $S$ consists of exactly the positive numbers among the values assumed by $q$ as the variables -$$ -x_0,x_1,\ldots,x_n -$$ - -range over all natural numbers. This can be seen as follows: If -$$ -p(a,y_1,\ldots,y_n)=0 -$$ - -provides a Diophantine definition of $S$, then it suffices to set -$$ -q(x_0,x_1,\ldots,x_n)= x_0[1- p(x_0,x_1,\ldots,x_n)^2]. -$$ - -So, for example, there is a polynomial for which the positive part of its range is exactly the prime numbers. (On the other hand, no polynomial can only take on prime values.) The same holds for other recursively enumerable sets of natural numbers: the factorial, the binomial coefficients, the Fibonacci numbers, etc.
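Putnam's construction is easy to experiment with. The Python sketch below uses a toy Diophantine definition of our own choosing, p(a, y) = a − y² (so S is the set of positive perfect squares), rather than the far more complicated prime-defining polynomial; the positive values of q over natural-number inputs are then exactly the elements of S:

```python
# Putnam's trick on a toy example: S = positive perfect squares,
# Diophantine-defined by p(a, y) = a - y^2.  Then q = x0 * (1 - p^2)
# is positive exactly when p(x0, x1) = 0, and its value is then x0,
# so the positive range of q is exactly S.

def p(a, y):
    return a - y * y

def q(x0, x1):
    return x0 * (1 - p(x0, x1) ** 2)

N = 50
positive_range = {v for x0 in range(N) for x1 in range(N)
                  if (v := q(x0, x1)) > 0}
squares = {y * y for y in range(1, N) if y * y < N}
print(sorted(positive_range))        # [1, 4, 9, 16, 25, 36, 49]
assert positive_range == squares
```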
- -Other applications concern what logicians refer to as $\Pi^{0}_1$ propositions, sometimes also called propositions of Goldbach type. These are like Goldbach's conjecture, in stating that all natural numbers possess a certain property that is algorithmically checkable for each particular number. The Matiyasevich/MRDP theorem implies that each such proposition is equivalent to a statement that asserts that some particular Diophantine equation has no solutions in natural numbers. A number of important and celebrated problems are of this form: in particular, Fermat's Last Theorem, the Riemann hypothesis, and the Four color theorem. In addition, the assertion that particular formal systems such as Peano arithmetic or ZFC are consistent can be expressed as $\Pi^{0}_1$ sentences. The idea is to follow Kurt Gödel in coding proofs by natural numbers in such a way that the property of being the number representing a proof is algorithmically checkable. - -$\Pi^{0}_1$ sentences have the special property that if they are false, that fact will be provable in any of the usual formal systems. This is because the falsity amounts to the existence of a counter-example which can be verified by simple arithmetic. So if a $\Pi^{0}_1$ sentence is such that neither it nor its negation is provable in one of these systems, that sentence must be true. - -A particularly striking form of Gödel's incompleteness theorem is also a consequence of the Matiyasevich/MRDP theorem:
    Let -$$ -p(a,x_1,\ldots,x_k)=0 -$$ - -provide a Diophantine definition of a non-computable set. Let $A$ be an algorithm that outputs a sequence of natural numbers $n$ such that the corresponding equation -$$ -p(n,x_1,\ldots,x_k)=0 -$$ - -has no solutions in natural numbers. Then there is a number $n_0$ which is not output by $A$ while in fact the equation -$$ -p(n_0,x_1,\ldots,x_k)=0 -$$ - -has no solutions in natural numbers.
- -To see that the theorem is true, it suffices to notice that if there were no such number $n_0$, one could algorithmically test membership of a number $n$ in this non-computable set by simultaneously running the algorithm $A$ to see whether $n$ is output while also checking all possible $k$-tuples of natural numbers seeking a solution of the equation -$$ -p(n,x_1,\ldots,x_k)=0. -$$ - -We may associate an algorithm $A$ with any of the usual formal systems such as Peano arithmetic or ZFC by letting it systematically generate consequences of the axioms and then output a number $n$ whenever a sentence of the form -$$ -\neg \exists x_1,\ldots , x_k [p(n,x_1,\ldots,x_k)=0] -$$ - -is generated. Then the theorem tells us that either a false statement of this form is proved or a true one remains unproved in the system in question. - -We may speak of the degree of a Diophantine set as being the least degree of a polynomial in an equation defining that set. Similarly, we can call the dimension of such a set the fewest unknowns in a defining equation. Because of the existence of a universal Diophantine equation, it is clear that there are absolute upper bounds to both of these quantities, and there has been much interest in determining these bounds. - -Already in the 1920s Thoralf Skolem showed that any Diophantine equation is equivalent to one of degree 4 or less. His trick was to introduce new unknowns by equations setting them equal to the square of an unknown or the product of two unknowns. Repetition of this process results in a system of second degree equations; then an equation of degree 4 is obtained by summing the squares. So every Diophantine set is trivially of degree 4 or less. It is not known whether this result is best possible. - -Julia Robinson and Yuri Matiyasevich showed that every Diophantine set has dimension no greater than 13. Later, Matiyasevich sharpened their methods to show that 9 unknowns suffice. Although it may well be that this result is not the best possible, there has been no further progress. So, in particular, there is no algorithm for testing Diophantine equations with 9 or fewer unknowns for solvability in natural numbers. For the case of rational integer solutions (as Hilbert had originally posed it), the 4-squares trick shows that there is no algorithm for equations with no more than 36 unknowns. But Zhi-Wei Sun showed that the problem for integers is unsolvable even for equations with no more than 11 unknowns. - -Martin Davis studied algorithmic questions involving the number of solutions of a Diophantine equation. Hilbert's tenth problem asks whether or not that number is 0. Let $A=\{0,1,2,3,\ldots,\aleph_0\}$ and let $C$ be a proper non-empty subset of $A$. Davis proved that there is no algorithm to test a given Diophantine equation to determine whether the number of its solutions is a member of the set $C$. Thus there is no algorithm to determine whether the number of solutions of a Diophantine equation is finite, odd, a perfect square, a prime, etc. - -The proof of the MRDP theorem has been formalized in Coq. - -Although Hilbert posed the problem for the rational integers, it can just as well be asked for many rings (in particular, for any ring whose number of elements is countable). Obvious examples are the rings of integers of algebraic number fields as well as the rational numbers. - -There has been much work on Hilbert's tenth problem for the rings of integers of algebraic number fields.
Basing themselves on earlier work by Jan Denef and Leonard Lipschitz and using class field theory, Harold N. Shapiro and Alexandra Shlapentokh were able to prove: - -
    Hilbert's tenth problem is unsolvable for the ring of integers of any algebraic number field whose Galois group over the rationals is abelian.
- -Shlapentokh and Thanases Pheidas (independently of one another) obtained the same result for algebraic number fields admitting exactly one pair of complex conjugate embeddings. - -The problem for the ring of integers of algebraic number fields other than those covered by the results above remains open. Likewise, despite much interest, the problem for equations over the rationals remains open. Barry Mazur has conjectured that for any variety over the rationals, the topological closure over the reals of the set of solutions has only finitely many components. This conjecture implies that the integers are not Diophantine over the rationals, and so, if this conjecture is true, a negative answer to Hilbert's Tenth Problem would require a different approach than that used for other rings. diff --git a/wiki/wikipedia/262.txt b/wiki/wikipedia/262.txt deleted file mode 100644 index fa64db4d7b12c690b9a90f88382443d417502ab9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/262.txt +++ /dev/null @@ -1,55 +0,0 @@ -The theorem of the envelopment of societies for resource-dependent populations, also called the Bruss–Duerinckx theorem, is a mathematical result on the behavior of populations which choose their society form according to only two hypotheses, namely those which are seen as most "natural": - -* Hypothesis 1 (H1): Individuals want to survive and see a future for their descendants, - -* Hypothesis 2 (H2): The average individual prefers a higher standard of living to a lower one, - -where H1 is supposed to precede H2 in the case of incompatibility of H1 with H2. - -Here populations with a society structure are modeled by so-called resource-dependent branching processes (RDBPs). - -The objective of RDBPs is to model different society structures and to compare the advantages and disadvantages of different societies, with the focus being on human societies. A RDBP is a discrete-time branching process (BP) in which individuals are supposed to have to work in order to be able to live and to reproduce. The population decides on a current society form by a policy, that is, a prescription of rules for how available resources are to be distributed among the individuals. Policies may change over the generations by the interaction of individuals. - -To model a human society, a RDBP incorporates, apart from an initial state (number of ancestors at time 0), individual demands for resources (standard of living), creation (production) of new resources for the next generation (including non-consumption and heritage of resources), a policy to distribute resources, and a control option for individuals interacting with the society. For simplicity the reproduction in a RDBP is modeled as being asexual, but the option to replace the mean reproduction rate by the so-called average reproduction rate of mating units (see Galton-Watson processes and ref. 1984) makes it possible to show that the main results given below are not affected by this simplification. - -Formally, a RDBP is a stochastic process Γ defined on the non-negative integers which is a BP defined by - -* an initial state Γ0; - -* a law of reproduction of individuals (asexual); - -* a law of individual creation of resources; - -* a law of individual resource demands (claims); - -* a policy to distribute available resources to individuals which are present in the population; - -* a tool of interaction between individuals and the society. - -Models for the development of a human society in time must allow for interdependence between the different components.
Such models are in general very complicated. Crucial in the development of the results was the idea not to try to model the development of a society with a (single) realistic RDBP, but rather with a sequence of RDBPs respecting H1 and H2, that is, with control actions defining at each time of control a relevant short-horizon RDBP. Thus RDBPs serve as locally defined models for the short-term behavior of a society, whereas the evolution of a society is seen as a sequence of RDBPs controlled by the interaction of individuals. The tool of interaction for the individuals within each generation is the option to emigrate before reproduction (generating children) if their individual resource claims are not met by the current society form. Emigration can here be replaced by other options of protest. - -It turns out that two special policies stand out as guidelines for the development of any society. These are the so-called weakest-first policy (wf-policy) and the so-called strongest-first policy (sf-policy), defined in the article on resource-dependent branching processes. It can be argued that the wf-society shares important features of an extreme form of communism, whereas the sf-society can similarly be interpreted as an extreme form of capitalism. - -Let: - -m = mean reproduction (descendants) per individual - -r = mean production (resource creation) per individual - -F = the individual probability distribution of claims (resources) - -Then, using a result on the behavior of stopping times of sums of order statistics (ref. 1991), the survival criteria can be explicitly computed for both the wf-society and the sf-society as a function of m, r and F. - -The theorem of the envelopment of societies says that, in the long run, the effective size of any population respecting H1 and H2 is enveloped by the effective sizes of the corresponding wf- and sf-societies. - -Intuition about why this theorem should be true is only partially reliable, and is sometimes completely wrong (there are explicit counterexamples). This is why the result has attracted much attention. The mathematical proof does not use new mathematical methods but is subtle. Apart from a classical result on so-called complete convergence, it is mainly based on theorems for stopping times on sums of independent and identically distributed order statistics (ref. 1991) and fine-tuned balancing acts between model assumptions and convergence in probability and almost sure convergence. - -The theorem allows for several conclusions, but the most challenging one is arguably the following. If one sees RDBPs with the two natural hypotheses as an acceptable model, then the wf-policy and the sf-policy (arguably seen as idealized or extreme forms of communism and capitalism, respectively) both play a particular role. They are both, and will always be, guidelines for any human society following the natural hypotheses. They cannot be stable societies: extreme communism cannot be stable because individuals would like to move towards an increased standard of living, that is, towards H2. Extreme capitalism cannot be stable because, unless resources were more than abundant, it would either die out or be quickly outnumbered by competing societies streaming into the vacuum. - -However, both form in the long run (in terms of effective population sizes) an envelope of any society, however sophisticated its policy may be.
diff --git a/wiki/wikipedia/2620.txt b/wiki/wikipedia/2620.txt deleted file mode 100644 index fef2d1e5e9267fcc02d7de9a4aa3cb9d64d810ca..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2620.txt +++ /dev/null @@ -1,7 +0,0 @@ -In economics, the Edgeworth conjecture is the idea, named after Francis Ysidro Edgeworth, that the core of an economy shrinks to the set of Walrasian equilibria as the number of agents increases to infinity. - -The core of an economy is a concept from cooperative game theory defined as the set of feasible allocations in an economy that cannot be improved upon by any subset of the economy's consumers (a coalition). For general equilibrium economies, the core is typically non-empty (there is at least one feasible allocation) but also "large" in the sense that there may be a continuum of feasible allocations that satisfy the requirements. The conjecture basically states that if the number of agents is also "large" then the only allocations in the core are precisely what a competitive market would produce. As such, the conjecture is seen as providing some game-theoretic foundations for the usual assumption in general equilibrium theory of price taking agents. In particular, it means that in a "large" economy people act as if they were price takers, even though theoretically they have all the power to set prices and renegotiate their trades. Hence, the fictitious Walrasian auctioneer of general equilibrium, while strictly speaking completely unrealistic, can be seen as a "short-cut" to getting the right answer. - -Edgeworth himself did not quite prove this result—hence the term conjecture rather than theorem—although he did provide most of the necessary intuition and went some way towards it. In the 1960s formal proofs were presented under different assumptions by Debreu and Scarf (1963) as well as Aumann (1964). Both of these results, however, showed that the conditions sufficient for this result to hold were a bit more stringent than Edgeworth anticipated. Debreu and Scarf considered the case of a "replica economy" where there is a finite number of agent types and the agents added to the economy to make it "large" are of the same type and in the same proportion as those already in it. Aumann's result relied on the existence of a continuum of agents. - -These proofs of the Edgeworth conjecture are seen as providing some qualified support for the idea that a large economy functions approximately as a price taking competitive economy of general equilibrium theory, even though agents have the power to set prices. diff --git a/wiki/wikipedia/2621.txt b/wiki/wikipedia/2621.txt deleted file mode 100644 index 5ed215e4d52b412416ea9d2865f9a2a3e427d60a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2621.txt +++ /dev/null @@ -1,43 +0,0 @@ -In the mathematical discipline of graph theory, a 3-dimensional matching is a generalization of bipartite matching (also known as 2-dimensional matching) to 3-partite hypergraphs. Finding a largest 3-dimensional matching is a well-known NP-hard problem in computational complexity theory. - -Let X, Y, and Z be finite sets, and let T be a subset of X × Y × Z. That is, T consists of triples (x, y, z) such that x ∈ X, y ∈ Y, and z ∈ Z. Now M ⊆ T is a 3-dimensional matching if the following holds: for any two distinct triples (x1, y1, z1) ∈ M and (x2, y2, z2) ∈ M, we have x1 ≠ x2, y1 ≠ y2, and z1 ≠ z2. - -The figure on the right illustrates 3-dimensional matchings.
The set X is marked with red dots, Y is marked with blue dots, and Z is marked with green dots. Figure (a) shows the set T (gray areas). Figure (b) shows a 3-dimensional matching M with |M| = 2, and Figure (c) shows a 3-dimensional matching M with |M| = 3. - -The matching M illustrated in Figure (c) is a maximum 3-dimensional matching, i.e., it maximises |M|. The matchings illustrated in Figures (b)–(c) are maximal 3-dimensional matchings, i.e., they cannot be extended by adding more elements from T. - -A 2-dimensional matching can be defined in a completely analogous manner. Let X and Y be finite sets, and let T be a subset of X × Y. Now M ⊆ T is a 2-dimensional matching if the following holds: for any two distinct pairs (x1, y1) ∈ M and (x2, y2) ∈ M, we have x1 ≠ x2 and y1 ≠ y2. - -In the case of 2-dimensional matching, the set T can be interpreted as the set of edges in a bipartite graph G = (X, Y, T); each edge in T connects a vertex in X to a vertex in Y. A 2-dimensional matching is then a matching in the graph G, that is, a set of pairwise non-adjacent edges. - -Hence 3-dimensional matchings can be interpreted as a generalization of matchings to hypergraphs: the sets X, Y, and Z contain the vertices, each element of T is a hyperedge, and the set M consists of pairwise non-adjacent edges (edges that do not have a common vertex). In the case of 2-dimensional matching, we have Y = Z. - -A 3-dimensional matching is a special case of a set packing: we can interpret each element (x, y, z) of T as a subset {x, y, z} of X ∪ Y ∪ Z; then a 3-dimensional matching M consists of pairwise disjoint subsets. - -In computational complexity theory, 3-dimensional matching is also the name of the following decision problem: given a set T and an integer k, decide whether there exists a 3-dimensional matching M ⊆ T with |M| ≥ k. - -This decision problem is known to be NP-complete; it is one of Karp's 21 NP-complete problems. There exist, though, polynomial-time algorithms for that problem for dense hypergraphs. - -The problem is NP-complete even in the special case that k = |X| = |Y| = |Z| and when each element is contained in exactly 3 sets, i.e., when we want a perfect matching in a 3-regular hypergraph. In this case, a 3-dimensional (dominating) matching is not only a set packing but also an exact cover: the set M covers each element of X, Y, and Z exactly once. The proof is by reduction from 3SAT. Given a 3SAT instance, we construct a 3DM instance as follows: - -* For each variable xi, there is a "variable gadget" shaped like a wheel. It is made of overlapping triplets. The number of triplets is twice the number of occurrences of xi in the formula. There are exactly two ways to cover all the vertices in the gadget: one is to choose all even-indexed triplets, and one is to choose all odd-indexed triplets. These two ways correspond to setting xi to "true" or "false". The "true" selection leaves uncovered exactly one vertex in every odd-indexed triplet, and the "false" selection leaves uncovered exactly one vertex in every even-indexed triplet. - -* For each clause xi ∨ xj ∨ xk, there is a "clause gadget" shaped like a rose. It is made of three overlapping triplets, one for each variable in the clause. It can be covered iff at least one of its nodes is left uncovered by the selection of the variable gadgets. - -* Since it is possible that two or more nodes are left uncovered, we also need a "garbage collection gadget". It is shaped like a larger rose.
It is made of several overlapping triplets, one for each vertex that can be left uncovered in the variable gadget. The number of such gadgets is determined so that they can be covered exactly if and only if there is a satisfying assignment. - -A maximum 3-dimensional matching is a largest 3-dimensional matching. In computational complexity theory, this is also the name of the following optimization problem: given a set T, find a 3-dimensional matching M ⊆ T that maximizes |M|. - -Since the decision problem described above is NP-complete, this optimization problem is NP-hard, and hence it seems that there is no polynomial-time algorithm for finding a maximum 3-dimensional matching. However, there are efficient polynomial-time algorithms for finding a maximum bipartite matching (maximum 2-dimensional matching), for example, the Hopcroft–Karp algorithm. - -There is a very simple polynomial-time 3-approximation algorithm for 3-dimensional matching: find any maximal 3-dimensional matching, as in the sketch below.
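A minimal Python sketch of this greedy approach (ours): one scan of the triples produces a maximal matching, and the small example shows why the result can be a constant factor smaller than a maximum matching (each chosen triple can block at most three triples of an optimal matching, hence the factor 3):

```python
# Greedy maximal 3-dimensional matching: keep any triple that does not
# clash with the triples already chosen.  The result is maximal, but
# not necessarily maximum.

def greedy_3d_matching(triples):
    used_x, used_y, used_z = set(), set(), set()
    matching = []
    for (x, y, z) in triples:
        if x not in used_x and y not in used_y and z not in used_z:
            matching.append((x, y, z))
            used_x.add(x); used_y.add(y); used_z.add(z)
    return matching

T = [(1, 1, 2), (2, 2, 2), (1, 1, 1)]
print(greedy_3d_matching(T))   # [(1, 1, 2)] -- maximal, but the maximum
                               # matching [(1, 1, 1), (2, 2, 2)] has size 2
```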
- -However, attaining better approximation factors is probably hard: the problem is APX-complete, that is, it is hard to approximate within some constant. - -It is NP-hard to achieve an approximation factor of 95/94 for maximum 3-d matching, and an approximation factor of 48/47 for maximum 4-d matching. The hardness remains even when restricted to instances with exactly two occurrences of each element. - -There are various algorithms for 3-d matching in the massively parallel computation model. diff --git a/wiki/wikipedia/2622.txt b/wiki/wikipedia/2622.txt deleted file mode 100644 index 668cb5b3605acde8c6be6698b602f9bebc48f78f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2622.txt +++ /dev/null @@ -1,43 +0,0 @@ -Kawasaki's theorem is a theorem in the mathematics of paper folding that describes the crease patterns with a single vertex that may be folded to form a flat figure. It states that the pattern is flat-foldable if and only if alternatingly adding and subtracting the angles of consecutive folds around the vertex gives an alternating sum of zero. - -Crease patterns with more than one vertex do not obey such a simple criterion, and are NP-hard to fold. - -The theorem is named after one of its discoverers, Toshikazu Kawasaki. However, several others also contributed to its discovery, and it is sometimes called the Kawasaki–Justin theorem or Husimi's theorem after other contributors, Jacques Justin and Kôdi Husimi. - -A one-vertex crease pattern consists of a set of rays or creases drawn on a flat sheet of paper, all emanating from the same point interior to the sheet. (This point is called the vertex of the pattern.) Each crease must be folded, but the pattern does not specify whether the folds should be mountain folds or valley folds. The goal is to determine whether it is possible to fold the paper so that every crease is folded, no folds occur elsewhere, and the whole folded sheet of paper lies flat. - -To fold flat, the number of creases must be even. This follows, for instance, from Maekawa's theorem, which states that the number of mountain folds at a flat-folded vertex differs from the number of valley folds by exactly two folds. Therefore, suppose that a crease pattern consists of an even number 2n of creases, and let $\alpha_1, \alpha_2, \ldots, \alpha_{2n}$ be the consecutive angles between the creases around the vertex, in clockwise order, starting at any one of the angles. Then Kawasaki's theorem states that the crease pattern may be folded flat if and only if the alternating sum and difference of the angles adds to zero: -$$ -\alpha_1 - \alpha_2 + \alpha_3 - \cdots + \alpha_{2n-1} - \alpha_{2n} = 0. -$$ - -An equivalent way of stating the same condition is that, if the angles are partitioned into two alternating subsets, then the sum of the angles in either of the two subsets is exactly 180 degrees. However, this equivalent form applies only to a crease pattern on a flat piece of paper, whereas the alternating sum form of the condition remains valid for crease patterns on conical sheets of paper with nonzero defect at the vertex.
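The alternating-sum test is straightforward to implement. A minimal Python sketch (the angle inputs, in degrees, are our illustrative examples):

```python
# Kawasaki test for a single-vertex crease pattern: given the
# consecutive angles around the vertex, the pattern is flat-foldable
# iff the crease count is even and the alternating sum is zero.

def kawasaki_flat_foldable(angles, tol=1e-9):
    if len(angles) % 2 != 0:       # an odd number of creases never folds flat
        return False
    alternating = sum(a if i % 2 == 0 else -a for i, a in enumerate(angles))
    return abs(alternating) < tol

print(kawasaki_flat_foldable([90, 45, 45, 90, 45, 45]))  # True:  90-45+45-90+45-45 = 0
print(kawasaki_flat_foldable([100, 80, 90, 90]))         # False: 100-80+90-90 = 20
```

Equivalently, in the first example the two alternate subsets {90, 45, 45} each sum to 180 degrees, matching the restatement above.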
Kawasaki's theorem, applied to each of the vertices of an arbitrary crease pattern, determines whether the crease pattern is locally flat-foldable, meaning that the part of the crease pattern near the vertex can be flat-folded. However, there exist crease patterns that are locally flat-foldable but that have no global flat folding that works for the whole crease pattern at once. Hull conjectured that global flat-foldability could be tested by checking Kawasaki's theorem at each vertex of a crease pattern, and then also testing bipartiteness of an undirected graph associated with the crease pattern. However, this conjecture was disproven by Bern, who showed that Hull's conditions are not sufficient. More strongly, Bern and Hayes showed that the problem of testing global flat-foldability is NP-complete. - -To show that Kawasaki's condition necessarily holds for any flat-folded figure, it suffices to observe that, at each fold, the orientation of the paper is reversed. Thus, if the first crease in the flat-folded figure is placed in the plane parallel to the x-axis, the next crease must be rotated from it by an angle of $\alpha_1$, the crease after that by an angle of $\alpha_1 - \alpha_2$ (because the second angle has the reverse orientation from the first), etc. In order for the paper to meet back up with itself at the final angle, Kawasaki's condition must be met. - -Showing that the condition is also a sufficient condition is a matter of describing how to fold a given crease pattern so that it folds flat. That is, one must show how to choose whether to make mountain or valley folds, and in what order the flaps of paper should be arranged on top of each other. One way to do this is to choose a number i such that the partial alternating sum -$$ -\alpha_1 - \alpha_2 + \alpha_3 - \cdots + \alpha_{2i-1} - \alpha_{2i} -$$ - -is as small as possible. Either i = 0 and the partial sum is an empty sum that is also zero, or for some nonzero choice of i the partial sum is negative. Then, accordion fold the pattern, starting with angle $\alpha_{2i+1}$ and alternating between mountain and valley folds, placing each angular wedge of the paper below the previous folds. At each step until the final fold, an accordion fold of this type will never self-intersect. The choice of i ensures that the first wedge sticks out to the left of all the other folded pieces of paper, allowing the final wedge to connect back up to it. - -An alternative proof of sufficiency can be used to show that there are many different flat foldings. Consider the smallest angle $\alpha_i$ and the two creases on either side of it. Mountain-fold one of these two creases and valley-fold the other, choosing arbitrarily which fold to use for which crease. Then, glue the resulting flap of paper onto the remaining part of the crease pattern. The result of this gluing will be a crease pattern with two fewer creases, on a conical sheet of paper, that still satisfies Kawasaki's condition. Therefore, by mathematical induction, repeating this process will eventually lead to a flat folding. The base case of the induction is a cone with only two creases and two equal-angle wedges, which can obviously be flat-folded by using a mountain fold for both creases. There are two ways to choose which folds to use in each step of this method, and each step eliminates two creases. Therefore, any crease pattern with 2n creases that satisfies Kawasaki's condition has at least $2^n$ different choices of mountain and valley folds that all lead to valid flat foldings. - -In the late 1970s, Kôdi Husimi and David A. Huffman independently observed that flat-folded figures with four creases have opposite angles adding to $\pi$, a special case of Kawasaki's theorem. Huffman included the result in a 1976 paper on curved creases, and Husimi published the four-crease theorem in a book on origami geometry with his wife Mitsue Husimi. - -The same result was published even earlier, in a pair of papers from 1966 by S. Murata that also included the six-crease case and the general case of Maekawa's theorem. - -The fact that crease patterns with arbitrarily many creases necessarily have alternating sums of angles adding to $\pi$ was discovered by Kawasaki, by Stuart Robertson, and by Jacques Justin (again, independently of each other) in the late 1970s and early 1980s. - -Because of Justin's contribution to the problem, Kawasaki's theorem has also been called the Kawasaki–Justin theorem. - -The fact that this condition is sufficient—that is, that crease patterns with evenly many angles, alternatingly summing to $\pi$, can always be flat-folded—may have been first stated by Hull. - -Kawasaki himself has called the result Husimi's theorem, after Kôdi Husimi, and some other authors have followed this terminology as well. The name "Kawasaki's theorem" was first given to this result in Origami for the Connoisseur by Kunihiko Kasahara and Toshie Takahama (Japan Publications, 1987). - -Hull credits the lower bound of $2^n$ on the number of different flat-foldings of a crease pattern meeting the conditions of the theorem to independent work in the early 1990s by Azuma, Justin, and Ewins and Hull. - -Although Kawasaki's theorem completely describes the folding patterns that have flat-folded states, it does not describe the folding process needed to reach that state. For some folding patterns, it may be necessary to curve or bend the paper while transforming it from a flat sheet to its flat-folded state, rather than keeping the rest of the paper flat and only changing the dihedral angles at each fold. For rigid origami (a type of folding that keeps the surface flat except at its folds, suitable for hinged panels of rigid material rather than flexible paper), additional conditions are needed on a folding pattern to allow it to move from an unfolded state to a flat-folded state. diff --git a/wiki/wikipedia/2623.txt b/wiki/wikipedia/2623.txt deleted file mode 100644 index f844a6cb6c2e553457eb62566e599ca709e5ec40..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2623.txt +++ /dev/null @@ -1,13 +0,0 @@ -In mathematics, the Bing–Borsuk conjecture states that every $n$-dimensional homogeneous absolute neighborhood retract space is a topological manifold. The conjecture has been proved for dimensions 1 and 2, and it is known that the 3-dimensional version of the conjecture implies the Poincaré conjecture.
- -A topological space $M$ is homogeneous if, for any two points $m_1, m_2 \in M$, there is a homeomorphism of $M$ which takes $m_1$ to $m_2$. - -A metric space $M$ is an absolute neighborhood retract (ANR) if, for every closed embedding $f: M \rightarrow N$ (where $N$ is a metric space), there exists an open neighbourhood $U$ of the image $f(M)$ which retracts to $f(M)$. - -There is an alternate statement of the Bing–Borsuk conjecture: suppose $M$ is embedded in $\mathbb{R}^{m+n}$ for some $m \geq 3$ and this embedding can be extended to an embedding of $M \times (-\varepsilon, \varepsilon)$. If $M$ has a mapping cylinder neighbourhood $N=C_\varphi$ of some map $\varphi: \partial N \rightarrow M$ with mapping cylinder projection $\pi: N \rightarrow M$, then $\pi$ is an approximate fibration. - -The conjecture was first made in a paper by R. H. Bing and Karol Borsuk in 1965, who proved it for $n=1$ and 2. - -Włodzimierz Jakobsche showed in 1978 that, if the Bing–Borsuk conjecture is true in dimension 3, then the Poincaré conjecture must also be true. - -The Busemann conjecture states that every Busemann $G$-space is a topological manifold. It is a special case of the Bing–Borsuk conjecture. The Busemann conjecture is known to be true for dimensions 1 to 4. diff --git a/wiki/wikipedia/2624.txt b/wiki/wikipedia/2624.txt deleted file mode 100644 index 786ef4a64dc7df3e48350daf33969a44ccd7b7ce..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2624.txt +++ /dev/null @@ -1,531 +0,0 @@ -In mathematics, Hensel's lemma, also known as Hensel's lifting lemma, named after Kurt Hensel, is a result in modular arithmetic, stating that if a univariate polynomial has a simple root modulo a prime number p, then this root can be lifted to a unique root modulo any higher power of p. More generally, if a polynomial factors modulo p into two coprime polynomials, this factorization can be lifted to a factorization modulo any higher power of p (the case of roots corresponds to the case of degree 1 for one of the factors). - -By passing to the "limit" (in fact this is an inverse limit) when the power of p tends to infinity, it follows that a root or a factorization modulo p can be lifted to a root or a factorization over the p-adic integers. - -These results have been widely generalized, under the same name, to the case of polynomials over an arbitrary commutative ring, where p is replaced by an ideal, and "coprime polynomials" means "polynomials that generate an ideal containing 1". - -Hensel's lemma is fundamental in p-adic analysis, a branch of analytic number theory. - -The proof of Hensel's lemma is constructive, and leads to an efficient algorithm for Hensel lifting, which is fundamental for factoring polynomials, and gives the most efficient known algorithm for exact linear algebra over the rational numbers. - -Hensel's original lemma concerns the relation between polynomial factorization over the integers and over the integers modulo a prime number p and its powers. It can be straightforwardly extended to the case where the integers are replaced by any commutative ring, and p is replaced by any maximal ideal (indeed, the maximal ideals of $\Z$ have the form $p\Z,$ where p is a prime number). - -Making this precise requires a generalization of the usual modular arithmetic, and so it is useful to define accurately the terminology that is commonly used in this context. - -Let R be a commutative ring, and I an ideal of R.
Reduction modulo I refers to the replacement of every element of R by its image under the canonical map $R\to R/I.$ For example, if $f\in R[X]$ is a polynomial with coefficients in R, its reduction modulo I, denoted $f \bmod I,$ is the polynomial in $(R/I)[X]=R[X]/IR[X]$ obtained by replacing the coefficients of f by their image in $R/I.$ Two polynomials f and g in $R[X]$ are congruent modulo I, denoted $f\equiv g \pmod I,$ if they have the same coefficients modulo I, that is if $f-g\in IR[X].$ If $h\in R[X],$ a factorization of h modulo I consists of two (or more) polynomials f, g in $R[X]$ such that $h\equiv fg \pmod I.$ - -The lifting process is the inverse of reduction. That is, given objects depending on elements of $R/I,$ the lifting process replaces these elements by elements of $R$ (or of $R/I^k$ for some k > 1) that map to them in a way that keeps the properties of the objects. - -For example, given a polynomial $h\in R[X]$ and a factorization modulo I expressed as $h\equiv fg \pmod I,$ lifting this factorization modulo $I^k$ consists of finding polynomials $f',g'\in R[X]$ such that $f'\equiv f \pmod I,$ $g'\equiv g \pmod I,$ and $h\equiv f'g' \pmod {I^k}.$ Hensel's lemma asserts that such a lifting is always possible under mild conditions; see next section. - -Originally, Hensel's lemma was stated (and proved) for lifting a factorization modulo a prime number p of a polynomial over the integers to a factorization modulo any power of p and to a factorization over the p-adic integers. This can be generalized easily, with the same proof, to the case where the integers are replaced by any commutative ring, the prime number is replaced by a maximal ideal, and the p-adic integers are replaced by the completion with respect to the maximal ideal. It is this generalization, which is also widely used, that is presented here. - -Let $\mathfrak m$ be a maximal ideal of a commutative ring R, and -$$ -h=\alpha_0X^n+\cdots +\alpha_{n-1}X+\alpha_n -$$ - -be a polynomial in $R[X]$ with a leading coefficient $\alpha_0$ not in $\mathfrak m.$ - -Since $\mathfrak m$ is a maximal ideal, the quotient ring $R/\mathfrak m$ is a field, and $(R/\mathfrak m)[X]$ is a principal ideal domain, and, in particular, a unique factorization domain, which means that every nonzero polynomial in $(R/\mathfrak m)[X]$ can be factorized in a unique way as the product of a nonzero element of $(R/\mathfrak m)$ and irreducible polynomials that are monic (that is, their leading coefficients are 1). - -Hensel's lemma asserts that every factorization of h modulo $\mathfrak m$ into coprime polynomials can be lifted in a unique way into a factorization modulo $\mathfrak m^k$ for every k. - -More precisely, with the above hypotheses, if $h\equiv \alpha_0 fg\pmod {\mathfrak m},$ where f and g are monic and coprime modulo $\mathfrak m,$ then, for every positive integer k there are monic polynomials $f_k$ and $g_k$ such that - -\begin{align} - -h&\equiv \alpha_0 f_kg_k \pmod{\mathfrak m^k},\\ - -f_k&\equiv f\pmod{\mathfrak m},\\ - -g_k&\equiv g\pmod{\mathfrak m}, - -\end{align} - -and $f_k$ and $g_k$ are unique (with these properties) modulo $\mathfrak m^k.$ - -An important special case is when $f=X-r.$ In this case the coprimality hypothesis means that r is a simple root of $h \bmod \mathfrak m.$ This gives the following special case of Hensel's lemma, which is also often called Hensel's lemma.
- -With above hypotheses and notations, if r is a simple root of $h \bmod \mathfrak m,$ then r can be lifted in a unique way to a simple root of $h \bmod {\mathfrak m^n}$ for every positive integer n. Explicitly, for every positive integer n, there is a unique $r_n\in R/{\mathfrak m}^n$ such that $r_n\equiv r \pmod {\mathfrak m}$ and $r_n$ is a simple root of $h \bmod \mathfrak m^n.$ - -The fact that one can lift to $R/\mathfrak m^n$ for every positive integer n suggests "passing to the limit" as n tends to infinity. This was one of the main motivations for introducing p-adic integers. - -Given a maximal ideal $\mathfrak m$ of a commutative ring R, the powers of $\mathfrak m$ form a basis of open neighborhoods for a topology on R, which is called the $\mathfrak m$-adic topology. The completion of this topology can be identified with the completion of the local ring $R_\mathfrak m,$ and with the inverse limit $\lim_\leftarrow R/\mathfrak m^n.$ This completion is a complete local ring, generally denoted $\widehat R_\mathfrak m.$ When R is the ring of integers, and $\mathfrak m=p\Z,$ where p is a prime number, this completion is the ring of p-adic integers $\Z_p.$ - -The definition of the completion as an inverse limit, and the above statement of Hensel's lemma imply that every factorization into pairwise coprime polynomials modulo $\mathfrak m$ of a polynomial $h\in R[X]$ can be uniquely lifted to a factorization of the image of h in $\widehat R_\mathfrak m[X].$ Similarly, every simple root of h modulo $\mathfrak m$ can be lifted to a simple root of the image of h in $\widehat R_\mathfrak m[X].$ - -Hensel's lemma is generally proved incrementally by lifting a factorization over $R/\mathfrak m^n$ to either a factorization over $R/\mathfrak m^{n+1}$ (linear lifting), or a factorization over $R/\mathfrak m^{2n}$ (quadratic lifting). - -The main ingredient of the proof is that coprime polynomials over a field satisfy Bézout's identity. That is, if f and g are coprime univariate polynomials over a field (here $R/\mathfrak m$), there are polynomials a and b such that $\deg a <\deg g,$ $\deg b <\deg f,$ and -$$ -af+bg=1. -$$ - -Bézout's identity allows defining coprime polynomials and proving Hensel's lemma, even if the ideal $\mathfrak m$ is not maximal. Therefore, in the following proofs, one starts from a commutative ring R, an ideal I, a polynomial $h\in R[X]$ that has a leading coefficient that is invertible modulo I (that is, its image in $R/I$ is a unit in $R/I$), and a factorization of h modulo I or modulo a power of I, such that the factors satisfy a Bézout's identity modulo I. In these proofs, $A\equiv B \pmod I$ means $A-B\in IR[X].$ - -Let I be an ideal of a commutative ring R, and $h\in R[X]$ be a univariate polynomial with coefficients in R that has a leading coefficient $\alpha$ that is invertible modulo I (that is, the image of $\alpha$ in $R/I$ is a unit in $R/I$). - -Suppose that for some positive integer k there is a factorization -$$ -h\equiv \alpha fg \pmod {I^k}, -$$ - -such that f and g are monic polynomials that are coprime modulo I, in the sense that there exist $a,b \in R[X],$ such that $af+bg\equiv 1\pmod I.$ Then, there are polynomials $\delta_f, \delta_g\in I^k R[X],$ such that $\deg \delta_f <\deg f,$ $\deg \delta_g <\deg g,$ and -$$ -h\equiv \alpha(f+\delta_f)(g+\delta_g) \pmod {I^{k+1}}.
-$$ - -Under these conditions, $\delta_f$ and $\delta_g$ are unique modulo $I^{k+1}R[X].$ - -Moreover, $f+\delta_f$ and $g+\delta_g$ satisfy the same Bézout's identity as f and g, that is, $a(f+\delta_f)+b(g+\delta_g)\equiv 1\pmod I.$ This follows immediately from the preceding assertions, but is needed to apply iteratively the result with increasing values of k. - -The proof that follows is written for computing $\delta_f$ and $\delta_g$ by using only polynomials with coefficients in $R/I$ or $I^k/I^{k+1}.$ When $R=\Z$ and $I=p\Z,$ this allows manipulating only integers modulo p. - -Proof: By hypothesis, $\alpha$ is invertible modulo I. This means that there exists $\beta\in R$ and $\gamma\in IR[X]$ such that $\alpha\beta=1-\gamma.$ - -Let $\delta_h\in I^kR[X],$ of degree less than $\deg h,$ such that -$$ -\delta_h\equiv h-\alpha fg \pmod{I^{k+1}}. -$$ - -(One may choose $\delta_h=h-\alpha fg,$ but other choices may lead to simpler computations. For example, if $R=\Z$ and $I=p\Z,$ it is possible and better to choose $\delta_h=p^k\delta'_h$ where the coefficients of $\delta'_h$ are integers in the interval $[0,p-1].$) - -As g is monic, the Euclidean division of $a\delta_h$ by g is defined, and provides q and c such that $a\delta_h = qg+c,$ and $\deg c <\deg g.$ Moreover, both q and c are in $I^{k} R[X].$ Similarly, let $b\delta_h = q'f+d,$ with $\deg d <\deg f,$ and $q', d\in I^{k} R[X].$ - -One has $q+q'\in I^{k+1}R[X].$ Indeed, one has -$$ -fc+gd=af\delta_h +bg\delta_h -fg(q+q')\equiv \delta_h-fg(q+q') \pmod{I^{k+1}}. -$$ - -As $fg$ is monic, the degree modulo $I^{k+1}$ of $fg(q+q')$ can be less than $\deg fg$ only if $q+q'\in I^{k+1}R[X].$ - -Thus, considering congruences modulo $I^{k+1},$ one has - -\begin{align} - -\alpha(f+\beta d)&(g+\beta c)-h\\ - -&\equiv \alpha fg-h+ \alpha \beta (f(a\delta_h-qg)+g(b\delta_h-q'f))\\ - -&\equiv \delta_h(-1 +\alpha\beta(af+bg)) - \alpha\beta fg(q+q')\\ - -&\equiv 0 \pmod{I^{k+1}}. - -\end{align} - -So, the existence assertion is verified with -$$ -\delta_f=\beta d, \qquad \delta_g=\beta c. -$$ - -Let R, I, h and $\alpha$ be as in the preceding section. Let -$$ -h\equiv \alpha fg \pmod I -$$ - -be a factorization into coprime polynomials (in the above sense), such that $\deg f+\deg g=\deg h.$ The application of linear lifting for $k=1, 2, \ldots, n-1$ shows the existence of $\delta_f$ and $\delta_g$ such that $\deg \delta_f <\deg f,$ $\deg \delta_g <\deg g,$ and -$$ -h\equiv \alpha (f+\delta_f)(g+\delta_g) \pmod {I^n}. -$$ - -The polynomials $\delta_f$ and $\delta_g$ are uniquely defined modulo $I^n.$ This means that, if another pair $(\delta'_f, \delta'_g)$ satisfies the same conditions, then one has -$$ -\delta'_f\equiv \delta_f \pmod{I^n}\qquad\text{and}\qquad \delta'_g\equiv \delta_g \pmod{I^n}. -$$ - -Proof: Since a congruence modulo $I^n$ implies the same congruence modulo $I^{n-1},$ one can proceed by induction and suppose that the uniqueness has been proved for n − 1, the case n = 0 being trivial. That is, one can suppose that -$$ -\delta_f- \delta'_f \in I^{n-1} R[X]\qquad\text{and}\qquad \delta_g - \delta'_g \in I^{n-1} R[X]. -$$ - -By hypothesis, one has -$$ -h\equiv \alpha(f+\delta_f)(g+\delta_g) \equiv \alpha(f+\delta'_f)(g+\delta'_g)\pmod {I^n}, -$$ - -and thus - -\begin{align} - -\alpha(f+\delta_f)(g+\delta_g) &- \alpha(f+\delta'_f)(g+\delta'_g)\\ - -&= \alpha(f(\delta_g-\delta'_g) +g(\delta_f-\delta'_f)) +\alpha (\delta_f(\delta_g-\delta'_g)+\delta'_g(\delta_f-\delta'_f)) \in I^n R[X].
- -\end{align} - -By the induction hypothesis, the second term of the latter sum belongs to $I^n R[X],$ and the same is thus true for the first term. As $\alpha$ is invertible modulo I, there exist $\beta\in R$ and $\gamma \in I$ such that $\alpha\beta=1+\gamma.$ Thus - -\begin{align} - -f(\delta_g-\delta'_g) &+g(\delta_f-\delta'_f)\\ - -&= \alpha\beta (f(\delta_g-\delta'_g) +g(\delta_f-\delta'_f))-\gamma(f(\delta_g-\delta'_g) +g(\delta_f-\delta'_f)) \in I^n R[X], - -\end{align} - -using the induction hypothesis again. - -The coprimality modulo I implies the existence of $a,b\in R[X]$ such that $1\equiv af+bg\pmod I.$ Using the induction hypothesis once more, one gets - -\begin{align} - -\delta_g-\delta'_g &\equiv (af+bg)(\delta_g-\delta'_g)\\ - -&\equiv g(b(\delta_g-\delta'_g) - a(\delta_f-\delta'_f))\pmod {I^n}. - -\end{align} - -Thus one has a polynomial of degree less than $\deg g$ that is congruent modulo $I^n$ to the product of the monic polynomial g and the polynomial $w = b(\delta_g-\delta'_g) - a(\delta_f-\delta'_f).$ This is possible only if $w\in I^n R[X],$ and implies $\delta_g-\delta'_g \in I^n R[X].$ Similarly, $\delta_f-\delta'_f$ is also in $I^n R[X],$ and this proves the uniqueness. - -Linear lifting allows lifting a factorization modulo $I^n$ to a factorization modulo $I^{n+1}.$ Quadratic lifting allows lifting directly to a factorization modulo $I^{2n},$ at the cost of lifting also the Bézout's identity and of computing modulo $I^n$ instead of modulo I (if one uses the above description of linear lifting). - -For lifting up to modulo $I^N$ for large N one can use either method. If, say, $N=2^k,$ a factorization modulo $I^N$ requires N − 1 steps of linear lifting or only k steps of quadratic lifting. However, in the latter case the size of the coefficients that have to be manipulated increases during the computation. This implies that the best lifting method depends on the context (value of N, nature of R, multiplication algorithm that is used, hardware specificities, etc.). - -Quadratic lifting is based on the following property. - -Suppose that for some positive integer k there is a factorization -$$ -h\equiv \alpha fg \pmod {I^k}, -$$ - -such that f and g are monic polynomials that are coprime modulo I, in the sense that there exist $a,b \in R[X],$ such that $af+bg\equiv 1\pmod {I^k}.$ Then, there are polynomials $\delta_f, \delta_g\in I^k R[X],$ such that $\deg \delta_f <\deg f,$ $\deg \delta_g <\deg g,$ and -$$ -h\equiv \alpha(f+\delta_f)(g+\delta_g) \pmod {I^{2k}}. -$$ - -Moreover, $f+\delta_f$ and $g+\delta_g$ satisfy a Bézout's identity of the form -$$ - (a+\delta_a)(f+\delta_f)+(b+\delta_b)(g+\delta_g)\equiv 1\pmod {I^{2k}}. -$$ - -(This is required for allowing iterations of quadratic lifting.) - -Proof: The first assertion is exactly that of linear lifting applied with k = 1 to the ideal $I^k$ instead of I. - -Let $e=af+bg-1\in I^k R[X].$ One has -$$ -a(f+\delta_f)+b(g+\delta_g)=1+\Delta, -$$ - -where -$$ -\Delta=e+a\delta_f+b\delta_g\in I^k R[X]. -$$ - -Setting $\delta_a=-a\Delta$ and $\delta_b=-b\Delta,$ one gets -$$ -(a+\delta_a)(f+\delta_f)+(b+\delta_b)(g+\delta_g)=(1+\Delta)(1-\Delta)=1-\Delta^2\equiv 1 \pmod{I^{2k}}, -$$ - -which proves the second assertion. - -Let $f(X)= X^6 - 2 \in \mathbb{Q}[X].$ - -Modulo 2, Hensel's lemma cannot be applied since the reduction of $f(X)$ modulo 2 is simply -$$ -\bar{f}(X) = X^6 - \overline{2} = X^6 -$$ - -with 6 factors $X$ not being relatively prime to each other. By Eisenstein's criterion, however, one can conclude that the polynomial $f(X)$ is irreducible in $\Q_2[X] .$
- -Over $k = \mathbb{F}_7$, on the other hand, one has -$$ -\bar{f}(X) = X^6 - \overline{2} = X^6 - \overline{16} = (X^3 - \overline{4})(X^3 + \overline{4}) -$$ - -where $4$ is a square root of 2 in $\mathbb{F}_7$. As 4 is not a cube in $\mathbb F_7,$ these two factors are irreducible over $\mathbb F_7$. Hence the complete factorization of $X^6-2$ in $\Z_7[X]$ and $\Q_7[X]$ is -$$ -f(X) = X^6 - 2 = (X^3-\alpha)(X^3 + \alpha), -$$ - -where $\alpha = \ldots 450454_7$ is a square root of 2 in $\Z_7$ that can be obtained by lifting the above factorization.
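The digits of $\alpha$ can also be computed directly, by lifting the root $4$ of $X^2 - 2$ modulo 7 (a root-lifting instance of the procedure above, worked out in detail for $p = 7$ later in this article). The following Python code is a minimal sketch of that computation; the helper name `hensel_lift` is ours, not an established API.

```python
# Minimal sketch of linear Hensel lifting for a simple root (Python 3.8+).
def hensel_lift(f, df, r, p, k):
    """Lift a simple root r of f mod p to a root mod p**k, one digit at a time."""
    pj = p
    for _ in range(k - 1):
        inv = pow(df(r) % p, -1, p)      # f'(r) is a unit mod p (simple root)
        t = (-(f(r) // pj) * inv) % p    # the next base-p digit
        r += t * pj
        pj *= p
    return r

g = lambda x: x * x - 2                  # alpha is a root of X^2 - 2
dg = lambda x: 2 * x
alpha = hensel_lift(g, dg, 4, 7, 6)      # start from the root 4 mod 7
assert (alpha * alpha - 2) % 7 ** 6 == 0
print([(alpha // 7 ** i) % 7 for i in range(6)])   # [4, 5, 4, 0, 5, 4]
```

Each pass of the loop contributes one more base-7 digit, reproducing the expansion $\alpha = \ldots 450454_7$ quoted above.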
- -Finally, in $\mathbb F_{727}[X]$ the polynomial splits into -$$ -\bar{f}(X) = X^6 - \overline{2} = (X-\overline{3})(X-\overline{116})(X-\overline{119})(X-\overline{608})(X-\overline{611})(X-\overline{724}) -$$ - -with all factors relatively prime to each other, so that in $\Z_{727}[X] $ and $\Q_{727}[X] $ there are 6 factors $X - \beta $ with the (non-rational) 727-adic integers -$$ -\beta = \left\{ \begin{array}{rrrr} 3 +& \!\!\! 545\cdot 727 +& \!\!\! 537 \cdot 727^2 +& \!\!\! 161 \cdot 727^3 +\ldots \\116 +& \!\!\! 48\cdot 727 +& \!\!\! 130\cdot 727^2 +& \!\!\! 498 \cdot 727^3 +\ldots \\119 +& \!\!\! 593\cdot 727 +& \!\!\! 667\cdot 727^2 +& \!\!\! 659 \cdot 727^3 +\ldots \\608 +& \!\!\! 133\cdot 727 +& \!\!\! 59 \cdot 727^2 +& \!\!\! 67 \cdot 727^3 +\ldots \\611 +& \!\!\! 678\cdot 727 +& \!\!\! 596\cdot 727^2 +& \!\!\! 228 \cdot 727^3 +\ldots \\724 +& \!\!\!181 \cdot 727 +& \!\!\! 189\cdot 727^2 +& \!\!\! 565 \cdot 727^3 +\ldots \end{array} \right. -$$ - -Let $f(x)$ be a polynomial with integer (or p-adic integer) coefficients, and let m, k be positive integers such that m ≤ k. If r is an integer such that -$$ -f(r) \equiv 0 \bmod p^k \quad \text{and} \quad f'(r) \not\equiv 0 \bmod p -$$ - -then there exists an integer s such that -$$ -f(s) \equiv 0 \bmod p^{k+m} \quad \text{and} \quad r \equiv s \bmod p^k. -$$ - -Furthermore, this s is unique modulo $p^{k+m}$, and can be computed explicitly as the integer such that -$$ -s = r - f(r)\cdot a, -$$ - -where $a$ is an integer satisfying -$$ -a \equiv [f'(r)]^{-1} \bmod p^m. -$$ - -Note that $f(r) \equiv 0 \bmod p^k $ so that the condition $s \equiv r \bmod p^k $ is met. As an aside, if $f'(r) \equiv 0 \bmod p$, then 0, 1, or several s may exist (see Hensel Lifting below). - -We use the Taylor expansion of f around r to write: -$$ -f(s) = \sum_{n=0}^N c_n (s-r)^n, \qquad c_n = f^{(n)}(r)/n!. -$$ - -From $r \equiv s \bmod p^k,$ we see that $s - r = tp^k$ for some integer t. Then - -\begin{align} - -f(s) &= \sum_{n=0}^N c_n \left(tp^k\right)^n \\ - -&= f(r) + t p^k f'(r) + \sum_{n=2}^N c_n t^n p^{kn} \\ - -&= f(r) + t p^k f'(r) + p^{2k}t^2g(t) && g(t) \in \Z[t] \\ - -&= zp^k + t p^k f'(r) + p^{2k}t^2g(t) && \text{writing } f(r) = zp^k, \text{ since } f(r) \equiv 0 \bmod p^k \\ - -&= (z+tf'(r)) p^k + p^{2k}t^2g(t) - -\end{align} - -For $m \leqslant k,$ we have: - -\begin{align} - -f(s) \equiv 0 \bmod p^{k+m} &\Longleftrightarrow (z + tf'(r))p^k \equiv 0 \bmod p^{k+m} \\ - -&\Longleftrightarrow z + tf'(r) \equiv 0 \bmod p^m \\ - -&\Longleftrightarrow tf'(r) \equiv -z \bmod p^m \\ - -&\Longleftrightarrow t \equiv -z [f'(r)]^{-1} \bmod p^m && p \nmid f'(r) - -\end{align} - -The assumption that $f'(r)$ is not divisible by p ensures that $f'(r)$ has an inverse mod $p^m$ which is necessarily unique. Hence a solution for t exists uniquely modulo $p^m,$ and s exists uniquely modulo $p^{k+m}.$ - -Using the above hypotheses, if we consider an irreducible polynomial -$$ -f(x) = a_0+a_1x + \cdots + a_nx^n \in K[X] -$$ - -over a complete non-archimedean field K, such that $a_0,a_n \neq 0$, then the Gauss norm $|f| = \max_i |a_i|$ satisfies -$$ -|f| = \max\{|a_0|, |a_n|\} -$$ - -In particular, for $f(X) = X^6 + 10X - 1$, we find in $\mathbb{Q}_2[X]$ - -\begin{align} - -|f(X)| &= \max\{|a_0|,\ldots,|a_n|\} \\ - -&= \max\{|-1|_2, |10|_2, |1|_2\} = \max\{1, \tfrac{1}{2}, 1\} = 1 - -\end{align} - -and $\max\{|a_0|, |a_n|\} = 1$ as well, so this necessary condition for irreducibility is satisfied and the polynomial could be irreducible; the two values agree in $\mathbb{Q}_7[X]$ in the same way.
In order to determine irreducibility, the Newton polygon must be employed. - -Note that given an $a \in \mathbb{F}_p$ the Frobenius endomorphism $(-) \mapsto (-)^p$ gives the polynomial $x^p - a$, which always has zero derivative - -\begin{align} - -\frac{d}{dx}x^p - a &= p\cdot x^{p-1} \\ - -&\equiv 0\cdot x^{p-1} \bmod p \\ - -& \equiv 0 \bmod p - -\end{align} - -hence Hensel's lemma cannot be applied to lift p-th roots of $a$ to $\mathbb{Z}_p$. For $a = 1$ this reflects the fact that, for odd p, $\mathbb{Z}_p$ contains no primitive p-th root of unity. - -Although the $p$-th roots of unity (other than 1) are not contained in $\mathbb{F}_p$, there are solutions of $x^p - x = x(x^{p-1} - 1)$. Note that - -\begin{align} - -\frac{d}{dx} (x^p - x) &= px^{p-1} - 1 \\ - -&\equiv -1 \bmod p - -\end{align} - -is never zero, so if there exists a solution, it necessarily lifts to $\mathbb{Z}_p$. Because the Frobenius gives $a^p = a ,$ all of the non-zero elements $\mathbb{F}_p^\times$ are solutions, and they lift to the $(p-1)$-th roots of unity. In fact, for odd p these are the only roots of unity contained in $\mathbb{Q}_p$. - -Using the lemma, one can "lift" a root r of the polynomial f modulo $p^k$ to a new root s modulo $p^{k+1}$ such that $r \equiv s \bmod p^k$ (by taking m = 1; taking larger m follows by induction). In fact, a root modulo $p^{k+1}$ is also a root modulo $p^k$, so the roots modulo $p^{k+1}$ are precisely the liftings of roots modulo $p^k$. The new root s is congruent to r modulo p, so the new root also satisfies $f'(s) \equiv f'(r) \not\equiv 0 \bmod p.$ So the lifting can be repeated, and starting from a solution $r_k$ of $f(x) \equiv 0 \bmod p^k$ we can derive a sequence of solutions $r_{k+1}, r_{k+2}, \ldots$ of the same congruence for successively higher powers of p, provided $f'(r_k) \not\equiv 0 \bmod p$ for the initial root $r_k$. This also shows that f has the same number of roots mod $p^k$ as mod $p^{k+1}$, mod $p^{k+2}$, or any other higher power of p, provided the roots of f mod $p^k$ are all simple. - -What happens to this process if r is not a simple root mod p? Suppose -$$ -f(r) \equiv 0 \bmod p^k \quad \text{and} \quad f'(r) \equiv 0 \bmod p. -$$ - -Then $s \equiv r \bmod p^k $ implies $f(s) \equiv f(r) \bmod p^{k+1}.$ That is, $f(r + tp^k) \equiv f(r)\bmod p^{k+1} $ for all integers t. Therefore, we have two cases: - -*If $ f(r) \not\equiv 0 \bmod p^{k+1} $ then there is no lifting of r to a root of f(x) modulo $p^{k+1}$. - -*If $f(r) \equiv 0 \bmod p^{k+1} $ then every lifting of r to modulus $p^{k+1}$ is a root of f(x) modulo $p^{k+1}$. - -Example. To see both cases we examine two different polynomials with p = 2: -$$ -f(x) = x^2 +1 -$$ and r = 1. Then $f(1)\equiv 0 \bmod 2$ and $f'(1) \equiv 0 \bmod 2.$ We have $f(1) \not\equiv 0 \bmod 4$ which means that no lifting of 1 to modulus 4 is a root of f(x) modulo 4. -$$ -g(x) = x^2 -17 -$$ and r = 1. Then $g(1)\equiv 0 \bmod 2$ and $g'(1) \equiv 0 \bmod 2.$ However, since $g(1) \equiv 0 \bmod 4,$ we can lift our solution to modulus 4 and both lifts (i.e. 1, 3) are solutions. The derivative is still 0 modulo 2, so a priori we don't know whether we can lift them to modulo 8, but in fact we can, since g(1) is 0 mod 8 and g(3) is 0 mod 8, giving solutions at 1, 3, 5, and 7 mod 8. Since of these only g(1) and g(7) are 0 mod 16 we can lift only 1 and 7 to modulo 16, giving 1, 7, 9, and 15 mod 16. Of these, only 7 and 9 give g(x) = 0 mod 32, so these can be raised giving 7, 9, 23, and 25 mod 32. It turns out that for every integer k ≥ 3, there are four liftings of 1 mod 2 to a root of g(x) mod $2^k$.
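The case analysis above is easy to reproduce by brute force. The following Python snippet is a small illustrative check of ours (not part of the original discussion) that lists the roots of $g(x) = x^2 - 17$ modulo $2^k$:

```python
# Brute-force check of the lifting behaviour of g(x) = x^2 - 17 modulo 2^k.
def roots_mod(f, m):
    return [x for x in range(m) if f(x) % m == 0]

g = lambda x: x * x - 17
for k in range(1, 7):
    print(2 ** k, roots_mod(g, 2 ** k))
# 2  [1]
# 4  [1, 3]
# 8  [1, 3, 5, 7]
# 16 [1, 7, 9, 15]
# 32 [7, 9, 23, 25]
# 64 [9, 23, 41, 55]
```

From $k = 3$ onward the count stabilises at four roots, matching the claim above.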
- -In the p-adic numbers, where we can make sense of rational numbers modulo powers of p as long as the denominator is not a multiple of p, the recursion from $r_k$ (roots mod $p^k$) to $r_{k+1}$ (roots mod $p^{k+1}$) can be expressed in a much more intuitive way. Instead of choosing t to be any integer which solves the congruence -$$ -tf'(r_k) \equiv -(f(r_k)/p^{k})\bmod p^m, -$$ - -let t be the rational number (the $p^k$ here is not really a denominator since $f(r_k)$ is divisible by $p^k$): -$$ --(f(r_k)/p^{k})/f'(r_k). -$$ - -Then set -$$ -r_{k+1} = r_k + tp^k = r_k - \frac{f(r_k)}{f'(r_k)}. -$$ - -This fraction may not be an integer, but it is a p-adic integer, and the sequence of numbers $r_k$ converges in the p-adic integers to a root of f(x) = 0. Moreover, the displayed recursive formula for the (new) number $r_{k+1}$ in terms of $r_k$ is precisely Newton's method for finding roots to equations in the real numbers. - -By working directly in the p-adics and using the p-adic absolute value, there is a version of Hensel's lemma which can be applied even if we start with a solution of f(a) ≡ 0 mod p such that $f'(a)\equiv 0 \bmod p.$ We just need to make sure the number $f'(a)$ is not exactly 0. This more general version is as follows: if there is an integer a which satisfies: -$$ -|f(a)|_p < |f'(a)|_p^2, -$$ - -then there is a unique p-adic integer b such that f(b) = 0 and $|b-a|_p <|f'(a)|_p.$ The construction of b amounts to showing that the recursion from Newton's method with initial value a converges in the p-adics and we let b be the limit. The uniqueness of b as a root fitting the condition $|b-a|_p <|f'(a)|_p$ needs additional work. - -The statement of Hensel's lemma given above (taking $m=1$) is a special case of this more general version, since the conditions that f(a) ≡ 0 mod p and $f'(a)\not\equiv 0 \bmod p$ say that $|f(a)|_p < 1$ and $|f'(a)|_p = 1.$ - -Suppose that p is an odd prime and a is a non-zero quadratic residue modulo p. Then Hensel's lemma implies that a has a square root in the ring of p-adic integers $\Z_p.$ Indeed, let $f(x)=x^2-a.$ If r is a square root of a modulo p then: -$$ -f(r) = r^2 - a \equiv 0 \bmod p \quad \text{and} \quad f'(r) = 2r \not\equiv 0 \bmod p, -$$ - -where the second condition is dependent on the fact that p is odd. The basic version of Hensel's lemma tells us that starting from $r_1 = r$ we can recursively construct a sequence of integers $\{r_k\}$ such that: -$$ -r_{k+1} \equiv r_k \bmod p^k, \quad r_k^2 \equiv a \bmod p^k. -$$ - -This sequence converges to some p-adic integer b which satisfies $b^2 = a$. In fact, b is the unique square root of a in $\Z_p$ congruent to $r_1$ modulo p. Conversely, if a is a perfect square in $\Z_p$ and it is not divisible by p then it is a nonzero quadratic residue mod p. Note that the quadratic reciprocity law allows one to easily test whether a is a nonzero quadratic residue mod p, thus we get a practical way to determine which p-adic numbers (for p odd) have a p-adic square root, and it can be extended to cover the case p = 2 using the more general version of Hensel's lemma (an example with 2-adic square roots of 17 is given later). - -To make the discussion above more explicit, let us find a "square root of 2" (the solution to $x^2-2=0$) in the 7-adic integers. Modulo 7 one solution is 3 (we could also take 4), so we set $r_1 = 3$.
Hensel's lemma then allows us to find $r_2$ as follows: - -\begin{align} - -f(r_1) &= 3^2-2=7 \\ - -f(r_1)/p^1 &=7/7=1 \\ - -f'(r_1) &=2r_1=6 - -\end{align} - -Substituting these into the expression -$$ -tf'(r_1) \equiv -(f(r_1)/p^k)\bmod p, -$$ - -we obtain: -$$ -t\cdot 6 \equiv -1\bmod 7 -$$ - -which implies $t = 1.$ Now: -$$ -r_2 = r_1 + tp^1 = 3+1 \cdot 7 = 10 = 13_7. -$$ - -And sure enough, $10^2\equiv 2\bmod 7^2.$ (If we had used the Newton method recursion directly in the 7-adics, then $r_2 = r_1 - f(r_1)/f'(r_1) = 3 - 7/6 = 11/6,$ and $11/6 \equiv 10 \bmod 7^2.$) - -We can continue and find $r_3 = 108 = 3 + 7 + 2\cdot 7^2 = 213_7$. Each time we carry out the calculation (that is, for each successive value of k), one more base 7 digit is added for the next higher power of 7. In the 7-adic integers this sequence converges, and the limit is a square root of 2 in $\Z_7$ which has initial 7-adic expansion -$$ -3 + 7 + 2\cdot7^2 + 6\cdot 7^3 + 7^4 + 2\cdot 7^5 + 7^6 + 2\cdot 7^7 + 4\cdot 7^8 + \cdots. -$$ - -If we started with the initial choice $r_1 = 4$ then Hensel's lemma would produce a square root of 2 in $\Z_7$ which is congruent to 4 (mod 7) instead of 3 (mod 7) and in fact this second square root would be the negative of the first square root (which is consistent with 4 = −3 mod 7). - -As an example where the original version of Hensel's lemma is not valid but the more general one is, let $f(x) = x^2-17$ and $a=1.$ Then $f(a) =-16$ and $f'(a) = 2,$ so -$$ -|f(a)|_2 < |f'(a)|_2^2, -$$ - -which implies there is a unique 2-adic integer b satisfying -$$ -b^2 = 17 \quad \text{and} \quad |b-a|_2 < |f'(a)|_2 = \frac{1}{2}, -$$ - -i.e., b ≡ 1 mod 4. There are two square roots of 17 in the 2-adic integers, differing by a sign, and although they are congruent mod 2 they are not congruent mod 4. This is consistent with the general version of Hensel's lemma only giving us a unique 2-adic square root of 17 that is congruent to 1 mod 4 rather than mod 2. If we had started with the initial approximate root a = 3 then we could apply the more general Hensel's lemma again to find a unique 2-adic square root of 17 which is congruent to 3 mod 4. This is the other 2-adic square root of 17. - -In terms of lifting the roots of $x^2-17$ from modulus $2^k$ to $2^{k+1}$, the lifts starting with the root 1 mod 2 are as follows: - -1 mod 2 --> 1, 3 mod 4 - -1 mod 4 --> 1, 5 mod 8 and 3 mod 4 --> 3, 7 mod 8 - -1 mod 8 --> 1, 9 mod 16 and 7 mod 8 --> 7, 15 mod 16, while 3 mod 8 and 5 mod 8 don't lift to roots mod 16 - -9 mod 16 --> 9, 25 mod 32 and 7 mod 16 --> 7, 23 mod 32, while 1 mod 16 and 15 mod 16 don't lift to roots mod 32. - -For every k at least 3, there are four roots of $x^2 - 17$ mod $2^k$, but if we look at their 2-adic expansions we can see that in pairs they are converging to just two 2-adic limits. For instance, the four roots mod 32 break up into two pairs of roots which each look the same mod 16: - -$9 = 1 + 2^3$ and $25 = 1 + 2^3 + 2^4$. - -$7 = 1 + 2 + 2^2$ and $23 = 1 + 2 + 2^2 + 2^4$. - -The 2-adic square roots of 17 have expansions -$$ -1 + 2^3 +2^5 +2^6 +2^7 +2^9 + 2^{10} + \cdots -$$ -$$ -1 + 2 + 2^2 + 2^4 + 2^8 + 2^{11} + \cdots -$$ - -Another example where we can use the more general version of Hensel's lemma but not the basic version is a proof that any 3-adic integer c ≡ 1 mod 9 is a cube in $\Z_3.$ Let $f(x) =x^3-c$ and take initial approximation a = 1. The basic Hensel's lemma cannot be used to find roots of f(x) since $f'(r)\equiv 0 \bmod 3$ for every r.
To apply the general version of Hensel's lemma we want $|f(1)|_3 <|f'(1)|_3^2,$ which means $c\equiv 1 \bmod 27.$ That is, if c ≡ 1 mod 27 then the general Hensel's lemma tells us f(x) has a 3-adic root, so c is a 3-adic cube. However, we wanted to have this result under the weaker condition that c ≡ 1 mod 9. If c ≡ 1 mod 9 then c ≡ 1, 10, or 19 mod 27. We can apply the general Hensel's lemma three times depending on the value of c mod 27: if c ≡ 1 mod 27 then use a = 1, if c ≡ 10 mod 27 then use a = 4 (since 4 is a root of f(x) mod 27), and if c ≡ 19 mod 27 then use a = 7. (It is not true that every c ≡ 1 mod 3 is a 3-adic cube, e.g., 4 is not a 3-adic cube since it is not a cube mod 9.) - -In a similar way, after some preliminary work, Hensel's lemma can be used to show that for any odd prime number p, any p-adic integer c congruent to 1 modulo $p^2$ is a p-th power in $\Z_p.$ (This is false for p = 2.) - -Suppose A is a commutative ring, complete with respect to an ideal $\mathfrak{m},$ and let $f(x) \in A[x].$ An element a ∈ A is called an "approximate root" of f if -$$ - f(a) \equiv 0 \bmod f'(a)^2 \mathfrak{m}. -$$ - -If f has an approximate root then it has an exact root b ∈ A "close to" a; that is, -$$ -f(b) = 0 \quad \text{and} \quad b \equiv a \bmod{\mathfrak m}. -$$ - -Furthermore, if $f'(a)$ is not a zero-divisor then b is unique. - -This result can be generalized to several variables as follows: - -Theorem. Let A be a commutative ring that is complete with respect to an ideal $\mathfrak{m} \subset A.$ Let $f_1, \ldots, f_n \in A[x_1, \ldots, x_n]$ be a system of n polynomials in n variables over A. View $\mathbf{f} = (f_1, \ldots, f_n)$ as a mapping from $A^n$ to itself, and let $J_{\mathbf{f}}(\mathbf{x})$ denote its Jacobian matrix. Suppose $\mathbf a = (a_1, \ldots, a_n) \in A^n$ is an approximate solution to $\mathbf f = 0$ in the sense that -$$ -f_i(\mathbf{a}) \equiv 0 \bmod (\det J_{\mathbf{f}}(\mathbf a))^2 \mathfrak{m}, \qquad 1 \leqslant i \leqslant n. -$$ - -Then there is some $\mathbf b = (b_1, \ldots, b_n) \in A^n$ satisfying $\mathbf f(\mathbf b) = 0$, i.e., -$$ -f_i(\mathbf{b}) =0, \qquad 1 \leqslant i \leqslant n. -$$ - -Furthermore, this solution is "close" to $\mathbf a$ in the sense that -$$ -b_i \equiv a_i \bmod \det J_{\mathbf{f}}(\mathbf a) \mathfrak{m}, \qquad 1 \leqslant i \leqslant n. -$$ - -As a special case, if $f_i(\mathbf{a}) \equiv 0 \bmod \mathfrak{m}$ for all i and $\det J_{\mathbf{f}}(\mathbf{a})$ is a unit in A then there is a solution to $\mathbf f(\mathbf b) = 0$ with $b_i \equiv a_i \bmod \mathfrak{m}$ for all i. - -When n = 1, $\mathbf a = a$ is an element of A and $J_{\mathbf{f}}(\mathbf{a}) = J_f(a)=f'(a).$ The hypotheses of this multivariable Hensel's lemma reduce to the ones which were stated in the one-variable Hensel's lemma. - -Completeness of a ring is not a necessary condition for the ring to have the Henselian property: Goro Azumaya in 1950 defined a commutative local ring satisfying the Henselian property for the maximal ideal $\mathfrak m$ to be a Henselian ring. - -Masayoshi Nagata proved in the 1950s that for any commutative local ring A with maximal ideal $\mathfrak m$ there always exists a smallest ring $A^h$ containing A such that $A^h$ is Henselian with respect to $\mathfrak m A^h$. This $A^h$ is called the Henselization of A. If A is noetherian, $A^h$ will also be noetherian, and $A^h$ is manifestly algebraic as it is constructed as a limit of étale neighbourhoods. This means that $A^h$ is usually much smaller than the completion $\hat A$ while still retaining the Henselian property and remaining in the same category.
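Returning to the multivariable statement above, its special case (where $f_i(\mathbf a) \equiv 0 \bmod \mathfrak m$ and $\det J_{\mathbf f}(\mathbf a)$ is a unit) can be made concrete over $A = \Z_{11}$ by Newton iteration modulo powers of 11. The following Python sketch is a hypothetical illustration of ours, not taken from the text; the example system and all names are of our choosing. It lifts the approximate solution $(3, 9)$ of $x + y = 1$, $x^2 + y^2 = 2$ from mod 11 to mod $11^6$.

```python
# Illustrative two-variable Hensel/Newton lifting modulo p^k (assumed example).
p, k = 11, 6
M = p ** k
x, y = 3, 9           # f1 = x+y-1 and f2 = x^2+y^2-2 vanish mod 11; det J = 12

for _ in range(k):    # each Newton step at least doubles the p-adic accuracy
    f1 = (x + y - 1) % M
    f2 = (x * x + y * y - 2) % M
    det = (2 * y - 2 * x) % M          # det of J = [[1, 1], [2x, 2y]]
    inv = pow(det, -1, M)              # a unit mod 11, hence a unit mod 11^6
    dx = inv * (2 * y * f1 - f2) % M   # (dx, dy) = J^{-1}(f1, f2) via the adjugate
    dy = inv * (f2 - 2 * x * f1) % M
    x, y = (x - dx) % M, (y - dy) % M

assert (x + y - 1) % M == 0 and (x * x + y * y - 2) % M == 0
print(x, y)   # a solution pair modulo 11^6, congruent to (3, 9) mod 11
```

Three iterations already suffice here; the extra loop passes are harmless, since an exact root is a fixed point of the Newton step.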
diff --git a/wiki/wikipedia/2625.txt b/wiki/wikipedia/2625.txt deleted file mode 100644 index c18cd4429a80e130024c11182414511f05d07084..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2625.txt +++ /dev/null @@ -1,113 +0,0 @@ -In mathematics, the Gershgorin circle theorem may be used to bound the spectrum of a square matrix. It was first published by the Soviet mathematician Semyon Aronovich Gershgorin in 1931. Gershgorin's name has been transliterated in several different ways, including Geršgorin, Gerschgorin, Gershgorin, Hershhorn, and Hirschhorn. - -Let $A$ be a complex $n\times n$ matrix, with entries $a_{ij}$. For $i \in\{1,\dots,n\}$ let $R_i$ be the sum of the absolute values of the non-diagonal entries in the $i$-th row: -$$ - R_i= \sum_{j\neq{i}} \left|a_{ij}\right|. -$$ - -Let $D(a_{ii}, R_i) \subseteq \Complex$ be a closed disc centered at $a_{ii}$ with radius $R_i$. Such a disc is called a Gershgorin disc. - -Theorem. Every eigenvalue of $ A $ lies within at least one of the Gershgorin discs $D(a_{ii},R_i).$ - -Proof. Let $\lambda$ be an eigenvalue of $A$. Choose a corresponding eigenvector $x = (x_j)$ so that one component $x_i$ is equal to $1$ and the others are of absolute value less than or equal to $1$: $x_i = 1$ and $|x_j|\le 1$ for $j \ne i$. There is always such an $x$, which can be obtained simply by dividing any eigenvector by its component with largest modulus. Since $Ax=\lambda x$, in particular -$$ - \sum_j a_{ij} x_j = \lambda x_i = \lambda. -$$ - -So, splitting the sum and taking into account once again that $x_i = 1$, we get -$$ - \sum_{j \ne i} a_{ij} x_j + a_{ii}= \lambda. -$$ - -Therefore, applying the triangle inequality, - - \left|\lambda - a_{ii}\right| - -= \left|\sum_{j \ne i} a_{ij} x_j\right| - -\le \sum_{j \ne i} \left|a_{ij}\right| \left|x_j\right| - -\le \sum_{j \ne i} \left|a_{ij}\right| = R_i. - -Corollary. The eigenvalues of A must also lie within the Gershgorin discs Cj corresponding to the columns of A. - -Proof. Apply the Theorem to AT. - -Example. For a diagonal matrix, the Gershgorin discs coincide with the spectrum. Conversely, if the Gershgorin discs coincide with the spectrum, the matrix is diagonal. - -One way to interpret this theorem is that if the off-diagonal entries of a square matrix over the complex numbers have small norms, the eigenvalues of the matrix cannot be "far from" the diagonal entries of the matrix. Therefore, by reducing the norms of off-diagonal entries one can attempt to approximate the eigenvalues of the matrix. Of course, diagonal entries may change in the process of minimizing off-diagonal entries. - -The theorem does not claim that there is one disc for each eigenvalue; if anything, the discs rather correspond to the axes in $\mathbb{C}^n$, and each expresses a bound on precisely those eigenvalues whose eigenspaces are closest to one particular axis. 
In the matrix - - \begin{pmatrix} 3 & 2 & 2 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix} - -\begin{pmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{pmatrix} - -\begin{pmatrix} 3 & 2 & 2 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}^{-1} - -= \begin{pmatrix} -3a+2b+2c & 6a-2b-4c & 6a-4b-2c \\ b-a & a+(a-b) & 2(a-b) \\ c-a & 2(a-c) & a+(a-c) \end{pmatrix} - -— which by construction has eigenvalues $a$, $b$, and $c$ with eigenvectors $ \left(\begin{smallmatrix} 3 \\ 1 \\ 1 \end{smallmatrix}\right) $, $ \left(\begin{smallmatrix} 2 \\ 1 \\ 0 \end{smallmatrix}\right) $, and $ \left(\begin{smallmatrix} 2 \\ 0 \\ 1 \end{smallmatrix}\right) $ — it is easy to see that the disc for row 2 covers $a$ and $b$ while the disc for row 3 covers $a$ and $c$. This is however just a happy coincidence; working through the steps of the proof, one finds that in each eigenvector it is the first element that is the largest (every eigenspace is closer to the first axis than to any other axis), so the theorem only promises that the disc for row 1 (whose radius can be twice the sum of the other two radii) covers all three eigenvalues. - -If one of the discs is disjoint from the others then it contains exactly one eigenvalue. If however it meets another disc it is possible that it contains no eigenvalue (for example, $ A = \left(\begin{smallmatrix}0&1\\4&0\end{smallmatrix}\right) $ or $ A = \left(\begin{smallmatrix}1&-2\\1&-1\end{smallmatrix}\right) $). In the general case the theorem can be strengthened as follows: - -Theorem: If the union of k discs is disjoint from the union of the other n − k discs then the former union contains exactly k and the latter n − k eigenvalues of A. - -Proof: Let D be the diagonal matrix with entries equal to the diagonal entries of A and let -$$ -B(t) = (1-t) D + t A. -$$ - -We will use the fact that the eigenvalues are continuous in $t$, and show that if any eigenvalue moves from one of the unions to the other, then it must be outside all the discs for some $t$, which is a contradiction. - -The statement is true for $D = B(0)$. The diagonal entries of $B(t)$ are equal to those of A, thus the centers of the Gershgorin circles are the same; however, their radii are t times those of A. Therefore, the union of the corresponding k discs of $B(t)$ is disjoint from the union of the remaining n-k for all $t \in [0,1] $. The discs are closed, so the distance of the two unions for A is $d>0$. The distance for $B(t)$ is a decreasing function of t, so it is always at least d. Since the eigenvalues of $B(t)$ are a continuous function of t, for any eigenvalue $\lambda(t)$ of $B(t)$ in the union of the k discs its distance $d(t)$ from the union of the other n-k discs is also continuous. Obviously $d(0)\ge d$, and assume $\lambda(1)$ lies in the union of the n-k discs. Then $d(1)=0$, so there exists $0 < t_0 <1 $ such that $0 < d(t_0) < d$. But this means $\lambda(t_0)$ lies outside the Gershgorin discs, which is impossible. Therefore $\lambda(1)$ lies in the union of the k discs, and the theorem is proven. - -Remarks: - -* The continuity of $\lambda(t)$ should be understood in the sense of topology. It is sufficient to show that the roots (as a point in the space $\mathbb{C}^n$) are a continuous function of the coefficients. Note that the inverse map that sends roots to coefficients is described by Vieta's formulas (note that for a characteristic polynomial $a_n\equiv1$), which can be proved to be an open map. This proves that the roots, as a whole, are a continuous function of the coefficients.
Since the composition of continuous functions is again continuous, $\lambda(t)$, being the composition of the root map with $t \mapsto B(t)$, is also continuous. - -* An individual eigenvalue $\lambda(t)$ could merge with other eigenvalues, or appear from the splitting of a previous eigenvalue. This may seem to conflict with the notion of continuity. However, when viewed in the space of eigenvalue sets $\mathbb{C}^n$, the trajectory is still a continuous curve although not necessarily smooth everywhere. - -Added Remark: - -* The proof given above is arguably incomplete. There are two types of continuity concerning eigenvalues: (1) each individual eigenvalue is a usual continuous function (such a representation does exist on a real interval but may not exist on a complex domain), (2) eigenvalues are continuous as a whole in the topological sense (a mapping from the matrix space with metric induced by a norm to unordered tuples, i.e., the quotient space of $\mathbb{C}^n$ under permutation equivalence with induced metric). Whichever continuity is used in a proof of the Gerschgorin disk theorem, it should be justified that the sum of algebraic multiplicities of eigenvalues remains unchanged on each connected region. A proof using the argument principle of complex analysis requires no eigenvalue continuity of any kind. - -The Gershgorin circle theorem is useful in solving matrix equations of the form Ax = b for x where b is a vector and A is a matrix with a large condition number. - -In this kind of problem, the error in the final result is usually of the same order of magnitude as the error in the initial data multiplied by the condition number of A. For instance, if b is known to six decimal places and the condition number of A is 1000 then we can only be confident that x is accurate to three decimal places. For very high condition numbers, even very small errors due to rounding can be magnified to such an extent that the result is meaningless. - -It would be good to reduce the condition number of A. This can be done by preconditioning: a matrix P such that P ≈ $A^{-1}$ is constructed, and then the equation PAx = Pb is solved for x. Using the exact inverse of A would be nice but finding the inverse of a matrix is something we want to avoid because of the computational expense. - -Now, since PA ≈ I where I is the identity matrix, the eigenvalues of PA should all be close to 1. By the Gershgorin circle theorem, every eigenvalue of PA lies within a known area and so we can form a rough estimate of how good our choice of P was. - -Use the Gershgorin circle theorem to estimate the eigenvalues of: - -(Figure: the Gershgorin discs for this example, shown in yellow. The first two discs overlap and their union contains two eigenvalues; the third and fourth discs are disjoint from the others and contain one eigenvalue each.) - - A = \begin{bmatrix} - -10 & -1 & 0 & 1\\ - -0.2 & 8 & 0.2 & 0.2\\ - -1 & 1 & 2 & 1\\ - --1 & -1 & -1 & -11\\ - -\end{bmatrix}. - -Starting with row one, we take the element on the diagonal, $a_{ii}$, as the center for the disc. We then take the remaining elements in the row and apply the formula: -$$ - \sum_{j\ne i} |a_{ij}| = R_i -$$ - -to obtain the following four discs: -$$ - D(10,2), D(8,0.6), D(2,3), \text{ and } D(-11,3). -$$ - -Note that we can improve the accuracy of the last two discs by applying the formula to the corresponding columns of the matrix, obtaining $ D(2,1.2) $ and $ D(-11,2.2) $.
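These computations are straightforward to reproduce numerically. The following numpy sketch is an illustration of ours (not part of the original example); it computes the row and column discs and the true spectrum:

```python
# Gershgorin discs (rows and columns) for the example matrix, plus eigenvalues.
import numpy as np

A = np.array([[10.0, -1.0, 0.0, 1.0],
              [0.2, 8.0, 0.2, 0.2],
              [1.0, 1.0, 2.0, 1.0],
              [-1.0, -1.0, -1.0, -11.0]])

centers = np.diag(A)
row_radii = np.abs(A).sum(axis=1) - np.abs(centers)   # the radii R_i from the rows
col_radii = np.abs(A).sum(axis=0) - np.abs(centers)   # the column variant

print(list(zip(centers, row_radii)))   # D(10,2), D(8,0.6), D(2,3), D(-11,3)
print(list(zip(centers, col_radii)))   # includes D(2,1.2) and D(-11,2.2)
print(np.linalg.eigvals(A))            # approx. 9.8218, 8.1478, 1.8995, -10.86
```

Taking, for each disc, the smaller of the row and column radius gives the sharpest bounds available from the theorem and its corollary.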
- -The eigenvalues are 9.8218, 8.1478, 1.8995, −10.86. Note that this is a (column) diagonally dominant matrix: $|a_{ii}| > \sum_{j\neq i} |a_{ji}|$. This means that most of the weight of the matrix is concentrated on the diagonal, which explains why the eigenvalues are so close to the centers of the circles, and the estimates are very good. For a random matrix, we would expect the eigenvalues to be substantially further from the centers of the circles. diff --git a/wiki/wikipedia/2626.txt b/wiki/wikipedia/2626.txt deleted file mode 100644 index 930726baa6a185df797694037877bb1edb7bea33..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2626.txt +++ /dev/null @@ -1,9 +0,0 @@ -In combinatorial mathematics, Baranyai's theorem (proved by and named after Zsolt Baranyai) deals with the decompositions of complete hypergraphs. - -The statement of the result is that if $2\le r0, m(B)>0 $, and equation (), then there is a continuous surjective homomorphism $ \phi: G \to \mathbb T $ and there are closed intervals $ I $, $ J $ in $ \mathbb T $ such that $A \subseteq \phi^{-1}(I)$, $B \subseteq \phi^{-1}(J)$, $m(A) = m (\phi^{-1}(I))$, and $ m(B) = m(\phi^{-1}(J))$. diff --git a/wiki/wikipedia/2629.txt b/wiki/wikipedia/2629.txt deleted file mode 100644 index ab8a3d7ce814fcef6ad85cc3aea088bc1445aad5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2629.txt +++ /dev/null @@ -1,7 +0,0 @@ -In quantum mechanics, the spectral gap of a system is the energy difference between its ground state and its first excited state. The mass gap is the spectral gap between the vacuum and the lightest particle. A Hamiltonian with a spectral gap is called a gapped Hamiltonian, and those without one are called gapless. - -In solid-state physics, the most important spectral gap is for the many-body system of electrons in a solid material, in which case it is often known as an energy gap. - -In quantum many-body systems, ground states of gapped Hamiltonians have exponential decay of correlations. - -In 2015 it was shown that the problem of determining the existence of a spectral gap is undecidable in two or more dimensions. The authors used an aperiodic tiling of quantum Turing machines and showed that this hypothetical material becomes gapped if and only if the machine halts. The one-dimensional case was also proven undecidable in 2020 by constructing a chain of interacting qutrits divided into blocks that gain energy if and only if they represent a full computation by a Turing machine, and showing that this system becomes gapped if and only if the machine does not halt. diff --git a/wiki/wikipedia/263.txt b/wiki/wikipedia/263.txt deleted file mode 100644 index f88ffecac4ad1424cd34b1b70110ae20afaef5a5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/263.txt +++ /dev/null @@ -1,19 +0,0 @@ -Beggar-my-neighbour, also known as Strip Jack naked, Beat your neighbour out of doors, or Beat Jack out of doors, or Beat Your Neighbour is a simple card game. It is somewhat similar in nature to the children's card game War, and has spawned a more complicated variant, Egyptian Ratscrew. - -The game was probably invented in Great Britain and has been known there since at least the 1840s. - -It may be the same as Beat the Knave out of Doors or Knave out o' Doors, in which case it is much older, as this game is mentioned as early as 1755. - -It appears in Charles Dickens's 1861 novel Great Expectations, as the only card game Pip, the book's protagonist, seems to know how to play as a child.
- -A standard 52-card deck is divided equally between two players, and the two stacks of cards are placed on the table face down. The first player lays down their top card face up to start a central pile, and the opponent plays their top card, also face up, on it, and this goes on alternately as long as no Ace or court card (King, Queen, or Jack) appears. These cards are called "penalty cards". - -If either player turns up such a card, their opponent has to pay a penalty: four cards for an Ace, three for a King, two for a Queen, or one for a Jack. They do this by playing the required number of cards to the central pile. When they have done so, if all the cards are numerals, the player of the penalty card wins the hand, takes all the cards in the pile and places them under their pack. The game continues in the same fashion, the winner having the advantage of placing the first card. However, if the second player turns up another Ace or court card in the course of paying to the original penalty card, their payment ceases and the first player must pay to this new card. This changing of penalisation can continue indefinitely. When a single player has all of the cards in the deck in their stack, they have won. - -For more than two players, play proceeds clockwise. If a player reveals a new penalty card while paying their penalty, the next player around pays the tax. - -A longstanding question in combinatorial game theory asks whether there is a game of beggar-my-neighbour that goes on forever. This can happen only if the game is eventually periodic—that is, if it eventually reaches some state it has been in before. Some smaller decks of cards have infinite games, while others do not. John Conway once listed this among his anti-Hilbert problems, open questions whose pursuit should emphatically not drive the future of mathematical research. - -The search for a non-terminating game has resulted in "longest known games" of increasing length. diff --git a/wiki/wikipedia/2630.txt b/wiki/wikipedia/2630.txt deleted file mode 100644 index 185ff565f8850f2769468a948ca30ead88376377..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2630.txt +++ /dev/null @@ -1,121 +0,0 @@ -In combinatorial mathematics, Ramsey's theorem, in one of its graph-theoretic forms, states that one will find monochromatic cliques in any edge labelling (with colours) of a sufficiently large complete graph. To demonstrate the theorem for two colours (say, blue and red), let r and s be any two positive integers. Ramsey's theorem states that there exists a least positive integer R(r, s) for which every blue-red edge colouring of the complete graph on R(r, s) vertices contains a blue clique on r vertices or a red clique on s vertices. (Here R(r, s) signifies an integer that depends on both r and s.) - -Ramsey's theorem is a foundational result in combinatorics. The first version of this result was proved by F. P. Ramsey. This initiated the combinatorial theory now called Ramsey theory, that seeks regularity amid disorder: general conditions for the existence of substructures with regular properties. In this application it is a question of the existence of monochromatic subsets, that is, subsets of connected edges of just one colour. - -An extension of this theorem applies to any finite number of colours, rather than just two.
More precisely, the theorem states that for any given number of colours, c, and any given integers $n_1, \ldots, n_c$, there is a number, $R(n_1, \ldots, n_c)$, such that if the edges of a complete graph of order $R(n_1, \ldots, n_c)$ are coloured with c different colours, then for some i between 1 and c, it must contain a complete subgraph of order $n_i$ whose edges are all colour i. The special case above has c = 2 (and $n_1 = r$ and $n_2 = s$). - -Suppose the edges of a complete graph on 6 vertices are coloured red and blue. Pick a vertex, v. There are 5 edges incident to v and so (by the pigeonhole principle) at least 3 of them must be the same colour. Without loss of generality we can assume at least 3 of these edges, connecting the vertex, v, to vertices, r, s and t, are blue. (If not, exchange red and blue in what follows.) If any of the edges, (r, s), (r, t), (s, t), are also blue then we have an entirely blue triangle. If not, then those three edges are all red and we have an entirely red triangle. Since this argument works for any colouring, any $K_6$ contains a monochromatic $K_3$, and therefore R(3, 3) ≤ 6. The popular version of this is called the theorem on friends and strangers. - -An alternative proof works by double counting. It goes as follows: Count the number of ordered triples of vertices, x, y, z, such that the edge, (xy), is red and the edge, (yz), is blue. Firstly, any given vertex will be the middle of either 0 × 5 = 0 (all edges from the vertex are the same colour), 1 × 4 = 4 (four are the same colour, one is the other colour), or 2 × 3 = 6 (three are the same colour, two are the other colour) such triples. Therefore, there are at most 6 × 6 = 36 such triples. Secondly, for any non-monochromatic triangle (xyz), there exist precisely two such triples. Therefore, there are at most 18 non-monochromatic triangles. Therefore, at least 2 of the 20 triangles in the $K_6$ are monochromatic. - -Conversely, it is possible to 2-colour a $K_5$ without creating any monochromatic $K_3$, showing that R(3, 3) > 5. In the unique such colouring, each colour class forms a 5-cycle (a pentagon and its inscribed pentagram). Thus R(3, 3) = 6. - -The task of proving that R(3, 3) ≤ 6 was one of the problems of William Lowell Putnam Mathematical Competition in 1953, as well as in the Hungarian Math Olympiad in 1947. - -A multicolour Ramsey number is a Ramsey number using 3 or more colours. There are (up to symmetries) only two non-trivial multicolour Ramsey numbers for which the exact value is known, namely R(3, 3, 3) = 17 and R(3, 3, 4) = 30. Upper bounds are often considerably more difficult to establish: one either has to check all possible colourings to confirm the absence of a counterexample, or to present a mathematical argument for its absence. - -A sophisticated computer program does not need to look at all colourings individually in order to eliminate all of them; nevertheless it is a very difficult computational task that existing software can only manage on small sizes. Each complete graph $K_n$ has $\tfrac{1}{2}n(n-1)$ edges, so there would be a total of $c^{n(n-1)/2}$ colourings to search through (for c colours) if brute force is used. Therefore, the complexity for searching all possible colourings (via brute force) is $O\left(c^{n^2}\right)$ for c colourings and at most n nodes. - -The situation is unlikely to improve with the advent of quantum computers. The best known algorithm exhibits only a quadratic speedup (cf. Grover's algorithm) relative to classical computers, so that the computation time is still exponential in the number of nodes. - -As described above, R(3, 3) = 6.
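For the smallest case this brute-force search is actually feasible. The following Python sketch (illustrative, with function names of our choosing) verifies by exhaustion that every 2-colouring of $K_6$ contains a monochromatic triangle while $K_5$ admits one that does not:

```python
# Exhaustive check that R(3,3) = 6: test all 2-colourings of K_5 and K_6.
from itertools import combinations, product

def always_mono_triangle(n):
    edges = list(combinations(range(n), 2))
    for colouring in product((0, 1), repeat=len(edges)):
        col = dict(zip(edges, colouring))
        if not any(col[(a, b)] == col[(a, c)] == col[(b, c)]
                   for a, b, c in combinations(range(n), 3)):
            return False          # found a colouring with no monochromatic K_3
    return True

print(always_mono_triangle(5))    # False: the pentagon/pentagram colouring exists
print(always_mono_triangle(6))    # True: all 2^15 colourings contain one
```

This checks $2^{10}$ colourings for $K_5$ and $2^{15}$ for $K_6$, in line with the $c^{n(n-1)/2}$ count above; the same approach is already hopeless for, say, three colours on 17 nodes.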
It is easy to prove that R(4, 2) = 4, and, more generally, that R(s, 2) = s for all s: a graph on s − 1 nodes with all edges coloured red serves as a counterexample and proves that R(s, 2) ≥ s; among colourings of a graph on s nodes, the colouring with all edges coloured red contains an s-node red subgraph, and all other colourings contain a 2-node blue subgraph (that is, a pair of nodes connected with a blue edge). - -Using induction inequalities, it can be concluded that R(4, 3) ≤ R(4, 2) + R(3, 3) − 1 = 9, and therefore R(4, 4) ≤ R(4, 3) + R(3, 4) ≤ 18. There are only two (4, 4, 16) graphs (that is, 2-colourings of a complete graph on 16 nodes without 4-node red or blue complete subgraphs) among $6.4 \times 10^{22}$ different 2-colourings of 16-node graphs, and only one (4, 4, 17) graph (the Paley graph of order 17) among $2.46 \times 10^{26}$ colourings. - -The exact value of R(5, 5) is unknown, although it is known to lie between 43 (Geoffrey Exoo (1989)) and 48 (Angeltveit and McKay (2017)) (inclusive). - -In 1997, McKay, Radziszowski and Exoo employed computer-assisted graph generation methods to conjecture that R(5, 5) = 43. They were able to construct exactly 656 (5, 5, 42) graphs, arriving at the same set of graphs through different routes. None of the 656 graphs can be extended to a (5, 5, 43) graph. - -For R(r, s) with r, s > 5, only weak bounds are available. Lower bounds for R(6, 6) and R(8, 8) have not been improved since 1965 and 1972, respectively. Where not cited otherwise, entries in the table below are taken from the January 2021 edition. (Note there is a trivial symmetry across the diagonal since R(r, s) = R(s, r).) - -The inequality R(r, s) ≤ R(r − 1, s) + R(r, s − 1) may be applied inductively to prove that -$$ -R(r, s) \leq \binom{r + s - 2}{r - 1}. -$$ - -In particular, this result, due to Erdős and Szekeres, implies that when r = s, -$$ -R(s, s) \leq [1 + o(1)]\frac{4^{s - 1}}{\sqrt{\pi s}}. -$$ - -An exponential lower bound, -$$ -R(s, s) \geq [1 + o(1)] \frac{s}{\sqrt{2} e} 2^{s/2}, -$$ - -was given by Erdős in 1947 and was instrumental in his introduction of the probabilistic method. There is obviously a huge gap between these two bounds: for example, for s = 10, this gives 101 ≤ R(10, 10) ≤ 48620. Nevertheless, exponential growth factors of either bound have not been improved to date and still stand at 4 and $\sqrt{2}$ respectively. There is no known explicit construction producing an exponential lower bound. The best known lower and upper bounds for diagonal Ramsey numbers currently stand at -$$ -[1 + o(1)] \frac{\sqrt{2} s}{e} 2^{\frac{s}{2}} \leq R(s, s) \leq s^{-(c \log s)/(\log \log s)} 4^s, -$$ - -due to Spencer and Conlon respectively. - -For the off-diagonal Ramsey numbers R(3, t), it is known that they are of order $\tfrac{t^2}{\log t}$; this may be stated equivalently as saying that the smallest possible independence number in an n-vertex triangle-free graph is -$$ -\Theta \left (\sqrt{n\log n} \right ). -$$ - -The upper bound for R(3, t) is given by Ajtai, Komlós, and Szemerédi, the lower bound was obtained originally by Kim, and was improved by Griffiths, Morris, Fiz Pontiveros, and Bohman and Keevash, by analysing the triangle-free process. More generally, for off-diagonal Ramsey numbers, R(s, t), with s fixed and t growing, the best known bounds are -$$ -c'_s \frac{t^{\frac{s + 1}{2}}}{(\log t)^{\frac{s + 1}{2} - \frac{1}{s - 2}}} \leq R(s, t) \leq c_s \frac{t^{s - 1}}{(\log t)^{s - 2}}, -$$ - -due to Bohman and Keevash and Ajtai, Komlós and Szemerédi respectively.
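The Erdős–Szekeres bound is easy to evaluate for small cases, which gives a feel for how loose it becomes; the short sketch below (ours, purely illustrative) compares it with the known values quoted above:

```python
# Evaluate the Erdos-Szekeres upper bound R(s,s) <= C(2s-2, s-1) for small s.
from math import comb

for s in range(3, 6):
    print(s, comb(2 * s - 2, s - 1))
# s=3: 6   (tight, since R(3,3) = 6)
# s=4: 20  (the true value is 18)
# s=5: 70  (the true value lies between 43 and 48)
```

Already at s = 5 the binomial bound noticeably overshoots the best known upper bound of 48, consistent with the gap discussed above.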
- -There is a less well-known yet interesting analogue of Ramsey's theorem for induced subgraphs. Roughly speaking, instead of finding a monochromatic subgraph, we are now required to find a monochromatic induced subgraph. In this variant, it is no longer sufficient to restrict our focus to complete graphs, since the existence of a complete subgraph does not imply the existence of an induced subgraph. The qualitative statement of the theorem in the next section was first proven independently by Erdős, Hajnal and Pósa, Deuber and Rödl in the 1970s. Since then, there has been much research into obtaining good bounds for induced Ramsey numbers. - -Let $H$ be a graph on $n$ vertices. Then, there exists a graph $G$ such that any coloring of the edges of $G$ using two colors contains a monochromatic induced copy of $H$ (i.e. an induced subgraph of $G$ such that it is isomorphic to $H$ and its edges are monochromatic). The smallest possible number of vertices of $G$ is the induced Ramsey number $r_{\text{ind}}(H)$. - -Sometimes, we also consider the asymmetric version of the problem. We define $r_{\text{ind}}(X,Y)$ to be the smallest possible number of vertices of a graph $G$ such that every coloring of the edges of $G$ using only red or blue contains a red induced subgraph of $X$ or blue induced subgraph of $Y$. - -Similar to Ramsey's Theorem, it is unclear a priori whether induced Ramsey numbers exist for every graph $H$. In the early 1970s, Erdős, Hajnal and Pósa, Deuber and Rödl independently proved that this is the case. In fact, they showed that every $(n,d,\lambda)$-graph $G$ with small $\lambda$ and suitable $d$ contains an induced monochromatic copy of any graph on $k$ vertices in any coloring of edges of $G$ in two colors. In particular, for some constant $c$, the Paley graph on $n \ge 2^{ck\log^2 k}$ vertices is such that all of its edge colorings in two colors contain an induced monochromatic copy of every $k$-vertex graph. - -In 2010, Conlon, Fox and Sudakov were able to improve the bound to $r_{\text{ind}}(H) \le 2^{ck\log k}$, which remains the current best upper bound for general induced Ramsey numbers. Similar to the previous work in 2008, they showed that every $(n,d,\lambda)$-graph $G$ with small $\lambda$ and edge density $\frac{1}{2}$ contains an induced monochromatic copy of every graph on $k$ vertices in any edge coloring in two colors. Currently, Erdős's conjecture that $r_{\text{ind}}(H) \le 2^{ck}$ remains open and is one of the important problems in extremal graph theory. - -For lower bounds, not much is known in general except for the fact that induced Ramsey numbers must be at least the corresponding Ramsey numbers. Some lower bounds have been obtained for some special cases (see Special Cases). - -While the general bounds for the induced Ramsey numbers are exponential in the size of the graph, the behaviour is much different on special classes of graphs (in particular, sparse ones). Many of these classes have induced Ramsey numbers polynomial in the number of vertices. - -If $H$ is a cycle, path or star on $k$ vertices, it is known that $r_{\text{ind}}(H)$ is linear in $k$. - -If $H$ is a tree on $k$ vertices, it is known that $r_{\text{ind}}(H) = O(k^2\log^{2}k)$. It is also known that $r_{\text{ind}}(H)$ is superlinear (i.e. $r_{\text{ind}}(H) = \omega(k)$). Note that this is in contrast to the usual Ramsey numbers, where the Burr–Erdős conjecture (now proven) tells us that $r(H)$ is linear (since trees are 1-degenerate).
- -For graphs $H$ on $k$ vertices with bounded degree $\Delta$, it was conjectured that $r_{\text{ind}}(H) \le ck^{d(\Delta)}$, for some constant $d(\Delta)$ depending only on $\Delta$. The conjecture was first proven by Łuczak and Rödl in 1996, with $d(\Delta)$ growing as a tower of twos of height $O(\Delta^2)$. More reasonable bounds for $d(\Delta)$ have been obtained since then. In 2013, Conlon, Fox and Zhao showed, using a counting lemma for sparse pseudorandom graphs, that $r_{\text{ind}}(H) \le ck^{2\Delta + 8}$, where the exponent is best possible up to constant factors. - -Similar to Ramsey numbers, we can generalize the notion of induced Ramsey numbers to hypergraphs and multicolor settings. - -We can also generalize the induced Ramsey's theorem to a multicolor setting. For graphs $H_1,H_2,\cdots,H_r$, define $r_{\text{ind}}(H_1,H_2,\cdots,H_r)$ to be the minimum number of vertices in a graph $G$ such that any coloring of the edges of $G$ into $r$ colors contains an induced subgraph isomorphic to $H_i$ where all edges are colored in the $i$-th color for some $1 \le i \le r$. Let $r_{\text{ind}}(H;q) := r_{\text{ind}}(H,H,\cdots,H)$ ($q$ copies of $H$). - -It is possible to derive a bound on $r_{\text{ind}}(H;q)$ which is approximately a tower of twos of height $\sim \log q$ by iteratively applying the bound on the two-color case. The current best known bound is due to Fox and Sudakov, which achieves $r_{\text{ind}}(H;q) \le 2^{ck^3}$, where $k$ is the number of vertices of $H$ and $c$ is a constant depending only on $q$. - -We can extend the definition of induced Ramsey numbers to $d$-uniform hypergraphs by simply changing the word graph in the statement to hypergraph. Furthermore, we can define the multicolor version of induced Ramsey numbers in the same way as the previous subsection. - -Let $H$ be a $d$-uniform hypergraph with $k$ vertices. Define the tower function $t_r(x)$ by letting $t_1(x) = x$ and for $i \ge 1$, $t_{i+1}(x) = 2^{t_{i}(x)}$. Using the hypergraph container method, Conlon, Dellamonica, La Fleur, Rödl and Schacht were able to show that for $d \ge 3, q \ge 2$, $r_{\text{ind}}(H;q) \le t_d(ck)$ for some constant $c$ depending only on $d$ and $q$. In particular, this result mirrors the best known bound for the usual Ramsey number when $d=3$. - -A further result, also commonly called Ramsey's theorem, applies to infinite graphs. In a context where finite graphs are also being discussed it is often called the "Infinite Ramsey theorem". As intuition provided by the pictorial representation of a graph is diminished when moving from finite to infinite graphs, theorems in this area are usually phrased in set-theoretic terminology. - -Theorem. Let X be some infinite set and colour the elements of $X^{(n)}$ (the subsets of X of size n) in c different colours. Then there exists some infinite subset M of X such that the size n subsets of M all have the same colour. - -Proof: The proof is by induction on n, the size of the subsets. For n = 1, the statement is equivalent to saying that if you split an infinite set into a finite number of sets, then one of them is infinite. This is evident. Assuming the theorem is true for n ≤ r, we prove it for n = r + 1. Given a c-colouring of the (r + 1)-element subsets of X, let a0 be an element of X and let Y = X \ {a0}. We then induce a c-colouring of the r-element subsets of Y, by just adding a0 to each r-element subset (to get an (r + 1)-element subset of X). 
By the induction hypothesis, there exists an infinite subset Y1 of Y such that every r-element subset of Y1 is coloured the same colour in the induced colouring. Thus there is an element a0 and an infinite subset Y1 such that all the (r + 1)-element subsets of X consisting of a0 and r elements of Y1 have the same colour. By the same argument, there is an element a1 in Y1 and an infinite subset Y2 of Y1 with the same properties. Inductively, we obtain a sequence {a0, a1, a2, ...} such that the colour of each (r + 1)-element subset $(a_{i(1)}, a_{i(2)}, \ldots, a_{i(r+1)})$ with $i(1) < i(2) < \cdots < i(r+1)$ depends only on the value of i(1). Further, there are infinitely many values of i(n) such that this colour will be the same. Take these $a_{i(n)}$'s to get the desired monochromatic set. - -A stronger but unbalanced infinite form of Ramsey's theorem for graphs, the Erdős–Dushnik–Miller theorem, states that every infinite graph contains either a countably infinite independent set, or an infinite clique of the same cardinality as the original graph. - -It is possible to deduce the finite Ramsey theorem from the infinite version by a proof by contradiction. Suppose the finite Ramsey theorem is false. Then there exist integers c, n, T such that for every integer k, there exists a c-colouring of $[k]^{(n)}$ without a monochromatic set of size T. Let $C_k$ denote the c-colourings of $[k]^{(n)}$ without a monochromatic set of size T. - -For any k, the restriction of a colouring in $C_{k+1}$ to $[k]^{(n)}$ (by ignoring the colour of all sets containing k + 1) is a colouring in $C_k$. Define $C^1_k$ to be the colourings in $C_k$ which are restrictions of colourings in $C_{k+1}$. Since $C_{k+1}$ is not empty, neither is $C^{1}_k$. - -Similarly, the restriction of any colouring in $C^1_{k+1}$ is in $C^1_k$, allowing one to define $C^2_k$ as the set of all such restrictions, a non-empty set. Continuing in this way, define $C^{m}_k$ for all integers m, k. - -Now, for any integer k, $C_k\supseteq C^1_k\supseteq C^2_k\supseteq \cdots$, and each set is non-empty. Furthermore, $C_k$ is finite as $|C_k|\le c^{\frac{k!}{n!(k-n)!}}$. It follows that the intersection of all of these sets is non-empty; let $D_k=C_k\cap C^1_k\cap C^2_k\cap \cdots$. Then every colouring in $D_k$ is the restriction of a colouring in $D_{k+1}$. Therefore, by unrestricting a colouring in $D_k$ to a colouring in $D_{k+1}$, and continuing in this way, one constructs a colouring of $\mathbb N^{(n)}$ without any monochromatic set of size T. This contradicts the infinite Ramsey theorem. - -If a suitable topological viewpoint is taken, this argument becomes a standard compactness argument showing that the infinite version of the theorem implies the finite version. - -The theorem can also be extended to hypergraphs. An m-hypergraph is a graph whose "edges" are sets of m vertices – in a normal graph an edge is a set of 2 vertices. The full statement of Ramsey's theorem for hypergraphs is that for any integers m and c, and any integers $n_1, \ldots, n_c$, there is an integer $R(n_1, \ldots, n_c; c, m)$ such that if the hyperedges of a complete m-hypergraph of order $R(n_1, \ldots, n_c; c, m)$ are coloured with c different colours, then for some i between 1 and c, the hypergraph must contain a complete sub-m-hypergraph of order $n_i$ whose hyperedges are all colour i. This theorem is usually proved by induction on m, the 'hyper-ness' of the graph. The base case for the proof is m = 2, which is exactly the theorem above. - -It is also possible to define Ramsey numbers for directed graphs; these were introduced by . 
Let R(n) be the smallest number Q such that any complete graph with singly directed arcs (also called a "tournament") and with ≥ Q nodes contains an acyclic (also called "transitive") n-node subtournament. - -This is the directed-graph analogue of what (above) has been called R(n, n; 2), the smallest number Z such that any 2-colouring of the edges of a complete undirected graph with ≥ Z nodes contains a monochromatic complete graph on n nodes. (The directed analogue of the two possible arc colours is the two directions of the arcs; the analogue of "monochromatic" is "all arc-arrows point the same way", i.e., "acyclic.") - -We have R(0) = 0, R(1) = 1, R(2) = 2, R(3) = 4, R(4) = 8, R(5) = 14, R(6) = 28, and 34 ≤ R(7) ≤ 47. - -In reverse mathematics, there is a significant difference in proof strength between the version of Ramsey's theorem for infinite graphs (the case n = 2) and for infinite hypergraphs (the case n ≥ 3). The hypergraph version of the theorem is equivalent in strength to the arithmetical comprehension axiom, making it part of the subsystem ACA0 of second-order arithmetic, one of the big five subsystems in reverse mathematics. In contrast, by a theorem of David Seetapun, the graph version of the theorem is weaker than ACA0, and (combining Seetapun's result with others) it does not fall into one of the big five subsystems. Over ZF, however, the graph version is equivalent to the classical Kőnig's lemma. diff --git a/wiki/wikipedia/2631.txt b/wiki/wikipedia/2631.txt deleted file mode 100644 index 4a2a6ea8598f69cc71b5edb3618e12ec8fd3dc71..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2631.txt +++ /dev/null @@ -1,15 +0,0 @@ -The New Tetris is a puzzle video game for the Nintendo 64. The game was developed by H2O Entertainment and published by Nintendo, based on the popular Tetris series. The game was originally released on July 31, 1999, in North America. - -The game is notable for showing scenic fly-bys of famous structures (for example, the Sphinx, the Pantheon, Saint Basil's Cathedral, a Mayan temple, and others) rendered in real-time. Rendering at this quality in real time is relatively difficult to accomplish on the Nintendo 64 hardware. The New Tetris also features a multiplayer mode with up to four players and an ethnically themed electronic dance music soundtrack by Neil Voss, who also composed the award-winning music for Tetrisphere. - -There are several key differences in gameplay from the original Tetris. First, in addition to clearing lines, one can also assemble four pieces into large 4×4 squares called "blocks." When a block is created, it turns solid gold or silver, depending on the makeup of the block: a block built from all the same kind of piece becomes a golden block or "monosquare", while any other combination becomes a silver block or "multisquare". Blocks can only be constructed from whole pieces: if any part of a piece has been cleared, then it cannot be used to form a block. When a line that has pieces from a block is cleared, it earns more points. - -Second, in order to aid in the planning of building blocks, the game shows three upcoming pieces and has a "storage area" where a spare piece can be stored. If the piece in the storage area is more desirable than the currently falling piece, the player can press the L button to swap the currently falling piece with the stored piece. 
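As a rough sketch of the preview-and-storage mechanic just described (a hypothetical reconstruction in Python, not H2O Entertainment's actual code; the class and method names are ours):

```
import random
from collections import deque

PIECES = "IJLOSTZ"  # the seven tetrominoes

class PieceQueue:
    """Minimal sketch: a three-piece preview plus a one-slot storage area."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.preview = deque(self.rng.choice(PIECES) for _ in range(3))
        self.stored = None  # the storage area starts empty

    def next_piece(self):
        piece = self.preview.popleft()
        self.preview.append(self.rng.choice(PIECES))
        return piece

    def swap(self, falling):
        """The L-button action: trade the falling piece for the stored one."""
        self.stored, falling = falling, self.stored
        if falling is None:          # nothing was stored yet: draw from the preview
            falling = self.next_piece()
        return falling

q = PieceQueue(seed=42)
falling = q.next_piece()
falling = q.swap(falling)   # the previous falling piece is now in storage
```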
- -One other feature is that rotation is much more flexible than in traditional Tetris games: the game tries several slight nudges, which players have called "wall kicks", before finding a position where the tetromino fits. Some of these compensations move the pieces away from walls, even "over" other pieces. In fact, the game rewards players for performing these seemingly impossible "spin moves": if a line is cleared by doing a spin move, all the pieces above or below the spin move break apart into individual blocks and fall down, possibly clearing many lines and filling in empty spaces in the bottom portion of the play area. Unfortunately, the spin move process causes golden and silver blocks to become ordinary pieces again, so they no longer carry their multiplier when cleared. - -Tetris Worlds includes the rules of The New Tetris under the name "Square Tetris", with even more flexible wall kick rules, although the rule for what constitutes a spin move differs significantly. - -The lead programmer on The New Tetris placed a secret rant within the code, which was dumped by hackers soon after the release of the game. - -The game received "favorable" reviews according to the review aggregation website GameRankings. diff --git a/wiki/wikipedia/2632.txt b/wiki/wikipedia/2632.txt deleted file mode 100644 index 54ef5be925285aab02c6a8d39c5b58fd4039ef04..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2632.txt +++ /dev/null @@ -1,5 +0,0 @@ -In plane geometry, Van Aubel's theorem describes a relationship between squares constructed on the sides of a quadrilateral. Starting with a given convex quadrilateral, construct a square, external to the quadrilateral, on each side. Van Aubel's theorem states that the two line segments between the centers of opposite squares are of equal lengths and are at right angles to one another. Another way of saying the same thing is that the center points of the four squares form the vertices of an equidiagonal orthodiagonal quadrilateral. The theorem is named after H. H. van Aubel, who published it in 1878. - -The theorem holds true also for re-entrant quadrilaterals, and when the squares are constructed internally to the given quadrilateral. For complex (self-intersecting) quadrilaterals, the external and internal constructions for the squares are not definable. In this case, the theorem holds true when the constructions are carried out in the more general way: - -*The Van Aubel points, the mid-points of the quadrilateral diagonals and the mid-points of the Van Aubel segments are concyclic. diff --git a/wiki/wikipedia/2633.txt b/wiki/wikipedia/2633.txt deleted file mode 100644 index 6d1944d677b95e61a5a63dfdc2affdf6065299ab..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2633.txt +++ /dev/null @@ -1,45 +0,0 @@ -Ice Lake is Intel's codename for the 10th generation Intel Core mobile and 3rd generation Xeon Scalable server processors based on the new Sunny Cove microarchitecture. Ice Lake represents an Architecture step in Intel's Process-Architecture-Optimization model. Produced on the second generation of Intel's 10 nm process, 10 nm+, Ice Lake is Intel's second microarchitecture to be manufactured on the 10 nm process, following the limited launch of Cannon Lake in 2018. However, Intel altered their naming scheme in 2020 for the 10 nm process. In this new naming scheme, Ice Lake's manufacturing process is called simply 10 nm, without any appended pluses. 
- -Ice Lake CPUs are sold together with the 14 nm Comet Lake CPUs as Intel's "10th Generation Core" product family. There are no Ice Lake desktop or high-power mobile processors; Comet Lake fulfills this role. Sunny Cove-based Xeon Scalable CPUs (codenamed "Ice Lake-SP") officially launched on April 6, 2021. Intel officially launched Xeon W-3300 series workstation processors on July 29, 2021. - -Ice Lake's direct successor in mobile is Tiger Lake, a third-generation 10 nm processor family using the new Willow Cove microarchitecture and integrated graphics based on the new Intel Xe microarchitecture. Ice Lake-SP will be succeeded by Sapphire Rapids, powered by Golden Cove cores. Several mobile Ice Lake CPUs were discontinued on July 7, 2021. - -Ice Lake was designed by Intel Israel's processor design team in Haifa, Israel. - -Ice Lake is built on the Sunny Cove microarchitecture. - -Ice Lake features Intel's Gen11 graphics, increasing the number of execution units to 64, from 24 or 48 in Gen9.5 graphics, achieving over 1 TFLOPS of compute performance. Each execution unit supports 7 threads, meaning that the design has 512 concurrent pipelines. Feeding these execution units is a 3 megabyte L3 cache, a four-fold increase from Gen9.5, alongside the increased memory bandwidth enabled by LPDDR4X on low-power mobile platforms. Gen11 graphics also introduces tile-based rendering and Coarse Pixel Shading (CPS), Intel's implementation of variable-rate shading (VRS). The architecture also includes an all-new HEVC encoder design. - -* Hardware acceleration for SHA operations (Secure Hash Algorithms) - -* Intel Deep Learning Boost, used for machine learning/artificial intelligence inference acceleration - -* Variable Rate Shading - -* DisplayPort 1.4a with Display Stream Compression; HDMI 2.0b - -* Up to 1.15 TFLOPS of computational performance - -* Two HEVC 10-bit encode pipelines, either two 4K 60Hz RGB/4:4:4 streams simultaneously or one 8K 30Hz 4:2:2 - -* VP9 8-bit and 10-bit hardware encoding for all supported platforms as part of Intel Quick Sync Video - -* Integer and nearest neighbor image scaling - -* 4th Gen IPU - -* 10 nm transistors (originally called 10 nm+ transistors in the older naming scheme) - -* New memory controller with DDR4 3200 and LPDDR4X 3733 support - -* Integrated support for Wi-Fi 6 (802.11ax) - -* Thunderbolt 3 support - -There are no bronze series processors in Xeon SP Gen3. - -"Ice Lake-W3300" (10 nm) - -* PCI Express lanes: 64 - -* Supports up to 16 DIMMs of DDR4 memory, maximum 4 TB. diff --git a/wiki/wikipedia/2634.txt b/wiki/wikipedia/2634.txt deleted file mode 100644 index e0beba4904dd7d71d0ef4a49ebf983c6e11628b4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2634.txt +++ /dev/null @@ -1,131 +0,0 @@ -In mathematics, the Arthur–Selberg trace formula is a generalization of the Selberg trace formula from the group SL2 to arbitrary reductive groups over global fields, developed by James Arthur in a long series of papers from 1974 to 2003. It describes the character of the representation of G(A) on the discrete part $L^2_0(G(F)\backslash G(A))$ of $L^2(G(F)\backslash G(A))$ in terms of geometric data, where G is a reductive algebraic group defined over a global field F and A is the ring of adeles of F. - -There are several different versions of the trace formula. The first version was the unrefined trace formula, whose terms depend on truncation operators and have the disadvantage that they are not invariant. 
Arthur later found the invariant trace formula and the stable trace formula, which are more suitable for applications. The simple trace formula is less general but easier to prove. The local trace formula is an analogue over local fields. - -Jacquet's relative trace formula is a generalization where one integrates the kernel function over non-diagonal subgroups. - -*F is a global field, such as the field of rational numbers. - -*A is the ring of adeles of F. - -*G is a reductive algebraic group defined over F. - -In the (rare) case when G(F)∖G(A) is compact, the representation splits as a direct sum of irreducible representations, and the trace formula is similar to the Frobenius formula for the character of the representation induced from the trivial representation of a subgroup of finite index. - -In the compact case, which is essentially due to Selberg, the groups G(F) and G(A) can be replaced by any discrete subgroup Γ of a locally compact group G with Γ∖G compact. The group G acts on the space of functions on Γ∖G by the right regular representation R, and this extends to an action of the group ring of G, considered as the ring of functions f on G. The character of this representation is given by a generalization of the Frobenius formula as follows. - -The action of a function f on a function φ on Γ∖G is given by -$$ -\displaystyle R(f)(\phi)(x) = \int_G f(y)\phi(xy) dy = \int_{\Gamma\backslash G}\sum_{\gamma\in \Gamma}f(x^{-1}\gamma y)\phi(y)dy. -$$ - -In other words, R(f) is an integral operator on $L^2(\Gamma\backslash G)$ (the space of functions on Γ∖G) with kernel -$$ -\displaystyle K_f(x,y) = \sum_{\gamma\in \Gamma}f(x^{-1}\gamma y). -$$ - -Therefore, the trace of R(f) is given by -$$ -\displaystyle \operatorname{Tr}(R(f)) = \int_{\Gamma\backslash G}K_f(x,x) dx. -$$ - -The kernel K can be written as -$$ -K_f(x,y) = \sum_{o\in O}K_o(x,y) -$$ - -where O is the set of conjugacy classes in Γ, and -$$ -K_o(x,y)= \sum_{\gamma\in o}f(x^{-1}\gamma y) = \sum_{\delta\in \Gamma_\gamma\backslash \Gamma}f(x^{-1}\delta^{-1}\gamma\delta y) -$$ - -where γ is an element of the conjugacy class o, and $\Gamma_\gamma$ is its centralizer in Γ. - -On the other hand, the trace is also given by -$$ -\displaystyle \operatorname{Tr}(R(f)) = \sum_{\pi} m(\pi)\operatorname{Tr}(R(f)|\pi) -$$ - -where m(π) is the multiplicity of the irreducible unitary representation π of G in $L^2(\Gamma\backslash G)$. - -*If Γ and G are both finite, the trace formula is equivalent to the Frobenius formula for the character of an induced representation. - -*If G is the group R of real numbers and Γ the subgroup Z of integers, then the trace formula becomes the Poisson summation formula. - -In most cases of the Arthur–Selberg trace formula, the quotient G(F)∖G(A) is not compact, which causes the following (closely related) problems: - -*The representation on $L^2(G(F)\backslash G(A))$ contains not only discrete components, but also continuous components. - -*The kernel is no longer integrable over the diagonal, and the operators R(f) are no longer of trace class. - -Arthur dealt with these problems by truncating the kernel at cusps in such a way that the truncated kernel is integrable over the diagonal. This truncation process causes many problems; for example, the truncated terms are no longer invariant under conjugation. By manipulating the terms further, Arthur was able to produce an invariant trace formula whose terms are invariant. - -The original Selberg trace formula studied a discrete subgroup Γ of a real Lie group G(R) (usually SL2(R)). 
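The Poisson summation special case noted above (G = R, Γ = Z) is easy to check numerically. The following Python sketch is our own sanity check, not part of the trace-formula machinery: it uses the Gaussian family f(x) = exp(−πtx²), whose Fourier transform under the convention $\hat f(\xi)=\int f(x)e^{-2\pi i x\xi}\,dx$ is $t^{-1/2}e^{-\pi\xi^2/t}$, and compares the two sides of $\sum_n f(n) = \sum_k \hat f(k)$.

```
import math

def poisson_check(t, N=50):
    """Compare sum over integers of f(n) = exp(-pi*t*n^2) with the sum of
    its Fourier transform t^(-1/2) * exp(-pi*k^2/t) over integer k."""
    lhs = sum(math.exp(-math.pi * t * n * n) for n in range(-N, N + 1))
    rhs = sum(math.exp(-math.pi * k * k / t) for k in range(-N, N + 1)) / math.sqrt(t)
    return lhs, rhs

for t in (0.5, 1.0, 2.0):
    lhs, rhs = poisson_check(t)
    print(f"t={t}: |lhs - rhs| = {abs(lhs - rhs):.2e}")  # agreement to machine precision
```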
- -In higher rank it is more convenient to replace the Lie group with an adelic group G(A). One reason for this is that the discrete group can be taken as the group of points G(F) for F a (global) field, which is easier to work with than discrete subgroups of Lie groups. It also makes Hecke operators easier to work with. - -One version of the trace formula asserts the equality of two distributions on G(A): -$$ -\sum_{o\in O}J_o^T = \sum_{\chi\in X}J_\chi^T. -$$ - -The left hand side is the geometric side of the trace formula, and is a sum over equivalence classes in the group of rational points G(F) of G, while the right hand side is the spectral side of the trace formula and is a sum over certain representations of subgroups of G(A). - -The version of the trace formula above is not particularly easy to use in practice, one of the problems being that the terms in it are not invariant under conjugation. Arthur found a modification in which the terms are invariant. - -The invariant trace formula states -$$ -\sum_M\frac{|W_0^M|}{|W_0^G|} \sum_{\gamma\in (M(Q))}a^M(\gamma)I_M(\gamma,f) = \sum_M\frac{|W_0^M|}{|W_0^G|} \int_{\Pi(M)}a^M(\pi)I_M(\pi,f) d\pi -$$ - -where - -*f is a test function on G(A) - -*M ranges over a finite set of rational Levi subgroups of G - -*(M(Q)) is the set of conjugacy classes of M(Q) - -*Π(M) is the set of irreducible unitary representations of M(A) - -*$a^M(\gamma)$ is related to the volume of M(Q,γ)\M(A,γ) - -*$a^M(\pi)$ is related to the multiplicity of the irreducible representation π in L2(M(Q)\M(A)) - -*$\displaystyle I_M(\gamma,f)$ is related to $\displaystyle\int_{M(A,\gamma)\backslash M(A)}f(x^{-1}\gamma x) dx$ - -*$\displaystyle I_M(\pi,f)$ is related to trace $\displaystyle \int_{M(A)}f(x) \pi(x) dx$ - -*$W_0^M$ is the Weyl group of M. - -Langlands suggested the possibility of a stable refinement of the trace formula that can be used to compare the trace formula for two different groups. Such a stable trace formula was found and proved by Arthur. - -Two elements of a group G(F) are called stably conjugate if they are conjugate over the algebraic closure of the field F. The point is that when one compares elements in two different groups, related for example by inner twisting, one does not usually get a good correspondence between conjugacy classes, but only between stable conjugacy classes. So to compare the geometric terms in the trace formulas for two different groups, one would like the terms to be not just invariant under conjugacy, but also to be well behaved on stable conjugacy classes; these are called stable distributions. - -The stable trace formula writes the terms in the trace formula of a group G in terms of stable distributions. However these stable distributions are not distributions on the group G, but are distributions on a family of quasisplit groups called the endoscopic groups of G. Unstable orbital integrals on the group G correspond to stable orbital integrals on its endoscopic groups H. - -There are several simple forms of the trace formula, which restrict the compactly supported test functions f in some way. The advantage of this is that the trace formula and its proof become much easier, and the disadvantage is that the resulting formula is less powerful. - -For example, if the functions f are cuspidal, which means that -$$ -\int_{n\in N(A)}f(xny) dn=0 -$$ - -for any unipotent radical N of a proper parabolic subgroup (defined over F) and any x, y in G(A), then the operator R(f) has image in the space of cusp forms so is compact. 
- -Jacquet used the Selberg trace formula to prove the Jacquet–Langlands correspondence between automorphic forms on GL2 and its twisted forms. The Arthur–Selberg trace formula can be used to study similar correspondences on higher rank groups. It can also be used to prove several other special cases of Langlands functoriality, such as base change, for some groups. - -Kottwitz used the Arthur–Selberg trace formula to prove the Weil conjecture on Tamagawa numbers. - -Lafforgue described how the trace formula is used in his proof of the Langlands conjecture for general linear groups over function fields. diff --git a/wiki/wikipedia/2635.txt b/wiki/wikipedia/2635.txt deleted file mode 100644 index c9a80f12ea508b08a11048a21728822a4c03e1dc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2635.txt +++ /dev/null @@ -1,24 +0,0 @@ -In mathematics, Ehrling's lemma is a result concerning Banach spaces. It is often used in functional analysis to demonstrate the equivalence of certain norms on Sobolev spaces. It was proposed by Gunnar Ehrling. - -Let (X, ||·||X), (Y, ||·||Y) and (Z, ||·||Z) be three Banach spaces. Assume that: - -* X is compactly embedded in Y: i.e. X ⊆ Y and every ||·||X-bounded sequence in X has a subsequence that is ||·||Y-convergent; and - -* Y is continuously embedded in Z: i.e. Y ⊆ Z and there is a constant k so that ||y||Z ≤ k||y||Y for every y ∈ Y. - -Then, for every ε > 0, there exists a constant C(ε) such that, for all x ∈ X, -$$ -\| x \|_{Y} \leq \varepsilon \| x \|_{X} + C(\varepsilon) \| x \|_{Z}. -$$ - -Let $\Omega \subset \mathbf{R}^n$ be open and bounded, and let $k \in \mathbf{N}$. Suppose that the Sobolev space $H^k(\Omega)$ is compactly embedded in $H^{k-1}(\Omega)$. Then the following two norms on $H^k(\Omega)$ are equivalent: -$$ -\| \cdot \| : H^{k} (\Omega) \to \mathbf{R}: u \mapsto \| u \| := \sqrt{\sum_{| \alpha | \leq k} \| \mathrm{D}^{\alpha} u \|_{L^{2} (\Omega)}^{2}} -$$ - -and -$$ -\| \cdot \|' : H^{k} (\Omega) \to \mathbf{R}: u \mapsto \| u \|' := \sqrt{\| u \|_{L^{2} (\Omega)}^{2} + \sum_{| \alpha | = k} \| \mathrm{D}^{\alpha} u \|_{L^{2} (\Omega)}^{2}}. -$$ - -For the subspace of $H^k(\Omega)$ consisting of those Sobolev functions with zero trace (those that are "zero on the boundary" of Ω), the $L^2$ norm of u can be left out to yield another equivalent norm. diff --git a/wiki/wikipedia/2636.txt b/wiki/wikipedia/2636.txt deleted file mode 100644 index ccf23e85d770606fb4519ab8e0fea23dcde93a81..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2636.txt +++ /dev/null @@ -1,92 +0,0 @@ -In quantum chemistry and physics, the Lieb–Oxford inequality provides a lower bound for the indirect part of the Coulomb energy of a quantum mechanical system. It is named after Elliott H. Lieb and Stephen Oxford. - -The inequality is of importance for density functional theory and plays a role in the proof of stability of matter. - -In classical physics, one can calculate the Coulomb energy of a configuration of charged particles in the following way. First, calculate the charge density ρ, where ρ is a function of the coordinates x ∈ ℝ^3. Second, calculate the Coulomb energy by integrating: -$$ -\frac{1}{2}\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\frac{\rho(x)\rho(y)}{|x-y|} \mathrm{d}^3 x \mathrm{d}^3 y. -$$ - -In other words, for each pair of points x and y, this expression calculates the energy related to the fact that the charge at x is attracted to or repelled from the charge at y. The factor of 1/2 corrects for double-counting the pairs of points. 
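As a sanity check on the classical formula above, the following Python sketch (our own toy example, in units where the total charge and ball radius are 1) estimates the double integral by Monte Carlo for a charge spread uniformly over the unit ball; the exact value of the self-energy in these units is 3/5.

```
import math
import random

def direct_coulomb_uniform_ball(samples=200_000, seed=1):
    """Monte Carlo estimate of (1/2) * iint rho(x) rho(y) / |x - y| dx dy
    for unit total charge spread uniformly over the unit ball; this equals
    (1/2) * E[1/|X - Y|] for X, Y independent uniform points in the ball."""
    rng = random.Random(seed)

    def point_in_ball():
        while True:  # rejection sampling from the bounding cube
            p = [rng.uniform(-1, 1) for _ in range(3)]
            if sum(c * c for c in p) <= 1:
                return p

    acc = sum(1 / math.dist(point_in_ball(), point_in_ball())
              for _ in range(samples))
    return 0.5 * acc / samples

print(direct_coulomb_uniform_ball())  # roughly 0.6 = 3/5
```

The 1/|x − y| singularity is integrable, so the plain Monte Carlo average converges without any special treatment.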
In quantum mechanics, it is also possible to calculate a charge density ρ, which is a function of x ∈ ℝ^3. More specifically, ρ is defined as the expectation value of charge density at each point. But in this case, the above formula for Coulomb energy is not correct, due to exchange and correlation effects. The above, classical formula for Coulomb energy is then called the "direct" part of Coulomb energy. To get the actual Coulomb energy, it is necessary to add a correction term, called the "indirect" part of Coulomb energy. The Lieb–Oxford inequality concerns this indirect part. It is relevant in density functional theory, where the expectation value ρ plays a central role. - -For a quantum mechanical system of N particles, each with charge e, the N-particle density is denoted by -$$ -P(x_1,\dots,x_N). -$$ - -The function P is only assumed to be non-negative and normalized. Thus the following applies to particles with any "statistics". For example, if the system is described by a normalised square integrable N-particle wave function -$$ -\psi\in L^2(\mathbb{R}^{3N}), -$$ - -then -$$ -P(x_1,\dots,x_N)=|\psi(x_1,\dots,x_N)|^2. -$$ - -More generally, in the case of particles with spin having q spin states per particle and with corresponding wave function -$$ -\psi(x_1,\sigma_1,\dots,x_N,\sigma_N) -$$ - -the N-particle density is given by -$$ -P(x_1,\dots,x_N)=\sum_{\sigma_1=1}^q\cdots\sum_{\sigma_N=1}^q|\psi(x_1,\sigma_1,\dots,x_N,\sigma_N)|^2. -$$ - -Alternatively, if the system is described by a density matrix γ, then P is the diagonal -$$ -\gamma(x_1, ... , x_N; x_1, ..., x_N ). -$$ - -The electrostatic energy of the system is defined as -$$ -I_P=e^2\sum_{1\le i<j\le N}\int_{\mathbb{R}^{3N}}\frac{P(x_1,\dots,x_N)}{|x_i-x_j|} \mathrm{d}^3 x_1\cdots\mathrm{d}^3 x_N. -$$ - -With ρ the single-particle charge density associated with P, and D(ρ) the direct part of the Coulomb energy computed from it as above, the Lieb–Oxford inequality states that the indirect part satisfies -$$ -E_P=I_P-D(\rho)\ge -C|e|^\frac23\int_{\mathbb{R}^3}|\rho(x)|^\frac43 \mathrm{d}^3 x, -$$ - -where C ≤ 1.68 is a constant independent of the particle number N. E_P is referred to as the indirect part of the Coulomb energy and in density functional theory more commonly as the exchange plus correlation energy. A similar bound exists if the particles have different charges e_1, ... , e_N. No upper bound is possible for E_P. - -While the original proof yielded the constant C = 8.52, Lieb and Oxford managed to refine this result to C = 1.68. Later, the same method of proof was used to further improve the constant to C = 1.64. With these constants the inequality holds for any particle number N. - -The constant can be further improved if the particle number N is restricted. In the case of a single particle N = 1 the Coulomb energy vanishes, I_P = 0, and the smallest possible constant can be computed explicitly as C_1 = 1.092. The lower bound $C_{LO}\ge 1.4442$ had been proved earlier and acknowledged as such. Hence, to summarise, the best known bounds for C are 1.44 ≤ C ≤ 1.64. - -Historically, the first approximation of the indirect part E_P of the Coulomb energy in terms of the single particle charge density was given by Paul Dirac in 1930 for fermions. The wave function under consideration is -$$ -\psi(x_1,\sigma_1,\dots,x_N,\sigma_N)= \frac{\det(\varphi_i(x_j,\sigma_j))}{\sqrt{N!}}. -$$ - -With the aim of evoking perturbation theory, one considers the eigenfunctions of the Laplacian in a large cubic box Λ of volume $|\Lambda| = L^3$ and sets -$$ -\varphi_{\alpha,k}(x,\sigma) = \frac{\chi_\alpha(\sigma)\mathrm{e}^{2\pi\mathrm{i} k\cdot x}}{\sqrt{|\Lambda|}}, -$$ - -where χ_1, ..., χ_q forms an orthonormal basis of ℂ^q. The allowed values of $k\in\mathbb{R}^3$ are $n/L$ with $n\in\mathbb{Z}^3_+$. 
For large $N$ and $|\Lambda|$, with fixed $\rho = N/|\Lambda|$, the indirect part of the Coulomb energy can be computed to be -$$ -E_P(\mathrm{Dirac})=-C |e|^{2/3} q^{-1/3}\rho^{4/3}|\Lambda|, -$$ - -with C = 0.93. - -This result can be compared to the Lieb–Oxford lower bound above. In contrast to Dirac's approximation the Lieb–Oxford inequality does not include the number q of spin states on the right-hand side. The dependence on q in Dirac's formula is a consequence of his specific choice of wave functions and not a general feature. - -The constant C in the Lieb–Oxford bound can be made smaller at the price of adding another term to the right-hand side. By including a term that involves the gradient of a power of the single particle charge density ρ, the constant C can be improved to 1.45. Thus, for a uniform density system C ≤ 1.45. diff --git a/wiki/wikipedia/2637.txt b/wiki/wikipedia/2637.txt deleted file mode 100644 index 7dde03a5b7fddae17ec5828d572354e1d4c9d55e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2637.txt +++ /dev/null @@ -1,232 +0,0 @@ -Relevance logic, also called relevant logic, is a kind of non-classical logic requiring the antecedent and consequent of implications to be relevantly related. They may be viewed as a family of substructural or modal logics. It is generally, but not universally, called relevant logic by British and, especially, Australian logicians, and relevance logic by American logicians. - -Relevance logic aims to capture aspects of implication that are ignored by the "material implication" operator in classical truth-functional logic, namely the notion of relevance between the antecedent and consequent of a true implication. This idea is not new: C. I. Lewis was led to invent modal logic, and specifically strict implication, on the grounds that classical logic grants paradoxes of material implication such as the principle that a falsehood implies any proposition. Hence "if I'm a donkey, then two and two is four" is true when translated as a material implication, yet it seems intuitively false since a true implication must tie the antecedent and consequent together by some notion of relevance. And whether or not the speaker is a donkey seems in no way relevant to whether two and two is four. - -How does relevance logic formally capture a notion of relevance? In terms of a syntactical constraint for a propositional calculus, it is necessary, but not sufficient, that premises and conclusion share atomic formulae (formulae that do not contain any logical connectives). In a predicate calculus, relevance requires sharing of variables and constants between premises and conclusion. This can be ensured (along with stronger conditions) by, e.g., placing certain restrictions on the rules of a natural deduction system. In particular, a Fitch-style natural deduction system can be adapted to accommodate relevance by introducing tags at the end of each line of an application of an inference indicating the premises relevant to the conclusion of the inference. Gentzen-style sequent calculi can be modified by removing the weakening rules that allow for the introduction of arbitrary formulae on the right or left side of the sequents. - -A notable feature of relevance logics is that they are paraconsistent logics: the existence of a contradiction will not cause "explosion". This follows from the fact that a conditional with a contradictory antecedent that does not share any propositional or predicate letters with the consequent cannot be true (or derivable). - -Relevance logic was proposed in 1928 by Soviet philosopher Ivan E. 
Orlov (1886 – circa 1936) in his strictly mathematical paper "The Logic of Compatibility of Propositions" published in Matematicheskii Sbornik. - -The basic idea of relevant implication appears in medieval logic, and some pioneering work was done by Ackermann, Moh, and Church in the 1950s. Drawing on them, Nuel Belnap and Alan Ross Anderson (with others) wrote the magnum opus of the subject, Entailment: The Logic of Relevance and Necessity in the 1970s (the second volume being published in the nineties). They focused on both systems of entailment and systems of relevance, where implications of the former kind are supposed to be both relevant and necessary. - -The early developments in relevance logic focused on the stronger systems. The development of the Routley–Meyer semantics brought out a range of weaker logics. The weakest of these logics is the relevance logic B. It is axiomatized with the following axioms and rules. - -# $A\to A$ - -# $A\land B\to A$ - -# $A\land B\to B$ - -# $(A\to B)\land(A\to C)\to (A\to B\land C)$ - -# $A\to A\lor B$ - -# $B\to A\lor B$ - -# $(A\to C)\land(B\to C)\to (A\lor B\to C)$ - -# $A\land(B\lor C)\to (A\land B)\lor(A\land C)$ - -# $\lnot\lnot A\to A$ - -The rules are the following. - -# $A, A\to B\vdash B$ - -# $A, B\vdash A\land B$ - -# $A\to B\vdash (C\to A)\to(C\to B)$ - -# $A\to B\vdash (B\to C)\to(A\to C)$ - -# $A\to B\vdash \lnot B\to\lnot A$ - -Stronger logics can be obtained by adding any of the following axioms. - -# $(A\to B)\to (\lnot B\to\lnot A)$ - -# $(A\to B)\land(B\to C)\to (A\to C)$ - -# $(A\to B)\to((B\to C)\to(A\to C))$ - -# $(A\to B)\to((C\to A)\to(C\to B))$ - -# $(A\to(A\to B))\to(A\to B)$ - -# $(A\land (A\to B))\to B$ - -# $(A\to\lnot A)\to\lnot A$ - -# $(A\to (B\to C))\to(B\to(A\to C))$ - -# $A\to((A\to B)\to B)$ - -# $((A\to A)\to B)\to B$ - -# $A\lor\lnot A$ - -# $A\to(A\to A)$ - -There are some notable logics stronger than B that can be obtained by adding axioms to B as follows. - -* For DW, add axiom 1. - -* For DJ, add axioms 1, 2. - -* For TW, add axioms 1, 2, 3, 4. - -* For RW, add axioms 1, 2, 3, 4, 8, 9. - -* For T, add axioms 1, 2, 3, 4, 5, 6, 7, 11. - -* For R, add axioms 1-11. - -* For E, add axioms 1-7, 10, 11, $((A\to A)\land(B\to B)\to C)\to C$, and $\Box A\land \Box B\to \Box (A\land B)$, where $\Box A$ is defined as $(A\to A)\to A$. - -* For RM, add all the additional axioms. - -The standard model theory for relevance logics is the Routley–Meyer ternary-relational semantics developed by Richard Routley and Robert Meyer. A Routley–Meyer frame F for a propositional language is a quadruple (W,R,*,0), where W is a non-empty set, R is a ternary relation on W, * is a function from W to W, and $0\in W$. A Routley–Meyer model M is a Routley–Meyer frame F together with a valuation, $\Vdash$, that assigns a truth value to each atomic proposition relative to each point $a\in W$. There are some conditions placed on Routley–Meyer frames. Define $a\leq b$ as $R0ab$. - -* $a\leq a$. - -* If $a\leq b$ and $b\leq c$, then $a\leq c$. - -* If $d\leq a$ and $Rabc$, then $Rdbc$. - -* $a^{**}=a$. - -* If $a\leq b$, then $b^*\leq a^*$. - -Write $M,a\Vdash A$ and $M,a\nVdash A$ to indicate that the formula $A$ is true, or not true, respectively, at point $a$ in $M$. - -One final condition on Routley–Meyer models is the hereditariness condition. - -* If $M,a\Vdash p$ and $a\leq b$, then $M,b\Vdash p$, for all atomic propositions $p$. 
- -By an inductive argument, hereditariness can be shown to extend to complex formulas, using the truth conditions below. - -* If $M,a\Vdash A$ and $a\leq b$, then $M,b\Vdash A$, for all formulas $A$. - -The truth conditions for complex formulas are as follows. - -* $M,a\Vdash A\land B \iff M, a\Vdash A$ and $M,a\Vdash B$ - -* $M,a\Vdash A\lor B \iff M, a\Vdash A$ or $M,a\Vdash B$ - -* $M,a\Vdash A\to B\iff \forall b,c((Rabc\land M,b\Vdash A)\Rightarrow M,c\Vdash B)$ - -* $M,a\Vdash\lnot A\iff M,a^*\nVdash A$ - -A formula $A$ holds in a model $M$ just in case $M,0\Vdash A$. A formula $A$ holds on a frame $F$ iff A holds in every model $(F,\Vdash)$. A formula $A$ is valid in a class of frames iff A holds on every frame in that class. - -The class of all Routley–Meyer frames satisfying the above conditions validates the relevance logic B. One can obtain Routley–Meyer frames for other relevance logics by placing appropriate restrictions on R and on *. These conditions are easier to state using some standard definitions. Let $Rabcd$ be defined as $\exists x(Rabx \land Rxcd)$, and let $Ra(bc)d$ be defined as $\exists x(Rbcx \land Raxd)$. Some of the frame conditions and the axioms they validate are the following. - -The last two conditions validate forms of weakening that relevance logics were originally developed to avoid. They are included to show the flexibility of the Routley–Meyer models. - -Operational models for negation-free fragments of relevance logics were developed by Alasdair Urquhart in his PhD thesis and in subsequent work. The intuitive idea behind the operational models is that points in a model are pieces of information, and combining information supporting a conditional with the information supporting its antecedent yields some information that supports the consequent. Since the operational models do not generally interpret negation, this section will consider only languages with a conditional, conjunction, and disjunction. - -An operational frame $F$ is a triple $(K,\cdot,0)$, where $K$ is a non-empty set, $0\in K$, and $\cdot$ is a binary operation on $K$. Frames have conditions, some of which may be dropped to model different logics. The conditions Urquhart proposed to model the conditional of the relevance logic R are the following. - -* $x\cdot x=x$ - -* $(x\cdot y)\cdot z=x\cdot(y\cdot z)$ - -* $x\cdot y=y\cdot x$ - -* $0\cdot x=x$ - -Under these conditions, the operational frame is a join-semilattice. - -An operational model $M$ is a frame $F$ with a valuation $V$ that maps pairs of points and atomic propositions to truth values, T or F. $V$ can be extended to a valuation $\Vdash$ on complex formulas as follows. - -* $M,a\Vdash p \iff V(a,p)=T$, for atomic propositions - -* $M,a\Vdash A\land B \iff M, a\Vdash A$ and $M,a\Vdash B$ - -* $M,a\Vdash A\lor B \iff M, a\Vdash A$ or $M,a\Vdash B$ - -* $M,a\Vdash A\to B\iff \forall b(M,b\Vdash A\Rightarrow M,a\cdot b\Vdash B)$ - -A formula $A$ holds in a model $M$ iff $M,0\Vdash A$. A formula $A$ is valid in a class of models $C$ iff it holds in each model $M\in C$. - -The conditional fragment of R is sound and complete with respect to the class of semilattice models. The logic with conjunction and disjunction is properly stronger than the conditional, conjunction, disjunction fragment of R. In particular, the formula $(A\to(B\lor C))\land(B\to C)\to (A\to C)$ is valid for the operational models but is invalid in R. 
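The operational truth conditions above translate almost verbatim into code. The following Python sketch is our own encoding (formulas as nested tuples; the two-point frame is just a toy example satisfying Urquhart's semilattice conditions), and it evaluates formulas exactly by the clauses listed above.

```
def holds(formula, a, frame, V):
    """Evaluate a formula at point a of an operational frame.

    frame = (K, dot, zero); formulas are nested tuples:
    ('atom', p), ('and', A, B), ('or', A, B), ('to', A, B).
    In particular a |= A -> B iff for all b: b |= A implies dot(a, b) |= B.
    """
    K, dot, zero = frame
    kind = formula[0]
    if kind == 'atom':
        return V(a, formula[1])
    if kind == 'and':
        return holds(formula[1], a, frame, V) and holds(formula[2], a, frame, V)
    if kind == 'or':
        return holds(formula[1], a, frame, V) or holds(formula[2], a, frame, V)
    if kind == 'to':
        return all(not holds(formula[1], b, frame, V)
                   or holds(formula[2], dot(a, b), frame, V)
                   for b in K)
    raise ValueError(kind)

# toy two-point semilattice: dot = max is idempotent, associative,
# commutative, and has 0 as identity, as required above
frame = ([0, 1], max, 0)
V = lambda a, p: (a, p) in {(1, 'p')}     # the atom 'p' holds only at point 1
A = ('to', ('atom', 'p'), ('atom', 'p'))  # the formula p -> p
print(holds(A, 0, frame, V))              # True: p -> p holds in the model (at 0)
```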
The logic generated by the operational models for R has a complete axiomatic proof system, due to Kit Fine and Gerald Charlwood. Charlwood also provided a natural deduction system for the logic, which he proved equivalent to the axiomatic system. Charlwood showed that his natural deduction system is equivalent to a system provided by Dag Prawitz. - -The operational semantics can be adapted to model the conditional of E by adding a non-empty set of worlds $W$ and an accessibility relation $\leq$ on $W\times W$ to the frames. The accessibility relation is required to be reflexive and transitive, to capture the idea that E's conditional has an S4 necessity. The valuations then map triples of atomic propositions, points, and worlds to truth values. The truth condition for the conditional is changed to the following. - -* $M,a, w\Vdash A\to B\iff \forall b, \forall w'\geq w(M,b, w'\Vdash A\Rightarrow M,a\cdot b,w'\Vdash B)$ - -The operational semantics can be adapted to model the conditional of T by adding a relation $\leq$ on $K\times K$. The relation is required to obey the following conditions. - -* $0\leq x$ - -* If $x\leq y$ and $y\leq z$, then $x\leq z$ - -* If $x\leq y$, then $x\cdot z\leq y\cdot z$ - -The truth condition for the conditional is changed to the following. - -* $M,a\Vdash A\to B\iff \forall b((a\leq b\land M,b\Vdash A)\Rightarrow M,a\cdot b\Vdash B)$ - -There are two ways to model the contraction-less relevance logics TW and RW with the operational models. The first way is to drop the condition that $x\cdot x=x$. The second way is to keep the semilattice conditions on frames and add a binary relation, $J$, of disjointness to the frame. For these models, the truth condition for the conditional is changed to the following, with the addition of the ordering in the case of TW. - -* $M,a\Vdash A\to B\iff \forall b((Jab \land M,b\Vdash A)\Rightarrow M,a\cdot b\Vdash B)$ - -Urquhart showed that the semilattice logic for R is properly stronger than the positive fragment of R. Lloyd Humberstone provided an enrichment of the operational models that permitted a different truth condition for disjunction. The resulting class of models generates exactly the positive fragment of R. - -An operational frame $F$ is a quadruple $(K,\cdot,+,0)$, where $K$ is a non-empty set, $0\in K$, and {$\cdot$, $+$} are binary operations on $K$. Let $a\leq b$ be defined as $\exists x(a+x=b)$. The frame conditions are the following. - -An operational model $M$ is a frame $F$ with a valuation $V$ that maps pairs of points and atomic propositions to truth values, T or F. $V$ can be extended to a valuation $\Vdash$ on complex formulas as follows. - -* $M,a\Vdash p \iff V(a,p)=T$, for atomic propositions - -* $M,a+b\Vdash p \iff M,a\Vdash p$ and $M,b\Vdash p$ - -* $M,a\Vdash A\land B \iff M,a\Vdash A$ and $M,a\Vdash B$ - -* $M,a\Vdash A\lor B \iff M, a\Vdash A$ or $M,a\Vdash B$ or $\exists b,c(a=b+c$; $M,b\Vdash A$ and $M,c\Vdash B)$ - -* $M,a\Vdash A\to B\iff \forall b(M,b\Vdash A\Rightarrow M,a\cdot b\Vdash B)$ - -A formula $A$ holds in a model $M$ iff $M,0\Vdash A$. A formula $A$ is valid in a class of models $C$ iff it holds in each model $M\in C$. - -The positive fragment of R is sound and complete with respect to the class of these models. Humberstone's semantics can be adapted to model different logics by dropping or adding frame conditions as follows. - -Some relevance logics can be given algebraic models, such as the logic R. 
The algebraic structures for R are de Morgan monoids, which are sextuples $(D,\land,\lor,\lnot,\circ,e)$ where - -* $(D,\land,\lor,\lnot)$ is a distributive lattice with a unary operation $\lnot$ obeying the laws $\lnot\lnot x=x$ and if $x\leq y$ then $\lnot y\leq \lnot x$; - -* $e\in D$, the binary operation $\circ$ is commutative ($x\circ y=y\circ x$) and associative ($(x\circ y)\circ z=x\circ (y\circ z)$), and $e\circ x=x$, i.e. $(D,\circ,e)$ is an Abelian monoid with identity $e$; - -* the monoid is lattice-ordered and satisfies $x\circ(y\lor z)=(x\circ y)\lor(x\circ z)$; - -* $x\leq x\circ x$; and - -* if $x\circ y\leq z$, then $x\circ\lnot z\leq \lnot y$. - -The operation $x\to y$ interpreting the conditional of R is defined as $\lnot(x\circ\lnot y)$. - -A de Morgan monoid is a residuated lattice, obeying the following residuation condition. -$$ -x \circ y\leq z \iff x\leq y\to z -$$ - -An interpretation $v$ is a homomorphism from the propositional language to a de Morgan monoid $M$ such that - -* $v(p)\in D$ for all atomic propositions, - -* $v(\lnot A)=\lnot v(A)$ - -* $v(A\lor B)=v(A)\lor v(B)$ - -* $v(A\land B)=v(A)\land v(B)$ - -* $v(A\to B)=v(A)\to v(B)$ - -Given a de Morgan monoid $M$ and an interpretation $v$, one can say that formula $A$ holds on $v$ just in case $e\leq v(A)$. A formula $A$ is valid just in case it holds on all interpretations on all de Morgan monoids. The logic R is sound and complete for de Morgan monoids. diff --git a/wiki/wikipedia/2638.txt b/wiki/wikipedia/2638.txt deleted file mode 100644 index 6f1e32cb9244032520813671ddf470d643589ec8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2638.txt +++ /dev/null @@ -1,21 +0,0 @@ -In mathematical analysis, Agmon's inequalities, named after Shmuel Agmon, consist of two closely related interpolation inequalities between the Lebesgue space $L^\infty$ and the Sobolev spaces $H^s$. They are useful in the study of partial differential equations. - -Let $u\in H^2(\Omega)\cap H^1_0(\Omega)$ where $\Omega\subset\mathbb{R}^3$. Then Agmon's inequalities in 3D state that there exists a constant $C$ such that -$$ -\displaystyle \|u\|_{L^\infty(\Omega)}\leq C \|u\|_{H^1(\Omega)}^{1/2} \|u\|_{H^2(\Omega)}^{1/2}, -$$ - -and -$$ -\displaystyle \|u\|_{L^\infty(\Omega)}\leq C \|u\|_{L^2(\Omega)}^{1/4} \|u\|_{H^2(\Omega)}^{3/4}. -$$ - -In 2D, the first inequality still holds, but not the second: let $u\in H^2(\Omega)\cap H^1_0(\Omega)$ where $\Omega\subset\mathbb{R}^2$. Then Agmon's inequality in 2D states that there exists a constant $C$ such that -$$ -\displaystyle \|u\|_{L^\infty(\Omega)}\leq C \|u\|_{L^2(\Omega)}^{1/2} \|u\|_{H^2(\Omega)}^{1/2}. -$$ - -For the $n$-dimensional case, choose $s_1$ and $s_2$ such that $s_1< \tfrac{n}{2} < s_2$. Then, if $0< \theta < 1$ and $\tfrac{n}{2} = \theta s_1 + (1-\theta)s_2$, the following inequality holds for any $u\in H^{s_2}(\Omega)$ -$$ -\displaystyle \|u\|_{L^\infty(\Omega)}\leq C \|u\|_{H^{s_1}(\Omega)}^{\theta} \|u\|_{H^{s_2}(\Omega)}^{1-\theta} -$$ diff --git a/wiki/wikipedia/2639.txt b/wiki/wikipedia/2639.txt deleted file mode 100644 index 12a47145ce56ae1d42c607db8adbce41d0806440..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2639.txt +++ /dev/null @@ -1,143 +0,0 @@ -In mathematics, Vinogradov's mean value theorem is an estimate for the number of equal sums of powers. - -It is an important inequality in analytic number theory, named for I. M. Vinogradov. 
- -More specifically, let $J_{s,k}(X)$ count the number of solutions to the system of $k$ simultaneous Diophantine equations in $2s$ variables given by -$$ -x_1^j+x_2^j+\cdots+x_s^j=y_1^j+y_2^j+\cdots+y_s^j\quad (1\le j\le k) -$$ - -with -$$ -1\le x_i,y_i\le X, (1\le i\le s) -$$. - -That is, it counts the number of equal sums of powers with equal numbers of terms ($s$), for all exponents $j$ up to $k$, and with the variables bounded by $X$. An alternative analytic expression for $J_{s,k}(X)$ is -$$ -J_{s,k}(X)=\int_{[0,1)^k}|f_k(\mathbf\alpha;X)|^{2s}d\mathbf\alpha -$$ - -where -$$ -f_k(\mathbf\alpha;X)=\sum_{1\le x\le X}\exp(2\pi i(\alpha_1x+\cdots+\alpha_kx^k)). -$$ - -Vinogradov's mean-value theorem gives an upper bound on the value of $J_{s,k}(X)$. - -A strong estimate for $J_{s,k}(X)$ is an important part of the Hardy-Littlewood method for attacking Waring's problem and also for demonstrating a zero-free region for the Riemann zeta-function in the critical strip. Various bounds have been produced for $J_{s,k}(X)$, valid for different relative ranges of $s$ and $k$. The classical form of the theorem applies when $s$ is very large in terms of $k$. - -An analysis of the proofs of the Vinogradov mean-value conjecture can be found in the Bourbaki Séminaire talk by Lillian Pierce. - -By considering the $X^s$ solutions where -$$ -x_i=y_i, (1\le i\le s) -$$ - -one can see that $J_{s,k}(X)\gg X^s$. - -A more careful analysis (see Vaughan, equation 7.4) provides the lower bound -$$ -J_{s,k}\gg X^s+X^{2s-\frac12k(k+1)}. -$$ - -The main conjecture of Vinogradov's mean value theorem was that the upper bound is close to this lower bound. More specifically, that for any $\epsilon>0$ we have -$$ -J_{s,k}(X)\ll X^{s+\epsilon}+X^{2s-\frac12k(k+1)+\epsilon}. -$$ - -This was proved by Jean Bourgain, Ciprian Demeter, and Larry Guth and by a different method by Trevor Wooley. - -If $s\ge \frac12k(k+1)$, this is equivalent to the bound -$$ -J_{s,k}(X)\ll X^{2s-\frac12k(k+1)+\epsilon}. -$$ - -Similarly, if $s\le \frac12k(k+1)$, the conjectural form is equivalent to the bound -$$ -J_{s,k}(X)\ll X^{s+\epsilon}. -$$ - -Stronger forms of the theorem lead to an asymptotic expression for $J_{s,k}$; in particular, for large $s$ relative to $k$, the expression -$$ -J_{s,k}\sim \mathcal C(s,k)X^{2s-\frac12k(k+1)}, -$$ - -where $\mathcal C(s,k)$ is a fixed positive number depending on at most $s$ and $k$, holds (see Theorem 1.2). - -Vinogradov's original theorem of 1935 showed that for fixed $s,k$ with -$$ -s\ge k^2\log (k^2+k)+\frac14k^2+\frac54 k+1 -$$ - -there exists a positive constant $D(s,k)$ such that -$$ -J_{s,k}(X)\le D(s,k)(\log X)^{2s}X^{2s-\frac12k(k+1)+\frac12}. -$$ - -Although this was a ground-breaking result, it falls short of the full conjectured form. Instead, it demonstrates the conjectured form when $\epsilon>\frac12$. - -Vinogradov's approach was improved upon by Karatsuba and Stechkin, who showed that for $s\ge k$ there exists a positive constant $D(s,k)$ such that -$$ -J_{s,k}(X)\le D(s,k)X^{2s-\frac12k(k+1)+\eta_{s,k}}, -$$ - -where -$$ -\eta_{s,k}=\frac12 k^2\left(1-\frac1k\right)^{\left[\frac sk\right]}\le k^2e^{-s/k^2}. -$$ - -Noting that for $s>k^2(2\log k-\log\epsilon)$ we have $\eta_{s,k}<\epsilon$, this proves that the conjectural form holds for $s$ of this size. - -The method can be sharpened further to prove the asymptotic estimate -$$ -J_{s,k}\sim \mathcal C(s,k)X^{2s-\frac12k(k+1)}, -$$ - -for large $s$ in terms of $k$. 
- -In 2012 Wooley improved the range of $s$ for which the conjectural form holds. He proved that for $k\ge 2$ and $s\ge k(k+1)$, and for any $\epsilon>0$, we have -$$ -J_{s,k}(X)\ll X^{2s-\frac12k(k+1)+\epsilon}. -$$ - -Ford and Wooley have shown that the conjectural form is established for small $s$ in terms of $k$. Specifically, they show that for $k\ge 4$ and $1\le s\le \frac14(k+1)^2$, for any $\epsilon>0$, we have -$$ -J_{s,k}(X)\ll X^{s+\epsilon}. -$$ diff --git a/wiki/wikipedia/264.txt b/wiki/wikipedia/264.txt deleted file mode 100644 index 599d52ed5d186b5e678b6ab65593bea2b7182975..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/264.txt +++ /dev/null @@ -1,73 +0,0 @@ -In mathematical logic, the diagonal lemma (also known as diagonalization lemma, self-reference lemma or fixed point theorem) establishes the existence of self-referential sentences in certain formal theories of the natural numbers—specifically those theories that are strong enough to represent all computable functions. The sentences whose existence is secured by the diagonal lemma can then, in turn, be used to prove fundamental limitative results such as Gödel's incompleteness theorems and Tarski's undefinability theorem. - -Let $\mathbb{N}$ be the set of natural numbers. A first-order theory $T$ in the language of arithmetic represents the computable function $f: \mathbb{N}\rightarrow\mathbb{N}$ if there exists a "graph" predicate $\mathcal{G}_f(x, y)$ in the language of $T$ such that for each $n \in \mathbb{N}$ -$$ -\vdash_{T}(\forall y)[(^\circ f(n)=y) \Leftrightarrow \mathcal{G}_f(^\circ n,y)] -$$ - -Here ${}^\circ n$ is the numeral corresponding to the natural number $n$, which is defined to be the $n$th successor of the presumed first numeral $0$ in $T$. - -The diagonal lemma also requires a systematic way of assigning to every formula $\mathcal{A}$ a natural number $\#(\mathcal{A})$ (also written as $\#_{\mathcal{A}}$) called its Gödel number. Formulas can then be represented within $T$ by the numerals corresponding to their Gödel numbers. For example, $\mathcal{A}$ is represented by $^{\circ}\#_{\mathcal{A}}$. - -The diagonal lemma applies to theories capable of representing all primitive recursive functions. Such theories include first-order Peano arithmetic and the weaker Robinson arithmetic. A common statement of the lemma (as given below) makes the stronger assumption that the theory can represent all computable functions, which still applies to first-order Peano arithmetic. - -Lemma. Let $T$ be a first-order theory in the language of arithmetic, capable of representing all computable functions, and let $\mathcal{F}(y)$ be a formula in $T$ with one free variable. Then there exists a formula $\mathcal{C}$ such that -$$ -\vdash_T\mathcal{C}\Leftrightarrow\mathcal{F}({}^{\circ}\#_{\mathcal{C}}) -$$ - -Intuitively, $\mathcal{C}$ is a self-referential formula: $\mathcal{C}$ says that $\mathcal{C}$ has the property $\mathcal{F}$. The sentence $\mathcal{C}$ can also be viewed as a fixed point of the operation assigning to each formula $\mathcal{A}$ the formula $\mathcal{F}(^\circ \#_{\mathcal{A}})$. The formula $\mathcal{C}$ constructed in the proof is not literally the same as $\mathcal{F}({}^\circ \#_{\mathcal{C}})$, but is provably equivalent to it in the theory $T$. 
- -Let $f:\mathbb{N}\to\mathbb{N}$ be the function defined by: -$$ -f(\#_{\mathcal{A}}) = \#[\mathcal{A}(^{\circ}\#_{\mathcal{A}})] -$$ - -for each formula $\mathcal{A}(x)$ with only one free variable $x$ in the theory $T$, and $f(n)=0$ otherwise. Here $\#_{\mathcal{A}}=\#(\mathcal{A}(x))$ denotes the Gödel number of formula $\mathcal{A}(x)$. The function $f$ is computable (which is ultimately an assumption about the Gödel numbering scheme), so there is a formula $\mathcal{G}_f(x,y)$ representing $f$ in $T$. Namely -$$ -\vdash_T(\forall y)\{\mathcal{G}_f(^{\circ}\#_{\mathcal{A}},y) \Leftrightarrow [y = {}^{\circ}f(\#_{\mathcal{A}})]\} -$$ - -which is to say -$$ -\vdash_T(\forall y)\{\mathcal{G}_f(^{\circ}\#_{\mathcal{A}},y) \Leftrightarrow [y = {}^{\circ}\#(\mathcal{A}(^{\circ}\#_{\mathcal{A}}))]\} -$$ - -Now, given an arbitrary formula $\mathcal{F}(y)$ with one free variable $y$, define the formula $\mathcal{B}(z)$ as: -$$ -\mathcal{B}(z) := (\forall y) [\mathcal{G}_f(z,y)\Rightarrow \mathcal{F}(y)] -$$ - -Then, for all formulas $\mathcal{A}(x)$ with one free variable: -$$ -\vdash_T\mathcal{B}(^{\circ}\#_{\mathcal{A}}) \Leftrightarrow (\forall y)\{[ y = {}^{\circ}\#(\mathcal{A}(^{\circ}\#_{\mathcal{A}}))] \Rightarrow \mathcal{F}(y)\} -$$ - -which is to say -$$ -\vdash_T\mathcal{B}(^{\circ}\#_{\mathcal{A}}) \Leftrightarrow \mathcal{F}\{^{\circ}\#[\mathcal{A}(^{\circ}\#_{\mathcal{A}})]\} -$$ - -Now take $\mathcal{A}$ to be $\mathcal{B}$ itself, and define the formula $\mathcal{C}$ as: -$$ -\mathcal{C}:= \mathcal{B}(^{\circ}\#_{\mathcal{B}}) -$$ - -Then the previous line can be rewritten as -$$ -\vdash_T\mathcal{C}\Leftrightarrow\mathcal{F}(^{\circ}\#_{\mathcal{C}}) -$$ - -which is the desired result. - -(The same argument in different terms is given in [Raatikainen (2015a)].) - -The lemma is called "diagonal" because it bears some resemblance to Cantor's diagonal argument. The terms "diagonal lemma" or "fixed point" do not appear in Kurt Gödel's 1931 article or in Alfred Tarski's 1936 article. - -Rudolf Carnap (1934) was the first to prove the general self-referential lemma, which says that for any formula F in a theory T satisfying certain conditions, there exists a formula ψ such that ψ ↔ F(°#(ψ)) is provable in T. Carnap's work was phrased in alternative language, as the concept of computable functions was not yet developed in 1934. Mendelson (1997, p. 204) believes that Carnap was the first to state that something like the diagonal lemma was implicit in Gödel's reasoning. Gödel was aware of Carnap's work by 1937. - -The diagonal lemma is closely related to Kleene's recursion theorem in computability theory, and their respective proofs are similar. diff --git a/wiki/wikipedia/2640.txt b/wiki/wikipedia/2640.txt deleted file mode 100644 index afc4700dc4bbfad70a33c92230a2c1bbf08740ca..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2640.txt +++ /dev/null @@ -1,44 +0,0 @@ -The friendship paradox is the phenomenon first observed by the sociologist Scott L. Feld in 1991 that most people have fewer friends than their friends have, on average. It can be explained as a form of sampling bias in which people with more friends are more likely to be in one's own friend group. Or, said another way, one is less likely to be friends with someone who has very few friends. In contradiction to this, most people believe that they have more friends than their friends have. 
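The claim is easy to test empirically before working through the formal argument below. The following Python sketch is our own illustration (an Erdős–Rényi random graph with arbitrary parameters, not Feld's data): it compares the mean degree with the mean degree of a random friend, and counts how many people have fewer friends than their friends do on average.

```
import random

def friendship_paradox_demo(n=1000, p=0.02, seed=0):
    """Build a random graph and compare: the mean degree, the mean degree of
    a random friend (a random endpoint of a random edge), and the share of
    people with fewer friends than their friends' average friend count."""
    rng = random.Random(seed)
    neighbours = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                neighbours[u].add(v)
                neighbours[v].add(u)
    deg = [len(nb) for nb in neighbours]
    edges = [(u, v) for u in range(n) for v in neighbours[u] if u < v]
    mu = sum(deg) / n
    friend_mean = sum(deg[u] + deg[v] for u, v in edges) / (2 * len(edges))
    worse_off = sum(
        1 for u in range(n)
        if deg[u] and deg[u] < sum(deg[v] for v in neighbours[u]) / deg[u]
    )
    print(f"mean degree:            {mu:.2f}")
    print(f"mean degree of friends: {friend_mean:.2f}")  # exceeds mu, per the derivation below
    print(f"{worse_off / n:.0%} of people have fewer friends than their friends do")

friendship_paradox_demo()
```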
- -The same observation can be applied more generally to social networks defined by other relations than friendship: for instance, most people's sexual partners have had (on the average) a greater number of sexual partners than they have. - -The friendship paradox is an example of how network structure can significantly distort an individual's local observations. - -In spite of its apparently paradoxical nature, the phenomenon is real, and can be explained as a consequence of the general mathematical properties of social networks. The mathematics behind this is directly related to the arithmetic-geometric mean inequality and the Cauchy–Schwarz inequality. - -Formally, Feld assumes that a social network is represented by an undirected graph G = (V, E), where the set V of vertices corresponds to the people in the social network, and the set E of edges corresponds to the friendship relation between pairs of people. That is, he assumes that friendship is a symmetric relation: if x is a friend of y, then y is a friend of x. The friendship between x and y is therefore modeled by the edge {x, y}, and the number of friends an individual has corresponds to a vertex's degree. The average number of friends of a person in the social network is therefore given by the average of the degrees of the vertices in the graph. That is, if vertex v has d(v) edges touching it (representing a person who has d(v) friends), then the average number μ of friends of a random person in the graph is -$$ -\mu=\frac{\sum_{v\in V} d(v)}{|V|}=\frac{2|E|}{|V|}. -$$ - -The average number of friends that a typical friend has can be modeled by choosing a random person (who has at least one friend), and then calculating how many friends their friends have on average. This amounts to choosing, uniformly at random, an edge of the graph (representing a pair of friends) and an endpoint of that edge (one of the friends), and again calculating the degree of the selected endpoint. The probability that a certain vertex $v$ is chosen is: -$$ - \frac{d(v)}{|E|}\frac{1}{2} -$$ - -The first factor corresponds to how likely it is that the chosen edge contains the vertex, which increases when the vertex has more friends. The halving factor simply comes from the fact that each edge has two vertices. So the expected value of the number of friends of a (randomly chosen) friend is: -$$ - \sum_{v} \left ( \frac{d(v)}{|E|}\frac{1}{2} \right )d(v) = \frac{\sum_v d(v)^2 }{2|E|} -$$ - -We know from the definition of variance that: -$$ - \frac{\sum_v d(v)^2 }{|V|} = \mu ^2 + \sigma^2 -$$ - -where $\sigma^2$ is the variance of the degrees in the graph. This allows us to compute the desired expected value: -$$ - \frac{\sum_v d(v)^2 }{2|E|} = \frac{|V|}{2|E|} (\mu^2 + \sigma^2) = \frac{\mu^2+\sigma^2}{\mu} = \mu + \frac{\sigma^2}{\mu} -$$ - -since $2|E| = \mu |V|$. For a graph that has vertices of varying degrees (as is typical for social networks), $ {\sigma}^{2} $ is strictly positive, which implies that the average degree of a friend is strictly greater than the average degree of a random node. - -Another way of understanding where the numerator comes from is as follows. For each friendship (u, v), a node u mentions that v is a friend and v has d(v) friends. There are d(v) such friends who mention this. Hence the $d(v)^2$ term. We add this over all such friendships in the network, from both the u's and the v's perspective, which gives the numerator. 

The denominator is the total number of such mentions, which is twice the number of edges in the network (one mention from u's perspective and one from v's). - -After this analysis, Feld goes on to make some more qualitative assumptions about the statistical correlation between the number of friends that two friends have, based on theories of social networks such as assortative mixing, and he analyzes what these assumptions imply about the number of people whose friends have more friends than they do. Based on this analysis, he concludes that in real social networks, most people are likely to have fewer friends than the average of their friends' numbers of friends. However, this conclusion is not a mathematical certainty; there exist undirected graphs (such as the graph formed by removing a single edge from a large complete graph) that are unlikely to arise as social networks but in which most vertices have higher degree than the average of their neighbors' degrees. - -The analysis of the friendship paradox implies that the friends of randomly selected individuals are likely to have higher than average centrality. This observation has been used as a way to forecast and slow the course of epidemics, by using this random selection process to choose individuals to immunize or monitor for infection while avoiding the need for a complex computation of the centrality of all nodes in the network. - -A study in 2010 by Christakis and Fowler showed that flu outbreaks can be detected almost two weeks before traditional surveillance measures would do so by using the friendship paradox to monitor the infection in a social network. They found that using the friendship paradox to analyze the health of central friends is "an ideal way to predict outbreaks, but detailed information doesn't exist for most groups, and to produce it would be time-consuming and costly." - -The "generalized friendship paradox" states that the friendship paradox applies to other characteristics as well. For example, one's co-authors are on average likely to be more prominent, with more publications, more citations and more collaborators, and one's followers on Twitter have more followers. The same effect has also been demonstrated for subjective well-being by Bollen et al. (2017), who used a large-scale Twitter network and longitudinal data on subjective well-being for each individual in the network to demonstrate that both a friendship and a "happiness" paradox can occur in online social networks. diff --git a/wiki/wikipedia/2641.txt b/wiki/wikipedia/2641.txt deleted file mode 100644 index afc4700dc4bbfad70a33c92230a2c1bbf08740ca..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2641.txt +++ /dev/null @@ -1,139 +0,0 @@ -This page lists notable examples of incomplete published mathematical proofs. Most of these were accepted as correct for several years but later discovered to contain gaps. There are both examples where a complete proof was later found and where the alleged result turned out to be false. - -This section lists examples of proofs that were published and accepted as complete before a gap or error was found in them. It does not include any of the many incomplete attempted solutions by amateurs of famous problems such as Fermat's Last Theorem or squaring the circle. It also does not include unpublished preprints that were withdrawn because an error was found before publication. - -The examples are arranged roughly in order of the publication date of the incomplete proof. 

Several of the examples on the list were taken from answers to questions on the MathOverflow site, listed in the External links below. - -* Euclid's Elements. Euclid's proofs are essentially correct, but strictly speaking sometimes contain gaps because he tacitly uses some unstated assumptions, such as the existence of intersection points. In 1899 David Hilbert gave a complete set of (second order) axioms for Euclidean geometry, called Hilbert's axioms, and between 1926 and 1959 Tarski gave some complete sets of first order axioms, called Tarski's axioms. - -* Isoperimetric inequality. For three dimensions it states that the shape enclosing the maximum volume for its surface area is the sphere. It was formulated by Archimedes but not proved rigorously until the 19th century, by Hermann Schwarz. - -* Infinitesimals. In the 18th century there was widespread use of infinitesimals in calculus, though these were not really well defined. Calculus was put on firm foundations in the 19th century, and Robinson put infinitesimals on a rigorous basis with the introduction of nonstandard analysis in the 20th century. - -* Fundamental theorem of algebra (see History). Many incomplete or incorrect attempts were made at proving this theorem in the 18th century, including by d'Alembert (1746), Euler (1749), de Foncenex (1759), Lagrange (1772), Laplace (1795), Wood (1798), and Gauss (1799). The first rigorous proof was published by Argand in 1806. - -* In 1759 Euler claimed that there were no closed knight's tours on a chess board with 3 rows, but in 1917 Ernest Bergholt found tours on 3 by 10 and 3 by 12 boards. - -* Euler's conjecture on Graeco-Latin squares. In the 1780s Euler conjectured that no such squares exist for any oddly even number n ≡ 2 (mod 4). In 1959, R. C. Bose and S. S. Shrikhande constructed counterexamples of order 22. Then E. T. Parker found a counterexample of order 10 using a one-hour computer search. Finally Parker, Bose, and Shrikhande showed this conjecture to be false for all n ≥ 10. - -* In 1798 A. M. Legendre claimed that 6 is not the sum of two rational cubes, which as Lamé pointed out in 1865 is false, as $6 = (37/21)^3 + (17/21)^3$. - -* In 1803, Gian Francesco Malfatti claimed to prove that a certain arrangement of three circles would cover the maximum possible area inside a right triangle. However, to do so he made certain unwarranted assumptions about the configuration of the circles. It was shown in 1930 that circles in a different configuration could cover a greater area, and in 1967 that Malfatti's configuration was never optimal. See Malfatti circles. - -* In 1806 André-Marie Ampère claimed to prove that a continuous function is differentiable at most points (though it is not entirely clear what he was claiming as he did not give a precise definition of a function). However, in 1872 Weierstrass gave an example of a continuous function that was not differentiable anywhere: the Weierstrass function. - -* Dirichlet's theorem on arithmetic progressions. In 1808 Legendre published an attempt at a proof of Dirichlet's theorem, but as Dupré pointed out in 1859 one of the lemmas used by Legendre is false. Dirichlet gave a complete proof in 1837. - -* Uniform convergence. In his Cours d'Analyse of 1821, Cauchy "proved" that if a sum of continuous functions converges pointwise, then its limit is also continuous. However, Abel observed three years later that this is not the case. 

For the conclusion to hold, "pointwise convergence" must be replaced with "uniform convergence". It is not entirely clear whether Cauchy's original result was wrong, because his definition of pointwise convergence was a little vague and may have been stronger than the one currently in use, and there are ways to interpret his result so that it is correct. There are many counterexamples using the standard definition of pointwise convergence. For example, a Fourier series of sine and cosine functions, all continuous, may converge pointwise to a discontinuous function such as a step function. - -* Intersection theory. In 1848 Steiner claimed that the number of conics tangent to 5 given conics is $7776 = 6^5$, but later realized this was wrong. The correct number 3264 was found by Berner in 1865 and by Ernest de Jonquieres around 1859 and by Chasles in 1864 using his theory of characteristics. However these results, like many others in classical intersection theory, do not seem to have been given complete proofs until the work of Fulton and MacPherson in about 1978. - -* Dirichlet's principle. This was used by Riemann in 1851, but Weierstrass found a counterexample to one version of this principle in 1870, and Hilbert stated and proved a correct version in 1900. - -* The proofs of the Kronecker–Weber theorem by Kronecker (1853) and Weber (1886) both had gaps. The first complete proof was given by Hilbert in 1896. - -* Cayley incorrectly claimed that there are three different groups of order 6. This mistake is strange because in an earlier 1854 paper he correctly stated that there are just two such groups. - -* In 1879, Alfred Kempe published a purported proof of the four color theorem, whose validity as a proof was accepted for eleven years before it was refuted by Percy Heawood. Peter Guthrie Tait gave another incorrect proof in 1880 which was shown to be incorrect by Julius Petersen in 1891. Kempe's proof did, however, suffice to show the weaker five color theorem. The four-color theorem was eventually proved by Kenneth Appel and Wolfgang Haken in 1976. - -* Frege's foundations of mathematics in his 1893 book Grundgesetze der Arithmetik turned out to be inconsistent because of Russell's paradox, found in 1901. - -* In 1885, Evgraf Fedorov classified the convex polyhedra with congruent rhombic faces, but missed a case. Stanko Bilinski in 1960 rediscovered the Bilinski dodecahedron (forgotten after its previous 1752 publication) and proved that, with the addition of this shape, the classification was complete. - -* Schröder–Bernstein theorem. In 1896 Schröder published a proof sketch which, however, was shown to be faulty by Alwin Reinhold Korselt in 1911 (confirmed by Schröder). - -* Jordan curve theorem. There has been some controversy about whether Jordan's original proof of this in 1887 contains gaps. Oswald Veblen in 1905 claimed that Jordan's proof is incomplete, but in 2007 Hales said that the gaps are minor and that Jordan's proof is essentially complete. - -* Wronskians. In 1887 Mansion claimed in his textbook that if a Wronskian of some functions vanishes everywhere then the functions are linearly dependent. In 1889 Peano pointed out the counterexample $x^2$ and $x|x|$. The result is correct if the functions are analytic. - -* Vahlen published a purported example of an algebraic curve in 3-dimensional projective space that could not be defined as the zeros of 3 polynomials, but in 1941 Perron found 3 equations defining Vahlen's curve. 

In 1961 Kneser showed that any algebraic curve in projective 3-space can be given as the zeros of 3 polynomials. - -* In 1898 Miller published a paper incorrectly claiming to prove that the Mathieu group M24 does not exist, though in 1900 he pointed out that his proof was wrong. - -* Little claimed in 1900 that the writhe of a reduced knot diagram is an invariant. However, in 1974 Perko discovered a counterexample called the Perko pair, a pair of knots listed as distinct in tables for many years that are in fact the same. - -* In 1905 Lebesgue tried to prove the (correct) result that a function implicitly defined by a Baire function is Baire, but his proof incorrectly assumed that the projection of a Borel set is Borel. Suslin pointed out the error and was inspired by it to define analytic sets as continuous images of Borel sets. - -* Carmichael's totient function conjecture was stated as a theorem by Robert Daniel Carmichael in 1907, but in 1922 he pointed out that his proof was incomplete. As of 2016 the problem is still open. - -* Hilbert's twenty-first problem. In 1908 Plemelj claimed to have shown the existence of Fuchsian differential equations with any given monodromy group, but in 1989 Bolibruch discovered a counterexample. - -* Dehn's lemma. Dehn published an attempted proof in 1910, but Kneser found a gap in 1929. It was finally proved in 1956 by Christos Papakyriakopoulos. - -* Italian school of algebraic geometry. Most gaps in proofs are caused either by a subtle technical oversight, or before the 20th century by a lack of precise definitions. A major exception to this is the Italian school of algebraic geometry in the first half of the 20th century, where lower standards of rigor gradually became acceptable. The result was that there are many papers in this area where the proofs are incomplete, or the theorems are not stated precisely. This list contains a few representative examples, where the result was not just incompletely proved but also hopelessly wrong. - -* Hilbert's sixteenth problem about the finiteness of the number of limit cycles of a plane polynomial vector field. Henri Dulac published a partial solution to this problem in 1923, but in about 1980 Écalle and Ilyashenko independently found a serious gap, and fixed it in about 1991. - -* In 1925 Ackermann published a proof that a weak system can prove the consistency of a version of analysis, but von Neumann found an explicit mistake in it a few years later. Gödel's incompleteness theorems showed that it is not possible to prove the consistency of analysis using weaker systems. - -* In 1929 Lazar Lyusternik and Lev Schnirelmann published a proof of the theorem of the three geodesics, which was later found to be flawed. The proof was completed by Werner Ballmann about 50 years later. - -* Groups of order 64. In 1930 Miller published a paper claiming that there are 294 groups of order 64. Hall and Senior showed in 1964 that the correct number is 267. - -* Church's original published attempt in 1932 to define a formal system was inconsistent, as was his correction in 1933. The consistent part of his system later became the lambda calculus. - -* Kurt Gödel proved in 1933 that the truth of a certain class of sentences of first-order arithmetic, known in the literature as [*2*, all, (0)], was decidable. That is, there was a method for deciding correctly whether any statement of that form was true. 
In the final sentence of that paper, he asserted that the same proof would work for the decidability of the larger class [*2*, all, (0)]=, which also includes formulas that contain an equality predicate. However, in the mid-1960s, Stål Aanderaa showed that Gödel's proof would not go through for the larger class, and in 1982 Warren Goldfarb showed that validity of formulas from the larger class was in fact undecidable. - -* Grunwald–Wang theorem. Wilhelm Grunwald published an incorrect proof in 1933 of an incorrect theorem, and George Whaples later published another incorrect proof. Shianghao Wang found a counterexample in 1948 and published a corrected version of the theorem in 1950. - -* In 1933 George David Birkhoff and Waldemar Joseph Trjitzinsky published a very general theorem on the asymptotics of sequences satisfying linear recurrences. The theorem was popularized by Jet Wimp and Doron Zeilberger in 1985. However, while the result is probably true, as of now (2021) Birkhoff and Trjitzinsky's proof is not generally accepted by experts, and the theorem is accepted as proved only in special cases. - -* In 1934 Severi claimed that the space of rational equivalence classes of cycles on an algebraic surface is finite-dimensional, but Mumford showed that this is false for surfaces of positive geometric genus. - -* Littlewood–Richardson rule. Robinson published an incomplete proof in 1938, though the gaps were not noticed for many years. The first complete proofs were given by Marcel-Paul Schützenberger in 1977 and Thomas in 1974. - -* Jacobian conjecture. Keller asked this as a question in 1939, and in the next few years there were several published incomplete proofs, including 3 by B. Segre, but Vitushkin found gaps in many of them. The Jacobian conjecture is (as of 2016) an open problem, and more incomplete proofs are regularly announced; several authors have discussed the errors in some of these incomplete proofs. - -* Quine published his original description of the system Mathematical Logic in 1940, but in 1942 Rosser showed it was inconsistent. Wang found a correction in 1950; the consistency of this revised system is still unclear. - -* One of many examples from algebraic geometry in the first half of the 20th century: Severi claimed that a degree-n surface in 3-dimensional projective space has at most $\binom{n+2}{3}-4$ nodes, but B. Segre pointed out that this was wrong; for example, for degree 6 the maximum number of nodes is 65, achieved by the Barth sextic, which is more than the maximum of 52 claimed by Severi. - -* Rokhlin invariant. Rokhlin incorrectly claimed in 1951 that the third stable stem of the homotopy groups of spheres is of order 12. In 1952 he discovered his error: it is in fact cyclic of order 24. The difference is crucial as it results in the existence of the Rokhlin invariant, a fundamental tool in the theory of 3- and 4-dimensional manifolds. - -* Class numbers of imaginary quadratic fields. In 1952 Heegner published a solution to this problem. His paper was not accepted as a complete proof as it contained a gap, and the first complete proofs were given in about 1967 by Baker and Stark. In 1969 Stark showed how to fill the gap in Heegner's paper. - -* A strengthening of Hilbert's sixteenth problem asking whether there exists a uniform finite upper bound for the number of limit cycles of planar polynomial vector fields of given degree n. In the 1950s, Evgenii Landis and Ivan Petrovsky published a purported solution, but it was shown wrong in the early 1960s. 

- -* In 1954 Zarankiewicz claimed to have solved Turán's brick factory problem about the crossing number of complete bipartite graphs, but Kainen and Ringel later noticed a gap in his proof. - -* In 1954 Igor Shafarevich published a proof that every finite solvable group is a Galois group over the rationals. However, Schmidt pointed out a gap in the argument at the prime 2, which Shafarevich fixed in 1989. - -* Nielsen realization problem. Kravetz claimed to solve this in 1959 by first showing that Teichmüller space is negatively curved, but in 1974 Masur showed that it is not negatively curved. The Nielsen realization problem was finally solved in 1980 by Kerckhoff. - -* Yamabe problem. Yamabe claimed a solution in 1960, but Trudinger discovered a gap in 1968, and a complete proof was not given until 1984. - -* In 1961, Jan-Erik Roos published an incorrect theorem about the vanishing of the first derived functor of the inverse limit functor under certain general conditions. However, in 2002, Amnon Neeman constructed a counterexample. Roos showed in 2006 that the theorem holds if one adds the assumption that the category has a set of generators. - -* Mordell conjecture over function fields. Manin published a proof in 1963, but Coleman found and corrected a gap in the proof. - -* The Schur multiplier of the Mathieu group M22 is particularly notorious as it was miscalculated more than once: Burgoyne first claimed it had order 3, then in a 1968 correction claimed it had order 6; its order is in fact (currently believed to be) 12. This caused an error in the title of Janko's paper A new finite simple group of order 86,775,571,046,077,562,880 which possesses M24 and the full covering group of M22 as subgroups on J4: it does not have the full covering group as a subgroup, as the full covering group is larger than was realized at the time. - -* The original statement of the classification of N-groups by Thompson in 1968 accidentally omitted the Tits group, though he soon fixed this. - -* In 1967 Reinhardt proposed Reinhardt cardinals, which Kunen showed to be inconsistent with ZFC in 1971, though they are not known to be inconsistent with ZF. - -* Complex structures on the 6-sphere. In 1969 Alfred Adler published a paper in the American Journal of Mathematics claiming that the 6-sphere has no complex structure. His argument was incomplete, and this is (as of 2016) still a major open problem. - -* Per Martin-Löf's original version of intuitionistic type theory proposed in 1971 was shown to be inconsistent by Jean-Yves Girard in 1972, and was replaced by a corrected version. - -* In 1973 Britton published a 282-page attempted solution of Burnside's problem. In his proof he assumed the existence of a set of parameters satisfying some inequalities, but Adian pointed out that these inequalities were inconsistent. Novikov and Adian had previously found a correct solution around 1968. - -* In 1975, Leitzel, Madan, and Queen incorrectly claimed that there are only 7 function fields over finite fields with genus > 0 and class number 1, but in 2013 Stirpe found another; there are in fact exactly 8. - -* Closed geodesics. In 1978 Wilhelm Klingenberg published a proof that smooth compact manifolds without boundary have infinitely many closed geodesics. His proof was controversial, and there is currently (as of 2016) no consensus on whether his proof is complete. - -* Classification of finite simple groups. 

In 1983, Gorenstein announced that the proof of the classification had been completed, but he had been misinformed about the status of the proof of the classification of quasithin groups, which had a serious gap in it. A complete proof for this case was published by Aschbacher and Smith in 2004. - -* Telescope conjecture. Ravenel announced a refutation of this in 1992, but later withdrew it, and the conjecture is still open. - -* Kepler conjecture. Hsiang published an incomplete proof of this in 1993. In 1998 Hales published a proof depending on long computer calculations. - -* Busemann–Petty problem. Zhang published two papers in the Annals of Mathematics in 1994 and 1999, in the first of which he proved that the Busemann–Petty problem in $\mathbb{R}^4$ has a negative solution, and in the second of which he proved that it has a positive solution. - -* Algebraic stacks. The book by Laumon and Moret-Bailly on algebraic stacks mistakenly claimed that morphisms of algebraic stacks induce morphisms of lisse-étale topoi. The results depending on this were repaired by Olsson. - -* Matroid bundles. In 2003 Daniel Biss published a paper in the Annals of Mathematics claiming to show that matroid bundles are equivalent to real vector bundles, but in 2009 published a correction pointing out a serious gap in the proof. His correction was based on a 2007 paper by Mnëv. - -* In 2012, the Japanese mathematician Shinichi Mochizuki released online a series of papers in which he claims to prove the abc conjecture. Despite its later publication in a peer-reviewed journal, his proof has not been accepted as correct by the mainstream mathematical community. - -Lecat compiled a list, over a hundred pages long, of (mostly rather trivial) published errors made by mathematicians. diff --git a/wiki/wikipedia/2642.txt b/wiki/wikipedia/2642.txt deleted file mode 100644 index 36a1efbb2ebc5cdbd96b0a8acb433a3eb14b13d6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2642.txt +++ /dev/null @@ -1,9 +0,0 @@ -Superstabilization is a concept of fault-tolerance in distributed computing. Superstabilizing distributed algorithms combine the features of self-stabilizing algorithms and dynamic algorithms. A superstabilizing algorithm – just like any other self-stabilizing algorithm – can be started in an arbitrary state, and it will eventually converge to a legitimate state. Additionally, a superstabilizing algorithm will recover rapidly from a single change in the network topology (adding or removing one edge or node in the network). - -Any self-stabilizing algorithm recovers from a change in the network topology – the system configuration after a topology change can be treated just like any other arbitrary starting configuration. However, in a self-stabilizing algorithm, the convergence after a single change in the network topology may be as slow as the convergence from an arbitrary starting state. In the study of superstabilizing algorithms, special attention is paid to the time it takes to recover from a single change in the network topology. - -The stabilization time of a superstabilizing algorithm is defined exactly as in the case of a self-stabilizing algorithm: how long it takes to converge to a legitimate state from an arbitrary configuration. Depending on the computational model, time is measured, e.g., in synchronous communication rounds or in asynchronous cycles. - -The superstabilization time is the time to recover from a single topology change. It is assumed that the system is initially in a legitimate configuration. 

Then the network topology is changed; the superstabilization time is the maximum time it takes for the system to reach a legitimate configuration again. Similarly, the adjustment measure is the maximum number of nodes that have to change their state after such a change. - -The "almost-legitimate configurations" which occur after one topology change can be formally modelled by using passage predicates: a passage predicate is a predicate that holds after a single change in the network topology, and also during the convergence to a legitimate configuration. diff --git a/wiki/wikipedia/2643.txt b/wiki/wikipedia/2643.txt deleted file mode 100644 index 393e230d4e9be197f538192c73609eddf03b3027..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2643.txt +++ /dev/null @@ -1,17 +0,0 @@ -The Reeh–Schlieder theorem is a result in relativistic local quantum field theory published by Helmut Reeh and Siegfried Schlieder (1918–2003) in 1961. - -The theorem states that the vacuum state $\vert \Omega \rangle$ is a cyclic vector for the field algebra $\mathcal{A}(\mathcal{O})$ corresponding to any open set $\mathcal{O}$ in Minkowski space. That is, any state $\vert \psi \rangle$ can be approximated to arbitrary precision by acting on the vacuum with an operator selected from the local algebra, even for $\vert \psi \rangle$ that contain excitations arbitrarily far away in space. In this sense, states created by applying elements of the local algebra to the vacuum state are not localized to the region $\mathcal{O}$. - -For practical purposes, however, local operators still generate quasi-local states. More precisely, the long range effects of the operators of the - -local algebra will diminish rapidly with distance, as seen by the cluster properties of the Wightman functions. And with increasing distance, creating a unit vector localized outside the region requires operators of ever increasing - -operator norm. - -This theorem is also cited in connection with quantum entanglement. But it is subject to some doubt whether - -the Reeh–Schlieder theorem can usefully be seen as the quantum field theory analog of quantum entanglement, since the - -exponentially-increasing energy needed for long range actions will prohibit any macroscopic effects. However, B. Reznik showed that vacuum entanglement can be distilled into EPR pairs used in quantum information tasks. - -It is known that the Reeh–Schlieder property applies not just to the vacuum but in fact to any state with bounded energy. If some finite number $N$ of space-like separated regions is chosen, the multipartite entanglement can be analyzed in the typical quantum information setting of $N$ abstract quantum systems, each with a Hilbert space possessing a countable basis, and the corresponding structure has been called superentanglement. diff --git a/wiki/wikipedia/2644.txt b/wiki/wikipedia/2644.txt deleted file mode 100644 index 3f42bd2ab41c81a88bbc956f85fe6894a6d8c133..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2644.txt +++ /dev/null @@ -1,31 +0,0 @@ -Rippling refers to a group of meta-level heuristics, developed primarily in the Mathematical Reasoning Group in the School of Informatics at the University of Edinburgh, and most commonly used to guide inductive proofs in automated theorem proving systems. 

Rippling may be viewed as a restricted form of rewrite system, where special object level annotations are used to ensure fertilization upon the completion of rewriting, with a measure decreasing requirement ensuring termination for any set of rewrite rules and expression. - -Raymond Aubin was the first person to use the term "rippling out" whilst working on his 1976 PhD thesis at the University of Edinburgh. He recognised a common pattern of movement during the rewriting stage of inductive proofs. Alan Bundy later turned this concept on its head by defining rippling to be this pattern of movement, rather than a side effect. - -Since then, "rippling sideways", "rippling in" and "rippling past" were coined, so the term was generalised to rippling. Rippling continues to be developed at Edinburgh, and elsewhere, as of 2007. - -Rippling has been applied to many problems traditionally viewed as being hard in the inductive theorem proving community, including Bledsoe's limit theorems and a proof of the Gordon microprocessor, a miniature computer developed by Michael J. C. Gordon and his team at Cambridge. - -Very often, when attempting to prove a proposition, we are given a source expression and a target expression, which differ only by the inclusion of a few extra syntactic elements. - -This is especially true in inductive proofs, where the given expression is taken to be the inductive hypothesis, and the target expression the inductive conclusion. Usually, the differences between the hypothesis and conclusion are only minor, perhaps the inclusion of a successor function (e.g., +1) around the induction variable. - -At the start of rippling the differences between the two expressions, known as wave-fronts in rippling parlance, are identified. Typically these differences prevent the completion of the proof and need to be "moved away". The target expression is annotated to distinguish the wavefronts (differences) and skeleton (common structure) between the two expressions. Special rules, called wave rules, can then be used in a terminating fashion to manipulate the target expression until the source expression can be used to complete the proof. - -We aim to show that the addition of natural numbers is commutative. This is an elementary property, and the proof is by routine induction. Nevertheless, the search space for finding such a proof may become quite large. - -Typically, the base case of any inductive proof is solved by methods other than rippling. For this reason, we will concentrate on the step case. - -Our step case takes the following form, where we have chosen to use x as the induction variable: - -We may also possess several rewrite rules, drawn from lemmas, inductive definitions or elsewhere, that can be used to form wave-rules. - -Suppose we have the following three rewrite rules: - -then these can be annotated, to form: - -Note that all these annotated rules preserve the skeleton (x + y = y + x, in the first case and x + y in the second/third). Now, annotating the inductive step case, gives us: - -And we are all set to perform rippling: - -Note that the final rewrite causes all wave-fronts to disappear, and we may now apply fertilization, the application of the inductive hypotheses, to complete the proof. 
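For comparison (an illustration added here, not part of the original article), the same commutativity proof can be written in a proof assistant. In the following self-contained Lean 4 sketch, rewriting with the defining equations plays the role of the wave rules that move the successor outwards, and the final use of the inductive hypothesis is the fertilization step; the type `N` and all names are illustrative.

```lean
-- A small Peano development, so that the wave rules are exactly the
-- defining equations of addition.
inductive N where
  | zero : N
  | succ : N → N

def add : N → N → N
  | N.zero,   y => y
  | N.succ x, y => N.succ (add x y)   -- wave rule: s(x) + y = s(x + y)

theorem add_zero (x : N) : add x N.zero = x := by
  induction x with
  | zero => rfl
  | succ n ih => simp [add, ih]

theorem add_succ (x y : N) : add x (N.succ y) = N.succ (add x y) := by
  induction x with
  | zero => rfl
  | succ n ih => simp [add, ih]

theorem add_comm (x y : N) : add x y = add y x := by
  induction x with
  | zero => simp [add, add_zero]
  | succ n ih =>
    -- Ripple the successor outwards on both sides, then fertilize
    -- with the inductive hypothesis ih : add n y = add y n.
    simp [add, add_succ, ih]
```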
diff --git a/wiki/wikipedia/2645.txt b/wiki/wikipedia/2645.txt deleted file mode 100644 index 2a95f6c1f6a7b4f308375aab75d5a5d82ddf8da9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2645.txt +++ /dev/null @@ -1,9 +0,0 @@ -Misuse of p-values is common in scientific research and scientific education. p-values are often used or interpreted incorrectly; the American Statistical Association states that p-values can indicate how incompatible the data are with a specified statistical model. - -#The p-value is not the probability that the null hypothesis is true, or the probability that the alternative hypothesis is false. - -#The p-value does not indicate the size or importance of the observed effect. - -The following example illustrates the danger of conducting multiple comparisons without adjustment. After failing to find a significant (p < 0.05) correlation between eating jellybeans and acne, scientists investigate 20 different colors of jellybeans individually, without adjusting for multiple comparisons. They find one color (green) nominally associated with acne (p < 0.05). The results are then reported by a newspaper as indicating that green jellybeans are linked to acne at a 95% confidence level—as if green were the only color tested. In fact, if 20 independent tests are conducted at the 0.05 significance level and all null hypotheses are true, there is a 64.2% chance of obtaining at least one false positive and the expected number of false positives is 1 (i.e. 0.05 × 20). - -In general, the family-wise error rate (FWER), the probability of obtaining at least one false positive, increases with the number of tests performed. The FWER when all null hypotheses are true for $m$ independent tests, each conducted at significance level $\alpha$, is: -$$ -\text{FWER}=1 - (1-\alpha)^m -$$ diff --git a/wiki/wikipedia/2646.txt b/wiki/wikipedia/2646.txt deleted file mode 100644 index 359347342be301bf7f470dc31a432cf38d44816f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2646.txt +++ /dev/null @@ -1,5 +0,0 @@ -The equioscillation theorem concerns the approximation of continuous functions using polynomials when the merit function is the maximum difference (uniform norm). Its discovery is attributed to Chebyshev. - -Let $f$ be a continuous function from $[a,b]$ to $\mathbb{R}$. Among all the polynomials of degree $\le n$, the polynomial $g$ minimizes the uniform norm of the difference $ \| f - g \| _\infty $ if and only if there are $n+2$ points $a \le x_0 < x_1 < \cdots < x_{n+1} \le b$ such that $f(x_i) - g(x_i) = \sigma (-1)^i \| f - g \|_\infty$ where $\sigma$ is either -1 or +1. - -Several minimax approximation algorithms are available, the most common being the Remez algorithm. diff --git a/wiki/wikipedia/2647.txt b/wiki/wikipedia/2647.txt deleted file mode 100644 index 33e77b47871033dd634d34383478ff4eae4c8cd6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2647.txt +++ /dev/null @@ -1,5 +0,0 @@ -A COMMIT statement in SQL ends a transaction within a relational database management system (RDBMS) and makes all changes visible to other users. The general format is to issue a BEGIN WORK statement, one or more SQL statements, and then the COMMIT statement. A COMMIT statement will also release any existing savepoints that may be in use. This means that once a COMMIT statement is issued, you cannot roll back the transaction. - -In terms of transactions, the opposite of commit is to discard the tentative changes of a transaction, a rollback. - -The transaction, commit and rollback concepts are key to the ACID property of databases. 

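As a minimal runnable sketch of this flow (an illustration added here, not part of the original article): Python's standard-library sqlite3 module stands in for a full RDBMS, and note that SQLite spells the opening statement BEGIN rather than BEGIN WORK.

```python
# BEGIN ... COMMIT makes changes durable; BEGIN ... ROLLBACK discards them.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None          # manage transactions by hand
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")

cur.execute("BEGIN")                 # start the transaction
cur.execute("INSERT INTO accounts VALUES ('alice', 100)")
cur.execute("INSERT INTO accounts VALUES ('bob', 50)")
cur.execute("COMMIT")                # make both inserts visible and durable

cur.execute("BEGIN")
cur.execute("UPDATE accounts SET balance = balance - 500 WHERE name = 'alice'")
cur.execute("ROLLBACK")              # discard the tentative change instead

print(cur.execute("SELECT * FROM accounts").fetchall())
# [('alice', 100), ('bob', 50)] -- the rolled-back update left no trace
```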
diff --git a/wiki/wikipedia/2648.txt b/wiki/wikipedia/2648.txt deleted file mode 100644 index f2027437a1c2cb545a73210fa6027a0d3a2f8900..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2648.txt +++ /dev/null @@ -1,15 +0,0 @@ -In mathematics, the well-ordering principle states that every non-empty set of positive integers contains a least element. In other words, the set of positive integers is well-ordered by its "natural" or "magnitude" order in which $x$ precedes $y$ if and only if $y$ is either $x$ or the sum of $x$ and some positive integer (other orderings include the ordering $2, 4, 6, ...$; and $1, 3, 5, ...$). - -The phrase "well-ordering principle" is sometimes taken to be synonymous with the "well-ordering theorem". On other occasions it is understood to be the proposition that the set of integers $\{\ldots, -2, -1, 0, 1, 2, 3, \ldots \}$ contains a well-ordered subset, called the natural numbers, in which every nonempty subset contains a least element. - -Depending on the framework in which the natural numbers are introduced, this (second order) property of the set of natural numbers is either an axiom or a provable theorem. For example: - -* In Peano arithmetic, second-order arithmetic and related systems, and indeed in most (not necessarily formal) mathematical treatments of the well-ordering principle, the principle is derived from the principle of mathematical induction, which is itself taken as basic. - -* Considering the natural numbers as a subset of the real numbers, and assuming that we know already that the real numbers are complete (again, either as an axiom or a theorem about the real number system), i.e., every bounded (from below) set has an infimum, then also every set $A$ of natural numbers has an infimum, say $a^*$. We can now find an integer $n^*$ such that $a^*$ lies in the half-open interval $(n^*-1,n^*]$, and can then show that we must have $a^* = n^*$, and $n^*$ in $A$. - -* In axiomatic set theory, the natural numbers are defined as the smallest inductive set (i.e., set containing 0 and closed under the successor operation). One can (even without invoking the regularity axiom) show that the set of all natural numbers $n$ such that "$\{0,\ldots,n\}$ is well-ordered" is inductive, and must therefore contain all natural numbers; from this property one can conclude that the set of all natural numbers is also well-ordered. - -In the second sense, this phrase is used when that proposition is relied on for the purpose of justifying proofs that take the following form: to prove that every natural number belongs to a specified set $S$, assume the contrary, which implies that the set of counterexamples is non-empty and thus contains a smallest counterexample. Then show that for any counterexample there is a still smaller counterexample, producing a contradiction. This mode of argument is the contrapositive of proof by complete induction. It is known light-heartedly as the "minimal criminal" method and is similar in its nature to Fermat's method of "infinite descent". - -Garrett Birkhoff and Saunders Mac Lane wrote in A Survey of Modern Algebra that this property, like the least upper bound axiom for real numbers, is non-algebraic; i.e., it cannot be deduced from the algebraic properties of the integers (which form an ordered integral domain). 
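Computationally, the well-ordering principle is what guarantees that the following naive search terminates whenever its predicate holds of at least one natural number (an illustrative Python sketch added here, not part of the original article; it only applies to decidable predicates):

```python
def least(pred):
    """Return the least natural number n with pred(n) true.

    Termination for a nonempty set of naturals is exactly the
    well-ordering principle; on an empty set this loops forever.
    """
    n = 0
    while not pred(n):
        n += 1
    return n

# Least element of the set { n : n^2 > 1000 }.
print(least(lambda n: n * n > 1000))  # prints 32
```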
diff --git a/wiki/wikipedia/2649.txt b/wiki/wikipedia/2649.txt deleted file mode 100644 index d551fcf20a19ab3ea904708a77a9b2580a3456bd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2649.txt +++ /dev/null @@ -1,21 +0,0 @@ -In mathematics, Niven's theorem, named after Ivan Niven, states that the only rational values of θ in the interval 0° ≤ θ ≤ 90° for which the sine of θ degrees is also a rational number are: -$$ - -\begin{align} - -\sin 0^\circ & = 0, \\[10pt] - -\sin 30^\circ & = \frac 12, \\[10pt] - -\sin 90^\circ & = 1. - -\end{align} - -$$ - -In radians, one would require that $0 \le x \le \pi/2$, that $x/\pi$ be rational, and that $\sin x$ be rational. The conclusion is then that the only such values are $\sin 0 = 0$, $\sin \pi/6 = 1/2$, and $\sin \pi/2 = 1$. - -The theorem appears as Corollary 3.12 in Niven's book on irrational numbers. - -The theorem extends to the other trigonometric functions as well. For rational values of θ, the only rational values of the sine or cosine are 0, ±1/2, and ±1; the only rational values of the secant or cosecant are ±1 and ±2; and the only rational values of the tangent or cotangent are 0 and ±1. diff --git a/wiki/wikipedia/265.txt b/wiki/wikipedia/265.txt deleted file mode 100644 index 250615822954e2f22e779f1d9e0c343db74e5591..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/265.txt +++ /dev/null @@ -1,47 +0,0 @@ -The cut-elimination theorem (or Gentzen's Hauptsatz) is the central result establishing the significance of the sequent calculus. It was originally proved by Gerhard Gentzen in his landmark 1934 paper "Investigations in Logical Deduction" for the systems LJ and LK formalising intuitionistic and classical logic respectively. The cut-elimination theorem states that any judgement that possesses a proof in the sequent calculus making use of the cut rule also possesses a cut-free proof, that is, a proof that does not make use of the cut rule. - -A sequent is a logical expression relating multiple formulas, in the form "$A_1, A_2, A_3, \ldots \vdash B_1, B_2, B_3, \ldots$", which is to be read as "$A_1, A_2, A_3, \ldots$ proves $B_1, B_2, B_3, \ldots$", and (as glossed by Gentzen) should be understood as equivalent to the truth-function "If ($A_1$ and $A_2$ and $A_3$ …) then ($B_1$ or $B_2$ or $B_3$ …)." Note that the left-hand side (LHS) is a conjunction (and) and the right-hand side (RHS) is a disjunction (or). - -The LHS may have arbitrarily many or few formulae; when the LHS is empty, the RHS is a tautology. In LK, the RHS may also have any number of formulae—if it has none, the LHS is a contradiction, whereas in LJ the RHS may only have one formula or none: here we see that allowing more than one formula in the RHS is equivalent, in the presence of the right contraction rule, to the admissibility of the law of the excluded middle. However, the sequent calculus is a fairly expressive framework, and there have been sequent calculi for intuitionistic logic proposed that allow many formulae in the RHS. From Jean-Yves Girard's logic LC it is easy to obtain a rather natural formalisation of classical logic where the RHS contains at most one formula; it is the interplay of the logical and structural rules that is the key here. - -"Cut" is a rule in the normal statement of the sequent calculus, and equivalent to a variety of rules in other proof theories, which, given
-$$ -\Gamma \vdash A,\Delta -$$ - -and -$$ -\Pi, A \vdash \Lambda -$$ - -allows one to infer -$$ -\Gamma, \Pi \vdash \Delta,\Lambda -$$ - -That is, it "cuts" the occurrences of the formula $A$ out of the inferential relation. - -The cut-elimination theorem states that (for a given system) any sequent provable using the rule Cut can be proved without use of this rule. - -For sequent calculi that have only one formula in the RHS, the "Cut" rule reads, given -$$ -\Gamma \vdash A -$$ - -and -$$ -\Pi, A \vdash B -$$ - -allows one to infer -$$ -\Gamma, \Pi \vdash B -$$ - -If we think of $B$ as a theorem, then cut-elimination in this case simply says that a lemma $A$ used to prove this theorem can be inlined. Whenever the theorem's proof mentions lemma $A$, we can replace those occurrences with the proof of $A$. Consequently, the cut rule is admissible. - -For systems formulated in the sequent calculus, analytic proofs are those proofs that do not use Cut. Typically such a proof will be longer, of course, and not necessarily trivially so. In his essay "Don't Eliminate Cut!" George Boolos demonstrated that there was a derivation that could be completed in a page using cut, but whose analytic proof could not be completed in the lifespan of the universe. - -The theorem has many rich consequences: - -* A system is inconsistent if it admits a proof of the absurd. If the system has a cut elimination theorem, then if it has a proof of the absurd, or of the empty sequent, it should also have a proof of the absurd (or the empty sequent), without cuts. It is typically very easy to check that there are no such proofs. Thus, once a system is shown to have a cut elimination theorem, it is normally immediate that the system is consistent. - -* Normally the system also has, at least in first order logic, the subformula property, an important property in several approaches to proof-theoretic semantics. - -Cut elimination is one of the most powerful tools for proving interpolation theorems. The possibility of carrying out proof search based on resolution, the essential insight leading to the Prolog programming language, depends upon the admissibility of Cut in the appropriate system. - -For proof systems based on higher-order typed lambda calculus through a Curry-Howard isomorphism, cut elimination algorithms correspond to the strong normalization property (every proof term reduces in a finite number of steps into a normal form). diff --git a/wiki/wikipedia/2650.txt b/wiki/wikipedia/2650.txt deleted file mode 100644 index 9846476df1bdbf4bec48645b52268629d26b895c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2650.txt +++ /dev/null @@ -1,92 +0,0 @@ -In probability theory, the central limit theorem states that, under certain circumstances, the probability distribution of the scaled mean of a random sample converges to a normal distribution as the sample size increases to infinity. Under stronger assumptions, the Berry–Esseen theorem, or Berry–Esseen inequality, gives a more quantitative result, because it also specifies the rate at which this convergence takes place by giving a bound on the maximal error of approximation between the normal distribution and the true distribution of the scaled sample mean. The approximation is measured by the Kolmogorov–Smirnov distance. In the case of independent samples, the convergence rate is $n^{-1/2}$, where $n$ is the sample size, and the constant is estimated in terms of the third absolute normalized moments. - -Statements of the theorem vary, as it was independently discovered by two mathematicians, Andrew C. Berry (in 1941) and Carl-Gustav Esseen (1942), who then, along with other authors, refined it repeatedly over subsequent decades. - -One version, sacrificing generality somewhat for the sake of clarity, is the following: - -There exists a positive constant $C$ such that if $X_1, X_2, \ldots$ are i.i.d. 

random variables with $E(X_1) = 0$, $E(X_1^2) = \sigma^2 > 0$, and $E(|X_1|^3) = \rho < \infty$, and if we define -$$ -Y_n = {X_1 + X_2 + \cdots + X_n \over n} -$$ - -the sample mean, with $F_n$ the cumulative distribution function of -$$ -{Y_n \sqrt{n} \over {\sigma}}, -$$ - -and $\Phi$ the cumulative distribution function of the standard normal distribution, then for all $x$ and $n$, -$$ -\left|F_n(x) - \Phi(x)\right| \le {C \rho \over \sigma^3\sqrt{n}}.\ \ \ \ (1) -$$ - -That is: given a sequence of independent and identically distributed random variables, each having mean zero and positive variance, if additionally the third absolute moment is finite, then the cumulative distribution functions of the standardized sample mean and the standard normal distribution differ (vertically, on a graph) by no more than the specified amount. Note that the approximation error for all $n$ (and hence the limiting rate of convergence for indefinite $n$ sufficiently large) is bounded by the order of $n^{-1/2}$. - -Calculated values of the constant $C$ have decreased markedly over the years, from the original value of 7.59 by Esseen, to 0.7882 by van Beek, then 0.7655 by Shiganov, then 0.7056 by Shevtsova, then 0.7005 by Shevtsova, then 0.5894 by Tyurin, then 0.5129 by Korolev, then 0.4785 by Tyurin. A detailed review can be found in the papers of Korolev and Shevtsova. The best estimate, $C < 0.4748$, follows from the inequality -$$ -\sup_{x\in\mathbb R}\left|F_n(x) - \Phi(x)\right| \le {0.33554 (\rho+0.415\sigma^3)\over \sigma^3\sqrt{n}}, -$$ - -due to Shevtsova, since $\sigma^3 \le \rho$ and $0.33554 \cdot 1.415 < 0.4748$. However, if $\rho \ge 1.286\sigma^3$, then the estimate -$$ -\sup_{x\in\mathbb R}\left|F_n(x) - \Phi(x)\right| \le {0.3328 (\rho+0.429\sigma^3)\over \sigma^3\sqrt{n}}, -$$ - -which is also proved in Shevtsova, gives an even tighter upper estimate. - -Esseen proved that the constant also satisfies the lower bound -$$ -C\geq\frac{\sqrt{10}+3}{6\sqrt{2\pi}} \approx 0.40973 \approx \frac{1}{\sqrt{2\pi}} + 0.01079 . -$$ - -Let $X_1, X_2, \ldots$ be independent random variables with $E(X_i) = 0$, $E(X_i^2) = \sigma_i^2 > 0$, and $E(|X_i|^3) = \rho_i < \infty$. Also, let -$$ -S_n = {X_1 + X_2 + \cdots + X_n \over \sqrt{\sigma_1^2+\sigma_2^2+\cdots+\sigma_n^2} } -$$ - -be the normalized $n$-th partial sum. Denote by $F_n$ the cdf of $S_n$, and by $\Phi$ the cdf of the standard normal distribution. For the sake of convenience denote -$$ -\vec{\sigma}=(\sigma_1,\ldots,\sigma_n),\ \vec{\rho}=(\rho_1,\ldots,\rho_n). -$$ - -In 1941, Andrew C. Berry proved that for all $n$ there exists an absolute constant $C_1$ such that -$$ -\sup_{x\in\mathbb R}\left|F_n(x) - \Phi(x)\right| \le C_1\cdot\psi_1,\ \ \ \ (2) -$$ - -where -$$ -\psi_1=\psi_1\big(\vec{\sigma},\vec{\rho}\big)=\Big({\textstyle\sum\limits_{i=1}^n\sigma_i^2}\Big)^{-1/2}\cdot\max_{1\le i\le n}\frac{\rho_i}{\sigma_i^2}. -$$ - -Independently, in 1942, Carl-Gustav Esseen proved that for all $n$ there exists an absolute constant $C_0$ such that -$$ -\sup_{x\in\mathbb R}\left|F_n(x) - \Phi(x)\right| \le C_0\cdot\psi_0, \ \ \ \ (3) -$$ - -where -$$ -\psi_0=\psi_0\big(\vec{\sigma},\vec{\rho}\big)=\Big({\textstyle\sum\limits_{i=1}^n\sigma_i^2}\Big)^{-3/2}\cdot\sum\limits_{i=1}^n\rho_i. -$$ - -It is easy to check that $\psi_0\le\psi_1$. Due to this circumstance, inequality (3) is conventionally called the Berry–Esseen inequality, and the quantity $\psi_0$ is called the Lyapunov fraction of the third order. 

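Before the identically distributed special case is taken up below, here is a small Monte Carlo illustration of bound (1) (added here, not from the article; numpy and scipy are assumed, and the symmetric ±1/2 coin variables and the constant C = 0.4748 quoted above are arbitrary choices):

```python
# Monte Carlo check of bound (1) for X_i uniform on {-1/2, +1/2}:
# sigma = 1/2 and rho = E|X_i|^3 = 1/8.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, trials = 100, 200_000
sigma, rho = 0.5, 0.125

# The sum of n such variables equals (#heads) - n/2.
S = rng.binomial(n, 0.5, size=trials)
T = (S - n / 2) / (sigma * np.sqrt(n))   # the statistic Y_n * sqrt(n) / sigma

# Kolmogorov-Smirnov distance between the empirical cdf of T and Phi.
ts = np.sort(T)
ecdf_hi = np.arange(1, trials + 1) / trials
ecdf_lo = np.arange(0, trials) / trials
gap = np.max(np.maximum(ecdf_hi - norm.cdf(ts), norm.cdf(ts) - ecdf_lo))

print(f"observed sup |F_n - Phi|        : {gap:.4f}")
print(f"C rho / (sigma^3 sqrt(n)), C=0.4748 : "
      f"{0.4748 * rho / (sigma**3 * np.sqrt(n)):.4f}")
```

For these lattice-valued summands the observed distance comes close to the bound, which is why this family is also the standard source of lower-bound examples for the constant.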
Moreover, in the case where the summands $X_1, \ldots, X_n$ have identical distributions, -$$ -\psi_0=\psi_1=\frac{\rho_1}{\sigma_1^3\sqrt{n}}, -$$ - -and thus the bounds stated by inequalities (1), (2) and (3) coincide apart from the constant. - -Regarding $C_0$, obviously, the lower bound established by Esseen remains valid: -$$ -C_0\geq\frac{\sqrt{10}+3}{6\sqrt{2\pi}} = 0.4097\ldots. -$$ - -The upper bounds for $C_0$ were subsequently lowered from the original estimate 7.59 due to Esseen to (considering recent results only) 0.9051 due to Zolotarev, 0.7975 due to van Beek, 0.7915 due to Shiganov, 0.6379 and 0.5606 due to Tyurin. The best estimate is 0.5600, obtained by Shevtsova. - -As with the multidimensional central limit theorem, there is a multidimensional version of the Berry–Esseen theorem. diff --git a/wiki/wikipedia/2651.txt b/wiki/wikipedia/2651.txt deleted file mode 100644 index a98d85a05b64158c8e39e29f407705f82c9334c5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2651.txt +++ /dev/null @@ -1,94 +0,0 @@ -In the study of ordinary differential equations and their associated boundary value problems, Lagrange's identity, named after Joseph Louis Lagrange, gives the boundary terms arising from integration by parts of a self-adjoint linear differential operator. Lagrange's identity is fundamental in Sturm–Liouville theory. In more than one independent variable, Lagrange's identity is generalized by Green's second identity. - -In general terms, Lagrange's identity for any pair of functions $u$ and $v$ in the function space $C^2$ (that is, twice differentiable) in $n$ dimensions is: -$$ -vL[u]-uL^*[v]=\nabla \cdot \boldsymbol M, -$$ - -where -$$ -M_i = \sum_{j=1}^n a_{ij}\left( v \frac{\partial u}{\partial x_j} - u \frac{\partial v}{\partial x_j} \right ) + uv \left( b_i - \sum_{j=1}^{n} \frac{\partial a_{ij}}{\partial x_j} \right ), -$$ - -and -$$ -\nabla \cdot \boldsymbol M = \sum_{i=1}^n \frac{\partial}{\partial x_i} M_i. -$$ - -The operator $L$ and its adjoint operator $L^*$ are given by: -$$ -L[u] = \sum_{i,\ j =1}^n a_{i,j} \frac {\partial ^2 u }{\partial x_i \partial x_j} + \sum_{i=1}^n b_i \frac {\partial u}{\partial x_i} +c u -$$ - -and -$$ -L^*[v] = \sum_{i,\ j =1}^n \frac {\partial ^2 (a_{i,j} v) }{\partial x_i \partial x_j} - \sum_{i=1}^n \frac {\partial (b_i v)}{\partial x_i} + cv. -$$ - -If Lagrange's identity is integrated over a bounded region, then the divergence theorem can be used to form Green's second identity in the form: -$$ -\int_\Omega v L[u]\ d\Omega = \int_{\Omega} u L^*[v]\ d\Omega +\int_S \boldsymbol{M \cdot n } dS, -$$ - -where $S$ is the surface bounding the volume $\Omega$ and $\boldsymbol n$ is the unit outward normal to the surface $S$. - -===Ordinary differential equations=== - -Any second order ordinary differential equation of the form: -$$ -a(x)\frac{d^2y}{dx^2} + b(x)\frac {dy}{dx} +c(x)y +\lambda w(x) y =0, -$$ - -can be put in the form: -$$ -\frac {d}{dx} \left( p(x) \frac {dy}{dx} \right ) +\left( q(x)+ \lambda w(x) \right) y(x) = 0. -$$ - -This general form motivates introduction of the Sturm–Liouville operator $L$, defined as an operation upon a function $f$ such that: -$$ -L f = \frac {d}{dx} \left( p(x) \frac {df}{dx} \right) + q(x) f. -$$ - -It can be shown that for any $u$ and $v$ for which the various derivatives exist, Lagrange's identity for ordinary differential equations holds: -$$ -\int_0^1 \ dx \ ( uLv-vLu) = \left[p(x)\left(u \frac {dv}{dx}- v \frac {du}{dx} \right)\right]_0^1, -$$ - -where $ p=P(x)$, $ q=Q(x)$, $ u=U(x)$ and $ v=V(x)$ are functions of $ x$. 

Here $u$ and $v$ are assumed to have continuous second derivatives on the interval $[0,1]$. - -We have: -$$ -uLv = u \left[\frac {d}{dx} \left( p(x) \frac {dv}{dx} \right) + q(x) v \right], -$$ - -and -$$ -vLu = v \left[\frac {d}{dx} \left( p(x) \frac {du}{dx} \right) + q(x) u \right]. -$$ - -Subtracting: -$$ -uLv-vLu = u \frac {d}{dx} \left( p(x) \frac {dv}{dx} \right)-v \frac {d}{dx} \left( p(x) \frac {du}{dx} \right). -$$ - -The factors $u$ and $v$ multiplying the leading terms can be moved inside the differentiations, because the extra terms produced by differentiating $u$ and $v$ are the same in the two subtracted expressions and simply cancel each other. Thus, -$$ -\begin{align} -uLv-vLu &= \frac {d}{dx} \left( p(x)u \frac {dv}{dx} \right)-\frac {d}{dx} \left( v p(x) \frac {du}{dx} \right), \\ -&=\frac {d}{dx}\left[p(x)\left(u \frac {dv}{dx}- v \frac {du}{dx} \right)\right], -\end{align} -$$ - -which is Lagrange's identity. Integrating from zero to one: -$$ -\int_0^1 \ dx \ ( uLv-vLu) = \left[p(x)\left(u \frac {dv}{dx}- v \frac {du}{dx} \right)\right]_0^1, -$$ - -as was to be shown. diff --git a/wiki/wikipedia/2652.txt b/wiki/wikipedia/2652.txt deleted file mode 100644 index 0b9373c77036c1f08d00a44a94ed847570febf43..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2652.txt +++ /dev/null @@ -1,35 +0,0 @@ -In graph theory, the Erdős–Faber–Lovász conjecture is a problem about graph coloring, named after Paul Erdős, Vance Faber, and László Lovász, who formulated it in 1972. It says: - -If k complete graphs, each having exactly k vertices, have the property that every pair of complete graphs has at most one shared vertex, then the union of the graphs can be properly colored with k colors. - -A proof of the conjecture for all sufficiently large values of k was announced in 2021 by Dong Yeap Kang, Tom Kelly, Daniela Kühn, Abhishek Methuku, and Deryk Osthus. - -Haddad introduced the problem with a story about seating assignment in committees: suppose that, in a university department, there are k committees, each consisting of k faculty members, and that all committees meet in the same room, which has k chairs. Suppose also that at most one person belongs to the intersection of any two committees. Is it possible to assign the committee members to chairs in such a way that each member sits in the same chair for all the different committees to which he or she belongs? In this model of the problem, the faculty members correspond to graph vertices, committees correspond to complete graphs, and chairs correspond to vertex colors. - -A linear hypergraph (also known as partial linear space) is a hypergraph with the property that every two hyperedges have at most one vertex in common. A hypergraph is said to be uniform if all of its hyperedges have the same number of vertices as each other. The n cliques of size n in the Erdős–Faber–Lovász conjecture may be interpreted as the hyperedges of an n-uniform linear hypergraph that has the same vertices as the underlying graph. In this language, the Erdős–Faber–Lovász conjecture states that, given any n-uniform linear hypergraph with n hyperedges, one may n-color the vertices such that each hyperedge has one vertex of each color. - -A simple hypergraph is a hypergraph in which at most one hyperedge connects any pair of vertices and there are no hyperedges of size at most one. 

In the graph coloring formulation of the Erdős–Faber–Lovász conjecture, it is safe to remove vertices that belong to a single clique, as their coloring presents no difficulty; once this is done, the hypergraph that has a vertex for each clique, and a hyperedge for each graph vertex, forms a simple hypergraph.

Moreover, the hypergraph dual of vertex coloring is edge coloring. Thus, the Erdős–Faber–Lovász conjecture is equivalent to the statement that any simple hypergraph with n vertices has chromatic index (edge coloring number) at most n.

The graph of the Erdős–Faber–Lovász conjecture may be represented as an intersection graph of sets: to each vertex of the graph corresponds the set of the cliques containing that vertex, and any two vertices are connected by an edge whenever their corresponding sets have a nonempty intersection. Using this description of the graph, the conjecture may be restated as follows: if some family of sets has n total elements, and any two sets intersect in at most one element, then the intersection graph of the sets may be n-colored.

The intersection number of a graph G is the minimum number of elements in a family of sets whose intersection graph is G, or equivalently the minimum number of vertices in a hypergraph whose line graph is G. Klein defines the linear intersection number of a graph, similarly, to be the minimum number of vertices in a linear hypergraph whose line graph is G. As Klein observes, the Erdős–Faber–Lovász conjecture is equivalent to the statement that the chromatic number of any graph is at most equal to its linear intersection number.

Haddad presents yet another equivalent formulation, in terms of the theory of clones.

Paul Erdős, Vance Faber, and László Lovász formulated the harmless-looking conjecture at a party in Boulder, Colorado, in September 1972. Its difficulty was realised only slowly.

Paul Erdős originally offered US$50 for proving the conjecture in the affirmative, and later raised the reward to US$500.

Chiang proved that the chromatic number of the graphs in the conjecture is at most 3k/2 − 2, and Kahn improved this to k + o(k).

In 2021, almost 50 years after the original conjecture was stated, it was resolved for all sufficiently large n.

It is also of interest to consider the chromatic number of graphs formed as the union of k cliques of k vertices each, without restricting how big the intersections of pairs of cliques can be. In this case, the chromatic number of their union is at most $1 + k\sqrt{k-1}$, and some graphs formed in this way require this many colors.

A version of the conjecture that uses the fractional chromatic number in place of the chromatic number is known to be true. That is, if a graph G is formed as the union of k k-cliques that intersect pairwise in at most one vertex, then G can be fractionally k-colored.

In the framework of edge coloring simple hypergraphs, Hindman defines a number L for a simple hypergraph as the number of hypergraph vertices that belong to a hyperedge of three or more vertices. He shows that, for any fixed value of L, a finite calculation suffices to verify that the conjecture is true for all simple hypergraphs with that value of L. Based on this idea, he shows that the conjecture is indeed true for all simple hypergraphs with L ≤ 10. In the formulation of coloring graphs formed by unions of cliques, Hindman's result shows that the conjecture is true whenever at most ten of the cliques contain a vertex that belongs to three or more cliques. In particular, it is true for n ≤ 10.
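For tiny instances the conjecture can be checked exhaustively. The following brute-force sketch verifies k-colorability for k = 3 on a hypothetical instance (the three triangles below are my own illustrative choice, as are the helper names):

```python
# Brute-force check of the Erdos-Faber-Lovasz property on a small instance:
# k cliques of k vertices, pairwise sharing at most one vertex.
from itertools import combinations, product

k = 3
cliques = [(0, 1, 2), (0, 3, 4), (1, 3, 5)]   # three triangles, pairwise sharing one vertex

# Sanity check: every pair of cliques shares at most one vertex.
assert all(len(set(a) & set(b)) <= 1 for a, b in combinations(cliques, 2))

vertices = sorted(set().union(*cliques))
edges = {frozenset(e) for c in cliques for e in combinations(c, 2)}

def is_k_colorable(vertices, edges, k):
    """Try every assignment of k colors to the vertices (fine for tiny graphs)."""
    for coloring in product(range(k), repeat=len(vertices)):
        color = dict(zip(vertices, coloring))
        if all(color[u] != color[v] for u, v in map(tuple, edges)):
            return True
    return False

print(is_k_colorable(vertices, edges, k))   # True, as the conjecture predicts
```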
diff --git a/wiki/wikipedia/2653.txt b/wiki/wikipedia/2653.txt deleted file mode 100644 index 414f1ea502c30e33db042df3c599301751642faa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2653.txt +++ /dev/null @@ -1,21 +0,0 @@
This Japan-exclusive puzzle game is based on the popular Picross formula. The game is considered a spin-off of Sunsoft's Hebereke series. It was first released on the PlayStation on September 8, 1995, and was later ported to other platforms. The game received two sequels. It is the only game in the Hebereke series to feature Utsuzin as the main villain, rather than Unyohn.

O-Chan and Hebe are walking on a grass field when, all of a sudden, Utsuzin captures Hebe in a UFO. O-Chan tries to run after him, but she's too late. She then ventures to rescue Hebe from Utsuzin's lair.

The ending shows O-Chan waking from her bed, indicating (but not explaining) that it was all a dream.

The game presents the player with a puzzle grid. The player must chisel the grid; if done correctly, the player wins.

The game also has a multiplayer mode, in which the player must chisel the grid in order to beat the other player.

There is also an edit mode, in which the player can create their own puzzles.

The Sega Saturn version included a story mode, in which the player faces a series of opponents. The player can also compete for better times in that mode, as its puzzles are timed.

The game has a tutorial mode, which teaches the player how to play.

On September 27, 1996, a sequel was released on the PlayStation. It was also ported to the WonderSwan on January 6, 2000, though the "2" was dropped from the title, as it was the only game in the series released for that platform. The sequel featured new puzzles, music tracks, and updated visuals, and added a painting mode in which the puzzle grid must be filled with color.

The final entry in the series was released on January 11, 2001, exclusively on the PlayStation. It uses the same engine as the first game and features more levels, but lacks the additions of the second game.

A mobile version was released in 2004; it was the final game in the Hebereke series. Little information is known about it. As its name suggests, it is a remake of the first game.
diff --git a/wiki/wikipedia/2654.txt b/wiki/wikipedia/2654.txt deleted file mode 100644 index f0257530ce473f96c9a8129bdd2a11d2cbc66eb5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2654.txt +++ /dev/null @@ -1,5 +0,0 @@
T.C. Mits (acronym for "the celebrated man in the street") is a term coined by Lillian Rosanoff Lieber to refer to an everyman. In Lieber's works, T.C. Mits was a character who made scientific topics more approachable to a general audience.

The phrase has enjoyed sparse use by authors in fields such as molecular biology, secondary education, and general semantics.

Dr. Lillian Rosanoff Lieber wrote The Education of T.C. MITS, a treatise on mathematical thinking in twenty chapters. The writing took a form resembling free-verse poetry, though Lieber included an introduction stating that the form was meant only to facilitate rapid reading, rather than to emulate free verse. Lieber's husband, Hugh Gray Lieber, a fellow professor at Long Island University, provided illustrations for the book.
The title of the book was meant to emphasize that mathematics can be understood by anyone, a point underscored when a special "Overseas edition for the Armed Forces" was published in 1942 and approved by the Council on Books in Wartime to be sent to American troops fighting in World War II.
diff --git a/wiki/wikipedia/2655.txt b/wiki/wikipedia/2655.txt deleted file mode 100644 index 6f684a054b7b4f572d3ffd54bf67932503e3dd24..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2655.txt +++ /dev/null @@ -1,35 +0,0 @@
In algebraic geometry, Behrend's trace formula is a generalization of the Grothendieck–Lefschetz trace formula to a smooth algebraic stack over a finite field, conjectured in 1993 and proven in 2003 by Kai Behrend. Unlike the classical one, the formula counts points in the "stacky way"; it takes into account the presence of nontrivial automorphisms.

The desire for the formula comes from the fact that it applies to the moduli stack of principal bundles on a curve over a finite field (in some instances indirectly, via the Harder–Narasimhan stratification, as the moduli stack is not of finite type). See the moduli stack of principal bundles and references therein for the precise formulation in this case.

Pierre Deligne found an example that shows the formula may be interpreted as a sort of Selberg trace formula.

A proof of the formula in the context of the six operations formalism developed by Yves Laszlo and Martin Olsson is given by Shenghao Sun.

By definition, if C is a category in which each object has finitely many automorphisms, the number of points in $C$ is denoted by
$$
\# C = \sum_p {1 \over \# \operatorname{Aut}(p)},
$$

with the sum running over representatives p of all isomorphism classes in C. (The series may diverge in general.) The formula states: for a smooth algebraic stack X of finite type over a finite field $\mathbb{F}_q$ and the "arithmetic" Frobenius $\phi^{-1}: X \to X$, i.e., the inverse of the usual geometric Frobenius $\phi$ in Grothendieck's formula,
$$
\# X (\mathbb{F}_q) = q^{\dim X} \sum_{i=0}^{\infty} (-1)^i \operatorname{tr} \left (\phi^{-1}; H^i(X, \Q_l) \right ).
$$

Here, it is crucial that the cohomology of a stack is taken with respect to the smooth topology (not the étale one).

When X is a variety, the smooth cohomology is the same as the étale one and, via Poincaré duality, this is equivalent to Grothendieck's trace formula. (But the proof of Behrend's trace formula relies on Grothendieck's formula, so this does not subsume Grothendieck's.)

Consider $B\mathbb{G}_m = [\operatorname{Spec} \mathbb{F}_q/\mathbb{G}_m]$, the classifying stack of the multiplicative group scheme (that is, $\mathbb{G}_m(R) = R^\times$). By definition, $B \mathbb{G}_m(\mathbb{F}_q)$ is the category of principal $\mathbb{G}_m$-bundles over $\operatorname{Spec} \mathbb{F}_q$, which has only one isomorphism class (since all such bundles are trivial by Lang's theorem). Its group of automorphisms is $\mathbb{G}_m$, so the number of $\mathbb{F}_q$-automorphisms is $\#\mathbb{G}_m(\mathbb{F}_q) = \#\mathbb{F}_q^\times = q-1$.

On the other hand, we may compute the l-adic cohomology of $B\mathbb{G}_m$ directly. We remark that in the topological setting, we have $B\C^\times \cong \mathbb{CP}^\infty$ (where $B\C^\times$ now denotes the usual classifying space of a topological group), whose rational cohomology ring is a polynomial ring in one generator (Borel's theorem), but we shall not use this directly.
If we wish to stay in the world of algebraic geometry, we may instead "approximate" $B\mathbb{G}_m$ by projective spaces of larger and larger dimension. Thus we consider the map $B\mathbb{G}_m \to \mathbb{P}^N$ induced by the $\mathbb{G}_m$-bundle corresponding to $\mathcal{O}(1).$ This map induces an isomorphism in cohomology in degrees up to 2N. Thus the even (resp. odd) Betti numbers of $B \mathbb{G}_m$ are 1 (resp. 0), and the l-adic Galois representation on the (2n)th cohomology group is the nth power of the cyclotomic character. The second part is a consequence of the fact that the cohomology of $\mathbb{P}^N$ is generated by algebraic cycle classes. This shows that
$$
\sum_{i \ge 0} (-1)^i \operatorname{tr} \left (\phi^{-1}; H^i(B\mathbb{G}_m, \Q_l) \right ) = 1 + \frac{1}{q} + \frac{1}{q^2} + \cdots = \frac{q}{q-1}.
$$

Note that
$$
\dim B \mathbb{G}_m = \dim \operatorname{Spec} \mathbb{F}_q - \dim \mathbb{G}_m = -1.
$$

Multiplying by $q^{-1}$, one obtains the predicted equality $\# B\mathbb{G}_m(\mathbb{F}_q) = \tfrac{1}{q-1}$.
diff --git a/wiki/wikipedia/2656.txt b/wiki/wikipedia/2656.txt deleted file mode 100644 index bf71de32da42680bafd50b9a593e46bd7fcd0a8c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2656.txt +++ /dev/null @@ -1,98 +0,0 @@
In mathematics, an Euler brick, named after Leonhard Euler, is a rectangular cuboid whose edges and face diagonals all have integer lengths. A primitive Euler brick is an Euler brick whose edge lengths are relatively prime. A perfect Euler brick is one whose space diagonal also has integer length; no such brick has yet been found.

The definition of an Euler brick in geometric terms is equivalent to a solution to the following system of Diophantine equations:
$$
\begin{cases} a^2 + b^2 = d^2\\ a^2 + c^2 = e^2\\ b^2 + c^2 = f^2\end{cases}
$$

where a, b, c are the edges and d, e, f are the diagonals.

* If (a, b, c) is a solution, then (ka, kb, kc) is also a solution for any k. Consequently, the solutions in rational numbers are all rescalings of integer solutions. Given an Euler brick with edge lengths (a, b, c), the triple (bc, ac, ab) constitutes an Euler brick as well.

* At least two edges of an Euler brick are divisible by 3.

An infinitude of Euler bricks can be generated by Saunderson's parametric formula. Let (u, v, w) be a Pythagorean triple (that is, $u^2 + v^2 = w^2$). Then the edges
$$
a = u|4v^2 - w^2|, \qquad b = v|4u^2 - w^2|, \qquad c = 4uvw
$$

give face diagonals $d = w^3$, $e = u(4v^2 + w^2)$, and $f = v(4u^2 + w^2)$.

If a perfect Euler brick exists, its smallest edge must be greater than $5 \cdot 10^{11}$.

Three cuboid conjectures are three mathematical propositions claiming irreducibility of three univariate polynomials with integer coefficients depending on several integer parameters. The conjectures are related to the perfect cuboid problem. Though they are not equivalent to the perfect cuboid problem, if all three of these conjectures are valid, then no perfect cuboids exist. They are neither proved nor disproved.

Cuboid conjecture 1. For any two positive coprime integers $a \neq u$, the eighth-degree polynomial
$$
P_{au}(t)=t^8+6(u^2-a^2)t^6+(a^4-4a^2u^2+u^4)t^4-6a^2u^2(u^2-a^2)t^2+u^4a^4
$$

is irreducible over the ring of integers $\mathbb Z$; a computational spot check of this first conjecture is sketched below.
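The sketch factors $P_{au}(t)$ exactly over $\mathbb Z$ for a few small coprime pairs; a single irreducible factor of multiplicity one is consistent with the conjecture (this is evidence only, of course, not a proof, and the loop bounds are my own choices):

```python
# Spot-check Cuboid Conjecture 1: P_au(t) is conjectured irreducible over Z
# for positive coprime a != u. We count irreducible factors with sympy.
from math import gcd
from sympy import symbols, factor_list

t = symbols('t')

def P_au(a, u):
    return (t**8 + 6*(u**2 - a**2)*t**6 + (a**4 - 4*a**2*u**2 + u**4)*t**4
            - 6*a**2*u**2*(u**2 - a**2)*t**2 + u**4*a**4)

for a in range(1, 5):
    for u in range(1, 5):
        if a != u and gcd(a, u) == 1:
            _, factors = factor_list(P_au(a, u), t)
            # expected: one factor of multiplicity 1, i.e. irreducibility
            print((a, u), len(factors), factors[0][1])
```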
Cuboid conjecture 2. For any two positive coprime integers $p \neq q$, the tenth-degree polynomial
$$
\begin{align}
Q_{pq}(t)= {} & t^{10}+(2q^2+p^2)(3q^2-2p^2)t^8 \\
& {} +(q^8+10p^2q^6+4p^4q^4-14p^6q^2+p^8)t^6\\
& {} -p^2 q^2(q^8-14p^2q^6+4p^4q^4+10p^6q^2+p^8)t^4 \\
& {} -p^6q^6(q^2+2p^2)(-2q^2+3p^2)t^2\\
& {} -q^{10}p^{10}
\end{align}
$$

is irreducible over the ring of integers $\mathbb Z$.

Cuboid conjecture 3. For any three positive coprime integers $a$, $b$, $u$ such that none of the conditions $a=b$, $a=b=u$, $bu=a^2$, $au=b^2$, $a=u$, $b=u$ is fulfilled, the twelfth-degree polynomial
$$
\begin{align}
P_{abu}(t) = {} & t^{12}+(6u^2-2a^2-2b^2)t^{10} \\
& {} + (u^4+b^4+a^4+4a^2u^2+4b^2u^2-12b^2 a^2)t^8 \\
& {} + (6a^4u^2+6u^2b^4-8a^2b^2u^2-2u^4a^2-2u^4b^2-2a^4b^2-2b^4a^2)t^6 \\
& {} + (4u^2b^4a^2+4a^4u^2b^2-12u^4a^2b^2+u^4a^4+u^4b^4+a^4b^4)t^4 \\
& {} + (6a^4u^2b^4-2u^4a^4b^2-2u^4a^2b^4)t^2+u^4a^4b^4
\end{align}
$$

is irreducible over the ring of integers $\mathbb Z$.

An almost-perfect cuboid has 6 of the 7 lengths rational. Such cuboids can be sorted into three types, called Body, Edge, and Face cuboids.

In the case of the Body cuboid, the body (space) diagonal g is irrational. For the Edge cuboid, one of the edges a, b, c is irrational. The Face cuboid has just one of the face diagonals d, e, f irrational.

The Body cuboid is commonly referred to as the Euler cuboid, in honor of Leonhard Euler, who discussed this type of cuboid. He was also aware of Face cuboids, and provided the (104, 153, 672) example. The three integer cuboid edge lengths and three integer diagonal lengths of a Face cuboid can also be interpreted as the edge lengths of a Heronian tetrahedron that is also a Schläfli orthoscheme. There are infinitely many Face cuboids, and infinitely many Heronian orthoschemes.

Only recently have cuboids in complex numbers become known.

Randall L. Rathbun published 155,151 found cuboids with the smallest integer edge less than 157,000,000,000: 56,575 were Euler (Body) cuboids, 15,449 were Edge cuboids with a complex number edge length, 30,081 were Edge cuboids, and 53,046 were Face cuboids.

The smallest solutions for each type of almost-perfect cuboid, given as edges, face diagonals and the space diagonal (a, b, c, d, e, f, g):

* Body cuboid: (44, 117, 240, 125, 244, 267, $\sqrt{73225}$)

* Edge cuboid: (520, 576, $\sqrt{618849}$, 776, 943, 975, 1105)

* Face cuboid: (104, 153, 672, 185, 680, $\sqrt{474993}$, 697)

* Complex Body cuboid: (63i, 60i, 65, 87i, 16, 25, $\sqrt{-3344}$)

* Complex Edge cuboid: ($\sqrt{-3344}$, 60, 63, 16, 25, 87, 65)

* Complex Face cuboid: (672i, 153i, 697, $\sqrt{-474993}$, 185, 680, 104)
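The integer relations in these tuples are easy to verify mechanically; a minimal check of the smallest Body cuboid listed above:

```python
# Verify the smallest Euler (Body) cuboid: edges (44, 117, 240) with
# face diagonals (125, 244, 267); the space diagonal sqrt(73225) is
# not an integer, so this is not a perfect cuboid.
from math import isqrt

a, b, c = 44, 117, 240
assert a*a + b*b == 125**2
assert a*a + c*c == 244**2
assert b*b + c*c == 267**2

g2 = a*a + b*b + c*c
print(g2, isqrt(g2)**2 == g2)   # 73225, False -> space diagonal is irrational
```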
A perfect parallelepiped is a parallelepiped with integer-length edges, face diagonals, and body diagonals, but not necessarily with all right angles; a perfect cuboid is a special case of a perfect parallelepiped. In 2009, dozens of perfect parallelepipeds were shown to exist, answering an open question of Richard Guy. Some of these perfect parallelepipeds have two rectangular faces. The smallest perfect parallelepiped has edges 271, 106, and 103; short face diagonals 101, 266, and 255; long face diagonals 183, 312, and 323; and body diagonals 374, 300, 278, and 272.
diff --git a/wiki/wikipedia/2657.txt b/wiki/wikipedia/2657.txt deleted file mode 100644 index f6d933e81db90299a220ad58ffcd16e23ab5cbf0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2657.txt +++ /dev/null @@ -1,161 +0,0 @@
In the mathematical field of graph theory, Kirchhoff's theorem or Kirchhoff's matrix tree theorem, named after Gustav Kirchhoff, is a theorem about the number of spanning trees in a graph, showing that this number can be computed in polynomial time as the determinant of the Laplacian matrix of the graph. It is a generalization of Cayley's formula, which provides the number of spanning trees in a complete graph.

Kirchhoff's theorem relies on the notion of the Laplacian matrix of a graph, which is equal to the difference between the graph's degree matrix (a diagonal matrix with the vertex degrees on its diagonal) and its adjacency matrix (a (0,1)-matrix with 1's at entries corresponding to pairs of adjacent vertices and 0's otherwise).

For a given connected graph G with n labeled vertices, let λ1, λ2, ..., λn−1 be the non-zero eigenvalues of its Laplacian matrix. Then the number of spanning trees of G is
$$
t(G)=\frac{1}{n} \lambda_1\lambda_2\cdots\lambda_{n-1}.
$$

Equivalently, the number of spanning trees is equal to any cofactor of the Laplacian matrix of G.

First, construct the Laplacian matrix Q for the example diamond graph G (see image on the right):
$$
Q = \left[\begin{array}{rrrr}
2 & -1 & -1 & 0 \\
-1 & 3 & -1 & -1 \\
-1 & -1 & 3 & -1 \\
0 & -1 & -1 & 2
\end{array}\right].
$$

Next, construct a matrix Q* by deleting any row and any column from Q. For example, deleting row 1 and column 1 yields
$$
Q^\ast =
\left[\begin{array}{rrr}
3 & -1 & -1 \\
-1 & 3 & -1 \\
-1 & -1 & 2
\end{array}\right].
$$

Finally, take the determinant of Q* to obtain t(G), which is 8 for the diamond graph. (Notice t(G) is the (1,1)-cofactor of Q in this example.)

(The proof below is based on the Cauchy–Binet formula. An elementary induction argument for Kirchhoff's theorem can be found on page 654 of Moore (2011).)

First notice that the Laplacian matrix has the property that the sum of its entries across any row and any column is 0. Thus we can transform any minor into any other minor by adding rows and columns, switching them, and multiplying a row or a column by −1. Thus the cofactors are the same up to sign, and it can be verified that, in fact, they have the same sign.

We proceed to show that the determinant of the minor M11 counts the number of spanning trees. Let n be the number of vertices of the graph, and m the number of its edges. The incidence matrix E is an n-by-m matrix, which may be defined as follows: suppose that (i, j) is the kth edge of the graph, and that i < j. Then Eik = 1, Ejk = −1, and all other entries in column k are 0 (see the article on oriented incidence matrices for background on this modified incidence matrix E). For the preceding example (with n = 4 and m = 5):
$$
E = \begin{bmatrix}
1 & 1 & 0 & 0 & 0 \\
-1 & 0 & 1 & 1 & 0 \\
0 & -1 & -1 & 0 & 1 \\
0 & 0 & 0 & -1 & -1 \\
\end{bmatrix}.
$$

Recall that the Laplacian L can be factored into the product of the incidence matrix and its transpose, i.e., $L = EE^{\mathrm T}$. Furthermore, let F be the matrix E with its first row deleted, so that $FF^{\mathrm T} = M_{11}$.
Now the Cauchy–Binet formula allows us to write
$$
\det\left(M_{11}\right) = \sum_S \det\left(F_S\right)\det\left(F_S^{\mathrm{T}}\right) = \sum_S \det\left(F_S\right)^2
$$

where S ranges across subsets of [m] of size n − 1, and FS denotes the (n − 1)-by-(n − 1) matrix whose columns are those of F with index in S. Then every S specifies n − 1 edges of the original graph, and it can be shown that those edges induce a spanning tree if and only if the determinant of FS is +1 or −1, and that they do not induce a spanning tree if and only if the determinant is 0. This completes the proof.

Cayley's formula follows from Kirchhoff's theorem as a special case, since every vector with 1 in one place, −1 in another place, and 0 elsewhere is an eigenvector of the Laplacian matrix of the complete graph, with the corresponding eigenvalue being n. These vectors together span a space of dimension n − 1, so there are no other non-zero eigenvalues.

Alternatively, note that as Cayley's formula counts the number of distinct labeled trees of a complete graph Kn, we need to compute any cofactor of the Laplacian matrix of Kn. The Laplacian matrix in this case is
$$
\begin{bmatrix}
n-1 & -1 & \cdots & -1 \\
-1 & n-1 & \cdots & -1 \\
\vdots & \vdots& \ddots & \vdots \\
-1 & -1 & \cdots & n-1 \\
\end{bmatrix}.
$$

Any cofactor of the above matrix is $n^{n-2}$, which is Cayley's formula.

Kirchhoff's theorem holds for multigraphs as well; the matrix Q is modified as follows:

* The entry qi,j equals −m, where m is the number of edges between i and j;

* when counting the degree of a vertex, all loops are excluded.

Cayley's formula for a complete multigraph is $m^{n-1}(n^{n-1}-(n-1)n^{n-2})$, by the same methods as above, since a simple graph is a multigraph with m = 1.

Kirchhoff's theorem can be strengthened by altering the definition of the Laplacian matrix. Rather than merely counting edges emanating from each vertex or connecting a pair of vertices, label each edge with an indeterminate and let the (i, j)-th entry of the modified Laplacian matrix be the sum over the indeterminates corresponding to edges between the i-th and j-th vertices when i does not equal j, and the negative sum over all indeterminates corresponding to edges emanating from the i-th vertex when i equals j.

The determinant obtained by deleting any row and column of this modified Laplacian matrix (as in the computation of the number of spanning trees from the original Laplacian matrix) is then a homogeneous polynomial (the Kirchhoff polynomial) in the indeterminates corresponding to the edges of the graph. After collecting terms and performing all possible cancellations, each monomial in the resulting expression represents a spanning tree consisting of the edges corresponding to the indeterminates appearing in that monomial. In this way, one can obtain an explicit enumeration of all the spanning trees of the graph simply by computing the determinant.

The spanning trees of a graph form the bases of a graphic matroid, so Kirchhoff's theorem provides a formula to count the number of bases in a graphic matroid. The same method may also be used to count the number of bases in regular matroids, a generalization of the graphic matroids.
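Returning to the diamond-graph example above, the whole computation fits in a few lines; a minimal sketch using sympy for exact integer arithmetic:

```python
# Kirchhoff's theorem on the diamond graph: deleting row 1 and column 1
# of the Laplacian and taking the determinant gives t(G) = 8.
import sympy as sp

Q = sp.Matrix([
    [ 2, -1, -1,  0],
    [-1,  3, -1, -1],
    [-1, -1,  3, -1],
    [ 0, -1, -1,  2],
])

Q_star = Q.minor_submatrix(0, 0)   # delete row 1 and column 1
print(Q_star.det())                # 8 spanning trees
```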
Kirchhoff's theorem can also be modified to count the number of oriented spanning trees in directed multigraphs. The matrix Q is constructed as follows:

* The entry qi,j for distinct i and j equals −m, where m is the number of edges from i to j;

* The entry qi,i equals the indegree of i minus the number of loops at i.

The number of oriented spanning trees rooted at a vertex i is the determinant of the matrix obtained by removing the ith row and column of Q.

Kirchhoff's theorem can be generalized to count k-component spanning forests in an unweighted graph. A k-component spanning forest is a subgraph with k connected components that contains all vertices and is cycle-free, i.e., there is at most one path between each pair of vertices. Given such a forest F with connected components $F_1, \dots, F_k$, define its weight $w(F) = |V(F_1)| \cdot \dots \cdot |V(F_k)|$ to be the product of the numbers of vertices in the components. Then
$$
\sum_F w(F) = q_k,
$$

where the sum is over all k-component spanning forests and $q_k$ is the coefficient of $x^k$ in the polynomial
$$
(x+\lambda_1) \dots (x+\lambda_{n-1}) x.
$$

The last factor in the polynomial is due to the zero eigenvalue $\lambda_n=0$. More explicitly, the number $q_k$ can be computed as
$$
q_k = \sum_{\{i_1, \dots, i_{n-k}\}\subset\{1,\dots, n-1\}} \lambda_{i_1} \dots \lambda_{i_{n-k}},
$$

where the sum is over all (n−k)-element subsets of $\{1, \dots, n-1\}$. For example,
$$
\begin{align}
q_{n-1} &= \lambda_1 + \dots + \lambda_{n-1} = \mathrm{tr} Q = 2|E| \\
q_{n-2} &= \lambda_1\lambda_2 + \lambda_1 \lambda_3 + \dots + \lambda_{n-2} \lambda_{n-1} \\
q_{2} &= \lambda_1 \dots \lambda_{n-2} + \lambda_1 \dots \lambda_{n-3} \lambda_{n-1} + \dots + \lambda_2 \dots \lambda_{n-1}\\
q_{1} &= \lambda_1 \dots \lambda_{n-1} \\
\end{align}
$$

Since a spanning forest with n−1 components corresponds to a single edge, the k = n − 1 case states that the sum of the eigenvalues of Q is twice the number of edges. The k = 1 case corresponds to the original Kirchhoff theorem, since the weight of every spanning tree is n.

The proof can be done analogously to the proof of Kirchhoff's theorem. An invertible $(n-k) \times (n-k)$ submatrix of the incidence matrix corresponds bijectively to a k-component spanning forest with a choice of vertex for each component.

The coefficients $q_k$ are, up to sign, the coefficients of the characteristic polynomial of Q.
diff --git a/wiki/wikipedia/2658.txt b/wiki/wikipedia/2658.txt deleted file mode 100644 index a8fae8bf90e28d4a0b32e113df2bb01dc7e2995a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2658.txt +++ /dev/null @@ -1,47 +0,0 @@
MassTransit Enterprise, a managed file transfer server from GroupLogic, Inc. (Arlington, Virginia, USA), runs on Windows and Macintosh operating systems and provides a range of managed file transfer functions.

MassTransit was originally released as Adobe Virtual Network in 1995.

MassTransit supports a number of TCP/IP-based file transfer protocols, including HTTP, HTTPS, FTP, SFTP, an optimized proprietary protocol MTAP, a secure version of MTAP, as well as a UDP Data Transport version of MTAP based on the User Datagram Protocol (UDP).

The MassTransit server (masstransit.exe) requires Windows and supports Windows Server 2008 (R2, 32-bit & 64-bit) and Windows Server 2003. Active Directory users and groups control access to the transfer and reporting capabilities.
* MassTransit 7.5, September 2014

* MassTransit 7.3, February 2014

* MassTransit 7.2.7, August 2012

* MassTransit 7.2.6, March 2012

* MassTransit 7.2.5, April 2012

* MassTransit 7.2.4, November 2011

* MassTransit 7.2.3, June 2011

* MassTransit 7.0.0, July 1, 2010

* MassTransit 6.1.1, December 2009

* MassTransit 6.1, November 2009

* MassTransit 6.0.1x04, December 2008

* MassTransit 6.0 for Mac OS X and Windows, November 2008

* MassTransit 5.1.2, hot fixes for Mac OS X and Windows, March to July 2008

* MassTransit 5.1.1, hot fixes for Mac OS X and Windows, February 2008

* MassTransit 5.1 for Mac OS X and Windows, September 10, 2007

* MassTransit 5.0.2, hot fixes for Mac OS X and Windows, January to August 2007

* MassTransit 5.0.1, November 2006

* MassTransit 5.0

* MassTransit 4.5.1, a series of hot fixes, April 2004 through March 2006

* MassTransit 1.0, released as Adobe Virtual Network 1.0, April 1996
diff --git a/wiki/wikipedia/2659.txt b/wiki/wikipedia/2659.txt deleted file mode 100644 index 8ec288072dba5c07085b2ef3c0fcc45e5a029720..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2659.txt +++ /dev/null @@ -1,12 +0,0 @@
Harcourt's theorem is a formula in geometry for the area of a triangle, as a function of its side lengths and the perpendicular distances of its vertices from an arbitrary line tangent to its incircle.

The theorem is named after J. Harcourt, an Irish professor.

Let a triangle be given with vertices A, B, and C, opposite sides of lengths a, b, and c, area K, and a line that is tangent to the triangle's incircle at any point on that circle. Denote the signed perpendicular distances of the vertices from the line as a ', b ', and c ', with a distance being negative if and only if the vertex is on the opposite side of the line from the incenter. Then
$$
a a ^\prime + b b^\prime + c c^\prime = 2K.
$$

If the tangent line contains one of the sides of the triangle, then two of the distances are zero and the formula collapses to the familiar formula that twice the area of a triangle is a base (the coinciding triangle side) times the altitude from that base.

If the line is instead tangent to the excircle opposite, say, vertex A of the triangle, then
$$
-a a ^\prime + b b^\prime + c c^\prime = 2K.
$$
diff --git a/wiki/wikipedia/266.txt b/wiki/wikipedia/266.txt deleted file mode 100644 index 23780e0d70dff68ee0c7236d7fcb6f10447f9f8e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/266.txt +++ /dev/null @@ -1,39 +0,0 @@
In discrete mathematics and theoretical computer science, the rotation distance between two binary trees with the same number of nodes is the minimum number of tree rotations needed to reconfigure one tree into another. Because of a combinatorial equivalence between binary trees and triangulations of convex polygons, rotation distance is equivalent to the flip distance for triangulations of convex polygons.

Rotation distance was first defined by Karel Čulík II and Derick Wood in 1982. Every two n-node binary trees have rotation distance at most 2n - 6, and some pairs of trees have exactly this distance. The computational complexity of computing the rotation distance is unknown.

A binary tree is a structure consisting of a set of nodes, one of which is designated as the root node, in which each remaining node is either the left child or right child of some other node, its parent, and in which following the parent links from any node eventually leads to the root node.
- -(In some sources, the nodes described here are called "internal nodes", there exists another set of nodes called "external nodes", each internal node is required to have exactly two children, and each external node is required to have zero children. The version described here can be obtained by removing all the external nodes from such a tree.) - -For any node x in the tree, there is a subtree of the same form, rooted at x and consisting of all the nodes that can reach x by following parent links. Each binary tree has a left-to-right ordering of its nodes, its inorder traversal, obtained by recursively traversing the left subtree (the subtree at the left child of the root, if such a child exists), then listing the root itself, and then recursively traversing the right subtree. - -In a binary search tree, each node is associated with a search key, and the left-to-right ordering is required to be consistent with the order of the keys. - -A tree rotation is an operation that changes the structure of a binary tree without changing its left-to-right ordering. Several self-balancing binary search tree data structures use these rotations as a primitive operation in their rebalancing algorithms. A rotation operates on two nodes x and y, where x is the parent of y, and restructures the tree by making y be the parent of x and taking the place of x in the tree. To free up one of the child links of y and make room to link x as a child of y, this operation may also need to move one of the children of y to become a child of x. - -There are two variations of this operation, a right rotation in which y begins as the left child of x and x ends as the right child of y, and a left rotation in which y begins as the right child of x and x ends as the left child of y. - -Any two trees that have the same left-to-right sequence of nodes may be transformed into each other by a sequence of rotations. The rotation distance between the two trees is the number of rotations in the shortest possible sequence of rotations that performs this transformation. It can also be described as the shortest path distance in a rotation graph, a graph that has a vertex for each binary tree on a given left-to-right sequence of nodes and an edge for each rotation between two trees. This rotation graph is exactly the graph of vertices and edges of an associahedron. - -Given a family of triangulations of some geometric object, a flip is an operation that transforms one triangulation to another by removing an edge between two triangles and adding the opposite diagonal to the resulting quadrilateral. The flip distance between two triangulations is the minimum number of flips needed to transform one triangulation into another. It can also be described as the shortest path distance in a flip graph, a graph that has a vertex for each triangulation and an edge for each flip between two triangulations. Flips and flip distances can be defined in this way for several different kinds of triangulations, including triangulations of sets of points in the Euclidean plane, triangulations of polygons, and triangulations of abstract manifolds. - -There is a one-to-one correspondence between triangulations of a given convex polygon, with a designated root edge, and binary trees, taking triangulations of n-sided polygons into binary trees with n - 2 nodes. In this correspondence, each triangle of a triangulation corresponds to a node in a binary tree. 
The root node is the triangle having the designated root edge as one of its sides, and two nodes are linked as parent and child in the tree when the corresponding triangles share a diagonal in the triangulation.

Under this correspondence, rotations in binary trees correspond exactly to flips in the corresponding triangulations. Therefore, the rotation distance on (n - 2)-node trees corresponds exactly to flip distance on triangulations of n-sided convex polygons.

Čulík and Wood define the "right spine" of a binary tree to be the path obtained by starting from the root and following right child links until reaching a node that has no right child. If a tree has the property that not all nodes belong to the right spine, there always exists a right rotation that increases the length of the right spine. For, in this case, there exists at least one node x on the right spine that has a left child y that is not on the right spine. Performing a right rotation on x and y adds y to the right spine without removing any other node from it. By repeatedly increasing the length of the right spine, any n-node tree can be transformed into the unique tree with the same node order in which all nodes belong to the right spine, in at most n - 1 steps. Given any two trees with the same node order, one can transform one into the other by transforming the first tree into a tree with all nodes on the right spine, and then reversing the same transformation of the second tree, in a total of at most 2n - 2 steps. Therefore, as Čulík and Wood proved, the rotation distance between any two trees is at most 2n - 2.

By considering the problem in terms of flips of convex polygons instead of rotations of trees, Sleator, Tarjan, and Thurston were able to show that the rotation distance is at most 2n - 6. In terms of triangulations of convex polygons, the right spine is the sequence of triangles incident to the right endpoint of the root edge, and the tree in which all nodes lie on the spine corresponds to a fan triangulation for this vertex. The main idea of their improvement is to try flipping both given triangulations to a fan triangulation for any vertex, rather than only the one for the right endpoint of the root edge. Not all of these choices can simultaneously give the worst-case distance n - 1 from each starting triangulation, and this yields the improvement.

The same authors also used a geometric argument to show that, for infinitely many values of n, the maximum rotation distance is exactly 2n - 6. They again use the interpretation of the problem in terms of flips of triangulations of convex polygons, and they interpret the starting and ending triangulations as the top and bottom faces of a convex polyhedron, with the convex polygon itself interpreted as a Hamiltonian circuit in this polyhedron. Under this interpretation, a sequence of flips from one triangulation to the other can be translated into a collection of tetrahedra that triangulate the given three-dimensional polyhedron. They find a family of polyhedra with the property that (in three-dimensional hyperbolic geometry) the polyhedra have large volume, but all tetrahedra inside them have much smaller volume, implying that many tetrahedra are needed in any triangulation. The binary trees obtained from translating the top and bottom sets of faces of these polyhedra back into trees have high rotation distance, at least 2n - 6.
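For small n, rotation distances (and the diameter of the rotation graph) can be computed directly by breadth-first search. The sketch below is a brute-force illustration; the nested-tuple tree representation and helper names are my own choices:

```python
# Brute-force rotation distance: trees with n internal nodes are nested
# 2-tuples with 'x' marking a leaf; a rotation is a local re-bracketing.
from collections import deque

def trees(n):
    """All binary trees with n internal nodes."""
    if n == 0:
        return ['x']
    return [(l, r) for i in range(n) for l in trees(i) for r in trees(n - 1 - i)]

def rotations(t):
    """All trees reachable from t by a single rotation."""
    if t == 'x':
        return
    l, r = t
    if l != 'x':                      # right rotation at the root: ((a,b),r) -> (a,(b,r))
        a, b = l
        yield (a, (b, r))
    if r != 'x':                      # left rotation at the root: (l,(b,c)) -> ((l,b),c)
        b, c = r
        yield ((l, b), c)
    for l2 in rotations(l):           # rotations inside the subtrees
        yield (l2, r)
    for r2 in rotations(r):
        yield (l, r2)

def rotation_distance(s, t):
    """Breadth-first search in the rotation graph."""
    seen, frontier = {s}, deque([(s, 0)])
    while frontier:
        u, d = frontier.popleft()
        if u == t:
            return d
        for v in rotations(u):
            if v not in seen:
                seen.add(v)
                frontier.append((v, d + 1))

n = 4
all_trees = trees(n)
# Diameter of the rotation graph on 4-node trees; for small n this stays
# below the 2n - 6 bound, which is only attained for large enough n.
print(max(rotation_distance(a, b) for a in all_trees for b in all_trees))
```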
Subsequently, Pournin proved that for all n ≥ 11, the maximum rotation distance is exactly 2n - 6. Pournin's proof is combinatorial, and avoids the use of hyperbolic geometry.

As well as defining rotation distance, Čulík and Wood asked for the computational complexity of computing the rotation distance between two given trees. The existence of short rotation sequences between any two trees implies that testing whether the rotation distance is at most k belongs to the complexity class NP, but it is not known to be NP-complete, nor is it known to be solvable in polynomial time. Fordham's algorithm computes a restricted rotation distance in linear time, but allows only two kinds of rotations: (ab)c = a(bc) and a((bc)d) = a(b(cd)). Fordham's algorithm relies on a classification of nodes into 7 types, and a lookup table is used to find the number of rotations required to transform a node of one type into another.

The rotation distance between any two trees can be lower bounded, in the equivalent view of polygon triangulations, by the number of diagonals that need to be removed from one triangulation and replaced by other diagonals to produce the other triangulation. It can also be upper bounded by twice this number, by partitioning the problem into subproblems along any diagonals shared between both triangulations and then applying the method of Čulík and Wood to each subproblem. This method provides an approximation algorithm for the problem with an approximation ratio of two. A similar approach of partitioning into subproblems along shared diagonals leads to a fixed-parameter tractable algorithm for computing the rotation distance exactly.

Determining the complexity of computing the rotation distance exactly, without parameterization, remains unsolved, and the best algorithms currently known for the problem run in exponential time. Yet, the existence of Fordham's algorithm suggests that a linear-time algorithm for computing rotation distance may exist.
diff --git a/wiki/wikipedia/2660.txt b/wiki/wikipedia/2660.txt deleted file mode 100644 index dd6a2bc0fe5c3d0e4ffa75785be043ad9c6fa1b1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2660.txt +++ /dev/null @@ -1,224 +0,0 @@
In mathematics, the Lasker–Noether theorem states that every Noetherian ring is a Lasker ring, which means that every ideal can be decomposed as an intersection, called a primary decomposition, of finitely many primary ideals (which are related to, but not quite the same as, powers of prime ideals). The theorem was first proven by Emanuel Lasker for the special case of polynomial rings and convergent power series rings, and was proven in its full generality by Emmy Noether.

The Lasker–Noether theorem is an extension of the fundamental theorem of arithmetic, and more generally of the fundamental theorem of finitely generated abelian groups, to all Noetherian rings. The Lasker–Noether theorem plays an important role in algebraic geometry, by asserting that every algebraic set may be uniquely decomposed into a finite union of irreducible components.

It has a straightforward extension to modules stating that every submodule of a finitely generated module over a Noetherian ring is a finite intersection of primary submodules. This contains the case for rings as a special case, considering the ring as a module over itself, so that ideals are submodules. This also generalizes the primary decomposition form of the structure theorem for finitely generated modules over a principal ideal domain, and for the special case of polynomial rings over a field, it generalizes the decomposition of an algebraic set into a finite union of (irreducible) varieties.
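The analogy with the fundamental theorem of arithmetic can be made concrete mechanically; a minimal sketch (using sympy's integer factorization, with 360 as an arbitrary illustrative choice):

```python
# Primary decomposition in Z: the ideal (n) is the intersection of the
# primary ideals (p^d) for the prime powers p^d in the factorization of n.
from math import lcm
from sympy import factorint

n = 360
primary_components = [p**d for p, d in factorint(n).items()]   # [8, 9, 5]
print(primary_components)

# In Z, the intersection of principal ideals (a) and (b) is (lcm(a, b)),
# so intersecting the primary components recovers (n).
m = 1
for q in primary_components:
    m = lcm(m, q)
assert m == n
```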
The first algorithm for computing primary decompositions for polynomial rings over a field of characteristic 0 was published by Noether's student Grete Hermann. The decomposition does not hold in general for non-commutative Noetherian rings: Noether gave an example of a non-commutative Noetherian ring with a right ideal that is not an intersection of primary ideals.

Let $R$ be a Noetherian commutative ring. An ideal $I$ of $R$ is called primary if it is a proper ideal and for each pair of elements $x$ and $y$ in $R$ such that $xy$ is in $I$, either $x$ or some power of $y$ is in $I$; equivalently, every zero-divisor in the quotient $R/I$ is nilpotent. The radical of a primary ideal $Q$ is a prime ideal, and $Q$ is said to be $\mathfrak{p}$-primary for $\mathfrak{p} = \sqrt{Q}$.

Let $I$ be an ideal in $R$. Then $I$ has an irredundant primary decomposition into primary ideals:
$$
I = Q_1 \cap \cdots \cap Q_n.
$$

Irredundancy means:

*Removing any of the $Q_i$ changes the intersection, i.e., for each $i$ we have $\cap_{j \ne i} Q_j \not\subset Q_i$.

*The prime ideals $\sqrt{Q_i}$ are all distinct.

Moreover, this decomposition is unique in two ways:

*The set $\{ \sqrt{Q_i} \mid i \}$ is uniquely determined by $I$, and

*If $\mathfrak{p} = \sqrt{Q_i}$ is a minimal element of the above set, then $Q_i$ is uniquely determined by $I$; in fact, $Q_i$ is the pre-image of $I R_{\mathfrak{p}}$ under the localization map $R \to R_{\mathfrak{p}}$.

Primary ideals which correspond to non-minimal prime ideals over $I$ are in general not unique (see an example below). For the existence of the decomposition, see #Primary decomposition from associated primes below.

The elements of $\{ \sqrt{Q_i} \mid i \}$ are called the prime divisors of $I$ or the primes belonging to $I$. In the language of module theory, as discussed below, the set $\{ \sqrt{Q_i} \mid i \}$ is also the set of associated primes of the $R$-module $R/I$. Explicitly, this means that there exist elements $g_1, \dots, g_n$ in $R$ such that
$$
\sqrt{Q_i} = \{ f \in R \mid fg_i \in I \}.
$$

As a shortcut, some authors call an associated prime of $R/I$ simply an associated prime of $I$ (note that this practice conflicts with the usage in module theory).

*The minimal elements of $\{ \sqrt{Q_i} \mid i \}$ are the same as the minimal prime ideals containing $I$ and are called isolated primes.

*The non-minimal elements, on the other hand, are called the embedded primes.

In the case of the ring of integers $\mathbb Z$, the Lasker–Noether theorem is equivalent to the fundamental theorem of arithmetic. If an integer $n$ has prime factorization $n = \pm p_1^{d_1} \cdots p_r^{d_r}$, then the primary decomposition of the ideal $\langle n \rangle$ generated by $n$ in $\mathbb Z$ is
$$
\langle n\rangle = \langle p_1^{d_1} \rangle \cap \cdots \cap \langle p_r^{d_r}\rangle.
$$

Similarly, in a unique factorization domain, if an element has a prime factorization $f = u p_1^{d_1} \cdots p_r^{d_r},$ where $u$ is a unit, then the primary decomposition of the principal ideal generated by $f$ is
$$
\langle f\rangle = \langle p_1^{d_1} \rangle \cap \cdots \cap \langle p_r^{d_r}\rangle.
$$

The examples of this section are designed to illustrate some properties of primary decompositions that may appear surprising or counter-intuitive. All examples are ideals in a polynomial ring over a field k.
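Decompositions like the ones discussed below can also be checked mechanically: the intersection of two polynomial ideals is computable by Gröbner-basis elimination. As a sketch (assuming sympy; the auxiliary variable t is the standard elimination trick), the following verifies, for one of the examples that follows, that $\langle x\rangle \cap \langle x^2, xy, y^2\rangle = \langle x^2, xy\rangle$:

```python
# Intersection of ideals via elimination: (I1 cap I2) is the part of the
# ideal ( t*I1 + (1-t)*I2 ) that does not involve t.
from sympy import symbols, groebner

t, x, y = symbols('t x y')

I1 = [x]                      # the ideal (x)
I2 = [x**2, x*y, y**2]        # the primary ideal (x^2, xy, y^n) with n = 2

J = [t*f for f in I1] + [(1 - t)*g for g in I2]
G = groebner(J, t, x, y, order='lex')   # lex order with t first eliminates t
intersection = [g for g in G.exprs if t not in g.free_symbols]
print(intersection)   # expected: a basis of (x^2, xy)
```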
The primary decomposition in $k[x,y,z]$ of the ideal $I=\langle x,yz \rangle$ is
$$
I = \langle x,yz \rangle = \langle x,y \rangle \cap \langle x,z \rangle.
$$

Because of the generator of degree one, I is not the product of two larger ideals. A similar example is given, in two indeterminates, by
$$
I = \langle x,y(y+1) \rangle = \langle x,y \rangle \cap \langle x,y+1 \rangle.
$$

In $k[x,y]$, the ideal $\langle x,y^2 \rangle$ is a primary ideal that has $\langle x,y \rangle$ as associated prime. It is not a power of its associated prime.

For every positive integer n, a primary decomposition in $k[x,y]$ of the ideal $I=\langle x^2, xy \rangle$ is
$$
I = \langle x^2,xy \rangle = \langle x \rangle \cap \langle x^2, xy, y^n \rangle.
$$

The associated primes are
$$
\langle x \rangle \subset \langle x,y \rangle.
$$

Example: Let $R = k[x, y]$ for some field k, and let M be the ideal $\langle xy, y^2\rangle$. Then M has two different minimal primary decompositions
$$
M = \langle y \rangle \cap \langle x, y^2 \rangle = \langle y \rangle \cap \langle x + y, y^2 \rangle.
$$

The minimal prime is $\langle y \rangle$ and the embedded prime is $\langle x, y \rangle$.

In $k[x,y,z],$ the ideal $I=\langle x^2, xy, xz \rangle$ has the (non-unique) primary decomposition
$$
I = \langle x^2,xy, xz \rangle = \langle x \rangle \cap \langle x^2, y^2, z^2, xy, xz, yz \rangle.
$$

The associated prime ideals are $\langle x \rangle \subset \langle x,y,z \rangle,$ and $\langle x, y \rangle$ is a non-associated prime ideal such that
$$
\langle x \rangle \subset \langle x,y \rangle \subset \langle x,y,z \rangle.
$$

Except in very simple cases, a primary decomposition may be hard to compute and may have a very complicated output. The following example has been designed to provide such a complicated output while nevertheless being accessible to hand computation.

Let
$$
\begin{align}
P&=a_0x^m + a_1x^{m-1}y +\cdots +a_my^m \\
Q&=b_0x^n + b_1x^{n-1}y +\cdots +b_ny^n
\end{align}
$$

be two homogeneous polynomials in x, y, whose coefficients $a_0, \ldots, a_m, b_0, \ldots, b_n$ are polynomials in other indeterminates $z_1, \ldots, z_h$ over a field k. That is, P and Q belong to $R=k[x,y,z_1, \ldots, z_h],$ and it is in this ring that a primary decomposition of the ideal $I=\langle P,Q\rangle$ is sought. For computing the primary decomposition, we suppose first that 1 is a greatest common divisor of P and Q.

This condition implies that I has no primary component of height one. As I is generated by two elements, this implies that it is a complete intersection (more precisely, it defines an algebraic set, which is a complete intersection), and thus all primary components have height two. Therefore, the associated primes of I are exactly the prime ideals of height two that contain I.

It follows that $\langle x,y\rangle$ is an associated prime of I.

Let $D\in k[z_1, \ldots, z_h]$ be the homogeneous resultant in x, y of P and Q. As the greatest common divisor of P and Q is a constant, the resultant D is not zero, and resultant theory implies that I contains all products of D by a monomial in x, y of degree m + n – 1. As $D\not\in \langle x,y\rangle,$ all these monomials belong to the primary component contained in $\langle x,y\rangle.$ This primary component contains P and Q, and the behavior of primary decompositions under localization shows that this primary component is
$$
\{t \in R \mid \exists e,\ D^e t \in I\}.
$$
In short, we have a primary component, with the very simple associated prime $\langle x,y\rangle,$ such that all its generating sets involve all indeterminates.

The other primary component contains D. One may prove that if P and Q are sufficiently generic (for example if the coefficients of P and Q are distinct indeterminates), then there is only one other primary component, which is a prime ideal, generated by P, Q and D.

In algebraic geometry, an affine algebraic set V(I) is defined as the set of the common zeros of an ideal I of a polynomial ring $R=k[x_1,\ldots, x_n].$

An irredundant primary decomposition
$$
I=Q_1\cap\cdots\cap Q_r
$$

of I defines a decomposition of V(I) into a union of algebraic sets V(Qi), which are irreducible, in the sense of not being the union of two smaller algebraic sets.

If $P_i$ is the associated prime of $Q_i$, then $V(P_i)=V(Q_i),$ and the Lasker–Noether theorem shows that V(I) has a unique irredundant decomposition into irreducible algebraic varieties
$$
V(I)=\bigcup V(P_i),
$$

where the union is restricted to minimal associated primes. These minimal associated primes are the primary components of the radical of I. For this reason, the primary decomposition of the radical of I is sometimes called the prime decomposition of I.

The components of a primary decomposition (as well as of the algebraic set decomposition) corresponding to minimal primes are said to be isolated, and the others are said to be embedded.

For the decomposition of algebraic varieties, only the minimal primes are interesting, but in intersection theory, and, more generally, in scheme theory, the complete primary decomposition has a geometric meaning.

Nowadays, it is common to do primary decomposition of ideals and modules within the theory of associated primes. Bourbaki's influential textbook Algèbre commutative, in particular, takes this approach.

Let R be a ring and M a module over it. By definition, an associated prime is a prime ideal appearing in the set $\{ \operatorname{Ann}(m) \mid 0 \neq m \in M \}$ of annihilators of nonzero elements of M. Equivalently, a prime ideal $\mathfrak{p}$ is an associated prime of M if there is an injection of R-modules $R/\mathfrak{p} \hookrightarrow M$.

A maximal element of the set of annihilators of nonzero elements of M can be shown to be a prime ideal; thus, when R is a Noetherian ring, M is nonzero if and only if there exists an associated prime of M.

The set of associated primes of M is denoted by $\operatorname{Ass}_R(M)$ or $\operatorname{Ass}(M)$. Directly from the definition:

*If $M = \bigoplus_i M_i$, then $\operatorname{Ass}(M) = \bigcup_i \operatorname{Ass}(M_i)$.

*For an exact sequence $0 \to N \to M \to L \to 0$, $\operatorname{Ass}(N) \subset \operatorname{Ass}(M) \subset \operatorname{Ass}(N) \cup \operatorname{Ass}(L)$.

*If R is a Noetherian ring, then $\operatorname{Ass}(M) \subset \operatorname{Supp}(M)$, where $\operatorname{Supp}$ refers to the support. Also, the set of minimal elements of $\operatorname{Ass}(M)$ is the same as the set of minimal elements of $\operatorname{Supp}(M)$.

If M is a finitely generated module over R, then there is a finite ascending sequence of submodules
$$
0=M_0\subsetneq M_1\subsetneq\cdots\subsetneq M_{n-1}\subsetneq M_n=M
$$

such that each quotient Mi/Mi−1 is isomorphic to $R/\mathfrak{p}_i$ for some prime ideals $\mathfrak{p}_i$, each of which is necessarily in the support of M.
Moreover, every associated prime of M occurs among the set of primes $\mathfrak{p}_i$; i.e.,
$$
\operatorname{Ass}(M) \subset \{ \mathfrak{p}_1, \dots, \mathfrak{p}_n \} \subset \operatorname{Supp}(M).
$$

(In general, these inclusions are not equalities.) In particular, $\operatorname{Ass}(M)$ is a finite set when M is finitely generated.

Let $M$ be a finitely generated module over a Noetherian ring R and N a submodule of M. Given $\operatorname{Ass}(M/N) = \{ \mathfrak{p}_1, \dots, \mathfrak{p}_n \}$, the set of associated primes of $M/N$, there exist submodules $Q_i \subset M$ such that $\operatorname{Ass}(M/Q_i) = \{ \mathfrak{p}_i \}$ and
$$
N = \bigcap_{i=1}^n Q_i.
$$

A submodule N of M is called $\mathfrak{p}$-primary if $\operatorname{Ass}(M/N) = \{ \mathfrak{p} \}$. A submodule of the R-module R is $\mathfrak{p}$-primary as a submodule if and only if it is a $\mathfrak{p}$-primary ideal; thus, when $M = R$, the above decomposition is precisely a primary decomposition of an ideal.

Taking $N = 0$, the above decomposition says that the set of associated primes of a finitely generated module M is the same as $\{ \operatorname{Ass}(M/Q_i) \mid i \}$ when $0 = \cap_1^n Q_i$. (Without finite generation, there can be infinitely many associated primes.)

Let $R$ be a Noetherian ring. Then

*The set of zero-divisors on R is the same as the union of the associated primes of R (this is because the set of zero-divisors of R is the union of the sets of annihilators of nonzero elements, the maximal elements of which are associated primes).

*For the same reason, the union of the associated primes of an R-module M is exactly the set of zero-divisors on M, that is, the set of elements r such that the endomorphism $m \mapsto rm, M \to M$ is not injective.

*Given an R-module M and a subset $\Phi \subset \operatorname{Ass}(M)$, there exists a submodule $N \subset M$ such that $\operatorname{Ass}(N) = \operatorname{Ass}(M) - \Phi$ and $\operatorname{Ass}(M/N) = \Phi$.

*Let $S \subset R$ be a multiplicative subset, $M$ an $R$-module and $\Phi$ the set of all prime ideals of $R$ not intersecting $S$. Then
$$
\mathfrak{p} \mapsto S^{-1}\mathfrak{p}, \quad \operatorname{Ass}_R(M)\cap \Phi \to \operatorname{Ass}_{S^{-1}R}(S^{-1} M)
$$

is a bijection. Also, $\operatorname{Ass}_R(M)\cap \Phi = \operatorname{Ass}_R(S^{-1}M)$.

*Any prime ideal minimal with respect to containing an ideal J is in $\mathrm{Ass}_R(R/J).$ These primes are precisely the isolated primes.

*A module M over R has finite length if and only if M is finitely generated and $\mathrm{Ass}(M)$ consists of maximal ideals.

*Let $A \to B$ be a ring homomorphism between Noetherian rings and F a B-module that is flat over A. Then, for each A-module E,
$$
\operatorname{Ass}_B(E \otimes_A F) = \bigcup_{\mathfrak{p} \in \operatorname{Ass}(E)} \operatorname{Ass}_B(F/\mathfrak{p}F).
$$

The next theorem gives necessary and sufficient conditions for a ring to have primary decompositions for its ideals.

Let R be a commutative ring. Then the following are equivalent.

# Every ideal in R has a primary decomposition.

# R has the following properties:

#*(L1) For every proper ideal I and prime ideal P, there exists an x in R − P such that (I : x) is the pre-image of I RP under the localization map R → RP.

#*(L2) For every ideal I, the set of all pre-images of I S−1R under the localization map R → S−1R, with S running over all multiplicatively closed subsets of R, is finite.
The proof is given in Chapter 4 of Atiyah–MacDonald as a series of exercises.

There is the following uniqueness theorem for an ideal having a primary decomposition.

Theorem. Let R be a commutative ring and I an ideal. Suppose I has a minimal primary decomposition $I = \cap_1^r Q_i$ (note: "minimal" implies the $\sqrt{Q_i}$ are distinct). Then

# The set $E = \left \{ \sqrt{Q_i} \mid 1 \le i \le r \right \}$ is the set of all prime ideals in the set $\left\{ \sqrt{(I : x)} \mid x \in R \right\}$.

# The set of minimal elements of E is the same as the set of minimal prime ideals over I. Moreover, the primary ideal corresponding to a minimal prime P is the pre-image of I RP and thus is uniquely determined by I.

Now, for any commutative ring R, an ideal I and a minimal prime P over I, the pre-image of I RP under the localization map is the smallest P-primary ideal containing I. Thus, in the setting of the preceding theorem, the primary ideal Q corresponding to a minimal prime P is also the smallest P-primary ideal containing I and is called the P-primary component of I.

For example, if the power Pn of a prime P has a primary decomposition, then its P-primary component is the n-th symbolic power of P.

This result is the first in an area now known as the additive theory of ideals, which studies the ways of representing an ideal as the intersection of a special class of ideals. The decision on the "special class", e.g., primary ideals, is a problem in itself. In the case of non-commutative rings, the class of tertiary ideals is a useful substitute for the class of primary ideals.
diff --git a/wiki/wikipedia/2661.txt b/wiki/wikipedia/2661.txt deleted file mode 100644 index b5e46de8e5b3e8d2e6db98fb22aa4eb87fc9df79..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2661.txt +++ /dev/null @@ -1,68 +0,0 @@
Byzantine fault tolerant protocols are algorithms that are robust to arbitrary types of failures in distributed algorithms. The Byzantine agreement protocol is an essential part of this task. The constant-time quantum version of the Byzantine protocol is described below.

The Byzantine agreement protocol is a protocol in distributed computing.

It takes its name from a problem formulated by Lamport, Shostak and Pease in 1982, which itself is a reference to a historical problem. The Byzantine army was divided into divisions, with each division led by a General with the following properties:

*Each General is either loyal or a traitor to the Byzantine state.

*All Generals communicate by sending and receiving messages.

*There are only two commands: attack and retreat.

*All loyal Generals should agree on the same plan of action: attack or retreat.

*A small linear fraction of bad Generals should not cause the protocol to fail (less than a $\tfrac{1}{3}$ fraction).

(See the original paper for a proof of the impossibility result.)

The problem is usually restated equivalently in the form of a commanding General and loyal Lieutenants, with the General being either loyal or a traitor, and likewise for the Lieutenants, with the following properties:

*All loyal Lieutenants carry out the same order.

*If the commanding General is loyal, all loyal Lieutenants obey the order that he sends.

*A strictly less than $\tfrac{1}{3}$ fraction, including the commanding General, are traitors.
- -Failures in an algorithm or protocol can be categorized into three main types: - -# A failure to take another execution step in the algorithm: this is usually referred to as a "fail stop" fault. - -# A random failure to execute correctly: this is called a "random fault" or "random Byzantine" fault. - -# An arbitrary failure where the algorithm fails to execute the steps correctly (usually in a clever way by some adversary to make the whole algorithm fail), which also encompasses the previous two types of faults; this is called a "Byzantine fault". - -A Byzantine resilient or Byzantine fault tolerant protocol or algorithm is an algorithm that is robust to all the kinds of failures mentioned above. For example, given a space shuttle with multiple redundant processors, if the processors give conflicting data, which processors or sets of processors should be believed? The solution can be formulated as a Byzantine fault tolerant protocol. - -We will sketch here the asynchronous algorithm, which is built from the following primitives. - -* A weak coin flipping protocol: the two players A and B initially start with no inputs and they are to compute some value $c_{A},c_{B} \in \{0,1\}$ and be able to accuse anyone of cheating. The protocol is successful if A and B agree on the outcome. The outcome 0 is defined as A winning and 1 as B winning. The protocol has the following properties: - -**If both players are honest (they follow the protocol), then they agree on the outcome of the protocol $ c_{A} = c_{B}$ with $ Pr(c_{A} = c_{B} = b) = \tfrac{1}{2}$ for $b \in \{0, 1\}$. - -**If one of the players is honest (i.e., the other player may deviate arbitrarily from the protocol in his or her local computation), then the dishonest party wins with probability at most $\tfrac{1}{2} + \epsilon$. In other words, if B is dishonest, then $Pr(c_{A} = c_{B} = 1) \leq \tfrac{1}{2} + \epsilon$, and if A is dishonest, then $Pr(c_{A} = c_{B} = 0)\leq \tfrac{1}{2} + \epsilon $. - -* A strong coin flipping protocol: in a strong coin flipping protocol, the goal is instead to produce a random bit which is biased away from any particular value 0 or 1. Clearly, any strong coin flipping protocol with bias $\epsilon$ leads to weak coin flipping with the same bias. - -* A verifiable secret sharing protocol: a (n,k) secret sharing protocol allows a set of n players to share a secret, s, such that only a quorum of k or more players can discover the secret. The player sharing (distributing the secret pieces) the secret is usually referred to as the dealer. A verifiable secret sharing protocol differs from a basic secret sharing protocol in that players can verify that their shares are consistent even in the presence of a malicious dealer. - -The QuantumCoinFlip protocol for player $P_i$ proceeds as follows: - -# Round 1: Generate the GHZ state $|\mathrm{Coin}_i\rangle =\tfrac{1}{\sqrt{2}}|0,0,\ldots,0\rangle + \tfrac{1}{\sqrt{2}}|1,1,\ldots,1\rangle$ on $n$ qubits and send the $k$th qubit to the $k$th player, keeping one part. - -# Generate the state $|\mathrm{Leader}_i\rangle= \tfrac{1}{n^{3/2}}\sum\nolimits_{a=1}^{n^3}|a,a,\ldots,a\rangle$ on $n$ qubits, an equal superposition of the numbers between 1 and $n^3$. Distribute the $n$ qubits among all the players. - -# Receive the quantum messages from all players and wait for the next communication round, thus forcing the adversary to choose which messages were passed. - -# Round 2: Measure (in the standard basis) all $\mathrm{Leader}_{j}$ qubits received in round 1. Select the player with the highest leader value (ties broken arbitrarily) as the "leader" of the round. Measure the leader's coin in the standard basis.
- -# Set the output of the QuantumCoinFlip protocol: $v_{i}$ = measurement outcome of the leader's coin. - -To generate a random coin, assign an integer in the range $[0,n-1]$ to each player. Since a player is not allowed to choose its own random ID, each player $P_k$ selects a random number $s_{k}^{i}$ for every other player $P_{i}$ and distributes this using a verifiable secret sharing scheme. - -At the end of this phase players agree on which secrets were properly shared; the secrets are then opened and each player $P_i$ is assigned the value -$$ -s_i = \sum_{k} s_{k}^{i} \mod n, -$$ -where the sum is taken over all secrets that were properly shared. - -This requires private information channels, so we replace the random secrets by the superposition $|\phi\rangle =\tfrac{1}{\sqrt{n}}\sum\nolimits_{a=0}^{n-1}|a\rangle$. We cannot simply distribute the state $|\phi,\phi,\ldots \phi\rangle$, since the bad players can collapse it. To prevent bad players from doing so, we encode the state using a quantum verifiable secret sharing (QVSS) protocol and send each player their share of the secret. Here again the verification requires Byzantine Agreement, but replacing the agreement by the grade-cast protocol is enough. - -The protocol has been demonstrated experimentally using a four-photon polarization-entangled state. This shows that the quantum implementation of classical Byzantine Agreement protocols is indeed feasible. diff --git a/wiki/wikipedia/2662.txt b/wiki/wikipedia/2662.txt deleted file mode 100644 index 55c13b26ebc9bfdfe5d992de54fe9db32aaeaedb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2662.txt +++ /dev/null @@ -1,14 +0,0 @@ -In real algebraic geometry, Harnack's curve theorem, named after Axel Harnack, gives the possible numbers of connected components that an algebraic curve can have, in terms of the degree of the curve. For any algebraic curve of degree m in the real projective plane, the number of components c is bounded by -$$ -\frac{1-(-1)^m}{2} \le c \le \frac{(m-1)(m-2)}{2}+1. -$$ - -The maximum number is one more than the maximum genus of a curve of degree m, attained when the curve is nonsingular. Moreover, any number of components in this range of possible values can be attained. - -A curve which attains the maximum number of real components is called an - -M-curve (from "maximum"); for example, an elliptic curve with two components, such as $y^2=x^3-x,$ or the Trott curve, a quartic with four components. - -This theorem formed the background to Hilbert's sixteenth problem. - -In a recent development a Harnack curve is shown to be a curve whose amoeba has area equal to that of the Newton polygon of the polynomial P, which is called the characteristic curve of dimer models, and every Harnack curve is the spectral curve of some dimer model (Kenyon). diff --git a/wiki/wikipedia/2663.txt b/wiki/wikipedia/2663.txt deleted file mode 100644 index f990675ca715a16aab4fd5c8d11f69a33ff6d08e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2663.txt +++ /dev/null @@ -1,14 +0,0 @@ -In the mathematical field of complex analysis, the Looman–Menchoff theorem states that a continuous complex-valued function defined in an open set of the complex plane is holomorphic if and only if it satisfies the Cauchy–Riemann equations.
It is thus a generalization of a theorem by Édouard Goursat, which instead of assuming the continuity of f, assumes its Fréchet differentiability when regarded as a function from a subset of $\mathbb{R}^2$ to $\mathbb{R}^2$. - -A complete statement of the theorem is as follows: - -* Let Ω be an open set in C and f : Ω → C be a continuous function. Suppose that the partial derivatives $\partial f/\partial x$ and $\partial f/\partial y$ exist everywhere but a countable set in Ω. Then f is holomorphic if and only if it satisfies the Cauchy–Riemann equation: -$$ -\frac{\partial f}{\partial\bar{z}} = \frac{1}{2}\left(\frac{\partial f}{\partial x} + i\frac{\partial f}{\partial y}\right)=0. -$$ - -Looman pointed out that the function given by $f(z) = \exp(-z^{-4})$ for z ≠ 0, f(0) = 0 satisfies the Cauchy–Riemann equations everywhere but is not analytic (or even continuous) at z = 0. This shows that the function f must be assumed continuous in the theorem. - -The function given by $f(z) = z^5/|z|^4$ for z ≠ 0, f(0) = 0 is continuous everywhere and satisfies the Cauchy–Riemann equations at z = 0, but is not analytic at z = 0 (or anywhere else). This shows that a naive generalization of the Looman–Menchoff theorem to a single point is false: - -* Let f be continuous in a neighborhood of a point z, and such that $\partial f/\partial x$ and $\partial f/\partial y$ exist at z. Then f is holomorphic at z if and only if it satisfies the Cauchy–Riemann equation at z. diff --git a/wiki/wikipedia/2664.txt b/wiki/wikipedia/2664.txt deleted file mode 100644 index c692a92b3acf5fd572bd967fb0a89615af1102d4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2664.txt +++ /dev/null @@ -1,21 +0,0 @@ -The reconstruction conjecture of Stanisław Ulam is one of the best-known open problems in graph theory. Using the terminology of Frank Harary it can be stated as follows: If G and H are two graphs on at least three vertices and ƒ is a bijection from V(G) to V(H) such that G\{v} and H\{ƒ(v)} are isomorphic for all vertices v in V(G), then G and H are isomorphic. - -In 1964 Harary extended the reconstruction conjecture to directed graphs on at least five vertices as the so-called digraph reconstruction conjecture. Many results supporting the digraph reconstruction conjecture appeared between 1964 and 1976. However, this conjecture was proved to be false when P. K. Stockmeyer discovered several infinite families of counterexample pairs of digraphs (including tournaments) of arbitrarily large order. The falsity of the digraph reconstruction conjecture caused doubt about the reconstruction conjecture itself. Stockmeyer even observed that “perhaps the considerable effort being spent in attempts to prove the (reconstruction) conjecture should be balanced by more serious attempts to construct counterexamples.” - -In 1979, Ramachandran revived the digraph reconstruction conjecture in a slightly weaker form called the new digraph reconstruction conjecture. In a digraph, the number of arcs incident from (respectively, to) a vertex v is called the outdegree (indegree) of v and is denoted by od(v) (respectively, id(v)). The new digraph reconstruction conjecture may be stated as follows: - -If D and E are any two digraphs and ƒ is a bijection from V(D) to V(E) such that D\{v} and E\{ƒ(v)} are isomorphic and (od(v),id(v)) = (od(ƒ(v)),id(ƒ(v))) for all v in V(D), then D and E are isomorphic.
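Both conjectures are statements about the multiset of vertex-deleted subgraphs (the "deck"), which makes them easy to experiment with on small graphs. The following Python sketch (added for illustration; it assumes the third-party networkx library, and the function names are our own) builds a graph's deck and naively tests whether two graphs have the same deck up to isomorphism:

```python
import networkx as nx

def deck(G):
    """Return the list of vertex-deleted subgraphs ("cards") of G."""
    cards = []
    for v in list(G.nodes):
        H = G.copy()
        H.remove_node(v)
        cards.append(H)
    return cards

def same_deck(G, H):
    """Check whether G and H have the same deck, up to isomorphism of cards.
    Brute force, so only suitable for small graphs."""
    remaining = deck(H)
    for card in deck(G):
        for i, other in enumerate(remaining):
            if nx.is_isomorphic(card, other):
                del remaining[i]  # match each card of G to a distinct card of H
                break
        else:
            return False
    return not remaining
```

The reconstruction conjecture asserts that, for graphs on at least three vertices, `same_deck(G, H)` being true forces G and H to be isomorphic.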
- -The new digraph reconstruction conjecture reduces to the reconstruction conjecture in the undirected case, because if all the vertex-deleted subgraphs of two graphs are isomorphic, then the corresponding vertices must have the same degree. Thus, the new digraph reconstruction conjecture is stronger than the reconstruction conjecture, but weaker than the disproved digraph reconstruction conjecture. Several families of digraphs have been shown to satisfy the new digraph reconstruction conjecture, and these include all the digraphs in the known counterexample pairs to the digraph reconstruction conjecture. - -* All digraphs are N-reconstructible if all digraphs with 2-connected underlying graphs are N-reconstructible. - -*All digraphs are N-reconstructible if and only if either of the following two classes of digraphs is N-reconstructible, where diam(D) and radius(D) are defined to be the diameter and radius of the underlying graph of D. - -*#Digraphs with diam(D) ≤ 2 or diam(D) = diam(D^c) = 3 - -*#Digraphs D with 2-connected underlying graphs and radius(D) ≤ 2 - -As of 2021, no counterexample to the new digraph reconstruction conjecture is known. This conjecture is now also known as the Degree Associated Reconstruction Conjecture. diff --git a/wiki/wikipedia/2665.txt b/wiki/wikipedia/2665.txt deleted file mode 100644 index 0ed6726bf552908c694f89bae9253f3a01470d0d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2665.txt +++ /dev/null @@ -1,30 +0,0 @@ -Destructive dilemma is the name of a valid rule of inference of propositional logic. It is the inference that, if P implies Q and R implies S and either Q is false or S is false, then either P or R must be false. In sum, if two conditionals are true, but one of their consequents is false, then one of their antecedents has to be false. Destructive dilemma is the disjunctive version of modus tollens, just as the constructive dilemma is the disjunctive version of modus ponens. The destructive dilemma rule can be stated: -$$ -\frac{P \to Q, R \to S, \neg Q \lor \neg S}{\therefore \neg P \lor \neg R} -$$ - -where the rule is that wherever instances of "$P \to Q$", "$R \to S$", and "$\neg Q \lor \neg S$" appear on lines of a proof, "$\neg P \lor \neg R$" can be placed on a subsequent line. - -The destructive dilemma rule may be written in sequent notation: -$$ -(P \to Q), (R \to S), (\neg Q \lor \neg S) \vdash (\neg P \lor \neg R) -$$ - -where $\vdash$ is a metalogical symbol meaning that $\neg P \lor \neg R$ is a syntactic consequence of $P \to Q$, $R \to S$, and $\neg Q \lor \neg S$ in some logical system; - -and expressed as a truth-functional tautology or theorem of propositional logic: -$$ -(((P \to Q) \land (R \to S)) \land (\neg Q \lor \neg S)) \to (\neg P \lor \neg R) -$$ - -where $P$, $Q$, $R$ and $S$ are propositions expressed in some formal system. For example: - -If it rains, we will stay inside. - -If it is sunny, we will go for a walk. - -Either we will not stay inside, or we will not go for a walk, or both. - -Therefore, either it will not rain, or it will not be sunny, or both.
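The tautology above can also be verified mechanically by brute force over all sixteen truth assignments; a small Python sketch (added for illustration):

```python
from itertools import product

def implies(a, b):
    # material implication: a -> b
    return (not a) or b

# Check ((P -> Q) and (R -> S) and (not Q or not S)) -> (not P or not R)
# on every row of the truth table.
for p, q, r, s in product([False, True], repeat=4):
    premises = implies(p, q) and implies(r, s) and ((not q) or (not s))
    conclusion = (not p) or (not r)
    assert implies(premises, conclusion)
print("destructive dilemma holds in every row")
```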
- -The validity of this argument structure can be shown by using both conditional proof (CP) and reductio ad absurdum (RAA). diff --git a/wiki/wikipedia/2666.txt b/wiki/wikipedia/2666.txt deleted file mode 100644 index a0323cb9ede3d5c74955d472efbd233477fb9a24..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2666.txt +++ /dev/null @@ -1,78 +0,0 @@ -In graph theory, a k-degenerate graph is an undirected graph in which every subgraph has a vertex of degree at most k: that is, some vertex in the subgraph touches k or fewer of the subgraph's edges. The degeneracy of a graph is the smallest value of k for which it is k-degenerate. The degeneracy of a graph is a measure of how sparse it is, and is within a constant factor of other sparsity measures such as the arboricity of a graph. - -Degeneracy is also known as the k-core number, width, and linkage, and is essentially the same as the coloring number or Szekeres–Wilf number (named after George Szekeres and Herbert Wilf). k-degenerate graphs have also been called k-inductive graphs. The degeneracy of a graph may be computed in linear time by an algorithm that repeatedly removes minimum-degree vertices. The connected components that are left after all vertices of degree less than k have been removed are called the k-cores of the graph, and the degeneracy of a graph is the largest value k such that it has a k-core. - -Every finite forest has either an isolated vertex (incident to no edges) or a leaf vertex (incident to exactly one edge); therefore, trees and forests are 1-degenerate graphs. Every 1-degenerate graph is a forest. - -Every finite planar graph has a vertex of degree five or less; therefore, every planar graph is 5-degenerate, and the degeneracy of any planar graph is at most five. Similarly, every outerplanar graph has degeneracy at most two, and the Apollonian networks have degeneracy three. - -The Barabási–Albert model for generating random scale-free networks is parameterized by a number m such that each vertex that is added to the graph is connected to m previously-added vertices. It follows that any subgraph of a network formed in this way has a vertex of degree at most m (the last vertex in the subgraph to have been added to the graph) and Barabási–Albert networks are automatically m-degenerate. - -Every k-regular graph has degeneracy exactly k. More strongly, the degeneracy of a graph equals its maximum vertex degree if and only if at least one of the connected components of the graph is regular of maximum degree. For all other graphs, the degeneracy is strictly less than the maximum degree. - -The coloring number of a graph G was defined by Erdős and Hajnal to be the least κ for which there exists an ordering of the vertices of G in which each vertex has fewer than κ neighbors that are earlier in the ordering. It should be distinguished from the chromatic number of G, the minimum number of colors needed to color the vertices so that no two adjacent vertices have the same color; the ordering which determines the coloring number provides an order to color the vertices of G with the coloring number, but in general the chromatic number may be smaller. - -The degeneracy of a graph G was defined by Lick and White as the least k such that every induced subgraph of G contains a vertex with k or fewer neighbors. The definition would be the same if arbitrary subgraphs are allowed in place of induced subgraphs, as a non-induced subgraph can only have vertex degrees that are smaller than or equal to the vertex degrees in the subgraph induced by the same vertex set.
- -The two concepts of coloring number and degeneracy are equivalent: in any finite graph the degeneracy is just one less than the coloring number. For, if a graph has an ordering with coloring number κ, then in each subgraph H the vertex that belongs to H and is last in the ordering has at most κ - 1 neighbors in H. In the other direction, if G is k-degenerate, then an ordering with coloring number k + 1 can be obtained by repeatedly finding a vertex v with at most k neighbors, removing v from the graph, ordering the remaining vertices, and adding v to the end of the order. - -A third, equivalent formulation is that G is k-degenerate (or has coloring number at most k + 1) if and only if the edges of G can be oriented to form a directed acyclic graph with outdegree at most k. Such an orientation can be formed by orienting each edge towards the earlier of its two endpoints in a coloring number ordering. In the other direction, if an orientation with outdegree k is given, an ordering with coloring number k + 1 can be obtained as a topological ordering of the resulting directed acyclic graph. - -A k-core of a graph G is a maximal connected subgraph of G in which all vertices have degree at least k. Equivalently, it is one of the connected components of the subgraph of G formed by repeatedly deleting all vertices of degree less than k. If a non-empty k-core exists, then, clearly, G has degeneracy at least k, and the degeneracy of G is the largest k for which G has a k-core. - -A vertex $u$ has coreness $c$ if it belongs to a $c$-core but not to any $(c+1)$-core. - -The concept of a k-core was introduced to study the clustering structure of social networks and to describe the evolution of random graphs. It has also been applied in bioinformatics, network visualization, Internet structure, spreading of economic crises, identifying influential spreaders, brain cortex structure or resilience of networks in ecology. Extensions of k-core structures in weighted networks have also been developed. A survey of the topic, covering the main concepts, important algorithmic techniques as well as some application domains, may be found in Malliaros. - -Bootstrap percolation is a random process studied as an epidemic model and as a model for fault tolerance for distributed computing. It consists of selecting a random subset of active cells from a lattice or other space, and then considering the k-core of the induced subgraph of this subset. In k-core or bootstrap percolation on weakly interconnected networks, the interconnections can be regarded as an external field at the transition. - -Matula and Beck outline an algorithm to derive the degeneracy ordering of a graph $G = (V, E)$ with vertex set $V$ and edge set $E$ in $\mathcal{O}(\vert V \vert + \vert E \vert)$ time and $2\vert E \vert + \mathcal{O}(\vert V \vert)$ space, by storing vertices in a degree-indexed bucket queue and repeatedly removing the vertex with the smallest degree. The degeneracy $k$ is given by the highest degree of any vertex at the time of its removal.
- -

INITIALISE empty output list L
INITIALISE degeneracy k ← 0
COMPUTE vertex degrees d_v for every v in V
INITIALISE bucket queue B, such that B[i] contains all vertices v of degree d_v = i
REPEAT |V| times:
    FIND B[i] with lowest i that is not empty
    REMOVE any vertex v from B[i] and add it to the start of L
    UPDATE k ← MAX(k, i)
    FOR all neighbours w of v not in L:
        MOVE w from B[j] to B[j - 1]
RETURN k, L

At the end of the algorithm, any vertex $L[i]$ will have at most $k$ edges to the vertices $L[1,\ldots,i-1]$. The $l$-cores of $G$ are the subgraphs $H_l \subset G$ that are induced by the vertices $L[1,\ldots,i]$, where $i$ is the first vertex with degree $\geq l$ at the time it is added to $L$.
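For concreteness, here is a direct Python transcription of the pseudocode (an added sketch with our own function name; for clarity it rescans for the lowest non-empty bucket at each step instead of maintaining it incrementally, so it does not achieve the stated linear running time):

```python
from collections import defaultdict

def degeneracy_ordering(adj):
    """adj maps each vertex to the set of its neighbours.
    Returns (k, L): the degeneracy k and an ordering L in which
    every vertex has at most k neighbours earlier in L."""
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    buckets = defaultdict(set)
    for v, d in degree.items():
        buckets[d].add(v)
    L, removed, k = [], set(), 0
    for _ in range(len(adj)):
        i = min(d for d, b in buckets.items() if b)  # lowest non-empty bucket
        v = buckets[i].pop()
        removed.add(v)
        L.append(v)   # removal order; reversed below
        k = max(k, i)
        for w in adj[v]:
            if w not in removed:
                buckets[degree[w]].discard(w)  # w loses its edge to v
                degree[w] -= 1
                buckets[degree[w]].add(w)
    L.reverse()  # equivalent to "add it to the start of L" in the pseudocode
    return k, L

# Example: a path on four vertices is a tree, hence 1-degenerate.
assert degeneracy_ordering({1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}})[0] == 1
```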
If a graph G is oriented acyclically with outdegree k, then its edges may be partitioned into k forests by choosing one forest for each outgoing edge of each node. Thus, the arboricity of G is at most equal to its degeneracy. In the other direction, an n-vertex graph that can be partitioned into k forests has at most k(n - 1) edges and therefore has a vertex of degree at most 2k - 1; thus, the degeneracy is less than twice the arboricity. One may also compute in polynomial time an orientation of a graph that minimizes the outdegree but is not required to be acyclic. The edges of a graph with such an orientation may be partitioned in the same way into k pseudoforests, and conversely any partition of a graph's edges into k pseudoforests leads to an outdegree-k orientation (by choosing an outdegree-1 orientation for each pseudoforest), so the minimum outdegree of such an orientation is the pseudoarboricity, which again is at most equal to the degeneracy. The thickness is also within a constant factor of the arboricity, and therefore also of the degeneracy. - -A k-degenerate graph has chromatic number at most k + 1; this is proved by a simple induction on the number of vertices which is exactly like the proof of the six-color theorem for planar graphs. Since chromatic number is an upper bound on the order of the maximum clique, the latter invariant is also at most degeneracy plus one. By using a greedy coloring algorithm on an ordering with optimal coloring number, one can color a k-degenerate graph using at most k + 1 colors. - -A k-vertex-connected graph is a graph that cannot be partitioned into more than one component by the removal of fewer than k vertices, or equivalently a graph in which each pair of vertices can be connected by k vertex-disjoint paths. Since these paths must leave the two vertices of the pair via disjoint edges, a k-vertex-connected graph must have degeneracy at least k. Concepts related to k-cores but based on vertex connectivity have been studied in social network theory under the name of structural cohesion. - -If a graph has treewidth or pathwidth at most k, then it is a subgraph of a chordal graph which has a perfect elimination ordering in which each vertex has at most k earlier neighbors. Therefore, the degeneracy is at most equal to the treewidth and at most equal to the pathwidth. However, there exist graphs with bounded degeneracy and unbounded treewidth, such as the grid graphs. - -The Burr–Erdős conjecture relates the degeneracy of a graph G to the Ramsey number of G, the least n such that any two-edge-coloring of an n-vertex complete graph must contain a monochromatic copy of G. Specifically, the conjecture is that for any fixed value of k, the Ramsey number of k-degenerate graphs grows linearly in the number of vertices of the graphs. The conjecture was proven by Lee. - -Although concepts of degeneracy and coloring number are frequently considered in the context of finite graphs, the original motivation for Erdős and Hajnal was the theory of infinite graphs. For an infinite graph G, one may define the coloring number analogously to the definition for finite graphs, as the smallest cardinal number α such that there exists a well-ordering of the vertices of G in which each vertex has fewer than α neighbors that are earlier in the ordering. The inequality between coloring and chromatic numbers holds also in this infinite setting; Erdős and Hajnal state that, at the time of publication of their paper, it was already well known. - -The degeneracy of random subsets of infinite lattices has been studied under the name of bootstrap percolation. diff --git a/wiki/wikipedia/2667.txt b/wiki/wikipedia/2667.txt deleted file mode 100644 index 333a6c1b34c9ca394b68812b44095c38cb2c58d9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2667.txt +++ /dev/null @@ -1,15 +0,0 @@ -In analytic number theory the Friedlander–Iwaniec theorem states that there are infinitely many prime numbers of the form $a^2 + b^4$. The first few such primes are - -2, 5, 17, 37, 41, 97, 101, 137, 181, 197, 241, 257, 277, 281, 337, 401, 457, 577, 617, 641, 661, 677, 757, 769, 821, 857, 881, 977, … . - -The difficulty in this statement lies in the very sparse nature of this sequence: the number of integers of the form $a^2+b^4$ less than $X$ is roughly of the order $X^{3/4}$. - -The theorem was proved in 1997 by John Friedlander and Henryk Iwaniec. Iwaniec was awarded the 2001 Ostrowski Prize in part for his contributions to this work. - -The theorem was refined by D.R. Heath-Brown and Xiannan Li in 2017. In particular, they proved that the polynomial $a^2 + b^4$ represents infinitely many primes when the variable $b$ is also required to be prime. - -When b = 1, the Friedlander–Iwaniec primes have the form $a^2+1$, forming the set - -2, 5, 17, 37, 101, 197, 257, 401, 577, 677, 1297, 1601, 2917, 3137, 4357, 5477, 7057, 8101, 8837, 12101, 13457, 14401, 15377, … . - -It is conjectured (one of Landau's problems) that this set is infinite. However, this is not implied by the Friedlander–Iwaniec theorem. diff --git a/wiki/wikipedia/2668.txt b/wiki/wikipedia/2668.txt deleted file mode 100644 index e90a37285bfb1a2a05542dbb20c8307be1330872..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2668.txt +++ /dev/null @@ -1,596 +0,0 @@ -In logic and proof theory, natural deduction is a kind of proof calculus in which logical reasoning is expressed by inference rules closely related to the "natural" way of reasoning. This contrasts with Hilbert-style systems, which instead use axioms as much as possible to express the logical laws of deductive reasoning. - -Natural deduction grew out of a context of dissatisfaction with the axiomatizations of deductive reasoning common to the systems of Hilbert, Frege, and Russell (see, e.g., Hilbert system). Such axiomatizations were most famously used by Russell and Whitehead in their mathematical treatise Principia Mathematica.
Spurred on by a series of seminars in Poland in 1926 by Łukasiewicz that advocated a more natural treatment of logic, Jaśkowski made the earliest attempts at defining a more natural deduction, first in 1929 using a diagrammatic notation, and later updating his proposal in a sequence of papers in 1934 and 1935. His proposals led to different notations - -such as Fitch-style calculus (or Fitch's diagrams) or Suppes' method, for which Lemmon gave a variant called system L. - -Natural deduction in its modern form was independently proposed by the German mathematician Gerhard Gentzen in 1934, in a dissertation delivered to the faculty of mathematical sciences of the University of Göttingen. The term natural deduction (or rather, its German equivalent natürliches Schließen) was coined in that paper. - -Gentzen was motivated by a desire to establish the consistency of number theory. He was unable to prove the main result required for the consistency result, the cut elimination theorem—the Hauptsatz—directly for natural deduction. For this reason he introduced his alternative system, the sequent calculus, for which he proved the Hauptsatz both for classical and intuitionistic logic. In a series of seminars in 1961 and 1962 Prawitz gave a comprehensive summary of natural deduction calculi, and transported much of Gentzen's work with sequent calculi into the natural deduction framework. His 1965 monograph Natural deduction: a proof-theoretical study was to become a reference work on natural deduction, and included applications for modal and second-order logic. - -In natural deduction, a proposition is deduced from a collection of premises by applying inference rules repeatedly. The system presented in this article is a minor variation of Gentzen's or Prawitz's formulation, but with a closer adherence to Martin-Löf's description of logical judgments and connectives. - -A judgment is something that is knowable, that is, an object of knowledge. It is evident if one in fact knows it. Thus "it is raining" is a judgment, which is evident for the one who knows that it is actually raining; in this case one may readily find evidence for the judgment by looking outside the window or stepping out of the house. In mathematical logic however, evidence is often not as directly observable, but rather deduced from more basic evident judgments. The process of deduction is what constitutes a proof; in other words, a judgment is evident if one has a proof for it. - -The most important judgments in logic are of the form "A is true". The letter A stands for any expression representing a proposition; the truth judgments thus require a more primitive judgment: "A is a proposition". Many other judgments have been studied; for example, "A is false" (see classical logic), "A is true at time t" (see temporal logic), "A is necessarily true" or "A is possibly true" (see modal logic), "the program M has type τ" (see programming languages and type theory), "A is achievable from the available resources" (see linear logic), and many others. To start with, we shall concern ourselves with the simplest two judgments "A is a proposition" and "A is true", abbreviated as "A prop" and "A true" respectively. - -The judgment "A prop" defines the structure of valid proofs of A, which in turn defines the structure of propositions. For this reason, the inference rules for this judgment are sometimes known as formation rules.
To illustrate, if we have two propositions A and B (that is, the judgments "A prop" and "B prop" are evident), then we form the compound proposition A and B, written symbolically as "$A \wedge B$". We can write this in the form of an inference rule: -$$ -\frac{A\hbox{ prop} \qquad B\hbox{ prop}}{(A \wedge B)\hbox{ prop}}\ \wedge_F -$$ - -where the parentheses are omitted to make the inference rule more succinct: -$$ -\frac{A\hbox{ prop} \qquad B\hbox{ prop}}{A \wedge B\hbox{ prop}}\ \wedge_F -$$ - -This inference rule is schematic: A and B can be instantiated with any expression. The general form of an inference rule is: -$$ -\frac{J_1 \qquad J_2 \qquad \cdots \qquad J_n}{J}\ \hbox{name} -$$ - -where each $J_i$ is a judgment and the inference rule is named "name". The judgments above the line are known as premises, and those below the line are conclusions. Other common logical propositions are disjunction ($A \vee B$), negation ($\neg A$), implication ($A \supset B$), and the logical constants truth ($\top$) and falsehood ($\bot$). Their formation rules are below. - - - -\frac{A\hbox{ prop} \qquad B\hbox{ prop}}{A \vee B\hbox{ prop}}\ \vee_F - -\qquad - -\frac{A\hbox{ prop} \qquad B\hbox{ prop}}{A \supset B\hbox{ prop}}\ \supset_F - - - - - -\frac{\hbox{ }}{\top\hbox{ prop}}\ \top_F - -\qquad - -\frac{\hbox{ }}{\bot\hbox{ prop}}\ \bot_F - -\qquad - -\frac{A\hbox{ prop}}{\neg A\hbox{ prop}}\ \neg_F - - - -Now we discuss the "A true" judgment. Inference rules that introduce a logical connective in the conclusion are known as introduction rules. To introduce conjunctions, i.e., to conclude "A and B true" for propositions A and B, one requires evidence for "A true" and "B true". As an inference rule: - -
    - - - -\frac{A\hbox{ true} \qquad B\hbox{ true}}{(A \wedge B)\hbox{ true}}\ \wedge_I - - - -
    - -It must be understood that in such rules the objects are propositions. That is, the above rule is really an abbreviation for: - -
    - - - -\frac{A\hbox{ prop} \qquad B\hbox{ prop} \qquad A\hbox{ true} \qquad B\hbox{ true}}{(A \wedge B) \hbox{ true}}\ \wedge_I - - - -
    - -This can also be written: - -
    - - - -\frac{A \wedge B\hbox{ prop} \qquad A\hbox{ true} \qquad B\hbox{ true}}{(A \wedge B)\hbox{ true}}\ \wedge_I - - - -
    - -In this form, the first premise can be satisfied by the $\wedge_F$ formation rule, giving the first two premises of the previous form. In this article we shall elide the "prop" judgments where they are understood. In the nullary case, one can derive truth from no premises. - -
    - - - -\frac{\ }{\top\hbox{ true}}\ \top_I - - - -
    - -If the truth of a proposition can be established in more than one way, the corresponding connective has multiple introduction rules. - -
    - - - -\frac{A\hbox{ true}}{A \vee B\hbox{ true}}\ \vee_{I1} - -\qquad - -\frac{B\hbox{ true}}{A \vee B\hbox{ true}}\ \vee_{I2} - - - -
    - -Note that in the nullary case, i.e., for falsehood, there are no introduction rules. Thus one can never infer falsehood from simpler judgments. - -Dual to introduction rules are elimination rules to describe how to deconstruct information about a compound proposition into information about its constituents. Thus, from "A ∧ B true", we can conclude "A true" and "B true": - -
    - - - -\frac{A \wedge B\hbox{ true}}{A\hbox{ true}}\ \wedge_{E1} - -\qquad - -\frac{A \wedge B\hbox{ true}}{B\hbox{ true}}\ \wedge_{E2} - - - -
    - -As an example of the use of inference rules, consider commutativity of conjunction. If A ∧ B is true, then B ∧ A is true; this derivation can be drawn by composing inference rules in such a fashion that premises of a lower inference match the conclusion of the next higher inference. - -
    - - - -\cfrac{\cfrac{A \wedge B\hbox{ true}}{B\hbox{ true}}\ \wedge_{E2} - -\qquad - -\cfrac{A \wedge B\hbox{ true}}{A\hbox{ true}}\ \wedge_{E1}} - -{B \wedge A\hbox{ true}}\ \wedge_I - - - -
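The same composition of rules can be checked mechanically. In Lean 3 syntax (an added sketch, using the core-library names and.intro, and.elim_left, and and.elim_right), the two eliminations feeding the introduction read:

```lean
-- commutativity of conjunction: two ∧-eliminations followed by ∧-introduction
example (A B : Prop) (h : A ∧ B) : B ∧ A :=
and.intro (and.elim_right h) (and.elim_left h)
```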
    - -The inference figures we have seen so far are not sufficient to state the rules of implication introduction or disjunction elimination; for these, we need a more general notion of hypothetical derivation. - -A pervasive operation in mathematical logic is reasoning from assumptions. For example, consider the following derivation: - -
    - - - -\cfrac{A \wedge \left ( B \wedge C \right ) \hbox{ true}}{\cfrac{B \wedge C \hbox{ true}}{B \hbox{ true}}\ \wedge_{E1}}\ \wedge_{E2} - - - -
    - -This derivation does not establish the truth of B as such; rather, it establishes the following fact: - -If A ∧ (B ∧ C) is true then B is true. - -In logic, one says "assuming A ∧ (B ∧ C) is true, we show that B is true"; in other words, the judgment "B true" depends on the assumed judgment "A ∧ (B ∧ C) true". This is a hypothetical derivation, which we write as follows: - -
    - - - -\begin{matrix} - -A \wedge \left ( B \wedge C \right ) \hbox{ true} \\ - -\vdots \\ - -B \hbox{ true} - -\end{matrix} - - - -
    - -The interpretation is: "B true is derivable from A ∧ (B ∧ C) true". Of course, in this specific example we actually know the derivation of "B true" from "A ∧ (B ∧ C) true", but in general we may not a priori know the derivation. The general form of a hypothetical derivation is: - -
    - - - -\begin{matrix} - -D_1 \quad D_2 \ \cdots \ D_n \\ - -\vdots \\ - -J - -\end{matrix} - - - -
    - -Each hypothetical derivation has a collection of antecedent derivations (the Di) written on the top line, and a succedent judgment (J) written on the bottom line. Each of the premises may itself be a hypothetical derivation. (For simplicity, we treat a judgment as a premise-less derivation.) - -The notion of hypothetical judgment is internalised as the connective of implication. The introduction and elimination rules are as follows. - -
    - - - -\cfrac{ - -\begin{matrix} - -\cfrac{}{A \hbox{ true}}\ u \\ - -\vdots \\ - -B \hbox{ true} - -\end{matrix} - -}{A \supset B \hbox{ true}}\ \supset_{I^u} - -\qquad \cfrac{A \supset B \hbox{ true} \quad A \hbox{ true}}{B \hbox{ true}}\ \supset_E - - - -
    - -In the introduction rule, the antecedent named u is discharged in the conclusion. This is a mechanism for delimiting the scope of the hypothesis: its sole reason for existence is to establish "B true"; it cannot be used for any other purpose, and in particular, it cannot be used below the introduction. As an example, consider the derivation of "A ⊃ (B ⊃ (A ∧ B)) true": - -
    - - - -\cfrac{\cfrac{\cfrac{}{A \hbox{ true}}\ u \quad \cfrac{}{B \hbox{ true}}\ w}{A \wedge B \hbox{ true}}\ \wedge_I}{ - -\cfrac{B \supset \left ( A \wedge B \right ) \hbox{ true}}{ - -A \supset \left ( B \supset \left ( A \wedge B \right ) \right ) \hbox{ true} - -}\ \supset_{I^u} - -}\ \supset_{I^w} - - - -
    - -This full derivation has no unsatisfied premises; however, sub-derivations are hypothetical. For instance, the derivation of "B ⊃ (A ∧ B) true" is hypothetical with antecedent "A true" (named u). - -With hypothetical derivations, we can now write the elimination rule for disjunction: - -
    - - - -\cfrac{ - -A \vee B \hbox{ true} - -\quad - -\begin{matrix} - -\cfrac{}{A \hbox{ true}}\ u \\ - -\vdots \\ - -C \hbox{ true} - -\end{matrix} - -\quad - -\begin{matrix} - -\cfrac{}{B \hbox{ true}}\ w \\ - -\vdots \\ - -C \hbox{ true} - -\end{matrix} - -}{C \hbox{ true}}\ \vee_{E^{u,w}} - - - -
    - -In words, if A ∨ B is true, and we can derive "C true" both from "A true" and from "B true", then C is indeed true. Note that this rule does not commit to either "A true" or "B true". In the zero-ary case, i.e. for falsehood, we obtain the following elimination rule: - -
    - - - -\frac{\perp \hbox{ true}}{C \hbox{ true}}\ \perp_E - - - -
    - -This is read as: if falsehood is true, then any proposition C is true. - -Negation is similar to implication. - -
    - - - -\cfrac{ - -\begin{matrix} - -\cfrac{}{A \hbox{ true}}\ u \\ - -\vdots \\ - -p \hbox{ true} - -\end{matrix} - -}{\lnot A \hbox{ true}}\ \lnot_{I^{u,p}} - -\qquad - -\cfrac{\lnot A \hbox{ true} \quad A \hbox{ true}}{C \hbox{ true}}\ \lnot_E - - - -
    - -The introduction rule discharges both the name of the hypothesis u, and the succedent p, i.e., the proposition p must not occur in the conclusion A. Since these rules are schematic, the interpretation of the introduction rule is: if from "A true" we can derive for every proposition p that "p true", then A must be false, i.e., "not A true". For the elimination, if both A and not A are shown to be true, then there is a contradiction, in which case every proposition C is true. Because the rules for implication and negation are so similar, it should be fairly easy to see that not A and A ⊃ ⊥ are equivalent, i.e., each is derivable from the other. - -A theory is said to be consistent if falsehood is not provable (from no assumptions) and is complete if every theorem or its negation is provable using the inference rules of the logic. These are statements about the entire logic, and are usually tied to some notion of a model. However, there are local notions of consistency and completeness that are purely syntactic checks on the inference rules, and require no appeals to models. The first of these is local consistency, also known as local reducibility, which says that any derivation containing an introduction of a connective followed immediately by its elimination can be turned into an equivalent derivation without this detour. It is a check on the strength of elimination rules: they must not be so strong that they include knowledge not already contained in their premises. As an example, consider conjunctions. - -Dually, local completeness says that the elimination rules are strong enough to decompose a connective into the forms suitable for its introduction rule. Again for conjunctions: - -These notions correspond exactly to β-reduction (beta reduction) and η-conversion (eta conversion) in the lambda calculus, using the Curry-Howard isomorphism. By local completeness, we see that every derivation can be converted to an equivalent derivation where the principal connective is introduced. In fact, if the entire derivation obeys this ordering of eliminations followed by introductions, then it is said to be normal. In a normal derivation all eliminations happen above introductions. In most logics, every derivation has an equivalent normal derivation, called a normal form. The existence of normal forms is generally hard to prove using natural deduction alone, though such accounts do exist in the literature, most notably by Dag Prawitz in 1961. It is much easier to show this indirectly by means of a cut-free sequent calculus presentation. - -The logic of the earlier section is an example of a single-sorted logic, i.e., a logic with a single kind of object: propositions. Many extensions of this simple framework have been proposed; in this section we will extend it with a second sort of individuals or terms. More precisely, we will add a new kind of judgment, "t is a term" (or "t term") where t is schematic. 
We shall fix a countable set V of variables, another countable set F of function symbols, and construct terms with the following formation rules: - - - -\frac{v\in V}{v\hbox{ term}} \hbox{ var}_F - - - -and - - - -\frac{f\in F\qquad t_1\hbox{ term}\qquad t_2\hbox{ term}\qquad \cdots \qquad t_n\hbox{ term}}{f(t_1, t_2,\cdots,t_n)\hbox{ term}} \hbox{ app}_F - - - -For propositions, we consider a third countable set P of predicates, and define atomic predicates over terms with the following formation rule: - - - -\frac{\phi\in P\qquad t_1\hbox{ term}\qquad t_2\hbox{ term}\qquad \cdots \qquad t_n\hbox{ term}}{\phi(t_1, t_2,\cdots,t_n)\hbox{ prop}} \hbox{ pred}_F - - - -The first two rules of formation provide a definition of a term that is effectively the same as that defined in term algebra and model theory, although the focus of those fields of study is quite different from natural deduction. The third rule of formation effectively defines an atomic formula, as in first-order logic, and again in model theory. - -To these are added a pair of formation rules, defining the notation for quantified propositions; one for universal (∀) and existential (∃) quantification: - - - -\frac{x\in V \qquad A \hbox{ prop}}{\forall x.A \hbox{ prop}} \forall_F - -\qquad\qquad - -\frac{x\in V \qquad A \hbox{ prop}}{\exists x.A \hbox{ prop}} \exists_F - - - -The universal quantifier has the introduction and elimination rules: - - - -\begin{matrix} - -\cfrac{}{a \hbox{ term}}\hbox{ u} \\ - -\vdots \\ - -\cfrac{[a/x]A \hbox{ true}}{\forall x.A \hbox{ true}}\forall_{I^{u,a}} - -\end{matrix} - -\qquad \qquad - -\frac{\forall x.A \hbox{ true}\qquad t \hbox{ term}}{[t/x]A\hbox{ true}}\forall_{E} - - - -The existential quantifier has the introduction and elimination rules: - - - -\frac{[t/x]A \hbox{ true}}{\exists x.A\hbox{ true}}\exists_{I} - -\qquad\qquad - -\cfrac{ - -\begin{matrix} - -\\ - -\\ - -\\ - -\\ - -\exists x.A\hbox{ true} \\ - -\end{matrix} - -\qquad - -\begin{matrix} - -\cfrac{}{a \hbox{ term}}\hbox{ u} \qquad \cfrac{}{[a/x]A \hbox{ true}}\hbox{ v}\\ - -\vdots \\ - -C\hbox{ true} \\ - -\end{matrix} - -} - -{C \hbox{ true}}\exists_{E^{a,u,v}} - - - -In these rules, the notation [t/x] A stands for the substitution of t for every (visible) instance of x in A, avoiding capture. As before the superscripts on the name stand for the components that are discharged: the term a cannot occur in the conclusion of ∀I (such terms are known as eigenvariables or parameters), and the hypotheses named u and v in ∃E are localised to the second premise in a hypothetical derivation. Although the propositional logic of earlier sections was decidable, adding the quantifiers makes the logic undecidable. - -So far, the quantified extensions are first-order: they distinguish propositions from the kinds of objects quantified over. Higher-order logic takes a different approach and has only a single sort of propositions. The quantifiers have as the domain of quantification the very same sort of propositions, as reflected in the formation rules: - - - -\cfrac{ - -\begin{matrix} - -\cfrac{}{p \hbox{ prop}}\hbox{ u} \\ - -\vdots \\ - -A\hbox{ prop} \\ - -\end{matrix}} - -{\forall p.A \hbox{ prop}} \forall_{F^u} - -\qquad\qquad - -\cfrac{ - -\begin{matrix} - -\cfrac{}{p \hbox{ prop}}\hbox{ u} \\ - -\vdots \\ - -A\hbox{ prop} \\ - -\end{matrix}} - -{\exists p.A \hbox{ prop}} \exists_{F^u} - - - -A discussion of the introduction and elimination forms for higher-order logic is beyond the scope of this article. 
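Before moving on, note that under the Curry–Howard correspondence the first-order quantifier rules above read as (dependent) function application and pairing; a small Lean 3 sketch (an added illustration, not from the article):

```lean
-- ∀-elimination is application: a proof of ∀ x, P x applied to a term t
example (T : Type) (P : T → Prop) (h : ∀ x, P x) (t : T) : P t := h t

-- ∃-introduction packages a witness with a proof; ∃-elimination unpacks it
example (T : Type) (P : T → Prop) (t : T) (h : P t) : ∃ x, P x := ⟨t, h⟩
example (T : Type) (P : T → Prop) (C : Prop)
  (h : ∃ x, P x) (f : ∀ x, P x → C) : C := exists.elim h f
```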
It is possible to be in-between first-order and higher-order logics. For example, second-order logic has two kinds of propositions, one kind quantifying over terms, and the second kind quantifying over propositions of the first kind. - -Gentzen's discharging annotations used to internalise hypothetical judgments can be avoided by representing proofs as a tree of sequents Γ ⊢A instead of a tree of A true judgments. - -Jaśkowski's representations of natural deduction led to different notations such as Fitch-style calculus (or Fitch's diagrams) or Suppes' method, of which Lemmon gave a variant called system L. Such presentation systems, which are more accurately described as tabular, include the following. - -* 1940: In a textbook, Quine indicated antecedent dependencies by line numbers in square brackets, anticipating Suppes' 1957 line-number notation. - -* 1950: In a textbook, Quine demonstrated a method of using one or more asterisks to the left of each line of proof to indicate dependencies. This is equivalent to Kleene's vertical bars. (It is not totally clear if Quine's asterisk notation appeared in the original 1950 edition or was added in a later edition.) - -* 1957: An introduction to practical logic theorem proving in a textbook by Suppes. This indicated dependencies (i.e. antecedent propositions) by line numbers at the left of each line. - -* 1963: Stoll uses sets of line numbers to indicate antecedent dependencies of the lines of sequential logical arguments based on natural deduction inference rules. - -* 1965: The entire textbook by Lemmon is an introduction to logic proofs using a method based on that of Suppes. - -* 1967: In a textbook, Kleene briefly demonstrated two kinds of practical logic proofs, one system using explicit quotations of antecedent propositions on the left of each line, the other system using vertical bar-lines on the left to indicate dependencies. - -The presentation of natural deduction so far has concentrated on the nature of propositions without giving a formal definition of a proof. To formalise the notion of proof, we alter the presentation of hypothetical derivations slightly. We label the antecedents with proof variables (from some countable set V of variables), and decorate the succedent with the actual proof. The antecedents or hypotheses are separated from the succedent by means of a turnstile (⊢). This modification sometimes goes under the name of localised hypotheses. The following diagram summarises the change. - -The collection of hypotheses will be written as Γ when their exact composition is not relevant. - -To make proofs explicit, we move from the proof-less judgment "A true" to a judgment: "π is a proof of (A true)", which is written symbolically as "π : A true". Following the standard approach, proofs are specified with their own formation rules for the judgment "π proof". The simplest possible proof is the use of a labelled hypothesis; in this case the evidence is the label itself. - -For brevity, we shall leave off the judgmental label true in the rest of this article, i.e., write "Γ ⊢ π : A". Let us re-examine some of the connectives with explicit proofs. For conjunction, we look at the introduction rule ∧I to discover the form of proofs of conjunction: they must be a pair of proofs of the two conjuncts. Thus: - -The elimination rules ∧E1 and ∧E2 select either the left or the right conjunct; thus the proofs are a pair of projections-first (fst) and second (snd). 
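These proof terms can be written directly in Lean 3 (an added sketch using core-library names): pairing is the introduction, the projections are the eliminations, and the local reducibility discussed earlier is exactly β-reduction.

```lean
-- Introduction is pairing; eliminations are the projections.
example (A B : Prop) (a : A) (b : B) : A ∧ B := and.intro a b
example (A B : Prop) (h : A ∧ B) : A := h.left   -- "fst"
example (A B : Prop) (h : A ∧ B) : B := h.right  -- "snd"

-- Local reducibility (β): projecting from a freshly introduced pair
-- recovers the component definitionally.
example (α β : Type) (a : α) (b : β) : (a, b).fst = a := rfl
```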
- -For implication, the introduction form localises or binds the hypothesis, written using a λ; this corresponds to the discharged label. In the rule, "Γ, u:A" stands for the collection of hypotheses Γ, together with the additional hypothesis u. - -With proofs available explicitly, one can manipulate and reason about proofs. The key operation on proofs is the substitution of one proof for an assumption used in another proof. This is commonly known as a substitution theorem, and can be proved by induction on the depth (or structure) of the second judgment. - -; Substitution theorem : If Γ ⊢ π1 : A and Γ, u:A ⊢ π2 : B, then Γ ⊢ [π1/u] π2 : B. - -So far the judgment "Γ ⊢ π : A" has had a purely logical interpretation. In type theory, the logical view is exchanged for a more computational view of objects. Propositions in the logical interpretation are now viewed as types, and proofs as programs in the lambda calculus. Thus the interpretation of "π : A" is "the program π has type A". The logical connectives are also given a different reading: conjunction is viewed as product (×), implication as the function arrow (→), etc. The differences are only cosmetic, however. Type theory has a natural deduction presentation in terms of formation, introduction and elimination rules; in fact, the reader can easily reconstruct what is known as simple type theory from the previous sections. - -The difference between logic and type theory is primarily a shift of focus from the types (propositions) to the programs (proofs). Type theory is chiefly interested in the convertibility or reducibility of programs. For every type, there are canonical programs of that type which are irreducible; these are known as canonical forms or values. If every program can be reduced to a canonical form, then the type theory is said to be normalising (or weakly normalising). If the canonical form is unique, then the theory is said to be strongly normalising. Normalisability is a rare feature of most non-trivial type theories, which is a big departure from the logical world. (Recall that almost every logical derivation has an equivalent normal derivation.) To sketch the reason: in type theories that admit recursive definitions, it is possible to write programs that never reduce to a value; such looping programs can generally be given any type. In particular, the looping program has type ⊥, although there is no logical proof of "⊥ true". For this reason, the propositions as types; proofs as programs paradigm only works in one direction, if at all: interpreting a type theory as a logic generally gives an inconsistent logic. - -Like logic, type theory has many extensions and variants, including first-order and higher-order versions. One branch, known as dependent type theory, is used in a number of computer-assisted proof systems. Dependent type theory allows quantifiers to range over programs themselves. These quantified types are written as Π and Σ instead of ∀ and ∃, and have the following formation rules: - -These types are generalisations of the arrow and product types, respectively, as witnessed by their introduction and elimination rules. - -Dependent type theory in full generality is very powerful: it is able to express almost any conceivable property of programs directly in the types of the program. This generality comes at a steep price - either typechecking is undecidable (extensional type theory), or extensional reasoning is more difficult (intensional type theory). 
For this reason, some dependent type theories do not allow quantification over arbitrary programs, but rather restrict to programs of a given decidable index domain, for example integers, strings, or linear programs. - -Since dependent type theories allow types to depend on programs, a natural question to ask is whether it is possible for programs to depend on types, or any other combination. There are many kinds of answers to such questions. A popular approach in type theory is to allow programs to be quantified over types, also known as parametric polymorphism; of this there are two main kinds: if types and programs are kept separate, then one obtains a somewhat more well-behaved system called predicative polymorphism; if the distinction between program and type is blurred, one obtains the type-theoretic analogue of higher-order logic, also known as impredicative polymorphism. Various combinations of dependency and polymorphism have been considered in the literature, the most famous being the lambda cube of Henk Barendregt. - -The intersection of logic and type theory is a vast and active research area. New logics are usually formalised in a general type theoretic setting, known as a logical framework. Popular modern logical frameworks such as the calculus of constructions and LF are based on higher-order dependent type theory, with various trade-offs in terms of decidability and expressive power. These logical frameworks are themselves always specified as natural deduction systems, which is a testament to the versatility of the natural deduction approach. - -For simplicity, the logics presented so far have been intuitionistic. Classical logic extends intuitionistic logic with an additional axiom or principle of excluded middle: - -For any proposition p, the proposition p ∨ ¬p is true. - -This statement is not obviously either an introduction or an elimination; indeed, it involves two distinct connectives. Gentzen's original treatment of excluded middle prescribed one of the following three (equivalent) formulations, which were already present in analogous forms in the systems of Hilbert and Heyting: - -(XM3 is merely XM2 expressed in terms of E.) This treatment of excluded middle, in addition to being objectionable from a purist's standpoint, introduces additional complications in the definition of normal forms. - -A comparatively more satisfactory treatment of classical natural deduction in terms of introduction and elimination rules alone was first proposed by Parigot in 1992 in the form of a classical lambda calculus called λμ. The key insight of his approach was to replace a truth-centric judgment A true with a more classical notion, reminiscent of the sequent calculus: in localised form, instead of Γ ⊢ A, he used Γ ⊢ Δ, with Δ a collection of propositions similar to Γ. Γ was treated as a conjunction, and Δ as a disjunction. This structure is essentially lifted directly from classical sequent calculi, but the innovation in λμ was to give a computational meaning to classical natural deduction proofs in terms of a callcc or a throw/catch mechanism seen in LISP and its descendants. (See also: first class control.) - -Another important extension was for modal and other logics that need more than just the basic judgment of truth. These were first described, for the alethic modal logics S4 and S5, in a natural deduction style by Prawitz in 1965, and have since accumulated a large body of related work. 
To give a simple example, the modal logic S4 requires one new judgment, "A valid", that is categorical with respect to truth: - -If "A true" under no assumptions of the form "B true", then "A valid". - -This categorical judgment is internalised as a unary connective ◻A (read "necessarily A") with the following introduction and elimination rules: - -Note that the premise "A valid" has no defining rules; instead, the categorical definition of validity is used in its place. This mode becomes clearer in the localised form when the hypotheses are explicit. We write "Ω;Γ ⊢ A true" where Γ contains the true hypotheses as before, and Ω contains valid hypotheses. On the right there is just a single judgment "A true"; validity is not needed here since "Ω ⊢ A valid" is by definition the same as "Ω;⋅ ⊢ A true". The introduction and elimination forms are then: - -The modal hypotheses have their own version of the hypothesis rule and substitution theorem. - -; Modal substitution theorem : If Ω;⋅ ⊢ π1 : A true and Ω, u: (A valid) ; Γ ⊢ π2 : C true, then Ω;Γ ⊢ [π1/u] π2 : C true. - -This framework of separating judgments into distinct collections of hypotheses, also known as multi-zoned or polyadic contexts, is very powerful and extensible; it has been applied for many different modal logics, and also for linear and other substructural logics, to give a few examples. However, relatively few systems of modal logic can be formalised directly in natural deduction. To give proof-theoretic characterisations of these systems, extensions such as labelling or systems of deep inference are required. - -The addition of labels to formulae permits much finer control of the conditions under which rules apply, allowing the more flexible techniques of analytic tableaux to be applied, as has been done in the case of labelled deduction. Labels also allow the naming of worlds in Kripke semantics; Simpson presents an influential technique for converting frame conditions of modal logics in Kripke semantics into inference rules in a natural deduction formalisation of hybrid logic. Stouppa surveys the application of many proof theories, such as Avron and Pottinger's hypersequents and Belnap's display logic, to such modal logics as S5 and B. - -The sequent calculus is the chief alternative to natural deduction as a foundation of mathematical logic. In natural deduction the flow of information is bi-directional: elimination rules flow information downwards by deconstruction, and introduction rules flow information upwards by assembly. Thus, a natural deduction proof does not have a purely bottom-up or top-down reading, making it unsuitable for automation in proof search. To address this fact, Gentzen in 1935 proposed his sequent calculus, though he initially intended it as a technical device for clarifying the consistency of predicate logic. Kleene, in his seminal 1952 book Introduction to Metamathematics, gave the first formulation of the sequent calculus in the modern style. - -In the sequent calculus all inference rules have a purely bottom-up reading. Inference rules can apply to elements on both sides of the turnstile. (To differentiate from natural deduction, this article uses a double arrow ⇒ instead of the right tack ⊢ for sequents.) The introduction rules of natural deduction are viewed as right rules in the sequent calculus, and are structurally very similar. The elimination rules on the other hand turn into left rules in the sequent calculus.
To give an example, consider disjunction; the right rules are familiar: - -On the left: - -Recall the ∨E rule of natural deduction in localised form: - -The proposition A ∨ B, which is the succedent of a premise in ∨E, turns into a hypothesis of the conclusion in the left rule ∨L. Thus, left rules can be seen as a sort of inverted elimination rule. This observation can be illustrated as follows: - -In the sequent calculus, the left and right rules are performed in lock-step until one reaches the initial sequent, which corresponds to the meeting point of elimination and introduction rules in natural deduction. These initial rules are superficially similar to the hypothesis rule of natural deduction, but in the sequent calculus they describe a transposition or a handshake of a left and a right proposition: - -The correspondence between the sequent calculus and natural deduction is a pair of soundness and completeness theorems, which are both provable by means of an inductive argument. - -; Soundness of ⇒ wrt. ⊢ : If Γ ⇒ A, then Γ ⊢ A. - -; Completeness of ⇒ wrt. ⊢ : If Γ ⊢ A, then Γ ⇒ A. - -It is clear by these theorems that the sequent calculus does not change the notion of truth, because the same collection of propositions remains true. Thus, one can use the same proof objects as before in sequent calculus derivations. As an example, consider conjunction. The right rule is virtually identical to the introduction rule - -The left rule, however, performs some additional substitutions that are not performed in the corresponding elimination rules. - -The kinds of proofs generated in the sequent calculus are therefore rather different from those of natural deduction. The sequent calculus produces proofs in what is known as the β-normal η-long form, which corresponds to a canonical representation of the normal form of the natural deduction proof. If one attempts to describe these proofs using natural deduction itself, one obtains what is called the intercalation calculus (first described by John Byrnes), which can be used to formally define the notion of a normal form for natural deduction. - -The substitution theorem of natural deduction takes the form of a structural rule or structural theorem known as cut in the sequent calculus. - -; Cut (substitution) : If Γ ⇒ π1 : A and Γ, u:A ⇒ π2 : C, then Γ ⇒ [π1/u] π2 : C. - -In most well-behaved logics, cut is unnecessary as an inference rule, though it remains provable as a meta-theorem; the superfluousness of the cut rule is usually presented as a computational process, known as cut elimination. This has an interesting application for natural deduction; usually it is extremely tedious to prove certain properties directly in natural deduction because of an unbounded number of cases. For example, consider showing that a given proposition is not provable in natural deduction. A simple inductive argument fails because of rules like ∨E or ⊥E which can introduce arbitrary propositions. However, we know that the sequent calculus is complete with respect to natural deduction, so it is enough to show this unprovability in the sequent calculus. Now, if cut is not available as an inference rule, then all sequent rules either introduce a connective on the right or the left, so the depth of a sequent derivation is fully bounded by the connectives in the final conclusion. Thus, showing unprovability is much easier, because there are only a finite number of cases to consider, and each case is composed entirely of sub-propositions of the conclusion.
A simple instance of this is the global consistency theorem: "⋅ ⊢ ⊥ true" is not provable. In the sequent calculus version, this is manifestly true because there is no rule that can have "⋅ ⇒ ⊥" as a conclusion! Proof theorists often prefer to work on cut-free sequent calculus formulations because of such properties. diff --git a/wiki/wikipedia/2669.txt b/wiki/wikipedia/2669.txt deleted file mode 100644 index 333a6c1b34c9ca394b68812b44095c38cb2c58d9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2669.txt +++ /dev/null @@ -1,5 +0,0 @@ -In computer science and data management, a commit is the making of a set of tentative changes permanent, marking the end of a transaction and providing Durability to ACID transactions. A commit is an act of committing. The record of commits is called the commit log. - -A COMMIT statement in SQL ends a transaction within a relational database management system (RDBMS) and makes all changes visible to other users. The general format is to issue a BEGIN WORK (or BEGIN TRANSACTION, depending on the database vendor) statement, one or more SQL statements, and then the COMMIT statement. Alternatively, a ROLLBACK statement can be issued, which undoes all the work performed since BEGIN WORK was issued. A COMMIT statement will also release any existing savepoints that may be in use. - -In terms of transactions, the opposite of commit is to discard the tentative changes of a transaction, a rollback. diff --git a/wiki/wikipedia/267.txt b/wiki/wikipedia/267.txt deleted file mode 100644 index ef2c601559020e3e8ccaf51ed2cbf67eb09f13b2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/267.txt +++ /dev/null @@ -1,57 +0,0 @@ -In statistics, the Hájek–Le Cam convolution theorem states that any regular estimator in a parametric model is asymptotically equivalent to a sum of two independent random variables, one of which is normal with asymptotic variance equal to the inverse of Fisher information, and the other having arbitrary distribution. - -The obvious corollary from this theorem is that the “best” among regular estimators are those with the second component identically equal to zero. Such estimators are called efficient and are known to always exist for regular parametric models. - -The theorem is named after Jaroslav Hájek and Lucien Le Cam. - -Let ℘ = {Pθ | θ ∈ Θ ⊂ ℝk} be a regular parametric model, and q(θ): Θ → ℝm be a parameter in this model (typically a parameter is just one of the components of vector θ). Assume that function q is differentiable on Θ, with the m × k matrix of derivatives denoted as q̇θ. Define -$$ - I_{q(\theta)}^{-1} = \dot{q}(\theta)I^{-1}(\theta)\dot{q}(\theta)' -$$ — the information bound for q, -$$ - \psi_{q(\theta)} = \dot{q}(\theta)I^{-1}(\theta)\dot\ell(\theta) -$$ — the efficient influence function for q, - -where I(θ) is the Fisher information matrix for model ℘, $\scriptstyle\dot\ell(\theta)$ is the score function, and ′ denotes matrix transpose. - -
- -Theorem. Suppose Tn is a uniformly (locally) regular estimator of the parameter q. Then - -
      - -
1. There exist independent random m-vectors $\scriptstyle Z_\theta\sim\mathcal{N}(0,I^{-1}_{q(\theta)})$ and Δθ such that -$$ -\sqrt{n}(T_n - q(\theta)) \ \xrightarrow{d}\ Z_\theta + \Delta_\theta, \qquad (A) -$$ - -where d denotes convergence in distribution. More specifically, -$$ -\begin{pmatrix} \sqrt{n}(T_n - q(\theta)) - \tfrac{1}{\sqrt{n}} \sum_{i=1}^n \psi_{q(\theta)}(x_i) \\ \tfrac{1}{\sqrt{n}} \sum_{i=1}^n \psi_{q(\theta)}(x_i) \end{pmatrix} \ \xrightarrow{d}\ \begin{pmatrix} \Delta_\theta \\ Z_\theta \end{pmatrix}. -$$ - -
2. If the map θ → q̇θ is continuous, then the convergence in (A) holds uniformly on compact subsets of Θ. Moreover, in that case Δθ = 0 for all θ if and only if Tn is uniformly (locally) asymptotically linear with influence function ψq(θ). - -
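As an informal numerical illustration (not part of the original article): in the Gaussian location model the sample mean is an efficient regular estimator, so Δθ = 0 and √n(Tn − θ) is asymptotically N(0, I⁻¹(θ)) = N(0, 1). A minimal NumPy sketch under those assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10_000, 2_000       # true parameter, sample size, replications

# X_i ~ N(theta, 1): Fisher information I(theta) = 1, so the limit law is N(0, 1).
t_n = np.array([rng.normal(theta, 1.0, n).mean() for _ in range(reps)])
z = np.sqrt(n) * (t_n - theta)            # sqrt(n) * (T_n - q(theta))

print(np.mean(z), np.var(z))              # approximately 0 and 1, matching N(0, I^{-1})
```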
diff --git a/wiki/wikipedia/2670.txt b/wiki/wikipedia/2670.txt deleted file mode 100644 index 96c59c07bb4852529edbd17172011669fb4c9bb8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2670.txt +++ /dev/null @@ -1,40 +0,0 @@ -In geometry, a hypotenuse is the longest side of a right-angled triangle, the side opposite the right angle. The length of the hypotenuse can be found using the Pythagorean theorem, which states that the square of the length of the hypotenuse equals the sum of the squares of the lengths of the other two sides. For example, if one of the other sides has a length of 3 (when squared, 9) and the other has a length of 4 (when squared, 16), then their squares add up to 25. The length of the hypotenuse is the square root of 25, that is, 5. - -The word hypotenuse is derived from the Greek hupoteinousa, meaning "[side] subtending the right angle" (Apollodorus), hupoteinousa being the feminine present active participle of the verb hupo-teinō "to stretch below, to subtend", from teinō "to stretch, extend". The nominalised participle was used for the hypotenuse of a triangle in the 4th century BCE (attested in Plato, Timaeus 54d). The Greek term was loaned into Late Latin, as hypotēnūsa. The spelling in -e, as hypotenuse, is French in origin (Estienne de La Roche 1520). - -The length of the hypotenuse can be calculated using the square root function implied by the Pythagorean theorem. Using the common notation that the length of the two legs of the triangle (the sides perpendicular to each other) are a and b and that of the hypotenuse is c, we have -$$ -c = \sqrt { a^2 + b^2 } . -$$ - -The Pythagorean theorem, and hence this length, can also be derived from the law of cosines by observing that the angle opposite the hypotenuse is 90° and noting that its cosine is 0: -$$ -c^2 = a^2 + b^2 - 2ab\cos90^\circ = a^2 + b^2 \therefore c = \sqrt{a^2 + b^2}. -$$ - -Many computer languages support the ISO C standard function hypot(x,y), which returns the value above. The function is designed not to fail where the straightforward calculation might overflow or underflow; it can be slightly more accurate, though sometimes significantly slower. - -Some scientific calculators provide a function to convert from rectangular coordinates to polar coordinates. This gives both the length of the hypotenuse and the angle the hypotenuse makes with the base line (c1 above) at the same time when given x and y. The angle returned is normally given by atan2(y,x). - -By means of trigonometric ratios, one can obtain the value of the two acute angles, $\alpha$ and $\beta$, of the right triangle. - -Given the length of the hypotenuse $c$ and of a cathetus $b$, the ratio is: -$$ -\frac{b}{c} = \sin (\beta) -$$ - -The trigonometric inverse function is: -$$ -\beta\ = \arcsin\left(\frac {b}{c} \right) -$$ - -in which $\beta$ is the angle opposite the cathetus $b$. - -The adjacent angle of the cathetus $b$ is $\alpha = 90° - \beta$. - -One may also obtain the value of the angle $\beta$ by the equation: -$$ -\beta\ = \arccos\left(\frac {a}{c} \right) -$$ - -in which $a$ is the other cathetus.
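As a quick illustration of the `hypot` discussion above, a short Python sketch (illustrative only); `math.hypot` mirrors the ISO C function and avoids the intermediate overflow that the naive formula suffers:

```python
import math

a, b = 3.0, 4.0
print(math.hypot(a, b))                   # 5.0, same as sqrt(a*a + b*b)

big = 1e200
print(math.sqrt(big * big + big * big))   # inf: the squares overflow first
print(math.hypot(big, big))               # about 1.414e+200: hypot rescales internally
```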
diff --git a/wiki/wikipedia/2671.txt b/wiki/wikipedia/2671.txt deleted file mode 100644 index 2f87f249a42a4ed7c95cc16ebb498708ed93b5d9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2671.txt +++ /dev/null @@ -1,199 +0,0 @@ -The rank–nullity theorem is a theorem in linear algebra, which asserts that the dimension of the domain of a linear map is the sum of its rank (the dimension of its image) and its nullity (the dimension of its kernel). - -Let $T : V \to W$ be a linear transformation between two vector spaces where $T$'s domain $V$ is finite dimensional. Then - -\operatorname{Rank}(T) ~+~ \operatorname{Nullity}(T) ~=~ \dim V, - -where - -\operatorname{Rank}(T) ~:=~ \dim(\operatorname{Image}(T)) \qquad \text{ and } \qquad \operatorname{Nullity}(T) ~:=~ \dim(\operatorname{Ker} (T)). - -In other words, - -\dim (\ker T) + \dim (\operatorname{im} T) = \dim (\operatorname{domain} T). - -This theorem can be refined via the splitting lemma to be a statement about an isomorphism of spaces, not just dimensions. Explicitly, since T induces an isomorphism from $V / \operatorname{Ker} (T)$ to $\operatorname{Image} (T),$ the existence of a basis for V that extends any given basis of $\operatorname{Ker}(T)$ implies, via the splitting lemma, that $\operatorname{Image}(T) \oplus \operatorname{Ker}(T) \cong V.$ Taking dimensions, the rank–nullity theorem follows. - -Since $\operatorname{Mat}_{m \times n}(\mathbb{F}) \cong \operatorname{Hom}\left(\mathbb{F}^n, \mathbb{F}^m\right),$ matrices immediately come to mind when discussing linear maps. In the case of an $m \times n$ matrix, the dimension of the domain is $n,$ the number of columns in the matrix. Thus the rank–nullity theorem for a given matrix $M \in \operatorname{Mat}_{m \times n}(\mathbb{F})$ immediately becomes - -\operatorname{Rank}(M) + \operatorname{Nullity}(M) = n. - -Here we provide two proofs. The first operates in the general case, using linear maps. The second proof looks at the homogeneous system $\mathbf{Ax} = \mathbf{0}$ for $\mathbf{A} \in \operatorname{Mat}_{m \times n}(\mathbb{F})$ with rank $r$ and shows explicitly that there exists a set of $n-r$ linearly independent solutions that span the kernel of $\mathbf{A}$. - -While the theorem requires that the domain of the linear map be finite-dimensional, there is no such assumption on the codomain. This means that there are linear maps not given by matrices for which the theorem applies. Despite this, the first proof is not actually more general than the second: since the image of the linear map is finite-dimensional, we can represent the map from its domain to its image by a matrix, prove the theorem for that matrix, then compose with the inclusion of the image into the full codomain. - -Let $V, W$ be vector spaces over some field $\mathbb{F}$ and $T$ defined as in the statement of the theorem with $\dim V = n$. - -As $\operatorname{Ker}T \subset V$ is a subspace, there exists a basis for it. Suppose $\dim\operatorname{Ker}T = k$ and let - -\mathcal{K} := \{v_1, \ldots, v_k\} \subset \operatorname{Ker}(T) - -be such a basis. - -We may now, by the Steinitz exchange lemma, extend $\mathcal{K}$ with $n-k$ linearly independent vectors $w_1, \ldots, w_{n-k}$ to form a full basis of $V$. - -Let - - \mathcal{S} := \{w_1, \ldots, w_{n-k}\} \subset V \setminus \operatorname{Ker}(T) - -such that - - \mathcal{B} := \mathcal{K} \cup \mathcal{S} = \{v_1, \ldots, v_k, w_1, \ldots, w_{n-k}\} \subset V - -is a basis for $V$. 
- -From this, we know that - -\operatorname{Im} T = \operatorname{Span}T(\mathcal{B}) = \operatorname{Span}\{T(v_1), \ldots, T(v_k), T(w_1), \ldots, T(w_{n-k})\} = \operatorname{Span}\{T(w_1), \ldots, T(w_{n-k})\} = \operatorname{Span}T(\mathcal{S}) . - -We now claim that $T(\mathcal{S})$ is a basis for $\operatorname{Im} T$. - -The above equality already states that $T(\mathcal{S})$ is a generating set for $\operatorname{Im} T$; it remains to be shown that it is also linearly independent to conclude that it is a basis. - -Suppose $T(\mathcal{S})$ is not linearly independent, and let - - \sum_{j=1}^{n-k} \alpha _j T(w_j) = 0_W - -for some $\alpha _j \in \mathbb{F}$. - -Thus, owing to the linearity of $T$, it follows that - - T \left(\sum_{j=1}^{n-k} \alpha _j w_j \right) = 0_W \implies \left(\sum_{j=1}^{n-k} \alpha _j w_j \right) \in \operatorname{Ker} T = \operatorname{Span} \mathcal{K} \subset V . - -This is a contradiction to $\mathcal{B}$ being a basis, unless all $\alpha _j$ are equal to zero. This shows that $T(\mathcal{S})$ is linearly independent, and more specifically that it is a basis for $\operatorname{Im}T$. - -To summarize, we have $\mathcal{K}$, a basis for $\operatorname{Ker}T$, and $T(\mathcal{S})$, a basis for $\operatorname{Im}T$. - -Finally we may state that - - \operatorname{Rank}(T) + \operatorname{Nullity}(T) = \dim \operatorname{Im} T + \dim \operatorname{Ker}T = |T(\mathcal{S})| + |\mathcal{K}| = (n-k) + k = n = \dim V . - -This concludes our proof. - -Let $\mathbf{A} \in \operatorname{Mat}_{m \times n}(\mathbb{F})$ with $r$ linearly independent columns (i.e. $\operatorname{Rank}(\mathbf{A}) = r$). We will show that: - -{{ordered list|type=lower-roman - -|1= There exists a set of $n - r$ linearly independent solutions to the homogeneous system $\mathbf{Ax} = \mathbf{0}$. - -|2= That every other solution is a linear combination of these $n-r$ solutions. - -}} - -To do this, we will produce a matrix $\mathbf{X} \in \operatorname{Mat}_{n \times (n-r)}(\mathbb{F})$ whose columns form a basis of the null space of $\mathbf{A}$. - -Without loss of generality, assume that the first $r$ columns of $\mathbf{A}$ are linearly independent. So, we can write - -\mathbf{A} = \begin{pmatrix} \mathbf{A}_1 & \mathbf{A}_2\end{pmatrix} , - -where - -*$\mathbf{A}_1 \in \operatorname{Mat}_{m \times r}(\mathbb{F}) $ with $r$ linearly independent column vectors, and - -*$\mathbf{A}_2 \in \operatorname{Mat}_{m \times (n-r)}(\mathbb{F})$, each of whose $n-r$ columns are linear combinations of the columns of $\mathbf{A}_1$. - -This means that $\mathbf{A}_2 = \mathbf{A}_1\mathbf{B}$ for some $\mathbf{B} \in \operatorname{Mat} _{r \times (n-r)}$ (see rank factorization) and, hence, - -\mathbf{A} = \begin{pmatrix} \mathbf{A}_1 & \mathbf{A}_1\mathbf{B}\end{pmatrix} . - -Let - -\mathbf{X} = \begin{pmatrix} -\mathbf{B} \\ \mathbf{I}_{n-r} \end{pmatrix} , - -where $\mathbf{I}_{n-r}$ is the $(n-r)\times (n-r)$ identity matrix. We note that $\mathbf{X} \in \operatorname{Mat}_{n \times (n-r)}(\mathbb{F})$ satisfies - - \mathbf{A}\mathbf{X} = \begin{pmatrix}\mathbf{A}_1 & \mathbf{A}_1\mathbf{B} \end{pmatrix}\begin{pmatrix} -\mathbf{B} \\ \mathbf{I}_{n-r} \end{pmatrix} = -\mathbf{A}_1\mathbf{B} + \mathbf{A}_1\mathbf{B} = \mathbf{0}_{m \times (n-r)}. - -Therefore, each of the $n-r$ columns of $\mathbf{X}$ are particular solutions of $\mathbf{Ax} = \mathbf{0}_{\mathbb{F}^{m}}$. 
- -Furthermore, the $n-r$ columns of $\mathbf{X}$ are linearly independent because $\mathbf{Xu} = \mathbf{0}_{\mathbb{F}^{n}}$ will imply $\mathbf{u} = \mathbf{0}_{\mathbb{F}^{n-r}}$ for $\mathbf{u} \in \mathbb{F}^{n-r}$: - - \mathbf{X}\mathbf{u} = \mathbf{0}_{\mathbb{F}^{n}} \implies \begin{pmatrix} -\mathbf{B} \\ \mathbf{I}_{n-r} \end{pmatrix}\mathbf{u} = \mathbf{0}_{\mathbb{F}^{n}} \implies \begin{pmatrix} -\mathbf{B}\mathbf{u} \\ \mathbf{u} \end{pmatrix} = \begin{pmatrix} \mathbf{0}_{\mathbb{F}^{r}} \\ \mathbf{0}_{\mathbb{F}^{n-r}} \end{pmatrix} \implies \mathbf{u} = \mathbf{0}_{\mathbb{F}^{n-r}}. - -Therefore, the column vectors of $\mathbf{X}$ constitute a set of $n-r$ linearly independent solutions for $\mathbf{Ax} = \mathbf{0}_{\mathbb{F}^{m}}$. - -We next prove that any solution of $\mathbf{Ax} = \mathbf{0}_{\mathbb{F}^{m}}$ must be a linear combination of the columns of $\mathbf{X}$. - -For this, let - -\mathbf{u} = \begin{pmatrix} \mathbf{u}_1 \\ \mathbf{u}_2 \end{pmatrix} \in \mathbb{F}^{n} - -be any vector such that $\mathbf{Au} = \mathbf{0}_{\mathbb{F}^{m}}$. Note that since the columns of $\mathbf{A}_1$ are linearly independent, $\mathbf{A}_1\mathbf{x} = \mathbf{0}_{\mathbb{F}^{m}}$ implies $\mathbf{x} = \mathbf{0}_{\mathbb{F}^{r}}$. - -Therefore, - -\begin{array}{rcl} - -\mathbf{A}\mathbf{u} & = & \mathbf{0}_{\mathbb{F}^{m}} \\ - -\implies \begin{pmatrix}\mathbf{A}_1 & \mathbf{A}_1\mathbf{B}\end{pmatrix} \begin{pmatrix} \mathbf{u}_1 \\ \mathbf{u}_2 \end{pmatrix} & = & \mathbf{A}_1\mathbf{u}_1 + \mathbf{A}_1\mathbf{B}\mathbf{u}_2 & = & \mathbf{A}_1(\mathbf{u}_1 + \mathbf{B}\mathbf{u}_2) & = & \mathbf{0}_{\mathbb{F}^{m}} \\ - -\implies \mathbf{u}_1 + \mathbf{B}\mathbf{u}_2 & = & \mathbf{0}_{\mathbb{F}^{r}} \\ - -\implies \mathbf{u}_1 & = & -\mathbf{B}\mathbf{u}_2 - -\end{array} - - \implies \mathbf{u} = \begin{pmatrix} \mathbf{u}_1 \\ \mathbf{u}_2 \end{pmatrix} = \begin{pmatrix} -\mathbf{B} \\ \mathbf{I}_{n-r} \end{pmatrix}\mathbf{u}_2 = \mathbf{X}\mathbf{u}_2. - -This proves that any vector $\mathbf{u}$ that is a solution of $\mathbf{Ax} = \mathbf{0}$ must be a linear combination of the $n-r$ special solutions given by the columns of $\mathbf{X}$. And we have already seen that the columns of $\mathbf{X}$ are linearly independent. Hence, the columns of $\mathbf{X}$ constitute a basis for the null space of $\mathbf{A}$. Therefore, the nullity of $\mathbf{A}$ is $n - r$. Since $r$ equals rank of $\mathbf{A}$, it follows that $\operatorname{Rank}(\mathbf{A}) + \operatorname{Nullity}(\mathbf{A}) = n$. This concludes our proof. - -This theorem is a statement of the first isomorphism theorem of algebra for the case of vector spaces; it generalizes to the splitting lemma. - -In more modern language, the theorem can also be phrased as saying that each short exact sequence of vector spaces splits. Explicitly, given that - - 0 \rightarrow U \rightarrow V \overset{T}{\rightarrow} R \rightarrow 0 - -is a short exact sequence of vector spaces, then $ U \oplus R \cong V $, hence - -\dim(U) + \dim(R) = \dim(V) . - -Here R plays the role of im T and U is ker T, i.e. - - 0 \rightarrow \ker T~{\hookrightarrow}~V~\overset{T}{\rightarrow}~\operatorname{im} T \rightarrow 0 - -In the finite-dimensional case, this formulation is susceptible to a generalization: if - -0 → V1 → V2 → ⋯ → Vr → 0 - -is an exact sequence of finite-dimensional vector spaces, then - -\sum_{i=1}^r (-1)^i\dim(V_i) = 0.
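Before turning to the index formulation, a small numerical sanity check of Rank(A) + Nullity(A) = n (an illustrative NumPy sketch, not part of the original article):

```python
import numpy as np

A = np.array([[1., 2., 3., 5.],
              [2., 4., 6., 10.],
              [1., 0., 1., 1.]])   # 3x4 matrix; the second row is twice the first
m, n = A.shape

rank = np.linalg.matrix_rank(A)

# Nullity = n minus the number of (numerically) nonzero singular values.
s = np.linalg.svd(A, compute_uv=False)
nullity = n - int(np.sum(s > 1e-12))

print(rank, nullity, rank + nullity == n)   # 2 2 True
```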
- -The rank–nullity theorem for finite-dimensional vector spaces may also be formulated in terms of the index of a linear map. The index of a linear map $T \in \operatorname{Hom}(V,W)$, where $V$ and $W$ are finite-dimensional, is defined by - - \operatorname{index} T = \dim \operatorname{Ker}(T) - \dim \operatorname{Coker} T . - -Intuitively, $\dim \operatorname{Ker} T$ is the number of independent solutions $v$ of the equation $Tv = 0$, and $\dim \operatorname{Coker} T $ is the number of independent restrictions that have to be put on $w$ to make $Tv = w $ solvable. The rank–nullity theorem for finite-dimensional vector spaces is equivalent to the statement - - \operatorname{index} T = \dim V - \dim W . - -We see that we can easily read off the index of the linear map $T$ from the involved spaces, without any need to analyze $T$ in detail. This effect also occurs in a much deeper result: the Atiyah–Singer index theorem states that the index of certain differential operators can be read off the geometry of the involved spaces. diff --git a/wiki/wikipedia/2672.txt b/wiki/wikipedia/2672.txt deleted file mode 100644 index de9405f75d69c74f8763447b2c307024ec10414f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2672.txt +++ /dev/null @@ -1,47 +0,0 @@ -The Mackey–Arens theorem is an important theorem in functional analysis that characterizes those locally convex vector topologies that have some given space of linear functionals as their continuous dual space. - -According to Narici (2011), this profound result is central to duality theory; a theory that is "the central part of the modern theory of topological vector spaces." - -Let X be a vector space and let Y be a vector subspace of the algebraic dual of X that separates points on X. - -If 𝜏 is any other locally convex Hausdorff topological vector space topology on X, then we say that 𝜏 is compatible with duality between X and Y if when X is equipped with 𝜏, then it has Y as its continuous dual space. - -If we give X the weak topology 𝜎(X, Y) then X𝜎(X, Y) is a Hausdorff locally convex topological vector space (TVS) and 𝜎(X, Y) is compatible with duality between X and Y (i.e. $X_{\sigma(X, Y)}^{\prime} = \left( X_{\sigma(X, Y)} \right)^{\prime} = Y$). - -We can now ask the question: what are all of the locally convex Hausdorff TVS topologies that we can place on X that are compatible with duality between X and Y? - -The answer to this question is called the Mackey–Arens theorem. - -Mackey–Arens theorem: Let X be a vector space and let 𝒯 be a locally convex Hausdorff topological vector space topology on X. Let X′ denote the continuous dual space of X and let $X_{\mathcal{T}}$ denote X with the topology 𝒯. Then the following are equivalent: - -1. 𝒯 is identical to a $\mathcal{G}^{\prime}$-topology on X, where $\mathcal{G}^{\prime}$ is a covering of - -S1, S2, …, Sm such that the sum of the numbers in each one is equal to T. The S1, S2, …, Sm must form a partition of S in the sense that they are disjoint and they cover S. - -The 3-partition problem remains strongly NP-complete under the restriction that every integer in S is strictly between T/4 and T/2. - -The set S = { 20, 23, 25, 30, 49, 45, 27, 30, 30, 40, 22, 19 } can be partitioned into the four sets { 20, 25, 45 }, { 23, 27, 40 }, { 49, 22, 19 }, { 30, 30, 30 }, each of which sums to T = 90. Another example: the set S = {1, 2, 5, 6, 7, 9} can be partitioned into the two sets {1, 5, 9}, {2, 6, 7}, each of which sums to T = 15.
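A tiny brute-force checker (an illustrative Python sketch, not part of the original article) confirms both examples above; it searches all ways to split S into triplets with the target sum, which is feasible only for very small inputs:

```python
from itertools import combinations

def three_partition(s, target):
    """Return a list of triplets of s, each summing to target, or None."""
    if not s:
        return []
    for pair in combinations(range(1, len(s)), 2):
        trip = (s[0], s[pair[0]], s[pair[1]])
        if sum(trip) == target:
            rest = [x for i, x in enumerate(s) if i not in (0, *pair)]
            sub = three_partition(rest, target)
            if sub is not None:
                return [trip] + sub
    return None

S = [20, 23, 25, 30, 49, 45, 27, 30, 30, 40, 22, 19]
print(three_partition(S, sum(S) * 3 // len(S)))   # four triplets, each summing to 90
print(three_partition([1, 2, 5, 6, 7, 9], 15))    # e.g. [(1, 5, 9), (2, 6, 7)]
```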
- -The 3-partition problem remains NP-complete even when the integers in S are bounded above by a polynomial in n. In other words, the problem remains NP-complete even when representing the numbers in the input instance in unary; that is, 3-partition is NP-complete in the strong sense, or strongly NP-complete. This property, and 3-partition in general, is useful in many reductions where numbers are naturally represented in unary. - -The 3-partition problem is similar to the partition problem, in which the goal is to partition S into two subsets with equal sum, and to multiway number partitioning, in which the goal is to partition S into k subsets with equal sum, where k is a fixed parameter. In 3-Partition the goal is to partition S into m = n/3 subsets, not just a fixed number of subsets, with equal sum. Partition is "easier" than 3-Partition: while 3-Partition is strongly NP-hard, Partition is only weakly NP-hard - it is hard only when the numbers are encoded in a non-unary system, and have value exponential in n. When the values are polynomial in n, Partition can be solved in polynomial time using the pseudopolynomial time number partitioning algorithm. - -In the unrestricted-input variant, the inputs can be arbitrary integers; in the restricted-input variant, the inputs must be in (T/4 , T/2). The restricted version is as hard as the unrestricted version: given an instance Su of the unrestricted variant, construct a new instance of the restricted version Sr := {s + 2 T | s in Su}. Every solution of Su corresponds to a solution of Sr but with a sum of 7 T instead of T, and every element of Sr is in [2 T, 3 T] which is contained in (7 T / 4, 7 T / 2). - -In the distinct-input variant, the inputs must be in (T/4 , T/2), and in addition, they must all be distinct integers. It, too, is as hard as the unrestricted version. - -In the unrestricted-output variant, the m output subsets can be of arbitrary size - not necessarily 3 (but they still need to have the same sum T). The restricted-output variant can be reduced to the unrestricted variant: given an instance Su of the restricted variant, construct a new instance of the unrestricted variant Sr := {s + 2 T | s in Su}, with target sum 7 T. Every solution of Su naturally corresponds to a solution of Sr. In every solution of Sr, since the target sum is 7 T and each element is in (7 T / 4, 7 T / 2), there must be exactly 3 elements per set, so it corresponds to a solution of Su. - -The 4-partition problem is a variant in which S contains n = 4 m integers, the sum of all integers is m T, and the goal is to partition it into m quadruples, all with a sum of T. It can be assumed that each integer is strictly between T/5 and T/3. - -The ABC-partition problem is a variant in which, instead of a set S with 3 m integers, there are three sets A, B, C with m integers in each. The sum of numbers in all sets is m T. The goal is to construct m triplets, each of which contains one element from A, one from B and one from C, such that the sum of each triplet is T. This problem can be reduced to 3-partition as follows. Construct a set S containing the numbers 1000a+100 for each a in A; 1000b+10 for each b in B; and 1000c+1 for each c in C. Every solution of the ABC-partition instance induces a solution of the 3-partition instance with sum 1000(a+b+c)+111 = 1000T+111. Conversely, in every solution of the 3-partition instance, all triplet-sums must have the same hundreds, tens and units digits, which means that they must have exactly 1 in each of these digits.
Therefore, each triplet must have exactly one number of the form 1000a+100, one 1000b+10 and one 1000c+1. Hence, it induces a solution to the ABC-partition instance. - -* The ABC-partition problem is also called numerical 3-d matching, as it can also be reduced to the 3-dimensional matching problem: given an instance of ABC-partition, construct a tripartite hypergraph with sides A, B, C, where there is a hyperedge (a, b, c) for every three vertices in A, B, C such that a+b+c = T. A matching in this hypergraph corresponds to a solution to ABC-partition. - -Garey and Johnson (1975) originally proved 3-Partition to be NP-complete, by a reduction from 3-dimensional matching. The classic reference by Garey and Johnson (1979) describes an NP-completeness proof, reducing from 3-dimensional matching to 4-partition to 3-partition. - -The NP-hardness of 3-partition was used to prove the NP-hardness of rectangle packing, as well as of Tetris and some other puzzles, and some job scheduling problems. diff --git a/wiki/wikipedia/2676.txt b/wiki/wikipedia/2676.txt deleted file mode 100644 index 9e3cd1ab275e739841b71e18690964fc5f0b587c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2676.txt +++ /dev/null @@ -1,343 +0,0 @@ -In mathematics, the convolution theorem states that under suitable conditions the Fourier transform of a convolution of two functions (or signals) is the pointwise product of their Fourier transforms. More generally, convolution in one domain (e.g., time domain) equals point-wise multiplication in the other domain (e.g., frequency domain). Other versions of the convolution theorem are applicable to various Fourier-related transforms. - -Consider two functions $g(x)$ and $h(x)$ with Fourier transforms $G$ and $H$: - -\begin{align} - -G(f) &\triangleq \mathcal{F}\{g\}(f) = \int_{-\infty}^{\infty}g(x) e^{-i 2 \pi f x} dx, \quad f \in \mathbb{R}\\ - -H(f) &\triangleq \mathcal{F}\{h\}(f) = \int_{-\infty}^{\infty}h(x) e^{-i 2 \pi f x} dx, \quad f \in \mathbb{R} - -\end{align} - -where $\mathcal{F}$ denotes the Fourier transform operator. The transform may be normalized in other ways, in which case constant scaling factors (typically $2\pi$ or $\sqrt{2\pi}$) will appear in the convolution theorem below. The convolution of $g$ and $h$ is defined by: -$$ -r(x) = \{g*h\}(x) \triangleq \int_{-\infty}^{\infty} g(\tau) h(x-\tau) d\tau = \int_{-\infty}^{\infty} g(x-\tau) h(\tau) d\tau. -$$ - -In this context the asterisk denotes convolution, instead of standard multiplication. The tensor product symbol $\otimes$ is sometimes used instead. - -The convolution theorem states that: -$$ -R(f) \triangleq \mathcal{F}\{r\}(f) = G(f)\ H(f), \quad f \in \mathbb{R}. \quad \scriptstyle \mathsf{(Eq.1)} -$$ - -Applying the inverse Fourier transform $\mathcal{F}^{-1}$ produces the corollary: -$$ -r(x) = \mathcal{F}^{-1}\{G \cdot H\}(x). -$$ - -The theorem also generally applies to multi-dimensional functions. - -Consider functions $g,h$ in the Lp-space $L^1(\mathbb{R}^n)$, with Fourier transforms $G,H$: - -\begin{align} - -G(f) &\triangleq \mathcal{F}\{g\}(f) = \int_{\mathbb{R}^n} g(x) e^{-i 2 \pi f \cdot x} dx, \quad f \in \mathbb{R}^n\\ - -H(f) &\triangleq \mathcal{F}\{h\}(f) = \int_{\mathbb{R}^n} h(x) e^{-i 2 \pi f \cdot x} dx, - -\end{align} - -where $f\cdot x$ indicates the inner product of $\mathbb{R}^n$: $f\cdot x = \sum_{j=1}^{n} {f}_j x_j,$ and $dx = \prod_{j=1}^{n} d x_j.$ - -The convolution of $g$ and $h$ is defined by: -$$ -r(x) \triangleq \int_{\mathbb{R}^n} g(\tau) h(x-\tau) d\tau. -$$ - -Also: -$$ -\iint |g(\tau)h(x-\tau)|dxd\tau=\int \left( |g(\tau)| \int |h(x-\tau)|dx \right) d\tau = \int |g(\tau)|\|h\|_1d\tau=\|g\|_1 \|h\|_1. -$$ - -Hence by Fubini's theorem we have that $r\in L^1(\mathbb{R}^n)$ so its Fourier transform $R$ is defined by the integral formula: - -\begin{align} - -R(f) \triangleq \mathcal{F}\{r\}(f) &= \int_{\mathbb{R}^n} r(x) e^{-i 2 \pi f \cdot x} dx\\ - -&= \int_{\mathbb{R}^n} \left(\int_{\mathbb{R}^n} g(\tau) h(x-\tau) d\tau\right) e^{-i 2 \pi f \cdot x} dx. - -\end{align} - -Note that $|g(\tau)h(x-\tau)e^{-i 2\pi f \cdot x}|=|g(\tau)h(x-\tau)|$ and hence by the argument above we may apply Fubini's theorem again (i.e. interchange the order of integration): - -\begin{align} - -R(f) &= \int_{\mathbb{R}^n} g(\tau) \underbrace{\left(\int_{\mathbb{R}^n} h(x-\tau)\ e^{-i 2 \pi f \cdot x}dx\right)}_{H(f)\ e^{-i 2 \pi f \cdot \tau}}d\tau\\ - -&=\underbrace{\left(\int_{\mathbb{R}^n} g(\tau)\ e^{-i 2\pi f \cdot \tau}d\tau\right)}_{G(f)}\ H(f). - -\end{align} - -This theorem also holds for the Laplace transform, the two-sided Laplace transform and, when suitably modified, for the Mellin transform and Hartley transform (see Mellin inversion theorem). It can be extended to the Fourier transform of abstract harmonic analysis defined over locally compact abelian groups. - -Consider $P$-periodic functions $g_{_P}$ and $h_{_P},$ which can be expressed as periodic summations: -$$ -g_{_P}(x)\ \triangleq \sum_{m=-\infty}^{\infty} g(x-mP) -$$ and $h_{_P}(x)\ \triangleq \sum_{m=-\infty}^{\infty} h(x-mP).$ - -In practice the non-zero portion of components $g$ and $h$ are often limited to duration $P,$ but nothing in the theorem requires that. The Fourier series coefficients are: - -\begin{align} - -G[k] &\triangleq \mathcal{F}\{g_{_P}\}[k] = \frac{1}{P} \int_P g_{_P}(x) e^{-i 2\pi k x/P} dx, \quad k \in \mathbb{Z}; \quad \quad \scriptstyle \text{integration over any interval of length } P\\ - -H[k] &\triangleq \mathcal{F}\{h_{_P}\}[k] = \frac{1}{P} \int_P h_{_P}(x) e^{-i 2\pi k x/P} dx, \quad k \in \mathbb{Z} - -\end{align} - -where $\mathcal{F}$ denotes the Fourier series integral. - -* The pointwise product: $g_{_P}(x)\cdot h_{_P}(x)$ is also $P$-periodic, and its Fourier series coefficients are given by the discrete convolution of the $G$ and $H$ sequences: \mathcal{F}\{g_{_P}\cdot h_{_P}\}[k] = \{G*H\}[k]. - -*The convolution: \begin{align} - -\{g_{_P} * h\}(x)\ &\triangleq \int_{-\infty}^{\infty} g_{_P}(x-\tau)\cdot h(\tau)\ d\tau\\ - -&\equiv \int_P g_{_P}(x-\tau)\cdot h_{_P}(\tau)\ d\tau; \quad \quad \scriptstyle \text{integration over any interval of length } P - -\end{align} is also $P$-periodic, and is called a periodic convolution; the equivalence of the two integrals above follows from - -\begin{align} - -\int_{-\infty}^\infty g_{_P}(x - \tau) \cdot h(\tau)d\tau &= \sum_{k=-\infty}^\infty \left[\int_{x_o+kP}^{x_o+(k+1)P} g_{_P}(x - \tau) \cdot h(\tau)\ d\tau\right] \quad x_o \text{ is an arbitrary parameter}\\ - -&=\sum_{k=-\infty}^\infty \left[\int_{x_o}^{x_o+P} \underbrace{g_{_P}(x - \tau-kP)}_{g_{_P}(x - \tau), \text{ by periodicity}} \cdot h(\tau + kP)\ d\tau\right] \quad \text{substituting } \tau \rightarrow \tau+kP\\ - -&=\int_{x_o}^{x_o+P} g_{_P}(x - \tau) \cdot \underbrace{\left[\sum_{k=-\infty}^\infty h(\tau + kP)\right]}_{\triangleq \ h_{_P}(\tau)}\ d\tau. - -\end{align} - -The corresponding convolution theorem is: -$$ -\mathcal{F}\{g_{_P} * h\}[k] =\ P\cdot G[k]\ H[k]. -$$
- -This is proved as follows: - -\begin{align} - -\mathcal{F}\{g_{_P} * h\}[k] &\triangleq \frac{1}{P} \int_P \left(\int_P g_{_P}(\tau)\cdot h_{_P}(x-\tau)\ d\tau\right) e^{-i 2\pi k x/P} dx\\ - -&= \int_P g_{_P}(\tau)\left(\frac{1}{P}\int_P h_{_P}(x-\tau)\ e^{-i 2\pi k x/P} dx\right) d\tau\\ - -&= \int_P g_{_P}(\tau)\ e^{-i 2\pi k \tau/P} \underbrace{\left(\frac{1}{P}\int_P h_{_P}(x-\tau)\ e^{-i 2\pi k (x-\tau)/P} dx\right)}_{H[k], \quad \text{due to periodicity}} d\tau\\ - -&=\underbrace{\left(\int_P\ g_{_P}(\tau)\ e^{-i 2\pi k \tau/P} d\tau\right)}_{P\cdot G[k]}\ H[k]. - -\end{align} - -By a derivation similar to Eq.1, there is an analogous theorem for sequences, such as samples of two continuous functions, where now $\mathcal{F}$ denotes the discrete-time Fourier transform (DTFT) operator. Consider two sequences $g[n]$ and $h[n]$ with transforms $G$ and $H$: - -\begin{align} - -G(f) &\triangleq \mathcal{F}\{g\}(f) = \sum_{n=-\infty}^{\infty} g[n]\cdot e^{-i 2\pi f n}, \quad f \in \mathbb{R}\\ - -H(f) &\triangleq \mathcal{F}\{h\}(f) = \sum_{n=-\infty}^{\infty} h[n]\cdot e^{-i 2\pi f n}, \quad f \in \mathbb{R} - -\end{align} - -The convolution of $g$ and $h$ is defined by: -$$ -r[n] \triangleq (g * h)[n] = \sum_{m=-\infty}^\infty g[m]\cdot h[n - m] = \sum_{m=-\infty}^\infty g[n-m]\cdot h[m]. -$$ - -The convolution theorem for discrete sequences is: -$$ -R(f) \triangleq \mathcal{F}\{r\}(f) = \ G(f)\ H(f). -$$ - -$G(f)$ and $H(f),$ as defined above, are periodic, with a period of 1. Consider $N$-periodic sequences $g_{_N}$ and $h_{_N}$: -$$ -g_{_N}[n]\ \triangleq \sum_{m=-\infty}^{\infty} g[n-mN] -$$ and $h_{_N}[n]\ \triangleq \sum_{m=-\infty}^{\infty} h[n-mN], \quad n \in \mathbb{Z}.$ - -These functions occur as the result of sampling $G$ and $H$ at intervals of $1/N$ and performing an inverse discrete Fourier transform (DFT) on $N$ samples. The discrete convolution: -$$ -\{g_{_N} * h\}[n]\ \triangleq \sum_{m=-\infty}^{\infty} g_{_N}[m]\cdot h[n-m] \equiv \sum_{m=0}^{N-1} g_{_N}[m]\cdot h_{_N}[n-m] -$$ - -is also $N$-periodic, and is called a periodic convolution. Redefining the $\mathcal{F}$ operator as the $N$-length DFT, the corresponding theorem is: -$$ -\mathcal{F}\{g_{_N} * h\}[k] =\ \underbrace{\mathcal{F}\{g_{_N}\}[k]}_{G(k/N)} \cdot \underbrace{\mathcal{F}\{h_{_N}\}[k]}_{H(k/N)}, \quad k \in \mathbb{Z}. -$$ - -And therefore: -$$ -\{g_{_N} * h\}[n]\ =\ \mathcal{F}^{-1}\{\mathcal{F}\{g_{_N}\} \cdot \mathcal{F}\{h_{_N}\}\}[n]. -$$ - -Under the right conditions, it is possible for this N-length sequence to contain a distortion-free segment of a $g*h$ convolution. But when the non-zero portion of the $g(n)$ or $h(n)$ sequence is equal to or longer than $N,$ some distortion is inevitable. Such is the case when the $H(k/N)$ sequence is obtained by directly sampling the DTFT of the infinitely long impulse response. - -For $g$ and $h$ sequences whose non-zero duration is less than or equal to $N,$ a final simplification is the circular convolution: -$$ -\{g_{_N} * h\}[n] =\ \mathcal{F}^{-1}\{\mathcal{F}\{g\} \cdot \mathcal{F}\{h\}\}. -$$
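A small NumPy sketch of this identity (illustrative, not part of the original article): a circular convolution computed directly matches one computed by pointwise-multiplying DFTs and inverting:

```python
import numpy as np

N = 8
g = np.array([1., 2., 3., 0., 0., 0., 0., 0.])
h = np.array([4., 5., 6., 0., 0., 0., 0., 0.])

# Direct N-periodic (circular) convolution.
direct = np.array([sum(g[m] * h[(n - m) % N] for m in range(N)) for n in range(N)])

# DFT route: multiply the transforms pointwise, then invert.
via_fft = np.fft.ifft(np.fft.fft(g) * np.fft.fft(h)).real

print(np.allclose(direct, via_fft))   # True
```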
(see and ) - -A time-domain derivation proceeds as follows: - - - -\begin{align} - -{\scriptstyle \rm DFT} \{g_{_N} * h\}[k] &\triangleq \sum_{n=0}^{N-1} \left(\sum_{m=0}^{N-1} g_{_N}[m]\cdot h_{_N}[n-m]\right) e^{-i 2\pi kn/N}\\ - -&= \sum_{m=0}^{N-1} g_{_N}[m] \left(\sum_{n=0}^{N-1} h_{_N}[n-m]\cdot e^{-i 2\pi kn/N}\right)\\ - -&= \sum_{m=0}^{N-1} g_{_N}[m]\cdot e^{-i 2\pi km/N} - -\underbrace{ - -\left(\sum_{n=0}^{N-1} h_{_N}[n-m]\cdot e^{-i 2\pi k(n-m)/N}\right)}_{\scriptstyle{\rm DFT}\displaystyle\{h_{_N}\}[k]\quad \scriptstyle \text{due to periodicity}}\\ - -&= \underbrace{ - -\left(\sum_{m=0}^{N-1} g_{_N}[m]\cdot e^{-i 2\pi km/N}\right)}_{\scriptstyle{\rm DFT}\displaystyle\{g_{_N}\}[k]} - -\left(\scriptstyle{\rm DFT}\displaystyle\{h_{_N}\}[k]\right). - -\end{align} - - - -A frequency-domain derivation follows from , which indicates that the DTFTs can be written as: - - - -\mathcal{F}\{g_{_N} * h\}(f) = \frac{1}{N} \sum_{k=-\infty}^{\infty} \left(\scriptstyle{\rm DFT}\displaystyle \{g_{_N} * h\}[k]\right)\cdot \delta\left(f-k/N\right). \quad \scriptstyle \mathsf{(Eq.5a)} - - - - - -\mathcal{F}\{g_{_N}\}(f) = \frac{1}{N} \sum_{k=-\infty}^{\infty} \left(\scriptstyle{\rm DFT}\displaystyle\{g_{_N}\}[k]\right)\cdot \delta\left(f-k/N\right). - - - -The product with $H(f)$ is thereby reduced to a discrete-frequency function: - - - -\begin{align} - -\mathcal{F}\{g_{_N} * h\}(f) &= G_{_N}(f) H(f) \\ - -&= \frac{1}{N} \sum_{k=-\infty}^{\infty} \left(\scriptstyle{\rm DFT}\displaystyle\{g_{_N}\}[k]\right)\cdot H(f)\cdot \delta\left(f-k/N\right)\\ - -&= \frac{1}{N} \sum_{k=-\infty}^{\infty} \left(\scriptstyle{\rm DFT}\displaystyle\{g_{_N}\}[k]\right)\cdot H(k/N)\cdot \delta\left(f-k/N\right)\\ - -&= \frac{1}{N} \sum_{k=-\infty}^{\infty} \left(\scriptstyle{\rm DFT}\displaystyle\{g_{_N}\}[k]\right)\cdot \left(\scriptstyle{\rm DFT}\displaystyle\{h_{_N}\}[k]\right) \cdot \delta\left(f-k/N\right), \quad \scriptstyle \mathsf{(Eq.5b)} - -\end{align} - - - -where the equivalence of $H(k/N)$ and $\left(\scriptstyle{\rm DFT}\displaystyle\{h_{_N}\}[k]\right)$ follows from . Therefore, the equivalence of (5a) and (5b) requires: - -\scriptstyle{\rm DFT} - -\displaystyle {\{g_{_N} * h\}[k]} - -= \left(\scriptstyle{\rm DFT} - -\displaystyle\{g_{_N}\}[k]\right)\cdot \left(\scriptstyle{\rm DFT}\displaystyle\{h_{_N}\}[k]\right). - -
    We can also verify the inverse DTFT of (5b): - - - -\begin{align} - -(g_{_N} * h)[n] & = \int_{0}^{1} \left(\frac{1}{N} \sum_{k=-\infty}^{\infty} \scriptstyle{\rm DFT}\displaystyle\{g_{_N}\}[k]\cdot \scriptstyle{\rm DFT}\displaystyle\{h_{_N}\}[k]\cdot \delta\left(f-k/N\right)\right)\cdot e^{i 2 \pi f n} df \\ - -& = \frac{1}{N} \sum_{k=-\infty}^{\infty} \scriptstyle{\rm DFT}\displaystyle\{g_{_N}\}[k]\cdot \scriptstyle{\rm DFT}\displaystyle\{h_{_N}\}[k]\cdot \underbrace{\left(\int_{0}^{1} \delta\left(f-k/N\right)\cdot e^{i 2 \pi f n} df\right)}_{\text{0, for} \ k\ \notin\ [0,\ N)} \\ - -& = \frac{1}{N} \sum_{k=0}^{N-1} \bigg(\scriptstyle{\rm DFT}\displaystyle\{g_{_N}\}[k]\cdot \scriptstyle{\rm DFT}\displaystyle\{h_{_N}\}[k]\bigg)\cdot e^{i 2 \pi \frac{n}{N} k}\\ - -&=\ \scriptstyle{\rm DFT}^{-1} \displaystyle \bigg( \scriptstyle{\rm DFT}\displaystyle \{g_{_N}\}\cdot \scriptstyle{\rm DFT}\displaystyle \{h_{_N}\} \bigg). - -\end{align} - - - -There is also a convolution theorem for the inverse Fourier transform: -$$ -\mathcal{F}\{g*h\} = \mathcal{F}\{g\} \cdot \mathcal{F}\{h\} -$$ -$$ -\mathcal{F}\{g \cdot h\}= \mathcal{F}\{g\}*\mathcal{F}\{h\} -$$ - -so that -$$ -g*h= \mathcal{F}^{-1}\left\{\mathcal{F}\{g\}\cdot\mathcal{F}\{h\}\right\} -$$ -$$ -g \cdot h= \mathcal{F}^{-1}\left\{\mathcal{F}\{g\}*\mathcal{F}\{h\}\right\} -$$ - -The convolution theorem extends to tempered distributions. - -Here, $g$ is an arbitrary tempered distribution (e.g. the Dirac comb) -$$ -\mathcal{F}\{f*g\} = \mathcal{F}\{f\} \cdot \mathcal{F}\{g\} -$$ -$$ -\mathcal{F}\{\alpha \cdot g\}= \mathcal{F}\{\alpha\}*\mathcal{F}\{g\} -$$ - -but $f = F\{\alpha\}$ must be "rapidly decreasing" towards $-\infty$ and $+\infty$ in order to guarantee the existence of both, convolution and multiplication product. Equivalently, if $\alpha = F^{-1}\{f\}$ is a smooth "slowly growing" ordinary function, it guarantees the existence of both, multiplication and convolution product. - -In particular, every compactly supported tempered distribution, such as the Dirac Delta, is "rapidly decreasing". Equivalently, bandlimited functions, such as the function that is constantly $1$ are smooth "slowly growing" ordinary functions. If, for example, $g\equiv\operatorname{III}$ is the Dirac comb both equations yield the Poisson summation formula and if, furthermore, $f\equiv\delta$ is the Dirac delta then $\alpha \equiv 1$ is constantly one and these equations yield the Dirac comb identity. diff --git a/wiki/wikipedia/2677.txt b/wiki/wikipedia/2677.txt deleted file mode 100644 index 7c491d7a34b8e862d5531c792fbc97295e356ee4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2677.txt +++ /dev/null @@ -1,13 +0,0 @@ -In geometry, the two ears theorem states that every simple polygon with more than three vertices has at least two ears, vertices that can be removed from the polygon without introducing any crossings. The two ears theorem is equivalent to the existence of polygon triangulations. It is frequently attributed to Gary H. Meisters, but was proved earlier by Max Dehn. - -An ear of a polygon is defined as a vertex v such that the line segment between the two neighbors of v lies entirely in the interior of the polygon. The two ears theorem states that every simple polygon has at least two ears. - -An ear and its two neighbors form a triangle within the polygon that is not crossed by any other part of the polygon. 
Removing a triangle of this type produces a polygon with fewer sides, and repeatedly removing ears allows any simple polygon to be triangulated. - -Conversely, if a polygon is triangulated, the weak dual of the triangulation (a graph with one vertex per triangle and one edge per pair of adjacent triangles) will be a tree and each leaf of the tree will form an ear. Since every tree with more than one vertex has at least two leaves, every triangulated polygon with more than one triangle has at least two ears. Thus, the two ears theorem is equivalent to the fact that every simple polygon has a triangulation. - -An ear is called exposed when it forms a vertex of the convex hull of the polygon. However, it is possible for a polygon to have no exposed ears. - -Ears are a special case of a principal vertex, a vertex such that the line segment connecting the vertex's neighbors does not cross the polygon or touch any other vertex of it. A principal vertex for which this line segment lies outside the polygon is called a mouth. Analogously to the two ears theorem, every non-convex simple polygon has at least one mouth. Polygons with the minimum number of principal vertices of both types, two ears and a mouth, are called anthropomorphic polygons. - -The two ears theorem is often attributed to a 1975 paper by Gary H. Meisters, from which the "ear" terminology originated. However, the theorem was proved earlier by Max Dehn (circa 1899) as part of a proof of the Jordan curve theorem. To prove the theorem, Dehn observes that every polygon has at least three convex vertices. If one of these vertices, v, is not an ear, then it can be connected by a diagonal to another vertex x inside the triangle uvw formed by v and its two neighbors; x can be chosen to be the vertex within this triangle that is farthest from line uw. This diagonal decomposes the polygon into two smaller polygons, and repeated decomposition by ears and diagonals eventually produces a triangulation of the whole polygon, from which an ear can be found as a leaf of the dual tree. diff --git a/wiki/wikipedia/2678.txt b/wiki/wikipedia/2678.txt deleted file mode 100644 index 1d1e6685c99a3a92401ec9656574b9a465d81f92..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2678.txt +++ /dev/null @@ -1,7 +0,0 @@ -Peter D. Eades (born 8 January 1952) is an Australian computer scientist, a professor in the School of Information Technologies at the University of Sydney, known for his expertise in graph drawing. - -Eades received his bachelor's degree in mathematics from Australian National University in 1974, and his Ph.D. in mathematics from the same university in 1977 under the supervision of Jennifer Seberry. He then did postdoctoral studies at the University of Waterloo before taking an academic position at the University of Queensland, where he remained until 1991. He was a professor of computer science at the University of Newcastle from 1992 to 1999, and joined the University of Sydney faculty in 2000. As well as his faculty position at Sydney, Eades is also a distinguished researcher at NICTA. His widely cited research contributions include maintenance of the "mental map" in dynamically changing drawings, heuristics for reducing the number of edge crossings in layered graph drawings, and visual display of clustering information in graphs.
He was the keynote speaker at the 12th IEEE Symposium on Information Visualization in 2006, was one of three invited speakers at the 19th International Symposium on Algorithms and Computation in 2008, and was one of two invited speakers at the 18th International Symposium on Graph Drawing in 2010. - -He has been the doctoral advisor of over 30 graduate students. - -A workshop in honor of Eades' 60th birthday was held in 2012, as part of the International Symposium on Graph Drawing. diff --git a/wiki/wikipedia/2679.txt b/wiki/wikipedia/2679.txt deleted file mode 100644 index 9df770bb6e9c292acf2a6614a2c252af0ed31d3b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2679.txt +++ /dev/null @@ -1,23 +0,0 @@ -SyncToy was a freeware tool in Microsoft's PowerToys series that provided an easy-to-use graphical user interface for synchronizing files and folders in Windows versions XP, Vista, 7 and 10. It was written using Microsoft's .NET Framework and used the Microsoft Sync Framework. - -Users initially need to create a "folder pair" that represents the two folders ("left" and "right" folders) to be compared and synchronized. These folders can be on the local drive, on an external device such as a flash drive, or on a network share from another computer. SyncToy supports UNC paths. It provides a Browse option to find the folder or network share, or users can type it in directly. SyncToy offers two safeguards to ensure that the user does not lose files permanently when they are deemed unnecessary during a sync. Firstly, a user can preview what is going to happen when the sync takes place, without actually changing anything; secondly, any deleted files are optionally moved to the Recycle Bin. - -SyncToy defines three different types of operation to synchronize two folders: - -* Synchronize takes the two folders and makes sure they have exactly the same files. To do this, SyncToy may copy files in either direction and may delete or rename files in either folder. In the case that a file has been updated in both the left and right folders, the version with the later modification date is considered the winner. The other version will be overwritten (but can be recovered via the Recycle Bin if one's settings specify that all deletions go to the Recycle Bin). - -* Echo looks for changes (file modifications, new files, renames, deletes) in the left folder and makes the right folder match the left folder in every way. - -* Contribute is like an Echo, but it does not delete any files. - -SyncToy supports 32-bit and 64-bit versions of Windows 7, Windows Vista, and Windows XP. - -SyncToy started as a Powertoy for Windows XP. Initially releases took version numbers 1.x, culminating in version 1.4. These versions were written in Microsoft's .NET Framework but contained their own code for folder synchronization. They included the same actions as the present version, plus two additional actions (labelled Subscribe and Combine): - -*Subscribe would update any file in the left folder that also exists in the right folder and is found to be older. No new files would be copied, only existing files updated, if needed. - -*Combine was similar to synchronize except that no files would be deleted between the pairs. If a file on one side is out-of-date it is renamed then the newer file copied, so both the updated copy and the older version are retained in that folder. And any file deleted in either of the paired folders is not deleted in the other folder. Only copy (and rename) operations occur. 
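The Echo action described above amounts to a one-way mirror. A rough Python sketch of that idea (an illustration only, not SyncToy's actual implementation; the paths and the `echo` helper are hypothetical):

```python
import shutil
from pathlib import Path

def echo(left: Path, right: Path) -> None:
    """Make `right` match `left`: copy new/updated files, delete extras."""
    for src in left.rglob("*"):
        if src.is_file():
            dst = right / src.relative_to(left)
            if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)          # copy contents plus timestamps
    for dst in list(right.rglob("*")):
        if dst.is_file() and not (left / dst.relative_to(right)).exists():
            dst.unlink()                        # delete files missing on the left

echo(Path("left_folder"), Path("right_folder"))
```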
- -In November 2008 version 2.0 was released. This was a rewritten version built to use the Microsoft Sync Framework. Compared to version 1.4 it included better support for unattended synchronization runs, x64 compatibility, support for synchronizing encrypted files, file and folder exclusion based on both names and file types, renaming folder pairs and detection of drive letter reassignment. SyncToy 2.1 was released on November 10, 2009, and includes several minor enhancements and fixes for several bugs, including a serious issue where data on NAS would be corrupted, and another where deletes would not be synchronized when in Echo mode. - -Version 2.1 was the last version available when its official download was discontinued in January 2021. diff --git a/wiki/wikipedia/268.txt b/wiki/wikipedia/268.txt deleted file mode 100644 index a4b7b310d57deccd5861f54f86dc08c83e666ef5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/268.txt +++ /dev/null @@ -1,7 +0,0 @@ -Automatic layout is an option in graph drawing toolkits that allows a graph to be laid out automatically according to specific rules, such as: - -*reducing the length of the arcs between the graph vertices - -*reducing the number of edge crossings (to improve the graph's readability) diff --git a/wiki/wikipedia/2680.txt b/wiki/wikipedia/2680.txt deleted file mode 100644 index dfb9fe51a79e10cfa4ed7cc6f4e36ca79100aa5f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2680.txt +++ /dev/null @@ -1,48 +0,0 @@ -In mathematics, an inequation is a statement that an inequality holds between two values. It is usually written in the form of a pair of expressions denoting the values in question, with a relational sign between them indicating the specific inequality relation. Some examples of inequations are: - -* $a < b$ - -* $x+y+z \leq 1$ - -* $n > 1$ - -* $x \neq 0$ - -In some cases, the term "inequation" can be considered synonymous with the term "inequality", while in other cases, an inequation is reserved only for statements whose inequality relation is "not equal to" (≠). - -A shorthand notation is used for the conjunction of several inequations involving common expressions, by chaining them together. For example, the chain -$$ -0 \leq a < b \leq 1 -$$ - -is shorthand for -$$ -0 \leq a ~ ~ \mathrm{and} ~ ~ a < b ~ ~ \mathrm{and} ~ ~ b \leq 1 -$$ - -which also implies that $0 < b$ and $a < 1$. - -In rare cases, chains without such implications about distant terms are used. - -For example, $i \neq 0 \neq j$ is shorthand for $i \neq 0 ~ ~ \mathrm{and} ~ ~ 0 \neq j$, which does not imply $i \neq j.$ Similarly, $a < b > c$ is shorthand for $a < b ~ ~ \mathrm{and} ~ ~ b > c$, which does not imply any order of $a$ and $c$. - -Similar to equation solving, inequation solving means finding what values (numbers, functions, sets, etc.) fulfill a condition stated in the form of an inequation or a conjunction of several inequations. These expressions contain one or more unknowns, which are free variables for which values are sought that cause the condition to be fulfilled. To be precise, what is sought are often not necessarily actual values, but, more in general, expressions. A solution of the inequation is an assignment of expressions to the unknowns that satisfies the inequation(s); in other words, expressions that, when substituted for the unknowns, make the inequations true propositions.
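As a toy illustration of checking whether concrete values fulfill such a conjunction (a Python sketch, not part of the original article; the sample system anticipates the linear example below):

```python
def satisfies(x1, x2):
    """Check 0 <= x1 <= 690 - 1.5*x2, 0 <= x2 <= 530 - x1, and x1 <= 640 - 0.75*x2."""
    return (0 <= x1 <= 690 - 1.5 * x2) and \
           (0 <= x2 <= 530 - x1) and \
           (x1 <= 640 - 0.75 * x2)

print(satisfies(100.0, 200.0))   # True: this point lies in the feasible region
print(satisfies(700.0, 0.0))     # False: violates the first chained inequation
```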
- -Often, an additional objective expression (i.e., an optimization equation) is given, that is to be minimized or maximized by an optimal solution. - -For example, -$$ -0 \leq x_1 \leq 690 - 1.5 \cdot x_2 \land 0 \leq x_2 \leq 530 - x_1 \land x_1 \leq 640 - 0.75 \cdot x_2 -$$ - -is a conjunction of inequations, partly written as chains (where $\land$ can be read as "and"); the set of its solutions is shown in blue in the picture (the red, green, and orange line corresponding to the 1st, 2nd, and 3rd conjunct, respectively). For a larger example, see Linear programming#Example. - -Computer support in solving inequations is described in constraint programming; in particular, the simplex algorithm finds optimal solutions of linear inequations. The programming language Prolog III also supports solving algorithms for particular classes of inequalities (and other relations) as a basic language feature. For more, see constraint logic programming. - -Usually because of the properties of certain functions (like square roots), some inequations are equivalent to a combination of multiple others. For example, the inequation $\textstyle \sqrt{f(x)} < g(x)$ is logically equivalent to the following three inequations combined: - -# $ f(x) \ge 0$ - -# $ g(x) > 0$ - -# $ f(x) < \left(g(x)\right)^2$ diff --git a/wiki/wikipedia/2681.txt b/wiki/wikipedia/2681.txt deleted file mode 100644 index 0b91dce4dca2dd6f41f8e39e9417b4507243f916..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2681.txt +++ /dev/null @@ -1,75 +0,0 @@ -In the mathematical fields of graph theory and combinatorial optimization, the bipartite dimension or biclique cover number of a graph G = (V, E) is the minimum number of bicliques (that is complete bipartite subgraphs), needed to cover all edges in E. A collection of bicliques covering all edges in G is called a biclique edge cover, or sometimes biclique cover. The bipartite dimension of G is often denoted by the symbol d(G). - -An example of a biclique edge cover is given in the following diagrams: a bipartite graph, together with a covering of it by four bicliques (shown as the red, blue, green, and black bicliques of the cover). - -The bipartite dimension of the n-vertex complete graph $K_n$ is $\lceil \log_2 n\rceil $. - -The bipartite dimension of a 2n-vertex crown graph equals $\sigma(n)$, where -$$ -\sigma(n)=\min \left\{k \mid n \le \binom{k}{\lfloor k/2 \rfloor} \right\} -$$ - -is the inverse function of the central binomial coefficient. - -The bipartite dimension of the $n \times m$ lattice graph is $\frac{nm}{2} - 1$ if $m$ is even and $n-1 = k (m-1) + 2 \ell$ for some integers $0 \leq \ell < k$, and is $\big\lfloor \frac{nm}{2} \big\rfloor$ otherwise. - -Fishburn determined the bipartite dimension for some special graphs. For example, the path $P_n$ has $d(P_n) = \lfloor n/2 \rfloor$ and the cycle $C_n$ has $d(C_n)=\lceil n/2\rceil$. - -The computational task of determining the bipartite dimension for a given graph G is an optimization problem. The decision problem for bipartite dimension can be phrased as: - -INSTANCE: A graph $G=(V,E)$ and a positive integer $k$.
- -QUESTION: Does G admit a biclique edge cover containing at most $k$ bicliques? - -This problem appears as problem GT18 in Garey and Johnson's classical book on NP-completeness, and is a rather straightforward reformulation of - -another decision problem on families of finite sets. - -The set basis problem appears as problem SP7 in Garey and Johnson's book. - -Here, for a family $\mathcal{S}=\{S_1,\ldots,S_n\}$ of subsets of a finite set $\mathcal{U}$, - -a set basis for $\mathcal{S}$ is another family of subsets $\mathcal{B} = \{B_1,\ldots,B_\ell\}$ of $\mathcal{U}$, such that every set $S_i$ can be described as the union of some basis elements from $\mathcal{B}$. The set basis problem is now given as follows: - -INSTANCE: A finite set $\mathcal{U}$, a family $\mathcal{S}=\{S_1,\ldots,S_n\}$ of subsets of $\mathcal{U}$, and a positive integer k. - -QUESTION: Does there exist a set basis of size at most $k$ for $\mathcal{S}$? - -In its former formulation, the problem was proved to be NP-complete by Orlin, even for bipartite graphs. The formulation as a set basis problem was proved to be NP-complete earlier by Stockmeyer. The problem remains NP-hard even if we restrict our attention to bipartite graphs whose bipartite dimension is guaranteed to be at most $O(\log n)$, with n denoting the size of the given problem instance. On the positive side, the problem is solvable in polynomial time on bipartite domino-free graphs. - -Regarding the existence of approximation algorithms, Simon proved that the problem cannot be approximated well (assuming P ≠ NP). Indeed, the bipartite dimension is NP-hard to approximate within $|V|^{1/3-\epsilon}$ for every fixed $\epsilon>0$, already for bipartite graphs. - -In contrast, proving that the problem is fixed-parameter tractable is an exercise in designing kernelization algorithms, which appears as such in the textbook by Downey. Fleischner et al. also provide a concrete bound on the size of the resulting kernel, which has meanwhile been improved by Nor et al. - -In fact, for a given bipartite graph on n vertices, it can be decided in time $O(f(k))+n^3$ with $f(k) = 2^{k2^{k-1}+3k}$ whether its bipartite dimension is at most k. - -The problem of determining the bipartite dimension of a graph appears in various contexts of computing. For instance, in computer systems, different users of a system can be allowed or disallowed access to various resources. In a role-based access control system, a role provides access rights to a set of resources. A user can own multiple roles, and has permission to access all resources granted by some of their roles. Also, a role can be owned by multiple users. The role mining problem is to find a minimum set of roles, such that for each user, their roles taken together grant access to all specified resources. The set of users together with the set of resources in the system naturally induces a bipartite graph, whose edges are permissions. Each biclique in this graph is a potential role, and the optimum solutions to the role mining problem are precisely the minimum biclique edge covers. - -A similar scenario is known in computer security, more specifically in secure broadcasting. In that setup, several messages need to be sent each to a set of receivers, over an insecure channel. Each message has to be encrypted using some cryptographic key that is known only to the intended receivers. Each receiver may possess multiple encryption keys, and each key will be distributed to multiple receivers.
The optimum key generation problem is to find a minimum set of encryption keys for ensuring secure transmission. As above, the problem can be modeled using a bipartite graph whose minimum biclique edge covers coincide with the solutions to the optimum key generation problem. - -A different application lies in biology, where minimum biclique edge covers are used in mathematical models of human leukocyte antigen (HLA) serology. diff --git a/wiki/wikipedia/2682.txt b/wiki/wikipedia/2682.txt deleted file mode 100644 index 8c4a574b0bf1dff1b9d922108fba4e4e9f9e3f07..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2682.txt +++ /dev/null @@ -1,318 +0,0 @@ -In the mathematical field of real analysis, the monotone convergence theorem is any of a number of related theorems proving the convergence of monotonic sequences (sequences that are non-decreasing or non-increasing) that are also bounded. Informally, the theorems state that if a sequence is increasing and bounded above by a supremum, then the sequence will converge to the supremum; in the same way, if a sequence is decreasing and is bounded below by an infimum, it will converge to the infimum. - -If a sequence of real numbers is increasing and bounded above, then its supremum is the limit. - -Let $ (a_n)_{n\in\mathbb{N}} $ be such a sequence, and let $\{ a_n \}$ be the set of terms of $ (a_n)_{n\in\mathbb{N}} $. By assumption, $\{ a_n \}$ is non-empty and bounded above. By the least-upper-bound property of real numbers, $c = \sup_n \{a_n\}$ exists and is finite. Now, for every $\varepsilon > 0$, there exists $N$ such that $a_N > c - \varepsilon $, since otherwise $c - \varepsilon $ is an upper bound of $\{ a_n \}$, which contradicts the definition of $c$. Then since $(a_n)_{n\in\mathbb{N}}$ is increasing, and $c$ is its upper bound, for every $n > N$, we have $|c - a_n| \leq |c - a_N| < \varepsilon $. Hence, by definition, the limit of $(a_n)_{n\in\mathbb{N}}$ is $\sup_n \{a_n\}.$ - -If a sequence of real numbers is decreasing and bounded below, then its infimum is the limit. - -The proof is similar to the proof for the case when the sequence is increasing and bounded above. - -If $(a_n)_{n\in\mathbb{N}}$ is a monotone sequence of real numbers (i.e., if $a_n \leq a_{n+1}$ for every $n \geq 1$ or $a_n \geq a_{n+1}$ for every $n \geq 1$), then this sequence has a limit if and only if the sequence is bounded. - -* "If"-direction: The proof follows directly from the lemmas. - -* "Only If"-direction: By definition of limit, every sequence $(a_n)_{n\in\mathbb{N}}$ with a limit $L$ is necessarily bounded. - -If for all natural numbers j and k, $a_{j,k}$ is a non-negative real number and $a_{j,k} \leq a_{j+1,k}$, then -$$ -\lim_{j\to\infty} \sum_k a_{j,k} = \sum_k \lim_{j\to\infty} a_{j,k}. -$$ - -The theorem states that if you have an infinite matrix of non-negative real numbers such that - -#the columns are weakly increasing and bounded, and - -#for each row, the series whose terms are given by this row has a convergent sum, - -then the limit of the sums of the rows is equal to the sum of the series whose term k is given by the limit of column k (which is also its supremum). The series has a convergent sum if and only if the (weakly increasing) sequence of row sums is bounded and therefore convergent.
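As a quick numerical illustration of this interchange of limits (anticipating the worked binomial example that follows), one can tabulate the matrix entries $\binom{n}{k} n^{-k}$ and compare the limit of the row sums with the sum of the column limits; a minimal sketch in Python:

```python
# Numerical sanity check of the row/column interchange for the matrix
# a[n][k] = C(n, k) / n**k discussed in the example below: the row sums
# are (1 + 1/n)**n, the column limits are 1/k!, and both sides
# converge to e.
from math import comb, factorial, e

def row_sum(n):
    return sum(comb(n, k) / n**k for k in range(n + 1))

print(row_sum(10), row_sum(100), row_sum(1000))  # increases towards e
print(sum(1 / factorial(k) for k in range(30)))  # sum of column limits
print(e)                                         # both approach e
```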
- -As an example, consider the infinite sequence of rows -$$ - \left( 1+ \frac1 n\right)^n = \sum_{k=0}^n \binom nk \frac 1 {n^k} = \sum_{k=0}^n \frac1{k!} \times \frac nn \times \frac{n-1}n\times\cdots\times\frac{n-k+1}n, -$$ - -where n approaches infinity (the limit of this sequence is e). Here the matrix entry in row n and column k is -$$ -\binom nk \frac 1 {n^k} =\frac1{k!}\times\frac nn\times\frac{n-1}n\times\cdots\times\frac{n-k+1}n; -$$ - -the columns (fixed k) are indeed weakly increasing with n and bounded (by 1/k!), while the rows only have finitely many nonzero terms, so condition 2 is satisfied; the theorem now says that you can compute the limit of the row sums $(1+1/n)^n$ by taking the sum of the column limits, namely $\frac1{k!}$. - -The following result is due to Beppo Levi, who proved a slight generalization in 1906 of an earlier result by Henri Lebesgue. In what follows, $\operatorname{\mathcal B}_{\R_{\geq 0}}$ denotes the $\sigma$-algebra of Borel sets on $[0,+\infty]$. By definition, $\operatorname{\mathcal B}_{\R_{\geq 0}}$ contains the set $\{+\infty\}$ and all Borel subsets of $\R_{\geq 0}.$ - -Let $(\Omega,\Sigma,\mu)$ be a measure space, and $X\in\Sigma$. Consider a pointwise non-decreasing sequence $\{f_k\}^\infty_{k=1}$ of $(\Sigma,\operatorname{\mathcal B}_{\R_{\geq 0}})$-measurable non-negative functions $f_k:X\to [0,+\infty]$, i.e., for every ${k\geq 1}$ and every ${x\in X}$, -$$ - 0 \leq f_k(x) \leq f_{k+1}(x)\leq\infty. -$$ - -Set the pointwise limit of the sequence $\{f_{k}\}$ to be $f$. That is, for every $x\in X$, -$$ - f(x):= \lim_{k\to\infty} f_k(x). -$$ - -Then $f$ is $(\Sigma,\operatorname{\mathcal B}_{\R_{\geq 0}})$-measurable and -$$ -\lim_{k\to\infty} \int_X f_k d\mu = \int_X f d\mu. -$$ - -Remark 1. The integrals may be finite or infinite. - -Remark 2. The theorem remains true if its assumptions hold $\mu$-almost everywhere. In other words, it is enough that there is a null set $N$ such that the sequence $\{f_k(x)\}$ is non-decreasing for every ${x\in X\setminus N}.$ To see why this is true, we start with an observation that allowing the sequence $\{ f_k \}$ to pointwise non-decrease almost everywhere causes its pointwise limit $f$ to be undefined on some null set $N$. On that null set, $f$ may then be defined arbitrarily, e.g. as zero, or in any other way that preserves measurability. To see why this will not affect the outcome of the theorem, note that since ${\mu(N)=0},$ we have, for every $k,$ -$$ - \int_X f_k d\mu = \int_{X \setminus N} f_k d\mu -$$ and $\int_X f d\mu = \int_{X \setminus N} f d\mu, $ - -provided that $f$ is $(\Sigma,\operatorname{\mathcal B}_{\R_{\geq 0}})$-measurable. (These equalities follow directly from the definition of Lebesgue integral for a non-negative function). - -Remark 3. Under the assumptions of the theorem, (a) $f = \liminf_k f_k = \lim_k f_k$ pointwise, and (b) $\lim_k \int_X f_k d\mu = \liminf_k \int_X f_k d\mu$, since the sequence of integrals is non-decreasing. - -(Note that the second chain of equalities follows from Remark 5). - -Remark 4. The proof below does not use any properties of Lebesgue integral except those established here. The theorem, thus, can be used to prove other basic properties, such as linearity, pertaining to Lebesgue integration. - -Remark 5 (monotonicity of Lebesgue integral). In the proof below, we apply the monotonic property of Lebesgue integral to non-negative functions only. Specifically (see Remark 4), let the functions $f,g : X \to [0,+\infty]$ be $(\Sigma,\operatorname{\mathcal B}_{\R_{\geq 0}})$-measurable. - -*If $f \leq g$ everywhere on $X,$ then -$$ -\int_X fd\mu \leq \int_X gd\mu.
-$$ - -*If $ X_1,X_2 \in \Sigma $ and $X_1 \subseteq X_2, $ then -$$ -\int_{X_1} fd\mu \leq \int_{X_2} fd\mu. -$$ - -Proof. Denote by $\operatorname{SF}(h)$ the set of simple $(\Sigma, \operatorname{\mathcal B}_{\R_{\geq 0}})$-measurable functions $s:X\to [0,\infty)$ such that -$$ -0\leq s\leq h -$$ everywhere on $X.$ - -1. Since $f \leq g,$ we have -$$ - \operatorname{SF}(f) \subseteq \operatorname{SF}(g). -$$ - -By definition of Lebesgue integral and the properties of supremum, -$$ -\int_X fd\mu = \sup_{s\in {\rm SF}(f)}\int_X sd\mu \leq \sup_{s\in {\rm SF}(g)}\int_X sd\mu = \int_X gd\mu. -$$ - -2. Let ${\mathbf 1}_{X_1}$ be the indicator function of the set $X_1.$ It can be deduced from the definition of Lebesgue integral that -$$ - \int_{X_2} f\cdot {\mathbf 1}_{X_1} d\mu = \int_{X_1} f d\mu -$$ - -if we notice that, for every $s \in {\rm SF}(f\cdot {\mathbf 1}_{X_1}),$ $s=0$ outside of $X_1.$ Combined with the previous property, the inequality $ f\cdot {\mathbf 1}_{X_1} \leq f$ implies -$$ - \int_{X_1} f d\mu = \int_{X_2} f\cdot {\mathbf 1}_{X_1} d\mu \leq \int_{X_2} f d\mu. -$$ - -This proof does not rely on Fatou's lemma. However, we do explain how that lemma might be used. - -For those not interested in an independent proof, the intermediate results below may be skipped. - -Lemma 1. Let $(\Omega,\Sigma,\mu)$ be a measure space. Consider a simple $(\Sigma,\operatorname{\mathcal B}_{\R_{\geq 0}})$-measurable non-negative function $s:\Omega\to{\mathbb R_{\geq 0}}$. For a subset $S\subseteq\Omega$, define -$$ -\nu(S)=\int_Ssd\mu. -$$ - -Then $\nu$ is a measure on $\Omega$. - -Monotonicity follows from Remark 5. Here, we will only prove countable additivity, leaving the rest up to the reader. Let $S=\bigcup^\infty_{i=1}S_i$, where all the sets $S_i$ are pairwise disjoint. Since $s$ is simple, -$$ -s=\sum^n_{i=1}c_i\cdot {\mathbf 1}_{A_i}, -$$ - -for some finite non-negative constants $c_i\in{\mathbb R}_{\geq 0}$ and pairwise disjoint sets $A_i\in\Sigma$ such that $\bigcup^n_{i=1}A_i=\Omega$. By definition of Lebesgue integral, - - - -\begin{align} - -\nu(S) - -&=\sum^n_{i=1}c_i\cdot\mu(S\cap A_i)\\ - -&=\sum^n_{i=1}c_i\cdot\mu\left(\left(\bigcup^\infty_{j=1} S_j\right)\cap A_i\right)\\ - -&=\sum^n_{i=1}c_i\cdot\mu\left(\bigcup^\infty_{j=1}(S_j\cap A_i)\right) - -\end{align} - - - -Since all the sets $S_j\cap A_i$ are pairwise disjoint, the countable additivity of $\mu$ - -gives us - - - -\sum^n_{i=1} c_i\cdot\mu \left(\bigcup^\infty_{j=1}(S_j\cap A_i)\right)=\sum^n_{i=1}c_i\cdot\sum^\infty_{j=1} \mu(S_j\cap A_i). - - - -Since all the summands are non-negative, the sum of the series, whether this sum is finite or infinite, cannot change if the summation order does. For that reason, - - - -\begin{align} - -\sum^n_{i=1}c_i\cdot\sum^\infty_{j=1}\mu(S_j\cap A_i)&=\sum^\infty_{j=1}\sum^n_{i=1}c_i\cdot - -\mu(S_j\cap A_i)\\ - -&=\sum^\infty_{j=1}\int_{S_j} sd\mu\\ - -&=\sum^\infty_{j=1}\nu(S_j), - -\end{align} - - - -as required. - -The following property is a direct consequence of the definition of measure. - -Lemma 2. Let $\mu$ be a measure, and $S = \bigcup^\infty_{i=1}S_i$, where - - - -S_1\subseteq\cdots\subseteq S_i\subseteq S_{i+1}\subseteq\cdots\subseteq S - - - -is a non-decreasing chain with all its sets $\mu$-measurable. Then -$$ -\mu(S)=\lim_i\mu(S_i). -$$ - -Step 1. We begin by showing that $f$ is $ (\Sigma, \operatorname{\mathcal B}_{\R_{\geq 0}}) $–measurable. - -Note. If we were using Fatou's lemma, the measurability would follow easily from Remark 3(a).
- -To do this without using Fatou's lemma, it is sufficient to show that the inverse image of an interval $[0,t]$ under $f$ is an element of the sigma-algebra $\Sigma$ on $X$, because (closed) intervals generate the Borel sigma algebra on the reals. Since $[0,t]$ is a closed interval, and, for every $k$, $0\le f_k(x) \le f(x)$, -$$ -0\leq f(x)\leq t\quad \Leftrightarrow\quad \Bigl[\forall k\quad 0\leq f_k(x)\leq t\Bigr]. -$$ - -Thus, -$$ -\{x\in X \mid 0\leq f(x)\leq t\} = \bigcap_k \{x\in X \mid 0\leq f_k(x)\leq t\}. -$$ - -Being the inverse image of a Borel set under a $(\Sigma,\operatorname{\mathcal B}_{\R_{\geq 0}})$-measurable function $f_k$, each set in the countable intersection is an element of $\Sigma$. Since $\sigma$-algebras are, by definition, closed under countable intersections, this shows that $f$ is $(\Sigma,\operatorname{\mathcal B}_{\R_{\geq 0}})$-measurable, and the integral $\textstyle \int_X f d\mu $ is well defined (and possibly infinite). - -Step 2. We will first show that $\textstyle\int_X f d\mu \geq \lim_k \int_X f_k d\mu. $ - -The definition of $f$ and monotonicity of $\{f_k\}$ imply that $f(x)\geq f_k(x)$, for every $k$ and every $x\in X$. By monotonicity (or, more precisely, its narrower version established in Remark 5; see also Remark 4) of Lebesgue integral, -$$ -\int_X fd\mu\geq\int_X f_kd\mu, -$$ - -and -$$ -\int_X fd\mu\geq\lim_k\int_X f_kd\mu. -$$ - -Note that the limit on the right exists (finite or infinite) because, due to monotonicity (see Remark 5 and Remark 4), the sequence is non-decreasing. - -End of Step 2. - -We now prove the reverse inequality. We seek to show that -$$ - \int_X f d\mu \leq \lim_k \int_X f_k d\mu -$$. - -Proof using Fatou's lemma. Per Remark 3, the inequality we want to prove is equivalent to -$$ -\int_X \liminf_k f_k(x) d\mu \leq \liminf_k \int_X f_k d\mu. -$$ - -But the latter follows immediately from Fatou's lemma, and the proof is complete. - -Independent proof. To prove the inequality without using Fatou's lemma, we need some extra machinery. Denote $\operatorname{SF}(f)$ the set of simple $(\Sigma,\operatorname{\mathcal B}_{\R_{\geq 0}})$-measurable functions $s:X\to [0,\infty)$ such that -$$ -0\leq s\leq f -$$ on $X$. - -Step 3. Given a simple function $s\in\operatorname{SF}(f)$ and a real number $t\in (0,1)$, define -$$ -B^{s,t}_k=\{x\in X\mid t\cdot s(x)\leq f_k(x)\}\subseteq X. -$$ - -Then $B^{s,t}_k\in\Sigma$, $B^{s,t}_k\subseteq B^{s,t}_{k+1}$, and $\textstyle X=\bigcup_k B^{s,t}_k$. - -Step 3a. To prove the first claim, let $\textstyle s=\sum^m_{i=1}c_i\cdot{\mathbf 1}_{A_i}$, for some finite collection of pairwise disjoint measurable sets $A_i\in\Sigma$ such that $\textstyle X=\cup^m_{i=1}A_i$, some (finite) non-negative constants $c_i\in {\mathbb R}_{\geq 0}$, and ${\mathbf 1}_{A_i}$ denoting the indicator function of the set $A_i$. - -For every $ x\in A_i, $ $t\cdot s(x)\leq f_k(x)$ holds if and only if $ f_k(x) \in [t\cdot c_i, +\infty].$ Given that the sets $A_i$ are pairwise disjoint, -$$ -B^{s,t}_k=\bigcup^m_{i=1}\Bigl(f^{-1}_k\Bigl([t\cdot c_i,+\infty]\Bigr)\cap A_i\Bigr). -$$ - -Since the pre-image $f^{-1}_k\Bigl([t\cdot c_i,+\infty]\Bigr)$ of the Borel set -$$ -[t\cdot c_i,+\infty] -$$ under the measurable function $f_k$ is measurable, and $\sigma$-algebras, by definition, are closed under finite intersection and unions, the first claim follows. - -Step 3b. To prove the second claim, note that, for each $k$ and every $x\in X$, $f_k(x)\leq f_{k+1}(x).$ - -Step 3c. 
To prove the third claim, we show that $\textstyle X\subseteq\bigcup_k B^{s,t}_k$. - -Indeed, if, to the contrary, $\textstyle X\not\subseteq\bigcup_k B^{s,t}_k$, then an element -$$ -\textstyle x_0\in X\setminus\bigcup_k B^{s,t}_k=\bigcap_k(X\setminus B^{s,t}_k) -$$ - -exists such that $f_k(x_0) < t\cdot s(x_0)$ for every $k$. Letting $k\to\infty$ gives $f(x_0) \leq t\cdot s(x_0)$. Since $s \leq f$ on $X$ and $t \in (0,1)$, this forces $f(x_0) \leq t\cdot s(x_0) < s(x_0) \leq f(x_0)$ whenever $s(x_0)>0$, a contradiction; and if $s(x_0)=0$, then $t\cdot s(x_0)=0\leq f_k(x_0)$ for every $k$, contradicting the choice of $x_0$. This proves the third claim. - -Given real numbers $m_0,\ldots,m_{2m-1} \in \R$, consider the collection C of measures μ on $\R$ such that -$$ -\int x^k d\mu(x) = m_k -$$ - -for k = 0,1,...,2m - 1 (and in particular the integral is defined and finite). - -Let $P_0,P_1,\ldots,P_m$ be the first m + 1 orthogonal polynomials with respect to μ ∈ C, and let $\xi_1,\ldots,\xi_m$ be the zeros of $P_m$. It is not hard to see that the polynomials $P_0,P_1,\ldots,P_{m-1}$ and the numbers $\xi_1,\ldots,\xi_m$ are the same for every μ ∈ C, and therefore are determined uniquely by $m_0,\ldots,m_{2m-1}$. - -Denote -$$ -\rho_{m-1}(z) = 1 \Big/ \sum_{k=0}^{m-1} |P_k(z)|^2 -$$. - -Theorem. For $j = 1,2,\ldots,m$, and any μ ∈ C, -$$ -\mu(-\infty, \xi_j] \leq \rho_{m-1}(\xi_1) + \cdots + \rho_{m-1}(\xi_j) \leq \mu(-\infty,\xi_{j+1}). -$$ diff --git a/wiki/wikipedia/2684.txt b/wiki/wikipedia/2684.txt deleted file mode 100644 index 335b9d20a551431b1636b3d4b815e11b0f28ec9a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2684.txt +++ /dev/null @@ -1,69 +0,0 @@ -Cambridge Analytica Ltd (CA) was a British political consulting firm that came to prominence through the Facebook–Cambridge Analytica data scandal. It was started in 2013 as a subsidiary of the private intelligence company and self-described "global election management agency" SCL Group by long-time SCL executives Nigel Oakes, Alexander Nix and Alexander Oakes, with Nix as CEO. - -Cambridge Analytica was established as a subsidiary of the private intelligence company SCL Group that was active in military and political arenas. The men who ran Cambridge Analytica and its parent SCL were described as having close ties to the Conservative Party, royalty and the British military. Cambridge Analytica (SCL USA) was incorporated in January 2015 with its registered office in Westferry Circus, London and just one staff member, its director and CEO Alexander James Ashburner Nix (also appointed in January 2015). Nix was also the director of nine similar companies sharing the same registered offices in London, including Firecrest technologies, Emerdata and six SCL Group companies including "SCL elections limited". Nigel Oakes, known as the former boyfriend of Lady Helen Windsor, had founded the predecessor of SCL Group in the 1990s, and in 2005 Oakes established SCL Group together with his brother Alexander Oakes and Alexander Nix; SCL Group was the parent company of Cambridge Analytica. Conservative minister and MP Sir Geoffrey Pattie was the founding chairman of SCL; Lord Ivar Mountbatten also joined Oakes as a director of the company. CA was backed by the family of American hedge fund manager Robert Mercer. Political scientists question CA's claims about the effectiveness of its methods of targeting voters. - -In March 2018, multiple media outlets broke news of Cambridge Analytica's business practices. The New York Times and The Observer reported that the company had acquired and used personal data about Facebook users from an external researcher who had told Facebook he was collecting it for academic purposes.
Shortly afterwards, Channel 4 News aired undercover investigative videos showing Nix boasting about using prostitutes, bribery sting operations, and honey traps to discredit politicians on whom it conducted opposition research, and saying that the company "ran all of (Donald Trump's) digital campaign". In response to the media reports, the Information Commissioner's Office (ICO) of the UK pursued a warrant to search the company's servers. Facebook banned Cambridge Analytica from advertising on its platform, saying that it had been deceived. On 23 March 2018, the British High Court granted the ICO a warrant to search Cambridge Analytica's London offices. As a result, Nix was suspended as CEO, and replaced by Julian Wheatland. - -The personal data of up to 87 million Facebook users were acquired via the 270,000 Facebook users who used a Facebook app called "This Is Your Digital Life." When users gave this third-party app permission to acquire their data, back in 2015, the app also gained access to information on each user's friends network; as a result, the data of about 87 million users, the majority of whom had not explicitly given Cambridge Analytica permission to access their data, were collected. The app developer breached Facebook's terms of service by giving the data to Cambridge Analytica. - -On 1 May 2018, Cambridge Analytica and its parent company SCL filed for insolvency proceedings and closed operations. Alexander Tayler, a former director for Cambridge Analytica, was appointed director of Emerdata on 28 March 2018. Rebekah Mercer, Jennifer Mercer, Alexander Nix, and an associate with links to American businessman Erik Prince hold leadership positions at Emerdata. The Russo brothers are producing an upcoming film on Cambridge Analytica. In 2020, the British Information Commissioner's Office closed a three-year inquiry into the company, concluding that Cambridge Analytica was "not involved" in the 2016 Brexit referendum and finding no additional evidence of Russia's alleged interference during the campaign. Sensitive US polling and election data, however, were passed to Russian intelligence via Cambridge Analytica contractor Sam Patten and Konstantin Kilimnik, who was indicted during the affair. - -Publicly, parent company SCL Group called itself a "global election management agency"; Politico reported it was known for involvement "in military disinformation campaigns to social media branding and voter targeting". During the 2000s, SCL gained work on a large number of campaigns for the US and UK governments' War on Terror, advancing its model of behavioral conflict. SCL's involvement in the political world has been primarily in the developing world, where it has been used by the military and politicians to study and manipulate public opinion and political will. Slate writer Sharon Weinberger compared one of SCL's hypothetical test scenarios to fomenting a coup. Boris Johnson had a meeting with Alexander Nix in 2016. Research on social media data had shown that, with a limited number of "likes", people can be analysed more accurately than by friends or relatives, and that individual psychological targeting is a powerful tool to influence people. - -A large amount of data can be extracted from the trace of almost every step we take online, a digital footprint of human behavior. Another source of information was the "Cruz Crew" mobile app that tracked physical movements and contacts and, according to the Associated Press, invaded personal data more than previous presidential campaign apps.
The company claimed to use "data enhancement and audience segmentation techniques" providing "psychographic analysis" for a "deeper knowledge of the target audience". The company used the Big Five model of personality. Joel Zamel signed a memorandum of understanding for Psy-Group with Cambridge Analytica on 14 December 2016. The company also released a statement that the allegations did not represent the ethics of the company, and that an independent entity would investigate Nix's statements. - -The investigation also raised questions regarding campaign finance law. During the 2016 election, the company was employed both by Trump's campaign and Robert Mercer's Make America Number 1 Super PAC which supported Trump. While PACs are not limited in the amount of funds they can spend on behalf of a candidate, they are not allowed to coordinate strategy with the campaigns they are supporting. Nix's statements in the recorded video describe how the Trump campaign itself could "take the high road" and "stay clean", while the negative attacks were handled by the firm and the Super PAC, in a way which makes it "unattributable, untrackable". These statements potentially suggested unlawful coordination between Trump's campaign and the PAC, although Cambridge Analytica has denied this. - -Some political scientists have been skeptical of claims made by Cambridge Analytica about the effectiveness of its microtargeting of voters. They believe that access to digital data does not provide significantly more information than public voter databases do, and that the digital data has limited value over time as the preferences of voters change. In March 2017, The New York Times reported that CA had exaggerated its capabilities: "Cambridge executives now concede that the company never used psychographics in the Trump campaign." - -Significant backlash against Facebook came to light in March 2018, resulting in controversy as well as a $37 billion drop in the market capitalization of Facebook, as of 20 March. Due to the scandal of enabling monetization of Facebook personal data, one assessment was that only 41% of Facebook users trusted the company. On 26 March, the US Federal Trade Commission announced it is "conducting an open investigation of Facebook Inc's privacy practices following the disclosure that 50 million users' data got into the hands of political consultancy Cambridge Analytica." In March 2019 Facebook acknowledged it had concerns about “improper data-gathering practices” by CA months before December 2015, the previously reported date at which it was first alerted. In December 2019 the Federal Trade Commission also filed a complaint against Cambridge Analytica for its practices, while filing settlements with CEO Alexander Nix and app developer Aleksandr Kogan. Cambridge Analytica was also scoping both political and commercial work in Australia. In Kenya, the Jubilee Party downplayed CA's role, saying it had hired the firm's parent company to assist with branding. - -In its Disinformation and 'fake news' inquiry, published on 29 July 2018, the UK Parliament's Digital, Culture, Media and Sport Committee noted that it is believed that CA, or its associated companies, worked with the Labour Party in Malta on the 2013 Maltese general election campaign. Several sources claim that CA had close relationships with Henley & Partners, who would immediately after the election introduce and run a lucrative Citizenship by Investment Program in Malta.
The Maltese Government issued a press release denying the claims and calling the report and its sources "fake news". Henley & Partners denied any wrongdoing. According to Henley & Partners, there was never a formal working relationship with CA. - -The Final Report by the UK Parliament's Digital, Culture, Media and Sport Committee, published on 18 February 2019, took note of the Maltese Government's submissions (including through PR agency Chelgate's services) but determined that compelling evidence shown to the Committee confirmed that: - -"SCL certainly had meetings in Malta, that Christian Kalin of Henley & Partners was introduced by SCL to Joseph Muscat in 2011, and that Christian Kalin met with both political parties before 2013". - -The Maltese Government later issued a further denial decrying the use of "unnamed sources" and "confidential documents". - -After the Facebook–Cambridge Analytica data scandal, Forbes reported that the British news program Channel 4 News had mentioned the existence of proof revealing ties between the Institutional Revolutionary Party (PRI) and Cambridge Analytica, suggesting a modus operandi similar to the one in the United States. According to Channel 4 News' Guillermo Galdos, CA worked for the PRI at least until January 2018. An investigation was requested. - -In 2017 the company had reached out to the PRI, Mexico's ruling political party, offering to bolster the party's presidential campaign in the country's largest-ever political elections of 2018. The party decided that it was sufficiently equipped to influence the election on its own, but still paid Cambridge Analytica to prevent the firm from working with rival parties. - -Many donors to the UK Conservative Party reportedly have connections to the parent company of Cambridge Analytica. - -CA became involved in the 2016 United Kingdom European Union membership referendum (Brexit), encouraging "persuadable" voters to vote for leaving the European Union (EU). Articles by Carole Cadwalladr in The Observer and Guardian newspapers, respectively published in February and May 2017, speculated in detail that CA had influenced both the Brexit/Vote Leave option in the UK's 2016 EU membership referendum and Trump's 2016 US presidential campaign, with Robert Mercer's backing of Donald Trump being key. The articles also discuss the legality of using the farmed social data. CA is pursuing legal action over the claims made in Cadwalladr's articles. - -The company worked with the John Bolton Super PAC (political action committee) on a major digital and TV campaign focused on senate races in Arkansas, North Carolina and New Hampshire and helped turn out voters for the Republican Party candidates in those states. Two of the Republican candidates backed by the Bolton Super PAC, Thom Tillis in North Carolina and Tom Cotton in Arkansas, won their Senate bids, while Scott Brown lost in New Hampshire. The PAC ran 15 different TV advertisements each in North Carolina and Arkansas and 17 in New Hampshire, mostly online with some targeted directly to households using Dish Network and DirecTV. All were intended to push Bolton's national security agenda. - -CA also supported Thom Tillis's successful campaign to oust Kay Hagan as a senator for North Carolina.
The firm was credited for its role in identifying a sizeable cluster of North Carolinians who prioritised foreign affairs, which encouraged Tillis to shift the conversation from state-level debates over education policy to charges that incumbent Kay Hagan had failed to take ISIS's rise seriously. Tillis's campaign and the North Carolina Republican Party paid Cambridge Analytica $345,000 for these services. - -CA sent dozens of non-U.S. citizens to provide campaign strategy and messaging advice to Republican candidates in 2014, opening the firm and those individuals to prosecution under the Foreign Agents Registration Act for acting as foreign agents without having registered as such with the United States Department of Justice. - -CA's involvement in the 2016 Republican Party presidential primaries became known in July 2015. At that time Robert Mercer was a major supporter of Ted Cruz. The Mercer family funded CA directly and indirectly through several super-PACs as well as through payments via Cruz's campaign, with additional money coming from allied Super-PACs. Ultimately the Cruz campaign spent $5.8 million on work by CA. Marco Rubio's campaign was supported by Optimus Consulting. Meanwhile, the third competitor, Governor John Kasich, was supported by the rival firm Applecart. - -After Cruz dropped out of the race for the Republican presidential nomination in May 2016, Robert Mercer and his daughter Rebekah Mercer started to support Trump. In August, it became known that CA had followed their allegiance and was working for Trump's presidential campaign. In September, the Trump campaign spent $5 million to purchase television advertising, and less than $1 million on data work. - -In 2016, the company said that it had not used psychographics in the Trump presidential campaign. Cambridge Analytica targeted potential voters with bespoke messages. Cambridge Analytica's data head, Alexander Tayler said, "When you think about the fact that Donald Trump lost the popular vote by 3m votes but won the electoral college vote, [t]hat's down to the data and the research." - -The head of Cambridge Analytica said he asked WikiLeaks founder, Julian Assange, for help finding Hillary Clinton's 33,000 deleted emails. - -On 18 May 2017, Time reported that the US Congress was investigating CA in connection with Russian interference in the 2016 United States elections. The report alleges that CA may have coordinated the spread of Russian propaganda using its microtargeting capabilities. According to the Trump campaign's digital operations chief, CA worked "side-by-side" with representatives from Facebook, Alphabet Inc. and Twitter on Trump's digital campaign activities. - -On 4 August 2017, Michael Flynn, who is under investigation by US counterintelligence for his contacts with Russian officials, amended a public financial filing to reflect that he had served in an advisory role in an agreement with CA during the 2016 Trump campaign. - -On 8 October 2017, Brad Parscale, who was the digital media director for Trump's 2016 presidential campaign, stated in an interview with Lesley Stahl from CBS News on 60 Minutes that he was able to utilize Facebook advertising to directly target individual voters in swing states. Parscale cited the example in which he was able to target specific universes (audiences) who care about infrastructure and promote Trump and his message to build back up the crumbling American infrastructure.
Although he hired Cambridge Analytica to assist with microtargeting, and Cambridge Analytica stated that its work was the key to Trump's victory, Parscale denied that he gained assistance from the firm, stating that he thought Cambridge Analytica's use of psychographics did not work. He also denied any assistance with links to Russia. CA's activities were also reported in Italy, the Czech Republic, and Argentina. During an investigation it was admitted that the company had been contacted by a well-known Italian party to manage the electoral campaign in Italy, but the name of the party was not revealed. - -In the Philippines, Cambridge Analytica was also involved in the 2016 presidential election, with reports citing that it helped Rodrigo Duterte win the race. Duterte's camp denied this association. The SCL Group, Cambridge Analytica's parent company, claimed that it rebranded the politician's image to target voters who it found were swayed by qualities such as toughness and decisiveness. During the election cycle, Facebook confirmed that the data of more than 1 million of its Filipino users were improperly shared with the communications company. - -On 4 January 2020, a release of more than 100,000 documents showed how Cambridge Analytica worked in 68 countries, revealing a global infrastructure with operations to manipulate voters on "an industrial scale". The release of documents began on New Year's Day from an anonymous Twitter account called @HindsightFiles, which published material on elections in Malaysia, Kenya and Brazil (with more countries following in the next days). These documents came from Brittany Kaiser, an ex-Cambridge Analytica employee turned whistleblower, and were retrieved from her email accounts and hard drives. diff --git a/wiki/wikipedia/2685.txt b/wiki/wikipedia/2685.txt deleted file mode 100644 index 76956ab54cf98e3ca944487b88083ce72323793e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2685.txt +++ /dev/null @@ -1,43 +0,0 @@ -Szymański's Mutual Exclusion Algorithm is a mutual exclusion algorithm devised by computer scientist Dr. Bolesław Szymański, which has many favorable properties including linear wait, and an extension of which solved the open problem posed by Leslie Lamport of whether there is an algorithm with a constant number of communication bits per process that satisfies every reasonable fairness and failure-tolerance requirement that Lamport conceived of (Lamport's solution used n factorial communication variables vs. Szymański's 5). - -The algorithm is modeled on a waiting room with an entry and exit doorway. Initially the entry door is open and the exit door is closed. All processes which request entry into the critical section at roughly the same time enter the waiting room; the last of them closes the entry door and opens the exit door. The processes then enter the critical section one by one (or in larger groups if the critical section permits this). The last process to leave the critical section closes the exit door and reopens the entry door, so the next batch of processes may enter.
The implementation consists of each process having a flag variable which is written by that process and read by all others (this single-writer property is desirable for efficient cache usage). The status of the entry door is computed by reading the flags of all processes. Pseudo-code is given below: - - - -# Entry protocol - -flag[self] ← 1 # Standing outside waiting room - -await(all flag[1..N] ∈ {0, 1, 2}) # Wait for open door - -flag[self] ← 3 # Standing in doorway - -if any flag[1..N] = 1: # Another process is waiting to enter - -    flag[self] ← 2 # Waiting for other processes to enter - -    await(any flag[1..N] = 4) # Wait for a process to enter and close the door - -flag[self] ← 4 # The door is closed - -await(all flag[1..self-1] ∈ {0, 1}) # Wait for everyone of lower ID to finish exit protocol - -# Critical section - -# ... - -# Exit protocol - -await(all flag[self+1..N] ∈ {0, 1, 4}) # Ensure everyone in the waiting room has - -# realized that the door is supposed to be closed - -flag[self] ← 0 # Leave. Reopen door if nobody is still in the waiting room - - - -Note that the order of the "all" and "any" tests must be uniform. Also the "any" tests should be satisfied by a thread other than self. For example, if the test is any flag[1..N] = 1 and only flag[self] = 1, then the test is said to have failed/returned 0. - -Despite the intuitive explanation, the algorithm was not easy to prove correct; however, due to its favorable properties, a proof of correctness was desirable, and multiple proofs have been presented. diff --git a/wiki/wikipedia/2686.txt b/wiki/wikipedia/2686.txt deleted file mode 100644 index 22b5527a4c83cd474cfe43bbab174638e8371f3e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2686.txt +++ /dev/null @@ -1,92 +0,0 @@ -In computational complexity theory, the maximum satisfiability problem (MAX-SAT) is the problem of determining the maximum number of clauses, of a given Boolean formula in conjunctive normal form, that can be made true by an assignment of truth values to the variables of the formula. It is a generalization of the Boolean satisfiability problem, which asks whether there exists a truth assignment that makes all clauses true. - -The conjunctive normal form formula -$$ - (x_0\lor x_1)\land(x_0\lor\lnot x_1)\land(\lnot x_0\lor x_1)\land(\lnot x_0\lor\lnot x_1) -$$ - -is not satisfiable: no matter which truth values are assigned to its two variables, at least one of its four clauses will be false. - -However, it is possible to assign truth values in such a way as to make three out of four clauses true; indeed, every truth assignment will do this. - -Therefore, if this formula is given as an instance of the MAX-SAT problem, the solution to the problem is the number three. - -The MAX-SAT problem is NP-hard, since its solution easily leads to the solution of the boolean satisfiability problem, which is NP-complete. - -It is also difficult to find an approximate solution of the problem that satisfies a number of clauses within a guaranteed approximation ratio of the optimal solution. More precisely, the problem is APX-complete, and thus does not admit a polynomial-time approximation scheme unless P = NP. - -More generally, one can define a weighted version of MAX-SAT as follows: given a conjunctive normal form formula with non-negative weights assigned to each clause, find truth values for its variables that maximize the combined weight of the satisfied clauses. The MAX-SAT problem is an instance of weighted MAX-SAT where all weights are 1.
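To make the definition concrete, a brute-force enumeration over all truth assignments reproduces the answer for the four-clause example above. A minimal sketch (the clause encoding as (variable, sign) pairs is a convention chosen here for illustration):

```python
# Brute-force MAX-SAT: maximize the number of satisfied clauses over all
# 2**n truth assignments. A literal is encoded as (var_index, sign),
# where sign=True means the variable appears unnegated.
from itertools import product

def max_sat(n_vars, clauses):
    best = 0
    for assignment in product([False, True], repeat=n_vars):
        satisfied = sum(
            any(assignment[v] == sign for v, sign in clause)
            for clause in clauses
        )
        best = max(best, satisfied)
    return best

# (x0 or x1) and (x0 or not x1) and (not x0 or x1) and (not x0 or not x1)
clauses = [[(0, True), (1, True)],
           [(0, True), (1, False)],
           [(0, False), (1, True)],
           [(0, False), (1, False)]]
print(max_sat(2, clauses))  # prints 3, matching the example above
```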
- -Randomly assigning each variable to be true with probability 1/2 gives an expected 2-approximation. More precisely, if each clause has at least k variables, then this yields a $(1 - 2^{-k})$-approximation. This algorithm can be derandomized using the method of conditional probabilities. - -MAX-SAT can also be expressed using an integer linear program (ILP). Fix a conjunctive normal form formula F with variables $x_1, x_2, \ldots, x_n$, and let C denote the clauses of F. For each clause c in C, let $S_c^+$ and $S_c^-$ denote the sets of variables which are not negated in c, and those that are negated in c, respectively. The variables $y_x$ of the ILP will correspond to the variables of the formula F, whereas the variables $z_c$ will correspond to the clauses. The ILP is as follows, where $w_c$ denotes the weight of clause $c$ (all weights equal 1 in the unweighted problem): -$$ -\begin{align} -\text{maximize} \quad & \sum_{c\in C} w_c\cdot z_c \\ -\text{subject to} \quad & z_c \leq \sum_{x\in S_c^+} y_x + \sum_{x\in S_c^-} (1-y_x) && \text{for all } c\in C, \\ -& y_x \in \{0,1\} && \text{for all } x, \\ -& z_c \in \{0,1\} && \text{for all } c\in C. -\end{align} -$$ - -The above program can be relaxed to the linear program L, in which the integrality constraints $y_x \in \{0,1\}$ and $z_c \in \{0,1\}$ are replaced by $0 \leq y_x \leq 1$ and $0 \leq z_c \leq 1$. - -The following algorithm using that relaxation is an expected (1-1/e)-approximation: - -# Solve the linear program L and obtain a solution O - -# Set variable x to be true with probability $y_x$ where $y_x$ is the value given in O. - -This algorithm can also be derandomized using the method of conditional probabilities. - -The 1/2-approximation algorithm does better when clauses are large whereas the (1-1/e)-approximation does better when clauses are small. They can be combined as follows: - -# Run the (derandomized) 1/2-approximation algorithm to get a truth assignment X. - -# Run the (derandomized) (1-1/e)-approximation to get a truth assignment Y. - -# Output whichever of X or Y maximizes the weight of the satisfied clauses. - -This is a deterministic factor (3/4)-approximation. - -On the formula - -F=\underbrace{(x\lor y)}_{\text{weight }1}\land \underbrace{(x\lor\lnot y)}_{\text{weight }1}\land\underbrace{(\lnot x\lor z)}_{\text{weight }2+\epsilon} - -where $\epsilon >0$, the (1-1/e)-approximation will set each variable to True with probability 1/2, and so will behave identically to the 1/2-approximation. Assuming that the assignment of x is chosen first during derandomization, the derandomized algorithms will pick a solution with total weight $3+\epsilon$, whereas the optimal solution has weight $4+\epsilon$. - -Many exact solvers for MAX-SAT have been developed during recent years, and many of them were presented in the well-known conference on the boolean satisfiability problem and related problems, the SAT Conference. In 2006 the SAT Conference hosted the first MAX-SAT evaluation comparing performance of practical solvers for MAX-SAT, as it has done in the past for the pseudo-boolean satisfiability problem and the quantified boolean formula problem. - -Because of its NP-hardness, large-size MAX-SAT instances cannot in general be solved exactly, and one must often resort to approximation algorithms and heuristics. - -Several solvers were submitted to recent Max-SAT Evaluations: - -* Branch and Bound based: Clone, MaxSatz (based on Satz), IncMaxSatz, IUT_MaxSatz, WBO, GIDSHSat. - -* Satisfiability based: SAT4J, QMaxSat. - -* Unsatisfiability based: msuncore, WPM1, PM2. - -MAX-SAT is one of the optimization extensions of the boolean satisfiability problem, which is the problem of determining whether the variables of a given Boolean formula can be assigned in such a way as to make the formula evaluate to TRUE. If the clauses are restricted to have at most 2 literals, as in 2-satisfiability, we get the MAX-2SAT problem. If they are restricted to at most 3 literals per clause, as in 3-satisfiability, we get the MAX-3SAT problem.
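The expected quality of the random-assignment algorithm described above is easy to check empirically. A minimal sketch, reusing the clause encoding of the earlier brute-force sketch:

```python
# Randomized MAX-SAT approximation: set each variable True with
# probability 1/2. A clause with k literals is then satisfied with
# probability 1 - 2**(-k), so the expected number of satisfied clauses
# for the four 2-literal clauses below is 4 * (1 - 1/4) = 3.
import random

def expected_satisfied(n_vars, clauses, trials=20000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        assignment = [rng.random() < 0.5 for _ in range(n_vars)]
        total += sum(
            any(assignment[v] == sign for v, sign in clause)
            for clause in clauses
        )
    return total / trials

clauses = [[(0, True), (1, True)],
           [(0, True), (1, False)],
           [(0, False), (1, True)],
           [(0, False), (1, False)]]
print(expected_satisfied(2, clauses))  # close to 3.0
```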
There are many problems related to the satisfiability of conjunctive normal form Boolean formulas. - -* Decision problems: - -** 2SAT - -** 3SAT - -* Optimization problems, where the goal is to maximize the number of clauses satisfied: - -** MAX-SAT, and the corresponding weighted version Weighted MAX-SAT - -** MAX-kSAT, where each clause has exactly k variables: - -*** MAX-2SAT - -*** MAX-3SAT - -*** MAXEkSAT - -** The partial maximum satisfiability problem (PMAX-SAT) asks for the maximum number of clauses which can be satisfied by any assignment of a given subset of clauses. The rest of the clauses must be satisfied. - -** The soft satisfiability problem (soft-SAT), given a set of SAT problems, asks for the maximum number of those problems which can be satisfied by any assignment. - -** The minimum satisfiability problem. - -* The MAX-SAT problem can be extended to the case where the variables of the constraint satisfaction problem belong to the set of reals. The problem amounts to finding the smallest q such that the q-relaxed intersection of the constraints is not empty. diff --git a/wiki/wikipedia/2687.txt b/wiki/wikipedia/2687.txt deleted file mode 100644 index 7698b0ae751724775d12622e70827c3b2b8fef17..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2687.txt +++ /dev/null @@ -1,27 +0,0 @@ -In Riemannian geometry, Cheng's eigenvalue comparison theorem states in general terms that when a domain is large, the first Dirichlet eigenvalue of its Laplace–Beltrami operator is small. This general characterization is not precise, in part because the notion of "size" of the domain must also account for its curvature. The theorem is due to Shiu-Yuen Cheng. Using geodesic balls, it can be generalized to certain tubular domains. - -Let M be a Riemannian manifold with dimension n, and let $B_M(p, r)$ be a geodesic ball centered at p with radius r less than the injectivity radius of p ∈ M. For each real number k, let N(k) denote the simply connected space form of dimension n and constant sectional curvature k. Cheng's eigenvalue comparison theorem compares the first eigenvalue $\lambda_1(B_M(p, r))$ of the Dirichlet problem in $B_M(p, r)$ with the first eigenvalue in $B_{N(k)}(r)$ for suitable values of k. There are two parts to the theorem: - -* Suppose that $K_M$, the sectional curvature of M, satisfies -$$ -K_M\le k. -$$ - -Then -$$ -\lambda_1\left(B_{N(k)}(r)\right) \le \lambda_1\left(B_M(p,r)\right). -$$ - -The second part is a comparison theorem for the Ricci curvature of M: - -* Suppose that the Ricci curvature of M satisfies, for every vector field X, -$$ -\operatorname{Ric}(X,X) \ge k(n-1)|X|^2. -$$ - -Then, with the same notation as above, -$$ -\lambda_1\left(B_{N(k)}(r)\right) \ge \lambda_1\left(B_M(p,r)\right). -$$ - -S.Y. Cheng used Barta's theorem to derive the eigenvalue comparison theorem. As a special case, if k = −1 and inj(p) = ∞, Cheng's inequality becomes $\lambda^*(N) \ge \lambda^*(\mathbb{H}^n(-1))$, which is McKean's inequality. diff --git a/wiki/wikipedia/2688.txt b/wiki/wikipedia/2688.txt deleted file mode 100644 index 3e26c1ad8ded48cd0de51dce2fd54db815b3f4fb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2688.txt +++ /dev/null @@ -1,10 +0,0 @@ -Pósa's theorem, in graph theory, is a sufficient condition for the existence of a Hamiltonian cycle based on the degrees of the vertices in an undirected graph. It implies two other degree-based sufficient conditions, Dirac's theorem on Hamiltonian cycles and Ore's theorem.
Unlike those conditions, it can be applied to graphs with a small number of low-degree vertices. It is named after Lajos Pósa, a protégé of Paul Erdős born in 1947, who discovered this theorem in 1962. - -The Pósa condition for a finite undirected graph $G$ having $n$ vertices requires that, when the degrees of the $n$ vertices are listed in increasing order as -$$ -d_{1} \leq d_{2} \leq ... \leq d_{n}, -$$ - -for each index $k < n/2$ the inequality $k < d_{k}$ is satisfied. - -Pósa's theorem states that if a finite undirected graph satisfies the Pósa condition, then that graph has a Hamiltonian cycle in it. diff --git a/wiki/wikipedia/2689.txt b/wiki/wikipedia/2689.txt deleted file mode 100644 index c048e05d5bdd6ef34b6fcb2072d54b274021f01b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2689.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, the Weil conjecture on Tamagawa numbers is the statement that the Tamagawa number $\tau(G)$ of a simply connected simple algebraic group defined over a number field is 1. In this case, simply connected means "not having a proper algebraic covering" in the algebraic group theory sense, which is not always the topologists' meaning. - -Weil calculated the Tamagawa number in many cases of classical groups and observed that it is an integer in all considered cases and that it was equal to 1 in the cases when the group is simply connected. The first observation does not hold for all groups: Ono found examples where the Tamagawa numbers are not integers. The second observation, that the Tamagawa numbers of simply connected semisimple groups seem to be 1, became known as the Weil conjecture. - -Robert Langlands (1966) introduced harmonic analysis methods to show it for Chevalley groups. K. F. Lai (1980) extended the class of known cases to quasisplit reductive groups. Kottwitz proved it for all groups satisfying the Hasse principle, which at the time was known for all groups without E8 factors. V. I. Chernousov (1989) removed this restriction, by proving the Hasse principle for the resistant E8 case (see strong approximation in algebraic groups), thus completing the proof of Weil's conjecture. In 2011, Jacob Lurie and Dennis Gaitsgory announced a proof of the conjecture for algebraic groups over function fields over finite fields. - -Ono used the Weil conjecture to calculate the Tamagawa numbers of all semisimple algebraic groups. - -For spin groups, the conjecture implies the known Smith–Minkowski–Siegel mass formula. diff --git a/wiki/wikipedia/269.txt b/wiki/wikipedia/269.txt deleted file mode 100644 index e47852f0ba733b86da59fcdea5c3d226057fa82f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/269.txt +++ /dev/null @@ -1,15 +0,0 @@ -In real-time computing, priority inheritance is a method for eliminating unbounded priority inversion. Using this programming method, a process scheduling algorithm increases the priority of a process (A) to the maximum priority of any other process waiting for any resource on which A has a resource lock (if it is higher than the original priority of A). - -The basic idea of the priority inheritance protocol is that when a job blocks one or more high-priority jobs, it ignores its original priority assignment and executes its critical section at an elevated priority level. After executing its critical section and releasing its locks, the process returns to its original priority level. - -Consider three jobs: a high-priority job H, a medium-priority job M, and a low-priority job L. - -Suppose that both H and L require some shared resource.
If L acquires this shared resource (entering a critical section), and H subsequently requires it, H will block until L releases it (leaving its critical section). Without priority inheritance, process M could preempt process L during the critical section and delay its completion, in effect causing the medium-priority process M to indirectly preempt the high-priority process H. This is a priority inversion bug. - -With priority inheritance, L will execute its critical section at H's high priority whenever H is blocked on the shared resource. As a result, M will be unable to preempt L and will be blocked. That is, the higher-priority job M must wait for the critical section of the lower-priority job L to be executed, because L has inherited H's priority. When L exits its critical section, it regains its original (low) priority and awakens H (which was blocked by L). H, having high priority, preempts L and runs to completion. This enables M and L to resume in succession and run to completion without priority inversion. - -Operating systems and kernels implementing priority inheritance include: - -*FreeRTOS - -*Microsoft Azure RTOS, formerly Express Logic's ThreadX - -*Linux diff --git a/wiki/wikipedia/2690.txt b/wiki/wikipedia/2690.txt deleted file mode 100644 index 59fbf720e5a5cdf56aed9a8757d47cd8f93377aa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2690.txt +++ /dev/null @@ -1,99 +0,0 @@ -The probabilistic method is a nonconstructive method, primarily used in combinatorics and pioneered by Paul Erdős, for proving the existence of a prescribed kind of mathematical object. It works by showing that if one randomly chooses objects from a specified class, the probability that the result is of the prescribed kind is strictly greater than zero. Although the proof uses probability, the final conclusion is determined for certain, without any possible error. - -This method has now been applied to other areas of mathematics such as number theory, linear algebra, and real analysis, as well as in computer science (e.g. randomized rounding) and information theory. - -If every object in a collection of objects fails to have a certain property, then the probability that a random object chosen from the collection has that property is zero. - -Similarly, showing that the probability is (strictly) less than 1 can be used to prove the existence of an object that does not satisfy the prescribed properties. - -Another way to use the probabilistic method is by calculating the expected value of some random variable. If it can be shown that the random variable can take on a value less than the expected value, this proves that the random variable can also take on some value greater than the expected value. - -Alternatively, the probabilistic method can also be used to guarantee the existence of a desired element in a sample space with a value that is greater than or equal to the calculated expected value, since the non-existence of such element would imply every element in the sample space is less than the expected value, a contradiction. - -Common tools used in the probabilistic method include Markov's inequality, the Chernoff bound, and the Lovász local lemma. - -Although others before him proved theorems via the probabilistic method (for example, Szele's 1943 result that there exist tournaments containing a large number of Hamiltonian cycles), many of the most well known proofs using this method are due to Erdős. The first example below describes one such result from 1947 that gives a proof of a lower bound for the Ramsey number R(r, r).
- -Suppose we have a complete graph on n vertices. We wish to show (for small enough values of n) that it is possible to color the edges of the graph in two colors (say red and blue) so that there is no complete subgraph on r vertices which is monochromatic (every edge colored the same color). - -To do so, we color the graph randomly. Color each edge independently with probability 1/2 of being red and 1/2 of being blue. We calculate the expected number of monochromatic subgraphs on r vertices as follows: - -For any set $S_r$ of $r$ vertices from our graph, define the variable $X(S_r)$ to be 1 if every edge amongst the $r$ vertices is the same color, and 0 otherwise. Note that the number of monochromatic $r$-subgraphs is the sum of $X(S_r)$ over all possible subsets $S_r$. For any individual set $S_r^i$, the expected value of $X(S_r^i)$ is simply the probability that all of the $C(r, 2)$ edges in $S_r^i$ are the same color: -$$ -E[X(S_r^i)] = 2 \cdot 2^{-{r \choose 2}} -$$ - -(the factor of 2 comes because there are two possible colors). - -This holds true for any of the $C(n, r)$ possible subsets we could have chosen, i.e. $i$ ranges from 1 to $C(n,r)$. So we have that the sum of $E[X(S_r^i)]$ over all $S_r^i$ is -$$ -\sum_{i=1}^{C(n,r)} E[X(S_r^i)] = {n \choose r}2^{1-{r \choose 2}}. -$$ - -The sum of expectations is the expectation of the sum (regardless of whether the variables are independent), so the expectation of the sum (the expected number of all monochromatic $r$-subgraphs) is -$$ -E[X(S_r)] = {n \choose r}2^{1-{r \choose 2}}. -$$ - -Consider what happens if this value is less than 1. Since the expected number of monochromatic r-subgraphs is strictly less than 1, there must exist a specific coloring for which the number of monochromatic r-subgraphs is strictly less than 1. The number of monochromatic r-subgraphs in this coloring is a non-negative integer, hence it must be 0 (0 is the only non-negative integer less than 1). It follows that if -$$ -E[X(S_r)] = {n \choose r}2^{1-{r \choose 2}} < 1 -$$ - -(which holds, for example, for n=5 and r=4), there must exist a coloring in which there are no monochromatic r-subgraphs. - -The same fact can be proved without probability, using a simple counting argument: - -* The total number of r-subgraphs is ${n \choose r}$. - -* Each r-subgraph has ${r \choose 2}$ edges and thus can be colored in $2^{r \choose 2}$ different ways. - -* Of these colorings, only 2 colorings are 'bad' for that subgraph (the colorings in which all of its edges are red or all of its edges are blue), so the fraction of colorings that are bad for that subgraph is $2 \cdot 2^{-{r \choose 2}}$. - -* Hence, the fraction of colorings that are bad for some (at least one) subgraph is at most $2 {n \choose r} 2^{-{r \choose 2}}$. - -* Hence, if $2^{r \choose 2} > 2 {n \choose r}$, this fraction is less than 1, and there must be at least one coloring which is not 'bad' for any subgraph. - -By definition of the Ramsey number, this implies that R(r, r) must be bigger than n. In particular, R(r, r) must grow at least exponentially with r. - -A weakness of this argument is that it is entirely nonconstructive. Even though it proves (for example) that almost every coloring of the complete graph on $(1.1)^r$ vertices contains no monochromatic r-subgraph, it gives no explicit example of such a coloring. The problem of finding such a coloring has been open for more than 50 years.
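The expectation bound above can be evaluated mechanically: for a given r, searching for the largest n with $\binom{n}{r}2^{1-\binom{r}{2}} < 1$ produces a concrete lower-bound witness for R(r, r). A minimal sketch:

```python
# For a given r, find the largest n with C(n, r) * 2**(1 - C(r, 2)) < 1.
# For such n, a two-coloring of K_n with no monochromatic K_r must
# exist, hence R(r, r) > n.
from math import comb

def ramsey_lower_bound(r):
    n = r
    while comb(n + 1, r) * 2.0**(1 - comb(r, 2)) < 1:
        n += 1
    return n

for r in [4, 5, 10]:
    print(r, ramsey_lower_bound(r))  # e.g. r=4 gives n=6, so R(4,4) > 6
```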
- ---- - -A 1959 paper of Erdős addressed the following problem in graph theory: given positive integers g and k, does there exist a graph G containing only cycles of length at least g, such that the chromatic number of G is at least k? - -It can be shown that such a graph exists for any g and k, and the proof is reasonably simple. Let n be very large and consider a random graph G on n vertices, where every edge in G exists with probability $p = n^{1/g-1}$. We show that with positive probability, G satisfies the following two properties: - -Property 1. G contains at most n/2 cycles of length less than g. - -Proof. Let X be the number of cycles of length less than g. The number of cycles of length i in the complete graph on n vertices is -$$ -\frac{n!}{2\cdot i \cdot (n-i)!} \le \frac{n^i}{2} -$$ - -and each of them is present in G with probability $p^i$. Hence by Markov's inequality we have -$$ -\Pr \left (X> \tfrac{n}{2} \right )\le \frac{2}{n} E[X] \le \frac{1}{n} \sum_{i=3}^{g-1} p^i n^i = \frac{1}{n} \sum_{i=3}^{g-1} n^{\frac{i}{g}} \le \frac{g}{n} n^{\frac{g-1}{g}} = gn^{-\frac{1}{g}} = o(1). -$$ - -Thus for sufficiently large n, property 1 holds with a probability of more than 1/2. - -Property 2. G contains no independent set of size $\lceil \tfrac{n}{2k} \rceil$. - -Proof. Let Y be the size of the largest independent set in G. Clearly, we have -$$ -\Pr (Y\ge y) \le {n \choose y}(1-p)^{\frac{y(y-1)}{2}} \le n^y e^{-\frac{py(y-1)}{2}} = e^{- \frac{y}{2} \cdot (py -2\ln n - p)} = o(1), -$$ - -when -$$ -y = \left \lceil \frac{n}{2k} \right \rceil. -$$ Thus, for sufficiently large n, property 2 holds with a probability of more than 1/2. - -For sufficiently large n, the probability that a graph from the distribution has both properties is positive: each property fails with probability less than 1/2, so the probability that at least one of them fails is less than 1, and hence the probability that both hold is positive. - -Here comes the trick: since G has these two properties, we can remove at most n/2 vertices from G (one from each short cycle) to obtain a new graph G′ on $n'\geq n/2$ vertices that contains only cycles of length at least g. This new graph, like G, has no independent set of size $\left \lceil \frac{n'}{k}\right\rceil$, so G′ cannot be partitioned into fewer than k independent sets and, hence, has chromatic number at least k. - -This result gives a hint as to why the computation of the chromatic number of a graph is so difficult: even when there are no local reasons (such as small cycles) for a graph to require many colors, the chromatic number can still be arbitrarily large. diff --git a/wiki/wikipedia/2691.txt b/wiki/wikipedia/2691.txt deleted file mode 100644 index 2e4a1bc4eebdacd6621abf0bb50f2dc5fcf31617..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2691.txt +++ /dev/null @@ -1,12 +0,0 @@ -Carnot's theorem (named after Lazare Carnot) describes a necessary and sufficient condition for three lines that are perpendicular to the (extended) sides of a triangle to intersect in a common point. The theorem can also be thought of as a generalization of the Pythagorean theorem. - -For a triangle $\triangle ABC$ with sides $a, b, c$, consider three lines that are perpendicular to the triangle's sides and intersect in a common point $F$.
If $P_a, P_b, P_c$ are the pedal points of those three perpendiculars on the sides $a, b, c$, then the following equation holds: -$$ - |AP_c|^2+|BP_a|^2+|CP_b|^2=|BP_c|^2+|CP_a|^2+|AP_b|^2 -$$ - -The converse of the statement above is true as well, that is, if the equation holds for the pedal points of three perpendiculars on the three triangle sides, then they intersect in a common point. Therefore, the equation provides a necessary and sufficient condition. - -If the triangle $\triangle ABC$ has a right angle in $C$ and the intersection point $F$ is located at either $A$ or $B$, then the equation above yields the Pythagorean theorem. For instance, if $F$ coincides with $A$ then this yields $|AP_b|=0$, $|AP_c|=0$, $|CP_a|=0$, $|CP_b|=b$, $|BP_a|=a$ and $ |BP_c|=c$. Therefore, the equation above transforms into the Pythagorean theorem $a^2 + b^2 =c^2$. - -Another corollary is the property of perpendicular bisectors of a triangle to intersect in a common point. In the case of perpendicular bisectors you have $ |AP_c| = |BP_c|$, $|BP_a| = |CP_a|$ and $|CP_b| = |AP_b|$, and therefore the equation above holds, which means all three perpendicular bisectors intersect in the same point. diff --git a/wiki/wikipedia/2692.txt b/wiki/wikipedia/2692.txt deleted file mode 100644 index a47639a9c52911ee2bc873c8914c2702037ac633..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2692.txt +++ /dev/null @@ -1,275 +0,0 @@ -In linear algebra and functional analysis, the min-max theorem, or variational theorem, or Courant-Fischer-Weyl min-max principle, is a result that gives a variational characterization of eigenvalues of compact Hermitian operators on Hilbert spaces. It can be viewed as the starting point of many results of similar nature. - -This article first discusses the finite-dimensional case and its applications before considering compact operators on infinite-dimensional Hilbert spaces. - -We will see that for compact operators, the proof of the main theorem uses essentially the same idea as the finite-dimensional argument. - -In the case that the operator is non-Hermitian, the theorem provides an equivalent characterization of the associated singular values. - -The min-max theorem can be extended to self-adjoint operators that are bounded below. - -Let A be an n × n Hermitian matrix. As with many other variational results on eigenvalues, one considers the Rayleigh-Ritz quotient $R_A : \mathbf{C}^n\backslash\{0\} \to \mathbf{R}$ defined by -$$ -R_A(x) = \frac{(Ax, x)}{(x,x)} -$$ - -where (⋅, ⋅) denotes the Euclidean inner product on $\mathbf{C}^n$. - -Clearly, the Rayleigh quotient of an eigenvector is its associated eigenvalue. Equivalently, the Rayleigh-Ritz quotient can be replaced by -$$ -f(x) = (Ax, x), \|x\| = 1. -$$ - -For Hermitian matrices A, the range of the continuous function $R_A(x)$, or $f(x)$, is a compact subset [a, b] of the real line. The maximum b and the minimum a are the largest and smallest eigenvalue of A, respectively. The min-max theorem is a refinement of this fact. - -Let A be an n × n Hermitian matrix with eigenvalues $\lambda_1 \leq \cdots \leq \lambda_k \leq \cdots \leq \lambda_n$; then -$$ -\lambda_k = \min_U \{ \max_x \{ R_A(x) \mid x \in U \text{ and } x \neq 0 \} \mid \dim(U)=k \} -$$ - -and -$$ -\lambda_k = \max_U \{ \min_x \{ R_A(x) \mid x \in U \text{ and } x \neq 0 \} \mid \dim(U)=n-k+1 \}. -$$ - -In particular, -$$ -\lambda_1 \leq R_A(x) \leq \lambda_n \quad\forall x \in \mathbf{C}^n\backslash\{0\} -$$ - -and these bounds are attained when x is an eigenvector of the appropriate eigenvalues.
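The theorem is easy to illustrate numerically before turning to the simpler extremal formulations below. A minimal Python/NumPy sketch (ours, not from the original article): for a random Hermitian matrix, the Rayleigh quotient stays within [λ1, λn], and on the particular subspace U = span{u1, ..., uk} it is bounded by λk and attains that value at uk, as the min-max characterization predicts.

import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (M + M.conj().T) / 2            # a random Hermitian matrix
evals, evecs = np.linalg.eigh(A)    # eigenvalues in increasing order

def rayleigh(x):
    # R_A(x) = (Ax, x) / (x, x); real for Hermitian A
    return float(np.real(np.vdot(x, A @ x) / np.vdot(x, x)))

for _ in range(1000):               # lambda_1 <= R_A(x) <= lambda_n
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    assert evals[0] - 1e-9 <= rayleigh(x) <= evals[-1] + 1e-9

U = evecs[:, :k]                    # U = span{u_1, ..., u_k}
for _ in range(1000):               # on U, R_A is at most lambda_k ...
    c = rng.standard_normal(k) + 1j * rng.standard_normal(k)
    assert rayleigh(U @ c) <= evals[k - 1] + 1e-9
assert np.isclose(rayleigh(evecs[:, k - 1]), evals[k - 1])  # ... attained at u_k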
- -Also the simpler formulation for the maximal eigenvalue λn is given by: -$$ - \lambda_n = \max \{R_A(x) : x \neq 0 \}. -$$ - -Similarly, the minimal eigenvalue λ1 is given by: -$$ - \lambda_1 = \min \{R_A(x) : x \neq 0 \}. -$$ - -{{Math proof|drop=hidden|proof= - -Since the matrix A is Hermitian it is diagonalizable and we can choose an orthonormal basis of eigenvectors {u1, ..., un} that is, ui is an eigenvector for the eigenvalue λi and such that (ui, ui) = 1 and (ui, uj) = 0 for all i ≠ j. - -If U is a subspace of dimension k then its intersection with the subspace span{uk, ..., un} isn't zero (by simply checking dimensions) and hence there exists a vector v ≠ 0 in this intersection that we can write as -$$ -v = \sum_{i=k}^n \alpha_i u_i -$$ - -and whose Rayleigh quotient is -$$ -R_A(v) = \frac{\sum_{i=k}^n \lambda_i \alpha_i^2}{\sum_{i=k}^n \alpha_i^2} \geq \lambda_k -$$ - -(as all $\lambda_i \geq \lambda_k$ for i=k,..,n) - -and hence -$$ -\max \{ R_A(x) \mid x \in U \} \geq \lambda_k -$$ - -Since this is true for all U, we can conclude that -$$ -\min \{ \max \{ R_A(x) \mid x \in U \text{ and } x \neq 0 \} \mid \dim(U)=k \} \geq \lambda_k -$$ - -This is one inequality. To establish the other inequality, chose the specific k-dimensional space - -V = span{u1, ..., uk} , for which -$$ - \max \{ R_A(x) \mid x \in V \text{ and } x \neq 0 \} \leq \lambda_k -$$ - -because $\lambda_k$ is the largest eigenvalue in V. Therefore, also -$$ -\min \{ \max \{ R_A(x) \mid x \in U \text{ and } x \neq 0 \} \mid \dim(U)=k \} \leq \lambda_k -$$ - -In the case where U is a subspace of dimension n-k+1, we proceed in a similar fashion: Consider the subspace of dimension k, span{u1, ..., uk}. - -Its intersection with the subspace U isn't zero (by simply checking dimensions) and hence there exists a vector v in this intersection that we can write as -$$ -v = \sum_{i=1}^k \alpha_i u_i -$$ - -and whose Rayleigh quotient is -$$ -R_A(v) = \frac{\sum_{i=1}^k \lambda_i \alpha_i^2}{\sum_{i=1}^k \alpha_i^2} \leq \lambda_k -$$ - -and hence -$$ -\min \{ R_A(x) \mid x \in U \} \leq \lambda_k -$$ - -Since this is true for all U, we can conclude that -$$ -\max \{ \min \{ R_A(x) \mid x \in U \text{ and } x \neq 0 \} \mid \dim(U)=n-k+1 \} \leq \lambda_k -$$ - -Again, this is one part of the equation. - -To get the other inequality, note again that the eigenvector u of -$$ -\lambda_k -$$ is contained in U = span{uk, ..., un} so that we can conclude the equality. - -}} - -Let N be the nilpotent matrix -$$ -\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}. -$$ - -Define the Rayleigh quotient $ R_N(x) $ exactly as above in the Hermitian case. Then it is easy to see that the only eigenvalue of N is zero, while the maximum value of the Rayleigh ratio is 1/2. That is, the maximum value of the Rayleigh quotient is larger than the maximum eigenvalue. - -The singular values {σk} of a square matrix M are the square roots of the eigenvalues of M*M (equivalently MM*). An immediate consequence of the first equality in the min-max theorem is: -$$ -\sigma_k^{\uparrow} = \min_{S:\dim(S)=k} \max_{x \in S, \|x\| = 1} (M^* Mx, x)^{\frac{1}{2}}=\min_{S:\dim(S)=k} \max_{x \in S, \|x\| = 1} \| Mx \|. -$$ - -Similarly, -$$ -\sigma_k^{\uparrow} = \max_{S:\dim(S)=n-k+1} \min_{x \in S, \|x\| = 1} \| Mx \|. -$$ - -Here $\sigma_k=\sigma_k^\uparrow$ denotes the kth entry in the increasing sequence of σ's, so that $\sigma_1\leq\sigma_2\leq\cdots $. - -Let A be a symmetric n × n matrix. 
The m × m matrix B, where m ≤ n, is called a compression of A if there exists an orthogonal projection P onto a subspace of dimension m such that PAP* = B. The Cauchy interlacing theorem states: - -Theorem. If the eigenvalues of A are α1 ≤ ... ≤ αn, and those of B are β1 ≤ ... ≤ βj ≤ ... ≤ βm, then for all j ≤ m, -$$ -\alpha_j \leq \beta_j \leq \alpha_{n-m+j}. -$$ - -This can be proven using the min-max principle. Let βi have corresponding eigenvector bi and Sj be the j dimensional subspace Sj = span{b1, ..., bj}, then - -\beta_j = \max_{x \in S_j, \|x\| = 1} (Bx, x) = \max_{x \in S_j, \|x\| = 1} (PAP^*x, x) \geq \min_{S_j} \max_{x \in - -S_j, \|x\| = 1} (A(P^*x), P^*x) = \alpha_j. - -According to first part of min-max, αj ≤ βj. On the other hand, if we define Sm−j+1 = span{bj, ..., bm}, then -$$ -\beta_j = \min_{x \in S_{m-j+1}, \|x\| = 1} (Bx, x) = \min_{x \in S_{m-j+1}, \|x\| = 1} (PAP^*x, x)= \min_{x \in S_{m-j+1}, \|x\| = 1} (A(P^*x), P^*x) \leq \alpha_{n-m+j}, -$$ - -where the last inequality is given by the second part of min-max. - -When n − m = 1, we have αj ≤ βj ≤ αj+1, hence the name interlacing theorem. - -Let A be a compact, Hermitian operator on a Hilbert space H. Recall that the spectrum of such an operator (the set of eigenvalues) is a set of real numbers whose only possible cluster point is zero. - -It is thus convenient to list the positive eigenvalues of A as -$$ -\cdots \le \lambda_k \le \cdots \le \lambda_1, -$$ - -where entries are repeated with multiplicity, as in the matrix case. (To emphasize that the sequence is decreasing, we may write $\lambda_k = \lambda_k^\downarrow$.) - -When H is infinite-dimensional, the above sequence of eigenvalues is necessarily infinite. - -We now apply the same reasoning as in the matrix case. Letting Sk ⊂ H be a k dimensional subspace, we can obtain the following theorem. - -Theorem (Min-Max). Let A be a compact, self-adjoint operator on a Hilbert space H, whose positive eigenvalues are listed in decreasing order ... ≤ λk ≤ ... ≤ λ1. Then: - -\begin{align} - -\max_{S_k} \min_{x \in S_k, \|x\| = 1} (Ax,x) &= \lambda_k ^{\downarrow}, \\ - -\min_{S_{k-1}} \max_{x \in S_{k-1}^{\perp}, \|x\|=1} (Ax, x) &= \lambda_k^{\downarrow}. - -\end{align} - -A similar pair of equalities hold for negative eigenvalues. - -{{Math proof|drop=hidden|proof= - -Let S' be the closure of the linear span $S' =\operatorname{span}\{u_k,u_{k+1},\ldots\}$. - -The subspace S' has codimension k − 1. By the same dimension count argument as in the matrix case, S' ∩ Sk is non empty. So there exists x ∈ S' ∩ Sk with $\|x\|=1$. Since it is an element of S' , such an x necessarily satisfy -$$ -(Ax, x) \le \lambda_k. -$$ - -Therefore, for all Sk -$$ -\inf_{x \in S_k, \|x\| = 1}(Ax,x) \le \lambda_k -$$ - -But A is compact, therefore the function f(x) = (Ax, x) is weakly continuous. Furthermore, any bounded set in H is weakly compact. This lets us replace the infimum by minimum: -$$ -\min_{x \in S_k, \|x\| = 1}(Ax,x) \le \lambda_k. -$$ - -So -$$ -\sup_{S_k} \min_{x \in S_k, \|x\| = 1}(Ax,x) \le \lambda_k. -$$ - -Because equality is achieved when $S_k=\operatorname{span}\{u_1,\ldots,u_k\}$, -$$ -\max_{S_k} \min_{x \in S_k, \|x\| = 1}(Ax,x) = \lambda_k. -$$ - -This is the first part of min-max theorem for compact self-adjoint operators. - -Analogously, consider now a (k − 1)-dimensional subspace Sk−1, whose the orthogonal complement is denoted by Sk−1. If S' = span{u1...uk}, -$$ -S' \cap S_{k-1}^{\perp} \ne {0}. 
-$$ - -So -$$ -\exists x \in S_{k-1}^{\perp} \|x\| = 1, (Ax, x) \ge \lambda_k. -$$ - -This implies -$$ -\max_{x \in S_{k-1}^{\perp}, \|x\| = 1} (Ax, x) \ge \lambda_k -$$ - -where the compactness of A was applied. Index the above by the collection of k-1-dimensional subspaces gives -$$ -\inf_{S_{k-1}} \max_{x \in S_{k-1}^{\perp}, \|x\|=1} (Ax, x) \ge \lambda_k. -$$ - -Pick Sk−1 = span{u1, ..., uk−1} and we deduce -$$ -\min_{S_{k-1}} \max_{x \in S_{k-1}^{\perp}, \|x\|=1} (Ax, x) = \lambda_k. -$$ - -}} - -The min-max theorem also applies to (possibly unbounded) self-adjoint operators. Recall the essential spectrum is the spectrum without isolated eigenvalues of finite multiplicity. - -Sometimes we have some eigenvalues below the essential spectrum, and we would like to approximate the eigenvalues and eigenfunctions. - -Theorem (Min-Max). Let A be self-adjoint, and let $E_1\le E_2\le E_3\le\cdots$ be the eigenvalues of A below the essential spectrum. Then -$$ -E_n=\min_{\psi_1,\ldots,\psi_{n}}\max\{\langle\psi,A\psi\rangle:\psi\in\operatorname{span}(\psi_1,\ldots,\psi_{n}), \| \psi \| = 1\} -$$. - -If we only have N eigenvalues and hence run out of eigenvalues, then we let $E_n:=\inf\sigma_{ess}(A)$ (the bottom of the essential spectrum) for n>N, and the above statement holds after replacing min-max with inf-sup. - -Theorem (Max-Min). Let A be self-adjoint, and let $E_1\le E_2\le E_3\le\cdots$ be the eigenvalues of A below the essential spectrum. Then -$$ -E_n=\max_{\psi_1,\ldots,\psi_{n-1}}\min\{\langle\psi,A\psi\rangle:\psi\perp\psi_1,\ldots,\psi_{n-1}, \| \psi \| = 1\} -$$. - -If we only have N eigenvalues and hence run out of eigenvalues, then we let $E_n:=\inf\sigma_{ess}(A)$ (the bottom of the essential spectrum) for n > N, and the above statement holds after replacing max-min with sup-inf. - -The proofs use the following results about self-adjoint operators: - -Theorem. Let A be self-adjoint. Then $(A-E)\ge0$ for $E\in\mathbb{R}$ if and only if $\sigma(A)\subseteq[E,\infty)$. - -Theorem. If A is self-adjoint, then -$$ -\inf\sigma(A)=\inf_{\psi\in\mathfrak{D}(A),\|\psi\|=1}\langle\psi,A\psi\rangle -$$ - -and -$$ -\sup\sigma(A)=\sup_{\psi\in\mathfrak{D}(A),\|\psi\|=1}\langle\psi,A\psi\rangle -$$. diff --git a/wiki/wikipedia/2693.txt b/wiki/wikipedia/2693.txt deleted file mode 100644 index 4508dff306e77698b7c8eee0cf6527d4aa4dbd5f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2693.txt +++ /dev/null @@ -1,147 +0,0 @@ -The Wolfram Language ( ) is a general multi-paradigm programming language developed by Wolfram Research. It emphasizes symbolic computation, functional programming, and rule-based programming and can employ arbitrary structures and data. - -The Wolfram Language was a part of the initial version of Mathematica in 1988. - -Symbolic aspects of the engine make it a computer algebra system. The language can perform integration, differentiation, matrix manipulations, and solve differential equations using a set of rules. Also in 1988 was the notebook model and the ability to embed sound and images, according to Theodore Gray's patent. - -An online frontend for the language, WolframAlpha, was released in 2009. Wolfram implemented this website by translating natural language statements into Wolfram-language queries that link to its database. The work leading to Wolfram Alpha also means that Wolfram's implementation of the language now has built-in access to a knowledge-base as well as natural language processing functions. 
- -Wolfram also added features for more complex tasks, such as 3D modeling. - -A name was finally adopted for the language in 2013, as Wolfram Research decided to make a version of the language engine free for Raspberry Pi users, and they needed to come up with a name for it. It was included in the recommended software bundle that the Raspberry Pi Foundation provides for beginners, which caused some controversy due to the Wolfram language's proprietary nature. Plans to port the Wolfram language to the Intel Edison were announced after the board's introduction at CES 2014 but was never released. In 2019, a link was added to make Wolfram libraries compatible with the Unity game engine, giving game developers access to the language's high level functions. - -The Wolfram Language syntax is overall similar to the M-expression of 1960s LISP, with support for infix operators and "function-notation" function calls. - -The Wolfram language writes basic arithmetic expressions using infix operators. - - - -(* This is a comment. *) - -4 + 3 - -(* = 7 *) - -1 + 2 * (3 + 4) - -(* = 15 *) - -(* Note that Multiplication can be omitted: 1 + 2 (3 + 4) *) - -(* Divisions return rational numbers: *) - -3 / 2 - -(* = 3/2 *) - - - -Function calls are denoted with square brackets: - - - -Sin[Pi] - -(* = 0 *) - -(* This is the function to convert rationals to floating point: *) - -N[3 / 2] - -(* = 1.5 *) - - - -Lists are enclosed in curly brackets: - - - -Oddlist={1,3,5} - -(* = {1,3,5} *) - - - -The language may deviate from the M-expression paradigm when an alternative, more human-friendly way of showing an expression is available: - -* A number of formatting rules are used in this language, including for typeset expressions and for language input. - -* Functions can also be applied using the prefix expression and the postfix expression . - -* Derivatives can be denoted with an apostrophe . - -* The infix operators themselves are considered "sugar" for the function notation system. - -A formatter desugars the input: - - - -FullForm[1+2] - -(* = Plus[1, 2] *) - - - -Currying is supported. - -Functions in the Wolfram Language are effectively a case of simple patterns for replacement: - - - -F[x_] := x ^ 0 - - - -The is a "SetDelayed operator", so that the x is not immediately looked for. is syntax sugar for , i.e. a "blank" for any value to replace x in the rest of the evaluation. - -An iteration of bubble sort is expressed as: - - - -sortRule := {x___,y_,z_,k___} /; y>z -> {x,z,y,k} - -(* Rule[Condition[List[PatternSequence[x, BlankNullSequence[]], Pattern[y, Blank[]], Pattern[z, Blank[]], PatternSequence[k, BlankNullSequence[]]], Greater[y, z]], List[x, z, y, k]] *) - - - -The operator is "condition", so that the rule only applies when . The three underscores are a syntax for a , for a sequence that can be null. - -A ReplaceRepeated operator can be used to apply this rule repeatedly, until no more change happens: - - - -{ 9, 5, 3, 1, 2, 4 } //. sortRule - -(* = ReplaceRepeated[{ 9, 5, 3, 1, 2, 4 }, sortRule] *) - -(* = {1, 2, 3, 4, 5, 9} *) - - - -The pattern matching system also easily gives rise to rule-based integration and derivation. The following are excerpts from the Rubi package of rules: - - - -(* Reciprocal rule *) - -Int[1/x_,x_Symbol] := - -Log[x]; - -(* Power rule *) - -Int[x_^m_.,x_Symbol] := - -x^(m+1)/(m+1) /; - -FreeQ[m,x] && NeQ[m,-1] - - - -The official, and reference, implementation of the Wolfram Language lies in Mathematica and associated online services. These are closed source. 
Wolfram Research has, however, released a C++ parser of the language under the open source MIT License. The reference book is open access. - -In the over three-decade-long existence of the Wolfram language, a number of open source third party implementations have also been developed. Richard Fateman's MockMMA from 1991 is of historical note, both for being the earliest reimplementation and for having received a cease-and-desist from Wolfram. Modern ones still being maintained include Symja in Java, expreduce in Golang, and the SymPy-based Mathics. These implementations focus on the core language and the computer algebra system that it implies, not on the online "knowledgebase" features of Wolfram. - -In 2019, Wolfram Research released a freeware Wolfram Engine, to be used as a programming library in non-commercial software. - -The language was officially named in June 2013 although, as the backend of the computing system Mathematica, it has been in use in various forms for over 30 years since Mathematica's initial release. diff --git a/wiki/wikipedia/2694.txt b/wiki/wikipedia/2694.txt deleted file mode 100644 index 1b342dd74d7ff7d51ad7e6663669ec1f6d04e11f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2694.txt +++ /dev/null @@ -1,17 +0,0 @@ -In theoretical physics, specifically quantum field theory, C-theorem states that there exists a positive real function, $C(g^{}_i,\mu)$, depending on the coupling constants of the quantum field theory considered, $g^{}_i$, and on the energy scale, $\mu^{}_{}$, which has the following properties: - -*$C(g^{}_i,\mu)$ decreases monotonically under the renormalization group (RG) flow. - -*At fixed points of the RG flow, which are specified by a set of fixed-point couplings $g^*_i$, the function $C(g^*_i,\mu)=C_*$ is a constant, independent of energy scale. - -The theorem formalizes the notion that theories at high energies have more degrees of freedom than theories at low energies and that information is lost as we flow from the former to the latter. - -Alexander Zamolodchikov proved in 1986 that two-dimensional quantum field theory always has such a C-function. Moreover, at fixed points of the RG flow, which correspond to conformal field theories, Zamolodchikov's C-function is equal to the central charge of the corresponding conformal field theory, which lends the name C to the theorem. - -John Cardy in 1988 considered the possibility to generalise C-theorem to higher-dimensional quantum field theory. He conjectured that in four spacetime dimensions, the quantity behaving monotonically under renormalization group flows, and thus playing the role analogous to the central charge c in two dimensions, is a certain anomaly coefficient which came to be denoted as a. - -For this reason, the analog of the C-theorem in four dimensions is called the A-theorem. - -In perturbation theory, that is for renormalization flows which do not deviate much from free theories, the A-theorem in four dimensions was proved by Hugh Osborn using the local renormalization group equation. However, the problem of finding a proof valid beyond perturbation theory remained open for many years. - -In 2011, Zohar Komargodski and Adam Schwimmer of the Weizmann Institute of Science proposed a nonperturbative proof for the A-theorem, which has gained acceptance. (Still, simultaneous monotonic and cyclic (limit cycle) or even chaotic RG flows are compatible with such flow functions when multivalued in the couplings, as evinced in specific systems.) 
RG flows of theories in 4 dimensions, and the question of whether scale invariance implies conformal invariance, remain a field of active research, and not all questions are settled. diff --git a/wiki/wikipedia/2695.txt b/wiki/wikipedia/2695.txt deleted file mode 100644 index 9ae97111d47733c969d9f0b8e0126420f1ef09bc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2695.txt +++ /dev/null @@ -1,53 +0,0 @@ -János Pach (born May 3, 1954) is a Hungarian mathematician working in combinatorics and discrete and computational geometry. - -He was Research Professor at the Courant Institute of Mathematical Sciences at NYU (since 1986), Distinguished Professor of Computer Science at City College, CUNY (1992-2011), and Neilson Professor at Smith College (2008-2009). - -Between 2008 and 2019, he was Professor of the Chair of Combinatorial Geometry at École Polytechnique Fédérale de Lausanne. - -He was the program chair for the International Symposium on Graph Drawing in 2004 and the Symposium on Computational Geometry in 2015. With Kenneth L. Clarkson and Günter Ziegler, he is co-editor-in-chief of the journal Discrete and Computational Geometry, and he serves on the editorial boards of several other journals including Combinatorica, SIAM Journal on Discrete Mathematics, Computational Geometry, Graphs and Combinatorics, Central European Journal of Mathematics, and Moscow Journal of Combinatorics and Number Theory. - -He was an invited speaker at the Combinatorics session of the International Congress of Mathematicians in Seoul, 2014. - -Pach has authored several books and over 300 research papers. He was one of the most frequent collaborators of Paul Erdős, authoring over 20 papers with him, and thus has an Erdős number of one. - -Pach's research is focused on combinatorics and discrete geometry. - -In 1981, he solved Ulam's problem, showing that there exists no universal planar graph. - -In the early 1990s, together with Micha Perles, he initiated the systematic study of extremal problems on topological and geometric graphs. - -Some of Pach's most-cited research work concerns the combinatorial complexity of families of curves in the plane and their applications to motion planning problems, the maximum number of k-sets and halving lines that a planar point set may have, crossing numbers of graphs, embedding of planar graphs onto fixed sets of points, and lower bounds for epsilon-nets. - -Pach received the Grünwald Medal of the János Bolyai Mathematical Society (1982), the Ford Award from the Mathematical Association of America (1990), and the Alfréd Rényi Prize from the Hungarian Academy of Sciences (1992). He was an Erdős Lecturer at the Hebrew University of Jerusalem in 2005. - -In 2011 he was listed as a fellow of the Association for Computing Machinery for his research in computational geometry. - -In 2014 he was elected as a member of Academia Europaea, and in 2015 as a fellow of the American Mathematical Society "for contributions to discrete and combinatorial geometry and to convexity and combinatorics." diff --git a/wiki/wikipedia/2696.txt b/wiki/wikipedia/2696.txt deleted file mode 100644 index 5d534b95de1fb625c63b279b31e0c059b84d6e34..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2696.txt +++ /dev/null @@ -1,78 +0,0 @@ -In mathematics and logic, a direct proof is a way of showing the truth or falsehood of a given statement by a straightforward combination of established facts, usually axioms, existing lemmas and theorems, without making any further assumptions.
In order to directly prove a conditional statement of the form "If p, then q", it suffices to consider the situations in which the statement p is true. Logical deduction is employed to reason from assumptions to conclusion. The type of logic employed is almost invariably first-order logic, employing the quantifiers for all and there exists. Common proof rules used are modus ponens and universal instantiation. - -In contrast, an indirect proof may begin with certain hypothetical scenarios and then proceed to eliminate the uncertainties in each of these scenarios until an inescapable conclusion is forced. For example, instead of showing directly p ⇒ q, one proves its contrapositive ~q ⇒ ~p (one assumes ~q and shows that it leads to ~p). Since p ⇒ q and ~q ⇒ ~p are equivalent by the principle of transposition (see law of excluded middle), p ⇒ q is indirectly proved. Proof methods that are not direct include proof by contradiction, including proof by infinite descent. Direct proof methods include proof by exhaustion and proof by induction. - -A direct proof is the simplest form of proof there is. The word ‘proof’ comes from the Latin word probare, which means “to test”. The earliest use of proofs was prominent in legal proceedings. A person with authority, such as a nobleman, was said to have probity, meaning that evidence carried weight by his relative authority, which outweighed empirical testimony. Historically, mathematics and proof were often intertwined with practical questions, with civilizations such as the Egyptians and the Greeks showing an interest in surveying land. This led to a natural curiosity with regard to geometry and trigonometry – particularly triangles and rectangles. These shapes raised the most practical questions, so early geometrical concepts focused on them; buildings and pyramids, for example, used these shapes in abundance. Another shape crucial in the history of direct proof is the circle, which was essential to the design of arenas and water tanks; accordingly, ancient (and Euclidean) geometry treated circles as well. - -The earliest form of mathematics was phenomenological. For example, if someone could draw a reasonable picture, or give a convincing description, then that met all the criteria for something to be described as a mathematical “fact”. On occasion, arguments proceeded by analogy or even by “invoking the gods”. The idea that mathematical statements could be proven had not been developed yet, so these were the earliest forms of the concept of proof, despite not being actual proof at all. - -Proof as we know it came about with one specific question: “what is a proof?” Traditionally, a proof is a platform which convinces someone beyond reasonable doubt that a statement is mathematically true. Naturally, one would assume that the best way to prove the truth of a new statement (B) is to compare it with an old statement (A) that has already been proven true. Thus arose the concept of deriving a new result from an old result. - -Consider two even integers x and y. Since they are even, they can be written as -$$ - x =2a -$$ -$$ - y=2b -$$ - -respectively for integers a and b. - -Then the sum can be written as -$$ - x+y = 2a + 2b = 2(a+b)=2p -$$ where $p=a+b$; here $p$, $a$ and $b$ are all integers. - -It follows that x + y has 2 as a factor and therefore is even, so the sum of any two even integers is even.
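Direct proofs of this kind are exactly what formal proof assistants check mechanically. As a small aside (ours, not part of the original article), the even-sum argument above can be rendered in Lean 3, assuming the community mathlib tactic library for the final ring step:

import tactic

theorem even_add_even (x y : ℤ) (hx : ∃ a, x = 2 * a) (hy : ∃ b, y = 2 * b) :
  ∃ p, x + y = 2 * p :=
begin
  cases hx with a ha,  -- x = 2 * a
  cases hy with b hb,  -- y = 2 * b
  use a + b,           -- the witness p = a + b, as in the text
  rw [ha, hb],         -- goal becomes 2 * a + 2 * b = 2 * (a + b)
  ring,
end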
Observe that we have four right-angled triangles and a square packed into a large square. Each of the triangles has sides a and b and hypotenuse c. The area of a square is defined as the square of the length of its side - in this case, $(a + b)^2$. However, the area of the large square can also be expressed as the sum of the areas of its components. In this case, that would be the sum of the areas of the four triangles and the small square in the middle. - -We know that the area of the large square is equal to $(a + b)^2$. - -The area of a triangle is equal to $ \frac12ab. $ - -We know that the area of the large square is also equal to the sum of the areas of the triangles, plus the area of the small square, and thus the area of the large square equals $ 4(\frac 12 ab) + c^2.$ - -These are equal, and so -$$ - (a + b)^2 = 4(\frac 12 ab) + c^2 . -$$ - -After some simplifying, -$$ - a^2 + 2ab + b^2 = 2ab + c^2 . -$$ - -Removing the 2ab that appears on both sides gives -$$ - a^2 + b^2 = c^2 , -$$ - -which proves Pythagoras' theorem. ∎ - -By definition, if n is an odd integer, it can be expressed as -$$ - n = 2k + 1 -$$ - -for some integer k. Thus - -\begin{align} - -n^2 &= (2k + 1)^2\\ - -&= (2k + 1)(2k + 1)\\ - -&=4k^2 + 2k + 2k + 1\\ - -&=4k^2 + 4k + 1\\ - -&=2(2k^2 + 2k) + 1. - -\end{align} - -Since $2k^2 + 2k$ is an integer, $n^2$ is also odd. ∎ diff --git a/wiki/wikipedia/2697.txt b/wiki/wikipedia/2697.txt deleted file mode 100644 index 4db5f629cee62a55d6ddb5c039d699b2505e056e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2697.txt +++ /dev/null @@ -1,95 +0,0 @@ -Transaction processing is a way of computing that divides work into individual, indivisible operations, called transactions. A transaction processing system (TPS) is a software system, or software/hardware combination, that supports transaction processing. - -The first transaction processing system was SABRE, made by IBM for American Airlines, which became operational in 1964. Designed to process up to 83,000 transactions a day, the system ran on two IBM 7090 computers. SABRE was migrated to IBM System/360 computers in 1972, and became an IBM product first as Airline Control Program (ACP) and later as Transaction Processing Facility (TPF). In addition to airlines, TPF is used by large banks, credit card companies, and hotel chains. - -The Hewlett-Packard NonStop system (formerly Tandem NonStop) was a hardware and software system designed for Online Transaction Processing (OLTP), introduced in 1976. It was designed for transaction processing and provided an extreme level of availability and data integrity. - -Notable transaction processing systems include: - -* IBM Transaction Processing Facility (TPF) – 1960. Unlike most other transaction processing systems, TPF is a dedicated operating system for transaction processing on IBM System z mainframes. Originally Airline Control Program (ACP). - -* IBM Information Management System (IMS) – 1966. A joint hierarchical database and information management system with extensive transaction processing capabilities. Runs on OS/360 and successors. - -* IBM Customer Information Control System (CICS) – 1969. A transaction manager designed for rapid, high-volume online processing, CICS originally used standard system datasets, but now has a connection to IBM's DB/2 relational database system. Runs on OS/360 and successors and DOS/360 and successors, IBM AIX, VM, and OS/2. Non-mainframe versions are called TXSeries. - -* Tuxedo – 1980s.
Transactions for Unix, Extended for Distributed Operations developed by AT&T Corporation, now owned by Oracle Corporation. Tuxedo is a cross-platform TPS. - -* UNIVAC Transaction Interface Package (TIP) – 1970s. A transaction processing monitor for UNIVAC 1100/2200 series computers. - -* Burroughs Corporation supported transaction processing capabilities in its MCP operating systems using GEMCOS (Generalized Message Control System of 1976). As of 2012 UNISYS ClearPath Enterprise Servers include Transaction Server, "an extremely flexible, high-performance message and application control system." - -* Digital Equipment Corporation (DEC) Application Control and Management System (ACMS) – 1985. "Provides an environment for creating and controlling online transaction processing (OLTP) applications on the VMS operating system." Runs on VAX/VMS systems. - -* Digital Equipment Corporation (DEC) Message Control System (MCS-10) for PDP-10 TOPS-10 systems. - -* Honeywell Multics Transaction Processing. Feature (TP) – 1979. - -* Transaction Management eXecutive (TMX) was NCR Corporation's proprietary transaction processing system running on NCR Tower 5000-series systems. This system was used mainly by financial institutions in the 1980s and 1990s. - -* Hewlett-Packard NonStop system – 1976. NonStop is an integrated hardware and software system specifically designed for transaction processing. Originally from Tandem Computers. - -* Transarc Encina – 1991. Transarc was purchased by IBM in 1994. Encina was discontinued as a product and folded into IBM's TXSeries. Encina support was discontinued in 2006. - -Transaction processing is distinct from and can be contrasted with other computer processing models, such as batch processing, time-sharing, and real-time processing. - -Batch processing is execution of a series of programs (jobs) on a computer without manual intervention. Several transactions, called a batch are collected and processed at the same time. The results of each transaction are not immediately available when the transaction is being entered; there is a time delay. - -"Real time systems attempt to guarantee an appropriate response to a stimulus or request quickly enough to affect the conditions that caused the stimulus." - -Each transaction in realtime processing is unique; it is not part of a group of transactions. - -A Transaction Processing System (TPS) is a type of information system that collects, stores, modifies and retrieves the data transactions of an enterprise. - -Transaction processing systems also attempt to provide predictable response times to requests, although this is not as critical as for real-time systems. Rather than allowing the user to run arbitrary programs as time-sharing, transaction processing allows only predefined, structured transactions. Each transaction is usually short duration and the processing activity for each transaction is programmed in advance. - -The following features are considered important in evaluating transaction processing systems. - -Fast performance with a rapid response time is critical. Transaction processing systems are usually measured by the number of transactions they can process in a given period of time. - -The system must be available during the time period when the users are entering transactions. Many organizations rely heavily on their TPS; a breakdown will disrupt operations or even stop the business. - -The system must be able to handle hardware or software problems without corrupting data. 
Multiple users must be protected from attempting to change the same piece of data at the same time, for example two operators cannot sell the same seat on an airplane. - -Often users of transaction processing systems are casual users. The system should be simple for them to understand, protect them from data-entry errors as much as possible, and allow them to easily correct their errors. - -The system should be capable of growth at incremental costs, rather than requiring a complete replacement. It should be possible to add, replace, or update hardware and software components without shutting down the system. - -Transactions may be collected and processed as in batch processing. Transactions will be collected and later updated as a batch when it's convenient or economical to process them. Historically, this was the most common method as the information technology did not exist to allow real-time processing. - -This is the immediate processing of data. It provides instant confirmation of a transaction. It may involve a large number of users who are simultaneously performing transactions which change data. Because of advances in technology (such as the increase in the speed of data transmission and larger bandwidth), real-time updating is possible. - -A database is an organized collection of data. Databases offer fast retrieval times for non-structured requests as in a typical transaction processing application. - -Databases for transaction processing may be constructed using hierarchical, network, or relational structures. - -* Hierarchical structure: organizes data in a series of levels. Its top-to-bottom-like structure consists of nodes and branches; each child node has branches and is only linked to one higher level parent node. - -* Network structure: network structures also organizes data using nodes and branches. But, unlike hierarchical, each child node can be linked to multiple, higher parent nodes. - -* Relational structure: a relational database organizes its data in a series of related tables. This gives flexibility as relationships between the tables are built. - -The following features are desirable in a database system used in transaction processing systems: - -* Good data placement: The database should be designed to access patterns of data from many simultaneous users. - -* Short transactions: Short transactions enables quick processing. This avoids concurrency and paces the systems. - -* Real-time backup: Backup should be scheduled between low times of activity to prevent lag of the server. - -* High normalization: This lowers redundant information to increase the speed and improve concurrency, this also improves backups. - -* Archiving of historical data: Uncommonly used data are moved into other databases or backed up tables. This keeps tables small and also improves backup times. - -* Good hardware configuration: Hardware must be able to handle many users and provide quick response times. - -Since business organizations have become very dependent on transaction processing, a breakdown may disrupt the business' regular routine and stop its operation for a certain amount of time. In order to prevent data loss and minimize disruptions there have to be well-designed backup and recovery procedures. The recovery process can rebuild the system when it goes down. 
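The seat-selling constraint above comes down to atomic transactions: each booking either commits as a whole or has no effect. A minimal Python sketch (ours, using the standard-library sqlite3 module; the schema is invented for the example):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE seats (seat TEXT PRIMARY KEY, passenger TEXT)")
conn.execute("INSERT INTO seats VALUES ('12A', NULL)")
conn.commit()

def book(passenger, seat):
    # 'with conn' opens a transaction: it commits on success and
    # rolls back on error, so no partial update ever survives.
    try:
        with conn:
            cur = conn.execute(
                "UPDATE seats SET passenger = ? WHERE seat = ? AND passenger IS NULL",
                (passenger, seat))
            if cur.rowcount == 0:
                raise RuntimeError("seat already taken")
    except RuntimeError:
        return False
    return True

print(book("Alice", "12A"))  # True
print(book("Bob", "12A"))    # False: the conflicting update is rolled back

Backup procedures, described next, protect against failures that atomicity alone cannot.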
There are two main types of back-up procedures: grandfather-father-son and partial backups. - -Grandfather-father-son: this procedure involves taking complete backups of all data at regular intervals (daily, weekly, monthly, or whatever is appropriate). Multiple generations of backup are retained, often three, which gives rise to the name. The most recent backup is the son, the previous the father, and the oldest backup is the grandfather. This method is commonly used for a batch transaction processing system with a magnetic tape. If the system fails during a batch run, the master file is recreated by restoring the son backup and then restarting the batch. However, if the son backup fails, is corrupted or destroyed, then the previous generation of backup (the father) is used. Likewise, if that fails, then the generation of backup previous to the father (i.e. the grandfather) is required. Of course, the older the generation, the more the data may be out of date. - -Partial backups record only the data that have changed since the last backup. For example, a full backup could be performed weekly, and partial backups taken nightly. Recovery using this scheme involves restoring the last full backup and then restoring all partial backups in order to produce an up-to-date database. This process is quicker than taking only complete backups, at the expense of longer recovery time. - -Benefits of transaction processing include: - -* Batch or real-time processing available. - -* Reduction in processing time, lead time and order cycle time. - -* Reduction in inventory, personnel and ordering costs. - -* Increase in productivity and customer satisfaction. diff --git a/wiki/wikipedia/2698.txt b/wiki/wikipedia/2698.txt deleted file mode 100644 index baf5d32967d1183454a106c20b0ab9d22c339aff..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2698.txt +++ /dev/null @@ -1,346 +0,0 @@ -A set of dice is intransitive (or nontransitive) if it contains three dice, A, B, and C, with the property that A rolls higher than B more than half the time, and B rolls higher than C more than half the time, but it is not true that A rolls higher than C more than half the time. In other words, a set of dice is intransitive if the binary relation – X rolls a higher number than Y more than half the time – on its elements is not transitive. More simply, A normally beats B, B normally beats C, but A does not normally beat C. - -It is possible to find sets of dice with the even stronger property that, for each die in the set, there is another die that rolls a higher number than it more than half the time. This is stronger in that instead of merely "A does not normally beat C" we now have "C normally beats A". Using such a set of dice, one can invent games which are biased in ways that people unused to intransitive dice might not expect (see Example). - -Consider the following set of dice. - -* Die A has sides 2, 2, 4, 4, 9, 9. - -* Die B has sides 1, 1, 6, 6, 8, 8. - -* Die C has sides 3, 3, 5, 5, 7, 7. - -The probability that A rolls a higher number than B, the probability that B rolls higher than C, and the probability that C rolls higher than A are all 5/9, so this set of dice is intransitive (a brute-force check of these probabilities appears after the game description below). In fact, it has the even stronger property that, for each die in the set, there is another die that rolls a higher number than it more than half the time. - -Now, consider the following game, which is played with a set of dice. - -# The first player chooses a die from the set. - -# The second player chooses one die from the remaining dice. - -# Both players roll their die; the player who rolls the higher number wins.
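The 5/9 probabilities quoted above can be verified by brute force over all 36 ordered face pairs. A short Python check (ours, not part of the original article):

from itertools import product

A = [2, 2, 4, 4, 9, 9]
B = [1, 1, 6, 6, 8, 8]
C = [3, 3, 5, 5, 7, 7]

def p_beats(x, y):
    # exact probability that die x rolls a higher number than die y
    return sum(1 for a, b in product(x, y) if a > b) / 36

print(p_beats(A, B), p_beats(B, C), p_beats(C, A))  # each 5/9 ≈ 0.5556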
- -If this game is played with a transitive set of dice, it is either fair or biased in favor of the first player, because the first player can always find a die that will not be beaten by any other dice more than half the time. If it is played with the set of dice described above, however, the game is biased in favor of the second player, because the second player can always find a die that will beat the first player's die with probability 5/9. The following tables show all possible outcomes for all three pairs of dice. - -Though the three intransitive dice A, B, C (first set of dice) - -* A: 2, 2, 6, 6, 7, 7 - -* B: 1, 1, 5, 5, 9, 9 - -* C: 3, 3, 4, 4, 8, 8 - -P(A > B) = P(B > C) = P(C > A) = 5/9 - -and the three intransitive dice A′, B′, C′ (second set of dice) - -* A′: 2, 2, 4, 4, 9, 9 - -* B′: 1, 1, 6, 6, 8, 8 - -* C′: 3, 3, 5, 5, 7, 7 - -P(A′ > B′) = P(B′ > C′) = P(C′ > A′) = 5/9 - -win against each other with equal probability they are not equivalent. While the first set of dice (A, B, C) has a 'highest' die, the second set of dice has a 'lowest' die. Rolling the three dice of a set and always using the highest score for evaluation will show a different winning pattern for the two sets of dice. With the first set of dice, die B will win with the highest probability (11/27) and dice A and C will each win with a probability of 8/27. With the second set of dice, die C′ will win with the lowest probability (7/27) and dice A′ and B′ will each win with a probability of 10/27. - -Efron's dice are a set of four intransitive dice invented by Bradley Efron. - -The four dice A, B, C, D have the following numbers on their six faces: - -* A: 4, 4, 4, 4, 0, 0 - -* B: 3, 3, 3, 3, 3, 3 - -* C: 6, 6, 2, 2, 2, 2 - -* D: 5, 5, 5, 1, 1, 1 - -Each die is beaten by the previous die in the list, with a probability of 2/3: -$$ -P(A>B) = P(B>C) = P(C>D) = P(D>A) = {2 \over 3} -$$ - -B's value is constant; A beats it on 2/3 rolls because four of its six faces are higher. - -Similarly, B beats C with a 2/3 probability because only two of C's faces are higher. - -P(C>D) can be calculated by summing conditional probabilities for two events: - -* C rolls 6 (probability 1/3); wins regardless of D (probability 1) - -* C rolls 2 (probability 2/3); wins only if D rolls 1 (probability 1/2) - -The total probability of win for C is therefore -$$ -\left( {1 \over 3}\times1 \right) + \left( {2 \over 3}\times{1 \over 2} \right) = {2 \over 3} -$$ - -With a similar calculation, the probability of D winning over A is -$$ -\left( {1 \over 2}\times1 \right) + \left( {1 \over 2}\times{1 \over 3} \right) = {2 \over 3} -$$ - -The four dice have unequal probabilities of beating a die chosen at random from the remaining three: - -As proven above, die A beats B two-thirds of the time but beats D only one-third of the time. The probability of die A beating C is 4/9 (A must roll 4 and C must roll 2). So the likelihood of A beating any other randomly selected die is: -$$ -{1 \over 3}\times \left( {2 \over 3} + {1 \over 3} + {4 \over 9} \right) = {13 \over 27} -$$ - -Similarly, die B beats C two-thirds of the time but beats A only one-third of the time. The probability of die B beating D is 1/2 (only when D rolls 1). So the likelihood of B beating any other randomly selected die is: -$$ -{1 \over 3}\times \left( {2 \over 3} + {1 \over 3} + {1 \over 2} \right) = {1 \over 2} -$$ - -Die C beats D two-thirds of the time but beats B only one-third of the time. The probability of die C beating A is 5/9. 
So the likelihood of C beating any other randomly selected die is: -$$ -{1 \over 3}\times \left( {2 \over 3} + {1 \over 3} + {5 \over 9} \right) = {14 \over 27} -$$ - -Finally, die D beats A two-thirds of the time but beats C only one-third of the time. The probability of die D beating B is 1/2 (only when D rolls 5). So the likelihood of D beating any other randomly selected die is: -$$ -{1 \over 3}\times \left( {2 \over 3} + {1 \over 3} + {1 \over 2} \right) = {1 \over 2} -$$ - -Therefore, the best overall die is C, with a probability of winning of 14/27 ≈ 0.5185. C also rolls the highest average number in absolute terms, 10/3. (A's average is 8/3, while B's and D's are both 3.) - -Note that Efron's dice have different average rolls: the average roll of A is 8/3, while B and D each average 9/3, and C averages 10/3. The intransitive property depends on which faces are larger or smaller, but does not depend on the absolute magnitude of the faces. Hence one can find variants of Efron's dice where the odds of winning are unchanged, but all the dice have the same average roll. For example, - -* A: 7, 7, 7, 7, 1, 1 - -* B: 5, 5, 5, 5, 5, 5 - -* C: 9, 9, 3, 3, 3, 3 - -* D: 8, 8, 8, 2, 2, 2 - -These variant dice are useful, e.g., to introduce students to different ways of comparing random variables (and how comparing only averages may overlook essential details). - -A set of four dice using all of the numbers 1 through 24 can be made to be intransitive. - -With adjacent pairs, one die's probability of winning is 2/3. - -For rolling the higher number, B beats A, C beats B, D beats C, and A beats D. - -* A: 01, 02, 16, 17, 18, 19 - -* B: 03, 04, 05, 20, 21, 22 - -* C: 06, 07, 08, 09, 23, 24 - -* D: 10, 11, 12, 13, 14, 15 - -These dice are basically the same as Efron's dice, since each run of successive numbers on a single die can be replaced by the lowest number of the run, with the dice then renumbered accordingly. - -Miwin's Dice were invented in 1975 by the physicist Michael Winkelmann. - -Consider a set of three dice, III, IV and V such that - -* die III has sides 1, 2, 5, 6, 7, 9 - -* die IV has sides 1, 3, 4, 5, 8, 9 - -* die V has sides 2, 3, 4, 6, 7, 8 - -Then: - -* the probability that III rolls a higher number than IV is 17/36 - -* the probability that IV rolls a higher number than V is 17/36 - -* the probability that V rolls a higher number than III is 17/36 - -The following intransitive dice have only a few differences compared to 1 through 6 standard dice: - -* as with standard dice, the total number of pips is always 21 - -* as with standard dice, the sides only carry pip numbers between 1 and 6 - -* faces with the same number of pips occur a maximum of twice per die - -* only two sides on each die have numbers different from standard dice: - -** A: 1, 1, 3, 5, 5, 6 - -** B: 2, 3, 3, 4, 4, 5 - -** C: 1, 2, 2, 4, 6, 6 - -Like Miwin’s set, the probability of A winning versus B (or B vs. C, C vs. A) is 17/36. The probability of a draw, however, is 4/36, so that only 15 of the 36 rolls lose. So the overall winning expectation is higher. - -Warren Buffett is known to be a fan of intransitive dice. In the book Fortune's Formula: The Untold Story of the Scientific Betting System that Beat the Casinos and Wall Street, a discussion between him and Edward Thorp is described. Buffett and Thorp discussed their shared interest in intransitive dice.
"These are a mathematical curiosity, a type of 'trick' dice that confound most people's ideas about probability." - -Buffett once attempted to win a game of dice with Bill Gates using intransitive dice. "Buffett suggested that each of them choose one of the dice, then discard the other two. They would bet on who would roll the highest number most often. Buffett offered to let Gates pick his die first. This suggestion instantly aroused Gates's curiosity. He asked to examine the dice, after which he demanded that Buffett choose first." - -In 2010, Wall Street Journal magazine quoted Sharon Osberg, Buffett's bridge partner, saying that when she first visited his office 20 years earlier, he tricked her into playing a game with intransitive dice that could not be won and "thought it was hilarious". - -A number of people have introduced variations of intransitive dice where one can compete against more than one opponent. - -Oskar van Deventer introduced a set of seven dice (all faces with probability 1/6) as follows: - -* A: 2, 02, 14, 14, 17, 17 - -* B: 7, 07, 10, 10, 16, 16 - -* C: 5, 05, 13, 13, 15, 15 - -* D: 3, 03, 09, 09, 21, 21 - -* E: 1, 01, 12, 12, 20, 20 - -* F: 6, 06, 08, 08, 19, 19 - -* G: 4, 04, 11, 11, 18, 18 - -One can verify that A beats {B,C,E}; B beats {C,D,F}; C beats {D,E,G}; D beats {A,E,F}; E beats {B,F,G}; F beats {A,C,G}; G beats {A,B,D}. Consequently, for arbitrarily chosen two dice there is a third one that beats both of them. Namely, - -* G beats {A,B}; F beats {A,C}; G beats {A,D}; D beats {A,E}; D beats {A,F}; F beats {A,G}; - -* A beats {B,C}; G beats {B,D}; A beats {B,E}; E beats {B,F}; E beats {B,G}; - -* B beats {C,D}; A beats {C,E}; B beats {C,F}; F beats {C,G}; - -* C beats {D,E}; B beats {D,F}; C beats {D,G}; - -* D beats {E,F}; C beats {E,G}; - -* E beats {F,G}. - -Whatever the two opponents choose, the third player will find one of the remaining dice that beats both opponents' dice. - -Dr. James Grime discovered a set of five dice as follows: - -* A: 2, 2, 2, 7, 7, 7 - -* B: 1, 1, 6, 6, 6, 6 - -* C: 0, 5, 5, 5, 5, 5 - -* D: 4, 4, 4, 4, 4, 9 - -* E: 3, 3, 3, 3, 8, 8 - -One can verify that, when the game is played with one set of Grime dice: - -* A beats B beats C beats D beats E beats A (first chain); - -* A beats C beats E beats B beats D beats A (second chain). - -However, when the game is played with two such sets, then the first chain remains the same (with one exception discussed later) but the second chain is reversed (i.e. A beats D beats B beats E beats C beats A). Consequently, whatever dice the two opponents choose, the third player can always find one of the remaining dice that beats them both (as long as the player is then allowed to choose between the one-die option and the two-die option): - -There are two major issues with this set, however. The first one is that in the two-die option of the game, the first chain should stay exactly the same in order to make the game intransitive. In practice, though, D actually beats C. The second problem is that the third player would have to be allowed to choose between the one-die option and the two-die option – which may be seen as unfair to other players. - -The above issue of D defeating C arises because the dice have 6 faces rather than 5. By replacing the lowest (or highest) face of each die with "reroll" (R), all five dice will function exactly as Dr. 
James Grime intended: - -* A: R, 2, 2, 7, 7, 7 - -* B: R, 1, 6, 6, 6, 6 - -* C: R, 5, 5, 5, 5, 5 - -* D: R, 4, 4, 4, 4, 9 - -* E: R, 3, 3, 3, 8, 8 - -Alternatively, these faces could be mapped to a set of pentagonal-trapezohedral (10-sided) dice, with each number appearing exactly twice, or to a set of icosahedral (20-sided) dice, with each number appearing four times. This eliminates the need for a "reroll" face. - -This solution was discovered by Jon Chambers, an Australian pre-service mathematics teacher. - -A four-player set has not yet been discovered, but it was proved that such a set would require at least 19 dice. - -Tetrahedra can be used as dice with four possible results. - -Set 1: - -* A: 1, 4, 7, 7 - -* B: 2, 6, 6, 6 - -* C: 3, 5, 5, 8 - -P(A > B) = P(B > C) = P(C > A) = 9/16 - -In "A versus B", A wins in 9 out of 16 cases. - -In "B versus C", B wins in 9 out of 16 cases. - -In "C versus A", C wins in 9 out of 16 cases. - -Set 2: - -* A: 3, 3, 3, 6 - -* B: 2, 2, 5, 5 - -* C: 1, 4, 4, 4 - -P(A > B) = P(B > C) = 10/16, P(C > A) = 9/16 - -In analogy to the intransitive six-sided dice, there are also dodecahedra which serve as intransitive twelve-sided dice. The points on each die sum to 114, and no number is repeated on any of the dodecahedra. - -Miwin’s dodecahedra (set 1) win cyclically against each other in a ratio of 35:34. - -Miwin’s dodecahedra (set 2) win cyclically against each other in a ratio of 71:67. - -Set 1: D III, D IV, D V (gallery images omitted). - -Set 2: D VI, D VII, D VIII (gallery images omitted). - -It is also possible to construct sets of intransitive dodecahedra such that there are no repeated numbers and all numbers are primes. Miwin’s intransitive prime-numbered dodecahedra win cyclically against each other in a ratio of 35:34. - -Set 1 (PD 11, PD 12, PD 13): the numbers add up to 564. (Gallery images omitted.) - -Set 2 (PD 1, PD 2, PD 3): the numbers add up to 468. (Gallery images omitted.) - -There also exist three or more sets of dice in which the dice of each set form their own intransitive cycle of superiority, and the relations between the sets are intransitive as well. The first example: 's meta-dice. diff --git a/wiki/wikipedia/2699.txt b/wiki/wikipedia/2699.txt deleted file mode 100644 index de6ec69851f73ad9b3471e8ef4b1c17a395a255d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2699.txt +++ /dev/null @@ -1,21 +0,0 @@ -See also Gap theorem (disambiguation) for other gap theorems in mathematics. - -In computational complexity theory, the Gap Theorem, also known as the Borodin–Trakhtenbrot Gap Theorem, is a major theorem about the complexity of computable functions. - -It essentially states that there are arbitrarily large computable gaps in the hierarchy of complexity classes. For any computable function that represents an increase in computational resources, one can find a resource bound such that the set of functions computable within the expanded resource bound is the same as the set computable within the original bound. - -The theorem was proved independently by Boris Trakhtenbrot and Allan Borodin.
- -Although Trakhtenbrot's derivation preceded Borodin's by several years, it was neither known nor recognized in the West until after Borodin's work was published. - -The general form of the theorem is as follows. - -Suppose Φ is an abstract (Blum) complexity measure. For any total computable function g for which $g(x) \geq x$ for every x, there is a total computable function t such that with respect to Φ, the complexity classes with boundary functions t and $g \circ t$ are identical. - -The theorem can be proved by using the Blum axioms without any reference to a concrete computational model, so it applies to time, space, or any other reasonable complexity measure. - -For the special case of time complexity, this can be stated more simply as: - -for any total computable function $g : \omega \to \omega$ such that $g(x) \geq x$ for all x, there exists a time bound $T(n)$ such that $\mathsf{DTIME}(g(T(n))) = \mathsf{DTIME}(T(n))$. - -Because the bound T(n) may be very large (and often will be nonconstructible) the Gap Theorem does not imply anything interesting for complexity classes such as P or NP, and it does not contradict the time hierarchy theorem or space hierarchy theorem. diff --git a/wiki/wikipedia/27.txt b/wiki/wikipedia/27.txt deleted file mode 100644 index b79a562f530989c89c3aafae67035aba9e3d1b08..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/27.txt +++ /dev/null @@ -1,17 +0,0 @@ -Dubner's conjecture is an as yet (2018) unsolved conjecture by American mathematician Harvey Dubner. It states that every even number greater than 4208 is the sum of two t-primes, where a t-prime is a prime which has a twin. The conjecture is computer-verified for numbers up to $4 \times 10^{11}$. - -The even numbers that are exceptions are: 2, 4, 94, 96, 98, 400, 402, 404, 514, 516, 518, 784, 786, 788, 904, 906, 908, 1114, 1116, 1118, 1144, 1146, 1148, 1264, 1266, 1268, 1354, 1356, 1358, 3244, 3246, 3248, 4204, 4206, 4208. - -The conjecture, if proved, would establish both Goldbach's conjecture (because it has already been verified that all the even numbers 2n, such that 2 < 2n ≤ 4208, are the sum of two primes) and the twin prime conjecture (there would exist infinitely many t-primes, and thus infinitely many twin prime pairs). - -Whilst already itself a generalization of both these conjectures, the original conjecture of Dubner may be generalized even further: - -* For each natural number k > 0, every sufficiently large even number n(k) is the sum of two d(2k)-primes, where a d(2k)-prime is a prime p for which there is a prime q with d(p,q) = |q − p| = 2k and p, q successive primes. For each k, this conjecture implies Goldbach's conjecture (for all the even numbers greater than a large value ℓ(k)), and considering all cases k together it implies de Polignac's conjecture. The original Dubner's conjecture is the case k = 1. - -* The same idea, but with p and q not necessarily consecutive in the definition of a d(2k)-prime. Again, Dubner's conjecture is the case k = 1. This version implies Goldbach's conjecture and, if all cases k are considered, the generalized de Polignac's conjecture. - -* Harvey Dubner (2000), Twin Prime Conjectures, Journal of Recreational Mathematics, volume 30, issue 3, pp. 199–205 - -* Jean-Paul Delahaye (June 2002), Nombres premiers inévitables et pyramidaux, Pour la Science, issue 296, pp. 
98–102 - -Category:Conjectures about prime numbers diff --git a/wiki/wikipedia/270.txt b/wiki/wikipedia/270.txt deleted file mode 100644 index 72e415b75a064e28afe04ece8450bf38d4c85077..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/270.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, the Demazure conjecture is a conjecture about representations of algebraic groups over the integers made by Michel Demazure. The conjecture implies that many of the results of his paper can be extended from complex algebraic groups to algebraic groups over fields of other characteristics or over the integers. Lakshmibai and Seshadri showed that Demazure's conjecture (for classical groups) follows from their work on standard monomial theory, and Peter Littelmann extended this to all reductive algebraic groups. diff --git a/wiki/wikipedia/2700.txt b/wiki/wikipedia/2700.txt deleted file mode 100644 index e1a6b0c34c0915c74b558310ece2c1a6a33486f6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2700.txt +++ /dev/null @@ -1,13 +0,0 @@ -In graph theory, Robbins' theorem, named after Herbert Robbins, states that the graphs that have strong orientations are exactly the 2-edge-connected graphs. That is, it is possible to choose a direction for each edge of an undirected graph G, turning it into a directed graph that has a path from every vertex to every other vertex, if and only if G is connected and has no bridge. - -Robbins' characterization of the graphs with strong orientations may be proven using ear decomposition, a tool introduced by Robbins for this task. - -If a graph has a bridge, then it cannot be strongly orientable, for no matter which orientation is chosen for the bridge there will be no path from one of the two endpoints of the bridge to the other. - -In the other direction, it is necessary to show that every connected bridgeless graph can be strongly oriented. As Robbins proved, every such graph has a partition into a sequence of subgraphs called "ears", in which the first subgraph in the sequence is a cycle and each subsequent subgraph is a path, with the two path endpoints both belonging to earlier ears in the sequence. (The two path endpoints may be equal, in which case the subgraph is a cycle.) Orienting the edges within each ear so that it forms a directed cycle or a directed path leads to a strongly connected orientation of the overall graph. - -An extension of Robbins' theorem to mixed graphs by Boesch shows that, if G is a graph in which some edges may be directed and others undirected, and G contains a path respecting the edge orientations from every vertex to every other vertex, then any undirected edge of G that is not a bridge may be made directed without changing the connectivity of G. In particular, a bridgeless undirected graph may be made into a strongly connected directed graph by a greedy algorithm that directs edges one at a time while preserving the existence of paths between every pair of vertices; it is impossible for such an algorithm to get stuck in a situation in which no additional orientation decisions can be made. - -A strong orientation of a given bridgeless undirected graph may be found in linear time by performing a depth first search of the graph, orienting all edges in the depth first search tree away from the tree root, and orienting all the remaining edges (which must necessarily connect an ancestor and a descendant in the depth first search tree) from the descendant to the ancestor. 
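This linear-time construction is easy to implement directly. Below is a minimal C++ sketch of the depth-first-search orientation just described (our own illustration, not code from the sources; it assumes a simple, connected, bridgeless graph): tree edges are oriented away from the root, and every remaining edge from descendant to ancestor.

#include <utility>
#include <vector>
using namespace std;

// Strong orientation via DFS (Robbins' theorem): tree edges point away
// from the root; back edges point from descendant to ancestor.
struct StrongOrienter {
    vector<vector<int>> adj;         // undirected adjacency lists (simple graph)
    vector<int> order;               // DFS discovery time, -1 = unvisited
    vector<pair<int, int>> directed; // output: one oriented copy of each edge
    int timer = 0;

    explicit StrongOrienter(int n) : adj(n), order(n, -1) {}

    void addEdge(int u, int v) { adj[u].push_back(v); adj[v].push_back(u); }

    void dfs(int u, int parent) {
        order[u] = timer++;
        for (int v : adj[u]) {
            if (order[v] == -1) {                        // tree edge
                directed.push_back({u, v});
                dfs(v, u);
            } else if (order[v] < order[u] && v != parent) {
                directed.push_back({u, v});              // back edge
            }
        }
    }

    void orient() { dfs(0, -1); }    // assumes the graph is connected
};

If the input is 2-edge-connected, the resulting digraph is strongly connected; if it contains a bridge, the construction necessarily leaves one side of the bridge unable to reach the other, in line with the theorem.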
Although this algorithm is not suitable for parallel computers, due to the difficulty of performing depth first search on them, alternative algorithms are available that solve the problem efficiently in the parallel model. Parallel algorithms are also known for finding strongly connected orientations of mixed graphs. - -Robbins originally motivated his work by an application to the design of one-way streets in cities. Another application arises in structural rigidity, in the theory of grid bracing. This theory concerns the problem of making a square grid, constructed from rigid rods attached at flexible joints, rigid by adding more rods or wires as cross bracing on the diagonals of the grid. A set of added rods makes the grid rigid if an associated undirected graph is connected, and is doubly braced (remaining rigid if any edge is removed) if in addition it is bridgeless. Analogously, a set of added wires (which can bend to reduce the distance between the points they connect, but cannot expand) makes the grid rigid if an associated directed graph is strongly connected. Therefore, reinterpreting Robbins' theorem for this application, the doubly braced structures are exactly the structures whose rods can be replaced by wires while remaining rigid. diff --git a/wiki/wikipedia/2701.txt b/wiki/wikipedia/2701.txt deleted file mode 100644 index 5df3459d997f34c1623e4a5e0c198f31814e5ac3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2701.txt +++ /dev/null @@ -1,349 +0,0 @@ -In propositional logic and Boolean algebra, De Morgan's laws are a pair of transformation rules that are both valid rules of inference. They are named after Augustus De Morgan, a 19th-century British mathematician. The rules allow the expression of conjunctions and disjunctions purely in terms of each other via negation. - -The rules can be expressed in English as: - -* the negation of a disjunction is the conjunction of the negations - -* the negation of a conjunction is the disjunction of the negations - -or - -* the complement of the union of two sets is the same as the intersection of their complements - -* the complement of the intersection of two sets is the same as the union of their complements - -or - -* not (A or B) = (not A) and (not B) - -* not (A and B) = (not A) or (not B), - -where "A or B" is an "inclusive or" meaning at least one of A or B rather than an "exclusive or" that means exactly one of A or B. - -In set theory and Boolean algebra, these are written formally as - -\begin{align} - -\overline{A \cup B} &= \overline{A} \cap \overline{B}, \\ - -\overline{A \cap B} &= \overline{A} \cup \overline{B}, - -\end{align} - -where - -* $A$ and $B$ are sets, - -* $\overline{A}$ is the complement of $A$, - -* $\cap$ is the intersection, and - -* $\cup$ is the union. - -In formal language, the rules are written as -$$ -\neg(P\lor Q)\iff(\neg P)\land(\neg Q), -$$ - -and -$$ -\neg(P\land Q)\iff(\neg P)\lor(\neg Q) -$$ - -where - -* P and Q are propositions, - -* $\neg$ is the negation logic operator (NOT), - -* $\land$ is the conjunction logic operator (AND), - -* $\lor$ is the disjunction logic operator (OR), - -* $\iff$ is a metalogical symbol meaning "can be replaced in a logical proof with". - -Applications of the rules include simplification of logical expressions in computer programs and digital circuit designs. De Morgan's laws are an example of a more general concept of mathematical duality. 
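As a small illustration of the programming application mentioned above, the following C++ fragment (a hypothetical example of ours; the function names are invented) rewrites a negated compound condition using De Morgan's laws, and checks that the two forms agree on all inputs:

#include <cassert>

// De Morgan rewrites: !(a && b) == (!a || !b) and !(a || b) == (!a && !b).
bool rejectV1(bool valid, bool authorized) { return !(valid && authorized); }
bool rejectV2(bool valid, bool authorized) { return !valid || !authorized; }

int main() {
    for (int a = 0; a < 2; ++a)
        for (int b = 0; b < 2; ++b)
            assert(rejectV1(a, b) == rejectV2(a, b));
    return 0;
}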
- -The negation of conjunction rule may be written in sequent notation: -$$ -\neg(P \land Q) \vdash (\neg P \lor \neg Q) -$$, - -and -$$ -(\neg P \lor \neg Q) \vdash \neg(P \land Q) -$$. - -The negation of disjunction rule may be written as: -$$ -\neg(P \lor Q) \vdash (\neg P \land \neg Q) -$$, - -and -$$ -(\neg P \land \neg Q) \vdash \neg(P \lor Q) -$$. - -In rule form: negation of conjunction -$$ -\frac{\neg (P \land Q)}{\therefore \neg P \lor \neg Q} -$$ -$$ -\frac{\neg P \lor \neg Q}{\therefore \neg (P \land Q)} -$$ - -and negation of disjunction -$$ -\frac{\neg (P \lor Q)}{\therefore \neg P \land \neg Q} -$$ -$$ -\frac{\neg P \land \neg Q}{\therefore \neg (P \lor Q)} -$$ - -and expressed as a truth-functional tautology or theorem of propositional logic: - -\begin{align} - -\neg (P \land Q) &\to (\neg P \lor \neg Q), \\ - -(\neg P \lor \neg Q) &\to \neg (P \land Q), \\ - -\neg (P \lor Q) &\to (\neg P \land \neg Q), \\ - -(\neg P \land \neg Q) &\to \neg (P \lor Q), - -\end{align} - -where $P$ and $Q$ are propositions expressed in some formal system. - -De Morgan's laws are normally shown in the compact form above, with the negation of the output on the left and negation of the inputs on the right. A clearer form for substitution can be stated as: - -\begin{align} - -(P \land Q) &\Longleftrightarrow \neg (\neg P \lor \neg Q), \\ - -(P \lor Q) &\Longleftrightarrow \neg (\neg P \land \neg Q). - -\end{align} - -This emphasizes the need to invert both the inputs and the output, as well as change the operator when doing a substitution. - -The laws also interact closely with the double negation law $\neg(\neg p) \iff p$: combined with it, De Morgan's laws allow negations to be pushed inward through conjunctions and disjunctions, so that any propositional formula can be rewritten with negation applied only to its atomic propositions (the negation normal form discussed below). - -In set theory and Boolean algebra, it is often stated as "union and intersection interchange under complementation", which can be formally expressed as: - -\begin{align} - -\overline{A \cup B} &= \overline{A} \cap \overline{B}, \\ - -\overline{A \cap B} &= \overline{A} \cup \overline{B}, - -\end{align} - -where: - -* $\overline{A}$ is the negation of $A$, the overline being written above the terms to be negated, - -* $\cap$ is the intersection operator (AND), - -* $\cup$ is the union operator (OR). - -The generalized form is - -\begin{align} - -\overline{\bigcap_{i \in I} A_{i}} &\equiv \bigcup_{i \in I} \overline{A_{i}}, \\ - -\overline{\bigcup_{i \in I} A_{i}} &\equiv \bigcap_{i \in I} \overline{A_{i}}, - -\end{align} - -where I is some, possibly uncountable, indexing set. - -In set notation, De Morgan's laws can be remembered using the mnemonic "break the line, change the sign". 
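The set-theoretic form can be checked mechanically on any finite universe. Here is a short C++ sketch (ours, for illustration only) verifying both identities for two concrete subsets of U = {0, ..., 7}:

#include <algorithm>
#include <cassert>
#include <iterator>
#include <set>
using namespace std;

set<int> setUnion(const set<int>& a, const set<int>& b) {
    set<int> r;
    set_union(a.begin(), a.end(), b.begin(), b.end(), inserter(r, r.begin()));
    return r;
}
set<int> setInter(const set<int>& a, const set<int>& b) {
    set<int> r;
    set_intersection(a.begin(), a.end(), b.begin(), b.end(), inserter(r, r.begin()));
    return r;
}
set<int> complementOf(const set<int>& s, const set<int>& U) {
    set<int> r;
    set_difference(U.begin(), U.end(), s.begin(), s.end(), inserter(r, r.begin()));
    return r;
}

int main() {
    set<int> U = {0, 1, 2, 3, 4, 5, 6, 7};
    set<int> A = {0, 1, 2, 3}, B = {2, 3, 4, 5};
    // Complement of a union equals intersection of the complements.
    assert(complementOf(setUnion(A, B), U) == setInter(complementOf(A, U), complementOf(B, U)));
    // Complement of an intersection equals union of the complements.
    assert(complementOf(setInter(A, B), U) == setUnion(complementOf(A, U), complementOf(B, U)));
    return 0;
}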
- -In electrical and computer engineering, De Morgan's laws are commonly written as: -$$ -\overline{A \cdot B} \equiv \overline {A} + \overline {B} -$$ - -and -$$ -\overline{A + B} \equiv \overline {A} \cdot \overline {B}, -$$ - -where: - -* $ \cdot $ is the logical AND, - -* $+$ is the logical OR, - -* the overbar is the logical NOT of what is underneath the overbar. - -De Morgan's laws commonly apply to text searching using Boolean operators AND, OR, and NOT. Consider a set of documents containing the words “cars” and “trucks”. De Morgan's laws hold that these two searches will return the same set of documents: - -Search A: NOT (cars OR trucks) - -Search B: (NOT cars) AND (NOT trucks) - -The corpus of documents containing “cars” or “trucks” can be represented by four documents: - -Document 1: Contains only the word “cars”. - -Document 2: Contains only “trucks”. - -Document 3: Contains both “cars” and “trucks”. - -Document 4: Contains neither “cars” nor “trucks”. - -To evaluate Search A, clearly the search “(cars OR trucks)” will hit on Documents 1, 2, and 3. So the negation of that search (which is Search A) will hit everything else, which is Document 4. - -Evaluating Search B, the search “(NOT cars)” will hit on documents that do not contain “cars”, which is Documents 2 and 4. Similarly the search “(NOT trucks)” will hit on Documents 1 and 4. Applying the AND operator to these two searches (which is Search B) will hit on the documents that are common to these two searches, which is Document 4. - -A similar evaluation can be applied to show that the following two searches will return the same set of documents (Documents 1, 2, 4): - -Search C: NOT (cars AND trucks), - -Search D: (NOT cars) OR (NOT trucks). - -The laws are named after Augustus De Morgan (1806–1871), who introduced a formal version of the laws to classical propositional logic. De Morgan's formulation was influenced by algebraization of logic undertaken by George Boole, which later cemented De Morgan's claim to the find. Nevertheless, a similar observation was made by Aristotle, and was known to Greek and Medieval logicians. For example, in the 14th century, William of Ockham wrote down the words that would result by reading the laws out. Jean Buridan, in his Summulae de Dialectica, also describes rules of conversion that follow the lines of De Morgan's laws. Still, De Morgan is given credit for stating the laws in the terms of modern formal logic, and incorporating them into the language of logic. De Morgan's laws can be proved easily, and may even seem trivial. Nonetheless, these laws are helpful in making valid inferences in proofs and deductive arguments. - -De Morgan's theorem may be applied to the negation of a disjunction or the negation of a conjunction in all or part of a formula. - -In the case of its application to a disjunction, consider the following claim: "it is false that either of A or B is true", which is written as: -$$ -\neg(A\lor B). -$$ - -In that it has been established that neither A nor B is true, then it must follow that both A is not true and B is not true, which may be written directly as: -$$ -(\neg A)\wedge(\neg B). -$$ - -If either A or B were true, then the disjunction of A and B would be true, making its negation false. Presented in English, this follows the logic that "since two things are both false, it is also false that either of them is true". 
- -Working in the opposite direction, the second expression asserts that A is false and B is false (or equivalently that "not A" and "not B" are true). Knowing this, a disjunction of A and B must be false also. The negation of said disjunction must thus be true, and the result is identical to the first claim. - -The application of De Morgan's theorem to conjunction is very similar to its application to a disjunction both in form and rationale. Consider the following claim: "it is false that A and B are both true", which is written as: -$$ -\neg(A\land B). -$$ - -In order for this claim to be true, either or both of A or B must be false, for if they both were true, then the conjunction of A and B would be true, making its negation false. Thus, one (at least) or more of A and B must be false (or equivalently, one or more of "not A" and "not B" must be true). This may be written directly as, -$$ -(\neg A)\lor(\neg B). -$$ - -Presented in English, this follows the logic that "since it is false that two things are both true, at least one of them must be false". - -Working in the opposite direction again, the second expression asserts that at least one of "not A" and "not B" must be true, or equivalently that at least one of A and B must be false. Since at least one of them must be false, then their conjunction would likewise be false. Negating said conjunction thus results in a true expression, and this expression is identical to the first claim. - -Here we use $A^\complement$to denote the complement of A. The proof that $(A\cap B)^\complement = A^\complement \cup B^\complement$ is completed in 2 steps by proving both $(A\cap B)^\complement \subseteq A^\complement \cup B^\complement$ and $A^\complement \cup B^\complement \subseteq (A\cap B)^\complement$. - -Let $x \in (A \cap B)^\complement$. Then, $x \not\in A \cap B$. - -Because $A \cap B = \{y\ |\ y\in A \wedge y \in B\}$, it must be the case that $x \not\in A$ or $x \not\in B$. - -If $x \not\in A$, then $x \in A^\complement$, so $x \in A^\complement \cup B^\complement$. - -Similarly, if $x \not\in B$, then $x \in B^\complement$, so $x \in A^\complement\cup B^\complement$. - -Thus, $\forall x( x \in (A\cap B)^\complement \rarr x \in A^\complement \cup B^\complement)$; - -that is, $(A\cap B)^\complement \subseteq A^\complement \cup B^\complement$. - -To prove the reverse direction, let $x \in A^\complement \cup B^\complement$, and for contradiction assume $x \not\in (A\cap B)^\complement$. - -Under that assumption, it must be the case that $x \in A\cap B$, - -so it follows that $x \in A$ and $x \in B$, and thus $x \not\in A^\complement$ and $x \not\in B^\complement$. - -However, that means $x \not\in A^\complement \cup B^\complement$, in contradiction to the hypothesis that $x \in A^\complement \cup B^\complement$, - -therefore, the assumption $x \not\in (A\cap B)^\complement$ must not be the case, meaning that $x \in (A\cap B)^\complement$. - -Hence, $\forall x( x \in A^\complement \cup B^\complement \rarr x \in (A\cap B)^\complement)$, - -that is, $A^\complement \cup B^\complement \subseteq (A\cap B)^\complement$. - -If $A^\complement \cup B^\complement \subseteq (A\cap B)^\complement$ and $(A \cap B)^\complement \subseteq A^\complement \cup B^\complement$, then $(A\cap B)^\complement = A^\complement \cup B^\complement$; this concludes the proof of De Morgan's law. - -The other De Morgan's law, $(A \cup B)^\complement = A^\complement \cap B^\complement$, is proven similarly. 
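The propositional equivalences argued above can also be confirmed by brute force, since a truth table over (P, Q) has only four rows. A small C++ check of ours (not part of the article) verifies all four implications:

#include <cassert>

// Material implication p -> q, encoded as !p || q.
static bool implies(bool p, bool q) { return !p || q; }

int main() {
    for (int P = 0; P < 2; ++P)
        for (int Q = 0; Q < 2; ++Q) {
            assert(implies(!(P && Q), !P || !Q)); // negation of conjunction
            assert(implies(!P || !Q, !(P && Q))); // ...and its converse
            assert(implies(!(P || Q), !P && !Q)); // negation of disjunction
            assert(implies(!P && !Q, !(P || Q))); // ...and its converse
        }
    return 0;
}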
- -In extensions of classical propositional logic, the duality still holds (that is, to any logical operator one can always find its dual), since in the presence of the identities governing negation, one may always introduce an operator that is the De Morgan dual of another. This leads to an important property of logics based on classical logic, namely the existence of negation normal forms: any formula is equivalent to another formula where negations only occur applied to the non-logical atoms of the formula. The existence of negation normal forms drives many applications, for example in digital circuit design, where it is used to manipulate the types of logic gates, and in formal logic, where it is needed to find the conjunctive normal form and disjunctive normal form of a formula. Computer programmers use them to simplify or properly negate complicated logical conditions. They are also often useful in computations in elementary probability theory. - -Let one define the dual of any propositional operator P(p, q, ...) depending on elementary propositions p, q, ... to be the operator $\mbox{P}^d$ defined by -$$ -\mbox{P}^d(p, q, ...) = \neg P(\neg p, \neg q, \dots). -$$ - -This duality can be generalised to quantifiers, so for example the universal quantifier and existential quantifier are duals: -$$ - \forall x P(x) \equiv \neg [ \exists x \neg P(x)] -$$ -$$ - \exists x P(x) \equiv \neg [ \forall x \neg P(x)] -$$ - -To relate these quantifier dualities to the De Morgan laws, set up a model with some small number of elements in its domain D, such as - -D = {a, b, c}. - -Then -$$ - \forall x P(x) \equiv P(a) \land P(b) \land P(c) -$$ - -and -$$ - \exists x P(x) \equiv P(a) \lor P(b) \lor P(c). -$$ - -But, using De Morgan's laws, -$$ - P(a) \land P(b) \land P(c) \equiv \neg (\neg P(a) \lor \neg P(b) \lor \neg P(c)) -$$ - -and -$$ - P(a) \lor P(b) \lor P(c) \equiv \neg (\neg P(a) \land \neg P(b) \land \neg P(c)), -$$ - -verifying the quantifier dualities in the model. - -Then, the quantifier dualities can be extended further to modal logic, relating the box ("necessarily") and diamond ("possibly") operators: -$$ - \Box p \equiv \neg \Diamond \neg p, -$$ -$$ - \Diamond p \equiv \neg \Box \neg p. -$$ - -In its application to the alethic modalities of possibility and necessity, Aristotle observed this case, and in the case of normal modal logic, the relationship of these modal operators to the quantification can be understood by setting up models using Kripke semantics. - -Three out of the four implications of de Morgan's laws hold in intuitionistic logic. Specifically, we have -$$ -\neg(P\lor Q)\iff(\neg P)\land(\neg Q), -$$ - -and -$$ -(P\lor Q)\rightarrow \neg ((\neg P)\land(\neg Q)), -$$ -$$ -(P\land Q)\rightarrow \neg ((\neg P)\lor(\neg Q)), -$$ -$$ -(\neg P)\lor(\neg Q) \rightarrow \neg(P\land Q) -$$ - -while the converse of the last implication does not hold in pure intuitionistic logic and would be equivalent to the law of the weak excluded middle -$$ -\neg P \lor \neg \neg P -$$ - -which can be used as a foundation for an intermediate logic. - -The 2nd most-famous quote from Sherlock Holmes stories, used several times, is a restatement of de Morgan's law: - -
    How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth?
    diff --git a/wiki/wikipedia/2702.txt b/wiki/wikipedia/2702.txt deleted file mode 100644 index 4c1644aa8f0ffeca6d4815787b12895e1fb6e980..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2702.txt +++ /dev/null @@ -1,17 +0,0 @@ -Light Up (Japanese: 美術館 bijutsukan, art gallery), also called Akari, is a binary-determination logic puzzle published by Nikoli. As of 2011, three books consisting entirely of Light Up puzzles have been published by Nikoli. - -Light Up is played on a rectangular grid of white and black cells. The player places light bulbs in white cells such that no two bulbs shine on each other, until the entire grid is lit up. A bulb sends rays of light horizontally and vertically, illuminating its entire row and column unless its light is blocked by a black cell. A black cell may have a number on it from 0 to 4, indicating how many bulbs must be placed adjacent to its four sides; for example, a cell with a 4 must have four bulbs around it, one on each side, and a cell with a 0 cannot have a bulb next to any of its sides. An unnumbered black cell may have any number of light bulbs adjacent to it, or none. Bulbs placed diagonally adjacent to a numbered cell do not contribute to the bulb count. - -A typical starting point in the solution of a Light Up puzzle is to find a black cell with a 4, or a cell with a smaller number that is blocked on one or more sides (for example, a 3 against a wall or a 2 in a corner) and therefore has only one configuration of surrounding bulbs. After this step, other numbered cells may be illuminated on one or more sides, narrowing down the possible bulb configurations around them, and in some cases making only one configuration possible. - -Another common technique is to look for a cell that is not yet lit, and determine if there is only one possible cell in which a bulb can be placed to light it up. - -When it is unclear where to place a bulb, one may also place dots in white cells that cannot have bulbs, such as around a 0 or in places where a bulb would create a contradiction. For example, a light bulb placed diagonally adjacent to a 3 will block two of its surrounding cells, making it impossible to have three bulbs around it; therefore, the diagonal cells around a 3 can never have lights in them and can be always dotted. Similarly, one may put dots in places where a bulb would "trap" another unlit cell, making it impossible to light it up without breaking the rules. - -More advanced techniques tend to focus on different combinations of clues. Two 3s that are one space apart, for example, with nothing between them or to the other two sides of the cell in between, must have a lightbulb in that space, and the two spaces next to the two threes, on the line joining them. If not, then one would have two lightbulbs illuminating each other. Also, from this deduction, the remaining four cells surrounding the threes must contain two lightbulbs. Note that as the four spaces are arranged in two rows with nothing in between, one must have one lightbulb to each row, so one can mark all other spaces in those rows as empty. - -Another fairly common pattern is a 1 diagonally adjacent to a 2, with one of the spaces next to the 2 but not adjacent to the 1 either empty or walled off. At most one lightbulb can be placed in the two cells common to the two clues, so the last lightbulb must go in the last space around the 2. Now, it is known that there is exactly one lightbulb in those cells, so the other cells next to the 1 must both be empty. 
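The two core rules, that bulbs must not shine on each other and that every white cell must be lit, are easy to verify mechanically for a candidate solution. Below is a hedged C++ sketch (our own helper with an invented grid encoding, not from Nikoli; numbered-wall constraints are omitted):

#include <string>
#include <vector>
using namespace std;

// '#' = black cell, '.' = empty white cell, 'B' = bulb (our encoding).
bool isValidSolution(const vector<string>& g) {
    int h = (int)g.size(), w = (int)g[0].size();
    vector<vector<bool>> lit(h, vector<bool>(w, false));
    const int dr[4] = {0, 0, 1, -1}, dc[4] = {1, -1, 0, 0};
    for (int r = 0; r < h; r++)
        for (int c = 0; c < w; c++) {
            if (g[r][c] != 'B') continue;
            lit[r][c] = true;
            for (int d = 0; d < 4; d++) {   // cast light until a black cell or edge
                int rr = r + dr[d], cc = c + dc[d];
                while (rr >= 0 && rr < h && cc >= 0 && cc < w && g[rr][cc] != '#') {
                    if (g[rr][cc] == 'B') return false; // two bulbs see each other
                    lit[rr][cc] = true;
                    rr += dr[d]; cc += dc[d];
                }
            }
        }
    for (int r = 0; r < h; r++)             // every white cell must be lit
        for (int c = 0; c < w; c++)
            if (g[r][c] != '#' && !lit[r][c]) return false;
    return true;
}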
- -Determining whether a given Light Up puzzle is solvable is NP-complete. This is proved by a polynomial-time reduction from Circuit-SAT, which is known to be NP-complete, to Light Up puzzles. - -A variation on the original Light Up puzzle in which there are unnumbered walls plus walls carrying one fixed number, either 0, 1, 2, 3 or 4 (we call these variations Akari-$n$), has also been studied in terms of complexity. It is shown, by a polynomial-time reduction from Circuit-SAT, that Akari-1, Akari-2 and Akari-3 are NP-complete; for Akari-4 and puzzles without any numbers it is shown that these are in P; Akari-0 is thus far uncategorized. diff --git a/wiki/wikipedia/2703.txt b/wiki/wikipedia/2703.txt deleted file mode 100644 index 60c58da4fed1dc96f97eab1ea4b3eb7ef2bd7e24..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2703.txt +++ /dev/null @@ -1,27 +0,0 @@ -In topology and related areas of mathematics, a metrizable space is a topological space that is homeomorphic to a metric space. That is, a topological space $(X, \mathcal{T})$ is said to be metrizable if there is a metric $d : X \times X \to [0, \infty)$ such that the topology induced by $d$ is $\mathcal{T}.$ Metrization theorems are theorems that give sufficient conditions for a topological space to be metrizable. - -Metrizable spaces inherit all topological properties from metric spaces. For example, they are Hausdorff paracompact spaces (and hence normal and Tychonoff) and first-countable. However, some properties of the metric, such as completeness, cannot be said to be inherited. This is also true of other structures linked to the metric. A metrizable uniform space, for example, may have a different set of contraction maps than a metric space to which it is homeomorphic. - -One of the first widely recognized metrization theorems was Urysohn's metrization theorem. This states that every Hausdorff second-countable regular space is metrizable. So, for example, every second-countable manifold is metrizable. (Historical note: The form of the theorem shown here was in fact proved by Tychonoff in 1926. What Urysohn had shown, in a paper published posthumously in 1925, was that every second-countable normal Hausdorff space is metrizable). The converse does not hold: there exist metric spaces that are not second countable, for example, an uncountable set endowed with the discrete metric. The Nagata–Smirnov metrization theorem, described below, provides a more specific theorem where the converse does hold. - -Several other metrization theorems follow as simple corollaries to Urysohn's theorem. For example, a compact Hausdorff space is metrizable if and only if it is second-countable. - -Urysohn's Theorem can be restated as: A topological space is separable and metrizable if and only if it is regular, Hausdorff and second-countable. The Nagata–Smirnov metrization theorem extends this to the non-separable case. It states that a topological space is metrizable if and only if it is regular, Hausdorff and has a σ-locally finite base. A σ-locally finite base is a base which is a union of countably many locally finite collections of open sets. For a closely related theorem see the Bing metrization theorem. - -Separable metrizable spaces can also be characterized as those spaces which are homeomorphic to a subspace of the Hilbert cube $\lbrack 0, 1 \rbrack ^\N,$ that is, the countably infinite product of the unit interval (with its natural subspace topology from the reals) with itself, endowed with the product topology. 
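One standard construction behind this characterization, sketched here since the text does not spell it out: if $(X,d)$ is a separable metric space with $d \le 1$ (any metric can first be replaced by the equivalent bounded metric $d/(1+d)$) and $\{x_n\}$ is a countable dense subset, then
$$
F : X \to \lbrack 0, 1 \rbrack ^\N, \qquad F(x) = \left(d(x,x_1), d(x,x_2), d(x,x_3), \dots\right)
$$
is a homeomorphism onto its image: each coordinate map is 1-Lipschitz, so $F$ is continuous, while density of $\{x_n\}$ yields injectivity and continuity of the inverse.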
- -A space is said to be locally metrizable if every point has a metrizable neighbourhood. Smirnov proved that a locally metrizable space is metrizable if and only if it is Hausdorff and paracompact. In particular, a manifold is metrizable if and only if it is paracompact. - -The group of unitary operators $\mathbb{U}(\mathcal{H})$ on a separable Hilbert space $\mathcal{H}$ endowed with the strong operator topology is metrizable (see Proposition II.1 in ). - -Non-normal spaces cannot be metrizable; important examples include - -* the Zariski topology on an algebraic variety or on the spectrum of a ring, used in algebraic geometry, - -* the topological vector space of all functions from the real line $\R$ to itself, with the topology of pointwise convergence. - -The real line with the lower limit topology is not metrizable. The usual distance function is not a metric on this space because the topology it determines is the usual topology, not the lower limit topology. This space is Hausdorff, paracompact and first countable. - -The long line is locally metrizable but not metrizable; in a sense it is "too long". diff --git a/wiki/wikipedia/2704.txt b/wiki/wikipedia/2704.txt deleted file mode 100644 index 54e398025a9dc0cd140673cf01b7cba0f526092f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2704.txt +++ /dev/null @@ -1,25 +0,0 @@ -In number theory, Goldbach's weak conjecture, also known as the odd Goldbach conjecture, the ternary Goldbach problem, or the 3-primes problem, states that - -Every odd number greater than 5 can be expressed as the sum of three primes. (A prime may be used more than once in the same sum.) - -This conjecture is called "weak" because if Goldbach's strong conjecture (concerning sums of two primes) is proven, then the weak conjecture would also be true. For if every even number greater than 4 is the sum of two odd primes, adding 3 to each even number greater than 4 will produce the odd numbers greater than 7 (and 7 itself is equal to 2+2+3). - -In 2013, Harald Helfgott published a proof of Goldbach's weak conjecture. - -Some state the conjecture as: Every odd number greater than 7 can be expressed as the sum of three odd primes. This version excludes 7 = 2+2+3 because this requires the even prime 2. On odd numbers larger than 7 it is slightly stronger as it also excludes sums like 17 = 2+2+13, which are allowed in the other formulation. Helfgott's proof covers both versions of the conjecture. Like the other formulation, this one also immediately follows from Goldbach's strong conjecture. - -The conjecture originated in correspondence between Christian Goldbach and Leonhard Euler. One formulation of the strong Goldbach conjecture, equivalent to the more common one in terms of sums of two primes, is - -Every integer greater than 5 can be written as the sum of three primes. - -The weak conjecture is simply this statement restricted to the case where the integer is odd (and possibly with the added requirement that the three primes in the sum be odd). - -In 1923, Hardy and Littlewood showed that, assuming the generalized Riemann hypothesis, the weak Goldbach conjecture is true for all sufficiently large odd numbers. In 1937, Ivan Matveevich Vinogradov eliminated the dependency on the generalised Riemann hypothesis and proved directly (see Vinogradov's theorem) that all sufficiently large odd numbers can be expressed as the sum of three primes. Vinogradov's original proof, as it used the ineffective Siegel–Walfisz theorem, did not give a bound for "sufficiently large"; his student K. Borozdkin (1956) derived that $e^{e^{16.038}}\approx3^{3^{15}}$ is large enough. 
The integer part of this number has 4,008,660 decimal digits, so checking every number under this figure would be completely infeasible. - -In 1997, Deshouillers, Effinger, te Riele and Zinoviev published a result showing that the generalized Riemann hypothesis implies Goldbach's weak conjecture for all numbers. This result combines a general statement valid for numbers greater than $10^{20}$ with an extensive computer search of the small cases. Saouter also conducted a computer search covering the same cases at approximately the same time. - -Olivier Ramaré in 1995 showed that every even number n ≥ 4 is in fact the sum of at most six primes, from which it follows that every odd number n ≥ 5 is the sum of at most seven primes. Leszek Kaniecki showed every odd integer is a sum of at most five primes, under the Riemann Hypothesis. In 2012, Terence Tao proved this without the Riemann Hypothesis; this improves both results. - -In 2002, Liu Ming-Chit (University of Hong Kong) and Wang Tian-Ze lowered Borozdkin's threshold to approximately $n>e^{3100}\approx 2 \times 10^{1346}$. The exponent is still much too large to admit checking all smaller numbers by computer. (Computer searches have only reached as far as $10^{18}$ for the strong Goldbach conjecture, and not much further than that for the weak Goldbach conjecture.) - -In 2012 and 2013, Peruvian mathematician Harald Helfgott released a pair of papers improving major and minor arc estimates sufficiently to unconditionally prove the weak Goldbach conjecture. Here, the major arcs $\mathfrak M$ are the union of intervals $\left (a/q-cr_0/qx,a/q+cr_0/qx\right )$ around the rationals $a/q,qW.M. Woods ... a mathematician ... writes ... "... a variable is one of the many things a parameter is not." ... The dependent variable, the speed of the car, depends on the independent variable, the position of the gas pedal. 
    - -
    [Kilpatrick quoting Woods] "Now ... the engineers ... change the lever arms of the linkage ... the speed of the car ... will still depend on the pedal position ... but in a ... different manner. You have changed a parameter"
- -* A parametric equaliser is an audio filter that allows the frequency of maximum cut or boost to be set by one control, and the size of the cut or boost by another. These settings, the frequency and level of the peak or trough, are two of the parameters of a frequency response curve, and in a two-control equaliser they completely describe the curve. More elaborate parametric equalisers may allow other parameters to be varied, such as skew. These parameters each describe some aspect of the response curve seen as a whole, over all frequencies. A graphic equaliser provides individual level controls for various frequency bands, each of which acts only on that particular frequency band. - -* If asked to imagine the graph of the relationship $y = ax^2$, one typically visualizes a range of values of x, but only one value of a. Of course a different value of a can be used, generating a different relation between x and y. Thus a is a parameter: it is less variable than the variable x or y, but it is not an explicit constant like the exponent 2. More precisely, changing the parameter a gives a different (though related) problem, whereas the variations of the variables x and y (and their interrelation) are part of the problem itself. - -* In calculating income based on wage and hours worked (income equals wage multiplied by hours worked), it is typically assumed that the number of hours worked is easily changed, but the wage is more static. This makes wage a parameter, hours worked an independent variable, and income a dependent variable. - -In the context of a mathematical model, such as a probability distribution, the distinction between variables and parameters was described by Bard as follows: - -We refer to the relations which supposedly describe a certain physical situation, as a model. Typically, a model consists of one or more equations. The quantities appearing in the equations we classify into variables and parameters. The distinction between these is not always clear cut, and it frequently depends on the context in which the variables appear. Usually a model is designed to explain the relationships that exist among quantities which can be measured independently in an experiment; these are the variables of the model. To formulate these relationships, however, one frequently introduces "constants" which stand for inherent properties of nature (or of the materials and equipment used in a given experiment). These are the parameters. - -In analytic geometry, curves are often given as the image of some function. The argument of the function is invariably called "the parameter". A circle of radius 1 centered at the origin can be specified in more than one form: - -*implicit form, the curve is all points (x,y) that satisfy the relation - -*:$x^2+y^2=1$ - -*parametric form, the curve is all points (cos(t), sin(t)), when t varies over some set of values, like [0, 2π), or (-∞,∞) - -*:$(x,y)=(\cos t,\sin t)$ - -*:where t is the parameter. - -Hence these equations, which might be called functions elsewhere, are in analytic geometry characterized as parametric equations, and the independent variables are considered as parameters. - -In mathematical analysis, integrals dependent on a parameter are often considered. These are of the form -$$ -F(t)=\int_{x_0(t)}^{x_1(t)}f(x;t)dx. -$$ - -In this formula, t is the argument of the function F, and on the right-hand side the parameter on which the integral depends. When evaluating the integral, t is held constant, and so it is considered to be a parameter. 
If we are interested in the value of F for different values of t, we then consider t to be a variable. The quantity x is a dummy variable or variable of integration (confusingly, also sometimes called a parameter of integration). - -In statistics and econometrics, the probability framework above still holds, but attention shifts to estimating the parameters of a distribution based on observed data, or testing hypotheses about them. In frequentist estimation parameters are considered "fixed but unknown", whereas in Bayesian estimation they are treated as random variables, and their uncertainty is described as a distribution. - -In estimation theory of statistics, "statistic" or estimator refers to samples, whereas "parameter" or estimand refers to populations, where the samples are taken from. A statistic is a numerical characteristic of a sample that can be used as an estimate of the corresponding parameter, the numerical characteristic of the population from which the sample was drawn. - -For example, the sample mean (estimator), denoted $\overline X$, can be used as an estimate of the mean parameter (estimand), denoted μ, of the population from which the sample was drawn. Similarly, the sample variance (estimator), denoted $S^2$, can be used to estimate the variance parameter (estimand), denoted $\sigma^2$, of the population from which the sample was drawn. (Note that the sample standard deviation (S) is not an unbiased estimate of the population standard deviation (σ): see Unbiased estimation of standard deviation.) - -It is possible to make statistical inferences without assuming a particular parametric family of probability distributions. In that case, one speaks of non-parametric statistics as opposed to the parametric statistics just described. For example, a test based on Spearman's rank correlation coefficient would be called non-parametric since the statistic is computed from the rank-order of the data disregarding their actual values (and thus regardless of the distribution they were sampled from), whereas those based on the Pearson product-moment correlation coefficient are parametric tests since it is computed directly from the data values and thus estimates the parameter known as the population correlation. - -[Figure: These traces all represent Poisson distributions, but with different values for the parameter λ.] - -In probability theory, one may describe the distribution of a random variable as belonging to a family of probability distributions, distinguished from each other by the values of a finite number of parameters. For example, one talks about "a Poisson distribution with mean value λ". The function defining the distribution (the probability mass function) is: -$$ -f(k;\lambda)=\frac{e^{-\lambda} \lambda^k}{k!}. -$$ - -This example nicely illustrates the distinction between constants, parameters, and variables. e is Euler's number, a fundamental mathematical constant. The parameter λ is the mean number of observations of some phenomenon in question, a property characteristic of the system. k is a variable, in this case the number of occurrences of the phenomenon actually observed from a particular sample. If we want to know the probability of observing $k_1$ occurrences, we plug it into the function to get $f(k_1 ; \lambda)$. Without altering the system, we can take multiple samples, which will have a range of values of k, but the system is always characterized by the same λ. 
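To make the constant/parameter/variable distinction concrete in code, here is a small C++ sketch of ours (not from the text): λ enters the function as a fixed parameter, while k is the variable ranging over observations.

#include <cmath>
#include <cstdio>

// Poisson pmf f(k; lambda) = e^{-lambda} * lambda^k / k!.
// lambda is a parameter of the system; k is the observed variable.
double poissonPmf(int k, double lambda) {
    // lgamma(k + 1) = log(k!), which avoids overflow for larger k.
    return std::exp(-lambda + k * std::log(lambda) - std::lgamma(k + 1.0));
}

int main() {
    const double lambda = 5.0;     // characteristic of the system
    for (int k = 0; k <= 10; k++)  // the variable observed in samples
        std::printf("f(%d; %.1f) = %.4f\n", k, lambda, poissonPmf(k, lambda));
    return 0;
}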
- -For instance, suppose we have a radioactive sample that emits, on average, five particles every ten minutes. We take measurements of how many particles the sample emits over ten-minute periods. The measurements exhibit different values of k, and if the sample behaves according to Poisson statistics, then each value of k will come up in a proportion given by the probability mass function above. From measurement to measurement, however, λ remains constant at 5. If we do not alter the system, then the parameter λ is unchanged from measurement to measurement; if, on the other hand, we modulate the system by replacing the sample with a more radioactive one, then the parameter λ would increase. - -Another common distribution is the normal distribution, which has as parameters the mean μ and the variance σ². - -In the above examples, the distributions of the random variables are completely specified by the type of distribution, i.e. Poisson or normal, and the parameter values, i.e. mean and variance. In such a case, we have a parameterized distribution. - -It is possible to use the sequence of moments (mean, mean square, ...) or cumulants (mean, variance, ...) as parameters for a probability distribution: see Statistical parameter. - -In computer programming, two notions of parameter are commonly used, and are referred to as parameters and arguments—or more formally as a formal parameter and an actual parameter. - -For example, in the definition of a function such as - -y = f(x) = x + 2, - -x is the formal parameter (the parameter) of the defined function. - -When the function is evaluated for a given value, as in - -f(3), that is, y = f(3) = 3 + 2 = 5, - -3 is the actual parameter (the argument) for evaluation by the defined function; it is a given value (actual value) that is substituted for the formal parameter of the defined function. (In casual usage the terms parameter and argument might inadvertently be interchanged, and thereby used incorrectly.) - -These concepts are discussed in a more precise way in functional programming and its foundational disciplines, lambda calculus and combinatory logic. Terminology varies between languages; some computer languages such as C define parameter and argument as given here, while Eiffel uses an alternative convention. - -In engineering (especially involving data acquisition) the term parameter sometimes loosely refers to an individual measured item. This usage isn't consistent, as sometimes the term channel refers to an individual measured item, with parameter referring to the setup information about that channel. - -"Speaking generally, properties are those physical quantities which directly describe the physical attributes of the system; parameters are those combinations of the properties which suffice to determine the response of the system. Properties can have all sorts of dimensions, depending upon the system being considered; parameters are dimensionless, or have the dimension of time or its reciprocal." - -The term can also be used in engineering contexts, however, as it is typically used in the physical sciences. - -In environmental science and particularly in chemistry and microbiology, a parameter is used to describe a discrete chemical or microbiological entity that can be assigned a value: commonly a concentration, but may also be a logical entity (present or absent), a statistical result such as a 95 percentile value or in some cases a subjective value. 
- -Within linguistics, the word "parameter" is almost exclusively used to denote a binary switch in a Universal Grammar within a Principles and Parameters framework. - -In logic, the parameters passed to (or operated on by) an open predicate are called parameters by some authors (e.g., Prawitz, "Natural Deduction"; Paulson, "Designing a theorem prover"). Parameters locally defined within the predicate are called variables. This extra distinction pays off when defining substitution (without this distinction special provision must be made to avoid variable capture). Others (maybe most) just call parameters passed to (or operated on by) an open predicate variables, and when defining substitution have to distinguish between free variables and bound variables. - -In music theory, a parameter denotes an element which may be manipulated (composed), separately from the other elements. The term is used particularly for pitch, loudness, duration, and timbre, though theorists or composers have sometimes considered other musical aspects as parameters. The term is particularly used in serial music, where each parameter may follow some specified series. Paul Lansky and George Perle criticized the extension of the word "parameter" to this sense, since it is not closely related to its mathematical sense, but it remains common. The term is also common in music production, as the functions of audio processing units (such as the attack, release, ratio, threshold, and other variables on a compressor) are defined by parameters specific to the type of unit (compressor, equalizer, delay, etc.). diff --git a/wiki/wikipedia/271.txt b/wiki/wikipedia/271.txt deleted file mode 100644 index 2cae53617fdc4a919e5a909148ff1598a8b1808b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/271.txt +++ /dev/null @@ -1,9 +0,0 @@ -An eodermdrome is a form of word play wherein a word (or phrase) is formed from a set of letters (or words) in such a way that it has a non-planar spelling net. Gary S. Bloom, Allan Gewirtz, John W. Kennedy, and Peter J. Wexler first described the eodermdrome in May 1980, and it subsequently became more widely known after publication in Word Ways: The Journal of Recreational Linguistics in August 1980. - -It is well illustrated by the word eodermdrome itself. Eodermdrome contains only the letters e, o, d, r and m. When plotted as a graph, the lettered vertices are sequentially connected by edges to spell a word. If the graph is non-planar, the word is an eodermdrome. The graph of eodermdrome is the non-planar graph K5. - -Eckler searched for all eodermdromes in Webster's Dictionary. One of his examples is supersaturates. The graph of the complete word contains a subgraph which is a subdivision of the non-planar graph K3,3, and as such is itself non-planar. - -By extension, the vertices can be identified with words instead of letters to form eodermdromic phrases or sentences. - -The concept has been studied within both mathematics and linguistics. diff --git a/wiki/wikipedia/2710.txt b/wiki/wikipedia/2710.txt deleted file mode 100644 index c071c4663f1a7108172e19ee42ba22f9abf2cca9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2710.txt +++ /dev/null @@ -1,204 +0,0 @@ -In graph theory, the Stoer–Wagner algorithm is a recursive algorithm to solve the minimum cut problem in undirected weighted graphs with non-negative weights. It was proposed by Mechthild Stoer and Frank Wagner in 1995. 
The essential idea of this algorithm is to shrink the graph by merging the most tightly connected vertices, until the graph only contains two combined vertex sets. - -This algorithm starts by finding an $s$ and a $t$ in $V$, and a minimum $s\text{-}t$ cut $(S,T)$ of $G$. For any pair $\left\{s,t\right\}$, there are two possible situations: either $(S,T)$ is a global min-cut of $G$, or $s$ and $t$ belong to the same side of the global min-cut of $G$. Therefore, the global min-cut can be found by checking the graph $G/\left\{s,t\right\}$, which is the graph after the merging of vertices $s$ and $t$. During the merging, if $s$ and $t$ are connected by an edge then this edge disappears. If s and t both have edges to some vertex v, then the weight of the edge from the new vertex st to v is $w(s,v)+w(t,v)$. - -MinimumCutPhase$(G,w,a)$ - -$A\gets\left\{a\right\}$ - -while $A\ne V$ - -add to $A$ the most tightly connected vertex - -end - -store the cut in which the last remaining vertex is by itself (the "cut-of-the-phase") - -shrink $G$ by merging the two vertices $(s,t)$ added last (the value of the "cut-of-the-phase" is the value of a minimum $s\text{-}t$ cut.) - -MinimumCut$(G,w,a)$ - -while $|V|>1$ - -MinimumCutPhase$(G,w,a)$ - -if the cut-of-the-phase is lighter than the current minimum cut - -then store the cut-of-the-phase as the current minimum cut - -The algorithm works in phases. In the MinimumCutPhase, the subset $A$ of the graph's vertices grows starting with an arbitrary single vertex until $A$ is equal to $V$. In each step, the vertex outside of $A$ that is most tightly connected with $A$ is added to the set $A$; formally, $A$ is augmented by the vertex $z \notin A$ for which $w(A,z)$ is maximal. After one phase of the MinimumCutPhase, the two vertices added last are merged into a new vertex, and edges from the two vertices to a remaining vertex are replaced by an edge weighted by the sum of the weights of the previous two edges. Edges joining the merged nodes are removed. If a minimum cut of $G$ separates $s$ and $t$, then the cut-of-the-phase $C$ is a minimum cut of $G$. If not, then the minimum cut of $G$ must have $s$ and $t$ on the same side, so the algorithm may safely merge them into one node. In addition, the MinimumCut records and updates the global minimum cut after each MinimumCutPhase. After $n-1$ phases, the minimum cut can be determined. - -This section refers to Figs. 1–6 in the original paper. - -The graph in step 1 shows the original graph $G$; node 2 is randomly selected as the starting node for this algorithm. In the MinimumCutPhase, set $A$ initially contains only node 2; the heaviest edge is edge (2,3), so node 3 is added into set $A$. Next, set $A$ contains node 2 and node 3; the heaviest edge is (3,4), thus node 4 is added to set $A$. By following this procedure, the last two nodes are node 5 and node 1, which are $s$ and $t$ in this phase. By merging them, the new graph is as shown in step 2. In this phase, the weight of the cut-of-the-phase is 5, the sum of the weights of edges (1,2) and (1,5). At this point, the first loop of MinimumCut is complete. - -In step 2, starting from node 2, the heaviest edge is (2,15), thus node 15 is put in set $A$. The next heaviest edge is either (2,3) or (15,6); we choose (15,6), thus node 6 is added to the set. Then we compare edges (2,3) and (6,7) and choose node 3 to put in set $A$. The last two nodes are node 7 and node 8. Therefore, nodes 7 and 8 are merged. The cut-of-the-phase is 5, so the stored minimum remains 5. - -The following steps repeat the same operations on the merged graph, until there is only one edge in the graph, as shown in step 7. 
The global minimum cut consists of edges (2,3) and (6,7); it is detected in step 5. - -To prove the correctness of this algorithm, we must show that the cut returned by MinimumCutPhase is in fact a minimum $s\text{-}t$ cut of the graph, where $s$ and $t$ are the two vertices added last in the phase. This is captured by the following lemma:
    Lemma 1: MinimumCutPhase returns a minimum $s\text{-}t$-cut of $G$.
Let $C=(X,\overline{X})$ be an arbitrary $s\text{-}t$ cut, and let $CP$ be the cut given by the phase. We must show that $w(C)\ge w(CP)$. Observe that a single run of MinimumCutPhase gives an ordering of all the vertices in the graph (where $a$ is the first, and $s$ and $t$ are the two vertices added last in the phase). We say that a vertex $v$ is active if $v$ and the vertex added just before $v$ lie on opposite sides of the cut $C$. We prove the lemma by induction on the active vertices, in the order in which they are added. We define $A_v$ as the set of vertices added to $A$ before $v$, and $C_v$ as the set of edges in $C$ with both of their endpoints in $A_v \cup \{v\}$, i.e. $C_v \subseteq C$ is the cut induced by $C$ on $A_v \cup \{v\}$. We prove, for each active vertex $v$,
    $w(A_v,v)\le w(C_v)$
For the base case, let $v_0$ be the first active vertex. By the definition of these two quantities, $w(A_{v_0},v_0)$ and $w(C_{v_0})$ are equal: $A_{v_0}$ is simply all vertices added to $A$ before $v_0$, and the edges between these vertices and $v_0$ are exactly the edges of $C$ with both endpoints in $A_{v_0} \cup \{v_0\}$. For the inductive step, consider active vertices $v$ and $u$, with $v$ added to $A$ before $u$:
    $w(A_u,u)=w(A_v,u)+w(A_u-A_v,u)$
$w(A_u,u)\le w(C_v)+w(A_u-A_v,u)$, since $w(A_v,u)\le w(A_v,v)$ ($v$ was the most tightly connected vertex at the moment it was added to $A$) and $w(A_v,v)\le w(C_v)$ by the induction hypothesis
$w(A_{u},u)\le w(C_{u})$, since every edge counted in $w(A_u-A_v,u)$ crosses the cut $C$ (all vertices in $A_u-A_v$ lie on the opposite side of $C$ from $u$, because $v$ is the last active vertex before $u$) and has both endpoints in $A_u \cup \{u\}$, so it contributes to $w(C_{u})$ but not to $w(C_{v})$; all other edge weights are non-negative
Since $t$ is always an active vertex (by definition the cut of the phase separates $s$ from $t$), applying this claim to $t$ yields:
    $w(A_t,t)\le w(C_t)=w(C)$
Therefore, the cut of the phase is at most as heavy as $C$. - -The running time of the algorithm MinimumCut is the sum of the running times of the $|V|-1$ runs of MinimumCutPhase, which is called on graphs with decreasing numbers of vertices and edges. - -A single run of MinimumCutPhase needs at most $O(|E|+|V|\log|V|)$ time. - -Therefore, the overall running time is the product of the number of phases and the cost per phase, which is $O(|V||E|+|V|^2\log|V|)$. - -To achieve this per-phase bound, the key is to make it easy to select the next vertex to be added to the set $A$, the most tightly connected vertex. During execution of a phase, all vertices that are not in $A$ reside in a priority queue based on a key field. The key of a vertex $v$ is the sum of the weights of the edges connecting it to the current $A$, that is, $w(A,v)$. Whenever a vertex $v$ is added to $A$ we have to perform an update of the queue: $v$ has to be deleted from the queue, and the key of every vertex $w$ not in $A$ connected to $v$ has to be increased by the weight of the edge $vw$, if it exists. As this is done exactly once for every edge, overall we have to perform $|V|$ ExtractMax and $|E|$ IncreaseKey operations. By using the Fibonacci heap we can perform an ExtractMax operation in $O(\log|V|)$ amortized time and an IncreaseKey operation in $O(1)$ amortized time. Thus, the time we need for this key step, which dominates the rest of the phase, is $O(|E|+|V|\log|V|)$. - -Below is a concise C++ implementation of the Stoer–Wagner algorithm. - - - -// Adjacency matrix implementation of Stoer–Wagner min cut algorithm. - -// - -// Running time: - -// O(|V|^3) - -#include <algorithm> - -#include <climits> - -#include <utility> - -#include <vector> - -using namespace std; - -pair<int, vector<int>> globalMinCut(vector<vector<int>> mat) { - -pair<int, vector<int>> best = {INT_MAX, {}}; - -int n = mat.size(); - -vector<vector<int>> co(n); - -for (int i = 0; i < n; i++) co[i] = {i}; - -for (int ph = 1; ph < n; ph++) { - -vector<int> w = mat[0]; - -size_t s = 0, t = 0; - -for (int it = 0; it < n - ph; it++) { // O(V^2) -> O(E log V) with prio. 
A second, equivalent implementation, using 1-indexed vertices and fixed-size arrays:

#include <cstring> // for memset

const int maxn = 550;
const int inf = 1000000000;
int n, r; // n = number of vertices
int edge[maxn][maxn], dist[maxn];
bool vis[maxn], bin[maxn];

void init()
{
    memset(edge, 0, sizeof(edge));
    memset(bin, false, sizeof(bin));
}

int contract(int &s, int &t) // one phase: find the last two vertices s, t
{
    memset(dist, 0, sizeof(dist));
    memset(vis, false, sizeof(vis));
    int i, j, k, mincut = 0, maxc;
    for (i = 1; i <= n; i++)
    {
        k = -1; maxc = -1;
        for (j = 1; j <= n; j++) if (!bin[j] && !vis[j] && dist[j] > maxc)
        {
            k = j; maxc = dist[j];
        }
        if (k == -1) return mincut;
        s = t; t = k;
        mincut = maxc;
        vis[k] = true;
        for (j = 1; j <= n; j++) if (!bin[j] && !vis[j])
            dist[j] += edge[k][j];
    }
    return mincut;
}

int Stoer_Wagner()
{
    int mincut, i, j, s = 0, t = 0, ans;
    for (mincut = inf, i = 1; i < n; i++)
    {
        ans = contract(s, t);
        bin[t] = true; // merge t into s
        if (mincut > ans) mincut = ans;
        if (mincut == 0) return 0;
        for (j = 1; j <= n; j++) if (!bin[j])
            edge[s][j] = (edge[j][s] += edge[j][t]);
    }
    return mincut;
}

diff --git a/wiki/wikipedia/2711.txt b/wiki/wikipedia/2711.txt deleted file mode 100644 index 63bd5b0418341b80eebf38f4913fc6e1bbd0b785..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2711.txt +++ /dev/null @@ -1,42 +0,0 @@

In the mathematical discipline of linear algebra, the Schur decomposition or Schur triangulation, named after Issai Schur, is a matrix decomposition. It allows one to write an arbitrary complex square matrix as unitarily equivalent to an upper triangular matrix whose diagonal elements are the eigenvalues of the original matrix.

The Schur decomposition reads as follows: if A is an n × n square matrix with complex entries, then A can be expressed as
$$
A = Q U Q^{-1}
$$
where Q is a unitary matrix (so that its inverse Q−1 is also the conjugate transpose Q* of Q), and U is an upper triangular matrix, which is called a Schur form of A. Since U is similar to A, it has the same spectrum, and since it is triangular, its eigenvalues are the diagonal entries of U.

The Schur decomposition implies that there exists a nested sequence of A-invariant subspaces {0} = V0 ⊂ V1 ⊂ ⋯ ⊂ Vn = Cn, and that there exists an ordered orthonormal basis (for the standard Hermitian form of Cn) such that the first i basis vectors span Vi for each i occurring in the nested sequence. Phrased somewhat differently, the first part says that a linear operator J on a complex finite-dimensional vector space stabilizes a complete flag (V1,…,Vn).

A constructive proof for the Schur decomposition is as follows: every operator A on a complex finite-dimensional vector space has an eigenvalue λ, corresponding to some eigenspace Vλ. Let Vλ⊥ be its orthogonal complement.
It is clear that, with respect to this orthogonal decomposition, A has matrix representation (one can pick here any orthonormal bases Z1 and Z2 spanning Vλ and Vλ⊥ respectively)
$$
\begin{bmatrix} Z_1 & Z_2 \end{bmatrix}^{*} A \begin{bmatrix} Z_1 & Z_2 \end{bmatrix} = \begin{bmatrix} \lambda I_{\lambda} & A_{12} \\ 0 & A_{22} \end{bmatrix}:
\begin{matrix} V_{\lambda} \\ \oplus \\ V_{\lambda}^{\perp} \end{matrix}
\rightarrow
\begin{matrix} V_{\lambda} \\ \oplus \\ V_{\lambda}^{\perp} \end{matrix}
$$
where Iλ is the identity operator on Vλ. The above matrix would be upper-triangular except for the A22 block. But exactly the same procedure can be applied to the sub-matrix A22, viewed as an operator on Vλ⊥, and its submatrices. Continue this way until the resulting matrix is upper triangular. Since each conjugation increases the dimension of the upper-triangular block by at least one, this process takes at most n steps. Thus the space Cn will be exhausted and the procedure has yielded the desired result.

The above argument can be slightly restated as follows: let λ be an eigenvalue of A, corresponding to some eigenspace Vλ. A induces an operator T on the quotient space Cn/Vλ. This operator is precisely the A22 submatrix from above. As before, T would have an eigenspace, say Wμ ⊂ Cn modulo Vλ. Notice the preimage of Wμ under the quotient map is an invariant subspace of A that contains Vλ. Continue this way until the resulting quotient space has dimension 0. Then the successive preimages of the eigenspaces found at each step form a flag that A stabilizes.

diff --git a/wiki/wikipedia/2712.txt b/wiki/wikipedia/2712.txt deleted file mode 100644 index 2ab7828ccbc496b2121ee706b031f81b30e6d301..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2712.txt +++ /dev/null @@ -1,505 +0,0 @@

Automatic vectorization, in parallel computing, is a special case of automatic parallelization, where a computer program is converted from a scalar implementation, which processes a single pair of operands at a time, to a vector implementation, which processes one operation on multiple pairs of operands at once. For example, modern conventional computers, including specialized supercomputers, typically have vector operations that simultaneously perform operations such as the following four additions (via SIMD or SPMD hardware):
$$
\begin{align}
c_1 & = a_1 + b_1 \\
c_2 & = a_2 + b_2 \\
c_3 & = a_3 + b_3 \\
c_4 & = a_4 + b_4
\end{align}
$$
However, in most programming languages one typically writes loops that sequentially perform additions of many numbers. Here is an example of such a loop, written in C:

for (i = 0; i < n; i++)
    c[i] = a[i] + b[i];

A vectorizing compiler transforms such loops into sequences of vector operations. These vector operations perform additions on blocks of elements from the arrays a, b and c. Automatic vectorization is a major research topic in computer science.

Early computers usually had one logic unit, which executed one instruction on one pair of operands at a time. Computer languages and programs therefore were designed to execute in sequence. Modern computers, though, can do many things at once. So, many optimizing compilers perform automatic vectorization, where parts of sequential programs are transformed into parallel operations.

Loop vectorization transforms procedural loops by assigning a processing unit to each pair of operands. Programs spend most of their time within such loops.
Therefore, vectorization can significantly accelerate them, especially over large data sets. Loop vectorization targets instruction sets such as Intel's MMX, SSE, and AVX, Power ISA's AltiVec, and ARM's NEON, SVE, and SVE2.

Many constraints prevent or hinder vectorization. Sometimes vectorization can slow down execution, for example because of pipeline synchronization or data-movement timing. Loop dependence analysis identifies loops that can be vectorized, relying on the data dependence of the instructions inside loops.

Automatic vectorization, like any loop optimization or other compile-time optimization, must exactly preserve program behavior.

All dependencies must be respected during execution to prevent incorrect results.

In general, loop invariant dependencies and lexically forward dependencies can be easily vectorized, and lexically backward dependencies can be transformed into lexically forward dependencies. However, these transformations must be done safely, in order to ensure that the dependence between all statements remains true to the original.

Cyclic dependencies must be processed independently of the vectorized instructions.

Integer precision (bit-size) must be kept during vector instruction execution. The correct vector instruction must be chosen based on the size and behavior of the internal integers. Also, with mixed integer types, extra care must be taken to promote/demote them correctly without losing precision. Special care must be taken with sign extension (because multiple integers are packed inside the same register) and during shift operations, or operations with carry bits that would otherwise be taken into account.

Floating-point precision must be kept as well, unless IEEE-754 compliance is turned off, in which case operations will be faster but the results may vary slightly. Big variations, even ignoring IEEE-754, usually signify programmer error.

To vectorize a program, the compiler's optimizer must first understand the dependencies between statements and re-align them, if necessary. Once the dependencies are mapped, the optimizer must properly arrange the implementing instructions, changing appropriate candidates to vector instructions, which operate on multiple data items.

The first step is to build the dependency graph, identifying which statements depend on which other statements. This involves examining each statement and identifying every data item that the statement accesses, mapping array access modifiers to functions and checking every access' dependency to all others in all statements. Alias analysis can be used to certify that the different variables access (or intersect) the same region in memory.

The dependency graph contains all local dependencies with distance not greater than the vector size. So, if the vector register is 128 bits, and the array type is 32 bits, the vector size is 128/32 = 4. All other non-cyclic dependencies should not invalidate vectorization, since there won't be any concurrent access in the same vector instruction.

Suppose the vector size is the same as 4 ints:

for (i = 0; i < 128; i++) {
    a[i] = a[i-16]; // 16 > 4, safe to ignore
    a[i] = a[i-1];  // 1 < 4, stays on dependency graph
}

Using the graph, the optimizer can then cluster the strongly connected components (SCC) and separate vectorizable statements from the rest.
For example, consider a program fragment containing three statement groups inside a loop: (SCC1+SCC2), SCC3 and SCC4, in that order, in which only the second group (SCC3) can be vectorized. The final program will then contain three loops, one for each group, with only the middle one vectorized. The optimizer cannot join the first with the last without violating statement execution order, which would invalidate the necessary guarantees.

Some non-obvious dependencies can be further optimized based on specific idioms.

For instance, the following self-data-dependencies can be vectorized because the values on the right-hand side (RHS) are fetched and then stored into the left-hand side, so there is no way the data will change within the assignment:

a[i] = a[i] + a[i+1];

Self-dependence by scalars can be vectorized by variable elimination.

The general framework for loop vectorization is split into four stages:

* Prelude: Where the loop-independent variables are prepared to be used inside the loop. This normally involves moving them to vector registers with specific patterns that will be used in vector instructions. This is also the place to insert the run-time dependence check. If the check decides vectorization is not possible, branch to Cleanup.

* Loop(s): All vectorized (or not) loops, separated by SCC clusters in order of appearance in the original code.

* Postlude: Return all loop-independent variables, inductions and reductions.

* Cleanup: Implement plain (non-vectorized) loops for iterations at the end of a loop that are not a multiple of the vector size or for when run-time checks prohibit vector processing.

Some vectorizations cannot be fully checked at compile time. For example, library functions can defeat optimization if the data they process is supplied by the caller. Even in these cases, run-time optimization can still vectorize loops on-the-fly.

This run-time check is made in the prelude stage and directs the flow to vectorized instructions if possible, otherwise reverts to standard processing, depending on the variables that are being passed on the registers or scalar variables.

The following code can easily be vectorized at compile time, as it doesn't have any dependence on external parameters. Also, the language guarantees that neither array will occupy the same region in memory as any other variable, as they are local variables and live only in the execution stack:

int a[128];
int b[128];
// initialize b

for (i = 0; i < 128; i++)
    a[i] = b[i] + 5;

On the other hand, the code below has no information on memory positions, because the references are pointers and the memory they point to may overlap:

void compute(int *a, int *b)
{
    int i;
    for (i = 0; i < 128; i++, a++, b++)
        *a = *b + 5;
}

A quick run-time check on the addresses of both a and b, plus the loop iteration space (128), is enough to tell if the arrays overlap or not, thus revealing any dependencies.

There exist some tools to dynamically analyze existing applications to assess the inherent latent potential for SIMD parallelism, exploitable through further compiler advances and/or via manual code changes.

An example would be a program to multiply two vectors of numeric data.
A scalar approach would be something like:

for (i = 0; i < 1024; i++)
    c[i] = a[i] * b[i];

This could be vectorized to look something like:

for (i = 0; i < 1024; i += 4)
    c[i:i+3] = a[i:i+3] * b[i:i+3];

Here, c[i:i+3] represents the four array elements from c[i] to c[i+3] and the vector processor can perform four operations for a single vector instruction. Since the four vector operations complete in roughly the same time as one scalar instruction, the vector approach can run up to four times faster than the original code.

There are two distinct compiler approaches: one based on the conventional vectorization technique and the other based on loop unrolling.

This technique, used for conventional vector machines, tries to find and exploit SIMD parallelism at the loop level. It consists of two major steps as follows.

# Find an innermost loop that can be vectorized

# Transform the loop and generate vector codes

In the first step, the compiler looks for obstacles that can prevent vectorization. A major obstacle for vectorization is true data dependency shorter than the vector length. Other obstacles include function calls and short iteration counts.

Once the loop is determined to be vectorizable, the loop is stripmined by the vector length and each scalar instruction within the loop body is replaced with the corresponding vector instruction. Below, the component transformations for this step are shown using the above example.

* After stripmining

for (i = 0; i < 1024; i += 4)
    for (j = 0; j < 4; j++)
        c[i+j] = a[i+j] * b[i+j];

* After loop distribution using temporary arrays

for (i = 0; i < 1024; i += 4)
{
    for (j = 0; j < 4; j++) tA[j] = A[i+j];
    for (j = 0; j < 4; j++) tB[j] = B[i+j];
    for (j = 0; j < 4; j++) tC[j] = tA[j] * tB[j];
    for (j = 0; j < 4; j++) C[i+j] = tC[j];
}

* After replacing with vector codes

for (i = 0; i < 1024; i += 4)
{
    vA = vec_ld(&A[i]);
    vB = vec_ld(&B[i]);
    vC = vec_mul(vA, vB);
    vec_st(vC, &C[i]);
}

This relatively new technique specifically targets modern SIMD architectures with short vector lengths. Although loops can be unrolled to increase the amount of SIMD parallelism in basic blocks, this technique exploits SIMD parallelism within basic blocks rather than loops. The two major steps are as follows.

# The innermost loop is unrolled by a factor of the vector length to form a large loop body.

# Isomorphic scalar instructions (that perform the same operation) are packed into a vector instruction if dependencies do not prevent doing so.

To show step-by-step transformations for this approach, the same example is used again.

* After loop unrolling (by the vector length, assumed to be 4 in this case)

for (i = 0; i < 1024; i += 4)
{
    sA0 = ld(&A[i+0]);
    sB0 = ld(&B[i+0]);
    sC0 = sA0 * sB0;
    st(sC0, &C[i+0]);
    ...
    sA3 = ld(&A[i+3]);
    sB3 = ld(&B[i+3]);
    sC3 = sA3 * sB3;
    st(sC3, &C[i+3]);
}

* After packing

for (i = 0; i < 1024; i += 4)
{
    (sA0, sA1, sA2, sA3) = ld(&A[i+0:i+3]);
    (sB0, sB1, sB2, sB3) = ld(&B[i+0:i+3]);
    (sC0, sC1, sC2, sC3) = (sA0, sA1, sA2, sA3) * (sB0, sB1, sB2, sB3);
    st((sC0, sC1, sC2, sC3), &C[i+0:i+3]);
}

* After code generation

for (i = 0; i < 1024; i += 4)
{
    vA = vec_ld(&A[i]);
    vB = vec_ld(&B[i]);
    vC = vec_mul(vA, vB);
    vec_st(vC, &C[i]);
}

Here, sA1, sB1, ...
represent scalar variables and vA, vB, and vC represent vector variables.

Most automatically vectorizing commercial compilers use the conventional loop-level approach except the IBM XL Compiler, which uses both.

The presence of if-statements in the loop body requires the execution of instructions in all control paths to merge the multiple values of a variable. One general approach is to go through a sequence of code transformations: predication → vectorization (using one of the above methods) → remove vector predicates → remove scalar predicates. The following code is used as an example to show these transformations:

for (i = 0; i < 1024; i++)
    if (A[i] > 0)
        C[i] = B[i];
    else
        D[i] = D[i-1];

* After predication

for (i = 0; i < 1024; i++)
{
    P = A[i] > 0;
    NP = !P;
    C[i] = B[i];     (P)
    D[i] = D[i-1];   (NP)
}

where (P) denotes a predicate guarding the statement.

* After vectorization

for (i = 0; i < 1024; i += 4)
{
    vP = A[i:i+3] > (0, 0, 0, 0);
    vNP = vec_not(vP);
    C[i:i+3] = B[i:i+3];     (vP)
    (NP1, NP2, NP3, NP4) = vNP;
    D[i+3] = D[i+2];         (NP4)
    D[i+2] = D[i+1];         (NP3)
    D[i+1] = D[i];           (NP2)
    D[i] = D[i-1];           (NP1)
}

* After removing vector predicates

for (i = 0; i < 1024; i += 4)
{
    vP = A[i:i+3] > (0, 0, 0, 0);
    vNP = vec_not(vP);
    C[i:i+3] = vec_sel(C[i:i+3], B[i:i+3], vP);
    (NP1, NP2, NP3, NP4) = vNP;
    D[i+3] = D[i+2];         (NP4)
    D[i+2] = D[i+1];         (NP3)
    D[i+1] = D[i];           (NP2)
    D[i] = D[i-1];           (NP1)
}

* After removing scalar predicates

for (i = 0; i < 1024; i += 4)
{
    vP = A[i:i+3] > (0, 0, 0, 0);
    vNP = vec_not(vP);
    C[i:i+3] = vec_sel(C[i:i+3], B[i:i+3], vP);
    (NP1, NP2, NP3, NP4) = vNP;
    if (NP4) D[i+3] = D[i+2];
    if (NP3) D[i+2] = D[i+1];
    if (NP2) D[i+1] = D[i];
    if (NP1) D[i] = D[i-1];
}

Having to execute the instructions in all control paths in vector code has been one of the major factors that slow down the vector code with respect to the scalar baseline. The more complex the control flow becomes and the more instructions are bypassed in the scalar code, the larger the vectorization overhead becomes. To reduce this vectorization overhead, vector branches can be inserted to bypass vector instructions similar to the way scalar branches bypass scalar instructions. Below, AltiVec predicates are used to show how this can be achieved.

* Scalar baseline (original code)

for (i = 0; i < 1024; i++)
{
    if (A[i] > 0)
    {
        C[i] = B[i];
        if (B[i] < 0)
            D[i] = E[i];
    }
}

* After vectorization in the presence of control flow

for (i = 0; i < 1024; i += 4)
{
    vPA = A[i:i+3] > (0, 0, 0, 0);
    C[i:i+3] = vec_sel(C[i:i+3], B[i:i+3], vPA);
    vT = B[i:i+3] < (0, 0, 0, 0);
    vPB = vec_sel((0, 0, 0, 0), vT, vPA);
    D[i:i+3] = vec_sel(D[i:i+3], E[i:i+3], vPB);
}

* After inserting vector branches

for (i = 0; i < 1024; i += 4)
{
    if (vec_any_gt(A[i:i+3], (0, 0, 0, 0)))
    {
        vPA = A[i:i+3] > (0, 0, 0, 0);
        C[i:i+3] = vec_sel(C[i:i+3], B[i:i+3], vPA);
        vT = B[i:i+3] < (0, 0, 0, 0);
        vPB = vec_sel((0, 0, 0, 0), vT, vPA);
        if (vec_any_ne(vPB, (0, 0, 0, 0)))
            D[i:i+3] = vec_sel(D[i:i+3], E[i:i+3], vPB);
    }
}

There are two things to note in the final code with vector branches: First, the predicate-defining instruction for vPA is also included within the body of the outer vector branch by using vec_any_gt.
Second, the profitability of the inner vector branch for vPB depends on the conditional probability of vPB having false values in all fields given vPA has false values in all fields. - -Consider an example where the outer branch in the scalar baseline is always taken, bypassing most instructions in the loop body. The intermediate case above, without vector branches, executes all vector instructions. The final code, with vector branches, executes both the comparison and the branch in vector mode, potentially gaining performance over the scalar baseline. - -In most C and C++ compilers, it is possible to use intrinsic functions to manually vectorise, at the expense of programmer effort and maintainability. diff --git a/wiki/wikipedia/2713.txt b/wiki/wikipedia/2713.txt deleted file mode 100644 index 8d50b55181bb111e2ff40ded4166704cb98106ab..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2713.txt +++ /dev/null @@ -1,22 +0,0 @@ -In point-set topology, Kuratowski's closure-complement problem asks for the largest number of distinct sets obtainable by repeatedly applying the set operations of closure and complement to a given starting subset of a topological space. The answer is 14. This result was first published by Kazimierz Kuratowski in 1922. The problem gained wide exposure three decades later as an exercise in John L. Kelley's classic textbook General Topology. - -Letting S denote an arbitrary subset of a topological space, write kS for the closure of S, and cS for the complement of S. The following three identities imply that no more than 14 distinct sets are obtainable: - -(1) kkS = kS. (The closure operation is idempotent.) - -(2) ccS = S. (The complement operation is an involution.) - -(3) kckckckcS = kckcS. (Or equivalently kckckckS = kckckckccS = kckS. Using identity (2).) - -The first two are trivial. The third follows from the identity kikiS = kiS where iS is the interior of S which is equal to the complement of the closure of the complement of S, iS = ckcS. (The operation ki = kckc is idempotent.) - -A subset realizing the maximum of 14 is called a 14-set. The space of real numbers under the usual topology contains 14-sets. Here is one example: -$$ -(0,1)\cup(1,2)\cup\{3\}\cup\bigl([4,5]\cap\Q\bigr), -$$ - -where $(1,2)$ denotes an open interval and $[4,5]$ denotes a closed interval. - -Despite its origin within the context of a topological space, Kuratowski's closure-complement problem is actually more algebraic than topological. A surprising abundance of closely related problems and results have appeared since 1960, many of which have little or nothing to do with point-set topology. - -The closure-complement operations yield a monoid that can be used to classify topological spaces. diff --git a/wiki/wikipedia/2714.txt b/wiki/wikipedia/2714.txt deleted file mode 100644 index 7de1bc19b677625fb2b7fedb3a68f4e2a5eb7b4e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2714.txt +++ /dev/null @@ -1,129 +0,0 @@ -In mathematics, the pentagonal number theorem, originally due to Euler, relates the product and series representations of the Euler function. It states that -$$ -\prod_{n=1}^{\infty}\left(1-x^{n}\right)=\sum_{k=-\infty}^{\infty}\left(-1\right)^{k}x^{k\left(3k-1\right)/2}=1+\sum_{k=1}^\infty(-1)^k\left(x^{k(3k+1)/2}+x^{k(3k-1)/2}\right). -$$ - -In other words, -$$ -(1-x)(1-x^2)(1-x^3) \cdots = 1 - x - x^2 + x^5 + x^7 - x^{12} - x^{15} + x^{22} + x^{26} - \cdots. -$$ - -The exponents 1, 2, 5, 7, 12, ... 
on the right hand side are given by the formula gk = k(3k − 1)/2 for k = 1, −1, 2, −2, 3, ... and are called (generalized) pentagonal numbers . (The constant term 1 corresponds to $k=0$.) - -This holds as an identity of convergent power series for $|x|<1$, and also as an identity of formal power series. - -A striking feature of this formula is the amount of cancellation in the expansion of the product. - -The identity implies a recurrence for calculating $p(n)$, the number of partitions of n: -$$ -p(n)=p(n-1)+p(n-2)-p(n-5)-p(n-7)+\cdots -$$ - -or more formally, -$$ -p(n)=\sum_{k\neq 0} (-1)^{k-1}p(n-g_k) -$$ - -where the summation is over all nonzero integers k (positive and negative) and $g_k $ is the kth generalized pentagonal number. Since $p(n)=0$ for all $n<0$, the series will eventually become zeroes, enabling discrete calculation. - -The theorem can be interpreted combinatorially in terms of partitions. In particular, the left hand side is a generating function for the number of partitions of n into an even number of distinct parts minus the number of partitions of n into an odd number of distinct parts. Each partition of n into an even number of distinct parts contributes +1 to the coefficient of xn; each partition into an odd number of distinct parts contributes -1. (The article on unrestricted partition functions discusses this type of generating function.) - -For example, the coefficient of x5 is +1 because there are two ways to split 5 into an even number of distinct parts (4+1 and 3+2), but only one way to do so for an odd number of distinct parts (the one-part partition 5). However, the coefficient of x12 is −1 because there are seven ways to partition 12 into an even number of distinct parts, but there are eight ways to partition 12 into an odd number of distinct parts. - -This interpretation leads to a proof of the identity via involution (i.e. a bijection which is its own inverse). Consider the Ferrers diagram of any partition of n into distinct parts. For example, the diagram below shows n = 20 and the partition 20 = 7 + 6 + 4 + 3. - -Let m be the number of elements in the smallest row of the diagram (m = 3 in the above example). Let s be the number of elements in the rightmost 45 degree line of the diagram (s = 2 dots in red above, since 7-1 = 6, but 6-1 > 4). If m > s, take the rightmost 45-degree line and move it to form a new row, as in the diagram below. - -If m ≤ s (as in our newly formed diagram where m = 2, s = 5) we may reverse the process by moving the bottom row to form a new 45 degree line (adding 1 element to each of the first m rows), taking us back to the first diagram. - -A bit of thought shows that this process always changes the parity of the number of rows, and applying the process twice brings us back to the original diagram. This enables us to pair off Ferrers diagrams contributing 1 and -1 to the xn term of the series, resulting in a net coefficient of 0. This holds for every term except when the process cannot be performed on every Ferrers diagram with n dots. There are two such cases: - -1) m = s and the rightmost diagonal and bottom row meet. For example, - -Attempting to perform the operation would lead us to: - -which fails to change the parity of the number of rows, and is not reversible in the sense that performing the operation again does not take us back to the original diagram. 
If there are m elements in the last row of the original diagram, then
$$
n=m+(m+1)+(m+2)+\cdots+(2m-1)=\frac {m(3m-1)}{2}=\frac {k(3k-1)}{2}
$$
where the new index k is taken to equal m. Note that the sign associated with this partition is (−1)s, which by construction equals (−1)m and (−1)k.

2) m = s+1 and the rightmost diagonal and bottom row meet. For example,

Our operation requires us to move the right diagonal to the bottom row, but that would lead to two rows of three elements, forbidden since we're counting partitions into distinct parts. This is the previous case but with one fewer row, so
$$
n=m+(m+1)+(m+2)+\cdots+(2m-2)=\frac{(m-1)(3m-2)}{2}=\frac{k(3k-1)}{2},
$$
where we take k = 1-m (a negative integer). Here the associated sign is (-1)s with s = m-1 = −k, therefore the sign is again (−1)k.

In summary, it has been shown that partitions into an even number of distinct parts and an odd number of distinct parts exactly cancel each other, except if n is a generalized pentagonal number $g_k = k(3k-1)/2$, in which case there is exactly one Ferrers diagram left over. But this is precisely what the right side of the identity says should happen, so we are finished.

We can rephrase the above proof, using partitions, which we denote as:
$$
n = \lambda_1 + \lambda_2 + \dotsb + \lambda_\ell,
$$
where $\lambda_1\geq \lambda_2\geq\ldots\geq\lambda_\ell > 0$.

The number of partitions of n is the partition function p(n) having generating function:
$$
\sum_{n=0}^\infty p(n) x^n = \prod_{k=1}^\infty (1 - x^k)^{-1}
$$
Note that this is the reciprocal of the product on the left hand side of our identity:
$$
\left( \sum_{n=0}^\infty p(n) x^n \right) \cdot \left(\prod_{n=1}^\infty (1-x^n)\right) = 1
$$
Let us denote the expansion of our product by
$$
\prod_{n=1}^\infty (1-x^n) = \sum_{n=0}^\infty a_nx^n,
$$
so that
$$
\left( \sum_{n=0}^\infty p(n) x^n \right) \cdot \left(\sum_{n=0}^\infty a_nx^n\right) = 1.
$$
Multiplying out the left hand side and equating coefficients on the two sides, we obtain a0 p(0) = 1 and $\sum_{i=0}^n p(n{-}i) a_i = 0$ for all $n\geq 1$. This gives a recurrence relation defining p(n) in terms of an, and vice versa a recurrence for an in terms of p(n). Thus, our desired result:
$$
a_i := \begin{cases} 1 & \mbox{ if } i = \frac{1}{2}(3k^2 \pm k) \mbox{ and } k \mbox{ is even} \\ -1 & \mbox{ if } i = \frac{1}{2}(3k^2 \pm k) \mbox{ and } k \mbox{ is odd} \\ 0 & \mbox{ otherwise} \end{cases}
$$
for $i\geq 1$ is equivalent to the identity $\sum_i (-1)^i p(n{-}g_i) = 0,$ where $g_i := \textstyle\frac{1}{2}(3i^2-i)$ and i ranges over all integers such that $g_i \leq n$ (this range includes both positive and negative i, so as to use both kinds of generalized pentagonal numbers). This in turn means:
$$
\sum_{i \mathrm{\ even}} p(n{-}g_i) = \sum_{i \mathrm{\ odd}} p(n{-}g_i).
$$
In terms of sets of partitions, this is equivalent to saying that the following sets are of equal cardinality:
$$
\mathcal{X} := \bigcup_{i \mathrm{\ even}} \mathcal{P}(n-g_i)
$$
and $\mathcal{Y} := \bigcup_{i \mathrm{\ odd}} \mathcal{P}(n-g_i)$, where $\mathcal{P}(n)$ denotes the set of all partitions of $n$.
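Before turning to the bijection, the recurrence $p(n)=\sum_{k\neq 0} (-1)^{k-1}p(n-g_k)$ stated earlier can be checked numerically. Below is a small sketch (our illustration, not part of the article; the cutoff N = 20 and the output format are arbitrary choices):

#include <cstdio>
#include <vector>
using namespace std;

int main() {
    const int N = 20;
    vector<long long> p(N + 1, 0);
    p[0] = 1; // the empty partition
    for (int n = 1; n <= N; n++) {
        // Generalized pentagonal numbers g_k = k(3k-1)/2 for k = 1, -1, 2, -2, ...
        for (int k = 1; ; k++) {
            long long g1 = (long long)k * (3 * k - 1) / 2; // index  k
            long long g2 = (long long)k * (3 * k + 1) / 2; // index -k
            if (g1 > n && g2 > n) break;
            long long sign = (k % 2 == 1) ? 1 : -1;        // (-1)^(k-1)
            if (g1 <= n) p[n] += sign * p[n - g1];
            if (g2 <= n) p[n] += sign * p[n - g2];
        }
    }
    for (int n = 0; n <= N; n++) printf("p(%d) = %lld\n", n, p[n]);
    // Prints 1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, ... as expected.
    return 0;
}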
- -All that remains is to give a bijection from one set to the other, which is accomplished by the function φ from X to Y which maps the partition $\mathcal{P}(n-g_i) \ni \lambda : n-g_i = \lambda_1 + \lambda_2 + \dotsb + \lambda_\ell$ to the partition $\lambda' = \varphi(\lambda)$ defined by: - - \varphi(\lambda) := - -\begin{cases} - -\lambda' : n - g_{i-1} = (\ell + 3i -2) + (\lambda_1 - 1) + \dotsb + (\lambda_\ell - 1) &\mbox{ if } \ell+3i > \lambda_1\\ - -\\ - -\lambda' : n - g_{i+1} = (\lambda_2 + 1) + \dotsb + (\lambda_\ell + 1) + \underbrace{1+\dotsb+1}_{\lambda_1 - \ell - 3i} &\mbox{ if } \ell+3i \leq \lambda_1. - -\end{cases} - - - -This is an involution (a self-inverse mapping), and thus in particular a bijection, which proves our claim and the identity. diff --git a/wiki/wikipedia/2715.txt b/wiki/wikipedia/2715.txt deleted file mode 100644 index f71a418094eddd7198514b70b0d61d7e9832056c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2715.txt +++ /dev/null @@ -1,57 +0,0 @@ -Probability theory routinely uses results from other fields of mathematics (mostly, analysis). The opposite cases, collected below, are relatively rare; however, probability theory is used systematically in combinatorics via the probabilistic method. They are particularly used for non-constructive proofs. - -* Normal numbers exist. Moreover, computable normal numbers exist. These non-probabilistic existence theorems follow from probabilistic results: (a) a number chosen at random (uniformly on (0,1)) is normal almost surely (which follows easily from the strong law of large numbers); (b) some probabilistic inequalities behind the strong law. The existence of a normal number follows from (a) immediately. The proof of the existence of computable normal numbers, based on (b), involves additional arguments. All known proofs use probabilistic arguments. - -* Dvoretzky's theorem which states that high-dimensional convex bodies have ball-like slices is proved probabilistically. No deterministic construction is known, even for many specific bodies. - -* The diameter of the Banach–Mazur compactum was calculated using a probabilistic construction. No deterministic construction is known. - -* The original proof that the Hausdorff–Young inequality cannot be extended to $p > 2$ is probabilistic. The proof of the de Leeuw–Kahane–Katznelson theorem (which is a stronger claim) is partially probabilistic. - -* The first construction of a Salem set was probabilistic. Only in 1981 did Kaufman give a deterministic construction. - -* Every continuous function on a compact interval can be uniformly approximated by polynomials, which is the Weierstrass approximation theorem. A probabilistic proof uses the weak law of large numbers. Non-probabilistic proofs were available earlier. - -* Existence of a nowhere differentiable continuous function follows easily from properties of Wiener process. A non-probabilistic proof was available earlier. - -* Stirling's formula was first discovered by Abraham de Moivre in his `The Doctrine of Chances' (with a constant identified later by Stirling) in order to be used in probability theory. Several probabilistic proofs of Stirling's formula (and related results) were found in the 20th century. - -* The only bounded harmonic functions defined on the whole plane are constant functions by Liouville's theorem. A probabilistic proof via two-dimensional Brownian motion is well known. Non-probabilistic proofs were available earlier. 
- -* Non-tangential boundary values of an analytic or harmonic function exist at almost all boundary points of non-tangential boundedness. This result (Privalov's theorem), and several results of this kind, are deduced from martingale convergence. Non-probabilistic proofs were available earlier. - -* The boundary Harnack principle is proved using Brownian motion (see also). Non-probabilistic proofs were available earlier. - -* Euler's Basel sum, - -\qquad \sum_{n=1}^\infin \frac{1}{n^2} = \frac{\pi^2}{6}, - - can be demonstrated by considering the expected exit time of planar Brownian motion from an infinite strip. A number of other less well-known identities can be deduced in a similar manner. - -* The Picard theorem can be proved using the winding properties of planar Brownian motion. - -* The fact that every Lipschitz function on the real line is differentiable almost everywhere can be proved using martingale convergence. - -* Multidimensional Fourier inversion formula can be established by the weak law of large numbers and some elementary results from complex analysis. - -* A number of theorems stating existence of graphs (and other discrete structures) with desired properties are proved by the probabilistic method. Non-probabilistic proofs are available for a few of them. - -* The maximum-minimums identity admits a probabilistic proof. - -* Crossing number inequality which is a lower bound on the number of crossing for any drawing of a graph as a function of the number of vertices, edges the graph has. - -* The fundamental theorem of algebra can be proved using two-dimensional Brownian motion. This conjecture is proved using Brownian motion, local time, stochastic integration, coupling, hypercontractivity etc. (see also). A non-probabilistic proof is found 18 years later. - -* The Loewner's torus inequality relates the area of a compact surface (topologically, a torus) to its systole. It can be proved most easily by using the probabilistic notion of variance. A non-probabilistic proof was available earlier. - -* The weak halfspace theorem for minimal surfaces states that any complete minimal surface of bounded curvature which is not a plane is not contained in any halfspace. This theorem is proved using a coupling between Brownian motions on minimal surfaces. A non-probabilistic proof was available earlier. - -* The normal number theorem (1909), due to Émile Borel, could be one of the first examples of the probabilistic method, providing the first proof of existence of normal numbers, with the help of the first version of the strong law of large numbers (see also the first item of the section Analysis). - -* The Rogers–Ramanujan identities are proved using Markov chains. A non-probabilistic proof was available earlier. - -* Non-commutative dynamics (called also quantum dynamics) is formulated in terms of Von Neumann algebras and continuous tensor products of Hilbert spaces. Several results (for example, a continuum of mutually non-isomorphic models) are obtained by probabilistic means (random compact sets and Brownian motion). One part of this theory (so-called type III systems) is translated into the analytic language and is developing analytically; the other part (so-called type II systems) exists still in the probabilistic language only. - -* Tripartite quantum states can lead to arbitrary large violations of Bell inequalities (in sharp contrast to the bipartite case). The proof uses random unitary matrices. No other proof is available. 
- -* The proof of Shannon's channel coding theorem uses random coding to show the existence of a code that achieves channel capacity. diff --git a/wiki/wikipedia/2716.txt b/wiki/wikipedia/2716.txt deleted file mode 100644 index 38811e85a81e2d807336461344ad81ae216c89e1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2716.txt +++ /dev/null @@ -1,71 +0,0 @@

Prover9 is an automated theorem prover for first-order and equational logic developed by William McCune.

Prover9 is the successor of the Otter theorem prover also developed by William McCune. Prover9 is noted for producing relatively readable proofs and having a powerful hints strategy.

Prover9 is intentionally paired with Mace4, which searches for finite models and counterexamples. Both can be run simultaneously from the same input, with Prover9 attempting to find a proof, while Mace4 attempts to find a (disproving) counter-example. Prover9, Mace4, and many other tools are built on an underlying library named LADR ("Library for Automated Deduction Research") to simplify implementation. Resulting proofs can be double-checked by Ivy, a proof-checking tool that has been separately verified using ACL2.

In July 2006 the LADR/Prover9/Mace4 input language made a major change (which also differentiates it from Otter). The key distinction between "clauses" and "formulas" completely disappeared; "formulas" can now have free variables; and "clauses" are now a subset of "formulas". Prover9/Mace4 also supports a "goal" type of formula, which is automatically negated for proof. Prover9 attempts to automatically generate a proof by default; in contrast, Otter's automatic mode must be explicitly set.

Prover9 was under active development, with new releases every month or every other month, until 2009. Prover9 is free software, and therefore, open source software; it is released under GPL version 2 or later.

The traditional "all men are mortal", "Socrates is a man", prove "Socrates is mortal" can be expressed this way in Prover9:

formulas(assumptions).
man(x) -> mortal(x).   % open formula with free variable x
man(socrates).
end_of_list.

formulas(goals).
mortal(socrates).
end_of_list.

This will be automatically converted into clausal form (which Prover9 also accepts):

formulas(sos).
-man(x) | mortal(x).
man(socrates).
-mortal(socrates).
end_of_list.

A proof that the square root of 2 is irrational can be expressed this way:

formulas(assumptions).
1*x = x.                 % identity
x*y = y*x.               % commutativity
x*(y*z) = (x*y)*z.       % associativity
( x*y = x*z ) -> y = z.  % cancellation (0 is not allowed, so x!=0).
%
% Now let's define divides(x,y): x divides y.
% Example: divides(2,6) is true because 2*3=6.
%
divides(x,y) <-> (exists z x*z = y).
divides(2,x*x) -> divides(2,x).  % If 2 divides x*x, it divides x.
a*a = 2*(b*b).           % a/b = sqrt(2), so a^2 = 2 * b^2.
(x != 1) -> -(divides(x,a) & divides(x,b)).  % a/b is in lowest terms
2 != 1.                  % Original author almost forgot this.
end_of_list.

diff --git a/wiki/wikipedia/2717.txt b/wiki/wikipedia/2717.txt deleted file mode 100644 index e4dc5f605a35264d63bd0e88bda801baa806627b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2717.txt +++ /dev/null @@ -1,15 +0,0 @@ -In graph theory, a k-outerplanar graph is a planar graph that has a planar embedding in which the vertices belong to at most $k$ concentric layers.
The outerplanarity index of a planar graph is the minimum value of $k$ for which it is $k$-outerplanar. - -An outerplanar graph (or 1-outerplanar graph) has all of its vertices on the unbounded (outside) face of the graph. A 2-outerplanar graph is a planar graph with the property that, when the vertices on the unbounded face are removed, the remaining vertices all lie on the newly formed unbounded face. And so on. - -More formally, a graph is $k$-outerplanar if it has a planar embedding such that, for every vertex, there is an alternating sequence of at most $k$ faces and $k$ vertices of the embedding, starting with the unbounded face and ending with the vertex, in which each consecutive face and vertex are incident to each other. - -The $k$-outerplanar graphs have treewidth at most $3k-1$. However, some bounded-treewidth planar graphs such as the nested triangles graph may be $k$-outerplanar only for very large $k$, linear in the number of vertices. - -Baker's technique covers a planar graph with a constant number of $k$-outerplanar graphs and uses their low treewidth in order to quickly approximate several hard graph optimization problems. - -In connection with the GNRS conjecture on metric embedding of minor-closed graph families, the $k$-outerplanar graphs are one of the most general classes of graphs for which the conjecture has been proved. - -A conjectured converse of Courcelle's theorem, according to which every graph property recognizable on graphs of bounded treewidth by finite state tree automata is definable in the monadic second-order logic of graphs, has been proven for the $k$-outerplanar graphs. - -The smallest value of $k$ for which a given graph is $k$-outerplanar (its outerplanarity index) can be computed in quadratic time. diff --git a/wiki/wikipedia/2718.txt b/wiki/wikipedia/2718.txt deleted file mode 100644 index b1ef856dab30dcb8a7529439348665a98c372320..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2718.txt +++ /dev/null @@ -1,3 +0,0 @@ -The Browder fixed-point theorem is a refinement of the Banach fixed-point theorem for uniformly convex Banach spaces. It asserts that if $K$ is a nonempty convex closed bounded set in a uniformly convex Banach space and $f$ is a mapping of $K$ into itself such that $\|f(x)-f(y)\|\leq\|x-y\|$ (i.e. $f$ is non-expansive), then $f$ has a fixed point. - -Following the publication in 1965 of two independent versions of the theorem by Felix Browder and by William Kirk, a new proof of Michael Edelstein showed that, in a uniformly convex Banach space, every iterative sequence $f^nx_0$ of a non-expansive map $f$ has a unique asymptotic center, which is a fixed point of $f$. (An asymptotic center of a sequence $(x_k)_{k\in\mathbb N}$, if it exists, is a limit of the Chebyshev centers $c_n$ for truncated sequences $(x_k)_{k\ge n}$.) A stronger property than asymptotic center is the Delta-limit of Teck-Cheong Lim, which in the uniformly convex space coincides with the weak limit if the space has the Opial property. diff --git a/wiki/wikipedia/2719.txt b/wiki/wikipedia/2719.txt deleted file mode 100644 index 23c1256290c6b9ddf0dc144f0110734e1cf1d9a3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2719.txt +++ /dev/null @@ -1,56 +0,0 @@ -This is about lattice theory. For other similarly named results, see Birkhoff's theorem (disambiguation).
- -In mathematics, Birkhoff's representation theorem for distributive lattices states that the elements of any finite distributive lattice can be represented as finite sets, in such a way that the lattice operations correspond to unions and intersections of sets. The theorem can be interpreted as providing a one-to-one correspondence between distributive lattices and partial orders, between quasi-ordinal knowledge spaces and preorders, or between finite topological spaces and preorders. It is named after Garrett Birkhoff, who published a proof of it in 1937. - -The name “Birkhoff's representation theorem” has also been applied to two other results of Birkhoff, one from 1935 on the representation of Boolean algebras as families of sets closed under union, intersection, and complement (so-called fields of sets, closely related to the rings of sets used by Birkhoff to represent distributive lattices), and Birkhoff's HSP theorem representing algebras as products of irreducible algebras. Birkhoff's representation theorem has also been called the fundamental theorem for finite distributive lattices. - -Many lattices can be defined in such a way that the elements of the lattice are represented by sets, the join operation of the lattice is represented by set union, and the meet operation of the lattice is represented by set intersection. For instance, the Boolean lattice defined from the family of all subsets of a finite set has this property. More generally any finite topological space has a lattice of sets as its family of open sets. Because set unions and intersections obey the distributive law, any lattice defined in this way is a distributive lattice. Birkhoff's theorem states that in fact all finite distributive lattices can be obtained this way, and later generalizations of Birkhoff's theorem state a similar thing for infinite distributive lattices. - -Consider the divisors of some composite number, such as (in the figure) 120, partially ordered by divisibility. Any two divisors of 120, such as 12 and 20, have a unique greatest common factor 12 ∧ 20 = 4, the largest number that divides both of them, and a unique least common multiple 12 ∨ 20 = 60; both of these numbers are also divisors of 120. These two operations ∨ and ∧ satisfy the distributive law, in either of two equivalent forms: (x ∧ y) ∨ z = (x ∨ z) ∧ (y ∨ z) and (x ∨ y) ∧ z = (x ∧ z) ∨ (y ∧ z), for all x, y, and z. Therefore, the divisors form a finite distributive lattice. - -One may associate each divisor with the set of prime powers that divide it: thus, 12 is associated with the set {2,3,4}, while 20 is associated with the set {2,4,5}. Then 12 ∧ 20 = 4 is associated with the set {2,3,4} ∩ {2,4,5} = {2,4}, while 12 ∨ 20 = 60 is associated with the set {2,3,4} ∪ {2,4,5} = {2,3,4,5}, so the join and meet operations of the lattice correspond to union and intersection of sets. - -The prime powers 2, 3, 4, 5, and 8 appearing as elements in these sets may themselves be partially ordered by divisibility; in this smaller partial order, 2 ≤ 4 ≤ 8 and there are no order relations between other pairs. The 16 sets that are associated with divisors of 120 are the lower sets of this smaller partial order, subsets of elements such that if x ≤ y and y belongs to the subset, then x must also belong to the subset. From any lower set L, one can recover the associated divisor by computing the least common multiple of the prime powers in L. 
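To make the divisor example concrete, the correspondence can be checked mechanically. The sketch below (our illustration, not from the article; the array names are ad hoc) enumerates the lower sets of the five-element order on {2, 3, 4, 5, 8} with 2 ≤ 4 ≤ 8, and maps each lower set to the least common multiple of its elements:

#include <bits/stdc++.h>
using namespace std;

int main() {
    int pp[5] = {2, 3, 4, 5, 8};
    // below[i]: indices that must accompany i in a lower set (2 below 4, 4 below 8).
    vector<int> below[5] = {{}, {}, {0}, {}, {2}};
    set<long long> divisors;
    for (int mask = 0; mask < 32; mask++) {   // every subset of the five elements
        bool lower = true;
        for (int i = 0; i < 5 && lower; i++)
            if (mask >> i & 1)
                for (int j : below[i])
                    if (!(mask >> j & 1)) lower = false;
        if (!lower) continue;                 // not closed downward: skip
        long long l = 1;
        for (int i = 0; i < 5; i++)
            if (mask >> i & 1) l = lcm(l, (long long)pp[i]);
        divisors.insert(l);                   // the divisor this lower set encodes
    }
    cout << divisors.size() << " lower sets, divisors:";
    for (long long d : divisors) cout << " " << d;
    cout << "\n"; // 16 lower sets, exactly the 16 divisors of 120
    return 0;
}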
Thus, the partial order on the five prime powers 2, 3, 4, 5, and 8 carries enough information to recover the entire original 16-element divisibility lattice. - -Birkhoff's theorem states that this relation between the operations ∧ and ∨ of the lattice of divisors and the operations ∩ and ∪ of the associated sets of prime powers is not coincidental, and not dependent on the specific properties of prime numbers and divisibility: the elements of any finite distributive lattice may be associated with lower sets of a partial order in the same way. - -As another example, the application of Birkhoff's theorem to the family of subsets of an n-element set, partially ordered by inclusion, produces the free distributive lattice with n generators. The number of elements in this lattice is given by the Dedekind numbers. - -In a lattice, an element x is join-irreducible if x is not the join of a finite set of other elements. Equivalently, x is join-irreducible if it is neither the bottom element of the lattice (the join of zero elements) nor the join of any two smaller elements. For instance, in the lattice of divisors of 120, there is no pair of elements whose join is 4, so 4 is join-irreducible. An element x is join-prime if it differs from the bottom element, and whenever x ≤ y ∨ z, either x ≤ y or x ≤ z. In the same lattice, 4 is join-prime: whenever lcm(y,z) is divisible by 4, at least one of y and z must itself be divisible by 4. - -In any lattice, a join-prime element must be join-irreducible. Equivalently, an element that is not join-irreducible is not join-prime. For, if an element x is not join-irreducible, there exist smaller y and z such that x = y ∨ z. But then x ≤ y ∨ z, and x is not less than or equal to either y or z, showing that it is not join-prime. - -There exist lattices in which the join-prime elements form a proper subset of the join-irreducible elements, but in a distributive lattice the two types of elements coincide. For, suppose that x is join-irreducible, and that x ≤ y ∨ z. This inequality is equivalent to the statement that x = x ∧ (y ∨ z), and by the distributive law x = (x ∧ y) ∨ (x ∧ z). But since x is join-irreducible, at least one of the two terms in this join must be x itself, showing that either x = x ∧ y (equivalently x ≤ y) or x = x ∧ z (equivalently x ≤ z). - -The lattice ordering on the subset of join-irreducible elements forms a partial order; Birkhoff's theorem states that the lattice itself can be recovered from the lower sets of this partial order. - -In any partial order, the lower sets form a lattice in which the lattice's partial ordering is given by set inclusion, the join operation corresponds to set union, and the meet operation corresponds to set intersection, because unions and intersections preserve the property of being a lower set. Because set unions and intersections obey the distributive law, this is a distributive lattice. Birkhoff's theorem states that any finite distributive lattice can be constructed in this way. - -Theorem. Any finite distributive lattice L is isomorphic to the lattice of lower sets of the partial order of the join-irreducible elements of L. - -That is, there is a one-to-one order-preserving correspondence between elements of L and lower sets of the partial order. The lower set corresponding to an element x of L is simply the set of join-irreducible elements of L that are less than or equal to x, and the element of L corresponding to a lower set S of join-irreducible elements is the join of S. 
For any lower set S of join-irreducible elements, let x be the join of S, and let T be the lower set of the join-irreducible elements less than or equal to x. Then S = T. For, every element of S clearly belongs to T, and any join-irreducible element less than or equal to x must (by join-primality) be less than or equal to one of the members of S, and therefore must (by the assumption that S is a lower set) belong to S itself. Conversely, for any element x of L, let S be the set of join-irreducible elements less than or equal to x, and let y be the join of S. Then x = y. For, as a join of elements less than or equal to x, y can be no greater than x itself, but if x is join-irreducible then x belongs to S while if x is the join of two or more join-irreducible items then they must again belong to S, so y ≥ x. Therefore, the correspondence is one-to-one and the theorem is proved. - -Birkhoff defined a ring of sets to be a family of sets that is closed under the operations of set unions and set intersections; later, motivated by applications in mathematical psychology, Doignon called the same structure a quasi-ordinal knowledge space. If the sets in a ring of sets are ordered by inclusion, they form a distributive lattice. The elements of the sets may be given a preorder in which x ≤ y whenever some set in the ring contains x but not y. The ring of sets itself is then the family of lower sets of this preorder, and any preorder gives rise to a ring of sets in this way. - -Birkhoff's theorem, as stated above, is a correspondence between individual partial orders and distributive lattices. However, it can also be extended to a correspondence between order-preserving functions of partial orders and bounded homomorphisms of the corresponding distributive lattices. The direction of these maps is reversed in this correspondence. - -Let 2 denote the partial order on the two-element set {0, 1}, with the order relation 0 < 1, and (following Stanley) let J(P) denote the distributive lattice of lower sets of a finite partial order P. Then the elements of J(P) correspond one-for-one to the order-preserving functions from P to 2. For, if ƒ is such a function, ƒ−1(0) forms a lower set, and conversely if L is a lower set one may define an order-preserving function ƒL that maps L to 0 and that maps the remaining elements of P to 1. If g is any order-preserving function from Q to P, one may define a function g* from J(P) to J(Q) that uses the composition of functions to map any element L of J(P) to ƒL ∘ g. This composite function maps Q to 2 and therefore corresponds to an element g*(L) = (ƒL ∘ g)−1(0) of J(Q). Further, for any x and y in J(P), g*(x ∧ y) = g*(x) ∧ g*(y) (an element of Q is mapped by g to the lower set x ∩ y if and only if it belongs both to the set of elements mapped to x and the set of elements mapped to y) and symmetrically g*(x ∨ y) = g*(x) ∨ g*(y). Additionally, the bottom element of J(P) (the function that maps all elements of P to 0) is mapped by g* to the bottom element of J(Q), and the top element of J(P) is mapped by g* to the top element of J(Q). That is, g* is a homomorphism of bounded lattices. - -However, the elements of P themselves correspond one-for-one with bounded lattice homomorphisms from J(P) to 2. For, if x is any element of P, one may define a bounded lattice homomorphism jx that maps all lower sets containing x to 1 and all other lower sets to 0.
And, for any lattice homomorphism from J(P) to 2, the elements of J(P) that are mapped to 1 must have a unique minimal element x (the meet of all elements mapped to 1), which must be join-irreducible (it cannot be the join of any set of elements mapped to 0), so every lattice homomorphism has the form jx for some x. Again, from any bounded lattice homomorphism h from J(P) to J(Q) one may use composition of functions to define an order-preserving map h* from Q to P. It may be verified that g** = g for any order-preserving map g from Q to P and that h** = h for any bounded lattice homomorphism h from J(P) to J(Q). - -In category-theoretic terminology, J is a contravariant hom-functor J = Hom(—,2) that defines a duality of categories between, on the one hand, the category of finite partial orders and order-preserving maps, and on the other hand the category of finite distributive lattices and bounded lattice homomorphisms. - -In an infinite distributive lattice, it may not be the case that the lower sets of the join-irreducible elements are in one-to-one correspondence with lattice elements. Indeed, there may be no join-irreducibles at all. This happens, for instance, in the lattice of all natural numbers, ordered with the reverse of the usual divisibility ordering (so x ≤ y when y divides x): any number x can be expressed as the join of numbers xp and xq where p and q are distinct prime numbers. However, elements in infinite distributive lattices may still be represented as sets via Stone's representation theorem for distributive lattices, a form of Stone duality in which each lattice element corresponds to a compact open set in a certain topological space. This generalized representation theorem can be expressed as a category-theoretic duality between distributive lattices and spectral spaces (sometimes called coherent spaces, but not the same as the coherent spaces in linear logic), topological spaces in which the compact open sets are closed under intersection and form a base for the topology. Hilary Priestley showed that Stone's representation theorem could be interpreted as an extension of the idea of representing lattice elements by lower sets of a partial order, using Nachbin's idea of ordered topological spaces. Stone spaces with an additional partial order linked with the topology via the Priestley separation axiom can also be used to represent bounded distributive lattices. Such spaces are known as Priestley spaces. Further, certain bitopological spaces, namely pairwise Stone spaces, generalize Stone's original approach by utilizing two topologies on a set to represent an abstract distributive lattice. Thus, Birkhoff's representation theorem extends to the case of infinite (bounded) distributive lattices in at least three different ways, summed up in duality theory for distributive lattices. - -Birkhoff's representation theorem may also be generalized to finite structures other than distributive lattices. In a distributive lattice, the self-dual median operation
$$
m(x,y,z)=(x\vee y)\wedge(x\vee z)\wedge(y\vee z)=(x\wedge y)\vee(x\wedge z)\vee(y\wedge z)
$$
gives rise to a median algebra, and the covering relation of the lattice forms a median graph. Finite median algebras and median graphs have a dual structure as the set of solutions of a 2-satisfiability instance; Barthélemy formulated this structure equivalently as the family of initial stable sets in a mixed graph.
For a distributive lattice, the corresponding mixed graph has no undirected edges, and the initial stable sets are just the lower sets of the transitive closure of the graph. Equivalently, for a distributive lattice, the implication graph of the 2-satisfiability instance can be partitioned into two connected components, one on the positive variables of the instance and the other on the negative variables; the transitive closure of the positive component is the underlying partial order of the distributive lattice.

Another result analogous to Birkhoff's representation theorem, but applying to a broader class of lattices, is the theorem of Edelman that any finite join-distributive lattice may be represented as an antimatroid, a family of sets closed under unions but in which closure under intersections has been replaced by the property that each nonempty set has a removable element.

diff --git a/wiki/wikipedia/272.txt b/wiki/wikipedia/272.txt deleted file mode 100644 index 96e879449c103073947c2762e97e46e999cd3858..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/272.txt +++ /dev/null @@ -1,27 +0,0 @@

In the mathematical discipline of graph theory, the medial graph of a plane graph G is another graph M(G) that represents the adjacencies between edges in the faces of G. Medial graphs were introduced in 1922 by Ernst Steinitz to study combinatorial properties of convex polyhedra, although the inverse construction was already used by Peter Tait in 1877 in his foundational study of knots and links.

Given a connected plane graph G, its medial graph M(G) has

* a vertex for each edge of G and

* an edge between two vertices for each face of G in which their corresponding edges occur consecutively.

The medial graph of a disconnected graph is the disjoint union of the medial graphs of each connected component. The definition of medial graph also extends without modification to graph embeddings on surfaces of higher genus. (A short code sketch of this construction appears after the list of properties below.)

* The medial graph of any plane graph is a 4-regular plane graph.

* For any plane graph G, the medial graph of G and the medial graph of the dual graph of G are isomorphic. Conversely, for any 4-regular plane graph H, the only two plane graphs with medial graph H are dual to each other.

* Since the medial graph depends on a particular embedding, the medial graph of a planar graph is not unique; the same planar graph can have non-isomorphic medial graphs. For example, two embeddings of the same graph can yield medial graphs that are not isomorphic, distinguishable by whether two vertices with self loops share an edge in one medial graph but not in the other.

* Every 4-regular plane graph is the medial graph of some plane graph. For a connected 4-regular plane graph H, a planar graph G with H as its medial graph can be constructed as follows. Color the faces of H with just two colors, which is possible since H is Eulerian (and thus the dual graph of H is bipartite). The vertices in G correspond to the faces of a single color in H. These vertices are connected by an edge for each vertex shared by their corresponding faces in H. Note that performing this construction using the faces of the other color as the vertices produces the dual graph of G.

* The medial graph of a 3-regular plane graph coincides with its line graph. However, this is not true for medial graphs of plane graphs that have vertices of degree greater than three.
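The face-based definition above translates directly into code. The following is a minimal sketch in Python (the function name medial_graph is ours, not a library API), under the assumption that the plane graph is described combinatorially by its faces, each given as a cyclic list of edge labels:

def medial_graph(faces):
    # The vertices of M(G) are the edge labels of G. For every face of G,
    # each pair of consecutive boundary edges contributes one edge of M(G).
    medial_edges = []
    for face in faces:
        k = len(face)
        for i in range(k):
            medial_edges.append((face[i], face[(i + 1) % k]))
    return medial_edges

# The triangle K3 has edges a, b, c bounding both its inner and outer face.
# Its medial graph is a triangle with every edge doubled, so each of the
# three vertices has degree 4, as the first property above requires.
print(medial_graph([["a", "b", "c"], ["a", "c", "b"]]))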
For a plane graph G, twice the evaluation of the Tutte polynomial at the point (3,3) equals the weighted sum over Eulerian orientations in the medial graph of G, where the weight of an orientation is 2 to the number of saddle vertices of the orientation (that is, the number of vertices with incident edges cyclically ordered "in, out, in, out"). Since the Tutte polynomial is invariant under embeddings, this result shows that every medial graph has the same sum of these weighted Eulerian orientations.

The medial graph definition can be extended to include an orientation. First, the faces of the medial graph are colored black if they contain a vertex of the original graph and white otherwise. This coloring causes each edge of the medial graph to be bordered by one black face and one white face. Then each edge is oriented so that the black face is on its left.

A plane graph and its dual do not have the same directed medial graph; their directed medial graphs are the transpose of each other.

Using the directed medial graph, one can effectively generalize the result on evaluations of the Tutte polynomial at (3,3). For a plane graph G, n times the evaluation of the Tutte polynomial at the point (n+1,n+1) equals the weighted sum over all edge colorings using n colors in the directed medial graph of G such that each (possibly empty) set of monochromatic edges forms a directed Eulerian graph, where the weight of a coloring is 2 to the number of monochromatic vertices.

diff --git a/wiki/wikipedia/2720.txt b/wiki/wikipedia/2720.txt deleted file mode 100644 index c31bb4ee1fceee99ab371d45bc4eec6437df4e1b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2720.txt +++ /dev/null @@ -1,15 +0,0 @@

In formal semantics and philosophical logic, simplification of disjunctive antecedents (SDA) is the phenomenon whereby a disjunction in the antecedent of a conditional appears to distribute over the conditional as a whole. This inference is shown schematically below:

# $ (A \lor B) \Rightarrow C \models (A \Rightarrow C) \land (B \Rightarrow C) $

This inference has been argued to be valid on the basis of sentence pairs such as that below, since Sentence 1 seems to imply Sentence 2.

# If Yde or Dani had come to the party, it would have been fun.

# If Yde had come to the party, it would have been fun, and if Dani had come to the party, it would have been fun.

The SDA inference was first discussed as a potential problem for the similarity analysis of counterfactuals. In these approaches, a counterfactual $ (A \lor B) > C$ is predicted to be true if $C$ holds throughout the possible worlds where $A \lor B$ holds which are most similar to the world of evaluation. On a Boolean semantics for disjunction, $A \lor B$ can hold at a world simply in virtue of $A$ being true there, meaning that the most similar $A \lor B$-worlds could all be ones where $A$ holds but $B$ does not. If $C$ is also true at these worlds but not at the closest worlds where $B$ is true, then this approach will predict a failure of SDA: $(A \lor B) > C$ will be true at the world of evaluation while $(B > C)$ will be false.

In more intuitive terms, imagine that Yde missed the most recent party because he happened to get a flat tire while Dani missed it because she hates parties and is also deceased. In all of the closest worlds where either Yde or Dani comes to the party, it will be Yde and not Dani who attends.
If Yde is a fun person to have at parties, this will mean that Sentence 1 above is predicted to be true on the similarity approach. However, if Dani tends to have the opposite effect on parties she attends, then Sentence 2 is predicted to be false, in violation of SDA.

SDA has been analyzed in a variety of ways. One is to derive it as a semantic entailment by positing a non-classical treatment of disjunction such as that of alternative semantics or inquisitive semantics. Another approach also derives it as a semantic entailment, but does so by adopting an alternative denotation for conditionals such as the strict conditional or any of the options made available in situation semantics. Finally, some researchers have suggested that it can be analyzed as a pragmatic implicature derived on the basis of classical disjunction and a standard semantics for conditionals. SDA is sometimes considered an embedded instance of the free choice inference.

diff --git a/wiki/wikipedia/2721.txt b/wiki/wikipedia/2721.txt deleted file mode 100644 index 4842935f14b42b30f8cbefaa86ceda0f2c6136b5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2721.txt +++ /dev/null @@ -1,9 +0,0 @@

Civis Analytics is a US data science software and consultancy company founded by Dan Wagner in 2013, with backing by Eric Schmidt.

Wagner had served as the Chief Analytics Officer for Barack Obama's 2012 re-election campaign.

Civis Analytics helps businesses "understand their data, use that data to make predictions, and get recommendations on what steps to take next".

In 2020, the company faced controversy for firing employee David Shor after he tweeted a short summary of an academic paper by African American Princeton professor Omar Wasow. The paper presented evidence that some violent protest tactics in the 1960s civil rights movement may have been counterproductive, and some Twitter users accused Shor of racism for highlighting this result. After the firing, Civis Analytics initially released a statement claiming that it had not fired any employees for tweeting academic papers, but later retracted that statement and replaced it with a new statement that omitted that claim. In 2021, Wasow was quoted as having concluded from his conversations with Civis Analytics and Shor that "at the heart of it was how [Shor] was treated on Twitter by people who essentially shot the messenger", and as dismissing accusations of racism or other improper conduct as "baseless".

Eleven employees were laid off on October 30, 2020. In December, seven of them filed unfair labor practice charges with the NLRB for wrongful termination. In July 2021 the NLRB dismissed the claim.

diff --git a/wiki/wikipedia/2722.txt b/wiki/wikipedia/2722.txt deleted file mode 100644 index 23b44341c484e18afa6f9d077fb3182f8afb33dc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2722.txt +++ /dev/null @@ -1,103 +0,0 @@

In geometry, an incidence relation is a heterogeneous relation that captures the idea being expressed when phrases such as "a point lies on a line" or "a line is contained in a plane" are used. The most basic incidence relation is that between a point, P, and a line, l, sometimes denoted P I l. If P I l, the pair (P, l) is called a flag. There are many expressions used in common language to describe incidence (for example, a line passes through a point, a point lies in a plane, etc.)
but the term "incidence" is preferred because it does not have the additional connotations that these other terms have, and it can be used in a symmetric manner. Statements such as "line l1 intersects line l2" are also statements about incidence relations, but in this case, it is because this is a shorthand way of saying that "there exists a point P that is incident with both line l1 and line l2". When one type of object can be thought of as a set of the other type of object (viz., a plane is a set of points) then an incidence relation may be viewed as containment.

Statements such as "any two lines in a plane meet" are called incidence propositions. This particular statement is true in a projective plane, though not true in the Euclidean plane where lines may be parallel. Historically, projective geometry was developed in order to make the propositions of incidence true without exceptions, such as those caused by the existence of parallels. From the point of view of synthetic geometry, projective geometry should be developed using such propositions as axioms. This approach is most significant for projective planes, due to the universal validity of Desargues' theorem in higher dimensions.

In contrast, the analytic approach is to define projective space based on linear algebra, utilizing homogeneous co-ordinates. The propositions of incidence are derived from the following basic result on vector spaces: given subspaces U and W of a (finite-dimensional) vector space V, the dimension of their intersection is dim U + dim W − dim (U + W). Bearing in mind that the geometric dimension of the projective space P(V) associated to V is dim V − 1 and that the geometric dimension of any subspace is positive, the basic proposition of incidence in this setting can take the form: linear subspaces L and M of projective space P meet provided dim L + dim M ≥ dim P.

The following sections are limited to projective planes defined over fields, often denoted by PG(2, F), where F is a field, or P2F. However these computations can be naturally extended to higher-dimensional projective spaces, and the field may be replaced by a division ring (or skewfield) provided that one pays attention to the fact that multiplication is not commutative in that case.

Let V be the three-dimensional vector space defined over the field F. The projective plane P(V) = PG(2, F) consists of the one-dimensional vector subspaces of V, called points, and the two-dimensional vector subspaces of V, called lines. Incidence of a point and a line is given by containment of the one-dimensional subspace in the two-dimensional subspace.

Fix a basis for V so that we may describe its vectors as coordinate triples (with respect to that basis). A one-dimensional vector subspace consists of a non-zero vector and all of its scalar multiples. The non-zero scalar multiples, written as coordinate triples, are the homogeneous coordinates of the given point, called point coordinates. With respect to this basis, the solution space of a single linear equation {(x, y, z) : ax + by + cz = 0} is a two-dimensional subspace of V, and hence a line of P(V). This line may be denoted by line coordinates [a, b, c], which are also homogeneous coordinates since non-zero scalar multiples would give the same line. Other notations are also widely used. Point coordinates may be written as column vectors, (x, y, z)T, with colons, (x : y : z), or with a subscript, (x, y, z)P.
Correspondingly, line coordinates may be written as row vectors, (a, b, c), with colons, [a : b : c], or with a subscript, (a, b, c)L. Other variations are also possible.

Given a point P = (x, y, z) and a line l = [a, b, c], written in terms of point and line coordinates, the point is incident with the line (often written as P I l) if and only if

ax + by + cz = 0.

This can be expressed in other notations as:
$$
ax + by + cz = [a,b,c] \cdot (x,y,z) = (a,b,c)_L \cdot (x,y,z)_P = [a:b:c] \cdot (x:y:z) = (a,b,c) \left ( \begin{matrix} x \\ y \\ z \end{matrix} \right ) = 0.
$$

No matter what notation is employed, when the homogeneous coordinates of the point and line are just considered as ordered triples, their incidence is expressed as having their dot product equal 0.

Let P1 and P2 be a pair of distinct points with homogeneous coordinates (x1, y1, z1) and (x2, y2, z2) respectively. These points determine a unique line l with an equation of the form ax + by + cz = 0, whose coefficients must satisfy the equations:

ax1 + by1 + cz1 = 0 and

ax2 + by2 + cz2 = 0.

In matrix form this system of simultaneous linear equations can be expressed as:
$$
\left( \begin{matrix} x & y & z \\ x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \end{matrix} \right) \left( \begin{matrix} a \\ b \\ c \end{matrix} \right) = \left( \begin{matrix} 0 \\ 0 \\ 0 \end{matrix} \right).
$$

This system has a nontrivial solution if and only if the determinant,
$$
 \left| \begin{matrix} x & y & z \\ x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \end{matrix} \right| = 0.
$$

Expansion of this determinantal equation produces a homogeneous linear equation, which must be the equation of line l. Therefore, up to a common non-zero constant factor we have l = [a, b, c] where:

a = y1z2 − y2z1,

b = x2z1 − x1z2, and

c = x1y2 − x2y1.

In terms of the scalar triple product notation for vectors, the equation of this line may be written as:

P ⋅ P1 × P2 = 0,

where P = (x, y, z) is a generic point.

Points that are incident with the same line are said to be collinear. The set of all points incident with the same line is called a range.

If P1 = (x1, y1, z1), P2 = (x2, y2, z2), and P3 = (x3, y3, z3), then these points are collinear if and only if
$$
 \left| \begin{matrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{matrix} \right| = 0,
$$

i.e., if and only if the determinant of the homogeneous coordinates of the points is equal to zero.

Let l1 = [a1, b1, c1] and l2 = [a2, b2, c2] be a pair of distinct lines. Then the intersection of lines l1 and l2 is the point P = (x0, y0, z0) that is the simultaneous solution (up to a scalar factor) of the system of linear equations:

a1x + b1y + c1z = 0 and

a2x + b2y + c2z = 0.

The solution of this system gives:

x0 = b1c2 − b2c1,

y0 = a2c1 − a1c2, and

z0 = a1b2 − a2b1.

Alternatively, consider another line l = [a, b, c] passing through the point P, that is, the homogeneous coordinates of P satisfy the equation:

ax + by + cz = 0.

Combining this equation with the two that define P, we can seek a non-trivial solution of the matrix equation:
$$
\left( \begin{matrix} a & b & c \\ a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \end{matrix} \right) \left( \begin{matrix} x \\ y \\ z \end{matrix} \right) = \left( \begin{matrix} 0 \\ 0 \\ 0 \end{matrix} \right).
$$

Such a solution exists provided the determinant,
$$
 \left| \begin{matrix} a & b & c \\ a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \end{matrix} \right| = 0.
$$

The coefficients of a, b and c in this equation give the homogeneous coordinates of P.

The equation of the generic line passing through the point P in scalar triple product notation is:

l ⋅ l1 × l2 = 0.

Lines that meet at the same point are said to be concurrent. The set of all lines in a plane incident with the same point is called a pencil of lines centered at that point. The computation of the intersection of two lines shows that the entire pencil of lines centered at a point is determined by any two of the lines that intersect at that point. It immediately follows that the algebraic condition for three lines, [a1, b1, c1], [a2, b2, c2], [a3, b3, c3], to be concurrent is that the determinant,
$$
 \left| \begin{matrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{matrix} \right| = 0.
$$

diff --git a/wiki/wikipedia/2723.txt b/wiki/wikipedia/2723.txt deleted file mode 100644 index be3d34e5992c221f24e065d5e66eee7775723c83..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2723.txt +++ /dev/null @@ -1,8 +0,0 @@

In real algebraic geometry, Gudkov's conjecture, also called Gudkov's congruence (named after Dmitry Gudkov), was a conjecture, and is now a theorem, which states that an M-curve of even degree $2d$ obeys the congruence
$$
 p - n \equiv d^2 \pmod 8,
$$

where $p$ is the number of positive ovals and $n$ the number of negative ovals of the M-curve. (Here, the term M-curve stands for "maximal curve"; it means a smooth algebraic curve over the reals whose number of connected components $k$ attains the maximum $g+1$ allowed by Harnack's inequality for its genus $g$; equivalently, its genus is $k-1$.)

The theorem was proved by the combined works of Vladimir Arnold and Vladimir Rokhlin.

diff --git a/wiki/wikipedia/2724.txt b/wiki/wikipedia/2724.txt deleted file mode 100644 index 2423a49adabeac265ecd730695a45a619b910836..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2724.txt +++ /dev/null @@ -1,5 +0,0 @@

JUNG (the Java Universal Network/Graph Framework) is an open-source graph modeling and visualization framework written in Java, under the BSD license. The framework comes with a number of layout algorithms built in, as well as analysis algorithms such as graph clustering and metrics for node centrality.

JUNG's architecture is designed to support a variety of representations of entities and their relations, such as directed and undirected graphs, graphs with parallel edges, and hypergraphs. It provides a mechanism for annotating graphs, entities, and relations with metadata. JUNG also facilitates the creation of analytic tools for complex data sets that can examine the relations between entities as well as the metadata attached to each entity and relation. JUNG includes implementations of a number of algorithms from graph theory, data mining, and social network analysis, such as routines for clustering, random graph generation, statistical analysis, and calculation of network distances, flows, and importance measures.

JUNG provides a visualization framework that makes it easy to construct tools for the interactive exploration of network data. Users can use one of the layout algorithms provided, or use the framework to create their own custom layouts. In addition, filtering mechanisms are provided which allow users to focus their attention, or their algorithms, on specific portions of the graph.
diff --git a/wiki/wikipedia/2725.txt b/wiki/wikipedia/2725.txt deleted file mode 100644 index 1bc28ca7132becec8629a516d58713839f996032..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2725.txt +++ /dev/null @@ -1,152 +0,0 @@

In computability theory, Kleene's recursion theorems are a pair of fundamental results about the application of computable functions to their own descriptions. The theorems were first proved by Stephen Kleene in 1938 and appear in his 1952 book Introduction to Metamathematics. A related theorem, which constructs fixed points of a computable function, is known as Rogers's theorem and is due to Hartley Rogers, Jr.

The recursion theorems can be applied to construct fixed points of certain operations on computable functions, to generate quines, and to construct functions defined via recursive definitions.

The statement of the theorems refers to an admissible numbering $\varphi$ of the partial recursive functions, such that the function corresponding to index $e$ is $\varphi_e$.

If $F$ and $G$ are partial functions on the natural numbers, the notation $F \simeq G$ indicates that, for each n, either $F(n)$ and $G(n)$ are both defined and are equal, or else $F(n)$ and $G(n)$ are both undefined.

Given a function $F$, a fixed point of $F$ is an index $e$ such that $\varphi_e \simeq \varphi_{F(e)}$. Rogers describes the following result as "a simpler version" of Kleene's (second) recursion theorem.

Rogers's fixed-point theorem. If $F$ is a total computable function, it has a fixed point.

The proof uses a particular total computable function $h$, defined as follows. Given a natural number $x$, the function $h$ outputs the index of the partial computable function that performs the following computation:

Given an input $y$, first attempt to compute $\varphi_{x}(x)$. If that computation returns an output $e$, then compute $\varphi_e(y)$ and return its value, if any.

Thus, for all indices $x$ of partial computable functions, if $\varphi_x(x)$ is defined, then $\varphi_{h(x)} \simeq \varphi_{\varphi_x(x)}$. If $\varphi_x(x)$ is not defined, then $\varphi_{h(x)}$ is a function that is nowhere defined. The function $h$ can be constructed from a partial computable function $g(x,y)$ that performs the computation just described, together with the s-m-n theorem: for each $x$, $h(x)$ is the index of a program which computes the function $y \mapsto g(x,y)$.

To complete the proof, let $F$ be any total computable function, and construct $h$ as above. Let $e$ be an index of the composition $F \circ h$, which is a total computable function. Then $\varphi_{h(e)} \simeq \varphi_{\varphi_e(e)}$ by the definition of $h$.

But, because $e$ is an index of $F \circ h$, $\varphi_e(e) = (F \circ h)(e)$, and thus $\varphi_{\varphi_e(e)} \simeq \varphi_{F(h(e))}$. By the transitivity of $\simeq$, this means $\varphi_{h(e)} \simeq \varphi_{F(h(e))}$. Hence $\varphi_n \simeq \varphi_{F(n)}$ for $n = h(e)$.

This proof is a construction of a partial recursive function which implements the Y combinator.

A function $F$ such that $ \varphi_e \not \simeq \varphi_{F(e)}$ for all $e$ is called fixed-point free. The fixed-point theorem shows that no total computable function is fixed-point free, but there are many non-computable fixed-point-free functions. Arslanov's completeness criterion states that the only recursively enumerable Turing degree that computes a fixed-point-free function is 0′, the degree of the halting problem.
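The construction in the proof can be made concrete in any programming system whose program texts can be manipulated as data. Below is a minimal sketch in Python, under the simplifying assumption that source strings of one-argument lambdas serve as indices; the names phi, h, and F are ours, mirroring the proof above:

def F(e):
    # An arbitrary total transformation of indices; here it ignores e
    # and returns an index for the squaring function.
    return "lambda y: y * y"

def h(x):
    # The function h from the proof: an index for the program that,
    # on input y, computes phi_{phi_x(x)}(y).
    return f"lambda y: phi(phi({x!r})({x!r}))(y)"

def phi(e):
    # The interpreter: treat the index e (an expression string) as a function.
    return eval(e, {"phi": phi, "h": h, "F": F})

e = "lambda x: F(h(x))"           # an index of the composition F . h
n = h(e)                          # the fixed point: phi_n ~ phi_{F(n)}
print(phi(n)(7), phi(F(n))(7))    # prints "49 49"

Whatever total transformation F of indices is supplied, phi(n) and phi(F(n)) compute the same function, which is exactly the fixed-point property.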
The second recursion theorem is a generalization of Rogers's theorem with a second input in the function. One informal interpretation of the second recursion theorem is that it is possible to construct self-referential programs; see "Application to quines" below.

The second recursion theorem. For any partial recursive function $Q(x,y)$ there is an index $p$ such that $\varphi_p \simeq \lambda y.Q(p,y)$.

The theorem can be proved from Rogers's theorem by letting $F(p)$ be a function such that $\varphi_{F(p)}(y) = Q(p,y)$ (a construction described by the S-m-n theorem). One can then verify that a fixed point of this $F$ is an index $p$ as required. The theorem is constructive in the sense that a fixed computable function maps an index for Q into the index p.

Kleene's second recursion theorem and Rogers's theorem can both be proved, rather simply, from each other. However, a direct proof of Kleene's theorem does not make use of a universal program, which means that the theorem holds for certain subrecursive programming systems that do not have a universal program.

A classic example using the second recursion theorem is the function $Q(x,y)=x$. The corresponding index $p$ in this case yields a computable function that outputs its own index when applied to any value. When expressed as computer programs, such indices are known as quines.

The following example in Lisp illustrates how the $p$ in the corollary can be effectively produced from the function $Q$. The function s11 in the code is the function of that name produced by the S-m-n theorem. Q can be changed to any two-argument function.

(setq Q '(lambda (x y) x))

;; s11 implements the s-m-n construction at the level of source expressions:
;; given an index f and a value x, it returns an index for y -> phi_f(x, y).
;; The value x is embedded as quoted data so that indices stay lists.
(setq s11 '(lambda (f x) (list 'lambda '(y) (list f (list 'quote x) 'y))))

;; n is an index for the function (x, y) -> Q(s11(x, x), y).
(setq n (list 'lambda '(x y) (list Q (list s11 'x 'x) 'y)))

;; p = s11(n, n); n is quoted so that it is passed as data, not evaluated.
(setq p (eval (list s11 (list 'quote n) (list 'quote n))))

The results of the following expressions should be the same. $\varphi_p(\mathrm{nil})$:

(eval (list p nil))

$Q(p, \mathrm{nil})$:

(eval (list Q (list 'quote p) nil))

Suppose that $g$ and $h$ are total computable functions that are used in a recursive definition for a function $f$:
$$
f(0,y) \simeq g(y),
$$
$$
f(x+1,y) \simeq h(f(x,y),x,y).
$$

The second recursion theorem can be used to show that such equations define a computable function, where the notion of computability does not have to allow, prima facie, for recursive definitions (for example, it may be defined by μ-recursion, or by Turing machines). This recursive definition can be converted into a computable function $\varphi_{F}(e,x,y)$ that assumes $e$ is an index to itself, to simulate recursion:
$$
\varphi_{F}(e,0,y) \simeq g(y),
$$
$$
\varphi_{F}(e,x+1,y) \simeq h(\varphi_e(x,y),x,y).
$$

The recursion theorem establishes the existence of a computable function $\varphi_f$ such that $\varphi_f(x,y) \simeq \varphi_{F}(f,x,y)$. Thus $f$ satisfies the given recursive definition.

Reflexive, or reflective, programming refers to the usage of self-reference in programs. Jones presents a view of the second recursion theorem based on a reflexive language. It is shown that the reflexive language defined is not stronger than a language without reflection (because an interpreter for the reflexive language can be implemented without using reflection); then, it is shown that the recursion theorem is almost trivial in the reflexive language.
While the second recursion theorem is about fixed points of computable functions, the first recursion theorem is related to fixed points determined by enumeration operators, which are a computable analogue of inductive definitions. An enumeration operator is a set of pairs (A,n) where A is a (code for a) finite set of numbers and n is a single natural number. Often, n will be viewed as a code for an ordered pair of natural numbers, particularly when functions are defined via enumeration operators. Enumeration operators are of central importance in the study of enumeration reducibility.

Each enumeration operator Φ determines a function from sets of naturals to sets of naturals given by
$$
\Phi(X) = \{ n \mid \exists A \subseteq X [(A,n) \in \Phi]\}.
$$

A recursive operator is an enumeration operator that, when given the graph of a partial recursive function, always returns the graph of a partial recursive function.

A fixed point of an enumeration operator Φ is a set F such that Φ(F) = F. The first recursion theorem shows that fixed points can be effectively obtained if the enumeration operator itself is computable.

First recursion theorem. The following statements hold.

# For any computable enumeration operator Φ there is a recursively enumerable set F such that Φ(F) = F and F is the smallest set with this property.

# For any recursive operator Ψ there is a partial computable function φ such that Ψ(φ) = φ and φ is the smallest partial computable function with this property.

Like the second recursion theorem, the first recursion theorem can be used to obtain functions satisfying systems of recursion equations. To apply the first recursion theorem, the recursion equations must first be recast as a recursive operator.

Consider the recursion equations for the factorial function f:
$$
f(0)=1
$$
$$
f(n+1) = (n + 1) \cdot f(n)
$$

The corresponding recursive operator Φ will have information that tells how to get to the next value of f from the previous value. However, the recursive operator will actually define the graph of f. First, Φ will contain the pair $( \varnothing, (0, 1))$. This indicates that f(0) is unequivocally 1, and thus the pair (0,1) is in the graph of f.

Next, for each n and m, Φ will contain the pair $( \{ (n, m) \}, (n+1, (n+1)\cdot m))$. This indicates that, if f(n) is m, then f(n + 1) is (n + 1)m, so that the pair (n + 1, (n + 1)m) is in the graph of f. Unlike the base case f(0) = 1, the recursive operator requires some information about f(n) before it defines a value of f(n + 1).

The first recursion theorem (in particular, part 1) states that there is a set F such that Φ(F) = F. The set F will consist entirely of ordered pairs of natural numbers, and will be the graph of the factorial function f, as desired.

The restriction to recursion equations that can be recast as recursive operators ensures that the recursion equations actually define a least fixed point. For example, consider the set of recursion equations:
$$
g(0) = 1
$$
$$
g(n + 1) = 1
$$
$$
g(2n) = 0
$$

There is no function g satisfying these equations, because they imply g(2) = 1 and also imply g(2) = 0. Thus there is no fixed point g satisfying these recursion equations. It is possible to make an enumeration operator corresponding to these equations, but it will not be a recursive operator.

The proof of part 1 of the first recursion theorem is obtained by iterating the enumeration operator Φ beginning with the empty set; the sketch below carries out this iteration concretely for the factorial operator described above.
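A minimal Python sketch of the iteration (ours; the operator Φ is represented by its two rule schemas rather than as a literal set of pairs):

def phi_op(X):
    # Apply the factorial enumeration operator to a set X of pairs (n, m).
    out = {(0, 1)}                                   # the axiom, with empty premise
    out |= {(n + 1, (n + 1) * m) for (n, m) in X}    # the inductive rule
    return X | out

F = set()
for _ in range(6):           # F_0 = {}, F_{k+1} = F_k U Phi(F_k)
    F = phi_op(F)
print(sorted(F))             # [(0, 1), (1, 1), (2, 2), (3, 6), (4, 24), (5, 120)]

The formal proof proceeds along exactly these lines.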
First, a sequence Fk is constructed, for $k = 0, 1, \ldots$. Let F0 be the empty set. Proceeding inductively, for each k, let Fk + 1 be $F_k \cup \Phi(F_k)$. Finally, F is taken to be $\bigcup F_k$. The remainder of the proof consists of a verification that F is recursively enumerable and is the least fixed point of Φ. The sequence Fk used in this proof corresponds to the Kleene chain in the proof of the Kleene fixed-point theorem.

The second part of the first recursion theorem follows from the first part. The assumption that Φ is a recursive operator is used to show that the fixed point of Φ is the graph of a partial function. The key point is that if the fixed point F is not the graph of a function, then there is some k such that Fk is not the graph of a function.

Compared to the second recursion theorem, the first recursion theorem produces a stronger conclusion but only when narrower hypotheses are satisfied. Rogers uses the term weak recursion theorem for the first recursion theorem and strong recursion theorem for the second recursion theorem.

One difference between the first and second recursion theorems is that the fixed points obtained by the first recursion theorem are guaranteed to be least fixed points, while those obtained from the second recursion theorem may not be least fixed points.

A second difference is that the first recursion theorem only applies to systems of equations that can be recast as recursive operators. This restriction is similar to the restriction to continuous operators in the Kleene fixed-point theorem of order theory. The second recursion theorem can be applied to any total recursive function.

In the context of his theory of numberings, Ershov showed that Kleene's recursion theorem holds for any precomplete numbering. A Gödel numbering is a precomplete numbering on the set of computable functions, so the generalized theorem yields the Kleene recursion theorem as a special case.

Given a precomplete numbering $\nu$, then for any partial computable function $f$ with two parameters there exists a total computable function $t$ with one parameter such that
$$
\forall n \in \mathbb{N} : \nu \circ f(n,t(n)) = \nu \circ t(n).
$$

diff --git a/wiki/wikipedia/2726.txt b/wiki/wikipedia/2726.txt deleted file mode 100644 index ad9b0bca0c9364e4c8187e1e09eeb0849b4c85a7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2726.txt +++ /dev/null @@ -1,12 +0,0 @@

The tangent-secant theorem describes the relation between the line segments created by a secant and a tangent line of a circle.

This result is found as Proposition 36 in Book 3 of Euclid's Elements.

Given a secant g intersecting the circle at points G1 and G2 and a tangent t intersecting the circle at point T, and given that g and t intersect at point P, the following equation holds:
$$
|PT|^2=|PG_1|\cdot|PG_2|
$$

The tangent-secant theorem can be proven using similar triangles.

Like the intersecting chords theorem and the intersecting secants theorem, the tangent-secant theorem represents one of the three basic cases of a more general theorem about two intersecting lines and a circle, namely, the power of a point theorem.

diff --git a/wiki/wikipedia/2727.txt b/wiki/wikipedia/2727.txt deleted file mode 100644 index 68572587b4513ae12f5c83f6cadd22177a3c85ab..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2727.txt +++ /dev/null @@ -1,5 +0,0 @@

In computing, a blind write occurs when a transaction writes a value without reading it.
Any view serializable schedule that is not conflict serializable must contain a blind write.

In particular, a write wi(X) is said to be blind if it is not the last action on resource X and the following action on X is a write wj(X).

Category:Transaction processing

diff --git a/wiki/wikipedia/2728.txt b/wiki/wikipedia/2728.txt deleted file mode 100644 index cdf3c8d18ada8b3b286d3b93ff069a6d9854a6cc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2728.txt +++ /dev/null @@ -1,9 +0,0 @@

Concept mapping and mind mapping software is used to create diagrams of relationships between concepts, ideas, or other pieces of information. It has been suggested that the mind mapping technique can improve learning and study efficiency up to 15% over conventional note-taking. Many software packages and websites support the creation of mind maps.

Using a standard file format allows interchange of files between various programs. Many programs listed below support the mm format used by FreeMind, which is an XML text format of tagged objects.

The following tools comply with the Free Software Foundation's (FSF) definition of free software. As such, they are also open-source software.

The following is a list of notable concept mapping and mind mapping applications which are freeware, and available at no cost. Some are open source, and others are proprietary software.

The table below lists pieces of proprietary commercial software that allow creating mind and concept maps.

diff --git a/wiki/wikipedia/2729.txt b/wiki/wikipedia/2729.txt deleted file mode 100644 index d86b96ace6c9b2f69fb0edcddcdb6e4936f38dba..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2729.txt +++ /dev/null @@ -1,115 +0,0 @@

In combinatorics, Vandermonde's identity (or Vandermonde's convolution) is the following identity for binomial coefficients:
$$
{m+n \choose r}=\sum_{k=0}^r{m \choose k}{n \choose r-k}
$$

for any nonnegative integers r, m, n. The identity is named after Alexandre-Théophile Vandermonde (1772), although it was already known in 1303 by the Chinese mathematician Zhu Shijie.

There is a q-analog to this theorem called the q-Vandermonde identity.

Vandermonde's identity can be generalized in numerous ways, including to the identity
$$
{ n_1+\dots +n_p \choose m }= \sum_{k_1+\cdots +k_p = m} {n_1\choose k_1} {n_2\choose k_2} \cdots {n_p\choose k_p}.
$$

In general, the product of two polynomials with degrees m and n, respectively, is given by
$$
\biggl(\sum_{i=0}^m a_ix^i\biggr) \biggl(\sum_{j=0}^n b_jx^j\biggr) = \sum_{r=0}^{m+n}\biggl(\sum_{k=0}^r a_k b_{r-k}\biggr) x^r,
$$

where we use the convention that ai = 0 for all integers i > m and bj = 0 for all integers j > n. By the binomial theorem,
$$
(1+x)^{m+n} = \sum_{r=0}^{m+n} {m+n \choose r}x^r.
$$

Using the binomial theorem also for the exponents m and n, and then the above formula for the product of polynomials, we obtain
$$
\begin{align}
\sum_{r=0}^{m+n} {m+n \choose r}x^r
&= (1+x)^{m+n}\\
&= (1+x)^m (1+x)^n \\
&= \biggl(\sum_{i=0}^m {m\choose i}x^i\biggr) \biggl(\sum_{j=0}^n {n\choose j}x^j\biggr)\\
&=\sum_{r=0}^{m+n}\biggl(\sum_{k=0}^r {m\choose k} {n\choose r-k}\biggr) x^r,
\end{align}
$$

where the above convention for the coefficients of the polynomials agrees with the definition of the binomial coefficients, because both give zero for all i > m and j > n, respectively.
By comparing coefficients of $x^r$, Vandermonde's identity follows for all integers r with 0 ≤ r ≤ m + n. For larger integers r, both sides of Vandermonde's identity are zero due to the definition of binomial coefficients.

Vandermonde's identity also admits a combinatorial double counting proof, as follows. Suppose a committee consists of m men and n women. In how many ways can a subcommittee of r members be formed? The answer is
$$
{m+n \choose r}.
$$

The answer is also the sum over all possible values of k, of the number of subcommittees consisting of k men and r − k women:
$$
\sum_{k=0}^r{m \choose k}{n \choose r-k}.
$$

Take a rectangular grid of r × (m+n−r) squares. There are
$$
\binom{r+(m+n-r)}{r}=\binom{m+n}{r}
$$

paths that start on the bottom left vertex and, moving only upwards or rightwards, end at the top right vertex (this is because r right moves and m+n−r up moves must be made (or vice versa) in any order, and the total path length is m + n). Call the bottom left vertex (0, 0).

There are $\binom{m}{k}$ paths starting at (0, 0) that end at (k, m−k), as k right moves and m−k upward moves must be made (and the path length is m). Similarly, there are $\binom{n}{r-k}$ paths starting at (k, m−k) that end at (r, m+n−r), as a total of r−k right moves and (m+n−r) − (m−k) upward moves must be made and the path length must be r−k + (m+n−r) − (m−k) = n. Thus there are
$$
 \binom{m}{k}\binom{n}{r-k}
$$

paths that start at (0, 0), end at (r, m+n−r), and go through (k, m−k). This is a subset of all paths that start at (0, 0) and end at (r, m+n−r), so sum from k = 0 to k = r (as the point (k, m−k) is confined to be within the grid) to obtain the total number of paths that start at (0, 0) and end at (r, m+n−r).

One can generalize Vandermonde's identity as follows:
$$
\sum_{k_1+\cdots +k_p = m} {n_1\choose k_1} {n_2\choose k_2} \cdots {n_p\choose k_p} = { n_1+\dots +n_p \choose m }.
$$

This identity can be obtained through the algebraic derivation above when more than two polynomials are used, or through a simple double counting argument.

On the one hand, one chooses $\textstyle k_1$ elements out of a first set of $\textstyle n_1$ elements; then $\textstyle k_2$ out of another set, and so on, through $\textstyle p$ such sets, until a total of $\textstyle m$ elements have been chosen from the $\textstyle p$ sets. One therefore chooses $\textstyle m$ elements out of $\textstyle n_1+\dots +n_p$ in the left-hand side, which is also exactly what is done in the right-hand side.

The identity generalizes to non-integer arguments. In this case, it is known as the Chu-Vandermonde identity (see Askey 1975, pp. 59-60) and takes the form
$$
{s+t \choose n}=\sum_{k=0}^n {s \choose k}{t \choose n-k}
$$

for general complex-valued s and t and any non-negative integer n. It can be proved along the lines of the algebraic proof above by multiplying the binomial series for $(1+x)^s$ and $(1+x)^t$ and comparing terms with the binomial series for $(1+x)^{s+t}$.

This identity may be rewritten in terms of the falling Pochhammer symbols as
$$
(s+t)_n = \sum_{k=0}^n {n \choose k} (s)_k (t)_{n-k}
$$

in which form it is clearly recognizable as an umbral variant of the binomial theorem (for more on umbral variants of the binomial theorem, see binomial type).
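Both the integer identity and its falling-factorial form are easy to sanity-check numerically. The following Python snippet (a spot check under our naming, not a proof) verifies the binomial form for small arguments and the umbral form at non-integer values:

from math import comb
from itertools import product

# Spot-check the binomial form for small nonnegative m, n, r.
for m, n, r in product(range(8), repeat=3):
    assert comb(m + n, r) == sum(comb(m, k) * comb(n, r - k) for k in range(r + 1))

def falling(x, k):
    # The falling factorial (x)_k = x (x - 1) ... (x - k + 1).
    out = 1
    for i in range(k):
        out *= x - i
    return out

# Check the umbral (falling factorial) form at non-integer arguments.
s, t, n = 2.5, -1.25, 6
lhs = falling(s + t, n)
rhs = sum(comb(n, k) * falling(s, k) * falling(t, n - k) for k in range(n + 1))
assert abs(lhs - rhs) < 1e-9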
The Chu-Vandermonde identity can also be seen to be a special case of Gauss's hypergeometric theorem, which states that
$$
_2F_1(a,b;c;1) = \frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}
$$

where $_2F_1$ is the hypergeometric function and $\Gamma(n+1)=n!$ is the gamma function. One regains the Chu-Vandermonde identity by taking a = −n and applying the identity
$$
{n\choose k} = (-1)^k {k-n-1 \choose k}
$$

liberally.

The Rothe–Hagen identity is a further generalization of this identity.

When both sides have been divided by the expression on the left, so that the sum is 1, then the terms of the sum may be interpreted as probabilities. The resulting probability distribution is the hypergeometric distribution. That is the probability distribution of the number of red marbles in r draws without replacement from an urn containing n red and m blue marbles.

diff --git a/wiki/wikipedia/273.txt b/wiki/wikipedia/273.txt deleted file mode 100644 index 310e65ee631c683654ef05a6ae94e1a37cd0259e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/273.txt +++ /dev/null @@ -1,89 +0,0 @@

"Cheryl's Birthday" is a logic puzzle, specifically a knowledge puzzle. The objective is to determine the birthday of a girl named Cheryl using a handful of clues given to her friends Albert and Bernard. Written by Dr Joseph Yeo Boon Woi of Singapore's National Institute of Education, the question was posed as part of the Singapore and Asian Schools Math Olympiad (SASMO) in 2015, and was first posted online by Singapore television presenter Kenneth Kong. It went viral in a matter of days.

An early version of Cheryl's Birthday, with different names and dates, appeared in an online forum in 2006.

The SASMO version of the question was posted on Facebook by Singapore television presenter Kenneth Kong on April 10, 2015, and quickly went viral; it was actually part of the 2015 Singapore and Asian Schools Math Olympiad meant for 14-year-old students, a fact later acknowledged by Kong. The competition was held on 8 April 2015, with 28,000 participants from Singapore, Thailand, Vietnam, China and the UK. According to SASMO's organisers, the quiz was aimed at the top 40 per cent of the contestants and was intended to "sift out the better students". SASMO's executive director told the BBC that "there was a place for some kind of logical and analytical thinking in the workplace and in our daily lives".

The question is number 24 in a list of 25 questions, and reads as follows:

    Albert and Bernard just became friends with Cheryl, and they want to know when her birthday is. Cheryl gives them a list of 10 possible dates:

    May 15, May 16, May 19
    June 17, June 18
    July 14, July 16
    August 14, August 15, August 17

    Cheryl then tells Albert and Bernard separately the month and the day of her birthday respectively.

    Albert: I don't know when Cheryl's birthday is, but I know that Bernard does not know too.

    Bernard: At first I don't know when Cheryl's birthday is, but I know now.

    Albert: Then I also know when Cheryl's birthday is.

    So when is Cheryl's birthday?

The intended answer is July 16. Albert's statement rules out May and June: had the month been May or June, Bernard could have been told 19 or 18 and would then have known the birthday immediately, so Albert could not have been certain that Bernard does not know. Bernard's statement then rules out the 14th, which would still be ambiguous between July and August, leaving July 16, August 15 and August 17. Albert's final statement rules out August, which still contains two candidates, and so the birthday is July 16.

Some solvers instead arrive at August 17. The solutions that arrive at this answer ignore that the latter part of:
    Albert: I don't know when Cheryl's birthday is, but I know that Bernard doesn't know too.
conveys information to Bernard about how Albert was able to deduce this. Bernard would only have known the birthday outright if the day were one of the unique ones, 18 or 19. Albert therefore is able to deduce that "Bernard doesn't know" only because he heard a month that does not contain those dates (July or August). Realizing this, Bernard can rule out May and June, which allows him to arrive at a unique birthday even if he is given the day 15 or 16, not just 17.

The SASMO organizers pointed out that August 17 would be the solution if the sequence of statements instead began with Bernard saying that he did not know Cheryl's birthday:
    Bernard: I don't know when Cheryl's birthday is.
    - -Albert: I still don't know when Cheryl's birthday is.
    - -Bernard: At first I didn't know when Cheryl's birthday is, but I know now.
    - -Albert: Then I also know when Cheryl's birthday is.
    - -It would also be the answer if the first statement were instead made by Cheryl: - -
    Cheryl: Bernard doesn't know when my birthday is.
    - -Albert: I still don't know when Cheryl's birthday is.
    - -Bernard: At first I didn't know when Cheryl's birthday is, but I know now.
    - -Albert: Then I also know when Cheryl's birthday is.
Note: The final statements by Albert in the two alternative examples only complete the dialogues; they are not needed by the reader to determine Cheryl's birthday as August 17. Both the original ordering of statements and the alternatives can be checked mechanically, as in the sketch below.
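The following brute-force sketch in Python (the helper names are ours) replays the statements as successive filters over the candidate dates; it prints [('July', 16)] for the original ordering and [('August', 17)] for the alternative orderings:

from collections import Counter

DATES = [("May", 15), ("May", 16), ("May", 19), ("June", 17), ("June", 18),
         ("July", 14), ("July", 16), ("August", 14), ("August", 15), ("August", 17)]

def knows(cands, part):
    # Dates whose month (part = 0) or day (part = 1) pins down a unique candidate.
    count = Counter(c[part] for c in cands)
    return [c for c in cands if count[c[part]] == 1]

def doesnt_know(cands, part):
    count = Counter(c[part] for c in cands)
    return [c for c in cands if count[c[part]] > 1]

# Original ordering. Albert's compound statement says both that he does not
# know and that his month contains no uniquely identifying day.
step1 = [c for c in doesnt_know(DATES, 0)
         if not any(k[0] == c[0] for k in knows(DATES, 1))]
step2 = knows(step1, 1)        # Bernard: "but I know now"
print(knows(step2, 0))         # Albert: "then I also know" -> [('July', 16)]

# Alternative orderings: both begin by eliminating the uniquely dated days,
# whether Bernard says "I don't know" or Cheryl says it for him.
alt1 = doesnt_know(DATES, 1)   # Bernard: "I don't know"
alt2 = doesnt_know(alt1, 0)    # Albert: "I still don't know"
alt3 = knows(alt2, 1)          # Bernard: "now I know"
print(knows(alt3, 0))          # Albert: "then I also know" -> [('August', 17)]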
On May 14, 2015, Nanyang Technological University uploaded a second part to the question on Facebook, entitled "Cheryl's Age". It reads as follows:

    Albert and Bernard now want to know how old Cheryl is.
    - -Cheryl: I have two younger brothers. The product of all our ages (i.e. my age and the ages of my two brothers) is 144, assuming that we use whole numbers for our ages.
    - -Albert: We still don't know your age. What other hints can you give us?
    - -Cheryl: The sum of all our ages is the bus number of this bus that we are on.
    - -Bernard: Of course we know the bus number, but we still don't know your age.
    - -Cheryl: Oh, I forgot to tell you that my brothers have the same age.
    - -Albert and Bernard: Oh, now we know your age.
    - -So what is Cheryl's age? - -===Solution to sequel=== - -Cheryl first says that she is the oldest of three siblings, and that their ages multiplied makes 144. 144 can be decomposed into prime number factors by the fundamental theorem of arithmetic (144 = 24 × 32), and all possible ages for Cheryl and her two brothers examined (for example, 16, 9, 1, or 8, 6, 3, and so on). The sums of the ages can then be computed. Because Bernard (who knows the bus number) cannot determine Cheryl's age despite having been told this sum, it must be a sum that is not unique among the possible solutions. On examining all the possible ages, it turns out there are two pairs of sets of possible ages that produce the same sum as each other: 9, 4, 4 and 8, 6, 3, which sum to 17, and 12, 4, 3 and 9, 8, 2, which sum to 19. Cheryl then says that her brothers are the same age, which eliminates the last three possibilities and leaves only 9, 4, 4, so we can deduce that Cheryl is 9 years old and her brothers are 4 years old, and the bus the three of them are on has the number 17. - -On May 25, 2015, mathematics writer Alex Bellos published a follow-up to the puzzle, entitled "Denise's Revenge", in his column "Alex Bellos's Monday Puzzle" in The Guardian. This sequel was also written by Dr Yeo, the original author of "Cheryl's Birthday". The puzzle features a new character, Denise, whose birth date the three original characters aim to determine. The puzzle states: - -
On May 25, 2015, mathematics writer Alex Bellos published a follow-up to the puzzle, entitled "Denise's Revenge", in his column "Alex Bellos's Monday Puzzle" in The Guardian. This sequel was also written by Dr Yeo, the original author of "Cheryl's Birthday". The puzzle features a new character, Denise, whose birth date the three original characters aim to determine. The puzzle states:

    Albert, Bernard and Cheryl became friends with Denise, and they wanted to know when her birthday is. Denise gave them a list of 20 possible dates.

    17 Feb 2001, 16 Mar 2002, 13 Jan 2003, 19 Jan 2004
    - -13 Mar 2001, 15 Apr 2002, 16 Feb 2003, 18 Feb 2004
    - -13 Apr 2001, 14 May 2002, 14 Mar 2003, 19 May 2004
    - -15 May 2001, 12 Jun 2002, 11 Apr 2003, 14 Jul 2004
    - -17 Jun 2001, 16 Aug 2002, 16 Jul 2003, 18 Aug 2004
    - -Denise then told Albert, Bernard and Cheryl separately the month, the day and the year of her birthday respectively. The following conversation ensues: - -Albert: I don't know when Denise's birthday is, but I know that Bernard does not know.
    - -Bernard: I still don't know when Denise's birthday is, but I know that Cheryl still does not know.
    - -Cheryl: I still don't know when Denise's birthday is, but I know that Albert still does not know.
    - -Albert: Now I know when Denise's birthday is.
    - -Bernard: Now I know too.
    - -Cheryl: Me too.
    - -So, when is Denise's birthday?
    - -The next day, Bellos published the solution to "Denise's Revenge", which is solved in the same way as "Cheryl's Birthday", by successive eliminations. The correct solution is 14 May 2002. diff --git a/wiki/wikipedia/2730.txt b/wiki/wikipedia/2730.txt deleted file mode 100644 index e89fa490a571b626281b4ce8d7b64925ae2c2749..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2730.txt +++ /dev/null @@ -1,11 +0,0 @@ -In graph theory, a branch of mathematics, the Moser spindle (also called the Mosers' spindle or Moser graph) is an undirected graph, named after mathematicians Leo Moser and his brother William, with seven vertices and eleven edges. It is a unit distance graph requiring four colors in any graph coloring, and its existence can be used to prove that the chromatic number of the plane is at least four. - -The Moser spindle has also been called the Hajós graph after György Hajós, as it can be viewed as an instance of the Hajós construction. However, the name "Hajós graph" has also been applied to a different graph, in the form of a triangle inscribed within a hexagon. - -As a unit distance graph, the Moser spindle is formed by two rhombi with 60 and 120 degree angles, so that the sides and short diagonals of the rhombi form equilateral triangles. The two rhombi are placed in the plane, sharing one of their acute-angled vertices, in such a way that the remaining two acute-angled vertices are a unit distance apart from each other. The eleven edges of the graph are the eight rhombus sides, the two short diagonals of the rhombi, and the edge between the unit-distance pair of acute-angled vertices. - -The Moser spindle may also be constructed graph-theoretically, without reference to a geometric embedding, using the Hajós construction starting with two complete graphs on four vertices. This construction removes an edge from each complete graph, merges two of the endpoints of the removed edges into a single vertex shared by both cliques, and adds a new edge connecting the remaining two endpoints of the removed edge. Even more directly, each independent set in the Moser spindle has at most two vertices, - -Adding any edge to the Moser spindle results in a graph that cannot be embedded in the plane as a unit distance graph, and there does not exist a graph homomorphism from the Moser spindle to any smaller unit distance graph. These two properties of the Moser spindle were used by Horvat to show the NP-hardness of testing whether a given graph has a two-dimensional unit distance representation; the proof uses a reduction from 3SAT in which the Moser spindle is used as the central truth-setting gadget in the reduction. - -The Moser spindle can also be used to prove a result in Euclidean Ramsey theory: if T is any triangle in the plane, and the points of the plane are two-colored black and white, then there is either a black translate of T or a pair of white points at unit distance from each other. For, let M be a unit-distance embedding of the Moser spindle, and let M + T be the Minkowski sum of M and T. If M + T has no white unit-distance pair, then each of the three copies of the Moser spindle in M + T must have at most two white points, because the white points in each copy must form an independent set and the largest independent set in the Moser spindle has size two. Therefore, among the seven vertices of the Moser spindle, there are at most six that have a white copy in M + T, so there must be one of the seven vertices all of whose copies are black. 
But then the three copies of this vertex form a translate of T.

diff --git a/wiki/wikipedia/2731.txt b/wiki/wikipedia/2731.txt deleted file mode 100644 index a72dbfbb34d3147116d90985187e8dc331d9d8f6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2731.txt +++ /dev/null @@ -1,197 +0,0 @@

Mesh generation is the practice of creating a mesh, a subdivision of a continuous geometric space into discrete geometric and topological cells. Often these cells form a simplicial complex. Usually the cells partition the geometric input domain. Mesh cells are used as discrete local approximations of the larger domain. Meshes are created by computer algorithms, often with human guidance through a GUI, depending on the complexity of the domain and the type of mesh desired. The goal is to create a mesh that accurately captures the input domain geometry, with high-quality (well-shaped) cells, and without so many cells as to make subsequent calculations intractable. The mesh should also be fine (have small elements) in areas that are important for the subsequent calculations.

Meshes are used for rendering to a computer screen and for physical simulation such as finite element analysis or computational fluid dynamics. Meshes are composed of simple cells like triangles because, e.g., we know how to perform operations such as finite element calculations (engineering) or ray tracing (computer graphics) on triangles, but we do not know how to perform these operations directly on complicated spaces and shapes such as a roadway bridge. We can simulate the strength of the bridge, or draw it on a computer screen, by performing calculations on each triangle and calculating the interactions between triangles.

A major distinction is between structured and unstructured meshing. In structured meshing the mesh is a regular lattice, such as an array, with implied connectivity between elements. In unstructured meshing, elements may be connected to each other in irregular patterns, and more complicated domains can be captured. This page is primarily about unstructured meshes.

While a mesh may be a triangulation, the process of meshing is distinguished from point set triangulation in that meshing includes the freedom to add vertices not present in the input. "Facetting" (triangulating) CAD models for drafting has the same freedom to add vertices, but the goal is to represent the shape accurately using as few triangles as possible and the shape of individual triangles is not important. Computer graphics renderings of textures and realistic lighting conditions use meshes instead.

Much mesh generation software is coupled to a CAD system defining its input, and to simulation software that takes its output. The input can vary greatly, but common forms are Solid modeling, Geometric modeling, NURBS, B-rep, STL or a point cloud.

The terms "mesh generation," "grid generation," "meshing," and "gridding" are often used interchangeably, although strictly speaking the latter two are broader and encompass mesh improvement: changing the mesh with the goal of increasing the speed or accuracy of the numerical calculations that will be performed over it. In computer graphics rendering and in mathematics, a mesh is sometimes referred to as a tessellation.

Mesh faces (cells, entities) have different names depending on their dimension and the context in which the mesh will be used. In finite elements, the highest-dimensional mesh entities are called "elements," "edges" are 1D and "nodes" are 0D.
If the elements are 3D, then the 2D entities are "faces." In computational geometry, the 0D points are called vertices. Tetrahedra are often abbreviated as "tets"; triangles are "tris", quadrilaterals are "quads" and hexahedra (topological cubes) are "hexes."

Many meshing techniques are built on the principles of the Delaunay triangulation, together with rules for adding vertices, such as Ruppert's algorithm. A distinguishing feature is that an initial coarse mesh of the entire space is formed, then vertices and triangles are added. In contrast, advancing front algorithms start from the domain boundary, and add elements incrementally, filling up the interior. Hybrid techniques do both. A special class of advancing front techniques creates thin boundary layers of elements for fluid flow.

In structured mesh generation the entire mesh is a lattice graph, such as a regular grid of squares. Structured mesh generation for regular grids is an entire field itself, with mathematical techniques applied to ensure high-polynomial-order grid lines follow the solution space smoothly and accurately. In block-structured meshing, the domain is divided into large subregions, each of which is a structured mesh. Some direct methods start with a block-structured mesh and then move the mesh to conform to the input, as in approaches based on polycubes. Another direct method is to cut the structured cells by the domain boundary, as in approaches based on Marching cubes.

Some types of meshes are much more difficult to create than others. Simplicial meshes tend to be easier than cubical meshes. An important category is generating a hex mesh conforming to a fixed quad surface mesh; a research subarea is studying the existence and generation of meshes of specific small configurations, such as the tetragonal trapezohedron. Because of the difficulty of this problem, the existence of combinatorial hex meshes has been studied apart from the problem of generating good geometric realizations. While known algorithms generate simplicial meshes with guaranteed minimum quality, such guarantees are rare for cubical meshes, and many popular implementations generate inverted (inside-out) hexes from some inputs.

Meshes are often created in serial on workstations, even when subsequent calculations over the mesh will be done in parallel on super-computers. This is both because of the limitation that most mesh generators are interactive, and because mesh generation runtime is typically insignificant compared to solver time. However, if the mesh is too large to fit in the memory of a single serial machine, or the mesh must be changed (adapted) during the simulation, meshing is done in parallel.

Usually the cells are polygonal or polyhedral and form a mesh that partitions the domain. Important classes of two-dimensional elements include triangles (simplices) and quadrilaterals (topological squares). In three dimensions the most common cells are tetrahedra (simplices) and hexahedra (topological cubes). Simplicial meshes may be of any dimension and include triangles (2D) and tetrahedra (3D) as important instances. Cubical meshes form the pan-dimensional category that includes quads (2D) and hexes (3D). In 3D, 4-sided pyramids and 3-sided prisms appear in conformal meshes of mixed cell type.

The mesh is embedded in a geometric space that is typically two or three dimensional, although sometimes the dimension is increased by one by adding the time dimension. Higher dimensional meshes are used in niche contexts.
One-dimensional meshes are useful as well. A significant category is surface meshes, which are 2D meshes embedded in 3D to represent a curved surface. - -Dual graphs have several roles in meshing. One can make a polyhedral Voronoi diagram mesh by dualizing a Delaunay triangulation simplicial mesh. One can create a cubical mesh by generating an arrangement of surfaces and dualizing the intersection graph; see spatial twist continuum. Sometimes both the primal mesh and its dual mesh are used in the same simulation; see Hodge star operator. This arises from physics involving divergence and curl (mathematics) operators, such as flux & vorticity or electricity & magnetism, where one variable naturally lives on the primal faces and its counterpart on the dual faces. - -Three-dimensional meshes created for finite element analysis need to consist of tetrahedra, pyramids, prisms or hexahedra. Those used for the finite volume method can consist of arbitrary polyhedra. Those used for finite difference methods consist of piecewise structured arrays of hexahedra known as multi-block structured meshes. - -4-sided pyramids are useful to conformally connect hexes to tets. 3-sided prisms are used for boundary layers conforming to a tet mesh of the far-interior of the object. - -Surface meshes are useful in computer graphics where the surfaces of objects reflect light (also subsurface scattering) and a full 3D mesh is not needed. Surface meshes are also used to model thin objects such as sheet metal in auto manufacturing and building exteriors in architecture. High (e.g., 17) dimensional cubical meshes are common in astrophysics and string theory. - -What is the precise definition of a mesh? There is not a universally-accepted mathematical description that applies in all contexts. - -However, some mathematical objects are clearly meshes: a simplicial complex is a mesh composed of simplices. - -Most polyhedral (e.g. cubical) meshes are conformal, meaning they have the cell structure of a CW complex, a generalization of a simplicial complex. A mesh need not be simplicial because an arbitrary subset of nodes of a cell is not necessarily a cell: e.g., three nodes of a quad do not define a cell. - -However, two cells intersect only in cells: e.g., a quad does not have a node in its interior. The intersection of two cells may be several cells: e.g., two quads may share two edges. An intersection being more than one cell is sometimes forbidden and rarely desired; the goal of some mesh improvement techniques (e.g. pillowing) is to remove these configurations. In some contexts, a distinction is made between a topological mesh and a geometric mesh whose embedding satisfies certain quality criteria. - -Important mesh variants that are not CW complexes include non-conformal meshes where cells do not meet strictly face-to-face, but the cells nonetheless partition the domain. An example of this is an octree, where an element face may be partitioned by the faces of adjacent elements. Such meshes are useful for flux-based simulations. In overset grids, there are multiple conformal meshes that overlap geometrically and do not partition the domain; see e.g., Overflow, the OVERset grid FLOW solver. So-called meshless or meshfree methods often make use of some mesh-like discretization of the domain, and have basis functions with overlapping support. Sometimes a local mesh is created near each simulation degree-of-freedom point, and these meshes may overlap and be non-conformal to one another.
- -Implicit triangulations are based on a delta complex: each triangle is represented by the lengths of its edges, together with a gluing map between face edges. - -Many meshes use linear elements, where the mapping from the abstract to realized element is linear, and mesh edges are straight segments. - -Higher order polynomial mappings are common, especially quadratic. - -A primary goal for higher-order elements is to more accurately represent the domain boundary, although they have accuracy benefits in the interior of the mesh as well. - -One of the motivations for cubical meshes is that linear cubical elements have some of the same numerical advantages as quadratic simplicial elements. - -In the isogeometric analysis simulation technique, the mesh cells containing the domain boundary use the CAD representation directly instead of a linear or polynomial approximation. - -Improving a mesh involves changing its discrete connectivity, the continuous geometric position of its cells, or both. For discrete changes, for simplicial elements one swaps edges and inserts/removes nodes. The same kinds of operations are done for cubical (quad/hex) meshes, although there are fewer possible operations and local changes have global consequences. E.g., for a hexahedral mesh, merging two nodes creates cells that are not hexes, but if diagonally-opposite nodes on a quadrilateral are merged and this is propagated into collapsing an entire face-connected column of hexes, then all remaining cells will still be hexes. - -In adaptive mesh refinement, elements are split (h-refinement) in areas where the function being calculated has a high gradient. - -Meshes are also coarsened, removing elements for efficiency. The multigrid method does something similar to refinement and coarsening to speed up the numerical solve, but without actually changing the mesh. - -For continuous changes, nodes are moved, or the higher-dimensional faces are moved by changing the polynomial order of elements. Moving nodes to improve quality is called "smoothing" or "r-refinement" and increasing the order of elements is called "p-refinement." Nodes are also moved in simulations where the shape of objects changes over time. This degrades the shape of the elements. If the object deforms enough, the entire object is remeshed and the current solution mapped from the old mesh to the new mesh. - -The field is highly interdisciplinary, with contributions found in mathematics, computer science, and engineering. Meshing R&D is distinguished by an equal focus on discrete and continuous math and computation, as with computational geometry, but in contrast to graph theory (discrete) and numerical analysis (continuous). Mesh generation is deceptively difficult: it is easy for humans to see how to create a mesh of a given object, but difficult to program a computer to make good decisions for arbitrary input a priori. There is an infinite variety of geometry found in nature and man-made objects. Many mesh generation researchers were first users of meshes. Mesh generation continues to receive widespread attention, support and funding because the human-time to create a mesh dwarfs the time to set up and solve the calculation once the mesh is finished.
This has always been the situation since numerical simulation and computer graphics were invented, because as computer hardware and simple equation-solving software have improved, people have been drawn to larger and more complex geometric models in a drive for greater fidelity, scientific insight, and artistic expression. - -Meshing research is published in a broad range of journals. This is in keeping with the interdisciplinary nature of the research required to make progress, and also the wide variety of applications that make use of meshes. About 150 meshing publications appear each year across 20 journals, with at most 20 publications appearing in any one journal. There is no journal whose primary topic is meshing. The journals that publish at least 10 meshing papers per year are in bold. - -* Advances in Engineering Software - -* American Institute of Aeronautics and Astronautics Journal (AIAAJ) - -* Algorithmica - -* Applied Computational Electromagnetics Society Journal - -* Applied Numerical Mathematics - -* Astronomy and Computing - -* Computational Geometry: Theory and Applications - -* Computer-Aided Design, often including a special issue devoted to extended papers from the IMR (see conferences below) - -* Computer Aided Geometric Design (CAGD) - -* Computer Graphics Forum (Eurographics) - -* Computer Methods in Applied Mechanics and Engineering - -* Discrete and Computational Geometry - -* Engineering with Computers - -* Finite Elements in Analysis and Design - -* International Journal for Numerical Methods in Engineering (IJNME) - -* International Journal for Numerical Methods in Fluids - -* International Journal for Numerical Methods in Biomedical Engineering - -* International Journal of Computational Geometry & Applications - -* Journal of Computational Physics (JCP) - -* Journal on Numerical Analysis - -* Journal on Scientific Computing (SISC) - -* Transactions on Graphics (ACM TOG) - -* Transactions on Mathematical Software (ACM TOMS) - -* Transactions on Visualization and Computer Graphics (IEEE TVCG) - -* Lecture Notes in Computational Science and Engineering (LNCSE) - -* Computational Mathematics and Mathematical Physics (CMMP) - -(Conferences whose primary topic is meshing are in bold.) - -* Aerospace Sciences Meeting AIAA (15 meshing talks/papers) - -* Canadian Conference on Computational Geometry CCCG - -* CompIMAGE: International Symposium Computational Modeling of Objects Represented in Images - -* Computational Fluid Dynamics Conference AIAA - -* Computational Fluid Dynamics Conference ECCOMAS - -* Computational Science & Engineering CS&E - -* Conference on Numerical Grid Generation ISGG - -* Eurographics Annual Conference (Eurographics) (proceedings in Computer Graphics Forum) - -* Geometric & Physical Modeling SIAM - -* International Conference on Isogeometric Analysis IGA - -* International Meshing Roundtable (IMR) - -* International Symposium on Computational Geometry SoCG - -* Numerical Geometry, Grid Generation and Scientific Computing (NUMGRID) (proceedings in Lecture Notes in Computational Science and Engineering) - -* SIGGRAPH (proceedings in ACM Transactions on Graphics) - -* Symposium on Geometry Processing SGP (Eurographics) (proceedings in Computer Graphics Forum) - -* World Congress on Engineering - -Workshops whose primary topic is meshing are in bold.
- -* Conference on Geometry: Theory and Applications CGTA - -* European Workshop on Computational Geometry EuroCG - -* Fall Workshop on Computational Geometry - -* Finite Elements in Fluids FEF - -* MeshTrends Symposium (in WCCM or USNCCM alternate years) - -* Polytopal Element Methods in Mathematics and Engineering - -* Tetrahedron workshop diff --git a/wiki/wikipedia/2732.txt b/wiki/wikipedia/2732.txt deleted file mode 100644 index 12da0181e95325cdead193d8afdd930e92f84ac1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2732.txt +++ /dev/null @@ -1,102 +0,0 @@ -The Gell-Mann and Low theorem is a theorem in quantum field theory that allows one to relate the ground (or vacuum) state of an interacting system to the ground state of the corresponding non-interacting theory. It was proved in 1951 by Murray Gell-Mann and Francis E. Low. The theorem is useful because, among other things, by relating the ground state of the interacting theory to its non-interacting ground state, it allows one to express Green's functions (which are defined as expectation values of Heisenberg-picture fields in the interacting vacuum) as expectation values of interaction picture fields in the non-interacting vacuum. While typically applied to the ground state, the Gell-Mann and Low theorem applies to any eigenstate of the Hamiltonian. Its proof relies on the concept of starting with a non-interacting Hamiltonian and adiabatically switching on the interactions. - -The theorem was proved first by Gell-Mann and Low in 1951, making use of the Dyson series. In 1969 Klaus Hepp provided an alternative derivation for the case where the original Hamiltonian describes free particles and the interaction is norm bounded. In 1989 Nenciu and Rasche proved it using the adiabatic theorem. A proof that does not rely on the Dyson expansion was given in 2007 by Molinari. - -Let $|\Psi_0\rangle$ be an eigenstate of $H_0$ with energy $E_0$ and let the 'interacting' Hamiltonian be $H=H_0 + gV$, where $g$ is a coupling constant and $V$ the interaction term. We define a Hamiltonian $H_\epsilon=H_0 + e^{-\epsilon |t|}gV$ which effectively interpolates between $H$ and $H_0$ in the limit $\epsilon \rightarrow 0^+$ and $|t|\rightarrow\infty$. Let $U_{\epsilon I}$ denote the evolution operator in the interaction picture. The Gell-Mann and Low theorem asserts that if the limit as $\epsilon\rightarrow 0^+$ of -$$ - |\Psi^{(\pm)}_\epsilon \rangle = \frac{ U_{\epsilon I} (0,\pm\infty) |\Psi_0 \rangle}{\langle \Psi_0 | U_{\epsilon I}(0,\pm\infty)|\Psi_0\rangle} -$$ - -exists, then the limit states $|\Psi^{(\pm)}\rangle = \lim_{\epsilon\rightarrow 0^+}|\Psi^{(\pm)}_\epsilon \rangle$ are eigenstates of $H$. - -Note that when applied to, say, the ground state, the theorem does not guarantee that the evolved state will be a ground state. In other words, level crossing is not excluded. - -As in the original paper, the theorem is typically proved making use of Dyson's expansion of the evolution operator. Its validity however extends beyond the scope of perturbation theory as has been demonstrated by Molinari. We follow Molinari's method here. Focus on $H_\epsilon$ and let $g=e^{\epsilon \theta}$. From Schrödinger's equation for the time-evolution operator -$$ - i\hbar \partial_{t_1} U_\epsilon(t_1,t_2) = H_\epsilon(t_1) U_\epsilon(t_1,t_2) -$$ - -and the boundary condition $U_\epsilon(t_2,t_2)=1$ we can formally write - - - -U_\epsilon(t_1,t_2) = 1+ \frac{1}{i\hbar} \int_{t_2}^{t_1} dt' (H_0 + e^{\epsilon(\theta -|t'|)} V) U_\epsilon(t',t_2). - - - -Focus for the moment on the case $0\geq t_1\geq t_2$.
Through a change of variables $\tau=t'+\theta$ we can write - - - -U_\epsilon(t_1,t_2) = 1+ \frac{1}{i\hbar} \int_{\theta +t_2}^{\theta+t_1} d\tau (H_0 + e^{\epsilon \tau} V) U_\epsilon(\tau-\theta,t_2). - - - -We therefore have that - - - -\partial_\theta U_\epsilon(t_1,t_2) = \epsilon g \partial_g U_\epsilon(t_1,t_2) = \partial_{t_1} U_\epsilon(t_1,t_2) + \partial_{t_2} U_\epsilon(t_1,t_2). - - - -This result can be combined with the Schrödinger equation and its adjoint -$$ - -i\hbar \partial_{t_1} U_\epsilon(t_2,t_1) = U_\epsilon(t_2,t_1) H_\epsilon(t_1) -$$ - -to obtain - - - -i\hbar \epsilon g \partial_g U_\epsilon(t_1,t_2) = H_\epsilon(t_1)U_\epsilon(t_1,t_2)- U_\epsilon (t_1,t_2)H_\epsilon (t_2). - - - -The corresponding equation between $H_{\epsilon I}, U_{\epsilon I}$ is the same. It can be obtained by pre-multiplying both sides with $e^{i H_0 t_1/\hbar}$, post-multiplying with $e^{-i H_0 t_2 /\hbar}$ and making use of - - - -U_{\epsilon I} (t_1,t_2) = e^{i H_0 t_1/\hbar} U_{\epsilon}(t_1,t_2) e^{-i H_0 t_2 /\hbar}. - - - -The other case we are interested in, namely $t_2\geq t_1 \geq 0$, can be treated in an analogous fashion - -and yields an additional minus sign in front of the commutator (we are not concerned here with the case where $t_1, t_2$ have mixed signs). In summary, we obtain - - - -\left(H_{\epsilon, t=0}-E_0 \pm i \hbar \epsilon g \partial_g\right) U_{\epsilon I}(0,\pm\infty) |\Psi_0\rangle = 0. - - - -We proceed for the negative-times case. Abbreviating the various operators for clarity -$$ -i \hbar \epsilon g \partial_g \left(U|\Psi_0\rangle\right) = (H_\epsilon-E_0)U|\Psi_0\rangle. -$$ - -Now using the definition of $\Psi_\epsilon$ we differentiate and eliminate derivatives $\partial_g(U|\Psi_0\rangle)$ using the above expression, finding - - - -\begin{align} - -i \hbar \epsilon g \partial_g | \Psi_\epsilon \rangle &= - -\frac{1}{\langle\Psi_0| U |\Psi_0 \rangle} (H_\epsilon-E_0) U|\Psi_0\rangle - -- \frac{U|\Psi_0\rangle }{{\langle\Psi_0 |U| \Psi_0 \rangle}^2} \langle \Psi_0 | ( H_\epsilon-E_0 ) U | \Psi_0\rangle \\ - -&= (H_\epsilon-E_0)|\Psi_\epsilon\rangle - |\Psi_\epsilon\rangle \langle \Psi_0 |H_\epsilon-E_0|\Psi_\epsilon\rangle \\ - -& = \left[ H_\epsilon - E \right] |\Psi_\epsilon\rangle. - -\end{align} - - - -where $E = E_0 + \langle\Psi_0 | H_\epsilon-H_0 | \Psi_\epsilon\rangle$. We can now let $\epsilon\rightarrow 0^+$ as by assumption the $g \partial_g | \Psi_\epsilon \rangle$ on the left-hand side is finite. We then clearly see that $|\Psi_\epsilon\rangle$ is an eigenstate of $H$ and the proof is complete. diff --git a/wiki/wikipedia/2733.txt b/wiki/wikipedia/2733.txt deleted file mode 100644 index 800bd0a5787e4df3cbf11730d50aceb9ec2673c5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2733.txt +++ /dev/null @@ -1,5 +0,0 @@ -Ivan Vladislavovich Sleshinsky or Jan Śleszyński (23 July 1854 – 9 March 1931) was a Polish-Russian mathematician. He was born in Lysianka, Russian Empire, to Polish parents. - -Śleszyński's main work was on continued fractions, least squares and axiomatic proof theory based on mathematical logic. He and Alfred Pringsheim, working separately, proved what is now called the Śleszyński–Pringsheim theorem. - -His most important publications include: "Teoria dowodu" ("The theory of proof") in two volumes (1925, 1929), and "Teoria wyznaczników" ("The theory of determinants") (1926). He is buried at Rakowicki Cemetery.
diff --git a/wiki/wikipedia/2734.txt b/wiki/wikipedia/2734.txt deleted file mode 100644 index 926f2d610bbf52682cb83722edacfe98a8ec917c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2734.txt +++ /dev/null @@ -1,294 +0,0 @@ -In mathematics, the congruence lattice problem asks whether every algebraic distributive lattice is isomorphic to the congruence lattice of some other lattice. The problem was posed by Robert P. Dilworth, and for many years it was one of the most famous and long-standing open problems in lattice theory; it had a deep impact on the development of lattice theory itself. The conjecture that every distributive lattice is a congruence lattice is true for all distributive lattices with at most ℵ1 compact elements, but F. Wehrung provided a counterexample for distributive lattices with ℵ2 compact elements using a construction based on Kuratowski's free set theorem. - -We denote by Con A the congruence lattice of an algebra A, that is, the lattice of all congruences of A under inclusion. - -The following is a universal-algebraic triviality. It says that for a congruence, being finitely generated is a lattice-theoretical property. - -Lemma. - -A congruence of an algebra A is finitely generated if and only if it is a compact element of Con A. - -As every congruence of an algebra is the join of the finitely generated congruences below it (e.g., every submodule of a module is the union of all its finitely generated submodules), we obtain the following result, first published by Birkhoff and Frink in 1948. - -Theorem (Birkhoff and Frink 1948). - -The congruence lattice Con A of any algebra A is an algebraic lattice. - -While congruences of lattices lose something in comparison to groups, modules, rings (they cannot be identified with subsets of the universe), they also have a property not shared by the other structures encountered so far. - -Theorem (Funayama and Nakayama 1942). - -The congruence lattice of any lattice is distributive. - -This says that α ∧ (β ∨ γ) = (α ∧ β) ∨ (α ∧ γ), for any congruences α, β, and γ of a given lattice. The analogue of this result fails, for instance, for modules, as $A\cap(B+C)\neq(A\cap B)+(A\cap C)$, as a rule, for submodules A, B, C of a given module. - -Soon after this result, Dilworth proved the following result. He did not publish the result but it appears as an exercise credited to him in Birkhoff 1948. The first published proof is in Grätzer and Schmidt 1962. - -Theorem (Dilworth ≈1940, Grätzer and Schmidt 1962). - -Every finite distributive lattice is isomorphic to the congruence lattice of some finite lattice. - -It is important to observe that the solution lattice found in Grätzer and Schmidt's proof is sectionally complemented, that is, it has a least element (true for any finite lattice) and for all elements a ≤ b there exists an element x with a ∨ x = b and a ∧ x = 0. It is also in that paper that CLP is first stated in published form, although it seems that the earliest attempts at CLP were made by Dilworth himself. Congruence lattices of finite lattices have been given an enormous amount of attention, for which a reference is Grätzer's 2005 monograph. - ----- - -The congruence lattice problem (CLP): - -Is every distributive algebraic lattice isomorphic to the congruence lattice of some lattice? - ----- - -The problem CLP has been one of the most intriguing and longest-standing open problems of lattice theory. Some related results of universal algebra are the following. - -Theorem (Grätzer and Schmidt 1963).
- -Every algebraic lattice is isomorphic to the congruence lattice of some algebra. - -The lattice Sub V of all subspaces of a vector space V is certainly an algebraic lattice. As the next result shows, these algebraic lattices are difficult to represent. - -Theorem (Freese, Lampe, and Taylor 1979). - -Let V be an infinite-dimensional vector space over an uncountable field F. Then Con A isomorphic to Sub V implies that A has at least card F operations, for any algebra A. - -As V is infinite-dimensional, the largest element (unit) of Sub V is not compact. However innocuous it sounds, the compact unit assumption is essential in the statement of the result above, as demonstrated by the following result. - -Theorem (Lampe 1982). - -Every algebraic lattice with compact unit is isomorphic to the congruence lattice of some groupoid. - -The congruence lattice Con A of an algebra A is an algebraic lattice. The (∨,0)-semilattice of compact elements of Con A is denoted by Conc A, and it is sometimes called the congruence semilattice of A. Then Con A is isomorphic to the ideal lattice of Conc A. By using the classical equivalence between the category of all (∨,0)-semilattices and the category of all algebraic lattices (with suitable definitions of morphisms), we obtain the following semilattice-theoretical formulation of CLP. - ----- - -Semilattice-theoretical formulation of CLP: - -Is every distributive (∨,0)-semilattice isomorphic to the congruence semilattice of some lattice? - ----- - -Say that a distributive (∨,0)-semilattice is representable if it is isomorphic to Conc L, for some lattice L. So CLP asks whether every distributive (∨,0)-semilattice is representable. - -Many investigations around this problem involve diagrams of semilattices or of algebras. A most useful folklore result about these is the following. - -Theorem. - -The functor Conc, defined on all algebras of a given signature, to all (∨,0)-semilattices, preserves direct limits. - -We say that a (∨,0)-semilattice satisfies Schmidt's Condition if it is isomorphic to the quotient of a generalized Boolean semilattice B under some distributive join-congruence of B. One of the deepest results about representability of (∨,0)-semilattices is the following. - -Theorem (Schmidt 1968). - -Any (∨,0)-semilattice satisfying Schmidt's Condition is representable. - -This raised the following problem, stated in the same paper. - ----- - -Problem 1 (Schmidt 1968). - -Does every (∨,0)-semilattice satisfy Schmidt's Condition? - ----- - -Partial positive answers are the following. - -Theorem (Schmidt 1981). - -Every distributive lattice with zero satisfies Schmidt's Condition; thus it is representable. - -This result has been improved further as follows, via a very long and technical proof, using forcing and Boolean-valued models. - -Theorem (Wehrung 2003). - -Every direct limit of a countable sequence of distributive lattices with zero and (∨,0)-homomorphisms is representable. - -Other important representability results are related to the cardinality of the semilattice. The following result was prepared for publication by Dobbertin after Huhn's passing away in 1985. The two corresponding papers were published in 1989. - -Theorem (Huhn 1985). Every distributive (∨,0)-semilattice of cardinality at most ℵ1 satisfies Schmidt's Condition. Thus it is representable. - -By using different methods, Dobbertin got the following result. - -Theorem (Dobbertin 1986).
- -Every distributive (∨,0)-semilattice in which every principal ideal is at most countable is representable. - ----- - -Problem 2 (Dobbertin 1983). Is every conical refinement monoid measurable? - ----- - -The approach to CLP suggested by Pudlák in his 1985 paper is different. It is based on the following result, Fact 4, p. 100 in Pudlák's 1985 paper, obtained earlier by Ju.L. Ershov as the main theorem in Section 3 of the Introduction of his 1977 monograph. - -Theorem (Ershov 1977, Pudlák 1985). - -Every distributive (∨,0)-semilattice is the directed union of its finite distributive (∨,0)-subsemilattices. - -This means that every finite subset in a distributive (∨,0)-semilattice S is contained in some finite distributive (∨,0)-subsemilattice of S. Now we are trying to represent a given distributive (∨,0)-semilattice S as Conc L, for some lattice L. Writing S as a directed union $S=\bigcup(S_i\mid i\in I)$ of finite distributive (∨,0)-subsemilattices, we are hoping to represent each Si as the congruence lattice of a lattice Li with lattice homomorphisms fij : Li→ Lj, for i ≤ j in I, such that the diagram $\mathcal{S}$ of all Si with all inclusion maps Si→Sj, for i ≤ j in I, is naturally equivalent to $(\mathrm{Con_c}L_i,\mathrm{Con_c}f_i^j\mid i\leq j\text{ in }I)$; in that case, we say that the diagram $(L_i,f_i^j\mid i\leq j\text{ in }I)$ lifts $\mathcal{S}$ (with respect to the Conc functor). If this can be done, then, as we have seen that the Conc functor preserves direct limits, the direct limit $L=\varinjlim_{i\in I}L_i$ satisfies ${\rm Con_c}L\cong S$. - -While the problem whether this could be done in general remained open for about 20 years, Pudlák could prove it for distributive lattices with zero, thus extending one of Schmidt's results by providing a functorial solution. - -Theorem (Pudlák 1985). - -There exists a direct limits preserving functor Φ, from the category of all distributive lattices with zero and 0-lattice embeddings to the category of all lattices with zero and 0-lattice embeddings, such that ConcΦ is naturally equivalent to the identity. Furthermore, Φ(S) is a finite atomistic lattice, for any finite distributive (∨,0)-semilattice S. - -This result is improved further, by an even more complex construction, to locally finite, sectionally complemented modular lattices by Růžička in 2004 and 2006. - -Pudlák asked in 1985 whether his result above could be extended to the whole category of distributive (∨,0)-semilattices with (∨,0)-embeddings. The problem remained open until it was recently solved in the negative by Tůma and Wehrung. - -Theorem (Tůma and Wehrung 2006). - -There exists a diagram D of finite Boolean (∨,0)-semilattices and (∨,0,1)-embeddings, indexed by a finite partially ordered set, that cannot be lifted, with respect to the Conc functor, by any diagram of lattices and lattice homomorphisms. - -In particular, this implies immediately that CLP has no functorial solution. - -Furthermore, it follows from deep 1998 results of universal algebra by Kearnes and Szendrei in so-called commutator theory of varieties that the result above can be extended from the variety of all lattices to any variety $\mathcal{V}$ such that all Con A, for $A\in\mathcal{V}$, satisfy a fixed nontrivial identity in the signature (∨,∧) (in short, with a nontrivial congruence identity).
- -We should also mention that many attempts at CLP were based on the following result, first proved by Bulman-Fleming and McDowell in 1978 by using a categorical 1974 result of Shannon, see also Goodearl and Wehrung in 2001 for a direct argument. - -Theorem (Bulman-Fleming and McDowell 1978). - -Every distributive (∨,0)-semilattice is a direct limit of finite Boolean (∨,0)-semilattices and (∨,0)-homomorphisms. - -It should be observed that while the transition homomorphisms used in the Ershov-Pudlák Theorem are (∨,0)-embeddings, the transition homomorphisms used in the result above are not necessarily one-to-one, for example when one tries to represent the three-element chain. In practice this does not cause much trouble, and it makes it possible to prove the following results. - -Theorem. - -Every distributive (∨,0)-semilattice of cardinality at most ℵ1 is isomorphic to - -(1) Conc L, for some locally finite, relatively complemented modular lattice L (Tůma 1998 and Grätzer, Lakser, and Wehrung 2000). - -(2) The semilattice of finitely generated two-sided ideals of some (not necessarily unital) von Neumann regular ring (Wehrung 2000). - -(3) Conc L, for some sectionally complemented modular lattice L (Wehrung 2000). - -(4) The semilattice of finitely generated normal subgroups of some locally finite group (Růžička, Tůma, and Wehrung 2006). - -(5) The submodule lattice of some right module over a (non-commutative) ring (Růžička, Tůma, and Wehrung 2006). - -We recall that for a (unital, associative) ring R, we denote by V(R) the (conical, commutative) monoid of isomorphism classes of finitely generated projective right R-modules. Recall that if R is von Neumann regular, then V(R) is a refinement monoid. Denote by Idc R the (∨,0)-semilattice of finitely generated two-sided ideals of R. We denote by L(R) the lattice of all principal right ideals of a von Neumann regular ring R. It is well known that L(R) is a complemented modular lattice. - -The following result was observed by Wehrung, building on earlier works mainly by Jónsson and Goodearl. - -Theorem (Wehrung 1999). - -Let R be a von Neumann regular ring. Then the (∨,0)-semilattices Idc R and Conc L(R) are both isomorphic to the maximal semilattice quotient of V(R). - -Bergman proves in a well-known unpublished note from 1986 that any at most countable distributive (∨,0)-semilattice is isomorphic to Idc R, for some locally matricial ring R (over any given field). This result is extended to semilattices of cardinality at most ℵ1 in 2000 by Wehrung, by keeping only the regularity of R (the ring constructed by the proof is not locally matricial). The question whether R could be taken locally matricial in the ℵ1 case remained open for a while, until it was disproved by Wehrung in 2004. Translating back to the lattice world by using the theorem above and using a lattice-theoretical analogue of the V(R) construction, called the dimension monoid, introduced by Wehrung in 1998, yields the following result. - -Theorem (Wehrung 2004). - -There exists a distributive (∨,0,1)-semilattice of cardinality ℵ1 that is not isomorphic to Conc L, for any modular lattice L every finitely generated sublattice of which has finite length. - ----- - -Problem 3 (Goodearl 1991). Is the positive cone of any dimension group with order-unit isomorphic to V(R), for some von Neumann regular ring R?
- ---- - -The abovementioned Problem 1 (Schmidt), Problem 2 (Dobbertin), and Problem 3 (Goodearl) were solved simultaneously in the negative in 1998. - -Theorem (Wehrung 1998). - -There exists a dimension vector space G over the rationals with order-unit whose positive cone G+ is not isomorphic to V(R), for any von Neumann regular ring R, and is not measurable in Dobbertin's sense. Furthermore, the maximal semilattice quotient of G+ does not satisfy Schmidt's Condition. Furthermore, G can be taken of any given cardinality greater than or equal to ℵ2. - -It follows from the previously mentioned works of Schmidt, Huhn, Dobbertin, Goodearl, and Handelman that the ℵ2 bound is optimal in all three negative results above. - -As the ℵ2 bound suggests, infinite combinatorics are involved. The principle used is Kuratowski's Free Set Theorem, first published in 1951. Only the case n=2 is used here. - -The semilattice part of the result above is achieved via an infinitary semilattice-theoretical statement URP (Uniform Refinement Property). If we want to disprove Schmidt's problem, the idea is (1) to prove that any generalized Boolean semilattice satisfies URP (which is easy), (2) to prove that URP is preserved under homomorphic images by weakly distributive homomorphisms (which is also easy), and (3) to prove that there exists a distributive (∨,0)-semilattice of cardinality ℵ2 that does not satisfy URP (which is difficult, and uses Kuratowski's Free Set Theorem). - -Schematically, the construction in the theorem above can be described as follows. For a set Ω, we consider the partially ordered vector space E(Ω) defined by generators 1 and ai,x, for i<2 and x in Ω, and relations a0,x+a1,x=1, a0,x ≥ 0, and a1,x ≥ 0, for any x in Ω. By using a Skolemization of the theory of dimension groups, we can embed E(Ω) functorially into a dimension vector space F(Ω). The vector space counterexample of the theorem above is G=F(Ω), for any set Ω with at least ℵ2 elements. - -This counterexample has been modified subsequently by Ploščica and Tůma to a direct semilattice construction. For a (∨,0)-semilattice S, the larger semilattice R(S) is the (∨,0)-semilattice freely generated by new elements t(a,b,c), for a, b, c in S such that c ≤ a ∨ b, subject only to the relations c=t(a,b,c) ∨ t(b,a,c) and t(a,b,c) ≤ a. Iterating this construction gives the free distributive extension $D(S)=\bigcup(R^n(S)\mid n<\omega)$ of S. Now, for a set Ω, let L(Ω) be the (∨,0)-semilattice defined by generators 1 and ai,x, for i<2 and x in Ω, and relations a0,x ∨ a1,x=1, for any x in Ω. Finally, put G(Ω)=D(L(Ω)). - -In most related works, the following uniform refinement property is used. It is a modification of the one introduced by Wehrung in 1998 and 1999. - -Definition (Ploščica, Tůma, and Wehrung 1998). - -Let e be an element in a (∨,0)-semilattice S. We say that the weak uniform refinement property WURP holds at e if for all families $(a_i)_{i\in I}$ and $(b_i)_{i\in I}$ of elements in S such that ai ∨ bi=e for all i in I, there exists a family $(c_{i,j}\mid (i,j)\in I\times I)$ of elements of S such that the relations - -• ci,j ≤ ai, bj, - -• ci,j ∨ aj ∨ bi=e, - -• ci,k ≤ ci,j∨ cj,k - -hold for all i, j, k in I. We say that S satisfies WURP if WURP holds at every element of S. - -By building on Wehrung's abovementioned work on dimension vector spaces, Ploščica and Tůma proved that WURP does not hold in G(Ω), for any set Ω of cardinality at least ℵ2. Hence G(Ω) does not satisfy Schmidt's Condition.
All negative representation results mentioned here always make use of some uniform refinement property, including the first one about dimension vector spaces. - -However, the semilattices used in these negative results are relatively complicated. The following result, proved by Ploščica, Tůma, and Wehrung in 1998, is more striking, because it shows examples of representable semilattices that do not satisfy Schmidt's Condition. We denote by FV(Ω) the free lattice on Ω in V, for any variety V of lattices. - -Theorem (Ploščica, Tůma, and Wehrung 1998). - -The semilattice Conc FV(Ω) does not satisfy WURP, for any set Ω of cardinality at least ℵ2 and any non-distributive variety V of lattices. Consequently, Conc FV(Ω) does not satisfy Schmidt's Condition. - -It is proved by Tůma and Wehrung in 2001 that Conc FV(Ω) is not isomorphic to Conc L, for any lattice L with permutable congruences. By using a slight weakening of WURP, this result is extended to arbitrary algebras with permutable congruences by Růžička, Tůma, and Wehrung in 2006. Hence, for example, if Ω has at least ℵ2 elements, then Conc FV(Ω) is not isomorphic to the normal subgroup lattice of any group, or the submodule lattice of any module. - -The following recent theorem solves CLP. - -Theorem (Wehrung 2007). - -The semilattice G(Ω) is not isomorphic to Conc L for any lattice L, whenever the set Ω has at least ℵω+1 elements. - -Hence, the counterexample to CLP had been known for nearly ten years; it is just that nobody knew why it worked! All the results prior to the theorem above made use of some form of permutability of congruences. The difficulty was to find enough structure in congruence lattices of non-congruence-permutable lattices. - -We shall denote by ε the 'parity function' on the natural numbers, that is, ε(n)=n mod 2, for any natural number n. - -We let L be an algebra possessing a structure of semilattice (L,∨) such that every congruence of L is also a congruence for the operation ∨. We put - - U\vee V=\{u\vee v \mid (u,v)\in U\times V\},\quad - -\text{for all }U,V\subseteq L, - - - -and we denote by $\mathrm{Con}_c^U L$ the (∨,0)-subsemilattice of Conc L generated by all principal congruences Θ(u,v) ( = least congruence of L that identifies u and v), where (u,v) belongs to U × U. We put Θ+(u,v)=Θ(u ∨ v,v), for all u, v in L. - -The Erosion Lemma (Wehrung 2007). - -Let $x_0$, $x_1$ in $L$ and let $Z=\{z_0,z_1,\dots,z_n\}$, for a positive integer $n$, be a finite subset of $L$ with $\bigvee_{i<n}\dots$ Then there are congruences $\theta_j$, for $j<2$, such that - - - -z_0\vee x_0\vee x_1\equiv z_n\vee x_0\vee x_1 - -\pmod{\theta_0\vee\theta_1}\quad\text{and}\quad - -\theta_j\subseteq\alpha_j\cap\Theta_L^+(z_n,x_j),\text{ for all }j<2. - - - -(Observe the faint formal similarity with first-order resolution in mathematical logic. Could this analogy be pushed further?) - -The proof of the theorem above runs by setting a structure theorem for congruence lattices of semilattices—namely, the Erosion Lemma, against non-structure theorems for free distributive extensions G(Ω), the main one being called the Evaporation Lemma. While the latter are technically difficult, they are, in some sense, predictable. Quite to the contrary, the proof of the Erosion Lemma is elementary and easy, so it is probably the strangeness of its statement that explains that it has been hidden for so long.
- -More is, in fact, proved in the theorem above: For any algebra L with a congruence-compatible structure of join-semilattice with unit and for any set Ω with at least ℵω+1 elements, there is no weakly distributive homomorphism μ: Conc L → G(Ω) containing 1 in its range. In particular, CLP was, after all, not a problem of lattice theory, but rather of universal algebra—even more specifically, semilattice theory! These results can also be translated in terms of a uniform refinement property, denoted by CLR in Wehrung's paper presenting the solution of CLP, which is noticeably more complicated than WURP. - -Finally, the cardinality bound ℵω+1 has been improved to the optimal bound ℵ2 by Růžička. - -Theorem (Růžička 2008). - -The semilattice G(Ω) is not isomorphic to Conc L for any lattice L, whenever the set Ω has at least ℵ2 elements. - -Růžička's proof follows the main lines of Wehrung's proof, except that it introduces an enhancement of Kuratowski's Free Set Theorem, called there the existence of free trees, which it uses in the final argument involving the Erosion Lemma. - -The proof of the negative solution for CLP shows that the problem of representing distributive semilattices by compact congruences of lattices already appears for congruence lattices of semilattices. The question whether the structure of a partially ordered set would cause similar problems is answered by the following result. - -Theorem (Wehrung 2008). For any distributive (∨,0)-semilattice S, there are a (∧,0)-semilattice P and a map μ : P × P → S such that the following conditions hold: - -(1) x ≤ y implies that μ(x,y)=0, for all x, y in P. - -(2) μ(x,z) ≤ μ(x,y) ∨ μ(y,z), for all x, y, z in P. - -(3) For all x ≥ y in P and all α, β in S such that μ(x,y) ≤ α ∨ β, there are a positive integer n and elements x=z0 ≥ z1 ≥ ... ≥ z2n=y such that μ(zi,zi+1) ≤ α (resp., μ(zi,zi+1) ≤ β) whenever i < 2n is even (resp., odd). - -(4) S is generated, as a join-semilattice, by all the elements of the form μ(x,0), for x in P. - -Furthermore, if S has a largest element, then P can be assumed to be a lattice with a largest element. - -It is not hard to verify that conditions (1)–(4) above imply the distributivity of S, so the result above gives a characterization of distributivity for (∨,0)-semilattices. diff --git a/wiki/wikipedia/2735.txt b/wiki/wikipedia/2735.txt deleted file mode 100644 index 493765da0df770353837d032e65bbaf069d94559..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2735.txt +++ /dev/null @@ -1,39 +0,0 @@ -In mathematics, discrepancy theory describes the deviation of a situation from the state one would like it to be in. It is also called the theory of irregularities of distribution. This refers to the theme of classical discrepancy theory, namely distributing points in some space such that they are evenly distributed with respect to some (mostly geometrically defined) subsets. The discrepancy (irregularity) measures how far a given distribution deviates from an ideal one. - -Discrepancy theory can be described as the study of inevitable irregularities of distributions, in measure-theoretic and combinatorial settings. Just as Ramsey theory elucidates the impossibility of total disorder, discrepancy theory studies the deviations from total uniformity. - -A significant event in the history of discrepancy theory was the 1916 paper of Weyl on the uniform distribution of sequences in the unit interval.
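-To make the notion of discrepancy concrete before listing the classic theorems: in one dimension, the star discrepancy of a finite point set in [0,1] has the well-known closed form over the sorted points, D*_N = max_i max(i/N - x_(i), x_(i) - (i-1)/N). The sketch below is our own illustration (the function name star_discrepancy is ours, not from the source):
-
-```python
-def star_discrepancy(points):
-    """1-D star discrepancy of a finite point set in [0, 1], computed
-    from the closed form for sorted points:
-    D*_N = max_i max(i/N - x_(i), x_(i) - (i-1)/N)."""
-    xs = sorted(points)
-    n = len(xs)
-    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))
-
-# The midpoint set {1/(2N), 3/(2N), ...} attains the optimal value 1/(2N):
-print(star_discrepancy([(2 * i + 1) / 8 for i in range(4)]))  # 0.125
-```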
- -Discrepancy theory is based on the following classic theorems: - -* The theorem of van Aardenne–Ehrenfest - -* Axis-parallel rectangles in the plane (Roth, Schmidt) - -* Discrepancy of half-planes (Alexander, Matoušek) - -* Arithmetic progressions (Roth, Sárközy, Beck, Matoušek & Spencer) - -* Beck–Fiala theorem - -* Six Standard Deviations Suffice (Spencer) - -The unsolved problems relating to discrepancy theory include: - -* Axis-parallel rectangles in dimensions three and higher (folklore) - -* Komlós conjecture - -* Heilbronn triangle problem on the minimum area of a triangle determined by three points from an n-point set - -Applications for discrepancy theory include: - -* Numerical integration: Monte Carlo methods in high dimensions. - -* Computational geometry: Divide-and-conquer algorithm. - -* Image processing: Halftoning - -* Random trial formulation: Randomized Controlled Trial diff --git a/wiki/wikipedia/2736.txt b/wiki/wikipedia/2736.txt deleted file mode 100644 index b814cd34c7c67a1037b7d2ba2999f3b8c05b82e9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2736.txt +++ /dev/null @@ -1,39 +0,0 @@ -In number theory, a juggler sequence is an integer sequence that starts with a positive integer a0, with each subsequent term in the sequence defined by the recurrence relation: - -a_{k+1}= \begin{cases} - -\left \lfloor a_k^{\frac{1}{2}} \right \rfloor, & \text{if } a_k \text{ is even} \\ - -\left \lfloor a_k^{\frac{3}{2}} \right \rfloor, & \text{if } a_k \text{ is odd}. - -\end{cases} - -Juggler sequences were publicised by American mathematician and author Clifford A. Pickover. The name is derived from the rising and falling nature of the sequences, like balls in the hands of a juggler. - -For example, the juggler sequence starting with a0 = 3 is -$$ -a_1= \lfloor 3^\frac{3}{2} \rfloor = \lfloor 5.196\dots \rfloor = 5, -$$ -$$ -a_2= \lfloor 5^\frac{3}{2} \rfloor = \lfloor 11.180\dots \rfloor = 11, -$$ -$$ -a_3= \lfloor 11^\frac{3}{2} \rfloor = \lfloor 36.482\dots \rfloor = 36, -$$ -$$ -a_4= \lfloor 36^\frac{1}{2} \rfloor = \lfloor 6 \rfloor = 6, -$$ -$$ -a_5= \lfloor 6^\frac{1}{2} \rfloor = \lfloor 2.449\dots \rfloor = 2, -$$ -$$ -a_6= \lfloor 2^\frac{1}{2} \rfloor = \lfloor 1.414\dots \rfloor = 1. -$$ - -If a juggler sequence reaches 1, then all subsequent terms are equal to 1. It is conjectured that all juggler sequences eventually reach 1. This conjecture has been verified for initial terms up to $10^6$, but has not been proved. Juggler sequences therefore present a problem that is similar to the Collatz conjecture, about which Paul Erdős stated that "mathematics is not yet ready for such problems". - -For a given initial term n, one defines l(n) to be the number of steps which the juggler sequence starting at n takes to first reach 1, and h(n) to be the maximum value in the juggler sequence starting at n. For small values of n we have: - -n: 2, 3, 4, 5, 6, 7, 8, 9, 10 - -l(n): 1, 6, 2, 5, 2, 4, 2, 7, 7 - -h(n): 2, 36, 4, 36, 6, 18, 8, 140, 36 - -Juggler sequences can reach very large values before descending to 1. For example, the juggler sequence starting at a0 = 37 reaches a maximum value of 24906114455136. Harry J. Smith has determined that the juggler sequence starting at a0 = 48443 reaches a maximum value at a60 with 972,463 digits, before reaching 1 at a157.
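-Because floor(a^(1/2)) and floor(a^(3/2)) are the integer square roots of a and of a^3, the sequence can be computed exactly in integer arithmetic; a minimal Python sketch of the recurrence above (the function name juggler_sequence is ours):
-
-```python
-from math import isqrt
-
-def juggler_sequence(n):
-    """Juggler sequence from n down to 1, computed exactly:
-    floor(a**(1/2)) == isqrt(a) and floor(a**(3/2)) == isqrt(a**3)."""
-    seq = [n]
-    while n != 1:
-        n = isqrt(n) if n % 2 == 0 else isqrt(n ** 3)
-        seq.append(n)
-    return seq
-
-seq = juggler_sequence(3)
-print(seq)                     # [3, 5, 11, 36, 6, 2, 1]
-print(len(seq) - 1, max(seq))  # l(3) = 6, h(3) = 36
-```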
diff --git a/wiki/wikipedia/2737.txt b/wiki/wikipedia/2737.txt deleted file mode 100644 index 182ffeb78734274928fa643745bf7c24b1e01564..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2737.txt +++ /dev/null @@ -1,49 +0,0 @@ -Fig. 1. Block scheme of control system. G(s) – linear transfer function, f(e) – single-valued, continuous, differentiable function - -Kalman's conjecture or Kalman problem is a disproved conjecture on absolute stability of nonlinear control systems with one scalar nonlinearity, which belongs to the sector of linear stability. Kalman's conjecture is a strengthening of Aizerman's conjecture and is a special case of the Markus–Yamabe conjecture. This conjecture was proven false but led to valid sufficient criteria for absolute stability. - -In 1957, R. E. Kalman stated the following in his paper:
    - -If f(e) in Fig. 1 is replaced by constants K corresponding to all possible values of f'(e), and it is found that the closed-loop system is stable for all such K, then it is intuitively clear that the system must be monostable; i.e., all transient solutions will converge to a unique, stable critical point.
- -Kalman's statement can be reformulated as the following conjecture:
    - -Consider a system with one scalar nonlinearity - - - -\frac{dx}{dt}=Px+qf(e),\quad e=r^*x, \quad x\in R^n, - - - -where P is a constant n×n matrix, q, r are constant n-dimensional vectors, ∗ denotes transposition, f(e) is a scalar function, and f(0) = 0. Suppose that f(e) is a differentiable function and the following condition - - - -k_1 < f'(e) < k_2 - - - -is valid. Then Kalman's conjecture is that the system is stable in the large (i.e. a unique stationary point is a global attractor) if all linear systems with f(e) = ke, k ∈ (k1, k2) are asymptotically stable.
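-Substituting the linear gains f(e) = ke into the system gives dx/dt = (P + k q r^*)x, so the hypothesis of the conjecture is that P + k q r^* is Hurwitz for every k in (k1, k2). A minimal numerical sketch of that check (a grid-sampled test with placeholder matrices, not from the source):
-
-```python
-import numpy as np
-
-def sector_stable(P, q, r, k1, k2, samples=200):
-    """Grid check that P + k*q*r^T is Hurwitz (all eigenvalues in the
-    open left half-plane) for k sampled inside the open interval (k1, k2)."""
-    for k in np.linspace(k1, k2, samples)[1:-1]:
-        if np.linalg.eigvals(P + k * np.outer(q, r)).real.max() >= 0:
-            return False
-    return True
-
-# Placeholder 2x2 data: here P + k*q*r^T has characteristic polynomial
-# s^2 + s + (1 - k), which is Hurwitz for every k in (0, 1).
-P = np.array([[0.0, 1.0], [-1.0, -1.0]])
-q = np.array([0.0, 1.0])
-r = np.array([1.0, 0.0])
-print(sector_stable(P, q, r, 0.0, 1.0))  # True
-```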
- -In Aizerman's conjecture, in place of the condition on the derivative of the nonlinearity, it is required that the nonlinearity itself belong to the sector of linear stability. - -Kalman's conjecture is true for n ≤ 3, and for n > 3 there are effective methods for constructing counterexamples: the nonlinearity derivative belongs to the sector of linear stability, and a unique stable equilibrium coexists with a stable periodic solution (hidden oscillation). - -In discrete time, the Kalman conjecture is true only for n=1; counterexamples can be constructed for n ≥ 2. diff --git a/wiki/wikipedia/2738.txt b/wiki/wikipedia/2738.txt deleted file mode 100644 index 116d54a6502db91895e619452aa96b333a2015b1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2738.txt +++ /dev/null @@ -1,11 +0,0 @@ -In graph theory, an st-planar graph is a bipolar orientation of a plane graph for which both the source and the sink of the orientation are on the outer face of the graph. That is, it is a directed graph drawn without crossings in the plane, in such a way that there are no directed cycles in the graph, exactly one graph vertex has no incoming edges, exactly one graph vertex has no outgoing edges, and these two special vertices both lie on the outer face of the graph. - -Within the drawing, each face of the graph must have the same structure: there is one vertex that acts as the source of the face, one vertex that acts as the sink of the face, and all edges within the face are directed along two paths from the source to the sink. If one draws an additional edge from the sink of an st-planar graph back to the source, through the outer face, and then constructs the dual graph (orienting each dual edge clockwise with respect to its primal edge) then the result is again an st-planar graph, augmented with an extra edge in the same way. - -These graphs are closely related to partially ordered sets and lattices. The Hasse diagram of a partially ordered set is a directed acyclic graph whose vertices are the set elements, with an edge from x to y for each pair x, y of elements for which x ≤ y in the partial order but for which there does not exist z, distinct from x and y, with x ≤ z ≤ y. - -A partially ordered set forms a complete lattice if and only if every subset of elements has a unique greatest lower bound and a unique least upper bound, and the order dimension of a partially ordered set is the least number of total orders on the same set of elements whose intersection is the given partial order. - -If the vertices of an st-planar graph are partially ordered by reachability, then this ordering always forms a two-dimensional complete lattice, whose Hasse diagram is the transitive reduction of the given graph. Conversely, the Hasse diagram of every two-dimensional complete lattice is always an st-planar graph. - -Based on this two-dimensional partial order property, every st-planar graph can be given a dominance drawing, in which for every two vertices u and v there exists a path from u to v if and only if both coordinates of u are smaller than the corresponding coordinates of v. The coordinates of such a drawing may also be used as a data structure that can be used to test whether one vertex of an st-planar graph can reach another in constant time per query. Rotating such a drawing by 45° gives an upward planar drawing of the graph. A directed acyclic graph G has an upward planar drawing if and only if G is a subgraph of an st-planar graph.
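-The constant-time reachability test mentioned above is immediate from the dominance property: u reaches v exactly when both coordinates of u are at most the corresponding coordinates of v. A minimal sketch (the coordinate table below is a hypothetical precomputed dominance drawing, not from the source):
-
-```python
-def reaches(coords, u, v):
-    """Dominance-drawing reachability: u can reach v iff u's coordinates
-    are componentwise <= v's. Constant time per query."""
-    (ux, uy), (vx, vy) = coords[u], coords[v]
-    return ux <= vx and uy <= vy
-
-# Hypothetical dominance coordinates for the st-planar "diamond"
-# s -> a, s -> b, a -> t, b -> t:
-coords = {"s": (0, 0), "a": (1, 2), "b": (2, 1), "t": (3, 3)}
-print(reaches(coords, "s", "t"))  # True
-print(reaches(coords, "a", "b"))  # False: a and b are incomparable
-```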
diff --git a/wiki/wikipedia/2739.txt b/wiki/wikipedia/2739.txt deleted file mode 100644 index 0bd9ce9551f93f17275156c50de724e0d0980eec..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2739.txt +++ /dev/null @@ -1 +0,0 @@ -In group theory, the trichotomy theorem divides the finite simple groups of characteristic 2 type and rank at least 3 into three classes. It was proved by Aschbacher for rank 3 and by Gorenstein and Lyons for rank at least 4. The three classes are groups of GF(2) type (classified by Timmesfeld and others), groups of "standard type" for some odd prime (classified by the Gilman–Griess theorem and work by several others), and groups of uniqueness type, where Aschbacher proved that there are no simple groups. diff --git a/wiki/wikipedia/274.txt b/wiki/wikipedia/274.txt deleted file mode 100644 index 82c390a975a3b35a0b2b1794c36730d800d6c42e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/274.txt +++ /dev/null @@ -1,182 +0,0 @@ -In mathematical analysis, the final value theorem (FVT) is one of several similar theorems used to relate frequency domain expressions to the time domain behavior as time approaches infinity. - -Mathematically, if $f(t)$ in continuous time has (unilateral) Laplace transform $F(s)$ then a final value theorem establishes conditions under which -$$ -\lim_{t\to\infty}f(t) = \lim_{s\to 0}{sF(s)} -$$ - -Likewise, if $f[k]$ in discrete time has (unilateral) Z-transform $F(z)$ then a final value theorem establishes conditions under which -$$ -\lim_{k\to\infty}f[k] = \lim_{z\to 1}{(z-1)F(z)} -$$ - -An Abelian final value theorem makes assumptions about the time-domain behavior of $f(t)$ (or $f[k]$) to calculate $\lim_{s\to 0}{sF(s)}$. - -Conversely a Tauberian final value theorem makes assumptions about the frequency-domain behaviour of $F(s)$ to calculate $\lim_{t\to\infty}f(t)$ (or $\lim_{k\to\infty}f[k]$) - -(see Abelian and Tauberian theorems for integral transforms). - -In the following statements, the notation '$s \to 0$' means that $s$ approaches 0, whereas '$s \downarrow 0$' means that $s$ approaches 0 through the positive numbers. - -Suppose that every pole of $F(s)$ is either in the open left half plane or at the origin, and that $F(s)$ has at most a single pole at the origin. Then $sF(s) \to L \in \mathbb{R}$ as $s \to 0$, and $\lim_{t\to\infty}f(t) = L$. - -Suppose that $f(t)$ and $f'(t)$ both have Laplace transforms that exist for all $s > 0$. If $\lim_{t\to\infty}f(t)$ exists and $\lim_{s\to 0}{sF(s)}$ exists then $\lim_{t\to\infty}f(t) = \lim_{s\to 0}{sF(s)}$.
- -Remark - -Both limits must exist for the theorem to hold. For example, if $f(t) = \sin(t)$ then $\lim_{t\to\infty}f(t)$ does not exist, but -$$ -\lim_{s\to 0}{sF(s)} = \lim_{s\to 0}{\frac{s}{s^2+1}} = 0 -$$. - -Suppose that $f : (0,\infty) \to \mathbb{C} $ is bounded and differentiable, and that -$$ -t f'(t) -$$ is also bounded on $(0,\infty)$. If $sF(s) \to L \in \mathbb{C}$ as $s \to 0$ then $\lim_{t\to\infty}f(t) = L$. - -Suppose that every pole of $F(s)$ is either in the open left half plane or at the origin. Then one of the following occurs: - -# $sF(s) \to L \in \mathbb{R}$ as $s \downarrow 0$, and $\lim_{t\to\infty}f(t) = L$. - -# $sF(s) \to +\infty$ as $s \downarrow 0$, and $f(t) \to +\infty$ as $t \to \infty$. - -# $sF(s) \to -\infty$ as $s \downarrow 0$, and $f(t) \to -\infty$ as $t \to \infty$. - -In particular, if $s = 0$ is a multiple pole of $F(s)$ then case 2 or 3 applies ($f(t) \to +\infty$ or $f(t) \to -\infty$). - -Remark - -The proof uses the Dominated Convergence Theorem. - -Suppose that $f : [0,\infty) \to \mathbb{R} $ is continuous and absolutely integrable in $[0,\infty)$. Suppose further that $f$ is asymptotically equal to a finite sum of periodic functions $f_{\mathrm{as}}$, that is -$$ -| f(t) - f_{\mathrm{as}}(t) | < \phi(t) -$$ - -where $\phi(t)$ is absolutely integrable in $[0,\infty)$ and vanishes at infinity. Then -$$ -\lim_{s \to 0}sF(s) = \lim_{t \to \infty} \frac{1}{t} \int_{0}^{t} f(x) dx -$$. - -Let $f(t) : [0,\infty) \to \mathbb{R}$ and $F(s)$ be the Laplace transform of $f(t)$. Suppose that $f(t)$ satisfies all of the following conditions: - -# $f(t)$ is infinitely differentiable at zero - -# $f^{(k)}(t)$ has a Laplace transform for all non-negative integers $k$ - -# $f(t)$ diverges to infinity as $t \to \infty$ - -Then $sF(s)$ diverges to infinity as $s \to 0^{+}$. - -Let $h : [0,\infty) \to \mathbb{R}$ be measurable and such that the (possibly improper) integral $f(x) := \int_0^x h(t) dt$ converges for $x\to\infty$. Then -$$ -\int_0^\infty h(t) dt := \lim_{x\to\infty} f(x) = \lim_{s\downarrow 0}\int_0^\infty e^{-st}h(t) dt. -$$ - -This is a version of Abel's theorem. - -To see this, notice that $f'(t) = h(t)$ and apply the final value theorem to $f$ after an integration by parts: For $s > 0$, - - - -s\int_0^\infty e^{-st}f(t) dt = \Big[- e^{-st}f(t)\Big]_{t=0}^\infty + \int_0^\infty e^{-st} f'(t) dt = \int_0^\infty e^{-st} h(t) dt. - - - -By the final value theorem, the left-hand side converges to $\lim_{x\to\infty} f(x)$ for $s\to 0$. - -To establish the convergence of the improper integral $\lim_{x\to\infty}f(x)$ in practice, Dirichlet's test for improper integrals is often helpful. An example is the Dirichlet integral. - -Final value theorems for obtaining $\lim_{s\to 0}{sF(s)}$ have applications in probability and statistics to calculate the moments of a random variable. Let $R(x)$ be the cumulative distribution function of a continuous random variable $X$ and let $\rho(s)$ be the Laplace-Stieltjes transform of $R(x)$. Then the $n$-th moment of $X$ can be calculated as -$$ -E[X^n] = (-1)^n\left.\frac{d^n\rho(s)}{ds^n}\right|_{s=0} -$$ - -The strategy is to write -$$ -\frac{d^n\rho(s)}{ds^n} = \mathcal{F}\bigl(G_1(s), G_2(s), \dots, G_k(s), \dots\bigr) -$$ - -where $\mathcal{F}(\dots)$ is continuous and, for each $k$, $G_k(s) = sF_k(s)$ for some function $F_k(s)$.
For each $k$, let $f_k(t)$ be the inverse Laplace transform of $F_k(s)$, obtain -$$ -\lim_{t\to\infty}f_k(t) -$$, and apply a final value theorem to deduce -$$ -\lim_{s\to 0}{G_k(s)} =\lim_{s\to 0}{sF_k(s)} = \lim_{t\to\infty}f_k(t) -$$. Then -$$ -\left.\frac{d^n\rho(s)}{ds^n}\right|_{s=0} = \mathcal{F}\Bigl(\lim_{s\to 0} G_1(s), \lim_{s\to 0} G_2(s), \dots, \lim_{s\to 0} G_k(s), \dots\Bigr) -$$ - -and hence $E[X^n]$ is obtained. - -For example, for a system described by transfer function -$$ -H(s) = \frac{ 6 }{s + 2}, -$$ - -the impulse response converges to -$$ -\lim_{t \to \infty} h(t) = \lim_{s \to 0} \frac{6s}{s+2} = 0. -$$ - -That is, the system returns to zero after being disturbed by a short impulse. However, the Laplace transform of the unit step response is -$$ -G(s) = \frac{1}{s} \frac{6}{s+2} -$$ - -and so the step response converges to -$$ -\lim_{t \to \infty} g(t) = \lim_{s \to 0} \frac{s}{s} \frac{6}{s+2} = \frac{6}{2} = 3 -$$ - -and so a zero-state system will follow an exponential rise to a final value of 3. - -For a system described by the transfer function -$$ -H(s) = \frac{9}{s^2 + 9}, -$$ - -the final value theorem appears to predict the final value of the impulse response to be 0 and the final value of the step response to be 1. However, neither time-domain limit exists, and so the final value theorem predictions are not valid. In fact, both the impulse response and step response oscillate, and (in this special case) the final value theorem describes the average values around which the responses oscillate. - -There are two checks performed in control theory which confirm valid results for the Final Value Theorem: - -# All non-zero roots of the denominator of $H(s)$ must have negative real parts. - -# $H(s)$ must not have more than one pole at the origin. - -Rule 1 was not satisfied in this example, in that the roots of the denominator are $0+j3$ and $0-j3$. - -If $\lim_{k\to\infty}f[k]$ exists and $\lim_{z\to 1}{(z-1)F(z)}$ exists then $\lim_{k\to\infty}f[k] = \lim_{z\to 1}{(z-1)F(z)}$. - -The final value of the system -$$ -\dot{\mathbf{x}}(t) = \mathbf{A} \mathbf{x}(t) + \mathbf{B} \mathbf{u}(t) -$$ -$$ -\mathbf{y}(t) = \mathbf{C} \mathbf{x}(t) -$$ - -in response to a step input $\mathbf{u}(t)$ with amplitude $R$ is: -$$ -\lim_{t\to\infty}\mathbf{y}(t) = -\mathbf{CA}^{-1}\mathbf{B}R -$$ - -The sampled-data system of the above continuous-time LTI system at the aperiodic sampling times $t_{i}, i=1,2,...$ is the discrete-time system -$$ -{\mathbf{x}}(t_{i+1}) = \mathbf{\Phi}(h_{i}) \mathbf{x}(t_{i}) + \mathbf{\Gamma}(h_{i}) \mathbf{u}(t_{i}) -$$ -$$ -\mathbf{y}(t_{i}) = \mathbf{C} \mathbf{x}(t_{i}) -$$ - -where $h_{i} = t_{i+1}-t_{i}$ and -$$ -\mathbf{\Phi}(h_{i})=e^{\mathbf{A}h_{i}} -$$, $\mathbf{\Gamma}(h_{i})=\int_0^{h_{i}} e^{\mathbf{A}s} ds\,\mathbf{B}$
diff --git a/wiki/wikipedia/2740.txt b/wiki/wikipedia/2740.txt deleted file mode 100644 index 48a9a35ece28b17e8f45aa262a1283a13eab3610..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2740.txt +++ /dev/null @@ -1,164 +0,0 @@ -In probabilistic logic, the Fréchet inequalities, also known as the Boole–Fréchet inequalities, are rules implicit in the work of George Boole and explicitly derived by Maurice Fréchet that govern the combination of probabilities about logical propositions or events logically linked together in conjunctions (AND operations) or disjunctions (OR operations) as in Boolean expressions or fault or event trees common in risk assessments, engineering design and artificial intelligence. These inequalities can be considered rules about how to bound calculations involving probabilities without assuming independence or, indeed, without making any dependence assumptions whatsoever. The Fréchet inequalities are closely related to the Boole–Bonferroni–Fréchet inequalities, and to Fréchet bounds. - -If Ai are logical propositions or events, the Fréchet inequalities are - -Probability of a logical conjunction ($\land$) -$$ -\max\left(0, \sum_{k = 1}^n \mathbb{P}(A_k) - (n - 1)\right) \leq \mathbb{P}\left(\bigwedge_{k = 1}^n A_k\right) \leq \min_k\{\mathbb{P}(A_k)\}, -$$ - -Probability of a logical disjunction ($\lor$) -$$ -\max_k\{\mathbb{P}(A_k)\} \leq \mathbb{P}\left(\bigvee_{k = 1}^n A_k\right) \leq \min\left(1, \sum_{k = 1}^n \mathbb{P}(A_k)\right), -$$ - -where P( ) denotes the probability of an event or proposition. In the case where there are only two events, say A and B, the inequalities reduce to - -Probability of a logical conjunction ($\land$) -$$ -\max(0, \mathbb{P}(A) + \mathbb{P}(B) - 1) \leq \mathbb{P}(A \land B) \leq \min(\mathbb{P}(A), \mathbb{P}(B)), -$$ - -Probability of a logical disjunction ($\lor$) -$$ -\max(\mathbb{P}(A), \mathbb{P}(B)) \leq \mathbb{P}(A \lor B) \leq \min(1, \mathbb{P}(A) + \mathbb{P}(B)). -$$ - -The inequalities bound the probabilities of the two kinds of joint events given the probabilities of the individual events. For example, if A is "has lung cancer", and B is "has mesothelioma", then A & B is "has both lung cancer and mesothelioma", and A ∨ B is "has lung cancer or mesothelioma or both diseases", and the inequalities relate the risks of these events. - -Note that logical conjunctions are denoted in various ways in different fields, including AND, &, ∧ and graphical AND-gates. Logical disjunctions are likewise denoted in various ways, including OR, |, ∨, and graphical OR-gates. If events are taken to be sets rather than logical propositions, the set-theoretic versions of the Fréchet inequalities are - -Probability of an intersection of events -$$ -\max(0, \mathbb{P}(A) + \mathbb{P}(B) - 1) \leq \mathbb{P}(A \cap B) \leq \min(\mathbb{P}(A), \mathbb{P}(B)), -$$ - -Probability of a union of events -$$ -\max(\mathbb{P}(A), \mathbb{P}(B)) \leq \mathbb{P}(A \cup B) \leq \min(1, \mathbb{P}(A) + \mathbb{P}(B)). -$$ - -If the probability of an event A is P(A) = a = 0.7, and the probability of the event B is P(B) = b = 0.8, then the probability of the conjunction, i.e., the joint event A & B, is surely in the interval - -\begin{align} - -\mathbb{P}(A \land B) &\in [\max(0, a + b - 1), \min(a, b)] \\ - -&= [\max(0, 0.7 + 0.8 - 1), \min(0.7, 0.8)] \\ - -&= [0.5, 0.7] - -\end{align}. 
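The two-event bounds are simple enough to state directly in code. The following is a minimal sketch (function names are ours, for illustration only) that reproduces the worked example:

```python
# Two-event Boole-Fréchet bounds, as stated above.
def frechet_conjunction(pa, pb):
    """Best-possible bounds on P(A and B) given only P(A) and P(B)."""
    return (max(0.0, pa + pb - 1.0), min(pa, pb))

def frechet_disjunction(pa, pb):
    """Best-possible bounds on P(A or B) given only P(A) and P(B)."""
    return (max(pa, pb), min(1.0, pa + pb))

# The worked example: P(A) = 0.7, P(B) = 0.8
print(frechet_conjunction(0.7, 0.8))  # (0.5, 0.7)
```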
- -Likewise, the probability of the disjunction A ∨ B is surely in the interval - -\begin{align} - -\mathbb{P}(A \lor B) &\in [\max(a, b), \min(1, a + b)] \\ - -&= [\max(0.7, 0.8), \min(1, 0.7 + 0.8)] \\ - -&= [0.8, 1] - -\end{align}. - -These intervals are contrasted with the results obtained from the rules of probability assuming independence, where the probability of the conjunction is P(A & B) = a × b = 0.7 × 0.8 = 0.56, and the probability of the disjunction is P(A ∨ B) = a + b - a × b = 0.94. - -When the marginal probabilities are very small (or large), the Fréchet intervals are strongly asymmetric about the analogous results under independence. For example, suppose P(A) = 0.000002 = $2\times10^{-6}$ and P(B) = 0.000003 = $3\times10^{-6}$. Then the Fréchet inequalities say P(A & B) is in the interval $[0, 2\times10^{-6}]$, and P(A ∨ B) is in the interval $[3\times10^{-6}, 5\times10^{-6}]$. If A and B are independent, however, the probability of A & B is $6\times10^{-12}$ which is, comparatively, very close to the lower limit (zero) of the Fréchet interval. Similarly, the probability of A ∨ B is $4.999994\times10^{-6}$, which is very close to the upper limit of the Fréchet interval. This is what justifies the rare-event approximation often used in reliability theory. - -The proofs are elementary. Recall that P(A ∨ B) = P(A) + P(B) - P(A & B), which implies P(A) + P(B) - P(A ∨ B) = P(A & B). Because all probabilities are no bigger than 1, we know P(A ∨ B) ≤ 1, which implies that P(A) + P(B) - 1 ≤ P(A & B). Because all probabilities are also nonnegative, we can similarly say 0 ≤ P(A & B), so max(0, P(A) + P(B) - 1) ≤ P(A & B). This gives the lower bound on the conjunction. - -To get the upper bound, recall that P(A & B) = P(A|B) P(B) = P(B|A) P(A). Because P(A|B) ≤ 1 and P(B|A) ≤ 1, we know P(A & B) ≤ P(A) and P(A & B) ≤ P(B). Therefore, P(A & B) ≤ min(P(A), P(B)), which is the upper bound. - -The best-possible nature of these bounds follows from observing that they are realized by some dependency between the events A and B. Comparable bounds on the disjunction are similarly derived. - -When the input probabilities are themselves interval ranges, the Fréchet formulas still work as a probability bounds analysis. - -Hailperin has suggested using the inequalities in various applications of artificial intelligence and has extended the rules to account for various assumptions about the dependence among the events. The inequalities can also be generalized to other logical operations, including even modus ponens. - -Consider a composite quantum system. In particular, we focus on a composite quantum system AB made up of two finite subsystems denoted as A and B. Assume that we know the density matrix of the subsystem A, i.e., $\rho^A$, which is a trace-one positive semidefinite matrix in $\Complex_{h}^{n\times n}$ (the space of Hermitian matrices of dimension $n \times n$), and the density matrix of subsystem B denoted as $\rho^B.$ We can think of $\rho^A$ and $\rho^B$ as the marginals of the subsystems A and B. From the knowledge of these marginals, we want to infer something about the joint $\rho^{AB}$ in $\Complex_{h}^{nm\times nm}.$ We restrict our attention to joint $\rho^{AB}$ that are separable. A density matrix on a composite system is separable if there exist $p_k\geq 0, \{\rho_1^k \}$ and $\{ \rho_2^k \}$ which are mixed states of the respective subsystems such that -$$ -\rho^{AB}=\sum_k p_k \rho_1^k \otimes \rho_2^k -$$ - -where -$$ -\sum_k p_k = 1. -$$ - -Otherwise $\rho^{AB}$ is called an entangled state.
- -For separable density matrices $ \rho^{AB}$ in $\Complex_{h}^{nm\times nm}$ the following Fréchet-like bounds hold: -$$ - \begin{cases} \rho^{AB} \leq \rho^{A} \otimes I_m \\ \rho^{AB} \leq I_n \otimes \rho^{B} \\[6pt] \rho^{AB} \geq \rho^A \otimes I_m + I_n \otimes \rho^B-I_{nm} \\ \rho^{AB} \geq 0 \end{cases} -$$ - -The inequalities are matrix inequalities, $\otimes$ denotes the tensor product and $I_x$ the identity matrix of dimension $x$. It is evident that structurally the above inequalities are analogues of the classical Fréchet bounds for the logical conjunction. It is also worth noticing that when the matrices $\rho^A,\rho^B$ and $\rho^{AB}$ are restricted to be diagonal, we obtain the classical Fréchet bounds. - -The upper bound is known in quantum mechanics as the reduction criterion for density matrices; it was first proven, and independently formulated, in the quantum information literature. The lower bound has been obtained in work that also provides a Bayesian interpretation of these bounds. - -We have observed that when the matrices $\rho^A,\rho^B$ and $\rho^{AB}$ are all diagonal, we obtain the classical Fréchet bounds. To show that, consider again the previous numerical example: - -\begin{align} - -\rho^A & = \text{diag}(p_{a},p_{\bar{a}})=\text{diag}(0.7,0.3) \\ - -\rho^B & = \text{diag}(p_{b},p_{\bar{b}})=\text{diag}(0.8,0.2) - -\end{align} - -then we have: - -\begin{align} - -\rho^{AB}&= \text{diag}(p_{ab},p_{a\bar{b}},p_{\bar{a}b},p_{\bar{a}\bar{b}}) \leqslant \rho^{A} \otimes I_2 =\text{diag}(0.7,0.7,0.3,0.3) \\ - -\rho^{AB}&= \text{diag}(p_{ab},p_{a\bar{b}},p_{\bar{a}b},p_{\bar{a}\bar{b}}) \leqslant I_2 \otimes \rho^{B} =\text{diag}(0.8,0.2,0.8,0.2) \\ - -\rho^{AB}&= \text{diag}(p_{ab},p_{a\bar{b}},p_{\bar{a}b},p_{\bar{a}\bar{b}}) \geqslant \rho^{A} \otimes I_2 + I_2 \otimes \rho^{B} - I_4 = \text{diag}(0.5, -0.1, 0.1, -0.5) \\ - -\rho^{AB}&=\text{diag}(p_{ab},p_{a\bar{b}},p_{\bar{a}b},p_{\bar{a}\bar{b}}) \geqslant 0 - -\end{align} - -which means: - -\begin{align} - -0.5 &\leqslant p_{ab} \leqslant 0.7 \\ - -0 &\leqslant p_{a\bar{b}} \leqslant 0.2 \\ - -0.1 &\leqslant p_{\bar{a}b} \leqslant 0.3 \\ - -0 &\leqslant p_{\bar{a}\bar{b}} \leqslant 0.2 - -\end{align} - -It is worth pointing out that entangled states violate the above Fréchet bounds. Consider for instance the entangled density matrix (which is not separable): -$$ -\rho^{AB}=\frac{1}{2} \begin{bmatrix} 1 & 0 & 0 & 1\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 1 \end{bmatrix}, -$$ - -which has marginal -$$ -\rho^A=\rho^B=\text{diag}\left(\tfrac{1}{2}, \tfrac{1}{2} \right). -$$ - -Entangled states are not separable and it can easily be verified that -$$ -\begin{cases} \rho^A \otimes I_m-\rho^{AB} \ngeqslant0 \\ I_n \otimes \rho^B -\rho^{AB} \ngeqslant0 \end{cases} -$$ - -since the resulting matrices have one negative eigenvalue. - -Another example of violation of probabilistic bounds is provided by the famous Bell inequality: entangled states exhibit a form of stochastic dependence stronger than the strongest classical dependence, and in fact they violate Fréchet-like bounds. diff --git a/wiki/wikipedia/2741.txt b/wiki/wikipedia/2741.txt deleted file mode 100644 index e3d2891ec559db56ca77b79ace86dc095a442e90..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2741.txt +++ /dev/null @@ -1,7 +0,0 @@ -In arithmetic geometry, the Bombieri–Lang conjecture is an unsolved problem conjectured by Enrico Bombieri and Serge Lang about the Zariski density of the set of rational points of an algebraic variety of general type.
- -The weak Bombieri–Lang conjecture for surfaces states that if $X$ is a smooth surface of general type defined over a number field $k$, then the $k$-rational points of $X$ do not form a dense set in the Zariski topology on $X$. Independently, in a series of papers starting in 1971, Serge Lang conjectured a more general relation between the distribution of rational points and algebraic hyperbolicity. - -The Bombieri–Lang conjecture is an analogue for surfaces of Faltings's theorem, which states that algebraic curves of genus greater than one only have finitely many rational points. - -If true, the Bombieri–Lang conjecture would resolve the Erdős–Ulam problem, as it would imply that there do not exist dense subsets of the Euclidean plane all of whose pairwise distances are rational. diff --git a/wiki/wikipedia/2742.txt b/wiki/wikipedia/2742.txt deleted file mode 100644 index 2c365a8e2be5638c762545a7b21d44d394d78bd7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2742.txt +++ /dev/null @@ -1,37 +0,0 @@ -A Pythagorean quadruple is a tuple of integers a, b, c, and d, such that a^2 + b^2 + c^2 = d^2. They are solutions of a Diophantine equation and often only positive integer values are considered. However, to provide a more complete geometric interpretation, the integer values can be allowed to be negative and zero (thus allowing Pythagorean triples to be included) with the only condition being that d > 0. In this setting, a Pythagorean quadruple (a, b, c, d) defines a cuboid with integer side lengths |a|, |b|, and |c|, whose space diagonal has integer length d; with this interpretation, Pythagorean quadruples are thus also called Pythagorean boxes. In this article we will assume, unless otherwise stated, that the values of a Pythagorean quadruple are all positive integers. - -A Pythagorean quadruple is called primitive if the greatest common divisor of its entries is 1. Every Pythagorean quadruple is an integer multiple of a primitive quadruple. The set of primitive Pythagorean quadruples for which a is odd can be generated by the formulas -$$ -\begin{align} a &= m^2+n^2-p^2-q^2, \\ b &= 2(mq+np), \\ c &= 2(nq-mp), \\ d &= m^2+n^2+p^2+q^2, \end{align} -$$ - -where m, n, p, q are non-negative integers with greatest common divisor 1 such that m + n + p + q is odd. Thus, all primitive Pythagorean quadruples are characterized by Lebesgue's identity -$$ -(m^2 + n^2 + p^2 + q^2)^2 = (2mq + 2np)^2 + (2nq - 2mp)^2 + (m^2 + n^2 - p^2 - q^2)^2. -$$ - -All Pythagorean quadruples (including non-primitives, and with repetition, though a, b, and c do not appear in all possible orders) can be generated from two positive integers a and b as follows: - -If a and b have different parity, let p be any factor of a^2 + b^2 such that p^2 < a^2 + b^2. Then c = (a^2 + b^2 − p^2)/(2p) and d = (a^2 + b^2 + p^2)/(2p). Note that p = d − c. - -A similar method exists for generating all Pythagorean quadruples for which a and b are both even. Let l = a/2 and m = b/2 and let n be a factor of l^2 + m^2 such that n^2 < l^2 + m^2. Then c = (l^2 + m^2 − n^2)/n and d = (l^2 + m^2 + n^2)/n. This method generates all Pythagorean quadruples exactly once each when l and m run through all pairs of natural numbers and n runs through all permissible values for each pair. - -No such method exists if both a and b are odd, in which case no solutions exist as can be seen by the parametrization in the previous section. - -The largest number that always divides the product abcd is 12.
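As a quick computational check of the (m, n, p, q) parametrization above, the following sketch (our own illustration) enumerates small primitive quadruples and asserts Lebesgue's identity:

```python
# Enumerate primitive Pythagorean quadruples (a, b, c, d) with a odd,
# using the (m, n, p, q) parametrization and Lebesgue's identity above.
from math import gcd
from itertools import product

def primitive_quadruples(bound):
    seen = set()
    for m, n, p, q in product(range(bound), repeat=4):
        if (m + n + p + q) % 2 == 1 and gcd(gcd(m, n), gcd(p, q)) == 1:
            a = m*m + n*n - p*p - q*q
            b = 2*(m*q + n*p)
            c = 2*(n*q - m*p)
            d = m*m + n*n + p*p + q*q
            if a > 0 and b > 0 and c > 0:
                assert a*a + b*b + c*c == d*d   # Lebesgue's identity
                seen.add(tuple(sorted((a, b, c))) + (d,))
    return sorted(seen, key=lambda t: t[3])

print(primitive_quadruples(4)[:3])  # starts with (1, 2, 2, 3)
```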
The quadruple with the minimal product is (1, 2, 2, 3). - -A primitive Pythagorean quadruple (a, b, c, d) parametrized by (m, n, p, q) corresponds to the first column of the matrix representation E(α) of conjugation $\alpha(\cdot)\bar{\alpha}$ by the Hurwitz quaternion α = m + ni + pj + qk restricted to the subspace of $\mathbb{H}$ spanned by i, j, k, which is given by -$$ -E(\alpha) = \begin{pmatrix} m^2+n^2-p^2-q^2 & 2np-2mq & 2mp+2nq \\ 2mq+2np & m^2-n^2+p^2-q^2 & 2pq-2mn \\ 2nq-2mp & 2mn+2pq & m^2-n^2-p^2+q^2 \end{pmatrix}, -$$ - -where the columns are pairwise orthogonal and each has norm d. Furthermore, we have (1/d)E(α) ∈ SO(3,$\mathbb{Q}$), and, in fact, all 3 × 3 orthogonal matrices with rational coefficients arise in this manner. - -There are 31 primitive Pythagorean quadruples in which all entries are less than 30. diff --git a/wiki/wikipedia/2743.txt b/wiki/wikipedia/2743.txt deleted file mode 100644 index 2fda07d8cae39eee472ec9905ba53ae5c8959cc4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2743.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, a Jacobian, named for Carl Gustav Jacob Jacobi, may refer to: - -*Jacobian matrix and determinant - -*Jacobian elliptic functions - -*Jacobian variety - -*Intermediate Jacobian diff --git a/wiki/wikipedia/2744.txt b/wiki/wikipedia/2744.txt deleted file mode 100644 index eaa40dada749e17751c305995cc240ce23e90f2d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2744.txt +++ /dev/null @@ -1,51 +0,0 @@ -In computability theory and computational complexity theory, RE (recursively enumerable) is the class of decision problems for which a 'yes' answer can be verified by a Turing machine in a finite amount of time. Informally, it means that if the answer to a problem instance is 'yes', then there is some procedure that takes finite time to determine this, and this procedure never falsely reports 'yes' when the true answer is 'no'. However, when the true answer is 'no', the procedure is not required to halt; it may go into an "infinite loop" for some 'no' cases. Such a procedure is sometimes called a semi-algorithm, to distinguish it from an algorithm, defined as a complete solution to a decision problem. - -Similarly, co-RE is the set of all languages that are complements of a language in RE. In a sense, co-RE contains languages for which membership can be disproved in a finite amount of time, but proving membership might take forever. - -Equivalently, RE is the class of decision problems for which a Turing machine can list all the 'yes' instances, one by one (this is what 'enumerable' means). - -Each member of RE is a recursively enumerable set and therefore a Diophantine set. - -To show this is equivalent, note that if there is a machine $E$ that enumerates all accepted inputs, another machine that takes in a string can run $E$ and accept if the string is enumerated. Conversely, if a machine $M$ accepts when an input is in a language, another machine can enumerate all strings in the language by interleaving simulations of $M$ on every input and outputting strings that are accepted (there is an order of execution that will eventually get to every execution step because there are countably many ordered pairs of inputs and steps). - -The set of recursive languages (R) is a subset of both RE and co-RE. In fact, it is the intersection of those two classes, because we can decide any problem for which there exists a recogniser and also a co-recogniser by simply interleaving them until one obtains a result.
Therefore: -$$ -\mbox{R} = \mbox{RE}\cap\mbox{co-RE} -$$. - -Conversely, the set of languages that are neither RE nor co-RE is known as NRNC. This is the set of languages for which neither membership nor non-membership can be proven in a finite amount of time; it contains all languages that are in neither RE nor co-RE. That is: -$$ -\mbox{NRNC} = \mbox{ALL} - (\mbox{RE}\cup\mbox{co-RE}) -$$. - -Not only are these problems undecidable, but neither they nor their complements are recursively enumerable. - -In January 2020, a preprint announced a proof that RE was equivalent to the class MIP* (the class where a classical verifier interacts with multiple all-powerful quantum provers who share entanglement). - -RE-complete is the set of decision problems that are complete for RE. In a sense, these are the "hardest" recursively enumerable problems. Generally, no constraint is placed on the reductions used except that they must be many-one reductions. - -Examples of RE-complete problems: - -#Halting problem: Whether a program given a finite input finishes running or will run forever. - -#By Rice's Theorem, deciding membership in any nontrivial subset of the set of recursive functions is RE-hard. It will be complete whenever the set is recursively enumerable. - -#Myhill proved that all creative sets are RE-complete. - -#The uniform word problem for groups or semigroups. (Indeed, the word problem for some individual groups is RE-complete.) - -#Deciding membership in a general unrestricted formal grammar. (Again, certain individual grammars have RE-complete membership problems.) - -#The validity problem for first-order logic. - -#Post correspondence problem: Given a list of pairs of strings, determine if there is a selection from these pairs (allowing repeats) such that the concatenation of the first items (of the pairs) is equal to the concatenation of the second items. - -#Determining if a Diophantine equation has any integer solutions. - -co-RE-complete is the set of decision problems that are complete for co-RE. In a sense, these are the complements of the hardest recursively enumerable problems. - -Examples of co-RE-complete problems: - -#The domino problem for Wang tiles. - -#The satisfiability problem for first-order logic. diff --git a/wiki/wikipedia/2745.txt b/wiki/wikipedia/2745.txt deleted file mode 100644 index 59858050753aa8a77f890cdb164a16f3bb6514e3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2745.txt +++ /dev/null @@ -1,33 +0,0 @@ -In applied mathematics, the Atkinson–Mingarelli theorem, named after Frederick Valentine Atkinson and A. B. Mingarelli, concerns eigenvalues of certain Sturm–Liouville differential operators. - -In the simplest of formulations let p, q, w be real-valued piecewise continuous functions defined on a closed bounded real interval, I = [a, b]. The function w(x), which is sometimes denoted by r(x), is called the "weight" or "density" function. Consider the Sturm–Liouville differential equation -$$ -\frac{d}{dx}\left[p(x)\frac{dy}{dx}\right]+q(x)y=\lambda w(x)y, \qquad (1) -$$ - -where y is a function of the independent variable x. In this case, y is called a solution if it is continuously differentiable on (a,b) and (p y′)(x) is piecewise continuously differentiable and y satisfies the equation (1) at all except a finite number of points in (a,b). The unknown function y is typically required to satisfy some boundary conditions at a and b.
- -The boundary conditions under consideration here are usually called separated boundary conditions and they are of the form: -$$ -\alpha_{1}y(a)+\alpha_{2}y'(a)=0\qquad(\alpha_1^2+\alpha_2^2>0), \qquad (2) -$$ -$$ -\beta_{1}y(b)+\beta_{2}y'(b)=0\qquad(\beta_1^2+\beta_2^2>0), \qquad (3) -$$ - -where the $ \{\alpha_i, \beta_i\}$, i = 1, 2 are real numbers. - -Assume that p(x) has a finite number of sign changes and that the positive (resp. negative) part of the function w(x)/p(x) defined by $(w/p)_{+}(x) = \max \{w(x)/p(x), 0\}$, (resp. $(w/p)_{-}(x) = \max \{ -w(x)/p(x), 0\})$ are not identically zero functions over I. Then the eigenvalue problem (1), (2)–(3) has an infinite number of real positive eigenvalues ${\lambda_i}^{+}$, -$$ -0 < {\lambda_1}^{+} < {\lambda_2}^{+} < {\lambda_3}^{+} < \cdots < {\lambda_n}^{+} < \cdots \to \infty; -$$ - -and an infinite number of negative eigenvalues ${\lambda_i}^{-}$, -$$ -0 > {\lambda_1}^{-} > {\lambda_2}^{-} > {\lambda_3}^{-} > \cdots > {\lambda_n}^{-} > \cdots \to - \infty; -$$ - -whose spectral asymptotics are given by their solution [2] of Jörgens' Conjecture [3]: -$$ -{\lambda_n}^{+} \sim \frac{n^2 \pi^2}{\left(\int_a^b \sqrt{(w/p)_{+}(x)} dx\right)^2},\quad n \to \infty, -$$ - -and -$$ -{\lambda_n}^{-} \sim \frac{- n^2 \pi^2}{\left(\int_a^b \sqrt{(w/p)_{-}(x)} dx\right)^2},\quad n \to \infty. -$$ - -For more information on the general theory behind (1) see the article on Sturm–Liouville theory. The stated theorem is actually valid more generally for coefficient functions $1/p, q, w$ that are Lebesgue integrable over I. diff --git a/wiki/wikipedia/2746.txt b/wiki/wikipedia/2746.txt deleted file mode 100644 index 6917161e52a5001a124a52388582341ca7d8e8df..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2746.txt +++ /dev/null @@ -1,21 +0,0 @@ -In combinatorics, Sun's curious identity is the following identity involving binomial coefficients, first established by Zhi-Wei Sun in 2002: -$$ -(x+m+1)\sum_{i=0}^m(-1)^i\dbinom{x+y+i}{m-i}\dbinom{y+2i}{i} -\sum_{i=0}^{m}\dbinom{x+i}{m-i}(-4)^i=(x-m)\dbinom{x}{m}. -$$ - -After Sun's publication of this identity in 2002, five other proofs were obtained by various mathematicians: - -* Panholzer and Prodinger's proof via generating functions; - -* Merlini and Sprugnoli's proof using Riordan arrays; - -* Ekhad and Mohammed's proof by the WZ method; - -* Chu and Claudio's proof with the help of Jensen's formula; - -* Callan's combinatorial proof involving dominos and colorings. diff --git a/wiki/wikipedia/2747.txt b/wiki/wikipedia/2747.txt deleted file mode 100644 index ffbe8ae39804cc790178b3e6b311fbb131b5ef78..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2747.txt +++ /dev/null @@ -1,17 +0,0 @@ -In mathematics, Littlewood's Tauberian theorem is a strengthening of Tauber's theorem introduced by Littlewood in 1911. - -Littlewood showed the following: If $a_n = O(1/n)$, and as x ↑ 1 we have -$$ -\sum a_n x^n \to s, -$$ - -then -$$ - \sum a_n = s. -$$ - -Hardy and Littlewood later showed that the hypothesis on $a_n$ could be weakened to the "one-sided" condition $a_n \geq -C/n$ for some constant C. However in some sense the condition is optimal: Littlewood showed that if $c_n$ is any unbounded sequence then there is a series with $|a_n| \leq |c_n|/n$ which is divergent but Abel summable. - -Littlewood described his discovery of the proof of his Tauberian theorem. Alfred Tauber's original theorem was similar to Littlewood's, but with the stronger hypothesis that $a_n = o(1/n)$.
Hardy had proved a similar theorem for Cesàro summation with the weaker hypothesis $a_n = O(1/n)$, and suggested to Littlewood that the same weaker hypothesis might also be enough for Tauber's theorem. In spite of the fact that the hypothesis in Littlewood's theorem seems only slightly weaker than the hypothesis in Tauber's theorem, Littlewood's proof was far harder than Tauber's, though Jovan Karamata later found an easier proof. - -Littlewood's theorem follows from the later Hardy–Littlewood Tauberian theorem, which is in turn a special case of Wiener's Tauberian theorem, which itself is a special case of various abstract Tauberian theorems about Banach algebras. diff --git a/wiki/wikipedia/2748.txt b/wiki/wikipedia/2748.txt deleted file mode 100644 index 816eeed2089f7e3f02c21a1421dd0a93d5edc303..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2748.txt +++ /dev/null @@ -1,131 +0,0 @@ -In geometry, the Fermat point of a triangle, also called the Torricelli point or Fermat–Torricelli point, is a point such that the sum of the three distances from each of the three vertices of the triangle to the point is the smallest possible. It is so named because this problem was first raised by Fermat in a private letter to Evangelista Torricelli, who solved it. - -The Fermat point gives a solution to the geometric median and Steiner tree problems for three points. - -The Fermat point of a triangle with largest angle at most 120° is simply its first isogonic center or X(13), which is constructed as follows: - -# Construct an equilateral triangle on each of two arbitrarily chosen sides of the given triangle. - -# Draw a line from each new vertex to the opposite vertex of the original triangle. - -# The two lines intersect at the Fermat point. - -An alternative method is the following: - -# On each of two arbitrarily chosen sides, construct an isosceles triangle, with base the side in question, 30-degree angles at the base, and the third vertex of each isosceles triangle lying outside the original triangle. - -# For each isosceles triangle draw a circle, in each case with center on the new vertex of the isosceles triangle and with radius equal to each of the two new sides of that isosceles triangle. - -# The intersection inside the original triangle between the two circles is the Fermat point. - -When a triangle has an angle greater than 120°, the Fermat point is sited at the obtuse-angled vertex. - -In what follows "Case 1" means the triangle has an angle exceeding 120°. "Case 2" means no angle of the triangle exceeds 120°. - -Fig. 2 shows the equilateral triangles ARB, AQC and CPB attached to the sides of the arbitrary triangle ABC. - -Here is a proof using properties of concyclic points to show that the three lines RC, BQ and AP in Fig. 2 all intersect at the point F and cut one another at angles of 60°. - -The triangles RAC and BAQ are congruent because the second is a 60° rotation of the first about A. Hence ∠ARF = ∠ABF and ∠AQF = ∠ACF. By the converse of the inscribed angle theorem applied to the segment AF, the points ARBF are concyclic (they lie on a circle). Similarly, the points AFCQ are concyclic. - -∠ARB = 60°, so ∠AFB = 120°, using the inscribed angle theorem. Similarly, ∠AFC = 120°. - -So ∠BFC = 120°. So, ∠BFC and ∠BPC add up to 180°. Using the inscribed angle theorem, this implies that the points BPCF are concyclic. So, using the inscribed angle theorem applied to the segment BP, ∠BFP = ∠BCP = 60°. Because ∠BFP + ∠BFA = 180°, the point F lies on the line segment AP.
So, the lines RC, BQ and AP are concurrent (they intersect at a single point). Q.E.D. - -This proof applies only in Case 2 since if ∠BAC > 120°, point A lies inside the circumcircle of BPC which switches the relative positions of A and F. However it is easily modified to cover Case 1. Then ∠AFB = ∠AFC = 60° hence ∠BFC = ∠AFB + ∠AFC = 120° which means BPCF is concyclic so ∠BFP = ∠BCP = 60° = ∠BFA. Therefore, A lies on FP. - -The lines joining the centers of the circles in Fig. 2 are perpendicular to the line segments AP, BQ and CR. For example, the line joining the center of the circle containing ARB and the center of the circle containing AQC, is perpendicular to the segment AP. So, the lines joining the centers of the circles also intersect at 60° angles. Therefore, the centers of the circles form an equilateral triangle. This is known as Napoleon's Theorem. - -Given any Euclidean triangle ABC and an arbitrary point P let d(P) = PA+PB+PC, with PA denoting the distance between P and A. The aim of this section is to identify a point P0 such that d(P0) < d(P) for all P ≠ P0. If such a point exists then it will be the Fermat point. In what follows Δ will denote the points inside the triangle and will be taken to include its boundary Ω. - -A key result that will be used is the dogleg rule which asserts that if a triangle and a polygon have one side in common and the rest of the triangle lies inside the polygon then the triangle has a shorter perimeter than the polygon.
[If AB is the common side, extend AC to cut the polygon at X. Then by the triangle inequality the polygon perimeter > AB + AX + XB = AB + AC + CX + XB ≥ AB + AC + BC.] - -Let P be any point outside Δ. Associate each vertex with its remote zone; that is, the half-plane beyond the (extended) opposite side. These 3 zones cover the entire plane except for Δ itself and P clearly lies in either one or two of them. If P is in two (say the B and C zones intersection) then setting P' = A implies d(P') = d(A) < d(P) by the dogleg rule. Alternatively if P is in only one zone, say the A-zone, then d(P') < d(P) where P' is the intersection of AP and BC. So for every point P outside Δ there exists a point P' in Ω such that d(P') < d(P). - -Case 1. The triangle has an angle ≥ 120°. - -Without loss of generality suppose that the angle at A is ≥ 120°. Construct the equilateral triangle AFB and for any point P in Δ (except A itself) construct Q so that the triangle AQP is equilateral and has the orientation shown. Then the triangle ABP is a 60° rotation of the triangle AFQ about A so these two triangles are congruent and it follows that d(P) = CP+PQ+QF which is simply the length of the path CPQF. As P is constrained to lie within ABC, by the dogleg rule the length of this path exceeds AC+AF = d(A). Therefore, d(A) < d(P) for all P ∈ Δ, P ≠ A. Now allow P to range outside Δ. From above a point P' ∈ Ω exists such that d(P') < d(P) and as d(A) ≤ d(P') it follows that d(A) < d(P) for all P outside Δ. Thus d(A) < d(P) for all P ≠ A which means that A is the Fermat point of Δ. In other words, the Fermat point lies at the obtuse-angled vertex. - -Case 2. The triangle has no angle ≥ 120°. - -Construct the equilateral triangle BCD and let P be any point inside Δ and construct the equilateral triangle CPQ. Then CQD is a 60° rotation of CPB about C so d(P) = PA+PB+PC = AP+PQ+QD which is simply the length of the path APQD. Let P0 be the point where AD and CF intersect. This point is commonly called the first isogonic center. Carry out the same exercise with P0 as you did with P, and find the point Q0. By the angular restriction P0 lies inside Δ; moreover, BCF is a 60° rotation of BDA about B so Q0 must lie somewhere on AD. Since ∠CDB = 60° it follows that Q0 lies between P0 and D which means AP0Q0D is a straight line so d(P0) = AD. Moreover, if P ≠ P0 then either P or Q won't lie on AD which means d(P0) = AD < d(P). Now allow P to range outside Δ. From above a point P' ∈ Ω exists such that d(P') < d(P) and as d(P0) ≤ d(P') it follows that d(P0) < d(P) for all P outside Δ. That means P0 is the Fermat point of Δ. In other words, the Fermat point is coincident with the first isogonic center. - -Let O, A, B, C, X be any five points in a plane. Denote the vectors $\overrightarrow{\mathrm{OA}}, \overrightarrow{\mathrm{OB}}, \overrightarrow{\mathrm{OC}}, \overrightarrow{\mathrm{OX}}$ by a, b, c, x respectively, and let i, j, k be the unit vectors from O along a, b, c.
    - -Now |a| = a⋅i = (a − x)⋅i + x⋅i ≤ |a − x| + x⋅i and similarly |b| ≤ |b − x| + x⋅j and |c| ≤ |c − x| + x⋅k.
    - -Adding gives |a| + |b| + |c| ≤ |a − x| + |b − x| + |c − x| + x⋅(i + j + k).
    - -If a, b, c meet at O at angles of 120° then i + j + k = 0 so |a| + |b| + |c| ≤ |a − x| + |b − x| + |c − x| for all x.
    - -In other words, OA + OB + OC ≤ XA + XB + XC and hence O is the Fermat point of ABC.
- -This argument fails when the triangle has an angle ∠C > 120° because there is no point O where a, b, c meet at angles of 120°. Nevertheless, it is easily fixed by redefining k = − (i + j) and placing O at C so that c = 0. Note that |k| ≤ 1 because the angle between the unit vectors i and j is ∠C which exceeds 120°. Since |0| ≤ |0 − x| + x⋅k the third inequality still holds; the other two inequalities are unchanged. The proof now continues as above (adding the three inequalities and using i + j + k = 0) to reach the same conclusion that O (or in this case C) must be the Fermat point of ABC. - -Another approach to finding the point within a triangle, from which the sum of the distances to the vertices of the triangle is minimal, is to use one of the mathematical optimization methods; specifically, the method of Lagrange multipliers and the law of cosines. - -We draw lines from the point within the triangle to its vertices and call them X, Y and Z. Also, let the lengths of these lines be x, y, and z, respectively. Let the angle between X and Y be α, Y and Z be β. Then the angle between X and Z is (2π − α − β). Using the method of Lagrange multipliers we have to find the minimum of the Lagrangian L, which is expressed as: - -L = x + y + z + λ_1 (x^2 + y^2 − 2xy cos(α) − a^2) + λ_2 (y^2 + z^2 − 2yz cos(β) − b^2) + λ_3 (z^2 + x^2 − 2zx cos(α + β) − c^2) - -where a, b and c are the lengths of the sides of the triangle. - -Equating each of the five partial derivatives ∂L/∂x, ∂L/∂y, ∂L/∂z, ∂L/∂α, ∂L/∂β to zero and eliminating λ_1, λ_2, λ_3 eventually gives sin(α) = sin(β) and sin(α + β) = − sin(β) so α = β = 120°. However the elimination is a long and tedious business, and the end result covers only Case 2. - -* When the largest angle of the triangle is not larger than 120°, X(13) is the Fermat point. - -* The angles subtended by the sides of the triangle at X(13) are all equal to 120° (Case 2), or 60°, 60°, 120° (Case 1). - -* The circumcircles of the three constructed equilateral triangles are concurrent at X(13). - -* Trilinear coordinates for the first isogonic center, X(13): - -csc(A + π/3) : csc(B + π/3) : csc(C + π/3), or, equivalently, - -sec(A - π/6) : sec(B - π/6) : sec(C - π/6). - -* Trilinear coordinates for the second isogonic center, X(14): - -csc(A - π/3) : csc(B - π/3) : csc(C - π/3), or, equivalently, - -sec(A + π/6) : sec(B + π/6) : sec(C + π/6). - -* Trilinear coordinates for the Fermat point: - -1 - u + uvw sec(A - π/6) : 1 - v + uvw sec(B - π/6) : 1 - w + uvw sec(C - π/6) - -where u, v, w respectively denote the Boolean variables (A<120°), (B<120°), (C<120°). - -* The isogonal conjugate of X(13) is the first isodynamic point, X(15): - -sin(A + π/3) : sin(B + π/3) : sin(C + π/3). - -* The isogonal conjugate of X(14) is the second isodynamic point, X(16): - -sin(A - π/3) : sin(B - π/3) : sin(C - π/3). - -* The following triangles are equilateral: - -antipedal triangle of X(13) - -antipedal triangle of X(14) - -pedal triangle of X(15) - -pedal triangle of X(16) - -circumcevian triangle of X(15) - -circumcevian triangle of X(16) - -* The lines X(13)X(15) and X(14)X(16) are parallel to the Euler line. The three lines meet at the Euler infinity point, X(30). - -* The points X(13), X(14), the circumcenter, and the nine-point center lie on a Lester circle. - -* The line X(13)X(14) meets the Euler line at the midpoint of X(2) and X(4). - -* The Fermat point lies in the open orthocentroidal disk punctured at its own center, and could be any point therein.
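The 120° property in the list above is easy to probe numerically. The sketch below (our own illustration, using scipy.optimize.minimize on an assumed example triangle) minimizes d(P) directly and checks the angles at the optimum:

```python
# Numerically locate the Fermat point of a triangle by minimizing
# d(P) = PA + PB + PC, then check the 120-degree property (Case 2).
import numpy as np
from scipy.optimize import minimize

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])

def d(P):
    return sum(np.linalg.norm(P - V) for V in (A, B, C))

P_opt = minimize(d, x0=(A + B + C) / 3).x   # start from the centroid

def angle_at(P, U, V):
    u, v = U - P, V - P
    cosine = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(cosine))

# Each pair of vertices should subtend about 120 degrees at the Fermat point.
print(P_opt, [angle_at(P_opt, *pair) for pair in [(A, B), (B, C), (C, A)]])
```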
- -The isogonic centers X(13) and X(14) are also known as the first Fermat point and the second Fermat point respectively. Alternatives are the positive Fermat point and the negative Fermat point. However these different names can be confusing and are perhaps best avoided. The problem is that much of the literature blurs the distinction between the Fermat point and the first Fermat point whereas it is only in Case 2 above that they are actually the same. - -This question was proposed by Fermat, as a challenge to Evangelista Torricelli. Torricelli solved the problem in a similar way to Fermat's, albeit using the intersection of the circumcircles of the three regular triangles instead. His pupil, Viviani, published the solution in 1659. diff --git a/wiki/wikipedia/2749.txt b/wiki/wikipedia/2749.txt deleted file mode 100644 index 40295fabdcb5550c4e50601f549155b56f596d07..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2749.txt +++ /dev/null @@ -1,5 +0,0 @@ -In computer science, the string-to-string correction problem refers to determining the minimum number of edit operations necessary to change one string into another (i.e., computing the shortest edit distance). A single edit operation may be changing a single symbol of the string into another, deleting, or inserting a symbol. The length of the edit sequence provides a measure of the distance between the two strings. - -Several algorithms exist to provide an efficient way to determine string distance and specify the minimum number of transformation operations required. Such algorithms are particularly useful for delta creation operations where something is stored as a set of differences relative to a base version. This allows several versions of a single object to be stored much more efficiently than storing them separately. This holds true even for single versions of several objects if they do not differ greatly, or anything in between. - -Notably, such difference algorithms are used in molecular biology to provide some measure of kinship between different kinds of organisms based on the similarities of their macromolecules (such as proteins or DNA). diff --git a/wiki/wikipedia/275.txt b/wiki/wikipedia/275.txt deleted file mode 100644 index 29c5ef0dda79544c29ab3c2803a2786dce929952..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/275.txt +++ /dev/null @@ -1,33 +0,0 @@ -In combinatorial optimization, network flow problems are a class of computational problems in which the input is a flow network (a graph with numerical capacities on its edges), and the goal is to construct a flow, numerical values on each edge that respect the capacity constraints and that have incoming flow equal to outgoing flow at all vertices except for certain designated terminals.
- -Specific types of network flow problems include: - -*The maximum flow problem, in which the goal is to maximize the total amount of flow out of the source terminals and into the sink terminals - -*The minimum-cost flow problem, in which the edges have costs as well as capacities and the goal is to achieve a given amount of flow (or a maximum flow) that has the minimum possible cost - -*The multi-commodity flow problem, in which one must construct multiple flows for different commodities whose total flow amounts together respect the capacities - -*Nowhere-zero flow, a type of flow studied in combinatorics in which the flow amounts are restricted to a finite set of nonzero values - -The max-flow min-cut theorem equates the value of a maximum flow to the value of a minimum cut, a partition of the vertices of the flow network that minimizes the total capacity of edges crossing from one side of the partition to the other. Approximate max-flow min-cut theorems provide an extension of this result to multi-commodity flow problems. The Gomory–Hu tree of an undirected flow network provides a concise representation of all minimum cuts between different pairs of terminal vertices. - -Algorithms for constructing flows include - -*Dinic's algorithm, a strongly polynomial algorithm for maximum flow - -*The Edmonds–Karp algorithm, a faster strongly polynomial algorithm for maximum flow - -*The Ford–Fulkerson algorithm, a greedy algorithm for maximum flow that is not in general strongly polynomial - -*The network simplex algorithm, a method based on linear programming but specialized for network flow - -*The out-of-kilter algorithm for minimum-cost flow - -*The push–relabel maximum flow algorithm, one of the most efficient known techniques for maximum flow - -Otherwise the problem can be formulated as a more conventional linear program or similar and solved using a general purpose optimization solver. - -Category:Graph algorithms - -Category:Combinatorial optimization diff --git a/wiki/wikipedia/2750.txt b/wiki/wikipedia/2750.txt deleted file mode 100644 index 360caa1cb08d02769d30aea9d54a6a914a3e181e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2750.txt +++ /dev/null @@ -1,49 +0,0 @@ -In mathematics, the theory of Latin squares is an active research area with many open problems. As in other areas of mathematics, such problems are often made public at professional conferences and meetings. Problems posed here appeared in, for instance, the Loops (Prague) conferences and the Milehigh (Denver) conferences. - -
    - -A transversal in a Latin square of order n is a set S of n cells such that every row and every column contains exactly one cell of S, and such that the symbols in S form {1, ..., n}. Let T(n) be the maximum number of transversals in a Latin square of order n. Estimate T(n). - -
- -*Proposed: by Ian Wanless at Loops '03, Prague 2003 - -*Comments: Wanless, McKay and McLeod have bounds of the form c^n < T(n) < d^n n!, where c > 1 and d is about 0.6. A conjecture by Rivin, Vardi and Zimmermann (Rivin et al., 1994) says that you can place at least exp(c n log n) queens in non-attacking positions on a toroidal chessboard (for some constant c). If true this would imply that T(n) > exp(c n log n). A related question is to estimate the number of transversals in the Cayley tables of cyclic groups of odd order. In other words, how many orthomorphisms do these groups have? - -The minimum number of transversals of a Latin square is also an open problem. H. J. Ryser conjectured (Oberwolfach, 1967) that every Latin square of odd order has one. Closely related is the conjecture, attributed to Richard Brualdi, that every Latin square of order n has a partial transversal of order at least n − 1. - -
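For small orders, transversal counts are easy to explore by brute force: a transversal corresponds to a permutation σ of the columns such that the symbols L[i][σ(i)] are all distinct. A minimal sketch (our own, for illustration only):

```python
# Count transversals of a Latin square by brute force.
from itertools import permutations

def count_transversals(L):
    n = len(L)
    return sum(
        len({L[i][sigma[i]] for i in range(n)}) == n
        for sigma in permutations(range(n))
    )

# Cayley table of the cyclic group Z_5 (a Latin square of order 5)
Z5 = [[(i + j) % 5 for j in range(5)] for i in range(5)]
print(count_transversals(Z5))  # e.g. 15 for this square
```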
    - -Describe how all Latin subsquares in multiplication tables of Moufang loops arise. - -
- -*Proposed: by Aleš Drápal at Loops '03, Prague 2003 - -*Comments: It is well known that every Latin subsquare in a multiplication table of a group G is of the form aH × Hb, where H is a subgroup of G and a, b are elements of G. - -
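The group case is easy to visualize computationally. The following sketch (our own illustration) extracts the subsquare of the Z_6 Cayley table with rows aH and columns Hb for the subgroup H = {0, 3}:

```python
# Illustrate the aH x Hb form of Latin subsquares in a group's Cayley table,
# using G = Z_6 and the subgroup H = {0, 3}.
H = [0, 3]
a, b = 1, 2
rows = [(a + h) % 6 for h in H]      # the coset aH
cols = [(h + b) % 6 for h in H]      # the coset Hb

# The induced 2x2 block of the Cayley table (i + j mod 6) is itself
# a Latin square on the symbols of the coset aHb.
block = [[(r + c) % 6 for c in cols] for r in rows]
print(block)   # [[3, 0], [0, 3]] -- each symbol once per row and column
```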
- -A partial Latin square has the Blackburn property if whenever the cells (i, j) and (k, l) are occupied by the same symbol, the opposite corners (i, l) and (k, j) are empty. What is the highest achievable density of filled cells in a partial Latin square with the Blackburn property? In particular, is there some constant c > 0 such that we can always fill at least c n^2 cells? - -
- -*Proposed: by Ian Wanless at Loops '03, Prague 2003 - -*Comments: In a paper to appear, Wanless has shown that if c exists then c < 0.463. He also constructed a family of partial Latin squares with the Blackburn property and asymptotic density of at least exp(-d (log n)^{1/2}) for constant d > 0. - -
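A checker for the Blackburn property takes only a few lines; this sketch (our own, with None marking empty cells) makes the definition concrete:

```python
# Check the Blackburn property of a partial Latin square:
# if cells (i, j) and (k, l) hold the same symbol, then the opposite
# corners (i, l) and (k, j) must both be empty (None).
def has_blackburn_property(square):
    n = len(square)
    filled = [(i, j) for i in range(n) for j in range(n)
              if square[i][j] is not None]
    for i, j in filled:
        for k, l in filled:
            if (i, j) != (k, l) and square[i][j] == square[k][l]:
                if square[i][l] is not None or square[k][j] is not None:
                    return False
    return True

example = [
    [0,    None, None],
    [None, 0,    None],
    [None, None, 1   ],
]
print(has_blackburn_property(example))  # True: corners opposite the two 0s are empty
```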
    - -Let $L_n$ be the number of Latin squares of order n. What is the largest integer $p(n)$ such that $2^{p(n)}$ divides $L_n$? Does $p(n)$ grow quadratically in n? - -
    - -* Proposed: by Ian Wanless at Loops '03, Prague 2003 - -* Comments: Of course, $L_n=n!(n-1)!R_n$ where $R_n$ is the number of reduced Latin squares of order n. This immediately gives a linear number of factors of 2. However, here are the prime factorizations of $R_n$ for n = 2, ...,11: - -
    - -
    - -This table suggests that the power of 2 is growing superlinearly. The best current result is that $R_n$ is always divisible by f!, where f is about n/2. See (McKay and Wanless, 2003). Two authors noticed the suspiciously high power of 2 (without being able to shed much light on it): (Alter, 1975), (Mullen, 1978). diff --git a/wiki/wikipedia/2751.txt b/wiki/wikipedia/2751.txt deleted file mode 100644 index 26de69b930f9b8c7779bca4ca060f13c1b7dd5b0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2751.txt +++ /dev/null @@ -1,75 +0,0 @@ -Dropbox is a file hosting service operated by the American company Dropbox, Inc., headquartered in San Francisco, California, US that offers cloud storage, file synchronization, personal cloud, and client software. Dropbox was founded in 2007 by MIT students Drew Houston and Arash Ferdowsi as a startup company, with initial funding from seed accelerator Y Combinator. - -Dropbox has been ranked as one of the most valuable startups in the US and the world, with a valuation of over US$10 billion, and it has been described as one of Y Combinator's most successful investments to date. However, Dropbox has also experienced criticism and generated controversy for issues including security breaches and privacy concerns. - -Dropbox has been blocked in China since 2014. It received a five star rating in the Electronic Frontier Foundation's 2017 "Protecting Your Data From Government Requests" report. - -Dropbox brings files together in one central place by creating a special folder on the user's computer. The contents of these folders are synchronized to Dropbox's servers and to other computers and devices where the user has installed Dropbox, keeping the same files up-to-date on all devices. Dropbox uses a freemium business model, where users are offered a free account with a set storage size, with paid subscriptions available that offer more capacity and additional features. Dropbox Basic users are given two gigabytes of free storage space. Dropbox Plus users are given two terabytes of storage space, as well as additional features, including advanced sharing controls, remote wipe, and an optional Extended Version History add-on. Dropbox offers computer apps for Microsoft Windows, Apple macOS, and Linux computers, and mobile apps for iOS, Android, and Windows Phone smartphones and tablets. In March 2013, the company acquired Mailbox, a popular email app, and in April 2014, the company introduced Dropbox Carousel, a photo and video gallery app. Both Mailbox and Carousel were shut down in December 2015, with key features from both apps implemented into the regular Dropbox service. In October 2015, it officially announced Dropbox Paper, its collaborative document editor, in a reported effort to expand its operations towards businesses. In March 2016, Dropbox had 500 million users. - -Dropbox founder Drew Houston conceived the Dropbox concept after repeatedly forgetting his USB flash drive while he was a student at MIT. In a 2009 "Meet the Team" post on the Dropbox blog, he wrote that existing services at the time "suffered problems with Internet latency, large files, bugs, or just made me think too much". He began making something for his personal use, but then realized that it could benefit others with the same problems. - -Houston founded Evenflow, Inc. in May 2007 as the company behind Dropbox, and shortly thereafter secured seed funding from Y Combinator. Dropbox was officially launched at 2008's TechCrunch Disrupt, an annual technology conference. 
Owing to trademark disputes between Proxy, Inc. and Evenflow, Dropbox's official domain name was "getdropbox.com" until October 2009, when it acquired its current domain, "dropbox.com". - -A May 2010 report in The Wall Street Journal said that "since [founder Drew Houston] started reading Eric Ries' Lean startup blog about a year ago, the company has started trickling out new features when they are ready instead of waiting to launch a fully featured product. That helps test customer appetite, he says, dubbing the practice "minimum viable product". - -TechCrunch reported in July 2011 that Dropbox had been looking to raise between US$200 and US$300 million, and had a valuation "to end up in the $5 billion to $10 billion range. [...] quite a step up from its previous funding rounds which have totalled a tiny $7.2 million". As noted in a Forbes article, Dropbox had "revenue on track to hit $240 million in 2011". - -In April 2012, Dropbox announced that Bono and The Edge, two members of the Irish rock band U2, were individual investors in the company. - -In 2014 Dropbox raised financing from BlackRock Inc. and others that values the company at $10 billion. - -In March 2017, Bloomberg reported that Dropbox had secured a US$600 million credit line, with the company expected to file for its initial public offering (IPO) "as soon as this year". - -In February 2018, Dropbox filed an IPO to be listed on the Nasdaq. The company's initial intent was to raise $500 million. Dropbox's stock rose 42 percent to $29.89 in its first day of trading on March 23, 2018. - -As of February 2021, Dropbox has been profitable in the last three quarters, whilst also having no debt. - -Dropbox uses a freemium business model, where users are offered a free account with a set storage size, with paid subscriptions available that offer more capacity and additional features. Accordingly, Dropbox's revenue is a product of how many users they can convert to their paid services. - -Dropbox Basic users are given two gigabytes of free storage space. This can be expanded through referrals; users recommend the service to other people, and if those people start using the service, the user is awarded with additional 500 megabytes of storage space. Dropbox Basic users can earn up to 16 gigabytes through the referral program. - -The Dropbox Plus subscription (named Dropbox Pro prior to March 2017) gives users 2 terabytes of storage space, as well as additional features, including: - -* Advanced sharing controls: When sharing a link to a file or folder, users can set passwords and expiration limits. - -* Remote wipe: If a device is stolen or lost, users can remotely wipe the Dropbox folder from the device the next time it comes online. - -* "Extended Version History": An available add-on, it makes Dropbox keep deleted and previous versions of files for one year, a significant extension of the default 30-day recovery time. - -In November 2013, Dropbox announced changes to "Dropbox for Business" that would enable users to connect both their personal Dropbox and their business Dropbox to the same device, with each of the folders being "properly labeled for personal or work, and come with its own password, contacts, settings, and files". Furthermore, Dropbox announced shared audit logs, remote wipe for business administrators, and account transfers, as new features of its Business offering. 
In January 2017, Dropbox introduced "Smart Sync" for Business and Enterprise customers, a feature that lets Windows and macOS users see all files in the Dropbox folder, but only download specific files on-demand. - -Similarly to Dropbox Basic, Dropbox Plus users can also earn extra space through referrals. Plus users earn 1 gigabyte per referral, up to 32 gigabytes. In July 2014, Dropbox began migrating its performance-critical backend infrastructure to Go. - -In September 2012, Dropbox's website code base was rewritten from JavaScript to CoffeeScript. - -Dropbox originally used Amazon's S3 storage system to store user files, but between 2014 and 2016 they gradually moved away from Amazon to use their own hardware, referred to as "Magic Pocket", due to Dropbox's description as "a place where you keep all your stuff, it doesn’t get lost, and you can always access it". In June 2017, the company announced a major global network expansion, aiming to increase synchronization speeds while cutting costs. The expansion, starting with 14 cities across 7 countries on 3 continents, adds "hundreds of gigabits of Internet connectivity with transit providers (regional and global ISPs), and hundreds of new peering partners (where we exchange traffic directly rather than through an ISP)". - -Dropbox uses SSL transfers for synchronization and stores the data via Advanced Encryption Standard (AES)-256 encryption. - -The functionality of Dropbox can be integrated into third-party applications through an application programming interface (API). - -Dropbox prevents sharing of copyrighted data, by checking the hash of files shared in public folders or between users against a blacklist of copyrighted material. This only applies to files or folders shared with other users or publicly, and not to files kept in an individual's Dropbox folder that are not shared. - -In March 2013, Dropbox acquired Mailbox, a popular email app, with Mailbox CEO Gentry Underwood saying that "Rather than grow Mailbox on our own, we've decided to join forces with Dropbox and build it out together". Under the deal, the developers of Mailbox joined Dropbox, but kept Mailbox running as a stand-alone app. Mailbox CEO stated: "We are still struggling to keep up with the demand from those who want to use it", and Dropbox CEO Drew Houston said "We felt we could help Mailbox reach a much different audience much faster". The acquisition was reported to cost $100 million. - -In December 2015, Dropbox announced the shut-down of Mailbox. In a blog post, Drew Houston and Arash Ferdowsi explained that "We'll [...] be using what we've learned from Mailbox to build new ways to communicate and collaborate on Dropbox". - -In April 2014, Dropbox introduced Carousel, a photo and video gallery that "combines the photos in your Dropbox with the photos on your phone, and automatically backs up new ones as you take them." Carousel sorted photos by event and date. In December 2015, Dropbox announced the shut-down of Carousel. In a blog post, Drew Houston and Arash Ferdowsi explained that "We'll be taking key features from Carousel back to the place where your photos live - in the Dropbox app." - -In April 2015, Dropbox launched a Dropbox Notes collaborative note-taking service in beta testing phase, prompting speculation if Dropbox was planning to bring out a product to compete with Google Docs. 
TechCrunch noted that Dropbox Notes appeared to be a new version of "Project Composer", a previous iteration of the service with roots from the acquisition of Hackpad in April 2014. In October 2015, Dropbox announced the upcoming launch of Dropbox Paper, its collaborative document editor, noted by the media as the result of its development of a Dropbox Notes service earlier in 2015. Dropbox Paper entered open beta in August 2016, allowing anyone to join and test the product. Mobile apps for Android and iOS were also released. In January 2017, Dropbox Paper was officially launched. Aimed for businesses, Dropbox Paper was described as "one part online document, one part collaboration, one part task management tool, one part content hub" by Rob Baesman, Dropbox's head of product, and allows for importing, editing, and collaboration on "a number of other file types from Google, Microsoft, and others". - -Users have devised a number of uses for and mashups of the technology that expand Dropbox's functionality. These include: sending files to a Dropbox via Gmail; using Dropbox to sync instant messaging chat logs; BitTorrent management; password management; remote application launching and system monitoring; and as a free web hosting service. - -Dropbox has received several awards, including the Crunchie Award in 2010 for Best Internet Application, and Macworlds 2009 Editor's Choice Award for Software. It was nominated for a 2010 Webby Award, and for the 2010 Mac Design Awards by Ars Technica. Dropbox's mobile iPhone app release in 2010 was among the top 10 "best apps" selected by Alex Ahlund, former CEO of two websites focused on mobile apps, and the company's Android app was also selected as one of the top five "best apps" in a list compiled in 2010 by Jason Hiner for ZDNet. Founders Drew Houston and Arash Ferdowsi were named among the top 30 under 30 entrepreneurs by Inc. in 2011. - -In 2011, Business Insider named Dropbox the world's sixth most valuable startup, and in 2017, the publication ranked Dropbox as the eighth most valuable US startup, with a valuation of $10 billion. It has been described as one of Y Combinator's most successful investments to date. Apple launched its own cloud storage service later in 2011, iCloud, but this didn't hold back Dropbox's growth. In January 2012, Dropbox was named startup of the year by TechCrunch, and in 2016, the company was ranked #2 on the Forbes Cloud 100 list. - -Dropbox has been the subject of criticism and controversy related to multiple incidents, including a June 2011 authentication problem that let accounts be accessed for several hours without passwords; a July 2011 Privacy Policy update with language suggesting Dropbox had ownership of users' data; concerns about Dropbox employee access to users' information; July 2012 email spam with recurrence in February 2013; leaked government documents in June 2013 with information that Dropbox was being considered for inclusion in the National Security Agency's PRISM surveillance program; a July 2014 comment from NSA whistleblower Edward Snowden criticizing Dropbox's encryption keys being available to employees; the leak of 68 million account passwords on the Internet in August 2016; and a January 2017 accidental data restoration incident where years-old supposedly deleted files reappeared in users' accounts. 
- -While Dropbox uses SSL to encrypt data in transit between itself and customers, it stores the data in encrypted form but does not use end-to-end encryption, in which the user would control the keys used to encrypt the stored data. As a result, Dropbox can decrypt customers' data if it chooses to. - -The Dropbox headquarters, located in San Francisco, was originally on Market Street, until the company's expansion to the China Basin Landing building in July 2011, which allowed for a significant space increase. As the number of employees grew, the company again needed to expand, and in February 2014, it signed a lease for two buildings in Brannan Street. Not needing the substantial amount of space after all, the company started shopping the remaining available space to other companies for sublease in November 2015. - -Dropbox expanded into its second U.S. office in Austin, Texas in February 2014. The State of Texas and City of Austin provided a $1.7 million performance-based incentives package to Dropbox in exchange for locating its office in Austin. - -In December 2012, Dropbox set up an office in Dublin, Ireland, its first office outside the United States. diff --git a/wiki/wikipedia/2752.txt b/wiki/wikipedia/2752.txt deleted file mode 100644 index 455961bb95edfdff25a0165265d6dd4c1992c971..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2752.txt +++ /dev/null @@ -1,3 +0,0 @@ -In geometry, the Braikenridge–Maclaurin theorem, named for 18th-century British mathematicians William Braikenridge and Colin Maclaurin, is the converse to Pascal's theorem. It states that if the three intersection points of the three pairs of lines through opposite sides of a hexagon lie on a line L, then the six vertices of the hexagon lie on a conic C; the conic may be degenerate, as in Pappus's theorem. - -The Braikenridge–Maclaurin theorem may be applied in the Braikenridge–Maclaurin construction, which is a synthetic construction of the conic defined by five points, by varying the sixth point. Namely, Pascal's theorem states that given six points on a conic (the vertices of a hexagon), the lines defined by opposite sides intersect in three collinear points. This can be reversed to construct the possible locations for a sixth point, given five existing ones. diff --git a/wiki/wikipedia/2753.txt b/wiki/wikipedia/2753.txt deleted file mode 100644 index a8f5db8b550981d07630bf54e950c8941f6023b0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2753.txt +++ /dev/null @@ -1,47 +0,0 @@ -In algebraic combinatorics, the Kruskal–Katona theorem gives a complete characterization of the f-vectors of abstract simplicial complexes. It includes as a special case the Erdős–Ko–Rado theorem and can be restated in terms of uniform hypergraphs. It is named after Joseph Kruskal and Gyula O. H. Katona, but has been independently discovered by several others. - -Given two positive integers N and i, there is a unique way to expand N as a sum of binomial coefficients as follows: - - N=\binom{n_i}{i}+\binom{n_{i-1}}{i-1}+\ldots+\binom{n_j}{j},\quad -n_i > n_{i-1} > \ldots > n_j \geq j\geq 1. - -This expansion can be constructed by applying the greedy algorithm: set ni to be the maximal n such that $ N\geq \binom{n}{i}, $ replace N with the difference, i with i − 1, and repeat until the difference becomes zero. Define -$$ - N^{(i)}=\binom{n_i}{i+1}+\binom{n_{i-1}}{i}+\ldots+\binom{n_j}{j+1}.
-$$ - -An integral vector $(f_0, f_1, ..., f_{d-1})$ is the f-vector of some $(d-1)$-dimensional simplicial complex if and only if
-$$
- 0 \leq f_{i} \leq f_{i-1}^{(i)},\quad 1\leq i\leq d-1.
-$$ - -Let A be a set consisting of N distinct i-element subsets of a fixed set U ("the universe") and B be the set of all $(i-r)$-element subsets of the sets in A. Expand N as above. Then the cardinality of B is bounded below as follows:
-$$
- |B| \geq \binom{n_i}{i-r}+\binom{n_{i-1}}{i-r-1}+\ldots+\binom{n_j}{j-r}.
-$$ - -The following weaker but useful form is due to Lovász. Let A be a set of i-element subsets of a fixed set U ("the universe") and B be the set of all $(i-r)$-element subsets of the sets in A. If $|A| = \binom{x}{i}$ then $|B| \geq \binom{x}{i-r}$. - -In this formulation, x need not be an integer. The value of the binomial expression is $\binom{x}{i} = \frac{x(x-1)\dots(x-i+1)}{i!}$. - -For every positive i, list all i-element subsets a1 < a2 < … < ai of the set N of natural numbers in the colexicographical order. For example, for i = 3, the list begins
-$$
- 123, 124, 134, 234, 125, 135, 235, 145, 245, 345, \ldots.
-$$ - -Given a vector $f = (f_0, f_1, ..., f_{d-1})$ with positive integer components, let Δf be the subset of the power set 2^N consisting of the empty set together with the first $f_{i-1}$ i-element subsets of N in the list for i = 1, …, d. Then the following conditions are equivalent: - -# Vector f is the f-vector of a simplicial complex Δ. - -# Δf is a simplicial complex. - -# $ f_{i} \leq f_{i-1}^{(i)},\quad 1\leq i\leq d-1.$ - -The difficult implication is 1 ⇒ 2. - -The theorem is named after Joseph Kruskal and Gyula O. H. Katona, who published it in 1963 and 1968 respectively. - -According to Le, it was discovered independently by Kruskal, Katona, Schützenberger, Harper, and Clements. - -Le writes that the earliest of these references, by Schützenberger, has an incomplete proof. diff --git a/wiki/wikipedia/2754.txt b/wiki/wikipedia/2754.txt deleted file mode 100644 index cc23f3fb0cea0ffcca4364f100c6872891c1da17..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2754.txt +++ /dev/null @@ -1,27 +0,0 @@ -In structural complexity theory, the Berman–Hartmanis conjecture is an unsolved conjecture named after Leonard C. Berman and Juris Hartmanis that states that all NP-complete languages look alike, in the sense that they can be related to each other by polynomial time isomorphisms. - -An isomorphism between formal languages L1 and L2 is a bijective map f from strings in the alphabet of L1 to strings in the alphabet of L2, with the property that a string x belongs to L1 if and only if f(x) belongs to L2. It is a polynomial time isomorphism (or p-isomorphism for short) if both f and its inverse function can be computed in an amount of time polynomial in the lengths of their arguments. - -Berman and Hartmanis observed that all languages known at that time to be NP-complete were p-isomorphic. More strongly, they observed that all then-known NP-complete languages were paddable, and they proved (analogously to the Myhill isomorphism theorem) that all pairs of paddable NP-complete languages are p-isomorphic. A language L is paddable if there is a polynomial time function f(x,y) with a polynomial time inverse and with the property that, for all x and y, x belongs to L if and only if f(x,y) belongs to L: that is, it is possible to pad the input x with irrelevant information y, in an invertible way, without changing its membership in the language.
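To make the padding definition concrete, here is a minimal Python sketch for SAT, with clauses written as DIMACS-style lists of nonzero integers. The encoding of the pad and the function names are our own illustrative assumptions, not the construction from Berman and Hartmanis's paper: each bit of y becomes a tautological clause over a fresh variable, so satisfiability (and hence membership in SAT) is unchanged, and the map is invertible in polynomial time.

```
# Hypothetical sketch of a padding function for SAT; clauses are
# DIMACS-style lists of nonzero ints. Not the original construction.

def pad(clauses, num_vars, y_bits):
    """Encode each bit of y as a tautological clause on a fresh variable:
    bit '0' becomes [v, -v] and bit '1' becomes [-v, v]."""
    padded = [list(c) for c in clauses]
    v = num_vars
    for bit in y_bits:
        v += 1
        padded.append([v, -v] if bit == "0" else [-v, v])
    return padded

def unpad(padded, num_vars):
    """Invert pad(). num_vars is recoverable in a real encoding because
    DIMACS headers carry the variable count of the original instance."""
    clauses = [list(c) for c in padded]
    bits = []
    while clauses and abs(clauses[-1][0]) > num_vars:
        a, b = clauses.pop()
        assert a == -b  # padding clauses are single-variable tautologies
        bits.append("0" if a > 0 else "1")
    return clauses, "".join(reversed(bits))

formula = [[1, 2], [-1, 2]]            # x = (p or q) and (not p or q)
assert unpad(pad(formula, 2, "101"), 2) == (formula, "101")
```

Appending the pad never changes which assignments satisfy the original clauses, which is exactly the invariance the definition demands.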
- -Based on these results, Berman and Hartmanis conjectured that all NP-complete languages are p-isomorphic. - -Since p-isomorphism preserves paddability, and there exist paddable NP-complete languages, an equivalent way of stating the Berman–Hartmanis conjecture is that all NP-complete languages are paddable. - -Polynomial time isomorphism is an equivalence relation, and it can be used to partition the formal languages into equivalence classes, so another way of stating the Berman–Hartmanis conjecture is that the NP-complete languages form a single equivalence class for this relation. - -If the Berman–Hartmanis conjecture is true, an immediate consequence would be the nonexistence of sparse NP-complete languages, namely languages in which the number of yes-instances of length n grows only polynomially as a function of n. The known NP-complete languages have a number of yes-instances that grows exponentially, and if L is a language with exponentially many yes-instances then it cannot be p-isomorphic to a sparse language, because its yes-instances would have to be mapped to strings that are more than polynomially long in order for the mapping to be one-to-one. - -The nonexistence of sparse NP-complete languages in turn implies that P ≠ NP, because if P = NP then every nontrivial language in P (including some sparse ones, such as the language of binary strings all of whose bits are zero) would be NP-complete. In 1982, Steve Mahaney published his proof that the nonexistence of sparse NP-complete languages (with NP-completeness defined in the standard way using many-one reductions) is in fact equivalent to the statement that P ≠ NP; this is Mahaney's theorem. Even for a relaxed definition of NP-completeness using Turing reductions, the existence of a sparse NP-complete language would imply an unexpected collapse of the polynomial hierarchy. - -As evidence towards the conjecture, Agrawal showed that an analogous conjecture with a restricted type of reduction is true: every two languages that are complete for NP under AC0 many-one reductions have an AC0 isomorphism. - -Agrawal also showed that if there exist one-way functions that cannot be inverted in polynomial time on all inputs, but every such function has a small but dense subset of inputs on which it can be inverted in P/poly (as is true for known functions of this type), then every two NP-complete languages have a P/poly isomorphism. - -Fenner found an oracle machine model in which the analogue to the isomorphism conjecture is true. - -Evidence against the conjecture was provided by Joseph and Young, and by Kurtz et al. Joseph and Young introduced a class of NP-complete problems, the k-creative sets, for which no p-isomorphism to the standard NP-complete problems is known. - -Kurtz et al. showed that in oracle machine models given access to a random oracle, the analogue of the conjecture is not true: if A is a random oracle, then not all sets complete for NP^A have isomorphisms in P^A. - -Random oracles are commonly used in the theory of cryptography to model cryptographic hash functions that are computationally indistinguishable from random, and the construction of Kurtz et al. can be carried out with such a function in place of the oracle. For this reason, among others, the Berman–Hartmanis isomorphism conjecture is believed false by many complexity theorists.
diff --git a/wiki/wikipedia/2755.txt b/wiki/wikipedia/2755.txt deleted file mode 100644 index b0b21200ba34e0f7cc344f3fc012586dec60392d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2755.txt +++ /dev/null @@ -1,9 +0,0 @@ -Invariance of domain is a theorem in topology about homeomorphic subsets of Euclidean space $\R^n$. - -It states: - -If $U$ is an open subset of $\R^n$ and $f : U \to \R^n$ is an injective continuous map, then $V := f(U)$ is open in $\R^n$ and $f$ is a homeomorphism between $U$ and $V$. - -The theorem and its proof are due to L. E. J. Brouwer, published in 1912. - -The proof uses tools of algebraic topology, notably the Brouwer fixed point theorem. diff --git a/wiki/wikipedia/2756.txt b/wiki/wikipedia/2756.txt deleted file mode 100644 index 2d4e98ecb194768feb84b59f8964e6c15adc11ea..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2756.txt +++ /dev/null @@ -1,5 +0,0 @@ -Naimark's problem is a question in functional analysis asked by Mark Naimark. It asks whether every C*-algebra that has only one irreducible $ * $-representation up to unitary equivalence is isomorphic to the $ * $-algebra of compact operators on some (not necessarily separable) Hilbert space. - -The problem has been solved in the affirmative for special cases (specifically for separable and Type-I C*-algebras). Akemann and Weaver used the $ \diamondsuit $-Principle to construct a C*-algebra with $ \aleph_{1} $ generators that serves as a counterexample to Naimark's Problem. More precisely, they showed that the existence of a counterexample generated by $ \aleph_{1} $ elements is independent of the axioms of Zermelo–Fraenkel set theory and the Axiom of Choice ($ \mathsf{ZFC} $). - -Whether Naimark's problem itself is independent of $ \mathsf{ZFC} $ remains unknown. diff --git a/wiki/wikipedia/2757.txt b/wiki/wikipedia/2757.txt deleted file mode 100644 index 45a844a119514e6db4c799fe52570a54a6751e38..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2757.txt +++ /dev/null @@ -1,15 +0,0 @@ -In mathematics, George Glauberman's ZJ theorem states that if a finite group G is p-constrained and p-stable and has a normal p-subgroup for some odd prime p, then Op′(G)Z(J(S)) is a normal subgroup of G, for any Sylow p-subgroup S. - -*J(S) is the Thompson subgroup of a p-group S: the subgroup generated by the abelian subgroups of maximal order. - -*Z(H) means the center of a group H. - -*Op′ is the maximal normal subgroup of G of order coprime to p, the p′-core. - -*Op is the maximal normal p-subgroup of G, the p-core. - -*Op′,p(G) is the maximal normal p-nilpotent subgroup of G, the p′,p-core, part of the upper p-series. - -*For an odd prime p, a group G with Op(G) ≠ 1 is said to be p-stable if whenever P is a p-subgroup of G such that POp′(G) is normal in G, and [P,x,x] = 1, then the image of x in NG(P)/CG(P) is contained in a normal p-subgroup of NG(P)/CG(P). - -*For an odd prime p, a group G with Op(G) ≠ 1 is said to be p-constrained if the centralizer CG(P) is contained in Op′,p(G) whenever P is a Sylow p-subgroup of Op′,p(G). diff --git a/wiki/wikipedia/2758.txt b/wiki/wikipedia/2758.txt deleted file mode 100644 index f29de91c9aff3034fd2458e921b6fa6ee4153dc8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2758.txt +++ /dev/null @@ -1,31 +0,0 @@ -In propositional calculus and proof complexity, a propositional proof system (pps), also called a Cook–Reckhow propositional proof system, is a system for proving classical propositional tautologies.
- -Formally a pps is a polynomial-time function P whose range is the set of all propositional tautologies (denoted TAUT). - -Propositional proof systems can be compared using the notion of p-simulation. A propositional proof system P p-simulates Q (written as P ≤p Q) when there is a polynomial-time function F such that P(F(x)) = Q(x) for every x. That is, given a Q-proof x, we can find in polynomial time a P-proof of the same tautology. If P ≤p Q and Q ≤p P, the proof systems P and Q are p-equivalent. There is also a weaker notion of simulation: a pps P simulates or weakly p-simulates a pps Q if there is a polynomial p such that for every Q-proof x of a tautology A, there is a P-proof y of A such that the length of y, |y|, is at most p(|x|). (Some authors use the words p-simulation and simulation interchangeably for either of these two concepts, usually the latter.) - -A propositional proof system is called p-optimal if it p-simulates all other propositional proof systems, and it is optimal if it simulates all other pps. A propositional proof system P is polynomially bounded (also called super) if every tautology has a short (i.e., polynomial-size) P-proof. - -If P is polynomially bounded and Q simulates P, then Q is also polynomially bounded. - -The set of propositional tautologies, TAUT, is a coNP-complete set. A propositional proof system is a certificate-verifier for membership in TAUT. Existence of a polynomially bounded propositional proof system means that there is a verifier with polynomial-size certificates, i.e., TAUT is in NP. In fact these two statements are equivalent, i.e., there is a polynomially bounded propositional proof system if and only if the complexity classes NP and coNP are equal. - -Some equivalence classes of proof systems under simulation or p-simulation are closely related to theories of bounded arithmetic; they are essentially "non-uniform" versions of bounded arithmetic, in the same way that circuit classes are non-uniform versions of resource-based complexity classes. "Extended Frege" systems (allowing the introduction of new variables by definition) correspond in this way to polynomially bounded systems, for example. Where the bounded arithmetic in turn corresponds to a circuit-based complexity class, there are often similarities between the theory of proof systems and the theory of the circuit families, such as matching lower bound results and separations. For example, just as counting cannot be done by an $\mathbf{AC}^0$ circuit family of subexponential size, many tautologies relating to the pigeonhole principle cannot have subexponential proofs in a proof system based on bounded-depth formulas (and in particular, not by resolution-based systems, since they rely solely on depth 1 formulas). - -Some examples of propositional proof systems studied are: - -* Propositional Resolution and various restrictions and extensions of it like the DPLL algorithm - -* Natural deduction - -* Sequent calculus - -* Frege system - -* Extended Frege system - -* Polynomial calculus - -* Nullstellensatz system - -* Cutting-plane method diff --git a/wiki/wikipedia/2759.txt b/wiki/wikipedia/2759.txt deleted file mode 100644 index 4972374ae40d109933079f2e8d09bf08b16cf297..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2759.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, Szymanski's conjecture, named after Ted H. Szymanski, states that every permutation on the n-dimensional doubly directed hypercube graph can be routed with edge-disjoint paths.
That is, if the permutation σ matches each vertex v to another vertex σ(v), then for each v there exists a path in the hypercube graph from v to σ(v) such that no two paths for two different vertices u and v use the same edge in the same direction. - -Through computer experiments it has been verified that the conjecture is true for n ≤ 4. Although the conjecture remains open for n ≥ 5, in this case there exist permutations that require the use of paths that are not shortest paths in order to be routed. diff --git a/wiki/wikipedia/276.txt b/wiki/wikipedia/276.txt deleted file mode 100644 index 6baf42abd9ba91df2429e12473ef6a04ba35d4dc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/276.txt +++ /dev/null @@ -1,55 +0,0 @@ -In computer science and graph theory, the term color-coding refers to an algorithmic technique which is useful in the discovery of network motifs. For example, it can be used to detect a simple path of length k in a given graph. The traditional color-coding algorithm is probabilistic, but it can be derandomized without much overhead in the running time. - -Color-coding also applies to the detection of cycles of a given length, and more generally it applies to the subgraph isomorphism problem (an NP-complete problem), where it yields polynomial time algorithms when the subgraph pattern that it is trying to detect has bounded treewidth. - -The color-coding method was proposed and analyzed in 1994 by Noga Alon, Raphael Yuster, and Uri Zwick. - -The following results can be obtained through the method of color-coding: - -* For every fixed constant k, if a graph G = (V, E) contains a simple cycle of size k, then such a cycle can be found in: - -** $O(V^\omega)$ expected time, or - -** $O(V^\omega \log V)$ worst-case time, where ω is the exponent of matrix multiplication. - -* For every fixed constant k, and every graph G = (V, E) that is in any nontrivial minor-closed graph family (e.g., a planar graph), if G contains a simple cycle of size k, then such a cycle can be found in: - -** O(V) expected time, or - -** O(V log V) worst-case time. - -* If a graph G = (V, E) contains a subgraph isomorphic to a bounded treewidth graph which has O(log V) vertices, then such a subgraph can be found in polynomial time. - -To solve the problem of finding a subgraph $H = (V_H, E_H)$ in a given graph G = (V, E), where H can be a path, a cycle, or any bounded treewidth graph where $|V_H| = O(\log V)$, the method of color-coding begins by randomly coloring each vertex of G with $k = |V_H|$ colors, and then tries to find a colorful copy of H in colored G. Here, a graph is colorful if every vertex in it is colored with a distinct color. This method works by repeating (1) randomly coloring the graph and (2) searching for a colorful copy of the target subgraph, and eventually the target subgraph can be found if the process is repeated a sufficient number of times. - -Suppose a copy of H in G becomes colorful with some non-zero probability p. It immediately follows that if the random coloring is repeated 1/p times, then this copy is expected to become colorful once. Note that though p is small, it is shown that if $|V_H| = O(\log V)$, p is only polynomially small. Suppose again there exists an algorithm such that, given a graph G and a coloring which maps each vertex of G to one of the k colors, it finds a colorful copy of H, if one exists, within some runtime O(r). Then the expected time to find a copy of H in G, if one exists, is $O(\tfrac{r}{p})$.
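As an illustration of this argument, the sketch below implements the outer random-coloring loop in Python for the case where H is a simple path on k vertices, using a dynamic program over (vertex, color-subset) pairs as the colorful-copy finder rather than the matrix-multiplication algorithm described later. The graph encoding and function names are our own assumptions, and the routine is Monte Carlo, so it can return false negatives.

```
import math
import random

def colorful_path_exists(adj, coloring, k):
    """DP over color subsets: reach[v] holds bitmasks of color sets used by
    colorful paths ending at v; i extension rounds give paths on i+1 vertices.
    Distinct colors force distinct vertices, so every such path is simple."""
    reach = {v: {1 << coloring[v]} for v in adj}
    for _ in range(k - 1):
        nxt = {v: set() for v in adj}
        for u in adj:
            for mask in reach[u]:
                for v in adj[u]:
                    bit = 1 << coloring[v]
                    if not mask & bit:
                        nxt[v].add(mask | bit)
        reach = nxt
    return any(reach.values())

def find_k_path(adj, k):
    """Each simple k-path survives a random k-coloring with probability
    k!/k^k > e^-k, so about e^k trials find one with constant probability."""
    for _ in range(int(math.exp(k)) + 1):
        coloring = {v: random.randrange(k) for v in adj}
        if colorful_path_exists(adj, coloring, k):
            return True
    return False

path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(find_k_path(path, 4))  # True with high probability: the path 0-1-2-3
```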
- -Sometimes it is also desirable to use a more restricted version of colorfulness. For example, in the context of finding cycles in planar graphs, it is possible to develop an algorithm that finds well-colored cycles. Here, a cycle is well-colored if its vertices are colored by consecutive colors. - -An example would be finding a simple cycle of length k in graph G = (V, E). - -By applying the random coloring method, each simple cycle has a probability of $k!/k^k > e^{-k}$ to become colorful, since there are $k^k$ ways of coloring the k vertices on the cycle, among which there are $k!$ colorful occurrences. Then an algorithm (described next) can be used to find colorful cycles in the randomly colored graph G in time $O(V^\omega)$, where $\omega$ is the matrix multiplication constant. Therefore, it takes $e^k\cdot O(V^\omega)$ overall time to find a simple cycle of length k in G. - -The colorful cycle-finding algorithm works by first finding all pairs of vertices in V that are connected by a simple path of length k − 1, and then checking whether the two vertices in each pair are connected. Given a coloring function c : V → {1, ..., k} to color graph G, enumerate all partitions of the color set {1, ..., k} into two subsets C1, C2 of size $k/2$ each. Note that V can be divided into V1 and V2 accordingly, and let G1 and G2 denote the subgraphs induced by V1 and V2 respectively. Then, recursively find colorful paths of length $k/2 - 1$ in each of G1 and G2. Suppose boolean matrices A1 and A2 represent the connectivity of each pair of vertices in G1 and G2 by a colorful path, respectively, and let B be the matrix describing the adjacency relations between vertices of V1 and those of V2; the boolean product $A_1BA_2$ gives all pairs of vertices in V that are connected by a colorful path of length k − 1. Thus, the recursive relation of matrix multiplications is $t(k) \le 2^k\cdot t(k/2)$, which yields a runtime of $2^{O(k)}\cdot V^\omega = O(V^\omega)$. Although this algorithm finds only the end points of the colorful path, another algorithm by Alon and Naor that finds colorful paths themselves can be incorporated into it. - -The derandomization of color-coding involves enumerating possible colorings of a graph G, such that the randomness of coloring G is no longer required. For the target subgraph H in G to be discoverable, the enumeration has to include at least one instance where H is colorful. To achieve this, enumerating a k-perfect family F of hash functions from {1, ..., |V|} to {1, ..., k} is sufficient. By definition, F is k-perfect if for every subset S of {1, ..., |V|} where $|S| = k$, there exists a hash function h in F such that h : S → {1, ..., k} is perfect. In other words, there must exist a hash function in F that colors any given k vertices with k distinct colors. - -There are several approaches to construct such a k-perfect hash family: - -# The best explicit construction is by Moni Naor, Leonard J. Schulman, and Aravind Srinivasan, where a family of size $e^k k^{O(\log k)} \log |V|$ can be obtained. This construction does not require the target subgraph to exist in the original subgraph finding problem. - -# Another explicit construction by Jeanette P. Schmidt and Alan Siegel yields a family of size $2^{O(k)}\log^2 |V|$. - -# Another construction that appears in the original paper of Noga Alon et al. can be obtained by first building a k-perfect family that maps {1, ..., |V|} to {1, ..., k^2}, followed by building another k-perfect family that maps {1, ..., k^2} to {1, ..., k}.
In the first step, it is possible to construct such a family with 2n log k random bits that are almost 2 log k-wise independent, and the sample space needed for generating those random bits can be as small as $k^{O(1)}\log |V|$. In the second step, it has been shown by Jeanette P. Schmidt and Alan Siegel that the size of such a k-perfect family can be $2^{O(k)}$. Consequently, by composing the k-perfect families from both steps, a k-perfect family of size $2^{O(k)}\log |V|$ that maps from {1, ..., |V|} to {1, ..., k} can be obtained. - -In the case of derandomizing well-coloring, where each vertex on the subgraph is colored consecutively, a k-perfect family of hash functions from {1, ..., |V|} to {1, ..., k!} is needed. A sufficient k-perfect family which maps from {1, ..., |V|} to {1, ..., k^k} can be constructed in a way similar to approach 3 above (the first step). In particular, it is done by using nk log k random bits that are almost k log k-wise independent, and the size of the resulting k-perfect family will be $k^{O(k)}\log |V|$. - -The derandomization of the color-coding method can be easily parallelized, yielding efficient NC algorithms. - -Recently, color-coding has attracted much attention in the field of bioinformatics. One example is the detection of signaling pathways in protein-protein interaction (PPI) networks. Another example is discovering and counting motifs in PPI networks. Studying both signaling pathways and motifs allows a deeper understanding of the similarities and differences of many biological functions, processes, and structures among organisms. - -Due to the huge amount of gene data that can be collected, searching for pathways or motifs can be highly time consuming. However, by exploiting the color-coding method, motifs or signaling pathways with $k=O(\log n)$ vertices in a network G with n vertices can be found very efficiently in polynomial time. This enables us to explore more complex or larger structures in PPI networks. diff --git a/wiki/wikipedia/2760.txt b/wiki/wikipedia/2760.txt deleted file mode 100644 index 4e2850666144d0c4af4c575e7eebf1300afa2c5a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2760.txt +++ /dev/null @@ -1,35 +0,0 @@ -Dom Guido Grandi, O.S.B. Cam. (1 October 1671 - 4 July 1742) was an Italian monk, priest, philosopher, theologian, mathematician, and engineer. - -Grandi was born on 1 October 1671 in Cremona, Italy, and christened Luigi Francesco Lodovico. When he was of age, he was educated at the Jesuit college there. After he completed his studies there in 1687, he entered the novitiate of the Camaldolese monks at Ferrara and took the name of Guido. In 1693 he was sent to the Monastery of St. Gregory the Great, the Camaldolese house in Rome, to complete his studies in philosophy and theology in preparation for Holy Orders. A year later, Grandi was assigned as professor of both fields at the Camaldolese Monastery of St. Mary of the Angels in Florence. It appears that it was during this period of his life that he took an interest in mathematics. He did his research privately, however, as he was appointed professor of philosophy at St. Gregory Monastery in 1700, subsequently holding a post in the same field in Pisa. - -By 1707, however, Dom Grandi had developed such a reputation in the field of mathematics that he was named court mathematician to the Grand Duke of Tuscany, Cosimo III de Medici.
In that post, he also worked as an engineer, being appointed Superintendent of Water for the Duchy, and in that capacity he was involved in the drainage of the Chiana Valley. In 1709 he visited England, where he clearly impressed his colleagues, as he was elected a Fellow of the Royal Society. The University of Pisa named him Professor of Mathematics in 1714. It was there that he died on 4 July 1742. - -In 1701 Grandi published a study of the conical loxodrome, followed by a study in 1703 of the curve which he named versiera, from the Latin vertere (to turn). This curve was later studied by one of the few women scientists to achieve a degree, Maria Gaetana Agnesi. Through a mistranslation by the translator of her work into English, who mistook Grandi's term for the Italian word for "witch" (avversiera), this curve became known in English as the witch of Agnesi. It was through his studies on this curve that Grandi helped introduce Leibniz' ideas on calculus to Italy. - -In mathematics Grandi is best known for his work Flores geometrici (1728), studying the rose curve, a curve which has the shape of a petalled flower, and for Grandi's series. He named the rose curve rhodonea. He also contributed to the Note on the Treatise of Galileo Concerning Natural Motion in the first Florentine edition of Galileo Galilei's works. diff --git a/wiki/wikipedia/2761.txt b/wiki/wikipedia/2761.txt deleted file mode 100644 index 49e4f31eaf6db2b85a2a4020ffdae18da0dac04c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2761.txt +++ /dev/null @@ -1,39 +0,0 @@ -In differential geometry the Hitchin-Thorpe inequality is a relation which restricts the topology of 4-manifolds that carry an Einstein metric. - -Let M be a closed, oriented, four-dimensional smooth manifold. If there exists a Riemannian metric on M which is an Einstein metric, then
-$$
-\chi(M) \geq \frac{3}{2}|\tau(M)|,
-$$ - -where χ(M) is the Euler characteristic of M and τ(M) is the signature of M. This inequality was first stated by John Thorpe in a footnote to a 1969 paper focusing on manifolds of higher dimension. Nigel Hitchin then rediscovered the inequality, and gave a complete characterization of the equality case in 1974; he found that if (M, g) is an Einstein manifold for which equality in the Hitchin-Thorpe inequality is obtained, then the Ricci curvature of g is zero; if the sectional curvature is not identically equal to zero, then (M, g) is a Calabi-Yau manifold whose universal cover is a K3 surface. - -Let (M, g) be a four-dimensional smooth Riemannian manifold which is Einstein. Given any point p of M, there exists a gp-orthonormal basis e1, e2, e3, e4 of the tangent space TpM such that the curvature operator Rmp, which is a symmetric linear map of ∧2TpM into itself, has matrix
-$$
-\begin{pmatrix}\lambda_1&0&0&\mu_1&0&0\\ 0&\lambda_2&0&0&\mu_2&0\\ 0&0&\lambda_3&0&0&\mu_3\\ \mu_1&0&0&\lambda_1&0&0\\ 0&\mu_2&0&0&\lambda_2&0\\ 0&0&\mu_3&0&0&\lambda_3\end{pmatrix}
-$$ - -relative to the basis e1 ∧ e2, e1 ∧ e3, e1 ∧ e4, e3 ∧ e4, e4 ∧ e2, e2 ∧ e3. One has that μ1 + μ2 + μ3 is zero and that λ1 + λ2 + λ3 is one-fourth of the scalar curvature of g at p. Furthermore, under the conditions λ1 ≤ λ2 ≤ λ3 and μ1 ≤ μ2 ≤ μ3, each of these six functions is uniquely determined and defines a continuous real-valued function on M.
- -According to Chern-Weil theory, if M is oriented then the Euler characteristic and signature of M can be computed by - -\begin{align} - -\chi(M)&=\frac{1}{4\pi^2}\int_M\big(\lambda_1^2+\lambda_2^2+\lambda_3^2+\mu_1^2+\mu_2^2+\mu_3^2\big)d\mu_g\\ - -\tau(M)&=\frac{1}{3\pi^2}\int_M\big(\lambda_1\mu_1+\lambda_2\mu_2+\lambda_3\mu_3\big)d\mu_g. - -\end{align} - -Equipped with these tools, the Hitchin-Thorpe inequality amounts to the elementary observation
-$$
-\lambda_1^2+\lambda_2^2+\lambda_3^2+\mu_1^2+\mu_2^2+\mu_3^2=\underbrace{(\lambda_1-\mu_1)^2+(\lambda_2-\mu_2)^2+(\lambda_3-\mu_3)^2}_{\geq 0}+2\big(\lambda_1\mu_1+\lambda_2\mu_2+\lambda_3\mu_3\big).
-$$ - -A natural question to ask is whether the Hitchin-Thorpe inequality provides a sufficient condition for the existence of Einstein metrics. In 1995, Claude LeBrun and Andrea Sambusetti independently showed that the answer is no: there exist infinitely many non-homeomorphic compact, smooth, oriented 4-manifolds M that carry no Einstein metrics but nevertheless satisfy
-$$
-\chi(M) > \frac{3}{2}|\tau(M)|.
-$$ - -LeBrun's examples are actually simply connected, and the relevant obstruction depends on the smooth structure of the manifold. By contrast, Sambusetti's obstruction only applies to 4-manifolds with infinite fundamental group, but the volume-entropy estimate he uses to prove non-existence only depends on the homotopy type of the manifold. diff --git a/wiki/wikipedia/2762.txt b/wiki/wikipedia/2762.txt deleted file mode 100644 index 458afc9a9ed563323de6bff134a9b687a52533e5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2762.txt +++ /dev/null @@ -1 +0,0 @@ -In several complex variables, the Ohsawa–Takegoshi L2 extension theorem is a fundamental result concerning the holomorphic extension of an $L^2$-holomorphic function defined on a bounded Stein manifold (such as a pseudoconvex compact set in $\mathbb{C}^n$ of dimension less than $n$) to a domain of higher dimension, with a bound on the growth. It was discovered by Takeo Ohsawa and Kensho Takegoshi in 1987, using what have been described as ad hoc methods involving twisted Laplace–Beltrami operators, but simpler proofs have since been discovered. Many generalizations and similar results exist, and are known as theorems of Ohsawa–Takegoshi type. diff --git a/wiki/wikipedia/2763.txt b/wiki/wikipedia/2763.txt deleted file mode 100644 index 923d61a17313e73cce498f17fab1d165f42c0f5b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2763.txt +++ /dev/null @@ -1,21 +0,0 @@ -Fuglede's conjecture is an open problem in mathematics proposed by Bent Fuglede in 1974. It states that every domain of $\mathbb{R}^{d}$ (i.e. subset of $\mathbb{R}^{d}$ with positive finite Lebesgue measure) is a spectral set if and only if it tiles $\mathbb{R}^{d}$ by translation. - -Spectral sets in $\mathbb{R}^d$ - -A set $\Omega \subset \mathbb{R}^{d}$ with positive finite Lebesgue measure is said to be a spectral set if there exists $\Lambda \subset \mathbb{R}^d$ such that $\left \{ e^{2\pi i\left \langle \lambda, \cdot \right \rangle} \right \}_{\lambda\in\Lambda}$ is an orthogonal basis of $L^2(\Omega)$. The set $\Lambda$ is then said to be a spectrum of $\Omega$ and $(\Omega, \Lambda)$ is called a spectral pair. - -Translational tiles of $\mathbb{R}^d$ - -A set $\Omega\subset\mathbb{R}^d$ is said to tile $\mathbb{R}^d$ by translation (i.e.
$\Omega$ is a translational tile) if there exists a discrete set $T$ such that $\bigcup_{t\in T}(\Omega + t)=\mathbb{R}^d$ and the Lebesgue measure of $(\Omega + t) \cap (\Omega + t')$ is zero for all $t\neq t'$ in $T$. - -* Fuglede proved in 1974 that the conjecture holds if $\Omega$ is a fundamental domain of a lattice. - -* In 2003, Alex Iosevich, Nets Katz and Terence Tao proved that the conjecture holds if $\Omega$ is a convex planar domain. - -* In 2004, Terence Tao showed that the conjecture is false on $\mathbb{R}^{d}$ for $d\geq5$. It was later shown by Bálint Farkas, Mihail N. Kolounzakis, Máté Matolcsi and Péter Móra that the conjecture is also false for $d=3$ and $4$. However, the conjecture remains open for $d=1,2$. - -* Alex Iosevich, Azita Mayeli and Jonathan Pakianathan showed that the conjecture holds in $\mathbb{Z}_{p}\times\mathbb{Z}_{p}$, where $\mathbb{Z}_{p}$ is the cyclic group of order p. - -* In 2017, Rachel Greenfeld and Nir Lev proved the conjecture for convex polytopes in $\mathbb{R}^3$. - -* In 2019, Nir Lev and Máté Matolcsi settled the conjecture for convex domains affirmatively in all dimensions. diff --git a/wiki/wikipedia/2764.txt b/wiki/wikipedia/2764.txt deleted file mode 100644 index 0222f2100523e3fa6d3e7ef759579a150cde71fe..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2764.txt +++ /dev/null @@ -1,52 +0,0 @@ -In mathematics, Serre's modularity conjecture, introduced by Jean-Pierre Serre, states that an odd, irreducible, two-dimensional Galois representation over a finite field arises from a modular form. A stronger version of this conjecture specifies the weight and level of the modular form. The conjecture in the level 1 case was proved by Chandrashekhar Khare in 2005, and a proof of the full conjecture was completed jointly by Khare and Jean-Pierre Wintenberger in 2008. - -The conjecture concerns the absolute Galois group $G_\mathbb{Q}$ of the rational number field $\mathbb{Q}$. - -Let $\rho$ be an absolutely irreducible, continuous, two-dimensional representation of $G_\mathbb{Q}$ over a finite field $F = \mathbb{F}_{\ell^r}$.
-$$
- \rho \colon G_\mathbb{Q} \rightarrow \mathrm{GL}_2(F).
-$$ - -Additionally, assume $\rho$ is odd, meaning the image of complex conjugation has determinant -1. - -To any normalized modular eigenform
-$$
- f = q+a_2q^2+a_3q^3+\cdots
-$$ - -of level $ N=N(\rho) $, weight $ k=k(\rho) $, and some Nebentype character
-$$
- \chi \colon \mathbb{Z}/N\mathbb{Z} \rightarrow F^*
-$$, - -a theorem due to Shimura, Deligne, and Serre-Deligne attaches to $ f $ a representation
-$$
- \rho_f\colon G_\mathbb{Q} \rightarrow \mathrm{GL}_2(\mathcal{O}),
-$$ - -where $ \mathcal{O} $ is the ring of integers in a finite extension of $ \mathbb{Q}_\ell $. This representation is characterized by the condition that for all prime numbers $p$ coprime to $N\ell$ we have
-$$
- \operatorname{Trace}(\rho_f(\operatorname{Frob}_p))=a_p
-$$ - -and
-$$
- \det(\rho_f(\operatorname{Frob}_p))=p^{k-1} \chi(p).
-$$ - -Reducing this representation modulo the maximal ideal of $ \mathcal{O} $ gives a mod $ \ell $ representation $ \overline{\rho_f} $ of $ G_\mathbb{Q} $. - -Serre's conjecture asserts that for any representation $ \rho $ as above, there is a modular eigenform $ f $ such that
-$$
- \overline{\rho_f} \cong \rho
-$$. - -The level and weight of the conjectural form $f$ are explicitly conjectured in Serre's article.
In addition, he derives a number of results from this conjecture, among them Fermat's Last Theorem and the now-proven Taniyama–Weil (or Taniyama–Shimura) conjecture, now known as the modularity theorem (although the modularity theorem implies Fermat's Last Theorem, Serre derives the latter directly from his conjecture). - -The strong form of Serre's conjecture describes the level and weight of the modular form. - -The optimal level is the Artin conductor of the representation, with the power of $l$ removed. - -A proof of the level 1 and small weight cases of the conjecture was obtained in 2004 by Chandrashekhar Khare and Jean-Pierre Wintenberger, and by Luis Dieulefait, independently. - -In 2005, Chandrashekhar Khare obtained a proof of the level 1 case of Serre's conjecture, and in 2008 a proof of the full conjecture in collaboration with Jean-Pierre Wintenberger. diff --git a/wiki/wikipedia/2765.txt b/wiki/wikipedia/2765.txt deleted file mode 100644 index d650eefa0f58de7e5513adaf95f6525e72ce2abe..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2765.txt +++ /dev/null @@ -1,370 +0,0 @@ -In mathematics, and more specifically in linear algebra, a linear subspace, also known as a vector subspace, is a vector space that is a subset of some larger vector space. A linear subspace is usually simply called a subspace when the context serves to distinguish it from other types of subspaces. - -If V is a vector space over a field K and if W is a subset of V, then W is a linear subspace of V if under the operations of V, W is a vector space over K. Equivalently, a nonempty subset W is a subspace of V if, whenever w1, w2 are elements of W and α, β are elements of K, it follows that αw1 + βw2 is in W. - -As a corollary, all vector spaces are equipped with at least two (possibly different) linear subspaces: the zero vector space consisting of the zero vector alone and the entire vector space itself. These are called the trivial subspaces of the vector space. - -Let the field K be the set R of real numbers, and let the vector space V be the real coordinate space R^3. - -Take W to be the set of all vectors in V whose last component is 0. - -Then W is a subspace of V. - -Proof: - -#Given u and v in W, then they can be expressed as u = (u1, u2, 0) and v = (v1, v2, 0). Then u + v = (u1+v1, u2+v2, 0+0) = (u1+v1, u2+v2, 0). Thus, u + v is an element of W, too. - -#Given u in W and a scalar c in R, if u = (u1, u2, 0) again, then cu = (cu1, cu2, c·0) = (cu1, cu2, 0). Thus, cu is an element of W too. - -Let the field be R again, but now let the vector space V be the Cartesian plane R^2. - -Take W to be the set of points (x, y) of R^2 such that x = y. - -Then W is a subspace of R^2. - -Proof: - -#Let p = (p1, p2) and q = (q1, q2) be elements of W, that is, points in the plane such that p1 = p2 and q1 = q2. Then p + q = (p1+q1, p2+q2); since p1 = p2 and q1 = q2, then p1 + q1 = p2 + q2, so p + q is an element of W. - -#Let p = (p1, p2) be an element of W, that is, a point in the plane such that p1 = p2, and let c be a scalar in R. Then cp = (cp1, cp2); since p1 = p2, then cp1 = cp2, so cp is an element of W. - -In general, any subset of the real coordinate space R^n that is defined by a system of homogeneous linear equations will yield a subspace. - -(The equation in example I was z = 0, and the equation in example II was x = y.) - -Geometrically, these subspaces are points, lines, planes and spaces that pass through the point 0.
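As a computational aside (a sympy-based sketch of our own, not part of the original article), a basis for the subspace cut out by such a system can be read off from the null space of its coefficient matrix; here the two equations z = 0 and x = y from the examples above carve a line out of R^3:

```
from sympy import Matrix

# Coefficient matrix of the homogeneous system x - y = 0, z = 0 over R^3.
A = Matrix([[1, -1, 0],
            [0,  0, 1]])

# The solution set is the null space: the line spanned by (1, 1, 0).
print(A.nullspace())  # [Matrix([[1], [1], [0]])]
```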
- -Again take the field to be R, but now let the vector space V be the set R^R of all functions from R to R. - -Let C(R) be the subset consisting of continuous functions. - -Then C(R) is a subspace of R^R. - -Proof: - -#We know from calculus that 0 ∈ C(R) ⊂ R^R. - -#We know from calculus that the sum of continuous functions is continuous. - -#Again, we know from calculus that the product of a continuous function and a number is continuous. - -Keep the same field and vector space as before, but now consider the set Diff(R) of all differentiable functions. - -The same sort of argument as before shows that this is a subspace too. - -Examples that extend these themes are common in functional analysis. - -From the definition of vector spaces, it follows that subspaces are nonempty, and are closed under sums and under scalar multiples. Equivalently, subspaces can be characterized by the property of being closed under linear combinations. That is, a nonempty set W is a subspace if and only if every linear combination of finitely many elements of W also belongs to W. In fact, it suffices to consider linear combinations of two elements at a time. - -In a topological vector space X, a subspace W need not be topologically closed, but a finite-dimensional subspace is always closed. The same is true for subspaces of finite codimension (i.e., subspaces determined by a finite number of continuous linear functionals). - -Descriptions of subspaces include the solution set to a homogeneous system of linear equations, the subset of Euclidean space described by a system of homogeneous linear parametric equations, the span of a collection of vectors, and the null space, column space, and row space of a matrix. Geometrically (especially over the field of real numbers and its subfields), a subspace is a flat in an n-space that passes through the origin. - -A natural description of a 1-subspace is the set of all scalar multiples of one non-zero vector v. Two 1-subspaces specified in this way are equal if and only if one spanning vector can be obtained from the other by scalar multiplication:
-$$
-\exists c\in K: \mathbf{v}' = c\mathbf{v}\text{ (or }\mathbf{v} = \frac{1}{c}\mathbf{v}'\text{)}
-$$ - -This idea is generalized for higher dimensions with linear span, but criteria for equality of k-spaces specified by sets of k vectors are not so simple. - -A dual description is provided with linear functionals (usually implemented as linear equations). One non-zero linear functional F specifies its kernel subspace F = 0 of codimension 1. Subspaces of codimension 1 specified by two linear functionals are equal if and only if one functional can be obtained from the other by scalar multiplication (in the dual space):
-$$
-\exists c\in K: \mathbf{F}' = c\mathbf{F}\text{ (or }\mathbf{F} = \frac{1}{c}\mathbf{F}'\text{)}
-$$ - -It is generalized for higher codimensions with a system of equations. The following two subsections will present this latter description in detail, and the remaining four subsections further describe the idea of linear span. - -The solution set to any homogeneous system of linear equations with n variables is a subspace in the coordinate space K^n: - -\left\{ \left[\!\! \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array} \!\!\right] \in K^n : \begin{alignat}{6} - -a_{11} x_1 && + && a_{12} x_2 && + \cdots + && a_{1n} x_n && = 0& \\ - -a_{21} x_1 && + && a_{22} x_2 && + \cdots + && a_{2n} x_n && = 0& \\ - -&& && && && && \vdots\quad& \\ - -a_{m1} x_1 && + && a_{m2} x_2 && + \cdots + && a_{mn} x_n && = 0& - -\end{alignat} \right\}. - -For example, the set of all vectors (x, y, z) (over real or rational numbers) satisfying the equations
-$$
-x + 3y + 2z = 0 \text{ and } 2x - 4y + 5z = 0
-$$ - -is a one-dimensional subspace. More generally, given a set of n independent functions, the dimension of the subspace in K^k will be the dimension of the null space of A, the composite matrix of the n functions. - -In a finite-dimensional space, a homogeneous system of linear equations can be written as a single matrix equation:
-$$
-A\mathbf{x} = \mathbf{0}.
-$$ - -The set of solutions to this equation is known as the null space of the matrix. For example, the subspace described above is the null space of the matrix
-$$
-A = \begin{bmatrix} 1 & 3 & 2 \\ 2 & -4 & 5 \end{bmatrix} .
-$$ - -Every subspace of K^n can be described as the null space of some matrix (see below for more). - -The subset of K^n described by a system of homogeneous linear parametric equations is a subspace: - -\left\{ \left[\!\! \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array} \!\!\right] \in K^n : \begin{alignat}{7} - -x_1 && = && a_{11} t_1 && + && a_{12} t_2 && + \cdots + && a_{1m} t_m & \\ - -x_2 && = && a_{21} t_1 && + && a_{22} t_2 && + \cdots + && a_{2m} t_m & \\ - -&& \vdots && && && && && & \\ - -x_n && = && a_{n1} t_1 && + && a_{n2} t_2 && + \cdots + && a_{nm} t_m & \\ - -\end{alignat} \text{ for some } t_1,\ldots,t_m\in K \right\}. - -For example, the set of all vectors (x, y, z) parameterized by the equations
-$$
-x = 2t_1 + 3t_2, \quad y = 5t_1 - 4t_2, \quad \text{and} \quad z = -t_1 + 2t_2
-$$ - -is a two-dimensional subspace of K^3, if K is a number field (such as real or rational numbers). - -In linear algebra, the system of parametric equations can be written as a single vector equation:
-$$
-\begin{bmatrix} x \\ y \\ z \end{bmatrix} = t_1 \!\begin{bmatrix} 2 \\ 5 \\ -1 \end{bmatrix} + t_2 \!\begin{bmatrix} 3 \\ -4 \\ 2 \end{bmatrix}.
-$$ - -The expression on the right is called a linear combination of the vectors (2, 5, −1) and (3, −4, 2). These two vectors are said to span the resulting subspace. - -In general, a linear combination of vectors v1, v2, ... , vk is any vector of the form
-$$
-t_1 \mathbf{v}_1 + \cdots + t_k \mathbf{v}_k.
-$$ - -The set of all possible linear combinations is called the span: - -\text{Span} \{ \mathbf{v}_1, \ldots, \mathbf{v}_k \} = \left\{ t_1 \mathbf{v}_1 + \cdots + t_k \mathbf{v}_k : t_1,\ldots,t_k\in K \right\} . - -If the vectors v1, ... , vk have n components, then their span is a subspace of K^n. Geometrically, the span is the flat through the origin in n-dimensional space determined by the points v1, ... , vk. - -; Example - -The xz-plane in R^3 can be parameterized by the equations
-$$
-x = t_1, y = 0, z = t_2.
-$$ - -As a subspace, the xz-plane is spanned by the vectors (1, 0, 0) and (0, 0, 1).
Every vector in the xz-plane can be written as a linear combination of these two:
-$$
-(t_1, 0, t_2) = t_1(1,0,0) + t_2(0,0,1)\text{.}
-$$ - -Geometrically, this corresponds to the fact that every point on the xz-plane can be reached from the origin by first moving some distance in the direction of (1, 0, 0) and then moving some distance in the direction of (0, 0, 1). - -A system of linear parametric equations in a finite-dimensional space can also be written as a single matrix equation:
-$$
-\mathbf{x} = A\mathbf{t} \text{ where } A = \left[ \begin{alignat}{2} 2 && 3 & \\ 5 && -4 & \\ -1 && 2 & \end{alignat} \right]\text{.}
-$$ - -In this case, the subspace consists of all possible values of the vector x. In linear algebra, this subspace is known as the column space (or image) of the matrix A. It is precisely the subspace of K^n spanned by the column vectors of A. - -The row space of a matrix is the subspace spanned by its row vectors. The row space is interesting because it is the orthogonal complement of the null space (see below). - -In general, a subspace of K^n determined by k parameters (or spanned by k vectors) has dimension k. However, there are exceptions to this rule. For example, the subspace of K^3 spanned by the three vectors (1, 0, 0), (0, 0, 1), and (2, 0, 3) is just the xz-plane, with each point on the plane described by infinitely many different values of t1, t2, t3. - -In general, vectors v1, ... , vk are called linearly independent if
-$$
-t_1 \mathbf{v}_1 + \cdots + t_k \mathbf{v}_k \ne u_1 \mathbf{v}_1 + \cdots + u_k \mathbf{v}_k
-$$ - -for (t1, t2, ... , tk) ≠ (u1, u2, ... , uk). - -If v1, ..., vk are linearly independent, then the coordinates t1, ..., tk for a vector in the span are uniquely determined. - -A basis for a subspace S is a set of linearly independent vectors whose span is S. The number of elements in a basis is always equal to the geometric dimension of the subspace. Any spanning set for a subspace can be changed into a basis by removing redundant vectors (see § Algorithms below for more). - -; Example - -Let S be the subspace of R^4 defined by the equations
-$$
-x_1 = 2 x_2 \text{ and } x_3 = 5x_4.
-$$ - -Then the vectors (2, 1, 0, 0) and (0, 0, 5, 1) are a basis for S. In particular, every vector that satisfies the above equations can be written uniquely as a linear combination of the two basis vectors:
-$$
-(2t_1, t_1, 5t_2, t_2) = t_1(2, 1, 0, 0) + t_2(0, 0, 5, 1).
-$$ - -The subspace S is two-dimensional. Geometrically, it is the plane in R^4 passing through the points (0, 0, 0, 0), (2, 1, 0, 0), and (0, 0, 5, 1). - -The set-theoretical inclusion binary relation specifies a partial order on the set of all subspaces (of any dimension). - -A subspace cannot lie in any subspace of lesser dimension. If dim U = k, a finite number, and U ⊂ W, then dim W = k if and only if U = W. - -Given subspaces U and W of a vector space V, their intersection U ∩ W := {v ∈ V : v is an element of both U and W} is also a subspace of V. - -Proof: - -# Let v and w be elements of U ∩ W. Then v and w belong to both U and W. Because U is a subspace, v + w belongs to U. Similarly, since W is a subspace, v + w belongs to W. Thus, v + w belongs to U ∩ W. - -# Let v belong to U ∩ W, and let c be a scalar. Then v belongs to both U and W. Since U and W are subspaces, cv belongs to both U and W. - -# Since U and W are subspaces of V, 0 belongs to both sets. Thus, 0 belongs to U ∩ W. - -For every vector space V, the set {0} and V itself are subspaces of V.
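The intersection can also be computed mechanically when both subspaces are presented as solution sets of homogeneous systems: a vector lies in the intersection exactly when it satisfies both systems at once, so a basis comes from the null space of the stacked coefficient matrix. The sketch below is our own hedged sympy illustration, with constraint matrices chosen for the example:

```
from sympy import Matrix

# U = {x + y + z = 0} and W = {x - z = 0}, two planes in R^3.
U_eqs = Matrix([[1, 1, 1]])
W_eqs = Matrix([[1, 0, -1]])

# Stack the constraints; the intersection is the common solution set.
both = U_eqs.col_join(W_eqs)
print(both.nullspace())  # [Matrix([[1], [-2], [1]])]: a line, as expected
```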
- -If U and W are subspaces, their sum is the subspace
-$$
-U + W = \left\{ \mathbf{u} + \mathbf{w} \colon \mathbf{u}\in U, \mathbf{w}\in W \right\}.
-$$ - -For example, the sum of two lines is the plane that contains them both. The dimension of the sum satisfies the inequality
-$$
-\max(\dim U,\dim W) \leq \dim(U + W) \leq \dim(U) + \dim(W).
-$$ - -Here, the minimum only occurs if one subspace is contained in the other, while the maximum is the most general case. The dimension of the intersection and the sum are related by the following equation:
-$$
-\dim(U+W) = \dim(U) + \dim(W) - \dim(U \cap W).
-$$ - -A set of subspaces is independent when the only intersection between any pair of subspaces is the trivial subspace. The direct sum is the sum of independent subspaces, written as $U \oplus W$. An equivalent restatement is that a direct sum is a subspace sum under the condition that every subspace contributes to the span of the sum. - -The formula for the dimension of a direct sum $U \oplus W$ is the same as for the sum of subspaces, but simplifies because the intersection, being the trivial subspace, has dimension zero: - -\dim (U \oplus W) = \dim (U) + \dim (W) - -The operations intersection and sum make the set of all subspaces a bounded modular lattice, where the {0} subspace, the least element, is an identity element of the sum operation, and the entire space V, the greatest element, is an identity element of the intersection operation. - -If $V$ is an inner product space and $N$ is a subset of $V$, then the orthogonal complement of $N$, denoted $N^{\perp}$, is again a subspace. If $V$ is finite-dimensional and $N$ is a subspace, then the dimensions of $N$ and $N^{\perp}$ satisfy the complementary relationship $\dim (N) + \dim (N^{\perp}) = \dim (V) $. Moreover, no nonzero vector is orthogonal to itself, so $ N \cap N^\perp = \{ 0 \}$ and $V$ is the direct sum of $N$ and $N^{\perp}$. Applying orthogonal complements twice returns the original subspace: $(N^{\perp})^{\perp} = N$ for every subspace $N$. - -This operation, understood as negation ($\neg$), makes the lattice of subspaces a (possibly infinite) orthocomplemented lattice (although not a distributive lattice). - -In spaces with other bilinear forms, some but not all of these results still hold. In pseudo-Euclidean spaces and symplectic vector spaces, for example, orthogonal complements exist. However, these spaces may have null vectors that are orthogonal to themselves, and consequently there exist subspaces $N$ such that $N \cap N^{\perp} \ne \{ 0 \}$. As a result, this operation does not turn the lattice of subspaces into a Boolean algebra (nor a Heyting algebra). - -Most algorithms for dealing with subspaces involve row reduction. This is the process of applying elementary row operations to a matrix, until it reaches either row echelon form or reduced row echelon form. Row reduction has the following important properties: - -# The reduced matrix has the same null space as the original. - -# Row reduction does not change the span of the row vectors, i.e. the reduced matrix has the same row space as the original. - -# Row reduction does not affect the linear dependence of the column vectors. - -Input An m × n matrix A. - -Output A basis for the row space of A. - -# Use elementary row operations to put A into row echelon form. - -# The nonzero rows of the echelon form are a basis for the row space of A. - -See the article on row space for an example.
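A minimal sympy rendering of this procedure (the matrix is our own example; sympy's rref() goes all the way to reduced row echelon form, which has the same row space):

```
from sympy import Matrix

A = Matrix([[1, 3, 2],
            [2, 6, 4],    # a multiple of the first row
            [0, 1, 1]])

R, _ = A.rref()           # row operations preserve the row space
basis = [R.row(i) for i in range(R.rows) if any(R.row(i))]
print(basis)              # [Matrix([[1, 0, -1]]), Matrix([[0, 1, 1]])]
```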
If we instead put the matrix A into reduced row echelon form, then the resulting basis for the row space is uniquely determined. This provides an algorithm for checking whether two row spaces are equal and, by extension, whether two subspaces of K^n are equal. - -Input A basis {b1, b2, ..., bk} for a subspace S of K^n, and a vector v with n components. - -Output Determines whether v is an element of S. - -# Create a (k + 1) × n matrix A whose rows are the vectors b1, ... , bk and v. - -# Use elementary row operations to put A into row echelon form. - -# If the echelon form has a row of zeroes, then the vectors {b1, ..., bk, v} are linearly dependent, and therefore v ∈ S. - -Input An m × n matrix A. - -Output A basis for the column space of A. - -# Use elementary row operations to put A into row echelon form. - -# Determine which columns of the echelon form have pivots. The corresponding columns of the original matrix are a basis for the column space. - -See the article on column space for an example. - -This produces a basis for the column space that is a subset of the original column vectors. It works because the columns with pivots are a basis for the column space of the echelon form, and row reduction does not change the linear dependence relationships between the columns. - -Input A basis {b1, b2, ..., bk} for a subspace S of K^n, and a vector v ∈ S. - -Output Numbers t1, t2, ..., tk such that v = t1b1 + ··· + tkbk. - -# Create an augmented matrix A whose columns are b1, ..., bk, with the last column being v. - -# Use elementary row operations to put A into reduced row echelon form. - -# Express the final column of the reduced echelon form as a linear combination of the first k columns. The coefficients used are the desired numbers t1, t2, ..., tk. (These should be precisely the first k entries in the final column of the reduced echelon form.) - -If the final column of the reduced row echelon form contains a pivot, then the input vector v does not lie in S. - -Input An m × n matrix A. - -Output A basis for the null space of A. - -# Use elementary row operations to put A into reduced row echelon form. - -# Using the reduced row echelon form, determine which of the variables x1, x2, ..., xn are free. Write equations for the dependent variables in terms of the free variables. - -# For each free variable xi, choose a vector in the null space for which xi = 1 and the remaining free variables are zero. The resulting collection of vectors is a basis for the null space of A. - -See the article on null space for an example. - -Given two subspaces U and W of V, a basis of the sum $U + W$ and the intersection $U \cap W$ can be calculated using the Zassenhaus algorithm. - -Input A basis {b1, b2, ..., bk} for a subspace S of K^n. - -Output An (n − k) × n matrix whose null space is S. - -# Create a matrix A whose rows are b1, b2, ..., bk. - -# Use elementary row operations to put A into reduced row echelon form. - -# Let c1, c2, ..., cn be the columns of the reduced row echelon form. For each column without a pivot, write an equation expressing the column as a linear combination of the columns with pivots. - -# This results in a homogeneous system of n − k linear equations involving the variables c1,...,cn. The (n − k) × n matrix corresponding to this system is the desired matrix with nullspace S.
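Before the worked example below, here is a hedged sympy sketch of two of these procedures, run on the matrix [[1, 3, 2], [2, -4, 5]] from the earlier one-dimensional subspace example; the pivot-column rule gives a column-space basis and nullspace() gives a null-space basis:

```
from sympy import Matrix

A = Matrix([[1, 3, 2],
            [2, -4, 5]])

_, pivots = A.rref()                     # pivot columns of the echelon form
col_basis = [A.col(j) for j in pivots]   # keep the original pivot columns
null_basis = A.nullspace()

print(pivots)      # (0, 1)
print(null_basis)  # [Matrix([[-23/10], [1/10], [1]])], the span of (-23, 1, 10)
```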
-
-; Example
-
-If the reduced row echelon form of A is
-$$
-\left[ \begin{array}{rrrrrr}
-1 & 0 & -3 & 0 & 2 & 0 \\
-0 & 1 & 5 & 0 & -1 & 4 \\
-0 & 0 & 0 & 1 & 7 & -9 \\
-0 & 0 & 0 & 0 & 0 & 0
-\end{array} \right]
-$$
-
-then the column vectors c1, ..., c6 satisfy the equations
-$$
-\begin{align}
-\mathbf{c}_3 &= -3\mathbf{c}_1 + 5\mathbf{c}_2 \\
-\mathbf{c}_5 &= 2\mathbf{c}_1 - \mathbf{c}_2 + 7\mathbf{c}_4 \\
-\mathbf{c}_6 &= 4\mathbf{c}_2 - 9\mathbf{c}_4
-\end{align}
-$$
-
-It follows that the row vectors of A satisfy the equations
-$$
-\begin{align}
-x_3 &= -3x_1 + 5x_2 \\
-x_5 &= 2x_1 - x_2 + 7x_4 \\
-x_6 &= 4x_2 - 9x_4.
-\end{align}
-$$
-
-In particular, the row vectors of A are a basis for the null space of the matrix corresponding to this system.
diff --git a/wiki/wikipedia/2766.txt b/wiki/wikipedia/2766.txt
deleted file mode 100644
index 49b87ed85cd463a53d26435b5fff8072b8771316..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2766.txt
+++ /dev/null
@@ -1,23 +0,0 @@
-In mathematics, the Morse–Palais lemma is a result in the calculus of variations and theory of Hilbert spaces. Roughly speaking, it states that a smooth enough function near a critical point can be expressed as a quadratic form after a suitable change of coordinates.
-
-The Morse–Palais lemma was originally proved in the finite-dimensional case by the American mathematician Marston Morse, using the Gram–Schmidt orthogonalization process. This result plays a crucial role in Morse theory. The generalization to Hilbert spaces is due to Richard Palais and Stephen Smale.
-
-Let $(H, \langle \cdot ,\cdot \rangle)$ be a real Hilbert space, and let $U$ be an open neighbourhood of the origin in $H.$ Let $f : U \to \R$ be a $(k+2)$-times continuously differentiable function with $k \geq 1;$ that is, $f \in C^{k+2}(U; \R).$ Assume that $f(0) = 0$ and that $0$ is a non-degenerate critical point of $f;$ that is, the second derivative $D^2 f(0)$ defines an isomorphism of $H$ with its continuous dual space $H^*$ by
-$$
-H \ni x \mapsto \mathrm{D}^2 f(0) (x, -) \in H^*.
-$$
-
-Then there exists a subneighbourhood $V$ of $0$ in $U,$ a diffeomorphism $\varphi : V \to V$ that is $C^k$ with $C^k$ inverse, and an invertible symmetric operator $A : H \to H,$ such that
-$$
-f(x) = \langle A \varphi(x), \varphi(x) \rangle \quad \text{ for all } x \in V.
-$$
-
-Let $f : U \to \R$ be $f \in C^{k+2}$ such that $0$ is a non-degenerate critical point. Then there exists a $C^k$-with-$C^k$-inverse diffeomorphism $\psi : V \to V$ and an orthogonal decomposition
-$$
-H = G \oplus G^{\perp},
-$$
-
-such that, if one writes
-$$
-\psi (x) = y + z \quad \mbox{ with } y \in G, z \in G^{\perp},
-$$
-
-then
-$$
-f (\psi(x)) = \langle y, y \rangle - \langle z, z \rangle \quad \text{ for all } x \in V.
-$$
diff --git a/wiki/wikipedia/2767.txt b/wiki/wikipedia/2767.txt
deleted file mode 100644
index e277af7387bb2c699b2dc6035d04a5656060d931..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2767.txt
+++ /dev/null
@@ -1,213 +0,0 @@
-In graph theory, graph coloring is a special case of graph labeling; it is an assignment of labels traditionally called "colors" to elements of a graph subject to certain constraints. In its simplest form, it is a way of coloring the vertices of a graph such that no two adjacent vertices are of the same color; this is called a vertex coloring.
Similarly, an edge coloring assigns a color to each edge so that no two adjacent edges are of the same color, and a face coloring of a planar graph assigns a color to each face or region so that no two faces that share a boundary have the same color. - -Vertex coloring is often used to introduce graph coloring problems, since other coloring problems can be transformed into a vertex coloring instance. For example, an edge coloring of a graph is just a vertex coloring of its line graph, and a face coloring of a plane graph is just a vertex coloring of its dual. However, non-vertex coloring problems are often stated and studied as-is. This is partly pedagogical, and partly because some problems are best studied in their non-vertex form, as in the case of edge coloring. - -The convention of using colors originates from coloring the countries of a map, where each face is literally colored. This was generalized to coloring the faces of a graph embedded in the plane. By planar duality it became coloring the vertices, and in this form it generalizes to all graphs. In mathematical and computer representations, it is typical to use the first few positive or non-negative integers as the "colors". In general, one can use any finite set as the "color set". The nature of the coloring problem depends on the number of colors but not on what they are. - -Graph coloring enjoys many practical applications as well as theoretical challenges. Beside the classical types of problems, different limitations can also be set on the graph, or on the way a color is assigned, or even on the color itself. It has even reached popularity with the general public in the form of the popular number puzzle Sudoku. Graph coloring is still a very active field of research. - -Note: Many terms used in this article are defined in Glossary of graph theory. - -The first results about graph coloring deal almost exclusively with planar graphs in the form of the coloring of maps. - -While trying to color a map of the counties of England, Francis Guthrie postulated the four color conjecture, noting that four colors were sufficient to color the map so that no regions sharing a common border received the same color. Guthrie’s brother passed on the question to his mathematics teacher Augustus de Morgan at University College, who mentioned it in a letter to William Hamilton in 1852. Arthur Cayley raised the problem at a meeting of the London Mathematical Society in 1879. The same year, Alfred Kempe published a paper that claimed to establish the result, and for a decade the four color problem was considered solved. For his accomplishment Kempe was elected a Fellow of the Royal Society and later President of the London Mathematical Society. - -In 1890, Heawood pointed out that Kempe’s argument was wrong. However, in that paper he proved the five color theorem, saying that every planar map can be colored with no more than five colors, using ideas of Kempe. In the following century, a vast amount of work and theories were developed to reduce the number of colors to four, until the four color theorem was finally proved in 1976 by Kenneth Appel and Wolfgang Haken. The proof went back to the ideas of Heawood and Kempe and largely disregarded the intervening developments. The proof of the four color theorem is also noteworthy for being the first major computer-aided proof. 
-
-In 1912, George David Birkhoff introduced the chromatic polynomial to study the coloring problems; it was later generalised by Tutte to the Tutte polynomial, and both are important structures in algebraic graph theory. Kempe had already drawn attention to the general, non-planar case in 1879, and many results on generalisations of planar graph coloring to surfaces of higher order followed in the early 20th century.
-
-In 1960, Claude Berge formulated another conjecture about graph coloring, the strong perfect graph conjecture, originally motivated by an information-theoretic concept called the zero-error capacity of a graph introduced by Shannon. The conjecture remained unresolved for 40 years, until it was established as the celebrated strong perfect graph theorem by Chudnovsky, Robertson, Seymour, and Thomas in 2002.
-
-Graph coloring has been studied as an algorithmic problem since the early 1970s: the chromatic number problem (see below) is one of Karp's 21 NP-complete problems from 1972, and at approximately the same time various exponential-time algorithms were developed based on backtracking and on the deletion-contraction recurrence of Zykov. One of the major applications of graph coloring, register allocation in compilers, was introduced in 1981.
-
-==Definition and terminology==
-
-When used without any qualification, a coloring of a graph is almost always a proper vertex coloring, namely a labeling of the graph's vertices with colors such that no two vertices sharing the same edge have the same color. Since a vertex with a loop (i.e. a connection directly back to itself) could never be properly colored, it is understood that graphs in this context are loopless.
-
-The terminology of using colors for vertex labels goes back to map coloring. Labels like red and blue are only used when the number of colors is small, and normally it is understood that the labels are drawn from the integers {1, 2, 3, ...}.
-
-A coloring using at most k colors is called a (proper) k-coloring. The smallest number of colors needed to color a graph G is called its chromatic number, and is often denoted χ(G). Sometimes γ(G) is used, since χ(G) is also used to denote the Euler characteristic of a graph. A graph that can be assigned a (proper) k-coloring is k-colorable, and it is k-chromatic if its chromatic number is exactly k. A subset of vertices assigned to the same color is called a color class; every such class forms an independent set. Thus, a k-coloring is the same as a partition of the vertex set into k independent sets, and the terms k-partite and k-colorable have the same meaning.
-
-The chromatic polynomial counts the number of ways a graph can be colored using some of a given number of colors. For example, using three colors, the graph in the adjacent image can be colored in 12 ways. With only two colors, it cannot be colored at all. With four colors, it can be colored in 24 + 4⋅12 = 72 ways: using all four colors, there are 4! = 24 valid colorings (any assignment of four distinct colors to a 4-vertex graph is a proper coloring); and for every choice of three of the four colors, there are 12 valid 3-colorings. So, for the graph in the example, the number of valid colorings is 0 with one or two colors, 12 with three colors, 72 with four colors, and so on.
-
-The chromatic polynomial is a function $P(G,t)$ that counts the number of t-colorings of G. As the name indicates, for a given G the function is indeed a polynomial in t. For the example graph, $P(G,t)=t(t-1)^{2}(t-2)$, and indeed $P(G,4)=72$.
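The image referred to above is not reproduced in this dump, but a graph with exactly this chromatic polynomial is a triangle with one pendant vertex, which is used as the assumed stand-in in the following brute-force check (Python, standard library only):

```python
from itertools import product

def num_colorings(vertices, edges, t):
    """Brute-force count of proper colorings with colors {0, ..., t-1}."""
    count = 0
    for assignment in product(range(t), repeat=len(vertices)):
        color = dict(zip(vertices, assignment))
        if all(color[u] != color[v] for u, v in edges):
            count += 1
    return count

# Assumed example graph: a triangle {a, b, c} with pendant vertex d,
# whose chromatic polynomial is t(t-1)^2(t-2).
V = ["a", "b", "c", "d"]
E = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
print([num_colorings(V, E, t) for t in (1, 2, 3, 4)])  # [0, 0, 12, 72]
```

Evaluating $t(t-1)^{2}(t-2)$ at $t = 3$ and $t = 4$ gives $3 \cdot 4 \cdot 1 = 12$ and $4 \cdot 9 \cdot 2 = 72$, matching the counts above.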
-
-The chromatic polynomial includes more information about the colorability of G than does the chromatic number. Indeed, χ is the smallest positive integer that is not a zero of the chromatic polynomial:
-$$
-\chi (G)=\min\{ k \colon P(G,k) > 0 \}.
-$$
-
-An edge coloring of a graph is a proper coloring of the edges, meaning an assignment of colors to edges so that no vertex is incident to two edges of the same color. An edge coloring with k colors is called a k-edge-coloring and is equivalent to the problem of partitioning the edge set into k matchings. The smallest number of colors needed for an edge coloring of a graph G is the chromatic index, or edge chromatic number, χ′(G). A Tait coloring is a 3-edge coloring of a cubic graph. The four color theorem is equivalent to the assertion that every planar cubic bridgeless graph admits a Tait coloring.
-
-Total coloring is a type of coloring on the vertices and edges of a graph. When used without any qualification, a total coloring is always assumed to be proper in the sense that no adjacent vertices, no adjacent edges, and no edge and its end-vertices are assigned the same color. The total chromatic number χ″(G) of a graph G is the fewest colors needed in any total coloring of G.
-
-An unlabeled coloring of a graph is an orbit of a coloring under the action of the automorphism group of the graph. Note that the colors remain labeled; it is the graph that is unlabeled.
-
-There is an analogue of the chromatic polynomial which counts the number of unlabeled colorings of a graph from a given finite color set.
-
-If we interpret a coloring of a graph on $d$ vertices as a vector in $\mathbb{Z}^d$, the action of an automorphism is a permutation of the coefficients in the coloring vector.
-
-Assigning distinct colors to distinct vertices always yields a proper coloring, so
-$$
-1 \le \chi(G) \le n.
-$$
-
-The only graphs that can be 1-colored are edgeless graphs. A complete graph $K_n$ of n vertices requires $\chi(K_n)=n$ colors. In an optimal coloring there must be at least one of the graph's m edges between every pair of color classes, so
-$$
-\chi(G)(\chi(G)-1) \le 2m.
-$$
-
-If G contains a clique of size k, then at least k colors are needed to color that clique; in other words, the chromatic number is at least the clique number:
-$$
-\chi(G) \ge \omega(G).
-$$
-
-For perfect graphs this bound is tight. Finding cliques is known as the clique problem.
-
-More generally, a family $\mathcal{F}$ of graphs is $\chi$-bounded if there is some function $c$ such that the graphs $G$ in $\mathcal{F}$ can be colored with at most $c(\omega(G))$ colors; for the family of perfect graphs this function is $c(\omega(G))=\omega(G)$.
-
-The 2-colorable graphs are exactly the bipartite graphs, including trees and forests.
-
-By the four color theorem, every planar graph can be 4-colored.
-
-A greedy coloring shows that every graph can be colored with one more color than the maximum vertex degree,
-$$
-\chi(G) \le \Delta(G) + 1.
-$$
-
-Complete graphs have $\chi(G)=n$ and $\Delta(G)=n-1$, and odd cycles have $\chi(G)=3$ and $\Delta(G)=2$, so for these graphs this bound is best possible. In all other cases, the bound can be slightly improved; Brooks' theorem states that
-
-Brooks' theorem: $\chi (G) \le \Delta (G) $ for a connected, simple graph G, unless G is a complete graph or an odd cycle.
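The Δ(G) + 1 bound above is constructive: color the vertices one at a time, giving each the smallest color not already used by a colored neighbour; a vertex of degree d is blocked from at most d colors, so color d + 1 is always available. A minimal sketch in Python (the adjacency-dict representation and the 5-cycle example are illustrative choices):

```python
def greedy_coloring(adj, order):
    """Greedy vertex coloring; adj maps each vertex to its set of neighbours."""
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:  # smallest color unused by already-colored neighbours
            c += 1
        color[v] = c      # c <= deg(v), hence at most Delta + 1 colors overall
    return color

# An odd cycle (Delta = 2): greedy needs 3 colors, matching chi = 3.
cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(greedy_coloring(cycle5, range(5)))
# {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}
```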
-
-Several lower bounds for the chromatic number have been discovered over the years:
-
-Hoffman's bound: Let $W$ be a real symmetric matrix such that $ W_{i,j} = 0 $ whenever $(i,j) $ is not an edge in $G$. Define $\chi_W(G) = 1 - \tfrac{\lambda_{\max}(W)}{\lambda_{\min}(W)}$, where $\lambda_{\max}(W), \lambda_{\min}(W)$ are the largest and smallest eigenvalues of $W$. Define $ \chi_H(G) = \max_W \chi_W(G)$, with $W$ as above. Then:
-$$
- \chi_H(G)\leq \chi(G).
-$$
-
-Vector chromatic number: Let $W$ be a positive semi-definite matrix such that $ W_{i,j} \le -\tfrac{1}{k-1} $ whenever $(i,j) $ is an edge in $G$. Define $\chi_V(G)$ to be the least k for which such a matrix $W$ exists. Then
-$$
- \chi_V(G)\leq \chi(G).
-$$
-
-Lovász number: The Lovász number of a complementary graph is also a lower bound on the chromatic number:
-$$
- \vartheta(\bar{G}) \leq \chi(G).
-$$
-
-Fractional chromatic number: The fractional chromatic number of a graph is a lower bound on the chromatic number as well:
-$$
- \chi_f(G) \leq \chi(G).
-$$
-
-These bounds are ordered as follows:
-$$
- \chi_H(G) \leq \chi_V(G) \leq \vartheta(\bar{G}) \leq \chi_f(G) \leq \chi(G).
-$$
-
-Graphs with large cliques have a high chromatic number, but the opposite is not true. The Grötzsch graph is an example of a 4-chromatic graph without a triangle, and the example can be generalized to the Mycielskians.
-
-Theorem: There exist triangle-free graphs with arbitrarily high chromatic number.
-
-To prove this, Mycielski and Zykov each gave a construction of an inductively defined family of triangle-free graphs with arbitrarily large chromatic number. Burling (1965) constructed axis-aligned boxes in $\mathbb{R}^{3}$ whose intersection graph is triangle-free and requires arbitrarily many colors to be properly colored. This family of graphs is called the Burling graphs. The same class of graphs is used for the construction of a family of triangle-free line segments in the plane, given by Pawlik et al. (2014), which shows that the chromatic number of the corresponding intersection graph is arbitrarily large as well. Hence, this implies that axis-aligned boxes in $\mathbb{R}^{3}$ as well as line segments in $\mathbb{R}^{2}$ are not χ-bounded.
-
-Deciding whether a graph admits a proper k-coloring is NP-complete for any k ≥ 3. Faster algorithms are known for 3- and 4-colorability, which can be decided in time $O(1.3289^n)$ and $O(1.7272^n)$, respectively.
-
-The contraction $G/uv$ of a graph G is the graph obtained by identifying the vertices u and v, and removing any edges between them. The remaining edges originally incident to u or v are now incident to their identification. This operation plays a major role in the analysis of graph coloring.
-
-The chromatic number satisfies the recurrence relation:
-$$
-\chi(G) = \text{min} \{ \chi(G+uv), \chi(G/uv)\}
-$$
-
-due to Zykov, where u and v are non-adjacent vertices, and $G+uv$ is the graph with the edge uv added. Several algorithms are based on evaluating this recurrence and the resulting computation tree is sometimes called a Zykov tree. The running time is based on a heuristic for choosing the vertices u and v.
-
-The chromatic polynomial satisfies the following recurrence relation
-$$
-P(G-uv, k)= P(G/uv, k)+ P(G, k)
-$$
-
-where u and v are adjacent vertices, and $G-uv$ is the graph with the edge uv removed. $P(G - uv, k)$ represents the number of possible proper colorings of the graph when u and v may have the same or different colors. Then the proper colorings arise from two different graphs.
To explain, if the vertices u and v have different colors, then we might as well consider a graph where u and v are adjacent. If u and v have the same colors, we might as well consider a graph where u and v are contracted. Tutte's curiosity about which other graph properties satisfied this recurrence led him to discover a bivariate generalization of the chromatic polynomial, the Tutte polynomial.
-
-These expressions give rise to a recursive procedure called the deletion–contraction algorithm, which forms the basis of many algorithms for graph coloring. The running time satisfies the same recurrence relation as the Fibonacci numbers, so in the worst case the algorithm runs in time within a polynomial factor of $\left(\tfrac{1+\sqrt{5}}2\right)^{n+m}=O(1.6180^{n+m})$ for n vertices and m edges. The analysis can be improved to within a polynomial factor of the number $t(G)$ of spanning trees of the input graph. In practice, branch and bound strategies and graph isomorphism rejection are employed to avoid some recursive calls. The running time depends on the heuristic used to pick the vertex pair.
-
-The greedy algorithm considers the vertices in a specific order $v_1$,…,$ v_n$ and assigns to $v_i$ the smallest available color not used by $v_i$'s neighbours among $v_1$,…,$ v_{i-1}$, adding a fresh color if needed. The quality of the resulting coloring depends on the chosen ordering. There exists an ordering that leads to a greedy coloring with the optimal number of $\chi(G)$ colors. On the other hand, greedy colorings can be arbitrarily bad; for example, the crown graph on n vertices can be 2-colored, but has an ordering that leads to a greedy coloring with $n/2$ colors.
-
-For chordal graphs, and for special cases of chordal graphs such as interval graphs and indifference graphs, the greedy coloring algorithm can be used to find optimal colorings in polynomial time, by choosing the vertex ordering to be the reverse of a perfect elimination ordering for the graph. The perfectly orderable graphs generalize this property, but it is NP-hard to find a perfect ordering of these graphs.
-
-If the vertices are ordered according to their degrees, the resulting greedy coloring uses at most $\max_i \min\{d(x_i) + 1, i\}$ colors, at most one more than the graph's maximum degree. This heuristic is sometimes called the Welsh–Powell algorithm. Another heuristic due to Brélaz establishes the ordering dynamically while the algorithm proceeds, choosing next the vertex adjacent to the largest number of different colors. Many other graph coloring heuristics are similarly based on greedy coloring for a specific static or dynamic strategy of ordering the vertices; these algorithms are sometimes called sequential coloring algorithms.
-
-The maximum (worst) number of colors that can be obtained by the greedy algorithm, by using a vertex ordering chosen to maximize this number, is called the Grundy number of a graph.
-
-In the field of distributed algorithms, graph coloring is closely related to the problem of symmetry breaking. The current state-of-the-art randomized algorithms are faster for sufficiently large maximum degree Δ than deterministic algorithms. The fastest randomized algorithms employ the multi-trials technique by Schneider et al.
-
-In a symmetric graph, a deterministic distributed algorithm cannot find a proper vertex coloring. Some auxiliary information is needed in order to break symmetry.
A standard assumption is that initially each node has a unique identifier, for example, from the set {1, 2, ..., n}. Put otherwise, we assume that we are given an n-coloring. The challenge is to reduce the number of colors from n to, e.g., Δ + 1. The more colors are employed, e.g. O(Δ) instead of Δ + 1, the fewer communication rounds are required.
-
-An important class of improper coloring problems is studied in Ramsey theory, where the graph's edges are assigned to colors, and there is no restriction on the colors of incident edges. A simple example is the theorem on friends and strangers, which states that in any coloring of the edges of $K_6$, the complete graph of six vertices, there will be a monochromatic triangle; often illustrated by saying that any group of six people either has three mutual strangers or three mutual acquaintances. Ramsey theory is concerned with generalisations of this idea to seek regularity amid disorder, finding general conditions for the existence of monochromatic subgraphs with given structure.
-
-; Adjacent-vertex-distinguishing-total coloring : A total coloring with the additional restriction that any two adjacent vertices have different color sets
-
-; Acyclic coloring : Every 2-chromatic subgraph is acyclic
-
-; B-coloring : A coloring of the vertices where each color class contains a vertex that has a neighbor in all other color classes.
-
-; Circular coloring : Motivated by task systems in which production proceeds in a cyclic way
-
-; Cocoloring : An improper vertex coloring where every color class induces an independent set or a clique
-
-; Complete coloring : Every pair of colors appears on at least one edge
-
-; Defective coloring : An improper vertex coloring where every color class induces a bounded degree subgraph.
-
-; Distinguishing coloring : An improper vertex coloring that destroys all the symmetries of the graph
-
-; Equitable coloring : The sizes of color classes differ by at most one
-
-; Exact coloring : Every pair of colors appears on exactly one edge
-
-; Fractional coloring : Vertices may have multiple colors, and on each edge the sum of the color parts of each vertex is not greater than one
-
-; Hamiltonian coloring : Uses the length of the longest path between two vertices, also known as the detour distance
-
-; Harmonious coloring : Every pair of colors appears on at most one edge
-
-; Incidence coloring : Each adjacent incidence of vertex and edge is colored with distinct colors
-
-; Interval edge coloring : A color of edges meeting in a common vertex must be contiguous
-
-; List coloring : Each vertex chooses from a list of colors
-
-; List edge-coloring : Each edge chooses from a list of colors
-
-; L(h, k)-coloring : Difference of colors at adjacent vertices is at least h and difference of colors of vertices at a distance two is at least k. A particular case is L(2,1)-coloring.
-
-; Oriented coloring : Takes into account orientation of edges of the graph
-
-; Path coloring : Models a routing problem in graphs
-
-; Radio coloring : Sum of the distance between the vertices and the difference of their colors is greater than k+1, where k is a positive integer.
-
-; Rank coloring : If two vertices have the same color i, then every path between them contains a vertex with color greater than i
-
-; Subcoloring : An improper vertex coloring where every color class induces a union of cliques
-
-; Sum coloring : The criterion of minimization is the sum of colors
-
-; Star coloring : Every 2-chromatic subgraph is a disjoint collection of stars
-
-; Strong coloring : Every color appears in every partition of equal size exactly once
-
-; Strong edge coloring : Edges are colored such that each color class induces a matching (equivalent to coloring the square of the line graph)
-
-; T-coloring : Absolute value of the difference between two colors of adjacent vertices must not belong to fixed set T
-
-; Total coloring : Vertices and edges are colored
-
-; Centered coloring : Every connected induced subgraph has a color that is used exactly once
-
-; Triangle-free edge coloring : The edges are colored so that each color class forms a triangle-free subgraph
-
-; Weak coloring : An improper vertex coloring where every non-isolated node has at least one neighbor with a different color
-
-Coloring can also be considered for signed graphs and gain graphs.
diff --git a/wiki/wikipedia/2768.txt b/wiki/wikipedia/2768.txt
deleted file mode 100644
index 7fc95e54d67ee5f702c822b2c139cbd39afb6fd1..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2768.txt
+++ /dev/null
@@ -1,62 +0,0 @@
-In mathematics, Gårding's inequality is a result that gives a lower bound for the bilinear form induced by a real linear elliptic partial differential operator. The inequality is named after Lars Gårding.
-
-Let Ω be a bounded, open domain in n-dimensional Euclidean space and let Hk(Ω) denote the Sobolev space of k-times weakly differentiable functions u : Ω → R with weak derivatives in L2. Assume that Ω satisfies the k-extension property, i.e., that there exists a bounded linear operator E : Hk(Ω) → Hk(Rn) such that (Eu)|Ω = u for all u in Hk(Ω).
-
-Let L be a linear partial differential operator of even order 2k, written in divergence form
-$$
-(L u)(x) = \sum_{0 \leq | \alpha |, | \beta | \leq k} (-1)^{| \alpha |} \mathrm{D}^{\alpha} \left( A_{\alpha \beta} (x) \mathrm{D}^{\beta} u(x) \right),
-$$
-
-and suppose that L is uniformly elliptic, i.e., there exists a constant θ > 0 such that
-$$
-\sum_{| \alpha |, | \beta | = k} \xi^{\alpha} A_{\alpha \beta} (x) \xi^{\beta} > \theta | \xi |^{2 k} \mbox{ for all } x \in \Omega, \xi \in \mathbb{R}^{n} \setminus \{ 0 \}.
-$$
-
-Finally, suppose that the coefficients Aαβ are bounded, continuous functions on the closure of Ω for |α| = |β| = k and that
-$$
-A_{\alpha \beta} \in L^{\infty} (\Omega) \mbox{ for all } | \alpha |, | \beta | \leq k.
-$$
-
-Then Gårding's inequality holds: there exist constants C > 0 and G ≥ 0 such that
-$$
-B[u, u] + G \| u \|_{L^{2} (\Omega)}^{2} \geq C \| u \|_{H^{k} (\Omega)}^{2} \mbox{ for all } u \in H_{0}^{k} (\Omega),
-$$
-
-where
-$$
-B[v, u] = \sum_{0 \leq | \alpha |, | \beta | \leq k} \int_{\Omega} A_{\alpha \beta} (x) \mathrm{D}^{\alpha} u(x) \mathrm{D}^{\beta} v(x) \mathrm{d} x
-$$
-
-is the bilinear form associated to the operator L.
-
-As a simple example, consider the Laplace operator Δ.
More specifically, suppose that one wishes to solve, for f ∈ L2(Ω), the Poisson equation
-$$
-\begin{cases} - \Delta u(x) = f(x), & x \in \Omega; \\ u(x) = 0, & x \in \partial \Omega; \end{cases}
-$$
-
-where Ω is a bounded Lipschitz domain in Rn. The corresponding weak form of the problem is to find u in the Sobolev space H01(Ω) such that
-$$
-B[u, v] = \langle f, v \rangle \mbox{ for all } v \in H_{0}^{1} (\Omega),
-$$
-
-where
-$$
-B[u, v] = \int_{\Omega} \nabla u(x) \cdot \nabla v(x) \mathrm{d} x,
-$$
-$$
-\langle f, v \rangle = \int_{\Omega} f(x) v(x) \mathrm{d} x.
-$$
-
-The Lax–Milgram lemma ensures that if the bilinear form B is both continuous and elliptic with respect to the norm on H01(Ω), then, for each f ∈ L2(Ω), a unique solution u must exist in H01(Ω). The hypotheses of Gårding's inequality are easy to verify for the Laplace operator Δ, so there exist constants C > 0 and G ≥ 0 such that
-$$
-B[u, u] \geq C \| u \|_{H^{1} (\Omega)}^{2} - G \| u \|_{L^{2} (\Omega)}^{2} \mbox{ for all } u \in H_{0}^{1} (\Omega).
-$$
-
-Applying the Poincaré inequality allows the two terms on the right-hand side to be combined, yielding a new constant K > 0 with
-$$
-B[u, u] \geq K \| u \|_{H^{1} (\Omega)}^{2} \mbox{ for all } u \in H_{0}^{1} (\Omega),
-$$
-
-which is precisely the statement that B is elliptic. The continuity of B is even easier to see: simply apply the Cauchy–Schwarz inequality and the fact that the Sobolev norm is controlled by the L2 norm of the gradient.
diff --git a/wiki/wikipedia/2769.txt b/wiki/wikipedia/2769.txt
deleted file mode 100644
index e4dfe44981c9b72144c387bdb1a1b2effc6a4613..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2769.txt
+++ /dev/null
@@ -1,44 +0,0 @@
-In number theory, Firoozbakht's conjecture (or the Firoozbakht conjecture) is a conjecture about the distribution of prime numbers. It is named after the Iranian mathematician Farideh Firoozbakht from the University of Isfahan, who first stated it in 1982.
-
-The conjecture states that $p_{n}^{1/n}$ (where $p_n$ is the nth prime) is a strictly decreasing function of n, i.e.,
-$$
-\sqrt[n+1]{p_{n+1}} < \sqrt[n]{p_n} \qquad \text{ for all } n \ge 1.
-$$
-
-Equivalently:
-$$
-p_{n+1} < p_n^{1+\frac{1}{n}} \qquad \text{ for all } n \ge 1.
-$$
-
-By using a table of maximal gaps, Farideh Firoozbakht verified her conjecture up to 4.444.
-
-If the conjecture were true, then the prime gap function $g_n = p_{n+1} - p_n $ would satisfy:
-$$
- g_n < (\log p_n)^2 - \log p_n \qquad \text{ for all } n > 4.
-$$
-
-Moreover:
-$$
- g_n < (\log p_n)^2 - \log p_n - 1 \qquad \text{ for all } n > 9.
-$$
-
-This is among the strongest upper bounds conjectured for prime gaps, even somewhat stronger than the Cramér and Shanks conjectures. It is inconsistent with the heuristics of Granville and of Maier, which suggest that
-$$
- g_n > \frac{2-\varepsilon}{e^\gamma}(\log p_n)^2 \approx 1.1229(\log p_n)^2
-$$
-
-occurs infinitely often for any $\varepsilon>0,$ where $\gamma$ denotes the Euler–Mascheroni constant.
-
-Two related conjectures are
-$$
-\left(\frac{\log(p_{n+1})}{\log(p_n)}\right)^n < e,
-$$
-
-which is weaker, and
-$$
-\left(\frac{p_{n+1}}{p_n}\right)^n < n\log(n)\qquad \text{ for all } n > 5,
-$$
-
-which is stronger.
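The defining inequality is easy to test numerically for small indices. A quick check in Python (sympy is assumed for prime generation; this range is tiny compared with published verifications, and of course proves nothing about the conjecture):

```python
# Check p_{n+1} < p_n^(1 + 1/n), the equivalent form of the conjecture.
from sympy import prime

print(all(prime(n + 1) < prime(n) ** (1 + 1 / n) for n in range(1, 1000)))
# True
```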
diff --git a/wiki/wikipedia/277.txt b/wiki/wikipedia/277.txt deleted file mode 100644 index d86cacdd86bc24297444fc9f3d8b5fa97eb0d858..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/277.txt +++ /dev/null @@ -1,11 +0,0 @@ -Codd's theorem states that relational algebra and the domain-independent relational calculus queries, two well-known foundational query languages for the relational model, are precisely equivalent in expressive power. That is, a database query can be formulated in one language if and only if it can be expressed in the other. - -The theorem is named after Edgar F. Codd, the father of the relational model for database management. - -The domain independent relational calculus queries are precisely those relational calculus queries that are invariant under choosing domains of values beyond those appearing in the database itself. That is, queries that may return different results for different domains are excluded. An example of such a forbidden query is the query "select all tuples other than those occurring in relation R", where R is a relation in the database. Assuming different domains, i.e., sets of atomic data items from which tuples can be constructed, this query returns different results and thus is clearly not domain independent. - -Codd's Theorem is notable since it establishes the equivalence of two syntactically quite dissimilar languages: relational algebra is a variable-free language, while relational calculus is a logical language with variables and quantification. - -Relational calculus is essentially equivalent to first-order logic, and indeed, Codd's Theorem had been known to logicians since the late 1940s. - -Query languages that are equivalent in expressive power to relational algebra were called relationally complete by Codd. By Codd's Theorem, this includes relational calculus. Relational completeness clearly does not imply that any interesting database query can be expressed in relationally complete languages. Well-known examples of inexpressible queries include simple aggregations (counting tuples, or summing up values occurring in tuples, which are operations expressible in SQL but not in relational algebra) and computing the transitive closure of a graph given by its binary edge relation (see also expressive power). Codd's theorem also doesn't consider SQL nulls and the three-valued logic they entail; the logical treatment of nulls remains mired in controversy. Additionally, SQL has multiset semantics and allows duplicate rows. Nevertheless, relational completeness constitutes an important yardstick by which the expressive power of query languages can be compared. diff --git a/wiki/wikipedia/2770.txt b/wiki/wikipedia/2770.txt deleted file mode 100644 index 0bf4fbae376a6fe13f4cbca0ca66d9c143ff3d6c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2770.txt +++ /dev/null @@ -1,127 +0,0 @@ -In mathematics, the Farrell–Jones conjecture, named after F. Thomas Farrell and Lowell E. Jones, states that certain assembly maps are isomorphisms. These maps are given as certain homomorphisms. - -The motivation is the interest in the target of the assembly maps; this may be, for instance, the algebraic K-theory of a group ring -$$ -K_n(RG) -$$ - -or the L-theory of a group ring -$$ -L_n(RG) -$$, - -where G is some group. - -The sources of the assembly maps are equivariant homology theory evaluated on the classifying space of G with respect to the family of virtually cyclic subgroups of G. 
So assuming the Farrell–Jones conjecture is true, it is possible to restrict computations to virtually cyclic subgroups to get information on complicated objects such as $K_n(RG)$ or $L_n(RG)$. - -The Baum–Connes conjecture formulates a similar statement, for the topological K-theory of reduced group $C^*$-algebras $K^ {top}_n(C^r_*(G))$. - -One can find for any ring $ R$ equivariant homology theories $KR^?_*,LR^?_*$ satisfying -$$ -KR_n^G(\{\cdot\})\cong K_n(R[G]) -$$ respectively $LR_n^G(\{\cdot\})\cong L_n(R[G]).$ - -Here $R[G]$ denotes the group ring. - -The K-theoretic Farrell–Jones conjecture for a group G states that the map $p:E_{VCYC}(G)\rightarrow \{\cdot\}$ induces an isomorphism on homology -$$ -KR_*^G(p):KR_*^G(E_{VCYC}(G))\rightarrow KR_*^G(\{\cdot\})\cong K_*(R[G]). -$$ - -Here $E_{VCYC}(G)$ denotes the classifying space of the group G with respect to the family of virtually cyclic subgroups, i.e. a G-CW-complex whose isotropy groups are virtually cyclic and for any virtually cyclic subgroup of G the fixed point set is contractible. - -The L-theoretic Farrell–Jones conjecture is analogous. - -The computation of the algebraic K-groups and the L-groups of a group ring $R[G]$ is motivated by obstructions living in those groups (see for example Wall's finiteness obstruction, surgery obstruction, Whitehead torsion). So suppose a group $G$ satisfies the Farrell–Jones conjecture for algebraic K-theory. Suppose furthermore we have already found a model $X$ for the classifying space for virtually cyclic subgroups: -$$ - \emptyset=X^{-1}\subset X^0\subset X^1\subset \ldots \subset X -$$ - -Choose $G$-pushouts and apply the Mayer-Vietoris sequence to them: -$$ - KR_n^G(\coprod_{j\in I_i} G/H_j\times S^{i-1})\rightarrow KR_n^G(\coprod_{j\in I_i} G/H_j\times D^i)\oplus KR_n^G(X^{i-1})\rightarrow KR_n^G(X^i) -$$$\rightarrow KR_{n-1}^G(\coprod_{j\in I_i} G/H_j\times S^{i-1})\rightarrow KR_{n-1}^G(\coprod_{j\in I_i} G/H_j\times D^i)\oplus KR_{n-1}^G(X^{i-1}) $ - -This sequence simplifies to: -$$ - \bigoplus_{j\in I_i}K_n(R[H_j])\oplus \bigoplus_{j\in I_i} K_{n-1}(RH_j)\rightarrow \bigoplus_{j\in I_i} K_n(RH_j)\oplus KR_n^G(X^{i-1})\rightarrow KR_n^G(X^i) -$$$\rightarrow \bigoplus_{j\in I_i}K_{n-1}(RH_j)\oplus\bigoplus_{j\in I_i}K_{n-2}(RH_j)\rightarrow \bigoplus_{j\in I_i} K_{n-1}(RH_j)\oplus KR^G_{n-1}(X^{i-1}) $ - -This means that if any group satisfies a certain isomorphism conjecture one can compute its algebraic K-theory (L-theory) only by knowing the algebraic K-Theory (L-Theory) of virtually cyclic groups and by knowing a suitable model for $ E_{VCYC}(G)$. - -One might also try to take for example the family of finite subgroups into account. This family is much easier to handle. Consider the infinite cyclic group $ \Z $. A model for $E_{FIN}(\Z)$ is given by the real line $\R$, on which $\Z$ acts freely by translations. Using the properties of equivariant K-theory we get -$$ -K_n^\Z(\R)=K_n(S^1)=K_n(pt)\oplus K_{n-1}(pt)=K_n(R)\oplus K_{n-1}(R). -$$ - -The Bass-Heller-Swan decomposition gives -$$ -K_n^\Z(pt)=K_n(R[\Z])\cong K_n(R)\oplus K_{n-1}(R)\oplus NK_n(R)\oplus NK_n(R). -$$ - -Indeed one checks that the assembly map is given by the canonical inclusion. -$$ -K_n(R)\oplus K_{n-1}(R)\hookrightarrow K_n(R)\oplus K_{n-1}(R)\oplus NK_n(R)\oplus NK_n(R) -$$ - -So it is an isomorphism if and only if $NK_n(R) =0$, which is the case if $R$ is a regular ring. So in this case one can really use the family of finite subgroups. 
On the other hand, this shows that the isomorphism conjecture for algebraic K-theory and the family of finite subgroups is not true. One has to extend the conjecture to a larger family of subgroups which contains all the counterexamples. Currently no counterexamples for the Farrell–Jones conjecture are known. If there is a counterexample, one has to enlarge the family of subgroups to a larger family which contains that counterexample.
-
-The class of groups which satisfies the fibered Farrell–Jones conjecture contains the following groups:
-
-* virtually cyclic groups (definition)
-
-* hyperbolic groups
-
-* CAT(0)-groups
-
-* solvable groups
-
-* mapping class groups
-
-Furthermore the class has the following inheritance properties:
-
-* Closed under finite products of groups.
-
-* Closed under taking subgroups.
-
-Fix an equivariant homology theory $H^?_*$. One could say that a group G satisfies the isomorphism conjecture for a family of subgroups $F$ if and only if the map induced by the projection $ E_F(G)\rightarrow \{\cdot\} $ induces an isomorphism on homology:
-$$
- H_*^G(E_F(G))\rightarrow H_*^G(\{\cdot\})
-$$
-
-The group G satisfies the fibered isomorphism conjecture for the family of subgroups F if and only if for any group homomorphism $ \alpha :H\rightarrow G$ the group H satisfies the isomorphism conjecture for the family
-$$
-\alpha^*F:=\{H'\le H|\alpha(H')\in F\}.
-$$
-
-One gets immediately that in this situation $H$ also satisfies the fibered isomorphism conjecture for the family $\alpha^*F$.
-
-The transitivity principle is a tool to change the family of subgroups to consider. Given two families $F\subset F'$ of subgroups of $G$, suppose that every group $H\in F'$ satisfies the (fibered) isomorphism conjecture with respect to the family $ F|_H:=\{H'\in F|H'\subset H\}$.
-
-Then the group $G$ satisfies the fibered isomorphism conjecture with respect to the family $F$ if and only if it satisfies the (fibered) isomorphism conjecture with respect to the family $F'$.
-
-Given any group homomorphism $\alpha\colon H\rightarrow G$, suppose that G satisfies the fibered isomorphism conjecture for a family F of subgroups. Then H also satisfies the fibered isomorphism conjecture for the family $\alpha^*F$. For example, if $\alpha$ has finite kernel, the family $\alpha^*VCYC$ agrees with the family of virtually cyclic subgroups of H.
-
-For suitable $\alpha$ one can use the transitivity principle to reduce the family again.
-
-There are also connections from the Farrell–Jones conjecture to the Novikov conjecture. It is known that if one of the following maps
-$$
-H^G_*(E_{VCYC}(G),L^{\langle-\infty\rangle}_R)\rightarrow H^G_*(\{\cdot\},L^{\langle-\infty\rangle}_R)= L^{\langle-\infty\rangle}_*(RG)
-$$
-$$
- H^G_*(E_{FIN}(G),K^{top}) \rightarrow H^G_*(\{\cdot\},K^{top}) = K_n(C^*_r(G))
-$$
-
-is rationally injective, then the Novikov conjecture holds for $G$.
-
-The Bost conjecture (named for Jean-Benoît Bost) states that the assembly map
-$$
- H^G_*(E_{FIN}(G),K^{top}_{l^1})\rightarrow H^G_*(\{\cdot\},K^{top}_{l^1})=K_*(l^1(G))
-$$
-
-is an isomorphism. The ring homomorphism $ l^1(G)\rightarrow C_r(G)$ induces maps in K-theory $K_*(l^1(G))\rightarrow K_*(C_r(G))$. Composing the upper assembly map with this homomorphism one gets exactly the assembly map occurring in the Baum–Connes conjecture.
-$$ - H^G_*(E_{FIN}(G),K^{top}_{l^1})=H^G_*(E_{FIN}(G),K^{top})\rightarrow H^G_*(\{\cdot\},K^{top})=K_*(C_r(G)) -$$ - -The Kaplansky conjecture predicts that for an integral domain $R$ and a torsionfree group $G$ the only idempotents in $R[G]$ are $0,1$. Each such idempotent $p$ gives a projective $R[G]$ module by taking the image of the right multiplication with $p$. Hence there seems to be a connection between the Kaplansky conjecture and the vanishing of $K_0(R[G])$. There are theorems relating the Kaplansky conjecture to the Farrell Williams–Jones conjecture (compare ). diff --git a/wiki/wikipedia/2771.txt b/wiki/wikipedia/2771.txt deleted file mode 100644 index ad3429e4d42df39686b86b46acc2d7fef1a931c6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2771.txt +++ /dev/null @@ -1,86 +0,0 @@ -In mathematics, Apéry's theorem is a result in number theory that states the Apéry's constant ζ(3) is irrational. That is, the number -$$ -\zeta(3) = \sum_{n=1}^\infty \frac{1}{n^3} = \frac{1}{1^3} + \frac{1}{2^3} + \frac{1}{3^3} + \ldots = 1.2020569\ldots -$$ - -cannot be written as a fraction p/q where p and q are integers. The theorem is named after Roger Apéry. - -The special values of the Riemann zeta function at even integers 2n (n > 0) can be shown in terms of Bernoulli numbers to be irrational, while it remains open whether the function's values are in general rational or not at the odd integers 2n + 1 (n > 1) (though they are conjectured to be irrational). - -Euler proved that if n is a positive integer then -$$ -\frac{1}{1^{2n}} + \frac{1}{2^{2n}} + \frac{1}{3^{2n}} + \frac{1}{4^{2n}} + \ldots = \frac{p}{q}\pi^{2n} -$$ - -for some rational number p/q. Specifically, writing the infinite series on the left as ζ(2n) he showed -$$ -\zeta(2n) = (-1)^{n+1}\frac{B_{2n}(2\pi)^{2n}}{2(2n)!} -$$ - -where the Bn are the rational Bernoulli numbers. Once it was proved that πn is always irrational this showed that ζ(2n) is irrational for all positive integers n. - -No such representation in terms of π is known for the so-called zeta constants for odd arguments, the values ζ(2n + 1) for positive integers n. It has been conjectured that the ratios of these quantities -$$ -\frac{\zeta(2n+1)}{\pi^{2n+1}}, -$$ - -are transcendental for every integer n ≥ 1. - -Because of this, no proof could be found to show that the zeta constants with odd arguments were irrational, even though they were (and still are) all believed to be transcendental. However, in June 1978, Roger Apéry gave a talk titled "Sur l'irrationalité de ζ(3)." During the course of the talk he outlined proofs that ζ(3) and ζ(2) were irrational, the latter using methods simplified from those used to tackle the former rather than relying on the expression in terms of π. Due to the wholly unexpected nature of the proof and Apéry's blasé and very sketchy approach to the subject, many of the mathematicians in the audience dismissed the proof as flawed. However Henri Cohen, Hendrik Lenstra, and Alfred van der Poorten suspected Apéry was on to something and set out to confirm his proof. Two months later they finished verification of Apéry's proof, and on August 18 Cohen delivered a lecture giving full details of the proof. After the lecture Apéry himself took to the podium to explain the source of some of his ideas. 
- -Apéry's original proof was based on the well known irrationality criterion from Peter Gustav Lejeune Dirichlet, which states that a number ξ is irrational if there are infinitely many coprime integers p and q such that -$$ -\left|\xi-\frac{p}{q}\right|<\frac{c}{q^{1+\delta}} -$$ - -for some fixed c, δ > 0. - -The starting point for Apéry was the series representation of ζ(3) as -$$ -\zeta(3) = \frac{5}{2} \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n^3\binom{2n}{n}}. -$$ - -Roughly speaking, Apéry then defined a sequence cn,k which converges to ζ(3) about as fast as the above series, specifically -$$ -c_{n,k} = \sum_{m=1}^{n}\frac{1}{m^{3}} + \sum_{m=1}^{k}\frac{(-1)^{m-1}}{2m^{3}\binom{n}{m}\binom{n+m}{m}}. -$$ - -He then defined two more sequences an and bn that, roughly, have the quotient cn,k. These sequences were -$$ -a_{n} = \sum_{k=0}^{n}c_{n,k}\binom{n}{k}^{2}\binom{n+k}{k}^{2} -$$ - -and -$$ -b_{n}=\sum_{k=0}^{n}\binom{n}{k}^{2}\binom{n+k}{k}^{2}. -$$ - -The sequence an/bn converges to ζ(3) fast enough to apply the criterion, but unfortunately an is not an integer after n = 2. Nevertheless, Apéry showed that even after multiplying an and bn by a suitable integer to cure this problem the convergence was still fast enough to guarantee irrationality. - -Within a year of Apéry's result an alternative proof was found by Frits Beukers, who replaced Apéry's series with integrals involving the shifted Legendre polynomials $\tilde{P_{n}}(x)$. Using a representation that would later be generalized to Hadjicostas's formula, Beukers showed that -$$ -\int_{0}^{1}\int_{0}^{1}\frac{-\log(xy)}{1-xy}\tilde{P_{n}}(x)\tilde{P_{n}}(y)dxdy=\frac{A_{n}+B_{n}\zeta(3)}{\operatorname{lcm}\left[1,\ldots,n\right]^{3}} -$$ - -for some integers An and Bn (sequences and ). Using partial integration and the assumption that ζ(3) was rational and equal to a/b, Beukers eventually derived the inequality -$$ -0<\frac{1}{b}\leq\left|A_{n}+B_{n}\zeta(3)\right|\leq 4\left(\frac{4}{5}\right)^{n}, -$$ - -which is a contradiction since the right-most expression tends to zero and so must eventually fall below 1/b. - -A more recent proof by Wadim Zudilin is more reminiscent of Apéry's original proof, and also has similarities to a fourth proof by Yuri Nesterenko. These later proofs again derive a contradiction from the assumption that ζ(3) is rational by constructing sequences that tend to zero but are bounded below by some positive constant. They are somewhat less transparent than the earlier proofs, since they rely upon hypergeometric series. - -Apéry and Beukers could simplify their proofs to work on ζ(2) as well thanks to the series representation -$$ -\zeta(2)=3\sum_{n=1}^{\infty}\frac{1}{n^{2}\binom{2n}{n}}. -$$ - -Due to the success of Apéry's method a search was undertaken for a number ξ5 with the property that -$$ -\zeta(5)=\xi_{5}\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n^{5}\binom{2n}{n}}. -$$ - -If such a ξ5 were found then the methods used to prove Apéry's theorem would be expected to work on a proof that ζ(5) is irrational. Unfortunately, extensive computer searching has failed to find such a constant, and in fact it is now known that if ξ5 exists and if it is an algebraic number of degree at most 25, then the coefficients in its minimal polynomial must be enormous, at least 10383, so extending Apéry's proof to work on the higher odd zeta constants does not seem likely to work. - -Despite this, many mathematicians working in this area expect a breakthrough sometime soon. 
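The accelerated series above converge geometrically (their terms shrink roughly like $4^{-n}$ because of the central binomial coefficient), which is easy to confirm numerically. A small Python check with the standard library only; the term counts are arbitrary:

```python
from math import comb

# Apery's fast series for zeta(3), 20 terms, versus a direct partial sum.
apery = 2.5 * sum((-1) ** (n - 1) / (n ** 3 * comb(2 * n, n))
                  for n in range(1, 21))
direct = sum(1 / n ** 3 for n in range(1, 200001))
print(apery, direct)  # both print 1.2020569...
```

Numerical agreement of this kind, of course, has no bearing on irrationality, and the obstruction to extending Apéry's proof described above still stands.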
Indeed, recent work by Wadim Zudilin and Tanguy Rivoal has shown that infinitely many of the numbers ζ(2n + 1) must be irrational, and even that at least one of the numbers ζ(5), ζ(7), ζ(9), and ζ(11) must be irrational. Their work uses linear forms in values of the zeta function and estimates upon them to bound the dimension of a vector space spanned by values of the zeta function at odd integers. Hopes that Zudilin could cut his list further to just one number did not materialise, but work on this problem is still an active area of research. Higher zeta constants have application to physics: they describe correlation functions in quantum spin chains. diff --git a/wiki/wikipedia/2772.txt b/wiki/wikipedia/2772.txt deleted file mode 100644 index e50b72107b385e0ac35441b00930c11111207c4d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2772.txt +++ /dev/null @@ -1,10 +0,0 @@ -In mathematics, the Frobenius determinant theorem was a conjecture made in 1896 by the mathematician Richard Dedekind, who wrote a letter to F. G. Frobenius about it (reproduced in , with an English translation in ). - -If one takes the multiplication table of a finite group G and replaces each entry g with the variable xg, and subsequently takes the determinant, then the determinant factors as a product of n irreducible polynomials, where n is the number of conjugacy classes. Moreover, each polynomial is raised to a power equal to its degree. Frobenius proved this surprising conjecture, and it became known as the Frobenius determinant theorem. - -Let a finite group $G$ have elements $g_1, g_2,\dots,g_n$, and let $x_{g_i}$ be associated with each element of $G$. Define the matrix $X_G$ with entries $a_{ij}=x_{g_i g_j}$. Then -$$ - \det X_G = \prod_{j=1}^r P_j(x_{g_1},x_{g_2},\dots,x_{g_n})^{\deg P_j} -$$ - -where r is the number of conjugacy classes of G. diff --git a/wiki/wikipedia/2773.txt b/wiki/wikipedia/2773.txt deleted file mode 100644 index d75faa13ba3076d4c16e183edb5314e35d4b0802..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2773.txt +++ /dev/null @@ -1,25 +0,0 @@ -Microsoft Sync Framework is a data synchronization platform from Microsoft that can be used to synchronize data across multiple data stores. Sync Framework includes a transport-agnostic architecture, into which data store-specific synchronization providers, modelled on the ADO.NET data provider API, can be plugged in. Sync Framework can be used for offline access to data, by working against a cached set of data and submitting the changes to a master database in a batch, as well as to synchronize changes to a data source across all consumers (publish/subscribe sync) and peer-to-peer synchronization of multiple data sources. Sync Framework features built-in capabilities for conflict detection – whether data to be changed has already been updated – and can flag them for manual inspection or use defined policies to try to resolve the conflict. Sync Services includes an embedded SQL Server Compact database to store metadata about the synchronization relationships as well as about each sync attempt. The Sync Framework API is surfaced both in managed code, for use with .NET Framework applications, as well as unmanaged code, for use with COM applications. It was scheduled to ship with Visual Studio 2008 in late November 2007. - -The Sync Framework runtime provides synchronization functionality, without being tied to any data store or data transport protocols. 
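The conflict-handling capabilities mentioned in the overview are easy to picture with a toy resolver. The sketch below is purely illustrative and is not part of the actual Sync Framework API: the change representation is invented, and the two policies it encodes (the newer edit wins, and an update takes precedence over a delete) are the file-provider defaults described later in this article, not a general rule.

```python
def resolve(a, b):
    """Toy conflict resolver for two conflicting changes to one item.
    A change is a dict like {'op': 'update' or 'delete', 'mtime': float}."""
    if {a["op"], b["op"]} == {"update", "delete"}:
        return a if a["op"] == "update" else b    # update takes precedence
    return max(a, b, key=lambda c: c["mtime"])    # otherwise newer edit wins

print(resolve({"op": "update", "mtime": 100.0},
              {"op": "delete", "mtime": 200.0}))  # the update wins despite age
```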
By providing data source specific synchronization providers, any data source can be supported. For example, using proper synchronization providers, files can be synchronized across computers, project updates synchronized across project participants, or media synchronized across devices. Sync Framework ships with three providers: Microsoft Sync Services for ADO.NET, Sync Services for File Systems, and Sync Services for SSE. Sync Services can be used to synchronize devices by supplying providers for the device. Similarly, PIM software such as Microsoft Office Outlook and media libraries such as Windows Media Player can also be supported by providing suitable providers. - -The providers are used to enumerate the items in a data store, each identified by an Item ID. In addition, they also have to maintain synchronization metadata and the state of the data store, so that changes can be enumerated quickly. The metadata is maintained for every instance of the data store (replica) that the provider is attached to. The metadata maintained includes the replica ID, tick count (representing progression in time), conflict log, tombstone log, and the set of the changes the data store has seen (knowledge). A replica ID and tick count pair makes up a version and encodes the state of the data store until that time. Sync Framework defines a set of operation for the Knowledge object for a replica: Contains which determines if the store contains a specified change, Union to merge two knowledge sets, Project to project out the knowledge for a subset of the items, and Exclude to create a new knowledge set without the changes for a subset of the items. The metadata is managed by the metadata storage service which uses an in-process SQL Server Compact database to store the metadata on a per-provider basis. - -The Sync Services API operates by creating a synchronization session, represented by a Session object. A synchronization session synchronizes data across two synchronization providers - one for the source data store and the other for the destination. Instances of both the providers are passed to the Session object. During a synchronization session, the destination provider sends the knowledge set of the store. The source provider compares the knowledge of the destination with the change set in the source to enumerate the changes and then transfer it to the destination. The destination provider makes sure the changes are not conflicting and merges the changes and updates the knowledge. - -#Snapshot sync (download-only sync): The data in the data source (or a subset of it) is synchronized with clients. - -#Upload-only sync: Data in the client is merged to the source replica. - -#Bidirectional sync: Both the data sources can be modified independently and changes are synchronized with each other. An n-level sync is achieved by performing multiple bidirectional synchronizations. - -Microsoft Sync Services for ADO.NET is the synchronization provider for synchronizing across databases using ADO.NET. ADO.NET Datasets are synchronized between the source and the destination, which are then persisted to a database server. It can also support data sources other than a relational database, like an XML database or web service as long as a proxy is provided to abstract the data source and a data provider is available for the proxy. - -The Sync Services for ADO.NET provider is intended for use in offline applications, where data from the central database is cached locally. 
The application works against the cached data, and the changes are uploaded in a batch. In addition, the provider can also be used for collaborative applications, where each application works against its local dataset, which is synchronized periodically in a peer-to-peer manner with the other participants. Locally, the datasets can be stored either by using the SQL Server Compact database or any other database server supporting ADO.NET. Sync Services for ADO.NET supports incremental change tracking, which allows only the changes to be replicated rather than the entire copy.
-
-The Sync Services for File Systems provider is used to synchronize two file system locations, which can be either local folders or network shares. In addition to mirroring new files, the provider also synchronizes changes to existing files. Changes to files are detected by using timestamps or, optionally, by hashing the file contents. Conflicting changes to the same file are detected and can be set to be automatically resolved. For conflicting updates to the same file, the newer edit is kept. If a file is deleted in one replica but updated in another, the update takes precedence over the delete. If two files with different content are created with the same name across two replicas, the one created later is persisted during the sync operation. If a rename operation causes two files to have the same name, both are retained by renaming one of them. Deletes can be configured to move the file to the Recycle Bin, so that it can be recovered if necessary. The Sync Services for File Systems provider also provides a preview mode, which enumerates the actions that a sync operation will take without actually performing them, with a view to letting users review the changes that will be made. The synchronization is performed in a peer-to-peer manner. Neither Sync Framework nor the Sync Services for File Systems provider performs any authentication before accessing the files, so authentication is the job of the application using the Sync Framework API. The files are transferred without encryption. To encrypt data in transit, a custom provider that uses an encrypted TCP connection must be used. The Sync Services for File Systems provider also supports static filters to exclude files based on wildcards or attributes. In the first CTP release, however, the Sync Services for File Systems provider does not sync either NTFS security descriptors or Alternate Data Streams.
-
-The Sync Services for FeedSync provider can be used to help synchronize replicas by creating a FeedSync-enabled feed, in either RSS or Atom format, which can then be subscribed to by interested parties. The provider can also be used to extract items from a FeedSync feed and merge the changes back to the data store. Sync Services for FeedSync uses another provider to connect to the data store.
-
-Sync Services for FeedSync provides services that can be used to help synchronize the data of a replica with RSS and Atom feeds. (A replica is a particular repository of information to be synchronized.) By using the FeedSync producer service, a synchronization application can work with a synchronization provider to create a list of items from a replica and put them in an RSS or Atom XML stream. These items can then be published to interested subscribers.
Similarly, the FeedSync consumer service helps a synchronization application take an input RSS or Atom XML stream, extract items from it, and then use a synchronization provider to apply only the appropriate changes to a replica. Because Sync Framework underlies the exchange of feed items, two feeds can be cross-subscribed and easily synchronized with one another as peers in a synchronization community. (A synchronization community is a set of replicas that keep their data synchronized with each other.) - -Microsoft Sync Framework is free on Windows and Windows Mobile devices. Support for other platforms is available through commercial licensing and porting kits. diff --git a/wiki/wikipedia/2774.txt b/wiki/wikipedia/2774.txt deleted file mode 100644 index da3c52d76dd0203e4d707658201b0b2573dd8dd8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2774.txt +++ /dev/null @@ -1 +0,0 @@ -In finite group theory, Dade's conjecture is a conjecture relating the numbers of characters of blocks of a finite group to the numbers of characters of blocks of local subgroups, introduced by Everett C. Dade. diff --git a/wiki/wikipedia/2775.txt b/wiki/wikipedia/2775.txt deleted file mode 100644 index 904e5701b6a316a7823787666b25cc3e3952a733..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2775.txt +++ /dev/null @@ -1,101 +0,0 @@ -Mastermind or Master Mind is a code-breaking game for two players. The modern game with pegs was invented in 1970 by Mordecai Meirowitz, an Israeli postmaster and telecommunications expert. It resembles an earlier pencil-and-paper game called Bulls and Cows that may date back a century. - -The game is played using: - -* a decoding board, with a shield at one end covering a row of four large holes, and twelve (or ten, or eight, or six) additional rows containing four large holes next to a set of four small holes; - -* code pegs of six different colors (or more; see Variations below), with round heads, which will be placed in the large holes on the board; and - -* key pegs, some colored black, some white, which are flat-headed and smaller than the code pegs; they will be placed in the small holes on the board. - -The two players decide in advance how many games they will play, which must be an even number. One player becomes the codemaker, the other the codebreaker. The codemaker chooses a pattern of four code pegs. Duplicates and blanks are allowed depending on player choice, so the player could even choose four code pegs of the same color or four blanks. If the players have decided that blanks are not part of the game, the codebreaker may not use blanks in guesses either. The chosen pattern is placed in the four holes covered by the shield, visible to the codemaker but not to the codebreaker. - -The codebreaker tries to guess the pattern, in both order and color, within eight to twelve turns. Each guess is made by placing a row of code pegs on the decoding board. Once placed, the codemaker provides feedback by placing from zero to four key pegs in the small holes of the row with the guess. A colored or black key peg is placed for each code peg from the guess which is correct in both color and position. A white key peg indicates the existence of a correct color code peg placed in the wrong position. - -If there are duplicate colours in the guess, they cannot all be awarded a key peg unless they correspond to the same number of duplicate colours in the hidden code.
For example, if the hidden code is red-red-blue-blue and the player guesses red-red-red-blue, the codemaker will award two colored key pegs for the two correct reds, nothing for the third red as there is not a third red in the code, and a colored key peg for the blue. No indication is given of the fact that the code also includes a second blue. - -Once feedback is provided, another guess is made; guesses and feedback continue to alternate until either the codebreaker guesses correctly, or all rows of the decoding board are full. - -Traditionally, players can only earn points when playing as the codemaker. The codemaker gets one point for each guess the codebreaker makes. An extra point is earned by the codemaker if the codebreaker is unable to guess the exact pattern within the given number of turns. (An alternative is to score based on the number of key pegs placed.) The winner is the one who has the most points after the agreed-upon number of games are played. - -Other rules may be specified. - -The game is based on an older, paper-based game called Bulls and Cows. A computer adaptation of it was run in the 1960s on Cambridge University's Titan computer system, where it was called 'MOO'. This version was written by Frank King. There was also another version for the TSS/8 time-sharing system, written by J.S. Felton, and finally a version for the Multics system at MIT by Jerrold Grochow. - -The modern game with pegs was invented in 1970 by Mordecai Meirowitz, an Israeli postmaster and telecommunications expert. Meirowitz presented the idea to many major toy companies but, after showing it at the Nuremberg International Toy Fair, it was picked up by a plastics company, Invicta Plastics, based near Leicester, UK. Invicta purchased all the rights to the game and the founder, Edward Jones-Fenleigh, refined the game further. It was released in 1971-72. - -Since 1971, the rights to Mastermind have been held by Invicta Plastics. (Invicta always named the game Master Mind.) They originally manufactured it themselves, though they have since licensed its manufacture to Hasbro worldwide, with the exception of Pressman Toys and Orda Industries, who have the manufacturing rights for the United States and Israel, respectively. - -Starting in 1973, the game box featured a photograph of a man in a suit jacket seated in the foreground, with a young Asian woman standing behind him. The two amateur models (Bill Woodward and Cecilia Fung) reunited in June 2003 to pose for another publicity photo. - -Before asking for a best strategy of the codebreaker, one has to define the meaning of "best": the minimal number of moves can be analyzed under the conditions of worst case and average case, and in the sense of a minimax value of a zero-sum game in game theory. - -With four pegs and six colours, there are 6^4 = 1296 different patterns (allowing duplicate colours). - -In 1977, Donald Knuth demonstrated that the codebreaker can solve the pattern in five moves or fewer, using an algorithm that progressively reduces the number of possible patterns. - -The algorithm works as follows: - -# Create the set S of 1296 possible codes (1111, 1112 ... 6665, 6666) - -# Start with initial guess 1122 (Knuth gives examples showing that this algorithm using other first guesses such as 1123, 1234 does not win in five tries on every code) - -# Play the guess to get a response of coloured and white pegs. - -# If the response is four colored pegs, the game is won, the algorithm terminates.
- -# Otherwise, remove from S any code that would not give the same response if it (the guess) were the code. - -# Apply minimax technique to find a next guess as follows: For each possible guess, that is, any unused code of the 1296, not just those in S, calculate how many possibilities in S would be eliminated for each possible colored/white peg score. The score of a guess is the minimum number of possibilities it might eliminate from S. A single pass through S for each unused code of the 1296 will provide a hit count for each coloured/white peg score found; the coloured/white peg score with the highest hit count will eliminate the fewest possibilities; calculate the score of a guess by using "minimum eliminated" = "count of elements in S" − "highest hit count". From the set of guesses with the maximum score, select one as the next guess, choosing a member of S whenever possible. (Knuth follows the convention of choosing the guess with the least numeric value, e.g. 2345 is lower than 3456. Knuth also gives an example showing that in some cases no member of S will be among the highest-scoring guesses and thus the guess cannot win on the next turn, yet will be necessary to assure a win in five.) - -# Repeat from step 3. - -Subsequent mathematicians have found various algorithms that reduce the average number of turns needed to solve the pattern: in 1993, Kenji Koyama and Tony W. Lai performed an exhaustive depth-first search showing that the optimal method for solving a random code could achieve an average of 5625/1296 = 4.3403 turns to solve, with a worst-case scenario of six turns. - -The minimax value in the sense of game theory is 5600/1290 = 4.341. The minimax strategy of the codemaker consists in a uniformly distributed selection of one of the 1290 patterns with two or more colors. - -Another approach embeds a genetic algorithm, in which a large set of eligible codes is collected throughout the different generations. The quality of each of these codes is determined based on a comparison with a selection of elements of the eligible set. This algorithm is based on a heuristic that assigns a score to each eligible combination based on its probability of actually being the hidden combination. Since this combination is not known, the score is based on characteristics of the set of eligible solutions or the sample of them found by the evolutionary algorithm. - -The algorithm works as follows: - -# Set i = 1 - -# Play fixed initial guess G1 - -# Get the response X1 and Y1 - -# Repeat while Xi ≠ P: - -## Increment i - -## Set Ei = ∅ and h = 1 - -## Initialize population - -## Repeat while h ≤ maxgen and |Ei| ≤ maxsize: - -### Generate new population using crossover, mutation, inversion and permutation - -### Calculate fitness - -### Add eligible combinations to Ei - -### Increment h - -## Play guess Gi which belongs to Ei - -## Get response Xi and Yi - -In November 2004, Michiel de Bondt proved that solving a Mastermind board is an NP-complete problem when played with n pegs per row and two colors, by showing how to represent any one-in-three 3SAT problem in it. He also showed the same for Consistent Mastermind (playing the game so that every guess is a candidate for the secret code that is consistent with the hints in the previous guesses).
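Returning to Knuth's method, the filtering and minimax steps are short enough to state directly in code. The following Python sketch is our transcription of the ideas, not Knuth's original program; with only 1296 codes the search can simply be brute force.

```python
from itertools import product
from collections import Counter

def score(guess, code):
    """Return (black, white) key pegs for a guess against a code."""
    black = sum(g == c for g, c in zip(guess, code))
    # Whites: colors in common counted with multiplicity, minus exact hits.
    common = sum(min(guess.count(d), code.count(d)) for d in set(guess))
    return black, common - black

ALL_CODES = list(product(range(1, 7), repeat=4))

def next_guess(history):
    """Step 5: keep only codes consistent with all feedback so far.
    Step 6: pick the guess whose worst-case response leaves the fewest
    candidates, preferring members of S, then the numerically least."""
    S = [c for c in ALL_CODES
         if all(score(g, c) == resp for g, resp in history)]
    def worst_case(guess):
        return max(Counter(score(guess, c) for c in S).values())
    return min(ALL_CODES, key=lambda g: (worst_case(g), g not in S, g)), S

# After guessing 1122 and receiving one black, zero white pegs:
guess, S = next_guess([((1, 1, 2, 2), (1, 0))])
print(len(S), guess)   # candidates remaining, and the next guess to play
```

Minimizing the worst-case count here is the same as maximizing Knuth's "minimum eliminated", since the two differ only by the constant |S|.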
The Mastermind satisfiability problem is a decision problem that asks, "Given a set of guesses and the number of colored and white pegs scored for each guess, is there at least one secret pattern that generates those exact scores?" (If not, then the codemaker must have incorrectly scored at least one guess.) In December 2005, Jeff Stuckman and Guo-Qiang Zhang showed in an arXiv article that the Mastermind satisfiability problem is NP-complete. - -Varying the number of colors and the number of holes results in a spectrum of Mastermind games of different levels of difficulty. Another common variation is to support different numbers of players taking on the roles of codemaker and codebreaker. Mastermind games of this kind have been produced by Invicta, Parker Brothers, Pressman, Hasbro, and other game manufacturers. The difficulty level of any of these can be increased by treating "empty" as an additional color, or decreased by requiring only that the code's colors be guessed, independent of position. - -Computer and Internet versions of the game have also been made, sometimes with variations in the number and type of pieces involved and often under different names to avoid trademark infringement. Mastermind can also be played with paper and pencil. - -There is a numerical variant of Mastermind in which a 4-digit number is guessed. - -The game was compiled into Clubhouse Games: 51 Worldwide Classics for the Switch under the name "Hit & Blow". diff --git a/wiki/wikipedia/2776.txt b/wiki/wikipedia/2776.txt deleted file mode 100644 index 294e3be9bb57cb71309c9149c34c2dcf611f23f0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2776.txt +++ /dev/null @@ -1,196 +0,0 @@ -In mathematics, Hurwitz's theorem is a theorem of Adolf Hurwitz (1859–1919), published posthumously in 1923, solving the Hurwitz problem for finite-dimensional unital real non-associative algebras endowed with a positive-definite quadratic form. The theorem states that if the quadratic form defines a homomorphism into the positive real numbers on the non-zero part of the algebra, then the algebra must be isomorphic to the real numbers, the complex numbers, the quaternions, or the octonions. Such algebras, sometimes called Hurwitz algebras, are examples of composition algebras. - -The theory of composition algebras has subsequently been generalized to arbitrary quadratic forms and arbitrary fields. Hurwitz's theorem implies that multiplicative formulas for sums of squares can only occur in 1, 2, 4 and 8 dimensions, a result originally proved by Hurwitz in 1898. It is a special case of the Hurwitz problem, also solved by Radon in 1922. Subsequent proofs of the restrictions on the dimension have been given by Eckmann using the representation theory of finite groups and by Lee and Chevalley using Clifford algebras. Hurwitz's theorem has been applied in algebraic topology to problems on vector fields on spheres and the homotopy groups of the classical groups, and in quantum mechanics to the classification of simple Jordan algebras. - -A Hurwitz algebra or composition algebra is a finite-dimensional, not necessarily associative, algebra A with identity endowed with a nondegenerate quadratic form q such that q(a b) = q(a) q(b). If the underlying coefficient field is the reals and q is positive-definite, so that (a, b) = 1/2[q(a + b) − q(a) − q(b)] is an inner product, then A is called a Euclidean Hurwitz algebra or (finite-dimensional) normed division algebra. - -If A is a Euclidean Hurwitz algebra and a is in A, define the involution and right and left multiplication operators by -$$ -\displaystyle{a^*=-a +2(a,1)1, L(a)b = ab, R(a)b=ba.} -$$ - -Evidently the involution has period two and preserves the inner product and norm.
These operators have the following properties: - -* the involution is an antiautomorphism, i.e. (a b)* = b* a* - -* a a* = ‖a‖^2 1 = a* a - -* L(a*) = L(a)*, R(a*) = R(a)*, so that the involution on the algebra corresponds to taking adjoints - -* Re(a b) = Re(b a) if Re x = (x + x*)/2 = (x, 1)1 - -* Re(a b) c = Re a(b c) - -* L(a^2) = L(a)^2, R(a^2) = R(a)^2, so that A is an alternative algebra. - -These properties are proved starting from the polarized version of the identity (a b, a b) = (a, a)(b, b): -$$ -\displaystyle{2(a,b)(c,d)=(ac,bd) + (ad,bc).} -$$ - -Setting b = 1 or d = 1 yields L(a*) = L(a)* and R(c*) = R(c)*. - -Hence Re(a b) = (a b, 1)1 = (a, b*)1 = (b a, 1)1 = Re(b a). - -Similarly Re (a b)c = ((a b)c,1)1 = (a b, c*)1 = (b, a* c*)1 = (bc,a*)1 = (a(bc),1)1 = Re a(b c). - -Hence ((ab)*,c) = (ab,c*) = (b,a*c*) = (1,b*(a*c*)) = (1,(b*a*)c*) = (b*a*,c), so that (ab)* = b*a*. - -By the polarized identity ‖a‖^2 (c, d) = (a c, a d) = (a* (a c), d), so L(a*) L(a) = L(‖a‖^2). Applied to 1 this gives a* a = ‖a‖^2 1. Replacing a by a* gives the other identity. - -Substituting the formula for a* in L(a*) L(a) = L(a* a) gives L(a)^2 = L(a^2). The formula R(a^2) = R(a)^2 is proved analogously. - -It is routine to check that the real numbers R, the complex numbers C and the quaternions H are examples of associative Euclidean Hurwitz algebras with their standard norms and involutions. There are moreover natural inclusions R ⊂ C ⊂ H. - -Analysing such an inclusion leads to the Cayley–Dickson construction, formalized by A.A. Albert. Let A be a Euclidean Hurwitz algebra and B a proper unital subalgebra, so a Euclidean Hurwitz algebra in its own right. Pick a unit vector j in A orthogonal to B. Since (j, 1) = 0, it follows that j* = −j and hence j^2 = −1. Let C be the subalgebra generated by B and j. It is unital and is again a Euclidean Hurwitz algebra. It satisfies the following Cayley–Dickson multiplication laws: -$$ -\displaystyle{C=B\oplus Bj, (a+bj)^*=a^* - bj, (a+bj)(c+dj)=(ac -d^*b) +(bc^*+da)j.} -$$ - -B and Bj are orthogonal, since j is orthogonal to B. If a is in B, then j a = a* j, since by orthogonality 0 = 2(j, a*) = j a − a* j. The formula for the involution follows. To show that B ⊕ Bj is closed under multiplication, note that Bj = jB (since j a = a* j) and that, since Bj is orthogonal to 1, (b j)* = −b j. The required products are: - -* b(c j) = (c b)j since (b, j) = 0 so that, for x in A, (b(c j), x) = (b(j x), j(c j)) = −(b(j x), c*) = −(c b, (j x)*) = −((c b)j, x*) = ((c b)j, x). - -* (j c)b = j(b c) taking adjoints above. - -* (b j)(c j) = −c* b since (b, c j) = 0, so that, for x in A, ((b j)(c j), x) = −((c j)x*, b j) = (b x*, (c j)j) = −(c* b, x). - -Imposing the multiplicativity of the norm on C for a + b j and c + d j gives: -$$ -\displaystyle{(\|a\|^2+\|b\|^2)(\|c\|^2+\|d\|^2)=\|ac -d^*b\|^2 + \|bc^*+da\|^2,} -$$ - -which leads to -$$ -\displaystyle{(ac,d^*b)=(bc^*,da).} -$$ - -Hence d(a c) = (d a)c, so that B must be associative. - -This analysis applies to the inclusion of R in C and C in H. Taking O = H ⊕ H with the product and inner product above gives a noncommutative nonassociative algebra generated by J = (0, 1). This recovers the usual definition of the octonions or Cayley numbers. If A is a Euclidean Hurwitz algebra, it must contain R. If it is strictly larger than R, the argument above shows that it contains C. If it is larger than C, it contains H. If it is larger still, it must contain O. But there the process must stop, because O is not associative. In fact H is not commutative and a(b j) = (b a) j ≠ (a b)j in O.
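The doubling law above is easy to experiment with. Here is a minimal Python sketch (the helper names are ours, not from any library) that represents each algebra in the tower R ⊂ C ⊂ H ⊂ O as nested pairs and implements the product (a + bj)(c + dj) = (ac − d*b) + (bc* + da)j verbatim:

```python
# Numbers are nested pairs: complex = (real, real), quaternion = pair of
# complex numbers, octonion = pair of quaternions.

def conj(x):
    if isinstance(x, tuple):
        a, b = x
        return (conj(a), neg(b))        # (a + bj)* = a* - bj
    return x                             # real numbers are self-conjugate

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def add(x, y):
    if isinstance(x, tuple):
        return (add(x[0], y[0]), add(x[1], y[1]))
    return x + y

def mul(x, y):
    if isinstance(x, tuple):
        a, b = x
        c, d = y
        # (a + bj)(c + dj) = (ac - d*b) + (bc* + da)j
        return (add(mul(a, c), neg(mul(conj(d), b))),
                add(mul(b, conj(c)), mul(d, a)))
    return x * y

# In the quaternions, i * j = k:
i = ((0.0, 1.0), (0.0, 0.0))
j = ((0.0, 0.0), (1.0, 0.0))
print(mul(i, j))                         # ((0.0, 0.0), (0.0, 1.0)), i.e. k
```

The same `mul`, applied one level deeper to pairs of quaternions, can be used to check that the octonions obtained this way fail associativity, which is exactly why the doubling process stops.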
- -Theorem. The only Euclidean Hurwitz algebras are the real numbers, the complex numbers, the quaternions and the octonions. - -The proofs of Lee and Chevalley use Clifford algebras to show that the dimension N of A must be 1, 2, 4 or 8. In fact the operators L(a) with (a, 1) = 0 satisfy L(a)^2 = −‖a‖^2 I and so form a real Clifford algebra. If a is a unit vector, then L(a) is skew-adjoint with square −I. So N must be either even or 1 (in which case A contains no unit vectors orthogonal to 1). The real Clifford algebra and its complexification act on the complexification of A, an N-dimensional complex space. If N is even, N − 1 is odd, so the Clifford algebra has exactly two complex irreducible representations of dimension 2^{N/2 − 1}. So this power of 2 must divide N. It is easy to see that this implies N can only be 1, 2, 4 or 8. - -The proof of Eckmann uses the representation theory of finite groups, or the projective representation theory of elementary Abelian 2-groups, known to be equivalent to the representation theory of real Clifford algebras. Indeed, taking an orthonormal basis ei of the orthogonal complement of 1 gives rise to operators Ui = L(ei) - -satisfying -$$ -\displaystyle{U_i^2=-I, U_iU_j=-U_jU_i (i\ne j).} -$$ - -This is a projective representation of a direct product of N − 1 groups of order 2. (N is assumed to be greater than 1.) The operators Ui by construction are skew-symmetric and orthogonal. In fact Eckmann constructed operators of this type in a slightly different but equivalent way; it is in fact the method originally followed by Hurwitz in 1898. Assume that there is a composition law for two forms -$$ -\displaystyle{(x_1^2 + \cdots +x_N^2)(y_1^2 + \cdots + y_N^2) =z_1^2 + \cdots + z_N^2,} -$$ - -where zi is bilinear in x and y. Thus -$$ -\displaystyle{z_i=\sum_{j=1}^N a_{ij}(x)y_j} -$$ - -where the matrix T(x) = (aij) is linear in x. The relations above are equivalent to -$$ -\displaystyle{T(x)T(x)^t=(x_1^2 +\cdots + x_N^2)I.} -$$ - -Writing -$$ -\displaystyle{T(x)=T_1x_1 + \cdots + T_Nx_N,} -$$ - -the relations become -$$ -\displaystyle{T_iT^t_j+T_jT_i^t =2\delta_{ij}I.} -$$ - -Now set Vi = (TN)^t Ti. Thus VN = I and the V1, ... , VN − 1 are skew-adjoint and orthogonal, satisfying exactly the same relations as the Ui's: -$$ -\displaystyle{V_i^2=-I, V_iV_j=-V_jV_i (i\ne j).} -$$ - -Since Vi is an orthogonal matrix with square −I on a real vector space, N is even. - -Let G be the finite group generated by elements vi such that -$$ -\displaystyle{v_i^2=\varepsilon, v_iv_j=\varepsilon v_jv_i (i\ne j),} -$$ - -where ε is central of order 2. The commutator subgroup [G, G] is just formed of 1 and ε. If N is odd this coincides with the center, while if N is even the center has order 4 with extra elements γ = v1 ... vN − 1 and εγ. If g in G is not in the center, its conjugacy class is exactly {g, εg}. Thus there are 2^{N − 1} + 1 conjugacy classes for N odd and 2^{N − 1} + 2 for N even. G has 2^{N − 1} one-dimensional complex representations, and the total number of irreducible complex representations is the number of conjugacy classes. So, since N is even, there are two further irreducible complex representations. Since the sum of the squares of the dimensions of all the irreducible representations equals the order of G, namely 2^N, and since the dimensions must be powers of two, the two further irreducibles must both have dimension 2^{(N − 2)/2}. The space on which the Vi's act can be complexified. It will have complex dimension N.
It breaks up into a sum of complex irreducible representations of G, all having dimension 2^{(N − 2)/2}. In particular this dimension is ≤ N, so N is less than or equal to 8. If N = 6, the dimension is 4, which does not divide 6. So N can only be 1, 2, 4 or 8. - -Let A be a Euclidean Hurwitz algebra and let Mn(A) be the algebra of n-by-n matrices over A. It is a unital nonassociative algebra with an involution given by -$$ -\displaystyle{(x_{ij})^*=(x_{ji}^*).} -$$ - -The trace Tr(X) is defined as the sum of the diagonal elements of X, and the real-valued trace by TrR(X) = Re Tr(X). The real-valued trace satisfies: -$$ - \operatorname{Tr}_{\mathbf{R}} XY = \operatorname{Tr}_{\mathbf{R}} YX, \qquad \operatorname{Tr}_{\mathbf{R}} (XY)Z = \operatorname{Tr}_{\mathbf{R}} X(YZ). -$$ - -These are immediate consequences of the known identities for n = 1. - -In A define the associator by -$$ -\displaystyle{[a,b,c]=a(bc) - (ab)c.} -$$ - -It is trilinear and vanishes identically if A is associative. Since A is an alternative algebra, [a, a, b] = 0 and [b, a, a] = 0. Polarizing, it follows that the associator is antisymmetric in its three entries. Furthermore, if a, b or c lie in R then [a, b, c] = 0. These facts imply that M3(A) has certain commutation properties. In fact if X is a matrix in M3(A) with real entries on the diagonal then -$$ -\displaystyle{[X,X^2]=aI,} -$$ - -with a in A. In fact if Y = [X, X^2], then -$$ -\displaystyle{y_{ij}=\sum_{k,\ell} [x_{ik},x_{k\ell},x_{\ell j}].} -$$ - -Since the diagonal entries of X are real, the off-diagonal entries of Y vanish. Each diagonal entry of Y is a sum of two associators involving only off-diagonal terms of X. Since the associators are invariant under cyclic permutations, the diagonal entries of Y are all equal. - -Let Hn(A) be the space of self-adjoint elements in Mn(A) with product X∘Y = 1/2(X Y + Y X) and inner product (X, Y) = TrR(X Y). - -Theorem. Hn(A) is a Euclidean Jordan algebra if A is associative (the real numbers, complex numbers or quaternions) and n ≥ 3, or if A is nonassociative (the octonions) and n = 3. - -The exceptional Jordan algebra H3(O) is called the Albert algebra after A.A. Albert. - -To check that Hn(A) satisfies the axioms for a Euclidean Jordan algebra, note that the real trace defines a symmetric bilinear form with (X, X) = Σ ‖x_{ij}‖^2. So it is an inner product. It satisfies the associativity property (Z∘X, Y) = (X, Z∘Y) because of the properties of the real trace. The main axiom to check is the Jordan condition for the operators L(X) defined by L(X)Y = X∘Y: -$$ -\displaystyle{[L(X),L(X^2)]=0.} -$$ - -This is easy to check when A is associative, since Mn(A) is an associative algebra and so a Jordan algebra with X∘Y = 1/2(X Y + Y X). When A = O and n = 3 a special argument is required, one of the shortest being due to Freudenthal. - -In fact if T is in H3(O) with Tr T = 0, then -$$ -\displaystyle{D(X) = TX -XT} -$$ - -defines a skew-adjoint derivation of H3(O). Indeed, -$$ - \operatorname{Tr}(T(X(X^2)) -T(X^2(X)))=\operatorname{Tr} T(aI) = \operatorname{Tr}(T)a=0, -$$ - -so that -$$ - (D(X),X^2)=0. -$$ - -Polarizing yields: -$$ - (D(X),Y\circ Z)+(D(Y),Z\circ X)+ (D(Z),X\circ Y)=0. -$$ - -Setting Z = 1 shows that D is skew-adjoint. The derivation property D(X∘Y) = D(X)∘Y + X∘D(Y) follows from this and the associativity property of the inner product in the identity above. - -With A and n as in the statement of the theorem, let K be the group of automorphisms of E = Hn(A) leaving invariant the inner product.
It is a closed subgroup of O(E), so a compact Lie group. Its Lie algebra consists of skew-adjoint derivations. Freudenthal showed that given X in E there is an automorphism k in K such that k(X) is a diagonal matrix. (By self-adjointness the diagonal entries will be real.) Freudenthal's diagonalization theorem immediately implies the Jordan condition, since Jordan products by real diagonal matrices commute on Mn(A) for any non-associative algebra A. - -To prove the diagonalization theorem, take X in E. By compactness k can be chosen in K minimizing the sums of the squares of the norms of the off-diagonal terms of k(X). Since K preserves the sums of all the squares, this is equivalent to maximizing the sums of the squares of the norms of the diagonal terms of k(X). Replacing X by kX, it can be assumed that the maximum is attained at X. Since the symmetric group Sn, acting by permuting the coordinates, lies in K, if X is not diagonal, it can be supposed that x12 and its adjoint x21 are non-zero. Let T be the skew-adjoint matrix with (2, 1) entry a, (1, 2) entry −a* and 0 elsewhere, and let D be the derivation ad T of E. Let kt = exp tD in K. Then only the first two diagonal entries in X(t) = ktX differ from those of X. The diagonal entries are real. The derivative of x11(t) at t = 0 is the (1, 1) coordinate of [T, X], i.e. a* x21 + x12 a = 2(x21, a). This derivative is non-zero for a = x21. On the other hand, the group kt preserves the real-valued trace. Since it can only change x11 and x22, it preserves their sum. However, on the line x + y = constant, x^2 + y^2 has no local maximum (only a global minimum), a contradiction. Hence X must be diagonal. diff --git a/wiki/wikipedia/2777.txt b/wiki/wikipedia/2777.txt deleted file mode 100644 index d9de51a35715fa3379e1078903748180c0070de4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2777.txt +++ /dev/null @@ -1,3 +0,0 @@ -Long-running transactions (also known as the saga interaction pattern) are computer database transactions that avoid locks on non-local resources, use compensation to handle failures, potentially aggregate smaller ACID transactions (also referred to as atomic transactions), and typically use a coordinator to complete or abort the transaction. In contrast to rollback in ACID transactions, compensation restores the original state, or an equivalent, and is business-specific. For example, the compensating action for making a hotel reservation is canceling that reservation. - -A number of protocols have been specified for long-running transactions using Web services within business processes. OASIS Business Transaction Processing and WS-CAF are examples. These protocols use a coordinator to mediate the successful completion or use of compensation in a long-running transaction.
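The coordinator-plus-compensation structure is straightforward to sketch. The following is a minimal, illustrative Python implementation of the saga pattern; the names are ours and do not come from OASIS BTP, WS-CAF, or any other protocol.

```python
class SagaCoordinator:
    """Runs a list of steps; on failure, compensates completed steps
    in reverse order instead of relying on database rollback."""

    def __init__(self):
        self.steps = []          # list of (action, compensation) pairs

    def add_step(self, action, compensation):
        self.steps.append((action, compensation))

    def run(self):
        completed = []
        try:
            for action, compensation in self.steps:
                action()
                completed.append(compensation)
        except Exception:
            # Compensation is a forward-looking business action that
            # restores the original state or an equivalent one.
            for compensation in reversed(completed):
                compensation()
            return False
        return True


def fail():
    raise RuntimeError("card declined")

saga = SagaCoordinator()
saga.add_step(lambda: print("reserve hotel"), lambda: print("cancel hotel"))
saga.add_step(lambda: print("book flight"), lambda: print("cancel flight"))
saga.add_step(fail, lambda: None)
print("committed" if saga.run() else "compensated")
```

Because each step commits independently, no locks are held across the whole business process, which is the defining property of a long-running transaction.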
diff --git a/wiki/wikipedia/2778.txt b/wiki/wikipedia/2778.txt deleted file mode 100644 index 5cf88cae09850dc2cdf6b6923e24f415086c5980..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2778.txt +++ /dev/null @@ -1,17 +0,0 @@ -In mathematical logic, Beth definability is a result that connects implicit definability of a property to its explicit definability. Specifically, Beth definability states that the two senses of definability are equivalent. - -First-order logic has the Beth definability property. - -For first-order logic, the theorem states that, given a theory T in the language L' ⊇ L and a formula φ in L', the following are equivalent: - -* for any two models A and B of T such that A|L = B|L (where A|L is the reduct of A to L), it is the case that A ⊨ φ[a] if and only if B ⊨ φ[a] (for all tuples a of A) - -* φ is equivalent modulo T to a formula ψ in L. - -Less formally: a property is implicitly definable in a theory in language L (via a formula φ of an extended language L') only if that property is explicitly definable in that theory (by formula ψ in the original language L). - -Clearly the converse holds as well, so that we have an equivalence between implicit and explicit definability. That is, a "property" is implicitly definable with respect to a theory if and only if it is explicitly definable. - -The theorem does not hold if the condition is restricted to finite models. We may have A ⊨ φ[a] if and only if B ⊨ φ[a] for all pairs A, B of finite models without there being any L-formula ψ equivalent to φ modulo T. - -The result was first proven by Evert Willem Beth. diff --git a/wiki/wikipedia/2779.txt b/wiki/wikipedia/2779.txt deleted file mode 100644 index 48ab977587a055b5dd035e02fd94e3802eacc2b1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2779.txt +++ /dev/null @@ -1,49 +0,0 @@ -In mathematics, the Scholz conjecture is a conjecture on the length of certain addition chains. - -It is sometimes also called the Scholz–Brauer conjecture or the Brauer–Scholz conjecture, after Arnold Scholz who formulated it in 1937 and Alfred Brauer who studied it soon afterward and proved a weaker bound. - -The conjecture states that - -l(2^n − 1) ≤ n − 1 + l(n), - -where l(n) is the length of the shortest addition chain producing n. - -Here, an addition chain is defined as a sequence of numbers, starting with 1, such that every number after the first can be expressed as a sum of two earlier numbers (which are allowed to both be equal). Its length is the number of sums needed to express all its numbers, which is one less than the length of the sequence of numbers (since there is no sum of previous numbers for the first number in the sequence, 1). Computing the length of the shortest addition chain that contains a given number x can be done by dynamic programming for small numbers, but it is not known whether it can be done in polynomial time measured as a function of the length of the binary representation of x. Scholz's conjecture, if true, would provide short addition chains for numbers x of a special form, the Mersenne numbers. - -As an example, l(5) = 3: it has a shortest addition chain - -1, 2, 4, 5 - -of length three, determined by the three sums - -1 + 1 = 2, - -2 + 2 = 4, - -4 + 1 = 5. - -Also, l(31) = 7: it has a shortest addition chain - -1, 2, 3, 6, 12, 24, 30, 31 - -of length seven, determined by the seven sums - -1 + 1 = 2, - -2 + 1 = 3, - -3 + 3 = 6, - -6 + 6 = 12, - -12 + 12 = 24, - -24 + 6 = 30, - -30 + 1 = 31. - -Both l(31) and 5 − 1 + l(5) equal 7. - -Therefore, these values obey the inequality (which in this case is an equality) and the Scholz conjecture is true for the case n = 5. - -By using a combination of computer search techniques and mathematical characterizations of optimal addition chains, Clift showed that the conjecture is true for all n < 5784689. Additionally, he verified that for all n ≤ 64, the inequality of the conjecture is actually an equality.
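For tiny n, both sides of the conjectured inequality can be checked directly. The sketch below is our code; it computes l(n) by breadth-first search over "star" chains, in which each new element is a sum involving the previous element. This restriction keeps the search small and is known to attain the true optimum for targets as small as these.

```python
def l(n):
    """Length of a shortest (star) addition chain producing n (small n only)."""
    frontier = [(1,)]
    while True:
        next_frontier = []
        for chain in frontier:
            if chain[-1] == n:
                return len(chain) - 1    # number of sums used
            for a in chain:
                s = a + chain[-1]        # extend with a sum using the last term
                if chain[-1] < s <= n:
                    next_frontier.append(chain + (s,))
        frontier = next_frontier

# Check l(2^n - 1) <= n - 1 + l(n) for n = 2, ..., 6:
for n in range(2, 7):
    lhs, rhs = l(2 ** n - 1), n - 1 + l(n)
    print(n, lhs, rhs, lhs <= rhs)
```

Running this reproduces the n = 5 case worked out above (l(31) = 7 = 4 + l(5)) and shows equality for the other small n as well, consistent with Clift's verification.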
diff --git a/wiki/wikipedia/278.txt b/wiki/wikipedia/278.txt deleted file mode 100644 index b430ce709352813ad39e1c002a261d63ff99d331..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/278.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, the Brumer bound is a bound for the rank of an elliptic curve, proved by Armand Brumer. diff --git a/wiki/wikipedia/2780.txt b/wiki/wikipedia/2780.txt deleted file mode 100644 index da4e44f479495de8bc2845b7dc667ec0aa0292f0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2780.txt +++ /dev/null @@ -1,45 +0,0 @@ -BioTapestry is an open source software application for modeling and visualizing gene regulatory networks (GRNs). - -BioTapestry was created at the Institute for Systems Biology in Seattle, in collaboration with the Davidson Lab at the California Institute of Technology. The project was initiated to support the ongoing development of the model of the GRN regulating the development of the endomesoderm in the sea urchin Strongylocentrotus purpuratus. BioTapestry was initially made public in late 2003 as a web-based, read-only viewer, with the first fully functional editor released in August 2004 (v0.94.1). The current version, 7.0.0, was released in September 2014. Development work on BioTapestry is ongoing. - -BioTapestry is an interactive tool for modeling and visualizing gene regulatory networks. Published examples include models from the Davidson, McMahon, Baliga, Rothenberg, Yuh, and Vokes Labs. - -Input - -* Gene Regulatory Networks can be drawn by hand. - -* Networks can be built using lists of interactions entered via dialog boxes. - -* Lists of interactions can be input using comma-separated-value (CSV) files. - -* Networks can be built using SIF files as input. - -* BioTapestry can accept network definitions via the Gaggle framework. - -Visualization - -* BioTapestry uses orthogonal-directed hyperlinks and a hierarchical presentation of models. - -Analysis - -* BioTapestry can create Systems Biology Markup Language files for a subset of networks. - -Documentation - -* The BioTapestry home page has links to several tutorials for using the software. diff --git a/wiki/wikipedia/2781.txt b/wiki/wikipedia/2781.txt deleted file mode 100644 index cdef20e48c8a1b2e3215b35bc4e48298195aab23..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2781.txt +++ /dev/null @@ -1,5 +0,0 @@ -Geoplexing is a computer science term relating to the duplication of computer storage and applications within a server farm over geographically diverse locations for the purpose of fault tolerance. The name comes from a contraction of geographical multiplex. - -In a geoplex, server clusters are duplicated over one or more geographically separate sites. Geoplexes can be "active-active" (where all of the clusters are used in tandem) or "active-passive" (where one or more of the clusters are kept as a hot spare). - -Data and applications are shared either via cloning or partitioning. With cloning, each server in a cluster handles one or more applications, with the applications and the data both being cloned to other servers in the geoplex; a load balancer then distributes requests to the cloned servers. With partitioning, hardware and applications are duplicated in the geoplex, while application datasets are divided between the servers, and requests are routed to the appropriate server.
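The routing difference between the two sharing schemes can be made concrete with a toy Python sketch (site names and functions are illustrative only, not from any product):

```python
import itertools
import zlib

SITES = ["site-east", "site-west"]           # geographically separate sites

# Cloning: every site holds full copies, so a load balancer can simply
# rotate requests among them.
_round_robin = itertools.cycle(SITES)
def route_cloned(request_key: str) -> str:
    return next(_round_robin)

# Partitioning: each dataset key lives on exactly one site, so the key
# itself determines the owning site.
def route_partitioned(request_key: str) -> str:
    return SITES[zlib.crc32(request_key.encode()) % len(SITES)]

for key in ["user:42", "user:43"]:
    print(key, route_cloned(key), route_partitioned(key))
```

In an active-passive geoplex the same partition or clone map would simply exclude the hot-spare site until failover.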
diff --git a/wiki/wikipedia/2782.txt b/wiki/wikipedia/2782.txt deleted file mode 100644 index c9a74ea67e4a7a29582839c52c8810a4b59e4c56..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2782.txt +++ /dev/null @@ -1,134 +0,0 @@ -The quantum algorithm for linear systems of equations, also called the HHL algorithm, designed by Aram Harrow, Avinatan Hassidim, and Seth Lloyd, is a quantum algorithm formulated in 2009 for solving linear systems. The algorithm estimates the result of a scalar measurement on the solution vector to a given linear system of equations. - -The algorithm is one of the main fundamental algorithms expected to provide a speedup over classical counterparts, along with Shor's factoring algorithm, Grover's search algorithm, the quantum Fourier transform and quantum simulation. Provided the linear system is sparse and has a low condition number $\kappa$, and that the user is interested in the result of a scalar measurement on the solution vector, instead of the values of the solution vector itself, the algorithm has a runtime of $O(\log(N)\kappa^2)$, where $N$ is the number of variables in the linear system. This offers an exponential speedup over the fastest classical algorithm, which runs in $O(N\kappa)$ (or $O(N\sqrt{\kappa})$ for positive semidefinite matrices). - -An implementation of the quantum algorithm for linear systems of equations was first demonstrated in 2013 by Cai et al., Barz et al. and Pan et al. in parallel. The demonstrations consisted of simple linear equations on specially designed quantum devices. The first demonstration of a general-purpose version of the algorithm appeared in 2018 in the work of Zhao et al. - -Due to the prevalence of linear systems in virtually all areas of science and engineering, the quantum algorithm for linear systems of equations has the potential for widespread applicability. - -The problem we are trying to solve is: given an $N \times N$ Hermitian matrix $A$ and a unit vector $\overrightarrow{b}$, find the solution vector $\overrightarrow{x}$ satisfying $A\overrightarrow{x}=\overrightarrow{b}$. This algorithm assumes that the user is not interested in the values of $\overrightarrow{x}$ itself, but rather the result of applying some operator $M$ onto x, $\langle x|M|x\rangle$. - -First, the algorithm represents the vector $\overrightarrow{b}$ as a quantum state of the form: -$$ -|b\rangle=\sum_{i \mathop =1}^N b_i|i\rangle. -$$ - -Next, Hamiltonian simulation techniques are used to apply the unitary operator $e^{iAt}$ to $|b\rangle$ for a superposition of different times $t$. The ability to decompose $|b\rangle$ into the eigenbasis of $A$ and to find the corresponding eigenvalues $\lambda_j$ is facilitated by the use of quantum phase estimation. - -The state of the system after this decomposition is approximately: -$$ -\sum_{j \mathop =1}^N \beta_j|u_j\rangle|\lambda_j\rangle, -$$ - -where the $u_j$ form the eigenvector basis of $A$, and $|b\rangle=\sum_{j \mathop =1}^N \beta_j|u_j\rangle$. - -We would then like to perform the linear map taking $|\lambda_j\rangle$ to $C\lambda^{-1}_j|\lambda_j\rangle$, where $C$ is a normalizing constant. The linear mapping operation is not unitary and thus will require a number of repetitions, as it has some probability of failing.
After it succeeds, we uncompute the $|\lambda_j\rangle$ register and are left with a state proportional to: -$$ -\sum_{j \mathop =1}^N \beta_j\lambda^{-1}_j|u_j\rangle=A^{-1}|b\rangle=|x\rangle, -$$ - -where $|x\rangle$ is a quantum-mechanical representation of the desired solution vector x. To read out all components of x would require the procedure be repeated at least N times. However, it is often the case that one is not interested in $x$ itself, but rather some expectation value of a linear operator M acting on x. By mapping M to a quantum-mechanical operator and performing the quantum measurement corresponding to M, we obtain an estimate of the expectation value $\langle x|M|x\rangle$. This allows a wide variety of features of the vector x to be extracted, including normalization, weights in different parts of the state space, and moments, without actually computing all the values of the solution vector x. - -Firstly, the algorithm requires that the matrix $A$ be Hermitian so that it can be converted into a unitary operator. In the case where $A$ is not Hermitian, define $\mathbf{C} = \begin{bmatrix} 0 & A \\ A^\dagger & 0 \end{bmatrix}$. As $C$ is Hermitian, the algorithm can now be used to solve $Cy=\begin{bmatrix} b \\ 0 \end{bmatrix}$ to obtain $y = \begin{bmatrix} 0 \\ x \end{bmatrix}$. - -Secondly, the algorithm requires an efficient procedure to prepare $|b\rangle$, the quantum representation of b. It is assumed that there exists some linear operator $B$ that can take some arbitrary quantum state $|\mathrm{initial}\rangle$ to $|b\rangle$ efficiently, or that this algorithm is a subroutine in a larger algorithm and is given $|b\rangle$ as input. Any error in the preparation of state $|b\rangle$ is ignored. - -Finally, the algorithm assumes that the state $| \psi_0 \rangle $ can be prepared efficiently, where -$$ -| \psi_0 \rangle := \sqrt{2/T} \sum_{\tau \mathop =0}^{T-1} \sin\pi\left(\tfrac{\tau+\tfrac{1}{2}}{T}\right) |\tau\rangle -$$ - -for some large $T$. The coefficients of $| \psi_0 \rangle $ are chosen to minimize a certain quadratic loss function which induces error in the $U_\mathrm{invert}$ subroutine described below. - -Hamiltonian simulation is used to transform the Hermitian matrix $A$ into a unitary operator, which can then be applied at will. This is possible if A is s-sparse and efficiently row computable, meaning it has at most s nonzero entries per row and given a row index these entries can be computed in time O(s). Under these assumptions, quantum Hamiltonian simulation allows $e^{iAt}$ to be simulated in time $O(\log(N)s^2t)$. - -The key subroutine to the algorithm, denoted $U_\mathrm{invert}$, is defined as follows and incorporates a phase estimation subroutine: - -1. Prepare $|\psi_0\rangle^C$ on register C - -2. Apply the conditional Hamiltonian evolution $\sum_{\tau \mathop =0}^{T-1}|\tau\rangle\langle\tau|^C\otimes e^{iA\tau t_0/T}$ - -3. Apply the Fourier transform to the register C. Denote the resulting basis states with $|k\rangle$ for k = 0, ..., T − 1. Define $\lambda_k:=2\pi k/t_0$. - -4. Adjoin a three-dimensional register S in the state -$$ -|h(\lambda_k)\rangle^S := \sqrt{1-f(\lambda_k)^2 - g(\lambda_k)^2} |\mathrm{nothing} \rangle^S + f(\lambda_k)| \mathrm{well} \rangle^S+g (\lambda_k) | \mathrm{ill} \rangle^S. -$$ - -5. Reverse steps 1–3, uncomputing any garbage produced along the way. - -The phase estimation procedure in steps 1–3 allows for the estimation of eigenvalues of A up to error $\epsilon$.
- -The ancilla register in step 4 is necessary to construct a final state with inverted eigenvalues corresponding to the diagonalized inverse of A. In this register, the functions f and g are called filter functions. The states 'nothing', 'well' and 'ill' are used to instruct the loop body on how to proceed; 'nothing' indicates that the desired matrix inversion has not yet taken place, 'well' indicates that the inversion has taken place and the loop should halt, and 'ill' indicates that part of $|b\rangle$ is in the ill-conditioned subspace of A and the algorithm will not be able to produce the desired inversion. Producing a state proportional to the inverse of A requires 'well' to be measured, after which the overall state of the system collapses to the desired state by the extended Born rule. - -The body of the algorithm follows the amplitude amplification procedure: starting with $U_\mathrm{invert}B|\mathrm{initial}\rangle$, the following operation is repeatedly applied: -$$ -U_\mathrm{invert}BR_\mathrm{init}B^\dagger U^\dagger_\mathrm{invert}R_\mathrm{succ} -$$ - -where -$$ -R_\mathrm{succ}=I-2|\mathrm{well}\rangle\langle \mathrm{well}|, -$$ - -and -$$ -R_\mathrm{init}=I-2|\mathrm{initial}\rangle\langle\mathrm{initial}|. -$$ - -After each repetition, $S$ is measured and will produce a value of 'nothing', 'well', or 'ill' as described above. This loop is repeated until $|\mathrm{well}\rangle$ is measured, which occurs with a probability $p$. Rather than repeating $\frac{1}{p}$ times to minimize error, amplitude amplification is used to achieve the same error resilience using only $O\left(\frac{1}{\sqrt{p}}\right)$ repetitions. - -After successfully measuring 'well' on $S$ the system will be in a state proportional to: -$$ -\sum_{j \mathop =1}^N \beta_j \lambda^{-1}_j |u_j\rangle = A^{-1} |b \rangle = | x\rangle. -$$ - -Finally, we perform the quantum measurement corresponding to M and obtain an estimate of the value of $\langle x|M|x\rangle$. - -The best classical algorithm which produces the actual solution vector $\overrightarrow{x}$ is Gaussian elimination, which runs in $O(N^3)$ time. - -If A is s-sparse and positive semi-definite, then the conjugate gradient method can be used to find the solution vector $\overrightarrow{x}$, which can be found in $O(Ns\kappa)$ time by minimizing the quadratic function $|A\overrightarrow{x} -\overrightarrow{b}|^2$. - -When only a summary statistic of the solution vector $\overrightarrow{x}$ is needed, as is the case for the quantum algorithm for linear systems of equations, a classical computer can find an estimate of $\overrightarrow{x}^\dagger M \overrightarrow{x}$ in $O(N\sqrt{\kappa})$. - -The runtime of the quantum algorithm for solving systems of linear equations originally proposed by Harrow et al. was shown to be $O(\kappa^2\log N/\varepsilon)$, where $\varepsilon>0$ is the error parameter and $\kappa$ is the condition number of $A$. This was subsequently improved to $O(\kappa \log^3\kappa \log N /\varepsilon^3)$ by Andris Ambainis, and a quantum algorithm with runtime polynomial in $\log(1/\varepsilon)$ was developed by Childs et al. Since the HHL algorithm maintains its logarithmic scaling in $N$ only for sparse or low-rank matrices, Wossnig et al. extended the HHL algorithm based on a quantum singular value estimation technique and provided a linear system algorithm for dense matrices which runs in $O(\sqrt N \log N \kappa^2)$ time compared to the $O(N \log N \kappa^2)$ of the standard HHL algorithm.
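As a point of reference for these runtime comparisons, the following classical NumPy sketch computes the quantities the quantum algorithm is designed to estimate: the solution via the eigendecomposition used conceptually above, the scalar $\langle x|M|x\rangle$, and the condition number $\kappa$ discussed next. It illustrates the problem statement only, not the quantum circuit.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = (A + A.conj().T) / 2                 # make A Hermitian
b = rng.normal(size=N) + 1j * rng.normal(size=N)
b /= np.linalg.norm(b)                   # |b> must be a unit vector
M = np.eye(N)                            # here M just measures the norm of |x>

lam, U = np.linalg.eigh(A)               # eigenvalues lam_j, eigenbasis |u_j>
beta = U.conj().T @ b                    # b = sum_j beta_j |u_j>
x = U @ (beta / lam)                     # x = sum_j beta_j lam_j^{-1} |u_j>

print(np.allclose(A @ x, b))             # True: x solves Ax = b
print((x.conj() @ M @ x).real)           # the scalar <x|M|x>
print(np.linalg.cond(A))                 # condition number kappa
```

Reading out the full vector `x` is exactly the step the quantum algorithm avoids; only the final scalar is estimated.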
- -An important factor in the performance of the matrix inversion algorithm is the condition number $\kappa$, which represents the ratio of $A$'s largest and smallest eigenvalues. As the condition number increases, the ease with which the solution vector can be found using gradient descent methods such as the conjugate gradient method decreases, as $A$ becomes closer to a matrix which cannot be inverted and the solution vector becomes less stable. This algorithm assumes that all singular values of the matrix $A$ lie between $\frac{1}{\kappa}$ and 1, in which case the claimed run-time proportional to $\kappa^2$ will be achieved. Therefore, the speedup over classical algorithms is increased further when $\kappa$ is $\mathrm{poly}(\log N)$. - -On February 5, 2013, Stefanie Barz and co-workers demonstrated the quantum algorithm for linear systems of equations on a photonic quantum computing architecture. This implementation used two consecutive entangling gates on the same pair of polarization-encoded qubits. Two separately controlled NOT gates were realized where the successful operation of the first was heralded by a measurement of two ancillary photons. Barz et al. found that the fidelity of the obtained output state ranged from 64.7% to 98.1%, due to the influence of higher-order emissions from spontaneous parametric down-conversion. - -Dominic Berry proposed a new algorithm for solving linear time-dependent differential equations as an extension of the quantum algorithm for solving linear systems of equations. Berry provides an efficient algorithm for solving the full-time evolution under sparse linear differential equations on a quantum computer. - -The finite element method uses large systems of linear equations to find approximate solutions to various physical and mathematical models. Montanaro and Pallister demonstrate that the HHL algorithm, when applied to certain FEM problems, can achieve a polynomial quantum speedup. They suggest that an exponential speedup is not possible in problems with fixed dimensions for which the solution meets certain smoothness conditions. - -Quantum speedups for the finite element method are higher for problems which include solutions with higher-order derivatives and large spatial dimensions. For example, problems in many-body dynamics require the solution of equations containing derivatives on orders scaling with the number of bodies, and some problems in computational finance, such as Black–Scholes models, require large spatial dimensions. - -Wiebe et al. provide a new quantum algorithm to determine the quality of a least-squares fit, in which a continuous function is used to approximate a set of discrete points, by extending the quantum algorithm for linear systems of equations. As the number of discrete points increases, the time required to produce a least-squares fit using even a quantum computer running a quantum state tomography algorithm becomes very large. Wiebe et al. find that in many cases, their algorithm can efficiently find a concise approximation of the data points, eliminating the need for the higher-complexity tomography algorithm. - -Machine learning is the study of systems that can identify trends in data. Tasks in machine learning frequently involve manipulating and classifying a large volume of data in high-dimensional vector spaces. The runtime of classical machine learning algorithms is limited by a polynomial dependence on both the volume of data and the dimensions of the space.
Quantum computers are capable of manipulating high-dimensional vectors using tensor product spaces and are thus well-suited platforms for machine learning algorithms. - -The quantum algorithm for linear systems of equations has been applied to a support vector machine, which is an optimized linear or non-linear binary classifier. A support vector machine can be used for supervised machine learning, in which a training set of already classified data is available, or unsupervised machine learning, in which all data given to the system is unclassified. Rebentrost et al. show that a quantum support vector machine can be used for big data classification and achieve an exponential speedup over classical computers. - -In June 2018, Zhao et al. developed an algorithm for performing Bayesian training of deep neural networks in quantum computers with an exponential speedup over classical training due to the use of the quantum algorithm for linear systems of equations, providing also the first general-purpose implementation of the algorithm to be run in cloud-based quantum computers. diff --git a/wiki/wikipedia/2783.txt b/wiki/wikipedia/2783.txt deleted file mode 100644 index bc0b73ec2472b5007fe4ac986fd5cf9109d0a7e5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2783.txt +++ /dev/null @@ -1,59 +0,0 @@ -Sexy primes are prime numbers that differ from each other by 6. For example, the numbers 5 and 11 are both sexy primes, because both are prime and 11 − 5 = 6. - -The term "sexy prime" is a pun stemming from the Latin word for six: sex. - -If p + 2 or p + 4 (where p is the lower prime) is also prime, then the sexy prime is part of a prime triplet. In August 2014 the Polymath group, seeking the proof of the twin prime conjecture, showed that if the generalized Elliott–Halberstam conjecture is proven, one can show the existence of infinitely many pairs of consecutive primes that differ by at most 6, and as such they are either twin, cousin or sexy primes. - -As used in this article, n# stands for the product 2 · 3 · 5 · 7 · … of all the primes ≤ n. - -The sexy primes below 500 are: - -(5,11), (7,13), (11,17), (13,19), (17,23), (23,29), (31,37), (37,43), (41,47), (47,53), (53,59), (61,67), (67,73), (73,79), (83,89), (97,103), (101,107), (103,109), (107,113), (131,137), (151,157), (157,163), (167,173), (173,179), (191,197), (193,199), (223,229), (227,233), (233,239), (251,257), (257,263), (263,269), (271,277), (277,283), (307,313), (311,317), (331,337), (347,353), (353,359), (367,373), (373,379), (383,389), (433,439), (443,449), (457,463), (461,467). - -The largest-known pair of sexy primes was found by P. Kaiser and has 50,539 digits. The primes are: - -p = (520461 × 2^55931 + 1) × (98569639289 × (520461 × 2^55931 − 1)^2 − 3) − 1 - -p+6 = (520461 × 2^55931 + 1) × (98569639289 × (520461 × 2^55931 − 1)^2 − 3) + 5 - -Sexy primes can be extended to larger constellations. Triplets of primes (p, p+6, p+12) such that p+18 is composite are called sexy prime triplets. Those below 1,000 are: - -(7,13,19), (17,23,29), (31,37,43), (47,53,59), (67,73,79), (97,103,109), (101,107,113), (151,157,163), (167,173,179), (227,233,239), (257,263,269), (271,277,283), (347,353,359), (367,373,379), (557,563,569), (587,593,599), (607,613,619), (647,653,659), (727,733,739), (941,947,953), (971,977,983). - -In May 2019, Peter Kaiser set a record for the largest-known sexy prime triplet with 6,031 digits: - -p = 10409207693 × 2^20000 − 1.
- -Gerd Lamprecht improved the record to 6,116 digits in August 2019: - -p = 20730011943 × 14221# + 344231. - -Ken Davis further improved the record with a 6,180 digit Brillhart-Lehmer-Selfridge provable triplet in October 2019: - -p = (72865897*809857*4801#*(809857*4801#+1)+210)*(809857*4801#-1)/35+1 - -Norman Luhn & Gerd Lamprecht improved the record to 6,701 digits in October 2019: - -p = 22582235875 × 2^22224 + 1. - -Gerd Lamprecht & Norman Luhn improved the record to 10,602 digits in December 2019: - -p = 2683143625525 × 2^35176 + 1. - -Sexy prime quadruplets (p, p+6, p+12, p+18) can only begin with primes ending in a 1 in their decimal representation (except for the quadruplet with p = 5). The sexy prime quadruplets below 1000 are: - -(5,11,17,23), (11,17,23,29), (41,47,53,59), (61,67,73,79), (251,257,263,269), (601,607,613,619), (641,647,653,659). - -In November 2005 the largest-known sexy prime quadruplet, found by Jens Kruse Andersen, had 1,002 digits: - -p = 411784973 · 2347# + 3301. - -In September 2010 Ken Davis announced a 1,004 digit quadruplet with p = 2^3333 + 1582534968299. - -In May 2019 Marek Hubal announced a 1,138 digit quadruplet with p = 1567237911 × 2677# + 3301. - -In June 2019 Peter Kaiser announced a 1,534 digit quadruplet with p = 19299420002127 × 2^5050 + 17233. - -In October 2019 Gerd Lamprecht and Norman Luhn announced a 3,025 digit quadruplet with p = 121152729080 × 7019#/1729 + 1. - -In an arithmetic progression of five terms with common difference 6, one of the terms must be divisible by 5, because 5 and 6 are relatively prime. Thus, the only sexy prime quintuplet is (5,11,17,23,29); no longer sequence of sexy primes is possible. diff --git a/wiki/wikipedia/2784.txt b/wiki/wikipedia/2784.txt deleted file mode 100644 index f180f92bdfc686e6f6be1dd607c2d4004533518f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2784.txt +++ /dev/null @@ -1,37 +0,0 @@ -In mathematics, specifically transcendental number theory, Schanuel's conjecture is a conjecture made by Stephen Schanuel in the 1960s concerning the transcendence degree of certain field extensions of the rational numbers. - -The conjecture is as follows: - -Given any n complex numbers z1, ..., zn that are linearly independent over the rational numbers $\mathbb{Q}$, the field extension $\mathbb{Q}(z_1, \dots, z_n, e^{z_1}, \dots, e^{z_n})$ has transcendence degree at least n over $\mathbb{Q}$. - -The conjecture can be found in Lang (1966). - -The conjecture, if proven, would generalize most known results in transcendental number theory. The special case where the numbers z1,...,zn are all algebraic is the Lindemann–Weierstrass theorem. If, on the other hand, the numbers are chosen so as to make exp(z1),...,exp(zn) all algebraic then one would prove that linearly independent logarithms of algebraic numbers are algebraically independent, a strengthening of Baker's theorem. - -The Gelfond–Schneider theorem follows from this strengthened version of Baker's theorem, as does the currently unproven four exponentials conjecture. - -Schanuel's conjecture, if proved, would also settle whether numbers such as e + π and e^e are algebraic or transcendental, and prove that e and π are algebraically independent simply by setting z1 = 1 and z2 = πi, and using Euler's identity. - -Euler's identity states that e^{πi} + 1 = 0. If Schanuel's conjecture is true then this is, in some precise sense involving exponential rings, the only relation between e, π, and i over the complex numbers.
- -Although ostensibly a problem in number theory, the conjecture has implications in model theory as well. Angus Macintyre and Alex Wilkie, for example, proved that the theory of the real field with exponentiation, $\mathbb{R}_{\exp}$, is decidable provided Schanuel's conjecture is true. In fact they only needed the real version of the conjecture, defined below, to prove this result, which would be a positive solution to Tarski's exponential function problem. - -The converse Schanuel conjecture is the following statement: - -Suppose F is a countable field with characteristic 0, and e : F → F is a homomorphism from the additive group (F,+) to the multiplicative group (F,·) whose kernel is cyclic. Suppose further that for any n elements x1,...,xn of F which are linearly independent over $\mathbb{Q}$, the extension field $\mathbb{Q}$(x1,...,xn,e(x1),...,e(xn)) has transcendence degree at least n over $\mathbb{Q}$. Then there exists a field homomorphism h : F → $\mathbb{C}$ such that h(e(x)) = exp(h(x)) for all x in F. - -A version of Schanuel's conjecture for formal power series, also by Schanuel, was proven by James Ax in 1971. It states: - -Given any n formal power series f1,...,fn in $t\mathbb{C}[[t]]$ which are linearly independent over $\mathbb{Q}$, the field extension $\mathbb{C}$(t,f1,...,fn,exp(f1),...,exp(fn)) has transcendence degree at least n over $\mathbb{C}$(t). - -As stated above, the decidability of $\mathbb{R}_{\exp}$ follows from the real version of Schanuel's conjecture, which is as follows: - -If x1,...,xn are real numbers and the transcendence degree of the field $\mathbb{Q}$(x1,...,xn, exp(x1),...,exp(xn)) is strictly less than n, then there are integers m1,...,mn, not all zero, such that m1x1 +...+ mnxn = 0. - -A related conjecture called the uniform real Schanuel's conjecture essentially says the same but puts a bound on the integers mi. The uniform real version of the conjecture is equivalent to the standard real version. Macintyre and Wilkie showed that a consequence of Schanuel's conjecture, which they dubbed the Weak Schanuel's conjecture, was equivalent to the decidability of $\mathbb{R}_{\exp}$. This conjecture states that there is a computable upper bound on the norm of non-singular solutions to systems of exponential polynomials; this is, non-obviously, a consequence of Schanuel's conjecture for the reals. - -It is also known that Schanuel's conjecture would be a consequence of conjectural results in the theory of motives. In this setting Grothendieck's period conjecture for an abelian variety A states that the transcendence degree of its period matrix is the same as the dimension of the associated Mumford–Tate group, and what is known by work of Pierre Deligne is that the dimension is an upper bound for the transcendence degree. Bertolin has shown how a generalised period conjecture includes Schanuel's conjecture. - -While a proof of Schanuel's conjecture seems a long way off, connections with model theory have prompted a surge of research on the conjecture. - -In 2004, Boris Zilber systematically constructed exponential fields $K_{\exp}$ that are algebraically closed and of characteristic zero, and such that one of these fields exists for each uncountable cardinality. He axiomatised these fields and, using Hrushovski's construction and techniques inspired by work of Shelah on categoricity in infinitary logics, proved that this theory of "pseudo-exponentiation" has a unique model in each uncountable cardinal.
Schanuel's conjecture is part of this axiomatisation, and so the natural conjecture that the unique model of cardinality continuum is actually isomorphic to the complex exponential field implies Schanuel's conjecture. In fact, Zilber showed that this conjecture holds if and only if both Schanuel's conjecture and another unproven condition on the complex exponentiation field, which Zilber calls exponential-algebraic closedness, hold. As this construction can also give models with counterexamples to Schanuel's conjecture, this method cannot prove Schanuel's conjecture. diff --git a/wiki/wikipedia/2785.txt b/wiki/wikipedia/2785.txt deleted file mode 100644 index 5e3911829f7fa13542fd66db91b58a00811bcb93..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2785.txt +++ /dev/null @@ -1,25 +0,0 @@ -In discrete geometry, the original orchard-planting problem asks for the maximum number of 3-point lines attainable by a configuration of a specific number of points in the plane. It is also called the tree-planting problem or simply the orchard problem. There are also investigations into how many k-point lines there can be. Hallard T. Croft and Paul Erdős proved - -$t_k > c n^2 / k^3$, where n is the number of points and $t_k$ is the number of k-point lines. - -Their construction contains some m-point lines, where m > k. One can also ask what happens if such longer lines are not allowed. - -Define $t_3^{\text{orchard}}(n)$ to be the maximum number of 3-point lines attainable with a configuration of n points. - -For an arbitrary number of points, n, $t_3^{\text{orchard}}(n)$ was shown to be $(1/6)n^2 - O(n)$ in 1974. - -Since no two lines may share two distinct points, a trivial upper bound for the number of 3-point lines determined by n points is -$$
-\left\lfloor\binom{n}{2}\Big/\binom{3}{2}\right\rfloor=\left\lfloor\frac{n^2-n}{6}\right\rfloor.
-$$ - -Using the fact that the number of 2-point lines is at least $6n/13$, this upper bound can be lowered to -$$
-\left\lfloor\frac{\binom{n}{2}-6n/13}{3}\right\rfloor=\left\lfloor\frac{n^2}{6}-\frac{25n}{78}\right\rfloor.
-$$ - -Lower bounds for $t_3^{\text{orchard}}(n)$ are given by constructions for sets of points with many 3-point lines. The earliest quadratic lower bound of roughly $(1/8)n^2$ was given by Sylvester, who placed n points on the cubic curve $y = x^3$. This was improved to $\lfloor(1/6)n^2 - (1/2)n\rfloor + 1$ in 1974 by Burr, Grünbaum and Sloane, using a construction based on Weierstrass's elliptic functions. An elementary construction using hypocycloids was found by Füredi, achieving the same lower bound. - -In September 2013, Ben Green and Terence Tao published a paper in which they prove that for all point sets of sufficient size, $n > n_0$, there are at most $\lfloor n(n - 3)/6\rfloor + 1 = \lfloor(1/6)n^2 - (1/2)n + 1\rfloor$ 3-point lines, which matches the lower bound established by Burr, Grünbaum and Sloane. This is slightly better than the bound that would directly follow from their tight lower bound of $n/2$ for the number of 2-point lines: $\lfloor n(n - 2)/6\rfloor$, proved in the same paper and solving a 1951 problem posed independently by Gabriel Andrew Dirac and Theodore Motzkin.
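As a quick, illustrative check of the quadratic lower bound (not part of the original article), the following Python script brute-forces the number of 3-point lines in Sylvester's construction. On the cubic $y = x^3$ three distinct points are collinear exactly when their x-coordinates sum to zero, and no line meets the curve in more than three points, so every collinear triple spans a 3-point line:

```python
from itertools import combinations

def collinear(p, q, r):
    # zero cross product <=> the three points lie on a common line
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

k = 6
points = [(x, x**3) for x in range(-k, k + 1)]  # n = 2k + 1 points on y = x^3
n = len(points)

t3 = sum(1 for p, q, r in combinations(points, 3) if collinear(p, q, r))
print(n, t3, n * n / 8)  # t3 grows like (1/8)n^2, matching Sylvester's bound
```

Increasing k shows the count tracking the quadratic growth rate, though of course this only illustrates the construction rather than proving the bound.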
diff --git a/wiki/wikipedia/2786.txt b/wiki/wikipedia/2786.txt deleted file mode 100644 index efd3eb72512bd807e8df3b650dc0eb8989ad4bc6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2786.txt +++ /dev/null @@ -1,9 +0,0 @@ -In algebra, the theorem of transition is said to hold between commutative rings $A \subset B$ if - -# $B$ dominates $A$; i.e., for each proper ideal $I$ of $A$, $IB$ is proper and for each maximal ideal $\mathfrak n$ of $B$, $\mathfrak n \cap A$ is maximal - -# for each maximal ideal $\mathfrak m$ and $\mathfrak m$-primary ideal $Q$ of $A$, $\operatorname{length}_B (B/ Q B)$ is finite and moreover - -#:$\operatorname{length}_B (B/ Q B) = \operatorname{length}_B (B/ \mathfrak{m} B) \operatorname{length}_A(A/Q).$ - -Given commutative rings $A \subset B$ such that $B$ dominates $A$ and such that for each maximal ideal $\mathfrak m$ of $A$ the length $\operatorname{length}_B (B/ \mathfrak{m} B)$ is finite, the natural inclusion $A \to B$ is a faithfully flat ring homomorphism if and only if the theorem of transition holds between $A \subset B$. diff --git a/wiki/wikipedia/2787.txt b/wiki/wikipedia/2787.txt deleted file mode 100644 index a273a4f075cb4e81ef8533b554ba4dfba6f88e67..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2787.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, the Stallings–Zeeman theorem is a result in algebraic topology, used in the proof of the Poincaré conjecture for dimension greater than or equal to five. It is named after the mathematicians John R. Stallings and Christopher Zeeman. - -Let M be a finite simplicial complex of dimension dim(M) = m ≥ 5. Suppose that M has the homotopy type of the m-dimensional sphere $S^m$ and that M is locally piecewise linearly homeomorphic to m-dimensional Euclidean space $\mathbb{R}^m$. Then M is homeomorphic to $S^m$ under a map that is piecewise linear except possibly at a single point x. That is, M \ {x} is piecewise linearly homeomorphic to $\mathbb{R}^m$. diff --git a/wiki/wikipedia/2788.txt b/wiki/wikipedia/2788.txt deleted file mode 100644 index c0dca313f3007ac7109ec6da0720d0505def0eb0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2788.txt +++ /dev/null @@ -1,7 +0,0 @@ -Galileo's paradox is a demonstration of one of the surprising properties of infinite sets. In his final scientific work, Two New Sciences, Galileo Galilei made apparently contradictory statements about the positive integers. First, some numbers are squares, while others are not; therefore, all the numbers, including both squares and non-squares, must be more numerous than just the squares. And yet, for every number there is exactly one square; hence, there cannot be more of one than of the other. This is an early use, though not the first, of the idea of one-to-one correspondence in the context of infinite sets. - -Galileo concluded that the ideas of less, equal, and greater apply to (what we would now call) finite sets, but not to infinite sets. In the nineteenth century Cantor found a framework in which this restriction is not necessary; it is possible to define comparisons amongst infinite sets in a meaningful way (by which definition the two sets, integers and squares, have "the same size"), and by this definition some infinite sets are strictly larger than others. - -The ideas were not new with Galileo, but his name has come to be associated with them. In particular, Duns Scotus, around 1302, compared even numbers to the whole of numbers.
- -The relevant section of Two New Sciences is excerpted below: diff --git a/wiki/wikipedia/2789.txt b/wiki/wikipedia/2789.txt deleted file mode 100644 index be1277a57d84e0a5977beb49c4c9970c3887a884..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2789.txt +++ /dev/null @@ -1,93 +0,0 @@ -Mathematical beauty is the aesthetic pleasure typically derived from the abstractness, purity, simplicity, depth or orderliness of mathematics. Mathematicians often express this pleasure by describing mathematics (or, at least, some aspect of mathematics) as beautiful. They might also describe mathematics as an art form (e.g., a position taken by G. H. Hardy) or, at a minimum, as a creative activity. Comparisons are often made with music and poetry. - -Bertrand Russell expressed his sense of mathematical beauty in these words: - -
    - -Mathematics, rightly viewed, possesses not only truth, but supreme beauty—a beauty cold and austere, like that of sculpture, without appeal to any part of our weaker nature, without the gorgeous trappings of painting or music, yet sublimely pure, and capable of a stern perfection such as only the greatest art can show. The true spirit of delight, the exaltation, the sense of being more than Man, which is the touchstone of the highest excellence, is to be found in mathematics as surely as poetry. - -
- -Paul Erdős expressed his views on the ineffability of mathematics when he said, "Why are numbers beautiful? It's like asking why is Beethoven's Ninth Symphony beautiful. If you don't see why, someone can't tell you. I know numbers are beautiful. If they aren't beautiful, nothing is". - -Mathematicians describe an especially pleasing method of proof as elegant. Depending on context, this may mean: - -* A proof that uses a minimum of additional assumptions or previous results. - -* A proof that is unusually succinct. - -* A proof that derives a result in a surprising way (e.g., from an apparently unrelated theorem or a collection of theorems). - -* A proof that is based on new and original insights. - -* A method of proof that can be easily generalized to solve a family of similar problems. - -In the search for an elegant proof, mathematicians often look for different independent ways to prove a result—as the first proof that is found can often be improved. The theorem for which the greatest number of different proofs have been discovered is possibly the Pythagorean theorem, with hundreds of proofs published to date. Another theorem that has been proved in many different ways is the theorem of quadratic reciprocity. In fact, Carl Friedrich Gauss alone had eight different proofs of this theorem, six of which he published. - -Conversely, results that are logically correct but involve laborious calculations, over-elaborate methods, highly conventional approaches or a large number of powerful axioms or previous results are usually not considered to be elegant, and may even be referred to as ugly or clumsy. - -==Beauty in results== - -Some mathematicians see beauty in mathematical results that establish connections between two areas of mathematics that at first sight appear to be unrelated. These results are often described as deep. While it is difficult to find universal agreement on whether a result is deep, some examples are more commonly cited than others. One such example is Euler's identity: - -$e^{i \pi} + 1 = 0.$ - -Euler's identity is a special case of Euler's formula, which the physicist Richard Feynman called "our jewel" and "the most remarkable formula in mathematics". Modern examples include the modularity theorem, which establishes an important connection between elliptic curves and modular forms (work on which led to the awarding of the Wolf Prize to Andrew Wiles and Robert Langlands), and "monstrous moonshine", which connects the Monster group to modular functions via string theory (for which Richard Borcherds was awarded the Fields Medal). - -Other examples of deep results include unexpected insights into mathematical structures. For example, Gauss's Theorema Egregium is a deep theorem which relates a local phenomenon (curvature) to a global phenomenon (area) in a surprising way. In particular, the area of a triangle on a curved surface is proportional to the excess of the triangle, and the constant of proportionality is the curvature. Another example is the fundamental theorem of calculus (and its vector versions including Green's theorem and Stokes' theorem). - -The opposite of deep is trivial. A trivial theorem may be a result that can be derived in an obvious and straightforward way from other known results, or which applies only to a specific set of particular objects such as the empty set. On some occasions, however, a statement of a theorem can be original enough to be considered deep—even though its proof is fairly obvious.
- -In his A Mathematician's Apology, Hardy suggests that a beautiful proof or result possesses "inevitability", "unexpectedness", and "economy". - -Rota, however, disagrees with unexpectedness as a necessary condition for beauty and proposes a counterexample: - -In contrast, Monastyrsky writes: - -This disagreement illustrates both the subjective nature of mathematical beauty and its connection with mathematical results: in this case, not only the existence of exotic spheres, but also a particular realization of them. - -Interest in pure mathematics that is separate from empirical study has been part of the experience of various civilizations, including that of the ancient Greeks, who "did mathematics for the beauty of it". The aesthetic pleasure that mathematical physicists tend to experience in Einstein's theory of general relativity has been attributed (by Paul Dirac, among others) to its "great mathematical beauty". The beauty of mathematics is experienced when the physical reality of objects is represented by mathematical models. Group theory, developed in the early 1800s for the sole purpose of solving polynomial equations, became a fruitful way of categorizing elementary particles—the building blocks of matter. Similarly, the study of knots provides important insights into string theory and loop quantum gravity. - -Some believe that in order to appreciate mathematics, one must engage in doing mathematics. - -For example, Math Circle is an after-school enrichment program where students do mathematics through games and activities; there are also some teachers who encourage student engagement by teaching mathematics in a kinesthetic way (see kinesthetic learning). - -In a typical Math Circle lesson, students use pattern finding, observation, and exploration to make their own mathematical discoveries. For example, mathematical beauty arises in a Math Circle activity on symmetry designed for 2nd and 3rd graders, where students create their own snowflakes by folding a square piece of paper and cutting out designs of their choice along the edges of the folded paper. When the paper is unfolded, a symmetrical design reveals itself. In a day-to-day elementary school mathematics class, symmetry can be presented in an artistic manner where students see aesthetically pleasing results in mathematics. - -Some teachers prefer to use mathematical manipulatives to present mathematics in an aesthetically pleasing way. Examples of manipulatives include algebra tiles, Cuisenaire rods, and pattern blocks. For example, one can teach the method of completing the square by using algebra tiles. Cuisenaire rods can be used to teach fractions, and pattern blocks can be used to teach geometry. Using mathematical manipulatives helps students gain a conceptual understanding that might not be seen immediately in written mathematical formulas. - -Another example of beauty in experience involves the use of origami. Origami, the art of paper folding, has aesthetic qualities and many mathematical connections. One can study the mathematics of paper folding by observing the crease pattern on unfolded origami pieces. - -Combinatorics, the study of counting, has artistic representations that some find mathematically beautiful. There are many visual examples that illustrate combinatorial concepts.
Some of the topics and objects seen in combinatorics courses with visual representations include, among others: - -* Four color theorem - -* Young tableau - -* Permutohedron - -* Graph theory - -* Partition of a set - -Some mathematicians are of the opinion that the doing of mathematics is closer to discovery than invention, for example: - -These mathematicians believe that the detailed and precise results of mathematics may be reasonably taken to be true without any dependence on the universe in which we live. For example, they would argue that the theory of the natural numbers is fundamentally valid, in a way that does not require any specific context. Some mathematicians have extrapolated this viewpoint, that mathematical beauty is truth, further, in some cases becoming mysticism. - -In Plato's philosophy there were two worlds, the physical one in which we live and another abstract world which contained unchanging truth, including mathematics. He believed that the physical world was a mere reflection of the more perfect abstract world. - -Hungarian mathematician Paul Erdős spoke of an imaginary book, in which God has written down all the most beautiful mathematical proofs. When Erdős wanted to express particular appreciation of a proof, he would exclaim "This one's from The Book!" - -Twentieth-century French philosopher Alain Badiou claims that ontology is mathematics. Badiou also believes in deep connections between mathematics, poetry and philosophy. - -In some cases, natural philosophers and other scientists who have made extensive use of mathematics have made leaps of inference between beauty and physical truth in ways that turned out to be erroneous. For example, at one stage in his life, Johannes Kepler believed that the proportions of the orbits of the then-known planets in the Solar System had been arranged by God to correspond to a concentric arrangement of the five Platonic solids, each orbit lying on the circumsphere of one polyhedron and the insphere of another. As there are exactly five Platonic solids, Kepler's hypothesis could only accommodate six planetary orbits and was disproved by the subsequent discovery of Uranus. - -In the 1970s, Abraham Moles and Frieder Nake analyzed links between beauty, information processing, and information theory. In the 1990s, Jürgen Schmidhuber formulated a mathematical theory of observer-dependent subjective beauty based on algorithmic information theory: the most beautiful objects among subjectively comparable objects have short algorithmic descriptions (i.e., Kolmogorov complexity) relative to what the observer already knows. Schmidhuber explicitly distinguishes between beautiful and interesting. The latter corresponds to the first derivative of subjectively perceived beauty: - -the observer continually tries to improve the predictability and compressibility of the observations by discovering regularities such as repetitions and symmetries and fractal self-similarity. Whenever the observer's learning process (possibly a predictive artificial neural network) leads to improved data compression such that the observation sequence can be described by fewer bits than before, the temporary interestingness of the data corresponds to the compression progress, and is proportional to the observer's internal curiosity reward.
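The "short description" idea can be rendered loosely in code. The following Python sketch is only a rough analogy, using zlib's compressed size as a computable stand-in for Kolmogorov complexity; it is not Schmidhuber's actual formalism:

```python
import hashlib
import zlib

def description_length(data: bytes) -> int:
    # compressed size in bytes: a crude, computable proxy for Kolmogorov complexity
    return len(zlib.compress(data, 9))

# a highly regular observation stream vs. a patternless-looking one of equal length
regular = b"ab" * 512
noisy = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(32))  # 1024 bytes

print(description_length(regular))  # far below the raw 1024 bytes
print(description_length(noisy))    # close to (or above) the raw length
```

Under this proxy the regular stream has a much shorter description, mirroring the claim that subjective beauty tracks compressibility relative to what the observer already knows.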
- -Examples of the use of mathematics in music include the stochastic music of Iannis Xenakis, Fibonacci in Tool's Lateralus, counterpoint of Johann Sebastian Bach, polyrhythmic structures (as in Igor Stravinsky's The Rite of Spring), the metric modulation of Elliott Carter, permutation theory in serialism beginning with Arnold Schoenberg, and application of Shepard tones in Karlheinz Stockhausen's Hymnen. - -Examples of the use of mathematics in the visual arts include applications of chaos theory and fractal geometry to computer-generated art, symmetry studies of Leonardo da Vinci, projective geometries in development of the perspective theory of Renaissance art, grids in Op art, optical geometry in the camera obscura of Giambattista della Porta, and multiple perspective in analytic cubism and futurism. - -The Dutch graphic designer M. C. Escher created mathematically inspired woodcuts, lithographs, and mezzotints. These feature impossible constructions, explorations of infinity, architecture, visual paradoxes and tessellations. Some painters and sculptors create work distorted with the mathematical principles of anamorphosis, including South African sculptor Jonty Hurwitz. British constructionist artist John Ernest created reliefs and paintings inspired by group theory. A number of other British artists of the constructionist and systems schools of thought also draw on mathematical models and structures as a source of inspiration, including Anthony Hill and Peter Lowe. Computer-generated art is based on mathematical algorithms. diff --git a/wiki/wikipedia/279.txt b/wiki/wikipedia/279.txt deleted file mode 100644 index e3b9ae27a79a537d4d813af7cbb10793698189a3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/279.txt +++ /dev/null @@ -1,32 +0,0 @@ -In mathematics, the Bogomolov–Miyaoka–Yau inequality is the inequality -$$
- c_1^2 \le 3 c_2
-$$ - -between Chern numbers of compact complex surfaces of general type. Its major interest is the way it restricts the possible topological types of the underlying real 4-manifold. It was proved independently by Shing-Tung Yau and Yoichi Miyaoka, after Antonius Van de Ven and Fedor Bogomolov proved weaker versions with the constant 3 replaced by 8 and 4. - -Armand Borel and Friedrich Hirzebruch showed that the inequality is best possible by finding infinitely many cases where equality holds. The inequality is false in positive characteristic: there are examples of surfaces in characteristic p, such as generalized Raynaud surfaces, for which it fails. - -The conventional formulation of the Bogomolov–Miyaoka–Yau inequality is as follows. Let X be a compact complex surface of general type, and let c1 = c1(X) and c2 = c2(X) be the first and second Chern classes of the complex tangent bundle of the surface. Then -$$
- c_1^2 \le 3 c_2.
-$$ - -Moreover if equality holds then X is a quotient of a ball. The latter statement is a consequence of Yau's differential geometric approach, which is based on his resolution of the Calabi conjecture. - -Since $ c_2(X) = e(X) $ is the topological Euler characteristic and by the Thom–Hirzebruch signature theorem $ c_1^2(X) = 2 e(X) + 3\sigma(X) $ where $\sigma(X)$ is the signature of the intersection form on the second cohomology, the Bogomolov–Miyaoka–Yau inequality can also be written as a restriction on the topological type of the surface of general type: -$$
- \sigma(X) \le \frac{1}{3} e(X),
-$$ - -moreover if $\sigma(X) = (1/3)e(X)$ then the universal covering is a ball.
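Spelling out the rewriting (an elementary step, using only the formulas just quoted): substituting $c_2(X) = e(X)$ and $c_1^2(X) = 2e(X) + 3\sigma(X)$ into $c_1^2 \le 3c_2$ gives
$$
2e(X) + 3\sigma(X) \le 3e(X), \qquad \text{i.e.} \qquad \sigma(X) \le \frac{1}{3}e(X).
$$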
- -Together with the Noether inequality, the Bogomolov–Miyaoka–Yau inequality sets boundaries in the search for complex surfaces. Mapping out the topological types that are realized as complex surfaces is called the geography of surfaces; see surfaces of general type. - -If X is a surface of general type with $ c_1^2 = 3 c_2$, so that equality holds in the Bogomolov–Miyaoka–Yau inequality, then Yau proved that X is isomorphic to a quotient of the unit ball in ${\mathbb C}^2$ by an infinite discrete group. Examples of surfaces satisfying this equality are hard to find. Borel showed that there are infinitely many values of $c_1^2 = 3c_2$ for which a surface exists. Mumford found a fake projective plane with $c_1^2 = 3c_2 = 9$, which is the minimum possible value because $c_1^2 + c_2$ is always divisible by 12, and Prasad and Yeung, together with later work of Cartwright and Steger, showed that there are exactly 50 fake projective planes. - -Barthel gave a method for finding examples, which in particular produced a surface X with $c_1^2 = 3c_2 = 3254$. - -Ishida found a quotient of this surface with $c_1^2 = 3c_2 = 45$, and taking unbranched coverings of this quotient gives examples with $c_1^2 = 3c_2 = 45k$ for any positive integer k. - -Examples with $c_1^2 = 3c_2 = 9n$ for every positive integer n have also been found. diff --git a/wiki/wikipedia/2790.txt b/wiki/wikipedia/2790.txt deleted file mode 100644 index c6eb4444b978324cf5e97767cbc7cb1c8f2e0627..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2790.txt +++ /dev/null @@ -1,9 +0,0 @@ -The carpenter's rule problem is a discrete geometry problem, which can be stated in the following manner: Can a simple planar polygon be moved continuously to a position where all its vertices are in convex position, so that the edge lengths and simplicity are preserved along the way? A closely related problem is to show that any non-self-crossing polygonal chain can be straightened, again by a continuous transformation that preserves edge distances and avoids crossings. - -Both problems were successfully solved by Robert Connelly, Erik Demaine, and Günter Rote. - -Subsequent to their work, Ileana Streinu provided a simplified combinatorial proof formulated in the terminology of robot arm motion planning. Both the original proof and Streinu's proof work by finding expansive motions of the input, continuous transformations such that no two points ever move towards each other. Streinu's version of the proof adds edges to the input to form a pointed pseudotriangulation, removes one added convex hull edge from this graph, and shows that the remaining graph has a one-parameter family of motions in which all distances are nondecreasing. By repeatedly applying such motions, one eventually reaches a state in which no further expansive motions are possible, which can only happen when the input has been straightened or convexified. - -Streinu and Whitesides provide an application of this result to the mathematics of paper folding: they describe how to fold any single-vertex origami shape using only simple non-self-intersecting motions of the paper. Essentially, this folding process is a time-reversed version of the problem of convexifying a polygon of length smaller than π, but on the surface of a sphere rather than in the Euclidean plane. This result was extended by Panina for spherical polygons of edge length smaller than 2π. - -John Pardon generalized the carpenter's rule problem to rectifiable curves. He showed that every rectifiable Jordan curve can be made convex, without increasing its length and without decreasing the distance between any pair of points.
This research, performed while he was still a high school student, won the second-place prize for Pardon in the 2007 Intel Science Talent Search. diff --git a/wiki/wikipedia/2791.txt b/wiki/wikipedia/2791.txt deleted file mode 100644 index f87fc112581a0ab0082415179ad39781a99793cf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2791.txt +++ /dev/null @@ -1,151 +0,0 @@ -In mathematics, Liouville's formula, also known as the Abel–Jacobi–Liouville identity, is an equation that expresses the determinant of a square-matrix solution of a first-order system of homogeneous linear differential equations in terms of the sum of the diagonal coefficients of the system. The formula is named after the French mathematician Joseph Liouville. Jacobi's formula provides another representation of the same mathematical relationship. - -Liouville's formula is a generalization of Abel's identity and can be used to prove it. Since Liouville's formula relates the different linearly independent solutions of the system of differential equations, it can help to find one solution from the other(s); see the example application below. - -Consider the n-dimensional first-order homogeneous linear differential equation -$$
-y'=A(x)y
-$$ - -on an interval I of the real line, where A(x) for x ∈ I denotes a square matrix of dimension n with real or complex entries. Let Φ denote a matrix-valued solution on I, meaning that each Φ(x) is a square matrix of dimension n with real or complex entries and the derivative satisfies -$$
-\Phi'(x)=A(x)\Phi(x),\qquad x\in I.
-$$ - -Let -$$
-\mathrm{tr}A(\xi)=\sum_{i=1}^n a_{i,i}(\xi),\qquad \xi\in I,
-$$ - -denote the trace of $A(\xi) = (a_{i,j}(\xi))_{i,j \in \{1,\ldots,n\}}$, the sum of its diagonal entries. If the trace of A is a continuous function, then the determinant of Φ satisfies -$$
-\det\Phi(x)=\det\Phi(x_0)\exp\left(\int_{x_0}^x \mathrm{tr}A(\xi) \textrm{d}\xi\right)
-$$ - -for all x and x0 in I. - -This example illustrates how Liouville's formula can help to find the general solution of a first-order system of homogeneous linear differential equations. Consider -$$
-y'=\underbrace{\begin{pmatrix}1&-1/x\\1+x&-1\end{pmatrix}}_{=A(x)}y
-$$ - -on the open interval I = (0, ∞). Assume that the easy solution -$$
-y(x)=\begin{pmatrix}1\\x\end{pmatrix},\qquad x\in I,
-$$ - -is already found. Let -$$
-y(x)=\begin{pmatrix}y_1(x)\\y_2(x)\end{pmatrix}
-$$ - -denote another solution; then -$$
-\Phi(x)=\begin{pmatrix}y_1(x)&1\\y_2(x)&x\end{pmatrix},\qquad x\in I,
-$$ - -is a square-matrix-valued solution of the above differential equation. Since the trace of A(x) is zero for all x ∈ I, Liouville's formula implies that the determinant -$$
-\det\Phi(x) = x y_1(x) - y_2(x) = c_1
-$$ - -is actually a constant independent of x. Writing down the first component of the differential equation for y, we obtain using this constancy that -$$
-y'_1(x)=y_1(x)-\frac{y_2(x)}x=\frac{xy_1(x)-y_2(x)}x=\frac{c_1}x,\qquad x\in I.
-$$ - -Therefore, by integration, we see that -$$
-y_1(x)=c_1\ln x+c_2,\qquad x\in I,
-$$ - -involving the natural logarithm and the constant of integration c2. Solving the determinant equation for y2(x) and substituting for y1(x) gives -$$
-y_2(x)=xy_1(x)-c_1=c_1x\ln x+c_2x-c_1,\qquad x\in I,
-$$ - -which is the general solution for y. With the special choice c1 = 0 and c2 = 1 we recover the easy solution we started with; the choice c1 = 1 and c2 = 0 yields a linearly independent solution. Therefore, -$$
-\Phi(x)=\begin{pmatrix}\ln x&1\\x\ln x-1&x\end{pmatrix},\qquad x\in I,
-$$ - -is a so-called fundamental solution of the system.
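The example can also be verified symbolically. The following short SymPy script (illustrative, not from the original article) checks that the fundamental solution above satisfies Φ' = AΦ and that, since tr A = 0, its determinant is the constant 1, exactly as Liouville's formula predicts:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
A = sp.Matrix([[1, -1/x], [1 + x, -1]])
Phi = sp.Matrix([[sp.log(x), 1], [x*sp.log(x) - 1, x]])

# Phi is a matrix-valued solution: Phi' = A * Phi
assert sp.simplify(Phi.diff(x) - A * Phi) == sp.zeros(2, 2)

# tr A = 0, so Liouville's formula forces det Phi to be constant on I = (0, oo)
assert sp.simplify(A.trace()) == 0
assert sp.simplify(Phi.det()) == 1
print("Liouville's formula verified for this example")
```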
- -We omit the argument x for brevity. By the Leibniz formula for determinants, the derivative of the determinant of $\Phi = (\Phi_{i,j})_{i,j \in \{1,\ldots,n\}}$ can be calculated by differentiating one row at a time and taking the sum, i.e. -$$
-(\det\Phi)'=\sum_{i=1}^n\det\begin{pmatrix}
-\Phi_{1,1}&\Phi_{1,2}&\cdots&\Phi_{1,n}\\
-\vdots&\vdots&&\vdots\\
-\Phi'_{i,1}&\Phi'_{i,2}&\cdots&\Phi'_{i,n}\\
-\vdots&\vdots&&\vdots\\
-\Phi_{n,1}&\Phi_{n,2}&\cdots&\Phi_{n,n}
-\end{pmatrix}.
-$$ - -Since the matrix-valued solution Φ satisfies the equation Φ' = AΦ, we have for every entry of the matrix Φ' -$$
-\Phi'_{i,k}=\sum_{j=1}^n a_{i,j}\Phi_{j,k},\qquad i,k\in\{1,\ldots,n\},
-$$ - -or for the entire row -$$
-(\Phi'_{i,1},\dots,\Phi'_{i,n})
-=\sum_{j=1}^n a_{i,j}(\Phi_{j,1},\ldots,\Phi_{j,n}), \qquad i\in\{1,\ldots,n\}.
-$$ - -When we subtract from the i-th row the linear combination -$$
-\sum_{\scriptstyle j=1\atop\scriptstyle j \neq i}^n a_{i,j}(\Phi_{j,1},\ldots,\Phi_{j,n}),
-$$ - -of all the other rows, then the value of the determinant remains unchanged, hence -$$
-\det\begin{pmatrix}
-\Phi_{1,1}&\Phi_{1,2}&\cdots&\Phi_{1,n}\\
-\vdots&\vdots&&\vdots\\
-\Phi'_{i,1}&\Phi'_{i,2}&\cdots&\Phi'_{i,n}\\
-\vdots&\vdots&&\vdots\\
-\Phi_{n,1}&\Phi_{n,2}&\cdots&\Phi_{n,n}
-\end{pmatrix}
-=\det\begin{pmatrix}
-\Phi_{1,1}&\Phi_{1,2}&\cdots&\Phi_{1,n}\\
-\vdots&\vdots&&\vdots\\
-a_{i,i}\Phi_{i,1}&a_{i,i}\Phi_{i,2}&\cdots&a_{i,i}\Phi_{i,n}\\
-\vdots&\vdots&&\vdots\\
-\Phi_{n,1}&\Phi_{n,2}&\cdots&\Phi_{n,n}
-\end{pmatrix}
-=a_{i,i}\det\Phi
-$$ - -for every i ∈ {1, . . . , n} by the linearity of the determinant with respect to every row. Hence -$$
-(\det\Phi)'=\sum_{i=1}^n a_{i,i}\det\Phi=\mathrm{tr}A\det\Phi
-$$ - -by the row-differentiation formula above and the definition of the trace. It remains to show that this representation of the derivative implies Liouville's formula. - -Fix x0 ∈ I. Since the trace of A is assumed to be a continuous function on I, it is bounded on every closed and bounded subinterval of I and therefore integrable, hence -$$
-g(x):=\det\Phi(x) \exp\left(-\int_{x_0}^x \mathrm{tr}A(\xi) \textrm{d}\xi\right), \qquad x\in I,
-$$ - -is a well-defined function. Differentiating both sides, using the product rule, the chain rule, the derivative of the exponential function and the fundamental theorem of calculus, we obtain -$$
-g'(x)=\bigl((\det\Phi(x))'-\det\Phi(x)\mathrm{tr}A(x)\bigr)\exp\left(-\int_{x_0}^x \mathrm{tr}A(\xi) \textrm{d}\xi\right)=0,\qquad x\in I,
-$$ - -due to the differential equation for $(\det\Phi)'$ derived above. Therefore, g has to be constant on I, because otherwise we would obtain a contradiction to the mean value theorem (applied separately to the real and imaginary part in the complex-valued case). Since g(x0) = det Φ(x0), Liouville's formula follows by solving the definition of g for det Φ(x). diff --git a/wiki/wikipedia/2792.txt b/wiki/wikipedia/2792.txt deleted file mode 100644 index a434387fc27d5ddbe71220753295b7c8637d25ab..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2792.txt +++ /dev/null @@ -1,27 +0,0 @@ -Ore's theorem is a result in graph theory proved in 1960 by Norwegian mathematician Øystein Ore. It gives a sufficient condition for a graph to be Hamiltonian, essentially stating that a graph with sufficiently many edges must contain a Hamilton cycle. Specifically, the theorem considers the sum of the degrees of pairs of non-adjacent vertices: if every such pair has a sum that at least equals the total number of vertices in the graph, then the graph is Hamiltonian.
- -Let G be a (finite and simple) graph with n ≥ 3 vertices. We denote by deg v the degree of a vertex v in G, i.e. the number of edges in G incident to v. Then, Ore's theorem states that if - -(∗) deg v + deg w ≥ n for every pair of distinct non-adjacent vertices v and w of G, - -then G is Hamiltonian. - -It is equivalent to show that every non-Hamiltonian graph G does not obey condition (∗). Accordingly, let G be a graph on n ≥ 3 vertices that is not Hamiltonian, and let H be formed from G by adding edges one at a time that do not create a Hamiltonian cycle, until no more edges can be added. Let x and y be any two non-adjacent vertices in H. Then adding edge xy to H would create at least one new Hamiltonian cycle, and the edges other than xy in such a cycle must form a Hamiltonian path v_1v_2...v_n in H with x = v_1 and y = v_n. For each index i in the range 2 ≤ i ≤ n, consider the two possible edges in H from v_1 to v_i and from v_{i-1} to v_n. At most one of these two edges can be present in H, for otherwise the cycle v_1v_2...v_{i-1}v_nv_{n-1}...v_i would be a Hamiltonian cycle. Thus, the total number of edges incident to either v_1 or v_n is at most equal to the number of choices of i, which is n - 1. Therefore, H does not obey property (∗), which requires that this total number of edges (deg v_1 + deg v_n) be greater than or equal to n. Since the vertex degrees in G are at most equal to the degrees in H, it follows that G also does not obey property (∗). - -Palmer describes the following simple algorithm for constructing a Hamiltonian cycle in a graph meeting Ore's condition (a runnable sketch of the procedure is given at the end of this section). - -#Arrange the vertices arbitrarily into a cycle, ignoring adjacencies in the graph. - -#While the cycle contains two consecutive vertices v_i and v_{i+1} that are not adjacent in the graph, perform the following two steps: - -#*Search for an index j such that the four vertices v_i, v_{i+1}, v_j, and v_{j+1} are all distinct and such that the graph contains edges from v_i to v_j and from v_{j+1} to v_{i+1} - -#*Reverse the part of the cycle between v_{i+1} and v_j (inclusive). - -Each step increases the number of consecutive pairs in the cycle that are adjacent in the graph, by one or two pairs (depending on whether v_j and v_{j+1} are already adjacent), so the outer loop can only happen at most n times before the algorithm terminates, where n is the number of vertices in the given graph. By an argument similar to the one in the proof of the theorem, the desired index j must exist, or else the nonadjacent vertices v_i and v_{i+1} would have too small a total degree. Finding i and j, and reversing part of the cycle, can all be accomplished in time O(n). Therefore, the total time for the algorithm is O(n^2), matching the number of edges in the input graph. - -Ore's theorem is a generalization of Dirac's theorem that, when each vertex has degree at least n/2, the graph is Hamiltonian. For, if a graph meets Dirac's condition, then clearly each pair of vertices has degrees adding to at least n. - -In turn Ore's theorem is generalized by the Bondy–Chvátal theorem. One may define a closure operation on a graph in which, whenever two nonadjacent vertices have degrees adding to at least n, one adds an edge connecting them; if a graph meets the conditions of Ore's theorem, its closure is a complete graph. The Bondy–Chvátal theorem states that a graph is Hamiltonian if and only if its closure is Hamiltonian; since the complete graph is Hamiltonian, Ore's theorem is an immediate consequence.
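The following Python sketch of Palmer's procedure is an illustration under the stated assumptions (the graph is given as a dictionary mapping each vertex to its neighbour set; the function name is ours, not from the article):

```python
def ore_hamiltonian_cycle(adj):
    """Palmer's construction: repair an arbitrary cyclic arrangement of the
    vertices until every consecutive pair is an edge.  Assumes the input
    graph satisfies Ore's condition (*) and has n >= 3 vertices."""
    cycle = list(adj)
    n = len(cycle)
    while True:
        # find a consecutive pair in the cycle that is not an edge
        bad = next((i for i in range(n)
                    if cycle[(i + 1) % n] not in adj[cycle[i]]), None)
        if bad is None:
            return cycle  # every consecutive pair is adjacent: Hamiltonian cycle
        # rotate the cycle so the bad pair (u, v) sits at positions 0 and 1
        cycle = cycle[bad:] + cycle[:bad]
        u, v = cycle[0], cycle[1]
        # condition (*) guarantees some j with u ~ cycle[j] and v ~ cycle[j+1]
        j = next(j for j in range(2, n - 1)
                 if cycle[j] in adj[u] and cycle[j + 1] in adj[v])
        # reverse the stretch of the cycle between v and cycle[j], inclusive
        cycle[1:j + 1] = list(reversed(cycle[1:j + 1]))

# example: the complete bipartite graph K_{3,3} satisfies Ore's condition
K33 = {0: {3, 4, 5}, 1: {3, 4, 5}, 2: {3, 4, 5},
       3: {0, 1, 2}, 4: {0, 1, 2}, 5: {0, 1, 2}}
print(ore_hamiltonian_cycle(K33))  # e.g. [2, 4, 1, 5, 0, 3]
```

Each pass through the loop replaces the non-edge (u, v) by the two edges (u, cycle[j]) and (v, cycle[j+1]), which is exactly why the number of adjacent consecutive pairs increases and the loop terminates.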
Woodall found a version of Ore's theorem that applies to directed graphs. Suppose a digraph G has the property that, for every two vertices u and v, either there is an edge from u to v or the outdegree of u plus the indegree of v equals or exceeds the number of vertices in G. Then, according to Woodall's theorem, G contains a directed Hamiltonian cycle. Ore's theorem may be obtained from Woodall's theorem by replacing every edge in a given undirected graph by a pair of directed edges. A closely related theorem by Meyniel states that an n-vertex strongly connected digraph with the property that, for every two nonadjacent vertices u and v, the total number of edges incident to u or v is at least 2n - 1, must be Hamiltonian. - -Ore's theorem may also be strengthened to give a stronger conclusion than Hamiltonicity as a consequence of the degree condition in the theorem. Specifically, every graph satisfying the conditions of Ore's theorem is either a regular complete bipartite graph or is pancyclic. diff --git a/wiki/wikipedia/2793.txt b/wiki/wikipedia/2793.txt deleted file mode 100644 index 3e030a0718a792480e727215642ec856334210e8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2793.txt +++ /dev/null @@ -1,19 +0,0 @@ -The knower paradox is a paradox belonging to the family of the paradoxes of self-reference (like the liar paradox). Informally, it consists in considering a sentence saying of itself that it is not known, and apparently deriving the contradiction that such a sentence is both not known and known. - -A version of the paradox occurs already in chapter 9 of Thomas Bradwardine's Insolubilia. In the wake of the modern discussion of the paradoxes of self-reference, the paradox was rediscovered (and dubbed with its current name) by the US logicians and philosophers David Kaplan and Richard Montague, and is now considered an important paradox in the area. The paradox bears connections with other epistemic paradoxes such as the hangman paradox and the paradox of knowability. - -The notion of knowledge seems to be governed by the principle that knowledge is factive: - -(KF): If the sentence ' P ' is known, then P - -(where we use single quotes to refer to the linguistic expression inside the quotes and where 'is known' is short for 'is known by someone at some time'). It also seems to be governed by the principle that proof yields knowledge: - -(PK): If the sentence ' P ' has been proved, then ' P ' is known - -Consider however the sentence: - -(K): (K) is not known - -Assume for reductio ad absurdum that (K) is known. Then, by (KF), (K) is not known, and so, by reductio ad absurdum, (K) is not known. Now, this conclusion, which is the sentence (K) itself, depends on no undischarged assumptions, and so has just been proved. Therefore, by (PK), we can further conclude that (K) is known. Putting the two conclusions together, we have the contradiction that (K) is both not known and known. - -Since, given the diagonal lemma, every sufficiently strong theory will have to accept something like (K), absurdity can only be avoided either by rejecting one of the two principles of knowledge (KF) and (PK) or by rejecting classical logic (which validates the reasoning from (KF) and (PK) to absurdity). The first kind of strategy subdivides into several alternatives. One approach takes its inspiration from the hierarchy of truth predicates familiar from Alfred Tarski's work on the Liar paradox and constructs a similar hierarchy of knowledge predicates.
Another approach upholds a single knowledge predicate but takes the paradox to call into doubt either the unrestricted validity of (PK) or at least knowledge of (KF). The second kind of strategy also subdivides into several alternatives. One approach rejects the law of excluded middle and consequently reductio ad absurdum. Another approach upholds reductio ad absurdum and thus accepts the conclusion that (K) is both not known and known, thereby rejecting the law of non-contradiction. diff --git a/wiki/wikipedia/2794.txt b/wiki/wikipedia/2794.txt deleted file mode 100644 index a2453c10ae292429006cbdfd28638a9910aea05a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2794.txt +++ /dev/null @@ -1,30 +0,0 @@ -In applied mathematics, the Kaplan–Yorke conjecture concerns the dimension of an attractor, using Lyapunov exponents. By arranging the Lyapunov exponents in order from largest to smallest $\lambda_1\geq\lambda_2\geq\dots\geq\lambda_n$, let j be the largest index for which -$$
- \sum_{i=1}^j \lambda_i \geqslant 0
-$$ - -and -$$
- \sum_{i=1}^{j+1} \lambda_i < 0.
-$$ - -Then the conjecture is that the dimension of the attractor is -$$
- D=j+\frac{\sum_{i=1}^j\lambda_i}{\left|\lambda_{j+1}\right|}.
-$$ - -This idea is used for the definition of the Lyapunov dimension. - -Especially for chaotic systems, the Kaplan–Yorke conjecture is a useful tool for estimating the fractal dimension - -and the Hausdorff dimension of the corresponding attractor. - -* The Hénon map with parameters a = 1.4 and b = 0.3 has the ordered Lyapunov exponents $\lambda_1=0.603$ and $\lambda_2=-2.34$. In this case, we find j = 1 and the dimension formula reduces to -$$
-D=j+\frac{\lambda_1}{|\lambda_2|}=1+\frac{0.603}{2.34}=1.26.
-$$ - -* The Lorenz system shows chaotic behavior at the parameter values $\sigma=16$, $\rho=45.92$ and $\beta=4.0$. The resulting Lyapunov exponents are {2.16, 0.00, −32.4}. Noting that j = 2, we find -$$
-D=2+\frac{2.16 + 0.00}{32.4}=2.07.
-$$
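The conjectured dimension is easy to evaluate numerically. Here is a minimal Python helper (an illustration, not part of the original article) that reproduces the two examples above:

```python
def kaplan_yorke_dimension(exponents):
    """Lyapunov (Kaplan-Yorke) dimension from a list of Lyapunov exponents."""
    lam = sorted(exponents, reverse=True)   # lambda_1 >= lambda_2 >= ...
    partial = 0.0
    for j, l in enumerate(lam):
        if partial + l < 0:
            # j is the largest index with a nonnegative partial sum
            return j + partial / abs(l)
        partial += l
    return float(len(lam))  # all partial sums nonnegative

print(kaplan_yorke_dimension([0.603, -2.34]))       # Henon map: ~1.26
print(kaplan_yorke_dimension([2.16, 0.00, -32.4]))  # Lorenz system: ~2.07
```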
diff --git a/wiki/wikipedia/2795.txt b/wiki/wikipedia/2795.txt deleted file mode 100644 index 6525b8133d7be424dc7a30d412180fb4f1d505c8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2795.txt +++ /dev/null @@ -1,18 +0,0 @@ -In number theory, Vantieghem's theorem is a primality criterion. It states that a natural number n (n ≥ 3) is prime if and only if -$$
- \prod_{1 \leq k \leq n-1} \left( 2^k - 1 \right) \equiv n \mod \left( 2^n - 1 \right).
-$$ - -Similarly, n is prime if and only if the following congruence for polynomials in X holds: -$$
- \prod_{1 \leq k \leq n-1} \left( X^k - 1 \right) \equiv n - \left( X^n - 1 \right)/\left( X - 1 \right) \mod \left( X^n - 1 \right)
-$$ - -or: -$$
- \prod_{1 \leq k \leq n-1} \left( X^k - 1 \right) \equiv n \mod \left( X^n - 1 \right)/\left( X - 1 \right).
-$$ - -Let n = 7 and form the product 1*3*7*15*31*63 = 615195. Since 615195 ≡ 7 mod 127, 7 is prime. - -Let n = 9 and form the product 1*3*7*15*31*63*127*255 = 19923090075. Since 19923090075 ≡ 301 mod 511 and 301 ≠ 9, 9 is composite. diff --git a/wiki/wikipedia/2796.txt b/wiki/wikipedia/2796.txt deleted file mode 100644 index 03baaf5f9f491110e33be3875bc3940fb88cd1f3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2796.txt +++ /dev/null @@ -1,27 +0,0 @@ -Layered graph drawing or hierarchical graph drawing is a type of graph drawing in which the vertices of a directed graph are drawn in horizontal rows or layers with the edges generally directed downwards. It is also known as Sugiyama-style graph drawing after Kozo Sugiyama, who first developed this drawing style. - -The ideal form for a layered drawing would be an upward planar drawing, in which all edges are oriented in a consistent direction and no pairs of edges cross. However, graphs often contain cycles, minimizing the number of inconsistently oriented edges is NP-hard, and minimizing the number of crossings is also NP-hard, so layered graph drawing systems typically apply a sequence of heuristics that reduce these types of flaws in the drawing without guaranteeing to find a drawing with the minimum number of flaws. - -The construction of a layered graph drawing proceeds in a sequence of steps: - -*If the input graph is not already a directed acyclic graph, a set of edges is identified whose reversal will make it acyclic. Finding the smallest possible set of such edges is the NP-complete feedback arc set problem, so often greedy heuristics are used here in place of exact optimization algorithms. Integer programming procedures, although more time-consuming, may be used to combine edge length minimization with limits on the number of vertices per level. - -*Edges that span multiple layers are replaced by paths of dummy vertices so that, after this step, each edge in the expanded graph connects two vertices on adjacent layers of the drawing. - -*The vertices within each layer are ordered so as to reduce the number of crossings between consecutive layers; since this, too, is NP-hard, it is typical to resort to heuristics, such as placing each vertex at a position determined by finding the average or median of the positions of its neighbors on the previous level and then swapping adjacent pairs as long as that improves the number of crossings. Alternatively, the ordering of the vertices in one layer at a time may be chosen using an algorithm that is fixed-parameter tractable in the number of crossings between it and the previous layer. - -*Each vertex is assigned a coordinate within its layer, consistent with the permutation calculated in the previous step. Considerations in this step include placing dummy nodes on a line between their two neighbors to prevent unnecessary bends, and placing each vertex in a centered position with respect to its neighbors. Sugiyama's original work proposed a quadratic programming formulation of this step; a later method of Brandes and Köpf takes linear time and guarantees at most two bends per edge. - -*The edges reversed in the first step of the algorithm are returned to their original orientations, the dummy vertices are removed from the graph and the vertices and edges are drawn. To avoid intersections between vertices and edges, edges that span multiple layers of the drawing may be drawn as polygonal chains or spline curves passing through each of the positions assigned to the dummy vertices along the edge. - -In its simplest form, layered graph drawing algorithms may require O(mn) time in graphs with n vertices and m edges, because of the large number of dummy vertices that may be created.
However, for some variants of the algorithm, it is possible to simulate the effect of the dummy vertices without actually constructing them explicitly, leading to a near-linear time implementation. - -The "dot" tool in Graphviz produces layered drawings. A layered graph drawing algorithm is also included in Microsoft Automatic Graph Layout and in Tulip. - -Although typically drawn with vertices in rows and edges proceeding from top to bottom, layered graph drawings may instead be drawn with vertices in columns and edges proceeding from left to right. The same algorithmic framework has also been applied to radial layouts in which the graphs are arranged in concentric circles around some starting node and to three-dimensional layered drawings of graphs. - -In layered graph drawings with many long edges, edge clutter may be reduced by grouping sets of edges into bundles and routing them together through the same set of dummy vertices. Similarly, for drawings with many edges crossing between pairs of consecutive layers, the edges in maximal bipartite subgraphs may be grouped into confluent bundles. - -Drawings in which the vertices are arranged in layers may be constructed by algorithms that do not follow Sugiyama's framework. For instance, it is possible to tell whether an undirected graph has a drawing with at most k crossings, using h layers, in an amount of time that is polynomial for any fixed choice of k and h, using the fact that the graphs that have drawings of this type have bounded pathwidth. - -For layered drawings of concept lattices, a hybrid approach combining Sugiyama's framework with additive methods (in which each vertex represents a set and the position of the vertex is a sum of vectors representing elements in the set) may be used. In this hybrid approach, the vertex permutation and coordinate assignment phases of the algorithm are replaced by a single phase in which the horizontal position of each vertex is chosen as a sum of scalars representing the elements for that vertex. - -Layered graph drawing methods have also been used to provide an initial placement for force-directed graph drawing algorithms. diff --git a/wiki/wikipedia/2797.txt b/wiki/wikipedia/2797.txt deleted file mode 100644 index be1277a57d84e0a5977beb49c4c9970c3887a884..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2797.txt +++ /dev/null @@ -1,293 +0,0 @@ -(Figure caption: E.P. Wigner (1902–1995), ForMemRS, first proved the theorem bearing his name. It was a key step towards the modern classification scheme of particle types, according to which particle types are partly characterized by the representation of the Lorentz group under which they transform. The Lorentz group is a symmetry group of every relativistic quantum field theory. Wigner's early work laid the ground for what many physicists came to call the group theory disease in quantum mechanics - or as Hermann Weyl (co-responsible) puts it in the preface to the second edition of his book on group theory and quantum mechanics, "It has been rumored that the group pest is gradually being cut out from quantum mechanics. This is certainly not true…") - -Wigner's theorem, proved by Eugene Wigner in 1931, is a cornerstone of the mathematical formulation of quantum mechanics. The theorem specifies how physical symmetries such as rotations, translations, and CPT are represented on the Hilbert space of states. - -The physical states in a quantum theory are represented by unit vectors in Hilbert space up to a phase factor, i.e.
by the complex line or ray the vector spans. In addition, by the Born rule the absolute value of the unit vectors' inner product, or equivalently the cosine squared of the angle between the lines the vectors span, corresponds to the transition probability. Ray space, in mathematics known as projective Hilbert space, is the space of all unit vectors in Hilbert space up to the equivalence relation of differing by a phase factor. By Wigner's theorem, any transformation of ray space that preserves the absolute value of the inner products can be represented by a unitary or antiunitary transformation of Hilbert space, which is unique up to a phase factor. As a consequence, the representation of a symmetry group on ray space can be lifted to a projective representation or sometimes even an ordinary representation on Hilbert space. - -It is a postulate of quantum mechanics that vectors in Hilbert space that are nonzero scalar multiples of each other represent the same pure state. A ray belonging to the vector $\Psi \in H \setminus \{0\}$ is the complex line through the origin -$$
-\underline{\Psi} = \left\{\lambda\Psi :\lambda \in \mathbb C\right\}
-$$. - -Two nonzero vectors $\Psi_1, \Psi_2$ define the same ray iff they differ by some nonzero complex number: $\Psi_1 = \lambda \Psi_2$, $\lambda \in \mathbb{C}^* = \mathbb C \setminus \{0\}$. - -Alternatively, we can consider a ray $\underline \Psi$ as the set of vectors with norm 1 that span the same line, a unit ray, obtained by intersecting the line $\underline \Psi$ with the unit sphere -$$
- SH = \{\Phi \in H \mid \|\Phi\|^2 = 1 \}
-$$. - -Two unit vectors $\Psi_1, \Psi_2$ then define the same unit ray $\underline{\Psi_1} = \underline{\Psi_2}$ iff they differ by a phase factor: $\Psi_1 = e^{i\alpha}\Psi_2$. - -This is the more usual picture in physics. - -The set of rays is in one-to-one correspondence with the set of unit rays and we can identify them. - -There is also a one-to-one correspondence between physical pure states $\rho$ and (unit) rays $\underline{\Phi}$ given by -$$
-\rho = P_{\Phi} = \frac{|\Phi\rangle\langle\Phi|}{\langle\Phi|\Phi\rangle}
-$$ - -where $P_{\Phi}$ is the orthogonal projection on the line $\underline{\Phi}$. In either interpretation, if $\Phi \in \underline{\Psi}$ or $P_{\Phi} = P_{\Psi}$ then $\Phi$ is a representative of $\underline{\Psi}$. - -The space of all rays is called ray space. It can be defined in several ways. One may define an equivalence relation $\approx$ on $H \setminus \{0\}$ by -$$
-\Psi \approx \Phi \Leftrightarrow \Psi = \lambda\Phi,\quad \lambda \in \mathbb{C} \smallsetminus \{0\},
-$$ and define ray space as -$$
-\mathbb P H = H \smallsetminus \{0\} / {\approx}.
-$$ - -Alternatively, one may define the equivalence relation ≅ on the sphere $SH$ by $\Psi \cong \Phi \Leftrightarrow \Psi = e^{i\alpha}\Phi$ for some real α. The unit ray space $\mathbb P H$ is an incarnation of ray space defined (making no notational distinction with ray space) as the set of equivalence classes -$$
-\mathbb P H = SH / {\cong}.
-$$ - -A third equivalent definition of ray space is as pure state ray space, i.e. as density matrices that are orthogonal projections of rank 1: -$$
-\mathbb P H = \{P\in B(H) \mid P^2 = P = P^\dagger, \operatorname{tr}(P) = 1 \}.
-$$ - -Each of these definitions makes it clear that ray space is nothing but another name for projective Hilbert space. - -If $\dim(H) = N$ is finite, $\mathbb{P}H$ has real dimension $2N - 2$.
- -In fact, $\mathbb P H$ is a compact complex manifold of dimension $N - 1$, which (by choosing a basis) is isomorphic to the projective space $ \mathbb{C}\mathbb{P}^{N-1} = \mathbb{P}(\mathbb{C}^N)$. E.g. the Bloch sphere -$$
- \mathbb{P}(\lambda_1 |+\rangle + \lambda_2 |-\rangle, \ (\lambda_1, \lambda_2) \in \mathbb{C}^2 \setminus \{0\})
-$$ - -is isomorphic to the Riemann sphere $\mathbb{C}\mathbb{P}^1$. - -Ray space (i.e. projective space) takes a little getting used to, but is a very well studied object that predates quantum mechanics, going back to the study of perspective by Renaissance artists. It is not a vector space with well-defined linear combinations of rays. However, for every two vectors $\Psi_1, \Psi_2$ and ratio of complex numbers $(\lambda_1 : \lambda_2)$ (i.e. element of $\mathbb{C}\mathbb{P}^1$) there is a well-defined ray $\underline{\lambda_1\Psi_1 + \lambda_2\Psi_2}$. Moreover, for distinct rays $\underline{\Psi}_1, \underline{\Psi}_2$ (i.e. linearly independent lines) there is a projective line of rays of the form $\underline{\lambda_1\Psi_1 + \lambda_2\Psi_2}$ in $\mathbb PH$ (all 1-dimensional complex lines in the 2-dimensional complex plane spanned by $\Psi_1$ and $\Psi_2$ in $H$). - -The Hilbert space structure on $H$ defines additional structure on ray space. Define the ray correlation (or ray product) - -\underline{\Psi} \cdot \underline{\Phi} = \frac{\left|\left\langle\Psi, \Phi\right\rangle\right|}{\|\Phi\|\|\Psi\|} - -= \sqrt{\mathrm{tr}(P_{\Psi}P_{\Phi})}, - -where $\langle , \rangle$ is the Hilbert space inner product, and $\Psi, \Phi$ are representatives of $\underline{\Phi}$ and $\underline{\Psi}$. Note that the right-hand side is independent of the choice of representatives. - -The physical significance of this definition is that according to the Born rule, another postulate of quantum mechanics, the transition probability between normalised states $\Psi$ and $\Phi$ in Hilbert space is given by -$$
-P(\Psi \rightarrow \Phi) = |\langle\Psi, \Phi\rangle|^2 = \left(\underline{\Psi} \cdot \underline{\Phi}\right)^2
-$$ - -i.e. we can define Born's rule on ray space by -$$
-P(\underline{\Psi} \to \underline{\Phi}) := \left(\underline{\Psi} \cdot \underline{\Phi}\right)^2.
-$$ - -Geometrically, we can define an angle $\theta$ with $0 \le \theta\le \pi/2$ between the lines $\underline{\Phi}$ and $\underline{\Psi}$ by $\cos(\theta) = (\underline{\Psi} \cdot \underline{\Phi})$. The angle then turns out to satisfy the triangle inequality and defines a metric structure on ray space which comes from a Riemannian metric, the Fubini–Study metric. - -Loosely speaking, a symmetry transformation is a change in which "nothing happens" or a "change in our point of view" that does not change the outcomes of possible experiments. For example, translating a system in a homogeneous environment should have no qualitative effect on the outcomes of experiments made on the system. Likewise for rotating a system in an isotropic environment. This becomes even clearer when one considers the mathematically equivalent passive transformations, i.e. simply changes of coordinates that leave the system itself untouched. Usually, the domain and range Hilbert spaces are the same. An exception would be (in a non-relativistic theory) the Hilbert space of electron states that is subjected to a charge conjugation transformation. In this case the electron states are mapped to the Hilbert space of positron states and vice versa.
However, this means that the symmetry acts on the direct sum of the Hilbert spaces. - -A transformation of a physical system is a transformation of states, hence mathematically a transformation, not of the Hilbert space, but of its ray space. Hence, in quantum mechanics, a transformation of a physical system gives rise to a bijective ray transformation $T$ - -\begin{align} - -T: \mathbb{P}H &\to \mathbb{P}H\\ - -\underline{\Psi} &\mapsto T\underline{\Psi}.\\ - -\end{align} - -Since the composition of two physical transformations and the reversal of a physical transformation are also physical transformations, the set of all ray transformations so obtained is a group acting on $\mathbb{P}H$. Not all bijections of $\mathbb{P}H$ are permissible as symmetry transformations, however. Physical transformations must preserve Born's rule. - -For a physical transformation, the transition probabilities in the transformed and untransformed systems should be preserved: -$$
-P(\underline{\Psi} \rightarrow \underline{\Phi}) = \left(\underline{\Psi} \cdot \underline{\Phi}\right)^2 = \left(T\underline{\Psi} \cdot T\underline{\Phi}\right)^2 = P\left(T\Psi \rightarrow T\Phi \right)
-$$ - -A bijective ray transformation $\mathbb{P}H \to \mathbb{P}H$ is called a symmetry transformation iff -$$
-T \underline{\Psi} \cdot T\underline{\Phi} = \underline{\Psi} \cdot \underline{\Phi},\quad \forall \underline\Psi, \underline\Phi \in \mathbb{P}H.
-$$ - -A geometric interpretation is that a symmetry transformation is an isometry of ray space. - -Some facts about symmetry transformations that can be verified using the definition: - -* The product of two symmetry transformations, i.e. two symmetry transformations applied in succession, is a symmetry transformation. - -* Any symmetry transformation has an inverse. - -* The identity transformation is a symmetry transformation. - -* Multiplication of symmetry transformations is associative. - -The set of symmetry transformations thus forms a group, the symmetry group of the system. Some important frequently occurring subgroups in the symmetry group of a system are realizations of - -* The symmetric group with its subgroups. This is important for the exchange of particle labels. - -* The Poincaré group. It encodes the fundamental symmetries of spacetime. - -* Internal symmetry groups like SU(2) and SU(3). They describe so-called internal symmetries, like isospin and color charge, peculiar to quantum mechanical systems. - -These groups are also referred to as symmetry groups of the system. Some preliminary definitions are needed to state the theorem. A transformation $U: H \to K $ of Hilbert spaces is unitary if it is bijective and -$$
-\langle U \Psi, U \Phi\rangle = \langle \Psi, \Phi \rangle
-$$. - -Since - -\langle U(\lambda_1\Psi_1 + \lambda_2\Psi_2), \Phi' \rangle = \langle \lambda_1 \Psi_1 + \lambda_2 \Psi_2, U^{-1}\Phi' \rangle - -= \lambda_1 \langle\Psi_1, U^{-1}\Phi'\rangle + \lambda_2\langle \Psi_2, U^{-1}\Phi' \rangle - -= \lambda_1 \langle U\Psi_1, \Phi'\rangle + \lambda_2 \langle U\Psi_2, \Phi'\rangle - -= \langle \lambda_1 U\Psi_1 + \lambda_2 U\Psi_2, \Phi' \rangle - -for all $\Phi' \in K$, a unitary transformation is automatically linear and $U^\dagger = U^{-1}$. - -Likewise, a transformation $A:H \to K$ is antiunitary if it is bijective and -$$
-\langle A \Psi, A \Phi\rangle = \langle\Psi, \Phi\rangle^* = \langle\Phi, \Psi\rangle.
-$$ - -As above, an antiunitary transformation is necessarily antilinear. Both variants are real linear and additive.
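Both cases are easy to check numerically. The following NumPy sketch (illustrative; the unitary is an arbitrary random one obtained from a QR decomposition) verifies that a unitary map preserves inner products while composing it with complex conjugation gives an antiunitary map that conjugates them; either way the ray correlation is unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

# a random unitary U on C^4, and the antiunitary A = U o (complex conjugation)
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(X)
A = lambda psi: U @ np.conj(psi)

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
phi = rng.normal(size=4) + 1j * rng.normal(size=4)

inner = np.vdot(psi, phi)  # <psi, phi>, conjugate-linear in the first slot
assert np.allclose(np.vdot(U @ psi, U @ phi), inner)         # unitary: preserved
assert np.allclose(np.vdot(A(psi), A(phi)), np.conj(inner))  # antiunitary: conjugated
# in both cases the ray correlation |<psi, phi>| / (||psi|| ||phi||) is preserved
```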
- 
-Given a unitary transformation $U:H \to K$ of Hilbert spaces, define
-
-\begin{align}
-
-T_U: \mathbb{P}H &\to \mathbb{P}K \\
-
-\underline{\Psi} &\mapsto \underline{U\Psi}\\
-
-\end{align}
-
-This is a symmetry transformation since
-$$
-T\underline{\Psi} \cdot T\underline{\Phi} = \frac{ \left|\langle U\Psi, U\Phi \rangle\right|}{\|U\Psi\|\|U\Phi\|} = \frac{\left|\langle\Psi, \Phi\rangle\right|}{\|\Psi\|\|\Phi\|} = \underline{\Psi} \cdot \underline{\Phi}.
-$$
-
-In the same way an antiunitary transformation of Hilbert space induces a symmetry transformation. One says that a transformation $U:H \to K $ of Hilbert spaces is compatible with the transformation $T:\mathbb{P}H \to \mathbb{P}K$ of ray spaces if $T = T_U$
-
-or equivalently
-$$
-U\Psi \in T \underline \Psi
-$$
-
-for all $\Psi \in H \setminus \{0\}$.
-
-Transformations of Hilbert space induced by either a unitary linear transformation or an antiunitary antilinear operator are obviously compatible with the transformations of ray space they induce, as described.
-
-Wigner's theorem states a converse of the above:
-
-Wigner's theorem (1931): If $H$ and $K$ are Hilbert spaces and if
-$$
-T:\mathbb{P}H \to \mathbb{P}K
-$$
-
-is a symmetry transformation, then there exists a unitary or antiunitary transformation $V: H \to K$ which is compatible with $T$. If $\dim(H) \ge 2$, $V$ is either unitary or antiunitary. If $\dim(H) = 1$ (and $\mathbb{P}H$ and $\mathbb{P}K$ consist of a single point), all unitary transformations $U : H \to K$ and all antiunitary transformations $A: H \to K$ are compatible with $T$. If $V_1$ and $V_2$ are both compatible with $T$ then $V_1 = e^{i\alpha}V_2$ for some $\alpha \in \mathbb{R}$.
-
-Proofs can be found in Bargmann and Weinberg.
-
-Antiunitary and antilinear transformations are less prominent in physics. They are all related to a reversal of the direction of the flow of time.
-
-Remark 1: The significance of the uniqueness part of the theorem is that it specifies the degree of uniqueness of the representation on $H$. For example, one might be tempted to believe that
-$$
-V\Psi = Ue^{i\alpha(\Psi)}\Psi, \alpha(\Psi) \in \mathbb{R}, \Psi \in H \quad (\text{wrong unless } \alpha(\Psi) \text{ is const.})
-$$
-
-would be admissible, with $\alpha(\Psi) \ne \alpha(\Phi)$ for $\langle \Psi, \Phi \rangle = 0$, but this is not the case according to the theorem. In fact such a $V$ would not be additive.
-
-Remark 2: Whether $T$ must be represented by a unitary or antiunitary operator is determined by topology. If $\dim_{\mathbb{C}}(\mathbb{P}H) = \dim_{\mathbb{C}}(\mathbb{P}K) \ge 1$, the second cohomology $H^2(\mathbb{P}H)$ has a unique generator $c_{\mathbb{P}H}$ such that for one (equivalently for every) complex projective line $L \subset \mathbb{P}H$, one has $ c_{\mathbb{P}H} \cap [L] = \deg_L(c_{\mathbb{P}H}|_L) = 1 $. Since $T$ is a homeomorphism, $T^*c_{\mathbb{P}K}$ also generates $H^2(\mathbb{P}H)$ and so we have $T^*c_{\mathbb{P}K} = \pm c_{\mathbb{P}H}$. If $U:H \to K$ is unitary, then $T_U^*c_{\mathbb{P}K} = c_{\mathbb{P}H}$ while if $A:H \to K$ is antilinear then $T_A^*c_{\mathbb{P}K} = -c_{\mathbb{P}H}$.
-
-If G is a symmetry group (in this latter sense of being embedded as a subgroup of the symmetry group of the system acting on ray space), and if f, g, h ∈ G with fg = h, then
-$$
-T(f)T(g) = T(h),
-$$
-
-where the T are ray transformations.
From the uniqueness part of Wigner's theorem, one has for the compatible representatives U,
-$$
-U(f)U(g) = \omega(f, g)U(fg) = e^{i\xi(f, g)}U(fg),
-$$
-
-where ω(f, g) is a phase factor.
-
-The function ω is called a 2-cocycle or Schur multiplier. A map U:G → GL(V) satisfying the above relation for some vector space V is called a projective representation or a ray representation. If ω(f, g) = 1, then it is called a representation.
-
-One should note that the terminology differs between mathematics and physics. In the mathematical literature, the term projective representation has a slightly different meaning, but the term as presented here enters as an ingredient and the mathematics per se is of course the same. If the realization of the symmetry group, g → T(g), is given in terms of action on the space of unit rays S = PH, then it is a projective representation G → PGL(H) in the mathematical sense, while its representative on Hilbert space is a projective representation G → GL(H) in the physical sense.
-
-Applying the last relation (several times) to the product fgh and appealing to the known associativity of multiplication of operators on H, one finds
-
-\begin{align}
-
-\omega(f, g)\omega(fg, h) &= \omega(g, h)\omega(f, gh), \\
-
-\xi(f, g) + \xi(fg, h) &= \xi(g, h) + \xi(f, gh) \quad (\operatorname{mod} 2\pi).
-
-\end{align}
-
-They also satisfy
-
-\begin{align}
-
-\omega(g, e) &= \omega(e, g) = 1, \\
-
-\xi(g, e) &= \xi(e, g) = 0 \quad (\operatorname{mod} 2\pi), \\
-
-\omega\left(g, g^{-1}\right) &= \omega(g^{-1}, g), \\
-
-\xi\left(g, g^{-1}\right) &= \xi(g^{-1}, g) = 0 \quad (\operatorname{mod} 2\pi). \\
-
-\end{align}
-
-Upon redefinition of the phases,
-$$
-U(g) \mapsto \hat{U}(g) = \eta(g)U(g) = e^{i\zeta(g)}U(g),
-$$
-
-which is allowed by the last theorem, one finds
-
-\begin{align}
-
-\hat{\omega}(g, h) &= \omega(g, h)\eta(g)\eta(h)\eta(gh)^{-1},\\
-
-\hat{\xi}(g, h) &= \xi(g, h) + \zeta(g) + \zeta(h) - \zeta(gh) \quad (\operatorname{mod} 2\pi),
-
-\end{align}
-
-where the hatted quantities are defined by
-$$
-\hat{U}(f)\hat{U}(g) = \hat{\omega}(f, g)\hat{U}(fg) = e^{i\hat{\xi}(f,g)}\hat{U}(fg).
-$$
-
-The following rather technical theorems and many more can be found, with accessible proofs, in Bargmann.
-
-The freedom of choice of phases can be used to simplify the phase factors. For some groups the phase can be eliminated altogether.
-
-* Theorem: If G is semisimple and simply connected, then ω(g, h) = 1 is possible.
-
-In the case of the Lorentz group and its subgroup the rotation group SO(3), phases can, for projective representations, be chosen such that ω(g, h) = ± 1. For their respective universal covering groups, SL(2,C) and Spin(3), it is, according to the theorem, possible to have ω(g, h) = 1, i.e. they are proper representations.
-
-The study of redefinition of phases involves group cohomology. Two functions related as the hatted and non-hatted versions of ω above are said to be cohomologous. They belong to the same second cohomology class, i.e. they are represented by the same element in H2(G), the second cohomology group of G. If an element of H2(G) contains the trivial function ω = 1, then it is said to be trivial.
-
-* Theorem (Wigner 1939). The phase freedom can be used such that in some neighborhood of the identity the map g → U(g) is strongly continuous.
-
-* Theorem (Bargmann). In a sufficiently small neighborhood of e, the choice ω(g1, g2) ≡ 1 is possible for semisimple Lie groups (such as SO(n) and SO(3,1)) and for affine linear groups (in particular the Poincaré group).
More precisely, this is exactly the case when the second cohomology group H2(g) of the Lie algebra g of G is trivial. diff --git a/wiki/wikipedia/2798.txt b/wiki/wikipedia/2798.txt deleted file mode 100644 index d3fb74bde9790052494d93229b91c02cff509875..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2798.txt +++ /dev/null @@ -1,18 +0,0 @@ -In plane geometry the Blaschke–Lebesgue theorem states that the Reuleaux triangle has the least area of all curves of given constant width. In the form that every curve of a given width has area at least as large as the Reuleaux triangle, it is also known as the Blaschke–Lebesgue inequality. It is named after Wilhelm Blaschke and Henri Lebesgue, who published it separately in the early 20th century.
-
-The width of a convex set $K$ in the Euclidean plane is defined as the minimum distance between any two parallel lines that enclose it. The two minimum-distance lines are both necessarily tangent lines to $K$, on opposite sides. A curve of constant width is the boundary of a convex set with the property that, for every direction of parallel lines, the two tangent lines with that direction that are tangent to opposite sides of the curve are at a distance equal to the width. These curves include both the circle and the Reuleaux triangle, a curved triangle formed from arcs of three equal-radius circles each centered at a crossing point of the other two circles. The area enclosed by a Reuleaux triangle with width $w$ is
-$$
-\frac{1}{2}(\pi - \sqrt3)w^2 \approx 0.70477w^2.
-$$
-
-The Blaschke–Lebesgue theorem states that this is the unique minimum possible area of a curve of constant width, and the Blaschke–Lebesgue inequality states that every convex set of width $w$ has area at least this large, with equality only when the set is bounded by a Reuleaux triangle.
-
-The Blaschke–Lebesgue theorem was published independently in 1914 by Henri Lebesgue and in 1915 by Wilhelm Blaschke. Since their work, several other proofs have been published.
-
-The same theorem is also true in the hyperbolic plane. For any convex distance function on the plane (a distance defined as the norm of the vector difference of points, for any norm), an analogous theorem holds true, according to which the minimum-area curve of constant width is an intersection of three metric disks, each centered on a boundary point of the other two.
-
-The Blaschke–Lebesgue theorem has been used to provide an efficient strategy for generalizations of the game of Battleship, in which one player has a ship formed by intersecting the integer grid with a convex set and the other player, after having found one point on this ship, is aiming to determine its location using the fewest possible missed shots. For a ship with $n$ grid points, it is possible to bound the number of missed shots by $O(\log\log n)$.
-
-By the isoperimetric inequality, the curve of constant width in the Euclidean plane with the largest area is a circle. The perimeter of a curve of constant width $w$ is $\pi w$, regardless of its shape; this is Barbier's theorem.
-
-It is unknown which surfaces of constant width in three-dimensional space have the minimum volume. Bonnesen and Fenchel conjectured in 1934 that the minimizers are the two Meissner bodies obtained by rounding some of the edges of a Reuleaux tetrahedron, but this remains unproven.
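The constants above are quick to confirm. A small Python sketch (our own illustration) evaluates the Blaschke–Lebesgue minimum area against the circle, which by the isoperimetric inequality is the maximum, and Barbier's shape-independent perimeter:

```
from math import pi, sqrt

w = 1.0  # common constant width

reuleaux_area = 0.5 * (pi - sqrt(3)) * w**2  # Blaschke–Lebesgue minimum
circle_area = pi * w**2 / 4                  # isoperimetric maximum
barbier_perimeter = pi * w                   # same for every curve of width w

print(f"Reuleaux area ~ {reuleaux_area:.5f} * w^2  (coefficient ~0.70477)")
print(f"circle area   ~ {circle_area:.5f} * w^2  (coefficient ~0.78540)")
print(f"perimeter     ~ {barbier_perimeter:.5f} * w for both (Barbier)")
```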
diff --git a/wiki/wikipedia/2799.txt b/wiki/wikipedia/2799.txt deleted file mode 100644 index 687dc364131327ad5288d07004aa64599e90a1ad..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2799.txt +++ /dev/null @@ -1,13 +0,0 @@ -In mathematics, a number of fixed-point theorems in infinite-dimensional spaces generalise the Brouwer fixed-point theorem. They have applications, for example, to the proof of existence theorems for partial differential equations.
-
-The first result in the field was the Schauder fixed-point theorem, proved in 1930 by Juliusz Schauder (a previous result in a different vein, the Banach fixed-point theorem for contraction mappings in complete metric spaces, was proved in 1922; a constructive sketch of that case follows the list of theorems below). Quite a number of further results followed. One way in which fixed-point theorems of this kind have had a larger influence on mathematics as a whole is through attempts to carry over methods of algebraic topology, first proved for finite simplicial complexes, to spaces of infinite dimension. For example, the research of Jean Leray, who founded sheaf theory, came out of efforts to extend Schauder's work.
-
-
    Schauder fixed-point theorem: Let C be a nonempty closed convex subset of a Banach space V. If f : C → C is continuous with a compact image, then f has a fixed point.
    - -
    Tikhonov (Tychonoff) fixed-point theorem: Let V be a locally convex topological vector space. For any nonempty compact convex set X in V, any continuous function f : X → X has a fixed point.
    - -
    Browder fixed-point theorem: Let K be a nonempty closed bounded convex set in a uniformly convex Banach space. Then any non-expansive function f : K → K has a fixed point. (A function $f$ is called non-expansive if $\|f(x)-f(y)\|\leq \|x-y\| $ for each $x$ and $y$.)
    - -Other results include the Markov–Kakutani fixed-point theorem (1936-1938) and the Ryll-Nardzewski fixed-point theorem (1967) for continuous affine self-mappings of compact convex sets, as well as the Earle–Hamilton fixed-point theorem (1968) for holomorphic self-mappings of open domains. - -
    Kakutani fixed-point theorem: Every correspondence that maps a compact convex subset of a locally convex space into itself with a closed graph and convex nonempty images has a fixed point.
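The theorems above are non-constructive existence results. By contrast, the Banach fixed-point theorem for contractions, mentioned earlier, comes with an algorithm: iterate the map and the orbit converges to the unique fixed point. A minimal Python sketch (the example map is our own choice, not from the results above):

```
from math import cos

def banach_iterate(f, x0, tol=1e-12, max_iter=10_000):
    # Fixed-point iteration x_{n+1} = f(x_n); converges whenever f is a
    # contraction on a complete metric space containing the orbit of x0.
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter")

# cos is a contraction on [0, 1] (|cos'| <= sin 1 < 1), so the iteration
# converges to its unique fixed point there (the Dottie number, ~0.739085).
print(banach_iterate(cos, 1.0))
```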
diff --git a/wiki/wikipedia/28.txt b/wiki/wikipedia/28.txt deleted file mode 100644 index f23750edbc05e82a1b494c27b8179585b6908ea8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/28.txt +++ /dev/null @@ -1,53 +0,0 @@ -In mathematics, specifically order theory, a well-quasi-ordering or wqo is a quasi-ordering such that any infinite sequence of elements $x_0, x_1, x_2, \ldots$ from $X$ contains an increasing pair $x_i\le x_j$ with $i<j$. Equivalently, a wqo is a quasi-ordering admitting neither infinite strictly decreasing sequences ($x_0> x_1> x_2> \cdots$) nor infinite sequences of pairwise incomparable elements. Hence a quasi-order (X, ≤) is wqo if and only if (X, <) is well-founded and has no infinite antichains.
-
-* $(\N, \le)$, the set of natural numbers with standard ordering, is a well partial order (in fact, a well-order). However, $(\Z, \le)$, the set of positive and negative integers, is not a well-quasi-order, because it is not well-founded.
-
-* $(\N, |)$, the set of natural numbers ordered by divisibility, is not a well-quasi-order: the prime numbers are an infinite antichain.
-
-* $(\N^k, \le)$, the set of vectors of $k$ natural numbers (where $k$ is finite) with component-wise ordering, is a well partial order (Dickson's lemma; see the sketch following this list). More generally, if $(X, \le)$ is a well-quasi-order, then $(X^k,\le^k)$ is also a well-quasi-order for all $k$.
-
-* Let $X$ be an arbitrary finite set with at least two elements. The set $X^*$ of words over $X$ ordered lexicographically (as in a dictionary) is not a well-quasi-order because it contains the infinite decreasing sequence $b, ab, aab, aaab, \ldots$. Similarly, $X^*$ ordered by the prefix relation is not a well-quasi-order, because the previous sequence is an infinite antichain of this partial order. However, $X^*$ ordered by the subsequence relation is a well partial order. (If $X$ has only one element, these three partial orders are identical.)
-
-* More generally, $(X^*,\le)$, the set of finite $X$-sequences ordered by embedding is a well-quasi-order if and only if $(X, \le)$ is a well-quasi-order (Higman's lemma). Recall that one embeds a sequence $u$ into a sequence $v$ by finding a subsequence of $v$ that has the same length as $u$ and that dominates it term by term. When $(X,=)$ is an unordered set, $u\le v$ if and only if $u$ is a subsequence of $v$.
-
-* $(X^\omega,\le)$, the set of infinite sequences over a well-quasi-order $(X, \le)$, ordered by embedding, is not a well-quasi-order in general. That is, Higman's lemma does not carry over to infinite sequences. Better-quasi-orderings have been introduced to generalize Higman's lemma to sequences of arbitrary lengths.
-
-* Embedding between finite trees with nodes labeled by elements of a wqo $(X, \le)$ is a wqo (Kruskal's tree theorem).
-
-* Embedding between infinite trees with nodes labeled by elements of a wqo $(X, \le)$ is a wqo (Nash-Williams' theorem).
-
-* Embedding between countable scattered linear order types is a well-quasi-order (Laver's theorem).
-
-* Embedding between countable boolean algebras is a well-quasi-order. This follows from Laver's theorem and a theorem of Ketonen.
-
-* Finite graphs ordered by a notion of embedding called "graph minor" form a well-quasi-order (Robertson–Seymour theorem).
-
-* Graphs of finite tree-depth ordered by the induced subgraph relation form a well-quasi-order, as do the cographs ordered by induced subgraphs.
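Dickson's lemma, cited in the list above for $(\N^k, \le)$, is easy to probe empirically: in any long enough sequence of $k$-tuples of naturals, some earlier tuple is dominated componentwise by a later one. A small Python sketch (our own illustration; the parameters are arbitrary):

```
from itertools import combinations
import random

def increasing_pair(seq):
    # Return (i, j) with i < j and seq[i] <= seq[j] componentwise, else None.
    for i, j in combinations(range(len(seq)), 2):
        if all(a <= b for a, b in zip(seq[i], seq[j])):
            return i, j
    return None

random.seed(0)
# Dickson's lemma guarantees such a pair in every infinite sequence in N^k;
# in practice even short random sequences almost always contain one already.
seq = [tuple(random.randrange(100) for _ in range(3)) for _ in range(50)]
print(increasing_pair(seq))
```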
- 
-In practice, the wqo's one manipulates are quite often not orderings (see examples above), and the theory is technically smoother if we do not require antisymmetry, so it is built with wqo's as the basic notion. On the other hand, according to Milner 1985, no real gain in generality is obtained by considering quasi-orders rather than partial orders... it is simply more convenient to do so.
-
-Observe that a wpo is a wqo, and that a wqo gives rise to a wpo between equivalence classes induced by the kernel of the wqo. For example, if we order $\Z$ by divisibility, we end up with $n\equiv m$ if and only if $n=\pm m$, so that $(\Z,|)\approx(\N,|)$.
-
-If $(X, \le)$ is wqo then every infinite sequence $x_0, x_1, x_2, \ldots,$ contains an infinite increasing subsequence $x_{n_0} \le x_{n_1}\le x_{n_2} \le \cdots$ (with $n_0< n_1< n_2< \cdots$). Such a subsequence is sometimes called perfect.
-
-This can be proved by a Ramsey argument: given some sequence $(x_i)_i$, consider the set $I$ of indexes $i$ such that $x_i$ has no larger or equal $x_j$ to its right, i.e., with $i<j$. If $I$ were infinite, the subsequence extracted from $I$ would contradict the assumption that $X$ is wqo; so $I$ is finite, and any $x_n$ with $n$ larger than every index in $I$ can serve as the starting point of an infinite increasing subsequence.
-
-* Given a well-quasi-ordering $(X,\le)$, any sequence of upward-closed subsets $S_0 \subseteq S_1 \subseteq \cdots \subseteq X$ eventually stabilises: assuming the contrary $\forall i, \exists j > i, \exists x \in S_j \setminus S_i$, a contradiction is reached by extracting an infinite non-ascending subsequence.
-
-* Given a well-quasi-ordering $(X,\le)$, any subset $S$ of $X$ has a finite number of minimal elements with respect to $\le$, for otherwise the minimal elements of $S$ would constitute an infinite antichain. diff --git a/wiki/wikipedia/280.txt b/wiki/wikipedia/280.txt deleted file mode 100644 index 332d90d377bc72bf83cc17334b99b5bfe8fd8069..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/280.txt +++ /dev/null @@ -1,75 +0,0 @@ -In number theory, Bertrand's postulate is a theorem stating that for any integer $n > 3$, there always exists at least one prime number $p$ with
-$$
-n < p < 2n - 2.
-$$
-
-A less restrictive formulation is: for every $n > 1$ there is always at least one prime $p$ such that
-$$
-n < p < 2n.
-$$
-
-Another formulation, where $p_n$ is the $n$-th prime, is for $n \ge 1$
-$$
- p_{n+1} < 2p_n.
-$$
-
-This statement was first conjectured in 1845 by Joseph Bertrand (1822–1900). Bertrand himself verified his statement for all integers $2 \le n \le 3000000$.
-
-His conjecture was completely proved by Chebyshev (1821–1894) in 1852 and so the postulate is also called the Bertrand–Chebyshev theorem or Chebyshev's theorem. Chebyshev's theorem can also be stated as a relationship with $\pi(x)$, where $\pi(x)$ is the prime-counting function (number of primes less than or equal to $x $):
-$$
-\pi(x) - \pi\bigl(\tfrac{x}{2}\bigr) \ge 1
-$$, for all $x \ge 2$.
-
-The prime number theorem (PNT) implies that the number of primes up to x is roughly x/ln(x), so if we replace x with 2x then we see the number of primes up to 2x is asymptotically twice the number of primes up to x (the terms ln(2x) and ln(x) are asymptotically equivalent). Therefore, the number of primes between n and 2n is roughly n/ln(n) when n is large, and so in particular there are many more primes in this interval than are guaranteed by Bertrand's Postulate. So Bertrand's postulate is comparatively weaker than the PNT. But PNT is a deep theorem, while Bertrand's Postulate can be stated more memorably and proved more easily, and also makes precise claims about what happens for small values of n. (In addition, Chebyshev's theorem was proved before the PNT and so has historical interest.)
-
-The similar and still unsolved Legendre's conjecture asks whether for every n > 1, there is a prime p such that $n^2 < p < (n + 1)^2$.
Again we expect that there will be not just one but many primes between $n^2$ and $(n + 1)^2$, but in this case the PNT doesn't help: the number of primes up to $x^2$ is asymptotic to $x^2/\ln(x^2)$ while the number of primes up to $(x + 1)^2$ is asymptotic to $(x + 1)^2/\ln((x + 1)^2)$, which is asymptotic to the estimate on primes up to $x^2$. So unlike the previous case of x and 2x we don't get a proof of Legendre's conjecture even for all large n. Error estimates on the PNT are not (indeed, cannot be) sufficient to prove the existence of even one prime in this interval.
-
-In 1919, Ramanujan (1887–1920) used properties of the Gamma function to give a simpler proof. The short paper included a generalization of the postulate, from which would later arise the concept of Ramanujan primes. Further generalizations of Ramanujan primes have also been discovered; for instance, there is a proof that
-$$
-2p_{i-n} > p_i \text{ for } i>k \text{ where } k=\pi(p_k)=\pi(R_n) ,
-$$
-
-with $p_k$ the $k$th prime and $R_n$ the $n$th Ramanujan prime.
-
-Other generalizations of Bertrand's Postulate have been obtained using elementary methods. (In the following, n runs through the set of positive integers.) In 2006, M. El Bachraoui proved that there exists a prime between 2n and 3n. In 1973, Denis Hanson proved that there exists a prime between 3n and 4n. Furthermore, in 2011, Andy Loo proved that as n tends to infinity, the number of primes between 3n and 4n also goes to infinity, thereby generalizing Erdős' and Ramanujan's results (see the section on Erdős' theorems below). The first result is obtained with elementary methods. The second one is based on analytic bounds for the factorial function.
-
-Bertrand's postulate was proposed for applications to permutation groups. Sylvester (1814–1897) generalized the weaker statement as follows: the product of k consecutive integers greater than k is divisible by a prime greater than k. Bertrand's (weaker) postulate follows from this by taking k = n, and considering the k numbers n + 1, n + 2, up to and including n + k = 2n, where n > 1. According to Sylvester's generalization, one of these numbers has a prime factor greater than k. Since all these numbers are less than 2(k + 1), the number with a prime factor greater than k has only one prime factor, and thus is a prime. Note that 2n is not prime, and thus indeed we now know there exists a prime p with n < p < 2n.
-
-In 1932, Erdős (1913–1996) also published a simpler proof using binomial coefficients and the Chebyshev function ϑ, defined as:
-$$
- \vartheta(x) = \sum_{p=2}^x \ln (p)
-$$
-
-where p ≤ x runs over primes. See proof of Bertrand's postulate for the details.
-
-Erdős proved in 1934 that for any positive integer k, there is a natural number N such that for all n > N, there are at least k primes between n and 2n. An equivalent statement had been proved in 1919 by Ramanujan (see Ramanujan prime).
-
-It follows from the prime number theorem that for any real $\varepsilon > 0$ there is an $n_0 > 0$ such that for all $n > n_0$ there is a prime $p$ such that $n < p < (1 + \varepsilon) n$. It can be shown, for instance, that
-$$
-\lim_{n \to \infty}\frac{\pi((1+\varepsilon)n)-\pi(n)}{n/\log n}=\varepsilon,
-$$
-
-which implies that $ \pi (( 1 + \varepsilon ) n) - \pi (n) $ goes to infinity (and, in particular, is greater than 1 for sufficiently large $n$).
-
-Non-asymptotic bounds have also been proved. In 1952, Jitsuro Nagura proved that for $n \ge 25$ there is always a prime between $n$ and $\bigl(1+\tfrac{1}{5} \bigr) n$.
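Both Bertrand's postulate and Nagura's refinement are easy to spot-check for small $n$ with a naive primality test. A Python sketch (our own; the open-interval convention and the cutoff 10,000 are our choices):

```
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def prime_in_open_interval(lo, hi):
    # True if some prime p satisfies lo < p < hi (integer endpoints).
    return any(is_prime(p) for p in range(lo + 1, hi))

# Bertrand's postulate: a prime in (n, 2n) for every n > 1.
assert all(prime_in_open_interval(n, 2 * n) for n in range(2, 10_000))

# Nagura (1952): a prime in (n, 6n/5) for every n >= 25; -(-6*n//5) is ceil(6n/5).
assert all(prime_in_open_interval(n, -(-6 * n // 5)) for n in range(25, 10_000))
print("all checks passed")
```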
- 
-In 1976, Lowell Schoenfeld showed that for $n \ge 2010760$, there is always a prime $p$ in the open interval $n < p < \bigl(1+\tfrac{1}{16597} \bigr) n$.
-
-In his 1998 doctoral thesis, Pierre Dusart improved the above result, showing that for $k \ge 463$,
-$$
-p_{k+1} \le \left( 1 + \frac{1}{2 \ln^2{p_k}} \right) p_k
-$$,
-
-and in particular for $x \ge 3275$, there exists a prime $p$ in the interval $x < p \le \left( 1 + \frac{1}{ 2 \ln^2{x} } \right) x$.
-
-In 2010 Pierre Dusart proved that for $x \ge 396738$ there is at least one prime $p$ in the interval $x < p \le \left( 1 + \frac{1}{ 25 \ln^2{x} } \right) x$.
-
-In 2016, Pierre Dusart improved his result from 2010, showing (Proposition 5.4) that, if $x \ge 89693$, there is at least one prime $p$ in the interval $x < p \le \left( 1 + \frac{1}{ \ln^3{x} } \right) x$. He also shows (Corollary 5.5) that, for $x \ge 468991632$, there is at least one prime $p$ in the interval $x < p \le \left( 1 + \frac{1}{ 5000 \ln^2{x} } \right) x$.
-
-Baker, Harman and Pintz proved that there is a prime in the interval $[x-x^{0.525},x]$ for all sufficiently large $x$.
-
-Dudek proved that, for all $n \ge e^{e^{33.3}}$, there is at least one prime between $n^3$ and $(n + 1)^3$.
-
-*The sequence of primes, along with 1, is a complete sequence; any positive integer can be written as a sum of primes (and 1) using each at most once.
-
-*The only harmonic number that is an integer is the number 1. diff --git a/wiki/wikipedia/2800.txt b/wiki/wikipedia/2800.txt deleted file mode 100644 index 9fe69d22f2ab533ee71a07cf0b5510e5c9c0933a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2800.txt +++ /dev/null @@ -1,204 +0,0 @@ -In logic, the law of excluded middle (or the principle of excluded middle) states that for every proposition, either this proposition or its negation is true. It is one of the so-called three laws of thought, along with the law of noncontradiction and the law of identity. However, no system of logic is built on just these laws, and none of these laws provide inference rules, such as modus ponens or De Morgan's laws.
-
-The law is also known as the law (or principle) of the excluded third, in Latin principium tertii exclusi. Another Latin designation for this law is tertium non datur: "no third [possibility] is given". It is a tautology.
-
-The principle should not be confused with the semantical principle of bivalence, which states that every proposition is either true or false. The principle of bivalence always implies the law of excluded middle, while the converse is not always true. A commonly cited counterexample uses statements unprovable now, but provable in the future to show that the law of excluded middle may apply when the principle of bivalence fails.
-
-The earliest known formulation is in Aristotle's discussion of the principle of non-contradiction, first proposed in On Interpretation, where he says that of two contradictory propositions (i.e. where one proposition is the negation of the other) one must be true, and the other false. He also states it as a principle in the Metaphysics book 3, saying that it is necessary in every case to affirm or deny, and that it is impossible that there should be anything between the two parts of a contradiction.
- -Aristotle wrote that ambiguity can arise from the use of ambiguous names, but cannot exist in the facts themselves: - -Aristotle's assertion that "it will not be possible to be and not to be the same thing", which would be written in propositional logic as ¬(P ∧ ¬P), is a statement modern logicians could call the law of excluded middle (P ∨ ¬P), as distribution of the negation of Aristotle's assertion makes them equivalent, regardless that the former claims that no statement is both true and false, while the latter requires that any statement is either true or false. - -But Aristotle also writes, "since it is impossible that contradictories should be at the same time true of the same thing, obviously contraries also cannot belong at the same time to the same thing" (Book IV, CH 6, p. 531). He then proposes that "there cannot be an intermediate between contradictories, but of one subject we must either affirm or deny any one predicate" (Book IV, CH 7, p. 531). In the context of Aristotle's traditional logic, this is a remarkably precise statement of the law of excluded middle, P ∨ ¬P. - -Also in On Interpretation, Aristotle seems to deny the law of excluded middle in the case of future contingents, in his discussion on the sea battle. - -The principle was stated as a theorem of propositional logic by Russell and Whitehead in Principia Mathematica as: -$$ -\mathbf{*2\cdot11}. \ \ \vdash . \ p \ \vee \thicksim p -$$. - -So just what is "truth" and "falsehood"? At the opening PM quickly announces some definitions: - -This is not much help. But later, in a much deeper discussion ("Definition and systematic ambiguity of Truth and Falsehood" Chapter II part III, p. 41 ff), PM defines truth and falsehood in terms of a relationship between the "a" and the "b" and the "percipient". For example "This 'a' is 'b'" (e.g. "This 'object a' is 'red'") really means "'object a' is a sense-datum" and "'red' is a sense-datum", and they "stand in relation" to one another and in relation to "I". Thus what we really mean is: "I perceive that 'This object a is red'" and this is an undeniable-by-3rd-party "truth". - -PM further defines a distinction between a "sense-datum" and a "sensation": - -Russell reiterated his distinction between "sense-datum" and "sensation" in his book The Problems of Philosophy (1912), published at the same time as PM (1910–1913): - -Russell further described his reasoning behind his definitions of "truth" and "falsehood" in the same book (Chapter XII, Truth and Falsehood). - -From the law of excluded middle, formula ✸2.1 in Principia Mathematica, Whitehead and Russell derive some of the most powerful tools in the logician's argumentation toolkit. (In Principia Mathematica, formulas and propositions are identified by a leading asterisk and two numbers, such as "✸2.1".) - -✸2.1 ~p ∨ p "This is the Law of excluded middle" (PM, p. 101). - -The proof of ✸2.1 is roughly as follows: "primitive idea" 1.08 defines p → q = ~p ∨ q. Substituting p for q in this rule yields p → p = ~p ∨ p. Since p → p is true (this is Theorem 2.08, which is proved separately), then ~p ∨ p must be true. - -✸2.11 p ∨ ~p (Permutation of the assertions is allowed by axiom 1.4)
    - -✸2.12 p → ~(~p) (Principle of double negation, part 1: if "this rose is red" is true then it's not true that "'this rose is not-red' is true".)
    - -✸2.13 p ∨ ~{~(~p)} (Lemma together with 2.12 used to derive 2.14)
    - -✸2.14 ~(~p) → p (Principle of double negation, part 2)
    - -✸2.15 (~p → q) → (~q → p) (One of the four "Principles of transposition". Similar to 1.03, 1.16 and 1.17. A very long demonstration was required here.)
    - -✸2.16 (p → q) → (~q → ~p) (If it's true that "If this rose is red then this pig flies" then it's true that "If this pig doesn't fly then this rose isn't red.")
    - -✸2.17 ( ~p → ~q ) → (q → p) (Another of the "Principles of transposition".)
    - -✸2.18 (~p → p) → p (Called "The complement of reductio ad absurdum. It states that a proposition which follows from the hypothesis of its own falsehood is true" (PM, pp. 103–104).) - -Most of these theorems—in particular ✸2.1, ✸2.11, and ✸2.14—are rejected by intuitionism. These tools are recast into another form that Kolmogorov cites as "Hilbert's four axioms of implication" and "Hilbert's two axioms of negation" (Kolmogorov in van Heijenoort, p. 335). - -Propositions ✸2.12 and ✸2.14, "double negation": - -The intuitionist writings of L. E. J. Brouwer refer to what he calls "the principle of the reciprocity of the multiple species, that is, the principle that for every system the correctness of a property follows from the impossibility of the impossibility of this property" (Brouwer, ibid, p. 335). - -This principle is commonly called "the principle of double negation" (PM, pp. 101–102). From the law of excluded middle (✸2.1 and ✸2.11), PM derives principle ✸2.12 immediately. We substitute ~p for p in 2.11 to yield ~p ∨ ~(~p), and by the definition of implication (i.e. 1.01 p → q = ~p ∨ q) then ~p ∨ ~(~p)= p → ~(~p). QED (The derivation of 2.14 is a bit more involved.) - -It is correct, at least for bivalent logic—i.e. it can be seen with a Karnaugh map—that this law removes "the middle" of the inclusive-or used in his law (3). And this is the point of Reichenbach's demonstration that some believe the exclusive-or should take the place of the inclusive-or. - -About this issue (in admittedly very technical terms) Reichenbach observes: - -The tertium non datur - -29. (x)[f(x) ∨ ~f(x)] - -is not exhaustive in its major terms and is therefore an inflated formula. This fact may perhaps explain why some people consider it unreasonable to write (29) with the inclusive-'or', and want to have it written with the sign of the exclusive-'or' - -30. (x)[f(x) ⊕ ~f(x)], where the symbol "⊕" signifies exclusive-or - -in which form it would be fully exhaustive and therefore nomological in the narrower sense. (Reichenbach, p. 376) - -In line (30) the "(x)" means "for all" or "for every", a form used by Russell and Reichenbach; today the symbolism is usually $\forall$ x. Thus an example of the expression would look like this: - -* (pig): (Flies(pig) ⊕ ~Flies(pig)) - -* (For all instances of "pig" seen and unseen): ("Pig does fly" or "Pig does not fly" but not both simultaneously) - -From the late 1800s through the 1930s, a bitter, persistent debate raged between Hilbert and his followers versus Hermann Weyl and L. E. J. Brouwer. Brouwer's philosophy, called intuitionism, started in earnest with Leopold Kronecker in the late 1800s. - -Hilbert intensely disliked Kronecker's ideas: - -The debate had a profound effect on Hilbert. Reid indicates that Hilbert's second problem (one of Hilbert's problems from the Second International Conference in Paris in 1900) evolved from this debate (italics in the original): - -In his second problem [Hilbert] had asked for a mathematical proof of the consistency of the axioms of the arithmetic of real numbers. - -To show the significance of this problem, he added the following observation: - -"If contradictory attributes be assigned to a concept, I say that mathematically the concept does not exist" (Reid p. 71) - -Thus Hilbert was saying: "If p and ~p are both shown to be true, then p does not exist", and was thereby invoking the law of excluded middle cast into the form of the law of contradiction. 
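In two-valued logic the facts cited above — the Karnaugh-map check, double negation, and Reichenbach's exclusive-or variant (30) — can be confirmed mechanically. A tiny Python sketch of the truth tables (our own illustration):

```
for p in (True, False):
    assert p or not p         # law of excluded middle, p ∨ ~p
    assert not (p and not p)  # law of noncontradiction, ~(p ∧ ~p)
    assert p != (not p)       # Reichenbach's exclusive-or variant, p ⊕ ~p

# Double negation (✸2.12 and ✸2.14 combined): p ↔ ~(~p).
assert all(p == (not (not p)) for p in (True, False))
print("tautologies hold in two-valued logic")
```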
- -The rancorous debate continued through the early 1900s into the 1920s; in 1927 Brouwer complained about "polemicizing against it [intuitionism] in sneering tones" (Brouwer in van Heijenoort, p. 492). But the debate was fertile: it resulted in Principia Mathematica (1910–1913), and that work gave a precise definition to the law of excluded middle, and all this provided an intellectual setting and the tools necessary for the mathematicians of the early 20th century: - -Brouwer reduced the debate to the use of proofs designed from "negative" or "non-existence" versus "constructive" proof: - -According to Brouwer, a statement that an object exists having a given property means that, and is only proved, when a method is known which in principle at least will enable such an object to be found or constructed... - -Hilbert naturally disagreed. - -"pure existence proofs have been the most important landmarks in the historical development of our science," he maintained. (Reid p. 155) - -Brouwer ... refused to accept the logical principle of the excluded middle... His argument was the following: - -"Suppose that A is the statement "There exists a member of the set S having the property P." If the set is finite, it is possible—in principle—to examine each member of S and determine whether there is a member of S with the property P or that every member of S lacks the property P. For finite sets, therefore, Brouwer accepted the principle of the excluded middle as valid. He refused to accept it for infinite sets because if the set S is infinite, we cannot—even in principle—examine each member of the set. If, during the course of our examination, we find a member of the set with the property P, the first alternative is substantiated; but if we never find such a member, the second alternative is still not substantiated. - -Since mathematical theorems are often proved by establishing that the negation would involve us in a contradiction, this third possibility which Brouwer suggested would throw into question many of the mathematical statements currently accepted. - -"Taking the Principle of the Excluded Middle from the mathematician," Hilbert said, "is the same as ... prohibiting the boxer the use of his fists." - -"The possible loss did not seem to bother Weyl... Brouwer's program was the coming thing, he insisted to his friends in Zürich." (Reid, p. 149)}} - -In his lecture in 1941 at Yale and the subsequent paper Gödel proposed a solution: "that the negation of a universal proposition was to be understood as asserting the existence ... of a counterexample" (Dawson, p. 157)) - -Gödel's approach to the law of excluded middle was to assert that objections against "the use of 'impredicative definitions'" "carried more weight" than "the law of excluded middle and related theorems of the propositional calculus" (Dawson p. 156). He proposed his "system Σ ... and he concluded by mentioning several applications of his interpretation. Among them were a proof of the consistency with intuitionistic logic of the principle ~ (∀A: (A ∨ ~A)) (despite the inconsistency of the assumption ∃ A: ~ (A ∨ ~A)" (Dawson, p. 157) - -The debate seemed to weaken: mathematicians, logicians and engineers continue to use the law of excluded middle (and double negation) in their daily work. - -The following highlights the deep mathematical and philosophic problem behind what it means to "know", and also helps elucidate what the "law" implies (i.e. what the law really means). 
The intuitionists' difficulties with the law emerge here: they do not want to accept as true implications drawn from that which is unverifiable (untestable, unknowable) or from the impossible or the false. (All quotes are from van Heijenoort, italics added).
-
-Brouwer offers his definition of "principle of excluded middle"; we see here also the issue of "testability":
-
-On the basis of the testability just mentioned, there hold, for properties conceived within a specific finite main system, the "principle of excluded middle", that is, the principle that for every system every property is either correct [richtig] or impossible, and in particular the principle of the reciprocity of the complementary species, that is, the principle that for every system the correctness of a property follows from the impossibility of the impossibility of this property. (335)
-
-Kolmogorov's definition cites Hilbert's two axioms of negation:
-
    1. A → (~A → B)
    2. (A → B) → { (~A → B) → B}
    - -Hilbert's first axiom of negation, "anything follows from the false", made its appearance only with the rise of symbolic logic, as did the first axiom of implication.... while... the axiom under consideration [axiom 5] asserts something about the consequences of something impossible: we have to accept B if the true judgment A is regarded as false... - -Hilbert's second axiom of negation expresses the principle of excluded middle. The principle is expressed here in the form in which is it used for derivations: if B follows from A as well as from ~A, then B is true. Its usual form, "every judgment is either true or false" is equivalent to that given above". - -From the first interpretation of negation, that is, the interdiction from regarding the judgment as true, it is impossible to obtain the certitude that the principle of excluded middle is true... Brouwer showed that in the case of such transfinite judgments the principle of excluded middle cannot be considered obvious - -footnote 9: "This is Leibniz's very simple formulation (see Nouveaux Essais, IV,2). The formulation "A is either B or not-B" has nothing to do with the logic of judgments. - -footnote 10: "Symbolically the second form is expressed thus - -A ∨ ~A - -where ∨ means "or". The equivalence of the two forms is easily proved (p. 421) - -For example, if P is the proposition: - -Socrates is mortal. - -then the law of excluded middle holds that the logical disjunction: - -Either Socrates is mortal, or it is not the case that Socrates is mortal. - -is true by virtue of its form alone. That is, the "middle" position, that Socrates is neither mortal nor not-mortal, is excluded by logic, and therefore either the first possibility (Socrates is mortal) or its negation (it is not the case that Socrates is mortal) must be true. - -An example of an argument that depends on the law of excluded middle follows. We seek to prove that - -there exist two irrational numbers $a$ and $b$ such that $a^b$ is rational. - -It is known that $\sqrt{2}$ is irrational (see proof). Consider the number -$$ -\sqrt{2}^{\sqrt{2}} -$$. - -Clearly (excluded middle) this number is either rational or irrational. If it is rational, the proof is complete, and -$$ -a=\sqrt{2} -$$ and $b=\sqrt{2}$. - -But if $\sqrt{2}^{\sqrt{2}}$ is irrational, then let -$$ -a=\sqrt{2}^{\sqrt{2}} -$$ and $b=\sqrt{2}$. - -Then -$$ -a^b = \left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}} = \sqrt{2}^{\left(\sqrt{2}\cdot\sqrt{2}\right)} = \sqrt{2}^2 = 2 -$$, - -and 2 is certainly rational. This concludes the proof. - -In the above argument, the assertion "this number is either rational or irrational" invokes the law of excluded middle. An intuitionist, for example, would not accept this argument without further support for that statement. This might come in the form of a proof that the number in question is in fact irrational (or rational, as the case may be); or a finite algorithm that could determine whether the number is rational. - -The above proof is an example of a non-constructive proof disallowed by intuitionists: - -{{quote|The proof is non-constructive because it doesn't give specific numbers $a$ and $b$ that satisfy the theorem but only two separate possibilities, one of which must work. (Actually $a=\sqrt{2}^{\sqrt{2}}$ is irrational but there is no known easy proof of that fact.) 
(Davis 2000:220)}} (Constructive proofs of the specific example above are not hard to produce; for example $a=\sqrt{2}$ and $b=\log_2 9$ are both easily shown to be irrational, and $a^b=3$; a proof allowed by intuitionists). - -By non-constructive Davis means that "a proof that there actually are mathematic entities satisfying certain conditions would not have to provide a method to exhibit explicitly the entities in question." (p. 85). Such proofs presume the existence of a totality that is complete, a notion disallowed by intuitionists when extended to the infinite—for them the infinite can never be completed: - -David Hilbert and Luitzen E. J. Brouwer both give examples of the law of excluded middle extended to the infinite. Hilbert's example: "the assertion that either there are only finitely many prime numbers or there are infinitely many" (quoted in Davis 2000:97); and Brouwer's: "Every mathematical species is either finite or infinite." (Brouwer 1923 in van Heijenoort 1967:336). In general, intuitionists allow the use of the law of excluded middle when it is confined to discourse over finite collections (sets), but not when it is used in discourse over infinite sets (e.g. the natural numbers). Thus intuitionists absolutely disallow the blanket assertion: "For all propositions P concerning infinite sets D: P or ~P" (Kleene 1952:48). - -Putative counterexamples to the law of excluded middle include the liar paradox or Quine's paradox. Certain resolutions of these paradoxes, particularly Graham Priest's dialetheism as formalised in LP, have the law of excluded middle as a theorem, but resolve out the Liar as both true and false. In this way, the law of excluded middle is true, but because truth itself, and therefore disjunction, is not exclusive, it says next to nothing if one of the disjuncts is paradoxical, or both true and false. - -Many modern logic systems replace the law of excluded middle with the concept of negation as failure. Instead of a proposition's being either true or false, a proposition is either true or not able to be proved true. These two dichotomies only differ in logical systems that are not complete. The principle of negation as failure is used as a foundation for autoepistemic logic, and is widely used in logic programming. In these systems, the programmer is free to assert the law of excluded middle as a true fact, but it is not built-in a priori into these systems. - -Mathematicians such as L. E. J. Brouwer and Arend Heyting have also contested the usefulness of the law of excluded middle in the context of modern mathematics. - -In modern mathematical logic, the excluded middle has been shown to result in possible self-contradiction. It is possible in logic to make well-constructed propositions that can be neither true nor false; a common example of this is the "Liar's paradox", the statement "this statement is false", which can itself be neither true nor false. The law of excluded middle still holds here as the negation of this statement "This statement is not false", can be assigned true. In set theory, such a self-referential paradox can be constructed by examining the set "the set of all sets that do not contain themselves". This set is unambiguously defined, but leads to a Russell's paradox: does the set contain, as one of its elements, itself? However, in the modern Zermelo–Fraenkel set theory, this type of contradiction is no longer admitted. - -Some systems of logic have different but analogous laws. 
For some finite n-valued logics, there is an analogous law called the law of excluded n+1th. If negation is cyclic and "∨" is a "max operator", then the law can be expressed in the object language by (P ∨ ~P ∨ ~~P ∨ ... ∨ ~...~P), where "~...~" represents n−1 negation signs and "∨ ... ∨" n−1 disjunction signs. It is easy to check that the sentence must receive at least one of the n truth values (and not a value that is not one of the n).
-
-Other systems reject the law entirely. diff --git a/wiki/wikipedia/2801.txt b/wiki/wikipedia/2801.txt deleted file mode 100644 index bcea92d27db52627eb9fed320fa1d1de06a1f442..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2801.txt +++ /dev/null @@ -1,35 +0,0 @@ -SpicyNodes was a system for displaying hierarchical data, in which a focus node displays detailed information, and the surrounding nodes represent related information (Focus + Context), with a layout based on radial maps. It has web (Flash) and mobile (iOS) implementations. It ceased operation on 1 January 2018.
-
-SpicyNodes displays a central node, orbited by related (child) nodes. Each child node can be linked to other child nodes. As the user navigates (changes focus) from node to node, a root path traces the path back to the home node. In a typical implementation, only child and ancestor nodes are displayed. When the user browses, nodes appear and disappear, and the layout rearranges to fit. It is a generic method, with uses ranging from dynamic poetry to mind mapping and concept mapping.
-
-* Visual browsing – Similar to other concept mapping tools, SpicyNodes allows authors to display visual thoughts and links between information, and publish an information map for users to browse.
-
-* Non-linear – Users can jump from node to node, or descend into a tree to find specific information. Since the number of nodes increases exponentially with the number of orbits, a user can find a piece of information in only N clicks/taps, while navigating a space of X^N nodes, where X is the average number of nodes per orbit. Conversely, node layouts are inefficient for reading contiguous pieces of content in a linear manner.
-
-* Displays a subset – Only a limited number of nodes can fit on a typical screen at once, which requires a large enough screen to fit the nodes, and means it is usually not possible to display all the nodes simultaneously.
-
-* Balanced branches – Layouts only make sense if there are balanced branches with fewer than two dozen child nodes. A typical implementation requires an average of 2-10 linked/child nodes per node. Too few, and the layout becomes a string of pearls. Too many, and the nodes do not fit.
-
-SpicyNodes is a radial tree layout engine, modified using force-based algorithms, bias controls, and a variable pivot point. It also uses an approach similar to hyperbolic trees to reduce sizes far from the focus node. Key aspects of the method are publicly described. The layout is adaptive, changing as the user clicks from node to node, to minimize clutter. Nodes can contain any content (formatted text, images, videos, etc.) or links to other nodes or content. There is a "focus" node, and users change focus from node to node.
-
-The algorithm was developed by Michael Douma and colleagues at IDEA.org, starting in 2005. The layout algorithm is based on the work of Yee and his associates, and the underlying mechanics have been further described in papers and talks at conferences on Information visualization, on Museums and the Web, and on distance education.
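The core layout idea is straightforward to sketch. The Python toy below is our own simplification, not SpicyNodes' actual engine: it places nodes on concentric orbits around the focus, each subtree confined to its parent's angular wedge, whereas the production method additionally applies the force-based adjustments, bias controls, and hyperbolic-style shrinking described above.

```
from math import cos, sin, tau

def radial_layout(children, root, radius_step=1.0):
    # Place nodes on concentric rings around the root; each subtree is
    # confined to the angular wedge allotted to its parent.
    pos = {root: (0.0, 0.0)}

    def place(node, lo, hi, depth):
        kids = children.get(node, [])
        if not kids:
            return
        span = (hi - lo) / len(kids)
        for k, kid in enumerate(kids):
            angle = lo + (k + 0.5) * span
            r = depth * radius_step
            pos[kid] = (r * cos(angle), r * sin(angle))
            place(kid, lo + k * span, lo + (k + 1) * span, depth + 1)

    place(root, 0.0, tau, 1)
    return pos

tree = {"home": ["a", "b", "c"], "a": ["a1", "a2"], "b": ["b1"]}
for node, (x, y) in radial_layout(tree, "home").items():
    print(f"{node}: ({x:+.2f}, {y:+.2f})")
```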
- 
-Early implementations include:
-
-(a) Genealogical browser of the Greek Gods released March 2006 in the WebExhibits online museum. Also used as a teaching resource in 'Mythology' taught by Mr. Russell Rice.
-
-(b) A master's thesis in 2007.
-
-(c) Virtual exhibit navigation, for three online exhibits (e.g., Daylight Saving Time, Calendars, Poetry forms) released in 2008 in the WebExhibits online museum.
-
-* Web-based – A web-based platform for authoring and publishing node maps, available as software as a service, built on Adobe Flash and provided in both free and paid versions by the original development team at IDEA.org; launched 2009. It has an open API. Received a "Best Website for Teaching and Learning" award in 2011 from the American Association of School Librarians (AASL), and was voted one of #edchat's 35 Best Web 2.0 Classroom Tools in 2010.
-
-It has been used for presentations in professional conferences and meetings.
-
-There are third-party guides and reviews covering general usage and instructional design.
-
-The web implementation allows embedding in a blog, and can also be run as a form of slide show where each node corresponds to a slide.
-
-* Multitouch – The first multitouch implementation of SpicyNodes was part of the WikiNodes multitouch Wikipedia browser for the Apple iPad, launched in April 2011.
-
-For authoring, there are related mind mapping and concept mapping products, such as FreeMind. Typically these do not allow the end user to change focus from node to node. For display, there is analogous software for moving node to node, including: Visual Thesaurus from ThinkMap, TuneGlue, Lexipedia, and Prefuse Flare, and the Discovr apps. (The Discovr apps also use radial layouts, with a different layout algorithm that is primarily force-based.) diff --git a/wiki/wikipedia/2802.txt b/wiki/wikipedia/2802.txt deleted file mode 100644 index 724d6ba8a1288c649b1c677cf106c2f0b76d007e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2802.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, Jean-Pierre Serre conjectured the following statement regarding the Galois cohomology of a simply connected semisimple algebraic group. Namely, he conjectured that if G is such a group over a perfect field F of cohomological dimension at most 2, then the Galois cohomology set H1(F, G) is zero.
-
-A converse of the conjecture holds: if the field F is perfect and if the cohomology set H1(F, G) is zero for every semisimple simply connected algebraic group G then the p-cohomological dimension of F is at most 2 for every prime p.
-
-The conjecture holds in the case where F is a local field (such as a p-adic field) or a global field with no real embeddings (such as Q(√−1)). This is a special case of the Kneser–Harder–Chernousov Hasse Principle for algebraic groups over global fields. (Note that such fields do indeed have cohomological dimension at most 2.)
-
-The conjecture also holds when F is finitely generated over the complex numbers and has transcendence degree at most 2.
-
-The conjecture is also known to hold for certain groups G. For special linear groups, it is a consequence of the Merkurjev–Suslin theorem. Building on this result, the conjecture holds if G is a classical group. The conjecture also holds if G is one of certain kinds of exceptional group.
diff --git a/wiki/wikipedia/2803.txt b/wiki/wikipedia/2803.txt deleted file mode 100644 index 575ecb73f8b7033e0c389f88c1ca2876190dce53..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2803.txt +++ /dev/null @@ -1,166 +0,0 @@ -In geometry, an inscribed angle is the angle formed in the interior of a circle when two chords intersect on the circle. It can also be defined as the angle subtended at a point on the circle by two given points on the circle.
-
-Equivalently, an inscribed angle is defined by two chords of the circle sharing an endpoint.
-
-The inscribed angle theorem relates the measure of an inscribed angle to that of the central angle subtending the same arc.
-
-The inscribed angle theorem appears as Proposition 20 in Book 3 of Euclid's Elements.
-
-The inscribed angle theorem states that an angle θ inscribed in a circle is half of the central angle 2θ that subtends the same arc on the circle. Therefore, the angle does not change as its vertex is moved to different positions on the circle.
-
-Let O be the center of a circle. Choose two points on the circle, and call them V and A. Draw line VO and extend it past O so that it intersects the circle at point B, which is diametrically opposite the point V. Draw an angle whose vertex is point V and whose sides pass through points A and B.
-
-Draw line OA. Angle BOA is a central angle; call it θ. Lines OV and OA are both radii of the circle, so they have equal lengths. Therefore, triangle VOA is isosceles, so angle BVA (the inscribed angle) and angle VAO are equal; let each of them be denoted as ψ.
-
-Angles BOA and AOV add up to 180°, since line VB passing through O is a straight line. Therefore, angle AOV measures 180° - θ.
-
-It is known that the three angles of a triangle add up to 180°, and the three angles of triangle VOA are 180° - θ, ψ, and ψ.
-
-Therefore,
-$$
- 2 \psi + 180^\circ - \theta = 180^\circ.
-$$
-
-Subtract $(180^\circ - \theta)$ from both sides,
-$$
- 2 \psi = \theta,
-$$
-
-where θ is the central angle subtending arc AB and ψ is the inscribed angle subtending arc AB.
-
-Given a circle whose center is point O, choose three points V, C, and D on the circle. Draw lines VC and VD: angle DVC is an inscribed angle. Now draw line VO and extend it past point O so that it intersects the circle at point E. Angle DVC subtends arc DC on the circle.
-
-Suppose this arc includes point E within it. Point E is diametrically opposite to point V. Angles DVE and EVC are also inscribed angles, but both of these angles have one side which passes through the center of the circle, therefore the theorem from the above Part 1 can be applied to them.
-
-Therefore,
-$$
- \angle DVC = \angle DVE + \angle EVC.
-$$
-
-Then let
-$$
- \psi_0 = \angle DVC,
-$$
-$$
- \psi_1 = \angle DVE,
-$$
-$$
- \psi_2 = \angle EVC,
-$$
-
-so that
-$$
- \psi_0 = \psi_1 + \psi_2. \qquad \qquad (1)
-$$
-
-Draw lines OC and OD. Angle DOC is a central angle, but so are angles DOE and EOC, and
-$$
- \angle DOC = \angle DOE + \angle EOC.
-$$
-
-Let
-$$
- \theta_0 = \angle DOC,
-$$
-$$
- \theta_1 = \angle DOE,
-$$
-$$
- \theta_2 = \angle EOC,
-$$
-
-so that
-$$
- \theta_0 = \theta_1 + \theta_2. \qquad \qquad (2)
-$$
-
-From Part One we know that $ \theta_1 = 2 \psi_1 $ and that $ \theta_2 = 2 \psi_2 $. Combining these results with equation (2) yields
-$$
- \theta_0 = 2 \psi_1 + 2 \psi_2 = 2(\psi_1 + \psi_2)
-$$
-
-therefore, by equation (1),
-$$
- \theta_0 = 2 \psi_0. 
-$$
-
-The previous case can be extended to cover the case where the measure of the inscribed angle is the difference between two inscribed angles as discussed in the first part of this proof.
-
-Given a circle whose center is point O, choose three points V, C, and D on the circle. Draw lines VC and VD: angle DVC is an inscribed angle. Now draw line VO and extend it past point O so that it intersects the circle at point E. Angle DVC subtends arc DC on the circle.
-
-Suppose this arc does not include point E within it. Point E is diametrically opposite to point V. Angles EVD and EVC are also inscribed angles, but both of these angles have one side which passes through the center of the circle, therefore the theorem from the above Part 1 can be applied to them.
-
-Therefore,
-$$
- \angle DVC = \angle EVC - \angle EVD.
-$$
-
-Then let
-$$
- \psi_0 = \angle DVC,
-$$
-$$
- \psi_1 = \angle EVD,
-$$
-$$
- \psi_2 = \angle EVC,
-$$
-
-so that
-$$
- \psi_0 = \psi_2 - \psi_1. \qquad \qquad (3)
-$$
-
-Draw lines OC and OD. Angle DOC is a central angle, but so are angles EOD and EOC, and
-$$
- \angle DOC = \angle EOC - \angle EOD.
-$$
-
-Let
-$$
- \theta_0 = \angle DOC,
-$$
-$$
- \theta_1 = \angle EOD,
-$$
-$$
- \theta_2 = \angle EOC,
-$$
-
-so that
-$$
- \theta_0 = \theta_2 - \theta_1. \qquad \qquad (4)
-$$
-
-From Part One we know that $ \theta_1 = 2 \psi_1 $ and that $ \theta_2 = 2 \psi_2 $. Combining these results with equation (4) yields
-$$
- \theta_0 = 2 \psi_2 - 2 \psi_1
-$$
-
-therefore, by equation (3),
-$$
- \theta_0 = 2 \psi_0.
-$$
-
-By a similar argument, the angle between a chord and the tangent line at one of its intersection points equals half of the central angle subtended by the chord. See also Tangent lines to circles.
-
-The inscribed angle theorem is used in many proofs of elementary Euclidean geometry of the plane. A special case of the theorem is Thales' theorem, which states that the angle subtended by a diameter is always 90°, i.e., a right angle. As a consequence of the theorem, opposite angles of cyclic quadrilaterals sum to 180°; conversely, any quadrilateral for which this is true can be inscribed in a circle. As another example, the inscribed angle theorem is the basis for several theorems related to the power of a point with respect to a circle. Further, it allows one to prove that when two chords intersect in a circle, the products of the lengths of their pieces are equal.
-
-Inscribed angle theorems exist for ellipses, hyperbolas and parabolas, too. The essential differences are the measurements of an angle. (An angle is considered a pair of intersecting lines.)
-
-* Ellipse
-
-* Hyperbola
-
-* Parabola diff --git a/wiki/wikipedia/2804.txt b/wiki/wikipedia/2804.txt deleted file mode 100644 index 7da4c6faae1baae53a34b08ee006bd92127be3ab..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2804.txt +++ /dev/null @@ -1,41 +0,0 @@ -The Kalman–Yakubovich–Popov lemma is a result in system analysis and control theory which states: Given a number $\gamma > 0$, two $n$-vectors B, C and an $n \times n$ Hurwitz matrix A, if the pair $(A,B)$ is completely controllable, then a symmetric matrix P and a vector Q satisfying
-$$
-A^T P + P A = -Q Q^T
-$$
-$$
- P B-C = \sqrt{\gamma}Q
-$$
-
-exist if and only if
-$$
-\gamma+2 Re[C^T (j\omega I-A)^{-1}B]\ge 0 \quad \text{for all } \omega \in \R.
-$$
-
-Moreover, the set $\{x: x^T P x = 0\}$ is the unobservable subspace for the pair $(C,A)$.
-
-The lemma can be seen as a generalization of the Lyapunov equation in stability theory.
It establishes a relation between a linear matrix inequality involving the state space constructs A, B, C and a condition in the frequency domain. - -The Kalman–Popov–Yakubovich lemma was first formulated and proved in 1962 by Vladimir Andreevich Yakubovich, where it was stated for the strict frequency inequality. The case of nonstrict frequency inequality was published in 1963 by Rudolf E. Kálmán. In that paper the relation to solvability of the Lur'e equations was also established. Both papers considered scalar-input systems. The constraint on the control dimensionality was removed in 1964 by Gantmakher and Yakubovich, and independently by Vasile Mihai Popov. An extensive review of the topic can be found in the literature. - -Given $A \in \R^{n \times n}, B \in \R^{n \times m}, M = M^T \in \R^{(n+m) \times (n+m)}$ with $\det(j\omega I - A) \ne 0$ for all $\omega \in \R$ and $(A, B)$ controllable, the following are equivalent: - -
    1. for all $\omega \in \R \cup \{\infty\}$, -$$ - \left[\begin{matrix} (j\omega I - A)^{-1}B \\ I \end{matrix}\right]^* M \left[\begin{matrix} (j\omega I - A)^{-1}B \\ I \end{matrix}\right] \le 0 -$$ - -    2. there exists a matrix $P \in \R^{n \times n}$ such that $P = P^T$ and -$$ -M + \left[\begin{matrix} A^T P + PA & PB \\ B^T P & 0 \end{matrix}\right] \le 0. -$$ - -
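As an illustration of the equivalence (a minimal numeric sketch, not part of the original article: the system, the choice of M, and the use of numpy/cvxpy are all assumptions), one can sample condition 1 on a frequency grid and then search for the certificate P of condition 2 by treating the block inequality as a linear matrix inequality:

```python
# Hedged sketch: numerically probe both sides of the KYP equivalence
# for a small Hurwitz system. Assumes numpy and cvxpy (with an
# SDP-capable solver such as SCS) are installed.
import numpy as np
import cvxpy as cp

n, m = 2, 1
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])          # Hurwitz: eigenvalues -1 and -2
B = np.array([[0.0], [1.0]])
M = -np.eye(n + m)                   # a simple M for which condition 1 holds

# Condition 1: the frequency-domain inequality, sampled on a grid.
def cond1(omega):
    F = np.vstack([np.linalg.solve(1j * omega * np.eye(n) - A, B),
                   np.eye(m)])
    return np.all(np.linalg.eigvalsh(F.conj().T @ M @ F) <= 1e-9)

assert all(cond1(w) for w in np.linspace(-100.0, 100.0, 2001))

# Condition 2: feasibility of the LMI in the symmetric variable P.
P = cp.Variable((n, n), symmetric=True)
lmi = M + cp.bmat([[A.T @ P + P @ A, P @ B],
                   [B.T @ P,         np.zeros((m, m))]])
prob = cp.Problem(cp.Minimize(0), [lmi << 0])
prob.solve()
print("LMI certificate found:", prob.status)   # expect "optimal"
```

Because condition 2 is convex in P, a feasibility search of this kind is also how the lemma is typically exploited computationally.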
The corresponding equivalence for strict inequalities holds even if $(A, B)$ is not controllable. diff --git a/wiki/wikipedia/2805.txt b/wiki/wikipedia/2805.txt deleted file mode 100644 index 71dc7e212b845e16e8a97414965fb3fedcca7c84..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2805.txt +++ /dev/null @@ -1,14 +0,0 @@ -In mathematics, Favard's theorem, also called the Shohat–Favard theorem, states that a sequence of polynomials satisfying a suitable 3-term recurrence relation is a sequence of orthogonal polynomials. The theorem was introduced in the theory of orthogonal polynomials by Favard and Shohat, though essentially the same theorem was used by Stieltjes in the theory of continued fractions many years before Favard's paper, and was rediscovered several times by other authors before Favard's work. - -Suppose that y0 = 1, y1, ... is a sequence of polynomials where yn has degree n. If this is a sequence of orthogonal polynomials for some positive weight function then it satisfies a 3-term recurrence relation. Favard's theorem is roughly a converse of this, and states that if these polynomials satisfy a 3-term recurrence relation of the form -$$ - y_{n+1}= (x-c_n)y_n - d_n y_{n-1} -$$ - -for some numbers cn and dn, - -then the polynomials yn form an orthogonal sequence for some linear functional Λ with Λ(1)=1; in other words Λ(ymyn) = 0 if m ≠ n. - -The linear functional Λ is unique, and is given by Λ(1) = 1, Λ(yn) = 0 if n > 0. - -The functional Λ satisfies $\Lambda(y_n^2) = d_n \Lambda(y_{n-1}^2)$, which implies that Λ is positive definite if (and only if) the numbers cn are real and the numbers dn are positive. diff --git a/wiki/wikipedia/2806.txt b/wiki/wikipedia/2806.txt deleted file mode 100644 index 5b614c94de7c1d3e401d7b78a197c91a29e6ac8c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2806.txt +++ /dev/null @@ -1 +0,0 @@ -A nonobtuse triangle mesh is composed of a set of triangles in which no angle is obtuse, i.e. greater than 90°. If each (triangle) face angle is strictly less than 90°, then the triangle mesh is said to be acute. The immediate benefits of a nonobtuse or acute mesh include more efficient and more accurate geodesic computation using fast marching, and guaranteed validity for planar mesh embeddings via discrete harmonic maps. diff --git a/wiki/wikipedia/2807.txt b/wiki/wikipedia/2807.txt deleted file mode 100644 index 36f9b4b0d495230ff2a6ed90f303184879de9c73..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2807.txt +++ /dev/null @@ -1,18 +0,0 @@ -Rudolf Haag postulated that the interaction picture does not exist in an interacting, relativistic quantum field theory, something now commonly known as Haag's theorem. Haag's original proof was subsequently generalized by a number of authors, notably Hall & Wightman (1957), who reached the conclusion that a single, universal Hilbert space representation does not suffice for describing both free and interacting fields. Reed & Simon (1975) proved that a Haag-like theorem also applies to free neutral scalar fields of different masses, which implies that the interaction picture cannot exist even in the absence of interactions. - -In its modern form, the Haag theorem may be stated as follows: - -
    - -Consider two faithful representations of the canonical commutation relations, $(H_1 , \{O^i_1\})$ and -$$ -(H_2 , \{O^i_2\}) -$$ (where each $H_n, ~ n=1,2,$ denotes one of two Hilbert spaces and each set $\{ O^i_n \}$ is the collection of operators for the respective space in the canonical commutation relations). - -The two representations are called unitarily equivalent if and only if there exists some unitary mapping $U$ from Hilbert space $H_1$ to Hilbert space $H_2$ such that for all $j,~ O^j_2 = U O^j_1 U^{-1}~.$
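As a finite-dimensional toy illustration of this definition (only an analogy: the representations at issue in Haag's theorem are infinite-dimensional, and all names below are hypothetical), conjugating a family of operators by a unitary produces an equivalent family with identical spectra:

```python
# Hedged toy sketch: O_2^j = U O_1^j U^{-1} preserves spectra, so no
# intrinsic property distinguishes unitarily equivalent families.
# Assumes numpy only.
import numpy as np

rng = np.random.default_rng(1)
dim = 4
O1 = [rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
      for _ in range(3)]                      # toy operator family {O_1^j}
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim))
                    + 1j * rng.normal(size=(dim, dim)))
U = Q                                         # a random unitary mapping

O2 = [U @ O @ U.conj().T for O in O1]         # the conjugated family {O_2^j}

for A1, A2 in zip(O1, O2):
    assert np.allclose(np.sort_complex(np.linalg.eigvals(A1)),
                       np.sort_complex(np.linalg.eigvals(A2)))
```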
Philosophical assessments of the theorem have been offered by Lupher (2005), Sklar (2000), and Wallace (2011). Wallace argues that, in practice, Haag's theorem poses no obstacle to quantum field theory as it is actually used; he justifies this claim with the insights gained from modern renormalization group theory, namely the fact that
    ... we can absorb all our ignorance of how the cutoff [i.e., the short-range cutoff required to carry out the renormalization procedure] is implemented, into the values of finitely many coefficients which can be measured empirically.
- -Concerning the consequences of Haag's theorem, Wallace's observation implies that since QFT does not attempt to predict fundamental parameters, such as particle masses or coupling constants, potentially harmful effects arising from unitarily non-equivalent representations remain absorbed inside the empirical values that stem from measurements of these parameters (at a given length scale) and that are readily imported into QFT. Thus they remain invisible to quantum field theorists, in practice. diff --git a/wiki/wikipedia/2808.txt b/wiki/wikipedia/2808.txt deleted file mode 100644 index cadceeee81786808b30c019521e30d7b41ea710d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2808.txt +++ /dev/null @@ -1,15 +0,0 @@ -The Coleman–Mandula theorem (named after Sidney Coleman and Jeffrey Mandula) is a no-go theorem stating that the symmetry group of a consistent quantum field theory with a mass gap can only be a direct product of the Poincare group and an internal symmetry group; spacetime and internal symmetries cannot be combined in any nontrivial way. - -The first condition for the theorem is that the unified group "G contains a subgroup locally isomorphic to the Poincare group." Therefore, the theorem only makes a statement about the unification of the Poincare group with an internal symmetry group. However, if the Poincare group is replaced with a different spacetime symmetry, for example with the de Sitter group, the theorem no longer holds; in that case, however, an infinite number of massless bosonic higher-spin fields is required to exist. In addition, if all particles are massless the Coleman–Mandula theorem allows a combination of internal and spacetime symmetries, because the spacetime symmetry group is then the conformal group. - -Note that this theorem only constrains the symmetries of the S-matrix itself. As such, it places no constraints on spontaneously broken symmetries which do not show up directly on the S-matrix level. In fact, it is easy to construct spontaneously broken symmetries (in interacting theories) which unify spatial and internal symmetries. - -This theorem also only applies to Lie algebras and not to Lie groups. As such, it does not apply to discrete symmetries or globally to Lie groups. As an example of the latter, we might have a model where a rotation by τ (a discrete spacetime symmetry) is an involutive internal symmetry which commutes with all the other internal symmetries. - -If there is no mass gap, the symmetry algebra could be a tensor product of the conformal algebra with an internal Lie algebra. But in the absence of a mass gap, there are also other possibilities. For example, quantum electrodynamics has vector and tensor conserved charges. See infraparticle for more details. - -Supersymmetry may be considered a possible "loophole" of the theorem because it contains additional generators (supercharges) that are not scalars but rather spinors. This loophole is possible because supersymmetry is a Lie superalgebra, not a Lie algebra. The corresponding theorem for supersymmetric theories with a mass gap is the Haag–Łopuszański–Sohnius theorem. - -Quantum group symmetry, present in some two-dimensional integrable quantum field theories like the sine-Gordon model, exploits a similar loophole. - -It was proven that conformal theories with higher-spin symmetry are not compatible with interactions.
diff --git a/wiki/wikipedia/2809.txt b/wiki/wikipedia/2809.txt deleted file mode 100644 index 3956affda8e0d66853ee3f20bba47d0b7065025a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2809.txt +++ /dev/null @@ -1 +0,0 @@ -In computer science, a single pushout graph rewriting or SPO graph rewriting refers to a mathematical framework for graph rewriting, and is used in contrast to the double-pushout approach of graph rewriting. diff --git a/wiki/wikipedia/281.txt b/wiki/wikipedia/281.txt deleted file mode 100644 index a4a3b0576df19b2e82f2f96d35fa7266263456cd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/281.txt +++ /dev/null @@ -1,21 +0,0 @@ -In mathematics, the Freidlin–Wentzell theorem (due to Mark Freidlin and Alexander D. Wentzell) is a result in the large deviations theory of stochastic processes. Roughly speaking, the Freidlin–Wentzell theorem gives an estimate for the probability that a (scaled-down) sample path of an Itō diffusion will stray far from the mean path. This statement is made precise using rate functions. The Freidlin–Wentzell theorem generalizes Schilder's theorem for standard Brownian motion. - -Let B be a standard Brownian motion on Rd starting at the origin, 0 ∈ Rd, and let Xε be an Rd-valued Itō diffusion solving an Itō stochastic differential equation of the form -$$ -\begin{cases} dX_t^\varepsilon = b(X_t^\varepsilon) dt + \sqrt{\varepsilon} dB_t, \\ X_0^\varepsilon = 0, \end{cases} -$$ - -where the drift vector field b : Rd → Rd is uniformly Lipschitz continuous. Then, on the Banach space C0 = C0([0, T]; Rd) equipped with the supremum norm ||·||, the family of processes (Xε)ε>0 satisfies the large deviations principle with good rate function I : C0 → R ∪ {+∞} given by -$$ -I(\omega) = \frac{1}{2} \int_0^T | \dot{\omega}_t - b(\omega_t) |^2 dt -$$ - -if ω lies in the Sobolev space H1([0, T]; Rd), and I(ω) = +∞ otherwise. In other words, for every open set G ⊆ C0 and every closed set F ⊆ C0, -$$ -\limsup_{\varepsilon \downarrow 0} \big( \varepsilon \log \mathbf{P} \big[ X^\varepsilon \in F \big]\big) \leq -\inf_{\omega \in F} I(\omega) -$$ - -and -$$ -\liminf_{\varepsilon \downarrow 0} \big( \varepsilon \log \mathbf{P} \big[ X^{\varepsilon} \in G \big]\big) \geq - \inf_{\omega \in G} I(\omega). -$$ diff --git a/wiki/wikipedia/2810.txt b/wiki/wikipedia/2810.txt deleted file mode 100644 index 5870409cdc7de3b0573e31c37661b4122b2afc6c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2810.txt +++ /dev/null @@ -1,285 +0,0 @@ -The fundamental theorem of algebra also known as d'Alembert's theorem or the d'Alembert-Gauss theorem states that every non-constant single-variable polynomial with complex coefficients has at least one complex root. This includes polynomials with real coefficients, since every real number is a complex number with its imaginary part equal to zero. - -Equivalently (by definition), the theorem states that the field of complex numbers is algebraically closed. - -The theorem is also stated as follows: every non-zero, single-variable, degree n polynomial with complex coefficients has, counted with multiplicity, exactly n complex roots. The equivalence of the two statements can be proven through the use of successive polynomial division. - -Despite its name, there is no purely algebraic proof of the theorem, since any proof must use some form of the analytic completeness of the real numbers, which is not an algebraic concept. 
Additionally, it is not fundamental for modern algebra; its name was given at a time when algebra was synonymous with theory of equations. - -Peter Roth, in his book Arithmetica Philosophica (published in 1608, at Nürnberg, by Johann Lantzenberger), wrote that a polynomial equation of degree n (with real coefficients) may have n solutions. Albert Girard, in his book L'invention nouvelle en l'Algèbre (published in 1629), asserted that a polynomial equation of degree n has n solutions, but he did not state that they had to be real numbers. Furthermore, he added that his assertion holds "unless the equation is incomplete", by which he meant that no coefficient is equal to 0. However, when he explains in detail what he means, it is clear that he actually believes that his assertion is always true; for instance, he shows that the equation $x^4 = 4x-3,$ although incomplete, has four solutions (counting multiplicities): 1 (twice), $-1+i\sqrt{2},$ and $-1-i\sqrt{2}.$ - -As will be mentioned again below, it follows from the fundamental theorem of algebra that every non-constant polynomial with real coefficients can be written as a product of polynomials with real coefficients whose degrees are either 1 or 2. However, in 1702 Leibniz erroneously said that no polynomial of the type x4 + a4 (with a real and distinct from 0) can be written in such a way. Later, Nikolaus Bernoulli made the same assertion concerning the polynomial x4 − 4x3 + 2x2 + 4x + 4, but he got a letter from Euler in 1742 in which it was shown that this polynomial is equal to -$$ -\left (x^2-(2+\alpha)x+1+\sqrt{7}+\alpha \right ) \left (x^2-(2-\alpha)x+1+\sqrt{7}-\alpha \right ), -$$ - -with $\alpha = \sqrt{4+2\sqrt{7}}.$ - -Also, Euler pointed out that -$$ -x^4+a^4= \left (x^2+a\sqrt{2}\cdot x+a^2 \right ) \left (x^2-a\sqrt{2}\cdot x+a^2 \right ). -$$ - -A first attempt at proving the theorem was made by d'Alembert in 1746, but his proof was incomplete. Among other problems, it assumed implicitly a theorem (now known as Puiseux's theorem), which would not be proved until more than a century later and using the fundamental theorem of algebra. Other attempts were made by Euler (1749), de Foncenex (1759), Lagrange (1772), and Laplace (1795). These last four attempts assumed implicitly Girard's assertion; to be more precise, the existence of solutions was assumed and all that remained to be proved was that their form was a + bi for some real numbers a and b. In modern terms, Euler, de Foncenex, Lagrange, and Laplace were assuming the existence of a splitting field of the polynomial p(z). - -At the end of the 18th century, two new proofs were published which did not assume the existence of roots, but neither of which was complete. One of them, due to James Wood and mainly algebraic, was published in 1798 and it was totally ignored. Wood's proof had an algebraic gap. The other one was published by Gauss in 1799 and it was mainly geometric, but it had a topological gap, only filled by Alexander Ostrowski in 1920, as discussed in Smale (1981). The first rigorous proof was published by Argand in 1806 (and revisited in 1813); it was also here that, for the first time, the fundamental theorem of algebra was stated for polynomials with complex coefficients, rather than just real coefficients. Gauss produced two other proofs in 1816 and another incomplete version of his original proof in 1849. - -The first textbook containing a proof of the theorem was Cauchy's Cours d'analyse de l'École Royale Polytechnique (1821). 
It contained Argand's proof, although Argand is not credited for it. - -None of the proofs mentioned so far is constructive. It was Weierstrass who raised for the first time, in the middle of the 19th century, the problem of finding a constructive proof of the fundamental theorem of algebra. He presented his solution, which amounts in modern terms to a combination of the Durand–Kerner method with the homotopy continuation principle, in 1891. Another proof of this kind was obtained by Hellmuth Kneser in 1940 and simplified by his son Martin Kneser in 1981. - -Without using countable choice, it is not possible to constructively prove the fundamental theorem of algebra for complex numbers based on the Dedekind real numbers (which are not constructively equivalent to the Cauchy real numbers without countable choice). However, Fred Richman proved a reformulated version of the theorem that does work. - -All proofs below involve some mathematical analysis, or at least the topological concept of continuity of real or complex functions. Some also use differentiable or even analytic functions. This fact has led to the remark that the Fundamental Theorem of Algebra is neither fundamental, nor a theorem of algebra. - -Some proofs of the theorem only prove that any non-constant polynomial with real coefficients has some complex root. This is enough to establish the theorem in the general case because, given a non-constant polynomial p(z) with complex coefficients, the polynomial -$$ -q(z)=p(z)\overline{p(\overline z)} -$$ - -has only real coefficients and, if z is a zero of q(z), then either z or its conjugate is a root of p(z). - -A large number of non-algebraic proofs of the theorem use the fact (sometimes called "growth lemma") that an n-th degree polynomial function p(z) whose dominant coefficient is 1 behaves like zn when |z| is large enough. A more precise statement is: there is some positive real number R such that: -$$ -\tfrac{1}{2}|z^n|<|p(z)|<\tfrac{3}{2}|z^n| -$$ - -when |z| > R. - -Find a closed disk D of radius r centered at the origin such that |p(z)| > |p(0)| whenever |z| ≥ r. The minimum of |p(z)| on D, which must exist since D is compact, is therefore achieved at some point z0 in the interior of D, but not at any point of its boundary. The Maximum modulus principle (applied to 1/p(z)) implies then that p(z0) = 0. In other words, z0 is a zero of p(z). - -A variation of this proof does not require the use of the maximum modulus principle (in fact, the same argument with minor changes also gives a proof of the maximum modulus principle for holomorphic functions). If we assume by contradiction that a := p(z0) ≠ 0, then, expanding p(z) in powers of z − z0 we can write -$$ -p(z) = a + c_k (z-z_0)^k + c_{k+1} (z-z_0)^{k+1} + \cdots + c_n (z-z_0)^n. -$$ - -Here, the cj are simply the coefficients of the polynomial z → p(z + z0), and we let k be the index of the first coefficient following the constant term that is non-zero. But now we see that for z sufficiently close to z0 this has behavior asymptotically similar to the simpler polynomial $q(z) = a+c_k (z-z_0)^k$, - -in the sense that (as is easy to check) the function -$$ -\left|\frac{p(z)-q(z)}{(z-z_0)^{k+1}}\right| -$$ - -is bounded by some positive constant M in some neighborhood of z0. 
Therefore, if we define $\theta_0 = (\arg(a)+\pi-\arg(c_k)) /k$ and let $z = z_0 + r e^{i \theta_0}$, then for any sufficiently small positive number r (so that the bound M mentioned above holds), using the triangle inequality we see that - -\begin{align} - -|p(z)| &\le |q(z)| + r^{k+1} \left|\frac{p(z)-q(z)}{r^{k+1}}\right|\\[4pt] - -&\le \left|a +(-1)c_k r^k e^{i(\arg(a)-\arg(c_k))}\right| + M r^{k+1} \\[4pt] - -&= |a|-|c_k|r^k + M r^{k+1} - -\end{align} - -When r is sufficiently close to 0 this upper bound for |p(z)| is strictly smaller than |a|, in contradiction to the definition of z0. (Geometrically, we have found an explicit direction θ0 such that if one approaches z0 from that direction one can obtain values p(z) smaller in absolute value than |p(z0)|.) - -Another analytic proof can be obtained along this line of thought observing that, since |p(z)| > |p(0)| outside D, the minimum of |p(z)| on the whole complex plane is achieved at z0. If |p(z0)| > 0, then 1/p is a bounded holomorphic function in the entire complex plane since, for each complex number z, |1/p(z)| ≤ |1/p(z0)|. Applying Liouville's theorem, which states that a bounded entire function must be constant, this would imply that 1/p is constant and therefore that p is constant. This gives a contradiction, and hence p(z0) = 0. - -Yet another analytic proof uses the argument principle. Let R be a positive real number large enough so that every root of p(z) has absolute value smaller than R; such a number must exist because every non-constant polynomial function of degree n has at most n zeros. For each r > R, consider the number -$$ -\frac{1}{2\pi i}\int_{c(r)}\frac{p'(z)}{p(z)}dz, -$$ - -where c(r) is the circle centered at 0 with radius r oriented counterclockwise; then the argument principle says that this number is the number N of zeros of p(z) in the open ball centered at 0 with radius r, which, since r > R, is the total number of zeros of p(z). On the other hand, the integral of n/z along c(r) divided by 2πi is equal to n. But the difference between the two numbers is -$$ -\frac{1}{2\pi i}\int_{c(r)}\left(\frac{p'(z)}{p(z)}-\frac{n}{z}\right)dz=\frac{1}{2\pi i}\int_{c(r)}\frac{zp'(z)-np(z)}{zp(z)}dz. -$$ - -The numerator of the rational expression being integrated has degree at most n − 1 and the degree of the denominator is n + 1. Therefore, the number above tends to 0 as r → +∞. But the number is also equal to N − n and so N = n. - -Still another complex-analytic proof can be given by combining linear algebra with the Cauchy theorem. To establish that every complex polynomial of degree n > 0 has a zero, it suffices to show that every complex square matrix of size n > 0 has a (complex) eigenvalue. The proof of the latter statement is by contradiction. - -Let A be a complex square matrix of size n > 0 and let In be the unit matrix of the same size. Assume A has no eigenvalues. Consider the resolvent function -$$ - R(z)=(zI_n-A)^{-1}, -$$ - -which is a meromorphic function on the complex plane with values in the vector space of matrices. The eigenvalues of A are precisely the poles of R(z). Since, by assumption, A has no eigenvalues, the function R(z) is an entire function and Cauchy theorem implies that -$$ - \int_{c(r)} R(z) dz =0. -$$ - -On the other hand, R(z) expanded as a geometric series gives: -$$ -R(z)=z^{-1}(I_n-z^{-1}A)^{-1}=z^{-1}\sum_{k=0}^\infty \frac{1}{z^k}A^k\cdot -$$ - -This formula is valid outside the closed disc of radius $\|A\|$ (the operator norm of A). 
Let $r>\|A\|.$ Then -$$ -\int_{c(r)}R(z)dz=\sum_{k=0}^{\infty}\int_{c(r)}\frac{dz}{z^{k+1}}A^k=2\pi iI_n -$$ - -(in which only the summand k = 0 has a nonzero integral). This is a contradiction, and so A has an eigenvalue. - -Finally, Rouché's theorem gives perhaps the shortest proof of the theorem. - -Suppose the minimum of |p(z)| on the whole complex plane is achieved at z0; it was seen at the proof which uses Liouville's theorem that such a number must exist. We can write p(z) as a polynomial in z − z0: there is some natural number k and there are some complex numbers ck, ck + 1, ..., cn such that ck ≠ 0 and: -$$ -p(z)=p(z_0)+c_k(z-z_0)^k+c_{k+1}(z-z_0)^{k+1}+ \cdots +c_n(z-z_0)^n. -$$ - -If p(z0) is nonzero, it follows that if a is a kth root of −p(z0)/ck and if t is positive and sufficiently small, then |p(z0 + ta)| < |p(z0)|, which is impossible, since |p(z0)| is the minimum of |p| on D. - -For another topological proof by contradiction, suppose that the polynomial p(z) has no roots, and consequently is never equal to 0. Think of the polynomial as a map from the complex plane into the complex plane. It maps any circle |z| = R into a closed loop, a curve P(R). We will consider what happens to the winding number of P(R) at the extremes when R is very large and when R = 0. When R is a sufficiently large number, then the leading term zn of p(z) dominates all other terms combined; in other words, -$$ -\left | z^n \right | > \left | a_{n-1} z^{n-1} + \cdots + a_0 \right |. -$$ - -When z traverses the circle $Re^{i\theta}$ once counter-clockwise $(0\leq \theta \leq 2\pi),$ then $z^n=R^ne^{in\theta}$ winds n times counter-clockwise $(0\leq \theta \leq 2\pi n)$ around the origin (0,0), and P(R) likewise. At the other extreme, with |z| = 0, the curve P(0) is merely the single point p(0), which must be nonzero because p(z) is never zero. Thus p(0) must be distinct from the origin (0,0), which denotes 0 in the complex plane. The winding number of P(0) around the origin (0,0) is thus 0. Now changing R continuously will deform the loop continuously. At some R the winding number must change. But that can only happen if the curve P(R) includes the origin (0,0) for some R. But then for some z on that circle |z| = R we have p(z) = 0, contradicting our original assumption. Therefore, p(z) has at least one zero. - -These proofs of the Fundamental Theorem of Algebra must make use of the following two facts about real numbers that are not algebraic but require only a small amount of analysis (more precisely, the intermediate value theorem in both cases): - -* every polynomial with an odd degree and real coefficients has some real root; - -* every non-negative real number has a square root. - -The second fact, together with the quadratic formula, implies the theorem for real quadratic polynomials. In other words, algebraic proofs of the fundamental theorem actually show that if R is any real-closed field, then its extension C = R(−1) is algebraically closed. - -As mentioned above, it suffices to check the statement "every non-constant polynomial p(z) with real coefficients has a complex root". This statement can be proved by induction on the greatest non-negative integer k such that 2k divides the degree n of p(z). Let a be the coefficient of zn in p(z) and let F be a splitting field of p(z) over C; in other words, the field F contains C and there are elements z1, z2, ..., zn in F such that -$$ -p(z)=a(z-z_1)(z-z_2) \cdots (z-z_n). -$$ - -If k = 0, then n is odd, and therefore p(z) has a real root. 
Now, suppose that n = 2km (with m odd and k > 0) and that the theorem is already proved when the degree of the polynomial has the form 2k − 1m′ with m′ odd. For a real number t, define: -$$ -q_t(z)=\prod_{1\le i<j\le n}\left(z-z_i-z_j-tz_iz_j\right). -$$ - -The coefficients of qt(z) are symmetric polynomials in the zi with real coefficients. Therefore, they can be expressed as polynomials with real coefficients in the elementary symmetric polynomials, that is, in −a1, a2, ..., (−1)nan. So qt(z) has in fact real coefficients. Furthermore, the degree of qt(z) is n(n − 1)/2 = 2k−1m(n − 1), and m(n − 1) is an odd number. So, using the induction hypothesis, qt has at least one complex root; in other words, zi + zj + tzizj is complex for two distinct elements i and j from {1, ..., n}. Since there are more real numbers than pairs (i, j), one can find distinct real numbers t and s such that zi + zj + tzizj and zi + zj + szizj are complex (for the same i and j). So, both zi + zj and zizj are complex numbers. It is easy to check that every complex number has a complex square root, thus every complex polynomial of degree 2 has a complex root by the quadratic formula. It follows that zi and zj are complex numbers, since they are roots of the quadratic polynomial z2 − (zi + zj)z + zizj. - -Joseph Shipman showed in 2007 that the assumption that odd degree polynomials have roots is stronger than necessary; any field in which polynomials of prime degree have roots is algebraically closed (so "odd" can be replaced by "odd prime" and this holds for fields of all characteristics). For axiomatization of algebraically closed fields, this is the best possible, as there are counterexamples if a single prime is excluded. However, these counterexamples rely on −1 having a square root. If we take a field where −1 has no square root, and every polynomial of degree n ∈ I has a root, where I is any fixed infinite set of odd numbers, then every polynomial f(x) of odd degree has a root (since (x2 + 1)kf(x) has a root, where k is chosen so that deg(f) + 2k ∈ I). Mohsen Aliabadi generalized Shipman's result in 2013, providing an independent proof that a sufficient condition for an arbitrary field (of any characteristic) to be algebraically closed is that it has a root for every polynomial of prime degree. - -Another algebraic proof of the fundamental theorem can be given using Galois theory. It suffices to show that C has no proper finite field extension. Let K/C be a finite extension. Since the normal closure of K over R still has a finite degree over C (or R), we may assume without loss of generality that K is a normal extension of R (hence it is a Galois extension, as every algebraic extension of a field of characteristic 0 is separable). Let G be the Galois group of this extension, and let H be a Sylow 2-subgroup of G, so that the order of H is a power of 2, and the index of H in G is odd. By the fundamental theorem of Galois theory, there exists a subextension L of K/R such that Gal(K/L) = H. As [L:R] = [G:H] is odd, and there are no nonlinear irreducible real polynomials of odd degree, we must have L = R, thus [K:R] and [K:C] are powers of 2. Assuming by way of contradiction that [K:C] > 1, we conclude that the 2-group Gal(K/C) contains a subgroup of index 2, so there exists a subextension M of C of degree 2. However, C has no extension of degree 2, because every quadratic complex polynomial has a complex root, as mentioned above. This shows that [K:C] = 1, and therefore K = C, which completes the proof.
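The linear-algebra proof above turns root finding into an eigenvalue problem, and numerical practice runs the same equivalence in reverse: standard root finders compute the eigenvalues of the companion matrix. A minimal sketch (illustrative only; the cubic is an arbitrary example and numpy is assumed):

```python
# Hedged sketch: roots of the monic cubic p(z) = z^3 - 2z + 2 as the
# eigenvalues of its companion matrix. Assumes numpy.
import numpy as np

a = np.array([2.0, -2.0, 0.0])     # a_0, a_1, a_2 of the monic cubic
n = len(a)
C = np.zeros((n, n))
C[1:, :-1] = np.eye(n - 1)         # ones on the subdiagonal
C[:, -1] = -a                      # last column carries -a_0, ..., -a_{n-1}

roots = np.linalg.eigvals(C)       # FTA: exactly n of them, with multiplicity
p = lambda z: z**3 - 2*z + 2
assert np.allclose([p(z) for z in roots], 0.0, atol=1e-8)
```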
- -There exists still another way to approach the fundamental theorem of algebra, due to J. M. Almira and A. Romero: by Riemannian geometric arguments. The main idea here is to prove that the existence of a non-constant polynomial p(z) without zeros implies the existence of a flat Riemannian metric over the sphere S2. This leads to a contradiction since the sphere is not flat. - -A Riemannian surface (M, g) is said to be flat if its Gaussian curvature, which we denote by Kg, is identically null. Now, Gauss–Bonnet theorem, when applied to the sphere S2, claims that -$$ -\int_{\mathbf{S}^2}K_g=4\pi, -$$ - -which proves that the sphere is not flat. - -Let us now assume that n > 0 and -$$ -p(z) = a_0 + a_1 z + \cdots + a_n z^n \neq 0 -$$ - -for each complex number z. Let us define -$$ -p^*(z) = z^n p \left ( \tfrac{1}{z} \right ) = a_0 z^n + a_1 z^{n-1} + \cdots + a_n. -$$ - -Obviously, p*(z) ≠ 0 for all z in C. Consider the polynomial f(z) = p(z)p*(z). Then f(z) ≠ 0 for each z in C. Furthermore, -$$ -f(\tfrac{1}{w}) = p \left (\tfrac{1}{w} \right )p^* \left (\tfrac{1}{w} \right ) = w^{-2n}p^*(w)p(w) = w^{-2n}f(w). -$$ - -We can use this functional equation to prove that g, given by -$$ -g=\frac{1}{|f(w)|^{\frac{2}{n}}}|dw|^2 -$$ - -for w in C, and -$$ -g=\frac{1}{\left |f\left (\tfrac{1}{w} \right ) \right |^{\frac{2}{n}}}\left |d\left (\tfrac{1}{w} \right ) \right |^2 -$$ - -for w ∈ S2\{0}, is a well defined Riemannian metric over the sphere S2 (which we identify with the extended complex plane C ∪ {∞}). - -Now, a simple computation shows that -$$ -\forall w\in\mathbf{C}: \qquad \frac{1}{|f(w)|^{\frac{1}{n}}} K_g=\frac{1}{n}\Delta \log|f(w)|=\frac{1}{n}\Delta \text{Re}(\log f(w))=0, -$$ - -since the real part of an analytic function is harmonic. This proves that Kg = 0. - -Since the fundamental theorem of algebra can be seen as the statement that the field of complex numbers is algebraically closed, it follows that any theorem concerning algebraically closed fields applies to the field of complex numbers. Here are a few more consequences of the theorem, which are either about the field of real numbers or the relationship between the field of real numbers and the field of complex numbers: - -* The field of complex numbers is the algebraic closure of the field of real numbers. - -* Every polynomial in one variable z with complex coefficients is the product of a complex constant and polynomials of the form z + a with a complex. - -* Every polynomial in one variable x with real coefficients can be uniquely written as the product of a constant, polynomials of the form x + a with a real, and polynomials of the form x2 + ax + b with a and b real and a2 − 4b < 0 (which is the same thing as saying that the polynomial x2 + ax + b has no real roots). (By the Abel–Ruffini theorem, the real numbers a and b are not necessarily expressible in terms of the coefficients of the polynomial, the basic arithmetic operations and the extraction of n-th roots.) This implies that the number of non-real complex roots is always even and remains even when counted with their multiplicity. - -* Every rational function in one variable x, with real coefficients, can be written as the sum of a polynomial function with rational functions of the form a/(x − b)n (where n is a natural number, and a and b are real numbers), and rational functions of the form (ax + b)/(x2 + cx + d)n (where n is a natural number, and a, b, c, and d are real numbers such that c2 − 4d < 0). 
A corollary of this is that every rational function in one variable with real coefficients has an elementary primitive. - -* Every algebraic extension of the real field is isomorphic either to the real field or to the complex field. - -While the fundamental theorem of algebra states a general existence result, it is of some interest, both from the theoretical and from the practical point of view, to have information on the location of the zeros of a given polynomial. The simpler result in this direction is a bound on the modulus: all zeros ζ of a monic polynomial $z^n+a_{n-1}z^{n-1}+\cdots+a_1z +a_0$ satisfy an inequality |ζ| ≤ R∞, where -$$ -R_{\infty}:= 1+\max\{|a_0|,\ldots,|a_{n-1}|\}. -$$ - -Notice that, as stated, this is not yet an existence result but rather an example of what is called an a priori bound: it says that if there are solutions then they lie inside the closed disk of center the origin and radius R∞. However, once coupled with the fundamental theorem of algebra it says that the disk contains in fact at least one solution. More generally, a bound can be given directly in terms of any p-norm of the n-vector of coefficients $a:=( a_0, a_1, \ldots, a_{n-1}),$ that is |ζ| ≤ Rp, where Rp is precisely the q-norm of the 2-vector $(1, \|a\|_p),$ q being the conjugate exponent of p, $\tfrac{1}{p} + \tfrac{1}{q} =1,$ for any 1 ≤ p ≤ ∞. Thus, the modulus of any solution is also bounded by -$$ - R_1:= \max\left \{ 1 , \sum_{0\leq k<n} |a_k| \right \}, -$$ - -and, for 1 < p < ∞, -$$ - R_p:= \left[ 1+ \left( \sum_{0\leq k<n} |a_k|^p \right)^{\frac{q}{p}} \right]^{\frac{1}{q}}, -$$ - -in particular -$$ - R_2:= \sqrt{ \sum_{0\leq k\leq n} |a_k|^2 } -$$ - -(defining $a_n$ to mean 1, which is reasonable since 1 is indeed the n-th coefficient of our polynomial). The case of a generic polynomial of degree n, -$$ -P(z):= a_n z^n+a_{n-1}z^{n-1}+\cdots+a_1z +a_0, -$$ - -is of course reduced to the case of a monic polynomial, by dividing all coefficients by an ≠ 0. Also, in case that 0 is not a root, i.e. a0 ≠ 0, bounds from below on the roots ζ follow immediately as bounds from above on $\tfrac{1}{\zeta}$, that is, the roots of -$$ -a_0 z^n+a_1z^{n-1}+\cdots+a_{n-1}z +a_n. -$$ - -Finally, the distance $|\zeta-\zeta_0|$ from the roots ζ to any point $\zeta_0$ can be estimated from below and above, seeing $\zeta-\zeta_0$ as zeros of the polynomial $P(z+\zeta_0)$, whose coefficients are the Taylor expansion of P(z) at $z=\zeta_0.$ - -Let ζ be a root of the polynomial -$$ -z^n+a_{n-1}z^{n-1}+\cdots+a_1z +a_0; -$$ - -in order to prove the inequality |ζ| ≤ Rp we can assume, of course, |ζ| > 1. Writing the equation as -$$ --\zeta^n=a_{n-1}\zeta^{n-1}+\cdots+a_1\zeta+a_0, -$$ - -and using the Hölder's inequality we find -$$ -|\zeta|^n\leq \|a\|_p \left \| \left (\zeta^{n-1},\ldots,\zeta, 1 \right ) \right \|_q. -$$ - -Now, if p = 1, this is -$$ -|\zeta|^n\leq\|a\|_1\max \left \{|\zeta|^{n-1},\ldots,|\zeta|,1 \right \} =\|a\|_1|\zeta|^{n-1}, -$$ - -thus -$$ -|\zeta|\leq \max\{1, \|a\|_1\}. -$$ - -In the case 1 < p ≤ ∞, taking into account the summation formula for a geometric progression, we have -$$ -|\zeta|^n\leq \|a\|_p \left(|\zeta|^{q(n-1)}+\cdots+|\zeta|^q +1\right)^{\frac{1}{q}}=\|a\|_p \left(\frac{|\zeta|^{qn}-1}{|\zeta|^q-1}\right)^{\frac{1}{q}}\leq\|a\|_p \left(\frac{|\zeta|^{qn}}{|\zeta|^q-1}\right)^{\frac{1}{q}}, -$$ - -thus -$$ -|\zeta|^{nq}\leq \|a\|_p^q \frac{|\zeta|^{qn}}{|\zeta|^q-1} -$$ - -and simplifying, -$$ -|\zeta|^q\leq 1+\|a\|_p^q. -$$ - -Therefore -$$ -|\zeta|\leq \left \| \left (1,\|a\|_p \right ) \right \|_q=R_p -$$ - -holds, for all 1 ≤ p ≤ ∞.
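The bounds above are easy to test numerically. The following sketch (illustrative only; the random polynomial and the use of numpy are assumptions) checks R∞ and R1 against the computed roots of a monic polynomial:

```python
# Hedged sketch: compare the a priori bounds R_inf and R_1 with the
# actual root moduli of a random monic polynomial. Assumes numpy.
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=6)                       # a_0, ..., a_5 (low degree first)
coeffs = np.concatenate(([1.0], a[::-1]))    # numpy orders high degree first

roots = np.roots(coeffs)
R_inf = 1.0 + np.max(np.abs(a))
R_1 = max(1.0, np.sum(np.abs(a)))

assert np.all(np.abs(roots) <= R_inf + 1e-9)
assert np.all(np.abs(roots) <= R_1 + 1e-9)
print(np.max(np.abs(roots)), R_inf, R_1)     # largest root modulus vs. bounds
```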
diff --git a/wiki/wikipedia/2811.txt b/wiki/wikipedia/2811.txt deleted file mode 100644 index 09317ff4f408acd838fde3fcfe9a6e8807000d92..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2811.txt +++ /dev/null @@ -1,15 +0,0 @@ -In geometry, the zone theorem is a result that establishes the complexity of the zone of a line in an arrangement of lines. - -A line arrangement, denoted as $A(L)$, is a subdivision of the plane, induced by a set of lines $L$, into cells ($2$-dimensional faces), edges ($1$-dimensional faces) and vertices ($0$-dimensional faces). Given a set of $n$ lines $L$, the line arrangement $A(L)$, and a line $l$ (not belonging to $L$), the zone of $l$ is the set of faces intersected by $l$. The complexity of a zone is the total number of edges in its boundary, expressed as a function of $n$. - -The zone theorem states that said complexity is $O(n)$. - -This result was published for the first time in 1985; Chazelle et al. gave the upper bound of $10n+2$ for the complexity of the zone of a line in an arrangement. In 1991, this bound was improved to $\lfloor9.5n\rfloor -1$, and it was also shown that this is the best possible upper bound up to a small additive factor. Then, in 2011, Rob Pinchasi proved that the complexity of the zone of a line in an arrangement is at most $\lfloor9.5n\rfloor -3$, and this is a tight bound. - -Some paradigms used in the different proofs of the theorem include induction and sweep techniques. - -Although the most popular version is for arrangements of lines in the plane, there exist some generalizations of the zone theorem. For instance, in dimension $d$, considering arrangements of hyperplanes, the complexity of the zone of a hyperplane $h$ is the number of facets ($(d-1)$-dimensional faces) bounding the set of cells ($d$-dimensional faces) intersected by $h$. Analogously, the $d$-dimensional zone theorem states that the complexity of the zone of a hyperplane is $O(n^{d-1})$. There are considerably fewer proofs for the theorem for dimension $d\geq3$. For the $3$-dimensional case, there are proofs based on sweep techniques, and for higher dimensions Euler's relation is used: $\Sigma_{i=0}^{d} (-1)^i F_i \geq 0. $ - -Another generalization is considering arrangements of pseudolines (and pseudohyperplanes in dimension $d$) instead of lines (and hyperplanes). Some proofs for the theorem work well in this case since they do not use the straightness of the lines substantially through their arguments. - -The primary motivation to study the zone complexity in arrangements arises from looking for efficient algorithms to construct arrangements. A classical algorithm is the incremental construction, which can be roughly described as adding the lines one after the other and storing all faces generated by each in an appropriate data structure (the usual structure for arrangements is the doubly connected edge list (DCEL)). Here, the consequence of the zone theorem is that the entire construction of any arrangement of $n$ lines can be done in time $O(n^2)$, since the insertion of each line takes time $O(n)$. diff --git a/wiki/wikipedia/2812.txt b/wiki/wikipedia/2812.txt deleted file mode 100644 index e5f756580286c939befd8c4cb67ac7d72b82d034..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2812.txt +++ /dev/null @@ -1,7 +0,0 @@ -In mathematics, Alexander's theorem states that every knot or link can be represented as a closed braid; that is, a braid in which the corresponding ends of the strings are connected in pairs.
The theorem is named after James Waddell Alexander II, who published a proof in 1923. - -Braids were first considered as a tool of knot theory by Alexander. His theorem gives a positive answer to the question "Is it always possible to transform a given knot into a closed braid?" A good construction example is found in Colin Adams's book. - -However, the correspondence between knots and braids is clearly not one-to-one: a knot may have many braid representations. For example, conjugate braids yield equivalent knots. This leads to a second fundamental question: Which closed braids represent the same knot type? - -This question is addressed in Markov's theorem, which gives 'moves' relating any two closed braids that represent the same knot. diff --git a/wiki/wikipedia/2813.txt b/wiki/wikipedia/2813.txt deleted file mode 100644 index da9d7ba21153ead97db2d44ea7dbd8f8b44535f7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2813.txt +++ /dev/null @@ -1,41 +0,0 @@ -POWER9 is a family of superscalar, multithreading, multi-core microprocessors based on the Power ISA, announced in August 2016 at the Hot Chips conference. The SMT4- and SMT8-cores are similar, in that they consist of a number of so-called slices fed by common schedulers. A slice is a rudimentary 64-bit single-threaded processing core with load store unit (LSU), integer unit (ALU) and a vector scalar unit (VSU, doing SIMD and floating point). A super-slice is the combination of two slices. An SMT4-core consists of a 32 KiB L1 instruction cache (1 KiB = 1024 bytes), a 32 KiB L1 data cache, an instruction fetch unit (IFU) and an instruction sequencing unit (ISU) which feeds two super-slices. An SMT8-core has two sets of L1 caches, IFUs, and ISUs to feed four super-slices. The result is that the 12-core and 24-core versions of POWER9 each consist of the same number of slices (96 each) and the same amount of L1 cache. - -A POWER9 core, whether SMT4 or SMT8, has a 12-stage pipeline (five stages shorter than its predecessor, the POWER8), but aims to retain the clock frequency of around 4 GHz. On chip are co-processors for compression and cryptography, as well as a large low-latency eDRAM L3 cache. - -* General purpose PCIe v.4 connections for attaching regular ASICs, FPGAs and other peripherals as well as CAPI 2.0 and CAPI 1.0 devices designed for POWER8. - -* Multiprocessor (symmetric multiprocessor system) links to connect other POWER9 processors on the same motherboard, or in other closely attached enclosures. - -POWER9 chips can be made with two types of cores, and in a Scale Out or Scale Up configuration. POWER9 cores are either SMT4 or SMT8, with SMT8 cores intended for PowerVM systems, while the SMT4 cores are intended for PowerNV systems, which do not use PowerVM, and predominantly run Linux. With POWER9, chips made for Scale Out can support directly-attached memory, while Scale Up chips are intended for use with machines with more than two CPU sockets, and use buffered memory. - -* Monza – 68.5 mm × 68.5 mm, 8 DDR4, 34 PCIe lanes, 1 XBus 4B, 48 OpenCAPI lanes - -* LaGrange – 68.5 mm × 68.5 mm, 8 DDR4, 42 PCIe lanes, 2 XBus 4B, 16 OpenCAPI lanes - -Sforza modules use a land grid array (LGA) 2601-pin socket. - -Talos II – two-socket workstation/server platform using POWER9 SMT4 Sforza processors; available as 2U server, 4U server, tower, or EATX mainboard. Marketed as secure and owner-controllable with free and open-source software and firmware.
Initially shipping with 4-core, 8-core, 18-core, and 22-core chip options until chips with more cores are available. - -Talos II Lite – single-socket version of the Talos II mainboard, made using the same PCB. - -Blackbird – single-socket microATX platform using SMT4 Sforza processors (up to 8-core 160 W variant), 4–8 cores, 2 RAM slots (supporting up to 256 GiB total) - -Barreleye G2 / Zaius – two-socket server platform using LaGrange processors; - -Power Systems L922 – 2U, 1–2× POWER9 SMT8, 8–12 cores per processor, up to 4 TiB DDR4 RAM (1 TiB = 1024 GiB), PowerVM running Linux. - -Power Systems S914 – 4U, 1× POWER9 SMT8, 4–8 cores, up to 1 TiB DDR4 RAM, PowerVM running AIX/IBM i/Linux. - -Power Systems S924 – 4U, 2× POWER9 SMT8, 8–12 cores per processor, up to 4 TiB DDR4 RAM, PowerVM running AIX/IBM i/Linux. - -Power Systems H924 – 4U, 2× POWER9 SMT8, 8–12 cores per processor, up to 4 TiB DDR4 RAM, PowerVM running SAP HANA (on Linux) with AIX/IBM i on up to 25% of the system. The Sierra supercomputer at Lawrence Livermore National Laboratory is based on IBM's Power Systems AC922 compute node. - -The first racks of the Summit supercomputer were delivered to Oak Ridge National Laboratory on 31 July 2017. - -MareNostrum 4 – One of the three clusters in the emerging technologies block of the fourth MareNostrum supercomputer is a POWER9 cluster with Nvidia Volta GPUs. This cluster is expected to provide more than 1.5 petaflops of computing capacity when installed. The emerging technologies block of the MareNostrum 4 exists to test if new developments might be "suitable for future versions of MareNostrum". - -As with its predecessor, POWER9 is supported by FreeBSD, IBM AIX, IBM i, Linux (both running with and without PowerVM), and OpenBSD. - -Implementation of POWER9 support in the Linux kernel began with version 4.6 in March 2016. - -Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise (SLES), Debian Linux, and CentOS are supported. diff --git a/wiki/wikipedia/2814.txt b/wiki/wikipedia/2814.txt deleted file mode 100644 index 5570be9de549f772ae8a29429f5a16ab80d210fe..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2814.txt +++ /dev/null @@ -1,13 +0,0 @@ -Büchi arithmetic of base k is the first-order theory of the natural numbers with addition and the function $V_k(x)$ which is defined as the largest power of k dividing x, named in honor of the Swiss mathematician Julius Richard Büchi. The signature of Büchi arithmetic contains only the addition operation, $V_k$ and equality, omitting the multiplication operation entirely. - -Unlike Peano arithmetic, Büchi arithmetic is a decidable theory. This means it is possible to effectively determine, for any sentence in the language of Büchi arithmetic, whether that sentence is provable from the axioms of Büchi arithmetic. - -A subset $X\subseteq \mathbb N^n$ is definable in Büchi arithmetic of base k if and only if it is k-recognisable. - -If $n=1$ this means that the base-k representations of the integers in X are accepted by a finite automaton. Similarly if $n>1$ there exists an automaton that reads the first digits, then the second digits, and so on, of n integers in base k, and accepts the input if the n integers are in the relation X (a toy automaton of this kind is sketched below). - -If k and l are multiplicatively dependent, then the Büchi arithmetics of base k and l have the same expressivity. Indeed $V_l$ can be defined in $\text{FO}(V_k,+)$, the first-order theory of $V_k$ and $+$.
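For example (a toy sketch, not from the article; the function name is hypothetical), the set of multiples of 3 is 2-recognisable: a three-state automaton that tracks the value read so far modulo 3 accepts exactly the base-2 expansions of its elements:

```python
# Hedged sketch: a DFA for { n : 3 divides n } over base-2 digits,
# read most-significant digit first. The state is the value mod 3.
def accepts_multiple_of_3(bits):
    state = 0
    for b in bits:
        state = (2 * state + b) % 3   # appending a digit doubles and adds
    return state == 0

for n in range(64):                   # exhaustive check on small inputs
    bits = [int(c) for c in format(n, "b")]
    assert accepts_multiple_of_3(bits) == (n % 3 == 0)
```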
- -If, on the other hand, k and l are multiplicatively independent, then an arithmetic theory with both $V_k$ and $V_l$ functions is equivalent to Peano arithmetic, which has both addition and multiplication, since multiplication is definable in $\text{FO}(V_k,V_l,+)$. - -Further, by the Cobham–Semënov theorem, if a relation is definable in both the base-k and base-l Büchi arithmetics for multiplicatively independent k and l, then it is definable in Presburger arithmetic. diff --git a/wiki/wikipedia/2815.txt b/wiki/wikipedia/2815.txt deleted file mode 100644 index 91f21d9a5156bee9210533e1b22448584122ba7b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2815.txt +++ /dev/null @@ -1,53 +0,0 @@ -In computer science and graph theory, the Canadian traveller problem (CTP) is a generalization of the shortest path problem to graphs that are partially observable. In other words, the graph is revealed while it is being explored, and explorative edges are charged even if they do not contribute to the final path. - -This optimization problem was introduced by Christos Papadimitriou and Mihalis Yannakakis in 1989 and a number of variants of the problem have been studied since. The name supposedly originates from conversations of the authors who learned of a difficulty Canadian drivers had: traveling a network of cities with snowfall randomly blocking roads. The stochastic version, where each edge is associated with a probability of independently being in the graph, has been given considerable attention in operations research under the name "the Stochastic Shortest Path Problem with Recourse" (SSPPR). - -For a given instance, there are a number of possibilities, or realizations, of how the hidden graph may look. Given an instance, a description of how to follow the instance in the best way is called a policy. The CTP task is to compute the expected cost of the optimal policies. To compute an actual description of an optimal policy may be a harder problem. - -Given an instance and policy for the instance, every realization produces its own (deterministic) walk in the graph. Note that the walk is not necessarily a path since the best strategy may be to, e.g., visit every vertex of a cycle and return to the start. This differs from the shortest path problem (with strictly positive weights), where repetitions in a walk imply that a better solution exists. - -There are primarily five parameters distinguishing the variants of the Canadian Traveller Problem. The first parameter is how to value the walk produced by a policy for a given instance and realization. In the Stochastic Shortest Path Problem with Recourse, the goal is simply to minimize the cost of the walk (defined as the sum over all edges of the cost of the edge times the number of times that edge was taken). For the Canadian Traveller Problem, the task is to minimize the competitive ratio of the walk; i.e., to minimize the number of times longer the produced walk is than the shortest path in the realization. - -The second parameter is how to evaluate a policy with respect to different realizations consistent with the instance under consideration. In the Canadian Traveller Problem, one wishes to study the worst case and in SSPPR, the average case. For average case analysis, one must furthermore specify an a priori distribution over the realizations. - -The third parameter is restricted to the stochastic versions and is about what assumptions we can make about the distribution of the realizations and how the distribution is represented in the input.
In the Stochastic Canadian Traveller Problem and in the Edge-independent Stochastic Shortest Path Problem (i-SSPPR), each uncertain edge (or cost) has an associated probability of being in the realization and the event that an edge is in the graph is independent of which other edges are in the realization. Even though this is a considerable simplification, the problem is still #P-hard. Another variant is to make no assumption on the distribution but require that each realization with non-zero probability be explicitly stated (such as “Probability 0.1 of edge set { {3,4},{1,2} }, probability 0.2 of...”). This is called the Distribution Stochastic Shortest Path Problem (d-SSPPR or R-SSPPR) and is NP-complete. The first variant is harder than the second because the former can represent in logarithmic space some distributions that the latter represents in linear space. - -The fourth and final parameter is how the graph changes over time. In CTP and SSPPR, the realization is fixed but not known. In the Stochastic Shortest Path Problem with Recourse and Resets or the Expected Shortest Path problem, a new realization is chosen from the distribution after each step taken by the policy. This problem can be solved in polynomial time by reducing it to a Markov decision process with polynomial horizon. The Markov generalization, where the realization of the graph may influence the next realization, is known to be much harder. - -An additional parameter is how new knowledge is being discovered on the realization. In traditional variants of CTP, the agent uncovers the exact weight (or status) of an edge upon reaching an adjacent vertex. A new variant was recently suggested where an agent also has the ability to perform remote sensing from any location on the realization. In this variant, the task is to minimize the travel cost plus the cost of sensing operations. - -We define the variant studied in the paper from 1989. That is, the goal is to minimize the competitive ratio in the worst case. It is necessary that we begin by introducing certain terms. - -Consider a given graph and the family of undirected graphs that can be constructed by adding one or more edges from a given set. Formally, let $\mathcal{G}(V,E,F) = \{(V,E+F') | F' \subseteq F\}, E \cap F = \emptyset$ where we think of E as the edges that must be in the graph and of F as the edges that may be in the graph. We say that $G \in \mathcal{G}(V,E,F)$ is a realization of the graph family. Furthermore, let W be an associated cost matrix where $w_{ij}$ is the cost of going from vertex i to vertex j, assuming that this edge is in the realization. - -For any vertex v in V, we call $E_B(v,V)$ its incident edges with respect to the edge set B on V. Furthermore, for a realization $G \in \mathcal{G}(V,E,F)$, let $d_B(s,t)$ be the cost of the shortest path in the graph from s to t. This is called the off-line problem because an algorithm for such a problem would have complete information of the graph. - -We say that a strategy $\pi$ to navigate such a graph is a mapping from $(\mathcal{P}(E),\mathcal{P}(F),V)$ to $V$, where $\mathcal{P}(X)$ denotes the powerset of X. We define the cost $c(\pi, B)$ of a strategy $\pi$ with respect to a particular realization $G = (V,B)$ as follows. - -* Let $v_0 = s, E_0 = E$ and $F_0 = F$. - -* For $i = 0, 1, 2, ...$, define - -** $E_{i+1} = E_i \cup E_B(v_i,V)$, - -** $F_{i+1} = F_i - E_F(v_i,V)$, and - -** $v_{i+1} = \pi(E_{i+1}, F_{i+1}, v_i)$. 
- -* If there exists a T such that $v_T = t$, then $c(\pi, B) = \sum_{i=0}^{T-1} w_{v_i,v_{i+1}}$; otherwise let $c(\pi, B) = \infty$. - -In other words, we evaluate the policy based on the edges we currently know are in the graph ($E_i$) and the edges we know might be in the graph ($F_i$). When we take a step in the graph, the edges incident to our new location become known to us. Those edges that are in the graph are added to $E_i$, and regardless of whether the edges are in the graph or not, they are removed from the set of unknown edges, $F_i$. If the goal is never reached, we say that we have an infinite cost. If the goal is reached, we define the cost of the walk as the sum of the costs of all of the edges traversed, with multiplicity. - -Finally, we define the Canadian traveller problem. - -Given a CTP instance $(V,E,F,s,t,r)$, decide whether there exists a policy $\pi$ such that for every realization $(V,B) \in \mathcal{G}(V,E,F)$, the cost $c(\pi, B)$ of the policy is no more than r times the off-line optimal, $d_B(s, t)$. - -Papadimitriou and Yannakakis noted that this defines a two-player game, where the players compete over the cost of their respective paths and the edge set is chosen by the second player (nature). - -The original paper analysed the complexity of the problem and reported it to be PSPACE-complete. It was also shown that finding an optimal path in the case where each edge has an associated probability of being in the graph (i-SSPPR) is a PSPACE-easy but ♯P-hard problem. It was an open problem to bridge this gap, but since then both the directed and undirected versions were shown to be PSPACE-hard. - -The directed version of the stochastic problem is known in operations research as the Stochastic Shortest Path Problem with Recourse. - -The problem is said to have applications in operations research, transportation planning, artificial intelligence, machine learning, communication networks, and routing. A variant of the problem has been studied for robot navigation with probabilistic landmark recognition. - -Despite the age of the problem and its many potential applications, many natural questions still remain open. Is there a constant-factor approximation or is the problem APX-hard? Is i-SSPPR #P-complete? An even more fundamental question has been left unanswered: is there a polynomial-size description of an optimal policy, setting aside for a moment the time necessary to compute the description? diff --git a/wiki/wikipedia/2816.txt b/wiki/wikipedia/2816.txt deleted file mode 100644 index 0a6220d09d47171504e8483a06e0d79c701c4f27..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2816.txt +++ /dev/null @@ -1,28 +0,0 @@ -In combinatorial mathematics, specifically in combinatorial design theory and combinatorial matrix theory, the Williamson conjecture states that Williamson matrices of order $n$ exist for all positive integers $n$. - -Four symmetric and circulant matrices $A$, $B$, $C$, $D$ are known as Williamson matrices if their entries are $\pm1$ and they satisfy the relationship -$$ -A^2 + B^2 + C^2 + D^2 = 4n I -$$ - -where $I$ is the identity matrix of order $n$. John Williamson showed that if $A$, $B$, $C$, $D$ are Williamson matrices then -$$ -\begin{bmatrix} - -A & B & C & D \\ - --B & A & -D & C \\ - --C & D & A & -B \\ - --D & -C & B & A - -\end{bmatrix} -$$ - -is an Hadamard matrix of order $4n$.
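Williamson's construction is easy to verify computationally for small orders. The sketch below (illustrative only; the order-3 quadruple is a standard known example, and numpy/scipy are assumed) checks the defining identity and confirms that the assembled block matrix is Hadamard:

```python
# Hedged sketch: verify Williamson's construction at order n = 3.
import numpy as np
from scipy.linalg import circulant

n = 3
A = circulant([1, 1, 1])             # symmetric circulant +/-1 matrices
B = C = D = circulant([1, -1, -1])

# the defining relationship A^2 + B^2 + C^2 + D^2 = 4n I
assert np.array_equal(A @ A + B @ B + C @ C + D @ D, 4 * n * np.eye(n))

# Williamson's array yields a Hadamard matrix of order 4n
H = np.block([[ A,  B,  C,  D],
              [-B,  A, -D,  C],
              [-C,  D,  A, -B],
              [-D, -C,  B,  A]])
assert np.array_equal(H @ H.T, 4 * n * np.eye(4 * n))
print("Hadamard of order", 4 * n)
```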
- -It was once considered likely that Williamson matrices exist for all orders $n$, and that the structure of Williamson matrices could provide a route to proving the Hadamard conjecture that Hadamard matrices exist for all orders $4n$. - -However, in 1993 the Williamson conjecture was shown to be false via an exhaustive computer search by Dragomir Ž. Ðoković, who showed that no Williamson matrices of order $n=35$ exist. In 2008, further counterexamples of orders 47, 53, and 59 were discovered. diff --git a/wiki/wikipedia/2817.txt b/wiki/wikipedia/2817.txt deleted file mode 100644 index 4dcaa1f28a5f95f9a504df9e9210c7b67a42eff4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2817.txt +++ /dev/null @@ -1,21 +0,0 @@ -In graph theory, a book graph (often written $B_p$) may be any of several kinds of graph formed by multiple cycles sharing an edge. - -One kind, which may be called a quadrilateral book, consists of p quadrilaterals sharing a common edge (known as the "spine" or "base" of the book). That is, it is the Cartesian product of a star and a single edge. The 7-page book graph of this type provides an example of a graph with no harmonious labeling. - -A second type, which might be called a triangular book, is the complete tripartite graph K1,1,p. It is a graph consisting of $p$ triangles sharing a common edge. A book of this type is a split graph. - -This graph has also been called a $K_e(2,p)$ or a thagomizer graph (after thagomizers, the pointy tails of Stegosaurus dinosaurs, because of their pointy appearance in certain drawings), and their graphic matroids have been called thagomizer matroids. Triangular books form one of the key building blocks of line perfect graphs. - -The term "book-graph" has been employed for other uses. Barioli used it to mean a graph composed of a number of arbitrary subgraphs having two vertices in common. (Barioli did not write $B_p$ for his book-graph.) - -Given a graph $G$, one may write $bk(G)$ for the largest book (of the kind being considered) contained within $G$. - -Denote the Ramsey number of two triangular books by $r(B_p,\ B_q).$ This is the smallest number $r$ such that for every $r$-vertex graph, either the graph itself contains $B_p$ as a subgraph, or its complement graph contains $B_q$ as a subgraph. - -* If $1\leq p\leq q$, then $r(B_p,\ B_q)=2q+3$. - -* There exists a constant $c=o(1)$ such that $r(B_p,\ B_q)=2q+3$ whenever $q\geq cp$. - -* If $ p\leq q/6+o(q)$, and $q$ is large, the Ramsey number is given by $2q+3$. - -* Let $C$ be a constant, and $k = Cn$. Then every graph on $n$ vertices and $m$ edges contains a (triangular) $B_k$. diff --git a/wiki/wikipedia/2818.txt b/wiki/wikipedia/2818.txt deleted file mode 100644 index ab72c2953fd841b77599f1748d512979d1826e08..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2818.txt +++ /dev/null @@ -1,35 +0,0 @@ -In mathematical logic, a theory is categorical if it has exactly one model (up to isomorphism). Such a theory can be viewed as defining its model, uniquely characterizing its structure. - -In first-order logic, only theories with a finite model can be categorical. Higher-order logic contains categorical theories with an infinite model. For example, the second-order Peano axioms are categorical, having a unique model whose domain is the set of natural numbers $\mathbb{N}.$ - -In model theory, the notion of a categorical theory is refined with respect to cardinality.
A theory is κ-categorical (or categorical in κ) if it has exactly one model of cardinality κ up to isomorphism. Morley's categoricity theorem is a theorem of Michael D. Morley stating that if a first-order theory in a countable language is categorical in some uncountable cardinality, then it is categorical in all uncountable cardinalities. - -Saharon Shelah extended Morley's theorem to uncountable languages: if the language has cardinality κ and a theory is categorical in some uncountable cardinal greater than or equal to κ, then it is categorical in all cardinalities greater than κ. - -Oswald Veblen in 1904 defined a theory to be categorical if all of its models are isomorphic. It follows from the definition above and the Löwenheim–Skolem theorem that any first-order theory with a model of infinite cardinality cannot be categorical. One is then immediately led to the more subtle notion of κ-categoricity, which asks: for which cardinals κ is there exactly one model of cardinality κ of the given theory T up to isomorphism? This is a deep question, and significant progress was only made in 1954 when Jerzy Łoś noticed that, at least for complete theories T over countable languages with at least one infinite model, he could only find three ways for T to be κ-categorical at some κ: - -*T is totally categorical, i.e. T is κ-categorical for all infinite cardinals κ. - -*T is uncountably categorical, i.e. T is κ-categorical if and only if κ is an uncountable cardinal. - -*T is countably categorical, i.e. T is κ-categorical if and only if κ is a countable cardinal. - -In other words, he observed that, in all the cases he could think of, κ-categoricity at any one uncountable cardinal implied κ-categoricity at all other uncountable cardinals. This observation spurred a great amount of research into the 1960s, eventually culminating in Michael Morley's famous result that these are in fact the only possibilities. The theory was subsequently extended and refined by Saharon Shelah in the 1970s and beyond, leading to stability theory and Shelah's more general programme of classification theory. - -There are not many natural examples of theories that are categorical in some uncountable cardinal. The known examples include: - -* Pure identity theory (with no functions, constants, predicates other than "=", or axioms). - -* The classic example is the theory of algebraically closed fields of a given characteristic. Categoricity does not say that all algebraically closed fields of characteristic 0 as large as the complex numbers C are the same as C; it only asserts that they are isomorphic as fields to C. It follows that although the completed p-adic closures Cp are all isomorphic as fields to C, they may (and in fact do) have completely different topological and analytic properties. The theory of algebraically closed fields of given characteristic is not categorical in ω (the countable infinite cardinal); there are models of transcendence degree 0, 1, 2, ..., ω. - -* Vector spaces over a given countable field. This includes abelian groups of given prime exponent (essentially the same as vector spaces over a finite field) and divisible torsion-free abelian groups (essentially the same as vector spaces over the rationals). - -* The theory of the set of natural numbers with a successor function. - -There are also examples of theories that are categorical in ω but not categorical in uncountable cardinals. - -The simplest example is the theory of an equivalence relation with exactly two equivalence classes, both of which are infinite.
Another example is the theory of dense linear orders with no endpoints; Cantor proved that any such countable linear order is isomorphic to the rational numbers. - -Every categorical theory is complete. However, the converse does not hold. - -Any theory T categorical in some infinite cardinal κ is very close to being complete. More precisely, the Łoś–Vaught test states that if a satisfiable theory has no finite models and is categorical in some infinite cardinal κ at least equal to the cardinality of its language, then the theory is complete. The reason is that all infinite models are elementarily equivalent to some model of cardinality κ by the Löwenheim–Skolem theorem, and so are all elementarily equivalent, as the theory is categorical in κ. Therefore, the theory is complete, as all of its models are elementarily equivalent. The assumption that the theory has no finite models is necessary. diff --git a/wiki/wikipedia/2819.txt b/wiki/wikipedia/2819.txt deleted file mode 100644 index 44fcd7a8e70e1af10954379e70ca87e387383447..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2819.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, the Atiyah–Jones conjecture is a conjecture about the homology of the moduli spaces of instantons. The original form of the conjecture considered instantons over a 4-dimensional sphere. It was introduced by Michael Atiyah and John D. S. Jones, and proved by Charles P. Boyer, Jacques Hurtubise, Benjamin M. Mann, and R. James Milgram. The more general version of the Atiyah–Jones conjecture is a question about the homology of the moduli spaces of instantons on any 4-dimensional real manifold, or on a complex surface. The Atiyah–Jones conjecture has been proved for ruled surfaces by R. J. Milgram and J. Hurtubise, and for rational surfaces by Elizabeth Gasparim. The conjecture remains unproved for other types of 4-manifolds. diff --git a/wiki/wikipedia/282.txt b/wiki/wikipedia/282.txt deleted file mode 100644 index 6aeedbef5b18f7be3ee7dc88ff15d3957e59dac8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/282.txt +++ /dev/null @@ -1,7 +0,0 @@ -Cartan's theorem may refer to several mathematical results by Élie Cartan: - -*Closed-subgroup theorem, 1930, that any closed subgroup of a Lie group is a Lie subgroup - -*Theorem of the highest weight, that the irreducible representations of Lie algebras or Lie groups are classified by their highest weights - -*Lie's third theorem, an equivalence between Lie algebras and simply-connected Lie groups diff --git a/wiki/wikipedia/2820.txt b/wiki/wikipedia/2820.txt deleted file mode 100644 index b40ad145a0b6337cd6ba9aaccdb1808b40c78846..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2820.txt +++ /dev/null @@ -1,68 +0,0 @@ -In mathematics, the Jacobi identity is a property of a binary operation that describes how the order of evaluation, that is, the placement of parentheses in a multiple product, affects the result of the operation. By contrast, for operations with the associative property, any order of evaluation gives the same result (parentheses in a multiple product are not needed). The identity is named after the German mathematician Carl Gustav Jakob Jacobi. - -The cross product $a\times b$ and the Lie bracket operation $[a,b]$ both satisfy the Jacobi identity. In analytical mechanics, the Jacobi identity is satisfied by the Poisson brackets. In quantum mechanics, it is satisfied by operator commutators on a Hilbert space and, equivalently, in the phase space formulation of quantum mechanics by the Moyal bracket. - -Let $+$ and $\times$ be two binary operations, and let $0$ be the neutral element for $+$.
The Jacobi identity is - -$$x \times (y \times z) \ +\ y \times (z \times x) \ +\ z \times (x \times y)\ =\ 0.$$ - -Notice the pattern in the variables on the left side of this identity. In each subsequent expression of the form $a \times (b \times c)$, the variables $x$, $y$ and $z$ are permuted according to the cycle $x \mapsto y \mapsto z \mapsto x$. Alternatively, we may observe that the ordered triples $(x,y,z)$, $(y,z,x)$ and $(z,x,y)$ are the even permutations of the ordered triple $(x,y,z)$. - -The simplest informative example of a Lie algebra is constructed from the (associative) ring of $n\times n$ matrices, which may be thought of as infinitesimal motions of an n-dimensional vector space. The × operation is the commutator, which measures the failure of commutativity in matrix multiplication. Instead of $X\times Y$, the Lie bracket notation is used: -$$ -[X,Y]=XY-YX. -$$ - -In that notation, the Jacobi identity is: -$$ -[X, [Y, Z] ] + [Y, [Z, X] ] + [Z, [X, Y] ] \ =\ 0. -$$ - -That is easily checked by computation. - -More generally, if A is an associative algebra and V is a subspace of A that is closed under the bracket operation ($[X,Y]=XY-YX$ belongs to V for all $X,Y\in V$), the Jacobi identity continues to hold on V. Thus, if a binary operation $[X,Y]$ satisfies the Jacobi identity, it may be said that it behaves as if it were given by $XY-YX$ in some associative algebra even if it is not actually defined that way. - -Using the antisymmetry property $[X,Y]=-[Y,X]$, the Jacobi identity may be rewritten as a modification of the associative property: -$$ -[[X, Y], Z] = [X, [Y, Z]] - [Y, [X, Z]]~. -$$ - -If $[X,Z]$ is the action of the infinitesimal motion X on Z, this can be stated as: the action of $[X,Y]$ is the action of $Y$ followed by the action of $X$, minus the action of $X$ followed by the action of $Y$. - -There is also a plethora of graded Jacobi identities involving anticommutators $\{X,Y\}$, such as: -$$ -[\{X,Y\},Z]+ [\{Y,Z\},X]+[\{Z,X\},Y] =0,\qquad [\{X,Y\},Z]+ \{[Z,Y],X\}+\{[Z,X],Y\} =0. -$$ - -The most common examples of the Jacobi identity come from the bracket multiplication $[x,y]$ on Lie algebras and Lie rings. The Jacobi identity is written as: -$$ -[x,[y,z]] + [z,[x,y]] + [y,[z,x]] = 0. -$$ - -Because the bracket multiplication is antisymmetric, the Jacobi identity admits two equivalent reformulations. Defining the adjoint operator $\operatorname{ad}_x: y \mapsto [x,y]$, the identity becomes: -$$ -\operatorname{ad}_x[y,z]=[\operatorname{ad}_xy,z]+[y,\operatorname{ad}_xz]. -$$ - -Thus, the Jacobi identity for Lie algebras states that the action of any element on the algebra is a derivation. That form of the Jacobi identity is also used to define the notion of Leibniz algebra. - -Another rearrangement shows that the Jacobi identity is equivalent to the following identity between the operators of the adjoint representation: -$$ -\operatorname{ad}_{[x,y]}=[\operatorname{ad}_x,\operatorname{ad}_y]. -$$ - -There, the bracket on the left side is the operation of the original algebra, the bracket on the right is the commutator of the composition of operators, and the identity states that the $\mathrm{ad}$ map sending each element to its adjoint action is a Lie algebra homomorphism. - -The Hall–Witt identity is the analogous identity for the commutator operation in a group. - -The following identity follows from anticommutativity and the Jacobi identity, and holds in an arbitrary Lie algebra: -$$ -[x,[y,[z,w]]] + [y,[x,[w,z]]] + [z,[w,[x,y]]] + [w,[z,[y,x]]] = 0.
-$$ diff --git a/wiki/wikipedia/2821.txt b/wiki/wikipedia/2821.txt deleted file mode 100644 index ea491a9e1f18dbe3962da74b348e1cd8528ec261..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2821.txt +++ /dev/null @@ -1,43 +0,0 @@ -The spherical Bernstein's problem is a possible generalization of the original Bernstein's problem in the field of global differential geometry, first proposed by Shiing-Shen Chern in 1969, and then again in 1970, during his plenary address at the International Congress of Mathematicians in Nice. - -Are the equators in $\mathbb{S}^{n+1}$ the only smooth embedded minimal hypersurfaces which are topological $n$-dimensional spheres? - -Additionally, the spherical Bernstein's problem, while itself a generalization of the original Bernstein's problem, can, too, be generalized further by replacing the ambient space $\mathbb{S}^{n+1}$ by a simply-connected, compact symmetric space. Some results in this direction are due to the work of Wu-Chung Hsiang and Wu-Yi Hsiang. - -Below are two alternative ways to express the problem: - -Let the (n − 1)-sphere be embedded as a minimal hypersurface in $S^n(1)$. Is it necessarily an equator? - -By the Almgren–Calabi theorem, this is true when n = 3 (or n = 2 for the first formulation). - -Wu-Chung Hsiang proved it for n ∈ {4, 5, 6, 7, 8, 10, 12, 14} (or n ∈ {3, 4, 5, 6, 7, 9, 11, 13}, respectively). - -In 1987, Per Tomter proved it for all even n (or all odd n, respectively). - -Thus, it only remains unknown for all odd n ≥ 9 (or all even n ≥ 8, respectively). - -Is it true that an embedded, minimal hypersphere inside the Euclidean $n$-sphere is necessarily an equator? - -Geometrically, the problem is analogous to the following problem: - -Is the local topology at an isolated singular point of a minimal hypersurface necessarily different from that of a disc? - -For example, the affirmative answer for the spherical Bernstein problem when n = 3 is equivalent to the fact that the local topology at an isolated singular point of any minimal hypersurface in an arbitrary Riemannian 4-manifold must be different from that of a disc. - -*F.J. Almgren, Jr., Some interior regularity theorems for minimal surfaces and an extension of Bernstein's theorem, Annals of Mathematics, volume 85, number 1 (1966), pp. 277–292 - -*E. Calabi, Minimal immersions of surfaces in euclidean spaces, Journal of Differential Geometry, volume 1 (1967), pp. 111–125 - -*P. Tomter, The spherical Bernstein problem in even dimensions and related problems, Acta Mathematica, volume 158 (1987), pp. 189–212 - -*S.S. Chern, Brief survey of minimal submanifolds, Tagungsbericht (1969), Mathematisches Forschungsinstitut Oberwolfach - -*S.S. Chern, Differential geometry, its past and its future, Actes du Congrès international des mathématiciens (Nice, 1970), volume 1, pp. 41–53, Gauthier-Villars (1971) - -*W.Y. Hsiang, W.T. Hsiang, P. Tomter, On the existence of minimal hyperspheres in compact symmetric spaces, Annales Scientifiques de l'École Normale Supérieure, volume 21 (1988), pp. 287–305 - -Category:Mathematical problems - -Category:Unsolved problems in geometry diff --git a/wiki/wikipedia/2822.txt b/wiki/wikipedia/2822.txt deleted file mode 100644 index af1c188e3f2795f43d3707f183db6b01b33f5e34..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2822.txt +++ /dev/null @@ -1,115 +0,0 @@ -iCloud is a cloud storage and cloud computing service from Apple Inc. launched on October 12, 2011.
As of 2018, the service had an estimated 850 million users, up from 782 million users in 2016. - -iCloud enables users to store data such as documents, photos, and music on remote servers for download to iOS, macOS or Windows devices, to share and send data to other users, and to manage their Apple devices if lost or stolen. - -iCloud also provides the means to wirelessly back up iOS devices directly to iCloud, instead of being reliant on manual backups to a host Mac or Windows computer using iTunes. Service users are also able to share photos, music, and games instantly by linking accounts via AirDrop. - -iCloud replaced Apple's MobileMe service. - -Beginning in 2011, iCloud was based on Amazon Web Services and Microsoft Azure (in the Apple iOS Security white paper published in 2014, Apple acknowledged that encrypted iOS files are stored in Amazon S3 and Microsoft Azure). In 2016, Apple signed a deal with Google to use Google Cloud Platform for some iCloud services. - -In October 2016, Bloomberg reported that Apple was working on project Pie, which aims to improve the speed and experience of Apple's online services by having them operated more directly by Apple. - -In June 2021, Apple introduced iCloud+, which added Private Relay, Hide My Email, and custom email domains for paid users of the service, as well as unlimited storage for video from cameras added through HomeKit Secure Video. - -iCloud account creation requires either an iOS device running iOS 5 or later or a Mac running OS X Lion v10.7.5 or later, as well as an internet connection and a compatible web browser. Also, certain features have their own minimum requirements of OS versions. For example, using iCloud Photo Sharing requires OS X Mavericks v10.9 or above on a Mac. - -Devices running older versions of macOS (before Mavericks) or iOS (below 7) may be unable to sign into iCloud after the iCloud password has been changed: the only resolution for this issue is to upgrade the OS, which may be impossible on a device that does not meet the newer OS minimum requirements. - -Synchronizing with a PC requires Windows 7 or later and the iCloud Control Panel, and optionally Outlook 2007 or later or the built-in Windows 10 Mail and Calendar apps to sync Calendar, Contacts, and Reminders. Users must own an Apple device to set up iCloud for Windows. - -Synchronization of bookmarks requires Safari 5.1.1 or later on macOS, and Internet Explorer 9, Firefox 22 or Google Chrome 28 or later on Windows. - -MobileMe account users could move their accounts to an iCloud account, keeping the same account details. - -iCloud was announced on June 6, 2011, at the 2011 Apple Worldwide Developers Conference (WWDC). Apple announced that MobileMe would be discontinued after June 30, 2012, with anyone who had an account before the unveiling of iCloud having their MobileMe service extended to that date, free of charge. - -The official website, www.icloud.com, went live in early August for Apple Developers. On October 12, 2011, iCloud became available to use via an iTunes update. iCloud had 20 million users in less than a week after launch. The iCloud.com domain and registered trademark were bought from a Swedish company called Xcerion, who rebranded their service to CloudMe. Apple now controls major domains like iCloud.de, iCloud.fr and iCloud.es. - -A class action lawsuit by customers unhappy over the transition from MobileMe to iCloud was filed in early May 2012. - -In June 2019, iCloud was introduced to Windows 10 via the Microsoft Store.
- -On June 7, 2021, during the 2021 Apple Worldwide Developers Conference, Apple introduced iCloud+, an upgraded version of iCloud for users who pay for additional storage. iCloud+ includes Private Relay, which allows users to browse Safari without being tracked; Hide My Email, which allows users to sign up to websites and other apps with a private email address that forwards messages to their main inbox; and updates to HomeKit Secure Video, which allow iCloud+ users to add an unlimited number of HomeKit cameras that do not count against the storage limit. - -The first official mention of iCloud from Apple came on May 31, 2011, when a press release announced that it would demonstrate the service at the WWDC on June 6, 2011. A banner hung at the Moscone Center for WWDC revealed the iCloud logo five days before the official launch. - -At the WWDC 2011 keynote speech, Steve Jobs (in one of his last public appearances) announced that iCloud would replace MobileMe services and that the basic iCloud service would be free of charge. - -The cloud-based system allows users to store heterogeneous music, photos, applications, documents, bookmarks, reminders, backups, notes, Apple Books, and contacts, and provides a platform for Apple's email servers and calendars. Third-party iOS and macOS app developers can implement iCloud functionality in their apps through the iCloud API. - -iCloud allows users to back up the settings and data on iOS devices running iOS 5 or later. Data backed up includes photos and videos in the Camera Roll, device settings, app data, messages (iMessage, SMS, and MMS), ringtones, and Visual Voicemails. Backups occur daily when the device is locked and connected to Wi-Fi and a power source. If an Apple device malfunctions, iCloud offers during the restoration process to restore all data, along with app data, provided the device was previously synced and backed up to iCloud. - -Back to My Mac, also previously part of MobileMe, is now part of iCloud. As before, this service allows users to log in remotely to other computers that have Back to My Mac enabled and are configured with the same Apple ID. On August 9, 2018, Apple noted that Back to My Mac would not be part of the upcoming macOS Mojave (10.14) release. - -An iCloud account can include an email account, much like MobileMe, .Mac, and iTools did previously. However, unlike MobileMe and its previous iterations, the email account is an optional part of an iCloud account, in that the user can choose to use a non-iCloud email address as their iCloud Apple ID. The email account can be accessed using any standard IMAP-compatible email client, as well as via web browser at iCloud.com. Additionally, on an iOS device, iCloud email is push-enabled. - -Users who converted existing MobileMe accounts to iCloud accounts kept their existing "@me.com" email addresses; users whose accounts pre-dated MobileMe and had both me.com and mac.com email addresses kept both. As well as retaining their previous addresses, users also received the matching "@icloud.com" address. As there is only one mailbox per account, all messages sent to any of a user's iCloud email addresses end up in the same inbox. - -Find My Friends was added to iCloud alongside the launch of iOS 5, allowing users to share their current location with their friends or family. iOS 6 added location-based alerts to notify the user when a device arrives at a certain location. - -On iOS 9 and 10, Find My Friends is built into iOS and cannot be removed.
From iOS 11 onwards it is included, but can be deleted and then subsequently reinstalled from the iOS App Store. - -In October 2015, Find My Friends was added to iCloud.com to view friends' locations. - -Find My iPhone, formerly part of MobileMe, allows users to track the location of their iOS device or Mac. A user can see the device's approximate location on a map (along with a circle showing the radius depicting the margin of error), display a message and/or play a sound on the device (even if it is set to silent), change the password on the device, and remotely erase its contents. The feature was first announced on June 10, 2009, and was included in the iOS 3.0 software update as a feature for paying MobileMe users. Find My iPhone was made free of charge with the iOS 4.2.1 software update on November 22, 2010, but only for devices introduced in 2010. An iOS app was also released by Apple on June 18, 2010, which allows users to locate their device from other iOS devices running iOS 4 or later software. In iOS 5, Find My iPhone was continued as a feature for iCloud. iOS 6 introduced Lost Mode, a new feature that allows the user to mark a device as "lost", making it easier to protect and find. The feature also allows someone who finds the user's lost iPhone to call the user directly without unlocking it. Similar phone finder services under various names are available for other families of smartphones. - -Activation Lock was introduced in 2013 with iOS 7. It is integrated with iCloud and the Find My iPhone feature. This new feature locks the activation of any iPhone, iPad, iPod touch or Apple Watch which has been restored in either DFU or Recovery mode without first disabling the Find My iPhone feature. Once the restore is completed, the device will ask for the Apple ID and password that has been previously associated with it to proceed with activation, ultimately preventing any stolen device from being usable. - -As of iOS 9, Find My iPhone is a built-in app, and thus cannot be removed. - -In iOS and iPadOS 13, both Find My iPhone and Find My Friends were replaced by Find My, which merged the two apps. - -iCloud Keychain is a password manager developed by Apple that syncs passwords across devices and suggests secure ones when creating new accounts. - -iCloud Keychain backups provide different security guarantees than traditional iCloud backups. This is because iCloud Keychain uses "end-to-end encryption", meaning that iCloud Keychain backups are designed so that the provider does not have access to unencrypted data. This is accomplished through the use of a novel "key vault" design based on a Hardware Security Module located in Apple's data centers. - -iTunes Match debuted on November 14, 2011. It was initially available to US users only. For an annual fee, customers can scan and match tracks in their iTunes music library, including tracks copied from CDs or other sources, with tracks in the iTunes Store, so customers do not have to repurchase said tracks. Customers may download up to 100,000 tracks in 256 kbit/s DRM-free AAC file format that match tracks in any supported audio file format in the customers' iTunes libraries, including ALAC and MP3. Customers also have the choice to keep their original copies stored on their computers or have them replaced by copies from the iTunes Store.
Any music not available in the iTunes Store is uploaded for download onto customers' other supported devices and computers; doing this will not take storage from the customers' iCloud storage allowance. Any such tracks stored in the higher quality lossless audio ALAC, or original uncompressed PCM formats, WAV and AIFF, are transcoded to 256 kbit/s DRM-free AAC format before uploading to the customers' iCloud storage account, leaving the original higher quality local files in their original format. - -If a user stops paying for the iTunes Match service, all copies of the DRM-free AAC iTunes Store versions of tracks that have already been downloaded onto any device can be kept, whether on iOS devices or computers. - -The streaming Genius shuffle is not available in current versions of iOS but is available in iTunes on the Mac. - -On January 28, 2016, ad-free iTunes Radio was discontinued and is therefore no longer part of iTunes Match. - -iTunes Match is available in 116 countries, while iTunes in the Cloud is available in 155 countries. - -During the 2013 Apple Worldwide Developers Conference (WWDC) keynote speech, iWork for iCloud was announced for release at the same time as the next versions of the iWork apps later in the year. The three apps for both iOS and macOS that form Apple's iWork suite (Pages, Numbers, and Keynote) will be made available on a web interface (named Pages for iCloud, Numbers for iCloud, and Keynote for iCloud respectively), and accessed via the iCloud website under each user's iCloud Apple ID login. They will also sync with the user's iOS and macOS versions of the app, should they have them, again via their iCloud Apple ID. - -This allows the user to edit and create documents on the web, using one of the supported browsers: Safari, Chrome, and Microsoft Edge. It also means that Microsoft Windows users now have access to these native, previously Apple-device-only, document editing tools via the web interface. - -Photo Stream is a service supplied with the basic iCloud service which allows users to store the most recent 1,000 photos on the iCloud servers for up to 30 days free of charge. When a photo is taken on a device with Photo Stream enabled, it is automatically uploaded to the iCloud servers. From there, it becomes available for viewing and saving on the rest of the user's Photo Stream-enabled devices. The photo is automatically removed from the server after 30 days or when it becomes photo number 1,001 in the user's stream. Photo Stream installed on a Mac or Windows desktop computer includes an option to have all photos permanently saved on that device. The service is also integrated with Apple TV, allowing users to view their recent photos wirelessly on their HDTV. - -iCloud Photos is a feature on iOS 8.1 or later and OS X Yosemite (version 10.10) or later, plus web app access. The service stores all of the user's photos, maintaining their original resolution and metadata. Users can access their iCloud Photos on supported devices via the new Photos app when available or via the iCloud Photos web app at iCloud.com, which helps limit the amount of local storage each device needs to use to store photos (particularly those with smaller storage capacities) by storing lower-resolution versions on the device, with the user having the option to keep some/all stored locally at a higher resolution.
- -Since its introduction in 2011, each account has 5 GB of free storage for owners of either an iOS device using iOS 5.x or later, or a Mac using OS X Lion 10.7 or later. Users can purchase additional storage for a total of 50 GB, 200 GB or 2 TB. The amount of storage is shared across all devices per iCloud Apple ID. - -Several native features of iCloud use each user's iCloud storage allowance, specifically Backup and Restore and Email, Contacts, and Calendars. On Macs, users can also store most filetypes into iCloud folders of their choosing, rather than only storing them locally on the machine. While Photo Stream uses the iCloud servers, usage does not come out of the user's iCloud storage allowance. This is also true for iTunes Match music content: even music that is not sold in the iTunes Store and which gets uploaded into iCloud storage does not count against the user's allowance. Other apps can optionally integrate app storage out of the user's iCloud storage allowance. - -Not all of a user's content counts as part of their iCloud storage allowance. Apple keeps a permanent record of every purchase a user makes under their Apple ID account, and by associating each piece of content with the user, only one copy of every Store item needs to be kept on Apple's servers. For items bought from the iTunes Store (music, music videos, movies, TV shows), Apple Books Store (books), or App Store (iOS apps), this uses a service Apple calls iTunes in the Cloud, allowing the user to automatically, or manually if preferred, re-download any of their previous purchases on to a Mac, PC, or iOS device. Similarly, macOS apps purchased from the Mac App Store are also linked to the Apple ID they were purchased through and can be downloaded to any Mac using the same Apple ID. Also, when a user registers any new device, all previously bought Store content can be downloaded from the Store servers, and non-Store content from the iCloud servers. - -Audiobooks and their metadata fields from non-Apple-purchased sources are not synced across devices (macOS or iOS) inside the Apple Books apps, and neither is the metadata from non-Apple-purchased books (in ebook or PDF format). A syncing mismatch between Apple-purchased and non-Apple-purchased content thus remains in effect for some types of media. - -iCloud Drive is iCloud's file hosting service that syncs files across devices running iOS 8, OS X Yosemite (version 10.10), or Windows 7 or later, plus online web app access via iCloud.com. Users can store any kind of file (including photos, videos, documents, music, and other apps' data) in iCloud Drive and access it on any Mac, iPad, iPhone, iPod Touch, or Windows PC, with any single file being a maximum of 50 GB in file size (earlier it was 15 GB). This allows users to start their work on one device and continue on another device. By default, users still get 5 GB of storage for free as previously, but the expandable storage plans available have increased in size (current tiers: 50 GB, 200 GB, and 2 TB) and have moved to monthly subscription pricing from the yearly options offered under the previous MobileMe service. - -In iOS 11, iCloud Drive has been integrated into the new Files app that gives users access to all their cloud and local on-device storage, which replaced the standalone iCloud Drive app. - -Messages on iCloud is a feature on iOS 11.4 and macOS High Sierra 10.13.5 which keeps all of a user's iMessages and SMS texts stored in the cloud.
- -Private Relay, an iCloud+ feature currently in beta, allows users to browse Safari privately, similar to a virtual private network. The Private Relay feature is not available in China, Belarus, Colombia, Egypt, Kazakhstan, Saudi Arabia, South Africa, Turkmenistan, Uganda, and the Philippines, but this may vary over time. - -Hide My Email is available to iCloud+ users and allows users in Mail and Safari to generate temporary Apple email addresses which forward messages to their main email address. Third-party developers have reported that changes implemented in the release of iOS 7 and OS X Mavericks (version 10.9) addressed earlier criticisms of iCloud. - -iCloud Communications, a telecommunications company in Arizona, sued Apple in June 2011 for trademark infringement shortly after Apple announced iCloud. The lawsuit was filed in the US District Court of Arizona and demanded that Apple stop using the iCloud name and pay unspecified monetary damages. iCloud Communications changed its name to Clear Digital Communications in August 2011 and dropped its lawsuit against Apple shortly thereafter. - -Apple's iCloud service, including iCloud Drive and iOS device backups, does not provide end-to-end encryption, also known as client-side encryption. Without end-to-end encryption, users' information is left unsecured because it remains easily accessible to unauthorized persons. Furthermore, Apple reserves the right to, and admits to, scanning user data for illegal content. - -In August 2014, it was rumored that hackers had discovered an exploit involving the Find My iPhone service, which potentially allowed an attacker to brute-force a user's Apple ID and access their iCloud data. The exploit was later incorrectly rumored to have been used as part of an August 2014 leak of a large number of private, nude photos of celebrities that had been synced to their iCloud storage from their iPhones. Apple confirmed that it was working with law enforcement agencies to investigate the leak. Apple subsequently denied that the iCloud service itself or the alleged exploit was responsible for the leak, asserting that the leaks were the result of a very targeted phishing attack against the celebrities. On September 13, 2014, Tim Cook, while being interviewed by Charlie Rose, stated on camera that the celebrity leaks were not an iCloud exploit at all, but rather that the celebrities had been tricked out of their login credentials by highly targeted phishing. - -Apple has been scanning iCloud Mail for CSAM information since 2019. On August 5, 2021, Apple confirmed that it planned to start scanning iCloud Photos for the same reason. After a public backlash against the scanning of private photos, Apple announced that it would collect further input before releasing the new functionality. - -In February 2018, Apple announced that iCloud users in China would have their data, including encryption data, stored on servers called "云上贵州" ("Guizhou on the Cloud") located in the country, to comply with local regulations. This raised concerns from human rights activists, who claim that it may be used to track dissidents. In response, CEO Tim Cook stated that Apple encrypts "the same in every country in the world other than China". - -On June 7, 2021, during the WWDC event, Apple announced that iCloud's new 'private relay' feature would not work in China for regulatory reasons.
diff --git a/wiki/wikipedia/2823.txt b/wiki/wikipedia/2823.txt deleted file mode 100644 index 192368ea6e813ff7aa7ed4051c08f969980d5875..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2823.txt +++ /dev/null @@ -1,336 +0,0 @@ -In probability theory, the birthday problem asks for the probability that, in a set of n randomly chosen people, at least two will share a birthday. The birthday paradox is that, counterintuitively, the probability of a shared birthday exceeds 50% in a group of only 23 people. - -The birthday paradox is a veridical paradox: it appears wrong, but is in fact true. While it may seem surprising that only 23 individuals are required to reach a 50% probability of a shared birthday, this result is made more intuitive by considering that the comparisons of birthdays will be made between every possible pair of individuals. With 23 individuals, there are (23 × 22) / 2 = 253 pairs to consider, which is well over half the number of days in a year (182.5 or 183). - -Real-world applications for the birthday problem include a cryptographic attack called the birthday attack, which uses this probabilistic model to reduce the complexity of finding a collision for a hash function, as well as calculating the approximate risk of a hash collision existing within the hashes of a given size of population. - -The problem is generally attributed to Harold Davenport in about 1927, though he did not publish it at the time. Davenport did not claim to be its discoverer "because he could not believe that it had not been stated earlier". The first publication of a version of the birthday problem was by Richard von Mises in 1939. - -The problem is to compute an approximate probability that in a group of n people at least two have the same birthday. For simplicity, leap years, twins, seasonal, and weekday variations are disregarded, and it is assumed that all 365 possible birthdays are equally likely, which is the worst case, as an uneven distribution increases the probability of a shared birthday. The problem of a non-uniform number of births occurring during each day of the year was first addressed by Murray Klamkin in 1967. As it happens, the real-world distribution yields a critical size of 23 to reach 50%. - -The goal is to compute P(A), the probability that at least two people in the room have the same birthday. However, it is simpler to calculate P(A′), the probability that no two people in the room have the same birthday. Then, because A and A′ are the only two possibilities and are also mutually exclusive, P(A) = 1 − P(A′). - -Here is the calculation of P(A) for 23 people. Let the 23 people be numbered 1 to 23. The event that all 23 people have different birthdays is the same as the event that person 2 does not have the same birthday as person 1, and that person 3 does not have the same birthday as either person 1 or person 2, and so on, and finally that person 23 does not have the same birthday as any of persons 1 through 22. Let these events be called Event 2, Event 3, and so on. Event 1 is the event of person 1 having a birthday, which occurs with probability 1. This conjunction of events may be computed using conditional probability: the probability of Event 2 is 364/365, as person 2 may have any birthday other than the birthday of person 1. Similarly, the probability of Event 3 given that Event 2 occurred is 363/365, as person 3 may have any of the birthdays not already taken by persons 1 and 2. 
This continues until finally the probability of Event 23 given that all preceding events occurred is 343/365. Finally, the principle of conditional probability implies that P(A′) is equal to the product of these individual probabilities: -$$ -P(A')=\frac{365}{365}\times\frac{364}{365}\times\frac{363}{365}\times\frac{362}{365}\times\cdots\times\frac{343}{365} -$$ - -The terms of this equation can be collected to arrive at: -$$ -P(A')=\left(\frac{1}{365}\right)^{23}\times(365\times364\times363\times\cdots\times343) -$$ - -Evaluating this expression gives P(A′) ≈ 0.492703. - -Therefore, P(A) ≈ 1 − 0.492703 = 0.507297 (50.7297%). - -This process can be generalized to a group of n people, where p(n) is the probability of at least two of the n people sharing a birthday. It is easier to first calculate the probability $\bar p(n)$ that all n birthdays are different. According to the pigeonhole principle, $\bar p(n)$ is zero when n > 365. When n ≤ 365: -$$ - \begin{align} \bar p(n) &= 1 \times \left(1-\frac{1}{365}\right) \times \left(1-\frac{2}{365}\right) \times \cdots \times \left(1-\frac{n-1}{365}\right) \\[6pt] &= \frac{ 365 \times 364 \times \cdots \times (365-n+1) }{ 365^n } \\[6pt] &= \frac{ 365! }{ 365^n (365-n)!} = \frac{n!\cdot\binom{365}{n}}{365^n} = \frac{_{365}P_n}{365^n}\end{align} -$$ - -where ! is the factorial operator, $\binom{365}{n}$ is the binomial coefficient and $_kP_r$ denotes permutation. - -The equation expresses the fact that the first person has no one with whom to share a birthday, the second person cannot have the same birthday as the first (364/365), the third cannot have the same birthday as either of the first two (363/365), and in general the nth birthday cannot be the same as any of the n − 1 preceding birthdays. - -The event of at least two of the n persons having the same birthday is complementary to all n birthdays being different. Therefore, its probability p(n) is -$$ - p(n) = 1 - \bar p(n). -$$ - -Ignoring leap years and assuming each birthday to be equally likely, the probability rises quickly with n: for example, it exceeds 97% for n = 50 and 99.9% for n = 70. - -Leap years. If we substitute 366 for 365 in the formula for $ \bar p(n) $, a similar calculation shows that for leap years, the number of people required for the probability of a match to be more than 50% is also 23; the probability of a match in this case is 50.6%. - -The Taylor series expansion of the exponential function (the constant e ≈ 2.718281828) -$$ - e^x = 1 + x + \frac{x^2}{2!}+\cdots -$$ - -provides a first-order approximation for $e^x$ for $|x| \ll 1$: -$$ - e^x \approx 1 + x. -$$ - -To apply this approximation to the first expression derived for $\bar p(n)$, set x = −a/365. Thus, -$$ - e^{-a/365} \approx 1 - \frac{a}{365}. -$$ - -Then, replace a with non-negative integers for each term in the formula of $\bar p(n)$ until a = n − 1, for example, when a = 1, -$$ - e^{-1/365} \approx 1 - \frac{1}{365}. -$$ - -The first expression derived for $\bar p(n)$ can be approximated as - -\begin{align} - -\bar p(n) & \approx 1 \cdot e^{-1/365} \cdot e^{-2/365} \cdots e^{-(n-1)/365} \\[6pt] - -& = e^{-\left.\big( 1+2+ \cdots +(n-1)\big)\right/365} \\[6pt] - -& = e^{-(n(n-1)/2)/365} = e^{-n(n-1)/730}. - -\end{align} - -Therefore, -$$ - p(n) = 1-\bar p(n) \approx 1 - e^{-n(n-1)/730}. -$$ - -An even coarser approximation is given by -$$ -p(n)\approx 1-e^{-n^2/730}, -$$ - -which is still fairly accurate.
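As a quick numerical check of the exact product formula and the exponential approximation above, here is a short Python sketch; the helper names are our own, and the idealized model of 365 equally likely birthdays is the one assumed in the text.

```
from math import exp, prod

def p_shared(n, days=365):
    """Exact p(n): probability that at least two of n people share a birthday."""
    return 1 - prod((days - k) / days for k in range(n))

def p_shared_approx(n, days=365):
    """Taylor approximation p(n) ~ 1 - exp(-n(n-1)/(2*days))."""
    return 1 - exp(-n * (n - 1) / (2 * days))

print(p_shared(23))         # ~0.507297
print(p_shared_approx(23))  # ~0.500002
```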
According to the approximation, the same approach can be applied to any number of "people" and "days". If rather than 365 days there are d, if there are n persons, and if n ≪ d, then using the same approach as above we obtain the result that if p(n, d) is the probability that at least two out of n people share the same birthday from a set of d available days, then: - -\begin{align} - -p(n, d) & \approx 1-e^{-n(n-1)/(2d)} \\[6pt] - -& \approx 1-e^{-n^2/(2d)}. - -\end{align} - -The probability of any two people not having the same birthday is 364/365. In a room containing n people, there are $\binom{n}{2} = n(n-1)/2$ pairs of people, i.e. $\binom{n}{2}$ events. The probability of no two people sharing the same birthday can be approximated by assuming that these events are independent and hence by multiplying their probabilities together. In short, 364/365 can be multiplied by itself $\binom{n}{2}$ times, which gives us -$$ -\bar p(n) \approx \left(\frac{364}{365}\right)^\binom{n}{2}. -$$ - -Since this is the probability of no one having the same birthday, the probability of someone sharing a birthday is -$$ -p(n) \approx 1 - \left(\frac{364}{365}\right)^\binom{n}{2}. -$$ - -Applying the Poisson approximation for the binomial on the group of 23 people, -$$ -\operatorname{Poi}\left(\frac{\binom{23}{2}}{365}\right) =\operatorname{Poi}\left(\frac{253}{365}\right) \approx \operatorname{Poi}(0.6932) -$$ - -so -$$ -\Pr(X>0)=1-\Pr(X=0) \approx 1-e^{-0.6932} \approx 1-0.499998=0.500002. -$$ - -The result is over 50%, as in the previous descriptions. This approximation is the same as the one above based on the Taylor expansion that uses $ e^x \approx 1 + x$. - -A good rule of thumb which can be used for mental calculation is the relation -$$ -p(n) \approx \frac{n^2}{2m}, -$$ - -which can also be written as -$$ -n \approx \sqrt { 2m \times p(n)}, -$$ - -which works well for probabilities less than or equal to 1/2. In these equations, m is the number of days in a year. - -For instance, to estimate the number of people required for a 1/2 chance of a shared birthday, we get -$$ -n \approx \sqrt{ 2 \times 365 \times \tfrac12} = \sqrt{365} \approx 19, -$$ - -which is not too far from the correct answer of 23. - -This can also be approximated using the following formula for the number of people necessary to have at least a 1/2 chance of matching: -$$ -n \approx \tfrac{1}{2} + \sqrt{\tfrac{1}{4} + 2 \times \ln(2) \times 365} = 22.999943. -$$ - -This is a result of the good approximation that an event with 1/k probability will have a 1/2 chance of occurring at least once if it is repeated k ln 2 times. - -Figure: comparison of the birthday problem (1) and the birthday attack (2). In (1), collisions are found within one set, in this case 3 out of 276 pairings of the 24 lunar astronauts; in (2), collisions are found between two sets, in this case 1 out of 256 pairings of only the first bytes of SHA-256 hashes of 16 variants each of benign and harmful contracts. - -The lighter fields in this table show the number of hashes needed to achieve the given probability of collision (column) given a hash space of a certain size in bits (row). Using the birthday analogy: the "hash space size" resembles the "available days", the "probability of collision" resembles the "probability of shared birthday", and the "required number of hashed elements" resembles the "required number of people in a group".
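A sketch of how one entry of such a chart can be computed, using the birthday approximation $n \approx \sqrt{2d\ln\frac{1}{1-p}}$ with $d = 2^{\text{bits}}$; the helper name is hypothetical, not part of the original table.

```
from math import sqrt, log

def hashes_needed(bits, p):
    """Approximate number of random hashes at which the probability of at
    least one collision reaches p, for a hash space of size d = 2**bits."""
    d = 2 ** bits
    return sqrt(2 * d * log(1 / (1 - p)))

# One cell of such a chart: a 128-bit hash and a 50% collision probability.
print(f"{hashes_needed(128, 0.5):.2e}")  # roughly 2.2e19 hashes
```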
One could also use this chart to determine the minimum hash size required (given upper bounds on the hashes and probability of error), or the probability of collision (for fixed number of hashes and probability of error). - -For comparison, $10^{-18}$ to $10^{-15}$ is the uncorrectable bit error rate of a typical hard disk. In theory, 128-bit hash functions, such as MD5, should stay within that range until about $8.2\times 10^{11}$ documents, even if its possible outputs are many more. - -The argument below is adapted from an argument of Paul Halmos. - -As stated above, the probability that no two birthdays coincide is -$$ -1-p(n) = \bar p(n) = \prod_{k=1}^{n-1}\left(1-\frac{k}{365}\right) . -$$ - -As in earlier paragraphs, interest lies in the smallest n such that p(n) > 1/2; or equivalently, the smallest n such that $\bar p(n) < 1/2$. - -Using the inequality $1 - x < e^{-x}$ in the above expression, we replace 1 − k/365 with $e^{-k/365}$. This yields -$$ -\bar p(n) = \prod_{k=1}^{n-1}\left(1-\frac{k}{365}\right) < \prod_{k=1}^{n-1}\left(e^{-k/365}\right) = e^{-n(n-1)/730} . -$$ - -Therefore, the expression above is not only an approximation, but also an upper bound of $\bar p(n)$. The inequality -$$ - e^{-n(n-1)/730} < \frac{1}{2} -$$ - -implies $\bar p(n) < 1/2$. Solving for n gives -$$ -n^2-n > 730 \ln 2 . -$$ - -Now, 730 ln 2 is approximately 505.997, which is barely below 506, the value of $n^2-n$ attained when n = 23. Therefore, 23 people suffice. Incidentally, solving $n^2-n = 730 \ln 2$ for n gives the approximate formula of Frank H. Mathis cited above. - -This derivation only shows that at most 23 people are needed to ensure a birthday match with even chance; it leaves open the possibility that some n of 22 or fewer could also work. - -Given a year with d days, the generalized birthday problem asks for the minimal number n(d) such that, in a set of n randomly chosen people, the probability of a birthday coincidence is at least 50%. In other words, n(d) is the minimal integer n such that -$$ -1-\left(1-\frac{1}{d}\right)\left(1-\frac{2}{d}\right)\cdots\left(1-\frac{n-1}{d}\right)\geq \frac{1}{2}. -$$ - -The classical birthday problem thus corresponds to determining n(365). A similar calculation shows that n(d) = 23 when d is in the range 341–372. - -A number of bounds and formulas for n(d) have been published. For any d ≥ 1, the number n(d) satisfies -$$ -\frac{3-2\ln2}{6} < n(d)-\sqrt{2d\ln2} \leq 9-\sqrt{86\ln2}. -$$ - -The expected number of pairs of people sharing a birthday can be computed directly. In a group of k people, with n equally likely birthdays, define the indicator random variables $X_{ij}$, for $1\leq i < j\leq k$, by - -\begin{alignat}{2} - -X_{ij} & - -= I \{ \text{person }i\text{ and person }j\text{ have the same birthday} \} \\ & - -= \begin{cases} - -1, & \text{if person }i\text{ and person }j\text{ have the same birthday;} \\ - -0, & \text{otherwise.} - -\end{cases} - -\end{alignat} - -\begin{alignat}{2} - -E[X_{ij}] & - -= \Pr \{ \text{person }i\text{ and person }j\text{ have the same birthday} \} \\ & - -= 1/n - -\end{alignat} - -Let X be a random variable counting the pairs of individuals with the same birthday: - -X =\sum_{i=1}^k \sum_{j=i+1}^k X_{ij} - -\begin{alignat}{3} - -E[X] - -& = \sum_{i=1}^k \sum_{j=i+1}^k E[X_{ij}]\\ - -& = \binom{k}{2} \frac{1}{n}\\ - -& = \frac{k(k-1)}{2n}\\ - -\end{alignat} - -For n = 365, if k = 28, the expected number of pairs with the same birthday is $(28 \cdot 27) / (2 \cdot 365) \approx 1.0356.$ Therefore, we can expect at least one matching pair with at least 28 people.
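The linearity-of-expectation computation above is easy to confirm by simulation. The following sketch (our own illustration; the function names are hypothetical) compares the closed form k(k-1)/(2n) with a Monte Carlo estimate:

```
import random
from collections import Counter

def expected_pairs(k, n=365):
    """E[X] = C(k,2)/n: expected number of birthday-sharing pairs."""
    return k * (k - 1) / (2 * n)

def simulated_pairs(k, n=365, trials=100_000):
    """Monte Carlo estimate of the same expectation."""
    total = 0
    for _ in range(trials):
        counts = Counter(random.randrange(n) for _ in range(k))
        total += sum(c * (c - 1) // 2 for c in counts.values())
    return total / trials

print(expected_pairs(28))   # ~1.0356
print(simulated_pairs(28))  # close to 1.0356
```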
- -An informal demonstration of the problem can be made from the list of Prime Ministers of Australia, of which there have been 29, in which Paul Keating, the 24th prime minister, and Edmund Barton, the first prime minister, share the same birthday, 18 January. - -In the 2014 FIFA World Cup, each of the 32 squads had 23 players. An analysis of the official squad lists suggested that 16 squads had pairs of players sharing birthdays, and that of these, 5 squads had two pairs: Argentina, France, Iran, South Korea and Switzerland each had two pairs, and Australia, Bosnia and Herzegovina, Brazil, Cameroon, Colombia, Honduras, Netherlands, Nigeria, Russia, Spain and USA each had one pair. - -Voracek, Tran and Formann showed that the majority of people markedly overestimate the number of people that is necessary to achieve a given probability of people having the same birthday, and markedly underestimate the probability of people having the same birthday when a specific sample size is given. Further results showed that psychology students and women did better on the task than casino visitors/personnel or men, but were less confident about their estimates. - -The reverse problem is to find, for a fixed probability p, the greatest n for which the probability p(n) is smaller than the given p, or the smallest n for which the probability p(n) is greater than the given p. - -Taking the above formula for d = 365, one has -$$ -n(p;365)\approx \sqrt{730\ln\left(\frac{1}{1-p}\right)}. -$$ - -Sample calculations show that the approximation is not always exact; some computed values fall outside the bounds given above. - -A related problem is the partition problem, a variant of the knapsack problem from operations research. Some weights are put on a balance scale; each weight is an integer number of grams randomly chosen between one gram and one million grams (one tonne). The question is whether one can usually (that is, with probability close to 1) transfer the weights between the left and right arms to balance the scale. (In case the sum of all the weights is an odd number of grams, a discrepancy of one gram is allowed.) If there are only two or three weights, the answer is very clearly no; although there are some combinations which work, the majority of randomly selected combinations of three weights do not. If there are very many weights, the answer is clearly yes. The question is, how many are just sufficient? That is, what is the number of weights such that it is equally likely for it to be possible to balance them as it is to be impossible? - -Often, people's intuition is that the answer is above 100,000. Most people's intuition is that it is in the thousands or tens of thousands, while others feel it should at least be in the hundreds. The correct answer is 23. - -The reason is that the correct comparison is to the number of partitions of the weights into left and right. There are $2^{N-1}$ different partitions for N weights, and the left sum minus the right sum can be thought of as a new random quantity for each partition. The distribution of the sum of weights is approximately Gaussian, with a peak at $500{,}000N$ and width $1{,}000{,}000\sqrt{N}$, so that when $2^{N-1}$ is approximately equal to $1{,}000{,}000\sqrt{N}$ the transition occurs. $2^{23-1}$ is about 4 million, while the width of the distribution is only 5 million. Arthur C.
Clarke's novel A Fall of Moondust, published in 1961, contains a section where the main characters, trapped underground for an indefinite amount of time, are celebrating a birthday and find themselves discussing the validity of the birthday problem. As stated by a physicist passenger: "If you have a group of more than twenty-four people, the odds are better than even that two of them have the same birthday." Eventually, out of 22 present, it is revealed that two characters share the same birthday, May 23. diff --git a/wiki/wikipedia/2824.txt b/wiki/wikipedia/2824.txt deleted file mode 100644 index a080fbd52e02333f3710ce305d444460d600c3a6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2824.txt +++ /dev/null @@ -1,31 +0,0 @@ -The Indiana Pi Bill is the popular name for bill #246 of the 1897 sitting of the Indiana General Assembly, one of the most notorious attempts to establish mathematical truth by legislative fiat. Despite its name, the main result claimed by the bill is a method to square the circle, although it does imply various incorrect values of the mathematical constant pi, the ratio of the circumference of a circle to its diameter. The bill, written by a physician who was an amateur mathematician, never became law due to the intervention of Professor C. A. Waldo of Purdue University, who happened to be present in the legislature on the day it went up for a vote. - -The mathematical impossibility of squaring the circle using only compass and straightedge constructions, suspected since ancient times, had been rigorously proven 15 years previously, in 1882, by Ferdinand von Lindemann. Better approximations of pi than those implied by the bill have been known since ancient times. - -In 1894, Indiana physician Edward J. Goodwin (ca. 1825–1902) believed that he had discovered a correct way of squaring the circle. He proposed a bill to state representative Taylor I. Record, which Record introduced in the House under the long title "A Bill for an act introducing a new mathematical truth and offered as a contribution to education to be used only by the State of Indiana free of cost by paying any royalties whatever on the same, provided it is accepted and adopted by the official action of the Legislature of 1897". - -The text of the bill consists of a series of mathematical claims (detailed below), followed by a recitation of Goodwin's previous accomplishments: - -Goodwin's "solutions" were indeed published in the American Mathematical Monthly, though with a disclaimer of "published by request of the author". - -Upon its introduction in the Indiana House of Representatives, the bill's language and topic occasioned confusion among the membership; a member from Bloomington proposed that it be referred to the Finance Committee, but the Speaker accepted another member's recommendation to refer the bill to the Committee on Swamplands, where the bill could "find a deserved grave". It was transferred to the Committee on Education, which reported favorably; following a motion to suspend the rules, the bill passed on February 6, 1897. - -When it reached the Indiana Senate, the bill was not treated so kindly, for Waldo had coached the senators previously. The Committee on Temperance to which it had been assigned had reported it favorably, but the Senate on February 12, 1897, postponed the bill indefinitely. It had nearly passed, but opinion changed when one senator observed that the General Assembly lacked the power to define mathematical truth.
    ... the bill was brought up and made fun of. The Senators made bad puns about it, ridiculed it and laughed over it. The fun lasted half an hour. Senator Hubbell said that it was not meet for the Senate, which was costing the State $250 a day, to waste its time in such frivolity. He said that in reading the leading newspapers of Chicago and the East, he found that the Indiana State Legislature had laid itself open to ridicule by the action already taken on the bill. He thought consideration of such a proposition was not dignified or worthy of the Senate. He moved the indefinite postponement of the bill, and the motion carried.
-
-Although the bill has become known as the "Pi Bill", its text does not mention the name "pi" at all, and Goodwin appears to have thought of the ratio between the circumference and diameter of a circle as distinctly secondary to his main aim of squaring the circle. Towards the end of Section 2, a passage appears that comes close to an explicit claim that $\pi = 4/1.25 = 3.2$, and that $\sqrt{2} = 10/7 \approx 1.429$.
-
-The passage is often read as three mutually incompatible assertions, but they fit together well if the statement about $\sqrt{2}$ is taken to be about the inscribed square (with the circle's diameter as diagonal) rather than the square on the radius (with the chord of 90° as diagonal). Together they describe the circle shown in the figure, whose diameter is 10 and circumference is 32; the chord of 90° is taken to be 7. Both of the values 7 and 32 are within a few percent of the true lengths for a diameter-10 circle (which does not justify Goodwin's presentation of them as exact). The circumference should be nearer to 31.4159 and the diagonal "7" should be the square root of 50 (= 25 + 25), or nearer to 7.071.
-
-Goodwin's main goal was not to measure lengths in the circle but to square it, which he interpreted literally as finding a square with the same area as the circle. He knew that Archimedes' formula for the area of a circle, which calls for multiplying the diameter by one fourth of the circumference, is not considered a solution to the ancient problem of squaring the circle. This is because the problem is to construct the area using compass and straightedge only, and Archimedes did not give a method for constructing a straight line with the same length as the circumference. Apparently, Goodwin was unaware of this central requirement; he believed that the problem with the Archimedean formula is that it gives wrong numerical results, and that a solution of the ancient problem should consist of replacing it with a "correct" formula. In the bill he proposed, without argument, his own method.
-
-This appears needlessly convoluted, as an "equilateral rectangle" is, by definition, a square. In simple terms, the assertion is that the area of a circle is the same as that of a square with the same perimeter. This claim results in other mathematical contradictions to which Goodwin attempts to respond. For example, right after the above assertion, the bill goes on to address the resulting discrepancy with Archimedes' formula.
-
-In the model circle above, the Archimedean area (accepting Goodwin's values for the circumference and diameter) would be 80, whereas Goodwin's proposed rule leads to an area of 64. Now, 80 exceeds 64 by one fifth of 80, and Goodwin appears to confuse $64 = 80 \times (1 - \tfrac{1}{5})$ with $80 = 64 \times (1 + \tfrac{1}{5})$, which are incompatible.
-
-The area found by Goodwin's rule is $\pi/4$ times the true area of the circle, which in many accounts of the Pi Bill is interpreted as a claim that pi = 4. However, there is no internal evidence in the bill that Goodwin intended to make such a claim; on the contrary, he repeatedly denies that the area of the circle has anything to do with its diameter.
-
-The relative area error of $1 - \pi/4$ works out to about 21 percent, which is much more grave than the approximations of the lengths in the model circle of the previous section. It is unknown what made Goodwin believe that his rule could be correct.
In general, figures with identical perimeters do not have identical area (see isoperimetry); the typical demonstration of this fact is to compare a long thin shape with a small enclosed area (the area approaching zero as the width decreases) to one of the same perimeter that is approximately as tall as it is wide (the area approaching the square of the width), obviously of much greater area. diff --git a/wiki/wikipedia/2825.txt b/wiki/wikipedia/2825.txt deleted file mode 100644 index e8b3b5ca8a35f53e76b81440ebd5f21da7208d47..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2825.txt +++ /dev/null @@ -1,153 +0,0 @@
-In mathematics, the Fibonacci numbers, commonly denoted $F_n$, form a sequence, the Fibonacci sequence, in which each number is the sum of the two preceding ones. The sequence commonly starts from 0 and 1, although some authors omit the initial terms and start the sequence from 1 and 1 or from 1 and 2.
-
-Starting from 0 and 1, the next few values in the sequence are:
-
-0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ...
-
-The Fibonacci numbers were first described in Indian mathematics. In the Sanskrit poetic tradition, there was interest in enumerating all patterns of long (L) syllables of 2 units duration, juxtaposed with short (S) syllables of 1 unit duration. Counting the different patterns of successive L and S with a given total duration results in the Fibonacci numbers: the number of patterns of duration m units is $F_{m+1}$.
-
-Knowledge of the Fibonacci sequence was expressed as early as Pingala (c. 450 BC–200 BC). Singh cites Pingala's cryptic formula misrau cha ("the two are mixed") and scholars who interpret it in context as saying that the number of patterns for m beats ($F_{m+1}$) is obtained by adding one [S] to the $F_m$ cases and one [L] to the $F_{m-1}$ cases.
-
-Bharata Muni also expresses knowledge of the sequence in the Natya Shastra (c. 100 BC–c. 350 AD).
-
-However, the clearest exposition of the sequence arises in the work of Virahanka (c. 700 AD), whose own work is lost, but is available in a quotation by Gopala (c. 1135):
-
    Variations of two earlier meters [is the variation]... For example, for [a meter of length] four, variations of meters of two [and] three being mixed, five happens. [works out examples 8, 13, 21]... In this way, the process should be followed in all mātrā-vṛttas [prosodic combinations].
-
-Hemachandra (c. 1150) is credited with knowledge of the sequence as well.
-
-Cassini's identity states that
-
-{F_n}^2 - F_{n+1}F_{n-1} = (-1)^{n-1}
-
-Catalan's identity is a generalization:
-
-{F_n}^2 - F_{n+r}F_{n-r} = (-1)^{n-r}F_r^2
-
-F_m F_{n+1} - F_{m+1} F_n = (-1)^n F_{m-n}
-
-F_{2 n} = {F_{n+1}}^2 - {F_{n-1}}^2 = F_n \left (F_{n+1}+F_{n-1} \right ) = F_nL_n
-
-where $L_n$ is the $n$th Lucas number. The last is an identity for doubling n; other identities of this type are
-
-F_{3 n} = 2{F_n^3} + 3 F_n F_{n+1} F_{n-1} = 5{F_n}^3 + 3 (-1)^n F_n
-
-by Cassini's identity.
-
-F_{3 n+1} = F_{n+1}^3 + 3 F_{n+1}{F_n}^2 - F_n^3
-
-F_{3 n+2} = F_{n+1}^3 + 3 F_{n+1}^2 F_n + {F_n}^3
-
-F_{4 n} = 4 F_n F_{n+1} \left ({F_{n+1}}^2 + 2{F_n}^2 \right ) - 3{F_n}^2 \left ({F_n}^2 + 2{F_{n+1}}^2 \right )
-
-These can be found experimentally using lattice reduction, and are useful in setting up the special number field sieve to factorize a Fibonacci number.
-
-More generally, no Fibonacci number other than 1 can be multiply perfect, and no ratio of two Fibonacci numbers can be perfect.
-
-With the exceptions of 1, 8 and 144 ($F_1 = F_2$, $F_6$ and $F_{12}$) every Fibonacci number has a prime factor that is not a factor of any smaller Fibonacci number (Carmichael's theorem). As a result, 8 and 144 ($F_6$ and $F_{12}$) are the only Fibonacci numbers that are the product of other Fibonacci numbers.
-
-The divisibility of Fibonacci numbers by a prime p is related to the Legendre symbol $\left(\tfrac{p}{5}\right)$ which is evaluated as follows:
-
-\left(\frac{p}{5}\right) = \begin{cases} 0 & \text{if } p = 5\\ 1 & \text{if } p \equiv \pm 1 \pmod 5\\ -1 & \text{if } p \equiv \pm 2 \pmod 5.\end{cases}
-
-If p is a prime number then
-
- F_p \equiv \left(\frac{p}{5}\right) \pmod p \quad \text{and}\quad F_{p-\left(\frac{p}{5}\right)} \equiv 0 \pmod p.
-
-For example,
-
-\begin{align}
-
-(\tfrac{2}{5}) &= -1, &F_3 &= 2, &F_2&=1, \\
-
-(\tfrac{3}{5}) &= -1, &F_4 &= 3,&F_3&=2, \\
-
-(\tfrac{5}{5}) &= 0, &F_5 &= 5, \\
-
-(\tfrac{7}{5}) &= -1, &F_8 &= 21,&F_7&=13, \\
-
-(\tfrac{11}{5})& = +1, &F_{10}& = 55, &F_{11}&=89.
-
-\end{align}
-
-It is not known whether there exists a prime p such that
-
-F_{p-\left(\frac{p}{5}\right)} \equiv 0 \pmod{p^2}.
-
-Such primes (if there are any) would be called Wall–Sun–Sun primes.
-
-Also, if p ≠ 5 is an odd prime number then:
-
-5 F^2_{\frac{p \pm 1}{2}} \equiv \begin{cases}
-
-\tfrac{1}{2} \left (5\left(\frac{p}{5}\right)\pm 5 \right ) \pmod p & \text{if } p \equiv 1 \pmod 4\\
-
-\tfrac{1}{2} \left (5\left(\frac{p}{5}\right)\mp 3 \right ) \pmod p & \text{if } p \equiv 3 \pmod 4.
-
-\end{cases}
-
-Example 1. p = 7, in this case p ≡ 3 (mod 4) and we have:
-
-(\tfrac{7}{5}) = -1: \qquad \tfrac{1}{2}\left (5(\tfrac{7}{5})+3 \right ) =-1, \quad \tfrac{1}{2} \left (5(\tfrac{7}{5})-3 \right )=-4.
-
-F_3=2 \text{ and } F_4=3.
-
-5F_3^2=20\equiv -1 \pmod {7}\text{ and }5F_4^2=45\equiv -4 \pmod {7}
-
-Example 2. p = 11, in this case p ≡ 3 (mod 4) and we have:
-
-(\tfrac{11}{5}) = +1: \qquad \tfrac{1}{2}\left (5(\tfrac{11}{5})+3 \right )=4, \quad \tfrac{1}{2} \left (5(\tfrac{11}{5})- 3 \right )=1.
-
-F_5=5 \text{ and } F_6=8.
-
-5F_5^2=125\equiv 4 \pmod {11} \text{ and }5F_6^2=320\equiv 1 \pmod {11}
-
-Example 3. p = 13, in this case p ≡ 1 (mod 4) and we have:
-
-(\tfrac{13}{5}) = -1: \qquad \tfrac{1}{2}\left (5(\tfrac{13}{5})-5 \right ) =-5, \quad \tfrac{1}{2}\left (5(\tfrac{13}{5})+ 5 \right )=0.
-
-F_6=8 \text{ and } F_7=13.
-
-5F_6^2=320\equiv -5 \pmod {13} \text{ and }5F_7^2=845\equiv 0 \pmod {13}
-
-Example 4. p = 29, in this case p ≡ 1 (mod 4) and we have:
-
-(\tfrac{29}{5}) = +1: \qquad \tfrac{1}{2}\left (5(\tfrac{29}{5})-5 \right )=0, \quad \tfrac{1}{2}\left (5(\tfrac{29}{5})+5 \right )=5.
-
-F_{14}=377 \text{ and } F_{15}=610.
-
-5F_{14}^2=710645\equiv 0 \pmod {29} \text{ and }5F_{15}^2=1860500\equiv 5 \pmod {29}
-
-For odd n, all odd prime divisors of $F_n$ are congruent to 1 modulo 4, implying that all odd divisors of $F_n$ (as the products of odd prime divisors) are congruent to 1 modulo 4.
-
-For example,
-
-F_1 = 1, F_3 = 2, F_5 = 5, F_7 = 13, F_9 = 34 = 2 \cdot 17, F_{11} = 89, F_{13} = 233, F_{15} = 610 = 2 \cdot 5 \cdot 61.
-
-All known factors of Fibonacci numbers F(i) for all i < 50000 are collected at the relevant repositories.
-
-If the members of the Fibonacci sequence are taken mod n, the resulting sequence is periodic with period at most 6n. The lengths of the periods for various n form the so-called Pisano periods. Determining a general formula for the Pisano periods is an open problem, which includes as a subproblem a special instance of the problem of finding the multiplicative order of a modular integer or of an element in a finite field. However, for any particular n, the Pisano period may be found as an instance of cycle detection.
-
-The Fibonacci sequence is one of the simplest and earliest known sequences defined by a recurrence relation, and specifically by a linear difference equation. All these sequences may be viewed as generalizations of the Fibonacci sequence. In particular, Binet's formula may be generalized to any sequence that is a solution of a homogeneous linear difference equation with constant coefficients.
-
-Some specific examples that are, in some sense, close to the Fibonacci sequence include:
-
-* Generalizing the index to negative integers to produce the negafibonacci numbers.
-
-* Generalizing the index to real numbers using a modification of Binet's formula.
-
-The Fibonacci numbers also count the ancestors contributing to a male's X chromosome at each generation. A male individual has an X chromosome, which he received from his mother, and a Y chromosome, which he received from his father. The male counts as the "origin" of his own X chromosome ($F_1=1$), and at his parents' generation, his X chromosome came from a single parent ($F_2=1$). The male's mother received one X chromosome from her mother (the son's maternal grandmother), and one from her father (the son's maternal grandfather), so two grandparents contributed to the male descendant's X chromosome ($F_3=2$). The maternal grandfather received his X chromosome from his mother, and the maternal grandmother received X chromosomes from both of her parents, so three great-grandparents contributed to the male descendant's X chromosome ($F_4=3$). Five great-great-grandparents contributed to the male descendant's X chromosome ($F_5=5$), etc. (This assumes that all ancestors of a given descendant are independent, but if any genealogy is traced far enough back in time, ancestors begin to appear on multiple lines of the genealogy, until eventually a population founder appears on all lines of the genealogy.)
-
-The pathways of tubulins on intracellular microtubules arrange in patterns of 3, 5, 8 and 13.
-
-*In optics, when a beam of light shines at an angle through two stacked transparent plates of different materials of different refractive indexes, it may reflect off three surfaces: the top, middle, and bottom surfaces of the two plates.
The number of different beam paths that have k reflections, for k > 1, is the $k$th Fibonacci number. (However, when k = 1, there are three reflection paths, not two, one for each of the three surfaces.)
-
-* Fibonacci retracement levels are widely used in technical analysis for financial market trading.
-
-*Since the conversion factor 1.609344 for miles to kilometers is close to the golden ratio, the decomposition of distance in miles into a sum of Fibonacci numbers becomes nearly the kilometer sum when the Fibonacci numbers are replaced by their successors. This method amounts to a radix 2 number register in golden ratio base φ being shifted. To convert from kilometers to miles, shift the register down the Fibonacci sequence instead.
-
-*The measured values of voltages and currents in the infinite resistor chain circuit (also called the resistor ladder or infinite series-parallel circuit) follow the Fibonacci sequence. The intermediate results of adding the alternating series and parallel resistances yields fractions composed of consecutive Fibonacci numbers. The equivalent resistance of the entire circuit equals the golden ratio.
-
-*Brasch et al. 2012 show how a generalised Fibonacci sequence also can be connected to the field of economics. In particular, it is shown how a generalised Fibonacci sequence enters the control function of finite-horizon dynamic optimisation problems with one state and one control variable. The procedure is illustrated in an example often referred to as the Brock–Mirman economic growth model.
-
-*Mario Merz included the Fibonacci sequence in some of his artworks beginning in 1970.
-
-*Joseph Schillinger (1895–1943) developed a system of composition which uses Fibonacci intervals in some of its melodies; he viewed these as the musical counterpart to the elaborate harmony evident within nature. diff --git a/wiki/wikipedia/2826.txt b/wiki/wikipedia/2826.txt deleted file mode 100644 index f6918d7c05662874c37680586b058337c4817018..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2826.txt +++ /dev/null @@ -1,157 +0,0 @@
-Belief propagation, also known as sum-product message passing, is a message-passing algorithm for performing inference on graphical models, such as Bayesian networks and Markov random fields. It calculates the marginal distribution for each unobserved node (or variable), conditional on any observed nodes (or variables). Belief propagation is commonly used in artificial intelligence and information theory and has demonstrated empirical success in numerous applications including low-density parity-check codes, turbo codes, free energy approximation, and satisfiability. The algorithm was first proposed by Judea Pearl in 1982, who formulated it as an exact inference algorithm on trees, which was later extended to polytrees. While it is not exact on general graphs, it has been shown to be a useful approximate algorithm.
-
-If $X=\{X_i\}$ is a set of discrete random variables with a joint mass function p, the marginal distribution of a single $X_i$ is simply the summation of p over all other variables:
-$$
-p_{X_i}(x_i) = \sum_{\mathbf{x}': x'_i = x_i} p(\mathbf{x}').
-$$
-
-However, this quickly becomes computationally prohibitive: if there are 100 binary variables, then one needs to sum over $2^{99} \approx 6.338 \times 10^{29}$ possible values. By exploiting the polytree structure, belief propagation allows the marginals to be computed much more efficiently.
-
-Variants of the belief propagation algorithm exist for several types of graphical models (Bayesian networks and Markov random fields in particular).
We describe here the variant that operates on a factor graph. A factor graph is a bipartite graph containing nodes corresponding to variables V and factors F, with edges between variables and the factors in which they appear. We can write the joint mass function:
-$$
-p(\mathbf{x}) = \prod_{a \in F} f_a (\mathbf{x}_a)
-$$
-
-where $\mathbf{x}_a$ is the vector of neighboring variable nodes to the factor node a. Any Bayesian network or Markov random field can be represented as a factor graph by using a factor for each node with its parents or a factor for each node with its neighborhood respectively.
-
-The algorithm works by passing real-valued functions called messages along the edges between the hidden nodes. More precisely, if v is a variable node and a is a factor node connected to v in the factor graph, the messages from v to a (denoted by $\mu_{v \to a}$) and from a to v ($\mu_{a \to v}$) are real-valued functions whose domain is Dom(v), the set of values that can be taken by the random variable associated with v. These messages contain the "influence" that one variable exerts on another. The messages are computed differently depending on whether the node receiving the message is a variable node or a factor node. Keeping the same notation:
-
-* A message from a variable node v to a factor node a is the product of the messages from all other neighboring factor nodes (except the recipient; alternatively one can say the recipient sends as message the constant function equal to "1"):
-$$
-\forall x_v\in Dom(v), \mu_{v \to a} (x_v) = \prod_{a^* \in N(v)\setminus\{a\} } \mu_{a^* \to v} (x_v).
-$$
-
-where N(v) is the set of neighboring (factor) nodes to v. If $N(v)\setminus\{a\}$ is empty, then $\mu_{v \to a}(x_v)$ is set to the uniform distribution.
-
-* A message from a factor node a to a variable node v is the product of the factor with messages from all other nodes, marginalized over all variables except the one associated with v:
-$$
-\forall x_v\in Dom(v), \mu_{a \to v} (x_v) = \sum_{\mathbf{x}'_a: x'_v = x_v } f_a (\mathbf{x}'_a) \prod_{v^* \in N(a) \setminus \{v\}} \mu_{v^* \to a} (x'_{v^*}).
-$$
-
-where N(a) is the set of neighboring (variable) nodes to a. If $N(a) \setminus \{v\}$ is empty then $\mu_{a \to v} (x_v) = f_a(x_v)$, since in this case $ x_v = x_a $.
-
-As shown by the previous formula: the complete marginalization is reduced to a sum of products of simpler terms than the ones appearing in the full joint distribution. This is the reason why it is called the sum-product algorithm.
-
-In a typical run, each message will be updated iteratively from the previous value of the neighboring messages. Different scheduling can be used for updating the messages. In the case where the graphical model is a tree, an optimal scheduling allows convergence to be reached after computing each message only once (see next sub-section). When the factor graph has cycles, such an optimal scheduling does not exist, and a typical choice is to update all messages simultaneously at each iteration.
-
-Upon convergence (if convergence happened), the estimated marginal distribution of each node is proportional to the product of all messages from adjoining factors (missing the normalization constant):
-$$
- p_{X_v} (x_v) \propto \prod_{a \in N(v)} \mu_{a \to v} (x_v).
-$$
-
-Likewise, the estimated joint marginal distribution of the set of variables belonging to one factor is proportional to the product of the factor and the messages from the variables:
-$$
- p_{X_a} (\mathbf{x}_a) \propto f_a(\mathbf{x}_a) \prod_{v \in N(a)} \mu_{v \to a} (x_v).
-$$
-
-In the case where the factor graph is acyclic (i.e. is a tree or a forest), these estimated marginals actually converge to the true marginals in a finite number of iterations. This can be shown by mathematical induction.
-
-In the case when the factor graph is a tree, the belief propagation algorithm will compute the exact marginals. Furthermore, with proper scheduling of the message updates, it will terminate after 2 steps (one inward pass and one outward pass). This optimal scheduling can be described as follows:
-
-Before starting, the graph is oriented by designating one node as the root; any non-root node which is connected to only one other node is called a leaf.
-
-In the first step, messages are passed inwards: starting at the leaves, each node passes a message along the (unique) edge towards the root node. The tree structure guarantees that it is possible to obtain messages from all other adjoining nodes before passing the message on. This continues until the root has obtained messages from all of its adjoining nodes.
-
-The second step involves passing the messages back out: starting at the root, messages are passed in the reverse direction. The algorithm is completed when all leaves have received their messages.
-
-Although it was originally designed for acyclic graphical models, the belief propagation algorithm can be used in general graphs. The algorithm is then sometimes called loopy belief propagation, because graphs typically contain cycles, or loops. The initialization and scheduling of message updates must be adjusted slightly (compared with the previously described schedule for acyclic graphs) because graphs might not contain any leaves. Instead, one initializes all variable messages to 1 and uses the same message definitions above, updating all messages at every iteration (although messages coming from known leaves or tree-structured subgraphs may no longer need updating after sufficient iterations). It is easy to show that in a tree, the message definitions of this modified procedure will converge to the set of message definitions given above within a number of iterations equal to the diameter of the tree.
-
-The precise conditions under which loopy belief propagation will converge are still not well understood; it is known that on graphs containing a single loop it converges in most cases, but the probabilities obtained might be incorrect. Several sufficient (but not necessary) conditions for convergence of loopy belief propagation to a unique fixed point exist. There exist graphs which will fail to converge, or which will oscillate between multiple states over repeated iterations. Techniques like EXIT charts can provide an approximate visualization of the progress of belief propagation and an approximate test for convergence.
-
-There are other approximate methods for marginalization including variational methods and Monte Carlo methods.
-
-One method of exact marginalization in general graphs is called the junction tree algorithm, which is simply belief propagation on a modified graph guaranteed to be a tree. The basic premise is to eliminate cycles by clustering them into single nodes.
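-
-As a concrete illustration of the message definitions and the two-pass (inward/outward) schedule above, here is a minimal sketch in Python; the three-variable chain x1 -- f12 -- x2 -- f23 -- x3 and its factor tables are hypothetical example data, not taken from the original article:
-
-import numpy as np
-
-# Each variable takes values in {0, 1}; factors are tables over their scope.
-f12 = np.array([[1.0, 0.5],
-                [0.5, 1.0]])   # f12[x1, x2]
-f23 = np.array([[0.2, 1.0],
-                [1.0, 0.2]])   # f23[x2, x3]
-
-# Inward pass (root = x2): leaves x1 and x3 send uniform messages.
-m_x1_f12 = np.ones(2)           # variable -> factor: product of other factor messages (none here)
-m_f12_x2 = f12.T @ m_x1_f12     # factor -> variable: sum over x1 of f12 times the incoming message
-m_x3_f23 = np.ones(2)
-m_f23_x2 = f23 @ m_x3_f23       # sum over x3
-
-# Belief at the root: product of incoming factor messages, normalized.
-belief_x2 = m_f12_x2 * m_f23_x2
-belief_x2 /= belief_x2.sum()
-
-# Outward pass: x2 sends each factor the product of its *other* incoming messages.
-m_x2_f12 = m_f23_x2
-m_f12_x1 = f12 @ m_x2_f12
-belief_x1 = m_f12_x1 / m_f12_x1.sum()
-
-# Check against brute-force marginalization of p(x) proportional to f12*f23.
-joint = np.einsum('ij,jk->ijk', f12, f23)
-joint /= joint.sum()
-print(belief_x1, joint.sum(axis=(1, 2)))  # the two x1 marginals agree
-print(belief_x2, joint.sum(axis=(0, 2)))  # the two x2 marginals agree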
-
-A similar algorithm is commonly referred to as the Viterbi algorithm, but also known as a special case of the max-product or min-sum algorithm, which solves the related problem of maximization, or most probable explanation. Instead of attempting to solve the marginal, the goal here is to find the values $\mathbf{x}$ that maximize the global function (i.e. the most probable values in a probabilistic setting), and it can be defined using the arg max:
-$$
-\operatorname*{\arg\max}_{\mathbf{x}} g(\mathbf{x}).
-$$
-
-An algorithm that solves this problem is nearly identical to belief propagation, with the sums replaced by maxima in the definitions.
-
-It is worth noting that inference problems like marginalization and maximization are NP-hard to solve exactly and approximately (at least for relative error) in a graphical model. More precisely, the marginalization problem defined above is #P-complete and maximization is NP-complete.
-
-The memory usage of belief propagation can be reduced through the use of the Island algorithm (at a small cost in time complexity).
-
-The sum-product algorithm is related to the calculation of free energy in thermodynamics. Let Z be the partition function. A probability distribution
-$$
-P(\mathbf{X}) = \frac{1}{Z} \prod_{f_j} f_j(x_j)
-$$
-
-(as per the factor graph representation) can be viewed as a measure of the internal energy present in a system, computed as
-$$
-E(\mathbf{X}) = -\log \prod_{f_j} f_j(x_j).
-$$
-
-The free energy of the system is then
-$$
-F = U - H = \sum_{\mathbf{X}} P(\mathbf{X}) E(\mathbf{X}) + \sum_{\mathbf{X}} P(\mathbf{X}) \log P(\mathbf{X}).
-$$
-
-It can then be shown that the points of convergence of the sum-product algorithm represent the points where the free energy in such a system is minimized. Similarly, it can be shown that a fixed point of the iterative belief propagation algorithm in graphs with cycles is a stationary point of a free energy approximation.
-
-Belief propagation algorithms are normally presented as message update equations on a factor graph, involving messages between variable nodes and their neighboring factor nodes and vice versa. Considering messages between regions in a graph is one way of generalizing the belief propagation algorithm. Generalizations of this kind have been applied to hard combinatorial problems such as satisfiability and graph coloring.
-
-The cluster variational method and the survey propagation algorithms are two different improvements to belief propagation. The name generalized survey propagation (GSP) is waiting to be assigned to the algorithm that merges both generalizations.
-
-Gaussian belief propagation is a variant of the belief propagation algorithm when the underlying distributions are Gaussian. The first work analyzing this special model was the seminal work of Weiss and Freeman.
-
-The GaBP algorithm solves the following marginalization problem:
-$$
- P(x_i) = \frac{1}{Z} \int_{j \ne i} \exp(-\tfrac{1}{2}x^TAx + b^Tx)dx_j
-$$
-
-where Z is a normalization constant, A is a symmetric positive definite matrix (the inverse covariance matrix, a.k.a. the precision matrix) and b is the shift vector.
-
-Equivalently, it can be shown that using the Gaussian model, the solution of the marginalization problem is equivalent to the MAP assignment problem:
-$$
-\underset{x}{\operatorname{argmax}}\ P(x) = \frac{1}{Z} \exp(-\tfrac{1}{2}x^TAx + b^Tx).
-$$
-
-This problem is also equivalent to the following minimization problem of the quadratic form:
-$$
- \underset{x}{\operatorname{min}}\ \tfrac{1}{2}x^TAx - b^Tx,
-$$
-
-which is also equivalent to the linear system of equations
-$$
- Ax = b.
-$$
-
-Convergence of the GaBP algorithm is easier to analyze (relative to the general BP case) and there are two known sufficient convergence conditions. The first one was formulated by Weiss et al. in the year 2000: GaBP converges when the information matrix A is diagonally dominant. The second convergence condition was formulated by Johnson et al. in 2006: GaBP converges when the spectral radius condition
-$$
-\rho (I - |D^{-1/2}AD^{-1/2}|) < 1
-$$
-
-holds, where D = diag(A). Later, Su and Wu established the necessary and sufficient convergence conditions for synchronous GaBP and damped GaBP, as well as another sufficient convergence condition for asynchronous GaBP. For each case, the convergence condition involves verifying 1) a set (determined by A) being non-empty, 2) the spectral radius of a certain matrix being smaller than one, and 3) the singularity issue (when converting a BP message into a belief) not occurring.
-
-The GaBP algorithm was linked to the linear algebra domain, and it was shown that the GaBP algorithm can be viewed as an iterative algorithm for solving the linear system of equations Ax = b where A is the information matrix and b is the shift vector. Empirically, the GaBP algorithm is shown to converge faster than classical iterative methods like the Jacobi method, the Gauss–Seidel method, successive over-relaxation, and others. Additionally, the GaBP algorithm is shown to be immune to numerical problems of the preconditioned conjugate gradient method.
-
-The previous description of the BP algorithm is called codeword-based decoding, which calculates the approximate marginal probability $P(x|X)$, given the received codeword $X$. There is an equivalent form, which calculates $P(e|s)$, where $s$ is the syndrome of the received codeword $X$ and $e$ is the decoded error. The decoded input vector is $x=X+e$. This variation only changes the interpretation of the mass function $f_a(X_a)$. Explicitly, the messages are
-$$
-\forall x_v\in Dom(v), \mu_{v \to a} (x_v) = P(X_v)\prod_{a^* \in N(v)\setminus\{a\} } \mu_{a^* \to v} (x_v),
-$$
-
-where $P(X_v)$ is the prior error probability on variable $v$, and
-$$
-\forall x_v\in Dom(v), \mu_{a \to v} (x_v) = \sum_{\mathbf{x}'_a: x'_v = x_v } \delta(\text{syndrome}({\mathbf x}'_v)={\mathbf s}) \prod_{v^* \in N(a) \setminus \{v\}} \mu_{v^* \to a} (x'_{v^*}).
-$$
-
-This syndrome-based decoder doesn't require information on the received bits, and can thus be adapted to quantum codes, where the only information is the measurement syndrome.
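-
-Before turning to the binary simplification below, here is a minimal sketch of the linear-algebra view described above: GaBP run as an iterative solver for Ax = b. The per-edge mean/precision message updates follow the standard form, and the small diagonally dominant system is hypothetical example data:
-
-import numpy as np
-
-A = np.array([[4.0, 1.0, 0.0],
-              [1.0, 4.0, 1.0],
-              [0.0, 1.0, 4.0]])
-b = np.array([1.0, 2.0, 3.0])
-n = len(b)
-
-# Edge messages: precisions P[i][j] and means mu[i][j] for each A[i, j] != 0.
-P = np.zeros((n, n))
-mu = np.zeros((n, n))
-neighbors = [[j for j in range(n) if j != i and A[i, j] != 0.0] for i in range(n)]
-
-for _ in range(50):  # repeated sweeps of message updates (updated in place)
-    for i in range(n):
-        for j in neighbors[i]:
-            # Aggregate everything node i knows, excluding what j told it.
-            P_i_not_j = A[i, i] + sum(P[k, i] for k in neighbors[i] if k != j)
-            mu_i_not_j = (b[i] + sum(P[k, i] * mu[k, i]
-                                     for k in neighbors[i] if k != j)) / P_i_not_j
-            P[i, j] = -A[i, j] ** 2 / P_i_not_j
-            mu[i, j] = P_i_not_j * mu_i_not_j / A[i, j]
-
-# Beliefs: the marginal means solve A x = b.
-x = np.array([(b[i] + sum(P[k, i] * mu[k, i] for k in neighbors[i]))
-              / (A[i, i] + sum(P[k, i] for k in neighbors[i]))
-              for i in range(n)])
-print(x, np.linalg.solve(A, b))  # the two agree; A is diagonally dominant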
-
-In the binary case, $x_i \in \{0,1\}$, the syndrome-decoding messages above can be simplified, yielding an exponential reduction in the complexity of the message representation.
-
-Define the log-likelihood ratios $l_v=\log \frac{u_{v \to a}(x_v=0)}{u_{v \to a} (x_v=1)}$ and $L_a=\log \frac{u_{a \to v}(x_v=0)}{u_{a \to v} (x_v=1)}$; then
-$$
-v \to a: l_v=l_v^{(0)}+\sum_{a^* \in N(v)\setminus\{a\}} (L_{a^*})
-$$
-$$
-a \to v: L_a = (-1)^{s_a} 2 \tanh^{-1} \prod_{v^* \in N(a)\setminus\{v\}} \tanh (l_{v^*}/2)
-$$
-
-where $l_v^{(0)}=\log (P(x_v=0)/P(x_v=1)) = \text{const}$.
-
-The posterior log-likelihood ratio can be estimated as $l_v=l_v^{(0)}+\sum_{a \in N(v)} (L_{a})$. diff --git a/wiki/wikipedia/2827.txt b/wiki/wikipedia/2827.txt deleted file mode 100644 index 0e816170c9e18011b621839fbee5a7775d8feee1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2827.txt +++ /dev/null @@ -1,11 +0,0 @@
-In computing, Oracle Coherence (originally Tangosol Coherence) is a Java-based distributed cache and in-memory data grid, intended for systems that require high availability, high scalability and low latency, particularly in cases where traditional relational database management systems provide insufficient throughput or insufficient performance.
-
-Tangosol Coherence was created by Cameron Purdy and Gene Gleyzer, and initially released in December, 2001.
-
-Oracle Corporation acquired Tangosol Inc., the original owner of the product, in April 2007, at which point it had more than 100 direct customers. Tangosol Coherence was also embedded in a number of other companies' software products, some of which belonged to Oracle Corporation's competitors.
-
-Coherence provides a variety of mechanisms to integrate with other services using TopLink, Java Persistence API, Oracle Golden Gate or almost any other platform using Coherence-provided APIs.
-
-Coherence can be used to manage HTTP sessions via Coherence*Web. With Coherence*Web, application services such as Oracle WebLogic Server, IBM WebSphere, Apache Tomcat and others can reap the same benefits of performance, fault tolerance, and scalability as the data itself.
-
-In the summer of 2020, Coherence Community Edition was released as open source on GitHub. Some Coherence usage patterns are also open source and are listed and supported through the Oracle Coherence incubator. These patterns implement features such as messaging, work distribution and data replication across wide area networks with Coherence. diff --git a/wiki/wikipedia/2828.txt b/wiki/wikipedia/2828.txt deleted file mode 100644 index 0242eb59d1ff3e56f083b922b13a5efe3f95525f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2828.txt +++ /dev/null @@ -1,15 +0,0 @@
-In query languages, path expressions identify an object by describing how to navigate to it
-
-in some graph (possibly implicit) of objects. For example, the path expression p.Manager.Home.City might refer to the city of residence of someone's manager.
-
-Path expressions have been extended to support regular expression-like flexibility.
-
-XPath is an example of a path expression language.
-
-In concurrency control, path expressions are a mechanism for expressing permitted sequences of execution. For example, a path expression like "{read}, write" might specify that either multiple simultaneous executions of read or a single execution of write, but not both, are allowed at any point in time.
-
-Path expressions are also a mechanism for the synchronization of processes at the monitor level in software.
This provides a clear and structured approach to describing shared data and the coordination and communication between concurrent processes. The method is flexible in its ability to express timing constraints and can be used in different ways.
-
-In addition, path expressions are useful for process synchronization for two reasons: first, the close relationship between stream expressions and regular expressions simplifies the task of writing and reasoning about programs that use this synchronization mechanism. Second, synchronization in many concurrent programs is finite-state, and therefore can be adequately described by regular expressions. For precisely the same reasons, path expressions are useful for controlling the behavior of complicated asynchronous circuits. In fact, the finite-state assumption may be even more reasonable at the hardware level than at the monitor level.
-
-Path expressions provide a high level of descriptive synchronization that aids in the prevention and detection of design errors in complex systems and overcomes some of the dangers, such as certain forms of coding errors. diff --git a/wiki/wikipedia/2829.txt b/wiki/wikipedia/2829.txt deleted file mode 100644 index 9795c4e2eb57eb8eeccec24d17e03a840bb774ff..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2829.txt +++ /dev/null @@ -1,9 +0,0 @@
-In number theory, zero-sum problems are certain kinds of combinatorial problems about the structure of a finite abelian group. Concretely, given a finite abelian group G and a positive integer n, one asks for the smallest value of k such that every sequence of elements of G of size k contains n terms that sum to 0.
-
-The classic result in this area is the 1961 theorem of Paul Erdős, Abraham Ginzburg, and Abraham Ziv. They proved that for the group $\mathbb{Z}/n\mathbb{Z}$ of integers modulo n,
-
-k = 2n - 1.
-
-Explicitly this says that any multiset of 2n − 1 integers has a subset of size n the sum of whose elements is a multiple of n, but that the same is not true of multisets of size 2n − 2. (Indeed, the lower bound is easy to see: the multiset containing n − 1 copies of 0 and n − 1 copies of 1 contains no n-subset summing to a multiple of n.) This result is known as the Erdős–Ginzburg–Ziv theorem after its discoverers. It may also be deduced from the Cauchy–Davenport theorem.
-
-More general results than this theorem exist, such as Olson's theorem, Kemnitz's conjecture (proved by Christian Reiher in 2003), and the weighted EGZ theorem (proved by David J. Grynkiewicz in 2005). diff --git a/wiki/wikipedia/283.txt b/wiki/wikipedia/283.txt deleted file mode 100644 index 9603fe7b87ef2c77615578aa5f6cfcba79806e75..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/283.txt +++ /dev/null @@ -1,7 +0,0 @@
-In differential topology, the Whitney immersion theorem (named after Hassler Whitney) states that for $m>1$, any smooth $m$-dimensional manifold (required also to be Hausdorff and second-countable) has a one-to-one immersion in Euclidean $2m$-space, and a (not necessarily one-to-one) immersion in $(2m-1)$-space. Similarly, every smooth $m$-dimensional manifold can be immersed in the $(2m-1)$-dimensional sphere (this removes the $m>1$ constraint).
-
-The weak version, for $2m+1$, is due to transversality (general position, dimension counting): two $m$-dimensional manifolds in $\mathbf{R}^{2m}$ intersect generically in a 0-dimensional space.
-
-William S.
Massey went on to prove that every n-dimensional manifold is cobordant to a manifold that immerses in $S^{2n-a(n)}$ where $a(n)$ is the number of 1's that appear in the binary expansion of $n$. In the same paper, Massey proved that for every n there is a manifold (which happens to be a product of real projective spaces) that does not immerse in $S^{2n-1-a(n)}$.
-
-The conjecture that every n-manifold immerses in $S^{2n-a(n)}$ became known as the immersion conjecture. This conjecture was eventually solved in the affirmative by Ralph Cohen. diff --git a/wiki/wikipedia/2830.txt b/wiki/wikipedia/2830.txt deleted file mode 100644 index ecb9224b2c7d975c6e9d222e52403e7383ec86cc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2830.txt +++ /dev/null @@ -1,37 +0,0 @@
-In computer science, Prim's algorithm (also known as Jarník's algorithm) is a greedy algorithm that finds a minimum spanning tree for a weighted undirected graph. This means it finds a subset of the edges that forms a tree that includes every vertex, where the total weight of all the edges in the tree is minimized. The algorithm operates by building this tree one vertex at a time, from an arbitrary starting vertex, at each step adding the cheapest possible connection from the tree to another vertex.
-
-The algorithm was developed in 1930 by Czech mathematician Vojtěch Jarník and later rediscovered and republished by computer scientists Robert C. Prim in 1957 and Edsger W. Dijkstra in 1959. Therefore, it is also sometimes called Jarník's algorithm, the Prim–Jarník algorithm, the Prim–Dijkstra algorithm
-
-or the DJP algorithm.
-
-Other well-known algorithms for this problem include Kruskal's algorithm and Borůvka's algorithm. These algorithms find the minimum spanning forest in a possibly disconnected graph; in contrast, the most basic form of Prim's algorithm only finds minimum spanning trees in connected graphs. However, by running Prim's algorithm separately for each connected component of the graph, it can also be used to find the minimum spanning forest. In terms of their asymptotic time complexity, these three algorithms are equally fast for sparse graphs, but slower than other more sophisticated algorithms.
-
-The algorithm may informally be described as performing the following steps: initialize the tree with a single, arbitrarily chosen vertex; then grow the tree by one edge at a time, always adding the minimum-weight edge that connects the tree to a vertex not yet in the tree; and repeat until every vertex is in the tree.
-
-In more detail, it may be implemented following the pseudocode below.
-
-As described above, the starting vertex for the algorithm will be chosen arbitrarily, because the first iteration of the main loop of the algorithm will have a set of vertices in Q that all have equal weights, and the algorithm will automatically start a new tree in F when it completes a spanning tree of each connected component of the input graph. The algorithm may be modified to start with any particular vertex s by setting C[s] to be a number smaller than the other values of C (for instance, zero), and it may be modified to only find a single spanning tree rather than an entire spanning forest (matching more closely the informal description) by stopping whenever it encounters another vertex flagged as having no associated edge.
-
-Different variations of the algorithm differ from each other in how the set Q is implemented: as a simple linked list or array of vertices, or as a more complicated priority queue data structure. This choice leads to differences in the time complexity of the algorithm. In general, a priority queue will be quicker at finding the vertex v with minimum cost, but will entail more expensive updates when the value of C[w] changes.
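-
-The pseudocode listing itself did not survive in this copy of the article, so here is a minimal runnable sketch of the heap-based approach discussed below. It is written in Python, uses lazy deletion from the heap rather than an explicit decrease-key operation, and the small triangle graph is hypothetical example data:
-
-import heapq
-
-def prim(graph, start):
-    """graph: dict mapping vertex -> list of (weight, neighbor) pairs."""
-    mst, visited = [], {start}
-    heap = list(graph[start])       # candidate edges leaving the tree
-    heapq.heapify(heap)
-    while heap:
-        w, v = heapq.heappop(heap)  # cheapest candidate edge
-        if v in visited:
-            continue                # stale entry: lazy deletion instead of decrease-key
-        visited.add(v)
-        mst.append((w, v))          # record (weight, newly added vertex)
-        for edge in graph[v]:
-            if edge[1] not in visited:
-                heapq.heappush(heap, edge)
-    return mst
-
-graph = {'a': [(1, 'b'), (4, 'c')],
-         'b': [(1, 'a'), (2, 'c')],
-         'c': [(4, 'a'), (2, 'b')]}
-print(prim(graph, 'a'))             # [(1, 'b'), (2, 'c')]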
-
-The time complexity of Prim's algorithm depends on the data structures used for the graph and for ordering the edges by weight, which can be done using a priority queue. The typical choices are described below.
-
-A simple implementation of Prim's, using an adjacency matrix or an adjacency list graph representation and linearly searching an array of weights to find the minimum weight edge to add, requires O(|V|^2) running time. However, this running time can be greatly improved by using heaps to implement finding minimum weight edges in the algorithm's inner loop.
-
-A first improved version uses a heap to store all edges of the input graph, ordered by their weight. This leads to an O(|E| log |E|) worst-case running time. But storing vertices instead of edges can improve it still further. The heap should order the vertices by the smallest edge-weight that connects them to any vertex in the partially constructed minimum spanning tree (MST) (or infinity if no such edge exists). Every time a vertex v is chosen and added to the MST, a decrease-key operation is performed on all vertices w outside the partial MST such that v is connected to w, setting the key to the minimum of its previous value and the edge cost of (v,w).
-
-Using a simple binary heap data structure, Prim's algorithm can now be shown to run in time O(|E| log |V|) where |E| is the number of edges and |V| is the number of vertices. Using a more sophisticated Fibonacci heap, this can be brought down to O(|E| + |V| log |V|), which is asymptotically faster when the graph is dense enough that |E| is ω(|V|), and linear time when |E| is at least |V| log |V|. For graphs of even greater density (having at least |V|^c edges for some c > 1), Prim's algorithm can be made to run in linear time even more simply, by using a d-ary heap in place of a Fibonacci heap.
-
-The parallel variant of the algorithm may be described as follows:
-
-1. Assign each processor $P_i$ a set $V_i$ of consecutive vertices of length $\tfrac{|V|}{|P|}$.
-
-2. Create C, E, F, and Q as in the sequential algorithm and divide C, E, as well as the graph between all processors such that each processor holds the incoming edges to its set of vertices. Let $C_i$, $E_i$ denote the parts of C, E stored on processor $P_i$.
-
-3. Repeat the following steps until Q is empty: each processor finds the vertex with the minimum value in its part of C (its local solution); the global minimum over the local solutions is obtained by a reduction and broadcast to every processor; the selected vertex is added to F and removed from Q; and each processor updates its parts of C and E as in the sequential algorithm.
-
-4. Return F.
-
-This algorithm can generally be implemented on distributed machines as well as on shared memory machines. The running time is $O(\tfrac{|V|^2}{|P|}) + O(|V| \log |P|)$, assuming that the reduce and broadcast operations can be performed in $O(\log |P|)$. A variant of Prim's algorithm for shared memory machines, in which Prim's sequential algorithm is being run in parallel, starting from different vertices, has also been explored. It should, however, be noted that more sophisticated algorithms exist to solve the distributed minimum spanning tree problem in a more efficient manner. diff --git a/wiki/wikipedia/2831.txt b/wiki/wikipedia/2831.txt deleted file mode 100644 index 987f3d77622600e2bf184a3a86b1b0d7012f31ea..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2831.txt +++ /dev/null @@ -1,11 +0,0 @@
-A seqlock (short for sequence lock) is a special locking mechanism used in Linux for supporting fast writes of shared variables between two parallel operating system routines. The semantics stabilized as of version 2.5.59, and they are present in the 2.6.x stable kernel series.
The seqlocks were developed by Stephen Hemminger and originally called frlocks, based on earlier work by Andrea Arcangeli. The first implementation was in the x86-64 time code, where it was needed to synchronize with user space and where it was not possible to use a real lock.
-
-It is a reader–writer consistent mechanism which avoids the problem of writer starvation. A seqlock consists of storage for saving a sequence number in addition to a lock. The lock is to support synchronization between two writers, and the counter is for indicating consistency in readers. In addition to updating the shared data, the writer increments the sequence number, both after acquiring the lock and before releasing the lock. Readers read the sequence number before and after reading the shared data. If the sequence number is odd on either occasion, a writer had taken the lock while the data was being read and it may have changed. If the sequence numbers are different, a writer has changed the data while it was being read. In either case readers simply retry (using a loop) until they read the same even sequence number before and after.
-
-The reader never blocks, but it may have to retry if a write is in progress; this speeds up the readers in the case where the data was not modified, since they do not have to acquire the lock as they would with a traditional read–write lock. Also, writers do not wait for readers, whereas with traditional read–write locks they do, leading to potential resource starvation in a situation where there are a number of readers (because the writer must wait for there to be no readers). Because of these two factors, seqlocks are more efficient than traditional read–write locks for the situation where there are many readers and few writers. The drawback is that if there is too much write activity or the reader is too slow, they might livelock (and the readers may starve).
-
-The technique will not work for data that contains pointers, because any writer could invalidate a pointer that a reader has already followed. In this case, using read-copy-update synchronization is preferred.
-
-This was first applied to system time counter updating. Each timer interrupt updates the time of day; there may be many readers of the time for operating system internal use and applications, but writes are relatively infrequent and only occur one at a time. The BSD timecounter code, for instance, appears to use a similar technique.
-
-One subtle issue of using seqlocks for a time counter is that it is impossible to step through it with a debugger. The retry logic will trigger all the time because the debugger is slow enough to make the read race occur always. diff --git a/wiki/wikipedia/2832.txt b/wiki/wikipedia/2832.txt deleted file mode 100644 index 56232c036ac61d1a5561b7c8344a4404f99c586b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2832.txt +++ /dev/null @@ -1,58 +0,0 @@
-In mathematics, specifically in complex analysis, Fatou's theorem, named after Pierre Fatou, is a statement concerning holomorphic functions on the unit disk and their pointwise extension to the boundary of the disk.
-
-If we have a holomorphic function $f$ defined on the open unit disk $\mathbb{D}=\{z:|z|<1\}$, it is reasonable to ask under what conditions we can extend this function to the boundary of the unit disk. To do this, we can look at what the function looks like on each circle inside the disk centered at 0, each with some radius $r$.
This defines a new function:
-$$
-\begin{cases} f_r:S^1 \to \Complex \\ f_{r}(e^{i\theta})=f(re^{i\theta}) \end{cases}
-$$
-
-where
-$$
-S^1:=\{e^{i\theta}:\theta\in[0,2\pi]\}=\{z\in \Complex:|z|=1\},
-$$
-
-is the unit circle. Then it would be expected that the values of the extension of $f$ onto the circle should be the limit of these functions, and so the question reduces to determining when $f_r$ converges, and in what sense, as $r\to 1$, and how well defined is this limit. In particular, if the $L^p$ norms of these $f_r$ are well behaved, we have an answer:
-
-Theorem. Let $f:\mathbb{D}\to\Complex$ be a holomorphic function such that
-$$
-\sup_{0<r<1} \|f_r\|_{L^p(S^1)} < \infty,
-$$
-
-where $f_r$ is defined as above and $1 \le p \le \infty$. Then $f_r$ converges to some function $f_1 \in L^p(S^1)$ pointwise almost everywhere and in $L^p$ norm. That is,
-
-\begin{align}
-
-\left |f_r(e^{i\theta})-f_{1}(e^{i\theta}) \right | &\to 0 && \text{for almost every } \theta\in [0,2\pi] \\
-
-\|f_r-f_1\|_{L^p(S^1)} &\to 0
-
-\end{align}
-
-Now, notice that this pointwise limit is a radial limit. That is, the limit being taken is along a straight line from the center of the disk to the boundary of the circle, and the statement above hence says that
-$$
- f(re^{i\theta})\to f_1(e^{i\theta}) \qquad \text{for almost every } \theta.
-$$
-
-The natural question is, with this boundary function defined, will we converge pointwise to this function by taking a limit in any other way? That is, suppose instead of following a straight line to the boundary, we follow an arbitrary curve $\gamma:[0,1)\to \mathbb{D}$ converging to some point $e^{i\theta}$ on the boundary. Will $f$ converge to $f_{1}(e^{i\theta})$? (Note that the above theorem is just the special case of $\gamma(t)=te^{i\theta}$). It turns out that the curve $\gamma$ needs to be non-tangential, meaning that the curve does not approach its target on the boundary in a way that makes it tangent to the boundary of the circle. In other words, the range of $\gamma$ must be contained in a wedge emanating from the limit point. We summarize as follows:
-
-Definition. Let $\gamma:[0,1)\to \mathbb{D}$ be a continuous path such that $\lim\nolimits_{t\to 1}\gamma(t)=e^{i\theta}\in S^{1}$. Define
-
-\begin{align}
-
-\Gamma_\alpha &=\{z:\arg z\in [\pi-\alpha,\pi+\alpha]\} \\
-
-\Gamma_\alpha(\theta) &=\mathbb{D}\cap e^{i\theta}(\Gamma_\alpha+1)
-
-\end{align}
-
-That is, $\Gamma_\alpha(\theta)$ is the wedge inside the disk with angle $2\alpha$ whose axis passes between $e^{i\theta}$ and zero. We say that $\gamma$ converges non-tangentially to $e^{i\theta}$, or that it is a non-tangential limit, if there exists $0<\alpha<\tfrac{\pi}{2}$ such that $\gamma$ is contained in $\Gamma_\alpha(\theta)$ and $\lim\nolimits_{t\to 1}\gamma(t)=e^{i\theta}$.
-
-Fatou's Theorem. Let $f\in H^p(\mathbb{D}).$ Then for almost all $\theta\in[0,2\pi],$
-$$
-\lim_{t\to 1}f(\gamma(t))=f_1(e^{i\theta})
-$$
-
-for every non-tangential limit $\gamma$ converging to $e^{i\theta},$ where $f_1$ is defined as above.
-
-* The proof utilizes the symmetry of the Poisson kernel using the Hardy–Littlewood maximal function for the circle.
-
-* The analogous theorem is frequently defined for the Hardy space over the upper-half plane and is proved in much the same way.
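-
-As a quick numerical illustration of radial limits, here is a sketch under our own choice of example function, which is not from the original article: $f(z) = -\log(1-z)$ lies in $H^2$ because its Taylor coefficients $1/n$ are square-summable, and it extends continuously to every boundary point other than $z = 1$:
-
-import numpy as np
-
-f = lambda z: -np.log(1 - z)           # holomorphic on the open unit disk
-theta = 1.0                            # boundary point e^{i*theta}, away from z = 1
-boundary = f(np.exp(1j * theta))       # the boundary value f_1(e^{i*theta})
-for r in [0.9, 0.99, 0.999, 0.9999]:
-    radial = f(r * np.exp(1j * theta)) # f_r(e^{i*theta}) along the radius
-    print(r, abs(radial - boundary))   # the differences shrink as r -> 1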
diff --git a/wiki/wikipedia/2833.txt b/wiki/wikipedia/2833.txt deleted file mode 100644 index 56b95e9a6d84d550fd88fe7e60f493dd5f5a64ac..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2833.txt +++ /dev/null @@ -1,34 +0,0 @@
-The quadratic assignment problem (QAP) is one of the fundamental combinatorial optimization problems in the branch of optimization or operations research in mathematics, from the category of the facilities location problems first introduced by Koopmans and Beckmann.
-
-The problem models the following real-life problem:
-
-There are a set of n facilities and a set of n locations. For each pair of locations, a distance is specified, and for each pair of facilities a weight or flow is specified (e.g., the amount of supplies transported between the two facilities). The problem is to assign all facilities to different locations with the goal of minimizing the sum of the distances multiplied by the corresponding flows.
-
-Intuitively, the cost function encourages facilities with high flows between each other to be placed close together.
-
-The problem statement resembles that of the assignment problem, except that the cost function is quadratic in the assignment, hence the name.
-
-The formal definition of the quadratic assignment problem is as follows:
-
-Given two sets, P ("facilities") and L ("locations"), of equal size, together with a weight function w : P × P → R and a distance function d : L × L → R. Find the bijection f : P → L ("assignment") such that the cost function:
-$$
-\sum_{a,b\in P}w(a,b)\cdot d(f(a), f(b))
-$$
-
-is minimized.
-
-Usually weight and distance functions are viewed as square real-valued matrices, so that the cost function is written down as:
-$$
-\sum_{a,b\in P}w_{a,b}d_{f(a),f(b)}
-$$
-
-In matrix notation:
-$$
-\min_{X\in\Pi_n} \operatorname{trace}(WXDX^T)
-$$
-
-where $\Pi_n$ is the set of $n \times n$ permutation matrices, $W$ is the weight matrix and $D$ is the distance matrix.
-
-The problem is NP-hard, so there is no known algorithm for solving this problem in polynomial time, and even small instances may require long computation time. It was also proven that the problem does not have an approximation algorithm running in polynomial time for any (constant) factor, unless P = NP. The travelling salesman problem may be seen as a special case of QAP if one assumes that the flows connect all facilities only along a single ring and all flows have the same non-zero (constant) value. Many other standard combinatorial optimization problems may be written in this form.
-
-In addition to the original plant location formulation, QAP is a mathematical model for the problem of placement of interconnected electronic components onto a printed circuit board or on a microchip, which is part of the place and route stage of computer aided design in the electronics industry. diff --git a/wiki/wikipedia/2834.txt b/wiki/wikipedia/2834.txt deleted file mode 100644 index 2c7ac571ce25a6992212b2b33117d38fa09863ad..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2834.txt +++ /dev/null @@ -1,21 +0,0 @@
-A proof is sufficient evidence or a sufficient argument for the truth of a proposition.
-
-The concept applies in a variety of disciplines,
-
-with both the nature of the evidence or justification and the criteria for sufficiency being area-dependent.
In the area of oral and written communication such as conversation, dialog, rhetoric, etc., a proof is a persuasive perlocutionary speech act, which demonstrates the truth of a proposition. In any area of mathematics defined by its assumptions or axioms, a proof is an argument establishing a theorem of that area via accepted rules of inference starting from those axioms and from other previously established theorems. The subject of logic, in particular proof theory, formalizes and studies the notion of formal proof. In some areas of epistemology and theology, the notion of justification plays approximately the role of proof, while in jurisprudence the corresponding term is evidence,
-
-with "burden of proof" as a concept common to both philosophy and law.
-
-In most disciplines, evidence is required to prove something. Evidence is drawn from the experience of the world around us, with science obtaining its evidence from nature, law obtaining its evidence from witnesses and forensic investigation, and so on. A notable exception is mathematics, whose proofs are drawn from a mathematical world begun with axioms and further developed and enriched by theorems proved earlier.
-
-Exactly what evidence is sufficient to prove something is also strongly area-dependent, usually with no absolute threshold of sufficiency at which evidence becomes proof. In law, the same evidence that may convince one jury may not persuade another. Formal proof provides the main exception, where the criteria for proofhood are ironclad and it is impermissible to defend any step in the reasoning as "obvious" (except for the necessary ability of the prover and the audience to correctly identify any symbol used in the proof); for a well-formed formula to qualify as part of a formal proof, it must be the result of applying a rule of the deductive apparatus of some formal system to the previous well-formed formulae in the proof sequence.
-
-Proofs have been presented since antiquity. Aristotle used the observation that patterns of nature never display the machine-like uniformity of determinism as proof that chance is an inherent part of nature. On the other hand, Thomas Aquinas used the observation of the existence of rich patterns in nature as proof that nature is not ruled by chance.
-
-Proofs need not be verbal. Before Copernicus, people took the apparent motion of the Sun across the sky as proof that the Sun went round the Earth. Suitably incriminating evidence left at the scene of a crime may serve as proof of the identity of the perpetrator. Conversely, a verbal entity need not assert a proposition to constitute a proof of that proposition. For example, a signature constitutes direct proof of authorship; less directly, handwriting analysis may be submitted as proof of authorship of a document. Privileged information in a document can serve as proof that the document's author had access to that information; such access might in turn establish the location of the author at a certain time, which might then provide the author with an alibi.
-
-18th-century Scottish philosopher David Hume built on Aristotle's separation of belief from knowledge, recognizing that one can be said to "know" something only if one has firsthand experience with it, in a strict sense proof, while one can infer that something is true and therefore "believe" it without knowing, via evidence or supposition.
This speaks to one way of separating proof from evidence: - -If one cannot find their chocolate bar, and sees chocolate on their napping roommate's face, this evidence can cause one to believe their roommate ate the chocolate bar. But they do not know their roommate ate it. It may turn out that the roommate put the candy away when straightening up, but was thus inspired to go eat their own chocolate. Only if one directly experiences proof of the roommate eating it, perhaps by walking in on them doing so, does one know the roommate did it. - -In an absolute sense, one can be argued not to "know" anything, except for the existence of one's own thoughts, as 17th-century philosopher John Locke pointed out. Even earlier, Descartes addressed this when saying cogito, ergo sum (I think, therefore I am). While Descartes was attempting to "prove" logically that the world exists, his legacy in doing so is to have shown that one cannot have such proof, because all of one's perceptions could be false (such as under the evil demon or simulated reality hypotheses). But one at least has proof of one's own thoughts existing, and strong evidence that the world exists, enough to be considered "proof" by practical standards, though always indirect and impossible to objectively confirm. diff --git a/wiki/wikipedia/2835.txt b/wiki/wikipedia/2835.txt deleted file mode 100644 index 5a31d709d330614bb15ecfcfc66bff38894c9538..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2835.txt +++ /dev/null @@ -1,17 +0,0 @@ -Dekker's algorithm is the first known correct solution to the mutual exclusion problem in concurrent programming. The solution is attributed to Dutch mathematician Th. J. Dekker by Edsger W. Dijkstra in an unpublished paper on sequential process descriptions and his manuscript on cooperating sequential processes. It allows two threads to share a single-use resource without conflict, using only shared memory for communication. - -It avoids the strict alternation of a naïve turn-taking algorithm, and was one of the first mutual exclusion algorithms to be invented. - -If two processes attempt to enter a critical section at the same time, the algorithm will allow only one process in, based on whose turn it is. If one process is already in the critical section, the other process will busy wait for the first process to exit. This is done by the use of two flags, wants_to_enter[0] and wants_to_enter[1], which indicate an intention to enter the critical section on the part of processes 0 and 1, respectively, and a variable turn that indicates who has priority between the two processes. - -Dekker's algorithm can be expressed in pseudocode, as follows. - -
    - -
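A minimal Python rendering of the entry and exit protocol (the thread setup, iteration count, and shared `counter` are illustrative assumptions; this sketch relies on CPython's global interpreter lock for memory ordering, whereas a production implementation needs real memory barriers):

```python
import threading

wants_to_enter = [False, False]  # intent flags for processes 0 and 1
turn = 0                         # which process has priority on contention
counter = 0                      # shared resource touched in the critical section

def process(pid, iterations=10000):
    global turn, counter
    other = 1 - pid
    for _ in range(iterations):
        wants_to_enter[pid] = True
        while wants_to_enter[other]:      # outer loop: the other also wants in
            if turn != pid:
                wants_to_enter[pid] = False
                while turn != pid:        # inner loop: busy wait for priority
                    pass
                wants_to_enter[pid] = True
        counter += 1                      # critical section
        turn = other                      # hand priority to the other process
        wants_to_enter[pid] = False       # withdraw intent (exit protocol)

threads = [threading.Thread(target=process, args=(pid,)) for pid in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 20000 if mutual exclusion held
```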
    - -Processes indicate an intention to enter the critical section, which is tested by the outer while loop. If the other process has not flagged intent, the critical section can be entered safely irrespective of the current turn. Mutual exclusion will still be guaranteed as neither process can become critical before setting their flag (implying at least one process will enter the while loop). This also guarantees progress as waiting will not occur on a process which has withdrawn intent to become critical. Alternatively, if the other process's flag was set, the while loop is entered and the turn variable will establish who is permitted to become critical. Processes without priority will withdraw their intention to enter the critical section until they are given priority again (the inner while loop). Processes with priority will break from the while loop and enter their critical section. - -Dekker's algorithm guarantees mutual exclusion, freedom from deadlock, and freedom from starvation. Let us see why the last property holds. Suppose p0 is stuck inside the "while wants_to_enter[1]" loop forever. There is freedom from deadlock, so eventually p1 will proceed to its critical section and set turn = 0 (and the value of turn will remain unchanged as long as p0 doesn't progress). Eventually p0 will break out of the inner "while turn ≠ 0" loop (if it was ever stuck on it). After that it will set wants_to_enter[0] to true and settle down to waiting for wants_to_enter[1] to become false (since turn = 0, it will never do the actions in the while loop). The next time p1 tries to enter its critical section, it will be forced to execute the actions in its "while wants_to_enter[0]" loop. In particular, it will eventually set wants_to_enter[1] to false and get stuck in the "while turn ≠ 1" loop (since turn remains 0). The next time control passes to p0, it will exit the "while wants_to_enter[1]" loop and enter its critical section. - -If the algorithm were modified by performing the actions in the "while wants_to_enter[1]" loop without checking if turn = 0, then there is a possibility of starvation. Thus all the steps in the algorithm are necessary. diff --git a/wiki/wikipedia/2836.txt b/wiki/wikipedia/2836.txt deleted file mode 100644 index 6faf72bff766ac86c54b961b0750876868c69307..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2836.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, particularly in the study of functions of several complex variables, Ushiki's theorem, named after S. Ushiki, states that certain well-behaved functions cannot have certain kinds of well-behaved invariant manifolds. - -A biholomorphic mapping $F:\mathbb{C}^n\to\mathbb{C}^n$ cannot have a 1-dimensional compact smooth invariant manifold. In particular, such a map cannot have a homoclinic connection or heteroclinic connection. - -Invariant manifolds typically appear as solutions of certain asymptotic problems in dynamical systems. The most common is the stable manifold or its kin, the unstable manifold. - -Ushiki's theorem was published in 1980. The theorem appeared in print again several years later, in a certain Russian journal, by an author apparently unaware of Ushiki's work. - -The standard map cannot have a homoclinic or heteroclinic connection. The practical consequence is that one cannot show the existence of a Smale's horseshoe in this system by a perturbation method, starting from a homoclinic or heteroclinic connection.
Nevertheless, one can show that Smale's horseshoe exists in the standard map for many parameter values, based on crude but rigorous numerical calculations. diff --git a/wiki/wikipedia/2837.txt b/wiki/wikipedia/2837.txt deleted file mode 100644 index 665a84b4a733dfc6b920247a66ed53f8ce2e0b71..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2837.txt +++ /dev/null @@ -1,167 +0,0 @@ -A triangular number or triangle number counts objects arranged in an equilateral triangle. Triangular numbers are a type of figurate number, other examples being square numbers and cube numbers. The nth triangular number is the number of dots in the triangular arrangement with n dots on each side, and is equal to the sum of the n natural numbers from 1 to n. The sequence of triangular numbers, starting with the 0th triangular number, is - -0, 1, 3, 6, 10, 15, 21, 28, 36, 45, 55, 66, 78, 91, 105, 120, 136, 153, 171, 190, 210, 231, 253, 276, 300, 325, 351, 378, 406, 435, 465, 496, 528, 561, 595, 630, 666... - -(This sequence is included in the On-Line Encyclopedia of Integer Sequences.) - -The triangular numbers are given by the following explicit formulas: - - T_n= \sum_{k=1}^n k = 1+2+3+ \dotsb +n = \frac{n(n+1)}{2} = {n+1 \choose 2} , - -where $\textstyle {n+1 \choose 2}$ is a binomial coefficient. It represents the number of distinct pairs that can be selected from n + 1 objects, and it is read aloud as "n plus one choose two". - -The first equation can be illustrated using a visual proof. For every triangular number $T_n$, imagine a "half-square" arrangement of objects corresponding to the triangular number. Copying this arrangement and rotating it to create a rectangular figure doubles the number of objects, producing a rectangle with dimensions $n \times (n+1)$, which therefore contains $n(n+1)$ objects. The triangular number itself is always exactly half of the number of objects in such a figure, or: $T_n = \frac{n(n+1)}{2} $. - -The first equation can also be established using mathematical induction: Since $T_1$ is equal to one, a basis case is established. It follows from the definition that $T_n = n + T_{n-1}$, so assuming the inductive hypothesis for $n-1$, adding $n$ to both sides immediately gives - - T_n = n + T_{n-1} = \frac{2n}{2} + \frac{(n-1)n}{2} = \frac{(2+n-1)n}{2} = \frac{(n+1)n}{2}. - -In other words, since the proposition $P(n)$ (that is, the first equation, or inductive hypothesis itself) is true when $n=1$, and since $P(n)$ being true implies that $P(n+1)$ is also true, then the first equation is true for all natural numbers. The above argument can be easily modified to start with, and include, zero. - -The German mathematician and scientist, Carl Friedrich Gauss, is said to have found this relationship in his early youth, by forming n/2 pairs of numbers in the sum, the values in each pair summing to n + 1. However, regardless of the truth of this story, Gauss was not the first to discover this formula, and some find it likely that its origin goes back to the Pythagoreans in the 5th century BC. The two formulas were described by the Irish monk Dicuil in about 816 in his Computus. - -The triangular number Tn solves the handshake problem of counting the number of handshakes if each person in a room with n + 1 people shakes hands once with each other person. In other words, the solution to the handshake problem of n people is Tn−1.
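As a quick sanity check of the closed form and the handshake interpretation, here is a short illustrative Python sketch (the helper name `T` and the group size of 8 are our own choices):

```python
from itertools import combinations
from math import comb

def T(n):
    """nth triangular number via the closed form n(n+1)/2."""
    return n * (n + 1) // 2

# The closed form agrees with the defining sum and the binomial coefficient.
assert all(T(n) == sum(range(1, n + 1)) == comb(n + 1, 2) for n in range(100))

# Handshake problem: n + 1 = 8 people shake hands pairwise T(7) = 28 times.
assert len(list(combinations(range(8), 2))) == T(7) == 28
```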
The function T is the additive analog of the factorial function, which is the product of the integers from 1 to n. - -The number of line segments between closest pairs of dots in the triangle can be represented in terms of the number of dots or with a recurrence relation: - -L_n = 3 T_{n-1}= 3{n \choose 2};~~~L_n = L_{n-1} + 3(n-1), ~L_1 = 0. - -In the limit, the ratio between the two numbers, dots and line segments, is - -\lim_{n\to\infty} \frac{T_n}{L_n} = \frac{1}{3}. - -Triangular numbers have a wide variety of relations to other figurate numbers. - -Most simply, the sum of two consecutive triangular numbers is a square number, with the sum being the square of the difference between the two (and thus the difference of the two being the square root of the sum). Algebraically,
$$
T_n + T_{n-1} = \left (\frac{n^2}{2} + \frac{n}{2}\right) + \left(\frac{\left(n-1\right)^2}{2} + \frac{n-1}{2} \right ) = \left (\frac{n^2}{2} + \frac{n}{2}\right) + \left(\frac{n^2}{2} - \frac{n}{2} \right ) = n^2 = (T_n - T_{n-1})^2.
$$

This fact can be demonstrated graphically by positioning the triangles in opposite directions to create a square: - -There are infinitely many triangular numbers that are also square numbers; e.g., 1, 36, 1225. Some of them can be generated by a simple recursive formula: - -S_{n+1} = 4S_n \left( 8S_n + 1\right) with $S_1 = 1.$ - -All square triangular numbers are found from the recursion - -S_n = 34S_{n-1} - S_{n-2} + 2 with $S_0 = 0$ and $S_1 = 1.$ - -Also, the square of the nth triangular number is the same as the sum of the cubes of the integers 1 to n. This can also be expressed as
$$
 \sum_{k=1}^n k^3 = \left(\sum_{k=1}^n k \right)^2.
$$

The sum of the first n triangular numbers is the nth tetrahedral number: - - \sum_{k=1}^n T_k = \sum_{k=1}^n \frac{k(k+1)}{2} = \frac {n(n+1)(n+2)} {6}. - -More generally, the difference between the nth m-gonal number and the nth (m + 1)-gonal number is the (n − 1)th triangular number. For example, the sixth heptagonal number (81) minus the sixth hexagonal number (66) equals the fifth triangular number, 15. Every other triangular number is a hexagonal number. Knowing the triangular numbers, one can reckon any centered polygonal number; the nth centered k-gonal number is obtained by the formula - -C^k_n = kT_{n-1}+1 - -where T is a triangular number. - -The positive difference of two triangular numbers is a trapezoidal number. - -The pattern found for triangular numbers $ \sum_{n_1=1}^{n_2}n_1=\binom{n_2+1}{2}$ and for tetrahedral numbers $ \sum_{n_2=1}^{n_3}\sum_{n_1=1}^{n_2}n_1=\binom{n_3+2}{3},$ which uses binomial coefficients, can be generalized. This leads to the formula: - - \sum_{n_{k-1}=1}^{n_k}\sum_{n_{k-2}=1}^{n_{k-1}}\ldots\sum_{n_2=1}^{n_3}\sum_{n_1=1}^{n_2}n_1=\binom{n_k+k-1}{k} - -Triangular numbers correspond to the first-degree case of Faulhaber's formula. - -Alternating triangular numbers (1, 6, 15, 28, ...) are also hexagonal numbers. - -Every even perfect number is triangular (as well as hexagonal), given by the formula - -M_p 2^{p-1} = \frac{M_p (M_p + 1)}2 = T_{M_p} - -where Mp is a Mersenne prime. No odd perfect numbers are known; hence, all known perfect numbers are triangular. - -For example, the third triangular number is (3 × 2 =) 6, the seventh is (7 × 4 =) 28, the 31st is (31 × 16 =) 496, and the 127th is (127 × 64 =) 8128. - -The final digit of a triangular number is 0, 1, 3, 5, 6, or 8. A final 3 must be preceded by a 0 or 5; a final 8 must be preceded by a 2 or 7.
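These final-digit claims are easy to confirm empirically; the following illustrative Python check (the range bound of 100,000 is an arbitrary choice) tests them for the first hundred thousand triangular numbers:

```python
# Verify the final-digit claims for T_1 ... T_99999.
tri = [n * (n + 1) // 2 for n in range(1, 100000)]
assert {t % 10 for t in tri} <= {0, 1, 3, 5, 6, 8}   # last digit
assert all((t // 10) % 10 in (0, 5) for t in tri if t % 10 == 3)  # ...03, ...53
assert all((t // 10) % 10 in (2, 7) for t in tri if t % 10 == 8)  # ...28, ...78
print("final-digit properties hold")
```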
- -In base 10, the digital root of a nonzero triangular number is always 1, 3, 6, or 9. Hence, every triangular number is either divisible by three or has a remainder of 1 when divided by 9: - -0 = 9 × 0 - -1 = 9 × 0 + 1 - -3 = 9 × 0 + 3 - -6 = 9 × 0 + 6 - -10 = 9 × 1 + 1 - -15 = 9 × 1 + 6 - -21 = 9 × 2 + 3 - -28 = 9 × 3 + 1 - -36 = 9 × 4 - -45 = 9 × 5 - -55 = 9 × 6 + 1 - -66 = 9 × 7 + 3 - -78 = 9 × 8 + 6 - -91 = 9 × 10 + 1 - -... - -There is a more specific property of the triangular numbers that aren't divisible by 3: they have a remainder of either 1 or 10 when divided by 27. Those that are equal to 10 mod 27 are also equal to 10 mod 81. - -The digital root pattern for triangular numbers, repeating every nine terms, as shown above, is "1, 3, 6, 1, 6, 3, 1, 9, 9". - -The converse of the statement above is, however, not always true. For example, the digital root of 12, which is not a triangular number, is 3 and divisible by three. - -If x is a triangular number, then ax + b is also a triangular number, given a is an odd square and $b = \frac{a-1}{8}$. Note that b will always be a triangular number, because $8T_n + 1 = (2n + 1)^2$: every odd square arises from multiplying a triangular number by 8 and adding 1, and obtaining b from an odd square a is the inverse of this operation. - -The first several pairs of this form (not counting 1x + 0) are: 9x + 1, 25x + 3, 49x + 6, 81x + 10, 121x + 15, 169x + 21, ... Given x is equal to Tn, these formulas yield T3n + 1, T5n + 2, T7n + 3, T9n + 4, and so on. - -The sum of the reciprocals of all the nonzero triangular numbers is - - \sum_{n=1}^{\infty}{1 \over T_n} = 2\sum_{n=1}^{\infty}{1 \over {n^2 + n}} = 2 . - -This can be shown by using the basic sum of a telescoping series: - - \sum_{n=1}^{\infty}{1 \over {n(n+1)}} = 1 . - -Two other formulas regarding triangular numbers are - -T_{a+b} = T_a + T_b + ab - -and - -T_{ab} = T_aT_b + T_{a-1}T_{b-1}, - -both of which can easily be established either by looking at dot patterns (see above) or with some simple algebra. - -In 1796, Gauss discovered that every positive integer is representable as a sum of three triangular numbers (possibly including T0 = 0), writing in his diary his famous words, "ΕΥΡΗΚΑ! num = Δ + Δ + Δ". This theorem does not imply that the triangular numbers are different (as in the case of 20 = 10 + 10 + 0), nor that a solution with exactly three nonzero triangular numbers must exist. This is a special case of the Fermat polygonal number theorem. - -The largest triangular number of the form $2^k - 1$ is 4095 (see Ramanujan–Nagell equation). - -Wacław Franciszek Sierpiński posed the question as to the existence of four distinct triangular numbers in geometric progression. It was conjectured by Polish mathematician Kazimierz Szymiczek to be impossible and was later proven by Fang and Chen in 2007. - -Formulas involving expressing an integer as the sum of triangular numbers are connected to theta functions, in particular the Ramanujan theta function. - -A fully connected network of n computing devices requires the presence of Tn − 1 cables or other connections; this is equivalent to the handshake problem mentioned above. - -In a tournament format that uses a round-robin group stage, the number of matches that need to be played between n teams is equal to the triangular number Tn − 1. For example, a group stage with 4 teams requires 6 matches, and a group stage with 8 teams requires 28 matches.
This is also equivalent to the handshake problem and fully connected network problems. - -One way of calculating the depreciation of an asset is the sum-of-years' digits method, which involves finding Tn, where n is the length in years of the asset's useful life. Each year, the item loses $(b - s)\times\frac{n - y + 1}{T_n}$, where b is the item's beginning value (in units of currency), s is its final salvage value, n is the total number of years the item is usable, and y the current year (from 1 to n) in the depreciation schedule. Under this method, an item with a usable life of n = 4 years would lose 4/10 of its "losable" value in the first year, 3/10 in the second, 2/10 in the third, and 1/10 in the fourth, accumulating a total depreciation of 10/10 (the whole) of the losable value. - -By analogy with the square root of x, one can define the (positive) triangular root of x as the number n such that Tn = x: - -n = \frac{\sqrt{8x+1}-1}{2} - -which follows immediately from the quadratic formula. So an integer x is triangular if and only if 8x + 1 is a square. Equivalently, if the positive triangular root n of x is an integer, then x is the nth triangular number. Although some sources use this name and notation, they are not in wide use. diff --git a/wiki/wikipedia/2838.txt b/wiki/wikipedia/2838.txt deleted file mode 100644 index cbc063ad0047599720fc5d43ea35745bb32c6e58..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2838.txt +++ /dev/null @@ -1,19 +0,0 @@ -In graph drawing, a circular layout is a style of drawing that places the vertices of a graph on a circle, often evenly spaced so that they form the vertices of a regular polygon. - -Circular layouts are a good fit for communications network topologies such as star or ring networks, and for the cyclic parts of metabolic networks. For graphs with a known Hamiltonian cycle, a circular layout allows the cycle to be depicted as the circle, and in this way circular layouts form the basis of the LCF notation for Hamiltonian cubic graphs. - -A circular layout may be used on its own for an entire graph drawing, but it also may be used as the layout for smaller clusters of vertices within a larger graph drawing, such as its biconnected components, clusters of genes in a gene interaction graph, or natural subgroups within a social network. If multiple vertex circles are used in this way, other methods such as force-directed graph drawing may be used to arrange the clusters. - -One advantage of a circular layout in some of these applications, such as bioinformatics or social network visualization, is its neutrality: by placing all vertices at equal distances from each other and from the center of the drawing, none is given a privileged position, countering the tendency of viewers to perceive more centrally located nodes as being more important. - -The edges of the drawing may be depicted as chords of the circle, as circular arcs, or as other types of curves. - -The visual distinction between the inside and the outside of the vertex circle in a circular layout may be used to separate two different styles of edge drawing. For instance, a circular drawing algorithm of Gansner uses edge bundling within the circle, together with some edges that are not bundled, drawn outside the circle. - -Several authors have studied the problem of finding a permutation of the vertices of a circular layout that minimizes the number of edge crossings when all edges are drawn inside the vertex circle. This number of crossings is zero only for outerplanar graphs.
For other graphs, the number of crossings may be optimized or reduced separately for each biconnected component of the graph before combining the solutions, as these components may be drawn so that they do not interact. - -In general, minimizing the number of crossings is NP-complete, but may be approximated with an approximation ratio of $O(\log^2 n)$ where n is the number of vertices. Heuristic methods for reducing the crossing complexity have also been devised, based e.g. on a careful vertex insertion order and on local optimization. - -A circular layout may also be used to maximize the number of crossings. In particular, choosing a random permutation for the vertices causes each possible crossing to occur with probability 1/3, so the expected number of crossings is within a factor of three of the maximum number of crossings among all possible layouts. Derandomizing this method gives a deterministic approximation algorithm with approximation ratio three. - -Along with crossings, circular versions of problems of optimizing the lengths of edges in a circular layout, the angular resolution of the crossings, or the cutwidth (the maximum number of edges that connect one arc of the circle to the opposite arc) have also been considered, but many of these problems are NP-complete. diff --git a/wiki/wikipedia/2839.txt b/wiki/wikipedia/2839.txt deleted file mode 100644 index eb20dda42d2a5473694957b4a71b76b812a906bc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2839.txt +++ /dev/null @@ -1,87 +0,0 @@ -Boole's expansion theorem, often referred to as the Shannon expansion or decomposition, is the identity: $F = x \cdot F_x + x' \cdot F_{x'}$, where $F$ is any Boolean function, $x$ is a variable, $x'$ is the complement of $x$, and $F_x$ and $F_{x'}$ are $F$ with the argument $x$ set equal to $1$ and to $0$ respectively. - -The terms $F_x$ and $F_{x'}$ are sometimes called the positive and negative Shannon cofactors, respectively, of $F$ with respect to $x$. These are functions, computed by the restrict operator as $\operatorname{restrict}(F, x, 1)$ and $\operatorname{restrict}(F, x, 0)$ respectively (see valuation (logic) and partial application). - -It has been called the "fundamental theorem of Boolean algebra". Besides its theoretical importance, it paved the way for binary decision diagrams (BDDs), satisfiability solvers, and many other techniques relevant to computer engineering and formal verification of digital circuits. - -In such engineering contexts (especially in BDDs), the expansion is interpreted as an if-then-else, with the variable $x$ being the condition and the cofactors being the branches ($F_x$ when $x$ is true and $F_{x'}$ when $x$ is false). - -A more explicit way of stating the theorem is:
$$
f(X_1, X_2, \dots , X_n) = X_1 \cdot f(1, X_2, \dots , X_n) + X_1' \cdot f(0, X_2, \dots , X_n)
$$

XOR form: The statement also holds when the disjunction "+" is replaced by the XOR operator:
$$
f(X_1, X_2, \dots , X_n) = X_1 \cdot f(1, X_2, \dots , X_n) \oplus X_1' \cdot f(0, X_2, \dots , X_n)
$$

Dual form: There is a dual form of the Shannon expansion (which does not have a related XOR form):
$$
f(X_1, X_2, \dots , X_n) = (X_1 + f(0, X_2, \dots , X_n)) \cdot (X_1' + f(1, X_2, \dots , X_n))
$$

Repeated application for each argument leads to the Sum of Products (SoP) canonical form of the Boolean function $f$.
For example, for $n=2$ this gives

\begin{align}f(X_1, X_2) & = X_1 \cdot f(1, X_2) + X_1' \cdot f(0, X_2)\\
& = X_1 X_2 \cdot f(1, 1) + X_1 X_2' \cdot f(1, 0) + X_1' X_2 \cdot f(0, 1) + X_1' X_2' \cdot f(0, 0)
\end{align}

Likewise, application of the dual form leads to the Product of Sums (PoS) canonical form (using the distributivity law of $+$ over $\cdot$):

\begin{align}f(X_1, X_2) & = (X_1 + f(0, X_2)) \cdot (X_1' + f(1, X_2))\\
& = (X_1 + X_2 + f(0, 0)) \cdot (X_1 + X_2' + f(0, 1)) \cdot (X_1' + X_2 + f(1, 0)) \cdot (X_1' + X_2' + f(1, 1))
\end{align}

Linear properties of cofactors: For a Boolean function F which is made up of two Boolean functions G and H, the following are true: - -If $F = H'$ then $F_x = H'_x$ - -If $F = G \cdot H$ then $F_x = G_x \cdot H_x$ - -If $F = G + H$ then $F_x = G_x + H_x$ - -If $F = G \oplus H$ then $F_x = G_x \oplus H_x$ - -Characteristics of unate functions: If F is a unate function, then: - -If F is positive unate then $F = x \cdot F_x + F_{x'}$ - -If F is negative unate then $F = F_x + x' \cdot F_{x'}$ - -Boolean difference: The Boolean difference or Boolean derivative of the function F with respect to the literal x is defined as:
$$
 \frac{\partial F}{\partial x} = F_x \oplus F_{x'}
$$

Universal quantification: The universal quantification of F is defined as:
$$
 \forall x F = F_x \cdot F_{x'}
$$

Existential quantification: The existential quantification of F is defined as:
$$
 \exists x F = F_x + F_{x'}
$$

George Boole presented this expansion as his Proposition II, "To expand or develop a function involving any number of logical symbols", in his Laws of Thought (1854), and it was "widely applied by Boole and other nineteenth-century logicians". - -Claude Shannon mentioned this expansion, among other Boolean identities, in a 1949 paper, and showed the switching network interpretations of the identity. In the literature of computer design and switching theory, the identity is often incorrectly attributed to Shannon. - -Two consequences are worth noting: - -* Binary decision diagrams follow from systematic use of this theorem. - -* Any Boolean function can be implemented directly in a switching circuit using a hierarchy of basic multiplexers by repeated application of this theorem. diff --git a/wiki/wikipedia/284.txt b/wiki/wikipedia/284.txt deleted file mode 100644 index f6af45f9c115f52af5c041bc670bd9a64bfa99cc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/284.txt +++ /dev/null @@ -1,158 +0,0 @@ -In abstract algebra, the biquaternions are the numbers w + x i + y j + z k, where w, x, y, and z are complex numbers, or variants thereof, and the elements of {1, i, j, k} multiply as in the quaternion group and commute with their coefficients. There are three types of biquaternions corresponding to complex numbers and the variations thereof: - -* Biquaternions when the coefficients are complex numbers. - -* Split-biquaternions when the coefficients are split-complex numbers. - -* Dual quaternions when the coefficients are dual numbers. - -This article is about the ordinary biquaternions named by William Rowan Hamilton in 1844 (see Proceedings of the Royal Irish Academy 1844 & 1850 page 388). Some of the more prominent proponents of these biquaternions include Alexander Macfarlane, Arthur W. Conway, Ludwik Silberstein, and Cornelius Lanczos. As developed below, the unit quasi-sphere of the biquaternions provides a representation of the Lorentz group, which is the foundation of special relativity.
- -The algebra of biquaternions can be considered as a tensor product $\mathbb{C} \otimes \mathbb{H}$ (taken over the reals) where C or $\mathbb{C}$ is the field of complex numbers and H or $\mathbb{H}$ is the division algebra of (real) quaternions. In other words, the biquaternions are just the complexification of the quaternions. Viewed as a complex algebra, the biquaternions are isomorphic to the algebra of 2 × 2 complex matrices M2(C). They are also isomorphic to several Clifford algebras including $\mathbb{H}(\mathbb{C}) = \mathrm{C}\ell^0_3(\mathbb{C}) = \mathrm{C}\ell_2(\mathbb{C}) = \mathrm{C}\ell_{1,2}(\mathbb{R})$, the Pauli algebra $\mathrm{C}\ell_{3,0}(\mathbb{R})$, and the even part $\mathrm{C}\ell^0_{1,3}(\mathbb{R}) = \mathrm{C}\ell^0_{3,1}(\mathbb{R})$ of the spacetime algebra. To distinguish square roots of minus one in the biquaternions, Hamilton and Arthur W. Conway used the convention of representing the square root of minus one in the scalar field C by h to avoid confusion with the i in the quaternion group. Commutativity of the scalar field with the quaternion group is assumed:
$$
 h \mathbf i = \mathbf i h,\ \ h \mathbf j = \mathbf j h,\ \ h \mathbf k = \mathbf k h .
$$

Hamilton introduced the terms bivector, biconjugate, bitensor, and biversor to extend notions used with real quaternions H. - -Hamilton's primary exposition on biquaternions came in 1853 in his Lectures on Quaternions. The editions of Elements of Quaternions, in 1866 by William Edwin Hamilton (son of Rowan), and in 1899, 1901 by Charles Jasper Joly, reduced the biquaternion coverage in favor of the real quaternions. - -Considered with the operations of component-wise addition, and multiplication according to the quaternion group, this collection forms a 4-dimensional algebra over the complex numbers C. The algebra of biquaternions is associative, but not commutative. A biquaternion is either a unit or a zero divisor. The algebra of biquaternions forms a composition algebra and can be constructed from bicomplex numbers, as shown below. - -Note the matrix product
$$
\begin{pmatrix}h & 0\\0 & -h\end{pmatrix}\begin{pmatrix}0 & 1\\-1 & 0\end{pmatrix} = \begin{pmatrix}0 & h\\h & 0\end{pmatrix}
$$.

Because h is the imaginary unit, each of these three arrays has a square equal to the negative of the identity matrix. - -When this matrix product is interpreted as i j = k, then one obtains a subgroup of matrices that is isomorphic to the quaternion group. Consequently,
$$
\begin{pmatrix}u+hv & w+hx\\-w+hx & u-hv\end{pmatrix}
$$

represents biquaternion q = u 1 + v i + w j + x k. - -Given any 2 × 2 complex matrix, there are complex values u, v, w, and x to put it in this form so that the matrix ring M(2,C) is isomorphic to the biquaternion ring. - -Considering the biquaternion algebra over the scalar field of real numbers R, the set
$$
\{\mathbf 1, h, \mathbf i, h\mathbf i, \mathbf j, h\mathbf j, \mathbf k, h\mathbf k \}
$$

forms a basis so the algebra has eight real dimensions. The squares of the elements hi, hj, and hk are all positive one, for example, $(h\mathbf i)^2 = h^2 \mathbf i^2 = (-1)(-1) = +1$. - -The subalgebra given by
$$
\{ x + y(h\mathbf i) : x, y \in \R \}
$$

is ring isomorphic to the plane of split-complex numbers, which has an algebraic structure built upon the unit hyperbola. The elements hj and hk also determine such subalgebras. - -Furthermore,
$$
\{ x + y \mathbf j : x,y \in \Complex \}
$$

is a subalgebra isomorphic to the tessarines. - -A third subalgebra called coquaternions is generated by hj and hk. It is seen that (hj)(hk) = (−1)i, and that the square of this element is −1. These elements generate the dihedral group of the square.
The linear subspace with basis {1, i, hj, hk} thus is closed under multiplication, and forms the coquaternion algebra. - -In the context of quantum mechanics and spinor algebra, the biquaternions hi, hj, and hk (or their negatives), viewed in the M2(C) representation, are called Pauli matrices. - -The biquaternions have two conjugations: - -* the biconjugate or biscalar minus bivector is $q^* = w - x\mathbf i - y\mathbf j - z\mathbf k \!\ ,$ and - -* the complex conjugation of biquaternion coefficients $q^{\star} = w^{\star} + x^{\star}\mathbf i + y^{\star}\mathbf j + z^{\star}\mathbf k $ - -where $z^{\star} = a - bh$ when $z = a + bh,\quad a,b \in \mathbb R,\quad h^2 = -\mathbf 1.$ - -Note that $(pq)^* = q^* p^*, \quad (pq)^{\star} = p^{\star} q^{\star} , \quad (q^*)^{\star} = (q^{\star})^*.$ - -Clearly, if $q q^* = 0 $ then q is a zero divisor. Otherwise $\lbrace q q^* \rbrace^{-\mathbf 1} $ is defined over the complex numbers. Further, $q q^* = q^* q $ is easily verified. This allows an inverse to be defined by - -* $q^{-\mathbf 1} = q^* \lbrace q q^* \rbrace^{-\mathbf 1}$, if $qq^* \neq 0.$ - -Consider now the linear subspace
$$
M = \lbrace q\colon q^* = q^{\star} \rbrace = \lbrace t + x(h\mathbf i) + y(h \mathbf j) + z(h \mathbf k)\colon t, x, y, z \in \mathbb R \rbrace .
$$

M is not a subalgebra since it is not closed under products; for example $(h\mathbf i)(h\mathbf j) = h^2 \mathbf{ij} = -\mathbf k \notin M.$ Indeed, M cannot form an algebra if it is not even a magma. - -Proposition: If q is in M, then $q q^* = t^2 - x^2 - y^2 - z^2.$ - -Proof: From the definitions, - -\begin{align}
q q^* &= (t+xh\mathbf i+yh\mathbf j+zh\mathbf k)(t-xh\mathbf i-yh\mathbf j-zh\mathbf k)\\
&= t^2 - x^2(h\mathbf i)^2 - y^2(h\mathbf j)^2 - z^2(h\mathbf k)^2 \\
&= t^2 - x^2 - y^2 - z^2.
\end{align} - -Definition: Let biquaternion g satisfy $g g^* = \mathbf 1.$ Then the Lorentz transformation associated with g is given by
$$
T(q) = g^* q g^{\star}.
$$

Proposition: If q is in M, then T(q) is also in M. - -Proof: $(g^* q g^{\star})^* = (g^{\star})^* q^* g = (g^*)^{\star} q^{\star} g = (g^* q g^{\star})^{\star}.$ - -Proposition: $\quad T(q) (T(q))^* = q q^* $ - -Proof: Note first that gg* = 1 means that the sum of the squares of its four complex components is one. Then the sum of the squares of the complex conjugates of these components is also one. Therefore, $g^{\star} (g^{\star})^* = \mathbf 1.$ Now
$$
(g^* q g^{\star})(g^* q g^{\star})^* = g^* q g^{\star} (g^{\star})^* q^* g = g^* q q^* g = q q^*.
$$

As the biquaternions have been a fixture of linear algebra since the beginnings of mathematical physics, there is an array of concepts that are illustrated or represented by biquaternion algebra. The transformation group $G = \lbrace g : g g^* = 1 \rbrace $ has two parts, $G \cap H$ and $G \cap M.$ The first part is characterized by $g = g^{\star}$ ; then the Lorentz transformation corresponding to g is given by $T(q) = g^{-1} q g $ since $g^* = g^{-1}. $ Such a transformation is a rotation by quaternion multiplication, and the collection of them is SO(3) $\cong G \cap H .$ But this subgroup of G is not a normal subgroup, so no quotient group can be formed. - -To view $G \cap M$ it is necessary to show some subalgebra structure in the biquaternions. Let r represent an element of the sphere of square roots of minus one in the real quaternion subalgebra H.
Then $(hr)^2 = +1$ and the plane of biquaternions given by $D_r = \lbrace z = x + yhr : x, y \in \mathbb R \rbrace$ is a commutative subalgebra isomorphic to the plane of split-complex numbers. Just as the ordinary complex plane has a unit circle, $D_r $ has a unit hyperbola given by
$$
\exp(ahr) = \cosh(a) + hr\ \sinh(a),\quad a \in R.
$$

Just as the unit circle turns by multiplication through one of its elements, so the hyperbola turns because $\exp(ahr) \exp(bhr) = \exp((a+b)hr). $ Hence these algebraic operators on the hyperbola are called hyperbolic versors. The unit circle in C and unit hyperbola in Dr are examples of one-parameter groups. For every square root r of minus one in H, there is a one-parameter group in the biquaternions given by $G \cap D_r.$ - -The space of biquaternions has a natural topology through the Euclidean metric on 8-space. With respect to this topology, G is a topological group. Moreover, it has analytic structure making it a six-parameter Lie group. Consider the subspace of bivectors $A = \lbrace q : q^* = -q \rbrace $. Then the exponential map
$$
\exp:A \to G
$$ takes the real vectors to $G \cap H$ and the h-vectors to $G \cap M.$ When equipped with the commutator, A forms the Lie algebra of G. Thus this study of a six-dimensional space serves to introduce the general concepts of Lie theory. When viewed in the matrix representation, G is called the special linear group SL(2,C) in M2(C). - -Many of the concepts of special relativity are illustrated through the biquaternion structures laid out. The subspace M corresponds to Minkowski space, with the four coordinates giving the time and space locations of events in a resting frame of reference. Any hyperbolic versor exp(ahr) corresponds to a velocity in direction r of speed c tanh a where c is the velocity of light. The inertial frame of reference of this velocity can be made the resting frame by applying the Lorentz boost T given by g = exp(0.5ahr) since then $g^{\star} = \exp(-0.5ahr) = g^*$ so that $T(\exp(ahr)) = 1 .$ - -Naturally the hyperboloid $G \cap M,$ which represents the range of velocities for sub-luminal motion, is of physical interest. There has been considerable work associating this "velocity space" with the hyperboloid model of hyperbolic geometry. In special relativity, the hyperbolic angle parameter of a hyperbolic versor is called rapidity. Thus we see the biquaternion group G provides a group representation for the Lorentz group. - -After the introduction of spinor theory, particularly in the hands of Wolfgang Pauli and Élie Cartan, the biquaternion representation of the Lorentz group was superseded. The new methods were founded on basis vectors in the set
$$
\{ q \ :\ q q^* = 0 \} = \left\{ w + x\mathbf i + y\mathbf j + z\mathbf k \ :\ w^2 + x^2 + y^2 + z^2 = 0 \right\}
$$

which is called the complex light cone. The above representation of the Lorentz group coincides with what physicists refer to as four-vectors. Beyond four-vectors, the standard model of particle physics also includes other Lorentz representations, known as scalars, and the (1, 0) ⊕ (0, 1)-representation associated with e.g. the electromagnetic field tensor. Furthermore, particle physics makes use of the SL(2, C) representations (or projective representations of the Lorentz group) known as left- and right-handed Weyl spinors, Majorana spinors, and Dirac spinors. It is known that each of these seven representations can be constructed as invariant subspaces within the biquaternions. - -Although W.R.
Hamilton introduced biquaternions in the 19th century, the delineation of their mathematical structure as a special type of algebra over a field was accomplished in the 20th century: the biquaternions may be generated out of the bicomplex numbers in the same way that Adrian Albert generated the real quaternions out of complex numbers in the so-called Cayley–Dickson construction. In this construction, a bicomplex number (w,z) has conjugate (w,z)* = (w, − z). - -The biquaternion is then a pair of bicomplex numbers (a,b), where the product with a second biquaternion (c, d) is
$$
(a,b)(c,d) = (a c - d^* b, d a + b c^* ).
$$

If $a = (u, v), b = (w,z), $ then the biconjugate $(a, b)^* = (a^*, -b).$ - -When (a,b)* is written as a 4-vector of ordinary complex numbers,
$$
(u, v, w, z)^* = (u, -v, -w, -z).
$$

The biquaternions form an example of a quaternion algebra, and it has norm
$$
N(u,v,w,z) = u^2 + v^2 + w^2 + z^2 .
$$

Two biquaternions p and q satisfy $N(p q) = N(p) N(q) $ indicating that N is a quadratic form admitting composition, so that the biquaternions form a composition algebra. diff --git a/wiki/wikipedia/2840.txt b/wiki/wikipedia/2840.txt deleted file mode 100644 index d65073c74b3e1fc51e90050322e448adb2e1c392..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2840.txt +++ /dev/null @@ -1,378 +0,0 @@ -In mathematics, more specifically calculus, L'Hôpital's rule or L'Hospital's rule, also known as Bernoulli's rule, is a theorem which provides a technique to evaluate limits of indeterminate forms. Application (or repeated application) of the rule often converts an indeterminate form to an expression that can be easily evaluated by substitution. The rule is named after the 17th-century French mathematician Guillaume de l'Hôpital. Although the rule is often attributed to L'Hôpital, the theorem was first introduced to him in 1694 by the Swiss mathematician Johann Bernoulli. - -L'Hôpital's rule states that for functions f and g which are differentiable on an open interval I except possibly at a point c contained in I, if \lim_{x\to c}f(x)=\lim_{x\to c}g(x)=0 \text{ or } \pm\infty, and g'(x)\ne 0 for all x in I with x ≠ c, and \lim_{x\to c}\frac{f'(x)}{g'(x)} exists, then
$$
\lim_{x\to c}\frac{f(x)}{g(x)} = \lim_{x\to c}\frac{f'(x)}{g'(x)}.
$$

The differentiation of the numerator and denominator often simplifies the quotient or converts it to a limit that can be evaluated directly. - -Guillaume de l'Hôpital (also written l'Hospital) published this rule in his 1696 book Analyse des Infiniment Petits pour l'Intelligence des Lignes Courbes (literal translation: Analysis of the Infinitely Small for the Understanding of Curved Lines), the first textbook on differential calculus. However, it is believed that the rule was discovered by the Swiss mathematician Johann Bernoulli. - -The general form of L'Hôpital's rule covers many cases. Let c and L be extended real numbers (i.e., real numbers, positive infinity, or negative infinity). Let I be an open interval containing c (for a two-sided limit) or an open interval with endpoint c (for a one-sided limit, or a limit at infinity if c is infinite). The real valued functions f and g are assumed to be differentiable on I except possibly at c, and additionally $g'(x) \ne 0$ on I except possibly at c.
It is also assumed that $\lim_{x\to c} \frac{f'(x)}{g'(x)} = L.$ Thus the rule applies to situations in which the ratio of the derivatives has a finite or infinite limit, but not to situations in which that ratio fluctuates permanently as x gets closer and closer to c. - -If either - -\lim_{x\to c}f(x) = \lim_{x\to c}g(x) = 0 - -or - -\lim_{x\to c} |f(x)| = \lim_{x\to c} |g(x)| = \infty, - -then - -\lim_{x\to c} \frac{f(x)}{g(x)} = L. - -Although we have written x → c throughout, the limits may also be one-sided limits (x → c+ or x → c), when c is a finite endpoint of I. - -In the second case, the hypothesis that f diverges to infinity is not used in the proof (see note at the end of the proof section); thus, while the conditions of the rule are normally stated as above, the second sufficient condition for the rule's procedure to be valid can be more briefly stated as $\lim_{x \to c} |g(x)| = \infty.$ - -The hypothesis that $g'(x)\ne 0$ appears most commonly in the literature, but some authors sidestep this hypothesis by adding other hypotheses elsewhere. One method is to define the limit of a function with the additional requirement that the limiting function is defined everywhere on the relevant interval I except possibly at c. Another method is to require that both f and g be differentiable everywhere on an interval containing c. - -The requirement that the limit -$$ -\lim_{x\to c}\frac{f'(x)}{g'(x)} -$$ - -exists is essential. Without this condition, $f'$ or $g'$ may exhibit undamped oscillations as $x$ approaches $c$, in which case L'Hôpital's rule does not apply. For example, if $f(x)=x+\sin(x)$, $g(x)=x$ and $c=\pm\infty$, then -$$ -\frac{f'(x)}{g'(x)}=\frac{1+\cos(x)}{1}; -$$ - -this expression does not approach a limit as $x$ goes to $c$, since the cosine function oscillates between 1 and −1. But working with the original functions, $\lim_{x\to\infty}\frac{f(x)}{g(x)}$ can be shown to exist: -$$ -\lim_{x\to\infty}\frac{f(x)}{g(x)} = \lim_{x\to\infty}\left(1+\frac{\sin(x)}{x}\right) = 1. -$$ - -In a case such as this, all that can be concluded is that -$$ - \liminf_{x \to c} \frac{f'(x)}{g'(x)} \leq \liminf_{x \to c} \frac{f(x)}{g(x)} \leq \limsup_{x \to c} \frac{f(x)}{g(x)} \leq \limsup_{x \to c} \frac{f'(x)}{g'(x)} , -$$ - -so that if the limit of f/g exists, then it must lie between the inferior and superior limits of f′/g′. (In the example above, this is true, since 1 indeed lies between 0 and 2.) - -* Here is a basic example involving the exponential function, which involves the indeterminate form 0/0 at x = 0: - -\begin{align} - -\lim_{x\to 0} \frac{e^x - 1}{x^2+x} &= \lim_{x\to 0} \frac{\frac{d}{dx}(e^x - 1)}{\frac{d}{dx}(x^2+x)} \\[4pt] - -&= \lim_{x\to 0} \frac{e^x}{2x+1} \\[4pt] - -&= 1. - -\end{align} - - - -* This is a more elaborate example involving 0/0. Applying L'Hôpital's rule a single time still results in an indeterminate form. In this case, the limit may be evaluated by applying the rule three times: - -\begin{align} - -\lim_{x\to 0}{\frac{2\sin(x)-\sin(2x)}{x-\sin(x)}} - -& =\lim_{x\to 0}{\frac{2\cos(x)-2\cos(2x)}{1-\cos(x)}} \\[4pt] - -& = \lim_{x\to 0}{\frac{-2\sin(x)+4\sin(2x)}{\sin(x)}} \\[4pt] - -& = \lim_{x\to 0}{\frac{-2\cos(x)+8\cos(2x)}{\cos(x)}} \\[4pt] - -& ={\frac{-2+8}{1}} \\[4pt] - -& =6. - -\end{align} - -* Here is an example involving ∞/∞: - -\lim_{x\to\infty}x^n\cdot e^{-x} - -=\lim_{x\to\infty}{\frac{x^n}{e^x}} - -=\lim_{x\to\infty}{\frac{nx^{n-1}}{e^x}} - -=n\cdot \lim_{x\to\infty}{\frac{x^{n-1}}{e^x}}. 
- - Repeatedly apply L'Hôpital's rule until the exponent is zero (if n is an integer) or negative (if n is fractional) to conclude that the limit is zero. - -* Here is an example involving the indeterminate form 0 · ∞ (see below), which is rewritten as the form ∞/∞: \lim_{x\to 0^+}x \ln x =\lim_{x\to 0^+} \frac{\ln x}{\frac{1}{x}}
= \lim_{x\to 0^+} \frac{\frac{1}{x}}{-\frac{1}{x^2}}
= \lim_{x\to 0^+} -x = 0. - -*Here is an example involving the mortgage repayment formula and 0/0. Let P be the principal (loan amount), r the interest rate per period and n the number of periods. When r is zero, the repayment amount per period is $\frac{P}{n}$ (since only principal is being repaid); this is consistent with the formula for non-zero interest rates: \begin{align}
\lim_{r\to 0}\frac{Pr(1+r)^n}{(1+r)^n-1}
& = P \lim_{r\to 0} \frac{(1+r)^n+rn(1+r)^{n-1}}{n(1+r)^{n-1}} \\[4pt]
& = \frac{P}{n}.
\end{align} - -* One can also use L'Hôpital's rule to prove the following theorem. If f is twice-differentiable in a neighborhood of x and its second derivative is continuous on this neighborhood, then \begin{align}
\lim_{h\to 0}\frac{f(x+h)+f(x-h)-2f(x)}{h^2}
&= \lim_{h\to 0}\frac{f'(x+h)-f'(x-h)}{2h} \\[4pt]
&= \lim_{h\to 0}\frac{f''(x+h) + f''(x-h)}{2} \\[4pt]
&= f''(x).
\end{align} - -*

    Sometimes L'Hôpital's rule is invoked in a tricky way: suppose f(x) + f′(x) converges as x → ∞ and that $e^x\cdot f(x)$ converges to positive or negative infinity. Then: - -\lim_{x\to\infty }f(x) - -= \lim_{x\to\infty}\frac{e^x\cdot f(x)}{e^x} - -= \lim_{x\to\infty}\frac{e^x\bigl(f(x)+f'(x)\bigr)}{e^x} - -= \lim_{x\to\infty}\bigl(f(x)+f'(x)\bigr) - - and so, $\lim_{x\to\infty}f(x)$ exists and $\lim_{x\to\infty}f'(x) = 0.$

    The result remains true without the added hypothesis that $e^x\cdot f(x)$ converges to positive or negative infinity, but the justification is then incomplete.
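As a numerical sanity check of the rule, one can compare the ratio f/g with the ratio of derivatives f′/g′ near the limit point; the illustrative Python sketch below (the function names are our own) does this for the 0/0 example lim_{x→0} (2 sin x − sin 2x)/(x − sin x) = 6 evaluated earlier:

```python
from math import sin, cos

f = lambda x: 2 * sin(x) - sin(2 * x)        # numerator
g = lambda x: x - sin(x)                     # denominator
df = lambda x: 2 * cos(x) - 2 * cos(2 * x)   # f'
dg = lambda x: 1 - cos(x)                    # g'

for x in (0.1, 0.01, 0.001):
    print(x, f(x) / g(x), df(x) / dg(x))     # both columns approach 6
```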

    - -Sometimes L'Hôpital's rule does not lead to an answer in a finite number of steps unless some additional steps are applied. Examples include the following: - -* Two applications can lead to a return to the original expression that was to be evaluated: \lim_{x\to\infty} \frac{e^x+e^{-x}}{e^x-e^{-x}}
= \lim_{x\to\infty} \frac{e^x-e^{-x}}{e^x+e^{-x}}
= \lim_{x\to\infty} \frac{e^x+e^{-x}}{e^x-e^{-x}}
= \cdots . This situation can be dealt with by substituting $y=e^x$ and noting that y goes to infinity as x goes to infinity; with this substitution, this problem can be solved with a single application of the rule: \lim_{x\to\infty} \frac{e^x+e^{-x}}{e^x-e^{-x}}
= \lim_{y\to\infty} \frac{y+y^{-1}}{y-y^{-1}}
= \lim_{y\to\infty} \frac{1-y^{-2}}{1+y^{-2}}
= \frac{1}{1} = 1. Alternatively, the numerator and denominator can both be multiplied by $e^x,$ at which point L'Hôpital's rule can immediately be applied successfully: \lim_{x\to\infty} \frac{e^x+e^{-x}}{e^x-e^{-x}}
= \lim_{x\to\infty} \frac{e^{2x} + 1}{e^{2x} - 1}
= \lim_{x\to\infty} \frac{2e^{2x}}{2e^{2x}}
= 1. - -*An arbitrarily large number of applications may never lead to an answer even without repeating: \lim_{x\to\infty} \frac{x^\frac1{2}+x^{-\frac1{2}}}{x^\frac1{2}-x^{-\frac1{2}}}
= \lim_{x\to\infty} \frac{\frac1{2}x^{-\frac1{2}}-\frac{1}{2}x^{-\frac3{2}}}{\frac1{2}x^{-\frac1{2}}+\frac1{2}x^{-\frac3{2}}}
= \lim_{x\to\infty} \frac{-\frac1{4}x^{-\frac3{2}}+\frac3{4}x^{-\frac5{2}}}{-\frac1{4}x^{-\frac3{2}}-\frac3{4}x^{-\frac5{2}}}
= \cdots . This situation too can be dealt with by a transformation of variables, in this case $y = \sqrt{x}$: \lim_{x\to\infty} \frac{x^\frac1{2}+x^{-\frac1{2}}}{x^\frac1{2}-x^{-\frac1{2}}}
= \lim_{y\to\infty} \frac{y+y^{-1}}{y-y^{-1}}
= \lim_{y\to\infty} \frac{1-y^{-2}}{1+y^{-2}}
= \frac1{1}
= 1. Again, an alternative approach is to multiply numerator and denominator by $x^{1/2}$ before applying L'Hôpital's rule: \lim_{x\to\infty} \frac{x^\frac{1}{2}+x^{-\frac{1}{2}}}{x^\frac{1}{2}-x^{-\frac{1}{2}}}
= \lim_{x\to\infty} \frac{x+1}{x-1}
= \lim_{x\to\infty} \frac{1}{1}
= 1. - -A common pitfall is using L'Hôpital's rule with some circular reasoning to compute a derivative via a difference quotient. For example, consider the task of proving the derivative formula for powers of x:
$$
\lim_{h\to 0}\frac{(x+h)^n-x^n}{h}=nx^{n-1}.
$$

Applying L'Hôpital's rule and finding the derivatives with respect to h of the numerator and the denominator yields $nx^{n-1}$, as expected. However, differentiating the numerator required the use of the very fact that is being proven. This is an example of begging the question, since one may not assume the fact to be proven during the course of the proof. - -The necessity of the condition that $g'(x)\ne 0$ near $c$ can be seen by the following counterexample due to Otto Stolz. Let $f(x)=x+\sin x \cos x$ and $g(x)=f(x)e^{\sin x}.$ Then there is no limit for $f(x)/g(x)$ as $x\to\infty.$ However,

\begin{align}
\frac{f'(x)}{g'(x)} &= \frac{2\cos^2 x}{(2 \cos^2 x) e^{\sin x} + (x+\sin x \cos x) e^{\sin x} \cos x} \\[4pt]
&= \frac{2\cos x}{2 \cos x +x+\sin x \cos x} e^{-\sin x},
\end{align}

which tends to 0 as $x\to\infty$. Further examples of this type were found by Ralph P. Boas Jr. - -Other indeterminate forms, such as $1^\infty$, $0^0$, $\infty^0$, 0 · ∞, and ∞ − ∞, can sometimes be evaluated using L'Hôpital's rule.
For example, to evaluate a limit involving ∞ − ∞, convert the difference of two functions to a quotient:

\begin{align}
\lim_{x\to 1}\left(\frac{x}{x-1}-\frac1{\ln x}\right)
& = \lim_{x\to 1}\frac{x\cdot\ln x -x+1}{(x-1)\cdot\ln x} & \quad (1) \\[6pt]
& = \lim_{x\to 1}\frac{\ln x}{\frac{x-1}{x}+\ln x} & \quad (2) \\[6pt]
& = \lim_{x\to 1}\frac{x\cdot\ln x}{x-1+x\cdot\ln x} & \quad (3) \\[6pt]
& = \lim_{x\to 1}\frac{1+\ln x}{1+1+\ln x} & \quad (4) \\[6pt]
& = \lim_{x\to 1}\frac{1+\ln x}{2+\ln x} \\[6pt]
& = \frac{1}{2},
\end{align}

where L'Hôpital's rule is applied when going from (1) to (2) and again when going from (3) to (4). - -L'Hôpital's rule can be used on indeterminate forms involving exponents by using logarithms to "move the exponent down". Here is an example involving the indeterminate form $0^0$:

\lim_{x\to 0^+}x^x
= \lim_{x\to 0^+}e^{\ln (x^x)}
= \lim_{x\to 0^+}e^{x\cdot\ln x}
= e^{\lim\limits_{x\to 0^+}(x\cdot\ln x)}.

It is valid to move the limit inside the exponential function because the exponential function is continuous. Now the exponent $x$ has been "moved down". The limit $\lim_{x\to 0^+}x\cdot\ln x$ is of the indeterminate form 0 · ∞, but as shown in an example above, l'Hôpital's rule may be used to determine that
$$
\lim_{x\to 0^+}x\cdot\ln x = 0.
$$

Thus
$$
\lim_{x\to 0^+}x^x = e^0 = 1.
$$

The Stolz–Cesàro theorem is a similar result involving limits of sequences, but it uses finite difference operators rather than derivatives. - -Consider the curve in the plane whose x-coordinate is given by g(t) and whose y-coordinate is given by f(t), with both functions continuous, i.e., the locus of points of the form [g(t), f(t)]. Suppose f(c) = g(c) = 0. The limit of the ratio f(t)/g(t) as t → c is the slope of the tangent to the curve at the point [g(c), f(c)] = [0,0]. The tangent to the curve at the point [g(t), f(t)] is given by [g′(t), f′(t)]. L'Hôpital's rule then states that the slope of the curve when t = c is the limit of the slope of the tangent to the curve as the curve approaches the origin, provided that this is defined. - -The proof of L'Hôpital's rule is simple in the case where f and g are continuously differentiable at the point c and where a finite limit is found after the first round of differentiation. It is not a proof of the general L'Hôpital's rule because it is stricter in its definition, requiring both differentiability and that c be a real number. Since many common functions have continuous derivatives (e.g. polynomials, sine and cosine, exponential functions), it is a special case worthy of attention. - -Suppose that f and g are continuously differentiable at a real number c, that $f(c)=g(c)=0$, and that $g'(c)\neq 0$. Then

\begin{align}
& \lim_{x\to c}\frac{f(x)}{g(x)} = \lim_{x\to c}\frac{f(x)-0}{g(x)-0} = \lim_{x\to c}\frac{f(x)-f(c)}{g(x)-g(c)} \\[6pt]
= {} & \lim_{x\to c}\frac{\left(\frac{f(x)-f(c)}{x-c}\right)}{\left(\frac{g(x)-g(c)}{x-c} \right)} = \frac{\lim\limits_{x\to c}\left(\frac{f(x)-f(c)}{x-c}\right)}{\lim\limits_{x\to c} \left(\frac{g(x)-g(c)}{x-c}\right)}= \frac{f'(c)}{g'(c)} = \lim_{x\to c}\frac{f'(x)}{g'(x)}.
\end{align}

This follows from the difference-quotient definition of the derivative. The last equality follows from the continuity of the derivatives at c. The limit in the conclusion is not indeterminate because $g'(c)\ne 0$. - -The proof of a more general version of L'Hôpital's rule is given below.
- -The following proof is due to Taylor, where a unified proof for the 0/0 and ±∞/±∞ indeterminate forms is given. Taylor notes that different proofs may be found in Lettenmeyer and Wazewski. - -Let f and g be functions satisfying the hypotheses in the General form section. Let $\mathcal{I}$ be the open interval in the hypothesis with endpoint c. Considering that $g'(x)\ne 0$ on this interval and g is continuous, $\mathcal{I}$ can be chosen smaller so that g is nonzero on $\mathcal{I}$. - -For each x in the interval, define $m(x)=\inf\frac{f'(\xi)}{g'(\xi)}$ and $M(x)=\sup\frac{f'(\xi)}{g'(\xi)}$ as $\xi$ ranges over all values between x and c. (The symbols inf and sup denote the infimum and supremum.) - -From the differentiability of f and g on $\mathcal{I}$, Cauchy's mean value theorem ensures that for any two distinct points x and y in $\mathcal{I}$ there exists a $\xi$ between x and y such that $\frac{f(x)-f(y)}{g(x)-g(y)}=\frac{f'(\xi)}{g'(\xi)}$. Consequently, $m(x)\leq \frac{f(x)-f(y)}{g(x)-g(y)} \leq M(x)$ for all choices of distinct x and y in the interval. The value g(x)−g(y) is always nonzero for distinct x and y in the interval, for if it were not, the mean value theorem would imply the existence of a p between x and y such that g'(p)=0. - -The definition of m(x) and M(x) will result in an extended real number, and so it is possible for them to take on the values ±∞. In the following two cases, m(x) and M(x) will establish bounds on the ratio f/g. - -Case 1: $\lim_{x\to c}f(x)=\lim_{x\to c}g(x)=0$ - -For any x in the interval $\mathcal{I}$, and point y between x and c,
$$
m(x)\le \frac{f(x)-f(y)}{g(x)-g(y)}=\frac{\frac{f(x)}{g(x)}-\frac{f(y)}{g(x)}}{1-\frac{g(y)}{g(x)}}\le M(x)
$$

and therefore as y approaches c, $\frac{f(y)}{g(x)}$ and $\frac{g(y)}{g(x)}$ become zero, and so
$$
m(x)\leq\frac{f(x)}{g(x)}\leq M(x).
$$

Case 2: $\lim_{x\to c}|g(x)|=\infty$ - -For every x in the interval $\mathcal{I}$, define $S_x=\{y\mid y \text{ is between } x \text{ and } c\}$. For every point y between x and c,
$$
m(x)\le \frac{f(y)-f(x)}{g(y)-g(x)}=\frac{\frac{f(y)}{g(y)}-\frac{f(x)}{g(y)}}{1-\frac{g(x)}{g(y)}} \le M(x).
$$

As y approaches c, both $\frac{f(x)}{g(y)}$ and $\frac{g(x)}{g(y)}$ become zero, and therefore
$$
m(x)\le \liminf_{y\in S_x} \frac{f(y)}{g(y)} \le \limsup_{y\in S_x} \frac{f(y)}{g(y)} \le M(x).
$$

The limit superior and limit inferior are necessary since the existence of the limit of f/g has not yet been established. - -It is also the case that
$$
\lim_{x\to c}m(x)=\lim_{x\to c}M(x)=\lim_{x\to c}\frac{f'(x)}{g'(x)}=L.
$$

To see this, first note that the limits $\lim_{x\to c} m(x)$ and $\lim_{x\to c} M(x)$ both exist, as they feature nondecreasing and nonincreasing functions of x, respectively. Consider a sequence $x_i \to c$. Then $\lim_i m(x_i) \le \lim_i \frac{f'(x_i)}{g'(x_i)} \le \lim_i M(x_i)$, as the inequality holds for each i; this yields the inequalities $\lim_{x\to c}m(x) \le \lim_{x\to c}\frac{f'(x)}{g'(x)} \le \lim_{x\to c}M(x)$. The next step is to show $\lim_{x\to c}M(x) \le \lim_{x\to c}\frac{f'(x)}{g'(x)}$. Fix a sequence of numbers $\varepsilon_i > 0$ such that $ \lim_i \varepsilon_i = 0$, and a sequence $x_i\to c $. For each i, choose $x_i < y_i < c$ such that $\frac{f'(y_i)}{g'(y_i)} + \varepsilon_i \ge \sup_{x_i < \xi < c}\frac{f'(\xi)}{g'(\xi)}$, by the definition of $\sup$.
Thus

\begin{align}
\lim_i M(x_i) &\leq \lim_i \frac{f'(y_i)}{g'(y_i)} + \varepsilon_i \\
&= \lim_i \frac{f'(y_i)}{g'(y_i)} + \lim_i \varepsilon_i \\
&= \lim_i \frac{f'(y_i)}{g'(y_i)}
\end{align}

as desired. The argument that $\lim_{x\to c}m(x) \ge \lim_{x\to c} \frac{f'(x)}{g'(x)}$ is similar. - -Moreover,
$$
\lim_{x\to c}\left(\liminf_{y\in S_x}\frac{f(y)}{g(y)}\right)=\liminf_{x\to c}\frac{f(x)}{g(x)}
$$ and $\lim_{x\to c}\left(\limsup_{y\in S_x} \frac{f(y)}{g(y)}\right)=\limsup_{x\to c}\frac{f(x)}{g(x)}. $ - -In case 1, the squeeze theorem establishes that $\lim_{x\to c}\frac{f(x)}{g(x)}$ exists and is equal to L. In case 2, the squeeze theorem again asserts that $\liminf_{x\to c}\frac{f(x)}{g(x)}=\limsup_{x\to c}\frac{f(x)}{g(x)}=L$, and so the limit $\lim_{x\to c}\frac{f(x)}{g(x)}$ exists and is equal to L. This is the result that was to be proven. - -In case 2 the assumption that f(x) diverges to infinity was not used within the proof. This means that if |g(x)| diverges to infinity as x approaches c and both f and g satisfy the hypotheses of L'Hôpital's rule, then no additional assumption is needed about the limit of f(x): It could even be the case that the limit of f(x) does not exist. In this case, L'Hopital's theorem is actually a consequence of Cesàro–Stolz. - -In the case when |g(x)| diverges to infinity as x approaches c and f(x) converges to a finite limit at c, then L'Hôpital's rule would be applicable, but not absolutely necessary, since basic limit calculus will show that the limit of f(x)/g(x) as x approaches c must be zero. - -A simple but very useful consequence of L'Hopital's rule is a well-known criterion for differentiability. It states the following: suppose that f is continuous at a, and that $f'(x)$ exists for all x in some open interval containing a, except perhaps for $x = a$. Suppose, moreover, that $\lim_{x\to a}f'(x)$ exists. Then $f'(a)$ also exists and
$$
f'(a) = \lim_{x\to a}f'(x).
$$

In particular, f is also continuous at a. - -Consider the functions $h(x) = f(x)-f(a)$ and $g(x) = x-a$. The continuity of f at a tells us that $\lim_{x\to a}h(x) = 0$. Moreover, $\lim_{x\to a}g(x) = 0$ since a polynomial function is always continuous everywhere. Applying L'Hopital's rule shows that $f'(a) := \lim_{x\to a}\frac{f(x)-f(a)}{x-a} = \lim_{x\to a}\frac{h(x)}{g(x)} = \lim_{x\to a}f'(x)$. diff --git a/wiki/wikipedia/2841.txt b/wiki/wikipedia/2841.txt deleted file mode 100644 index 3e34ae028db08f353f3eba9a9d49486c1d51d432..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2841.txt +++ /dev/null @@ -1,15 +0,0 @@ -In quantum computing, the quantum threshold theorem (or quantum fault-tolerance theorem) states that a quantum computer with a physical error rate below a certain threshold can, through application of quantum error correction schemes, suppress the logical error rate to arbitrarily low levels. This shows that quantum computers can be made fault-tolerant, as an analogue to von Neumann's threshold theorem for classical computation. This result was proven (for various error models) by the groups of Dorit Aharonov and Michael Ben-Or; Emanuel Knill, Raymond Laflamme, and Wojciech Zurek; and Alexei Kitaev independently. These results built on a paper of Peter Shor, which proved a weaker version of the threshold theorem. - -The key question that the threshold theorem resolves is whether quantum computers in practice could perform long computations without succumbing to noise.
Since a quantum computer will not be able to perform gate operations perfectly, some small constant error is inevitable; hypothetically, this could mean that quantum computers with imperfect gates can only apply a constant number of gates before the computation is destroyed by noise. - -Surprisingly, the quantum threshold theorem shows that if the error to perform each gate is a small enough constant, one can perform arbitrarily long quantum computations to arbitrarily good precision, with only some small added overhead in the number of gates. The formal statement of the threshold theorem depends on the types of error correction codes and error model being considered. Quantum Computation and Quantum Information, by Michael Nielsen and Isaac Chuang, gives the general framework for such a theorem: - -Threshold theorem for quantum computation: A quantum circuit on n qubits and containing p(n) gates may be simulated with probability of error at most ε using - -O(\log^c(p(n)/\varepsilon)p(n)) - -gates (for some constant c) on hardware whose components fail with probability at most p, provided p is below some constant threshold, $p < p_{\rm th}$, and given reasonable assumptions about the noise in the underlying hardware. - -Threshold theorems for classical computation have the same form as above, except for classical circuits instead of quantum. The proof strategy for quantum computation is similar to that of classical computation: for any particular error model (such as having each gate fail with independent probability p), use error correcting codes to build better gates out of existing gates. Though these "better gates" are larger, and so are more prone to errors within them, their error-correction properties mean that they have a lower chance of failing than the original gate (provided p is a small-enough constant). Then, one can use these better gates to recursively create even better gates, until one has gates with the desired failure probability, which can be used for the desired quantum circuit. According to quantum information theorist Scott Aaronson:
    "The entire content of the Threshold Theorem is that you're correcting errors faster than they're created. That's the whole point, and the whole non-trivial thing that the theorem shows. That's the problem it solves."
- -Current estimates put the threshold for the surface code on the order of 1%, though estimates range widely and are difficult to calculate due to the exponential difficulty of simulating large quantum systems. At a 0.1% probability of a depolarizing error, the surface code would require approximately 1,000-10,000 physical qubits per logical data qubit, though more pathological error types could change this figure drastically. diff --git a/wiki/wikipedia/2842.txt b/wiki/wikipedia/2842.txt deleted file mode 100644 index df9d763dc3415c129b638dc4eff8cda27ffce4b4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2842.txt +++ /dev/null @@ -1,8 +0,0 @@ -In the mathematical study of the differential geometry of surfaces, the Bertrand–Diguet–Puiseux theorem expresses the Gaussian curvature of a surface in terms of the circumference of a geodesic circle, or the area of a geodesic disc. The theorem is named for Joseph Bertrand, Victor Puiseux, and Charles François Diguet. - -Let p be a point on a smooth surface M. The geodesic circle of radius r centered at p is the set of all points whose geodesic distance from p is equal to r. Let C(r) denote the circumference of this circle, and A(r) denote the area of the disc contained within the circle. The Bertrand–Diguet–Puiseux theorem asserts that -$$ -K(p) = \lim_{r\to 0^+} 3\frac{2\pi r-C(r)}{\pi r^3} = \lim_{r\to 0^+} 12\frac{\pi r^2-A(r)}{\pi r^4}. -$$ - -The theorem is closely related to the Gauss–Bonnet theorem. diff --git a/wiki/wikipedia/2843.txt b/wiki/wikipedia/2843.txt deleted file mode 100644 index cc33d90ed304075b678e24918d5dec22db0b0e84..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2843.txt +++ /dev/null @@ -1,45 +0,0 @@ -In mathematics, Ky Fan's lemma (KFL) is a combinatorial lemma about labellings of triangulations. It is a generalization of Tucker's lemma. It was proved by Ky Fan in 1952. - -KFL uses the following concepts. - -* $B_n$: the closed n-dimensional ball. - -** $S_{n-1}$: its boundary sphere. - -* T: a triangulation of $B_n$. - -** T is called boundary antipodally symmetric if the subset of simplices of T which are in $S_{n-1}$ provides a triangulation of $S_{n-1}$ where if σ is a simplex then so is −σ. - -* L: a labeling of the vertices of T, which assigns to each vertex a non-zero integer: $L: V(T) \to \mathbb{Z}\setminus\{0\}$. - -** L is called boundary odd if for every vertex $v\in S_{n-1}$, $L(-v) = -L(v)$. - -* An edge of T is called a complementary edge of L if the labels of its two endpoints have the same size and opposite signs, e.g. {−2, +2}. - -* An n-dimensional simplex of T is called an alternating simplex of T if its labels have different sizes with alternating signs, e.g. {−1, +2, −3} or {+3, −5, +7}. - -Let T be a boundary-antipodally-symmetric triangulation of $B_n$ and L a boundary-odd labeling of T. - -If L has no complementary edge, then L has an odd number of n-dimensional alternating simplices. - -By definition, an n-dimensional alternating simplex must have labels with n + 1 different sizes. - -This means that, if the labeling L uses only n different sizes (i.e. $L: V(T) \to \{+1,-1,+2,-2,\ldots,+n,-n\}$), it cannot have an n-dimensional alternating simplex. - -Hence, by KFL, L must have a complementary edge. - -KFL can be proved constructively using a path-following algorithm. The algorithm starts at a certain point or edge of the triangulation, then goes from simplex to simplex according to prescribed rules, until it is not possible to proceed any more.
It can be proved that the path must end in an alternating simplex. - -The proof is by induction on n. - -The basis is $n=1$. In this case, $B_n$ is the interval $[-1,1]$ and its boundary is the set $\{-1,1\}$. The labeling L is boundary-odd, so $L(-1)=-L(+1)$. Without loss of generality, assume that $L(-1)=-1$ and $L(+1)=+1$. Start at −1 and go right. At some edge e, the labeling must change from negative to positive. Since L has no complementary edges, e must have a negative label and a positive label with a different size (e.g. −1 and +2); this means that e is a 1-dimensional alternating simplex. Moreover, if at any point the labeling changes again from positive to negative, then this change makes a second alternating simplex, and since the labeling ends positive at +1, by the same reasoning as before there must be a third alternating simplex later. Hence, the number of alternating simplices is odd. - -The following description illustrates the induction step for $n=2$. In this case $B_n$ is a disc and its boundary is a circle. The labeling L is boundary-odd, so $L(-v) = -L(v)$ for every boundary vertex v. Split the boundary circle into two semi-circles and treat each semi-circle as an interval. By the induction basis, each such interval must have an alternating simplex, e.g. an edge with labels (+1,−2). Moreover, the number of such edges on both intervals is odd. Using the boundary criterion, on the boundary we have an odd number of edges where the smaller number is positive and the larger negative, and an odd number of edges where the smaller number is negative and the larger positive. We call the former decreasing, the latter increasing. - -There are two kinds of triangles. - -* If a triangle is not alternating, it must have an even number of increasing edges and an even number of decreasing edges. - -* If a triangle is alternating, it must have one increasing edge and one decreasing edge. Counting increasing edges over all triangles, each interior increasing edge is counted twice while the boundary contributes an odd number of increasing edges, so the number of alternating triangles is odd. - -By induction, this proof can be extended to any dimension. diff --git a/wiki/wikipedia/2844.txt b/wiki/wikipedia/2844.txt deleted file mode 100644 index 14c12b47a37694c14ef4027175d54245c3c69b4f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2844.txt +++ /dev/null @@ -1,7 +0,0 @@ -The Walsh–Lebesgue theorem is a famous result from harmonic analysis proved by the American mathematician Joseph L. Walsh in 1929, using results proved by Lebesgue in 1907. The theorem states the following: - -Let K be a compact subset of the Euclidean plane ℝ2 such that the relative complement of $K$ with respect to ℝ2 is connected. Then, every real-valued continuous function on $\partial{K}$ (i.e. the boundary of K) can be approximated uniformly on $\partial{K}$ by (real-valued) harmonic polynomials in the real variables x and y. - -The Walsh–Lebesgue theorem has been generalized to Riemann surfaces and to ℝn. - -In 1974 Anthony G. O'Farrell gave a generalization of the Walsh–Lebesgue theorem by means of the 1964 Browder–Wermer theorem with related techniques. diff --git a/wiki/wikipedia/2845.txt b/wiki/wikipedia/2845.txt deleted file mode 100644 index 67f413f65e492455c5f7611cb3f57f46378dd995..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2845.txt +++ /dev/null @@ -1,118 +0,0 @@ -The Hausdorff−Young inequality is a foundational result in the mathematical field of Fourier analysis. As a statement about Fourier series, it was discovered by William Henry Young (1913) and extended by Felix Hausdorff (1923).
It is now typically understood as a rather direct corollary of the Plancherel theorem, found in 1910, in combination with the Riesz-Thorin theorem, originally discovered by Marcel Riesz in 1927. With this machinery, it readily admits several generalizations, including to multidimensional Fourier series and to the Fourier transform on the real line, Euclidean spaces, as well as more general spaces. With these extensions, it is one of the best-known results of Fourier analysis, appearing in nearly every introductory graduate-level textbook on the subject. - -The nature of the Hausdorff-Young inequality can be understood with only Riemann integration and infinite series as prerequisite. Given a continuous function f:(0,1)→ℝ, define its "Fourier coefficients" by -$$ -c_n=\int_0^1 e^{-2\pi inx}f(x)dx -$$ - -for each integer n. The Hausdorff-Young inequality says that - -\left(\sum_{n=-\infty}^\infty |c_n|^3\right)^{1/3}\leq - -\left(\int_0^{1}|f(t)|^{3/2}dt\right)^{2/3}. - -Loosely speaking, this can be interpreted as saying that the "size" of the function f, as represented by the right-hand side of the above inequality, controls the "size" of its sequence of Fourier coefficients, as represented by the left-hand side. - -However, this is only a very specific case of the general theorem. The usual formulations of the theorem are given below, with use of the machinery of Lp spaces and Lebesgue integration. - -Given a nonzero real number p, define the real number p' (the "conjugate exponent" of p) by the equation -$$ -\frac{1}{p}+\frac{1}{p'}=1. -$$ - -If p is equal to one, this equation has no solution, but it is interpreted to mean that p' is infinite, as an element of the extended real number line. Likewise, if p is infinite, as an element of the extended real number line, then this is interpreted to mean that p' is equal to one. - -The commonly understood features of the conjugate exponent are simple: - -* the conjugate exponent of a number in the range [1,2] is in the range [2,∞] - -* the conjugate exponent of a number in the range [2,∞] is in the range [1,2] - -* the conjugate exponent of 2 is 2 - -Given a function $f:(0,1)\to\mathbb{C},$ one defines its "Fourier coefficients" as a function $c:\mathbb{Z}\to\mathbb{C}$ by -$$ -c(n)=\int_0^{1} f(t)e^{-2\pi int}dt, -$$ - -although for an arbitrary function f, these integrals may not exist. Hölder's inequality shows that if f is in Lp(0,1) for some number p∈[1,∞], then each Fourier coefficient is well-defined. - -The Hausdorff-Young inequality says that, for any number p in the interval (1,2], one has -$$ -\Big(\sum_{n=-\infty}^\infty \big|c(n)\big|^{p'}\Big)^{1/p'}\leq\Big(\int_0^{1}|f(t)|^pdt\Big)^{1/p} -$$ - -for all f in Lp(0,1). Conversely, still supposing p∈(1,2], if $c:\mathbb{Z}\to\mathbb{C}$ is a mapping for which -$$ -\sum_{n=-\infty}^\infty \big|c(n)\big|^p<\infty, -$$ - -then there exists $f\in L^{p'}(0,1)$ whose Fourier coefficients are c and with -$$ -\Big(\int_0^{1}|f(t)|^{p'}dt\Big)^{1/p'}\leq\Big(\sum_{n=-\infty}^\infty \big|c(n)\big|^{p}\Big)^{1/p}. -$$ - -References. Section XII.2 in volume II of Zygmund's book - -The case of Fourier series generalizes to the multidimensional case. Given a function $f:(0,1)^k\to\mathbb{C},$ define its Fourier coefficients $c:\mathbb{Z}^k\to\mathbb{C}$ by -$$ -c(n_1,\ldots,n_k)=\int_{(0,1)^k}f(x)e^{-2\pi i(n_1x_1+\cdots+n_kx_k)}dx. 
-$$ - -As in the case of Fourier series, the assumption that f is in Lp for some value of p in [1,∞] ensures, via the Hölder inequality, the existence of the Fourier coefficients. Now, the Hausdorff-Young inequality says that if p is in the range [1,2], then -$$ -\Big(\sum_{n\in\mathbb{Z}^k}\big|c(n)\big|^{p'}\Big)^{1/p'}\leq\Big(\int_{(0,1)^k}|f(x)|^pdx\Big)^{1/p} -$$ - -for any f in Lp((0,1)k). - -References. Page 248 of Folland's book - -One defines the multidimensional Fourier transform by -$$ -\widehat{f}(\xi)=\int_{\mathbb{R}^n}e^{-2\pi i\langle x,\xi\rangle}f(x)dx. -$$ - -The Hausdorff-Young inequality, in this setting, says that if p is a number in the interval [1,2], then one has -$$ -\Big(\int_{\mathbb{R}^n}\big|\widehat{f}(\xi)\big|^{p'}d\xi\Big)^{1/p'}\leq \Big(\int_{\mathbb{R}^n}\big|f(x)\big|^pdx\Big)^{1/p} -$$ - -for any f in Lp(ℝn). - -References. Page 114 of Grafakos' book, page 165 of Hörmander's book, page 11 of Reed and Simon's book, or section 5.1 of Stein and Weiss' book. Hörmander and Reed-Simon's books use conventions for the definition of the Fourier transform which are different from those of this article. - -The above results can be rephrased succinctly as: - -* the map which sends a function (0,1)k→ℂ to its Fourier coefficients defines a bounded complex-linear map Lp((0,1)k,dx)→Lp/(p-1)(ℤk,dn) for any number p in the range [1,2]. Here dx denotes Lebesgue measure and dn denotes counting measure. Furthermore, the operator norm of this linear map is less than or equal to one. - -* the map which sends a function ℝn→ℂ to its Fourier transform defines a bounded complex-linear map Lp(ℝn)→Lp/(p-1)(ℝn) for any number p in the range [1,2]. Furthermore, the operator norm of this linear map is less than or equal to one. - -Here we use the language of normed vector spaces and bounded linear maps, as is convenient for application of the Riesz-Thorin theorem. There are two ingredients in the proof: - -* according to the Plancherel theorem, the Fourier series (or Fourier transform) defines a bounded linear map L2→L2. - -* using only the single equality $|e^{-2\pi i na}|=1$ for any real numbers n and a, one can see directly that the Fourier series (or Fourier transform) defines a bounded linear map L1→L∞. - -The operator norm of either linear map is less than or equal to one, as one can directly verify. One can then apply the Riesz–Thorin theorem. - -Equality is achieved in the Hausdorff-Young inequality for (multidimensional) Fourier series by taking -$$ -f(x)=e^{2\pi i(m_1x_1+\cdots+m_kx_k)} -$$ - -for any particular choice of integers $m_1,\ldots,m_k.$ In the above terminology of "normed vector spaces", this asserts that the operator norm of the corresponding bounded linear map is exactly equal to one. - -Since the Fourier transform is closely analogous to the Fourier series, and the above Hausdorff-Young inequality for the Fourier transform is proved by exactly the same means as the Hausdorff-Young inequality for Fourier series, it may be surprising that equality is not achieved for the above Hausdorff-Young inequality for the Fourier transform, aside from the special case $p=2$ for which the Plancherel theorem asserts that the Hausdorff-Young inequality is an exact equality.
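The inequality itself is easy to probe numerically. The sketch below is a minimal discretized check of ours with numpy for the Fourier-series case, assuming the exponents p = 3/2, p' = 3 of the introductory example and an arbitrary test function f(x) = x(1−x); truncating the coefficient sum only makes the left-hand side smaller, so the comparison is meaningful:

```
import numpy as np

# Check (sum |c_n|^3)^(1/3) <= (int_0^1 |f|^(3/2) dx)^(2/3) for f(x) = x(1-x).
N = 4096
x = (np.arange(N) + 0.5) / N     # midpoint grid on (0, 1)
f = x * (1 - x)

# Riemann-sum approximation of c_n = int_0^1 f(x) e^{-2 pi i n x} dx; only
# |c_n| is needed, so the unimodular phase from the half-step offset drops out.
c = np.abs(np.fft.fft(f)) / N

p, pprime = 1.5, 3.0
lhs = (c ** pprime).sum() ** (1 / pprime)   # left-hand side, all N aliased modes
rhs = (f ** p).mean() ** (1 / p)            # right-hand side
print(lhs, rhs, lhs <= rhs)                 # -> roughly 0.17, 0.176, True
```

For a single exponential f(x) = e^{2πimx} the two sides agree, matching the equality case for Fourier series discussed above; for generic f, as here, the inequality is strict.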
- -In fact, Beckner, following a special case appearing in Babenko, showed that if p is a number in the interval [1,2], then -$$ -\Big(\int_{\mathbb{R}^n}\big|\widehat{f}(\xi)\big|^{p'}d\xi\Big)^{1/p'}\leq \Big(\frac{p^{1/p}}{(p')^{1/p'}}\Big)^{n/2}\Big(\int_{\mathbb{R}^n}\big|f(x)\big|^pdx\Big)^{1/p} -$$ - -for any f in Lp(ℝn). This is an improvement of the standard Hausdorff-Young inequality, as the conditions p ≤ 2 ≤ p' ensure that the number appearing on the right-hand side of this "Babenko–Beckner inequality" is less than or equal to 1. Moreover, this number cannot be replaced by a smaller one, since equality is achieved in the case of Gaussian functions. In this sense, Beckner's paper gives an optimal ("sharp") version of the Hausdorff-Young inequality. In the language of normed vector spaces, it says that the operator norm of the bounded linear map Lp(ℝn)→Lp/(p-1)(ℝn), as defined by the Fourier transform, is exactly equal to -$$ -\Big(\frac{p^{1/p}}{(p')^{1/p'}}\Big)^{n/2}. -$$ - -The condition p∈[1,2] is essential. If p>2, then the fact that a function belongs to $L^p$ does not give any additional information on the order of growth of its Fourier coefficients beyond the fact that the coefficient sequence is in $\ell^2$. diff --git a/wiki/wikipedia/2846.txt b/wiki/wikipedia/2846.txt deleted file mode 100644 index 0828fd361d7f0b63c1536e605b1951ed7dfdf687..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2846.txt +++ /dev/null @@ -1,97 +0,0 @@ -In Euclidean and projective geometry, just as two (distinct) points determine a line (a degree-1 plane curve), five points determine a conic (a degree-2 plane curve). There are additional subtleties for conics that do not exist for lines, and thus the statement and its proof for conics are both more technical than for lines. - -Formally, given any five points in the plane in general linear position, meaning no three collinear, there is a unique conic passing through them, which will be non-degenerate; this is true over both the Euclidean plane and any pappian projective plane. Indeed, given any five points there is a conic passing through them, but if three of the points are collinear the conic will be degenerate (reducible, because it contains a line), and may not be unique; see the further discussion below. - -This result can be proven in numerous different ways; the dimension counting argument is most direct, and generalizes to higher degree, while other proofs are special to conics. - -Intuitively, passing through five points in general linear position specifies five independent linear constraints on the (projective) linear space of conics, and hence specifies a unique conic, though this brief statement ignores subtleties.
- -More precisely, this is seen as follows: - -* conics correspond to points in the five-dimensional projective space $\mathbf{P}^5;$ - -* requiring a conic to pass through a point imposes a linear condition on the coordinates: for a fixed $(x,y),$ the equation $Ax^2 + Bxy + Cy^2 +Dx + Ey + F = 0$ is a linear equation in $(A,B,C,D,E,F);$ - -* by dimension counting, five constraints (that the curve passes through five points) are necessary to specify a conic, as each constraint cuts the dimension of possibilities by 1, and one starts with 5 dimensions; - -* in 5 dimensions, the intersection of 5 (independent) hyperplanes is a single point (formally, by Bézout's theorem); - -* general linear position of the points means that the constraints are independent, and thus do specify a unique conic; - -* the resulting conic is non-degenerate because it is a curve (since it has more than 1 point), and does not contain a line (else it would split as two lines, at least one of which must contain 3 of the 5 points, by the pigeonhole principle), so it is irreducible. - -The two subtleties in the above analysis are that the resulting point is a quadratic equation (not a linear equation), and that the constraints are independent. The first is simple: if A, B, and C all vanish, then the equation $Dx + Ey + F = 0$ defines a line, and any 3 points on this (indeed any number of points) lie on a line – thus general linear position ensures a conic. The second, that the constraints are independent, is significantly subtler: it corresponds to the fact that given five points in general linear position in the plane, their images in $\mathbf{P}^5$ under the Veronese map are in general linear position, which is true because the Veronese map is biregular: i.e., if the image of five points satisfy a relation, then the relation can be pulled back and the original points must also satisfy a relation. The Veronese map has coordinates $[x^2 : xy : y^2 : xz : yz : z^2 ],$ and the target $\mathbf{P}^5$ is dual to the $[A : B : C : D : E : F]$ $\mathbf{P}^5$ of conics. The Veronese map corresponds to "evaluation of a conic at a point", and the statement about independence of constraints is exactly a geometric statement about this map. - -That five points determine a conic can be proven by synthetic geometry-i.e., in terms of lines and points in the plane-in addition to the analytic (algebraic) proof given above. Such a proof can be given using a theorem of Jakob Steiner, which states: - -Given a projective transformation f, between the pencil of lines passing through a point X and the pencil of lines passing through a point Y, the set C of intersection points between a line x and its image $f(x)$ forms a conic. - -Note that X and Y are on this conic by considering the preimage and image of the line XY (which is respectively a line through X and a line through Y). - -This can be shown by taking the points X and Y to the standard points $[1:0:0]$ and $[0:1:0]$ by a projective transformation, in which case the pencils of lines correspond to the horizontal and vertical lines in the plane, and the intersections of corresponding lines to the graph of a function, which (must be shown) is a hyperbola, hence a conic, hence the original curve C is a conic. 
- -Now given five points X, Y, A, B, C, the three lines $XA, XB, XC$ can be taken to the three lines $YA, YB, YC$ by a unique projective transform, since projective transforms are simply 3-transitive on lines (they are simply 3-transitive on points, hence by projective duality they are 3-transitive on lines). Under this map X maps to Y, since these are the unique intersection points of these lines, and thus satisfy the hypothesis of Steiner's theorem. The resulting conic thus contains all five points, and is the unique such conic, as desired. - -Given five points, one can construct the conic containing them in various ways. - -Analytically, given the coordinates $(x_i,y_i)_{i=1,2,3,4,5}$ of the five points, the equation for the conic can be found by linear algebra, by writing and solving the five equations in the coefficients, substituting the variables with the values of the coordinates: five equations, six unknowns, but homogeneous so scaling removes one dimension; concretely, setting one of the coefficients to 1 accomplishes this. - -This can be achieved quite directly as the following determinantal equation: -$$ -\det \begin{bmatrix} x^2 & xy & y^2 & x & y & 1 \\ x_1^2 & x_1y_1 & y_1^2 & x_1 & y_1 & 1 \\ x_2^2 & x_2y_2 & y_2^2 & x_2 & y_2 & 1 \\ x_3^2 & x_3y_3 & y_3^2 & x_3 & y_3 & 1 \\ x_4^2 & x_4y_4 & y_4^2 & x_4 & y_4 & 1 \\ x_5^2 & x_5y_5 & y_5^2 & x_5 & y_5 & 1 \end{bmatrix} = 0 -$$ - -This matrix has variables in its first row and numbers in all other rows, so the determinant is visibly a linear combination of the six monomials of degree at most 2. Also, the resulting polynomial clearly vanishes at the five input points (when $(x,y) = (x_i,y_i)$), as the matrix then has a repeated row. - -Synthetically, the conic can be constructed by the Braikenridge–Maclaurin construction, by applying the Braikenridge–Maclaurin theorem, which is the converse of Pascal's theorem. Pascal's theorem states that given 6 points on a conic (a hexagon), the lines defined by opposite sides intersect in three collinear points. This can be reversed to construct the possible locations for a 6th point, given 5 existing ones. - -The natural generalization is to ask for what value of k a configuration of k points (in general position) in n-space determines a variety of degree d and dimension m, which is a fundamental question in enumerative geometry. - -A simple case of this is for a hypersurface (a codimension 1 subvariety, the zeros of a single polynomial, the case $m = n-1$), of which plane curves are an example. - -In the case of a hypersurface, the answer is given in terms of the multiset coefficient, more familiarly the binomial coefficient, or more elegantly the rising factorial, as: -$$ -k = \left(\!\!{n + 1 \choose d}\!\!\right) - 1 = {n+d \choose d} - 1 = \frac{1}{n!}(d+1)^{(n)} - 1. -$$ - -This is via the analogous analysis of the Veronese map: k points in general position impose k independent linear conditions on a variety (because the Veronese map is biregular), and the number of monomials of degree d in $n+1$ variables (n-dimensional projective space has $n+1$ homogeneous coordinates) is $\textstyle{\left(\!\!{n + 1 \choose d}\!\!\right)},$ from which 1 is subtracted because of projectivization: multiplying a polynomial by a constant does not change its zeros.
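Both the determinantal construction and the point counts above are easy to verify directly. The following sketch uses sympy; the five sample points are an arbitrary general-position choice of ours:

```
from sympy import symbols, Matrix, binomial, expand

x, y = symbols('x y')
pts = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)]      # no three collinear

# First row: the six monomials; remaining rows: monomials at the five points.
row = lambda u, v: [u**2, u*v, v**2, u, v, 1]
conic = expand(Matrix([row(x, y)] + [row(*p) for p in pts]).det())
print(conic)                                         # the unique conic, up to scale

# It vanishes at each input point, since the matrix then has a repeated row:
assert all(conic.subs({x: u, y: v}) == 0 for u, v in pts)

# Point counts binomial(n + d, d) - 1 for plane curves (n = 2):
print([binomial(2 + d, d) - 1 for d in range(5)])    # -> [0, 2, 5, 9, 14]
```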
- -In the above formula, the number of points k is a polynomial in d of degree n, with leading coefficient $1/n!$. - -In the case of plane curves, where $n=2,$ the formula becomes: -$$ -\textstyle{\frac{1}{2}}(d+1)(d+2) - 1 = \textstyle{\frac{1}{2}}(d^2 + 3d) -$$ - -whose values for $d=0,1,2,3,4$ are $0,2,5,9,14$ – there are no curves of degree 0 (a single point has codimension 2, so it is not a curve), 2 points determine a line, 5 points determine a conic, 9 points determine a cubic, 14 points determine a quartic, and so forth. - -While five points determine a conic, sets of six or more points on a conic are not in general position, that is, they are constrained, as is demonstrated in Pascal's theorem. - -Similarly, while nine points determine a cubic, if the nine points lie on more than one cubic (i.e., they are the intersection of two cubics), then they are not in general position, and indeed satisfy an additional constraint, as stated in the Cayley–Bacharach theorem. - -Four points do not determine a conic, but rather a pencil, the 1-dimensional linear system of conics which all pass through the four points (formally, have the four points as base locus). Similarly, three points determine a 2-dimensional linear system (net), two points determine a 3-dimensional linear system (web), one point determines a 4-dimensional linear system, and zero points place no constraints on the 5-dimensional linear system of all conics. - -As is well known, three non-collinear points determine a circle in Euclidean geometry and two distinct points determine a pencil of circles such as the Apollonian circles. These results seem to run counter to the general result, since circles are special cases of conics. However, in a pappian projective plane a conic is a circle only if it passes through two specific points on the line at infinity, so a circle is determined by five non-collinear points, three in the affine plane and these two special points. Similar considerations explain the smaller-than-expected number of points needed to define pencils of circles. - -Instead of passing through points, a different condition on a curve is being tangent to a given line. Being tangent to five given lines also determines a conic, by projective duality, but from the algebraic point of view tangency to a line is a quadratic constraint, so naive dimension counting yields $2^5 = 32$ conics tangent to five given lines, of which 31 must be ascribed to degenerate conics, as described in fudge factors in enumerative geometry; formalizing this intuition requires significant further development to justify. - -Another classic problem in enumerative geometry, of similar vintage to conics, is the Problem of Apollonius: a circle tangent to three given circles in general determines eight circles, as each tangency is a quadratic condition and $2^3 = 8$. As a question in real geometry, a full analysis involves many special cases, and the actual number of circles may be any number between 0 and 8, except for 7. diff --git a/wiki/wikipedia/2847.txt b/wiki/wikipedia/2847.txt deleted file mode 100644 index dbbc3798e95e623f18b572930cf06fba10e40790..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2847.txt +++ /dev/null @@ -1,23 +0,0 @@ -A thrackle is an embedding of a graph in the plane, such that each edge is a Jordan arc and every pair of edges meets exactly once. Edges may either meet at a common endpoint, or, if they have no endpoints in common, at a point in their interiors.
In the latter case, the crossing must be transverse. - -A linear thrackle is a thrackle drawn in such a way that its edges are straight line segments. Every linear thrackle has at most as many edges as vertices, a fact that was observed by Paul Erdős. Erdős observed that, if a vertex v is connected to three or more edges vw, vx, and vy in a linear thrackle, then at least one of those edges lies on a line that separates two other edges; without loss of generality assume that vw is such an edge, with x and y lying in opposite closed halfspaces bounded by line vw. Then, w must have degree one, because no other edge than vw can touch both vx and vy. Removing w from the thrackle produces a smaller thrackle, without changing the difference between the numbers of edges and vertices. On the other hand, if every vertex has at most two neighbors, then by the handshaking lemma the number of edges is at most the number of vertices. Based on Erdős' proof, one can infer that every linear thrackle is a pseudoforest. Every cycle of odd length may be arranged to form a linear thrackle, but this is not possible for an even-length cycle, because if one edge of the cycle is chosen arbitrarily then the other cycle vertices must lie alternatingly on opposite sides of the line through this edge. - -Micha Perles provided another simple proof that linear thrackles have at most n edges, based on the fact that in a linear thrackle every edge has an endpoint at which the edges span an angle of at most 180°, and for which it is the most clockwise edge within this span. For, if not, there would be two edges, incident to opposite endpoints of the edge and lying on opposite sides of the line through the edge, which could not cross each other. But each vertex can only have this property with respect to a single edge, so the number of edges is at most equal to the number of vertices. - -John H. Conway conjectured that, in any thrackle, the number of edges is at most equal to the number of vertices. Conway himself used the terminology paths and spots (for edges and vertices respectively), so Conway's thrackle conjecture was originally stated - -in the form every thrackle has at least as many spots as paths. Conway offered a $1000 prize for proving or disproving this conjecture, as part of a set of prize problems also including Conway's 99-graph problem, the minimum spacing of Danzer sets, and the winner of Sylver coinage after the move 16. - -Equivalently, the thrackle conjecture may be stated as every thrackle is a pseudoforest. More specifically, if the thrackle conjecture is true, the thrackles may be exactly characterized by a result of Woodall: they are the pseudoforests in which there is no cycle of length four and at most one odd cycle. - -It has been proven that every cycle graph other than C4 has a thrackle embedding, which shows that the conjecture is sharp. That is, there are thrackles having the same number of spots as paths. At the other extreme, the worst-case scenario is that the number of spots is twice the number of paths; this is also attainable. - -The thrackle conjecture is known to be true for thrackles drawn in such a way that every edge is an x-monotone curve, crossed at most once by every vertical line. - -Lovász proved that every bipartite thrackle is a planar graph, although not drawn in a planar way. Moreover, the method used to prove the latter result yields for any ε > 0 a finite algorithm that either - -improves the bound to (1 + ε)n or disproves the conjecture. 
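As an aside, the odd-cycle construction mentioned earlier is easy to verify computationally. The following self-contained sketch of ours (plain orientation tests, not code from any of the cited papers) confirms that the pentagram, i.e. the straight-line drawing of C5 obtained by joining every second vertex of a regular pentagon, is a linear thrackle:

```
import math
from itertools import combinations

# Pentagram: vertices of a regular pentagon, straight edges joining every
# second vertex, giving a straight-line drawing of the cycle C5.
P = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5)) for k in range(5)]
edges = [(k, (k + 2) % 5) for k in range(5)]

def orient(a, b, c):
    # sign of the cross product (b - a) x (c - a)
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def crosses(s, t):
    # proper (transversal) crossing of the open segments s and t
    a, b, c, d = P[s[0]], P[s[1]], P[t[0]], P[t[1]]
    return orient(a, b, c) * orient(a, b, d) < 0 and orient(c, d, a) * orient(c, d, b) < 0

for s, t in combinations(edges, 2):
    assert set(s) & set(t) or crosses(s, t)   # shared endpoint, or one crossing
print("C5 drawn as a pentagram is a linear thrackle: 5 vertices, 5 edges")
```

Here the number of spots equals the number of paths, matching the sharpness discussion above.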
The current record is a bound of 1.3984n, due to Fulek. - -If the conjecture is false, a minimal counterexample would have the form of two even cycles sharing a vertex. Therefore, to prove the conjecture, it would suffice to prove that graphs of this type cannot be drawn as thrackles. diff --git a/wiki/wikipedia/2848.txt b/wiki/wikipedia/2848.txt deleted file mode 100644 index 855f18c2124db33be69bb36d51ee78d10b74dec7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2848.txt +++ /dev/null @@ -1,158 +0,0 @@ -In the mathematical subject of group theory, the Grushko theorem or the Grushko–Neumann theorem is a theorem stating that the rank (that is, the smallest cardinality of a generating set) of a free product of two groups is equal to the sum of the ranks of the two free factors. The theorem was first obtained in a 1940 article of Grushko and then, independently, in a 1943 article of Neumann. - -Let A and B be finitely generated groups and let A∗B be the free product of A and B. Then - -rank(A∗B) = rank(A) + rank(B). - -It is obvious that rank(A∗B) ≤ rank(A) + rank(B) since if X is a finite generating set of A and Y is a finite generating set of B then X∪Y is a generating set for A∗B and |X∪Y| ≤ |X| + |Y|. The opposite inequality, rank(A∗B) ≥ rank(A) + rank(B), requires proof. - -Grushko, but not Neumann, proved a more precise version of Grushko's theorem in terms of Nielsen equivalence. It states that if M = (g1, g2, ..., gn) is an n-tuple of elements of G = A∗B such that M generates G, that is, ⟨g1, g2, ..., gn⟩ = G, then M is Nielsen equivalent in G to an n-tuple of the form - -M = (a1, ..., ak, b1, ..., bn-k) where {a1, ..., ak}⊆A is a generating set for A and where {b1, ..., bn-k}⊆B is a generating set for B. In particular, rank(A) ≤ k, rank(B) ≤ n - k and rank(A) + rank(B) ≤ k + (n - k) = n. If one takes M to be the minimal generating tuple for G, that is, with n = rank(G), this implies that rank(A) + rank(B) ≤ rank(G). Since the opposite inequality, rank(G) ≤ rank(A) + rank(B), is obvious, it follows that rank(G)=rank(A) + rank(B), as required. - -After the original proofs of Grushko (1940) and Neumann (1943), there were many subsequent alternative proofs, simplifications and generalizations of Grushko's theorem. A close version of Grushko's original proof is given in the 1955 book of Kurosh. - -Like the original proofs, Lyndon's proof (1965) relied on length-functions considerations but with substantial simplifications. A 1965 paper of Stallings gave a greatly simplified topological proof of Grushko's theorem. - -A 1970 paper of Zieschang gave a Nielsen equivalence version of Grushko's theorem (stated above) and provided some generalizations of Grushko's theorem for amalgamated free products. Scott (1974) gave another topological proof of Grushko's theorem, inspired by the methods of 3-manifold topology. Imrich (1984) gave a version of Grushko's theorem for free products with infinitely many factors. - -A 1976 paper of Chiswell gave a relatively straightforward proof of Grushko's theorem, modelled on Stallings' 1965 proof, that used the techniques of Bass–Serre theory. The argument directly inspired the machinery of foldings for group actions on trees and for graphs of groups, and Dicks' even more straightforward proof of Grushko's theorem. - -Grushko's theorem is, in a sense, a starting point in Dunwoody's theory of accessibility for finitely generated and finitely presented groups.
Since the ranks of the free factors are smaller than the rank of a free product, Grushko's theorem implies that the process of iterated splitting of a finitely generated group G as a free product must terminate in a finite number of steps (more precisely, in at most rank(G) steps). There is a natural similar question for iterating splittings of finitely generated groups over finite subgroups. Dunwoody proved that such a process must always terminate if a group G is finitely presented but may go on forever if G is finitely generated but not finitely presented. - -An algebraic proof of a substantial generalization of Grushko's theorem using the machinery of groupoids was given by Higgins (1966). Higgins' theorem starts with groups G and B with free decompositions G = ∗i Gi, B = ∗i Bi and f : G → B a morphism such that f(Gi) = Bi for all i. Let H be a subgroup of G such that f(H) = B. Then H has a decomposition H = ∗i Hi such that f(Hi) = Bi for all i. Full details of the proof and applications may also be found in - -. - -A useful consequence of the original Grushko theorem is the so-called Grushko decomposition theorem. It asserts that any nontrivial finitely generated group G can be decomposed as a free product - -G = A1∗A2∗...∗Ar∗Fs, where s ≥ 0, r ≥ 0, - -where each of the groups Ai is nontrivial, freely indecomposable (that is, it cannot be decomposed as a free product) and not infinite cyclic, and where Fs is a free group of rank s; - -moreover, for a given G, the groups A1, ..., Ar are unique up to a permutation of their conjugacy classes in G (and, in particular, the sequence of isomorphism types of these groups is unique up to a permutation) and the numbers s and r are unique as well. - -More precisely, if G = B1∗...∗Bk∗Ft is another such decomposition then k = r, s = t, and there exists a permutation σ∈Sr such that for each i=1,...,r the subgroups Ai and Bσ(i) are conjugate in G. - -The existence of the above decomposition, called the Grushko decomposition of G, is an immediate corollary of the original Grushko theorem, while the uniqueness statement requires additional arguments (see, for example). - -Algorithmically computing the Grushko decomposition for specific classes of groups is a difficult problem which primarily requires being able to determine if a given group is freely decomposable. Positive results are available for some classes of groups such as torsion-free word-hyperbolic groups, certain classes of relatively hyperbolic groups, fundamental groups of finite graphs of finitely generated free groups and others. - -Grushko decomposition theorem is a group-theoretic analog of the Kneser prime decomposition theorem for 3-manifolds which says that a closed 3-manifold can be uniquely decomposed as a connected sum of irreducible 3-manifolds. - -The following is a sketch of the proof of Grushko's theorem based on the use of foldings techniques for groups acting on trees (see for complete proofs using this argument). - -Let S={g1,....,gn} be a finite generating set for G=A∗B of size |S|=n=rank(G). Realize G as the fundamental group of a graph of groups Y which is a single non-loop edge with vertex groups A and B and with the trivial edge group. Let $\tilde{\mathbf Y}$ be the Bass–Serre covering tree for Y. Let F=F(x1,....,xn) be the free group with free basis x1,....,xn and let φ0:F → G be the homomorphism such that φ0(xi)=gi for i=1,...,n. Realize F as the fundamental group of a graph Z0 which is the wedge of n circles that correspond to the elements x1,....,xn. 
We also think of Z0 as a graph of groups with the underlying graph Z0 and the trivial vertex and edge groups. Then the universal cover $\tilde Z_0$ of Z0 and the Bass–Serre covering tree for Z0 coincide. Consider a φ0-equivariant map $r_0:\tilde Z_0\to \tilde{\mathbf Y}$ so that it sends vertices to vertices and edges to edge-paths. This map is non-injective and, since both the source and the target of the map are trees, this map "folds" some edge-pairs in the source. The graph of groups Z0 serves as an initial approximation for Y. - -We now start performing a sequence of "folding moves" on Z0 (and on its Bass-Serre covering tree) to construct a sequence of graphs of groups Z0, Z1, Z2, ...., that form better and better approximations for Y. Each of the graphs of groups Zj has trivial edge groups and comes with the following additional structure: for each nontrivial vertex group of it there assigned a finite generating set of that vertex group. The complexity c(Zj) of Zj is the sum of the sizes of the generating sets of its vertex groups and the rank of the free group π1(Zj). For the initial approximation graph we have c(Z0)=n. - -The folding moves that take Zj to Zj+1 can be of one of two types: - -*folds that identify two edges of the underlying graph with a common initial vertex but distinct end-vertices into a single edge; when such a fold is performed, the generating sets of the vertex groups and the terminal edges are "joined" together into a generating set of the new vertex group; the rank of the fundamental group of the underlying graph does not change under such a move. - -*folds that identify two edges, that already had common initial vertices and common terminal vertices, into a single edge; such a move decreases the rank of the fundamental group of the underlying graph by 1 and an element that corresponded to the loop in the graph that is being collapsed is "added" to the generating set of one of the vertex groups. - -One sees that the folding moves do not increase complexity but they do decrease the number of edges in Zj. Therefore, the folding process must terminate in a finite number of steps with a graph of groups Zk that cannot be folded any more. It follows from the basic Bass–Serre theory considerations that Zk must in fact be equal to the edge of groups Y and that Zk comes equipped with finite generating sets for the vertex groups A and B. The sum of the sizes of these generating sets is the complexity of Zk which is therefore less than or equal to c(Z0)=n. This implies that the sum of the ranks of the vertex groups A and B is at most n, that is - -rank(A)+rank(B)≤rank(G), as required. - -Stallings' proof of Grushko Theorem follows from the following lemma. - -Let F be finitely generated free group, with n generators. Let G1 and G2 be two finitely presented groups. Suppose there exists a surjective homomorphism $\phi:F\rightarrow G_1\ast G_2$, then there exists two subgroups F1 and F2 of F with $\phi(F_1)=G_1$ and $\phi(F_2)=G_2$ such that $F=F_1\ast F_2$. - -Proof: - -We give the proof assuming that F has no generator which is mapped to the identity of $G_1\ast G_2$, for if there are such generators, they may be added to any of $F_1$ or $F_2$. - -The following general results are used in the proof. - -1. There is a one or two dimensional CW complex, Z with fundamental group F. By Van Kampen theorem, the wedge of n circles is one such space. - -2. 
There exists a two-complex $X=X_1\cup X_2$ where $\{p\}=X_1\cap X_2$ is a point on a one-cell of X, such that $X_1$ and $X_2$ are complexes with fundamental groups G1 and G2 respectively. Note that by the Van Kampen theorem, this implies that the fundamental group of X is $G_1\ast G_2$. - -3. There exists a map $f:Z\rightarrow X$ such that the induced map $f_\ast$ on the fundamental groups is the same as $\phi$. - -For the sake of convenience, let us denote $f^{-1}(X_1)=:Z_1$ and $f^{-1}(X_2)=:Z_2$. - -Since no generator of F maps to the identity, the set $Z_1\cap Z_2$ has no loops, for if it did, these would correspond to circles of Z which map to $p\in X$, which in turn correspond to generators of F which go to the identity. So, the components of $Z_1\cap Z_2$ are contractible. - -In the case where $Z_1\cap Z_2$ has only one component, by Van Kampen's theorem, we are done, as in that case $F=\Pi_1(Z_1)\ast\Pi_1(Z_2)$. - -The general proof follows by reducing Z to a space homotopically equivalent to it, but with fewer components in $Z_1\cap Z_2$, and thus by induction on the components of $Z_1\cap Z_2$. - -Such a reduction of Z is done by attaching discs along binding ties. - -We call a map $\gamma :[0,1]\rightarrow Z$ a binding tie if it satisfies the following properties: - -1. It is monochromatic, i.e. $\gamma([0,1])\subseteq Z_1$ or $\gamma([0,1])\subseteq Z_2$. - -2. It is a tie, i.e. $\gamma(0)$ and $\gamma(1)$ lie in different components of $Z_1\cap Z_2$. - -3. It is null, i.e. $f \circ \gamma([0,1])$ is null homotopic in X. - -Let us assume that such a binding tie $\gamma$ exists. - -Consider the map $g:[0,1]\rightarrow D^2$ given by $g(t)= e^{it}$. This map is a homeomorphism onto its image. Define the space $Z'$ as -$$ -Z'= Z \coprod D^2/ \sim -$$ - -where -$$ -x\sim y \text{ iff } \begin{cases} x=y, \mbox{ or }\\ x=\gamma (t) \text{ and } y= g(t) \text{ for some } t\in [0,1], \mbox{ or }\\ x=g (t) \text{ and } y= \gamma (t) \text{ for some } t\in [0,1]. \end{cases} -$$ - -Note that the space Z' deformation retracts to Z. - -We first extend f to a function $f':Z\coprod \partial D^2/\sim\ \rightarrow X$ as -$$ -f'(x) = \begin{cases}f(x),\ x\in Z\\ p \text{ otherwise.}\end{cases} -$$ - -Since $f(\gamma)$ is null homotopic, $f'$ further extends to the interior of the disc, and therefore to $Z'$. - -Let $Z_i' = f'^{-1}(X_i)$, i = 1,2. - -As $\gamma(0)$ and $\gamma(1)$ lie in different components of $Z_1\cap Z_2$, $Z_1' \cap Z_2'$ has one less component than $Z_1\cap Z_2$. - -The binding tie is constructed in two steps. - -Step 1: Constructing a null tie: - -Consider a map $\gamma' :[0,1]\rightarrow Z$ with $\gamma' (0)$ and $\gamma' (1)$ in different components of $Z_1\cap Z_2$. Since $f_\ast$ is surjective, there exists a loop $\lambda$ based at $\gamma'(1)$ such that $f(\gamma')$ and $f(\lambda)$ are homotopically equivalent in X. - -If we define a curve $\gamma :[0,1]\rightarrow Z$ as $\gamma(t)= (\gamma'\ast\lambda)(t)$ for all $t\in [0,1]$, then $\gamma$ is a null tie. - -Step 2: Making the null tie monochromatic: - -The tie $\gamma$ may be written as $\gamma_1\ast \gamma_2\ast \cdots \ast\gamma_m$ where each $\gamma_i$ is a curve in $Z_1$ or $Z_2$ such that if $\gamma_i$ is in $Z_1$, then $\gamma_{i+1}$ is in $Z_2$ and vice versa. This also implies that $f(\gamma_i)$ is a loop based at p in X. So, -$$ -[e]=[f(\gamma)]=[f(\gamma_1)]\ast\cdots\ast [f(\gamma_m)]. -$$ - -Hence, $[f(\gamma_j)]=[e]$ for some j.
- -If this $\!\gamma_j$ is a tie, then we have a monochromatic, null tie. - -If $\!\gamma_j$ is not a tie, then the end points of $\!\gamma_j$ are in the same component of $Z_1\cap Z_2$. In this case, we replace $\!\gamma_j$ by a path in $Z_1\cap Z_2$, say $\!\gamma_j'$. This path may be appended to $\!\gamma_{j-1}$ and we get a new null tie -$$ -\gamma = \gamma_1\ast \cdots \ast \gamma_{j-1}'\ast\gamma_{j+1} \cdots \gamma_m -$$, where $\!\gamma_{j-1}' = \gamma_{j-1}\ast\gamma_j'$. - -Thus, by induction on m, we prove the existence of a binding tie. - -Suppose that $ G = A*B$ is generated by $\{g_1, g_2,\ldots, g_n\}$. Let $F$ be the free group with $n$-generators, viz. $\{f_1, f_2,\ldots, f_n\}$. Consider the homomorphism $h:F\rightarrow G$ given by $h(f_i) = g_i$, where $i=1,\ldots, n$. - -By the lemma, there exists free groups $F_1$ and $F_2$ with $ F=F_1\ast F_2$ such that $h(F_1)=A$ and $h(F_2)=B$. Therefore, $\text{Rank }(A) \leq \text{Rank }(F_1)$ and $\text{Rank }(B) \leq \text{Rank }(F_2)$. - -Therefore, $\text{Rank }(A) + \text{Rank }(B)\leq\text{Rank }(F_1) + \text{Rank }(F_2) = \text{Rank }(F) = \text{Rank } (A\ast B).$ diff --git a/wiki/wikipedia/2849.txt b/wiki/wikipedia/2849.txt deleted file mode 100644 index 7e11c14ff7de0c430c7f15138ba9ddbb1a88363b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2849.txt +++ /dev/null @@ -1,49 +0,0 @@ -In mathematics, the fiber bundle construction theorem is a theorem which constructs a fiber bundle from a given base space, fiber and a suitable set of transition functions. The theorem also gives conditions under which two such bundles are isomorphic. The theorem is important in the associated bundle construction where one starts with a given bundle and surgically replaces the fiber with a new space while keeping all other data the same. - -Let X and F be topological spaces and let G be a topological group with a continuous left action on F. Given an open cover {Ui} of X and a set of continuous functions -$$ -t_{ij} : U_i \cap U_j \to G -$$ - -defined on each nonempty overlap, such that the cocycle condition -$$ -t_{ik}(x) = t_{ij}(x)t_{jk}(x) \qquad \forall x \in U_i \cap U_j \cap U_k -$$ - -holds, there exists a fiber bundle E → X with fiber F and structure group G that is trivializable over {Ui} with transition functions tij. - -Let E′ be another fiber bundle with the same base space, fiber, structure group, and trivializing neighborhoods, but transition functions t′ij. If the action of G on F is faithful, then E′ and E are isomorphic if and only if there exist functions -$$ -t_i : U_i \to G -$$ - -such that -$$ -t'_{ij}(x) = t_i(x)^{-1}t_{ij}(x)t_j(x) \qquad \forall x \in U_i \cap U_j. -$$ - -Taking ti to be constant functions to the identity in G, we see that two fiber bundles with the same base, fiber, structure group, trivializing neighborhoods, and transition functions are isomorphic. - -A similar theorem holds in the smooth category, where X and Y are smooth manifolds, G is a Lie group with a smooth left action on Y and the maps tij are all smooth. - -The proof of the theorem is constructive. That is, it actually constructs a fiber bundle with the given properties. One starts by taking the disjoint union of the product spaces Ui × F -$$ -T = \coprod_{i\in I}U_i \times F = \{(i,x,y) : i\in I, x\in U_i, y\in F\} -$$ - -and then forms the quotient by the equivalence relation -$$ -(j,x,y) \sim (i,x,t_{ij}(x)\cdot y)\qquad \forall x\in U_i \cap U_j, y\in F. 
-$$ - -The total space E of the bundle is T/~ and the projection π : E → X is the map which sends the equivalence class of (i, x, y) to x. The local trivializations -$$ -\phi_i : \pi^{-1}(U_i) \to U_i \times F -$$ - -are then defined by -$$ -\phi_i^{-1}(x,y) = [(i,x,y)]. -$$ - -Let E → X a fiber bundle with fiber F and structure group G, and let F′ be another left G-space. One can form an associated bundle E′ → X with a fiber F′ and structure group G by taking any local trivialization of E and replacing F by F′ in the construction theorem. If one takes F′ to be G with the action of left multiplication then one obtains the associated principal bundle. diff --git a/wiki/wikipedia/285.txt b/wiki/wikipedia/285.txt deleted file mode 100644 index 13667f976a571bf4a9c894aa66782a6a120f9151..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/285.txt +++ /dev/null @@ -1,37 +0,0 @@ -Midgard is an open source persistent storage framework. It provides an object-oriented and replicated environment for building data-intensive applications. - -Midgard also ships with MidCOM content management system (CMS) built on the Midgard framework. MidCOM's features include web-based authoring WYSIWYG interfaces and a component interface for installing additional web functionalities, including wikis and blogs. - -Midgard is built on the GNOME stack of libraries like GLib and libgda, and has language bindings for C, Python, Objective-C and PHP. Communications between applications written in the different languages happen over D-Bus. The CMS functionalities run on the LAMP (Linux, Apache, MySQL and PHP) platform. Midgard can also be used with PHPCR, the PHP implementation of the Java Content Repository standard. In early 2000s (decade) there was also a pure-PHP implementation of the Midgard API called Midgard Lite that has since been re-implemented as the midgard-portable project. - -The project follows the synchronized, 6 month release cycle that is implemented by several major open source projects like Ubuntu and GNOME. Because of this, the version numbering reflects the year and month of a release. The version 8.09 Ragnaroek has been designated as a "Long Term Support" release. - -Especially the templating and page composition features of Midgard have received praise, earning honorary mentions in several CMS Watch surveys. It also got score of 42 out of 45 in the Celebrity CMS Deathmatch of 2009 - -The name Midgard comes from Nordic mythology, meaning Middle earth, the world of humans. Most of the Midgard developer community comes from the Baltic region, and the project has been referred by CMS Watch as the Hanseatic League of Content Management. - -Midgard Project was started in early 1998 by Jukka Zitting and Henri Bergius for a Finnish historical reenactment organization —Harmaasudet— as a system for them to publish their material online. - -Since the organization didn't have resources to maintain a large development project by itself, the open source model was chosen for creating a community of contributors to the system. The version 1.0 of Midgard was released to the public on May 8, 1999. It attracted a steady stream of users, and the development project flourished despite quite primitive early user interfaces. - -Commercial services for the platform started to appear in early 2000. One of the first adopters was Envida, a Dutch company that realized the potential of Midgard for Web hosting purposes. 
First proprietary application for the platform was Hong Kong Linux Center (HKLC) Nadmin Studio content management system. - -In early 2000s (decade), Midgard developers participated actively in OSCOM, the collaborative organization for open source content management systems. This included development of shared content editing clients like Twingle and tutorials in various conferences. Midgard also featured in F.U.D., the Wyona Pictures documentary about OSCOM. - -First application not connected with content management was Nemein.Net, - -a Professional Services Automation application released in 2002 by Nemein, a Finnish Midgard company. In May 2004 the Nemein.Net suite was renamed to OpenPSA and released under Open Source licensing. - -By 2009, some social web services, like Qaiku have also adopted Midgard as their content management platform. It also runs in organizations like Helsinki University of Technology and Maemo. e-commerce implementations with Midgard include the Movie-TV online video rental service. It has been used by New Zealand government for running the country's eGovernment portal. - -Midgard has seen some non-Web use also, including providing synchronization with the Tomboy note-taking application for Linux desktop. - -In addition to regular content management, Midgard is seeing use in special web application scenarios like Lufthansa's system for managing global marketing budgets and HP's client documentation system. - -The Midgard content repository library entered the Debian distribution in November 2010. Some parts of the history of Midgard are recounted in the book Open Advice. - -The Midgard core libraries and the MidCOM CMS are distributed under the GNU Lesser General Public License (LGPL), a license which permits the software to be freely used so long as it is dynamically linked or the user can relink it to new versions of the libraries. This is the same license used by the GNU C Library. This licensing scheme qualifies Midgard as free software developed with an open source model. - -Official documentation is licensed under the Creative Commons Attribution-ShareAlike License which supports the free usage principles defined by the GPL for code. - -Applications developed using the Midgard application programming interfaces (API) can be copyrighted and licensed under any terms by their authors, enabling creation of commercial products and services based on the platform. diff --git a/wiki/wikipedia/2850.txt b/wiki/wikipedia/2850.txt deleted file mode 100644 index 134b6f99228d736098147956b86b8d014b4ca4d2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2850.txt +++ /dev/null @@ -1,7 +0,0 @@ -In mathematics, especially in topology, equidimensionality is a property of a space that the local dimension is the same everywhere. - -A topological space X is said to be equidimensional if for all points p in X, the dimension at p, that is dim p(X), is constant. The Euclidean space is an example of an equidimensional space. The disjoint union of two spaces X and Y (as topological spaces) of different dimension is an example of a non-equidimensional space. - -A scheme S is said to be equidimensional if every irreducible component has the same Krull dimension. For example, the affine scheme Spec k[x,y,z]/(xy,xz), which intuitively looks like a line intersecting a plane, is not equidimensional. - -An affine algebraic variety whose coordinate ring is a Cohen–Macaulay ring is equidimensional. 
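Returning to the non-equidimensional example above, its two irreducible components can be exhibited computationally. The sketch below is a sympy computation of ours (the elimination-of-t trick for intersecting ideals is the standard one; the variable names are arbitrary) verifying that (xy, xz) = (x) ∩ (y, z), i.e. that Spec k[x,y,z]/(xy,xz) is the union of the plane V(x), of dimension two, and the line V(y,z), of dimension one:

```
from sympy import symbols, groebner

t, x, y, z = symbols('t x y z')

# Ideal intersection via elimination: (x) ∩ (y, z) is read off from a
# Groebner basis of (t*x, (1-t)*y, (1-t)*z) in lex order with t largest,
# by keeping the basis elements that do not involve t.
G = groebner([t*x, (1 - t)*y, (1 - t)*z], t, x, y, z, order='lex')
intersection = [g for g in G.exprs if not g.has(t)]
print(intersection)   # expected generators: x*y and x*z

# V(x) is a plane (dimension 2) and V(y, z) is a line (dimension 1), so the
# scheme Spec k[x,y,z]/(x*y, x*z) is not equidimensional.
```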
diff --git a/wiki/wikipedia/2851.txt b/wiki/wikipedia/2851.txt deleted file mode 100644 index e946bad3494c38db48c12ab44337006cae099c51..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2851.txt +++ /dev/null @@ -1,140 +0,0 @@ -In computational complexity theory, PostBQP is a complexity class consisting of all of the computational problems solvable in polynomial time on a quantum Turing machine with postselection and bounded error (in the sense that the algorithm is correct at least 2/3 of the time on all inputs). - -Postselection is not considered to be a feature that a realistic computer (even a quantum one) would possess, but nevertheless postselecting machines are interesting from a theoretical perspective. - -Removing either one of the two main features (quantumness, postselection) from PostBQP gives the following two complexity classes, both of which are subsets of PostBQP: - -* BQP is the same as PostBQP except without postselection - -* BPPpath is the same as PostBQP except that instead of quantum, the algorithm is a classical randomized algorithm (with postselection) - -The addition of postselection seems to make quantum Turing machines much more powerful: Scott Aaronson proved PostBQP is equal to PP, a class which is believed to be relatively powerful, whereas BQP is not known even to contain the seemingly smaller class NP. Using similar techniques, Aaronson also proved that small changes to the laws of quantum computing would have significant effects. As specific examples, under either of the two following changes, the "new" version of BQP would equal PP: - -* if we broadened the definition of 'quantum gate' to include not just unitary operations but linear operations, or - -* if the probability of measuring a basis state $|x\rangle$ was proportional to $|\alpha_x|^p$ instead of $|\alpha_x|^2$ for any even integer p > 2. - -In order to describe some of the properties of PostBQP we fix a formal way of describing quantum postselection. Define a quantum algorithm to be a family of quantum circuits (specifically, a uniform circuit family). We designate one qubit as the postselection qubit P and another as the output qubit Q. Then PostBQP is defined by postselecting upon the event that the postselection qubit is $|1\rangle$. Explicitly, a language L is in PostBQP if there is a quantum algorithm A so that after running A on input x and measuring the two qubits P and Q, - -* P = 1 with nonzero probability - -* if the input x is in L then Pr[Q = 1|P = 1] ≥ 2/3 - -* if the input x is not in L then Pr[Q = 0|P = 1] ≥ 2/3. - -One can show that allowing a single postselection step at the end of the algorithm (as described above) or allowing intermediate postselection steps during the algorithm are equivalent. - -Here are three basic properties of PostBQP (which also hold for BQP via similar proofs): - -# PostBQP is closed under complement. Given a language L in PostBQP and a corresponding deciding circuit family, create a new circuit family by flipping the output qubit after measurement, then the new circuit family proves the complement of L is in PostBQP. - -# You can do probability amplification in PostBQP. The definition of PostBQP is not changed if we replace the 2/3 value in its definition by any other constant strictly between 1/2 and 1. 
As an example, given a PostBQP algorithm A with success probability 2/3, we can construct another algorithm which runs three independent copies of A, outputs a postselection bit equal to the conjunction of the three "inner" ones, and outputs an output bit equal to the majority of the three "inner" ones; the new algorithm will be correct with conditional probability $(2/3)^3 + 3(1/3)(2/3)^2 = 20/27$, greater than the original 2/3. - -# PostBQP is closed under intersection. Suppose we have PostBQP circuit families for two languages L_1 and L_2, with respective postselection qubits and output qubits P_1, P_2, Q_1, Q_2. We may assume by probability amplification that both circuit families have success probability at least 5/6. Then we create a composite algorithm where the circuits for L_1 and L_2 are run independently, and we set P to the conjunction of P_1 and P_2, and Q to the conjunction of Q_1 and Q_2. It is not hard to see by a union bound that this composite algorithm correctly decides membership in $L_1 \cap L_2$ with (conditional) probability at least 2/3. - -More generally, combinations of these ideas show that PostBQP is closed under union and BQP truth-table reductions. - -Scott Aaronson showed that the complexity classes \mathsf{PostBQP} (postselected bounded error quantum polynomial time) and PP (probabilistic polynomial time) are equal. The result was significant because this quantum computation reformulation of \mathsf{PP} gave new insight and simpler proofs of properties of \mathsf{PP}. - -The usual definition of a \mathsf{PostBQP} circuit family is one with two designated qubits P (postselection) and Q (output), with a single measurement of P and Q at the end, such that measuring P = 1 has nonzero probability, the conditional probability Pr[Q = 1|P = 1] ≥ 2/3 if the input x is in the language, and Pr[Q = 0|P = 1] ≥ 2/3 if the input x is not in the language. For technical reasons we tweak the definition of \mathsf{PostBQP} as follows: we require that $\Pr[P = 1] \ge 2^{-n^c}$ for some constant c depending on the circuit family. Note this choice does not affect the basic properties of \mathsf{PostBQP}, and also it can be shown that any computation consisting of typical gates (e.g. Hadamard, Toffoli) has this property whenever Pr[P = 1] > 0. - -Suppose we are given a \mathsf{PostBQP} family of circuits to decide a language L. We assume without loss of generality (e.g. see the inessential properties of quantum computers) that all gates have transition matrices that are represented with real numbers, at the expense of adding one more qubit. - -Let Ψ denote the final quantum state of the circuit before the postselecting measurement is made. The overall goal of the proof is to construct a \mathsf{PP} algorithm to decide L. More specifically it suffices to correctly compare the squared amplitude of Ψ in the states with Q = 1, P = 1 to the squared amplitude of Ψ in the states with Q = 0, P = 1 to determine which is bigger. The key insight is that the comparison of these amplitudes can be transformed into comparing the acceptance probability of a \mathsf{PP} machine with 1/2. - -Let n denote the input size, B = B(n) denote the total number of qubits in the circuit (inputs, ancillary, output and postselection qubits), and G = G(n) denote the total number of gates. - -Represent the ith gate by its transition matrix Ai (a real unitary $2^B \times 2^B$ matrix) and let the initial state be $|x\rangle$ (padded with zeroes). Then $\Psi = A^G A^{G-1}\dotsb A^2 A^1 |x\rangle$.
Define S1 (resp. S0) to be the set of basis states corresponding to P = 1, Q = 1 (resp. P = 1, Q = 0) and define the probabilities -$$ -\pi_1 := \text{Pr}[P=1,Q=1]=\sum_{\omega \in S_1} \Psi^2_\omega -$$ -$$ -\pi_0 := \text{Pr}[P=1,Q=0]=\sum_{\omega \in S_0} \Psi^2_\omega. -$$ - -The definition of \mathsf{PostBQP} ensures that either $\pi_{1} \ge 2\pi_0$ or $\pi_0 \ge 2\pi_1$ according to whether x is in L or not. - -Our \mathsf{PP} machine will compare $\pi_{1}$ and $\pi_{0}$. In order to do this, we expand the definition of matrix multiplication: -$$ -\Psi_\omega = \sum_{\alpha_1, \ldots, \alpha_{G}}A^{G}_{\omega,\alpha_{G}}A^{G-1}_{\alpha_G,\alpha_{G-1}}\dotsb A^2_{\alpha_3,\alpha_2} A^1_{\alpha_2,\alpha_1} x_{\alpha_1} -$$ - -where the sum is taken over all lists of G basis vectors $\alpha_i$. Now $\pi_1$ and $\pi_0$ can be expressed as a sum of pairwise products of these terms. Intuitively, we want to design a machine whose acceptance probability is something like $\tfrac{1}{2}(1+\pi_1-\pi_0)$, since then $x \in L$ would imply that the acceptance probability is $\tfrac{1}{2}(1+\pi_{1}-\pi_{0})>\tfrac 1 2$, while $x \not\in L$ would imply that the acceptance probability is $\tfrac{1}{2}(1+\pi_1-\pi_0)<\tfrac 1 2$. - -The definition of \mathsf{PostBQP} tells us that $\pi_{1} \ge \tfrac{2}{3}(\pi_0+\pi_1)$ if x is in L, and that otherwise $\pi_{0} \ge \tfrac{2}{3}(\pi_0+\pi_1)$. Let us replace all entries of A by the nearest fraction with denominator $2^{f(n)}$ for a large polynomial f(n) that we presently describe. What will be used later is that the new pi values satisfy $\pi_1 > \tfrac{1}{2}(\pi_0+\pi_1)$ if x is in L, and $\pi_0 > \tfrac{1}{2}(\pi_0+\pi_1)$ if x is not in L. Using the earlier technical assumption and by analyzing how the 1-norm of the computational state changes, this is seen to be satisfied if $(1+2^{-f(n)}2^{B})^{G}-1 < \tfrac{1}{6}2^{-n^c},$ thus clearly there is a large enough f that is polynomial in n. - -Now we provide the detailed implementation of our \mathsf{PP} machine. Let α denote the sequence $\{\alpha_i\}_{i=1}^G$ and define the shorthand notation -$$ -\Pi(A, \omega, \alpha, x) := A^{G}_{\omega,\alpha_{G}}A^{G-1}_{\alpha_{G},\alpha_{G-1}}\dotsb A^2_{\alpha_3,\alpha_2} A^1_{\alpha_2,\alpha_1} x_{\alpha_1} -$$, - -then -$$ -\pi_1 - \pi_0 = \sum_{\omega \in S_1} \sum_{\alpha,\alpha'} \Pi(A, \omega, \alpha, x)\Pi(A, \omega, \alpha', x) - \sum_{\omega \in S_0} \sum_{\alpha,\alpha'} \Pi(A, \omega, \alpha, x)\Pi(A, \omega, \alpha', x). -$$ - -We define our \mathsf{PP} machine to - -* pick a basis state ω uniformly at random - -* if $\omega \not\in S_0 \cup S_1$ then STOP and accept with probability 1/2, reject with probability 1/2 - -* pick two sequences $\alpha,\alpha'$ of G basis states uniformly at random - -* compute $X = \Pi(A, \omega, \alpha, x)\Pi(A, \omega, \alpha', x)$ (which is a fraction with denominator $2^{2f(n)G(n)}$ such that $-1 \le X \le 1$) - -* if $\omega \in S_1$ then accept with probability $\tfrac{1+X}{2}$ and reject with probability $\tfrac{1-X}{2}$ (which takes at most 2f(n)G(n)+1 coin flips) - -* otherwise (then $\omega \in S_0$) accept with probability $\tfrac{1-X}{2}$ and reject with probability $\tfrac{1+X}{2}$ (which again takes at most 2f(n)G(n)+1 coin flips) - -Then it is straightforward to compute that this machine accepts with probability -$$ -\frac{1}{2}+\frac{\pi_{1}-\pi_{0}}{2^{1+B(n)+2B(n)G(n)}}, -$$ - -so this is a \mathsf{PP} machine for the language L, as needed. 
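The path-sum expansion used above can be sanity-checked numerically. The following sketch (added here, not part of the original article; the random orthogonal gates, the toy sizes B = 2 and G = 2, and the qubit labelling are all arbitrary choices) compares the direct matrix product with the explicit sum over intermediate basis states:

```python
# Verify numerically that the amplitude Psi_omega computed as a sum over
# intermediate basis states alpha_1, alpha_2 equals the direct product
# Psi = A2 A1 |x>, for a toy "circuit" of 2 real orthogonal gates.
import numpy as np

B = 2                      # qubits, so the state space has dimension 2^B
dim = 2 ** B
rng = np.random.default_rng(0)

# Random real orthogonal matrices stand in for real-valued gates.
A1, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
A2, _ = np.linalg.qr(rng.standard_normal((dim, dim)))

x = np.zeros(dim)
x[0] = 1.0                             # initial basis state |x> = |00>
psi = A2 @ A1 @ x                      # direct computation

# Path sum: Psi_omega = sum over alpha2, alpha1 of
#   A2[omega, alpha2] * A1[alpha2, alpha1] * x[alpha1]
psi_path = np.array([
    sum(A2[om, a2] * A1[a2, a1] * x[a1]
        for a2 in range(dim) for a1 in range(dim))
    for om in range(dim)])
assert np.allclose(psi, psi_path)

# Labelling the first qubit P and the second Q (a choice made for this
# sketch), pi_1 and pi_0 are the squared amplitudes on |11> and |10>.
pi1 = psi[0b11] ** 2
pi0 = psi[0b10] ** 2
print(pi1, pi0, pi1 - pi0)   # the PP machine compares these two numbers
```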
- -Suppose we have a \mathsf{PP} machine with time complexity $T := T(n)$ on input x of length $n := |x|$. Thus the machine flips a coin at most T times during the computation. We can thus view the machine as a deterministic function f (implemented, e.g. by a classical circuit) which takes two inputs (x, r), where r, a binary string of length T, represents the results of the random coin flips that are performed by the computation, and the output of f is 1 (accept) or 0 (reject). The definition of \mathsf{PP} tells us that -$$ -x \in L \Leftrightarrow \#\{r \in \{0,1\}^T\mid f(x, r)=1\} \ge 2^{T-1} -$$ - -Thus, we want a \mathsf{PostBQP} algorithm that can determine whether the above statement is true. - -Define s to be the number of random strings which lead to acceptance, -$$ -s := \#\{r \in \{0,1\}^T\mid f(x, r)=1\} -$$ - -and so $2^T-s$ is the number of rejected strings. - -It is straightforward to argue that without loss of generality, $s \not\in \{0, 2^T/2, 2^T\}$; for details, see a similar without-loss-of-generality assumption in the proof that \mathsf{PP} is closed under complementation. - -Aaronson's algorithm for solving this problem is as follows. For simplicity, we will write all quantum states as unnormalized. First, we prepare the state $|x \rangle \otimes \sum_{r \in \{0,1\}^T} |r \rangle |f(x,r) \rangle$. Second, we apply Hadamard gates to the second register (each of the first T qubits), measure the second register and postselect on it being the all-zero string. It is easy to verify that this leaves the last register (the last qubit) in the residual state -$$ - |\psi \rangle := (2^T-s)|0 \rangle + s|1 \rangle. -$$ - -Where H denotes the Hadamard gate, we compute the state -$$ - H |\psi\rangle = (2^T |0\rangle + (2^T - 2s)|1 \rangle)/\sqrt{2} -$$. - -Where $\alpha, \beta$ are positive real numbers to be chosen later with $\alpha^2+\beta^2=1$, we compute the state $\alpha |0\rangle|\psi\rangle+\beta |1\rangle|H\psi\rangle$ and measure the second qubit, postselecting on its value being equal to 1, which leaves the first qubit in a residual state depending on $\beta/\alpha$ which we denote -$$ - | \phi_{\beta/\alpha} \rangle := \alpha s |0\rangle + \frac{\beta}{\sqrt{2}}(2^T-2s)|1\rangle -$$. - -Visualizing the possible states of a qubit as a circle, we see that if $s > 2^{T-1}$ (i.e. if $x \in L$), then $\phi_{\beta/\alpha}$ lies in the open quadrant $Q_{acc} := (-|1\rangle, |0\rangle)$, while if $s < 2^{T-1}$ (i.e. if $x \not\in L$), then $\phi_{\beta/\alpha}$ lies in the open quadrant $Q_{rej} := (|0\rangle,|1\rangle)$. In fact for any fixed x (and its corresponding s), as we vary the ratio $\beta/\alpha$ in $(0, \infty)$, note that the image of $|\phi_{\beta/\alpha}\rangle $ is precisely the corresponding open quadrant. In the rest of the proof, we make precise the idea that we can distinguish between these two quadrants. - -Let $|+\rangle = (|1\rangle+|0\rangle)/\sqrt{2}$, which is the center of $Q_{rej}$, and let $|-\rangle$ be orthogonal to $|+\rangle$. Any qubit in $Q_{acc}$, when measured in the basis $\{|+\rangle, |-\rangle\}$, gives the value $|+\rangle$ less than 1/2 of the time. On the other hand, if $x \not\in L$ and we had picked $\beta/\alpha = r^* := \sqrt{2}s / (2^T-2s)$ then measuring $| \phi_{\beta/\alpha} \rangle$ in the basis $\{|+\rangle, |-\rangle\}$ would give the value $|+\rangle$ all of the time.
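This quadrant separation is easy to illustrate numerically. The sketch below (an illustration added here; the values of T and s are arbitrary toy choices) previews the scan over the candidate ratios β/α = 2^i that the algorithm described next performs:

```python
# For the unnormalized state |phi> = alpha*s|0> + (beta/sqrt(2))*(2^T - 2s)|1>,
# the probability of the |+> outcome is (a0 + a1)^2 / (2*(a0^2 + a1^2)).
import math

def prob_plus(T, s, ratio):             # ratio = beta/alpha
    a0 = s                              # take alpha = 1 (only the ratio matters)
    a1 = (ratio / math.sqrt(2)) * (2**T - 2*s)
    return (a0 + a1)**2 / (2 * (a0**2 + a1**2))

T = 8
for s in (2**(T-1) - 37, 2**(T-1) + 37):      # x not in L, then x in L
    best = max(prob_plus(T, s, 2.0**i) for i in range(-T, T + 1))
    print(s > 2**(T-1), round(best, 4))
# For s < 2^(T-1), some i gives P(+) >= (3+2*sqrt(2))/6 ~ 0.9714;
# for s > 2^(T-1), the |1> amplitude is negative and every i gives P(+) < 1/2.
```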
Since we don't know s we also don't know the precise value of $r^*$, but we can try several (polynomially many) different values for $\beta/\alpha$ in hopes of getting one that is "near" $r^*$. - -Specifically, note $2^{-T} < r^* < 2^T$ and let us successively set $\beta/\alpha$ to every value of the form $2^i$ for $-T \leq i \leq T$. Then elementary calculations show that for one of these values of i, the probability that the measurement of $| \phi_{2^i} \rangle$ in the basis $\{|+\rangle, |-\rangle\}$ yields $|+\rangle$ is at least $(3+2\sqrt{2})/6 \approx 0.971.$ - -Overall, the \mathsf{PostBQP} algorithm is as follows. Let k be any constant strictly between 1/2 and $(3+2\sqrt{2})/6$. - -We do the following experiment for each $-T \leq i \leq T$: construct and measure $| \phi_{2^i} \rangle$ in the basis $\{|+\rangle, |-\rangle\}$ a total of $C \log T$ times, where C is a constant. If the proportion of $|+\rangle$ measurements is greater than k, then reject. If we don't reject for any i, accept. Chernoff bounds then show that for a sufficiently large universal constant C, we correctly classify x with probability at least 2/3. - -Note that this algorithm satisfies the technical assumption that the overall postselection probability is not too small: each individual measurement of $| \phi_{2^i} \rangle$ has postselection probability $1/2^{O(T)}$ and so the overall probability is $1/2^{O(T^2 \log T)}$. - -* See Quantum computation reformulation of PP diff --git a/wiki/wikipedia/2852.txt deleted file mode 100644 index 21a1464f5000b8d608822dffc02b8ff3f480e377..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2852.txt +++ /dev/null @@ -1,22 +0,0 @@ -In algebraic number theory, it can be shown that every cyclotomic field is an abelian extension of the rational number field Q, having Galois group of the form $(\mathbb Z/n\mathbb Z)^\times$. The Kronecker–Weber theorem provides a partial converse: every finite abelian extension of Q is contained within some cyclotomic field. In other words, every algebraic integer whose Galois group is abelian can be expressed as a sum of roots of unity with rational coefficients. For example, -$$ -\sqrt{5} = e^{2 \pi i / 5} - e^{4 \pi i / 5} - e^{6 \pi i / 5} + e^{8 \pi i / 5}, -$$ $\sqrt{-3} = e^{2 \pi i / 3} - e^{4 \pi i / 3},$ and $\sqrt{3} = e^{\pi i / 6} - e^{5 \pi i / 6}.$ - -The theorem is named after Leopold Kronecker and Heinrich Martin Weber. - -The Kronecker–Weber theorem can be stated in terms of fields and field extensions. - -Precisely, the Kronecker–Weber theorem states: every finite abelian extension of the rational numbers Q is a subfield of a cyclotomic field. - -That is, whenever an algebraic number field has a Galois group over Q that is an abelian group, the field is a subfield of a field obtained by adjoining a root of unity to the rational numbers. - -For a given abelian extension K of Q there is a minimal cyclotomic field that contains it. The theorem allows one to define the conductor of K as the smallest integer n such that K lies inside the field generated by the n-th roots of unity. For example, quadratic fields have as conductor the absolute value of their discriminant, a fact generalised in class field theory. - -The theorem was first stated by Kronecker (1853), though his argument was not complete for extensions of degree a power of 2. - -Weber (1886) published a proof, but this had some gaps and errors that were pointed out and corrected by Neumann. The first complete proof was given by Hilbert (1896).
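The root-of-unity expression for √5 displayed earlier can be verified numerically; a minimal sketch (added here, not part of the original article):

```python
# Check that e^{2*pi*i/5} - e^{4*pi*i/5} - e^{6*pi*i/5} + e^{8*pi*i/5} = sqrt(5).
import cmath

def root_of_unity(k, n):
    return cmath.exp(2j * cmath.pi * k / n)

total = (root_of_unity(1, 5) - root_of_unity(2, 5)
         - root_of_unity(3, 5) + root_of_unity(4, 5))
print(total)                              # ~ (2.2360679...+0j)
assert abs(total - 5 ** 0.5) < 1e-12
```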
- -The local Kronecker–Weber theorem, which states that any abelian extension of a local field can be constructed using cyclotomic extensions and Lubin–Tate extensions, was proved subsequently, and several other proofs have also been given. - -Hilbert's twelfth problem asks for generalizations of the Kronecker–Weber theorem to base fields other than the rational numbers, and asks for the analogues of the roots of unity for those fields. A different approach to abelian extensions is given by class field theory. diff --git a/wiki/wikipedia/2853.txt deleted file mode 100644 index 300e2324ae05d801b3d59a65edafee68e60e3267..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2853.txt +++ /dev/null @@ -1,14 +0,0 @@ -Hierarchical closeness (HC) is a structural centrality measure used in network theory or graph theory. It is extended from closeness centrality to rank how centrally located a node is in a directed network. While the original closeness centrality of a directed network considers the most important node to be that with the least total distance from all other nodes, hierarchical closeness evaluates the most important node as the one which reaches the most nodes by the shortest paths. The hierarchical closeness explicitly includes information about the range of other nodes that can be affected by the given node. In a directed network $G(V, A)$ where $V$ is the set of nodes and $A$ is the set of interactions, the hierarchical closeness of a node $i \in V$, denoted $C_{hc}(i)$, was proposed by Tran and Kwon as follows: -$$ -C_{hc}(i) = N_R(i) + C_{clo-i}(i) -$$ - -where: - -* $N_R(i) \in [0, |V| - 1] $ is the reachability of a node $i$ defined by $N_R(i) = |\{j \in V: \exists$ a path from $i$ to $j\}|$, and - -* $C_{clo-i}(i)$ is a normalized form of the original closeness (Sabidussi, 1966), using a variant definition of closeness as follows: $C_{clo-i}(i)=\frac{1}{|V|-1} \sum_{j \in V\setminus\{i\}} \frac{1}{d(i,j)}$ where $d(i, j)$ is the distance of the shortest path, if any, from $i$ to $j$; otherwise, $d(i, j)$ is specified as an infinite value. - -In the formula, $N_R(i)$ represents the number of nodes in $V$ that can be reached from $i$. It can also represent the hierarchical position of a node in a directed network. Note that if $N_R(i) = 0$, then $C_{hc}(i) = 0$ because $C_{clo-i}(i)$ is $0$. In cases where $N_R(i) > 0$, the reachability is the dominant factor because $N_R(i) \geq 1$ but $C_{clo-i}(i) < 1$. In other words, the first term indicates the level of the global hierarchy and the second term presents the level of the local centrality. - -Hierarchical closeness can be used in biological networks to rank the risk of genes to carry diseases. diff --git a/wiki/wikipedia/2854.txt deleted file mode 100644 index aed0b9f8f1b4e4959f1835f69fa8e6026066f948..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2854.txt +++ /dev/null @@ -1,74 +0,0 @@ -In mathematical logic and computer science the symbol $\vdash$ has taken the name turnstile because of its resemblance to a typical turnstile if viewed from above. It is also referred to as tee and is often read as "yields", "proves", "satisfies" or "entails". - -In TeX, the turnstile symbol $\vdash$ is obtained from the command \vdash. In Unicode, the turnstile symbol (⊢) is called right tack and is at code point U+22A2. (Code point U+22A6 is named assertion sign (⊦).) On a typewriter, a turnstile can be composed from a vertical bar (|) and a dash (–).
In LaTeX there is a turnstile package which produces this sign in many ways, and is capable of putting labels below or above it, in the correct places. - -The turnstile represents a binary relation. It has several different interpretations in different contexts: - -* In epistemology, Per Martin-Löf (1996) analyzes the $\vdash$ symbol thus: "...[T]he combination of Frege's Urteilsstrich, judgement stroke [ | ], and Inhaltsstrich, content stroke [—], came to be called the assertion sign." Frege's notation for a judgement of some content A -$$ -\vdash A -$$ - -can then be read - -I know A is true. - -In the same vein, a conditional assertion -$$ -P \vdash Q -$$ - -can be read as: - -From P, I know that Q - -* In metalogic, the study of formal languages, the turnstile represents syntactic consequence (or "derivability"). This is to say that it shows that one string can be derived from another in a single step, according to the transformation rules (i.e. the syntax) of some given formal system. As such, the expression -$$ -P \vdash Q -$$ - -means that Q is derivable from P in the system. - -Consistent with its use for derivability, a "⊢" followed by an expression without anything preceding it denotes a theorem, which is to say that the expression can be derived from the rules using an empty set of axioms. As such, the expression -$$ -\vdash Q -$$ - -means that Q is a theorem in the system. - -*In proof theory, the turnstile is used to denote "provability" or "derivability". For example, if T is a formal theory and S is a particular sentence in the language of the theory then -$$ -T \vdash S -$$ - -means that S is provable from T. This usage is demonstrated in the article on propositional calculus. The syntactic consequence of provability should be contrasted to semantic consequence, denoted by the double turnstile symbol $\models$. One says that $S$ is a semantic consequence of $T$, or $T \models S$, when in all possible valuations in which $T$ is true, $S$ is also true. For propositional logic, it may be shown that semantic consequence $\models$ and derivability $\vdash$ are equivalent to one another. That is, propositional logic is sound ($\vdash$ implies $\models$) and complete ($\models$ implies $\vdash$). - -* In sequent calculus, the turnstile is used to denote a sequent. A sequent $A_1,\dots,A_m \vdash B_1,\dots,B_n$ asserts that, if all the antecedents $A_1,\dots,A_m$ are true, then at least one of the consequents $B_1,\dots,B_n$ must be true. - -* In the typed lambda calculus, the turnstile is used to separate typing assumptions from the typing judgment. - -* In category theory, a reversed turnstile ($\dashv$), as in $F \dashv G$, is used to indicate that the functor F is left adjoint to the functor G. More rarely, a turnstile ($\vdash$), as in $G \vdash F$, is used to indicate that the functor G is right adjoint to the functor F. - -* In APL the symbol is called "right tack" and represents the ambivalent right identity function where both X⊢Y and ⊢Y are Y. The reversed symbol "⊣" is called "left tack" and represents the analogous left identity where X⊣Y is X and ⊣Y is Y. - -* In combinatorics, $\lambda \vdash n$ means that λ is a partition of the integer n. - -* In Hewlett-Packard's HP-41C/CV/CX and HP-42S series of calculators, the symbol (at code point 127 in the FOCAL character set) is called "Append character" and is used to indicate that the following characters will be appended to the alpha register rather than replacing the existing contents of the register.
The symbol is also supported (at code point 148) in a modified variant of the HP Roman-8 character set used by other HP calculators. - -*On the Casio fx-92 Collège 2D and fx-92+ Spéciale Collège calculators, the symbol represents the modulo operator; entering $5\vdash2$ will produce an answer of $Q=2;R=1$, where Q is the quotient and R is the remainder. On other CASIO calculators (such as on the Belgian variants—the fx-92B Spéciale Collège and fx-92B Collège 2D calculators—where the decimal separator is represented as a dot instead of a comma), the modulo operator is represented by ÷R instead. - -* ꜔ (U+A714) Modifier Letter Mid Left-Stem Tone Bar - -* ├ (U+251C) Box Drawings Light Vertical And Right - -* ㅏ (U+314F) Korean Ah - -* Ͱ (U+0370) Greek Capital Letter Heta - -* ͱ (U+0371) Greek Small Letter Heta - -* Ⱶ (U+2C75) Latin Capital Letter Half H - -* ⱶ (U+2C76) Latin Small Letter Half H - -* ⎬ (U+23AC) Right Curly Bracket Middle Piece diff --git a/wiki/wikipedia/2855.txt deleted file mode 100644 index 5c73039b2ddb29dffe6c7b8dc57a889718af5376..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2855.txt +++ /dev/null @@ -1,99 +0,0 @@ -In mathematics, Ramanujan's master theorem (named after Srinivasa Ramanujan) is a technique that provides an analytic expression for the Mellin transform of an analytic function. - -The result is stated as follows: - -If a complex-valued function $ f(x) $ has an expansion of the form -$$ - f(x)=\sum_{k=0}^\infty \frac{\varphi(k)}{k!}(-x)^k -$$ - -then the Mellin transform of $ f(x) $ is given by -$$ - \int_0^\infty x^{s-1} f(x) dx = \Gamma(s)\varphi(-s) -$$ - -where $ \Gamma(s) $ is the gamma function. - -It was widely used by Ramanujan to calculate definite integrals and infinite series. - -Higher-dimensional versions of this theorem also appear in quantum physics (through Feynman diagrams). - -A similar result was also obtained by Glaisher. - -An alternative formulation of Ramanujan's master theorem is as follows: -$$ - \int_0^\infty x^{s-1}\left(\lambda(0) - x\lambda(1) + x^{2}\lambda(2) -\cdots\right) dx = \frac{\pi}{\sin(\pi s)}\lambda(-s) -$$ - -which gets converted to the above form after substituting $ \lambda(n) \equiv \frac{\varphi(n)}{\Gamma(1+n)} $ and using the functional equation for the gamma function. - -The integral above is convergent for $ 0 < \mathcal{Re}(s) < 1 $ subject to growth conditions on $ \varphi $. - -A proof of Ramanujan's master theorem subject to "natural" assumptions (though not the weakest necessary conditions) was provided by G. H. Hardy, employing the residue theorem and the well-known Mellin inversion theorem. - -The generating function of the Bernoulli polynomials $ B_k(x)$ is given by: -$$ - \frac{ze^{xz}}{e^z - 1}=\sum_{k=0}^\infty B_k(x)\frac{z^k}{k!} -$$ - -These polynomials are given in terms of the Hurwitz zeta function: -$$ - \zeta(s,a) = \sum_{n=0}^\infty \frac{1}{(n+a)^s} -$$ - -by $~\zeta(1-n,a) = -\frac{B_n(a)}{n} ~$ for $~ n \geq 1~ $. - -Using Ramanujan's master theorem and the generating function of Bernoulli polynomials one has the following integral representation: -$$ - \int_0^\infty x^{s-1}\left(\frac{e^{-ax}}{1 - e^{-x}}-\frac{1}{x}\right) dx = \Gamma(s)\zeta(s,a) \! -$$ - -which is valid for $~ 0 < \mathcal{Re}(s) < 1 ~$.
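This integral representation is easy to check numerically; the following sketch (added here, using the mpmath library, with the evaluation point s = 1/2, a = 1 chosen arbitrarily) compares the quadrature against the closed form:

```python
# Numerically verify  int_0^inf x^(s-1) * (e^(-a x)/(1 - e^(-x)) - 1/x) dx
#                     = Gamma(s) * zeta(s, a)   at s = 1/2, a = 1.
from mpmath import mp, mpf, exp, gamma, zeta, quad, inf

mp.dps = 25
s, a = mpf('0.5'), mpf('1')

integrand = lambda x: x**(s - 1) * (exp(-a * x) / (1 - exp(-x)) - 1 / x)
lhs = quad(integrand, [0, 1, inf])    # split at 1 to help with the endpoint
rhs = gamma(s) * zeta(s, a)           # mpmath's zeta accepts a Hurwitz parameter
print(lhs, rhs)                       # both ~ -2.5883409...
```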
- -Weierstrass's definition of the gamma function -$$ - \Gamma(x) = \frac{e^{-\gamma x}}{x}\prod_{n=1}^\infty \left(1 + \frac{x}{n}\right)^{-1} e^{x/n} \! -$$ - -is equivalent to the expression -$$ - \log\Gamma(1+x) = -\gamma x + \sum_{k=2}^\infty \frac{\zeta(k)}{k}(-x)^k -$$ - -where $ \zeta(k) $ is the Riemann zeta function. - -Then applying Ramanujan's master theorem we have: -$$ - \int_0^\infty x^{s-1} \frac{\gamma x + \log\Gamma(1+x)}{x^2} \operatorname d x = \frac{\pi}{\sin(\pi s)}\frac{\zeta(2-s)}{2-s} \! -$$ - -valid for $ 0 < \mathcal{Re}(s) < 1 $. - -Special cases of $ s = \frac{1}{2} $ and $ s = \frac{3}{4} $ are -$$ - \int_0^\infty \frac{\gamma x+\log\Gamma(1+x)}{x^{5/2}} \operatorname d x = \frac{2\pi}{3}\zeta\left( \frac{3}{2} \right) -$$ -$$ - \int_0^\infty \frac{\gamma x+\log\Gamma(1+x)}{x^{9/4}} \operatorname d x = \sqrt{2} \frac{4\pi}{5} \zeta\left(\frac 5 4\right) -$$ - -The Bessel function of the first kind has the power series -$$ - J_\nu(z)=\sum_{k=0}^\infty \frac{(-1)^k}{\Gamma(k+\nu+1)k!}\bigg(\frac{z}{2}\bigg)^{2k+\nu} -$$ - -By Ramanujan's master theorem, together with some identities for the gamma function and rearranging, we can evaluate the integral -$$ - \frac{2^{\nu-2s}\pi}{\sin{(\pi(s-\nu))}} \int_0^\infty z^{s-1-\nu/2}J_\nu(\sqrt{z})dz = \Gamma(s)\Gamma(s-\nu) -$$ - -valid for $ 0 < 2\mathcal{Re}(s) < \mathcal{Re}(\nu)+\tfrac{3}{2} $. - -Equivalently, if the spherical Bessel function $j_\nu(z)$ is preferred, the formula becomes -$$ - \frac{2^{\nu-2s}\sqrt{\pi}(1-2s+2\nu)}{\cos{(\pi(s-\nu))}} \int_0^\infty z^{s-1-\nu/2}j_\nu(\sqrt{z})dz = \Gamma(s)\Gamma\bigg(\frac{1}{2}+s-\nu\bigg) -$$ - -valid for $ 0 < 2\mathcal{Re}(s) < \mathcal{Re}(\nu)+2 $. - -The solution is remarkable in that it is able to interpolate across the major identities for the gamma function. In particular, the choice of $J_{0}(\sqrt{z})$ gives the square of the gamma function, $j_{0}(\sqrt{z})$ gives the duplication formula, $z^{-1/2}J_{1}(\sqrt{z})$ gives the reflection formula, and fixing to the evaluable $s=\tfrac{1}{2}$ or $s=1$ gives the gamma function by itself, up to reflection and scaling. diff --git a/wiki/wikipedia/2856.txt deleted file mode 100644 index 57ce5a614eb180e407977f0bb0247dbebe10f447..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2856.txt +++ /dev/null @@ -1,55 +0,0 @@ -In graph theory, a maximal independent set (MIS) or maximal stable set is an independent set that is not a subset of any other independent set. In other words, there is no vertex outside the independent set that may join it, because the set is maximal with respect to the independent-set property. - -For example, in the graph $P_3$, a path with three vertices $a$, $b$, and $c$, and two edges $ab$ and $bc$, the sets $\{b\}$ and $\{a, c\}$ are both maximal independent sets. The set $\{a\}$ is independent, but is not a maximal independent set, because it is a subset of the larger independent set $\{a, c\}$. In this same graph, the maximal cliques are the sets $\{a, b\}$ and $\{b, c\}$. - -An MIS is also a dominating set in the graph, and every dominating set that is independent must be maximal independent, so MISs are also called independent dominating sets. - -A graph may have many MISs of widely varying sizes; a largest one is called a maximum independent set. The graphs in which all maximal independent sets have the same size are called well-covered graphs. - -The phrase "maximal independent set" is also used to describe maximal subsets of independent elements in mathematical structures other than graphs, and in particular in vector spaces and matroids.
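The $P_3$ example above can be confirmed by brute-force enumeration; a small sketch (added here, not part of the original article):

```python
# Enumerate all independent sets of the path a - b - c and keep the
# maximal ones; the result should be {b} and {a, c}.
from itertools import combinations

nodes = ['a', 'b', 'c']
edges = {frozenset(('a', 'b')), frozenset(('b', 'c'))}

def independent(S):
    return all(frozenset(pair) not in edges for pair in combinations(S, 2))

indep = [set(S) for r in range(len(nodes) + 1)
         for S in combinations(nodes, r) if independent(S)]
maximal = [S for S in indep
           if not any(S < T for T in indep)]   # no strictly larger indep. superset
print(maximal)   # [{'b'}, {'a', 'c'}]
```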
- -Two algorithmic problems are associated with MISs: finding a single MIS in a given graph and listing all MISs in a given graph. - -For a graph $G = (V, E)$, an independent set $S$ is a maximal independent set if for every $v \in V$, one of the following is true: $v \in S$, or $v$ has a neighbour in $S$ (so that $S \cup \{v\}$ is not independent). - -The following randomized parallel algorithm, due to Luby, finds a maximal independent set (a runnable sketch of it appears after the analysis below): - -# Initialize I to an empty set. - -# While V is not empty: - -#* Choose a random set of vertices S ⊆ V, by selecting each vertex v independently with probability 1/(2d(v)), where d is the degree of v (the number of neighbours of v). - -#* For every edge in E, if both its endpoints are in the random set S, then remove from S the endpoint whose degree is lower (i.e. has fewer neighbours). Break ties arbitrarily, e.g. using a lexicographic order on the vertex names. - -#* Add the set S to I. - -#* Remove from V the set S and all the neighbours of nodes in S. - -# Return I. - -ANALYSIS: For each node v, divide its neighbours into lower neighbours (whose degree is lower than the degree of v) and higher neighbours (whose degree is higher than the degree of v), breaking ties as in the algorithm. - -Call a node v bad if more than 2/3 of its neighbours are higher neighbours. Call an edge bad if both its endpoints are bad; otherwise the edge is good. - -* At least 1/2 of all edges are always good. PROOF: Build a directed version of G by directing each edge to the node with the higher degree (breaking ties arbitrarily). So for every bad node, the number of out-going edges is more than 2 times the number of in-coming edges. So every bad edge that enters a node v can be matched to a distinct set of two edges that exit the node v. Hence the total number of edges is at least 2 times the number of bad edges. - -* For every good node u, the probability that a neighbour of u is selected to S is at least a certain positive constant. PROOF: The probability that NO neighbour of u is selected to S is at most the probability that none of u's lower neighbours is selected. For each lower neighbour v, the probability that it is not selected is $1 - 1/(2d(v))$, which is at most $1 - 1/(2d(u))$ (since $d(u) \ge d(v)$). The number of such neighbours is at least $d(u)/3$, since u is good. Hence the probability that no lower neighbour is selected is at most $(1 - 1/(2d(u)))^{d(u)/3} \le \exp(-1/6)$, and so some neighbour of u is selected with probability at least $1 - \exp(-1/6)$. - -* For every node u that is selected to S, the probability that u will be removed from S is at most 1/2. PROOF: This probability is at most the probability that a higher neighbour of u is selected to S. For each higher neighbour v, the probability that it is selected is at most $1/(2d(v))$, which is at most $1/(2d(u))$ (since $d(v) \ge d(u)$). By the union bound, the probability that some higher neighbour is selected is at most $d(u) \cdot 1/(2d(u)) = 1/2$. - -* Hence, for every good node u, the probability that a neighbour of u is selected to S and remains in S is a certain positive constant. Hence, the probability that u is removed, in each step, is at least a positive constant. - -* Hence, for every good edge e, the probability that e is removed, in each step, is at least a positive constant. So the number of good edges drops by at least a constant factor each step. - -* Since at least half the edges are good, the total number of edges also drops by a constant factor each step. - -* Hence, the number of steps is O(log m), where m is the number of edges. This is bounded by $O(\log(n))$. - -A worst-case graph, in which the average number of steps is $\Theta(\log(n))$, is a graph made of n/2 connected components, each with 2 nodes. The degree of all nodes is 1, so each node is selected with probability 1/2, and with probability 1/4 neither node in a component is chosen. Hence, the number of nodes drops by a factor of 4 each step, and the expected number of steps is $\log_4(n)$.
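Below is the runnable sketch referred to above: an illustrative Python implementation of the random-selection algorithm (added here; the data representation and the exact tie-breaking order are choices made for this sketch, beyond what the steps above specify):

```python
import random

def luby_mis(V, E, rng=random.Random(0)):
    """Randomized MIS: repeatedly select vertices w.p. 1/(2 d(v)),
    resolve edge conflicts by dropping the lower-degree endpoint,
    then remove the selected set and its neighbourhood."""
    V, E = set(V), {frozenset(e) for e in E}
    I = set()
    while V:
        deg = {v: 0 for v in V}
        for e in E:
            for v in e:
                deg[v] += 1
        # Isolated vertices join directly; others are picked w.p. 1/(2 d(v)).
        S = {v for v in V if deg[v] == 0 or rng.random() < 1 / (2 * deg[v])}
        # Conflict resolution: drop the endpoint of lower degree
        # (ties broken by vertex name).
        for e in E:
            u, w = sorted(e, key=lambda v: (deg[v], v))
            if u in S and w in S:
                S.discard(u)
        I |= S
        removed = S | {w for e in E if e & S for w in e}
        V -= removed
        E = {e for e in E if not (e & removed)}
    return I

# A path on 6 vertices:
print(luby_mis(range(6), [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]))
```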
The following algorithm is better than the previous one in that at least one new node is always added in each connected component. - -Typically, the structure of the algorithm given follows other parallel graph algorithms - that is, they subdivide the graph into smaller local problems that are solvable in parallel by running an identical algorithm. - -Initial research into the maximal independent set problem started on the PRAM model and has since expanded to produce results for distributed algorithms on computer clusters. The many challenges of designing distributed parallel algorithms apply equally to the maximal independent set problem. In particular, it is a challenge to find an algorithm that exhibits efficient runtime and is optimal in data communication when subdividing the graph and merging the independent set. - -It was shown in 1984 by Karp et al. that a deterministic parallel solution on PRAM to the maximal independent set belonged to the complexity class $NC^4$. That is to say, their algorithm finds a maximal independent set in $O(\log^4 n)$ time using $O((n/\log n)^3)$ processors, where $n$ is the vertex set size. In the same paper, a randomized parallel solution was also provided with a runtime of $O(\log^4 n)$ using $O(n^2)$ processors. Shortly after, Luby and Alon et al. independently improved on this result, bringing the maximal independent set problem into the realm of $NC^2$ with an $O(\log^2 n)$ runtime using $O(mn^2)$ processors, where $m$ is the number of edges in the graph. In order to show that their algorithm is in $NC^2$, they initially presented a randomized algorithm that uses $O(m)$ processors but could be derandomized with an additional $O(n^2)$ processors. Today, it remains an open question as to whether the maximal independent set problem is in $NC^1$. - -Distributed maximal independent set algorithms are strongly influenced by algorithms on the PRAM model. The original work by Luby and Alon et al. has led to several distributed algorithms. In terms of exchange of bits, these algorithms required a message size of $O(\log n)$ bits per round and would require additional characteristics of the graph. For example, the size of the graph would need to be known, or the maximum degree of neighbouring vertices for a given vertex could be queried. In 2010, Métivier et al. reduced the required message size per round to $O(1)$, which is optimal, and removed the need for any additional graph knowledge. diff --git a/wiki/wikipedia/2857.txt deleted file mode 100644 index 8e19429cbb5c5f7d58db502628b5f3d60ab606c3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2857.txt +++ /dev/null @@ -1,94 +0,0 @@ -In mathematics, the Schwarz lemma, named after Hermann Amandus Schwarz, is a result in complex analysis about holomorphic functions from the open unit disk to itself. The lemma is less celebrated than deeper theorems, such as the Riemann mapping theorem, which it helps to prove. It is, however, one of the simplest results capturing the rigidity of holomorphic functions. - -Let $\mathbf{D} = \{z : |z| < 1\}$ be the open unit disk in the complex plane $\mathbb{C}$ centered at the origin, and let $f : \mathbf{D}\rightarrow \mathbb{C}$ be a holomorphic map such that $f(0) = 0$ and $|f(z)|\leq 1$ on $\mathbf{D}$.
- -Then $|f(z)| \leq |z|$ for all $z \in \mathbf{D}$, and $|f'(0)| \leq 1$. - -Moreover, if $|f(z)| = |z|$ for some non-zero $z$ or $|f'(0)| = 1$, then $f(z) = az$ for some $a \in \mathbb{C}$ with $|a| = 1$. - -The proof is a straightforward application of the maximum modulus principle on the function - -g(z) = \begin{cases} - -\frac{f(z)}{z} & \mbox{if } z \neq 0 \\ - -f'(0) & \mbox{if } z = 0, - -\end{cases} - -which is holomorphic on the whole of $D$, including at the origin (because $f$ is differentiable at the origin and fixes zero). Now if $D_r = \{z : |z| \le r\}$ denotes the closed disk of radius $r$ centered at the origin, then the maximum modulus principle implies that, for $r < 1$, given any $z \in D_r$, there exists $z_r$ on the boundary of $D_r$ such that -$$ - |g(z)| \le |g(z_r)| = \frac{|f(z_r)|}{|z_r|} \le \frac{1}{r}. -$$ - -As $r \rightarrow 1$ we get $|g(z)| \leq 1$. - -Moreover, suppose that $|f(z)| = |z|$ for some non-zero $z \in D$, or $|f'(0)| = 1$. Then, $|g(z)| = 1$ at some point of $D$. So by the maximum modulus principle, $g(z)$ is equal to a constant $a$ such that $|a| = 1$. Therefore, $f(z) = az$, as desired. - -A variant of the Schwarz lemma, known as the Schwarz-Pick theorem (after Georg Pick), characterizes the analytic automorphisms of the unit disc, i.e. bijective holomorphic mappings of the unit disc to itself: - -Let f : D → D be holomorphic. Then, for all z1, z2 ∈ D, -$$ -\left|\frac{f(z_1)-f(z_2)}{1-\overline{f(z_1)}f(z_2)}\right| \le \left|\frac{z_1-z_2}{1-\overline{z_1}z_2}\right| -$$ - -and, for all z ∈ D, -$$ -\frac{\left|f'(z)\right|}{1-\left|f(z)\right|^2} \le \frac{1}{1-\left|z\right|^2}. -$$ - -The expression -$$ - d(z_1,z_2)=\tanh^{-1} \left|\frac{z_1-z_2}{1-\overline{z_1}z_2}\right| -$$ - -is the distance of the points z1, z2 in the Poincaré metric, i.e. the metric in the Poincaré disc model for hyperbolic geometry in dimension two. The Schwarz-Pick theorem then essentially states that a holomorphic map of the unit disk into itself decreases the distance of points in the Poincaré metric. If equality holds throughout in one of the two inequalities above (which is equivalent to saying that the holomorphic map preserves the distance in the Poincaré metric), then f must be an analytic automorphism of the unit disc, given by a Möbius transformation mapping the unit disc to itself. - -An analogous statement on the upper half-plane H can be made as follows:
Let f : H → H be holomorphic. Then, for all z1, z2 ∈ H, -$$ -\left|\frac{f(z_1)-f(z_2)}{\overline{f(z_1)}-f(z_2)}\right|\le \frac{\left|z_1-z_2\right|}{\left|\overline{z_1}-z_2\right|}. -$$
This is an easy consequence of the Schwarz-Pick theorem mentioned above: one just needs to remember that the Cayley transform W(z) = (z − i)/(z + i) maps the upper half-plane H conformally onto the unit disc D. Then, the map W o f o W−1 is a holomorphic map from D onto D. Using the Schwarz-Pick theorem on this map, and finally simplifying the results by using the formula for W, we get the desired result. Also, for all z ∈ H, -$$ -\frac{\left|f'(z)\right|}{\text{Im}(f(z))} \le \frac{1}{\text{Im}(z)}. -$$ - -If equality holds for one or the other expression, then f must be a Möbius transformation with real coefficients. That is, if equality holds, then -$$ -f(z)=\frac{az+b}{cz+d} -$$ - -with a, b, c, d ∈ R, and ad − bc > 0. - -The proof of the Schwarz-Pick theorem follows from Schwarz's lemma and the fact that a Möbius transformation of the form -$$ -\frac{z-z_0}{\overline{z_0}z-1}, \qquad |z_0| < 1, -$$ - -maps the unit circle to itself. Fix z1 and define the Möbius transformations -$$ -M(z)=\frac{z_1-z}{1-\overline{z_1}z}, \qquad \varphi(z)=\frac{f(z_1)-z}{1-\overline{f(z_1)}z}. -$$ - -Since M(z1) = 0 and the Möbius transformation is invertible, the composition φ(f(M−1(z))) maps 0 to 0 and the unit disk is mapped into itself. Thus we can apply Schwarz's lemma, which is to say -$$ -\left |\varphi\left(f(M^{-1}(z))\right) \right|=\left|\frac{f(z_1)-f(M^{-1}(z))}{1-\overline{f(z_1)}f(M^{-1}(z))}\right| \le |z|. -$$ - -Now calling z2 = M−1(z) (which will still be in the unit disk) yields the desired conclusion -$$ -\left|\frac{f(z_1)-f(z_2)}{1-\overline{f(z_1)}f(z_2)}\right| \le \left|\frac{z_1-z_2}{1-\overline{z_1}z_2}\right|. -$$ - -To prove the second part of the theorem, we rearrange the left-hand side into the difference quotient and let z2 tend to z1. - -The Schwarz–Ahlfors–Pick theorem provides an analogous theorem for hyperbolic manifolds. - -De Branges' theorem, formerly known as the Bieberbach Conjecture, is an important extension of the lemma, giving restrictions on the higher derivatives of f at 0 in case f is injective; that is, univalent. - -The Koebe 1/4 theorem provides a related estimate in the case that f is univalent. diff --git a/wiki/wikipedia/2858.txt deleted file mode 100644 index 8eca9959fa25ab1103276ffa05c400c9ae271dfb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2858.txt +++ /dev/null @@ -1,391 +0,0 @@ -In mathematics, infinite compositions of analytic functions (ICAF) offer alternative formulations of analytic continued fractions, series, products and other infinite expansions, and the theory evolving from such compositions may shed light on the convergence/divergence of these expansions. Some functions can actually be expanded directly as infinite compositions. In addition, it is possible to use ICAF to evaluate solutions of fixed point equations involving infinite expansions. Complex dynamics offers another venue for iteration of systems of functions rather than a single function. For infinite compositions of a single function see Iterated function. For compositions of a finite number of functions, useful in fractal theory, see Iterated function system. - -Although the title of this article specifies analytic functions, there are results for more general functions of a complex variable as well.
- -There are several notations describing infinite compositions, including the following: - -Forward compositions: $F_{k,n}(z) = f_k \circ f_{k + 1} \circ \dots \circ f_{n - 1} \circ f_n (z).$ - -Backward compositions: $G_{k,n}(z) = f_n \circ f_{n - 1} \circ \dots \circ f_{k + 1} \circ f_k (z).$ - -In each case convergence is interpreted as the existence of the following limits: -$$ - \lim_{n\to \infty} F_{1,n}(z), \qquad \lim_{n\to\infty} G_{1,n}(z). -$$ - -For convenience, set Fn(z) = F1,n(z) and Gn(z) = G1,n(z). - -One may also write $F_n(z)=\underset{k=1}{\overset{n}{\mathop R}}f_k(z)=f_1 \circ f_2\circ \cdots \circ f_n(z)$ and -$$ -G_n(z)=\underset{k=1}{\overset{n}{\mathop L}}g_k(z)=g_n \circ g_{n-1}\circ \cdots \circ g_1(z) -$$ - -Many results can be considered extensions of the following result: - -Let f be analytic in a simply-connected region S and continuous on the closure $\bar{S}$ of S. Suppose f(S) is a bounded set contained in S. Then for all z in S there exists an attractive fixed point α of f in S such that: - -F_n(z)=(f\circ f\circ \cdots \circ f)(z)\to \alpha. - -Let {fn} be a sequence of functions analytic on a simply-connected domain S. Suppose there exists a compact set Ω ⊂ S such that for each n, fn(S) ⊂ Ω. - -{Fn} converges uniformly on compact subsets of S to a constant function F(z) = λ. - -{Gn} converges uniformly on compact subsets of S to γ ∈ Ω if and only if the sequence of fixed points {γn} of the {fn} converges to γ. - -Additional theory resulting from investigations based on these two theorems, particularly the Forward Compositions Theorem, includes location analysis for the limits obtained here. A different approach to the Backward Compositions Theorem has also been given. - -Regarding the Backward Compositions Theorem, the example $f_{2n}(z) = 1/2$ and $f_{2n-1}(z) = -1/2$ for S = {z : |z| < 1} demonstrates that simply requiring contraction into a compact subset, as in the Forward Compositions Theorem, is inadequate. - -For functions not necessarily analytic the Lipschitz condition suffices: - -Suppose $S$ is a simply connected compact subset of $\Complex$ and let $t_n : S \to S$ be a family of functions that satisfies - - \forall n, \forall z_1, z_2 \in S, \exists \rho: \quad \left|t_n(z_1)-t_n(z_2) \right|\le \rho |z_1-z_2|, \quad \rho < 1. - -Define: - -\begin{align} - -G_n(z) &= \left (t_n\circ t_{n-1}\circ \cdots \circ t_1 \right ) (z) \\ - -F_n(z) &= \left (t_1 \circ t_2\circ \cdots \circ t_n \right ) (z) - -\end{align} - -Then $F_n(z)\to \beta \in S$ uniformly on $S.$ If $\alpha_n$ is the unique fixed point of $t_n$ then $G_n(z)\to \alpha $ uniformly on $S$ if and only if $|\alpha_n -\alpha| = \varepsilon_n \to 0$. - -Results involving entire functions include the following, as examples. Set - -\begin{align} - -f_n(z)&=a_n z + c_{n,2}z^2+c_{n,3} z^3+\cdots \\ - -\rho_n &= \sup_r \left\{ \left| c_{n,r} \right|^{\frac{1}{r-1}} \right\} - -\end{align} - -Then the following results hold: - -If an ≡ 1, - -\sum_{n=1}^\infty \rho_n < \infty - -then Fn → F is entire. - -Set $\varepsilon_n = |a_n - 1|$ and suppose there exist non-negative $\delta_n$, $M_1$, $M_2$, $R$ such that the following hold: - -\begin{align} - -\sum_{n=1}^\infty \varepsilon_n &< \infty, \\ - -\sum_{n=1}^\infty \delta_n &< \infty, \\ - -\prod_{n=1}^\infty (1+\delta_n) &< M_1, \\ - -\prod_{n=1}^\infty (1+\varepsilon_n) &< M_2, \\ - -\rho_n &< \frac{\delta_n}{R M_1 M_2}. - -\end{align} - -Then Gn(z) → G(z) is analytic for |z| < R. Convergence is uniform on compact subsets of {z : |z| < R}.
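As a quick numerical illustration of the Forward Compositions Theorem stated above, the following sketch was added here (the particular contractions $f_k(z) = z/2 + 4^{-k}$ are an arbitrary choice that maps the closed unit disk into a compact subset of itself); the inner compositions tend to a constant independent of the starting point:

```python
# Forward composition F_n(z) = f_1( f_2( ... f_n(z) ... ) ) for the
# contractions f_k(z) = z/2 + 1/4**k; the limit is the constant 2/7.
def f(k, z):
    return z / 2 + 1 / 4**k

def F(n, z):
    for k in range(n, 0, -1):      # the innermost function f_n is applied first
        z = f(k, z)
    return z

for z0 in (0j, 0.9j, -0.5 + 0.3j):
    print(z0, F(60, z0))           # all three values agree to ~16 digits
```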
- -Additional elementary results include: - -Suppose $f_k(z)=z+\rho_k \varphi_k (z)$ where there exist $R, M > 0$ such that $|z| < R$ implies $| \varphi_k (z)| < M, \forall k$. Furthermore, suppose $\rho_k \ge 0, \sum_{k=1}^\infty \rho_k < \infty $ and $R > M\sum_{k=1}^\infty \rho_k.$ Then for $R^* < R-M\sum_{k=1}^\infty \rho_k$ - -G_n(z)\equiv \left (f_n\circ f_{n-1}\circ \cdots \circ f_1 \right ) (z) \to G(z) \qquad \text{ for } \{z:|z| < R^* \} - -Suppose $f_k(z)=z+\rho_k \varphi_k (z)$ where there exist $R, M > 0$ such that $|z| < R$ implies $| \varphi_k (z)| < M, \forall k$. Furthermore, suppose $\rho_k \ge 0, \sum_{k=1}^\infty \rho_k < \infty$ and $R > M \sum_{k=1}^\infty \rho_k.$ Then for $R^* < R-M\sum_{k=1}^\infty\rho_k$ - -F_n(z)\equiv \left (f_1\circ f_2\circ \cdots \circ f_n \right) (z) \to F(z) \qquad \text{ for } \{z:|z|< R^* \} - -Let $f_n(z)=z+ zg_n(z),$ analytic for |z| < R0, with |gn(z)| ≤ Cβn, - -\sum_{n=1}^\infty \beta_n<\infty. - -Choose 0 < r < R0 and define - -R=R(r)=\frac{R_0-r}{\prod_{n=1}^\infty (1+C\beta_n)}. - -Then Fn → F uniformly for |z| ≤ R. Furthermore, - -\left| F'(z) \right|\le \prod_{n=1}^\infty \left( 1+\tfrac{R_0}{r} C\beta_n \right). - -Example GF1: $F_{40}(x+iy)=\underset{k=1}{\overset{40}{\mathop R}} \left( \frac{x+iy}{1+\tfrac{1}{4^k} (x\cos(y)+iy\sin(x))} \right),\qquad[-20,20]$ - -Example GF2: $ G_{40}(x+iy)=\underset{k=1}{\overset{40}{\mathop L}}\left( \frac{x+iy}{1+\tfrac{1}{2^k} (x\cos(y)+iy\sin(x))} \right), \ [-20,20] $ - -Results for compositions of linear fractional (Möbius) transformations include the following, as examples: - -On the set of convergence of a sequence {Fn} of non-singular LFTs, the limit function is either:
      - -
    1. a non-singular LFT,
    2. - -
    3. a function taking on two distinct values, or
    4. - -
    5. a constant.
    6. - -
    - -In (a), the sequence converges everywhere in the extended plane. In (b), the sequence converges either everywhere, and to the same value everywhere except at one point, or it converges at only two points. Case (c) can occur with every possible set of convergence. - -If {Fn} converges to an LFT, then fn converge to the identity function f(z) = z. - -If fn → f and all functions are hyperbolic or loxodromic Möbius transformations, then Fn(z) → λ, a constant, for all $z \ne \beta = \lim_{n\to \infty} \beta_n$, where {βn} are the repulsive fixed points of the {fn}. - -If fn → f where f is parabolic with fixed point γ. Let the fixed-points of the {fn} be {γn} and {βn}. If - -\sum_{n=1}^\infty \left|\gamma_n-\beta_n \right| < \infty \quad \text{and} \quad \sum_{n=1}^\infty n \left|\beta_{n+1}-\beta_n \right|<\infty - -then Fn(z) → λ, a constant in the extended complex plane, for all z. - -The value of the infinite continued fraction -$$ -\cfrac{a_1}{b_1+\cfrac{a_2}{b_2+\cdots}} -$$ - -may be expressed as the limit of the sequence {Fn(0)} where -$$ -f_n(z)=\frac{a_n}{b_n+z}. -$$ - -As a simple example, a well-known result (Worpitsky Circle*) follows from an application of Theorem (A): - -Consider the continued fraction -$$ -\cfrac{a_1\zeta }{1+\cfrac{a_2\zeta }{1+\cdots}} -$$ - -with -$$ -f_n(z)=\frac{a_n \zeta }{1+z}. -$$ - -Stipulate that |ζ| < 1 and |z| < R < 1. Then for 0 < r < 1, -$$ -|a_n| 1 \\ \phi(0) = 0 \\ \phi'(0) =1 \end{cases} -$$ - -Then -$$ -f_n(z)=z+\frac{z^2}{t^n}\Longrightarrow F_n(z)\to \phi (z) -$$. - -Example 2. -$$ -f_n(z)=z+\frac{z^2}{2^n}\Longrightarrow F_n(z)\to \frac{1}{2}\left( e^{2z}-1 \right) -$$ - -Example 3. -$$ -f_n(z)= \frac{z}{1-\tfrac{z^2}{4^n}}\Longrightarrow F_n(z)\to \tan (z) -$$ - -Example 4. -$$ -g_n(z)=\frac{2 \cdot 4^{n}}{z} \left ( \sqrt{1+\frac{z^2}{4^{n}}}-1 \right )\Longrightarrow G_n(z) \to \arctan (z) -$$ - -Theorem (B) can be applied to determine the fixed-points of functions defined by infinite expansions or certain integrals. The following examples illustrate the process: - -Example FP1. For |ζ| ≤ 1 let -$$ -G(\zeta )=\frac{ \tfrac{e^\zeta}{4}}{3+\zeta +\cfrac{\tfrac{e^\zeta} 8 }{3+\zeta + \cfrac{\tfrac{e^\zeta}{12}}{3+\zeta +\cdots}}} -$$ - -To find α = G(α), first we define: - -\begin{align} - -t_n(z)&=\cfrac{\tfrac{e^\zeta}{4n}}{3+\zeta +z} \\ - -f_n(\zeta )&= t_1\circ t_2\circ \cdots \circ t_n(0) - -\end{align} - -Then calculate $G_n(\zeta )=f_n\circ \cdots \circ f_1(\zeta )$ with ζ = 1, which gives: α = 0.087118118... to ten decimal places after ten iterations. - -Let φ(ζ, t) be analytic in S = {z : |z| < R} for all t in [0, 1] and continuous in t. Set - -f_n (\zeta)=\frac{1}{n} \sum_{k=1}^n \varphi \left( \zeta ,\tfrac{k}{n} \right). - -If |φ(ζ, t)| ≤ r < R for ζ ∈ S and t ∈ [0, 1], then - -\zeta =\int_0^1 \varphi (\zeta ,t) dt - -has a unique solution, α in S, with $\lim_{n\to \infty} G_n(\zeta ) = \alpha. $ - -Consider a time interval, normalized to I = [0, 1]. ICAFs can be constructed to describe continuous motion of a point, z, over the interval, but in such a way that at each "instant" the motion is virtually zero (see Zeno's Arrow): For the interval divided into n equal subintervals, 1 ≤ k ≤ n set $g_{k,n}(z)=z+\varphi_{k,n}(z)$ analytic or simply continuous – in a domain S, such that -$$ -\lim_{n\to \infty}\varphi_{k,n}(z)=0 -$$ for all k and all z in S, - -and $g_{k,n}(z) \in S$. 
- -\begin{align} - -g_{k,n}(z) &=z+\frac{1}{n}\phi \left (z,\tfrac{k}{n} \right ) \\ - -G_{k,n}(z) &= \left (g_{k,n}\circ g_{k-1,n} \circ \cdots \circ g_{1,n} \right ) (z) \\ - -G_n(z) &=G_{n,n}(z) - -\end{align} - -implies -$$ -\lambda_n(z)\doteq G_n(z)-z=\frac{1}{n}\sum_{k=1}^n \phi \left( G_{k-1,n}(z),\tfrac k n \right)\doteq \frac 1 n \sum_{k=1}^n \psi \left (z,\tfrac{k}{n} \right) \sim \int_0^1 \psi (z,t)dt, -$$ - -where the integral is well-defined if $\tfrac{dz}{dt}=\phi (z,t)$ has a closed-form solution z(t). Then -$$ -\lambda_n(z_0)\approx \int_0^1 \phi ( z(t),t)dt. -$$ - -Otherwise, the integrand is poorly defined although the value of the integral is easily computed. In this case one might call the integral a "virtual" integral. - -Example. $\phi (z,t)=\frac{2t-\cos y}{1-\sin x\cos y}+i\frac{1-2t\sin x}{1-\sin x\cos y}$, with the corresponding virtual integral $\int_0^1 \psi (z,t) dt$ - -Example. Let: -$$ -g_n(z)=z+\frac{c_n}{n}\phi (z), \quad \text{with} \quad f(z) = z + \phi(z). -$$ - -Next, set $T_{1,n}(z)=g_n(z), T_{k,n}(z)= g_n(T_{k-1,n}(z)),$ and Tn(z) = Tn,n(z). Let -$$ -T(z)=\lim_{n\to \infty} T_n(z) -$$ - -when that limit exists. The sequence {Tn(z)} defines contours γ = γ(cn, z) that follow the flow of the vector field f(z). If there exists an attractive fixed point α, meaning |f(z) − α| ≤ ρ|z − α| for 0 ≤ ρ < 1, then Tn(z) → T(z) ≡ α along γ = γ(cn, z), provided (for example) $c_n = \sqrt{n}$. If cn ≡ c > 0, then Tn(z) → T(z), a point on the contour γ = γ(c, z). It is easily seen that -$$ -\oint_\gamma \phi (\zeta ) d\zeta =\lim_{n\to \infty}\frac c n \sum_{k=1}^n \phi ^2 \left (T_{k-1,n}(z) \right ) -$$ - -and -$$ -L(\gamma (z))=\lim_{n\to \infty} \frac{c}{n}\sum_{k=1}^n \left| \phi \left (T_{k-1,n}(z) \right ) \right|, -$$ - -when these limits exist. - -These concepts are marginally related to active contour theory in image processing, and are simple generalizations of the Euler method. - -The series defined recursively by fn(z) = z + gn(z) have the property that the nth term is predicated on the sum of the first n − 1 terms. In order to employ theorem (GF3) it is necessary to show boundedness in the following sense: If each fn is defined for |z| < M then |Gn(z)| < M must follow before |fn(z) − z| = |gn(z)| ≤ Cβn is defined for iterative purposes. This is because $g_n(G_{n-1}(z))$ occurs throughout the expansion. The restriction -$$ -|z| \le R = M - C\sum_{k=1}^\infty \beta_k > 0 -$$ - -serves this purpose. Then Gn(z) → G(z) uniformly on the restricted domain. - -Example (S1). Set -$$ -f_n(z)=z+\frac{1}{\rho n^2}\sqrt{z}, \qquad \rho >\sqrt{\frac{\pi}{6}} -$$ - -and $M = \rho^2$. Then $R = \rho^2 - \pi/6 > 0$. Then, if $S=\left\{ z: |z| \le R \right\}$, z in S implies |Gn(z)| < M and theorem (GF3) applies, so that - -\begin{align} - -G_n(z) &=z+g_1(z)+g_2(G_1(z))+g_3(G_2(z))+\cdots + g_n(G_{n-1}(z)) \\ - -&= z+\frac{1}{\rho \cdot 1^2}\sqrt{z}+\frac{1}{\rho \cdot 2^2}\sqrt{G_1(z)}+\frac{1}{\rho \cdot 3^2}\sqrt{G_2(z)}+\cdots +\frac{1}{\rho \cdot n^2} \sqrt{G_{n-1}(z)} - -\end{align} - -converges absolutely, hence is convergent. - -Example (S2): $f_n(z)=z+\frac 1 {n^2}\cdot \varphi (z), \varphi (z)=2\cos(x/y)+i2\sin (x/y), G_n(z)=f_n \circ f_{n-1}\circ \cdots \circ f_1(z), \qquad [-10,10], n=50 $ - -The product defined recursively by -$$ -f_n(z)=z( 1+g_n(z)), \qquad |z| \leqslant M, -$$ - -has the appearance -$$ -G_n(z) = z \prod _{k=1}^n \left( 1+g_k \left( G_{k-1}(z) \right) \right). -$$ - -In order to apply Theorem GF3 it is required that: -$$ -\left| zg_n(z) \right|\le C\beta_n, \qquad \sum_{k=1}^\infty \beta_k<\infty.
-$$ - -Once again, a boundedness condition must support -$$ -\left|G_{n-1}(z) g_n(G_{n-1}(z))\right|\le C \beta_n. -$$ - -If one knows Cβn in advance, the following will suffice: -$$ -|z| \leqslant R = \frac{M}{P} \qquad \text{where} \quad P = \prod_{n=1}^\infty \left( 1+C\beta_n\right). -$$ - -Then Gn(z) → G(z) uniformly on the restricted domain. - -Example (P1). Suppose $f_n(z)=z(1+g_n(z))$ with $g_n(z)=\tfrac{z^2}{n^3},$ observing after a few preliminary computations, that |z| ≤ 1/4 implies |Gn(z)| < 0.27. Then -$$ -\left|G_n(z) \frac{G_n(z)^2}{n^3} \right|<(0.02)\frac{1}{n^3}=C\beta_n -$$ - -and -$$ -G_n(z)=z \prod_{k=1}^{n-1}\left( 1+\frac{G_k(z)^2}{n^3}\right) -$$ - -converges uniformly. - -Example (P2). -$$ -g_{k,n}(z)=z\left( 1+\frac 1 n \varphi \left (z,\tfrac k n \right ) \right), -$$ -$$ -G_{n,n}(z)= \left( g_{n,n}\circ g_{n-1,n}\circ \cdots \circ g_{1,n} \right ) (z) = z\prod_{k=1}^n ( 1+P_{k,n}(z)), -$$ -$$ -P_{k,n}(z)=\frac 1 n \varphi \left (G_{k-1,n}(z),\tfrac{k}{n} \right ), -$$ -$$ -\prod_{k=1}^{n-1} \left( 1+P_{k,n}(z) \right) = 1+P_{1,n}(z)+P_{2,n}(z)+\cdots + P_{k-1,n}(z) + R_n(z) \sim \int_0^1 \pi (z,t) dt + 1+R_n(z), -$$ -$$ -\varphi (z)=x\cos(y)+iy\sin(x), \int_0^1 (z\pi (z,t)-1) dt, \qquad [-15,15]: -$$ - -Example (CF1): A self-generating continued fraction. - -\begin{align} - -F_n(z) &= \frac{\rho (z)}{\delta_1+} \frac{\rho (F_1(z))}{\delta_2 +} \frac{\rho (F_2(z))}{\delta_3+} \cdots \frac{\rho (F_{n-1}(z))}{\delta_n}, \\ - -\rho (z) &= \frac{\cos(y)}{\cos (y)+\sin (x)}+i\frac{\sin(x)}{\cos (y)+\sin (x)}, \qquad [0 - -Example (CF2): Best described as a self-generating reverse Euler continued fraction. -$$ -G_n(z)=\frac{\rho (G_{n-1}(z))}{1+\rho (G_{n-1}(z))-}\ \frac{\rho (G_{n-2}(z))}{1+\rho (G_{n-2}(z))-}\cdots \frac{\rho (G_1(z))}{1+\rho (G_1(z))-}\ \frac{\rho (z)}{1+\rho (z)-z}, -$$ -$$ -\rho (z)=\rho (x+iy)=x\cos(y)+iy\sin(x), \qquad [-15,15], n=30 -$$ diff --git a/wiki/wikipedia/2859.txt b/wiki/wikipedia/2859.txt deleted file mode 100644 index 80834a250cfcd5fe3e36849e664bb7fec9df6d3e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2859.txt +++ /dev/null @@ -1,12 +0,0 @@ -In mathematics, Nagao's theorem, named after Hirosi Nagao, is a result about the structure of the group of 2-by-2 invertible matrices over the ring of polynomials over a field. It has been extended by Serre to give a description of the structure of the corresponding matrix group over the coordinate ring of a projective curve. - -For a general ring R we let GL2(R) denote the group of invertible 2-by-2 matrices with entries in R, and let R* denote the group of units of R, and let -$$ - B(R) = \left\lbrace{ \left({\begin{array}{*{20}c} a & b \\ 0 & d \end{array}}\right) : a,d \in R^*, ~ b \in R }\right\rbrace. -$$ - -Then B(R) is a subgroup of GL2(R). - -Nagao's theorem states that in the case that R is the ring K[t] of polynomials in one variable over a field K, the group GL2(R) is the amalgamated product of GL2(K) and B(K[t]) over their intersection B(K). - -In this setting, C is a smooth projective curve C over a field K. For a closed point P of C let R be the corresponding coordinate ring of C with P removed. There exists a graph of groups (G,T) where T is a tree with at most one non-terminal vertex, such that GL2(R) is isomorphic to the fundamental group π1(G,T). 
diff --git a/wiki/wikipedia/286.txt deleted file mode 100644 index 123877c7887de17395be173e354ece560241701c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/286.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, Ihara's lemma, introduced by Ihara and named by Ribet, states that the kernel of the sum of the two p-degeneracy maps from J0(N)×J0(N) to J0(Np) is Eisenstein whenever the prime p does not divide N. Here J0(N) is the Jacobian of the compactification of the modular curve of Γ0(N). diff --git a/wiki/wikipedia/2860.txt deleted file mode 100644 index 13d236ec7570ad33d7e294d754743a52afd47899..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2860.txt +++ /dev/null @@ -1,588 +0,0 @@ -In mathematical logic, sequent calculus is a style of formal logical argumentation in which every line of a proof is a conditional tautology (called a sequent by Gerhard Gentzen) instead of an unconditional tautology. Each conditional tautology is inferred from other conditional tautologies on earlier lines in a formal argument according to rules and procedures of inference, giving a better approximation to the natural style of deduction used by mathematicians than David Hilbert's earlier style of formal logic, in which every line was an unconditional tautology. More subtle distinctions may exist; for example, propositions may implicitly depend upon non-logical axioms. In that case, sequents signify conditional theorems in a first-order language rather than conditional tautologies. - -Sequent calculus is one of several extant styles of proof calculus for expressing line-by-line logical arguments. - -* Hilbert style. Every line is an unconditional tautology (or theorem). - -* Gentzen style. Every line is a conditional tautology (or theorem) with zero or more conditions on the left. - -** Natural deduction. Every (conditional) line has exactly one asserted proposition on the right. - -** Sequent calculus. Every (conditional) line has zero or more asserted propositions on the right. - -In other words, natural deduction and sequent calculus systems are particular distinct kinds of Gentzen-style systems. Hilbert-style systems typically have a very small number of inference rules, relying more on sets of axioms. Gentzen-style systems typically have very few axioms, if any, relying more on sets of rules. - -Gentzen-style systems have significant practical and theoretical advantages compared to Hilbert-style systems. For example, both natural deduction and sequent calculus systems facilitate the elimination and introduction of universal and existential quantifiers so that unquantified logical expressions can be manipulated according to the much simpler rules of propositional calculus. In a typical argument, quantifiers are eliminated, then propositional calculus is applied to unquantified expressions (which typically contain free variables), and then the quantifiers are reintroduced. This very much parallels the way in which mathematical proofs are carried out in practice by mathematicians. Predicate calculus proofs are generally much easier to discover with this approach, and are often shorter. Natural deduction systems are more suited to practical theorem-proving. Sequent calculus systems are more suited to theoretical analysis. - -In proof theory and mathematical logic, sequent calculus is a family of formal systems sharing a certain style of inference and certain formal properties.
The first sequent calculi systems, LK and LJ, were introduced in 1934/1935 by Gerhard Gentzen as a tool for studying natural deduction in first-order logic (in classical and intuitionistic versions, respectively). Gentzen's so-called "Main Theorem" (Hauptsatz) about LK and LJ was the cut-elimination theorem, a result with far-reaching meta-theoretic consequences, including consistency. Gentzen further demonstrated the power and flexibility of this technique a few years later, applying a cut-elimination argument to give a (transfinite) proof of the consistency of Peano arithmetic, in surprising response to Gödel's incompleteness theorems. Since this early work, sequent calculi, also called Gentzen systems, and the general concepts relating to them, have been widely applied in the fields of proof theory, mathematical logic, and automated deduction. - -One way to classify different styles of deduction systems is to look at the form of judgments in the system, i.e., which things may appear as the conclusion of a (sub)proof. The simplest judgment form is used in Hilbert-style deduction systems, where a judgment has the form -$$ -B -$$ - -where $B$ is any formula of first-order logic (or whatever logic the deduction system applies to, e.g., propositional calculus or a higher-order logic or a modal logic). The theorems are those formulae that appear as the concluding judgment in a valid proof. A Hilbert-style system needs no distinction between formulae and judgments; we make one here solely for comparison with the cases that follow. - -The price paid for the simple syntax of a Hilbert-style system is that complete formal proofs tend to get extremely long. Concrete arguments about proofs in such a system almost always appeal to the deduction theorem. This leads to the idea of including the deduction theorem as a formal rule in the system, which happens in natural deduction. - -In natural deduction, judgments have the shape -$$ -A_1, A_2, \ldots, A_n \vdash B -$$ - -where the $A_i$'s and $B$ are again formulae and $n\geq 0$. Permutations of the $A_i$'s are immaterial. In other words, a judgment consists of a list (possibly empty) of formulae on the left-hand side of a turnstile symbol "$\vdash$", with a single formula on the right-hand side. The theorems are those formulae $B$ such that $\vdash B$ (with an empty left-hand side) is the conclusion of a valid proof. - -(In some presentations of natural deduction, the $A_i$s and the turnstile are not written down explicitly; instead a two-dimensional notation from which they can be inferred is used.) - -The standard semantics of a judgment in natural deduction is that it asserts that whenever $A_1$, $A_2$, etc., are all true, $B$ will also be true. The judgments -$$ -A_1, \ldots, A_n \vdash B -$$ - -and -$$ -\vdash (A_1 \land \cdots \land A_n) \rightarrow B -$$ - -are equivalent in the strong sense that a proof of either one may be extended to a proof of the other. - -Finally, sequent calculus generalizes the form of a natural deduction judgment to -$$ -A_1, \ldots, A_n \vdash B_1, \ldots, B_k, -$$ - -a syntactic object called a sequent. The formulas on left-hand side of the turnstile are called the antecedent, and the formulas on right-hand side are called the succedent or consequent; together they are called cedents or sequents. Again, $A_i$ and $B_i$ are formulae, and $n$ and $k$ are nonnegative integers, that is, the left-hand-side or the right-hand-side (or neither or both) may be empty. 
As in natural deduction, theorems are those $B$ where $\vdash B$ is the conclusion of a valid proof. - -The standard semantics of a sequent is an assertion that whenever every $ A_i$ is true, at least one $B_i$ will also be true. Thus the empty sequent, having both cedents empty, is false. One way to express this is that a comma to the left of the turnstile should be thought of as an "and", and a comma to the right of the turnstile should be thought of as an (inclusive) "or". The sequents -$$ -A_1, \ldots, A_n \vdash B_1, \ldots, B_k -$$ - -and -$$ -\vdash (A_1 \land\cdots\land A_n)\rightarrow(B_1 \lor\cdots\lor B_k) -$$ - -are equivalent in the strong sense that a proof of either sequent may be extended to a proof of the other sequent. - -At first sight, this extension of the judgment form may appear to be a strange complication—it is not motivated by an obvious shortcoming of natural deduction, and it is initially confusing that the comma seems to mean entirely different things on the two sides of the turnstile. However, in a classical context the semantics of the sequent can also (by propositional tautology) be expressed either as -$$ -\vdash \neg A_1 \lor \neg A_2 \lor \cdots \lor \neg A_n \lor B_1 \lor B_2 \lor\cdots\lor B_k -$$ - -(at least one of the As is false, or one of the Bs is true) - -or as -$$ -\vdash \neg(A_1 \land A_2 \land \cdots \land A_n \land \neg B_1 \land \neg B_2 \land\cdots\land \neg B_k) -$$ - -(it cannot be the case that all of the As are true and all of the Bs are false). - -In these formulations, the only difference between formulae on either side of the turnstile is that one side is negated. Thus, swapping left for right in a sequent corresponds to negating all of the constituent formulae. This means that a symmetry such as De Morgan's laws, which manifests itself as logical negation on the semantic level, translates directly into a left-right symmetry of sequents—and indeed, the inference rules in sequent calculus for dealing with conjunction (∧) are mirror images of those dealing with disjunction (∨). - -Many logicians feel that this symmetric presentation offers a deeper insight in the structure of the logic than other styles of proof system, where the classical duality of negation is not as apparent in the rules. - -Gentzen asserted a sharp distinction between his single-output natural deduction systems (NK and NJ) and his multiple-output sequent calculus systems (LK and LJ). He wrote that the intuitionistic natural deduction system NJ was somewhat ugly. He said that the special role of the excluded middle in the classical natural deduction system NK is removed in the classical sequent calculus system LK. He said that the sequent calculus LJ gave more symmetry than natural deduction NJ in the case of intuitionistic logic, as also in the case of classical logic (LK versus NK). Then he said that in addition to these reasons, the sequent calculus with multiple succedent formulas is intended particularly for his principal theorem ("Hauptsatz"). - -The word "sequent" is taken from the word "Sequenz" in Gentzen's 1934 paper. 
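For propositional sequents, the semantics just described is finite and can be checked by brute force. The following Python sketch is illustrative only (the encoding of formulas as boolean callables and the function name are ad hoc, not from the article): it declares a sequent valid iff every assignment making all antecedent formulas true makes at least one succedent formula true, so in particular the empty sequent comes out false, as noted above.

```python
# Brute-force semantic check of a propositional sequent A1,...,An |- B1,...,Bk.
from itertools import product

def sequent_holds(antecedent, succedent, atoms):
    """antecedent, succedent: lists of predicates on an assignment dict."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        # a counterexample: all antecedents true, no succedent true
        if all(f(v) for f in antecedent) and not any(g(v) for g in succedent):
            return False
    return True

# p, q |- p  holds; the empty sequent  |-  does not.
print(sequent_holds([lambda v: v['p'], lambda v: v['q']],
                    [lambda v: v['p']], ['p', 'q']))   # True
print(sequent_holds([], [], ['p']))                    # False
```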
-
-Consider the following formula:
-$$
-((p\rightarrow r)\lor (q\rightarrow r))\rightarrow ((p\land q)\rightarrow r)
-$$
-
-This is written in the following form, where the proposition that needs to be proven is to the right of the turnstile symbol $\vdash$:
-$$
-\vdash((p\rightarrow r)\lor (q\rightarrow r))\rightarrow ((p\land q)\rightarrow r)
-$$
-
-Now, instead of proving this from the axioms, it is enough to assume the premise of the implication and then try to prove its conclusion. Hence one moves to the following sequent:
-$$
-(p\rightarrow r)\lor (q\rightarrow r)\vdash (p\land q)\rightarrow r
-$$
-
-Again the right hand side includes an implication, whose premise can further be assumed so that only its conclusion needs to be proven:
-$$
-(p\rightarrow r)\lor (q\rightarrow r), (p\land q)\vdash r
-$$
-
-Since the arguments on the left-hand side are assumed to be related by conjunction, this can be replaced by the following:
-$$
-(p\rightarrow r)\lor (q\rightarrow r), p, q\vdash r
-$$
-
-This is equivalent to proving the conclusion in both cases of the disjunction on the first argument on the left. Thus we may split the sequent into two, where we now have to prove each separately:
-$$
-p\rightarrow r, p, q\vdash r
-$$
-$$
-q\rightarrow r, p, q\vdash r
-$$
-
-In the case of the first judgment, we rewrite $p\rightarrow r$ as $\lnot p \lor r$ and split the sequent again to get:
-$$
-\lnot p, p, q \vdash r
-$$
-$$
-r, p, q \vdash r
-$$
-
-The second sequent is done; the first sequent can be further simplified into:
-$$
-p, q \vdash p, r
-$$
-
-This process can always be continued until there are only atomic formulas on each side.
-
-The process can be graphically described by a rooted tree. The root of the tree is the formula we wish to prove; the leaves consist of atomic formulas only. The tree is known as a reduction tree.
-
-The items to the left of the turnstile are understood to be connected by conjunction, and those to the right by disjunction. Therefore, when both consist only of atomic symbols, the sequent is accepted axiomatically (and always true) if and only if at least one of the symbols on the right also appears on the left.
-
-Following are the rules by which one proceeds along the tree. Whenever one sequent is split into two, the tree vertex has two child vertices, and the tree is branched. Additionally, one may freely change the order of the arguments on each side; Γ and Δ stand for possible additional arguments.
-
-The usual term for the horizontal line used in Gentzen-style layouts for natural deduction is inference line.
-
-Starting with any formula in propositional logic, by a series of steps, the right side of the turnstile can be processed until it includes only atomic symbols. Then, the same is done for the left side. Since every logical operator appears in one of the rules above, and is removed by the rule, the process terminates when no logical operators remain: the formula has been decomposed.
-
-Thus, the sequents in the leaves of the trees include only atomic symbols, which are either provable by the axiom or not, according to whether one of the symbols on the right also appears on the left.
-
-It is easy to see that the steps in the tree preserve the semantic truth value of the formulas implied by them, with conjunction understood between the tree's different branches whenever there is a split. It is also obvious that an axiom is provable if and only if it is true for every assignment of truth values to the atomic symbols.
Thus this system is sound and complete for classical propositional logic. - -Sequent calculus is related to other axiomatizations of propositional calculus, such as Frege's propositional calculus or Jan Łukasiewicz's axiomatization (itself a part of the standard Hilbert system): Every formula that can be proven in these has a reduction tree. - -This can be shown as follows: Every proof in propositional calculus uses only axioms and the inference rules. Each use of an axiom scheme yields a true logical formula, and can thus be proven in sequent calculus; examples for these are shown below. The only inference rule in the systems mentioned above is modus ponens, which is implemented by the cut rule. - -This section introduces the rules of the sequent calculus LK as introduced by Gentzen in 1934. (LK, unintuitively, stands for "klassische Prädikatenlogik".) - -A (formal) proof in this calculus is a sequence of sequents, where each of the sequents is derivable from sequents appearing earlier in the sequence by using one of the rules below. - -The following notation will be used: - -* $\vdash$ known as the turnstile, separates the assumptions on the left from the propositions on the right - -* $A$ and $B$ denote formulae of first-order predicate logic (one may also restrict this to propositional logic), - -* $\Gamma, \Delta, \Sigma$, and $\Pi$ are finite (possibly empty) sequences of formulae (in fact, the order of formulae does not matter; see subsection Structural Rules), called contexts, - -** when on the left of the $\vdash$, the sequence of formulas is considered conjunctively (all assumed to hold at the same time), - -** while on the right of the $\vdash$, the sequence of formulas is considered disjunctively (at least one of the formulas must hold for any assignment of variables), - -* $t$ denotes an arbitrary term, - -* $x$ and $y$ denote variables. - -* a variable is said to occur free within a formula if it occurs outside the scope of quantifiers $\forall$ or $\exists$. - -* $A[t/x]$ denotes the formula that is obtained by substituting the term $t$ for every free occurrence of the variable $x$ in formula $A$ with the restriction that the term $t$ must be free for the variable $x$ in $A$ (i.e., no occurrence of any variable in $t$ becomes bound in $A[t/x]$). - -* $WL$, $WR$, $CL$, $CR$, $PL$, $PR$: These six stand for the two versions of each of three structural rules; one for use on the left ('L') of a $\vdash$, and the other on its right ('R'). The rules are abbreviated 'W' for Weakening (Left/Right), 'C' for Contraction, and 'P' for Permutation. - -Note that, contrary to the rules for proceeding along the reduction tree presented above, the following rules are for moving in the opposite directions, from axioms to theorems. Thus they are exact mirror-images of the rules above, except that here symmetry is not implicitly assumed, and rules regarding quantification are added. - -Restrictions: In the rules $({\forall}R)$ and $({\exists}L)$, the variable $y$ must not occur free anywhere in the respective lower sequents. - -The above rules can be divided into two major groups: logical and structural ones. Each of the logical rules introduces a new logical formula either on the left or on the right of the turnstile $\vdash$. In contrast, the structural rules operate on the structure of the sequents, ignoring the exact shape of the formulae. The two exceptions to this general scheme are the axiom of identity (I) and the rule of (Cut). 
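The reduction-tree procedure for the propositional fragment is mechanical enough to implement directly. Below is a minimal Python sketch (illustrative only; the tuple encoding of formulas and the function name are ad hoc, and quantifiers are out of scope). It applies invertible forms of the propositional rules until only atomic sequents remain, then checks the axiom condition that some atom occurs on both sides; because each step removes one connective, the recursion always terminates.

```python
# Minimal propositional sequent prover via the reduction-tree procedure.
# Formulas: ('atom', name), ('not', A), ('and', A, B), ('or', A, B), ('imp', A, B).
def provable(ant, suc):
    """Return True iff the sequent  ant |- suc  is derivable."""
    for i, f in enumerate(ant):                 # decompose on the left
        rest = ant[:i] + ant[i+1:]
        if f[0] == 'not':
            return provable(rest, suc + [f[1]])
        if f[0] == 'and':
            return provable(rest + [f[1], f[2]], suc)
        if f[0] == 'or':                        # split into two branches
            return provable(rest + [f[1]], suc) and provable(rest + [f[2]], suc)
        if f[0] == 'imp':
            return provable(rest, suc + [f[1]]) and provable(rest + [f[2]], suc)
    for i, f in enumerate(suc):                 # decompose on the right
        rest = suc[:i] + suc[i+1:]
        if f[0] == 'not':
            return provable(ant + [f[1]], rest)
        if f[0] == 'and':
            return provable(ant, rest + [f[1]]) and provable(ant, rest + [f[2]])
        if f[0] == 'or':
            return provable(ant, rest + [f[1], f[2]])
        if f[0] == 'imp':
            return provable(ant + [f[1]], rest + [f[2]])
    # Only atoms remain: axiomatic iff the two sides share an atom.
    return any(a in suc for a in ant)

p, q, r = ('atom', 'p'), ('atom', 'q'), ('atom', 'r')
# The example worked above: ((p -> r) or (q -> r)) -> ((p and q) -> r).
goal = ('imp', ('or', ('imp', p, r), ('imp', q, r)), ('imp', ('and', p, q), r))
print(provable([], [goal]))  # True
```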
-
-Although stated in a formal way, the above rules allow for a very intuitive reading in terms of classical logic. Consider, for example, the rule $({\land}L_1)$. It says that, whenever one can prove that $\Delta$ can be concluded from some sequence of formulae that contain $A$, then one can also conclude $\Delta$ from the (stronger) assumption that $A \land B$ holds. Likewise, the rule $({\neg}R)$ states that, if $\Gamma$ and $A$ suffice to conclude $\Delta$, then from $\Gamma$ alone one can either still conclude $\Delta$ or $A$ must be false, i.e. ${\neg}A$ holds. All the rules can be interpreted in this way.
-
-For an intuition about the quantifier rules, consider the rule $({\forall}R)$. Of course concluding that $\forall{x} A$ holds just from the fact that $A[y/x]$ is true is not in general possible. If, however, the variable y is not mentioned elsewhere (i.e. it can still be chosen freely, without influencing the other formulae), then one may assume that $A[y/x]$ holds for any value of y. The other rules should then be pretty straightforward.
-
-Instead of viewing the rules as descriptions for legal derivations in predicate logic, one may also consider them as instructions for the construction of a proof for a given statement. In this case the rules can be read bottom-up; for example, $({\land}R)$ says that, to prove that $A \land B$ follows from the assumptions $\Gamma$ and $\Sigma$, it suffices to prove that $A$ can be concluded from $\Gamma$ and $B$ can be concluded from $\Sigma$, respectively. Note that, given some antecedent, it is not clear how this is to be split into $\Gamma$ and $\Sigma$. However, there are only finitely many possibilities to be checked since the antecedent by assumption is finite. This also illustrates how proof theory can be viewed as operating on proofs in a combinatorial fashion: given proofs for both $A$ and $B$, one can construct a proof for $A \land B$.
-
-When looking for some proof, most of the rules offer more or less direct recipes of how to do this. The rule of cut is different: it states that, when a formula $A$ can be concluded and this formula may also serve as a premise for concluding other statements, then the formula $A$ can be "cut out" and the respective derivations are joined. When constructing a proof bottom-up, this creates the problem of guessing $A$ (since it does not appear at all below). The cut-elimination theorem is thus crucial to the applications of sequent calculus in automated deduction: it states that all uses of the cut rule can be eliminated from a proof, implying that any provable sequent can be given a cut-free proof.
-
-The second rule that is somewhat special is the axiom of identity (I). The intuitive reading of this is obvious: every formula proves itself. Like the cut rule, the axiom of identity is somewhat redundant: the completeness of atomic initial sequents states that the rule can be restricted to atomic formulas without any loss of provability.
-
-Observe that all rules have mirror companions, except the ones for implication. This reflects the fact that the usual language of first-order logic does not include the "is not implied by" connective $\not\leftarrow$ that would be the De Morgan dual of implication. Adding such a connective with its natural rules would make the calculus completely left-right symmetric.
-
-Here is the derivation of "$ \vdash A \lor \lnot A $", known as the Law of excluded middle (tertium non datur in Latin). In outline: from the axiom $A \vdash A$, the rule $({\lnot}R)$ gives $\vdash A, \lnot A$; the disjunction-right rules turn each formula on the right into $A \lor \lnot A$, giving $\vdash A \lor \lnot A, A \lor \lnot A$; and contraction $(CR)$ yields $\vdash A \lor \lnot A$.
-
-Next is the proof of a simple fact involving quantifiers, namely the sequent $(\exists x)(\forall y) P(x,y) \vdash (\forall y)(\exists x) P(x,y)$. Note that the converse is not true, and its falsity can be seen when attempting to derive it bottom-up, because an existing free variable cannot be used in substitution in the rules $(\forall R)$ and $(\exists L)$.
-
-For something more interesting we shall prove ${\left( \left( A \rightarrow \left( B \lor C \right) \right) \rightarrow \left( \left( \left( B \rightarrow \lnot A \right) \land \lnot C \right) \rightarrow \lnot A \right) \right)}$. It is straightforward to find the derivation, which exemplifies the usefulness of LK in automated proving. The derivation tree, read from the axiom leaves down to the goal, passes through the following sequents, each labelled with the rule that produces it (side premises of the two $({\rightarrow}L)$ steps, closed by identity axioms, are omitted):
-
-$(\lor L)$: $B \lor C \vdash B , C$
-
-$(PR)$: $B \lor C \vdash C , B$
-
-$(\lnot L)$: $B \lor C , \lnot C \vdash B$
-
-$(\rightarrow L)$: $\left( B \lor C \right) , \lnot C , \left( B \rightarrow \lnot A \right) \vdash \lnot A$
-
-$(\land L_1)$: $\left( B \lor C \right) , \lnot C , \left( \left( B \rightarrow \lnot A \right) \land \lnot C \right) \vdash \lnot A$
-
-$(PL)$: $\left( B \lor C \right) , \left( \left( B \rightarrow \lnot A \right) \land \lnot C \right) , \lnot C \vdash \lnot A$
-
-$(\land L_2)$: $\left( B \lor C \right) , \left( \left( B \rightarrow \lnot A \right) \land \lnot C \right) , \left( \left( B \rightarrow \lnot A \right) \land \lnot C \right) \vdash \lnot A$
-
-$(CL)$: $\left( B \lor C \right) , \left( \left( B \rightarrow \lnot A \right) \land \lnot C \right) \vdash \lnot A$
-
-$(PL)$: $\left( \left( B \rightarrow \lnot A \right) \land \lnot C \right) , \left( B \lor C \right) \vdash \lnot A$
-
-$(\rightarrow L)$: $\left( \left( B \rightarrow \lnot A \right) \land \lnot C \right) , \left( A \rightarrow \left( B \lor C \right) \right) \vdash \lnot A , \lnot A$
-
-$(CR)$: $\left( \left( B \rightarrow \lnot A \right) \land \lnot C \right) , \left( A \rightarrow \left( B \lor C \right) \right) \vdash \lnot A$
-
-$(PL)$: $\left( A \rightarrow \left( B \lor C \right) \right) , \left( \left( B \rightarrow \lnot A \right) \land \lnot C \right) \vdash \lnot A$
-
-$(\rightarrow R)$: $\left( A \rightarrow \left( B \lor C \right) \right) \vdash \left( \left( \left( B \rightarrow \lnot A \right) \land \lnot C \right) \rightarrow \lnot A \right)$
-
-$(\rightarrow R)$: $\vdash \left( \left( A \rightarrow \left( B \lor C \right) \right) \rightarrow \left( \left( \left( B \rightarrow \lnot A \right) \land \lnot C \right) \rightarrow \lnot A \right) \right)$
-
-These derivations also emphasize the strictly formal structure of the sequent calculus. For example, the logical rules as defined above always act on a formula immediately adjacent to the turnstile, such that the permutation rules are necessary. Note, however, that this is in part an artifact of the presentation, in the original style of Gentzen. A common simplification involves the use of multisets of formulas in the interpretation of the sequent, rather than sequences, eliminating the need for an explicit permutation rule. This corresponds to shifting commutativity of assumptions and derivations outside the sequent calculus, whereas LK embeds it within the system itself.
-
-For certain formulations (i.e. variants) of the sequent calculus, a proof in such a calculus is isomorphic to an upside-down, closed analytic tableau.
-
-The structural rules deserve some additional discussion.
-
-Weakening (W) allows the addition of arbitrary elements to a sequence. Intuitively, this is allowed in the antecedent because we can always restrict the scope of our proof (if all cars have wheels, then it's safe to say that all black cars have wheels); and in the succedent because we can always allow for alternative conclusions (if all cars have wheels, then it's safe to say that all cars have either wheels or wings).
-
-Contraction (C) and Permutation (P) assure that neither the order (P) nor the multiplicity of occurrences (C) of elements of the sequences matters. Thus, instead of sequences, one could also consider sets.
-
-The extra effort of using sequences, however, is justified since part or all of the structural rules may be omitted. Doing so, one obtains the so-called substructural logics.
-
-This system of rules can be shown to be both sound and complete with respect to first-order logic, i.e. a statement $A$ follows semantically from a set of premises $\Gamma$ $(\Gamma \vDash A)$ if and only if the sequent $\Gamma \vdash A$ can be derived by the above rules.
-
-In the sequent calculus, the rule of cut is admissible. This result is also referred to as Gentzen's Hauptsatz ("Main Theorem").
-
-The above rules can be modified in various ways:
-
-There is some freedom of choice regarding the technical details of how sequents and structural rules are formalized. As long as every derivation in LK can be effectively transformed to a derivation using the new rules and vice versa, the modified rules may still be called LK.
-
-First of all, as mentioned above, the sequents can be viewed as consisting of sets or multisets. In this case, the rules for permuting and (when using sets) contracting formulae are obsolete.
-
-The rule of weakening becomes admissible when the axiom (I) is changed so that any sequent of the form $\Gamma , A \vdash A , \Delta$ can be concluded. This means that $A$ proves $A$ in any context. Any weakening that appears in a derivation can then be performed right at the start. This may be a convenient change when constructing proofs bottom-up.
- -Independent of these one may also change the way in which contexts are split within the rules: In the cases $({\land}R), ({\lor}L)$, and $({\rightarrow}L)$ the left context is somehow split into $\Gamma$ and $\Sigma$ when going upwards. Since contraction allows for the duplication of these, one may assume that the full context is used in both branches of the derivation. By doing this, one assures that no important premises are lost in the wrong branch. Using weakening, the irrelevant parts of the context can be eliminated later. - -One can introduce $\bot$, the absurdity constant representing false, with the axiom: - - - -\cfrac{}{\bot \vdash \quad } - - - -Or if, as described above, weakening is to be an admissible rule, then with the axiom: - - - -\cfrac{}{\Gamma, \bot \vdash \Delta} - - - -With $\bot$, negation can be subsumed as a special case of implication, via the definition $\neg A \iff A \to \bot$. - -Alternatively, one may restrict or forbid the use of some of the structural rules. This yields a variety of substructural logic systems. They are generally weaker than LK (i.e., they have fewer theorems), and thus not complete with respect to the standard semantics of first-order logic. However, they have other interesting properties that have led to applications in theoretical computer science and artificial intelligence. - -Surprisingly, some small changes in the rules of LK suffice to turn it into a proof system for intuitionistic logic. To this end, one has to restrict to sequents with at most one formula on the right-hand side, and modify the rules to maintain this invariant. For example, $({\lor}L)$ is reformulated as follows (where C is an arbitrary formula): - - - -\cfrac{\Gamma, A \vdash C \qquad \Sigma, B \vdash C }{\Gamma, \Sigma, A \lor B \vdash C} \quad ({\lor}L) - - - -The resulting system is called LJ. It is sound and complete with respect to intuitionistic logic and admits a similar cut-elimination proof. This can be used in proving disjunction and existence properties. - -In fact, the only two rules in LK that need to be restricted to single-formula consequents are $({\to}R)$ and $(\neg R)$ (and the latter can be seen as a special case of the former, via $\bot$ as described above). When multi-formula consequents are interpreted as disjunctions, all of the other inference rules of LK are actually derivable in LJ, while the offending rule is - - - -\cfrac{\Gamma, A \vdash B \lor C}{\Gamma \vdash (A \to B) \lor C} - - - -This amounts to the propositional formula $(A \to (B \lor C)) \to ((A \to B) \lor C)$, a classical tautology that is not constructively valid. diff --git a/wiki/wikipedia/2861.txt b/wiki/wikipedia/2861.txt deleted file mode 100644 index 392eed872606925ec83e28991ca52cd813697302..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2861.txt +++ /dev/null @@ -1,96 +0,0 @@ -In number theory, Waring's problem asks whether each natural number k has an associated positive integer s such that every natural number is the sum of at most s natural numbers raised to the power k. For example, every natural number is the sum of at most 4 squares, 9 cubes, or 19 fourth powers. Waring's problem was proposed in 1770 by Edward Waring, after whom it is named. Its affirmative answer, known as the Hilbert–Waring theorem, was provided by Hilbert in 1909. Waring's problem has its own Mathematics Subject Classification, 11P05, "Waring's problem and variants". 
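The small extremal values quoted in this article are easy to confirm by brute force. The following Python sketch (illustrative only; the function name is ad hoc) computes by dynamic programming the least number of positive k-th powers summing to each m up to 1000, and recovers both the worst-case counts (4 squares, 9 cubes, 19 fourth powers) and the first integers attaining them:

```python
# DP over m <= n_max: least[m] = least s with m a sum of s positive k-th powers.
def min_powers(n_max, k):
    powers = [i**k for i in range(1, int(n_max ** (1 / k)) + 2) if i**k <= n_max]
    least = [0] + [n_max] * n_max        # n_max is a safe upper bound (all 1's)
    for m in range(1, n_max + 1):
        least[m] = 1 + min(least[m - p] for p in powers if p <= m)
    return least

for k in (2, 3, 4):
    least = min_powers(1000, k)
    worst = max(least)
    first = max(range(1001), key=lambda m: least[m])   # first maximizer
    print(k, worst, first)
# prints: 2 4 7  /  3 9 23  /  4 19 79
```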
- -Long before Waring posed his problem, Diophantus had asked whether every positive integer could be represented as the sum of four perfect squares greater than or equal to zero. This question later became known as Bachet's conjecture, after the 1621 translation of Diophantus by Claude Gaspard Bachet de Méziriac, and it was solved by Joseph-Louis Lagrange in his four-square theorem in 1770, the same year Waring made his conjecture. Waring sought to generalize this problem by trying to represent all positive integers as the sum of cubes, integers to the fourth power, and so forth, to show that any positive integer may be represented as the sum of other integers raised to a specific exponent, and that there was always a maximum number of integers raised to a certain exponent required to represent all positive integers in this way. - -For every $k$, let $g(k)$ denote the minimum number $s$ of $k$th powers of naturals needed to represent all positive integers. Every positive integer is the sum of one first power, itself, so $g(1) = 1$. Some simple computations show that 7 requires 4 squares, 23 requires 9 cubes, and 79 requires 19 fourth powers; these examples show that $g(2) \ge 4$, $g(3) \ge 9$, and $g(4) \ge 19$. Waring conjectured that these lower bounds were in fact exact values. - -Lagrange's four-square theorem of 1770 states that every natural number is the sum of at most four squares. Since three squares are not enough, this theorem establishes $g(2) = 4$. Lagrange's four-square theorem was conjectured in Bachet's 1621 edition of Diophantus's Arithmetica; Fermat claimed to have a proof, but did not publish it. - -Over the years various bounds were established, using increasingly sophisticated and complex proof techniques. For example, Liouville showed that $g(4)$ is at most 53. Hardy and Littlewood showed that all sufficiently large numbers are the sum of at most 19 fourth powers. - -That $g(3) = 9$ was established from 1909 to 1912 by Wieferich and A. J. Kempner, $g(4) = 19$ in 1986 by R. Balasubramanian, F. Dress, and J.-M. Deshouillers, $g(5) = 37$ in 1964 by Chen Jingrun, and $g(6) = 73$ in 1940 by Pillai. - -Let $\lfloor x\rfloor$ and $\{x\}$ respectively denote the integral and fractional part of a positive real number $x$. Given the number $c = 2^k \lfloor(3/2)^k\rfloor - 1 < 3^k$, only $2^k$ and $1^k$ can be used to represent $c$; the most economical representation requires -$$ -\lfloor(3/2)^k\rfloor - 1 -$$ terms of $2^k$ and $2^k - 1$ terms of $1^k$. It follows that $g(k)$ is at least as large as $2^k + \lfloor(3/2)^k\rfloor - 2$. This was noted by J. A. Euler, the son of Leonhard Euler, in about 1772. Later work by Dickson, Pillai, Rubugunday, Niven and many others has proved that - - - -g(k) = \begin{cases} - -2^k + \lfloor(3/2)^k\rfloor - 2 - -&\text{if}\quad - -2^k \{(3/2)^k\} + \lfloor(3/2)^k\rfloor \le 2^k, \\ - -2^k + \lfloor(3/2)^k\rfloor + \lfloor(4/3)^k\rfloor - 2 - -&\text{if}\quad - -2^k \{(3/2)^k\} + \lfloor(3/2)^k\rfloor > 2^k - -\text{ and } - -\lfloor(4/3)^k\rfloor \lfloor(3/2)^k\rfloor + \lfloor(4/3)^k\rfloor + \lfloor(3/2)^k\rfloor = 2^k, \\ - -2^k + \lfloor(3/2)^k\rfloor + \lfloor(4/3)^k\rfloor - 3 - -&\text{if}\quad - -2^k \{(3/2)^k\} + \lfloor(3/2)^k\rfloor > 2^k - -\text{ and } - -\lfloor(4/3)^k\rfloor \lfloor(3/2)^k\rfloor + \lfloor(4/3)^k\rfloor + \lfloor(3/2)^k\rfloor > 2^k. - -\end{cases} - - - -No value of $k$ is known for which $2^k\{(3/2)^k\} + \lfloor(3/2)^k\rfloor > 2^k$. 
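Since $(3/2)^k = 3^k/2^k$, this condition can be tested in exact integer arithmetic: $2^k\{(3/2)^k\} = 3^k \bmod 2^k$ and $\lfloor(3/2)^k\rfloor = \lfloor 3^k/2^k \rfloor$. A quick illustrative scan (a sketch, not from the article) confirms that no exceptional $k$ exists up to 2000:

```python
# Exact integer check of 2^k {(3/2)^k} + floor((3/2)^k) > 2^k for small k.
bad = [k for k in range(1, 2001)
       if 3**k % 2**k + 3**k // 2**k > 2**k]
print(bad)   # [] -- no exceptional k up to 2000
```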
Mahler proved that there can only be a finite number of such $k$, and Kubina and Wunderlich have shown that any such $k$ must satisfy $k > 471600000$. Thus it is conjectured that this never happens, that is, $g(k) = 2^k + \lfloor(3/2)^k\rfloor - 2$ for every positive integer $k$.
-
-The first few values of $g(k)$ are:
-
-1, 4, 9, 19, 37, 73, 143, 279, 548, 1079, 2132, 4223, 8384, 16673, 33203, 66190, 132055, 263619, 526502, 1051899, ... .
-
-From the work of Hardy and Littlewood, the related quantity G(k) was studied alongside g(k). G(k) is defined to be the least positive integer s such that every sufficiently large integer (i.e. every integer greater than some constant) can be represented as a sum of at most s positive integers to the power of k. Clearly, G(1) = 1. Since squares are congruent to 0, 1, or 4 (mod 8), no integer congruent to 7 (mod 8) can be represented as a sum of three squares, implying that G(2) ≥ 4. Since G(k) ≤ g(k) for all k, this shows that G(2) = 4. Davenport showed that G(4) = 16 in 1939, by demonstrating that any sufficiently large number congruent to 1 through 14 mod 16 could be written as a sum of 14 fourth powers (Vaughan in 1985 and 1989 reduced the 14 successively to 13 and 12). The exact value of G(k) is unknown for any other k, but there exist bounds.
-
-Congruence restrictions give lower bounds on G(k); in the absence of congruence restrictions, a density argument suggests that G(k) should equal k + 1.
-
-G(3) is at least 4 (since cubes are congruent to 0, 1 or −1 mod 9); for numbers less than $1.3\times10^9$, 1290740 is the last to require 6 cubes, and the number of numbers between N and 2N requiring 5 cubes drops off with increasing N at sufficient speed that it is believed that G(3) = 4; the largest number now known not to be a sum of 4 cubes is 7373170279850, and the authors of that computation give reasonable arguments that this may be the largest possible. The upper bound G(3) ≤ 7 is due to Linnik in 1943. (All nonnegative integers require at most 9 cubes, and the largest integers requiring 9, 8, 7, 6 and 5 cubes are conjectured to be 239, 454, 8042, 1290740 and 7373170279850, respectively.)
-
-13792 is the largest number to require 17 fourth powers (Deshouillers, Hennecart and Landreau showed in 2000 that every number between 13793 and $10^{245}$ required at most 16, and Kawada, Wooley and Deshouillers extended Davenport's 1939 result to show that every number above $10^{220}$ required no more than 16). Numbers of the form $31\cdot16^n$ always require 16 fourth powers.
-
-617597724 is the last number less than $1.3\times10^9$ that requires 10 fifth powers, and 51033617 is the last number less than $1.3\times10^9$ that requires 11.
-
-The best known upper bounds for k = 5, 6, ..., 20 are due to Vaughan and Wooley.
-
-Using his improved Hardy–Littlewood method, I. M. Vinogradov published numerous refinements leading to
-$$
-G(k)\le k(3\log k + 11)
-$$
-
-in 1947 and, ultimately,
-$$
-G(k) \le k(2\log k + 2\log\log k + C\log\log\log k)
-$$
-
-for an unspecified constant C and sufficiently large k in 1959.
-
-Applying his p-adic form of the Hardy–Littlewood–Ramanujan–Vinogradov method to estimating trigonometric sums, in which the summation is taken over numbers with small prime divisors, Anatolii Alexeevitch Karatsuba obtained (1985) a new estimate of the Hardy function $G(k)$ (for $k \ge 400$):
-$$
-G(k) < 2 k\log k + 2 k\log\log k + 12 k.
-$$
-
-Further refinements were obtained by Vaughan in 1989.
-
-Wooley then established that for some constant C,
-$$
-G(k) \le k\log k + k\log\log k + Ck.
-$$ - -Vaughan and Wooley have written a comprehensive survey article. diff --git a/wiki/wikipedia/2862.txt b/wiki/wikipedia/2862.txt deleted file mode 100644 index b7da149fd3a666e239e23b11e7f056b961276413..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2862.txt +++ /dev/null @@ -1,23 +0,0 @@ -In mathematics, Whitney's planarity criterion is a matroid-theoretic characterization of planar graphs, named after Hassler Whitney. It states that a graph G is planar if and only if its graphic matroid is also cographic (that is, it is the dual matroid of another graphic matroid). - -In purely graph-theoretic terms, this criterion can be stated as follows: There must be another (dual) graph G'=(V',E') and a bijective correspondence between the edges E' and the edges E of the original graph G, such that a subset T of E forms a spanning tree of G if and only if the edges corresponding to the complementary subset E-T form a spanning tree of G'. - -An equivalent form of Whitney's criterion is that a graph G is planar if and only if it has a dual graph whose graphic matroid is dual to the graphic matroid of G. - -A graph whose graphic matroid is dual to the graphic matroid of G is known as an algebraic dual of G. Thus, Whitney's planarity criterion can be expressed succinctly as: a graph is planar if and only if it has an algebraic dual. - -If a graph is embedded into a topological surface such as the plane, in such a way that every face of the embedding is a topological disk, then the dual graph of the embedding is defined as the graph (or in some cases multigraph) H that has a vertex for every face of the embedding, and an edge for every adjacency between a pair of faces. - -According to Whitney's criterion, the following conditions are equivalent: - -*The surface on which the embedding exists is topologically equivalent to the plane, sphere, or punctured plane - -*H is an algebraic dual of G - -*Every simple cycle in G corresponds to a minimal cut in H, and vice versa - -*Every simple cycle in H corresponds to a minimal cut in G, and vice versa - -*Every spanning tree in G corresponds to the complement of a spanning tree in H, and vice versa. - -It is possible to define dual graphs of graphs embedded on nonplanar surfaces such as the torus, but these duals do not generally have the correspondence between cuts, cycles, and spanning trees required by Whitney's criterion. diff --git a/wiki/wikipedia/2863.txt b/wiki/wikipedia/2863.txt deleted file mode 100644 index efbe609c67f6b7c50a54ce3b3c49418a5c1e2ce1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2863.txt +++ /dev/null @@ -1,5 +0,0 @@ -In geometry, the crossbar theorem states that if ray AD is between ray AC and ray AB, then ray AD intersects line segment BC. - -This result is one of the deeper results in axiomatic plane geometry. It is often used in proofs to justify the statement that a line through a vertex of a triangle lying inside the triangle meets the side of the triangle opposite that vertex. This property was often used by Euclid in his proofs without explicit justification. - -Some modern treatments (not Euclid's) of the proof of the theorem that the base angles of an isosceles triangle are congruent start like this: Let ABC be a triangle with side AB congruent to side AC. Draw the angle bisector of angle A and let D be the point at which it meets side BC. And so on. The justification for the existence of point D is the often unstated crossbar theorem. 
For this particular result, other proofs exist which do not require the use of the crossbar theorem.
diff --git a/wiki/wikipedia/2864.txt b/wiki/wikipedia/2864.txt
deleted file mode 100644
index 309189e6e60ca4f3dcb0dce6998b162f34d28dcd..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2864.txt
+++ /dev/null
@@ -1,239 +0,0 @@
-In geometry, triangle inequalities are inequalities involving the parameters of triangles that hold for every triangle, or for every triangle meeting certain conditions. The inequalities give an ordering of two different values: they are of the form "less than", "less than or equal to", "greater than", or "greater than or equal to". The parameters in a triangle inequality can be the side lengths, the semiperimeter, the angle measures, the values of trigonometric functions of those angles, the area of the triangle, the medians of the sides, the altitudes, the lengths of the internal angle bisectors from each angle to the opposite side, the perpendicular bisectors of the sides, the distance from an arbitrary point to another point, the inradius, the exradii, the circumradius, and/or other quantities.
-
-Unless otherwise specified, this article deals with triangles in the Euclidean plane.
-
-The parameters most commonly appearing in triangle inequalities are:
-
-*the side lengths a, b, and c;
-
-*the semiperimeter s = (a + b + c) / 2 (half the perimeter p);
-
-*the angle measures A, B, and C of the angles of the vertices opposite the respective sides a, b, and c (with the vertices denoted with the same symbols as their angle measures);
-
-*the values of trigonometric functions of the angles;
-
-*the area T of the triangle;
-
-*the medians ma, mb, and mc of the sides (each being the length of the line segment from the midpoint of the side to the opposite vertex);
-
-*the altitudes ha, hb, and hc (each being the length of a segment perpendicular to one side and reaching from that side (or possibly the extension of that side) to the opposite vertex);
-
-*the lengths of the internal angle bisectors ta, tb, and tc (each being a segment from a vertex to the opposite side and bisecting the vertex's angle);
-
-*the perpendicular bisectors pa, pb, and pc of the sides (each being the length of a segment perpendicular to one side at its midpoint and reaching to one of the other sides);
-
-*the lengths of line segments with an endpoint at an arbitrary point P in the plane (for example, the length of the segment from P to vertex A is denoted PA or AP);
-
-*the inradius r (radius of the circle inscribed in the triangle, tangent to all three sides), the exradii ra, rb, and rc (each being the radius of an excircle tangent to side a, b, or c respectively and tangent to the extensions of the other two sides), and the circumradius R (radius of the circle circumscribed around the triangle and passing through all three vertices).
-
-The basic triangle inequality is
-
-a < b+c, \quad b < c + a, \quad c < a + b
-
-or equivalently
-
-\max(a, b, c) < s.
-
-In addition,
-
-\frac{3}{2} \le \frac{a}{b+c} + \frac{b}{a+c} + \frac{c}{a+b} < 2,
-
-where the constant on the right side is the smallest possible upper bound, approached asymptotically as certain classes of triangles approach the degenerate case of zero area. The left inequality, which holds for all positive a, b, c, is Nesbitt's inequality.
-
-We have
-$$
-3\left( \frac{a}{b}+\frac{b}{c}+\frac{c}{a}\right) \geq 2\left( \frac{b}{a}+\frac{c}{b}+\frac{a}{c} \right) +3.
-$$
-$$
-a^2+b^2 > \frac{c^2}{2},
-$$
-
-with equality approached in the limit only as the apex angle of an isosceles triangle approaches 180°.
-
-If the centroid of the triangle is inside the triangle's incircle, then
-$$
-a^2 < 4bc, \quad b^2 < 4ac, \quad c^2 < 4ab.
-$$
-
-While all of the above inequalities hold because a, b, and c must obey the basic triangle inequality that the longest side is less than half the perimeter, there are also inequalities for the angles, such as
-$$
-\sin A+\sin B \cdot \sin C \leq \varphi,
-$$
-where $\varphi = \tfrac{1+\sqrt{5}}{2}$ is the golden ratio, and
-$$
-\sin A \cdot \cos B +\sin B \cdot \cos C+\sin C \cdot \cos A \leq \frac{3\sqrt{3}}{4}.
-$$
-
-The isoperimetric inequality for triangles states that
-$$
-T \le \frac{\sqrt{3}}{36}(a+b+c)^2.
-$$
-This is strengthened by
-$$
-T \le \frac{\sqrt{3}}{4}(abc)^{2/3}.
-$$
-
-Bonnesen's inequality also strengthens the isoperimetric inequality:
-$$
- \pi^2 (R-r)^2 \leq (a+b+c)^2-4\pi T.
-$$
-
-We also have
-$$
-\frac{9abc}{a+b+c} \ge 4\sqrt{3} \cdot T
-$$
-
-with equality only in the equilateral case; moreover,
-$$
-38T^2 \leq 2s^4-a^4-b^4-c^4.
-$$
-
-If an inner triangle is inscribed in a reference triangle so that the inner triangle's vertices partition the perimeter of the reference triangle into equal length segments, the ratio of their areas is bounded below by a positive constant. If $a \geq b$, then $t_a \leq t_b$ for the lengths of the internal angle bisectors. Consider also the lengths $p_a$, $p_b$, $p_c$ of the triangle-interior portions of the perpendicular bisectors of the sides of the triangle. Denoting the sides so that $a \geq b \geq c,$ we have
-$$
-p_a \geq p_b
-$$
-
-and
-$$
-p_c \geq p_b.
-$$
-
-Consider any point P in the interior of the triangle, with the triangle's vertices denoted A, B, and C, with the lengths of line segments denoted PA etc., and with the distances from P to the sides denoted PD, PE, and PF. We have
-$$
-k_1\cdot (PA)^t + k_2\cdot (PB)^t + k_3\cdot (PC)^t \geq 2^t \sqrt{k_1k_2k_3} \left(\frac{(PD)^t}{\sqrt{k_1}} + \frac{(PE)^t}{\sqrt{k_2}} + \frac{(PF)^t}{\sqrt{k_3}} \right),
-$$
-
-while for $t > 1$ a companion bound holds. In addition,
-$$
-PA+PB+PC \geq 6r.
-$$
-
-Others include:
-$$
-PA^3+PB^3+PC^3 + k \cdot (PA \cdot PB \cdot PC) \geq8(k+3)r^3
-$$
-
-for k = 0, 1, ..., 6;
-$$
-PA^2+PB^2+PC^2 + (PA \cdot PB \cdot PC)^{2/3} \geq 16r^2;
-$$
-$$
-PA^2+PB^2+PC^2 + 2(PA \cdot PB \cdot PC)^{2/3} \geq 20r^2;
-$$
-
-and
-$$
-PA^4+PB^4+PC^4 + k(PA \cdot PB \cdot PC)^{4/3} \geq 16(k+3)r^4
-$$
-
-for k = 0, 1, ..., 9.
-
-Furthermore, for circumradius R,
-$$
-(PA \cdot PB)^{3/2} + (PB \cdot PC)^{3/2} + (PC \cdot PA)^{3/2} \geq 12Rr^2;
-$$
-$$
-(PA \cdot PB)^{2} + (PB \cdot PC)^{2} + (PC \cdot PA)^{2} \geq 8(R+r)Rr^2.
-$$
-
-Let $d=\sqrt{R^2-2Rr}$ denote the distance between the incenter and the circumcenter. Then
-$$
-(R-d)^2-r^2 \le 4R^2 r^2\left(\frac{(R+d)^2-r^2}{(R+d)^4} \right) \le \frac{a^2}{4} \le Q \le (R+d)^2-r^2,
-$$
-
-where $Q=R^2$ if the circumcenter is on or outside of the incircle and $Q=4R^2 r^2 \left(\frac{(R-d)^2-r^2}{(R-d)^4}\right)$ if the circumcenter is inside the incircle. In this double inequality, the first part holds with equality if and only if the triangle is isosceles with an apex angle of at least 60°, and the last part holds with equality if and only if the triangle is isosceles with an apex angle of at most 60°. Thus both are equalities if and only if the triangle is equilateral.
-
-Furthermore,
-$$
-\frac{4}{R}\le \frac{1}{R_A}+\frac{1}{R_B}+\frac{1}{R_C}\le \frac{2}{r}
-$$
-
-with equality only in the equilateral case, and
-$$
-\frac{9}{2}r\le R_A+R_B+R_C \le 2R+\frac{1}{2}r
-$$
-
-with equality only in the equilateral case.
-
-For the side lengths $x_a$ and $x_b$ of squares inscribed in the triangle with one side lying on sides $a$ and $b$ respectively, where $a \geq b$, we have
-$$
-1 \geq \frac{x_a}{x_b} \geq \frac{2\sqrt{2}}{3} \approx 0.94.
-$$ - -Moreover, for any square inscribed in any triangle we have -$$ -\frac{\text{Area of triangle}}{\text{Area of inscribed square}} \geq 2. -$$ - -A triangle's Euler line goes through its orthocenter, its circumcenter, and its centroid, but does not go through its incenter unless the triangle is isosceles. For all non-isosceles triangles, the distance d from the incenter to the Euler line satisfies the following inequalities in terms of the triangle's longest median v, its longest side u, and its semiperimeter s: -$$ -\frac{d}{s} < \frac{d}{u} < \frac{d}{v} < \frac{1}{3}. -$$ - -For all of these ratios, the upper bound of 1/3 is the tightest possible. - -In right triangles the legs a and b and the hypotenuse c obey the following, with equality only in the isosceles case: -$$ -a+b \leq c\sqrt{2}. -$$ - -In terms of the inradius, the hypotenuse obeys -$$ -2r \leq c(\sqrt{2}-1), -$$ - -and in terms of the altitude from the hypotenuse the legs obey -$$ -h_c \leq \frac{\sqrt{2}}{4}(a+b). -$$ - -If the two equal sides of an isosceles triangle have length a and the other side has length c, then the internal angle bisector t from one of the two equal-angled vertices satisfies -$$ -\frac{2ac}{a+c} > t > \frac{ac\sqrt{2}}{a+c}. -$$ - -For any point P in the plane of an equilateral triangle ABC, the distances of P from the vertices, PA, PB, and PC, are such that, unless P is on the triangle's circumcircle, they obey the basic triangle inequality and thus can themselves form the sides of a triangle: - -PA+PB > PC, \quad PB+PC > PA, \quad PC+PA > PB. - -However, when P is on the circumcircle the sum of the distances from P to the nearest two vertices exactly equals the distance to the farthest vertex. - -A triangle is equilateral if and only if, for every point P in the plane, with distances PD, PE, and PF to the triangle's sides and distances PA, PB, and PC to its vertices, - -4(PD^2+PE^2+PF^2) \geq PA^2+PB^2+PC^2. - -Pedoe's inequality for two triangles, one with sides a, b, and c and area T, and the other with sides d, e, and f and area S, states that -$$ -d^2(b^2+c^2-a^2)+e^2(a^2+c^2-b^2)+f^2(a^2+b^2-c^2)\geq 16TS, -$$ - -with equality if and only if the two triangles are similar. - -The hinge theorem or open-mouth theorem states that if two sides of one triangle are congruent to two sides of another triangle, and the included angle of the first is larger than the included angle of the second, then the third side of the first triangle is longer than the third side of the second triangle. That is, in triangles ABC and DEF with sides a, b, c, and d, e, f respectively (with a opposite A etc.), if a = d and b = e and angle C > angle F, then -$$ - c>f. -$$ - -The converse also holds: if c > f, then C > F. - -The angles in any two triangles ABC and DEF are related in terms of the cotangent function according to -$$ -\cot A (\cot E + \cot F) + \cot B(\cot F+\cot D) + \cot C(\cot D + \cot E) \geq 2. -$$ - -In a triangle on the surface of a sphere, as well as in elliptic geometry, -$$ -\angle A+\angle B+\angle C >180^\circ. -$$ - -This inequality is reversed for hyperbolic triangles. diff --git a/wiki/wikipedia/2865.txt b/wiki/wikipedia/2865.txt deleted file mode 100644 index 23d087a45ffb9cfdccd343afdc710213c0cfddce..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2865.txt +++ /dev/null @@ -1,79 +0,0 @@ -In mathematics, Abel's test (also known as Abel's criterion) is a method of testing for the convergence of an infinite series. The test is named after mathematician Niels Henrik Abel. 
There are two slightly different versions of Abel's test: one is used with series of real numbers, and the other is used with power series in complex analysis. Abel's uniform convergence test is a criterion for the uniform convergence of a series of functions dependent on parameters.
-
-Suppose the following statements are true:
-
-# $\sum a_n $ is a convergent series,
-
-# {bn} is a monotone sequence, and
-
-# {bn} is bounded.
-
-Then $\sum a_nb_n $ is also convergent.
-
-It is important to understand that this test is mainly pertinent and
-useful in the context of non-absolutely convergent series $\sum a_n$.
-For absolutely convergent series, this theorem, albeit true, is almost self-evident.
-
-This theorem can be proved directly using summation by parts.
-
-A closely related convergence test, also known as Abel's test, can often be used to establish the convergence of a power series on the boundary of its circle of convergence. Specifically, Abel's test states that if a sequence of positive real numbers $(a_n)$ is decreasing monotonically (or at least that for all n greater than some natural number m, we have $a_n \geq a_{n+1}$) with
-
-\lim_{n\rightarrow\infty} a_n = 0
-
-then the power series
-
-f(z) = \sum_{n=0}^\infty a_nz^n
-
-converges everywhere on the closed unit circle, except when z = 1. Abel's test cannot be applied when z = 1, so convergence at that single point must be investigated separately. Notice that Abel's test implies in particular that the radius of convergence is at least 1. It can also be applied to a power series with radius of convergence R ≠ 1 by a simple change of variables ζ = z/R. Notice that Abel's test is a generalization of the Leibniz Criterion by taking z = −1.
-
-Proof of Abel's test: Suppose that z is a point on the unit circle, z ≠ 1. For each $n\geq1$, we define
-
-f_n(z):=\sum_{k=0}^n a_k z^k.
-
-By multiplying this function by (1 − z), we obtain
-
-\begin{align}
-(1-z)f_n(z) & = \sum_{k=0}^n a_k (1-z)z^k
-= \sum_{k=0}^n a_k z^k - \sum_{k=0}^n a_k z^{k+1}
-= a_0 + \sum_{k=1}^n a_k z^k - \sum_{k=1}^{n+1} a_{k-1} z^k \\
-& = a_0 - a_n z^{n+1} + \sum_{k=1}^n (a_k - a_{k-1}) z^k .
-\end{align}
-
-The first summand is constant, the second converges uniformly to zero (since by assumption the sequence $(a_n)$ converges to zero). It only remains to show that the series converges. We will show this by showing that it even converges absolutely:
-
-\sum_{k=1}^\infty \left|(a_k - a_{k-1}) z^k \right| = \sum_{k=1}^\infty |a_k-a_{k-1}|\cdot |z|^k \leq \sum_{k=1}^\infty (a_{k-1}-a_{k})
-
-where the last sum is a converging telescoping sum. The absolute value signs could be dropped because the sequence $(a_n)$ is decreasing by assumption.
-
-Hence, the sequence $(1-z)f_n(z)$ converges (even uniformly) on the closed unit disc. If $z\not = 1$, we may divide by (1 − z) and obtain the result.
-
-Abel's uniform convergence test is a criterion for the uniform convergence of a series of functions or an improper integration of functions dependent on parameters. It is related to Abel's test for the convergence of an ordinary series of real numbers, and the proof relies on the same technique of summation by parts.
-
-The test is as follows. Let {gn} be a uniformly bounded sequence of real-valued continuous functions on a set E such that gn+1(x) ≤ gn(x) for all x ∈ E and positive integers n, and let {fn} be a sequence of real-valued functions such that the series Σfn(x) converges uniformly on E.
Then Σfn(x)gn(x) converges uniformly on E.
diff --git a/wiki/wikipedia/2866.txt b/wiki/wikipedia/2866.txt
deleted file mode 100644
index a61738d5a99c30074c0eff3282e99dfd5a238e70..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2866.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-In the mathematical field of graph theory, the Fritsch graph is a planar graph with 9 vertices and 21 edges.
-
-It was obtained by Fritsch as a minimal-sized counterexample to Alfred Kempe's attempted proof of the four-color theorem.
diff --git a/wiki/wikipedia/2867.txt b/wiki/wikipedia/2867.txt
deleted file mode 100644
index 6cac5ff37854a0cbba4212980cf7268b91c82640..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/2867.txt
+++ /dev/null
@@ -1,216 +0,0 @@
-In mathematics, in the area of numerical analysis, Galerkin methods, named after the Russian mathematician Boris Galerkin, convert a continuous operator problem, such as a differential equation, commonly in a weak formulation, to a discrete problem by applying linear constraints determined by finite sets of basis functions.
-
-Often when referring to a Galerkin method, one also gives the name along with typical assumptions and approximation methods used:
-
-* Ritz–Galerkin method (after Walther Ritz) typically assumes a symmetric and positive definite bilinear form in the weak formulation, where the differential equation for a physical system can be formulated via minimization of a quadratic function representing the system energy and the approximate solution is a linear combination of the given set of the basis functions.
-
-* Bubnov–Galerkin method (after Ivan Bubnov) does not require the bilinear form to be symmetric and substitutes the energy minimization with orthogonality constraints determined by the same basis functions that are used to approximate the solution. In an operator formulation of the differential equation, the Bubnov–Galerkin method can be viewed as applying an orthogonal projection to the operator.
-
-* Petrov–Galerkin method (after Georgii I. Petrov) allows using basis functions for the orthogonality constraints (called test basis functions) that are different from the basis functions used to approximate the solution. The Petrov–Galerkin method can be viewed as an extension of the Bubnov–Galerkin method, applying a projection that is not necessarily orthogonal in the operator formulation of the differential equation.
-
-Examples of Galerkin methods are:
-
-* the Galerkin method of weighted residuals, the most common method of calculating the global stiffness matrix in the finite element method,
-
-* the boundary element method for solving integral equations,
-
-* Krylov subspace methods.
-
-We first introduce and illustrate the Galerkin method as being applied to a system of linear equations $A\mathbf x = \mathbf b $ with the following symmetric and positive definite matrix
-
-A = \begin{bmatrix}
-2 & 0 & 0\\
-0 & 2 & 1\\
-0 & 1 & 2
-\end{bmatrix}
-
-and the solution and right-hand-side vectors
-$$
-\mathbf x = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad \mathbf b = \begin{bmatrix} 2 \\ 0 \\ 0 \end{bmatrix}.
-$$ - -Let us take - -V = \begin{bmatrix} - -0 & 0\\ - -1 & 0\\ - -0 & 1 - -\end{bmatrix}, - -then the matrix of the Galerkin equation is - -V^* A V = \begin{bmatrix} - -2 & 1\\ - -1 & 2 - -\end{bmatrix}, - - - -the right-hand-side vector of the Galerkin equation is - - - -V^* \mathbf b = \begin{bmatrix} - -0 \\ - -0 - -\end{bmatrix}, - - - -so that we obtain the solution vector - -\mathbf y = \begin{bmatrix} - -0 \\ - -0 - -\end{bmatrix} - -to the Galerkin equation $\left(V^* A V\right) \mathbf y = V^* \mathbf b$, which we finally uplift to determine the approximate solution to the original equation as - -V \mathbf y = \begin{bmatrix} - -0 \\ - -0 \\ - -0 - -\end{bmatrix}. - -In this example, our original Hilbert space is actually the 3-dimensional Euclidean space $\mathbb{R}^3$ equipped with the standard scalar product $(\mathbf u, \mathbf v) = \mathbf u^T \mathbf v $, our 3-by-3 matrix - -A defines the bilinear form $a(\mathbf u, \mathbf v) = \mathbf u^T A \mathbf v $, and the right-hand-side vector $\mathbf b $ defines the bounded linear functional $f(\mathbf v) = \mathbf b^T \mathbf v $. The columns - -\mathbf e_1 = \begin{bmatrix} - -0\\ - -1\\ - -0 - -\end{bmatrix} \quad - -\mathbf e_2 = \begin{bmatrix} - -0\\ - -0\\ - -1 - -\end{bmatrix}, - - - -of the matrix $V$ form an orthonormal basis of the 2-dimensional subspace of the Galerkin projection. The entries of the 2-by-2 Galerkin matrix $V^* A V$ are $a(e_j, e_i), i, j = 1, 2$, while the components of the right-hand-side vector - -V^* \mathbf b of the Galerkin equation are $f(e_i), i = 1, 2$. Finally, the approximate solution $V \mathbf y$ is obtained from the components of the solution vector $\mathbf y$ of the Galerkin equation and the basis as $\sum_{j=1}^2 \mathbf y_j \mathbf e_j$. - -Let us introduce Galerkin's method with an abstract problem posed as a weak formulation on a Hilbert space $V$, namely, - -find $u\in V$ such that for all $v\in V, a(u,v) = f(v)$. - -Here, $a(\cdot,\cdot)$ is a bilinear form (the exact requirements on $a(\cdot,\cdot)$ will be specified later) and $f$ is a bounded linear functional on $V$. - -Choose a subspace $V_n \subset V$ of dimension n and solve the projected problem: - -Find $u_n\in V_n$ such that for all $v_n\in V_n, a(u_n,v_n) = f(v_n)$. - -We call this the Galerkin equation. Notice that the equation has remained unchanged and only the spaces have changed. - -Reducing the problem to a finite-dimensional vector subspace allows us to numerically compute $ u_n $ as a finite linear combination of the basis vectors in $ V_n $. - -The key property of the Galerkin approach is that the error is orthogonal to the chosen subspaces. Since $V_n \subset V$, we can use $v_n$ as a test vector in the original equation. Subtracting the two, we get the Galerkin orthogonality relation for the error, $\epsilon_n = u-u_n$ which is the error between the solution of the original problem, $u$, and the solution of the Galerkin equation, $u_n$ -$$ - a(\epsilon_n, v_n) = a(u,v_n) - a(u_n, v_n) = f(v_n) - f(v_n) = 0. -$$ - -Since the aim of Galerkin's method is the production of a linear system of equations, we build its matrix form, which can be used to compute the solution algorithmically. - -Let $e_1, e_2,\ldots,e_n$ be a basis for $V_n$. Then, it is sufficient to use these in turn for testing the Galerkin equation, i.e.: find $u_n \in V_n$ such that -$$ -a(u_n, e_i) = f(e_i) \quad i=1,\ldots,n. 
-$$ - -We expand $u_n$ with respect to this basis, $u_n = \sum_{j=1}^n u_je_j$ and insert it into the equation above, to obtain -$$ -a\left(\sum_{j=1}^n u_je_j, e_i\right) = \sum_{j=1}^n u_j a(e_j, e_i) = f(e_i) \quad i=1,\ldots,n. -$$ - -The previous equation is actually a linear system of equations $Au=f$, where -$$ -A_{ij} = a(e_j, e_i), \quad f_i = f(e_i). -$$ - -Due to the definition of the matrix entries, the matrix of the Galerkin equation is symmetric if and only if the bilinear form $a(\cdot,\cdot)$ is symmetric. - -Here, we will restrict ourselves to symmetric bilinear forms, that is -$$ -a(u,v) = a(v,u). -$$ - -While this is not really a restriction of Galerkin methods, the application of the standard theory becomes much simpler. Furthermore, a Petrov–Galerkin method may be required in the nonsymmetric case. - -The analysis of these methods proceeds in two steps. First, we will show that the Galerkin equation is a well-posed problem in the sense of Hadamard and therefore admits a unique solution. In the second step, we study the quality of approximation of the Galerkin solution $u_n$. - -The analysis will mostly rest on two properties of the bilinear form, namely - -* Boundedness: for all $u,v\in V$ holds - -*:$a(u,v) \le C \|u\| \|v\|$ for some constant $C>0$ - -* Ellipticity: for all $u\in V$ holds - -*:$a(u,u) \ge c \|u\|^2$ for some constant $c>0.$ - -By the Lax-Milgram theorem (see weak formulation), these two conditions imply well-posedness of the original problem in weak formulation. All norms in the following sections will be norms for which the above inequalities hold (such a norm is often called an energy norm). - -Since $V_n \subset V$, boundedness and ellipticity of the bilinear form apply to $V_n$. Therefore, the well-posedness of the Galerkin problem is actually inherited from the well-posedness of the original problem. - -The error $u-u_n$ between the original and the Galerkin solution admits the estimate -$$ -\|u-u_n\| \le \frac{C}{c} \inf_{v_n\in V_n} \|u-v_n\|. -$$ - -This means that, up to the constant $C/c$, the Galerkin solution $u_n$ is as close to the original solution $u$ as any other vector in $V_n$. In particular, it will be sufficient to study approximation by spaces $V_n$, completely forgetting about the equation being solved. - -Since the proof is very simple and illustrates the basic principle behind all Galerkin methods, we include it here: - -by ellipticity and boundedness of the bilinear form (inequalities) and Galerkin orthogonality (equals sign in the middle), we have for arbitrary $v_n\in V_n$: -$$ -c\|u-u_n\|^2 \le a(u-u_n, u-u_n) = a(u-u_n, u-v_n) \le C \|u-u_n\| \|u-v_n\|. -$$ - -Dividing by $c \|u-u_n\|$ and taking the infimum over all possible $v_n$ yields the lemma (known as Céa's lemma). - -For simplicity of presentation in the section above we have assumed that the bilinear form $a(u, v)$ is symmetric and positive definite, which implies that it is a scalar product and the expression $\|u\|_a=\sqrt{a(u, u)}$ is actually a valid vector norm, called the energy norm. Under these assumptions one can easily prove in addition Galerkin's best approximation property in the energy norm. - -Using Galerkin a-orthogonality and the Cauchy–Schwarz inequality for the energy norm, we obtain -$$ -\|u-u_n\|_a^2 = a(u-u_n, u-u_n) = a(u-u_n, u-v_n) \le \|u-u_n\|_a \|u-v_n\|_a.
-$$ - -Dividing by $\|u-u_n\|_a$ and taking the infimum over all possible $v_n\in V_n$ proves that the Galerkin approximation $u_n\in V_n$ is the best approximation in the energy norm within the subspace $V_n \subset V$, i.e. $u_n\in V_n$ is nothing but the orthogonal, with respect to the scalar product $a(u, v)$, projection of the solution $u$ to the subspace $V_n$. - -I. Elishakoff, M. Amato, A. Marzani, P.A. Arvan, and J.N. Reddy studied the application of the Galerkin method to stepped structures. They showed that generalized functions, namely the unit-step function, Dirac's delta function, and the doublet function, are needed for obtaining accurate results. - -The approach is usually credited to Boris Galerkin. The method was explained to the Western reader by Hencky and Duncan among others. Its convergence was studied by Mikhlin and Leipholz. Its coincidence with the Fourier method was illustrated by Elishakoff et al. Its equivalence to Ritz's method for conservative problems was shown by Singer. Gander and Wanner showed how Ritz and Galerkin methods led to the modern finite element method. One hundred years of the method's development was discussed by Repin. Elishakoff, Kaplunov and Kaplunov show that Galerkin's method was not developed by Ritz, contrary to Timoshenko's statements. diff --git a/wiki/wikipedia/2868.txt deleted file mode 100644 index 6f7be79265603ccb5675aff573e5c7e5fbf55e1e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2868.txt +++ /dev/null @@ -1,37 +0,0 @@ -Identical-machines scheduling is an optimization problem in computer science and operations research. We are given n jobs J1, J2, ..., Jn of varying processing times, which need to be scheduled on m identical machines, such that a certain objective function is optimized, for example, the makespan is minimized. - -Identical machine scheduling is a special case of uniform machine scheduling, which is itself a special case of optimal job scheduling. In the general case, the processing time of each job may be different on different machines; in the case of identical machine scheduling, the processing time of each job is the same on each machine. Therefore, identical machine scheduling is equivalent to multiway number partitioning. A special case of identical machine scheduling is single-machine scheduling. - -In the standard three-field notation for optimal job scheduling problems, the identical-machines variant is denoted by P in the first field. For example, "P||$C_\max$" is an identical machine scheduling problem with no constraints, where the goal is to minimize the maximum completion time. - -In some variants of the problem, instead of minimizing the maximum completion time, it is desired to minimize the average completion time (averaged over all n jobs); it is denoted by P||$\sum C_i$. More generally, when some jobs are more important than others, it may be desired to minimize a weighted average of the completion time, where each job has a different weight. This is denoted by P||$\sum w_i C_i$. - -Minimizing the average completion time (P||$\sum C_i$) can be done in polynomial time. The SPT algorithm (Shortest Processing Time First) sorts the jobs by their length, shortest first, and then assigns them to the processor with the earliest end time so far. It runs in time O(n log n), and minimizes the average completion time on identical machines, P||$\sum C_i$.
- -* There can be many SPT schedules; finding the SPT schedule with the smallest finish time (also called OMFT, optimal mean finish time) is NP-hard. - -Minimizing the weighted average completion time is NP-hard even on identical machines, by reduction from the knapsack problem. - -Minimizing the maximum completion time (the makespan) is also NP-hard. The simple greedy list-scheduling algorithm, which assigns each job in turn to the machine with the smallest current load, attains an approximation factor of $2-1/m$. The bound is tight for any m. This algorithm runs in time O(n). - -* The specific list-scheduling algorithm called Longest Processing Time First (LPT), which sorts the jobs by descending length, is a $4/3-1/(3m)$ approximation for identical machines. It is also called greedy number partitioning (both SPT and LPT are sketched in code below). - -Coffman, Garey and Johnson presented a different algorithm called the multifit algorithm, using techniques from bin packing, which has an approximation factor of 13/11≈1.182. - -Huang and Lu presented a simple polynomial-time algorithm that attains an 11/9≈1.222 approximation in time O(m log m + n), through the more general problem of maximin-share allocation of chores. - -Sahni presented a PTAS that attains (1+ε)OPT in time $O(n\cdot (n^2 / \epsilon)^{m-1})$. It is an FPTAS if m is fixed. For m=2, the run-time improves to $O(n^2 / \epsilon)$. The algorithm uses a technique called interval partitioning. - -Hochbaum and Shmoys presented several approximation algorithms for any number of identical machines (even when the number of machines is not fixed): - -* For any r > 0, an algorithm with approximation ratio at most $(6/5+2^{-r})$ in time $O(n(r+\log{n}))$. - -* For any r > 0, an algorithm with approximation ratio at most $(7/6+2^{-r})$ in time $O(n(r m^4+\log{n}))$. - -* For any ε>0, an algorithm with approximation ratio at most (1+ε) in time $O((n/\varepsilon)^{(1/\varepsilon^2)})$. This is a PTAS. Note that, when the number of machines is a part of the input, the problem is strongly NP-hard, so no FPTAS is possible. - -Leung improved the run-time of this algorithm to $O\left((n/\varepsilon)^{(1/\varepsilon)\log{(1/\varepsilon)}}\right)$. - -Maximizing the minimum completion time (P||$C_\min$) is applicable when the "jobs" are actually spare parts that are required to keep the machines running, and they have different lifetimes. The goal is to keep machines running for as long as possible. The LPT algorithm attains at least $\frac{3m-1}{4m-2}$ of the optimum. - -Woeginger presented a PTAS that attains an approximation factor of $1-{\varepsilon}$ in time $O(c_{\varepsilon}n\log{k})$, where $c_{\varepsilon}$ is a huge constant that is exponential in the required approximation factor ε. The algorithm uses Lenstra's algorithm for integer linear programming. - -Alon, Azar, Woeginger and Yadid consider a more general objective function. Given a positive real function f, which depends only on the completion times Ci, they consider the objectives of minimizing $\sum_{i=1}^m f(C_i)$, minimizing $\max_{i=1}^m f(C_i)$, maximizing $\sum_{i=1}^m f(C_i)$, and maximizing $\min_{i=1}^m f(C_i)$. They prove that, if f is non-negative, convex, and satisfies a strong continuity assumption that they call "F*", then both minimization problems have a PTAS. Similarly, if f is non-negative, concave, and satisfies F*, then both maximization problems have a PTAS. In both cases, the run-time of the PTAS is O(n), but with constants that are exponential in 1/ε.
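The SPT and LPT rules described above both reduce to the same heap pattern: pull the machine that becomes free earliest (equivalently, the least-loaded machine) and give it the next job. A minimal sketch, not part of the original article; the job lengths and machine count below are arbitrary illustrative choices:

```python
import heapq

def spt_total_completion(jobs, m):
    """SPT rule: process jobs shortest first, each on the machine that
    becomes free earliest; returns the sum of completion times."""
    loads = [0.0] * m
    heapq.heapify(loads)
    total = 0.0
    for p in sorted(jobs):
        t = heapq.heappop(loads) + p   # completion time of this job
        total += t
        heapq.heappush(loads, t)
    return total

def lpt_makespan(jobs, m):
    """LPT rule (greedy number partitioning): process jobs longest first,
    each on the currently least-loaded machine; returns the makespan."""
    loads = [0.0] * m
    heapq.heapify(loads)
    for p in sorted(jobs, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + p)
    return max(loads)

jobs = [5, 7, 6, 2, 4, 3, 1]
print(spt_total_completion(jobs, m=3))  # minimizes total (hence average) completion time
print(lpt_makespan(jobs, m=3))          # within 4/3 - 1/(3m) of the optimal makespan
```

On this instance LPT returns a makespan of 10, which happens to be optimal since the total processing time 28 cannot be split into three loads of at most 9.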
diff --git a/wiki/wikipedia/2869.txt deleted file mode 100644 index 0bff0f56d7df0a9bacb87a5a2d501ec487e0a15b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2869.txt +++ /dev/null @@ -1,30 +0,0 @@ -In combinatorial mathematics, Stanley's reciprocity theorem, named after MIT mathematician Richard P. Stanley, states that a certain functional equation is satisfied by the generating function of any rational cone (defined below) and the generating function of the cone's interior. - -A rational cone is the set of all d-tuples - -(a1, ..., ad) - -of nonnegative integers satisfying a system of inequalities -$$ -M\left[\begin{matrix}a_1 \\ \vdots \\ a_d\end{matrix}\right] \geq \left[\begin{matrix}0 \\ \vdots \\ 0\end{matrix}\right] -$$ - -where M is a matrix of integers. A d-tuple satisfying the corresponding strict inequalities, i.e., with ">" rather than "≥", is in the interior of the cone. - -The generating function of such a cone is -$$ -F(x_1,\dots,x_d)=\sum_{(a_1,\dots,a_d)\in {\rm cone}} x_1^{a_1}\cdots x_d^{a_d}. -$$ - -The generating function Fint(x1, ..., xd) of the interior of the cone is defined in the same way, but one sums over d-tuples in the interior rather than in the whole cone. - -It can be shown that these are rational functions. - -Stanley's reciprocity theorem states that for a rational cone as above, we have -$$ -F(1/x_1,\dots,1/x_d)=(-1)^d F_{\rm int}(x_1,\dots,x_d). -$$ - -Matthias Beck and Mike Develin have shown how to prove this by using the calculus of residues. Develin has said that this amounts to proving the result "without doing any work". - -Stanley's reciprocity theorem generalizes Ehrhart-Macdonald reciprocity for Ehrhart polynomials of rational convex polytopes. diff --git a/wiki/wikipedia/287.txt deleted file mode 100644 index d81ddef2f24c167821cd795556f75e02441b824f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/287.txt +++ /dev/null @@ -1,9 +0,0 @@ -The Symmetric hypergraph theorem is a theorem in combinatorics that puts an upper bound on the chromatic number of a graph (or hypergraph in general). The original reference for this result is unknown, and it has been called folklore. - -A group $G$ acting on a set $S$ is called transitive if given any two elements $x$ and $y$ in $S$, there exists an element $f$ of $G$ such that $f(x) = y$. A graph (or hypergraph) is called symmetric if its automorphism group is transitive. - -Theorem. Let $H = (S, E)$ be a symmetric hypergraph. Let $m = |S|$, let $\chi(H)$ denote the chromatic number of $H$, and let $\alpha(H)$ denote the independence number of $H$. Then - -
$$ -\chi(H) \leq 1 + \frac{\ln{m}}{-\ln{(1-\alpha(H)/m)}}. -$$
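As a quick sanity check of the bound (not part of the original article), it can be evaluated for odd cycles, which are vertex-transitive and hence symmetric; the cycle lengths below are arbitrary choices:

```python
import math

# Odd cycle C_m: independence number alpha = (m - 1) // 2, chromatic number 3.
for m in [5, 7, 9, 11, 101]:
    alpha = (m - 1) // 2
    bound = 1 + math.log(m) / (-math.log(1 - alpha / m))
    print(f"m={m}: chi=3, bound={bound:.2f}")  # bound always >= 3 here
```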
    - -This theorem has applications to Ramsey theory, specifically graph Ramsey theory. Using this theorem, a relationship between the graph Ramsey numbers and the extremal numbers can be shown (see Graham-Rothschild-Spencer for the details). diff --git a/wiki/wikipedia/2870.txt b/wiki/wikipedia/2870.txt deleted file mode 100644 index c4cd7a66822ee1ea74d7744577b1474f7d999209..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2870.txt +++ /dev/null @@ -1,21 +0,0 @@ -In statistics, the Chapman–Robbins bound or Hammersley–Chapman–Robbins bound is a lower bound on the variance of estimators of a deterministic parameter. It is a generalization of the Cramér–Rao bound; compared to the Cramér–Rao bound, it is both tighter and applicable to a wider range of problems. However, it is usually more difficult to compute. - -The bound was independently discovered by John Hammersley in 1950, and by Douglas Chapman and Herbert Robbins in 1951. - -Let θ ∈ Rn be an unknown, deterministic parameter, and let X ∈ Rk be a random variable, interpreted as a measurement of θ. Suppose the probability density function of X is given by p(x; θ). It is assumed that p(x; θ) is well-defined and that p(x; θ) > 0 for all values of x and θ. - -Suppose δ(X) is an unbiased estimate of an arbitrary scalar function g: Rn → R of θ, i.e., -$$ -\operatorname E_\theta\{\delta(X)\} = g(\theta)\text{ for all }\theta. -$$ - -The Chapman–Robbins bound then states that -$$ -\operatorname{Var}_\theta(\delta(X)) \ge \sup_\Delta \frac{\left[ g(\theta+\Delta) - g(\theta) \right]^2}{\operatorname E_\theta \left[ \tfrac{p(X;\theta+\Delta)}{p(X;\theta)} - 1 \right]^2}. -$$ - -Note that the denominator in the lower bound above is exactly the $ \chi^2$-divergence of $ p(\cdot; \theta+\Delta)$ with respect to $ p(\cdot; \theta)$. - -The expression inside the supremum in the Chapman–Robbins bound converges to the Cramér–Rao bound when Δ → 0, assuming the regularity conditions of the Cramér–Rao bound hold. This implies that, when both bounds exist, the Chapman–Robbins version is always at least as tight as the Cramér–Rao bound; in many cases, it is substantially tighter. - -The Chapman–Robbins bound also holds under much weaker regularity conditions. For example, no assumption is made regarding differentiability of the probability density function p(x; θ). When p(x; θ) is non-differentiable, the Fisher information is not defined, and hence the Cramér–Rao bound does not exist. diff --git a/wiki/wikipedia/2871.txt b/wiki/wikipedia/2871.txt deleted file mode 100644 index 13f554d5851247e382fa0f2ff808fb5800c5e5e6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2871.txt +++ /dev/null @@ -1,5 +0,0 @@ -Picross e is a series of nonogram puzzle video games developed and published by Jupiter for the Nintendo 3DS handheld game console. It is the successor to Jupiter's Nintendo-published Picross games including the Mario's Picross series and Picross DS. - -Jupiter used the mechanics and UI of their Picross e games as the basis for a number of licensed spin-off games: - -Reception towards the series from professional critics has been "average" according to aggregate review website Metacritic and GameRankings. 
diff --git a/wiki/wikipedia/2872.txt deleted file mode 100644 index 2b02adcb0bc53e46a6fabf2353990a36fa3efa84..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2872.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematics, and in particular number theory, Grimm's conjecture (named after Carl Albert Grimm, 1 April 1926 – 2 January 2018) states that to each element of a set of consecutive composite numbers one can assign a distinct prime that divides it. It was first published in the American Mathematical Monthly, 76 (1969), 1126–1128. - -If n + 1, n + 2, …, n + k are all composite numbers, then there are k distinct primes pi such that pi divides n + i for 1 ≤ i ≤ k. - -A weaker, though still unproven, version of this conjecture states: If there is no prime in the interval $[n+1, n+k]$, then $\prod_{1\le x\le k}(n+x)$ has at least k distinct prime divisors. diff --git a/wiki/wikipedia/2873.txt deleted file mode 100644 index c07d3253d6f49c11db316f0de601ca3526f4dfce..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2873.txt +++ /dev/null @@ -1,71 +0,0 @@ -In mathematics, Fatou components are components of the Fatou set. They were named after Pierre Fatou. - -If f is a rational function -$$ -f = \frac{P(z)}{Q(z)} -$$ - -defined in the extended complex plane, and if it is a nonlinear function (degree > 1) -$$ - d(f) = \max(\deg(P), \deg(Q))\geq 2, -$$ - -then for a periodic component $U$ of the Fatou set, exactly one of the following holds: - -# $U$ contains an attracting periodic point - -# $U$ is parabolic - -# $U$ is a Siegel disc: a simply connected Fatou component on which f(z) is analytically conjugate to a Euclidean rotation of the unit disc onto itself by an irrational rotation angle. - -# $U$ is a Herman ring: a doubly connected Fatou component (an annulus) on which f(z) is analytically conjugate to a Euclidean rotation of a round annulus, again by an irrational rotation angle. - - - -File:Julia-set_N_z3-1.png|Julia set (white) and Fatou set (dark red/green/blue) for $f: z\mapsto z-\frac{g}{g'}(z)$ with $g: z \mapsto z^3-1$ in the complex plane. - -Basilica Julia set, level curves of escape and attraction time.png|Julia set with superattracting cycles (hyperbolic) in the interior and the exterior - -Basilica_Julia_set_-_DLD.png|Level curves and rays in superattractive case - -Cauliflower Julia set DLD field lines.png|Julia set with parabolic cycle - -Quadratic Golden Mean Siegel Disc Average Velocity - Gray.png|Julia set with Siegel disc (elliptic case) - -Herman Standard.png|Julia set with Herman ring - - - -The components of the map $f(z) = z - (z^3-1)/(3z^2)$ contain the attracting points that are the solutions to $z^3=1$. This is because this is the map used when finding solutions to the equation $z^3=1$ with the Newton–Raphson formula. The solutions must naturally be attracting fixed points (a numerical sketch of these basins appears below). - -The map -$$ -f(z) = e^{2 \pi i t} z^2(z - 4)/(1 - 4z) -$$ - -with t = 0.6151732... will produce a Herman ring. It is shown by Shishikura that the degree of such a map must be at least 3, as in this example.
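The basins of attraction in the Newton-map example above are easy to compute numerically. A minimal sketch, not from the original article, assuming NumPy; the grid size and iteration count are arbitrary choices:

```python
import numpy as np

# Iterate the Newton map f(z) = z - (z^3 - 1)/(3 z^2) on a grid of starting
# points and record which cube root of unity each point is attracted to;
# the three basins are Fatou components, their common boundary the Julia set.
roots = np.exp(2j * np.pi * np.arange(3) / 3)
xs = np.linspace(-1.5, 1.5, 301)
z = xs[None, :] + 1j * xs[:, None]
with np.errstate(divide="ignore", invalid="ignore"):  # z = 0 hits a pole of the map
    for _ in range(40):
        z = z - (z**3 - 1) / (3 * z**2)
basin = np.argmin(np.abs(z[..., None] - roots), axis=-1)
print(np.bincount(basin.ravel(), minlength=3))  # grid points attracted to each root
```

Coloring the grid by `basin` reproduces the familiar three-lobed Newton fractal whose Julia set is shown in the gallery above.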
- -If the degree d is greater than 2, then there is more than one critical point, and there can be more than one type of component: - - - -Herman+Parabolic.png|Herman+Parabolic - -Cubic set z^3+A*z+c with two cycles of length 3 and 105.png|Period 3 and 105 - -Julia set z+0.5z2-0.5z3.png|attracting and parabolic - -Geometrically finite Julia set.png|period 1 and period 1 - -Julia set f(z)=1 over az5+z3+bz.png|period 4 and 4 (2 attracting basins) - -Julia set for f(z)=1 over (z3+a*z+ b) with a = 2.099609375 and b = 0.349609375.png|two period 2 basins - - - -In the case of transcendental functions there is another type of periodic Fatou component, called a Baker domain: these are "domains on which the iterates tend to an essential singularity (not possible for polynomials and rational functions)". An example of such a function is: -$$ -f(z) = z - 1 + (1 - 2z)e^z -$$ - -Transcendental maps may have wandering domains: these are Fatou components that are not eventually periodic. diff --git a/wiki/wikipedia/2874.txt deleted file mode 100644 index 0ea7e6920d93412da4336f29f6bc173e013f0f34..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2874.txt +++ /dev/null @@ -1,17 +0,0 @@ -The Cantor–Bernstein–Schroeder theorem of set theory has a counterpart for measurable spaces, sometimes called the Borel Schroeder–Bernstein theorem, since measurable spaces are also called Borel spaces. This theorem, whose proof is quite easy, is instrumental when proving that two measurable spaces are isomorphic. The general theory of standard Borel spaces contains very strong results about isomorphic measurable spaces, see Kuratowski's theorem. However, (a) the latter theorem is very difficult to prove, (b) the former theorem is satisfactory in many important cases (see Examples), and (c) the former theorem is used in the proof of the latter theorem. - -Let $X$ and $Y$ be measurable spaces. If there exist injective, bimeasurable maps $f : X \to Y, $ $g : Y \to X, $ then $X$ and $Y$ are isomorphic (the Schröder–Bernstein property). - -The phrase "$f$ is bimeasurable" means that, first, $f$ is measurable (that is, the preimage $f^{-1}(B)$ is measurable for every measurable $B \subset Y $), and second, the image $f(A)$ is measurable for every measurable $A \subset X $. (Thus, $f(X)$ must be a measurable subset of $ Y, $ not necessarily the whole $ Y. $) - -An isomorphism (between two measurable spaces) is, by definition, a bimeasurable bijection. If it exists, these measurable spaces are called isomorphic. - -First, one constructs a bijection $h : X \to Y $ out of $f$ and $g$ exactly as in the proof of the Cantor–Bernstein–Schroeder theorem. Second, $h$ is measurable, since it coincides with $f$ on a measurable set and with $g^{-1}$ on its complement. Similarly, $h^{-1}$ is measurable. - -The open interval (0, 1) and the closed interval [0, 1] are evidently non-isomorphic as topological spaces (that is, not homeomorphic). However, they are isomorphic as measurable spaces. Indeed, the closed interval is evidently isomorphic to a shorter closed subinterval of the open interval. Also the open interval is evidently isomorphic to a part of the closed interval (just itself, for instance). - -The real line $\mathbb{R}$ and the plane $\mathbb{R}^2$ are isomorphic as measurable spaces.
It is immediate to embed $\mathbb{R}$ into $\mathbb{R}^2.$ The converse, embedding of $\mathbb{R}^2$ into $\mathbb{R}$ (as measurable spaces, of course, not as topological spaces) can be made by a well-known trick with interspersed digits; for example, $g(\pi, 100e)$ interleaves the decimal digits of $3.14159\dots$ and $271.82818\dots$ into a single real number. - -The map $ g : \mathbb{R}^2 \to \mathbb{R}$ is clearly injective. It is easy to check that it is bimeasurable. (However, it is not bijective; for example, the number $ 1/11 = 0.090909\dots $ is not of the form $ g(x,y) $). diff --git a/wiki/wikipedia/2875.txt deleted file mode 100644 index dad4b34a8b35e965f6bf63e973e4de5977a74025..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2875.txt +++ /dev/null @@ -1,7 +0,0 @@ -In the study of planar graphs, blossom trees are trees with additional directed half edges. Each blossom tree is associated with an embedding of a planar graph. Blossom trees can be used to sample random planar graphs. - -A blossom tree is constructed from a rooted tree embedded in the plane by adding opening and closing stems to vertices. The number of opening and closing stems must match. Some authors require that blossom trees be rooted and put conditions on which kind of stems they can carry. The terms leaves and blossoms are also sometimes used for opening and closing stems. If we orient the added edges from opening to closing stem, we have no counter-clockwise edges. This process takes linear time. - -Similarly, an embedding of a rooted planar graph can be encoded as a blossom tree. If the root is in a corner, this can be done in linear time. The edges of a rooted planar graph can be oriented so that there is a path from the root to any vertex, but there are no counter-clockwise cycles. In this case, one can use a depth-first algorithm to turn it into a blossom tree. Starting with the root vertex, look at every edge incident to it. If it points away from our current vertex, cut it, labelling the outward half-edge as a closing stem and the inward half-edge as an opening stem. If it points toward our current vertex, mark it as to be kept and set its other endpoint as our current vertex. Continue until all edges have been considered. If the map is not rooted in a corner, constructing the blossom tree takes quadratic time. - -Blossom trees are also used to randomly generate large knot diagrams. Knots can be represented by 4-regular planar graphs where each node is marked as an overcrossing or an undercrossing. Blossom trees can be used to generate random 4-regular planar graphs. However, these do not always give knot diagrams as there may be more than one component. This can be checked in cubic time. diff --git a/wiki/wikipedia/2876.txt deleted file mode 100644 index cb6e84a5dafc0547dd0e80e05601533641b4362e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2876.txt +++ /dev/null @@ -1 +0,0 @@ -WS-Transaction is a Web Services specification developed by BEA Systems, IBM, and Microsoft. The WS-Transaction specification describes coordination types that are used with the extensible coordination framework described in the WS-Coordination specification. It defines two coordination types: Atomic Transaction (AT) for individual operations, and Business Activity (BA) for long running transactions. Developers can use either or both of these coordination types when building applications that require consistent agreement on the outcome of distributed activities.
diff --git a/wiki/wikipedia/2877.txt b/wiki/wikipedia/2877.txt deleted file mode 100644 index 030533db532e62f89dbfcf43d302fb6887a74eb6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2877.txt +++ /dev/null @@ -1,182 +0,0 @@ -In mathematics, smooth functions (also called infinitely differentiable functions) and analytic functions are two very important types of functions. One can easily prove that any analytic function of a real argument is smooth. The converse is not true, as demonstrated with the counterexample below. - -One of the most important applications of smooth functions with compact support is the construction of so-called mollifiers, which are important in theories of generalized functions, such as Laurent Schwartz's theory of distributions. - -The existence of smooth but non-analytic functions represents one of the main differences between differential geometry and analytic geometry. In terms of sheaf theory, this difference can be stated as follows: the sheaf of differentiable functions on a differentiable manifold is fine, in contrast with the analytic case. - -The functions below are generally used to build up partitions of unity on differentiable manifolds. - -Consider the function -$$ -f(x)=\begin{cases}e^{-\frac{1}{x}}&\text{if }x>0,\\ 0&\text{if }x\le0,\end{cases} -$$ - -defined for every real number x. - -The function f has continuous derivatives of all orders at every point x of the real line. The formula for these derivatives is -$$ -f^{(n)}(x) = \begin{cases}\displaystyle\frac{p_n(x)}{x^{2n}}f(x) & \text{if }x>0, \\ 0 &\text{if }x \le 0,\end{cases} -$$ - -where pn(x) is a polynomial of degree n - 1 given recursively by p1(x) = 1 and -$$ -p_{n+1}(x)=x^2p_n'(x)-(2nx-1)p_n(x) -$$ - -for any positive integer n. From this formula, it is not completely clear that the derivatives are continuous at 0; this follows from the one-sided limit -$$ -\lim_{x\searrow 0} \frac{e^{-\frac{1}{x}}}{x^m} = 0 -$$ - -for any nonnegative integer m. - -By the power series representation of the exponential function, we have for every natural number $m$ (including zero) - -\frac1{x^m}=x\Bigl(\frac1{x}\Bigr)^{m+1}\le (m+1)!x\sum_{n=0}^\infty\frac1{n!}\Bigl(\frac1x\Bigr)^n - -=(m+1)!x e^{\frac{1}{x}},\qquad x>0, - -because all the positive terms for $n \neq m+1$ are added. Therefore, dividing this inequality by $e^{\frac{1}{x}}$ and taking the limit from above, - -\lim_{x\searrow0}\frac{e^{-\frac{1}{x}}}{x^m} - -\le (m+1)!\lim_{x\searrow0}x=0. - -We now prove the formula for the nth derivative of f by mathematical induction. Using the chain rule, the reciprocal rule, and the fact that the derivative of the exponential function is again the exponential function, we see that the formula is correct for the first derivative of f for all x > 0 and that p1(x) is a polynomial of degree 0. Of course, the derivative of f is zero for x < 0. - -It remains to show that the right-hand side derivative of f at x = 0 is zero. Using the above limit, we see that -$$ -f'(0)=\lim_{x\searrow0}\frac{f(x)-f(0)}{x-0}=\lim_{x\searrow0}\frac{e^{-\frac{1}{x}}}{x}=0. -$$ - -The induction step from n to n + 1 is similar. For x > 0 we get for the derivative - -\begin{align}f^{(n+1)}(x) - -&=\biggl(\frac{p'_n(x)}{x^{2n}}-2n\frac{p_n(x)}{x^{2n+1}}+\frac{p_n(x)}{x^{2n+2}}\biggr)f(x)\\ - -&=\frac{x^2p'_n(x)-(2nx-1)p_n(x)}{x^{2n+2}}f(x)\\ - -&=\frac{p_{n+1}(x)}{x^{2(n+1)}}f(x),\end{align} - -where pn+1(x) is a polynomial of degree n = (n + 1) - 1. Of course, the (n + 1)st derivative of f is zero for x < 0. 
For the right-hand side derivative of $f^{(n)}$ at x = 0 we obtain with the above limit -$$ -\lim_{x\searrow0} \frac{f^{(n)}(x) - f^{(n)}(0)}{x-0} = \lim_{x\searrow0} \frac{p_n(x)}{x^{2n+1}}e^{-1/x} = 0. -$$ - -As seen earlier, the function f is smooth, and all its derivatives at the origin are 0. Therefore, the Taylor series of f at the origin converges everywhere to the zero function, -$$ -\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}x^n=\sum_{n=0}^\infty \frac{0}{n!}x^n = 0,\qquad x\in\mathbb{R}, -$$ - -and so the Taylor series does not equal f(x) for x > 0. Consequently, f is not analytic at the origin. - -The function -$$ -g(x)=\frac{f(x)}{f(x)+f(1-x)},\qquad x\in\mathbb{R}, -$$ - -has a strictly positive denominator everywhere on the real line, hence g is also smooth. Furthermore, g(x) = 0 for x ≤ 0 and g(x) = 1 for x ≥ 1, hence it provides a smooth transition from the level 0 to the level 1 in the unit interval [0, 1]. To have the smooth transition in the real interval [a, b] with a < b, consider the function -$$ -\mathbb{R}\ni x\mapsto g\Bigl(\frac{x-a}{b-a}\Bigr). -$$ - -For real numbers a < b < c < d, the smooth function -$$ -\mathbb{R}\ni x\mapsto g\Bigl(\frac{x-a}{b-a}\Bigr)g\Bigl(\frac{d-x}{d-c}\Bigr) -$$ - -equals 1 on the closed interval [b, c] and vanishes outside the open interval (a, d), hence it can serve as a bump function. - -A more pathological example of an infinitely differentiable function which is not analytic at any point can be constructed by means of a Fourier series as follows. Let A := { 2^n : n ∈ $\mathbb{N}$ } be the set of all powers of 2, and define for all x ∈ $\mathbb{R}$ -$$ -F(x):=\sum_{k\in A} e^{-\sqrt{k}}\cos(kx)\ . -$$ - -Since the series $\sum_{k\in A} e^{-\sqrt{k}}k^n$ converges for all n ∈ $\mathbb{N}$, this function is easily seen to be of class $C^\infty$, by a standard inductive application of the Weierstrass M-test to demonstrate uniform convergence of each series of derivatives. Moreover, for any dyadic rational multiple of π, that is, for any x := π· p/q with p ∈ $\mathbb{N}$ and q ∈ A, and for every order of derivation n ∈ A with n ≥ 4 and n > q we have -$$ -F^{(n)}(x):=\sum_{k\in A} e^{-\sqrt{k}} k^n\cos(kx) = \sum_{k\in A\atop k>q} e^{-\sqrt{k}} k^n+\sum_{k\in A\atop k\le q} e^{-\sqrt{k}} k^n\cos(kx) \ge e^{-n} n^{2n} + O(q^n)\quad (\text{as } n\to \infty) -$$ - -where we used the fact that cos(kx) = 1 for all k > q, and we bounded the first sum from below by the term with $k=n^2$. As a consequence, at any such x ∈ $\mathbb{R}$ -$$ -\limsup_{n\to\infty} \left(\frac{|F^{(n)}(x)|}{n!}\right)^{1/n}=+\infty , -$$ - -so that the radius of convergence of the Taylor series of F at x is 0 by the Cauchy-Hadamard formula. Since the set of analyticity of a function is an open set, and since dyadic rationals are dense, we conclude that F is nowhere analytic in $\mathbb{R}$. - -For every sequence α0, α1, α2, . . . of real or complex numbers, the following construction shows the existence of a smooth function F on the real line which has these numbers as derivatives at the origin. In particular, every sequence of numbers can appear as the coefficients of the Taylor series of a smooth function. This result is known as Borel's lemma, after Émile Borel. - -With the smooth transition function g as above, define -$$ -h(x)=g(2+x)g(2-x),\qquad x\in\mathbb{R}. -$$ - -This function h is also smooth; it equals 1 on the closed interval [-1,1] and vanishes outside the open interval (-2,2).
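Before continuing the construction, note that f, g, and h are cheap to evaluate numerically; a minimal sketch (not part of the original article, assuming NumPy; the sample points are arbitrary):

```python
import numpy as np

def f(x):
    # exp(-1/x) for x > 0 and identically 0 for x <= 0; the inner guard
    # keeps the division well-defined where the branch is not taken.
    return np.where(x > 0, np.exp(-1.0 / np.where(x > 0, x, 1.0)), 0.0)

def g(x):
    # smooth transition: 0 for x <= 0, 1 for x >= 1 (denominator never vanishes)
    return f(x) / (f(x) + f(1.0 - x))

def h(x):
    # smooth plateau: 1 on [-1, 1], 0 outside (-2, 2)
    return g(2.0 + x) * g(2.0 - x)

print(g(np.array([-1.0, 0.25, 0.5, 0.75, 2.0])))   # 0 ... 0.5 ... 1
print(h(np.array([-2.5, -1.0, 0.0, 1.0, 2.5])))    # 0, 1, 1, 1, 0
```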
Using h, define for every natural number n (including zero) the smooth function -$$ -\psi_n(x)=x^nh(x),\qquad x\in\mathbb{R}, -$$ - -which agrees with the monomial $x^n$ on [-1,1] and vanishes outside the interval (-2,2). Hence, the k-th derivative of ψn at the origin satisfies -$$ -\psi_n^{(k)}(0)=\begin{cases}n!&\text{if }k=n,\\0&\text{otherwise,}\end{cases}\quad k,n\in\mathbb{N}_0, -$$ - -and the boundedness theorem implies that ψn and every derivative of ψn is bounded. Therefore, the constants -$$ -\lambda_n=\max\bigl\{1,|\alpha_n|,\|\psi_n\|_\infty,\|\psi_n^{(1)}\|_\infty,\ldots,\|\psi_n^{(n)}\|_\infty\bigr\},\qquad n\in\mathbb{N}_0, -$$ - -involving the supremum norm of ψn and its first n derivatives, are well-defined real numbers. Define the scaled functions -$$ -f_n(x)=\frac{\alpha_n}{n!\lambda_n^n}\psi_n(\lambda_n x),\qquad n\in\mathbb{N}_0,x\in\mathbb{R}. -$$ - -By repeated application of the chain rule, -$$ -f_n^{(k)}(x)=\frac{\alpha_n}{n!\lambda_n^{n-k}}\psi_n^{(k)}(\lambda_n x),\qquad k,n\in\mathbb{N}_0,x\in\mathbb{R}, -$$ - -and, using the previous result for the k-th derivative of ψn at zero, -$$ -f_n^{(k)}(0)=\begin{cases}\alpha_n&\text{if }k=n,\\0&\text{otherwise,}\end{cases}\qquad k,n\in\mathbb{N}_0. -$$ - -It remains to show that the function -$$ -F(x)=\sum_{n=0}^\infty f_n(x),\qquad x\in\mathbb{R}, -$$ - -is well defined and can be differentiated term-by-term infinitely many times. To this end, observe that for every k - -\sum_{n=0}^\infty\|f_n^{(k)}\|_\infty - -\le \sum_{n=0}^{k+1}\frac{|\alpha_n|}{n!\lambda_n^{n-k}}\|\psi_n^{(k)}\|_\infty - -+\sum_{n=k+2}^\infty\frac1{n!} - -\underbrace{\frac1{\lambda_n^{n-k-2}}}_{\le1} - -\underbrace{\frac{|\alpha_n|}{\lambda_n}}_{\le1} - -\underbrace{\frac{\|\psi_n^{(k)}\|_\infty}{\lambda_n}}_{\le1} - -<\infty, - -where the remaining infinite series converges by the ratio test. - -For every radius r > 0, -$$ -\mathbb{R}^n\ni x\mapsto \Psi_r(x)=f(r^2-\|x\|^2) -$$ - -with Euclidean norm ||x|| defines a smooth function on n-dimensional Euclidean space with support in the ball of radius r, but $\Psi_r(0)>0$. - -This pathology cannot occur with differentiable functions of a complex variable rather than of a real variable. Indeed, all holomorphic functions are analytic, so that the failure of the function f defined in this article to be analytic in spite of its being infinitely differentiable is an indication of one of the most dramatic differences between real-variable and complex-variable analysis. - -Note that although the function f has derivatives of all orders over the real line, the analytic continuation of f from the positive half-line x > 0 to the complex plane, that is, the function -$$ -\mathbb{C}\setminus\{0\}\ni z\mapsto e^{-\frac{1}{z}}\in\mathbb{C}, -$$ - -has an essential singularity at the origin, and hence is not even continuous, much less analytic. By the great Picard theorem, it attains every complex value (with the exception of zero) infinitely many times in every neighbourhood of the origin. diff --git a/wiki/wikipedia/2878.txt deleted file mode 100644 index a59f72d751401a7f4c3843e2fcf590929a691582..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2878.txt +++ /dev/null @@ -1,74 +0,0 @@ -In mathematics and computer science, graph edit distance (GED) is a measure of similarity (or dissimilarity) between two graphs. - -The concept of graph edit distance was first formalized mathematically by Alberto Sanfeliu and King-Sun Fu in 1983.
- -A major application of graph edit distance is in inexact graph matching, such as error-tolerant pattern recognition in machine learning. - -The graph edit distance between two graphs is related to the string edit distance between strings. With the interpretation of strings as connected, directed acyclic graphs of maximum degree one, classical definitions of edit distance such as Levenshtein distance, Hamming distance and Jaro–Winkler distance may be interpreted as graph edit distances between suitably constrained graphs. Likewise, graph edit distance is also a generalization of tree edit distance between rooted trees. - -The mathematical definition of graph edit distance is dependent upon the definitions of the graphs over which it is defined, i.e. whether and how the vertices and edges of the graph are labeled and whether the edges are directed. - -Generally, given a set of graph edit operations (also known as elementary graph operations), the graph edit distance between two graphs $g_{1}$ and $g_{2}$, written as $GED(g_{1},g_{2})$, can be defined as -$$ - GED(g_{1},g_{2}) = \min_{(e_{1},...,e_{k}) \in \mathcal{P}(g_{1},g_{2})} \sum_{i=1}^{k} c(e_{i}) -$$ - -where $\mathcal{P}(g_{1},g_{2})$ denotes the set of edit paths transforming $g_{1}$ into (a graph isomorphic to) $g_{2}$ and $c(e) \ge 0$ is the cost of each graph edit operation $e$. - -The set of elementary graph edit operators typically includes: - -vertex insertion to introduce a single new labeled vertex to a graph. - -vertex deletion to remove a single (often disconnected) vertex from a graph. - -vertex substitution to change the label (or color) of a given vertex. - -edge insertion to introduce a new colored edge between a pair of vertices. - -edge deletion to remove a single edge between a pair of vertices. - -edge substitution to change the label (or color) of a given edge. - -Additional, but less common, operators include operations such as edge splitting that introduces a new vertex into an edge (also creating a new edge), and edge contraction that eliminates vertices of degree two between edges (of the same color). Although such complex edit operators can be defined in terms of more elementary transformations, their use allows finer parameterization of the cost function $c$ when the operator is cheaper than the sum of its constituents. - -A deep analysis of the elementary graph edit operators has been presented in the literature, and methods have been proposed to automatically deduce these elementary graph edit operators; some algorithms also learn the edit costs online. - -Graph edit distance finds applications in handwriting recognition, fingerprint recognition and cheminformatics. - -Exact algorithms for computing the graph edit distance between a pair of graphs typically transform the problem into one of finding the minimum cost edit path between the two graphs. - -The computation of the optimal edit path is cast as a pathfinding search or shortest path problem, often implemented as an A* search algorithm. - -In addition to exact algorithms, a number of efficient approximation algorithms are also known.
Most of them have cubic computational time. - -Moreover, there is an algorithm that deduces an approximation of the GED in linear time. - -Despite the above algorithms sometimes working well in practice, in general the problem of computing graph edit distance is NP-hard (a proof is available online), and is even hard to approximate (formally, it is APX-hard). diff --git a/wiki/wikipedia/2879.txt deleted file mode 100644 index b3720a25dc1caf6cca204a809ea2f2ad0636c584..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2879.txt +++ /dev/null @@ -1,106 +0,0 @@ -In complex analysis, the argument principle (or Cauchy's argument principle) relates the difference between the number of zeros and poles of a meromorphic function to a contour integral of the function's logarithmic derivative. - -Specifically, if f(z) is a meromorphic function inside and on some closed contour C, and f has no zeros or poles on C, then -$$ -\frac{1}{2\pi i}\oint_{C} {f'(z) \over f(z)} dz=Z-P -$$ - -where Z and P denote respectively the number of zeros and poles of f(z) inside the contour C, with each zero and pole counted as many times as its multiplicity and order, respectively, indicate. This statement of the theorem assumes that the contour C is simple, that is, without self-intersections, and that it is oriented counter-clockwise. - -More generally, suppose that f(z) is a meromorphic function on an open set Ω in the complex plane and that C is a closed curve in Ω which avoids all zeros and poles of f and is contractible to a point inside Ω. For each point z ∈ Ω, let n(C,z) be the winding number of C around z. Then -$$ -\frac{1}{2\pi i}\oint_{C} \frac{f'(z)}{f(z)} dz = \sum_a n(C,a) - \sum_b n(C,b) -$$ - -where the first summation is over all zeros a of f counted with their multiplicities, and the second summation is over the poles b of f counted with their orders. - -The contour integral $\oint_{C} \frac{f'(z)}{f(z)} dz$ can be interpreted as 2πi times the winding number of the path f(C) around the origin, using the substitution w = f(z): -$$ -\oint_{C} \frac{f'(z)}{f(z)} dz = \oint_{f(C)} \frac{1}{w} dw -$$ - -That is, it is i times the total change in the argument of f(z) as z travels around C, explaining the name of the theorem; this follows from -$$ -\frac{d}{dz}\log(f(z))=\frac{f'(z)}{f(z)} -$$ - -and the relation between arguments and logarithms. - -Let zZ be a zero of f. We can write f(z) = (z - zZ)^k g(z) where k is the multiplicity of the zero, and thus g(zZ) ≠ 0. We get -$$ -f'(z)=k(z-z_Z)^{k-1}g(z)+(z-z_Z)^kg'(z)\! -$$ - -and -$$ -{f'(z)\over f(z)}={k \over z-z_Z}+{g'(z)\over g(z)}. -$$ - -Since g(zZ) ≠ 0, it follows that g'(z)/g(z) has no singularities at zZ, and thus is analytic at zZ, which implies that the residue of f′(z)/f(z) at zZ is k. - -Let zP be a pole of f. We can write f(z) = (z - zP)^{-m} h(z) where m is the order of the pole, and h(zP) ≠ 0. Then, -$$ -f'(z)=-m(z-z_P)^{-m-1}h(z)+(z-z_P)^{-m}h'(z)\!. -$$ - -and -$$ -{f'(z)\over f(z)}={-m \over z-z_P}+{h'(z)\over h(z)} -$$ - -similarly as above. It follows that h′(z)/h(z) has no singularities at zP since h(zP) ≠ 0 and thus it is analytic at zP. We find that the residue of f′(z)/f(z) at zP is -m. - -Putting these together, each zero zZ of multiplicity k of f creates a simple pole for f′(z)/f(z) with the residue being k, and each pole zP of order m of f creates a simple pole for f′(z)/f(z) with the residue being -m. (Here, by a simple pole we mean a pole of order one.)
In addition, it can be shown that f′(z)/f(z) has no other poles, and so no other residues. - -By the residue theorem we have that the integral about C is the product of 2πi and the sum of the residues. Together, the sum of the k's for each zero zZ is the number of zeros counting multiplicities of the zeros, and likewise for the poles, and so we have our result. - -The argument principle can be used to efficiently locate zeros or poles of meromorphic functions on a computer. Even with rounding errors, the expression ${1\over 2\pi i}\oint_{C} {f'(z) \over f(z)} dz$ will yield results close to an integer; by determining these integers for different contours C one can obtain information about the location of the zeros and poles. Numerical tests of the Riemann hypothesis use this technique to get an upper bound for the number of zeros of Riemann's $\xi(s)$ function inside a rectangle intersecting the critical line. - -The proof of Rouché's theorem uses the argument principle. - -Modern books on feedback control theory quite frequently use the argument principle to serve as the theoretical basis of the Nyquist stability criterion. - -A consequence of the more general formulation of the argument principle is that, under the same hypothesis, if g is an analytic function in Ω, then -$$ - \frac{1}{2\pi i} \oint_C g(z)\frac{f'(z)}{f(z)} dz = \sum_a n(C,a)g(a) - \sum_b n(C,b)g(b). -$$ - -For example, if f is a polynomial having zeros z1, ..., zp inside a simple contour C, and g(z) = z^k, then -$$ - \frac{1}{2\pi i} \oint_C z^k\frac{f'(z)}{f(z)} dz = z_1^k+z_2^k+\cdots+z_p^k, -$$ - -is the power sum symmetric polynomial of the roots of f. - -Another consequence is that if we compute the complex integral -$$ -\oint_C f(z){g'(z) \over g(z)} dz -$$ - -for an appropriate choice of g and f, we have the Abel–Plana formula: -$$ - \sum_{n=0}^{\infty}f(n)-\int_{0}^{\infty}f(x)dx= f(0)/2+i\int_{0}^{\infty}\frac{f(it)-f(-it)}{e^{2\pi t}-1} dt -$$ - -which expresses the relationship between a discrete sum and its integral. - -There is an immediate generalization of the argument principle. Suppose that g is analytic in the region $\Omega$. Then -$$ -\frac{1}{2\pi i}\oint_{C} {f'(z) \over f(z)} g(z) dz = \sum_a g(a) n(C,a) - \sum_b g(b) n(C,b) -$$ - -where the first summation is again over all zeros a of f counted with their multiplicities, and the second summation is again over the poles b of f counted with their orders. - -According to the book by Frank Smithies (Cauchy and the Creation of Complex Function Theory, Cambridge University Press, 1997, p. 177), Augustin-Louis Cauchy presented a theorem similar to the above on 27 November 1831, during his self-imposed exile in Turin (then capital of the Kingdom of Piedmont-Sardinia) away from France. However, according to this book, only zeroes were mentioned, not poles. This theorem by Cauchy was only published many years later in 1874 in a hand-written form and so is quite difficult to read. Cauchy published a paper with a discussion on both zeroes and poles in 1855, two years before his death. diff --git a/wiki/wikipedia/288.txt deleted file mode 100644 index 68fa2840a282b7d484e1c475b29ffc9150274b7a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/288.txt +++ /dev/null @@ -1,9 +0,0 @@ -Jarke J. (Jack) van Wijk (born 1959) is a Dutch computer scientist, a professor in the Department of Mathematics and Computer Science at the Eindhoven University of Technology, and an expert in information visualization. - -Van Wijk received his M.S.
from the Delft University of Technology in 1982. His master's thesis, on simulation of traffic collisions, led him to become interested in computer visualization, and he remained at Delft for his doctoral studies, completing a Ph.D. in 1986 under the supervision of Dennis J. McConalogue. - -In information visualization, Van Wijk is known for his research in texture synthesis, treemaps, and flow visualization. His work on map projection won the 2009 Henry Johns Award of the British Cartographic Society for best cartographic journal article. - -He has twice been program co-chair for IEEE Visualization, and once for IEEE InfoVis. - -In graph drawing, van Wijk has worked on the visualization of small-world networks and on the depiction of abstract trees as biological trees. He has also conducted user studies that showed that the standard depiction of directed edges in graph drawings using arrowheads is less effective at conveying the directionality of the edges to readers than other conventions such as tapering. He was one of two invited speakers at the 19th International Symposium on Graph Drawing in 2011, and the capstone speaker at the IEEE Visualization 2013. diff --git a/wiki/wikipedia/2880.txt b/wiki/wikipedia/2880.txt deleted file mode 100644 index 3fcdd869e72069c545c8d9acb56c669ff3618e21..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2880.txt +++ /dev/null @@ -1,39 +0,0 @@ -In the geometry of triangles, the incircle and nine-point circle of a triangle are internally tangent to each other at the Feuerbach point of the triangle. The Feuerbach point is a triangle center, meaning that its definition does not depend on the placement and scale of the triangle. It is listed as X(11) in Clark Kimberling's Encyclopedia of Triangle Centers, and is named after Karl Wilhelm Feuerbach. - -Feuerbach's theorem, published by Feuerbach in 1822, states more generally that the nine-point circle is tangent to the three excircles of the triangle as well as its incircle. A very short proof of this theorem based on Casey's theorem on the bitangents of four circles tangent to a fifth circle was published by John Casey in 1866; Feuerbach's theorem has also been used as a test case for automated theorem proving. The three points of tangency with the excircles form the Feuerbach triangle of the given triangle. - -The incircle of a triangle ABC is a circle that is tangent to all three sides of the triangle. Its center, the incenter of the triangle, lies at the point where the three internal angle bisectors of the triangle cross each other. - -The nine-point circle is another circle defined from a triangle. It is so called because it passes through nine significant points of the triangle, among which the simplest to construct are the midpoints of the triangle's sides. The nine-point circle passes through these three midpoints; thus, it is the circumcircle of the medial triangle. - -These two circles meet in a single point, where they are tangent to each other. That point of tangency is the Feuerbach point of the triangle. - -Associated with the incircle of a triangle are three more circles, the excircles. These are circles that are each tangent to the three lines through the triangle's sides. Each excircle touches one of these lines from the opposite side of the triangle, and is on the same side as the triangle for the other two lines. Like the incircle, the excircles are all tangent to the nine-point circle. 
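Feuerbach's theorem is easy to verify numerically for a concrete triangle; a minimal sketch, not part of the original article, assuming NumPy, with an arbitrarily chosen triangle. Internal tangency of the incircle and nine-point circle means the distance between their centers equals the difference of their radii:

```python
import numpy as np

def circumcenter(P, Q, R):
    # Solve |X-P|^2 = |X-Q|^2 = |X-R|^2, a 2x2 linear system in X.
    A = 2 * np.array([Q - P, R - P])
    b = np.array([Q @ Q - P @ P, R @ R - P @ P])
    return np.linalg.solve(A, b)

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)

incenter = (a * A + b * B + c * C) / (a + b + c)
s = (a + b + c) / 2
r = np.sqrt((s - a) * (s - b) * (s - c) / s)          # inradius (Heron)

# nine-point circle = circumcircle of the medial triangle; its radius is R/2
N = circumcenter((B + C) / 2, (C + A) / 2, (A + B) / 2)
R9 = np.linalg.norm(N - (B + C) / 2)

# internal tangency: distance between centers equals difference of radii
print(np.isclose(np.linalg.norm(N - incenter), R9 - r))  # True
```

For the excircles the tangency is external, so the analogous check compares the center distance with the sum of the radii instead.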
Their points of tangency with the nine-point circle form a triangle, the Feuerbach triangle. - -The Feuerbach point lies on the line through the centers of the two tangent circles that define it. These centers are the incenter and nine-point center of the triangle. - -If x, y, and z denote the distances from the Feuerbach point to the midpoints of the triangle's three sides, then -$$ -x+y+z = 2\max(x,y,z), -$$ - -or, equivalently, the largest of the three distances equals the sum of the other two. Specifically, we have $x=\frac{R}{2OI}|b-c|, y=\frac{R}{2OI}|c-a|, z=\frac{R}{2OI}|a-b|, $ where O is the reference triangle's circumcenter and I is its incenter. - -The latter property also holds for the tangency point of any of the excircles with the nine–point circle: the greatest distance from this tangency to one of the original triangle's side midpoints equals the sum of the distances to the other two side midpoints. - -If the incircle of triangle ABC touches the sides BC, CA, AB at X, Y, and Z respectively, and the midpoints of these sides are respectively P, Q, and R, then with Feuerbach point F the triangles FPX, FQY, and FRZ are similar to the triangles AOI, BOI, COI respectively. - -The trilinear coordinates for the Feuerbach point are -$$ -1 - \cos (B - C) : 1 - \cos (C - A) : 1 - \cos (A - B). -$$ - -Its barycentric coordinates are -$$ -(s-a)(b-c)^2 : (s-b)(c-a)^2 : (s-c)(a-b)^2, -$$ - -where s is the triangle's semiperimeter (a+b+c)/2. - -The three lines from the vertices of the original triangle through the corresponding vertices of the Feuerbach triangle meet at another triangle center, listed as X(12) in the Encyclopedia of Triangle Centers. Its trilinear coordinates are: -$$ -1 + \cos (B - C) : 1 + \cos (C - A) : 1 + \cos (A - B). -$$ diff --git a/wiki/wikipedia/2881.txt deleted file mode 100644 index 43832eeb1da9b99a40d0f6e95263a5f77d4e39d9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2881.txt +++ /dev/null @@ -1,13 +0,0 @@ -Livedrive is an online cloud backup and sync storage service owned by j2 Global. The company provides users with unlimited backup space and 2,000 GB or more of sync storage. Livedrive enables users to access their data from mobile phones and tablets. Currently Livedrive has apps for iOS, Android, Windows, macOS and Chrome OS. - -Livedrive was founded in late 2008 by Andrew Michael. Another investor in the company was Nicholas Cowell. - -In October 2009 Livedrive entered the US marketplace via a distribution agreement with Lifeboat Distribution, an international specialty software distributor for security, application lifecycle, virtualization, and network infrastructure products. - -In April 2011, Livedrive created an April Fools video which falsely stated that the company was storing files on paper using QR codes. The story was picked up by several press sources as a true story, including CBS's MoneyWatch. - -On February 10, 2014, Livedrive was purchased by j2 Global. Livedrive is part of j2's Business Cloud Services division, which includes eFax, eVoice, Fusemail, Campaigner, and KeepITsafe. - -After shutting down hundreds of user accounts for "excessive bandwidth/storage", Livedrive was reported in August 2014 to be facing legal action from discontented customers. - -In September 2010, Livedrive added personal music and movie streaming to their accounts. This gave users the ability to listen to their own music collections or watch their movies on a remote computer, with transcoding handled by Livedrive. The company also gave users FTP access and unlimited versioning.
diff --git a/wiki/wikipedia/2882.txt deleted file mode 100644 index d1c2debf8d2ebac595fb52b6e30a29b9c8e15a73..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2882.txt +++ /dev/null @@ -1,187 +0,0 @@ -In computer science, a topological sort or topological ordering of a directed graph is a linear ordering of its vertices such that for every directed edge uv from vertex u to vertex v, u comes before v in the ordering. For instance, the vertices of the graph may represent tasks to be performed, and the edges may represent constraints that one task must be performed before another; in this application, a topological ordering is just a valid sequence for the tasks. Precisely, a topological sort is a graph traversal in which each node v is visited only after all its dependencies are visited. A topological ordering is possible if and only if the graph has no directed cycles, that is, if it is a directed acyclic graph (DAG). Any DAG has at least one topological ordering, and algorithms are known for constructing a topological ordering of any DAG in linear time. Topological sorting has many applications, especially in ranking problems such as feedback arc set. Topological sorting is possible even when the DAG has disconnected components. - -The canonical application of topological sorting is in scheduling a sequence of jobs or tasks based on their dependencies. The jobs are represented by vertices, and there is an edge from x to y if job x must be completed before job y can be started (for example, when washing clothes, the washing machine must finish before we put the clothes in the dryer). Then, a topological sort gives an order in which to perform the jobs. A closely related application of topological sorting algorithms was first studied in the early 1960s in the context of the PERT technique for scheduling in project management. In this application, the vertices of a graph represent the milestones of a project, and the edges represent tasks that must be performed between one milestone and another. Topological sorting forms the basis of linear-time algorithms for finding the critical path of the project, a sequence of milestones and tasks that controls the length of the overall project schedule. - -In computer science, applications of this type arise in instruction scheduling, ordering of formula cell evaluation when recomputing formula values in spreadsheets, logic synthesis, determining the order of compilation tasks to perform in makefiles, data serialization, and resolving symbol dependencies in linkers. It is also used to decide in which order to load tables with foreign keys in databases. - -The usual algorithms for topological sorting have running time linear in the number of nodes plus the number of edges, asymptotically, $O(\left|{V}\right| + \left|{E}\right|).$ - -One of these algorithms, first described by Kahn (1962), works by choosing vertices in the same order as the eventual topological sort. First, find a list of "start nodes" which have no incoming edges and insert them into a set S; at least one such node must exist in a non-empty acyclic graph.
Then: - -L ← Empty list that will contain the sorted elements - -S ← Set of all nodes with no incoming edge - -while S is not empty do - -remove a node n from S - -add n to L - -for each node m with an edge e from n to m do - -remove edge e from the graph - -if m has no other incoming edges then - -insert m into S - -if graph has edges then - -return error (graph has at least one cycle) - -else - -return L (a topologically sorted order) - -If the graph is a DAG, a solution will be contained in the list L (the solution is not necessarily unique). Otherwise, the graph must have at least one cycle and therefore a topological sort is impossible. - -Reflecting the non-uniqueness of the resulting sort, the structure S can be simply a set or a queue or a stack. Depending on the order that nodes n are removed from set S, a different solution is created. A variation of Kahn's algorithm that breaks ties lexicographically forms a key component of the Coffman–Graham algorithm for parallel scheduling and layered graph drawing. - -An alternative algorithm for topological sorting is based on depth-first search. The algorithm loops through each node of the graph, in an arbitrary order, initiating a depth-first search that terminates when it hits any node that has already been visited since the beginning of the topological sort or the node has no outgoing edges (i.e. a leaf node): - -L ← Empty list that will contain the sorted nodes - -while exists nodes without a permanent mark do - -select an unmarked node n - -visit(n) - -function visit(node n) - -if n has a permanent mark then - -return - -if n has a temporary mark then - -stop (not a DAG) - -mark n with a temporary mark - -for each node m with an edge from n to m do - -visit(m) - -remove temporary mark from n - -mark n with a permanent mark - -add n to head of L - -Each node n gets prepended to the output list L only after considering all other nodes which depend on n (all descendants of n in the graph). Specifically, when the algorithm adds node n, we are guaranteed that all nodes which depend on n are already in the output list L: they were added to L either by the recursive call to visit() which ended before the call to visit n, or by a call to visit() which started even before the call to visit n. Since each edge and node is visited once, the algorithm runs in linear time. This depth-first-search-based algorithm is the one described by Cormen et al. (2001); it seems to have been first described in print by Tarjan in 1976. - -On a parallel random-access machine, a topological ordering can be constructed in O(log^2 n) time using a polynomial number of processors, putting the problem into the complexity class NC^2. - -One method for doing this is to repeatedly square the adjacency matrix of the given graph, logarithmically many times, using min-plus matrix multiplication with maximization in place of minimization. The resulting matrix describes the longest path distances in the graph. Sorting the vertices by the lengths of their longest incoming paths produces a topological ordering. - -An algorithm for parallel topological sorting on distributed memory machines parallelizes the algorithm of Kahn for a DAG $G = (V, E)$. On a high level, the algorithm of Kahn repeatedly removes the vertices of indegree 0 and adds them to the topological sorting in the order in which they were removed. Since the outgoing edges of the removed vertices are also removed, there will be a new set of vertices of indegree 0, where the procedure is repeated until no vertices are left.
-On a parallel random-access machine, a topological ordering can be constructed in $O(\log^2 n)$ time using a polynomial number of processors, putting the problem into the complexity class NC$^2$.
-
-One method for doing this is to repeatedly square the adjacency matrix of the given graph, logarithmically many times, using min-plus matrix multiplication with maximization in place of minimization. The resulting matrix describes the longest path distances in the graph. Sorting the vertices by the lengths of their longest incoming paths produces a topological ordering.
-
-An algorithm for parallel topological sorting on distributed memory machines parallelizes the algorithm of Kahn for a DAG $G = (V, E)$. On a high level, the algorithm of Kahn repeatedly removes the vertices of indegree 0 and adds them to the topological sorting in the order in which they were removed. Since the outgoing edges of the removed vertices are also removed, there will be a new set of vertices of indegree 0, where the procedure is repeated until no vertices are left. This algorithm performs $D+1$ iterations, where D is the longest path in G. Each iteration can be parallelized, which is the idea of the following algorithm.
-
-In the following it is assumed that the graph partition is stored on p processing elements (PE) which are labeled $0, \dots, p-1$. Each PE i initializes a set of local vertices $Q_i^1$ with indegree 0, where the upper index represents the current iteration. Since all vertices in the local sets $Q_0^1, \dots, Q_{p-1}^1$ have indegree 0, i.e. they are not adjacent, they can be given in an arbitrary order for a valid topological sorting. To assign a global index to each vertex, a prefix sum is calculated over the sizes of $Q_0^1, \dots, Q_{p-1}^1$. So in each step, there are $\sum_{i=0}^{p-1} |Q_i|$ vertices added to the topological sorting.
-
-In the first step, PE j assigns the indices $\sum_{i=0}^{j-1} |Q_i^1|, \dots, \left(\sum_{i=0}^{j} |Q_i^1|\right) - 1$ to the local vertices in $Q_j^1$. These vertices in $Q_j^1$ are removed, together with their corresponding outgoing edges. For each outgoing edge $(u, v)$ with endpoint v in another PE $l, j \neq l$, the message $(u, v)$ is posted to PE l. After all vertices in $Q_j^1$ are removed, the posted messages are sent to their corresponding PE. Each message $(u, v)$ received updates the indegree of the local vertex v. If the indegree drops to zero, v is added to $Q_j^2$. Then the next iteration starts.
-
-In step k, PE j assigns the indices $a_{k-1} + \sum_{i=0}^{j-1} |Q_i^k|, \dots, a_{k-1} + \left(\sum_{i=0}^{j} |Q_i^k|\right) - 1$, where $a_{k-1}$ is the total number of vertices processed after step $k-1$. This procedure repeats until there are no vertices left to process, hence $\sum_{i=0}^{p-1} |Q_i^{D+1}| = 0$. Below is a high-level, single program, multiple data pseudocode overview of this algorithm.
-
-Note that the prefix sum for the local offsets $a_{k-1} + \sum_{i=0}^{j-1} |Q_i^k|, \dots, a_{k-1} + \left(\sum_{i=0}^{j} |Q_i^k|\right) - 1$ can be efficiently calculated in parallel.
-
- p processing elements with IDs from 0 to p-1
- Input: G = (V, E) DAG, distributed to PEs, PE index j = 0, ..., p - 1
- Output: topological sorting of G
-
- function traverseDAGDistributed
-     δ incoming degree of local vertices V
-     Q = {v ∈ V | δ[v] = 0}                     // All vertices with indegree 0
-     nrOfVerticesProcessed = 0
-     do
-         global build prefix sum over size of Q  // get offsets and total amount of vertices in this step
-         offset = nrOfVerticesProcessed + sum(Qi, i = 0 to j - 1)   // j is the processor index
-         foreach u in Q
-             localOrder[u] = index++;
-         foreach (u,v) in E do post message (u, v) to PE owning vertex v
-         nrOfVerticesProcessed += sum(|Qi|, i = 0 to p - 1)
-         deliver all messages to neighbors of vertices in Q
-         receive messages for local vertices V
-         remove all vertices in Q
-         foreach message (u, v) received:
-             if --δ[v] = 0
-                 add v to Q
-     while global size of Q > 0
-     return localOrder
-
-The communication cost depends heavily on the given graph partition. As for runtime, on a CRCW-PRAM model that allows fetch-and-decrement in constant time, this algorithm runs in $\mathcal{O} \left(\frac{m + n}{p} + D (\Delta + \log n)\right)$, where D is again the longest path in G and Δ the maximum degree.
-
-The topological ordering can also be used to quickly compute shortest paths through a weighted directed acyclic graph. Let V be the list of vertices in such a graph, in topological order.
Then the following algorithm computes the shortest path from some source vertex s to all other vertices: - -
-
-* Let d be an array of the same length as V; this will hold the shortest-path distances from s. Set d[s] = 0, all other d[u] = ∞.
-
-* Let p be an array of the same length as V, with all elements initialized to nil. Each p[u] will hold the predecessor of u in the shortest path from s to u.
-
-* Loop over the vertices u as ordered in V, starting from s:
-
-** For each vertex v directly following u (i.e., there exists an edge from u to v):
-
-*** Let w be the weight of the edge from u to v.
-
-*** Relax the edge: if d[v] > d[u] + w, set
-
-**** d[v] ← d[u] + w,
-
-**** p[v] ← u.
    - -Equivalently: - -
-
-* Let d be an array of the same length as V; this will hold the shortest-path distances from s. Set d[s] = 0, all other d[u] = ∞.
-
-* Let p be an array of the same length as V, with all elements initialized to nil. Each p[u] will hold the predecessor of u in the shortest path from s to u.
-
-* Loop over the vertices u as ordered in V, starting from s:
-
-** For each vertex v into u (i.e., there exists an edge from v to u):
-
-*** Let w be the weight of the edge from v to u.
-
-*** Relax the edge: if d[u] > d[v] + w, set
-
-**** d[u] ← d[v] + w,
-
-**** p[u] ← v.
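A minimal Python sketch of ours for the first (forward-relaxation) formulation, assuming the graph is given as adjacency lists of (successor, weight) pairs and that `order` lists the vertices in topological order:

```python
import math

def dag_shortest_paths(order, adj, s):
    """Single-source shortest paths in a weighted DAG (forward relaxation)."""
    d = {u: math.inf for u in order}   # shortest-path distance estimates
    p = {u: None for u in order}       # predecessor on the shortest path
    d[s] = 0
    for u in order:                    # vertices in topological order
        if d[u] == math.inf:
            continue                   # unreachable, or before s in the order
        for v, w in adj[u]:
            if d[v] > d[u] + w:        # relax the edge u -> v
                d[v] = d[u] + w
                p[v] = u
    return d, p

order = ["s", "a", "b"]
adj = {"s": [("a", 2), ("b", 5)], "a": [("b", 1)], "b": []}
print(dag_shortest_paths(order, adj, "s")[0])   # {'s': 0, 'a': 2, 'b': 3}
```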
-
-On a graph of n vertices and m edges, this algorithm takes Θ(n + m) time, i.e., linear time.
-
-If a topological sort has the property that all pairs of consecutive vertices in the sorted order are connected by edges, then these edges form a directed Hamiltonian path in the DAG. If a Hamiltonian path exists, the topological sort order is unique; no other order respects the edges of the path. Conversely, if a topological sort does not form a Hamiltonian path, the DAG will have two or more valid topological orderings, for in this case it is always possible to form a second valid ordering by swapping two consecutive vertices that are not connected by an edge to each other. Therefore, it is possible to test in linear time whether a unique ordering exists, and whether a Hamiltonian path exists, despite the NP-hardness of the Hamiltonian path problem for more general directed graphs (i.e. cyclic directed graphs).
-
-Topological orderings are also closely related to the concept of a linear extension of a partial order in mathematics. A partially ordered set is just a set of objects together with a definition of the "≤" inequality relation, satisfying the axioms of reflexivity (x ≤ x), antisymmetry (if x ≤ y and y ≤ x then x = y) and transitivity (if x ≤ y and y ≤ z, then x ≤ z). A total order is a partial order in which, for every two objects x and y in the set, either x ≤ y or y ≤ x. Total orders are familiar in computer science as the comparison operators needed to perform comparison sorting algorithms. For finite sets, total orders may be identified with linear sequences of objects, where the "≤" relation is true whenever the first object precedes the second object in the order; a comparison sorting algorithm may be used to convert a total order into a sequence in this way. A linear extension of a partial order is a total order that is compatible with it, in the sense that, if x ≤ y in the partial order, then x ≤ y in the total order as well.
-
-One can define a partial ordering from any DAG by letting the set of objects be the vertices of the DAG, and defining x ≤ y to be true, for any two vertices x and y, whenever there exists a directed path from x to y; that is, whenever y is reachable from x. With these definitions, a topological ordering of the DAG is the same thing as a linear extension of this partial order. Conversely, any partial ordering may be defined as the reachability relation in a DAG. One way of doing this is to define a DAG that has a vertex for every object in the partially ordered set, and an edge xy for every pair of objects for which x ≤ y. An alternative way of doing this is to use the transitive reduction of the partial ordering; in general, this produces DAGs with fewer edges, but the reachability relation in these DAGs is still the same partial order. By using these constructions, one can use topological ordering algorithms to find linear extensions of partial orders.
-
-By definition, the solution of a scheduling problem that includes a precedence graph is a valid solution to topological sort (irrespective of the number of machines); however, topological sort in itself is not enough to optimally solve a scheduling optimisation problem. Hu's algorithm is a popular method used to solve scheduling problems that require a precedence graph and involve processing times (where the goal is to minimise the largest completion time amongst all the jobs).
Like topological sort, Hu's algorithm is not unique and can be solved using DFS (by finding the largest path length and then assigning the jobs). diff --git a/wiki/wikipedia/2883.txt b/wiki/wikipedia/2883.txt deleted file mode 100644 index ac9a507d976c7f5545fba1640c99b6bc3279b849..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2883.txt +++ /dev/null @@ -1,22 +0,0 @@
-In mathematics, the Bombieri–Vinogradov theorem (sometimes simply called Bombieri's theorem) is a major result of analytic number theory, obtained in the mid-1960s, concerning the distribution of primes in arithmetic progressions, averaged over a range of moduli. The first result of this kind was obtained by Mark Barban in 1961 and the Bombieri–Vinogradov theorem is a refinement of Barban's result. The Bombieri–Vinogradov theorem is named after Enrico Bombieri and A. I. Vinogradov, who published on a related topic, the density hypothesis, in 1965.
-
-This result is a major application of the large sieve method, which developed rapidly in the early 1960s, from its beginnings in work of Yuri Linnik two decades earlier. Besides Bombieri, Klaus Roth was working in this area. In the late 1960s and early 1970s, many of the key ingredients and estimates were simplified by Patrick X. Gallagher.
-
-Let $x$ and $Q$ be any two positive real numbers with
-$$
-x^{1/2}\log^{-A}x \leq Q \leq x^{1/2}.
-$$
-
-Then
-$$
-\sum_{q\leq Q}\max_{y\leq x}\max_{\substack{a \\ (a,q)=1}}\left|\psi(y;q,a)-\frac{y}{\varphi(q)}\right|=O\!\left(x^{1/2}Q(\log x)^{5}\right),
-$$
-
-where $\varphi(q)$ is the Euler totient function and $\psi(y;q,a)$ denotes the Chebyshev function summing the von Mangoldt function over the integers up to $y$ that are congruent to $a$ modulo $q$; the implied constant depends only on $A$.
-
-In two dimensions, there is one regular star polygon $\{p/q\}$ for each rational number $p/q > 2$. In three dimensions, there are four Kepler–Poinsot polyhedra, and in four dimensions, ten Schläfli–Hess polychora; in higher dimensions, there are no non-convex regular figures.
-
-These can be generalized to tessellations of other spaces, especially uniform tessellations, notably tilings of Euclidean space (honeycombs), which have exceptional objects, and tilings of hyperbolic space. There are various exceptional objects in dimensions below 6, but in dimension 6 and above, the only regular polyhedra/tilings/hyperbolic tilings are the simplex, hypercube, cross-polytope, and hypercube lattice.
-
-Related to tilings and the regular polyhedra, there are exceptional Schwarz triangles (triangles that tile the sphere, or more generally Euclidean plane or hyperbolic plane via their triangle group of reflections in their edges), particularly the Möbius triangles. In the sphere, there are 3 Möbius triangles (and 1 1-parameter family), corresponding to the 3 exceptional Platonic solid groups, while in the Euclidean plane, there are 3 Möbius triangles, corresponding to the 3 special triangles: 60-60-60 (equilateral), 45-45-90 (isosceles right), and 30-60-90. There are additional exceptional Schwarz triangles in the sphere and Euclidean plane. By contrast, in the hyperbolic plane, there is a 3-parameter family of Möbius triangles, and none of them is exceptional.
-
-The finite simple groups have been classified into a number of series as well as 26 sporadic groups. Of these, 20 are subgroups or subquotients of the monster group, referred to as the "Happy Family", while 6 are not, and are referred to as "pariahs".
-
-Several of the sporadic groups are related to the Leech lattice, most notably the Conway group Co1, which is the automorphism group of the Leech lattice, quotiented out by its center.
-
-There are only three finite-dimensional associative division algebras over the reals — the real numbers, the complex numbers and the quaternions. The only non-associative division algebra is the algebra of octonions. The octonions are connected to a wide variety of exceptional objects.
For example, the exceptional formally real Jordan algebra is the Albert algebra of 3 by 3 self-adjoint matrices over the octonions.
-
-The simple Lie groups form a number of series (classical Lie groups) labelled A, B, C and D. In addition, there are the exceptional groups G2 (the automorphism group of the octonions), F4, E6, E7, E8. These last four groups can be viewed as the symmetry groups of projective planes over O, C⊗O, H⊗O and O⊗O, respectively, where O is the octonions and the tensor products are over the reals.
-
-The classification of Lie groups corresponds to the classification of root systems, and thus the exceptional Lie groups correspond to exceptional root systems and exceptional Dynkin diagrams.
-
-There are a few exceptional objects with supersymmetry. The classification of superalgebras by Kac and Thierry-Mieg indicates that the Lie superalgebras G(3) in 31 dimensions and F(4) in 40 dimensions, and the Jordan superalgebras K3 and K10, are examples of exceptional objects.
-
-Up to isometry, there is only one even unimodular lattice in 15 dimensions or less, the E8 lattice. Up to dimension 24, there is only one even unimodular lattice without roots, the Leech lattice. Three of the sporadic simple groups were discovered by Conway while investigating the automorphism group of the Leech lattice. For example, Co1 is the automorphism group itself modulo ±1. The groups Co2 and Co3, as well as a number of other sporadic groups, arise as stabilisers of various subsets of the Leech lattice.
-
-Some codes also stand out as exceptional objects, in particular the perfect binary Golay code, which is closely related to the Leech lattice. The Mathieu group $M_{24}$, one of the sporadic simple groups, is the group of automorphisms of the extended binary Golay code, and four more of the sporadic simple groups arise as various types of stabilizer subgroup of $M_{24}$.
-
-An exceptional block design is the Steiner system S(5,8,24) whose automorphism group is the sporadic simple Mathieu group $M_{24}$.
-
-The codewords of the extended binary Golay code have a length of 24 bits and have weights 0, 8, 12, 16, or 24. This code can correct up to three errors. So every 24-bit word with weight 5 can be corrected to a codeword with weight 8. The bits of a 24-bit word can be thought of as specifying the possible subsets of a 24 element set. So the extended binary Golay code gives a unique 8 element subset for each 5 element subset. In fact, it defines S(5,8,24).
-
-Certain families of groups often have a certain outer automorphism group, but in particular cases, they have other exceptional outer automorphisms.
-
-Among families of finite simple groups, the only example is in the automorphisms of the symmetric and alternating groups: for $n \geq 3, n \neq 6$ the alternating group $A_n$ has one outer automorphism (corresponding to conjugation by an odd element of $S_n$) and the symmetric group $S_n$ has no outer automorphisms. However, for $n=6,$ there is an exceptional outer automorphism of $S_6$ (of order 2), and correspondingly, the outer automorphism group of $A_6$ is not $C_2$ (the group of order 2), but rather $C_2 \times C_2$, the Klein four-group.
-
-If one instead considers $A_6$ as the (isomorphic) projective special linear group $\operatorname{PSL}(2,9) $, then the outer automorphism is not exceptional; thus the exceptional-ness can be seen as due to the exceptional isomorphism $A_6 \cong \operatorname{PSL}(2,9).$ This exceptional outer automorphism is realized inside the Mathieu group $M_{12}$; similarly, $M_{12}$ acts on a set of 12 elements in 2 different ways.
-
-Among Lie groups, the spin group $\operatorname{Spin}(8)$ has an exceptionally large outer automorphism group (namely $S_3$), which corresponds to the exceptional symmetries of the Dynkin diagram $D_4$. This phenomenon is referred to as triality.
-
-The exceptional symmetry of the $D_4$ diagram also gives rise to the Steinberg groups.
-
-The Kervaire invariant is an invariant of a (4k + 2)-dimensional manifold that measures whether the manifold could be surgically converted into a sphere. This invariant evaluates to 0 if the manifold can be converted to a sphere, and 1 otherwise. More specifically, the Kervaire invariant applies to a framed manifold, that is, to a manifold equipped with an embedding into Euclidean space and a trivialization of the normal bundle. The Kervaire invariant problem is the problem of determining in which dimensions the Kervaire invariant can be nonzero. For differentiable manifolds, this can happen in dimensions 2, 6, 14, 30, 62, and possibly 126, and in no other dimensions. The final case of dimension 126 remains open. These five or six framed cobordism classes of manifolds having Kervaire invariant 1 are exceptional objects related to exotic spheres. The first three cases are related to the complex numbers, quaternions and octonions respectively: a manifold of Kervaire invariant 1 can be constructed as the product of two spheres, with its exotic framing determined by the normed division algebra.
-
-Due to similarities of dimensions, it is conjectured that the remaining cases (dimensions 30, 62 and 126) are related to the Rosenfeld projective planes, which are defined over algebras constructed from the octonions. Specifically, it has been conjectured that there is a construction that takes these projective planes and produces a manifold with nonzero Kervaire invariant in two dimensions lower, but this remains unconfirmed.
-
-In quantum information theory, there exist structures known as SIC-POVMs or SICs, which correspond to maximal sets of complex equiangular lines. Some of the known SICs (those in vector spaces of 2 and 3 dimensions, as well as certain solutions in 8 dimensions) are considered exceptional objects and called "sporadic SICs". They differ from the other known SICs in ways that involve their symmetry groups, the Galois theory of the numerical values of their vector components, and so forth. The sporadic SICs in dimension 8 are related to the integral octonions.
-
-Numerous connections have been observed between some, though not all, of these exceptional objects. Most common are objects related to 8 and 24 dimensions, noting that 24 = 8 · 3. By contrast, the pariah groups stand apart, as the name suggests.
-
-Exceptional objects related to the number 8 include the following.
-
-* The octonions are 8-dimensional.
-
-* The E8 lattice can be realized as the integral octonions (up to a scale factor).
-
-* The exceptional Lie groups can be seen as symmetries of the octonions and structures derived from the octonions; further, the E8 algebra is related to the E8 lattice, as the notation implies (the lattice is generated by the root system of the algebra).
-
-* Triality occurs for Spin(8), which also connects to 8 · 3 = 24.
-
-Likewise, exceptional objects related to the number 24 include the following.
-
-* The Leech lattice is 24-dimensional.
-
-* Most sporadic simple groups can be related to the Leech lattice, or more broadly the Monster.
-
-* The exceptional Jordan algebra has a representation in terms of 24×24 real matrices together with the Jordan product rule.
-
-These objects are connected to various other phenomena in math which may be considered surprising but not themselves "exceptional". For example, in algebraic topology, 8-fold real Bott periodicity can be seen as coming from the octonions. In the theory of modular forms, the 24-dimensional nature of the Leech lattice underlies the presence of 24 in the formulas for the Dedekind eta function and the modular discriminant, which connection is deepened by Monstrous moonshine, a development that related modular functions to the Monster group.
-
-In string theory and superstring theory we often find that particular dimensions are singled out as a result of exceptional algebraic phenomena. For example, bosonic string theory requires a spacetime of dimension 26, which is directly related to the presence of 24 in the Dedekind eta function. Similarly, the possible dimensions of supergravity are related to the dimensions of the division algebras.
-
-Many of the exceptional objects in mathematics and physics have been found to be connected to each other. Developments such as the Monstrous moonshine conjectures show how, for example, the Monster group is connected to string theory. The theory of modular forms shows how the algebra E8 is connected to the Monster group. (In fact, well before the proof of the Monstrous moonshine conjecture, the elliptic j-function was discovered to encode the representations of E8.) Other interesting connections include how the Leech lattice is connected via the Golay code to the adjacency matrix of the dodecahedron (another exceptional object).
-
-The connections can partly be explained by thinking of the algebras as a tower of lattice vertex operator algebras. It just so happens that the vertex algebras at the bottom are so simple that they are isomorphic to familiar non-vertex algebras. Thus the connections can be seen simply as the consequence of some lattices being sub-lattices of others.
-
-The Jordan superalgebras are a parallel set of exceptional objects with supersymmetry. These are the Lie superalgebras which are related to Lorentzian lattices. This subject is less explored, and the connections between the objects are less well established. There are new conjectures parallel to the Monstrous moonshine conjectures for these super-objects, involving different sporadic groups.
-
-The term "exceptional object" is reserved for objects that are unusual, meaning rare, the exception; it is not used for unexpected or non-standard objects.
These unexpected-but-typical (or common) phenomena are generally referred to as pathological, such as nowhere differentiable functions, or "exotic", as in exotic spheres — there are exotic spheres in arbitrarily high dimension (not only a finite set of exceptions), and in many dimensions most (differential structures on) spheres are exotic.
-
-Exceptional objects must be distinguished from extremal objects: those that fall in a family and are the most extreme example by some measure are of interest, but not unusual in the way exceptional objects are. For example, the golden ratio φ has the simplest continued fraction approximation, and accordingly is most difficult to approximate by rationals; however, it is but one of infinitely many such quadratic numbers.
-
-Similarly, the (2,3,7) Schwarz triangle is the smallest hyperbolic Schwarz triangle, and the associated (2,3,7) triangle group is of particular interest, being the universal Hurwitz group, and thus being associated with the Hurwitz curves, the maximally symmetric algebraic curves. However, it falls in a family of such triangles ((2,4,7), (2,3,8), (3,3,7), etc.), and while the smallest, is not exceptional or unlike the others. diff --git a/wiki/wikipedia/2886.txt b/wiki/wikipedia/2886.txt deleted file mode 100644 index 8103e6ff0a7c86b48f63cefc69945365f2327ab0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2886.txt +++ /dev/null @@ -1,129 +0,0 @@
-The Tutte polynomial, also called the dichromate or the Tutte–Whitney polynomial, is a graph polynomial. It is a polynomial in two variables which plays an important role in graph theory. It is defined for every undirected graph $G$ and contains information about how the graph is connected. It is denoted by $T_G$.
-
-The importance of this polynomial stems from the information it contains about $G$. Though originally studied in algebraic graph theory as a generalization of counting problems related to graph coloring and nowhere-zero flow, it contains several other famous specializations from other fields, such as the Jones polynomial from knot theory and the partition functions of the Potts model from statistical physics. It is also the source of several central computational problems in theoretical computer science.
-
-The Tutte polynomial has several equivalent definitions. It is equivalent to Whitney’s rank polynomial, Tutte’s own dichromatic polynomial and Fortuin–Kasteleyn’s random cluster model under simple transformations. It is essentially a generating function for the number of edge sets of a given size and connected components, with immediate generalizations to matroids. It is also the most general graph invariant that can be defined by a deletion–contraction recurrence. Several textbooks about graph theory and matroid theory devote entire chapters to it.
    Definition. For an undirected graph $G=(V,E)$ one may define the Tutte polynomial as -$$ -T_G(x,y)=\sum\nolimits_{A\subseteq E} (x-1)^{k(A)-k(E)}(y-1)^{k(A)+|A|-|V|}, -$$ - -where $k(A)$ denotes the number of connected components of the graph $(V,A)$. In this definition it is clear that $T_G$ is well-defined and a polynomial in $x$ and $y$.
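The subset expansion can be evaluated directly, in exponential time, which is practical only for very small graphs. The following brute-force Python sketch is ours (with a simple union-find to count connected components); it is a literal transcription of the definition above:

```python
from itertools import combinations

def num_components(vertices, edges):
    """k(A): number of connected components of (V, A), via union-find."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edges:
        parent[find(u)] = find(w)
    return len({find(v) for v in vertices})

def tutte(vertices, edges, x, y):
    """T_G(x, y) = sum over A of (x-1)^(k(A)-k(E)) * (y-1)^(k(A)+|A|-|V|)."""
    kE = num_components(vertices, edges)
    total = 0
    for size in range(len(edges) + 1):
        for A in combinations(edges, size):
            kA = num_components(vertices, A)
            total += (x - 1) ** (kA - kE) * (y - 1) ** (kA + len(A) - len(vertices))
    return total

# For the triangle K3, T(x, y) = x^2 + x + y, so T(2, 2) = 8 = 2^|E|.
print(tutte([1, 2, 3], [(1, 2), (2, 3), (1, 3)], 2, 2))
```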
-
-The same definition can be given using slightly different notation by letting $r(A)=|V|-k(A)$ denote the rank of the graph $(V,A)$. Then the Whitney rank generating function is defined as
-$$
-R_G(u,v)=\sum\nolimits_{A\subseteq E} u^{r(E)-r(A)} v^{|A|-r(A)};
-$$
-
-it is equivalent to $T_G$ under the transformation $T_G(x,y)=R_G(x-1,y-1)$. Similarly, the partition function of the Fortuin–Kasteleyn random cluster model,
-$$
-Z_G(q,w)=\sum\nolimits_{A\subseteq E} q^{k(A)} w^{|A|},
-$$
-
-is equivalent to $T_G$ under the transformation
-$$
-T_G(x, y)=(x-1)^{-k(E)}(y-1)^{-|V|} \cdot Z_G\Big((x-1)(y-1), y-1\Big).
-$$
-
-The Tutte polynomial factors into connected components. If $G$ is the union of disjoint graphs $H$ and $H'$ then
-$$
-T_G= T_H \cdot T_{H'}
-$$
-
-If $G$ is planar and $G^*$ denotes its dual graph then
-$$
-T_G(x,y)= T_{G^*} (y,x)
-$$
-
-In particular, the chromatic polynomial of a planar graph is essentially the flow polynomial of its dual. Tutte refers to such functions as V-functions. R. M. Foster had already observed that the chromatic polynomial is one such function, and Tutte began to discover more. His original terminology for graph invariants that satisfy the deletion–contraction recursion was W-function, and V-function if multiplicative over components. Tutte writes, “Playing with my W-functions I obtained a two-variable polynomial from which either the chromatic polynomial or the flow-polynomial could be obtained by setting one of the variables equal to zero, and adjusting signs.”
-
-Independently of the work in algebraic graph theory, Potts began studying the partition function of certain models in statistical mechanics in 1952. The work by Fortuin and Kasteleyn on the random cluster model, a generalisation of the Potts model, provided a unifying expression that showed the relation to the Tutte polynomial.
-
-$T_G(0,2)$ counts the number of strongly connected orientations of G. $T_G(2,2)$ is the number $2^{|E|}$ where $|E|$ is the number of edges of graph G. If G is a 4-regular graph, then $(-1)^{|E|-|V|+k(G)}\,T_G(0,-2)$ counts its Eulerian orientations; for such graphs these are in bijection with the nowhere-zero 3-flows counted by $C_G(3)$ below.
-
-At $x=0$, the Tutte polynomial specialises to the flow polynomial studied in combinatorics. For a connected and undirected graph G and integer k, a nowhere-zero k-flow is an assignment of “flow” values $1,2,\dots,k-1$ to the edges of an arbitrary orientation of G such that the total flow entering each vertex is congruent to the total flow leaving it modulo k. The flow polynomial $C_G(k)$ denotes the number of nowhere-zero k-flows. This value is intimately connected with the chromatic polynomial, in fact, if G is a planar graph, the chromatic polynomial of G is equivalent to the flow polynomial of its dual graph $G^*$ in the sense that
    Theorem (Tutte). -$$ -C_G(k)=k^{-1} \chi_{G^*}(k). -$$ - -The connection to the Tutte polynomial is given by: -$$ -C_G(k)= (-1)^{|E|-|V|+k(G)} T_G(0,1-k). -$$
-
-At $x=1$, the Tutte polynomial specialises to the all-terminal reliability polynomial studied in network theory. For a connected graph G remove every edge with probability p; this models a network subject to random edge failures. Then the reliability polynomial is a function $R_G(p)$, a polynomial in p, that gives the probability that every pair of vertices in G remains connected after the edges fail. The connection to the Tutte polynomial is given by
-$$
-R_G(p) = (1-p)^{|V|-k(G)} p^{|E|-|V|+k(G)} T_G \left (1, \tfrac{1}{p} \right).
-$$
-
-Tutte also defined a closely related two-variable generalization of the chromatic polynomial, the dichromatic polynomial of a graph. This is
-$$
-Q_G(u,v) = \sum\nolimits_{A \subseteq E} u^{k(A)} v^{|A|-|V|+k(A)},
-$$
-
-where $k(A)$ is the number of connected components of the spanning subgraph (V,A). This is related to the corank-nullity polynomial by
-$$
-Q_G(u,v) = u^{k(G)} R_G(u,v).
-$$
-
-The dichromatic polynomial does not generalize to matroids because k(A) is not a matroid property: different graphs with the same matroid can have different numbers of connected components.
-
-The Martin polynomial $m_{\vec{G}}(x)$ of an oriented 4-regular graph $\vec{G}$ was defined by Pierre Martin in 1977. He showed that if G is a plane graph and $\vec{G}_m$ is its directed medial graph, then
-$$
-T_G(x,x) = m_{\vec{G}_m}(x).
-$$
-
-The deletion–contraction recurrence for the Tutte polynomial,
-$$
-T_G(x,y)= T_{G \setminus e}(x,y) + T_{G/e}(x,y), \qquad e \text{ neither a loop nor a bridge,}
-$$
-
-immediately yields a recursive algorithm for computing it for a given graph: as long as you can find an edge e that is not a loop or bridge, recursively compute the Tutte polynomial for when that edge is deleted, and when that edge is contracted. Then add the two sub-results together to get the overall Tutte polynomial for the graph.
-
-The base case is a monomial $x^my^n$ where m is the number of bridges, and n is the number of loops.
-
-Within a polynomial factor, the running time t of this algorithm can be expressed in terms of the number of vertices n and the number of edges m of the graph,
-$$
-t(n+m)= t(n+m-1) + t(n+m-2),
-$$
-
-a recurrence relation that scales as the Fibonacci numbers with solution
-$$
- t(n+m)= \left (\frac{1+\sqrt{5}}{2} \right )^{n+m} = O \left (1.6180^{n+m} \right ).
-$$
-
-The analysis can be improved to within a polynomial factor of the number $\tau(G)$ of spanning trees of the input graph. For sparse graphs with $m=O(n)$ this running time is $\exp(O(n))$. For regular graphs of degree k, the number of spanning trees can be bounded by
-$$
-\tau(G) = O \left (\nu_k^n n^{-1} \log n \right ),
-$$
-
-where
-$$
-\nu_k = \frac{(k-1)^{k-1}}{(k^2-2k)^{\frac{k}{2}-1}},
-$$
-
-so the deletion–contraction algorithm runs within a polynomial factor of this bound. For example:
-$$
-\nu_5 \approx 4.4066.
-$$
-
-In practice, graph isomorphism testing is used to avoid some recursive calls. This approach works well for graphs that are quite sparse and exhibit many symmetries; the performance of the algorithm depends on the heuristic used to pick the edge e.
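The recursive algorithm just described can be sketched directly in Python (our code, evaluating $T_G$ at a numeric point; edges form a multiset of vertex pairs, a bridge is detected by recounting components after removal, and contraction merges one endpoint into the other):

```python
def tutte_dc(vertices, edges, x, y):
    """Deletion-contraction evaluation of the Tutte polynomial at (x, y)."""
    def k(es):  # number of connected components of (vertices, es)
        parent = {v: v for v in vertices}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for u, w in es:
            parent[find(u)] = find(w)
        return len({find(v) for v in vertices})

    for i, (u, w) in enumerate(edges):
        if u == w:
            continue                       # loops are left to the base case
        rest = edges[:i] + edges[i + 1:]
        if k(rest) == k(edges):            # e is neither a loop nor a bridge
            # T(G) = T(G \ e) + T(G / e)
            deleted = tutte_dc(vertices, rest, x, y)
            merged = [(u if a == w else a, u if b == w else b) for a, b in rest]
            contracted = tutte_dc([v for v in vertices if v != w], merged, x, y)
            return deleted + contracted
    # base case: only bridges and loops remain -> x^(#bridges) * y^(#loops)
    loops = sum(1 for u, w in edges if u == w)
    return x ** (len(edges) - loops) * y ** loops

print(tutte_dc([1, 2, 3], [(1, 2), (2, 3), (1, 3)], 2, 2))   # 8, as before
```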
-The computational complexity of exactly computing $T_G(x,y)$ falls into one of two classes for any $x, y \in \mathbb{C}$. The problem is #P-hard unless $(x,y)$ lies on the hyperbola $H_1$ or is one of the points
-$$
-\left \{ (1,1), (-1,-1), (0,-1), (-1,0), (i,-i), (-i,i), \left(j,j^2 \right), \left(j^2,j \right) \right \}, \qquad j = e^{\frac{2 \pi i}{3}},
-$$
-
-in which cases it is computable in polynomial time. Here $H_\alpha$ denotes the hyperbola $\{(x,y) : (x-1)(y-1)=\alpha\}$. If the problem is restricted to the class of planar graphs, the points on the hyperbola $H_2$ become polynomial-time computable as well. All other points remain #P-hard, even for bipartite planar graphs. In his paper on the dichotomy for planar graphs, Vertigan claims (in his conclusion) that the same result holds when further restricted to graphs with vertex degree at most three, save for the point $T_G(0,-2)$, which counts nowhere-zero $\mathbb{Z}_3$-flows and is computable in polynomial time.
-
-These results contain several notable special cases. For example, the problem of computing the partition function of the Ising model is #P-hard in general, even though celebrated algorithms of Onsager and Fisher solve it for planar lattices. Also, the Jones polynomial is #P-hard to compute. Finally, computing the number of four-colorings of a planar graph is #P-complete, even though the decision problem is trivial by the four color theorem. In contrast, it is easy to see that counting the number of three-colorings for planar graphs is #P-complete because the decision problem is known to be NP-complete via a parsimonious reduction.
-
-The question of which points admit a good approximation algorithm has been very well studied. Apart from the points that can be computed exactly in polynomial time, the only approximation algorithm known for $T_G(x,y)$ is Jerrum and Sinclair’s FPRAS, which works for points on the “Ising” hyperbola $H_2$ for y > 0. If the input graphs are restricted to dense instances, with degree $\Omega(n)$, there is an FPRAS if x ≥ 1, y ≥ 1.
-
-Even though the situation is not as well understood as for exact computation, large areas of the plane are known to be hard to approximate. diff --git a/wiki/wikipedia/2887.txt b/wiki/wikipedia/2887.txt deleted file mode 100644 index 80e5d4f2771c2377b9bc3b7ef00ddb06d9a23dc8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2887.txt +++ /dev/null @@ -1,17 +0,0 @@
-The Erdős–Nagy theorem is a result in discrete geometry stating that a non-convex simple polygon can be made into a convex polygon by a finite sequence of flips. The flips are defined by taking a convex hull of a polygon and reflecting a pocket with respect to the boundary edge. The theorem is named after mathematicians Paul Erdős and Béla Szőkefalvi-Nagy.
-
-A pocket of a non-convex simple polygon is a simple polygon bounded by a consecutive sequence of edges of the polygon together with a single edge of its convex hull that is not an edge of the polygon itself. Every convex hull edge that is not a polygon edge defines a pocket in this way. A flip of a pocket is obtained by reflecting the polygon edges that bound the pocket, across a reflection line containing the convex hull edge. Because the reflected pocket lies entirely within the reflected image of the convex hull, on the other side of this line, this operation cannot introduce any crossings, so the result of a flip is another simple polygon, with larger area.
-
-In some cases, a single flip will cause a non-convex simple polygon to become convex. Once this happens, no more flips are possible.
-
-The Erdős–Nagy theorem states that it is always possible to find a sequence of flips that produces a convex polygon in this way.
-
-More strongly, for every simple polygon, every sequence of flips will eventually produce a convex polygon, in a finite number of steps.
-
-There exist quadrilaterals that require an arbitrarily large (but finite) number of flips to be made convex.
Therefore, it is not possible to bound the number of steps as a function of the number of sides of the polygon.
-
-Paul Erdős conjectured the result in 1935 as a problem in the American Mathematical Monthly. In the version posed by Erdős, all pockets are to be flipped simultaneously; however, this may cause the polygon to become non-simple, as two pockets may flip on top of each other. In 1939, Szőkefalvi-Nagy pointed out this problem with Erdős's formulation, reformulated the problem in its now-standard form, and published a proof. Szőkefalvi-Nagy's proof had an incorrect case, which was pointed out in a 1995 survey of the problem by Branko Grünbaum; however, the proofs by Grünbaum and Godfried Toussaint are similarly incomplete. Additional proofs (some but not all correct) were provided in 1957 by two independent Russian mathematicians, Reshetnyak and Yusupov; in 1959 by Bing and Kazarinoff; and in 1993 by Wegner.
-
-Demaine, Gassend, O'Rourke, and Toussaint survey this history and provide a corrected proof.
-
-An alternative method of making non-convex polygons convex that has also been studied is to perform flipturns, 180-degree rotations of a pocket around the midpoint of its convex hull edge. diff --git a/wiki/wikipedia/2888.txt b/wiki/wikipedia/2888.txt deleted file mode 100644 index 38b5d1faa6d369b4cf74a8e9a21bf816b8af1615..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2888.txt +++ /dev/null @@ -1,41 +0,0 @@
-In number theory, friendly numbers are two or more natural numbers with a common abundancy index, the ratio between the sum of divisors of a number and the number itself. Two numbers with the same "abundancy" form a friendly pair; n numbers with the same "abundancy" form a friendly n-tuple.
-
-Being mutually friendly is an equivalence relation, and thus induces a partition of the positive naturals into clubs (equivalence classes) of mutually "friendly numbers".
-
-A number that is not part of any friendly pair is called solitary.
-
-The "abundancy" index of n is the rational number σ(n) / n, in which σ denotes the sum of divisors function. A number n is a "friendly number" if there exists m ≠ n such that σ(m) / m = σ(n) / n. "Abundancy" is not the same as abundance, which is defined as σ(n) − 2n.
-
-"Abundancy" may also be expressed as $\sigma_{-\!1}(n)$ where $\sigma_k$ denotes a divisor function with $\sigma_{k}(n)$ equal to the sum of the k-th powers of the divisors of n.
-
-The numbers 1 through 5 are all solitary. The smallest "friendly number" is 6, forming, for example, the "friendly" pair 6 and 28 with "abundancy" σ(6) / 6 = (1+2+3+6) / 6 = 2, the same as σ(28) / 28 = (1+2+4+7+14+28) / 28 = 2. The shared value 2 is an integer in this case but not in many other cases. Numbers with "abundancy" 2 are also known as perfect numbers. There are several unsolved problems related to the "friendly numbers".
-
-In spite of the similarity in name, there is no specific relationship between the friendly numbers and the amicable numbers or the sociable numbers, although the definitions of the latter two also involve the divisor function.
-
-As another example, 30 and 140 form a friendly pair, because 30 and 140 have the same "abundancy":
-$$
- \dfrac{\sigma(30)}{30} = \dfrac{1+2+3+5+6+10+15+30}{30} =\dfrac{72}{30} = \dfrac{12}{5}
-$$
-$$
- \dfrac{\sigma(140)}{140} = \dfrac{1+2+4+5+7+10+14+20+28+35+70+140}{140} = \dfrac{336}{140} = \dfrac{12}{5}.
-$$
-
-The numbers 2480, 6200 and 40640 are also members of this club, as they each have an "abundancy" equal to 12/5.
-
-For an example of odd numbers being friendly, consider 135 and 819 ("abundancy" 16/9, which is deficient). There are also cases of even numbers being "friendly" to odd ones, such as 42 and 544635 ("abundancy" 16/7). The odd "friend" may be less than the even one, as in 84729645 and 155315394 ("abundancy" 896/351).
-
-A square number can be friendly, for instance both 693479556 (the square of 26334) and 8640 have "abundancy" 127/36 (this example is credited to Dean Hickerson).
-
-Numbers n such that n and $\sigma(n)$ are coprime are known to be solitary; beyond the numbers proven friendly and those proven solitary, many small numbers have unknown status.
-
-A number that belongs to a singleton club, because no other number is "friendly" with it, is a solitary number. All prime numbers are known to be solitary, as are powers of prime numbers. More generally, if the numbers n and σ(n) are coprime – meaning that the greatest common divisor of these numbers is 1, so that σ(n)/n is an irreducible fraction – then the number n is solitary. For a prime number p we have σ(p) = p + 1, which is coprime with p.
-
-No general method is known for determining whether a number is "friendly" or solitary. The smallest number whose classification is unknown is 10; it is conjectured to be solitary. If it is not, its smallest friend is at least $10^{30}$.
-
-It is an open problem whether there are infinitely large clubs of mutually "friendly" numbers. The perfect numbers form a club, and it is conjectured that there are infinitely many perfect numbers (at least as many as there are Mersenne primes), but no proof is known. As of 2019, 51 perfect numbers are known, the largest of which has more than 49 million digits in decimal notation. There are clubs with more known members: in particular, those formed by multiply perfect numbers, which are numbers whose "abundancy" is an integer. As of early 2013, the club of "friendly" numbers with "abundancy" equal to 9 has 2094 known members. Although some are known to be quite large, clubs of multiply perfect numbers (excluding the perfect numbers themselves) are conjectured to be finite.
-
-Every pair a, b of friendly numbers gives rise to a positive proportion of all natural numbers being friendly (but in different clubs), by considering pairs na, nb for multipliers n with gcd(n, ab) = 1. For example, the "primitive" friendly pair 6 and 28 gives rise to friendly pairs 6n and 28n for all n that are congruent to 1, 5, 11, 13, 17, 19, 23, 25, 29, 31, 37, or 41 modulo 42.
-
-This shows that the natural density of the friendly numbers (if it exists) is positive.
-
-Anderson and Hickerson proposed that the density should in fact be 1 (or equivalently that the density of the solitary numbers should be 0). According to the MathWorld article on Solitary Number, this conjecture has not been resolved, although Pomerance thought at one point he had disproved it.
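These abundancy computations are easy to reproduce with exact rational arithmetic; a small Python sketch of ours:

```python
from fractions import Fraction

def abundancy(n):
    """sigma(n)/n as an exact fraction, sigma being the sum-of-divisors function."""
    sigma = sum(d for d in range(1, n + 1) if n % d == 0)
    return Fraction(sigma, n)

print([abundancy(n) for n in (30, 140, 2480, 6200, 40640)])  # all 12/5
print(abundancy(6), abundancy(28))     # both 2: the perfect numbers' club
print(abundancy(135), abundancy(819))  # both 16/9
```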
diff --git a/wiki/wikipedia/2889.txt b/wiki/wikipedia/2889.txt deleted file mode 100644 index 5df4e8d8b148dffd681c2b10d1f6fefd87b0f1ff..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2889.txt +++ /dev/null @@ -1,3 +0,0 @@
-In probability theory, Maxwell's theorem, named in honor of James Clerk Maxwell, states that if the probability distribution of a vector-valued random variable $X = (X_1, \ldots, X_n)^T$ is the same as the distribution of $GX$ for every $n\times n$ orthogonal matrix $G$ and the components are independent, then the components $X_1, \ldots, X_n$ are normally distributed with expected value 0 and all have the same variance. This theorem is one of many characterizations of the normal distribution.
-
-Since a multiplication by an orthogonal matrix is a rotation, the theorem says that if the probability distribution of a random vector is unchanged by rotations and if the components are independent, then the components are identically distributed and normally distributed. In other words, the only rotationally invariant probability distributions on $\mathbb{R}^n$ that have independent components are multivariate normal distributions with expected value 0 and variance $\sigma^2 I_n$ (where $I_n$ is the $n\times n$ identity matrix), for some positive number $\sigma^2$. diff --git a/wiki/wikipedia/289.txt b/wiki/wikipedia/289.txt deleted file mode 100644 index c962a005adcefda7bde537f355ab57f514c1b40a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/289.txt +++ /dev/null @@ -1,158 +0,0 @@
-In mathematics, the commutator gives an indication of the extent to which a certain binary operation fails to be commutative. There are different definitions used in group theory and ring theory.
-
-The commutator of two elements, g and h, of a group G, is the element
-
-$[g, h] = g^{-1}h^{-1}gh.$
-
-This element is equal to the group's identity if and only if g and h commute (from the definition $gh = hg \cdot [g, h]$, $[g, h]$ being equal to the identity if and only if gh = hg).
-
-The set of all commutators of a group is not in general closed under the group operation, but the subgroup of G generated by all commutators is closed and is called the derived group or the commutator subgroup of G. Commutators are used to define nilpotent and solvable groups and the largest abelian quotient group.
-
-The definition of the commutator above is used throughout this article, but many other group theorists define the commutator as
-
-$[g, h] = ghg^{-1}h^{-1}.$
-
-Commutator identities are an important tool in group theory. The expression $a^x$ denotes the conjugate of a by x, defined as $x^{-1}ax$.
-
-# $x^y = x[x, y].$
-
-# $[y, x] = [x,y]^{-1}.$
-
-# $[x, zy] = [x, y]\cdot [x, z]^y$ and $[x z, y] = [x, y]^z \cdot [z, y].$
-
-# $\left[x, y^{-1}\right] = [y, x]^{y^{-1}}$ and $\left[x^{-1}, y\right] = [y, x]^{x^{-1}}.$
-
-# $\left[\left[x, y^{-1}\right], z\right]^y \cdot \left[\left[y, z^{-1}\right], x\right]^z \cdot \left[\left[z, x^{-1}\right], y\right]^x = 1$ and $\left[\left[x, y\right], z^x\right] \cdot \left[[z, x], y^z\right] \cdot \left[[y, z], x^y\right] = 1.$
-
-Identity (5) is also known as the Hall–Witt identity, after Philip Hall and Ernst Witt. It is a group-theoretic analogue of the Jacobi identity for the ring-theoretic commutator (see next section).
-
-N.B., the above definition of the conjugate of a by x is used by some group theorists. Many other group theorists define the conjugate of a by x as $xax^{-1}$. This is often written ${}^x a$. Similar identities hold for these conventions.
-
-Many identities are used that are true modulo certain subgroups.
These can be particularly useful in the study of solvable groups and nilpotent groups. For instance, in any group, second powers behave well:
-$$
-(xy)^2 = x^2 y^2 [y, x][[y, x], y].
-$$
-
-The commutator of two elements a and b of a ring (including any associative algebra) is defined by
-$$
-[a, b] = ab - ba.
-$$
-
-It is zero if and only if a and b commute. In linear algebra, if two endomorphisms of a space are represented by commuting matrices in terms of one basis, then they are so represented in terms of every basis. By using the commutator as a Lie bracket, every associative algebra can be turned into a Lie algebra.
-
-The anticommutator of two elements a and b of a ring or an associative algebra is defined by
-$$
-\{a, b\} = ab + ba.
-$$
-
-Sometimes $[a,b]_+$ is used to denote anticommutator, while $[a,b]_-$ is then used for commutator. The anticommutator is used less often, but can be used to define Clifford algebras and Jordan algebras, and in the derivation of the Dirac equation in particle physics.
-
-The commutator of two operators acting on a Hilbert space is a central concept in quantum mechanics, since it quantifies how well the two observables described by these operators can be measured simultaneously. The uncertainty principle is ultimately a theorem about such commutators, by virtue of the Robertson–Schrödinger relation. In phase space, equivalent commutators of function star-products are called Moyal brackets, and are completely isomorphic to the Hilbert space commutator structures mentioned.
-
-The commutator has the following properties:
-
-# $[A + B, C] = [A, C] + [B, C]$
-
-# $[A, A] = 0$
-
-# $[A, B] = -[B, A]$
-
-# $[A, [B, C]] + [B, [C, A]] + [C, [A, B]] = 0$
-
-Relation (3) is called anticommutativity, while (4) is the Jacobi identity.
-
-# $[A, BC] = [A, B]C + B[A, C]$
-
-# $[A, BCD] = [A, B]CD + B[A, C]D + BC[A, D]$
-
-# $[A, BCDE] = [A, B]CDE + B[A, C]DE + BC[A, D]E + BCD[A, E]$
-
-# $[AB, C] = A[B, C] + [A, C]B$
-
-# $[ABC, D] = AB[C, D] + A[B, D]C + [A, D]BC$
-
-# $[ABCD, E] = ABC[D, E] + AB[C, E]D + A[B, E]CD + [A, E]BCD$
-
-# $[A, B + C] = [A, B] + [A, C]$
-
-# $[A + B, C + D] = [A, C] + [A, D] + [B, C] + [B, D]$
-
-# $[AB, CD] = A[B, C]D + [A, C]BD + CA[B, D] + C[A, D]B$
-
-# $[[A, C], [B, D]] = [[[A, B], C], D] + [[[B, C], D], A] + [[[C, D], A], B] + [[[D, A], B], C]$
-
-If $A$ is a fixed element of a ring R, identity (1) can be interpreted as a Leibniz rule for the map $\operatorname{ad}_A: R \rightarrow R$ given by $\operatorname{ad}_A(B) = [A, B]$. In other words, the map $\operatorname{ad}_A$ defines a derivation on the ring R. Identities (2), (3) represent Leibniz rules for more than two factors, and are valid for any derivation. Identities (4)–(6) can also be interpreted as Leibniz rules. Identities (7), (8) express Z-bilinearity.
-
-Some of the above identities can be extended to the anticommutator using the above ± subscript notation.
-
-For example:
-
-#$[AB, C]_\pm = A[B, C]_- + [A, C]_\pm B$
-
-#$[AB, CD]_\pm = A[B, C]_- D + AC[B, D]_- + [A, C]_- DB + C[A, D]_\pm B$
-
-#$\left[A, [B, C]_\pm\right] + \left[B, [C, A]_\pm\right] + \left[C, [A, B]_\pm\right] = 0$
-
-#$[A,BC]_\pm = [A,B]_- C + B[A,C]_\pm$
-
-#$[A,BC] = [A,B]_\pm C \mp B[A,C]_\pm$
-
-Consider a ring or algebra in which the exponential $e^A = \exp(A) = 1 + A + \tfrac{1}{2!}A^2 + \cdots$ can be meaningfully defined, such as a Banach algebra or a ring of formal power series.
-
-In such a ring, Hadamard's lemma applied to nested commutators gives:
-$$
-e^A Be^{-A} = B + [A, B] + \frac{1}{2!}[A, [A, B]] + \frac{1}{3!}[A, [A, [A, B]]] + \cdots = e^{\operatorname{ad}_A}(B).
-$$
-
-(For the last expression, see Adjoint derivation below.)
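Since these are identities of associative algebras, they can be spot-checked numerically on matrices. The following sketch is ours (it assumes numpy and scipy are available); it verifies the Jacobi identity and the truncated Hadamard series for random 3×3 matrices:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A, B, C = rng.normal(size=(3, 3, 3))      # three random 3x3 matrices

def comm(X, Y):
    return X @ Y - Y @ X

# Jacobi identity: [A,[B,C]] + [B,[C,A]] + [C,[A,B]] = 0
jacobi = comm(A, comm(B, C)) + comm(B, comm(C, A)) + comm(C, comm(A, B))
assert np.allclose(jacobi, 0)

# Hadamard's lemma: e^A B e^{-A} = sum over k of ad_A^k(B) / k!
lhs = expm(A) @ B @ expm(-A)
rhs, term = np.zeros_like(B), B.copy()
for k in range(1, 40):
    rhs += term
    term = comm(A, term) / k              # term_k = ad_A(term_{k-1}) / k
assert np.allclose(lhs, rhs)
print("both identities verified numerically")
```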
-The Hadamard formula above underlies the Baker–Campbell–Hausdorff expansion of log(exp(A) exp(B)).
-
-A similar expansion expresses the group commutator of expressions $e^A$ (analogous to elements of a Lie group) in terms of a series of nested commutators (Lie brackets),
-$$
-e^A e^B e^{-A} e^{-B} = \exp\!\left( [A, B] + \frac{1}{2!}[A{+}B, [A, B]] + \frac{1}{3!} \left(\frac{1}{2} [A, [B, [B, A]]] + [A{+}B, [A{+}B, [A, B]]]\right) + \cdots\right).
-$$
-
-When dealing with graded algebras, the commutator is usually replaced by the graded commutator, defined in homogeneous components as
-$$
-[\omega, \eta]_{gr} := \omega\eta - (-1)^{\deg \omega \deg \eta} \eta\omega.
-$$
-
-Especially if one deals with multiple commutators in a ring R, another notation turns out to be useful. For an element $x\in R$, we define the adjoint mapping $\mathrm{ad}_x:R\to R$ by:
-$$
-\operatorname{ad}_x(y) = [x, y] = xy-yx.
-$$
-
-This mapping is a derivation on the ring R:
-$$
-\mathrm{ad}_x\!(yz) \ =\ \mathrm{ad}_x\!(y) z + y\mathrm{ad}_x\!(z).
-$$
-
-By the Jacobi identity, it is also a derivation over the commutation operation:
-$$
-\mathrm{ad}_x[y,z] \ =\ [\mathrm{ad}_x\!(y),z] + [y,\mathrm{ad}_x\!(z)] .
-$$
-
-Composing such mappings, we get for example $\operatorname{ad}_x\operatorname{ad}_y(z) = [x, [y, z]]$ and $\operatorname{ad}_x^2\!(z) = \operatorname{ad}_x\!(\operatorname{ad}_x\!(z)) = [x, [x, z]]$. We may consider $\mathrm{ad}$ itself as a mapping, $\mathrm{ad}: R \to \mathrm{End}(R)$, where $\mathrm{End}(R)$ is the ring of mappings from R to itself with composition as the multiplication operation. Then $\mathrm{ad}$ is a Lie algebra homomorphism, preserving the commutator:
-$$
-\operatorname{ad}_{[x, y]} = \left[ \operatorname{ad}_x, \operatorname{ad}_y \right].
-$$
-
-By contrast, it is not always a ring homomorphism: usually $\operatorname{ad}_{xy} \neq \operatorname{ad}_x\operatorname{ad}_y $.
-
-The general Leibniz rule, expanding repeated derivatives of a product, can be written abstractly using the adjoint representation:
-$$
-x^n y = \sum_{k = 0}^n \binom{n}{k} \operatorname{ad}_x^k\!(y) x^{n - k}.
-$$
-
-Replacing x by the differentiation operator $\partial$, and y by the multiplication operator $m_f : g \mapsto fg$, we get $\operatorname{ad}(\partial)(m_f) = m_{\partial(f)}$, and applying both sides to a function g, the identity becomes the usual Leibniz rule for the n-th derivative $\partial^{n}\!(fg)$. diff --git a/wiki/wikipedia/2890.txt b/wiki/wikipedia/2890.txt deleted file mode 100644 index 0e79b6763ad64ac325164aadbdf7af5cea940790..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2890.txt +++ /dev/null @@ -1,135 +0,0 @@
-In functional analysis, the open mapping theorem, also known as the Banach–Schauder theorem or the Banach theorem (named after Stefan Banach and Juliusz Schauder), is a fundamental result which states that if a bounded or continuous linear operator between Banach spaces is surjective then it is an open map.
-
-The proof below uses the Baire category theorem, and completeness of both $X$ and $Y$ is essential to the theorem. The statement of the theorem is no longer true if either space is just assumed to be a normed space, but is true if $X$ and $Y$ are taken to be Fréchet spaces.
-
-Suppose $A : X \to Y$ is a surjective continuous linear operator.
In order to prove that $A$ is an open map, it is sufficient to show that $A$ maps the open unit ball in $X$ to a neighborhood of the origin of $Y.$
-
-Let $U = B_1^X(0), V = B_1^Y(0).$ Then
-$$
-X = \bigcup_{k \in \N} k U.
-$$
-
-Since $A$ is surjective:
-$$
-Y = A(X) = A\left(\bigcup_{k \in \N} k U\right) = \bigcup_{k \in \N} A(kU).
-$$
-
-But $Y$ is Banach so by Baire's category theorem
-$$
-\exists k \in \N: \qquad \left(\overline{A(kU)} \right)^{\circ} \neq \varnothing.
-$$
-
-That is, we have $c \in Y$ and $r > 0$ such that
-$$
-B_r(c) \subseteq \left(\overline{A(kU)} \right)^\circ \subseteq \overline{A(kU)}.
-$$
-
-Let $v \in V,$ then
-$$
-c, c + r v \in B_r(c) \subseteq \overline{A(kU)}.
-$$
-
-By continuity of addition and linearity, the difference $r v$ satisfies
-$$
-r v \in \overline{A(kU)} + \overline{A(kU)} \subseteq \overline{A(kU) + A(kU)} \subseteq \overline{A(2kU)},
-$$
-
-and by linearity again,
-$$
-V \subseteq \overline{A(LU)}
-$$
-
-where we have set $L = 2 k / r.$
-
-It follows that for all $y \in Y$ and all $\epsilon > 0,$ there exists some $x \in X$ such that
-$$
-\|x\|_X \leq L \|y\|_Y \quad \text{and} \quad \|y - A x\|_Y < \epsilon. \qquad (1)
-$$
-
-Our next goal is to show that $V \subseteq A(2LU).$
-
-Let $y \in V.$ By (1), there is some $x_1$ with $\left\|x_1\right\| < L$ and $\left\|y - A x_1\right\| < 1/2.$ Define a sequence $\left(x_n\right)$ inductively as follows. Assume:
-$$
-\|x_n\| < \frac{L}{2^{n-1}} \quad \text{and} \quad \left\|y - A\left(x_1 + x_2 + \cdots + x_n\right)\right\| < \frac{1}{2^n}. \qquad (2)
-$$
-
-Then by (1) we can pick $x_{n+1}$ so that:
-$$
-\|x_{n+1}\| < \frac{L}{2^n} \quad \text{and} \quad \left\|y - A\left(x_1 + x_2 + \cdots + x_n\right) - A\left(x_{n+1}\right)\right\| < \frac{1}{2^{n+1}},
-$$
-
-so (2) is satisfied for $x_{n+1}.$ Let
-$$
-s_n = x_1 + x_2 + \cdots + x_n.
-$$
-
-From the first inequality in (2), $\left(s_n\right)$ is a Cauchy sequence, and since $X$ is complete, $s_n$ converges to some $x \in X.$ By (2), the sequence $A s_n$ tends to $y$ and so $Ax = y$ by continuity of $A.$ Also,
-$$
-\|x\| = \lim_{n \to \infty} \|s_n\| \leq \sum_{n=1}^\infty \|x_n\| < 2 L.
-$$
-
-This shows that $y$ belongs to $A(2LU),$ so $V \subseteq A(2LU)$ as claimed. Thus the image $A(U)$ of the unit ball in $X$ contains the open ball $V / 2L$ of $Y.$ Hence, $A(U)$ is a neighborhood of the origin in $Y,$ and this concludes the proof.
-
-Theorem. Let $X$ and $Y$ be Banach spaces, let $B_X$ and $B_Y$ denote their open unit balls, and let $T : X \to Y$ be a bounded linear operator. If $\delta > 0$ then among the following four statements we have $(1) \implies (2) \implies (3) \implies (4)$ (with the same $\delta$):
-
-# $\left\|T^* y^*\right\| \geq \delta \left\|y^*\right\|$ for all $y^* \in Y^*$;
-
-# $\overline{T\left(B_X\right)} \supseteq \delta B_Y$;
-
-# ${T\left(B_X\right)} \supseteq \delta B_Y$;
-
-# $\operatorname{Im} T = Y$ (that is, $T$ is surjective).
-
-Furthermore, if $T$ is surjective then (1) holds for some $\delta > 0.$
-
-The open mapping theorem has several important consequences:
-
-* If $A : X \to Y$ is a bijective continuous linear operator between the Banach spaces $X$ and $Y,$ then the inverse operator $A^{-1} : Y \to X$ is continuous as well (this is called the bounded inverse theorem).
-
-* If $A : X \to Y$ is a linear operator between the Banach spaces $X$ and $Y,$ and if for every sequence $\left(x_n\right)$ in $X$ with $x_n \to 0$ and $A x_n \to y$ it follows that $y = 0,$ then $A$ is continuous (the closed graph theorem).
-
-Local convexity of $X$ or $Y$ is not essential to the proof, but completeness is: the theorem remains true in the case when $X$ and $Y$ are F-spaces. Furthermore, the theorem can be combined with the Baire category theorem in the following manner:
-
-Theorem. Let $A : X \to Y$ be a continuous linear map from an F-space $X$ into a topological vector space $Y.$ If the image $A(X)$ is nonmeager in $Y,$ then $A$ is a surjective open map.
-
-Furthermore, in this latter case if $N$ is the kernel of $A,$ then there is a canonical factorization of $A$ in the form
-$$
-X \to X/N \overset{\alpha}{\to} Y
-$$
-
-where $X / N$ is the quotient space (also an F-space) of $X$ by the closed subspace $N.$ The quotient mapping $X \to X / N$ is open, and the mapping $\alpha$ is an isomorphism of topological vector spaces.
-
-Theorem (Narici & Beckenstein 2011). Let $A : X \to Y$ be a continuous linear operator from a complete pseudometrizable TVS $X$ into a Hausdorff topological vector space $Y.$ If $\operatorname{Im} A$ is nonmeager in $Y$ then $A : X \to Y$ is a surjective open map and $Y$ is a complete pseudometrizable TVS.
-
-The open mapping theorem can also be stated as follows:
-
-Theorem. Let $X$ and $Y$ be two F-spaces. Then every continuous linear map of $X$ onto $Y$ is a TVS homomorphism, where a linear map $u : X \to Y$ is a topological vector space (TVS) homomorphism if the induced map $\hat{u} : X / \ker(u) \to Y$ is a TVS-isomorphism onto its image.
-
-Webbed spaces are a class of topological vector spaces for which the open mapping theorem and the closed graph theorem hold. diff --git a/wiki/wikipedia/2891.txt b/wiki/wikipedia/2891.txt deleted file mode 100644 index a5210e0f0d506f4bb3dfdf70c6545ea511dbed97..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2891.txt +++ /dev/null @@ -1,61 +0,0 @@
-In mathematics, Welch bounds are a family of inequalities pertinent to the problem of evenly spreading a set of unit vectors in a vector space. The bounds are important tools in the design and analysis of certain methods in telecommunication engineering, particularly in coding theory. The bounds were originally published in a 1974 paper by L. R. Welch.
-
-If $\{x_1,\ldots,x_m\}$ are unit vectors in $\mathbb{C}^n$, define $c_\max = \max_{i\neq j} |\langle x_i, x_j \rangle|$, where $\langle\cdot,\cdot\rangle$ is the usual inner product on $\mathbb{C}^n$. Then the following inequalities hold for $k=1,2,\dots$:
-$$
-(c_\max)^{2k} \geq \frac{1}{m-1} \left[ \frac{m}{\binom{n+k-1}{k}}-1 \right].
-$$
-
-If $m\leq n$, then the vectors $\{x_i\}$ can form an orthonormal set in $\mathbb{C}^n$. In this case, $c_\max=0$ and the bounds are vacuous. Consequently, interpretation of the bounds is only meaningful if $m>n$. This will be assumed throughout the remainder of this article.
-
-The "first Welch bound," corresponding to $k=1$, is by far the most commonly used in applications. Its proof proceeds in two steps, each of which depends on a more basic mathematical inequality.
The first step invokes the Cauchy–Schwarz inequality and begins by considering the $m\times m$ Gram matrix $G$ of the vectors $\{x_i\}$; i.e., -$$ -G=\left[ \begin{array}{ccc} \langle x_1, x_1 \rangle & \cdots & \langle x_1, x_m \rangle \\ \vdots & \ddots & \vdots \\ \langle x_m, x_1 \rangle & \cdots & \langle x_m, x_m \rangle \end{array}\right] -$$ - -The trace of $G$ is equal to the sum of its eigenvalues. Because the rank of $G$ is at most $n$, and it is a positive semidefinite matrix, $G$ has at most $n$ positive eigenvalues with its remaining eigenvalues all equal to zero. Writing the non-zero eigenvalues of $G$ as $\lambda_1,\ldots,\lambda_r$ with $r\leq n$ and applying the Cauchy-Schwarz inequality to the inner product of an $r$-vector of ones with a vector whose components are these eigenvalues yields -$$ -(\mathrm{Tr}G)^2 = \left( \sum_{i=1}^r \lambda_i \right)^2 \leq r \sum_{i=1}^r \lambda_i^2 \leq n \sum_{i=1}^m \lambda_i^2 -$$ - -The square of the Frobenius norm (Hilbert-Schmidt norm) of $G$ satisfies -$$ - ||G||^2 = \sum_{i=1}^{m} \sum_{j=1}^m |\langle x_i , x_j \rangle|^2 = \sum_{i=1}^m \lambda_i^2 -$$ - -Taking this together with the preceding inequality gives -$$ -\sum_{i=1}^m \sum_{j=1}^m |\langle x_i , x_j \rangle|^2\geq \frac{(\mathrm{Tr}G)^2}{n} -$$ - -Because each $x_i$ has unit length, the elements on the main diagonal of $G$ are ones, and hence its trace is $\mathrm{Tr}G = m$. So, -$$ -\sum_{i=1}^{m} \sum_{j=1}^m |\langle x_i , x_j \rangle|^2 = m+\sum_{i\neq j} |\langle x_i , x_j \rangle|^2 \geq \frac{m^2}{n} -$$ - -or -$$ -\sum_{i\neq j} |\langle x_i , x_j \rangle|^2 \geq \frac{m(m-n)}{n} -$$ - -The second part of the proof uses an inequality encompassing the simple observation that the average of a set of non-negative numbers can be no greater than the largest number in the set. In mathematical notation, if $a_{\ell}\geq 0$ for $\ell=1,\ldots, L$, then -$$ -\frac{1}{L}\sum_{\ell=1}^L a_{\ell} \leq \max a_{\ell} -$$ - -The previous expression has $m(m-1)$ non-negative terms in the sum, the largest of which is $c_\max^2$. So, -$$ -(c_\max)^2\geq \frac{1}{m(m-1)}\sum_{i\neq j} |\langle x_i , x_j \rangle|^2\geq\frac{m-n}{n(m-1)} -$$ - -or -$$ -(c_\max)^2\geq \frac{m-n}{n(m-1)} -$$ - -which is precisely the inequality given by Welch in the case that $k=1$. - -In certain telecommunications applications, it is desirable to construct sets of vectors that meet the Welch bounds with equality. Several techniques have been introduced to obtain so-called Welch Bound Equality (WBE) sets of vectors for the k = 1 bound. - -The proof given above shows that two separate mathematical inequalities are incorporated into the Welch bound when $k=1$. The Cauchy-Schwarz inequality is met with equality when the two vectors involved are collinear. In the way it is used in the above proof, this occurs when all the non-zero eigenvalues of the Gram matrix $G$ are equal, which happens precisely when the vectors $\{x_1,\ldots,x_m\}$ constitute a tight frame for $\mathbb{C}^n$. - -The other inequality in the proof is satisfied with equality if and only if $|\langle x_i, x_j \rangle|$ is the same for every choice of $i\neq j$. In this case, the vectors are equiangular. So this Welch bound is met with equality if and only if the set of vectors $\{x_i\}$ is an equiangular tight frame in $\mathbb{C}^n$. 
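The $k = 1$ bound is easy to check numerically. The following Python sketch is an illustration added here, not part of the original article; it assumes numpy is available, and the helper name `welch_bound` is ours. It draws random unit vectors in $\mathbb{C}^n$ and verifies that $c_\max$ dominates the first Welch bound:

```python
import numpy as np
from math import comb

def welch_bound(m: int, n: int, k: int = 1) -> float:
    """Lower bound on c_max for m unit vectors in C^n (k-th Welch bound)."""
    return ((m / comb(n + k - 1, k) - 1) / (m - 1)) ** (1 / (2 * k))

rng = np.random.default_rng(0)
m, n = 8, 3                              # m > n, so the bound is informative
# Draw m random unit vectors in C^n.
X = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
X /= np.linalg.norm(X, axis=1, keepdims=True)

G = X @ X.conj().T                       # Gram matrix of inner products
c_max = np.abs(G - np.eye(m)).max()      # largest off-diagonal |<x_i, x_j>|

print(f"c_max = {c_max:.4f}, first Welch bound = {welch_bound(m, n):.4f}")
assert c_max >= welch_bound(m, n)
```

Random vectors typically exceed the bound by a comfortable margin; equality would require an equiangular tight frame, as explained above.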
diff --git a/wiki/wikipedia/2892.txt deleted file mode 100644 index cc3b597504a10732c2c49a1734ca2ee3ebbb624b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2892.txt +++ /dev/null @@ -1,82 +0,0 @@ -The unique homomorphic extension theorem is a result in mathematical logic which formalizes the intuition that the truth or falsity of a statement can be deduced from the truth values of its parts. - -Let A be a non-empty set, X a subset of A, F a set of functions on A, and $ X_+ $ the inductive closure of X under F. - -Let B be any non-empty set, and let G be a set of functions on B together with a function $d:F\to G$ that assigns to each function f of arity n in F a function $d(f):B^n\to B$ in G (the map $d$ need not be a bijection). - -With this setup we can now state the unique homomorphic extension theorem. - -If $ X_+ $ is a free set generated by X and F, then for each function $h:X\to B$ there is a unique function $\hat h:X_+\to B$ such that: -$$ -\forall x\in X, \hat h(x)= h(x); \qquad (1) -$$ - -For each function f of arity n > 0, for each $x_1,\ldots,x_n\in X^n_+,$ -$$ -\hat h(f(x_1, \ldots, x_n)) = g(\hat h(x_1),\ldots,\hat h(x_n)), \text{ where } g=d(f) \qquad (2) -$$ - -The identities (1) and (2) show that $\hat h$ is a homomorphism, specifically named the unique homomorphic extension of $h$. To prove the theorem, two things must be shown: that the extension $\hat h$ exists, and that it is unique. - -We define a sequence of functions $ h_i:X_i\to B $ inductively, satisfying conditions (1) and (2) restricted to $X_i$. For this, we define $h_0=h$, and given $h_i$, the function $h_{i+1}$ shall have the following graph: -$$ -{\{(f(x_1,\ldots,x_n),g(h_i(x_1),\ldots,h_i(x_n))) \mid (x_1,\ldots,x_n)\in X^n_i - X^n_{i-1},f\in F\}} \cup {\operatorname{graph}(h_i)} \text{ with } g=d(f) -$$ - -First we must check that this graph actually defines a function. Since $X_+$ is a free set, freeness gives  $f(x_1,\ldots,x_n)\in X_{i+1} - X_i$ when $(x_1,\ldots,x_n)\in X^n_i - X^n_{i-1},(i\geq 0)$, so we only have to verify functionality for the left side of the union. Since the elements of G are functions (again, as defined above), the only way that $(x,y)$ and $(x,z)$ can both belong to this graph for some $x\in X_{i+1} - X_i$ is if we have  $ x=f(x_1,\ldots,x_m)=f'(y_1,\ldots,y_n) $  for some $(x_1,\ldots,x_m)\in X^m_i - X^m_{i-1},(y_1,\ldots,y_n)\in X^n_i - X^n_{i-1}$ and for some generators $f$ and ${f'}$ in $F$. - -Since $ f(X^m_+) $ and $ {f'}(X^n_+) $  are disjoint when $f\neq {f'}$, the equality $f(x_1,\ldots,x_m) = f'(y_1,\ldots,y_n)$ implies $f=f' $ and $ m=n$. Since every $f\in F $ is injective on $ X^n_+$ (freeness again), we must have $x_j=y_j$ for all $j,1\leq j\leq n$. - -Then we have $y=z=g(x_1,\ldots,x_n)$ with $ g=d(f) $, so the graph is indeed functional. - -Before moving further we need a lemma about unions of partial functions; it may be written as: - -(3) Let $(f_n)_{n\geq 0}$ be a sequence of partial functions $f_n:A\to B$ such that $f_n\subseteq f_{n+1}$ for all $n\geq 0$. Then $g=(A,\bigcup \operatorname{graph}(f_n),B)$ is a partial function. - -Using (3), $\hat h =\bigcup_{i\geq 0} h_i$ is a partial function. Since  $ \operatorname{dom}(\hat h)=\bigcup \operatorname{dom}(h_i)=\bigcup X_i=X_+$, the function $\hat h$ is total on $X_+$. - -Furthermore, it is clear from the definition of $h_i$ that $\hat h$ satisfies (1) and (2).
To prove the uniqueness of $\hat h$: for any other function ${h'}$ that satisfies (1) and (2), it is enough to use a simple induction showing that $\hat h$ and ${h'}$ agree on $X_i$ for all $i\geq 0$, and this proves the unique homomorphic extension theorem. - -We can use the theorem for evaluating numeric expressions over the integers. First, we define the following: -$$ - A=\Sigma^* -$$ where $\Sigma= \mathrm{Variables} \cup \{0,1,2,\ldots,9\} \cup \{+,-,*\} \cup \{(,)\}$ and $X = \mathrm{Variables} \cup \{0,\ldots,9\}$ (the atomic expressions). - -Let $F =\{f_-,f_+,f_*\}$, where -$$ -f_-:\Sigma^*\to \Sigma^*, \quad w\mapsto {-w} -$$ -$$ -f_+:\Sigma^*\times\Sigma^*\to \Sigma^*, \quad (w_1,w_2)\mapsto {w_1+w_2} -$$ -$$ -f_*:\Sigma^*\times\Sigma^*\to \Sigma^*, \quad (w_1,w_2)\mapsto {w_1*w_2} -$$ - -Let $EXPR$ be the inductive closure of $X$ under $F$, let $B=\Z$, and let $G=\{\mathrm{Plus}(\cdot,\cdot),\mathrm{Times}(\cdot,\cdot),\mathrm{Minus}(\cdot)\}$. - -Define $d:F\to G$ by -$$ -d({f_-})=\mathrm{Minus} -$$ -$$ -d({f_+})=\mathrm{Plus} -$$ -$$ -d({f_*})=\mathrm{Times} -$$ - -Similarly, in propositional logic, $\hat h:X_+\to\{0,1\}$ is the function that recursively computes the truth value of a proposition; it is the extension of the function $h:X\to\{0,1\}$ that assigns a truth value to each atomic proposition, such that: - -(1) $\hat h (\phi) = h(\phi)$ for atomic $\phi$ - -(2) $\hat h({(\neg\phi)})=\mathrm{NOT}(\hat h(\phi))$ (negation) -$$ -\hat h({(\rho\land \theta)})= \mathrm{AND}(\hat h(\rho),\hat h(\theta)) -$$ (AND operator) -$$ -\hat h({(\rho\lor \theta)})= \mathrm{OR}(\hat h(\rho),\hat h(\theta)) -$$ (OR operator) -$$ -\hat h({(\rho\to \theta)})= \mathrm{IFTHEN}(\hat h(\rho),\hat h(\theta)) -$$ (IF-THEN operator) diff --git a/wiki/wikipedia/2893.txt deleted file mode 100644 index 028c5e36c7f5acf1e2db845864b7a5e2b0e8fd58..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2893.txt +++ /dev/null @@ -1,11 +0,0 @@ -In the mathematical discipline of graph theory, the 2-factor theorem, discovered by Julius Petersen, is one of the earliest results in graph theory. It can be stated as follows: - -2-factor theorem. Let G be a regular graph whose degree is an even number, 2k. Then the edges of G can be partitioned into k edge-disjoint 2-factors. - -Here, a 2-factor is a subgraph of G in which all vertices have degree two; that is, it is a collection of cycles that together touch each vertex exactly once. - -In order to prove this generalized form of the theorem, Petersen first proved that a 4-regular graph can be factorized into two 2-factors by taking alternate edges in an Eulerian trail. He noted that the same technique used for the 4-regular graph yields a factorization of a 2k-regular graph into two k-factors. - -To prove this theorem, it is sufficient to consider connected graphs. A connected graph in which every degree is even has an Eulerian circuit. Traversing this Eulerian circuit generates an orientation D of G such that every vertex has indegree and outdegree equal to k. Next, replace every vertex v ∈ V(D) by two vertices v’ and v”, and replace every directed edge uv of the oriented graph by an undirected edge from u’ to v”. Since D has in- and outdegree equal to k, the resulting bipartite graph G’ is k-regular. The edges of G’ can be partitioned into k perfect matchings by a theorem of Kőnig. Now merging v’ with v” for every v recovers the graph G, and maps the k perfect matchings of G’ onto k 2-factors of G which partition its edges. - -The theorem was discovered by Julius Petersen, a Danish mathematician. It is, in fact, one of the first results in graph theory.
The theorem appears first in the 1891 article "Die Theorie der regulären graphs". To prove the theorem, Petersen's fundamental idea was to 'colour' the edges of a trail or a path alternatingly red and blue, and then to use the edges of one or both colours for the construction of other paths or trails. diff --git a/wiki/wikipedia/2894.txt deleted file mode 100644 index 4b2bb940e88eb06e56016c380ad99c3cd90d3367..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2894.txt +++ /dev/null @@ -1,78 +0,0 @@ -In computer science, a timestamp-based concurrency control algorithm is a non-lock concurrency control method. It is used in some databases to safely handle transactions, using timestamps. The method relies on the following assumptions: - -* Every timestamp value is unique and accurately represents an instant in time. - -* No two timestamps can be the same. - -* A higher-valued timestamp occurs later in time than a lower-valued timestamp. - -A number of different ways have been used to generate timestamps: - -* Use the value of the system's clock at the start of a transaction as the timestamp. - -* Use a thread-safe shared counter that is incremented at the start of a transaction as the timestamp. - -* A combination of the above two methods. - -Each transaction ($T_i$) is an ordered list of actions ($A_{ix}$). Before the transaction performs its first action ($A_{i1}$), it is marked with the current timestamp, or any other strictly totally ordered sequence: $TS(T_i) = NOW()$. Every transaction is also given an initially empty set of transactions upon which it depends, $DEP(T_i) = []$, and an initially empty set of old objects which it updated, $OLD(T_i) = []$. - -Each object $(O_j)$ in the database is given two timestamp fields which are not used other than for concurrency control: $RTS(O_j)$ is the time at which the value of the object was last read by a transaction, and $WTS(O_j)$ is the time at which the value of the object was last updated by a transaction. - -For all $T_i$: - -For each action $A_{ix}$: - -If $A_{ix}$ wishes to use the value of $O_j$: - -If $WTS(O_j) > TS(T_i)$ then abort (a more recent transaction has overwritten the value), - -Otherwise update the set of dependencies $DEP(T_i).\mathrm{add}(WTS(O_j))$ and set $RTS(O_j) = \max(RTS(O_j), TS(T_i))$; - -If $A_{ix}$ wishes to update the value of $O_j$: - -If $RTS(O_j) > TS(T_i)$ then abort (a more recent transaction is already relying on the old value), - -If $WTS(O_j) > TS(T_i)$ then skip (the Thomas Write Rule), - -Otherwise store the previous values, $OLD(T_i).\mathrm{add}(O_j, WTS(O_j))$, set $WTS(O_j) = TS(T_i)$, and update the value of $O_j$. - -While there is a transaction in $DEP(T_i)$ that has not ended: wait - -If there is a transaction in $DEP(T_i)$ that aborted then abort - -Otherwise: commit. - -To abort: - -For each $(\mathrm{old}O_j, \mathrm{old}WTS(O_j))$ in $OLD(T_i)$ - -If $WTS(O_j)$ equals $TS(T_i)$ then restore $O_j = \mathrm{old}O_j$ and $WTS(O_j) = \mathrm{old}WTS(O_j)$ - -Whenever a transaction begins, it receives a timestamp. This timestamp indicates the order in which the transaction must occur, relative to the other transactions. So, given two transactions that affect the same object, the operation of the transaction with the earlier timestamp must execute before the operation of the transaction with the later timestamp. However, if the operation of the wrong transaction is actually presented first, then it is aborted and the transaction must be restarted.
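The rules above are compact enough to simulate directly. Here is a minimal single-threaded Python sketch of the read/write checks, including the Thomas write rule; it is an illustration only, the class and attribute names are invented, and the $DEP$/$OLD$ bookkeeping needed for commits and aborts is omitted for brevity:

```python
import itertools

class Aborted(Exception):
    """The transaction must abort and restart with a fresh timestamp."""

_clock = itertools.count(1)          # stand-in for NOW(): strictly increasing

class Obj:
    def __init__(self, value):
        self.value, self.rts, self.wts = value, 0, 0   # RTS(O_j), WTS(O_j)

class Txn:
    def __init__(self):
        self.ts = next(_clock)       # TS(T_i)

    def read(self, obj):
        if obj.wts > self.ts:        # a more recent transaction overwrote it
            raise Aborted
        obj.rts = max(obj.rts, self.ts)
        return obj.value

    def write(self, obj, value):
        if obj.rts > self.ts:        # a more recent transaction read the old value
            raise Aborted
        if obj.wts > self.ts:        # Thomas write rule: skip the obsolete write
            return
        obj.value, obj.wts = value, self.ts

# An older transaction writing after a younger one has read must abort:
x = Obj(0)
t1, t2 = Txn(), Txn()
t2.read(x)                           # RTS(x) = TS(t2) > TS(t1)
try:
    t1.write(x, 42)
except Aborted:
    print("t1 aborted: a more recent transaction relies on the old value")
```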
- -Every object in the database has a read timestamp, which is updated whenever the object's data is read, and a write timestamp, which is updated whenever the object's data is changed. - -If a transaction wants to read an object, - -* but the transaction started before the object's write timestamp, this means that something changed the object's data after the transaction started. In this case, the transaction is canceled and must be restarted. - -* and the transaction started after the object's write timestamp, it is safe to read the object. In this case, if the transaction timestamp is after the object's read timestamp, the read timestamp is set to the transaction timestamp. - -If a transaction wants to write to an object, - -* but the transaction started before the object's read timestamp, this means that something has had a look at the object, and we assume it took a copy of the object's data. We cannot write to the object, as that would make any copied data invalid, so the transaction is aborted and must be restarted. - -* and the transaction started before the object's write timestamp, this means that something has changed the object since our transaction started. In this case we use the Thomas write rule and simply skip our write operation and continue as normal; the transaction does not have to be aborted or restarted. - -* otherwise, the transaction writes to the object, and the object's write timestamp is set to the transaction's timestamp. - -Note that timestamp ordering in its basic form does not produce recoverable histories. Consider for example the following history with transactions $T_1$ and $T_2$: -$$ -W_1(x)R_2(x)W_2(y)C_2R_1(z)C_1 -$$ - -This could be produced by a TO scheduler, but is not recoverable, as $T_2$ commits even though it has read from an uncommitted transaction. To make sure that it produces recoverable histories, a scheduler can keep a list of the other transactions each transaction has read from, and not let a transaction commit until this list consists only of committed transactions. To avoid cascading aborts, the scheduler could tag data written by uncommitted transactions as dirty, and never let a read operation commence on such a data item until it is untagged. To get a strict history, the scheduler should not allow any operations on dirty items. - -The timestamp resolution is the minimum time elapsed between two adjacent timestamps. If the resolution of the timestamp is too large (coarse), the possibility of two or more timestamps being equal is increased, thus enabling some transactions to commit out of correct order. For example, assuming that we have a system that can create one hundred unique timestamps per second, two events that occur 2 milliseconds apart will probably be given the same timestamp even though they actually occurred at different times. - -Even though this technique is a non-locking one, inasmuch as the object is not locked from concurrent access for the duration of a transaction, the act of recording each timestamp against the object requires an extremely short duration lock on the object or its proxy. diff --git a/wiki/wikipedia/2895.txt deleted file mode 100644 index 8daf7faa0417ef4ee956eccb8870bbd52eb35420..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2895.txt +++ /dev/null @@ -1,24 +0,0 @@ -The Turán–Kubilius inequality is a mathematical theorem in probabilistic number theory. It is useful for proving results about the normal order of an arithmetic function.
- -This formulation is from Tenenbaum. Other formulations are in Narkiewicz - -and in Cojocaru & Murty. - -Suppose f is an additive complex-valued arithmetic function, and write p for an arbitrary prime and ν for an arbitrary positive integer. Write -$$ -A(x)=\sum_{p^\nu \le x} f(p^\nu) p^{-\nu}(1-p^{-1}) -$$ - -and -$$ -B(x)^2 = \sum_{p^\nu \le x} \left| f(p^\nu) \right| ^2 p^{-\nu}. -$$ - -Then there is a function ε(x) that tends to zero as x tends to infinity, such that for x ≥ 2 we have -$$ -\frac{1}{x} \sum_{n \le x} |f(n) - A(x)|^2 \le (2 + \varepsilon(x)) B(x)^2. -$$ - -Turán developed the inequality to create a simpler proof of the Hardy–Ramanujan theorem about the normal order of the number ω(n) of distinct prime divisors of an integer n. There is an exposition of Turán's proof in Hardy & Wright, §22.11. - -Tenenbaum gives a proof of the Hardy–Ramanujan theorem using the Turán–Kubilius inequality and states without proof several other applications. diff --git a/wiki/wikipedia/2896.txt deleted file mode 100644 index 8e941080ee8e2f19a0a7bfcf8f2bcefc7795deea..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2896.txt +++ /dev/null @@ -1,79 +0,0 @@ -Proofs from THE BOOK is a book of mathematical proofs by Martin Aigner and Günter M. Ziegler. The book is dedicated to the mathematician Paul Erdős, who often referred to "The Book" in which God keeps the most elegant proof of each mathematical theorem. During a lecture in 1985, Erdős said, "You don't have to believe in God, but you should believe in The Book." - -Proofs from THE BOOK contains 32 sections (45 in the sixth edition), each devoted to one theorem but often containing multiple proofs and related results. It spans a broad range of mathematical fields: number theory, geometry, analysis, combinatorics and graph theory. Erdős himself made many suggestions for the book, but died before its publication. The book is illustrated by Karl H. Hofmann. It has gone through six editions in English, and has been translated into Persian, French, German, Hungarian, Italian, Japanese, Chinese, Polish, Portuguese, Korean, Turkish, Russian and Spanish. - -In November 2017 the American Mathematical Society announced the 2018 Leroy P. Steele Prize for Mathematical Exposition to be awarded to Aigner and Ziegler for this book.
- -The proofs include: - -* Six proofs of the infinitude of the primes, including Euclid's and Furstenberg's - -* Proof of Bertrand's postulate - -* Fermat's theorem on sums of two squares - -* Two proofs of the Law of quadratic reciprocity - -* Proof of Wedderburn's little theorem asserting that every finite division ring is a field - -* Four proofs of the Basel problem - -* Proof that e is irrational (also showing the irrationality of certain related numbers) - -* Hilbert's third problem - -* Sylvester–Gallai theorem and De Bruijn–Erdős theorem - -* Cauchy's theorem - -* Borsuk's conjecture - -* Schröder–Bernstein theorem - -* Wetzel's problem on families of analytic functions with few distinct values - -* The fundamental theorem of algebra - -* Monsky's theorem (4th edition) - -* Van der Waerden's conjecture - -* Littlewood–Offord lemma - -* Buffon's needle problem - -* Sperner's theorem, Erdős–Ko–Rado theorem and Hall's theorem - -* Lindström–Gessel–Viennot lemma and the Cauchy–Binet formula - -* Four proofs of Cayley's formula - -* Kakeya sets in vector spaces over finite fields - -* Bregman–Minc inequality - -* Dinitz problem - -* Steve Fisk's proof of the art gallery theorem - -* Five proofs of Turán's theorem - -* Shannon capacity and Lovász number - -* Chromatic number of Kneser graphs - -* Friendship theorem - -* Some proofs using the probabilistic method diff --git a/wiki/wikipedia/2897.txt deleted file mode 100644 index d613ac160e2aeb86b3a5532b78cd284a083595a4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2897.txt +++ /dev/null @@ -1,14 +0,0 @@ -In number theory, and especially the study of Diophantine approximation, the lonely runner conjecture is a conjecture originally due to J. M. Wills in 1967. Applications of the conjecture are widespread in mathematics; they include view obstruction problems and calculating the chromatic number of distance graphs and circulant graphs. The conjecture was given its picturesque name by L. Goddyn in 1998. - -Consider k runners on a circular track of unit length. At t = 0, all runners are at the same position and start to run; the runners' speeds are pairwise distinct. A runner is said to be lonely at time t if they are at a distance (measured along the circle) of at least 1/k from every other runner at time t. The lonely runner conjecture states that each runner is lonely at some time. - -A convenient reformulation of the conjecture is to assume that the runners have integer speeds, not all divisible by the same prime; the runner to be lonely has zero speed. The conjecture then states that for any set D of k − 1 positive integers with greatest common divisor 1, -$$ -\exists t\in \mathbb{R}\quad \forall d\in D\quad \|td\| \geq \frac{1}{k}, -$$ - -where ||x|| denotes the distance of real number x to the nearest integer. - -Dubickas shows that, for a sufficiently large number of runners with speeds $v_1 < v_2 < \dots < v_k$, the lonely runner conjecture is true if $\frac{v_{i+1}}{v_i} \geq 1 + \frac{33\log(k)}{k}$ for all $i$. - -Czerwiński shows that, with probability tending to one, a much stronger statement holds for random sets, in which the bound $\frac{1}{k}$ is replaced by $\frac{1}{2}-\epsilon$.
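With the integer-speed reformulation, small cases are easy to check by machine. The sketch below is ours, not from the article, and the grid of 10,000 candidate times is an arbitrary choice; it looks for a time $t$ with $\|td\| \geq 1/k$ for all $d \in D$:

```python
from fractions import Fraction

def dist_to_nearest_int(x: Fraction) -> Fraction:
    """||x||: the distance from x >= 0 to the nearest integer."""
    frac = x - int(x)
    return min(frac, 1 - frac)

def lonely_time(D, steps=10000):
    """Search times t = i/steps in (0, 1] witnessing that the runner
    with speed 0 is lonely among runners with integer speeds D."""
    k = len(D) + 1                      # total number of runners
    for i in range(1, steps + 1):
        t = Fraction(i, steps)
        if all(dist_to_nearest_int(t * d) >= Fraction(1, k) for d in D):
            return t
    return None

# Four runners with speeds 0, 1, 2, 3: at t = 1/4 every moving runner is
# at distance at least 1/4 from the stationary runner.
print(lonely_time([1, 2, 3]))           # Fraction(1, 4)
```

Since the speeds are integers, all positions are periodic with period 1, so restricting the search to $t \in (0, 1]$ loses nothing.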
diff --git a/wiki/wikipedia/2898.txt deleted file mode 100644 index 0a857b2f246a5cb13372571bdf36b7beb4f3de7d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2898.txt +++ /dev/null @@ -1,9 +0,0 @@ -Mirror sites or mirrors are replicas of other websites or any network node. The concept of mirroring applies to network services accessible through any protocol, such as HTTP or FTP. Such sites have different URLs than the original site, but host identical or near-identical content. Mirror sites are often located in a different geographic region than the original, or upstream, site. The purpose of mirrors is to reduce network traffic, improve access speed, ensure availability of the original site for technical or political reasons, or provide a real-time backup of the original site. Mirror sites are particularly important in developing countries, where internet access may be slower or less reliable. The maintainers of some mirrors choose not to replicate the entire contents of the upstream server they are mirroring because of technical constraints, or they select only a subset relevant to their purpose, such as software written in a particular programming language, runnable on a single computer platform, or written by one author. These sites are called partial mirrors or secondary mirrors. - -Mirror sites were heavily used on the early internet, when most users accessed through dialup and the Internet backbone had much lower bandwidth than today, making a geographically-localized mirror network a worthwhile benefit. Download archives such as Info-Mac, Tucows and CPAN maintained worldwide networks mirroring their content, accessible over HTTP or anonymous FTP. Some of these networks, such as Info-Mac or Tucows, are no longer active or have removed their mirrored download sections, but some, like CPAN or the Debian package mirrors, are still active in 2019. Debian removed FTP access to its mirrors in 2017 because of declining use and the relative stagnation of the FTP protocol, mentioning FTP servers' lack of support for techniques such as caching and load balancing that are available to HTTP. New mirrors use HTTPS and support IPv6 along with IPv4. The primary mirror sites distribute to secondary mirrors by using rsync to ensure integrity of files and file consistency with the master mirror. - -Notable websites with mirrors include Project Gutenberg, KickassTorrents, The Pirate Bay, WikiLeaks, the website of the Environmental Protection Agency, and Wikipedia. Some notable partial mirrors include free and open-source software projects such as GNU, in particular the Linux distributions CentOS, Debian, Fedora, and Ubuntu; such projects provide mirrors of the download sites (since those are expected to have high server load). Many open source application providers such as VideoLAN use mirrors to distribute VLC Media Player, and The Document Foundation uses mirrors to distribute LibreOffice. - -It was once common for tech companies such as Microsoft, Hewlett-Packard or Apple Computer to maintain a network of mirrors accessible over HTTP or anonymous FTP, hosting software updates, sample code and various freely-downloadable utilities. Many of these sites were shut down in the first decades of the 21st century, with Apple shutting down its FTP services in 2012 and Microsoft stopping updates in 2010.
Today, the contents of a number of these mirror sites are archived at https://archive.org/details/ftpsites&tab=collection - -Occasionally, some people will use web scraping software to produce static dumps of existing sites, such as the BBC's Top Gear and RedFlagDeals. diff --git a/wiki/wikipedia/2899.txt deleted file mode 100644 index f891241c581f30acfd197cbad0e04a510a610744..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2899.txt +++ /dev/null @@ -1,23 +0,0 @@ -Ubuntu One is an OpenID-based single sign-on service operated by Canonical Ltd. to allow users to log onto many Canonical-owned Web sites. Until April 2014, Ubuntu One was also a file hosting service and music store that allowed users to store data "in the cloud". - -The service enabled users to store files online and sync them between computers and mobile devices, as well as stream audio and music from the cloud to mobile devices. - -In April 2014, Canonical announced that the cloud storage and synchronization features would be shut down on July 31, 2014, leaving the sign-on features intact. - -Ubuntu One had a client application that ran on Ubuntu 9.04 and later, Windows XP or newer, and Mac OS X 10.6 and higher. Other Linux distributions not running GNOME were supported through a console client. The source code is available through Launchpad and can easily be compiled for other Unix-like operating systems such as FreeBSD. There was an Ubuntu One music app for iOS devices. A free Ubuntu One account offered 5 GB of storage. - -The Ubuntu One service was similar to Microsoft OneDrive, iCloud, Dropbox, Google Play Music, and Amazon Cloud Player. Its client code was written in Python. It used Twisted for its low-level networking and Protocol Buffers for protocol description. Data was synced over a custom protocol called "u1storage", and stored on Amazon S3. - -Ubuntu One offered automatic upload of photos taken on Android mobile devices for immediate synchronization across computers, and integration with Mozilla Thunderbird for contacts and with Tomboy for notes, via access to the local CouchDB instance. It also had capabilities for purchasing DRM-free music that was synchronized automatically with an Ubuntu One account, via the Ubuntu One Music Store (in partnership with 7digital). - -Ubuntu One published APIs for developers wishing to build applications with file and data synchronization or music streaming. - -An Ubuntu One account gave users access to the Canonical Store, Launchpad, Ubuntu One and other Ubuntu services; an Ubuntu One account allowed users to store files within the cloud, store their contact details within the interface, access the Ubuntu One Music Store to buy music, and activate the Ubuntu Software Center. Other sites that support OpenID authorization also had support for Ubuntu One. - -In June 2013, the Ubuntu Single Sign On account was re-branded under Ubuntu One as part of consolidating Canonical's online services under the Ubuntu One brand. The announcement also identified Ubuntu Pay as another service to come under the brand. Following a security breach in July 2013, Canonical put the Ubuntu Forums under the brand, meaning that Forum users now log in using Ubuntu One, rather than with the previous username-password system. - -On April 2, 2014, Canonical announced the shutdown of select Ubuntu One services. As of the day of announcement, it was no longer possible to purchase storage space or music.
File services would be unavailable from June 1, but existing users were allowed to download their content until July 31, when all stored data would be permanently deleted. The Amarok development team announced that they would not add support for the Ubuntu One Music Store to the Amarok media player for the moment, unlike the Magnatune media store, which returns 10% of the revenue produced via the interface to Amarok. - -Storage was out-sourced to Amazon S3. - -Files stored in the Ubuntu One file stores were not encrypted. diff --git a/wiki/wikipedia/29.txt b/wiki/wikipedia/29.txt deleted file mode 100644 index a0cd00b1f21af9a587d860df1312822c689c9e39..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/29.txt +++ /dev/null @@ -1,32 +0,0 @@ -In set theory, Easton's theorem is a result on the possible cardinal numbers of powersets. Easton (extending a result of Robert M. Solovay) showed via forcing that the only constraints on permissible values for 2κ when κ is a regular cardinal are -$$ - \kappa < \operatorname{cf}(2^\kappa) -$$ - -(where cf(α) is the cofinality of α) and -$$ - \text{if } \kappa < \lambda \text{ then } 2^\kappa\le 2^\lambda. -$$ - -If G is a class function whose domain consists of ordinals and whose range consists of ordinals such that - -# G is non-decreasing, - -# the cofinality of $\aleph_{G(\alpha)}$ is greater than $\aleph_\alpha$ for each α in the domain of G, and - -# $\aleph_\alpha$ is regular for each α in the domain of G, - -then there is a model of ZFC such that -$$ -2^{\aleph_\alpha} = \aleph_{G(\alpha)} -$$ - -for each $\alpha$ in the domain of G. - -The proof of Easton's theorem uses forcing with a proper class of forcing conditions over a model satisfying the generalized continuum hypothesis. - -The first two conditions in the theorem are necessary. Condition 1 is a well known property of cardinality, while condition 2 follows from König's theorem. - -In Easton's model the powersets of singular cardinals have the smallest possible cardinality compatible with the conditions that 2κ has cofinality greater than κ and is a non-decreasing function of κ. - -Silver proved that a singular cardinal of uncountable cofinality cannot be the smallest cardinal for which the generalized continuum hypothesis fails. This shows that Easton's theorem cannot be extended to the class of all cardinals. The program of PCF theory gives results on the possible values of $2^\lambda$ for singular cardinals $\lambda$. PCF theory shows that the values of the continuum function on singular cardinals are strongly influenced by the values on smaller cardinals, whereas Easton's theorem shows that the values of the continuum function on regular cardinals are only weakly influenced by the values on smaller cardinals. diff --git a/wiki/wikipedia/290.txt b/wiki/wikipedia/290.txt deleted file mode 100644 index dcee11bf7f8cb459cef62499ea04fe70e147d513..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/290.txt +++ /dev/null @@ -1,48 +0,0 @@ -In mathematics, a Noetherian topological space, named for Emmy Noether, is a topological space in which closed subsets satisfy the descending chain condition. Equivalently, we could say that the open subsets satisfy the ascending chain condition, since they are the complements of the closed subsets. 
The Noetherian property of a topological space can also be seen as a strong compactness condition, namely that every open subset of such a space is compact, and in fact it is equivalent to the seemingly stronger statement that every subset is compact. - -A topological space $X$ is called Noetherian if it satisfies the descending chain condition for closed subsets: for any sequence -$$ - Y_1 \supseteq Y_2 \supseteq \cdots -$$ - -of closed subsets $Y_i$ of $X$, there is an integer $m$ such that $Y_m=Y_{m+1}=\cdots.$ - -* A topological space $X$ is Noetherian if and only if every subspace of $X$ is compact (i.e., $X$ is hereditarily compact), and if and only if every open subset of $X$ is compact. - -* Every subspace of a Noetherian space is Noetherian. - -* The continuous image of a Noetherian space is Noetherian. - -* A finite union of Noetherian subspaces of a topological space is Noetherian. - -* Every Hausdorff Noetherian space is finite with the discrete topology. - -Proof: Every subset of X is compact in a Hausdorff space, hence closed. So X has the discrete topology, and being compact, it must be finite. - -* Every Noetherian space X has a finite number of irreducible components. If the irreducible components are $X_1,...,X_n$, then $X=X_1\cup\cdots\cup X_n$, and none of the components $X_i$ is contained in the union of the other components. - -Many examples of Noetherian topological spaces come from algebraic geometry, where for the Zariski topology an irreducible set has the intuitive property that any closed proper subset has smaller dimension. Since dimension can only 'jump down' a finite number of times, and algebraic sets are made up of finite unions of irreducible sets, descending chains of Zariski closed sets must eventually be constant. - -A more algebraic way to see this is that the associated ideals defining algebraic sets must satisfy the ascending chain condition. That follows because the rings of algebraic geometry, in the classical sense, are Noetherian rings. This class of examples therefore also explains the name. - -If R is a commutative Noetherian ring, then Spec(R), the prime spectrum of R, is a Noetherian topological space. More generally, a Noetherian scheme is a Noetherian topological space. The converse does not hold, since Spec(R) of a one-dimensional valuation domain R consists of exactly two points and therefore is Noetherian, but there are examples of such rings which are not Noetherian. - -The space $\mathbb{A}^n_k$ (affine $n$-space over a field $k$) under the Zariski topology is an example of a Noetherian topological space. By properties of the ideal of a subset of $\mathbb{A}^n_k$, we know that if -$$ -Y_1 \supseteq Y_2 \supseteq Y_3 \supseteq \cdots -$$ - -is a descending chain of Zariski-closed subsets, then -$$ -I(Y_1) \subseteq I(Y_2) \subseteq I(Y_3) \subseteq \cdots -$$ - -is an ascending chain of ideals of $k[x_1,\ldots,x_n].$ Since $k[x_1,\ldots,x_n]$ is a Noetherian ring, there exists an integer $m$ such that -$$ -I(Y_m)=I(Y_{m+1})=I(Y_{m + 2})=\cdots. -$$ - -Since $V(I(Y))$ is the closure of Y for all Y, $V(I(Y_i))=Y_i$ for all $i.$ Hence -$$ -Y_m=Y_{m+1}=Y_{m + 2}=\cdots -$$ as required. 
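The correspondence just used, between descending chains of Zariski-closed sets and ascending chains of ideals, can be checked with a computer algebra system. The following sympy sketch is an illustration we add here (it assumes sympy is available, and the helper name `ideal_contains` is ours); it tests ideal containment by reducing generators modulo a Gröbner basis:

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')

# Descending closed sets V(x) ⊇ V(x, y) ⊇ V(x, y, z) in affine 3-space
# correspond to the ascending chain of ideals (x) ⊆ (x, y) ⊆ (x, y, z).
chain = [[x], [x, y], [x, y, z]]

def ideal_contains(small, big):
    """ideal(small) ⊆ ideal(big): each generator of the smaller ideal must
    reduce to remainder 0 modulo a Groebner basis of the larger one."""
    G = groebner(big, x, y, z, order='lex')
    return all(G.reduce(f)[1] == 0 for f in small)

for I, J in zip(chain, chain[1:]):
    assert ideal_contains(I, J)
print("ascending chain of ideals verified")
```

Because $k[x, y, z]$ is a Noetherian ring, any such ascending chain of ideals, and hence any descending chain of Zariski-closed sets, must eventually stabilize.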
diff --git a/wiki/wikipedia/2900.txt b/wiki/wikipedia/2900.txt deleted file mode 100644 index 93a931f565e5be50f1f90f2422dfd5f2e5ad6c9c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2900.txt +++ /dev/null @@ -1,41 +0,0 @@ -Tarski's undefinability theorem, stated and proved by Alfred Tarski in 1933, is an important limitative result in mathematical logic, the foundations of mathematics, and in formal semantics. Informally, the theorem states that arithmetical truth cannot be defined in arithmetic. - -The theorem applies more generally to any sufficiently strong formal system, showing that truth in the standard model of the system cannot be defined within the system. - -In 1931, Kurt Gödel published the incompleteness theorems, which he proved in part by showing how to represent the syntax of formal logic within first-order arithmetic. Each expression of the formal language of arithmetic is assigned a distinct number. This procedure is known variously as Gödel numbering, coding and, more generally, as arithmetization. In particular, various sets of expressions are coded as sets of numbers. For various syntactic properties (such as being a formula, being a sentence, etc.), these sets are computable. Moreover, any computable set of numbers can be defined by some arithmetical formula. For example, there are formulas in the language of arithmetic defining the set of codes for arithmetic sentences, and for provable arithmetic sentences. - -The undefinability theorem shows that this encoding cannot be done for semantic concepts such as truth. It shows that no sufficiently rich interpreted language can represent its own semantics. A corollary is that any metalanguage capable of expressing the semantics of some object language must have expressive power exceeding that of the object language. The metalanguage includes primitive notions, axioms, and rules absent from the object language, so that there are theorems provable in the metalanguage not provable in the object language. - -The undefinability theorem is conventionally attributed to Alfred Tarski. Gödel also discovered the undefinability theorem in 1930, while proving his incompleteness theorems published in 1931, and well before the 1933 publication of Tarski's work (Murawski 1998). While Gödel never published anything bearing on his independent discovery of undefinability, he did describe it in a 1931 letter to John von Neumann. Tarski had obtained almost all results of his 1933 monograph "The Concept of Truth in the Languages of the Deductive Sciences" between 1929 and 1931, and spoke about them to Polish audiences. However, as he emphasized in the paper, the undefinability theorem was the only result he did not obtain earlier. According to the footnote to the undefinability theorem (Twierdzenie I) of the 1933 monograph, the theorem and the sketch of the proof were added to the monograph only after the manuscript had been sent to the printer in 1931. Tarski reports there that, when he presented the content of his monograph to the Warsaw Academy of Science on March 21, 1931, he expressed at this place only some conjectures, based partly on his own investigations and partly on Gödel's short report on the incompleteness theorems "Einige metamathematische Resultate über Entscheidungsdefinitheit und Widerspruchsfreiheit", Akademie der Wissenschaften in Wien, 1930. - -We will first state a simplified version of Tarski's theorem, then state and prove in the next section the theorem Tarski proved in 1933. 
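Before the formal statement, the coding step is worth making concrete. Any injective map from strings over a fixed finite alphabet into the natural numbers serves as a Gödel numbering; the base-$b$ encoding below is one standard choice (our toy illustration, with an arbitrary miniature alphabet):

```python
ALPHABET = "0S+*=()~&|>vx,"        # a toy alphabet for arithmetic formulas
BASE = len(ALPHABET) + 1           # digit value 0 is never used, so every
                                   # string gets a distinct code

def godel_number(formula: str) -> int:
    """Encode a string as a natural number, digit by digit in base BASE."""
    n = 0
    for ch in formula:
        n = n * BASE + ALPHABET.index(ch) + 1
    return n

def decode(n: int) -> str:
    chars = []
    while n:
        n, d = divmod(n, BASE)
        chars.append(ALPHABET[d - 1])
    return "".join(reversed(chars))

g = godel_number("S0+S0=SS0")      # "1 + 1 = 2" with S as the successor symbol
assert decode(g) == "S0+S0=SS0"
print(g)                           # the Gödel number of the sentence
```

Under such a coding, syntactic sets of formulas (sentences, provable sentences, and so on) become sets of natural numbers, which is all the arguments below require.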
- -Let L be the language of first-order arithmetic. This is the theory of the natural numbers, including their addition and multiplication, axiomatized by the first-order Peano axioms. This is a "first-order" theory: the quantifiers extend over natural numbers, but not over sets or functions of natural numbers. The theory is strong enough to describe recursively defined integer functions such as exponentiation, factorials or the Fibonacci sequence. - -Let N be the standard structure for L, i.e. N consists of the ordinary set of natural numbers and their addition and multiplication. Each sentence in L can be interpreted in N and then becomes either true or false. Thus (L, N) is the "interpreted first-order language of arithmetic." - -Each formula φ in L has a Gödel number g(φ). This is a natural number that "encodes" φ. In that way, the language L can talk about formulas in L, not just about numbers. Let T denote the set of L-sentences true in N, and T* the set of Gödel numbers of the sentences in T. The following theorem answers the question: Can T* be defined by a formula of first-order arithmetic? - -Tarski's undefinability theorem: There is no L-formula True(n) that defines T*. - -That is, there is no L-formula True(n) such that for every L-sentence A, True(g(A)) ↔ A holds in N. - -Informally, the theorem says that the concept of truth of first-order arithmetic statements cannot be defined by a formula in first-order arithmetic. This implies a major limitation on the scope of "self-representation." It is possible to define a formula True(n) whose extension is T*, but only by drawing on a metalanguage whose expressive power goes beyond that of L. For example, a truth predicate for first-order arithmetic can be defined in second-order arithmetic. However, this formula would only be able to define a truth predicate for formulas in the original language L. To define a truth predicate for the metalanguage would require a still higher metametalanguage, and so on. - -To prove the theorem, we proceed by contradiction and assume that an L-formula True(n) exists which is true for the natural number n in N if and only if n is the Gödel number of a sentence in L that's true in N. We could then use True(n) to define a new L-formula S(m) which is true for the natural number m if and only if m is the Gödel number of a formula φ(x) (with a free variable x) such that φ(m) is false when interpreted in N (i.e. the formula φ(x), when applied to its own Gödel number, yields a false statement). If we now consider the Gödel number g of the formula S(m), and ask whether the sentence S(g) is true in N, we obtain a contradiction. (This is known as a diagonal argument.) - -The theorem is a corollary of Post's theorem about the arithmetical hierarchy, proved some years after Tarski (1933). A semantic proof of Tarski's theorem from Post's theorem is obtained by reductio ad absurdum as follows. Assuming T* is arithmetically definable, there is a natural number n such that T* is definable by a formula at level $\Sigma^0_n$ of the arithmetical hierarchy. However, T* is $\Sigma^0_k$-hard for all k. Thus the arithmetical hierarchy collapses at level n, contradicting Post's theorem. - -Tarski proved a stronger theorem than the one stated above, using an entirely syntactical method. The resulting theorem applies to any formal language with negation, and with sufficient capability for self-reference that the diagonal lemma holds. 
First-order arithmetic satisfies these preconditions, but the theorem applies to much more general formal systems. - -Tarski's undefinability theorem (general form): Let (L,N) be any interpreted formal language which includes negation and has a Gödel numbering g(φ) satisfying the diagonal lemma, i.e. for every L-formula B(x) (with one free variable x) there is a sentence A such that A ↔ B(g(A)) holds in N. Then there is no L-formula True(n) with the following property: for every L-sentence A, True(g(A)) ↔ A is true in N. - -The proof of Tarski's undefinability theorem in this form is again by reductio ad absurdum. Suppose that an L-formula True(n) as above existed, i.e., if A is a sentence of arithmetic, then True(g(A)) holds in N if and only if A holds in N. Hence for all A, the formula True(g(A)) ↔ A holds in N. But the diagonal lemma yields a counterexample to this equivalence, by giving a "liar" formula S such that S ↔ ¬True(g(S)) holds in N. This is a contradiction. QED. - -The formal machinery of the proof given above is wholly elementary except for the diagonalization which the diagonal lemma requires. The proof of the diagonal lemma is likewise surprisingly simple; for example, it does not invoke recursive functions in any way. The proof does assume that every L-formula has a Gödel number, but the specifics of a coding method are not required. Hence Tarski's theorem is much easier to motivate and prove than the more celebrated theorems of Gödel about the metamathematical properties of first-order arithmetic. - -Smullyan (1991, 2001) has argued forcefully that Tarski's undefinability theorem deserves much of the attention garnered by Gödel's incompleteness theorems. That the latter theorems have much to say about all of mathematics and more controversially, about a range of philosophical issues (e.g., Lucas 1961) is less than evident. Tarski's theorem, on the other hand, is not directly about mathematics but about the inherent limitations of any formal language sufficiently expressive to be of real interest. Such languages are necessarily capable of enough self-reference for the diagonal lemma to apply to them. The broader philosophical import of Tarski's theorem is more strikingly evident. - -An interpreted language is strongly-semantically-self-representational exactly when the language contains predicates and function symbols defining all the semantic concepts specific to the language. Hence the required functions include the "semantic valuation function" mapping a formula A to its truth value ||A||, and the "semantic denotation function" mapping a term t to the object it denotes. Tarski's theorem then generalizes as follows: No sufficiently powerful language is strongly-semantically-self-representational. - -The undefinability theorem does not prevent truth in one theory from being defined in a stronger theory. For example, the set of (codes for) formulas of first-order Peano arithmetic that are true in N is definable by a formula in second order arithmetic. Similarly, the set of true formulas of the standard model of second order arithmetic (or n-th order arithmetic for any n) can be defined by a formula in first-order ZFC. 
diff --git a/wiki/wikipedia/2901.txt deleted file mode 100644 index 70d1f98761353473fc5fb13a7206d1fbf9f4467d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2901.txt +++ /dev/null @@ -1,15 +0,0 @@ -Lévy's modulus of continuity theorem is a result that describes the almost sure behaviour of the modulus of continuity of the Wiener process, which is used to model Brownian motion. - -Lévy's modulus of continuity theorem is named after the French mathematician Paul Lévy. - -Let $B : [0, 1] \times \Omega \to \mathbb{R}$ be a standard Wiener process. Then, almost surely, -$$ -\lim_{h \to 0} \sup_{t, t'\leq 1; |t-t'|\leq h } \frac{|B_{t'} - B_t|}{\sqrt{2 h \log (1 / h)}} = 1. -$$ - -In other words, the sample paths of Brownian motion have modulus of continuity -$$ -\omega_{B} (\delta) = \sqrt{2 \delta \log (1 / \delta)} -$$ - -with probability one, for sufficiently small $\delta > 0$. diff --git a/wiki/wikipedia/2902.txt deleted file mode 100644 index 83ea299a972af4563bc0f1d9cabc5389771452eb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2902.txt +++ /dev/null @@ -1,13 +0,0 @@ -CTERA Networks is a privately held enterprise software company headquartered in New York and Israel. The company has regional offices in the UK, Italy, France, Spain, Germany, and Australia. The company is designated as the leading vendor in distributed cloud file storage by GigaOm. In October 2016 IBM became a CTERA reseller. IBM's Cloud Object Storage, integrated with the CTERA Enterprise File Services Platform, can be deployed on-premises, in the cloud or in a hybrid on-premises/cloud setup. - -CTERA was founded in 2008 by Liran Eshel and Zohar Kaufman. - -Gartner named CTERA one of five Cool Vendors in Storage Technologies in 2013, and it was included on the Deloitte Fast 500 list in technology in 2015. - -In June 2016, CTERA earned a contract to provide a private cloud file sharing solution to the United States Department of Defense. As part of the agreement, CTERA and DISA (Defense Information Systems Agency) co-developed a mutual authentication technology using government-issued smart cards for additional layers of protection beyond the standard enterprise integration with Microsoft Active Directory servers. - -In 2009, CTERA released its first cloud storage gateway, the C200. The gateway combined the speed of local network storage with off-site cloud storage and backup technology. In 2010, CTERA released the C400 cloud storage gateway, which added features for office server backup and recovery and collaboration capability for multiple users working on files stored in the cloud or on the gateway. In 2011, the C800 gateway was released with 24 TB of raw local storage. In 2016, CTERA announced a virtual cloud storage gateway, deployable on VMware or KVM servers, that enabled customers to use existing hardware. CTERA released updates to its cloud storage gateway portfolio in April 2016 focused on efficiency and storage capacity. - -CTERA released Enterprise File Sync and Share (EFSS) in 2012, which enabled users to access, share and collaborate on files from any location by storing files either locally or on the cloud. Also in 2012, CTERA launched a mobile app which made it possible to access files and backups on iOS and Android devices. - -In December 2015, CTERA announced Cloud Server Data Protection, its storage-as-a-service server data protection software for enterprise CloudOps.
Cloud Server Data Protection protects server data on a private or virtual private cloud using backup agents for Windows and Linux environments and object storage services through cloud platforms such as Amazon Web Services, Microsoft Azure, IBM Cloud and OpenStack-based clouds. diff --git a/wiki/wikipedia/2903.txt b/wiki/wikipedia/2903.txt deleted file mode 100644 index 633badd6b7a88a0d134dbe120feb0f78fb9b05f2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2903.txt +++ /dev/null @@ -1,334 +0,0 @@ -In mathematics, especially in the area of abstract algebra known as combinatorial group theory, the word problem for a finitely generated group G is the algorithmic problem of deciding whether two words in the generators represent the same element. More precisely, if A is a finite set of generators for G then the word problem is the membership problem for the formal language of all words in A and a formal set of inverses that map to the identity under the natural map from the free monoid with involution on A to the group G. If B is another finite generating set for G, then the word problem over the generating set B is equivalent to the word problem over the generating set A. Thus one can speak unambiguously of the decidability of the word problem for the finitely generated group G. - -The related but different uniform word problem for a class K of recursively presented groups is the algorithmic problem of deciding, given as input a presentation P for a group G in the class K and two words in the generators of G, whether the words represent the same element of G. Some authors require the class K to be definable by a recursively enumerable set of presentations. - -Throughout the history of the subject, computations in groups have been carried out using various normal forms. These usually implicitly solve the word problem for the groups in question. In 1911 Max Dehn proposed that the word problem was an important area of study in its own right, together with the conjugacy problem and the group isomorphism problem. In 1912 he gave an algorithm that solves both the word and conjugacy problem for the fundamental groups of closed orientable two-dimensional manifolds of genus greater than or equal to 2. Subsequent authors have greatly extended Dehn's algorithm and applied it to a wide range of group theoretic decision problems. - -It was shown by Pyotr Novikov in 1955 that there exists a finitely presented group G such that the word problem for G is undecidable. It follows immediately that the uniform word problem is also undecidable. A different proof was obtained by William Boone in 1958. - -The word problem was one of the first examples of an unsolvable problem to be found not in mathematical logic or the theory of algorithms, but in one of the central branches of classical mathematics, algebra. As a result of its unsolvability, several other problems in combinatorial group theory have been shown to be unsolvable as well. - -It is important to realize that the word problem is in fact solvable for many groups G. For example, polycyclic groups have solvable word problems since the normal form of an arbitrary word in a polycyclic presentation is readily computable; other algorithms for groups may, in suitable circumstances, also solve the word problem, see the Todd–Coxeter algorithm and the Knuth–Bendix completion algorithm. 
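As a warm-up for the rewriting discussion that follows, here is a short Python sketch (ours; the helper name `reduce_word` is invented) that computes a normal form in the presentation $\{a \mid a^3\}$ worked through below:

```python
def reduce_word(word: str) -> str:
    """Normal form in <a | a^3>, writing A for the inverse of a.

    Strike out the substrings aA, Aa, aaa, AAA while possible, then use
    A = aa so that every element reduces to one of '', 'a', 'aa'.
    """
    changed = True
    while changed:
        changed = False
        for cancellable in ("aA", "Aa", "aaa", "AAA"):
            if cancellable in word:
                word = word.replace(cancellable, "", 1)
                changed = True
    word = word.replace("A", "aa")     # a^-1 = a^2 in the cyclic group of order 3
    return "a" * (word.count("a") % 3)

# Two words represent the same group element iff their normal forms agree:
assert reduce_word("aAaaA") == reduce_word("AaAaa") == "a"
print(reduce_word("AAaaaA"))           # '' -- this word is the identity
```

Each cancellation shortens the word, so the procedure terminates, and the exponent sum modulo 3 it returns is invariant under all of the rewrites.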
On the other hand, the fact that a particular algorithm does not solve the word problem for a particular group does not show that the group has an unsolvable word problem. For instance, Dehn's algorithm does not solve the word problem for the fundamental group of the torus. However, this group is the direct product of two infinite cyclic groups and so has a solvable word problem. - -In more concrete terms, the uniform word problem can be expressed as a rewriting question for literal strings. For a presentation P of a group G, P will specify a certain number of generators - -x, y, z, ... - -for G. We need to introduce one letter for x and another (for convenience) for the group element represented by $x^{-1}$. Call these letters (twice as many as the generators) the alphabet $\Sigma$ for our problem. Then each element in G is represented in some way by a product - -abc ... pqr - -of symbols from $\Sigma$, of some length, multiplied in G. The string of length 0 (null string) stands for the identity element e of G. The crux of the whole problem is to be able to recognise all the ways e can be represented, given some relations. - -The effect of the relations in G is to make various such strings represent the same element of G. In fact the relations provide a list of strings that can be either introduced where we want, or cancelled out whenever we see them, without changing the 'value', i.e. the group element that is the result of the multiplication. - -For a simple example, take the presentation $\{a \mid a^3\}$. Writing A for the inverse of a, we have possible strings combining any number of the symbols a and A. Whenever we see aaa, or aA or Aa, we may strike these out. We should also remember to strike out AAA; this says that since the cube of a is the identity element of G, so is the cube of the inverse of a. Under these conditions the word problem becomes easy. First reduce strings to the empty string, a, aa, A or AA. Then note that we may also multiply by aaa, so we can convert A to aa and convert AA to a. The result is that the word problem, here for the cyclic group of order three, is solvable. - -This is not, however, the typical case. In this example, we have a canonical form available that reduces any string to one of length at most three, by decreasing the length monotonically. In general, it is not true that one can get a canonical form for the elements by stepwise cancellation. One may have to use relations to expand a string many-fold in order eventually to find a cancellation that brings the length right down. - -The upshot is, in the worst case, that the relation between strings that says they are equal in G is an undecidable problem. - -The following groups have a solvable word problem: - -*Automatic groups, including: - -**Finite groups - -**Negatively curved (also known as hyperbolic) groups - -**Euclidean groups - -**Coxeter groups - -**Braid groups - -**Geometrically finite groups - -*Finitely generated free groups - -*Finitely generated free abelian groups - -*Polycyclic groups - -*Finitely generated recursively absolutely presented groups, including: - -**Finitely presented simple groups. - -*Finitely presented residually finite groups - -*One-relator groups (this is a theorem of Magnus), including: - -**Fundamental groups of closed orientable two-dimensional manifolds.
- -*Combable groups - -*Autostackable groups - -Examples with unsolvable word problems are also known: - -*Given a recursively enumerable set A of positive integers that has an insoluble membership problem, ⟨a,b,c,d | $a^nba^n = c^ndc^n$ : n ∈ A⟩ is a finitely generated group with a recursively enumerable presentation whose word problem is insoluble. - -*Every finitely generated group with a recursively enumerable presentation and insoluble word problem is a subgroup of a finitely presented group with insoluble word problem. - -*The number of relators in a finitely presented group with insoluble word problem may be as low as 14 or even 12. - -*An explicit example of a reasonably short presentation with insoluble word problem is given in Collins 1986: - -\begin{array}{lllll}\langle & a,b,c,d,e,p,q,r,t,k & | & &\\ - -&p^{10}a = ap, &pacqr = rpcaq, &ra=ar, &\\ - -&p^{10}b = bp, &p^2adq^2r = rp^2daq^2, &rb=br, &\\ - -&p^{10}c = cp, &p^3bcq^3r = rp^3cbq^3, &rc=cr, &\\ - -&p^{10}d = dp, &p^4bdq^4r = rp^4dbq^4, &rd=dr, &\\ - -&p^{10}e = ep, &p^5ceq^5r = rp^5ecaq^5, &re=er, &\\ - -&aq^{10} = qa, &p^6deq^6r = rp^6edbq^6, &pt=tp, &\\ - -&bq^{10} = qb, &p^7cdcq^7r = rp^7cdceq^7, &qt=tq, &\\ - -&cq^{10} = qc, &p^8ca^3q^8r = rp^8a^3q^8, &&\\ - -&dq^{10} = qd, &p^9da^3q^9r = rp^9a^3q^9, &&\\ - -&eq^{10} = qe, &a^{-3}ta^3k = ka^{-3}ta^3 &&\rangle \end{array} - -The word problem for a recursively presented group can be partially solved in the following sense: - -Given a recursive presentation P = ⟨X|R⟩ for a group G, define: -$$ -S=\{\langle u,v \rangle : u \text{ and } v \text{ are words in } X \text{ and } u=v \text{ in } G\ \} -$$ - -then there is a partial recursive function $f_P$ such that: - -f_P(\langle u,v \rangle) = - -\begin{cases} - -0 &\text{if}\ \langle u,v \rangle \in S \\ - -\text{undefined/does not halt}\ &\text{if}\ \langle u,v \rangle \notin S - -\end{cases} - -More informally, there is an algorithm that halts if u=v, but does not do so otherwise. - -It follows that to solve the word problem for P it is sufficient to construct a recursive function g such that: - -g(\langle u,v \rangle) = - -\begin{cases} - -0 &\text{if}\ \langle u,v \rangle \notin S \\ - -\text{undefined/does not halt}\ &\text{if}\ \langle u,v \rangle \in S - -\end{cases} - -However, u=v in G if and only if $uv^{-1}=1$ in G. It follows that to solve the word problem for P it is sufficient to construct a recursive function h such that: - -h(x) = - -\begin{cases} - -0 &\text{if}\ x\neq1\ \text{in}\ G \\ - -\text{undefined/does not halt}\ &\text{if}\ x=1\ \text{in}\ G - -\end{cases} - -The following will be proved as an example of the use of this technique: - -Theorem: A finitely presented residually finite group has solvable word problem. - -Proof: Suppose G = ⟨X|R⟩ is a finitely presented, residually finite group. - -Let S be the group of all permutations of N, the natural numbers, that fix all but finitely many numbers. Then: - -# S is locally finite and contains a copy of every finite group. - -# The word problem in S is solvable by calculating products of permutations. - -# There is a recursive enumeration of all mappings of the finite set X into S. - -# Since G is residually finite, if w is a word in the generators X of G then w ≠ 1 in G if and only if some mapping of X into S induces a homomorphism such that w ≠ 1 in S.
- -Given these facts, the algorithm defined by the following pseudocode: - -For every mapping of X into S - -If every relator in R is satisfied in S - -If w ≠ 1 in S - -return 0 - -End if - -End if - -End for - -defines a recursive function h such that: -$$ -h(x) = \begin{cases} 0 &\text{if}\ x\neq 1\ \text{in}\ G \\ \text{undefined/does not halt}\ &\text{if}\ x=1\ \text{in}\ G \end{cases} -$$ - -This shows that G has solvable word problem. - -The criterion given above, for the solvability of the word problem in a single group, can be extended by a straightforward argument. This gives the following criterion for the uniform solvability of the word problem for a class of finitely presented groups: - -To solve the uniform word problem for a class K of groups, it is sufficient to find a recursive function f(P,w) that takes a finite presentation P for a group G and a word w in the generators of G, such that whenever G ∈ K: -$$ -f(P,w) = \begin{cases} 0 &\text{if}\ w\neq1\ \text{in}\ G \\ \text{undefined/does not halt}\ &\text{if}\ w=1\ \text{in}\ G \end{cases} -$$ - -Boone-Rogers Theorem: There is no uniform partial algorithm that solves the word problem in all finitely presented groups with solvable word problem. - -In other words, the uniform word problem for the class of all finitely presented groups with solvable word problem is unsolvable. This has some interesting consequences. For instance, the Higman embedding theorem can be used to construct a group containing an isomorphic copy of every finitely presented group with solvable word problem. It seems natural to ask whether this group can have solvable word problem. But it is a consequence of the Boone-Rogers result that: - -Corollary: There is no universal solvable word problem group. That is, if G is a finitely presented group that contains an isomorphic copy of every finitely presented group with solvable word problem, then G itself must have unsolvable word problem. - -Remark: Suppose G = ⟨X|R⟩ is a finitely presented group with solvable word problem and H is a finite subset of G. Let H* = ⟨H⟩ be the group generated by H. Then the word problem in H* is solvable: given two words h, k in the generators H of H*, write them as words in X and compare them using the solution to the word problem in G. It is easy to think that this demonstrates a uniform solution of the word problem for the class K (say) of finitely generated groups that can be embedded in G. If this were the case, the non-existence of a universal solvable word problem group would follow easily from Boone-Rogers. However, the solution just exhibited for the word problem for groups in K is not uniform. To see this, consider a group J = ⟨Y|T⟩ ∈ K; in order to use the above argument to solve the word problem in J, it is first necessary to exhibit a mapping e: Y → G that extends to an embedding e*: J → G. If there were a recursive function that mapped (finitely generated) presentations of groups in K to embeddings into G, then a uniform solution of the word problem in K could indeed be constructed. But there is no reason, in general, to suppose that such a recursive function exists. However, it turns out that, using a more sophisticated argument, the word problem in J can be solved without using an embedding e: J → G. Instead, an enumeration of homomorphisms is used, and since such an enumeration can be constructed uniformly, it results in a uniform solution to the word problem in K. - -Suppose G were a universal solvable word problem group.
Given a finite presentation P = ⟨X|R⟩ of a group H, one can recursively enumerate all homomorphisms h: H → G by first enumerating all mappings h: X → G. Not all of these mappings extend to homomorphisms, but, since h(R) is finite, it is possible to distinguish between homomorphisms and non-homomorphisms, by using the solution to the word problem in G. "Weeding out" non-homomorphisms gives the required recursive enumeration: $h_1, h_2, \ldots, h_n, \ldots$ - -If H has solvable word problem, then at least one of these homomorphisms must be an embedding. So given a word w in the generators of H: -$$ -\text{If}\ w\ne 1\ \text{in}\ H,\ h_n(w)\ne 1\ \text{in}\ G\ \text{for some}\ h_n -$$ -$$ -\text{If}\ w= 1\ \text{in}\ H,\ h_n(w)= 1\ \text{in}\ G\ \text{for all}\ h_n -$$ - -Consider the algorithm described by the pseudocode: - -Let n = 0 - -Let repeatable = TRUE - -while (repeatable) - -increase n by 1 - -if (solution to word problem in G reveals h_n(w) ≠ 1 in G) - -Let repeatable = FALSE - -output 0. - -This describes a recursive function: -$$ -f(w) = \begin{cases} 0 &\text{if}\ w\neq1\ \text{in}\ H \\ \text{undefined/does not halt}\ &\text{if}\ w=1\ \text{in}\ H. \end{cases} -$$ - -The function f clearly depends on the presentation P. Considering it to be a function of two variables, a recursive function f(P,w) has been constructed that takes a finite presentation P for a group H and a word w in the generators of H, such that whenever H has soluble word problem: -$$ -f(P,w) = \begin{cases} 0 &\text{if}\ w\neq1\ \text{in}\ H \\ \text{undefined/does not halt}\ &\text{if}\ w=1\ \text{in}\ H. \end{cases} -$$ - -But this uniformly solves the word problem for the class of all finitely presented groups with solvable word problem, contradicting Boone-Rogers. This contradiction proves G cannot exist. - -There are a number of results that relate solvability of the word problem and algebraic structure. The most significant of these is the Boone-Higman theorem: - -A finitely presented group has solvable word problem if and only if it can be embedded in a simple group that can be embedded in a finitely presented group. - -It is widely believed that it should be possible to do the construction so that the simple group itself is finitely presented. If so, one would expect it to be difficult to prove, as the mapping from presentations to simple groups would have to be non-recursive. - -The following has been proved by Bernhard Neumann and Angus Macintyre: - -A finitely presented group has solvable word problem if and only if it can be embedded in every algebraically closed group. - -What is remarkable about this is that the algebraically closed groups are so wild that none of them has a recursive presentation. - -The oldest result relating algebraic structure to solvability of the word problem is Kuznetsov's theorem: - -A recursively presented simple group S has solvable word problem. - -To prove this, let ⟨X|R⟩ be a recursive presentation for S. Choose a ∈ S such that a ≠ 1 in S. - -If w is a word on the generators X of S, then let: -$$ -S_w = \langle X | R\cup \{w\} \rangle. -$$ - -There is a recursive function $f_{\langle X | R\cup \{w\} \rangle}$ such that: -$$ -f_{\langle X | R\cup \{w\} \rangle}(x) = \begin{cases} 0 &\text{if}\ x=1\ \text{in}\ S_w\\ \text{undefined/does not halt}\ &\text{if}\ x\neq 1\ \text{in}\ S_w. \end{cases} -$$ - -Write: -$$ -g(w, x) = f_{\langle X | R\cup \{w\} \rangle}(x). -$$
- -Then, because the construction of f was uniform, this is a recursive function of two variables. - -It follows that h(w) = g(w, a) is recursive. By construction: -$$ -h(w) = \begin{cases} 0 &\text{if}\ a=1\ \text{in}\ S_w\\ \text{undefined/does not halt}\ &\text{if}\ a\neq 1\ \text{in}\ S_w. \end{cases} -$$ - -Since S is a simple group, its only quotient groups are itself and the trivial group. Since a ≠ 1 in S, we see a = 1 in $S_w$ if and only if $S_w$ is trivial if and only if w ≠ 1 in S. Therefore: -$$ -h(w) = \begin{cases} 0 &\text{if}\ w\ne 1\ \text{in}\ S\\ \text{undefined/does not halt}\ &\text{if}\ w=1\ \text{in}\ S. \end{cases} -$$ - -The existence of such a function is sufficient to prove the word problem is solvable for S. - -This proof does not prove the existence of a uniform algorithm for solving the word problem for this class of groups. The non-uniformity resides in choosing a non-trivial element of the simple group. There is no reason to suppose that there is a recursive function that maps a presentation of a simple group to a non-trivial element of the group. However, in the case of a finitely presented group we know that not all the generators can be trivial (any individual generator could be, of course). Using this fact it is possible to modify the proof to show: - -The word problem is uniformly solvable for the class of finitely presented simple groups. diff --git a/wiki/wikipedia/2904.txt b/wiki/wikipedia/2904.txt deleted file mode 100644 index cbf1fdebf481b24b88bbc5499a0426d76bde3a48..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2904.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, a lethargy theorem is a statement about the distance of points in a metric space from members of a sequence of subspaces; one application in numerical analysis is to approximation theory, where such theorems quantify the difficulty of approximating general functions by functions of special form, such as polynomials. In more recent work, the convergence of a sequence of operators is studied: these operators generalise the projections of the earlier work. - -Let $ V_1 \subset V_2 \subset \ldots $ be a strictly ascending sequence of finite-dimensional linear subspaces of a Banach space X, and let $\epsilon_1 \ge \epsilon_2 \ge \ldots$ be a decreasing sequence of real numbers tending to zero. Then there exists a point x in X such that the distance of x to $V_i$ is exactly $\epsilon_i$. diff --git a/wiki/wikipedia/2905.txt b/wiki/wikipedia/2905.txt deleted file mode 100644 index 9e6182ea459ded5c9046ea980f8768564380b8e7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2905.txt +++ /dev/null @@ -1,23 +0,0 @@ -In geometric graph theory, a unit disk graph is the intersection graph of a family of unit disks in the Euclidean plane. That is, it is a graph with one vertex for each disk in the family, and with an edge between two vertices whenever the corresponding disks intersect. - -They are commonly formed from a Poisson point process, making them a simple example of a random structure. - -There are several possible definitions of the unit disk graph, equivalent to each other up to a choice of scale factor: - -* A graph formed from a collection of points in the Euclidean plane, in which two points are connected if their distance is below a fixed threshold. - -* An intersection graph of equal-radius circles, or of equal-radius disks.
- -* A graph formed from a collection of equal-radius circles, in which two circles are connected by an edge if one circle contains the centre of the other circle. - -Every induced subgraph of a unit disk graph is also a unit disk graph. An example of a graph that is not a unit disk graph is the star $K_{1,7}$ with one central node connected to seven leaves: if each of seven unit disks touches a common unit disk, some two of the seven disks must touch each other (as the kissing number in the plane is 6). Therefore, unit disk graphs cannot contain an induced $K_{1,7}$ subgraph. - -Beginning with the work of Huson, unit disk graphs have been used in computer science to model the topology of ad hoc wireless communication networks. In this application, nodes are connected through a direct wireless connection without a base station. It is assumed that all nodes are homogeneous and equipped with omnidirectional antennas. Node locations are modelled as Euclidean points, and the area within which a signal from one node can be received by another node is modelled as a circle. If all nodes have transmitters of equal power, these circles are all equal. Random geometric graphs, formed as unit disk graphs with randomly generated disk centres, have also been used as a model of percolation and various other phenomena. - -If one is given a collection of unit disks (or their centres) in a space of any fixed dimension, it is possible to construct the corresponding unit disk graph in linear time, by rounding the centres to nearby integer grid points, using a hash table to find all pairs of centres within constant distance of each other, and filtering the resulting list of pairs for the ones whose circles intersect. The ratio of the number of pairs considered by this algorithm to the number of edges in the eventual graph is a constant, giving the linear time bound (a concrete sketch of this construction appears below). However, this constant grows exponentially as a function of the dimension. - -It is NP-hard (more specifically, complete for the existential theory of the reals) to determine whether a graph, given without geometry, can be represented as a unit disk graph. Additionally, it is probably impossible in polynomial time to output explicit coordinates of a unit disk graph representation: there exist unit disk graphs that require exponentially many bits of precision in any such representation. - -However, many important and difficult graph optimization problems such as maximum independent set, graph coloring, and minimum dominating set can be approximated efficiently by using the geometric structure of these graphs, and the maximum clique problem can be solved exactly for these graphs in polynomial time, given a disk representation. Even if a disk representation is not known, and an abstract graph is given as input, it is possible in polynomial time to produce either a maximum clique or a proof that the graph is not a unit disk graph, and to 3-approximate the optimum coloring by using a greedy coloring algorithm. - -When a given vertex set forms a subset of a triangular lattice, a necessary and sufficient condition for the perfectness of a unit disk graph is known. For perfect graphs, a number of NP-complete optimization problems (graph coloring problem, maximum clique problem, and maximum independent set problem) are polynomially solvable.
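The grid-rounding construction described above can be made concrete in a few lines. The following Python sketch is our own illustration (the function name, the "centres within distance d are adjacent" convention, and a dictionary standing in for the hash table are all assumptions, not part of the source): it buckets centres into cells of side d, so that only points in the same or neighbouring cells are ever compared.

```python
from collections import defaultdict
from itertools import product

def unit_disk_graph(centers, d=1.0):
    """Edge set on indices of `centers` (2D points), joining pairs at
    distance <= d, i.e. whose disks of diameter d intersect."""
    grid = defaultdict(list)                  # hash table: cell -> indices
    for i, (x, y) in enumerate(centers):
        grid[(int(x // d), int(y // d))].append(i)
    edges = set()
    for (cx, cy), members in grid.items():
        # any pair at distance <= d lies in the same or an adjacent cell
        for dx, dy in product((-1, 0, 1), repeat=2):
            for j in grid.get((cx + dx, cy + dy), ()):
                for i in members:
                    if i < j:
                        (x1, y1), (x2, y2) = centers[i], centers[j]
                        if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= d * d:
                            edges.add((i, j))
    return edges

print(unit_disk_graph([(0, 0), (0.8, 0), (3, 3)]))   # {(0, 1)}
```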
diff --git a/wiki/wikipedia/2906.txt b/wiki/wikipedia/2906.txt deleted file mode 100644 index 2885866a446b305b5f2bbebb0cfed00011f4b993..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2906.txt +++ /dev/null @@ -1,89 +0,0 @@ -Confusion of the inverse, also called the conditional probability fallacy or the inverse fallacy, is a logical fallacy whereby a conditional probability is equated with its inverse; that is, given two events A and B, the probability of A happening given that B has happened is assumed to be about the same as the probability of B given A, when there is actually no evidence for this assumption. More formally, P(A|B) is assumed to be approximately equal to P(B|A). - -In one study, physicians were asked to estimate the probability of malignancy given a positive test result, where malignancy has a 1% prior probability of occurring and the test detects 80% of malignancies with a 10% false positive rate. Approximately 95 out of 100 physicians responded that the probability of malignancy would be about 75%, apparently because the physicians believed that the chances of malignancy given a positive test result were approximately the same as the chances of a positive test result given malignancy. - -The correct probability of malignancy given a positive test result as stated above is 7.5%, derived via Bayes' theorem: -$$ -\begin{align} & {}\qquad P(\text{malignant}|\text{positive}) \\[8pt] & = \frac{P(\text{positive}|\text{malignant}) P(\text{malignant})}{P(\text{positive}|\text{malignant}) P(\text{malignant}) + P(\text{positive}|\text{benign}) P(\text{benign})} \\[8pt] & = \frac{(0.80 \cdot 0.01)}{(0.80 \cdot 0.01) + (0.10 \cdot 0.99)} = 0.075 \end{align} -$$ - -Other examples of confusion include: - -* Hard drug users tend to use marijuana; therefore, marijuana users tend to use hard drugs (the first probability is marijuana use given hard drug use, the second is hard drug use given marijuana use). - -* Most accidents occur within 25 miles from home; therefore, you are safest when you are far from home. - -* Terrorists tend to have an engineering background; so, engineers have a tendency towards terrorism. - -For other errors in conditional probability, see the Monty Hall problem and the base rate fallacy. Compare to illicit conversion. - -In order to identify individuals having a serious disease in an early curable form, one may consider screening a large group of people. While the benefits are obvious, an argument against such screenings is the disturbance caused by false positive screening results: If a person not having the disease is incorrectly found to have it by the initial test, they will most likely be distressed, and even if they subsequently take a more careful test and are told they are well, their lives may still be affected negatively. If they undertake unnecessary treatment for the disease, they may be harmed by the treatment's side effects and costs. - -The magnitude of this problem is best understood in terms of conditional probabilities. - -Suppose 1% of the group suffer from the disease, and the rest are well. Choosing an individual at random, -$$ -P(\text{ill}) = 1\% = 0.01\text{ and }P(\text{well})=99\%=0.99. -$$ - -Suppose that when the screening test is applied to a person not having the disease, there is a 1% chance of getting a false positive result (and hence 99% chance of getting a true negative result, a number known as the specificity of the test), i.e.
-$$ -P(\text{positive}|\text{well})=1\%,\text{ and } P(\text{negative}|\text{well})=99\%. -$$ - -Finally, suppose that when the test is applied to a person having the disease, there is a 1% chance of a false negative result (and 99% chance of getting a true positive result, known as the sensitivity of the test), i.e. -$$ -P(\text{negative}|\text{ill})=1\%\text{ and }P(\text{positive}|\text{ill}) = 99\%. -$$ - -The fraction of individuals in the whole group who are well and test negative (true negative): -$$ -P(\text{well}\cap\text{negative})=P(\text{well})\times P(\text{negative}|\text{well})=99\%\times99\%=98.01\%. -$$ - -The fraction of individuals in the whole group who are ill and test positive (true positive): -$$ -P(\text{ill}\cap\text{positive}) = P(\text{ill})\times P(\text{positive}|\text{ill}) = 1\%\times99\% = 0.99\%. -$$ - -The fraction of individuals in the whole group who have false positive results: -$$ -P(\text{well}\cap\text{positive})=P(\text{well})\times P(\text{positive}|\text{well})=99\%\times1\%=0.99\%. -$$ - -The fraction of individuals in the whole group who have false negative results: -$$ -P(\text{ill}\cap\text{negative})=P(\text{ill})\times P(\text{negative}|\text{ill}) = 1\%\times1\% = 0.01\%. -$$ - -Furthermore, the fraction of individuals in the whole group who test positive: -$$ -\begin{align} P(\text{positive}) & {} =P(\text{well }\cap\text{ positive}) + P(\text{ill} \cap \text{positive}) \\ & {} = 0.99\%+0.99\%=1.98\%. \end{align} -$$ - -Finally, the probability that an individual actually has the disease, given that the test result is positive: -$$ -P(\text{ill}|\text{positive})=\frac{P(\text{ill}\cap\text{positive})} {P(\text{positive})} = \frac{0.99\%}{1.98\%}= 50\%. -$$ - -In this example, it is easy to see the difference between the conditional probabilities P(positive | ill), which with the assumed probabilities is 99%, and P(ill | positive), which is 50%: the first is the probability that an individual who has the disease tests positive; the second is the probability that an individual who tests positive actually has the disease. Thus, with the probabilities picked in this example, roughly the same number of individuals receive the benefits of early treatment as are distressed by false positives; these positive and negative effects can then be considered in deciding whether to carry out the screening, or if possible whether to adjust the test criteria to decrease the number of false positives (possibly at the expense of more false negatives). diff --git a/wiki/wikipedia/2907.txt b/wiki/wikipedia/2907.txt deleted file mode 100644 index 0798db95d85f7027830178de3ecb3f06f99ee851..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2907.txt +++ /dev/null @@ -1,27 +0,0 @@ -Best-fit is an online algorithm for bin packing. Its input is a list of items of different sizes. Its output is a packing: a partition of the items into bins of fixed capacity, such that the sum of sizes of items in each bin is at most the capacity. Ideally, we would like to use as few bins as possible, but minimizing the number of bins is an NP-hard problem. The best-fit algorithm uses the following heuristic (a minimal code sketch follows the list): - -* It keeps a list of open bins, which is initially empty. - -* When an item arrives, it finds the bin with the maximum load into which the item can fit, if any. - -** If such a bin is found, the new item is placed inside it. - -** Otherwise, a new bin is opened and the incoming item is placed inside it.
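As a purely illustrative rendering of the heuristic above, here is a minimal Python sketch (the names and the linear scan are our own choices; an efficient implementation would keep the open bins in a balanced search tree keyed by load):

```python
def best_fit(items, capacity=1.0):
    """Pack items online: each item goes into the fullest open bin
    that can still hold it; otherwise a new bin is opened."""
    bins = []                                # current load of each open bin
    packing = []                             # packing[b] = items in bin b
    for item in items:
        best = None
        for b, load in enumerate(bins):
            if load + item <= capacity and (best is None or load > bins[best]):
                best = b
        if best is None:                     # no open bin fits: open a new one
            bins.append(item)
            packing.append([item])
        else:
            bins[best] += item
            packing[best].append(item)
    return packing

print(best_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1]))
# [[0.5, 0.5], [0.7, 0.2, 0.1], [0.4, 0.2], [0.5]]
```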
- -Denote by BF(L) the number of bins used by Best-Fit, and by OPT(L) the optimal number of bins possible for the list L. The analysis of BF(L) was done in several steps. - -* The first upper bound of $BF(L) \leq 1.7\mathrm{OPT}+3$ was proven by Ullman in 1971. - -* An improved upper bound $BF(L) \leq 1.7\mathrm{OPT}+2$ was proved by Garey, Graham and Ullman, Johnson and Demers. - -* Afterward, it was improved by Garey, Graham, Johnson, Ullman, and Andrew Chi-Chih Yao to $BF(L) \leq \lceil 1.7\mathrm{OPT}\rceil$. - -* Finally, this bound was improved to $BF(L) \leq \lfloor 1.7\mathrm{OPT}\rfloor$ by Dósa and Sgall. They also present an example input list $L$ for which $BF(L)$ matches this bound. - -Worst-Fit is a "dual" algorithm to best-fit: it tries to put the next item in the bin with minimum load. - -This algorithm can behave as badly as Next-Fit, and will do so on the worst-case list for which $NF(L) = 2 \cdot \mathrm{OPT}(L) - 2$. Furthermore, it holds that $R_{WF}^{\infty}(\text{size}\leq \alpha) = R_{NF}^{\infty}(\text{size}\leq \alpha)$. - -Since Worst-Fit is an AnyFit-algorithm, there exists an AnyFit-algorithm such that $R_{AF}^{\infty}(\alpha) = R_{NF}^{\infty}(\alpha)$. diff --git a/wiki/wikipedia/2908.txt b/wiki/wikipedia/2908.txt deleted file mode 100644 index e3996b8577f17e70ba5f8961bc316bc0c99b1ebf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2908.txt +++ /dev/null @@ -1,3 +0,0 @@ -In probability and game theory, the Waldegrave problem refers to a problem first described in the second edition of Pierre Raymond de Montmort's Essay d'analyse sur les jeux de hazard. This problem is remarkable in that it is the first appearance of a mixed strategy solution in game theory. Montmort originally called Waldegrave's Problem the Problème de la Poulle or the Problem of the Pool. He provides a minimax mixed strategy solution to a two-person version of the card game le Her. It was Isaac Todhunter who called it Waldegrave's Problem. - -The general description of the problem is as follows: Suppose there are n+1 players, with each player putting one unit into the pot or pool. The first two players play each other and the winner plays the third player. The loser of each game puts one unit into the pot. Play continues in like fashion through all the players until one of the players has beaten all the others in succession. The original problem, stated in a letter dated 10 April 1711 from Montmort to Nicholas Bernoulli, is for n = 2 and is attributed to M. de Waldegrave. The problem, according to Montmort, is to find the expectation of each player and the probability that the pool will be won within a specified number of games. diff --git a/wiki/wikipedia/2909.txt b/wiki/wikipedia/2909.txt deleted file mode 100644 index e952d146ba3f34387931759339ca47266903cc2d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2909.txt +++ /dev/null @@ -1,15 +0,0 @@ -In mathematics, Fubini's theorem on differentiation, named after Guido Fubini, is a result in real analysis concerning the differentiation of series of monotonic functions. It can be proven by using Fatou's lemma and the properties of null sets. - -Assume $I \subseteq \mathbb R$ is an interval and that for every natural number k, $f_k: I \to \mathbb R$ is an increasing function. If -$$ -s(x) := \sum_{k=1}^\infty f_k(x) -$$ - -exists for all $x \in I,$ then -$$ -s'(x) = \sum_{k=1}^\infty f_k'(x) -$$ - -almost everywhere in I.
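A concrete illustration of the theorem (our own example, not from the article): on I = [0,1], take the increasing functions $f_k(x) = x^k/k^2$. Then
$$
s(x) = \sum_{k=1}^\infty \frac{x^k}{k^2}, \qquad \sum_{k=1}^\infty f_k'(x) = \sum_{k=1}^\infty \frac{x^{k-1}}{k} = -\frac{\ln(1-x)}{x} \quad \text{for } 0 < x < 1,
$$
and $s'(x)$ agrees with this termwise sum at every $x \in [0,1)$; only at the single point $x = 1$ does the differentiated series diverge, consistent with the conclusion holding almost everywhere rather than everywhere.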
- -In general, if we do not suppose that each $f_k$ is increasing, then in order to get the same conclusion we need a stricter condition, such as uniform convergence of the partial sums $\sum_{k=1}^n f_k'(x)$ on I. diff --git a/wiki/wikipedia/291.txt b/wiki/wikipedia/291.txt deleted file mode 100644 index b3c5018550827e4881c9e915ae0e4cc2b7b4ce0e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/291.txt +++ /dev/null @@ -1,46 +0,0 @@ -In mathematics, Hadamard's inequality (also known as Hadamard's theorem on determinants) is a result first published by Jacques Hadamard in 1893. It is a bound on the determinant of a matrix whose entries are complex numbers in terms of the lengths of its column vectors. In geometrical terms, when restricted to real numbers, it bounds the volume in Euclidean space of n dimensions marked out by n vectors $v_i$ for 1 ≤ i ≤ n in terms of the lengths of these vectors $\|v_i\|$. - -Specifically, Hadamard's inequality states that if N is the matrix having columns $v_i$, then -$$ - \left| \det(N) \right| \le \prod_{i=1}^n \|v_i\|. -$$ - -If the n vectors are non-zero, equality in Hadamard's inequality is achieved if and only if the vectors are orthogonal. - -A corollary is that if the entries of an n by n matrix N are bounded by B, so $|N_{ij}| \le B$ for all i and j, then -$$ -\left| \det(N) \right| \le B^nn^{n/2}. -$$ - -In particular, if the entries of N are +1 and −1 only, then -$$ -\left| \det(N) \right| \le n^{n/2}. -$$ - -In combinatorics, matrices N for which equality holds, i.e. those with orthogonal columns, are called Hadamard matrices. - -A positive-semidefinite matrix P can be written as N*N, where N* denotes the conjugate transpose of N (see Cholesky decomposition). Then -$$ -\det(P)=|\det(N)|^2 \le \prod_{i=1}^n \|v_i\|^2 = \prod_{i=1}^n p_{ii}. -$$ - -So, the determinant of a positive definite matrix is less than or equal to the product of its diagonal entries. Sometimes this is also known as Hadamard's inequality. - -The result is trivial if the matrix N is singular, so assume the columns of N are linearly independent. By dividing each column by its length, it can be seen that the result is equivalent to the special case where each column has length 1. In other words, if the $e_i$ are unit vectors and M is the matrix having the $e_i$ as columns, then -$$ -\left| \det M \right| \le 1, \qquad (1) -$$ - -and equality is achieved if and only if the vectors are an orthogonal set. The general result now follows: -$$ -\left| \det N \right| = \bigg (\prod_{i=1}^n \|v_i\| \bigg) \left|\det M\right| \leq \prod_{i=1}^n \|v_i\|. -$$ - -To prove (1), consider P = M*M and let the eigenvalues of P be $\lambda_1, \lambda_2, \ldots, \lambda_n$. Since the length of each column of M is 1, each entry in the diagonal of P is 1, so the trace of P is n. Applying the inequality of arithmetic and geometric means, -$$ -\det P=\prod_{i=1}^n \lambda_i \le \bigg({1 \over n}\sum_{i=1}^n \lambda_i\bigg)^n = \left({1 \over n} \operatorname{tr} P \right)^n = 1^n = 1, -$$ - -so -$$ - \left| \det M \right| = \sqrt{\det P} \le 1. -$$ - -If there is equality, then the $\lambda_i$ must all be equal, and their sum is n, so they must all be 1. The matrix P is Hermitian, therefore diagonalizable, so it is the identity matrix; in other words, the columns of M are an orthonormal set and the columns of N are an orthogonal set. Many other proofs can be found in the literature.
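A quick numerical illustration of the inequality (our own sketch; NumPy, the seed, and the 4×4 sizes are arbitrary choices): for random complex matrices, |det N| never exceeds the product of the column norms, and equality is attained by a Hadamard matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    N = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    lhs = abs(np.linalg.det(N))
    rhs = np.prod(np.linalg.norm(N, axis=0))   # product of column lengths
    assert lhs <= rhs + 1e-9
    print(f"|det N| = {lhs:.4f} <= {rhs:.4f}")

# Equality case: a Hadamard matrix (orthogonal +-1 columns) of order 4,
# which meets the bound n^(n/2) = 16.
H = np.array([[1, 1, 1, 1],
              [1, -1, 1, -1],
              [1, 1, -1, -1],
              [1, -1, -1, 1]])
print(abs(np.linalg.det(H)), 4 ** (4 / 2))     # both equal 16
```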
diff --git a/wiki/wikipedia/2910.txt b/wiki/wikipedia/2910.txt deleted file mode 100644 index a69e84a87a2afa6ea7cdbdd2ca3b720aaef2d801..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2910.txt +++ /dev/null @@ -1,11 +0,0 @@ -ActiveSync is a mobile data synchronization app developed by Microsoft, originally released in 1996. It synchronizes data with handheld devices and desktop computers. In the Windows Task Manager, the associated process is called wcescomm.exe. - -ActiveSync allows a mobile device to be synchronized with either a desktop PC or a server running a compatible software product. - -On desktops, ActiveSync synchronizes emails, calendar, contacts and tasks with Microsoft Outlook, along with Internet bookmarks and files. ActiveSync does not support all features of Outlook. For instance, contacts grouped into subfolders are not transferred; only the contacts which are not in a subfolder are synchronized. In the case of Exchange Server, only emails, calendar, contacts and tasks may be synchronized. - -ActiveSync also provides for the manual transfer of files to a mobile device, along with limited backup functionality, and the ability to install and uninstall mobile device applications. - -Supported mobile devices include PDAs or smartphones running Windows Mobile, Windows CE, BlackBerry 10 or iOS, but not the older BlackBerry versions, Palm OS, or Symbian platforms. Windows Phone 7 does not support desktop ActiveSync synchronization. - -Starting with Windows Vista, ActiveSync has been replaced with the Windows Mobile Device Center, which is included as part of the operating system. diff --git a/wiki/wikipedia/2911.txt b/wiki/wikipedia/2911.txt deleted file mode 100644 index 40602991a7ca86b165cb0f2efe4b187165a9de94..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2911.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematics, the structure theorem for Gaussian measures shows that the abstract Wiener space construction is essentially the only way to obtain a strictly positive Gaussian measure on a separable Banach space. It was proved in the 1970s by Kallianpur-Sato-Stefan and Dudley-Feldman-le Cam. - -There is an earlier result due to H. Satô (1969) which proves that "any Gaussian measure on a separable Banach space is an abstract Wiener measure in the sense of L. Gross". The result by Dudley et al. generalizes this result to the setting of Gaussian measures on a general topological vector space. - -Let γ be a strictly positive Gaussian measure on a separable Banach space (E, || ||). Then there exists a separable Hilbert space (H, ⟨ , ⟩) and a map i : H → E such that i : H → E is an abstract Wiener space with γ = i∗(γH), where γH is the canonical Gaussian cylinder set measure on H. diff --git a/wiki/wikipedia/2912.txt b/wiki/wikipedia/2912.txt deleted file mode 100644 index 44ecdb3ee4d7baa3f4aa98df695c6e2f57a991ee..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2912.txt +++ /dev/null @@ -1,19 +0,0 @@ -In the philosophy of mathematics, a non-surveyable proof is a mathematical proof that is considered infeasible for a human mathematician to verify and so of controversial validity. The term was coined by Thomas Tymoczko in 1979 in criticism of Kenneth Appel and Wolfgang Haken's computer-assisted proof of the four color theorem, and has since been applied to other arguments, mainly those with excessive case splitting and/or with portions dispatched by a difficult-to-verify computer program.
Surveyability remains an important consideration in computational mathematics. - -Tymoczko argued that three criteria determine whether an argument is a mathematical proof: - -* Convincingness, which refers to the proof's ability to persuade a rational prover of its conclusion; - -* Surveyability, which refers to the proof's accessibility for verification by members of the human mathematical community; and - -* Formalizability, which refers to the proof's appealing to only logical relationships between concepts to substantiate its argument. - -In Tymoczko's view, the Appel–Haken proof failed the surveyability criterion by, he argued, substituting experiment for deduction. - -Without surveyability, a proof may serve its first purpose of convincing a reader of its result and yet fail at its second purpose of enlightening the reader as to why that result is true: it may play the role of an observation rather than of an argument. - -This distinction is important because it means that non-surveyable proofs expose mathematics to a much higher potential for error. Especially in the case where non-surveyability is due to the use of a computer program (which may have bugs), most especially when that program is not published, convincingness may suffer as a result. - -Nonetheless, automated verification has yet to see widespread adoption. diff --git a/wiki/wikipedia/2913.txt b/wiki/wikipedia/2913.txt deleted file mode 100644 index 4fc3349cecdd9b9c2b16feff571b5da4a2f92cb9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2913.txt +++ /dev/null @@ -1,22 +0,0 @@ -In mathematics, the Goormaghtigh conjecture is a conjecture in number theory named for the Belgian mathematician René Goormaghtigh. The conjecture is that the only non-trivial integer solutions of the exponential Diophantine equation -$$ -\frac{x^m - 1}{x-1}=\frac{y^n - 1}{y - 1} -$$ - -satisfying $x>y>1$ and $n,m>2$ are -$$ -\frac{5^3-1}{5-1}=\frac{2^5-1}{2-1}=31 -$$ - -and -$$ -\frac{90^3-1}{90-1}=\frac{2^{13}-1}{2-1}=8191. -$$ - -Davenport showed that, for each pair of fixed exponents $m$ and $n$, this equation has only finitely many solutions. But this proof depends on Siegel's finiteness theorem, which is ineffective. Nesterenko showed that, if $m-1=dr$ and $n-1=ds$ with $d\ge 2$, $r\ge 1$, and $s\ge 1$, then $\max(x,y,m,n)$ is bounded by an effectively computable constant depending only on $r$ and $s$. Yuan showed that for $m=3$ and odd $n$, this equation has no solution $(x,y,n)$ other than the two solutions given above. - -Balasubramanian and Shorey proved in 1980 that there are only finitely many possible solutions $(x,y,m,n)$ to the equations with prime divisors of $x$ and $y$ lying in a given finite set and that they may be effectively computed. - -He showed that, for each fixed $x$ and $y$, this equation has at most one solution. - -The Goormaghtigh conjecture may be expressed as saying that 31 (111 in base 5, 11111 in base 2) and 8191 (111 in base 90, 1111111111111 in base 2) are the only two numbers that are repunits with at least 3 digits in two different bases. diff --git a/wiki/wikipedia/2914.txt b/wiki/wikipedia/2914.txt deleted file mode 100644 index 2a9d48cee933b8ad46a3feaad956f85ec586e16c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2914.txt +++ /dev/null @@ -1,53 +0,0 @@ -In computer science, a term index is a data structure to facilitate fast lookup of terms and clauses in a logic program, deductive database, or automated theorem prover.
- -Many operations in automated theorem provers require search in huge collections of terms and clauses. Such operations typically fall into the following scheme. Given a collection $S$ of terms (clauses) and a query term (clause) $q$, find in $S$ some/all terms $t$ related to $q$ according to a certain retrieval condition. Most interesting retrieval conditions are formulated as the existence of a substitution that relates in a special way the query and the retrieved objects $t$. Here is a list of retrieval conditions frequently used in provers: - -* term $q$ is unifiable with term $t$, i.e., there exists a substitution $\theta$ such that $q\theta = t\theta$ - -* term $t$ is an instance of $q$, i.e., there exists a substitution $\theta$ such that $q\theta = t$ - -* term $t$ is a generalisation of $q$, i.e., there exists a substitution $\theta$ such that $q = t\theta$ - -* clause $q$ subsumes clause $t$, i.e., there exists a substitution $\theta$ such that $q\theta$ is a subset/submultiset of $t$ - -* clause $q$ is subsumed by $t$, i.e., there exists a substitution $\theta$ such that $t\theta$ is a subset/submultiset of $q$ - -More often than not, we are actually interested in finding the appropriate substitutions explicitly, together with the retrieved terms $t$, rather than just in establishing the existence of such substitutions. - -Very often the sizes of term sets to be searched are large, the retrieval calls are frequent, and the retrieval condition test is rather complex. In such situations linear search in $S$, where the retrieval condition is tested on every term from $S$, becomes prohibitively costly. To overcome this problem, special data structures, called indexes, are designed in order to support fast retrieval. Such data structures, together with the accompanying algorithms for index maintenance and retrieval, are called term indexing techniques. - -* discrimination trees - -* substitution trees - -* path indexing - -Substitution trees outperform path indexing, discrimination tree indexing, and abstraction trees. - -A discrimination tree term index stores its information in a trie data structure. - -* feature vector indexing - -* code trees - -* context trees - -* relational path indexing diff --git a/wiki/wikipedia/2915.txt b/wiki/wikipedia/2915.txt deleted file mode 100644 index d8e37b7364f07c192ff280280b309cb26467b603..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2915.txt +++ /dev/null @@ -1,39 +0,0 @@ -In mathematics, differential refers to infinitesimal differences or to the derivatives of functions. The term is used in various branches of mathematics such as calculus, differential geometry, algebraic geometry and algebraic topology. - -* In calculus, the differential represents a change in the linearization of a function. - -** The total differential is its generalization for functions of multiple variables. - -* In traditional approaches to calculus, the differentials (e.g. dx, dy, dt, etc.) are interpreted as infinitesimals. There are several methods of defining infinitesimals rigorously, but it is sufficient to say that an infinitesimal number is smaller in absolute value than any positive real number, just as an infinitely large number is larger than any real number. - -* The differential is another name for the Jacobian matrix of partial derivatives of a function from $\mathbb{R}^n$ to $\mathbb{R}^m$ (especially when this matrix is viewed as a linear map).
- -* More generally, the differential or pushforward refers to the derivative of a map between smooth manifolds and the pushforward operations it defines. The differential is also used to define the dual concept of pullback. - -* Stochastic calculus provides a notion of stochastic differential and an associated calculus for stochastic processes. - -* The integrator in a Stieltjes integral is represented as the differential of a function. Formally, the differential appearing under the integral behaves exactly as a differential: thus, the integration by substitution and integration by parts formulae for the Stieltjes integral correspond, respectively, to the chain rule and product rule for the differential. - -The notion of a differential motivates several concepts in differential geometry (and differential topology). - -*The differential (pushforward) of a map between manifolds. - -*Differential forms provide a framework which accommodates multiplication and differentiation of differentials. - -*The exterior derivative is a notion of differentiation of differential forms which generalizes the differential of a function (which is a differential 1-form). - -*Pullback is, in particular, a geometric name for the chain rule for composing a map between manifolds with a differential form on the target manifold. - -*Covariant derivatives or differentials provide a general notion for differentiating vector fields and tensor fields on a manifold, or, more generally, sections of a vector bundle: see Connection (vector bundle). This ultimately leads to the general concept of a connection. - -Differentials are also important in algebraic geometry, and there are several important notions. - -* Abelian differentials usually mean differential one-forms on an algebraic curve or Riemann surface. - -* Quadratic differentials (which behave like "squares" of abelian differentials) are also important in the theory of Riemann surfaces. - -* Kähler differentials provide a general notion of differential in algebraic geometry. - -The term differential has also been adopted in homological algebra and algebraic topology, because of the role the exterior derivative plays in de Rham cohomology: in a cochain complex $(C_\bullet, d_\bullet)$, the maps (or coboundary operators) $d_i$ are often called differentials. Dually, the boundary operators in a chain complex are sometimes called codifferentials. - -The properties of the differential also motivate the algebraic notions of a derivation and a differential algebra. diff --git a/wiki/wikipedia/2916.txt b/wiki/wikipedia/2916.txt deleted file mode 100644 index 41d7057886f759d26811be6c8deea3b2f74fc348..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2916.txt +++ /dev/null @@ -1,80 +0,0 @@ -Earliest deadline first (EDF) or least time to go is a dynamic priority scheduling algorithm used in real-time operating systems to place processes in a priority queue. Whenever a scheduling event occurs (task finishes, new task released, etc.), the queue will be searched for the process closest to its deadline. This process is the next to be scheduled for execution.
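Before turning to optimality results, here is a minimal Python sketch of this selection rule (our own illustration, not taken from any of the kernels discussed later; the job-tuple format, unit-length time slices, and all names are assumptions):

```python
import heapq

def edf_schedule(jobs):
    """jobs: list of (release, execution, deadline) in integer time slices.
    Simulates preemptive EDF on one processor, returning the timeline."""
    pending = sorted(jobs)          # jobs ordered by release time
    ready = []                      # min-heap of (deadline, id, remaining)
    timeline, t, i = [], 0, 0
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][0] <= t:
            release, exe, deadline = pending[i]
            heapq.heappush(ready, (deadline, i, exe))
            i += 1
        if not ready:               # idle until the next release
            t = pending[i][0]
            continue
        deadline, jid, rem = heapq.heappop(ready)
        timeline.append((t, jid))   # run the earliest-deadline job one slice
        t += 1
        if rem > 1:                 # not finished: back into the queue
            heapq.heappush(ready, (deadline, jid, rem - 1))
    return timeline

print(edf_schedule([(0, 2, 5), (0, 1, 8), (1, 4, 10)]))
```

Preemption falls out of the structure: after every slice the scheduler re-selects the ready job with the earliest deadline, so a newly released job with a nearer deadline takes over at the next slice boundary.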
- -With scheduling periodic processes that have deadlines equal to their periods, EDF has a utilization bound of 100%. Thus, the for EDF is: -$$ -U = \sum_{i=1}^{n} \frac{C_i}{T_i} \leq 1, -$$ - -where the $\left\{C_i\right\}$ are the worst-case computation-times of the $n$ processes and the $\left\{T_i\right\}$ are their respective inter-arrival periods (assumed to be equal to the relative deadlines). - -That is, EDF can guarantee that all deadlines are met provided that the total CPU utilization is not more than 100%. Compared to fixed priority scheduling techniques like rate-monotonic scheduling, EDF can guarantee all the deadlines in the system at higher loading. - -EDF is also an optimal scheduling algorithm on non-preemptive uniprocessors, but only among the class of scheduling algorithms that do not allow inserted idle time. When scheduling periodic processes that have deadlines equal to their periods, a sufficient (but not necessary) schedulability test for EDF becomes: -$$ -U = \sum_{i=1}^{n} \frac{C_i}{T_i} \leq{1-p}, -$$ - -Where p represents the penalty for non-preemption, given by max $\left\{C_i\right\}$ / min $\left\{T_i\right\}$. If this factor can be kept small, non-preemptive EDF can be beneficial as it has low implementation overhead. - -However, when the system is overloaded, the set of processes that will miss deadlines is largely unpredictable (it will be a function of the exact deadlines and time at which the overload occurs.) This is a considerable disadvantage to a real time systems designer. The algorithm is also difficult to implement in hardware and there is a tricky issue of representing deadlines in different ranges (deadlines can not be more precise than the granularity of the clock used for the scheduling). If a modular arithmetic is used to calculate future deadlines relative to now, the field storing a future relative deadline must accommodate at least the value of the (("duration" {of the longest expected time to completion} * 2) + "now"). Therefore EDF is not commonly found in industrial real-time computer systems. - -Instead, most real-time computer systems use fixed priority scheduling (usually rate-monotonic scheduling). With fixed priorities, it is easy to predict that overload conditions will cause the low-priority processes to miss deadlines, while the highest-priority process will still meet its deadline. - -There is a significant body of research dealing with EDF scheduling in real-time computing; it is possible to calculate worst case response times of processes in EDF, to deal with other types of processes than periodic processes and to use servers to regulate overloads. - -Consider 3 periodic processes scheduled on a preemptive uniprocessor. The execution times and periods are as shown in the following table: - -In this example, the units of time may be considered to be schedulable time slices. The deadlines are that each periodic process must complete within its period. - -In the timing diagram, the columns represent time slices with time increasing to the right, and the processes all start their periods at time slice 0. The timing diagram's alternating blue and white shading indicates each process's periods, with deadlines at the color changes. - -The first process scheduled by EDF is P2, because its period is shortest, and therefore it has the earliest deadline. Likewise, when P2 completes, P1 is scheduled, followed by P3. 
- -At time slice 5, both P2 and P3 have the same deadline, needing to complete before time slice 10, so EDF may schedule either one. - -The utilization will be: -$$ - \left ( \frac{1}{8} + \frac{2}{5} + \frac{4}{10} \right ) = \left ( \frac{37}{40} \right ) = 0.925 = {\mathbf{92.5\%}} -$$ - -Since the least common multiple of the periods is 40, the scheduling pattern can repeat every 40 time slices. But only 37 of those 40 time slices are used by P1, P2, or P3. Since the utilization, 92.5%, is not greater than 100%, the system is schedulable with EDF. - -Undesirable deadline interchanges may occur with EDF scheduling. A process may use a shared resource inside a critical section, to prevent it from being pre-emptively descheduled in favour of another process with an earlier deadline. If so, it becomes important for the scheduler to assign the running process the earliest deadline from among the other processes waiting for the resource. Otherwise the processes with earlier deadlines might miss them. - -This is especially important if the process running the critical section has a much longer time to complete and exit from its critical section, which will delay releasing the shared resource. But the process might still be pre-empted in favour of others that have earlier deadlines but do not share the critical resource. This hazard of deadline interchange is analogous to priority inversion when using fixed-priority pre-emptive scheduling. - -To speed up the deadline search within the ready queue, the queue entries can be sorted according to their deadlines. When a new process or a periodic process is given a new deadline, it is inserted before the first process with a later deadline. This way, the processes with the earliest deadlines are always at the beginning of the queue. - -In a heavy-traffic analysis of the behavior of a single-server queue under an earliest-deadline-first scheduling policy, the processes have deadlines and are served only until their deadlines elapse. The fraction of "reneged work," defined as the residual work not serviced due to elapsed deadlines, is an important performance measure. - -It is commonly accepted that an implementation of fixed-priority pre-emptive scheduling (FPS) is simpler than a dynamic priority scheduler such as EDF. However, when comparing the maximum utilization of an optimal schedule under fixed priority (with the priority of each thread given by rate-monotonic scheduling), EDF can reach 100% while the theoretical maximum for rate-monotonic scheduling is around 69%. - -Note that EDF does not make any specific assumption on the periodicity of the tasks; hence, it can be used for scheduling periodic as well as aperiodic tasks. - -Although EDF implementations are not common in commercial real-time kernels, here are a few open-source and real-time kernels implementing EDF: - -* The SHaRK RTOS, implementing various versions of EDF scheduling and resource reservation scheduling algorithms - -* ERIKA Enterprise, which provides an implementation of EDF optimized for small microcontrollers with an API similar to the OSEK API. - -* The Everyman Kernel implements either EDF or Deadline Monotonic scheduling depending on the user's configuration. - -* MaRTE OS acts as a runtime for Ada applications and implements a wide range of scheduling algorithms including EDF. - -* The AQuoSA project constitutes a modification to the Linux kernel enriching the process scheduler with EDF scheduling capabilities.
The timing of the scheduling cannot be as precise as in the case of the above hard real-time operating systems, yet it is sufficiently precise so as to greatly enhance predictability, and thus fulfill the real-time requirements, of multimedia applications. AQuoSA is one of a few projects that provides real-time scheduling capabilities to unprivileged users on a system in a controlled way, by means of a properly designed access-control model. - -* The Linux kernel has an earliest deadline first implementation named SCHED_DEADLINE, which has been available since release 3.14. - -* The IRMOS real-time scheduler, developed in the context of the IRMOS European Project, is a multi-processor real-time scheduler for the Linux kernel, particularly suitable for temporal isolation and provisioning of QoS guarantees to complex multi-threaded software components and also entire virtual machines. For example, when using Linux as host OS and KVM as hypervisor, IRMOS can be used to provide scheduling guarantees to individual VMs and at the same time isolate their performance so as to avoid undesired temporal interferences. IRMOS features a combined EDF/FP hierarchical scheduler. At the outer level there is a partitioned EDF scheduler on the available CPUs. However, reservations are multi-CPU, and global FP over multi-processors is used at the inner level in order to schedule the threads (and/or processes) attached to each outer EDF reservation. - -* Xen has had an EDF scheduler for some time now. - -* EDFI, a "lightweight real-time scheduling protocol that combines EDF with deadline inheritance over shared resources," has been incorporated into another kernel. - -* : The EDF scheduler will be available in version 4.11. - -* LITMUS^RT is a real-time extension of the Linux kernel with a focus on multiprocessor real-time scheduling and synchronization. Its set of real-time algorithms includes Partitioned-EDF, Global-EDF, and Clustered-EDF schedulers. diff --git a/wiki/wikipedia/2917.txt b/wiki/wikipedia/2917.txt deleted file mode 100644 index 2bb78754ed48d22689402dbb837e1ba55b9e25dc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2917.txt +++ /dev/null @@ -1,5 +0,0 @@ -In Euclidean geometry, the trillium theorem (also known as the incenter–excenter lemma) states that if the internal bisector of angle A in triangle ABC meets the circumscribed circle again at a point D, then D is equidistant from B, C, and the incenter I of the triangle. Among other applications, it can be used to reconstruct a triangle from a given vertex, incenter, and circumcenter. - -Other triangle reconstruction problems, such as the reconstruction of a triangle from a vertex, incenter, and center of its nine-point circle, can be solved by reducing the problem to the case of a vertex, incenter, and circumcenter. - -Let I and J be any two of the four points given by the incenter and the three excenters of a triangle ABC. Then I and J are collinear with one of the three triangle vertices. The circle with IJ as diameter passes through the other two vertices and is centered on the circumcircle of ABC. When one of I or J is the incenter, this is the trillium theorem, with line IJ as the (internal) angle bisector of one of the triangle's angles. However, it is also true when I and J are both excenters; in this case, line IJ is the external angle bisector of one of the triangle's angles. diff --git a/wiki/wikipedia/2918.txt b/wiki/wikipedia/2918.txt deleted file mode 100644 index 6f40559e792214274213728bb1509b6c48ddc236..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2918.txt +++ /dev/null @@ -1,40 +0,0 @@ -
- -If A is an n × n matrix with strictly positive elements, then there exist diagonal matrices D1 and D2 with strictly positive diagonal elements such that D1AD2 is doubly stochastic. The matrices D1 and D2 are unique modulo multiplying the first matrix by a positive number and dividing the second one by the same number. - -A simple iterative method to approach the double stochastic matrix is to alternately rescale all rows and all columns of A to sum to 1. Sinkhorn and Knopp presented this algorithm and analyzed its convergence. - -This is essentially the same as the Iterative proportional fitting algorithm, well known in survey statistics. - -The following analogue for unitary matrices is also true: for every unitary matrix U there exist two diagonal unitary matrices L and R such that LUR has each of its columns and rows summing to 1. - -The following extension to maps between matrices is also true (see Theorem 5 and also Theorem 4.7): given a Kraus operator - -which represents the quantum operation Φ mapping a density matrix into another, -$$ - S \to \Phi(S) = \sum_i B_i S B_i^*, -$$ - -that is trace preserving, -$$ - \sum_i B_i^* B_i = I, -$$ - -and, in addition, whose range is in the interior of the positive definite cone (strict positivity), there exist scalings xj, for j in {0,1}, that are positive definite so that the rescaled Kraus operator -$$ - S \to x_1\Phi(x_0^{-1}Sx_0^{-1})x_1 = \sum_i (x_1B_ix_0^{-1}) S (x_1B_ix_0^{-1})^* -$$ - -is doubly stochastic. In other words, it is such that both, -$$ - x_1\Phi(x_0^{-1}I x_0^{-1})x_1 = I, -$$ - -as well as for the adjoint, -$$ - x_0^{-1}\Phi^*(x_1I x_1)x_0^{-1} = I, -$$ - -where I denotes the identity operator. - -In the 2010s Sinkhorn's theorem came to be used to find solutions of entropy-regularised optimal transport problems. This has been of interest in machine learning because such "Sinkhorn distances" can be used to evaluate the difference between data distributions and permutations. This improves the training of machine learning algorithms, in situations where maximum likelihood training may not be the best method. diff --git a/wiki/wikipedia/2919.txt b/wiki/wikipedia/2919.txt deleted file mode 100644 index 3ac30ed1f21ded85b003b9f4d8fcbb5fe0cd0153..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2919.txt +++ /dev/null @@ -1,5 +0,0 @@ -Tetris Blitz was a variant of Tetris. It was developed by EA Salt Lake published by Electronic Arts. It was released free on mobile platforms such as iOS, Android, and Windows Phone. It was released in 2013 and was given mixed reviews. On January 20, 2020, EA announced that the game would be discontinued on April 21, 2020. As of April 22, 2020, it is no longer playable. - -The game had power-ups to boost your score up. Classic ones were Time Shift, Lasers, Quake, and Multiplier. New ones came soon afterwards. The game has 3 modes: classic, battles, and tournaments. The latter two were unlocked after leveling up. - -The game was given mixed reviews. Google Play rates it with 4.2 stars. diff --git a/wiki/wikipedia/292.txt b/wiki/wikipedia/292.txt deleted file mode 100644 index 596dcc7a522e791b14be5a46d608dc2b9375c45d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/292.txt +++ /dev/null @@ -1,35 +0,0 @@ -Matita - -is an experimental proof assistant under development at the Computer Science Department of the University of Bologna. 
diff --git a/wiki/wikipedia/2919.txt b/wiki/wikipedia/2919.txt deleted file mode 100644 index 3ac30ed1f21ded85b003b9f4d8fcbb5fe0cd0153..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2919.txt +++ /dev/null @@ -1,5 +0,0 @@ -Tetris Blitz was a variant of Tetris. It was developed by EA Salt Lake and published by Electronic Arts. It was released for free on mobile platforms such as iOS, Android, and Windows Phone. It was released in 2013 and was given mixed reviews. On January 20, 2020, EA announced that the game would be discontinued on April 21, 2020. As of April 22, 2020, it is no longer playable. - -The game had power-ups to boost the player's score. Classic ones were Time Shift, Lasers, Quake, and Multiplier. New ones came soon afterwards. The game had three modes: classic, battles, and tournaments. The latter two were unlocked after leveling up. - -The game was given mixed reviews; Google Play rates it with 4.2 stars. diff --git a/wiki/wikipedia/292.txt b/wiki/wikipedia/292.txt deleted file mode 100644 index 596dcc7a522e791b14be5a46d608dc2b9375c45d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/292.txt +++ /dev/null @@ -1,35 +0,0 @@ -Matita is an experimental proof assistant under development at the Computer Science Department of the University of Bologna. It is a tool aiding the development of formal proofs by man-machine collaboration, providing a programming environment where formal specifications, executable algorithms and automatically verifiable correctness certificates naturally coexist. - -Matita is based on a dependent type system known as the Calculus of (Co)Inductive Constructions (a derivative of the Calculus of Constructions), and is compatible, to some extent, with Coq. - -The word "matita" means "pencil" in Italian (a simple and widespread editing tool). It is a reasonably small and simple application, whose architectural and software complexity is meant to be mastered by students, providing a tool particularly suited for testing innovative ideas and solutions. Matita adopts a tactic-based editing mode; (XML-encoded) proof objects are produced for storage and exchange. - -Existential variables are native in Matita, allowing a simpler management of dependent goals. - -Matita implements a bidirectional type inference algorithm exploiting both inferred and expected types. - -The power of the type inference system (refiner) is further augmented by a mechanism of hints that helps in synthesizing unifiers in particular situations specified by the user. - -Matita supports a sophisticated disambiguation strategy based on a dialog between the parser and the typechecker. - -At the interactive level, the system implements a small step execution of structured tactics, allowing a much better management of the proof development, and naturally leading to more structured and readable scripts. - -Matita has been employed in CerCo (Certified Complexity): a European project focused on the development of a formally verified, complexity preserving compiler from a large subset of C to the assembler of a MCS-51 microprocessor. - -The Matita tutorial provides a pragmatic introduction to the main functionalities of the Matita interactive theorem prover, offering a guided tour through a set of non-trivial examples in the field of software specification and verification. diff --git a/wiki/wikipedia/2920.txt b/wiki/wikipedia/2920.txt deleted file mode 100644 index a5cd0e0bcc91f37e6953030a94187f47adc9b50e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2920.txt +++ /dev/null @@ -1,9 +0,0 @@ -In graph theory, a branch of mathematics, a map graph is an undirected graph formed as the intersection graph of finitely many simply connected and internally disjoint regions of the Euclidean plane. The map graphs include the planar graphs, but are more general. Any number of regions can meet at a common corner (as in the Four Corners of the United States, where four states meet), and when they do the map graph will contain a clique connecting the corresponding vertices, unlike planar graphs in which the largest cliques have only four vertices. Another example of a map graph is the king's graph, a map graph of the squares of the chessboard connecting pairs of squares between which the chess king can move. - -Map graphs can be represented combinatorially as the "half-squares of planar bipartite graphs". That is, let G = (U,V,E) be a planar bipartite graph, with bipartition (U,V). The square of G is another graph on the same vertex set, in which two vertices are adjacent in the square when they are at most two steps apart in G. The half-square or bipartite half is the induced subgraph of one side of the bipartition (say V) in the square graph: its vertex set is V and it has an edge between each two vertices in V that are two steps apart in G.
The half-square is a map graph. It can be represented geometrically by finding a planar embedding of G, and expanding each vertex of V and its adjacent edges into a star-shaped region, so that these regions touch at the vertices of U. Conversely, every map graph can be represented as a half-square in this way. - -In 1998, Mikkel Thorup claimed that map graphs can be recognized in polynomial time. However, the high exponent of the algorithm that he sketched makes it impractical, and Thorup has not published the details of his method and its proof. - -The maximum independent set problem has a polynomial-time approximation scheme for map graphs, and the chromatic number can be approximated to within a factor of two in polynomial time. The theory of bidimensionality leads to many other approximation algorithms and fixed-parameter tractable algorithms for optimization problems on map graphs. - -A k-map graph is a map graph derived from a set of regions in which at most k regions meet at any point. Equivalently, it is the half-square of a planar bipartite graph in which the vertex set U (the side of the bipartition not used to induce the half-square) has maximum degree k. A 3-map graph is a planar graph, and every planar graph can be represented as a 3-map graph. Every 4-map graph is a 1-planar graph, a graph that can be drawn with at most one crossing per edge, and every optimal 1-planar graph (a graph formed from a planar quadrangulation by adding two crossing diagonals to every quadrilateral face) is a 4-map graph. However, some other 1-planar graphs are not map graphs, because (unlike map graphs) they include crossing edges that are not part of a four-vertex complete subgraph. diff --git a/wiki/wikipedia/2921.txt b/wiki/wikipedia/2921.txt deleted file mode 100644 index f7bcb9553465bd559197dc0168c72d37e5803aed..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2921.txt +++ /dev/null @@ -1,49 +0,0 @@ -In geometry, the Petr–Douglas–Neumann theorem (or the PDN-theorem) is a result concerning arbitrary planar polygons. The theorem asserts that a certain procedure when applied to an arbitrary polygon always yields a regular polygon having the same number of sides as the initial polygon. The theorem was first published by Karel Petr (1868–1950) of Prague in 1908, and was rediscovered independently by Jesse Douglas (1897–1965) in 1940 and by B H Neumann (1909–2002) in 1941. - -The proof begins by encoding an n-gon by a list of complex numbers representing the vertices of the n-gon. This list can be thought of as a vector in the n-dimensional complex linear space $\mathbb{C}^n$. Take an n-gon A and let it be represented by the complex vector - -$A = (a_1, a_2, \ldots, a_n)$. - -Let the polygon B be formed by the free vertices of similar triangles built on the sides of A and let it be represented by the complex vector - -$B = (b_1, b_2, \ldots, b_n)$. - -Then we have - -$\alpha(a_r - b_r) = a_{r+1} - b_r$, where $\alpha = \exp(i\theta)$ for some $\theta$ (here $i$ is the square root of −1). - -This yields the following expression to compute the $b_r$'s: - -$b_r = (1-\alpha)^{-1}(a_{r+1} - \alpha a_r)$. - -In terms of the linear operator $S : \mathbb{C}^n \to \mathbb{C}^n$ that cyclically permutes the coordinates one place, we have - -$B = (1-\alpha)^{-1}(S - \alpha I)A$, where $I$ is the identity matrix. - -This means that the polygon $A_{n-2}$ that we need to show is regular is obtained from $A_0$ by applying the composition of the following operators: - -$(1 - \omega^k)^{-1}(S - \omega^k I)$ for $k = 1, 2, \ldots, n-2$, where $\omega = \exp(2\pi i/n)$. (These commute because they are all polynomials in the same operator S.)
- -A polygon $P = (p_1, p_2, \ldots, p_n)$ is a regular n-gon if each side of P is obtained from the next by rotating through an angle of $2\pi/n$, that is, if - -$p_{r+1} - p_r = \omega(p_{r+2} - p_{r+1})$. - -This condition can be formulated in terms of S as follows: - -$(S - I)(I - \omega S)P = 0$. - -Or equivalently as - -$(S - I)(S - \omega^{n-1} I)P = 0$, since $\omega^n = 1$. - -The Petr–Douglas–Neumann theorem now follows from the following computations. - -$(S - I)(S - \omega^{n-1} I) A_{n-2}$ - -$= (S - I)(S - \omega^{n-1} I)(1 - \omega)^{-1}(S - \omega I)(1 - \omega^2)^{-1}(S - \omega^2 I) \cdots (1 - \omega^{n-2})^{-1}(S - \omega^{n-2} I) A_0$ - -$= (1 - \omega)^{-1}(1 - \omega^2)^{-1} \cdots (1 - \omega^{n-2})^{-1}(S - I)(S - \omega I)(S - \omega^2 I) \cdots (S - \omega^{n-1} I) A_0$ - -$= (1 - \omega)^{-1}(1 - \omega^2)^{-1} \cdots (1 - \omega^{n-2})^{-1}(S^n - I) A_0$ - -$= 0$, since $S^n = I$. diff --git a/wiki/wikipedia/2922.txt b/wiki/wikipedia/2922.txt deleted file mode 100644 index 33fcb69ec8ec6d7be32b3e27bce15990fdf7a41f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2922.txt +++ /dev/null @@ -1,27 +0,0 @@ -In mathematics, Auerbach's lemma, named after Herman Auerbach, is a theorem in functional analysis which asserts that a certain property of Euclidean spaces holds for general finite-dimensional normed vector spaces. - -Let (V, ||·||) be an n-dimensional normed vector space. Then there exists a basis $\{e_1, \ldots, e_n\}$ of V such that - -$\|e_i\| = 1$ and $\|e_i^*\| = 1$ for $i = 1, \ldots, n$, - -where $\{e_1^*, \ldots, e_n^*\}$ is a basis of $V^*$ dual to $\{e_1, \ldots, e_n\}$, i.e. $e_i^*(e_j) = \delta_{ij}$. - -A basis with this property is called an Auerbach basis. - -If V is an inner product space (or even an infinite-dimensional Hilbert space) then this result is obvious, as one may take for $\{e_i\}$ any orthonormal basis of V (the dual basis is then $\{(e_i|\cdot)\}$). - -An equivalent statement is the following: any centrally symmetric convex body in $ \mathbf{R}^n $ has a linear image which contains the unit cross-polytope (the unit ball for the $\ell_1^n$ norm) and is contained in the unit cube (the unit ball for the $\ell_{\infty}^n $ norm). - -The lemma has a corollary with implications for approximation theory. - -Let V be an n-dimensional subspace of a normed vector space (X, ||·||). Then there exists a projection P of X onto V such that ||P|| ≤ n. - -Let $\{e_1, \ldots, e_n\}$ be an Auerbach basis of V and $\{e_1^*, \ldots, e_n^*\}$ the corresponding dual basis. By the Hahn–Banach theorem each $e_i^*$ extends to $f_i \in X^*$ such that - -$\|f_i\| = 1$. - -Now set - -$P(x) = \sum_i f_i(x)e_i$. - -It is easy to check that P is indeed a projection onto V and that ||P|| ≤ n (this follows from the triangle inequality). diff --git a/wiki/wikipedia/2923.txt b/wiki/wikipedia/2923.txt deleted file mode 100644 index 99ff6dacd3b666a264121e2dc6744c1f04c964f6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2923.txt +++ /dev/null @@ -1,135 +0,0 @@ -In mathematics and physics, Lieb–Thirring inequalities provide an upper bound on the sums of powers of the negative eigenvalues of a Schrödinger operator in terms of integrals of the potential. They are named after E. H. Lieb and W. E. Thirring. - -The inequalities are useful in studies of quantum mechanics and differential equations and imply, as a corollary, a lower bound on the kinetic energy of $N$ quantum mechanical particles that plays an important role in the proof of stability of matter. - -For the Schrödinger operator $-\Delta+V(x)=-\nabla^2+V(x)$ on $\mathbb{R}^n$ with real-valued potential $V(x):\mathbb{R}^n\to\mathbb{R}$, the numbers $\lambda_1\le\lambda_2\le\dots\le0$ denote the (not necessarily finite) sequence of negative eigenvalues.
Then, for $\gamma$ and $n$ satisfying one of the conditions - -\begin{align} \gamma\ge\tfrac12, &\quad n=1,\\ \gamma>0, &\quad n=2,\\ \gamma\ge0, &\quad n\ge3, \end{align} - -there exists a constant $L_{\gamma,n}$, which only depends on $\gamma$ and $n$, such that -$$ \sum_{j\ge1}|\lambda_j|^\gamma\le L_{\gamma,n}\int_{\mathbb{R}^n}V(x)_-^{\gamma+\frac n2}\mathrm{d}^n x \qquad (1) $$ - -where $V(x)_-:=\max(-V(x),0)$ is the negative part of the potential $V$. The cases $\gamma>1/2,n=1$ as well as $\gamma>0,n\ge2$ were proven by E. H. Lieb and W. E. Thirring in 1976; the case $\gamma=0$, $n\ge3$ was proven independently by M. Cwikel, E. H. Lieb, and G. V. Rozenbljum. The resulting $\gamma=0$ inequality is thus also called the Cwikel–Lieb–Rozenbljum bound. The remaining critical case $\gamma=1/2, n=1$ was proven to hold by T. Weidl. - -The conditions on $\gamma$ and $n$ are necessary and cannot be relaxed. - -The Lieb–Thirring inequalities can be compared to the semi-classical limit. - -The classical phase space consists of pairs $(p,x)\in\mathbb{R}^{2n}$. Identifying the momentum operator $-\mathrm{i}\nabla$ with $p$ and assuming that every quantum state is contained in a volume $(2\pi)^n$ in the $2n$-dimensional phase space, the semi-classical approximation -$$ \sum_{j\ge 1}|\lambda_j|^\gamma\approx \frac{1}{(2\pi)^n}\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}\big(p^2+V(x)\big)_-^\gamma\mathrm{d}^n p\mathrm{d}^n x =L^{\mathrm{cl}}_{\gamma,n}\int_{\mathbb{R}^n} V(x)_-^{\gamma+\frac n2}\mathrm{d}^n x $$ - -is derived with the constant -$$ L_{\gamma,n}^{\mathrm{cl}}=(4\pi)^{-\frac n2}\frac{\Gamma(\gamma+1)}{\Gamma(\gamma+1+\frac n2)}. $$ - -While the semi-classical approximation does not need any assumptions on $\gamma>0$, the Lieb–Thirring inequalities only hold for suitable $\gamma$. - -Numerous results have been published about the best possible constant $L_{\gamma,n}$ in (1) but this problem is still partly open. The semiclassical approximation becomes exact in the limit of large coupling, that is for potentials $\beta V$ the Weyl asymptotics -$$ \lim_{\beta\to\infty}\frac{1}{\beta^{\gamma+\frac n2}}\mathrm{tr} (-\Delta+\beta V)_-^\gamma=L^\mathrm{cl}_{\gamma,n}\int_{\mathbb{R}^n} V(x)_-^{\gamma+\frac n2}\mathrm{d}^n x $$ - -hold. This implies that $L_{\gamma,n}^{\mathrm{cl}}\le L_{\gamma,n}$. Lieb and Thirring were able to show that $ L_{\gamma,n}=L_{\gamma,n}^{\mathrm{cl}}$ for $\gamma\ge 3/2, n=1$. M. Aizenman and E. H. Lieb proved that for fixed dimension $n$ the ratio $L_{\gamma,n}/L_{\gamma,n}^{\mathrm{cl}}$ is a monotonic, non-increasing function of $\gamma$. Subsequently $L_{\gamma,n}=L_{\gamma,n}^{\mathrm{cl}}$ was also shown to hold for all $n$ when $\gamma\ge 3/2$ by A. Laptev and T. Weidl. For $\gamma=1/2,n=1$ D. Hundertmark, E. H. Lieb and L. E. Thomas proved that the best constant is given by $L_{1/2,1}=2L_{1/2,1}^{\mathrm{cl}}=1/2$. - -On the other hand, it is known that $L^\mathrm{cl}_{\gamma,n}<L_{\gamma,n}$ for $\gamma<1$, and in one dimension it is conjectured that for $1/2\le\gamma<3/2$ the sharp constant is -$$ L_{\gamma,1}=2L^\mathrm{cl}_{\gamma,1}\left(\frac{\gamma-\frac12}{\gamma+\frac12}\right)^{\gamma-\frac12}. $$ - -The best known value for the physically relevant constant $L_{1,3}$ is $\pi L_{1,3}^\mathrm{cl}/\sqrt{3}$ and the smallest known constant in the Cwikel–Lieb–Rozenbljum inequality is $6.869L_{0,n}^\mathrm{cl} $. A complete survey of the presently best known values for $L_{\gamma,n}$ can be found in the literature. - -The Lieb–Thirring inequality for $\gamma=1$ is equivalent to a lower bound on the kinetic energy of a given normalised $N$-particle wave function $\psi\in L^2(\mathbb{R}^{Nn})$ in terms of the one-body density.
For an anti-symmetric wave function such that -$$ \psi(x_1,\dots,x_i,\dots,x_j,\dots,x_N)=-\psi(x_1,\dots,x_j,\dots,x_i,\dots,x_N) $$ - -for all $1\le i,j\le N$, the one-body density is defined as -$$ \rho_\psi(x) =N\int_{\mathbb{R}^{(N-1)n}}|\psi(x,x_2\dots,x_N)|^2 \mathrm{d}^n x_2\cdots\mathrm{d}^n x_{N}, \quad x\in\mathbb{R}^n. $$ - -The Lieb–Thirring inequality (1) for $\gamma=1$ is equivalent to the statement that -$$ \sum_{i=1}^N \int_{\mathbb{R}^n}|\nabla_i\psi|^2\mathrm{d}^n x_i\ge K_n\int_{\mathbb{R}^n}{\rho_\psi(x)^{1+\frac 2n}}\mathrm{d}^n x \qquad (2) $$ - -where the sharp constant $K_n$ is defined via -$$ \left(\left(1+\frac2n\right)K_n\right)^{1+\frac n2}\left(\left(1+\frac n2\right)L_{1,n}\right)^{1+\frac2n}=1. $$ - -The inequality can be extended to particles with spin states by replacing the one-body density by the spin-summed one-body density. The constant $K_n$ then has to be replaced by $K_n/q^{2/n}$ where $q$ is the number of quantum spin states available to each particle ($q=2$ for electrons). If the wave function is symmetric, instead of anti-symmetric, such that -$$ \psi(x_1,\dots,x_i,\dots,x_j,\dots,x_N)=\psi(x_1,\dots,x_j,\dots,x_i,\dots,x_N) $$ - -for all $1\le i,j\le N$, the constant $K_n$ has to be replaced by $K_n/N^{2/n}$. Inequality (2) describes the minimum kinetic energy necessary to achieve a given density $\rho_\psi$ with $N$ particles in $n$ dimensions. If $L_{1,3}=L^\mathrm{cl}_{1,3}$ were proven to hold, the right-hand side of (2) for $n=3$ would be precisely the kinetic energy term in Thomas–Fermi theory. - -The inequality can be compared to the Sobolev inequality. M. Rumin derived the kinetic energy inequality (2) (with a smaller constant) directly without the use of the Lieb–Thirring inequality. - -The kinetic energy inequality plays an important role in the proof of stability of matter as presented by Lieb and Thirring. The Hamiltonian under consideration describes a system of $N$ particles with $q$ spin states and $M$ fixed nuclei at locations $R_j\in\mathbb{R}^3$ with charges $Z_j>0$. The particles and nuclei interact with each other through the electrostatic Coulomb force and an arbitrary magnetic field can be introduced. If the particles under consideration are fermions (i.e. the wave function $\psi$ is antisymmetric), then the kinetic energy inequality (2) holds with the constant $K_n/q^{2/n}$ (not $K_n/N^{2/n}$). This is a crucial ingredient in the proof of stability of matter for a system of fermions. It ensures that the ground state energy $E_{N,M}(Z_1,\dots,Z_M)$ of the system can be bounded from below by a constant depending only on the maximum of the nuclei charges, $Z_{\max}$, times the number of particles, -$$ E_{N,M}(Z_1,\dots,Z_M)\ge -C(Z_{\max}) (M+N). $$ - -The system is then stable of the first kind since the ground-state energy is bounded from below and also stable of the second kind, i.e. the energy decreases only linearly with the number of particles and nuclei. In comparison, if the particles are assumed to be bosons (i.e. the wave function $\psi$ is symmetric), then the kinetic energy inequality (2) holds only with the constant $K_n/N^{2/n}$ and for the ground state energy only a bound of the form $-CN^{5/3}$ holds. Since the power $5/3$ can be shown to be optimal, a system of bosons is stable of the first kind but unstable of the second kind.
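To make inequality (1) concrete, here is a small added numerical sketch (not from the original article). It assumes two standard textbook facts beyond the text above: the Pöschl–Teller potential $V(x)=-2\operatorname{sech}^2 x$ has exactly one negative eigenvalue, $-1$, and for $\gamma=3/2$, $n=1$ the sharp constant equals the semiclassical one, $L_{3/2,1}=L^{\mathrm{cl}}_{3/2,1}=3/16$ (evaluate the formula for $L^{\mathrm{cl}}_{\gamma,n}$ above). This potential in fact saturates the bound, so both printed numbers should be close to 1.

```python
# Check of the 1-D Lieb-Thirring inequality (1) for gamma = 3/2:
#   sum_j |lambda_j|^(3/2)  <=  (3/16) * integral of V_-(x)^2 dx.
import numpy as np

L, m = 20.0, 1500                       # box half-width and grid size
x = np.linspace(-L, L, m)
h = x[1] - x[0]
V = -2.0 / np.cosh(x) ** 2              # Poeschl-Teller well, one bound state at -1

# Finite-difference discretization of the Schroedinger operator -d^2/dx^2 + V.
H = (np.diag(2.0 / h**2 + V)
     + np.diag(-np.ones(m - 1) / h**2, 1)
     + np.diag(-np.ones(m - 1) / h**2, -1))

eigenvalues = np.linalg.eigvalsh(H)
negative = eigenvalues[eigenvalues < 0.0]

lhs = np.sum(np.abs(negative) ** 1.5)                        # eigenvalue sum
rhs = (3.0 / 16.0) * np.trapz(np.maximum(-V, 0.0) ** 2, x)   # Lieb-Thirring bound
print(f"sum |lambda|^(3/2) = {lhs:.4f} <= bound = {rhs:.4f}")
```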
- -If the Laplacian $-\Delta=-\nabla^2$ is replaced by $(\mathrm{i}\nabla+A(x))^2$, where $A(x)$ is a magnetic field vector potential in $\mathbb{R}^n$, the Lieb–Thirring inequality (1) remains true. The proof of this statement uses the diamagnetic inequality. Although all presently known constants $L_{\gamma,n}$ remain unchanged, it is not known whether this is true in general for the best possible constant. - -The Laplacian can also be replaced by other powers of $-\Delta$. In particular for the operator $\sqrt{-\Delta}$, a Lieb–Thirring inequality similar to (1) holds with a different constant $L_{\gamma,n}$ and with the power on the right-hand side replaced by $\gamma+n$. Analogously a kinetic inequality similar to (2) holds, with $1+2/n$ replaced by $1+1/n$, which can be used to prove stability of matter for the relativistic Schrödinger operator under additional assumptions on the charges $Z_k$. - -In essence, the Lieb–Thirring inequality (1) gives an upper bound on the distances of the eigenvalues $\lambda_j$ to the essential spectrum $[0,\infty)$ in terms of the perturbation $V$. Similar inequalities can be proved for Jacobi operators. diff --git a/wiki/wikipedia/2924.txt b/wiki/wikipedia/2924.txt deleted file mode 100644 index feb7f49a37ee9d323150139c5f45cfe9842cb48a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2924.txt +++ /dev/null @@ -1,43 +0,0 @@ -Open Problems in Mathematics is a book, edited by John Forbes Nash Jr. and Michael Th. Rassias, published in 2016 by Springer. The book consists of seventeen expository articles, written by outstanding researchers, on some of the central open problems in the field of mathematics. The book also features an Introduction on John Nash: Theorems and Ideas, by Mikhail Leonidovich Gromov. According to the editors’ Preface, each article is devoted to one open problem or a “constellation of related problems”. - -Nash and Rassias write in the preface of the book that the open problems presented “were chosen for a variety of reasons. Some were chosen for their undoubtable importance and applicability, others because they constitute intriguing curiosities which remain unexplained mysteries on the basis of current knowledge and techniques, and some for more emotional reasons. Additionally, the attribute of a problem having a somewhat vintage flavor was also influential” in their decision process. - -* Preface, by John F. Nash Jr. and Michael Th. Rassias - -* A Farewell to “A Beautiful Mind and a Beautiful Person”, by Michael Th. Rassias - -* Introduction, John Nash: Theorems and Ideas, by Mikhail Leonidovich Gromov - -* P =? NP, by Scott Aaronson - -* From Quantum Systems to L-Functions: Pair Correlation Statistics and Beyond, by Owen Barrett, Frank W. K. Firk, Steven J. Miller, and Caroline Turnage-Butterbaugh - -* The Generalized Fermat Equation, by Michael Bennett, Preda Mihăilescu, and Samir Siksek - -* The Conjecture of Birch and Swinnerton-Dyer, by John H.
Coates - -* An Essay on the Riemann Hypothesis, by Alain Connes - -* Navier–Stokes Equations: A Quick Reminder and a Few Remarks, by Peter Constantin - -* Plateau’s Problem, by Jenny Harrison and Harrison Pugh - -* The Unknotting Problem, by Louis Kauffman - -* How Can Cooperative Game Theory Be Made More Relevant to Economics?: An Open Problem, by Eric Maskin - -* The Erdős–Szekeres Problem, by Walter Morris and Valeriu Soltan - -* Novikov’s Conjecture, by Jonathan Rosenberg - -* The Discrete Logarithm Problem, by René Schoof - -* Hadwiger’s Conjecture, by Paul Seymour - -* The Hadwiger–Nelson Problem, by Alexander Soifer - -* Erdős’s Unit Distance Problem, by Endre Szemerédi - -* Goldbach’s Conjectures: A Historical Perspective, by Robert Charles Vaughan - -* The Hodge Conjecture, by Claire Voisin diff --git a/wiki/wikipedia/2925.txt b/wiki/wikipedia/2925.txt deleted file mode 100644 index 32ceeb95afbbdab0f9ed9ec7259dd848d8900f65..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2925.txt +++ /dev/null @@ -1,21 +0,0 @@ -In Euclidean geometry, Newton's theorem states that in every tangential quadrilateral other than a rhombus, the center of the incircle lies on the Newton line. - -Let ABCD be a tangential quadrilateral with at most one pair of parallel sides. Furthermore, let E and F be the midpoints of its diagonals AC and BD and P be the center of its incircle. Given such a configuration the point P is located on the Newton line, that is, the line EF connecting the midpoints of the diagonals. - -A tangential quadrilateral with two pairs of parallel sides is a rhombus. In this case both midpoints and the center of the incircle coincide and by definition no Newton line exists. - -Newton's theorem can easily be derived from Anne's theorem considering that in tangential quadrilaterals the combined lengths of opposite sides are equal (Pitot theorem: a + c = b + d). Now, according to Anne's theorem, showing that the combined areas of opposite triangles PAD and PBC and the combined areas of triangles PAB and PCD are equal is sufficient to ensure that P lies on EF. Let r be the radius of the incircle, then r is also the altitude of all four triangles. - -\begin{align} - -&A(\triangle PAB)+A(\triangle PCD) \\[5pt] - -={}&\tfrac{1}{2}ra+\tfrac{1}{2}rc=\tfrac{1}{2}r(a+c) \\[5pt] - -={}&\tfrac{1}{2}r(b+d)=\tfrac{1}{2}rb+\tfrac{1}{2}rd \\[5pt] - -={}&A(\triangle PBC)+A(\triangle PAD) - -\end{align} - diff --git a/wiki/wikipedia/2926.txt b/wiki/wikipedia/2926.txt deleted file mode 100644 index 9031914049b32439a17714386a5a28a5ab398289..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2926.txt +++ /dev/null @@ -1,85 +0,0 @@ -In the mathematical discipline of graph theory, a matching or independent edge set in an undirected graph is a set of edges without common vertices. Finding a matching in a bipartite graph can be treated as a network flow problem. - -Given a graph G = (V, E), a matching M in G is a set of pairwise non-adjacent edges, none of which are loops; that is, no two edges share common vertices. - -A vertex is matched (or saturated) if it is an endpoint of one of the edges in the matching. Otherwise the vertex is unmatched (or unsaturated). - -A maximal matching is a matching M of a graph G that is not a subset of any other matching. A matching M of a graph G is maximal if every edge in G has a non-empty intersection with at least one edge in M. The following figure shows examples of maximal matchings (red) in three graphs.
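(An added illustration, not part of the original article: a single greedy pass over the edges always produces a maximal matching, since an edge is skipped only when one of its endpoints is already saturated; which maximal matching results depends on the scan order.)

```python
def greedy_maximal_matching(edges):
    """Greedily build a maximal matching: take every edge whose two
    endpoints are still unmatched. The result need not be maximum."""
    saturated = set()
    matching = []
    for u, v in edges:
        if u not in saturated and v not in saturated:
            matching.append((u, v))
            saturated.update((u, v))
    return matching

# On the path 0-1-2-3, scanning the middle edge first yields a maximal
# matching of size 1, while another order finds the maximum, of size 2.
print(greedy_maximal_matching([(1, 2), (0, 1), (2, 3)]))  # [(1, 2)]
print(greedy_maximal_matching([(0, 1), (1, 2), (2, 3)]))  # [(0, 1), (2, 3)]
```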
- -A maximum matching (also known as maximum-cardinality matching) is a matching that contains the largest possible number of edges. There may be many maximum matchings. The matching number $\nu(G)$ of a graph G is the size of a maximum matching. Every maximum matching is maximal, but not every maximal matching is a maximum matching. The following figure shows examples of maximum matchings in the same three graphs. - -A perfect matching is a matching that matches all vertices of the graph. That is, a matching is perfect if every vertex of the graph is incident to an edge of the matching. Every perfect matching is maximum and hence maximal. In some literature, the term complete matching is used. In the above figure, only part (b) shows a perfect matching. A perfect matching is also a minimum-size edge cover. Thus, the size of a maximum matching is no larger than the size of a minimum edge cover: $\nu(G) \le \rho(G)$. A graph can only contain a perfect matching when the graph has an even number of vertices. - -A near-perfect matching is one in which exactly one vertex is unmatched. Clearly, a graph can only contain a near-perfect matching when the graph has an odd number of vertices, and near-perfect matchings are maximum matchings. In the above figure, part (c) shows a near-perfect matching. If every vertex is unmatched by some near-perfect matching, then the graph is called factor-critical. - -Given a matching M, an alternating path is a path that begins with an unmatched vertex and whose edges belong alternately to the matching and not to the matching. An augmenting path is an alternating path that starts from and ends on free (unmatched) vertices. Berge's lemma states that a matching M is maximum if and only if there is no augmenting path with respect to M. - -An induced matching is a matching that is the edge set of an induced subgraph. - -In any graph without isolated vertices, the sum of the matching number and the edge covering number equals the number of vertices. If there is a perfect matching, then both the matching number and the edge cover number are $|V|/2$. - -If A and B are two maximal matchings, then $|A| \le 2|B|$ and $|B| \le 2|A|$. To see this, observe that each edge in B \ A can be adjacent to at most two edges in A \ B because A is a matching; moreover each edge in A \ B is adjacent to an edge in B \ A by maximality of B, hence -$$ -|A \setminus B| \le 2|B \setminus A |. -$$ - -Further we deduce that -$$ -|A| = |A \cap B| + |A \setminus B| \le 2|B \cap A| + 2|B \setminus A| = 2|B|. -$$ - -In particular, this shows that any maximal matching is a 2-approximation of a maximum matching and also a 2-approximation of a minimum maximal matching. This inequality is tight: for example, if G is a path with 3 edges and 4 vertices, the size of a minimum maximal matching is 1 and the size of a maximum matching is 2. - -A spectral characterization of the matching number of a graph is given by Hassani Monfared and Mallik as follows: Let $G$ be a graph on $n$ vertices, and $\lambda_1 > \lambda_2 > \ldots > \lambda_k>0$ be $k$ distinct nonzero purely imaginary numbers where $2k \leq n$. Then the matching number of $G$ is $k$ if and only if (a) there is a real skew-symmetric matrix $A$ with graph $G$ and eigenvalues $\pm \lambda_1, \pm\lambda_2,\ldots,\pm\lambda_k$ and $n-2k$ zeros, and (b) all real skew-symmetric matrices with graph $G$ have at most $2k$ nonzero eigenvalues.
Note that the (simple) graph of a real symmetric or skew-symmetric matrix $A$ of order $n$ has $n$ vertices and edges given by the nonzero off-diagonal entries of $A$. - -A generating function of the number of k-edge matchings in a graph is called a matching polynomial. Let G be a graph and $m_k$ be the number of k-edge matchings. One matching polynomial of G is -$$ -\sum_{k\geq0} m_k x^k. -$$ - -Another definition gives the matching polynomial as -$$ -\sum_{k\geq0} (-1)^k m_k x^{n-2k}, -$$ - -where n is the number of vertices in the graph. Each type has its uses; for more information see the article on matching polynomials. - -A fundamental problem in combinatorial optimization is finding a maximum matching. This problem has various algorithms for different classes of graphs. - -In an unweighted bipartite graph, the optimization problem is to find a maximum cardinality matching. The problem is solved by the Hopcroft-Karp algorithm in $O(\sqrt{V}E)$ time, and there are more efficient randomized algorithms, approximation algorithms, and algorithms for special classes of graphs such as bipartite planar graphs, as described in the main article. - -In a weighted bipartite graph, the optimization problem is to find a maximum-weight matching; a dual problem is to find a minimum-weight matching. This problem is often called maximum weighted bipartite matching, or the assignment problem. The Hungarian algorithm solves the assignment problem and it was one of the beginnings of combinatorial optimization algorithms. It uses a modified shortest path search in the augmenting path algorithm. If the Bellman–Ford algorithm is used for this step, the running time of the Hungarian algorithm becomes $O(V^2 E)$, or the edge cost can be shifted with a potential to achieve $O(V^2 \log{V} + V E)$ running time with the Dijkstra algorithm and Fibonacci heap. - -In a non-bipartite weighted graph, the problem of maximum weight matching can be solved in time $O(V^{2}E)$ using Edmonds' blossom algorithm. - -A maximal matching can be found with a simple greedy algorithm. A maximum matching is also a maximal matching, and hence it is possible to find a largest maximal matching in polynomial time. However, no polynomial-time algorithm is known for finding a minimum maximal matching, that is, a maximal matching that contains the smallest possible number of edges. - -A maximal matching with k edges is an edge dominating set with k edges. Conversely, if we are given a minimum edge dominating set with k edges, we can construct a maximal matching with k edges in polynomial time. Therefore, the problem of finding a minimum maximal matching is essentially equal to the problem of finding a minimum edge dominating set. Both of these two optimization problems are known to be NP-hard; the decision versions of these problems are classical examples of NP-complete problems. Both problems can be approximated within factor 2 in polynomial time: simply find an arbitrary maximal matching M. - -The number of matchings in a graph is known as the Hosoya index of the graph. It is #P-complete to compute this quantity, even for bipartite graphs. It is also #P-complete to count perfect matchings, even in bipartite graphs, because computing the permanent of an arbitrary 0–1 matrix (another #P-complete problem) is the same as computing the number of perfect matchings in the bipartite graph having the given matrix as its biadjacency matrix.
However, there exists a fully polynomial time randomized approximation scheme for counting the number of bipartite matchings. A remarkable theorem of Kasteleyn states that the number of perfect matchings in a planar graph can be computed exactly in polynomial time via the FKT algorithm. - -The number of perfect matchings in a complete graph $K_n$ (with n even) is given by the double factorial (n - 1)!!. The numbers of matchings in complete graphs, without constraining the matchings to be perfect, are given by the telephone numbers. - -One of the basic problems in matching theory is to find in a given graph all edges that may be extended to a maximum matching in the graph (such edges are called maximally-matchable edges, or allowed edges). Algorithms for this problem include: - -* For general graphs, a deterministic algorithm in time $O(VE)$ and a randomized algorithm in time $\tilde{O}(V^{2.376}) $. - -* For bipartite graphs, if a single maximum matching is found, a deterministic algorithm runs in time $O(V+E)$. - -The problem of developing an online algorithm for matching was first considered by Richard M. Karp, Umesh Vazirani, and Vijay Vazirani in 1990. - -In the online setting, nodes on one side of the bipartite graph arrive one at a time and must either be immediately matched to the other side of the graph or discarded. This is a natural generalization of the secretary problem and has applications to online ad auctions. The best online algorithm, for the unweighted maximization case with a random arrival model, attains a competitive ratio of 0.696. - -Kőnig's theorem states that, in bipartite graphs, the maximum matching is equal in size to the minimum vertex cover. Via this result, the minimum vertex cover, maximum independent set, and maximum vertex biclique problems may be solved in polynomial time for bipartite graphs. - -Hall's marriage theorem provides a characterization of bipartite graphs which have a perfect matching and the Tutte theorem provides a characterization for arbitrary graphs. - -* A Kekulé structure of an aromatic compound consists of a perfect matching of its carbon skeleton, showing the locations of double bonds in the chemical structure. These structures are named after Friedrich August Kekulé von Stradonitz, who showed that benzene (in graph theoretical terms, a 6-vertex cycle) can be given such a structure. - -* The Hosoya index is the number of non-empty matchings plus one; it is used in computational chemistry and mathematical chemistry investigations for organic compounds. - -* is about choosing a minimum set of classes from given requirements for graduation. - -* The Hitchcock transport problem involves bipartite matching as a sub-problem. - -* The subtree isomorphism problem involves bipartite matching as a sub-problem. diff --git a/wiki/wikipedia/2927.txt b/wiki/wikipedia/2927.txt deleted file mode 100644 index 5b47020a889ede41cffde3c0d00929da7882dee2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2927.txt +++ /dev/null @@ -1,21 +0,0 @@ -In statistics, Lukacs's proportion-sum independence theorem is a result that is used when studying proportions, in particular the Dirichlet distribution. It is named after Eugene Lukacs. - -If $Y_1$ and $Y_2$ are non-degenerate, independent random variables, then the random variables -$$ -W=Y_1+Y_2\text{ and }P = \frac{Y_1}{Y_1+Y_2} -$$ - -are independently distributed if and only if both $Y_1$ and $Y_2$ have gamma distributions with the same scale parameter.
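(An added empirical sketch, not part of the original article; the shape and scale values below are arbitrary choices. Sampling gamma variables with a common scale and estimating the correlation of W and P — zero correlation being only a necessary symptom of independence, not a proof of it — illustrates the role of the equal-scale assumption.)

```python
# Monte-Carlo look at Lukacs's theorem: with a common scale parameter the
# pair (W, P) is independent, so their correlation is ~ 0; with different
# scales a clear dependence appears.
import numpy as np

rng = np.random.default_rng(0)

def corr_w_p(scale1, scale2, n=100_000):
    y1 = rng.gamma(shape=2.0, scale=scale1, size=n)
    y2 = rng.gamma(shape=5.0, scale=scale2, size=n)
    w = y1 + y2
    p = y1 / w
    return np.corrcoef(w, p)[0, 1]

print(corr_w_p(1.0, 1.0))  # close to 0: equal scale parameters
print(corr_w_p(1.0, 3.0))  # clearly nonzero: unequal scales
```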
- -Suppose $Y_i$, $i = 1, \ldots, k$, are non-degenerate, independent, positive random variables. Then each of the $k-1$ random variables -$$ -P_i=\frac{Y_i}{\sum_{j=1}^k Y_j} -$$ - -is independent of -$$ -W=\sum_{i=1}^k Y_i -$$ - -if and only if all the $Y_i$ have gamma distributions with the same scale parameter. diff --git a/wiki/wikipedia/2928.txt b/wiki/wikipedia/2928.txt deleted file mode 100644 index 6fcb752f5981fea98883811797dc617f79958b47..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2928.txt +++ /dev/null @@ -1,5 +0,0 @@ -In computing, the X/Open XA standard (short for "eXtended Architecture") is a specification released in 1991 by X/Open (which later merged with The Open Group) for distributed transaction processing (DTP). - -The goal of XA is to guarantee atomicity in "global transactions" that are executed across heterogeneous components. A transaction is a unit of work such as transferring money from one person to another. Distributed transactions update multiple data stores (such as databases, application servers, message queues, transactional caches, etc.). To guarantee integrity, XA uses a two-phase commit (2PC) to ensure that all of a transaction's changes either take effect (commit) or do not (roll back), i.e., atomically. - -Specifically, XA describes the interface between a global transaction manager and a specific application. An application that wants to use XA engages an XA transaction manager using a library or separate service. The transaction manager tracks the participants in the transaction (i.e. the various data stores to which the application writes), and works with them to carry out the two-phase commit. In other words, the XA transaction manager is separate from an application's interactions with servers. XA maintains a log of its decisions to commit or roll back, which it can use to recover in case of a system outage. diff --git a/wiki/wikipedia/2929.txt b/wiki/wikipedia/2929.txt deleted file mode 100644 index bacbdb88e8ccb5f48e9d469edfb46d9a2c6b3c55..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2929.txt +++ /dev/null @@ -1,26 +0,0 @@ -In probability theory, Talagrand's concentration inequality is an isoperimetric-type inequality for product probability spaces. It was first proved by the French mathematician Michel Talagrand. The inequality is one of the manifestations of the concentration of measure phenomenon. - -The inequality states that if $\Omega = \Omega_1 \times \Omega_2 \times \cdots \times \Omega_n$ is a product space endowed with a product probability measure and $A$ - -is a subset of this space, then for any $t \ge 0$ -$$ -\Pr[A] \cdot \Pr\left[{A^c_t}\right] \le e^{-t^2/4} , -$$ - -where ${A^c_t}$ is the complement of $A_t$, and where $A_t$ is defined by -$$ -A_t = \{ x \in \Omega ~:~ \rho(A,x) \le t \} -$$ - -and where $\rho$ is Talagrand's convex distance defined as -$$ -\rho(A,x) = \max_{\alpha, \|\alpha\|_2 \le 1} \ \min_{y \in A} \ \sum_{i~:~x_i \neq y_i} \alpha_i -$$ - -where $\alpha \in \mathbf{R}^n$, $x,y \in \Omega$ are $n$-dimensional vectors with entries $\alpha_i, x_i, y_i$ respectively and $\|\cdot\|_2$ is the $\ell^2$-norm.
That is, -$$ -\|\alpha\|_2=\left(\sum_i\alpha_i^2\right)^{1/2}. -$$ diff --git a/wiki/wikipedia/293.txt b/wiki/wikipedia/293.txt deleted file mode 100644 index e3e3adb3b9ccf4e30c7be94d82062a5d98867cfd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/293.txt +++ /dev/null @@ -1,61 +0,0 @@ -Carleson's theorem is a fundamental result in mathematical analysis establishing the pointwise (Lebesgue) almost everywhere convergence of Fourier series of L2 functions, proved by Lennart Carleson in 1966. The name is also often used to refer to the extension of the result by Richard Hunt to Lp functions for p ∈ (1, ∞] (also known as the Carleson-Hunt theorem) and the analogous results for pointwise almost everywhere convergence of Fourier integrals, which can be shown to be equivalent by transference methods. - -The result, in the form of its extension by Hunt, can be formally stated as follows: - -Let ƒ be an Lp periodic function for some p ∈ (1, ∞], with Fourier coefficients $\hat{f}(n)$. Then -$$ -\lim_{N \rightarrow \infty} \sum_{|n| \leq N} \hat{f}(n) e^{inx} = f(x) -$$ - -for almost every x. - -The analogous result for Fourier integrals can be formally stated as follows: - -Let ƒ ∈ Lp(R) for some p ∈ (1, 2] have Fourier transform $\hat{f}(\xi)$. Then -$$ -\lim_{R \rightarrow \infty} \int_{|\xi| \leq R} \hat{f}(\xi) e^{2 \pi i x \xi} d\xi = f(x) -$$ - -for almost every x ∈ R. - -A fundamental question about Fourier series, asked by Fourier himself at the beginning of the 19th century, is whether the Fourier series of a continuous function converges pointwise to the function. - -By strengthening the continuity assumption slightly one can easily show that the Fourier series converges everywhere. For example, if a function has bounded variation then its Fourier series converges everywhere to the local average of the function. In particular, if a function is continuously differentiable then its Fourier series converges to it everywhere. This was proven by Dirichlet, who expressed his belief that he would soon be able to extend his result to cover all continuous functions. Another way to obtain convergence everywhere is to change the summation method. For example, Fejér's theorem shows that if one replaces ordinary summation by Cesàro summation then the Fourier series of any continuous function converges uniformly to the function. Further, it is easy to show that the Fourier series of any L2 function converges to it in L2 norm. - -After Dirichlet's result, several experts, including Dirichlet, Riemann, Weierstrass and Dedekind, stated their belief that the Fourier series of any continuous function would converge everywhere. This was disproved by Paul du Bois-Reymond, who showed in 1876 that there is a continuous function whose Fourier series diverges at one point. - -The almost-everywhere convergence of Fourier series for L2 functions was postulated by Nikolai Luzin in 1915, and the problem was known as Luzin's conjecture (up until its proof by Carleson). Kolmogorov showed that the analogue of Carleson's result for L1 is false by finding such a function whose Fourier series diverges almost everywhere (improved slightly in 1926 to diverging everywhere). Before Carleson's result, the best known estimate for the partial sums $s_n$ of the Fourier series of a function in Lp was -$$ - s_n(x)=o( \log (n)^{1/p})\text{ almost everywhere}, -$$ - -proved by Kolmogorov–Seliverstov–Plessner for p = 2, by G. H. Hardy for p = 1, and by Littlewood–Paley for p > 1.
This result had not been improved for several decades, leading some experts to suspect that it was the best possible and that Luzin's conjecture was false. Kolmogorov's counterexample in L1 was unbounded in any interval, but it was thought to be only a matter of time before a continuous counterexample was found. Carleson said in an interview with Raussen that he started by trying to find a continuous counterexample and at one point thought he had a method that would construct one, but realized eventually that his approach could not work. He then tried instead to prove Luzin's conjecture since the failure of his counterexample convinced him that it was probably true. - -Carleson's original proof is exceptionally hard to read, and although several authors have simplified the argument there are still no easy proofs of his theorem. - -Expositions of Carleson's original paper include those of Kahane, Mozzochi, Jørsboe, and Arias de Reyna. - -Charles Fefferman published a new proof of Hunt's extension which proceeded by bounding a maximal operator. This, in turn, inspired a much simplified proof of the L2 result by Michael Lacey and Christoph Thiele, explained in more detail in Lacey. The books of Fremlin and Grafakos also give proofs of Carleson's theorem. - -Katznelson showed that for any set of measure 0 there is a continuous periodic function whose Fourier series diverges at all points of the set (and possibly elsewhere). When combined with Carleson's theorem this shows that there is a continuous function whose Fourier series diverges at all points of a given set of reals if and only if the set has measure 0. - -The extension of Carleson's theorem to Lp for p > 1 was stated to be a "rather obvious" extension of the case p = 2 in Carleson's paper, and was proved by Hunt. Carleson's result was improved further by Sjölin to the space Llog+(L)log+log+(L) and by Antonov to the space Llog+(L)log+log+log+(L). (Here log+(L) is log(L) if L>1 and 0 otherwise, and if φ is a function then φ(L) stands for the space of functions f such that φ(|f(x)|) is integrable.) - -Konyagin improved Kolmogorov's counterexample by finding functions with everywhere-divergent Fourier series in a space slightly larger than Llog+(L)1/2. - -One can ask if there is in some sense a largest natural space of functions whose Fourier series converge almost everywhere. The simplest candidate for such a space that is consistent with the results of Antonov and Konyagin is Llog+(L). - -The extension of Carleson's theorem to Fourier series and integrals in several variables is made more complicated as there are many different ways in which one can sum the coefficients; for example, one can sum over increasing balls, or increasing rectangles. Convergence of rectangular partial sums (and indeed general polygonal partial sums) follows from the one-dimensional case, but the spherical summation problem is still open for L2. - -The Carleson operator C is the non-linear operator defined by -$$ - Cf(x) = \sup_N\left|\int_{-N}^N \hat f(y)e^{2\pi i xy} dy\right| -$$ - -It is relatively easy to show that the Carleson-Hunt theorem follows from the boundedness of the Carleson operator from Lp(R) to itself for 1 < p < ∞.
diff --git a/wiki/wikipedia/2930.txt b/wiki/wikipedia/2930.txt deleted file mode 100644 index e815ecf269d082887527711dfa347e607944d60d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2930.txt +++ /dev/null @@ -1,7 +0,0 @@ -In computability theory, the UTM theorem, or universal Turing machine theorem, is a basic result about Gödel numberings of the set of computable functions. It affirms the existence of a computable universal function, which is capable of calculating any other computable function. The universal function is an abstract version of the universal Turing machine, thus the name of the theorem. - -Roger's equivalence theorem provides a characterization of the Gödel numbering of the computable functions in terms of the smn theorem and the UTM theorem. - -The theorem states that a partial computable function u of two variables exists such that, for every computable function f of one variable, an e exists such that $f(x) \simeq u(e,x)$ for all x. This means that, for each x, either f(x) and u(e,x) are both defined and are equal, or are both undefined. - -The theorem thus shows that, defining φe(x) as u(e, x), the sequence φ1, φ2, … is an enumeration of the partial computable functions. The function $u$ in the statement of the theorem is called a universal function. diff --git a/wiki/wikipedia/2931.txt b/wiki/wikipedia/2931.txt deleted file mode 100644 index 39f7ea885ca4df70a142e8e2f4c691843d98c4d8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2931.txt +++ /dev/null @@ -1,27 +0,0 @@ -In graph theory, Vizing's conjecture concerns a relation between the domination number and the cartesian product of graphs. - -This conjecture was first stated by , and states that, if γ(G) denotes the minimum number of vertices in a dominating set for G, then -$$ - \gamma(G\BoxH) \ge \gamma(G)\gamma(H). -$$ - -Gravier conjectured a similar bound for the domination number of the tensor product of graphs; however, a counterexample was found by Klavžar. Since Vizing proposed his conjecture, many mathematicians have worked on it, with partial results described below. For a more detailed overview of these results, see Brešar. - -A 4-cycle C4 has domination number two: any single vertex only dominates itself and its two neighbors, but any pair of vertices dominates the whole graph. The product $C_4 \BoxC_4$ is a four-dimensional hypercube graph; it has 16 vertices, and any single vertex can only dominate itself and four neighbors, so three vertices could only dominate 15 of the 16 vertices. Therefore, at least four vertices are required to dominate the entire graph, the bound given by Vizing's conjecture. - -It is possible for the domination number of a product to be much larger than the bound given by Vizing's conjecture. For instance, for a star K1,n, its domination number γ(K1,n) is one: it is possible to dominate the entire star with a single vertex at its hub. Therefore, for the graph $ G = K_{1,n}\BoxK_{1,n} $ formed as the product of two stars, Vizing's conjecture states only that the domination number should be at least 1 × 1 = 1. However, the domination number of this graph is actually much higher. It has n2 + 2n + 1 vertices: n2 formed from the product of a leaf in both factors, 2n from the product of a leaf in one factor and the hub in the other factor, and one remaining vertex formed from the product of the two hubs. 
Each leaf-hub product vertex in G dominates exactly n of the leaf-leaf vertices, so n leaf-hub vertices are needed to dominate all of the leaf-leaf vertices. However, no leaf-hub vertex dominates any other such vertex, so even after n leaf-hub vertices are chosen to be included in the dominating set, there remain n more undominated leaf-hub vertices, which can be dominated by the single hub-hub vertex. Thus, the domination number of this graph is $\gamma(K_{1,n} \Box K_{1,n}) = n+1$, far higher than the trivial bound of one given by Vizing's conjecture. - -There exist infinite families of graph products for which the bound of Vizing's conjecture is exactly met. For instance, if G and H are both connected graphs, each having at least four vertices and having exactly twice as many total vertices as their domination numbers, then $\gamma(G \Box H)=\gamma(G)\gamma(H)$. The graphs G and H with this property consist of the four-vertex cycle $C_4$ together with the rooted products of a connected graph and a single edge. - -Clearly, the conjecture holds when either G or H has domination number one, for the product contains an isomorphic copy of the other factor, dominating which requires at least γ(G)γ(H) vertices. - -Vizing's conjecture is also known to hold for cycles and for graphs with domination number two. - -Clark proved that the domination number of the product is at least half as large as the conjectured bound, for all G and H. - -Vizing observed that -$$ - \gamma(G \Box H) \le \min\{\gamma(G) |V(H)|,\gamma(H)|V(G)|\}. -$$ - -A dominating set meeting this bound may be formed as the cartesian product of a dominating set in one of G or H with the set of all vertices in the other graph. diff --git a/wiki/wikipedia/2932.txt b/wiki/wikipedia/2932.txt deleted file mode 100644 index d4fcb79c63fd4aa5f9617513b4fa34867071ef0e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2932.txt +++ /dev/null @@ -1,31 +0,0 @@ -In mathematics, Lindelöf's theorem is a result in complex analysis named after the Finnish mathematician Ernst Leonard Lindelöf. It states that a holomorphic function on a half-strip in the complex plane that is bounded on the boundary of the strip and does not grow "too fast" in the unbounded direction of the strip must remain bounded on the whole strip. The result is useful in the study of the Riemann zeta function, and is a special case of the Phragmén-Lindelöf principle. Also, see Hadamard three-lines theorem. - -Let Ω be a half-strip in the complex plane: -$$ -\Omega = \{ z \in \mathbb{C} | x_1 \leq \mathrm{Re} (z) \leq x_2 \text{ and } \mathrm{Im} (z) \geq y_0 \} \subsetneq \mathbb{C}. -$$ - -Suppose that ƒ is holomorphic (i.e. analytic) on Ω and that there are constants M, A and B such that -$$ -| f(z) | \leq M \text{ for all } z \in \partial \Omega -$$ - -and -$$ -\frac{|f(x + iy)|}{y^{A}} \leq B \text{ for all } x + iy \in \Omega. -$$ - -Then f is bounded by M on all of Ω: -$$ -| f(z) | \leq M \text{ for all } z \in \Omega. -$$ - -Fix a point $\xi=\sigma+i\tau$ inside $\Omega$. Choose $\lambda>-y_0$, an integer $N>A$ and $y_1>\tau$ large enough such that -$$ -\frac {By_1^A}{(y_1 + \lambda)^N}\le \frac {M}{(y_0+\lambda)^N} -$$. Applying the maximum modulus principle to the function $g(z)=\frac {f(z)}{(z+i\lambda)^N}$ on - -the rectangular area $\{z \in \mathbb{C} | x_1 \leq \mathrm{Re} (z) \leq x_2 \text{ and } y_0 \leq \mathrm{Im} (z) \leq y_1 \}$ we obtain $|g(\xi)|\le \frac{M}{(y_0+\lambda)^N}$, that is, $|f(\xi)|\le M\left(\frac{|\xi+i\lambda|}{y_0+\lambda}\right)^N$.
Letting $\lambda \rightarrow +\infty$ yields -$$ -|f(\xi)| \le M -$$ as required. diff --git a/wiki/wikipedia/2933.txt b/wiki/wikipedia/2933.txt deleted file mode 100644 index 874fb3fb81d3c8bba5ec257541d6ffbddd9eb668..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2933.txt +++ /dev/null @@ -1,13 +0,0 @@ -In computer science, GSAT and WalkSAT are local search algorithms to solve Boolean satisfiability problems. - -Both algorithms work on formulae in Boolean logic that are in, or have been converted into, conjunctive normal form. They start by assigning a random value to each variable in the formula. If the assignment satisfies all clauses, the algorithm terminates, returning the assignment. Otherwise, a variable is flipped and the above is then repeated until all the clauses are satisfied. WalkSAT and GSAT differ in the methods used to select which variable to flip. - -GSAT makes the change which minimizes the number of unsatisfied clauses in the new assignment, or with some probability picks a variable at random. - -WalkSAT first picks a clause which is unsatisfied by the current assignment, then flips a variable within that clause. The clause is picked at random among unsatisfied clauses. The variable is picked that will result in the fewest previously satisfied clauses becoming unsatisfied, with some probability of picking one of the variables at random. When picking at random, WalkSAT is guaranteed a probability of at least one over the number of variables in the clause of fixing a currently incorrect assignment. When picking a guessed-to-be-optimal variable, WalkSAT has to do less calculation than GSAT because it is considering fewer possibilities. - -The algorithm may restart with a new random assignment if no solution has been found for too long, as a way of getting out of local minima of numbers of unsatisfied clauses. - -Many versions of GSAT and WalkSAT exist. WalkSAT has been proven particularly useful in solving satisfiability problems produced by conversion from automated planning problems. The approach to planning that converts planning problems into Boolean satisfiability problems is called satplan. - -MaxWalkSAT is a variant of WalkSAT designed to solve the weighted satisfiability problem, in which each clause has a weight associated with it, and the goal is to find an assignment—one which may or may not satisfy the entire formula—that maximizes the total weight of the clauses satisfied by that assignment. diff --git a/wiki/wikipedia/2934.txt b/wiki/wikipedia/2934.txt deleted file mode 100644 index 8766567df6f809ac49c36d46ede2bac4aefd29b6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2934.txt +++ /dev/null @@ -1,25 +0,0 @@ -Tupper's self-referential formula is a formula that visually represents itself when graphed at a specific location in the (x, y) plane. - -The formula was defined by Jeff Tupper and appears as an example in Tupper's 2001 SIGGRAPH paper on reliable two-dimensional computer graphing algorithms. This paper discusses methods related to the GrafEq formula-graphing program developed by Tupper. - -Although the formula is called "self-referential", Tupper did not name it as such. - -The formula is an inequality defined as: - -\frac{1}{2} < \left\lfloor \mathrm{mod}\left(\left\lfloor \frac{y}{17} \right\rfloor 2^{-17 \lfloor x \rfloor - \mathrm{mod}\left(\lfloor y\rfloor, 17\right)},2\right)\right\rfloor - -or, as plaintext, - -1/2 < floor(mod(floor(y/17)*2^(-17*floor(x)-mod(floor(y),17)),2))
- -Let $k$ equal the following 543-digit integer: - -960939379918958884971672962127852754715004339660129306651505519271702802395266424689642842174350718121267153782770623355993237280874144307891325963941337723487857735749823926629715517173716995165232890538221612403238855866184013235585136048828693337902491454229288667081096184496091705183454067827731551705405381627380967602565625016981482083418783163849115590225610003652351370343874461848378737238198224849863465033159410054974700593138339226497249461751545728366702369745461014655997933798537483143786841806593422227898388722980000748404719 - -If one graphs the set of points $(x, y)$ in $0 \le x < 106$ and $k \le y < k + 17$ satisfying the inequality given above, the resulting graph looks like this (the axes in this plot have been reversed, otherwise the picture would be upside-down and mirrored): - -The formula is a general-purpose method of decoding a bitmap stored in the constant k, and it could actually be used to draw any other image. When applied to the unbounded positive range 0 ≤ y, the formula tiles a vertical swath of the plane with a pattern that contains all possible 17-pixel-tall bitmaps. One horizontal slice of that infinite bitmap depicts the drawing formula itself, but this is not remarkable, since other slices depict all other possible formulae that might fit in a 17-pixel-tall bitmap. Tupper has created extended versions of his original formula that rule out all but one slice. - -The constant k is a simple monochrome bitmap image of the formula treated as a binary number and multiplied by 17. If k is divided by 17, the least significant bit encodes the upper-right corner (k, 0); the 17 least significant bits encode the rightmost column of pixels; the next 17 least significant bits encode the 2nd-rightmost column, and so on. - -It fundamentally describes a way to plot points on a two dimensional surface. The value of k is the binary number that forms the plot in base 10. The following plot demonstrates the addition of different values of k. In the fourth subplot the k value of "AFGP" and "Aesthetic Function Graph" are added to get the resultant graph, where both the texts can be seen with some distortion, due to the effects of binary addition. The information regarding the shape of the plot is stored within k. diff --git a/wiki/wikipedia/2935.txt b/wiki/wikipedia/2935.txt deleted file mode 100644 index 7ee055adbff3ef157aff968a393eb8077a9ecc4f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2935.txt +++ /dev/null @@ -1,27 +0,0 @@ -__NOTOC__ - -In mathematics, De Gua's theorem is a three-dimensional analog of the Pythagorean theorem named after Jean Paul de Gua de Malves. It states that if a tetrahedron has a right-angle corner (like the corner of a cube), then the square of the area of the face opposite the right-angle corner is the sum of the squares of the areas of the other three faces: -$$ - A_{ABC}^2 = A_{\color {blue} ABO}^2+A_{\color {green} ACO}^2+A_{\color {red} BCO}^2 -$$ - -The Pythagorean theorem and de Gua's theorem are special cases (n = 2, 3) of a general theorem about n-simplices with a right-angle corner. This, in turn, is a special case of a yet more general theorem by Donald R. Conant and William A. Beyer, which can be stated as follows. - -Let U be a measurable subset of a k-dimensional affine subspace of $\mathbb{R}^n$ (so $k \le n$). 
For any subset $I \subseteq \{ 1, \ldots, n \}$ with exactly k elements, let $U_I$ be the orthogonal projection of U onto the linear span of $e_{i_1}, \ldots, e_{i_k}$, where $I = \{i_1, \ldots, i_k\}$ and $e_1, \ldots, e_n$ is the standard basis for $\mathbb{R}^n$. Then -$$ -\mbox{vol}_k^2(U) = \sum_I \mbox{vol}_k^2(U_I), -$$ - -where $\mbox{vol}_k(U)$ is the k-dimensional volume of U and the sum is over all subsets $I \subseteq \{ 1, \ldots, n \}$ with exactly k elements. - -De Gua's theorem and its generalisation (above) to n-simplices with right-angle corners correspond to the special case where k = n-1 and U is an (n-1)-simplex in $\mathbb{R}^n$ with vertices on the co-ordinate axes. For example, suppose n = 3, k = 2 and U is the triangle $\triangle ABC$ in $\mathbb{R}^3$ with vertices A, B and C lying on the $x_1$-, $x_2$- and $x_3$-axes, respectively. The subsets $I$ of $\{ 1, 2, 3 \}$ with exactly 2 elements are $\{ 2,3 \}$, $\{ 1,3 \}$ and $\{ 1,2 \}$. By definition, $U_{\{ 2,3 \}}$ is the orthogonal projection of $U = \triangle ABC$ onto the $x_2 x_3$-plane, so $U_{\{ 2,3 \}}$ is the triangle $\triangle OBC$ with vertices O, B and C, where O is the origin of $\mathbb{R}^3$. Similarly, $U_{\{ 1,3 \}} = \triangle AOC$ and $U_{\{ 1,2 \}} = \triangle ABO$, so the Conant–Beyer theorem says - -\mbox{vol}_2^2(\triangle ABC) = \mbox{vol}_2^2(\triangle OBC) + - -\mbox{vol}_2^2(\triangle AOC) + \mbox{vol}_2^2(\triangle ABO), - -which is de Gua's theorem. - -The generalisation of de Gua's theorem to n-simplices with right-angle corners can also be obtained as a special case from the Cayley–Menger determinant formula. - -Jean Paul de Gua de Malves (1713-85) published the theorem in 1783, but around the same time a slightly more general version was published by another French mathematician, Charles de Tinseau d'Amondans (1746-1818), as well. However, the theorem had also been known much earlier to Johann Faulhaber (1580-1635) and René Descartes (1596-1650). diff --git a/wiki/wikipedia/2936.txt b/wiki/wikipedia/2936.txt deleted file mode 100644 index 43208dcc1ca9433cbef064875d0491a8f4f04161..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2936.txt +++ /dev/null @@ -1,8 +0,0 @@ -Artstein's theorem states that a nonlinear dynamical system in the control-affine form -$$ -\dot{\mathbf{x}} = \mathbf{f(x)} + \sum_{i=1}^m \mathbf{g}_i(\mathbf{x})u_i -$$ - -has a differentiable control-Lyapunov function if and only if it admits a regular stabilizing feedback u(x), that is, a locally Lipschitz function on $\mathbb{R}^n\setminus\{0\}$. - -The original proof of Zvi Artstein proceeds by a nonconstructive argument. In 1989 Eduardo D. Sontag provided a constructive version of this theorem explicitly exhibiting the feedback. diff --git a/wiki/wikipedia/2937.txt b/wiki/wikipedia/2937.txt deleted file mode 100644 index ce551158ac7963aa14d608f54f641dd43044e570..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2937.txt +++ /dev/null @@ -1,14 +0,0 @@ -In mathematics, the Chvátal–Sankoff constants are mathematical constants that describe the lengths of longest common subsequences of random strings. Although the existence of these constants has been proven, their exact values are unknown. They are named after Václav Chvátal and David Sankoff, who began investigating them in the mid-1970s. - -There is one Chvátal–Sankoff constant $\gamma_k$ for each positive integer k, where k is the number of characters in the alphabet from which the random strings are drawn.
The sequence of these numbers grows inversely proportionally to the square root of k. - -A common subsequence of two strings S and T is a string whose characters appear in the same order (not necessarily consecutively) both in S and in T. The problem of computing a longest common subsequence has been well studied in computer science. It can be solved in polynomial time by dynamic programming; this basic algorithm has additional speedups for small alphabets (the Method of Four Russians) and for strings with few differences. - -One motivation for studying the longest common subsequences of random strings, given already by Chvátal and Sankoff, is to calibrate the computations of longest common subsequences on strings that are not random. If such a computation returns a subsequence that is significantly longer than what would be obtained at random, one might infer from this result that the match is meaningful or significant. In the limit of large alphabet sizes, it is known that -$$ -\lim_{k\to\infty} \gamma_k \sqrt k = 2. -$$ - -There has also been research into the distribution of values of the longest common subsequence, generalizing the study of the expectation of this value. For instance, the standard deviation of the length of the longest common subsequence of random strings of length n is known to be proportional to the square root of n. - -One complication in performing this sort of analysis is that the random variables describing whether the characters at different pairs of positions match each other are not independent of each other. For a more mathematically tractable simplification of the longest common subsequence problem, in which the allowed matches between pairs of symbols are not controlled by whether those symbols are equal to each other but instead by independent random variables with probability 1/k of being 1 and (k - 1)/k of being 0, it has been shown that the distribution of the longest common subsequence length is controlled by the Tracy–Widom distribution. diff --git a/wiki/wikipedia/2938.txt b/wiki/wikipedia/2938.txt deleted file mode 100644 index 5fe53c18900650d9a2731ffd1ba7edc84a7161d7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2938.txt +++ /dev/null @@ -1,15 +0,0 @@ -In abstract algebra, the fundamental theorem on homomorphisms, also known as the fundamental homomorphism theorem, relates the structure of two objects between which a homomorphism is given, and of the kernel and image of the homomorphism. - -The homomorphism theorem is used to prove the isomorphism theorems. - -Given two groups G and H and a group homomorphism f : G → H, let K be a normal subgroup in G and φ the natural surjective homomorphism G → G/K (where G/K is the quotient group of G by K). If K is a subset of ker(f) then there exists a unique homomorphism h: G/K → H such that f = h∘φ. - -In other words, the natural projection φ is universal among homomorphisms on G that map K to the identity element. - -The situation is described by the following commutative diagram: - -h is injective if and only if K = ker(f). Therefore, by setting K = ker(f) we immediately get the first isomorphism theorem. - -We can write the statement of the fundamental theorem on homomorphisms of groups as "every homomorphic image of a group is isomorphic to a quotient group". - -Similar theorems are valid for monoids, vector spaces, modules, and rings.
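As a concrete finite illustration of the group case (our own sketch, not part of the original article): take G = Z_12, H = Z_4, f(x) = x mod 4, and K = ker(f). The induced map h on cosets is well defined and injective, recovering the first isomorphism theorem.

```python
G = list(range(12))
f = lambda x: x % 4
K = [x for x in G if f(x) == 0]          # kernel of f: {0, 4, 8}

# Cosets of K in G, each represented as a frozenset.
cosets = {frozenset((x + k) % 12 for k in K) for x in G}

# h maps the coset of x to f(x); well-definedness means f is constant
# on each coset, which is exactly what we verify here.
for c in cosets:
    assert len({f(x) for x in c}) == 1

# Here K = ker(f), so h is injective and G/K is isomorphic to im(f).
h = {c: f(min(c)) for c in cosets}
assert len(set(h.values())) == len(cosets)
print(sorted(h.values()))  # [0, 1, 2, 3] -- the image, a copy of Z_4
```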
diff --git a/wiki/wikipedia/2939.txt b/wiki/wikipedia/2939.txt deleted file mode 100644 index fd692ab045873e380864fd0ec4ad9f17a744d088..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2939.txt +++ /dev/null @@ -1,63 +0,0 @@ -The KeY tool is used in formal verification of Java programs. It accepts specifications written in the Java Modeling Language that are attached to Java source files. These are transformed into theorems of dynamic logic and then compared against program semantics that are likewise defined in terms of dynamic logic. KeY is notable in that it supports both interactive (i.e. by hand) and fully automated correctness proofs. Failed proof attempts can be used for more efficient debugging or verification-based testing. There have been several extensions to KeY in order to apply it to the verification of C programs or hybrid systems. KeY is jointly developed by Karlsruhe Institute of Technology, Germany; Technische Universität Darmstadt, Germany; and Chalmers University of Technology in Gothenburg, Sweden, and is licensed under the GPL. - -The usual user input to KeY consists of a Java source file with annotations in JML. Both are translated to KeY's internal representation, dynamic logic. From the given specifications, several proof obligations arise which are to be discharged, i.e. a proof has to be found. To this end, the program is symbolically executed with the resulting changes to program variables stored in so-called updates. Once the program has been processed completely, there remains a first-order logic proof obligation. At the heart of the KeY system lies a first-order theorem prover based on sequent calculus, which is used to close the proof. Inference rules are captured in so-called taclets, which are written in a simple dedicated language for describing changes to a sequent. - -The theoretical foundation of KeY is a formal logic called Java Card DL. DL stands for Dynamic Logic. It is a version of a first-order dynamic logic tailored to Java Card programs. As such, it for example allows statements (formulas) like $\phi \rightarrow [\alpha]\psi$, which intuitively says that the post-condition $\psi$ must hold in all program states reachable by executing the Java Card program $\alpha$ in any state that satisfies the pre-condition $\phi$. This is equivalent to $\{\phi\}\alpha\{\psi\}$ in Hoare calculus if $\phi$ and $\psi$ are purely first order. Dynamic logic, however, extends Hoare logic in that formulas may contain nested program modalities such as $[\alpha]$, or that quantification over formulas which contain modalities is possible. There is also a dual modality $\langle\alpha\rangle$ which includes termination. This dynamic logic can be seen as a special multi-modal logic (with an infinite number of modalities) where for each Java block $\alpha$ there are modalities $[\alpha]$ and $\langle\alpha\rangle$. - -At the heart of the KeY system lies a first-order theorem prover based on a sequent calculus. A sequent is of the form $\Gamma \vdash \Delta$ where $\Gamma$ (assumptions) and $\Delta$ (propositions) are sets of formulas with the intuitive meaning that $\bigwedge_{\gamma\in\Gamma} \gamma \rightarrow \bigvee_{\delta\in\Delta}\delta$ holds true. By means of deduction, an initial sequent representing the proof obligation is shown to be constructible from just fundamental first-order axioms (such as equality $e\ \dot{=}\ e$). - -In the course of this, program modalities are eliminated by symbolic execution.
For instance, the formula $x\ \dot{=}\ 0 \rightarrow [x++;]x\ \dot{=}\ 1$ is logically equivalent to $x\ \dot{=}\ 0 \rightarrow x\ \dot{=}\ 0$. As this example shows, symbolic execution in dynamic logic is very similar to calculating weakest preconditions. Both $[\alpha]\psi$ and $wp(\alpha,\psi)$ essentially denote the same thing – with two exceptions: Firstly, $wp$ is a function of some meta-calculus while $[\alpha]\psi$ really is a formula of the given calculus. Secondly, symbolic execution runs through the program forward just as an actual execution would. To save intermediate results of assignments, KeY introduces a concept called updates, which are similar to substitutions but are only applied once the program modality has been fully eliminated. Syntactically, updates consist of parallel (side-effect free) assignments written in curly braces in front of a modality. An example of symbolic execution with updates: $[x= 3; x=x+1;]x\ \dot{=}\ 4$ is transformed to $\{x:= 3\}[x=x+1;]x\ \dot{=}\ 4$ in the first step and to $\{x:= 4\}[]x\ \dot{=}\ 4$ in the second step. The modality then is empty and "backwards application" of the update to the postcondition yields a precondition where $x$ could take any value. - -Suppose one wants to prove that the following method calculates the product of some non-negative integers $x$ and $y$. - - - -int foo (int x, int y) { - -int z = 0; - -while (y > 0) - -if (y % 2 == 0) { - -x = x*2; - -y = y/2; - -} else { - -y = y/2; - -z = z+x; - -x = x*2; - -} - -return z; - -} - - - -One thus starts the proof with the premise $x \geq 0 \land y \geq 0$ and the to-be-shown conclusion $z\ \dot{=}\ x \cdot y$. Note that tableaux of sequent calculi are usually written "upside-down", i.e., the starting sequent appears at the bottom and deduction steps go upwards. The proof can be seen in the figure on the right. - -The Symbolic Execution Debugger visualizes the control flow of a program as a symbolic execution tree that contains all feasible execution paths through the program up to a certain point. It is provided as a plugin to the Eclipse development platform. - -KeY is usable as a model-based testing tool that can generate unit tests for Java programs. The model from which test data and the test case are derived consists of a formal specification (provided in JML) and a symbolic execution tree of the implementation under test which is computed by the KeY system. - -KeY is free software written in Java and licensed under GPL. It can be downloaded from the project website in source; currently there are no pre-compiled binaries available. As another possibility, KeY can be executed directly via Java web start without the need for compilation and installation. - -KeY-Hoare is built on top of KeY and features a Hoare calculus with state updates. State updates are a means of describing state transitions in a Kripke structure. This calculus can be seen as a subset of the one that is used in the main branch of KeY. Due to the simplicity of the Hoare calculus, this implementation is essentially meant to exemplify formal methods in undergraduate classes. - -KeYmaera (previously called HyKeY) is a deductive verification tool for hybrid systems based on a calculus for the differential dynamic logic dL. - -It extends the KeY tool with computer algebra systems like Mathematica and corresponding algorithms and proof strategies such that it can be used for practical verification of hybrid systems.
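Returning to the update mechanism described above: the following toy sketch (our own illustrative Python using sympy for substitution; it is not KeY code, and all names are hypothetical) mimics how sequential assignments are folded into a single parallel update that is applied to the postcondition only once the modality is empty.

```python
import sympy as sym

def symbolic_execute(assignments, postcondition):
    """Toy version of the update mechanism: execute assignments forward,
    accumulating one parallel update, then apply the final update to the
    postcondition ("backwards application")."""
    update = {}  # maps program variables to their symbolic values
    for var, expr in assignments:
        # Evaluate the right-hand side under the update accumulated so far.
        update[var] = expr.subs(update, simultaneous=True)
    return postcondition.subs(update, simultaneous=True)

x = sym.Symbol('x')
# The example from the text: [x = 3; x = x + 1;] x = 4
pre = symbolic_execute([(x, sym.Integer(3)), (x, x + 1)], sym.Eq(x, 4))
print(pre)  # True: the precondition holds for any initial value of x
```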
- -KeYmaera has been developed at the University of Oldenburg and Carnegie Mellon University. The name of the tool was chosen as a homophone to Chimera, the hybrid animal from ancient Greek mythology. - -KeYmaera X, developed at Carnegie Mellon University, is the successor of KeYmaera. It has been completely rewritten. - -KeY for C is an adaptation of the KeY System to MISRA C, a subset of the C programming language. This variant is no longer supported. - -There is also an adaptation to use KeY for the symbolic execution of Abstract State Machines, which was developed at ETH Zürich. This variant is no longer supported; more information can be found on the weblink below. diff --git a/wiki/wikipedia/294.txt b/wiki/wikipedia/294.txt deleted file mode 100644 index e31320b2dee9171b0ef83d667903910b246b6340..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/294.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, Zahorski's theorem is a theorem of real analysis. It states that a necessary and sufficient condition for a subset of the real line to be the set of points of non-differentiability of a continuous real-valued function, is that it be the union of a Gδ set and a ${G_\delta}_\sigma$ set of zero measure. - -This result was proved by Zygmunt Zahorski in 1939 and first published in 1941. diff --git a/wiki/wikipedia/2940.txt b/wiki/wikipedia/2940.txt deleted file mode 100644 index e51531a9f24545b7d801a13282c4f32972824164..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2940.txt +++ /dev/null @@ -1,145 +0,0 @@ -In mathematics, specifically in commutative algebra, the elementary symmetric polynomials are one type of basic building block for symmetric polynomials, in the sense that any symmetric polynomial can be expressed as a polynomial in elementary symmetric polynomials. That is, any symmetric polynomial P is given by an expression involving only additions and multiplication of constants and elementary symmetric polynomials. There is one elementary symmetric polynomial of degree d in n variables for each nonnegative integer d ≤ n, and it is formed by adding together all distinct products of d distinct variables. - -The elementary symmetric polynomials in n variables X1, …, Xn, written ek(X1, …, Xn) for k = 0, 1, …, n, are defined by - -\begin{align} - -e_0 (X_1, X_2, \dots,X_n) &= 1,\\[10pt] - -e_1 (X_1, X_2, \dots,X_n) &= \sum_{1 \leq j \leq n} X_j,\\ - -e_2 (X_1, X_2, \dots,X_n) &= \sum_{1 \leq j < k \leq n} X_j X_k,\\ - -e_3 (X_1, X_2, \dots,X_n) &= \sum_{1 \leq j < k < l \leq n} X_j X_k X_l,\\ - -\end{align} - -and so forth, ending with -$$ - e_n (X_1, X_2, \dots,X_n) = X_1 X_2 \cdots X_n. -$$ - -In general, for k ≥ 0 we define -$$ - e_k (X_1 , \ldots , X_n )=\sum_{1\le j_1 < j_2 < \cdots < j_k \le n} X_{j_1} \dotsm X_{j_k}, -$$ - -so that ek(X1, …, Xn) = 0 if k > n. - -Thus, for each non-negative integer k less than or equal to n there exists exactly one elementary symmetric polynomial of degree k in n variables. To form the one that has degree k, we take the sum of all products of k-subsets of the n variables. (By contrast, if one performs the same operation using multisets of variables, that is, taking variables with repetition, one arrives at the complete homogeneous symmetric polynomials.)
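Computationally the definition is direct. A minimal sketch (our own illustrative code, with hypothetical names) evaluates e_k at a point by summing over all k-element subsets of the variables:

```python
from itertools import combinations
from math import prod

def elementary_symmetric(k, xs):
    # e_k(x_1, ..., x_n): sum over all products of k distinct variables;
    # by convention e_0 = 1, and e_k = 0 for k > n (empty sum).
    return sum(prod(c) for c in combinations(xs, k))

xs = [2, 3, 5, 7]
print([elementary_symmetric(k, xs) for k in range(6)])
# [1, 17, 101, 247, 210, 0]
```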
- -Given an integer partition (that is, a finite non-increasing sequence of positive integers) λ = (λ1, …, λm), one defines the symmetric polynomial eλ(X1, …, Xn), also called an elementary symmetric polynomial, by -$$ -e_\lambda (X_1, \dots,X_n) = e_{\lambda_1}(X_1, \dots, X_n) \cdot e_{\lambda_2}(X_1, \dots, X_n) \cdots e_{\lambda_m}(X_1, \dots, X_n) -$$. - -Sometimes the notation σk is used instead of ek. - -The following lists the n elementary symmetric polynomials for the first four positive values of n. (In every case, e0 = 1 is also one of the polynomials.) - -For n = 1: -$$ -e_1(X_1) = X_1. -$$ - -For n = 2: - -\begin{align} - -e_1(X_1,X_2) &= X_1 + X_2,\\ - -e_2(X_1,X_2) &= X_1X_2.\\ - -\end{align} - -For n = 3: - -\begin{align} - -e_1(X_1,X_2,X_3) &= X_1 + X_2 + X_3,\\ - -e_2(X_1,X_2,X_3) &= X_1X_2 + X_1X_3 + X_2X_3,\\ - -e_3(X_1,X_2,X_3) &= X_1X_2X_3.\\ - -\end{align} - -For n = 4: - -\begin{align} - -e_1(X_1,X_2,X_3,X_4) &= X_1 + X_2 + X_3 + X_4,\\ - -e_2(X_1,X_2,X_3,X_4) &= X_1X_2 + X_1X_3 + X_1X_4 + X_2X_3 + X_2X_4 + X_3X_4,\\ - -e_3(X_1,X_2,X_3,X_4) &= X_1X_2X_3 + X_1X_2X_4 + X_1X_3X_4 + X_2X_3X_4,\\ - -e_4(X_1,X_2,X_3,X_4) &= X_1X_2X_3X_4.\\ - -\end{align} - -The elementary symmetric polynomials appear when we expand a linear factorization of a monic polynomial: we have the identity -$$ -\prod_{j=1}^n ( \lambda - X_j)=\lambda^n - e_1(X_1,\ldots,X_n)\lambda^{n-1} + e_2(X_1,\ldots,X_n)\lambda^{n-2} + \cdots +(-1)^n e_n(X_1,\ldots,X_n). -$$ - -That is, when we substitute numerical values for the variables X1, X2, …, Xn, we obtain the monic univariate polynomial (with variable λ) whose roots are the values substituted for X1, X2, …, Xn and whose coefficients are – up to their sign – the elementary symmetric polynomials. These relations between the roots and the coefficients of a polynomial are called Vieta's formulas. - -The characteristic polynomial of a square matrix is an example of application of Vieta's formulas. The roots of this polynomial are the eigenvalues of the matrix. When we substitute these eigenvalues into the elementary symmetric polynomials, we obtain – up to their sign – the coefficients of the characteristic polynomial, which are invariants of the matrix. In particular, the trace (the sum of the elements of the diagonal) is the value of e1, and thus the sum of the eigenvalues. Similarly, the determinant is – up to the sign – the constant term of the characteristic polynomial, i.e. the value of en. Thus the determinant of a square matrix is the product of the eigenvalues. - -The set of elementary symmetric polynomials in n variables generates the ring of symmetric polynomials in n variables. More specifically, the ring of symmetric polynomials with integer coefficients equals the integral polynomial ring ℤ[e1(X1, …, Xn), …, en(X1, …, Xn)]. (See below for a more general statement and proof.) This fact is one of the foundations of invariant theory. For another system of symmetric polynomials with the same property see Complete homogeneous symmetric polynomials, and for a system with a similar, but slightly weaker, property see Power sum symmetric polynomial. - -== Fundamental theorem of symmetric polynomials == - -For any commutative ring A, denote the ring of symmetric polynomials in the variables X1, …, Xn with coefficients in A by A[X1, …, Xn]Sn. This is a polynomial ring in the n elementary symmetric polynomials ek(X1, …, Xn) for k = 1, …, n. 
(Note that e0 is not among these polynomials; since e0 = 1, it cannot be a member of any set of algebraically independent elements.) - -This means that every symmetric polynomial P(X1, …, Xn) ∈ A[X1, …, Xn]Sn has a unique representation -$$ - P(X_1,\ldots, X_n)=Q\big(e_1(X_1 , \ldots ,X_n), \ldots, e_n(X_1 , \ldots ,X_n)\big) -$$ - -for some polynomial Q ∈ A[Y1, …, Yn]. Another way of saying the same thing is that the ring homomorphism that sends Yk to ek(X1, …, Xn) for k = 1, …, n defines an isomorphism between A[Y1, …, Yn] and A[X1, …, Xn]Sn. - -The theorem may be proved for symmetric homogeneous polynomials by a double induction with respect to the number of variables n and, for fixed n, with respect to the degree of the homogeneous polynomial. The general case then follows by splitting an arbitrary symmetric polynomial into its homogeneous components (which are again symmetric). - -In the case n = 1 the result is trivial because every polynomial in one variable is automatically symmetric. - -Assume now that the theorem has been proved for all polynomials in m < n variables and all symmetric polynomials in n variables with degree < d. Every homogeneous symmetric polynomial P in A[X1, …, Xn]Sn can be decomposed as a sum of homogeneous symmetric polynomials -$$ - P(X_1,\ldots,X_n)= P_{\text{lacunary}} (X_1,\ldots,X_n) + X_1 \cdots X_n \cdot Q(X_1,\ldots,X_n). -$$ - -Here the "lacunary part" Placunary is defined as the sum of all monomials in P which contain only a proper subset of the n variables X1, …, Xn, i.e., where at least one variable Xj is missing. - -Because P is symmetric, the lacunary part is determined by its terms containing only the variables X1, …, Xn − 1, i.e., which do not contain Xn. More precisely: If A and B are two homogeneous symmetric polynomials in X1, …, Xn having the same degree, and if the coefficient of A before each monomial which contains only the variables X1, …, Xn − 1 equals the corresponding coefficient of B, then A and B have equal lacunary parts. (This is because every monomial which can appear in a lacunary part must lack at least one variable, and thus can be transformed by a permutation of the variables into a monomial which contains only the variables X1, …, Xn − 1.) - -But the terms of P which contain only the variables X1, …, Xn − 1 are precisely the terms that survive the operation of setting Xn to 0, so their sum equals P(X1, …, Xn − 1, 0), which is a symmetric polynomial in the variables X1, …, Xn − 1 that we shall denote by P̃(X1, …, Xn − 1). By the inductive hypothesis, this polynomial can be written as -$$ - \tilde{P}(X_1, \ldots, X_{n-1})=\tilde{Q}(\sigma_{1,n-1}, \ldots, \sigma_{n-1,n-1}) -$$ - -for some Q̃. Here the doubly indexed σj,n − 1 denote the elementary symmetric polynomials in n − 1 variables. - -Consider now the polynomial -$$ -R(X_1, \ldots, X_{n}):= \tilde{Q}(\sigma_{1,n}, \ldots, \sigma_{n-1,n}) . -$$ - -Then R(X1, …, Xn) is a symmetric polynomial in X1, …, Xn, of the same degree as Placunary, which satisfies -$$ -R(X_1, \ldots, X_{n-1},0) = \tilde{Q}(\sigma_{1,n-1}, \ldots, \sigma_{n-1,n-1}) = P(X_1, \ldots,X_{n-1},0) -$$ - -(the first equality holds because setting Xn to 0 in σj,n gives σj,n − 1, for all j < n). In other words, the coefficient of R before each monomial which contains only the variables X1, …, Xn − 1 equals the corresponding coefficient of P. By the remark above, this shows that the lacunary part of R coincides with that of the original polynomial P.
Therefore the difference P − R has no lacunary part, and is therefore divisible by the product X1···Xn of all variables, which equals the elementary symmetric polynomial σn,n. Then writing P − R = σn,nQ, the quotient Q is a homogeneous symmetric polynomial of degree less than d (in fact degree at most d − n) which by the inductive hypothesis can be expressed as a polynomial in the elementary symmetric functions. Combining the representations for P − R and R one finds a polynomial representation for P. - -The uniqueness of the representation can be proved inductively in a similar way. (It is equivalent to the fact that the n polynomials e1, …, en are algebraically independent over the ring A.) The fact that the polynomial representation is unique implies that A[X1, …, Xn]Sn is isomorphic to A[Y1, …, Yn]. - -The following proof is also inductive, but does not involve other polynomials than those symmetric in X1, …, Xn, and also leads to a fairly direct procedure to effectively write a symmetric polynomial as a polynomial in the elementary symmetric ones. Assume the symmetric polynomial to be homogeneous of degree d; different homogeneous components can be decomposed separately. Order the monomials in the variables Xi lexicographically, where the individual variables are ordered X1 > … > Xn, in other words the dominant term of a polynomial is one with the highest occurring power of X1, and among those the one with the highest power of X2, etc. Furthermore parametrize all products of elementary symmetric polynomials that have degree d (they are in fact homogeneous) as follows by partitions of d. Order the individual elementary symmetric polynomials ei(X1, …, Xn) in the product so that those with larger indices i come first, then build for each such factor a column of i boxes, and arrange those columns from left to right to form a Young diagram containing d boxes in all. The shape of this diagram is a partition of d, and each partition λ of d arises for exactly one product of elementary symmetric polynomials, which we shall denote by eλt (X1, …, Xn) (the t is present only because traditionally this product is associated to the transpose partition of λ). The essential ingredient of the proof is the following simple property, which uses multi-index notation for monomials in the variables Xi. - -Lemma. The leading term of eλt (X1, …, Xn) is X λ. - -Proof. The leading term of the product is the product of the leading terms of each factor (this is true whenever one uses a monomial order, like the lexicographic order used here), and the leading term of the factor ei (X1, …, Xn) is clearly X1X2···Xi. To count the occurrences of the individual variables in the resulting monomial, fill the column of the Young diagram corresponding to the factor concerned with the numbers 1, …, i of the variables, then all boxes in the first row contain 1, those in the second row 2, and so forth, which means the leading term is X λ. - -Now one proves by induction on the leading monomial in lexicographic order, that any nonzero homogeneous symmetric polynomial P of degree d can be written as polynomial in the elementary symmetric polynomials. Since P is symmetric, its leading monomial has weakly decreasing exponents, so it is some X λ with λ a partition of d. Let the coefficient of this term be c, then P − ceλt (X1, …, Xn) is either zero or a symmetric polynomial with a strictly smaller leading monomial. 
Writing this difference inductively as a polynomial in the elementary symmetric polynomials, and adding back ceλt (X1, …, Xn) to it, one obtains the sought-for polynomial expression for P. - -The fact that this expression is unique, or equivalently that all the products (monomials) eλt (X1, …, Xn) of elementary symmetric polynomials are linearly independent, is also easily proved. The lemma shows that all these products have different leading monomials, and this suffices: if a nontrivial linear combination of the eλt (X1, …, Xn) were zero, one focuses on the contribution in the linear combination with nonzero coefficient and with (as polynomial in the variables Xi) the largest leading monomial; the leading term of this contribution cannot be cancelled by any other contribution of the linear combination, which gives a contradiction. diff --git a/wiki/wikipedia/2941.txt b/wiki/wikipedia/2941.txt deleted file mode 100644 index 0734b5d8311c0da95dca5c61d52b133d357035f6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2941.txt +++ /dev/null @@ -1,25 +0,0 @@ -In algebraic topology, in the cellular approximation theorem, a map between CW-complexes can always be taken to be of a specific type. Concretely, if X and Y are CW-complexes, and f : X → Y is a continuous map, then f is said to be cellular, if f takes the n-skeleton of X to the n-skeleton of Y for all n, i.e. if $f(X^n)\subseteq Y^n$ for all n. The content of the cellular approximation theorem is then that any continuous map f : X → Y between CW-complexes X and Y is homotopic to a cellular map, and if f is already cellular on a subcomplex A of X, then we can furthermore choose the homotopy to be stationary on A. From an algebraic topological viewpoint, any map between CW-complexes can thus be taken to be cellular. - -The proof can be given by induction on n, with the statement that f is cellular on the skeleton Xn. For the base case n=0, notice that every path-component of Y must contain a 0-cell. The image under f of a 0-cell of X can thus be connected to a 0-cell of Y by a path, and this gives a homotopy from f to a map which is cellular on the 0-skeleton of X. - -Assume inductively that f is cellular on the (n - 1)-skeleton of X, and let en be an n-cell of X. The closure of en is compact in X, being the image of the characteristic map of the cell, and hence the image of the closure of en under f is also compact in Y. Then it is a general result of CW-complexes that any compact subspace of a CW-complex meets (that is, intersects non-trivially) only finitely many cells of the complex. Thus f(en) meets at most finitely many cells of Y, so we can take $e^k\subseteq Y$ to be a cell of highest dimension meeting f(en). If $k\leq n$, the map f is already cellular on en, since in this case only cells of the n-skeleton of Y meet f(en), so we may assume that k > n. It is then a technical, non-trivial result (see Hatcher) that the restriction of f to $X^{n-1}\cup e^n$ can be homotoped relative to Xn-1 to a map missing a point p ∈ ek. Since Yk - {p} deformation retracts onto the subspace Yk-ek, we can further homotope the restriction of f to $X^{n-1}\cup e^n$ to a map, say, g, with the property that g(en) misses the cell ek of Y, still relative to Xn-1. Since f(en) met only finitely many cells of Y to begin with, we can repeat this process finitely many times to make $f(e^n)$ miss all cells of Y of dimension larger than n.
- -We repeat this process for every n-cell of X, fixing cells of the subcomplex A on which f is already cellular, and we thus obtain a homotopy (relative to the (n - 1)-skeleton of X and the n-cells of A) of the restriction of f to Xn to a map cellular on all cells of X of dimension at most n. Using the homotopy extension property to extend this to a homotopy on all of X, and patching these homotopies together, finishes the proof. For details, consult Hatcher. - -The cellular approximation theorem can be used to immediately calculate some homotopy groups. In particular, if $n < k$ then $\pi_n(S^k) = 0$: any map $S^n \to S^k$ is homotopic to a cellular map for the standard CW structures, and such a map must be constant, since the n-skeleton of $S^k$ is a point. - -We have in particular that $(X,X^n)$ is n-connected, so it follows from the long exact sequence of homotopy groups for the pair $(X,X^n)$ that we have isomorphisms $\pi_i(X^n) \to \pi_i(X)$ for all $i < n$. - -The Leray–Hirsch theorem states that if $F$ is the fibre of a fibre bundle $\pi\colon E \to B$ admitting a cohomology extension of the fibre $s\colon H^*(F) \to H^*(E)$, then the map - -\begin{array}{ccc} - -H^* (F)\otimes H^*(B) & \longrightarrow & H^* (E) \\ - -\alpha \otimes \beta & \longmapsto & s (\alpha)\smallsmile \pi^*(\beta) - -\end{array} - -is an isomorphism of $H^*(B)$-modules. - -In other words, if for every $p$, there exist classes -$$ -c_{1,p},\ldots,c_{m_p,p} \in H^p(E) -$$ - -that restrict, on each fiber $F$, to a basis of the cohomology in degree $p$, the map given below is then an isomorphism of $H^*(B)$-modules. - -\begin{array}{ccc} - -H^*(F)\otimes H^*(B) & \longrightarrow & H^*(E) \\ - -\sum_{i,j,k}a_{i,j,k}\iota^*(c_{i,j})\otimes b_k & \longmapsto & \sum_{i,j,k}a_{i,j,k}c_{i,j}\wedge\pi^*(b_k) - -\end{array} - -where $\{b_k\}$ is a basis for $H^*(B)$ and thus induces a basis $\{\iota^*(c_{i,j})\otimes b_k\}$ for $H^*(F)\otimes H^*(B).$ diff --git a/wiki/wikipedia/2943.txt b/wiki/wikipedia/2943.txt deleted file mode 100644 index d635178328d4724e5fcea490614883ddb36f6838..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2943.txt +++ /dev/null @@ -1,25 +0,0 @@ -MTCS (Minimum Teleprocessing Communications System) was a transaction processor that ran on IBM mainframe systems under OS/VS1 and DOS/VS. - -MTCS was available from IBM and designed for rapid, low to medium volume online processing. This process was entirely interactive (screen-oriented using 3270 display terminals). - -The 'official' version of MTCS was single-threaded only and was a forerunner of CICS before it was released. - -An unofficial and multi-threaded version of MTCS was developed by Littlewoods Pools, UK, at the same time as a multi-threaded "MTCS bridge" (middleware MTCS simulator) became available for running MTCS transactions directly under CICS. This version was also used by other customers including Granada Productions under a license agreement. - -An MTCS transaction is a set of operations which together perform a task. Usually, the majority of transactions are relatively simple tasks such as updating the balance of an account. - -MTCS applications comprise transactions which were written in IBM Basic Assembly Language and interfaced with 3270 terminals. - -Each MTCS program was initiated using a transaction identifier. MTCS screens were sent as native 3270 datastreams to the terminal. - -The first release of MTCS was made available prior to the first release of CICS in the late 1960s. - -A forerunner of MTCS was known as "FASTER" and was a higher-level BTAM-based product that controlled IBM 2260 display terminals. - -According to IBM System/360 and System/370 Bibliography of September 1974 (GA22-6822-21, File No. S360/S370-00), MTCS is described by the following manuals: - -* GB21-0061, MINIMUM TELEPROCESSING COMMUNICATION SYSTEM (DOS) - FDP AVAILABILITY NOTICE, PROG. NO.
5798-AAY - -* SB21-0062, MINIMUM TELEPROCESSING COMMUNICATIONS, SYSTEM MANUAL PROGRAM NUMBER 5798-AAY - -* LB21-0063, MINIMUM TELEPROCESSING COMMUNICATIONS SYSTEM: FDP SYSTEMS GUIDE, PROG. NO. 5798-AAY. FEATURE NO. 8021 diff --git a/wiki/wikipedia/2944.txt b/wiki/wikipedia/2944.txt deleted file mode 100644 index d7749736845f077240599c93729ddbcecd82a9a2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2944.txt +++ /dev/null @@ -1,5 +0,0 @@ -Richard Lawrence Taylor (born 19 May 1962) is a British mathematician working in the field of number theory. He is currently the Barbara Kimball Browning Professor in Humanities and Sciences at Stanford University. - -Taylor received the 2015 Breakthrough Prize in Mathematics "for numerous breakthrough results in the theory of automorphic forms, including the Taniyama–Weil conjecture, the local Langlands conjecture for general linear groups, and the Sato–Tate conjecture." He also received the 2007 Shaw Prize in Mathematical Sciences for his work on the Langlands program with Robert Langlands. He also served on the Mathematical Sciences jury for the Infosys Prize from 2012 to 2014. - -He received his B.A. from Clare College, Cambridge, later became the Herchel Smith Professor of Mathematics at Harvard University, and held the Robert and Luisa Fernholz Professorship at the Institute for Advanced Study. diff --git a/wiki/wikipedia/2945.txt b/wiki/wikipedia/2945.txt deleted file mode 100644 index 028f1aacc0f80f94c7e59d14d2f9af33aee12462..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2945.txt +++ /dev/null @@ -1,16 +0,0 @@ -In mathematics, Hua's lemma, named for Hua Loo-keng, is an estimate for exponential sums. - -It states that if P is an integral-valued polynomial of degree k, $\varepsilon$ is a positive real number, and f a real function defined by -$$ -f(\alpha)=\sum_{x=1}^N\exp(2\pi iP(x)\alpha), -$$ - -then -$$ -\int_0^1|f(\alpha)|^\lambda d\alpha\ll_{P, \varepsilon} N^{\mu(\lambda)}, -$$ - -where $(\lambda,\mu(\lambda))$ lies on a polygonal line with vertices -$$ -(2^\nu,2^\nu-\nu+\varepsilon),\quad\nu=1,\ldots,k. -$$ diff --git a/wiki/wikipedia/2946.txt b/wiki/wikipedia/2946.txt deleted file mode 100644 index d190d9662a3c8579b3a01a8ad179e8f7914ebea1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2946.txt +++ /dev/null @@ -1,33 +0,0 @@ -In graph theory, the Hadwiger conjecture states that if G is loopless and has no $K_t$ minor then its chromatic number satisfies $\chi(G) < t$. It is known to be true for $1 \leq t \leq 6$. The conjecture is a generalization of the four-color theorem and is considered to be one of the most important and challenging open problems in the field. - -In more detail, if all proper colorings of an undirected graph G use k or more colors, then the conjecture asserts that one can find k disjoint connected subgraphs of G such that each subgraph is connected by an edge to each other subgraph. Contracting the edges within each of these subgraphs so that each subgraph collapses to a single vertex produces a complete graph Kk on k vertices as a minor of G. - -This conjecture, a far-reaching generalization of the four-color problem, was made by Hugo Hadwiger in 1943 and is still unsolved. Bollobás, Catlin, and Erdős call it “one of the deepest unsolved problems in graph theory.” The Hadwiger conjecture can be stated in the simple algebraic form χ(G) ≤ h(G), where χ(G) denotes the chromatic number of G and h(G) denotes its Hadwiger number, the size of the largest complete graph that is a minor of G.
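A tiny brute-force illustration of the statement (our own sketch, not from the article): the 5-cycle needs three colors, and contracting two of its edges produces the predicted K3 minor.

```python
from itertools import product

def chromatic_number(vertices, edges):
    # Brute force: the smallest k admitting a proper k-coloring.
    for k in range(1, len(vertices) + 1):
        for coloring in product(range(k), repeat=len(vertices)):
            col = dict(zip(vertices, coloring))
            if all(col[u] != col[v] for u, v in edges):
                return k

def contract(edges, u, v):
    # Merge v into u and drop the resulting loop; the result is a minor.
    return {tuple(sorted((u if a == v else a, u if b == v else b)))
            for a, b in edges if {a, b} != {u, v}}

# C5, the 5-cycle: chromatic number 3, so Hadwiger predicts a K3 minor.
c5 = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)}
print(chromatic_number(range(5), c5))    # 3
k3 = contract(contract(c5, 0, 1), 2, 3)  # two contractions give a triangle
print(k3)                                # the edges of K3 on vertices 0, 2, 4
```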
- -The case k = 2 is trivial: a graph requires more than one color if and only if it has an edge, and that edge is itself a K2 minor. The case k = 3 is also easy: the graphs requiring three colors are the non-bipartite graphs, and every non-bipartite graph has an odd cycle, which can be contracted to a 3-cycle, that is, a K3 minor. - -In the same paper in which he introduced the conjecture, Hadwiger proved its truth for k ≤ 4. The graphs with no K4 minor are the series–parallel graphs and their subgraphs. Each graph of this type has a vertex with at most two incident edges; one can 3-color any such graph by removing one such vertex, coloring the remaining graph recursively, and then adding back and coloring the removed vertex. Because the removed vertex has at most two edges, one of the three colors will always be available to color it when the vertex is added back. - -The truth of the conjecture for k = 5 implies the four color theorem: for, if the conjecture is true, every graph requiring five or more colors would have a K5 minor and would (by Wagner's theorem) be nonplanar. - -Klaus Wagner proved in 1937 that the case k = 5 is actually equivalent to the four color theorem and therefore we now know it to be true. As Wagner showed, every graph that has no K5 minor can be decomposed via clique-sums into pieces that are either planar or an 8-vertex Möbius ladder, and each of these pieces can be 4-colored independently of each other, so the 4-colorability of a K5-minor-free graph follows from the 4-colorability of each of the planar pieces. - -Robertson, Seymour, and Thomas proved the conjecture for k = 6, also using the four color theorem; their paper with this proof won the 1994 Fulkerson Prize. It follows from their proof that linklessly embeddable graphs, a three-dimensional analogue of planar graphs, have chromatic number at most five. Due to this result, the conjecture is known to be true for k ≤ 6, but it remains unsolved for all k > 6. - -For k = 7, some partial results are known: every 7-chromatic graph must contain either a K7 minor or both a K4,4 minor and a K3,5 minor. - -Every graph G has a vertex with at most O(h(G) log h(G)) incident edges, from which it follows that a greedy coloring algorithm that removes this low-degree vertex, colors the remaining graph, and then adds back the removed vertex and colors it, will color the given graph with O(h(G) log h(G)) colors. - -In the 1980s, Alexander V. Kostochka and Andrew Thomason both independently proved that every graph with no $K_k$-minors has average degree $O (k \sqrt {\log k})$ and can thus be colored using $O (k \sqrt {\log k})$ colors. - -This result was later improved by Sergey Norin, Zi-Xia Song and Luke Postle to $O (k {(\log k )}^{\beta} )$ colors for any $\beta > {1}/{4}$. In 2020, Postle announced another improvement to $O(k {(\log \log k)}^6)$-colorability for graphs without $K_k$-minors. - -Van der Zypen has constructed a graph H with χ(H) = ω but no Kω minor, demonstrating that the conjecture does not hold for graphs with countably infinite coloring number. - -György Hajós conjectured that Hadwiger's conjecture could be strengthened to subdivisions rather than minors: that is, that every graph with chromatic number k contains a subdivision of a complete graph Kk. Hajós' conjecture is true for k ≤ 4, but Catlin found counterexamples to this strengthened conjecture for k ≥ 7; the cases k = 5 and k = 6 remain open.
Erdős observed that Hajós' conjecture fails badly for random graphs: for any ε > 0, in the limit as the number of vertices, n, goes to infinity, the probability approaches one that a random n-vertex graph has chromatic number ≥ (1/2 - ε)n / log2 n, and that its largest clique subdivision has at most cn1/2 vertices for some constant c. In this context, it is worth noting that the probability also approaches one that a random n-vertex graph has Hadwiger number greater than or equal to its chromatic number, so the Hadwiger conjecture holds for random graphs with high probability; more precisely, the Hadwiger number is with high probability a constant times n/log n. - -Borowiecki asked whether Hadwiger's conjecture could be extended to list coloring. For k ≤ 4, every graph with list chromatic number k has a k-vertex clique minor. However, the maximum list chromatic number of planar graphs is 5, not 4, so the extension fails already for K5-minor-free graphs. More generally, for every t ≥ 1, there exist graphs whose Hadwiger number is 3t + 1 and whose list chromatic number is 4t + 1. - -Gerards and Seymour conjectured that every graph G with chromatic number k has a complete graph Kk as an odd minor. Such a structure can be represented as a family of k vertex-disjoint subtrees of G, each of which is two-colored, such that each pair of subtrees is connected by a monochromatic edge. Although graphs with no odd Kk minor are not necessarily sparse, a similar upper bound holds for them as it does for the standard Hadwiger conjecture: a graph with no odd Kk minor has chromatic number χ(G) = O(k log k). - -By imposing extra conditions on G, it may be possible to prove the existence of larger minors than Kk. One example is the snark theorem, that every cubic graph requiring four colors in any edge coloring has the Petersen graph as a minor, conjectured by W. T. Tutte and announced to be proved in 2001 by Robertson, Sanders, Seymour, and Thomas. diff --git a/wiki/wikipedia/2947.txt b/wiki/wikipedia/2947.txt deleted file mode 100644 index 90b337f13259e804f1f3c98661f56124215f8228..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2947.txt +++ /dev/null @@ -1,18 +0,0 @@ -In quantum information and computation, the Solovay–Kitaev theorem says, roughly, that if a set of single-qubit quantum gates generates a dense subset of SU(2) then that set is guaranteed to fill SU(2) quickly, which means any desired gate can be approximated by a fairly short sequence of gates from the generating set. Robert M. Solovay initially announced the result on an email list in 1995, and Alexei Kitaev independently gave an outline of its proof in 1997. Solovay also gave a talk on his result at MSRI in 2000 but it was interrupted by a fire alarm. Christopher M. Dawson and Michael Nielsen call the theorem one of the most important fundamental results in the field of quantum computation. - -A consequence of this theorem is that a quantum circuit of $m$ constant-qubit gates can be approximated to $\varepsilon$ error (in operator norm) by a quantum circuit of $O(m\log^c(m/\varepsilon))$ gates from a desired finite universal gate set. By comparison, just knowing that a gate set is universal only implies that constant-qubit gates can be approximated by a finite circuit from the gate set, with no bound on its length. 
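As a rough numerical sketch of finite-gate-set approximation (our own illustration, not from the article; the gate set {H, T, T†} and all helper names are our choices), one can exhaustively search short words over a universal gate set for the best approximation to a target unitary. This is also the kind of brute-force base-case search mentioned at the end of the proof sketch below.

```python
import numpy as np
from itertools import product

# H and T generate a dense subgroup of the single-qubit unitaries (up to
# global phase); 't' denotes the inverse of T, and H is its own inverse.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
gates = {'H': H, 'T': T, 't': T.conj().T}

def dist(A, B):
    # Distance up to global phase (phase chosen Frobenius-optimally;
    # a heuristic that is good enough for this sketch).
    phase = np.exp(-1j * np.angle(np.trace(B.conj().T @ A)))
    return np.linalg.norm(A * phase - B, 2)

def brute_force(U, max_len=8):
    # Exhaustively try all words up to max_len and keep the closest one.
    best_word, best_err = '', np.inf
    for L in range(1, max_len + 1):
        for word in product('HTt', repeat=L):
            M = np.eye(2)
            for g in word:
                M = gates[g] @ M
            err = dist(M, U)
            if err < best_err:
                best_word, best_err = ''.join(word), err
    return best_word, best_err

target = np.diag([1, np.exp(1j * 0.5)])  # an arbitrary phase rotation
print(brute_force(target))
```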
The Solovay–Kitaev theorem thus shows that this approximation can be made surprisingly efficient, thereby justifying that quantum computers need only implement a finite number of gates to gain the full power of quantum computation. - -Let $\mathcal{G}$ be a finite set of elements in SU(2) containing its own inverses (so $g \in \mathcal{G}$ implies $g^{-1} \in \mathcal{G}$) and such that the group $ \langle \mathcal{G} \rangle $ they generate is dense in SU(2). Consider some $\varepsilon > 0$. Then there is a constant $c$ such that for any $U \in \mathrm{SU}(2)$, there is a sequence $S$ of gates from $\mathcal{G}$ of length $O(\log^c(1/\varepsilon))$ such that $\|S - U\| \leq \varepsilon$. That is, $S$ approximates $U$ to operator norm error $\varepsilon$. - -The constant $c$ can be made to be $3+\delta$ for any fixed $\delta > 0$. However, there exist particular gate sets for which we can take $c=1$, which makes the length of the gate sequence tight up to a constant factor. - -The proof of the Solovay–Kitaev theorem proceeds by recursively constructing a gate sequence giving increasingly good approximations to $U \in \operatorname{SU}(2)$. Suppose we have an approximation $U_{n-1} \in \operatorname{SU}(2)$ such that $\| U - U_{n-1} \| \leq \varepsilon_{n-1}$. Our goal is to find a sequence of gates approximating $U U_{n-1}^{-1}$ to $\varepsilon_n$ error, for $\varepsilon_n < \varepsilon_{n-1}$. By concatenating this sequence of gates with $U_{n-1}$, we get a sequence of gates $U_n$ such that $\|U - U_n\| \leq \varepsilon_n$. - -The key idea is that commutators of elements close to the identity can be approximated "better-than-expected". Specifically, for $V,W \in \operatorname{SU}(2)$ satisfying $\|V - I\| \leq \delta_1$ and $\|W - I\| \leq \delta_1$ and approximations $\tilde{V}, \tilde{W} \in \operatorname{SU}(2)$ satisfying $\|V - \tilde{V}\| \leq \delta_2$ and $\|W - \tilde{W}\| \leq \delta_2$, then -$$ -\|VWV^{-1}W^{-1} - \tilde{V}\tilde{W}\tilde{V}^{-1}\tilde{W}^{-1}\| \leq O(\delta_1\delta_2), -$$ - -where the big O notation hides higher-order terms. One can naively bound the above expression to be $O(\delta_2)$, but the group commutator structure creates substantial error cancellation. - -We use this observation by rewriting the expression we wish to approximate as a group commutator $U U_{n-1}^{-1} = V_{n-1}W_{n-1}V_{n-1}^{-1}W_{n-1}^{-1}$. This can be done such that both $V_{n-1}$ and $W_{n-1}$ are close to the identity (since $\|U U_{n-1}^{-1} - I\| \leq \varepsilon_{n-1}$). So, if we recursively compute gate sequences approximating $V_{n-1}$ and $W_{n-1}$ to $\varepsilon_{n-1}$ error, we get a gate sequence approximating $U U_{n-1}^{-1}$ to the desired better precision $\varepsilon_n$ with $\varepsilon_n = O(\varepsilon_{n-1}^{3/2})$. We can get a base case approximation with constant $\varepsilon_0$ by brute-force computation of all sufficiently long gate sequences. diff --git a/wiki/wikipedia/2948.txt b/wiki/wikipedia/2948.txt deleted file mode 100644 index 1ba90e0251a1549aeb08e8b69d79290f4954d033..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2948.txt +++ /dev/null @@ -1,15 +0,0 @@ -In operator theory, the commutant lifting theorem, due to Sz.-Nagy and Foias, is a powerful theorem used to prove several interpolation results.
- -The commutant lifting theorem states that if T is a contraction on a Hilbert space H, U is its minimal unitary dilation acting on some Hilbert space K (which can be shown to exist by Sz.-Nagy's dilation theorem), and R is an operator on H commuting with T, then there is an operator S on K commuting with U such that -$$ -R T^n = P_H S U^n \vert_H \quad \forall n \geq 0, -$$ - -and -$$ -\Vert S \Vert = \Vert R \Vert. -$$ - -In other words, an operator from the commutant of T can be "lifted" to an operator in the commutant of the unitary dilation of T. - -The commutant lifting theorem can be used to prove the left Nevanlinna-Pick interpolation theorem, the Sarason interpolation theorem, and the two-sided Nudelman theorem, among others. diff --git a/wiki/wikipedia/2949.txt b/wiki/wikipedia/2949.txt deleted file mode 100644 index 8a9d6a73e98d744179ddc0dc0b354b2a46843aeb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2949.txt +++ /dev/null @@ -1,11 +0,0 @@ -In computer science, Uniform consensus is a distributed computing problem similar to the consensus problem, with one additional condition: no two processes (whether faulty or not) decide differently. - -More specifically, one should consider this problem: - -* Each process has an input and should decide on an output (one-shot problem) - -* Uniform Agreement: every two decisions are the same - -* Validity: every decision is an input of one of the processes - -* Termination: eventually all correct processes decide diff --git a/wiki/wikipedia/295.txt b/wiki/wikipedia/295.txt deleted file mode 100644 index 0dc5c347892bd7ea195001f0e02012f9fb786694..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/295.txt +++ /dev/null @@ -1,50 +0,0 @@ -In coding theory, the Gilbert–Varshamov bound (due to Edgar Gilbert and independently Rom Varshamov) is a limit on the parameters of a (not necessarily linear) code. It is occasionally known as the Gilbert–Shannon–Varshamov bound (or the GSV bound), but the name "Gilbert–Varshamov bound" is by far the most popular. Varshamov proved this bound by using the probabilistic method for linear codes. For more about that proof, see Gilbert–Varshamov bound for linear codes. - -Let -$$ -A_q(n,d) -$$ - -denote the maximum possible size of a q-ary code $C$ with length n and minimum Hamming distance d (a q-ary code is a code over the field $\mathbb{F}_q$ of q elements). - -Then: -$$ -A_q(n,d) \geqslant \frac{q^n}{\sum_{j=0}^{d-1} \binom{n}{j}(q-1)^j}. -$$ - -Let $C$ be a code of length $n$ and minimum Hamming distance $d$ having maximal size: -$$ -|C|=A_q(n,d). -$$ - -Then for all $x\in\mathbb{F}_q^n$, there exists at least one codeword $c_x \in C$ such that the Hamming distance $d(x,c_x)$ between $x$ and $c_x$ satisfies -$$ -d(x,c_x)\leqslant d-1 -$$ - -since otherwise we could add x to the code whilst maintaining the code's minimum Hamming distance $d$ – a contradiction on the maximality of $|C|$. - -Hence the whole of $\mathbb{F}_q^n$ is contained in the union of all balls of radius $d-1$ having their centre at some $c \in C$: -$$ -\mathbb{F}_q^n =\bigcup_{c \in C} B(c,d-1). -$$ - -Now each ball has size -$$ - \sum_{j=0}^{d-1} \binom{n}{j}(q-1)^j -$$ - -since we may allow (or choose) up to $d-1$ of the $n$ components of a codeword to deviate (from the value of the corresponding component of the ball's centre) to one of $(q-1)$ possible other values (recall: the code is q-ary: it takes values in $\mathbb{F}_q^n$).
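As a quick numerical sanity check of this ball-size formula (an illustrative sketch of ours, not part of the original argument):

```python
from itertools import product
from math import comb

def ball_size(n, q, r):
    # Brute-force count of words within Hamming distance r of a fixed centre.
    centre = (0,) * n
    return sum(1 for w in product(range(q), repeat=n)
               if sum(a != b for a, b in zip(w, centre)) <= r)

n, q, d = 6, 3, 3
formula = sum(comb(n, j) * (q - 1) ** j for j in range(d))
print(ball_size(n, q, d - 1), formula)  # both print 73
```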
Hence we deduce -$$ -q^n = \left |\mathbb{F}_q^n \right | = \left |\bigcup_{c \in C} B(c,d-1) \right | \leqslant \sum_{c \in C} |B(c,d-1)| = |C|\sum_{j=0}^{d-1} \binom{n}{j}(q-1)^j -$$ - -That is: -$$ - A_q(n,d) = |C| \geqslant \frac{q^n}{\sum_{j=0}^{d-1} \binom{n}{j}(q-1)^j}. -$$ - -For q a prime power, one can improve the bound to $A_q(n,d)\ge q^k$ where k is the greatest integer for which -$$ -q^k < \frac{q^n}{\sum_{j=0}^{d-2} \binom{n-1}{j}(q-1)^j}. -$$ diff --git a/wiki/wikipedia/2950.txt b/wiki/wikipedia/2950.txt deleted file mode 100644 index ca6576a38c09c4276c65d925530e6c21796b8213..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2950.txt +++ /dev/null @@ -1,7 +0,0 @@ -In algebraic geometry, the Keel–Mori theorem gives conditions for the existence of the quotient of an algebraic space by a group. The theorem was proved by Seán Keel and Shigefumi Mori. - -A consequence of the Keel–Mori theorem is the existence of a coarse moduli space of a separated algebraic stack, which is roughly a "best possible" approximation to the stack by a separated algebraic space. - -All algebraic spaces are assumed to be of finite type over a locally Noetherian base. Suppose that j:R→X×X is a flat groupoid whose stabilizer j−1Δ is finite over X (where Δ is the diagonal of X×X). The Keel–Mori theorem states that there is an algebraic space that is a geometric and uniform categorical quotient of X by j, which is separated if j is finite. - -A corollary is that for any flat group scheme G acting properly on an algebraic space X with finite stabilizers there is a uniform geometric and uniform categorical quotient X/G which is a separated algebraic space. Kollár proved a slightly weaker version of this and described several applications. diff --git a/wiki/wikipedia/2951.txt b/wiki/wikipedia/2951.txt deleted file mode 100644 index 3e904f1bb10b92841dba56538051fee8b3a53118..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2951.txt +++ /dev/null @@ -1,3 +0,0 @@ -In polynomial algebra and field theory, Joubert's theorem states that if $K$ and $L$ are fields, $L$ is a separable field extension of $K$ of degree 6, and the characteristic of $K$ is not equal to 2, then $L$ is generated over $K$ by some element λ in $L$, such that the minimal polynomial $p$ of λ has the form $p(t)$ = $t^6 + c_4 t^4 + c_2 t^2 + c_1 t + c_0$, for some constants $c_4, c_2, c_1, c_0$ in $K$. The theorem is named in honor of Charles Joubert, a French mathematician, lycée professor, and Jesuit priest. - -In 1867 Joubert published his theorem in his paper Sur l'équation du sixième degré in tome 64 of Comptes rendus hebdomadaires des séances de l'Académie des sciences. He seems to have made the assumption that the fields involved in the theorem are subfields of the complex field. In 2006 Hanspeter Kraft gave a proof of Joubert's theorem "based on an enhanced version of Joubert’s argument". In 2014 Zinovy Reichstein proved that the condition characteristic($K$) ≠ 2 is necessary in general to prove the theorem, but the theorem's conclusion can be proved in the characteristic 2 case with some additional assumptions on $K$ and $L$. diff --git a/wiki/wikipedia/2952.txt b/wiki/wikipedia/2952.txt deleted file mode 100644 index 6fbe326c2ce156300dd44b0db0b6d3dae46518bc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2952.txt +++ /dev/null @@ -1,12 +0,0 @@ -In mathematics, the constant problem is the problem of deciding whether a given expression is equal to zero. - -This problem is also referred to as the identity problem or the method of zero estimates.
It has no formal statement as such but refers to a general problem prevalent in transcendental number theory. Often proofs in transcendence theory are proofs by contradiction. Specifically, they use some auxiliary function to create an integer n ≥ 0, which is shown to satisfy n < 1. Clearly, this means that n must have the value zero, and so a contradiction arises if one can show that in fact n is not zero. - -In many transcendence proofs, proving that n ≠ 0 is very difficult, and hence a lot of work has been done to develop methods that can be used to prove the non-vanishing of certain expressions. The sheer generality of the problem is what makes it difficult to prove general results or come up with general methods for attacking it. The number n that arises may involve integrals, limits, polynomials, other functions, and determinants of matrices. - -In certain cases, algorithms or other methods exist for proving that a given expression is non-zero, or of showing that the problem is undecidable. For example, if x1, ..., xn are real numbers, then there is an algorithm for deciding whether there are integers a1, ..., an such that -$$ -a_1 x_1 + \cdots + a_n x_n = 0. -$$ - -If the expression we are interested in contains an oscillating function, such as the sine or cosine function, then it has been shown that the problem is undecidable, a result known as Richardson's theorem. In general, methods specific to the expression being studied are required to prove that it cannot be zero. diff --git a/wiki/wikipedia/2953.txt b/wiki/wikipedia/2953.txt deleted file mode 100644 index fecb0c164dd1584f86e9a194d6ebd48eb4efb394..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2953.txt +++ /dev/null @@ -1,9 +0,0 @@ -The pill jar puzzle is a probability puzzle, which asks for the expected value of the number of half-pills remaining when the last whole pill is popped from a jar initially containing n whole pills; the procedure is to remove a pill from the bottle at random. If the pill removed is a whole pill, it is broken into two half pills. One half pill is consumed and the other one is returned to the jar. If the pill removed is a half pill, then it is simply consumed and nothing is returned to the jar. - -The problem becomes very easy to solve once a binary variable Xk is defined by Xk = 1 if the kth half pill remains inside the jar after all the whole pills are removed. The kth half pill is defined as the result of the breaking of the kth whole pill being removed from the jar. Xk = 1 if out of the n − k + 1 pills (n − k whole pills + kth half pill), the one half pill is removed at the very end. This occurs with probability 1/(n − k + 1). - -The expected value is then given by E(X1) + E(X2) + ... + E(Xn). - -Since - -E(Xk) = P(Xk = 1) = 1/(n − k + 1), the sought expected value is 1/n + 1/(n − 1) + 1/(n − 2) + ... + 1 = Hn (the nth harmonic number). diff --git a/wiki/wikipedia/2954.txt b/wiki/wikipedia/2954.txt deleted file mode 100644 index 480df941c8c067f314cdc2cb396288691e37ba1c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2954.txt +++ /dev/null @@ -1,3 +0,0 @@ -Rudin's conjecture is a mathematical hypothesis (in additive combinatorics and elementary number theory) concerning an upper bound for the number of squares in finite arithmetic progressions. The conjecture, which has applications in the theory of trigonometric series, was first stated by Walter Rudin in his 1960 paper Trigonometric series with gaps.
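Using the notation defined formally in the next paragraph, the counts are easy to compute directly. The following sketch (our own illustrative code, with a hypothetical helper name) counts squares in the progression 24n + 1, which the strong form of the conjecture singles out as extremal:

```python
from math import isqrt

def Q(N, q, a):
    # Number of perfect squares among a, a + q, ..., a + (N - 1) * q.
    return sum(1 for n in range(N)
               if isqrt(q * n + a) ** 2 == q * n + a)

print([Q(N, 24, 1) for N in (6, 10, 50, 100)])
# Q(6, 24, 1) == 4: the squares are 1, 25, 49, 121
```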
- -For positive integers $N, q, a$ define the expression $Q(N; q, a)$ to be the number of perfect squares in the arithmetic progression $qn + a$, for $n = 0, 1, \ldots, N-1$, and define $Q(N)$ to be the maximum of the set $\{Q(N; q, a) : q, a \geq 1\}$. Rudin's conjecture asserts (in big O notation) that $Q(N) = O(\sqrt { N })$ and in its stronger form that, if $N > 5$, $Q(N) = Q(N; 24, 1)$. diff --git a/wiki/wikipedia/2955.txt b/wiki/wikipedia/2955.txt deleted file mode 100644 index 9ca44bb4e7239a61511a2200d04319486af0bbcb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2955.txt +++ /dev/null @@ -1,110 +0,0 @@ -In mathematics, the projection-slice theorem, central slice theorem or Fourier slice theorem in two dimensions states that the results of the following two calculations are equal: - -* Take a two-dimensional function f(r), project (e.g. using the Radon transform) it onto a (one-dimensional) line, and do a Fourier transform of that projection. - -* Take that same function, but do a two-dimensional Fourier transform first, and then slice it through its origin, along a line parallel to the projection line. - -In operator terms, if - -* F1 and F2 are the 1- and 2-dimensional Fourier transform operators mentioned above, - -* P1 is the projection operator (which projects a 2-D function onto a 1-D line), - -* S1 is a slice operator (which extracts a 1-D central slice from a function), - -then -$$ -F_1 P_1 = S_1 F_2. -$$ - -This idea can be extended to higher dimensions. - -This theorem is used, for example, in the analysis of medical CT scans where a "projection" is an x-ray image of an internal organ. The Fourier transforms of these images are seen to be slices through the Fourier transform of the 3-dimensional density of the internal organ, and these slices can be interpolated to build up a complete Fourier transform of that density. The inverse Fourier transform is then used to recover the 3-dimensional density of the object. This technique was first derived by Ronald N. Bracewell in 1956 for a radio-astronomy problem. - -In N dimensions, the projection-slice theorem states that the Fourier transform of the projection of an N-dimensional function f(r) onto an m-dimensional linear submanifold is equal to an m-dimensional slice of the N-dimensional Fourier transform of that function consisting of an m-dimensional linear submanifold through the origin in the Fourier space which is parallel to the projection submanifold. In operator terms: -$$ -F_mP_m=S_mF_N. -$$ - -In addition to generalizing to N dimensions, the projection-slice theorem can be further generalized with an arbitrary change of basis. For convenience of notation, we consider the change of basis to be represented as B, an N-by-N invertible matrix operating on N-dimensional column vectors. Then the generalized Fourier-slice theorem can be stated as -$$ -F_m P_m B = S_m \frac{B^{-T}}{\left|B^{-T}\right|} F_N -$$ - -where $B^{-T}=(B^{-1})^T$ is the transpose of the inverse of the change of basis transform. - -The projection-slice theorem is easily proven for the case of two dimensions. - -Without loss of generality, we can take the projection line to be the x-axis. - -There is no loss of generality because if we use a shifted and rotated line, the law still applies. Using a shifted line (in y) gives the same projection and therefore the same 1D Fourier transform results. The rotated function is the Fourier pair of the rotated Fourier transform, for which the theorem again holds.
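Before the continuous proof, a quick discrete sanity check (our own sketch; under NumPy's DFT conventions, the statement is exact for sampled data):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((64, 64))

# Project onto the first axis by summing over the second variable...
p = f.sum(axis=1)

# ...then compare the 1-D transform of the projection with the central
# slice (column 0) of the 2-D transform.
lhs = np.fft.fft(p)
rhs = np.fft.fft2(f)[:, 0]
print(np.allclose(lhs, rhs))  # True
```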
- -If f(x, y) is a two-dimensional function, then the projection of f(x, y) onto the x axis is p(x) where -$$ -p(x)=\int_{-\infty}^\infty f(x,y)dy. -$$ - -The Fourier transform of $f(x,y)$ is - -F(k_x,k_y)=\int_{-\infty}^\infty \int_{-\infty}^\infty f(x,y)e^{-2\pi i(xk_x+yk_y)}dxdy. - -The slice is then $s(k_x)$: - -s(k_x)=F(k_x,0) =\int_{-\infty}^\infty \int_{-\infty}^\infty f(x,y)e^{-2\pi ixk_x}dxdy =\int_{-\infty}^\infty \left[\int_{-\infty}^\infty f(x,y)dy\right]e^{-2\pi ixk_x} dx =\int_{-\infty}^\infty p(x)e^{-2\pi ixk_x} dx, - -which is just the Fourier transform of p(x). The proof for higher dimensions is easily generalized from the above example. - -If the two-dimensional function $f(\mathbf{r})$ is circularly symmetric, it may be represented as $f(r)$, where $r = |\mathbf{r}|$. In this case the projection onto any projection line will be the Abel transform of $f(r)$. The two-dimensional Fourier transform of $f(\mathbf{r})$ will be a circularly symmetric function given by the zeroth-order Hankel transform of $f(r)$, which will therefore also represent any slice through the origin. The projection-slice theorem then states that the Fourier transform of the projection equals the slice, or -$$ -F_1 A_1 = H, -$$ - -where A1 represents the Abel-transform operator, projecting a two-dimensional circularly symmetric function onto a one-dimensional line, F1 represents the 1-D Fourier-transform operator, and H represents the zeroth-order Hankel-transform operator. - -The projection-slice theorem is suitable for CT image reconstruction with parallel-beam projections. It does not directly apply to fan-beam or cone-beam CT. The theorem was extended to fan-beam and cone-beam CT image reconstruction by Shuang-ren Zhao in 1995. diff --git a/wiki/wikipedia/2956.txt b/wiki/wikipedia/2956.txt deleted file mode 100644 index 4854f6e9b31708332fa438e498ae390b2e626853..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2956.txt +++ /dev/null @@ -1,48 +0,0 @@ -In mathematics, the Jacobi–Anger expansion (or Jacobi–Anger identity) is an expansion of exponentials of trigonometric functions in the basis of their harmonics. It is useful in physics (for example, to convert between plane waves and cylindrical waves), and in signal processing (to describe FM signals). This identity is named after the 19th-century mathematicians Carl Jacobi and Carl Theodor Anger. - -The most general identity is given by: - -e^{i z \cos \theta} \equiv \sum_{n=-\infty}^{\infty} i^n J_n(z) e^{i n \theta}, - -where $J_n(z)$ is the $n$-th Bessel function of the first kind and $i$ is the imaginary unit, $i^2=-1.$ - -Substituting $\theta$ by $\theta-\frac{\pi}{2}$, we also get: - -e^{i z \sin \theta} \equiv \sum_{n=-\infty}^{\infty} J_n(z) e^{i n \theta}. - -Using the relation $J_{-n}(z) = (-1)^n J_{n}(z),$ valid for integer $n$, the expansion becomes: -$$ -e^{i z \cos \theta} \equiv J_0(z) + 2 \sum_{n=1}^{\infty} i^n J_n(z) \cos (n \theta). -$$ - -The following real-valued variations are often useful as well: - -\begin{align} -\cos(z \cos \theta) &\equiv J_0(z)+2 \sum_{n=1}^{\infty}(-1)^n J_{2n}(z) \cos(2n \theta), \\ -\sin(z \cos \theta) &\equiv -2 \sum_{n=1}^{\infty}(-1)^n J_{2n-1}(z) \cos\left[\left(2n-1\right) \theta\right], \\ -\cos(z \sin \theta) &\equiv J_0(z)+2 \sum_{n=1}^{\infty} J_{2n}(z) \cos(2n \theta), \\ -\sin(z \sin \theta) &\equiv 2 \sum_{n=1}^{\infty} J_{2n-1}(z) \sin\left[\left(2n-1\right) \theta\right].
\end{align} - -diff --git a/wiki/wikipedia/2957.txt b/wiki/wikipedia/2957.txt deleted file mode 100644 index 4a138613395aaae8f073f2cc4d4cb14e8ca545d9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2957.txt +++ /dev/null @@ -1,39 +0,0 @@ -Atomic semantics is a type of guarantee provided by a data register shared by several processors in a parallel machine or in a network of computers working together. - -Atomic semantics are very strong. An atomic register provides strong guarantees even when there is concurrency and failures. - -A read/write register R stores a value and is accessed by two basic operations: read and write(v). A read returns the value stored in R and write(v) changes the value stored in R to v. - -A register is called atomic if it satisfies the two following properties: - -1) Each invocation op of a read or write operation: - -• must appear as if it were executed at a single point τ(op) in time; - -• τ(op) satisfies τb(op) ≤ τ(op) ≤ τe(op), where τb(op) and τe(op) indicate the time when the operation op begins and ends; - -• if op1 ≠ op2, then τ(op1) ≠ τ(op2). - -2) Each read operation returns the value written by the last write operation before the read, in the sequence where all operations are ordered by their τ values. - -Atomic/linearizable register: - -Termination: when a node is correct, sooner or later each read and write operation will complete. - -Safety property (linearization points for read, write and failed operations): - -Read operation: it appears as if it happened at all nodes at some time between the invocation and the response time. - -Write operation: similarly to a read operation, it appears as if it happened at all nodes at some time between the invocation and the response time. - -Failed operation (the term "atomic" comes from this notion): it appears either as if it completed at every single node or as if it never happened at any node. - -Example: an atomic register is one that is linearizable to a sequential safe register. The following picture shows where we should put the linearization point for each operation: - -An atomic register could be defined for a variable with a single writer but multiple readers (SWMR), a single writer and a single reader (SWSR), or multiple writers and multiple readers (MWMR). Here is an example of a multi-reader multi-writer atomic register which is accessed by three processes (P1, P2, P3). Note that R.read() → v means that the corresponding read operation returns v, which is the value of the register. Therefore, the following execution of the register R satisfies the definition of an atomic register: - -R.write(1), R.read()→1, R.write(3), R.write(2), R.read()→2, R.read()→2. diff --git a/wiki/wikipedia/2958.txt b/wiki/wikipedia/2958.txt deleted file mode 100644 index 70070a90637f382d0ebc650e097728b411a4da90..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2958.txt +++ /dev/null @@ -1,130 +0,0 @@ -In mathematics, a Dirichlet problem is the problem of finding a function which solves a specified partial differential equation (PDE) in the interior of a given region and takes prescribed values on the boundary of the region. - -The Dirichlet problem can be solved for many PDEs, although originally it was posed for Laplace's equation.
In that case the problem can be stated as follows: - -Given a function f that has values everywhere on the boundary of a region in Rn, is there a unique continuous function u, twice continuously differentiable in the interior and continuous on the boundary, such that u is harmonic in the interior and u = f on the boundary? - -This requirement is called the Dirichlet boundary condition. The main issue is to prove the existence of a solution; uniqueness can be proved using the maximum principle. - -The Dirichlet problem goes back to George Green, who studied the problem on general domains with general boundary conditions in his Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism, published in 1828. He reduced the problem to the problem of constructing what we now call Green's functions, and argued that the Green's function exists for any domain. His methods were not rigorous by today's standards, but the ideas were highly influential in the subsequent developments. The next steps in the study of Dirichlet's problem were taken by Karl Friedrich Gauss, William Thomson (Lord Kelvin) and Peter Gustav Lejeune Dirichlet, after whom the problem was named, and the solution to the problem (at least for the ball) using the Poisson kernel was known to Dirichlet (judging by his 1850 paper submitted to the Prussian Academy). Lord Kelvin and Dirichlet suggested a solution to the problem by a variational method based on the minimization of "Dirichlet's energy". According to Hans Freudenthal (in the Dictionary of Scientific Biography, vol. 11), Bernhard Riemann was the first mathematician who solved this variational problem based on a method which he called Dirichlet's principle. The existence of a unique solution is very plausible by the "physical argument": any charge distribution on the boundary should, by the laws of electrostatics, determine an electrical potential as solution. However, Karl Weierstrass found a flaw in Riemann's argument, and a rigorous proof of existence was found only in 1900 by David Hilbert, using his direct method in the calculus of variations. It turns out that the existence of a solution depends delicately on the smoothness of the boundary and the prescribed data. - -For a domain $D$ having a sufficiently smooth boundary $\partial D$, the general solution to the Dirichlet problem is given by -$$ -u(x) = \int_{\partial D} \nu(s) \frac{\partial G(x, s)}{\partial n} ds, -$$ - -where $G(x, y)$ is the Green's function for the partial differential equation, and -$$ -\frac{\partial G(x, s)}{\partial n} = \widehat{n} \cdot \nabla_s G (x, s) = \sum_i n_i \frac{\partial G(x, s)}{\partial s_i} -$$ - -is the derivative of the Green's function along the inward-pointing unit normal vector $\widehat{n}$. The integration is performed on the boundary, with measure $ds$. The function $\nu(s)$ is given by the unique solution to the Fredholm integral equation of the second kind, -$$ -f(x) = -\frac{\nu(x)}{2} + \int_{\partial D} \nu(s) \frac{\partial G(x, s)}{\partial n} ds. -$$ - -The Green's function to be used in the above integral is one which vanishes on the boundary: -$$ -G(x, s) = 0 -$$ - -for $s \in \partial D$ and $x \in D$. Such a Green's function is usually a sum of the free-field Green's function and a harmonic solution to the differential equation. - -The Dirichlet problem for harmonic functions always has a solution, and that solution is unique, when the boundary is sufficiently smooth and $f(s)$ is continuous.
More precisely, it has a solution when -$$ -\partial D \in C^{1,\alpha} -$$ - -for some $\alpha \in (0, 1)$, where $C^{1,\alpha}$ denotes the Hölder condition. - -In some simple cases the Dirichlet problem can be solved explicitly. For example, the solution to the Dirichlet problem for the unit disk in R2 is given by the Poisson integral formula. - -If $f$ is a continuous function on the boundary $\partial D$ of the open unit disk $D$, then the solution to the Dirichlet problem is $u(z)$ given by - -u(z) = \begin{cases} \displaystyle \frac{1}{2\pi} \int_0^{2\pi} f(e^{i\psi}) \frac {1 - |z|^2}{|1 - ze^{-i\psi}|^2} d\psi & \text{if } z \in D, \\ f(z) & \text{if } z \in \partial D. \end{cases} - -The solution $u$ is continuous on the closed unit disk $\bar{D}$ and harmonic on $D.$ - -The integrand is known as the Poisson kernel; this solution follows from the Green's function in two dimensions: -$$ -G(z, x) = -\frac{1}{2\pi} \log|z - x| + \gamma(z, x), -$$ - -where $\gamma(z, x)$ is harmonic ($\Delta_x \gamma(z, x) = 0$) and chosen such that $G(z, x) = 0$ for $x \in \partial D$. - -For bounded domains, the Dirichlet problem can be solved using the Perron method, which relies on the maximum principle for subharmonic functions. This approach is described in many text books. It is not well-suited to describing smoothness of solutions when the boundary is smooth. Another classical Hilbert space approach through Sobolev spaces does yield such information. The solution of the Dirichlet problem using Sobolev spaces for planar domains can be used to prove the smooth version of the Riemann mapping theorem. Bell has outlined a different approach for establishing the smooth Riemann mapping theorem, based on the reproducing kernels of Szegő and Bergman, and in turn used it to solve the Dirichlet problem. The classical methods of potential theory allow the Dirichlet problem to be solved directly in terms of integral operators, for which the standard theory of compact and Fredholm operators is applicable. The same methods work equally for the Neumann problem. - -Dirichlet problems are typical of elliptic partial differential equations, and potential theory, and the Laplace equation in particular. Other examples include the biharmonic equation and related equations in elasticity theory. - -They are one of several classes of PDE problems defined by the information given at the boundary, including Neumann problems and Cauchy problems. - -Consider the Dirichlet problem for the wave equation describing a string attached between walls, with one end attached permanently and the other moving with constant velocity, i.e. the d'Alembert equation on the triangular region given by the Cartesian product of space and time: -$$ -\frac{\partial^2}{\partial t^2} u(x, t) - \frac{\partial^2}{\partial x^2} u(x, t) = 0, -$$ -$$ -u(0, t) = 0, -$$ -$$ -u(\lambda t, t) = 0. -$$ - -As one can easily check by substitution, the solution fulfilling the first condition is -$$ -u(x, t) = f(t - x) - f(x + t). -$$ - -Additionally, we want -$$ -f(t - \lambda t) - f(\lambda t + t) = 0. -$$ - -Substituting -$$ -\tau = (\lambda + 1) t, -$$ - -we get the condition of self-similarity -$$ -f(\gamma \tau) = f(\tau), -$$ - -where -$$ -\gamma = \frac{1 - \lambda}{\lambda + 1}.
-$$ - -It is fulfilled, for example, by the composite function -$$ -\sin[\log(e^{2 \pi} x)] = \sin[\log(x)] -$$ - -with -$$ -\lambda = e^{2\pi} = 1^{-i}, -$$ - -thus in general -$$ -f(\tau) = g[\log(\gamma \tau)], -$$ - -where $g$ is a periodic function with a period $\log(\gamma)$: -$$ -g[\tau + \log(\gamma)] = g(\tau), -$$ - -and we get the general solution -$$ -u(x, t) = g[\log(t - x)] - g[\log(x + t)]. -$$ diff --git a/wiki/wikipedia/2959.txt b/wiki/wikipedia/2959.txt deleted file mode 100644 index 4505d32219e4e5dd2b9ab212bdf36f6bc9f458b5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2959.txt +++ /dev/null @@ -1,115 +0,0 @@ -In arithmetic, Euclidean division, or division with remainder, is the process of dividing one integer (the dividend) by another (the divisor), in a way that produces a quotient and a remainder smaller than the divisor. A fundamental property is that the quotient and the remainder exist and are unique, under some conditions. Because of this uniqueness, Euclidean division is often considered without referring to any method of computation, and without explicitly computing the quotient and the remainder. The methods of computation are called integer division algorithms, the best known of which is long division. - -Euclidean division, and algorithms to compute it, are fundamental for many questions concerning integers, such as the Euclidean algorithm for finding the greatest common divisor of two integers, and modular arithmetic, for which only remainders are considered. The operation consisting of computing only the remainder is called the modulo operation, and is used often in both mathematics and computer science. - -Euclidean division is based on the following result, which is sometimes called Euclid's division lemma. - -Given two integers a and b, with b ≠ 0, there exist unique integers q and r such that - -a = bq + r - -and - -0 ≤ r < |b|, - -where |b| denotes the absolute value of b. - -In the above theorem, each of the four integers has a name of its own: a is called the dividend, b is called the divisor, q is called the quotient and r is called the remainder. - -The computation of the quotient and the remainder from the dividend and the divisor is called division or, in case of ambiguity, Euclidean division. The theorem is frequently referred to as the division algorithm (although it is a theorem and not an algorithm), because its proof as given below lends itself to a simple division algorithm for computing q and r (see the section Proof for more). - -Division is not defined in the case where b = 0; see division by zero. - -For the remainder and the modulo operation, there are conventions other than 0 ≤ r < |b|; see the variants described below. - -Although "Euclidean division" is named after Euclid, it seems that he did not know the existence and uniqueness theorem, and that the only computation method that he knew was the division by repeated subtraction. - -Before the discovery of the Hindu–Arabic numeral system, which was introduced in Europe during the 13th century by Fibonacci, division was extremely difficult, and only the best mathematicians were able to do it. Presently, most division algorithms, including long division, are based on this notation or its variants, such as binary numerals. A notable exception is Newton–Raphson division, which is independent of any numeral system. - -The term "Euclidean division" was introduced during the 20th century as a shorthand for "division of Euclidean rings".
It was rapidly adopted by mathematicians for distinguishing this division from the other kinds of division of numbers. - -Suppose that a pie has 9 slices and they are to be divided evenly among 4 people. Using Euclidean division, 9 divided by 4 is 2 with remainder 1. In other words, each person receives 2 slices of pie, and there is 1 slice left over. - -This can be confirmed using multiplication (the inverse of division): if each of the 4 people received 2 slices, then 4 × 2 = 8 slices were given out in total. Adding the 1 slice remaining, the result is 9 slices. In summary: 9 = 4 × 2 + 1. - -In general, if the number of slices is denoted $a$ and the number of people is denoted $b$, then one can divide the pie evenly among the people such that each person receives $q$ slices (the quotient), with some number of slices $r < b$ being the leftover (the remainder). In this case, the equation $a = bq + r$ holds. - -As an alternate example, if 9 slices were divided among 3 people instead of 4, then each would receive 3 and no slice would be left over, which means that the remainder would be zero, leading to the conclusion that 3 evenly divides 9, or that 3 divides 9. - -Euclidean division can also be extended to a negative dividend (or a negative divisor) using the same formula; for example −9 = 4 × (−3) + 3, which means that −9 divided by 4 is −3 with remainder 3. - -*If a = 7 and b = 3, then q = 2 and r = 1, since 7 = 3 × 2 + 1. - -*If a = 7 and b = −3, then q = −2 and r = 1, since 7 = −3 × (−2) + 1. - -*If a = −7 and b = 3, then q = −3 and r = 2, since −7 = 3 × (−3) + 2. - -*If a = −7 and b = −3, then q = 3 and r = 2, since −7 = −3 × 3 + 2. - -The following proof of the division theorem relies on the fact that a decreasing sequence of non-negative integers stops eventually. It is separated into two parts: one for existence and another for uniqueness of $q$ and $r$. Other proofs use the well-ordering principle (i.e., the assertion that every non-empty set of non-negative integers has a smallest element) to make the reasoning simpler, but have the disadvantage of not providing directly an algorithm for solving the division (see below for more). - -Consider first the case b < 0. Setting b′ = −b and q′ = −q, the equation a = bq + r may be rewritten as a = b′q′ + r, and the inequality 0 ≤ r < |b| may be rewritten as 0 ≤ r < b′. This reduces the existence for the case b < 0 to that of the case b > 0. - -Similarly, if a < 0 and b > 0, setting a′ = −a, q′ = −q − 1, and r′ = b − r, the equation a = bq + r may be rewritten as a′ = bq′ + r′, and the inequality 0 ≤ r < b may be rewritten as 0 < r′ ≤ b (with the convention that, if r = 0, one takes q′ = −q and r′ = 0 instead). Thus the proof of the existence is reduced to the case a ≥ 0 and b > 0, which will be considered in the remainder of the proof. - -Let q1 = 0 and r1 = a, then these are non-negative numbers such that a = bq1 + r1. If r1 < b then the division is complete, so suppose r1 ≥ b. Then defining q2 = q1 + 1 and r2 = r1 − b, one has a = bq2 + r2 with 0 ≤ r2 < r1. As there are only r1 non-negative integers less than r1, one only needs to repeat this process at most r1 times to reach the final quotient and remainder. That is, there exists a natural number k ≤ r1 such that a = bqk + rk and 0 ≤ rk < b. - -This proves the existence and also gives a simple division algorithm for computing the quotient and the remainder. However, this algorithm is not efficient, since its number of steps is of the order of a/b.
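The existence argument is effectively an algorithm. A direct transcription follows; this is a sketch of ours in Python, including the sign reductions described above, and is meant to mirror the proof rather than to be an efficient division routine.

```python
def euclidean_division(a: int, b: int) -> tuple[int, int]:
    """Return (q, r) with a == b*q + r and 0 <= r < |b|, by repeated subtraction."""
    if b == 0:
        raise ZeroDivisionError("Euclidean division by zero is undefined")
    if b < 0:
        # Reduce to b > 0: set b' = -b and q' = -q, as in the proof.
        q, r = euclidean_division(a, -b)
        return -q, r
    if a < 0:
        # Reduce to a >= 0: set a' = -a, q' = -q - 1, r' = b - r
        # (with the r = 0 convention noted above).
        q, r = euclidean_division(-a, b)
        return (-q, 0) if r == 0 else (-q - 1, b - r)
    q, r = 0, a
    while r >= b:          # on the order of a/b iterations, hence the inefficiency
        q, r = q + 1, r - b
    return q, r

# The four examples above:
assert euclidean_division(7, 3) == (2, 1)
assert euclidean_division(7, -3) == (-2, 1)
assert euclidean_division(-7, 3) == (-3, 2)
assert euclidean_division(-7, -3) == (3, 2)
```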
- -The pair of integers r and q such that a = bq + r is unique, in the sense that there cannot be another pair of integers satisfying the same condition in the Euclidean division theorem. In other words, if we have another division of a by b, say a = bq′ + r′ with 0 ≤ r′ < |b|, then we must have that - -q′ = q and r′ = r. - -To prove this statement, we first start with the assumptions that - -0 ≤ r < |b| - -0 ≤ r′ < |b| - -a = bq + r - -a = bq′ + r′ - -Subtracting the two equations yields - -b(q − q′) = r′ − r. - -So b is a divisor of r′ − r. As - -|r′ − r| < |b| - -by the above inequalities, one gets - -r′ − r = 0, - -and - -b(q − q′) = 0. - -Since b ≠ 0, we get that r = r′ and q = q′, which proves the uniqueness part of the Euclidean division theorem. - -In general, an existence proof does not provide an algorithm for computing the existing quotient and remainder, but the above proof does immediately provide an algorithm (see Division algorithm#Division by repeated subtraction), even though it is not a very efficient one as it requires as many steps as the size of the quotient. This is related to the fact that it uses only additions, subtractions and comparisons of integers, without involving multiplication, nor any particular representation of the integers such as decimal notation. - -In terms of decimal notation, long division provides a much more efficient algorithm for solving Euclidean divisions. Its generalization to binary and hexadecimal notation provides further flexibility and possibility for computer implementation. However, for large inputs, algorithms that reduce division to multiplication, such as Newton–Raphson, are usually preferred, because they only need a time which is proportional to the time of the multiplication needed to verify the result, independently of the multiplication algorithm which is used (for more, see Division algorithm#Fast division methods). - -The Euclidean division admits a number of variants, some of which are listed below. - -In Euclidean division with d as divisor, the remainder is supposed to belong to the interval [0, d) of length |d|. Any other interval of the same length may be used. More precisely, given integers $m$, $a$, $d$ with $m>0$, there exist unique integers $q$ and $r$ with $d \le r < m+d$ such that $a = mq+r$. - -In particular, if $d=-\left\lfloor \frac{m}{2} \right\rfloor$ then $-\left\lfloor \frac{m}{2} \right\rfloor \le r < m-\left\lfloor \frac{m}{2} \right\rfloor$. This division is called the centered division, and its remainder $r$ is called the centered remainder or the least absolute remainder. - -This is used for approximating real numbers: Euclidean division defines truncation, and centered division defines rounding. - -Given integers $a$, $m$ and $R,$ with $m >0$ and $\gcd(R,m) =1,$ let $R^{-1}$ be the modular multiplicative inverse of $R$ (i.e., $0 < R^{-1} < m$ and $R R^{-1} \equiv 1 \pmod{m}$) $\ldots$ - -$\ldots > 0$ for all $\mathbf{x} \in \mathbb{R}^{2}$, and - -(c) the system is coupled everywhere with either $\frac{\partial f^1}{\partial x_1} \frac{\partial f^2}{\partial x_2} \neq 0$ or $\frac{\partial f^1}{\partial x_2} \frac{\partial f^2}{\partial x_1} \neq 0$ for all $\mathbf{x} \in \mathbb{R}^2$.
diff --git a/wiki/wikipedia/2965.txt b/wiki/wikipedia/2965.txt deleted file mode 100644 index 256de5d399a91986d0341ab619b1bc39a29926be..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2965.txt +++ /dev/null @@ -1 +0,0 @@ -EQP, an abbreviation for equational prover, is an automated theorem proving program for equational logic, developed by the Mathematics and Computer Science Division of the Argonne National Laboratory. It was one of the provers used for solving a longstanding problem posed by Herbert Robbins, namely, whether all Robbins algebras are Boolean algebras. diff --git a/wiki/wikipedia/2966.txt b/wiki/wikipedia/2966.txt deleted file mode 100644 index b17ae2f36a4b30853d64e15a1bbeff14d2c90586..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2966.txt +++ /dev/null @@ -1,55 +0,0 @@ -In complex analysis, Mittag-Leffler's theorem concerns the existence of meromorphic functions with prescribed poles. Conversely, it can be used to express any meromorphic function as a sum of partial fractions. It is sister to the Weierstrass factorization theorem, which asserts existence of holomorphic functions with prescribed zeros. It is named after Gösta Mittag-Leffler. - -Let $D$ be an open set in $\mathbb C$ and $E \subset D$ a closed discrete subset. For each $a$ in $E$, let $p_a(z)$ be a polynomial in $1/(z-a)$. There is a meromorphic function $f$ on $D$ such that for each $a \in E$, the function $f(z)-p_a(z)$ has only a removable singularity at $a$. In particular, the principal part of $f$ at $a$ is $p_a(z)$. Moreover, $f$ is holomorphic on $D \setminus E$. - -One possible proof outline is as follows. If $E$ is finite, it suffices to take $f(z) = \sum_{a \in E} p_a(z)$. If $E$ is not finite, consider the finite sum $S_F(z) = \sum_{a \in F} p_a(z)$ where $F$ is a finite subset of $E$. While the $S_F(z)$ may not converge as F approaches E, one may subtract well-chosen rational functions with poles outside of D (provided by Runge's theorem) without changing the principal parts of the $S_F(z)$, and in such a way that convergence is guaranteed. - -Suppose that we desire a meromorphic function with simple poles of residue 1 at all positive integers. With notation as above, letting $p_k(z) = \frac{1}{z-k}$ and $E = \mathbb{Z}^+$, Mittag-Leffler's theorem asserts (non-constructively) the existence of a meromorphic function $f$ with principal part $p_k(z)$ at $z=k$ for each positive integer $k$. This $f$ has the desired properties. More constructively, we can let -$$ -f(z) = z\sum_{k=1}^\infty \frac{1}{k(z-k)}. -$$ - -This series converges normally on $\Complex$ (as can be shown using the M-test) to a meromorphic function with the desired properties.
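The construction is concrete enough to test numerically. Below is a small sketch of ours: truncating the series and probing it near a pole shows the residue approaching 1; the truncation level and the test points are arbitrary.

```python
# f(z) = z * sum_{k >= 1} 1/(k (z - k)), truncated at a finite number of terms.
def f(z, terms=10000):
    return z * sum(1.0 / (k * (z - k)) for k in range(1, terms + 1))

# Near the pole at z = 1, (z - 1) * f(z) should approach the residue 1.
z = 1 + 1e-6
print((z - 1) * f(z))   # approximately 1.000001

# Away from the positive integers, f takes ordinary finite values.
print(f(0.5))
```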
- -Here are some examples of pole expansions of meromorphic functions: - - \tan(z) = \sum_{n=0}^\infty \frac{8z}{(2n+1)^2\pi^2-4z^2} - - \csc(z) - -= \sum_{n \in \Z} \frac{(-1)^n}{z-n\pi} - -= \frac{1}{z} + 2z\sum_{n=1}^\infty (-1)^n \frac{1}{z^2 - (n\pi)^2} - - \sec(z) - -\equiv -\csc\left(z-\frac{\pi}{2}\right) - -= \sum_{n \in \Z} \frac{(-1)^{n-1}}{z-\left(n+\frac{1}{2}\right)\pi} - -= \sum_{n=0}^\infty \frac{(-1)^n(2n+1)\pi}{(n+\frac{1}{2})^2\pi^2-z^2} - - - - \cot(z) - -\equiv \frac{\cos (z)}{\sin (z)} - -= \sum_{n \in \Z} \frac{1}{z-n\pi} - -= \frac{1}{z} + 2z\sum_{k=1}^\infty \frac{1}{z^2 - (k\pi)^2} - - \csc^2(z) = \sum_{n \in \Z} \frac{1}{(z-n\pi)^2} - - \sec^2(z) = \frac{d}{dz}\tan(z) - -= \sum_{n=0}^\infty \frac{8((2n+1)^2\pi^2+4z^2)}{((2n+1)^2\pi^2-4z^2)^2} - - \frac{1}{z \sin(z)} - -= \frac{1}{z^2} + \sum_{n \neq 0} \frac{(-1)^n}{\pi n(z-\pi n)} - -= \frac{1}{z^2} + \sum_{n=1}^\infty {(-1)^n} \frac{2}{z^2 - (n\pi)^2} diff --git a/wiki/wikipedia/2967.txt b/wiki/wikipedia/2967.txt deleted file mode 100644 index 1847a999e2394ed565f9d4777ccd4d0eff2538d8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2967.txt +++ /dev/null @@ -1,115 +0,0 @@ -In mathematics, the Sturm sequence of a univariate polynomial p is a sequence of polynomials associated with p and its derivative by a variant of Euclid's algorithm for polynomials. Sturm's theorem expresses the number of distinct real roots of p located in an interval in terms of the number of changes of signs of the values of the Sturm sequence at the bounds of the interval. Applied to the interval of all the real numbers, it gives the total number of real roots of p. - -Whereas the fundamental theorem of algebra readily yields the overall number of complex roots, counted with multiplicity, it does not provide a procedure for calculating them. Sturm's theorem counts the number of distinct real roots and locates them in intervals. By subdividing the intervals containing some roots, it can isolate the roots into arbitrary small intervals, each containing exactly one root. This yields the oldest real-root isolation algorithm, and arbitrary-precision root-finding algorithm for univariate polynomials. - -For computing over the reals, Sturm's theorem is less efficient than other methods based on Descartes' rule of signs. However, it works on every real closed field, and, therefore, remains fundamental for the theoretical study of the computational complexity of decidability and quantifier elimination in the first order theory of real numbers. - -The Sturm sequence and Sturm's theorem are named after Jacques Charles François Sturm, who discovered the theorem in 1829. - -The Sturm chain or Sturm sequence of a univariate polynomial P(x) with real coefficients is the sequence of polynomials $P_0, P_1, \ldots,$ such that - -\begin{align} - -P_0&=P,\\ - -P_1&=P',\\ - -P_{i+1}&=-\operatorname{rem}(P_{i-1},P_i), - -\end{align} - -for i ≥ 1, where P' is the derivative of P, and $\operatorname{rem}(P_{i-1},P_i)$ is the remainder of the Euclidean division of $P_{i-1}$ by $P_i.$ The length of the Sturm sequence is at most the degree of P. - -The number of sign variations at ξ of the Sturm sequence of P is the number of sign changes–ignoring zeros—in the sequence of real numbers -$$ -P_0(\xi), P_1(\xi),P_2(\xi),\ldots. -$$ - -This number of sign variations is denoted here V(ξ). 
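Both the chain and the sign-variation count are directly computable. The sketch below uses sympy's built-in sturm routine with exact rational arithmetic; the helper V is our own naming, and the test polynomial anticipates the example worked below.

```python
from sympy import sturm, symbols, sign

x = symbols('x')
p = x**4 + x**3 - x - 1
chain = sturm(p, x)   # P0 = p, P1 = p', then successive negated remainders

def V(xi):
    """Number of sign changes of the chain at xi, ignoring zeros."""
    signs = [s for s in (sign(q.subs(x, xi)) for q in chain) if s != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# All real roots of p lie well inside (-10**6, 10**6], so by the theorem
# stated next, V(a) - V(b) counts every distinct real root:
print(V(-10**6) - V(10**6))   # prints 2, matching the example worked below
```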
- -Sturm's theorem states that, if P is a square-free polynomial, the number of distinct real roots of P in the half-open interval (a, b] is V(a) − V(b) (here, a and b are real numbers such that a < b). - -The theorem extends to unbounded intervals by defining the sign at +∞ of a polynomial as the sign of its leading coefficient (that is, the coefficient of the term of highest degree). At −∞ the sign of a polynomial is the sign of its leading coefficient for a polynomial of even degree, and the opposite sign for a polynomial of odd degree. - -In the case of a non-square-free polynomial, if neither a nor b is a multiple root of p, then V(a) − V(b) is the number of distinct real roots of P. - -The proof of the theorem is as follows: when the value of x increases from a to b, it may pass through a zero of some $P_i$ (i > 0); when this occurs, the number of sign variations of $(P_{i-1}, P_i, P_{i+1})$ does not change. When x passes through a root of $P_0=P,$ the number of sign variations of $(P_0, P_1)$ decreases from 1 to 0. These are the only values of x where some sign may change. - -Suppose we wish to find the number of roots in some range for the polynomial $p(x)=x^4+x^3-x-1$. So - -\begin{align} p_0(x) &=p(x)=x^4+x^3-x-1 \\ p_1(x)&=p'(x)=4x^3+3x^2-1 \end{align} - -The remainder of the Euclidean division of p0 by p1 is $-\tfrac{3}{16}x^2-\tfrac{3}{4}x-\tfrac{15}{16};$ multiplying it by −1 we obtain -$$ -p_2(x)=\tfrac{3}{16}x^2+\tfrac{3}{4}x+\tfrac{15}{16}. -$$ - -Next dividing p1 by p2 and multiplying the remainder by −1, we obtain -$$ -p_3(x)=-32x-64. -$$ - -Now dividing p2 by p3 and multiplying the remainder by −1, we obtain -$$ -p_4(x)=-\tfrac{3}{16}. -$$ - -As this is a constant, this finishes the computation of the Sturm sequence. - -To find the number of real roots of $p_0$ one has to evaluate the sequences of the signs of these polynomials at −∞ and ∞, which are respectively (+, −, +, +, −) and (+, +, +, −, −). Thus -$$ -V(-\infty)-V(+\infty) = 3-1=2, -$$ - -which shows that p has two real roots. - -This can be verified by noting that p(x) can be factored as $(x^2 - 1)(x^2 + x + 1)$, where the first factor has the roots −1 and 1, and the second factor has no real roots. This last assertion results from the quadratic formula, and also from Sturm's theorem, which gives the sign sequences (+, −, −) at −∞ and (+, +, −) at +∞. - -Sturm sequences have been generalized in two directions. To define each polynomial in the sequence, Sturm used the negative of the remainder of the Euclidean division of the two preceding ones. The theorem remains true if one replaces the negative of the remainder by its product or quotient by a positive constant or the square of a polynomial. It is also useful (see below) to consider sequences where the second polynomial is not the derivative of the first one. - -A generalized Sturm sequence is a finite sequence of polynomials with real coefficients -$$ -P_0, P_1, \dots, P_m -$$ - -such that - -* the degrees are decreasing after the first one: $\deg P_{i} <\deg P_{i-1}$ for i = 2, ..., m; - -* $P_m$ does not have any real root or does not change sign near its real roots; - -* if $P_i(\xi) = 0$ for 0 < i < m and ξ a real number, then $P_{i-1}(\xi) P_{i+1}(\xi) < 0$. - -The last condition implies that two consecutive polynomials do not have any common real root. In particular the original Sturm sequence is a generalized Sturm sequence, if (and only if) the polynomial has no multiple real root (otherwise the first two polynomials of its Sturm sequence have a common root).
- -When computing the original Sturm sequence by Euclidean division, it may happen that one encounters a polynomial that has a factor that is never negative, such as $x^2$ or $x^2+1$. In this case, if one continues the computation with the polynomial replaced by its quotient by the nonnegative factor, one gets a generalized Sturm sequence, which may also be used for computing the number of real roots, since the proof of Sturm's theorem still applies (because of the third condition). This may sometimes simplify the computation, although it is generally difficult to find such nonnegative factors, except for even powers of x. - -In computer algebra, the polynomials that are considered have integer coefficients or may be transformed to have integer coefficients. The Sturm sequence of a polynomial with integer coefficients generally contains polynomials whose coefficients are not integers (see the above example). - -To avoid computation with rational numbers, a common method is to replace Euclidean division by pseudo-division for computing polynomial greatest common divisors. This amounts to replacing the remainder sequence of the Euclidean algorithm by a pseudo-remainder sequence, a pseudo-remainder sequence being a sequence $p_0, \ldots, p_k$ of polynomials such that there are constants $a_i$ and $b_i$ such that $b_ip_{i+1}$ is the remainder of the Euclidean division of $a_ip_{i-1}$ by $p_i.$ (The different kinds of pseudo-remainder sequences are defined by the choice of $a_i$ and $b_i;$ typically, $a_i$ is chosen for not introducing denominators during Euclidean division, and $b_i$ is a common divisor of the coefficients of the resulting remainder; see Pseudo-remainder sequence for details.) - -For example, the remainder sequence of the Euclidean algorithm is a pseudo-remainder sequence with $a_i=b_i=1$ for every i, and the Sturm sequence of a polynomial is a pseudo-remainder sequence with $a_i=1$ and $b_i=-1$ for every i. - -Various pseudo-remainder sequences have been designed for computing greatest common divisors of polynomials with integer coefficients without introducing denominators (see Pseudo-remainder sequence). They can all be made generalized Sturm sequences by choosing the sign of the $b_i$ to be the opposite of the sign of the $a_i.$ This allows the use of Sturm's theorem with pseudo-remainder sequences. - -For a polynomial with real coefficients, root isolation consists of finding, for each real root, an interval that contains this root, and no other roots. - -This is useful for root finding, allowing the selection of the root to be found and providing a good starting point for fast numerical algorithms such as Newton's method; it is also useful for certifying the result, since if Newton's method converges outside the interval one may immediately deduce that it converges to the wrong root. - -Root isolation is also useful for computing with algebraic numbers. For computing with algebraic numbers, a common method is to represent them as a pair of a polynomial of which the algebraic number is a root, and an isolation interval. For example, $\sqrt 2$ may be unambiguously represented by $(x^2-2, [0,2]).$ - -Sturm's theorem provides a way for isolating real roots that is less efficient (for polynomials with integer coefficients) than other methods involving Descartes' rule of signs. However, it remains useful in some circumstances, mainly for theoretical purposes, for example for algorithms of real algebraic geometry that involve infinitesimals.
- -For isolating the real roots, one starts from an interval $(a,b]$ containing all the real roots, or those of interest (often, typically in physical problems, only positive roots are interesting), and one computes $V(a)$ and $V(b).$ For defining this starting interval, one may use bounds on the size of the roots. Then, one divides this interval in two, by choosing c in the middle of $(a,b].$ The computation of $V(c)$ provides the number of real roots in $(a,c]$ and $(c,b],$ and one may repeat the same operation on each subinterval. When one encounters, during this process, an interval that does not contain any root, it may be removed from the list of intervals to consider. When one encounters an interval containing exactly one root, one may stop dividing it, as it is an isolation interval. The process stops eventually, when only isolating intervals remain. - -This isolating process may be used with any method for computing the number of real roots in an interval. Theoretical complexity analysis and practical experience show that methods based on Descartes' rule of signs are more efficient. It follows that, nowadays, Sturm sequences are rarely used for root isolation. - -Generalized Sturm sequences allow counting the roots of a polynomial where another polynomial is positive (or negative), without computing these roots explicitly. If one knows an isolating interval for a root of the first polynomial, this also allows finding the sign of the second polynomial at this particular root of the first polynomial, without computing a better approximation of the root. - -Let P(x) and Q(x) be two polynomials with real coefficients such that P and Q have no common root and P has no multiple roots. In other words, P and P′Q are coprime polynomials. This restriction does not really affect the generality of what follows, as GCD computations allow reducing the general case to this case, and the cost of the computation of a Sturm sequence is the same as that of a GCD. - -Let W(a) denote the number of sign variations at a of a generalized Sturm sequence starting from P and P′Q. If a < b are two real numbers, then W(a) − W(b) is the number of roots α of P in the interval $(a,b]$ such that Q(α) > 0 minus the number of roots in the same interval such that Q(α) < 0. Combined with the total number of roots of P in the same interval given by Sturm's theorem, this gives the number of roots α of P such that Q(α) > 0 and the number of roots such that Q(α) < 0. diff --git a/wiki/wikipedia/2968.txt b/wiki/wikipedia/2968.txt deleted file mode 100644 index 0771a26bf68d122e48b600d3f04fa65237624dd0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2968.txt +++ /dev/null @@ -1,22 +0,0 @@ -Proofs That Really Count: the Art of Combinatorial Proof is an undergraduate-level mathematics book on combinatorial proofs of mathematical identities. That is, it concerns equations between two integer-valued formulas, shown to be equal either by showing that both sides of the equation count the same type of mathematical objects, or by finding a one-to-one correspondence between the different types of object that they count. It was written by Arthur T. Benjamin and Jennifer Quinn, and published in 2003 by the Mathematical Association of America as volume 27 of their Dolciani Mathematical Expositions series. It won the Beckenbach Book Prize of the Mathematical Association of America.
- -The book provides combinatorial proofs of thirteen theorems in combinatorics and 246 numbered identities (collated in an appendix). Several additional "uncounted identities" are also included. Many proofs are based on a visual-reasoning method that the authors call "tiling", and in a foreword, the authors describe their work as providing a follow-up for counting problems of the Proofs Without Words books by Roger B. Nelsen. - -The first three chapters of the book start with integer sequences defined by linear recurrence relations, the prototypical example of which is the sequence of Fibonacci numbers. These numbers can be given a combinatorial interpretation as the number of ways of tiling a $1\times n$ strip of squares with tiles of two types, single squares and dominos; this interpretation can be used to prove many of the fundamental identities involving the Fibonacci numbers, and generalized to similar relations about other sequences defined similarly, such as the Lucas numbers, using "circular tilings and colored tilings". For instance, for the Fibonacci numbers, considering whether a tiling does or does not connect positions $a$ and $a+1$ of a strip of length $a+b$ immediately leads to the identity -$$ -F_{a+b}=F_{a-1}F_{b-1}+F_aF_b. -$$ - -Chapters four through seven of the book concern identities involving continued fractions, binomial coefficients, harmonic numbers, Stirling numbers, and factorials. The eighth chapter branches out from combinatorics to number theory and abstract algebra, and the final chapter returns to the Fibonacci numbers with more advanced material on their identities. - -The book is aimed at undergraduate mathematics students, but the material is largely self-contained, and it could also be read by advanced high school students. Additionally, many of the book's chapters are themselves self-contained, allowing for arbitrary reading orders or for excerpts of this material to be used in classes. Although it is structured as a textbook with exercises in each chapter, reviewer Robert Beezer writes that it is "not meant as a textbook", but rather intended as a "resource" for teachers and researchers. Echoing this, reviewer Joe Roberts writes that despite its elementary nature, this book should be "valuable as a reference ... for anyone working with such identities". - -In an initial review, Darren Glass complained that many of the results are presented as dry formulas, without any context or explanation for why they should be interesting or useful, and that this lack of context would be an obstacle to using it as the main text for a class. Nevertheless, in a second review after a year of owning the book, he wrote that he was "lending it out to person after person". - -Reviewer Peter G. Anderson praises the book's "beautiful ways of seeing old, familiar mathematics and some new mathematics too", calling it "a treasure". Reviewer Gerald L. Alexanderson describes the book's proofs as "ingenious, concrete and memorable". The award citation for the book's 2006 Beckenbach Book Prize states that it "illustrates in a magical way the pervasiveness and power of counting techniques throughout mathematics. It is one of those rare books that will appeal to the mathematical professional and seduce the neophyte." - -One of the open problems from the book, seeking a bijective proof of an identity combining binomial coefficients with Fibonacci numbers, was subsequently answered positively by Doron Zeilberger.
Zeilberger discusses the solution on the web site where he links a preprint of his paper. - -Proofs That Really Count won the 2006 Beckenbach Book Prize of the Mathematical Association of America, and the 2010 CHOICE Award for Outstanding Academic Title of the American Library Association. It has been listed by the Basic Library List Committee of the Mathematical Association of America as essential for inclusion in any undergraduate mathematics library. diff --git a/wiki/wikipedia/2969.txt b/wiki/wikipedia/2969.txt deleted file mode 100644 index cf629708d19cae4bd533a9afcd09ca4955f75708..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2969.txt +++ /dev/null @@ -1,94 +0,0 @@ -In mathematics, the Abel–Ruffini theorem (also known as Abel's impossibility theorem) states that there is no solution in radicals to general polynomial equations of degree five or higher with arbitrary coefficients. Here, general means that the coefficients of the equation are viewed and manipulated as indeterminates. - -The theorem is named after Paolo Ruffini, who made an incomplete proof in 1799, and Niels Henrik Abel, who provided a proof in 1824. - -Abel–Ruffini theorem refers also to the slightly stronger result that there are equations of degree five and higher that cannot be solved by radicals. This does not follow from Abel's statement of the theorem, but is a corollary of his proof, as his proof is based on the fact that some polynomials in the coefficients of the equation are not the zero polynomial. This improved statement follows directly from Galois theory. Galois theory implies also that -$$ -x^5-x-1=0 -$$ - -is the simplest equation that cannot be solved in radicals, and that almost all polynomials of degree five or higher cannot be solved in radicals. - -The impossibility of solving in degree five or higher contrasts with the case of lower degree: one has the quadratic formula, the cubic formula, and the quartic formula for degrees two, three, and four, respectively. - -Polynomial equations of degree two can be solved with the quadratic formula, which has been known since antiquity. Similarly the cubic formula for degree three, and the quartic formula for degree four, were found during the 16th century. At that time a fundamental problem was whether equations of higher degree could be solved in a similar way. - -The fact that every polynomial equation of positive degree has solutions, possibly non-real, was asserted during the 17th century, but completely proved only at the beginning of the 19th century. This is the fundamental theorem of algebra, which does not provide any tool for computing exactly the solutions, although Newton's method allows approximating the solutions to any desired accuracy. - -From the 16th century to the beginning of the 19th century, the main problem of algebra was to search for a formula for the solutions of polynomial equations of degree five and higher, hence the name the "fundamental theorem of algebra". This meant a solution in radicals, that is, an expression involving only the coefficients of the equation, and the operations of addition, subtraction, multiplication, division, and nth root extraction. - -The Abel–Ruffini theorem proves that this is impossible. However, this impossibility does not imply that a specific equation of any degree cannot be solved in radicals. On the contrary, there are equations of any degree that can be solved in radicals. This is the case of the equation $x^n-1=0$ for any n, and the equations defined by cyclotomic polynomials, all of whose solutions can be expressed in radicals.
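This contrast is visible in a computer algebra system. The sketch below uses sympy, which returns explicit radical expressions when it finds them and opaque CRootOf placeholders when a quintic has no radical solution; its output should be read as illustration, not proof.

```python
from sympy import symbols, solve

x = symbols('x')

# x**5 - 1 = 0 is solvable in radicals: the fifth roots of unity come back
# as explicit expressions involving sqrt(5) and I.
print(solve(x**5 - 1, x))

# x**5 - x - 1 = 0 is not solvable in radicals: sympy can only return
# symbolic placeholders for its roots.
print(solve(x**5 - x - 1, x))   # [CRootOf(x**5 - x - 1, 0), ...]
```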
- -Abel's proof of the theorem does not explicitly contain the assertion that there are specific equations that cannot be solved by radicals. Such an assertion is not a consequence of Abel's statement of the theorem, as the statement does not exclude the possibility that "every particular quintic equation might be soluble, with a special formula for each equation." However, the existence of specific equations that cannot be solved in radicals seems to be a consequence of Abel's proof, as the proof uses the fact that some polynomials in the coefficients are not the zero polynomial, and, given a finite number of polynomials, there are values of the variables at which none of the polynomials takes the value zero. - -Soon after Abel's publication of his proof, Évariste Galois introduced a theory, now called Galois theory, that allows deciding, for any given equation, whether it is solvable in radicals (this is theoretical, as, in practice, this decision may need huge computations which can be difficult, even with powerful computers). This decision is done by introducing auxiliary polynomials, called resolvents, whose coefficients depend polynomially upon those of the original polynomial. The polynomial is solvable in radicals if and only if some resolvent has a rational root. - -The proof of the Abel–Ruffini theorem predates Galois theory. However, Galois theory allows a better understanding of the subject, and modern proofs are generally based on it, while the original proofs of the Abel–Ruffini theorem are still presented for historical purposes. - -The proofs based on Galois theory comprise four main steps: the characterization of solvable equations in terms of field theory; the use of the Galois correspondence between subfields of a given field and the subgroups of its Galois group for expressing this characterization in terms of solvable groups; the proof that the symmetric group is not solvable if its degree is five or higher; and the existence of polynomials with a symmetric Galois group. - -An algebraic solution of a polynomial equation is an expression involving the four basic arithmetic operations (addition, subtraction, multiplication, and division), and root extractions. Such an expression may be viewed as the description of a computation that starts from the coefficients of the equation to be solved and proceeds by computing some numbers, one after the other. - -At each step of the computation, one may consider the smallest field that contains all numbers that have been computed so far. This field is changed only for the steps involving the computation of an nth root. - -So, an algebraic solution produces a sequence -$$ -F_0\subseteq F_1\subseteq \cdots \subseteq F_k -$$ - -of fields, and elements $x_i\in F_i$ such that -$$ -F_i=F_{i-1}(x_i) -$$ for $i=1,\ldots, k,$ with $x_i^{n_i}\in F_{i-1}$ for some integer $n_i>1.$ An algebraic solution of the initial polynomial equation exists if and only if there exists such a sequence of fields with $F_k$ containing a solution. - -For having normal extensions, which are fundamental for the theory, one must refine the sequence of fields as follows.
If $F_{i-1}$ does not contain all $n_i$-th roots of unity, one introduces the field $K_i$ that extends $F_{i-1}$ by a primitive $n_i$-th root of unity, and one redefines $F_i$ as $K_i(x_i).$ - -So, if one starts from a solution in terms of radicals, one gets an increasing sequence of fields such that the last one contains the solution, and each is a normal extension of the preceding one with a Galois group that is cyclic. - -Conversely, if one has such a sequence of fields, the equation is solvable in terms of radicals. For proving this, it suffices to prove that a normal extension with a cyclic Galois group can be built from a succession of radical extensions. - -The Galois correspondence establishes a one-to-one correspondence between the subextensions of a normal field extension $F/E$ and the subgroups of the Galois group of the extension. This correspondence maps a field K such that $E\subseteq K \subseteq F$ to the Galois group $\operatorname{Gal}(F/K)$ of the automorphisms of F that leave K fixed, and, conversely, maps a subgroup H of $\operatorname{Gal}(F/E)$ to the field of the elements of F that are fixed by H. - -The preceding section shows that an equation is solvable in terms of radicals if and only if the Galois group of its splitting field (the smallest field that contains all the roots) is solvable, that is, it contains a sequence of subgroups such that each is normal in the preceding one, with a quotient group that is cyclic. (Solvable groups are commonly defined with abelian instead of cyclic quotient groups, but the fundamental theorem of finite abelian groups shows that the two definitions are equivalent). - -So, for proving the Abel–Ruffini theorem, it remains to prove that the symmetric group $S_5$ is not solvable, and that there are polynomials with symmetric Galois group. - -For n > 4, the symmetric group $\mathcal S_n$ of degree n has only the alternating group $\mathcal A_n$ as a nontrivial normal subgroup. For n > 4, the alternating group $\mathcal A_n$ is non-abelian and simple (that is, it does not have any nontrivial normal subgroup). This implies that both $\mathcal A_n$ and $\mathcal S_n$ are not solvable for n > 4. Thus, the Abel–Ruffini theorem results from the existence of polynomials with a symmetric Galois group; this will be shown in the next section. - -On the other hand, for n ≤ 4, the symmetric group and all its subgroups are solvable. In a sense, this explains the existence of the quadratic, cubic, and quartic formulas. - -The general or generic polynomial equation of degree n is the equation -$$ -x^n+a_1x^{n-1}+ \cdots+ a_{n-1}x+a_n=0, -$$ - -where $a_1,\ldots, a_n$ are distinct indeterminates. This is an equation defined over the field $F=\Q(a_1,\ldots,a_n)$ of the rational fractions in $a_1,\ldots, a_n$ with rational number coefficients. The original Abel–Ruffini theorem asserts that, for n > 4, this equation is not solvable in radicals. In view of the preceding sections, this results from the fact that the Galois group over F of the equation is the symmetric group $\mathcal S_n$ (this Galois group is the group of the field automorphisms of the splitting field of the equation that fix the elements of F, where the splitting field is the smallest field containing all the roots of the equation). - -For proving that the Galois group is $\mathcal S_n,$ it is simpler to start from the roots. Let $x_1, \ldots, x_n$ be new indeterminates, aimed to be the roots, and consider the polynomial -$$ -P(x)=x^n+b_1x^{n-1}+ \cdots+ b_{n-1}x+b_n= (x-x_1)\cdots (x-x_n).
-$$ - -Let $H=\Q(x_1,\ldots,x_n)$ be the field of the rational fractions in $x_1, \ldots, x_n,$ and $K=\Q(b_1,\ldots, b_n)$ be its subfield generated by the coefficients of $P(x).$ The permutations of the $x_i$ induce automorphisms of H. Vieta's formulas imply that every element of K is a symmetric function of the $x_i,$ and is thus fixed by all these automorphisms. It follows that the Galois group $\operatorname{Gal}(H/K)$ is the symmetric group $\mathcal S_n.$ - -The fundamental theorem of symmetric polynomials implies that the $b_i$ are algebraically independent, and thus that the map that sends each $a_i$ to the corresponding $b_i$ is a field isomorphism from F to K. This means that one may consider $P(x)=0$ as a generic equation. This finishes the proof that the Galois group of a general equation is the symmetric group, and thus proves the original Abel–Ruffini theorem, which asserts that the general polynomial equation of degree n cannot be solved in radicals for n > 4. - -The equation $x^5-x-1=0$ is not solvable in radicals, as will be explained below. - -Let q be $x^5-x-1$. - -Let G be its Galois group, which acts faithfully on the set of complex roots of q. Numbering the roots lets one identify G with a subgroup of the symmetric group $\mathcal S_5$. - -Since $q \bmod 2$ factors as $(x^2 + x + 1)(x^3 + x^2 + 1)$ in $\mathbb{F}_2[x]$, the group G contains a permutation g that is a product of disjoint cycles of lengths 2 and 3 (in general, when a monic integer polynomial reduces modulo a prime to a product of distinct monic irreducible polynomials, the degrees of the factors give the lengths of the disjoint cycles in some permutation belonging to the Galois group); then G also contains $g^3$, which is a transposition. - -Since $q \bmod 3$ is irreducible in $\mathbb{F}_3[x]$, the same principle shows that G contains a 5-cycle. Because 5 is prime, any transposition and 5-cycle in $\mathcal S_5$ generate the whole group. Thus $G = \mathcal S_5$. Since the group $\mathcal S_5$ is not solvable, the equation $x^5-x-1=0$ is not solvable in radicals. - -Testing whether a specific quintic is solvable in radicals can be done by using Cayley's resolvent. This is a univariate polynomial of degree six whose coefficients are polynomials in the coefficients of a generic quintic. A specific irreducible quintic is solvable in radicals if and only if, when its coefficients are substituted in Cayley's resolvent, the resulting sextic polynomial has a rational root. - -Around 1770, Joseph Louis Lagrange began the groundwork that unified the many different tricks that had been used up to that point to solve equations, relating them to the theory of groups of permutations, in the form of Lagrange resolvents. This innovative work by Lagrange was a precursor to Galois theory, and its failure to develop solutions for equations of fifth and higher degrees hinted that such solutions might be impossible, but it did not provide conclusive proof. The first person who conjectured that the problem of solving quintics by radicals might be impossible was Carl Friedrich Gauss, who wrote in 1798 in section 359 of his book Disquisitiones Arithmeticae (which would be published only in 1801) that "there is little doubt that this problem does not so much defy modern methods of analysis as that it proposes the impossible".
The next year, in his thesis, he wrote "After the labors of many geometers left little hope of ever arriving at the resolution of the general equation algebraically, it appears more and more likely that this resolution is impossible and contradictory." And he added "Perhaps it will not be so difficult to prove, with all rigor, the impossibility for the fifth degree. I shall set forth my investigations of this at greater length in another place." Actually, Gauss published nothing else on this subject. However, in general, Ruffini's proof was not considered convincing. Abel wrote: "The first and, if I am not mistaken, the only one who, before me, has sought to prove the impossibility of the algebraic solution of general equations is the mathematician Ruffini. But his memoir is so complicated that it is very difficult to determine the validity of his argument. It seems to me that his argument is not completely satisfying." In 1830, Galois (at the age of 18) submitted to the Paris Academy of Sciences a memoir on his theory of solvability by radicals, which was ultimately rejected in 1831 as being too sketchy and for giving a condition in terms of the roots of the equation instead of its coefficients. Galois was aware of the contributions of Ruffini and Abel, since he wrote "It is a common truth, today, that the general equation of degree greater than 4 cannot be solved by radicals... this truth has become common (by hearsay) despite the fact that geometers have ignored the proofs of Abel and Ruffini..." Galois then died in 1832 and his paper Mémoire sur les conditions de resolubilité des équations par radicaux remained unpublished until 1846, when it was published by Joseph Liouville accompanied by some of his own explanations. Prior to this publication, Liouville announced Galois' result to the Academy in a speech he gave on 4 July 1843. When Pierre Wantzel published his own proof of the theorem in 1845, he was already aware of the contributions by Galois, and he mentions that, whereas Abel's proof is valid only for general polynomials, Galois' approach can be used to provide a concrete polynomial of degree 5 whose roots cannot be expressed in radicals from its coefficients. - -In 1963, Vladimir Arnold discovered a topological proof of the Abel–Ruffini theorem, which served as a starting point for topological Galois theory. diff --git a/wiki/wikipedia/297.txt b/wiki/wikipedia/297.txt deleted file mode 100644 index b749a479036cb35fa793f26fe25ee9257c84b3d8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/297.txt +++ /dev/null @@ -1,24 +0,0 @@ -In mathematics, Kummer's congruences are some congruences involving Bernoulli numbers, found by Ernst Eduard Kummer. - -Kubota and Leopoldt used Kummer's congruences to define the p-adic zeta function. - -The simplest form of Kummer's congruence states that -$$ - \frac{B_h}{h}\equiv \frac{B_k}{k} \pmod p \text{ whenever } h\equiv k \pmod {p-1} -$$ - -where p is a prime, h and k are positive even integers not divisible by p−1, and the numbers $B_h$ are Bernoulli numbers. - -More generally, if h and k are positive even integers not divisible by p − 1, then -$$ - (1-p^{h-1})\frac{B_h}{h}\equiv (1-p^{k-1})\frac{B_k}{k} \pmod {p^{a+1}} -$$ - -whenever -$$ - h\equiv k\pmod {\varphi(p^{a+1})} -$$ - -where $\varphi(p^{a+1})$ is the Euler totient function evaluated at $p^{a+1}$ and a is a non-negative integer. At a = 0, the expression takes the simpler form, as seen above.
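- -As a quick numerical sanity check (a minimal sketch added here, not part of the original article; it assumes SymPy's bernoulli function), one can verify instances of the simplest form of the congruence directly. For rationals, the congruence means that p divides the numerator of the difference written in lowest terms.
-
-from sympy import bernoulli
-
-def kummer_holds(p, h, k):
-    # Checks B_h/h ≡ B_k/k (mod p) for even h, k not divisible by p-1,
-    # with h ≡ k (mod p-1).
-    diff = bernoulli(h) / h - bernoulli(k) / k
-    return diff.p % p == 0  # diff.p is the numerator of the reduced fraction
-
-assert kummer_holds(5, 2, 6)  # B_2/2 - B_6/6 = 1/12 - 1/252 = 5/63
-assert kummer_holds(7, 2, 8)  # B_2/2 - B_8/8 = 1/12 + 1/240 = 7/80
-print("Kummer's congruence holds in the sample cases")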
- -The two sides of the Kummer congruence are essentially values of the p-adic zeta function, and the Kummer congruences imply that the p-adic zeta function is continuous on the negative integers, and so can be extended by continuity to all p-adic integers. diff --git a/wiki/wikipedia/2970.txt b/wiki/wikipedia/2970.txt deleted file mode 100644 index 1f6108ae9a8a81301d01c0ef551029a82d667cea..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2970.txt +++ /dev/null @@ -1,15 +0,0 @@ -In database systems, atomicity is one of the ACID (Atomicity, Consistency, Isolation, Durability) transaction properties. An atomic transaction is an indivisible and irreducible series of database operations such that either all of them occur, or none of them does. A guarantee of atomicity prevents updates to the database from occurring only partially, which can cause greater problems than rejecting the whole series outright. As a consequence, the transaction cannot be observed to be in progress by another database client. At one moment in time, it has not yet happened, and at the next it has already occurred in whole (or nothing happened if the transaction was cancelled in progress). - -An example of an atomic transaction is a monetary transfer from bank account A to account B. It consists of two operations, withdrawing the money from account A and saving it to account B. Performing these operations in an atomic transaction ensures that the database remains in a consistent state, that is, money is neither lost nor created if either of those two operations fails. - -The same term is also used in the definition of First normal form in database systems, where it instead refers to the concept that the values for fields may not consist of multiple smaller values that could be decomposed, such as a string into which multiple names, numbers, dates, or other types may be packed. - -Atomicity does not behave completely orthogonally with regard to the other ACID properties of the transactions. For example, isolation relies on atomicity to roll back changes in the event of isolation failures such as deadlock; consistency also relies on rollback in the event of a consistency-violation by an illegal transaction. Finally, atomicity itself relies on durability to ensure the atomicity of transactions even in the face of external failures. - -As a result of this, failure to detect errors and roll back the enclosing transaction may cause failures of isolation and consistency. - -Typically, systems implement atomicity by providing some mechanism to indicate which transactions have started and which finished, or by keeping a copy of the data before any changes occurred (read-copy-update). Several filesystems have developed methods for avoiding the need to keep multiple copies of data, using journaling (see journaling file system). Databases usually implement this using some form of logging/journaling to track changes. The system synchronizes the logs (often the metadata) as necessary after changes have successfully taken place. Afterwards, crash recovery ignores incomplete entries. Although implementations vary depending on factors such as concurrency issues, the principle of atomicity – i.e. complete success or complete failure – remains. - -Ultimately, any application-level implementation relies on operating-system functionality. At the file-system level, POSIX-compliant systems provide system calls such as open(2) and flock(2) that allow applications to atomically open or lock a file.
At the process level, POSIX Threads provide adequate synchronization primitives. - -The hardware level requires atomic operations such as Test-and-set, Fetch-and-add, Compare-and-swap, or Load-Link/Store-Conditional, together with memory barriers. Portable operating systems cannot simply block interrupts to implement synchronization, since hardware that lacks concurrent execution (such as hyper-threading or multi-processing) is now extremely rare. diff --git a/wiki/wikipedia/2971.txt b/wiki/wikipedia/2971.txt deleted file mode 100644 index 2dbb7641dcf87e9750548b3432638dc3c7243386..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2971.txt +++ /dev/null @@ -1,11 +0,0 @@ -TitanFile Inc. is a Canadian company headquartered in Toronto, Ontario. TitanFile provides a secure way for professionals to communicate and share files with their clients. The company is one of the secure cloud computing pioneers advocating for stronger privacy policies in Canada as well as the United States. - -TitanFile's secure document transfer solutions are offered at a reduced cost to non-profit organizations and are accepted by many government organizations due to its full privacy policy compliance. - -In 2010, the company won the Nova Scotia Co-operative's Best Big Idea 2010 competition and Dyn and CloudCamp's Cloudy Awards 2011. In March 2011, co-founders Milan Vrekic and Tony Abou-Assaleh launched TitanFile publicly, and in December 2011, TitanFile announced their first expansion plans into Kitchener, Ontario. The company was recognized in Canada's Top 25 ICT Up & Coming List by Branham300 and was awarded runner-up in Backbone magazine's Alpha Exchange Innovation Campaign Pitch-Off in Toronto. - -On October 11, 2012, the company revealed that it was relaunching the original TitanFile design to accommodate a more collaborative environment for their users. That same day it announced a $1.1 million financing round from investors at Innovacorp, First Angel Network (FAN), and a handful of private investors. - -On October 13, 2013, TitanFile released a suite of enterprise features to cater to the needs of modern professional teams. In December 2013, the company was part of the "48 Hours in the Valley" program held in San Francisco. The company finished the year by partnering with Hitachi Solutions America, a leader in cloud security, to create a more secure system with enhanced user controls and mobile capabilities. - -TitanFile was originally designed to be a robust and reliable one-way file transfer solution for users wishing to send and receive electronic documents that were too big or too confidential for email. In 2012, the company shifted its focus to secure client correspondence. Today, TitanFile is a web and mobile application that combines file management and sharing, instant messaging, and security. diff --git a/wiki/wikipedia/2972.txt b/wiki/wikipedia/2972.txt deleted file mode 100644 index f4a76a432bcbc9bb1a6993c32bd78869432171f8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2972.txt +++ /dev/null @@ -1,29 +0,0 @@ -SafePeak Technologies is a software company founded in 2007 in Israel. It markets products for big data related to relational database management systems. - -SafePeak Technologies, originally DCF Technologies Ltd, was founded in 2007 by Vladi Vexler. It operated in stealth mode until 2009. - -Between 2009 and 2013, the company partnered with distributors and technology partners from Israel (Ness, Valinor), Greece, the USA, and Hong Kong.
- -In 2013, SafePeak partnered with Amazon Web Services on Microsoft SQL Server databases. - -In January 2014, SafePeak Technologies entered into a technology IP acquisition agreement with ScaleBase, a Boston-based US company led by Ram Metser. - -SafePeak Technologies developed automated dynamic caching technology for resolving the scalability and performance problems of relational databases such as SQL Server and MySQL. The Dynamic Database Caching technology was invented, patented, and developed by SafePeak Technologies. - -SafePeak technology is designed to transform existing, working applications and databases into scalable, mostly-in-memory, high-performance, low-latency, high-load database systems running on commodity hardware. The software integrates seamlessly into the existing architecture and works in private, public, and hybrid cloud environments. It resolves data access bottlenecks and latency without any change to existing applications or databases. - -SafePeak caching is focused on caching the result-sets of queries and stored procedures, storing the data entirely in a RAM-based cache; no disk I/O is required for query operations. The dynamic nature of the cache makes it: a) application agnostic, as it does not require application or database code changes or additions; b) broadly applicable, as any read-oriented queries and stored procedures are cacheable; c) never stale, providing automated transaction ACID-level data correctness. - -After installation, the application connection string is set to the SafePeak hostname or server IP as the data source. SafePeak works with any standard Ado.Net, ODBC, JDBC or other database connection drivers. - -SafePeak fully fits 3rd-party applications or platforms, as it requires no code changes at the application and database levels. - -* Reverse Proxy: SafePeak acts as a reverse proxy for database connectivity, implementing the database networking-level protocol, like TDS (Tabular Data Stream) in SQL Server. Client applications create standard connections to SafePeak, and the results they receive are the expected database answers. - -* Metadata learning: SafePeak analyses the structure of the database schema, parses all types of schema objects (tables, views, triggers, functions, stored procedures, foreign keys) and creates an internal map of dependencies. On DDL commands, or schema changes, SafePeak automatically re-analyzes the modified objects and applies the required changes to its object definitions and SQL Patterns configuration. - -* SQL Patterns Identification: Application queries and stored procedure calls are transformed into patterns of similar queries, analyzed, and then used as rules for automated dynamic caching. - -* Dynamic Caching: Queries arriving at SafePeak are matched against existing cached response items in memory. If no match is found, the commands are passed on for execution in the database. If the query matches a pattern that is allowed for caching, the result is stored in memory for future repeated requests. On arrival of DML commands (inserts, updates, deletes, etc.) or of stored procedure calls that were identified as containing DML commands, the relevant items in the cache memory are evicted and the command is passed to the database server for execution. - -* 100% Data Integrity: All features of ACID are supported. The data returned is always correct.
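- -The dynamic caching mechanism described above can be sketched in a few lines of Python. This is only an illustrative toy, not SafePeak's actual implementation: the class name, the table-extraction regexes, and the use of sqlite3 are all assumptions made for this example. Read queries are served from an in-memory dictionary keyed by the SQL text, and any DML or DDL statement evicts the cached result-sets of queries that read a touched table.
-
-import re
-import sqlite3
-
-class DynamicQueryCache:
-    """Toy result-set cache sitting in front of a database connection."""
-
-    def __init__(self, conn):
-        self.conn = conn
-        self.results = {}  # SQL text -> cached list of result rows
-        self.reads = {}    # SQL text -> set of table names the query reads
-
-    def execute(self, sql):
-        if sql.lstrip().upper().startswith("SELECT"):
-            if sql not in self.results:  # cache miss: execute once, remember
-                self.results[sql] = self.conn.execute(sql).fetchall()
-                self.reads[sql] = {t.lower() for t in re.findall(r"(?:FROM|JOIN)\s+(\w+)", sql, re.I)}
-            return self.results[sql]     # cache hit: served from RAM
-        # DML/DDL: evict cached result-sets of queries reading a touched table
-        touched = {t.lower() for t in re.findall(r"(?:INTO|UPDATE|TABLE|FROM)\s+(\w+)", sql, re.I)}
-        for q in [q for q in self.results if self.reads[q] & touched]:
-            del self.results[q]
-            del self.reads[q]
-        return self.conn.execute(sql)
-
-cache = DynamicQueryCache(sqlite3.connect(":memory:"))
-cache.execute("CREATE TABLE accounts (id INTEGER, balance INTEGER)")
-cache.execute("INSERT INTO accounts VALUES (1, 100)")
-print(cache.execute("SELECT balance FROM accounts"))  # miss: goes to the database
-print(cache.execute("SELECT balance FROM accounts"))  # hit: no database round-trip
-cache.execute("UPDATE accounts SET balance = 50")     # DML: evicts the cached entry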
diff --git a/wiki/wikipedia/2973.txt b/wiki/wikipedia/2973.txt deleted file mode 100644 index 5aed87c8fbd20a52ae89cf5e7b58ea83d1239ee9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2973.txt +++ /dev/null @@ -1,90 +0,0 @@ -In mathematics, a constructive proof is a method of proof that demonstrates the existence of a mathematical object by creating the object or by providing a method for creating it. This is in contrast to a non-constructive proof (also known as an existence proof or pure existence theorem), which proves the existence of a particular kind of object without providing an example. For avoiding confusion with the stronger concept that follows, such a constructive proof is sometimes called an effective proof. - -A constructive proof may also refer to the stronger concept of a proof that is valid in constructive mathematics. - -Constructivism is a mathematical philosophy that rejects all proof methods that involve the existence of objects that are not explicitly built. This excludes, in particular, the use of the law of the excluded middle, the axiom of infinity, and the axiom of choice, and induces a different meaning for some terminology (for example, the term "or" has a stronger meaning in constructive mathematics than in classical mathematics). - -Some non-constructive proofs show that if a certain proposition is false, a contradiction ensues; consequently the proposition must be true (proof by contradiction). However, the principle of explosion (ex falso quodlibet) has been accepted in some varieties of constructive mathematics, including intuitionism. - -Constructive proofs can be seen as defining certified mathematical algorithms: this idea is explored in the Brouwer–Heyting–Kolmogorov interpretation of constructive logic, the Curry–Howard correspondence between proofs and programs, and such logical systems as Per Martin-Löf's intuitionistic type theory, and Thierry Coquand and Gérard Huet's calculus of constructions. - -Until the end of the 19th century, all mathematical proofs were essentially constructive. The first non-constructive constructions appeared with Georg Cantor's theory of infinite sets, and the formal definition of real numbers. - -The first use of non-constructive proofs for solving previously considered problems seems to be Hilbert's Nullstellensatz and Hilbert's basis theorem. From a philosophical point of view, the former is especially interesting, as implying the existence of a well-specified object. - -The Nullstellensatz may be stated as follows: If $f_1,\ldots,f_k$ are polynomials in n indeterminates with complex coefficients, which have no common complex zeros, then there are polynomials $g_1,\ldots, g_k$ such that -$$ -f_1g_1+\ldots +f_kg_k=1. -$$ - -Such a non-constructive existence theorem was such a surprise for mathematicians of that time that one of them, Paul Gordan, wrote: "this is not mathematics, it is theology". - -Twenty-five years later, Grete Hermann provided an algorithm for computing $g_1,\ldots, g_k,$ which is not a constructive proof in the strong sense, as she used Hilbert's result. She proved that, if $g_1,\ldots, g_k$ exist, they can be found with degrees less than -$$ -2^{2^n} -$$. - -This provides an algorithm, as the problem is reduced to solving a system of linear equations, by considering as unknowns the finite number of coefficients of the $g_i.$ - -First consider the theorem that there are an infinitude of prime numbers. Euclid's proof is constructive.
But a common way of simplifying Euclid's proof postulates that, contrary to the assertion in the theorem, there are only a finite number of them, in which case there is a largest one, denoted n. Then consider the number n! + 1 (1 + the product of the first n numbers). Either this number is prime, or all of its prime factors are greater than n. Without establishing a specific prime number, this proves that one exists that is greater than n, contrary to the original postulate. - -Now consider the theorem "there exist irrational numbers $a$ and $b$ such that $a^b$ is rational." This theorem can be proven by using both a constructive proof and a non-constructive proof. - -The following 1953 proof by Dov Jarden has been widely used as an example of a non-constructive proof since at least 1970: - -
    - -CURIOSA
    - -339. A Simple Proof That a Power of an Irrational Number to an Irrational Exponent May Be Rational.
    -$$ -\sqrt{2}^{\sqrt{2}} -$$ is either rational or irrational. If it is rational, our statement is proved. If it is irrational, $(\sqrt{2}^{\sqrt{2}})^{\sqrt{2}} = 2$ proves our statement.
    - - Dov Jarden Jerusalem - -
- -In a bit more detail: - -*Recall that $\sqrt{2}$ is irrational, and 2 is rational. Consider the number $q = \sqrt{2}^{\sqrt2}$. Either it is rational or it is irrational. - -*If $q$ is rational, then the theorem is true, with $a$ and $b$ both being $\sqrt{2}$. - -*If $q$ is irrational, then the theorem is true, with $a$ being $\sqrt{2}^{\sqrt2}$ and $b$ being $\sqrt{2}$, since -$$ -\left (\sqrt{2}^{\sqrt2}\right )^{\sqrt2} = \sqrt{2}^{(\sqrt{2} \cdot \sqrt{2})} = \sqrt{2}^2 = 2. -$$ - -At its core, this proof is non-constructive because it relies on the statement "Either q is rational or it is irrational", an instance of the law of excluded middle, which is not valid within a constructive proof. The non-constructive proof does not construct an example of a and b; it merely gives a number of possibilities (in this case, two mutually exclusive possibilities) and shows that one of them (without showing which one) must yield the desired example. - -As it turns out, $\sqrt{2}^{\sqrt2}$ is irrational because of the Gelfond–Schneider theorem, but this fact is irrelevant to the correctness of the non-constructive proof. - -A constructive proof of the above theorem on irrational powers of irrationals would give an actual example, such as: -$$ -a = \sqrt{2} , \quad b = \log_2 9 , \quad a^b = 3 . -$$ - -The square root of 2 is irrational, and 3 is rational. $\log_2 9$ is also irrational: if it were equal to $\frac{m}{n}$, then, by the properties of logarithms, $9^n$ would be equal to $2^m$, but the former is odd, and the latter is even. - -A more substantial example is the graph minor theorem. A consequence of this theorem is that a graph can be drawn on the torus if, and only if, none of its minors belong to a certain finite set of "forbidden minors". However, the proof of the existence of this finite set is not constructive, and the forbidden minors are not actually specified. They are still unknown. - -In constructive mathematics, a statement may be disproved by giving a counterexample, as in classical mathematics. However, it is also possible to give a Brouwerian counterexample to show that the statement is non-constructive. This sort of counterexample shows that the statement implies some principle that is known to be non-constructive. If it can be proved constructively that a statement implies some principle that is not constructively provable, then the statement itself cannot be constructively provable. - -For example, a particular statement may be shown to imply the law of the excluded middle. An example of a Brouwerian counterexample of this type is Diaconescu's theorem, which shows that the full axiom of choice is non-constructive in systems of constructive set theory, since the axiom of choice implies the law of excluded middle in such systems. The field of constructive reverse mathematics develops this idea further by classifying various principles in terms of "how nonconstructive" they are, by showing they are equivalent to various fragments of the law of the excluded middle. - -Brouwer also provided "weak" counterexamples. Such counterexamples do not disprove a statement, however; they only show that, at present, no constructive proof of the statement is known. One weak counterexample begins by taking some unsolved problem of mathematics, such as Goldbach's conjecture, which asks whether every even natural number larger than 4 is the sum of two primes.
Define a sequence a(n) of rational numbers as follows: -$$ -a(n) = \begin{cases} (1/2)^n & \mbox{if every even natural number in the interval } [4,n] \mbox{ is the sum of two primes}, \\ (1/2)^k & \mbox{if } k \mbox{ is the least even natural number in the interval } [4,n] \mbox{ which is not the sum of two primes} \end{cases} -$$ - -For each n, the value of a(n) can be determined by exhaustive search, and so a is a well-defined sequence, constructively. Moreover, because a is a Cauchy sequence with a fixed rate of convergence, a converges to some real number α, according to the usual treatment of real numbers in constructive mathematics. - -Several facts about the real number α can be proved constructively. However, based on the different meaning of the words in constructive mathematics, if there is a constructive proof that "α = 0 or α ≠ 0" then this would mean that there is a constructive proof of Goldbach's conjecture (in the former case) or a constructive proof that Goldbach's conjecture is false (in the latter case). Because no such proof is known, the quoted statement must also not have a known constructive proof. However, it is entirely possible that Goldbach's conjecture may have a constructive proof (as we do not know at present whether it does), in which case the quoted statement would have a constructive proof as well, albeit one that is unknown at present. The main practical use of weak counterexamples is to identify the "hardness" of a problem. For example, the counterexample just given shows that the quoted statement is "at least as hard to prove" as Goldbach's conjecture. Weak counterexamples of this sort are often related to the limited principle of omniscience. diff --git a/wiki/wikipedia/2974.txt b/wiki/wikipedia/2974.txt deleted file mode 100644 index 2e6d137b5cb9d25a1a45ed77b23c8e6c3c5cd3c9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2974.txt +++ /dev/null @@ -1,109 +0,0 @@ -The gradient theorem, also known as the fundamental theorem of calculus for line integrals, says that a line integral through a gradient field can be evaluated by evaluating the original scalar field at the endpoints of the curve. The theorem is a generalization of the fundamental theorem of calculus to any curve in a plane or space (generally n-dimensional) rather than just the real line. - -Let φ : U ⊆ ℝn → ℝ be a continuously differentiable function and γ any curve in U which starts at p and ends at q. Then -$$ - \int_{\gamma} \nabla\varphi(\mathbf{r})\cdot \mathrm{d}\mathbf{r} = \varphi\left(\mathbf{q}\right)-\varphi\left(\mathbf{p}\right) -$$ - -(where ∇φ denotes the gradient vector field of φ). - -The gradient theorem implies that line integrals through gradient fields are path independent. In physics this theorem is one of the ways of defining a conservative force. By placing φ as potential, ∇φ is a conservative field. Work done by conservative forces does not depend on the path followed by the object, but only the end points, as the above equation shows. - -The gradient theorem also has an interesting converse: any path-independent vector field can be expressed as the gradient of a scalar field. Just like the gradient theorem itself, this converse has many striking consequences and applications in both pure and applied mathematics.
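- -As a concrete numerical illustration (a minimal sketch added here; the scalar field and the curve below are arbitrary choices for the demonstration, not taken from the article), one can approximate the line integral of ∇φ along a parametrized curve by a Riemann sum and compare it with φ(q) − φ(p):
-
-import numpy as np
-
-phi  = lambda x, y: x**2 * y + np.sin(y)                 # scalar field φ
-grad = lambda x, y: np.array([2*x*y, x**2 + np.cos(y)])  # its gradient ∇φ
-r    = lambda t: np.array([np.cos(t), t**2])             # curve γ, t in [0, 1]
-
-# Riemann-sum approximation of the line integral of ∇φ along γ
-ts  = np.linspace(0.0, 1.0, 100_001)
-pts = [r(t) for t in ts]
-line_integral = sum(grad(*p) @ (q - p) for p, q in zip(pts[:-1], pts[1:]))
-
-print(line_integral)                # ≈ 1.1334
-print(phi(*r(1.0)) - phi(*r(0.0)))  # φ(q) − φ(p) = cos²(1) + sin(1) ≈ 1.1334
-
-The two printed numbers agree to roughly five decimal places, as the gradient theorem predicts, and the agreement improves as the number of sample points grows.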
- -If φ is a differentiable function from some open subset U (of ℝn) to ℝ, and if r is a differentiable function from some closed interval [a, b] to U, then by the multivariate chain rule, the composite function φ ∘ r is differentiable on (a, b) and -$$ -\frac{\mathrm{d}}{\mathrm{d}t}(\varphi \circ \mathbf{r})(t)=\nabla \varphi(\mathbf{r}(t)) \cdot \mathbf{r}'(t) -$$ - -for all t in (a, b). Here the ⋅ denotes the usual inner product. - -Now suppose the domain U of φ contains the differentiable curve γ with endpoints a and b (oriented in the direction from a to b). If r parametrizes γ for t in [a, b], then the above shows that - -\begin{align} \int_{\gamma} \nabla\varphi(\mathbf{u}) \cdot \mathrm{d}\mathbf{u} &=\int_a^b \nabla\varphi(\mathbf{r}(t)) \cdot \mathbf{r}'(t)\mathrm{d}t \\ &=\int_a^b \frac{d}{dt}\varphi(\mathbf{r}(t))\mathrm{d}t =\varphi(\mathbf{r}(b))-\varphi(\mathbf{r}(a))=\varphi\left(\mathbf{q}\right)-\varphi\left(\mathbf{p}\right) , \end{align} - -where the definition of the line integral is used in the first equality, and the fundamental theorem of calculus is used in the third equality. - -Suppose γ ⊂ ℝ2 is the circular arc oriented counterclockwise from (5, 0) to (−4, 3). Using the definition of a line integral, - -\begin{align} \int_{\gamma} y \mathrm{d}x + x \mathrm{d}y &= \int_0^{\pi - \tan^{-1}\!\left(\frac{3}{4}\right)} ((5\sin t)(-5 \sin t) + (5 \cos t)(5 \cos t)) \mathrm{d}t \\ &= \int_0^{\pi - \tan^{-1}\!\left(\frac{3}{4}\right)} 25 \left(-\sin^2 t + \cos^2 t\right) \mathrm{d}t \\ &= \int_0^{\pi - \tan^{-1}\!\left(\frac{3}{4}\right)} 25 \cos(2t) \mathrm{d}t \ =\ \left.\tfrac{25}{2}\sin(2t)\right|_0^{\pi - \tan^{-1}\!\left(\tfrac{3}{4}\right)} \\[.5em] &= \tfrac{25}{2}\sin\left(2\pi - 2\tan^{-1}\!\!\left(\tfrac{3}{4}\right)\right) \\[.5em] &= -\tfrac{25}{2}\sin\left(2\tan^{-1}\!\!\left(\tfrac{3}{4}\right)\right) \ =\ -\frac{25(3/4)}{(3/4)^2 + 1} = -12. \end{align} - -This result can be obtained much more simply by noticing that the function $f(x,y)=xy$ has gradient $\nabla f(x,y)=(y,x)$, so by the gradient theorem: -$$ -\int_{\gamma} y \mathrm{d}x+x \mathrm{d}y=\int_{\gamma}\nabla(xy) \cdot (\mathrm{d}x,\mathrm{d}y)\ =\ xy|_{(5,0)}^{(-4,3)}=-4 \cdot 3-5 \cdot 0=-12 . -$$ - -For a more abstract example, suppose γ ⊂ ℝn has endpoints p, q, with orientation from p to q. For u in ℝn, let |u| denote the Euclidean norm of u. If α ≥ 1 is a real number, then - -\begin{align} \int_{\gamma} |\mathbf{x}|^{\alpha - 1} \mathbf{x} \cdot \mathrm{d}\mathbf{x} &= \frac{1}{\alpha + 1} \int_{\gamma} (\alpha + 1) |\mathbf{x}|^{(\alpha + 1) - 2} \mathbf{x} \cdot \mathrm{d}\mathbf{x} \\ &= \frac{1}{\alpha + 1} \int_{\gamma} \nabla |\mathbf{x}|^{\alpha + 1} \cdot \mathrm{d}\mathbf{x} = \frac{|\mathbf{q}|^{\alpha + 1} - |\mathbf{p}|^{\alpha + 1}}{\alpha + 1} . \end{align} - -Finally, suppose a particle of charge q travels along the curve γ from the point a to the point b, in the presence of n fixed point charges $Q_1,\ldots,Q_n$ located at the points $\mathbf{p}_1,\ldots,\mathbf{p}_n$. By Coulomb's law, the work done on the particle is $W = \int_{\gamma} \mathbf{F}(\mathbf{r}) \cdot \mathrm{d}\mathbf{r}$, where $\mathbf{F}(\mathbf{r}) = kq \sum_{i=1}^n Q_i \frac{\mathbf{r} - \mathbf{p}_i}{\left|\mathbf{r} - \mathbf{p}_i\right|^3}$ and k is Coulomb's constant. Since $\frac{\mathbf{r} - \mathbf{p}_i}{\left|\mathbf{r} - \mathbf{p}_i\right|^3} = -\nabla \frac{1}{\left|\mathbf{r} - \mathbf{p}_i\right|}$ (the gradient identity used above, which also holds for α = −2 away from the origin, applied to $\mathbf{r} - \mathbf{p}_i$), continuing from above and using the gradient theorem gives - -W = -kq \sum_{i=1}^n \left( Q_i \int_{\gamma} \nabla \frac{1}{\left|\mathbf{r} - \mathbf{p}_i\right|} \cdot \mathrm{d}\mathbf{r} \right) = kq \sum_{i=1}^n Q_i \left( \frac{1}{\left|\mathbf{a} - \mathbf{p}_i\right|} - \frac{1}{\left|\mathbf{b} - \mathbf{p}_i\right|} \right) - -We are finished. Of course, we could have easily completed this calculation using the powerful language of electrostatic potential or electrostatic potential energy (with the familiar formulas W = −ΔU = −qΔV).
However, we have not yet defined potential or potential energy, because the converse of the gradient theorem is required to prove that these are well-defined, differentiable functions and that these formulas hold (see below). Thus, we have solved this problem using only Coulomb's Law, the definition of work, and the gradient theorem. - -The gradient theorem states that if the vector field F is the gradient of some scalar-valued function (i.e., if F is conservative), then F is a path-independent vector field (i.e., the integral of F over some piecewise-differentiable curve is dependent only on end points). This theorem has a powerful converse: - -
    If F is a path-independent vector field, then F is the gradient of some scalar-valued function. - -
    - -It is straightforward to show that a vector field is path-independent if and only if the integral of the vector field over every closed loop in its domain is zero. Thus the converse can alternatively be stated as follows: If the integral of F over every closed loop in the domain of F is zero, then F is the gradient of some scalar-valued function. - -To illustrate the power of this converse principle, we cite an example that has significant physical consequences. In classical electromagnetism, the electric force is a path-independent force; i.e. the work done on a particle that has returned to its original position within an electric field is zero (assuming that no changing magnetic fields are present). - -Therefore, the above theorem implies that the electric force field Fe : S → ℝ3 is conservative (here S is some open, path-connected subset of ℝ3 that contains a charge distribution). Following the ideas of the above proof, we can set some reference point a in S, and define a function Ue: S → ℝ by -$$ - U_e(\mathbf{r}) := -\int_{\gamma[\mathbf{a},\mathbf{r}]} \mathbf{F}_e(\mathbf{u}) \cdot \mathrm{d}\mathbf{u} -$$ - -Using the above proof, we know Ue is well-defined and differentiable, and Fe = −∇Ue (from this formula we can use the gradient theorem to easily derive the well-known formula for calculating work done by conservative forces: W = −ΔU). This function Ue is often referred to as the electrostatic potential energy of the system of charges in S (with reference to the zero-of-potential a). In many cases, the domain S is assumed to be unbounded and the reference point a is taken to be "infinity", which can be made rigorous using limiting techniques. This function Ue is an indispensable tool used in the analysis of many physical systems. - -Many of the critical theorems of vector calculus generalize elegantly to statements about the integration of differential forms on manifolds. In the language of differential forms and exterior derivatives, the gradient theorem states that -$$ - \int_{\partial \gamma} \phi = \int_{\gamma} \mathrm{d}\phi -$$ - -for any 0-form, ϕ, defined on some differentiable curve γ ⊂ ℝn (here the integral of ϕ over the boundary of the γ is understood to be the evaluation of ϕ at the endpoints of γ). - -Notice the striking similarity between this statement and the generalized version of Stokes' theorem, which says that the integral of any compactly supported differential form ω over the boundary of some orientable manifold Ω is equal to the integral of its exterior derivative dω over the whole of Ω, i.e., -$$ -\int_{\partial \Omega}\omega=\int_{\Omega}\mathrm{d}\omega -$$ - -This powerful statement is a generalization of the gradient theorem from 1-forms defined on one-dimensional manifolds to differential forms defined on manifolds of arbitrary dimension. - -The converse statement of the gradient theorem also has a powerful generalization in terms of differential forms on manifolds. In particular, suppose ω is a form defined on a contractible domain, and the integral of ω over any closed manifold is zero. Then there exists a form ψ such that ω = dψ. Thus, on a contractible domain, every closed form is exact. This result is summarized by the Poincaré lemma. 
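- -A standard example (added here for illustration) shows why the contractibility hypothesis matters: on the plane ℝ2 with the origin removed, which is not contractible, the 1-form -$$ -\omega = \frac{-y \mathrm{d}x + x \mathrm{d}y}{x^2 + y^2} -$$ - -is closed ($\mathrm{d}\omega = 0$) but not exact, since its integral over the unit circle equals $2\pi \neq 0$. Locally, ω is the differential of the polar angle θ, but no single-valued angle function exists on all of the punctured plane.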
diff --git a/wiki/wikipedia/2975.txt b/wiki/wikipedia/2975.txt deleted file mode 100644 index 7db7c2b9e2790f15e27d0357377375cfbf5b6588..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2975.txt +++ /dev/null @@ -1,19 +0,0 @@ -In mathematics, Lindelöf's lemma is a simple but useful lemma in topology on the real line, named for the Finnish mathematician Ernst Leonard Lindelöf. - -Let the real line have its standard topology. Then every open subset of the real line is a countable union of open intervals. - -Lindelöf's lemma is also known as the statement that every open cover in a second-countable space has a countable subcover (Kelley 1955:49). This means that every second-countable space is also a Lindelöf space. - -Let $B$ be a countable basis of the second-countable space $X$. Consider an open cover, $\mathcal{F} = \bigcup_{\alpha} U_{\alpha}$. In preparation for the following deduction, we define two sets for convenience, $B_{\alpha} := \left \{ \beta \in B: \beta \subset U_{\alpha} \right \}$, $B':= \bigcup_{\alpha} B_{\alpha}$. - -A straightforward but essential observation is that $U_{\alpha} = \bigcup_{\beta \in B_{\alpha}} \beta$, which follows from the definition of a base. (Here, we use the definition of "base" in M. A. Armstrong, Basic Topology, chapter 2, §1, i.e. a collection of open sets such that every open set is a union of members of this collection.) Therefore, we can get that, -$$ -\mathcal{F} = \bigcup_{\alpha} U_{\alpha} = \bigcup_{\alpha} \bigcup_{\beta \in B_{\alpha}} \beta = \bigcup_{\beta \in B'} \beta -$$ - -where $B' \subset B$, and is therefore at most countable. Next, by construction, for each $\beta\in B'$ there is some $\delta_{\beta}$ such that $\beta\in U_{\delta_{\beta}}$. We can therefore write -$$ -\mathcal{F} = \bigcup_{\beta\in B'} U_{\delta_{\beta}} -$$ - -completing the proof. diff --git a/wiki/wikipedia/2976.txt b/wiki/wikipedia/2976.txt deleted file mode 100644 index bb0ccf50093d874dcf066ec02f59ffba30b06a75..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2976.txt +++ /dev/null @@ -1,105 +0,0 @@ -In mathematics, the simplest form of the parallelogram law (also called the parallelogram identity) belongs to elementary geometry. It states that the sum of the squares of the lengths of the four sides of a parallelogram equals the sum of the squares of the lengths of the two diagonals. We use these notations for the sides: AB, BC, CD, DA. But since in Euclidean geometry a parallelogram necessarily has opposite sides equal, that is, AB = CD and BC = DA, the law can be stated as -$$ -2AB^2 + 2BC^2 = AC^2 + BD^2 . -$$ - -If the parallelogram is a rectangle, the two diagonals are of equal lengths AC = BD, so -$$ -2AB^2 + 2BC^2 = 2AC^2 -$$ - -and the statement reduces to the Pythagorean theorem. For the general quadrilateral with four sides not necessarily equal, -$$ -AB^2 + BC^2 + CD^2+DA^2 = AC^2+BD^2 + 4x^2, -$$ - -where $x$ is the length of the line segment joining the midpoints of the diagonals. Since the midpoints of the diagonals of a parallelogram coincide, $x = 0$ for a parallelogram, and so the general formula simplifies to the parallelogram law. - -In a parallelogram, let AD = BC = a, AB = DC = b, $\angle BAD = \alpha.$ By using the law of cosines in triangle $\triangle BAD,$ we get: -$$ -a^2 + b^2-2ab\cos(\alpha) = BD^2. -$$ - -In a parallelogram, adjacent angles are supplementary, therefore $\angle ADC = 180^{\circ} - \alpha.$ Using the law of cosines in triangle $\triangle ADC$ produces: -$$ -a^2 + b^2 - 2ab\cos(180^{\circ}-\alpha) = AC^2. -$$
- -Applying the trigonometric identity $\cos(180^{\circ} - x) = -\cos x$ to the former result proves: -$$ -a^2 + b^2 + 2ab\cos(\alpha) = AC^2. -$$ - -Now the sum of squares $BD^2 + AC^2$ can be expressed as: -$$ -BD^2 + AC^2 = a^2 + b^2 -2ab\cos(\alpha) + a^2 + b^2 +2ab\cos(\alpha). -$$ - -Simplifying this expression, it becomes: -$$ -BD^2 + AC^2 = 2a^2 + 2b^2. -$$ - -In a normed space, the statement of the parallelogram law is an equation relating norms: -$$ -2\|x\|^2 + 2\|y\|^2 = \|x+y\|^2 + \|x-y\|^2 \quad \text{ for all } x, y. -$$ - -The parallelogram law is equivalent to the seemingly weaker statement: -$$ -2\|x\|^2 + 2\|y\|^2 \leq \|x + y\|^2 + \|x - y\|^2 \quad \text{ for all } x, y, -$$ - -because the reverse inequality can be obtained from it by substituting $\frac{1}{2}\left( x + y \right)$ for $x,$ and $\frac{1}{2}\left( x - y \right)$ for $y,$ and then simplifying. With the same proof, the parallelogram law is also equivalent to: -$$ -\|x + y\|^2 + \|x - y\|^2 \leq 2\|x\|^2 + 2\|y\|^2 \quad \text{ for all } x, y. -$$ - -In an inner product space, the norm is determined using the inner product: -$$ -\|x\|^2 = \langle x, x\rangle. -$$ - -As a consequence of this definition, in an inner product space the parallelogram law is an algebraic identity, readily established using the properties of the inner product: -$$ -\|x+y\|^2 = \langle x+y, x+y\rangle = \langle x, x\rangle + \langle x, y\rangle + \langle y, x\rangle + \langle y, y\rangle, -$$ -$$ -\|x-y\|^2 = \langle x-y, x-y\rangle = \langle x, x\rangle - \langle x, y\rangle - \langle y, x\rangle + \langle y, y\rangle. -$$ - -Adding these two expressions: -$$ -\|x+y\|^2 + \|x-y\|^2 = 2\langle x, x\rangle + 2\langle y, y\rangle = 2\|x\|^2 + 2\|y\|^2, -$$ - -as required. - -If $x$ is orthogonal to $y,$ meaning $\langle x ,\ y \rangle = 0,$ then the above equation for the norm of a sum becomes: -$$ -\|x+y\|^2 = \langle x, x\rangle + \langle x, y\rangle + \langle y, x\rangle + \langle y, y\rangle = \|x\|^2 + \|y\|^2, -$$ - -which is Pythagoras' theorem. - -Most real and complex normed vector spaces do not have inner products, but all normed vector spaces have norms (by definition). For example, a commonly used norm for a vector $x = (x_1, x_2, \ldots, x_n)$ in the real coordinate space $\R^n$ is the $p$-norm: -$$ -\|x\| _p = \left(|x_1|^p + |x_2|^p + \dotsb + |x_n|^p\right)^{1/p}. -$$ - -Given a norm, one can evaluate both sides of the parallelogram law above. A remarkable fact is that if the parallelogram law holds, then the norm must arise in the usual way from some inner product. In particular, it holds for the $p$-norm if and only if $p = 2,$ the so-called Euclidean norm or standard norm. - -For any norm satisfying the parallelogram law (which necessarily is an inner product norm), the inner product generating the norm is unique as a consequence of the polarization identity. In the real case, the polarization identity is given by: -$$ -\langle x, y \rangle = \frac{\|x+y\|^2 - \|x-y\|^2}{4}, -$$ - -or equivalently by -$$ -\frac{\|x+y\|^2 - \|x\|^2 - \|y\|^2}{2} \qquad \text{ or } \qquad \frac{\|x\|^2 + \|y\|^2 - \|x-y\|^2}{2}. -$$ - -In the complex case it is given by: -$$ -\langle x, y \rangle = \frac{\|x+y\|^2 - \|x-y\|^2}{4} + i \frac{\|ix-y\|^2 - \|ix+y\|^2}{4}. -$$
- -For example, using the $p$-norm with $p = 2$ and real vectors $x$ and $y,$ the evaluation of the inner product proceeds as follows: -$$ -\begin{align} \langle x, y \rangle &= \frac{\|x+y\|^2 - \|x-y\|^2}{4}\\[4mu] &= \tfrac{1}{4} \left(\sum_i |x_i +y_i|^2 - \sum_i |x_i-y_i|^2\right)\\[2mu] &= \tfrac{1}{4} \left(4 \sum_i x_i y_i\right)\\ &= x \cdot y, \end{align} -$$ - -which is the standard dot product of two vectors. - -Another necessary and sufficient condition for there to exist an inner product that induces the given norm $\|\cdot\|$ is for the norm to satisfy Ptolemy's inequality: -$$ -\|x - y\| \|z\| ~+~ \|y - z\| \|x\| ~\geq~ \|x - z\| \|y\| \qquad \text{ for all vectors } x, y, z. -$$ diff --git a/wiki/wikipedia/2977.txt b/wiki/wikipedia/2977.txt deleted file mode 100644 index 590b6bdb359f076519723165443113ac8d0227a9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2977.txt +++ /dev/null @@ -1,69 +0,0 @@ -In mathematics, the Lefschetz fixed-point theorem is a formula that counts the fixed points of a continuous mapping from a compact topological space $X$ to itself by means of traces of the induced mappings on the homology groups of $X$. It is named after Solomon Lefschetz, who first stated it in 1926. - -The counting is subject to an imputed multiplicity at a fixed point called the fixed-point index. A weak version of the theorem is enough to show that a mapping without any fixed point must have rather special topological properties (like a rotation of a circle). - -For a formal statement of the theorem, let -$$ -f\colon X \rightarrow X -$$ - -be a continuous map from a compact triangulable space $X$ to itself. Define the Lefschetz number $\Lambda_f$ of $f$ by -$$ -\Lambda_f:=\sum_{k\geq 0}(-1)^k\mathrm{tr}(f_*|H_k(X,\Q)), -$$ - -the alternating (finite) sum of the matrix traces of the linear maps induced by $f$ on $H_k(X,\Q)$, the singular homology groups of $X$ with rational coefficients. - -A simple version of the Lefschetz fixed-point theorem states: if -$$ -\Lambda_f \neq 0 -$$ - -then $f$ has at least one fixed point, i.e., there exists at least one $x$ in $X$ such that $f(x) = x$. In fact, since the Lefschetz number has been defined at the homology level, the conclusion can be extended to say that any map homotopic to $f$ has a fixed point as well. - -Note however that the converse is not true in general: $\Lambda_f$ may be zero even if $f$ has fixed points. - -First, by applying the simplicial approximation theorem, one shows that if $f$ has no fixed points, then (possibly after subdividing $X$) $f$ is homotopic to a fixed-point-free simplicial map (i.e., it sends each simplex to a different simplex). This means that the diagonal values of the matrices of the linear maps induced on the simplicial chain complex of $X$ must all be zero. Then one notes that, in general, the Lefschetz number can also be computed using the alternating sum of the matrix traces of the aforementioned linear maps (this is true for almost exactly the same reason that the Euler characteristic has a definition in terms of homology groups; see below for the relation to the Euler characteristic). In the particular case of a fixed-point-free simplicial map, all of the diagonal values are zero, and thus the traces are all zero.
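- -As a worked example (added here for illustration; this is a standard computation rather than part of the original article), consider a continuous map $f\colon S^n \to S^n$ of degree $d$. The only nonzero rational homology groups are $H_0(S^n,\Q)$ and $H_n(S^n,\Q)$, on which $f_*$ acts by $1$ and by $d$ respectively, so -$$ -\Lambda_f = 1 + (-1)^n d. -$$ - -For the antipodal map, $d = (-1)^{n+1}$, which gives $\Lambda_f = 1 + (-1)^{2n+1} = 0$; this is consistent with the fact that the antipodal map has no fixed points. Conversely, any map $f\colon S^n \to S^n$ whose degree differs from $(-1)^{n+1}$ must have a fixed point.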
- -A stronger form of the theorem, also known as the Lefschetz–Hopf theorem, states that, if $f$ has only finitely many fixed points, then -$$ -\sum_{x \in \mathrm{Fix}(f)} i(f,x) = \Lambda_f, -$$ - -where $\mathrm{Fix}(f)$ is the set of fixed points of $f$, and $i(f,x)$ denotes the index of the fixed point $x$. From this theorem one deduces the Poincaré–Hopf theorem for vector fields. - -The Lefschetz number of the identity map on a finite CW complex can be easily computed by realizing that each $f_\ast$ can be thought of as an identity matrix, and so each trace term is simply the dimension of the appropriate homology group. Thus the Lefschetz number of the identity map is equal to the alternating sum of the Betti numbers of the space, which in turn is equal to the Euler characteristic $\chi(X)$. Thus we have -$$ -\Lambda_{\mathrm{id}} = \chi(X).\ -$$ - -The Lefschetz fixed-point theorem generalizes the Brouwer fixed-point theorem, which states that every continuous map from the $n$-dimensional closed unit disk $D^n$ to $D^n$ must have at least one fixed point. - -This can be seen as follows: $D^n$ is compact and triangulable, all its homology groups except $H_0$ are zero, and every continuous map $f\colon D^n \to D^n$ induces the identity map $f_* \colon H_0(D^n, \Q) \to H_0(D^n, \Q)$, whose trace is one; all this together implies that $\Lambda_f$ is non-zero for any continuous map $f\colon D^n \to D^n$. - -Lefschetz presented his fixed-point theorem in his 1926 paper. Lefschetz's focus was not on fixed points of maps, but rather on what are now called coincidence points of maps. - -Given two maps $f$ and $g$ from an orientable manifold $X$ to an orientable manifold $Y$ of the same dimension, the Lefschetz coincidence number of $f$ and $g$ is defined as -$$ -\Lambda_{f,g} = \sum (-1)^k \mathrm{tr}( D_X \circ g^* \circ D_Y^{-1} \circ f_*), -$$ - -where $f_*$ is as above, $g^*$ is the homomorphism induced by $g$ on the cohomology groups with rational coefficients, and $D_X$ and $D_Y$ are the Poincaré duality isomorphisms for $X$ and $Y$, respectively. - -Lefschetz proved that if the coincidence number is nonzero, then $f$ and $g$ have a coincidence point. He noted in his paper that letting $X= Y$ and letting $g$ be the identity map gives a simpler result, which we now know as the fixed-point theorem. - -Let $X$ be a variety defined over the finite field $k$ with $q$ elements and let $\bar X$ be the base change of $X$ to the algebraic closure of $k$. The Frobenius endomorphism of $\bar X$ (often the geometric Frobenius, or just the Frobenius), denoted by $F_q$, maps a point with coordinates $x_1,\ldots,x_n$ to the point with coordinates $x_1^q,\ldots,x_n^q$. Thus the fixed points of $F_q$ are exactly the points of $X$ with coordinates in $k$; the set of such points is denoted by $X(k)$. The Lefschetz trace formula holds in this context, and reads: -$$ -\#X(k)=\sum_i (-1)^i \mathrm{tr}(F_q^*| H^i_c(\bar{X},\Q_{\ell})). -$$ - -This formula involves the trace of the Frobenius on the étale cohomology, with compact supports, of $\bar X$ with values in the field of $\ell$-adic numbers, where $\ell$ is a prime coprime to $q$. - -If $X$ is smooth and equidimensional, this formula can be rewritten in terms of the arithmetic Frobenius $\Phi_q$, which acts as the inverse of $F_q$ on cohomology: -$$ -\#X(k)=q^{\dim X}\sum_i (-1)^i \mathrm{tr}((\Phi_q^{-1})^*| H^i(\bar X,\Q_\ell)). -$$ - -This formula involves usual cohomology, rather than cohomology with compact supports.
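- -For example (an illustration added here, not part of the original article), take $X = \mathbb{P}^n$ over $k = \mathbb{F}_q$. The cohomology $H^i_c(\bar{X},\Q_{\ell})$ is one-dimensional for each even degree $i = 2j$ with $0 \le j \le n$, with $F_q^*$ acting by multiplication by $q^j$, and vanishes otherwise, so the trace formula gives -$$ -\#\mathbb{P}^n(\mathbb{F}_q) = \sum_{j=0}^n q^j = 1 + q + \cdots + q^n, -$$ - -in agreement with the elementary count of the points of projective n-space over a field with q elements.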
- -The Lefschetz trace formula can also be generalized to algebraic stacks over finite fields. diff --git a/wiki/wikipedia/2978.txt b/wiki/wikipedia/2978.txt deleted file mode 100644 index e2194914f898bfb99fd327b95c6cab96b700219d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2978.txt +++ /dev/null @@ -1,17 +0,0 @@ -In graph theory and theoretical computer science, a maximum common induced subgraph of two graphs G and H is a graph that is an induced subgraph of both G and H, and that has as many vertices as possible. - -Finding this graph is NP-hard. - -In the associated decision problem, the input is two graphs G and H and a number k. The problem is to decide whether G and H have a common induced subgraph with at least k vertices. This problem is NP-complete. It is a generalization of the induced subgraph isomorphism problem, which arises when k equals the number of vertices in the smaller of G and H, so that this entire graph must appear as an induced subgraph of the other graph. - -Based on hardness of approximation results for the maximum independent set problem, the maximum common induced subgraph problem is also hard to approximate. This implies that, unless P = NP, there is no approximation algorithm that, in polynomial time on $n$-vertex graphs, always finds a solution within a factor of $n^{1-\epsilon}$ of optimal, for any $\epsilon > 0$. - -One possible solution for this problem is to build a modular product graph of G and H. In this graph, the largest clique corresponds to a maximum common induced subgraph of G and H. Therefore, algorithms for finding maximum cliques can be used to find the maximum common induced subgraph. Moreover, a modified maximum-clique algorithm can be used to find a maximum common connected subgraph. - -The McSplit algorithm (along with its McSplit↓ variant) is a forward checking algorithm that does not use the clique encoding, but uses a compact data structure to keep track of the vertices in graph H to which each vertex in graph G may be mapped. Both versions of the McSplit algorithm outperform the clique encoding for many graph classes. - -Maximum common induced subgraph algorithms have a long tradition in cheminformatics and pharmacophore mapping. diff --git a/wiki/wikipedia/2979.txt b/wiki/wikipedia/2979.txt deleted file mode 100644 index d86c22965c6c93b12f78e4363d16dd6cf8039d3d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2979.txt +++ /dev/null @@ -1,9 +0,0 @@ -Pompeiu's theorem is a result of plane geometry, discovered by the Romanian mathematician Dimitrie Pompeiu. The theorem is simple, but not classical. It states the following: - -Given an equilateral triangle ABC in the plane, and a point P in the plane of the triangle ABC, the lengths PA, PB, and PC form the sides of a (maybe, degenerate) triangle. - -The proof is quick. Consider a rotation of 60° about the point B. Assume A maps to C, and P maps to P'. Then $\scriptstyle PB\ =\ P'B$, and $\scriptstyle\angle PBP'\ =\ 60^{\circ}$. Hence triangle PBP' is equilateral and $\scriptstyle PP'\ =\ PB$. Then $\scriptstyle PA\ =\ P'C$. Thus, triangle PCP' has sides equal to PA, PB, and PC, and the proof by construction is complete. - -Further investigations reveal that if P is not in the interior of the triangle, but rather on the circumcircle, then PA, PB, PC form a degenerate triangle, with the largest being equal to the sum of the others; this observation is also known as Van Schooten's theorem.
- -Pompeiu published the theorem in 1936; however, August Ferdinand Möbius had published a more general theorem about four points in the Euclidean plane already in 1852. In this paper Möbius also derived the statement of Pompeiu's theorem explicitly as a special case of his more general theorem. For this reason the theorem is also known as the Möbius-Pompeiu theorem. diff --git a/wiki/wikipedia/298.txt b/wiki/wikipedia/298.txt deleted file mode 100644 index ce64e10369f8ac328eb96539fdbe0374a058738a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/298.txt +++ /dev/null @@ -1,74 +0,0 @@ -In mathematics, the Goldbach–Euler theorem (also known as Goldbach's theorem) states that the sum of 1/(p - 1) over the set of perfect powers p, excluding 1 and omitting repetitions, converges to 1: -$$ -\sum_{p}^{\infty }\frac{1}{p-1}= {\frac{1}{3} + \frac{1}{7} + \frac{1}{8}+ \frac{1}{15} + \frac{1}{24} + \frac{1}{26}+ \frac{1}{31}}+ \cdots = 1. -$$ - -This result was first published in Euler's 1737 paper "Variæ observationes circa series infinitas". Euler attributed the result to a letter (now lost) from Goldbach. - -Goldbach's original proof to Euler involved assigning a constant to the harmonic series: $ \textstyle x = \sum_{n=1}^\infty \frac{1}{n} $, which is divergent. Such a proof is not considered rigorous by modern standards. There is a strong resemblance between the method of sieving out powers employed in his proof and the method of factorization used to derive Euler's product formula for the Riemann zeta function. - -Let x be given by -$$ -x = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} \cdots -$$ - -Since the sum of the reciprocals of the powers of two is $ \textstyle 1 = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \cdots$, subtracting the terms with powers of two from x gives -$$ -x - 1 = 1 + \frac{1}{3} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{9} + \frac{1}{10} + \frac{1}{11} + \cdots -$$ - -Repeat the process with the terms with the powers of three: $\textstyle \frac{1}{2} = \frac{1}{3} + \frac{1}{9} + \frac{1}{27} + \frac{1}{81} + \cdots$ -$$ -x - 1 - \frac{1}{2} = 1 + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{10} + \frac{1}{11} + \frac{1}{12} + \cdots -$$ - -Absent from the above sum are now all terms with powers of two and three. Continue by removing terms with powers of 5, 6 and so on until the right side is exhausted to the value of 1. Eventually, we obtain the equation -$$ -x - 1 - \frac{1}{2} - \frac{1}{4} - \frac{1}{5} - \frac{1}{6} - \frac{1}{9} - \cdots = 1 -$$ - -which we rearrange into -$$ -x - 1 = 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{9} + \cdots -$$ - -where the denominators consist of all positive integers that are the non-powers minus one. By subtracting the previous equation from the definition of x given above, we obtain -$$ -1 = \frac{1}{3} + \frac{1}{7} + \frac{1}{8}+ \frac{1}{15} + \frac{1}{24} + \frac{1}{26}+ \frac{1}{31} + \cdots -$$ - -where the denominators now consist only of perfect powers minus one. - -While lacking mathematical rigor, Goldbach's proof provides a reasonably intuitive argument for the theorem's truth. Rigorous proofs require proper and more careful treatment of the divergent terms of the harmonic series.
Other proofs make use of the fact that the sum of 1/p over the set of perfect powers p, excluding 1 but including repetitions, converges to 1 by demonstrating the equivalence: -$$ -\sum_{p}^{\infty }\frac{1}{p - 1} = \sum_{m=2}^\infty \sum_{n=2}^\infty \frac{1}{m^n} = 1. -$$ - -A generalized Euler-Goldbach series, with $ s \in \C$, is defined as: - -\begin{align} -\sum_{p}^{\infty}\frac{1}{p^s-1}. -\end{align} - -For Re$ (s)>1$ this can be expressed as: - -\begin{align} -\sum_{p}^{\infty} \frac{1}{p^s-1} = \sum_{n=2}^{\infty} \frac{1}{n^s-1} - (\zeta(s)-1) -\end{align} - -where $ \zeta(s)$ is the Riemann zeta function. By using telescoping series, the special case $ s=2$ can be shown to equal $ 7/4 - \pi^2/6$. diff --git a/wiki/wikipedia/2980.txt b/wiki/wikipedia/2980.txt deleted file mode 100644 index 5410cb3cba071b6580f54c4ba03313050101aa53..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2980.txt +++ /dev/null @@ -1,157 +0,0 @@ -In computer science, the longest palindromic substring or longest symmetric factor problem is the problem of finding a maximum-length contiguous substring of a given string that is also a palindrome. For example, the longest palindromic substring of "bananas" is "anana". The longest palindromic substring is not guaranteed to be unique; for example, in the string "abracadabra", there is no palindromic substring with length greater than three, but there are two palindromic substrings with length three, namely, "aca" and "ada". In some applications it may be necessary to return all maximal palindromic substrings (that is, all substrings that are themselves palindromes and cannot be extended to larger palindromic substrings) rather than returning only one substring or returning the maximum length of a palindromic substring. - -Manacher invented a linear time algorithm for listing all the palindromes that appear at the start of a given string. However, as observed, e.g., by Apostolico, the same algorithm can also be used to find all maximal palindromic substrings anywhere within the input string, again in linear time. Therefore, it provides a linear time solution to the longest palindromic substring problem. Alternative linear time solutions were provided by Jeuring, and by Gusfield, who described a solution based on suffix trees. Efficient parallel algorithms are also known for the problem. - -The longest palindromic substring problem should not be confused with the different problem of finding the longest palindromic subsequence. - -This algorithm is slower than Manacher's algorithm, but is a good stepping stone for understanding Manacher's algorithm. It looks at each character as the center of a palindrome and loops to determine the largest palindrome with that center. - -The loop at the center of the function only works for palindromes where the length is an odd number. The function works for even-length palindromes by modifying the input string. The character '|' is inserted between every character in the input string, and at both ends. So the input "book" becomes "|b|o|o|k|". The even-length palindrome "oo" in "book" becomes the odd-length palindrome "|o|o|". - -Longest_Palindrome_SLOW(string S) { - -string S' = S with a bogus character (e.g.
'|') inserted between each character (including outer boundaries) - -array PalindromeRadii = [0,...,0] // The radius of the longest palindrome centered on each place in S' - -// note: length(S') = length(PalindromeRadii) = 2 × length(S) + 1 - -Center = 0 - -while Center < length(S') { - -// Determine the longest palindrome starting at Center-Radius and going to Center+Radius - -Radius = 0 - -while Center-(Radius+1) >= 0 and Center+(Radius+1) < length(S') and S'[Center-(Radius+1)] = S'[Center+(Radius+1)] { - -Radius = Radius+1 - -} - -// Save the radius of the longest palindrome in the array - -PalindromeRadii[Center] = Radius - -Center = Center+1 - -} - -longest_palindrome_in_S' = 2*max(PalindromeRadii)+1 - -longest_palindrome_in_S = (longest_palindrome_in_S'-1)/2 - -return longest_palindrome_in_S - -} - -The runtime of this algorithm is $O(n^2)$. The outer loop runs $n$ times and the inner loop can run up to $n/2$ times. - -Below is the pseudocode for Manacher's algorithm. The algorithm is faster than the previous algorithm because it exploits when a palindrome happens inside another palindrome. - -For example, consider the input string "abacaba". By the time it gets to the "c", Manacher's algorithm will have identified the length of every palindrome centered on the letters before the "c". At the "c", it runs a loop to identify the largest palindrome centered on the "c": "abacaba". With that knowledge, everything after the "c" looks like the reflection of everything before the "c". The "a" after the "c" has the same longest palindrome as the "a" before the "c". Similarly, the "b" after the "c" has a longest palindrome that is at least the length of the longest palindrome centered on the "b" before the "c". There are some special cases to consider, but that trick speeds up the computation dramatically. - -Longest_Palindrome(string S) { - -string S' = S with a bogus character (eg. '|') inserted between each character (including outer boundaries) - -array PalindromeRadii = [0,...,0] // The radius of the longest palindrome centered on each place in S' - -// note: length(S') = length(PalindromeRadii) = 2 × length(S) + 1 - -Center = 0 - -Radius = 0 - -while Center < length(S') { - -// At the start of the loop, Radius is already set to a lower-bound for the longest radius. - -// In the first iteration, Radius is 0, but it can be higher. - -// Determine the longest palindrome starting at Center-Radius and going to Center+Radius - -while Center-(Radius+1) >= 0 and Center+(Radius+1) < length(S') and S'[Center-(Radius+1)] = S'[Center+(Radius+1)] { - -Radius = Radius+1 - -} - -// Save the radius of the longest palindrome in the array - -PalindromeRadii[Center] = Radius - -// Below, Center is incremented. - -// If any precomputed values can be reused, they are. - -// Also, Radius may be set to a value greater than 0 - -OldCenter = Center - -OldRadius = Radius - -Center = Center+1 - -// Radius' default value will be 0, if we reach the end of the following loop. - -Radius = 0 - -while Center <= OldCenter + OldRadius { - -// Because Center lies inside the old palindrome and every character inside - -// a palindrome has a "mirrored" character reflected across its center, we - -// can use the data that was precomputed for the Center's mirrored point. 
- -MirroredCenter = OldCenter - (Center - OldCenter) - -MaxMirroredRadius = OldCenter + OldRadius - Center - -if PalindromeRadii[MirroredCenter] < MaxMirroredRadius { - -PalindromeRadii[Center] = PalindromeRadii[MirroredCenter] - -Center = Center+1 - -} - -else if PalindromeRadii[MirroredCenter] > MaxMirroredRadius { - -PalindromeRadii[Center] = MaxMirroredRadius - -Center = Center+1 - -} - -else { // PalindromeRadii[MirroredCenter] = MaxMirroredRadius - -Radius = MaxMirroredRadius - -break // exit while loop early - -} - -} - -} - -longest_palindrome_in_S' = 2*max(PalindromeRadii)+1 - -longest_palindrome_in_S = (longest_palindrome_in_S'-1)/2 - -return longest_palindrome_in_S - -} - -Manacher's algorithm is faster because it reuses precomputed data when a palindrome exists inside another palindrome. There are 3 cases of this. They are represented by the "if / else if / else" statement in the pseudocode. - -The first case is when the palindrome at MirroredCenter lies completely inside the "Old" palindrome. In this situation, the palindrome at Center will have the same length as the one at MirroredCenter. For example, if the "Old" palindrome is "abcbpbcba", we can see that the palindrome centered on "c" after the "p" must have the same length as the palindrome centered on the "c" before the "p". - -The second case is when the palindrome at MirroredCenter extends outside the "Old" palindrome. That is, it extends "to the left" (or, contains characters with a lower index than any inside the "Old" palindrome). Because the "Old" palindrome is the largest possible palindrome centered on OldCenter, we know the characters before and after it are different. Thus, the palindrome at Center will run exactly up to the border of the "Old" palindrome, because the next character will be different than the one inside the palindrome at MirroredCenter. For example, if the string was "ababc", the "Old" palindrome could be "bab" with the Center being the second "b" and the MirroredCenter being the first "b". Since the palindrome at the MirroredCenter is "aba" and extends beyond the boundaries of the "Old" palindrome, we know the longest palindrome at the second "b" can only extend up to the border of the "Old" palindrome. We know this because if the character after the "Old" palindrome had been an "a" instead of a "c", the "Old" palindrome would have been longer. - -The third and last case is when the palindrome at MirroredCenter extends exactly up to the border of the "Old" palindrome. In this case, we don't know if the character after the "Old" palindrome might make the palindrome at Center longer than the one at MirroredCenter. But we do know that the palindrome at Center is at least as long as the one at MirroredCenter. In this case, Radius is initialized to the radius of the palindrome at MirroredCenter and the search starts from there. An example string would be "abcbpbcbp" where the "Old" palindrome is "bcbpbcb" and the Center is on the second "c". The MirroredCenter is the first "c" and it has a longest palindrome of "bcb". The longest palindrome at the Center on the second "c" has to be at least that long and, in this case, is longer. - -The algorithm runs in linear time. This can be seen by putting bounds on how many iterations are run of each loop. The outer loop and second inner loop increment Center by $1$ for every iteration. Since Center is bounded by the length of the string, we know these loops run $n$ times. 
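The pseudocode above translates directly into a short runnable program. The following Python rendering is a sketch for illustration (the function name and the use of `max(radii)` as the final answer are choices made here, not part of the original pseudocode):

```python
def longest_palindrome(s: str) -> int:
    """Length of the longest palindromic substring of s (Manacher's algorithm)."""
    t = "|" + "|".join(s) + "|"      # bogus character between and around all characters
    n = len(t)
    radii = [0] * n                  # radius of the longest palindrome centered at each place in t
    center, radius = 0, 0
    while center < n:
        # grow the palindrome centered at `center` as far as possible
        while (center - (radius + 1) >= 0
               and center + (radius + 1) < n
               and t[center - (radius + 1)] == t[center + (radius + 1)]):
            radius += 1
        radii[center] = radius
        old_center, old_radius = center, radius
        center, radius = center + 1, 0
        # reuse precomputed values mirrored across the old palindrome's center
        while center <= old_center + old_radius:
            mirrored = old_center - (center - old_center)
            max_mirrored = old_center + old_radius - center
            if radii[mirrored] < max_mirrored:
                radii[center] = radii[mirrored]
                center += 1
            elif radii[mirrored] > max_mirrored:
                radii[center] = max_mirrored
                center += 1
            else:
                radius = max_mirrored
                break
    # longest palindrome in t has length 2*max+1; in s it has length max
    return max(radii)

assert longest_palindrome("bananas") == 5       # "anana"
assert longest_palindrome("abracadabra") == 3   # "aca" or "ada"
```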
The first inner loop increments Radius by $1$ for every iteration and the second inner loop, when it stops, decrements Radius by at most $1$ for every iteration. Since the second inner loop can run at most $n$ times and the value for Radius cannot exceed $n/2,$ the first inner loop can run at most $n + n/2$ times. The overall runtime is $O\left(n + n + n/2\right) = O(n).$ diff --git a/wiki/wikipedia/2981.txt b/wiki/wikipedia/2981.txt deleted file mode 100644 index db58947aad722555aad037030af05ae7efe3a998..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2981.txt +++ /dev/null @@ -1,13 +0,0 @@ -In topology, an area of mathematics, the virtually Haken conjecture states that every compact, orientable, irreducible three-dimensional manifold with infinite fundamental group is virtually Haken. That is, it has a finite cover (a covering space with a finite-to-one covering map) that is a Haken manifold. - -After the proof of the geometrization conjecture by Perelman, the conjecture was only open for hyperbolic 3-manifolds. - -The conjecture is usually attributed to Friedhelm Waldhausen in a paper from 1968, although he did not formally state it. This problem is formally stated as Problem 3.2 in Kirby's problem list. - -A proof of the conjecture was announced on March 12, 2012 by Ian Agol in a seminar lecture he gave at the Institut Henri Poincaré. The proof appeared shortly thereafter in a preprint which was eventually published in Documenta Mathematica. The proof followed a strategy developed in previous work of Daniel Wise and collaborators, relying on actions of the fundamental group on certain auxiliary spaces (CAT(0) cube complexes). - -It used as an essential ingredient the freshly obtained solution to the surface subgroup conjecture by Jeremy Kahn and Vladimir Markovic. - -Other results which are directly used in Agol's proof include the Malnormal Special Quotient Theorem of Wise and a criterion of Nicolas Bergeron and Wise for the cubulation of groups. - -In 2018, related results were obtained by Piotr Przytycki and Daniel Wise, proving that mixed 3-manifolds are also virtually special, that is, they can be cubulated into a cube complex with a finite cover in which all the hyperplanes are embedded, which by the previously mentioned work can be made virtually Haken. diff --git a/wiki/wikipedia/2982.txt b/wiki/wikipedia/2982.txt deleted file mode 100644 index 42eeb25816db0226e73bde3584ce173984e02eac..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2982.txt +++ /dev/null @@ -1,3 +0,0 @@ -In geometric topology, the double suspension theorem of James W. Cannon and Robert D. Edwards states that the double suspension S²X of a homology sphere X is a topological sphere. - -If X is a piecewise-linear homology sphere but not a sphere, then its double suspension S²X (with a triangulation derived by applying the double suspension operation to a triangulation of X) is an example of a triangulation of a topological sphere that is not piecewise-linear. The reason is that, unlike in piecewise-linear manifolds, the link of one of the suspension points is not a sphere. diff --git a/wiki/wikipedia/2983.txt b/wiki/wikipedia/2983.txt deleted file mode 100644 index fc666903afe0fa0374c79f47df0760172aa35cf0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2983.txt +++ /dev/null @@ -1,3 +0,0 @@ -MAX-3LIN-EQN is a problem in computational complexity theory where the input is a system of linear equations (modulo 2). Each equation contains at most 3 variables.
The problem is to find an assignment to the variables that satisfies the maximum number of equations. - -This problem is closely related to the MAX-3SAT problem. It is NP-hard to approximate MAX-3LIN-EQN with ratio (1/2 + δ) for any δ > 0, even though a ratio of 1/2 is achieved in expectation by a uniformly random assignment. diff --git a/wiki/wikipedia/2984.txt b/wiki/wikipedia/2984.txt deleted file mode 100644 index 93c2d396f0db8e0a9dcc4e3aaf3586b9ac497f39..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2984.txt +++ /dev/null @@ -1,5 +0,0 @@ -In Euclidean geometry, Kosnita's theorem is a property of certain circles associated with an arbitrary triangle. - -Let $ABC$ be an arbitrary triangle, $O$ its circumcenter, and $O_a,O_b,O_c$ the circumcenters of the three triangles $OBC$, $OCA$, and $OAB$ respectively. The theorem claims that the three straight lines $AO_a$, $BO_b$, and $CO_c$ are concurrent. This result was established by the Romanian mathematician Cezar Coşniţă (1910-1962). - -Their point of concurrence is known as the triangle's Kosnita point (named by Rigby in 1997). It is the isogonal conjugate of the nine-point center. It is triangle center $X(54)$ in Clark Kimberling's list. This theorem is a special case of Dao's theorem on six circumcenters associated with a cyclic hexagon. diff --git a/wiki/wikipedia/2985.txt b/wiki/wikipedia/2985.txt deleted file mode 100644 index 109f093e29264a0099b7d9be30c22d1c92b48bd3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2985.txt +++ /dev/null @@ -1,25 +0,0 @@ -In mathematics, Mostow's rigidity theorem, or strong rigidity theorem, or Mostow–Prasad rigidity theorem, essentially states that the geometry of a complete, finite-volume hyperbolic manifold of dimension greater than two is determined by the fundamental group and hence unique. The theorem was proven for closed manifolds by Mostow and extended to finite volume manifolds by Marden in 3 dimensions, and by Prasad in all dimensions at least 3. Gromov gave an alternate proof using the Gromov norm. Besson gave the simplest available proof. - -While the theorem shows that the deformation space of (complete) hyperbolic structures on a finite volume hyperbolic $n$-manifold (for $n >2$) is a point, for a hyperbolic surface of genus $g>1$ there is a moduli space of dimension $6g-6$ that parameterizes all metrics of constant curvature (up to diffeomorphism), a fact essential for Teichmüller theory. There is also a rich theory of deformation spaces of hyperbolic structures on infinite volume manifolds in three dimensions. - -The theorem can be given in a geometric formulation (pertaining to finite-volume, complete manifolds), and in an algebraic formulation (pertaining to lattices in Lie groups). - -Let $\mathbb H^n$ be the $n$-dimensional hyperbolic space. A complete hyperbolic manifold can be defined as a quotient of $\mathbb H^n$ by a group of isometries acting freely and properly discontinuously (it is equivalent to define it as a complete Riemannian manifold with constant sectional curvature -1). It is of finite volume if the integral of a volume form is finite (which is the case, for example, if it is compact). The Mostow rigidity theorem may be stated as: - -Suppose $M$ and $N$ are complete finite-volume hyperbolic manifolds of dimension $n \ge 3$. If there exists an isomorphism $f\colon \pi_1(M) \to \pi_1(N)$ then it is induced by a unique isometry from $M$ to $N$. - -Here $\pi_1(X)$ is the fundamental group of a manifold $X$. If $X$ is a hyperbolic manifold obtained as the quotient of $\mathbb H^n$ by a group $\Gamma$ then $\pi_1(X) \cong \Gamma$.
- -An equivalent statement is that any homotopy equivalence from $M$ to $N$ can be homotoped to a unique isometry. The proof actually shows that if $N$ has greater dimension than $M$ then there can be no homotopy equivalence between them. - -The group of isometries of hyperbolic space $\mathbb H^n$ can be identified with the Lie group $\mathrm{PO}(n,1)$ (the projective orthogonal group of a quadratic form of signature $(n,1)$). Then the following statement is equivalent to the one above. - -Let $ n \ge 3 $ and $\Gamma$ and $\Lambda$ be two lattices in $\mathrm{PO}(n,1)$ and suppose that there is a group isomorphism $f\colon \Gamma \to \Lambda$. Then $\Gamma$ and $\Lambda$ are conjugate in $\mathrm{PO}(n,1)$. That is, there exists a $g \in \mathrm{PO}(n,1)$ such that $ \Lambda = g \Gamma g^{-1}$. - -Mostow rigidity holds (in its geometric formulation) more generally for fundamental groups of all complete, finite volume locally symmetric spaces of dimension at least 3, or in its algebraic formulation for all lattices in simple Lie groups not locally isomorphic to $\mathrm{SL}_2(\R)$. - -It follows from the Mostow rigidity theorem that the group of isometries of a finite-volume hyperbolic n-manifold M (for n>2) is finite and isomorphic to $\operatorname{Out}(\pi_1(M))$. - -Mostow rigidity was also used by Thurston to prove the uniqueness of circle packing representations of triangulated planar graphs. - -A consequence of Mostow rigidity of interest in geometric group theory is that there exist hyperbolic groups which are quasi-isometric but not commensurable to each other. diff --git a/wiki/wikipedia/2986.txt b/wiki/wikipedia/2986.txt deleted file mode 100644 index 6b30e10059314a15daf2c27f0a6c32f80d9d6f20..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2986.txt +++ /dev/null @@ -1,52 +0,0 @@ -In mathematics, in the areas of order theory and combinatorics, Dilworth's theorem characterizes the width of any finite partially ordered set in terms of a partition of the order into a minimum number of chains. It is named for the mathematician Robert P. Dilworth. - -An antichain in a partially ordered set is a set of elements no two of which are comparable to each other, and a chain is a set of elements every two of which are comparable. A chain decomposition is a partition of the elements of the order into disjoint chains. Dilworth's theorem states that, in any finite partially ordered set, the largest antichain has the same size as the smallest chain decomposition. Here, the size of the antichain is its number of elements, and the size of the chain decomposition is its number of chains. The width of the partial order is defined as the common size of the antichain and chain decomposition. - -A version of the theorem for infinite partially ordered sets states that, when there exists a decomposition into finitely many chains, or when there exists a finite upper bound on the size of an antichain, the sizes of the largest antichain and of the smallest chain decomposition are again equal. - -The following proof by induction on the size of the partially ordered set $P$ is based on that of . - -Let $P$ be a finite partially ordered set. The theorem holds trivially if $P$ is empty. So, assume that $P$ has at least one element, and let $a$ be a maximal element of $P$. - -By induction, we assume that for some integer $k$ the partially ordered set $P':=P\setminus\{a\}$ can be covered by $k$ disjoint chains $C_1,\dots,C_k$ and has at least one antichain $A_0$ of size $k$.
Clearly, $A_0\cap C_i\ne\emptyset$ for $i=1,2,\dots,k$. For $i=1,2,\dots,k$, let $x_i$ be the maximal element in $C_i$ that belongs to an antichain of size $k$ in $P'$, and set $A:=\{x_1,x_2,\dots,x_k\}$. - -We claim that $A$ is an antichain. - -Let $A_i$ be an antichain of size $k$ that contains $x_i$. Fix arbitrary distinct indices $i$ and $j$. Then $A_i\cap C_j\ne\emptyset$. Let $y\in A_i\cap C_j$. Then $y\le x_j$, by the definition of $x_j$. This implies that $x_i\not \ge x_j$, since $x_i\not\ge y$. By interchanging the roles of $i$ and $j$ in this argument we also have $x_j\not\ge x_i$. This verifies that $A$ is an antichain. - -We now return to $P$. Suppose first that $a\ge x_i$ for some $i\in\{1,2,\dots,k\}$. Let $K$ be the chain $\{a\}\cup\{z\in C_i:z\le x_i\}$. Then by the choice of $x_i$, $P\setminus K$ does not have an antichain of size $k$. Induction then implies that $P\setminus K$ can be covered by $k-1$ disjoint chains since $A \setminus \{x_i \}$ is an antichain of size $k - 1$ in $P \setminus K$. - -Thus, $P$ can be covered by $k$ disjoint chains, as required. Next, if $a\not\ge x_i$ for each $i\in\{1,2,\dots,k\}$, then $A\cup\{a\}$ is an antichain of size $k+1$ in $P$ (since $a$ is maximal in $P$). Now $P$ can be covered by the $k+1$ chains $\{a\},C_1,C_2,\dots,C_k$, completing the proof. - -Like a number of other results in combinatorics, Dilworth's theorem is equivalent to Kőnig's theorem on bipartite graph matching and several other related theorems including Hall's marriage theorem. - -To prove Dilworth's theorem for a partial order S with n elements, using Kőnig's theorem, define a bipartite graph G = (U,V,E) where U = V = S and where (u,v) is an edge in G when u < v in S. By Kőnig's theorem, there exists a matching M in G, and a set of vertices C in G, such that each edge in the graph contains at least one vertex in C and such that M and C have the same cardinality m. Let A be the set of elements of S that do not correspond to any vertex in C; then A has at least n - m elements (possibly more if C contains vertices corresponding to the same element on both sides of the bipartition) and no two elements of A are comparable to each other. Let P be a family of chains formed by including x and y in the same chain whenever there is an edge (x,y) in M; then P has n - m chains. Therefore, we have constructed an antichain and a partition into chains with the same cardinality. - -To prove Kőnig's theorem from Dilworth's theorem, for a bipartite graph G = (U,V,E), form a partial order on the vertices of G in which u < v exactly when u is in U, v is in V, and there exists an edge in E from u to v. By Dilworth's theorem, there exists an antichain A and a partition into chains P both of which have the same size. But the only nontrivial chains in the partial order are pairs of elements corresponding to the edges in the graph, so the nontrivial chains in P form a matching in the graph. The complement of A forms a vertex cover in G with the same cardinality as this matching. - -This connection to bipartite matching allows the width of any partial order to be computed in polynomial time. More precisely, n-element partial orders of width k can be recognized in time O(kn²). - -Dilworth's theorem for infinite partially ordered sets states that a partially ordered set has finite width w if and only if it may be partitioned into w chains. For, suppose that an infinite partial order P has width w, meaning that there are at most a finite number w of elements in any antichain.
For any subset S of P, a decomposition into w chains (if it exists) may be described as a coloring of the incomparability graph of S (a graph that has the elements of S as vertices, with an edge between every two incomparable elements) using w colors; every color class in a proper coloring of the incomparability graph must be a chain. By the assumption that P has width w, and by the finite version of Dilworth's theorem, every finite subset S of P has a w-colorable incomparability graph. Therefore, by the De Bruijn–Erdős theorem, P itself also has a w-colorable incomparability graph, and thus has the desired partition into chains. - -However, the theorem does not extend so simply to partially ordered sets in which the width, and not just the cardinality of the set, is infinite. In this case the size of the largest antichain and the minimum number of chains needed to cover the partial order may be very different from each other. In particular, for every infinite cardinal number κ there is an infinite partially ordered set of width ℵ0 whose partition into the fewest chains has κ chains. - -Perles discusses analogues of Dilworth's theorem in the infinite setting. - -A dual of Dilworth's theorem states that the size of the largest chain in a partial order (if finite) equals the smallest number of antichains into which the order may be partitioned. The proof of this is much simpler than the proof of Dilworth's theorem itself: for any element x, consider the chains that have x as their largest element, and let N(x) denote the size of the largest of these x-maximal chains. Then each set N⁻¹(i), consisting of elements that have equal values of N, is an antichain, and these antichains partition the partial order into a number of antichains equal to the size of the largest chain. - -A comparability graph is an undirected graph formed from a partial order by creating a vertex per element of the order, and an edge connecting any two comparable elements. Thus, a clique in a comparability graph corresponds to a chain, and an independent set in a comparability graph corresponds to an antichain. Any induced subgraph of a comparability graph is itself a comparability graph, formed from the restriction of the partial order to a subset of its elements. - -An undirected graph is perfect if, in every induced subgraph, the chromatic number equals the size of the largest clique. Every comparability graph is perfect: this is essentially just Mirsky's theorem, restated in graph-theoretic terms. By the perfect graph theorem of Lovász, the complement of any perfect graph is also perfect. Therefore, the complement of any comparability graph is perfect; this is essentially just Dilworth's theorem itself, restated in graph-theoretic terms. Thus, the complementation property of perfect graphs can provide an alternative proof of Dilworth's theorem. - -The Boolean lattice $B_n$ is the power set of an n-element set X—essentially {1, 2, …, n}—ordered by inclusion or, notationally, $(2^{[n]}, \subseteq)$. Sperner's theorem states that a maximum antichain of $B_n$ has size at most -$$ -\operatorname{width}(B_n) = {n \choose \lfloor{n/2}\rfloor}. -$$ - -In other words, a largest family of incomparable subsets of X is obtained by selecting the subsets of X that have median size. The Lubell–Yamamoto–Meshalkin inequality also concerns antichains in a power set and can be used to prove Sperner's theorem. - -If we order the integers in the interval [1, 2n] by divisibility, the subinterval [n + 1, 2n] forms an antichain with cardinality n.
A partition of this partial order into n chains is easy to achieve: for each odd integer m in [1,2n], form a chain of the numbers of the form $m2^i$. Therefore, by Dilworth's theorem, the width of this partial order is n. - -The Erdős–Szekeres theorem on monotone subsequences can be interpreted as an application of Dilworth's theorem to partial orders of order dimension two. - -The "convex dimension" of an antimatroid is defined as the minimum number of chains needed to define the antimatroid, and Dilworth's theorem can be used to show that it equals the width of an associated partial order; this connection leads to a polynomial time algorithm for convex dimension. diff --git a/wiki/wikipedia/2987.txt b/wiki/wikipedia/2987.txt deleted file mode 100644 index c901b4a2d1d7be6dc9bed61c963fee801561e9b0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2987.txt +++ /dev/null @@ -1,5 +0,0 @@ -The generalized star-height problem in formal language theory is the open question whether all regular languages can be expressed using generalized regular expressions with a limited nesting depth of Kleene stars. Here, generalized regular expressions are defined like regular expressions, but they have a built-in complement operator. For a regular language, its generalized star height is defined as the minimum nesting depth of Kleene stars needed in order to describe the language by means of a generalized regular expression, hence the name of the problem. - -More specifically, it is an open question whether a nesting depth of more than 1 is required, and if so, whether there is an algorithm to determine the minimum required star height. - -Regular languages of star-height 0 are also known as star-free languages. The theorem of Schützenberger provides an algebraic characterization of star-free languages by means of aperiodic syntactic monoids. In particular, star-free languages are a proper decidable subclass of regular languages. diff --git a/wiki/wikipedia/2988.txt b/wiki/wikipedia/2988.txt deleted file mode 100644 index 8997924dcf393bb1ccf130cf06b6607eae77ac9d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2988.txt +++ /dev/null @@ -1 +0,0 @@ -In sequent calculus, the completeness of atomic initial sequents states that initial sequents A ⊢ A (where A is an arbitrary formula) can be derived from only atomic initial sequents p ⊢ p (where p is an atomic formula). This theorem plays a role analogous to eta expansion in lambda calculus, and dual to cut-elimination and beta reduction. Typically it can be established by induction on the structure of A, much more easily than cut-elimination. diff --git a/wiki/wikipedia/2989.txt b/wiki/wikipedia/2989.txt deleted file mode 100644 index 744388f39fe91464e86811efc62627abac143537..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2989.txt +++ /dev/null @@ -1,87 +0,0 @@ -Discrepancy of hypergraphs is an area of discrepancy theory. - -In the classical setting, we aim at partitioning the vertices of a hypergraph $\mathcal{H}=(V, \mathcal{E})$ into two classes in such a way that ideally each hyperedge contains the same number of vertices in both classes. A partition into two classes can be represented by a coloring $\chi \colon V \rightarrow \{-1, +1\}$. We call −1 and +1 colors. The color-classes $\chi^{-1}(-1)$ and $\chi^{-1}(+1)$ form the corresponding partition. For a hyperedge $E \in \mathcal{E}$, set -$$ -\chi(E) := \sum_{v\in E} \chi(v).
-$$ - -The discrepancy of $\mathcal{H}$ with respect to $\chi$ and the discrepancy of $\mathcal{H}$ are defined by -$$ -\operatorname{disc}(\mathcal{H},\chi) := \max_{E \in \mathcal{E}} |\chi(E)|, -$$ -$$ -\operatorname{disc}(\mathcal{H}) := \min_{\chi:V\rightarrow\{-1,+1\}} \operatorname{disc}(\mathcal{H}, \chi). -$$ - -These notions as well as the term 'discrepancy' seem to have appeared for the first time in a paper of Beck. Earlier results on this problem include the famous lower bound on the discrepancy of arithmetic progressions by Roth, and upper bounds for this problem and other results by Erdős and Spencer, and by Sárközy (described on p. 39). At that time, discrepancy problems were called quasi-Ramsey problems. - -To get some intuition for this concept, let's have a look at a few examples. - -* If all edges of $\mathcal{H}$ intersect trivially, i.e. $E_1 \cap E_2 = \varnothing$ for any two distinct edges $E_1, E_2 \in \mathcal{E}$, then the discrepancy is zero if all edges have even cardinality, and one if there is an edge of odd cardinality. - -* The other extreme is marked by the complete hypergraph $(V, 2^V)$. In this case the discrepancy is $\lceil \frac{1}{2} |V|\rceil$. Any 2-coloring will have a color class of at least this size, and this set is also an edge. On the other hand, any coloring $\chi$ with color classes of size $\lceil \frac{1}{2} |V|\rceil$ and $\lfloor \frac{1}{2} |V|\rfloor$ proves that the discrepancy is not larger than $\lceil \frac{1}{2} |V|\rceil$. It seems that the discrepancy reflects how chaotically the hyperedges of $\mathcal{H}$ intersect. Things are not that easy, however, as the following example shows. - -* Set $n=4k$, $k \in \mathbb{N}$ and $\mathcal{H}_n = ([n], \{E \subseteq [n] \mid | E \cap [2k]| = | E \setminus [2k]|\})$. In words, $\mathcal{H}_n$ is the hypergraph on 4k vertices {1,...,4k}, whose edges are all subsets that have the same number of elements in {1,...,2k} as in {2k+1,...,4k}. Now $\mathcal{H}_n$ has many (more than $\binom{n/2}{n/4}^2 = \Theta(\frac 1 n 2^n)$) complicatedly intersecting edges. However, its discrepancy is zero, since we can color {1,...,2k} in one color and {2k+1,...,4k} in another color. - -The last example shows that we cannot expect to determine the discrepancy by looking at a single parameter like the number of hyperedges. Still, the size of the hypergraph yields first upper bounds. - -1. For any hypergraph $\mathcal{H}$ with n vertices and m edges: - -*$\operatorname{disc}(\mathcal{H}) \leq \sqrt{2n \ln (2m)}.$ - -The proof is a simple application of the probabilistic method: - -Let $\chi:V \rightarrow \{-1,1\}$ be a random coloring, i.e. we have -$$ -\Pr(\chi(v) = -1) = \Pr(\chi(v) = 1) = \frac{1}{2} -$$ - -independently for all $v \in V$. Since $\chi(E) = \sum_{v \in E} \chi(v)$ is a sum of independent −1, 1 random variables, we have $\Pr(|\chi(E)|>\lambda)<2 \exp(-\lambda^2/(2n))$ for all $E \subseteq V$ and $\lambda \geq 0$. Put $\lambda = \sqrt{2n \ln (2m)}$ for convenience. Then -$$ -\Pr(\operatorname{disc}(\mathcal{H},\chi)> \lambda) \leq \sum_{E \in \mathcal{E}} \Pr(|\chi(E)| > \lambda) < 1. -$$ - -Hence a random coloring has discrepancy at most $\lambda$ with positive probability; in particular, there are colorings that have discrepancy at most $\lambda$. Hence $\operatorname{disc}(\mathcal{H}) \leq \lambda. \ \Box$ - -2.
For any hypergraph $\mathcal{H}$ with n vertices and m edges such that $m \geq n$: - -* $\operatorname{disc}(\mathcal{H}) \in O(\sqrt{n}).$ - -To prove this, a much more sophisticated approach using the entropy function was necessary. - -Of course this is particularly interesting for $m = O(n)$. In the case $m=n$, $\operatorname{disc}(\mathcal{H}) \leq 6 \sqrt{n}$ can be shown for n large enough. Therefore, this result is usually known as 'Six Standard Deviations Suffice'. It is considered to be one of the milestones of discrepancy theory. The entropy method has seen numerous other applications, e.g. in the proof of the tight upper bound for arithmetic progressions by Matoušek and Spencer or the upper bound in terms of the primal shatter function due to Matoušek. - -If each vertex of $\mathcal{H}$ is contained in at most t edges, then -$$ -\operatorname{disc}(\mathcal{H}) < 2t -$$. - -This result, the Beck–Fiala theorem, is due to Beck and Fiala. They bound the discrepancy by the maximum degree of $\mathcal{H}$. It is a famous open problem whether this bound can be improved asymptotically (modified versions of the original proof give 2t−1 or even 2t−3). - -Beck and Fiala conjectured that $\operatorname{disc}(\mathcal{H}) = O(\sqrt t)$, but little progress has been made so far in this direction. Bednarchak and Helm and Helm improved the Beck–Fiala bound in tiny steps to $\operatorname{disc}(\mathcal{H}) \leq 2t - 3$ (for a slightly restricted situation, i.e. $ t \geq 3 $). Bukh improved this in 2016 to $2t - \log^* t$, where $\log^* t$ denotes the iterated logarithm. A corollary of Beck's paper – the first time the notion of discrepancy explicitly appeared – shows $\operatorname{disc}(\mathcal{H}) \leq C \sqrt{t \log m} \log n$ for some constant C. The latest improvement in this direction is due to Banaszczyk: $\operatorname{disc}(\mathcal{H}) = O(\sqrt{t \log n})$. - -Suppose p1, ..., pm are permutations of [n]. Suppose $\mathcal{H}$ is the hypergraph on [n] whose edges are all the intervals of every permutation. For example, if one of the permutations is (1,2,3,4), then the hypergraph $\mathcal{H}$ contains e.g. the edges (1,2), (1,2,3), (2,3), (2,3,4), etc. The discrepancy of $\mathcal{H}$ is the minimum over all red-blue colorings of the integers in [n], of the maximum over all intervals, of the difference between the number of red and blue integers in the interval. Then: - -* For any two permutations, $\operatorname{disc}(\mathcal{H})= 2$. - -* For any m permutations, $\operatorname{disc}(\mathcal{H}) \leq 8 m\log {n}$, and such a coloring can be computed efficiently. - -* For any three permutations, Beck conjectures that $\operatorname{disc}(\mathcal{H})= \text{constant}$. However, this conjecture was refuted: for any n which is a power of 3, there exist 3 permutations whose discrepancy is $\lceil (\log_3{n})/3 + 1 \rceil$. More precisely, for any {1,-1} coloring, if the sum of all colors is d, then there exists some integer q such that, in all three permutations, the sum of the first q colors is at most $(-\log_3{n} + 2 d - 2)/3$. This has implications for the bin packing problem. - -* Axis-parallel rectangles in the plane (Roth, Schmidt) - -* Discrepancy of half-planes (Alexander, Matoušek) - -* Arithmetic progressions (Roth, Sárközy, Beck, Matoušek & Spencer) - -* Six Standard Deviations Suffice (Spencer) - -* Axis-parallel rectangles in dimensions three and higher (Folklore) - -* Komlós Conjecture - -* Numerical Integration: Monte Carlo methods in high dimensions.
- -* Computational Geometry: Divide and conquer algorithms. - -* Image Processing: Halftoning diff --git a/wiki/wikipedia/299.txt b/wiki/wikipedia/299.txt deleted file mode 100644 index 313c0103583d8092dca113bcfa34b1da377ea60e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/299.txt +++ /dev/null @@ -1,3 +0,0 @@ -In Riemannian geometry, the Rauch comparison theorem, named after Harry Rauch, who proved it in 1951, is a fundamental result which relates the sectional curvature of a Riemannian manifold to the rate at which geodesics spread apart. Intuitively, it states that for positive curvature, geodesics tend to converge, while for negative curvature, geodesics tend to spread. This theorem is formulated using Jacobi fields to measure the variation in geodesics. - -Let $M, \widetilde{M}$ be Riemannian manifolds, let $\gamma : [0, T] \to M$ and $\widetilde{\gamma} : [0,T] \to \widetilde{M}$ be unit speed geodesic segments such that $\widetilde{\gamma}(0)$ has no conjugate points along $\widetilde{\gamma}$, and let $J, \widetilde{J}$ be normal Jacobi fields along $\gamma$ and $\widetilde{\gamma}$ such that $J(0) = \widetilde{J}(0) = 0$ and $|D_t J(0)| = \left|\widetilde{D}_t \widetilde{J}(0)\right|$. Suppose that the sectional curvatures of $M$ and $\widetilde{M}$ satisfy $K(\Pi) \leq \widetilde{K}(\widetilde{\Pi})$ whenever $\Pi \subset T_{\gamma(t)} M$ is a 2-plane containing $\dot{\gamma}(t)$ and $\widetilde{\Pi} \subset T_{\tilde{\gamma}(t)} \widetilde{M}$ is a 2-plane containing $\dot{\widetilde{\gamma}}(t)$. Then $|J(t)| \geq |\widetilde{J}(t)|$ for all $t \in [0, T]$. diff --git a/wiki/wikipedia/2990.txt b/wiki/wikipedia/2990.txt deleted file mode 100644 index de7bd0892e8b8889adf4475fcf5dbb197d0e11e5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2990.txt +++ /dev/null @@ -1,78 +0,0 @@ -Braess's paradox is the observation that adding one or more roads to a road network can slow down overall traffic flow through it. The paradox was discovered by German mathematician Dietrich Braess in 1968. - -The paradox may have analogies in electrical power grids and biological systems. It has been suggested that in theory, the improvement of a malfunctioning network could be accomplished by removing certain parts of it. The paradox has been used to explain instances of improved traffic flow when existing major roads are closed. - -Dietrich Braess, a mathematician at Ruhr University, Germany, noticed the flow in a road network could be impeded by adding a new road, when he was working on traffic modelling. His idea was that if each driver is making the optimal self-interested decision as to which route is quickest, a shortcut could be chosen too often for drivers to have the shortest travel times possible. More formally, the idea behind Braess's discovery is that the Nash equilibrium may not equate with the best overall flow through a network. - -The paradox is stated as follows:
    "For each point of a road network, let there be given the number of cars starting from it and the destination of the cars. Under these conditions, one wishes to estimate the distribution of traffic flow. Whether one street is preferable to another depends not only on the quality of the road, but also on the density of the flow. If every driver takes the path that looks most favourable to them, the resultant running times need not be minimal. Furthermore, it is indicated by an example that an extension of the road network may cause a redistribution of the traffic that results in longer individual running times."
- -Adding extra capacity to a network when the moving entities selfishly choose their route can in some cases reduce overall performance. That is because the Nash equilibrium of such a system is not necessarily optimal. The network change induces a new game structure which leads to a (multiplayer) prisoner's dilemma. In a Nash equilibrium, drivers have no incentive to change their routes. While the system is not in a Nash equilibrium, individual drivers are able to improve their respective travel times by changing the routes they take. In the case of Braess's paradox, drivers will continue to switch until they reach Nash equilibrium despite the reduction in overall performance. - -If the latency functions are linear, adding an edge can never make total travel time at equilibrium worse by a factor of more than 4/3. - -In 1983, Steinberg and Zangwill provided, under reasonable assumptions, the necessary and sufficient conditions for Braess's paradox to occur in a general transportation network when a new route is added. (Note that their result applies to the addition of any new route, not just to the case of adding a single link.) As a corollary, they obtain that Braess's paradox is about as likely to occur as not occur; their result applies to random rather than planned networks and additions. - -Braess's paradox has a counterpart in the case of a reduction of the road network (which may cause a reduction of individual commuting time). In 1990 the temporary closing of 42nd Street in New York City for Earth Day reduced the amount of congestion in the area. In 2008 Youn, Gastner and Jeong demonstrated specific routes in Boston, New York City and London where that might actually occur and pointed out roads that could be closed to reduce predicted travel times. In 2009, New York experimented with closures of Broadway at Times Square and Herald Square, which resulted in improved traffic flow and permanent pedestrian plazas. - -In 2012, Paul Lecroart, of the institute of planning and development of the Île-de-France, wrote that "Despite initial fears, the removal of main roads does not cause deterioration of traffic conditions beyond the starting adjustments. The traffic transfers are limited and below expectations". - -In 2012, scientists at the Max Planck Institute for Dynamics and Self-Organization demonstrated, through computational modeling, the potential for the phenomenon to occur in power transmission networks where power generation is decentralized. - -In 2012, an international team of researchers from Institut Néel (CNRS, France), INP (France), IEMN (CNRS, France) and UCL (Belgium) published in Physical Review Letters a paper showing that Braess's paradox may occur in mesoscopic electron systems. In particular, they showed that adding a path for electrons in a nanoscopic network paradoxically reduced its conductance. That was shown both by simulations and by experiments at low temperature using scanning gate microscopy. - -Adilson E. Motter and collaborators demonstrated that Braess's paradox outcomes may often occur in biological and ecological systems. Motter suggests removing part of a perturbed network could rescue it. For resource management of endangered species food webs, in which extinction of many species might follow sequentially, selective removal of a doomed species from the network could in principle bring about the positive outcome of preventing a series of further extinctions.
- -It has been suggested that in basketball, a team can be seen as a network of possibilities for a route to scoring a basket, with a different efficiency for each pathway, and a star player could reduce the overall efficiency of the team, analogous to a shortcut that is overused increasing the overall times for a journey through a road network. A proposed solution for maximum efficiency in scoring is for a star player to shoot about the same number of shots as teammates. However, this approach is not supported by hard statistical evidence, as noted in the original paper. - -In soccer Helenio Herrera is well known for his famous quote "with 10 [players] our team plays better than with 11". - -Consider a road network as shown in the adjacent diagram on which 4000 drivers wish to travel from point Start to End. The travel time in minutes on the Start–A road is the number of travelers (T) divided by 100, and on Start–B is a constant 45 minutes (likewise with the roads across from them). If the dashed road does not exist (so the traffic network has 4 roads in total), the time needed to drive Start–A–End route with $a$ drivers would be $\tfrac{a}{100} + 45$. The time needed to drive the Start–B–End route with $b$ drivers would be $\tfrac{b}{100} + 45$. As there are 4000 drivers, the fact that $a + b = 4000$ can be used to derive the fact that $a = b = 2000$ when the system is at equilibrium. Therefore, each route takes $\tfrac{2000}{100} + 45 = 65$ minutes. If either route took less time, it would not be a Nash equilibrium: a rational driver would switch from the longer route to the shorter route. - -Now suppose the dashed line A–B is a road with an extremely short travel time of approximately 0 minutes. Suppose that the road is opened and one driver tries Start–A–B–End. To his surprise he finds that his time is $\tfrac{2000}{100} + \tfrac{2001}{100} = 40.01$ minutes, a saving of almost 25 minutes. Soon, more of the 4000 drivers are trying this new route. The time taken rises from 40.01 and keeps climbing. When the number of drivers trying the new route reaches 2500, with 1500 still in the Start–B–End route, their time will be $\tfrac{2500}{100} + \tfrac{4000}{100} = 65$ minutes, which is no improvement over the original route. Meanwhile, those 1500 drivers have been slowed to $ 45 + \tfrac{4000}{100} = 85$ minutes, a 20-minute increase. They are obliged to switch to the new route via A too, so it now takes $\tfrac{4000}{100} + \tfrac{4000}{100} = 80$ minutes. Nobody has any incentive to travel A-End or Start-B because any driver trying them will take 85 minutes. Thus, the opening of the cross route triggers an irreversible change to it by everyone, costing everyone 80 minutes instead of the original 65. If every driver were to agree not to use the A–B path, or if that route were closed, every driver would benefit by a 15-minute reduction in travel time. - -If one assumes the travel time for each person driving on an edge to be equal, an equilibrium will always exist. - -Let $L_e(x)$ be the formula for the travel time of each person traveling along edge $e$ when $x$ people take that edge. Suppose there is a traffic graph with $x_e$ people driving along edge $e$. Let the energy of $e$, $E(e)$, be -$$ -\sum_{i=1}^{x_e} L_e(i) = L_e(1) + L_e(2) + \cdots + L_e(x_e) -$$ - -(If $x_e = 0$ let $E(e) = 0$). Let the total energy of the traffic graph be the sum of the energies of every edge in the graph. - -Take a choice of routes that minimizes the total energy. 
Such a choice must exist because there are finitely many choices of routes. That will be an equilibrium. - -Assume, for contradiction, this is not the case. Then, there is at least one driver who can switch the route and improve the travel time. Suppose the original route is $e_0, e_1, \ldots, e_n$ while the new route is $e'_0, e'_1, \ldots, e'_m$. Let $E$ be total energy of the traffic graph, and consider what happens when the route $e_0, e_1, ... e_n$ is removed. The energy of each edge $e_i$ will be reduced by $L_{e_i}(x_{e_i})$ and so the $E$ will be reduced by $\sum_{i=0}^n L_{e_i}(x_{e_i})$. That is simply the total travel time needed to take the original route. If the new route is then added, $e'_0, e'_1, \ldots, e'_m$, the total energy $E$ will be increased by the total travel time needed to take the new route. Because the new route is shorter than the original route, $E$ must decrease relative to the original configuration, contradicting the assumption that the original set of routes minimized the total energy. - -Therefore, the choice of routes minimizing total energy is an equilibrium. - -The above proof outlines a procedure known as best response dynamics, which finds an equilibrium for a linear traffic graph and terminates in a finite number of steps. The algorithm is termed "best response" because at each step of the algorithm, if the graph is not at equilibrium then some driver has a best response to the strategies of all other drivers and switches to that response. - -Pseudocode for Best Response Dynamics: - -Let P be some traffic pattern. - -while P is not at equilibrium: - -compute the potential energy e of P - -for each driver d in P: - -for each alternate path p available to d: - -compute the potential energy n of the pattern when d takes path p - -if n < e: - -modify P so that d takes path p - -continue the topmost while - -At each step, if some particular driver could do better by taking an alternate path (a "best response"), doing so strictly decreases the energy of the graph. If no driver has a best response, the graph is at equilibrium. Since the energy of the graph strictly decreases with each step, the best response dynamics algorithm must eventually halt. - -If the travel time functions are linear, that is $L_e(x) = a_e x + b_e$ for some $a_e, b_e \geq 0$, then at worst, traffic in the energy-minimizing equilibrium is twice as bad as socially optimal. - -Proof: Let $Z$ be some traffic configuration, with associated energy $E(Z)$ and total travel time $T(Z)$. For each edge, the energy is the sum of an arithmetic progression, and using the formula for the sum of an arithmetic progression, one can show that $E(Z)\leq T(Z)\leq 2E(Z)$. If $Z_o$ is the socially-optimal traffic flow and $Z_e$ is the energy-minimizing traffic flow, the inequality implies that $T(Z_e) \leq 2E(Z_e) \leq 2E(Z_o) \leq 2T(Z_o)$. - -Thus, the total travel time for the energy-minimizing equilibrium is at most twice as bad as for the optimal flow. - -In 2013, Dal Forno and Merlone interpret Braess's paradox as a dynamical ternary choice problem. The analysis shows how the new path changes the problem. Before the new path is available, the dynamics is the same as in binary choices with externalities, but the new path transforms it into a ternary choice problem. The addition of an extra resource enriches the complexity of the dynamics. 
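The best response dynamics above can be checked numerically on the four-road example. The sketch below is illustrative Python, not part of the original analysis; the route encoding and update order are assumptions of this sketch:

```python
def route_times(n1, n2, n3):
    """Travel times (minutes) on the three routes of the example network.

    n1: drivers on Start-A-End, n2: Start-B-End, n3: Start-A-B-End (shortcut)."""
    load_sa = n1 + n3                       # traffic on the Start-A segment
    load_be = n2 + n3                       # traffic on the B-End segment
    return [load_sa / 100 + 45,             # Start-A-End
            45 + load_be / 100,             # Start-B-End
            load_sa / 100 + load_be / 100]  # shortcut; the A-B road itself takes ~0

def best_response(counts):
    """Move one driver at a time to a strictly better route until no one can improve."""
    counts = list(counts)
    improved = True
    while improved:
        improved = False
        for i in range(3):
            if counts[i] == 0:
                continue
            current = route_times(*counts)[i]
            for j in range(3):
                if j == i:
                    continue
                trial = counts[:]
                trial[i] -= 1
                trial[j] += 1
                if route_times(*trial)[j] < current:   # strictly better for the mover
                    counts, improved = trial, True
                    break
            if improved:
                break
    return counts

eq = best_response([2000, 2000, 0])   # start from the 65-minute split
print(eq, route_times(*eq))           # -> [0, 0, 4000] with times [85.0, 85.0, 80.0]
```

Starting from the 65-minute split, the dynamics drains both original routes into the shortcut and halts at the all-shortcut equilibrium, where the used route takes 80 minutes, matching the worked example.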
In fact, there can even be coexistence of cycles, and the implication of the paradox on the dynamics can be seen from both a geometrical and an analytical perspective. diff --git a/wiki/wikipedia/2991.txt b/wiki/wikipedia/2991.txt deleted file mode 100644 index 6be55b7a8e5d285aa9b5dfe572d02e357e6f897b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2991.txt +++ /dev/null @@ -1,5 +0,0 @@ -Paradox is a finite-domain model finder for pure first-order logic (FOL) with equality developed by Koen Lindström Claessen and Niklas Sörensson at the Chalmers University of Technology. It can participate as part of an automated theorem proving system. The software is primarily written in the Haskell programming language. It is released under the terms of the GNU General Public License and is free. - -The Paradox developers described the software as a Mace-style method, after McCune's tool of that name. Paradox was developed up to version 4, the final version being effective in model finding for the Web Ontology Language OWL2. - -Paradox was a division winner in the CADE ATP System Competition, an annual contest for automated theorem proving, in the years 2003 to 2012. diff --git a/wiki/wikipedia/2992.txt b/wiki/wikipedia/2992.txt deleted file mode 100644 index b1b730c1e4596b30f54506ef756959c9b611d74f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2992.txt +++ /dev/null @@ -1,123 +0,0 @@ -In information theory, Shannon's source coding theorem (or noiseless coding theorem) establishes the limits to possible data compression, and the operational meaning of the Shannon entropy.
    N i.i.d. random variables each with entropy H(X) can be compressed into more than N H(X) bits with negligible risk of information loss, as N → ∞; but conversely, if they are compressed into fewer than N H(X) bits it is virtually certain that information will be lost.
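This informal statement can be illustrated empirically. The sketch below is hypothetical Python, using the general-purpose bz2 compressor as a stand-in for a near-optimal encoder of an i.i.d. binary source, and comparing the achieved code rate with the source entropy:

```python
import bz2
import math
import random

p = 0.1                                # P(symbol == 'b')
H = -p * math.log2(p) - (1 - p) * math.log2(1 - p)   # entropy, ~0.469 bits/symbol

random.seed(0)
N = 200_000
data = "".join("b" if random.random() < p else "a" for _ in range(N)).encode()

compressed_bits = 8 * len(bz2.compress(data, 9))
print(f"entropy H(X): {H:.3f} bits/symbol")
print(f"bz2 rate    : {compressed_bits / N:.3f} bits/symbol")
# The empirical rate stays above H(X), as the theorem requires,
# and moves toward it as N grows.
```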
- -Let Σ1, Σ2 denote two finite alphabets and let Σ1∗ and Σ2∗ denote the sets of all finite words from those alphabets (respectively). - -Suppose that X is a random variable taking values in Σ1 and let  f  be a uniquely decodable code from Σ1∗ to Σ2∗ where |Σ2| = a. Let S denote the random variable given by the length of codeword  f (X). - -If  f  is optimal in the sense that it has the minimal expected word length for X, then (Shannon 1948): -$$ - \frac{H(X)}{\log_2 a} \leq \mathbb{E}[S] < \frac{H(X)}{\log_2 a} +1 -$$ - -where $\mathbb{E}$ denotes the expected value operator. - -Given X is an i.i.d. source, its time series X1, ..., Xn is i.i.d. with entropy H(X) in the discrete-valued case and differential entropy in the continuous-valued case. The source coding theorem states that for any ε > 0, i.e. for any rate H(X) + ε larger than the entropy of the source, there is a large enough n and an encoder that takes n i.i.d. repetitions of the source, X1:n, and maps it to n(H(X) + ε) binary bits such that the source symbols X1:n are recoverable from the binary bits with probability of at least 1 − ε. - -Proof of Achievability. Fix some ε > 0, and let -$$ -p(x_1, \ldots, x_n) = \Pr \left[X_1 = x_1, \cdots, X_n = x_n \right]. -$$ - -The typical set, $A_n^\varepsilon$, is defined as follows: -$$ -A_n^\varepsilon =\left\{(x_1, \cdots, x_n) \ : \ \left|-\frac{1}{n} \log p(x_1, \cdots, x_n) - H_n(X)\right| < \varepsilon \right\}. -$$ - -The Asymptotic Equipartition Property (AEP) shows that for large enough n, the probability that a sequence generated by the source lies in the typical set, $A_n^\varepsilon$, as defined approaches one. In particular, for sufficiently large n, $P((X_1,X_2,\cdots,X_n) \in A_n^\varepsilon)$ can be made arbitrarily close to 1, and specifically, greater than $1-\varepsilon$ (see AEP for a proof). - -The definition of typical sets implies that those sequences that lie in the typical set satisfy: -$$ -2^{-n(H(X)+\varepsilon)} \leq p \left (x_1, \cdots, x_n \right ) \leq 2^{-n(H(X)-\varepsilon)} -$$ - -Note that: - -*The probability of a sequence $(X_1,X_2,\cdots X_n)$ being drawn from $A_n^\varepsilon$ is greater than 1 − ε. - -*$\left| A_n^\varepsilon \right| \leq 2^{n(H(X)+\varepsilon)}$, which follows from the left hand side (lower bound) for $ p(x_1,x_2,\cdots x_n)$. - -*$\left| A_n^\varepsilon \right| \geq (1-\varepsilon) 2^{n(H(X)-\varepsilon)}$, which follows from the upper bound for $ p(x_1,x_2,\cdots x_n)$ and the lower bound on the total probability of the whole set $A_n^\varepsilon$. - -Since $\left| A_n^\varepsilon \right| \leq 2^{n(H(X)+\varepsilon)}$, $n(H(X)+\varepsilon)$ bits are enough to point to any string in this set. - -The encoding algorithm: The encoder checks if the input sequence lies within the typical set; if yes, it outputs the index of the input sequence within the typical set; if not, the encoder outputs an arbitrary n(H(X) + ε) digit number. As long as the input sequence lies within the typical set (with probability at least 1 − ε), the encoder doesn't make any error. So, the probability of error of the encoder is bounded above by ε. - -Proof of Converse. The converse is proved by showing that any set of size smaller than $A_n^\varepsilon$ (in the sense of exponent) would cover a set of probability bounded away from 1. - -For 1 ≤ i ≤ n let $s_i$ denote the word length of each possible $x_i$. Define $q_i = a^{-s_i}/C$, where C is chosen so that $q_1 + \cdots + q_n = 1$.
Then - -\begin{align} - -H(X) &= -\sum_{i=1}^n p_i \log_2 p_i \\ - -&\leq -\sum_{i=1}^n p_i \log_2 q_i \\ - -&= -\sum_{i=1}^n p_i \log_2 a^{-s_i} + \sum_{i=1}^n p_i \log_2 C \\ - -&= -\sum_{i=1}^n p_i \log_2 a^{-s_i} + \log_2 C \\ - -&\leq -\sum_{i=1}^n p_i \log_2 a^{-s_i} \\ - -&= \sum_{i=1}^n s_i p_i \log_2 a \\ - -&= \mathbb{E}[S] \log_2 a \\ - -\end{align} - -where the second line follows from Gibbs' inequality and the fifth line follows from Kraft's inequality: -$$ -C = \sum_{i=1}^n a^{-s_i} \leq 1 -$$ - -so log C ≤ 0. - -For the second inequality we may set -$$ -s_i = \lceil - \log_a p_i \rceil -$$ - -so that -$$ - - \log_a p_i \leq s_i < -\log_a p_i + 1 -$$ - -and so -$$ - a^{-s_i} \leq p_i -$$ - -and -$$ - \sum a^{-s_i} \leq \sum p_i = 1 -$$ - -and so by Kraft's inequality there exists a prefix-free code having those word lengths. Thus the minimal S satisfies - -\begin{align} - -\mathbb{E}[S] & = \sum p_i s_i \\ - -& < \sum p_i \left( -\log_a p_i +1 \right) \\ - -& = \sum - p_i \frac{\log_2 p_i}{\log_2 a} +1 \\ - -& = \frac{H(X)}{\log_2 a} +1 \\ - -\end{align} - -Define the typical set $A_n^\varepsilon$ as: -$$ -A_n^\varepsilon = \left \{x_1^n \ : \ \left|-\frac{1}{n} \log p \left (X_1, \cdots, X_n \right ) - \overline{H_n}(X)\right| < \varepsilon \right \}. -$$ - -Then, for given δ > 0, for n large enough, $\Pr(A_n^\varepsilon) > 1 - \delta$. Now we just encode the sequences in the typical set, and usual methods in source coding show that the cardinality of this set is smaller than $2^{n(\overline{H_n}(X)+\varepsilon)}$. Thus, on average, $\overline{H_n}(X) + \varepsilon$ bits suffice for encoding with probability greater than 1 − δ, where ε and δ can be made arbitrarily small, by making n larger. diff --git a/wiki/wikipedia/2993.txt b/wiki/wikipedia/2993.txt deleted file mode 100644 index e978a6aa1a288c5ffcf1e53e7b67e5834f5cca5d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2993.txt +++ /dev/null @@ -1,148 +0,0 @@ -The Post correspondence problem is an undecidable decision problem that was introduced by Emil Post in 1946. Because it is simpler than the halting problem and the Entscheidungsproblem, it is often used in proofs of undecidability. - -Let $A$ be an alphabet with at least two symbols. The input of the problem consists of two finite lists $\alpha_{1}, \ldots, \alpha_{N}$ and $\beta_{1}, \ldots, \beta_{N}$ of words over $A$. A solution to this problem is a sequence of indices $(i_k)_{1 \le k \le K}$ with $K \ge 1$ and $ 1 \le i_k \le N$ for all $k$, such that -$$ -\alpha_{i_1} \ldots \alpha_{i_K} = \beta_{i_1} \ldots \beta_{i_K}. -$$ - -The decision problem then is to decide whether such a solution exists or not. Any instance determines the pair of maps -$$ -g: (i_1,\ldots,i_K) \mapsto \alpha_{i_1} \ldots \alpha_{i_K} -$$ -$$ -h: (i_1,\ldots,i_K) \mapsto \beta_{i_1} \ldots \beta_{i_K}. -$$ - -This gives rise to an equivalent alternative definition often found in the literature, according to which any two homomorphisms $g,h$ with a common domain and a common codomain form an instance of the Post correspondence problem, which now asks whether there exists a nonempty word $w$ in the domain such that -$$ -g(w)=h(w) -$$. - -Another definition describes this problem easily as a type of puzzle. We begin with a collection of dominos, each containing two strings, one on each side. An individual domino looks like -$$ -[a/ab] -$$ - -and a collection of dominos looks like -$$ -{ [bc/ca], [a/ab], [ca/a], [abc/c] } -$$.
The task is to make a list of these dominos (repetition permitted) so that the string we get by reading off the symbols on the top is the same as the string of symbols on the bottom. This list is called a match. The Post Correspondence Problem is to determine whether a collection of dominos has a match (a brute-force search for matches is sketched further below). - -For example, the following list is a match for this puzzle. -$$ -{ [a/ab], [bc/ca], [a/ab], [abc/c] } -$$. - -For some collections of dominos, finding a match may not be possible. For example, the collection -$$ -{ [abc/ab], [ca/a], [acc/ba] } -$$. - -cannot contain a match because every top string is longer than the corresponding bottom string. - -Consider the following two lists: $\alpha = (a, ab, bba)$ and $\beta = (baa, aa, bb)$. - -A solution to this problem would be the sequence (3, 2, 3, 1), because -$$ -\alpha_3 \alpha_2 \alpha_3 \alpha_1 = bba \cdot ab \cdot bba \cdot a = bbaabbbaa = bb \cdot aa \cdot bb \cdot baa = \beta_{3} \beta_{2} \beta_{3} \beta_{1}. -$$ - -Furthermore, since (3, 2, 3, 1) is a solution, so are all of its "repetitions", such as (3, 2, 3, 1, 3, 2, 3, 1), etc.; that is, when a solution exists, there are infinitely many solutions of this repetitive kind. - -However, if the two lists had consisted of only $\alpha_2, \alpha_3$ and $\beta_{2}, \beta_{3}$ from those sets, then there would have been no solution (the last letter of any such α string is not the same as the letter before it, whereas β only constructs pairs of the same letter). - -A convenient way to view an instance of a Post correspondence problem is as a collection of blocks of the form $[x/y]$, with the top string in the top cell and the bottom string in the bottom cell, - -there being an unlimited supply of each type of block. Thus the above example is viewed as - -i = 1: $[a/baa]$ - -i = 2: $[ab/aa]$ - -i = 3: $[bba/bb]$ - -where the solver has an endless supply of each of these three block types. A solution corresponds to some way of laying blocks next to each other so that the string in the top cells corresponds to the string in the bottom cells. Then the solution to the above example corresponds to: - -i1 = 3: $[bba/bb]$ - -i2 = 2: $[ab/aa]$ - -i3 = 3: $[bba/bb]$ - -i4 = 1: $[a/baa]$ - -Again using blocks to represent an instance of the problem, the following is an example that has infinitely many solutions in addition to the kind obtained by merely "repeating" a solution. - -1: $[bb/b]$ - -2: $[ab/ba]$ - -3: $[c/bc]$ - -In this instance, every sequence of the form (1, 2, 2, . . ., 2, 3) is a solution (in addition to all their repetitions); for instance, with three 2s, - -$[bb/b][ab/ba][ab/ba][ab/ba][c/bc]$ - -reads bbabababc along both the top and the bottom. - -The most common proof for the undecidability of PCP describes an instance of PCP that can simulate the computation of an arbitrary Turing machine on a particular input. A match will occur if and only if the input would be accepted by the Turing machine. Because deciding if a Turing machine will accept an input is a basic undecidable problem, PCP cannot be decidable either. The following discussion is based on Michael Sipser's textbook Introduction to the Theory of Computation. - -In more detail, the idea is that the string along the top and bottom will be a computation history of the Turing machine's computation. This means it will list a string describing the initial state, followed by a string describing the next state, and so on until it ends with a string describing an accepting state. The state strings are separated by some separator symbol (usually written #). According to the definition of a Turing machine, the full state of the machine consists of three parts: - -* The current contents of the tape. - -* The current state of the finite state machine which operates the tape head. - -* The current position of the tape head on the tape.
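The brute-force search promised above can be organized as a breadth-first search over "overhang" states, i.e. the suffix by which one side currently leads the other. The following Python is an illustrative sketch (the state encoding, the tile bound, and all names are assumptions of this example, not part of the standard presentation):

```python
from collections import deque

def find_match(alphas, betas, max_tiles=10):
    """BFS for an index sequence whose top and bottom concatenations are equal."""
    queue = deque([((), "", True)])    # (index sequence, overhang, top side is ahead?)
    seen = {("", True)}
    while queue:
        seq, over, top_ahead = queue.popleft()
        if len(seq) >= max_tiles:
            continue
        for i, (a, b) in enumerate(zip(alphas, betas)):
            top = (over + a) if top_ahead else a
            bot = b if top_ahead else (over + b)
            # one side must be a prefix of the other, or the tile cannot extend a match
            if not (top.startswith(bot) or bot.startswith(top)):
                continue
            if len(top) >= len(bot):
                new = (top[len(bot):], True)
            else:
                new = (bot[len(top):], False)
            new_seq = seq + (i + 1,)
            if new[0] == "":
                return new_seq         # empty overhang: the two strings are equal
            if new not in seen:
                seen.add(new)
                queue.append((new_seq,) + new)
    return None                        # no match within the tile bound

print(find_match(["a", "ab", "bba"], ["baa", "aa", "bb"]))   # -> (3, 2, 3, 1)
```

On the lists α = (a, ab, bba), β = (baa, aa, bb) this prints (3, 2, 3, 1); on the matchless collection {[abc/ab], [ca/a], [acc/ba]} it exhausts its bound and returns None.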
- -Although the tape has infinitely many cells, only some finite prefix of these will be non-blank. We write these down as part of our state. To describe the state of the finite control, we create new symbols, labelled $q_1$ through $q_k$, for each of the finite state machine's k states. We insert the correct symbol into the string describing the tape's contents at the position of the tape head, thereby indicating both the tape head's position and the current state of the finite control. For the alphabet {0,1}, a typical state might look something like: - -$101101110q_700110$. - -A simple computation history would then look something like this: - -$q_0101\#1q_401\#11q_21\#1q_810$. - -We start out with this block, where x is the input string and $q_0$ is the start state: - -The top starts out "lagging" the bottom by one state, and keeps this lag until the very end stage. Next, for each symbol a in the tape alphabet, as well as #, we have a "copy" block, which copies it unmodified from one state to the next: - -We also have a block for each position transition the machine can make, showing how the tape head moves, how the finite state changes, and what happens to the surrounding symbols. For example, here the tape head is over a 0 in state 4, and then writes a 1 and moves right, changing to state 7: - -Finally, when the top reaches an accepting state, the bottom needs a chance to finally catch up to complete the match. To allow this, we extend the computation so that once an accepting state is reached, each subsequent machine step will cause a symbol near the tape head to vanish, one at a time, until none remain. If $q_f$ is an accepting state, we can represent this with the following transition blocks, where a is a tape alphabet symbol: - -There are a number of details to work out, such as dealing with boundaries between states, making sure that our initial tile goes first in the match, and so on, but this shows the general idea of how a static tile puzzle can simulate a Turing machine computation. - -The previous example - -$q_0101\#1q_401\#11q_21\#1q_810$. - -is represented as the following solution to the Post correspondence problem: - -... - -Many variants of PCP have been considered. One reason is that, when one tries to prove undecidability of some new problem by reducing from PCP, it often happens that the first reduction one finds is not from PCP itself but from an apparently weaker version. - -* The problem may be phrased in terms of monoid morphisms f, g from the free monoid B to the free monoid A where B is of size n. The problem is to determine whether there is a word w in B+ such that f(w) = g(w). - -* The condition that the alphabet $A$ have at least two symbols is required since the problem is decidable if $A$ has only one symbol. - -* A simple variant is to fix n, the number of tiles. This problem is decidable if n ≤ 2, but remains undecidable for n ≥ 5. It is unknown whether the problem is decidable for 3 ≤ n ≤ 4. - -* The circular Post correspondence problem asks whether indexes $i_1, i_2,\ldots$ can be found such that $\alpha_{i_1} \cdots \alpha_{i_k}$ and $\beta_{i_1} \cdots \beta_{i_k}$ are conjugate words, i.e., they are equal modulo rotation. This variant is undecidable. - -* One of the most important variants of PCP is the bounded Post correspondence problem, which asks if we can find a match using no more than k tiles, including repeated tiles. A brute force search solves the problem in time $O(2^k)$, but this may be difficult to improve upon, since the problem is NP-complete.
Unlike some NP-complete problems like the boolean satisfiability problem, a small variation of the bounded problem was also shown to be complete for RNP, which means that it remains hard even if the inputs are chosen at random (it is hard on average over uniformly distributed inputs). - -* Another variant of PCP is called the marked Post Correspondence Problem, in which each $\alpha_i$ must begin with a different symbol, and each $\beta_i$ must also begin with a different symbol. Halava, Hirvensalo, and de Wolf showed that this variation is decidable in exponential time. Moreover, they showed that if this requirement is slightly loosened so that only one of the first two characters need to differ (the so-called 2-marked Post Correspondence Problem), the problem becomes undecidable again. - -* The Post Embedding Problem is another variant where one looks for indexes $i_1, i_2, \ldots$ such that $\alpha_{i_1} \cdots \alpha_{i_k}$ is a (scattered) subword of $\beta_{i_1} \cdots \beta_{i_k}$. This variant is easily decidable since, when some solutions exist, in particular a length-one solution exists. More interesting is the Regular Post Embedding Problem, a further variant where one looks for solutions that belong to a given regular language (submitted, e.g., under the form of a regular expression on the set $\{1,\ldots,N\}$). The Regular Post Embedding Problem is still decidable but, because of the added regular constraint, it has a very high complexity that dominates every multiply recursive function. - -* The Identity Correspondence Problem (ICP) asks whether a finite set of pairs of words (over a group alphabet) can generate an identity pair by a sequence of concatenations. The problem is undecidable and equivalent to the following Group Problem: is the semigroup generated by a finite set of pairs of words (over a group alphabet) a group. diff --git a/wiki/wikipedia/2994.txt b/wiki/wikipedia/2994.txt deleted file mode 100644 index 9cd8ffc16aa9828635dde4c937c77df2482074b9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2994.txt +++ /dev/null @@ -1,49 +0,0 @@ -In statistics, the Behrens–Fisher problem, named after Walter Behrens and Ronald Fisher, is the problem of interval estimation and hypothesis testing concerning the difference between the means of two normally distributed populations when the variances of the two populations are not assumed to be equal, based on two independent samples. - -One difficulty with discussing the Behrens–Fisher problem and proposed solutions, is that there are many different interpretations of what is meant by "the Behrens–Fisher problem". These differences involve not only what is counted as being a relevant solution, but even the basic statement of the context being considered. - -Let X1, ..., Xn and Y1, ..., Ym be i.i.d. samples from two populations which both come from the same location–scale family of distributions. The scale parameters are assumed to be unknown and not necessarily equal, and the problem is to assess whether the location parameters can reasonably be treated as equal. Lehmann states that "the Behrens–Fisher problem" is used both for this general form of model when the family of distributions is arbitrary and for when the restriction to a normal distribution is made. 
While Lehmann discusses a number of approaches to the more general problem, mainly based on nonparametrics, most other sources appear to use "the Behrens–Fisher problem" to refer only to the case where the distribution is assumed to be normal: most of this article makes this assumption. - -Solutions to the Behrens–Fisher problem have been presented that make use of either a classical or a Bayesian inference point of view and either solution would be notionally invalid judged from the other point of view. If consideration is restricted to classical statistical inference only, it is possible to seek solutions to the inference problem that are simple to apply in a practical sense, giving preference to this simplicity over any inaccuracy in the corresponding probability statements. Where exactness of the significance levels of statistical tests is required, there may be an additional requirement that the procedure should make maximum use of the statistical information in the dataset. It is well known that an exact test can be gained by randomly discarding data from the larger dataset until the sample sizes are equal, assembling data in pairs and taking differences, and then using an ordinary t-test to test for the mean-difference being zero: clearly this would not be "optimal" in any sense. - -The task of specifying interval estimates for this problem is one where a frequentist approach fails to provide an exact solution, although some approximations are available. Standard Bayesian approaches also fail to provide an answer that can be expressed as straightforward simple formulae, but modern computational methods of Bayesian analysis do allow essentially exact solutions to be found. Thus study of the problem can be used to elucidate the differences between the frequentist and Bayesian approaches to interval estimation. - -Ronald Fisher in 1935 introduced fiducial inference in order to apply it to this problem. He referred to an earlier paper by Walter Ulrich Behrens from 1929. Behrens and Fisher proposed to find the probability distribution of -$$ - T \equiv {\bar x_1 - \bar x_2 \over \sqrt{s_1^2/n_1 + s_2^2/n_2}} -$$ - -where $ \bar x_1 $ and $ \bar x_2 $ are the two sample means, and s1 and s2 are their standard deviations. See Behrens–Fisher distribution. Fisher approximated the distribution of this by ignoring the random variation of the relative sizes of the standard deviations, -$$ - {s_1 / \sqrt{n_1} \over \sqrt{s_1^2/n_1 + s_2^2/n_2}}. -$$ - -Fisher's solution provoked controversy because it did not have the property that the hypothesis of equal means would be rejected with probability α if the means were in fact equal. Many other methods of treating the problem have been proposed since, and the effect on the resulting confidence intervals have been investigated. - -A widely used method is that of B. L. Welch, who, like Fisher, was at University College London. The variance of the mean difference -$$ -\bar d =\bar x_1 - \bar x_2 -$$ - -results in -$$ - s_{\bar d}^2 = \frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}. -$$ - -Welch (1938) approximated the distribution of $s_{\bar d}^2$ by the Type III Pearson distribution (a scaled chi-squared distribution) whose first two moments agree with that of $s_{\bar d}^2$. This applies to the following number of degrees of freedom (d.f.), which is generally non-integer: -$$ - \nu \approx {(\gamma_1 + \gamma_2)^2 \over \gamma_1^2/(n_1-1) + \gamma_2^2/(n_2-1)} \quad \text{ where }\gamma_i = \sigma_i^2/n_i. 
-$$ - -Under the null hypothesis of equal expectations, $\mu_1 = \mu_2$, the distribution of the Behrens–Fisher statistic T, which also depends on the variance ratio $\sigma_1^2/\sigma_2^2$, could now be approximated by Student's t distribution with these ν degrees of freedom. But this ν contains the population variances $\sigma_i^2$, and these are unknown. The following estimate only replaces the population variances by the sample variances: -$$ -\hat\nu \approx \frac{(g_1 + g_2)^2}{g_1^2/(n_1-1) + g_2^2/(n_2-1)} \quad \text{ where } g_i = s_i^2/n_i. -$$ - -This $\hat\nu$ is a random variable. A t distribution with a random number of degrees of freedom does not exist. Nevertheless, the Behrens–Fisher T can be compared with a corresponding quantile of Student's t distribution with these estimated numbers of degrees of freedom, $\hat\nu$, which is generally non-integer. In this way, the boundary between acceptance and rejection region of the test statistic T is calculated based on the empirical variances $s_i^2$, in a way that is a smooth function of these. - -This method also does not give exactly the nominal rate, but is generally not too far off. However, if the population variances are equal, or if the samples are rather small and the population variances can be assumed to be approximately equal, it is more accurate to use Student's t-test. - -A number of different approaches to the general problem have been proposed, some of which claim to "solve" some version of the problem. Among these, the Dudewicz–Ahmed procedure has been found to be recommended for practical use. - -For several decades, it was commonly believed that no exact solution to the common Behrens–Fisher problem existed. However, it was proved in 1966 that it has an exact solution. In 2018 the probability density function of a generalized Behrens–Fisher distribution of m means and m distinct standard errors from m samples of distinct sizes from independent normal distributions with distinct means and variances was derived, and the paper also examined its asymptotic approximations. A follow-up paper showed that the classic paired t-test is a central Behrens–Fisher problem with a non-zero population correlation coefficient and derived its corresponding probability density function by solving its associated non-central Behrens–Fisher problem with a nonzero population correlation coefficient. It also solved a more general non-central Behrens–Fisher problem with a non-zero population correlation coefficient in the appendix. Related nonparametric tests include the Cucconi test of 1968 and the Lepage test of 1971. diff --git a/wiki/wikipedia/2995.txt b/wiki/wikipedia/2995.txt deleted file mode 100644 index 96eef1fad93cffc7b00b1deafc5138321b8fb672..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2995.txt +++ /dev/null @@ -1,99 +0,0 @@ -In number theory, Mertens' theorems are three 1874 results of Franz Mertens related to the density of prime numbers. "Mertens' theorem" may also refer to his theorem in analysis. - -In the following, let $p\le n$ mean all primes not exceeding n. - -Mertens' first theorem: -$$ - \sum_{p \le n} \frac{\ln p}{p} - \ln n -$$ - -does not exceed 2 in absolute value for any $ n\ge 2$. () - -Mertens' second theorem: -$$ -\lim_{n\to\infty}\left(\sum_{p\le n}\frac1p -\ln\ln n-M\right) =0, -$$ - -where M is the Meissel–Mertens constant (). More precisely, Mertens proves that the expression under the limit does not in absolute value exceed -$$ - \frac 4{\ln(n+1)} +\frac 2{n\ln n} -$$ - -for any $ n\ge 2$.
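Before turning to the third theorem, note that the second theorem is easy to check numerically. The sketch below (Python; the sieve and the decimal value of the Meissel–Mertens constant are standard and assumed here, not taken from the text) prints the expression under the limit for increasing n; by Mertens' bound it should stay within 4/ln(n+1) + 2/(n ln n):

```python
from math import log

M = 0.2614972128476428  # Meissel–Mertens constant (standard value)

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return (p for p in range(2, n + 1) if sieve[p])

for n in (10**3, 10**4, 10**5, 10**6):
    diff = sum(1.0 / p for p in primes_up_to(n)) - log(log(n)) - M
    print(n, diff)  # stays well inside 4/log(n + 1) + 2/(n * log(n))
```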
- -Mertens' third theorem: -$$ -\lim_{n\to\infty}\ln n\prod_{p\le n}\left(1-\frac1p\right)=e^{-\gamma}, -$$ - -where γ is the Euler–Mascheroni constant (). - -In a paper on the growth rate of the sum-of-divisors function published in 1983, Guy Robin proved that in Mertens' 2nd theorem the difference -$$ -\sum_{p\le n}\frac1p -\ln\ln n-M -$$ - -changes sign infinitely often, and that in Mertens' 3rd theorem the difference -$$ -\ln n\prod_{p\le n}\left(1-\frac1p\right)-e^{-\gamma} -$$ - -changes sign infinitely often. Robin's results are analogous to Littlewood's famous theorem that the difference π(x) − li(x) changes sign infinitely often. No analog of the Skewes number (an upper bound on the first natural number x for which π(x) > li(x)) is known in the case of Mertens' 2nd and 3rd theorems. - -Regarding this asymptotic formula Mertens refers in his paper to "two curious formulae of Legendre", the first one being Mertens' second theorem's prototype (and the second one being Mertens' third theorem's prototype: see the very first lines of the paper). He recalls that it is contained in Legendre's third edition of his "Théorie des nombres" (1830; it is in fact already mentioned in the second edition, 1808), and also that a more elaborate version was proved by Chebyshev in 1851. Note that, already in 1737, Euler knew the asymptotic behaviour of this sum. - -Mertens diplomatically describes his proof as more precise and rigorous. In reality none of the previous proofs are acceptable by modern standards: Euler's computations involve infinity (and the hyperbolic logarithm of infinity, and the logarithm of the logarithm of infinity!); Legendre's argument is heuristic; and Chebyshev's proof, although perfectly sound, makes use of the Legendre-Gauss conjecture, which was not proved until 1896 and became better known as the prime number theorem. - -Mertens' proof does not appeal to any unproved hypothesis (in 1874), and only to elementary real analysis. It comes 22 years before the first proof of the prime number theorem which, by contrast, relies on a careful analysis of the behavior of the Riemann zeta function as a function of a complex variable. - -Mertens' proof is in that respect remarkable. Indeed, with modern notation it yields -$$ -\sum_{p\le x}\frac1p=\log\log x+M+O(1/\log x) -$$ - -whereas the prime number theorem (in its simplest form, without error estimate), can be shown to be equivalent to -$$ -\sum_{p\le x}\frac1p=\log\log x+M+o(1/\log x). -$$ - -In 1909 Edmund Landau, by using the best version of the prime number theorem then at his disposition, proved that -$$ -\sum_{p\le x}\frac1p=\log\log x+M+O(e^{-(\log x)^{1/14}}) -$$ - -holds; in particular the error term is smaller than $1/(\log x)^k$ for any fixed integer k. A simple summation by parts exploiting the strongest form known of the prime number theorem improves this to -$$ -\sum_{p\le x}\frac1p=\log\log x+M+O(e^{-c(\log x)^{3/5}(\log\log x)^{-1/5}}) -$$ - -for some $c > 0$. - -Similarly a partial summation shows that $\sum_{p\le x} \frac{\log p}{p} = \log x+ C+o(1)$ is equivalent to the PNT. - -An estimate of the probability of $X$ ($X \gg n$) having no factor $\le n$ is given by -$$ -\prod_{p\le n}\left(1-\frac1p\right) -$$ - -This is closely related to Mertens' third theorem, which gives the asymptotic approximation -$$ -P(p \nmid X\ \forall p \le n) \approx \frac{1}{e^\gamma \ln n } -$$ - -The main step is -$$ -O(n)+n\log n=\log n! =\sum_{p^k\le n} \lfloor n/p^k\rfloor\log p = \sum_{p^k\le n} \left(\frac{n}{p^k}+O(1)\right)\log p= n \sum_{p^k\le n}\frac{\log p}{p^k}\ + O(n) -$$ - -where the last equality needs $\sum_{p^k\le n}\log p =O(n)$ which follows from $\sum_{p\in (n,2n]}\log p\le \log{2n\choose n}=O(n)$. - -Thus, we have proved that -$$ -\sum_{p\le n}\frac{\log p}{p}=\log n+O(1) -$$. - -A partial summation yields -$$ -\sum_{p\le n} \frac1{p} = \log\log n+M+O(1/\log n) -$$. diff --git a/wiki/wikipedia/2996.txt b/wiki/wikipedia/2996.txt deleted file mode 100644 index ba3a02a5be608538c2542ffe0d9c222ebc1a8fc2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2996.txt +++ /dev/null @@ -1,16 +0,0 @@ -In mathematics - specifically, in measure theory - Fernique's theorem is a result about Gaussian measures on Banach spaces. It extends the finite-dimensional result that a Gaussian random variable has exponential tails. The result was proved in 1970 by the mathematician Xavier Fernique. - -Let $(X, \| \cdot \|)$ be a separable Banach space. Let μ be a centred Gaussian measure on X, i.e. a probability measure defined on the Borel sets of X such that, for every bounded linear functional ℓ : X → R, the push-forward measure $\ell_{\ast} \mu$ defined on the Borel sets of R by -$$ -( \ell_{\ast} \mu ) (A) = \mu ( \ell^{-1} (A) ), -$$ - -is a Gaussian measure (a normal distribution) with zero mean. Then there exists α > 0 such that -$$ -\int_{X} \exp ( \alpha \| x \|^{2} ) \mathrm{d} \mu (x) < + \infty. -$$ - -A fortiori, μ (equivalently, any X-valued random variable G whose law is μ) has moments of all orders: for all k ≥ 0, -$$ -\mathbb{E} [ \| G \|^{k} ] = \int_{X} \| x \|^{k} \mathrm{d} \mu (x) < + \infty. -$$ diff --git a/wiki/wikipedia/2997.txt b/wiki/wikipedia/2997.txt deleted file mode 100644 index 47d8cebaf603e6bcb4830102b82a10874658f2a8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2997.txt +++ /dev/null @@ -1,12 +0,0 @@ -In algebraic topology, the homotopy excision theorem offers a substitute for the absence of excision in homotopy theory. More precisely, let $(X; A, B)$ be an excisive triad with $C = A \cap B$ nonempty, and suppose the pair $(A, C)$ is ($m-1$)-connected, $m \ge 2$, and the pair $(B, C)$ is ($n-1$)-connected, $n \ge 1$. Then the map induced by the inclusion $i\colon (A, C) \to (X, B)$, -$$ -i_*\colon \pi_q(A, C) \to \pi_q(X, B) -$$, - -is bijective for $q < m+n-2$ and is surjective for $q = m+n-2$. - -A geometric proof is given in a book by Tammo tom Dieck. - -This result should also be seen as a consequence of the most general form of the Blakers–Massey theorem, which deals with the non-simply-connected case. - -The most important consequence is the Freudenthal suspension theorem. diff --git a/wiki/wikipedia/2998.txt b/wiki/wikipedia/2998.txt deleted file mode 100644 index 8b0e7662f9f0c65853f8f8c65c1974aa28f8b85f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2998.txt +++ /dev/null @@ -1,45 +0,0 @@ -HP Reliable Transaction Router (RTR) is transactional middleware marketed by Hewlett Packard. RTR is used to integrate with applications that require reliable transaction services. - -RTR manages the messages sent between client and server to provide node and network fail-over for increased reliability, transactional integrity, and interoperability between dissimilar systems. - -The RTR software has three logical entities, referred to as the front-end (FE), the back-end (BE) and the transaction router (TR).
The router is a software component that provides the fail-over intelligence and manages connections to the back-end. - -The client applications running on the Front-End combined with Router and Server applications running on back-end interact to provide transaction integrity and reliability. The three logical entities can exist on the same node but are usually deployed on different nodes to achieve modularity, scalability and high availability. - -The client application interacts with the front-end which forwards the messages to the router, the router in turn routes the message to the intended back-end where the appropriate Server application is available for processing the message. - -The RTR routing capability partitions data across multiple servers and nodes for increased performance. Within an application, the partition determines how messages are routed between the client and the servers. - -The message exchange happens between the client and server. Transactions start at the client and involve many messages that can go to a number of different servers. - -Such method of messaging is used in situations where there are multiple recipients for a message, or where unsolicited messages need to be sent. - -RTR can help survive the failures generally seen in distributed application environment which include complete site failure, node failure, network link failure and software process failure. RTR also provides continuous availability by using redundant resources in the distributed environment. - -RTR provides a Web Interface and a Command Line Interface(CLI) for managing the RTR environment. When RTR and its components are running along with the applications, then Client Application, Server Application, RTR services will be active. - -RTR is integrated with client applications and can be customized. - -User and Management Applications can be written using RTR APIs. The C, C++, Java and .Net variants of APIs are available for creating applications to use RTR. - -RTR was first conceived in Zurich, Switzerland by Dr. Paul Shrager in early 1988, and developed by a small team of four engineers, working for DEC (Digital Equipment Corporation). The initial release was written in a mix of Macro, Bliss, Pascal and SDL on top of DECnet and VMS. Later it was reimplemented in C on top of a TCP/IP stack and an OS agnostic infrastructure, that allowed it to be deployed on multiple operating systems, including various flavors of Unix/Linux, VMS, Windows. A Java and C++ veneer was added in the mid 90s to support a RPC style veneer, on top of the "services" oriented interface. - -RTR was one of the first OLTP middleware services that provided the following features (in addition of the usual ones), viz. - -* Concurrent servers (a service could be offered by multiple entities, either as multiple threads within the same process, or as independent processes) - -* Standby servers (a set of services that is capable of offering the services, if required, but not currently being asked to do so) - -* Shadow servers (a set of services currently processing an identical set of requests as the primary servers) - -Additionally, RTR guarantees the data equivalence of the repositories behind the primary and shadow servers, by enforcing a deduced "dependency relationship" among the set of concurrent transactions being shadowed. This allows RTR to process multiple transactions on the shadow without compromising dependency violations. 
- -The most high-profile users are banks, stock exchanges and Railway Passenger Reservation Systems. - -RTR was available on HP-UX, Linux, Windows and OpenVMS in 2010. - -*Transaction processing - -*Middleware - -*Distributed Computing Environment diff --git a/wiki/wikipedia/2999.txt b/wiki/wikipedia/2999.txt deleted file mode 100644 index 6cf31ca40b7b1a631670463d62bb8312fe3e1d1a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/2999.txt +++ /dev/null @@ -1,3 +0,0 @@ -In geometry, Pasch's theorem, stated in 1882 by the German mathematician Moritz Pasch, is a result in plane geometry which cannot be derived from Euclid's postulates. - -The statement is as follows: Given points a, b, c, and d on a line, if it is known that the points are ordered as (a, b, c) and (b, c, d), then it is also true that (a, b, d). [Here, for example, (a, b, c) means that point b lies between points a and c.] diff --git a/wiki/wikipedia/3.txt b/wiki/wikipedia/3.txt deleted file mode 100644 index b8679163e7891791d833e2fdb4333480c8540ea4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, Dieudonné's theorem, named after Jean Dieudonné, is a theorem on when the Minkowski sum of closed sets is closed. - -Let $X$ be a locally convex space and $A,B \subset X$ nonempty closed convex sets. If either $A$ or $B$ is locally compact and $\operatorname{recc}(A) \cap \operatorname{recc}(B)$ (where $\operatorname{recc}$ gives the recession cone) is a linear subspace, then $A - B$ is closed. diff --git a/wiki/wikipedia/30.txt b/wiki/wikipedia/30.txt deleted file mode 100644 index 5394abd5f19d3344a9155b75f6619ab9281b81f0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/30.txt +++ /dev/null @@ -1,53 +0,0 @@ -In cryptography, the dining cryptographers problem studies how to perform a secure multi-party computation of the boolean-OR function. David Chaum first proposed this problem in the early 1980s and used it as an illustrative example to show that it was possible to send anonymous messages with unconditional sender and recipient untraceability. Anonymous communication networks based on this problem are often referred to as DC-nets (where DC stands for "dining cryptographers"). - -Despite the word dining, the dining cryptographers problem is unrelated to the dining philosophers problem. - -Three cryptographers gather around a table for dinner. The waiter informs them that the meal has been paid for by someone, who could be one of the cryptographers or the National Security Agency (NSA). The cryptographers respect each other's right to make an anonymous payment, but want to find out whether the NSA paid. So they decide to execute a two-stage protocol. - -In the first stage, every two cryptographers establish a shared one-bit secret, say by tossing a coin behind a menu so that only two cryptographers see the outcome in turn for each two cryptographers. Suppose, for example, that after the coin tossing, cryptographer A and B share a secret bit $1$, A and C share $0$, and B and C share $1$. - -In the second stage, each cryptographer publicly announces a bit, which is: - -* if he didn't pay for the meal, the exclusive OR (XOR) of the two shared bits they hold with their two neighbours, - -* if they did pay for the meal, the opposite of that XOR. - -Supposing none of the cryptographers paid, then A announces $1 \oplus 0 = 1$, B announces $1 \oplus 1 = 0$, and C announces $0 \oplus 1 = 1$. 
On the other hand, if A paid, she announces $\lnot (1 \oplus 0) = 0$. - -The three public announcements combined reveal the answer to their question. One simply computes the XOR of the three bits announced. If the result is 0, it implies that none of the cryptographers paid (so the NSA must have paid the bill). Otherwise, one of the cryptographers paid, but their identity remains unknown to the other cryptographers. - -David Chaum coined the term dining cryptographers network, or DC-net, for this protocol. - -The DC-net protocol is simple and elegant. It has several limitations, however, some solutions to which have been explored in follow-up research (see the References section below). - -; Collision : If two cryptographers paid for the dinner, their messages will cancel each other out, and the final XOR result will be $0$. This is called a collision and allows only one participant to transmit at a time using this protocol. In a more general case, a collision happens as long as any even number of participants send messages. - -; Disruption : Any malicious cryptographer who does not want the group to communicate successfully can jam the protocol so that the final XOR result is useless, simply by sending random bits instead of the correct result of the XOR. This problem occurs because the original protocol was designed without using any public key technology and lacks reliable mechanisms to check whether participants honestly follow the protocol. - -; Complexity : The protocol requires pairwise shared secret keys between the participants, which may be problematic if there are many participants. Also, though the DC-net protocol is "unconditionally secure", it actually depends on the assumption that "unconditionally secure" channels already exist between pairs of the participants, which is not easy to achieve in practice. - -A related anonymous veto network algorithm computes the logical OR of several users' inputs, rather than a logical XOR as in DC-nets, which may be useful in applications to which a logical OR combining operation is naturally suited. - -David Chaum first thought about this problem in the early 1980s. The first publication that outlines the basic underlying ideas is his. The journal version appeared in the very first issue of the Journal of Cryptology. - -DC-nets are readily generalized to allow for transmissions of more than one bit per round, for groups larger than three participants, and for arbitrary "alphabets" other than the binary digits 0 and 1, as described below. - -To enable an anonymous sender to transmit more than one bit of information per DC-nets round, the group of cryptographers can simply repeat the protocol as many times as desired to create a desired number of bits worth of transmission bandwidth. These repetitions need not be performed serially. In practical DC-net systems, it is typical for pairs of participants to agree up-front on a single shared "master" secret, using Diffie–Hellman key exchange for example. Each participant then locally feeds this shared master secret into a pseudorandom number generator, in order to produce as many shared "coin flips" as desired to allow an anonymous sender to transmit multiple bits of information. - -The protocol can be generalized to a group of $n$ participants, each with a shared secret key in common with each other participant. In each round of the protocol, if a participant wants to transmit an untraceable message to the group, they invert their publicly announced bit. 
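The single-round protocol just described is straightforward to simulate. Below is a minimal sketch (Python; all names are illustrative) of one DC-net round for $n$ participants with a shared coin for every pair. The XOR of all announcements equals the XOR of the senders' message bits, because every shared coin enters exactly two announcements and cancels; two simultaneous senders therefore cancel each other, which is exactly the collision limitation noted above:

```python
import itertools
import random

def dc_net_round(n, senders=()):
    """One DC-net round: every pair shares a coin; participant i announces
    the XOR of its shared coins, flipped if i is a sender."""
    coins = {pair: random.randint(0, 1)
             for pair in itertools.combinations(range(n), 2)}
    announcements = []
    for i in range(n):
        bit = int(i in senders)  # senders flip their announcement
        for pair, coin in coins.items():
            if i in pair:
                bit ^= coin
        announcements.append(bit)
    result = 0
    for b in announcements:
        result ^= b
    return result

random.seed(1)
assert dc_net_round(3) == 0                  # nobody sent: the NSA paid
assert dc_net_round(3, senders=(0,)) == 1    # one cryptographer paid, anonymously
assert dc_net_round(3, senders=(0, 2)) == 0  # a collision: two senders cancel
```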
The participants can be visualized as a fully connected graph with the vertices representing the participants and the edges representing their shared secret keys. - -The protocol may be run with less than fully connected secret sharing graphs, which can improve the performance and scalability of practical DC-net implementations, at the potential risk of reducing anonymity if colluding participants can split the secret sharing graph into separate connected components. For example, consider an intuitively appealing but less secure generalization to $n > 3$ participants using a ring topology, where each cryptographer sitting around a table shares a secret only with the cryptographers to their immediate left and right, and not with every other cryptographer. Such a topology is appealing because each cryptographer needs to coordinate two coin flips per round, rather than $n - 1$. However, if Adam and Charlie are actually NSA agents sitting immediately to the left and right of Bob, an innocent victim, and if Adam and Charlie secretly collude to reveal their secrets to each other, then they can determine with certainty whether or not Bob was the sender of a 1 bit in a DC-net run, regardless of how many participants there are in total. This is because the colluding participants Adam and Charlie effectively "split" the secret sharing graph into two separate disconnected components, one containing only Bob, the other containing all other honest participants. - -Another compromise secret sharing DC-net topology, employed in the system for scalability, may be described as a client/server or user/trustee topology. In this variant, we assume there are two types of participants playing different roles: a potentially large number $n$ of users who desire anonymity, and a much smaller number $m$ of trustees whose role is to help the users obtain that anonymity. In this topology, each of the $n$ users shares a secret with each of the $m$ trustees—but users share no secrets directly with other users, and trustees share no secrets directly with other trustees—resulting in an $n \times m$ secret sharing matrix. If the number of trustees $m$ is small, then each user needs to manage only a few shared secrets, improving efficiency for users in the same way the ring topology does. However, as long as at least one trustee behaves honestly and does not leak his or her secrets or collude with other participants, then that honest trustee forms a "hub" connecting all honest users into a single fully connected component, regardless of which or how many other users and/or trustees might be dishonestly colluding. Users need not know or guess which trustee is honest; their security depends only on the existence of at least one honest, non-colluding trustee. - -Though the simple DC-nets protocol uses binary digits as its transmission alphabet, and uses the XOR operator to combine ciphertexts, the basic protocol generalizes to any alphabet and combining operator suitable for one-time pad encryption. This flexibility arises naturally from the fact that the secrets shared between the many pairs of participants are, in effect, merely one-time pads combined together symmetrically within a single DC-net round. - -One useful alternate choice of DC-nets alphabet and combining operator is to use a finite group suitable for public-key cryptography as the alphabet—such as a Schnorr group or elliptic curve—and to use the associated group operator as the DC-net combining operator.
Such a choice of alphabet and operator makes it possible for clients to use zero-knowledge proof techniques to prove correctness properties about the DC-net ciphertexts that they produce, such as that the participant is not "jamming" the transmission channel, without compromising the anonymity offered by the DC-net. This technique was first suggested by Golle and Juels, further developed by Franck, and later implemented in , a cryptographically verifiable implementation of the system. - -The measure originally suggested by David Chaum to avoid collisions is to retransmit the message once a collision is detected, but the paper does not explain exactly how to arrange the retransmission. - -avoids the possibility of unintentional collisions by using a verifiable shuffle to establish a DC-nets transmission schedule, such that each participant knows exactly which bits in the schedule correspond to his own transmission slot, but does not know who owns other transmission slots. - -divides a large anonymity network into smaller DC-net groups, enabling participants to evade disruption attempts by leaving a disrupted group and joining another group, until the participant finds a group free of disruptors. This evasion approach introduces the risk that an adversary who owns many nodes could selectively disrupt only groups the adversary has not completely compromised, thereby "herding" participants toward groups that may be functional precisely because they are completely compromised. - -implements several schemes to counter disruption. The original protocol used a verifiable cryptographic shuffle to form a DC-net transmission schedule and distribute "transmission assignments", allowing the correctness of subsequent DC-nets ciphertexts to be verified with a simple cryptographic hash check. This technique required a fresh verifiable shuffle before every DC-nets round, however, leading to high latencies. A later, more efficient scheme allows a series of DC-net rounds to proceed without intervening shuffles in the absence of disruption, but in response to a disruption event uses a shuffle to distribute anonymous accusations enabling a disruption victim to expose and prove the identity of the perpetrator. Finally, more recent versions support fully verifiable DC-nets - at substantial cost in computation efficiency due to the use of public-key cryptography in the DC-net - as well as a hybrid mode that uses efficient XOR-based DC-nets in the normal case and verifiable DC-nets only upon disruption, to distribute accusations more quickly than is feasible using verifiable shuffles. diff --git a/wiki/wikipedia/300.txt b/wiki/wikipedia/300.txt deleted file mode 100644 index 96eef1fad93cffc7b00b1deafc5138321b8fb672..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/300.txt +++ /dev/null @@ -1,26 +0,0 @@ -In mathematics, the twisted Poincaré duality is a theorem removing the restriction of Poincaré duality to oriented manifolds. The existence of a global orientation is replaced by carrying along local information, by means of a local coefficient system. - -Another version of the theorem with real coefficients features de Rham cohomology with values in the orientation bundle. This is the flat real line bundle denoted $ o(M)$, which is trivialized by coordinate charts of the manifold $M$, with transition functions given by the signs of the Jacobian determinants of the chart transition maps. As a flat line bundle, it has a de Rham cohomology, denoted by -$$ -H^* (M; \R^w) -$$ or $H^* (M; o(M))$.
- -For M a compact manifold, the top degree cohomology is equipped with a so-called trace morphism -$$ -\theta\colon H^d (M; o(M)) \to \R -$$, - -that is to be interpreted as integration on M, i.e., evaluating against the fundamental class. - -Poincaré duality for differential forms is then the conjunction, for M connected, of the following two statements: - -* The trace morphism is a linear isomorphism. - -* The cup product, or exterior product of differential forms -$$ -\cup \colon H^* (M; \R)\otimes H^{d-*}(M, o(M)) \to H^d(M, o(M)) \simeq \R -$$ - -is non-degenerate. - -The oriented Poincaré duality is contained in this statement, as understood from the fact that the orientation bundle o(M) is trivial if the manifold is oriented, an orientation being a global trivialization, i.e., a nowhere vanishing parallel section. diff --git a/wiki/wikipedia/3000.txt b/wiki/wikipedia/3000.txt deleted file mode 100644 index adafd89111948a9b14de5da7324cc448f56c8c76..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3000.txt +++ /dev/null @@ -1,59 +0,0 @@ -Optimistic replication, also known as lazy replication, is a strategy for replication, in which replicas are allowed to diverge. - -Traditional pessimistic replication systems try to guarantee from the beginning that all of the replicas are identical to each other, as if there was only a single copy of the data all along. Optimistic replication does away with this in favor of eventual consistency, meaning that replicas are guaranteed to converge only when the system has been quiesced for a period of time. As a result, there is no longer a need to wait for all of the copies to be synchronized when updating data, which helps concurrency and parallelism. The trade-off is that different replicas may require explicit reconciliation later on, which might then prove difficult or even insoluble. - -An optimistic replication algorithm consists of five elements: - -# Operation submission: Users submit operations at independent sites. - -# Propagation: Each site shares the operations it knows about with the rest of the system. - -# Scheduling: Each site decides on an order for the operations it knows about. - -# Conflict resolution: If there are any conflicts among the operations a site has scheduled, it must modify them in some way. - -# Commitment: The sites agree on a final schedule and conflict resolution result, and the operations are made permanent. - -There are two strategies for propagation: state transfer, where sites propagate a representation of the current state, and operation transfer, where sites propagate the operations that were performed (essentially, a list of instructions on how to reach the new state). - -Scheduling and conflict resolution can either be syntactic or semantic. Syntactic systems rely on general information, such as when or where an operation was submitted. Semantic systems are able to make use of application-specific information to make smarter decisions. Note that state transfer systems generally have no information about the semantics of the data being transferred, and so they have to use syntactic scheduling and conflict resolution. - -One well-known example of a system based on optimistic replication is the CVS version control system, or any other version control system which uses the paradigm. CVS covers each of the five elements: - -# Operation submission: Users edit local versions of files. 
- -# Propagation: Users manually pull updates from a central server, or push changes out once the user feels they are ready. - -# Scheduling: Operations are scheduled in the order that they are received by the central server. - -# Conflict resolution: When a user pushes to or pulls from the central repository, any conflicts will be flagged for that user to fix manually. - -# Commitment: Once the central server accepts the changes which a user pushes, they are permanently committed. - -A special case of replication is synchronization, where there are only two replicas. For example, personal digital assistants (PDAs) allow users to edit data either on the PDA or a computer, and then to merge these two datasets together. Note, however, that replication is a broader problem than synchronization, since there may be more than two replicas. - -Other examples include: - -* Usenet, and other systems which use the Thomas Write Rule (See ) - -* Multi-master database replication - -* The Coda distributed filesystem - -* Operational Transformation, a theoretical framework for group editing - -* Peer-to-peer wikis - -* Conflict-free replicated data types - -* The Bayou distributed database - -* IceCube - -Applications built on top of optimistic replicated databases need to be careful about ensuring that the delayed updates observed do not impair the correctness of the application. - -As a simple example, if an application contains a way of viewing some part of the database state, and a way of editing it, then users may edit that state but then not see it changing in the viewer. Alarmed that their edit "didn't work", they may try it again, potentially more than once. If the updates are not idempotent (e.g., they increment a value), this can lead to disaster. Even if they are idempotent, the spurious database updates can lead to performance bottlenecks, especially when the database systems are processing heavy loads; this can become a vicious circle. - -Testing of applications is often done on a testing environment, smaller in size (perhaps only a single server) and less loaded than the "live" environment. The replication behaviour of such an installation may differ from a live environment in ways that mean that replication lag is unlikely to be observed in testing, masking replication-sensitive bugs. Application developers must be very careful about the assumptions they make about the effect of a database update, and must be sure to simulate lag in their testing environments. - -Optimistically replicated databases have to be very careful about offering features such as validity constraints on data. If any given update may or may not be accepted based on the current state of the record, then two updates (A and B) may be individually legal against the starting state of the system, but one or more of the updates may not be legal against the state of the system after the other update (e.g., A and B are both legal, but AB or BA are illegal). If A and B are both initiated at roughly the same time within the database, then A may be successfully applied on some nodes and B on others, but as soon as A and B "meet" and one is attempted on a node which has already applied the other, a conflict will be found. The system must, in this case, decide which update finally "wins", and arrange for any nodes that have already applied the losing update to revert it. 
However, some nodes may temporarily expose the state with the reverted update, and there may be no way to inform the user who initiated the update of its failure, without requiring them to wait (potentially forever) for confirmation of acceptance at every node. diff --git a/wiki/wikipedia/3001.txt b/wiki/wikipedia/3001.txt deleted file mode 100644 index 74d92731e044861bb2fd72eb8ecca5976416ec53..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3001.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, Ribet's lemma gives conditions for a subgroup of a product of groups to be the whole product group. It was introduced by . - -Suppose G1×...×Gn is a product of perfect groups. Then any subgroup of this product that maps onto all the factors Gi for i=1, ..., n is the whole product group. diff --git a/wiki/wikipedia/3002.txt b/wiki/wikipedia/3002.txt deleted file mode 100644 index cf5bba6cc0080932f3cc61257bb6881e6c4de89e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3002.txt +++ /dev/null @@ -1,75 +0,0 @@ -In mathematics, Minkowski's theorem is the statement that every convex set in $\mathbb{R}^n$ which is symmetric with respect to the origin and which has volume greater than $2^n$ contains a non-zero integer point (meaning a point in $\Z^n$ that is not the origin). The theorem was proved by Hermann Minkowski in 1889 and became the foundation of the branch of number theory called the geometry of numbers. It can be extended from the integers to any lattice $L$ and to any symmetric convex set with volume greater than $2^nd(L)$, where $d(L)$ denotes the covolume of the lattice (the absolute value of the determinant of any of its bases). - -Suppose that L is a lattice of determinant d(L) in the n-dimensional real vector space ℝn and S is a convex subset of ℝn that is symmetric with respect to the origin, meaning that if x is in S then −x is also in S. Minkowski's theorem states that if the volume of S is strictly greater than 2n d(L), then S must contain at least one lattice point other than the origin. (Since the set S is symmetric, it would then contain at least three lattice points: the origin 0 and a pair of points ±x, where x ∈ L \ 0.) - -The simplest example of a lattice is the integer lattice ℤn of all points with integer coefficients; its determinant is 1. For n = 2, the theorem claims that a convex figure in the Euclidean plane symmetric about the origin and with area greater than 4 encloses at least one lattice point in addition to the origin. The area bound is sharp: if S is the interior of the square with vertices (±1, ±1) then S is symmetric and convex, and has area 4, but the only lattice point it contains is the origin. This example, showing that the bound of the theorem is sharp, generalizes to hypercubes in every dimension n. - -The following argument proves Minkowski's theorem for the specific case of L = ℤ2. - -Proof of the \mathbb{Z}^2 case: Consider the map -$$ -f: S \to \mathbb{R}^2/2L, \qquad (x,y) \mapsto (x \bmod 2, y \bmod 2) -$$ - -Intuitively, this map cuts the plane into 2 by 2 squares, then stacks the squares on top of each other. Clearly f(S) has area less than or equal to 4, because this set lies within a 2 by 2 square. Assume for a contradiction that f could be injective, which means the pieces of S cut out by the squares stack up in a non-overlapping way. 
Because f is locally area-preserving, this non-overlapping property would make it area-preserving for all of S, so the area of f(S) would be the same as that of S, which is greater than 4. That is not the case, so the assumption must be false: f is not injective, meaning that there exist at least two distinct points p1, p2 in S that are mapped by f to the same point: f(p1) = f(p2). - -Because of the way f was defined, - -the only way that f(p1) can equal f(p2) is for p2 - -to equal p1 + (2i, 2j) for some integers i and j, not both zero. - -That is, the coordinates of the two points differ by two even integers. - -Since S is symmetric about the origin, −p1 is also a point in S. Since S is convex, the line segment between −p1 and p2 lies entirely in S, and in particular the midpoint of that segment lies in S. In other words, -$$ -\tfrac{1}{2}\left(-p_1 + p_2\right) = \tfrac{1}{2}\left(-p_1 + p_1 + (2i, 2j)\right) = (i, j) -$$ - -is a point in S. But this point (i,j) is an integer point, and is not the origin since i and j are not both zero. - -Therefore, S contains a nonzero integer point. - -Remarks: - -* The argument above proves the theorem that any set of volume $> \det(L)$ contains two distinct points that differ by a lattice vector. This is a special case of Blichfeldt's theorem. - -* The argument above highlights that the term $2^n \det(L)$ is the covolume of the lattice $2L$. - -* To obtain a proof for general lattices, it suffices to prove Minkowski's theorem only for $\mathbb{Z}^n$; this is because every full-rank lattice can be written as $B \mathbb{Z}^n$ for some linear transformation $B$, and the properties of being convex and symmetric around the origin are preserved by linear transformations, while the covolume of $B \mathbb{Z}^n$ is $|\det(B)|$ and the volume of a body scales by exactly $\frac{1}{\det(B)}$ under an application of $B^{-1}$. - -Minkowski's theorem gives an upper bound for the length of the shortest nonzero vector. This result has applications in lattice cryptography and number theory. - -Theorem (Minkowski's bound on the shortest vector): Let $ L $ be a lattice. Then there is an $ x \in L \setminus \{0\} $ with $ ||x||_{\infty} \leq |\det(L)|^{1/n} $. In particular, by the standard comparison between $ l_2 $ and $ l_{\infty} $ norms, $ ||x||_2 \leq \sqrt{n} | \det(L)|^{1/n} $. - -Proof: Let $ l = \min \{ ||x||_{\infty} : x \in L \setminus \{0\} \} $, and set $ C = \{ y : ||y||_{\infty} < l \} $. Then $ \text{vol}(C) = (2l)^n $. If $ (2l)^n > 2^n |d(L)| $, then by Minkowski's theorem $ C $ contains a non-zero lattice point, which is a contradiction. Thus $ l \leq | d(L)|^{1/n} $. QED - -Remarks: - -* The constant in the $ L^2 $ bound can be improved, for instance by taking the open ball of radius $ < l $ as $ C $ in the above argument. The optimal constant is known as the Hermite constant. - -* The bound given by the theorem can be very loose, as can be seen by considering the lattice generated by $ (1,0), (0,n) $. - -* Even though Minkowski's theorem guarantees a short lattice vector within a certain magnitude bound, finding this vector is in general a hard computational problem. Finding the vector within the factor guaranteed by Minkowski's bound is connected, via transference properties of the dual lattice, to the shortest vector problem. The computational problem is also sometimes referred to as HermiteSVP. - -* The LLL-basis reduction algorithm can be seen as a weak but efficiently algorithmic version of Minkowski's bound on the shortest vector.
This is because a $ \delta $-LLL reduced basis $ b_1, \ldots, b_n $ for $ L $ has the property that $ ||b_1|| \leq (\frac{1}{ \delta - .25})^{\frac{n-1}{4}} \text{det}(L)^{1/n} $; see these for more on this. As explained in, proofs of bounds on the Hermite constant contain some of the key ideas in the LLL-reduction algorithm. - -The difficult implication in Fermat's theorem on sums of two squares can be proven using Minkowski's bound on the shortest vector. - -Theorem: Every prime $p$ with $ p \equiv 1 \mod 4 $ can be written as a sum of two squares. - -Proof: Since $ 4 \mid p - 1 $ and $a$ is a quadratic residue modulo a prime $ p $ iff $a^{\frac{p-1}{2}} \equiv 1 \pmod p$ (Euler's criterion), there is a square root of $ -1 $ in $ \mathbb{Z}/p\mathbb{Z} $; choose one and let $ j $ be a representative of it in $ \mathbb{Z} $. Consider the lattice $ L $ defined by the vectors $ (1, j), (0,p) $, and let $B$ denote the associated matrix. The determinant of this lattice is $ p $, whence Minkowski's bound tells us that there is a nonzero $ x = (x_1, x_2) \in \mathbb{Z}^2 $ with $ 0 < ||Bx||_2^2 < 2p $. We have $ ||Bx||^2 = || ( x_1, jx_1 + px_2 )||^2 = x_1^2 + (jx_1 + px_2)^2 $ and we define the integers $ a = x_1, b = (jx_1 + px_2) $. Minkowski's bound tells us that $ 0 < a^2 + b^2 < 2p $, and simple modular arithmetic shows that $ a^2 + b^2 = x_1^2 + (jx_1 + px_2)^2 \equiv 0 \pmod p $, and thus we conclude that $ a^2 + b^2 = p $. QED - -Additionally, the lattice perspective gives a computationally efficient approach to Fermat's theorem on sums of squares: - -First, recall that finding any nonzero vector in $ L $ with squared norm less than $ 2p $ gives a decomposition of $ p $ as a sum of two squares. Such vectors can be found efficiently, for instance using the LLL algorithm. In particular, if $ b_1, b_2 $ is a $ 3/4 $-LLL reduced basis, then, by the property that $ ||b_1|| \leq ( \frac{1}{\delta - .25})^{\frac{n-1}{4}} \text{det}(B)^{1/n} $, $ ||b_1||^2 \leq \sqrt{2} p < 2p $. Thus, by running the LLL-lattice basis reduction algorithm with $ \delta = 3/4 $, we obtain a decomposition of $ p $ as a sum of squares. Note that because every vector in $ L $ has norm squared a multiple of $ p $, the vector returned by the LLL-algorithm in this case is in fact a shortest vector. - -Minkowski's theorem is also useful to prove Lagrange's four-square theorem, which states that every natural number can be written as the sum of the squares of four natural numbers. - -Minkowski's theorem can be used to prove Dirichlet's theorem on simultaneous rational approximation. - -Another application of Minkowski's theorem is the result that every class in the ideal class group of a number field K contains an integral ideal of norm not exceeding a certain bound, depending on K, called Minkowski's bound: the finiteness of the class number of an algebraic number field follows immediately. - -The complexity of finding the point guaranteed by Minkowski's theorem, or the closely related Blichfeldt theorem, has been studied from the perspective of TFNP search problems. In particular, it is known that a computational analogue of Blichfeldt's theorem, a corollary of the proof of Minkowski's theorem, is PPP-complete. It is also known that the computational analogue of Minkowski's theorem is in the class PPP, and it was conjectured to be PPP-complete.
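The two-square computation described above is also easy to carry out. The sketch below (Python) follows the same lattice idea; since the lattice is two-dimensional, it substitutes the classical Lagrange–Gauss reduction for LLL (a deliberate swap made in this sketch), which likewise returns a shortest vector:

```python
import random

def sqrt_of_minus_one(p):
    # For p ≡ 1 (mod 4): if x is a quadratic non-residue mod p, then
    # x^((p-1)/4) is a square root of -1 (by Euler's criterion).
    while True:
        x = random.randrange(2, p)
        j = pow(x, (p - 1) // 4, p)
        if j * j % p == p - 1:
            return j

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def lagrange_gauss(u, v):
    # Two-dimensional lattice reduction: repeatedly subtract the nearest
    # integer multiple of the shorter vector; ends with a shortest vector.
    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        d = dot(u, u)
        m = (2 * dot(u, v) + d) // (2 * d)  # exact nearest integer to dot(u,v)/d
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if dot(v, v) >= d:
            return u
        u, v = v, u

p = 1000000009  # a prime congruent to 1 mod 4
j = sqrt_of_minus_one(p)
a, b = lagrange_gauss((1, j), (0, p))
assert a * a + b * b == p
print(f"{p} = {a}^2 + {b}^2")
```

Because every nonzero vector of this lattice has squared norm divisible by p, and the reduced vector has squared norm below 2p by the Minkowski bound, its squared norm is exactly p.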
diff --git a/wiki/wikipedia/3003.txt b/wiki/wikipedia/3003.txt deleted file mode 100644 index 6a9839bc67b19c8ce0ca78e1a6fc335f4f8f7b2d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3003.txt +++ /dev/null @@ -1,51 +0,0 @@ -In propositional logic, tautology is either of two commonly used rules of replacement. The rules are used to eliminate redundancy in disjunctions and conjunctions when they occur in logical proofs. They are: - -The principle of idempotency of disjunction: -$$ -P \lor P \Leftrightarrow P -$$ - -and the principle of idempotency of conjunction: -$$ -P \land P \Leftrightarrow P -$$ - -Where "$\Leftrightarrow$" is a metalogical symbol representing "can be replaced in a logical proof with." - -Theorems are those logical formulas $\phi$ where $\vdash \phi$ is the conclusion of a valid proof, while the equivalent semantic consequence $\models \phi$ indicates a tautology. - -The tautology rule may be expressed as a sequent: -$$ -P \lor P \vdash P -$$ - -and -$$ -P \land P \vdash P -$$ - -where $\vdash$ is a metalogical symbol meaning that $P$ is a syntactic consequence of $P \lor P$, in the one case, $P \land P$ in the other, in some logical system; - -or as a rule of inference: -$$ -\frac{P \lor P}{\therefore P} -$$ - -and -$$ -\frac{P \land P}{\therefore P} -$$ - -where the rule is that wherever an instance of "$P \lor P$" or "$P \land P$" appears on a line of a proof, it can be replaced with "$P$"; - -or as the statement of a truth-functional tautology or theorem of propositional logic. The principle was stated as a theorem of propositional logic by Russell and Whitehead in Principia Mathematica as: -$$ -(P \lor P) \to P -$$ - -and -$$ -(P \land P) \to P -$$ - -where $P$ is a proposition expressed in some formal system. diff --git a/wiki/wikipedia/3004.txt b/wiki/wikipedia/3004.txt deleted file mode 100644 index 6dfe5762bbb40135f497a493dc2b3134721d2273..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3004.txt +++ /dev/null @@ -1,20 +0,0 @@ -In algebraic number theory, Leopoldt's conjecture, introduced by , states that the p-adic regulator of a number field does not vanish. The p-adic regulator is an analogue of the usual - -regulator defined using p-adic logarithms instead of the usual logarithms, introduced by . - -Leopoldt proposed a definition of a p-adic regulator Rp attached to K and a prime number p. The definition of Rp uses an appropriate determinant with entries the p-adic logarithm of a generating set of units of K (up to torsion), in the manner of the usual regulator. The conjecture, which for general K is still open , then comes out as the statement that Rp is not zero. - -Let K be a number field and for each prime P of K above some fixed rational prime p, let UP denote the local units at P and let U1,P denote the subgroup of principal units in UP. Set -$$ - U_1 = \prod_{P|p} U_{1,P}. -$$ - -Then let E1 denote the set of global units ε that map to U1 via the diagonal embedding of the global units in E. - -Since $E_1$ is a finite-index subgroup of the global units, it is an abelian group of rank $r_1 + r_2 - 1$, where $r_1$ is the number of real embeddings of $K$ and $r_2$ the number of pairs of complex embeddings. 
Leopoldt's conjecture states that the $\mathbb{Z}_p$-module rank of the closure of $E_1$ embedded diagonally in $U_1$ is also $r_1 + r_2 - 1.$ - -Leopoldt's conjecture is known in the special case where $K$ is an abelian extension of $\mathbb{Q}$ or an abelian extension of an imaginary quadratic number field: Ax reduced the abelian case to a p-adic version of Baker's theorem, which was proved shortly afterwards by Brumer. - -Mihăilescu has announced a proof of Leopoldt's conjecture for all CM-extensions of $\mathbb{Q}$. - -Colmez expressed the residue of the p-adic Dedekind zeta function of a totally real field at s = 1 in terms of the p-adic regulator. As a consequence, Leopoldt's conjecture for those fields is equivalent to their p-adic Dedekind zeta functions having a simple pole at s = 1. diff --git a/wiki/wikipedia/3005.txt b/wiki/wikipedia/3005.txt deleted file mode 100644 index f6616606886778dfcc6455d9745b0781518455c8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3005.txt +++ /dev/null @@ -1,50 +0,0 @@ -The McMullen problem is an open problem in discrete geometry named after Peter McMullen. - -In 1972, David G. Larman wrote about the following problem: - -Determine the largest number $\nu(d)$ such that for any given $\nu(d)$ points in general position in the $d$-dimensional affine space $\mathbb{R}^d$ there is a projective transformation mapping these points into convex position (so they form the vertices of a convex polytope). - -Larman credited the problem to a private communication by Peter McMullen. - -Using the Gale transform, this problem can be reformulated as: - -Determine the smallest number $\mu(d)$ such that for every set of $\mu(d)$ points -$$ -X=\{x_1,x_2,\dots,x_{\mu(d)}\} -$$ in linearly general position on the sphere $S^{d-1}$ it is possible to choose a set $Y=\{\varepsilon_1x_1,\varepsilon_2x_2,\dots,\varepsilon_{\mu(d)}x_{\mu(d)}\}$ where $\varepsilon_i=\pm 1$ for $i=1,2,\dots,\mu(d)$, such that every open hemisphere of $S^{d-1}$ contains at least two members of $Y$. - -The numbers $\nu$ of the original formulation of the McMullen problem and $\mu$ of the Gale transform formulation are connected by the relationships -$$ -\begin{align} \mu(k)&=\min\{w \mid w\leq\nu(w-k-1)\}, \\ \nu(d)&=\max\{w \mid w\geq\mu(w-d-1)\}. \end{align} -$$ - -Also, by simple geometric observation, it can be reformulated as: - -Determine the smallest number $\lambda(d)$ such that for every set $X$ of $\lambda(d)$ points in $\mathbb{R}^d$ there exists a partition of $X$ into two sets $A$ and $B$ with -$$ -\operatorname{conv}(A\setminus \{x\})\cap \operatorname{conv}(B\setminus \{x\})\neq\varnothing \quad \forall x\in X. -$$ - -The relation between $\mu$ and $\lambda$ is -$$ -\mu(d+1)=\lambda(d),\qquad d\geq1. -$$ - -The equivalent projective dual statement to the McMullen problem is to determine the largest number $\nu(d)$ such that every set of $\nu(d)$ hyperplanes in general position in d-dimensional real projective space form an arrangement of hyperplanes in which one of the cells is bounded by all of the hyperplanes. - -This problem is still open, and the following bounds on $\nu(d)$ are known: - -*David Larman proved in 1972 that $2d+1\leq\nu(d)\leq(d+1)^2$. - -*Michel Las Vergnas proved in 1986 that $\nu(d)\leq\frac{(d+1)(d+2)}{2}$. - -*Jorge Luis Ramírez Alfonsín proved in 2001 that $\nu(d)\leq2d+\left\lceil\frac{d+1}{2}\right\rceil$. - -The conjecture of this problem is that $\nu(d)=2d+1$. This has been proven for $d=2,3,4$.
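Since the bounds above are explicit, they are easy to tabulate for small dimensions. A small Python helper (ours, for illustration only) evaluates the conjectured value $2d+1$ alongside the two strongest known upper bounds:

```
import math

# Bounds on nu(d) for the McMullen problem, as quoted above.
def conjectured(d):           # conjectured exact value; proven for d = 2, 3, 4
    return 2 * d + 1

def ramirez_alfonsin(d):      # upper bound, 2001
    return 2 * d + math.ceil((d + 1) / 2)

def las_vergnas(d):           # upper bound, 1986
    return (d + 1) * (d + 2) // 2

print("d  conjectured  RA-2001  LV-1986")
for d in range(2, 8):
    print(d, conjectured(d), ramirez_alfonsin(d), las_vergnas(d))
```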
diff --git a/wiki/wikipedia/3006.txt b/wiki/wikipedia/3006.txt deleted file mode 100644 index d1351da2dbbef481fd6b7e78baa5be9f87508e34..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3006.txt +++ /dev/null @@ -1,25 +0,0 @@ -The leftover hash lemma is a lemma in cryptography first stated by Russell Impagliazzo, Leonid Levin, and Michael Luby. - -Imagine that you have a secret key X that has n uniform random bits, and you would like to use this secret key to encrypt a message. Unfortunately, you were a bit careless with the key, and know that an adversary was able to learn the values of some t < n bits of that key, but you do not know which t bits. Can you still use your key, or do you have to throw it away and choose a new key? The leftover hash lemma tells us that we can produce a key of about n − t bits, over which the adversary has almost no knowledge. Since the adversary knows all but n − t bits, this is almost optimal. - -More precisely, the leftover hash lemma tells us that from a random variable X we can extract roughly $H_\infty(X)$ (the min-entropy of X) bits that are almost uniformly distributed. In other words, an adversary who has some partial knowledge about X will have almost no knowledge about the extracted value. That is why this is also called privacy amplification (see the privacy amplification section in the article Quantum key distribution). - -Randomness extractors achieve the same result, but use (normally) less randomness. - -Let X be a random variable over $\mathcal{X}$ and let $m > 0$. Let $h\colon \mathcal{S} \times \mathcal{X} \rightarrow \{0, 1\}^m$ be a 2-universal hash function. If -$$ -m \leq H_\infty(X) - 2 \log\left(\frac{1}{\varepsilon}\right) -$$ - -then for S uniform over $\mathcal{S}$ and independent of X, we have: -$$ -\delta\left[(h(S, X), S), (U, S)\right] \leq \varepsilon. -$$ - -where U is uniform over $\{0, 1\}^m$ and independent of S. -$$ -H_\infty(X) = -\log \max_x \Pr[X=x] -$$ is the min-entropy of X, which measures the amount of randomness X has. The min-entropy is always less than or equal to the Shannon entropy. Note that $\max_x \Pr[X=x]$ is the probability of correctly guessing X. (The best guess is to guess the most probable value.) Therefore, the min-entropy measures how difficult it is to guess X. -$$ -0 \le \delta(X, Y) = \frac{1}{2} \sum_v \left| \Pr[X=v] - \Pr[Y=v] \right| \le 1 -$$ is the statistical distance between X and Y. diff --git a/wiki/wikipedia/3007.txt b/wiki/wikipedia/3007.txt deleted file mode 100644 index cf96ef01eb2a2eb1e709f4a347b2d7e7ac1190ef..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3007.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematics, Castelnuovo's contraction theorem is used in the classification theory of algebraic surfaces to construct the minimal model of a given smooth algebraic surface. - -More precisely, let $X$ be a smooth projective surface over $\mathbb{C}$ and $C$ a (-1)-curve on $X$ (which means a smooth rational curve of self-intersection number -1); then there exists a morphism from $X$ to another smooth projective surface $Y$ such that the curve $C$ has been contracted to one point $P$, and moreover this morphism is an isomorphism outside $C$ (i.e., $X\setminus C$ is isomorphic to $Y\setminus P$). - -This contraction morphism is sometimes called a blowdown, which is the inverse operation of blowup. The curve $C$ is also called an exceptional curve of the first kind.
diff --git a/wiki/wikipedia/3008.txt b/wiki/wikipedia/3008.txt deleted file mode 100644 index e6ba21633296f42debe4a8b28fc5c8d305a7e666..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3008.txt +++ /dev/null @@ -1,79 +0,0 @@ -In logic and linguistics, a metalanguage is a language used to describe another language, often called the object language. Expressions in a metalanguage are often distinguished from those in the object language by the use of italics, quotation marks, or writing on a separate line. The structure of sentences and phrases in a metalanguage can be described by a metasyntax. - -There are a variety of recognized metalanguages, including embedded, ordered, and nested (or hierarchical) metalanguages. - -An embedded metalanguage is a language formally, naturally and firmly fixed in an object language. This idea is found in Douglas Hofstadter's book, Gödel, Escher, Bach, in a discussion of the relationship between formal languages and number theory: "... it is in the nature of any formalization of number theory that its metalanguage is embedded within it." - -It occurs in natural, or informal, languages, as well—such as in English, where words such as noun, verb, or even word describe features and concepts pertaining to the English language itself. - -An ordered metalanguage is analogous to an ordered logic. An example of an ordered metalanguage is the construction of one metalanguage to discuss an object language, followed by the creation of another metalanguage to discuss the first, etc. - -A nested (or hierarchical) metalanguage is similar to an ordered metalanguage in that each level represents a greater degree of abstraction. However, a nested metalanguage differs from an ordered one in that each level includes the one below. - -The paradigmatic example of a nested metalanguage comes from the Linnean taxonomic system in biology. Each level in the system incorporates the one below it. The language used to discuss genus is also used to discuss species; the one used to discuss orders is also used to discuss genera, etc., up to kingdoms. - -Natural language combines nested and ordered metalanguages. In a natural language there is an infinite regress of metalanguages, each with more specialized vocabulary and simpler syntax. - -Designating the language now as $L_0$, the grammar of the language is a discourse in the metalanguage $L_1$, which is a sublanguage nested within $L_0$. - -* The grammar of $L_1$, which has the form of a factual description, is a discourse in the metametalanguage $L_2$, which is also a sublanguage of $L_0$. - -* The grammar of $L_2$, which has the form of a theory describing the syntactic structure of such factual descriptions, is stated in the metametametalanguage $L_3$, which likewise is a sublanguage of $L_0$. - -* The grammar of $L_3$ has the form of a metatheory describing the syntactic structure of theories stated in $L_2$. - -* $L_4$ and succeeding metalanguages have the same grammar as $L_3$, differing only in reference. - -Since all of these metalanguages are sublanguages of $L_0$, $L_1$ is a nested metalanguage, but $L_2$ and sequel are ordered metalanguages. Since all these metalanguages are sublanguages of $L_0$ they are all embedded languages with respect to the language as a whole. - -Metalanguages of formal systems all resolve ultimately to natural language, the 'common parlance' in which mathematicians and logicians converse to define their terms and operations and 'read out' their formulae. 
- -There are several entities commonly expressed in a metalanguage. In logic, the object language that the metalanguage discusses is usually a formal language, and very often the metalanguage is as well. - -A deductive system (or, deductive apparatus of a formal system) consists of the axioms (or axiom schemata) and rules of inference that can be used to derive the theorems of the system. - -A metavariable (or metalinguistic or metasyntactic variable) is a symbol or set of symbols in a metalanguage which stands for a symbol or set of symbols in some object language. For instance, in the sentence: - -Let A and B be arbitrary formulas of a formal language $L$. - -The symbols A and B are not symbols of the object language $L$; they are metavariables in the metalanguage (in this case, English) that is discussing the object language $L$. - -A metatheory is a theory whose subject matter is some other theory (a theory about a theory). Statements made in the metatheory about the theory are called metatheorems. A metatheorem is a true statement about a formal system expressed in a metalanguage. Unlike theorems proved within a given formal system, a metatheorem is proved within a metatheory, and may reference concepts that are present in the metatheory but not the object theory. - -An interpretation is an assignment of meanings to the symbols and words of a language. - -Michael J. Reddy (1979) argues that much of the language we use to talk about language is conceptualized and structured by what he refers to as the conduit metaphor. This paradigm operates through two distinct, related frameworks. - -The major framework views language as a sealed pipeline between people: - -1. Language transfers people's thoughts and feelings (mental content) to others - -ex: Try to get your thoughts across better. - -2. Speakers and writers insert their mental content into words - -ex: You have to put each concept into words more carefully. - -3. Words are containers - -ex: That sentence was filled with emotion. - -4. Listeners and readers extract mental content from words - -ex: Let me know if you find any new sensations in the poem. - -The minor framework views language as an open pipe spilling mental content into the void:
1. Speakers and writers eject mental content into an external space - -ex: Get those ideas out where they can do some good. - -2. Mental content is reified (viewed as concrete) in this space - -ex: That concept has been floating around for decades. - -3. Listeners and readers extract mental content from this space - -ex: Let me know if you find any good concepts in the essay. - -Computers follow programs, sets of instructions in a formal language. The development of a programming language involves the use of a metalanguage. The act of working with metalanguages in programming is known as metaprogramming. - -Backus–Naur form, developed in the 1960s by John Backus and Peter Naur, is one of the earliest metalanguages used in computing. Examples of modern-day programming languages which commonly find use in metaprogramming include ML, Lisp, m4, and Yacc. diff --git a/wiki/wikipedia/3009.txt b/wiki/wikipedia/3009.txt deleted file mode 100644 index 600581743b6272861044938842dedf018f020ad9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3009.txt +++ /dev/null @@ -1,160 +0,0 @@ -The quadratic formula, the symbolic solution of the quadratic equation $ax^2 + bx + c = 0$: -$$ -x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} -$$ - -In mathematics, to solve an equation is to find its solutions, which are the values (numbers, functions, sets, etc.) that fulfill the condition stated by the equation, consisting generally of two expressions related by an equals sign. When seeking a solution, one or more variables are designated as unknowns. A solution is an assignment of values to the unknown variables that makes the equality in the equation true. In other words, a solution is a value or a collection of values (one for each unknown) such that, when substituted for the unknowns, the equation becomes an equality. - -A solution of an equation is often called a root of the equation, particularly but not only for polynomial equations. The set of all solutions of an equation is its solution set. - -An equation may be solved either numerically or symbolically. Solving an equation numerically means that only numbers are admitted as solutions. Solving an equation symbolically means that expressions can be used for representing the solutions. - -For example, the equation x + y = 2x – 1 is solved for the unknown x by the expression x = y + 1, because substituting y + 1 for x in the equation results in (y + 1) + y = 2(y + 1) – 1, a true statement. It is also possible to take the variable y to be the unknown, and then the equation is solved by y = x – 1. Or x and y can both be treated as unknowns, and then there are many solutions to the equation; a symbolic solution is (x, y) = (a + 1, a), where the variable a may take any value. Instantiating a symbolic solution with specific numbers gives a numerical solution; for example, a = 0 gives (x, y) = (1, 0) (that is, x = 1, y = 0), and a = 1 gives (x, y) = (2, 1). - -The distinction between known variables and unknown variables is generally made in the statement of the problem, by phrases such as "an equation in x and y", or "solve for x and y", which indicate the unknowns, here x and y. - -However, it is common to reserve x, y, z, ... to denote the unknowns, and to use a, b, c, ... to denote the known variables, which are often called parameters. This is typically the case when considering polynomial equations, such as quadratic equations.
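As a small illustration of the difference between symbolic and numerical solving, here is a Python sketch (ours, not part of the original article) that applies the quadratic formula shown above to produce numerical solutions:

```
import cmath

def solve_quadratic(a, b, c):
    """Return the two roots of a*x**2 + b*x + c = 0 (complex if needed)."""
    if a == 0:
        raise ValueError("not quadratic: a must be nonzero")
    disc = cmath.sqrt(b * b - 4 * a * c)   # square root of the discriminant
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

print(solve_quadratic(1, -3, 2))   # (2+0j, 1+0j): x^2 - 3x + 2 = (x-1)(x-2)
print(solve_quadratic(1, 0, 1))    # (1j, -1j): x^2 + 1 = 0 has no real roots
```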
However, for some problems, all variables may assume either role. - -Depending on the context, solving an equation may consist of finding any solution (a single solution is enough), all solutions, or a solution that satisfies further properties, such as belonging to a given interval. When the task is to find the solution that is the best under some criterion, this is an optimization problem. Solving an optimization problem is generally not referred to as "equation solving", as solving methods generally start from a particular solution and search for a better one, repeating the process until the best solution is eventually found. - -One general form of an equation is -$$ -f\left(x_1,\dots,x_n\right)=c, -$$ - -where f is a function, x1, ..., xn are the unknowns, and c is a constant. Its solutions are the elements of the inverse image -$$ -f^{-1}(c)=\bigl\{(a_1,\dots,a_n)\in D\mid f\left(a_1,\dots,a_n\right)=c\bigr\}, -$$ - -where D is the domain of the function f. The set of solutions can be the empty set (there are no solutions), a singleton (there is exactly one solution), finite, or infinite (there are infinitely many solutions). - -For example, an equation such as -$$ -3x+2y=21z, -$$ - -with unknowns x, y and z, can be put in the above form by subtracting 21z from both sides of the equation, to obtain -$$ -3x+2y-21z=0 -$$ - -In this particular case there is not just one solution, but an infinite set of solutions, which can be written using set builder notation as -$$ -\bigl\{(x,y,z)\mid 3x+2y-21z=0\bigr\}. -$$ - -One particular solution is x = 0, y = 0, z = 0. Two other solutions are x = 3, y = 6, z = 1, and x = 8, y = 9, z = 2. There is a unique plane in three-dimensional space which passes through the three points with these coordinates, and this plane is the set of all points whose coordinates are solutions of the equation. - -The solution set of a given set of equations or inequalities is the set of all its solutions, a solution being a tuple of values, one for each unknown, that satisfies all the equations or inequalities. - -If the solution set is empty, then there are no values of the unknowns that satisfy simultaneously all equations and inequalities. - -For a simple example, consider the equation -$$ -x^2=2. -$$ - -This equation can be viewed as a Diophantine equation, that is, an equation for which only integer solutions are sought. In this case, the solution set is the empty set, since 2 is not the square of an integer. However, if one searches for real solutions, there are two solutions, $\sqrt{2}$ and $-\sqrt{2}$; in other words, the solution set is $\{\sqrt{2}, -\sqrt{2}\}$. - -When an equation contains several unknowns, and when one has several equations with more unknowns than equations, the solution set is often infinite. In this case, the solutions cannot be listed. For representing them, a parametrization is often useful, which consists of expressing the solutions in terms of some of the unknowns or auxiliary variables. This is always possible when all the equations are linear. - -Such infinite solution sets can naturally be interpreted as geometric shapes such as lines, curves, planes, and more generally algebraic varieties or manifolds. In particular, algebraic geometry may be viewed as the study of solution sets of algebraic equations. - -The methods for solving equations generally depend on the type of equation, both the kind of expressions in the equation and the kind of values that may be assumed by the unknowns.
The variety in types of equations is large, and so are the corresponding methods. Only a few specific types are mentioned below. - -In general, given a class of equations, there may be no known systematic method (algorithm) that is guaranteed to work. This may be due to a lack of mathematical knowledge; some problems were only solved after centuries of effort. But this also reflects that, in general, no such method can exist: some problems are known to be unsolvable by an algorithm, such as Hilbert's tenth problem, which was proved unsolvable in 1970. - -For several classes of equations, algorithms have been found for solving them, some of which have been implemented and incorporated in computer algebra systems, but often require no more sophisticated technology than pencil and paper. In some other cases, heuristic methods are known that are often successful but that are not guaranteed to succeed. - -If the solution set of an equation is restricted to a finite set (as is the case for equations in modular arithmetic, for example), or can be limited to a finite number of possibilities (as is the case with some Diophantine equations), the solution set can be found by brute force, that is, by testing each of the possible values (candidate solutions). It may be the case, though, that the number of possibilities to be considered, although finite, is so huge that an exhaustive search is not practically feasible; this is, in fact, a requirement for strong encryption methods. - -As with all kinds of problem solving, trial and error may sometimes yield a solution, in particular where the form of the equation, or its similarity to another equation with a known solution, may lead to an "inspired guess" at the solution. If a guess, when tested, fails to be a solution, consideration of the way in which it fails may lead to a modified guess. - -Equations involving linear or simple rational functions of a single real-valued unknown, say x, such as -$$ -8x+7=4x+35 \quad \text{or} \quad \frac{4x + 9}{3x + 4} = 2 , -$$ - -can be solved using the methods of elementary algebra. - -Smaller systems of linear equations can be solved likewise by methods of elementary algebra. For solving larger systems, algorithms are used that are based on linear algebra. - -Polynomial equations of degree up to four can be solved exactly using algebraic methods, of which the quadratic formula is the simplest example. Polynomial equations with a degree of five or higher require in general numerical methods (see below) or special functions such as Bring radicals, although some specific cases may be solvable algebraically, for example -$$ -4x^5 - x^3 - 3 = 0 -$$ - -(by using the rational root theorem), and -$$ -x^6 - 5x^3 + 6 = 0 , -$$ - -(by using the substitution $x = z^{1/3}$, which simplifies this to a quadratic equation in z). - -In Diophantine equations the solutions are required to be integers. In some cases a brute force approach can be used, as mentioned above. In some other cases, in particular if the equation is in one unknown, it is possible to solve the equation for rational-valued unknowns (see Rational root theorem), and then find solutions to the Diophantine equation by restricting the solution set to integer-valued solutions. For example, the polynomial equation -$$ -2x^5-5x^4-x^3-7x^2+2x+3=0 -$$ - -has as rational solutions x = −1/2 and x = 3, and so, viewed as a Diophantine equation, it has the unique solution x = 3. - -In general, however, Diophantine equations are among the most difficult equations to solve.
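The rational-root technique just mentioned is mechanical enough to code directly. A Python sketch (ours; the helper names are illustrative) that finds all rational roots of an integer polynomial using exact arithmetic, which a Diophantine reading then filters down to the integer ones:

```
from fractions import Fraction

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):
    """All rational roots of coeffs[0]*x^n + ... + coeffs[-1], integer coeffs.
    By the rational root theorem, every root p/q has p | constant term and
    q | leading coefficient."""
    an, a0 = coeffs[0], coeffs[-1]
    if a0 == 0:
        raise ValueError("constant term must be nonzero for this sketch")
    candidates = {Fraction(s * p, q)
                  for p in divisors(a0) for q in divisors(an) for s in (1, -1)}
    def value(x):                      # Horner evaluation, exact
        acc = Fraction(0)
        for c in coeffs:
            acc = acc * x + c
        return acc
    return sorted(x for x in candidates if value(x) == 0)

# 2x^5 - 5x^4 - x^3 - 7x^2 + 2x + 3 has rational roots -1/2 and 3;
# only x = 3 is an integer (Diophantine) solution.
print(rational_roots([2, -5, -1, -7, 2, 3]))
```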
- -In the simple case of a function of one variable, say, h(x), we can solve an equation of the form h(x) = c for some constant c by considering what is known as the inverse function of h. - -Given a function h : A → B, the inverse function, denoted h−1 and defined as h−1 : B → A, is a function such that -$$ -h^{-1}\bigl(h(x)\bigr) = h\bigl(h^{-1}(x)\bigr) = x . -$$ - -Now, if we apply the inverse function to both sides of h(x) = c, where c is a constant value in B, we obtain -$$ -\begin{align} h^{-1}\bigl(h(x)\bigr) &= h^{-1}(c) \\ x &= h^{-1}(c) \end{align} -$$ - -and we have found the solution to the equation. However, depending on the function, the inverse may be difficult to define, may fail to be a function on all of the set B (being defined only on some subset), or may take many values at some point. - -If just one solution will do, instead of the full solution set, it is actually sufficient if only the functional identity -$$ -h\left(h^{-1}(x)\right) = x -$$ - -holds. For example, the projection π1 : R2 → R defined by π1(x, y) = x has no post-inverse, but it has a pre-inverse $\pi_1^{-1}$ defined by $\pi_1^{-1}(x) = (x, 0)$. Indeed, the equation π1(x, y) = c is solved by -$$ -(x,y) = \pi_1^{-1}(c) = (c,0). -$$ - -Examples of inverse functions include the nth root (inverse of xn); the logarithm (inverse of ax); the inverse trigonometric functions; and Lambert's W function (inverse of xex). - -If the left-hand side expression of an equation P = 0 can be factorized as P = QR, the solution set of the original equation consists of the union of the solution sets of the two equations Q = 0 and R = 0. - -For example, the equation -$$ -\tan x + \cot x = 2 -$$ - -can be rewritten, using the identity tan x cot x = 1, as -$$ -\frac{\tan^2 x -2 \tan x+1}{\tan x} = 0, -$$ - -which can be factorized into -$$ -\frac{\left(\tan x - 1\right)^2}{\tan x}= 0. -$$ - -The solutions are therefore the solutions of the equation tan x = 1, namely the set -$$ -x = \tfrac{\pi}{4} + k\pi, \quad k = 0, \pm 1, \pm 2, \ldots. -$$ - -With more complicated equations in real or complex numbers, simple methods to solve equations can fail. Often, root-finding algorithms like the Newton–Raphson method can be used to find a numerical solution to an equation, which, for some applications, can be entirely sufficient to solve some problem. - -Equations involving matrices and vectors of real numbers can often be solved by using methods from linear algebra. - -There is a vast body of methods for solving various kinds of differential equations, both numerically and analytically. A particular class of problem that can be considered to belong here is integration, and the analytic methods for solving this kind of problem are now called symbolic integration. Solutions of differential equations can be implicit or explicit. diff --git a/wiki/wikipedia/301.txt b/wiki/wikipedia/301.txt deleted file mode 100644 index 3103ef89e932059bea9bf49c0899cc09dd55ab05..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/301.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematics, the gradient conjecture, due to René Thom (1989), was proved in 2000 by three Polish mathematicians, Krzysztof Kurdyka (University of Savoie, France), Tadeusz Mostowski (Warsaw University, Poland) and Adam Parusiński (University of Angers, France).
- -The conjecture states that given a real-valued analytic function f defined on Rn and a trajectory x(t) of the gradient vector field of f having a limit point x0 ∈ Rn, where f has an isolated critical point at x0, there exists a limit (in the projective space $\mathbb{P}\mathbb{R}^{n-1}$) for the secant lines from x(t) to x0, as t tends to zero. - -The proof depends on a theorem due to Stanisław Łojasiewicz. diff --git a/wiki/wikipedia/3010.txt b/wiki/wikipedia/3010.txt deleted file mode 100644 index 53da402dd0c91cae897e6460879d941a1d26d64b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3010.txt +++ /dev/null @@ -1,9 +0,0 @@ -Zapier is a product that allows end users to integrate the web applications they use. Zapier is based in Sunnyvale, California and employs a workforce of 550+ workers located around the United States and 38 other countries. - -Zapier provides workflows to automate the use of web applications together. It is often described as a translator between web APIs. - -Zapier was started in Columbia, Missouri by co-founders Wade Foster, Bryan Helmig, and Mike Knoop as part of the first Startup Weekend Columbia in 2011. After initially submitting an application for the Winter 2012 funding cycle and being rejected, they built their initial prototype with 25 apps, and were accepted to the Y Combinator startup seed accelerator in the Summer 2012 funding cycle. As a result of the acceptance, the company relocated to Mountain View, California in Spring 2012. In October of the same year, Zapier received a $1.3 million seed funding round led by global venture investment firm Bessemer Venture Partners. Zapier reached profitability in 2014. - -In March 2017, the company offered a "de-location package", consisting of $10,000 in moving reimbursement to employees who desired to move away from the San Francisco Bay Area. After the announcement, job applications increased by 50%. - -In March 2021, the company acquired Makerpad, a no-code education service and community, for an undisclosed amount. diff --git a/wiki/wikipedia/3011.txt b/wiki/wikipedia/3011.txt deleted file mode 100644 index c3d348b55ea00657e8e6461897d37855cd18a721..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3011.txt +++ /dev/null @@ -1,25 +0,0 @@ -David Harel (; born 12 April 1950) is a computer scientist, currently serving as President of the Israel Academy of Sciences and Humanities. He has been on the faculty of the Weizmann Institute of Science in Israel since 1980, and holds the William Sussman Professorial Chair of Mathematics. Born in London, England, he was Dean of the Faculty of Mathematics and Computer Science at the institute for seven years. - -Harel is best known for his work on dynamic logic, computability, database theory, software engineering and modelling biological systems. In the 1980s he invented the graphical language of Statecharts for specifying and programming reactive systems, which has been adopted as part of the UML standard. Since the late 1990s he has concentrated on a scenario-based approach to programming such systems, launched by his co-invention (with W. Damm) of Live Sequence Charts. He has published expository accounts of computer science, such as his award winning 1987 book "Algorithmics: The Spirit of Computing" and his 2000 book "Computers Ltd.: What They Really Can't Do", and has presented series on computer science for Israeli radio and television.
He has also worked on other diverse topics, such as graph layout, computer science education, biological modeling and the analysis and communication of odors. - -Harel completed his PhD at MIT between 1976 and 1978. In 1987, he co-founded the software company I-Logix, which in 2006 became part of IBM. He has advocated building a full computer model of the Caenorhabditis elegans nematode, which was the first multicellular organism to have its genome completely sequenced. The eventual completeness of such a model depends on his updated version of the Turing test. He is a fellow of the ACM, the IEEE, the AAAS, and the EATCS, and a member of several international academies. Harel is active in a number of peace and human rights organizations in Israel. - -* 1986 Stevens Award for Software Development Methods - -* 1992 ACM Karlstrom Outstanding Educator Award - -* 1994 ACM Fellow - -* 2006 Doctor (Laurea) Honoris Causa, University of Milano-Bicocca, 18 May 2006 - -* 2006 Fellow Honoris Causa, Open University of Israel - -* 2007 ACM Software System Award - -* 2014 International Honorary Member of the American Academy of Arts and Sciences - -* 2019 International Member of the US National Academy of Sciences - -* 2020 Fellow of the Royal Society (FRS) - -* 2021 Foreign Member of the Chinese Academy of Sciences diff --git a/wiki/wikipedia/3012.txt b/wiki/wikipedia/3012.txt deleted file mode 100644 index e20d8ea1da7828e4c99ae654244e35ec358aa3e8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3012.txt +++ /dev/null @@ -1,59 +0,0 @@ -In projective geometry Qvist's theorem, named after the Finnish mathematician Bertil Qvist, is a statement on ovals in finite projective planes. Standard examples of ovals are non-degenerate (projective) conic sections. The theorem gives an answer to the question How many tangents to an oval can pass through a point in a finite projective plane? The answer depends essentially upon the order (number of points on a line minus 1) of the plane. - -*In a projective plane a set Ω of points is called an oval, if: - -# Any line l meets Ω in at most two points, and - -# For any point P ∈ Ω there exists exactly one tangent line t through P, i.e., t ∩ Ω = {P}. - -When |l ∩ Ω| = 0 the line l is an exterior line (or passant), if |l ∩ Ω| = 1 a tangent line and if |l ∩ Ω| = 2 the line is a secant line. - -For finite planes (i.e. the set of points is finite) we have a more convenient characterization: - -* For a finite projective plane of order n (i.e. any line contains n + 1 points) a set Ω of points is an oval if and only if |Ω| = n + 1 and no three points are collinear (on a common line). - -Qvist's theorem: - -Let Ω be an oval in a finite projective plane of order n. - -(a) If n is odd, - -every point P ∉ Ω is incident with 0 or 2 tangents. - -(b) If n is even, - -there exists a point N, the nucleus or knot, such that the set of tangents to oval Ω is the pencil of all lines through N. - -Proof: - -(a) Let tR be the tangent to Ω at a point R ∈ Ω and let P1, ... , Pn be the remaining points of this line. For each i, the lines through Pi partition Ω into sets of cardinality 2 or 1 or 0. Since the number |Ω| = n + 1 is even, for any point Pi, there must exist at least one more tangent through that point. The total number of tangents is n + 1, hence, there are exactly two tangents through each Pi, tR and one other. Thus, for any point P not in oval Ω, if P is on any tangent to Ω it is on exactly two tangents. - -(b) Let s be a secant, s ∩ Ω = {P0, P1} and s = {P0, P1,...,Pn}.
Because |Ω| = n + 1 is odd, through any Pi, i = 2,...,n, there passes at least one tangent ti. The total number of tangents is n + 1. Hence, through any point Pi for i = 2,...,n there is exactly one tangent. If N is the point of intersection of two tangents, no secant can pass through N. Because n + 1, the number of tangents, is also the number of lines through any point, any line through N is a tangent. - -Example in a pappian plane of even order: - -Using inhomogeneous coordinates over a field K, |K| = n even, the set - -$\Omega_1 = \{(x, y) \mid y = x^2\} \cup \{(\infty)\}$, - -the projective closure of the parabola $y = x^2$, is an oval with the point $N = (0)$ as nucleus, i.e., any line $y = c$, with $c \in K$, is a tangent. - -*Any oval Ω in a finite projective plane of even order n has a nucleus N. - -The point set $\overline{\Omega} := \Omega \cup \{N\}$ is called a hyperoval or (n + 2)-arc. (A finite oval is an (n + 1)-arc.) - -One easily checks the following essential property of a hyperoval: - -*For a hyperoval $\overline{\Omega}$ and a point $R \in \overline{\Omega}$ the point set $\overline{\Omega} \setminus \{R\}$ is an oval. - -This property provides a simple means of constructing additional ovals from a given oval. - -Example: - -For a projective plane over a finite field K, |K| = n even and n > 4, the set - -$\Omega_1 = \{(x, y) \mid y = x^2\} \cup \{(\infty)\}$ is an oval (conic section), - -$\overline{\Omega}_1 = \{(x, y) \mid y = x^2\} \cup \{(0), (\infty)\}$ is a hyperoval and - -$\Omega_2 = \{(x, y) \mid y = x^2\} \cup \{(0)\}$ is another oval that is not a conic section. (Recall that a conic section is determined uniquely by 5 points.) diff --git a/wiki/wikipedia/3013.txt b/wiki/wikipedia/3013.txt deleted file mode 100644 index 144089becb970194df4f65bb42ee82a4fc40e69e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3013.txt +++ /dev/null @@ -1,3 +0,0 @@ -Tetris Evolution is a video game, based on Tetris. It was released in 2007 by THQ for the Xbox 360. - -It has a rating of 52/100 at Metacritic. Many criticised it for its lack of innovation, limited game modes and high price. GameSpot gave the game a 6.6/10, saying "The single-player and multiplayer options are solid yet predictable, and the game's got all the personality of a screensaver. If 22 years of Tetris have left you tired of the formula, Tetris Evolution will do very little to reignite your passion for this classic Russian mind-bender." diff --git a/wiki/wikipedia/3014.txt b/wiki/wikipedia/3014.txt deleted file mode 100644 index 4895d3187500dee4bf4baefc3172dbb326975248..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3014.txt +++ /dev/null @@ -1,32 +0,0 @@ -Constructive dilemma is a valid rule of inference of propositional logic. It is the inference that, if P implies Q and R implies S and either P or R is true, then either Q or S has to be true. In sum, if two conditionals are true and at least one of their antecedents is, then at least one of their consequents must be too. Constructive dilemma is the disjunctive version of modus ponens, whereas destructive dilemma is the disjunctive version of modus tollens. The constructive dilemma rule can be stated: -$$ -\frac{(P \to Q), (R \to S), P \lor R}{\therefore Q \lor S} -$$ - -where the rule is that whenever instances of "$P \to Q$", "$R \to S$", and "$P \lor R$" appear on lines of a proof, "$Q \lor S$" can be placed on a subsequent line.
- -The constructive dilemma rule may be written in sequent notation: -$$ -(P \to Q), (R \to S), (P \lor R) \vdash (Q \lor S) -$$ - -where $\vdash$ is a metalogical symbol meaning that $Q \lor S$ is a syntactic consequence of $P \to Q$, $R \to S$, and $P \lor R$ in some logical system; - -and expressed as a truth-functional tautology or theorem of propositional logic: -$$ -(((P \to Q) \land (R \to S)) \land (P \lor R)) \to (Q \lor S) -$$ - -where $P$, $Q$, $R$ and $S$ are propositions expressed in some formal system. - -If I win a million dollars, I will donate it to an orphanage. - -If my friend wins a million dollars, he will donate it to a wildlife fund. - -Either I win a million dollars or my friend wins a million dollars. - -Therefore, either an orphanage will get a million dollars, or a wildlife fund will get a million dollars. - -The dilemma derives its name from the transfer of the disjunctive operator. diff --git a/wiki/wikipedia/3015.txt b/wiki/wikipedia/3015.txt deleted file mode 100644 index 997b872936319c2b40a6e84546b772065c915290..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3015.txt +++ /dev/null @@ -1,17 +0,0 @@ -DSatur is a graph colouring algorithm put forward by Daniel Brélaz in 1979. Similarly to the greedy colouring algorithm, DSatur colours the vertices of a graph one after another, expending a previously unused colour when needed. Once a new vertex has been coloured, the algorithm determines which of the remaining uncoloured vertices has the highest number of colours in its neighbourhood and colours this vertex next. Brélaz defines this number as the degree of saturation of a given vertex. DSatur is a heuristic graph colouring algorithm, yet produces exact results for bipartite, cycle, and wheel graphs. DSatur has also been referred to as saturation LF in the literature. - -Define the degree of saturation of a vertex as the number of different colours in its neighbourhood. Given a simple, undirected graph G comprising vertex set V and edge set E: - -# Generate a degree ordering of V. - -# Select a vertex of maximal degree and colour it with the first colour. - -# Select an uncoloured vertex with the highest degree of saturation; break ties by choosing the vertex with the highest degree among them. Further ties are broken randomly. - -# Loop through the colour classes created so far, and colour the selected vertex with the first suitable colour. - -# If any vertices remain uncoloured, return to step 3. - -The worst-case complexity of DSatur is $O(n^2)$, where $n$ is the number of vertices in the graph. The algorithm can also be implemented using a binary heap to store saturation degrees, operating in $O((n+m)\log n)$ where $m$ is the number of edges in the graph. This produces much faster runs with sparse graphs. - -DSatur is known to be exact for bipartite graphs, as well as for cycle and wheel graphs. In an empirical comparison by Lewis in 2021, DSatur produced significantly better vertex colourings than the greedy algorithm on random graphs with edge probability $p=0.5$, while in turn producing significantly worse colourings than the Recursive Largest First algorithm.
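The five steps above translate directly into code. A compact Python sketch (ours, unoptimised: it rescans the uncoloured vertices each round instead of using the binary-heap variant mentioned above):

```
def dsatur(adj):
    """Greedy DSatur colouring. adj: dict mapping vertex -> set of neighbours.
    Returns a dict mapping vertex -> colour index (0, 1, 2, ...)."""
    colour = {}
    uncoloured = set(adj)
    # Step 2: start with a vertex of maximal degree.
    v = max(uncoloured, key=lambda u: len(adj[u]))
    while True:
        # Step 4: give v the smallest colour not used in its neighbourhood.
        used = {colour[w] for w in adj[v] if w in colour}
        colour[v] = next(c for c in range(len(adj)) if c not in used)
        uncoloured.remove(v)
        if not uncoloured:               # step 5
            return colour
        # Step 3: highest saturation, ties broken by degree.
        def saturation(u):
            return len({colour[w] for w in adj[u] if w in colour})
        v = max(uncoloured, key=lambda u: (saturation(u), len(adj[u])))

# A 5-cycle has chromatic number 3, and DSatur is exact on cycle graphs.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
colours = dsatur(c5)
print(colours, "->", max(colours.values()) + 1, "colours")
```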
diff --git a/wiki/wikipedia/3016.txt b/wiki/wikipedia/3016.txt deleted file mode 100644 index 06bc83c6d9cd6fd5e405f813b1e47b006d307e80..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3016.txt +++ /dev/null @@ -1,105 +0,0 @@ -In mathematics, Fermat's theorem (also known as interior extremum theorem) is a method to find local maxima and minima of differentiable functions on open sets by showing that every local extremum of the function is a stationary point (the function's derivative is zero at that point). Fermat's theorem is a theorem in real analysis, named after Pierre de Fermat. - -By using Fermat's theorem, the potential extrema of a function $\displaystyle f$, with derivative $\displaystyle f'$, are found by solving an equation in $\displaystyle f'$. Fermat's theorem gives only a necessary condition for extreme function values, as some stationary points are inflection points (not a maximum or minimum). The function's second derivative, if it exists, can sometimes be used to determine whether a stationary point is a maximum or minimum. - -One way to state Fermat's theorem is that, if a function has a local extremum at some point and is differentiable there, then the function's derivative at that point must be zero. In precise mathematical language: - -Let $f\colon (a,b) \rightarrow \mathbb{R}$ be a function and suppose that $x_0 \in (a,b)$ is a point where $f$ has a local extremum. If $f$ is differentiable at $\displaystyle x_0$, then $f'(x_0) = 0$. - -Another way to understand the theorem is via the contrapositive statement: if the derivative of a function at any point is not zero, then there is not a local extremum at that point. Formally: - -If $f$ is differentiable at $x_0 \in (a,b)$, and $f'(x_0) \neq 0$, then $x_0$ is not a local extremum of $f$. - -The global extrema of a function f on a domain A occur only at boundaries, non-differentiable points, and stationary points. - -If $x_0$ is a global extremum of f, then one of the following is true: - -* boundary: $x_0$ is in the boundary of A - -* non-differentiable: f is not differentiable at $x_0$ - -* stationary point: $x_0$ is a stationary point of f - -In higher dimensions, exactly the same statement holds; however, the proof is slightly more complicated. The complication is that in 1 dimension, one can either move left or right from a point, while in higher dimensions, one can move in many directions. Thus, if the derivative does not vanish, one must argue that there is some direction in which the function increases – and thus in the opposite direction the function decreases. This is the only change to the proof or the analysis. - -The statement can also be extended to differentiable manifolds. If $f : M \to \mathbb{R}$ is a differentiable function on a manifold $M$, then its local extrema must be critical points of $f$, in particular points where the exterior derivative $df$ is zero. - -Fermat's theorem is central to the calculus method of determining maxima and minima: in one dimension, one can find extrema by simply computing the stationary points (by computing the zeros of the derivative), the non-differentiable points, and the boundary points, and then investigating this set to determine the extrema. - -One can do this either by evaluating the function at each point and taking the maximum, or by analyzing the derivatives further, using the first derivative test, the second derivative test, or the higher-order derivative test. 
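As a worked instance of the procedure just described, here is a short Python sketch (ours; the function f(x) = x³ − 3x is an arbitrary illustrative choice) that finds the stationary points by solving f'(x) = 0 and classifies them with the second derivative test:

```
import math

# f(x) = x^3 - 3x on the open interval (-inf, inf):
# f'(x) = 3x^2 - 3 and f''(x) = 6x.
f = lambda x: x**3 - 3 * x
fp = lambda x: 3 * x**2 - 3
fpp = lambda x: 6 * x

# Stationary points: 3x^2 - 3 = 0  =>  x = -1 or x = +1.
for x0 in (-1.0, 1.0):
    assert math.isclose(fp(x0), 0.0, abs_tol=1e-12)
    kind = ("local max" if fpp(x0) < 0 else
            "local min" if fpp(x0) > 0 else
            "second derivative test inconclusive")
    print(f"x0 = {x0:+.0f}: f(x0) = {f(x0):+.0f}, f''(x0) = {fpp(x0):+.0f} -> {kind}")
# x0 = -1 is a local maximum (f = +2); x0 = +1 is a local minimum (f = -2).
```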
- -Intuitively, a differentiable function is approximated by its derivative – a differentiable function behaves infinitesimally like a linear function $a+bx,$ or more precisely, $f(x_0) + f'(x_0)(x-x_0).$ Thus, from the perspective that "if f is differentiable and has non-vanishing derivative at $x_0,$ then it does not attain an extremum at $x_0,$" the intuition is that if the derivative at $x_0$ is positive, the function is increasing near $x_0,$ while if the derivative is negative, the function is decreasing near $x_0.$ In both cases, it cannot attain a maximum or minimum, because its value is changing. It can only attain a maximum or minimum if it "stops" – if the derivative vanishes (or if it is not differentiable, or if one runs into the boundary and cannot continue). However, making "behaves like a linear function" precise requires careful analytic proof. - -More precisely, the intuition can be stated as: if the derivative is positive, there is some point to the right of $x_0$ where f is greater, and some point to the left of $x_0$ where f is less, and thus f attains neither a maximum nor a minimum at $x_0.$ Conversely, if the derivative is negative, there is a point to the right which is lesser, and a point to the left which is greater. Stated this way, the proof is just translating this into equations and verifying "how much greater or less". - -The intuition is based on the behavior of polynomial functions. Assume that function f has a maximum at x0, the reasoning being similar for a function minimum. If $\displaystyle x_0 \in (a,b)$ is a local maximum then, roughly, there is a (possibly small) neighborhood of $\displaystyle x_0$ such that the function "is increasing before" and "decreasing after" $\displaystyle x_0$. As the derivative is positive for an increasing function and negative for a decreasing function, $\displaystyle f'$ is positive before and negative after $\displaystyle x_0$. $\displaystyle f'$ doesn't skip values (by Darboux's theorem), so it has to be zero at some point between the positive and negative values. The only point in the neighbourhood where it is possible to have $\displaystyle f'(x) = 0$ is $\displaystyle x_0$. - -The theorem (and its proof below) is more general than the intuition in that it doesn't require the function to be differentiable over a neighbourhood around $\displaystyle x_0$. It is sufficient for the function to be differentiable only at the extreme point. - -Suppose that f is differentiable at $x_0 \in (a,b),$ with derivative K, and assume without loss of generality that $K > 0,$ so the tangent line at $x_0$ has positive slope (is increasing). Then there is a neighborhood of $x_0$ on which the secant lines through $x_0$ all have positive slope, and thus to the right of $x_0,$ f is greater, and to the left of $x_0,$ f is lesser. - -The schematic of the proof is: - -* an infinitesimal statement about derivative (tangent line) at $x_0$ implies - -* a local statement about difference quotients (secant lines) near $x_0,$ which implies - -* a local statement about the value of f near $x_0.$ - -Formally, by the definition of derivative, $f'(x_0) = K$ means that -$$ -\lim_{\varepsilon \to 0} \frac{f(x_0+\varepsilon)-f(x_0)}{\varepsilon} = K. -$$ - -In particular, for sufficiently small $\varepsilon$ (less than some $\varepsilon_0$), the quotient must be at least $K/2,$ by the definition of limit.
Thus on the interval $(x_0-\varepsilon_0,x_0+\varepsilon_0)$ one has: -$$ -\frac{f(x_0+\varepsilon)-f(x_0)}{\varepsilon} > K/2; -$$ - -one has replaced the equality in the limit (an infinitesimal statement) with an inequality on a neighborhood (a local statement). Thus, rearranging the equation, if $\varepsilon > 0,$ then: -$$ -f(x_0+\varepsilon) > f(x_0) + (K/2)\varepsilon > f(x_0), -$$ - -so on the interval to the right, f is greater than $f(x_0),$ and if $\varepsilon < 0,$ then: -$$ -f(x_0+\varepsilon) < f(x_0) + (K/2)\varepsilon < f(x_0), -$$ - -so on the interval to the left, f is less than $f(x_0).$ - -Thus $x_0$ is not a local or global maximum or minimum of f. - -Alternatively, one can start by assuming that $\displaystyle x_0$ is a local maximum, and then prove that the derivative is 0. - -Suppose that $\displaystyle x_0$ is a local maximum (a similar proof applies if $\displaystyle x_0$ is a local minimum). Then there exists $\delta > 0$ such that $(x_0 - \delta,x_0 + \delta) \subset (a,b)$ and such that we have $f(x_0) \ge f(x)$ for all $x$ with $\displaystyle |x - x_0| < \delta$. Hence for any $h \in (0,\delta)$ we have -$$ -\frac{f(x_0+h) - f(x_0)}{h} \le 0. -$$ - -Since the limit of this ratio as $\displaystyle h$ gets close to 0 from above exists and is equal to $\displaystyle f'(x_0)$ we conclude that $f'(x_0) \le 0$. On the other hand, for $h \in (-\delta,0)$ we notice that -$$ -\frac{f(x_0+h) - f(x_0)}{h} \ge 0 -$$ - -but again the limit as $\displaystyle h$ gets close to 0 from below exists and is equal to $\displaystyle f'(x_0)$ so we also have $f'(x_0) \ge 0$. - -Hence we conclude that $\displaystyle f'(x_0) = 0.$ - -A subtle misconception that is often held in the context of Fermat's theorem is to assume that it makes a stronger statement about local behavior than it does. Notably, Fermat's theorem does not say that functions (monotonically) "increase up to" or "decrease down from" a local maximum. This is very similar to the misconception that a limit means "monotonically getting closer to a point". For "well-behaved functions" (which here means continuously differentiable), some intuitions hold, but in general functions may be ill-behaved, as illustrated below. The moral is that derivatives determine infinitesimal behavior, and that continuous derivatives determine local behavior. - -If f is continuously differentiable $\left(C^1\right)$ on an open neighborhood of the point $x_0$, then $f'(x_0) > 0$ does mean that f is increasing on a neighborhood of $x_0,$ as follows. - -If $f'(x_0) = K > 0$ and $f \in C^1,$ then by continuity of the derivative, there is some $\varepsilon_0 > 0$ such that $f'(x) > K/2$ for all $x \in (x_0 - \varepsilon_0, x_0 + \varepsilon_0)$. Then f is increasing on this interval, by the mean value theorem: the slope of any secant line is at least $K/2,$ as it equals the slope of some tangent line. - -However, in the general statement of Fermat's theorem, where one is only given that the derivative at $x_0$ is positive, one can only conclude that secant lines through $x_0$ will have positive slope, for secant lines between $x_0$ and near enough points. 
- -Conversely, if the derivative of f at a point is zero ($x_0$ is a stationary point), one cannot in general conclude anything about the local behavior of f – it may increase to one side and decrease to the other (as in $x^3$), increase to both sides (as in $x^4$), decrease to both sides (as in $-x^4$), or behave in more complicated ways, such as oscillating (as in $x^2 \sin(1/x)$, as discussed below). - -One can analyze the infinitesimal behavior via the second derivative test and the higher-order derivative test, if the function is differentiable enough. If the first non-vanishing derivative $f^{(k)}(x_0) \neq 0$ is a continuous function (so $f \in C^k$), then one can conclude local behavior: one can treat f as locally close to a polynomial of degree k, since it behaves approximately as $f^{(k)}(x_0) (x - x_0)^k.$ But if the k-th derivative is not continuous, one cannot draw such conclusions, and the function may behave rather differently. - -Consider the function $\sin(1/x)$: it oscillates increasingly rapidly between $-1$ and $1$ as x approaches 0. Consequently, the function $f(x) = (1 + \sin(1/x))x^2$ oscillates increasingly rapidly between 0 and $2x^2$ as x approaches 0. If one extends this function by defining $f(0) = 0$ then the extended function is continuous and everywhere differentiable (it is differentiable at 0 with derivative 0), but has rather unexpected behavior near 0: in any neighborhood of 0 it attains 0 infinitely many times, but also equals $2x^2$ (a positive number) infinitely often. - -Continuing in this vein, one may define $g(x) = (2 + \sin(1/x))x^2$, which oscillates between $x^2$ and $3x^2$. The function has its local and global minimum at $x=0$, but on no neighborhood of 0 is it decreasing down to or increasing up from 0 – it oscillates wildly near 0. - -This pathology can be understood because, while the function g is everywhere differentiable, it is not continuously differentiable: the limit of $g'(x)$ as $x \to 0$ does not exist, so the derivative is not continuous at 0. This reflects the oscillation between increasing and decreasing values as it approaches 0. diff --git a/wiki/wikipedia/3017.txt b/wiki/wikipedia/3017.txt deleted file mode 100644 index 1f2938b9187e8c5cab47de17ef8a80865f617d51..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3017.txt +++ /dev/null @@ -1,35 +0,0 @@ -Modus ponendo tollens (MPT; Latin: "mode that denies by affirming") is a valid rule of inference for propositional logic. It is closely related to modus ponens and modus tollendo ponens. - -MPT is usually described as having the form: - -#Not both A and B - -#A - -#Therefore, not B - -For example: - -# Ann and Bill cannot both win the race. - -# Ann won the race. - -# Therefore, Bill cannot have won the race. - -As E. J. Lemmon describes it: "Modus ponendo tollens is the principle that, if the negation of a conjunction holds and also one of its conjuncts, then the negation of its other conjunct holds."
- -In logic notation this can be represented as: - -# $ \neg (A \land B)$ - -# $ A$ - -# $ \therefore \neg B$ - -Based on the Sheffer stroke (alternative denial), "|", the inference can also be formalized in this way: - -# $ A|B$ - -# $ A$ - -# $ \therefore \neg B$ diff --git a/wiki/wikipedia/3018.txt b/wiki/wikipedia/3018.txt deleted file mode 100644 index 468cc1474d79eea988b047360e111f4238c8082c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3018.txt +++ /dev/null @@ -1,37 +0,0 @@ -In geometry, the Malfatti circles are three circles inside a given triangle such that each circle is tangent to the other two and to two sides of the triangle. They are named after Gian Francesco Malfatti, who made early studies of the problem of constructing these circles in the mistaken belief that they would have the largest possible total area of any three disjoint circles within the triangle. - -Malfatti's problem has been used to refer both to the problem of constructing the Malfatti circles and to the problem of finding three area-maximizing circles within a triangle. - -A simple construction of the Malfatti circles was given by Steiner, and many mathematicians have since studied the problem. Malfatti himself supplied a formula for the radii of the three circles, and they may also be used to define two triangle centers, the Ajima–Malfatti points of a triangle. - -The problem of maximizing the total area of three circles in a triangle is never solved by the Malfatti circles. Instead, the optimal solution can always be found by a greedy algorithm that finds the largest circle within the given triangle, the largest circle within the three connected subsets of the triangle outside of the first circle, and the largest circle within the five connected subsets of the triangle outside of the first two circles. Although this procedure was first formulated in 1930, its correctness was not proven until 1994. - -In 1803, Malfatti posed the problem of cutting three cylindrical columns out of a triangular prism of marble, maximizing the total volume of the columns. He assumed that the solution to this problem was given by three tangent circles within the triangular cross-section of the wedge. That is, more abstractly, he conjectured that the three Malfatti circles have the maximum total area of any three disjoint circles within a given triangle. - -Malfatti's work was popularized for a wider readership in French by Joseph Diaz Gergonne in the first volume of his Annales (1811), with further discussion in the second and tenth volumes. However, Gergonne only stated the circle-tangency problem, not the area-maximizing one. - -Malfatti's assumption that the two problems are equivalent is incorrect. Lob and Richmond, who went back to the original Italian text, observed that for some triangles a larger area can be achieved by a greedy algorithm that inscribes a single circle of maximal radius within the triangle, inscribes a second circle within one of the three remaining corners of the triangle, the one with the smallest angle, and inscribes a third circle within the largest of the five remaining pieces. The difference in area for an equilateral triangle is small, just over 1%, but, as was later pointed out, for an isosceles triangle with a very sharp apex, the optimal circles (stacked one atop each other above the base of the triangle) have nearly twice the area of the Malfatti circles.
- -Goldberg provided a convincing numerical demonstration that, for every triangle, the Lob–Richmond procedure produces three circles with larger area than the Malfatti circles, so the Malfatti circles are never optimal. Zalgaller and Los' followed up with a rigorous mathematical proof of this fact, and classified all of the different ways that a set of maximal circles can be packed within a triangle; using their classification, they proved that the greedy algorithm always finds three area-maximizing circles, and they provided a formula for determining which packing is optimal for a given triangle. Melissen conjectured more generally that, for any integer n, the greedy algorithm finds the area-maximizing set of n circles within a given triangle; the conjecture is known to be true for n ≤ 3. - -The problem of constructing three circles tangent to each other within a triangle was posed by the 18th-century Japanese mathematician Ajima Naonobu prior to the work of Malfatti, and included in an unpublished collection of Ajima's works made a year after Ajima's death by his student Kusaka Makoto. Even earlier, the same problem was considered in a 1384 manuscript by Gilio di Cecco da Montepulciano, now in the Municipal Library of Siena, Italy. A special case of the problem, for a specific isosceles triangle, was also studied separately. - -Since the work of Malfatti, there has been a significant amount of work on methods for constructing Malfatti's three tangent circles; Richard K. Guy writes that the literature on the problem is "extensive, widely scattered, and not always aware of itself". - -The radius of each of the three Malfatti circles may be determined as a formula involving the three side lengths a, b, and c of the triangle, the inradius r, the semiperimeter $s = (a + b + c)/2$, and the three distances d, e, and f from the incenter of the triangle to the vertices opposite sides a, b, and c respectively. The formulae for the three radii are: -$$ -\begin{align} r_1 &= \frac{r}{2(s-a)}(s-r+d-e-f),\\ r_2 &= \frac{r}{2(s-b)}(s-r-d+e-f),\\ r_3 &= \frac{r}{2(s-c)}(s-r-d-e+f). \end{align} -$$ - -Related formulae may be used to find examples of triangles whose side lengths, inradii, and Malfatti radii are all rational numbers or all integers. For instance, the triangle with side lengths 28392, 21000, and 25872 has inradius 6930 and Malfatti radii 3969, 4900, and 4356 (this example is checked numerically in the sketch at the end of this article). As another example, the triangle with side lengths 152460, 165000, and 190740 has inradius 47520 and Malfatti radii 27225, 30976, and 32400. - -Given a triangle ABC and its three Malfatti circles, let D, E, and F be the points where two of the circles touch each other, opposite vertices A, B, and C respectively. Then the three lines AD, BE, and CF meet in a single triangle center known as the first Ajima–Malfatti point after the contributions of Ajima and Malfatti to the circle problem. The second Ajima–Malfatti point is the meeting point of three lines connecting the tangencies of the Malfatti circles with the centers of the excircles of the triangle. Other triangle centers also associated with the Malfatti circles include the Yff–Malfatti point, formed in the same way as the first Malfatti point from three mutually tangent circles that are all tangent to the lines through the sides of the given triangle, but that lie partially outside the triangle, and the radical center of the three Malfatti circles (the point where the three bitangents used in their construction meet).
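The radius formula above is straightforward to evaluate numerically. Here is a Python sketch (ours; it additionally uses the standard facts that r = area/s and that the distance from the incenter to the vertex opposite side a equals √(r² + (s − a)²)) that reproduces the first integer example quoted above:

```
import math

def malfatti_radii(a, b, c):
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    r = area / s                                        # inradius
    # Distances from the incenter to the vertices opposite sides a, b, c.
    d = math.sqrt(r**2 + (s - a) ** 2)
    e = math.sqrt(r**2 + (s - b) ** 2)
    f = math.sqrt(r**2 + (s - c) ** 2)
    r1 = r / (2 * (s - a)) * (s - r + d - e - f)
    r2 = r / (2 * (s - b)) * (s - r - d + e - f)
    r3 = r / (2 * (s - c)) * (s - r - d - e + f)
    return r1, r2, r3

# The integer example from the text: inradius 6930, Malfatti radii 3969, 4900, 4356.
print([round(x) for x in malfatti_radii(28392, 21000, 25872)])  # [3969, 4900, 4356]
```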
diff --git a/wiki/wikipedia/3019.txt b/wiki/wikipedia/3019.txt deleted file mode 100644 index 9a763ca9605e83a49e4992d2355d1c9d4f4cd920..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3019.txt +++ /dev/null @@ -1,308 +0,0 @@ -In mathematics, the spectral radius of a square matrix or a bounded linear operator is the largest absolute value of its eigenvalues (i.e. the supremum among the absolute values of the elements in its spectrum). It is sometimes denoted by ρ(·). - -Let λ1, ..., λn be the (real or complex) eigenvalues of a matrix A ∈ Cn×n. Then its spectral radius ρ(A) is defined as: -$$ -\rho(A) = \max \left \{ |\lambda_1|, \dotsc, |\lambda_n| \right \}. -$$ - -The spectral radius is a sort of infimum of all norms of a matrix. Indeed, on the one hand, $ \rho(A) \leqslant \|A\| $ for every natural matrix norm $\|\cdot\|$; and on the other hand, Gelfand's formula states that $ \rho(A) = \lim_{k\to\infty} \|A^k\|^{1/k} $. Both these results are shown below. - -However, the spectral radius does not necessarily satisfy $ \|A\mathbf{v}\| \leqslant \rho(A) \|\mathbf{v}\| $ for arbitrary vectors $ \mathbf{v} \in \mathbb{C}^n $. To see why, let $ r>1 $ be arbitrary and consider the matrix -$$ - C_r = \begin{pmatrix} 0 & r^{-1} \\ r & 0 \end{pmatrix} -$$. - -The characteristic polynomial of $ C_r $ is $ \lambda^2 - 1 $, so its eigenvalues are $\{-1, 1\} $ and thus $ \rho(C_r) = 1 $. However, $ C_r \mathbf{e}_1 = r \mathbf{e}_2 $. As a result, for any $ \ell^p $ norm, -$$ - \| C_r \mathbf{e}_1 \| = r > 1 = \rho(C_r) \|\mathbf{e}_1\|. -$$ - -As an illustration of Gelfand's formula, note that $ \|C_r^k\|^{1/k} \to 1 $ as $ k \to \infty $, since $C_r^k = I$ if $k$ is even and $C_r^k = C_r$ if $k$ is odd. - -A special case in which $ \|A\mathbf{v}\| \leqslant \rho(A) \|\mathbf{v}\| $ for all $ \mathbf{v} \in \mathbb{C}^n $ is when $A$ is a Hermitian matrix and $ \|\cdot\| $ is the Euclidean norm. This is because any Hermitian matrix is diagonalizable by a unitary matrix, and unitary matrices preserve vector length: -$$ - \|A\mathbf{v}\| = \|U^*DU\mathbf{v}\| = \|DU\mathbf{v}\| \leqslant \rho(A) \|U\mathbf{v}\| = \rho(A) \|\mathbf{v}\| . -$$ - -For a bounded linear operator A on a Banach space, the eigenvalues are replaced with the spectrum of the operator, those values $\lambda$ for which $A - \lambda I$ fails to be bijective; we denote the spectrum by -$$ -\sigma(A) = \left\{ \lambda \in \C: A - \lambda I \text{ is not bijective} \right\} -$$ - -The spectral radius is then defined as the supremum of the magnitudes of elements in the spectrum: -$$ -\rho(A) = \sup_{\lambda \in \sigma(A)} |\lambda| -$$ - -If we denote the operator norm by $\|\cdot\|$, then we have the spectral radius formula or Gelfand's formula: -$$ -\rho(A) = \lim_{k \to \infty}\|A^k\|^{\frac{1}{k}}. -$$ - -A bounded operator (on a complex Hilbert space) is called a spectraloid operator if its spectral radius coincides with its numerical radius. An example of such an operator is a normal operator. - -The spectral radius of a finite graph is defined to be the spectral radius of its adjacency matrix. - -This definition extends to the case of infinite graphs with bounded degrees of vertices (i.e. there exists some real number C such that the degree of every vertex of the graph is smaller than C). In this case, for the graph G define: -$$ - \ell^2(G) = \left \{ f : V(G) \to \mathbf{R} \ : \ \sum\nolimits_{v \in V(G)} \left | f(v) \right |^2 < \infty \right \}.
-$$ - -Let γ be the adjacency operator of G: -$$ - \begin{cases} \gamma : \ell^2(G) \to \ell^2(G) \\ (\gamma f)(v) = \sum_{(u,v) \in E(G)} f(u) \end{cases} -$$ - -The spectral radius of G is defined to be the spectral radius of the bounded linear operator γ. - -The following proposition shows a simple yet useful upper bound for the spectral radius of a matrix: - -Proposition. Let A ∈ Cn×n with spectral radius ρ(A) and a consistent matrix norm $\|\cdot\|$. Then for each integer $k \geqslant 1$: -$$ -\rho(A)\leq \|A^k\|^{\frac{1}{k}}. -$$ - -Proof - -Let (v, λ) be an eigenvector-eigenvalue pair for a matrix A. By the sub-multiplicative property of the matrix norm, we get: -$$ -|\lambda|^k\|\mathbf{v}\| = \|\lambda^k \mathbf{v}\| = \|A^k \mathbf{v}\| \leq \|A^k\|\cdot\|\mathbf{v}\| -$$ - -and since v ≠ 0 we have -$$ -|\lambda|^k \leq \|A^k\| -$$ - -and therefore -$$ -\rho(A)\leq \|A^k\|^{\frac{1}{k}}. -$$ - -There are many upper bounds for the spectral radius of a graph in terms of its number n of vertices and its number m of edges. For instance, if -$$ -\frac{(k-2)(k-3)}{2} \leq m-n \leq \frac{k(k-3)}{2} -$$ - -where $3 \le k \le n$ is an integer, then -$$ -\rho(G) \leq \sqrt{2 m-n-k+\frac{5}{2}+\sqrt{2 m-2 n+\frac{9}{4}}} -$$ - -The spectral radius is closely related to the behaviour of the convergence of the power sequence of a matrix; namely, the following theorem holds: - -Theorem. Let A ∈ Cn×n with spectral radius ρ(A). Then ρ(A) < 1 if and only if -$$ -\lim_{k \to \infty} A^k = 0. -$$ - -On the other hand, if ρ(A) > 1, $\lim_{k \to \infty} \|A^k\| = \infty$. The statement holds for any choice of matrix norm on Cn×n. - -Assume that the limit in question is zero; we will show that ρ(A) < 1. Let (v, λ) be an eigenvector-eigenvalue pair for A. Since Akv = λkv we have: - -\begin{align} - -0 &= \left(\lim_{k \to \infty} A^k \right) \mathbf{v} \\ - -&= \lim_{k \to \infty} \left(A^k\mathbf{v} \right ) \\ - -&= \lim_{k \to \infty} \lambda^k\mathbf{v} \\ - -&= \mathbf{v} \lim_{k \to \infty} \lambda^k - -\end{align} - -and, since by hypothesis v ≠ 0, we must have -$$ -\lim_{k \to \infty}\lambda^k = 0 -$$ - -which implies |λ| < 1. Since this must be true for any eigenvalue λ, we can conclude that ρ(A) < 1. - -Now assume the spectral radius of A is less than 1. From the Jordan normal form theorem, we know that for all A ∈ Cn×n, there exist V, J ∈ Cn×n with V non-singular and J block diagonal such that: -$$ -A = VJV^{-1} -$$ - -with - -J=\begin{bmatrix} - -J_{m_1}(\lambda_1) & 0 & 0 & \cdots & 0 \\ - -0 & J_{m_2}(\lambda_2) & 0 & \cdots & 0 \\ - -\vdots & \cdots & \ddots & \cdots & \vdots \\ - -0 & \cdots & 0 & J_{m_{s-1}}(\lambda_{s-1}) & 0 \\ - -0 & \cdots & \cdots & 0 & J_{m_s}(\lambda_s) - -\end{bmatrix} - -where - -J_{m_i}(\lambda_i)=\begin{bmatrix} - -\lambda_i & 1 & 0 & \cdots & 0 \\ - -0 & \lambda_i & 1 & \cdots & 0 \\ - -\vdots & \vdots & \ddots & \ddots & \vdots \\ - -0 & 0 & \cdots & \lambda_i & 1 \\ - -0 & 0 & \cdots & 0 & \lambda_i - -\end{bmatrix}\in \mathbf{C}^{m_i \times m_i}, 1\leq i\leq s.
- -It is easy to see that -$$ -A^k=VJ^kV^{-1} -$$ - -and, since J is block-diagonal, - -J^k=\begin{bmatrix} - -J_{m_1}^k(\lambda_1) & 0 & 0 & \cdots & 0 \\ - -0 & J_{m_2}^k(\lambda_2) & 0 & \cdots & 0 \\ - -\vdots & \cdots & \ddots & \cdots & \vdots \\ - -0 & \cdots & 0 & J_{m_{s-1}}^k(\lambda_{s-1}) & 0 \\ - -0 & \cdots & \cdots & 0 & J_{m_s}^k(\lambda_s) - -\end{bmatrix} - -Now, a standard result on the k-power of an $m_i \times m_i$ Jordan block states that, for $k \geq m_i-1$: - -J_{m_i}^k(\lambda_i)=\begin{bmatrix} - -\lambda_i^k & {k \choose 1}\lambda_i^{k-1} & {k \choose 2}\lambda_i^{k-2} & \cdots & {k \choose m_i-1}\lambda_i^{k-m_i+1} \\ - -0 & \lambda_i^k & {k \choose 1}\lambda_i^{k-1} & \cdots & {k \choose m_i-2}\lambda_i^{k-m_i+2} \\ - -\vdots & \vdots & \ddots & \ddots & \vdots \\ - -0 & 0 & \cdots & \lambda_i^k & {k \choose 1}\lambda_i^{k-1} \\ - -0 & 0 & \cdots & 0 & \lambda_i^k - -\end{bmatrix} - -Thus, if $\rho(A) < 1$ then $|\lambda_i| < 1$ for all i. Hence for all i we have: -$$ -\lim_{k \to \infty}J_{m_i}^k=0 -$$ - -which implies -$$ -\lim_{k \to \infty} J^k = 0. -$$ - -Therefore, -$$ -\lim_{k \to \infty}A^k=\lim_{k \to \infty}VJ^kV^{-1}=V \left (\lim_{k \to \infty}J^k \right )V^{-1}=0 -$$ - -On the other hand, if $\rho(A)>1$, there is at least one element in J that does not remain bounded as k increases, which proves the second part of the statement. - -The next theorem gives the spectral radius as a limit of matrix norms. - -Theorem (Gelfand's Formula; 1941). For any matrix norm we have -$$ -\rho(A)=\lim_{k \to \infty} \left \|A^k \right \|^{\frac{1}{k}}. -$$ - -For any ε > 0, first we construct the following two matrices: -$$ -A_{\pm}= \frac{1}{\rho(A) \pm\varepsilon}A. -$$ - -Then: -$$ -\rho \left (A_{\pm} \right ) = \frac{\rho(A)}{\rho(A) \pm \varepsilon}, \qquad \rho (A_+) < 1 < \rho (A_-). -$$ - -First we apply the previous theorem to A+: -$$ -\lim_{k \to \infty} A_+^k=0. -$$ - -That means, by the sequence limit definition, there exists N+ ∈ N such that for all k ≥ N+, - -\begin{align} - -\left \|A_+^k \right \| < 1 \end{align} - -so - -\begin{align} - -\left \|A^k \right \|^{\frac{1}{k}} < \rho(A)+\varepsilon. - -\end{align} - -Applying the previous theorem to A− implies that $\|A_-^k\|$ is not bounded, so there exists N− ∈ N such that for all k ≥ N−, - -\begin{align} - -\left\|A_-^k \right \| > 1 - -\end{align} - -so - -\begin{align} - -\left\|A^k \right\|^{\frac{1}{k}} > \rho(A)-\varepsilon. - -\end{align} - -Let N = max{N+, N−}; then we have: -$$ -\forall \varepsilon>0\quad \exists N\in\mathbf{N} \quad \forall k\geq N \quad \rho(A)-\varepsilon < \left \|A^k \right \|^{\frac{1}{k}} < \rho(A)+\varepsilon -$$ - -which, by definition, is -$$ -\lim_{k \to \infty} \left \|A^k \right \|^{\frac{1}{k}} = \rho(A). -$$ - -Gelfand's formula leads directly to a bound on the spectral radius of a product of finitely many matrices: assuming that they all commute, we obtain -$$ -\rho(A_1 \cdots A_n) \leq \rho(A_1) \cdots \rho(A_n). -$$ - -Actually, in case the norm is consistent, the proof shows more than the thesis; in fact, using the previous proposition, we can replace the lower bound in the limit definition by the spectral radius itself and write more precisely: -$$ -\forall \varepsilon>0, \exists N\in\mathbf{N}, \forall k\geq N \quad \rho(A) \leq \|A^k\|^{\frac{1}{k}} < \rho(A)+\varepsilon -$$ - -which, by definition, is -$$ -\lim_{k \to \infty} \left \|A^k \right \|^{\frac{1}{k}} = \rho(A)^+, -$$ - -where the + means that the limit is approached from above.
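-
-As a numerical illustration of Gelfand's formula (a sketch using NumPy, added here for concreteness and not part of the original article), one can watch $\|C_r^k\|^{1/k}$ approach $\rho(C_r) = 1$ for the matrix $C_r$ introduced earlier, even though $\|C_r\| = r$ is large:
-
-    import numpy as np
-
-    r = 100.0
-    C = np.array([[0.0, 1.0 / r], [r, 0.0]])   # eigenvalues are +1 and -1, so rho(C) = 1
-
-    for k in [1, 3, 5, 11, 51, 101]:
-        nrm = np.linalg.norm(np.linalg.matrix_power(C, k), 2)   # operator 2-norm
-        print(k, nrm ** (1.0 / k))   # equals r**(1/k) for odd k, tending to 1
-
-Even powers of $C_r$ give exactly 1 here, since $C_r^k = I$; the odd powers decay to 1 like $r^{1/k}$, consistent with the limit being approached from above.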
- -Consider the matrix - -A=\begin{bmatrix} - -9 & -1 & 2\\ - --2 & 8 & 4\\ - -1 & 1 & 8 - -\end{bmatrix} - -whose eigenvalues are 5, 10, 10; by definition, ρ(A) = 10. For the four most commonly used norms, the values of $\|A^k\|^{\frac{1}{k}}$ approach $\rho(A) = 10$ from above as k increases (note that, due to the particular form of this matrix, $\|\cdot\|_1=\|\cdot\|_\infty$). diff --git a/wiki/wikipedia/302.txt b/wiki/wikipedia/302.txt deleted file mode 100644 index fd66d798c064ec4e7f51c9bb45fe357fdb76174a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/302.txt +++ /dev/null @@ -1,61 +0,0 @@ -A Byzantine fault (also Byzantine generals problem, interactive consistency, source congruency, error avalanche, Byzantine agreement problem, and Byzantine failure) is a condition of a computer system, particularly distributed computing systems, where components may fail and there is imperfect information on whether a component has failed. The term takes its name from an allegory, the "Byzantine generals problem", developed to describe a situation in which, in order to avoid catastrophic failure of the system, the system's actors must agree on a concerted strategy, but some of these actors are unreliable. - -In a Byzantine fault, a component such as a server can inconsistently appear both failed and functioning to failure-detection systems, presenting different symptoms to different observers. It is difficult for the other components to declare it failed and shut it out of the network, because they need to first reach a consensus regarding which component has failed in the first place. - -Byzantine fault tolerance (BFT) is the dependability of a fault-tolerant computer system to such conditions. It has applications especially in cryptocurrency. - -In its simplest form, a number of generals are attacking a fortress and must decide as a group only whether to attack or retreat. Some generals may prefer to attack, while others prefer to retreat. The important thing is that all generals agree on a common decision, for a halfhearted attack by a few generals would become a rout, and would be worse than either a coordinated attack or a coordinated retreat. - -The problem is complicated by the presence of treacherous generals who may not only cast a vote for a suboptimal strategy, they may do so selectively. For instance, if nine generals are voting, four of whom support attacking while four others are in favor of retreat, the ninth general may send a vote of retreat to those generals in favor of retreat, and a vote of attack to the rest. Those who received a retreat vote from the ninth general will retreat, while the rest will attack (which may not go well for the attackers). The problem is complicated further by the generals being physically separated and having to send their votes via messengers who may fail to deliver votes or may forge false votes. - -Byzantine fault tolerance can be achieved if the loyal (non-faulty) generals have a majority agreement on their strategy. There can be a default vote value given to missing messages. For example, missing messages can be given a "null" value. Further, if the agreement is that the null votes are in the majority, a pre-assigned default strategy can be used (e.g. retreat). - -The typical mapping of this story onto computer systems is that the computers are the generals and their digital communication system links are the messengers.
Although the problem is formulated in the analogy as a decision-making and security problem, in electronics, it cannot be solved simply by cryptographic digital signatures, because failures such as incorrect voltages can propagate through the encryption process. Thus, a component may appear functioning to one component and faulty to another, which prevents forming a consensus as to whether the component is faulty or not. - -A Byzantine fault is any fault presenting different symptoms to different observers. A Byzantine failure is the loss of a system service due to a Byzantine fault in systems that require consensus. - -The objective of Byzantine fault tolerance is to be able to defend against failures of system components with or without symptoms that prevent other components of the system from reaching an agreement among themselves, where such an agreement is needed for the correct operation of the system. - -The remaining operationally correct components of a Byzantine fault tolerant system will be able to continue providing the system's service as originally intended, assuming there are a sufficient number of accurately-operating components to maintain the service. - -Byzantine failures are considered the most general and most difficult class of failures among the failure modes. The so-called fail-stop failure mode occupies the simplest end of the spectrum. Whereas fail-stop failure mode simply means that the only way to fail is a node crash, detected by other nodes, Byzantine failures imply no restrictions, which means that the failed node can generate arbitrary data, including data that makes it appear like a functioning node. Thus, Byzantine failures can confuse failure detection systems, which makes fault tolerance difficult. Despite the analogy, a Byzantine failure is not necessarily a security problem involving hostile human interference: it can arise purely from electrical or software faults. - -The terms fault and failure are used here according to the standard definitions originally created by a joint committee on "Fundamental Concepts and Terminology" formed by the IEEE Computer Society's Technical Committee on Dependable Computing and Fault-Tolerance and IFIP Working Group 10.4 on Dependable Computing and Fault Tolerance. See also dependability. - -Byzantine fault tolerance is only concerned with broadcast consistency, that is, the property that when one component broadcasts a single consistent value to other components (i.e., sends the same value to the other components), they all receive exactly the same value, or in the case that the broadcaster is not consistent, the other components agree on a common value. This kind of fault tolerance does not encompass the correctness of the value itself; for example, an adversarial component that deliberately sends an incorrect value, but sends that same value consistently to all components, will not be caught in the Byzantine fault tolerance scheme. - -Setting: - -Given a system of n components, t of which are dishonest, and assuming only point-to-point channels between all the components. - -Whenever a component A tries to broadcast a value x, the other components are allowed to discuss with each other and verify the consistency of A's broadcast, and eventually settle on a common value y. - -Property: The system is said to resist Byzantine faults if a component A can broadcast a value x, and then: - -# If A is honest, then all honest components agree on the value x. - -# In any case, all honest components agree on the same value y.
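-
-To make the voting story concrete, the following toy Python simulation (a sketch under simplifying assumptions, not any particular BFT protocol) replays the nine-generals scenario above: eight honest generals split four against four, and the ninth, treacherous, general echoes back whatever each recipient already prefers, so a single round of majority voting leaves the honest generals in disagreement:
-
-    from collections import Counter
-
-    HONEST = ["attack"] * 4 + ["retreat"] * 4     # votes of the 8 honest generals
-
-    def traitor_vote(recipient_vote):
-        # The traitor sends different votes to different recipients.
-        return recipient_vote
-
-    def decide(i):
-        votes = list(HONEST)                      # honest votes are broadcast consistently
-        votes.append(traitor_vote(HONEST[i]))     # plus the traitor's tailored vote
-        return Counter(votes).most_common(1)[0][0]
-
-    print({i: decide(i) for i in range(len(HONEST))})
-    # The attack camp decides "attack" and the retreat camp decides "retreat":
-    # no agreement, illustrating why further message exchange between components is needed.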
- -Variants: The problem has been studied in the case of both synchronous and asynchronous communications. - -The communication graph above is assumed to be the complete graph (i.e. each component can discuss with every other), but the communication graph can be restricted. - -It can also be relaxed in a more "realistic" problem where the faulty components do not collude together in an attempt to lure the others into error. It is in this setting that practical algorithms have been devised. - -The problem of obtaining Byzantine consensus was conceived and formalized by Robert Shostak, who dubbed it the interactive consistency problem. This work was done in 1978 in the context of the NASA-sponsored SIFT project. - -Several solutions were described by Lamport, Shostak, and Pease in 1982. While error detecting codes, such as CRCs, are better than cryptographic techniques, neither provides adequate coverage for active electronics in safety-critical systems. This is illustrated by the Schrödinger CRC scenario where a CRC-protected message with a single Byzantine faulty bit presents different data to different observers and each observer sees a valid CRC. Early systems implementing this style of fault tolerance include Honeywell's MMFCS and SRI's SIFT. - -In 1999, Miguel Castro and Barbara Liskov introduced the "Practical Byzantine Fault Tolerance" (PBFT) algorithm, which provides high-performance Byzantine state machine replication, processing thousands of requests per second with sub-millisecond increases in latency. - -After PBFT, several BFT protocols were introduced to improve its robustness and performance. For instance, Q/U, HQ, Zyzzyva, and ABsTRACTs addressed the performance and cost issues, whereas other protocols, like Aardvark and RBFT, addressed its robustness issues. Furthermore, Adapt tried to make use of existing BFT protocols, through switching between them in an adaptive way, to improve system robustness and performance as the underlying conditions change. In addition, BFT protocols were introduced that leverage trusted components to reduce the number of replicas, e.g., A2M-PBFT-EA and MinBFT. - -Motivated by PBFT, Tendermint BFT was introduced for partially asynchronous networks and it is mainly used for Proof of Stake blockchains. - -One example of BFT in use is bitcoin, a peer-to-peer digital cash system. The bitcoin network works in parallel to generate a blockchain with proof-of-work allowing the system to overcome Byzantine failures and reach a coherent global view of the system's state. - -Some aircraft systems, such as the Boeing 777 Aircraft Information Management System (via its ARINC 659 SAFEbus network), - -the Boeing 777 flight control system, and the Boeing 787 flight control systems use Byzantine fault tolerance; because these are real-time systems, their Byzantine fault tolerance solutions must have very low latency. For example, SAFEbus can achieve Byzantine fault tolerance within the order of a microsecond of added latency. The SpaceX Dragon considers Byzantine fault tolerance in its design. - -Byzantine fault tolerance mechanisms use components that repeat an incoming message (or just its signature) to other recipients of that incoming message. All these mechanisms make the assumption that the act of repeating a message blocks the propagation of Byzantine symptoms. For systems that have a high degree of safety or security criticality, these assumptions must be proven to be true to an acceptable level of fault coverage.
When providing proof through testing, one difficulty is creating a sufficiently wide range of signals with Byzantine symptoms. Such testing will likely require specialized fault injectors. diff --git a/wiki/wikipedia/3020.txt b/wiki/wikipedia/3020.txt deleted file mode 100644 index a88b255bb59cfeb9b35c999ee7f7c9fe66425c37..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3020.txt +++ /dev/null @@ -1,118 +0,0 @@ -Negation as failure (NAF, for short) is a non-monotonic inference rule in logic programming, used to derive $\mathrm{not}~p$ (i.e. that $~p$ is assumed not to hold) from failure to derive $~p$. Note that $\mathrm{not} ~p$ can be different from the statement $\neg p$ of the logical negation of $~p$, depending on the completeness of the inference algorithm and thus also on the formal logic system. - -Negation as failure has been an important feature of logic programming since the earliest days of both Planner and Prolog. In Prolog, it is usually implemented using Prolog's extralogical constructs. - -More generally, this kind of negation is known as weak negation, in contrast with the strong (i.e. explicit, provable) negation. - -In Planner, negation as failure could be implemented as follows: - -if (not (goal p)), then (assert ¬p) - -which says that if an exhaustive search to prove p fails, then assert ¬p. This states that proposition p shall be assumed as "not true" in any subsequent processing. However, since Planner was not based on a logical model, a logical interpretation of this construct remains obscure. - -In pure Prolog, NAF literals of the form $\mathrm{not}~p$ can occur in the body of clauses and can be used to derive other NAF literals. For example, given only the four clauses -$$ -p \leftarrow q \land \mathrm{not}~r -$$ -$$ -q \leftarrow s -$$ -$$ -q \leftarrow t -$$ -$$ -t \leftarrow -$$ - -NAF derives $\mathrm{not}~s$, $\mathrm{not}~r$ and $~p$ as well as $~t$ and $~q$. - -The semantics of NAF remained an open issue until 1978, when Keith Clark showed that it is correct with respect to the completion of the logic program, where, loosely speaking, the implication "$\leftarrow$" ("if") is read as "if and only if", written as "iff" or "$\equiv$". - -For example, the completion of the four clauses above is -$$ -p \equiv q \land \mathrm{not}~r -$$ -$$ -q \equiv s \lor t -$$ -$$ -t \equiv \mathrm{true} -$$ -$$ -r \equiv \mathrm{false} -$$ -$$ -s \equiv \mathrm{false} -$$ - -The NAF inference rule simulates reasoning explicitly with the completion, where both sides of the equivalence are negated and negation on the right-hand side is distributed down to atomic formulae. For example, to show $\mathrm{not}~p$, NAF simulates reasoning with the equivalences -$$ -\mathrm{not}~p \equiv \mathrm{not}~q \lor r -$$ -$$ -\mathrm{not}~q \equiv \mathrm{not}~s \land \mathrm{not}~t -$$ -$$ -\mathrm{not}~t \equiv \mathrm{false} -$$ -$$ -\mathrm{not}~r \equiv \mathrm{true} -$$ -$$ -\mathrm{not}~s \equiv \mathrm{true} -$$ - -In the non-propositional case, the completion needs to be augmented with equality axioms, to formalize the assumption that individuals with distinct names are distinct. NAF simulates this by failure of unification. For example, given only the two clauses -$$ -p(a) \leftarrow -$$ -$$ -p(b) \leftarrow -$$ - -NAF derives $\mathrm{not}~p(c)$. - -The completion of the program is -$$ -p(X) \equiv X=a \lor X=b -$$ - -augmented with unique names axioms and domain closure axioms.
- -The completion semantics is closely related both to circumscription and to the closed world assumption. - -The completion semantics justifies interpreting the result $\mathrm{not}~p$ of a NAF inference as the classical negation $\neg p$ of $p$. However, in 1987, Michael Gelfond showed that it is also possible to interpret $\mathrm{not}~p$ literally as "$p$ can not be shown", "$p$ is not known" or "$p$ is not believed", as in autoepistemic logic. The autoepistemic interpretation was developed further by Gelfond and Lifschitz in 1988, and is the basis of answer set programming. - -The autoepistemic semantics of a pure Prolog program P with NAF literals is obtained by "expanding" P with a set of ground (variable-free) NAF literals Δ that is stable in the sense that - -Δ = {$\mathrm{not}~p$ | $p$ is not implied by P ∪ Δ} - -In other words, a set of assumptions Δ about what can not be shown is stable if and only if Δ is the set of all sentences that truly can not be shown from the program P expanded by Δ. Here, because of the simple syntax of pure Prolog programs, "implied by" can be understood very simply as derivability using modus ponens and universal instantiation alone. - -A program can have zero, one or more stable expansions. For example, -$$ -p \leftarrow \mathrm{not}~p -$$ - -has no stable expansions. -$$ -p \leftarrow \mathrm{not}~q -$$ - -has exactly one stable expansion Δ = {$\mathrm{not}~q$}. -$$ -p \leftarrow \mathrm{not}~q -$$ -$$ -q \leftarrow \mathrm{not}~p -$$ - -has exactly two stable expansions Δ1 = {$\mathrm{not}~p$} and Δ2 = {$\mathrm{not}~q$}. - -The autoepistemic interpretation of NAF can be combined with classical negation, as in extended logic programming and answer set programming. Combining the two negations, it is possible to express, for example -$$ -\neg p \leftarrow \mathrm{not}~p -$$ (the closed world assumption) and -$$ -p \leftarrow \mathrm{not}~\neg p -$$ ($p$ holds by default). diff --git a/wiki/wikipedia/3021.txt b/wiki/wikipedia/3021.txt deleted file mode 100644 index 71907fd342debdcaa4fe64af17398e7cf5a506a9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3021.txt +++ /dev/null @@ -1,13 +0,0 @@ -In mathematics, the Lions–Magenes lemma (or theorem) is a result in the theory of Sobolev spaces of Banach space-valued functions, which provides a criterion for moving a time derivative of a function out of its action (as a functional) on the function itself. - -Let X0, X and X1 be three Hilbert spaces with X0 ⊆ X ⊆ X1. Suppose that X0 is continuously embedded in X and that X is continuously embedded in X1, and that X1 is the dual space of X0. Denote the norm on X by || · ||X, and denote the action of X1 on X0 by $\langle\cdot,\cdot\rangle$. Suppose for some $T>0$ that $u \in L^2 ([0, T]; X_0)$ is such that its time derivative $\dot{u} \in L^2 ([0, T]; X_1)$. Then $u$ is almost everywhere equal to a function continuous from $[0,T]$ into $X$, and moreover the following equality holds in the sense of scalar distributions on $(0,T)$: -$$ -\frac{1}{2}\frac{d}{dt} \|u\|_X^2 = \langle\dot{u},u\rangle -$$ - -The above equality is meaningful, since the functions -$$ -t\rightarrow \|u\|_X^2, \quad t\rightarrow \langle \dot{u}(t),u(t)\rangle -$$ - -are both integrable on $[0,T]$.
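-
-In the finite-dimensional case $X_0 = X = X_1 = \mathbb{R}^n$ (with $\langle\cdot,\cdot\rangle$ the usual inner product) the equality reduces to the chain rule, and it can be checked by finite differences. The following Python sketch, with a hypothetical test curve chosen purely for illustration, does exactly that:
-
-    import numpy as np
-
-    def u(t):      # a smooth curve in R^2 (hypothetical test data)
-        return np.array([np.sin(t), np.cos(2.0 * t)])
-
-    def udot(t):   # its time derivative
-        return np.array([np.cos(t), -2.0 * np.sin(2.0 * t)])
-
-    t, h = 0.7, 1e-6
-    lhs = (u(t + h) @ u(t + h) - u(t - h) @ u(t - h)) / (4.0 * h)   # (1/2) d/dt ||u||^2
-    rhs = udot(t) @ u(t)                                            # <u', u>
-    print(lhs, rhs)   # agree up to finite-difference error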
diff --git a/wiki/wikipedia/3022.txt b/wiki/wikipedia/3022.txt deleted file mode 100644 index 2a7f249c869439c8048383ac61d53eff659411a9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3022.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematics, especially the theory of several complex variables, the Oka–Weil theorem is a result about the uniform convergence of holomorphic functions on Stein spaces due to Kiyoshi Oka and André Weil. - -The Oka–Weil theorem states that if X is a Stein space and K is a compact $\mathcal{O}(X)$-convex subset of X, then every holomorphic function in an open neighborhood of K can be approximated uniformly on K by functions in $\mathcal{O}(X)$, i.e. by functions holomorphic on all of X. - -Since Runge's theorem may not hold for several complex variables, the Oka–Weil theorem is often used as an approximation theorem for several complex variables. The Behnke–Stein theorem was originally proved using the Oka–Weil theorem. diff --git a/wiki/wikipedia/3023.txt b/wiki/wikipedia/3023.txt deleted file mode 100644 index 5138fc9d1a081c7f7ef5a4dfc9f475c2c30c1df4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3023.txt +++ /dev/null @@ -1,17 +0,0 @@ -In mathematics, it is a theorem that there is no analogue of Lebesgue measure on an infinite-dimensional Banach space. Other kinds of measures are therefore used on infinite-dimensional spaces: often, the abstract Wiener space construction is used. Alternatively, one may consider Lebesgue measure on finite-dimensional subspaces of the larger space and consider so-called prevalent and shy sets. - -Compact sets in Banach spaces may also carry natural measures: the Hilbert cube, for instance, carries the product Lebesgue measure. In a similar spirit, the compact topological group given by the Tychonoff product of infinitely many copies of the circle group is infinite-dimensional, and carries a Haar measure that is translation-invariant. - -It can be shown that Lebesgue measure λn on Euclidean space Rn is locally finite, strictly positive and translation-invariant, explicitly: - -* every point x in Rn has an open neighbourhood Nx with finite measure λn(Nx) < +∞; - -* every non-empty open subset U of Rn has positive measure λn(U) > 0; and - -* if A is any Lebesgue-measurable subset of Rn, Th : Rn → Rn, Th(x) = x + h, denotes the translation map, and (Th)∗(λn) denotes the push forward of λn along Th, then ((Th)∗(λn))(A) = λn(A). - -Geometrically speaking, these three properties make Lebesgue measure very nice to work with. When we consider an infinite-dimensional space such as an Lp space or the space of continuous paths in Euclidean space, it would be nice to have a similarly nice measure to work with. Unfortunately, this is not possible. - -Let (X, ||·||) be an infinite-dimensional, separable Banach space. Then the only locally finite and translation-invariant Borel measure μ on X is the trivial measure, with μ(A) = 0 for every measurable set A. Equivalently, every translation-invariant measure that is not identically zero assigns infinite measure to all open subsets of X. - -Let X be an infinite-dimensional, separable Banach space equipped with a locally finite, translation-invariant measure μ. Using local finiteness, suppose that, for some δ > 0, the open ball B(δ) of radius δ has finite μ-measure. Since X is infinite-dimensional, there is an infinite sequence of pairwise disjoint open balls Bn(δ/4), n ∈ N, of radius δ/4, with all the smaller balls Bn(δ/4) contained within the larger ball B(δ).
By translation-invariance, all of the smaller balls have the same measure; since the sum of these measures is finite, the smaller balls must all have μ-measure zero. Now, since X is separable, it can be covered by a countable collection of balls of radius δ/4; since each such ball has μ-measure zero, so must the whole space X, and so μ is the trivial measure. diff --git a/wiki/wikipedia/3024.txt b/wiki/wikipedia/3024.txt deleted file mode 100644 index 2dcd987df5b4f5ba0eb07e026d24f5a48287b402..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3024.txt +++ /dev/null @@ -1,61 +0,0 @@ -Substitution is a fundamental concept in logic. - -A substitution is a syntactic transformation on formal expressions. - -To apply a substitution to an expression means to consistently replace its variable, or placeholder, symbols by other expressions. - -The resulting expression is called a substitution instance, or instance for short, of the original expression. - -Where ψ and φ represent formulas of propositional logic, ψ is a substitution instance of φ if and only if ψ may be obtained from φ by substituting formulas for symbols in φ, replacing each occurrence of the same symbol by an occurrence of the same formula. For example: - -(R → S) & (T → S) - -is a substitution instance of: - -P & Q - -and - -(A ↔ A) ↔ (A ↔ A) - -is a substitution instance of: - -(A ↔ A) - -In some deduction systems for propositional logic, a new expression (a proposition) may be entered on a line of a derivation if it is a substitution instance of a previous line of the derivation (Hunter 1971, p. 118). This is how new lines are introduced in some axiomatic systems. In systems that use rules of transformation, a rule may include the use of a substitution instance for the purpose of introducing certain variables into a derivation. - -In first-order logic, every closed propositional formula that can be derived from an open propositional formula a by substitution is said to be a substitution instance of a. If a is a closed propositional formula we count a itself as its only substitution instance. - -A propositional formula is a tautology if it is true under every valuation (or interpretation) of its predicate symbols. If Φ is a tautology, and Θ is a substitution instance of Φ, then Θ is again a tautology. This fact implies the soundness of the deduction rule described in the previous section. - -In first-order logic, a substitution is a total mapping σ: V → T from variables to terms; many authors additionally require σ(x) = x for all but finitely many variables x. The notation { x1 ↦ t1, …, xk ↦ tk } - -refers to a substitution mapping each variable xi to the corresponding term ti, for i=1,…,k, and every other variable to itself; the xi must be pairwise distinct. Applying that substitution to a term t is written in postfix notation as t { x1 ↦ t1, ..., xk ↦ tk }; it means to (simultaneously) replace every occurrence of each xi in t by ti. The result tσ of applying a substitution σ to a term t is called an instance of that term t. - -For example, applying the substitution { x ↦ z, z ↦ h(a,y) } to the term f(z, a, g(x), y) yields the instance f(h(a,y), a, g(z), y); note that the replacements are performed simultaneously, so the z substituted for x is not itself replaced by h(a,y). - -The domain dom(σ) of a substitution σ is commonly defined as the set of variables actually replaced, i.e. dom(σ) = { x ∈ V | xσ ≠ x }. - -A substitution is called a ground substitution if it maps all variables of its domain to ground, i.e. variable-free, terms. - -The substitution instance tσ of a ground substitution is a ground term if all of t's variables are in σ's domain, i.e. if vars(t) ⊆ dom(σ).
- -A substitution σ is called a linear substitution if tσ is a linear term for some (and hence every) linear term t containing precisely the variables of σ's domain, i.e. with vars(t) = dom(σ). - -A substitution σ is called a flat substitution if xσ is a variable for every variable x. - -A substitution σ is called a renaming substitution if it is a permutation on the set of all variables. Like every permutation, a renaming substitution σ always has an inverse substitution σ−1, such that tσσ−1 = t = tσ−1σ for every term t. However, it is not possible to define an inverse for an arbitrary substitution. - -For example, { x ↦ 2, y ↦ 3+4 } is a ground substitution, { x ↦ x1, y ↦ y2+4 } is non-ground and non-flat, but linear, - -{ x ↦ y2, y ↦ y2+4 } is non-linear and non-flat, { x ↦ y2, y ↦ y2 } is flat, but non-linear, { x ↦ x1, y ↦ y2 } is both linear and flat, but not a renaming, since it maps both y and y2 to y2; each of these substitutions has the set {x,y} as its domain. An example for a renaming substitution is { x ↦ x1, x1 ↦ y, y ↦ y2, y2 ↦ x }; it has the inverse { x ↦ y2, y2 ↦ y, y ↦ x1, x1 ↦ x }. The flat substitution { x ↦ z, y ↦ z } cannot have an inverse, since e.g. (x+y) { x ↦ z, y ↦ z } = z+z, and the latter term cannot be transformed back to x+y, as the information about which variable a z stems from is lost. The ground substitution { x ↦ 2 } cannot have an inverse due to a similar loss of origin information, e.g. in (x+2) { x ↦ 2 } = 2+2, even if replacing constants by variables were allowed by some fictitious kind of "generalized substitution". - -Two substitutions are considered equal if they map each variable to structurally equal result terms, formally: σ = τ if xσ = xτ for each variable x ∈ V. - -The composition of two substitutions σ = { x1 ↦ t1, …, xk ↦ tk } and τ = { y1 ↦ u1, …, yl ↦ ul } is obtained by removing from the substitution { x1 ↦ t1τ, …, xk ↦ tkτ, y1 ↦ u1, …, yl ↦ ul } those pairs yi ↦ ui for which yi ∈ { x1, …, xk }. - -The composition of σ and τ is denoted by στ. Composition is an associative operation, and is compatible with substitution application, i.e. (ρσ)τ = ρ(στ), and (tσ)τ = t(στ), respectively, for every substitutions ρ, σ, τ, and every term t. - -The identity substitution, which maps every variable to itself, is the neutral element of substitution composition. A substitution σ is called idempotent if σσ = σ, and hence tσσ = tσ for every term t. The substitution { x1 ↦ t1, …, xk ↦ tk } is idempotent if and only if none of the variables xi occurs in any ti. Substitution composition is not commutative, that is, στ may be different from τσ, even if σ and τ are idempotent. - -For example, { x ↦ 2, y ↦ 3+4 } is equal to { y ↦ 3+4, x ↦ 2 }, but different from { x ↦ 2, y ↦ 7 }. The substitution { x ↦ y+y } is idempotent, e.g. ((x+y) {x↦y+y}) {x↦y+y} = ((y+y)+y) {x↦y+y} = (y+y)+y, while the substitution { x ↦ x+y } is non-idempotent, e.g. ((x+y) {x↦x+y}) {x↦x+y} = ((x+y)+y) {x↦x+y} = ((x+y)+y)+y. An example for non-commuting substitutions is { x ↦ y } { y ↦ z } = { x ↦ z, y ↦ z }, but { y ↦ z} { x ↦ y} = { x ↦ y, y ↦ z }.
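-
-A minimal Python sketch of substitution application (the representation of terms as nested tuples is an assumption of this illustration) reproduces the loss-of-origin example above; note that the replacement is simultaneous, so a z introduced for x is not replaced again:
-
-    def apply(term, sigma):
-        """Apply the substitution sigma (a dict) to a term, simultaneously."""
-        if isinstance(term, str):                   # a variable or constant symbol
-            return sigma.get(term, term)
-        f, *args = term                             # compound term: (function, arg1, ...)
-        return (f, *(apply(a, sigma) for a in args))
-
-    flat = {"x": "z", "y": "z"}
-    print(apply(("+", "x", "y"), flat))             # ('+', 'z', 'z'): origins of the z's are lost
-    print(apply(("+", "x", "2"), {"x": "2"}))       # ('+', '2', '2'): same loss for a ground sigma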
diff --git a/wiki/wikipedia/3025.txt b/wiki/wikipedia/3025.txt deleted file mode 100644 index b15473038c42731369f7a396f699a07fcd2924fc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3025.txt +++ /dev/null @@ -1,11 +0,0 @@ -The 15 puzzle (also called Gem Puzzle, Boss Puzzle, Game of Fifteen, Mystic Square and many others) is a sliding puzzle having 15 square tiles numbered 1–15 in a frame that is 4 tiles high and 4 tiles wide, leaving one unoccupied tile position. Tiles in the same row or column of the open position can be moved by sliding them horizontally or vertically, respectively. The goal of the puzzle is to place the tiles in numerical order. - -Named for the number of tiles in the frame, the 15 puzzle may also be called a 16 puzzle, alluding to its total tile capacity. Similar names are used for different sized variants of the 15 puzzle, such as the 8 puzzle that has 8 tiles in a 3×3 frame. - -The n puzzle is a classical problem for modelling algorithms involving heuristics. Commonly used heuristics for this problem include counting the number of misplaced tiles and finding the sum of the taxicab distances between each block and its position in the goal configuration. For the 15 puzzle, lengths of optimal solutions range from 0 to 80 single-tile moves (there are 17 configurations requiring 80 moves) or 43 multi-tile moves; the 8 puzzle always can be solved in no more than 31 single-tile moves or 24 multi-tile moves. The multi-tile metric counts subsequent moves of the empty tile in the same direction as one. - -Sam Loyd claimed from 1891 until his death in 1911 that he had invented the puzzle, for example writing in the Cyclopedia of Puzzles (published 1914): "The older inhabitants of Puzzleland will remember how in the early seventies I drove the entire world crazy over a little box of movable pieces which became known as the '14-15 Puzzle'." However, Loyd had nothing to do with the invention or initial popularity of the puzzle, and in any case, the craze was in 1880, not the early 1870s. Loyd's first article about the puzzle was published in 1886, and it was not until 1891 that he first claimed to be the inventor. Loyd later offered a $1,000 prize for a solution to his "14-15" version of the puzzle, in which the tiles start in numerical order except that 14 and 15 are transposed. Solving it is impossible, as had been shown over a decade earlier by Johnson, because it requires a transformation from an even to an odd permutation. - -The Minus Cube, manufactured in the USSR, is a 3D puzzle with similar operations to the 15 puzzle. - -Bobby Fischer was an expert at solving the 15-Puzzle. He had been timed to be able to solve it within 25 seconds; Fischer demonstrated this on November 8, 1972, on The Tonight Show Starring Johnny Carson. diff --git a/wiki/wikipedia/3026.txt b/wiki/wikipedia/3026.txt deleted file mode 100644 index bbadbd541ac5333da72010d8d02f088e04c8c26b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3026.txt +++ /dev/null @@ -1,63 +0,0 @@ -In mathematics, the (exponential) shift theorem is a theorem about polynomial differential operators (D-operators) and exponential functions. It permits one to eliminate, in certain cases, the exponential from under the D-operators. - -The theorem states that, if P(D) is a polynomial D-operator, then, for any sufficiently differentiable function y, -$$ -P(D)(e^{ax}y)\equiv e^{ax}P(D+a)y. -$$ - -To prove the result, proceed by induction. Note that only the special case -$$ -P(D)=D^n -$$ - -needs to be proved, since the general result then follows by linearity of D-operators. - -The result is clearly true for n = 1 since -$$ -D(e^{ax}y)=e^{ax}(D+a)y.
-$$ - -Now suppose the result true for n = k, that is, -$$ -D^k(e^{ax}y)=e^{ax}(D+a)^k y. -$$ - -Then, - -\begin{align} - -D^{k+1}(e^{ax}y)&\equiv\frac{d}{dx}\left\{e^{ax}\left(D+a\right)^ky\right\}\\ - -&{}=e^{ax}\frac{d}{dx}\left\{\left(D+a\right)^k y\right\} + ae^{ax}\left\{\left(D+a\right)^ky\right\}\\ - -&{}=e^{ax}\left\{\left(\frac{d}{dx}+a\right) \left(D+a\right)^ky\right\}\\ - -&{}=e^{ax}(D+a)^{k+1}y. - -\end{align} - -This completes the proof. - -The shift theorem can be applied equally well to inverse operators: -$$ -\frac{1}{P(D)}(e^{ax}y)=e^{ax}\frac{1}{P(D+a)}y. -$$ - -There is a similar version of the shift theorem for Laplace transforms (shifting in $t$, with $a > 0$): -$$ -e^{-as}\mathcal{L}\{f(t)\} = \mathcal{L}\{f(t-a)u(t-a)\}, -$$ - -where $u$ denotes the Heaviside step function. - -As an example of the exponential shift theorem, - -\begin{align} - -D^3 f &= D^3 (e^x\sin(x)) = e^x (D+1)^3 \sin (x) \\ - -&= e^x \left(D^3 + 3D^2 + 3D + 1\right) \sin(x) \\ - -&= e^x\left(-\cos(x)-3\sin(x)+3\cos(x)+\sin(x)\right) - -\end{align} - -Another application of the exponential shift theorem is to solve linear differential equations whose characteristic polynomial has repeated roots. diff --git a/wiki/wikipedia/3027.txt b/wiki/wikipedia/3027.txt deleted file mode 100644 index 7104b18d466b7d120a484a916c3fecf83ee3450a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3027.txt +++ /dev/null @@ -1,61 +0,0 @@ -A triangle mesh is a type of polygon mesh in computer graphics. It comprises a set of triangles (typically in three dimensions) that are connected by their common edges or corners. - -Many graphics software packages and hardware devices can operate more efficiently on triangles that are grouped into meshes than on a similar number of triangles that are presented individually. This is typically because computer graphics do operations on the vertices at the corners of triangles. With individual triangles, the system has to operate on three vertices for every triangle. In a large mesh, there could be eight or more triangles meeting at a single vertex - by processing those vertices just once, it is possible to do a fraction of the work and achieve an identical effect. - -In many computer graphics applications it is necessary to manage a mesh of triangles. The mesh components are vertices, edges, and triangles. An application might require knowledge of the various connections between the mesh components. These connections can be managed independently of the actual vertex positions. This document describes a simple data structure that is convenient for managing the connections. This is not the only possible data structure. Many other types exist and have support for various queries about meshes. - -Various methods of storing and working with a mesh in computer memory are possible. With the OpenGL and DirectX APIs there are two primary ways of passing a triangle mesh to the graphics hardware, triangle strips and index arrays. - -One way of sharing vertex data between triangles is the triangle strip. With strips of triangles, each triangle shares one complete edge with one neighbour and another with the next. Another way is the triangle fan, which is a set of connected triangles sharing one central vertex. With these methods, vertices are dealt with efficiently, resulting in the need to only process N+2 vertices in order to draw N triangles. - -Triangle strips are efficient; however, the drawback is that it may not be obvious how or convenient to translate an arbitrary triangle mesh into strips. - -The data structure representing the mesh provides support for two basic operations: inserting triangles and removing triangles.
It also supports an edge collapse operation that is useful in triangle decimation schemes. The structure provides no support for the vertex positions, but it does assume that each vertex is assigned a unique integer identifier, typically the index of that vertex in an array of contiguous vertex positions. A mesh vertex is defined by a single integer and is denoted by ⟨v⟩. A mesh edge is defined by a pair of integers ⟨v0,v1⟩, each integer corresponding to an end point of the edge. To support edge maps, the edges are stored so that v0 = min(v0,v1). A triangle component is defined by a triple of integers ⟨v0,v1,v2⟩, each integer corresponding to a vertex of the triangle. To support triangle maps, the triangles are stored so that v0 = min(v0,v1,v2). Observe that ⟨v0,v1,v2⟩ and ⟨v0,v2,v1⟩ are treated as different triangles. An application requiring double-sided triangles must insert both triples into the data structure. For the sake of avoiding constant reminders about order of indices, in the remainder of the document the pair/triple notation does not imply the vertices are ordered in any way (although the implementation does handle the ordering). - -Connectivity between the components is completely determined by the set of triples representing the triangles. A triangle t = ⟨v0,v1,v2⟩ has vertices v0, v1, and v2. It has edges e0 = ⟨v0,v1⟩, e1 = ⟨v1,v2⟩, and e2 = ⟨v2,v0⟩. The inverse connections are also known. Vertex v0 is adjacent to edges e0 and e2 and to triangle t. Vertex v1 is adjacent to edges e0 and e1 and to triangle t. Vertex v2 is adjacent to edges e1 and e2 and to triangle t. All three edges e0, e1, and e2 are adjacent to t. - -How much of this information a data structure stores is dependent on the needs of an application. Moreover, the application might want to have additional information stored at the components. The information stored at a vertex, edge, or triangle is referred to as the vertex attribute, edge attribute, or triangle attribute. The abstract representations of these for the simple data structure described here are -
    -
    -Vertex = <int>; // v
    -
    -Edge = <int, int>; // v0, v1
    -
    -Triangle = <int, int, int>; // v0, v1, v2
    -
    -VData = <application-specific vertex data>;
    -
    -EData = <application-specific edge data>;
    -
    -TData = <application-specific triangle data>;
    -
    -VAttribute = <VData, set<Edge>, set<Triangle>>; // data, eset, tset
    -
    -EAttribute = <EData, set<Triangle>>; // data, tset
    -
    -TAttribute = <TData>; // data
    -
    -VPair = pair<Vertex, VAttribute>;
    -
    -EPair = pair<Edge, EAttribute>;
    -
    -TPair = pair<Triangle, TAttribute>;
    -
    -VMap = map<Vertex, VAttribute>;
    -
    -EMap = map<Edge, EAttribute>;
    -
    -TMap = map<Triangle, TAttribute>;
    -
    -Mesh = <VMap, EMap, TMap>; // vmap, emap, tmap
    -
    -
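-
-For concreteness, a minimal Python transcription of these maps (an illustrative sketch; the class and method names are chosen here and are not part of the original document) maintains the inverse connections as triangles are inserted and removed:
-
-    class Mesh:
-        def __init__(self):
-            self.vmap = {}      # vertex v      -> set of adjacent triangles
-            self.emap = {}      # edge (v0,v1)  -> set of adjacent triangles
-            self.tmap = set()   # the triangles themselves
-
-        @staticmethod
-        def edges(t):
-            v0, v1, v2 = t      # store each edge with its vertices sorted (v0 = min)
-            return [tuple(sorted(e)) for e in ((v0, v1), (v1, v2), (v2, v0))]
-
-        def insert_triangle(self, t):
-            if t in self.tmap:                  # insertion only if not already present
-                return
-            self.tmap.add(t)
-            for v in t:
-                self.vmap.setdefault(v, set()).add(t)
-            for e in self.edges(t):
-                self.emap.setdefault(e, set()).add(t)
-
-        def remove_triangle(self, t):
-            if t not in self.tmap:              # removal only if present
-                return
-            self.tmap.discard(t)
-            for v in t:
-                self.vmap[v].discard(t)
-                if not self.vmap[v]:
-                    del self.vmap[v]            # drop now-unused vertices and edges
-            for e in self.edges(t):
-                self.emap[e].discard(t)
-                if not self.emap[e]:
-                    del self.emap[e]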
The maps support the standard insertion and removal functions for a hash table. Insertion occurs only if the item does not already exist. Removal occurs only if the item does exist. - -The edge collapse operation involves identifying an edge ⟨vk, vt⟩, where vk is called the keep vertex and vt is called the throw vertex. The triangles that share this edge are removed from the mesh. The vertex vt is also removed from the mesh. Any triangles that shared vt have that vertex replaced by vk. Figure 1 shows a triangle mesh and a sequence of three edge collapses applied to the mesh. - -With index arrays, a mesh is represented by two separate arrays, one array holding the vertices, and another holding sets of three indices into that array which define a triangle. The graphics system processes the vertices first and renders the triangles afterwards, using the index sets working on the transformed data. In OpenGL, this is supported by the glDrawElements() primitive when using Vertex Buffer Objects (VBOs). - -With this method, any arbitrary set of triangles sharing any arbitrary number of vertices can be stored, manipulated, and passed to the graphics API, without any intermediary processing. diff --git a/wiki/wikipedia/3028.txt b/wiki/wikipedia/3028.txt deleted file mode 100644 index 9e4ab53378b239d0411ac664d026c3bb24982dd3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3028.txt +++ /dev/null @@ -1,30 +0,0 @@ -In mathematics, the Poincaré–Hopf theorem (also known as the Poincaré–Hopf index formula, Poincaré–Hopf index theorem, or Hopf index theorem) is an important theorem that is used in differential topology. It is named after Henri Poincaré and Heinz Hopf. - -The Poincaré–Hopf theorem is often - -illustrated by the special case of the hairy ball theorem, which simply states that there is no smooth vector field on an even-dimensional n-sphere having no sources or sinks. - -Let $M$ be a differentiable manifold, of dimension $n$, and $v$ a vector field on $M$. Suppose that $x$ is an isolated zero of $v$, and fix some local coordinates near $x$. Pick a closed ball $D$ centered at $x$, so that $x$ is the only zero of $v$ in $D$. Then the index of $v$ at $x$, $\operatorname{index}_x(v)$, can be defined as the degree of the map $u : \partial D \to \mathbb S^{n-1}$ from the boundary of $D$ to the $(n-1)$-sphere given by $u(z)=v(z)/\|v(z)\|$. - -Theorem. Let $M$ be a compact differentiable manifold. Let $v$ be a vector field on $M$ with isolated zeroes. If $M$ has boundary, then we insist that $v$ be pointing in the outward normal direction along the boundary. Then we have the formula -$$ -\sum_i \operatorname{index}_{x_i}(v) = \chi(M) -$$ - -where the sum of the indices is over all the isolated zeroes of $v$ and $\chi(M)$ is the Euler characteristic of $M$. A particularly useful corollary is that if $M$ admits a nowhere-vanishing vector field, then its Euler characteristic must be 0. - -The theorem was proven for two dimensions by Henri Poincaré and later generalized to higher dimensions by Heinz Hopf. - -The Euler characteristic of a closed surface is a purely topological concept, whereas the index of a vector field is purely analytic. Thus, this theorem establishes a deep link between two seemingly unrelated areas of mathematics. It is perhaps as interesting that the proof of this theorem relies heavily on integration, and, in particular, Stokes' theorem, which states that the integral of the exterior derivative of a differential form is equal to the integral of that form over the boundary.
In the special case of a manifold without boundary, this amounts to saying that the integral is 0. But by examining vector fields in a sufficiently small neighborhood of a source or sink, we see that sources and sinks contribute integer amounts (known as the index) to the total, and they must all sum to 0. This result may be considered one of the earliest of a whole series of theorems establishing deep relationships between geometric and analytical or physical concepts. They play an important role in the modern study of both fields. - -1. Embed M in some high-dimensional Euclidean space. (Use the Whitney embedding theorem.) - -2. Take a small neighborhood of M in that Euclidean space, Nε. Extend the vector field to this neighborhood so that it still has the same zeroes and the zeroes have the same indices. In addition, make sure that the extended vector field at the boundary of Nε is directed outwards. - -3. The sum of indices of the zeroes of the old (and new) vector field is equal to the degree of the Gauss map from the boundary of Nε to the (n−1)-dimensional sphere. Thus, the sum of the indices is independent of the actual vector field, and depends only on the manifold M. - -Technique: cut away all zeroes of the vector field with small neighborhoods. Then use the fact that the degree of a map from the boundary of an n-dimensional manifold to an (n−1)-dimensional sphere, that can be extended to the whole n-dimensional manifold, is zero. - -4. Finally, identify this sum of indices as the Euler characteristic of M. To do that, construct a very specific vector field on M using a triangulation of M for which it is clear that the sum of indices is equal to the Euler characteristic. - -It is still possible to define the index for a vector field with nonisolated zeroes, and the Poincaré–Hopf theorem extends to vector fields with nonisolated zeroes. diff --git a/wiki/wikipedia/3029.txt b/wiki/wikipedia/3029.txt deleted file mode 100644 index 0362f7158d3043fe869d2134d5093da57651ad80..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3029.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematics, the Simon problems (or Simon's problems) are a series of fifteen questions posed in the year 2000 by Barry Simon, an American mathematical physicist. Inspired by other collections of mathematical problems and open conjectures, such as the famous list by David Hilbert, the Simon problems concern quantum operators. Eight of the problems pertain to anomalous spectral behavior of Schrödinger operators, and five concern operators that incorporate the Coulomb potential. Among these was the problem of proving that the set of energy levels of one particular abstract quantum system was in fact the Cantor set, a challenge known as the "Ten Martini Problem" after the reward that Mark Kac offered for solving it. - -Background definitions for the "Coulomb energies" problems: - -* $\mathcal{H}_f^{(N)}$ is the space of functions in $L^2(\mathbb{R}^{3N}; \mathbb{C}^{2N})$ which are antisymmetric in spin and space. diff --git a/wiki/wikipedia/303.txt b/wiki/wikipedia/303.txt deleted file mode 100644 index 08e4a8764549b6ca8b092c52125f2d2944cd7596..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/303.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, Tate's isogeny theorem, proved by John Tate in 1966, states that two abelian varieties over a finite field are isogenous if and only if their Tate modules are isomorphic (as Galois representations).
diff --git a/wiki/wikipedia/3030.txt b/wiki/wikipedia/3030.txt deleted file mode 100644 index 32d1c448a8995c86d7b2a8c7fb194522e2b4651c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3030.txt +++ /dev/null @@ -1 +0,0 @@ -The Alexander–Hirschowitz theorem shows that a general collection of k double points in P^r imposes independent conditions on homogeneous polynomials of degree d (equivalently, on hypersurfaces of degree d), apart from a well-understood list of exceptional cases. In this way, classical polynomial interpolation in several variables is generalized to points with higher multiplicities. diff --git a/wiki/wikipedia/3031.txt b/wiki/wikipedia/3031.txt deleted file mode 100644 index b5afedf74090da9329df9fe695a2a001421c2308..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3031.txt +++ /dev/null @@ -1,129 +0,0 @@ -In trigonometry, the law of sines, sine law, sine formula, or sine rule is an equation relating the lengths of the sides of a triangle (any shape) to the sines of its angles. According to the law, - - \frac{a}{\sin{\alpha}} = \frac{b}{\sin{\beta}} = \frac{c}{\sin{\gamma}} = 2R, - -where a, b, and c are the lengths of the sides of a triangle, and α, β, and γ are the opposite angles (see the figure to the right), while R is the radius of the triangle's circumcircle. When the last part of the equation is not used, the law is sometimes stated using the reciprocals: - - \frac{\sin{\alpha}}{a} = \frac{\sin{\beta}}{b} = \frac{\sin{\gamma}}{c}. - -The law of sines can be used to compute the remaining sides of a triangle when two angles and a side are known—a technique known as triangulation. It can also be used when two sides and one of the non-enclosed angles are known. In some such cases, the triangle is not uniquely determined by this data (called the ambiguous case) and the technique gives two possible values for the enclosed angle. - -The law of sines is one of two trigonometric equations commonly applied to find lengths and angles in scalene triangles, with the other being the law of cosines. - -The law of sines can be generalized to higher dimensions on surfaces with constant curvature. - -According to Ubiratàn D'Ambrosio and Helaine Selin, the spherical law of sines was discovered in the 10th century. It is variously attributed to Abu-Mahmud Khojandi, Abu al-Wafa' Buzjani, Nasir al-Din al-Tusi and Abu Nasr Mansur. - -Ibn Muʿādh al-Jayyānī's The book of unknown arcs of a sphere in the 11th century contains the general law of sines. The plane law of sines was later stated in the 13th century by Nasīr al-Dīn al-Tūsī. In his On the Sector Figure, he stated the law of sines for plane and spherical triangles, and provided proofs for this law. - -According to Glen Van Brummelen, "The Law of Sines is really Regiomontanus's foundation for his solutions of right-angled triangles in Book IV, and these solutions are in turn the bases for his solutions of general triangles." Regiomontanus was a 15th-century German mathematician. - -The area T of any triangle can be written as one half of its base times its height. Selecting one side of the triangle as the base, the height of the triangle relative to that base is computed as the length of another side times the sine of the angle between the chosen side and the base. Thus depending on the selection of the base, the area of the triangle can be written as any of: - -T = \frac{1}{2} b \left(c \sin{\alpha}\right) = \frac{1}{2} c \left(a \sin{\beta}\right) = \frac{1}{2} a \left(b \sin{\gamma}\right).
- -Multiplying these by 2/abc gives - -\frac{2T}{abc} = \frac{\sin{\alpha}}{a} = \frac{\sin{\beta}}{b} = \frac{\sin{\gamma}}{c}. - -When using the law of sines to find a side of a triangle, an ambiguous case occurs when two separate triangles can be constructed from the data provided (i.e., there are two different possible solutions to the triangle). In the case shown below they are triangles ABC and ABC′. - -Given a general triangle, the following conditions would need to be fulfilled for the case to be ambiguous: - -* The only information known about the triangle is the angle α and the sides a and c. - -* The angle α is acute (i.e., α < 90°). - -* The side a is shorter than the side c (i.e., a < c). - -* The side a is longer than the altitude h from angle β, where h = c sin α (i.e., a > h). - -If all the above conditions are true, then each of the angles γ and γ′ produces a valid triangle, meaning that both of the following are true: - - {\gamma}' = \arcsin\frac{c \sin{\alpha}}{a} \quad \text{or} \quad {\gamma} = \pi - \arcsin\frac{c \sin{\alpha}}{a}. - -From there we can find the corresponding β and b or β′ and b′ if required, where b is the side bounded by vertices A and C and b′ is bounded by A and C′. - -The following are examples of how to solve a problem using the law of sines. - -Given: side a = 20, side c = 24, and angle γ = 40°. Angle α is desired. - -Using the law of sines, we conclude that - -\frac{\sin \alpha}{20} = \frac{\sin (40^\circ)}{24}. - - \alpha = \arcsin\left( \frac{20\sin (40^\circ)}{24} \right) \approx 32.39^\circ. - -Note that the potential solution α = 147.61° is excluded because that would necessarily give α + β + γ > 180°. - -If the lengths of two sides of the triangle a and b are equal to x, the third side has length c, and the angles opposite the sides of lengths a, b, and c are α, β, and γ respectively then - -\begin{align} - -& \alpha = \beta = \frac{180^\circ-\gamma}{2}= 90^\circ-\frac{\gamma}{2} \\[6pt] - -& \sin \alpha = \sin \beta = \sin \left(90^\circ-\frac{\gamma}{2}\right) = \cos \left(\frac{\gamma}{2}\right) \\[6pt] - -& \frac{c}{\sin \gamma}=\frac{a}{\sin \alpha}=\frac{x}{\cos \left(\frac{\gamma}{2}\right)} \\[6pt] - -& \frac{c \cos \left(\frac{\gamma}{2}\right)}{\sin \gamma} = x - -\end{align} - -In the identity - - \frac{a}{\sin{\alpha}} = \frac{b}{\sin{\beta}} = \frac{c}{\sin{\gamma}}, - -the common value of the three fractions is actually the diameter of the triangle's circumcircle. This result dates back to Ptolemy. - -As shown in the figure, let there be a circle with inscribed $ \triangle ABC$ and another inscribed $ \triangle ADB$ that passes through the circle's center O. The central angle $ \angle AOD$ is $ 180^\circ$ and thus $ \angle ABD = 90^\circ$. Since $ \triangle ABD$ is a right triangle, - - \sin{\delta}= \frac{\text{opposite}}{\text{hypotenuse}}= \frac{c}{2R}, - -where $ R= \frac{d}{2}$ is the radius of the circumscribing circle of the triangle. It is also possible to derive the sine law using elementary linear algebra and projection matrices (see Figure 3 in this paper). - -In hyperbolic geometry when the curvature is −1, the law of sines becomes - -\frac{\sin A}{\sinh a} = \frac{\sin B}{\sinh b} = \frac{\sin C}{\sinh c} . - -In the special case when B is a right angle, one gets - -\sin C = \frac{\sinh c}{\sinh b} - -which is the analog of the formula in Euclidean geometry expressing the sine of an angle as the opposite side divided by the hypotenuse.
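The worked example and the ambiguous case above can be checked directly. Here is a short Python sketch; the function name and the rejection test for candidate angles are our own choices.

```python
import math

def law_of_sines_angles(a, c, gamma_deg):
    """Given sides a and c and the angle gamma (opposite c), return the
    possible values, in degrees, of the angle alpha (opposite a)."""
    s = a * math.sin(math.radians(gamma_deg)) / c   # sin(alpha)
    if s > 1.0:
        return []                                   # no such triangle
    alpha = math.degrees(math.asin(s))
    candidates = (alpha, 180.0 - alpha)             # the ambiguous case
    # a candidate survives only if the three angles can still sum to 180
    return [x for x in candidates if x + gamma_deg < 180.0]

print(law_of_sines_angles(20, 24, 40))   # [32.388...]; 147.61 is excluded
print(law_of_sines_angles(24, 20, 40))   # two valid triangles
```

The second call illustrates a genuinely ambiguous configuration: both the acute and the obtuse candidate leave room for the remaining angle, so two distinct triangles fit the data.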
- -Define a generalized sine function, depending also on a real parameter K: - -\sin_K x = x - \frac{K x^3}{3!} + \frac{K^2 x^5}{5!} - \frac{K^3 x^7}{7!} + \cdots. - -The law of sines in constant curvature K reads as - -\frac{\sin A}{\sin_K a} = \frac{\sin B}{\sin_K b} = \frac{\sin C}{\sin_K c} . - -By substituting K = 0, K = 1, and K = −1, one obtains respectively the Euclidean, spherical, and hyperbolic cases of the law of sines described above. - -Let pK(r) indicate the circumference of a circle of radius r in a space of constant curvature K. Then pK(r) = 2π sinK r. Therefore, the law of sines can also be expressed as: - -\frac{\sin A}{p_K(a)} = \frac{\sin B}{p_K(b)} = \frac{\sin C}{p_K(c)} . - -This formulation was discovered by János Bolyai. - -For an n-dimensional simplex (i.e., triangle (n = 2), tetrahedron (n = 3), pentatope (n = 4), etc.) in n-dimensional Euclidean space, the absolute value of the polar sine (psin) of the normal vectors of the facets that meet at a vertex, divided by the hyperarea of the facet opposite the vertex is independent of the choice of the vertex. Writing V for the hypervolume of the n-dimensional simplex and P for the product of the hyperareas of its (n − 1)-dimensional facets, the common ratio is - -\frac{(nV)^{n-1}}{(n-1)! P}. - -For example, a tetrahedron has four triangular facets. The absolute value of the polar sine of the normal vectors to the three facets that share a vertex, divided by the area of the fourth facet will not depend upon the choice of the vertex: - -\begin{align} - -& \frac{\left|\operatorname{psin}(\mathbf{n_2}, \mathbf{n_3}, \mathbf{n_4})\right|}{\mathrm{Area}_1} = - -\frac{\left|\operatorname{psin}(\mathbf{n_1}, \mathbf{n_3}, \mathbf{n_4})\right|}{\mathrm{Area}_2} = - -\frac{\left|\operatorname{psin}(\mathbf{n_1}, \mathbf{n_2}, \mathbf{n_4})\right|}{\mathrm{Area}_3} = - -\frac{\left|\operatorname{psin}(\mathbf{n_1}, \mathbf{n_2}, \mathbf{n_3})\right|}{\mathrm{Area}_4} \\[4pt] - -= {} & \frac{(3\operatorname{Volume}_\mathrm{tetrahedron})^2}{2!~\mathrm{Area}_1 \mathrm{Area}_2 \mathrm{Area}_3 \mathrm{Area}_4}. - -\end{align} diff --git a/wiki/wikipedia/3032.txt b/wiki/wikipedia/3032.txt deleted file mode 100644 index 5c8313d75fc9033506ac9dbb4f33f600c492d0cd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3032.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, Glaeser's theorem, introduced by , is a theorem giving conditions for a smooth function to be a composition of F and θ for some given smooth function θ. One consequence is a generalization of Newton's theorem that every symmetric polynomial is a polynomial in the elementary symmetric polynomials, from polynomials to smooth functions. diff --git a/wiki/wikipedia/3033.txt b/wiki/wikipedia/3033.txt deleted file mode 100644 index ef305c9b976e22c4af0c1249d6cf6ab32fbc1f46..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3033.txt +++ /dev/null @@ -1,25 +0,0 @@ -In digital transmission, bit slip is the loss or gain of a bit or bits, caused by clock drift – variations in the respective clock rates of the transmitting and receiving devices. - -One cause of bit slippage is overflow of a receive buffer that occurs when the transmitter's clock rate exceeds that of the receiver. This causes one or more bits to be dropped for lack of storage capacity. - -One way to maintain timing between transmitting and receiving devices is to employ an asynchronous protocol such as start-stop. 
Alternatively, bit slip can be prevented by using a self-clocking signal (such as a signal modulated using OQPSK) or using a line coding such as Manchester encoding. - -Another cause is "losing count", as on a hard drive: if a hard drive encounters a long string of 0s, without any 1s (or a string of 1s without 0s), it may lose track of the frame between fields, and suffer bit slip. - -When a run of N consecutive zero bits is sent, clock drift may cause the hardware to apparently detect N−1 zero bits or N+1 zero bits; both kinds of errors are called bit slip. - -Long runs without a transition are therefore prevented by devices such as run-length limited codes. - -Many communication systems, including VSAT, 1000BASE-T, etc., use linear-feedback shift register scrambling to prevent long strings of 0s (or of any other single symbol). - -While a scrambler makes the "losing count" type of bit slip error occur far less often, when bit slip errors do occur (perhaps for other reasons), scramblers have the property of expanding small errors that add or lose a single bit into a much longer burst of errors. - -The optimized cipher feedback mode (OCFB), the statistical self-synchronization mode, and the "one-bit CFB mode" also expand small bit-slip errors into a longer burst of errors, but eventually recover and produce the correct decrypted plaintext. - -A bit-slip error when using any other block cipher mode of operation generally results in complete corruption of the rest of the message. diff --git a/wiki/wikipedia/3034.txt b/wiki/wikipedia/3034.txt deleted file mode 100644 index 1b9b0fa6cd1eaba6d4059ad15d530db938a2b5af..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3034.txt +++ /dev/null @@ -1,5 +0,0 @@ -In plane geometry, Holditch's theorem states that if a chord of fixed length is allowed to rotate inside a convex closed curve, then the locus of a point on the chord a distance p from one end and a distance q from the other is a closed curve whose enclosed area is less than that of the original curve by $\pi pq$. The theorem was published in 1858 by Rev. Hamnet Holditch. - -The theorem is included as one of Clifford Pickover's 250 milestones in the history of mathematics. Some peculiarities of the theorem include that the area formula $\pi pq$ is independent of both the shape and the size of the original curve, and that the area formula is the same as the area of an ellipse with semi-axes p and q. The theorem's author was a president of Caius College, Cambridge. - -Broman gives a more precise statement of the theorem, along with a generalization. The generalization allows, for example, consideration of the case in which the outer curve is a triangle, so that the conditions of the precise statement of Holditch's theorem do not hold because the paths of the endpoints of the chord have retrograde portions (portions that retrace themselves) whenever an acute angle is traversed. Nevertheless, the generalization shows that if the chord is shorter than any of the triangle's altitudes, and is short enough that the traced locus is a simple curve, Holditch's formula for the in-between area is still correct (and remains so if the triangle is replaced by any convex polygon with a short enough chord). However, other cases result in different formulas.
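Holditch's formula is easy to verify in the simplest case, where the convex curve is a circle of radius R: the sliding chord stays at a fixed distance from the centre, so the traced point moves on a circle whose radius can be written down explicitly. A small Python check follows; the particular values of R, p and q are arbitrary choices.

```python
import math

R, p, q = 3.0, 1.0, 0.5      # outer circle radius and chord division p : q

# For a chord of length p + q sliding inside a circle of radius R, the
# distance from the centre to the chord is constant, and the traced point
# sits at offset (p - q)/2 from the chord's midpoint along the chord.
d = math.sqrt(R**2 - ((p + q) / 2)**2)
r = math.sqrt(d**2 + ((p - q) / 2)**2)   # radius of the traced locus

print(math.pi * R**2 - math.pi * r**2)   # area between the two curves
print(math.pi * p * q)                   # Holditch's prediction: pi*p*q
```

Both printed values agree, as the theorem predicts, and notably neither depends on R once the difference is taken.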
diff --git a/wiki/wikipedia/3035.txt b/wiki/wikipedia/3035.txt deleted file mode 100644 index 62cec91b07d22ed5380fb54d7c9e22e92d3fb3b0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3035.txt +++ /dev/null @@ -1,3 +0,0 @@ -#redirect Hartley transform#cas - -Category:Trigonometry diff --git a/wiki/wikipedia/3036.txt b/wiki/wikipedia/3036.txt deleted file mode 100644 index 0c8c101acea990aac7daf0600d0c3e42d360f51d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3036.txt +++ /dev/null @@ -1,31 +0,0 @@ -Fisher's inequality is a necessary condition for the existence of a balanced incomplete block design, that is, a system of subsets that satisfy certain prescribed conditions in combinatorial mathematics. It was outlined by Ronald Fisher, a population geneticist and statistician, who was concerned with the design of experiments such as studying the differences among several different varieties of plants, under each of a number of different growing conditions, called blocks. - -Let: - -* v be the number of varieties of plants; - -* b be the number of blocks. - -To be a balanced incomplete block design it is required that: - -* k different varieties are in each block, 1 ≤ k < v; no variety occurs twice in any one block; - -* any two varieties occur together in exactly λ blocks; - -* each variety occurs in exactly r blocks. - -Fisher's inequality states simply that - -b ≥ v. - -Let the incidence matrix M be a v × b matrix defined so that Mi,j is 1 if element i is in block j and 0 otherwise. Then B = MMT is a v × v matrix such that Bi,i = r and Bi,j = λ for i ≠ j. Since r ≠ λ, det(B) ≠ 0, so rank(B) = v; on the other hand, rank(B) ≤ rank(M) ≤ b, so v ≤ b. - -Fisher's inequality is valid for more general classes of designs. A pairwise balanced design (or PBD) is a set X together with a family of non-empty subsets of X (which need not have the same size and may contain repeats) such that every pair of distinct elements of X is contained in exactly λ (a positive integer) subsets. The set X is allowed to be one of the subsets, and if all the subsets are copies of X, the PBD is called "trivial". The size of X is v and the number of subsets in the family (counted with multiplicity) is b. - -Theorem: For any non-trivial PBD, v ≤ b. - -This result also generalizes the Erdős–De Bruijn theorem: - -For a PBD with λ = 1 having no blocks of size 1 or size v, v ≤ b, with equality if and only if the PBD is a projective plane or a near-pencil (meaning that exactly n − 1 of the points are collinear). - -In another direction, Ray-Chaudhuri and Wilson proved in 1975 that in a 2s-(v, k, λ) design, the number of blocks is at least $\binom{v}{s}$. diff --git a/wiki/wikipedia/3037.txt b/wiki/wikipedia/3037.txt deleted file mode 100644 index af617a477e97ee03a996108ced186ab62c3947f5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3037.txt +++ /dev/null @@ -1,83 +0,0 @@ -In mathematics, the Langlands program is a web of far-reaching and influential conjectures about connections between number theory and geometry. Proposed by , it seeks to relate Galois groups in algebraic number theory to automorphic forms and representation theory of algebraic groups over local fields and adeles. Widely seen as the single biggest project in modern mathematical research, the Langlands program has been described by Edward Frenkel as "a kind of grand unified theory of mathematics."
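Before turning to the Langlands program in detail, the rank argument behind Fisher's inequality above can be checked concretely on the smallest nontrivial design. A Python sketch using the Fano plane, a 2-(7, 3, 1) design; the block list is one standard labelling of its lines.

```python
import numpy as np

# Fano plane: v = 7 points, b = 7 blocks, k = 3, r = 3, lambda = 1.
blocks = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
          {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]
v = 7

# Incidence matrix M with M[i, j] = 1 iff point i lies in block j.
M = np.array([[int(i in blk) for blk in blocks] for i in range(v)])

B = M @ M.T   # diagonal entries r = 3, off-diagonal entries lambda = 1
print(B)
# rank(B) = v, and rank(B) <= rank(M) <= b, forcing b >= v
print(np.linalg.matrix_rank(B))   # 7
```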
- -The Langlands program consists of some very complicated theoretical abstractions, which can be difficult even for specialist mathematicians to grasp. To oversimplify, the foundational result and fundamental lemma of the project posit a direct connection between the generalized fundamental representation of a finite field with its group extension and the automorphic forms under which it is invariant. This is accomplished through abstraction to higher dimensional integration, by an equivalence to a certain analytical group as an absolute extension of its algebra. Consequently, this allows an analytical functional construction of powerful invariance transformations for a number field to its own algebraic structure. - -Intuitively speaking, the meaning of such a construction is rather nuanced, yet very powerful in its specific solutions and generalizations. A proof of the existence of such theoretical objects implies an analytical method for constructing the categoric mapping of fundamental structures for virtually any number field. As an analogue to the possible exact distribution of primes, the Langlands program offers a potential general tool for the resolution of invariance in generalized algebraic structures. This in turn permits a somewhat unified analysis of arithmetic objects through their automorphic functions. Simply put, the Langlands philosophy allows a general approach to structuring the abstractions of numbers. Naturally, this description is at once a reduction and an over-generalization of the program's proper theorems, but these mathematical analogues provide the basis of its conceptualization. - -In a very broad context, the program built on existing ideas: the philosophy of cusp forms formulated a few years earlier by Harish-Chandra and , the work and approach of Harish-Chandra on semisimple Lie groups, and in technical terms the trace formula of Selberg and others. - -What initially was very new in Langlands' work, besides technical depth, was the proposed direct connection to number theory, together with the rich organisational structure hypothesised (so-called functoriality). - -For example, in the work of Harish-Chandra one finds the principle that what can be done for one semisimple (or reductive) Lie group, should be done for all. Therefore, once the role of some low-dimensional Lie groups such as GL(2) in the theory of modular forms had been recognised, and with hindsight GL(1) in class field theory, the way was open at least to speculation about GL(n) for general n > 2. - -The cusp form idea came out of the cusps on modular curves but also had a meaning visible in spectral theory as "discrete spectrum", contrasted with the "continuous spectrum" from Eisenstein series. It becomes much more technical for bigger Lie groups, because the parabolic subgroups are more numerous. - -In all these approaches there was no shortage of technical methods, often inductive in nature and based on Levi decompositions amongst other matters, but the field was, and is, very demanding. - -And on the side of modular forms, there were examples such as Hilbert modular forms, Siegel modular forms, and theta-series. - -There are a number of related Langlands conjectures. There are many different groups over many different fields for which they can be stated, and for each field there are several different versions of the conjectures.
Some versions of the Langlands conjectures are vague, or depend on objects such as the Langlands groups, whose existence is unproven, or on the L-group that has several inequivalent definitions. Moreover, the Langlands conjectures have evolved since Langlands first stated them in 1967. - -There are different types of objects for which the Langlands conjectures can be stated: - -*Representations of reductive groups over local fields (with different subcases corresponding to archimedean local fields, p-adic local fields, and completions of function fields) - -*Automorphic forms on reductive groups over global fields (with subcases corresponding to number fields or function fields). - -*Finite fields. Langlands did not originally consider this case, but his conjectures have analogues for it. - -*More general fields, such as function fields over the complex numbers. - -There are several different ways of stating the Langlands conjectures, which are closely related but not obviously equivalent. - -The starting point of the program may be seen as Emil Artin's reciprocity law, which generalizes quadratic reciprocity. The Artin reciprocity law applies to a Galois extension of an algebraic number field whose Galois group is abelian; it assigns L-functions to the one-dimensional representations of this Galois group, and states that these L-functions are identical to certain Dirichlet L-series or more general series (that is, certain analogues of the Riemann zeta function) constructed from Hecke characters. The precise correspondence between these different kinds of L-functions constitutes Artin's reciprocity law. - -For non-abelian Galois groups and higher-dimensional representations of them, one can still define L-functions in a natural way: Artin L-functions. - -The insight of Langlands was to find the proper generalization of Dirichlet L-functions, which would allow the formulation of Artin's statement in this more general setting. Hecke had earlier related Dirichlet L-functions with automorphic forms (holomorphic functions on the upper half plane of $\mathbb{C}$ (the complex numbers) that satisfy certain functional equations). Langlands then generalized these to automorphic cuspidal representations, which are certain infinite dimensional irreducible representations of the general linear group GL(n) over the adele ring of $\mathbb{Q}$ (the rational numbers). (This ring simultaneously keeps track of all the completions of $\mathbb{Q},$ see p-adic numbers.) - -Langlands attached automorphic L-functions to these automorphic representations, and conjectured that every Artin L-function arising from a finite-dimensional representation of the Galois group of a number field is equal to one arising from an automorphic cuspidal representation. This is known as his "reciprocity conjecture". - -Roughly speaking, the reciprocity conjecture gives a correspondence between automorphic representations of a reductive group and homomorphisms from a Langlands group to an L-group. There are numerous variations of this, in part because the definitions of Langlands group and L-group are not fixed. - -Over local fields this is expected to give a parameterization of L-packets of admissible irreducible representations of a reductive group over the local field. For example, over the real numbers, this correspondence is the Langlands classification of representations of real reductive groups. Over global fields, it should give a parameterization of automorphic forms. 
- -The functoriality conjecture states that a suitable homomorphism of L-groups is expected to give a correspondence between automorphic forms (in the global case) or representations (in the local case). Roughly speaking, the Langlands reciprocity conjecture is the special case of the functoriality conjecture when one of the reductive groups is trivial. - -Langlands generalized the idea of functoriality: instead of using the general linear group GL(n), other connected reductive groups can be used. Furthermore, given such a group G, Langlands constructs the Langlands dual group LG, and then, for every automorphic cuspidal representation of G and every finite-dimensional representation of LG, he defines an L-function. One of his conjectures states that these L-functions satisfy a certain functional equation generalizing those of other known L-functions. - -He then goes on to formulate a very general "Functoriality Principle". Given two reductive groups and a (well behaved) morphism between their corresponding L-groups, this conjecture relates their automorphic representations in a way that is compatible with their L-functions. This functoriality conjecture implies all the other conjectures presented so far. It is of the nature of an induced representation construction—what in the more traditional theory of automorphic forms had been called a 'lifting', known in special cases, and so is covariant (whereas a restricted representation is contravariant). Attempts to specify a direct construction have only produced some conditional results. - -All these conjectures can be formulated for more general fields in place of $\mathbb{Q}$: algebraic number fields (the original and most important case), local fields, and function fields (finite extensions of Fp(t) where p is a prime and Fp(t) is the field of rational functions over the finite field with p elements). - -The so-called geometric Langlands program, suggested by Gérard Laumon following ideas of Vladimir Drinfeld, arises from a geometric reformulation of the usual Langlands program that attempts to relate more than just irreducible representations. In simple cases, it relates l-adic representations of the étale fundamental group of an algebraic curve to objects of the derived category of l-adic sheaves on the moduli stack of vector bundles over the curve. - -The Langlands conjectures for GL(1, K) follow from (and are essentially equivalent to) class field theory. - -Langlands proved the Langlands conjectures for groups over the archimedean local fields $\mathbb{R}$ (the real numbers) and $\mathbb{C}$ by giving the Langlands classification of their irreducible representations. - -Lusztig's classification of the irreducible representations of groups of Lie type over finite fields can be considered an analogue of the Langlands conjectures for finite fields. - -Andrew Wiles' proof of modularity of semistable elliptic curves over rationals can be viewed as an instance of the Langlands reciprocity conjecture, since the main idea is to relate the Galois representations arising from elliptic curves to modular forms. Although Wiles' results have been substantially generalized, in many different directions, the full Langlands conjecture for $\text{GL}(2,\mathbb{Q})$ remains unproved. - -In 1998, Laurent Lafforgue proved Lafforgue's theorem verifying the Langlands conjectures for the general linear group GL(n, K) for function fields K. This work continued earlier investigations by Drinfeld, who proved the case GL(2, K) in the 1980s. 
- -In 2018, Vincent Lafforgue established the global Langlands correspondence (the direction from automorphic forms to Galois representations) for connected reductive groups over global function fields. - -proved the local Langlands conjectures for the general linear group GL(2, K) over local fields. - -proved the local Langlands conjectures for the general linear group GL(n, K) for positive characteristic local fields K. Their proof uses a global argument. - -proved the local Langlands conjectures for the general linear group GL(n, K) for characteristic 0 local fields K. gave another proof. Both proofs use a global argument. gave another proof. - -In 2008, Ngô Bảo Châu proved the "fundamental lemma", which was originally conjectured by Langlands and Shelstad in 1983 and is required in the proof of some important conjectures in the Langlands program. - -To a lay reader or even nonspecialist mathematician, abstractions within the Langlands program can be somewhat impenetrable. However, there are some strong and clear implications for proof or disproof of the fundamental Langlands conjectures. - -As the program posits a powerful connection between analytic number theory and generalizations of algebraic geometry, the idea of 'functoriality' between abstract algebraic representations of number fields and their analytical prime constructions results in powerful functional tools allowing an exact quantification of prime distributions. This in turn yields the capacity for classification of diophantine equations and further abstractions of algebraic functions. - -Furthermore, if the reciprocity of such generalized algebras for the posited objects exists, and if their analytical functions can be shown to be well defined, some very deep results in mathematics can be within reach of proof, such as rational solutions of elliptic curves, topological construction of algebraic varieties, and the famous Riemann hypothesis, each of which relates to the invariance within structures of number fields. - -Additionally, some connections between the Langlands program and M theory have been posited, as their dualities connect in nontrivial ways, providing potential exact solutions in superstring theory (as was similarly done in group theory through monstrous moonshine). - -Simply put, the Langlands project implies a deep and powerful framework of solutions that touches the most fundamental areas of mathematics, through high-order generalizations in exact solutions of algebraic equations, with analytical functions, as embedded in geometric forms. It allows a unification of many distant mathematical fields into a formalism of powerful analytical methods. diff --git a/wiki/wikipedia/3038.txt b/wiki/wikipedia/3038.txt deleted file mode 100644 index 5bdd05153e133191eadd51965c5acccf2d356e0d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3038.txt +++ /dev/null @@ -1,7 +0,0 @@ -Brian Conrad (born November 20, 1970) is an American mathematician and number theorist, working at Stanford University. Previously, he taught at the University of Michigan and at Columbia University. - -Conrad and others proved the modularity theorem, also known as the Taniyama-Shimura Conjecture. He proved this in 1999 with Christophe Breuil, Fred Diamond and Richard Taylor, while holding a joint postdoctoral position at Harvard University and the Institute for Advanced Study in Princeton, New Jersey. - -Conrad received his bachelor's degree from Harvard in 1992, where he won a prize for his undergraduate thesis.
He did his doctoral work under Andrew Wiles and went on to receive his Ph.D. from Princeton University in 1996 with a dissertation entitled Finite Honda Systems And Supersingular Elliptic Curves. He was also featured as an extra in Nova's The Proof. - -His identical twin brother Keith Conrad, also a number theorist, is a professor at the University of Connecticut. diff --git a/wiki/wikipedia/3039.txt b/wiki/wikipedia/3039.txt deleted file mode 100644 index 502234c2490a9207ec5fa1cb09e38020710a89e3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3039.txt +++ /dev/null @@ -1,106 +0,0 @@ -As applied in the field of computer vision, graph cut optimization can be employed to efficiently solve a wide variety of low-level computer vision problems (early vision), such as image smoothing, the stereo correspondence problem, image segmentation, object co-segmentation, and many other computer vision problems that can be formulated in terms of energy minimization. Many of these energy minimization problems can be approximated by solving a maximum flow problem in a graph (and thus, by the max-flow min-cut theorem, define a minimal cut of the graph). Under most formulations of such problems in computer vision, the minimum energy solution corresponds to the maximum a posteriori estimate of a solution. Although many computer vision algorithms involve cutting a graph (e.g., normalized cuts), the term "graph cuts" is applied specifically to those models which employ a max-flow/min-cut optimization (other graph cutting algorithms may be considered as graph partitioning algorithms). - -"Binary" problems (such as denoising a binary image) can be solved exactly using this approach; problems where pixels can be labeled with more than two different labels (such as stereo correspondence, or denoising of a grayscale image) cannot be solved exactly, but solutions produced are usually near the global optimum. - -The theory of graph cuts used as an optimization method was first applied in computer vision in the seminal paper by Greig, Porteous and Seheult of Durham University. Allan Seheult and Bruce Porteous were members of Durham's lauded statistics group of the time, led by Julian Besag and Peter Green (statistician), with the optimisation expert Margaret Greig notable as the first ever female member of staff of the Durham Mathematical Sciences Department. - -In the Bayesian statistical context of smoothing noisy (or corrupted) images, they showed how the maximum a posteriori estimate of a binary image can be obtained exactly by maximizing the flow through an associated image network, involving the introduction of a source and sink. The problem was therefore shown to be efficiently solvable. Prior to this result, approximate techniques such as simulated annealing (as proposed by the Geman brothers), or iterated conditional modes (a type of greedy algorithm as suggested by Julian Besag) were used to solve such image smoothing problems. - -Although the general $k$-colour problem remains unsolved for $k > 2,$ the approach of Greig, Porteous and Seheult has been shown to have wide applicability in general computer vision problems. Their approach is often applied iteratively to a sequence of binary problems, usually yielding near optimal solutions. - -In 2011, C. Couprie et al.
proposed a general image segmentation framework, called the "Power Watershed", that minimized a real-valued indicator function from [0,1] over a graph, constrained by user seeds (or unary terms) set to 0 or 1, in which the minimization of the indicator function over the graph is optimized with respect to an exponent $p$. When $p=1$, the Power Watershed is optimized by graph cuts, when $p=0$ the Power Watershed is optimized by shortest paths, $p=2$ is optimized by the Random walker algorithm and $p=\infty$ is optimized by the Watershed (image processing) algorithm. In this way, the Power Watershed may be viewed as a generalization of graph cuts that provides a straightforward connection with other energy optimization segmentation/clustering algorithms. - -* Image: $x \in \{R,G,B\}^N$ - -* Output: Segmentation (also called opacity) $S \in R^N$ (soft segmentation). For hard segmentation $S \in \{0 \text{ for background}, 1 \text{ for foreground/object to be detected}\}^N$ - -* Energy function: $E(x, S, C, \lambda)$ where C is the color parameter and λ is the coherence parameter. - -* $E(x,S,C,\lambda)=E_{\rm color} + E_{\rm coherence}$ - -* Optimization: The segmentation can be estimated as a global minimum over S: ${\arg\min}_S E(x, S, C, \lambda)$ - -* Standard Graph cuts: optimize energy function over the segmentation (unknown S value). - -* Iterated Graph cuts: - -# First step optimizes over the color parameters using K-means. - -# Second step performs the usual graph cuts algorithm. - -These 2 steps are repeated recursively until convergence. - -* Dynamic graph cuts:
    Allows the algorithm to be re-run much faster after modifying the problem (e.g. after new seeds have been added by a user). -$$ -\Pr(x\mid S) = K^{-E} -$$ - -where the energy $E$ is composed of two different models ($E_{\rm color}$ and $E_{\rm coherence}$): -$$ -E_{\rm color} -$$ — unary term describing the likelihood of each color. - -* This term can be modeled using different local or global (e.g. histograms, GMMs, AdaBoost likelihood) approaches that are described below. - -* We use intensities of pixels marked as seeds to get histograms for object (foreground) and background intensity distributions: P(I|O) and P(I|B). - -* Then, we use these histograms to set the regional penalties as negative log-likelihoods. - -* We usually use two distributions: one for background modelling and another for foreground pixels. - -* Use a Gaussian mixture model (with 5–8 components) to model those 2 distributions. - -* Goal: Try to pull apart those two distributions. - -* A (or ) is a set of pixels that has certain characteristics and is repeated in an image. - -* Steps: - -# Determine a good natural scale for the texture elements. - -# Compute non-parametric statistics of the model-interior, either on intensity or on Gabor filter responses. -$$ -E_{\rm coherence} -$$ — binary term describing the coherence between neighborhood pixels. - -* In practice, pixels are defined as neighbors if they are adjacent either horizontally, vertically or diagonally (4-way connectivity or 8-way connectivity for 2D images). - -* Costs can be based on local intensity gradient, Laplacian zero-crossing, gradient direction, color mixture model,... - -* Different energy functions have been defined: - -** Standard Markov random field: Associate a penalty to disagreeing pixels by evaluating the difference between their segmentation label (crude measure of the length of the boundaries). See Boykov and Kolmogorov ICCV 2003 - -** Conditional random field: If the color is very different, it might be a good place to put a boundary. See Lafferty et al. 2001; Kumar and Hebert 2003 - -Graph cuts methods have become popular alternatives to the level set-based approaches for optimizing the location of a contour (see for an extensive comparison). However, graph cut approaches have been criticized in the literature for several issues: - -* Metrication artifacts: When an image is represented by a 4-connected lattice, graph cuts methods can exhibit unwanted "blockiness" artifacts. Various methods have been proposed for addressing this issue, such as using additional edges or by formulating the max-flow problem in continuous space. - -* Shrinking bias: Since graph cuts finds a minimum cut, the algorithm can be biased toward producing a small contour. For example, the algorithm is not well-suited for segmentation of thin objects like blood vessels (see for a proposed fix). - -* Multiple labels: Graph cuts is only able to find a global optimum for binary labeling (i.e., two labels) problems, such as foreground/background image segmentation. Extensions have been proposed that can find approximate solutions for multilabel graph cuts problems. - -* Memory: the memory usage of graph cuts increases quickly as the image size increases. As an illustration, the Boykov-Kolmogorov max-flow algorithm v2.2 allocates $24n+14m$ bytes ($n$ and $m$ are respectively the number of nodes and edges in the graph).
Nevertheless, some work has recently been done in this direction to reduce the graphs before the maximum-flow computation. - -* Minimization is done using a standard minimum cut algorithm. - -* Due to the max-flow min-cut theorem, we can solve energy minimization by maximizing the flow over the network. The max-flow problem consists of a directed graph with edges labeled with capacities, and there are two distinct nodes: the source and the sink. Intuitively, it's easy to see that the maximum flow is determined by the bottleneck. - -The Boykov-Kolmogorov algorithm is an efficient way to compute the max-flow for computer vision related graphs. - -The Sim Cut algorithm approximates the graph cut. The algorithm implements a solution by simulation of an electrical network. This is the approach suggested by Cederbaum's maximum flow theorem. Acceleration of the algorithm is possible through parallel computing. - -* — An implementation of the maxflow algorithm described in "An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization in Computer Vision" by Vladimir Kolmogorov - -* — some graph cut libraries and MATLAB wrappers - -* — fast multi-core max-flow/min-cut solver optimized for grid-like graphs - -* — An implementation of the Sim Cut; an algorithm for computing an approximate solution of the minimum s-t cut in a massively parallel manner. diff --git a/wiki/wikipedia/304.txt b/wiki/wikipedia/304.txt deleted file mode 100644 index 8dede8dc396533d9ddb85f700c028de0da089052..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/304.txt +++ /dev/null @@ -1,90 +0,0 @@ -In mathematics, specifically in differential topology, Morse theory enables one to analyze the topology of a manifold by studying differentiable functions on that manifold. According to the basic insights of Marston Morse, a typical differentiable function on a manifold will reflect the topology quite directly. Morse theory allows one to find CW structures and handle decompositions on manifolds and to obtain substantial information about their homology. - -Before Morse, Arthur Cayley and James Clerk Maxwell had developed some of the ideas of Morse theory in the context of topography. Morse originally applied his theory to geodesics (critical points of the energy functional on paths). These techniques were used in Raoul Bott's proof of his periodicity theorem. - -The analogue of Morse theory for complex manifolds is Picard–Lefschetz theory. - -Consider, for purposes of illustration, a mountainous landscape $M.$ If $f$ is the function $M \to \mathbb{R}$ sending each point to its elevation, then the inverse image of a point in $\mathbb{R}$ is a contour line (more generally, a level set). Each connected component of a contour line is either a point, a simple closed curve, or a closed curve with a double point. Contour lines may also have points of higher order (triple points, etc.), but these are unstable and may be removed by a slight deformation of the landscape. Double points in contour lines occur at saddle points, or passes. Saddle points are points where the surrounding landscape curves up in one direction and down in the other. - -Imagine flooding this landscape with water. Then, the region covered by water when the water reaches an elevation of $a$ is $f^{-1}(-\infty, a]$, or the points with elevation less than or equal to $a.$ Consider how the topology of this region changes as the water rises.
It appears, intuitively, that it does not change except when $a$ passes the height of a critical point; that is, a point where the gradient of $f$ is $0$ (that is, the Jacobian matrix acting as a linear map from the tangent space at that point to the tangent space at its image under the map $f$ does not have maximal rank). In other words, it does not change except when the water either (1) starts filling a basin, (2) covers a saddle (a mountain pass), or (3) submerges a peak. - -To each of these three types of critical points – basins, passes, and peaks (also called minima, saddles, and maxima) – one associates a number called the index. Intuitively speaking, the index of a critical point $b$ is the number of independent directions around $b$ in which $f$ decreases. More precisely the index of a non-degenerate critical point $b$ of $f$ is the dimension of the largest subspace of the tangent space to $M$ at $b$ on which the Hessian of $f$ is negative definite. Therefore, the indices of basins, passes, and peaks are $0, 1,$ and $2,$ respectively. - -Define $M^a$ as $f^{-1}(-\infty, a]$. Leaving the context of topography, one can make a similar analysis of how the topology of $M^a$ changes as $a$ increases when $M$ is a torus oriented as in the image and $f$ is projection on a vertical axis, taking a point to its height above the plane. - -Starting from the bottom of the torus, let $p, q, r,$ and $s$ be the four critical points of index $0, 1, 1,$ and $2,$ respectively. When $a$ is less than $f(p) = 0,$ then $M^a$ is the empty set. After $a$ passes the level of $p,$ when $0 < a < f(q),$ then $M^a$ is a disk, which is homotopy equivalent to a point (a 0-cell), which has been "attached" to the empty set. Next, when $a$ exceeds the level of $q,$ and $f(q) < a < f(r),$ then $M^a$ is a cylinder, and is homotopy equivalent to a disk with a 1-cell attached (image at left). Once $a$ passes the level of $r,$ and $f(r) < a < f(s),$ then $M^a$ is a torus with a disk removed, which is homotopy equivalent to a cylinder with a 1-cell attached (image at right). Finally, when $a$ is greater than the critical level of $s,$ $M^a$ is a torus. A torus, of course, is the same as a torus with a disk removed with a disk (a 2-cell) attached. - -One therefore appears to have the following rule: the topology of $M^{\alpha}$ does not change except when $\alpha$ passes the height of a critical point, and when $\alpha$ passes the height of a critical point of index $\gamma$, a $\gamma$-cell is attached to $M^{\alpha}.$ This does not address the question of what happens when two critical points are at the same height. That situation can be resolved by a slight perturbation of $f.$ In the case of a landscape (or a manifold embedded in Euclidean space), this perturbation might simply be tilting the landscape slightly, or rotating the coordinate system. - -One should be careful and verify the non-degeneracy of critical points. To see what can pose a problem, let $M = \R$ and let $f(x) = x^3.$ Then $0$ is a critical point of $f,$ but the topology of $M^{\alpha}$ does not change when $\alpha$ passes $0.$ The problem is that the second derivative of $f$ is also $0$ at $0,$ that is, the Hessian of $f$ vanishes and this critical point is degenerate. Note that this situation is unstable: by slightly deforming $f,$ the degenerate critical point is either removed or breaks up into two non-degenerate critical points.
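This instability can be seen symbolically. A minimal sympy sketch, perturbing $f(x) = x^3$ to $x^3 - \varepsilon x$; the particular value of ε is an arbitrary small choice.

```python
import sympy as sp

x = sp.symbols('x', real=True)
eps = sp.Rational(1, 100)                   # small perturbation parameter

f = x**3                                    # degenerate critical point at 0
print(sp.solve(f.diff(x), x),               # [0]
      f.diff(x, 2).subs(x, 0))              # 0: the Hessian vanishes

g = x**3 - eps * x                          # perturbed function
for c in sp.solve(g.diff(x), x):            # +/- sqrt(eps/3)
    print(c, g.diff(x, 2).subs(x, c))       # nonzero second derivative at
                                            # both: two non-degenerate points
```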
- -For a real-valued smooth function $f : M \to \R$ on a differentiable manifold $M,$ the points where the differential of $f$ vanishes are called critical points of $f$ and their images under $f$ are called critical values. If at a critical point $b,$ the matrix of second partial derivatives (the Hessian matrix) is non-singular, then $b$ is called a non-degenerate critical point; if the Hessian is singular then $b$ is a degenerate critical point. - -For the functions - -f(x)=a + b x+ c x^2+d x^3+\cdots - -from $\R$ to $\R,$ $f$ has a critical point at the origin if $b = 0,$ which is non-degenerate if $c \neq 0$ (that is, $f$ is of the form $a + c x ^2 + \cdots$) and degenerate if $c = 0$ (that is, $f$ is of the form $a + dx^3 + \cdots$). A less trivial example of a degenerate critical point is the origin of the monkey saddle. - -The index of a non-degenerate critical point $b$ of $f$ is the dimension of the largest subspace of the tangent space to $M$ at $b$ on which the Hessian is negative definite. This corresponds to the intuitive notion that the index is the number of directions in which $f$ decreases. The degeneracy and index of a critical point are independent of the choice of the local coordinate system used, as shown by Sylvester's law of inertia. - -Let $b$ be a non-degenerate critical point of $f : M \to \R.$ Then there exists a chart $\left(x_1, x_2, \ldots, x_n\right)$ in a neighborhood $U$ of $b$ such that $x_i(b) = 0$ for all $i$ and - -f(x) = f(b) - x_1^2 - \cdots - x_{\alpha}^2 + x_{\alpha +1}^2 + \cdots + x_n^2 - -throughout $U.$ Here $\alpha$ is equal to the index of $f$ at $b.$ As a corollary of the Morse lemma, one sees that non-degenerate critical points are isolated. (Regarding an extension to the complex domain see Complex Morse Lemma. For a generalization, see Morse–Palais lemma). - -A smooth real-valued function on a manifold $M$ is a Morse function if it has no degenerate critical points. A basic result of Morse theory says that almost all functions are Morse functions. Technically, the Morse functions form an open, dense subset of all smooth functions $M \to \R$ in the $C^2$ topology. This is sometimes expressed as "a typical function is Morse" or "a generic function is Morse". - -As indicated before, we are interested in the question of when the topology of $M^a = f^{-1}(-\infty, a]$ changes as $a$ varies. Half of the answer to this question is given by the following theorem. - -Theorem. Suppose $f$ is a smooth real-valued function on $M,$ $a < b,$ $f^{-1}[a, b]$ is compact, and there are no critical values between $a$ and $b.$ Then $M^a$ is diffeomorphic to $M^b,$ and $M^b$ deformation retracts onto $M^a.$ - -It is also of interest to know how the topology of $M^a$ changes when $a$ passes a critical point. The following theorem answers that question. - -Theorem. Suppose $f$ is a smooth real-valued function on $M$ and $p$ is a non-degenerate critical point of $f$ of index $\gamma,$ and that $f(p) = q.$ Suppose $f^{-1}[q - \varepsilon, q + \varepsilon]$ is compact and contains no critical points besides $p.$ Then $M^{q + \varepsilon}$ is homotopy equivalent to $M^{q - \varepsilon}$ with a $\gamma$-cell attached. - -These results generalize and formalize the 'rule' stated in the previous section.
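The definitions above translate directly into a computation: find the zeros of the gradient, then count the negative eigenvalues of the Hessian. A sympy sketch for an illustrative Morse function on $\R^2$; the choice $f(x,y) = (x^2-1)^2 + y^2$ is ours.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = (x**2 - 1)**2 + y**2          # a Morse function with three critical points

grad = [f.diff(v) for v in (x, y)]
H = sp.hessian(f, (x, y))

for sol in sp.solve(grad, [x, y], dict=True):
    Hc = H.subs(sol)
    # index = number of negative eigenvalues, counted with multiplicity
    index = sum(m for ev, m in Hc.eigenvals().items() if ev < 0)
    print(sol, 'non-degenerate:', Hc.det() != 0, 'index:', index)
# two minima (index 0) at (+/-1, 0) and one saddle (index 1) at (0, 0)
```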
- -Using the two previous results and the fact that there exists a Morse function on any differentiable manifold, one can prove that any differentiable manifold is a CW complex with an $n$-cell for each critical point of index $n.$ To do this, one needs the technical fact that one can arrange to have a single critical point on each critical level, which is usually proven by using gradient-like vector fields to rearrange the critical points. - -Morse theory can be used to prove some strong results on the homology of manifolds. The number of critical points of index $\gamma$ of $f : M \to \R$ is equal to the number of $\gamma$ cells in the CW structure on $M$ obtained from "climbing" $f.$ Using the fact that the alternating sum of the ranks of the homology groups of a topological space is equal to the alternating sum of the ranks of the chain groups from which the homology is computed, then by using the cellular chain groups (see cellular homology) it is clear that the Euler characteristic $\chi(M)$ is equal to the sum - -\sum(-1)^\gamma C^\gamma = \chi(M) - -where $C^{\gamma}$ is the number of critical points of index $\gamma.$ Also by cellular homology, the rank of the $n$th homology group of a CW complex $M$ is less than or equal to the number of $n$-cells in $M.$ Therefore, the rank of the $\gamma$th homology group, that is, the Betti number $b_\gamma(M)$, is less than or equal to the number of critical points of index $\gamma$ of a Morse function on $M.$ These facts can be strengthened to obtain the Morse inequalities: - -C^\gamma -C^{\gamma -1} \pm \cdots + (-1)^\gamma C^0 \geq b_\gamma(M)-b_{\gamma-1}(M) \pm \cdots + (-1)^\gamma b_0(M). - -In particular, for any - -\gamma \in \{0, \ldots, n = \dim M\}, - -one has - -C^\gamma \geq b_\gamma(M). - -This gives a powerful tool to study manifold topology. Suppose on a closed manifold there exists a Morse function $f : M \to \R$ with precisely k critical points. In what way does the existence of the function $f$ restrict $M$? The case $k = 2$ was studied by Georges Reeb in 1952; the Reeb sphere theorem states that $M$ is homeomorphic to a sphere $S^n.$ The case $k = 3$ is possible only in a small number of low dimensions, and M is homeomorphic to an Eells–Kuiper manifold. - -In 1982 Edward Witten developed an analytic approach to the Morse inequalities by considering the de Rham complex for the perturbed operator -$$ -d_t = e^{-tf} d e^{tf}. -$$ - -Morse theory has been used to classify closed 2-manifolds up to diffeomorphism. If $M$ is oriented, then $M$ is classified by its genus $g$ and is diffeomorphic to a sphere with $g$ handles: thus if $g = 0,$ $M$ is diffeomorphic to the 2-sphere; and if $g > 0,$ $M$ is diffeomorphic to the connected sum of $g$ 2-tori. If $N$ is unorientable, it is classified by a number $g > 0$ and is diffeomorphic to the connected sum of $g$ real projective spaces $\mathbf{RP}^2.$ In particular two closed 2-manifolds are homeomorphic if and only if they are diffeomorphic. - -Morse homology is a particularly easy way to understand the homology of smooth manifolds. It is defined using a generic choice of Morse function and Riemannian metric. The basic theorem is that the resulting homology is an invariant of the manifold (that is, independent of the function and metric) and isomorphic to the singular homology of the manifold; this implies that the Morse and singular Betti numbers agree and gives an immediate proof of the Morse inequalities.
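For the torus height function from the earlier example (one minimum, two saddles, one maximum) these relations can be checked at once. A tiny Python sketch, taking the Betti numbers 1, 2, 1 of the torus as known:

```python
C = [1, 2, 1]   # critical points of index 0, 1, 2 for the torus height function
b = [1, 2, 1]   # Betti numbers of the torus

# Euler characteristic as the alternating sum of critical-point counts
print(sum((-1)**g * c for g, c in enumerate(C)))          # 0 = chi(torus)

# Weak Morse inequalities: C_gamma >= b_gamma
print(all(c >= bb for c, bb in zip(C, b)))                # True

# Strong Morse inequalities: alternating partial sums
for g in range(len(C)):
    lhs = sum((-1)**(g - i) * C[i] for i in range(g + 1))
    rhs = sum((-1)**(g - i) * b[i] for i in range(g + 1))
    print(g, lhs >= rhs)                                   # True each time
```

Here every inequality is tight, which reflects the fact that the height function on the torus is a "perfect" Morse function.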
An infinite dimensional analog of Morse homology in symplectic geometry is known as Floer homology. - -The notion of a Morse function can be generalized to consider functions that have nondegenerate manifolds of critical points. A Morse–Bott function is a smooth function on a manifold whose critical set is a closed submanifold and whose Hessian is non-degenerate in the normal direction. (Equivalently, the kernel of the Hessian at a critical point equals the tangent space to the critical submanifold.) A Morse function is the special case where the critical manifolds are zero-dimensional (so the Hessian at critical points is non-degenerate in every direction, that is, has no kernel). - -The index is most naturally thought of as a pair - -\left(i_-, i_+\right), - -where $i_-$ is the dimension of the unstable manifold at a given point of the critical manifold, and $i_+$ is equal to $i_-$ plus the dimension of the critical manifold. If the Morse–Bott function is perturbed by a small function on the critical locus, the index of all critical points of the perturbed function on a critical manifold of the unperturbed function will lie between $i_-$ and $i_+.$ - -Morse–Bott functions are useful because generic Morse functions are difficult to work with; the functions one can visualize, and with which one can easily calculate, typically have symmetries. They often lead to positive-dimensional critical manifolds. Raoul Bott used Morse–Bott theory in his original proof of the Bott periodicity theorem. - -Round functions are examples of Morse–Bott functions, where the critical sets are (disjoint unions of) circles. - -Morse homology can also be formulated for Morse–Bott functions; the differential in Morse–Bott homology is computed by a spectral sequence. Frederic Bourgeois sketched an approach in the course of his work on a Morse–Bott version of symplectic field theory, but this work was never published due to substantial analytic difficulties. diff --git a/wiki/wikipedia/3040.txt b/wiki/wikipedia/3040.txt deleted file mode 100644 index 589d7cb1b6dc1eeb37bc35918fc996a24339af16..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3040.txt +++ /dev/null @@ -1,25 +0,0 @@ -In mathematics, George Glauberman's Z* theorem is stated as follows: - -
    Z* theorem: Let G be a finite group, with O(G) being its maximal normal subgroup of odd order. If T is a Sylow 2-subgroup of G containing an involution not conjugate in G to any other element of T, then the involution lies in Z*(G), which is the inverse image in G of the center of G/O(G).
    - -This generalizes the Brauer–Suzuki theorem (and the proof uses the Brauer–Suzuki theorem to deal with some small cases). - -The original paper gave several criteria for an element to lie outside Z*(G). Its theorem 4 states: - -
    For an element t in T, it is necessary and sufficient for t to lie outside Z*(G) that there is some g in G and abelian subgroup U of T satisfying the following properties: - -# g normalizes both U and the centralizer CT(U), that is g is contained in N = NG(U) ∩ NG(CT(U)) - -# t is contained in U and tg ≠ gt - -# U is generated by the N-conjugates of t - -# the exponent of U is equal to the order of t - -Moreover g may be chosen to have prime power order if t is in the center of T, and g may be chosen in T otherwise.
    - -A simple corollary is that an element t in T is not in Z*(G) if and only if there is some s ≠ t such that s and t commute and s and t are G-conjugate. - -A generalization to odd primes was recorded in : if t is an element of prime order p and the commutator [t, g] has order coprime to p for all g, then t is central modulo the p′-core. This was also generalized to odd primes and to compact Lie groups in , which also contains several useful results in the finite case. - -have also studied an extension of the Z* theorem to pairs of groups (G, H) with H a normal subgroup of G. diff --git a/wiki/wikipedia/3041.txt b/wiki/wikipedia/3041.txt deleted file mode 100644 index 22b464f84b5a26273fd68a91b99c97fdddc85ce1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3041.txt +++ /dev/null @@ -1,33 +0,0 @@ -In mathematical logic, independence is the unprovability of a sentence from other sentences. - -A sentence σ is independent of a given first-order theory T if T neither proves nor refutes σ; that is, it is impossible to prove σ from T, and it is also impossible to prove from T that σ is false. Sometimes, σ is said (synonymously) to be undecidable from T; this is not the same meaning of "decidability" as in a decision problem. - -A theory T is independent if each axiom in T is not provable from the remaining axioms in T. A theory for which there is an independent set of axioms is independently axiomatizable. - -Some authors say that σ is independent of T when T simply cannot prove σ, and do not necessarily assert by this that T cannot refute σ. These authors will sometimes say "σ is independent of and consistent with T" to indicate that T can neither prove nor refute σ. - -Many interesting statements in set theory are independent of Zermelo–Fraenkel set theory (ZF). The following statements in set theory are known to be independent of ZF, under the assumption that ZF is consistent: - -*The axiom of choice - -*The continuum hypothesis and the generalized continuum hypothesis - -*The Suslin conjecture - -The following statements (none of which have been proved false) cannot be proved in ZFC (the Zermelo-Fraenkel set theory plus the axiom of choice) to be independent of ZFC, under the added hypothesis that ZFC is consistent. - -*The existence of strongly inaccessible cardinals - -*The existence of large cardinals - -*The non-existence of Kurepa trees - -The following statements are inconsistent with the axiom of choice, and therefore with ZFC. However they are probably independent of ZF, in a corresponding sense to the above: They cannot be proved in ZF, and few working set theorists expect to find a refutation in ZF. However ZF cannot prove that they are independent of ZF, even with the added hypothesis that ZF is consistent. - -*The axiom of determinacy - -*The axiom of real determinacy - -*AD+ - -Since 2000, logical independence has become understood as having crucial significance in the foundations of physics. diff --git a/wiki/wikipedia/3042.txt b/wiki/wikipedia/3042.txt deleted file mode 100644 index cb78d647e23f8b3779ad06ecb11ddd7aae93bc8c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3042.txt +++ /dev/null @@ -1,5 +0,0 @@ -Graph matching is the problem of finding a similarity between graphs. - -Graphs are commonly used to encode structural information in many fields, including computer vision and pattern recognition, and graph matching is an important tool in these areas. 
In these areas it is commonly assumed that the comparison is between the data graph and the model graph. - -The case of exact graph matching is known as the graph isomorphism problem. When an exact match is not required, the class of algorithms that tolerates differences between the compared graphs is called error-tolerant graph matching. diff --git a/wiki/wikipedia/3043.txt b/wiki/wikipedia/3043.txt deleted file mode 100644 index 2a965c3ee8e36dafc53766db45647d3ce1c8e2e1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3043.txt +++ /dev/null @@ -1,61 +0,0 @@ -Brauer's main theorems are three theorems in representation theory of finite groups linking the blocks of a finite group (in characteristic p) with those of its p-local subgroups, that is to say, the normalizers of its non-trivial p-subgroups. - -The second and third main theorems allow refinements of orthogonality relations for ordinary characters which may be applied in finite group theory. These do not presently admit a proof purely in terms of ordinary characters. - -All three main theorems are stated in terms of the Brauer correspondence. - -There are many ways to extend the definition which follows, but this is close to the early treatments - -by Brauer. Let G be a finite group, p be a prime, F be a field of characteristic p. - -Let H be a subgroup of G which contains -$$ -QC_G(Q) -$$ - -for some p-subgroup Q - -of G, and is contained in the normalizer -$$ -N_G(Q) -$$, - -where $C_G(Q)$ is the centralizer of Q in G. - -The Brauer homomorphism (with respect to H) is a linear map from the center of the group algebra of G over F to the corresponding algebra for H. Specifically, it is the restriction to -$$ -Z(FG) -$$ of the (linear) projection from $FG$ to $FC_G(Q)$ whose - -kernel is spanned by the elements of G outside $C_G(Q)$. The image of this map is contained in -$$ -Z(FH) -$$, and it transpires that the map is also a ring homomorphism. - -Since it is a ring homomorphism, for any block B of FG, the Brauer homomorphism - -sends the identity element of B either to 0 or to an idempotent element. In the latter case, - -the idempotent may be decomposed as a sum of (mutually orthogonal) primitive idempotents of Z(FH). - -Each of these primitive idempotents is the multiplicative identity of some block of FH. The block b of FH is said to be a Brauer correspondent of B if its identity element occurs - -in this decomposition of the image of the identity of B under the Brauer homomorphism. - -Brauer's first main theorem states that if $G$ is a finite group and $D$ is a $p$-subgroup of $G$, then there is a bijection between the set of - -(characteristic p) blocks of $G$ with defect group $D$ and blocks of the normalizer $N_G(D)$ with - -defect group D. This bijection arises because when $H = N_G(D)$, each block of G - -with defect group D has a unique Brauer correspondent block of H, which also has defect - -group D. - -Brauer's second main theorem gives, for an element t whose order is a power of a prime p, a criterion for a (characteristic p) block of $C_G(t)$ to correspond to a given block of $G$, via generalized decomposition numbers. These are the coefficients which occur when the restrictions of ordinary characters of $G$ (from the given block) to elements of the form tu, where u ranges over elements of order prime to p in $C_G(t)$, are written as linear combinations of the irreducible Brauer characters of $C_G(t)$. The content of the theorem is that it is only necessary to use Brauer characters from blocks of $C_G(t)$ which are Brauer correspondents of the chosen block of G.
- -Brauer's third main theorem states that when Q is a p-subgroup of the finite group G, - -and H is a subgroup of G, containing $QC_G(Q)$, and contained in $N_G(Q)$, - -then the principal block of H is the only Brauer correspondent of the principal block of G (where the blocks referred to are calculated in characteristic p). diff --git a/wiki/wikipedia/3044.txt b/wiki/wikipedia/3044.txt deleted file mode 100644 index 58ec514b7ce5549deb7b5e37053dcba42654c748..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3044.txt +++ /dev/null @@ -1,16 +0,0 @@ -In probability theory, Etemadi's inequality is a so-called "maximal inequality", an inequality that gives a bound on the probability that the partial sums of a finite collection of independent random variables exceed some specified bound. The result is due to Nasrollah Etemadi. - -Let X1, ..., Xn be independent real-valued random variables defined on some common probability space, and let α ≥ 0. Let Sk denote the partial sum -$$ -S_k = X_1 + \cdots + X_k. -$$ - -Then -$$ -\Pr \Bigl( \max_{1 \leq k \leq n} | S_k | \geq 3 \alpha \Bigr) \leq 3 \max_{1 \leq k \leq n} \Pr \bigl( | S_k | \geq \alpha \bigr). -$$ - -Suppose that the random variables Xk have common expected value zero. Apply Chebyshev's inequality to the right-hand side of Etemadi's inequality and replace α by α / 3. The result is Kolmogorov's inequality with an extra factor of 27 on the right-hand side: -$$ - \Pr \Bigl( \max_{1 \leq k \leq n} | S_k | \geq \alpha \Bigr) \leq \frac{27}{\alpha^2} \operatorname{var} (S_n). -$$ diff --git a/wiki/wikipedia/3045.txt b/wiki/wikipedia/3045.txt deleted file mode 100644 index 8bd88d670d215aa674a6c93b6e632b5f515811a0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3045.txt +++ /dev/null @@ -1,251 +0,0 @@ -In mathematics, the Pell numbers are an infinite sequence of integers, known since ancient times, that comprise the denominators of the closest rational approximations to the square root of 2. This sequence of approximations begins 1/1, 3/2, 7/5, 17/12, and 41/29, so the sequence of Pell numbers begins with 1, 2, 5, 12, and 29. The numerators of the same sequence of approximations are half the companion Pell numbers or Pell–Lucas numbers; these numbers form a second infinite sequence that begins with 2, 6, 14, 34, and 82. - -Both the Pell numbers and the companion Pell numbers may be calculated by means of a recurrence relation similar to that for the Fibonacci numbers, and both sequences of numbers grow exponentially, proportionally to powers of the silver ratio 1 + sqrt 2. As well as being used to approximate the square root of two, Pell numbers can be used to find square triangular numbers, to construct integer approximations to the right isosceles triangle, and to solve certain combinatorial enumeration problems. - -As with Pell's equation, the name of the Pell numbers stems from Leonhard Euler's mistaken attribution of the equation and the numbers derived from it to John Pell. The Pell–Lucas numbers are also named after Édouard Lucas, who studied sequences defined by recurrences of this type; the Pell and companion Pell numbers are Lucas sequences. - -The Pell numbers are defined by the recurrence relation: -$$ -P_n=\begin{cases}0&\mbox{if }n=0;\\1&\mbox{if }n=1;\\2P_{n-1}+P_{n-2}&\mbox{otherwise.}\end{cases} -$$ - -In words, the sequence of Pell numbers starts with 0 and 1, and then each Pell number is the sum of twice the previous Pell number and the Pell number before that. 
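To make the recurrence concrete, here is a minimal sketch in Python (our illustration, not part of the original text) that computes the sequence directly from the definition above:
```
def pell(n):
    a, b = 0, 1                # P(0), P(1)
    for _ in range(n):
        a, b = b, 2 * b + a    # P(k+1) = 2*P(k) + P(k-1)
    return a

print([pell(n) for n in range(13)])
# [0, 1, 2, 5, 12, 29, 70, 169, 408, 985, 2378, 5741, 13860]
```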
The first few terms of the sequence are - -0, 1, 2, 5, 12, 29, 70, 169, 408, 985, 2378, 5741, 13860,… . - -The Pell numbers can also be expressed by the closed form formula -$$ -P_n=\frac{\left(1+\sqrt2\right)^n-\left(1-\sqrt2\right)^n}{2\sqrt2}. -$$ - -For large values of n, the $\left(1+\sqrt2\right)^n$ term dominates this expression, so the Pell numbers are approximately proportional to powers of the silver ratio 1 + sqrt 2, analogous to the growth rate of Fibonacci numbers as powers of the golden ratio. - -A third definition is possible, from the matrix formula -$$ -\begin{pmatrix} P_{n+1} & P_n \\ P_n & P_{n-1} \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 0 \end{pmatrix}^n. -$$ - -Many identities can be derived or proven from these definitions; for instance an identity analogous to Cassini's identity for Fibonacci numbers, -$$ -P_{n+1}P_{n-1}-P_n^2 = (-1)^n, -$$ - -is an immediate consequence of the matrix formula (found by considering the determinants of the matrices on the left and right sides of the matrix formula). - -Pell numbers arise historically and most notably in the rational approximation to sqrt 2. If two large integers x and y form a solution to the Pell equation -$$ -x^2-2y^2=\pm 1, -$$ - -then their ratio x/y provides a close approximation to sqrt 2. The sequence of approximations of this form is -$$ -\frac11, \frac32, \frac75, \frac{17}{12}, \frac{41}{29}, \frac{99}{70}, \dots -$$ - -where the denominator of each fraction is a Pell number and the numerator is the sum of a Pell number and its predecessor in the sequence. That is, the solutions have the form -$$ -\frac{P_{n-1}+P_n}{P_n}. -$$ - -The approximation -$$ -\sqrt 2\approx\frac{577}{408} -$$ - -of this type was known to Indian mathematicians in the third or fourth century B.C. The Greek mathematicians of the fifth century B.C. also knew of this sequence of approximations: Plato refers to the numerators as rational diameters. In the 2nd century CE Theon of Smyrna used the term side and diameter numbers to describe the denominators and numerators of this sequence. - -These approximations can be derived from the continued fraction expansion of $\scriptstyle\sqrt 2$: -$$ -\sqrt 2 = 1 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \ddots}}}}}. -$$ - -Truncating this expansion to any number of terms produces one of the Pell-number-based approximations in this sequence; for instance, -$$ -\frac{577}{408} = 1 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2}}}}}}}. -$$ - -As Knuth (1994) describes, the fact that Pell numbers approximate sqrt 2 allows them to be used for accurate rational approximations to a regular octagon with vertex coordinates $(\pm P_i, \pm P_{i+1})$ and $(\pm P_{i+1}, \pm P_i)$. All vertices are equally distant from the origin, and form nearly uniform angles around the origin. Alternatively, the points $(\pm(P_i+P_{i-1}),0)$, $(0,\pm(P_i+P_{i-1}))$, and $(\pm P_i,\pm P_i)$ form approximate octagons in which the vertices are nearly equally distant from the origin and form uniform angles. - -A Pell prime is a Pell number that is prime. The first few Pell primes are - -2, 5, 29, 5741, 33461, 44560482149, 1746860020068409, 68480406462161287469, ... . - -The indices of these primes within the sequence of all Pell numbers are - -2, 3, 5, 11, 13, 29, 41, 53, 59, 89, 97, 101, 167, 181, 191, 523, 929, 1217, 1301, 1361, 2087, 2273, 2393, 8093, ... - -These indices are all themselves prime.
As with the Fibonacci numbers, a Pell number $P_n$ can only be prime if n itself is prime, because if d is a divisor of n then $P_d$ is a divisor of $P_n$. - -The only Pell numbers that are squares, cubes, or any higher power of an integer are 0, 1, and $169 = 13^2$. - -However, despite having so few squares or other powers, Pell numbers have a close connection to square triangular numbers. Specifically, these numbers arise from the following identity of Pell numbers: -$$ -\bigl(\left(P_{k-1}+P_k\right)\cdot P_k\bigr)^2 = \frac{\left(P_{k-1}+P_k\right)^2\cdot\left(\left(P_{k-1}+P_k\right)^2-(-1)^k\right)}{2}. -$$ - -The left side of this identity describes a square number, while the right side describes a triangular number, so the result is a square triangular number. - -Santana and Diaz-Barrero (2006) proved another identity relating Pell numbers to squares and showing that the sum of the Pell numbers up to $P_{4n+1}$ is always a square: -$$ -\sum_{i=0}^{4n+1} P_i = \left(\sum_{r=0}^n 2^r{2n+1\choose 2r}\right)^2 = \left(P_{2n}+P_{2n+1}\right)^2. -$$ - -For instance, the sum of the Pell numbers up to $P_5$, $0 + 1 + 2 + 5 + 12 + 29 = 49$, is the square of $P_2 + P_3 = 2 + 5 = 7$. The numbers $P_{2n} + P_{2n+1}$ forming the square roots of these sums, - -1, 7, 41, 239, 1393, 8119, 47321,… , - -are known as the Newman–Shanks–Williams (NSW) numbers. - -If a right triangle has integer side lengths a, b, c (necessarily satisfying the Pythagorean theorem $a^2 + b^2 = c^2$), then (a,b,c) is known as a Pythagorean triple. As Martin (1875) describes, the Pell numbers can be used to form Pythagorean triples in which a and b are one unit apart, corresponding to right triangles that are nearly isosceles. Each such triple has the form -$$ -\left(2P_{n}P_{n+1}, P_{n+1}^2 - P_{n}^2, P_{n+1}^2 + P_{n}^2=P_{2n+1}\right). -$$ - -The sequence of Pythagorean triples formed in this way is - -(4,3,5), (20,21,29), (120,119,169), (696,697,985),… - -The companion Pell numbers or Pell–Lucas numbers are defined by the recurrence relation -$$ -Q_n=\begin{cases}2&\mbox{if }n=0;\\2&\mbox{if }n=1;\\2Q_{n-1}+Q_{n-2}&\mbox{otherwise.}\end{cases} -$$ - -In words: the first two numbers in the sequence are both 2, and each successive number is formed by adding twice the previous Pell–Lucas number to the Pell–Lucas number before that, or equivalently, by adding the next Pell number to the previous Pell number: thus, 82 is the companion to 29, and 82 = 2 × 34 + 14 = 70 + 12. The first few terms of the sequence are: 2, 2, 6, 14, 34, 82, 198, 478,… - -Like the relationship between Fibonacci numbers and Lucas numbers, -$$ -Q_n=\frac{P_{2n}}{P_n} -$$ - -for all natural numbers n. - -The companion Pell numbers can be expressed by the closed form formula -$$ -Q_n=\left(1+\sqrt 2\right)^n+\left(1-\sqrt 2\right)^n. -$$ - -These numbers are all even; each such number is twice the numerator in one of the rational approximations to $\scriptstyle\sqrt 2$ discussed above. - -Like the Lucas sequence, if a Pell–Lucas number $\tfrac{1}{2}Q_n$ is prime, it is necessary that n be either prime or a power of 2. The Pell–Lucas primes are - -3, 7, 17, 41, 239, 577,… . - -The corresponding values of n are - -2, 3, 4, 5, 7, 8, 16, 19, 29, 47, 59, 163, 257, 421,… . - -The following table gives the first few powers of the silver ratio $\delta = \delta_S = 1 + \sqrt 2$ and its conjugate $\bar\delta = 1 - \sqrt 2$. - -The coefficients are the half-companion Pell numbers $H_n$ and the Pell numbers $P_n$ which are the (non-negative) solutions to $H^2 - 2P^2 = \pm 1$.
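This characterization is easy to check mechanically. In the following Python sketch (ours; the function name is an arbitrary choice), the pairs are generated by multiplying out $\left(1+\sqrt2\right)^n = H_n + P_n\sqrt2$ one factor at a time:
```
def silver_coefficients(count):
    h, p = 1, 0                      # (1 + sqrt 2)**0 = 1 + 0*sqrt 2
    for _ in range(count):
        yield h, p
        h, p = h + 2 * p, h + p      # multiply by (1 + sqrt 2)

for n, (h, p) in enumerate(silver_coefficients(8)):
    # H(n)^2 - 2*P(n)^2 alternates between +1 (n even) and -1 (n odd)
    assert h * h - 2 * p * p == (1 if n % 2 == 0 else -1)
```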
- -A square triangular number is a number -$$ -N=\frac{t(t+1)}{2}=s^2, -$$ - -which is both the tth triangular number and the sth square number. A near-isosceles Pythagorean triple is an integer solution to $a^2 + b^2 = c^2$ where $a + 1 = b$. - -The next table shows that splitting the odd number $H_n$ into nearly equal halves gives a square triangular number when n is even and a near-isosceles Pythagorean triple when n is odd. All solutions arise in this manner. - -The half-companion Pell numbers $H_n$ and the Pell numbers $P_n$ can be derived in a number of easily equivalent ways. -$$ -\left(1+\sqrt2\right)^n=H_n+P_n\sqrt{2} -$$ -$$ -\left(1-\sqrt2\right)^n=H_n-P_n\sqrt{2}. -$$ - -From this it follows that there are closed forms: -$$ -H_n=\frac{\left(1+\sqrt2\right)^n+\left(1-\sqrt2\right)^n}{2}. -$$ - -and -$$ -P_n\sqrt2=\frac{\left(1+\sqrt2\right)^n-\left(1-\sqrt2\right)^n}{2}. -$$ -$$ -H_n=\begin{cases}1&\mbox{if }n=0;\\H_{n-1}+2P_{n-1}&\mbox{otherwise.}\end{cases} -$$ -$$ -P_n=\begin{cases}0&\mbox{if }n=0;\\H_{n-1}+P_{n-1}&\mbox{otherwise.}\end{cases} -$$ - -Let n be at least 2. -$$ -H_n=(3P_n-P_{n-2})/2=3P_{n-1}+P_{n-2} -$$; -$$ -P_n=(3H_n-H_{n-2})/4=(3H_{n-1}+H_{n-2})/2 -$$. -$$ -\begin{pmatrix} H_n \\ P_n \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} H_{n-1} \\ P_{n-1} \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 1 & 1 \end{pmatrix}^n \begin{pmatrix} 1 \\ 0 \end{pmatrix}. -$$ - -So -$$ - \begin{pmatrix} H_n & 2P_n \\ P_n & H_n \end{pmatrix}= \begin{pmatrix} 1 & 2 \\ 1 & 1 \end{pmatrix}^n . -$$ - -The difference between $H_n$ and $P_n\sqrt 2$ is -$$ -\left(1-\sqrt2\right)^n \approx (-0.41421)^n, -$$ - -which goes rapidly to zero. So -$$ -\left(1+\sqrt2\right)^n=H_n+P_n\sqrt2 -$$ - -is extremely close to $2H_n$. - -From this last observation it follows that the integer ratios $H_n/P_n$ rapidly approach $\sqrt 2$; and $H_n/H_{n-1}$ and $P_n/P_{n-1}$ rapidly approach $1 + \sqrt 2$. - -Since $\sqrt 2$ is irrational, we cannot have $H/P = \sqrt 2$, i.e., -$$ -\frac{H^2}{P^2}=\frac{2P^2}{P^2}. -$$ - -The best we can achieve is either -$$ -\frac{H^2}{P^2}=\frac{2P^2-1}{P^2}\quad \mbox{or} \quad \frac{H^2}{P^2}=\frac{2P^2+1}{P^2}. -$$ - -The (non-negative) solutions to $H^2 - 2P^2 = 1$ are exactly the pairs $(H_n, P_n)$ with n even, and the solutions to $H^2 - 2P^2 = -1$ are exactly the pairs $(H_n, P_n)$ with n odd. To see this, note first that -$$ -H_{n+1}^2-2P_{n+1}^2=\left(H_n+2P_n\right)^2-2\left(H_n+P_n\right)^2=-\left(H_n^2-2P_n^2\right), -$$ - -so that these differences, starting with $H_0^2 - 2P_0^2 = 1$, are alternately 1 and −1. Then note that every positive solution comes in this way from a solution with smaller integers since -$$ -(2P-H)^2-2(H-P)^2=-\left(H^2-2P^2\right). -$$ - -The smaller solution also has positive integers, with the one exception $H = P = 1$, which comes from $H_0 = 1$ and $P_0 = 0$. - -The required equation -$$ -\frac{t(t+1)}{2}=s^2 -$$ - -is equivalent to -$$ -4t^2+4t+1=8s^2+1, -$$ - -which becomes $H^2 = 2P^2 + 1$ with the substitutions $H = 2t + 1$ and $P = 2s$. Hence the nth solution is -$$ -t_n=\frac{H_{2n}-1}{2} \quad \mbox{and} \quad s_n=\frac{P_{2n}}{2}. -$$ - -Observe that t and t + 1 are relatively prime, so that $t(t + 1)/2 = s^2$ happens exactly when they are adjacent integers, one a square $H^2$ and the other twice a square $2P^2$. Since we know all solutions of that equation, we also have -$$ -t_n=\begin{cases}2P_n^2&\mbox{if }n\mbox{ is even};\\H_{n}^2&\mbox{if }n\mbox{ is odd.}\end{cases} -$$ - -and $s_n=H_nP_n.$ - -This alternate expression is seen in the next table.
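A short Python sketch (again our illustration, in self-contained form) reads off the first few square triangular numbers from $t_n = (H_{2n} - 1)/2$ and $s_n = P_{2n}/2$:
```
def silver_coefficients():
    h, p = 1, 0
    while True:
        yield h, p
        h, p = h + 2 * p, h + p

gen = silver_coefficients()
pairs = [next(gen) for _ in range(9)]
for n in range(1, 5):
    h, p = pairs[2 * n]                  # (H(2n), P(2n))
    t, s = (h - 1) // 2, p // 2
    assert t * (t + 1) // 2 == s * s     # the t-th triangular number is a square
    print(t, s, s * s)                   # 1 1 1 / 8 6 36 / 49 35 1225 / 288 204 41616
```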
- -The equality $c^2 = a^2 + (a + 1)^2 = 2a^2 + 2a + 1$ occurs exactly when $2c^2 = 4a^2 + 4a + 2$, which becomes $2P^2 = H^2 + 1$ with the substitutions $H = 2a + 1$ and $P = c$. Hence the nth solution is $a_n = \frac{H_{2n+1} - 1}{2}$ and $c_n = P_{2n+1}$. - -The table above shows that, in one order or the other, $a_n$ and $b_n = a_n + 1$ are $H_nH_{n+1}$ and $2P_nP_{n+1}$, while $c_n = H_{n+1}P_n + P_{n+1}H_n$. diff --git a/wiki/wikipedia/3046.txt b/wiki/wikipedia/3046.txt deleted file mode 100644 index 2ed8ab7d17ba199c4f4c0b0ca7a174e2bdceb4dd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3046.txt +++ /dev/null @@ -1,15 +0,0 @@ -The vehicle rescheduling problem (VRSP) is a combinatorial optimization and integer programming problem seeking to service customers on a trip after a change of schedule, such as a vehicle breakdown or major delay. Proposed by Li, Mirchandani and Borenstein in 2007, the VRSP is an important problem in the fields of transportation and logistics. - -Determining the optimal solution is an NP-complete problem in combinatorial optimization, so in practice heuristic and deterministic methods are used to find acceptably good solutions for the VRSP. - -Several variations and specializations of the vehicle rescheduling problem exist: - -* Single Depot Vehicle Rescheduling Problem (SDVRSP): A number of trips need to be rescheduled due to delay, vehicle breakdown, or any other reason. The goal is to find an optimal rescheduling of the existing fleet, using possibly extra vehicles from the depot, in order to minimise the delay and the operating costs. In the Single Depot variation, there is only one depot which contains all extra vehicles, and in which every vehicle starts and ends its schedule. - -* Multi Depot Vehicle Rescheduling Problem (MDVRSP): Similar to SDVRSP, except additional depots are introduced. Each depot has capacity constraints, as well as a variable number of extra vehicles. Usually vehicle schedules have an additional constraint which requires that each vehicle returns to the depot where it started its schedule. - -* Open Vehicle Rescheduling Problem (OVRSP): Vehicles are not required to return to the depot. - -Although the VRSP is related to the Single Depot Vehicle Scheduling Problem and the Multi Depot Vehicle Scheduling Problem, there is a significant difference in runtime requirements, as the VRSP needs to be solved in near real-time to allow rescheduling during operations, while the SDVSP and MDVSP are typically solved using long-running linear programming methods. - -Another field where the VRSP is used is the transportation of goods, where routes are rescheduled when demand changes substantially. diff --git a/wiki/wikipedia/3047.txt b/wiki/wikipedia/3047.txt deleted file mode 100644 index 4cc40e99034d9cb39d56d2996073d15ff82e062e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3047.txt +++ /dev/null @@ -1,384 +0,0 @@ -In proof theory, the semantic tableau (plural: tableaux; also called truth tree) is a decision procedure for sentential and related logics, and a proof procedure for formulae of first-order logic. An analytic tableau is a tree structure computed for a logical formula, having at each node a subformula of the original formula to be proved or refuted. Computation constructs this tree and uses it to prove or refute the whole formula. The tableau method can also determine the satisfiability of finite sets of formulas of various logics. It is the most popular proof procedure for modal logics (Girle 2000).
- -For refutation tableaux, the objective is to show that the negation of a formula cannot be satisfied. There are rules for handling each of the usual connectives, starting with the main connective. In many cases, applying these rules causes the subtableau to divide into two. Quantifiers are instantiated. If any branch of a tableau leads to an evident contradiction, the branch closes. If all branches close, the proof is complete and the original formula is a logical truth. - -Although the fundamental idea behind the analytic tableau method is derived from the cut-elimination theorem of structural proof theory, the origins of tableau calculi lie in the meaning (or semantics) of the logical connectives, as the connection with proof theory was made only in recent decades. - -More specifically, a tableau calculus consists of a finite collection of rules with each rule specifying how to break down one logical connective into its constituent parts. The rules typically are expressed in terms of finite sets of formulae, although there are logics for which we must use more complicated data structures, such as multisets, lists, or even trees of formulas. Henceforth, "set" denotes any of {set, multiset, list, tree}. - -If there is such a rule for every logical connective then the procedure will eventually produce a set which consists only of atomic formulae and their negations, which cannot be broken down any further. Such a set is easily recognizable as satisfiable or unsatisfiable with respect to the semantics of the logic in question. To keep track of this process, the nodes of a tableau itself are set out in the form of a tree and the branches of this tree are created and assessed in a systematic way. Such a systematic method for searching this tree gives rise to an algorithm for performing deduction and automated reasoning. Note that this larger tree is present regardless of whether the nodes contain sets, multisets, lists or trees. - -This section presents the tableau calculus for classical propositional logic. A tableau checks whether a given set of formulae is satisfiable or not. It can be used to check either validity or entailment: a formula is valid if its negation is unsatisfiable and formulae $A_1,\ldots,A_n$ imply $B$ if $\{A_1,\ldots,A_n,\neg B\}$ is unsatisfiable. - -The main principle of propositional tableaux is to attempt to "break" complex formulae into smaller ones until complementary pairs of literals are produced or no further expansion is possible. - -The method works on a tree whose nodes are labeled with formulae. At each step, this tree is modified; in the propositional case, the only allowed changes are additions of a node as descendant of a leaf. The procedure starts by generating the tree made of a chain of all formulae in the set to prove unsatisfiability. A variant to this starting step is to begin with a single-node tree whose root is labeled by $\top$; in this second case, the procedure can always copy a formula in the set below a leaf. As a running example, the tableau for the set $\{(a \vee \neg b) \wedge b, \neg a\}$ is shown. - -The principle of tableau is that formulae in nodes of the same branch are considered in conjunction while the different branches are considered to be disjuncted. As a result, a tableau is a tree-like representation of a formula that is a disjunction of conjunctions. This formula is equivalent to the set to prove unsatisfiability. 
The procedure modifies the tableau in such a way that the formula represented by the resulting tableau is equivalent to the original one. One of these conjunctions may contain a pair of complementary literals, in which case that conjunction is proved to be unsatisfiable. If all conjunctions are proved unsatisfiable, the original set of formulae is unsatisfiable. - -Whenever a branch of a tableau contains a formula $A \wedge B$ that is the conjunction of two formulae, these two formulae are both consequences of that formula. This fact can be formalized by the following rule for expansion of a tableau: - -
- -($\wedge$) If a branch of the tableau contains a conjunctive formula $A \wedge B$, add to its leaf the chain of two nodes containing the formulae $A$ and $B$. -
    - -This rule is generally written as follows: -$$ -(\land) \frac{A \wedge B}{\begin{array}{c} A \\ B\end{array}} -$$ - -A variant of this rule allows a node to contain a set of formulae rather than a single one. In this case, the formulae in this set are considered in conjunction, so one can add $\{A, B\}$ at the end of a branch containing $A \wedge B$. More precisely, if a node on a branch is labeled $X \cup \{A \wedge B\}$, one can add to the branch the new leaf $X \cup \{A, B\}$. - -If a branch of a tableau contains a formula that is a disjunction of two formulae, such as $A \vee B$, the following rule can be applied: - -
    - -($\vee$) If a node on a branch contains a disjunctive formula $A \vee B$, then create two sibling children to the leaf of the branch, containing the formulae $A$ and $B$, respectively. - -
- -This rule splits a branch into two, differing only for the final node. Since branches are considered in disjunction to each other, the two resulting branches are equivalent to the original one, as the disjunction of their non-common nodes is precisely $A \vee B$. The rule for disjunction is generally formally written using the symbol $|$ for separating the formulae of the two distinct nodes to be created: -$$ -(\vee) \frac{A \vee B}{A\mid B} -$$ - -If nodes are assumed to contain sets of formulae, this rule is replaced by: if a node is labeled $Y \cup \{A \vee B\}$, a leaf of the branch this node is in can be appended two sibling child nodes labeled $Y \cup \{A\}$ and $Y \cup \{B\}$, respectively. - -The aim of tableaux is to generate progressively simpler formulae until pairs of opposite literals are produced or no other rule can be applied. Negation can be treated by initially putting formulae in negation normal form, so that negation only occurs in front of literals. Alternatively, one can use De Morgan's laws during the expansion of the tableau, so that for example $\neg (A \wedge B)$ is treated as $\neg A \vee \neg B$. Rules that introduce or remove a pair of negations (such as in $\neg \neg A$) are also used in this case (otherwise, there would be no way of expanding a formula like $\neg \neg (A \wedge B)$): -$$ -(\neg 1) \frac{A}{\neg \neg A} -$$ -$$ -(\neg 2) \frac{\neg \neg A}{A} -$$ - -Every tableau can be considered as a graphical representation of a formula, which is equivalent to the set the tableau is built from. This formula is as follows: each branch of the tableau represents the conjunction of its formulae; the tableau represents the disjunction of its branches. The expansion rules transform a tableau into one having an equivalent represented formula. Since the tableau is initialized as a single branch containing the formulae of the input set, all subsequent tableaux obtained from it represent formulae which are equivalent to that set (in the variant where the initial tableau is the single node labeled true, the formulae represented by tableaux are consequences of the original set).
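The rules described so far already yield a working decision procedure. The following Python sketch (our illustration; the encoding of formulae as nested tuples and the function name are our own choices, not anything from the original text) checks satisfiability of a set of negation-normal-form formulae by expanding a branch with $(\wedge)$ and $(\vee)$ and closing on a pair of opposite literals:
```
def satisfiable(branch):
    # Formulae are atoms "p", negated atoms ("not", "p"),
    # conjunctions ("and", A, B) or disjunctions ("or", A, B).
    branch = list(branch)
    for i, f in enumerate(branch):
        rest = branch[:i] + branch[i + 1:]
        if isinstance(f, tuple) and f[0] == "and":   # rule (wedge)
            return satisfiable(rest + [f[1], f[2]])
        if isinstance(f, tuple) and f[0] == "or":    # rule (vee): the branch splits
            return satisfiable(rest + [f[1]]) or satisfiable(rest + [f[2]])
    lits = set(branch)                               # only literals remain
    return not any(("not", p) in lits for p in lits if isinstance(p, str))

# the running example: {(a or not b) and b, not a} is unsatisfiable
f = ("and", ("or", "a", ("not", "b")), "b")
print(satisfiable([f, ("not", "a")]))   # False: every branch closes
```
A practical implementation would close a branch as soon as opposite literals appear rather than only after full expansion; the version above simply mirrors the description in the text.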
- -This rule takes into account that a formula may occur in more than one branch (this is the case if there is at least a branching point "below" the node). In this case, the rule for expanding the formula has to be applied so that its conclusion(s) are appended to all of these branches that are still open, before one can conclude that the tableau cannot be further expanded and that the formula is therefore satisfiable. - -A variant of tableau is to label nodes with sets of formulae rather than single formulae. In this case, the initial tableau is a single node labeled with the set to be proved satisfiable. The formulae in a set are therefore considered to be in conjunction. - -The rules of expansion of the tableau can now work on the leaves of the tableau, ignoring all internal nodes. For conjunction, the rule is based on the equivalence of a set containing a conjunction $A \wedge B$ with the set containing both $A$ and $B$ in place of it. In particular, if a leaf is labeled with $X \cup \{A \wedge B\}$, a node can be appended to it with label $X \cup \{A, B\}$: -$$ -(\wedge) \frac{X \cup \{A \wedge B\}}{X \cup \{A, B\}} -$$ - -For disjunction, a set $X \cup \{A \vee B\}$ is equivalent to the disjunction of the two sets $X \cup \{A\}$ and $X \cup \{B\}$. As a result, if the first set labels a leaf, two children can be appended to it, labeled with the latter two formulae. -$$ -(\vee) \frac{X \cup \{A \vee B\}}{X \cup \{A\}|X \cup \{B\}} -$$ - -Finally, if a set contains both a literal and its negation, this branch can be closed: -$$ -(id) \frac{X \cup \{p, \neg p\}}{closed} -$$ - -A tableau for a given finite set X is a finite (upside down) tree with root X in which all child nodes are obtained by applying the tableau rules to their parents. A branch in such a tableau is closed if its leaf node contains "closed". A tableau is closed if all its branches are closed. A tableau is open if at least one branch is not closed. - -Here are two closed tableaux for the set X = {r0 & ~r0, p0 & ((~p0 ∨ q0) & ~q0)} with each rule application marked at the right hand side (& and ~ stand for $\wedge$ and $\neg$, respectively) - -{r0 & ~r0, p0 & ((~p0 v q0) & ~q0)} {r0 & ~r0, p0 & ((~p0 v q0) & ~q0)} - ---------------------------------------(&) ------------------------------------------------------------(&) - -{r0, ~r0, p0 & ((~p0 v q0) & ~q0)} {r0 & ~r0, p0, ((~p0 v q0) & ~q0)} - --------------------------------------(id) ----------------------------------------------------------(&) - -closed {r0 & ~r0, p0, (~p0 v q0), ~q0} - --------------------------------------------------------------(v) - -{r0 & ~r0, p0, ~p0, ~q0} | {r0 & ~r0, p0, q0, ~q0} - --------------------------- (id) ---------------------- (id) - -closed closed - -The left hand tableau closes after only one rule application while the right hand one misses the mark and takes a lot longer to close. Clearly, we would prefer to always find the shortest closed tableaux but it can be shown that one single algorithm that finds the shortest closed tableaux for all input sets of formulae cannot exist. - -The three rules $(\wedge)$, $(\vee)$ and $(id)$ given above are then enough to decide if a given set $X'$ of formulae in negated normal form are jointly satisfiable: - -
    - -Just apply all possible rules in all possible orders until we find a closed tableau for $X'$ or until we exhaust all possibilities and conclude that every tableau for $X'$ is open. - -
- -In the first case, $X'$ is jointly unsatisfiable and in the second case the leaf node of the open branch gives an assignment to the atomic formulae and negated atomic formulae which makes $X'$ jointly satisfiable. Classical logic actually has the rather nice property that we need to investigate only (any) one tableau completely: if it closes then $X'$ is unsatisfiable and if it is open then $X'$ is satisfiable. But this property is not generally enjoyed by other logics. - -These rules suffice for all of classical logic by taking an initial set of formulae X and replacing each member C by its logically equivalent negated normal form C' giving a set of formulae X'. We know that X is satisfiable if and only if X' is - -satisfiable, so it suffices to search for a closed tableau for X' using the procedure outlined above. - -By setting $X = \{\neg A\}$ we can test whether the formula A is a tautology of classical logic: -
    - -If the tableau for $\{\neg A\}$ closes then $\neg A$ is unsatisfiable and so A is a tautology since no assignment of truth values will ever make A false. Otherwise any open leaf of any open branch of any open tableau for $\{\neg A\}$ gives an assignment that falsifies A. - -
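For instance (a small worked example of ours, using only the rules above): to show that $a \vee \neg a$ is a tautology, start a tableau for its negation, whose negation normal form is $\neg a \wedge a$. The rule $(\wedge)$ expands the single branch to one containing both $\neg a$ and $a$, which closes by $(id)$; since every branch closes, no assignment falsifies $a \vee \neg a$, so it is a tautology.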
    - -Classical propositional logic usually has a connective to denote material implication. If we write this connective as ⇒, then the formula A ⇒ B stands for "if A then B". It is possible to give a tableau rule for breaking down A ⇒ B into its constituent formulae. Similarly, we can give one rule each for breaking down each of ¬(A ∧ B), ¬(A ∨ B), ¬(¬A), and ¬(A ⇒ B). Together these rules would give a terminating procedure for deciding whether a given set of formulae is simultaneously satisfiable in classical logic since each rule breaks down one formula into its constituents but no rule builds larger formulae out of smaller constituents. Thus we must eventually reach a node that contains only atoms and negations of atoms. If this last node matches (id) then we can close the branch, otherwise it remains open. - -But note that the following equivalences hold in classical logic where (...) = (...) means that the left hand side formula is logically equivalent to the right hand side formula: - - - -\begin{array}{lcl} - -\neg (A \land B) & = & \neg A \lor \neg B \\ - -\neg (A \lor B) & = & \neg A \land \neg B \\ - -\neg (\neg A) & = & A \\ - -\neg (A \Rightarrow B) & = & A \land \neg B \\ - -A \Rightarrow B & = & \neg A \lor B \\ - -A \Leftrightarrow B & = & (A \land B) \lor (\neg A \land \neg B) \\ - -\neg (A \Leftrightarrow B) & = & (A \land \neg B) \lor (\neg A \land B) - -\end{array} - - - -If we start with an arbitrary formula C of classical logic, and apply these equivalences repeatedly to replace the left hand sides with the right hand sides in C, then we will obtain a formula C' which is logically equivalent to C but which has the property that C' contains no implications, and ¬ appears in front of atomic formulae only. Such a formula is said to be in negation normal form and it is possible to prove formally that every formula C of classical logic has a logically equivalent formula C' in negation normal form. That is, C is satisfiable if and only if C' is satisfiable. - -Tableaux are extended to first order predicate logic by two rules for dealing with universal and existential quantifiers, respectively. Two different sets of rules can be used; both employ a form of Skolemization for handling existential quantifiers, but differ on the handling of universal quantifiers. - -The set of formulae to check for validity is here supposed to contain no free variables; this is not a limitation as free variables are implicitly universally quantified, so universal quantifiers over these variables can be added, resulting in a formula with no free variables. - -A first-order formula $\forall x . \gamma(x)$ implies all formulae $\gamma(t)$ where $t$ is a ground term. The following inference rule is therefore correct: -$$ -(\forall) \frac{\forall x . \gamma(x)}{\gamma(t)} -$$ where $t$ is an arbitrary ground term - -Contrarily to the rules for the propositional connectives, multiple applications of this rule to the same formula may be necessary. As an example, the set $\{\neg P(a) \vee \neg P(b), \forall x . P(x)\}$ can only be proved unsatisfiable if both $P(a)$ and $P(b)$ are generated from $\forall x . P(x)$. - -Existential quantifiers are dealt with by means of Skolemization. In particular, a formula with a leading existential quantifier like $\exists x . \delta(x)$ generates its Skolemization $\delta(c)$, where $c$ is a new constant symbol. -$$ -(\exists) \frac{\exists x . 
\delta(x)}{\delta(c)} -$$ where $c$ is a new constant symbol - -The Skolem term $c$ is a constant (a function of arity 0) because the quantification over $x$ does not occur within the scope of any universal quantifier. If the original formula contained some universal quantifiers such that the quantification over $x$ was within their scope, these quantifiers have evidently been removed by the application of the rule for universal quantifiers. - -The rule for existential quantifiers introduces new constant symbols. These symbols can be used by the rule for universal quantifiers, so that $\forall y . \gamma(y)$ can generate $\gamma(c)$ even if $c$ was not in the original formula but is a Skolem constant created by the rule for existential quantifiers. - -The above two rules for universal and existential quantifiers are correct, and so are the propositional rules: if a set of formulae generates a closed tableau, this set is unsatisfiable. Completeness can also be proved: if a set of formulae is unsatisfiable, there exists a closed tableau built from it by these rules. However, actually finding such a closed tableau requires a suitable policy of application of rules. Otherwise, an unsatisfiable set can generate an infinite-growing tableau. As an example, the set $\{\neg P(f(c)), \forall x . P(x)\}$ is unsatisfiable, but a closed tableau is never obtained if one unwisely keeps applying the rule for universal quantifiers to $\forall x . P(x)$, generating for example $P(c), P(f(c)), P(f(f(c))), \ldots$. A closed tableau can always be found by ruling out this and similar "unfair" policies of application of tableau rules. - -The rule for universal quantifiers $(\forall)$ is the only non-deterministic rule, as it does not specify which term to instantiate with. Moreover, while the other rules need to be applied only once for each formula and each path the formula is in, this one may require multiple applications. Application of this rule can however be restricted by delaying the application of the rule until no other rule is applicable and by restricting the application of the rule to ground terms that already appear in the path of the tableau. The variant of tableaux with unification shown below aims at solving the problem of non-determinism. - -The main problem of tableau without unification is how to choose a ground term $t$ for the universal quantifier rule. Indeed, every possible ground term can be used, but clearly most of them might be useless for closing the tableau. - -A solution to this problem is to "delay" the choice of the term to the time when the consequent of the rule allows closing at least a branch of the tableau. This can be done by using a variable instead of a term, so that $\forall x . \gamma(x)$ generates $\gamma(x')$, and then allowing substitutions to later replace $x'$ with a term. The rule for universal quantifiers becomes: -$$ -(\forall) \frac{\forall x . \gamma(x)}{\gamma(x')} -$$ where $x'$ is a variable not occurring everywhere else in the tableau - -While the initial set of formulae is supposed not to contain free variables, a formula of the tableau contain the free variables generated by this rule. These free variables are implicitly considered universally quantified. - -This rule employs a variable instead of a ground term. The gain of this change is that these variables can be then given a value when a branch of the tableau can be closed, solving the problem of generating terms that might be useless. - -As an example, $\{\neg P(a), \forall x . 
P(x)\}$ can be proved unsatisfiable by first generating $P(x_1)$; the negation of this literal is unifiable with $\neg P(a)$, the most general unifier being the substitution that replaces $x_1$ with $a$; applying this substitution results in replacing $P(x_1)$ with $P(a)$, which closes the tableau. - -This rule closes at least a branch of the tableau -the one containing the considered pair of literals. However, the substitution has to be applied to the whole tableau, not only on these two literals. This is expressed by saying that the free variables of the tableau are rigid: if an occurrence of a variable is replaced by something else, all other occurrences of the same variable must be replaced in the same way. Formally, the free variables are (implicitly) universally quantified and all formulae of the tableau are within the scope of these quantifiers. - -Existential quantifiers are dealt with by Skolemization. Contrary to the tableau without unification, Skolem terms may not be simple constant. Indeed, formulae in a tableau with unification may contain free variables, which are implicitly considered universally quantified. As a result, a formula like $\exists x . \delta(x)$ may be within the scope of universal quantifiers; if this is the case, the Skolem term is not a simple constant but a term made of a new function symbol and the free variables of the formula. -$$ -(\exists) \frac{\exists x . \delta(x)}{\delta(f(x_1,\ldots,x_n))} -$$ where $f$ is a new function symbol and $x_1,\ldots,x_n$ the free variables of $\delta$ - -This rule incorporates a simplification over a rule where $x_1,\ldots,x_n$ are the free variables of the branch, not of $\delta$ alone. This rule can be further simplified by the reuse of a function symbol if it has already been used in a formula that is identical to $\delta$ up to variable renaming. - -The formula represented by a tableau is obtained in a way that is similar to the propositional case, with the additional assumption that free variables are considered universally quantified. As for the propositional case, formulae in each branch are conjoined and the resulting formulae are disjoined. In addition, all free variables of the resulting formula are universally quantified. All these quantifiers have the whole formula in their scope. In other words, if $F$ is the formula obtained by disjoining the conjunction of the formulae in each branch, and $x_1,\ldots,x_n$ are the free variables in it, then $\forall x_1,\ldots,x_n . F$ is the formula represented by the tableau. The following considerations apply: - -* The assumption that free variables are universally quantified is what makes the application of a most general unifier a sound rule: since $\gamma(x')$ means that $\gamma$ is true for every possible value of $x'$, then $\gamma(t)$ is true for the term $t$ that the most general unifier replaces $x$ with. - -* Free variables in a tableau are rigid: all occurrences of the same variable have to be replaced all with the same term. Every variable can be considered a symbol representing a term that is yet to be decided. This is a consequence of free variables being assumed universally quantified over the whole formula represented by the tableau: if the same variable occurs free in two different nodes, both occurrences are in the scope of the same quantifier. As an example, if the formulae in two nodes are $A(x)$ and $B(x)$, where $x$ is free in both, the formula represented by the tableau is something in the form $\forall x . (...A(x)...B(x)...)$. 
This formula implies that $(...A(x)...B(x)...)$ is true for any value of $x$, but does not in general imply $(...A(t)...B(t')...)$ for two different terms $t$ and $t'$, as these two terms may in general take different values. This means that $x$ cannot be replaced by two different terms in $A(x)$ and $B(x)$. - -* Free variables in a formula to check for validity are also considered universally quantified. However, these variables cannot be left free when building a tableau, because tableau rules work on the converse of the formula but still treat free variables as universally quantified. For example, $P(x) \rightarrow P(c)$ is not valid (it is not true in the model where $D=\{1,2\}, P(1)=\bot, P(2)=\top, c=1$, and the interpretation where $x=2$). Consequently, $\{P(x),\neg P(c)\}$ is satisfiable (it is satisfied by the same model and interpretation). However, a closed tableau could be generated with $P(x)$ and $\neg P(c)$, and substituting $x$ with $c$ would generate a closure. A correct procedure is to first make universal quantifiers explicit, thus generating $\forall x . (P(x) \rightarrow P(c))$. - -The following two variants are also correct. - -* Applying to the whole tableau a substitution to the free variables of the tableau is a correct rule, provided that this substitution is free for the formula representing the tableau. In other words, applying such a substitution leads to a tableau whose formula is still a consequence of the input set. Using most general unifiers automatically ensures that the condition of freeness for the tableau is met. - -* While in general every variable has to be replaced with the same term in the whole tableau, there are some special cases in which this is not necessary. - -Tableaux with unification can be proved complete: if a set of formulae is unsatisfiable, it has a tableau-with-unification proof. However, actually finding such a proof may be a difficult problem. Contrarily to the case without unification, applying a substitution can modify the existing part of a tableau; while applying a substitution closes at least a branch, it may make other branches impossible to close (even if the set is unsatisfiable). - -A solution to this problem is that of delayed instantiation: no substitution is applied until one that closes all branches at the same time is found. With this variant, a proof for an unsatisfiable set can always be found by a suitable policy of application of the other rules. This method however requires the whole tableau to be kept in memory: the general method closes branches which can then be discarded, while this variant does not close any branch until the end. - -The problem that some tableaux that can be generated are impossible to close even if the set is unsatisfiable is common to other sets of tableau expansion rules: even if some specific sequences of application of these rules allow constructing a closed tableau (if the set is unsatisfiable), some other sequences lead to tableaux that cannot be closed. General solutions for these cases are outlined in the "Searching for a closed tableau" section. - -A tableau calculus is a set of rules that allows building and modification of a tableau. Propositional tableau rules, tableau rules without unification, and tableau rules with unification, are all tableau calculi. Some important properties a tableau calculus may or may not possess are completeness, destructiveness, and proof confluence.
- -A tableau calculus is called complete if it allows building a tableau proof for every given unsatisfiable set of formulae. The tableau calculi mentioned above can be proved complete. - -A remarkable difference between tableau with unification and the other two calculi is that the latter two calculi only modify a tableau by adding new nodes to it, while the former one allows substitutions to modify the existing part of the tableau. More generally, tableau calculi are classed as destructive or non-destructive depending on whether they only add new nodes to tableau or not. Tableau with unification is therefore destructive, while propositional tableau and tableau without unification are non-destructive. - -Proof confluence is the property of a tableau calculus to obtain a proof for an arbitrary unsatisfiable set from an arbitrary tableau, assuming that this tableau has itself been obtained by applying the rules of the calculus. In other words, in a proof confluent tableau calculus, from an unsatisfiable set one can apply whatever set of rules and still obtain a tableau from which a closed one can be obtained by applying some other rules. - -A tableau calculus is simply a set of rules that tells how a tableau can be modified. A proof procedure is a method for actually finding a proof (if one exists). In other words, a tableau calculus is a set of rules, while a proof procedure is a policy of application of these rules. Even if a calculus is complete, not every possible choice of application of rules leads to a proof of an unsatisfiable set. For example, $\{P(f(c)), R(c), \neg P(f(c)) \vee \neg R(c), \forall x . Q(x)\}$ is unsatisfiable, but both tableaux with unification and tableaux without unification allow the rule for the universal quantifiers to be applied repeatedly to the last formula, while simply applying the rule for disjunction to the third one would directly lead to closure. - -For proof procedures, a definition of completeness has been given: a proof procedure is strongly complete if it allows finding a closed tableau for any given unsatisfiable set of formulae. Proof confluence of the underlying calculus is relevant to completeness: proof confluence is the guarantee that a closed tableau can be always generated from an arbitrary partially constructed tableau (if the set is unsatisfiable). Without proof confluence, the application of a 'wrong' rule may result in the impossibility of making the tableau complete by applying other rules. - -Propositional tableaux and tableaux without unification have strongly complete proof procedures. In particular, a complete proof procedure is that of applying the rules in a fair way. This is because the only way such calculi cannot generate a closed tableau from an unsatisfiable set is by not applying some applicable rules. - -For propositional tableaux, fairness amounts to expanding every formula in every branch. More precisely, for every formula and every branch the formula is in, the rule having the formula as a precondition has been used to expand the branch. A fair proof procedure for propositional tableaux is strongly complete. - -For first-order tableaux without unification, the condition of fairness is similar, with the exception that the rule for universal quantifier might require more than one application. Fairness amounts to expanding every universal quantifier infinitely often. 
In other words, a fair policy of application of rules cannot keep applying other rules without expanding every universal quantifier in every branch that is still open once in a while. - -If a tableau calculus is complete, every unsatisfiable set of formulae has an associated closed tableau. While this tableau can always be obtained by applying some of the rules of the calculus, the problem of which rules to apply for a given formula still remains. As a result, completeness does not automatically imply the existence of a feasible policy of application of rules that always leads to a closed tableau for every given unsatisfiable set of formulae. While a fair proof procedure is complete for ground tableau and tableau without unification, this is not the case for tableau with unification. - -A general solution for this problem is that of searching the space of tableaux until a closed one is found (if any exists, that is, the set is unsatisfiable). In this approach, one starts with an empty tableau and then recursively applies every possible applicable rule. This procedure visits a (implicit) tree whose nodes are labeled with tableaux, and such that the tableau in a node is obtained from the tableau in its parent by applying one of the valid rules. - -Since each branch can be infinite, this tree has to be visited breadth-first rather than depth-first. This requires a large amount of space, as the breadth of the tree can grow exponentially. A method that may visit some nodes more than once but works in polynomial space is to visit in a depth-first manner with iterative deepening: one first visits the tree up to a certain depth, then increases the depth and perform the visit again. This particular procedure uses the depth (which is also the number of tableau rules that have been applied) for deciding when to stop at each step. Various other parameters (such as the size of the tableau labeling a node) have been used instead. - -The size of the search tree depends on the number of (children) tableaux that can be generated from a given (parent) one. Reducing the number of such tableaux therefore reduces the required search. - -A way for reducing this number is to disallow the generation of some tableaux based on their internal structure. An example is the condition of regularity: if a branch contains a literal, using an expansion rule that generates the same literal is useless because the branch containing two copies of the literals would have the same set of formulae of the original one. This expansion can be disallowed because if a closed tableau exists, it can be found without it. This restriction is structural because it can be checked by looking at the structure of the tableau to expand only. - -Different methods for reducing search disallow the generation of some tableaux on the ground that a closed tableau can still be found by expanding the other ones. These restrictions are called global. As an example of a global restriction, one may employ a rule that specifies which of the open branches is to be expanded. As a result, if a tableau has for example two non-closed branches, the rule tells which one is to be expanded, disallowing the expansion of the second one. This restriction reduces the search space because one possible choice is now forbidden; completeness is however not harmed, as the second branch will still be expanded if the first one is eventually closed. 
As an example, a tableau with root $\neg a \wedge \neg b$, child $a \vee b$, and two leaves $a$ and $b$ can be closed in two ways: applying $(\wedge)$ first to $a$ and then to $b$, or vice versa. There is clearly no need to follow both possibilities; one may consider only the case in which $(\wedge)$ is first applied to $a$ and disregard the case in which it is first applied to $b$. This is a global restriction because what allows neglecting this second expansion is the presence of the other tableau, where expansion is applied to $a$ first and $b$ afterwards. - -When applied to sets of clauses (rather than of arbitrary formulae), tableaux methods allow for a number of efficiency improvements. A first-order clause is a formula $\forall x_1,\ldots,x_n . L_1 \vee \cdots \vee L_m$ that does not contain free variables and such that each $L_i$ is a literal. The universal quantifiers are often omitted for clarity, so that for example $P(x,y) \vee Q(f(x))$ actually means $\forall x,y . P(x,y) \vee Q(f(x))$. Note that, if taken literally, these two formulae are not the same as for satisfiability: rather, the satisfiability of $P(x,y) \vee Q(f(x))$ is the same as that of $\exists x,y . P(x,y) \vee Q(f(x))$. That free variables are universally quantified is not a consequence of the definition of first-order satisfiability; it is rather used as an implicit common assumption when dealing with clauses. - -The only expansion rules that are applicable to a clause are $(\forall)$ and $(\vee)$; these two rules can be replaced by their combination without losing completeness. In particular, the following rule corresponds to applying in sequence the rules $(\forall)$ and $(\vee)$ of the first-order calculus with unification. -$$ -(C) \frac{L_1 \vee \cdots \vee L_n}{L_1'|\cdots|L_n'} -$$ where $L_1' \vee \cdots \vee L_n'$ is obtained by replacing every variable with a new one in $L_1 \vee \cdots \vee L_n$ - -When the set to be checked for satisfiability is only composed of clauses, this and the unification rules are sufficient to prove unsatisfiability. In other words, the tableau calculus composed of $(C)$ and $(\sigma)$ is complete. - -Since the clause expansion rule only generates literals and never new clauses, the clauses to which it can be applied are only clauses of the input set. As a result, the clause expansion rule can be further restricted to the case where the clause is in the input set. -$$ -(C) \frac{L_1 \vee \cdots \vee L_n}{L_1'|\cdots|L_n'} -$$ where $L_1' \vee \cdots \vee L_n'$ is obtained by replacing every variable with a new one in $L_1 \vee \cdots \vee L_n$, which is a clause of the input set - -Since this rule directly exploits the clauses in the input set, there is no need to initialize the tableau to the chain of the input clauses. The initial tableau can therefore be initialized with the single node labeled $true$; this label is often omitted as implicit. As a result of this further simplification, every node of the tableau (apart from the root) is labeled with a literal. - -A number of optimizations can be used for clause tableaux. These optimizations are aimed at reducing the number of possible tableaux to be explored when searching for a closed tableau as described in the "Searching for a closed tableau" section above. - -Connection is a condition over tableaux that forbids expanding a branch using clauses that are unrelated to the literals that are already in the branch.
Connection can be defined in two ways: - -; strong connectedness : when expanding a branch, use an input clause only if it contains a literal that can be unified with the negation of the literal in the current leaf - -; weak connectedness : allow the use of clauses that contain a literal that unifies with the negation of a literal on the branch - -Both conditions apply only to branches that consist of more than just the root. The second definition allows for the use of a clause containing a literal that unifies with the negation of a literal in the branch, while the first further constrains that literal to be in the leaf of the current branch. - -If clause expansion is restricted by connectedness (either strong or weak), its application produces a tableau in which a substitution can be applied to one of the new leaves, closing its branch. In particular, this is the leaf containing the literal of the clause that unifies with the negation of a literal in the branch (or the negation of the literal in the parent, in case of strong connection). - -Both conditions of connectedness lead to a complete first-order calculus: if a set of clauses is unsatisfiable, it has a closed connected (strongly or weakly) tableau. Such a closed tableau can be found by searching in the space of tableaux as explained in the "Searching for a closed tableau" section. During this search, connectedness eliminates some possible choices of expansion, thus reducing search. In other words, while the tableau in a node of the tree can in general be expanded in several different ways, connection may allow only a few of them, thus reducing the number of resulting tableaux that need to be further expanded. - -This can be seen in the following (propositional) example. The tableau made of a chain $true - a$ for the set of clauses $\{a, \neg a \vee b, \neg c \vee d, \neg b\}$ can in general be expanded using each of the four input clauses, but connection only allows the expansion that uses $\neg a \vee b$. This means that the tree of tableaux has four leaves in general but only one if connectedness is imposed: connectedness leaves only one tableau to try to expand, instead of the four to consider in general. In spite of this reduction of choices, the completeness theorem implies that a closed tableau can be found if the set is unsatisfiable. - -The connectedness conditions, when applied to the propositional (clausal) case, make the resulting calculus non-confluent. As an example, $\{a, b, \neg b\}$ is unsatisfiable, but applying $(C)$ to $a$ generates the chain $true - a$, which is not closed and to which no other expansion rule can be applied without violating either strong or weak connectedness. In the case of weak connectedness, confluence holds provided that the clause used for expanding the root is relevant to unsatisfiability, that is, it is contained in a minimally unsatisfiable subset of the set of clauses. Unfortunately, the problem of checking whether a clause meets this condition is itself a hard problem. In spite of non-confluence, a closed tableau can be found using search, as presented in the "Searching for a closed tableau" section above. While search is made necessary, connectedness reduces the possible choices of expansion, thus making search more efficient. - -A tableau is regular if no literal occurs twice in the same branch. Enforcing this condition allows for a reduction of the possible choices of tableau expansion, as the clauses that would generate a non-regular tableau cannot be expanded.
- -These disallowed expansion steps are however useless. If $B$ is a branch containing a literal $L$, and $C$ is a clause whose expansion violates regularity, then $C$ contains $L$. In order to close the tableau, one needs to expand and close, among others, the branch $B - L$, in which $L$ occurs twice. However, the formulae in this branch are exactly the same as the formulae of $B$ alone. As a result, the same expansion steps that close $B - L$ also close $B$. This means that expanding $C$ was unnecessary; moreover, if $C$ contained other literals, its expansion generated other leaves that needed to be closed. In the propositional case, the expansions needed to close these leaves are completely useless; in the first-order case, they may only affect the rest of the tableau because of some unifications; these can however be combined with the substitutions used to close the rest of the tableau. - -In a modal logic, a model comprises a set of possible worlds, each one associated with a truth evaluation; an accessibility relation tells when a world is accessible from another one. A modal formula may specify not only conditions over a possible world, but also on the ones that are accessible from it. As an example, $\Box A$ is true in a world if $A$ is true in all worlds that are accessible from it. - -As for propositional logic, tableaux for modal logics are based on recursively breaking formulae into their basic components. Expanding a modal formula may however require stating conditions over different worlds. As an example, if $\neg \Box A$ is true in a world then there exists a world accessible from it where $A$ is false. However, one cannot simply add the following rule to the propositional ones. -$$ -\frac{\neg \Box A}{\neg A} -$$ - -In propositional tableaux all formulae refer to the same truth evaluation, but the precondition of the rule above holds in a world while the consequence holds in another. Failing to take this into account would generate incorrect results. For example, the formula $a \wedge \neg \Box a$ states that $a$ is true in the current world and $a$ is false in a world that is accessible from it. Simply applying $(\wedge)$ and the expansion rule above would produce $a$ and $\neg a$, but these two formulae should not in general generate a contradiction, as they hold in different worlds. Modal tableaux calculi do contain rules of this kind, but include mechanisms to avoid the incorrect interaction of formulae referring to different worlds. - -Technically, tableaux for modal logics check the satisfiability of a set of formulae: they check whether there exists a model $M$ and world $w$ such that the formulae in the set are true in that model and world. In the example above, while $a$ states the truth of $a$ in $w$, the formula $\neg \Box a$ states the truth of $\neg a$ in some world $w'$ that is accessible from $w$ and which may in general be different from $w$. Tableaux calculi for modal logic take into account that formulae may refer to different worlds. - -This fact has an important consequence: formulae that hold in a world may imply conditions over different successors of that world. Unsatisfiability may then be proved from the subset of formulae referring to a single successor. This holds if a world may have more than one successor, which is true for most modal logics. If this is the case, a formula like $\neg \Box A \wedge \neg \Box B$ is true if a successor where $\neg A$ holds exists and a successor where $\neg B$ holds exists.
Conversely, if one can show unsatisfiability of $\neg A$ in an arbitrary successor, the formula is proved unsatisfiable without checking for worlds where $\neg B$ holds. At the same time, if one can show unsatisfiability of $\neg B$, there is no need to check $\neg A$. As a result, while there are two possible ways to expand $\neg \Box A \wedge \neg \Box B$, one of these two ways is always sufficient to prove unsatisfiability if the formula is unsatisfiable. For example, one may expand the tableau by considering an arbitrary world where $\neg A$ holds. If this expansion leads to unsatisfiability, the original formula is unsatisfiable. However, it is also possible that unsatisfiability cannot be proved this way, and that the world where $\neg B$ holds should have been considered instead. As a result, one can always prove unsatisfiability by expanding either $\neg \Box A$ only or $\neg \Box B$ only; however, if the wrong choice is made the resulting tableau may not be closed. Expanding either subformula leads to tableau calculi that are complete but not proof-confluent. Searching as described in the "Searching for a closed tableau" section may therefore be necessary. - -Depending on whether the precondition and consequence of a tableau expansion rule refer to the same world or not, the rule is called static or transactional. While rules for propositional connectives are all static, not all rules for modal connectives are transactional: for example, in every modal logic including axiom T, it holds that $\Box A$ implies $A$ in the same world. As a result, the corresponding (modal) tableau expansion rule is static, as both its precondition and consequence refer to the same world. - -One way to prevent formulae referring to different worlds from interacting in the wrong way is to make sure that all formulae of a branch refer to the same world. This condition is initially true, as all formulae in the set to be checked for consistency are assumed to refer to the same world. When expanding a branch, two situations are possible: either the new formulae refer to the same world as the others in the branch or not. In the first case, the rule is applied normally. In the second case, all formulae of the branch that do not also hold in the new world are deleted from the branch, and possibly added to all other branches that still refer to the old world. - -As an example, in S5 every formula $\Box A$ that is true in a world is also true in all accessible worlds (that is, in all accessible worlds both $A$ and $\Box A$ are true). Therefore, when applying $\frac{\neg \Box A}{\neg A}$, whose consequence holds in a different world, one deletes all formulae from the branch, but can keep all formulae $\Box A$, as these hold in the new world as well. In order to retain completeness, the deleted formulae are then added to all other branches that still refer to the old world. - -A different mechanism for ensuring the correct interaction between formulae referring to different worlds is to switch from formulae to labeled formulae: instead of writing $A$, one would write $w:A$ to make it explicit that $A$ holds in world $w$. - -All propositional expansion rules are adapted to this variant by stating that they all refer to formulae with the same world label.
For example, $w:A \wedge B$ generates two nodes labeled with $w:A$ and $w:B$; a branch is closed only if it contains two opposite literals of the same world, like $w:a$ and $w:\neg a$; no closure is generated if the two world labels are different, like in $w:a$ and $w':\neg a$. - -The modal expansion rule may have a consequence that refers to a different world. For example, the rule for $\neg \Box A$ would be written as follows: -$$ -\frac{w:\neg \Box A}{w':\neg A} -$$ - -The precondition and consequent of this rule refer to worlds $w$ and $w'$, respectively. The various calculi use different methods for keeping track of the accessibility of the worlds used as labels. Some include pseudo-formulae like $wRw'$ to denote that $w'$ is accessible from $w$. Others use sequences of integers as world labels, this notation implicitly representing the accessibility relation (for example, $(1,4,2,3)$ is accessible from $(1,4,2)$). - -The problem of interaction between formulae holding in different worlds can be overcome by using set-labeling tableaux. These are trees whose nodes are labeled with sets of formulae; the expansion rules tell how to attach new nodes to a leaf, based only on the label of the leaf (and not on the label of other nodes in the branch). - -Tableaux for modal logics are used to verify the satisfiability of a set of modal formulae in a given modal logic. Given a set of formulae $S$, they check the existence of a model $M$ and a world $w$ such that $M,w \models S$. - -The expansion rules depend on the particular modal logic used. A tableau system for the basic modal logic K can be obtained by adding to the propositional tableau rules the following one: -$$ -(K) \frac{\Box A_1; \ldots ; \Box A_n ; \neg \Box B}{A_1; \ldots ; A_n ; \neg B} -$$ - -Intuitively, the precondition of this rule expresses the truth of all formulae $A_1,\ldots,A_n$ at all accessible worlds, and the truth of $\neg B$ at some accessible world. The consequence of this rule is the set of formulae that must be true at one such world where $\neg B$ is true. - -More technically, modal tableaux methods check the existence of a model $M$ and a world $w$ that make a set of formulae true. If $\Box A_1; \ldots ; \Box A_n ; \neg \Box B$ are true in $w$, there must be a world $w'$ that is accessible from $w$ and that makes $A_1; \ldots ; A_n ; \neg B$ true. This rule therefore amounts to deriving a set of formulae that must be satisfied in such a $w'$. - -While the preconditions $\Box A_1; \ldots ; \Box A_n ; \neg \Box B$ are assumed satisfied by $M,w$, the consequences $A_1; \ldots ; A_n ; \neg B$ are assumed satisfied in $M,w'$: same model but possibly different worlds. Set-labeled tableaux do not explicitly keep track of the world where each formula is assumed true: two nodes may or may not refer to the same world. However, the formulae labeling any given node are assumed true at the same world. - -As a result of the possibly different worlds where formulae are assumed true, a formula in a node is not automatically valid in all its descendants, as every application of the modal rule corresponds to a move from a world to another one. This condition is automatically captured by set-labeling tableaux, as expansion rules are based only on the leaf where they are applied and not on its ancestors.
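The propositional rules together with $(K)$ are simple enough to prototype. The following Python sketch is our own toy illustration (the encoding and all names are ours, not from the original text): it decides satisfiability in K for formulae built from atoms, $\neg$, $\wedge$, $\vee$ and $\Box$, applying the propositional expansion rules and then $(K)$ once per negated box.

```python
# Toy satisfiability checker for the basic modal logic K, built from
# the propositional expansion rules plus the rule (K) above. Formulae
# are nested tuples: ('atom', name), ('not', f), ('and', f, g),
# ('or', f, g), ('box', f). An illustrative sketch, not a real prover.

def neg(f):
    # Negate a formula, collapsing a double negation.
    return f[1] if f[0] == 'not' else ('not', f)

def k_sat(fs):
    """True iff the set of formulae `fs` is satisfiable in K."""
    fs = set(fs)
    for f in fs:
        if f[0] == 'and':                       # conjunction rule
            return k_sat(fs - {f} | {f[1], f[2]})
        if f[0] == 'or':                        # disjunction: two branches
            return k_sat(fs - {f} | {f[1]}) or k_sat(fs - {f} | {f[2]})
        if f[0] == 'not' and f[1][0] == 'not':  # double negation
            return k_sat(fs - {f} | {f[1][1]})
        if f[0] == 'not' and f[1][0] == 'and':  # push negation inward
            g = f[1]
            return k_sat(fs - {f} | {('or', neg(g[1]), neg(g[2]))})
        if f[0] == 'not' and f[1][0] == 'or':
            g = f[1]
            return k_sat(fs - {f} | {('and', neg(g[1]), neg(g[2]))})
    # No propositional rule applies: check for a clash between literals.
    atoms = {f[1] for f in fs if f[0] == 'atom'}
    if any(f[0] == 'not' and f[1][0] == 'atom' and f[1][1] in atoms
           for f in fs):
        return False
    # Rule (K): one successor world per negated box, keeping only the
    # boxed formulae (non-boxed formulae are implicitly discarded).
    boxed = {f[1] for f in fs if f[0] == 'box'}
    return all(k_sat(boxed | {neg(f[1][1])})
               for f in fs if f[0] == 'not' and f[1][0] == 'box')

a = ('atom', 'a')
print(k_sat({('and', a, ('not', ('box', a)))}))  # True: satisfiable,
# because a holds in the current world and "not a" in a successor.
```

Note that the recursive call for each negated box keeps only the boxed formulae; this anticipates the merged thinning rule discussed below.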
- -Remarkably, $(K)$ does not directly extend to multiple negated boxed formulae such as in $\Box A_1; \ldots; \Box A_n; \neg \Box B_1; \neg \Box B_2$: while there exists an accessible world where $B_1$ is false and one in which $B_2$ is false, these two worlds are not necessarily the same. - -Unlike the propositional rules, $(K)$ states conditions over all its preconditions. For example, it cannot be applied to a node labeled by $a; \Box b; \Box (b \rightarrow c); \neg \Box c$; while this set is inconsistent and this could be easily proved by applying $(K)$, this rule cannot be applied because of the formula $a$, which is not even relevant to the inconsistency. Removal of such formulae is made possible by the rule: -$$ -(\theta) \frac{A_1;\ldots;A_n;B_1;\ldots;B_m}{A_1;\ldots;A_n} -$$ - -The addition of this rule (the thinning rule) makes the resulting calculus non-confluent: a tableau for an inconsistent set may be impossible to close, even if a closed tableau for the same set exists. - -Rule $(\theta)$ is non-deterministic: the set of formulae to be removed (or to be kept) can be chosen arbitrarily; this creates the problem of choosing a set of formulae to discard that is not so large that it makes the resulting set satisfiable, and not so small that the expansion rules that are needed remain inapplicable. Having a large number of possible choices makes the problem of searching for a closed tableau harder. - -This non-determinism can be avoided by restricting the usage of $(\theta)$ so that it is only applied before a modal expansion rule, and so that it only removes the formulae that make that other rule inapplicable. This condition can also be formulated by merging the two rules into a single one. The resulting rule produces the same result as the old one, but implicitly discards all formulae that made the old rule inapplicable. This mechanism for removing $(\theta)$ has been proved to preserve completeness for many modal logics. - -Axiom T expresses reflexivity of the accessibility relation: every world is accessible from itself. The corresponding tableau expansion rule is: -$$ -(T) \frac{A_1;\ldots;A_n;\Box B}{A_1;\ldots;A_n; \Box B; B} -$$ - -This rule relates conditions over the same world: if $\Box B$ is true in a world, by reflexivity $B$ is also true in the same world. This rule is static, not transactional, as both its precondition and consequent refer to the same world. - -This rule copies $\Box B$ from the precondition to the consequent, in spite of this formula having been "used" to generate $B$. This is correct, as the considered world is the same, so $\Box B$ also holds there. This "copying" is necessary in some cases. It is for example necessary to prove the inconsistency of $\Box(a \wedge \neg \Box a)$: the only applicable rules are, in order, $(T), (\wedge), (\theta), (K)$, and this derivation is blocked if the boxed formula is not copied by $(T)$. - -A different method for dealing with formulae holding in alternate worlds is to start a different tableau for each new world that is introduced in the tableau. For example, $\neg \Box A$ implies that $A$ is false in an accessible world, so one starts a new tableau rooted at $\neg A$. This new tableau is attached to the node of the original tableau where the expansion rule has been applied; a closure of this tableau immediately generates a closure of all branches containing that node, regardless of whether the same node is associated with other auxiliary tableaux.
The expansion rules for the auxiliary tableaux are the same as for the original one; therefore, an auxiliary tableau can in turn have other (sub-)auxiliary tableaux. - -The above modal tableaux establish the consistency of a set of formulae, and can be used for solving the local logical consequence problem. This is the problem of telling whether, for each model $M$, if $A$ is true in a world $w$, then $B$ is also true in the same world. This is the same as checking whether $B$ is true in a world of a model, under the assumption that $A$ is also true in the same world of the same model. - -A related problem is the global consequence problem, where the assumption is that a formula (or set of formulae) $G$ is true in all possible worlds of the model. The problem is that of checking whether, in all models $M$ where $G$ is true in all worlds, $B$ is also true in all worlds. - -Local and global assumptions differ on models where the assumed formula is true in some worlds but not in others. As an example, $\{P, \neg \Box (P \wedge Q)\}$ entails $\neg \Box Q$ globally but not locally. Local entailment does not hold in a model consisting of two worlds making $P$ and $\neg P, Q$ true, respectively, and where the second is accessible from the first; in the first world, the assumption is true but $\Box Q$ is false. This counterexample works because $P$ can be assumed true in a world and false in another one. If however the same assumption is considered global, $\neg P$ is not allowed in any world of the model. - -These two problems can be combined, so that one can check whether $B$ is a local consequence of $A$ under the global assumption $G$. Tableaux calculi can deal with a global assumption by a rule allowing its addition to every node, regardless of the world it refers to. - -The following conventions are sometimes used. - -When writing tableaux expansion rules, formulae are often denoted using a convention, so that for example $\alpha$ is always considered to be $\alpha_1 \wedge \alpha_2$. A table of this uniform notation covers formulae in propositional, first-order, and modal logic. - -Each label in the first column of such a table is taken to stand for any of the formulae in the other columns. An overlined formula such as $\overline{\alpha_1}$ indicates that $\alpha_1$ is the negation of whatever formula appears in its place, so that for example in the formula $\neg (a \vee b)$ the subformula $\alpha_1$ is the negation of $a$. - -Since every label indicates many equivalent formulae, this notation allows writing a single rule for all these equivalent formulae. For example, the conjunction expansion rule is formulated as: -$$ -(\alpha) \frac{\alpha}{\begin{array}{c}\alpha_1\\ \alpha_2\end{array}} -$$ - -A formula in a tableau is assumed true. Signed tableaux allow stating that a formula is false. This is generally achieved by adding a label to each formula, where the label T indicates formulae assumed true and F those assumed false. A different but equivalent notation is to write formulae that are assumed true at the left of the node and formulae assumed false at its right. - -In his Symbolic Logic Part II, Charles Lutwidge Dodgson introduced the Method of Trees, the earliest modern use of a truth tree. - -The method of semantic tableaux was invented by the Dutch logician Evert Willem Beth (Beth 1955) and simplified, for classical logic, by Raymond Smullyan (Smullyan 1968, 1995). It is Smullyan's simplification, "one-sided tableaux", that is described above.
Smullyan's method has been generalized to arbitrary many-valued propositional and first-order logics by Walter Carnielli (Carnielli 1987). Tableaux can be intuitively seen as sequent systems upside-down. This symmetrical relation between tableaux and sequent systems was formally established in (Carnielli 1991). diff --git a/wiki/wikipedia/3048.txt b/wiki/wikipedia/3048.txt deleted file mode 100644 index 346c25ea5852d3b9774aa1e124d003dee40a287e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3048.txt +++ /dev/null @@ -1,17 +0,0 @@ -In mathematics, specifically real analysis and functional analysis, the Kirszbraun theorem states that if U is a subset of some Hilbert space H1, and H2 is another Hilbert space, and - -f : U → H2 - -is a Lipschitz-continuous map, then there is a Lipschitz-continuous map - -F: H1 → H2 - -that extends f and has the same Lipschitz constant as f. - -Note that this result in particular applies to Euclidean spaces En and Em, and it was in this form that Kirszbraun originally formulated and proved the theorem. The version for Hilbert spaces can for example be found in (Schwartz 1969, p. 21). If H1 is a separable space (in particular, if it is a Euclidean space) the result is true in Zermelo–Fraenkel set theory; for the fully general case, it appears to need some form of the axiom of choice; the Boolean prime ideal theorem is known to be sufficient. - -The proof of the theorem uses geometric features of Hilbert spaces; the corresponding statement for Banach spaces is not true in general, not even for finite-dimensional Banach spaces. It is for instance possible to construct counterexamples where the domain is a subset of Rn with the maximum norm and Rm carries the Euclidean norm. More generally, the theorem fails for $ \mathbb{R}^m $ equipped with any $ \ell_p$ norm ($ p \neq 2$) (Schwartz 1969, p. 20). - -For an R-valued function the extension is provided by $\tilde f(x):=\inf_{u\in U}\big(f(u)+\text{Lip}(f)\cdot d(x,u)\big),$ where $\text{Lip}(f)$ is f's Lipschitz constant on U. - -The theorem was proved by Mojżesz David Kirszbraun, and later it was reproved by Frederick Valentine, who first proved it for the Euclidean plane. Sometimes this theorem is also called the Kirszbraun–Valentine theorem. diff --git a/wiki/wikipedia/3049.txt b/wiki/wikipedia/3049.txt deleted file mode 100644 index b4b5ee5b999db93f93209b72093f961b3b0f4979..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3049.txt +++ /dev/null @@ -1,111 +0,0 @@ -In Euclidean geometry, Brahmagupta's formula is used to find the area of any cyclic quadrilateral (one that can be inscribed in a circle) given the lengths of the sides. - -Brahmagupta's formula gives the area K of a cyclic quadrilateral whose sides have lengths a, b, c, d as -$$ -K=\sqrt{(s-a)(s-b)(s-c)(s-d)} -$$ - -where s, the semiperimeter, is defined to be -$$ -s=\frac{a+b+c+d}{2}. -$$ - -This formula generalizes Heron's formula for the area of a triangle. A triangle may be regarded as a quadrilateral with one side of length zero. From this perspective, as d approaches zero, a cyclic quadrilateral degenerates into a cyclic triangle (all triangles are cyclic), and Brahmagupta's formula simplifies to Heron's formula. - -If the semiperimeter is not used, Brahmagupta's formula is -$$ -K=\frac{1}{4}\sqrt{(-a+b+c+d)(a-b+c+d)(a+b-c+d)(a+b+c-d)}. -$$ - -Another equivalent version is -$$ -K=\frac{\sqrt{(a^2+b^2+c^2+d^2)^2+8abcd-2(a^4+b^4+c^4+d^4)}}{4}. -$$ - -In the following proof, the cyclic quadrilateral is □ABCD, with side lengths p = AD, q = AB, r = BC, s = CD and diagonal DB.
The area K of the cyclic quadrilateral equals the sum of the areas of △ADB and △BDC: -$$ -K = \frac{1}{2}pq\sin A + \frac{1}{2}rs\sin C. -$$ - -But since □ABCD is a cyclic quadrilateral, ∠DAB = 180° − ∠DCB. Hence sin A = sin C. Therefore, -$$ -K = \frac{1}{2}pq\sin A + \frac{1}{2}rs\sin A -$$ -$$ -K^2 = \frac{1}{4} (pq + rs)^2 \sin^2 A -$$ -$$ -4K^2 = (pq + rs)^2 (1 - \cos^2 A) -$$ - -(using the trigonometric identity $\sin^2 A = 1 - \cos^2 A$). - -Solving for the common side DB, in △ADB and △BDC, the law of cosines gives -$$ -p^2 + q^2 - 2pq\cos A = r^2 + s^2 - 2rs\cos C. -$$ - -Substituting cos C = −cos A (since angles A and C are supplementary) and rearranging, we have -$$ -2 (pq + rs) \cos A = p^2 + q^2 - r^2 - s^2. -$$ - -Substituting this in the equation for the area, -$$ -4K^2 = (pq + rs)^2 - \frac{1}{4}(p^2 + q^2 - r^2 - s^2)^2 -$$ -$$ -16K^2 = 4(pq + rs)^2 - (p^2 + q^2 - r^2 - s^2)^2. -$$ - -The right-hand side is of the form $a^2 - b^2 = (a-b)(a+b)$ and hence can be written as -$$ -[2(pq + rs) - p^2 - q^2 + r^2 +s^2][2(pq + rs) + p^2 + q^2 -r^2 - s^2] -$$ - -which, upon rearranging the terms in the square brackets, yields -$$ -= [ (r+s)^2 - (p-q)^2 ][ (p+q)^2 - (r-s)^2 ] -$$ -$$ -= (q+r+s-p)(p+r+s-q)(p+q+s-r)(p+q+r-s). -$$ - -Introducing the semiperimeter $S = \frac{p+q+r+s}{2}$, -$$ -16K^2 = 16(S-p)(S-q)(S-r)(S-s). -$$ - -Taking the square root, we get -$$ -K = \sqrt{(S-p)(S-q)(S-r)(S-s)}. -$$ - -An alternative, non-trigonometric proof utilizes two applications of Heron's triangle area formula on similar triangles. - -In the case of non-cyclic quadrilaterals, Brahmagupta's formula can be extended by considering the measures of two opposite angles of the quadrilateral: -$$ -K=\sqrt{(s-a)(s-b)(s-c)(s-d)-abcd\cos^2\theta} -$$ - -where θ is half the sum of any two opposite angles. (The choice of which pair of opposite angles is irrelevant: if the other two angles are taken, half their sum is 180° − θ. Since $\cos(180^\circ - \theta) = -\cos\theta$, we have $\cos^2(180^\circ - \theta) = \cos^2\theta$.) This more general formula is known as Bretschneider's formula. - -It is a property of cyclic quadrilaterals (and ultimately of inscribed angles) that opposite angles of a quadrilateral sum to 180°. Consequently, in the case of an inscribed quadrilateral, θ is 90°, whence the term -$$ -abcd\cos^2\theta=abcd\cos^2 \left(90^\circ\right)=abcd\cdot0=0, -$$ - -giving the basic form of Brahmagupta's formula. It follows from the latter equation that the area of a cyclic quadrilateral is the maximum possible area for any quadrilateral with the given side lengths. - -A related formula, which was proved by Coolidge, also gives the area of a general convex quadrilateral. It is -$$ -K=\sqrt{(s-a)(s-b)(s-c)(s-d)-\textstyle{1\over4}(ac+bd+pq)(ac+bd-pq)} -$$ - -where p and q are the lengths of the diagonals of the quadrilateral. In a cyclic quadrilateral, pq = ac + bd according to Ptolemy's theorem, and the formula of Coolidge reduces to Brahmagupta's formula. - -* Heron's formula for the area of a triangle is the special case obtained by taking d = 0. - -* The relationship between the general and extended form of Brahmagupta's formula is similar to how the law of cosines extends the Pythagorean theorem. - -* Increasingly complicated closed-form formulas exist for the area of general polygons on circles, as described by Maley et al.
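As a quick numerical check of the formula (our own example, not from the original article): for a cyclic quadrilateral with sides $a=1$, $b=2$, $c=2$, $d=3$, the semiperimeter is $s=4$ and
$$
K=\sqrt{(4-1)(4-2)(4-2)(4-3)}=\sqrt{12}=2\sqrt{3}\approx 3.46,
$$
while for a unit square ($a=b=c=d=1$, so $s=2$) the formula gives $K=\sqrt{1\cdot 1\cdot 1\cdot 1}=1$, as expected. Setting $d=0$ with remaining sides $1, 2, 2$ gives $s=2.5$ and $K=\sqrt{1.5\cdot 0.5\cdot 0.5\cdot 2.5}\approx 0.968$, which agrees with Heron's formula for the triangle with sides 1, 2, 2.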
diff --git a/wiki/wikipedia/305.txt b/wiki/wikipedia/305.txt deleted file mode 100644 index 6e78bd3f8b6f8e4701e4c13197c95932d18b4b29..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/305.txt +++ /dev/null @@ -1,104 +0,0 @@ -The Szemerédi–Trotter theorem is a mathematical result in the field of discrete geometry. It asserts that given n points and m lines in the Euclidean plane, the number of incidences (i.e., the number of point-line pairs such that the point lies on the line) is -$$ -O \left ( n^{\frac{2}{3}} m^{\frac{2}{3}} + n + m \right ). -$$ - -This bound cannot be improved, except in terms of the implicit constants. - -As for the implicit constants, it was shown by János Pach, Radoš Radoičić, Gábor Tardos, and Géza Tóth that the upper bound $ 2.5n^{2/3} m^{2/3} + n + m$ holds. Since then better constants are known due to better crossing lemma constants; the current best is 2.44. On the other hand, Pach and Tóth showed that the statement does not hold true if one replaces the coefficient 2.5 with 0.42. - -An equivalent formulation of the theorem is the following. Given n points and an integer k ≥ 2, the number of lines which pass through at least k of the points is -$$ -O \left( \frac{n^2}{k^3} + \frac{n}{k} \right ). -$$ - -The original proof of Endre Szemerédi and William T. Trotter was somewhat complicated, using a combinatorial technique known as cell decomposition. Later, László Székely discovered a much simpler proof using the crossing number inequality for graphs. (See below.) - -The Szemerédi–Trotter theorem has a number of consequences, including Beck's theorem in incidence geometry and the Erdős–Szemerédi sum-product problem in additive combinatorics. - -We may discard the lines which contain two or fewer of the points, as they can contribute at most 2m incidences to the total number. Thus we may assume that every line contains at least three of the points. - -If a line contains k points, then it will contain k − 1 line segments which connect two consecutive points along the line. Because k ≥ 3 after discarding the two-point lines, it follows that k − 1 ≥ k/2, so the number of these line segments on each line is at least half the number of incidences on that line. Summing over all of the lines, the number of these line segments is again at least half the total number of incidences. Thus if e denotes the number of such line segments, it will suffice to show that -$$ -e = O \left ( n^{\frac{2}{3}} m^{\frac{2}{3}} + n + m \right). -$$ - -Now consider the graph formed by using the n points as vertices, and the e line segments as edges. Since each line segment lies on one of m lines, and any two lines intersect in at most one point, the crossing number of this graph is at most the number of points where two lines intersect, which is at most m(m − 1)/2. The crossing number inequality implies that either e ≤ 7.5n, or that $m(m-1)/2 \geq e^3 / (33.75 n^2)$. In either case $e \leq 3.24(nm)^{2/3} + 7.5n$, giving the desired bound -$$ -e = O \left ( n^{\frac{2}{3}} m^{\frac{2}{3}} + n + m \right ). -$$ - -Since every pair of points can be connected by at most one line, there can be at most n(n − 1)/2 lines which connect k or more points, since k ≥ 2. This bound will prove the theorem when k is small (e.g. if k ≤ C for some absolute constant C). Thus, we need only consider the case when k is large, say k ≥ C. - -Suppose that there are m lines that each contain at least k points.
These lines generate at least mk incidences, and so by the first formulation of the Szemerédi–Trotter theorem, we have -$$ -mk = O \left ( n^{\frac{2}{3}} m^{\frac{2}{3}} + n + m \right ), -$$ - -and so at least one of the statements $mk = O( n^{2/3} m^{2/3} ), mk = O(n)$, or $mk = O(m)$ is true. The third possibility is ruled out since k was assumed to be large, so we are left with the first two. But in either of these two cases, some elementary algebra will give the bound $m = O( n^2 / k^3 + n/k )$ as desired. - -Except for its constant, the Szemerédi–Trotter incidence bound cannot be improved. To see this, consider for any positive integer $N\in \mathbb{N}$ a set of points on the integer lattice -$$ -P = \left \{ (a, b) \in \mathbb{Z}^2 \ : \ 1 \leq a \leq N; 1 \leq b \leq 2N^2 \right \}, -$$ - -and a set of lines -$$ -L = \left \{ (x, mx + b) \ : \ m, b \in \mathbb{Z}; 1 \leq m \leq N; 1 \leq b \leq N^2 \right \}. -$$ - -Clearly, $|P| = 2N^3$ and $|L| = N^3$. Since each line is incident to N points (i.e., once for each $x \in \{1, \cdots, N\}$), the number of incidences is $N^4$, which matches the upper bound. - -One generalization of this result to arbitrary dimension, $\mathbb{R}^d$, was found by Agarwal and Aronov. Given a set of n points, S, and the set of m hyperplanes, H, which are each spanned by S, the number of incidences between S and H is bounded above by -$$ -O \left (m^{\frac{2}{3}}n^{\frac{d}{3}}+n^{d-1} \right ). -$$ - -Equivalently, the number of hyperplanes in H containing k or more points is bounded above by -$$ -O\left( \frac{n^d}{k^3} + \frac{n^{d-1}}{k} \right ). -$$ - -A construction due to Edelsbrunner shows this bound to be asymptotically optimal. - -József Solymosi and Terence Tao obtained near sharp upper bounds for the number of incidences between points and algebraic varieties in higher dimensions, when the points and varieties satisfy "certain pseudo-line type axioms". Their proof uses the Polynomial Ham Sandwich Theorem. - -==In $\mathbb{C}^2$== - -Many proofs of the Szemerédi–Trotter theorem over $\mathbb{R}$ rely in a crucial way on the topology of Euclidean space and so do not extend easily to other fields: for example, the original proof of Szemerédi and Trotter, the polynomial partitioning proof, and the crossing number proof do not extend to the complex plane. - -Tóth successfully generalized the original proof of Szemerédi and Trotter to the complex plane $\mathbb{C}^2$ by introducing additional ideas. This result was also obtained independently and through a different method by Zahl. The implicit constant in the bound is not the same in the complex numbers: in Tóth's proof the constant can be taken to be $10^{60}$; the constant is not explicit in Zahl's proof. - -When the point set is a Cartesian product, Solymosi and Tardos show that the Szemerédi–Trotter bound holds using a much simpler argument. - -Let $\mathbb{F}$ be a field. - -A Szemerédi–Trotter bound is impossible in general due to the following example, stated here in $\mathbb{F}_p$: let $\mathcal{P} = \mathbb{F}_p\times \mathbb{F}_p$ be the set of all $p^2$ points and let $\mathcal{L}$ be the set of all $p^2$ lines in the plane. Since each line contains $p$ points, there are $p^3$ incidences. On the other hand, a Szemerédi–Trotter bound would give $O((p^2)^{2/3} (p^2)^{2/3} + p^2) = O(p^{8/3})$ incidences. This example shows that the trivial, combinatorial incidence bound is tight.
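The integer-lattice construction above is easy to verify by brute force for small N; the following Python sketch is our own illustration and counts the incidences directly:

```python
# Brute-force check of the lower-bound construction above: with
# P = {1..N} x {1..2N^2} and the lines y = m*x + b for 1 <= m <= N,
# 1 <= b <= N^2, each line meets P in exactly N points, so the total
# incidence count is N^4.
N = 4
points = {(a, b) for a in range(1, N + 1) for b in range(1, 2 * N**2 + 1)}
lines = [(m, b) for m in range(1, N + 1) for b in range(1, N**2 + 1)]

incidences = sum((x, m * x + b) in points
                 for (m, b) in lines
                 for x in range(1, N + 1))
print(incidences, N**4)  # 256 256
```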
- -Bourgain, Katz and Tao show that if this example is excluded, then an incidence bound that is an improvement on the trivial bound can be attained. - -Incidence bounds over finite fields are of two types: (i) when at least one of the set of points or lines is 'large' in terms of the characteristic of the field; (ii) both the set of points and the set of lines are 'small' in terms of the characteristic. - -Let $q$ be an odd prime power. Then Vinh showed that the number of incidences between $n$ points and $m$ lines in $\mathbb{F}_q^2$ is at most -$$ -\frac{nm}{q} + \sqrt{qnm}. -$$ - -Note that there is no implicit constant in this bound. - -Let $\mathbb{F}$ be a field of characteristic $p\neq 2$. Stevens and de Zeeuw show that the number of incidences between $n$ points and $m$ lines in $\mathbb{F}^2$ is -$$ -O\left(m^{\frac{11}{15}}n^{\frac{11}{15}}\right) -$$ - -under the condition $m^{-2}n^{13} \leq p^{15}$ in positive characteristic. (In a field of characteristic zero, this condition is not necessary.) This bound is better than the trivial incidence estimate when $m^{7/8} < n < m^{8/7}$. - -If the point set is a Cartesian product, then they show an improved incidence bound: let $\mathcal{P} = A\times B \subseteq \mathbb{F}^2$ be a finite set of points with $|A|\leq |B|$ and let $\mathcal{L} $ be a set of lines in the plane. Suppose that $|A||B|^2 \leq |\mathcal{L}|^3$ and, in positive characteristic, that $|A||\mathcal{L}|\leq p^2$. Then the number of incidences between $\mathcal{P}$ and $\mathcal{L} $ is -$$ -O\left(|A|^{\frac{3}{4}}|B|^{\frac{1}{2}} |\mathcal{L}|^{\frac{3}{4}} + |\mathcal{L}|\right). -$$ - -This bound is optimal. Note that by point-line duality in the plane, this incidence bound can be rephrased for an arbitrary point set and a set of lines having a Cartesian product structure. - -In both the reals and arbitrary fields, Rudnev and Shkredov show an incidence bound for when both the point set and the line set have a Cartesian product structure. This is sometimes better than the above bounds. diff --git a/wiki/wikipedia/3050.txt b/wiki/wikipedia/3050.txt deleted file mode 100644 index 9e82c80bd36ce26e75be639e987412513b85431a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3050.txt +++ /dev/null @@ -1,5 +0,0 @@ -In field theory, a branch of mathematics, the isomorphism extension theorem is an important theorem regarding the extension of a field isomorphism to a larger field. - -The theorem states that given any field $F$, an algebraic extension field $E$ of $F$ and an isomorphism $\phi$ mapping $F$ onto a field $F'$, then $\phi$ can be extended to an isomorphism $\tau$ mapping $E$ onto an algebraic extension $E'$ of $F'$ (a subfield of the algebraic closure of $F'$). - -The proof of the isomorphism extension theorem depends on Zorn's lemma. diff --git a/wiki/wikipedia/3051.txt b/wiki/wikipedia/3051.txt deleted file mode 100644 index 351e977dec1470e4490827817a80386adb6ec6c7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3051.txt +++ /dev/null @@ -1,51 +0,0 @@ -Squaring the square is the problem of tiling an integral square using only other integral squares. (An integral square is a square whose sides have integer length.) The name was coined in a humorous analogy with squaring the circle. Squaring the square is an easy task unless additional conditions are set. The most studied restriction is that the squaring be perfect, meaning the sizes of the smaller squares are all different.
A related problem is squaring the plane, which can be done even with the restriction that each natural number occurs exactly once as a size of a square in the tiling. The order of a squared square is its number of constituent squares. - -A "perfect" squared square is a square such that each of the smaller squares has a different size. - -It is first recorded as being studied by R. L. Brooks, C. A. B. Smith, A. H. Stone and W. T. Tutte at Cambridge University between 1936 and 1938. - -They transformed the square tiling into an equivalent electrical circuit — they called it a "Smith diagram" — by considering the squares as resistors that connected to their neighbors at their top and bottom edges, and then applied Kirchhoff's circuit laws and circuit decomposition techniques to that circuit. The first perfect squared squares they found were of order 69. - -The first perfect squared square to be published, a compound one of side 4205 and order 55, was found by Roland Sprague in 1939. - -Martin Gardner published an extensive article written by W. T. Tutte about the early history of squaring the square in his mathematical games column in November 1958. - -A "simple" squared square is one where no subset of the squares forms a rectangle or square, otherwise it is "compound". - -In 1978, A. J. W. Duijvestijn discovered a simple perfect squared square of side 112 with the smallest number of squares using a computer search. His tiling uses 21 squares, and has been proved to be minimal. This squared square forms the logo of the Trinity Mathematical Society. It also appears on the cover of the Journal of Combinatorial Theory. - -Duijvestijn also found two simple perfect squared squares of side 110, but each comprising 22 squares. Theophilus Harding Willcocks, an amateur mathematician and fairy chess composer, found another. In 1999, I. Gambini proved that these three are the smallest perfect squared squares in terms of side length. - -The perfect compound squared square with the fewest squares was discovered by T.H. Willcocks in 1946 and has 24 squares; however, it was not until 1982 that Duijvestijn, Pasquale Joseph Federico and P. Leeuw mathematically proved it to be the lowest-order example. - -When the constraint of all the squares being different sizes is relaxed, a squared square such that the side lengths of the smaller squares do not have a common divisor larger than 1 is called a "Mrs. Perkins's quilt". In other words, the greatest common divisor of all the smaller side lengths should be 1. - -The Mrs. Perkins's quilt problem is to find a Mrs. Perkins's quilt with the fewest pieces for a given n × n square. - -A cute number is a positive integer n such that some square admits a dissection into n squares of no more than two different sizes, without other restrictions. It can be shown that aside from 2, 3, and 5, every positive integer is cute. - -Figure: tiling the plane with different integral squares using the Fibonacci series.
1. Tiling with squares with Fibonacci-number sides is almost perfect except for 2 squares of side 1.
2. Duijvestijn found a 110-square tiled with 22 different integer squares.
3. Scaling the Fibonacci tiling by 110 times and replacing one of the 110-squares with Duijvestijn's perfects the tiling.
In 1975, Solomon Golomb raised the question of whether the whole plane can be tiled by squares, one of each integer edge-length, which he called the heterogeneous tiling conjecture. This problem was later publicized by Martin Gardner in his Scientific American column and appeared in several books, but it defied solution for over 30 years. - -In Tilings and Patterns, published in 1987, Branko Grünbaum and G. C. Shephard stated that in all perfect integral tilings of the plane known at that time, the sizes of the squares grew exponentially. For example, the plane can be tiled with different integral squares, though not using every integer, by recursively taking any perfect squared square and enlarging it so that the formerly smallest tile now has the size of the original squared square, then replacing this tile with a copy of the original squared square. - -In 2008 James Henle and Frederick Henle proved that this, in fact, can be done. Their proof is constructive and proceeds by "puffing up" an L-shaped region formed by two side-by-side and horizontally flush squares of different sizes to a perfect tiling of a larger rectangular region, then adjoining the square of the smallest size not yet used to get another, larger L-shaped region. The squares added during the puffing up procedure have sizes that have not yet appeared in the construction and the procedure is set up so that the resulting rectangular regions are expanding in all four directions, which leads to a tiling of the whole plane. - -Cubing the cube is the analogue in three dimensions of squaring the square: that is, given a cube C, the problem of dividing it into finitely many smaller cubes, no two congruent. - -Unlike the case of squaring the square, a hard yet solvable problem, there is no perfect cubed cube and, more generally, no dissection of a rectangular cuboid C into a finite number of unequal cubes. - -To prove this, we start with the following claim: for any perfect dissection of a rectangle into squares, the smallest square in this dissection does not lie on an edge of the rectangle. Indeed, each corner square has a smaller adjacent edge square, and the smallest edge square is adjacent to smaller squares not on the edge. - -Now suppose that there is a perfect dissection of a rectangular cuboid into cubes. Make a face of C its horizontal base. The base is divided into a perfect squared rectangle R by the cubes which rest on it. The smallest square s1 in R is surrounded by larger, and therefore higher, cubes. Hence the upper face of the cube on s1 is divided into a perfect squared square by the cubes which rest on it. Let s2 be the smallest square in this dissection. By the claim above, this is surrounded on all 4 sides by squares which are larger than s2 and therefore higher. - -The sequence of squares s1, s2, ... is infinite and the corresponding cubes are infinite in number. This contradicts our original supposition. - -If a 4-dimensional hypercube could be perfectly hypercubed then its 'faces' would be perfect cubed cubes; this is impossible. Similarly, there is no solution for cubes of any higher dimension. diff --git a/wiki/wikipedia/3052.txt b/wiki/wikipedia/3052.txt deleted file mode 100644 index 3863b02dcc922bef10a9a8eed9357f689a537703..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3052.txt +++ /dev/null @@ -1,9 +0,0 @@ -METIS is a software package for graph partitioning that implements various multilevel algorithms.
METIS' multilevel approach has three phases and comes with several algorithms for each phase: - -# Coarsen the graph by generating a sequence of graphs G0, G1, ..., GN, where G0 is the original graph and, for each 0 ≤ i < j ≤ N, the number of vertices in Gi is greater than the number of vertices in Gj. - -# Compute a partition of GN. - -# Project the partition back through the sequence in the order of GN, ..., G0, refining it with respect to each graph. - -The final partition computed during the third phase (the refined partition projected onto G0) is a partition of the original graph. diff --git a/wiki/wikipedia/3053.txt b/wiki/wikipedia/3053.txt deleted file mode 100644 index 0b36b361c59fbcfe30e7fb04e275afd0e8587335..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3053.txt +++ /dev/null @@ -1,11 +0,0 @@ -The execution of a business process consists of one or more transactions. Each transaction may consist of several individual operations yet, as a whole, it moves the system between consistent states. - -There are two groups of systems where compensating transactions may be applied: - -1. In the context of a database this is often easily achieved using transactions and the commit/rollback mechanism. Compensating-transaction logic could be implemented as an additional layer on top of a database supporting commit/rollback. In that case, we can decrease business transaction granularity. - -2. For systems without a commit/rollback mechanism available, one can undo a failed transaction with a compensating transaction, which will bring the system back to its initial state. Typically, this is only a workaround which has to be implemented manually and cannot guarantee that the system always ends in a consistent state. The system designer may need to consider what happens if the compensating transaction also fails. - -Compensating transactions are also used in cases where a transaction is long-lived (commonly called saga transactions), for instance in a business process requiring user input. In such cases, data will be committed to permanent storage, but may subsequently need to be rolled back, perhaps due to the user opting to cancel the operation. Unlike conventional rollbacks, specific business logic will typically be required to roll back a long-lived transaction and restore the system to its original state. This type of transaction differs from distributed transactions (often implemented using the two-phase-commit protocol), because although both types of transactions can result in multiple data stores being updated, compensating transactions allow for the updates to span a long period of time. - -Compensating transactions are often designed into Web services that participate in the execution of business processes that are part of a service-oriented architecture solution. diff --git a/wiki/wikipedia/3054.txt b/wiki/wikipedia/3054.txt deleted file mode 100644 index 83ab53db1da4094d9dc15fbeb786840699263f3d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3054.txt +++ /dev/null @@ -1,30 +0,0 @@ -In number theory, a Woodall number (Wn) is any natural number of the form -$$ -W_n = n \cdot 2^n - 1 -$$ - -for some natural number n. The first few Woodall numbers are: - -1, 7, 23, 63, 159, 383, 895, … . - -Woodall numbers were first studied by Allan J. C. Cunningham and H. J. Woodall in 1917, inspired by James Cullen's earlier study of the similarly defined Cullen numbers.
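The definition above is easy to experiment with. The following Python sketch is our own addition; it assumes the sympy library is available for primality testing.

```python
# Generate Woodall numbers W_n = n * 2**n - 1 and search for the
# indices n that give Woodall primes.
from sympy import isprime  # assumed dependency

def woodall(n):
    return n * 2**n - 1

print([woodall(n) for n in range(1, 8)])
# [1, 7, 23, 63, 159, 383, 895]

print([n for n in range(1, 400) if isprime(woodall(n))])
# expected to begin 2, 3, 6, 30, 75, 81, 115, 123, 249, 362, 384
```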
- -Woodall numbers that are also prime numbers are called Woodall primes; the first few exponents n for which the corresponding Woodall numbers Wn are prime are 2, 3, 6, 30, 75, 81, 115, 123, 249, 362, 384, ... ; the Woodall primes themselves begin with 7, 23, 383, 32212254719, ... . - -In 1976 Christopher Hooley showed that almost all Cullen numbers are composite. In October 1995, Wilfred Keller published a paper discussing several new Cullen primes and the efforts made to factorise other Cullen and Woodall numbers. Included in that paper is a personal communication to Keller from Hiromi Suyama, asserting that Hooley's method can be reformulated to show that it works for any sequence of numbers $n \cdot 2^{n+a} + b$, where a and b are integers, and in particular, that almost all Woodall numbers are composite. It is an open problem whether there are infinitely many Woodall primes. The largest known Woodall prime (as of March 2018) is $17016602 \times 2^{17016602} - 1$. It has 5,122,515 digits and was found by Diego Bertolotti in March 2018 in the distributed computing project PrimeGrid. - -Starting with W4 = 63 and W5 = 159, every sixth Woodall number is divisible by 3; thus, in order for Wn to be prime, the index n cannot be congruent to 4 or 5 (modulo 6). Also, for a positive integer m, the Woodall number $W_{2^m}$ may be prime only if $2^m + m$ is prime. As of January 2019, the only known primes that are both Woodall primes and Mersenne primes are W2 = M3 = 7, and W512 = M521. - -Like Cullen numbers, Woodall numbers have many divisibility properties. For example, if p is a prime number, then p divides - -$W_{(p+1)/2}$ if the Jacobi symbol $\left(\frac{2}{p}\right)$ is +1 and - -$W_{(3p-1)/2}$ if the Jacobi symbol $\left(\frac{2}{p}\right)$ is −1. - -A generalized Woodall number base b is defined to be a number of the form $n \times b^n - 1$, where n + 2 > b; if a prime can be written in this form, it is then called a generalized Woodall prime. - -The smallest values of n such that $n \times b^n - 1$ is prime for b = 1, 2, 3, ... are - -3, 2, 1, 1, 8, 1, 2, 1, 10, 2, 2, 1, 2, 1, 2, 167, 2, 1, 12, 1, 2, 2, 29028, 1, 2, 3, 10, 2, 26850, 1, 8, 1, 42, 2, 6, 2, 24, 1, 2, 3, 2, 1, 2, 1, 2, 2, 140, 1, 2, 2, 22, 2, 8, 1, 2064, 2, 468, 6, 2, 1, 362, 1, 2, 2, 6, 3, 26, 1, 2, 3, 20, 1, 2, 1, 28, 2, 38, 5, 3024, 1, 2, 81, 858, 1, 2, 3, 2, 8, 60, 1, 2, 2, 10, 5, 2, 7, 182, 1, 17782, 3, ... - -The largest known generalized Woodall prime with base greater than 2 is $2740879 \times 32^{2740879} - 1$. diff --git a/wiki/wikipedia/3055.txt b/wiki/wikipedia/3055.txt deleted file mode 100644 index 83cccbd8d9f2cd64484c1da8993092579382e92e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3055.txt +++ /dev/null @@ -1,9 +0,0 @@ -Carnot's theorem (named after Lazare Carnot) describes a relation between conic sections and triangles. - -In a triangle $ABC$ with points $C_A, C_B$ on the side $AB$, $A_B, A_C$ on the side $BC$ and $B_C, B_A$ on the side $AC$, those six points are located on a common conic section if and only if the following equation holds: -$$ -\frac{|AC_A|}{|C_AB|} \cdot \frac{|AC_B|}{|C_BB|} \cdot \frac{|BA_B|}{|A_BC|} \cdot \frac{|BA_C|}{|A_CC|} \cdot \frac{|CB_C|}{|B_CA|} \cdot \frac{|CB_A|}{|B_AA|} = 1. -$$ diff --git a/wiki/wikipedia/3056.txt b/wiki/wikipedia/3056.txt deleted file mode 100644 index 403cfef5c78b9c230d0fcb067e1d1919ea31e1b9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3056.txt +++ /dev/null @@ -1,56 +0,0 @@ -Bioche's rules, formulated by the French mathematician Charles Bioche (1859–1949), are rules to aid in the computation of certain indefinite integrals in which the integrand contains sines and cosines.
In the following, $f(t)$ is a rational expression in $\sin t$ and $\cos t$. In order to calculate $\int f(t)dt$, consider the integrand $\omega(t)=f(t)dt$. One examines the behavior of this entire integrand, including the $dt$, under the substitutions $t \mapsto -t$, $t \mapsto \pi-t$ and $t \mapsto \pi+t$: if $\omega(-t)=\omega(t)$, a good change of variables is $u=\cos t$; if $\omega(\pi-t)=\omega(t)$, $u=\sin t$; if $\omega(\pi+t)=\omega(t)$, $u=\tan t$; if two of these relations hold, $u=\cos 2t$; and in all other cases one may use $u=\tan(t/2)$. - -In probability theory, the law of the iterated logarithm describes the magnitude of the fluctuations of a random walk. Let {Yn} be independent, identically distributed random variables with means zero and unit variances. Let Sn = Y1 + ... + Yn. Then - - - -\limsup_{n \to \infty} \frac{\pm S_n}{\sqrt{2n \log\log n}} = 1 \quad \text{a.s.}, - - - -where “log” is the natural logarithm, “lim sup” denotes the limit superior, and “a.s.” stands for “almost surely”. - -The law of the iterated logarithm operates “in between” the law of large numbers and the central limit theorem. There are two versions of the law of large numbers — the weak and the strong — and they both state that the sums Sn, scaled by $n^{-1}$, converge to zero, respectively in probability and almost surely: - - - -\frac{S_n}{n} \ \xrightarrow{p}\ 0, \qquad - -\frac{S_n}{n} \ \xrightarrow{a.s.} 0, \qquad \text{as}\ \ n\to\infty. - - - -On the other hand, the central limit theorem states that the sums Sn scaled by the factor $n^{-1/2}$ converge in distribution to a standard normal distribution. By Kolmogorov's zero–one law, for any fixed M, the probability that the event -$$ - \limsup_n \frac{S_n}{\sqrt{n}} \geq M -$$ - -occurs is 0 or 1. - -Then -$$ - \Pr\left( \limsup_n \frac{S_n}{\sqrt{n}} \geq M \right) \geqslant \limsup_n \Pr\left( \frac{S_n}{\sqrt{n}} \geq M \right) = \Pr\left( \mathcal{N}(0, 1) \geq M \right) > 0 -$$ - -so -$$ -\limsup_n \frac{S_n}{\sqrt{n}}=\infty \qquad \text{with probability 1.} -$$ - -An identical argument shows that -$$ - \liminf_n \frac{S_n}{\sqrt{n}}=-\infty \qquad \text{with probability 1.} -$$ - -This implies that these quantities cannot converge almost surely. In fact, they cannot even converge in probability, which follows from the equality -$$ -\frac{S_{2n}}{\sqrt{2n}}-\frac{S_n}{\sqrt{n}} = \frac{1}{\sqrt{2}}\frac{S_{2n}-S_n}{\sqrt{n}} - \left (1-\frac{1}{\sqrt{2}} \right )\frac{S_n}{\sqrt{n}} -$$ - -and the fact that the random variables -$$ -\frac{S_n}{\sqrt{n}}\quad \text{and} \quad \frac{S_{2n}-S_n}{\sqrt{n}} -$$ - -are independent and both converge in distribution to $\mathcal{N}(0, 1).$ - -The law of the iterated logarithm provides the scaling factor where the two limits become different: - - - -\frac{S_n}{\sqrt{2n\log\log n}} \ \xrightarrow{p}\ 0, \qquad - -\frac{S_n}{\sqrt{2n\log\log n}} \ \stackrel{a.s.}{\nrightarrow}\ 0, \qquad \text{as}\ \ n\to\infty. - - - -Thus, although the absolute value of the quantity $S_n/\sqrt{2n\log\log n}$ is less than any predefined ε > 0 with probability approaching one, it will nevertheless almost surely be greater than any ε < 1 infinitely often; in fact, the quantity will be visiting the neighborhoods of any point in the interval (-1,1) almost surely. - -The law of the iterated logarithm (LIL) for a sum of independent and identically distributed (i.i.d.) random variables with zero mean and bounded increment dates back to Khinchin and Kolmogorov in the 1920s. - -Since then, there has been a tremendous amount of work on the LIL for various kinds of - -dependent structures and for stochastic processes. The following is a small sample of notable developments. - -Hartman–Wintner (1940) generalized the LIL to random walks with increments with zero mean and finite variance. De Acosta (1983) gave a simple proof of the Hartman–Wintner version of the LIL. - -Strassen (1964) studied the LIL from the point of view of invariance principles.
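The scaling factor in the law can be illustrated numerically. The following Python sketch is our own illustration (it assumes numpy is available): it simulates a ±1 random walk and evaluates $S_n/\sqrt{2n\log\log n}$ along the path.

```python
# Simulation of the iterated-logarithm scaling for a +/-1 random walk
# (mean 0, variance 1). The running extremes of the scaled sums should
# be of order +1 and -1.
import numpy as np

rng = np.random.default_rng(0)
steps = rng.choice([-1.0, 1.0], size=10**6)
s = np.cumsum(steps)
n = np.arange(1, len(s) + 1)
mask = n >= 10                       # log log n must be positive
ratio = s[mask] / np.sqrt(2 * n[mask] * np.log(np.log(n[mask])))
print(ratio.max(), ratio.min())      # typically of order +1 and -1
```

Since the statement is asymptotic, the observed extremes for any finite run only approximate ±1, and the convergence of the limsup is slow.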
- -Stout (1970) generalized the LIL to stationary ergodic martingales. - -Wittmann (1985) generalized the Hartman–Wintner version of the LIL to random walks satisfying milder conditions. - -Vovk (1987) derived a version of the LIL valid for a single chaotic sequence (Kolmogorov random sequence). This is notable, as it is outside the realm of classical probability theory. - -Yongge Wang (1996) showed that the law of the iterated logarithm also holds for polynomial-time pseudorandom sequences. A Java-based software tool tests whether a pseudorandom generator outputs sequences that satisfy the LIL. - -Balsubramani (2014) proved a non-asymptotic LIL that holds over finite-time martingale sample paths. This subsumes the martingale LIL, as it provides matching finite-sample concentration and anti-concentration bounds, and enables sequential testing and other applications. diff --git a/wiki/wikipedia/306.txt b/wiki/wikipedia/306.txt deleted file mode 100644 index f52e2fb7c43399cabb527dbc37835d6ac840a677..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/306.txt +++ /dev/null @@ -1,70 +0,0 @@ -In mathematics, the Whitney inequality gives an upper bound for the error of best approximation of a function by polynomials in terms of the moduli of smoothness. It was first proved by Hassler Whitney in 1957, and is an important tool in the field of approximation theory for obtaining upper estimates on the errors of best approximation. - -Denote the value of the best uniform approximation of a function $f\in C([a,b])$ by algebraic polynomials $P_n$ of degree $\leq n$ by -$$ - E_n(f)_{[a,b]} := \inf_{P_n}{\| f-P_n \|_{C([a,b])}}. -$$ - -The moduli of smoothness of order $k$ of a function $f\in C([a,b])$ are defined as: -$$ -\omega_k(t):=\omega_k(t;f;[a,b]) :=\sup_{h\in [0,t]} \|\Delta_h^k(f;\cdot)\|_{C([a,b-kh])} \quad \text{ for } \quad t\in [0,(b-a)/k], -$$ -$$ -\omega_k(t):=\omega_k((b-a)/k)\quad \text{ for} \quad t>(b-a)/k , -$$ - -where $\Delta_h^k$ is the finite difference of order $k$. - -Theorem: [Whitney, 1957] If $f\in C([a,b])$, then -$$ -E_{k-1}(f)_{[a,b]}\leq W_k \omega_k\left(\frac{b-a}{k};f;[a,b]\right) -$$ - -where $W_k$ is a constant depending only on $k$. The Whitney constant $W(k)$ is the smallest value of $W_k$ for which the above inequality holds. The theorem is particularly useful when applied on intervals of small length, leading to good estimates on the error of spline approximation. - -The original proof given by Whitney follows an analytic argument which utilizes the properties of moduli of smoothness. However, it can also be proved in a much shorter way using Peetre's K-functionals. - -Let: -$$ -x_0:=a, \quad h:=\frac{b-a}{k}, \quad x_j:=x_0+jh, \quad F(x) = \int_a^x f(u) du, -$$ -$$ -G(x):=F(x)-L(x;F;x_0,\ldots,x_k), \quad g(x):=G'(x), -$$ -$$ -\omega_k(t):=\omega_k(t;f;[a,b])\equiv \omega_k(t;g;[a,b]) -$$ - -where $L(x;F;x_0,\ldots,x_k)$ is the Lagrange polynomial for $F$ at the nodes $\{x_0,\ldots,x_k\}$. - -Now fix some $x\in [a,b]$ and choose $\delta$ for which $(x+k\delta)\in [a,b]$.
Then: -$$ -\int_0^1 \Delta_{t\delta}^k (g;x) dt =(-1)^kg(x)+\sum_{j=1}^k (-1)^{k-j} \binom{k}{j} \int_0^1 g(x+jt\delta) dt -$$ -$$ -=(-1)^kg(x)+\sum_{j=1}^k{(-1)^{k-j} \binom{k}{j} \frac{1}{j\delta}(G(x+j\delta)-G(x))}. -$$ - -Therefore: -$$ -|g(x)| \leq \int_0^1 |\Delta_{t\delta}^k(g;x)| dt + \frac{2}{|\delta|}\|G\|_{C([a,b])} \sum_{j=1}^k \binom{k}{j}\frac{1}{j} \leq \omega_k(|\delta|)+\frac{1}{|\delta|}2^{k+1}\|G\|_{C([a,b])}. -$$ - -And since we have $\|G\|_{C([a,b])}\leq h\omega_k(h)$ (a property of moduli of smoothness), -$$ -E_{k-1}(f)_{[a,b]}\leq \|g\|_{C([a,b])} \leq \omega_k(|\delta|) +\frac{1}{|\delta|} h 2^{k+1}\omega_k(h). -$$ - -Since $\delta$ can always be chosen in such a way that $h\geq |\delta| \geq h/2$, this completes the proof. - -It is important to have sharp estimates of the Whitney constants. It is easily shown that $W(1)=1/2$, and Burkill (1952) first proved that $W(2)\leq 1$ and conjectured that $W(k)\leq 1$ for all $k$. Whitney was also able to prove that -$$ -W(2)=\frac{1}{2}, \quad \frac{8}{15}\leq W(3) \leq 0.7, \quad W(4)\leq 3.3, \quad W(5)\leq 10.4, -$$ - -and -$$ -W(k)\geq \frac{1}{2},\quad k\in\mathbb{N}. -$$ - -In 1964, Brudnyi was able to obtain the estimate $W(k)=O(k^{2k})$, and in 1982, Sendov proved that $W(k)\leq (k+1)k^k$. Then, in 1985, Ivanov and Takev proved that $W(k)=O(k\ln k)$, and Binev proved that $W(k)=O(k)$. Sendov conjectured that $W(k)\leq 1$ for all $k$, and in 1985 was able to prove that the Whitney constants are bounded above by an absolute constant, that is, $W(k)\leq 6$ for all $k$. Kryakin, Gilewicz, and Shevchuk (2002) were able to show that $W(k)\leq 2$ for $k \leq 82000$, and that $W(k)\leq 2+\frac{1}{e^2}$ for all $k$. diff --git a/wiki/wikipedia/3060.txt b/wiki/wikipedia/3060.txt deleted file mode 100644 index 77d354f7276593d2af5610a0e844c9f2cac747aa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3060.txt +++ /dev/null @@ -1,34 +0,0 @@ -In mathematics, the Borel–Carathéodory theorem in complex analysis shows that an analytic function may be bounded by its real part. It is an application of the maximum modulus principle. It is named for Émile Borel and Constantin Carathéodory. - -Let a function $f$ be analytic on a closed disc of radius R centered at the origin. Suppose that r < R. Then, we have the following inequality: -$$ - \|f\|_r \le \frac{2r}{R-r} \sup_{|z| \le R} \operatorname{Re} f(z) + \frac{R+r}{R-r} |f(0)|, -$$ - -where the norm on the left-hand side denotes the maximum of $|f|$ on the closed disc of radius $r$ (which, by the maximum modulus principle, is attained on the circle $|z| = r$). - -Suppose first that $f(0) = 0$ and that $f$ is not identically zero. Let $A = \sup_{|z| \le R} \operatorname{Re} f(z)$; since $f$ is non-constant with $f(0)=0$, the maximum principle gives $A > 0$, and $f$ maps the closed disc of radius $R$ into the half-plane $\operatorname{Re} w \le A$. The Möbius transformation $w \mapsto w/(2A - w)$ maps this half-plane into the closed unit disc and sends $0$ to $0$, so Schwarz's lemma applied to the composition gives -$$ -\left| \frac{f(z)}{2A - f(z)} \right| \le \frac{|z|}{R}. -$$ - -Take |z| ≤ r. The above becomes -$$ -R|f(z)| \leq r|f(z) - 2A| \leq r|f(z)| + 2Ar -$$ - -so -$$ -|f(z)| \leq \frac{2Ar}{R-r}, -$$ - -as claimed. In the general case, we may apply the above to f(z)-f(0): - - - -\begin{align} - -|f(z)|-|f(0)| - -&\leq |f(z)-f(0)| - -\leq \frac{2r}{R-r} \sup_{|w| \leq R} \operatorname{Re}(f(w) - f(0)) \\ - -&\leq \frac{2r}{R-r} \left(\sup_{|w| \leq R} \operatorname{Re} f(w) + |f(0)|\right), - -\end{align} - -which, when rearranged, gives the claim. diff --git a/wiki/wikipedia/3061.txt b/wiki/wikipedia/3061.txt deleted file mode 100644 index 5cc285a46ac5bb70fa5c0c7994bdf58e833bc0a4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3061.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, Takeuti's conjecture is the conjecture of Gaisi Takeuti that a sequent formalisation of second-order logic has cut-elimination (Takeuti 1953).
It was settled positively: - -* By Tait, using a semantic technique for proving cut-elimination, based on work by Schütte (Tait 1966); - -* Independently by Prawitz (Prawitz 1968) and Takahashi (Takahashi 1967), by a similar technique, although Prawitz's and Takahashi's proofs are not limited to second-order logic, but concern higher-order logics in general; - -* It is a corollary of Jean-Yves Girard's syntactic proof of strong normalization for System F. - -Takeuti's conjecture is equivalent to the consistency of second-order arithmetic in the sense that each of the statements can be derived from the other in the weak system PRA; consistency refers here to the truth of the Gödel sentence for second-order arithmetic. It is also equivalent to the strong normalization of the Girard–Reynolds System F. diff --git a/wiki/wikipedia/3062.txt b/wiki/wikipedia/3062.txt deleted file mode 100644 index 2e81202262fbe4fc4c9795ccd7755df15683f334..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3062.txt +++ /dev/null @@ -1,132 +0,0 @@ -In topology and other branches of mathematics, a topological space X is - -locally connected if every point admits a neighbourhood basis consisting entirely of open, connected sets. - -Throughout the history of topology, connectedness and compactness have been two of the most - -widely studied topological properties. Indeed, the study of these properties even among subsets of Euclidean space, and the recognition of their independence from the particular form of the Euclidean metric, played a large role in clarifying the notion of a topological property and thus a topological space. However, whereas the structure of compact subsets of Euclidean space was understood quite early on via the Heine–Borel theorem, connected subsets of $\R^n$ (for n > 1) proved to be much more complicated. Indeed, while any compact Hausdorff space is locally compact, a connected space, and even a connected subset of the Euclidean plane, need not be locally connected (see below). - -This led to a rich vein of research in the first half of the twentieth century, in which topologists studied the implications between increasingly subtle and complex variations on the notion of a locally connected space. As an example, the notion of weak local connectedness at a point and its relation to local connectedness will be considered later on in the article. - -In the latter part of the twentieth century, research trends shifted to more intense study of spaces like manifolds, which are locally well understood (being locally homeomorphic to Euclidean space) but have complicated global behavior. By this it is meant that although the basic point-set topology of manifolds is relatively simple (as manifolds are essentially metrizable according to most definitions of the concept), their algebraic topology is far more complex. From this modern perspective, the stronger property of local path connectedness turns out to be more important: for instance, in order for a space to admit a universal cover it must be connected and locally path connected. Local path connectedness will be discussed as well. - -A space is locally connected if and only if for every open set U, the connected components of U (in the subspace topology) are open. It follows, for instance, that a continuous function from a locally connected space to a totally disconnected space must be locally constant.
In fact the openness of components is so natural that one must be sure to keep in mind that it is not true in general: for instance Cantor space is totally disconnected but not discrete. - -==Definitions== - -Let $X$ be a topological space, and let $x$ be a point of $X.$ - -A space $X$ is said to be locally connected at $x$ (respectively, locally path connected at $x$) if for every open set $V$ containing $x,$ there exists a subset $U \subseteq V$ containing $x$ that is both connected (respectively, path-connected) and an open subset of $X.$ - -Said more succinctly, a space is locally connected (respectively, locally path connected) at a given point if and only if there exists a neighborhood basis at that point consisting entirely of connected open neighborhoods (respectively, path connected open neighborhoods). - -A space $X$ is said to be locally connected (respectively, locally path connected) if it is locally connected (respectively, locally path connected) at $x$ for every $x \in X.$ - -Local connectedness does not imply connectedness (consider two disjoint open intervals in $\R$ for example); and connectedness does not imply local connectedness (see the topologist's sine curve). - -Path connected spaces are connected and consequently, locally path connected spaces are locally connected. The converse does not hold (see some of the examples below). - -Removing the requirement that $U$ be open in $X$ leads to the following weaker notion. - -A space $X$ is said to be weakly locally connected at $x$ (or connected im kleinen at $x$) if for every open set $V$ containing $x$ there exists a connected subset $N$ of $V$ such that $x$ lies in the interior of $N.$ An equivalent definition is: each open set $V$ containing $x$ contains an open neighborhood $U$ of $x$ such that any two points in $U$ lie in some connected subset of $V.$ - -A space $X$ is called weakly locally connected if it is weakly locally connected at $x$ for every $x \in X.$ - -The only difference between the two definitions is that for local connectedness at $x$ we require a neighborhood base of open connected sets containing $x,$ whereas for weak local connectedness at $x$ we require only a neighborhood base of connected sets containing $x.$ - -Said more succinctly, a space is weakly locally connected (respectively, locally connected) at a point $x$ if and only if there exists a neighborhood basis at $x$ consisting entirely of connected (respectively, connected and open) neighborhoods. - -Evidently a space that is locally connected at $x$ is weakly locally connected at $x.$ The converse does not hold (a counterexample, the broom space, is given below). On the other hand, it is equally clear that a locally connected space is weakly locally connected, but here it turns out (as proved below) that the converse does hold: a space that is weakly locally connected at all of its points is necessarily locally connected at all of its points. - -We now show that if $X$ is a weakly locally connected space then it is a locally connected space. - -It is sufficient to show that the components of open sets are open. Let $U$ be open in $X$ and let $C$ be a component of $U.$ Let $x$ be an element of $C.$ Then $x$ is an element of $U$ so that there is a connected subspace $A$ of $X$ contained in $U$ and containing a neighbourhood $V$ of $x.$ Since $A$ is connected and $A$ contains $x,$ $A$ must be a subset of $C$ (the component containing $x$). Therefore, the neighbourhood $V$ of $x$ is a subset of $C,$ which shows that $x$ is an interior point of $C.$ Since $x$ was an arbitrary point of $C,$ $C$ is open in $X.$ Therefore, $X$ is locally connected.
- -By replacing every instance of the word "connected" with "path connected" in the proof, it may be shown that a space is locally path connected if and only if every point has a neighborhood basis consisting entirely of path connected neighborhoods (these neighborhoods need not be open sets). Some authors use this latter condition (where neighborhoods need not be open) as the definition of "locally path connected space" instead of the definition that was given. - -A certain infinite union of decreasing broom spaces is an example of a space that is weakly locally connected at a particular point, but not locally connected at that point. - -# For any positive integer n, the Euclidean space $\R^n$ is locally path connected, thus locally connected; it is also connected. - -# More generally, every locally convex topological vector space is locally connected, since each point has a local base of convex (and hence connected) neighborhoods. - -# The subspace $S = [0,1] \cup [2,3]$ of the real line $\R^1$ is locally path connected but not connected. - -# The topologist's sine curve is a subspace of the Euclidean plane that is connected, but not locally connected. - -# The space $\Q$ of rational numbers endowed with the standard Euclidean topology, is neither connected nor locally connected. - -# The comb space is path connected but not locally path connected, and not even locally connected. - -# A countably infinite set endowed with the cofinite topology is locally connected (indeed, hyperconnected) but not locally path connected. - -# The lexicographic order topology on the unit square is connected and locally connected, but not path connected, nor locally path connected. - -A first-countable Hausdorff space $(X, \tau)$ is locally path-connected if and only if $\tau$ is equal to the final topology on $X$ induced by the set $C([0, 1]; X)$ of all continuous paths $[0, 1] \to (X, \tau).$ - -Further examples are given later on in the article. - -# Local connectedness is, by definition, a local property of topological spaces, i.e., a topological property P such that a space X possesses property P if and only if each point x in X admits a neighborhood base of sets that have property P. Accordingly, all the "metaproperties" held by a local property hold for local connectedness. In particular: - -# A space is locally connected if and only if it admits a base of (open) connected subsets. - -# The disjoint union $\coprod_i X_i$ of a family $\{X_i\}$ of spaces is locally connected if and only if each $X_i$ is locally connected. In particular, since a single point is certainly locally connected, it follows that any discrete space is locally connected. On the other hand, a discrete space is totally disconnected, so is connected only if it has at most one point. - -# Conversely, a totally disconnected space is locally connected if and only if it is discrete. This can be used to explain the aforementioned fact that the rational numbers are not locally connected. - -# A nonempty product space $\prod_i X_i$ is locally connected if and only if each $X_i$ is locally connected and all but finitely many of the $X_i$ are connected. - -# Every hyperconnected space is locally connected, and connected. - -The following result follows almost immediately from the definitions but will be quite useful: - -Lemma: Let X be a space, and $\{Y_i\}$ a family of subsets of X. Suppose that $ \bigcap_i Y_i $ is nonempty.
Then, if each $Y_i$ is connected (respectively, path connected) then the union $\bigcup_i Y_i$ is connected (respectively, path connected). - -Now consider two relations on a topological space X: for $x,y \in X,$ write: -$$ -x \equiv_c y -$$ if there is a connected subset of X containing both x and y; and -$$ - x \equiv_{pc} y -$$ if there is a path connected subset of X containing both x and y. - -Evidently both relations are reflexive and symmetric. Moreover, if x and y are contained in a connected (respectively, path connected) subset A and y and z are contained in a connected (respectively, path connected) subset B, then the Lemma implies that $A \cup B$ is a connected (respectively, path connected) subset containing x, y and z. Thus each relation is an equivalence relation, and defines a partition of X into equivalence classes. We consider these two partitions in turn. - -For x in X, the set $C_x$ of all points y such that $y \equiv_c x$ is called the connected component of x. The Lemma implies that $C_x$ is the unique maximal connected subset of X containing x. Since the closure of $C_x$ is also a connected subset containing x, it follows that $C_x$ is closed. - -If X has only finitely many connected components, then each component is the complement of a finite union of closed sets and therefore open. In general, the connected components need not be open, since, e.g., there exist totally disconnected spaces (i.e., $C_x = \{x\}$ for all points x) that are not discrete, like Cantor space. However, the connected components of a locally connected space are also open, and thus are clopen sets. It follows that a locally connected space X is a topological disjoint union $\coprod C_x$ of its distinct connected components. Conversely, if for every open subset U of X, the connected components of U are open, then X admits a base of connected sets and is therefore locally connected. - -Similarly, for x in X, the set $PC_x$ of all points y such that $y \equiv_{pc} x$ is called the path component of x. As above, $PC_x$ is also the union of all path connected subsets of X that contain x, so by the Lemma is itself path connected. Because path connected sets are connected, we have $PC_x \subseteq C_x$ for all $x \in X.$ - -However the closure of a path connected set need not be path connected: for instance, the topologist's sine curve is the closure of the open subset U consisting of all points $(x,\sin(1/x))$ with x > 0, and U, being homeomorphic to an interval on the real line, is certainly path connected. Moreover, the path components of the topologist's sine curve C are U, which is open but not closed, and $C \setminus U,$ which is closed but not open. - -A space is locally path connected if and only if for all open subsets U, the path components of U are open. Therefore the path components of a locally path connected space give a partition of X into pairwise disjoint open sets. It follows that an open connected subspace of a locally path connected space is necessarily path connected. Moreover, if a space is locally path connected, then it is also locally connected, so for all $x \in X,$ $C_x$ is connected and open, hence path connected, that is, $C_x = PC_x.$ That is, for a locally path connected space the components and path components coincide. - -# The set $I \times I$ (where $I = [0, 1]$) in the dictionary order topology has exactly one component (because it is connected) but has uncountably many path components. Indeed, any set of the form $\{a\} \times I$ is a path component for each a belonging to I.
- -# Let $f : \R \to \R_{\ell}$ be a continuous map from $\R$ to $\R_{\ell}$ (which is $\R$ in the lower limit topology). Since $\R$ is connected, and the image of a connected space under a continuous map must be connected, the image of $\R$ under $f$ must be connected. Therefore, the image of $\R$ under $f$ must be a subset of a component of $\R_{\ell}.$ Since this image is nonempty, the only continuous maps from $\R$ to $\R_{\ell}$ are the constant maps. In fact, any continuous map from a connected space to a totally disconnected space must be constant. - -Let X be a topological space. We define a third relation on X: $x \equiv_{qc} y$ if there is no separation of X into open sets A and B such that x is an element of A and y is an element of B. This is an equivalence relation on X and the equivalence class $QC_x$ containing x is called the quasicomponent of x. -$$ -QC_x -$$ can also be characterized as the intersection of all clopen subsets of X that contain x. Accordingly $QC_x$ is closed; in general it need not be open. - -Evidently $C_x \subseteq QC_x$ for all $x \in X.$ Overall we have the following containments among path components, components and quasicomponents at x: - -PC_x \subseteq C_x \subseteq QC_x. - -If X is locally connected, then, as above, $C_x$ is a clopen set containing x, so $QC_x \subseteq C_x$ and thus $QC_x = C_x.$ Since local path connectedness implies local connectedness, it follows that at all points x of a locally path connected space we have - -PC_x = C_x = QC_x. - -Another class of spaces for which the quasicomponents agree with the components is the class of compact Hausdorff spaces. - -# An example of a space whose quasicomponents are not equal to its components is a sequence with a double limit point. This space is totally disconnected, but both limit points lie in the same quasicomponent, because any clopen set containing one of them must contain a tail of the sequence, and thus the other point too. - -# The space $(\{0\}\cup\{\frac{1}{n} : n \in \Z^+\}) \times [-1,1] \setminus \{(0,0)\}$ is locally compact and Hausdorff but the sets $\{0\} \times [-1,0)$ and $\{0\} \times (0,1]$ are two different components which lie in the same quasicomponent. - -# The Arens–Fort space is not locally connected, but nevertheless the components and the quasicomponents coincide: indeed $QC_x = C_x = \{x\}$ for all points x. diff --git a/wiki/wikipedia/3063.txt b/wiki/wikipedia/3063.txt deleted file mode 100644 index fe9b732ade3810a92ffcae70f47e501a31320ce8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3063.txt +++ /dev/null @@ -1,131 +0,0 @@ -A Fibonacci prime is a Fibonacci number that is prime, a type of integer sequence prime. - -The first Fibonacci primes are: - -2, 3, 5, 13, 89, 233, 1597, 28657, 514229, 433494437, 2971215073, .... - -It is not known whether there are infinitely many Fibonacci primes. With the indexing starting with F1 = F2 = 1, the first 34 are Fn for the n values: - -n = 3, 4, 5, 7, 11, 13, 17, 23, 29, 43, 47, 83, 131, 137, 359, 431, 433, 449, 509, 569, 571, 2971, 4723, 5387, 9311, 9677, 14431, 25561, 30757, 35999, 37511, 50833, 81839, 104911. - -In addition to these proven Fibonacci primes, there have been found probable primes for - -n = 130021, 148091, 201107, 397379, 433781, 590041, 593689, 604711, 931517, 1049897, 1285607, 1636007, 1803059, 1968721, 2904353, 3244369, 3340367.
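The small indices in the list above are easy to reproduce. The following is a minimal sketch, assuming Python with the sympy library for primality testing; the helper names are ours, not from any reference implementation:

```python
from sympy import isprime

def fib_pairs():
    """Yield (n, F_n) for n = 1, 2, 3, ... via the usual recurrence."""
    a, b, n = 1, 1, 1
    while True:
        yield n, a
        a, b, n = b, a + b, n + 1

# Recover the Fibonacci primes with index up to 100 and their indices.
indices, primes = [], []
for n, f in fib_pairs():
    if n > 100:
        break
    if isprime(f):
        indices.append(n)
        primes.append(f)

print(indices)      # [3, 4, 5, 7, 11, 13, 17, 23, 29, 43, 47, 83]
print(primes[:5])   # [2, 3, 5, 13, 89]
```

Pushing the search to the larger indices listed above requires serious primality-proving software rather than a sketch like this one.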
- -Except for the case n = 4, all Fibonacci primes have a prime index, because if a divides b, then $F_a$ also divides $F_b$ (but not every prime index results in a Fibonacci prime). - -Fp is prime for 8 of the first 10 primes p; the exceptions are F2 = 1 and F19 = 4181 = 37 × 113. However, Fibonacci primes appear to become rarer as the index increases. Fp is prime for only 26 of the 1,229 primes p below 10,000. The numbers of prime factors in the Fibonacci numbers with prime index are: - -0, 1, 1, 1, 1, 1, 1, 2, 1, 1, 2, 3, 2, 1, 1, 2, 2, 2, 3, 2, 2, 2, 1, 2, 4, 2, 3, 2, 2, 2, 2, 1, 1, 3, 4, 2, 4, 4, 2, 2, 3, 3, 2, 2, 4, 2, 4, 4, 2, 5, 3, 4, 3, 2, 3, 3, 4, 2, 2, 3, 4, 2, 4, 4, 4, 3, 2, 3, 5, 4, 2, 1, ... - -The largest known certain Fibonacci prime is F104911, with 21925 digits. It was proved prime by Mathew Steine and Bouk de Water in 2015. The largest known probable Fibonacci prime is F3340367. It was found by Henri Lifchitz in 2018. - -It was proved by Nick MacKinnon that the only Fibonacci numbers that are also twin primes are 3, 5, and 13. - -A prime $p$ divides $F_{p-1}$ if and only if p is congruent to ±1 modulo 5, and p divides $F_{p+1}$ if and only if it is congruent to ±2 modulo 5. (For p = 5, F5 = 5, so 5 divides F5.) - -Fibonacci numbers that have a prime index p do not share any common divisors greater than 1 with the preceding Fibonacci numbers, due to the identity: -$$ -\gcd(F_n, F_m) = F_{\gcd(n,m)}, -$$ - -which implies the infinitude of primes since $F_p$ is divisible by at least one prime for all $ p > 2 $. - -For n ≥ 3, Fn divides Fm if and only if n divides m. - -If we suppose that m is a prime number p, and n is less than p, then it is clear that Fp cannot share any common divisors with the preceding Fibonacci numbers. -$$ -\gcd(F_p, F_n) = F_{\gcd(p,n)} = F_1 = 1. -$$ - -This means that Fp will always have characteristic factors or be a prime characteristic factor itself. The number of distinct prime factors of each Fibonacci number can be put into simple terms. - -* Fnk is a multiple of Fk for all values of n and k such that n ≥ 1 and k ≥ 1. It's safe to say that Fnk will have "at least" the same number of distinct prime factors as Fk. All Fp will have no factors of Fk, but "at least" one new characteristic prime from Carmichael's theorem. - -* Carmichael's theorem applies to all Fibonacci numbers except four special cases: $F_1 =F_2 =1, F_6 = 8$ and $F_{12} = 144.$ If we look at the prime factors of a Fibonacci number, there will be at least one of them that has never before appeared as a factor in any earlier Fibonacci number. Let πn be the number of distinct prime factors of Fn. - -If k | n then $\pi_n \geqslant \pi_k +1$ except for $\pi_6 = \pi_3 =1.$ - -If k = 1, and n is an odd prime, then 1 | p and $\pi_p \geqslant \pi_1 +1 = 1.$ - -The first step in finding the characteristic quotient of any Fn is to divide out the prime factors of all earlier Fibonacci numbers Fk for which k | n. - -The exact quotients left over are prime factors that have not yet appeared. - -If p and q are both primes, then all factors of Fpq are characteristic, except for those of Fp and Fq. - -\begin{align} - -\gcd (F_{pq}, F_q ) &= F_{\gcd(pq, q)} = F_q \\ - -\gcd (F_{pq}, F_p ) &= F_{\gcd(pq, p)} = F_p - -\end{align} - -Therefore: -$$ -\pi_{pq} \geqslant \begin{cases} \pi_p + \pi_q + 1 & p\neq q \\ \pi_p + 1 & p = q \end{cases} -$$ - -The number of distinct prime factors of the Fibonacci numbers with a prime index is directly relevant to the counting function.
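The identity $\gcd(F_n, F_m) = F_{\gcd(n,m)}$ used above is easy to spot-check numerically. Here is an illustrative sketch in plain Python (the helper name is ours):

```python
from math import gcd

def fib(n: int) -> int:
    """Iteratively compute F_n with the convention F_1 = F_2 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Spot-check gcd(F_m, F_n) = F_gcd(m, n) on a small grid of indices.
for m in range(1, 30):
    for n in range(1, 30):
        assert gcd(fib(m), fib(n)) == fib(gcd(m, n))
print("gcd identity verified for 1 <= m, n < 30")
```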
- -For a prime p, the smallest index u > 0 such that Fu is divisible by p is called the rank of apparition (sometimes called Fibonacci entry point) of p and denoted a(p). The rank of apparition a(p) is defined for every prime p. The rank of apparition divides the Pisano period π(p) and allows one to determine all Fibonacci numbers divisible by p. - -For the divisibility of Fibonacci numbers by powers of a prime, with $p \geqslant 3, n \geqslant 2$ and $k \geqslant 0$, -$$ -p^n \mid F_{a(p)kp^{n-1}}. -$$ - -In particular -$$ -p^2 \mid F_{a(p)p}. -$$ - -A prime p ≠ 2, 5 is called a Fibonacci–Wieferich prime or a Wall-Sun-Sun prime if $p^2 \mid F_q,$ where -$$ -q = p - \left(\frac{p}{5}\right) -$$ - -and $\left(\tfrac{p}{5}\right)$ is the Legendre symbol: -$$ -\left(\frac{p}{5}\right) = \begin{cases} 1 & p \equiv \pm 1 \bmod 5\\ -1 & p \equiv \pm 2 \bmod 5 \end{cases} -$$ - -It is known that for p ≠ 2, 5, a(p) is a divisor of: -$$ -p-\left(\frac{p}{5}\right) = \begin{cases} p-1 & p \equiv \pm 1 \bmod 5\\ p+1 & p \equiv \pm 2 \bmod 5 \end{cases} -$$ - -For every prime p that is not a Wall-Sun-Sun prime, $a(p^2) = p a(p)$. - -The existence of Wall-Sun-Sun primes is conjectural. - -The primitive parts of the Fibonacci numbers are - -1, 1, 2, 3, 5, 4, 13, 7, 17, 11, 89, 6, 233, 29, 61, 47, 1597, 19, 4181, 41, 421, 199, 28657, 46, 15005, 521, 5777, 281, 514229, 31, 1346269, 2207, 19801, 3571, 141961, 321, 24157817, 9349, 135721, 2161, 165580141, 211, 433494437, 13201, 109441, ... - -The products of the primitive prime factors of the Fibonacci numbers are - -1, 1, 2, 3, 5, 1, 13, 7, 17, 11, 89, 1, 233, 29, 61, 47, 1597, 19, 4181, 41, 421, 199, 28657, 23, 3001, 521, 5777, 281, 514229, 31, 1346269, 2207, 19801, 3571, 141961, 107, 24157817, 9349, 135721, 2161, 165580141, 211, 433494437, 13201, 109441, 64079, 2971215073, 1103, 598364773, 15251, ... - -The first case of more than one primitive prime factor is 4181 = 37 × 113 for $F_{19}$. - -The primitive part has a non-primitive prime factor in some cases. The ratio between the two above sequences is - -1, 1, 1, 1, 1, 4, 1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 7, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 13, 1, 1, .... - -The natural numbers n for which $F_n$ has exactly one primitive prime factor are - -3, 4, 5, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 25, 26, 28, 29, 30, 32, 33, 34, 35, 36, 38, 39, 40, 42, 43, 45, 47, 48, 51, 52, 54, 56, 60, 62, 63, 65, 66, 72, 74, 75, 76, 82, 83, 93, 94, 98, 105, 106, 108, 111, 112, 119, 121, 122, 123, 124, 125, 131, 132, 135, 136, 137, 140, 142, 144, 145, ... - -For a prime p, p is in this sequence if and only if $F_p$ is a Fibonacci prime, and 2p is in this sequence if and only if $L_p$ is a Lucas prime (where $L_p$ is the $p$th Lucas number). Moreover, 2n is in this sequence if and only if $L_{2^{n-1}}$ is a Lucas prime. - -The numbers of primitive prime factors of $F_n$ are - -0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 3, 1, 1, 1, 2, 1, 1, 2, 1, 2, 1, 1, 2, 2, 1, 1, 2, 1, 2, 1, 2, 2, 2, 1, 2, 1, 1, 2, 1, 1, 3, 2, 3, 2, 2, 1, 2, 1, 1, 1, 2, 2, 2, 2, 3, 1, 1, 2, 2, 2, 2, 3, 2, 2, 2, 2, 1, 1, 3, 2, 4, 1, 2, 2, 2, 2, 3, 2, 1, 1, 2, 1, 2, 2, 1, 1, 2, 2, 2, 2, 2, 3, 1, 2, 1, 1, 1, 1, 1, 2, 2, 2, ...
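Returning to the rank of apparition defined above, it can be computed by generating Fibonacci numbers modulo $m$. The sketch below, in plain Python with function names of our choosing, also spot-checks that $a(p)$ divides $p - \left(\tfrac{p}{5}\right)$ and that $a(p^2) = p\,a(p)$ for a few small primes (none of which are Wall-Sun-Sun primes):

```python
def rank_of_apparition(m: int) -> int:
    """Smallest u > 0 with m | F_u, using Fibonacci numbers reduced mod m."""
    a, b, u = 1, 1, 1
    while a % m != 0:
        a, b, u = b, (a + b) % m, u + 1
    return u

for p in [3, 7, 11, 13, 17, 19, 23]:
    ap = rank_of_apparition(p)
    # Legendre symbol (p|5): +1 when p = +-1 mod 5, -1 when p = +-2 mod 5.
    target = p - 1 if p % 5 in (1, 4) else p + 1
    assert target % ap == 0            # a(p) divides p - (p|5)
    assert rank_of_apparition(p * p) == p * ap  # a(p^2) = p * a(p)
print("rank-of-apparition checks passed")
```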
- -The least primitive prime factors of $F_n$ are - -1, 1, 2, 3, 5, 1, 13, 7, 17, 11, 89, 1, 233, 29, 61, 47, 1597, 19, 37, 41, 421, 199, 28657, 23, 3001, 521, 53, 281, 514229, 31, 557, 2207, 19801, 3571, 141961, 107, 73, 9349, 135721, 2161, 2789, 211, 433494437, 43, 109441, 139, 2971215073, 1103, 97, 101, ... - -It is conjectured that all the prime factors of $F_n$ are primitive when $n$ is a prime number. diff --git a/wiki/wikipedia/3064.txt b/wiki/wikipedia/3064.txt deleted file mode 100644 index d6d76c488f2edc257dca087250303669f5950de8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3064.txt +++ /dev/null @@ -1,95 +0,0 @@ -The maximum theorem provides conditions for the continuity of an optimized function and the set of its maximizers with respect to its parameters. The statement was first proven by Claude Berge in 1959. The theorem is primarily used in mathematical economics and optimal control. - -Maximum Theorem. - -Let $X$ and $\Theta$ be topological spaces, $f:X\times\Theta\to\mathbb{R}$ be a continuous function on the product $X \times \Theta$, and $C:\Theta\rightrightarrows X$ be a compact-valued correspondence such that $C(\theta) \ne \emptyset$ for all $\theta \in \Theta$. Define the marginal function (or value function) $f^* : \Theta \to \mathbb{R}$ by -$$ -f^*(\theta)=\sup\{f(x, \theta) : x\in C(\theta)\} -$$ - -and the set of maximizers $C^* : \Theta \rightrightarrows X$ by - - - -C^*(\theta)= - -\mathrm{arg}\sup\{f(x,\theta) : x \in C(\theta)\} - -= \{x \in C(\theta) : f(x, \theta) = f^*(\theta)\} - -. - -If $C$ is continuous (i.e. both upper and lower hemicontinuous) at $\theta$, then $f^*$ is continuous and $C^*$is upper hemicontinuous with nonempty and compact values. As a consequence, the $\sup$ may be replaced by $\max$ and the $\mathrm{arg}\sup$ by $\mathrm{arg}\max$. - -The theorem is typically interpreted as providing conditions for a parametric optimization problem to have continuous solutions with regard to the parameter. In this case, $\Theta$ is the parameter space, $f(x,\theta)$ is the function to be maximized, and $C(\theta)$ gives the constraint set that $f$ is maximized over. Then, $f^*(\theta)$ is the maximized value of the function and $C^*$ is the set of points that maximize $f$. - -The result is that if the elements of an optimization problem are sufficiently continuous, then some, but not all, of that continuity is preserved in the solutions. - -Throughout this proof we will use the term neighborhood to refer to an open set containing a particular point. We preface with a preliminary lemma, which is a general fact in the calculus of correspondences. Recall that a correspondence is closed if its graph is closed. - -Lemma. - -If $A, B : \Theta \rightrightarrows X$ are correspondences, $A$ is upper hemicontinuous and compact-valued, and $B$ is closed, then $A \cap B : \Theta \rightrightarrows X$ defined by $(A \cap B) (\theta) = A(\theta) \cap B(\theta)$ is upper hemicontinuous. - -Let $\theta \in \Theta$, and suppose $G$ is an open set containing $(A\cap B)(\theta)$. If $A(\theta) \subseteq G$, then the result follows immediately. Otherwise, observe that for each $x \in A(\theta) \setminus G$ we have $x \notin B(\theta)$, and since $B$ is closed there is a neighborhood $U_x \times V_x$ of $(\theta, x)$ in which $x' \notin B(\theta')$ whenever $(\theta', x') \in U_x \times V_x$. 
The collection of sets $\{G\} \cup \{V_x : x \in A(\theta) \setminus G\}$ forms an open cover of the compact set $A(\theta)$, which allows us to extract a finite subcover $G, V_{x_1}, \dots, V_{x_n}$. Since $A$ is upper hemicontinuous, there exists a neighborhood $W$ of $\theta$ such that $A(\theta') \subseteq G \cup V_{x_1} \cup \dots \cup V_{x_n}$ whenever $\theta' \in W$. Then whenever $\theta' \in W \cap U_{x_1} \cap \dots \cap U_{x_n}$, any $x' \in (A \cap B)(\theta')$ must lie in $G$: it cannot lie in some $V_{x_k}$, since $\theta' \in U_{x_k}$ would then give $x' \notin B(\theta')$. Hence $(A \cap B)(\theta') \subseteq G$. This completes the proof. $\square$ - -The continuity of $f^*$ in the maximum theorem is the result of combining two independent theorems together. - -Theorem 1. - -If $f$ is upper semicontinuous and $C$ is upper hemicontinuous, nonempty and compact-valued, then $f^*$ is upper semicontinuous. - -Fix $\theta \in \Theta$, and let $\varepsilon > 0$ be arbitrary. For each $x \in C(\theta)$, there exists a neighborhood $U_x \times V_x$ of $(\theta, x)$ such that whenever $(\theta', x') \in U_x \times V_x$, we have $f(x', \theta') < f(x, \theta) + \varepsilon$. The set of neighborhoods $\{V_x : x \in C(\theta)\}$ covers $C(\theta)$, which is compact, so finitely many of them, $V_{x_1}, \dots, V_{x_n}$, suffice. Furthermore, since $C$ is upper hemicontinuous, there exists a neighborhood $U'$ of $\theta$ such that whenever $\theta' \in U'$ it follows that $C(\theta') \subseteq \bigcup_{k=1}^{n} V_{x_k}$. Let $U = U' \cap U_{x_1} \cap \dots \cap U_{x_n}$. Then for all $\theta' \in U$, we have $f(x', \theta') < f(x_k, \theta) + \varepsilon$ for each $x' \in C(\theta')$, as $x' \in V_{x_k}$ for some $k$. It follows that - - f^*(\theta') = \sup_{x' \in C(\theta')} f(x', \theta') < \max_{k=1, \dots, n} f(x_k, \theta) + \varepsilon \leq f^*(\theta) + \varepsilon, - - - -which was desired. $\square$ - -Theorem 2. - -If $f$ is lower semicontinuous and $C$ is lower hemicontinuous, then $f^*$ is lower semicontinuous. - -Fix $\theta \in \Theta$, and let $\varepsilon > 0$ be arbitrary. - -By definition of $f^*$, there exists $x \in C(\theta)$ such that $f^*(\theta) < f(x,\theta) + \frac{\varepsilon}{2}$. - -Now, since $f$ is lower semicontinuous, there exists a neighborhood $U_1 \times V$ of $(\theta, x)$ such that whenever $(\theta', x') \in U_1 \times V$ we have $f(x, \theta) < f(x', \theta') + \frac{\varepsilon}{2}$. Observe that $C(\theta) \cap V \ne \emptyset$ (in particular, $x \in C(\theta) \cap V$). Therefore, since $C$ is lower hemicontinuous, there exists a neighborhood $U_2$ such that whenever $\theta' \in U_2$ there exists $x' \in C(\theta') \cap V$. - -Let $U = U_1 \cap U_2$. - -Then whenever $\theta' \in U$ there exists $x' \in C(\theta') \cap V$, which implies -$$ -f^*(\theta) < f(x, \theta) + \frac{\varepsilon}{2} < f(x', \theta') + \varepsilon \leq f^*(\theta') + \varepsilon -$$ - -which was desired. $\square$ - -Under the hypotheses of the Maximum theorem, $f^*$ is continuous. It remains to verify that $C^*$ is an upper hemicontinuous correspondence with compact values. Let $\theta \in \Theta$. To see that $C^*(\theta)$ is nonempty, observe that the function $f_\theta : C(\theta) \to \mathbb{R}$ given by $f_\theta(x) = f(x, \theta)$ is continuous on the compact set $C(\theta)$. The Extreme Value theorem implies that $C^*(\theta)$ is nonempty. In addition, since $f_\theta$ is continuous, it follows that $C^*(\theta)$ is a closed subset of the compact set $C(\theta)$, which implies $C^*(\theta)$ is compact. Finally, let $D : \Theta \rightrightarrows X$ be defined by $D(\theta) = \{x \in C(\theta) : f(x, \theta) = f^*(\theta)\}$. Since $f$ and $f^*$ are continuous functions, $D$ is a closed correspondence.
Moreover, since $C^*(\theta) = C(\theta) \cap D(\theta)$, the preliminary Lemma implies that $C^*$ is upper hemicontinuous. $\square$ - -A natural generalization of the above results gives sufficient local conditions for $f^*$ to be continuous and $C^*$ to be nonempty, compact-valued, and upper hemicontinuous. - -If in addition to the conditions above, $f$ is quasiconcave in $x$ for each $\theta$ and $C$ is convex-valued, then $C^*$ is also convex-valued. If $f$ is strictly quasiconcave in $x$ for each $\theta$ and $C$ is convex-valued, then $C^*$ is single-valued, and thus is a continuous function rather than a correspondence. - -If $f$ is concave and $C$ has a convex graph, then $f^*$ is concave and $C^*$ is convex-valued. Similarly to above, if $f$ is strictly concave, then $C^*$ is a continuous function. - -It is also possible to generalize Berge's theorem to non-compact set-valued correspondences if the objective function is K-inf-compact. - -Consider a utility maximization problem where a consumer makes a choice from their budget set. Translating from the notation above to the standard consumer theory notation, - -* $X=\mathbb{R}_+^l$ is the space of all bundles of $l$ commodities, - -* $\Theta=\mathbb{R}_{++}^l \times \mathbb{R}_{++}$ represents the price vector of the commodities $p$ and the consumer's wealth $w$, - -* $f(x,\theta)=u(x)$ is the consumer's utility function, and - -* $C(\theta)=B(p,w)=\{x | px \leq w\}$ is the consumer's budget set. - -Then, - -* $f^*(\theta)=v(p,w)$ is the indirect utility function and - -* $C^*(\theta)=x(p,w)$ is the Marshallian demand. - -Proofs in general equilibrium theory often apply the Brouwer or Kakutani fixed-point theorems to the consumer's demand, which require compactness and continuity, and the maximum theorem provides the sufficient conditions to do so. diff --git a/wiki/wikipedia/3065.txt b/wiki/wikipedia/3065.txt deleted file mode 100644 index 6660347152a63fc1a679ba85e2f1c6b18c67ebca..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3065.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematical analysis, Rademacher's theorem, named after Hans Rademacher, states the following: If U is an open subset of Rn and f: U → Rm is Lipschitz continuous, then f is differentiable almost everywhere in U; that is, the points in U at which f is not differentiable form a set of Lebesgue measure zero. - -There is a version of Rademacher's theorem that holds for Lipschitz functions from a Euclidean space into an arbitrary metric space in terms of metric differentials instead of the usual derivative. diff --git a/wiki/wikipedia/3066.txt b/wiki/wikipedia/3066.txt deleted file mode 100644 index 90e181aeea36e835d164dea77a4e1c79cc886d4e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3066.txt +++ /dev/null @@ -1,64 +0,0 @@ -In mathematics, the Lagrange reversion theorem gives series or formal power series expansions of certain implicitly defined functions; indeed, of compositions with such functions. - -Let v be a function of x and y in terms of another function f such that -$$ -v=x+yf(v) -$$ - -Then for any function g, for small enough y: -$$ -g(v)=g(x)+\sum_{k=1}^\infty\frac{y^k}{k!}\left(\frac\partial{\partial x}\right)^{k-1}\left(f(x)^kg'(x)\right). -$$ - -If g is the identity, this becomes -$$ -v=x+\sum_{k=1}^\infty\frac{y^k}{k!}\left(\frac\partial{\partial x}\right)^{k-1}\left(f(x)^k\right) -$$ - -in which case the expansion can also be derived using perturbation theory.
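As a sanity check on the reversion formula, one can truncate the series and compare it against a numerical root of $v = x + yf(v)$. The following is a sketch assuming Python with the sympy library; the choice $f = \sin$ (which gives a Kepler-like equation) and all names are ours:

```python
import sympy as sp

x, y, v = sp.symbols('x y v')
f = sp.sin  # so that v = x + y*sin(v), a Kepler-like equation

# Truncated reversion series: v ~ x + sum_{k=1}^{N} y^k/k! * d^{k-1}/dx^{k-1} [f(x)^k]
N = 8
series = x + sum(y**k / sp.factorial(k) * sp.diff(f(x)**k, x, k - 1)
                 for k in range(1, N + 1))

# Compare against a numerical root of v = x + y*f(v) for small y.
x0, y0 = 1.0, 0.1
v_root = sp.nsolve(v - x0 - y0 * f(v), v, x0)
v_ser = series.subs({x: x0, y: y0})
print(float(v_ser), float(v_root))  # the two values should agree closely
```

For larger $y$ the truncated series degrades, consistent with the "for small enough y" hypothesis in the statement.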
- -In 1770, Joseph Louis Lagrange (1736–1813) published his power series solution of the implicit equation for v mentioned above. However, his solution used cumbersome series expansions of logarithms. In 1780, Pierre-Simon Laplace (1749–1827) published a simpler proof of the theorem, which was based on relations between partial derivatives with respect to the variable x and the parameter y. Charles Hermite (1822–1901) presented the most straightforward proof of the theorem by using contour integration. - -Lagrange's reversion theorem is used to obtain numerical solutions to Kepler's equation. - -We start by writing: -$$ - g(v) = \int \delta(y f(z) - z + x) g(z) (1-y f'(z)) dz -$$ - -Writing the delta-function as an integral we have: - - - -\begin{align} - -g(v) & = \iint \exp(ik[y f(z) - z + x]) g(z) (1-y f'(z)) \frac{dk}{2\pi} dz \\[10pt] - -& =\sum_{n=0}^\infty \iint \frac{(ik y f(z))^n}{n!} g(z) (1-y f'(z)) e^{ik(x-z)} \frac{dk}{2\pi} dz \\[10pt] - -& =\sum_{n=0}^\infty \left(\frac{\partial}{\partial x}\right)^n\iint \frac{(y f(z))^n}{n!} g(z) (1-y f'(z)) e^{ik(x-z)} \frac{dk}{2\pi} dz - -\end{align} - - - -The integral over k then gives $\delta(x-z)$ and we have: - - - -\begin{align} - -g(v) & = \sum_{n=0}^\infty \left(\frac{\partial}{\partial x}\right)^n \left[ \frac{(y f(x))^n}{n!} g(x) (1-y f'(x))\right] \\[10pt] - -& =\sum_{n=0}^\infty \left(\frac{\partial}{\partial x}\right)^n \left[ - -\frac{y^n f(x)^n g(x)}{n!} - \frac{y^{n+1}}{(n+1)!}\left\{ (g(x) f(x)^{n+1})' - g'(x) f(x)^{n+1}\right\} \right] - -\end{align} - - - -Rearranging the sum and cancelling then gives the result: -$$ -g(v)=g(x)+\sum_{k=1}^\infty\frac{y^k}{k!}\left(\frac\partial{\partial x}\right)^{k-1}\left(f(x)^kg'(x)\right) -$$ diff --git a/wiki/wikipedia/3067.txt b/wiki/wikipedia/3067.txt deleted file mode 100644 index 3c06411d45a685bd7b3ad602a96a7e782e407d60..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3067.txt +++ /dev/null @@ -1,75 +0,0 @@ -In analysis, a branch of mathematics, Hilbert's inequality states that - - - -\left|\sum_{r\neq s}\dfrac{u_{r}\overline{u_{s}}}{r-s}\right|\le\pi\displaystyle\sum_{r}|u_{r}|^2. - - - -for any sequence u1,u2,... of complex numbers. It was first demonstrated by David Hilbert with the constant $2\pi$ instead of $\pi$; the sharp constant was found by Issai Schur. It implies that the discrete Hilbert transform is a bounded operator in ℓ2. - -Let (um) be a sequence of complex numbers. If the sequence is infinite, assume that it is square-summable: -$$ - \sum_m |u_m|^2 < \infty -$$ - -Hilbert's inequality (see Steele) asserts that - - - -\left|\sum_{r\neq s}\dfrac{u_{r}\overline{u_{s}}}{r-s}\right|\le\pi\displaystyle\sum_{r}|u_{r}|^2. - - - -In 1973, Montgomery & Vaughan reported several generalizations of Hilbert's inequality, considering the bilinear forms -$$ - \sum_{r\neq s}u_r\overline u_s\csc\pi(x_r-x_s) -$$ - -and -$$ -\sum_{r\neq s}\dfrac{u_r\overline u_s}{\lambda_r-\lambda_s}, -$$ - -where x1,x2,...,xm are distinct real numbers modulo 1 (i.e. they belong to distinct classes in the quotient group R/Z) and λ1,...,λm are distinct real numbers. Montgomery & Vaughan's generalizations of Hilbert's inequality are then given by - - - -\left|\sum_{r\neq s} u_r \overline{u_s}\csc\pi(x_r-x_s)\right|\le\delta^{-1}\sum_r |u_r|^2. - - - -and - - - -\left|\sum_{r\neq s}\dfrac{u_r\overline{u_s}}{\lambda_r-\lambda_s}\right|\le\pi\tau^{-1} \sum_r |u_r|^2.
- - - -where -$$ -\delta={\min_{r,s}}{}_{+}\|x_{r}-x_{s}\|, \quad \tau=\min_{r,s}{}_{+}\|\lambda_r-\lambda_s\|, -$$ -$$ -\|s\|= \min_{m\in\mathbb{Z}}|s-m| -$$ - -is the distance from s to the nearest integer, and min+ denotes the smallest positive value. Moreover, if -$$ -0<\delta_r \le {\min_s}{}_{+}\|x_r-x_s\| \quad \text{and} \quad 0<\tau_{r}\le {\min_{s}}{}_{+}\|\lambda_r-\lambda_s\|, -$$ - -then the following inequalities hold: - - - -\left|\sum_{r\neq s} u_r\overline{u_s}\csc\pi(x_r-x_s)\right|\le\dfrac{3}{2} \sum_r |u_r|^2 \delta_r^{-1}. - - - -and - -\left|\sum_{r\neq s}\dfrac{u_r \overline{u_s}}{\lambda_r-\lambda_s}\right|\le \dfrac{3}{2} \pi \sum_r |u_r|^2\tau_r^{-1}. - - diff --git a/wiki/wikipedia/3068.txt b/wiki/wikipedia/3068.txt deleted file mode 100644 index bfc4502207c9f91bf845433a76da25c528ee9eec..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3068.txt +++ /dev/null @@ -1,79 +0,0 @@ -In mathematics, Casey's theorem, also known as the generalized Ptolemy's theorem, is a theorem in Euclidean geometry named after the Irish mathematician John Casey. - -Let $O$ be a circle of radius $R$. Let $O_1, O_2, O_3, O_4$ be (in that order) four non-intersecting circles that lie inside $O$ and tangent to it. Denote by $t_{ij}$ the length of the exterior common bitangent of the circles $O_i, O_j$. Then: -$$ -t_{12} \cdot t_{34}+t_{14} \cdot t_{23}=t_{13}\cdot t_{24}. -$$ - -Note that in the degenerate case, where all four circles reduce to points, this is exactly Ptolemy's theorem. - -The following proof is attributable to Zacharias. Denote the radius of circle $O_i$ by $R_i$ and its tangency point with the circle $O$ by $K_i$. We will use the notation $O, O_i$ for the centers of the circles. - -Note that from Pythagorean theorem, -$$ -t_{ij}^2=\overline{O_iO_j}^2-(R_i-R_j)^2. -$$ - -We will try to express this length in terms of the points $K_i,K_j$. By the law of cosines in triangle $O_iOO_j$, -$$ -\overline{O_iO_j}^2=\overline{OO_i}^2+\overline{OO_j}^2-2\overline{OO_i}\cdot \overline{OO_j}\cdot \cos\angle O_iOO_j -$$ - -Since the circles $O,O_i$ tangent to each other: -$$ -\overline{OO_i} = R - R_i, \angle O_iOO_j = \angle K_iOK_j -$$ - -Let $C$ be a point on the circle $O$. 
According to the law of sines in triangle $K_iCK_j$: -$$ -\overline{K_iK_j} = 2R\cdot \sin\angle K_iCK_j = 2R\cdot \sin\frac{\angle K_iOK_j}{2} -$$ - -Therefore, -$$ -\cos\angle K_iOK_j = 1-2\sin^2\frac{\angle K_iOK_j}{2}=1-2\cdot \left(\frac{\overline{K_iK_j}}{2R}\right)^2 = 1 - \frac{\overline{K_iK_j}^2}{2R^2} -$$ - -and substituting these in the formula above: -$$ -\overline{O_iO_j}^2=(R-R_i)^2+(R-R_j)^2-2(R-R_i)(R-R_j)\left(1-\frac{\overline{K_iK_j}^2}{2R^2}\right) -$$ -$$ -\overline{O_iO_j}^2=(R-R_i)^2+(R-R_j)^2-2(R-R_i)(R-R_j)+(R-R_i)(R-R_j)\cdot \frac{\overline{K_iK_j}^2}{R^2} -$$ -$$ -\overline{O_iO_j}^2=((R-R_i)-(R-R_j))^2+(R-R_i)(R-R_j)\cdot \frac{\overline{K_iK_j}^2}{R^2} -$$ - -And finally, the length we seek is -$$ -t_{ij}=\sqrt{\overline{O_iO_j}^2-(R_i-R_j)^2}=\frac{\sqrt{R-R_i}\cdot \sqrt{R-R_j}\cdot \overline{K_iK_j}}{R} -$$ - -We can now evaluate the left hand side, with the help of the original Ptolemy's theorem applied to the inscribed quadrilateral $K_1K_2K_3K_4$: - - - -\begin{align} - -& t_{12}t_{34}+t_{14}t_{23} \\[4pt] - -= {} & \frac{1}{R^2}\cdot \sqrt{R-R_1}\sqrt{R-R_2}\sqrt{R-R_3}\sqrt{R-R_4} \left(\overline{K_1K_2} \cdot \overline{K_3K_4}+\overline{K_1K_4}\cdot \overline{K_2K_3}\right) \\[4pt] - -= {} & \frac{1}{R^2}\cdot \sqrt{R-R_1}\sqrt{R-R_2}\sqrt{R-R_3}\sqrt{R-R_4}\left(\overline{K_1K_3}\cdot \overline{K_2K_4}\right) \\[4pt] - -= {} & t_{13}t_{24} - -\end{align} - - - -It can be seen that the four circles need not lie inside the big circle. In fact, they may be tangent to it from the outside as well. In that case, the following change should be made: - -If $O_i, O_j$ are both tangent from the same side of $O$ (both in or both out), $t_{ij}$ is the length of the exterior common tangent. - -If $O_i, O_j$ are tangent from different sides of $O$ (one in and one out), $t_{ij}$ is the length of the interior common tangent. - -The converse of Casey's theorem is also true. That is, if equality holds, the circles are tangent to a common circle. - -Casey's theorem and its converse can be used to prove a variety of statements in Euclidean geometry. For example, the shortest known proof of Feuerbach's theorem uses the converse theorem. diff --git a/wiki/wikipedia/3069.txt b/wiki/wikipedia/3069.txt deleted file mode 100644 index c5815d61f5438eb4f40d0ebede9c804a50058dd7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3069.txt +++ /dev/null @@ -1,57 +0,0 @@ -In mathematics, Bondy's theorem is a bound on the number of elements needed to distinguish the sets in a family of sets from each other. It belongs to the field of combinatorics, and is named after John Adrian Bondy, who published it in 1972. - -The theorem is as follows: - -Let X be a set with n elements and let A1, A2, ..., An be distinct subsets of X. Then there exists a subset S of X with n - 1 elements such that the sets Ai ∩ S are all distinct. - -In other words, if we have a 0-1 matrix with n rows and n columns such that each row is distinct, we can remove one column such that the rows of the resulting n × (n - 1) matrix are distinct. - -Consider the 4 × 4 matrix - -\begin{bmatrix} - -1&1&0&1\\ - -0&1&0&1\\ - -0&0&1&1\\ - -0&1&1&0 - -\end{bmatrix} - -where all rows are pairwise distinct. If we delete, for example, the first column, the resulting matrix - -\begin{bmatrix} - -1&0&1\\ - -1&0&1\\ - -0&1&1\\ - -1&1&0 - -\end{bmatrix} - -no longer has this property: the first row is identical to the second row. 
Nevertheless, by Bondy's theorem we know that we can always find a column that can be deleted without introducing any identical rows. In this case, we can delete the third column: all rows of the 4 × 3 matrix - -\begin{bmatrix} - -1&1&1\\ - -0&1&1\\ - -0&0&1\\ - -0&1&0 - -\end{bmatrix} - -are distinct. Another possibility would have been deleting the fourth column. - -From the perspective of computational learning theory, Bondy's theorem can be rephrased as follows: - -Let C be a concept class over a finite domain X. Then there exists a subset S of X with the size at most |C| - 1 such that S is a witness set for every concept in C. - -This implies that every finite concept class C has its teaching dimension bounded by |C| - 1. diff --git a/wiki/wikipedia/307.txt b/wiki/wikipedia/307.txt deleted file mode 100644 index 802a3943f5b313415bfaddaa8b13debbdf2bd00b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/307.txt +++ /dev/null @@ -1,99 +0,0 @@ -External memory graph traversal is a type of graph traversal optimized for accessing externally stored memory. - -Graph traversal is a subroutine in most graph algorithms. The goal of a graph traversal algorithm is to visit (and / or process) every node of a graph. Graph traversal algorithms, like breadth-first search and depth-first search, are analyzed using the von Neumann model, which assumes uniform memory access cost. This view neglects the fact that, for huge instances, part of the graph resides on disk rather than internal memory. Since accessing the disk is orders of magnitude slower than accessing internal memory, there is a need for efficient traversal of external memory. - -For external memory algorithms the external memory model by Aggarwal and Vitter is used for analysis. A machine is specified by three parameters: M, B and D. - -M is the size of the internal memory, B is the block size of a disk and D is the number of parallel disks. - -The measure of performance for an external memory algorithm is the number of I/Os it performs. - -The breadth-first search algorithm starts at a root node and traverses every node at depth one. If there are no more unvisited nodes at the current depth, nodes at a higher depth are traversed. Eventually, every node of the graph has been visited. - -For an undirected graph $G$, Munagala and Ranade proposed the following external memory algorithm: - -Let $L(t)$ denote the nodes in breadth-first search level t and let $A(t):=N(L(t-1))$ be the multi-set of neighbors of level t-1. For every t, $L(t)$ can be constructed from $A(t)$ by transforming it into a set and excluding previously visited nodes from it. - -# Create $A(t)$ by accessing the adjacency list of every vertex in $L(t-1)$. This step requires $O(|L(t-1)|+|A(t)|/(D\cdot B))$ I/Os. - -# Next $A'(t)$ is created from $A(t)$ by removing duplicates. This can be achieved via sorting of $A(t)$, followed by a scan and compaction phase needing $O(\operatorname{sort}(|A(t)|))$ I/Os. - -# $L(t):=A'(t)\backslash\{L(t-1)\cup L(t-2)\}$ is calculated by a parallel scan over $L(t-1)$ and $L(t-2)$ and requires $O((|A(t)|+|L(t-1)|+|L(t-2)|)/(D\cdot B))$ I/Os. - -The overall number of I/Os of this algorithm follows with consideration that $\sum_t |A(t)|=O(m)$ and $\sum_t |L(t)|=O(n)$ and is $O(n+\operatorname{sort}(n+m))$. - -Mehlhorn and Meyer proposed an algorithm that is based on the algorithm of Munagala and Ranade (MR) and improves their result.
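Before turning to the Mehlhorn–Meyer refinement, the following is a compact, purely in-memory Python sketch of the three MR steps above. It is illustrative only: the function and variable names are ours, and a real external memory implementation would replace the set operations and sorting with block-wise scans and an I/O-efficient sort.

```python
def mr_bfs_levels(adj, root):
    """Level-by-level BFS in the style of Munagala-Ranade: build A(t) by
    concatenating adjacency lists of L(t-1), then sort/deduplicate and
    subtract L(t-1) and L(t-2), instead of keeping a per-node visited flag.
    `adj` maps each node of an undirected graph to its neighbour list."""
    prev, prev2 = [root], []
    levels = [prev]
    while prev:
        a = [w for v in prev for w in adj[v]]   # step 1: A(t) = N(L(t-1)), a multiset
        a_dedup = sorted(set(a))                # step 2: sort + compaction -> A'(t)
        exclude = set(prev) | set(prev2)        # step 3: scan against L(t-1), L(t-2)
        cur = [w for w in a_dedup if w not in exclude]
        if cur:
            levels.append(cur)
        prev, prev2 = cur, prev
    return levels

# A 6-cycle: the levels from node 0 are {0}, {1,5}, {2,4}, {3}.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(mr_bfs_levels(adj, 0))  # [[0], [1, 5], [2, 4], [3]]
```

The point of the sort-and-scan formulation is that, for undirected graphs, excluding the two previous levels suffices to mark nodes as visited, so no random-access visited array is ever touched.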
- -The Mehlhorn–Meyer algorithm consists of two phases. In the first phase the graph is preprocessed; the second phase performs a breadth-first search using the information gathered in phase one. - -During the preprocessing phase the graph is partitioned into disjoint subgraphs $S_i,0\leq i\leq K$ with small diameter. It further partitions the adjacency lists accordingly, by constructing an external file $F=F_0F_1\dots F_{K-1}$, where $F_i$ contains the adjacency list for all nodes in $S_i$. - -The breadth-first search phase is similar to the MR algorithm. In addition the algorithm maintains a sorted external file $H$. This file is initialized with $F_0$. Further, the nodes of any created breadth-first search level carry identifiers for the files $F_i$ of their respective subgraphs $S_i$. Instead of using random accesses to construct $L(t)$ the file $H$ is used. - -# Perform a parallel scan of sorted list $L(t-1)$ and $H$. Extract the adjacency lists for nodes $v\in L(t-1)$ that can be found in $H$. - -# The adjacency lists for the remaining nodes that could not be found in $H$ need to be fetched. A scan over $L(t-1)$ yields the partition identifiers. After sorting and deletion of duplicates, the respective files $F_i$ can be concatenated into a temporary file $F'$. - -# The missing adjacency lists can be extracted from $F'$ with a scan. Next, the remaining adjacency lists are merged into $H$ with a single pass. - -# $A(t)$ is created by a simple scan. The partition information is attached to each node in $A(t)$. - -# The algorithm proceeds like the MR algorithm. - -Edges might be scanned more often in $H$, but unstructured I/Os in order to fetch adjacency lists are reduced. - -The overall number of I/Os for this algorithm is $O(\sqrt{n\cdot(n+m)/(D\cdot B)}+\operatorname{sort}(n+m))$ - -The depth-first search algorithm explores a graph along each branch as deep as possible, before backtracking. - -For directed graphs Buchsbaum, Goldwasser, Venkatasubramanian and Westbrook proposed an algorithm with $O((V+E/B)\log_2 (V/B)+\operatorname{sort}(E))$ I/Os. - -This algorithm is based on a data structure called the buffered repository tree (BRT). It stores a multi-set of items from an ordered universe. Items are identified by key. A BRT offers two operations: - -* insert(T, x), which adds item x to T and needs $O(1/B\log_2 (N/B))$ amortized I/Os. N is the number of items added to the BRT. - -* extract(T, k), which reports and deletes from T all items with key k. It requires $O(\log_2 (N/B)+S/B)$ I/Os, where S is the size of the set returned by extract. - -The algorithm simulates an internal depth-first search algorithm. A stack S of nodes is maintained. During an iteration, for the node v on top of S, push an unvisited neighbor onto S and iterate. If there are no unvisited neighbors, pop v. - -The difficulty is to determine whether a node is unvisited without doing $\Omega(1)$ I/Os per edge. To do this for a node v, incoming edges (x,v) are put into a BRT D when v is first discovered. Further, outgoing edges (v,x) are put into a priority queue P(v), keyed by the rank in the adjacency list. - -For vertex u on top of S all edges (u,x) are extracted from D. Such edges only exist if x has been discovered since the last time u was on top of S (or since the start of the algorithm if u is the first time on top of S). For every edge (u,x), a delete(x) operation is performed on P(u). Finally a delete-min operation on P(u) yields the next unvisited node. If P(u) is empty, u is popped from S. - -Pseudocode for this algorithm is given below.
- -procedure BGVW-depth-first-search(G, v): - -    let S be a stack, P[] a priority queue for each node and D a BRT - -    S.push(v) - -    while S is not empty - -        v = S.top() - -        if v is not marked: - -            mark(v) - -            extract all edges (v, x) from D; $\forall$ x: P[v].delete(x) - -        if u = P[v].delete-min() is not null - -            S.push(u) - -        else - -            S.pop() - -procedure mark(v) - -    put all edges (x, v) into D - -    $\forall$ (v, x): put x into P[v] diff --git a/wiki/wikipedia/3070.txt b/wiki/wikipedia/3070.txt deleted file mode 100644 index 09920c3cd60d2f9aaecabec38ad96269e0cb51a5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3070.txt +++ /dev/null @@ -1,34 +0,0 @@ -In mathematics, and especially differential topology and singularity theory, the Eisenbud–Levine–Khimshiashvili signature formula gives a way of computing the Poincaré–Hopf index of a real, analytic vector field at an algebraically isolated singularity. It is named after David Eisenbud, Harold I. Levine, and George Khimshiashvili. Intuitively, the index of a vector field near a zero is the number of times the vector field wraps around the sphere. Because analytic vector fields have a rich algebraic structure, the techniques of commutative algebra can be brought to bear to compute their index. The signature formula expresses the index of an analytic vector field in terms of the signature of a certain quadratic form. - -Consider the n-dimensional space Rn. Assume that Rn has some fixed coordinate system, and write x for a point in Rn, where x = (x1, …, xn). - -Let X be a vector field on Rn. For 1 ≤ k ≤ n there exist functions ƒk : Rn → R such that one may express X as -$$ - X = f_1({\mathbf x})\frac{\partial}{\partial x_1} + \cdots + f_n({\mathbf x})\frac{\partial}{\partial x_n} . -$$ - -To say that X is an analytic vector field means that each of the functions ƒk : Rn → R is an analytic function. One says that X is singular at a point p in Rn (or that p is a singular point of X) if X(p) = 0, i.e. X vanishes at p. In terms of the functions ƒk : Rn → R it means that ƒk(p) = 0 for all 1 ≤ k ≤ n. A singular point p of X is called isolated (or p is an isolated singularity of X) if X(p) = 0 and there exists an open neighbourhood U ⊆ Rn, containing p, such that X(q) ≠ 0 for all q in U different from p. An isolated singularity of X is called algebraically isolated if, when considered over the complex domain, it remains isolated. - -Since the Poincaré–Hopf index at a point is a purely local invariant (cf. Poincaré–Hopf theorem), one may restrict one's study to that of germs. Assume that each of the ƒk from above are function germs, i.e. ƒk : (Rn,0) → (R,0). In turn, one may call X a vector field germ. - -Let An,0 denote the ring of analytic function germs (Rn,0) → (R,0). Assume that X is a vector field germ of the form -$$ - X = f_1({\mathbf x})\frac{\partial}{\partial x_1} + \cdots + f_n({\mathbf x})\frac{\partial}{\partial x_n} -$$ - -with an algebraically isolated singularity at 0, where, as mentioned above, each of the ƒk are function germs (Rn,0) → (R,0). Denote by IX the ideal generated by the ƒk, i.e. IX = (ƒ1, …, ƒn). Then one considers the local algebra, BX, given by the quotient -$$ - B_X := A_{n,0} / I_X .
-$$ - -The Eisenbud–Levine–Khimshiashvili signature formula states that the index of the vector field X at 0 is given by the signature of a certain non-degenerate bilinear form (to be defined below) on the local algebra BX. As an example, consider the vector field germ on R2 given by -$$ - X = (x^3 - 3xy^2)\frac{\partial}{\partial x} + (3x^2y - y^3)\frac{\partial}{\partial y} , -$$ - -for which the Poincaré–Hopf index at 0 can be computed directly. This is very rarely the case, and was the reason for the choice of example. If one takes polar coordinates on the plane, i.e. x = r cos(θ) and y = r sin(θ), then $x^3 - 3xy^2 = r^3\cos(3\theta)$ and $3x^2y - y^3 = r^3\sin(3\theta)$. Restrict X to a circle, centre 0, radius $\varepsilon$ with $0 < \varepsilon \ll 1$, denoted by C0,ε; and consider the map G : C0,ε → C0,1 given by -$$ - G\colon X \longmapsto \frac{X}{\|X\|} . -$$ - -The Poincaré–Hopf index of X is, by definition, the topological degree of the map G. Restricting X to the circle C0,ε, for arbitrarily small ε, gives -$$ -G(\theta) = (\cos(3\theta),\sin(3\theta)) , -$$ - -meaning that as θ makes one rotation about the circle C0,ε in an anti-clockwise direction, the image G(θ) makes three complete, anti-clockwise rotations about the unit circle C0,1. This means that the topological degree of G is +3 and that the Poincaré–Hopf index of X at 0 is +3. diff --git a/wiki/wikipedia/3071.txt b/wiki/wikipedia/3071.txt deleted file mode 100644 index d0cb5f6e5ce8759d67749341bd796d8d2bec6b73..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3071.txt +++ /dev/null @@ -1,39 +0,0 @@ -In computer science, B* (pronounced "B star") is a best-first graph search algorithm that finds the least-cost path from a given initial node to any goal node (out of one or more possible goals). First published by Hans Berliner in 1979, it is related to the A* search algorithm. - -The algorithm stores intervals for nodes of the tree as opposed to single point-valued estimates. Then, leaf nodes of the tree can be searched until one of the top level nodes has an interval which is clearly "best." - -Leaf nodes of a B*-tree are given evaluations that are intervals rather than single numbers. The interval is supposed to contain the true value of that node. If all intervals attached to leaf nodes satisfy this property, then B* will identify an optimal path to the goal state. - -To back up the intervals within the tree, a parent's upper bound is set to the maximum of the upper bounds of the children. A parent's lower bound is set to the maximum of the lower bounds of the children. Note that different children might supply these bounds. - -B* systematically expands nodes in order to create "separation," which occurs when the lower bound of a direct child of the root is at least as large as the upper bound of any other direct child of the root. A tree that creates separation at the root contains a proof that the best child is at least as good as any other child. - -In practice, complex searches might not terminate within practical resource limits. So the algorithm is normally augmented with artificial termination criteria such as time or memory limits. When an artificial limit is hit, you must make a heuristic judgment about which move to select. Normally, the tree would supply you with extensive evidence, like the intervals of the root's children. - -B* is a best-first process, which means that it is very efficient to traverse the tree, repeatedly descending to find a leaf to expand. This section describes how to choose the node to expand. (Note: Whether or not the tree is memory-resident is a function of the overall implementation efficiency, including how it may be mapped and/or managed via real or virtual memory.)
- -At the root of the tree, the algorithm applies one of two strategies, called prove-best and disprove-rest. In the prove-best strategy, the algorithm selects the node associated with the highest upper bound. The hope is that expanding that node will raise its lower bound higher than any other node's upper bound. - -The disprove-rest strategy selects the child of the root that has the second-highest upper bound. The hope is that by expanding that node you might be able to reduce the upper bound to less than the lower bound of the best child. - -Note that applying the disprove-rest strategy is pointless until the lower bound of the child node that has the highest upper bound is the highest among all lower bounds. - -The original algorithm description did not give any further guidance on which strategy to select. There are several reasonable alternatives, such as expanding the choice that has the smaller tree. - -Once a child of the root has been selected (using prove-best or disprove-rest) then the algorithm descends to a leaf node by repeatedly selecting the child that has the highest upper bound. - -When a leaf node is reached, the algorithm generates all successor nodes and assigns intervals to them using the evaluation function. Then the intervals of all nodes have to be backed up using the backup operation. - -When transpositions are possible, then the back-up operation might need to alter the values of nodes that did not lie on the selection path. In this case, the algorithm needs pointers from children to all parents so that changes can be propagated. Note that propagation can cease when a backup operation does not change the interval associated with a node. - -If intervals are incorrect (in the sense that the game-theoretic value of the node is not contained within the interval), then B* might not be able to identify the correct path. However, the algorithm is fairly robust to errors in practice. - -The Maven (Scrabble) program has an innovation that improves the robustness of B* when evaluation errors are possible. If a search terminates due to separation then Maven restarts the search after widening all of the evaluation intervals by a small amount. This policy progressively widens the tree, eventually erasing all errors. - -The B* algorithm applies to two-player deterministic zero-sum games. In fact, the only change is to interpret "best" with respect to the side moving in that node. So you would take the maximum if your side is moving, and the minimum if the opponent is moving. Equivalently, you can represent all intervals from the perspective of the side to move, and then negate the values during the back-up operation. - -Andrew Palay applied B* to chess. Endpoint evaluations were assigned by performing null-move searches. There is no report of how well this system performed compared to alpha-beta pruning search engines running on the same hardware. - -The Maven (Scrabble) program applied B* search to endgames. Endpoint evaluations were assigned using a heuristic planning system. - -The B* search algorithm has been used to compute optimal strategy in a sum game of a set of combinatorial games. 
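The interval bookkeeping described above is compact enough to sketch in code. The following minimal Python sketch is illustrative only (the Node class and all names are hypothetical, not Berliner's implementation), showing just the backup rule and the root-separation test:

```python
class Node:
    """A search-tree node carrying an interval [lower, upper] that is
    supposed to contain the node's true value."""
    def __init__(self, lower, upper, children=None):
        self.lower = lower
        self.upper = upper
        self.children = children or []

def backup(node):
    """Back up intervals: a parent's upper (resp. lower) bound is the
    maximum of its children's upper (resp. lower) bounds."""
    for child in node.children:
        backup(child)
    if node.children:
        node.upper = max(c.upper for c in node.children)
        node.lower = max(c.lower for c in node.children)

def best_separated_child(root):
    """Separation: some child's lower bound is at least as large as the
    upper bound of every other child; that child is provably best."""
    for i, c in enumerate(root.children):
        if all(c.lower >= d.upper
               for j, d in enumerate(root.children) if j != i):
            return c
    return None

# Tiny example: after backup, the first child separates from the rest.
root = Node(0, 0, [Node(0, 0, [Node(6, 8), Node(5, 7)]),  # backs up to [6, 8]
                   Node(1, 4)])
backup(root)
assert best_separated_child(root) is root.children[0]
```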
diff --git a/wiki/wikipedia/3072.txt b/wiki/wikipedia/3072.txt deleted file mode 100644 index a4c7fe5b2744bff437731ecdb35d5ada46a6ffbe..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3072.txt +++ /dev/null @@ -1,55 +0,0 @@ -The prolific mathematician Paul Erdős and his various collaborators made many famous mathematical conjectures, over a wide field of subjects, and in many cases Erdős offered monetary rewards for solving them. - -* The Erdős–Gyárfás conjecture on cycles with lengths equal to a power of two in graphs with minimum degree 3. - -* The Erdős–Hajnal conjecture that in a family of graphs defined by an excluded induced subgraph, every graph has either a large clique or a large independent set. - -* The Erdős–Mollin–Walsh conjecture on consecutive triples of powerful numbers. - -* The Erdős–Selfridge conjecture that a covering system with distinct moduli contains at least one even modulus. - -* The Erdős–Straus conjecture on the Diophantine equation 4/n = 1/x + 1/y + 1/z. - -* The Erdős conjecture on arithmetic progressions in sequences with divergent sums of reciprocals. - -* The Erdős–Szekeres conjecture on the number of points needed to ensure that a point set contains a large convex polygon. - -* The Erdős–Turán conjecture on additive bases of natural numbers. - -* A conjecture on quickly growing integer sequences with rational reciprocal series. - -* A conjecture with Norman Oler on circle packing in an equilateral triangle with a number of circles one less than a triangular number. - -* The minimum overlap problem to estimate the limit of M(n). - -* A conjecture on whether the ternary expansion of $2^n$ contains at least one digit 2, for $n > 8$. - -* The Erdős–Faber–Lovász conjecture on coloring unions of cliques, proved (for all large n) by Dong Yeap Kang, Tom Kelly, Daniela Kühn, Abhishek Methuku, and Deryk Osthus. - -* The Erdős sumset conjecture on sets, proven by Joel Moreira, Florian Karl Richter, and Donald Robertson in 2018. The proof appeared in "Annals of Mathematics" in March 2019. - -* The Burr–Erdős conjecture on Ramsey numbers of graphs, proved by Choongbum Lee in 2015. - -* A conjecture on equitable colorings proven in 1970 by András Hajnal and Endre Szemerédi and now known as the Hajnal–Szemerédi theorem. - -* A conjecture that would have strengthened the Furstenberg–Sárközy theorem to state that the number of elements in a square-difference-free set of positive integers could only exceed the square root of its largest value by a polylogarithmic factor, disproved by András Sárközy in 1978. - -* The Erdős–Lovász conjecture on weak/strong delta-systems, proved by Michel Deza in 1974. - -* The Erdős–Heilbronn conjecture in combinatorial number theory on the number of sums of two sets of residues modulo a prime, proved by Dias da Silva and Hamidoune in 1994. - -* The Erdős–Graham conjecture in combinatorial number theory on monochromatic Egyptian fraction representations of unity, proved by Ernie Croot in 2000. - -* The Erdős–Stewart conjecture on the Diophantine equation $n! + 1 = p_k^a p_{k+1}^b$, solved by Florian Luca in 2001. - -* The Cameron–Erdős conjecture on sum-free sets of integers, proved by Ben Green and Alexander Sapozhenko in 2003–2004. - -* The Erdős–Menger conjecture on disjoint paths in infinite graphs, proved by Ron Aharoni and Eli Berger in 2009. - -* The Erdős distinct distances problem. The correct exponent was proved in 2010 by Larry Guth and Nets Katz, but the correct power of log n is still open.
- -* The Erdős–Rankin conjecture on prime gaps, proved by Ford, Green, Konyagin, and Tao in 2014. - -* Erdős discrepancy problem on partial sums of ±1-sequences. - -* Erdős squarefree conjecture that central binomial coefficients C(2n, n) are never squarefree for n > 4 was proved in 1996. diff --git a/wiki/wikipedia/3073.txt b/wiki/wikipedia/3073.txt deleted file mode 100644 index 3bf7d17f41a820b20e9f91d954a31cefaff92611..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3073.txt +++ /dev/null @@ -1,59 +0,0 @@ -Interpretability logics comprise a family of modal logics that extend provability logic to describe interpretability or various related metamathematical properties and relations such as weak interpretability, Π1-conservativity, cointerpretability, tolerance, cotolerance, and arithmetic complexities. - -Main contributors to the field are Alessandro Berarducci, Petr Hájek, Konstantin Ignatiev, Giorgi Japaridze, Franco Montagna, Vladimir Shavrukov, Rineke Verbrugge, Albert Visser, and Domenico Zambella. - -The language of ILM extends that of classical propositional logic by adding the unary modal operator $\Box$ and the binary modal operator $\triangleright$ (as always, $\Diamond p$ is defined as $\neg \Box\neg p$). The arithmetical interpretation of $\Box p$ is “$p$ is provable in Peano arithmetic (PA)”, and $p \triangleright q$ is understood as “$PA+q$ is interpretable in $PA+p$”. - -Axiom schemata: - -1. All classical tautologies - -2. $\Box(p \rightarrow q) \rightarrow (\Box p \rightarrow \Box q)$ - -3. $\Box(\Box p \rightarrow p) \rightarrow \Box p$ - -4. $ \Box (p \rightarrow q) \rightarrow (p \triangleright q)$ - -5. $ (p \triangleright q)\wedge (q \triangleright r)\rightarrow (p\triangleright r)$ - -6. $ (p \triangleright r)\wedge (q \triangleright r)\rightarrow ((p\vee q)\triangleright r)$ - -7. $ (p \triangleright q)\rightarrow (\Diamond p \rightarrow \Diamond q) $ - -8. $ \Diamond p \triangleright p $ - -9. $ (p \triangleright q)\rightarrow((p\wedge\Box r)\triangleright (q\wedge\Box r)) $ - -Rules of inference: - -1. “From $p$ and $p\rightarrow q$ conclude $q$” - -2. “From $p$ conclude $\Box p$”. - -The completeness of ILM with respect to its arithmetical interpretation was independently proven by Alessandro Berarducci and Vladimir Shavrukov. - -The language of TOL extends that of classical propositional logic by adding the modal operator $\Diamond$ which is allowed to take any nonempty sequence of arguments. The arithmetical interpretation of $\Diamond( p_1,\ldots,p_n)$ is “$(PA+p_1,\ldots,PA+p_n)$ is a tolerant sequence of theories”. - -Axioms (with $p,q$ standing for any formulas, $\vec{r},\vec{s}$ for any sequences of formulas, and $\Diamond()$ identified with ⊤): - -1. All classical tautologies - -2. $\Diamond (\vec{r},p,\vec{s})\rightarrow \Diamond (\vec{r}, p\wedge\neg q,\vec{s})\vee \Diamond (\vec{r}, q,\vec{s}) $ - -3. $\Diamond (p)\rightarrow \Diamond (p\wedge \neg\Diamond (p)) $ - -4. $\Diamond (\vec{r},p,\vec{s})\rightarrow \Diamond (\vec{r},\vec{s})$ - -5. $\Diamond (\vec{r},p,\vec{s})\rightarrow \Diamond (\vec{r},p,p,\vec{s})$ - -6. $\Diamond (p,\Diamond(\vec{r}))\rightarrow \Diamond (p\wedge\Diamond(\vec{r}))$ - -7. $\Diamond (\vec{r},\Diamond(\vec{s}))\rightarrow \Diamond (\vec{r},\vec{s})$ - -Rules of inference: - -1. “From $p$ and $p\rightarrow q$ conclude $q$” - -2. “From $\neg p$ conclude $\neg \Diamond( p)$”. - -The completeness of TOL with respect to its arithmetical interpretation was proven by Giorgi Japaridze. 
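To illustrate how the axioms and rules combine in practice, here is a short Hilbert-style derivation in ILM of the reflexivity of $\triangleright$ (a worked example assembled from the axiom schemata and rules listed above, not taken from the source article):

1. $p \rightarrow p$ (a classical tautology, axiom schema 1)

2. $\Box(p \rightarrow p)$ (from 1 by the necessitation rule)

3. $\Box(p \rightarrow p) \rightarrow (p \triangleright p)$ (axiom schema 4 with $q := p$)

4. $p \triangleright p$ (from 2 and 3 by modus ponens)

Arithmetically, this says that $PA+p$ is (trivially) interpretable in itself.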
diff --git a/wiki/wikipedia/3074.txt b/wiki/wikipedia/3074.txt deleted file mode 100644 index 8d3577b7adac9e898a677d86bdf95b1f584e08db..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3074.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, a Kline sphere characterization, named after John Robert Kline, is a topological characterization of a two-dimensional sphere in terms of what sort of subset separates it. Its proof was one of the first notable accomplishments of R. H. Bing; Bing gave an alternate proof using brick partitioning in his paper Complementary domains of continuous curves - -A simple closed curve in a two-dimensional sphere (for instance, its equator) separates the sphere into two pieces upon removal. If one removes a pair of points from a sphere, however, the remainder is connected. Kline's sphere characterization states that the converse is true: If a nondegenerate locally connected metric continuum is separated by any simple closed curve but by no pair of points, then it is a two-dimensional sphere. diff --git a/wiki/wikipedia/3075.txt b/wiki/wikipedia/3075.txt deleted file mode 100644 index 171da4b688f41af9d00cd9a8485eea4d72b89589..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3075.txt +++ /dev/null @@ -1,169 +0,0 @@ -A symmetric, informationally complete, positive operator-valued measure (SIC-POVM) is a special case of a generalized measurement on a Hilbert space, used in the field of quantum mechanics. A measurement of the prescribed form satisfies certain defining qualities that makes it an interesting candidate for a "standard quantum measurement", utilized in the study of foundational quantum mechanics, most notably in QBism. Furthermore, it has been shown that applications exist in quantum state tomography and quantum cryptography, and a possible connection has been discovered with Hilbert's twelfth problem. - -Due to the use of SIC-POVMs primarily in quantum mechanics, Dirac notation will be used throughout this article to represent elements in a Hilbert space. - -A POVM over a $d$-dimensional Hilbert space $\mathcal{H}$ is a set of $m$ positive-semidefinite operators $F = \left\{F_i \mid i \in \{1, \ldots, m\}\right\}$ on the Hilbert space that sum to the identity: - - - -\sum_{i=1}^m F_i = I. - - - -If a POVM consists of at least $d^2$ operators which span the space of self-adjoint operators $\mathcal{L}(\mathcal{H})$, it is said to be an informationally complete POVM (IC-POVM). IC-POVMs consisting of exactly $d^2$ elements are called minimal. A set of $d^2$ rank-1 projectors $\Pi = \left\{\Pi_i \mid i \in \{1, \ldots, d^2\} \land \Pi_i^2 = \Pi_i\right\}$ which have equal pairwise Hilbert–Schmidt inner products, - - - -\mathrm{Tr}\left( \Pi_i \Pi_j \right) = \frac{d\delta_{ij} + 1}{d + 1}, - -defines a minimal IC-POVM $F = \left\{F_i \mid i \in \{1, \ldots, d^2\} \land F_i = \frac{1}{d} \Pi_i \land \Pi_i \in \Pi\right\}$ called a SIC-POVM. - -The condition that the projectors $\Pi_i\in\Pi$ defined above have equal pairwise inner products actually fixes the value of this constant. Recall that $ \frac{1}{d} \sum_i \Pi_i = I$ and set $ \mathrm{Tr}(\Pi_i \Pi_j ) = c$. Then - - - -\begin{align} - -d &= \mathrm{Tr}(I^2) \\ - -&= \frac{1}{d^2} \sum_{i,j} \mathrm{Tr}(\Pi_i \Pi_j) \\ - -&= \frac{1}{d^2} \left( d^2 + c d^2 (d^2-1) \right) - -\end{align} - - - -implies that $c = \frac{1}{d + 1}$. Thus, - - - -\mathrm{Tr}\left( \Pi_i \Pi_j \right) = \frac{d\delta_{ij} + 1}{d + 1}. 
- - - -This property is what makes SIC-POVMs symmetric; with respect to the Hilbert–Schmidt inner product, any pair of elements is equivalent to any other pair. - -In using the SIC-POVM elements, an interesting superoperator can be constructed, the likes of which map $ \mathcal{L}(\mathcal{H}) \rightarrow \mathcal{L}(\mathcal{H}) $. This operator is most useful in considering the relation of SIC-POVMs with spherical t-designs. Consider the map - - \begin{align} \mathcal{G}: \mathcal{L}(\mathcal{H}) &\rightarrow \mathcal{L}(\mathcal{H})\\ - -A &\mapsto \displaystyle \sum_\alpha |\psi_\alpha \rangle \langle \psi_\alpha | A |\psi_\alpha \rangle \langle \psi_\alpha | \end{align} - -This operator acts on a SIC-POVM element in a way very similar to identity, in that - - \begin{align} \mathcal{G}(\Pi_\beta) &= \displaystyle \sum_\alpha \Pi_\alpha \left| \langle \psi_\alpha | \psi_\beta \rangle \right|^2 \\ - -&= \displaystyle \Pi_\beta + \frac{1}{d+1} \sum_{\alpha \neq \beta} \Pi_\alpha \\ - -&= \displaystyle \frac{d}{d+1} \Pi_\beta + \frac{1}{d+1} \Pi_\beta + \frac{1}{d+1} \sum_{\alpha \neq \beta} \Pi_\alpha \\ - -&= \displaystyle \frac{d}{d+1} \Pi_\beta + \frac{d}{d+1}\sum_\alpha \frac{1}{d}\Pi_\alpha \\ - -&= \displaystyle \frac{d}{d+1} \left( \Pi_\beta + I \right) - -\end{align} - -But since elements of a SIC-POVM can completely and uniquely determine any quantum state, this linear operator can be applied to the decomposition of any state, resulting in the ability to write the following: -$$ - G = \frac{d}{d+1} \left( \mathcal{I} + I \right) -$$ where $ I(A) = A \text{ and } \mathcal{I}(A) = \mathrm{Tr}(A)I $ - -From here, the left inverse can be calculated to be $ G^{-1} = \frac1d \left[ \left(d+1\right)I - \mathcal{I} \right]$, and so with the knowledge that -$$ - I=G^{-1}G = \frac1d \sum_\alpha \left[ (d+1)\Pi_\alpha \odot \Pi_\alpha - I\odot \Pi_\alpha \right] -$$, - -an expression for a state $ \rho $ can be created in terms of a quasi-probability distribution, as follows: - - \begin{align} \rho = I | \rho ) &= \displaystyle \sum_\alpha \left[ (d+1)\Pi_\alpha - I \right] \frac{ (\Pi_\alpha|\rho)}{d} \\ - -&= \displaystyle \sum_\alpha \left[ (d+1)\Pi_\alpha - I \right] \frac{ \mathrm{Tr}(\Pi_\alpha\rho)}{d} \\ - -&= \displaystyle \sum_\alpha p_\alpha \left[ (d+1)\Pi_\alpha - I \right] \quad \text{ where } p_\alpha = \mathrm{Tr}(\Pi_\alpha\rho)/d\\ - -&= \displaystyle -I + (d+1) \sum_\alpha p_\alpha |\psi_\alpha \rangle \langle \psi_\alpha | \\ - -&= \displaystyle \sum_\alpha \left[ (d+1)p_\alpha - \frac1d \right] |\psi_\alpha \rangle \langle \psi_\alpha | - -\end{align} - -where $ | \rho ) $ is the Dirac notation for the density operator viewed in the Hilbert space $ \mathcal{L} (\mathcal{H}) $. This shows that the appropriate quasi-probability distribution (termed as such because it may yield negative results) representation of the state $ \rho $ is given by -$$ -(d+1)p_\alpha - \frac1d -$$ - -For $d=2$ the equations that define the SIC-POVM can be solved by hand, yielding the vectors - - - -\begin{align} - -|\psi_1\rangle &= |0\rangle \\ - -|\psi_2\rangle &= \frac1{\sqrt3}|0\rangle + \sqrt{\frac23}|1\rangle \\ - -|\psi_3\rangle &= \frac1{\sqrt3}|0\rangle + \sqrt{\frac23}e^{i\frac{2\pi}{3}}|1\rangle \\ - -|\psi_4\rangle &= \frac1{\sqrt3}|0\rangle + \sqrt{\frac23}e^{i\frac{4\pi}{3}}|1\rangle, - -\end{align} - - - -which form the vertices of a regular tetrahedron in the Bloch sphere. The projectors that define the SIC-POVM are given by $\Pi_i = |\psi_i\rangle\langle\psi_i|$. 
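These four vectors can be checked numerically. The following sketch (an illustration assuming numpy, not part of the source article) verifies the two defining properties derived above, the symmetric overlap condition $\mathrm{Tr}(\Pi_i \Pi_j) = (d\delta_{ij}+1)/(d+1)$ and the resolution of the identity $\frac{1}{d}\sum_i \Pi_i = I$:

```python
import numpy as np

d = 2
w = np.exp(2j * np.pi / 3)  # the phase e^{2 pi i / 3} appearing above

# The four qubit SIC vectors listed above.
psis = [np.array([1.0, 0.0]),
        np.array([1.0, np.sqrt(2)]) / np.sqrt(3),
        np.array([1.0, np.sqrt(2) * w]) / np.sqrt(3),
        np.array([1.0, np.sqrt(2) * w**2]) / np.sqrt(3)]
projs = [np.outer(p, p.conj()) for p in psis]

# Equal pairwise Hilbert-Schmidt inner products: (d*delta_ij + 1)/(d + 1).
for i in range(d * d):
    for j in range(d * d):
        expected = (d * (i == j) + 1) / (d + 1)
        assert np.isclose(np.trace(projs[i] @ projs[j]).real, expected)

# The operators F_i = Pi_i / d sum to the identity, so they form a POVM.
assert np.allclose(sum(projs) / d, np.eye(d))
```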
- -For higher dimensions this is not feasible, necessitating the use of a more sophisticated approach. - -A SIC-POVM $P$ is said to be group covariant if there exists a group $ G $ with a $d^2$-dimensional unitary representation such that - -* $ \forall |\psi\rangle\langle \psi | \in P, \quad \forall U_g \in G,\quad U_g|\psi\rangle \in P $ - -* $ \forall |\psi\rangle\langle \psi |, |\phi \rangle\langle \phi | \in P, \quad \exists U_g \in G, \quad U_g |\phi \rangle = | \psi \rangle $ - -The search for SIC-POVMs can be greatly simplified by exploiting the property of group covariance. Indeed, the problem is reduced to finding a normalized fiducial vector $ | \phi \rangle $ such that -$$ - | \langle \phi | U_g | \phi \rangle |^2 = \frac{1}{d+1} \ \forall g \neq id -$$. - -The SIC-POVM is then the set generated by the group action of $ U_g $ on $ |\phi \rangle $. - -So far, most SIC-POVMs have been found by considering group covariance under $ \mathbb{Z}_d \times \mathbb{Z}_d $; on this basis, Gerhard Zauner conjectured in 1999 that such a fiducial vector exists in every dimension. - -More specifically,
    - -For every dimension $d\geq 2$ there exists a SIC-POVM whose elements are the orbit of a positive rank-one operator $E_0$ under the Weyl–Heisenberg group $ H_d $. What is more, $E_0$ commutes with an element T of the Jacobi group $J_d=H_d \rtimes SL(2,\mathbb{Z}_d)$. The action of T on $H_d$ modulo the center has order three. - -
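As an illustration of this orbit construction, the sketch below (assuming numpy; the $d = 2$ fiducial used is the state with Bloch vector $(1,1,1)/\sqrt{3}$, an input chosen for the example rather than taken from the source) generates the group orbit of a fiducial vector under the clock-and-shift displacement operators and checks the defining overlap condition $|\langle\phi|U_g|\phi\rangle|^2 = 1/(d+1)$:

```python
import numpy as np

d = 2  # the orbit code below is generic; only the fiducial is d = 2 specific

# Clock and shift generators of the Weyl-Heisenberg group H_d.
omega = np.exp(2j * np.pi / d)
shift = np.roll(np.eye(d), 1, axis=0)     # X|j> = |j+1 mod d>
clock = np.diag(omega ** np.arange(d))    # Z|j> = omega^j |j>

# A d = 2 fiducial: Bloch vector (1, 1, 1)/sqrt(3).
theta = np.arccos(1 / np.sqrt(3))
fid = np.array([np.cos(theta / 2),
                np.exp(1j * np.pi / 4) * np.sin(theta / 2)])

# Orbit of the fiducial under the d^2 displacement operators X^a Z^b.
orbit = [np.linalg.matrix_power(shift, a) @ np.linalg.matrix_power(clock, b) @ fid
         for a in range(d) for b in range(d)]

# |<phi_i|phi_j>|^2 should be 1 on the diagonal and 1/(d+1) off it.
for i, u in enumerate(orbit):
    for j, v in enumerate(orbit):
        target = 1.0 if i == j else 1 / (d + 1)
        assert np.isclose(abs(np.vdot(u, v)) ** 2, target)
```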
- -Utilizing the notion of group covariance on $ \mathbb{Z}_d \times \mathbb{Z}_d $, this can be restated as the assertion that in every dimension there exists a fiducial vector for the Weyl–Heisenberg group satisfying the overlap condition above. The conjecture remains unproven, and it is an ongoing field of research in the quantum information community. - -Exact expressions for SIC sets have been found for Hilbert spaces of all dimensions from $ d=2 $ through $ d = 53 $ inclusive, and in some higher dimensions as large as $ d = 5779 $, for 115 values of $ d $ in all. Furthermore, using the Heisenberg group covariance on $ \mathbb{Z}_d\times \mathbb{Z}_d $, numerical solutions have been found for all integers up through $ d=193 $, and in some larger dimensions up to $ d = 2208$. - -A spherical t-design is a set of vectors $ S=\left\{ | \phi_k \rangle : |\phi_k \rangle \in \mathbb{S}^d \right\} $ on the d-dimensional generalized hypersphere, such that the average value of any $ t^{th}$-order polynomial $ f_t(\psi) $ over $ S $ is equal to the average of $ f_t(\psi) $ over all normalized vectors $ | \psi \rangle $. Defining $ \mathcal{H}_t = \displaystyle \bigotimes_{i=1}^t \mathcal{H} $ as the t-fold tensor product of the Hilbert spaces, and -$$ -S_t = \displaystyle \sum_{k=1}^n | \Phi_k^t \rangle \langle \Phi_k^t | , \quad |\Phi_k^t\rangle = |\phi_k\rangle^{\otimes t} -$$ - -as the t-fold tensor product frame operator, it can be shown that a set of normalized vectors $ \left\{ | \phi_k \rangle \in \mathbb{S}^d \right\}_{k=1}^n $ with $ n \geq {t+d-1 \choose d-1} $ forms a spherical t-design if and only if -$$ - \displaystyle \mathrm{Tr}\left[ S_t^2 \right] = \sum_{j,k} \left| \langle \phi_j | \phi_k \rangle \right|^{2t} = \frac{n^2 t! (d-1)!}{(t+d-1)!} -$$ - -It then immediately follows that every SIC-POVM is a 2-design, since -$$ - \mathrm{Tr}(S^2_2) = \displaystyle \sum_{j,k} |\langle \phi_j |\phi_k \rangle |^4 = \frac{2d^3}{d+1} -$$ - -which is precisely the necessary value that satisfies the above theorem. - -In a d-dimensional Hilbert space, two distinct bases $ \left\{|\psi_i\rangle \right\}, \left\{ |\phi_j \rangle \right\} $ are said to be mutually unbiased if -$$ -\displaystyle |\langle \psi_i | \phi_j \rangle|^2 = \frac{1}{d}, \quad \forall i,j -$$ - -This seems similar in nature to the symmetric property of SIC-POVMs. Wootters points out that a complete set of $d+1$ unbiased bases yields a geometric structure known as a finite projective plane, while a SIC-POVM (in any dimension that is a prime power) yields a finite affine plane, a type of structure whose definition is identical to that of a finite projective plane with the roles of points and lines exchanged. In this sense, the problems of SIC-POVMs and of mutually unbiased bases are dual to one another. - -In dimension $d = 3$, the analogy can be taken further: a complete set of mutually unbiased bases can be directly constructed from a SIC-POVM. The 9 vectors of the SIC-POVM, together with the 12 vectors of the mutually unbiased bases, form a set that can be used in a Kochen–Specker proof. However, in 6-dimensional Hilbert space, a SIC-POVM is known, but no complete set of mutually unbiased bases has yet been discovered, and it is widely believed that no such set exists.
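Continuing the qubit example from earlier, the 2-design identity above is easy to confirm numerically (a sketch assuming numpy; with $n = d^2 = 4$ and $t = 2$, both sides of the criterion equal $2d^3/(d+1) = 16/3$):

```python
import numpy as np
from math import factorial

d, t = 2, 2
w = np.exp(2j * np.pi / 3)
# The four tetrahedron vectors of the d = 2 SIC-POVM.
phis = [np.array([1.0, 0.0]),
        np.array([1.0, np.sqrt(2)]) / np.sqrt(3),
        np.array([1.0, np.sqrt(2) * w]) / np.sqrt(3),
        np.array([1.0, np.sqrt(2) * w**2]) / np.sqrt(3)]
n = len(phis)  # n = d^2 = 4 >= binom(t+d-1, d-1) = 3

# Left side: Tr[S_t^2] = sum_{j,k} |<phi_j|phi_k>|^(2t).
lhs = sum(abs(np.vdot(p, q)) ** (2 * t) for p in phis for q in phis)

# Right side: n^2 t! (d-1)! / (t+d-1)!
rhs = n**2 * factorial(t) * factorial(d - 1) / factorial(t + d - 1)

assert np.isclose(lhs, rhs) and np.isclose(lhs, 2 * d**3 / (d + 1))
```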
diff --git a/wiki/wikipedia/3076.txt b/wiki/wikipedia/3076.txt deleted file mode 100644 index 9299719a65df0bf6e9c6e257504f7956c7995902..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3076.txt +++ /dev/null @@ -1,51 +0,0 @@ -In mathematics, the main conjecture of Iwasawa theory is a deep relationship between p-adic L-functions and ideal class groups of cyclotomic fields, proved by Kenkichi Iwasawa for primes satisfying the Kummer–Vandiver conjecture and proved for all primes by Barry Mazur and Andrew Wiles (1984). The Herbrand–Ribet theorem and the Gras conjecture are both easy consequences of the main conjecture. - -There are several generalizations of the main conjecture, to totally real fields, CM fields, elliptic curves, and so on. - -Iwasawa was partly motivated by an analogy with Weil's description of the zeta function of an algebraic curve over a finite field in terms of eigenvalues of the Frobenius endomorphism on its Jacobian variety. In this analogy, - -* The action of the Frobenius corresponds to the action of the group Γ. - -* The Jacobian of a curve corresponds to a module X over Γ defined in terms of ideal class groups. - -* The zeta function of a curve over a finite field corresponds to a p-adic L-function. - -* Weil's theorem relating the eigenvalues of Frobenius to the zeros of the zeta function of the curve corresponds to Iwasawa's main conjecture relating the action of the Iwasawa algebra on X to zeros of the p-adic zeta function. - -The main conjecture of Iwasawa theory was formulated as an assertion that two methods of defining p-adic L-functions (by module theory, by interpolation) should coincide, as far as that was well-defined. This was proved by Mazur and Wiles for Q, and for all totally real number fields by Wiles. These proofs were modeled upon Ken Ribet's proof of the converse to Herbrand's theorem (the Herbrand–Ribet theorem). - -Karl Rubin found a more elementary proof of the Mazur–Wiles theorem by using Thaine's method and Kolyvagin's Euler systems, described in Lang and Washington, and later proved other generalizations of the main conjecture for imaginary quadratic fields. - -In 2014, Christopher Skinner and Eric Urban proved several cases of the main conjectures for a large class of modular forms. As a consequence, for a modular elliptic curve over the rational numbers, they prove that the vanishing of the Hasse–Weil L-function L(E, s) of E at s = 1 implies that the p-adic Selmer group of E is infinite. Combined with theorems of Gross–Zagier and Kolyvagin, this gave a conditional proof (on the Tate–Shafarevich conjecture) of the conjecture that E has infinitely many rational points if and only if L(E, 1) = 0, a (weak) form of the Birch–Swinnerton-Dyer conjecture. These results were used by Manjul Bhargava, Skinner, and Wei Zhang to prove that a positive proportion of elliptic curves satisfy the Birch–Swinnerton-Dyer conjecture. - -* p is a prime number. - -* $F_n$ is the field $\mathbb{Q}(\zeta)$ where ζ is a root of unity of order $p^{n+1}$. - -* Γ is the largest subgroup of the absolute Galois group of $F_\infty = \bigcup_n F_n$ isomorphic to the p-adic integers. - -* γ is a topological generator of Γ. - -* $L_n$ is the p-Hilbert class field of $F_n$. - -* $H_n$ is the Galois group $\mathrm{Gal}(L_n/F_n)$, isomorphic to the subgroup of elements of the ideal class group of $F_n$ whose order is a power of p. - -* H is the inverse limit of the Galois groups $H_n$. - -* V is the vector space $H \otimes_{\mathbb{Z}_p} \mathbb{Q}_p$. - -* ω is the Teichmüller character. - -* $V_i$ is the $\omega^i$ eigenspace of V.
- -* $h_p(\omega^i,T)$ is the characteristic polynomial of γ acting on the vector space $V_i$. - -* $L_p$ is the p-adic L-function with $L_p(\omega^i,1-k) = -B_k(\omega^{i-k})/k$, where $B_k(\chi)$ is a generalized Bernoulli number. - -* u is the unique p-adic number satisfying $\gamma(\zeta) = \zeta^u$ for all p-power roots of unity ζ. - -* $G_p$ is the power series with $G_p(\omega^i,u^s-1) = L_p(\omega^i,s)$. - -The main conjecture of Iwasawa theory proved by Mazur and Wiles states that if i is an odd integer not congruent to 1 mod p − 1, then the ideals of $\mathbb{Z}_p[[T]]$ generated by $h_p(\omega^i,T)$ and $G_p(\omega^{1-i},T)$ are equal. diff --git a/wiki/wikipedia/3077.txt b/wiki/wikipedia/3077.txt deleted file mode 100644 index 4e06cfae0c4f074bbebe9eb0fe30d16106586ce7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3077.txt +++ /dev/null @@ -1,35 +0,0 @@ -The Wiener–Araya graph is, in graph theory, a graph on 42 vertices with 67 edges. It is hypohamiltonian, which means that it does not itself have a Hamiltonian cycle but every graph formed by removing a single vertex from it is Hamiltonian. It is also planar. - -Hypohamiltonian graphs were first studied by Sousselier in Problèmes plaisants et délectables (1963). - -In 1967, Lindgren built an infinite sequence of hypohamiltonian graphs. He first cited Gaudin, Herz and Rossi, then Busacker and Saaty as pioneers on this topic. - -From the start, the smallest hypohamiltonian graph has been known: the Petersen graph. However, the hunt for the smallest planar hypohamiltonian graph continues. This question was first raised by Václav Chvátal in 1973. - -The first candidate answer was provided in 1976 by Carsten Thomassen, who exhibited a 105-vertex construction, the 105-Thomassen graph. - -In 1979, Hatzel improved this result with a planar hypohamiltonian graph on 57 vertices: the Hatzel graph. - -This bound was lowered in 2007 by the 48-Zamfirescu graph. - -In 2009, a graph built by Gábor Wiener and Makoto Araya became (with its 42 vertices) the smallest planar hypohamiltonian graph known. - -In their paper, Wiener and Araya conjectured that their graph is optimal, arguing that its order (42) appears to be the answer to The Ultimate Question of Life, the Universe, and Everything from The Hitchhiker's Guide to the Galaxy, a Douglas Adams novel. diff --git a/wiki/wikipedia/3078.txt b/wiki/wikipedia/3078.txt deleted file mode 100644 index 4619aff2fca6f973e27d71ea985b8dd407de6f75..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3078.txt +++ /dev/null @@ -1,71 +0,0 @@ -In mathematics, the Kolmogorov extension theorem (also known as Kolmogorov existence theorem, the Kolmogorov consistency theorem or the Daniell-Kolmogorov theorem) is a theorem that guarantees that a suitably "consistent" collection of finite-dimensional distributions will define a stochastic process. It is credited to the English mathematician Percy John Daniell and the Russian mathematician Andrey Nikolaevich Kolmogorov. - -Let $T$ denote some interval (thought of as "time"), and let $n \in \mathbb{N}$. For each $k \in \mathbb{N}$ and finite sequence of distinct times $t_{1}, \dots, t_{k} \in T$, let $\nu_{t_{1} \dots t_{k}}$ be a probability measure on $(\mathbb{R}^{n})^{k}$. Suppose that these measures satisfy two consistency conditions: - -1. 
for all permutations $\pi$ of $\{ 1, \dots, k \}$ and measurable sets $F_{i} \subseteq \mathbb{R}^{n}$, -$$ -\nu_{t_{\pi (1)} \dots t_{\pi (k)}} \left( F_{\pi (1)} \times \dots \times F_{ \pi(k)} \right) = \nu_{t_{1} \dots t_{k}} \left( F_{1} \times \dots \times F_{k} \right); -$$ - -2. for all measurable sets $F_{i} \subseteq \mathbb{R}^{n}$,$m \in \mathbb{N}$ -$$ -\nu_{t_{1} \dots t_{k}} \left( F_{1} \times \dots \times F_{k} \right) = \nu_{t_{1} \dots t_{k}, t_{k + 1}, \dots , t_{k+m}} \left( F_{1} \times \dots \times F_{k} \times \underbrace{\mathbb{R}^{n} \times \dots \times \mathbb{R}^{n}}_{m} \right). -$$ - -Then there exists a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and a stochastic process $X : T \times \Omega \to \mathbb{R}^{n}$ such that -$$ -\nu_{t_{1} \dots t_{k}} \left( F_{1} \times \dots \times F_{k} \right) = \mathbb{P} \left( X_{t_{1}} \in F_{1}, \dots, X_{t_{k}} \in F_{k} \right) -$$ - -for all $t_{i} \in T$, $k \in \mathbb{N}$ and measurable sets $F_{i} \subseteq \mathbb{R}^{n}$, i.e. $X$ has $\nu_{t_{1} \dots t_{k}}$ as its finite-dimensional distributions relative to times $t_{1} \dots t_{k}$. - -In fact, it is always possible to take as the underlying probability space $\Omega = (\mathbb{R}^n)^T$ and to take for $X$ the canonical process $X\colon (t,Y) \mapsto Y_t$. Therefore, an alternative way of stating Kolmogorov's extension theorem is that, provided that the above consistency conditions hold, there exists a (unique) measure $\nu$ on $(\mathbb{R}^n)^T$ with marginals $\nu_{t_{1} \dots t_{k}}$ for any finite collection of times $t_{1} \dots t_{k}$. Kolmogorov's extension theorem applies when $T$ is uncountable, but the price to pay - -for this level of generality is that the measure $\nu$ is only defined on the product σ-algebra of $(\mathbb{R}^n)^T$, which is not very rich. - -The two conditions required by the theorem are trivially satisfied by any stochastic process. For example, consider a real-valued discrete-time stochastic process $X$. Then the probability $\mathbb{P}(X_1 >0, X_2<0)$ can be computed either as $\nu_{1,2}( \mathbb{R}_+ \times \mathbb{R}_-)$ or as $\nu_{2,1}( \mathbb{R}_- \times \mathbb{R}_+)$. Hence, for the finite-dimensional distributions to be consistent, it must hold that -$$ -\nu_{1,2}( \mathbb{R}_+ \times \mathbb{R}_-) = \nu_{2,1}( \mathbb{R}_- \times \mathbb{R}_+) -$$. - -The first condition generalizes this statement to hold for any number of time points $t_i$, and any control sets $F_i$. - -Continuing the example, the second condition implies that $\mathbb{P}(X_1>0) = \mathbb{P}(X_1>0, X_2 \in \mathbb{R})$. Also this is a trivial condition that will be satisfied by any consistent family of finite-dimensional distributions. - -Since the two conditions are trivially satisfied for any stochastic process, the power of the theorem is that no other conditions are required: For any reasonable (i.e., consistent) family of finite-dimensional distributions, there exists a stochastic process with these distributions. - -The measure-theoretic approach to stochastic processes starts with a probability space and defines a stochastic process as a family of functions on this probability space. However, in many applications the starting point is really the finite-dimensional distributions of the stochastic process. The theorem says that provided the finite-dimensional distributions satisfy the obvious consistency requirements, one can always identify a probability space to match the purpose. 
In many situations, this means that one does not have to be explicit about what the probability space is. Many texts on stochastic processes do, indeed, assume a probability space but never state explicitly what it is. - -The theorem is used in one of the standard proofs of existence of a Brownian motion, by specifying the finite dimensional distributions to be Gaussian random variables, satisfying the consistency conditions above. As in most of the definitions of Brownian motion it is required that the sample paths are continuous almost surely, and one then uses the Kolmogorov continuity theorem to construct a continuous modification of the process constructed by the Kolmogorov extension theorem. - -The Kolmogorov extension theorem gives us conditions for a collection of measures on Euclidean spaces to be the finite-dimensional distributions of some $\mathbb{R}^{n}$-valued stochastic process, but the assumption that the state space be $\mathbb{R}^{n}$ is unnecessary. In fact, any collection of measurable spaces together with a collection of inner regular measures defined on the finite products of these spaces would suffice, provided that these measures satisfy a certain compatibility relation. The formal statement of the general theorem is as follows. - -Let $T$ be any set. Let $ \{ (\Omega_t, \mathcal{F}_t) \}_{t \in T} $ be some collection of measurable spaces, and for each $ t \in T $, let $ \tau_t$ be a Hausdorff topology on $ \Omega_t$. For each finite subset $J \subset T$, define -$$ -\Omega_J := \prod_{t\in J} \Omega_t -$$. - -For subsets $I \subset J \subset T$, let $\pi^J_I: \Omega_J \to \Omega_I$ denote the canonical projection map $ \omega \mapsto \omega|_I $. - -For each finite subset $ F \subset T$, suppose we have a probability measure $ \mu_F $ on $ \Omega_F $ which is inner regular with respect to the product topology (induced by the $\tau_t$) on $\Omega_F $. Suppose also that this collection $\{\mu_F\}$ of measures satisfies the following compatibility relation: for finite subsets $F \subset G \subset T$, we have that -$$ -\mu_F = (\pi^G_F)_* \mu_G -$$ - -where $(\pi^G_F)_* \mu_G$ denotes the pushforward measure of $ \mu_G$ induced by the canonical projection map $\pi^G_F$. - -Then there exists a unique probability measure $\mu$ on $\Omega_T $ such that $\mu_F=(\pi^T_F)_* \mu$ for every finite subset $F \subset T$. - -As a remark, all of the measures $\mu_F,\mu$ are defined on the product sigma algebra on their respective spaces, which (as mentioned before) is rather coarse. The measure $\mu$ may sometimes be extended appropriately to a larger sigma algebra, if there is additional structure involved. - -Note that the original statement of the theorem is just a special case of this theorem with $\Omega_t = \mathbb{R}^n $ for all $t \in T$, and $ \mu_{\{t_1,...,t_k\}}=\nu_{t_1 \dots t_k}$ for $ t_1,...,t_k \in T$. The stochastic process would simply be the canonical process $ (\pi_t)_{t \in T}$, defined on $\Omega=(\mathbb{R}^n)^T$ with probability measure $P=\mu$. The reason that the original statement of the theorem does not mention inner regularity of the measures $\nu_{t_1\dots t_k}$ is that this would automatically follow, since Borel probability measures on Polish spaces are automatically Radon. 
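To make the consistency conditions concrete, here is a small numerical sketch (assuming numpy; the time points and sets are arbitrary choices made for the example) for the Gaussian finite-dimensional distributions used in the Brownian-motion construction mentioned above. Condition 1 is checked at the level of covariance matrices, condition 2 both exactly and by Monte Carlo:

```python
import numpy as np

def cov(times):
    """Brownian-motion finite-dimensional covariance K(s, t) = min(s, t)."""
    t = np.asarray(times, dtype=float)
    return np.minimum.outer(t, t)

times = [0.5, 1.0, 2.0]
K = cov(times)

# Condition 1 (permutation consistency): permuting the times just
# permutes the rows/columns of the covariance, so the measure of a
# permuted product set is unchanged.
perm = [2, 0, 1]
assert np.allclose(cov([times[i] for i in perm]), K[np.ix_(perm, perm)])

# Condition 2 (marginalization): dropping the last time leaves the
# Gaussian whose covariance is the leading submatrix.
assert np.allclose(cov(times[:2]), K[:2, :2])

# Monte Carlo check of condition 2 on the set F1 x F2 x R.
rng = np.random.default_rng(0)
xs3 = rng.multivariate_normal(np.zeros(3), K, size=200_000)
xs2 = rng.multivariate_normal(np.zeros(2), K[:2, :2], size=200_000)
p3 = np.mean((xs3[:, 0] > 0) & (xs3[:, 1] < 1))  # third coordinate free
p2 = np.mean((xs2[:, 0] > 0) & (xs2[:, 1] < 1))
assert abs(p3 - p2) < 0.01
```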
- -This theorem has many far-reaching consequences; for example it can be used to prove the existence of the following, among others: - -*Brownian motion, i.e., the Wiener process, - -*a Markov chain taking values in a given state space with a given transition matrix, - -*infinite products of (inner-regular) probability spaces. - -According to John Aldrich, the theorem was independently discovered by British mathematician Percy John Daniell in the slightly different setting of integration theory. diff --git a/wiki/wikipedia/3079.txt b/wiki/wikipedia/3079.txt deleted file mode 100644 index 388678d1170368f7b8223e6367c41c998732a833..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3079.txt +++ /dev/null @@ -1,29 +0,0 @@ -Windows Live Mesh (formerly known as Windows Live FolderShare, Live Mesh, and Windows Live Sync) is a free-to-use Internet-based file synchronization application by Microsoft designed to allow files and folders between two or more computers to be in sync with each other on Windows (Vista and later) and Mac OS X (v. 10.5 Leopard and later, Intel processors only) computers or the Web via SkyDrive. Windows Live Mesh also enabled remote desktop access via the Internet. - -Windows Live Mesh was part of the Windows Live Essentials 2011 suite of software. However this application was replaced by SkyDrive for Windows application in Windows Essentials 2012 and later OneDrive in Windows 8/8.1/10. Microsoft announced on December 13, 2012 that Windows Live Mesh would be discontinued on February 13, 2013. - -Features of Windows Live Mesh include: - -*Ability to sync up to 200 folders with 100,000 files each (each file up to 40 GB) for PC-to-PC synchronization - -*Ability to sync up to 5 GB of files to "SkyDrive synced storage" in the cloud - -*Remote Desktop access via Windows Live Mesh and the Windows Live Devices web service - -*PC-to-PC synchronisation of application settings for applications such as: - -**Windows Internet Explorer - synchronisation of favorites and recently typed URLs between computers - -**Microsoft Office - synchronisation of dictionaries, Outlook email signatures, styles and templates between computers - -Microsoft bought FolderShare from ByteTaxi Inc. on November 3, 2005, and subsequently made it a part of their Windows Live range of services. - -On March 10, 2008, Microsoft released its first user visible update to the then Windows Live FolderShare. This comprised a rewrite of the FolderShare website and an updated Windows Live FolderShare client. Support for discussion groups and Remote Desktop Search was also removed in the update. The new client had some user interface and branding updates and contained several bug fixes - including official support for Windows Vista and discontinued support for Windows 2000. - -Since its rebrand as Windows Live FolderShare, the client and service had undergone extensive platform changes, switching from the original LAMP which it was originally built on when acquired, to the Windows Server platform. In the Windows Live Essentials "Wave 3" release, Windows Live FolderShare was again rebranded as Windows Live Sync. New UI improvements were also announced to be part of the "Wave 3" release, integrating it with other Windows Live services. 
New features of the then Windows Live Sync "Wave 3" compared to FolderShare included an increased limit on the number of synced folders, integration with Windows Live ID, integration with Recycle Bin, unicode support, support for Mac OS X, and integration with Windows Live Photo Gallery and Windows Live Toolbar to sync photo albums and favorites between PCs. Windows Live Sync Wave 3 was released on December 11, 2008, and an update of Windows Live Sync for Mac was released on November 2, 2009 to add support for Mac OS X 10.6. - -Microsoft released the Live Mesh technology preview, a data synchronization system that allowed files, folders and other data to be shared and synchronized across multiple personal devices and up to 5 GB on the web, on April 23, 2008. Live Mesh was based on FeedSync technologies to convey the changes made in each device so that the changes could be synchronized across all devices and the cloud. The information about devices and folders participating in a synchronization relationship was not stored locally but at the service-end. - -The Live Mesh software, called the Mesh Operating Environment (MOE), could be used to create and manage the synchronization relationships between devices and data. Live Mesh also included a cloud storage component, called Live Desktop, which was an online storage service that allowed synchronized folders to be accessible via a website. Live Mesh Remote Desktop allowed users to control their devices from the Live Mesh application, as well as from any other internet connected PC. - -Live Mesh also included a developer component, which consisted of a set of protocols and Application Programming Interfaces (API) known as Live Framework (which was also briefly known as MeshFX). It was a REST-based API for accessing the Live Mesh services over HTTP. Microsoft had also provided APIs for managed code (including .NET Framework and Microsoft Silverlight) as well as for Win32 and JavaScript via a developer Software Development Kit (SDK). diff --git a/wiki/wikipedia/308.txt b/wiki/wikipedia/308.txt deleted file mode 100644 index df188115084c36f619feef891430e3c2c1de824f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/308.txt +++ /dev/null @@ -1,15 +0,0 @@ -Gauss's Theorema Egregium (Latin for "Remarkable Theorem") is a major result of differential geometry (proved by Carl Friedrich Gauss in 1827) that concerns the curvature of surfaces. The theorem is that Gaussian curvature can be determined entirely by measuring angles, distances and their rates on a surface, without reference to the particular manner in which the surface is embedded in the ambient 3-dimensional Euclidean space. In other words, the Gaussian curvature of a surface does not change if one bends the surface without stretching it. Thus the Gaussian curvature is an intrinsic invariant of a surface. - -Gauss presented the theorem in this manner (translated from Latin): - -Thus the formula of the preceding article leads itself to the remarkable Theorem. If a curved surface is developed upon any other surface whatever, the measure of curvature in each point remains unchanged. - -The theorem is "remarkable" because the starting definition of Gaussian curvature makes direct use of the position of the surface in space. So it is quite surprising that the result does not depend on its embedding in spite of all bending and twisting deformations undergone. - -In modern mathematical terminology, the theorem may be stated as follows: the Gaussian curvature of a surface is invariant under local isometry. - -A sphere of radius R has constant Gaussian curvature which is equal to 1/R^2.
At the same time, a plane has zero Gaussian curvature. As a corollary of Theorema Egregium, a piece of paper cannot be bent onto a sphere without crumpling. Conversely, the surface of a sphere cannot be unfolded onto a flat plane without distorting the distances. If one were to step on an empty egg shell, its edges have to split in expansion before being flattened. Mathematically, a sphere and a plane are not isometric, even locally. This fact is significant for cartography: it implies that no planar (flat) map of Earth can be perfect, even for a portion of the Earth's surface. Thus every cartographic projection necessarily distorts at least some distances. - -The catenoid and the helicoid are two very different-looking surfaces. Nevertheless, each of them can be continuously bent into the other: they are locally isometric. It follows from Theorema Egregium that under this bending the Gaussian curvature at any two corresponding points of the catenoid and helicoid is always the same. Thus isometry is simply bending and twisting of a surface without internal crumpling or tearing, in other words without extra tension, compression, or shear. - -An application of the Theorema Egregium is seen when a flat object is somewhat folded or bent along a line, creating rigidity in the perpendicular direction. This is of practical use in construction, as well as in a common pizza-eating strategy: A flat slice of pizza can be seen as a surface with constant Gaussian curvature 0. Gently bending a slice must then roughly maintain this curvature (assuming the bend is roughly a local isometry). If one bends a slice horizontally along a radius, non-zero principal curvatures are created along the bend, dictating that the other principal curvature at these points must be zero. This creates rigidity in the direction perpendicular to the fold, an attribute desirable for eating pizza, as it holds its shape long enough to be consumed without a mess. This same principle is used for strengthening in corrugated materials, most familiarly corrugated fiberboard and corrugated galvanised iron, and in some forms of potato chips. diff --git a/wiki/wikipedia/3080.txt b/wiki/wikipedia/3080.txt deleted file mode 100644 index 785754e2d71297648ce1abd4f8f68d9ccbe8f1e3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3080.txt +++ /dev/null @@ -1,34 +0,0 @@ -In set theory, a universal set is a set which contains all objects, including itself. In set theory as usually formulated, the conception of a universal set leads to Russell's paradox and is consequently not allowed. However, some non-standard variants of set theory include a universal set. - -There is no standard notation for the universal set of a given set theory. Common symbols include V, U, ξ and S. - -Many set theories do not allow for the existence of a universal set. For example, it is directly contradicted by the axioms such as the axiom of regularity and its existence would imply inconsistencies. The standard Zermelo–Fraenkel set theory is instead based on the cumulative hierarchy. - -Russell's paradox prevents the existence of a universal set in Zermelo–Fraenkel set theory and other set theories that include Zermelo's axiom of comprehension. - -This axiom states that, for any formula $\varphi(x)$ and any set A, there exists a set -$$ -\{x \in A \mid \varphi(x)\} -$$ - -that contains exactly those elements x of A that satisfy $\varphi$. - -As a consequence, for every set $A$ we can find a set that it does not contain, hence there is no set of all sets. 
This indeed holds even with predicative comprehension and over Intuitionistic logic. - -A second difficulty with the idea of a universal set concerns the power set of the set of all sets. Because this power set is a set of sets, it would necessarily be a subset of the set of all sets, provided that both exist. However, this conflicts with Cantor's theorem that the power set of any set (whether infinite or not) always has strictly higher cardinality than the set itself. - -The difficulties associated with a universal set can be avoided either by using a variant of set theory in which the axiom of comprehension is restricted in some way, or by using a universal object that is not considered to be a set. - -There are set theories known to be consistent (if the usual set theory is consistent) in which the universal set V does exist (and $V \in V$ is true). In these theories, Zermelo's axiom of comprehension does not hold in general, and the axiom of comprehension of naive set theory is restricted in a different way. A set theory containing a universal set is necessarily a non-well-founded set theory. - -The most widely studied set theory with a universal set is Willard Van Orman Quine's New Foundations. Alonzo Church and Arnold Oberschelp also published work on such set theories. Church speculated that his theory might be extended in a manner consistent with Quine's, - - but this is not possible for Oberschelp's, since in it the singleton function is provably a set, which leads immediately to paradox in New Foundations. - -Another example is positive set theory, where the axiom of comprehension is restricted to hold only for the positive formulas (formulas that do not contain negations). Such set theories are motivated by notions of closure in topology. - -The idea of a universal set seems intuitively desirable in the Zermelo–Fraenkel set theory, particularly because most versions of this theory do allow the use of quantifiers over all sets (see universal quantifier). One way of allowing an object that behaves similarly to a universal set, without creating paradoxes, is to describe V and similar large collections as proper classes rather than as sets. One difference between a universal set and a universal class is that the universal class does not contain itself, because proper classes cannot be elements of other classes. Russell's paradox does not apply in these theories because the axiom of comprehension operates on sets, not on classes. - -The category of sets can also be considered to be a universal object that is, again, not itself a set. It has all sets as elements, and also includes arrows for all functions from one set to another. - -Again, it does not contain itself, because it is not itself a set. diff --git a/wiki/wikipedia/3081.txt b/wiki/wikipedia/3081.txt deleted file mode 100644 index 49dbcb4c9cd201040cd75d00abcdfca318b06cb0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3081.txt +++ /dev/null @@ -1,57 +0,0 @@ -In mathematics, Tychonoff's theorem states that the product of any collection of compact topological spaces is compact with respect to the product topology. The theorem is named after Andrey Nikolayevich Tikhonov (whose surname sometimes is transcribed Tychonoff), who proved it first in 1930 for powers of the closed unit interval and in 1935 stated the full theorem along with the remark that its proof was the same as for the special case. The earliest known published proof is contained in a 1937 paper of Eduard Čech. 
- -Tychonoff's theorem is often considered perhaps the single most important result in general topology (along with Urysohn's lemma). The theorem is also valid for topological spaces based on fuzzy sets. - -The theorem depends crucially upon the precise definitions of compactness and of the product topology; in fact, Tychonoff's 1935 paper defines the product topology for the first time. Conversely, part of its importance is to give confidence that these particular definitions are the most useful (i.e. most well-behaved) ones. - -Indeed, the Heine–Borel definition of compactness (that every covering of a space by open sets admits a finite subcovering) is relatively recent. More popular in the 19th and early 20th centuries was the Bolzano–Weierstrass criterion that every sequence admits a convergent subsequence, now called sequential compactness. These conditions are equivalent for metrizable spaces, but neither one implies the other in the class of all topological spaces. - -It is almost trivial to prove that the product of two sequentially compact spaces is sequentially compact: one passes to a subsequence for the first component and then a subsubsequence for the second component. An only slightly more elaborate "diagonalization" argument establishes the sequential compactness of a countable product of sequentially compact spaces. However, the product of continuum many copies of the closed unit interval (with its usual topology) fails to be sequentially compact with respect to the product topology, even though it is compact by Tychonoff's theorem. - -This is a critical failure: if X is a completely regular Hausdorff space, there is a natural embedding from X into $[0,1]^{C(X,[0,1])}$, where $C(X,[0,1])$ is the set of continuous maps from X to [0,1]. The compactness of $[0,1]^{C(X,[0,1])}$ thus shows that every completely regular Hausdorff space embeds in a compact Hausdorff space (or, can be "compactified".) This construction is the Stone–Čech compactification. Conversely, all subspaces of compact Hausdorff spaces are completely regular Hausdorff, so this characterizes the completely regular Hausdorff spaces as those that can be compactified. Such spaces are now called Tychonoff spaces. - -Tychonoff's theorem has been used to prove many other mathematical theorems. These include theorems about compactness of certain spaces such as the Banach–Alaoglu theorem on the weak-* compactness of the unit ball of the dual space of a normed vector space, and the Arzelà–Ascoli theorem characterizing the sequences of functions in which every subsequence has a uniformly convergent subsequence. They also include statements less obviously related to compactness, such as the De Bruijn–Erdős theorem stating that every minimal k-chromatic graph is finite, and the Curtis–Hedlund–Lyndon theorem providing a topological characterization of cellular automata. - -As a rule of thumb, any sort of construction that takes as input a fairly general object (often of an algebraic, or topological-algebraic nature) and outputs a compact space is likely to use Tychonoff: e.g., the Gelfand space of maximal ideals of a commutative C*-algebra, the Stone space of maximal ideals of a Boolean algebra, and the Berkovich spectrum of a commutative Banach ring. - -1) Tychonoff's 1930 proof used the concept of a complete accumulation point. - -2) The theorem is a quick corollary of the Alexander subbase theorem.
- -More modern proofs have been motivated by the following considerations: the approach to compactness via convergence of subsequences leads to a simple and transparent proof in the case of countable index sets. However, the approach to convergence in a topological space using sequences is sufficient when the space satisfies the first axiom of countability (as metrizable spaces do), but generally not otherwise, and the product of uncountably many metrizable spaces, each with at least two points, fails to be first countable. So it is natural to hope that a suitable notion of convergence in arbitrary spaces will lead to a compactness criterion that generalizes sequential compactness in metrizable spaces and can be applied just as easily to deduce the compactness of products. This has turned out to be the case. - -3) The theory of convergence via filters, due to Henri Cartan and developed by Bourbaki in 1937, leads to the following criterion: assuming the ultrafilter lemma, a space is compact if and only if each ultrafilter on the space converges. With this in hand, the proof becomes easy: the (filter generated by the) image of an ultrafilter on the product space under any projection map is an ultrafilter on the factor space, which therefore converges, to at least one xi. One then shows that the original ultrafilter converges to x = (xi). In his textbook, Munkres gives a reworking of the Cartan–Bourbaki proof that does not explicitly use any filter-theoretic language or preliminaries. - -4) Similarly, the Moore–Smith theory of convergence via nets, as supplemented by Kelley's notion of a universal net, leads to the criterion that a space is compact if and only if each universal net on the space converges. This criterion leads to a proof (Kelley, 1950) of Tychonoff's theorem, which is, word for word, identical to the Cartan/Bourbaki proof using filters, save for the repeated substitution of "universal net" for "ultrafilter base". - -5) A proof using nets but not universal nets was given in 1992 by Paul Chernoff. - -All of the above proofs use the axiom of choice (AC) in some way. For instance, the third proof uses that every filter is contained in an ultrafilter (i.e., a maximal filter), and this is seen by invoking Zorn's lemma. Zorn's lemma is also used to prove Kelley's theorem, that every net has a universal subnet. In fact these uses of AC are essential: in 1950 Kelley proved that Tychonoff's theorem implies the axiom of choice in ZF. Note that one formulation of AC is that the Cartesian product of a family of nonempty sets is nonempty; but since the empty set is most certainly compact, the proof cannot proceed along such straightforward lines. Thus Tychonoff's theorem joins several other basic theorems (e.g. that every vector space has a basis) in being equivalent to AC. - -On the other hand, the statement that every filter is contained in an ultrafilter does not imply AC. Indeed, it is not hard to see that it is equivalent to the Boolean prime ideal theorem (BPI), a well-known intermediate point between the axioms of Zermelo–Fraenkel set theory (ZF) and the ZF theory augmented by the axiom of choice (ZFC). A first glance at the third proof of Tychonoff may suggest that the proof uses no more than (BPI), in contradiction to the above. However, the spaces in which every convergent filter has a unique limit are precisely the Hausdorff spaces.
In general we must select, for each element of the index set, an element of the nonempty set of limits of the projected ultrafilter base, and of course this uses AC. However, it also shows that the compactness of the product of compact Hausdorff spaces can be proved using (BPI), and in fact the converse also holds. Studying the strength of Tychonoff's theorem for various restricted classes of spaces is an active area in set-theoretic topology. - -The analogue of Tychonoff's theorem in pointless topology does not require any form of the axiom of choice. - -To prove that Tychonoff's theorem in its general version implies the axiom of choice, we establish that every infinite cartesian product of non-empty sets is nonempty. The trickiest part of the proof is introducing the right topology. The right topology, as it turns out, is the cofinite topology with a small twist. It turns out that every set given this topology automatically becomes a compact space. Once we have this fact, Tychonoff's theorem can be applied; we then use the finite intersection property (FIP) definition of compactness. The proof itself (due to J. L. Kelley) follows: - -Let {Ai} be an indexed family of nonempty sets, for i ranging in I (where I is an arbitrary indexing set). We wish to show that the cartesian product of these sets is nonempty. Now, for each i, take Xi to be Ai with the index i itself tacked on (renaming the indices using the disjoint union if necessary, we may assume that i is not a member of Ai, so simply take Xi = Ai ∪ {i}). - -Now define the cartesian product - -X = \prod_{i \in I} X_i - -along with the natural projection maps πi which take a member of X to its ith term. - -We give each Xi the topology whose open sets are the cofinite subsets of Xi, plus the empty set (the cofinite topology) and the singleton {i}. - -This makes Xi compact, and by Tychonoff's theorem, X is also compact (in the product topology). The projection maps are continuous; all the Ais are closed, being complements of the singleton open set {i} in Xi. So the inverse images πi−1(Ai) are closed subsets of X. We note that - -\prod_{i \in I} A_i = \bigcap_{i \in I} \pi_i^{-1}(A_i) - -and prove that these inverse images are nonempty and have the FIP. Let i1, ..., iN be a finite collection of indices in I. Then the finite product Ai1 × ... × AiN - -is non-empty (only finitely many choices here, so AC is not needed); it merely consists of N-tuples. Let a = (a1, ..., aN) be such an N-tuple. We extend a to the whole index set: take a to the function f defined by f(j) = ak if j = ik, and f(j) = j otherwise. This step is where the addition of the extra point to each space is crucial, for it allows us to define f for everything outside of the N-tuple in a precise way without choices (we can already choose, by construction, j from Xj ). πik(f) = ak is obviously an element of each Aik so that f is in each inverse image; thus we have - -\bigcap_{k = 1}^N \pi_{i_k}^{-1}(A_{i_k}) \neq \varnothing. - -By the FIP definition of compactness, the entire intersection over I must be nonempty, and the proof is complete. diff --git a/wiki/wikipedia/3082.txt b/wiki/wikipedia/3082.txt deleted file mode 100644 index 2d2999bc80a20ef74037944a639999f38d1abe05..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3082.txt +++ /dev/null @@ -1,28 +0,0 @@ -Oppermann's conjecture is an unsolved problem in mathematics on the distribution of prime numbers. 
It is closely related to but stronger than Legendre's conjecture, Andrica's conjecture, and Brocard's conjecture. It is named after Danish mathematician Ludvig Oppermann, who announced it in an unpublished lecture in March 1877. - -The conjecture states that, for every integer x > 1, there is at least one prime number between - -x(x - 1) and x2, - -and at least another prime between - -x2 and x(x + 1). - -It can also be phrased equivalently as stating that the prime-counting function must take unequal values at the endpoints of each range. That is: - -π(x2 - x) < π(x2) < π(x2 + x) for x > 1 - -with π(x) being the number of prime numbers less than or equal to x. - -The end points of these two ranges are a square between two pronic numbers, with each of the pronic numbers being twice a triangular number; the sum of this pair of triangular numbers is the square. - -If the conjecture is true, then the gap size would be on the order of -$$ - g_n < \sqrt{p_n} -$$. - -This also means there would be at least two primes between x2 and (x + 1)2 (one in the range from x2 to x(x + 1) and the second in the range from x(x + 1) to (x + 1)2), strengthening Legendre's conjecture that there is at least one prime in this range. Because there is at least one non-prime between any two odd primes, it would also imply Brocard's conjecture that there are at least four primes between the squares of consecutive odd primes. Additionally, it would imply that the largest possible gaps between two consecutive prime numbers could be at most proportional to twice the square root of the numbers, as Andrica's conjecture states. - -The conjecture also implies that at least one prime can be found in every quarter revolution of the Ulam spiral. - -Even for small values of x, the numbers of primes in the ranges given by the conjecture are much larger than 1, providing strong evidence that the conjecture is true. However, Oppermann's conjecture has not been proved. diff --git a/wiki/wikipedia/3083.txt b/wiki/wikipedia/3083.txt deleted file mode 100644 index 8b742b5d1254f9146141e4f53c7b884ee221bcfd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3083.txt +++ /dev/null @@ -1,25 +0,0 @@ -In analytic geometry, spatial transformations in the 3-dimensional Euclidean space $\R^3$ are distinguished into active or alibi transformations, and passive or alias transformations. An active transformation is a transformation which actually changes the physical position (alibi, elsewhere) of a point, or rigid body, and which can be defined in the absence of a coordinate system; whereas a passive transformation is merely a change in the coordinate system in which the object is described (alias, other name) (change of coordinate map, or change of basis). By transformation, mathematicians usually mean active transformations, while physicists and engineers could mean either. Both types of transformation can be represented by a combination of a translation and a linear transformation. - -Put differently, a passive transformation refers to the description of the same object in two different coordinate systems. - -On the other hand, an active transformation is a transformation of one or more objects with respect to the same coordinate system. For instance, active transformations are useful to describe successive positions of a rigid body.
On the other hand, passive transformations may be useful in human motion analysis to observe the motion of the tibia relative to the femur, that is, its motion relative to a (local) coordinate system which moves together with the femur, rather than a (global) coordinate system which is fixed to the floor. Viewed passively, an invertible transformation $T$ gives a new coordinate system XYZ with basis vectors: -$$ -\mathbf{e}_X = T^{-1}(1,0,0),\ \mathbf{e}_Y = T^{-1}(0,1,0),\ \mathbf{e}_Z = T^{-1}(0,0,1) -$$ - -The new coordinates $(v_X,v_Y,v_Z)$ of $\mathbf{v}$ with respect to the new coordinate system XYZ are given by: -$$ -\mathbf{v} = (v_x,v_y,v_z) = v_Xe_X+v_Ye_Y+v_Ze_Z = T^{-1}(v_X,v_Y,v_Z) -$$. - -From this equation one sees that the new coordinates are given by -$$ -(v_X,v_Y,v_Z) = T(v_x,v_y,v_z) -$$. - -As a passive transformation, $T$ transforms the old coordinates into the new ones. - -Note the equivalence between the two kinds of transformations: the coordinates of the new point in the active transformation and the new coordinates of the point in the passive transformation are the same, namely -$$ -(v_X,v_Y,v_Z)=(v'_x,v'_y,v'_z) -$$. diff --git a/wiki/wikipedia/3084.txt b/wiki/wikipedia/3084.txt deleted file mode 100644 index 0f75692848698fb0d33c288c58f35a7713819ca8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3084.txt +++ /dev/null @@ -1,13 +0,0 @@ -In mathematics, the Newton inequalities are named after Isaac Newton. Suppose a1, a2, ..., an are real numbers and let $e_k$ denote the kth elementary symmetric polynomial in a1, a2, ..., an. Then the elementary symmetric means, given by -$$ -S_k = \frac{e_k}{\binom{n}{k}}, -$$ - -satisfy the inequality -$$ -S_{k-1}S_{k+1} \le S_k^2. -$$ - -If all the numbers ai are non-zero, then equality holds if and only if all the numbers ai are equal. - -It can be seen that S1 is the arithmetic mean, and Sn is the n-th power of the geometric mean. diff --git a/wiki/wikipedia/3085.txt b/wiki/wikipedia/3085.txt deleted file mode 100644 index 1f9b65afa6ab0a113402f7775f8d4756915e8d2e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3085.txt +++ /dev/null @@ -1 +0,0 @@ -#Redirect Real-root isolation#Vincent's and related theorems diff --git a/wiki/wikipedia/3086.txt b/wiki/wikipedia/3086.txt deleted file mode 100644 index 662eafe2f31397af40a796b5928e18feb3edcde0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3086.txt +++ /dev/null @@ -1,38 +0,0 @@ -In mathematics, Fenchel's duality theorem is a result in the theory of convex functions named after Werner Fenchel. - -Let ƒ be a proper convex function on Rn and let g be a proper concave function on Rn. Then, if regularity conditions are satisfied, -$$ -\inf_x ( f(x)-g(x) ) = \sup_p ( g_*(p)-f^*(p)). -$$ - -where ƒ * is the convex conjugate of ƒ (also referred to as the Fenchel-Legendre transform) and g * is the concave conjugate of g. That is, -$$ -f^{*} \left( x^{*} \right) := \sup \left \{ \left. \left\langle x^{*} , x \right\rangle - f \left( x \right) \right| x \in \mathbb{R}^n \right\} -$$ -$$ -g_{*} \left( x^{*} \right) := \inf \left \{ \left. \left\langle x^{*} , x \right\rangle - g \left( x \right) \right| x \in \mathbb{R}^n \right\} -$$ - -Let X and Y be Banach spaces, $f: X \to \mathbb{R} \cup \{+\infty\}$ and $g: Y \to \mathbb{R} \cup \{+\infty\}$ be convex functions and $A: X \to Y$ be a bounded linear map. Then the Fenchel problems: -$$ -p^* = \inf_{x \in X} \{f(x) + g(Ax)\} -$$ -$$ -d^* = \sup_{y^* \in Y^*} \{-f^*(A^*y^*) - g^*(-y^*)\} -$$ - -satisfy weak duality, i.e. $p^* \geq d^*$.
Note that $f^*,g^*$ are the convex conjugates of f,g respectively, and $A^*$ is the adjoint operator. The perturbation function for this dual problem is given by $F(x,y) = f(x) + g(Ax - y)$. - -Suppose that f,g, and A satisfy either - -# f and g are lower semi-continuous and $0 \in \operatorname{core}(\operatorname{dom}g - A \operatorname{dom}f)$ where $\operatorname{core}$ is the algebraic interior and $\operatorname{dom}h$, where h is some function, is the set $\{z: h(z) < +\infty\}$, or - -# $A \operatorname{dom}f \cap \operatorname{cont}g \neq \emptyset$ where $\operatorname{cont}g$ is the set of points where $g$ is continuous. - -Then strong duality holds, i.e. $p^* = d^*$. If $d^* \in \mathbb{R}$, then the supremum is attained. - -The minimization problem on the left side of the equation can be pictured as follows. One seeks to vary x such that the vertical distance between the convex and concave curves at x is as small as possible. The position of the vertical line marking this distance is the (approximate) optimum. - -The maximization problem on the right hand side of the above equation can be pictured similarly. Tangents are drawn to each of the two curves such that both tangents have the same slope p. The problem is to adjust p in such a way that the two tangents are as far away from each other as possible (more precisely, such that the points where they intersect the y-axis are as far from each other as possible). Imagine the two tangents as metal bars with vertical springs between them that push them apart and against the two parabolas that are fixed in place. - -Fenchel's theorem states that the two problems have the same solution. The points having the minimum vertical separation are also the tangency points for the maximally separated parallel tangents. diff --git a/wiki/wikipedia/3087.txt b/wiki/wikipedia/3087.txt deleted file mode 100644 index 6cd18157542e8c6aa614fc114b4777d7703b4e43..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3087.txt +++ /dev/null @@ -1 +0,0 @@ -In differential geometry, Fenchel's theorem states that the average curvature of any closed convex curve in the Euclidean plane equals $ 2 \pi/L$, where $L$ is the length of the curve. It is named after Werner Fenchel, who published it in 1929. More generally, for an arbitrary closed space curve the average curvature is $\ge 2 \pi/L$, with equality holding only for convex plane curves. diff --git a/wiki/wikipedia/3088.txt b/wiki/wikipedia/3088.txt deleted file mode 100644 index 43d0f5802b06f9767531c303581b75fc5b602d94..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3088.txt +++ /dev/null @@ -1 +0,0 @@ -#redirect Crouzeix's conjecture diff --git a/wiki/wikipedia/3089.txt b/wiki/wikipedia/3089.txt deleted file mode 100644 index ddc878c1473f17771d1ab7c3be76e4e61cd20312..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3089.txt +++ /dev/null @@ -1,33 +0,0 @@ -In the theory of formal languages, Ogden's lemma (named after William F. Ogden) is a generalization of the pumping lemma for context-free languages.
- -Ogden's lemma states that if a language L is context-free, then there exists some number $p\geq 1$ (where p may or may not be a pumping length) such that for any string s of length at least p in L and every way of "marking" p or more of the positions in s, s can be written as -$$ -s = uvwxy -$$ - -with strings u, v, w, x, and y, such that - -#vx has at least one marked position, - -#vwx has at most p marked positions, and - -#$uv^n wx^n y \in L$ for all $n \geq 0$. - -In the special case where every position is marked, Ogden's lemma is equivalent to the pumping lemma for context-free languages. Ogden's lemma can be used to show that certain languages are not context-free in cases where the pumping lemma is not sufficient. An example is the language $\{a^i b^j c^k d^l : i = 0 \text{ or } j = k = l\}$. - -Ogden's lemma can also be used to prove the inherent ambiguity of some languages. - -Bader and Moura have generalized the lemma to allow marking some positions that are not to be included in vx. The dependence on the parameters was later improved by Dömösi and Kudlek. If we denote the number of such excluded positions by e, then the number d of distinguished positions of which we want to include some in vx must satisfy $d\geq p^{(e+1)}$, where p is some constant that depends only on the language. The statement becomes that every s can be written as -$$ -s = uvwxy -$$ - -with strings u, v, w, x, and y, such that - -#vx has at least one distinguished position and no excluded position, - -#vwx has at most $p^{(e+1)}$ distinguished positions, and - -#$uv^n wx^n y \in L$ for all $n \geq 0$. - -Moreover, either each of u,v,w has a distinguished position, or each of $w,x,y$ has a distinguished position. diff --git a/wiki/wikipedia/309.txt b/wiki/wikipedia/309.txt deleted file mode 100644 index 845e17b4a9f9e43a93540fee7380554ec45c7621..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/309.txt +++ /dev/null @@ -1,19 +0,0 @@ -SUDO-Q is a British game show that was broadcast between 5 December 2005 and 23 March 2007. It was hosted by Eamonn Holmes. The format was based on a mix of the number puzzle Sudoku and general knowledge questions. - -Three teams of two (originally three) compete. There are four rounds in the game. - -The first round is based on general knowledge questions and a 4×4 Sudoku grid. Teams are shown the grid, with four numbers already placed. Then a square gets highlighted and the teams lock in what number goes in that square. The fastest team to lock in the right number gets one point and the right to answer a general knowledge question for another point, and the right to verbally place a number in another highlighted square for one more point, for a maximum of three points. The round ends when the entire grid is completed or if there's one square left (not enough to play one more subround). - -In the first series, the team winning the first square got two questions and potentially two squares, thus a possible 5 points. - -The second round is focused on eliminating one of the three teams. Another 4×4 grid is revealed with four numbers placed in advance, and once again teams lock in what number goes in the highlighted square.
Again, the fastest team to lock in the right number gets one point and the right to answer a general knowledge question for another point; only this time a correct answer also gives the team the right to nominate members of the opposing team in a sudden death showdown in which the fastest player with the correct number gets one point and stays in the game, with the other player eliminated from the game. The first team to lose both players is eliminated from the show, regardless of their score to that point. - -In the first series, when there were three teams of three contestants, winning the square allowed the team to nominate a member of another team, then to attempt a general knowledge question to eliminate that player; an incorrect answer under these conditions cost the answering team a player, selected by the player they had nominated. Again, the first team to be completely eliminated is out of the game. - -The third round is Speed Sudoku, where two players (one on each team) have 45 seconds each to solve squares on a 6×6 grid, while alternating turns. No general knowledge questions are played this round. Correct numbers are worth 2 points, but incorrect numbers deduct 1 point. When one player runs out of time, the other player is given the rest of their time to finish as much of the grid as possible. The round ends when time expires for both players or when the grid is completed. The team with the most points at the end of this round wins the game. - -If a team has one player left coming out of round two, the person left standing can either buy back the eliminated player with 10 seconds of his/her Speed Sudoku time deducted (making it 35 seconds), or reject his/her partner for the rest of the show and keep the full 45 seconds. In the first series, each team had 60 seconds, and all teams still in the game were completely reunited after the end of round 2. No points were deducted for incorrect solves. - -The final round gives the winning team three minutes to solve another 6×6 grid for money. Host Holmes asks a series of questions in which correct answers earn an attempt to place a number in a highlighted square. After an incorrect answer, play stays on that square; after a number is placed (whether correct or incorrect), play moves to another square. If both partners are in the game going into the final round, each one takes 90 seconds of the time; otherwise, the solo player takes the entire three minutes. There are a total of 18 squares to be solved. Each correctly placed number is worth £50, with £50 more for a completed row, column, and/or 3×2 region, and a £200 bonus for completing the entire grid, thus a grand prize of £2,000. - -In the first series, each of the three team members would take 60 seconds. The individual squares were worth £25 each, and the completion bonus was £150, for a possible total of £1,500. Also in the first series, the best-performing teams in the final, determined by the number of squares solved, were given the opportunity to participate in a grand final. From series 2 onward, the winning team stays on to play during the next show, but if a team wins 5 shows in a row they are declared undefeated champions (winning a £500 bonus) and three new teams play the next day.
diff --git a/wiki/wikipedia/3090.txt b/wiki/wikipedia/3090.txt deleted file mode 100644 index 02916f077ee2e40dbd839e7bd26a412284920202..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3090.txt +++ /dev/null @@ -1,108 +0,0 @@ -The Millennium Prize Problems are seven unsolved problems in mathematics that were stated by the Clay Mathematics Institute on May 24, 2000. The problems are the Birch and Swinnerton-Dyer conjecture, Hodge conjecture, Navier–Stokes existence and smoothness, P versus NP problem, Poincaré conjecture, Riemann hypothesis, and Yang–Mills existence and mass gap. A correct solution to any of the problems results in a US$1 million prize being awarded by the institute to the discoverer(s). - -This has echoes of a set of twenty-three problems set by the mathematician David Hilbert in 1900, which were influential in driving the progress of mathematics in the twentieth century. - -To date, the only Millennium Prize problem to have been solved is the Poincaré conjecture, settled in 2003 by the Russian mathematician Grigori Perelman. He declined the prize money. - -In dimension 2, a sphere is characterized by the fact that it is the only closed and simply-connected surface. The Poincaré conjecture states that this is also true in dimension 3. It is central to the more general problem of classifying all 3-manifolds. The precise formulation of the conjecture states: - -Every simply connected, closed 3-manifold is homeomorphic to the 3-sphere. - -A proof of this conjecture was given by Grigori Perelman in 2003. Perelman's solution was based on Richard Hamilton's theory of Ricci flow. However, this solution included major original advancements by Perelman and made use of results on spaces of metrics due to Cheeger, Gromov, and Perelman himself. Perelman also proved William Thurston's geometrization conjecture, a special case of which is the Poincaré conjecture, and without which the Poincaré conjecture proof would not have been possible; its review was completed in August 2006. Perelman was officially awarded the Millennium Prize on March 18, 2010, but he also declined the award and the associated prize money from the Clay Mathematics Institute as he had done with the Fields Medal. The Interfax news agency quoted Perelman as saying he believed the prize was unfair, as he considered his contribution to solving the Poincaré conjecture to be no greater than Hamilton's. - -The Birch and Swinnerton-Dyer conjecture deals with certain types of equations: those defining elliptic curves over the rational numbers. The conjecture is that there is a simple way to tell whether such equations have a finite or infinite number of rational solutions. Hilbert's tenth problem dealt with a more general type of equation, and in that case it was proven that there is no way to decide whether a given equation even has any solutions. - -The official statement of the problem was given by Andrew Wiles. - -The Hodge conjecture is that for projective algebraic varieties, Hodge cycles are rational linear combinations of algebraic cycles. For a non-singular complex projective manifold X, the relevant cohomological object is -$$ -\operatorname{Hdg}^k(X) = H^{2k}(X, \Q) \cap H^{k,k}(X). -$$ - -We call this the group of Hodge classes of degree 2k on X. - -The modern statement of the Hodge conjecture is: - -Let X be a non-singular complex projective manifold. Then every Hodge class on X is a linear combination with rational coefficients of the cohomology classes of complex subvarieties of X. - -The official statement of the problem was given by Pierre Deligne.
-$$
-\overbrace{\underbrace{\frac{\partial \mathbf{u}}{\partial t}}_{\text{Variation}} + \underbrace{(\mathbf{u} \cdot \nabla) \mathbf{u}}_{\text{Convection}}}^{\text{Inertia (per volume)}} \overbrace{- \underbrace{\nu \nabla^2 \mathbf{u}}_{\text{Diffusion}} = \underbrace{-\nabla w}_{\begin{smallmatrix}\text{Internal}\\\text{source}\end{smallmatrix}}}^{\text{Divergence of stress}} + \underbrace{\mathbf{g}}_{\begin{smallmatrix}\text{External}\\\text{source}\end{smallmatrix}}.
-$$ - -The Navier–Stokes equations describe the motion of fluids, and are one of the pillars of fluid mechanics. However, theoretical understanding of their solutions is incomplete, despite their importance in science and engineering. For the three-dimensional system of equations, and given some initial conditions, mathematicians have not yet proven that smooth solutions always exist. This is called the Navier–Stokes existence and smoothness problem. - -The problem, restricted to the case of an incompressible fluid, is to prove either that smooth, globally defined solutions exist that meet certain conditions, or that they do not always exist and the equations break down. The official statement of the problem was given by Charles Fefferman. - -The question is whether or not, for all problems for which an algorithm can verify a given solution quickly (that is, in polynomial time), an algorithm can also find that solution quickly. Since the former describes the class of problems termed NP, while the latter describes P, the question is equivalent to asking whether all problems in NP are also in P. This is generally considered one of the most important open questions in mathematics and theoretical computer science, as it has far-reaching consequences for other problems in mathematics, as well as for biology, philosophy and cryptography (see P versus NP problem proof consequences). A common example of an NP problem not known to be in P is the Boolean satisfiability problem. - -Most mathematicians and computer scientists expect that P ≠ NP; however, it remains unproven. - -The official statement of the problem was given by Stephen Cook. -$$ -\zeta(s) = \sum_{n=1}^\infty n^{-s} = \frac{1}{1^s} + \frac{1}{2^s} + \frac{1}{3^s} + \cdots -$$ - -The Riemann zeta function ζ(s) is a function whose argument s may be any complex number other than 1, and whose values are also complex. Its analytical continuation has zeros at the negative even integers; that is, ζ(s) = 0 when s is one of −2, −4, −6, .... These are called its trivial zeros. However, the negative even integers are not the only values for which the zeta function is zero. The other ones are called nontrivial zeros. The Riemann hypothesis is concerned with the locations of these nontrivial zeros, and states that: - -The real part of every nontrivial zero of the Riemann zeta function is 1/2. - -A proof or disproof of this would have far-reaching implications in number theory, especially for the distribution of prime numbers. This was Hilbert's eighth problem, and is still considered an important open problem a century later. - -The official statement of the problem was given by Enrico Bombieri.
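These statements about the zeros of ζ can be explored numerically. The following sketch is illustrative only; it assumes the third-party Python library mpmath is installed, whose zetazero function computes nontrivial zeros on the critical line. It demonstrates, but of course proves, nothing:

```
# Illustrative numerical check of the zeta function's zeros (assumes mpmath).
from mpmath import mp, zeta, zetazero

mp.dps = 25  # decimal digits of working precision

# Trivial zeros: zeta vanishes at the negative even integers.
for s in (-2, -4, -6):
    print(s, zeta(s))  # each value is 0 to working precision

# The first few nontrivial zeros, as computed by mpmath, all have
# real part 1/2, consistent with the Riemann hypothesis.
for n in range(1, 4):
    rho = zetazero(n)           # n-th nontrivial zero
    print(rho, abs(zeta(rho)))  # |zeta(rho)| is ~ 0
```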
- -In quantum field theory, the mass gap is the difference in energy between the vacuum and the next lowest energy state. The energy of the vacuum is zero by definition, and assuming that all energy states can be thought of as particles in plane-waves, the mass gap is the mass of the lightest particle. - -For a given real field $\phi(x)$, we can say that the theory has a mass gap if the two-point function has the property -$$ -\langle\phi(0,t)\phi(0,0)\rangle\sim \sum_nA_n\exp\left(-\Delta_nt\right) -$$ - -with $\Delta_0>0$ being the lowest energy value in the spectrum of the Hamiltonian and thus the mass gap. This quantity, easy to generalize to other fields, is what is generally measured in lattice computations. - -Quantum Yang–Mills theory underpins most current theoretical descriptions of elementary particle physics. The theory is a generalization of the Maxwell theory of electromagnetism where the chromo-electromagnetic field itself carries charge. As a classical field theory it has solutions which travel at the speed of light so that its quantum version should describe massless particles (gluons). However, the postulated phenomenon of color confinement permits only bound states of gluons, forming massive particles. This is the mass gap. A related aspect is asymptotic freedom, which makes it conceivable that quantum Yang-Mills theory exists without restriction to low energy scales. The problem is to establish rigorously the existence of the quantum Yang–Mills theory and a mass gap. - -Prove that for any compact simple gauge group G, a non-trivial quantum Yang–Mills theory exists on $\mathbb{R}^4$ and has a mass gap Δ > 0. Existence includes establishing axiomatic properties at least as strong as those cited in Streater and Wightman, and Osterwalder and Schrader. - -The official statement of the problem was given by Arthur Jaffe and Edward Witten. diff --git a/wiki/wikipedia/3091.txt b/wiki/wikipedia/3091.txt deleted file mode 100644 index 962732e26ec9ea8b7d6b1ea9fdac6c91585bfb9b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3091.txt +++ /dev/null @@ -1,46 +0,0 @@ -In mathematics, Fredholm's theorems are a set of celebrated results of Ivar Fredholm in the Fredholm theory of integral equations. There are several closely related theorems, which may be stated in terms of integral equations, in terms of linear algebra, or in terms of the Fredholm operator on Banach spaces. - -The Fredholm alternative is one of the Fredholm theorems. - -Fredholm's theorem in linear algebra is as follows: if M is a matrix, then the orthogonal complement of the row space of M is the null space of M: -$$ -(\operatorname{row } M)^\bot = \ker M. -$$ - -Similarly, the orthogonal complement of the column space of M is the null space of the adjoint: -$$ -(\operatorname{col } M)^\bot = \ker M^*. -$$ - -Fredholm's theorem for integral equations is expressed as follows. Let $K(x,y)$ be an integral kernel, and consider the homogeneous equation -$$ -\int_a^b K(x,y) \phi(y) dy = \lambda \phi(x) -$$ - -and its complex adjoint -$$ -\int_a^b \psi(x) \overline{K(x,y)} dx = \overline {\lambda}\psi(y). -$$ - -Here, $\overline{\lambda}$ denotes the complex conjugate of the complex number $\lambda$, and similarly for $\overline{K(x,y)}$.
Then, Fredholm's theorem is that, for any fixed value of $\lambda$, these equations either have only the trivial solution $\psi(x)=\phi(x)=0$ or have the same number of linearly independent solutions $\phi_1(x),\cdots,\phi_n(x)$, $\psi_1(y),\cdots,\psi_n(y)$. - -A sufficient condition for this theorem to hold is for $K(x,y)$ to be square integrable on the rectangle $[a,b]\times[a,b]$ (where a and/or b may be minus or plus infinity). - -Here, the integral is expressed as a one-dimensional integral on the real number line. In Fredholm theory, this result generalizes to integral operators on multi-dimensional spaces, including, for example, Riemannian manifolds. - -One of Fredholm's theorems, closely related to the Fredholm alternative, concerns the existence of solutions to the inhomogeneous Fredholm equation -$$ - \lambda \phi(x)-\int_a^b K(x,y) \phi(y) dy=f(x). -$$ - -Solutions to this equation exist if and only if the function $f(x)$ is orthogonal to the complete set of solutions $\{\psi_n(x)\}$ of the corresponding homogeneous adjoint equation: -$$ -\int_a^b \overline{\psi_n(x)} f(x) dx=0 -$$ - -where $\overline{\psi_n(x)}$ is the complex conjugate of $\psi_n(x)$ and the former is one of the complete set of solutions to -$$ -\lambda\overline{\psi(y)} -\int_a^b \overline{\psi(x)} K(x,y) dx=0. -$$ - -A sufficient condition for this theorem to hold is for $K(x,y)$ to be square integrable on the rectangle $[a,b]\times[a,b]$. diff --git a/wiki/wikipedia/3092.txt b/wiki/wikipedia/3092.txt deleted file mode 100644 index 8980a80835b409b54a924ddf255147121c4ff0ba..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3092.txt +++ /dev/null @@ -1,52 +0,0 @@ -In mathematics, the "happy ending problem" (so named by Paul Erdős because it led to the marriage of George Szekeres and Esther Klein) is the following statement: - -Theorem: any set of five points in the plane in general position has a subset of four points that form the vertices of a convex quadrilateral. - -This was one of the original results that led to the development of Ramsey theory. - -The happy ending theorem can be proven by a simple case analysis: if four or more points are vertices of the convex hull, any four such points can be chosen. If, on the other hand, the convex hull has the form of a triangle with two points inside it, the two inner points and one of the triangle sides can be chosen. See Peterson for an illustrated explanation of this proof, and Morris for a more detailed survey of the problem. - -The Erdős–Szekeres conjecture states precisely a more general relationship between the number of points in a general-position point set and its largest convex polygon, namely that the smallest number of points for which any general position arrangement contains a convex subset of $n$ points is $2^{n-2}+1$. It remains unproven, but less precise bounds are known. - -Erdős proved the following generalisation: - -Theorem: for any positive integer N, any sufficiently large finite set of points in the plane in general position has a subset of N points that form the vertices of a convex polygon. - -The proof appeared in the same paper that proves the Erdős–Szekeres theorem on monotonic subsequences in sequences of numbers. - -Let f(N) denote the minimum M for which any set of M points in general position must contain a convex N-gon. It is known that - -* f(3) = 3, trivially. - -* f(4) = 5. - -* f(5) = 9.
A set of eight points with no convex pentagon exists, demonstrating that f(5) > 8; the more difficult part of the proof is to show that every set of nine points in general position contains the vertices of a convex pentagon. - -* f(6) = 17. - -* The value of f(N) is unknown for all N > 6. By the result of Erdős it is known to be finite. - -On the basis of the known values of f(N) for N = 3, 4 and 5, Erdős and Szekeres conjectured in their original paper that -$$ -f(N) = 1 + 2^{N-2} \quad \mbox{for all } N \geq 3. -$$ - -They proved later, by constructing explicit examples, that -$$ -f(N) \geq 1 + 2^{N-2}, -$$ - -but the best known upper bound when N ≥ 7 is -$$ -f(N)\leq 2^{N + o(N)}. -$$ - -There is also the question of whether any sufficiently large set of points in general position has an "empty" convex quadrilateral, pentagon, etc., that is, one that contains no other input point. The original solution to the happy ending problem can be adapted to show that any five points in general position have an empty convex quadrilateral, and any ten points in general position have an empty convex pentagon. However, there exist arbitrarily large sets of points in general position that contain no empty convex heptagon. - -For a long time the question of the existence of empty hexagons remained open, but Nicolás and Gerken proved that every sufficiently large point set in general position contains a convex empty hexagon. More specifically, Gerken showed that the number of points needed is no more than f(9) for the same function f defined above, while Nicolás showed that the number of points needed is no more than f(25). Valtr supplies a simplification of Gerken's proof that, however, requires more points, f(15) instead of f(9). At least 30 points are needed; there exists a set of 29 points in general position with no empty convex hexagon. - -The problem of finding sets of n points minimizing the number of convex quadrilaterals is equivalent to minimizing the crossing number in a straight-line drawing of a complete graph. The number of quadrilaterals must be proportional to the fourth power of n, but the precise constant is not known. - -It is straightforward to show that, in higher-dimensional Euclidean spaces, sufficiently large sets of points will have a subset of k points that forms the vertices of a convex polytope, for any k greater than the dimension: this follows immediately from the existence of convex k-gons in sufficiently large planar point sets, by projecting the higher-dimensional point set into an arbitrary two-dimensional subspace. However, the number of points necessary to find k points in convex position may be smaller in higher dimensions than it is in the plane, and it is possible to find subsets that are more highly constrained. In particular, in d dimensions, every d + 3 points in general position have a subset of d + 2 points that form the vertices of a cyclic polytope. More generally, for every d and k > d there exists a number m(d, k) such that every set of m(d, k) points in general position has a subset of k points that form the vertices of a neighborly polytope.
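Returning to the planar case, the following Python sketch (not part of the article's sources; a randomized spot check rather than a proof) verifies on random samples that every 5 points in general position contain 4 points in convex position:

```
# Brute-force illustration of the happy ending theorem.
import itertools
import random

def cross(o, a, b):
    """Signed area of triangle o-a-b (orientation test)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_convex_position(pts):
    """True if no point of pts lies strictly inside a triangle formed by
    three of the others, i.e. all points are vertices of the convex hull."""
    for p in pts:
        rest = [q for q in pts if q is not p]
        for tri in itertools.combinations(rest, 3):
            signs = [cross(tri[i], tri[(i + 1) % 3], p) for i in range(3)]
            if all(s > 0 for s in signs) or all(s < 0 for s in signs):
                return False  # p lies strictly inside a triangle
    return True

random.seed(0)
for _ in range(1000):
    pts = [(random.random(), random.random()) for _ in range(5)]
    # random points are in general position with probability 1
    assert any(in_convex_position(list(quad))
               for quad in itertools.combinations(pts, 4))
print("every sampled 5-point set contained a convex quadrilateral")
```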
diff --git a/wiki/wikipedia/3093.txt b/wiki/wikipedia/3093.txt deleted file mode 100644 index fd2a106d648bbb069d5d9fdb49db64c4d696c95e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3093.txt +++ /dev/null @@ -1,7 +0,0 @@ -Timing failure is a failure of a process, or part of a process, in a synchronous distributed system or real-time system to meet limits set on execution time, message delivery, clock drift rate, or clock skew. - -Asynchronous distributed systems cannot be said to have timing failures, as no guarantees are provided for response times. - -Category:Distributed computing problems - -Category:Real-time computing diff --git a/wiki/wikipedia/3094.txt b/wiki/wikipedia/3094.txt deleted file mode 100644 index a1b5b0748a02eec1d4c46617156959fe88b70c01..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3094.txt +++ /dev/null @@ -1,7 +0,0 @@ -Overlapped I/O is a name used for asynchronous I/O in the Windows API. It was introduced as an extension to the API in Windows NT. - -Utilizing overlapped I/O requires passing an OVERLAPPED structure to API functions that normally block, including ReadFile(), WriteFile(), and Winsock's WSASend() and WSARecv(). The requested operation is initiated by a function call which returns immediately, and is completed by the OS in the background. The caller may optionally specify a Win32 event handle to be raised when the operation completes. Alternatively, a program may receive notification of an event via an I/O completion port, which is the preferred method of receiving notification when used in symmetric multiprocessing environments or when handling I/O on a large number of files or sockets. The third and last method for receiving I/O completion notification with overlapped I/O is to use ReadFileEx() and WriteFileEx(), which allow a user APC routine to be supplied; it is fired on the same thread when the operation completes. (A user APC is quite similar to a UNIX signal, with the main difference being that signals use signal numbers from a historically predefined enumeration, whereas a user APC can be any function declared as "void f(void* context)".) The so-called overlapped API presents some differences depending on the Windows version used. - -Asynchronous I/O is particularly useful for sockets and pipes. - -Unix and Linux implement the POSIX asynchronous I/O API (AIO). diff --git a/wiki/wikipedia/3095.txt b/wiki/wikipedia/3095.txt deleted file mode 100644 index 57d8bfe3e1e951f8ec0e1ca9f881ecf9342480a8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3095.txt +++ /dev/null @@ -1,17 +0,0 @@ -In mathematics, the Jordan–Schur theorem, also known as Jordan's theorem on finite linear groups, is a theorem that in its original form is due to Camille Jordan. In that form, it states that there is a function ƒ(n) such that given a finite subgroup G of the group GL(n, C) of invertible n-by-n complex matrices, there is a subgroup H of G with the following properties: - -* H is abelian. - -* H is a normal subgroup of G. - -* The index of H in G satisfies (G : H) ≤ ƒ(n). - -Schur proved a more general result that applies when G is not assumed to be finite, but just periodic. Schur showed that ƒ(n) may be taken to be -$$ -((8n)^{1/2} + 1)^{2n^2} - ((8n)^{1/2} - 1)^{2n^2}. -$$ - -A tighter bound (for n ≥ 3) is due to Speiser, who showed that as long as G is finite, one can take -$$ -f(n) = n!\ 12^{n(\pi(n+1)+1)} -$$ - -where π(n) is the prime-counting function. This was subsequently improved by Hans Frederick Blichfeldt who replaced the 12 with a 6.
Unpublished work on the finite case was also done by Boris Weisfeiler. Subsequently, Michael Collins, using the classification of finite simple groups, showed that in the finite case, one can take ƒ(n) = (n + 1)! when n is at least 71, and gave nearly complete descriptions of the behavior for smaller n. diff --git a/wiki/wikipedia/3096.txt b/wiki/wikipedia/3096.txt deleted file mode 100644 index 107da63db682485e44225c3b7ac43097a8de966c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3096.txt +++ /dev/null @@ -1,18 +0,0 @@ -In predicate logic, universal instantiation (UI; also called universal specification or universal elimination, and sometimes confused with dictum de omni) is a valid rule of inference from a truth about each member of a class of individuals to the truth about a particular individual of that class. It is generally given as a quantification rule for the universal quantifier, but it can also be encoded in an axiom schema. It is one of the basic principles used in quantification theory. - -Example: "All dogs are mammals. Fido is a dog. Therefore Fido is a mammal." - -In symbols, the rule as an axiom schema is -$$ -\forall x A \Rightarrow A\{x \mapsto a\}, -$$ - -for every formula A and every term a, where $A\{x \mapsto a\}$ is the result of substituting a for each free occurrence of x in A (a toy implementation of this substitution is sketched below). $ A\{x \mapsto a\}$ is an instance of $\forall x A.$ - -And as a rule of inference it is - -from ⊢ ∀x A infer ⊢ A{x↦a}. - -Irving Copi noted that universal instantiation "...follows from variants of rules for 'natural deduction', which were devised independently by Gerhard Gentzen and Stanisław Jaśkowski in 1934." - -According to Willard Van Orman Quine, universal instantiation and existential generalization are two aspects of a single principle, for instead of saying that "∀x x = x" implies "Socrates = Socrates", we could as well say that the denial "Socrates ≠ Socrates" implies "∃x x ≠ x". The principle embodied in these two operations is the link between quantifications and the singular statements that are related to them as instances. Yet it is a principle only by courtesy. It holds only in the case where a term names and, furthermore, occurs referentially. diff --git a/wiki/wikipedia/3097.txt b/wiki/wikipedia/3097.txt deleted file mode 100644 index bdf5e8157873e8d39004020827be2a2a77cdf60c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3097.txt +++ /dev/null @@ -1,5 +0,0 @@ -Synkron is an open-source multiplatform utility designed for file synchronization of two or more folders, supporting syncs across computers. It is written in C++ and uses the Qt4 libraries. Synkron is distributed under the terms of the GPL v2. - -Apart from carrying out synchronisations, Synkron provides other features. The user interface of Synkron is divided into several sections: Synchronise, Multisync, SyncView, Scheduler, Restore, Blacklist and Filters. The user can switch between these sections by using the toolbar. Multisync supports synching multiple folders into one folder. - -Synkron is available as a portable app. It can be installed from the software repositories of most major KDE Linux distributions.
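Returning to universal instantiation above: the substitution operation $A\{x \mapsto a\}$ can be sketched in code. The following toy Python representation of formulas as nested tuples is purely illustrative (names are hypothetical, and variable capture, which a real implementation must handle, is ignored):

```
# Toy sketch of the substitution A{x -> a} on formulas written as nested
# tuples, e.g. ('forall', 'x', ('mammal', 'x')). Illustrative only.
def substitute(formula, var, term):
    """Replace the free occurrences of var in formula by term."""
    if isinstance(formula, str):
        return term if formula == var else formula
    if formula[0] == 'forall' and formula[1] == var:
        return formula  # var is bound here, so it has no free occurrences
    if formula[0] == 'forall':
        return ('forall', formula[1], substitute(formula[2], var, term))
    return tuple(substitute(part, var, term) for part in formula)

# Universal instantiation: from ('forall', 'x', ('mammal', 'x'))
# infer its instance for the term 'fido':
print(substitute(('mammal', 'x'), 'x', 'fido'))  # ('mammal', 'fido')
```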
diff --git a/wiki/wikipedia/3098.txt b/wiki/wikipedia/3098.txt deleted file mode 100644 index 9dafa9d4c5e59dbfa34d0e2bef0b840ae62f3458..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3098.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, the Gorenstein–Walter theorem, proved by Daniel Gorenstein and John Walter, states that if a finite group G has a dihedral Sylow 2-subgroup, and O(G) is the maximal normal subgroup of odd order, then G/O(G) is isomorphic to a 2-group, or the alternating group A7, or a subgroup of PΓL2(q) containing PSL2(q) for q an odd prime power. Note that A5 ≈ PSL2(4) ≈ PSL2(5) and A6 ≈ PSL2(9). diff --git a/wiki/wikipedia/3099.txt b/wiki/wikipedia/3099.txt deleted file mode 100644 index d744f5a6881735e35e512742dd0444566102a39d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3099.txt +++ /dev/null @@ -1,7 +0,0 @@ -In mathematics, the word null (from German null meaning "zero", which is from Latin nullus meaning "none") is often associated with the concept of zero or the concept of nothing. It is used in varying contexts from "having zero members in a set" (e.g., null set) to "having a value of zero" (e.g., null vector). - -In a vector space, the null vector is the neutral element of vector addition; depending on the context, a null vector may also be a vector mapped to some null by a function under consideration (such as a quadratic form coming with the vector space, see null vector, a linear mapping given as matrix product or dot product, a seminorm in a Minkowski space, etc.). In set theory, the empty set, that is, the set with zero elements, denoted "{}" or "∅", may also be called null set. In measure theory, a null set is a (possibly nonempty) set with zero measure. - -A null space of a mapping is the part of the domain that is mapped into the null element of the image (the inverse image of the null element). For example, in linear algebra, the null space of a linear mapping, also known as kernel, is the set of vectors which map to the null vector under that mapping. - -In statistics, a null hypothesis is a proposition that no effect or relationship exists between populations and phenomena. It is the hypothesis which is presumed true—unless statistical evidence indicates otherwise. diff --git a/wiki/wikipedia/31.txt b/wiki/wikipedia/31.txt deleted file mode 100644 index d888218efdb164588ab6e58c578cf3fc9bafcebd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/31.txt +++ /dev/null @@ -1,85 +0,0 @@ -In computer science, lexicographic breadth-first search or Lex-BFS is a linear time algorithm for ordering the vertices of a graph. The algorithm is different from a breadth-first search, but it produces an ordering that is consistent with breadth-first search. - -The lexicographic breadth-first search algorithm is based on the idea of partition refinement and was first developed by Rose, Tarjan & Lueker (1976). A more detailed survey of the topic is presented by Corneil. - -It has been used as a subroutine in other graph algorithms including the recognition of chordal graphs, and optimal coloring of distance-hereditary graphs. - -The breadth-first search algorithm is commonly defined by the following process: - -*Initialize a queue of graph vertices, with the starting vertex of the graph as the queue's only element. - -*While the queue is non-empty, remove (dequeue) a vertex v from the queue, and add to the queue (enqueue) all the other vertices that can be reached by an edge from v that have not already been added in earlier steps.
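For concreteness, this queue-based process can be sketched in Python as follows (an illustrative sketch; the graph is assumed to be given as a dictionary mapping each vertex to its set of neighbors):

```
from collections import deque

def bfs_order(graph, start):
    """Standard breadth-first search; returns vertices in BFS order."""
    order, seen, queue = [], {start}, deque([start])
    while queue:
        v = queue.popleft()      # dequeue the next vertex
        order.append(v)
        for w in graph[v]:       # enqueue unvisited neighbors
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order
```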
- -However, rather than defining the vertex to choose at each step in an imperative way as the one produced by the dequeue operation of a queue, one can define the same sequence of vertices declaratively by the properties of these vertices. That is, a standard breadth-first search is just the result of repeatedly applying this rule: - -*Repeatedly output a vertex v, choosing at each step a vertex v that has not already been chosen and that has a predecessor (a vertex that has an edge to v) as early in the output as possible. - -In some cases, this ordering of vertices by the output positions of their predecessors may have ties — two different vertices have the same earliest predecessor. In this case, the order in which those two vertices are chosen may be arbitrary. The output of lexicographic breadth-first search differs from a standard breadth-first search in having a consistent rule for breaking such ties. In lexicographic breadth-first search, the output ordering is the order that would be produced by the rule: - -*Repeatedly output a vertex v, choosing at each step a vertex v that has not already been chosen and whose entire set of already-output predecessors is as small as possible in lexicographic order. - -So, when two vertices v and w have the same earliest predecessor, earlier than any other unchosen vertices, the standard breadth-first search algorithm will order them arbitrarily. Instead, in this case, the LexBFS algorithm would choose between v and w by the output ordering of their second-earliest predecessors. - -If only one of them has a second-earliest predecessor that has already been output, that one is chosen. - -If both v and w have the same second-earliest predecessor, then the tie is broken by considering their third-earliest predecessors, and so on. - -Applying this rule directly, by comparing the predecessor sets of vertices, would lead to an inefficient algorithm. Instead, the lexicographic breadth-first search uses a set partitioning data structure in order to produce the same ordering more efficiently, just as a standard breadth-first search uses a queue data structure to produce its ordering efficiently. - -The lexicographic breadth-first search algorithm replaces the queue of vertices of a standard breadth-first search with an ordered sequence of sets of vertices. The sets in the sequence form a partition of the remaining vertices. At each step, a vertex v from the first set in the sequence is removed from that set, and if that removal causes the set to become empty then the set is removed from the sequence. Then, each set in the sequence is replaced by two subsets: the neighbors of v and the non-neighbors of v. The subset of neighbors is placed earlier in the sequence than the subset of non-neighbors. In pseudocode, the algorithm can be expressed as follows: - -*Initialize a sequence Σ of sets, to contain a single set containing all vertices. - -*Initialize the output sequence of vertices to be empty. - -*While Σ is non-empty: - -**Find and remove a vertex v from the first set in Σ - -**If the first set in Σ is now empty, remove it from Σ - -**Add v to the end of the output sequence. - -**For each edge v-w such that w still belongs to a set S in Σ: - -***If the set S containing w has not yet been replaced while processing v, create a new empty replacement set T and place it prior to S in the sequence; otherwise, let T be the set prior to S. - -***Move w from S to T, and if this causes S to become empty remove S from Σ.
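The following Python sketch follows this pseudocode step by step. It is illustrative rather than optimized: it represents Σ as a plain list of lists, so it runs in quadratic rather than linear time, but it produces a valid LexBFS ordering:

```
def lex_bfs(graph, start=None):
    """Lexicographic BFS via partition refinement. graph maps each
    vertex to a set of neighbors; returns a LexBFS ordering."""
    vertices = list(graph)
    if start is not None:            # put the source first, if given
        vertices.remove(start)
        vertices.insert(0, start)
    sigma = [vertices]               # ordered sequence of sets (as lists)
    order = []
    while sigma:
        v = sigma[0].pop(0)          # take a vertex from the first set
        if not sigma[0]:
            sigma.pop(0)
        order.append(v)
        new_sigma = []
        for block in sigma:          # split each block on adjacency to v
            nbrs = [w for w in block if w in graph[v]]
            rest = [w for w in block if w not in graph[v]]
            if nbrs:
                new_sigma.append(nbrs)   # neighbors come first
            if rest:
                new_sigma.append(rest)
        sigma = new_sigma
    return order
```

For example, lex_bfs({1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}, start=1) returns the ordering [1, 2, 3, 4].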
- -Each vertex is processed once, each edge is examined only when its two endpoints are processed, and (with an appropriate representation for the sets in Σ that allows items to be moved from one set to another in constant time) each iteration of the inner loop takes only constant time. Therefore, like simpler graph search algorithms such as breadth-first search and depth-first search, this algorithm takes linear time. - -The algorithm is called lexicographic breadth-first search because the order it produces is an ordering that could also have been produced by a breadth-first search, and because if the ordering is used to index the rows and columns of an adjacency matrix of a graph then the algorithm sorts the rows and columns into lexicographical order. - -A graph G is defined to be chordal if its vertices have a perfect elimination ordering, an ordering such that for any vertex v the neighbors that occur later in the ordering form a clique. In a chordal graph, the reverse of a lexicographic ordering is always a perfect elimination ordering. Therefore, one can test whether a graph is chordal in linear time by the following algorithm: - -*Use lexicographic breadth-first search to find a lexicographic ordering of G - -*For each vertex v: - -**Let w be the neighbor of v occurring prior to v, as close to v in the sequence as possible - -***(Continue to the next vertex v if there is no such w) - -**If the set of earlier neighbors of v (excluding w itself) is not a subset of the set of earlier neighbors of w, the graph is not chordal - -*If the loop terminates without showing that the graph is not chordal, then it is chordal (see the code sketch below). - -This application was the original motivation that led Rose, Tarjan, and Lueker to develop the lexicographic breadth-first search algorithm. - -A graph G is said to be perfectly orderable if there is a sequence of its vertices with the property that, for any induced subgraph of G, a greedy coloring algorithm that colors the vertices in the induced sequence ordering is guaranteed to produce an optimal coloring. - -For a chordal graph, a perfect elimination ordering is a perfect ordering: the number of the color used for any vertex is the size of the clique formed by it and its earlier neighbors, so the maximum number of colors used is equal to the size of the largest clique in the graph, and no coloring can use fewer colors. An induced subgraph of a chordal graph is chordal and the induced subsequence of its perfect elimination ordering is a perfect elimination ordering on the subgraph, so chordal graphs are perfectly orderable, and lexicographic breadth-first search can be used to optimally color them. - -The same property is true for a larger class of graphs, the distance-hereditary graphs: distance-hereditary graphs are perfectly orderable, with a perfect ordering given by the reverse of a lexicographic ordering, so lexicographic breadth-first search can be used in conjunction with greedy coloring algorithms to color them optimally in linear time. - -Bretscher et al. describe an extension of lexicographic breadth-first search that breaks any additional ties using the complement graph of the input graph. As they show, this can be used to recognize cographs in linear time. Habib et al. describe additional applications of lexicographic breadth-first search including the recognition of comparability graphs and interval graphs. - -An enumeration of the vertices of a graph is said to be a LexBFS ordering if it is a possible output of the application of LexBFS to this graph.
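Before stating the formal characterization of LexBFS orderings, here is a sketch in code of the chordality test described above, reusing the lex_bfs function from the previous sketch (again illustrative, and quadratic rather than linear time):

```
def is_chordal(graph):
    """Chordality test: compute a LexBFS ordering and check the
    condition on earlier neighbors vertex by vertex."""
    order = lex_bfs(graph)
    pos = {v: i for i, v in enumerate(order)}
    earlier = lambda v: {u for u in graph[v] if pos[u] < pos[v]}
    for v in order:
        ev = earlier(v)
        if not ev:
            continue  # no earlier neighbor w: nothing to check
        w = max(ev, key=lambda u: pos[u])  # earlier neighbor closest to v
        if not (ev - {w}) <= earlier(w):
            return False  # violates the perfect elimination condition
    return True
```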
- -Let $G=(V,E)$ be a graph with $n$ vertices. Recall that $N(v)$ is the set of neighbors of $v$. - -Let $\sigma=(v_1,\dots,v_n)$ be an enumeration of the vertices of $V$. - -The enumeration $\sigma$ is a LexBFS ordering (with source $v_1$) if, for all $1\le i<j<k\le n$ with $v_i \in N(v_k)\setminus N(v_j)$, there exists an index $m<i$ with $v_m \in N(v_j)\setminus N(v_k)$. - -Recall also the setting of the Poincaré–Birkhoff–Witt theorem: L is a Lie algebra over a field K with a totally ordered basis X, U(L) is its universal enveloping algebra, and h : L → U(L) is the canonical map. A canonical monomial is a finite sequence (x1, x2, ..., xn) of elements of X which is non-decreasing in the order ≤, that is, x1 ≤ x2 ≤ ... ≤ xn. Extend h to all canonical monomials as follows: if (x1, x2, ..., xn) is a canonical monomial, let -$$ - h(x_1, x_2, \ldots, x_n) = h(x_1) \cdot h(x_2) \cdots h(x_n). -$$ - -Then h is injective on the set of canonical monomials and the image of this set $\{h(x_1, \ldots, x_n) \mid x_1 \leq \cdots \leq x_n\}$ forms a basis for U(L) as a K-vector space. - -Stated somewhat differently, consider Y = h(X). Y is totally ordered by the induced ordering from X. The set of monomials -$$ - y_1^{k_1} y_2^{k_2} \cdots y_\ell^{k_\ell} -$$ - -where $y_1 < y_2 < \cdots < y_\ell$ are elements of Y, and the exponents are non-negative, together with the multiplicative unit 1, form a basis for U(L). Note that the unit element 1 corresponds to the empty canonical monomial. The theorem then asserts that these monomials form a basis for U(L) as a vector space. It is easy to see that these monomials span U(L); the content of the theorem is that they are linearly independent. - -The multiplicative structure of U(L) is determined by the structure constants in the basis X, that is, the coefficients $c_{u,v}^x$ such that -$$ - [u,v] = \sum_{x \in X} c_{u,v}^x x. -$$ - -This relation allows one to reduce any product of elements of Y to a linear combination of canonical monomials: the structure constants determine yiyj - yjyi, i.e., what to do in order to change the order of two elements of Y in a product. This fact, modulo an inductive argument on the degree of (non-canonical) monomials, shows one can always achieve products where the factors are ordered in a non-decreasing fashion (an illustrative sketch of this reduction in code is given at the end of this entry). - -The Poincaré–Birkhoff–Witt theorem can be interpreted as saying that the end result of this reduction is unique and does not depend on the order in which one swaps adjacent elements. - -Corollary. If L is a Lie algebra over a field, the canonical map L → U(L) is injective. In particular, any Lie algebra over a field is isomorphic to a Lie subalgebra of an associative algebra. - -Already at its earliest stages, it was known that K could be replaced by any commutative ring, provided that L is a free K-module, i.e., has a basis as above. - -To extend to the case when L is no longer a free K-module, one needs to make a reformulation that does not use bases. This involves replacing the space of monomials in some basis with the symmetric algebra, S(L), on L. - -In the case that K contains the field of rational numbers, one can consider the natural map from S(L) to U(L), sending a monomial $v_1 v_2 \cdots v_n$, for $v_i \in L$, to the element -$$ -\frac{1}{n!} \sum_{\sigma \in S_n} v_{\sigma(1)} v_{\sigma(2)} \cdots v_{\sigma(n)}. -$$ - -Then, one has the theorem that this map is an isomorphism of K-modules. - -Still more generally and naturally, one can consider U(L) as a filtered algebra, equipped with the filtration given by specifying that $ v_1 v_2 \cdots v_n$ lies in filtered degree $\leq n$. The map L → U(L) of K-modules canonically extends to a map T(L) → U(L) of algebras, where T(L) is the tensor algebra on L (for example, by the universal property of tensor algebras), and this is a filtered map equipping T(L) with the filtration putting L in degree one (actually, T(L) is graded).
Then, passing to the associated graded, one gets a canonical morphism T(L) → grU(L), which kills the elements vw - wv for v, w ∈ L, and hence descends to a canonical morphism S(L) → grU(L). Then, the (graded) PBW theorem can be reformulated as the statement that, under certain hypotheses, this final morphism is an isomorphism of commutative algebras. - -This is not true for all K and L (see, for example, the last section of Cohn's 1961 paper), but is true in many cases. These include the aforementioned ones, where either L is a free K-module (hence whenever K is a field), or K contains the field of rational numbers. More generally, the PBW theorem as formulated above extends to cases such as where (1) L is a flat K-module, (2) L is torsion-free as an abelian group, (3) L is a direct sum of cyclic modules (or all its localizations at prime ideals of K have this property), or (4) K is a Dedekind domain. See, for example, the 1969 paper by Higgins for these statements. - -Finally, it is worth noting that, in some of these cases, one also obtains the stronger statement that the canonical morphism S(L) → grU(L) lifts to a K-module isomorphism S(L) → U(L), without taking associated graded. This is true in the first cases mentioned, where L is a free K-module, or K contains the field of rational numbers, using the construction outlined here (in fact, the result is a coalgebra isomorphism, and not merely a K-module isomorphism, equipping both S(L) and U(L) with their natural coalgebra structures such that $\Delta(v) = v \otimes 1 + 1 \otimes v$ for v ∈ L). This stronger statement, however, might not extend to all of the cases in the previous paragraph. - -In four papers from the 1880s, Alfredo Capelli proved, in different terminology, what is now known as the Poincaré–Birkhoff–Witt theorem in the case of $L=\mathfrak{gl}_n,$ the general linear Lie algebra; while Poincaré later stated it more generally in 1900. Armand Borel says that these results of Capelli were "completely forgotten for almost a century", and he does not suggest that Poincaré was aware of Capelli's result. Ton-That and Tran have investigated the history of the theorem. They found that the majority of the sources before Bourbaki's 1960 book call it the Birkhoff-Witt theorem. Following this old tradition, Fofanova, in her encyclopaedic entry, says that Poincaré obtained the first variant of the theorem. She further says that the theorem was subsequently completely demonstrated by Witt and Birkhoff. It appears that pre-Bourbaki sources were not familiar with Poincaré's paper. - -Birkhoff and Witt do not mention Poincaré's work in their 1937 papers. Cartan and Eilenberg call the theorem Poincaré-Witt Theorem and attribute the complete proof to Witt. Bourbaki were the first to use all three names in their 1960 book. Knapp presents a clear illustration of the shifting tradition. In his 1986 book he calls it Birkhoff-Witt Theorem, while in his later 1996 book he switches to Poincaré-Birkhoff-Witt Theorem. - -It is not clear whether Poincaré's result was complete. Ton-That and Tran conclude that "Poincaré had discovered and completely demonstrated this theorem at least thirty-seven years before Witt and Birkhoff". On the other hand, they point out that "Poincaré makes several statements without bothering to prove them". By their own admission, their proofs of all the steps are rather long. Borel states that Poincaré "more or less proved the Poincaré-Birkhoff-Witt theorem" in 1900.
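Returning to the reduction procedure described earlier: here is an illustrative Python sketch (not from the article; names and setup are hypothetical) that rewrites words in the universal enveloping algebra of sl2, with ordered basis e < f < h and brackets [e,f] = h, [h,e] = 2e, [h,f] = -2f, into linear combinations of canonical (non-decreasing) monomials:

```
# Ordered basis of sl2: 0 = e, 1 = f, 2 = h. Brackets [x_a, x_b] for a > b,
# stored as {word: coefficient}:  [f,e] = -h,  [h,e] = 2e,  [h,f] = -2f.
BRACKET = {
    (1, 0): {(2,): -1},
    (2, 0): {(0,): 2},
    (2, 1): {(1,): -2},
}

def normalize(element):
    """Rewrite a linear combination {word: coeff} of words in the
    generators into a combination of canonical monomials."""
    result, todo = {}, dict(element)
    while todo:
        word, coeff = todo.popitem()
        for i in range(len(word) - 1):
            a, b = word[i], word[i + 1]
            if a > b:  # out of order: x_a x_b = x_b x_a + [x_a, x_b]
                swapped = word[:i] + (b, a) + word[i + 2:]
                todo[swapped] = todo.get(swapped, 0) + coeff
                for w, c in BRACKET[(a, b)].items():
                    t = word[:i] + w + word[i + 2:]
                    todo[t] = todo.get(t, 0) + coeff * c
                break
        else:  # word is already non-decreasing, hence canonical
            result[word] = result.get(word, 0) + coeff
    return {w: c for w, c in result.items() if c}

# h*f*e reduces to e*f*h - h*h, a combination of canonical monomials:
print(normalize({(2, 1, 0): 1}))  # {(0, 1, 2): 1, (2, 2): -1}
```

The PBW theorem guarantees that the output of such a rewriting does not depend on the order in which adjacent out-of-order pairs are swapped.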
diff --git a/wiki/wikipedia/3101.txt b/wiki/wikipedia/3101.txt deleted file mode 100644 index 96274c285addd899eeb2ccc0430f3cf8319f0657..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3101.txt +++ /dev/null @@ -1,67 +0,0 @@ -In mathematics, specifically in operator K-theory, the Baum-Connes conjecture suggests a link between the K-theory of the reduced C*-algebra of a group and the K-homology of the classifying space of proper actions of that group. The conjecture sets up a correspondence between different areas of mathematics, with the K-homology of the classifying space being related to geometry, differential operator theory, and homotopy theory, while the K-theory of the group's reduced C*-algebra is a purely analytical object. - -The conjecture, if true, would have some older famous conjectures as consequences. For instance, the surjectivity part implies the Kadison–Kaplansky conjecture for discrete torsion-free groups, and the injectivity is closely related to the Novikov conjecture. - -The conjecture is also closely related to index theory, as the assembly map $\mu$ is a sort of index, and it plays a major role in Alain Connes' noncommutative geometry program. - -The origins of the conjecture go back to Fredholm theory, the Atiyah–Singer index theorem and the interplay of geometry with operator K-theory as expressed in the works of Brown, Douglas and Fillmore, among many other motivating subjects. - -Let Γ be a second countable locally compact group (for instance a countable discrete group). One can define a morphism -$$ - \mu: RK^\Gamma_*(\underline{E\Gamma}) \to K_*(C^*_r(\Gamma)), -$$ - -called the assembly map, from the equivariant K-homology with $\Gamma$-compact supports of the classifying space of proper actions $\underline{E\Gamma}$ to the K-theory of the reduced C*-algebra of Γ. The subscript index * can be 0 or 1. - -Paul Baum and Alain Connes introduced the following conjecture (1982) about this morphism: - -Baum-Connes Conjecture. The assembly map $\mu$ is an isomorphism. - -As the left hand side tends to be more easily accessible than the right hand side, because there are hardly any general structure theorems of the $C^*$-algebra, one usually views the conjecture as an "explanation" of the right hand side. - -The original formulation of the conjecture was somewhat different, as the notion of equivariant K-homology was not yet common in 1982. - -In case $\Gamma$ is discrete and torsion-free, the left hand side reduces to the non-equivariant K-homology with compact supports of the ordinary classifying space $B\Gamma$ of $\Gamma$. - -There is also more general form of the conjecture, known as Baum–Connes conjecture with coefficients, where both sides have coefficients in the form of a $C^*$-algebra $A$ on which $\Gamma$ acts by $C^*$-automorphisms. It says in KK-language that the assembly map -$$ - \mu_{A,\Gamma}: RKK^\Gamma_*(\underline{E\Gamma},A) \to K_*(A\rtimes_\lambda \Gamma), -$$ - -is an isomorphism, containing the case without coefficients as the case $A=\C.$ - -However, counterexamples to the conjecture with coefficients were found in 2002 by Nigel Higson, Vincent Lafforgue and Georges Skandalis. However, the conjecture with coefficients remains an active area of research, since it is, not unlike the classical conjecture, often seen as a statement concerning particular groups or class of groups. - -Let $\Gamma$ be the integers $\Z$. Then the left hand side is the K-homology of $B\Z$ which is the circle. 
The $C^*$-algebra of the integers is by the commutative Gelfand–Naimark transform, which reduces to the Fourier transform in this case, isomorphic to the algebra of continuous functions on the circle. So the right hand side is the topological K-theory of the circle. One can then show that the assembly map is KK-theoretic Poincaré duality as defined by Gennadi Kasparov, which is an isomorphism. - -The conjecture without coefficients is still open, although the field has received great attention since 1982. - -The conjecture is proved for the following classes of groups: - -* Discrete subgroups of $SO(n,1)$ and $SU(n,1)$. - -* Groups with the Haagerup property, sometimes called a-T-menable groups. These are groups that admit an isometric action on an affine Hilbert space $H$ which is proper in the sense that $\lim_{n\to\infty} g_n\xi\to\infty$ for all $\xi\in H$ and all sequences of group elements $g_n$ with $\lim_{n\to\infty}g_n\to\infty$. Examples of a-T-menable groups are amenable groups, Coxeter groups, groups acting properly on trees, and groups acting properly on simply connected $CAT(0)$ cubical complexes. - -* Groups that admit a finite presentation with only one relation. - -* Discrete cocompact subgroups of real Lie groups of real rank 1. - -* Cocompact lattices in $SL(3,\R), SL(3,\C)$ or $SL(3,\Q_p)$. It was a long-standing problem since the first days of the conjecture to expose a single infinite property T-group that satisfies it. However, such a group was given by V. Lafforgue in 1998 as he showed that cocompact lattices in $SL(3,\R)$ have the property of rapid decay and thus satisfy the conjecture. - -* Gromov hyperbolic groups and their subgroups. - -* Among non-discrete groups, the conjecture has been shown in 2003 by J. Chabert, S. Echterhoff and R. Nest for the vast class of all almost connected groups (i. e. groups having a cocompact connected component), and all groups of $k$-rational points of a linear algebraic group over a local field $k$ of characteristic zero (e.g. $k = \Q_p$). For the important subclass of real reductive groups, the conjecture had already been shown in 1987 by Antony Wassermann. - -Injectivity is known for a much larger class of groups thanks to the Dirac-dual-Dirac method. This goes back to ideas of Michael Atiyah and was developed in great generality by Gennadi Kasparov in 1987. - -Injectivity is known for the following classes: - -* Discrete subgroups of connected Lie groups or virtually connected Lie groups. - -* Discrete subgroups of p-adic groups. - -* Bolic groups (a certain generalization of hyperbolic groups). - -* Groups which admit an amenable action on some compact space. - -The simplest example of a group for which it is not known whether it satisfies the conjecture is $SL_3(\Z)$. diff --git a/wiki/wikipedia/3102.txt b/wiki/wikipedia/3102.txt deleted file mode 100644 index 03347048dac85023182c110b307371eca35ca0be..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3102.txt +++ /dev/null @@ -1,172 +0,0 @@ -In mathematics, Bernoulli's inequality (named after Jacob Bernoulli)) is an inequality that approximates exponentiations of 1 + x. It is often employed in real analysis. It has several useful variants: - -* $(1 + x)^r \geq 1 + rx$ for every integer r ≥ 0 and real number x ≥ −1. The inequality is strict if x ≠ 0 and r ≥ 2. - -* $(1 + x)^r \geq 1 + rx$ for every even integer r ≥ 0 and every real number x. - -* $(1 + x)^r \geq 1 + rx$ for every real numbers r ≥ 1 and x ≥ −1. The inequalities are strict if x ≠ 0 and r ≠ 0, 1. 
-
- * $(1 + x)^r \leq 1 + rx$ for all real numbers 0 ≤ r ≤ 1 and x ≥ −1.
-
-Jacob Bernoulli first published the inequality in his treatise "Positiones Arithmeticae de Seriebus Infinitis" (Basel, 1689), where he used the inequality often.
-
-According to Joseph E. Hofmann, Über die Exercitatio Geometrica des M. A. Ricci (1963), p. 177, the inequality is actually due to Sluse in his Mesolabum (1668 edition), Chapter IV "De maximis & minimis".
-
-Bernoulli's inequality can be proved for the case in which r is an integer, using mathematical induction in the following form:
-
-* we prove the inequality for $r\in\{0,1\}$,
-
-* from validity for some r we deduce validity for r + 2.
-
-For r = 0,
-$$
-(1+x)^0 \ge 1+0x
-$$
-
-is equivalent to 1 ≥ 1, which is true.
-
-Similarly, for r = 1 we have
-$$
-(1+x)^r=1+x\ge 1+x=1+rx.
-$$
-
-Now suppose the statement is true for r = k:
-$$
-(1+x)^k \ge 1+kx.
-$$
-
-Then it follows that
-
-\begin{align}
-(1+x)^{k+2} &= (1+x)^k(1+x)^2 \\
-&\ge (1+kx)\left(1+2x+x^2\right) \qquad\qquad\qquad\text{ by hypothesis and }(1+x)^2\ge 0 \\
-&=1+2x+x^2+kx+2kx^2+kx^3 \\
-&=1+(k+2)x+kx^2(x+2)+x^2 \\
-&\ge 1+(k+2)x
-\end{align}
-
-since $x^2\ge 0$ as well as $x+2\ge0$ (the latter because x ≥ −1). By the modified induction we conclude the statement is true for every non-negative integer r.
-
-The exponent r can be generalized to an arbitrary real number as follows: if x > −1, then
-$$
-(1 + x)^r \geq 1 + rx
-$$
-
-for r ≤ 0 or r ≥ 1, and
-$$
-(1 + x)^r \leq 1 + rx
-$$
-
-for 0 ≤ r ≤ 1.
-
-This generalization can be proved by comparing derivatives. The strict versions of these inequalities require x ≠ 0 and r ≠ 0, 1.
-
-Instead of $(1+x)^r$ the inequality holds also in the form $(1+x_1)(1+x_2)\dots(1+x_r) \geq 1+x_1+x_2 + \dots + x_r$ where $x_1, x_2, \dots , x_r$ are real numbers, all greater than −1, all with the same sign. Bernoulli's inequality is the special case when $x_1 = x_2 = \dots = x_r = x$. This generalized inequality can be proved by mathematical induction.
-
-In the first step we take $r=1$. In this case the inequality $1+x_1 \geq 1 + x_1$ is obviously true.
-
-In the second step we assume validity of the inequality for $r$ numbers and deduce validity for $r+1$ numbers.
-
-We assume that
-$$
-(1+x_1)(1+x_2)\dots(1+x_r) \geq 1+x_1+x_2 + \dots + x_r
-$$
-is valid. After multiplying both sides with the positive number $(x_{r+1} + 1)$ we get:
-
-\begin{alignat}{2}
-(1+x_1)(1+x_2)\dots(1+x_r)(1+x_{r+1}) \geq & (1+x_1+x_2 + \dots + x_r)(1+x_{r+1})
-\\
-= & (1+x_1+x_2+ \dots + x_r) \cdot 1 + (1+x_1+x_2+ \dots + x_r) \cdot x_{r+1}
-\\
-= & (1+x_1+x_2+ \dots + x_r) + x_{r+1} + x_1 x_{r+1} + x_2 x_{r+1} + \dots + x_r x_{r+1}
-\\ \end{alignat}
-
-As $x_1, x_2, \dots x_r, x_{r+1}$ all have the same sign, the products $x_1 x_{r+1}, x_2 x_{r+1}, \dots x_r x_{r+1}$ are all non-negative numbers. So the quantity on the right-hand side can be bounded as follows:
-$$
-(1+x_1+x_2+ \dots + x_r) + x_{r+1} + x_1 x_{r+1} + x_2 x_{r+1} + \dots + x_r x_{r+1} \geq 1+x_1+x_2+ \dots + x_r + x_{r+1},
-$$
-which was to be shown.
-
-The following inequality estimates the r-th power of 1 + x from the other side. For any real numbers x, r with r > 0, one has
-$$
-(1 + x)^r \le e^{rx},
-$$
-
-where e = 2.718.... This may be proved using the inequality $(1 + 1/k)^k < e$.
-
-An alternative form of Bernoulli's inequality for $ t\geq 1 $ and $ 0\le x\le 1 $ is:
-$$
-(1-x)^t \ge 1-xt.
-$$
-
-This can be proved (for any integer t) by using the formula for geometric series: (using y = 1 − x)
-$$
-t=1+1+\dots+1 \ge 1+y+y^2+\ldots+y^{t-1} = \frac{1-y^t}{1-y},
-$$
-
-or equivalently $xt \ge 1-(1-x)^t.$
-
-Using AM-GM
-
-An elementary proof for $0\le r\le 1$ and x ≥ −1 can be given using weighted AM-GM.
-
-Let $\lambda_1, \lambda_2$ be two non-negative real constants. By weighted AM-GM on $1,1+x$ with weights $\lambda_1, \lambda_2$ respectively, we get
-$$
-\dfrac{\lambda_1\cdot 1 + \lambda_2\cdot (1+x)}{\lambda_1+\lambda_2}\ge \sqrt[\lambda_1+\lambda_2]{(1+x)^{\lambda_2}}.
-$$
-
-Note that
-$$
-\dfrac{\lambda_1\cdot 1 + \lambda_2\cdot (1+x)}{\lambda_1+\lambda_2}=\dfrac{\lambda_1+\lambda_2+\lambda_2x}{\lambda_1+\lambda_2}=1+\dfrac{\lambda_2}{\lambda_1+\lambda_2}x
-$$
-
-and
-$$
-\sqrt[\lambda_1+\lambda_2]{(1+x)^{\lambda_2}} = (1+x)^{\frac{\lambda_2}{\lambda_1+\lambda_2}},
-$$
-
-so our inequality is equivalent to
-$$
-1 + \dfrac{\lambda_2}{\lambda_1+\lambda_2}x \ge (1+x)^{\frac{\lambda_2}{\lambda_1+\lambda_2}}.
-$$
-
-After substituting $r = \dfrac{\lambda_2}{\lambda_1+\lambda_2}$ (bearing in mind that this implies $0\le r\le 1$) our inequality turns into
-$$
-1+rx \ge (1+x)^r
-$$
-
-which is Bernoulli's inequality.
-
-Using the formula for geometric series
-
-Bernoulli's inequality
-$$
-(1+x)^r \geq 1 + rx
-$$
-is equivalent to
-$$
-(1+x)^r - 1 - rx \geq 0,
-$$
-and by the formula for geometric series (using y = 1 + x) we get
-$$
- (1+x)^r - 1 = y^r-1 = \left(\sum^{r-1}_{k=0}y^k\right) \cdot (y-1) = \left(\sum^{r-1}_{k=0}(1+x)^k\right)\cdot x
-$$
-
-which leads to
-$$
-(1+x)^r - 1-rx = \left(\left(\sum^{r-1}_{k=0}(1+x)^k\right) - r\right)\cdot x = \left(\sum^{r-1}_{k=0}\left((1+x)^k-1\right)\right)\cdot x \ge 0.
-$$
-
-Now if $x \ge 0$ then, by monotonicity of the powers, each summand $(1+x)^k - 1 = (1+x)^k - 1^k \ge 0$; therefore their sum is non-negative, and hence so is the product on the left-hand side of the last display.
-
-If $ 0 \ge x\ge -2 $ then by the same arguments $1\ge(1+x)^k$, and thus all addends $(1+x)^k-1$ are non-positive, and hence so is their sum. Since the product of two non-positive numbers is non-negative, we again obtain the desired inequality.
-
-Using the binomial theorem
-
-One can prove Bernoulli's inequality for x ≥ 0 using the binomial theorem. It is true trivially for r = 0, so suppose r is a positive integer. Then $(1+x)^r = 1 + rx + \tbinom r2 x^2 + ... + \tbinom rr x^r.$ Clearly $\tbinom r2 x^2 + ... + \tbinom rr x^r \ge 0,$ and hence $(1+x)^r \ge 1+rx$ as required.

diff --git a/wiki/wikipedia/3103.txt b/wiki/wikipedia/3103.txt
deleted file mode 100644
index 2d29d4a4ae86c3b78d292261a86a1f430e716976..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3103.txt
+++ /dev/null
@@ -1,63 +0,0 @@
-In algebraic geometry, the Witten conjecture is a conjecture about intersection numbers of stable classes on the moduli space of curves, introduced and later generalized by Edward Witten.
-
-Witten's original conjecture was proved by Maxim Kontsevich.
-
-Witten's motivation for the conjecture was that two different models of 2-dimensional quantum gravity should have the same partition function. The partition function for one of these models can be described in terms of intersection numbers on the moduli stack of algebraic curves, and the partition function for the other is the logarithm of the τ-function of the KdV hierarchy.
Identifying these partition functions gives Witten's conjecture that a certain generating function formed from intersection numbers should satisfy the differential equations of the KdV hierarchy. - -Suppose that Mg,n is the moduli stack of compact Riemann surfaces of genus g with n distinct marked points x1,...,xn, - -and Mg,n is its Deligne–Mumford compactification. There are n line bundles Li on - -Mg,n, whose fiber at a point of the moduli stack is given by the cotangent space of a Riemann surface at the marked point xi. The intersection index 〈τd1, ..., τdn〉 is the intersection index of Π c1(Li)di on Mg,n where Σdi = dimMg,n = 3g – 3 + n, and 0 if no such g exists, where c1 is the first Chern class of a line bundle. Witten's generating function - -F(t_0,t_1,\ldots) - -= \sum\langle\tau_0^{k_0}\tau_1^{k_1}\cdots\rangle\prod_{i\ge 0} \frac{t_i^{k_i}}{k_i!} - -=\frac{t_0^3}{6}+ \frac{t_1}{24} + \frac{t_0t_2}{24} + \frac{t_1^2}{24}+ \frac{t_0^2t_3}{48} + \cdots - - - -encodes all the intersection indices as its coefficients. - -Witten's conjecture states that the partition function Z = exp F is a τ-function for the KdV hierarchy, in other words it satisfies a certain series of partial differential equations corresponding to the basis $\{L_{-1}, L_0, L_1,\ldots\}$ of the Virasoro algebra. - -Kontsevich used a combinatorial description of the moduli spaces in terms of ribbon graphs to show that - -\sum_{d_1+\cdots+d_n=3g-3+n}\langle \tau_{d_1},\ldots,\tau_{d_n}\rangle \prod_{1\le i\le n} \frac{(2d_i-1)!!}{\lambda_i^{2d_i+1}} - -=\sum_{\Gamma\in G_{g,n}}\frac{2^{-|X_0|}}\prod_{e\in X_1}\frac{2}{\lambda(e)} - - - -Here the sum on the right is over the set Gg,n of ribbon graphs X of compact Riemann surfaces of genus g with n marked points. The set of edges e and points of X are denoted by X 0 and X1. The function λ is thought of as a function from the marked points to the reals, and extended to edges of the ribbon graph by setting λ of an edge equal to the sum of λ at the two marked points corresponding to each side of the edge. - -By Feynman diagram techniques, this implies that - -F(t0,...) is an asymptotic expansion of -$$ - \log\int \exp(i \text{tr} X^3/6)d\mu -$$ - -as Λ lends to infinity, where Λ and Χ are positive definite N by N hermitian matrices, and ti is given by -$$ - t_i = \frac{- \text{tr } \Lambda^{-1-2i}}{1\times3\times5\times\cdots\times (2i-1)} -$$ - -and the probability measure μ on the positive definite hermitian matrices is given by -$$ - d\mu =c_\Lambda\exp(-\text{tr} X^2\Lambda/2)dX -$$ - -where cΛ is a normalizing constant. This measure has the property that -$$ -\int X_{ij}X_{kl}d\mu = \delta_{il}\delta_{jk}\frac{2}{\Lambda_i+\Lambda_j} -$$ - -which implies that its expansion in terms of Feynman diagrams is the expression for F in terms of ribbon graphs. - -From this he deduced that exp F is a τ-function for the KdV hierarchy, thus proving Witten's conjecture. - -The Witten conjecture is a special case of a more general relation between integrable systems of Hamiltonian PDEs and the geometry of certain families of 2D topological field theories (axiomatized in the form of the so-called cohomological field theories by Kontsevich and Manin), which was explored and studied systematically by B. Dubrovin and Y. Zhang, A. Givental, C. Teleman and others. - -The Virasoro conjecture is a generalization of the Witten conjecture. 
diff --git a/wiki/wikipedia/3104.txt b/wiki/wikipedia/3104.txt
deleted file mode 100644
index d11155891fb40c99e0f4a813d75c2cc6a8719184..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3104.txt
+++ /dev/null
@@ -1,519 +0,0 @@
-A divisibility rule is a shorthand and useful way of determining whether a given integer is divisible by a fixed divisor without performing the division, usually by examining its digits. Although there are divisibility tests for numbers in any radix, or base, and they are all different, this article presents rules and examples only for decimal, or base 10, numbers. Martin Gardner explained and popularized these rules in his September 1962 "Mathematical Games" column in Scientific American.
-
-The rules given below transform a given number into a generally smaller number, while preserving divisibility by the divisor of interest. Therefore, unless otherwise noted, the resulting number should be evaluated for divisibility by the same divisor. In some cases the process can be iterated until the divisibility is obvious; for others (such as examining the last n digits) the result must be examined by other means.
-
-For divisors with multiple rules, the rules are generally ordered first for those appropriate for numbers with many digits, then those useful for numbers with fewer digits.
-
-Note: To test divisibility by any number that can be expressed as $2^n$ or $5^n$, in which n is a positive integer, just examine the last n digits.
-
-Note: To test divisibility by any number expressed as the product of prime factors $p_1^n p_2^m p_3^q$, we can separately test for divisibility by each prime to its appropriate power. For example, testing divisibility by 24 (24 = 8 × 3 = $2^3$ × 3) is equivalent to testing divisibility by 8 ($2^3$) and 3 simultaneously; thus we need only show divisibility by 8 and by 3 to prove divisibility by 24.
-
-First, take any number (for this example it will be 376) and note the last digit in the number, discarding the other digits. Then take that digit (6) while ignoring the rest of the number and determine if it is divisible by 2. If it is divisible by 2, then the original number is divisible by 2.
-
-Example
-
-# 376 (The original number)
-
-# 37 6 (Take the last digit)
-
-# 6 ÷ 2 = 3 (Check to see if the last digit is divisible by 2)
-
-# 376 ÷ 2 = 188 (If the last digit is divisible by 2, then the whole number is divisible by 2)
-
-First, take any number (for this example it will be 492) and add together each digit in the number (4 + 9 + 2 = 15). Then take that sum (15) and determine if it is divisible by 3. The original number is divisible by 3 (or 9) if and only if the sum of its digits is divisible by 3 (or 9).
-
-Adding the digits of a number up, and then repeating the process with the result until only one digit remains, will give the remainder of the original number if it were divided by nine (unless that single digit is nine itself, in which case the number is divisible by nine and the remainder is zero).
-
-This can be generalized to any standard positional system, in which the divisor in question then becomes one less than the radix; thus, in base-twelve, the digits will add up to the remainder of the original number if divided by eleven, and numbers are divisible by eleven only if the digit sum is divisible by eleven.
-
-The product of three consecutive numbers is always divisible by 3. This is useful for when a number takes the form of n × (n − 1) × (n + 1).
-
-Example.
-
-# 492 (The original number)
-
-# 4 + 9 + 2 = 15 (Add each individual digit together)
-
-# 15 is divisible by 3, at which point we can stop. Alternatively we can continue using the same method if the number is still too large:
-
-# 1 + 5 = 6 (Add each individual digit together)
-
-# 6 ÷ 3 = 2 (Check to see if the number received is divisible by 3)
-
-# 492 ÷ 3 = 164 (If the number obtained by using the rule is divisible by 3, then the whole number is divisible by 3)
-
-Example.
-
-# 336 (The original number)
-
-# 6 × 7 × 8 = 336
-
-# 336 ÷ 3 = 112
-
-The basic rule for divisibility by 4 is that if the number formed by the last two digits in a number is divisible by 4, the original number is divisible by 4.
-
-Another method is multiplication by 3. A number of the form 10x + y has the same remainder when divided by 7 as 3x + y. One must multiply the leftmost digit of the original number by 3, add the next digit, take the remainder when divided by 7, and continue from the beginning: multiply by 3, add the next digit, etc. For example, the number 371: 3×3 + 7 = 16, remainder 2, and 2×3 + 1 = 7. This method can be used to find the remainder of division by 7.
-
-A more complicated algorithm for testing divisibility by 7 uses the fact that $10^0 \equiv 1$, $10^1 \equiv 3$, $10^2 \equiv 2$, $10^3 \equiv 6$, $10^4 \equiv 4$, $10^5 \equiv 5$, $10^6 \equiv 1$, ... (mod 7). Take each digit of the number (371) in reverse order (173), multiplying them successively by the digits 1, 3, 2, 6, 4, 5, repeating with this sequence of multipliers as long as necessary (1, 3, 2, 6, 4, 5, 1, 3, 2, 6, 4, 5, ...), and adding the products (1×1 + 7×3 + 3×2 = 1 + 21 + 6 = 28). The original number is divisible by 7 if and only if the number obtained using this procedure is divisible by 7 (hence 371 is divisible by 7 since 28 is).
-
-This method can be simplified by removing the need to multiply. All it would take with this simplification is to memorize the sequence above (132645...), and to add and subtract, but always working with one-digit numbers.
-
-The simplification goes as follows:
-
-*Take for instance the number 371.
-
-*Change all occurrences of 7, 8 or 9 into 0, 1 and 2, respectively. In this example, we get: 301. This second step may be skipped, except for the leftmost digit, but following it may facilitate calculations later on.
-
-*Now convert the first digit (3) into the following digit in the sequence 13264513... In our example, 3 becomes 2.
-
-*Add the result in the previous step (2) to the second digit of the number, and substitute the result for both digits, leaving all remaining digits unmodified: 2 + 0 = 2. So 301 becomes 21.
-
-*Repeat the procedure until you have a recognizable multiple of 7, or, to make sure, a number between 0 and 6. So, starting from 21 (which is a recognizable multiple of 7), take the first digit (2) and convert it into the following in the sequence above: 2 becomes 6. Then add this to the second digit: 6 + 1 = 7.
-
-*If at any point the first digit is 8 or 9, these become 1 or 2, respectively. But if it is a 7 it should become 0, only if no other digits follow. Otherwise, it should simply be dropped. This is because that 7 would have become 0, and numbers with at least two digits before the decimal dot do not begin with 0, which is useless. According to this, our 7 becomes 0.
-
-If through this procedure you obtain a 0 or any recognizable multiple of 7, then the original number is a multiple of 7. If you obtain any number from 1 to 6, that will indicate how much you should subtract from the original number to get a multiple of 7.
In other words, you will find the remainder of dividing the number by 7. For example, take the number 186:
-
-*First, change the 8 into a 1: 116.
-
-*Now, change 1 into the following digit in the sequence (3), add it to the second digit, and write the result instead of both: 3 + 1 = 4. So 116 now becomes 46.
-
-*Repeat the procedure, since the number is greater than 7. Now, 4 becomes 5, which must be added to 6. That is 11.
-
-*Repeat the procedure one more time: 1 becomes 3, which is added to the second digit (1): 3 + 1 = 4.
-
-Now we have a number lower than 7, and this number (4) is the remainder of dividing 186 by 7. So 186 minus 4, which is 182, must be a multiple of 7.
-
-Note: The reason why this works is that if we have a + b = c and b is a multiple of any given number n, then a and c will necessarily produce the same remainder when divided by n. In other words, in 2 + 7 = 9, 7 is divisible by 7. So 2 and 9 must have the same remainder when divided by 7. The remainder is 2.
-
-Therefore, if a number n is a multiple of 7 (i.e.: the remainder of n/7 is 0), then adding (or subtracting) multiples of 7 cannot change that property.
-
-What this procedure does, as explained above for most divisibility rules, is simply subtract, little by little, multiples of 7 from the original number until reaching a number that is small enough for us to remember whether it is a multiple of 7. If 1 becomes a 3 in the following decimal position, that is just the same as converting 10×$10^n$ into 3×$10^n$. And that is actually the same as subtracting 7×$10^n$ (clearly a multiple of 7) from 10×$10^n$.
-
-Similarly, when you turn a 3 into a 2 in the following decimal position, you are turning 30×$10^n$ into 2×$10^n$, which is the same as subtracting 28×$10^n$ (since 30×$10^n$ − 28×$10^n$ = 2×$10^n$), and this is again subtracting a multiple of 7. The same reason applies for all the remaining conversions:
-
-* 20×$10^n$ − 6×$10^n$ = 14×$10^n$
-
-* 60×$10^n$ − 4×$10^n$ = 56×$10^n$
-
-* 40×$10^n$ − 5×$10^n$ = 35×$10^n$
-
-* 50×$10^n$ − 1×$10^n$ = 49×$10^n$
-
-First method example
-
-1050 → 105 − 0 = 105 → 10 − 10 = 0. ANSWER: 1050 is divisible by 7.
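For checking such computations mechanically, the rule "subtract twice the last digit from the rest" (derived at the end of this article) can be sketched in Python as follows; the function name is ours:

```
def divisible_by_7(n: int) -> bool:
    """Repeatedly replace 10*a + b (b the last digit) by a - 2*b; 7 | n is preserved."""
    n = abs(n)
    while n >= 70:                      # shrink until the answer is obvious
        n = abs(n // 10 - 2 * (n % 10))
    return n % 7 == 0

assert divisible_by_7(1050)             # the example above: 105 -> 10 - 10 = 0
assert not divisible_by_7(634)
```

-
-Second method example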
    - -1050 → 0501 (reverse) → 0×1 + 5×3 + 0×2 + 1×6 = 0 + 15 + 0 + 6 = 21 (multiply and add). ANSWER: 1050 is divisible by 7. - -Vedic method of divisibility by osculation
-
-Divisibility by seven can be tested by multiplication by the Ekhādika. Convert the divisor seven to the nines family by multiplying by seven: 7×7 = 49. Add one, drop the units digit, and take the 5, the Ekhādika, as the multiplier. Start on the right. Multiply by 5, add the product to the next digit to the left. Set down that result on a line below that digit. Repeat that method of multiplying the units digit by five and adding that product to the number of tens. Add the result to the next digit to the left. Write down that result below the digit. Continue to the end. If the end result is zero or a multiple of seven, then yes, the number is divisible by seven. Otherwise, it is not. This follows the Vedic ideal, one-line notation.
-
-Vedic method example:
-
-Is 438,722,025 divisible by seven? Multiplier = 5.
-
-4 3 8 7 2 2 0 2 5
-
-42 37 46 37 6 40 37 27
-
-YES
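The osculation can also be run on the whole number at once, folding the units digit into the rest with the multiplier 5. The following Python sketch (our own rendering, equivalent in its verdict to the digit-by-digit procedure above) tests the worked example:

```
def divisible_by_7_osculation(n: int) -> bool:
    """Osculate with the Ekhadika 5: replace 10*a + b by a + 5*b (preserves 7 | n)."""
    while n >= 70:
        n = n // 10 + 5 * (n % 10)
    return n % 7 == 0

assert divisible_by_7_osculation(438722025)   # the worked example above
assert not divisible_by_7_osculation(634)
```

-
-Pohlman–Mass method of divisibility by 7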
    - -The Pohlman–Mass method provides a quick solution that can determine if most integers are divisible by seven in three steps or less. This method could be useful in a mathematics competition such as MATHCOUNTS, where time is a factor to determine the solution without a calculator in the Sprint Round. - -Step A: - -If the integer is 1,000 or less, subtract twice the last digit from the number formed by the remaining digits. If the result is a multiple of seven, then so is the original number (and vice versa). For example: - -112 -> 11 − (2×2) = 11 − 4 = 7 YES - -98 -> 9 − (8×2) = 9 − 16 = −7 YES - -634 -> 63 − (4×2) = 63 − 8 = 55 NO - -Because 1,001 is divisible by seven, an interesting pattern develops for repeating sets of 1, 2, or 3 digits that form 6-digit numbers (leading zeros are allowed) in that all such numbers are divisible by seven. For example: - -001 001 = 1,001 / 7 = 143 - -010 010 = 10,010 / 7 = 1,430 - -011 011 = 11,011 / 7 = 1,573 - -100 100 = 100,100 / 7 = 14,300 - -101 101 = 101,101 / 7 = 14,443 - -110 110 = 110,110 / 7 = 15,730 - -01 01 01 = 10,101 / 7 = 1,443 - -10 10 10 = 101,010 / 7 = 14,430 - -111,111 / 7 = 15,873 - -222,222 / 7 = 31,746 - -999,999 / 7 = 142,857 - -576,576 / 7 = 82,368 - -For all of the above examples, subtracting the first three digits from the last three results in a multiple of seven. Notice that leading zeros are permitted to form a 6-digit pattern. - -This phenomenon forms the basis for Steps B and C. - -Step B: - -If the integer is between 1,001 and one million, find a repeating pattern of 1, 2, or 3 digits that forms a 6-digit number that is close to the integer (leading zeros are allowed and can help you visualize the pattern). If the positive difference is less than 1,000, apply Step A. This can be done by subtracting the first three digits from the last three digits. For example: - -341,355 − 341,341 = 14 -> 1 − (4×2) = 1 − 8 = −7 YES - -67,326 − 067,067 = 259 -> 25 − (9×2) = 25 − 18 = 7 YES - -The fact that 999,999 is a multiple of 7 can be used for determining divisibility of integers larger than one million by reducing the integer to a 6-digit number that can be determined using Step B. This can be done easily by adding the digits left of the first six to the last six and follow with Step A. - -Step C: - -If the integer is larger than one million, subtract the nearest multiple of 999,999 and then apply Step B. For even larger numbers, use larger sets such as 12-digits (999,999,999,999) and so on. Then, break the integer into a smaller number that can be solved using Step B. For example: - -22,862,420 − (999,999 × 22) = 22,862,420 − 21,999,978 -> 862,420 + 22 = 862,442 - -862,442 -> 862 − 442 (Step B) = 420 -> 42 − (0×2) (Step A) = 42 YES - -This allows adding and subtracting alternating sets of three digits to determine divisibility by seven. Understanding these patterns allows you to quickly calculate divisibility of seven as seen in the following examples: - -Pohlman–Mass method of divisibility by 7, examples: - -Is 98 divisible by seven? - -98 -> 9 − (8×2) = 9 − 16 = −7 YES (Step A) - -Is 634 divisible by seven? - -634 -> 63 − (4×2) = 63 − 8 = 55 NO (Step A) - -Is 355,341 divisible by seven? - -355,341 − 341,341 = 14,000 (Step B) -> 014 − 000 (Step B) -> 14 = 1 − (4×2) (Step A) = 1 − 8 = −7 YES - -Is 42,341,530 divisible by seven? 
-
-42,341,530 -> 341,530 + 42 = 341,572 (Step C)
-
-341,572 − 341,341 = 231 (Step B)
-
-231 -> 23 − (1×2) = 23 − 2 = 21 YES (Step A)
-
-Using quick alternating additions and subtractions:
-
-42,341,530 -> 530 − 341 + 42 = 189 + 42 = 231 -> 23 − (1×2) = 21 YES
-
-Multiplication by 3 method of divisibility by 7, examples:
-
-Is 98 divisible by seven?
-
-98 -> 9 remainder 2 -> 2×3 + 8 = 14 YES
-
-Is 634 divisible by seven?
-
-634 -> 6×3 + 3 = 21 -> remainder 0 -> 0×3 + 4 = 4 NO
-
-Is 355,341 divisible by seven?
-
-3×3 + 5 = 14 -> remainder 0 -> 0×3 + 5 = 5 -> 5×3 + 3 = 18 -> remainder 4 -> 4×3 + 4 = 16 -> remainder 2 -> 2×3 + 1 = 7 YES
-
-Find the remainder of 1036125837 divided by 7:
-
-1×3 + 0 = 3
-
-3×3 + 3 = 12 remainder 5
-
-5×3 + 6 = 21 remainder 0
-
-0×3 + 1 = 1
-
-1×3 + 2 = 5
-
-5×3 + 5 = 20 remainder 6
-
-6×3 + 8 = 26 remainder 5
-
-5×3 + 3 = 18 remainder 4
-
-4×3 + 7 = 19 remainder 5
-
-Answer is 5
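This left-to-right process is Horner's rule modulo 7, using 10 ≡ 3 (mod 7). A minimal Python sketch (function name ours) reproduces the answers above:

```
def mod7_horner(n: int) -> int:
    """Horner's rule modulo 7: r -> (3*r + next digit) % 7, since 10 = 3 (mod 7)."""
    r = 0
    for d in str(n):            # digits left to right
        r = (3 * r + int(d)) % 7
    return r

assert mod7_horner(1036125837) == 5
assert mod7_horner(355341) == 0     # divisible by seven
```

-
-Finding remainder of a number when divided by 7
-
-7 − (1, 3, 2, −1, −3, −2, cycle repeats for the next six digits) Period: 6 digits.
-
-Recurring numbers: 1, 3, 2, −1, −3, −2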
    Minimum magnitude sequence
    - -(1, 3, 2, 6, 4, 5, cycle repeats for the next six digits) Period: 6 digits. - -Recurring numbers: 1, 3, 2, 6, 4, 5 - -
Positive sequence
-
-Multiply the rightmost digit of the number by the leftmost number in the sequence, the second rightmost digit by the second leftmost number in the sequence, and so on; the cycle repeats. Next, compute the sum of all the values and take the result modulo 7, as in the sketch below.
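A minimal Python sketch of this sequence method (function name ours), which also reproduces the worked example that follows:

```
def mod7_sequence(n: int) -> int:
    """Weight the digits, rightmost first, by the repeating sequence 10**i % 7."""
    weights = [1, 3, 2, 6, 4, 5]
    total = sum(int(d) * weights[i % 6]
                for i, d in enumerate(reversed(str(n))))
    return total % 7

assert mod7_sequence(1036125837) == 5
assert mod7_sequence(1050) == 0
```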
    Example: What is the remainder when 1036125837 is divided by 7?
    - -
    Multiplication of the rightmost digit = 1 × 7 = 7
    - -
    Multiplication of the second rightmost digit = 3 × 3 = 9
    - -
    Third rightmost digit = 8 × 2 = 16
    - -
    Fourth rightmost digit = 5 × −1 = −5
    - -
    Fifth rightmost digit = 2 × −3 = −6
    - -
    Sixth rightmost digit = 1 × −2 = −2
    - -
    Seventh rightmost digit = 6 × 1 = 6
    - -
    Eighth rightmost digit = 3 × 3 = 9
    - -
    Ninth rightmost digit = 0
    - -
    Tenth rightmost digit = 1 × −1 = −1
    - -
    Sum = 33
    - -
    33 modulus 7 = 5
    - -
Remainder = 5
-
-Digit pair method of divisibility by 7
-
-This method uses the 1, −3, 2 pattern on the digit pairs. That is, the divisibility of any number by seven can be tested by first separating the number into digit pairs, and then applying the algorithm on three digit pairs (six digits) at a time. When the number is smaller than six digits, fill zeros to the right side until there are six digits. When the number is larger than six digits, repeat the cycle on the next six-digit group and then add the results. Repeat the algorithm until the result is a small number. The original number is divisible by seven if and only if the number obtained using this algorithm is divisible by seven. This method is especially suitable for large numbers, and is sketched in code below.
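Under the stated grouping convention, the test can be sketched in Python as follows (function name ours; it scores six-digit groups from the left, padding the last group with zeros on the right):

```
def divisible_by_7_pairs(n: int) -> bool:
    """Score each six-digit group's pairs with weights (1, -3, 2) and sum the scores."""
    s = str(n)
    score = 0
    for i in range(0, len(s), 6):
        group = s[i:i + 6].ljust(6, "0")        # pad the final group on the right
        p1, p2, p3 = (int(group[j:j + 2]) for j in (0, 2, 4))
        score += p1 - 3 * p2 + 2 * p3
    return score % 7 == 0

assert divisible_by_7_pairs(157514)            # Example 1 below: score -182 = 7 * (-26)
assert divisible_by_7_pairs(15751537186)       # Example 2 below: score -77
```

-
-Example 1: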
    - -The number to be tested is 157514. - -First we separate the number into three digit pairs: 15, 75 and 14.
-
-Then we apply the algorithm: 1 × 15 − 3 × 75 + 2 × 14 = −182
-
-Because the resulting −182 has fewer than six digits, we drop the sign (which does not affect divisibility) and add zeros to the right side of 182 until it is six digits: 182000.
    - -Then we apply our algorithm again: 1 × 18 − 3 × 20 + 2 × 0 = −42
    - -The result −42 is divisible by seven, thus the original number 157514 is divisible by seven. - -Example 2:
-
-The number to be tested is 15751537186. Its six-digit groups are 157515 and 371860, the second group having been padded with a zero on the right.
    - -(1 × 15 − 3 × 75 + 2 × 15) + (1 × 37 − 3 × 18 + 2 × 60) = −180 + 103 = −77
-
-The result −77 is divisible by seven, thus the original number 15751537186 is divisible by seven.
-
-Another digit pair method of divisibility by 7
-
-Method
-
-This is a non-recursive method to find the remainder left by a number on dividing by 7:
-
-# Separate the number into digit pairs starting from the ones place. Prepend the number with 0 to complete the final pair if required.
-
-# Calculate the remainders left by each digit pair on dividing by 7.
-
-# Multiply the remainders by the appropriate multiplier from the sequence 1, 2, 4, 1, 2, 4, …: the remainder from the digit pair consisting of the ones place and tens place should be multiplied by 1, hundreds and thousands by 2, ten thousands and hundred thousands by 4, millions and ten millions again by 1, and so on.
-
-# Calculate the remainders left by each product on dividing by 7.
-
-# Add these remainders.
-
-# The remainder of the sum when divided by 7 is the remainder of the given number when divided by 7.
-
-For example:
-
-The number 194,536 leaves a remainder of 6 on dividing by 7.
-
-The number 510,517,813 leaves a remainder of 1 on dividing by 7.
-
-Proof of correctness of the method
-
-The method is based on the observation that 100 leaves a remainder of 2 when divided by 7. Since we are breaking the number into digit pairs, we essentially have powers of 100:
-
-1 mod 7 = 1
-
-100 mod 7 = 2
-
-10,000 mod 7 = 2^2 = 4
-
-1,000,000 mod 7 = 2^3 = 8; 8 mod 7 = 1
-
-100,000,000 mod 7 = 2^4 = 16; 16 mod 7 = 2
-
-10,000,000,000 mod 7 = 2^5 = 32; 32 mod 7 = 4
-
-And so on.
-
-The correctness of the method is then established by the following chain of congruences. Let N be the given number $\overline{a_{2n} a_{2n-1} \ldots a_2 a_1}$. Then
-$$
-\overline{a_{2n}a_{2n-1}\ldots a_2a_1} \bmod 7
-= \Big[\sum_{k=1}^n \left(\overline{a_{2k}a_{2k-1}}\right) \times 10^{2k-2}\Big] \bmod 7
-= \Big[\sum_{k=1}^n \left(\overline{a_{2k}a_{2k-1}} \bmod 7\right) \times \left(10^{2k-2} \bmod 7\right)\Big] \bmod 7.
-$$
-
-Remainder test for 13
-
-Use the sequence (1, −3, −4, −1, 3, 4); the cycle repeats every six digits. If you are not comfortable with negative numbers, then use this sequence: (1, 10, 9, 12, 3, 4).
-
-Multiply the rightmost digit of the number by the leftmost number in the sequence shown above, the second rightmost digit by the second leftmost number in the sequence, and so on; the cycle repeats, as in the sketch below.
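A minimal Python sketch using the non-negative sequence (function name ours); it reproduces both worked answers that follow:

```
def mod13(n: int) -> int:
    """Weight the digits, rightmost first, by the repeating sequence 10**i % 13."""
    weights = [1, 10, 9, 12, 3, 4]
    return sum(int(d) * weights[i % 6]
               for i, d in enumerate(reversed(str(n)))) % 13

assert mod13(321) == 9          # first worked example below
assert mod13(1234567) == 9      # second worked example below
```

-
-Example: What is the remainder when 321 is divided by 13?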
    - -Using the first sequence,
-
-Answer: 1 × 1 + 2 × (−3) + 3 × (−4) = −17
    - -Remainder = −17 mod 13 = 9 - -Example: What is the remainder when 1234567 is divided by 13?
    - -Using the second sequence,
    - -Answer: 7 × 1 + 6 × 10 + 5 × 9 + 4 × 12 + 3 × 3 + 2 × 4 + 1 × 1 = 178 mod 13 = 9
-
-Remainder = 9
-
-Divisibility properties of numbers can be determined in two ways, depending on the type of the divisor.
-
-A number is divisible by a given divisor if it is divisible by the highest power of each of its prime factors. For example, to determine divisibility by 36, check divisibility by 4 and by 9.
-
-Many of the simpler rules can be produced using only algebraic manipulation, creating binomials and rearranging them. By writing a number as the sum of each digit times a power of 10, each digit's power can be manipulated individually.
-
-Case where all digits are summed
-
-This method works for divisors that are factors of 10 − 1 = 9.
-
-Using 3 as an example, 3 divides 9 = 10 − 1. That means $10 \equiv 1 \pmod{3}$ (see modular arithmetic). The same holds for all the higher powers of 10: $10^n \equiv 1^n \equiv 1 \pmod{3}$. They are all congruent to 1 modulo 3. Since two things that are congruent modulo 3 are either both divisible by 3 or both not, we can interchange values that are congruent modulo 3. So, in a number such as the following, we can replace all the powers of 10 by 1:
-$$
-100\cdot a + 10\cdot b + 1\cdot c \equiv (1)a + (1)b + (1)c \pmod{3}
-$$
-
-which is exactly the sum of the digits.
-
-Case where the alternating sum of digits is used
-
-This method works for divisors that are factors of 10 + 1 = 11.
-
-Using 11 as an example, 11 divides 11 = 10 + 1. That means $10 \equiv -1 \pmod{11}$. For the higher powers of 10, they are congruent to 1 for even powers and congruent to −1 for odd powers:
-$$
-10^n \equiv (-1)^n \equiv \begin{cases} 1, & \mbox{if }n\mbox{ is even} \\ -1, & \mbox{if }n\mbox{ is odd} \end{cases} \pmod{11}.
-$$
-
-Like the previous case, we can substitute powers of 10 with congruent values:
-$$
-1000\cdot a + 100\cdot b + 10\cdot c + 1\cdot d \equiv (-1)a + (1)b + (-1)c + (1)d \pmod{11}
-$$
-
-which is also the difference between the sum of digits at odd positions and the sum of digits at even positions.
-
-Case where only the last digit(s) matter
-
-This applies to divisors that are a factor of a power of 10. This is because sufficiently high powers of the base are multiples of the divisor, and can be eliminated.
-
-For example, in base 10, the factors of $10^1$ include 2, 5, and 10. Therefore, divisibility by 2, 5, and 10 only depends on whether the last digit is divisible by those divisors. The factors of $10^2$ include 4 and 25, and divisibility by those only depends on the last 2 digits.
-
-Case where only the last digit(s) are removed
-
-Most numbers do not divide 9 or 10 evenly, but do divide $10^n$ or $10^n - 1$ for some higher power n. In this case the number is still written in powers of 10, but not fully expanded.
-
-For example, 7 does not divide 9 or 10, but does divide 98, which is close to 100. Thus, proceed from
-$$
-100 \cdot a + b
-$$
-
-where in this case a is any integer, and b can range from 0 to 99. Next,
-$$
-(98+2) \cdot a + b
-$$
-
-and again expanding
-$$
-98 \cdot a + 2 \cdot a + b,
-$$
-
-and after eliminating the known multiple of 7, the result is
-$$
-2 \cdot a + b,
-$$
-
-which is the rule "double the number formed by all but the last two digits, then add the last two digits".
-
-Case where the last digit(s) is multiplied by a factor
-
-The representation of the number may also be multiplied by any number relatively prime to the divisor without changing its divisibility.
After observing that 7 divides 21, we can perform the following:
-$$
-10 \cdot a + b,
-$$
-
-after multiplying by 2, this becomes
-$$
-20 \cdot a + 2 \cdot b,
-$$
-
-and then
-$$
-(21 - 1) \cdot a + 2 \cdot b.
-$$
-
-Eliminating the 21 gives
-$$
- -1 \cdot a + 2 \cdot b,
-$$
-
-and multiplying by −1 gives
-$$
- a - 2 \cdot b.
-$$
-
-Either of the last two rules may be used, depending on which is easier to perform. They correspond to the rule "subtract twice the last digit from the rest".
-
-This section will illustrate the basic method; all the rules can be derived following the same procedure. The following requires a basic grounding in modular arithmetic; for divisibility other than by 2's and 5's, the proofs rest on the basic fact that 10 mod m is invertible if 10 and m are relatively prime.
-
-For $2^n$ or $5^n$:
-
-Only the last n digits need to be checked.
-$$
-10^n = 2^n \cdot 5^n \equiv 0 \pmod{2^n \mathrm{\ or\ } 5^n}
-$$
-
-Representing x as $10^n \cdot y + z,$
-$$
-x = 10^n \cdot y + z \equiv z \pmod{2^n \mathrm{\ or\ } 5^n}
-$$
-
-and the divisibility of x is the same as that of z.
-
-For 7:
-
-Since 10 × 5 ≡ 10 × (−2) ≡ 1 (mod 7) we can do the following:
-
-Representing x as $10 \cdot y + z,$
-$$
--2x \equiv y -2z \pmod{7},
-$$
-
-so x is divisible by 7 if and only if y − 2z is divisible by 7.

diff --git a/wiki/wikipedia/3105.txt b/wiki/wikipedia/3105.txt
deleted file mode 100644
index 9e0496b70fac44345a1138248847c2eb649b0515..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3105.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-Geometry Expert (GEX) is a Chinese software package for dynamic diagram drawing and automated geometry theorem proving and discovering.
-
-There is a new Chinese version of Geometry Expert, called MMP/Geometer.
-
-Java Geometry Expert is free under the GNU General Public License.

diff --git a/wiki/wikipedia/3106.txt b/wiki/wikipedia/3106.txt
deleted file mode 100644
index fc59452d9de630f237a411372a909276aeda2e14..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3106.txt
+++ /dev/null
@@ -1,19 +0,0 @@
-DUCS (Display Unit Control System) was a teleprocessing monitor from CFS Inc. It was one of two early local teleprocessing packages for IBM's DOS/VSE environment. DUCS provided an interface and access method for programmers to 'talk' to monitors. Such access methods later became known as APIs.
-
-Initially written for the IBM 2260 running under DOS on IBM mainframes, the original product was free for IBM users. With the advent of DOS/VS and the IBM 3270 series terminals, the original author commercialized the product, circa 1970. The company added transparent remote access about 1972.
-
-The product is believed to be the first non-IBM publicly available commercial software package to transmit data via satellite.
-
-DUCS differed from competing products such as Westi and IBM's own CICS in that it was subordinate to the application's mainline program. Westi, for example, was the mainline program, and users wrote subroutines to read and write data to and from terminals and discs. This real-time paradigm became known as transaction processing.
-
-DUCS reversed that model in that it was, in fact, a subroutine package that read from and wrote to monitors, both local and remote. While DUCS was considerably easier to program and use, it also placed the onus of task management upon the programmer. Correctly designed, a DUCS program was faster than any competing package or access method.
-
-Dick Goran wrote the original DOS 2260 package.
Its popularity made him realize it had potential as a commercial product, so around 1970 he left IBM and incorporated CFS, Inc. in Brookline, Massachusetts.
-
-In 1972, IBM released DOS/VS with the IBM/370 and the first IBM 3270 terminals, and CFS began a rewrite for the new products. Former New York City IBMer Leigh Lundin wrote DUCS Remote, a bi-sync module to handle remote teleprocessing. The bi-sync handler was only 4k, in contrast to IBM's BTAM at 28k, QTAM at 36k, TCAM at 42k, and VTAM, which started at 48k.
-
-Lundin wrote games in Fortran and Assembler, and Goran in COBOL, to demonstrate the API for programmers. To model IBM's new light pen, programmers contributed a simple tic-tac-toe (noughts and crosses), possibly the only practical use of the subsequently discontinued light pen.
-
-DUCS was sold in North America by CFS, Inc., Brookline, MA.
-
-For overseas sales, CFS engaged in both mail order and local vendors.

diff --git a/wiki/wikipedia/3107.txt b/wiki/wikipedia/3107.txt
deleted file mode 100644
index 9d39ced15fe5b08f82e5f9ce690e34b40dfb95bf..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3107.txt
+++ /dev/null
@@ -1 +0,0 @@
-In mathematics, the Alperin–Brauer–Gorenstein theorem characterizes the finite simple groups with quasidihedral or wreathed Sylow 2-subgroups. These are isomorphic either to three-dimensional projective special linear groups or projective special unitary groups over a finite field of odd order, depending on a certain congruence, or to the Mathieu group $M_{11}$. Alperin proved this in the course of 261 pages. The subdivision by 2-fusion is sketched there, given as an exercise in Gorenstein, and presented in some detail in Kwon.

diff --git a/wiki/wikipedia/3108.txt b/wiki/wikipedia/3108.txt
deleted file mode 100644
index e2f9d1a33c0ca74b94edc8933d2432e2c5dc2022..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3108.txt
+++ /dev/null
@@ -1,51 +0,0 @@
-The Dutch national flag problem is a computational problem proposed by Edsger Dijkstra. The flag of the Netherlands consists of three colors: red, white, and blue. Given balls of these three colors arranged randomly in a line (it does not matter how many balls there are), the task is to arrange them such that all balls of the same color are together and their collective color groups are in the correct order.
-
-The solution to this problem is of interest for designing sorting algorithms; in particular, variants of the quicksort algorithm that must be robust to repeated elements may use a three-way partitioning function that groups items less than a given key (red), equal to the key (white), and greater than the key (blue). Several solutions exist that have varying performance characteristics, tailored to sorting arrays with either small or large numbers of repeated elements.
-
-This problem can also be viewed in terms of rearranging elements of an array. Suppose each of the possible elements could be classified into exactly one of three categories (bottom, middle, and top). For example, if all the elements are in 0 ... 1, the bottom could be defined as elements in 0 ... 0.25 (not including 0.25), the middle as 0.25 ... 0.5 (not including 0.5), and the top as 0.5 and greater. (The choice of these values illustrates that the categories need not be equal ranges.) The problem is then to produce an array such that all "bottom" elements come before (have an index less than the index of) all "middle" elements, which come before all "top" elements.
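As an illustration of the problem statement, the following Python sketch (our names and thresholds) performs the three-way rearrangement for the bottom/middle/top example above; it is an instance of the partitioning algorithm described next:

```
def three_way_partition(a, low, high):
    """In place: elements < low first, then those in [low, high), then >= high."""
    i, j, k = 0, 0, len(a) - 1
    while j <= k:
        if a[j] < low:
            a[i], a[j] = a[j], a[i]
            i += 1
            j += 1
        elif a[j] >= high:
            a[j], a[k] = a[k], a[j]
            k -= 1
        else:
            j += 1
    return a

# bottom: < 0.25, middle: 0.25 ... 0.5 (not including 0.5), top: 0.5 and greater
print(three_way_partition([0.1, 0.6, 0.3, 0.24, 0.5, 0.0, 0.9, 0.26], 0.25, 0.5))
# -> [0.1, 0.24, 0.0, 0.26, 0.3, 0.9, 0.5, 0.6]
```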
- -One algorithm is to have the top group grow down from the top of the array, the bottom group grow up from the bottom, and keep the middle group just above the bottom. The algorithm indexes three locations, the bottom of the top group, the top of the bottom group, and the top of the middle group. Elements that are yet to be sorted fall between the middle and the top group. At each step, examine the element just above the middle. If it belongs to the top group, swap it with the element just below the top. If it belongs in the bottom, swap it with the element just above the bottom. If it is in the middle, leave it. Update the appropriate index. Complexity is Θ(n) moves and examinations. - -The following pseudocode for three-way partitioning which assumes zero-based array indexing was proposed by Dijkstra himself. It uses three indices i, j and k, maintaining the invariant that i ≤ j ≤ k. - -* Entries from 0 up to (but not including) i are values less than mid, - -* entries from i up to (but not including) j are values equal to mid, - -* entries from j up to (and including) k are values not yet sorted, and - -* entries from k + 1 to the end of the array are values greater than mid. - -procedure three-way-partition(A : array of values, mid : value): - -i ← 0 - -j ← 0 - -k ← size of A - 1 - -while j <= k: - -if A[j] < mid: - -swap A[i] and A[j] - -i ← i + 1 - -j ← j + 1 - -else if A[j] > mid: - -swap A[j] and A[k] - -k ← k - 1 - -else: - -j ← j + 1 diff --git a/wiki/wikipedia/3109.txt b/wiki/wikipedia/3109.txt deleted file mode 100644 index ed3fd39b0b6a9aa55ab6344a11005b5d6ab13843..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3109.txt +++ /dev/null @@ -1,113 +0,0 @@ -Yen's algorithm computes single-source K-shortest loopless paths for a graph with non-negative edge cost. The algorithm was published by Jin Y. Yen in 1971 and employs any shortest path algorithm to find the best path, then proceeds to find K - 1 deviations of the best path. - -The algorithm can be broken down into two parts, determining the first k-shortest path, $A^k$, and then determining all other k-shortest paths. It is assumed that the container $A$ will hold the k-shortest path, whereas the container $B$, will hold the potential k-shortest paths. To determine $A^1$, the shortest path from the source to the sink, any efficient shortest path algorithm can be used. - -To find the $A^k$, where $k$ ranges from $2$ to $K$, the algorithm assumes that all paths from $A^1$ to $A^{k-1}$ have previously been found. The $k$ iteration can be divided into two processes, finding all the deviations ${A^k}_i$ and choosing a minimum length path to become $A^k$. Note that in this iteration, $i$ ranges from $1$ to ${Q^k}_k$. - -The first process can be further subdivided into three operations, choosing the ${R^k}_i$, finding ${S^k}_i$, and then adding ${A^k}_i$ to the container $B$. The root path, ${R^k}_i$, is chosen by finding the subpath in $A^{k-1}$ that follows the first $i$ nodes of $A^{j}$, where $j$ ranges from $1$ to $k-1$. Then, if a path is found, the cost of edge $d_{i(i+1)}$ of $A^{j}$ is set to infinity. Next, the spur path, ${S^k}_i$, is found by computing the shortest path from the spur node, node $i$, to the sink. The removal of previous used edges from $(i)$ to $(i + 1)$ ensures that the spur path is different. ${A^k}_i = {R^k}_i + {S^k}_i$, the addition of the root path and the spur path, is added to $B$. Next, the edges that were removed, i.e. 
had their cost set to infinity, are restored to their initial values. - -The second process determines a suitable path for $A^k$ by finding the path in container $B$ with the lowest cost. This path is removed from container $B$ and inserted into container $A$ and the algorithm continues to the next iteration. - -The algorithm assumes that the Dijkstra algorithm is used to find the shortest path between two nodes, but any shortest path algorithm can be used in its place. - -function YenKSP(Graph, source, sink, K): - -// Determine the shortest path from the source to the sink. - -A[0] = Dijkstra(Graph, source, sink); - -// Initialize the set to store the potential kth shortest path. - -B = []; - -for k from 1 to K: - -// The spur node ranges from the first node to the next to last node in the previous k-shortest path. - -for i from 0 to size(A[k - 1]) - 2: - -// Spur node is retrieved from the previous k-shortest path, k - 1. - -spurNode = A[k-1].node(i); - -// The sequence of nodes from the source to the spur node of the previous k-shortest path. - -rootPath = A[k-1].nodes(0, i); - -for each path p in A: - -if rootPath == p.nodes(0, i): - -// Remove the links that are part of the previous shortest paths which share the same root path. - -remove p.edge(i,i + 1) from Graph; - -for each node rootPathNode in rootPath except spurNode: - -remove rootPathNode from Graph; - -// Calculate the spur path from the spur node to the sink. - -// Consider also checking if any spurPath found - -spurPath = Dijkstra(Graph, spurNode, sink); - -// Entire path is made up of the root path and spur path. - -totalPath = rootPath + spurPath; - -// Add the potential k-shortest path to the heap. - -if (totalPath not in B): - -B.append(totalPath); - -// Add back the edges and nodes that were removed from the graph. - -restore edges to Graph; - -restore nodes in rootPath to Graph; - -if B is empty: - -// This handles the case of there being no spur paths, or no spur paths left. - -// This could happen if the spur paths have already been exhausted (added to A), - -// or there are no spur paths at all - such as when both the source and sink vertices - -// lie along a "dead end". - -break; - -// Sort the potential k-shortest paths by cost. - -B.sort(); - -// Add the lowest cost path becomes the k-shortest path. - -A[k] = B[0]; - -// In fact we should rather use shift since we are removing the first element - -B.pop(); - -return A; - -The example uses Yen's K-Shortest Path Algorithm to compute three paths from $(C)$ to $(H)$. Dijkstra's algorithm is used to calculate the best path from $(C)$ to $(H)$, which is $(C)-(E)-(F)-(H)$ with cost 5. This path is appended to container $A$ and becomes the first k-shortest path, $A^1$. - -Node $(C)$ of $A^1$ becomes the spur node with a root path of itself, ${R^2}_1 = (C)$. The edge, $(C)-(E)$, is removed because it coincides with the root path and a path in container $A$. Dijkstra's algorithm is used to compute the spur path ${S^2}_1$, which is $(C)-(D)-(F)-(H)$, with a cost of 8. ${A^2}_1 = {R^2}_1 + {S^2}_1 = (C)-(D)-(F)-(H)$ is added to container $B$ as a potential k-shortest path. - -Node $(E)$ of $A^1$ becomes the spur node with ${R^2}_2 = (C)-(E)$. The edge, $(E)-(F)$, is removed because it coincides with the root path and a path in container $A$. Dijkstra's algorithm is used to compute the spur path ${S^2}_2$, which is $(E)-(G)-(H)$, with a cost of 7. ${A^2}_2 = {R^2}_2 + {S^2}_2 = (C)-(E)-(G)-(H)$ is added to container $B$ as a potential k-shortest path. 
-
-Node $(F)$ of $A^1$ becomes the spur node with a root path, ${R^2}_3 = (C)-(E)-(F)$. The edge, $(F)-(H)$, is removed because it coincides with the root path and a path in container $A$. Dijkstra's algorithm is used to compute the spur path ${S^2}_3$, which is $(F)-(G)-(H)$, with a cost of 8. ${A^2}_3 = {R^2}_3 + {S^2}_3 = (C)-(E)-(F)-(G)-(H)$ is added to container $B$ as a potential k-shortest path.
-
-Of the three paths in container B, ${A^2}_2$ is chosen to become $A^2$ because it has the lowest cost of 7. This process is continued to the 3rd k-shortest path. However, within this 3rd iteration, note that some spur paths do not exist, and the path that is chosen to become $A^3$ is $(C)-(D)-(F)-(H)$.
-
-To store the edges of the graph, the shortest path list $A$, and the potential shortest path list $B$, $N^2 + KN$ memory addresses are required, where $N$ is the number of vertices in the graph. Yen's algorithm makes $Kl$ calls to Dijkstra's algorithm in computing the spur paths, where $l$ is the length of spur paths; in a condensed graph, the expected value of $l$ is $ O(\log N) $, while the worst case is $N$. Since one call to Dijkstra's algorithm on a graph with $M$ edges costs $O(M + N\log N)$, and at most $N$ spur paths are computed in each of the $K$ iterations, the time complexity becomes $O(KN(M + N\log N))$.
-
-Yen's algorithm can be improved by using a heap to store $B$, the set of potential k-shortest paths. Using a heap instead of a list will improve the performance of the algorithm, but not the complexity. One method to slightly decrease complexity is to skip the nodes where there are non-existent spur paths. This case is produced when all the spur paths from a spur node have been used in the previous $A^k$. Also, if container $B$ has $K-k$ paths of minimum length, in reference to those in container $A$, then they can be extracted and inserted into container $A$ since no shorter paths will be found.
-
-Eugene Lawler proposed a modification to Yen's algorithm in which duplicate paths are not calculated, as opposed to the original algorithm, where they are calculated and then discarded when they are found to be duplicates. These duplicate paths result from calculating spur paths of nodes in the root of $A^k$. For instance, $A^{k}$ deviates from $A^{k-1}$ at some node $(i)$. Any spur path, ${S^k}_j$ where $j = 0,\ldots,i$, that is calculated will be a duplicate because it has already been calculated during the $k-1$ iteration. Therefore, only spur paths for nodes that were on the spur path of $A^{k-1}$ must be calculated, i.e. only ${S^k}_h$ where $h$ ranges from $(i+1)^{k-1}$ to $(Q_k)^{k-1}$. To perform this operation for $A^{k}$, a record is needed to identify the node where $A^{k-1}$ branched from $A^{k-2}$.

diff --git a/wiki/wikipedia/311.txt b/wiki/wikipedia/311.txt
deleted file mode 100644
index 3655b738ac1868ebb0718647d2f9df2d5cd3113f..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/311.txt
+++ /dev/null
@@ -1,51 +0,0 @@
-In mathematics, $\in$-induction (epsilon-induction or set-induction) is a variant of transfinite induction.
-
-Considered as a set theory axiom schema, it is called the Axiom schema of set induction.
-
-It can be used in set theory to prove that all sets satisfy a given property. This is a special case of well-founded induction.
-
-It states, for any given property $\Phi$, that if for every set $x$, the truth of $\Phi(x)$ follows from the truth of $\Phi$ for all elements of $x$, then this property $\Phi$ holds for all sets.
-
-In symbols:
-$$
-\forall x. \Big(\left(\forall y \in x. \Phi(y)\right) \rightarrow \Phi(x)\Big)\ \rightarrow\ \forall z.
\Phi(z) -$$ - -Note that for the "bottom case" where $x$ denotes the empty set, the subexpression $\forall y \in x. \Phi(y)$ is vacuously true for all propositions. - -The above can be compared with $\omega$-induction over the natural numbers $n\in\{0, 1, 2,\dots\}$ for number properties $\phi$. This may be expressed as -$$ -\Big(\phi(0)\land \forall n. (\phi(n) \to \phi(n+1))\Big)\ \to\ \forall m. \phi(m) -$$ - -or, using the symbol for the tautologically true statement, $\top$, -$$ -\Big((\top \to \phi(0))\land \forall n. (\phi(n) \to \phi(n+1))\Big)\ \to\ \forall m. \phi(m) -$$ - -Fix a predicate $Q$ and define a new predicate equivalent to $Q$, except with an argument offset by one and equal to $\top$ for $n=0$. With this we can also get a statement equivalent to induction for $Q$, but without a conjunction. By abuse of notation, letting "$Q(-1)$" denote $\top$, an instance of the $\omega$-induction schema may thus be expressed as -$$ -\forall n. \Big(Q(n-1) \to Q(n)\Big)\ \to\ \forall m. Q(m) -$$ - -This now mirrors an instance of the set-induction schema. - -Conversely, set-induction may also be stated in a way that handles the bottom case explicitly. - -With classical tautologies such as $(A\to B)\iff(\neg A \lor B)$ and $(\neg A \lor B)\iff\neg( A \land \neg B)$, an instance of the $\omega$-induction principle can be translated to the following statement: -$$ -\exists n. \Big(Q(n-1) \land \neg Q(n)\Big)\ \lor\ \forall m. Q(m) -$$ - -This expresses that, for any property $Q$, either there is a (first) number $n$ for which $Q$ does not hold, despite $Q$ holding for the preceding case, or, if there is no such failure case, $Q$ is true for all numbers. - -Accordingly, in classical ZF, an instance of set-induction can be translated to the following statement, clarifying what form of counter-example prevents a set property $P$ from holding for all sets: -$$ -\exists x. \Big(\left(\forall y \in x. P(y)\right) \land \neg P(x)\Big)\ \lor\ \forall z. P(z) -$$ - -This expresses that, for any property $P$, either there is a set $x$ for which $P$ does not hold although $P$ is true for all elements of $x$, or $P$ holds for all sets. For any property, if one can prove that $\forall y \in x. P(y)$ implies $P(x)$, then the failure case is ruled out and the formula states that the disjunct $\forall z. P(z)$ must hold. - -In the context of the constructive set theory CZF, adopting the Axiom of regularity would imply the law of excluded middle and also set-induction. But then the resulting theory would be standard ZF. - -However, conversely, set-induction implies neither of the two. In other words, within a constructive logic framework, set-induction as stated above is strictly weaker than regularity. diff --git a/wiki/wikipedia/3110.txt b/wiki/wikipedia/3110.txt deleted file mode 100644 index 97a53cf57b901084f79fbd0d27df46481f33da89..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3110.txt +++ /dev/null @@ -1,23 +0,0 @@ -Hasse's theorem on elliptic curves, also referred to as the Hasse bound, provides an estimate of the number of points on an elliptic curve over a finite field, bounding the value both above and below. - -If N is the number of points on the elliptic curve E over a finite field with q elements, then Helmut Hasse's result states that -$$ -|N - (q+1)| \le 2 \sqrt{q}. -$$ - -The reason is that N differs from q + 1, the number of points of the projective line over the same field, by an 'error term' that is the sum of two complex numbers, each of absolute value $\sqrt{q}$.
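The bound is easy to verify numerically for small fields. Below is a minimal brute-force sketch (an illustration, not part of the original article; the prime $p$ and coefficients $a, b$ are arbitrary choices): it counts the points of $y^2 = x^3 + ax + b$ over $\mathbb{F}_p$, including the point at infinity, and checks $|N - (p+1)| \le 2\sqrt{p}$.

```
# Brute-force point count for E: y^2 = x^3 + a*x + b over F_p,
# followed by a check of the Hasse bound |N - (p+1)| <= 2*sqrt(p).
def hasse_check(p, a, b):
    assert (4 * a**3 + 27 * b**2) % p != 0  # nonzero discriminant: E is an elliptic curve
    # For each residue r mod p, count the y with y^2 = r (mod p).
    sqrt_count = [0] * p
    for y in range(p):
        sqrt_count[y * y % p] += 1
    n = 1  # start at 1 for the point at infinity
    for x in range(p):
        n += sqrt_count[(x**3 + a * x + b) % p]
    return n, abs(n - (p + 1)) <= 2 * p**0.5

print(hasse_check(101, 2, 3))  # prints the point count N and True
```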
- -This result had originally been conjectured by Emil Artin in his thesis. It was proven by Hasse in 1933, with the proof published in a series of papers in 1936. - -Hasse's theorem is equivalent to the determination of the absolute value of the roots of the local zeta-function of E. In this form it can be seen to be the analogue of the Riemann hypothesis for the function field associated with the elliptic curve. - -A generalization of the Hasse bound to higher genus algebraic curves is the Hasse–Weil bound. This provides a bound on the number of points on a curve over a finite field. If the number of points on the curve C of genus g over the finite field $\mathbb{F}_q$ of order q is $\#C(\mathbb{F}_q)$, then -$$ -|\#C(\mathbb{F}_q) - (q+1)| \le 2g \sqrt{q}. -$$ - -This result is again equivalent to the determination of the absolute value of the roots of the local zeta-function of C, and is the analogue of the Riemann hypothesis for the function field associated with the curve. - -The Hasse–Weil bound reduces to the usual Hasse bound when applied to elliptic curves, which have genus g=1. - -The Hasse–Weil bound is a consequence of the Weil conjectures, proposed by André Weil in 1949 and proved by Weil himself in the case of curves. diff --git a/wiki/wikipedia/3111.txt b/wiki/wikipedia/3111.txt deleted file mode 100644 index 227271594a25a445b7136e652c72cedaa70e7031..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3111.txt +++ /dev/null @@ -1 +0,0 @@ -#redirect Calabi conjecture diff --git a/wiki/wikipedia/3112.txt b/wiki/wikipedia/3112.txt deleted file mode 100644 index 56696a96b31b81d93c36449fe0927e15109a88d1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3112.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, the Cohen–Hewitt factorization theorem states that if $ V $ is a left module over a Banach algebra $ B $ with a left approximate unit $ (u_{i})_{i \in I} $, then an element $ v $ of $ V $ can be factorized as a product $ v = b w $ (for some $ b \in B $ and $ w \in V $) whenever $ \displaystyle \lim_{i \in I} u_{i} v = v $. The theorem was introduced by Paul Cohen and Edwin Hewitt, after whom it is named. diff --git a/wiki/wikipedia/3113.txt b/wiki/wikipedia/3113.txt deleted file mode 100644 index 1dc4fb6a11c96540de21be420de18635cb9d62f6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3113.txt +++ /dev/null @@ -1,77 +0,0 @@ -In computer science, an algorithm is called non-blocking if failure or suspension of any thread cannot cause failure or suspension of another thread; for some operations, these algorithms provide a useful alternative to traditional blocking implementations. A non-blocking algorithm is lock-free if there is guaranteed system-wide progress, and wait-free if there is also guaranteed per-thread progress. "Non-blocking" was used as a synonym for "lock-free" in the literature until the introduction of obstruction-freedom in 2003. - -The word "non-blocking" was traditionally used to describe telecommunications networks that could route a connection through a set of relays "without having to re-arrange existing calls" (see Clos network). Also, if the telephone exchange "is not defective, it can always make the connection" (see nonblocking minimal spanning switch). - -The traditional approach to multi-threaded programming is to use locks to synchronize access to shared resources.
Synchronization primitives such as mutexes, semaphores, and critical sections are all mechanisms by which a programmer can ensure that certain sections of code do not execute concurrently, if doing so would corrupt shared memory structures. If one thread attempts to acquire a lock that is already held by another thread, the thread will block until the lock is free. - -Blocking a thread can be undesirable for many reasons. An obvious reason is that while the thread is blocked, it cannot accomplish anything: if the blocked thread had been performing a high-priority or real-time task, it would be highly undesirable to halt its progress. - -Other problems are less obvious. For example, certain interactions between locks can lead to error conditions such as deadlock, livelock, and priority inversion. Using locks also involves a trade-off between coarse-grained locking, which can significantly reduce opportunities for parallelism, and fine-grained locking, which requires more careful design, increases locking overhead and is more prone to bugs. - -Unlike blocking algorithms, non-blocking algorithms do not suffer from these downsides, and in addition are safe for use in interrupt handlers: even though the preempted thread cannot be resumed, progress is still possible without it. In contrast, global data structures protected by mutual exclusion cannot safely be accessed in an interrupt handler, as the preempted thread may be the one holding the lock—but this can be rectified easily by masking the interrupt request during the critical section. - -A lock-free data structure can be used to improve performance. - -A lock-free data structure increases the amount of time spent in parallel execution rather than serial execution, improving performance on a multi-core processor, because access to the shared data structure does not need to be serialized to stay coherent. - -With few exceptions, non-blocking algorithms use atomic read-modify-write primitives that the hardware must provide, the most notable of which is compare and swap (CAS). Critical sections are almost always implemented using standard interfaces over these primitives (in the general case, critical sections will be blocking, even when implemented with these primitives). In the 1990s all non-blocking algorithms had to be written "natively" with the underlying primitives to achieve acceptable performance. However, the emerging field of software transactional memory promises standard abstractions for writing efficient non-blocking code. - -Much research has also been done in providing basic data structures such as stacks, queues, sets, and hash tables. These allow programs to easily exchange data between threads asynchronously. - -Additionally, some non-blocking data structures are weak enough to be implemented without special atomic primitives. These exceptions include: - -* a single-reader single-writer ring buffer FIFO, with a size which evenly divides the overflow of one of the available unsigned integer types, can unconditionally be implemented safely using only a memory barrier - -* Read-copy-update with a single writer and any number of readers. (The readers are wait-free; the writer is usually lock-free, until it needs to reclaim memory). - -* Read-copy-update with multiple writers and any number of readers. (The readers are wait-free; multiple writers generally serialize with a lock and are not obstruction-free). - -Several libraries internally use lock-free techniques, but it is difficult to write lock-free code that is correct. 
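The single-reader single-writer ring buffer mentioned in the list above is simple enough to sketch. The following is a hedged illustration rather than anything from the article: it uses the simpler one-empty-slot variant instead of the free-running-index trick described above, and it leans on CPython's global interpreter lock for the ordering guarantees that explicit memory barriers would have to provide in C or C++.

```
class SpscRingBuffer:
    # Single-producer single-consumer queue: no locks, no CAS.
    # `tail` is written only by the producer and `head` only by the
    # consumer, so neither index is ever written concurrently.
    def __init__(self, capacity):
        self._buf = [None] * (capacity + 1)  # one slot is kept empty
        self._head = 0  # next slot to read (owned by the consumer)
        self._tail = 0  # next slot to write (owned by the producer)

    def push(self, item):  # called from the producer thread only
        nxt = (self._tail + 1) % len(self._buf)
        if nxt == self._head:  # buffer full: fail instead of blocking
            return False
        self._buf[self._tail] = item  # write the payload first...
        self._tail = nxt              # ...then publish it
        return True

    def pop(self):  # called from the consumer thread only
        if self._head == self._tail:  # buffer empty
            return None
        item = self._buf[self._head]
        self._head = (self._head + 1) % len(self._buf)
        return item
```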
- -Non-blocking algorithms generally involve a series of read, read-modify-write, and write instructions in a carefully designed order. Optimizing compilers can aggressively re-arrange operations. Even when they don't, many modern CPUs often re-arrange such operations (they have a "weak consistency model"), unless a memory barrier is used to tell the CPU not to reorder. C++11 programmers can use std::atomic in <atomic>, and C11 programmers can use <stdatomic.h>, both of which supply types and functions that tell the compiler not to re-arrange such instructions, and to insert the appropriate memory barriers. - -Wait-freedom is the strongest non-blocking guarantee of progress, combining guaranteed system-wide throughput with starvation-freedom. An algorithm is wait-free if every operation has a bound on the number of steps the algorithm will take before the operation completes. - -This property is critical for real-time systems and is always nice to have as long as the performance cost is not too high. - -It was shown in the 1980s that all algorithms can be implemented wait-free, and many transformations from serial code, called universal constructions, have been demonstrated. However, the resulting performance does not in general match even naïve blocking designs. Several papers have since improved the performance of universal constructions, but still, their performance is far below blocking designs. - -Several papers have investigated the difficulty of creating wait-free algorithms. For example, it has been shown that the widely available atomic conditional primitives, CAS and LL/SC, cannot provide starvation-free implementations of many common data structures without memory costs growing linearly in the number of threads. - -But in practice these lower bounds do not present a real barrier, as spending a cache line or exclusive reservation granule (up to 2 KB on ARM) of store per thread in the shared memory is not considered too costly for practical systems (typically the amount of store logically required is a word, but physically CAS operations on the same cache line will collide, and LL/SC operations in the same exclusive reservation granule will collide, so the amount of store physically required is greater). - -Wait-free algorithms were rare until 2011, both in research and in practice. However, in 2011 Kogan and Petrank presented a wait-free queue building on the CAS primitive, generally available on common hardware. Their construction expanded the lock-free queue of Michael and Scott, which is an efficient queue often used in practice. A follow-up paper by Kogan and Petrank provided a method for making wait-free algorithms fast and used this method to make the wait-free queue practically as fast as its lock-free counterpart. A subsequent paper by Timnat and Petrank provided an automatic mechanism for generating wait-free data structures from lock-free ones. Thus, wait-free implementations are now available for many data structures. - -Lock-freedom allows individual threads to starve but guarantees system-wide throughput. An algorithm is lock-free if, when the program threads are run for a sufficiently long time, at least one of the threads makes progress (for some sensible definition of progress). - -All wait-free algorithms are lock-free. - -In particular, if one thread is suspended, then a lock-free algorithm guarantees that the remaining threads can still make progress. Hence, if two threads can contend for the same mutex lock or spinlock, then the algorithm is not lock-free.
(If we suspend one thread that holds the lock, then the second thread will block.) - -An algorithm is lock-free if, infinitely often, operations by some processors will succeed in a finite number of steps. For instance, if N processors are trying to execute an operation, some of the N processors will succeed in finishing the operation in a finite number of steps, and others might fail and retry on failure. The difference between wait-free and lock-free is that wait-free operation by each process is guaranteed to succeed in a finite number of steps, regardless of the other processors. - -In general, a lock-free algorithm can run in four phases: completing one's own operation, assisting an obstructing operation, aborting an obstructing operation, and waiting. Completing one's own operation is complicated by the possibility of concurrent assistance and abortion, but is invariably the fastest path to completion. - -The decision about when to assist, abort or wait when an obstruction is met is the responsibility of a contention manager. This may be very simple (assist higher priority operations, abort lower priority ones), or may be more optimized to achieve better throughput, or lower the latency of prioritized operations. - -Correct concurrent assistance is typically the most complex part of a lock-free algorithm, and often very costly to execute: not only does the assisting thread slow down, but thanks to the mechanics of shared memory, the thread being assisted will be slowed, too, if it is still running. - -Obstruction-freedom is the weakest natural non-blocking progress guarantee. An algorithm is obstruction-free if at any point, a single thread executed in isolation (i.e., with all obstructing threads suspended) for a bounded number of steps will complete its operation. All lock-free algorithms are obstruction-free. - -Obstruction-freedom demands only that any partially completed operation can be aborted and the changes made rolled back. Dropping concurrent assistance can often result in much simpler algorithms that are easier to validate. Preventing the system from continually live-locking is the task of a contention manager. - -Some obstruction-free algorithms use a pair of "consistency markers" in the data structure. Processes reading the data structure first read one consistency marker, then read the relevant data into an internal buffer, then read the other marker, and then compare the markers. The data is consistent if the two markers are identical. Markers may be non-identical when the read is interrupted by another process updating the data structure. In such a case, the process discards the data in the internal buffer and tries again. diff --git a/wiki/wikipedia/3114.txt b/wiki/wikipedia/3114.txt deleted file mode 100644 index eb89caed9a99794525424289f0af2f078b109ee0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3114.txt +++ /dev/null @@ -1,29 +0,0 @@ -Q.E.D. or QED is an initialism of the Latin phrase quod erat demonstrandum, meaning "which was to be demonstrated". Literally it states "what was to be shown". Traditionally, the abbreviation is placed at the end of mathematical proofs and philosophical arguments in print publications, to indicate that the proof or the argument is complete. - -The phrase quod erat demonstrandum is a translation into Latin from the Greek ὅπερ ἔδει δεῖξαι (hóper édei deîxai; abbreviated as ΟΕΔ). Translating from the Latin phrase into English yields "what was to be demonstrated". However, translating the Greek phrase can produce a slightly different meaning.
In particular, since the verb δείκνυμι also means to show or to prove, a different translation from the Greek phrase would read "The very thing it was required to have shown." - -The Greek phrase was used by many early Greek mathematicians, including Euclid and Archimedes. The translated Latin phrase (and its associated acronym) was subsequently used by many post-Renaissance mathematicians and philosophers, including Galileo, Spinoza, Isaac Barrow and Isaac Newton. - -During the European Renaissance, scholars often wrote in Latin, and phrases such as Q.E.D. were often used to conclude proofs. - -Perhaps the most famous use of Q.E.D. in a philosophical argument is found in the Ethics of Baruch Spinoza, published posthumously in 1677. Written in Latin, it is considered by many to be Spinoza's magnum opus. The style and system of the book are, as Spinoza says, "demonstrated in geometrical order", with axioms and definitions followed by propositions. For Spinoza, this is a considerable improvement over René Descartes's writing style in the Meditations, which follows the form of a diary. - -There is another Latin phrase with a slightly different meaning, usually shortened similarly, but less common in use: quod erat faciendum, originating from the Greek geometers' closing ὅπερ ἔδει ποιῆσαι (hóper édei poiêsai), meaning "which had to be done". Because of the difference in meaning, the two phrases should not be confused. - -Euclid used the Greek original of Quod Erat Faciendum (Q.E.F.) to close propositions that were not proofs of theorems, but constructions of geometric objects. For example, Euclid's first proposition, showing how to construct an equilateral triangle given one side, is concluded this way. - -There is no common formal English equivalent, although the end of a proof may be announced with a simple statement such as "this completes the proof", "as required", "as desired", "as expected", "hence proved", "ergo", "so correct", or other similar locutions. WWWWW or W5 – an abbreviation of "Which Was What Was Wanted" – has been used similarly. Often this is considered to be more tongue-in-cheek than Q.E.D. or the Halmos tombstone symbol (see below). - -Due to the paramount importance of proofs in mathematics, mathematicians since the time of Euclid have developed conventions to demarcate the beginning and end of proofs. In printed English language texts, the formal statements of theorems, lemmas, and propositions are set in italics by tradition. The beginning of a proof usually follows immediately thereafter, and is indicated by the word "proof" in boldface or italics. On the other hand, several symbolic conventions exist to indicate the end of a proof. - -While some authors still use the classical abbreviation, Q.E.D., it is relatively uncommon in modern mathematical texts. Paul Halmos pioneered the use of a solid black square at the end of a proof as a Q.E.D. symbol, a practice which has become standard, although not universal. Halmos adopted this use of a symbol from magazine typography customs in which simple geometric shapes had been used to indicate the end of an article. This symbol was later called the tombstone, the Halmos symbol, or even a halmos by mathematicians. Often the Halmos symbol is drawn on a chalkboard to signal the end of a proof during a lecture, although this practice is not so common as its use in printed text.
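In LaTeX this convention is automated by the amsthm package, whose proof environment appends an end-of-proof mark to every proof. A minimal sketch (an illustration, not part of the original article) that switches the default mark to the solid Halmos square:

```
\documentclass{article}
\usepackage{amssymb}  % provides \blacksquare
\usepackage{amsthm}   % provides the proof environment and \qedsymbol
% The default \qedsymbol is a hollow square; use the solid tombstone instead:
\renewcommand{\qedsymbol}{$\blacksquare$}
\begin{document}
\begin{proof}
Trivial.  % the mark is placed automatically at the end of the proof
\end{proof}
\end{document}
```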
In the AMS Theorem Environment for LaTeX, the hollow square is the default end-of-proof symbol. Unicode explicitly provides the "end of proof" character, U+220E (∎). Some authors use other Unicode symbols to note the end of a proof, including, ▮ (U+25AE, a black vertical rectangle), and ‣ (U+2023, a triangular bullet). Other authors have adopted two forward slashes (//) or four forward slashes (////). In other cases, authors have elected to segregate proofs typographically—by displaying them as indented blocks. - -In Joseph Heller's book Catch-22, the Chaplain, having been told to examine a forged letter allegedly signed by him (which he knew he didn't sign), verified that his name was in fact there. His investigator replied, "Then you wrote it. Q.E.D." The chaplain said he didn't write it and that it wasn't his handwriting, to which the investigator replied, "Then you signed your name in somebody else's handwriting again." - -In the 1978 science-fiction radio comedy, and later in the television, novel, and film adaptations of The Hitchhiker's Guide to the Galaxy, "Q.E.D." is referred to in the Guide's entry for the babel fish, when it is claimed that the babel fish – which serves the "mind-bogglingly" useful purpose of being able to translate any spoken language when inserted into a person's ear – is used as evidence for existence and non-existence of God. The exchange from the novel is as follows: "'I refuse to prove I exist,' says God, 'for proof denies faith, and without faith I am nothing.' 'But,' says Man, 'The babel fish is a dead giveaway, isn't it? It could not have evolved by chance. It proves you exist, and so therefore, by your own arguments, you don't. QED.' 'Oh dear,' says God, 'I hadn't thought of that,' and promptly vanishes in a puff of logic." - -In Neal Stephenson's 1999 novel Cryptonomicon, Q.E.D. is used as a punchline to several humorous anecdotes, in which characters go to great lengths to prove something non-mathematical. - -Singer-songwriter Thomas Dolby's 1988 song "Airhead" includes the lyric, "Quod erat demonstrandum, baby," referring to the self-evident vacuousness of the eponymous subject; and in response, a female voice squeals, delightedly, "Oooh... you speak French!" diff --git a/wiki/wikipedia/3115.txt b/wiki/wikipedia/3115.txt deleted file mode 100644 index 8d372b0793566497d63dfb31bcbfee8b9ae0ca1d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3115.txt +++ /dev/null @@ -1,40 +0,0 @@ -Stein's example (or phenomenon or paradox), in decision theory and estimation theory, is the phenomenon that when three or more parameters are estimated simultaneously, there exist combined estimators more accurate on average (that is, having lower expected mean squared error) than any method that handles the parameters separately. It is named after Charles Stein of Stanford University, who discovered the phenomenon in 1955. - -An intuitive explanation is that optimizing for the mean-squared error of a combined estimator is not the same as optimizing for the errors of separate estimators of the individual parameters. In practical terms, if the combined error is in fact of interest, then a combined estimator should be used, even if the underlying parameters are independent. If one is instead interested in estimating an individual parameter, then using a combined estimator does not help and is in fact worse. 
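This "better only on average" behaviour is easy to observe in simulation. The sketch below (an illustration, not from the article; the dimension, true means, and seed are arbitrary choices) compares the total mean squared error of the ordinary estimator $X$ with that of the James–Stein estimator introduced later in the article, which shrinks $X$ towards the origin by the factor $1 - (n-2)/\|X\|^2$.

```
# Monte-Carlo comparison of the ordinary estimator X with the
# James-Stein estimator (1 - (n-2)/||X||^2) X, for X ~ N(theta, I_n).
import numpy as np

rng = np.random.default_rng(0)
n, trials = 10, 200_000
theta = np.linspace(-1.0, 1.0, n)  # an arbitrary true mean vector

X = theta + rng.standard_normal((trials, n))
shrink = 1.0 - (n - 2) / (X**2).sum(axis=1, keepdims=True)
js = shrink * X  # James-Stein shrinkage toward the origin

mse_ordinary = ((X - theta)**2).sum(axis=1).mean()  # close to n
mse_js = ((js - theta)**2).sum(axis=1).mean()       # strictly smaller
print(mse_ordinary, mse_js)
```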
- -The following is perhaps the simplest form of the paradox, the special case in which the number of observations is equal to the number of parameters to be estimated. Let θ be a vector consisting of n ≥ 3 unknown parameters. To estimate these parameters, a single measurement Xi is performed for each parameter θi, resulting in a vector X of length n. Suppose the measurements are known to be independent, Gaussian random variables, with mean θ and variance 1, i.e., -$$ -{\mathbf X} \sim N({\boldsymbol \theta}, 1). -$$ - -Thus, each parameter is estimated using a single noisy measurement, and each measurement is equally inaccurate. - -Under these conditions, it is intuitive and common to use each measurement as an estimate of its corresponding parameter. This so-called "ordinary" decision rule can be written as -$$ -\hat {\boldsymbol \theta} = {\mathbf X}. -$$ - -The quality of such an estimator is measured by its risk function. A commonly used risk function is the mean squared error, defined as -$$ -\operatorname{E} \left[ \left\| {\boldsymbol \theta} - \hat {\boldsymbol \theta} \right\|^2 \right]. -$$ - -Surprisingly, it turns out that the "ordinary" estimator proposed above is suboptimal in terms of mean squared error when n ≥ 3. In other words, in the setting discussed here, there exist alternative estimators which always achieve lower mean squared error, no matter what the value of ${\boldsymbol \theta}$ is. - -For a given θ one could obviously define a perfect "estimator" which is always just θ, but this estimator would be bad for other values of θ. The estimators of Stein's paradox are, for a given θ, better than X for some values of X but necessarily worse for others (except perhaps for one particular θ vector, for which the new estimate is always better than X). It is only on average that they are better. - -More accurately, an estimator $\hat {\boldsymbol \theta}_1$ is said to dominate another estimator $\hat {\boldsymbol \theta}_2$ if, for all values of ${\boldsymbol \theta}$, the risk of $\hat {\boldsymbol \theta}_1$ is lower than, or equal to, the risk of $\hat {\boldsymbol \theta}_2$, and if the inequality is strict for some ${\boldsymbol \theta}$. An estimator is said to be admissible if no other estimator dominates it, otherwise it is inadmissible. Thus, Stein's example can be simply stated as follows: The ordinary decision rule for estimating the mean of a multivariate Gaussian distribution is inadmissible under mean squared error risk. - -Many simple, practical estimators achieve better performance than the ordinary estimator. The best-known example is the James–Stein estimator, which works by starting at X and moving towards a particular point (such as the origin) by an amount inversely proportional to the distance of X from that point. - -For a sketch of the proof of this result, see Proof of Stein's example. An alternative proof is due to Larry Brown: he proved that the ordinary estimator for a n-dimensional multivariate normal mean vector is admissible if and only if the n-dimensional Brownian motion is recurrent. Since the Brownian motion is not recurrent for n ≥ 3, the ordinary estimator is not admissible for n ≥ 3. - -Stein's example is surprising, since the "ordinary" decision rule is intuitive and commonly used. In fact, numerous methods for estimator construction, including maximum likelihood estimation, best linear unbiased estimation, least squares estimation and optimal equivariant estimation, all result in the "ordinary" estimator. 
Yet, as discussed above, this estimator is suboptimal. - -To demonstrate the unintuitive nature of Stein's example, consider the following real-world example. Suppose we are to estimate three unrelated parameters, such as the US wheat yield for 1993, the number of spectators at the Wimbledon tennis tournament in 2001, and the weight of a randomly chosen candy bar from the supermarket. Suppose we have independent Gaussian measurements of each of these quantities. Stein's example now tells us that we can get a better estimate (on average) for the vector of three parameters by simultaneously using the three unrelated measurements. - -At first sight it appears that somehow we get a better estimator for US wheat yield by measuring some other unrelated statistics such as the number of spectators at Wimbledon and the weight of a candy bar. This is of course absurd; we have not obtained a better estimator for US wheat yield by itself, but we have produced an estimator for the vector of the means of all three random variables, which has a reduced total risk. This occurs because the cost of a bad estimate in one component of the vector is compensated by a better estimate in another component. Also, a specific set of the three estimated mean values obtained with the new estimator will not necessarily be better than the ordinary set (the measured values). It is only on average that the new estimator is better. - -For any particular value of θ the new estimator will improve at least one of the individual mean square errors $\operatorname{E} \left[ \left( {\theta_i} - {\hat \theta_i} \right)^2 \right].$ This is not hard − for instance, if $\theta_1$ is between −1 and 1, and σ = 1, then an estimator that moves $X_1$ towards 0 by 0.5 (or sets it to zero if its absolute value was less than 0.5) will have a lower mean square error than $X_1$ itself. But there are other values of $\theta_1$ for which this estimator is worse than $X_1$ itself. The trick of the Stein estimator, and others that yield the Stein paradox, is that they adjust the shift in such a way that there is always (for any θ vector) at least one $X_i$ whose mean square error is improved, and its improvement more than compensates for any degradation in mean square error that might occur for another $\hat \theta_i$. The trouble is that, without knowing θ, you don't know which of the n mean square errors are improved, so you can't use the Stein estimator only for those parameters. - -An example of the above setting occurs in channel estimation in telecommunications, for instance, because different factors affect overall channel performance. diff --git a/wiki/wikipedia/3116.txt b/wiki/wikipedia/3116.txt deleted file mode 100644 index 988b3a490fc4a069fb01d03c6ed67eb1ec49762e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3116.txt +++ /dev/null @@ -1,3 +0,0 @@ -In finite group theory, Jordan's theorem states that if a primitive permutation group G is a subgroup of the symmetric group Sn and contains a p-cycle for some prime number p < n - 2, then G is either the whole symmetric group Sn or the alternating group An. It was first proved by Camille Jordan. - -The statement can be generalized to the case that p is a prime power. 
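The theorem can be illustrated computationally with SymPy's permutation groups (a sketch under the assumption that SymPy is available; the generators are an arbitrary choice). The group below is a primitive subgroup of $S_8$ containing a 5-cycle, and 5 is a prime with $5 < 8 - 2$, so by Jordan's theorem it must be $A_8$ or $S_8$; the order check confirms this.

```
# Jordan's theorem, illustrated: a primitive subgroup of S_n containing
# a p-cycle, with p prime and p < n - 2, must be A_n or S_n.
from math import factorial
from sympy.combinatorics import Permutation, PermutationGroup

n = 8
five_cycle = Permutation([1, 2, 3, 4, 0] + list(range(5, n)))  # (0 1 2 3 4)
full_cycle = Permutation(list(range(1, n)) + [0])              # (0 1 ... 7)
G = PermutationGroup([five_cycle, full_cycle])

assert G.is_primitive()  # hypothesis of the theorem
assert G.order() in (factorial(n), factorial(n) // 2)  # conclusion: S_8 or A_8
```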
diff --git a/wiki/wikipedia/3117.txt b/wiki/wikipedia/3117.txt deleted file mode 100644 index a09cb4c086fe99cac8aa698e21406111e1e2bf0d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3117.txt +++ /dev/null @@ -1,32 +0,0 @@ -In numerical linear algebra, the Cuthill–McKee algorithm (CM), named for Elizabeth Cuthill and James McKee, is an algorithm to permute a sparse matrix that has a symmetric sparsity pattern into a band matrix form with a small bandwidth. The reverse Cuthill–McKee algorithm (RCM) due to Alan George and Joseph Liu is the same algorithm but with the resulting index numbers reversed. In practice this generally results in less fill-in than the CM ordering when Gaussian elimination is applied. - -The Cuthill–McKee algorithm is a variant of the standard breadth-first search algorithm used in graph algorithms. It starts with a peripheral node and then generates levels $R_i$ for $i=1, 2, \dots$ until all nodes are exhausted. The set $R_{i+1}$ is created from the set $R_i$ by listing all vertices adjacent to all nodes in $R_i$. These nodes are ordered according to predecessors and degree. - -Given a symmetric $n\times n$ matrix, we visualize the matrix as the adjacency matrix of a graph. The Cuthill–McKee algorithm is then a relabeling of the vertices of the graph to reduce the bandwidth of the adjacency matrix. - -The algorithm produces an ordered n-tuple $R$ of vertices, which is the new order of the vertices. - -First we choose a peripheral vertex (the vertex with the lowest degree) $x$ and set $R := ( \{ x \})$. - -Then for $i = 1,2,\dots$ we iterate the following steps while $|R| < n$: - -*Construct the adjacency set $A_i$ of $R_i$ (with $R_i$ the i-th component of $R$) and exclude the vertices we already have in $R$: -$$ -A_i := \operatorname{Adj}(R_i) \setminus R -$$ - -*Sort $A_i$ ascending by minimum predecessor (the already-visited neighbor with the earliest position in $R$), and as a tiebreak ascending by vertex degree. - -*Append $A_i$ to the result set $R$. - -In other words, number the vertices according to a particular level structure (computed by breadth-first search) where the vertices in each level are visited in order of their predecessor's numbering from lowest to highest. Where the predecessors are the same, vertices are distinguished by degree (again ordered from lowest to highest). diff --git a/wiki/wikipedia/3118.txt b/wiki/wikipedia/3118.txt deleted file mode 100644 index ef56e5185ff845226a2dfd30cf4dcf835eee878e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3118.txt +++ /dev/null @@ -1,107 +0,0 @@ -In information theory and statistics, Kullback's inequality is a lower bound on the Kullback–Leibler divergence expressed in terms of the large deviations rate function. If P and Q are probability distributions on the real line, such that P is absolutely continuous with respect to Q, i.e. $P \ll Q$, and whose first moments exist, then -$$ -D_{KL}(P\|Q) \ge \Psi_Q^*(\mu'_1(P)), -$$ -where $\Psi_Q^*$ is the rate function, i.e. the convex conjugate of the cumulant-generating function, of $Q$, and $\mu'_1(P)$ is the first moment of $P$. The Cramér–Rao bound is a corollary of this result. - -To prove it, let $P$ and $Q$ be probability distributions (measures) on the real line, whose first moments exist, and such that $P \ll Q$. Consider the natural exponential family of $Q$ given by - -Q_\theta(A) = \frac{\int_A e^{\theta x}Q(dx)}{\int_{-\infty}^\infty e^{\theta x}Q(dx)} = \frac{1}{M_Q(\theta)} \int_A e^{\theta x}Q(dx) - -for every measurable set A, where $M_Q$ is the moment-generating function of Q. (Note that $Q_0=Q$.) Then - -D_{KL}(P\|Q) = D_{KL}(P\|Q_\theta) + \int_{\mathrm{supp}P}\left(\log\frac{\mathrm dQ_\theta}{\mathrm dQ}\right)\mathrm dP.
- -By Gibbs' inequality we have $D_{KL}(P\|Q_\theta) \ge 0$ so that - -D_{KL}(P\|Q) \ge \int_{\mathrm{supp}P}\left(\log\frac{\mathrm dQ_\theta}{\mathrm dQ}\right)\mathrm dP = \int_{\mathrm{supp}P}\left(\log\frac{e^{\theta x}}{M_Q(\theta)}\right) P(dx). - -Simplifying the right side, we have, for every real θ where $M_Q(\theta) < \infty$: -$$ -D_{KL}(P\|Q) \ge \mu'_1(P) \theta - \Psi_Q(\theta), -$$ - -where $\mu'_1(P)$ is the first moment, or mean, of P, and $\Psi_Q = \log M_Q$ is called the cumulant-generating function. Taking the supremum completes the process of convex conjugation and yields the rate function: - -D_{KL}(P\|Q) \ge \sup_\theta \left\{ \mu'_1(P) \theta - \Psi_Q(\theta) \right\} = \Psi_Q^*(\mu'_1(P)). - -Let $X_\theta$ be a family of probability distributions on the real line indexed by the real parameter θ, and satisfying certain regularity conditions. Then - -\lim_{h\rightarrow 0} \frac {D_{KL}(X_{\theta+h}\|X_\theta)} {h^2} \ge \lim_{h\rightarrow 0} \frac {\Psi^*_\theta (\mu_{\theta+h})}{h^2}, - -where $\Psi^*_\theta$ is the convex conjugate of the cumulant-generating function of $X_\theta$ and $\mu_{\theta+h}$ is the first moment of $X_{\theta+h}.$ - -The left side of this inequality can be simplified as follows: - -\begin{align} - -\lim_{h\to 0} \frac {D_{KL}(X_{\theta+h}\|X_\theta)} {h^2} &=\lim_{h\to 0} \frac 1 {h^2} \int_{-\infty}^\infty \log \left( \frac{\mathrm dX_{\theta+h}}{\mathrm dX_\theta} \right) \mathrm dX_{\theta+h} \\ - -&=-\lim_{h\to 0} \frac 1 {h^2} \int_{-\infty}^\infty \log \left( \frac{\mathrm dX_{\theta}}{\mathrm dX_{\theta+h}} \right) \mathrm dX_{\theta+h} \\ - -&=-\lim_{h\to 0} \frac 1 {h^2} \int_{-\infty}^\infty \log\left( 1- \left (1-\frac{\mathrm dX_{\theta}}{\mathrm dX_{\theta+h}} \right ) \right) \mathrm dX_{\theta+h} \\ - -&= \lim_{h\to 0} \frac 1 {h^2} \int_{-\infty}^\infty \left[ \left( 1 - \frac{\mathrm dX_\theta}{\mathrm dX_{\theta+h}} \right) +\frac 1 2 \left( 1 - \frac{\mathrm dX_\theta}{\mathrm dX_{\theta+h}} \right) ^ 2 + o \left( \left( 1 - \frac{\mathrm dX_\theta}{\mathrm dX_{\theta+h}} \right) ^ 2 \right) \right]\mathrm dX_{\theta+h} && \text{Taylor series for } \log(1-t) \\ - -&= \lim_{h\to 0} \frac 1 {h^2} \int_{-\infty}^\infty \left[ \frac 1 2 \left( 1 - \frac{\mathrm dX_\theta}{\mathrm dX_{\theta+h}} \right)^2 \right]\mathrm dX_{\theta+h} \\ - -&= \lim_{h\to 0} \frac 1 {h^2} \int_{-\infty}^\infty \left[ \frac 1 2 \left( \frac{\mathrm dX_{\theta+h} - \mathrm dX_\theta}{\mathrm dX_{\theta+h}} \right)^2 \right]\mathrm dX_{\theta+h} \\ - -&= \frac 1 2 \mathcal I_X(\theta) - -\end{align} - -which is half the Fisher information of the parameter θ. (The first-order Taylor term drops out in the fifth line because it integrates to zero: both $X_\theta$ and $X_{\theta+h}$ are probability measures.) - -The right side of the inequality can be developed as follows: - -\lim_{h\rightarrow 0} \frac {\Psi^*_\theta (\mu_{\theta+h})}{h^2} = \lim_{h\rightarrow 0} \frac 1 {h^2} {\sup_t \{\mu_{\theta+h}t - \Psi_\theta(t)\} }. - -This supremum is attained at a value of $t=\tau$ where the first derivative of the cumulant-generating function is $\Psi'_\theta(\tau) = \mu_{\theta+h},$ but we have $\Psi'_\theta(0) = \mu_\theta,$ so that -$$ -\Psi''_\theta(0) = \frac{d\mu_\theta}{d\theta} \lim_{h \rightarrow 0} \frac h \tau. -$$ - -Moreover, - -\lim_{h\rightarrow 0} \frac {\Psi^*_\theta (\mu_{\theta+h})}{h^2} = \frac 1 {2\Psi''_\theta(0)}\left(\frac {d\mu_\theta}{d\theta}\right)^2 = \frac 1 {2\mathrm{Var}(X_\theta)}\left(\frac {d\mu_\theta}{d\theta}\right)^2.
 - -We have: - -\frac 1 2 \mathcal I_X(\theta) \ge \frac 1 {2\mathrm{Var}(X_\theta)}\left(\frac {d\mu_\theta}{d\theta}\right)^2, - -which can be rearranged as: -$$ -\mathrm{Var}(X_\theta) \ge \frac{(d\mu_\theta / d\theta)^2} {\mathcal I_X(\theta)}. -$$ diff --git a/wiki/wikipedia/3119.txt b/wiki/wikipedia/3119.txt deleted file mode 100644 index 4f8e968533fe3ebf227cd7306a4c0f6b3025094d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3119.txt +++ /dev/null @@ -1,7 +0,0 @@ -A data island is a data store, such as on a PDA or other computing device, that has non-existent or limited external connectivity. This limits the ability of the user to synchronize with or copy the data to other devices. Though new data can be added to the system, the ability to move that data elsewhere is impractical or impossible. A data island, in general, contains a very large set of data relative to the small physical space it occupies. - -The connectivity here does not necessarily imply a hardware interface. For example, it may be a result of poorly written system interface software. - -A data island is a subset of entities that are connected to each other via relationships, but that are independent of other entities within the same data store. - -Category:Data synchronization diff --git a/wiki/wikipedia/312.txt b/wiki/wikipedia/312.txt deleted file mode 100644 index e0464f074827acc26a480ae165723f88e84e6cc3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/312.txt +++ /dev/null @@ -1,69 +0,0 @@ -In hyperbolic geometry, two lines may intersect, be ultraparallel, or be limiting parallel. - -The ultraparallel theorem states that every pair of (distinct) ultraparallel lines has a unique common perpendicular (a hyperbolic line which is perpendicular to both lines). - -Let r and s be two ultraparallel lines. - -From any two distinct points A and C on s draw AB and CB' perpendicular to r with B and B' on r. - -If it happens that AB = CB', then the desired common perpendicular joins the midpoints of AC and BB' (by the symmetry of the Saccheri quadrilateral ACB'B). - -If not, we may suppose AB < CB' without loss of generality. Let E be a point on the line s on the opposite side of A from C. Take A' on CB' so that A'B' = AB. Through A' draw a line s' (A'E') on the side closer to E, so that the angle B'A'E' is the same as angle BAE. Then s' meets s in an ordinary point D'. Construct a point D on ray AE so that AD = A'D'. - -Then D' ≠ D. They are the same distance from r and both lie on s. So the perpendicular bisector of D'D (a segment of s) is also perpendicular to r. - -(If r and s were asymptotically parallel rather than ultraparallel, this construction would fail because s' would not meet s. Rather s' would be asymptotically parallel to both s and r.) - -Let -$$ -a < b < c < d -$$ - -be four distinct points on the abscissa of the Cartesian plane. Let $p$ and $q$ be semicircles above the abscissa with diameters $ab$ and $cd$ respectively. Then in the Poincaré half-plane model HP, $p$ and $q$ represent ultraparallel lines. - -Compose the following two hyperbolic motions: -$$ -x \to x-a -$$ -$$ -\mbox{inversion in the unit semicircle.} -$$ - -Then $a \to \infty, \quad b \to (b-a)^{-1},\quad c \to (c-a)^{-1},\quad d \to (d-a)^{-1}.$ - -Now continue with these two hyperbolic motions: -$$ -x \to x-(b-a)^{-1} -$$ -$$ -x \to \left [ (c-a)^{-1} - (b-a)^{-1} \right ]^{-1} x -$$ - -Then $a$ stays at $\infty$, $b \to 0$, $c \to 1$, $d \to z$ (say).
The unique semicircle, with center at the origin, perpendicular to the one on $1z$, must have a radius tangent to the radius of the other. The right triangle formed by the abscissa and the perpendicular radii has hypotenuse of length $\tfrac{1}{2}(z+1)$. Since $\tfrac{1}{2}(z-1)$ is the radius of the semicircle on $1z$, the common perpendicular sought has radius-square -$$ -\frac{1}{4} \left [ (z+1)^2 - (z-1)^2 \right ] = z. -$$ - -The four hyperbolic motions that produced $z$ above can each be inverted and applied in reverse order to the semicircle centered at the origin and of radius $\sqrt{z}$ to yield the unique hyperbolic line perpendicular to both ultraparallels $p$ and $q$. - -In the Beltrami-Klein model of the hyperbolic geometry: - -* Two ultraparallel lines correspond to two non-intersecting chords. - -* The poles of these two lines are the respective intersections of the tangent lines to the boundary circle at the endpoints of the chords. - -* Lines perpendicular to line l are modeled by chords whose extension passes through the pole of l. - -* Hence we draw the unique line between the poles of the two given lines, and intersect it with the boundary circle; the chord of intersection will be the desired common perpendicular of the ultraparallel lines. - -If one of the chords happens to be a diameter, we do not have a pole, but in this case any chord perpendicular to the diameter is also perpendicular in the Beltrami-Klein model, and so we draw a line through the pole of the other line intersecting the diameter at right angles to get the common perpendicular. - -The proof is completed by showing this construction is always possible: - -* If both chords are diameters, they intersect (at the center of the boundary circle). - -* If only one of the chords is a diameter, the other chord projects orthogonally down to a section of the first chord contained in its interior, and a line from the pole orthogonal to the diameter intersects both the diameter and the chord. - -* If neither chord is a diameter, then we may extend the tangents drawn from each pole to produce a quadrilateral with the unit circle inscribed within it. The poles are opposite vertices of this quadrilateral, and the chords are lines drawn between adjacent sides of the vertex, across opposite corners. Since the quadrilateral is convex, the line between the poles intersects both of the chords drawn across the corners, and the segment of the line between the chords defines the required chord perpendicular to the two other chords. - -Alternatively, we can construct the common perpendicular of the ultraparallel lines as follows: the ultraparallel lines in the Beltrami-Klein model are two non-intersecting chords, but their extensions actually intersect outside the circle. The polar of that intersection point is the desired common perpendicular. diff --git a/wiki/wikipedia/3120.txt b/wiki/wikipedia/3120.txt deleted file mode 100644 index 01c8e01cd7d61d5d88770e23e804911bd5e30de3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3120.txt +++ /dev/null @@ -1,33 +0,0 @@ -In the mathematical area of topology, the generalized Poincaré conjecture is a statement that a manifold which is a homotopy sphere is a sphere. More precisely, one fixes a category of manifolds: topological (Top), piecewise linear (PL), or differentiable (Diff). Then the statement is - -Every homotopy sphere (a closed n-manifold which is homotopy equivalent to the n-sphere) in the chosen category (i.e.
topological manifolds, PL manifolds, or smooth manifolds) is isomorphic in the chosen category (i.e. homeomorphic, PL-isomorphic, or diffeomorphic) to the standard n-sphere. - -The name derives from the Poincaré conjecture, which was made for (topological or PL) manifolds of dimension 3, where being a homotopy sphere is equivalent to being simply connected and closed. The generalized Poincaré conjecture is known to be true or false in a number of instances, due to the work of many distinguished topologists, including the Fields medal awardees John Milnor, Steve Smale, Michael Freedman, and Grigori Perelman. - -Here is a summary of the status of the generalized Poincaré conjecture in various settings. - -* Top: true in all dimensions. - -* PL: true in dimensions other than 4; unknown in dimension 4, where it is equivalent to Diff. - -* Diff: false generally, true in some dimensions including 1,2,3,5, and 6. The first known counterexample is in dimension 7. The case of dimension 4 is equivalent to PL and is unsettled (). - -A fundamental fact of differential topology is that the notion of isomorphism in Top, PL, and Diff is the same in dimension 3 and below; in dimension 4, PL and Diff agree, but Top differs. In dimension above 6 they all differ. In dimensions 5 and 6 every PL manifold admits an infinitely differentiable structure that is so-called Whitehead compatible. - -The cases n = 1 and 2 have long been known by the classification of manifolds in those dimensions. - -For a PL or smooth homotopy n-sphere, in 1960 Stephen Smale proved for $n\ge 7$ that it was homeomorphic to the n-sphere and subsequently extended his proof to $n\ge 5$; he received a Fields Medal for his work in 1966. Shortly after Smale's announcement of a proof, John Stallings gave a different proof for dimensions at least 7 that a PL homotopy n-sphere was homeomorphic to the n-sphere, using the notion of "engulfing". E. C. Zeeman modified Stalling's construction to work in dimensions 5 and 6. In 1962, Smale proved that a PL homotopy n-sphere is PL-isomorphic to the standard PL n-sphere for n at least 5. In 1966, M. H. A. Newman extended PL engulfing to the topological situation and proved that for $n \ge 5$ a topological homotopy n-sphere is homeomorphic to the n-sphere. - -Michael Freedman solved the topological case $n = 4$ in 1982 and received a Fields Medal in 1986. The initial proof consisted of a 50-page outline, with many details missing. Freedman gave a series of lectures at the time, convincing experts that the proof was correct. A project to produce a written version of the proof with background and all details filled in began in 2013, with Freedman's support. The project's output, edited by Stefan Behrens, Boldizsar Kalmar, Min Hoon Kim, Mark Powell, and Arunima Ray, with contributions from 20 mathematicians, was published in August 2021 in the form of a 496 page book, The Disk Embedding Theorem. - -Grigori Perelman solved the case $n = 3$ (where the topological, PL, and differentiable cases all coincide) in 2003 in a sequence of three papers. He was offered a Fields Medal in August 2006 and the Millennium Prize from the Clay Mathematics Institute in March 2010, but declined both. - -The generalized Poincaré conjecture is true topologically, but false smoothly in some dimensions. 
This results in constructions of manifolds that are homeomorphic, but not diffeomorphic, to the standard sphere, which are known as the exotic spheres: you can interpret these as non-standard smooth structures on the standard (topological) sphere. - -Thus the homotopy spheres that John Milnor produced are homeomorphic (Top-isomorphic, and indeed piecewise linear homeomorphic) to the standard sphere $S^n$, but are not diffeomorphic (Diff-isomorphic) to it, and thus are exotic spheres: they can be interpreted as non-standard differentiable structures on the standard sphere. - -Michel Kervaire and Milnor showed that the oriented 7-sphere has 28 different smooth structures (or 15 ignoring orientations), and in higher dimensions there are usually many different smooth structures on a sphere. It is suspected that certain differentiable structures on the 4-sphere, called Gluck twists, are not isomorphic to the standard one, but at the moment there are no known invariants capable of distinguishing different smooth structures on a 4-sphere. - -For piecewise linear manifolds, the Poincaré conjecture is true except possibly in dimension 4, where the answer is unknown, and equivalent to the smooth case. - -In other words, every compact PL manifold of dimension not equal to 4 that is homotopy equivalent to a sphere is PL isomorphic to a sphere. diff --git a/wiki/wikipedia/3121.txt b/wiki/wikipedia/3121.txt deleted file mode 100644 index 0730e3245a121ad33f739beafdd57e5ac1948ab1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3121.txt +++ /dev/null @@ -1,27 +0,0 @@ -In algebra, the Krull–Akizuki theorem states the following: let A be a one-dimensional reduced noetherian ring, K its total ring of fractions. If B is a subring of a finite extension L of K containing A - -then B is a one-dimensional noetherian ring. Furthermore, for every nonzero ideal I of B, $B/I$ is finite over A. - -Note that the theorem does not say that B is finite over A. The theorem does not extend to higher dimension. One important consequence of the theorem is that the integral closure of a Dedekind domain A in a finite extension of the field of fractions of A is again a Dedekind domain. This consequence does generalize to a higher dimension: the Mori–Nagata theorem states that the integral closure of a noetherian domain is a Krull domain. - -Here, we give a proof when $L = K$. Let $\mathfrak{p}_i$ be minimal prime ideals of A; there are finitely many of them. Let $K_i$ be the field of fractions of $A/{\mathfrak{p}_i}$ and $I_i$ the kernel of the natural map $B \to K \to K_i$. Then we have: -$$ -A/{\mathfrak{p}_i} \subset B/{I_i} \subset K_i -$$. - -Now, if the theorem holds when A is a domain, then this implies that B is a one-dimensional noetherian domain since each $B/{I_i}$ is and since $B = \prod B/{I_i}$. Hence, we reduced the proof to the case A is a domain. Let $0 \ne I \subset B$ be an ideal and let a be a nonzero element in the nonzero ideal $I \cap A$. Set $I_n = a^nB \cap A + aA$. Since $A/aA$ is a zero-dim noetherian ring; thus, artinian, there is an l such that $I_n = I_l$ for all $n \ge l$. We claim -$$ -a^l B \subset a^{l+1}B + A. -$$ - -Since it suffices to establish the inclusion locally, we may assume A is a local ring with the maximal ideal $\mathfrak{m}$. Let x be a nonzero element in B. Then, since A is noetherian, there is an n such that $\mathfrak{m}^{n+1} \subset x^{-1} A$ and so $a^{n+1}x \in a^{n+1}B \cap A \subset I_{n+2}$. Thus, -$$ -a^n x \in a^{n+1} B \cap A + A. 
-$$ - -Now, assume $n$ is the minimal integer such that $n \ge l$ and the last inclusion holds. If $n > l$, then we easily see that $a^n x \in I_{n+1}$. But then the above inclusion holds for $n-1$, contradiction. Hence, we have $n = l$ and this establishes the claim. It now follows: -$$ -B/{aB} \simeq a^l B/a^{l+1} B \subset (a^{l +1}B + A)/a^{l+1} B \simeq A/{a^{l +1}B \cap A}. -$$ - -Hence, $B/{aB}$ has finite length as an A-module. In particular, the image of I there is finitely generated and so I is finitely generated. Finally, the above shows that $B/{aB}$ has zero dimension and so B has dimension one. $\square$ diff --git a/wiki/wikipedia/3122.txt b/wiki/wikipedia/3122.txt deleted file mode 100644 index 31bb2135bb1dff829f08ae3b1c57c554a53a661a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3122.txt +++ /dev/null @@ -1,87 +0,0 @@ -In probability theory, the optional stopping theorem (or Doob's optional sampling theorem) says that, under certain conditions, the expected value of a martingale at a stopping time is equal to its initial expected value. Since martingales can be used to model the wealth of a gambler participating in a fair game, the optional stopping theorem says that, on average, nothing can be gained by stopping play based on the information obtainable so far (i.e., without looking into the future). Certain conditions are necessary for this result to hold true. In particular, the theorem applies to doubling strategies. - -The optional stopping theorem is an important tool of mathematical finance in the context of the fundamental theorem of asset pricing. - -A discrete-time version of the theorem is given below: - -Let $X = (X_t)_{t\in\mathbb{N}_0}$ be a discrete-time martingale and τ a stopping time with values in $\mathbb{N}_0 \cup \{\infty\}$, both with respect to a filtration $({\mathcal F}_t)_{t\in\mathbb{N}_0}$. Assume that one of the following three conditions holds: - -(a) The stopping time τ is almost surely bounded, i.e., there exists a constant $c \in \mathbb{N}$ such that τ ≤ c a.s. - -(b) The stopping time τ has finite expectation and the conditional expectations of the absolute value of the martingale increments are almost surely bounded, more precisely, $\mathbb{E}[\tau]<\infty$ and there exists a constant c such that $\mathbb{E}\bigl[|X_{t+1}-X_t|\big\vert{\mathcal F}_t\bigr]\le c$ almost surely on the event {τ > t} for all $t\in\mathbb{N}_0$. - -(c) There exists a constant c such that $|X_{t\wedge\tau}|\le c$ a.s. for all $t\in\mathbb{N}_0$, where ∧ denotes the minimum operator. - -Then $X_\tau$ is an almost surely well defined random variable and $\mathbb{E}[X_{\tau}]=\mathbb{E}[X_0].$ - -Similarly, if the stochastic process $X = (X_t)_{t\in\mathbb{N}_0}$ is a submartingale or a supermartingale and one of the above conditions holds, then -$$ -\mathbb{E}[X_\tau]\ge\mathbb{E}[X_0], -$$ - -for a submartingale, and -$$ -\mathbb{E}[X_\tau]\le\mathbb{E}[X_0], -$$ - -for a supermartingale. - -Under condition (c) it is possible that τ = ∞ happens with positive probability. On this event $X_\tau$ is defined as the almost surely existing pointwise limit of $(X_t)_{t\in\mathbb{N}_0}$; see the proof below for details. - -*The optional stopping theorem can be used to prove the impossibility of successful betting strategies for a gambler with a finite lifetime (which gives condition (a)) or a house limit on bets (condition (b)). Suppose that the gambler can wager up to c dollars on a fair coin flip at times 1, 2, 3, etc., winning his wager if the coin comes up heads and losing it if the coin comes up tails.
Suppose further that he can quit whenever he likes, but cannot predict the outcome of gambles that haven't happened yet. Then the gambler's fortune over time is a martingale, and the time τ at which he decides to quit (or goes broke and is forced to quit) is a stopping time. So the theorem says that $\mathbb{E}[X_\tau] = \mathbb{E}[X_0]$. In other words, the gambler leaves with the same amount of money on average as when he started. (The same result holds if the gambler, instead of having a house limit on individual bets, has a finite limit on his line of credit or how far in debt he may go, though this is easier to show with another version of the theorem.) - -*Consider a random walk that starts at $a \ge 0$ and goes up or down by one with equal probability on each step. Suppose further that the walk stops if it reaches 0 or m ≥ a; the time at which this first occurs is a stopping time. If it is known that the expected time at which the walk ends is finite (say, from Markov chain theory), the optional stopping theorem predicts that the expected stop position is equal to the initial position a. Solving $a = pm + (1-p)\cdot 0$ for the probability p that the walk reaches m before 0 gives $p = a/m$. - -*Now consider a random walk X that starts at 0 and stops if it reaches −m or +m, and use the $Y_n = X_n^2 - n$ martingale from the examples section. If τ is the time at which X first reaches ±m, then $0 = \mathbb{E}[Y_0] = \mathbb{E}[Y_\tau] = m^2 - \mathbb{E}[\tau]$. This gives $\mathbb{E}[\tau] = m^2$. - -*Care must be taken, however, to ensure that one of the conditions of the theorem holds. For example, suppose the last example had instead used a 'one-sided' stopping time, so that stopping only occurred at +m, not at −m. The value of X at this stopping time would therefore be m, so the expected value $\mathbb{E}[X_\tau]$ must also be m, seemingly in violation of the theorem, which would give $\mathbb{E}[X_\tau] = 0$. Since the conclusion of the optional stopping theorem fails here, all three of its conditions must fail. - -Let $X^\tau$ denote the stopped process; it is also a martingale (or a submartingale or supermartingale, respectively). Under condition (a) or (b), the random variable $X_\tau$ is well defined. Under condition (c) the stopped process $X^\tau$ is bounded, hence by Doob's martingale convergence theorem it converges a.s. pointwise to a random variable which we call $X_\tau$. - -If condition (c) holds, then the stopped process $X^\tau$ is bounded by the constant random variable M := c. Otherwise, writing the stopped process as -$$ -X_t^\tau=X_0+\sum_{s=0}^{\tau-1 \land t-1}(X_{s+1}-X_s),\quad t\in{\mathbb N}_0, -$$ - -gives $|X_t^\tau|\le M$ for all $t\in{\mathbb N}_0$, where -$$ -M:=|X_0|+\sum_{s=0}^{\tau-1}|X_{s+1}-X_s|=|X_0|+\sum_{s=0}^\infty|X_{s+1}-X_s|\cdot\mathbf{1}_{\{\tau>s\}} -$$. - -By the monotone convergence theorem -$$ -\mathbb{E}[M]=\mathbb{E}[|X_0|]+\sum_{s=0}^\infty \mathbb{E}\bigl[|X_{s+1}-X_s|\cdot\mathbf{1}_{\{\tau>s\}}\bigr] -$$. - -If condition (a) holds, then this series only has a finite number of non-zero terms, hence M is integrable. - -If condition (b) holds, then we continue by inserting a conditional expectation and using that the event {τ > s} is known at time s (note that τ is assumed to be a stopping time with respect to the filtration), hence - -\begin{align}\mathbb{E}[M] - -&=\mathbb{E}[|X_0|]+\sum_{s=0}^\infty \mathbb{E}\bigl[\underbrace{\mathbb{E}\bigl[|X_{s+1}-X_s|\big|{\mathcal F}_s\bigr]\cdot\mathbf{1}_{\{\tau>s\}}}_{\le c\mathbf{1}_{\{\tau>s\}}\text{ a.s.
by (b)}}\bigr]\\ - -&\le\mathbb{E}[|X_0|]+c\sum_{s=0}^\infty\mathbb{P}(\tau>s)\\ - -&=\mathbb{E}[|X_0|]+c\mathbb{E}[\tau]<\infty,\\ - -\end{align} - -where a representation of the expected value of non-negative integer-valued random variables is used for the last equality. - -Therefore, under any one of the three conditions in the theorem, the stopped process is dominated by an integrable random variable M. Since the stopped process Xτ converges almost surely to Xτ, the dominated convergence theorem implies -$$ -\mathbb{E}[X_\tau]=\lim_{t\to\infty}\mathbb{E}[X_t^\tau]. -$$ - -By the martingale property of the stopped process, -$$ -\mathbb{E}[X_t^\tau]=\mathbb{E}[X_0],\quad t\in{\mathbb N}_0, -$$ - -hence -$$ -\mathbb{E}[X_\tau]=\mathbb{E}[X_0]. -$$ - -Similarly, if X is a submartingale or supermartingale, respectively, change the equality in the last two formulas to the appropriate inequality. diff --git a/wiki/wikipedia/3123.txt b/wiki/wikipedia/3123.txt deleted file mode 100644 index b88fee151531b209e9e6b8479304d7524c80c9e4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3123.txt +++ /dev/null @@ -1,27 +0,0 @@ -In algebra, Schlessinger's theorem is a theorem in deformation theory introduced by that gives conditions for a functor of artinian local rings to be pro-representable, refining an earlier theorem of Grothendieck. - -Λ is a complete Noetherian local ring with residue field k, and C is the category of local Artinian Λ-algebras (meaning in particular that as modules over Λ they are finitely generated and Artinian) with residue field k. - -A small extension in C is a morphism Y→Z in C that is surjective with kernel a 1-dimensional vector space over k. - -A functor is called representable if it is of the form hX where hX(Y)=hom(X,Y) for some X, and is called pro-representable if it is of the form Y→lim hom(Xi,Y) for a filtered direct limit over i in some filtered ordered set. - -A morphism of functors F→G from C to sets is called smooth if whenever Y→Z is an epimorphism of C, the map from F(Y) to F(Z)×G(Z)G(Y) is surjective. This definition is closely related to the notion of a formally smooth morphism of schemes. If in addition the map between the tangent spaces of F and G is an isomorphism, then F is called a hull of G. - -Grothendieck showed that a functor from the category C of Artinian algebras to sets is pro-representable if and only if it preserves all finite limits. This condition is equivalent to asking that the functor preserves pullbacks and the final object. In fact Grothendieck's theorem applies not only to the category C of Artinian algebras, but to any category with finite limits whose objects are Artinian. - -By taking the projective limit of the pro-representable functor in the larger category of linearly topologized local rings, one obtains a complete linearly topologized local ring representing the functor. - -One difficulty in applying Grothendieck's theorem is that it can be hard to check that a functor preserves all pullbacks. Schlessinger showed that it is sufficient to check that the functor preserves pullbacks of a special form, which is often easier to check. Schlessinger's theorem also gives conditions under which the functor has a hull, even if it is not representable. - -Schessinger's theorem gives conditions for a set-valued functor F on C to be representable by a complete local Λ-algebra R with maximal ideal m such that R/mn is in C for all n. 
- -Schlessinger's theorem states that a functor from C to sets with F(k) a 1-element set is representable by a complete Noetherian local algebra if it has the following properties, and has a hull if it has the first three properties: - -*H1: The map $F(Y\times_X Z)\to F(Y)\times_{F(X)}F(Z)$ is surjective whenever Z→X is a small extension in C and Y→X is some morphism in C. - -*H2: The map in H1 is a bijection whenever Z→X is the small extension $k[x]/(x^2)\to k$. - -*H3: The tangent space of F is a finite-dimensional vector space over k. - -*H4: The map in H1 is a bijection whenever Y=Z is a small extension of X and the maps from Y and Z to X are the same. diff --git a/wiki/wikipedia/3124.txt b/wiki/wikipedia/3124.txt deleted file mode 100644 index 00d2ac20806cddc6b3056d62fefc3667d449a8e8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3124.txt +++ /dev/null @@ -1,26 +0,0 @@ -In mathematics, LHS is informal shorthand for the left-hand side of an equation. Similarly, RHS is the right-hand side. The two sides have the same value, expressed differently, since equality is symmetric. - -More generally, these terms may apply to an inequation or inequality; the right-hand side is everything on the right side of a test operator in an expression, with LHS defined similarly. - -The expression on the right side of the "=" sign is the right side of the equation and the expression on the left of the "=" is the left side of the equation. - -For example, in -$$ - x + 5 = y + 8 -$$ - -x + 5 is the left-hand side (LHS) and y + 8 is the right-hand side (RHS). - -In solving mathematical equations, particularly linear simultaneous equations, differential equations and integral equations, the terminology homogeneous is often used for equations with some linear operator L on the LHS and 0 on the RHS. In contrast, an equation with a non-zero RHS is called inhomogeneous or non-homogeneous, as exemplified by - -Lf = g, - -with g a fixed function, an equation that is to be solved for f. Then any solution of the inhomogeneous equation may have a solution of the homogeneous equation added to it, and still remain a solution. - -For example, in mathematical physics, the homogeneous equation may correspond to a physical theory formulated in empty space, while the inhomogeneous equation asks for more 'realistic' solutions with some matter, or charged particles. - -More abstractly, when using infix notation - -T * U - -the term T stands as the left-hand side and U as the right-hand side of the operator *. This usage is less common, though. diff --git a/wiki/wikipedia/3125.txt b/wiki/wikipedia/3125.txt deleted file mode 100644 index 4dfb9fe757e9335400f10992f20421cf303eb02e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3125.txt +++ /dev/null @@ -1,155 +0,0 @@ -In mathematics, Fuglede's theorem is a result in operator theory, named after Bent Fuglede. - -Theorem (Fuglede) Let T and N be bounded operators on a complex Hilbert space with N being normal. If TN = NT, then TN* = N*T, where N* denotes the adjoint of N. - -Normality of N is necessary, as is seen by taking T=N: then TN = NT always holds, while the conclusion TN* = N*T is precisely the statement that N is normal. When T is self-adjoint, the claim is trivial regardless of whether N is normal: -$$ -TN^* = (NT)^* = (TN)^* = N^*T. -$$ - -Tentative Proof: If the underlying Hilbert space is finite-dimensional, the spectral theorem says that N is of the form -$$ -N = \sum_i \lambda_i P_i -$$ - -where Pi are pairwise orthogonal projections. One expects that - -TN = NT if and only if TPi = PiT. - -Indeed this can be proved by elementary arguments (e.g. each Pi is representable as a polynomial in N, and for this reason, if T commutes with N, it has to commute with each Pi). - -Therefore T must also commute with -$$ -N^* = \sum_i {\bar \lambda_i} P_i. -$$ - -In general, when the Hilbert space is not finite-dimensional, the normal operator N gives rise to a projection-valued measure P on its spectrum, σ(N), which assigns a projection PΩ to each Borel subset Ω of σ(N). N can be expressed as -$$ -N = \int_{\sigma(N)} \lambda d P(\lambda). -$$ - -Unlike in the finite-dimensional case, it is by no means obvious that TN = NT implies TPΩ = PΩT. Thus, it is not so obvious that T also commutes with any simple function of the form -$$ -\rho = \sum_i {\bar \lambda_i} P_{\Omega_i}. -$$ - -Indeed, following the construction of the spectral decomposition for a bounded, normal, not self-adjoint, operator T, one sees that to verify that T commutes with $P_{\Omega_i}$, the most straightforward way is to assume that T commutes with both N and N*, giving rise to a vicious circle! - -That is the relevance of Fuglede's theorem: The latter hypothesis is not really necessary. - -The following contains Fuglede's result as a special case. The proof by Rosenblum given below is just that presented by Fuglede for his theorem when assuming N=M. - -Theorem (Calvin Richard Putnam) Let T, M, N be linear operators on a complex Hilbert space, and suppose that M and N are normal, T is bounded and MT = TN. - -Then M*T = TN*. - -First proof (Marvin Rosenblum): - -By induction, the hypothesis implies that $M^k T = T N^k$ for all k. - -Thus for any λ in $\mathbb{C}$, -$$ -e^{\bar\lambda M}T = T e^{\bar\lambda N}. -$$ - -Consider the function -$$ -F(\lambda) = e^{\lambda M^*} T e^{-\lambda N^*}. -$$ - -This is equal to -$$ -e^{\lambda M^*} \left[e^{-\bar\lambda M}T e^{\bar\lambda N}\right] e^{-\lambda N^*} = U(\lambda) T V(\lambda)^{-1} -$$, - -where $U(\lambda) = e^{\lambda M^* - \bar\lambda M}$ because $M$ is normal, and similarly $V(\lambda) = e^{\lambda N^* - \bar\lambda N}$. However we have -$$ -U(\lambda)^* = e^{\bar\lambda M - \lambda M^*} = U(\lambda)^{-1} -$$ - -so U is unitary, and hence has norm 1 for all λ; the same is true for V(λ), so -$$ -\|F(\lambda)\| \le \|T\|\ \forall \lambda. -$$ - -So F is a bounded analytic vector-valued function, and is thus constant by Liouville's theorem, and equal to F(0) = T. Considering the first-order terms in the expansion for small λ, we must have M*T = TN*. - -The original paper of Fuglede appeared in 1950; it was extended to the form given above by Putnam in 1951. The short proof given above was first published by Rosenblum in 1958; it is very elegant, but is less general than the original proof which also considered the case of unbounded operators. Another simple proof of Putnam's theorem is as follows: - -Second proof: Consider the matrices -$$ -T' = \begin{bmatrix} 0 & 0\\ T & 0 \end{bmatrix} \quad \mbox{and} \quad N' = \begin{bmatrix} N & 0 \\ 0 & M \end{bmatrix}. -$$ - -The operator N' is normal and, by assumption, T' N' = N' T'. By Fuglede's theorem, one has -$$ -T' (N')^* = (N')^*T'. -$$ - -Comparing entries then gives the desired result. - -From Putnam's generalization, one can deduce the following: - -Corollary If two normal operators M and N are similar, then they are unitarily equivalent. - -Proof: Suppose MS = SN where S is a bounded invertible operator. Putnam's result implies M*S = SN*, i.e. -$$ -S^{-1} M^* S = N^*. 
-$$ - -Take the adjoint of the above equation and we have -$$ -S^* M (S^{-1})^* = N. -$$ - -So -$$ -S^* M (S^{-1})^* = S^{-1} M S \quad \Rightarrow \quad SS^* M (SS^*)^{-1} = M. -$$ - -Let S*=VR, with V a unitary (since S is invertible) and R the positive square root of SS*. As R is a limit of polynomials on SS*, the above implies that R commutes with M. It is also invertible. Then -$$ -N=S^*M (S^*)^{-1}=VRMR^{-1}V^*=VMV^*. -$$ - -Corollary If M and N are normal operators, and MN = NM, then MN is also normal. - -Proof: The argument invokes only Fuglede's theorem. One can directly compute -$$ -(MN) (MN)^* = MN (NM)^* = MN M^* N^*. -$$ - -By Fuglede, the above becomes -$$ -= M M^* N N^* = M^* M N^*N. -$$ - -But M and N are normal, so -$$ -= M^* N^* MN = (MN)^* MN. -$$ - -The theorem can be rephrased as a statement about elements of C*-algebras. - -Theorem (Fuglede-Putnam-Rosenblum) Let x, y be two normal elements of a C*-algebra A and - -z such that xz = zy. Then it follows that x* z = z y*. diff --git a/wiki/wikipedia/3126.txt b/wiki/wikipedia/3126.txt deleted file mode 100644 index 48fad17b9f8bf6c0b9ebdd2788c61c83854350aa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3126.txt +++ /dev/null @@ -1,3 +0,0 @@ -Fermat's Last Theorem is a popular science book (1997) by Simon Singh. It tells the story of the search for a proof of Fermat's Last Theorem, first conjectured by Pierre de Fermat in 1637, and explores how many mathematicians such as Évariste Galois had tried and failed to provide a proof for the theorem. Despite the efforts of many mathematicians, the proof would remain incomplete until as late as 1995, with the publication of Andrew Wiles' proof of the Theorem. The book is the first mathematics book to become a Number One seller in the United Kingdom, whilst Singh's documentary The Proof, on which the book was based, won a BAFTA in 1997. - -In the United States, the book was released as Fermat's Enigma: The Epic Quest to Solve the World's Greatest Mathematical Problem. diff --git a/wiki/wikipedia/3127.txt b/wiki/wikipedia/3127.txt deleted file mode 100644 index 98028a151bfee8e397eb20f157b210c808ef5449..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3127.txt +++ /dev/null @@ -1,31 +0,0 @@ -SpiderOak is a US-based collaboration tool, online backup and file hosting service that allows users to access, synchronize and share data using a cloud-based server, offered by a company of the same name. Its first offering, its online backup service later branded "SpiderOak ONE", launched in December 2007. SpiderOak is accessible through an app for Windows, Mac and Linux computer platforms, and Android, N900 Maemo and iOS mobile platforms. - -According to SpiderOak, the software uses encrypted cloud storage and client-side encryption key creation, so SpiderOak employees cannot access users' information. SpiderOak differentiates itself from its competition by this kind of encryption, in provision for syncing files and folders across multiple devices, and in automatic de-duplication of data. - -Some components of SpiderOak are open-source; in 2009, the company announced its intent for the SpiderOak One client's code to be fully open-source in the future. , the SpiderOak One client's source code is only available open-source for mobile platforms, with no current plans to make the desktop client's code open-source. SpiderOak used to provide an open-source password manager named Encryptr, which was discontinued in March 2021. 
The source code for SpiderOak's group messaging application Semaphor is published to allow auditing. - -SpiderOak was founded in 2007 by Ethan Oberman and Alan Fairless as an encrypted private backup program. In 2013, SpiderOak began developing the Crypton framework, "a JavaScript framework for building applications where the server doesn't know the contents it's storing on behalf of users." Crypton is an open-source project allowing developers to easily add encryption security to mobile applications. By mid-2014, according to Oberman, SpiderOak had nearly 1 million users. - -As of 2014, SpiderOak was headquartered in Chicago and employed 42 staff, headed by CEO Alan Fairless. Around the same time, the company had offices in Chicago and Kansas City, and was hiring remote employees inside and outside of the US. In 2015, SpiderOak raised $3.5 million in Series A funding, bringing its total funding to around $9 million. - -In February 2017, SpiderOak discontinued using the phrase "zero knowledge" to describe their service following public criticism that the phrase conflicted with the mechanism behind cryptographic zero-knowledge proofs. SpiderOak adopted the phrase "no knowledge" for their marketing. - -In November 2017, founder Alan Fairless was replaced as CEO by Christopher Skinner, who announced that the company would be expanding into enterprise software, partially funded by a $2 million Series B round. - -On August 1, 2018, the warrant canary on SpiderOak's website briefly vanished, followed by some system downtime. It was then replaced by a transparency report. Five days later, the canary was re-signed using GPG encryption. By August 9, SpiderOak had also updated their transparency report, making a statement concerning the canary. It is impossible to tell whether the change was internally driven or whether the canary was tripped and the company was compelled by court order to hide the fact. - -Features of the service include: - -* All data accessible in one de-duplicated location - -* Configurable multi-platform synchronization - -* Preserve all historical versions and deleted files - -* Share folders in web ShareRooms with RSS notifications - -* Retrieve files from any internet-connected device - -* Claimed "no knowledge" data encryption if you only use the desktop client, that is, no sharing, web-access, or mobile access. This claim, however, cannot be confirmed due to the client being closed source - -* Unlimited devices - -* A combination of 2048-bit RSA and 256-bit AES diff --git a/wiki/wikipedia/3128.txt b/wiki/wikipedia/3128.txt deleted file mode 100644 index b67810f0d14b3406731da1f7cdc019ebd53120fa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3128.txt +++ /dev/null @@ -1,21 +0,0 @@ -In probability theory, the Chung–Erdős inequality provides a lower bound on the probability that one out of many (possibly dependent) events occurs. The lower bound is expressed in terms of the probabilities for pairs of events. - -Formally, let $A_1,\ldots,A_n$ be events. Assume that $\Pr[A_i]>0$ for some $i$. Then -$$ -\Pr[A_1\vee\cdots\vee A_n] \geq \frac{\left(\sum_{i=1}^n \Pr[A_i]\right)^2}{\sum_{i=1}^n\sum_{j=1}^n \Pr[A_i\wedge A_j]}. -$$ - -The inequality was first derived by Kai Lai Chung and Paul Erdős (equation (4) of their paper). It was stated in the form given above by Petrov (equation (6.10)). 
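As an informal numerical sanity check (our own addition, not part of the original article), the bound can be tested on randomly generated events over a finite uniform sample space; the seed, the space size N, and the event sizes below are arbitrary choices:

```python
import random

# Sanity-check sketch of the Chung-Erdos bound: events are random subsets of a
# finite uniform sample space {0, ..., N-1}. Seed and sizes are arbitrary.
random.seed(0)
N = 1000
events = [set(random.sample(range(N), random.randint(50, 300))) for _ in range(5)]

def pr(subset):
    # Probability of a subset under the uniform measure on {0, ..., N-1}.
    return len(subset) / N

lhs = pr(set().union(*events))                        # Pr[A_1 v ... v A_n]
num = sum(pr(a) for a in events) ** 2                 # (sum_i Pr[A_i])^2
den = sum(pr(a & b) for a in events for b in events)  # sum_{i,j} Pr[A_i ^ A_j]
assert lhs >= num / den - 1e-12
print(f"Pr[union] = {lhs:.4f} >= {num / den:.4f}")
```

Note that the denominator deliberately includes the diagonal terms $i=j$, matching the double sum in the statement above.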
diff --git a/wiki/wikipedia/3129.txt b/wiki/wikipedia/3129.txt deleted file mode 100644 index 4d1fad91e7346cfa228f15972d0de4f57592f6a7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3129.txt +++ /dev/null @@ -1,19 +0,0 @@ -In kinematics, Chasles' theorem, or Mozzi–Chasles' theorem, says that the most general rigid body displacement can be produced by a translation along a line (called its screw axis or Mozzi axis) followed (or preceded) by a rotation about an axis parallel to that line. - -The proof that a spatial displacement can be decomposed into a rotation and slide around and along a line is attributed to the astronomer and mathematician Giulio Mozzi (1763); in fact, the screw axis is traditionally called asse di Mozzi in Italy. However, most textbooks refer to a subsequent similar work by Michel Chasles dating from 1830. Several other contemporaries of M. Chasles obtained the same or similar results around that time, including G. Giorgini, Cauchy, Poinsot, Poisson and Rodrigues. An account of the 1763 proof by Giulio Mozzi and some of its history is available in the literature. - -Mozzi considers a rigid body undergoing first a rotation about an axis passing through the center of mass and then a translation of displacement D in an arbitrary direction. Any rigid motion can be accomplished in this way due to a theorem by Euler on the existence of an axis of rotation. - -The displacement D of the center of mass can be decomposed into components parallel and perpendicular to the axis. The perpendicular (and parallel) component acts on all points of the rigid body, but Mozzi shows that for some points the previous rotation acted exactly with an opposite displacement, so those points are translated parallel to the axis of rotation. These points lie on the Mozzi axis, through which the rigid motion can be accomplished through a screw motion. - -Another elementary proof of Mozzi–Chasles' theorem was given by E. T. Whittaker in 1904. Suppose A is to be transformed into B. Whittaker suggests that line AK be selected parallel to the axis of the given rotation, with K the foot of a perpendicular from B. The appropriate screw displacement is about an axis parallel to AK such that K is moved to B. The method corresponds to Euclidean plane isometry, where a composition of rotation and translation can be replaced by rotation about an appropriate center. In Whittaker's terms, "A rotation about any axis is equivalent to a rotation through the same angle about any axis parallel to it, together with a simple translation in a direction perpendicular to the axis." - -The calculation of the commuting translation and rotation from a screw motion can be performed using 3DPGA ($\mathbb{R}_{3,0,1}$), the geometric algebra of 3D Euclidean space. It has three Euclidean basis vectors $\mathbf{e}_i$ satisfying $\mathbf{e}_i^2 = 1$ representing orthogonal planes through the origin, and one Grassmannian basis vector $\mathbf{e}_0$ satisfying $\mathbf{e}_0^2 = 0$ to represent the plane at infinity. Any plane a distance $\delta$ from the origin can then be formed as a linear combination $a = \sum_{i=1}^3 a^i \mathbf{e}_i - \delta \mathbf{e}_0$, which is normalized such that $a^2 = 1$. Because reflections can be represented by the plane in which the reflection occurs, the product of two planes $a$ and $b$ is the bireflection $ab$. 
The result is a rotation around their intersection line $a \wedge b$, which could also lie on the plane at infinity when the two reflections are parallel, in which case the bireflection $ab$ is a translation. - -A screw motion $S$ is the product of four non-collinear reflections, and thus $S = abcd$. But according to the Mozzi-Chasles' theorem a screw motion can be decomposed into a commuting translation T = e^{\alpha B_1} = 1 + \alpha B_1where $B_1$ is the axis of translation satisfying $B_1^2 = 0$, and rotationR = e^{\beta B_2} = \cos(\beta) + B_2 \sin(\beta)where $B_2$ is the axis of rotation satisfying $B_2^2 = -1$. The two bivector lines $B_1$ and $B_2$ are orthogonal and commuting. To find $T$ and $R$ from $S$, we simply write out $S$ and consider the result grade-by-grade:\begin{aligned} S &= TR \\ - -&= e^{\alpha B_1} e^{\beta B_2} \\ - -&= \underbrace{\cos \beta}_{\text{scalar}} + \underbrace{\sin \beta B_2 + \alpha \cos \beta B_1}_{\text{bivector}} + \underbrace{\alpha \sin \beta B_1 B_2}_\text{quadvector} - -\end{aligned}Because the quadvector part $\langle S \rangle_4 = \langle T \rangle_2 \langle R \rangle_2 $ and $B_1^2 = 0 $, $T $ is directly found to beT = 1 + \frac{\langle S \rangle_4}{\langle S \rangle_2}and thusR = S T^{-1} = T^{-1} S = \frac{S}{T}Thus, for a given screw motion $S$ the commuting translation and rotation can be found using the two formulae above, after which the lines $B_1$ and $B_2$ are found to be proportional to $\langle T \rangle_2 $ and $\langle R \rangle_2 $ respectively. diff --git a/wiki/wikipedia/313.txt b/wiki/wikipedia/313.txt deleted file mode 100644 index 320812d2ea4a428172b07367d12b97799aa59e1c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/313.txt +++ /dev/null @@ -1,13 +0,0 @@ -Formulario Mathematico (Latino sine flexione: Formulation of mathematics) is a book by Giuseppe Peano which expresses fundamental theorems of mathematics in a symbolic language developed by Peano. The author was assisted by Giovanni Vailati, Mario Pieri, Alessandro Padoa, Giovanni Vacca, Vincenzo Vivanti, Gino Fano and Cesare Burali-Forti. - -The Formulario was first published in 1894. The fifth and last edition was published in 1908. - -Kennedy wrote "the development and use of mathematical logic is the guiding motif of the project". He also explains the variety of Peano's publication under the title: - -the five editions of the Formulario [are not] editions in the usual sense of the word. Each is essentially a new elaboration, although much material is repeated. Moreover, the title and language varied: the first three, titled Formulaire de Mathématiques, and the fourth, titled, Formulaire Mathématique, were written in French, while Latino sine flexione, Peano's own invention, was used for the fifth edition, titled Formulario Mathematico. ... Ugo Cassina lists no less than twenty separately published items as being parts of the 'complete' Formulario! - -Peano believed that students needed only precise statement of their lessons. He wrote: - -Each professor will be able to adopt this Formulario as a textbook, for it ought to contain all theorems and all methods. His teaching will be reduced to showing how to read the formulas, and to indicating to the students the theorems that he wishes to explain in his course. - -Such a dismissal of the oral tradition in lectures at universities was the undoing of Peano's own teaching career. 
diff --git a/wiki/wikipedia/3130.txt b/wiki/wikipedia/3130.txt deleted file mode 100644 index 8b128aaac5a4a7b6546ae23f166d149b90292e18..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3130.txt +++ /dev/null @@ -1,63 +0,0 @@ -Fermat's right triangle theorem is a non-existence proof in number theory, published in 1670 among the works of Pierre de Fermat, soon after his death. It is the only complete proof given by Fermat. It has several equivalent formulations, one of which was stated (but not proved) in 1225 by Fibonacci. In its geometric forms, it states: - -*A right triangle in the Euclidean plane for which all three side lengths are rational numbers cannot have an area that is the square of a rational number. The area of a rational-sided right triangle is called a congruent number, so no congruent number can be square. - -*A right triangle and a square with equal areas cannot have all sides commensurate with each other. - -*There do not exist two integer-sided right triangles in which the two legs of one triangle are the leg and hypotenuse of the other triangle. - -More abstractly, as a result about Diophantine equations (integer or rational-number solutions to polynomial equations), it is equivalent to the statements that: - -*If three square numbers form an arithmetic progression, then the gap between consecutive numbers in the progression (called a congruum) cannot itself be square. - -*The only rational points on the elliptic curve $y^2=x(x-1)(x+1)$ are the three trivial points with $x\in\{-1,0,1\}$ and $y=0$. - -*The quartic equation $x^4-y^4=z^2$ has no nonzero integer solution. - -An immediate consequence of the last of these formulations is that Fermat's Last Theorem is true in the special case that its exponent is 4. - -In 1225, Emperor Frederick II challenged the mathematician Fibonacci to take part in a mathematical contest against several other mathematicians, with three problems set by his court philosopher John of Palermo. The first of these problems asked for three rational numbers whose squares were equally spaced five units apart, solved by Fibonacci with the three numbers $\tfrac{31}{12}$, $\tfrac{41}{12}$, and $\tfrac{49}{12}$. - -In The Book of Squares, published later the same year by Fibonacci, he solved the more general problem of finding triples of square numbers that are equally spaced from each other, forming an arithmetic progression. Fibonacci called the gap between these numbers a congruum. One way of describing Fibonacci's solution is that the numbers to be squared are the difference of legs, hypotenuse, and sum of legs of a Pythagorean triangle, and that the congruum is four times the area of the same triangle. Fibonacci observed that it is impossible for a congruum to be a square number itself, but did not present a satisfactory proof of this fact. - -If three squares $a^2$, $b^2$, and $c^2$ could form an arithmetic progression whose congruum was also a square $d^2$, then these numbers would satisfy the Diophantine equations - - - -\begin{align} - -a^2 + d^2 &= b^2,\\ - -b^2 + d^2 &= c^2.\\ - -\end{align} - - - -That is, by the Pythagorean theorem, they would form two integer-sided right triangles in which the pair $(d,b)$ gives one leg and the hypotenuse of the smaller triangle and the same pair also forms the two legs of the larger triangle. But if (as Fibonacci asserted) no square congruum can exist, then there can be no two integer right triangles that share two sides in this way. 
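Fibonacci's assertion lends itself to a quick empirical check. The following is a small brute-force sketch (our own addition, not part of the article; the search bound LIMIT is an arbitrary choice) that looks for two integer right triangles sharing two sides in the manner just described:

```python
# Search for integers a, d, b, c with a^2 + d^2 = b^2 and b^2 + d^2 = c^2,
# i.e. two right triangles in which the pair (d, b) is a leg and the hypotenuse
# of the smaller triangle and also the two legs of the larger one.
LIMIT = 1000
squares = {n * n for n in range(1, 2 * LIMIT)}
found = [(b, d) for b in range(1, LIMIT) for d in range(1, b)
         if (b * b - d * d) in squares and (b * b + d * d) in squares]
print(found)  # prints [] -- consistent with Fibonacci's assertion
```

The search finds nothing, which is of course no proof; the non-existence for all sizes is exactly what Fermat's argument below establishes.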
- -Because the congrua are exactly the numbers that are four times the area of a Pythagorean triangle, and multiplication by four does not change whether a number is square, the existence of a square congruum is equivalent to the existence of a Pythagorean triangle with a square area. It is this variant of the problem that Fermat's proof concerns: he shows that there is no such triangle. In considering this problem, Fermat was inspired not by Fibonacci but by an edition of Arithmetica by Diophantus, published in a translation into Latin in 1621 by Claude Gaspar Bachet de Méziriac. This book described various special right triangles whose areas had forms related to squares, but did not consider the case of areas that were themselves square. - -By rearranging the equations for the two Pythagorean triangles above, and then multiplying them together, one obtains the single Diophantine equation - -b^4 - d^4 = (b^2-d^2)(b^2+d^2) = a^2 c^2 - -which can be simplified by introducing a new variable $e=ac$ to - -b^4 - d^4 = e^2. - -Conversely, any three positive integers obeying the equation $b^4 - d^4 = e^2$ lead to a square congruum: for these numbers, the squares $(b^4-d^4-2b^2 d^2)^2$, $(b^4+d^4)^2$, and $(b^4-d^4+2b^2 d^2)^2$ form an arithmetic progression with congruum $4b^2 d^2 (b^4-d^4) = (2bde)^2$, which is a square itself. Thus, the solvability of $b^4 - d^4 = e^2$ is equivalent to the existence of a square congruum. But, if Fermat's Last Theorem had a counterexample for the exponent $4$, an integer solution to the equation $x^4+y^4=z^4$, then squaring one of the three numbers in the counterexample would give three numbers that solve the equation $b^4 - d^4 = e^2$. Therefore, Fermat's proof that no Pythagorean triangle has a square area implies the truth of the exponent-$4$ case of Fermat's Last Theorem. - -Another equivalent formulation of the same problem involves congruent numbers, the numbers that are areas of right triangles whose three sides are all rational numbers. By multiplying the sides by a common denominator, any congruent number may be transformed into the area of a Pythagorean triangle, from which it follows that the congruent numbers are exactly the numbers formed by multiplying a congruum by the square of a rational number. Therefore, the existence of a square congruum is equivalent to the statement that the number 1 is not a congruent number. Another more geometric way of stating this formulation is that it is impossible for a square (the geometric shape) and a right triangle to have both equal areas and all sides commensurate with each other. - -Yet another equivalent form of Fermat's theorem involves the elliptic curve consisting of the points whose Cartesian coordinates $(x,y)$ satisfy the equation - -y^2 = x(x+1)(x-1). - -The points (-1,0), (0,0), and (1,0) provide obvious solutions to this equation. Fermat's theorem is equivalent to the statement that these are the only points on the curve for which both $x$ and $y$ are rational. More generally, the right triangles with rational sides and area $n$ correspond one-for-one with the rational points with positive $y$-coordinate on the elliptic curve $y^2=x(x+n)(x-n)$. - -During his lifetime, Fermat challenged several other mathematicians to prove the non-existence of a Pythagorean triangle with square area, but did not publish the proof himself. However, he wrote a proof in his copy of Diophantus's Arithmetica, the same copy in which he wrote that he could prove Fermat's Last Theorem. 
Fermat's son Clement-Samuel published an edition of this book, including Fermat's marginal notes with the proof of the right triangle theorem, in 1670. - -Fermat's proof is a proof by infinite descent. It shows that, from any example of a Pythagorean triangle with square area, one can derive a smaller example. Since Pythagorean triangles have positive integer areas, and there does not exist an infinite descending sequence of positive integers, there also cannot exist a Pythagorean triangle with square area. - -In more detail, suppose that $x$, $y$, and $z$ are the integer sides of a right triangle with square area. By dividing by any common factors, one can assume that this triangle is primitive and from the known form of all primitive Pythagorean triples, one can set $x=2pq$, $y=p^2-q^2$, and $z=p^2+q^2$, by which the problem is transformed into finding relatively prime integers $p$ and $q$ (one of which is even) such that the area $pq(p^2-q^2)$ is square. For this number to be a square, its four linear factors $p$, $q$, $p+q$, and $p-q$ (which are relatively prime) must themselves be squares; let $p+q=r^2$ and $p-q=s^2$. Both $r$ and $s$ must be odd since exactly one of $p$ or $q$ is even and the other is odd. Therefore, both $r-s$ and $r+s$ are even, and one of them is divisible by 4. Dividing them by two produces two more integers $u=(r-s)/2$ and $v=(r+s)/2$, one of which is even by the previous sentence. Because $u^2+v^2=(r^2+s^2)/2=p$ is a square, $u$ and $v$ are the legs of another primitive Pythagorean triangle whose area is $uv/2=q/4$. Since $q$ is itself a square and since $uv$ is even, $q/4$ is a square. Thus, any Pythagorean triangle with square area leads to a smaller Pythagorean triangle with square area, completing the proof. diff --git a/wiki/wikipedia/3131.txt b/wiki/wikipedia/3131.txt deleted file mode 100644 index b8dd9480c80ce1bfacbef4c21d7903eedb2f0b05..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3131.txt +++ /dev/null @@ -1,31 +0,0 @@ -In mathematics, the van der Corput inequality is a corollary of the Cauchy–Schwarz inequality that is useful in the study of correlations among vectors, and hence random variables. It is also useful in the study of equidistributed sequences, for example in the Weyl equidistribution estimate. Loosely stated, the van der Corput inequality asserts that if a unit vector $v$ in an inner product space $V$ is strongly correlated with many unit vectors $u_{1}, \dots, u_{n} \in V$, then many of the pairs $u_{i}, u_{j}$ must be strongly correlated with each other. Here, the notion of correlation is made precise by the inner product of the space $V$: when the absolute value of $\langle u, v \rangle$ is close to $1$, then $u$ and $v$ are considered to be strongly correlated. (More generally, if the vectors involved are not unit vectors, then strong correlation means that $| \langle u, v \rangle | \approx \| u \| \| v \|$.) - -Let $V$ be a real or complex inner product space with inner product $\langle \cdot , \cdot \rangle$ and induced norm $\| \cdot \|$. Suppose that $v, u_1, \dots, u_n \in V$ and that $\| v \| = 1$. Then -$$ -\displaystyle \left( \sum_{i = 1}^{n} | \langle v, u_{i} \rangle | \right)^{2} \leq \sum_{i, j = 1}^{n} | \langle u_{i}, u_{j} \rangle | . 
-$$ - -In terms of the correlation heuristic mentioned above, if $v$ is strongly correlated with many unit vectors $u_1, \dots, u_n \in V$, then the left-hand side of the inequality will be large, which then forces a significant proportion of the vectors $u_{i}$ to be strongly correlated with one another. - -We start by noticing that for any $i\in \{1,\dots,n\}$ there exists $\epsilon_i$ (real or complex) such that $|\epsilon_i|=1$ and $ |\langle v, u_{i} \rangle| = \epsilon_i \langle v, u_{i} \rangle $. Then, -$$ - \left(\sum_{i = 1}^{n} \left| \langle v, u_{i} \rangle \right| \right)^{2} -$$ -$$ -=\left( \sum_{i = 1}^{n} \epsilon_{i} \langle v, u_{i} \rangle \right)^{2} -$$ -$$ -= \left( \left\langle v, \sum_{i = 1}^{n} \epsilon_{i} u_{i} \right\rangle \right)^{2} -$$ since the inner product is bilinear -$$ -\leq \| v \|^{2} \left\| \sum_{i = 1}^{n} \epsilon_{i} u_{i} \right\|^{2} -$$ by the Cauchy–Schwarz inequality -$$ -= \| v \|^{2} \left\langle \sum_{i = 1}^{n} \epsilon_{i} u_{i}, \sum_{j = 1}^{n} \epsilon_{j} u_{j} \right\rangle -$$ by the definition of the induced norm -$$ -= \sum_{i, j = 1}^{n}\epsilon_{i} \epsilon_{j} \langle u_{i}, u_{j} \rangle -$$ since $v$ is a unit vector and the inner product is bilinear -$$ -\leq \sum_{i, j = 1}^{n} | \langle u_{i}, u_{j} \rangle | -$$ since $|\epsilon_i|=1$ for all $i$. diff --git a/wiki/wikipedia/3132.txt b/wiki/wikipedia/3132.txt deleted file mode 100644 index f90ac56d9d0325478395d0ecd5bf106fa545c3cb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3132.txt +++ /dev/null @@ -1,39 +0,0 @@ -In mathematics, certain kinds of mistaken proof are often exhibited, and sometimes collected, as illustrations of a concept called mathematical fallacy. There is a distinction between a simple mistake and a mathematical fallacy in a proof: a mistake simply leads to an invalid proof, while in the best-known examples of mathematical fallacies there is some element of concealment or deception in the presentation of the proof. - -For example, the reason why validity fails may be attributed to a division by zero that is hidden by algebraic notation. There is a certain quality of the mathematical fallacy: as typically presented, it leads not only to an absurd result, but does so in a crafty or clever way. Therefore, these fallacies, for pedagogic reasons, usually take the form of spurious proofs of obvious contradictions. Although the proofs are flawed, the errors, usually by design, are comparatively subtle, or designed to show that certain steps are conditional, and are not applicable in the cases that are the exceptions to the rules. - -The traditional way of presenting a mathematical fallacy is to give an invalid step of deduction mixed in with valid steps, so that the meaning of fallacy is here slightly different from the logical fallacy. The latter usually applies to a form of argument that does not comply with the valid inference rules of logic, whereas the problematic mathematical step is typically a correct rule applied with a tacit wrong assumption. Beyond pedagogy, the resolution of a fallacy can lead to deeper insights into a subject (e.g., the introduction of Pasch's axiom of Euclidean geometry, the five colour theorem of graph theory). Pseudaria, an ancient lost book of false proofs, is attributed to Euclid. - -Mathematical fallacies exist in many branches of mathematics. 
In elementary algebra, typical examples may involve a step where division by zero is performed, where a root is incorrectly extracted or, more generally, where different values of a multiple-valued function are equated. Well-known fallacies also exist in elementary Euclidean geometry and calculus. - -Examples exist of mathematically correct results derived by incorrect lines of reasoning. Such an argument, however true the conclusion appears to be, is mathematically invalid and is commonly known as a howler. The following is an example of a howler involving anomalous cancellation: - -\frac{16}{64} = \frac{16\!\!\!/}{6\!\!\!/4}=\frac{1}{4}. - -Here, although the conclusion 16/64 = 1/4 is correct, there is a fallacious, invalid cancellation in the middle step. Another classical example of a howler is proving the Cayley–Hamilton theorem by simply substituting the scalar variables of the characteristic polynomial by the matrix. - -Bogus proofs, calculations, or derivations constructed to produce a correct result in spite of incorrect logic or operations were termed "howlers" by Maxwell. A classic geometric fallacy of this kind is the following "proof" that every triangle is isosceles. Given a triangle △ABC with D the midpoint of BC, let O be the point where the bisector of the angle at A meets the perpendicular bisector OD of BC, and let R and Q be the feet of the perpendiculars from O to AB and AC respectively. - -# △RAO ≅ △QAO (∠ARO = ∠AQO = 90°; ∠RAO = ∠QAO; AO is common), so AR = AQ and RO = OQ; and since O lies on the perpendicular bisector of BC, BO = OC. - -# △ROB ≅ △QOC (∠BRO = ∠CQO = 90°; BO = OC (hypotenuse); RO = OQ (leg)). - -# Thus, AR = AQ, RB = QC, and AB = AR + RB = AQ + QC = AC. - -Q.E.D. - -As a corollary, one can show that all triangles are equilateral, by showing that AB = BC and AC = BC in the same way. - -The error in the proof is the assumption in the diagram that the point O is inside the triangle. In fact, O always lies on the circumcircle of △ABC (except for isosceles and equilateral triangles where AO and OD coincide). Furthermore, it can be shown that, if AB is longer than AC, then R will lie within AB, while Q will lie outside of AC, and vice versa (in fact, any diagram drawn with sufficiently accurate instruments will verify the above two facts). Because of this, AB is still AR + RB, but AC is actually AQ − QC; and thus the lengths are not necessarily the same. - -There exist several fallacious proofs by induction in which one of the components, basis case or inductive step, is incorrect. Intuitively, proofs by induction work by arguing that if a statement is true in one case, it is true in the next case, and hence by repeatedly applying this, it can be shown to be true for all cases. The following "proof" shows that all horses are the same colour. - -# Let us say that any group of N horses is all of the same colour. - -# If we remove a horse from the group, we have a group of N − 1 horses of the same colour. If we add another horse, we have another group of N horses. By our previous assumption, all the horses are of the same colour in this new group, since it is a group of N horses. - -# Thus we have constructed two groups of N horses all of the same colour, with N − 1 horses in common. Since these two groups have some horses in common, the two groups must be of the same colour as each other. - -# Therefore, combining all the horses used, we have a group of N + 1 horses of the same colour. - -# Thus if any N horses are all the same colour, any N + 1 horses are the same colour. - -# This is clearly true for N = 1 (i.e. one horse is a group where all the horses are the same colour). Thus, by induction, N horses are the same colour for any positive integer N. i.e. all horses are the same colour. - -The fallacy in this proof arises in line 3. 
For N = 1, the two groups of horses have N − 1 = 0 horses in common, and thus are not necessarily the same colour as each other, so the group of N + 1 = 2 horses is not necessarily all of the same colour. The implication "every N horses are of the same colour, then N + 1 horses are of the same colour" works for any N > 1, but fails to be true when N = 1. The basis case is correct, but the induction step has a fundamental flaw. diff --git a/wiki/wikipedia/3133.txt b/wiki/wikipedia/3133.txt deleted file mode 100644 index 74b77f395123800921d644abb764cc16d3c1f88e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3133.txt +++ /dev/null @@ -1,79 +0,0 @@ -In probability theory, de Finetti's theorem states that positively correlated exchangeable observations are conditionally independent relative to some latent variable. An epistemic probability distribution could then be assigned to this variable. It is named in honor of Bruno de Finetti. - -For the special case of an exchangeable sequence of Bernoulli random variables it states that such a sequence is a "mixture" of sequences of independent and identically distributed (i.i.d.) Bernoulli random variables. - -A sequence of random variables is called exchangeable if the joint distribution of the sequence is unchanged by any permutation of the indices. While the variables of the exchangeable sequence are not themselves independent, only exchangeable, there is an underlying family of i.i.d. random variables. That is, there are underlying, generally unobservable, quantities that are i.i.d. – exchangeable sequences are mixtures of i.i.d. sequences. - -A Bayesian statistician often seeks the conditional probability distribution of a random quantity given the data. The concept of exchangeability was introduced by de Finetti. De Finetti's theorem explains a mathematical relationship between independence and exchangeability. - -An infinite sequence -$$ -X_1, X_2, X_3, \dots -$$ - -of random variables is said to be exchangeable if for any natural number n and any two finite sequences i1, ..., in and j1, ..., jn (with each of the is distinct, and each of the js distinct), the two sequences -$$ -X_{i_1},\dots,X_{i_n} \text{ and } X_{j_1},\dots,X_{j_n} -$$ - -both have the same joint probability distribution. - -If an identically distributed sequence is independent, then the sequence is exchangeable; however, the converse is false—there exist exchangeable random variables that are not statistically independent, for example the Pólya urn model. - -A random variable X has a Bernoulli distribution if Pr(X = 1) = p and Pr(X = 0) = 1 - p for some p ∈ (0, 1). - -De Finetti's theorem states that the probability distribution of any infinite exchangeable sequence of Bernoulli random variables is a "mixture" of the probability distributions of independent and identically distributed sequences of Bernoulli random variables. "Mixture", in this sense, means a weighted average, but this need not mean a finite or countably infinite (i.e., discrete) weighted average: it can be an integral rather than a sum. - -More precisely, suppose X1, X2, X3, ... is an infinite exchangeable sequence of Bernoulli-distributed random variables. Then there is some probability distribution m on the interval [0, 1] and some random variable Y such that - -* The probability distribution of Y is m, and - -* The conditional probability distribution of the whole sequence X1, X2, X3, ... given the value of Y is described by saying that - -** X1, X2, X3, ... 
are conditionally independent given Y, and - -** For any i ∈ {1, 2, 3, ...}, the conditional probability that Xi = 1, given the value of Y, is Y. - -Suppose $X_1,X_2,X_3,\ldots$ is an infinite exchangeable sequence of Bernoulli random variables. Then $X_1,X_2,X_3,\ldots$ are conditionally independent and identically distributed given the exchangeable sigma-algebra (i.e., the sigma-algebra of events measurable with respect to $X_1,X_2,\ldots$ and invariant under finite permutations of the indices). - -Here is a concrete example. We construct a sequence -$$ -X_1, X_2, X_3, \dots -$$ - -of random variables, by "mixing" two i.i.d. sequences as follows. - -We assume p = 2/3 with probability 1/2 and p = 9/10 with probability 1/2. Given the event p = 2/3, the conditional distribution of the sequence is that the Xi are independent and identically distributed and X1 = 1 with probability 2/3 and X1 = 0 with probability 1 - 2/3. Given the event p = 9/10, the conditional distribution of the sequence is that the Xi are independent and identically distributed and X1 = 1 with probability 9/10 and X1 = 0 with probability 1 - 9/10. - -This can be interpreted as follows: Make two biased coins, one showing "heads" with 2/3 probability and one showing "heads" with 9/10 probability. Flip a fair coin once to decide which biased coin to use for all flips that are recorded. Here "heads" at flip i means Xi=1. - -The independence asserted here is conditional independence, i.e. the Bernoulli random variables in the sequence are conditionally independent given the event that p = 2/3, and are conditionally independent given the event that p = 9/10. But they are not unconditionally independent; they are positively correlated. - -In view of the strong law of large numbers, we can say that - -\lim_{n\rightarrow\infty} \frac{X_1+\cdots+X_n}{n} = \begin{cases} - -2/3 & \text{with probability }1/2, \\ - -9/10 & \text{with probability }1/2. - -\end{cases} - -Rather than concentrating probability 1/2 at each of two points between 0 and 1, the "mixing distribution" can be any probability distribution supported on the interval from 0 to 1; which one it is depends on the joint distribution of the infinite sequence of Bernoulli random variables. - -The definition of exchangeability, and the statement of the theorem, also makes sense for finite length sequences -$$ -X_1,\dots, X_n, -$$ - -but the theorem is not generally true in that case. It is true if the sequence can be extended to an exchangeable sequence that is infinitely long. The simplest example of an exchangeable sequence of Bernoulli random variables that cannot be so extended is the one in which X1 = 1 - X2 and X1 is either 0 or 1, each with probability 1/2. This sequence is exchangeable, but cannot be extended to an exchangeable sequence of length 3, let alone an infinitely long one. - -Versions of de Finetti's theorem for finite exchangeable sequences, and for Markov exchangeable sequences have been proved by Diaconis and Freedman and by Kerns and Szekely. - -Two notions of partial exchangeability of arrays, known as separate and joint exchangeability lead to extensions of de Finetti's theorem for arrays by Aldous and Hoover. - -The computable de Finetti theorem shows that if an exchangeable sequence of real random variables is given by a computer program, then a program which samples from the mixing measure can be automatically recovered. 
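The two-coin construction above is also easy to simulate. Here is a short illustrative sketch (our own addition, not from the original article; the seed and sample size are arbitrary) showing that the flips are identically distributed but positively correlated, with covariance equal to the variance of the latent variable:

```python
import random

# Simulate the mixture: draw the latent bias p (the variable Y), then flip
# i.i.d. Bernoulli(p) coins given p. Seed and sample size are arbitrary.
random.seed(1)

def sample_pair():
    p = random.choice([2 / 3, 9 / 10])  # a fair coin picks the biased coin
    return [1 if random.random() < p else 0 for _ in range(2)]

pairs = [sample_pair() for _ in range(200_000)]
e1 = sum(x1 for x1, _ in pairs) / len(pairs)         # estimates E[X1]
e12 = sum(x1 * x2 for x1, x2 in pairs) / len(pairs)  # estimates E[X1*X2]
print(f"E[X1]      approx {e1:.3f} (exact {(2/3 + 9/10) / 2:.3f})")
print(f"Cov(X1,X2) approx {e12 - e1 * e1:.4f} (exact {((9/10 - 2/3) / 2) ** 2:.4f})")
```

The positive covariance printed here is exactly the unconditional dependence discussed above: conditionally on the chosen coin the flips are independent, but unconditionally they are not.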
- -In the setting of free probability, there is a noncommutative extension of de Finetti's theorem which characterizes noncommutative sequences invariant under quantum permutations. - -Extensions of de Finetti's theorem to quantum states have been found to be useful in quantum information, in topics like quantum key distribution and entanglement detection. diff --git a/wiki/wikipedia/3134.txt b/wiki/wikipedia/3134.txt deleted file mode 100644 index 971a043070e3ba1bc02a005cb61b645df04be294..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3134.txt +++ /dev/null @@ -1,261 +0,0 @@ -In the mathematical field of numerical analysis, a Bernstein polynomial is a polynomial that is a linear combination of Bernstein basis polynomials. The idea is named after Sergei Natanovich Bernstein. - -A numerically stable way to evaluate polynomials in Bernstein form is de Casteljau's algorithm. - -Polynomials in Bernstein form were first used by Bernstein in a constructive proof for the Weierstrass approximation theorem. With the advent of computer graphics, Bernstein polynomials, restricted to the interval [0, 1], became important in the form of Bézier curves. - -The n+1 Bernstein basis polynomials of degree n are defined as -$$ -b_{\nu,n}(x) = \binom{n}{\nu} x^{\nu} \left( 1 - x \right)^{n - \nu}, \quad \nu = 0, \ldots, n, -$$ - -where $\tbinom{n}{\nu}$ is a binomial coefficient. So, for example, $b_{2,5}(x) = \tbinom{5}{2}x^2(1-x)^3 = 10x^2(1-x)^3.$ - -The first few Bernstein basis polynomials for blending 1, 2, 3 or 4 values together are: - - - -\begin{align} - -b_{0,0}(x) & = 1, \\ - -b_{0,1}(x) & = 1 - x, & b_{1,1}(x) & = x \\ - -b_{0,2}(x) & = (1 - x)^2, & b_{1,2}(x) & = 2x(1 - x), & b_{2,2}(x) & = x^2 \\ - -b_{0,3}(x) & = (1 - x)^3, & b_{1,3}(x) & = 3x(1 - x)^2, & b_{2,3}(x) & = 3x^2(1 - x), & b_{3,3}(x) & = x^3 - -\end{align} - - - -The Bernstein basis polynomials of degree n form a basis for the vector space $\Pi_n$ of polynomials of degree at most n with real coefficients. A linear combination of Bernstein basis polynomials -$$ -B_n(x) = \sum_{\nu=0}^{n} \beta_{\nu} b_{\nu,n}(x) -$$ - -is called a Bernstein polynomial or polynomial in Bernstein form of degree n. The coefficients $\beta_\nu$ are called Bernstein coefficients or Bézier coefficients. - -The first few Bernstein basis polynomials from above in monomial form are: - - - -\begin{align} - -b_{0,0}(x) & = 1, \\ - -b_{0,1}(x) & = 1 - 1x, & b_{1,1}(x) & = 0 + 1x \\ - -b_{0,2}(x) & = 1 - 2x + 1x^2, & b_{1,2}(x) & = 0 + 2x - 2x^2, & b_{2,2}(x) & = 0 + 0x + 1x^2 \\ - -b_{0,3}(x) & = 1 - 3x + 3x^2 - x^3, & b_{1,3}(x) & = 0 + 3x - 6x^2 + 3x^3, & b_{2,3}(x) & = 0 + 0x + 3x^2 - 3x^3, & b_{3,3}(x) & = 0 + 0x + 0x^2 + 1x^3 - -\end{align} - - - -The Bernstein basis polynomials have the following properties: - -* $b_{\nu, n}(x) = 0$, if $\nu < 0$ or $\nu > n.$ - -* $b_{\nu, n}(x) \ge 0$ for $x \in [0,\ 1].$ - -* $b_{\nu, n}\left( 1 - x \right) = b_{n - \nu, n}(x).$ - -* $b_{\nu, n}(0) = \delta_{\nu, 0}$ and $b_{\nu, n}(1) = \delta_{\nu, n}$ where $\delta$ is the Kronecker delta function: \delta_{ij} = \begin{cases} - -0 &\text{if } i \neq j, \\ - -1 &\text{if } i=j. \end{cases} - -* $b_{\nu, n}(x)$ has a root with multiplicity $\nu$ at point $x = 0$ (note: if $\nu = 0$, there is no root at 0). - -* $b_{\nu, n}(x)$ has a root with multiplicity $\left( n - \nu \right)$ at point $x = 1$ (note: if $\nu = n$, there is no root at 1). 
- -* The derivative can be written as a combination of two polynomials of lower degree: - -*: $b'_{\nu, n}(x) = n \left( b_{\nu - 1, n - 1}(x) - b_{\nu, n - 1}(x) \right).$ - -* The k:th derivative at 0: -$$ -b_{\nu, n}^{(k)}(0) = \sum_{\nu = 0}^n \frac{n!}{(n - k)!} \binom{k}{\nu} (-1)^{\nu + k}. -$$ - -*The k:th derivative at 1: -$$ -b_{\nu, n}^{(k)}(1) = (-1)^k b_{n - \nu, n}^{(k)}(0). -$$ - -*The transformation of the Bernstein polynomial to monomials is - -*: $b_{\nu,n}(x) = \binom{n}{\nu}\sum_{k=0}^{n-\nu} \binom{n-\nu}{k}(-1)^{n-\nu-k} x^{\nu+k} = \sum_{\ell=\nu}^n \binom{n}{\ell}\binom{\ell}{\nu}(-1)^{\ell-\nu}x^\ell,$ - -and by the inverse binomial transformation, the reverse transformation is -$$ -x^k=\sum_{i=0}^{n-k}\binom{n-k}{i}\frac{1}{\binom{n}{i}}b_{n-i,n}(x) = \frac{1}{\binom{n}{k}} \sum_{j=k}^n \binom{j}{k}b_{j,n}(x). -$$ - -* The indefinite integral is given by - -*: $\int b_{\nu, n}(x)dx = \frac{1}{n+1} \sum_{j=\nu+1}^{n+1} b_{j, n+1}(x).$ - -* The definite integral is constant for a given n: - -*: $\int_{0}^{1}b_{\nu, n}(x)dx = \frac{1}{n+1} \quad\ \text{for all } \nu = 0,1, \dots, n.$ - -* If $n \ne 0$, then $b_{\nu, n}(x)$ has a unique local maximum on the interval $[0,\ 1]$ at $x = \frac{\nu}{n}$. This maximum takes the value - -*: $\nu^\nu n^{-n} \left( n - \nu \right)^{n - \nu} {n \choose \nu}.$ - -* The Bernstein basis polynomials of degree $n$ form a partition of unity: - -*: $\sum_{\nu = 0}^n b_{\nu, n}(x) = \sum_{\nu = 0}^n {n \choose \nu} x^\nu \left(1 - x\right)^{n - \nu} = \left(x + \left( 1 - x \right) \right)^n = 1.$ - -* By taking the first $x$-derivative of $(x+y)^n$, treating $y$ as constant, then substituting the value $y = 1-x$, it can be shown that - -*: $\sum_{\nu=0}^{n}\nu b_{\nu, n}(x) = nx.$ - -* Similarly the second $x$-derivative of $(x+y)^n$, with $y$ again then substituted $y = 1-x$, shows that - -*: $\sum_{\nu=1}^{n}\nu(\nu-1) b_{\nu, n}(x) = n(n-1)x^2.$ - -* A Bernstein polynomial can always be written as a linear combination of polynomials of higher degree: - -*: $b_{\nu, n - 1}(x) = \frac{n - \nu}{n} b_{\nu, n}(x) + \frac{\nu + 1}{n} b_{\nu + 1, n}(x).$ - -* The expansion of the Chebyshev Polynomials of the First Kind into the Bernstein basis is - -*: $T_n(u) = (2n-1)!!\sum_{k=0}^n \frac{(-1)^{n-k}}{(2k-1)!!(2n-2k-1)!!} b_{k,n}(u).$ - -Let ƒ be a continuous function on the interval [0, 1]. Consider the Bernstein polynomial -$$ -B_n(f)(x) = \sum_{\nu = 0}^n f\left( \frac{\nu}{n} \right) b_{\nu,n}(x). -$$ - -It can be shown that -$$ -\lim_{n \to \infty}{ B_n(f) } = f -$$ - -uniformly on the interval [0, 1]. - -A more general statement for a function with continuous kth derivative is -$$ -{\left\| B_n(f)^{(k)} \right\|}_\infty \le \frac{ (n)_k }{ n^k } \left\| f^{(k)} \right\|_\infty \quad\ \text{and} \quad\ \left\| f^{(k)}- B_n(f)^{(k)} \right\|_\infty \to 0, -$$ - -where additionally -$$ -\frac{ (n)_k }{ n^k } = \left( 1 - \frac{0}{n} \right) \left( 1 - \frac{1}{n} \right) \cdots \left( 1 - \frac{k - 1}{n} \right) -$$ - -is an eigenvalue of Bn; the corresponding eigenfunction is a polynomial of degree k. - -This proof follows Bernstein's original proof of 1912. See also Feller (1966) or Koralov & Sinai (2007). - -Suppose K is a random variable distributed as the number of successes in n independent Bernoulli trials with probability x of success on each trial; in other words, K has a binomial distribution with parameters n and x. 
Then we have the expected value $\operatorname{\mathcal E}\left[\frac{K}{n}\right] = x\ $ and -$$ -p(K) = {n \choose K} x^{K} \left( 1 - x \right)^{n - K} = b_{K,n}(x) -$$ - -By the weak law of large numbers of probability theory, -$$ -\lim_{n \to \infty}{ P\left( \left| \frac{K}{n} - x \right|>\delta \right) } = 0 -$$ - -for every δ > 0. Moreover, this relation holds uniformly in x, which can be seen from its proof via Chebyshev's inequality, taking into account that the variance of 1/n K, equal to 1/n x(1-x), is bounded from above by 1/(4n) irrespective of x. - -Because ƒ, being continuous on a closed bounded interval, must be uniformly continuous on that interval, one infers a statement of the form -$$ -\lim_{n \to \infty}{ P\left( \left| f\left( \frac{K}{n} \right) - f\left( x \right) \right| > \varepsilon \right) } = 0 -$$ - -uniformly in x. Taking into account that ƒ is bounded (on the given interval) one gets for the expectation -$$ -\lim_{n \to \infty}{ \operatorname{\mathcal E}\left( \left| f\left( \frac{K}{n} \right) - f\left( x \right) \right| \right) } = 0 -$$ - -uniformly in x. To this end one splits the sum for the expectation in two parts. On one part the difference does not exceed ε; this part cannot contribute more than ε. - -On the other part the difference exceeds ε, but does not exceed 2M, where M is an upper bound for |ƒ(x)|; this part cannot contribute more than 2M times the small probability that the difference exceeds ε. - -Finally, one observes that the absolute value of the difference between expectations never exceeds the expectation of the absolute value of the difference, and -$$ -\operatorname{\mathcal E}\left[f\left(\frac{K}{n}\right)\right] = \sum_{K=0}^n f\left(\frac{K}{n}\right) p(K) = \sum_{K=0}^n f\left(\frac{K}{n}\right) b_{K,n}(x) = B_n(f)(x) -$$ - -The probabilistic proof can also be rephrased in an elementary way, using the underlying probabilistic ideas but proceeding by direct verification: - -The following identities can be verified: - -(1) $ \sum_k {n \choose k} x^k (1-x)^{n-k} = 1$ - -("probability") - -(2) $ \sum_k {k\over n} {n \choose k} x^k (1-x)^{n-k} = x$ - -("mean") - -(3) $ \sum_k \left( x -{k\over n}\right)^2 {n \choose k} x^k (1-x)^{n-k} = {x(1-x)\over n}. $ - -("variance") - -In fact, by the binomial theorem -$$ -(1+t)^n = \sum_k {n \choose k} t^k, -$$ - -and this equation can be applied twice to $t\frac{d}{dt}$. The identities (1), (2), and (3) follow easily using the substitution $t = x/ (1 - x)$. - -Within these three identities, use the above basis polynomial notation -$$ - b_{k,n}(x) = {n\choose k} x^k (1-x)^{n-k}, -$$ - -and let -$$ - f_n(x) = \sum_k f(k/n) b_{k,n}(x). -$$ - -Thus, by identity (1) -$$ -f_n(x) - f(x) = \sum_k [f(k/n) - f(x)] b_{k,n}(x), -$$ - -so that -$$ -|f_n(x) - f(x)| \le \sum_k |f(k/n) - f(x)| b_{k,n}(x). -$$ - -Since f is uniformly continuous, given $\varepsilon > 0$, there is a $\delta > 0$ such that $|f(a) - f(b)| < \varepsilon$ whenever -$$ -|a-b| < \delta -$$. Moreover, by continuity, $M= \sup |f| < \infty$. But then -$$ - |f_n(x) - f(x)| \le \sum_{|x -{k\over n}|< \delta} |f(k/n) - f(x)| b_{k,n}(x) + \sum_{|x -{k\over n}|\ge \delta} |f(k/n) - f(x)| b_{k,n}(x) . -$$ - -The first sum is less than ε. On the other hand, by identity (3) above, and since $|x - k/n| \ge \delta$, the second sum is bounded by 2M times -$$ -\sum_{|x - k/n|\ge \delta} b_{k,n}(x) \le \sum_k \delta^{-2} \left(x -{k\over n}\right)^2 b_{k,n}(x) = \delta^{-2} {x(1-x)\over n} < \delta^{-2} n^{-1}. 
-$$ - -(Chebyshev's inequality) - -It follows that the polynomials $f_n$ tend to $f$ uniformly. - -Bernstein polynomials can be generalized to k dimensions. The resulting polynomials have the form $P_{i_1}(x_1) P_{i_2}(x_2) \cdots P_{i_k}(x_k)$. In the simplest case only products of the unit interval [0,1] are considered; but, using affine transformations of the line, Bernstein polynomials can also be defined for products $[a_1, b_1] \times [a_2, b_2] \times \cdots \times [a_k, b_k]$. For a continuous function f on the k-fold product of the unit interval, the proof that $f(x_1, x_2, \ldots , x_k)$ can be uniformly approximated by -$$ -\sum_{i_1} \sum_{i_2} \cdots \sum_{i_k} {n_1\choose i_1} {n_2\choose i_2} \cdots {n_k\choose i_k} f\left({i_1\over n_1}, {i_2\over n_2}, \dots, {i_k\over n_k}\right) x_1^{i_1} (1-x_1)^{n_1-i_1} x_2^{i_2} (1-x_2)^{n_2-i_2} \cdots x_k^{i_k} (1-x_k)^{n_k - i_k} -$$ - -is a straightforward extension of Bernstein's proof in one dimension. diff --git a/wiki/wikipedia/3135.txt b/wiki/wikipedia/3135.txt deleted file mode 100644 index 90b4b83909eb1d9ebbfd18bb4ea424b7448efa7b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3135.txt +++ /dev/null @@ -1,82 +0,0 @@ -The butterfly theorem is a classical result in Euclidean geometry, which can be stated as follows: - -Let M be the midpoint of a chord PQ of a circle, through which two other chords AB and CD are drawn; AD and BC intersect chord PQ at X and Y respectively. Then M is the midpoint of XY. - -A formal proof of the theorem is as follows: - -Let the perpendiculars XX′ and XX″ be dropped from the point X on the straight lines AM and DM respectively. Similarly, let YY′ and YY″ be dropped from the point Y perpendicular to the straight lines BM and CM respectively. - -Since -$$ - \triangle MXX' \sim \triangle MYY', -$$ -$$ - {MX \over MY} = {XX' \over YY'}, -$$ -$$ - \triangle MXX'' \sim \triangle MYY'', -$$ -$$ - {MX \over MY} = {XX'' \over YY''}, -$$ -$$ - \triangle AXX' \sim \triangle CYY'', -$$ -$$ - {XX' \over YY''} = {AX \over CY}, -$$ -$$ - \triangle DXX'' \sim \triangle BYY', -$$ -$$ - {XX'' \over YY'} = {DX \over BY}. -$$ - -From the preceding equations and the intersecting chords theorem, it can be seen that -$$ - \left({MX \over MY}\right)^2 = {XX' \over YY' } {XX'' \over YY''}, -$$ -$$ - {} = {AX \cdot DX \over CY \cdot BY}, -$$ -$$ - {} = {PX \cdot QX \over PY \cdot QY}, -$$ -$$ - {} = {(PM-XM) \cdot (MQ+XM) \over (PM+MY) \cdot (QM-MY)}, -$$ -$$ - {} = { (PM)^2 - (MX)^2 \over (PM)^2 - (MY)^2}, -$$ - -since PM = MQ. - -So -$$ - { (MX)^2 \over (MY)^2} = {(PM)^2 - (MX)^2 \over (PM)^2 - (MY)^2}. -$$ - -Cross-multiplying in the latter equation, -$$ - {(MX)^2 \cdot (PM)^2 - (MX)^2 \cdot (MY)^2} = {(MY)^2 \cdot (PM)^2 - (MX)^2 \cdot (MY)^2} . -$$ - -Cancelling the common term -$$ - { -(MX)^2 \cdot (MY)^2} -$$ - -from both sides of the resulting equation yields -$$ - {(MX)^2 \cdot (PM)^2} = {(MY)^2 \cdot (PM)^2}, -$$ - -hence MX = MY, since MX, MY, and PM are all positive, real numbers. - -Thus, M is the midpoint of XY. - -Other proofs exist, including one using projective geometry. - -Proving the butterfly theorem was posed as a problem by William Wallace in The Gentlemen's Mathematical Companion (1803). Three solutions were published in 1804, and in 1805 Sir William Herschel posed the question again in a letter to Wallace. Rev. Thomas Scurr asked the same question again in 1814 in the Gentlemen's Diary or Mathematical Repository.
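The statement is also easy to check numerically. A minimal sketch, assuming Python with numpy: the chord PQ is taken perpendicular to the radius through M (so M is automatically its midpoint), and the endpoints of the two chords are labelled so that A and C lie on the same side of PQ, matching the configuration in the proof above.

```
import numpy as np

def chord_through(m, theta):
    # Endpoints of the chord of the unit circle through m in direction theta:
    # solve |m + t*d|^2 = 1 for t with the quadratic formula.
    d = np.array([np.cos(theta), np.sin(theta)])
    b, c = 2.0 * (m @ d), m @ m - 1.0
    r = np.sqrt(b * b - 4.0 * c)
    return m + (-b + r) / 2.0 * d, m + (-b - r) / 2.0 * d

def intersect(p1, p2, q1, q2):
    # Intersection point of line p1p2 with line q1q2.
    A = np.column_stack([p2 - p1, q1 - q2])
    s = np.linalg.solve(A, q1 - p1)[0]
    return p1 + s * (p2 - p1)

M = np.array([0.3, 0.4])                                     # midpoint of PQ
P, Q = chord_through(M, np.arctan2(M[1], M[0]) + np.pi / 2)  # PQ perpendicular to OM
A, B = chord_through(M, 0.7)                                 # chord AB through M
C, D = chord_through(M, 2.1)                                 # chord CD through M

def side(p):  # sign telling on which side of line PQ the point p lies
    u, v = Q - P, p - P
    return np.sign(u[0] * v[1] - u[1] * v[0])

if side(A) != side(C):
    C, D = D, C                  # relabel so that A and C share a side of PQ
X = intersect(A, D, P, Q)
Y = intersect(B, C, P, Q)
print(np.linalg.norm(X - M), np.linalg.norm(Y - M))          # equal: M bisects XY
```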
diff --git a/wiki/wikipedia/3136.txt b/wiki/wikipedia/3136.txt deleted file mode 100644 index f690528bfe4c19eeac94e4307aaeebcf5e5e939e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3136.txt +++ /dev/null @@ -1,25 +0,0 @@ -In geometry, it is an unsolved conjecture of Hugo Hadwiger that every simplex can be dissected into orthoschemes, using a number of orthoschemes bounded by a function of the dimension of the simplex. If true, then more generally every convex polytope could be dissected into orthoschemes. - -In this context, a simplex in $d$-dimensional Euclidean space is the convex hull of $d+1$ points that do not all lie in a common hyperplane. For example, a 2-dimensional simplex is just a triangle (the convex hull of three points in the plane) and a 3-dimensional simplex is a tetrahedron (the convex hull of four points in three-dimensional space). The points that form the simplex in this way are called its vertices. - -An orthoscheme, also called a path simplex, is a special kind of simplex. In it, the vertices can be connected by a path, such that every two edges in the path are at right angles to each other. A two-dimensional orthoscheme is a right triangle. A three-dimensional orthoscheme can be constructed from a cube by finding a path of three edges of the cube that do not all lie on the same square face, and forming the convex hull of the four points on this path. - -A dissection of a shape $S$ (which may be any closed set in Euclidean space) is a representation of $S$ as a union of other shapes whose interiors are disjoint from each other. That is, intuitively, the shapes in the union do not overlap, although they may share points on their boundaries. For instance, a cube can be dissected into six three-dimensional orthoschemes. A similar result applies more generally: every hypercube or hyperrectangle in $d$ dimensions can be dissected into $d!$ orthoschemes. - -Hadwiger's conjecture is that there is a function $f$ such that every $d$-dimensional simplex can be dissected into at most $f(d)$ orthoschemes. Hadwiger posed this problem in 1956; it remains unsolved in general, although special cases for small values of $d$ are known. - -In two dimensions, every triangle can be dissected into at most two right triangles, by dropping an altitude from its widest angle onto its longest edge. - -In three dimensions, some tetrahedra can be dissected in a similar way, by dropping an altitude perpendicularly from a vertex $v$ to a point $p$ in an opposite face, connecting $p$ perpendicularly to the sides of the face, and using the three-edge perpendicular paths through $v$ and $p$ to a side and then to a vertex of the face. However, this does not always work. In particular, there exist tetrahedra for which none of the vertices have altitudes with a foot inside the opposite face. - -Using a more complicated construction, Lenhard proved that every tetrahedron can be dissected into at most 12 orthoschemes. - -Böhm proved that this is optimal: there exist tetrahedra that cannot be dissected into fewer than 12 orthoschemes. In the same paper, Böhm also generalized Lenhard's result to three-dimensional spherical geometry and three-dimensional hyperbolic geometry. - -In four dimensions, at most 500 orthoschemes are needed. In five dimensions, a finite number of orthoschemes again suffices, roughly bounded by at most 12.5 million. Again, this applies to spherical geometry and hyperbolic geometry as well as to Euclidean geometry.
- -Hadwiger's conjecture remains unproven for all dimensions greater than five. - -Every convex polytope may be dissected into simplexes. Therefore, if Hadwiger's conjecture is true, every convex polytope would also have a dissection into orthoschemes. - -A related result is that every orthoscheme can itself be dissected into $d$ or $d+1$ smaller orthoschemes. Therefore, for simplexes that can be partitioned into orthoschemes, their dissections can have arbitrarily large numbers of orthoschemes. diff --git a/wiki/wikipedia/3137.txt b/wiki/wikipedia/3137.txt deleted file mode 100644 index 6dde5c0f9d055e9a427d886d77e6f132bdbecf76..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3137.txt +++ /dev/null @@ -1,55 +0,0 @@ -Informally, the reconstruction conjecture in graph theory says that graphs are determined uniquely by their subgraphs. It is due to Kelly and Ulam. - -Given a graph $G = (V,E)$, a vertex-deleted subgraph of $G$ is a subgraph formed by deleting exactly one vertex from $G$. By definition, it is an induced subgraph of $G$. - -For a graph $G$, the deck of G, denoted $D(G)$, is the multiset of isomorphism classes of all vertex-deleted subgraphs of $G$. Each graph in $D(G)$ is called a card. Two graphs that have the same deck are said to be hypomorphic. - -With these definitions, the conjecture can be stated as: - -* Reconstruction Conjecture: Any two hypomorphic graphs on at least three vertices are isomorphic. - -(The requirement that the graphs have at least three vertices is necessary because both graphs on two vertices have the same decks.) - -Harary suggested a stronger version of the conjecture: - -* Set Reconstruction Conjecture: Any two graphs on at least four vertices with the same sets of vertex-deleted subgraphs are isomorphic. - -Given a graph $G = (V,E)$, an edge-deleted subgraph of $G$ is a subgraph formed by deleting exactly one edge from $G$. - -For a graph $G$, the edge-deck of G, denoted $ED(G)$, is the multiset of all isomorphism classes of edge-deleted subgraphs of $G$. Each graph in $ED(G)$ is called an edge-card. - -* Edge Reconstruction Conjecture: (Harary, 1964) Any two graphs with at least four edges and the same edge-decks are isomorphic. - -Both the reconstruction and set reconstruction conjectures have been verified for all graphs with at most 11 vertices by Brendan McKay. - -In a probabilistic sense, it has been shown by Béla Bollobás that almost all graphs are reconstructible. This means that the probability that a randomly chosen graph on $n$ vertices is not reconstructible goes to 0 as $n$ goes to infinity. In fact, it was shown that not only are almost all graphs reconstructible, but in fact that the entire deck is not necessary to reconstruct them - almost all graphs have the property that there exist three cards in their deck that uniquely determine the graph. - -The conjecture has been verified for a number of infinite classes of graphs (and, trivially, their complements). - -*Regular graphs - Regular graphs are reconstructible by direct application of some of the facts that can be recognized from the deck of a graph. Given an $n$-regular graph $G$ and its deck $D(G)$, we can recognize that the deck is of a regular graph by recognizing its degree sequence. Let us now examine one member of the deck $D(G)$, $G_i$. This graph contains some number of vertices with a degree of $n$ and $n$ vertices with a degree of $n-1$. We can add a vertex to this graph and then connect it to the $n$ vertices of degree $n-1$ to create an $n$-regular graph which is isomorphic to the graph which we started with.
Therefore, all regular graphs are reconstructible from their decks. A particular type of regular graph which is interesting is the complete graph. - -*Trees - -*Maximal planar graphs - -*Maximal outerplanar graphs - -*Outerplanar graphs - -*Critical blocks - -The reconstruction conjecture is true if all 2-connected graphs are reconstructible. - -The vertex reconstruction conjecture obeys the duality that if $G$ can be reconstructed from its vertex deck $D(G)$, then its complement $G'$ can be reconstructed from $D(G')$ as follows: Start with $D(G')$, take the complement of every card in it to get $D(G)$, use this to reconstruct $G$, then take the complement again to get $G'$. - -Edge reconstruction does not obey any such duality: Indeed, for some classes of edge-reconstructible graphs it is not known if their complements are edge reconstructible. - -It has been shown that the following are not in general reconstructible: - -* Digraphs: Infinite families of non-reconstructible digraphs are known, including tournaments (Stockmeyer) and non-tournaments (Stockmeyer). A tournament is reconstructible if it is not strongly connected. A weaker version of the reconstruction conjecture has been conjectured for digraphs, see new digraph reconstruction conjecture. - -* Hypergraphs (Kocay). - -* Infinite graphs. Let T be a tree on an infinite number of vertices such that every vertex has infinite degree, and let nT be the disjoint union of n copies of T. These graphs are hypomorphic, and thus not reconstructible. Every vertex-deleted subgraph of any of these graphs is isomorphic: they all are the disjoint union of an infinite number of copies of T. - -* Locally finite graphs. The question of reconstructibility for locally finite infinite trees (the Harary-Schwenk-Scott conjecture from 1972) was a longstanding open problem until 2017, when a non-reconstructible tree of maximum degree 3 was found by Bowler et al. diff --git a/wiki/wikipedia/3138.txt b/wiki/wikipedia/3138.txt deleted file mode 100644 index 212f6846b54c95df634349a7e99892922d1ac2a1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3138.txt +++ /dev/null @@ -1 +0,0 @@ -Gizzard is an open source sharding framework to create custom fault-tolerant, distributed databases. It was initially used by Twitter and emerged from a wide variety of data storage problems. Gizzard operates as a middleware networking service that runs on the Java Virtual Machine. It manages the partitioning of data across arbitrary backend datastores, which allows the data to be accessed efficiently. The partitioning rules are stored in a forwarding table that maps key ranges to partitions. Each partition manages its own replication through a declarative replication tree. Gizzard handles both physical and logical shards. Physical shards point to a physical database backend whereas logical shards are trees of other shards. In addition, Gizzard supports migrations and gracefully handles failures. The system is made eventually consistent by requiring that all write operations are idempotent and commutative. As operations fail they are retried at a later time. Gizzard is available at GitHub and licensed under the Apache License 2.0. diff --git a/wiki/wikipedia/3139.txt b/wiki/wikipedia/3139.txt deleted file mode 100644 index 622724c2f509fca70b4ca6c9ff6cad870ce92141..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3139.txt +++ /dev/null @@ -1,25 +0,0 @@ -Tetris: The Grand Master is a series of puzzle arcade video games created by Arika.
- -Released in Japan in August 1998, Tetris: The Grand Master was followed by two sequels, Tetris: The Absolute - The Grand Master 2 in October 2000 (with a Plus version released December 2000) and Tetris: The Grand Master 3 - Terror‑Instinct in March 2005. A spin-off console game, Tetris: The Grand Master Ace, was published by AQ Interactive on December 10, 2005 and was a launch title for the Xbox 360's Japan release. - -The basic gameplay of Tetris: The Grand Master is similar to that of other Tetris games. The player must move and rotate Tetromino-shaped pieces falling into a well to form horizontal lines, which will then be cleared. During gameplay, the game automatically gives ranks to the player according to their score, starting from 9 up to Grand Master (GM), roughly following the dan ranking system. The game speeds up rapidly, reaching instant gravity and upwards of three tetrominoes per second. - -Tetris: The Grand Master's gameplay is heavily inspired by its arcade predecessor, Sega Tetris, released 10 years earlier. It uses a modified rotation system, color scheme, and relies heavily on mechanics such as lock delay. Another game which inspired Tetris: The Grand Master is Shimizu Tetris, a fan game which was the first to introduce 20G gravity. - -The main goal in Tetris: The Grand Master is to score points, awarding the player a higher grade. The game ends when a player reaches level 999. If the player scored enough points, they will be awarded with the grade S9. To achieve the grade GM, the player must also meet some time requirements during play. If the player tops out before reaching level 999, the game ends, awarding the player the current grade and its "mastering time", the time at which the grade was awarded during gameplay. - -The original game was released in Japanese arcades in August 1998. US, Asia, Europe, Hispanic, and Brazil warning texts were found in the game, suggesting that the game was planned to be released in these countries. - -Tetris: The Absolute - The Grand Master 2 was released in 2000, and added additional modes of play. One of these new modes is the Master mode, which extends the classic Tetris: The Grand Master gameplay with larger speed increases, more requirements to achieve the M or GM grades, and an additional challenge when the M rank is achieved where the player must survive the credits roll with the additional handicap of the Tetrominoes turning invisible upon locking. Additional modes include a more casual Normal mode, a Versus mode enhanced with item battles, and a two-player co-op mode. - -Tetris: The Absolute - The Grand Master 2 Plus added additional modes such as TGM+, which adds rising garbage blocks to the gameplay, and T.A. Death where the game begins at 20G (instant gravity) and every other aspect of the game also speeds up steadily. - -Tetris: The Grand Master 3 - Terror‑Instinct was released in 2005. The game now runs on PC-based hardware, specifically the Taito Type X. The level system has been expanded in many forms with increasingly stricter requirements to reach the Grand Master rank. Modes include Easy, Sakura (a puzzle mode also seen in Tetris With Cardcaptor Sakura: Eternal Heart), the traditional Master mode, and Shirase (an extension of T.A. Death with even harsher speed, garbage, and levels beyond 999). It also features World and Classic Rules, the former added by Arika due to The Tetris Company's recent policy changes.
- -Released in 2005 as a Japan-only launch title for the Xbox 360, this was the only game in the series to be released outside of arcades. - -Tetris: The Grand Master was to be ported to the PlayStation in 1999, but because of a licensing restriction the port was canceled. - -In July 2004 Arika announced TGM-K for release on the PSP. In 2011, Japanese gaming magazine Famitsu reported its release again for spring of 2012. - -In September 2009, Tetris: The Grand Master 4 - The Masters of Round was unveiled at the Amusement Machine Show. Three modes of Tetris: The Grand Master 4 - The Masters of Round have been shown so far: Master, Konoha (pieces are double size, simulating a 5x10 field and the object is to completely clear the playing field of blocks as many times as possible), and Rounds (similar to T.A. Death and Shirase modes, but with more levels and a fog mechanic that prevents line clears below a particular height until certain conditions are met). Additionally, it featured World and Classic types just like Tetris: The Grand Master 3. Tetris: The Grand Master 4 - The Masters of Round was supposed to run on the Sega RingWide hardware. On September 18, 2010, Arika Vice President Ichiro Mihara announced the cancellation of The Grand Master 4 on his blog. In July 2015, Arika began location testing The Grand Master 4 in Japan and the United States. The title was changed to The Grand Master 2015, reflecting the lack of a Tetris license or planned release. diff --git a/wiki/wikipedia/314.txt b/wiki/wikipedia/314.txt deleted file mode 100644 index 5c78a70415bd71ebe03eb7e6b473f2876080baf6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/314.txt +++ /dev/null @@ -1,12 +0,0 @@ -In mathematics, the Aubin–Lions lemma (or theorem) is a result in the theory of Sobolev spaces of Banach space-valued functions, which provides a compactness criterion that is useful in the study of nonlinear evolutionary partial differential equations. Typically, to prove the existence of solutions one first constructs approximate solutions (for example, by a Galerkin method or by mollification of the equation), then uses the compactness lemma to show that there is a convergent subsequence of approximate solutions whose limit is a solution. - -The result is named after the French mathematicians Jean-Pierre Aubin and Jacques-Louis Lions. In the original proof by Aubin, the spaces X0 and X1 in the statement of the lemma were assumed to be reflexive, but this assumption was removed by Simon, so the result is also referred to as the Aubin–Lions–Simon lemma. - -Let X0, X and X1 be three Banach spaces with X0 ⊆ X ⊆ X1. Suppose that X0 is compactly embedded in X and that X is continuously embedded in X1. For $1\leq p, q\leq\infty$, let -$$ -W = \{ u \in L^p ([0, T]; X_0) \mid \dot{u} \in L^q ([0, T]; X_1) \}. -$$ - -(i) If $p<\infty$ then the embedding of W into $L^p([0,T];X)$ is compact. - -(ii) If $p=\infty$ and $q>1$ then the embedding of W into $C([0,T];X)$ is compact. diff --git a/wiki/wikipedia/3140.txt b/wiki/wikipedia/3140.txt deleted file mode 100644 index 5550c6745af83307e7144d6ebbbc9dc4b4eda71e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3140.txt +++ /dev/null @@ -1,19 +0,0 @@ -In mathematics, Marden's theorem, named after Morris Marden but proved about 100 years earlier by Jörg Siebeck, gives a geometric relationship between the zeroes of a third-degree polynomial with complex coefficients and the zeroes of its derivative. See also geometrical properties of polynomial roots.
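The relationship is easy to test numerically. A minimal sketch, assuming Python with numpy (the triangle is an arbitrary choice): a point $m$ lies on the ellipse with foci $f_1, f_2$ exactly when $|m - f_1| + |m - f_2|$ equals the constant $2a$, so by the theorem made precise below the three side midpoints must give the same sum of distances to the two zeroes of $p'$.

```
import numpy as np

# Zeroes of p: vertices of a non-degenerate triangle, as complex numbers.
z = np.array([0 + 0j, 4 + 1j, 1 + 3j])
p = np.poly(z)                   # coefficients of p(z) = (z-z1)(z-z2)(z-z3)
foci = np.roots(np.polyder(p))   # zeroes of p'(z): the claimed foci

mids = (z + np.roll(z, 1)) / 2   # midpoints of the triangle's sides

# The midpoints lie on a single ellipse with these foci iff the focal sums agree.
sums = [abs(m - foci[0]) + abs(m - foci[1]) for m in mids]
print(sums)                      # three numerically equal values
```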
- -A cubic polynomial has three zeroes in the complex number plane, which in general form a triangle, and the Gauss–Lucas theorem states that the roots of its derivative lie within this triangle. Marden's theorem states their location within this triangle more precisely: - -Suppose the zeroes z1, z2, and z3 of a third-degree polynomial p(z) are non-collinear. There is a unique ellipse inscribed in the triangle with vertices z1, z2, z3 and tangent to the sides at their midpoints: the Steiner inellipse. The foci of that ellipse are the zeroes of the derivative p′(z). - -By the Gauss–Lucas theorem, the root of the second derivative p″(z) must be the average of the two foci, which is the center point of the ellipse and the centroid of the triangle. - -In the special case that the triangle is equilateral (as happens, for instance, for the polynomial p(z) = z^3 − 1) the inscribed ellipse degenerates to a circle, and the derivative of p has a double root at the center of the circle. Conversely, if the derivative has a double root, then the triangle must be equilateral. - -A more general version of the theorem, due to Linfield, applies to polynomials p(z) = (z − a)^i (z − b)^j (z − c)^k whose degree i + j + k may be higher than three, but that have only three roots a, b, and c. For such polynomials, the roots of the derivative may be found at the multiple roots of the given polynomial (the roots whose exponent is greater than one) and at the foci of an ellipse whose points of tangency to the triangle divide its sides in the ratios i : j, j : k, and k : i. - -Another generalization (Parish) is to n-gons: some n-gons have an interior ellipse that is tangent to each side at the side's midpoint. Marden's theorem still applies: the foci of this midpoint-tangent inellipse are zeroes of the derivative of the polynomial whose zeroes are the vertices of the n-gon. - -Jörg Siebeck discovered this theorem 81 years before Marden wrote about it. However, Dan Kalman titled his American Mathematical Monthly paper "Marden's theorem" because, as he writes, "I call this Marden's Theorem because I first read it in M. Marden's wonderful book". - -Marden attributes what is now known as Marden's theorem to Siebeck and cites nine papers that included a version of the theorem. Dan Kalman won the 2009 Lester R. Ford Award of the Mathematical Association of America for his 2008 paper in the American Mathematical Monthly describing the theorem. - -A short and elementary proof of Marden's theorem is explained in the solution of an exercise in Fritz Carlson's book "Geometri" (in Swedish, 1943). diff --git a/wiki/wikipedia/3141.txt b/wiki/wikipedia/3141.txt deleted file mode 100644 index b31824dacf008e4ac67555c71726aa10b45b5bd1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3141.txt +++ /dev/null @@ -1,7 +0,0 @@ -In computer science, a funnel is a synchronization primitive used in kernel development to protect system resources. First used on Digital UNIX as a way to "funnel" device driver execution onto a single processor, funnels are now used in the Mac OS X kernel to serialize access to the BSD portion of XNU. - -A funnel is a mutual exclusion (mutex) mechanism that prevents more than one thread from accessing certain kernel resources at the same time. Each thread acquires a funnel when it enters a synchronized portion of the kernel, and releases it when it leaves.
If a thread blocks (sleeps) while holding a funnel, the kernel forces the thread to automatically drop the funnel, thereby allowing other threads to enter the synchronized portion of the kernel. - -Because a funnel is automatically dropped when a thread blocks, care must be taken to ensure that synchronized resources are acquired again after any blocking operation. Specifically, acquiring a funnel can be a blocking operation, so if multiple funnels are needed, they must be acquired at once. This limits the utility of funnels because it increases the granularity of locking when multiple funnels need to be held at once. - -There is only one funnel in OS X 10.4 and higher. Prior to version 10.4, there were two funnels: one protected network resources, and the other protected other BSD kernel resources. A thread was only allowed to hold one funnel at a time, and holding both would cause a kernel panic. As a result of these limitations and the lack of granularity, funnels are being phased out of Mac OS X. For example, the networking funnel has been replaced by finer-grained locking mechanisms. diff --git a/wiki/wikipedia/3142.txt b/wiki/wikipedia/3142.txt deleted file mode 100644 index f644c7cd828ed10cd51975acefaeb102310de082..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3142.txt +++ /dev/null @@ -1,9 +0,0 @@ -Interactive Theorem Proving (ITP) is an annual international academic conference on the topic of automated theorem proving, proof assistants and related topics, ranging from theoretical foundations to implementation aspects and applications in program verification, security, and formalization of mathematics. - -ITP brings together the communities using many systems based on higher-order logic such as ACL2, Coq, Mizar, HOL, Isabelle, Lean, NuPRL, PVS, and Twelf. Individual workshops or meetings devoted to individual systems are usually held concurrently with the conference. - -Together with CADE and TABLEAUX, ITP is usually one of the three main conferences of the International Joint Conference on Automated Reasoning (IJCAR) whenever it convenes. - -The inaugural meeting of ITP was held on 11–14 July 2010 in Edinburgh, Scotland, as part of the Federated Logic Conference. It is the extension of the Theorem Proving in Higher Order Logics (TPHOLs) conference series to the broad field of interactive theorem proving. TPHOLs meetings took place every year from 1988 until 2009. - -The first three were informal users' meetings for the HOL system and were the only ones without published papers. Since 1990 TPHOLs has published formal peer-reviewed proceedings, published by Springer's Lecture Notes in Computer Science series. It has also entertained an increasingly wide field of interest. diff --git a/wiki/wikipedia/3143.txt b/wiki/wikipedia/3143.txt deleted file mode 100644 index 1dfb2d1a4573442952c7be0523d0390ef0689c5e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3143.txt +++ /dev/null @@ -1,59 +0,0 @@ -In number theory, a prime number p is a Sophie Germain prime if 2p + 1 is also prime. The number 2p + 1 associated with a Sophie Germain prime is called a safe prime. For example, 11 is a Sophie Germain prime and 2 × 11 + 1 = 23 is its associated safe prime. Sophie Germain primes are named after French mathematician Sophie Germain, who used them in her investigations of Fermat's Last Theorem. Sophie Germain primes and safe primes have applications in public key cryptography and primality testing.
It has been conjectured that there are infinitely many Sophie Germain primes, but this remains unproven. - -The first few Sophie Germain primes (those less than 1000) are - -2, 3, 5, 11, 23, 29, 41, 53, 83, 89, 113, 131, 173, 179, 191, 233, 239, 251, 281, 293, 359, 419, 431, 443, 491, 509, 593, 641, 653, 659, 683, 719, 743, 761, 809, 911, 953, ... - -Hence, the first few safe primes are - -5, 7, 11, 23, 47, 59, 83, 107, 167, 179, 227, 263, 347, 359, 383, 467, 479, 503, 563, 587, 719, 839, 863, 887, 983, 1019, 1187, 1283, 1307, 1319, 1367, 1439, 1487, 1523, 1619, 1823, 1907, ... - -In cryptography much larger Sophie Germain primes like 1,846,389,521,368 + 11600 are required. - -Two distributed computing projects, PrimeGrid and Twin Prime Search, include searches for large Sophie Germain primes. Some of the largest known Sophie Germain primes are given in the following table. - -On 2 Dec 2019, Fabrice Boudot, Pierrick Gaudry, Aurore Guillevic, Nadia Heninger, Emmanuel Thomé, and Paul Zimmermann announced the computation of a discrete logarithm modulo the 240-digit (795 bit) prime RSA-240 + 49204 (the first safe prime above RSA-240) using a number field sieve algorithm; see Discrete logarithm records. - -There is no special primality test for safe primes the way there is for Fermat primes and Mersenne primes. However, Pocklington's criterion can be used to prove the primality of 2p + 1 once one has proven the primality of p. - -Just as every term except the last one of a Cunningham chain of the first kind is a Sophie Germain prime, so every term except the first of such a chain is a safe prime. Safe primes ending in 7, that is, of the form 10n + 7, are the last terms in such chains when they occur, since 2(10n + 7) + 1 = 20n + 15 is divisible by 5. - -If a safe prime q is congruent to 7 modulo 8, then it is a divisor of the Mersenne number with its matching Sophie Germain prime as exponent. - -If q > 7 is a safe prime, then q divides $3^{(q-1)/2} - 1$. (This follows from the fact that 3 is a quadratic residue mod q.) - -With the exception of 7, a safe prime q is of the form 6k - 1 or, equivalently, q ≡ 5 (mod 6) – as is p > 3. Similarly, with the exception of 5, a safe prime q is of the form 4k - 1 or, equivalently, q ≡ 3 (mod 4) — trivially true since (q - 1) / 2 must evaluate to an odd natural number. Combining both forms using lcm(6, 4) we determine that a safe prime q > 7 also must be of the form 12k − 1 or, equivalently, q ≡ 11 (mod 12). It follows that 3 (also 12) is a quadratic residue mod q for any safe prime q > 7. (Thus, 12 is not a primitive root of any safe prime q > 7, and the only safe primes that are also full reptend primes in base 12 are 5 and 7.) - -If p is a Sophie Germain prime greater than 3, then p must be congruent to 2 mod 3. For, if not, it would be congruent to 1 mod 3 and 2p + 1 would be divisible by 3, impossible for a prime number. Similar restrictions hold for larger prime moduli, and are the basis for the choice of the "correction factor" 2C in the Hardy–Littlewood estimate on the density of the Sophie Germain primes. Several other famous conjectures in number theory generalize this and the twin prime conjecture; they include Dickson's conjecture, Schinzel's hypothesis H, and the Bateman–Horn conjecture.
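The lists above can be regenerated directly from the definition. A minimal sketch, assuming Python with sympy for primality testing:

```
from sympy import isprime

def sophie_germain_primes(limit):
    """Sophie Germain primes p < limit, i.e. both p and 2p + 1 are prime."""
    return [p for p in range(2, limit) if isprime(p) and isprime(2 * p + 1)]

sg = sophie_germain_primes(1000)
print(sg[:10])                       # [2, 3, 5, 11, 23, 29, 41, 53, 83, 89]
print([2 * p + 1 for p in sg[:10]])  # the associated safe primes [5, 7, 11, 23, ...]
```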
- -A heuristic estimate for the number of Sophie Germain primes less than n is -$$ -2C \frac{n}{(\ln n)^2} \approx 1.32032\frac{n}{(\ln n)^2} -$$ - -where -$$ -C=\prod_{p>2} \frac{p(p-2)}{(p-1)^2}\approx 0.660161 -$$ - -is Hardy–Littlewood's twin prime constant. For $n = 10^4$, this estimate predicts 156 Sophie Germain primes, which has a 20% error compared to the exact value of 190. For $n = 10^7$, the estimate predicts 50822, which is still 10% off from the exact value of 56032. The form of this estimate is due to G. H. Hardy and J. E. Littlewood, who applied a similar estimate to twin primes. - -A sequence (p, 2p + 1, 2(2p + 1) + 1, ...) in which all of the numbers are prime is called a Cunningham chain of the first kind. Every term of such a sequence except the last is a Sophie Germain prime, and every term except the first is a safe prime. Extending the conjecture that there exist infinitely many Sophie Germain primes, it has also been conjectured that arbitrarily long Cunningham chains exist, although infinite chains are known to be impossible. - -A prime number q is a strong prime if q + 1 and q − 1 both have some large (around 500 digits) prime factors. For a safe prime q = 2p + 1, the number q − 1 naturally has a large prime factor, namely p, and so a safe prime q meets part of the criteria for being a strong prime. The running times of some methods of factoring a number with q as a prime factor depend partly on the size of the prime factors of q − 1. This is true, for instance, of the p − 1 method. - -Safe primes are also important in cryptography because of their use in discrete logarithm-based techniques like Diffie–Hellman key exchange. If 2p + 1 is a safe prime, the multiplicative group of integers modulo 2p + 1 has a subgroup of large prime order. It is usually this prime-order subgroup that is desirable, and the reason for using safe primes is so that the modulus is as small as possible relative to p. - -A prime number p = 2q + 1 is called a safe prime if q is prime. Thus, p = 2q + 1 is a safe prime if and only if q is a Sophie Germain prime, so finding safe primes and finding Sophie Germain primes are equivalent in computational difficulty. The notion of a safe prime can be strengthened to a strong prime, for which both p − 1 and p + 1 have large prime factors. Safe and strong primes were useful as the factors of secret keys in the RSA cryptosystem, because they prevent the system being broken by some factorization algorithms such as Pollard's p − 1 algorithm. However, with the current factorization technology, the advantage of using safe and strong primes appears to be negligible. - -Similar issues apply in other cryptosystems as well, including Diffie–Hellman key exchange and similar systems that depend on the security of the discrete log problem rather than on integer factorization. For this reason, key generation protocols for these methods often rely on efficient algorithms for generating strong primes, which in turn rely on the conjecture that these primes have a sufficiently high density. - -In Sophie Germain Counter Mode, it was proposed to use the arithmetic in the finite field of order equal to the Sophie Germain prime $2^{128} + 12451$, to counter weaknesses in Galois/Counter Mode using the binary finite field GF($2^{128}$). However, SGCM has been shown to be vulnerable to many of the same cryptographic attacks as GCM.
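Returning to the Hardy–Littlewood estimate above, the quoted figures at $n = 10^4$ (156 predicted versus 190 actual) can be reproduced directly. A sketch assuming Python with sympy:

```
from math import log
from sympy import isprime

C = 0.660161   # twin prime constant, as quoted above

def sg_count(n):
    return sum(1 for p in range(2, n) if isprime(p) and isprime(2 * p + 1))

n = 10**4
print(round(2 * C * n / log(n) ** 2))  # heuristic estimate: 156
print(sg_count(n))                     # exact count: 190
```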
- -In the first version of the AKS primality test paper, a conjecture about Sophie Germain primes is used to lower the worst case complexity from $O(\log^{12} n)$ to $O(\log^{6} n)$. A later version of the paper was shown to have time complexity $O(\log^{7.5} n)$ which can also be lowered to $O(\log^{6} n)$ using the conjecture. Later variants of AKS have been proven to have complexity of $O(\log^{6} n)$ without any conjectures or use of Sophie Germain primes. - -Safe primes obeying certain congruences can be used to generate pseudo-random numbers of use in Monte Carlo simulation. - -Similarly, Sophie Germain primes may be used in the generation of pseudo-random numbers. The decimal expansion of 1/q will produce a stream of q − 1 pseudo-random digits, if q is the safe prime of a Sophie Germain prime p, with p congruent to 3, 9, or 11 modulo 20. Thus "suitable" prime numbers q are 7, 23, 47, 59, 167, 179, etc. (corresponding to p = 3, 11, 23, 29, 83, 89, etc.). The result is a stream of length q − 1 digits (including leading zeros). So, for example, using q = 23 generates the pseudo-random digits 0, 4, 3, 4, 7, 8, 2, 6, 0, 8, 6, 9, 5, 6, 5, 2, 1, 7, 3, 9, 1, 3. Note that these digits are not appropriate for cryptographic purposes, as the value of each can be derived from its predecessor in the digit-stream. - -Sophie Germain primes are mentioned in the stage play Proof and the subsequent film. diff --git a/wiki/wikipedia/3144.txt b/wiki/wikipedia/3144.txt deleted file mode 100644 index ad6d3623b6c36799aa97fbdb374cb577f62c5f70..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3144.txt +++ /dev/null @@ -1,155 +0,0 @@ -Job-shop scheduling or the job-shop problem (JSP) is an optimization problem in computer science and operations research. It is a variant of optimal job scheduling. In a general job scheduling problem, we are given n jobs J1, J2, ..., Jn of varying processing times, which need to be scheduled on m machines with varying processing power, while trying to minimize the makespan – the total length of the schedule (that is, when all the jobs have finished processing). In the specific variant known as job-shop scheduling, each job consists of a set of operations O1, O2, ..., On which need to be processed in a specific order (known as precedence constraints). Each operation has a specific machine that it needs to be processed on and only one operation in a job can be processed at a given time. A common relaxation is the flexible job shop, where each operation can be processed on any machine of a given set (the machines in each set are identical). - -The name originally came from the scheduling of jobs in a job shop, but the theme has wide applications beyond that type of instance. This problem is one of the best known combinatorial optimization problems, and was the first problem for which competitive analysis was presented, by Graham in 1966. Best problem instances for basic model with makespan objective are due to Taillard. - -In the standard three-field notation for optimal job scheduling problems, the job-shop variant is denoted by J in the first field. For example, the problem denoted by " J3|$p_{ij}$|$C_\max$" is a 3-machine job-shop problem with unit processing times, where the goal is to minimize the maximum completion time. - -Many variations of the problem exist, including the following: - -* Machines can have duplicates (flexible job shop with duplicate machines) or belong to groups of identical machines (flexible job shop).
- -* Machines can require a certain gap between jobs or no idle-time. - -* Machines can have sequence-dependent setups. - -* Objective function can be to minimize the makespan, the $L_p$ norm, tardiness, maximum lateness etc. It can also be a multi-objective optimization problem. - -* Jobs may have constraints, for example a job i needs to finish before job j can be started (see workflow). Also, the objective function can be multi-criteria. - -* The set of jobs can relate to different sets of machines. - -* Deterministic (fixed) processing times or probabilistic processing times. - -Since the traveling salesman problem is NP-hard, the job-shop problem with sequence-dependent setup is clearly also NP-hard since the TSP is a special case of the JSP with a single job (the cities are the machines and the salesman is the job). - -The disjunctive graph is one of the popular models used for describing the job-shop scheduling problem instances. - -A mathematical statement of the problem can be made as follows: - -Let $M = \{ M_{1}, M_{2}, \dots, M_{m} \}$ and $J = \{ J_{1}, J_{2}, \dots, J_{n} \}$ be two finite sets. On account of the industrial origins of the problem, the $\displaystyle M_{i}$ are called machines and the $\displaystyle J_{j}$ are called jobs. - -Let $\displaystyle \ \mathcal{X}$ denote the set of all sequential assignments of jobs to machines, such that every job is done by every machine exactly once; elements $x \in \mathcal{X}$ may be written as $n \times m$ matrices, in which column $\displaystyle i$ lists the jobs that machine $\displaystyle M_{i}$ will do, in order. For example, the matrix -$$ -x = \begin{pmatrix} 1 & 2 \\ 2 & 3 \\ 3 & 1 \end{pmatrix} -$$ - -means that machine $\displaystyle M_{1}$ will do the three jobs $\displaystyle J_{1}, J_{2}, J_{3}$ in the order $\displaystyle J_{1}, J_{2}, J_{3}$, while machine $\displaystyle M_{2}$ will do the jobs in the order $\displaystyle J_{2}, J_{3}, J_{1}$. - -Suppose also that there is some cost function $C : \mathcal{X} \to [0, + \infty]$. The cost function may be interpreted as a "total processing time", and may have some expression in terms of times $C_{ij} : M \times J \to [0, + \infty]$, the cost/time for machine $\displaystyle M_{i}$ to do job $\displaystyle J_{j}$. - -The job-shop problem is to find an assignment of jobs $x \in \mathcal{X}$ such that $\displaystyle C(x)$ is a minimum, that is, there is no $y \in \mathcal{X}$ such that $\displaystyle C(x) > C(y)$. - -Scheduling efficiency can be defined for a schedule through the ratio of total machine idle time to the total processing time as below: -$$ -C'=1+{\sum_{i}l_i \over \sum_{j,k}p_{jk}}={C \cdot m \over \sum_{j,k}p_{jk}} -$$ - -Here $l_i$ is the idle time of machine $i$, $C$ is the makespan and $m$ is the number of machines. Notice that with the above definition, scheduling efficiency is simply the makespan normalized to the number of machines and the total processing time. This makes it possible to compare the usage of resources across JSP instances of different size. - -One of the first problems that must be dealt with in the JSP is that many proposed solutions have infinite cost: i.e., there exists $x_{\infty} \in \mathcal{X}$ such that $C(x_{\infty}) = + \infty$. In fact, it is quite simple to concoct examples of such $x_{\infty}$ by ensuring that two machines will deadlock, so that each waits for the output of the other's next step. - -Graham had already provided the List scheduling algorithm in 1966, which is (2 - 1/m)-competitive, where m is the number of machines.
It was later proven that this problem is NP-complete for m>2, that is, no optimal solution can be computed in polynomial time for three or more machines (unless P=NP). - -In 2011 Xin Chen et al. provided optimal algorithms for online scheduling on two related machines improving previous results. - -The simplest form of the offline makespan minimisation problem deals with atomic jobs, that is, jobs that are not subdivided into multiple operations. It is equivalent to packing a number of items of various different sizes into a fixed number of bins, such that the maximum bin size needed is as small as possible. (If instead the number of bins is to be minimised, and the bin size is fixed, the problem becomes a different problem, known as the bin packing problem.) - -Dorit S. Hochbaum and David Shmoys presented a polynomial-time approximation scheme in 1987 that finds an approximate solution to the offline makespan minimisation problem with atomic jobs to any desired degree of accuracy. - -The basic form of the problem of scheduling jobs with multiple (M) operations, over M machines, such that all of the first operations must be done on the first machine, all of the second operations on the second, etc., and a single job cannot be performed in parallel, is known as the flow-shop scheduling problem. Various algorithms exist, including genetic algorithms. - -An algorithm by S. M. Johnson can be used to solve the case of a two-machine, N-job problem when all jobs are to be processed in the same order. The steps of the algorithm are as follows (a code sketch follows this list): - -Job $P_i$ has two operations, of duration $P_{i1}$, $P_{i2}$, to be done on machines $M_1$, $M_2$ in that sequence. - -*Step 1. List A = { 1, 2, …, N }, List L1 = {}, List L2 = {}. - -*Step 2. From all available operation durations, pick the minimum. - -If the minimum belongs to $P_{k1}$, - -Remove K from list A; Add K to end of List L1. - -If minimum belongs to $P_{k2}$, - -Remove K from list A; Add K to beginning of List L2. - -*Step 3. Repeat Step 2 until List A is empty. - -*Step 4. Join List L1, List L2. This is the optimum sequence. - -Johnson's method only works optimally for two machines. However, since it is optimal, and easy to compute, some researchers have tried to adapt it for M machines ($M > 2$). - -The idea is as follows: Imagine that each job requires m operations in sequence, on $M_1, M_2, \ldots, M_m$. We combine the first m/2 machines into an (imaginary) machining center, MC1, and the remaining machines into a machining center MC2. Then the total processing time for a job P on MC1 = sum(operation times on first m/2 machines), and processing time for job P on MC2 = sum(operation times on last m/2 machines). - -By doing so, we have reduced the m-machine problem into a two-machining-center scheduling problem. We can solve this using Johnson's method.
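A compact sketch of Steps 1–4 in Python: visiting jobs in order of their minimum operation duration is equivalent to repeatedly extracting the global minimum, so a single sort suffices.

```
def johnson_two_machine(jobs):
    """jobs: {name: (p1, p2)} with processing times on machines M1 then M2.
    Returns a job order minimising the makespan (Johnson's rule)."""
    front, back = [], []
    for name, (p1, p2) in sorted(jobs.items(), key=lambda kv: min(kv[1])):
        if p1 <= p2:
            front.append(name)    # minimum on M1: append to list L1
        else:
            back.insert(0, name)  # minimum on M2: prepend to list L2
    return front + back           # Step 4: concatenate L1 and L2

print(johnson_two_machine({'J1': (3, 6), 'J2': (5, 2), 'J3': (1, 2), 'J4': (6, 6)}))
# ['J3', 'J1', 'J4', 'J2']
```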
- -Here is an example of a job-shop scheduling problem formulated in AMPL as a mixed-integer programming problem with indicator constraints: - - - -param N_JOBS; - -param N_MACHINES; - -set JOBS ordered = 1..N_JOBS; - -set MACHINES ordered = 1..N_MACHINES; - -param ProcessingTime{JOBS, MACHINES} > 0; - -param CumulativeTime{i in JOBS, j in MACHINES} = - -sum {jj in MACHINES: ord(jj) <= ord(j)} ProcessingTime[i,jj]; - -param TimeOffset{i1 in JOBS, i2 in JOBS: i1 <> i2} = - -max {j in MACHINES} - -(CumulativeTime[i1,j] - CumulativeTime[i2,j] + ProcessingTime[i2,j]); - -var end >= 0; - -var start{JOBS} >= 0; - -var precedes{i1 in JOBS, i2 in JOBS: ord(i1) < ord(i2)} binary; - -minimize makespan: end; - -subj to makespan_def{i in JOBS}: - -end >= start[i] + sum{j in MACHINES} ProcessingTime[i,j]; - -subj to no12_conflict{i1 in JOBS, i2 in JOBS: ord(i1) < ord(i2)}: - -precedes[i1,i2] ==> start[i2] >= start[i1] + TimeOffset[i1,i2]; - -subj to no21_conflict{i1 in JOBS, i2 in JOBS: ord(i1) < ord(i2)}: - -!precedes[i1,i2] ==> start[i1] >= start[i2] + TimeOffset[i2,i1]; - -data; - -param N_JOBS := 4; - -param N_MACHINES := 4; - -param ProcessingTime: - -1 2 3 4 := - -1 4 2 1 - -2 3 6 2 - -3 7 2 3 - -4 1 5 8; - - - -* Flow-shop scheduling is a similar problem but without the constraint that each operation must be done on a specific machine (only the order constraint is kept). - -* Open-shop scheduling is a similar problem but also without the order constraint. diff --git a/wiki/wikipedia/3145.txt b/wiki/wikipedia/3145.txt deleted file mode 100644 index 8a0513a79e91049a9c7c059f7806ae0266b5b842..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3145.txt +++ /dev/null @@ -1,23 +0,0 @@ -In the theory of formal languages, the Myhill–Nerode theorem provides a necessary and sufficient condition for a language to be regular. The theorem is named for John Myhill and Anil Nerode, who proved it at the University of Chicago in 1958 . - -Given a language $L$, and a pair of strings $x$ and $y$, define a distinguishing extension to be a string $z$ such that - -exactly one of the two strings $xz$ and $yz$ belongs to $L$. - -Define a relation ${}_L{\sim}$ on strings as $x {}_L{\sim}\ y$ iff there is no distinguishing extension for $x$ and $y$. It is easy to show that ${}_L{\sim}$ is an equivalence relation on strings, and thus it divides the set of all strings into equivalence classes. - -The Myhill–Nerode theorem states that a language $L$ is regular if and only if ${}_L{\sim}$ has a finite number of equivalence classes, and moreover, that this number is equal to the number of states in the minimal deterministic finite automaton (DFA) recognizing $L$. In particular, this implies that there is a unique minimal DFA for each regular language . - -Some authors refer to the ${}_L{\sim}$ relation as Nerode congruence, in honor of Anil Nerode. - -If $L$ is a regular language, then by definition there is a DFA $A$ that recognizes it, with only finitely many states. If there are $n$ states, then partition the set of all finite strings into $n$ subsets, where subset $S_i$ is the set of strings that, when given as input to automaton $A$, cause it to end in state $i$. For every two strings $x$ and $y$ that belong to the same subset, and for every choice of a third string $z$, automaton $A$ reaches the same state on input $xz$ as it reaches on input $yz$, and therefore must either accept both of the inputs $xz$ and $yz$ or reject both of them. 
Therefore, no string $z$ can be a distinguishing extension for $x$ and $y$, so they must be related by ${}_L{\sim}$. Thus, $S_i$ is a subset of an equivalence class of ${}_L{\sim}$. Combining this fact with the fact that every member of one of these equivalence classes belongs to one of the sets $S_i$, this gives a many-to-one relation from states of $A$ to equivalence classes, implying that the number of equivalence classes is finite and at most $n$. - -In the other direction, suppose that ${}_L{\sim}$ has finitely many equivalence classes. In this case, it is possible to design a deterministic finite automaton that has one state for each equivalence class. The start state of the automaton corresponds to the equivalence class containing the empty string, and the transition function from a state $X$ on input symbol $y$ takes the automaton to a new state, the state corresponding to the equivalence class containing string $xy$, where $x$ is an arbitrarily chosen string in the equivalence class for $X$. The definition of the Myhill–Nerode relation implies that the transition function is well-defined: no matter which representative string $x$ is chosen for state $X$, the same transition function value will result. A state of this automaton is accepting if the corresponding equivalence class contains a string in $L$; in this case, again, the definition of the relation implies that all strings in the same equivalence class must also belong to $L$, for otherwise the empty string would be a distinguishing string for some pairs of strings in the class. - -Thus, the existence of a finite automaton recognizing $L$ implies that the Myhill–Nerode relation has a finite number of equivalence classes, at most equal to the number of states of the automaton, and the existence of a finite number of equivalence classes implies the existence of an automaton with that many states. - -The Myhill–Nerode theorem may be used to show that a language $L$ is regular by proving that the number of equivalence classes of ${}_L{\sim}$ is finite. This may be done by an exhaustive case analysis in which, beginning from the empty string, distinguishing extensions are used to find additional equivalence classes until no more can be found. For example, the language consisting of binary representations of numbers that can be divided by 3 is regular. Given the empty string, $00$ (or $11$), $01$ and $10$ are distinguishing extensions resulting in the three classes (corresponding to numbers that give remainders 0, 1 and 2 when divided by 3), but after this step there is no distinguishing extension anymore. The minimal automaton accepting our language would have three states corresponding to these three equivalence classes. - -Another immediate corollary of the theorem is that if for a language $L$ the relation ${}_L{\sim}$ has infinitely many equivalence classes, it is not regular. It is this corollary that is frequently used to prove that a language is not regular. - -The Myhill–Nerode theorem can be generalized to tree automata. 
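The divisibility-by-3 example above can also be reproduced by brute force, approximating ${}_L{\sim}$ by comparing membership under all short extensions. A sketch in Python, treating the empty string as a representation of zero (an assumption made here so that exactly three classes appear):

```
from itertools import product

def in_lang(s):
    # Binary strings whose value is divisible by 3; "" is read as zero.
    return int("0" + s, 2) % 3 == 0

def signature(x, k=4):
    # Membership behaviour of x under every extension z with |z| <= k.
    exts = ["".join(t) for n in range(k + 1) for t in product("01", repeat=n)]
    return tuple(in_lang(x + z) for z in exts)

classes = {}
for n in range(5):
    for t in product("01", repeat=n):
        s = "".join(t)
        classes.setdefault(signature(s), []).append(s)

print(len(classes))            # 3: remainders 0, 1 and 2 when divided by 3
for members in classes.values():
    print(members[:5])
```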
diff --git a/wiki/wikipedia/3146.txt b/wiki/wikipedia/3146.txt deleted file mode 100644 index 91903d009ed4ff03700ccc2a6c2f20197f782b27..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3146.txt +++ /dev/null @@ -1,28 +0,0 @@ -In probability theory, Kolmogorov's zero–one law, named in honor of Andrey Nikolaevich Kolmogorov, specifies that a certain type of event, called a tail event, will either almost surely happen or almost surely not happen; that is, the probability of such an event occurring is zero or one. - -Tail events are defined in terms of infinite sequences of random variables. Suppose -$$ -X_1,X_2,X_3,\dots -$$ - -is an infinite sequence of independent random variables (not necessarily identically distributed). Let $\mathcal{F}$ be the σ-algebra generated by the $X_i$. Then, a tail event $F \in \mathcal{F}$ is an event which is probabilistically independent of each finite subset of these random variables. (Note: $F$ belonging to $\mathcal{F}$ implies that membership in $F$ is uniquely determined by the values of the $X_i$ but the latter condition is strictly weaker and does not suffice to prove the zero-one law.) For example, the event that the sequence converges, and the event that its sum converges are both tail events. In an infinite sequence of coin-tosses, a sequence of 100 consecutive heads occurring infinitely many times is a tail event. - -Tail events are precisely those events whose occurrence can still be determined if an arbitrarily large but finite initial segment of the $X_i$ is removed. - -In many situations, it can be easy to apply Kolmogorov's zero–one law to show that some event has probability 0 or 1, but surprisingly hard to determine which of these two extreme values is the correct one. - -A more general statement of Kolmogorov's zero–one law holds for sequences of independent σ-algebras. Let (Ω,F,P) be a probability space and let Fn be a sequence of mutually independent σ-algebras contained in F. Let -$$ -G_n=\sigma\bigg(\bigcup_{k=n}^\infty F_k\bigg) -$$ - -be the smallest σ-algebra containing Fn, Fn+1, …. Then Kolmogorov's zero–one law asserts that for any event -$$ -F\in \bigcap_{n=1}^\infty G_n -$$ - -one has either P(F) = 0 or 1. - -The statement of the law in terms of random variables is obtained from the latter by taking each Fn to be the σ-algebra generated by the random variable Xn. A tail event is then by definition an event which is measurable with respect to the σ-algebra generated by all Xn, but which is independent of any finite number of Xn. That is, a tail event is precisely an element of the intersection $\textstyle{\bigcap_{n=1}^\infty G_n}$. - -An invertible measure-preserving transformation on a standard probability space that obeys the 0-1 law is called a Kolmogorov automorphism. All Bernoulli automorphisms are Kolmogorov automorphisms but not vice versa. diff --git a/wiki/wikipedia/3147.txt b/wiki/wikipedia/3147.txt deleted file mode 100644 index ab97ed82bf2956ce6ebaba8070286b990b243b13..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3147.txt +++ /dev/null @@ -1,19 +0,0 @@ -In algebra, Hua's identity states that for any elements a, b in a division ring, - -a - \left(a^{-1} + \left(b^{-1} - a\right)^{-1}\right)^{-1} = aba - -whenever $ab \ne 0, 1$. Replacing $b$ with $-b^{-1}$ gives another equivalent form of the identity: - -\left(a + ab^{-1}a\right)^{-1} + (a + b)^{-1} = a^{-1}. 
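Both forms of the identity can be sanity-checked numerically, for instance over $2 \times 2$ real matrices, where the computation only requires the particular elements involved to be invertible (cf. the remark on general rings below). A sketch assuming Python with numpy; generic random matrices make all the needed inverses exist:

```
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.random((2, 2)), rng.random((2, 2))  # generic, so inverses below exist
inv = np.linalg.inv

lhs = a - inv(inv(a) + inv(inv(b) - a))
print(np.allclose(lhs, a @ b @ a))             # True: first form

lhs2 = inv(a + a @ inv(b) @ a) + inv(a + b)
print(np.allclose(lhs2, inv(a)))               # True: second form
```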
The identity is used in a proof of Hua's theorem, which states that if $\sigma$ is a function between division rings satisfying - -\sigma(a + b) = \sigma(a) + \sigma(b), \quad \sigma(1) = 1, \quad \sigma(a^{-1}) = \sigma(a)^{-1}, - -then $\sigma$ is a homomorphism or an antihomomorphism. This theorem is connected to the fundamental theorem of projective geometry. - -One has - -(a - aba)\left(a^{-1} + \left(b^{-1} - a\right)^{-1}\right) = 1 - ab + ab\left(b^{-1} - a\right)\left(b^{-1} - a\right)^{-1} = 1. - -The proof is valid in any ring as long as $a, b, ab - 1$ are units. diff --git a/wiki/wikipedia/3148.txt b/wiki/wikipedia/3148.txt deleted file mode 100644 index 3c25e117e1f505b5b27cda4c0dde21c2863b3ae8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3148.txt +++ /dev/null @@ -1,65 +0,0 @@ -In mathematics, mirror symmetry is a conjectural relationship between certain Calabi–Yau manifolds and a constructed "mirror manifold". The conjecture allows one to relate the number of rational curves on a Calabi-Yau manifold (encoded as Gromov–Witten invariants) to integrals from a family of varieties (encoded as period integrals on a variation of Hodge structures). In short, this means there is a relation between the number of genus $g$ algebraic curves of degree $d$ on a Calabi-Yau variety $X$ and integrals on a dual variety $\check{X}$. These relations were originally discovered by Candelas, De la Ossa, Green, and Parkes in a paper studying a generic quintic threefold in $\mathbb{P}^4$ as the variety $X$ and a construction from the quintic Dwork family $X_\psi$ giving $\check{X} = \tilde{X}_\psi$. Shortly after, Sheldon Katz wrote a summary paper outlining part of their construction and conjecturing what the rigorous mathematical interpretation could be. - -Originally, the construction of mirror manifolds was discovered through an ad-hoc procedure. Essentially, to a generic quintic threefold $X \subset \mathbb{CP}^4$ there should be associated a one-parameter family of Calabi-Yau manifolds $X_\psi$ with multiple singularities. After these singularities are resolved by blowing up, a new Calabi-Yau manifold $X^\vee$ is constructed, which has a flipped Hodge diamond. In particular, there are isomorphisms
    -$$ -H^q(X,\Omega_X^p) \cong H^q(X^\vee, \Omega_{X^\vee}^{3-p}) -$$ - -
    - -but most importantly, there is an isomorphism - -
    -$$ -H^1(X,\Omega_X^1) \cong H^1(X^\vee, \Omega_{X^\vee}^{2}) -$$ - -
- -where the string theory (the A-model of $X$) for states in $H^1(X,\Omega_X^1)$ is interchanged with the string theory (the B-model of $X^\vee$) having states in $H^1(X^\vee, \Omega_{X^\vee}^{2})$. The string theory in the A-model depends only upon the Kähler or symplectic structure on $X$, while the B-model depends only upon the complex structure on $X^\vee$. Here we outline the original construction of mirror manifolds, and consider the string-theoretic background and conjecture with the mirror manifolds in a later section of this article. - -Recall that a generic quintic threefold is cut out by a degree $5$ homogeneous polynomial on $\mathbb{P}^4$. Notice the vector space of global sections has dimension
$\dim \Gamma(\mathbb{P}^4,\mathcal{O}_{\mathbb{P}^4}(5)) = \binom{9}{4} = 126$
but there are two equivalences of these polynomials. First, polynomials under scaling by the algebraic torus $\mathbb{G}_m$ (non-zero scalars of the base field) give equivalent spaces. Second, projective equivalence is given by the automorphism group of $\mathbb{P}^4$, $\text{PGL}(5)$, which is $24$-dimensional. This gives a $101$-dimensional parameter space
    $U_{smooth} \subset \mathbb{P}(\Gamma(\mathbb{P}^4,\mathcal{O}_{\mathbb{P}^4}(5)))/PGL(5)$
    since $126 - 24 - 1 = 101$, which can be constructed using Geometric invariant theory. The set $U_{\text{smooth}}$ corresponds to the equivalence classes of polynomials which define smooth Calabi-Yau quintic threefolds in $\mathbb{P}^4$, giving a moduli space of Calabi-Yau quintics. Now, using Serre duality and the fact each Calabi-Yau manifold has trivial canonical bundle $\omega_X$, the space of deformations has an isomorphism
    $H^1(X,T_X) \cong H^2(X,\Omega_X)$
    with the $(2,1)$ part of the Hodge structure on $H^3(X)$. Using the Lefschetz hyperplane theorem the only non-trivial cohomology group is $H^3(X)$ since the others are isomorphic to $H^i(\mathbb{P}^4)$. Using the Euler characteristic and the Euler class, which is the top Chern class, the dimension of this group is $204$. This is because
    \begin{align} - -\chi(X) &= -200 \\ - -&= h^0 + h^2 - h^3 +h^4 + h^6 \\ - -&= 1 + 1 - \dim H^3(X) + 1 + 1 - -\end{align}
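Solving the last line of this computation for the missing Hodge number gives
$$
\dim H^3(X) = 4 - \chi(X) = 4 + 200 = 204.
$$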
    Using the Hodge structure we can find the dimensions of each of the components. First, because $X$ is Calabi-Yau, $\omega_X \cong \mathcal{O}_X$ so
    $H^0(X,\Omega_X^3) \cong H^0(X,\mathcal{O}_X) $
    giving the Hodge numbers $h^{0,3} = h^{3,0} = 1$, hence
    $\dim H^2(X,\Omega_X) = h^{1,2} = 101$
giving the dimension of the moduli space of Calabi-Yau manifolds. By the Bogomolov–Tian–Todorov theorem, all such deformations are unobstructed, so the smooth space $U_{smooth}$ is in fact the moduli space of quintic threefolds. The point of this construction is to show how the complex parameters in this moduli space are converted into Kähler parameters of the mirror manifold. - -There is a distinguished family of Calabi-Yau manifolds $X_\psi$ called the Dwork family. It is the projective family
    X_\psi = \text{Proj} \left( - -\frac{\mathbb{C}[\psi][x_0,\ldots, x_4]}{(x_0^5 + \cdots + x_4^5 - 5\psi x_0x_1x_2x_3x_4)} - -\right)
over the complex plane $\text{Spec}(\mathbb{C}[\psi])$. Notice there is only a single dimension of complex deformations of this family, coming from varying the value of $\psi$. This is important because the Hodge diamond of the mirror manifold $\check{X}$ has
    $\dim H^{2,1}(\check{X}) = 1$.
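In terms of Hodge numbers, the flip recorded by the isomorphism $H^1(X,\Omega_X^1) \cong H^1(X^\vee, \Omega_{X^\vee}^{2})$ reads
$$
h^{1,1}(\check{X}) = h^{2,1}(X) = 101, \qquad h^{2,1}(\check{X}) = h^{1,1}(X) = 1,
$$
so the single complex parameter $\psi$ of the Dwork family matches the single Kähler parameter of the quintic.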
Moreover, the family $X_\psi$ has symmetry group
    $G = \{ (a_0,\ldots, a_4) \in (\mathbb{Z}/5)^5 : \sum a_i = 0 \}$
    acting by
    $(a_0,\ldots,a_4)\cdot [x_0:\cdots:x_4] = [e^{ a_0\cdot 2\pi i/5}x_0:\cdots : e^{ a_4\cdot 2\pi i/5}x_4]$
    Notice the projectivity of $X_\psi$ is the reason for the condition
    $\sum a_i = 0$
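Concretely, each monomial $x_i^5$ is automatically invariant, since $x_i^5 \mapsto e^{2\pi i a_i}x_i^5 = x_i^5$, while the product term transforms as
$$
x_0x_1x_2x_3x_4 \mapsto e^{2\pi i(a_0+\cdots+a_4)/5}\,x_0x_1x_2x_3x_4,
$$
so the defining polynomial of $X_\psi$, and hence the projective family, is preserved exactly when $\sum a_i \equiv 0 \pmod 5$.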
The associated quotient variety $X_\psi / G$ has a crepant resolution, which gives the mirror manifold. The superconformal field theory of a Calabi-Yau manifold $X$ should be equivalent to the dual superconformal field theory of the mirror manifold $X^\vee$. Here conformal means conformal equivalence, which is the same as an equivalence class of complex structures on the curve $\Sigma$. - -There are two variants of the non-linear sigma models, called the A-model and the B-model, which consider the pairs $(X,\omega^\mathbb{C})$ and $(X,J)$ and their moduli. - -For the same Calabi-Yau manifold $X$ in the A-model subsection, there is a dual superconformal field theory which has states in the eigenspace $H^1(X,T_X)$ of the operator $\overline{Q}$. Its three-point correlation function is defined as
    \langle \theta_1,\theta_2,\theta_3 \rangle = - -\int_X\Omega \wedge (\nabla_{\theta_1}\nabla_{\theta_2}\nabla_{\theta_3}\Omega)
where $\Omega \in H^0(X,\Omega_X^3)$ is a holomorphic 3-form on $X$ and for an infinitesimal deformation $\theta$ (since $H^1(X,T_X)$ is the tangent space of the moduli space of Calabi-Yau manifolds containing $X$, by the Kodaira–Spencer map and the Bogomolov–Tian–Todorov theorem) there is the Gauss–Manin connection $\nabla_\theta$ taking a $(p,q)$ class to a $(p-1,q+1)$ class, hence
    $\Omega \wedge (\nabla_{\theta_1}\nabla_{\theta_2}\nabla_{\theta_3}\Omega) \in H^3(X,\Omega_X^3)$
can be integrated on $X$. Note that this correlation function only depends on the complex structure of $X$. - -The action of the cohomology classes $\theta \in H^1(X,T_X)$ on $\Omega \in H^0(X,\Omega_X^3)$ can also be understood as a cohomological variant of the interior product. Locally, the class $\theta$ corresponds to a Čech cocycle $[\theta_{i}]_{i \in I}$ for some nice enough cover $\{U_i \}_{i \in I}$, giving a section $\theta_i \in T_X(U_i)$. Then, the interior product gives an element
    $\iota_{\theta_i}(\Omega|_{U_i}) \in H^0(U_i,\Omega_X^2|_{U_i})$
    which can be glued back into an element $\iota_\theta(\Omega)$ of $H^1(X,\Omega_X^2)$. This is because on the overlaps
    $U_i\cap U_j = U_{ij}$, $\theta_{i}|_{ij} = \theta_{j}|_{ij}$
    giving
    \begin{align} - -(\iota_{\theta_i}\Omega|_{U_{i}})|_{U_{ij}} &= \iota_{ - -\theta_i|_{U_{ij}} - -} (\Omega|_{U_{ij}}) \\ - -&= \iota_{ - -\theta_j|_{U_{ij}} - -} (\Omega|_{U_{ij}}) \\ - -&= (\iota_{\theta_j}\Omega|_{U_j})|_{U_{ij}} - -\end{align}
    hence it defines a 1-cocycle. Repeating this process gives a 3-cocycle
    $\iota_{\theta_1}\iota_{\theta_2}\iota_{\theta_3}\Omega \in H^3(X,\mathcal{O}_X)$
which is equal to $\nabla_{\theta_1}\nabla_{\theta_2}\nabla_{\theta_3}\Omega$. This is because locally the Gauss–Manin connection acts as the interior product. - -Mathematically, the B-model is a variation of Hodge structures which was originally given by the construction from the Dwork family. - -Relating these two models of string theory by resolving the ambiguity of sign for the operators $(Q,\overline{Q})$ led physicists to the following conjecture: for a Calabi-Yau manifold $X$ there should exist a mirror Calabi-Yau manifold $X^\vee$ such that there exists a mirror isomorphism
    $H^1(X,\Omega_X) \cong H^1(X^\vee, T_{X^\vee})$
    giving the compatibility of the associated A-model and B-model. This means given $H \in H^1(X,\Omega_X)$ and $\theta \in H^1(X^\vee,T_{X^\vee})$ such that $H \mapsto \theta$ under the mirror map, there is the equality of correlation functions
    $\langle H,H,H\rangle = \langle \theta,\theta,\theta\rangle$
This is significant because it relates the number of degree $d$ genus $0$ curves on a quintic threefold $X$ in $\mathbb{P}^4$ (so $H^{1,1}\cong \mathbb{Z}$) to integrals in a variation of Hodge structures. Moreover, these integrals are actually computable! diff --git a/wiki/wikipedia/3149.txt b/wiki/wikipedia/3149.txt deleted file mode 100644 index 4bc31b199a7cc34672300e71f2fff279ca0991e6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3149.txt +++ /dev/null @@ -1,13 +0,0 @@ -A kinetic triangulation data structure is a kinetic data structure that maintains a triangulation of a set of moving points. Maintaining a kinetic triangulation is important for applications that involve motion planning, such as video games, virtual reality, dynamic simulations and robotics. - -The efficiency of a kinetic data structure is defined based on the ratio of the number of internal events to external events, thus good runtime bounds can sometimes be obtained by choosing a triangulation scheme that generates a small number of external events. - -For simple affine motion of the points, the number of discrete changes to the convex hull is $\Omega(n^2)$ in the worst case, thus the number of changes to any triangulation is also lower bounded by $\Omega(n^2)$. Finding any triangulation scheme that has a near-quadratic bound on the number of discrete changes is an important open problem. - -The Delaunay triangulation seems like a natural candidate, but a tight worst-case analysis of the number of discrete changes that will occur to the Delaunay triangulation (external events) was considered an open problem until 2015; it has now been bounded to be between $\Omega(n^2)$ and $O(n^{2+\epsilon})$. - -There is a kinetic data structure that efficiently maintains the Delaunay triangulation of a set of moving points, in which the ratio of the total number of events to the number of external events is $O(1)$. - -Kaplan et al. developed a randomized triangulation scheme that experiences an expected number of $O(n^2 \beta_{s+2}(n) \log^2 n)$ external events, where $s$ is the maximum number of times each triple of points can become collinear, $\beta_{s+2}(q) = \frac{\lambda_{s+2}(q)}{q}$, and $\lambda_{s+2}(q)$ is the maximum length of a Davenport–Schinzel sequence of order $s + 2$ on $q$ symbols. - -There is a kinetic data structure (due to Agarwal et al.) which maintains a pseudo-triangulation in $O(n^2 2^{\sqrt{\log n\log\log n}})$ events total. All events are external and require $O(\log n)$ time to process. diff --git a/wiki/wikipedia/315.txt b/wiki/wikipedia/315.txt deleted file mode 100644 index 940a514593d7f06e49b31eca235dbd71d6fb9fbb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/315.txt +++ /dev/null @@ -1,157 +0,0 @@ -In parallel computing, a barrier is a type of synchronization method. A barrier for a group of threads or processes in the source code means any thread/process must stop at this point and cannot proceed until all other threads/processes reach this barrier. - -Many collective routines and directive-based parallel languages impose implicit barriers. For example, a parallel do loop in Fortran with OpenMP will not be allowed to continue on any thread until the last iteration is completed. This is in case the program relies on the result of the loop immediately after its completion. In message passing, any global communication (such as reduction or scatter) may imply a barrier. - -In concurrent computing, a barrier may be in a raised or lowered state.
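For comparison with the hand-rolled implementations below, C++20 ships both of these primitives in the standard library. A minimal usage sketch, assuming a C++20 toolchain; the thread count 4 is arbitrary:

```
#include <barrier>
#include <latch>
#include <thread>
#include <vector>
#include <iostream>

int main() {
    const int p = 4;
    std::barrier<> sync_point(p);  // reusable: all p threads must arrive each round
    std::latch done(p);            // one-shot count-down latch

    std::vector<std::thread> workers;
    for (int id = 0; id < p; ++id)
        workers.emplace_back([&] {
            // ... phase 1 work ...
            sync_point.arrive_and_wait(); // nobody proceeds until all have arrived
            // ... phase 2 work ...
            done.count_down();            // lower the latch once, permanently
        });
    done.wait();                          // main thread blocks until the count hits 0
    for (auto& t : workers) t.join();
    std::cout << "all phases complete\n";
}
```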
The term latch is sometimes used to refer to a barrier that starts in the raised state and cannot be re-raised once it is in the lowered state. The term count-down latch is sometimes used to refer to a latch that is automatically lowered once a pre-determined number of threads/processes have arrived. - -The basic barrier has two main variables, one of which records the pass/stop state of the barrier, the other of which keeps the total number of threads that have entered the barrier. The barrier state is initialized to "stop" by the first thread coming into the barrier. Whenever a thread enters, based on the number of threads already in the barrier, only if it is the last one, the thread sets the barrier state to be "pass" so that all the threads can get out of the barrier. On the other hand, when the incoming thread is not the last one, it is trapped in the barrier and keeps testing if the barrier state has changed from "stop" to "pass", and it gets out only when the barrier state changes to "pass". The following C++ code demonstrates this procedure. - - - -#include <mutex> // for std::mutex - -struct barrier_type - -{ - -// how many processors have entered the barrier - -// initialize to 0 - -int arrive_counter; - -// how many processors have exited the barrier - -// initialize to p - -int leave_counter; - -int flag; - -std::mutex lock; - -}; - -// barrier for p processors - -void barrier(barrier_type* b, int p) - -{ - -b->lock.lock(); - -if (b->arrive_counter == 0) - -{ - -b->lock.unlock(); - -while (b->leave_counter != p); // wait for all to leave before clearing - -b->lock.lock(); - -b->flag = 0; // first arriver clears flag - -} - -b->arrive_counter++; - -if (b->arrive_counter == p) // last arriver sets flag - -{ - -b->arrive_counter = 0; - -b->leave_counter = 0; - -b->flag = 1; - -} - -b->lock.unlock(); - -while (b->flag == 0); // wait for flag - -b->lock.lock(); - -b->leave_counter++; - -b->lock.unlock(); - -} - - - -The potential problem is: - -# Due to all the threads repeatedly accessing the global variable for pass/stop, the communication traffic is rather high, which decreases the scalability. - -This problem can be resolved by regrouping the threads and using a multi-level barrier, e.g. Combining Tree Barrier. Also hardware implementations may have the advantage of higher scalability. - -Instead of using the same value to represent pass/stop, sequential barriers use opposite values for pass/stop state. For example, if barrier 1 uses 0 to stop the threads, barrier 2 will use 1 to stop threads and barrier 3 will use 0 to stop threads again and so on. The following C++ code demonstrates this. - - - -#include <mutex> // for std::mutex - -struct barrier_type - -{ - -int counter; // initialize to 0 - -int flag; // initialize to 0 - -std::mutex lock; - -}; - -int local_sense = 0; // private per processor - -// barrier for p processors - -void barrier(barrier_type* b, int p) - -{ - -local_sense = 1 - local_sense; - -b->lock.lock(); - -b->counter++; - -int arrived = b->counter; - -if (arrived == p) // last arriver sets flag - -{ - -b->lock.unlock(); - -b->counter = 0; - -// memory fence to ensure that the change to counter - -// is seen before the change to flag - -b->flag = local_sense; - -} - -else - -{ - -b->lock.unlock(); - -while (b->flag != local_sense); // wait for flag - -} - -} - - - -A Combining Tree Barrier is a hierarchical way of implementing a barrier that resolves the scalability problem by avoiding the situation where all threads spin at the same location.
- -In k-Tree Barrier, all threads are equally divided into subgroups of k threads and a first round of synchronization is done within these subgroups. Once all subgroups have done their synchronizations, the first thread in each subgroup enters the second level for further synchronization. In the second level, like in the first level, the threads form new subgroups of k threads and synchronize within groups, sending out one thread in each subgroup to the next level, and so on. Eventually, in the final level there is only one subgroup to be synchronized. After the final-level synchronization, the releasing signal is transmitted to upper levels and all threads get past the barrier. - -The hardware barrier uses hardware to implement the above basic barrier model. - -The simplest hardware implementation uses dedicated wires to transmit signals to implement the barrier. This dedicated wire performs an OR/AND operation to act as the pass/block flags and thread counter. For small systems, such a model works and communication speed is not a major concern. In large multiprocessor systems this hardware design can make barrier implementation have high latency. Connecting processors through a network is one implementation used to lower the latency, analogous to the Combining Tree Barrier. diff --git a/wiki/wikipedia/3150.txt b/wiki/wikipedia/3150.txt deleted file mode 100644 index 9e23bb678c7eb828bee99933ed007415a4f85a13..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3150.txt +++ /dev/null @@ -1,57 +0,0 @@ -In mathematics, Schreier's lemma is a theorem in group theory used in the Schreier–Sims algorithm and also for finding a presentation of a subgroup. - -Suppose $H$ is a subgroup of $G$, which is finitely generated with generating set $S$, that is, $G = \langle S\rangle$. - -Let $R$ be a right transversal of $H$ in $G$. In other words, $R$ is (the image of) a section of the quotient map $G \to H\backslash G$, where $H\backslash G$ denotes the set of right cosets of $H$ in $G$. - -We make the definition that given $g \in G$, $\overline{g}$ is the chosen representative in the transversal $R$ of the coset $Hg$, that is, -$$ -g\in H\overline{g}. -$$ - -Then $H$ is generated by the set -$$ -\{rs(\overline{rs})^{-1}|r\in R, s\in S\}. -$$ - -Hence, in particular, Schreier's lemma implies that every subgroup of finite index of a finitely generated group is again finitely generated. - -Let us establish the evident fact that the group Z3 = Z/3Z is indeed cyclic. Via Cayley's theorem, Z3 is a subgroup of the symmetric group S3. Now, -$$ -\mathbb{Z}_3=\{ e, (1\ 2\ 3), (1\ 3\ 2) \} -$$ -$$ -S_3= \{ e, (1\ 2), (1\ 3), (2\ 3), (1\ 2\ 3), (1\ 3\ 2) \} -$$ - -where $e$ is the identity permutation. Note $S_3 = \langle \{ s_1 = (1\ 2), s_2 = (1\ 2\ 3) \} \rangle$. - -Z3 has just two cosets, Z3 and S3 \ Z3, so we select the transversal $\{ t_1 = e, t_2 = (1\ 2) \}$, and we have - -\begin{matrix} - -t_1s_1 = (1\ 2),&\quad\text{so}\quad&\overline{t_1s_1} = (1\ 2)\\ - -t_1s_2 = (1\ 2\ 3) ,&\quad\text{so}\quad& \overline{t_1s_2} = e\\ - -t_2s_1 = e ,&\quad\text{so}\quad& \overline{t_2s_1} = e\\ - -t_2s_2 = (2\ 3) ,&\quad\text{so}\quad& \overline{t_2s_2} = (1\ 2). \\ - -\end{matrix} - -Finally, -$$ -t_1s_1\overline{t_1s_1}^{-1} = e -$$ -$$ -t_1s_2\overline{t_1s_2}^{-1} = (1\ 2\ 3) -$$ -$$ -t_2s_1\overline{t_2s_1}^{-1} = e -$$ -$$ -t_2s_2\overline{t_2s_2}^{-1} = (1\ 2\ 3).
-$$ - -Thus, by Schreier's subgroup lemma, { e, (1 2 3) } generates Z3, but having the identity in the generating set is redundant, so we can remove it to obtain another generating set for Z3, { (1 2 3) } (as expected). diff --git a/wiki/wikipedia/3151.txt b/wiki/wikipedia/3151.txt deleted file mode 100644 index 67e5b8712958e724c6c88ac173ed13f23754d990..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3151.txt +++ /dev/null @@ -1,35 +0,0 @@ -A counterexample is any exception to a generalization. In logic a counterexample disproves the generalization, and does so rigorously in the fields of mathematics and philosophy. For example, the fact that "John Smith is not a lazy student" is a counterexample to the generalization “students are lazy”, and it is both a counterexample to, and a disproof of, the generalization “all students are lazy.” - -In mathematics, the term "counterexample" is also used (by a slight abuse) to refer to examples which illustrate the necessity of the full hypothesis of a theorem. This is most often done by considering a case where a part of the hypothesis is not satisfied and the conclusion of the theorem does not hold. - -In mathematics, counterexamples are often used to prove the boundaries of possible theorems. By using counterexamples to show that certain conjectures are false, mathematical researchers can then avoid going down blind alleys and learn to modify conjectures to produce provable theorems. It is sometimes said that mathematical development consists primarily in finding (and proving) theorems and counterexamples. - -Suppose that a mathematician is studying geometry and shapes, and she wishes to prove certain theorems about them. She conjectures that "All rectangles are squares", and she is interested in knowing whether this statement is true or false. - -In this case, she can either attempt to prove the truth of the statement using deductive reasoning, or she can attempt to find a counterexample of the statement if she suspects it to be false. In the latter case, a counterexample would be a rectangle that is not a square, such as a rectangle with two sides of length 5 and two sides of length 7. However, despite having found rectangles that were not squares, all the rectangles she did find had four sides. She then makes the new conjecture "All rectangles have four sides". This is logically weaker than her original conjecture, since every square has four sides, but not every four-sided shape is a square. - -The above example explained — in a simplified way — how a mathematician might weaken her conjecture in the face of counterexamples, but counterexamples can also be used to demonstrate the necessity of certain assumptions and hypotheses. For example, suppose that after a while, the mathematician above settled on the new conjecture "All shapes that are rectangles and have four sides of equal length are squares". This conjecture has two parts to the hypothesis: the shape must be 'a rectangle' and must have 'four sides of equal length'. The mathematician then would like to know if she can remove either assumption, and still maintain the truth of her conjecture. This means that she needs to check the truth of the following two statements: - -# "All shapes that are rectangles are squares." - -# "All shapes that have four sides of equal length are squares". - -A counterexample to (1) was already given above, and a counterexample to (2) is a non-square rhombus. Thus, the mathematician now knows that both assumptions were indeed necessary.
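The number-theoretic counterexamples in the passage below can be found or checked mechanically. A brute-force sketch in C++, not from the sources; the search bound 100 is arbitrary:

```
#include <iostream>

// Trial-division primality test, sufficient for a small search range.
bool is_prime(unsigned n) {
    if (n < 2) return false;
    for (unsigned d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}

int main() {
    // Search for a counterexample to "all prime numbers are odd numbers".
    for (unsigned n = 0; n <= 100; ++n)
        if (is_prime(n) && n % 2 == 0)
            std::cout << n << " is prime but not odd\n";   // prints only 2

    // 1 is neither prime nor composite, contradicting
    // "all natural numbers are either prime or composite".
    std::cout << "1 prime? " << is_prime(1) << "\n";       // prints 0
}
```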
- -A counterexample to the statement "all prime numbers are odd numbers" is the number 2, as it is a prime number but is not an odd number. Neither of the numbers 7 or 10 is a counterexample, as neither of them is enough to contradict the statement. In this example, 2 is in fact the only possible counterexample to the statement, even though that alone is enough to contradict the statement. In a similar manner, the statement "All natural numbers are either prime or composite" has the number 1 as a counterexample, as 1 is neither prime nor composite. - -Euler's sum of powers conjecture was disproved by counterexample. It asserted that at least n nth powers were necessary to sum to another nth power. This conjecture was disproved in 1966, with a counterexample involving n = 5; other n = 5 counterexamples are now known, as well as some n = 4 counterexamples. - -Witsenhausen's counterexample shows that it is not always true (for control problems) that a quadratic loss function and a linear equation of evolution of the state variable imply optimal control laws that are linear. - -Other examples include the disproofs of the Seifert conjecture, the Pólya conjecture, the conjecture of Hilbert's fourteenth problem, Tait's conjecture, and the Ganea conjecture. - -In philosophy, counterexamples are usually used to argue that a certain philosophical position is wrong by showing that it does not apply in certain cases. Alternatively, the first philosopher can modify their claim so that the counterexample no longer applies; this is analogous to when a mathematician modifies a conjecture because of a counterexample. - -For example, in Plato's Gorgias, Callicles, trying to define what it means to say that some people are "better" than others, claims that those who are stronger are better. - -But Socrates replies that, because of their strength of numbers, the class of common rabble is stronger than the propertied class of nobles, even though the masses are prima facie of worse character. Thus Socrates has proposed a counterexample to Callicles' claim, by looking in an area that Callicles perhaps did not expect — groups of people rather than individual persons. - -Callicles might challenge Socrates' counterexample, arguing perhaps that the common rabble really are better than the nobles, or that even in their large numbers, they still are not stronger. But if Callicles accepts the counterexample, then he must either withdraw his claim, or modify it so that the counterexample no longer applies. For example, he might modify his claim to refer only to individual persons, requiring him to think of the common people as a collection of individuals rather than as a mob. - -As it happens, he modifies his claim to say "wiser" instead of "stronger", arguing that no amount of numerical superiority can make people wiser. diff --git a/wiki/wikipedia/3152.txt b/wiki/wikipedia/3152.txt deleted file mode 100644 index 52f82e773ab9bc1a12c6a39546175deee4e98f03..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3152.txt +++ /dev/null @@ -1,163 +0,0 @@ -Quartic or biquadratic reciprocity is a collection of theorems in elementary and algebraic number theory that state conditions under which the congruence x^4 ≡ p (mod q) is solvable; the word "reciprocity" comes from the form of some of these theorems, in that they relate the solvability of the congruence x^4 ≡ p (mod q) to that of x^4 ≡ q (mod p). - -Euler made the first conjectures about biquadratic reciprocity.
Gauss published two monographs on biquadratic reciprocity. In the first one (1828) he proved Euler's conjecture about the biquadratic character of 2. In the second one (1832) he stated the biquadratic reciprocity law for the Gaussian integers and proved the supplementary formulas. He said that a third monograph would be forthcoming with the proof of the general theorem, but it never appeared. Jacobi presented proofs in his Königsberg lectures of 1836–37. The first published proofs were by Eisenstein. - -Since then a number of other proofs of the classical (Gaussian) version have been found, as well as alternate statements. Lemmermeyer states that there has been an explosion of interest in the rational reciprocity laws since the 1970s. - -A quartic or biquadratic residue (mod p) is any number congruent to the fourth power of an integer (mod p). If x^4 ≡ a (mod p) does not have an integer solution, a is a quartic or biquadratic nonresidue (mod p). - -As is often the case in number theory, it is easiest to work modulo prime numbers, so in this section all moduli p, q, etc., are assumed to be positive, odd primes. - -2 is a quadratic residue mod p if and only if p ≡ ±1 (mod 8). Since p is also ≡ 1 (mod 4), this means p ≡ 1 (mod 8). Every such prime is the sum of a square and twice a square. - -Gauss proved such statements by working in the ring Z[i] of Gaussian integers. Its units are 1, i, -1, and -i. They are similar to 1 and -1 in the ordinary integers, in that they divide every number. The units are the powers of i. - -Given a number λ = a + bi, its conjugate is a - bi and its associates are the four numbers - - λ = +a + bi - - iλ = -b + ai - - -λ = -a - bi - --iλ = +b - ai - -If λ = a + bi, the norm of λ, written Nλ, is the number a^2 + b^2. If λ and μ are two Gaussian integers, Nλμ = Nλ Nμ; in other words, the norm is multiplicative. The norm of zero is zero, the norm of any other number is a positive integer. ε is a unit if and only if Nε = 1. The square root of the norm of λ, a nonnegative real number which may not be a Gaussian integer, is the absolute value of λ. - -Gauss proves that Z[i] is a unique factorization domain and shows that the primes fall into three classes: - -* 2 is a special case: 2 = i^3 (1 + i)^2. It is the only prime in Z divisible by the square of a prime in Z[i]. In algebraic number theory, 2 is said to ramify in Z[i]. - -* Positive primes in Z ≡ 3 (mod 4) are also primes in Z[i]. In algebraic number theory, these primes are said to remain inert in Z[i]. - -* Positive primes in Z ≡ 1 (mod 4) are the product of two conjugate primes in Z[i]. In algebraic number theory, these primes are said to split in Z[i]. - -Thus, inert primes are 3, 7, 11, 19, ... and a factorization of the split primes is - - 5 = (2 + i) × (2 - i), - -13 = (2 + 3i) × (2 - 3i), - -17 = (4 + i) × (4 - i), - -29 = (2 + 5i) × (2 - 5i), ... - -The associates and conjugate of a prime are also primes. - -Note that the norm of an inert prime q is Nq = q^2 ≡ 1 (mod 4); thus the norm of all primes other than 1 + i and its associates is ≡ 1 (mod 4). - -Gauss calls a number in Z[i] odd if its norm is an odd integer. Thus all primes except 1 + i and its associates are odd. The product of two odd numbers is odd and the conjugate and associates of an odd number are odd. - -In order to state the unique factorization theorem, it is necessary to have a way of distinguishing one of the associates of a number. Gauss defines an odd number to be primary if it is ≡ 1 (mod (1 + i)^3). It is straightforward to show that every odd number has exactly one primary associate.
An odd number λ = a + bi is primary if a + b ≡ a - b ≡ 1 (mod 4); i.e., a ≡ 1 and b ≡ 0, or a ≡ 3 and b ≡ 2 (mod 4). The product of two primary numbers is primary and the conjugate of a primary number is also primary. - -The unique factorization theorem for Z[i] is: if λ ≠ 0, then -$$ -\lambda = i^\mu(1+i)^\nu\pi_1^{\alpha_1}\pi_2^{\alpha_2}\pi_3^{\alpha_3} \dots -$$ - -where 0 ≤ μ ≤ 3, ν ≥ 0, the $\pi_i$ are primary primes and the $\alpha_i$ ≥ 1, and this representation is unique, up to the order of the factors. - -The notions of congruence and greatest common divisor are defined the same way in Z[i] as they are for the ordinary integers Z. Because the units divide all numbers, a congruence (mod λ) is also true modulo any associate of λ, and any associate of a GCD is also a GCD. - -Gauss proves the analogue of Fermat's theorem: if α is not divisible by an odd prime π, then -$$ -\alpha^{N \pi - 1} \equiv 1 \pmod{\pi} -$$ - -Since Nπ ≡ 1 (mod 4), $\alpha^{\frac{N\pi - 1}{4}}$ makes sense, and $\alpha^{\frac{N\pi - 1}{4}}\equiv i^k \pmod{\pi}$ for a unique unit $i^k$. - -This unit is called the quartic or biquadratic residue character of α (mod π) and is denoted by -$$ -\left[\frac{\alpha}{\pi}\right] = i^k \equiv \alpha^{\frac{N\pi - 1}{4}} \pmod{\pi}. -$$ - -It has formal properties similar to those of the Legendre symbol. - -The congruence $x^4 \equiv \alpha \pmod{\pi}$ is solvable in Z[i] if and only if $\left[\frac{\alpha}{\pi}\right] = 1.$ -$$ -\Bigg[\frac{\alpha\beta}{\pi}\Bigg]=\Bigg[\frac{\alpha}{\pi}\Bigg]\Bigg[\frac{\beta}{\pi}\Bigg] -$$ -$$ -\overline{\Bigg[\frac{\alpha}{\pi}\Bigg]}=\Bigg[\frac{\overline{\alpha}}{\overline{\pi}}\Bigg] -$$ where the bar denotes complex conjugation. - -if π and θ are associates, $\Bigg[\frac{\alpha}{\pi}\Bigg]=\Bigg[\frac{\alpha}{\theta}\Bigg]$ - -if α ≡ β (mod π), $\Bigg[\frac{\alpha}{\pi}\Bigg]=\Bigg[\frac{\beta}{\pi}\Bigg]$ - -The biquadratic character can be extended to odd composite numbers in the "denominator" in the same way the Legendre symbol is generalized into the Jacobi symbol. As in that case, if the "denominator" is composite, the symbol can equal one without the congruence being solvable: -$$ -\left[\frac{\alpha}{\lambda}\right] = \left[\frac{\alpha}{\pi_1}\right]^{\alpha_1} \left[\frac{\alpha}{\pi_2}\right]^{\alpha_2} \dots -$$ where - -\lambda = \pi_1^{\alpha_1}\pi_2^{\alpha_2}\pi_3^{\alpha_3} \dots - -If a and b are ordinary integers, a ≠ 0, |b| > 1, gcd(a, b) = 1, then $\left[\frac{a}{b}\right] = 1.$ - -Gauss stated the law of biquadratic reciprocity in this form: - -Let π and θ be distinct primary primes of Z[i]. Then - -if either π or θ or both are ≡ 1 (mod 4), then $\Bigg[\frac{\pi}{\theta}\Bigg] =\left[\frac{\theta}{\pi}\right], $ but - -if both π and θ are ≡ 3 + 2i (mod 4), then $\Bigg[\frac{\pi}{\theta}\Bigg] =-\left[\frac{\theta}{\pi}\right]. $ - -Just as the quadratic reciprocity law for the Legendre symbol is also true for the Jacobi symbol, the requirement that the numbers be prime is not needed; it suffices that they be odd relatively prime nonunits. Probably the most well-known statement is: - -Let π and θ be primary relatively prime nonunits. Then - -\Bigg[\frac{\pi}{\theta}\Bigg]\left[\frac{\theta}{\pi}\right]^{-1}= - -(-1)^{\frac{N\pi - 1}{4}\frac{N\theta-1}{4}}. - -There are supplementary theorems for the units and the half-even prime 1 + i.
- -if π = a + bi is a primary prime, then -$$ -\Bigg[\frac{i}{\pi}\Bigg]=i^{-\frac{a-1}{2}}, \Bigg[\frac{1+i}{\pi}\Bigg]=i^\frac{a-b-1-b^2}{4}, -$$ - -and thus -$$ -\Bigg[\frac{-1}{\pi}\Bigg]=(-1)^{\frac{a-1}{2}}, \Bigg[\frac{2}{\pi}\Bigg]=i^{-\frac{b}{2}}. -$$ - -Also, if π = a + bi is a primary prime, and b ≠ 0 then -$$ -\Bigg[\frac{\overline{\pi}}{\pi}\Bigg]=\Bigg[\frac{-2}{\pi}\Bigg](-1)^\frac{a^2-1}{8} -$$ (if b = 0 the symbol is 0). - -Jacobi defined π = a + bi to be primary if a ≡ 1 (mod 4). With this normalization, the law takes the form - -Let α = a + bi and β = c + di, where a ≡ c ≡ 1 (mod 4) and b and d are even, be relatively prime nonunits. Then - -\left[\frac{\alpha}{\beta}\right]\left[\frac{\beta}{\alpha}\right]^{-1}= - -(-1)^{\frac{bd}{4}} - -The following version was found in Gauss's unpublished manuscripts. - -Let α = a + 2bi and β = c + 2di, where a and c are odd, be relatively prime nonunits. Then - -\left[\frac{\alpha}{\beta}\right]\left[\frac{\beta}{\alpha}\right]^{-1}= - -(-1)^{bd+\frac{a-1}{2}d+\frac{c-1}{2}b}, - -\left[\frac{1+i}{\alpha}\right]=i^{\frac{b(a-3b)}{2}-\frac{a^2-1}{8}} - - - -The law can be stated without using the concept of primary: - -If λ is odd, let ε(λ) be the unique unit congruent to λ (mod (1 + i)^3); i.e., ε(λ) = i^k ≡ λ (mod 2 + 2i), where 0 ≤ k ≤ 3. Then for odd and relatively prime α and β, neither one a unit, - -\left[\frac{\alpha}{\beta}\right]\left[\frac{\beta}{\alpha}\right]^{-1}= - -(-1)^{\frac{N\alpha-1}{4}\frac{N\beta-1}{4}}\epsilon(\alpha)^\frac{N\beta-1}{4}\epsilon(\beta)^\frac{N\alpha-1}{4} - - - -For odd λ, let $\lambda^*=(-1)^\frac{N\lambda-1}{4}\lambda.$ Then if λ and μ are relatively prime nonunits, Eisenstein proved -$$ -\left[\frac{\lambda}{\mu}\right]=\Bigg[\frac{\mu^*}{\lambda}\Bigg]. -$$ diff --git a/wiki/wikipedia/3153.txt b/wiki/wikipedia/3153.txt deleted file mode 100644 index 1e2259c54d5bd02ee0a13e844ee137aa1a79a7d2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3153.txt +++ /dev/null @@ -1,58 +0,0 @@ -The FKT algorithm, named after Fisher, Kasteleyn, and Temperley, counts the number of perfect matchings in a planar graph in polynomial time. This same task is #P-complete for general graphs. For matchings that are not required to be perfect, counting them remains #P-complete even for planar graphs. The key idea of the FKT algorithm is to convert the problem into a Pfaffian computation of a skew-symmetric matrix derived from a planar embedding of the graph. The Pfaffian of this matrix is then computed efficiently using standard determinant algorithms. - -The problem of counting planar perfect matchings has its roots in statistical mechanics and chemistry, where the original question was: If diatomic molecules are adsorbed on a surface, forming a single layer, how many ways can they be arranged? The partition function is an important quantity that encodes the statistical properties of a system at equilibrium and can be used to answer the previous question. However, trying to compute the partition function from its definition is not practical. Thus to exactly solve a physical system is to find an alternate form of the partition function for that particular physical system that is sufficiently simple to calculate exactly. In the early 1960s, the definition of exactly solvable was not rigorous. Computer science provided a rigorous definition with the introduction of polynomial time, which dates to 1965.
Similarly, the notion of not exactly solvable, for a counting problem such as this one, should correspond to #P-hardness, which was defined in 1979. - -Another type of physical system to consider is composed of dimers; a dimer is a polymer with two atoms. The dimer model counts the number of dimer coverings of a graph. Another physical system to consider is the bonding of H2O molecules in the form of ice. This can be modelled as a directed, 3-regular graph where the orientation of the edges at each vertex cannot all be the same. How many edge orientations does this model have? - -Motivated by physical systems involving dimers, in 1961, Kasteleyn, and Temperley and Fisher, independently found the number of domino tilings for the m-by-n rectangle. This is equivalent to counting the number of perfect matchings for the m-by-n lattice graph. By 1967, Kasteleyn had generalized this result to all planar graphs. - -The main insight is that every non-zero term in the Pfaffian of the adjacency matrix of a graph G corresponds to a perfect matching. Thus, if one can find an orientation of G to align all signs of the terms in the Pfaffian (no matter + or - ), then the absolute value of the Pfaffian is just the number of perfect matchings in G. The FKT algorithm does such a task for a planar graph G. The orientation it finds is called a Pfaffian orientation. - -Let G = (V, E) be an undirected graph with adjacency matrix A. Define PM(n) to be the set of partitions of n elements into pairs, then the number of perfect matchings in G is -$$ -\operatorname{PerfMatch}(G) = \sum_{M \in PM(|V|)} \prod_{(i,j) \in M} A_{i j}. -$$ - -Closely related to this is the Pfaffian for an n by n matrix A -$$ -\operatorname{pf}(A) = \sum_{M \in PM(n)} \operatorname{sgn}(M) \prod_{(i,j) \in M} A_{i j}, -$$ - -where sgn(M) is the sign of the permutation M. A Pfaffian orientation of G is a directed graph H with (1, -1, 0)-adjacency matrix B such that pf(B) = PerfMatch(G). In 1967, Kasteleyn proved that planar graphs have an efficiently computable Pfaffian orientation. Specifically, for a planar graph G, let H be a directed version of G where an odd number of edges are oriented clockwise for every bounded face in a planar embedding of G. Then H is a Pfaffian orientation of G. - -Finally, for any skew-symmetric matrix A, -$$ -\operatorname{pf}(A)^2 = \det(A), -$$ - -where det(A) is the determinant of A. This result is due to Cayley. Since determinants are efficiently computable, so is PerfMatch(G). - -# Compute a planar embedding of G. - -# Compute a spanning tree T1 of the input graph G. - -# Give an arbitrary orientation to each edge in G that is also in T1. - -# Use the planar embedding to create an (undirected) graph T2 with the same vertex set as the dual graph of G. - -# Create an edge in T2 between two vertices if their corresponding faces in G share an edge in G that is not in T1. (Note that T2 is a tree.) - -# For each leaf v in T2 (that is not also the root): - -## Let e be the lone edge of G in the face corresponding to v that does not yet have an orientation. - -## Give e an orientation such that the number of edges oriented clock-wise is odd. - -## Remove v from T2. - -# Return the absolute value of the Pfaffian of the (1, -1, 0)-adjacency matrix of G, which is the square root of the determinant. - -The sum of weighted perfect matchings can also be computed by using the Tutte matrix for the adjacency matrix in the last step.
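As a toy illustration of the last step, take the 4-cycle with a Pfaffian orientation chosen by hand (three of the four edges of its single bounded face clockwise, hence an odd number); for a 4×4 skew-symmetric matrix the Pfaffian has a closed form, so no determinant routine is needed. A sketch, not part of the sources:

```
#include <iostream>

int main() {
    // 4-cycle 1-2-3-4-1, oriented 1->2, 2->3, 3->4, 1->4: the bounded face
    // has an odd number (3) of clockwise edges, so this is a Pfaffian orientation.
    int B[4][4] = {
        { 0,  1,  0,  1},
        {-1,  0,  1,  0},
        { 0, -1,  0,  1},
        {-1,  0, -1,  0},
    };
    // Closed form for a 4x4 skew-symmetric matrix:
    // pf(B) = b12*b34 - b13*b24 + b14*b23
    int pf = B[0][1] * B[2][3] - B[0][2] * B[1][3] + B[0][3] * B[1][2];
    std::cout << "perfect matchings of C4 = " << (pf < 0 ? -pf : pf) << "\n"; // 2
}
```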
- -Kuratowski's theorem states that - -a finite graph is planar if and only if it contains no subgraph homeomorphic to K5 (complete graph on five vertices) or K3,3 (complete bipartite graph on two partitions of size three). - -Vijay Vazirani generalized the FKT algorithm to graphs that do not contain a subgraph homeomorphic to K3,3. Since counting the number of perfect matchings in a general graph is #P-complete, some restriction on the input graph is required unless FP, the function version of P, is equal to #P. Counting matchings, which is known as the Hosoya index, is also #P-complete even for planar graphs. - -The FKT algorithm has seen extensive use in holographic algorithms on planar graphs via matchgates. For example, consider the planar version of the ice model mentioned above, which has the technical name #PL-3-NAE-SAT (where NAE stands for "not all equal"). Valiant found a polynomial time algorithm for this problem which uses matchgates. diff --git a/wiki/wikipedia/3154.txt b/wiki/wikipedia/3154.txt deleted file mode 100644 index a1c9e6316332c8474e2b6ef5775ce650fffa0034..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3154.txt +++ /dev/null @@ -1,19 +0,0 @@ -The Vaught conjecture is a conjecture in the mathematical field of model theory originally proposed by Robert Lawson Vaught in 1961. It states that the number of countable models of a first-order complete theory in a countable language is finite or ℵ0 or 2^ℵ0. Morley showed that the number of countable models is finite or ℵ0 or ℵ1 or 2^ℵ0, which solves the conjecture except for the case of ℵ1 models when the continuum hypothesis fails. For this remaining case, Robin Knight has announced a counterexample to the Vaught conjecture and the topological Vaught conjecture. As of 2021, the counterexample has not been verified. - -Let $T$ be a first-order, countable, complete theory with infinite models. Let $I(T, \alpha)$ denote the number of models of T of cardinality $\alpha$ up to isomorphism, the spectrum of the theory $T$. Morley proved that if I(T, ℵ0) is infinite then it must be ℵ0 or ℵ1 or the cardinality of the continuum. The Vaught conjecture is the statement that it is not possible for $\aleph_{0} < I(T,\aleph_{0}) < 2^{\aleph_{0}}$. The conjecture is a trivial consequence of the continuum hypothesis; so this axiom is often excluded in work on the conjecture. Alternatively there is a sharper form of the conjecture that states that any countable complete T with uncountably many countable models will have a perfect set of pairwise non-isomorphic countable models (as pointed out by John Steel, in "On Vaught's conjecture". Cabal Seminar 76—77 (Proc. Caltech-UCLA Logic Sem., 1976—77), pp. 193–208, Lecture Notes in Math., 689, Springer, Berlin, 1978, this form of the Vaught conjecture is equiprovable with the original). - -The original formulation by Vaught was not stated as a conjecture, but as a problem: Can it be proved, without the use of the continuum hypothesis, that there exists a complete theory having exactly ℵ1 non-isomorphic denumerable models? By the result by Morley mentioned at the beginning, a positive solution to the conjecture essentially corresponds to a negative answer to Vaught's problem as originally stated. - -Vaught proved that the number of countable models of a complete theory cannot be 2. It can be any finite number other than 2, for example: - -*Any complete theory with a finite model has no countable models. - -*The theories with just one countable model are the ω-categorical theories.
There are many examples of these, such as the theory of an infinite set, or the theory of a dense unbounded total order. - -*Ehrenfeucht gave the following example of a theory with 3 countable models: the language has a relation ≥ and a countable number of constants c0, c1, ... with axioms stating that ≥ is a dense unbounded total order, and c0 < c1 < c2 < ... The three models differ according to whether this sequence is unbounded, or converges, or is bounded but does not converge. - -*Ehrenfeucht's example can be modified to give a theory with any finite number n ≥ 3 of models by adding n - 2 unary relations Pi to the language, with axioms stating that for every x exactly one of the Pi is true, the values of y for which Pi(y) is true are dense, and P1 is true for all ci. Then the models for which the sequence of elements ci converge to a limit c split into n - 2 cases depending on for which i the relation Pi(c) is true. - -The idea of the proof of Vaught's theorem is as follows. If there are at most countably many countable models, then there is a smallest one: the atomic model, and a largest one, the saturated model, which are different if there is more than one model. If they are different, the saturated model must realize some n-type omitted by the atomic model. Then one can show that an atomic model of the theory of structures realizing this n-type (in a language expanded by finitely many constants) is a third model, not isomorphic to either the atomic or the saturated model. In the example above with 3 models, the atomic model is the one where the sequence is unbounded, the saturated model is the one where the sequence does not converge, and an example of a type not realized by the atomic model is an element greater than all elements of the sequence. - -The topological Vaught conjecture is the statement that whenever a Polish group acts continuously on a Polish space, there are either countably many orbits or continuum many orbits. The topological Vaught conjecture is more general than the original Vaught conjecture: Given a countable language we can form the space of all structures on the natural numbers for that language. If we equip this with the topology generated by first-order formulas, then it is known from A. Gregorczyk, A. Mostowski, C. Ryll-Nardzewski, "Definability of sets of models of axiomatic theories" (Bulletin of the Polish Academy of Sciences (series Mathematics, Astronomy, Physics), vol. 9 (1961), pp. 163–7) that the resulting space is Polish. There is a continuous action of the infinite symmetric group (the collection of all permutations of the natural numbers with the topology of point-wise convergence) that gives rise to the equivalence relation of isomorphism. Given a complete first-order theory T, the set of structures satisfying T is a minimal, closed invariant set, and hence Polish in its own right. diff --git a/wiki/wikipedia/3155.txt b/wiki/wikipedia/3155.txt deleted file mode 100644 index e33d79a1cc784ac6dffc514a40551478027fae72..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3155.txt +++ /dev/null @@ -1,63 +0,0 @@ -In algebraic number theory, the Grunwald–Wang theorem is a local-global principle stating that—except in some precisely defined cases—an element x in a number field K is an nth power in K if it is an nth power in the completion $K_{\mathfrak{p}}$ for all but finitely many primes $\mathfrak{p}$ of K. For example, a rational number is a square of a rational number if it is a square of a p-adic number for almost all primes p. 
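The failure discussed below, Wang's counterexample that 16 is an 8th power modulo every odd prime yet not an 8th power in $\mathbb{Q}$, is easy to check numerically for small primes. A brute-force sketch, not from the sources; the bound 500 is arbitrary:

```
#include <cstdint>
#include <iostream>

bool is_prime(uint64_t n) {
    if (n < 2) return false;
    for (uint64_t d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}

int main() {
    for (uint64_t p = 3; p <= 500; p += 2) {
        if (!is_prime(p)) continue;
        bool solvable = false;
        for (uint64_t x = 1; x < p && !solvable; ++x) {
            uint64_t x8 = 1;
            for (int k = 0; k < 8; ++k) x8 = x8 * x % p;  // x^8 mod p
            solvable = (x8 == 16 % p);
        }
        if (!solvable) std::cout << "no solution mod " << p << "\n"; // never prints
    }
    std::cout << "x^8 = 16 is solvable modulo every odd prime checked\n";
}
```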
The Grunwald–Wang theorem is an example of a local-global principle. - -It was introduced by Grunwald (1933), but there was a mistake in this original version that was found and corrected by Wang (1948). The theorem considered by Grunwald and Wang was more general than the one stated above as they discussed the existence of cyclic extensions with certain local properties, and the statement about nth powers is a consequence of this. - -Grunwald, a student of Helmut Hasse, gave an incorrect proof of the erroneous statement that an element in a number field is an nth power if it is an nth power locally almost everywhere. Whaples gave another incorrect proof of this incorrect statement. However Wang discovered the following counter-example: 16 is a p-adic 8th power for all odd primes p, but is not a rational or 2-adic 8th power. In his doctoral thesis, written under Emil Artin, Wang gave and proved the correct formulation of Grunwald's assertion, by describing the rare cases when it fails. This result is what is now known as the Grunwald–Wang theorem. The history of Wang's counterexample is discussed by Roquette (2005). - -Grunwald's original claim that an element that is an nth power almost everywhere locally is an nth power globally can fail in two distinct ways: the element can be an nth power almost everywhere locally but not everywhere locally, or it can be an nth power everywhere locally but not globally. - -The element 16 in the rationals is an 8th power at all places except 2, but is not an 8th power in the 2-adic numbers. - -It is clear that 16 is not a 2-adic 8th power, and hence not a rational 8th power, since the 2-adic valuation of 16 is 4 which is not divisible by 8. - -Generally, 16 is an 8th power in a field K if and only if the polynomial $X^8-16$ has a root in K. Write -$$ -X^8-16=(X^4-4)(X^4+4)=(X^2-2)(X^2+2)(X^2-2X+2)(X^2+2X+2). -$$ - -Thus, 16 is an 8th power in K if and only if 2, -2 or -1 is a square in K. Let p be any odd prime. It follows from the multiplicativity of the Legendre symbol that 2, -2 or -1 is a square modulo p. Hence, by Hensel's lemma, 2, -2 or -1 is a square in $\mathbb{Q}_p$. - -16 is not an 8th power in $\mathbb{Q}(\sqrt{7})$ although it is an 8th power locally everywhere (i.e. in $\mathbb{Q}_p(\sqrt{7})$ for all p). This follows from the above and the equality $\mathbb{Q}_2(\sqrt{7})=\mathbb{Q}_2(\sqrt{-1})$. - -Wang's counterexample has the following interesting consequence showing that one cannot always find a cyclic Galois extension of a given degree of a number field in which finitely many given prime places split in a specified way: - -There exists no cyclic degree 8 extension $K/\mathbb{Q}$ in which the prime 2 is totally inert (i.e., such that $K_2/\mathbb{Q}_2$ is unramified of degree 8). - -For any $s\geq 2$ let -$$ -\eta_s:=\exp\left(\frac{2\pi i}{2^s}\right)+\exp\left(-\frac{2\pi i}{2^s}\right)=2\cos\left(\frac{2\pi}{2^s}\right). -$$ - -Note that the $2^s$th cyclotomic field is -$$ -\mathbb{Q}_{2^s}=\mathbb{Q}(i,\eta_s). -$$ - -A field is called s-special if it contains $\eta_{s}$, but neither $i$, $\eta_{s+1}$ nor $i\eta_{s+1}$. - -Consider a number field K and a natural number n. Let S be a finite (possibly empty) set of primes of K and put -$$ -K(n,S):=\{x\in K\mid x\in K_{\mathfrak{p}}^n \mathrm{\ for\ all\ }\mathfrak{p}\not\in S\}. -$$ - -The Grunwald–Wang theorem says that -$$ -K(n,S)=K^n -$$ - -unless we are in the special case which occurs when the following two conditions both hold: - -# $K$ is s-special with an $s$ such that $2^{s+1}$ divides n.
- -# $S$ contains the special set $S_0$ consisting of those (necessarily 2-adic) primes $\mathfrak{p}$ such that $K_{\mathfrak{p}}$ is s-special. - -In the special case the failure of the Hasse principle is finite of order 2: the kernel of -$$ -K^\times/K^{\times n} \to \prod_{\mathfrak{p}\not\in S}K_\mathfrak{p}^\times/K_\mathfrak{p}^{\times n} -$$ - -is Z/2Z, generated by the element $\eta_{s+1}^n$. - -The field of rational numbers $K=\mathbb{Q}$ is 2-special since it contains $\eta_2=0$, but neither $i$, $\eta_3=\sqrt{2}$ nor $i\eta_3=\sqrt{-2}$. The special set is $S_0=\{2\}$. Thus, the special case in the Grunwald–Wang theorem occurs when n is divisible by 8, and S contains 2. This explains Wang's counter-example and shows that it is minimal. It is also seen that an element in $\mathbb{Q}$ is an nth power if it is a p-adic nth power for all p. - -The field $K=\mathbb{Q}(\sqrt{7})$ is 2-special as well, but with $S_0=\emptyset$. This explains the other counter-example above. diff --git a/wiki/wikipedia/3156.txt b/wiki/wikipedia/3156.txt deleted file mode 100644 index 9bbad96841377b26318e44426d56a51bf6fe7d34..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3156.txt +++ /dev/null @@ -1,65 +0,0 @@ -In computer science, ACID (atomicity, consistency, isolation, durability) is a set of properties of database transactions intended to guarantee data validity despite errors, power failures, and other mishaps. In the context of databases, a sequence of database operations that satisfies the ACID properties (which can be perceived as a single logical operation on the data) is called a transaction. For example, a transfer of funds from one bank account to another, even involving multiple changes such as debiting one account and crediting another, is a single transaction. - -In 1983, Andreas Reuter and Theo Härder coined the acronym ACID, building on earlier work by Jim Gray who named atomicity, consistency, and durability, but not isolation, when characterizing the transaction concept. These four properties are the major guarantees of the transaction paradigm, which has influenced many aspects of development in database systems. - -According to Gray and Reuter, the IBM Information Management System supported ACID transactions as early as 1973 (although the acronym was created later). - -The characteristics of these four properties as defined by Reuter and Härder are as follows: - -Transactions are often composed of multiple statements. Atomicity guarantees that each transaction is treated as a single "unit", which either succeeds completely, or fails completely: if any of the statements constituting a transaction fails to complete, the entire transaction fails and the database is left unchanged. An atomic system must guarantee atomicity in each and every situation, including power failures, errors and crashes. A guarantee of atomicity prevents updates to the database occurring only partially, which can cause greater problems than rejecting the whole series outright. As a consequence, the transaction cannot be observed to be in progress by another database client. At one moment in time, it has not yet happened, and at the next it has already occurred in whole (or nothing happened if the transaction was cancelled in progress). - -An example of an atomic transaction is a monetary transfer from bank account A to account B. It consists of two operations, withdrawing the money from account A and saving it to account B.
Performing these operations in an atomic transaction ensures that the database remains in a consistent state, that is, money is neither debited nor credited if either of those two operations fails. - -Consistency ensures that a transaction can only bring the database from one valid state to another, maintaining database invariants: any data written to the database must be valid according to all defined rules, including constraints, cascades, triggers, and any combination thereof. This prevents database corruption by an illegal transaction, but does not guarantee that a transaction is correct. Referential integrity guarantees the primary key – foreign key relationship. - -Transactions are often executed concurrently (e.g., multiple transactions reading and writing to a table at the same time). Isolation ensures that concurrent execution of transactions leaves the database in the same state that would have been obtained if the transactions were executed sequentially. Isolation is the main goal of concurrency control; depending on the method used, the effects of an incomplete transaction might not even be visible to other transactions. - -Durability guarantees that once a transaction has been committed, it will remain committed even in the case of a system failure (e.g., power outage or crash). This usually means that completed transactions (or their effects) are recorded in non-volatile memory. - -The following examples further illustrate the ACID properties. In these examples, the database table has two columns, A and B. An integrity constraint requires that the value in A and the value in B must sum to 100. The following SQL code creates a table as described above: CREATE TABLE acidtest (A INTEGER, B INTEGER, CHECK (A + B = 100)); - -Atomicity is the guarantee that a series of database operations in an atomic transaction will either all occur (a successful operation), or none will occur (an unsuccessful operation). The series of operations cannot be separated with only some of them being executed, which makes the series of operations "indivisible". A guarantee of atomicity prevents updates to the database occurring only partially, which can cause greater problems than rejecting the whole series outright. In other words, atomicity means indivisibility and irreducibility. Alternatively, we may say that a logical transaction is composed of one or more physical transactions. Unless and until all component physical transactions are executed, the logical transaction will not have occurred as far as the database is concerned. Say our logical transaction consists of transferring funds from account A to account B. This logical transaction may be composed of two physical transactions: first removing the amount from account A, and then depositing it in account B. We would not want to see the amount removed from account A before we are sure it has been transferred into account B. Unless and until both transactions have happened and the amount has been transferred to account B, the transfer has not occurred as far as the database is concerned. - -Consistency is a very general term, which demands that the data must meet all validation rules. In the previous example, the validation is a requirement that A + B = 100. All validation rules must be checked to ensure consistency. Assume that a transaction attempts to subtract 10 from A without altering B.
Because consistency is checked after each transaction, it is known that A + B = 100 before the transaction begins. If the transaction removes 10 from A successfully, atomicity will be achieved. However, a validation check will show that A + B = 90, which is inconsistent with the rules of the database. The entire transaction must be cancelled and the affected rows rolled back to their pre-transaction state. If there had been other constraints, triggers, or cascades, every single change operation would have been checked in the same way as above before the transaction was committed. Similar issues may arise with other constraints. We may have required the data types of both A and B to be integers. If we were then to enter, say, the value 13.5 for A, the transaction will be cancelled, or the system may give rise to an alert in the form of a trigger (if/when the trigger has been written to this effect). Another example would be with integrity constraints, which would not allow us to delete a row in one table whose primary key is referred to by at least one foreign key in other tables. - -To demonstrate isolation, we assume two transactions execute at the same time, each attempting to modify the same data. One of the two must wait until the other completes in order to maintain isolation. - -Consider two transactions: - -T1 transfers 10 from A to B. - -T2 transfers 20 from B to A. - -Combined, there are four actions: - -#T1 subtracts 10 from A. - -#T1 adds 10 to B. - -#T2 subtracts 20 from B. - -#T2 adds 20 to A. - -If these operations are performed in order, isolation is maintained, although T2 must wait. Consider what happens if T1 fails halfway through. The database eliminates T1's effects, and T2 sees only valid data. - -By interleaving the transactions, the actual order of actions might be: - -#T1 subtracts 10 from A. - -#T2 subtracts 20 from B. - -#T2 adds 20 to A. - -#T1 adds 10 to B. - -Again, consider what happens if T1 fails while modifying B in Step 4. By the time T1 fails, T2 has already modified A; it cannot be restored to the value it had before T1 without leaving an invalid database. This is known as a write-write conflict, because two transactions attempted to write to the same data field. In a typical system, the problem would be resolved by reverting to the last known good state, canceling the failed transaction T1, and restarting the interrupted transaction T2 from the good state. - -Consider a transaction that transfers 10 from A to B. First it removes 10 from A, then it adds 10 to B. At this point, the user is told the transaction was a success. However, the changes are still queued in the disk buffer waiting to be committed to disk. Power fails and the changes are lost, but the user assumes (understandably) that the changes persist. - -Processing a transaction often requires a sequence of operations that is subject to failure for a number of reasons. For instance, the system may have no room left on its disk drives, or it may have used up its allocated CPU time. There are two popular families of techniques: write-ahead logging and shadow paging. In both cases, locks must be acquired on all information to be updated, and depending on the level of isolation, possibly on all data that may be read as well. In write ahead logging, durability is guaranteed by copying the original (unchanged) data to a log before changing the database. That allows the database to return to a consistent state in the event of a crash. 
In shadowing, updates are applied to a partial copy of the database, and the new copy is activated when the transaction commits. - -Many databases rely upon locking to provide ACID capabilities. Locking means that the transaction marks the data that it accesses so that the DBMS knows not to allow other transactions to modify it until the first transaction succeeds or fails. The lock must always be acquired before processing data, including data that is read but not modified. Non-trivial transactions typically require a large number of locks, resulting in substantial overhead as well as blocking other transactions. For example, if user A is running a transaction that has to read a row of data that user B wants to modify, user B must wait until user A's transaction completes. Two-phase locking is often applied to guarantee full isolation. - -An alternative to locking is multiversion concurrency control, in which the database provides each reading transaction the prior, unmodified version of data that is being modified by another active transaction. This allows readers to operate without acquiring locks, i.e., writing transactions do not block reading transactions, and readers do not block writers. Going back to the example, when user A's transaction requests data that user B is modifying, the database provides A with the version of that data that existed when user B started his transaction. User A gets a consistent view of the database even if other users are changing data. One implementation, namely snapshot isolation, relaxes the isolation property. - -Guaranteeing ACID properties in a distributed transaction across a distributed database, where no single node is responsible for all data affecting a transaction, presents additional complications. Network connections might fail, or one node might successfully complete its part of the transaction and then be required to roll back its changes because of a failure on another node. The two-phase commit protocol (not to be confused with two-phase locking) provides atomicity for distributed transactions to ensure that each participant in the transaction agrees on whether the transaction should be committed or not. Briefly, in the first phase, one node (the coordinator) interrogates the other nodes (the participants), and only when all reply that they are prepared does the coordinator, in the second phase, formalize the transaction. diff --git a/wiki/wikipedia/3157.txt b/wiki/wikipedia/3157.txt deleted file mode 100644 index 41571877beb3a1e4449cb951d3505dbd0963cc3a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3157.txt +++ /dev/null @@ -1,55 +0,0 @@ -In mathematics, the Hamiltonian cycle polynomial of an n×n-matrix is a polynomial in its entries, defined as -$$ - \operatorname{ham}(A)=\sum_{\sigma\in H_n}\prod_{i=1}^n a_{i,\sigma(i)} -$$ - -where $H_n$ is the set of n-permutations having exactly one cycle. - -This is an algebraic tool that is useful, in a number of cases, for determining the existence of a Hamiltonian cycle in a directed graph. - -It generalizes the number of Hamiltonian cycles of a digraph (the sum of the products of its Hamiltonian cycles' arc weights when all weights equal unity) to weighted digraphs with arc weights taken from a given commutative ring.
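Since $H_n$ is simply the set of permutations consisting of a single n-cycle, the definition can be evaluated directly by brute force for small matrices. A minimal sketch (exponential time, for illustration only; the helper names are ours):

```python
from itertools import permutations
from math import prod

def is_n_cycle(p):
    """True when the permutation i -> p[i] consists of a single n-cycle."""
    j, length = p[0], 1
    while j != 0:
        j, length = p[j], length + 1
    return length == len(p)

def ham(A):
    """Hamiltonian cycle polynomial of a square matrix A (list of lists)."""
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)) if is_n_cycle(p))

# For the adjacency matrix of the complete digraph on 3 vertices this
# counts the two directed Hamiltonian cycles.
print(ham([[0, 1, 1], [1, 0, 1], [1, 1, 0]]))  # 2
```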
In the meantime, for an undirected weighted graph the sum of the products of the edge weights of its Hamiltonian cycles containing any fixed edge (i,j) can be expressed as the product of the weight of (i,j) and the Hamiltonian cycle polynomial of the matrix obtained from its weighted adjacency matrix by subjecting its rows and columns to any permutation mapping i to 1 and j to 2 and then removing its 1-st row and 2-nd column. - - - -In (Knezevic) it was shown that -$$ -\operatorname{ham} (A) = \sum_{J\subseteq \{2,\dots,n\}} \det(-A_J)\operatorname{per}(A_{\bar J}) -$$ - -where $A_J$ is the submatrix of $A$ induced by the rows and columns of $A$ indexed by $J$, and $\bar J$ is the complement of $J$ in $\{1,\dots,n\}$, while the determinant of the empty submatrix is defined to be 1. - -Due to this and Borchardt's identities, for a non-singular n×n Cauchy matrix $ C(x,y) $ we have $ \operatorname{ham}(C(x,y)) = {\det}(- D_{1}^2 C^{*2}(x,y) D_{2}^2 + I_{/1}) \operatorname{det} (C(x,y)) $, where $ D_1, D_2 $ are diagonal matrices that make $ D_1 C(x,y) D_2 $ unitary (in a real field or a field of finite characteristic, or orthogonal in the field of complex numbers), $ C^{*2}(x,y)$ is the Hadamard (entry-wise) square of $ C(x,y) $, and $ I_{/1} $ is the identity n×n-matrix with the entry of indexes 1,1 replaced by 0. - -In a field of characteristic 2 the equality $\operatorname{ham} (A) = \sum_{J\subseteq \{2,\dots,n\}} \det(-A_J)\operatorname{per}(A_{\bar J})$ turns into $\operatorname{ham} (A) = \sum_{J\subseteq \{2,\dots,n\}} \det(A_J)\det(A_{\bar J})$, which makes it possible to compute in polynomial time the Hamiltonian cycle polynomial of any unitary matrix $U$ (i.e. such that $U^{T} U = I$, where $I$ is the identity n×n-matrix), because in such a field each minor of a unitary matrix coincides with its algebraic complement: $ \operatorname{ham} (U) = {\det}^2(U + I_{/1}) $, where $ I_{/1} $ is the identity n×n-matrix with the entry of indexes 1,1 replaced by 0. Hence if it is possible, in polynomial time, to assign weights from a field of characteristic 2 to a digraph's arcs that make its weighted adjacency matrix unitary and its Hamiltonian cycle polynomial non-zero, then the digraph is Hamiltonian. The Hamiltonian cycle problem is therefore decidable for such graphs in polynomial time. - -In characteristic 2, the Hamiltonian cycle polynomial of an n×n-matrix is zero if n > 2k, where k is its rank, or if the matrix is involutory and n > 2. - - - -Moreover, in an arbitrary ring $R$ whose characteristic is not an even natural number, for any skew-symmetric n×n-matrix $A$ there exists a power series in a formal variable $\varepsilon$, $ U(\varepsilon) =\sum_{n=0}^\infty U_n \varepsilon^n $, that is a unitary n×n-matrix over $R\left(\varepsilon\right)$ with $U_0 = I$ and $U_1 = A$, and for any $ U(\varepsilon)$ satisfying these conditions $\operatorname{ham} (A)$ equals the coefficient of the $n$-th power of $\varepsilon$ in the power series $\operatorname{ham} (U(\varepsilon))$. For any ring $R$ of even characteristic the same is true when the diagonal of $AA^{T}$ is a multiple of 2. This implies that computing, up to the $n$-th power of $\varepsilon$, the Hamiltonian cycle polynomial of a unitary n×n-matrix over the infinite extension of any ring of characteristic q (not necessarily prime) by the formal variable $\varepsilon$ is a #$_q$P-complete problem if $q$ is not 2, and that computing the Hamiltonian cycle polynomial of a $k$-semi-unitary matrix (i.e.
an n×n-matrix $V$ such that $\operatorname{rank}(V^T V - I ) = k$) over such an extension of any ring of characteristic 2 is a #$_2$P-complete problem for any $k$ > 0 (because any $k$-semi-unitary matrix can be obtained from a unitary matrix by removing $k$ rows and $k$ columns). For $k = 1$ the latter statement can be re-formulated as the #$_2$P-completeness of computing, for a given unitary n×n-matrix $U$ over a field of characteristic 2, the n×n-matrix $H(U)$ whose i,j-th entry, for i ≠ j, is the Hamiltonian cycle polynomial of the matrix obtained from $U$ by subjecting its rows and columns to any permutation mapping j to 1 and i to 2 and then removing its 1-st row and 2-nd column (i.e. the sum of the products of the arc weights of the corresponding weighted digraph's Hamiltonian paths from vertex i to vertex j), and whose i,j-th entry is zero for i = j. This matrix satisfies the matrix equation $U(H(U))^T = H(U)U^T$, while $ \operatorname{ham} \left ( \begin{matrix}U & {Ua} \\a^{T} & 1 \end{matrix} \right ) = (a_{1}^{2} + \cdots + a_{n}^{2}) \operatorname{ham} (U)$, where $ a $ is an arbitrary n-vector (which can be interpreted as the polynomial-time computability of the Hamiltonian cycle polynomial of any 1-semi-unitary m×m-matrix $A$ such that $AA^{T} = I + bb^{T} $, where $b$ is the $m$-th column of $A$ with its $m$-th entry replaced by 0 and $I$ is the identity m×m-matrix). - -Moreover, it is worth noting that in characteristic 2 the Hamiltonian cycle polynomial possesses invariant matrix compressions (partly analogous to the Gaussian modification that is invariant for the determinant), taking into account the fact that $\operatorname{ham} (X) = 0$ for any t×t-matrix $X$ having three equal rows or, if $t$ > 2, a pair of indexes i,j such that its i-th and j-th rows are identical and its i-th and j-th columns are identical too. - -Hence if a matrix has two equal rows with indexes i and j, then adding one of them to any third row does not change this polynomial in characteristic 2, which allows one to eliminate, Gaussian-style, all the entries of its i-th column except the i,i-th and j,i-th ones (if they are not zero) and then remove its i-th column and j-th row (in the manner described above) – the Hamiltonian cycle polynomial of the initial matrix then equals this polynomial of the new one multiplied by the initial j,i-th entry. - -Also in characteristic 2, and for matrices with more than two rows, the Hamiltonian cycle polynomial is not changed by adding the i-th column to the j-th one in a matrix where the i-th and j-th rows are identical, which, in particular, yields the identity -$$ -\det(D+D^{-1}) \operatorname{ham} \left ( \begin{matrix}V & A \\B & U \end{matrix} \right ) = \operatorname{ham} \left ( \begin{matrix}V & V+D & A\\ V+D^{-1} & V+D^{-1}+D & A\\ B & B & U\end{matrix} \right ) -$$ - -for an n×n-matrix $U$, m×m-matrices $V$ and diagonal $D$, an m×n-matrix $A$, and an n×m-matrix $B$. - - - -The restriction of this identity to the case when $U$ is unitary, $ VD + DV^{T}+AA^{T}=I+D^{2} $, and $BD=UA^{T}$, where $I$ is the identity m×m-matrix, makes the (2m+n)×(2m+n)-matrix on the equality's right side unitary and its Hamiltonian cycle polynomial computable in polynomial time, which therefore generalizes the formula given above for the Hamiltonian cycle polynomial of a unitary matrix.
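The characteristic-2 identity $\operatorname{ham}(U) = {\det}^2(U + I_{/1})$ quoted above can be sanity-checked numerically: over GF(2), every permutation matrix P satisfies $P^T P = I$, and ham(P) is 1 exactly when the underlying permutation is a single n-cycle. A self-contained sketch (brute-force ham as before; all helper names are ours):

```python
from itertools import permutations
from math import prod

def is_n_cycle(p):
    j, length = p[0], 1
    while j != 0:
        j, length = p[j], length + 1
    return length == len(p)

def ham(A):
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)) if is_n_cycle(p))

def det_mod2(M):
    """Determinant over GF(2) by Gaussian elimination."""
    M, n = [row[:] for row in M], len(M)
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col]), None)
        if pivot is None:
            return 0
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                M[r] = [(a + b) % 2 for a, b in zip(M[r], M[col])]
    return 1

n = 4
for p in permutations(range(n)):
    U = [[int(p[i] == j) for j in range(n)] for i in range(n)]  # permutation matrix
    # I_{/1}: the identity with its 1,1 entry (index 0,0 here) replaced by 0
    S = [[(U[i][j] + int(i == j and i > 0)) % 2 for j in range(n)] for i in range(n)]
    assert ham(U) % 2 == det_mod2(S) ** 2 % 2
print("identity verified for all 4x4 permutation matrices")
```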
Besides, in characteristic 2, for square matrices X, Y, $ \operatorname{ham}\left ( \begin{matrix}X & Y\\Y & X \end{matrix} \right ) $ is the square of the sum, over all pairs of non-equal indexes i,j, of the i,j-th entry of Y multiplied by the Hamiltonian cycle polynomial of the matrix obtained from X+Y by removing its i-th row and j-th column (in the manner described above). Hence, upon putting A = B and U = V in the above equality, we obtain another extension of the class of unitary matrices where the Hamiltonian cycle polynomial is computable in polynomial time. - - - -Apart from the above-mentioned compression transformations, in characteristic 2 the following relation is also valid for the Hamiltonian cycle polynomials of a matrix and its partial inverse (for $A_{11}$ and $A_{22}$ being square, $A_{11}$ being invertible): -$$ -\operatorname{ham}\left ( \begin{matrix}A_{11} & A_{12}\\ A_{21} & A_{22}\end{matrix} \right) = {\det}^2\left ( A_{11} \right ) \operatorname{ham}\left ( \begin{matrix}A_{11}^{-1} & A_{11}^{-1}A_{12}\\ A_{21}A_{11}^{-1} & A_{22} + A_{21} A_{11}^{-1} A_{12} \end{matrix} \right ) -$$ - -and, because the Hamiltonian cycle polynomial does not depend on the matrix's diagonal entries, adding an arbitrary diagonal matrix does not change this polynomial either. These two types of transformation do not compress the matrix, but keep its size unchanged. However, in a number of cases their application allows one to reduce the matrix's size via some of the above-mentioned compression operators. - -Hence there is a variety of matrix compression operators performed in polynomial time and preserving the Hamiltonian cycle polynomial in characteristic 2 whose sequential application, together with the transpose transformation (utilized each time the operators leave the matrix intact), has, for each matrix, a certain limit that can be defined as the compression-closure operator. When applied to classes of matrices, that operator thus maps one class to another. As was proven in (Knezevic), if the compression-closure of the class of unitary matrices contains a subset where computing this polynomial is #$_2$P-complete, then the Hamiltonian cycle polynomial is computable in polynomial time over any field of this characteristic, which would imply the equality RP = NP. diff --git a/wiki/wikipedia/3158.txt b/wiki/wikipedia/3158.txt deleted file mode 100644 index bf4313cdd52efca37c9e95adf1a49f773fb1eca9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3158.txt +++ /dev/null @@ -1,7 +0,0 @@ -In Riemannian geometry, the sphere theorem, also known as the quarter-pinched sphere theorem, strongly restricts the topology of manifolds admitting metrics with a particular curvature bound. The precise statement of the theorem is as follows. If M is a complete, simply-connected, n-dimensional Riemannian manifold with sectional curvature taking values in the interval $(1,4]$, then M is homeomorphic to the n-sphere. (To be precise, we mean that the sectional curvature of every tangent 2-plane at each point must lie in $(1,4]$.) Another way of stating the result is that if M is not homeomorphic to the sphere, then it is impossible to put a metric on M with quarter-pinched curvature. - -Note that the conclusion is false if the sectional curvatures are allowed to take values in the closed interval $[1,4]$. The standard counterexample is complex projective space with the Fubini–Study metric; sectional curvatures of this metric take on values between 1 and 4, with endpoints included.
Other counterexamples may be found among the rank one symmetric spaces. - -The original proof of the sphere theorem did not conclude that M was necessarily diffeomorphic to the n-sphere. This complication is because spheres in higher dimensions admit smooth structures that are not diffeomorphic to the standard one. (For more information, see the article on exotic spheres.) However, in 2007 Simon Brendle and Richard Schoen utilized Ricci flow to prove that with the above hypotheses, M is necessarily diffeomorphic to the n-sphere with its standard smooth structure. Moreover, the proof of Brendle and Schoen only uses the weaker assumption of pointwise rather than global pinching. This result is known as the differentiable sphere theorem. - -Heinz Hopf conjectured that a simply connected manifold with pinched sectional curvature is a sphere. In 1951, Harry Rauch showed that a simply connected manifold with curvature in [3/4,1] is homeomorphic to a sphere. In 1960, Marcel Berger and Wilhelm Klingenberg proved the topological version of the sphere theorem with the optimal pinching constant. diff --git a/wiki/wikipedia/3159.txt b/wiki/wikipedia/3159.txt deleted file mode 100644 index aa4bea59d613dccb410cba0fd80ff91a199bed6b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3159.txt +++ /dev/null @@ -1,7 +0,0 @@ -The maximum agreement subtree problem is any of several closely related problems in graph theory and computer science. In all of these problems one is given a collection of trees $T_1,\ldots, T_m$, each containing $n$ leaves. The leaves of these trees are given labels from some set $L$ with $|L|=n$ so that no two leaves in the same tree share the same label; that is, within the same tree the labelling of the leaves is distinct. In this problem one would like to find the largest subset $L'\subset L$ such that the minimal spanning subtrees containing the leaves labelled by $L'$, namely $T_1\mid L',\ldots, T_m\mid L'$, are the "same" while preserving the labelling. - -This version requires that the subtrees $T_1\mid L',\ldots, T_m\mid L'$ are homeomorphic to one another. - -This version is the same as the maximum homeomorphic agreement subtree, but we further assume that $T_1,\ldots,T_m$ are rooted and that the subtrees $T_1\mid L',\ldots, T_m\mid L'$ contain the root node. This version of the maximum agreement subtree problem is used for the study of phylogenetic trees. Because of its close ties with phylogeny, this formulation is often what is meant when one refers to the "maximum agreement subtree" problem. - -There exist other formulations, for example the (rooted) maximum isomorphic agreement subtree, where we require the subtrees to be isomorphic to one another. diff --git a/wiki/wikipedia/316.txt b/wiki/wikipedia/316.txt deleted file mode 100644 index 522ce2f056579b4b87fad4b43485b2faa5461cb8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/316.txt +++ /dev/null @@ -1,31 +0,0 @@ -In mathematics, the Hilbert–Pólya conjecture states that the non-trivial zeros of the Riemann zeta function correspond to eigenvalues of a self-adjoint operator. It is a possible approach to the Riemann hypothesis, by means of spectral theory.
- -In a letter to Andrew Odlyzko, dated January 3, 1982, George Pólya said that while he was in Göttingen around 1912 to 1914 he was asked by Edmund Landau for a physical reason that the Riemann hypothesis should be true, and suggested that this would be the case if the imaginary parts t of the zeros -$$ - \tfrac12 + it -$$ - -of the Riemann zeta function corresponded to eigenvalues of a self-adjoint operator. The earliest published statement of the conjecture seems to be in Montgomery. - -David Hilbert did not work in the central areas of analytic number theory, but his name has become known for the Hilbert–Pólya conjecture due to a story told by Ernst Hellinger, a student of Hilbert, to André Weil. Hellinger said that Hilbert announced in his seminar in the early 1900s that he expected the Riemann Hypothesis would be a consequence of Fredholm's work on integral equations with a symmetric kernel. - -At the time of Pólya's conversation with Landau, there was little basis for such speculation. However, Selberg in the early 1950s proved a duality between the length spectrum of a Riemann surface and the eigenvalues of its Laplacian. This so-called Selberg trace formula bore a striking resemblance to the explicit formulae, which gave credibility to the Hilbert–Pólya conjecture. - -Hugh Montgomery investigated and found that the statistical distribution of the zeros on the critical line has a certain property, now called Montgomery's pair correlation conjecture. The zeros tend not to cluster too closely together, but to repel. Michael Berry and Jonathan Keating have conjectured that the sought operator is some quantization of the classical Hamiltonian xp; the simplest Hermitian operator corresponding to xp is -$$ -H = \tfrac1{2} (xp+px) = - i \left( x \frac{\mathrm{d}}{\mathrm{d} x} + \frac1{2} \right). -$$ - -This refinement of the Hilbert–Pólya conjecture is known as the Berry conjecture (or the Berry–Keating conjecture). As of 2008, it is still quite far from being concrete, as it is not clear on which space this operator should act in order to get the correct dynamics, nor how to regularize it in order to get the expected logarithmic corrections. Berry and Keating have conjectured that since this operator is invariant under dilations, perhaps the boundary condition f(nx) = f(x) for integer n may help to get the correct asymptotic results valid for large n: -$$ - \frac{1}{2} + i \frac{ 2\pi n}{\log n}. -$$ - -A paper was published in March 2017, written by Carl M. Bender, Dorje C. Brody, and Markus P. Müller, which builds on Berry's approach to the problem. There the operator -$$ -\hat{H} = \frac{1}{1-e^{-i\hat{p}}} \left (\hat{x}\hat{p}+\hat{p}\hat{x} \right ) \left (1-e^{-i\hat{p}} \right ) -$$ - -was introduced, which they claim satisfies a certain modified version of the conditions of the Hilbert–Pólya conjecture. Jean Bellissard has criticized this paper, and the authors have responded with clarifications. Moreover, Frederick Moxley has approached the problem with a Schrödinger equation. diff --git a/wiki/wikipedia/3160.txt b/wiki/wikipedia/3160.txt deleted file mode 100644 index d414178426ca36079502d3e425581bcbac8de05d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3160.txt +++ /dev/null @@ -1,64 +0,0 @@ -A twin prime is a prime number that is either 2 less or 2 more than another prime number—for example, either member of the twin prime pair (41, 43). In other words, a twin prime is a prime that has a prime gap of two. Sometimes the term twin prime is used for a pair of twin primes; an alternative name for this is prime twin or prime pair.
- -Twin primes become increasingly rare as one examines larger ranges, in keeping with the general tendency of gaps between adjacent primes to become larger as the numbers themselves get larger. However, it is unknown whether there are infinitely many twin primes (the so-called twin prime conjecture) or if there is a largest pair. The breakthrough work of Yitang Zhang in 2013, as well as work by James Maynard, Terence Tao and others, has made substantial progress towards proving that there are infinitely many twin primes, but at present this remains unsolved. - -Usually the pair (2, 3) is not considered to be a pair of twin primes. Since 2 is the only even prime, this pair is the only pair of prime numbers that differ by one; thus twin primes are as closely spaced as possible for any other pair of primes. - -The first few twin prime pairs are: - -(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73), (101, 103), (107, 109), (137, 139), … . - -Five is the only prime that belongs to two pairs, as every twin prime pair greater than $(3, 5)$ is of the form $(6n-1, 6n+1)$ for some natural number n; that is, the number between the two primes is a multiple of 6. As a result, the sum of any pair of twin primes (other than 3 and 5) is divisible by 12. - -In 1915, Viggo Brun showed that the sum of reciprocals of the twin primes is convergent. This famous result, called Brun's theorem, was the first use of the Brun sieve and helped initiate the development of modern sieve theory. The modern version of Brun's argument can be used to show that the number of twin primes less than N does not exceed -$$ -\frac{CN}{(\log N)^2} -$$ - -for some absolute constant C > 0. In fact, it is bounded above by -$$ -\frac{C'N}{(\log N)^2}\left(1 + O\left(\frac{\log \log N}{\log N}\right)\right), -$$ - -where $C' = 8C_2$ and $C_2$ is the twin prime constant discussed below. - -The question of whether there exist infinitely many twin primes has been one of the great open questions in number theory for many years. This is the content of the twin prime conjecture, which states that there are infinitely many primes p such that p + 2 is also prime. In 1849, de Polignac made the more general conjecture that for every natural number k, there are infinitely many primes p such that p + 2k is also prime. The case k = 1 of de Polignac's conjecture is the twin prime conjecture. - -A stronger form of the twin prime conjecture, the Hardy–Littlewood conjecture (see below), postulates a distribution law for twin primes akin to the prime number theorem. - -On April 17, 2013, Yitang Zhang announced a proof that for some integer N that is less than 70 million, there are infinitely many pairs of primes that differ by N. Zhang's paper was accepted by Annals of Mathematics in early May 2013. Terence Tao subsequently proposed a Polymath Project collaborative effort to optimize Zhang's bound. As of April 14, 2014, one year after Zhang's announcement, the bound had been reduced to 246. Further, assuming the Elliott–Halberstam conjecture and its generalized form, the Polymath project wiki states that the bound has been reduced to 12 and 6, respectively. - -The twin prime conjecture can be justified (but not proven) by assuming that 1 / ln t describes the density function of the prime distribution. This assumption, which is suggested by the prime number theorem, implies a quantitative form of the conjecture, namely the Hardy–Littlewood formula for the twin-prime counting function π2(x).
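These counts and bounds are easy to explore empirically. A minimal sketch (the threshold $N = 10^6$ and the helper names are our choices) that counts twin prime pairs and compares the count with the Hardy–Littlewood prediction $2C_2\int_2^N dt/(\ln t)^2$, where $C_2 \approx 0.6601618$ is the twin prime constant:

```python
from math import log

def primes_below(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * n
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n, p)))
    return [i for i in range(n) if sieve[i]]

N = 10**6
ps = set(primes_below(N))
twin_count = sum(1 for p in ps if p + 2 in ps)
# Crude Riemann sum for the Hardy-Littlewood integral prediction.
estimate = 2 * 0.6601618158 * sum(1 / log(t) ** 2 for t in range(3, N))

print(twin_count)       # 8169 twin prime pairs below 10**6
print(round(estimate))  # prediction, matching the count to within a few percent
```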
- -The fully general first Hardy–Littlewood conjecture on prime k-tuples (not given here) implies that the second Hardy–Littlewood conjecture is false. - -This conjecture has been extended by Dickson's conjecture. - -Polignac's conjecture from 1849 states that for every positive even natural number k, there are infinitely many consecutive prime pairs p and p′ such that p′ − p = k (i.e. there are infinitely many prime gaps of size k). The case k = 2 is the twin prime conjecture. The conjecture has not yet been proven or disproven for any specific value of k, but Zhang's result proves that it is true for at least one (currently unknown) value of k. Indeed, if such a k did not exist, then for any positive even natural number N there would be at most finitely many n such that $p_{n+1} - p_n = m$ for all m < N, and so for n large enough we would have $p_{n+1} - p_n > N$, which would contradict Zhang's result. - -Beginning in 2007, two distributed computing projects, Twin Prime Search and PrimeGrid, have produced several record-largest twin primes. As of 2016, the largest known twin prime pair is $2996863034895 \cdot 2^{1290000} \pm 1$, with 388,342 decimal digits. It was discovered in September 2016. - -There are 808,675,888,577,436 twin prime pairs below $10^{18}$. - -An empirical analysis of all prime pairs up to $4.35 \cdot 10^{15}$ shows that if the number of such pairs less than x is $f(x) \cdot x/(\log x)^2$, then f(x) is about 1.7 for small x and decreases towards about 1.3 as x tends to infinity. The limiting value of f(x) is conjectured to equal twice the twin prime constant $C_2 \approx 0.6601618$ (not to be confused with Brun's constant), according to the Hardy–Littlewood conjecture. - -Every third odd number is divisible by 3, so no three successive odd numbers can all be prime unless one of them is 3. Five is therefore the only prime that is part of two twin prime pairs. The lower member of a pair is by definition a Chen prime. - -It has been proven that the pair (m, m + 2) is a twin prime if and only if -$$ -4((m-1)! + 1) \equiv -m \pmod {m(m+2)}. -$$ - -If m − 4 or m + 6 is also prime then the three primes are called a prime triplet. - -For a twin prime pair of the form (6n − 1, 6n + 1) for some natural number n > 1, n must have units digit 0, 2, 3, 5, 7, or 8. - -An isolated prime (also known as single prime or non-twin prime) is a prime number p such that neither p − 2 nor p + 2 is prime. In other words, p is not part of a twin prime pair. For example, 23 is an isolated prime, since 21 and 25 are both composite. - -The first few isolated primes are - -2, 23, 37, 47, 53, 67, 79, 83, 89, 97, ... - -It follows from Brun's theorem that almost all primes are isolated in the sense that - -the ratio of the number of isolated primes less than a given threshold n to the number of all primes less than n tends to 1 as n tends to infinity. diff --git a/wiki/wikipedia/3161.txt b/wiki/wikipedia/3161.txt deleted file mode 100644 index b873b4dd93020c8c4608524bcdae27bd51aeae18..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3161.txt +++ /dev/null @@ -1,87 +0,0 @@ -In graph theory, Hedetniemi's conjecture, formulated by Stephen T. Hedetniemi in 1966, concerns the connection between graph coloring and the tensor product of graphs. This conjecture states that -$$ -\chi (G \times H ) = \min\{\chi (G) , \chi (H)\}. -$$ - -Here $\chi(G)$ denotes the chromatic number of an undirected finite graph $G$.
- -The inequality χ(G × H) ≤ min {χ(G), χ(H)} is easy: if G is k-colored, one can k-color G × H by using the same coloring for each copy of G in the product; symmetrically if H is k-colored. Thus, Hedetniemi's conjecture amounts to the assertion that tensor products cannot be colored with an unexpectedly small number of colors. - -A counterexample to the conjecture was discovered by Yaroslav Shitov in 2019, thus disproving the conjecture in general. - -Any graph with a nonempty set of edges requires at least two colors; if G and H are not 1-colorable, that is, they both contain an edge, then their product also contains an edge, and is hence not 1-colorable either. In particular, the conjecture is true when G or H is a bipartite graph, since then its chromatic number is either 1 or 2. - -Similarly, if two graphs G and H are not 2-colorable, that is, not bipartite, then both contain a cycle of odd length. Since the product of two odd cycle graphs contains an odd cycle, the product G × H is not 2-colorable either. In other words, if G × H is 2-colorable, then at least one of G and H must be 2-colorable as well. - -The next case was proved long after the conjecture's statement, by El-Zahar and Sauer: if the product G × H is 3-colorable, then one of G or H must also be 3-colorable. In particular, the conjecture is true whenever G or H is 4-colorable (since then the inequality χ(G × H) ≤ min {χ(G), χ(H)} can only be strict when G × H is 3-colorable). - -In the remaining cases, both graphs in the tensor product are at least 5-chromatic and progress has only been made for very restricted situations. - -The following function (known as the Poljak–Rödl function) measures how low the chromatic number of products of n-chromatic graphs can be. -$$ -f(n) = \min\{ \chi (G \times H) \colon \chi (G) = \chi (H) = n \} -$$ - -Hedetniemi's conjecture is then equivalent to saying that f(n) = n. - -The Weak Hedetniemi Conjecture instead states merely that the function f(n) is unbounded. - -In other words, if the tensor product of two graphs can be colored with few colors, this should imply some bound on the chromatic number of one of the factors. - -The main result of Poljak and Rödl, independently improved by Poljak, James H. Schmerl, and Zhu, states that if the function f(n) is bounded, then it is bounded by at most 9. - -Thus a proof of Hedetniemi's conjecture for 10-chromatic graphs would already imply the Weak Hedetniemi Conjecture for all graphs. - -The conjecture is studied in the more general context of graph homomorphisms, especially because of interesting relations to the category of graphs (with graphs as objects and homomorphisms as arrows). For any fixed graph K, one considers graphs G that admit a homomorphism to K, written G → K. These are also called K-colorable graphs. This generalizes the usual notion of graph coloring, since it follows from definitions that a k-coloring is the same as a $K_k$-coloring (a homomorphism into the complete graph on k vertices). - -A graph K is called multiplicative if for any graphs G, H, the fact that G × H → K holds implies that G → K or H → K holds. - -As with classical colorings, the reverse implication always holds: if G (or H, symmetrically) is K-colorable, then G × H is easily K-colored by using the same values independently of H. - -Hedetniemi's conjecture is then equivalent to the statement that each complete graph is multiplicative. - -The above known cases are equivalent to saying that $K_1$, $K_2$, and $K_3$ are multiplicative. - -The case of $K_4$ is widely open.
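For small graphs the conjectured equality is easy to confirm by brute force. A minimal sketch (exponential-time colouring search; the vertex encoding and names are ours) that builds the tensor product of two triangles and compares chromatic numbers:

```python
from itertools import product

def chromatic_number(n, edges):
    """Smallest k admitting a proper colouring of vertices 0..n-1 (brute force)."""
    for k in range(1, n + 1):
        for c in product(range(k), repeat=n):
            if all(c[u] != c[v] for u, v in edges):
                return k

def tensor_product(n1, e1, n2, e2):
    """Tensor (categorical) product; vertex (a, x) is encoded as a * n2 + x."""
    edges = []
    for a, b in e1:
        for x, y in e2:
            edges.append((a * n2 + x, b * n2 + y))
            edges.append((a * n2 + y, b * n2 + x))
    return n1 * n2, edges

k3 = (3, [(0, 1), (0, 2), (1, 2)])        # the complete graph K3
n, e = tensor_product(*k3, *k3)
print(chromatic_number(*k3), chromatic_number(n, e))  # 3 3: equality holds here
```

For $K_3 \times K_3$ both sides equal 3, consistent with the El-Zahar–Sauer result; Shitov's counterexamples are far too large for such a search.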
- -On the other hand, the proof of El-Zahar and Sauer has been generalized by Häggkvist to show that all cycle graphs are multiplicative. - -Later, Tardif proved more generally that all circular cliques $K_{n/k}$ with n/k < 4 are multiplicative. - -In terms of the circular chromatic number $\chi_c$, this means that if $\chi_c(G \times H)$ < 4, then $\chi_c(G \times H) = \min\{ \chi_c(G), \chi_c(H)\}$. Wrochna has shown that square-free graphs are multiplicative. - -Examples of non-multiplicative graphs can be constructed from two graphs G and H that are not comparable in the homomorphism order (that is, neither G→H nor H→G holds). In this case, letting K=G×H, we trivially have G×H→K, but neither G nor H can admit a homomorphism into K, since composing it with the projection K→H or K→G would give a contradiction. - -Since the tensor product of graphs is the category-theoretic product in the category of graphs (with graphs as objects and homomorphisms as arrows), the conjecture can be rephrased in terms of the following construction on graphs K and G. - -The exponential graph $K^G$ is the graph with all functions V(G) → V(K) as vertices (not only homomorphisms) and two functions f,g adjacent when - -f(v) is adjacent to g(v′) in K, for all adjacent vertices v,v′ of G. - -In particular, there is a loop at a function f (it is adjacent to itself) if and only if the function gives a homomorphism from G to K. - -Seen differently, there is an edge between f and g whenever the two functions define a homomorphism from G × $K_2$ (the bipartite double cover of G) to K. - -The exponential graph is the exponential object in the category of graphs. This means homomorphisms from G × H to a graph K correspond to homomorphisms from H to $K^G$. - -Moreover, there is a homomorphism eval : G × $K^G$ → K given by eval(v,f) = f(v). - -These properties allow one to conclude that the multiplicativity of K is equivalent to the statement: - -either G or $K^G$ is K-colorable, for every graph G. - -In other words, Hedetniemi's conjecture can be seen as a statement on exponential graphs: for every integer k, the graph $K_k^G$ is either k-colorable, or it contains a loop (meaning G is k-colorable). - -One can also see the homomorphisms eval : G × $K_k^G$ → $K_k$ as the hardest instances of Hedetniemi's conjecture: if the product G × H were a counterexample, then G × $K_k^G$ would also be a counterexample. - -Generalized to directed graphs, the conjecture has simple counterexamples, as observed by Poljak. Here, the chromatic number of a directed graph is just the chromatic number of the underlying graph, but the tensor product has exactly half the number of edges (for directed edges g→g′ in G and h→h′ in H, the tensor product G × H has only one edge, from (g,h) to (g′,h′), while the product of the underlying undirected graphs would have an edge between (g,h′) and (g′,h) as well). - -However, the Weak Hedetniemi Conjecture turns out to be equivalent in the directed and undirected settings. - -The problem cannot be generalized to infinite graphs: Hajnal gave an example of two infinite graphs, each requiring an uncountable number of colors, such that their product can be colored with only countably many colors. Rinot proved that in the constructible universe, for every infinite cardinal $\kappa$, there exists a pair of graphs of chromatic number greater than $\kappa$, such that their product can still be colored with only countably many colors. - -A similar equality for the cartesian product of graphs was proven by Sabidussi and rediscovered several times afterwards.
- -An exact formula is also known for the lexicographic product of graphs. - -Duffus introduced two stronger conjectures involving unique colorability. diff --git a/wiki/wikipedia/3162.txt b/wiki/wikipedia/3162.txt deleted file mode 100644 index 77b802cf2818464df2e3ac0ceb171027b8a94825..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3162.txt +++ /dev/null @@ -1,29 +0,0 @@ -The St. Petersburg paradox or St. Petersburg lottery is a paradox involving the game of flipping a coin where the expected payoff of the theoretical lottery game approaches infinity but nevertheless seems to be worth only a very small amount to the participants. The St. Petersburg paradox is a situation where a naive decision criterion which takes only the expected value into account predicts a course of action that presumably no actual person would be willing to take. It is related to probability and decision theory in economics. Several resolutions to the paradox have been proposed. - -The paradox takes its name from its analysis by Daniel Bernoulli, one-time resident of the eponymous Russian city, who published his arguments in the Commentaries of the Imperial Academy of Science of Saint Petersburg. However, the problem was invented by Daniel's cousin, Nicolas Bernoulli, who first stated it in a letter to Pierre Raymond de Montmort on September 9, 1713. - -A casino offers a game of chance for a single player in which a fair coin is tossed at each stage. The initial stake begins at 2 dollars and is doubled every time heads appears. The first time tails appears, the game ends and the player wins whatever is in the pot. Thus the player wins 2 dollars if tails appears on the first toss, 4 dollars if heads appears on the first toss and tails on the second, 8 dollars if heads appears on the first two tosses and tails on the third, and so on. Mathematically, the player wins $2^{k+1}$ dollars, where $k$ is the number of consecutive head tosses. What would be a fair price to pay the casino for entering the game? - -To answer this, one needs to consider what would be the expected payout at each stage: with probability 1/2, the player wins 2 dollars; with probability 1/4 the player wins 4 dollars; with probability 1/8 the player wins 8 dollars, and so on. Assuming the game can continue as long as the coin toss results in heads and, in particular, that the casino has unlimited resources, the expected value is thus - -\begin{align} - -E &= \frac{1}{2}\cdot 2+\frac{1}{4}\cdot 4 + \frac{1}{8}\cdot 8 + \frac{1}{16}\cdot 16 + \cdots \\ - -&= 1 + 1 + 1 + 1 + \cdots \\ - -&= \infty . - -\end{align} - -This sum grows without bound, and so the expected win is an infinite amount of money. - -Considering nothing but the expected value of the net change in one's monetary wealth, one should therefore play the game at any price if offered the opportunity. Yet, Daniel Bernoulli, after describing the game with an initial stake of one ducat, stated "Although the standard calculation shows that the value of [the player's] expectation is infinitely great, it has ... to be admitted that any fairly reasonable man would sell his chance, with great pleasure, for twenty ducats." However, the overweighting of small probability events introduced in cumulative prospect theory may restore the St. Petersburg paradox. Cumulative prospect theory avoids the St.
Petersburg paradox only when the power coefficient of the utility function is lower than the power coefficient of the probability weighting function. Intuitively, the utility function must not simply be concave, but it must be concave relative to the probability weighting function to avoid the St. Petersburg paradox. - -One can argue that the formulas for prospect theory were obtained from experimental data with payoffs below $400, and so may not apply to the arbitrarily large payoffs arising in the St. Petersburg lottery. Alexis Fontaine des Bertins pointed out in 1754 that the resources of any potential backer of the game are finite. More importantly, the expected value of the game only grows logarithmically with the resources of the casino. As a result, the expected value of the game, even when played against a casino with the largest bankroll realistically conceivable, is quite modest. In 1777, Georges-Louis Leclerc, Comte de Buffon calculated that after 29 rounds of play there would not be enough money in the Kingdom of France to cover the bet. - -If the casino has finite resources, the game must end once those resources are exhausted. Assuming the game ends when the casino can no longer cover the bet, the expected value E of the lottery becomes finite and, by the logarithmic growth just mentioned, quite small. Keynes, in particular, insisted that the relative risk of an alternative could be sufficiently high to reject it even if its expectation were enormous. Intuitively, Feller's answer is "to perform this game with a large number of people and calculate the expected value from the sample extraction". In this method, when an infinite number of games is possible, the expected value will be infinite, while in the finite case the expected value will be much smaller. - -Paul Samuelson resolves the paradox by arguing that, even if an entity had infinite resources, the game would never be offered. If the lottery represents an infinite expected gain to the player, then it also represents an infinite expected loss to the host. No one could be observed paying to play the game because it would never be offered. As Samuelson summarized the argument: "Paul will never be willing to give as much as Peter will demand for such a contract; and hence the indicated activity will take place at the equilibrium level of zero intensity." - -Ole Peters resolved the paradox by computing the time-average performance of the lottery, arguing that the outcome should be assessed over the limited time horizon within which we can realistically make our choices. This solution has non-ergodic features. diff --git a/wiki/wikipedia/3163.txt b/wiki/wikipedia/3163.txt deleted file mode 100644 index b7677b70912db5db0859aaa0c1b885bbd79ef51e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3163.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, a setoid (X, ~) is a set (or type) X equipped with an equivalence relation ~. A setoid may also be called an E-set, Bishop set, or extensional set. - -Setoids are studied especially in proof theory and in type-theoretic foundations of mathematics. Often in mathematics, when one defines an equivalence relation on a set, one immediately forms the quotient set (turning equivalence into equality). In contrast, setoids may be used when a difference between identity and equivalence must be maintained, often with an interpretation of intensional equality (the equality on the original set) and extensional equality (the equivalence relation, or the equality on the quotient set).
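As a toy illustration of the intensional/extensional distinction (entirely our own example, not a formalization): integers under congruence mod 3 form a setoid, and an "extensional" function is one that respects the equivalence and therefore descends to the quotient.

```python
class Setoid:
    """A carrier together with an equivalence relation (toy sketch)."""
    def __init__(self, carrier, equiv):
        self.carrier = list(carrier)
        self.equiv = equiv  # assumed reflexive, symmetric, transitive

    def respects(self, f, target):
        """True when f maps equivalent inputs to equivalent outputs,
        i.e. f is extensional and descends to the quotient sets."""
        return all(target.equiv(f(a), f(b))
                   for a in self.carrier for b in self.carrier
                   if self.equiv(a, b))

mod3 = Setoid(range(30), lambda a, b: a % 3 == b % 3)

print(mod3.equiv(4, 7), 4 == 7)              # True False: extensional vs intensional
print(mod3.respects(lambda x: x + 1, mod3))  # True: addition respects congruence
print(mod3.respects(lambda x: x // 2, mod3)) # False: halving is not extensional
```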
- -In proof theory, particularly the proof theory of constructive mathematics based on the Curry–Howard correspondence, one often identifies a mathematical proposition with its set of proofs (if any). A given proposition may have many proofs, of course; according to the principle of proof irrelevance, normally only the truth of the proposition matters, not which proof was used. However, the Curry–Howard correspondence can turn proofs into algorithms, and differences between algorithms are often important. So proof theorists may prefer to identify a proposition with a setoid of proofs, considering proofs equivalent if they can be converted into one another through beta conversion or the like. - -In type-theoretic foundations of mathematics, setoids may be used in a type theory that lacks quotient types to model general mathematical sets. For example, in Per Martin-Löf's intuitionistic type theory, there is no type of real numbers, only a type of regular Cauchy sequences of rational numbers. To do real analysis in Martin-Löf's framework, therefore, one must work with a setoid of real numbers, the type of regular Cauchy sequences equipped with the usual notion of equivalence. Predicates and functions of real numbers need to be defined for regular Cauchy sequences and proven to be compatible with the equivalence relation. Typically (although it depends on the type theory used), the axiom of choice will hold for functions between types (intensional functions), but not for functions between setoids (extensional functions). The term "set" is variously used either as a synonym of "type" or as a synonym of "setoid". - -In constructive mathematics, one often takes a setoid with an apartness relation instead of an equivalence relation, called a constructive setoid. One sometimes also considers a partial setoid using a partial equivalence relation or partial apartness. (see e.g. Barthe et al., section 1) diff --git a/wiki/wikipedia/3164.txt b/wiki/wikipedia/3164.txt deleted file mode 100644 index 0e5d8ec16ab412395e7218c849bf390c92eee470..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3164.txt +++ /dev/null @@ -1,81 +0,0 @@ -SameGame is a tile-matching puzzle originally released under the name Chain Shot! in 1985 by Kuniaki Moribe (Morisuke). It has since been ported to numerous computer platforms, handheld devices, and even TiVo, with new versions as of 2016. - -SameGame was originally created as Chain Shot! in 1985 by Kuniaki Moribe. It was distributed for Fujitsu's FM-8 and FM-7 platforms in a Japanese monthly personal computer magazine called Gekkan ASCII. In 1992, the game was ported as SameGame to Unix platforms by Eiji Fukumoto, and to the NEC PC-9801 series by Wataru Yoshioka. In 1993, it was ported to Windows 3.1 by Ikuo Hirohata. This version was translated into English by Hitoshi Ozawa, and is still available from his software archive. - -In 1994, Takahiro Sumiya ported it to Macintosh. This version has some gameplay differences—three, instead of five, colors—and is probably the most widely distributed of the original series. It was the basis for the Same Gnome and KSame variations created for Linux. - -In 2001, Biedl et al. proved that deciding the solvability (whether all blocks can be removed) of 1-column (or 1-row) 2-colour Clickomania can be done in linear time. Deciding the solvability of 2-column, 5-colour Clickomania is NP-complete. Deciding the solvability of 5-column, 3-colour Clickomania is also NP-complete.
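The core move of the game described below — pick a connected same-coloured group, remove it, let the remaining blocks fall, and close up empty columns — can be sketched in a few lines (the grid encoding and helper names are ours; a removed cell is None):

```python
def flood(board, r, c):
    """Connected same-coloured group containing (r, c)."""
    colour, group, stack = board[r][c], set(), [(r, c)]
    if colour is None:
        return group
    while stack:
        y, x = stack.pop()
        if (y, x) in group or not (0 <= y < len(board) and 0 <= x < len(board[0])):
            continue
        if board[y][x] != colour:
            continue
        group.add((y, x))
        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return group

def remove_group(board, group):
    """Clear the group, apply gravity, then slide non-empty columns left."""
    for y, x in group:
        board[y][x] = None
    h, w = len(board), len(board[0])
    cols = [[board[y][x] for y in range(h) if board[y][x] is not None]
            for x in range(w)]          # surviving blocks per column, top to bottom
    cols = [c for c in cols if c]       # drop empty columns (slide left)
    for x in range(w):
        col = cols[x] if x < len(cols) else []
        for k in range(h):              # repack each column from the bottom up
            board[h - 1 - k][x] = col[len(col) - 1 - k] if k < len(col) else None
    return board
```

The scoring rules quoted later, such as $(n-1)^2$ or $n(n-2)$, are then simple functions of len(group).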
- -SameGame is played on a rectangular field, typically initially filled with four or five kinds of blocks placed at random. By selecting a group of adjoining blocks of the same color, a player may remove them from the screen. Blocks that are no longer supported will fall down, and a column without any blocks is closed up, with the remaining columns sliding to one side (often the left). The goal of the game is to remove as many blocks from the playing field as possible. - -In most versions, there are no time constraints during the game. However, some implementations gradually push the rows upward or drop blocks from above. Sometimes the player can control the number and timing of blocks that drop from above in certain ways. For example, on some implementations for iOS, this can be done by shaking the device. The game ends if a timer runs out or if no more blocks can be removed. Some versions, including some versions for Windows Mobile, include both portrait and landscape orientations. - -In one variation, the game starts with no blocks on the field. Blocks fall down to the playing field, and must be removed before they reach the top. If they reach the top and overflow, the game is over. In some variations, such as Bubble Bang, circles or balls are used instead of blocks—which alters the gameplay, as the balls form different shapes than square blocks. - -In three-dimensional variants, the playing field is a cube (containing smaller cubes) instead of a rectangle, and the player has the ability to rotate the cube. "Cubes" for iPhone OS uses this approach. - -Some versions allow the player to rotate the playing field 90 degrees clockwise or counter-clockwise, which causes one of two things to happen: - -1. The left and right sides become the bottom and the top, and the blocks fall to the new bottom. The orientation switches between portrait and landscape. NeoSameGame for iPhone OS uses this approach. - -2. The blocks fall to the left or right side, but the player must rotate the field back to portrait orientation (which is fixed). Bubblets Tilt for iPhone OS uses this approach. - -In some variations, blocks can be removed when connected to blocks of the same color diagonally, not just horizontally and vertically. Some versions introduce new types of blocks. The different types of blocks interact in various ways with the play field; for example, one type might remove all the blocks in a row. An example of this is the "Revenge mode" in PocketPop Revenge (PocketFun) for iPhone OS. - -Versions also differ in their end-of-game conditions: - -1. The game ends when the playing field is cleared, or if the remaining blocks cannot be removed. At the end of play, the player receives a score. - -2. When the playing field is cleared, instead of ending the game, a new level appears—usually harder, with more block types or lower time limits, or both. The condition for winning may vary between levels. Instead of clearing the whole level, for example, a certain score or a certain number of removed blocks must be reached. When the needed score is reached, in most versions the player is allowed to clear the rest of the level. If the player cannot reach the needed score—or if the timer runs out—the game ends, and the player receives a final score. - -3. In an "endless" variant, the game starts with an empty field.
The blocks or balls start falling down; but if they reach the top, new blocks stop falling, so they do not overflow—thus, the game never ends. The player can end the game at any time by waiting for blocks to reach the top, then performing a special action (for example, right-click instead of left-click). - -4. Some versions have player lives. If a player reaches a losing condition one time, the game does not end; instead, a life is lost. If all lives are lost, the game ends. - -5. In the "continuous" variant, whenever a vertical set of blocks has been cleared and the remaining blocks have shifted to one side, a new, randomly selected column of blocks will pop up on the other side, thereby allowing a game to be played for an extended amount of time. - -6. In the "shift" variant, when a set of blocks has been cleared, all remaining blocks to the top and left will shift down and to the right. - -7. The "megashift" variant is a combination of the rules of the "continuous" and "shift" variations. - -Most versions of the game give $(n-k)^2$ points for removing $n$ tiles at once, where $k = 1$ or $2$, depending on the implementation. For instance, Insane Game for Texas Instruments calculators uses $(n-1)^2$; Ikuo Hirohata's implementation uses the formula $n^2-3n+4$. The Bubble Breaker implementation for Windows Mobile uses the $n (n - 1)$ formula. The 2001 version released by Jeff Reno uses the formula $n (n - 2)$. - -Some versions also offer a large bonus for removing all blocks from the screen, or leaving no more than a certain number of blocks. Others reduce the final score based on the number of blocks remaining at the end of the game. Some game versions award bonus points for clearing the field quickly, encouraging faster play. The faster the player finishes the level, the bigger the bonus. Still others offer combination, or chain, bonuses for clearing the same color of blocks two or more times in succession. - -Another scoring technique awards bonus points for each chain of a certain color that has a certain number of blocks (for example, two red blocks or 11 blue blocks). After receiving the bonus once, sometimes the bonus condition changes. BPop uses this scoring variation. - -Some versions have a simple scoring system: each block removed is worth one point, and there is no bonus for removing more than two blocks at a time. This is seen in the Same Pets and Same Hearths variants. - -Some versions award scores based on the attainment of goals. This is typically seen in multi-level versions of the game. There are four primary scoring systems for such games. - -In one variation, each level has a target score. The player's score starts at zero, and the player must reach the target score. At the beginning of each level, the player's score is reset to zero; the target score increases with each level. - -Other versions have a cumulative target score. In these versions, the player's score carries over from level to level. As a result, if the player substantially exceeds the target score on a given level, they may enter the subsequent level having already met that level's target score, as well. BPop has a cumulative target score. - -Some versions maintain the same target score for each level; such variations can be played indefinitely. In such games, the player typically loses due to poor planning or a lapse in concentration. Examples of such games are Same Pets and Same Hearths.
- -In games without a goal score, like Bonkers for iPhone and SameGameBros for iPhone, the goal is to clear the level completely. The game ends when the player fails to do so. - -Blocks typically appear as colored squares, circles, or spheres. Some variations use gradient shading to give the illusion of dimensionality. Other tile themes, or skins, include animals, hearts, stars, faces, Lego blocks, and jelly bears. Designs may follow a theme, such as Christmas or monochrome. Most games have only one skin, but others allow choosing from multiple skins. - -There is a special visual aspect in some versions; instead of separate blocks, games like iDrops and SameGameManiak feature bordered areas for adjacent blocks of the same color. Some have elaborate tile graphics, featuring pictures or patterns inside the tile, like KSame and Same GNOME. - -Reveal the picture: The SameGame concept can be extended to a "Reveal the picture" game. A picture or photo is behind the blocks; it becomes increasingly visible as blocks are removed, until it is completely revealed. Examples include Same Pets, Same Hearths and the Nissan Cube promotional app for iPhone. - -Animation: Some games feature animation of one or more game events, such as cleared tiles bursting or exploding, or scoring animations (BPop, Bubblets Tilt). - -Block highlighting: Some versions display which blocks are selected with a border around them (BPop), jittering of the blocks (BPop), or an increase of the size of the selected blocks (Bubblets Tilt). If the blocks are deselected (usually by dragging away from them, or tapping another block chain or a single block), the highlight is removed. diff --git a/wiki/wikipedia/3165.txt b/wiki/wikipedia/3165.txt deleted file mode 100644 index 3945939dbf95374df04fa6b72f9aa298be199105..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3165.txt +++ /dev/null @@ -1,57 +0,0 @@ -For a graph, a maximum cut is a cut whose size is at least the size of any other cut. That is, it is a partition of the graph's vertices into two complementary sets S and T, such that the number of edges between the set S and the set T is as large as possible. The problem of finding a maximum cut in a graph is known as the Max-Cut Problem. - -The problem can be stated simply as follows. One wants a subset S of the vertex set such that the number of edges between S and the complementary subset is as large as possible. Equivalently, one wants a bipartite subgraph of the graph with as many edges as possible. - -There is a more general version of the problem called weighted Max-Cut, where each edge is associated with a real number, its weight, and the objective is to maximize the total weight of the edges between S and its complement rather than the number of the edges. The weighted Max-Cut problem allowing both positive and negative weights can be trivially transformed into a weighted minimum cut problem by flipping the sign in all weights. - -The following decision problem related to maximum cuts has been studied widely in theoretical computer science: - -Given a graph G and an integer k, determine whether there is a cut of size at least k in G. - -This problem is known to be NP-complete. It is easy to see that the problem is in NP: a yes answer is easy to prove by presenting a large enough cut. The NP-completeness of the problem can be shown, for example, by a reduction from maximum 2-satisfiability (a restriction of the maximum satisfiability problem).
The weighted version of the decision problem was one of Karp's 21 NP-complete problems; Karp showed the NP-completeness by a reduction from the partition problem. - -The canonical optimization variant of the above decision problem is usually known as the Maximum-Cut Problem or Max-Cut and is defined as: - -Given a graph G, find a maximum cut. - -The optimization variant is known to be NP-hard. - -The opposite problem, that of finding a minimum cut, is known to be efficiently solvable via the Ford–Fulkerson algorithm. - -As the Max-Cut Problem is NP-hard, no polynomial-time algorithms for Max-Cut in general graphs are known. - -However, in planar graphs, the Maximum-Cut Problem is dual to the route inspection problem (the problem of finding a shortest tour that visits each edge of a graph at least once), in the sense that the edges that do not belong to a maximum cut-set of a graph G are the duals of the edges that are doubled in an optimal inspection tour of the dual graph of G. The optimal inspection tour forms a self-intersecting curve that separates the plane into two subsets, the subset of points for which the winding number of the curve is even and the subset for which the winding number is odd; these two subsets form a cut that includes all of the edges whose duals appear an odd number of times in the tour. The route inspection problem may be solved in polynomial time, and this duality allows the maximum cut problem to also be solved in polynomial time for planar graphs. The Maximum-Bisection problem, however, is known to be NP-hard. - -The Max-Cut Problem is APX-hard, meaning that, unless P = NP, it admits no polynomial-time approximation scheme (PTAS) coming arbitrarily close to the optimal solution. Thus, every known polynomial-time approximation algorithm achieves an approximation ratio strictly less than one. - -There is a simple randomized 0.5-approximation algorithm: for each vertex flip a coin to decide to which half of the partition to assign it. In expectation, half of the edges are cut edges. This algorithm can be derandomized with the method of conditional probabilities; therefore there is a simple deterministic polynomial-time 0.5-approximation algorithm as well. One such algorithm starts with an arbitrary partition of the vertices of the given graph $G = (V, E)$ and repeatedly moves one vertex at a time from one side of the partition to the other, improving the solution at each step, until no more improvements of this type can be made. The number of iterations is at most $|E|$ because the algorithm improves the cut by at least one edge at each step. When the algorithm terminates, at least half of the edges incident to every vertex belong to the cut, for otherwise moving the vertex would improve the cut. Therefore, the cut includes at least $|E|/2$ edges. - -The polynomial-time approximation algorithm for Max-Cut with the best known approximation ratio is a method by Goemans and Williamson using semidefinite programming and randomized rounding that achieves an approximation ratio $\alpha \approx 0.878,$ where -$$ -\alpha = \frac{2}{\pi} \min_{0 \le \theta \le \pi} \frac{\theta}{1- \cos \theta}. -$$ - -If the unique games conjecture is true, this is the best possible approximation ratio for maximum cut. Without such unproven assumptions, it has been proven to be NP-hard to approximate the max-cut value with an approximation ratio better than $\tfrac{16}{17} \approx 0.941$.
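A minimal sketch of the deterministic local-search 0.5-approximation described above (move any vertex whose flip improves the cut until no move helps; the function and variable names are ours):

```python
def local_search_maxcut(n, edges):
    """Greedy 0.5-approximation: returns a side assignment for vertices 0..n-1."""
    side = [0] * n
    improved = True
    while improved:
        improved = False
        for v in range(n):
            # Edges at v that cross the cut vs. those that stay on one side.
            cross = sum(1 for a, b in edges if v in (a, b) and side[a] != side[b])
            same  = sum(1 for a, b in edges if v in (a, b) and side[a] == side[b])
            if same > cross:          # flipping v gains same - cross cut edges
                side[v] ^= 1
                improved = True
    return side

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # a triangle plus a pendant edge
side = local_search_maxcut(4, edges)
print(side, sum(side[a] != side[b] for a, b in edges))  # finds a cut of size 3
```

At termination every vertex has at least half of its incident edges in the cut, which is exactly the argument for the $|E|/2$ guarantee given above.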
- -An extended analysis of 10 heuristics for this problem, including an open-source implementation, is available in the literature. - -In statistical physics and disordered systems, the Max Cut problem is equivalent to minimizing the Hamiltonian of a spin glass model, most simply the Ising model. For the Ising model on a graph G and only nearest-neighbor interactions, the Hamiltonian is -$$ -H[s] = -\sum_{ij\in E(G)} J_{ij}s_is_j -$$ - -Here each vertex i of the graph is a spin site that can take a spin value $s_i = \pm 1.$ A spin configuration partitions $V(G)$ into two sets, those with spin up $V^+$ and those with spin down $V^-.$ We denote with $\delta(V^+)$ the set of edges that connect the two sets. We can then rewrite the Hamiltonian as - -\begin{align} - -H[s] &= -\sum_{ij\in E(V^+)} J_{ij} - \sum_{ij\in E(V^-)} J_{ij} + \sum_{ij\in \delta(V^+)} J_{ij} \\ - -&= -\sum_{ij \in E(G)} J_{ij} + 2 \sum_{ij\in \delta(V^+)} J_{ij} \\ - -&= C + 2 \sum_{ij\in \delta(V^+)} J_{ij} - -\end{align} - -Since the constant $C = -\sum_{ij\in E(G)} J_{ij}$ does not depend on the spin configuration, minimizing this energy is equivalent to the min-cut problem, or, by setting the graph weights as $w_{ij} = -J_{ij},$ to the max-cut problem. - -The max cut problem has applications in VLSI design. diff --git a/wiki/wikipedia/3166.txt b/wiki/wikipedia/3166.txt deleted file mode 100644 index e27713d808a05dfc662722a4b7ee6d12f6ba0cfc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3166.txt +++ /dev/null @@ -1,19 +0,0 @@ -In mathematical analysis, the Whitney covering lemma, or Whitney decomposition, asserts the existence of a certain type of partition of an open set in a Euclidean space. Originally it was employed in the proof of Hassler Whitney's extension theorem. The lemma was subsequently applied to prove generalizations of the Calderón–Zygmund decomposition. - -Roughly speaking, the lemma states that it is possible to decompose an open set by cubes each of whose diameters is proportional, within certain bounds, to its distance from the boundary of the open set. More precisely: - -Whitney Covering Lemma - -Let $\Omega$ be an open non-empty proper subset of $\mathbb{R}^n$. - -Then there exists a family of closed cubes $\{Q_j\}_j$ such that - -* $\cup_j Q_j = \Omega$ and the $Q_j$'s have disjoint interiors. - -* $\sqrt{n} l(Q_j) \leq \mathrm{dist}(Q_j, \Omega^c) \leq 4 \sqrt{n} l(Q_j).$ - -* If the boundaries of two cubes $Q_j$ and $Q_k$ touch then $\frac{1}{4} \leq \frac{l(Q_j)}{l(Q_k)} \leq 4.$ - -* For a given $Q_j$ there exist at most $12^n$ cubes $Q_k$ that touch it. - -Here $l(Q)$ denotes the side length of the cube $Q$. diff --git a/wiki/wikipedia/3167.txt b/wiki/wikipedia/3167.txt deleted file mode 100644 index 5c029e197e83bc295f8fe919bf22e730345f37e5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3167.txt +++ /dev/null @@ -1,13 +0,0 @@ -In mathematics, the quadratic bottleneck assignment problem (QBAP) is one of the fundamental combinatorial optimization problems in the branch of optimization or operations research, from the category of the facilities location problems. - -It is related to the quadratic assignment problem in the same way as the linear bottleneck assignment problem is related to the linear assignment problem: the "sum" in the objective function is replaced with "max". - -The problem models the following real-life problem: - -There are n facilities and n locations. For each pair of locations, a distance is specified, and for each pair of facilities, a weight or flow is specified (e.g., the amount of supplies transported between the two facilities).
The problem is to assign all facilities to different locations with the goal of minimizing the maximum of the distances multiplied by the corresponding flows. - -The problem is NP-hard, as it can be used to formulate the Hamiltonian cycle problem by using flows in the pattern of a cycle and distances that are short for graph edges and long for non-edges. - -*Bottleneck traveling salesman problem - -*Graph bandwidth problem diff --git a/wiki/wikipedia/3168.txt b/wiki/wikipedia/3168.txt deleted file mode 100644 index c572ed50335018e736511e4093c1a8102575d34b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3168.txt +++ /dev/null @@ -1,89 +0,0 @@ -In mathematics, the Lindelöf hypothesis is a conjecture by Finnish mathematician Ernst Leonard Lindelöf (see Lindelöf) about the rate of growth of the Riemann zeta function on the critical line. This hypothesis is implied by the Riemann hypothesis. It says that for any ε > 0, -$$ -\zeta\left(\frac{1}{2} + it\right) = O(t^\varepsilon), -$$ - -as t tends to infinity (see big O notation). Since ε can be replaced by a smaller value, we can also write the conjecture as, for any positive ε, -$$ -\zeta\left(\frac{1}{2} + it\right) = o(t^\varepsilon). -$$ - -If σ is real, then μ(σ) is defined to be the infimum of all real numbers a such that ζ(σ + iT) = O(Ta). It is trivial to check that μ(σ) = 0 for σ > 1, and the functional equation of the zeta function implies that μ(σ) = μ(1 − σ) − σ + 1/2. The Phragmén–Lindelöf theorem implies that μ is a convex function. The Lindelöf hypothesis states μ(1/2) = 0, which together with the above properties of μ implies that μ(σ) is 0 for σ ≥ 1/2 and 1/2 − σ for σ ≤ 1/2. - -Lindelöf's convexity result together with μ(1) = 0 and μ(0) = 1/2 implies that 0 ≤ μ(1/2) ≤ 1/4. The upper bound of 1/4 was lowered by Hardy and Littlewood to 1/6 by applying Weyl's method of estimating exponential sums to the approximate functional equation. It has since been lowered to slightly less than 1/6 by several authors using long and technical proofs, as in the following table: - -Backlund (1918-1919) showed that the Lindelöf hypothesis is equivalent to the following statement about the zeros of the zeta function: for every ε > 0, the number of zeros with real part at least 1/2 + ε and imaginary part between T and T + 1 is o(log(T)) as T tends to infinity. The Riemann hypothesis implies that there are no zeros at all in this region and so implies the Lindelöf hypothesis. The number of zeros with imaginary part between T and T + 1 is known to be O(log(T)), so the Lindelöf hypothesis seems only slightly stronger than what has already been proved, but in spite of this it has resisted all attempts to prove it. - -The Lindelöf hypothesis is equivalent to the statement that -$$ -\frac{1}{T} \int_0^T|\zeta(1/2+it)|^{2k}dt = O(T^{\varepsilon}) -$$ - -for all positive integers k and all positive real numbers ε. This has been proved for k = 1 or 2, but the case k = 3 seems much harder and is still an open problem. - -There is a much more precise conjecture about the asymptotic behavior of the integral: it is believed that -$$ - \int_0^T|\zeta(1/2+it)|^{2k} dt = T\sum_{j=0}^{k^2}c_{k,j}\log(T)^{k^2-j} + o(T) -$$ - -for some constants ck,j. This has been proved by Littlewood for k = 1 and by Heath-Brown for k = 2 - -(extending a result of Ingham who found the leading term). 
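For concreteness, the case k = 1 is the classical Hardy–Littlewood mean value theorem, stated here as a worked instance of the conjectured expansion:
$$
\int_0^T \left|\zeta\left(\tfrac{1}{2}+it\right)\right|^2 dt \sim T\log T \qquad (T \to \infty),
$$

so the leading coefficient in this case is $c_{1,0} = 1$.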
- -Conrey and Ghosh suggested the value -$$ -\frac{42}{9!}\prod_p \left((1-p^{-1})^4(1+4p^{-1}+p^{-2})\right) -$$ - -for the leading coefficient when k is 3 (so that $k^2 = 9$ matches the power $\log(T)^9$ and the $9!$), and Keating and Snaith used random matrix theory to suggest conjectures for the values of the coefficients for higher k. The leading coefficients are conjectured to be the product of an elementary factor, a certain product over primes, and the number of n by n Young tableaux given by the sequence - -1, 1, 2, 42, 24024, 701149020, ... . - -Denoting by pn the n-th prime number, a result by Albert Ingham shows that the Lindelöf hypothesis implies that, for any ε > 0, -$$ -p_{n+1}-p_n\ll p_n^{1/2+\varepsilon} -$$ - -if n is sufficiently large. However, this bound is much weaker than what is conjectured about large prime gaps. - -Category:Conjectures - -Category:Zeta and L-functions - -Category:Unsolved problems in number theory diff --git a/wiki/wikipedia/3169.txt b/wiki/wikipedia/3169.txt deleted file mode 100644 index 40cf358c4188cc9c250ba3e5f430b496e41fec2c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3169.txt +++ /dev/null @@ -1,27 +0,0 @@ -In mathematical analysis, the Szegő limit theorems describe the asymptotic behaviour of the determinants of large Toeplitz matrices. They were first proved by Gábor Szegő. - -Let φ : T→C be a complex function ("symbol") on the unit circle. Consider the n×n Toeplitz matrices Tn(φ), defined by -$$ - T_n(\phi)_{k,l} = \widehat\phi(k-l), \quad 0 \leq k,l \leq n-1, -$$ - -where -$$ - \widehat\phi(k) = \frac{1}{2\pi} \int_0^{2\pi} \phi(e^{i\theta}) e^{-ik\theta} d\theta -$$ - -are the Fourier coefficients of φ. - -The first Szegő theorem states that, if φ > 0 and φ ∈ L1(T), then -$$ - \lim_{n \to \infty} \frac{\det T_n(\phi)}{\det T_{n-1}(\phi)} = \exp \left\{ \frac{1}{2\pi} \int_0^{2\pi} \log \phi(e^{i\theta}) d\theta \right\}. \tag{1} -$$ - -The right-hand side of (1) is the geometric mean of φ (well-defined by the arithmetic-geometric mean inequality). - -Denote the right-hand side of (1) by G(φ). The second (or strong) Szegő theorem asserts that if, in addition, the derivative of φ is Hölder continuous of order α > 0, then -$$ - \lim_{n \to \infty} \frac{\det T_n(\phi)}{G^n(\phi)} = \exp \left\{ \sum_{k=1}^\infty k \left| \widehat{(\log \phi)}(k)\right|^2 \right\}. -$$ diff --git a/wiki/wikipedia/317.txt b/wiki/wikipedia/317.txt deleted file mode 100644 index 0ef0df118ce5218162cf8652c197d591d68278a7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/317.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, Gabriel's theorem, proved by Pierre Gabriel, classifies the quivers of finite type in terms of Dynkin diagrams. - -A quiver is of finite type if it has only finitely many isomorphism classes of indecomposable representations. Gabriel classified all quivers of finite type, and also their indecomposable representations. More precisely, Gabriel's theorem states that: - -# A (connected) quiver is of finite type if and only if its underlying graph (when the directions of the arrows are ignored) is one of the ADE Dynkin diagrams: $A_n$, $D_n$, $E_6$, $E_7$, $E_8$. - -# The indecomposable representations are in a one-to-one correspondence with the positive roots of the root system of the Dynkin diagram. - -Dlab and Ringel found a generalization of Gabriel's theorem in which all Dynkin diagrams of finite-dimensional semisimple Lie algebras occur.
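As a concrete illustration (a standard worked example, spelled out here): for a quiver whose underlying graph is $A_n$, the positive roots are the sums $\alpha_i + \alpha_{i+1} + \cdots + \alpha_j$ with $1 \leq i \leq j \leq n$, so Gabriel's theorem predicts exactly $n(n+1)/2$ indecomposable representations up to isomorphism. These are the interval representations: a copy of the base field at each vertex between $i$ and $j$, zero elsewhere, with identity maps along the arrows joining vertices of the interval.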
diff --git a/wiki/wikipedia/3170.txt b/wiki/wikipedia/3170.txt deleted file mode 100644 index 3f86b20b172069b49ba2a68c483e0580d20f1c08..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3170.txt +++ /dev/null @@ -1,18 +0,0 @@ -In algebra, Matlis duality is a duality between Artinian and Noetherian modules over a complete Noetherian local ring. In the special case when the local ring has a field mapping to the residue field it is closely related to earlier work by Francis Sowerby Macaulay on polynomial rings and is sometimes called Macaulay duality; the general case was introduced by Eben Matlis. - -Suppose that R is a Noetherian complete local ring with residue field k, and choose E to be an injective hull of k (sometimes called a Matlis module). The dual DR(M) of a module M is defined to be HomR(M,E). Then Matlis duality states that the duality functor DR gives an anti-equivalence between the categories of Artinian and Noetherian R-modules. In particular the duality functor gives an anti-equivalence from the category of finite-length modules to itself. - -Suppose that the Noetherian complete local ring R has a subfield k that maps onto a subfield of finite index of its residue field R/m. Then the Matlis dual of any R-module is just its dual as a topological vector space over k, if the module is given its m-adic topology. In particular the dual of R as a topological vector space over k is a Matlis module. This case is closely related to work of Macaulay on graded polynomial rings and is sometimes called Macaulay duality. - -If R is a discrete valuation ring with quotient field K then the Matlis module is K/R. In the special case when R is the ring of p-adic numbers, the Matlis dual of a finitely-generated module is the Pontryagin dual of it considered as a locally compact abelian group. - -If R is a Cohen–Macaulay local ring of dimension d with dualizing module Ω, then the Matlis module is given by the local cohomology group $H^d_R(\Omega)$. In particular if R is an Artinian local ring then the Matlis module is the same as the dualizing module. - -Matlis duality can be conceptually explained using the language of adjoint functors and derived categories: the functor between the derived categories of R- and k-modules induced by regarding a k-module as an R-module admits a right adjoint (the derived internal Hom) -$$ -D(k) \gets D(R) : R\operatorname{Hom}_R(k, -). -$$ - -This right adjoint sends the injective hull $E(k)$ mentioned above to k, which is a dualizing object in $D(k)$. This abstract fact then gives rise to the above-mentioned equivalence. diff --git a/wiki/wikipedia/3171.txt b/wiki/wikipedia/3171.txt deleted file mode 100644 index 9b663d0209a67ee26394dbd4b6ed65b605271c6b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3171.txt +++ /dev/null @@ -1,99 +0,0 @@ -In mathematical analysis, the Bohr–Mollerup theorem is a theorem proved by the Danish mathematicians Harald Bohr and Johannes Mollerup. The theorem characterizes the gamma function, defined for x > 0 by -$$ -\Gamma(x)=\int_0^\infty t^{x-1} e^{-t}dt -$$ - -as the only function f on the interval x > 0 that simultaneously has the three properties - -* f (1) = 1, and - -* f (x + 1) = x f (x) for x > 0 and - -* f is logarithmically convex. - -A treatment of this theorem is in Artin's book The Gamma Function, which has been reprinted by the AMS in a collection of Artin's writings. - -The theorem was first published in a textbook on complex analysis, as Bohr and Mollerup thought it had already been proved.
- -Bohr–Mollerup Theorem. Γ(x) is the only function that satisfies f (x + 1) = x f (x) with log( f (x)) convex and also with f (1) = 1. - -Let Γ(x) be a function with the assumed properties established above: Γ(x + 1) = xΓ(x) and log(Γ(x)) is convex, and Γ(1) = 1. From Γ(x + 1) = xΓ(x) we can establish -$$ -\Gamma(x+n)=(x+n-1)(x+n-2)(x+n-3)\cdots(x+1)x\Gamma(x) -$$ - -The purpose of the stipulation that Γ(1) = 1 forces the Γ(x + 1) = xΓ(x) property to duplicate the factorials of the integers so we can conclude now that Γ(n) = (n − 1)! if n ∈ N and if Γ(x) exists at all. Because of our relation for Γ(x + n), if we can fully understand Γ(x) for 0 < x ≤ 1 then we understand Γ(x) for all values of x. - -The slope of a line connecting two points (x1, log(Γ (x1))) and (x2, log(Γ (x2))), call it S(x1, x2), is monotonically increasing in each argument with x1 < x2 since we have stipulated log(Γ(x)) is convex. Thus, we know that -$$ -S(n-1,n) \leq S(n,n+x)\leq S(n,n+1)\quad\text{for all }x\in(0,1]. -$$ - -After simplifying using the various properties of the logarithm, and then exponentiating (which preserves the inequalities since the exponential function is monotonically increasing) we obtain -$$ -(n-1)^x(n-1)! \leq \Gamma(n+x)\leq n^x(n-1)!. -$$ - -From previous work this expands to -$$ -(n-1)^x(n-1)! \leq (x+n-1)(x+n-2)\cdots(x+1)x\Gamma(x)\leq n^x(n-1)!, -$$ - -and so -$$ -\frac{(n-1)^x(n-1)!}{(x+n-1)(x+n-2)\cdots(x+1)x} \leq \Gamma(x) \leq \frac{n^xn!}{(x+n)(x+n-1)\cdots(x+1)x}\left(\frac{n+x}{n}\right). -$$ - -The last line is a strong statement. In particular, it is true for all values of n. That is Γ(x) is not greater than the right hand side for any choice of n and likewise, Γ(x) is not less than the left hand side for any other choice of n. Each single inequality stands alone and may be interpreted as an independent statement. Because of this fact, we are free to choose different values of n for the RHS and the LHS. In particular, if we keep n for the RHS and choose n + 1 for the LHS we get: - -\begin{align} - -\frac{((n+1)-1)^x((n+1)-1)!}{(x+(n+1)-1)(x+(n+1)-2)\cdots(x+1)x}&\leq \Gamma(x)\leq\frac{n^xn!}{(x+n)(x+n-1)\cdots(x+1)x}\left(\frac{n+x}{n}\right)\\ - -\frac{n^xn!}{(x+n)(x+n-1)\cdots(x+1)x}&\leq \Gamma(x)\leq\frac{n^xn!}{(x+n)(x+n-1)\cdots(x+1)x}\left(\frac{n+x}{n}\right) - -\end{align} - -It is evident from this last line that a function is being sandwiched between two expressions, a common analysis technique to prove various things such as the existence of a limit, or convergence. Let n → ∞: -$$ -\lim_{n\to\infty} \frac{n+x}{n} = 1 -$$ - -so the left side of the last inequality is driven to equal the right side in the limit and -$$ -\frac{n^xn!}{(x+n)(x+n-1)\cdots(x+1)x} -$$ - -is sandwiched in between. This can only mean that -$$ -\lim_{n\to\infty}\frac{n^xn!}{(x+n)(x+n-1)\cdots(x+1)x} = \Gamma (x). -$$ - -In the context of this proof this means that -$$ -\lim_{n\to\infty}\frac{n^xn!}{(x+n)(x+n-1)\cdots(x+1)x} -$$ - -has the three specified properties belonging to Γ(x). Also, the proof provides a specific expression for Γ(x). And the final critical part of the proof is to remember that the limit of a sequence is unique. This means that for any choice of 0 < x ≤ 1 only one possible number Γ(x) can exist. Therefore, there is no other function with all the properties assigned to Γ(x). - -The remaining loose end is the question of proving that Γ(x) makes sense for all x where -$$ -\lim_{n\to\infty}\frac{n^xn!}{(x+n)(x+n-1)\cdots(x+1)x} -$$ - -exists. 
The problem is that our first double inequality -$$ -S(n-1,n)\leq S(n,n+x)\leq S(n,n+1) -$$ - -was constructed with the constraint 0 < x ≤ 1. If, say, x > 1 then the fact that S is monotonically increasing would make S(n, n+1) < S(n, n+x), contradicting the inequality upon which the entire proof is constructed. However, - -\begin{align} - -\Gamma(x+1)&= \lim_{n\to\infty}x\cdot\left(\frac{n^xn!}{(x+n)(x+n-1)\cdots(x+1)x}\right)\frac{n}{n+x+1}\\ - -\Gamma(x)&=\left(\frac{1}{x}\right)\Gamma(x+1) - -\end{align} - -which demonstrates how to bootstrap Γ(x) to all values of x where the limit is defined. diff --git a/wiki/wikipedia/3172.txt b/wiki/wikipedia/3172.txt deleted file mode 100644 index 61a0647106c0ec2e89db0dd6041c6f852ee50d05..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3172.txt +++ /dev/null @@ -1,57 +0,0 @@ -In mathematics, the disintegration theorem is a result in measure theory and probability theory. It rigorously defines the idea of a non-trivial "restriction" of a measure to a measure zero subset of the measure space in question. It is related to the existence of conditional probability measures. In a sense, "disintegration" is the opposite process to the construction of a product measure. - -Consider the unit square in the Euclidean plane R2, S = [0, 1] × [0, 1]. Consider the probability measure μ defined on S by the restriction of two-dimensional Lebesgue measure λ2 to S. That is, the probability of an event E ⊆ S is simply the area of E. We assume E is a measurable subset of S. - -Consider a one-dimensional subset of S such as the line segment Lx = {x} × [0, 1]. Lx has μ-measure zero; every subset of Lx is a μ-null set; since the Lebesgue measure space is a complete measure space, - -E \subseteq L_{x} \implies \mu (E) = 0. - -While true, this is somewhat unsatisfying. It would be nice to say that μ "restricted to" Lx is the one-dimensional Lebesgue measure λ1, rather than the zero measure. The probability of a "two-dimensional" event E could then be obtained as an integral of the one-dimensional probabilities of the vertical "slices" E ∩ Lx: more formally, if μx denotes one-dimensional Lebesgue measure on Lx, then - -\mu (E) = \int_{[0, 1]} \mu_{x} (E \cap L_{x}) \mathrm{d} x - -for any "nice" E ⊆ S. The disintegration theorem makes this argument rigorous in the context of measures on metric spaces. - -(Hereafter, P(X) will denote the collection of Borel probability measures on a metric space (X, d).) - -The assumptions of the theorem are as follows: - -* Let Y and X be two Radon spaces (i.e. topological spaces such that every Borel probability measure on them is inner regular, e.g. separable metric spaces on which every probability measure is a Radon measure). - -* Let μ ∈ P(Y). - -* Let π : Y → X be a Borel-measurable function. Here one should think of π as a function to "disintegrate" Y, in the sense of partitioning Y into $\{ \pi^{-1}(x)\ |\ x \in X\}$. For example, for the motivating example above, one can define $\pi((a,b)) = a, (a,b) \in [0,1]\times [0,1]$, which gives that $\pi^{-1}(a) = \{a\} \times [0,1]$, a slice we want to capture. - -* Let $\nu$ ∈ P(X) be the pushforward measure ν = π∗(μ) = μ ∘ π−1. This measure provides the distribution of x (which corresponds to the events $\pi^{-1}(x)$).
- -The conclusion of the theorem: There exists a $\nu$-almost everywhere uniquely determined family of probability measures {μx}x∈X ⊆ P(Y), which provides a "disintegration" of $\mu$ into $\{\mu_x\}_{x \in X}$, such that: - -* the function $x \mapsto \mu_{x}$ is Borel measurable, in the sense that $x \mapsto \mu_{x} (B)$ is a Borel-measurable function for each Borel-measurable set B ⊆ Y; - -* μx "lives on" the fiber π−1(x): for $\nu$-almost all x ∈ X, \mu_{x} \left( Y \setminus \pi^{-1} (x) \right) = 0, and so μx(E) = μx(E ∩ π−1(x)); - -* for every Borel-measurable function f : Y → [0, ∞], \int_{Y} f(y) \mathrm{d} \mu (y) = \int_{X} \int_{\pi^{-1} (x)} f(y) \mathrm{d} \mu_{x} (y) \mathrm{d} \nu (x). In particular, for any event E ⊆ Y, taking f to be the indicator function of E, \mu (E) = \int_{X} \mu_{x} \left( E \right) \mathrm{d} \nu (x). - -The original example was a special case of the problem of product spaces, to which the disintegration theorem applies. - -When Y is written as a Cartesian product Y = X1 × X2 and πi : Y → Xi is the natural projection, then each fibre π1−1(x1) can be canonically identified with X2 and there exists a Borel family of probability measures $\{ \mu_{x_{1}} \}_{x_{1} \in X_{1}}$ in P(X2) (which is (π1)(μ)-almost everywhere uniquely determined) such that - -\mu = \int_{X_{1}} \mu_{x_{1}} \mu \left(\pi_1^{-1}(\mathrm d x_1) \right)= \int_{X_{1}} \mu_{x_{1}} \mathrm{d} (\pi_{1})_{*} (\mu) (x_{1}), - -which is in particular - -\int_{X_1\times X_2} f(x_1,x_2) \mu(\mathrm d x_1,\mathrm d x_2) = \int_{X_1}\left( \int_{X_2} f(x_1,x_2) \mu(\mathrm d x_2|x_1) \right) \mu\left( \pi_1^{-1}(\mathrm{d} x_{1})\right) - -and - -\mu(A \times B) = \int_A \mu\left(B|x_1\right) \mu\left( \pi_1^{-1}(\mathrm{d} x_{1})\right). - -The relation to conditional expectation is given by the identities - -\operatorname E(f|\pi_1)(x_1)= \int_{X_2} f(x_1,x_2) \mu(\mathrm d x_2|x_1), - -\mu(A\times B|\pi_1)(x_1)= 1_A(x_1) \cdot \mu(B| x_1). - -The disintegration theorem can also be seen as justifying the use of a "restricted" measure in vector calculus. For instance, in Stokes' theorem as applied to a vector field flowing through a compact surface Σ ⊂ R3, it is implicit that the "correct" measure on Σ is the disintegration of three-dimensional Lebesgue measure λ3 on Σ, and that the disintegration of this measure on ∂Σ is the same as the disintegration of λ3 on ∂Σ. - -The disintegration theorem can be applied to give a rigorous treatment of conditional probability distributions in statistics, while avoiding purely abstract formulations of conditional probability. diff --git a/wiki/wikipedia/3173.txt b/wiki/wikipedia/3173.txt deleted file mode 100644 index 6c7fe8d649e160b889f2ccb06af9488a8b0cf7c6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3173.txt +++ /dev/null @@ -1,17 +0,0 @@ -In theoretical computer science and mathematics, especially in the area of combinatorics on words, the Levi lemma states that, for all strings u, v, x and y, if uv = xy, then there exists a string w such that either - -uw = x and v = wy (if |u| ≤ |x|) - -or - -u = xw and wv = y (if |u| ≥ |x|) - -That is, there is a string w that is "in the middle", and can be grouped to one side or the other. Levi's lemma is named after Friedrich Wilhelm Levi, who published it in 1944. - -Levi's lemma can be applied repeatedly in order to solve word equations; in this context it is sometimes called the Nielsen transformation by analogy with the Nielsen transformation for groups. 
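Read computationally, the lemma just says that the shorter of u and x is a prefix of the other; a minimal sketch of this (the function name is ours), before the worked equation example that follows:

```python
def levi_split(u, v, x, y):
    """Given strings with u + v == x + y, return the middle string w.

    If len(u) <= len(x), then u + w == x and v == w + y;
    otherwise x + w == u and w + v == y.
    """
    assert u + v == x + y
    if len(u) <= len(x):
        w = x[len(u):]          # u is a prefix of x
        assert u + w == x and v == w + y
    else:
        w = u[len(x):]          # x is a prefix of u
        assert x + w == u and w + v == y
    return w

print(levi_split("ab", "cde", "abc", "de"))  # "c"
```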
For example, starting with an equation xα = yβ where x and y are the unknowns, we can transform it (assuming |x| ≥ |y|, so there exists t such that x=yt) to ytα = yβ, thus to tα = β. This approach results in a graph of substitutions generated by repeatedly applying Levi's lemma. If each unknown appears at most twice, then a word equation is called quadratic; in a quadratic word equation the graph obtained by repeatedly applying Levi's lemma is finite, so it is decidable if a quadratic word equation has a solution. A more general method for solving word equations is Makanin's algorithm. - -The above is known as the Levi lemma for strings; the lemma can occur in a more general form in graph theory and in monoid theory; for example, there is a more general Levi lemma for traces originally due to Christine Duboc. - -Several proofs of Levi's Lemma for traces can be found in The Book of Traces. - -A monoid in which Levi's lemma holds is said to have the equidivisibility property. The free monoid of strings and string concatenation has this property (by Levi's lemma for strings), but by itself equidivisibility is not enough to guarantee that a monoid is free. However an equidivisibile monoid M is free if additionally there exists a homomorphism f from M to the monoid of natural numbers (free monoid on one generator) with the property that the preimage of 0 contains only the identity element of M, i.e. $f^{-1}(0) = \{1_M\}$. (Note that f simply being a homomorphism does not guarantee this latter property, as there could be multiple elements of M mapped to 0.) A monoid for which such a homomorphism exists is also called graded (and the f is called a gradation). diff --git a/wiki/wikipedia/3174.txt b/wiki/wikipedia/3174.txt deleted file mode 100644 index 591610cc367353c5eb2c46c7b1e55659bca3ca70..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3174.txt +++ /dev/null @@ -1,406 +0,0 @@ -In statistics, a statistic is sufficient with respect to a statistical model and its associated unknown parameter if "no other statistic that can be calculated from the same sample provides any additional information as to the value of the parameter". In particular, a statistic is sufficient for a family of probability distributions if the sample from which it is calculated gives no additional information than the statistic, as to which of those probability distributions is the sampling distribution. - -A related concept is that of linear sufficiency, which is weaker than sufficiency but can be applied in some cases where there is no sufficient statistic, although it is restricted to linear estimators. The Kolmogorov structure function deals with individual finite data; the related notion there is the algorithmic sufficient statistic. - -The concept is due to Sir Ronald Fisher in 1920. Stephen Stigler noted in 1973 that the concept of sufficiency had fallen out of favor in descriptive statistics because of the strong dependence on an assumption of the distributional form (see Pitman–Koopman–Darmois theorem below), but remained very important in theoretical work. - -Roughly, given a set $ \mathbf{X}$ of independent identically distributed data conditioned on an unknown parameter $\theta$, a sufficient statistic is a function $T(\mathbf{X})$ whose value contains all the information needed to compute any estimate of the parameter (e.g. a maximum likelihood estimate). 
Due to the factorization theorem (see below), for a sufficient statistic $T(\mathbf{X})$, the probability density can be written as $f_{\mathbf{X}}(x) = h(x) g(\theta, T(x))$. From this factorization, it can easily be seen that the maximum likelihood estimate of $\theta$ will interact with $\mathbf{X}$ only through $T(\mathbf{X})$. Typically, the sufficient statistic is a simple function of the data, e.g. the sum of all the data points. - -More generally, the "unknown parameter" may represent a vector of unknown quantities or may represent everything about the model that is unknown or not fully specified. In such a case, the sufficient statistic may be a set of functions, called a jointly sufficient statistic. Typically, there are as many functions as there are parameters. For example, for a Gaussian distribution with unknown mean and variance, the jointly sufficient statistic, from which maximum likelihood estimates of both parameters can be estimated, consists of two functions, the sum of all data points and the sum of all squared data points (or equivalently, the sample mean and sample variance). - -The concept is equivalent to the statement that, conditional on the value of a sufficient statistic for a parameter, the joint probability distribution of the data does not depend on that parameter. Both the statistic and the underlying parameter can be vectors. - -A statistic t = T(X) is sufficient for underlying parameter θ precisely if the conditional probability distribution of the data X, given the statistic t = T(X), does not depend on the parameter θ. - -Alternatively, one can say the statistic T(X) is sufficient for θ if its mutual information with θ equals the mutual information between X and θ. In other words, the data processing inequality becomes an equality: -$$ -I\bigl(\theta ; T(X)\bigr) = I(\theta ; X) -$$ - -As an example, the sample mean is sufficient for the mean (μ) of a normal distribution with known variance. Once the sample mean is known, no further information about μ can be obtained from the sample itself. On the other hand, for an arbitrary distribution the median is not sufficient for the mean: even if the median of the sample is known, knowing the sample itself would provide further information about the population mean. For example, if the observations that are less than the median are only slightly less, but observations exceeding the median exceed it by a large amount, then this would have a bearing on one's inference about the population mean. - -Fisher's factorization theorem or factorization criterion provides a convenient characterization of a sufficient statistic. If the probability density function is ƒθ(x), then T is sufficient for θ if and only if nonnegative functions g and h can be found such that -$$ - f_\theta(x)=h(x) g_\theta(T(x)), -$$ - -i.e. the density ƒ can be factored into a product such that one factor, h, does not depend on θ and the other factor, which does depend on θ, depends on x only through T(x). - -It is easy to see that if F(t) is a one-to-one function and T is a sufficient - -statistic, then F(T) is a sufficient statistic. In particular we can multiply a - -sufficient statistic by a nonzero constant and get another sufficient statistic. - -An implication of the theorem is that when using likelihood-based inference, two sets of data yielding the same value for the sufficient statistic T(X) will always yield the same inferences about θ. By the factorization criterion, the likelihood's dependence on θ is only in conjunction with T(X). 
As this is the same in both cases, the dependence on θ will be the same as well, leading to identical inferences. - -This proof is due to Hogg and Craig. Let $X_1, X_2, \ldots, X_n$ denote a random sample from a distribution having the pdf f(x, θ) for ι < θ < δ. Let Y1 = u1(X1, X2, ..., Xn) be a statistic whose pdf is g1(y1; θ). What we want to prove is that Y1 = u1(X1, X2, ..., Xn) is a sufficient statistic for θ if and only if, for some function H, -$$ - \prod_{i=1}^n f(x_i; \theta) = g_1 \left[u_1 (x_1, x_2, \dots, x_n); \theta \right] H(x_1, x_2, \dots, x_n). -$$ - -First, suppose that -$$ - \prod_{i=1}^n f(x_i; \theta) = g_1 \left[u_1 (x_1, x_2, \dots, x_n); \theta \right] H(x_1, x_2, \dots, x_n). -$$ - -We shall make the transformation yi = ui(x1, x2, ..., xn), for i = 1, ..., n, having inverse functions xi = wi(y1, y2, ..., yn), for i = 1, ..., n, and Jacobian $ J = \left[\partial w_i/\partial y_j \right] $. Thus, - - - -\prod_{i=1}^n f \left[ w_i(y_1, y_2, \dots, y_n); \theta \right] = - -|J| g_1 (y_1; \theta) H \left[ w_1(y_1, y_2, \dots, y_n), \dots, w_n(y_1, y_2, \dots, y_n) \right]. - - - -The left-hand member is the joint pdf g(y1, y2, ..., yn; θ) of Y1 = u1(X1, ..., Xn), ..., Yn = un(X1, ..., Xn). In the right-hand member, $g_1(y_1;\theta)$ is the pdf of $Y_1$, so that $H[ w_1, \dots , w_n] |J|$ is the quotient of $g(y_1,\dots,y_n;\theta)$ and $g_1(y_1;\theta)$; that is, it is the conditional pdf $h(y_2, \dots, y_n \mid y_1; \theta)$ of $Y_2,\dots,Y_n$ given $Y_1=y_1$. - -But $H(x_1,x_2,\dots,x_n)$, and thus $H\left[w_1(y_1,\dots,y_n), \dots, w_n(y_1, \dots, y_n)\right]$, was given not to depend upon $\theta$. Since $\theta$ was not introduced in the transformation and accordingly not in the Jacobian $J$, it follows that $h(y_2, \dots, y_n \mid y_1; \theta)$ does not depend upon $\theta$ and that $Y_1$ is a sufficient statistic for $\theta$. - -The converse is proven by taking: -$$ -g(y_1,\dots,y_n;\theta)=g_1(y_1; \theta) h(y_2, \dots, y_n \mid y_1), -$$ - -where $h(y_2, \dots, y_n \mid y_1)$ does not depend upon $\theta$ because $Y_2, \dots, Y_n$ depend only upon $X_1, \dots, X_n$, which are independent of $\theta$ when conditioned on $Y_1$, a sufficient statistic by hypothesis. Now divide both members by the absolute value of the non-vanishing Jacobian $J$, and replace $y_1, \dots, y_n$ by the functions $u_1(x_1, \dots, x_n), \dots, u_n(x_1,\dots, x_n)$ in $x_1,\dots, x_n$. This yields -$$ -\frac{g\left[ u_1(x_1, \dots, x_n), \dots, u_n(x_1, \dots, x_n); \theta \right]}{|J^*|}=g_1\left[u_1(x_1,\dots,x_n); \theta\right] \frac{h(u_2, \dots, u_n \mid u_1)}{|J^*|} -$$ - -where $J^*$ is the Jacobian with $y_1,\dots,y_n$ replaced by their values in terms of $x_1, \dots, x_n$. The left-hand member is necessarily the joint pdf $f(x_1;\theta)\cdots f(x_n;\theta)$ of $X_1,\dots,X_n$. Since $h(y_2,\dots,y_n\mid y_1)$, and thus $h(u_2,\dots,u_n\mid u_1)$, does not depend upon $\theta$, then -$$ -H(x_1,\dots,x_n)=\frac{h(u_2,\dots,u_n\mid u_1)}{|J^*|} -$$ - -is a function that does not depend upon $\theta$. - -A simpler, more illustrative proof is as follows, although it applies only in the discrete case. - -We use the shorthand $f_\theta(x,t)$ to denote the joint probability density of $(X, T(X))$. Since $T$ is a function of $X$, we have $f_\theta(x,t) = f_\theta(x)$, as long as $t = T(x)$ and zero otherwise.
Therefore: - - - -\begin{align} - -f_\theta(x) & = f_\theta(x,t) \\[5pt] - -& = f_\theta (x\mid t) f_\theta(t) \\[5pt] - -& = f(x\mid t) f_\theta(t) - -\end{align} - - - -with the last equality being true by the definition of sufficient statistics. Thus $f_\theta(x)=a(x) b_\theta(t)$ with $a(x) = f_{X \mid t}(x)$ and $b_\theta(t) = f_\theta(t)$. - -Conversely, if $f_\theta(x)=a(x) b_\theta(t)$, we have - - - -\begin{align} - -f_\theta(t) & = \sum _{x : T(x) = t} f_\theta(x, t) \\[5pt] - -& = \sum _{x : T(x) = t} f_\theta(x) \\[5pt] - -& = \sum _{x : T(x) = t} a(x) b_\theta(t) \\[5pt] - -& = \left( \sum _{x : T(x) = t} a(x) \right) b_\theta(t), - -\end{align} - -with the first equality by the definition of pdf for multiple variables, the second by the remark above, the third by hypothesis, and the fourth because the summation is not over $t$. - -Let $f_{X\mid t}(x)$ denote the conditional probability density of $X$ given $T(X)$. Then we can derive an explicit expression for this: - - - -\begin{align} - -f_{X\mid t}(x) - -& = \frac{f_\theta(x, t)}{f_\theta(t)} \\[5pt] - -& = \frac{f_\theta(x)}{f_\theta(t)} \\[5pt] - -& = \frac{a(x) b_\theta(t)}{\left( \sum _{x : T(x) = t} a(x) \right) b_\theta(t)} \\[5pt] - -& = \frac{a(x)}{\sum _{x : T(x) = t} a(x)}, - -\end{align} - -with the first equality by definition of conditional probability density, the second by the remark above, the third by the equality proven above, and the fourth by simplification. This expression does not depend on $\theta$ and thus $T$ is a sufficient statistic. - -A sufficient statistic is minimal sufficient if it can be represented as a function of any other sufficient statistic. In other words, S(X) is minimal sufficient if and only if - -#S(X) is sufficient, and - -#if T(X) is sufficient, then there exists a function f such that S(X) = f(T(X)). - -Intuitively, a minimal sufficient statistic most efficiently captures all possible information about the parameter θ. - -A useful characterization of minimal sufficiency is that, when the density fθ exists, S(X) is minimal sufficient if and only if the ratio -$$ -\frac{f_\theta(x)}{f_\theta(y)} -$$ is independent of θ precisely when S(x) = S(y). - -This follows as a consequence of Fisher's factorization theorem stated above. - -A case in which there is no minimal sufficient statistic was shown by Bahadur in 1954. However, under mild conditions, a minimal sufficient statistic does always exist. In particular, in Euclidean space, these conditions always hold if the random variables (associated with $P_\theta$ ) are all discrete or are all continuous. - -If there exists a minimal sufficient statistic, and this is usually the case, then every complete sufficient statistic is necessarily minimal sufficient (note that this statement does not exclude the option of a pathological case in which a complete sufficient statistic exists while there is no minimal sufficient statistic). While it is hard to find cases in which a minimal sufficient statistic does not exist, it is not so hard to find cases in which there is no complete statistic. - -The collection of likelihood ratios $\left\{\frac{L(X \mid \theta_i)}{L(X \mid \theta_0)}\right\}$ for $i = 1, ..., k$, is a minimal sufficient statistic if the parameter space is discrete $\left\{\theta_0, ..., \theta_k\right\}$. - -If X1, ..., Xn are independent Bernoulli-distributed random variables with expected value p, then the sum T(X) = X1 + ...
+ Xn is a sufficient statistic for p (here 'success' corresponds to Xi = 1 and 'failure' to Xi = 0; so T is the total number of successes) - -This is seen by considering the joint probability distribution: -$$ - \Pr\{X=x\}=\Pr\{X_1=x_1,X_2=x_2,\ldots,X_n=x_n\}. -$$ - -Because the observations are independent, this can be written as - - - -p^{x_1}(1-p)^{1-x_1} p^{x_2}(1-p)^{1-x_2}\cdots p^{x_n}(1-p)^{1-x_n} - -and, collecting powers of p and 1 − p, gives - - - -p^{\sum x_i}(1-p)^{n-\sum x_i}=p^{T(x)}(1-p)^{n-T(x)} - - - -which satisfies the factorization criterion, with h(x) = 1 being just a constant. - -Note the crucial feature: the unknown parameter p interacts with the data x only via the statistic T(x) = Σ xi. - -As a concrete application, this gives a procedure for distinguishing a fair coin from a biased coin. - -If X1, ...., Xn are independent and uniformly distributed on the interval [0,θ], then T(X) = max(X1, ..., Xn) is sufficient for θ — the sample maximum is a sufficient statistic for the population maximum. - -To see this, consider the joint probability density function of X (X1,...,Xn). Because the observations are independent, the pdf can be written as a product of individual densities - -\begin{align} - -f_{\theta}(x_1,\ldots,x_n) - -&= \frac{1}{\theta}\mathbf{1}_{\{0\leq x_1\leq\theta\}} \cdots - -\frac{1}{\theta}\mathbf{1}_{\{0\leq x_n\leq\theta\}} \\[5pt] - -&= \frac{1}{\theta^n} \mathbf{1}_{\{0\leq\min\{x_i\}\}}\mathbf{1}_{\{\max\{x_i\}\leq\theta\}} - -\end{align} - -where 1{...} is the indicator function. Thus the density takes form required by the Fisher–Neyman factorization theorem, where h(x) = 1{min{xi}≥0}, and the rest of the expression is a function of only θ and T(x) = max{xi}. - -In fact, the minimum-variance unbiased estimator (MVUE) for θ is -$$ - \frac{n+1}{n}T(X). -$$ - -This is the sample maximum, scaled to correct for the bias, and is MVUE by the Lehmann–Scheffé theorem. Unscaled sample maximum T(X) is the maximum likelihood estimator for θ. - -If $X_1,...,X_n$ are independent and uniformly distributed on the interval $[\alpha, \beta]$ (where $\alpha$ and $\beta$ are unknown parameters), then $T(X_1^n)=\left(\min_{1 \leq i \leq n}X_i,\max_{1 \leq i \leq n}X_i\right)$ is a two-dimensional sufficient statistic for $(\alpha , \beta)$. - -To see this, consider the joint probability density function of $X_1^n=(X_1,\ldots,X_n)$. Because the observations are independent, the pdf can be written as a product of individual densities, i.e. - -\begin{align} - -f_{X_1^n}(x_1^n) - -&= \prod_{i=1}^n \left({1 \over \beta-\alpha}\right) \mathbf{1}_{ \{ \alpha \leq x_i \leq \beta \} } - -= \left({1 \over \beta-\alpha}\right)^n \mathbf{1}_{ \{ \alpha \leq x_i \leq \beta, \forall i = 1,\ldots,n\}} \\ - -&= \left({1 \over \beta-\alpha}\right)^n \mathbf{1}_{ \{ \alpha \leq \min_{1 \leq i \leq n}X_i \} } \mathbf{1}_{ \{ \max_{1 \leq i \leq n}X_i \leq \beta \} }. - -\end{align} - -The joint density of the sample takes the form required by the Fisher–Neyman factorization theorem, by letting - -\begin{align} - -h(x_1^n)= 1, \quad - -g_{(\alpha, \beta)}(x_1^n)= \left({1 \over \beta-\alpha}\right)^n \mathbf{1}_{ \{ \alpha \leq \min_{1 \leq i \leq n}X_i \} } \mathbf{1}_{ \{ \max_{1 \leq i \leq n}X_i \leq \beta \} }. 
- -\end{align} - -Since $h(x_1^n)$ does not depend on the parameter $(\alpha, \beta)$ and $g_{(\alpha , \beta)}(x_1^n)$ depends only on $x_1^n$ through the function $T(X_1^n)= \left(\min_{1 \leq i \leq n}X_i,\max_{1 \leq i \leq n}X_i\right),$ - -the Fisher–Neyman factorization theorem implies $T(X_1^n) = \left(\min_{1 \leq i \leq n}X_i,\max_{1 \leq i \leq n}X_i\right)$ is a sufficient statistic for $(\alpha , \beta)$. - -If X1, ...., Xn are independent and have a Poisson distribution with parameter λ, then the sum T(X) = X1 + ... + Xn is a sufficient statistic for λ. - -To see this, consider the joint probability distribution: - - - -\Pr(X=x)=P(X_1=x_1,X_2=x_2,\ldots,X_n=x_n). - - - -Because the observations are independent, this can be written as - - - -{e^{-\lambda} \lambda^{x_1} \over x_1 !} \cdot - -{e^{-\lambda} \lambda^{x_2} \over x_2 !} \cdots - -{e^{-\lambda} \lambda^{x_n} \over x_n !} - - - -which may be written as - - - -e^{-n\lambda} \lambda^{(x_1+x_2+\cdots+x_n)} \cdot - -{1 \over x_1 ! x_2 !\cdots x_n ! } - - - -which shows that the factorization criterion is satisfied, where h(x) is the reciprocal of the product of the factorials. Note the parameter λ interacts with the data only through its sum T(X). - -If $X_1,\ldots,X_n$ are independent and normally distributed with expected value $\theta$ (a parameter) and known finite variance $\sigma^2,$ then -$$ -T(X_1^n)=\overline{x}=\frac1n\sum_{i=1}^nX_i -$$ - -is a sufficient statistic for $\theta.$ - -To see this, consider the joint probability density function of $X_1^n=(X_1,\dots,X_n)$. Because the observations are independent, the pdf can be written as a product of individual densities, i.e. - -\begin{align} - -f_{X_1^n}(x_1^n) - -& = \prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma^2}} \exp \left (-\frac{(x_i-\theta)^2}{2\sigma^2} \right ) \\ [6pt] - -&= (2\pi\sigma^2)^{-\frac{n}{2}} \exp \left ( -\sum_{i=1}^n \frac{(x_i-\theta)^2}{2\sigma^2} \right ) \\ [6pt] - -& = (2\pi\sigma^2)^{-\frac{n}{2}} \exp \left (-\sum_{i=1}^n \frac{ \left ( \left (x_i-\overline{x} \right ) - \left (\theta-\overline{x} \right ) \right )^2}{2\sigma^2} \right ) \\ [6pt] - -& = (2\pi\sigma^2)^{-\frac{n}{2}} \exp \left( -{1\over2\sigma^2} \left(\sum_{i=1}^n(x_i-\overline{x})^2 + \sum_{i=1}^n(\theta-\overline{x})^2 -2\sum_{i=1}^n(x_i-\overline{x})(\theta-\overline{x})\right) \right) \\ [6pt] - -&= (2\pi\sigma^2)^{-\frac{n}{2}} \exp \left( -{1\over2\sigma^2} \left (\sum_{i=1}^n(x_i-\overline{x})^2 + n(\theta-\overline{x})^2 \right ) \right ) && \sum_{i=1}^n(x_i-\overline{x})(\theta-\overline{x})=0 \\ [6pt] - -&= (2\pi\sigma^2)^{-\frac{n}{2}} \exp \left( -{1\over2\sigma^2} \sum_{i=1}^n (x_i-\overline{x})^2 \right ) \exp \left (-\frac{n}{2\sigma^2} (\theta-\overline{x})^2 \right ) - -\end{align} - -The joint density of the sample takes the form required by the Fisher–Neyman factorization theorem, by letting - -\begin{align} - -h(x_1^n) &= (2\pi\sigma^2)^{-\frac{n}{2}} \exp \left( -{1\over2\sigma^2} \sum_{i=1}^n (x_i-\overline{x})^2 \right ) \\[6pt] - -g_\theta(x_1^n) &= \exp \left (-\frac{n}{2\sigma^2} (\theta-\overline{x})^2 \right ) - -\end{align} - -Since $h(x_1^n)$ does not depend on the parameter $\theta$ and $g_{\theta}(x_1^n)$ depends only on $x_1^n$ through the function -$$ -T(X_1^n)=\overline{x}=\frac1n\sum_{i=1}^nX_i, -$$ - -the Fisher–Neyman factorization theorem implies $T(X_1^n)$ is a sufficient statistic for $\theta$. 
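A quick numerical illustration of this factorization (a sketch; the datasets are hand-picked, not from any source): two samples of the same size with the same mean give log-likelihoods differing by a constant that does not involve θ, so they lead to identical inferences about θ:

```python
import numpy as np

def loglik(theta, x, sigma=1.0):
    """Log-likelihood of i.i.d. N(theta, sigma^2) data."""
    return float(-0.5 * np.sum((x - theta) ** 2) / sigma**2
                 - len(x) * np.log(sigma * np.sqrt(2 * np.pi)))

x1 = np.array([0.0, 1.0, 2.0])   # sample mean 1.0
x2 = np.array([0.5, 1.0, 1.5])   # same mean 1.0, different data
for theta in (0.0, 1.0, 2.0):
    # The difference is the same (-0.75) for every theta:
    # it comes only from the theta-free factor h(x).
    print(theta, loglik(theta, x1) - loglik(theta, x2))
```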
- -If $ \sigma^2 $ is unknown and since $s^2 = \frac{1}{n-1} \sum_{i=1}^n \left(x_i - \overline{x} \right)^2 $, the above likelihood can be rewritten as - -\begin{align} - -f_{X_1^n}(x_1^n)= (2\pi\sigma^2)^{-n/2} \exp \left( -\frac{n-1}{2\sigma^2}s^2 \right) \exp \left (-\frac{n}{2\sigma^2} (\theta-\overline{x})^2 \right ) . - -\end{align} - -The Fisher–Neyman factorization theorem still holds and implies that $(\overline{x},s^2)$ is a joint sufficient statistic for $ ( \theta , \sigma^2) $. - -If $X_1,\dots,X_n$ are independent and exponentially distributed with expected value θ (an unknown real-valued positive parameter), then $T(X_1^n)=\sum_{i=1}^nX_i$ is a sufficient statistic for θ. - -To see this, consider the joint probability density function of $X_1^n=(X_1,\dots,X_n)$. Because the observations are independent, the pdf can be written as a product of individual densities, i.e. - -\begin{align} - -f_{X_1^n}(x_1^n) - -&= \prod_{i=1}^n {1 \over \theta} e^{ {-1 \over \theta}x_i } - -= {1 \over \theta^n} e^{ {-1 \over \theta} \sum_{i=1}^nx_i }. - -\end{align} - -The joint density of the sample takes the form required by the Fisher–Neyman factorization theorem, by letting - -\begin{align} - -h(x_1^n)= 1, - -g_{\theta}(x_1^n)= {1 \over \theta^n} e^{ {-1 \over \theta} \sum_{i=1}^nx_i }. - -\end{align} - -Since $h(x_1^n)$ does not depend on the parameter $\theta$ and $g_{\theta}(x_1^n)$ depends only on $x_1^n$ through the function $T(X_1^n)=\sum_{i=1}^nX_i$ - -the Fisher–Neyman factorization theorem implies $T(X_1^n)=\sum_{i=1}^nX_i$ is a sufficient statistic for $\theta$. - -If $X_1,\dots,X_n$ are independent and distributed as a $\Gamma(\alpha , \beta) $, where $\alpha$ and $\beta$ are unknown parameters of a Gamma distribution, then $T(X_1^n) = \left( \prod_{i=1}^n{X_i} , \sum_{i=1}^n X_i \right)$ is a two-dimensional sufficient statistic for $(\alpha, \beta)$. - -To see this, consider the joint probability density function of $X_1^n=(X_1,\dots,X_n)$. Because the observations are independent, the pdf can be written as a product of individual densities, i.e. - -\begin{align} - -f_{X_1^n}(x_1^n) - -&= \prod_{i=1}^n \left({1 \over \Gamma(\alpha) \beta^\alpha}\right) x_i^{\alpha -1} e^{(-1/\beta)x_i} \\[5pt] - -&= \left({1 \over \Gamma(\alpha) \beta^\alpha}\right)^n \left(\prod_{i=1}^n x_i\right)^{\alpha-1} e^{{-1 \over \beta} \sum_{i=1}^n x_i}. - -\end{align} - -The joint density of the sample takes the form required by the Fisher–Neyman factorization theorem, by letting - -\begin{align} - -h(x_1^n)= 1, - -g_{(\alpha , \beta)}(x_1^n)= \left({1 \over \Gamma(\alpha) \beta^{\alpha}}\right)^n \left(\prod_{i=1}^n x_i\right)^{\alpha-1} e^{{-1 \over \beta} \sum_{i=1}^n x_i}. - -\end{align} - -Since $h(x_1^n)$ does not depend on the parameter $(\alpha , \beta)$ and $g_{(\alpha , \beta)}(x_1^n)$ depends only on $x_1^n$ through the function $T(x_1^n)= \left( \prod_{i=1}^n x_i, \sum_{i=1}^n x_i \right),$ - -the Fisher–Neyman factorization theorem implies $T(X_1^n)= \left( \prod_{i=1}^n X_i, \sum_{i=1}^n X_i \right)$ is a sufficient statistic for $(\alpha , \beta).$ - -Sufficiency finds a useful application in the Rao–Blackwell theorem, which states that if g(X) is any kind of estimator of θ, then typically the conditional expectation of g(X) given sufficient statistic T(X) is a better estimator of θ, and is never worse. Sometimes one can very easily construct a very crude estimator g(X), and then evaluate that conditional expected value to get an estimator that is in various senses optimal. 
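A small simulation of Rao–Blackwellization (a sketch with a deliberately crude starting estimator; for Bernoulli data, E[X₁ | ΣXᵢ = t] = t/n by symmetry, so conditioning on the sufficient statistic turns X₁ into the sample mean):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 10, 0.3, 100_000

samples = rng.binomial(1, p, size=(trials, n))
crude = samples[:, 0]        # g(X) = X_1: unbiased for p but noisy
rb = samples.mean(axis=1)    # E[X_1 | sum X_i]: the Rao-Blackwell estimator

print(crude.mean(), crude.var())  # approx p and p(1-p) = 0.21
print(rb.mean(), rb.var())        # approx p and p(1-p)/n = 0.021
```

Both estimators stay unbiased, but conditioning on the sufficient statistic cuts the variance by a factor of n, as the theorem guarantees it can never increase it.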
According to the Pitman–Koopman–Darmois theorem, among families of probability distributions whose domain does not vary with the parameter being estimated, only in exponential families is there a sufficient statistic whose dimension remains bounded as sample size increases. - -Less tersely, suppose $X_n, n = 1, 2, 3, \dots$ are independent identically distributed random variables whose distribution is known to be in some family of probability distributions with fixed support. Only if that family is an exponential family is there a sufficient statistic (possibly vector-valued) $T(X_1, \dots, X_n)$ whose number of scalar components does not increase as the sample size n increases. - -This theorem shows that sufficiency (or rather, the existence of a scalar- or vector-valued sufficient statistic of bounded dimension) sharply restricts the possible forms of the distribution. - -An alternative formulation of the condition that a statistic be sufficient, set in a Bayesian context, involves the posterior distributions obtained by using the full data set and by using only a statistic. Thus the requirement is that, for almost every x, -$$ -\Pr(\theta\mid X=x) = \Pr(\theta\mid T(X)=t(x)). -$$ - -More generally, without assuming a parametric model, we can say that the statistic T is predictive sufficient if -$$ -\Pr(X'=x'\mid X=x) = \Pr(X'=x'\mid T(X)=t(x)). -$$ - -It turns out that this "Bayesian sufficiency" is a consequence of the formulation above; however, they are not directly equivalent in the infinite-dimensional case. A range of theoretical results for sufficiency in a Bayesian context is available. - -A concept called "linear sufficiency" can be formulated in a Bayesian context, and more generally. First define the best linear predictor of a vector Y based on X as $\hat E[Y\mid X]$. Then a linear statistic T(x) is linear sufficient if -$$ -\hat E[\theta\mid X]= \hat E[\theta\mid T(X)] . -$$ diff --git a/wiki/wikipedia/3175.txt b/wiki/wikipedia/3175.txt deleted file mode 100644 index 02eae53aaf9192fd99180393db31ab5b00ee94b1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3175.txt +++ /dev/null @@ -1,9 +0,0 @@ -In computational complexity theory, the linear search problem is an optimal search problem introduced by Richard E. Bellman and independently considered by Anatole Beck. - -"An immobile hider is located on the real line according to a known probability distribution. A searcher, whose maximal velocity is one, starts from the origin and wishes to discover the hider in minimal expected time. It is assumed that the searcher can change the direction of his motion without any loss of time. It is also assumed that the searcher cannot see the hider until he actually reaches the point at which the hider is located and the time elapsed until this moment is the duration of the game." In order to find the hider the searcher has to go a distance x1 in one direction, return to the origin and go distance x2 in the other direction etc. (the length of the n-th step being denoted by xn), and to do it in an optimal way. (However, an optimal solution need not have a first step and could start with an infinite number of small 'oscillations'.) This problem is usually called the linear search problem and a search plan is called a trajectory. It has attracted much research, some of it quite recent. - -The linear search problem for a general probability distribution is unsolved.
However, there exists a dynamic programming algorithm that produces a solution for any discrete distribution and also an approximate solution, for any probability distribution, with any desired accuracy. - -The linear search problem was solved by Anatole Beck and Donald J. Newman (1970) as a two-person zero-sum game. Their minimax trajectory is to double the distance on each step and the optimal strategy is a mixture of trajectories that increase the distance by some fixed constant. This solution gives search strategies that are not sensitive to assumptions concerning the distribution of the target. Thus, it also presents an upper bound for a worst-case scenario. This solution was obtained in the framework of an online algorithm by Shmuel Gal, who also generalized this result to a set of concurrent rays. The best online competitive ratio for the search on the line is 9 but it can be reduced to 4.6 by using a randomized strategy. Demaine et al. gave an online solution with a turn cost. - -These results were rediscovered in the 1990s by computer scientists as the cow path problem. diff --git a/wiki/wikipedia/3176.txt b/wiki/wikipedia/3176.txt deleted file mode 100644 index 793c604be2e2f8be79c9e254eb19ea622b24df3d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3176.txt +++ /dev/null @@ -1,10 +0,0 @@ -Bonnesen's inequality is an inequality relating the length, the area, the radius of the incircle and the radius of the circumcircle of a Jordan curve. It is a strengthening of the classical isoperimetric inequality. - -More precisely, consider a planar simple closed curve of length $L$ bounding a domain of area $A$. Let $r$ and $R$ denote the radii of the incircle and the circumcircle. Bonnesen proved the inequality -$$ - \pi^2 (R-r)^2 \leq L^2-4\pi A. -$$ - -The term $ \pi^2 (R-r)^2$ in the left hand side is known as the isoperimetric defect. - -Loewner's torus inequality with isosystolic defect is a systolic analogue of Bonnesen's inequality. diff --git a/wiki/wikipedia/3177.txt b/wiki/wikipedia/3177.txt deleted file mode 100644 index 97d5ade7f3ca4cfdc8bbeabdb7eee43ca7d81257..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3177.txt +++ /dev/null @@ -1,22 +0,0 @@ -The Hawkins–Simon condition refers to a result in mathematical economics, attributed to David Hawkins and Herbert A. Simon, that guarantees the existence of a non-negative output vector that solves the equilibrium relation in the input–output model where demand equals supply. More precisely, it states a condition for $[\mathbf{I} - \mathbf{A}]$ under which the input–output system -$$ -[\mathbf{I} - \mathbf{A}] \cdot \mathbf{x} = \mathbf{d} -$$ - -has a solution $\mathbf{\hat{x}} \geq 0$ for any $\mathbf{d} \geq 0$. Here $\mathbf{I}$ is the identity matrix and $\mathbf{A}$ is called the input–output matrix or Leontief matrix after Wassily Leontief, who empirically estimated it in the 1940s. Together, they describe a system in which -$$ -\sum_{j=1}^{n} a_{ij} x_{j} + d_{i} = x_{i} \quad i = 1, 2, \ldots, n -$$ - -where $a_{ij}$ is the amount of the ith good used to produce one unit of the jth good, $x_{j}$ is the amount of the jth good produced, and $d_{i}$ is the amount of final demand for good i. Rearranged and written in vector notation, this gives the first equation. - -Define $[\mathbf{I} - \mathbf{A}] = \mathbf{B}$, where $\mathbf{B} = \left[ b_{ij} \right]$ is an $n \times n$ matrix with $b_{ij} \leq 0, i \neq j$. 
Then the Hawkins–Simon theorem states that the following two conditions are equivalent - -(i) There exists an $\mathbf{x} \geq 0$ such that $\mathbf{B} \cdot \mathbf{x} > 0$. - -(ii) All the successive leading principal minors of $\mathbf{B}$ are positive, that is -$$ -b_{11} > 0, \begin{vmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{vmatrix} > 0, \ldots, \begin{vmatrix} b_{11} & b_{12} & \dots & b_{1n} \\ b_{21} & b_{22} & \dots & b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ b_{n1} & b_{n2} & \dots & b_{nn} \end{vmatrix} > 0 -$$ - -For a proof, see Morishima (1964), Nikaido (1968), or Murata (1977). Condition (ii) is known as Hawkins–Simon condition. This theorem was independently discovered by David Kotelyanskiĭ, as it is referred to by Felix Gantmacher as Kotelyanskiĭ lemma. diff --git a/wiki/wikipedia/3178.txt b/wiki/wikipedia/3178.txt deleted file mode 100644 index 1cb18ea42e75c8adc3fdda2384eff8a103bc73f0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3178.txt +++ /dev/null @@ -1,5 +0,0 @@ -Given an exterior differential system defined on a manifold M, the Cartan-Kuranishi prolongation theorem says that after a finite number of prolongations the system is either in involution (admits at least one 'large' integral manifold), or is impossible. - -The theorem is named after Élie Cartan and Masatake Kuranishi. - -This theorem is used in infinite-dimensional Lie theory. diff --git a/wiki/wikipedia/3179.txt b/wiki/wikipedia/3179.txt deleted file mode 100644 index 71ce543f36ac3cd018ae4bf3ecc65780e6090b1d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3179.txt +++ /dev/null @@ -1,11 +0,0 @@ -In geometry, Monge's theorem, named after Gaspard Monge, states that for any three circles in a plane, none of which is completely inside one of the others, the intersection points of each of the three pairs of external tangent lines are collinear. - -For any two circles in a plane, an external tangent is a line that is tangent to both circles but does not pass between them. There are two such external tangent lines for any two circles. Each such pair has a unique intersection point in the extended Euclidean plane. Monge's theorem states that the three such points given by the three pairs of circles always lie in a straight line. In the case of two of the circles being of equal size, the two external tangent lines are parallel. In this case Monge's theorem asserts that the other two intersection points must lie on a line parallel to those two external tangents. In other words, if the two external tangents are considered to intersect at the point at infinity, then the other two intersection points must be on a line passing through the same point at infinity, so the line between them takes the same angle as the external tangent. - -The simplest proof employs a three-dimensional analogy. Let the three circles correspond to three spheres of different radii; the circles correspond to the equators that result from a plane passing through the centers of the spheres. The three spheres can be sandwiched uniquely between two planes. Each pair of spheres defines a cone that is externally tangent to both spheres, and the apex of this cone corresponds to the intersection point of the two external tangents, i.e., the external homothetic center. Since one line of the cone lies in each plane, the apex of each cone must lie in both planes, and hence somewhere on the line of intersection of the two planes. Therefore, the three external homothetic centers are collinear. 
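The collinearity is easy to check numerically (a minimal sketch; the formula used here for the external homothetic center of two circles assumes their radii differ, so that the center is the point dividing the segment between the circle centers externally in the ratio of the radii):

```python
def external_center(c1, r1, c2, r2):
    """External homothetic center of two circles with r1 != r2."""
    return ((r2 * c1[0] - r1 * c2[0]) / (r2 - r1),
            (r2 * c1[1] - r1 * c2[1]) / (r2 - r1))

# Three circles with pairwise different radii, none inside another.
circles = [((0.0, 0.0), 1.0), ((5.0, 1.0), 2.0), ((2.0, 4.0), 3.0)]
pts = [external_center(*circles[i], *circles[j])
       for i, j in [(0, 1), (0, 2), (1, 2)]]

# Collinear iff the triangle spanned by the three points has zero area.
(ax, ay), (bx, by), (cx, cy) = pts
area2 = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
print(pts, abs(area2) < 1e-9)  # [(-5,-1), (-1,-2), (11,-5)], True
```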
- -Monge's theorem can also be proved by using Desargues' theorem. - -Another easy proof uses Menelaus' theorem, since the relevant ratios can be expressed in terms of the diameters of the circles, and these cancel in the cyclic products that appear in Menelaus' theorem. - -Desargues' theorem also asserts that 3 points lie on a line, and has a similar proof using the same idea of considering it in 3 rather than 2 dimensions and writing the line as an intersection of 2 planes. diff --git a/wiki/wikipedia/318.txt b/wiki/wikipedia/318.txt deleted file mode 100644 index 4a70c82b4913766580ab246283d0b2e97249eed1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/318.txt +++ /dev/null @@ -1,15 +0,0 @@ -Mindomo is a versatile freemium collaborative mind mapping, concept mapping and outlining tool developed by Expert Software Applications. It can be used to develop ideas and interactively brainstorm, with features including sharing, collaboration, task management, presentation and interactive web publication. - -The online version of Mindomo is available through any browser. There are also offline desktop versions for Windows, Linux and Mac, and app versions for both Android and iOS. Registered users can create and collaborate in real-time on mind maps, while unregistered users can view the maps shared with them. The software also provides ways to create presentations and mind map assignments. - -* In 2006, Expert Software Applications began development of a mind-mapping tool called Mindomo, using the Adobe Flex development kit, which is based on the Adobe Flash platform. - -* In 2007, the Mindomo web app was launched. The user interface used a ribbon consistent with Office 2007. Users could also publish interactive versions of mind maps, as well as static images of them. - -For the educational sector, Mindomo offers student assignments for teachers and integration with many learning-management systems. - -Michael Stratton, author of "The Effective Project Manager", used Mindomo for examples of using mind maps in project management. - -When Mindomo launched in 2007, Chuck Frey, author of "The Mind Mapping Software Blog", wrote, "Mindomo sets a new standard for web-based mind mapping tools with features that rivals many desktop mind maps." In 2014, Mindomo was available to all public schools in Ontario, Canada, as it was approved by the Ontario Software Acquisition Program Advisory Committee (OSAPAC), which advises the Ontario Ministry of Education on the acquisition of provincial licenses for publicly funded schools in Ontario. - -In 2019 Mindomo won PC Magazine's Editors' Choice award as "best mind mapping tool", citing how it coupled mind-mapping with the social aspects of knowledge management. According to Mindomo's website, there are more than six million users of Mindomo worldwide. diff --git a/wiki/wikipedia/3180.txt b/wiki/wikipedia/3180.txt deleted file mode 100644 index c98c5d626639edce34d84e9cef9cfb3dceff7149..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3180.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, Mumford's compactness theorem states that the space of compact Riemann surfaces of fixed genus g > 1 with no closed geodesics of length less than some fixed ε > 0 in the Poincaré metric is compact. It was proved by David Mumford as a consequence of a theorem about the compactness of sets of discrete subgroups of semisimple Lie groups generalizing Mahler's compactness theorem.
diff --git a/wiki/wikipedia/3181.txt b/wiki/wikipedia/3181.txt deleted file mode 100644 index 7c26fea0ffee825b23287a20622208be2c4aed8f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3181.txt +++ /dev/null @@ -1,86 +0,0 @@ -In algebra, the rational root theorem (or rational root test, rational zero theorem, rational zero test or p/q theorem) states a constraint on rational solutions of a polynomial equation -$$ -a_nx^n+a_{n-1}x^{n-1}+\cdots+a_0 = 0 -$$ - -with integer coefficients $a_i\in\mathbb{Z}$ and $a_0,a_n \neq 0$. Solutions of the equation are also called roots or zeroes of the polynomial on the left side. - -The theorem states that each rational solution x = p/q, written in lowest terms so that p and q are relatively prime, satisfies: - -* p is an integer factor of the constant term $a_0$, and - -* q is an integer factor of the leading coefficient $a_n$. - -The rational root theorem is a special case (for a single linear factor) of Gauss's lemma on the factorization of polynomials. The integral root theorem is the special case of the rational root theorem when the leading coefficient is $a_n = 1$. - -The theorem is used to find all rational roots of a polynomial, if any. It gives a finite number of possible fractions which can be checked to see if they are roots. If a rational root x = r is found, a linear polynomial (x – r) can be factored out of the polynomial using polynomial long division, resulting in a polynomial of lower degree whose roots are also roots of the original polynomial. - -The general cubic equation -$$ -ax^3+bx^2+cx+d=0 -$$ - -with integer coefficients has three solutions in the complex plane. If the rational root test finds no rational solutions, then the only way to express the solutions algebraically uses cube roots. But if the test finds a rational solution r, then factoring out (x – r) leaves a quadratic polynomial whose two roots, found with the quadratic formula, are the remaining two roots of the cubic, avoiding cube roots. - -Let $P(x) \ =\ a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0$ with $a_0, \ldots, a_n \in \mathbb{Z}.$ - -Suppose P(p/q) = 0 for some coprime p, q ∈ ℤ: -$$ -P\left(\tfrac{p}{q}\right) = a_n\left(\tfrac{p}{q}\right)^n + a_{n-1}\left(\tfrac{p}{q}\right)^{n-1} + \cdots + a_1 \left(\tfrac{p}{q}\right) + a_0 = 0. -$$ - -To clear denominators, multiply both sides by $q^n$: -$$ -a_n p^n + a_{n-1} p^{n-1}q + \cdots + a_1 p q^{n-1} + a_0 q^n = 0. -$$ - -Shifting the $a_0$ term to the right side and factoring out p on the left side produces: -$$ -p \left (a_np^{n-1} + a_{n-1}qp^{n-2} + \cdots + a_1q^{n-1} \right ) = -a_0q^n. -$$ - -Thus, p divides $a_0 q^n$. But p is coprime to q and therefore to $q^n$, so by Euclid's lemma p must divide the remaining factor $a_0$. - -On the other hand, shifting the $a_n$ term to the right side and factoring out q on the left side produces: -$$ -q \left (a_{n-1}p^{n-1} + a_{n-2}qp^{n-2} + \cdots + a_0q^{n-1} \right ) = -a_np^n. -$$ - -Reasoning as before, it follows that q divides $a_n$. - -Should there be a nontrivial factor dividing all the coefficients of the polynomial, then one can divide by the greatest common divisor of the coefficients so as to obtain a primitive polynomial in the sense of Gauss's lemma; this does not alter the set of rational roots and only strengthens the divisibility conditions. That lemma says that if the polynomial factors in Q[X], then it also factors in Z[X] as a product of primitive polynomials.
Now any rational root p/q corresponds to a factor of degree 1 in Q[X] of the polynomial, and its primitive representative is then qx − p, assuming that p and q are coprime. But any multiple in Z[X] of qx − p has leading term divisible by q and constant term divisible by p, which proves the statement. This argument shows that more generally, any irreducible factor of P can be supposed to have integer coefficients, and leading and constant coefficients dividing the corresponding coefficients of P. - -In the polynomial -$$ -2x^3+x-1, -$$ - -any rational root fully reduced would have to have a numerator that divides evenly into 1 and a denominator that divides evenly into 2. Hence the only possible rational roots are ±1/2 and ±1; since neither of these equates the polynomial to zero, it has no rational roots. - -In the polynomial -$$ -x^3-7x+6 -$$ - -the only possible rational roots would have a numerator that divides 6 and a denominator that divides 1, limiting the possibilities to ±1, ±2, ±3, and ±6. Of these, 1, 2, and −3 equate the polynomial to zero, and hence are its rational roots. (In fact these are its only roots since a cubic has only three roots; in general, a polynomial could have some rational and some irrational roots.) - -Every rational root of the polynomial -$$ -3x^3 - 5x^2 + 5x - 2 -$$ - -must be among the numbers symbolically indicated by: -$$ -\pm\tfrac{1,2}{1,3} = \pm \left\{1, 2, \tfrac{1}{3}, \tfrac{2}{3}\right\} . -$$ - -These 8 root candidates x = r can be tested by evaluating P(r), for example using Horner's method. It turns out there is exactly one with P(r) = 0. - -This process may be made more efficient: if P(r) ≠ 0, it can be used to shorten the list of remaining candidates. For example, x = 1 does not work, as P(1) = 1. Substituting x = 1 + t yields a polynomial in t with constant term P(1) = 1, while the coefficient of $t^3$ remains the same as the coefficient of $x^3$. Applying the rational root theorem thus yields the possible roots $t=\pm\tfrac{1}{1,3}$, so that -$$ -x = 1+t = 2, 0, \tfrac{4}{3}, \tfrac{2}{3}. -$$ - -True roots must occur on both lists, so the list of rational root candidates has shrunk to just x = 2 and x = 2/3. - -If k ≥ 1 rational roots are found, Horner's method will also yield a polynomial of degree n − k whose roots, together with the rational roots, are exactly the roots of the original polynomial. If none of the candidates is a solution, there can be no rational solution. diff --git a/wiki/wikipedia/3182.txt b/wiki/wikipedia/3182.txt deleted file mode 100644 index 6b74ed575cc55be55a8758d0effea88cd48cb3fe..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3182.txt +++ /dev/null @@ -1,14 +0,0 @@ -In mathematics, the Kuratowski–Ryll-Nardzewski measurable selection theorem is a result from measure theory that gives a sufficient condition for a multifunction to have a measurable selection function. It is named after the Polish mathematicians Kazimierz Kuratowski and Czesław Ryll-Nardzewski. - -Many classical selection results follow from this theorem and it is widely used in mathematical economics and optimal control. - -== Statement of the theorem == - -Let $ X $ be a Polish space, $ \mathcal{B} (X) $ the Borel σ-algebra of $ X $, $ (\Omega, B) $ a measurable space and $ \psi $ a multifunction on $ \Omega$ taking values in the set of nonempty closed subsets of $ X $. - -Suppose that $ \psi $ is $ B $-weakly measurable, that is, for every open set $ U $ of $ X $, we have -$$ -\{\omega : \psi (\omega) \cap U \neq \emptyset \} \in B.
-$$ - -Then $ \psi $ has a selection that is $ B $-$ \mathcal{B} (X) $-measurable. diff --git a/wiki/wikipedia/3183.txt b/wiki/wikipedia/3183.txt deleted file mode 100644 index 477c471ad66de283d879cd80d4473ed33b91f950..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3183.txt +++ /dev/null @@ -1,12 +0,0 @@ -In group theory, a branch of mathematics, the Nielsen–Schreier theorem states that every subgroup of a free group is itself free. A free group G on a set of generators is the fundamental group of a bouquet of circles, a topological graph X with a single vertex and with a loop-edge for each generator; a subgroup H of G then corresponds to a covering space Y of this bouquet. - -Simplicial homology allows the computation of the rank of H, which is equal to $h_1(Y)$, the first Betti number of the covering space, the number of independent cycles. For G free of rank n, the graph X has n edges and 1 vertex; assuming H has finite index [G : H] = e, the covering graph Y has en edges and e vertices. The first Betti number of a graph is equal to the number of edges, minus the number of vertices, plus the number of connected components; hence the rank of H is: -$$ -h_1(Y) = en-e+1 = 1+e(n{-}1). -$$ - -This proof is due to ; the original proof by Schreier forms the Schreier graph in a different way as a quotient of the Cayley graph of G modulo the action of H. - -According to Schreier's subgroup lemma, a set of generators for a free presentation of H may be constructed from cycles in the covering graph formed by concatenating a spanning tree path from a base point (the coset of the identity) to one of the cosets, a single non-tree edge, and an inverse spanning tree path from the other endpoint of the edge back to the base point. - -Nielsen originally proved a restricted form of the theorem, stating that any finitely-generated subgroup of a free group is free. His proof involves performing a sequence of Nielsen transformations on the subgroup's generating set that reduce the lengths of the generators (as reduced words in the free group from which they are drawn). diff --git a/wiki/wikipedia/3184.txt b/wiki/wikipedia/3184.txt deleted file mode 100644 index 02fcad3c30ec5e649e5a8c48aadc8bb4c1cddc07..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3184.txt +++ /dev/null @@ -1,21 +0,0 @@ -In graph theory, a clique cover or partition into cliques of a given undirected graph is a partition of the vertices of the graph into cliques, subsets of vertices within which every two vertices are adjacent. A minimum clique cover is a clique cover that uses as few cliques as possible. The minimum k for which a partition into k cliques exists is called the clique cover number of the given graph. - -A clique cover of a graph G may be seen as a graph coloring of the complement graph of G, the graph on the same vertex set that has edges between non-adjacent vertices of G. Like clique covers, graph colorings are partitions of the set of vertices, but into subsets with no adjacencies (independent sets) rather than cliques. A subset of vertices is a clique in G if and only if it is an independent set in the complement of G, so a partition of the vertices of G is a clique cover of G if and only if it is a coloring of the complement of G. - -The clique cover problem in computational complexity theory is the algorithmic problem of finding a minimum clique cover, or (rephrased as a decision problem) finding a clique cover whose number of cliques is below a given threshold. Finding a minimum clique cover is NP-hard, and its decision version is NP-complete.
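This complement correspondence translates directly into code. The sketch below (an illustration, not from the article; it assumes the networkx library) covers a graph by cliques by running a greedy coloring heuristic on the complement graph, so the result is a valid clique cover but not necessarily a minimum one:

```
# A clique cover of G from a proper coloring of the complement of G:
# each color class is an independent set in the complement, i.e. a clique in G.
import networkx as nx

def greedy_clique_cover(G):
    coloring = nx.greedy_color(nx.complement(G), strategy="largest_first")
    cover = {}
    for vertex, color in coloring.items():
        cover.setdefault(color, set()).add(vertex)
    return list(cover.values())

G = nx.cycle_graph(6)  # triangle-free: an optimal cover pairs up a matching
print(greedy_clique_cover(G))
```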
The clique cover problem was one of Richard Karp's original 21 problems shown NP-complete in his 1972 paper "Reducibility Among Combinatorial Problems". - -The equivalence between clique covers and coloring is a reduction that can be used to prove the NP-completeness of the clique cover problem from the known NP-completeness of graph coloring. - -Perfect graphs are defined as the graphs in which, for every induced subgraph, the chromatic number (minimum number of colors in a coloring) equals the size of the maximum clique. - -According to the weak perfect graph theorem, the complement of a perfect graph is also perfect. Therefore, the perfect graphs are also the graphs in which, for every induced subgraph, the clique cover number equals the size of the maximum independent set. It is possible to compute the clique cover number in perfect graphs in polynomial time. - -Another class of graphs in which the minimum clique cover can be found in polynomial time is the class of triangle-free graphs. In these graphs, every clique cover consists of a matching (a set of disjoint pairs of adjacent vertices) together with singleton sets for the remaining unmatched vertices. The number of cliques equals the number of vertices minus the number of matched pairs. Therefore, in triangle-free graphs, the minimum clique cover can be found by using an algorithm for maximum matching. - -The optimum partition into cliques can also be found in polynomial time for graphs of bounded clique-width. These include, among other graphs, the cographs and distance-hereditary graphs, which are both also classes of perfect graphs. - -The clique cover problem remains NP-complete on some other special classes of graphs, including the cubic planar graphs. - -Baker's technique can be used to provide a polynomial-time approximation scheme for the problem on planar graphs. - -The related clique edge cover problem concerns partitioning the edges of a graph, rather than the vertices, into subgraphs induced by cliques. It is also NP-complete. diff --git a/wiki/wikipedia/3185.txt b/wiki/wikipedia/3185.txt deleted file mode 100644 index 3b4cff74821f40e85c8c4f30de698fe24e87e2d8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3185.txt +++ /dev/null @@ -1,7 +0,0 @@ -In mathematics, the Denjoy theorem gives a sufficient condition for a diffeomorphism of the circle to be topologically conjugate to a diffeomorphism of a special kind, namely an irrational rotation. Arnaud Denjoy proved the theorem in the course of his topological classification of homeomorphisms of the circle. He also gave an example of a C^1 diffeomorphism with an irrational rotation number that is not conjugate to a rotation. - -Let ƒ: S^1 → S^1 be an orientation-preserving diffeomorphism of the circle whose rotation number θ = ρ(ƒ) is irrational. Assume that its derivative ƒ′(x) > 0 is a continuous function with bounded variation on the interval [0,1). Then ƒ is topologically conjugate to the irrational rotation by θ. Moreover, every orbit is dense and every nontrivial interval I of the circle intersects its forward image ƒ^q(I), for some q > 0 (this means that the non-wandering set of ƒ is the whole circle). - -If ƒ is a C^2 map, then the hypothesis on the derivative holds; however, for any irrational rotation number Denjoy constructed an example showing that this condition cannot be relaxed to C^1, continuous differentiability of ƒ. - -Vladimir Arnold showed that the conjugating map need not be smooth, even for an analytic diffeomorphism of the circle.
Later Michel Herman proved that nonetheless, the conjugating map of an analytic diffeomorphism is itself analytic for "most" rotation numbers, forming a set of full Lebesgue measure, namely, for those that are badly approximable by rational numbers. His results are even more general and specify differentiability class of the conjugating map for Cr diffeomorphisms with any r ≥ 3. diff --git a/wiki/wikipedia/3186.txt b/wiki/wikipedia/3186.txt deleted file mode 100644 index 8f7bd5e0874f808a274176ac4c66cd8e1a828b98..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3186.txt +++ /dev/null @@ -1,63 +0,0 @@ -In differential topology, an area of mathematics, the Hirzebruch signature theorem (sometimes called the Hirzebruch index theorem) - -is Friedrich Hirzebruch's 1954 result expressing the signature - -of a smooth closed oriented manifold by a linear combination of Pontryagin numbers called the - -L-genus. - -It was used in the proof of the Hirzebruch–Riemann–Roch theorem. - -The L-genus is the genus for the multiplicative sequence of polynomials - -associated to the characteristic power series - -{x\over \tanh(x)} = \sum_{k\ge 0} {{2^{2k}B_{2k}\over (2k)!}x^{2k}} - -= 1 + {x^2 \over 3} - {x^4 \over 45} +\cdots . - -The first two of the resulting L-polynomials are: - -* $L_1 = \tfrac13 p_1$ - -* $L_2 = \tfrac1{45}(7p_2 - p_1^2)$ - -By taking for the $p_i$ the Pontryagin classes $p_i(M)$ of the tangent bundle of a 4n dimensional smooth closed oriented - -manifold M one obtains the L-classes of M. - -Hirzebruch showed that the n-th L-class of M evaluated on the fundamental class of M, $[M]$, is equal to $\sigma(M)$, the signature of M - -(i.e. the signature of the intersection form on the 2nth cohomology group of M ): -$$ - \sigma(M) = \langle L_n(p_1(M), \dots, p_n(M)), [M]\rangle. -$$ - -René Thom had earlier proved that the signature was given by some linear combination of Pontryagin numbers, and Hirzebruch found the exact formula for this linear combination - -by introducing the notion of the genus of a multiplicative sequence. - -Since the rational oriented cobordism ring $\Omega_*^{\text{SO}}\otimes \Q$ is equal to -$$ -\Omega_*^{\text{SO}}\otimes \Q =\Q [\mathbb{P}^{2}(\Complex), \mathbb{P}^{4}(\Complex), \ldots ], -$$ - -the polynomial algebra generated by the oriented cobordism classes -$$ -[\mathbb{P}^{2i}(\Complex)] -$$ of the even dimensional complex projective spaces, - -it is enough to verify that -$$ - \sigma(\mathbb{P}^{2i})= 1 = \langle L_i(p_1(\mathbb{P}^{2i}), \ldots, p_n(\mathbb{P}^{2i})), [\mathbb{P}^{2i}]\rangle -$$ - -for all i. - -The signature theorem is a special case of the Atiyah–Singer index theorem for - -the signature operator. - -The analytic index of the signature operator equals the signature of the manifold, and its topological index is the L-genus of the manifold. - -By the Atiyah–Singer index theorem these are equal. diff --git a/wiki/wikipedia/3187.txt b/wiki/wikipedia/3187.txt deleted file mode 100644 index 238fcd126765bbeb8ffd218a0e6e99f9ac5f9d80..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3187.txt +++ /dev/null @@ -1,223 +0,0 @@ -
    - -Multiplication (often denoted by the cross symbol , by the mid-line dot operator , by juxtaposition, or, on computers, by an asterisk ) is one of the four elementary mathematical operations of arithmetic, with the other ones being addition, subtraction, and division. The result of a multiplication operation is called a product. - -The multiplication of whole numbers may be thought of as repeated addition; that is, the multiplication of two numbers is equivalent to adding as many copies of one of them, the multiplicand, as the quantity of the other one, the multiplier. Both numbers can be referred to as factors. -$$ -a\times b = \underbrace{b + \cdots + b}_{a \text{ times}} -$$ - -For example, 4 multiplied by 3, often written as $ 3 \times 4 $ and spoken as "3 times 4", can be calculated by adding 3 copies of 4 together: -$$ -3 \times 4 = 4 + 4 + 4 = 12 -$$ - -Here, 3 (the multiplier) and 4 (the multiplicand) are the factors, and 12 is the product. - -One of the main properties of multiplication is the commutative property, which states in this case that adding 3 copies of 4 gives the same result as adding 4 copies of 3: -$$ -4 \times 3 = 3 + 3 + 3 + 3 = 12 -$$ - -Thus the designation of multiplier and multiplicand does not affect the result of the multiplication. - -Systematic generalizations of this basic definition define the multiplication of integers (including negative numbers), rational numbers (fractions), and real numbers. - -Multiplication can also be visualized as counting objects arranged in a rectangle (for whole numbers) or as finding the area of a rectangle whose sides have some given lengths. The area of a rectangle does not depend on which side is measured first—a consequence of the commutative property. - -The product of two measurements is a new type of measurement. For example, multiplying the lengths of the two sides of a rectangle gives its area. Such a product is the subject of dimensional analysis. - -The inverse operation of multiplication is division. For example, since 4 multiplied by 3 equals 12, 12 divided by 3 equals 4. Indeed, multiplication by 3, followed by division by 3, yields the original number. The division of a number other than 0 by itself equals 1. - -Multiplication is also defined for other types of numbers, such as complex numbers, and for more abstract constructs, like matrices. For some of these more abstract constructs, the order in which the operands are multiplied together matters. A listing of the many different kinds of products used in mathematics is given in Product (mathematics). - -In arithmetic, multiplication is often written using the multiplication sign (either or) between the terms (that is, in infix notation). For example, -$$ -2\times 3 = 6 -$$ ("two times three equals six") -$$ -3\times 4 = 12 -$$ -$$ -2\times 3\times 5 = 6\times 5 = 30 -$$ -$$ -2\times 2\times 2\times 2\times 2 = 32 -$$ - -There are other mathematical notations for multiplication: - -* To reduce confusion between the multiplication sign × and the common variable x, multiplication is also denoted by dot signs, usually a middle-position dot (rarely period): - -5 ⋅ 2 or 5 . 3 - -The middle dot notation, encoded in Unicode as , is now standard in the United States and other countries where the period is used as a decimal point. When the dot operator character is not accessible, the interpunct (·) is used. In other countries that use a comma as a decimal mark, either the period or a middle dot is used for multiplication. 
- -Historically, in the United Kingdom and Ireland, the middle dot was sometimes used for the decimal point to prevent it from disappearing in the ruled line, and the period/full stop was used for multiplication. However, since the Ministry of Technology ruled that the period should be used as the decimal point in 1968, and the SI standard has since been widely adopted, this usage is now found only in the more traditional journals such as The Lancet. - -* In algebra, multiplication involving variables is often written as a juxtaposition (e.g., xy for x times y or 5x for five times x), also called implied multiplication. The notation can also be used for quantities that are surrounded by parentheses (e.g., 5(2) or (5)(2) for five times two). This implicit usage of multiplication can cause ambiguity when the concatenated variables happen to match the name of another variable, when a variable name in front of a parenthesis can be confused with a function name, or in the correct determination of the order of operations. - -* In vector multiplication, there is a distinction between the cross and the dot symbols. The cross symbol generally denotes taking the cross product of two vectors, yielding a vector as its result, while the dot denotes taking the dot product of two vectors, resulting in a scalar. - -In computer programming, the asterisk (as in 5*2) is still the most common notation. This is because most computers historically were limited to small character sets (such as ASCII and EBCDIC) that lacked a multiplication sign (such as ⋅ or ×), while the asterisk appeared on every keyboard. This usage originated in the FORTRAN programming language. - -The numbers to be multiplied are generally called the "factors". The number to be multiplied is the "multiplicand", and the number by which it is multiplied is the "multiplier". Usually, the multiplier is placed first and the multiplicand is placed second. - -The modern method of multiplication based on the Hindu–Arabic numeral system was first described by Brahmagupta. Brahmagupta gave rules for addition, subtraction, multiplication, and division. Henry Burchard Fine, then a professor of mathematics at Princeton University, wrote the following: - -The Indians are the inventors not only of the positional decimal system itself, but of most of the processes involved in elementary reckoning with the system. Addition and subtraction they performed quite as they are performed nowadays; multiplication they effected in many ways, ours among them, but division they did cumbrously. - -These place value decimal arithmetic algorithms were introduced to Arab countries by Al Khwarizmi in the early 9th century and popularized in the Western world by Fibonacci in the 13th century. - -Grid method multiplication, or the box method, is used in primary schools in England and Wales and in some areas of the United States to help teach an understanding of how multiple digit multiplication works. An example of multiplying 34 by 13 would be to split the numbers into tens and units (34 = 30 + 4 and 13 = 10 + 3), lay the four partial products out in a grid (30 × 10 = 300, 30 × 3 = 90, 4 × 10 = 40 and 4 × 3 = 12), and then add the entries, giving 442. - -The classical method of multiplying two n-digit numbers requires $n^2$ digit multiplications. Multiplication algorithms have been designed that reduce the computation time considerably when multiplying large numbers. Methods based on the discrete Fourier transform reduce the computational complexity to O(n log n log log n). In 2016, the factor log log n was replaced by a function that increases much more slowly, though still not constant.
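For concreteness, here is a sketch (illustrative only) of the classical digit-by-digit method whose $n^2$ cost the fast algorithms improve upon; it performs the same computation as the grid method above, organized as two nested loops plus carry propagation:

```
def multiply_digits(a, b):
    """Multiply non-negative integers given as base-10 digit lists,
    least-significant digit first; returns the digits of the product."""
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(a):        # n iterations ...
        for j, db in enumerate(b):    # ... of n digit products each: n^2 total
            result[i + j] += da * db
    for k in range(len(result) - 1):  # propagate the carries
        result[k + 1] += result[k] // 10
        result[k] %= 10
    while len(result) > 1 and result[-1] == 0:
        result.pop()                  # strip leading zeros
    return result

print(multiply_digits([4, 3], [3, 1]))  # 34 x 13 = 442 -> [2, 4, 4]
```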
In March 2019, David Harvey and Joris van der Hoeven submitted a paper presenting an integer multiplication algorithm with a complexity of $O(n\log n).$ The algorithm, also based on the fast Fourier transform, is conjectured to be asymptotically optimal. The algorithm is not practically useful, as it only becomes faster for multiplying extremely large numbers (having more than $2^{1729^{12}}$ bits). - -One can only meaningfully add or subtract quantities of the same type, but quantities of different types can be multiplied or divided without problems. For example, four bags with three marbles each can be thought of as: [4 bags] × [3 marbles per bag] = 12 marbles. - -;Associative property - -Expressions solely involving multiplication or addition are invariant with respect to the order of operations: -$$ -(x\cdot y)\cdot z = x\cdot(y\cdot z) -$$ - -;Distributive property - -Holds with respect to multiplication over addition. This identity is of prime importance in simplifying algebraic expressions: -$$ -x\cdot(y + z) = x\cdot y + x\cdot z -$$ - -;Identity element - -The multiplicative identity is 1; anything multiplied by 1 is itself. This feature of 1 is known as the identity property: -$$ -x\cdot 1 = x -$$ - -;Property of 0 - -Any number multiplied by 0 is 0. This is known as the zero property of multiplication: -$$ -x\cdot 0 = 0 -$$ - -;Negation - -−1 times any number is equal to the additive inverse of that number. -$$ -(-1)\cdot x = (-x) -$$ where $(-x)+x=0$ - -−1 times −1 is 1. -$$ -(-1)\cdot (-1) = 1 -$$ - -;Inverse element - -Every number x, except 0, has a multiplicative inverse, $\frac{1}{x}$, such that $x\cdot\left(\frac{1}{x}\right) = 1$. - -;Order preservation - -Multiplication by a positive number preserves the order: - -For a > 0, if b > c then ab > ac. - -Multiplication by a negative number reverses the order: - -For a < 0, if b > c then ab < ac. - -The complex numbers do not have an ordering that is compatible with both addition and multiplication. - -Other mathematical systems that include a multiplication operation may not have all these properties. For example, multiplication is not, in general, commutative for matrices and quaternions. - -In the book Arithmetices principia, nova methodo exposita, Giuseppe Peano proposed axioms for arithmetic based on his axioms for natural numbers. Peano arithmetic has two axioms for multiplication: -$$ -x \times 0 = 0 -$$ -$$ -x \times S(y) = (x \times y) + x -$$ - -Here S(y) represents the successor of y; i.e., the natural number that follows y. The various properties like associativity can be proved from these and the other axioms of Peano arithmetic, including induction. For instance, S(0), denoted by 1, is a multiplicative identity because -$$ -x \times 1 = x \times S(0) = (x \times 0) + x = 0 + x = x. -$$ - -The axioms for integers typically define them as equivalence classes of ordered pairs of natural numbers. The model is based on treating (x,y) as equivalent to x − y when x and y are treated as integers. Thus both (0,1) and (1,2) are equivalent to −1. The multiplication axiom for integers defined this way is -$$ -(x_p, x_m) \times (y_p, y_m) = (x_p \times y_p + x_m \times y_m, x_p \times y_m + x_m \times y_p). -$$ - -The rule that −1 × −1 = 1 can then be deduced from -$$ -(0, 1) \times (0, 1) = (0 \times 0 + 1 \times 1, 0 \times 1 + 1 \times 0) = (1,0). -$$ - -Multiplication is extended in a similar way to rational numbers and then to real numbers. - -The product of non-negative integers can be defined with set theory using cardinal numbers or the Peano axioms.
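As an illustration (mine, not the article's), the two Peano multiplication axioms transcribe directly into a recursive function on the natural numbers, and the ordered-pair rule above gives multiplication of integers:

```
def peano_mul(x, y):
    # x * 0 = 0 ;  x * S(y) = (x * y) + x, where S(y) = y + 1
    if y == 0:
        return 0
    return peano_mul(x, y - 1) + x

def int_mul(x, y):
    # (x_p, x_m) represents the integer x_p - x_m
    (xp, xm), (yp, ym) = x, y
    return (xp * yp + xm * ym, xp * ym + xm * yp)

print(peano_mul(3, 4))          # 12
print(int_mul((0, 1), (0, 1)))  # (1, 0), i.e. (-1) * (-1) = 1
```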
See below for how to extend this to multiplying arbitrary integers, and then arbitrary rational numbers. The product of real numbers is defined in terms of products of rational numbers; see construction of the real numbers. - -==Multiplication in group theory== - -There are many sets that, under the operation of multiplication, satisfy the axioms that define group structure. These axioms are closure, associativity, and the inclusion of an identity element and inverses. - -A simple example is the set of non-zero rational numbers. Here we have identity 1, as opposed to groups under addition where the identity is typically 0. Note that with the rationals, we must exclude zero because, under multiplication, it does not have an inverse: there is no rational number that can be multiplied by zero to result in 1. In this example, we have an abelian group, but that is not always the case. - -To see this, consider the set of invertible square matrices of a given dimension over a given field. Here, it is straightforward to verify closure, associativity, and inclusion of identity (the identity matrix) and inverses. However, matrix multiplication is not commutative, which shows that this group is non-abelian. - -Another fact worth noticing is that the integers under multiplication do not form a group—even if we exclude zero. This is easily seen from the nonexistence of inverses for all elements other than 1 and −1. - -Multiplication in group theory is typically notated either by a dot or by juxtaposition (the omission of an operation symbol between elements). So multiplying element a by element b could be notated as a $\cdot$ b or ab. When referring to a group via the indication of the set and operation, the dot is used. For example, our first example could be indicated by $\left( \mathbb{Q} \setminus \{ 0 \} , \cdot \right)$. - -==Multiplication of different kinds of numbers== - -Numbers can count (3 apples), order (the 3rd apple), or measure (3.5 feet high); as the history of mathematics has progressed from counting on our fingers to modelling quantum mechanics, multiplication has been generalized to more complicated and abstract types of numbers, and to things that are not numbers (such as matrices) or do not look much like numbers (such as quaternions). - -;Integers -$$ -N\times M -$$ is the sum of N copies of M when N and M are positive whole numbers. This gives the number of things in an array N wide and M high. Generalization to negative numbers can be done by -$$ -N\times (-M) = (-N)\times M = - (N\times M) -$$ and -$$ -(-N)\times (-M) = N\times M -$$ - -The same sign rules apply to rational and real numbers. - -;Rational numbers - -Generalization to fractions $\frac{A}{B}\times \frac{C}{D}$ is by multiplying the numerators and denominators respectively: $\frac{A}{B}\times \frac{C}{D} = \frac{(A\times C)}{(B\times D)}$. This gives the area of a rectangle $\frac{A}{B}$ high and $\frac{C}{D}$ wide, and is the same as the number of things in an array when the rational numbers happen to be whole numbers. - -;Real numbers - -Real numbers and their products can be defined in terms of sequences of rational numbers. - -;Complex numbers - -Considering complex numbers $z_1$ and $z_2$ as ordered pairs of real numbers $(a_1, b_1)$ and $(a_2, b_2)$, the product $z_1\times z_2$ is $(a_1\times a_2 - b_1\times b_2, a_1\times b_2 + a_2\times b_1)$. This is the same as for reals $a_1\times a_2$ when the imaginary parts $b_1$ and $b_2$ are zero.
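This ordered-pair rule is easy to check against a built-in implementation; the short sketch below (an illustration with arbitrary values) compares it with Python's native complex type:

```
def complex_mul(z1, z2):
    # (a1, b1) * (a2, b2) = (a1*a2 - b1*b2, a1*b2 + a2*b1)
    (a1, b1), (a2, b2) = z1, z2
    return (a1 * a2 - b1 * b2, a1 * b2 + a2 * b1)

z1, z2 = (3.0, 2.0), (1.0, -4.0)
print(complex_mul(z1, z2))          # (11.0, -10.0)
print(complex(*z1) * complex(*z2))  # (11-10j), in agreement
```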
- -Equivalently, denoting $\sqrt{-1}$ as $i$, we have $z_1 \times z_2 = (a_1+b_1i)(a_2+b_2i)=(a_1 \times a_2)+(a_1\times b_2i)+(b_1\times a_2i)+(b_1\times b_2i^2)=(a_1a_2-b_1b_2)+(a_1b_2+b_1a_2)i.$ - -Alternatively, in trigonometric form, if $z_1 = r_1(\cos\phi_1+i\sin\phi_1), z_2 = r_2(\cos\phi_2+i\sin\phi_2)$, then$z_1z_2 = r_1r_2(\cos(\phi_1 + \phi_2) + i\sin(\phi_1 + \phi_2)).$ - -;Further generalizations - -See Multiplication in group theory, above, and Multiplicative group, which for example includes matrix multiplication. A very general, and abstract, concept of multiplication is as the "multiplicatively denoted" (second) binary operation in a ring. An example of a ring that is not any of the above number systems is a polynomial ring (you can add and multiply polynomials, but polynomials are not numbers in any usual sense.) - -;Division - -Often division, $\frac{x}{y}$, is the same as multiplication by an inverse, $x\left(\frac{1}{y}\right)$. Multiplication for some types of "numbers" may have corresponding division, without inverses; in an integral domain x may have no inverse "$\frac{1}{x}$" but $\frac{x}{y}$ may be defined. In a division ring there are inverses, but $\frac{x}{y}$ may be ambiguous in non-commutative rings since $x\left(\frac{1}{y}\right)$ need not be the same as $\left(\frac{1}{y}\right)x$. - -When multiplication is repeated, the resulting operation is known as exponentiation. For instance, the product of three factors of two (2×2×2) is "two raised to the third power", and is denoted by 23, a two with a superscript three. In this example, the number two is the base, and three is the exponent. In general, the exponent (or superscript) indicates how many times the base appears in the expression, so that the expression -$$ -a^n = \underbrace{a\times a \times \cdots \times a}_n -$$ - -indicates that n copies of the base a are to be multiplied together. This notation can be used whenever multiplication is known to be power associative. diff --git a/wiki/wikipedia/3188.txt b/wiki/wikipedia/3188.txt deleted file mode 100644 index ab2c914fb9319531893b1badfea66257dcd2b169..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3188.txt +++ /dev/null @@ -1,11 +0,0 @@ -In differential geometry, the Minkowski problem, named after Hermann Minkowski, asks for the construction of a strictly convex compact surface S whose Gaussian curvature is specified. More precisely, the input to the problem is a strictly positive real function ƒ defined on a sphere, and the surface that is to be constructed should have Gaussian curvature ƒ(n(x)) at the point x, where n(x) denotes the normal to S at x. Eugenio Calabi stated: "From the geometric view point it [the Minkowski problem] is the Rosetta Stone, from which several related problems can be solved." - -In full generality, the Minkowski problem asks for necessary and sufficient conditions on a non-negative Borel measure on the unit sphere Sn-1 to be the surface area measure of a convex body in $\mathbb{R}^n$. Here the surface area measure SK of a convex body K is the pushforward of the (n-1)-dimensional Hausdorff measure restricted to the boundary of K via the Gauss map. The Minkowski problem was solved by Hermann Minkowski, Aleksandr Danilovich Aleksandrov, Werner Fenchel and Børge Jessen: a Borel measure μ on the unit sphere is the surface area measure of a convex body if and only if μ has centroid at the origin and is not concentrated on a great subsphere. The convex body is then uniquely determined by μ up to translations. 
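For convex polytopes the surface area measure is concrete: it is a finite sum of point masses, one of weight $A_i$ (the facet area) at each outward unit facet normal $u_i$, and the centroid condition of the theorem becomes $\sum_i A_i u_i = 0$. A minimal numerical check (my illustration, using an axis-aligned 1 × 2 × 3 box):

```
import numpy as np

# Outward unit normals and facet areas of a 1 x 2 x 3 box.
normals = np.array([[ 1, 0, 0], [-1, 0, 0],
                    [ 0, 1, 0], [ 0, -1, 0],
                    [ 0, 0, 1], [ 0, 0, -1]], dtype=float)
areas = np.array([6.0, 6.0, 3.0, 3.0, 2.0, 2.0])

# Centroid condition of the surface area measure: sum_i A_i u_i = 0.
print(areas @ normals)  # [0. 0. 0.]
```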
- -The Minkowski problem, despite its clear geometric origin, turns out to appear in many other settings. The problem of radiolocation reduces readily to the Minkowski problem in Euclidean 3-space: the reconstruction of a convex shape from its given Gaussian surface curvature. The inverse problem of short-wave diffraction also reduces to the Minkowski problem. The Minkowski problem is the basis of the mathematical theory of diffraction as well as of the physical theory of diffraction. - -In 1953 Louis Nirenberg published the solutions of two long-standing open problems, the Weyl problem and the Minkowski problem in Euclidean 3-space. Nirenberg's solution of the Minkowski problem was a milestone in global geometry. He was selected as the first recipient of the Chern Medal (in 2010) for his role in the formulation of the modern theory of non-linear elliptic partial differential equations, particularly for solving the Weyl problem and the Minkowski problem in Euclidean 3-space. - -A. V. Pogorelov received the Ukrainian State Prize (1973) for resolving the multidimensional Minkowski problem in Euclidean spaces. Pogorelov resolved the Weyl problem in Riemannian space in 1969. - -Shing-Tung Yau's joint work with Shiu-Yuen Cheng gives a complete proof of the higher-dimensional Minkowski problem in Euclidean spaces. Shing-Tung Yau received the Fields Medal at the International Congress of Mathematicians in Warsaw in 1982 for his work in global differential geometry and elliptic partial differential equations, particularly for solving such difficult problems as the Calabi conjecture of 1954, and a problem of Hermann Minkowski in Euclidean spaces concerning the Dirichlet problem for the real Monge–Ampère equation. diff --git a/wiki/wikipedia/3189.txt b/wiki/wikipedia/3189.txt deleted file mode 100644 index b5c53dae057c551f70fb7df2562f55aa7c30cb2c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3189.txt +++ /dev/null @@ -1,31 +0,0 @@ -In queueing theory, a discipline within the mathematical theory of probability, the Gordon–Newell theorem is an extension of Jackson's theorem from open queueing networks to closed queueing networks of exponential servers where customers cannot leave the network. Jackson's theorem cannot be applied to closed networks because the queue length at a node in the closed network is limited by the population of the network. The Gordon–Newell theorem calculates the open network solution and then eliminates the infeasible states by renormalizing the probabilities. Calculation of the normalizing constant makes the treatment more awkward as the whole state space must be enumerated. Buzen's algorithm or mean value analysis can be used to calculate the normalizing constant more efficiently. - -A network of m interconnected queues is known as a Gordon–Newell network or closed Jackson network if it meets the following conditions: - -# the network is closed (no customers can enter or leave the network), - -# all service times are exponentially distributed and the service discipline at all queues is FCFS, - -# a customer completing service at queue i will move to queue j with probability $P_{ij}$, with the $P_{ij}$ such that $\sum_{j =1}^m P_{ij} = 1$, - -# the utilization of all of the queues is less than one.
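The theorem stated next gives the equilibrium distribution in product form. As a concrete illustration (an assumed two-queue cyclic network, not an example from the article), the following sketch computes the visit ratios, the normalizing constant by brute-force enumeration of the state space, and the equilibrium probabilities; Buzen's algorithm, mentioned above, computes the constant without such an enumeration:

```
import itertools
import numpy as np

P = np.array([[0.0, 1.0],      # routing matrix of a 2-queue cyclic network
              [1.0, 0.0]])
mu = np.array([2.0, 3.0])      # service rates
K, m = 3, 2                    # population and number of queues

# Visit ratios e = e P (fixed up to scale): leading left eigenvector of P.
w, V = np.linalg.eig(P.T)
e = np.real(V[:, np.argmax(np.real(w))])
e = e / e[0]

states = [k for k in itertools.product(range(K + 1), repeat=m) if sum(k) == K]
weights = {k: np.prod((e / mu) ** np.array(k)) for k in states}
G = sum(weights.values())      # normalizing constant G(K)
for k in states:
    print(k, weights[k] / G)   # equilibrium probabilities, summing to 1
```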
- -In a closed Gordon-Newell network of m queues, with a total population of K individuals, write $\scriptstyle{(k_1,k_2,\ldots,k_m)}$ (where ki is the length of queue i) for the state of the network and S(K, m) for the state space -$$ -S(K,m) = \left\{ \mathbf{k} \in \mathbb{N}^m \text{ such that } \sum_{i=1}^m k_i = K \right\}. -$$ - -Then the equilibrium state probability distribution exists and is given by -$$ -\pi (k_1,k_2,\ldots,k_m) = \frac{1}{G(K)} \prod_{i=1}^m \left( \frac{e_i}{\mu_i} \right)^{k_i} -$$ - -where service times at queue i are exponentially distributed with parameter μi. The normalizing constant G(K) is given by -$$ -G(K) = \sum_{\mathbf{k} \in S(K,m)} \prod_{i=1}^{m} \left( \frac{e_i}{\mu_i} \right)^{k_i} , -$$ - -and ei is the visit ratio, calculated by solving the simultaneous equations -$$ -e_i = \sum_{j=1}^m e_j p_{ji} \text{ for }1 \leq i \leq m. -$$ diff --git a/wiki/wikipedia/319.txt b/wiki/wikipedia/319.txt deleted file mode 100644 index 6d8148f8a9bb79e90349b77e1e2ce7f057b17fe8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/319.txt +++ /dev/null @@ -1,125 +0,0 @@ -In mathematics, the Hodge conjecture is a major unsolved problem in algebraic geometry and complex geometry that relates the algebraic topology of a non-singular complex algebraic variety to its subvarieties. - -In simple terms, the Hodge conjecture asserts that the basic topological information like the number of holes in certain geometric spaces, complex algebraic varieties, can be understood by studying the possible nice shapes sitting inside those spaces, which look like zero sets of polynomial equations. The latter objects can be studied using algebra and the calculus of analytic functions, and this allows one to indirectly understand the broad shape and structure of often higher-dimensional spaces which can not be otherwise easily visualized. - -More specifically, the conjecture states that certain de Rham cohomology classes are algebraic; that is, they are sums of Poincaré duals of the homology classes of subvarieties. It was formulated by the Scottish mathematician William Vallance Douglas Hodge as a result of a work in between 1930 and 1940 to enrich the description of de Rham cohomology to include extra structure that is present in the case of complex algebraic varieties. It received little attention before Hodge presented it in an address during the 1950 International Congress of Mathematicians, held in Cambridge, Massachusetts. The Hodge conjecture is one of the Clay Mathematics Institute's Millennium Prize Problems, with a prize of $1,000,000 for whoever can prove or disprove the Hodge conjecture. - -Let X be a compact complex manifold of complex dimension n. Then X is an orientable smooth manifold of real dimension $2n$, so its cohomology groups lie in degrees zero through $2n$. Assume X is a Kähler manifold, so that there is a decomposition on its cohomology with complex coefficients -$$ -H^k(X, \Complex) = \bigoplus_{p+q=k} H^{p,q}(X), -$$ - -where $H^{p,q}(X)$ is the subgroup of cohomology classes which are represented by harmonic forms of type $(p,q)$. That is, these are the cohomology classes represented by differential forms which, in some choice of local coordinates $z_1, \ldots, z_n$, can be written as a harmonic function times -$$ -dz_{i_1} \wedge \cdots \wedge dz_{i_p} \wedge d\bar z_{j_1} \wedge \cdots \wedge d\bar z_{j_q}. -$$ - -(See Hodge theory for more details.) 
Taking wedge products of these harmonic representatives corresponds to the cup product in cohomology, so the cup product is compatible with the Hodge decomposition: -$$ -\cup \colon H^{p,q}(X) \times H^{p',q'}(X) \rightarrow H^{p+p',q+q'}(X). -$$ - -Since X is a compact oriented manifold, X has a fundamental class. - -Let Z be a complex submanifold of X of dimension k, and let $i\colon Z\to X$ be the inclusion map. Choose a differential form $\alpha$ of type $(p,q)$. We can integrate $\alpha$ over Z: -$$ -\int_Z i^*\alpha. -$$ - -To evaluate this integral, choose a point of Z and call it 0. Around 0, we can choose local coordinates $z_1, \ldots, z_k$ on X such that Z is just $z_{k+1} = \cdots = z_n = 0$. If $p>k$, then $\alpha$ must contain some $dz_i$ where $z_i$ pulls back to zero on Z. The same is true if $q > k$. Consequently, this integral is zero if $(p,q) \ne (k,k)$. - -More abstractly, the integral can be written as the cap product of the homology class of Z and the cohomology class represented by $\alpha$. By Poincaré duality, the homology class of Z is dual to a cohomology class which we will call [Z], and the cap product can be computed by taking the cup product of [Z] and α and capping with the fundamental class of X. Because [Z] is a cohomology class, it has a Hodge decomposition. By the computation we did above, if we cup this class with any class of type $(p,q) \ne (k,k)$, then we get zero. Because $H^{2n}(X, \Complex) = H^{n,n}(X)$, we conclude that [Z] must lie in $H^{n-k,n-k}(X)$. Loosely speaking, the Hodge conjecture asks: - -Which cohomology classes in $H^{k,k}(X)$ come from complex subvarieties Z? - -Let: -$$ -\operatorname{Hdg}^k(X) = H^{2k}(X, \Q) \cap H^{k,k}(X). -$$ - -We call this the group of Hodge classes of degree 2k on X. - -The modern statement of the Hodge conjecture is: - -Hodge conjecture. Let X be a non-singular complex projective manifold. Then every Hodge class on X is a linear combination with rational coefficients of the cohomology classes of complex subvarieties of X. - -A projective complex manifold is a complex manifold which can be embedded in complex projective space. Because projective space carries a Kähler metric, the Fubini–Study metric, such a manifold is always a Kähler manifold. By Chow's theorem, a projective complex manifold is also a smooth projective algebraic variety, that is, it is the zero set of a collection of homogeneous polynomials. - -Another way of phrasing the Hodge conjecture involves the idea of an algebraic cycle. An algebraic cycle on X is a formal combination of subvarieties of X; that is, it is something of the form: -$$ -\sum_i c_iZ_i. -$$ - -The coefficients are usually taken to be integral or rational. We define the cohomology class of an algebraic cycle to be the sum of the cohomology classes of its components. This is an example of the cycle class map of de Rham cohomology, see Weil cohomology. For example, the cohomology class of the above cycle would be: -$$ -\sum_i c_i[Z_i]. -$$ - -Such a cohomology class is called algebraic. With this notation, the Hodge conjecture becomes: - -Let X be a projective complex manifold. Then every Hodge class on X is algebraic. - -The assumption in the Hodge conjecture that X be algebraic (projective complex manifold) cannot be weakened. In 1977, Steven Zucker showed that it is possible to construct a counterexample to the Hodge conjecture as complex tori with analytic rational cohomology of type $(p,p)$, which is not projective algebraic. 
(see appendix B of Zucker) - -The first result on the Hodge conjecture is due to Lefschetz. In fact, it predates the conjecture and provided some of Hodge's motivation. - -Theorem (Lefschetz theorem on (1,1)-classes) Any element of $H^2(X,\Z)\cap H^{1,1}(X)$ is the cohomology class of a divisor on $X$. In particular, the Hodge conjecture is true for $H^2$. - -A very quick proof can be given using sheaf cohomology and the exponential exact sequence. (The cohomology class of a divisor turns out to equal to its first Chern class.) Lefschetz's original proof proceeded by normal functions, which were introduced by Henri Poincaré. However, the Griffiths transversality theorem shows that this approach cannot prove the Hodge conjecture for higher codimensional subvarieties. - -By the Hard Lefschetz theorem, one can prove: - -Theorem. If the Hodge conjecture holds for Hodge classes of degree $p$, for all $p < n$, then the Hodge conjecture holds for Hodge classes of degree $2n-p$. - -Combining the above two theorems implies that Hodge conjecture is true for Hodge classes of degree $2n-2$. This proves the Hodge conjecture when $X$ has dimension at most three. - -The Lefschetz theorem on (1,1)-classes also implies that if all Hodge classes are generated by the Hodge classes of divisors, then the Hodge conjecture is true: - -Corollary. If the algebra $\operatorname{Hdg}^*(X) = \bigoplus\nolimits_k \operatorname{Hdg}^k(X)$ is generated by $\operatorname{Hdg}^1(X)$, then the Hodge conjecture holds for $X$. - -By the strong and weak Lefschetz theorem, the only non-trivial part of the Hodge conjecture for hypersurfaces is the degree m part (i.e., the middle cohomology) of a 2m-dimensional hypersurface $X \subset \mathbf P^{2m+1}$. If the degree d is 2, i.e., X is a quadric, the Hodge conjecture holds for all m. For $m = 2$, i.e., fourfolds, the Hodge conjecture is known for $d \le 5$. - -For most abelian varieties, the algebra Hdg*(X) is generated in degree one, so the Hodge conjecture holds. In particular, the Hodge conjecture holds for sufficiently general abelian varieties, for products of elliptic curves, and for simple abelian varieties of prime dimension. However, Mumford constructed an example of an abelian variety where Hdg2(X) is not generated by products of divisor classes. Weil generalized this example by showing that whenever the variety has complex multiplication by an imaginary quadratic field, then Hdg2(X) is not generated by products of divisor classes. Moonen proved that in dimension less than 5, either Hdg*(X) is generated in degree one, or the variety has complex multiplication by an imaginary quadratic field. In the latter case, the Hodge conjecture is only known in special cases. - -Hodge's original conjecture was: - -Integral Hodge conjecture. Let X be a projective complex manifold. Then every cohomology class in $H^{2k}(X, \Z) \cap H^{k,k}(X)$ is the cohomology class of an algebraic cycle with integral coefficients on X. - -This is now known to be false. The first counterexample was constructed by Atiyah. Using K-theory, they constructed an example of a torsion cohomology class—that is, a cohomology class α such that nα = 0 for some positive integer n—which is not the class of an algebraic cycle. Such a class is necessarily a Hodge class. Totaro reinterpreted their result in the framework of cobordism and found many examples of such classes. - -The simplest adjustment of the integral Hodge conjecture is: - -Integral Hodge conjecture modulo torsion. 
Let X be a projective complex manifold. Then every cohomology class in $H^{2k}(X, \Z) \cap H^{k,k}(X)$ is the sum of a torsion class and the cohomology class of an algebraic cycle with integral coefficients on X. - -Equivalently, after dividing $H^{2k}(X, \Z) \cap H^{k,k}(X)$ by torsion classes, every class is the image of the cohomology class of an integral algebraic cycle. This is also false. Kollár found an example of a Hodge class α which is not algebraic, but which has an integral multiple which is algebraic. - -Rosenschon and Srinivas have shown that in order to obtain a correct integral Hodge conjecture, one needs to replace Chow groups, which can also be expressed as motivic cohomology groups, by a variant known as étale (or Lichtenbaum) motivic cohomology. They show that the rational Hodge conjecture is equivalent to an integral Hodge conjecture for this modified motivic cohomology. - -A natural generalization of the Hodge conjecture would ask: - -Hodge conjecture for Kähler varieties, naive version. Let X be a complex Kähler manifold. Then every Hodge class on X is a linear combination with rational coefficients of the cohomology classes of complex subvarieties of X. - -This is too optimistic, because there are not enough subvarieties to make this work. A possible substitute is to ask instead one of the two following questions: - -Hodge conjecture for Kähler varieties, vector bundle version. Let X be a complex Kähler manifold. Then every Hodge class on X is a linear combination with rational coefficients of Chern classes of vector bundles on X. - -Hodge conjecture for Kähler varieties, coherent sheaf version. Let X be a complex Kähler manifold. Then every Hodge class on X is a linear combination with rational coefficients of Chern classes of coherent sheaves on X. - -Voisin proved that the Chern classes of coherent sheaves give strictly more Hodge classes than the Chern classes of vector bundles and that the Chern classes of coherent sheaves are insufficient to generate all the Hodge classes. Consequently, the only known formulations of the Hodge conjecture for Kähler varieties are false. - -Hodge made an additional, stronger conjecture than the integral Hodge conjecture. Say that a cohomology class on X is of co-level c (coniveau c) if it is the pushforward of a cohomology class on a c-codimensional subvariety of X. The cohomology classes of co-level at least c filter the cohomology of X, and it is easy to see that the cth step of the filtration, $N^cH^k(X, \mathbf{Z})$, satisfies -$$ -N^cH^k(X, \mathbf{Z}) \subseteq H^k(X, \mathbf{Z}) \cap (H^{k-c,c}(X) \oplus\cdots\oplus H^{c,k-c}(X)). -$$ - -Hodge's original statement was: - -Generalized Hodge conjecture, Hodge's version. $N^cH^k(X, \mathbf{Z}) = H^k(X, \mathbf{Z}) \cap (H^{k-c,c}(X) \oplus\cdots\oplus H^{c,k-c}(X)).$ - -Grothendieck observed that this cannot be true, even with rational coefficients, because the right-hand side is not always a Hodge structure. His corrected form of the Hodge conjecture is: - -Generalized Hodge conjecture. $N^cH^k(X, \mathbf{Q})$ is the largest sub-Hodge structure of $H^k(X, \mathbf{Q})$ contained in $H^{k-c,c}(X) \oplus\cdots\oplus H^{c,k-c}(X).$ - -This version is open. - -The strongest evidence in favor of the Hodge conjecture is the algebraicity result of Cattani, Deligne & Kaplan. Suppose that we vary the complex structure of X over a simply connected base. Then the topological cohomology of X does not change, but the Hodge decomposition does change.
It is known that if the Hodge conjecture is true, then the locus of all points on the base where the cohomology of a fiber is a Hodge class is in fact an algebraic subset, that is, it is cut out by polynomial equations. Cattani, Deligne & Kaplan (1995) proved that this is always true, without assuming the Hodge conjecture. diff --git a/wiki/wikipedia/3190.txt b/wiki/wikipedia/3190.txt deleted file mode 100644 index a0a0c7dbd60e861e06d81bbdab25df3e56dd4476..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3190.txt +++ /dev/null @@ -1,109 +0,0 @@ -Tandem Computers, Inc. was the dominant manufacturer of fault-tolerant computer systems for ATM networks, banks, stock exchanges, telephone switching centers, and other similar commercial transaction processing applications requiring maximum uptime and zero data loss. The company was founded by Jimmy Treybig in 1974 in Cupertino, California. It remained independent until 1997, when it became a server division within Compaq. It is now a server division within Hewlett Packard Enterprise, following Hewlett-Packard's acquisition of Compaq and the split of Hewlett Packard into HP Inc. and Hewlett Packard Enterprise. - -Tandem's NonStop systems use a number of independent identical processors and redundant storage devices and controllers to provide automatic high-speed "failover" in the case of a hardware or software failure. To contain the scope of failures and of corrupted data, these multi-computer systems have no shared central components, not even main memory. Conventional multi-computer systems all use shared memories and work directly on shared data objects. Instead, NonStop processors cooperate by exchanging messages across a reliable fabric, and software takes periodic snapshots for possible rollback of program memory state. - -Besides handling failures well, this "shared-nothing" messaging system design also scales extremely well to the largest commercial workloads. Each doubling of the total number of processors would double system throughput, up to the maximum configuration of 4000 processors. In contrast, the performance of conventional multiprocessor systems is limited by the speed of some shared memory, bus, or switch. Adding more than 4–8 processors in that manner gives no further system speedup. NonStop systems have more often been bought to meet scaling requirements than for extreme fault tolerance. They compete well against IBM's largest mainframes, despite being built from simpler minicomputer technology. - -Tandem Computers was founded in 1974 by James (Jimmy) Treybig. Treybig first saw the market need for fault tolerance in OLTP (online transaction processing) systems while running a marketing team for Hewlett Packard's HP 3000 computer division, but HP was not interested in developing for this niche. He then joined the venture capital firm Kleiner & Perkins and developed the Tandem business plan there. Treybig pulled together a core engineering team hired away from the HP 3000 division: Mike Green, Jim Katzman, Dave Mackie and Jack Loustaunou. Their business plan called for ultra-reliable systems that never had outages and never lost or corrupted data. These were modular in a new way that was safe from all "single-point failures", yet would be only marginally more expensive than conventional non-fault-tolerant systems. They would be less expensive and support more throughput than some existing ad-hoc toughened systems, which used redundancy but usually required idle "hot spares".
- -Each engineer was confident they could quickly pull off their own part of this tricky new design, but doubted that others' areas could be worked out. The parts of the hardware and software design that did not have to be different were largely based on incremental improvements to the familiar hardware and software designs of the HP 3000. Many subsequent engineers and programmers also came from HP. Tandem headquarters in Cupertino, California, were a quarter mile away from the HP offices. Initial venture capital investment in Tandem Computers came from Tom Perkins, who was formerly a general manager of the HP 3000 division. - -The business plan included detailed ideas for building a unique corporate culture reflecting Treybig's values. - -The design of the initial Tandem/16 hardware was completed in 1975 and the first system shipped to Citibank in May 1976. - -The company enjoyed uninterrupted exponential growth up through 1983. Inc. magazine ranked Tandem as the fastest-growing public company in America. - -Over 40 years, Tandem's main NonStop product line has grown and evolved in an upward-compatible way from the initial T/16 fault-tolerant system, with three major changes to date to its top-level modular architecture or its programming-level instruction set architecture. Within each series, there have been several major re-implementations as chip technology progressed. - -While conventional systems of the era, including large mainframes, had mean-time-between-failures (MTBF) on the order of a few days, the NonStop system was designed to have failure intervals 100 times longer, with uptimes measured in years. Nevertheless, the NonStop was designed to be price-competitive with conventional systems, with a simple 2-CPU system priced at just over twice that of a competing single-processor mainframe, as opposed to four or more times for other fault-tolerant solutions. - -The first system was the Tandem/16 or T/16, later re-branded NonStop I. The machine consisted of between two and 16 CPUs, organized as a fault-tolerant computer cluster packaged in a single rack. Each CPU had its own private, unshared memory, its own I/O processor, its own private I/O bus to connect to I/O controllers, and dual connections to all the other CPUs over a custom inter-CPU backplane bus called Dynabus. Each disk controller or network controller was duplicated and had dual connections to both CPUs and devices. Each disk was mirrored, with separate connections to two independent disk controllers. If a disk failed, its data was still available from its mirrored copy. If a CPU or controller or bus failed, the disk was still reachable through an alternative CPU, controller, or bus. Each disk or network controller was connected to two independent CPUs. Power supplies were each wired to only one side of some pair of CPUs, controllers, or buses, so that the system would keep running well without loss of connections if one power supply failed. The careful, complex arrangement of parts and connections in customers' larger configurations was documented in a Mackie diagram, named after lead salesman David Mackie who invented the notation. - -None of these duplicated parts were wasted "hot spares"; everything added to system throughput during normal operations. - -Besides recovering well from failed parts, the T/16 was also designed to detect as many kinds of intermittent failures as possible, as soon as possible. This prompt detection is called "fail fast".
The point was to find and isolate corrupted data before it was permanently written into databases and other disk files. In the T/16, error detection was by some added custom circuits that added little cost to the total design; no major parts were duplicated just to get error detection. - -The T/16 CPU was a proprietary design. It was greatly influenced by the HP 3000 minicomputer. They were both microprogrammed, 16-bit, stack-based machines with segmented, 16-bit virtual addressing. Both were intended to be programmed exclusively in high-level languages, with no use of assembler. Both were initially implemented via standard low-density TTL chips, each holding a 4-bit slice of the 16-bit ALU. Both had a small number of top-of-stack, 16-bit data registers plus some extra address registers for accessing the memory stack. Both used Huffman encoding of operand address offsets, to fit a large variety of address modes and offset sizes into the 16-bit instruction format with very good code density. Both relied heavily on pools of indirect addresses to overcome the short instruction format. Both supported larger 32- and 64-bit operands via multiple ALU cycles, and memory-to-memory string operations. Both used "big-endian" addressing of long versus short memory operands. These features had all been inspired by Burroughs B5500-B6800 mainframe stack machines. - -The T/16 instruction set changed several features from the HP 3000 design. The T/16 supported paged virtual memory from the beginning. The HP 3000 series did not add paging until the PA-RISC generation, 10 years later (although MPE V had a form of paging via the APL firmware, in 1978). Tandem added support for 32-bit addressing in its second machine; HP 3000 lacked this until its PA-RISC generation. Paging and long addresses were critical for supporting complex system software and large applications. The T/16 treated its top-of-stack registers in a novel way; the compiler, not the microcode, was responsible for deciding when full registers were spilled to the memory stack and when empty registers were re-filled from the memory stack. On the HP 3000, this decision took extra microcode cycles in every instruction. The HP 3000 supported COBOL with several instructions for calculating directly on arbitrary-length BCD (binary-coded decimal) strings of digits. The T/16 simplified this to single instructions for converting between BCD strings and 64-bit binary integers. - -In the T/16, each CPU consisted of two boards of TTL logic and SRAMs, and ran at about 0.7 MIPS. At any instant, it could access only four virtual memory segments (System Data, System Code, User Data, User Code), each limited to 128 kB in size. The 16-bit address spaces were already too small for major applications when it shipped. - -The first release of T/16 had only a single programming language, Transaction Application Language (TAL). This was an efficient machine-dependent systems programming language (for operating systems, compilers, etc.) but could also be used for non-portable applications. It was derived from HP 3000's System Programming Language (SPL). Both had semantics similar to C but a syntax based on Burroughs' ALGOL. Subsequent releases added support for Cobol74, Fortran, and MUMPS. - -The Tandem NonStop series ran a custom operating system which was significantly different from Unix or HP 3000's MPE.
It was initially called T/TOS (Tandem Transactional Operating System) but soon named Guardian for its ability to protect all data from machine faults or software faults. In contrast to all other commercial operating systems, Guardian was based on message passing as the basic way for all processes to interact, without shared memory, regardless of where the processes were running. This approach easily scaled to multiple-computer clusters and helped isolate corrupted data before it propagated. - -All file system processes and all transactional application processes were structured as master/slave pairs of processes running in separate CPUs. The slave process periodically took snapshots of the master's memory state, and took over the workload if and when the master process ran into trouble. This allowed the application to survive failures in any CPU or its associated devices, without data loss. It further allowed recovery from some intermittent-style software failures. Between failures, the monitoring by the slave process added some performance overhead but this was far less than the 100% duplication in other system designs. Some major early applications were directly coded in this checkpoint style, but most instead used various Tandem software layers which hid the details of this in a semi-portable way. - -In 1981, all T/16 CPUs were replaced by the NonStop II. Its main difference from the T/16 was support for occasional 32-bit addressing via a user-switchable "extended data segment". This supported the next ten years of growth in software and was a huge advantage over the T/16 or HP 3000. Unfortunately, visible registers remained 16-bit, and this unplanned addition to the instruction set required executing many instructions per memory reference compared to most 32-bit minicomputers. All subsequent TNS computers were hampered by this instruction set inefficiency. Also, the NonStop II lacked wider internal data paths and so required additional microcode steps for 32-bit addresses. A NonStop II CPU had three boards, using chips and design similar to the T/16. The NonStop II also replaced core memory with battery-backed DRAM memory. - -In 1983, the NonStop TXP CPU was the first entirely new implementation of the TNS instruction set architecture. It was built from standard TTL chips and Programmed Array Logic chips, with four boards per CPU module. It had Tandem's first use of cache memory. It had a more direct implementation of 32-bit addressing, but still sent them through 16-bit adders. A wider microcode store allowed a major reduction in the cycles executed per instruction; speed increased to 2.0 MIPS. It used the same rack packaging, controllers, backplane, and buses as before. The Dynabus and I/O buses had been overdesigned in the T/16 so they would work for several generations of upgrades. - -Up to 14 TXP and NonStop II systems could now be combined via FOX, a long-distance fault-tolerant fibre optic bus for connecting TNS clusters across a business campus; a cluster of clusters with a total of 224 CPUs. This allowed further scale-up for taking on the largest mainframe applications. Like the CPU modules within the computers, Guardian could failover entire task sets to other machines in the network. Worldwide clusters of 4000 CPUs could also be built via conventional long-haul network links. - -In 1986, Tandem introduced a third generation CPU, the NonStop VLX. It had 32-bit datapaths, wider microcode, 12 MHz cycle time, and a peak rate of one instruction per microcycle. 
It was built from three boards of ECL gate array chips (with TTL pinout). It had a revised Dynabus with speed raised to 20 Mbytes/sec per link, 40 Mbytes/sec total. FOX II increased the physical diameter of TNS clusters to 4 kilometers. - -Tandem's initial database support was only for hierarchical, non-relational databases via the ENSCRIBE file system. This was extended into a relational database called ENCOMPASS. In 1986 Tandem introduced the first fault-tolerant SQL database, NonStop SQL. Developed totally in-house, NonStop SQL includes a number of features based on Guardian to ensure data validity across nodes. NonStop SQL is famous for scaling linearly in performance with the number of nodes added to the system, whereas most databases had performance that plateaued quite quickly, often after just two CPUs. A later version released in 1989 added transactions that could be spread over nodes, a feature that remained unique for some time. NonStop SQL continued to evolve, first as SQL/MP and then SQL/MX, which transitioned from Tandem to Compaq to HP. The code remains in use in both HP's SQL/MX and the Apache Trafodion project. - -In 1987 Tandem introduced the NonStop CLX, a low-cost less-expandable minicomputer system. Its role was for growing the low end of the fault-tolerant market, and for deploying on the remote edges of large Tandem networks. Its initial performance was roughly similar to the TXP; later versions were about 20% slower than a VLX. Its small cabinet could be installed into any "copier room" office environment. A CLX CPU was one board, containing six "compiled silicon" ASIC CMOS chips. The CPU core chip was duplicated and lock stepped for maximal error detection. Pinout was a main limitation of this chip technology. Microcode, cache, and TLB were all external to the CPU core and shared a single bus and single SRAM memory bank. As a result, CLX required at least two machine cycles per instruction. - -In 1989 Tandem introduced the NonStop Cyclone, a fast but expensive system for the mainframe end of the market. Each self-checking CPU took three boards full of hot-running ECL gate array chips, plus memory boards. Despite being microprogrammed, the CPU was superscalar, often completing two instructions per cache cycle. This was accomplished by having a separate microcode routine for every common pair of instructions. That fused pair of stack instructions generally accomplished the same work as a single instruction of normal 32-bit minicomputers. Cyclone processors were packaged as sections of four CPUs each, and the sections joined by a fiber optic version of Dynabus. - -Like Tandem's prior high-end machines, Cyclone cabinets were styled with much angular black to suggest strength and power. Advertising videos directly compared Cyclone to the Lockheed SR-71 Blackbird Mach 3 spy plane. Cyclone's name was supposed to represent its unstoppable speed in roaring through OLTP workloads. Announcement day was October 17 and the press came to town. That afternoon, the region was struck by the magnitude 6.9 Loma Prieta earthquake, causing freeway collapses in Oakland and major fires in San Francisco. Tandem offices were shaken, but no one was badly hurt on site. This was the first and last time that Tandem named its products after a natural disaster. - -In 1980–1983, Tandem attempted to re-design its entire hardware and software stack to put its NonStop methods on a stronger foundation than its inherited HP 3000 traits. 
Rainbow's hardware was a 32-bit register-file machine that aimed to be better than a VAX. For reliable programming, the main programming language was "TPL", a subset of Ada. At that time, people barely understood how to compile Ada to unoptimized code. There was no migration path for existing NonStop system software coded in TAL. The OS and database and Cobol compilers were entirely redesigned. Customers would see it as a totally disjoint product line requiring all-new software from them. The software side of this ambitious project took much longer than planned. The hardware was already obsolete and out-performed by TXP before its software was ready, so the Rainbow project was abandoned. All subsequent efforts emphasized upward compatibility and easy migration paths. - -Development of Rainbow's advanced client/server application development framework called "Crystal" continued awhile longer and was spun off as the "Ellipse" product of Cooperative Systems Inc. - -In 1985, Tandem attempted to grab a piece of the rapidly growing personal computer market with its introduction of the MS-DOS based Dynamite PC/workstation. Sadly, numerous design compromises (including a unique 8086-based hardware platform incompatible with expansion cards of the day and extremely limited compatibility with IBM-based PCs) relegated the Dynamite to serving primarily as a smart terminal. It was quietly and quickly withdrawn from the market. - -Tandem's message-based NonStop operating system had advantages for scaling, extreme reliability, and efficiently using expensive "spare" resources. But many potential customers wanted just good-enough reliability in a small system, using a familiar Unix operating system and industry-standard programs. Tandem's various fault-tolerant competitors all adopted a simpler hardware-only memory-centric design where all recovery was done by switching between hot spares. The most successful competitor was Stratus Technologies, whose machines were re-marketed by IBM as "IBM System/88". - -In such systems, the spare processors do not contribute to system throughput between failures, but merely redundantly execute exactly the same data thread as the active processor at the same instant, in "lock step". Faults are detected by seeing when the cloned processors' outputs diverged. To detect failures, the system must have two physical processors for each logical, active processor. To also implement automatic failover recovery, the system must have three or four physical processors for each logical processor. The triple or quadruple cost of this sparing is practical when the duplicated parts are commodity single-chip microprocessors. - -Tandem's products for this market began with the Integrity line in 1989, using MIPS processors and a "NonStop UX" variant of Unix. It was developed in Austin TX. In 1991, the Integrity S2 used TMR, Triple Modular Redundancy, where each logical CPU used three MIPS R2000 microprocessors to execute the same data thread, with voting to find and lock out a failed part. Their fast clocks could not be synchronized as in strict lock stepping, so voting instead happened at each interrupt. Some other version of Integrity used 4x "pair and spares" redundancy. Pairs of processors ran in lock-step to check each other. When they disagreed, both processors were marked untrusted and their workload was taken over by a hot-spare pair of processors whose state was already current. 
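The voting scheme just described for the Integrity S2 is simple to state in code. The following is a minimal, hypothetical Python sketch of triple modular redundancy with majority voting at a synchronization point; it illustrates the general technique, not Tandem's implementation.

```python
# Hypothetical sketch of TMR (triple modular redundancy): three
# replicas compute the same step; the voter takes the majority
# result and reports any replica that disagreed so it can be
# locked out, as in the vote taken at each interrupt.
from collections import Counter

def vote(outputs):
    """Return (majority_value, indices_of_disagreeing_replicas)."""
    majority, count = Counter(outputs).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one replica failed")
    return majority, [i for i, v in enumerate(outputs) if v != majority]

def step(x):
    return x * x + 1                 # the replicated computation

# Replica 1 suffers a transient bit flip at this step.
outputs = [step(7), step(7) ^ 0x40, step(7)]
value, faulty = vote(outputs)
print(value, "faulty:", faulty)      # 50 faulty: [1]
```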
In 1995, the Integrity S4000 was the first to use ServerNet and moved toward sharing peripherals with the NonStop line. - -In 1995–1997, Tandem partnered with Microsoft to implement high-availability features and advanced SQL configurations in clusters of commodity Windows NT machines. This project was called "Wolfpack" and first shipped as Microsoft Cluster Server in 1997. Microsoft benefited greatly from this partnership; Tandem did not. - -When Tandem was formed in 1974, every computer company had to design and build its CPUs from basic circuits, using its own proprietary instruction set and own compilers etc. With each year of semiconductor progress with Moore's Law, more of a CPU's core circuits could fit into single chips, and run faster and much cheaper as a result. But it became increasingly expensive for a computer company to design those advanced custom chips, or build the plants to fabricate the chips. Facing the challenges of this rapidly changing marketplace and manufacturing landscape, Tandem chose to partner with MIPS and adopted its R3000 and successor chipsets and their advanced optimizing compiler. Subsequent NonStop Guardian machines using the MIPS architecture were known to programmers as TNS/R machines, but had a variety of marketing names. - -In 1991, Tandem released the Cyclone/R, also known as CLX/R. This was a low cost mid-range system based on CLX components, but used R3000 microprocessors instead of the much slower CLX stack machine board. To minimize time to market, this machine was initially shipped without any MIPS native-mode software. Everything, including its NSK operating system and SQL database, was compiled to TNS stack machine code. That object code was then translated to equivalent partially optimized MIPS instruction sequences at kernel install time by a tool called the Accelerator. Less-important programs could also be executed directly without pre-translation, via a TNS code interpreter. These migration techniques were very successful and are still in use today. Everyone's software was brought over without extra work, and the performance was good enough for mid-range machines, and programmers could ignore the instruction differences, even when debugging at machine code level. These Cyclone/R machines were updated with a faster native-mode NSK in a follow-up release. - -The R3000 and later microprocessors had only a typical amount of internal error checking, insufficient for Tandem's needs. So the Cyclone/R ran pairs of R3000 processors in lock step, running the same data thread. It used a curious variation of lock stepping. The checker processor ran 1 cycle behind the primary processor. This allowed them to share a single copy of external code and data caches without putting excessive pinout load on the sysbus and lowering the system clock rate. To successfully run microprocessors in lock step, the chips must be designed to be fully deterministic. Any hidden internal state must be cleared by the chip's reset mechanism. Otherwise, the matched chips will sometimes get out of sync for no visible reason and without any faults, long after the chips are restarted. All chip designers agree that these are good principles because it helps them test chips at manufacturing time. But all new microprocessor chips seemed to have bugs in this area, and required months of shared work between MIPS and Tandem to eliminate or work around the final subtle bugs. 
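The determinism requirement described above can be illustrated with a toy software analogy; the Python below is invented for this article and only mimics the hardware phenomenon. A "checker" core carrying hidden state that a reset failed to clear eventually diverges from its lock-stepped partner even though no real fault has occurred.

```python
# Toy illustration of why lock-stepped cores must be fully
# deterministic: hidden state that reset fails to clear causes a
# spurious divergence long after start, with no real fault.
class Core:
    def __init__(self, hidden=0):
        self.acc = 0
        self.hidden = hidden        # internal state the "reset" missed

    def cycle(self, inp):
        if self.hidden and inp % 97 == 42:   # rare internal condition
            self.acc += self.hidden          # hidden state leaks out
        self.acc = (self.acc + inp) & 0xFFFF
        return self.acc

primary, checker = Core(), Core(hidden=1)
for t in range(1000):
    if primary.cycle(t) != checker.cycle(t):
        print("lockstep divergence at cycle", t)   # fires at t == 42
        break
```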
- -In 1993, Tandem released the NonStop Himalaya K-series with the faster MIPS R4400, a native mode NSK, and fully expandable Cyclone system components. These were still connected by Dynabus, Dynabus+, and the original I/O bus, which by now were all running out of performance headroom. - -In 1994, the NonStop Kernel was extended with a Unix-like POSIX environment called Open System Services. The original Guardian shell and ABI remained available. - -In 1997 Tandem introduced the NonStop Himalaya S-Series with a new top-level system architecture based on ServerNet connections. ServerNet replaced the obsolete Dynabus, FOX, and I/O buses. It was much faster, more general, and could be extended to more than just two-way redundancy via an arbitrary fabric of point-to-point connections. Tandem designed ServerNet for its own needs but then promoted its use by others; it evolved into the InfiniBand industry standard. - -All S-Series machines used MIPS processors, including the R4400, R10000, R12000, and R14000. - -The design of the later, faster MIPS cores was primarily funded by Silicon Graphics Inc. But Intel's Pentium Pro overtook the performance of RISC designs, and SGI's graphics business also shrank. After the R10000, there was no investment in significant new MIPS core designs for high-end servers. So Tandem needed to eventually move its NonStop product line yet again onto some other microprocessor architecture with competitive fast chips. - -Jimmy Treybig remained CEO of the company he founded until a downturn in 1996. The next CEO was Roel Pieper, who joined the company in 1996 as president and CEO. Re-branding to promote the company as a true Wintel (Windows/Intel) platform was conducted by its in-house brand and creative team led by Ronald May, who later went on to co-found the Silicon Valley Brand Forum in 1999. The concept worked, and shortly thereafter the company was acquired by Compaq. - -Compaq's x86-based server division was an early outside adopter of Tandem's ServerNet/Infiniband interconnect technology. In 1997, Compaq acquired the Tandem Computers company and NonStop customer base to balance Compaq's heavy focus on low-end PCs. In 1998, Compaq also acquired the much larger Digital Equipment Corporation and inherited its DEC Alpha RISC servers with OpenVMS and Tru64 Unix customer bases. Tandem was then midway through porting its NonStop product line from MIPS R12000 microprocessors to Intel's new Itanium Merced microprocessors. This project was restarted with Alpha as the new target to align NonStop with Compaq's other large server lines. But in 2001, Compaq terminated all Alpha engineering investments in favor of the Itanium microprocessors. - -In 2001, Hewlett Packard similarly made the choice to abandon its successful PA-RISC product lines in favor of Intel's Itanium microprocessors that HP helped to design. Shortly thereafter, Compaq and HP announced their plan to merge and consolidate their similar product lines. This contentious merger became official in May 2002. The consolidations were painful and destroyed the DEC and "HP Way" engineer-oriented cultures, but the combined company did know how to sell complex systems to enterprises at a profit, so it was an improvement for the surviving NonStop division and its customers. - -In some ways, Tandem's journey from HP-inspired start-up, to an HP-inspired competitor, then to an HP division was "bringing Tandem back to its original roots", but this was definitely not the same HP.
- -The re-port of the NSK-based NonStop product line from MIPS processors to Itanium-based processors was finally completed and is branded as "HP Integrity NonStop Servers". (This NSK Integrity NonStop is unrelated to Tandem's original "Integrity" series for Unix.) - -Because it was not possible to run Itanium McKinley chips with clock-level lock stepping, the Integrity NonStop machines instead use comparisons between chip states at longer time scales, at interrupt points and at various software sync points in between interrupts. The intermediate sync points are automatically triggered at every n'th taken branch instruction, and are also explicitly inserted into long loop bodies by all NonStop compilers. The machine design supports both dual and triple redundancy, with either two or three physical microprocessors per logical Itanium processor. The triple version is sold to customers needing the utmost reliability. This new checking approach is called NSAA, NonStop Advanced Architecture. - -As in the earlier migration from stack machines to MIPS microprocessors, all customer software was carried forward without source changes. "Native mode" source code compiled directly to MIPS machine code was simply recompiled for Itanium. Some older "non native" software was still in TNS stack machine form. These were automatically ported onto Itanium via object code translation techniques. - -The people working for Tandem/HP have a long history of porting the kernel onto new hardware. The latest endeavor was to move from Itanium to the Intel x86 architecture. It was completed in 2014 with the first systems already being commercially available. - -The inclusion of the fault-tolerant 4X FDR (Fourteen Data Rate) InfiniBand double-wide switches provides a more than 25-fold increase in system interconnect capacity for responding to business growth. - -NSK Guardian also became the base for the HP Neoview OS, the operating system used in the HP Neoview systems that were tailored for Business Intelligence and Enterprise Data Warehouse use. NonStop SQL/MX was also the starting point for Neoview SQL, which was tailored to Business Intelligence use. The code was also ported to Linux and served as the basis for the Apache Trafodion project. - -* ITUG (International Tandem User Group), now part of Connect (users group) - -* OzTUG, the Australia and New Zealand Tandem Users Group diff --git a/wiki/wikipedia/3191.txt b/wiki/wikipedia/3191.txt deleted file mode 100644 index c2d3e3b40a409f70c080171885ffcceec54d1198..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3191.txt +++ /dev/null @@ -1,17 +0,0 @@ -Reversi is a strategy board game for two players, played on an 8×8 uncheckered board. It was invented in 1883. Othello, a variant with a fixed initial setup of the board, was patented in 1971. - -There are sixty-four identical game pieces called disks (often spelled "discs"), which are light on one side and dark on the other. Players take turns placing disks on the board with their assigned color facing up. During a play, any disks of the opponent's color that are in a straight line and bounded by the disk just placed and another disk of the current player's color are turned over to the current player's color. The objective of the game is to have the majority of disks turned to display one's color when the last playable empty square is filled. - -Reversi/Othello is similar to the older Japanese game Go Bang.
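The flipping rule quoted above is mechanical enough to state directly in code. Here is a short, illustrative Python sketch; the board representation and names are this example's own, not from any standard implementation.

```python
# Minimal sketch of the Reversi/Othello flipping rule on an 8x8 board.
# board[r][c] is '.', 'L' (light) or 'D' (dark); rows/columns are 0-7.
DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
        (0, 1), (1, -1), (1, 0), (1, 1)]

def flips(board, r, c, me):
    """All opponent disks turned over if `me` plays at (r, c)."""
    opp = 'L' if me == 'D' else 'D'
    captured = []
    for dr, dc in DIRS:
        line, rr, cc = [], r + dr, c + dc
        # Walk along a straight line of opponent disks...
        while 0 <= rr < 8 and 0 <= cc < 8 and board[rr][cc] == opp:
            line.append((rr, cc))
            rr, cc = rr + dr, cc + dc
        # ...which is captured only if bounded by one of our disks.
        if line and 0 <= rr < 8 and 0 <= cc < 8 and board[rr][cc] == me:
            captured.extend(line)
    return captured

# Othello's fixed diagonal starting position:
board = [['.'] * 8 for _ in range(8)]
board[3][3], board[4][4] = 'L', 'L'
board[3][4], board[4][3] = 'D', 'D'
print(flips(board, 2, 3, 'D'))   # [(3, 3)]: dark's opening move flips one disk
```

A move is legal exactly when this list is nonempty; in Othello a player with no legal move passes, as described below.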
In 1893, the German games publisher Ravensburger started producing the game as one of its first titles. Two 18th-century continental European books dealing with a game that may or may not be Reversi are mentioned on page fourteen of the Spring 1989 Othello Quarterly, and there has been speculation, so far without documentation, that the game has older origins. - -The modern version of the game—the most regularly used rule-set, and the one used in international tournaments—is marketed and recognized as Othello. It was patented in Japan in 1971 by Goro Hasegawa (autonym: Satoshi Hasegawa), then a 38-year-old salesman. Hasegawa initially explained that Othello was an improvement on Reversi, but from around 2000, he began to claim that he invented it in Mito independently of Reversi. - -The game differs from Reversi in that the first four pieces go in the center, but in a standard diagonal pattern, rather than being placed by players. Additionally, where Reversi ends as soon as either player cannot make a move, in Othello the player who cannot make a move simply passes, meaning that the game can end prematurely if neither player can make a move. - -Hasegawa established the Japan Othello Association in March 1973, and held the first national Othello championship on 4 April 1973 in Japan. The Japanese game company Tsukuda Original launched Othello in late April 1973 in Japan under Hasegawa's license, which led to an immediate commercial success. - -The name was selected by Hasegawa as a reference to Shakespeare's play Othello. - -Mathematically, Othello still remains unsolved. Experts have not absolutely resolved what the outcome of a game will be where both sides use perfect play. However, analysis of thousands of high-quality games (most of them computer-generated) appears to lead to a reliable conclusion (pending actual proof if true) that, on the standard 8×8 board, perfect play on both sides results in a draw. When generalizing the game to play on an n×n board, the problem of determining if the first player has a winning move in a given position is PSPACE-complete. On 4×4 and 6×6 boards under perfect play, the second player wins. The first of these proofs is relatively trivial, and the second dates to around 1990. - -The World Othello Championship (WOC) started in 1977, organized by the Japan Othello Association. From 1978 until 2004 the World Othello Championship was organized by the Othello TD group and Anjar Co. In 2005 the World Othello Federation took over responsibility for the WOC. - -From 1977 to 1986 each country could send one player to participate in the WOC. From 1987 each country could send three players to participate, and the WOC team champion title was introduced. In 2005 a female spot was added to the WOC, and the WOC female champion title started. From 2006 each World Othello Federation member could send a full team of four players. In 2016, a youth champion title was added to the WOC.
The best-known example of a pebble motion problem is the famous 15 puzzle where a disordered group of fifteen tiles must be rearranged within a 4x4 grid by sliding one tile at a time. - -The general form of the pebble motion problem is Pebble Motion on Graphs, formulated as follows: - -Let $G = (V,E)$ be a graph with $n$ vertices. Let $P = \{1,\ldots,k\}$ be a set of pebbles with $k < n$. An arrangement of pebbles is a mapping $S : P \rightarrow V$ such that $S(i) \neq S(j)$ for $i \neq j$. A move $m = (p, u, v)$ consists of transferring pebble $p$ from vertex $u$ to adjacent unoccupied vertex $v$. The Pebble Motion on Graphs problem is to decide, given two arrangements $S_0$ and $S_+$, whether there is a sequence of moves that transforms $S_0$ into $S_+$. - -Common variations on the problem limit the structure of the graph to be: - -* a tree, - -* a square grid, - -* a bi-connected graph. - -Another set of variations considers the case in which some or all of the pebbles are unlabeled and interchangeable. - -Other versions of the problem seek not only to prove reachability but to find a (potentially optimal) sequence of moves (i.e. a plan) which performs the transformation. - -Finding the shortest path in the pebble motion on graphs problem (with labeled pebbles) is known to be NP-hard and APX-hard. The unlabeled problem can be solved in polynomial time when the cost metric is the total number of moves to adjacent vertices, but is NP-hard for other natural cost metrics. diff --git a/wiki/wikipedia/3193.txt b/wiki/wikipedia/3193.txt deleted file mode 100644 index ffd6dea3296f6025b243232e0a5c522c6d7d992f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3193.txt +++ /dev/null @@ -1,13 +0,0 @@ -In mathematics, the Śleszyński–Pringsheim theorem is a statement about convergence of certain continued fractions. It was discovered by Ivan Śleszyński and Alfred Pringsheim in the late 19th century. - -It states that if $a_n$, $b_n$, for $n = 1, 2, 3, \ldots$ are real numbers and $|b_n| \geq |a_n| + 1$ for all $n$, then -$$ - \cfrac{a_1}{b_1+\cfrac{a_2}{b_2+\cfrac{a_3}{b_3+ \ddots}}} -$$ - -converges absolutely to a number $f$ satisfying $0 < |f| < 1$, meaning that the series -$$ - f = \sum_n \left\{ \frac{A_n}{B_n} - \frac{A_{n-1}}{B_{n-1}}\right\}, -$$ - -where $A_n / B_n$ are the convergents of the continued fraction, converges absolutely. diff --git a/wiki/wikipedia/3194.txt b/wiki/wikipedia/3194.txt deleted file mode 100644 index f78ce4dc9d71ca7ea97b22333faf9326f2601927..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3194.txt +++ /dev/null @@ -1,96 +0,0 @@ -[[Image:EulerPhi.svg|thumb|The first thousand values of φ(n). The points on the top line represent φ(p) when p is a prime number, which is p − 1.]] - -In number theory, Euler's totient function counts the positive integers up to a given integer n that are relatively prime to n. It is written using the Greek letter phi as $\varphi(n)$ or $\phi(n)$, and may also be called Euler's phi function. In other words, it is the number of integers k in the range 1 ≤ k ≤ n for which the greatest common divisor gcd(n, k) is equal to 1. The integers k of this form are sometimes referred to as totatives of n. - -For example, the totatives of n = 9 are the six numbers 1, 2, 4, 5, 7 and 8. They are all relatively prime to 9, but the other three numbers in this range, 3, 6, and 9, are not, since gcd(9, 3) = gcd(9, 6) = 3 and gcd(9, 9) = 9. Therefore, φ(9) = 6.
As another example, φ(1) = 1 since for n = 1 the only integer in the range from 1 to n is 1 itself, and gcd(1, 1) = 1. - -Euler's totient function is a multiplicative function, meaning that if two numbers m and n are relatively prime, then φ(mn) = φ(m)φ(n). - -This function gives the order of the multiplicative group of integers modulo n (the group of units of the ring $\Z/n\Z$). It is also used for defining the RSA encryption system. - -Leonhard Euler introduced the function in 1763. However, he did not at that time choose any specific symbol to denote it. In a 1784 publication, Euler studied the function further, choosing the Greek letter π to denote it: he wrote πD for "the multitude of numbers less than D, and which have no common divisor with it". This definition varies from the current definition for the totient function at D = 1 but is otherwise the same. The now-standard notation $\varphi(A)$ comes from Gauss's 1801 treatise Disquisitiones Arithmeticae, although Gauss did not use parentheses around the argument and wrote $\varphi A$.
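Both the counting definition and the standard product formula $\varphi(n) = n\prod_{p\mid n}(1 - 1/p)$ are straightforward to compute. The sketch below is illustrative Python written for this article, not library code; it checks the two methods against the φ(9) = 6 example and the multiplicativity property.

```python
from math import gcd

def phi_count(n):
    """Euler's totient by directly counting totatives."""
    return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)

def phi_factor(n):
    """Euler's totient via phi(n) = n * prod over primes p | n of (1 - 1/p)."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p      # multiply result by (1 - 1/p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:                          # leftover prime factor
        result -= result // m
    return result

assert phi_count(9) == phi_factor(9) == 6                   # the example above
assert phi_factor(5 * 8) == phi_factor(5) * phi_factor(8)   # multiplicativity
```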
 - -Among the asymptotic formulas satisfied by the totient function are -$$ -\sum_{k=1}^n\frac{\varphi(k)}{k} = \sum_{k=1}^n\frac{\mu(k)}{k}\left\lfloor\frac{n}{k}\right\rfloor=\frac6{\pi^2}n+O\left((\log n)^\frac23(\log\log n)^\frac43\right) -$$
 - -and -$$ -\sum_{k=1}^n\frac{1}{\varphi(k)} = \frac{315\zeta(3)}{2\pi^4}\left(\log n+\gamma-\sum_{p\text{ prime}}\frac{\log p}{p^2-p+1}\right)+O\left(\frac{(\log n)^\frac23}n\right) -$$ -$$ -\liminf\frac{\varphi(n)}{n}\log\log n = e^{-\gamma}. -$$ - -Here γ is Euler's constant, γ = 0.577215665..., so $e^\gamma = 1.7810724...$ and $e^{-\gamma} = 0.56145948...$. - -Proving this does not quite require the prime number theorem. Since log log n goes to infinity, this formula shows that -$$ -\liminf\frac{\varphi(n)}{n}= 0. -$$ - -In fact, more is true. -$$ -\varphi(n) > \frac {n} {e^\gamma \log \log n + \frac {3} {\log \log n}} \quad\text{for } n>2 -$$ - -and -$$ -\varphi(n) < \frac {n} {e^{ \gamma}\log \log n} \quad\text{for infinitely many } n. -$$ - -The second inequality was shown by Jean-Louis Nicolas. Ribenboim says "The method of proof is interesting, in that the inequality is shown first under the assumption that the Riemann hypothesis is true, secondly under the contrary assumption." -$$ -\varphi(1)+\varphi(2)+\cdots+\varphi(n) = \frac{3n^2}{\pi^2}+O\left(n(\log n)^\frac23(\log\log n)^\frac43\right) \quad\text{as }n\rightarrow\infty, -$$ - -due to Arnold Walfisz, its proof exploiting estimates on exponential sums due to I. M. Vinogradov and N. M. Korobov (this is currently the best known estimate of this type). The "Big O" stands for a quantity that is bounded by a constant times the function of n inside the parentheses (which is small compared to $n^2$). - -This result can be used to prove that the probability of two randomly chosen numbers being relatively prime is $6/\pi^2$. - -In 1950 Somayajulu proved - -\begin{align} - -\liminf \frac{\varphi(n+1)}{\varphi(n)}&= 0 \quad\text{and} \\[5px] - -\limsup \frac{\varphi(n+1)}{\varphi(n)}&= \infty. - -\end{align} - -In 1954 Schinzel and Sierpiński strengthened this, proving that the set of ratios φ(n + 1)/φ(n) is dense in the positive real numbers. - -A totient number is a value of Euler's totient function: that is, an m for which there is at least one n with φ(n) = m. A nontotient is a natural number which is not a totient number. Every odd integer exceeding 1 is trivially a nontotient. There are also infinitely many even nontotients, and indeed every positive integer has a multiple which is an even nontotient. - -The number of totient numbers up to a given limit x is -$$ -\frac{x}{\log x}e^{ \big(C+o(1)\big)(\log\log\log x)^2 } -$$ - -for a constant C = 0.8178146.... - -If counted according to multiplicity, the number of totient numbers up to a given limit x is -$$ -\Big\vert\{ n : \varphi(n) \le x \}\Big\vert = \frac{\zeta(2)\zeta(3)}{\zeta(6)} \cdot x + R(x) -$$ - -where the error term R is of order at most $x/(\log x)^k$ for any positive k. - -It is known that the multiplicity of m exceeds $m^\delta$ infinitely often for any δ < 0.55655. - -Ford proved that for every integer k ≥ 2 there is a totient number m of multiplicity k: that is, for which the equation φ(n) = m has exactly k solutions; this result had previously been conjectured by Wacław Sierpiński, and it had been obtained as a consequence of Schinzel's hypothesis H. - -In the last section of the Disquisitiones Gauss proves that a regular n-gon can be constructed with straightedge and compass if φ(n) is a power of 2. If n is a power of an odd prime number the formula for the totient says its totient can be a power of two only if n is a first power and n − 1 is a power of 2. The primes that are one more than a power of 2 are called Fermat primes, and only five are known: 3, 5, 17, 257, and 65537. Fermat and Gauss knew of these. Nobody has been able to prove whether there are any more.
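The power-of-two criterion just stated is easy to test by machine. The following illustrative Python sketch (written for this article; a naive φ is used for clarity) recovers exactly the list of constructible n up to 40 given in the next paragraph.

```python
from math import gcd

def phi(n):
    """Naive Euler totient, sufficient for small n."""
    return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)

def constructible(n):
    """A regular n-gon is straightedge-and-compass constructible
    iff phi(n) is a power of two (Gauss-Wantzel)."""
    m = phi(n)
    return m & (m - 1) == 0        # power-of-two test

print([n for n in range(2, 41) if constructible(n)])
# [2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40]
```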
 - -Thus, a regular n-gon has a straightedge-and-compass construction if n is a product of distinct Fermat primes and any power of 2. The first few such n are - -2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40,... . - -Setting up an RSA system involves choosing large prime numbers p and q, computing n = pq and k = φ(n), and finding two numbers e and d such that ed ≡ 1 (mod k). The numbers n and e (the "encryption key") are released to the public, and d (the "decryption key") is kept private. - -A message, represented by an integer m, where 0 < m < n, is encrypted by computing $S = m^e \pmod n$. - -It is decrypted by computing $t = S^d \pmod n$. Euler's Theorem can be used to show that if 0 < t < n, then t = m. - -The security of an RSA system would be compromised if the number n could be factored or if φ(n) could be computed without factoring n. - -If p is prime, then φ(p) = p − 1. In 1932 D. H. Lehmer asked if there are any composite numbers n such that φ(n) divides n − 1. None are known. - -In 1933 he proved that if any such n exists, it must be odd, square-free, and divisible by at least seven primes (i.e. ω(n) ≥ 7). In 1980 Cohen and Hagis proved that $n > 10^{20}$ and that ω(n) ≥ 14. Further, Hagis showed that if 3 divides n then $n > 10^{1937042}$ and ω(n) ≥ 298848. - -Carmichael's totient function conjecture states that there is no number n with the property that for all other numbers m, m ≠ n, φ(m) ≠ φ(n). See Ford's theorem above. - -If there is a single counterexample to this conjecture, there must be infinitely many counterexamples, and the smallest one has at least ten billion digits in base 10. diff --git a/wiki/wikipedia/3195.txt b/wiki/wikipedia/3195.txt deleted file mode 100644 index f81e81512f37cb0ffd5e692a4fbfcd54ab6a7817..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3195.txt +++ /dev/null @@ -1,96 +0,0 @@ -Presburger arithmetic is the first-order theory of the natural numbers with addition, named in honor of Mojżesz Presburger, who introduced it in 1929. The signature of Presburger arithmetic contains only the addition operation and equality, omitting the multiplication operation entirely. The axioms include a schema of induction. - -Presburger arithmetic is much weaker than Peano arithmetic, which includes both addition and multiplication operations. Unlike Peano arithmetic, Presburger arithmetic is a decidable theory. This means it is possible to algorithmically determine, for any sentence in the language of Presburger arithmetic, whether that sentence is provable from the axioms of Presburger arithmetic. The asymptotic running-time computational complexity of this algorithm is at least doubly exponential, however, as shown by Fischer and Rabin. - -The language of Presburger arithmetic contains constants 0 and 1 and a binary function +, interpreted as addition. - -In this language, the axioms of Presburger arithmetic are the universal closures of the following: - -# ¬(0 = x + 1) - -# x + 1 = y + 1 → x = y - -# x + 0 = x - -# x + (y + 1) = (x + y) + 1 - -# Let P(x) be a first-order formula in the language of Presburger arithmetic with a free variable x (and possibly other free variables). Then the following formula is an axiom:(P(0) ∧ ∀x(P(x) → P(x + 1))) → ∀y P(y). - -(5) is an axiom schema of induction, representing infinitely many axioms. These cannot be replaced by any finite number of axioms, that is, Presburger arithmetic is not finitely axiomatizable in first-order logic.
 - -Presburger arithmetic can be viewed as a first-order theory with equality containing precisely all consequences of the above axioms. Alternatively, it can be defined as the set of those sentences that are true in the intended interpretation: the structure of non-negative integers with constants 0, 1, and the addition of non-negative integers. - -Presburger arithmetic is designed to be complete and decidable. Therefore, it cannot formalize concepts such as divisibility or primality, or, more generally, any number concept leading to multiplication of variables. However, it can formulate individual instances of divisibility; for example, it proves "for all x, there exists y : (y + y = x) ∨ (y + y + 1 = x)". This states that every number is either even or odd. - -Mojżesz Presburger proved Presburger arithmetic to be: - -* consistent: There is no statement in Presburger arithmetic which can be deduced from the axioms such that its negation can also be deduced. - -* complete: For each statement in the language of Presburger arithmetic, either it is possible to deduce it from the axioms or it is possible to deduce its negation. - -* decidable: There exists an algorithm which decides whether any given statement in Presburger arithmetic is a theorem or a nontheorem. - -The decidability of Presburger arithmetic can be shown using quantifier elimination, supplemented by reasoning about arithmetical congruence. The steps used to justify a quantifier elimination algorithm can be used to define recursive axiomatizations that do not necessarily contain the axiom schema of induction. - -In contrast, Peano arithmetic, which is Presburger arithmetic augmented with multiplication, is not decidable, as a consequence of the negative answer to the Entscheidungsproblem. By Gödel's incompleteness theorem, Peano arithmetic is incomplete and its consistency is not internally provable (but see Gentzen's consistency proof). - -The decision problem for Presburger arithmetic is an interesting example in computational complexity theory and computation. Let n be the length of a statement in Presburger arithmetic. Then Fischer and Rabin proved that, in the worst case, a proof of the statement in first-order logic has length at least $2^{2^{cn}}$, for some constant c>0. Hence, any decision algorithm for Presburger arithmetic has worst-case runtime at least doubly exponential. Fischer and Rabin also proved that for any reasonable axiomatization (defined precisely in their paper), there exist theorems of length n which have doubly exponential length proofs. Intuitively, this suggests there are computational limits on what can be proven by computer programs. Fischer and Rabin's work also implies that Presburger arithmetic can be used to define formulas which correctly calculate any algorithm as long as the inputs are less than relatively large bounds. The bounds can be increased, but only by using new formulas. On the other hand, a triply exponential upper bound on a decision procedure for Presburger Arithmetic was proved by Oppen. - -A tighter complexity bound was shown using alternating complexity classes by Berman. The set of true statements in Presburger arithmetic (PA) is shown complete for TimeAlternations$(2^{2^{n^{O(1)}}}, n)$. Thus, its complexity is between double exponential nondeterministic time (2-NEXP) and double exponential space (2-EXPSPACE). Completeness is under polynomial time many-to-one reductions. (Also, note that while Presburger arithmetic is commonly abbreviated PA, in mathematics in general PA usually means Peano arithmetic.)
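Decidability is also practical at small scale: modern SMT solvers implement complete procedures for linear integer arithmetic. As a hypothetical illustration (it assumes the third-party z3-solver Python package, which is not discussed in this article), the sketch below checks the even-or-odd sentence quoted earlier by showing that its negation is unsatisfiable.

```python
# Illustrative sketch: checking a Presburger sentence with the Z3 SMT
# solver (assumes the `z3-solver` package). We verify the sentence
# "for all x, there exists y : (y + y = x) or (y + y + 1 = x)"
# over the natural numbers by showing its negation is unsatisfiable.
from z3 import Int, ForAll, Exists, Implies, And, Or, Not, Solver, unsat

x, y = Int('x'), Int('y')
sentence = ForAll([x], Implies(x >= 0,
            Exists([y], And(y >= 0, Or(y + y == x, y + y + 1 == x)))))

s = Solver()
s.add(Not(sentence))
assert s.check() == unsat   # the sentence is a theorem
```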
 - -For a more fine-grained result, let PA(i) be the set of true $\Sigma_i$ PA statements, and PA(i, j) the set of true $\Sigma_i$ PA statements with each quantifier block limited to j variables. '<' is considered to be quantifier-free; here, bounded quantifiers are counted as quantifiers.
 - -PA(1, j) is in P, while PA(1) is NP-complete. - -For i > 0 and j > 2, PA(i + 1, j) is $\Sigma_i^{\mathrm{P}}$-complete. The hardness result only needs j>2 (as opposed to j=1) in the last quantifier block. - -For i>0, PA(i+1) is $\Sigma_i^{\mathrm{EXP}}$-complete (and is TimeAlternations$(2^{n^{O(i)}}, i)$-complete). - -Because Presburger arithmetic is decidable, automatic theorem provers for Presburger arithmetic exist. For example, the Coq proof assistant system features the tactic omega for Presburger arithmetic and the Isabelle proof assistant contains a verified quantifier elimination procedure by Nipkow. The double exponential complexity of the theory makes it infeasible to use the theorem provers on complicated formulas, but this behavior occurs only in the presence of nested quantifiers: Nelson and Oppen describe an automatic theorem prover which uses the simplex algorithm on an extended Presburger arithmetic without nested quantifiers to prove some of the instances of quantifier-free Presburger arithmetic formulas. More recent satisfiability modulo theories solvers use complete integer programming techniques to handle the quantifier-free fragment of Presburger arithmetic theory. - -Presburger arithmetic can be extended to include multiplication by constants, since multiplication is repeated addition. Most array subscript calculations then fall within the region of decidable problems. This approach is the basis of at least five proof-of-correctness systems for computer programs, beginning with the Stanford Pascal Verifier in the late 1970s and continuing through to Microsoft's Spec# system of 2005. - -Some properties are now given about integer relations definable in Presburger Arithmetic. For the sake of simplicity, all relations considered in this section are over non-negative integers. - -A relation is Presburger-definable if and only if it is a semilinear set. - -A unary integer relation $R$, that is, a set of non-negative integers, is Presburger-definable if and only if it is ultimately periodic. That is, if there exists a threshold $t\in \N$ and a positive period $p\in\N^{>0}$ such that, for all integers $n$ such that $|n|\ge t$, $n\in R$ if and only if $n+p\in R$. - -By the Cobham–Semenov theorem, a relation is Presburger-definable if and only if it is definable in Büchi arithmetic of base $k$ for all $k\ge2$. A relation definable in Büchi arithmetic of base $k$ and $k'$ for $k$ and $k'$ being multiplicatively independent integers is Presburger definable. - -An integer relation $R$ is Presburger-definable if and only if all sets of integers which are definable in first-order logic with addition and $R$ (that is, Presburger Arithmetic plus a predicate for $R$) are Presburger-definable. Equivalently, for each relation $R$ which is not Presburger-definable, there exists a first-order formula with addition and $R$ which defines a set of integers which is not definable using only addition. - -Presburger-definable relations admit another characterization, by Muchnik's theorem. It is more complicated to state, but led to the proof of the two former characterizations. Before Muchnik's theorem can be stated, some additional definitions must be introduced. - -Let $R\subseteq\N^d$ be a set. The section $x_i = j$ of $R$, for $i < d$ and $j \in \N$, is defined as -$$ -\left \{(x_0,\ldots,x_{i-1},x_{i+1},\ldots,x_{d-1})\in\N^{d-1}\mid(x_0,\ldots,x_{i-1},j,x_{i+1},\ldots,x_{d-1})\in R \right \}. -$$ - -Given two sets $R,S\subseteq\N^d$ and a $d$-tuple of integers $(p_0,\ldots,p_{d-1})\in\Z^d$, the set $R$ is called $(p_0,\dots,p_{d-1})$-periodic in $S$ if, for all $(x_0, \dots, x_{d-1}) \in S$ such that $(x_0+p_0,\dots,x_{d-1}+p_{d-1})\in S,$ then $(x_0,\ldots,x_{d-1})\in R$ if and only if $(x_0+p_0,\dots,x_{d-1}+p_{d-1})\in R$.
For $s\in\N$, the set $R$ is said to be $s$-periodic in $S$ if it is $(p_0,\ldots,p_{d-1})$-periodic for some $(p_0,\dots,p_{d-1})\in\Z^d$ such that -$$ -\sum_{i=0}^{d-1}|p_i| < s. -$$ - -Finally, for $k,x_0,\dots,x_{d-1}\in\N$ let -$$ -C(k,(x_0,\ldots,x_{d-1}))= \left \{(x_0+c_0,\dots,x_{d-1}+c_{d-1})\mid 0 \leq c_i < k \right \} -$$ - -denote the cube of size $k$ whose lesser corner is $(x_0,\dots,x_{d-1})$. -$$ -R\subseteq\N^d -$$ is Presburger-definable if and only if: - -* if $d > 1$ then all sections of $R$ are Presburger-definable and - -* there exists $s\in\N$ such that, for every $k\in\N$, there exists $t\in\N$ such that for all $(x_0,\dots,x_{d-1})\in\N^d$ with $\sum_{i=0}^{d-1}x_i>t$, $R$ is $s$-periodic in $C(k,(x_0,\dots,x_{d-1}))$. - -Intuitively, the integer $s$ represents the length of a shift, the integer $k$ is the size of the cubes and $t$ is the threshold before the periodicity. This result remains true when the condition -$$ -\sum_{i=0}^{d-1}x_i>t -$$ - -is replaced either by $\min(x_0,\ldots,x_{d-1})>t$ or by $\max(x_0,\ldots,x_{d-1})>t$. - -This characterization led to the so-called "definable criterion for definability in Presburger arithmetic", that is: there exists a first-order formula with addition and a $d$-ary predicate $R$ which holds if and only if $R$ is interpreted by a Presburger-definable relation. Muchnik's theorem also allows one to prove that it is decidable whether an automatic sequence accepts a Presburger-definable set. diff --git a/wiki/wikipedia/3196.txt b/wiki/wikipedia/3196.txt deleted file mode 100644 index dcff9fb7a5edfd4cd1b52bcf65e7f78ff6cddf4f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3196.txt +++ /dev/null @@ -1,39 +0,0 @@ -In the fields of mechanism design and social choice theory, Gibbard's theorem is a result proven by philosopher Allan Gibbard in 1973. It states that for any deterministic process of collective decision, at least one of the following three properties must hold: - -# The process is dictatorial, i.e. there exists a distinguished agent who can impose the outcome; - -# The process limits the possible outcomes to two options only; - -# The process is open to strategic voting: once an agent has identified their preferences, it is possible that they have no action at their disposal that best defends these preferences irrespective of the other agents' actions. - -A corollary of this theorem is the Gibbard–Satterthwaite theorem about voting rules. The main difference between the two is that the Gibbard–Satterthwaite theorem is limited to ranked (ordinal) voting rules: a voter's action consists in giving a preference ranking over the available options. Gibbard's theorem is more general and considers processes of collective decision that may not be ordinal: for example, voting systems where voters assign grades to candidates. Gibbard's theorem can be proven using Arrow's impossibility theorem. - -Gibbard's theorem is itself generalized by Gibbard's 1978 theorem and Hylland's theorem, which extend these results to non-deterministic processes, i.e. where the outcome may not only depend on the agents' actions but may also involve an element of chance. - -Consider some voters $1$, $2$ and $3$ who wish to select an option among three alternatives: $a$, $b$ and $c$. Assume they use approval voting: each voter assigns to each candidate the grade 1 (approval) or 0 (withhold approval).
For example, $(1, 1, 0)$ is an authorized ballot: it means that the voter approves of candidates $a$ and $b$ but does not approve of candidate $c$. Once the ballots are collected, the candidate with highest total grade is declared the winner. Ties between candidates are broken by alphabetical order: for example, if there is a tie between candidates $a$ and $b$, then $a$ wins. - -Assume that voter $1$ prefers alternative $a$, then $b$ and then $c$. Which ballot will best defend her opinions? For example, consider the two following situations. - -* If the two other voters respectively cast ballots $(0, 1, 1)$ and $(1, 1, 1)$, then voter $1$ has only one ballot that leads to the election of her favorite alternative $a$: $(1, 0, 0)$. - -* However, if we assume instead that the two other voters respectively cast ballots $(0, 0, 1)$ and $(0, 1, 1)$, then voter $1$ should not vote $(1, 0, 0)$ because it makes $c$ win; she should rather vote $(1, 1, 0)$, which makes $b$ win. - -To sum up, voter $1$ faces a strategic voting dilemma: depending on the ballots that the other voters will cast, $(1, 0, 0)$ or $(1, 1, 0)$ can be a ballot that best defends her opinions. We then say that approval voting is not strategyproof: once the voter has identified her own preferences, she does not have a ballot at her disposal that best defends her opinions in all situations; she needs to act strategically, possibly by spying on the other voters to determine how they intend to vote. - -Gibbard's theorem states that a deterministic process of collective decision cannot be strategyproof, except possibly in two cases: if there is a distinguished agent who has a dictatorial power, or if the process limits the outcome to two possible options only. - -Let $\mathcal{A}$ be the set of alternatives, which can also be called candidates in a context of voting. Let $\mathcal{N} = \{1, \ldots, n\}$ be the set of agents, which can also be called players or voters, depending on the context of application. For each agent $i$, let $\mathcal{S}_i$ be a set that represents the available strategies for agent $i$; assume that $\mathcal{S}_i$ is finite. Let $g$ be a function that, to each $n$-tuple of strategies $(s_1, \ldots, s_n) \in \mathcal{S}_1 \times \cdots \times \mathcal{S}_n$, maps an alternative. The function $g$ is called a game form. In other words, a game form is essentially defined like an n-player game, but with no utilities associated to the possible outcomes: it describes the procedure only, without specifying a priori the gain that each agent would get from each outcome. - -We say that $g$ is strategyproof (originally called: straightforward) if for any agent $i$ and for any strict weak order $P_i$ over the alternatives, there exists a strategy $s_i^*(P_i)$ that is dominant for agent $i$ when she has preferences $P_i$: there is no profile of strategies for the other agents such that another strategy $s_i$, different from $s_i^*(P_i)$, would lead to a strictly better outcome (in the sense of $P_i$). This property is desirable for a democratic decision process: it means that once the agent $i$ has identified her own preferences $P_i$, she can choose a strategy $s_i^*(P_i)$ that best defends her preferences, with no need to know or guess the strategies chosen by the other agents. - -We let $\mathcal{S} = \mathcal{S}_1 \times \cdots \times \mathcal{S}_n$ and denote by $g(\mathcal{S})$ the range of $g$, i.e. the set of the possible outcomes of the game form.
For example, we say that $g$ has at least 3 possible outcomes if and only if the cardinality of $g(\mathcal{S})$ is 3 or more. Since the strategy sets are finite, $g(\mathcal{S})$ is finite also; thus, even if the set of alternatives $\mathcal{A}$ is not assumed to be finite, the subset of possible outcomes $g(\mathcal{S})$ is necessarily so. - -We say that $g$ is dictatorial if there exists an agent $i$ who is a dictator, in the sense that for any possible outcome $a \in g(\mathcal{S})$, agent $i$ has a strategy at her disposal that ensures that the result is $a$, whatever the strategies chosen by the other agents. - -We assume that each voter communicates a strict weak order over the candidates. The serial dictatorship is defined as follows. If voter 1 has a unique most-liked candidate, then this candidate is elected. Otherwise, possible outcomes are restricted to his ex-aequo most-liked candidates and the other candidates are eliminated. Then voter 2's ballot is examined: if he has a unique best-liked candidate among the non-eliminated ones, then this candidate is elected. Otherwise, the list of possible outcomes is reduced again, etc. If there are still several non-eliminated candidates after all ballots have been examined, then an arbitrary tie-breaking rule is used. - -This game form is strategyproof: whatever the preferences of a voter, he has a dominant strategy that consists in declaring his sincere preference order. It is also dictatorial, and its dictator is voter 1: if he wishes to see candidate $a$ elected, then he just has to communicate a preference order where $a$ is the unique most-liked candidate. - -If there are only 2 possible outcomes, a game form may be strategyproof and not dictatorial. For example, this is the case for the simple majority vote: each voter casts a ballot for her most-liked alternative (among the two possible outcomes), and the alternative with the most votes is declared the winner. This game form is strategyproof because it is always optimal to vote for one's most-liked alternative (unless one is indifferent between them). However, it is clearly not dictatorial. Many other game forms are strategyproof and not dictatorial: for example, assume that the alternative $a$ wins if it gets two thirds of the votes, and $b$ wins otherwise. - -Consider the following game form. Voter 1 can vote for a candidate of her choice, or she can abstain. In the first case, the specified candidate is automatically elected. Otherwise, the other voters use a classic voting rule, for example the Borda count. This game form is clearly dictatorial, because voter 1 can impose the result. However, it is not strategyproof: the other voters face the same issue of strategic voting as in the usual Borda count. Thus, Gibbard's theorem is an implication and not an equivalence. diff --git a/wiki/wikipedia/3197.txt b/wiki/wikipedia/3197.txt deleted file mode 100644 index dc9df720318117d27d44a124d4a890526f84944b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3197.txt +++ /dev/null @@ -1,19 +0,0 @@ -The Denjoy–Carleman–Ahlfors theorem states that the number of asymptotic values attained by a non-constant entire function of order ρ on curves going outwards toward infinite absolute value is less than or equal to 2ρ. It was first conjectured by Arnaud Denjoy in 1907. - -Torsten Carleman showed that the number of asymptotic values was less than or equal to (5/2)ρ in 1921. - -In 1929 Lars Ahlfors confirmed Denjoy's conjecture of 2ρ.
- -Finally, in 1933, Carleman published a very short proof. - -The use of the term "asymptotic value" does not mean that the ratio of that value to the value of the function approaches 1 (as in asymptotic analysis) as one moves along a certain curve, but rather that the function value approaches the asymptotic value along the curve. For example, as one moves along the real axis toward negative infinity, the function $\exp(z)$ approaches zero, but the quotient $0/\exp(z)$ does not go to 1. - -The function $\exp(z)$ is of order 1 and has only one asymptotic value, namely 0. The same is true of the function $\sin(z)/z,$ but the asymptote is attained in two opposite directions. - -A case where the number of asymptotic values is equal to 2ρ is the sine integral $\text{Si}(z)=\int_0^z\frac{\sin \zeta}{\zeta}d\zeta$, a function of order 1 which goes to −π/2 along the real axis going toward negative infinity, and to +π/2 in the opposite direction. - -The integral of the function $a\sin(z^2)/z+b\sin(z^2)/z^2$ is an example of a function of order 2 with four asymptotic values (if b is not zero), approached as one goes outward from zero along the real and imaginary axes. - -More generally, $f(z)=\int_0^z\frac{\sin(\zeta^\rho)}{\zeta^\rho}d\zeta,$ with ρ any positive integer, is of order ρ and has 2ρ asymptotic values. - -It is clear that the theorem applies to polynomials only if they are not constant. A constant polynomial has 1 asymptotic value, but is of order 0. diff --git a/wiki/wikipedia/3198.txt b/wiki/wikipedia/3198.txt deleted file mode 100644 index 9f663500793ce1306fd1a0263662a964a517e4c8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3198.txt +++ /dev/null @@ -1,162 +0,0 @@ -In mathematics, a Hadamard matrix, named after the French mathematician Jacques Hadamard, is a square matrix whose entries are either +1 or -1 and whose rows are mutually orthogonal. In geometric terms, this means that each pair of rows in a Hadamard matrix represents two perpendicular vectors, while in combinatorial terms, it means that each pair of rows has matching entries in exactly half of their columns and mismatched entries in the remaining columns. It is a consequence of this definition that the corresponding properties hold for columns as well as rows. The n-dimensional parallelotope spanned by the rows of an n×n Hadamard matrix has the maximum possible n-dimensional volume among parallelotopes spanned by vectors whose entries are bounded in absolute value by 1. Equivalently, a Hadamard matrix has maximal determinant among matrices with entries of absolute value less than or equal to 1 and so is an extremal solution of Hadamard's maximal determinant problem. - -Certain Hadamard matrices can almost directly be used as an error-correcting code using a Hadamard code (generalized in Reed–Muller codes), and are also used in balanced repeated replication (BRR), used by statisticians to estimate the variance of a parameter estimator. - -Let H be a Hadamard matrix of order n. The transpose of H is closely related to its inverse. In fact: -$$ - H H^\textsf{T} = n I_n -$$ - -where In is the n × n identity matrix and HT is the transpose of H. To see that this is true, notice that the rows of H are all orthogonal vectors over the field of real numbers and each have length $\sqrt n$. Dividing H through by this length gives an orthogonal matrix whose transpose is thus its inverse. Multiplying by the length again gives the equality above. 
As a result, -$$ - \operatorname{det}(H) = \pm n^{\frac{n}{2}}, -$$ - -where det(H) is the determinant of H. - -Suppose that M is a complex matrix of order n, whose entries are bounded by $|M_{ij}| \leq 1$, for each i, j between 1 and n. Then Hadamard's determinant bound states that -$$ - |\operatorname{det}(M)| \leq n^\frac{n}{2}. -$$ - -Equality in this bound is attained for a real matrix M if and only if M is a Hadamard matrix. - -The order of a Hadamard matrix must be 1, 2, or a multiple of 4. - -Examples of Hadamard matrices were actually first constructed by James Joseph Sylvester in 1867. Let H be a Hadamard matrix of order n. Then the partitioned matrix - -\begin{bmatrix} - -H & H\\ - -H & -H - -\end{bmatrix} - -is a Hadamard matrix of order 2n. This observation can be applied repeatedly and leads to the following sequence of matrices, also called Walsh matrices. - -\begin{align} - -H_1 &= \begin{bmatrix} 1 \end{bmatrix}, \\ - -H_2 &= \begin{bmatrix} - -1 & 1 \\ - -1 & -1 - -\end{bmatrix}, \\ - -H_4 &= \begin{bmatrix} - -1 & 1 & 1 & 1\\ - -1 & -1 & 1 & -1\\ - -1 & 1 & -1 & -1\\ - -1 & -1 & -1 & 1 - -\end{bmatrix}, - -\end{align} - -and - - - -H_{2^k} = \begin{bmatrix} - -H_{2^{k-1}} & H_{2^{k-1}}\\ - -H_{2^{k-1}} & -H_{2^{k-1}} - -\end{bmatrix} = H_2 \otimes H_{2^{k-1}}, - - - -for $ 2 \le k \in \mathbb{N} $, where $ \otimes $ denotes the Kronecker product. - -In this manner, Sylvester constructed Hadamard matrices of order $2^k$ for every non-negative integer k. - -Sylvester's matrices have a number of special properties. They are symmetric and, when k ≥ 1 ($2^k > 1$), have trace zero. The elements in the first column and the first row are all positive. The elements in all the other rows and columns are evenly divided between positive and negative. Sylvester matrices are closely connected with Walsh functions. - -If we map the elements of the Hadamard matrix using the group homomorphism $ \{1, -1, \times\} \mapsto \{0, 1, \oplus\} $, we can describe an alternative construction of Sylvester's Hadamard matrix. First consider the matrix $ F_n $, the $ n\times 2^n $ matrix whose columns consist of all n-bit numbers arranged in ascending counting order. We may define $ F_n $ recursively by - -\begin{align} - -F_1 &= \begin{bmatrix}0 & 1\end{bmatrix} \\ - -F_n &= \begin{bmatrix} - -0_{1\times 2^{n-1}} & 1_{1\times 2^{n-1}} \\ - -F_{n-1} & F_{n-1} - -\end{bmatrix}. - -\end{align} - -It can be shown by induction that the image of the Hadamard matrix under the above homomorphism is given by - - - -H_{2^n} = F_n^\textsf{T} F_n. - - - -This construction demonstrates that the rows of the Hadamard matrix $ H_{2^n} $ can be viewed as a length $ 2^n $ linear error-correcting code of rank n, and minimum distance $ 2^{n-1} $ with generating matrix $ F_n. $ - -This code is also referred to as a Walsh code. The Hadamard code, by contrast, is constructed from the Hadamard matrix $ H_{2^n} $ by a slightly different procedure. - -The most important open question in the theory of Hadamard matrices is that of existence. The Hadamard conjecture proposes that a Hadamard matrix of order 4k exists for every positive integer k. The Hadamard conjecture has also been attributed to Paley, although it was considered implicitly by others prior to Paley's work. - -A generalization of Sylvester's construction proves that if $H_n$ and $H_m$ are Hadamard matrices of orders n and m respectively, then $H_n \otimes H_m$ is a Hadamard matrix of order nm.
This result is used to produce Hadamard matrices of higher order once those of smaller orders are known. - -Sylvester's 1867 construction yields Hadamard matrices of order 1, 2, 4, 8, 16, 32, etc. Hadamard matrices of orders 12 and 20 were subsequently constructed by Hadamard (in 1893). In 1933, Raymond Paley discovered the Paley construction, which produces a Hadamard matrix of order q + 1 when q is any prime power that is congruent to 3 modulo 4 and that produces a Hadamard matrix of order 2(q + 1) when q is a prime power that is congruent to 1 modulo 4. His method uses finite fields. - -The smallest order that cannot be constructed by a combination of Sylvester's and Paley's methods is 92. A Hadamard matrix of this order was found using a computer by Baumert, Golomb, and Hall in 1962 at JPL. They used a construction, due to Williamson, that has yielded many additional orders. Many other methods for constructing Hadamard matrices are now known. - -In 2005, Hadi Kharaghani and Behruz Tayfeh-Rezaie published their construction of a Hadamard matrix of order 428. As a result, the smallest order for which no Hadamard matrix is presently known is 668. - -At the time of writing, there are 12 multiples of 4 less than or equal to 2000 for which no Hadamard matrix of that order is known. They are: - -668, 716, 892, 1132, 1244, 1388, 1436, 1676, 1772, 1916, 1948, and 1964. - -Two Hadamard matrices are considered equivalent if one can be obtained from the other by negating rows or columns, or by interchanging rows or columns. Up to equivalence, there is a unique Hadamard matrix of orders 1, 2, 4, 8, and 12. There are 5 inequivalent matrices of order 16, 3 of order 20, 60 of order 24, and 487 of order 28. Millions of inequivalent matrices are known for orders 32, 36, and 40. Using a coarser notion of equivalence that also allows transposition, there are 4 inequivalent matrices of order 16, 3 of order 20, 36 of order 24, and 294 of order 28. - -Hadamard matrices are also uniquely recoverable, in the following sense: If a Hadamard matrix $H$ of order $n$ has $O(n^2/\log n)$ entries randomly deleted, then with overwhelming likelihood, one can perfectly recover the original matrix $H$ from the damaged one. The algorithm of recovery has the same computational cost as matrix inversion. - -Many special cases of Hadamard matrices have been investigated in the mathematical literature. - -A Hadamard matrix H is skew if $H^\textsf{T} + H = 2I.$ A skew Hadamard matrix remains a skew Hadamard matrix after multiplication of any row and its corresponding column by −1. This makes it possible, for example, to normalize a skew Hadamard matrix so that all elements in the first row equal 1. - -Reid and Brown in 1972 showed that there exists a doubly regular tournament of order n if and only if there exists a skew Hadamard matrix of order n + 1. In a mathematical tournament of order n, each of n players plays one match against each of the other players, each match resulting in a win for one of the players and a loss for the other. A tournament is regular if each player wins the same number of matches. A regular tournament is doubly regular if the number of opponents beaten by both of two distinct players is the same for all pairs of distinct players. Since each of the n (n−1) / 2 matches played results in a win for one of the players, each player wins (n−1) / 2 matches (and loses the same number).
Since each of the (n−1) / 2 players defeated by a given player also loses to (n−3) / 2 other players, the number of player pairs (i, j) such that j loses both to i and to the given player is (n−1) (n−3) / 4. The same result should be obtained if the pairs are counted differently: the given player and any of the (n−1) other players together defeat the same number of common opponents. This common number of defeated opponents must therefore be (n−3) / 4. A skew Hadamard matrix is obtained by introducing an additional player who defeats all of the original players and then forming a matrix with rows and columns labeled by players according to the rule that row i, column j contains 1 if i = j or i defeats j and −1 if j defeats i. This correspondence in reverse produces a doubly regular tournament from a skew Hadamard matrix, assuming the skew Hadamard matrix is normalized so that all elements of the first row equal 1. - -Regular Hadamard matrices are real Hadamard matrices whose row and column sums are all equal. A necessary condition on the existence of a regular n×n Hadamard matrix is that n be a perfect square. A circulant matrix is manifestly regular, and therefore a circulant Hadamard matrix would have to be of perfect square order. Moreover, if an n×n circulant Hadamard matrix existed with n > 1 then n would necessarily have to be of the form $4u^2$ with u odd. - -The circulant Hadamard matrix conjecture, however, asserts that, apart from the known 1×1 and 4×4 examples, no such matrices exist. This was verified for all but 26 values of u less than $10^4$. - -One basic generalization is the weighing matrix, a square matrix in which entries may also be zero and which satisfies $WW^\textsf{T} = wI$ for some w, its weight. A weighing matrix with its weight equal to its order is a Hadamard matrix. - -Another generalization defines a complex Hadamard matrix to be a matrix in which the entries are complex numbers of unit modulus and which satisfies $HH^* = nI_n$, where $H^*$ is the conjugate transpose of H. Complex Hadamard matrices arise in the study of operator algebras and the theory of quantum computation. - -Butson-type Hadamard matrices are complex Hadamard matrices in which the entries are taken to be qth roots of unity. The term complex Hadamard matrix has been used by some authors to refer specifically to the case q = 4. - -* Olivia MFSK – an amateur-radio digital protocol designed to work in difficult (low signal-to-noise ratio plus multipath propagation) conditions on shortwave bands. - -* Balanced repeated replication (BRR) – a technique used by statisticians to estimate the variance of a statistical estimator. - -* Coded aperture spectrometry – an instrument for measuring the spectrum of light. The mask element used in coded aperture spectrometers is often a variant of a Hadamard matrix. - -* Feedback delay networks – digital reverberation devices which use Hadamard matrices to blend sample values. - -* Plackett–Burman design of experiments for investigating the dependence of some measured quantity on a number of independent variables. - -* Robust parameter designs for investigating noise factor impacts on responses. - -* Compressed sensing for signal processing and underdetermined linear systems (inverse problems). - -* Quantum Hadamard gate for quantum computing and the Hadamard transform for quantum algorithms.
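As a minimal illustration of Sylvester's doubling construction described earlier, the following C++ sketch (illustrative code, not part of the original article) builds $H_{2^k}$ by repeatedly forming the partitioned matrix [[H, H], [H, −H]] and then verifies the defining identity $HH^\textsf{T} = nI_n$:

#include <cassert>
#include <iostream>
#include <vector>

using Matrix = std::vector<std::vector<int>>;

// One Sylvester doubling step: H -> [[H, H], [H, -H]].
Matrix doubleHadamard(const Matrix& H) {
    const size_t n = H.size();
    Matrix G(2 * n, std::vector<int>(2 * n));
    for (size_t i = 0; i < n; ++i)
        for (size_t j = 0; j < n; ++j) {
            G[i][j] = G[i][j + n] = G[i + n][j] = H[i][j];
            G[i + n][j + n] = -H[i][j];
        }
    return G;
}

int main() {
    Matrix H = {{1}};                  // H_1
    for (int k = 0; k < 4; ++k)        // four doublings give H_16
        H = doubleHadamard(H);

    // Verify H * H^T == n * I, i.e. distinct rows are orthogonal.
    const size_t n = H.size();
    for (size_t i = 0; i < n; ++i)
        for (size_t j = 0; j < n; ++j) {
            long dot = 0;
            for (size_t k = 0; k < n; ++k) dot += H[i][k] * H[j][k];
            assert(dot == (i == j ? static_cast<long>(n) : 0));
        }
    std::cout << "H_" << n << " satisfies H H^T = " << n << " I\n";
}

Each doubling step is exactly the Kronecker product with $H_2$, so iterating from $H_1$ reproduces the Walsh matrices $H_2, H_4, H_8, \ldots$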
diff --git a/wiki/wikipedia/3199.txt b/wiki/wikipedia/3199.txt deleted file mode 100644 index bb55660e5797a3875131a840664db3c835ba019b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3199.txt +++ /dev/null @@ -1,108 +0,0 @@ -In information theory, Gibbs' inequality is a statement about the information entropy of a discrete probability distribution. Several other bounds on the entropy of probability distributions are derived from Gibbs' inequality, including Fano's inequality. - -It was first presented by J. Willard Gibbs in the 19th century. - -Suppose that -$$ - P = \{ p_1 , \ldots , p_n \} -$$ - -is a discrete probability distribution. Then for any other probability distribution -$$ - Q = \{ q_1 , \ldots , q_n \} -$$ - -the following inequality between positive quantities (since $p_i$ and $q_i$ are between zero and one) holds: -$$ - - \sum_{i=1}^n p_i \log p_i \leq - \sum_{i=1}^n p_i \log q_i -$$ - -with equality if and only if -$$ - p_i = q_i -$$ - -for all i. Put in words, the information entropy of a distribution P is less than or equal to its cross entropy with any other distribution Q. - -The difference between the two quantities is the Kullback–Leibler divergence or relative entropy, so the inequality can also be written: -$$ - D_{\mathrm{KL}}(P\|Q) \equiv \sum_{i=1}^n p_i \log \frac{p_i}{q_i} \geq 0. -$$ - -Note that the use of base-2 logarithms is optional, and - -allows one to refer to the quantity on each side of the inequality as an - -"average surprisal" measured in bits. - -For simplicity, we prove the statement using the natural logarithm (ln), since -$$ - \log a = \frac{ \ln a }{ \ln 2 }, -$$ - -the particular logarithm we choose only scales the relationship. - -Let $I$ denote the set of all $i$ for which $p_i$ is non-zero. Then, since $ \ln x \leq x-1 $ for all x > 0, with equality if and only if x=1, we have: -$$ -- \sum_{i \in I} p_i \ln \frac{q_i}{p_i} \geq - \sum_{i \in I} p_i \left( \frac{q_i}{p_i} - 1 \right) = - \sum_{i \in I} q_i + \sum_{i \in I} p_i = - \sum_{i \in I} q_i + 1 \geq 0. -$$ - -The last inequality is a consequence of the $p_i$ and $q_i$ being part of a probability distribution. Specifically, the sum of all non-zero values is 1. Some non-zero $q_i$, however, may have been excluded since the choice of indices is conditioned upon the $p_i$ being non-zero. Therefore the sum of the $q_i$ may be less than 1. - -So far, over the index set $I$, we have: -$$ - - \sum_{i \in I} p_i \ln \frac{q_i}{p_i} \geq 0 -$$, - -or equivalently -$$ - - \sum_{i \in I} p_i \ln q_i \geq - \sum_{i \in I} p_i \ln p_i -$$. - -Both sums can be extended to all $i=1, \ldots, n$, i.e. including $p_i=0$, by recalling that the expression $p \ln p$ tends to 0 as $p$ tends to 0, and $(-\ln q)$ tends to $\infty$ as $q$ tends to 0. We arrive at -$$ - - \sum_{i=1}^n p_i \ln q_i \geq - \sum_{i=1}^n p_i \ln p_i -$$ - -For equality to hold, we require - -# $ \frac{q_i}{p_i} = 1$ for all $i \in I $ so that the equality $\ln \frac{q_i}{p_i} = \frac{q_i}{p_i} -1 $ holds, - -# and $ \sum_{i \in I} q_i = 1$ which means $q_i=0$ if $i\notin I$, that is, $q_i=0$ if $p_i=0$. - -This can happen if and only if $p_i = q_i $ for $i = 1, \ldots, n$. - -The result can alternatively be proved using Jensen's inequality, the log sum inequality, or the fact that the Kullback–Leibler divergence is a form of Bregman divergence.
Below we give a proof based on Jensen's inequality: - -Because log is a concave function, we have that: -$$ -\sum_i p_i \log\frac{q_i}{p_i} \le \log\sum_i p_i\frac{q_i}{p_i} = \log\sum_i q_i \le 0 -$$ - -where the first inequality is due to Jensen's inequality, and the last inequality holds for the same reason given in the above proof. - -Furthermore, since $\log$ is strictly concave, by the equality condition of Jensen's inequality we get equality when -$$ -\frac{q_1}{p_1} = \frac{q_2}{p_2} = \cdots = \frac{q_n}{p_n} -$$ - -and -$$ -\sum_i q_i = 1 -$$ - -Suppose that this ratio is $\sigma$; then we have that -$$ -1 = \sum_i q_i = \sum_i \sigma p_i = \sigma -$$ - -where we use the fact that $p, q$ are probability distributions. Therefore the equality happens when $p = q$. - -The entropy of $P$ is bounded by: -$$ -H(p_1, \ldots , p_n) \leq \log n. -$$ - -The proof is trivial – simply set $q_i = 1/n $ for all i. diff --git a/wiki/wikipedia/32.txt b/wiki/wikipedia/32.txt deleted file mode 100644 index c654c7465b7a35dde08f657779931301a382a803..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/32.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, the Honda–Tate theorem classifies abelian varieties over finite fields up to isogeny. It states that the isogeny classes of simple abelian varieties over a finite field of order q correspond to algebraic integers all of whose conjugates (given by eigenvalues of the Frobenius endomorphism on the first cohomology group or Tate module) have absolute value $\sqrt{q}$. - -Tate showed that the map taking an isogeny class to the eigenvalues of the Frobenius is injective, and Honda showed that this map is surjective, and therefore a bijection. diff --git a/wiki/wikipedia/320.txt b/wiki/wikipedia/320.txt deleted file mode 100644 index 0d9c891f8f2bba37cd34cea03fb39f99b8c741b2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/320.txt +++ /dev/null @@ -1,7 +0,0 @@ -In mathematics, the Cartan–Dieudonné theorem, named after Élie Cartan and Jean Dieudonné, establishes that every orthogonal transformation in an n-dimensional symmetric bilinear space can be described as the composition of at most n reflections. - -The notion of a symmetric bilinear space is a generalization of Euclidean space whose structure is defined by a symmetric bilinear form (which need not be positive definite, so is not necessarily an inner product – for instance, a pseudo-Euclidean space is also a symmetric bilinear space). The orthogonal transformations in the space are those automorphisms which preserve the value of the bilinear form between every pair of vectors; in Euclidean space, this corresponds to preserving distances and angles. These orthogonal transformations form a group under composition, called the orthogonal group. - -For example, in the two-dimensional Euclidean plane, every orthogonal transformation is either a reflection across a line through the origin or a rotation about the origin (which can be written as the composition of two reflections). Any arbitrary composition of such rotations and reflections can be rewritten as a composition of no more than 2 reflections. Similarly, in three-dimensional Euclidean space, every orthogonal transformation can be described as a single reflection, a rotation (2 reflections), or an improper rotation (3 reflections). In four dimensions, double rotations are added that represent 4 reflections. - -Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2.
Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections. diff --git a/wiki/wikipedia/3200.txt b/wiki/wikipedia/3200.txt deleted file mode 100644 index db171390f01a5f31bddff82c4e5dc919262aa4a1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3200.txt +++ /dev/null @@ -1,171 +0,0 @@ -In computer science, the dining philosophers problem is an example problem often used in concurrent algorithm design to illustrate synchronization issues and techniques for resolving them. - -It was originally formulated in 1965 by Edsger Dijkstra as a student exam exercise, presented in terms of computers competing for access to tape drive peripherals. - -Soon after, Tony Hoare gave the problem its present formulation. - -Five philosophers, numbered from 0 through 4, are living in a house where the table is laid for them, each philosopher having his own place at the table. Their only problem - besides those of philosophy - is that the dish served is a very difficult kind of spaghetti that has to be eaten with two forks. There are two forks next to each plate, so that presents no difficulty: as a consequence, however, no two neighbours may be eating simultaneously. - -A very naive solution associates with each fork a binary semaphore with the initial value = 1 (indicating that the fork is free) and, naming these semaphores in each philosopher's local terminology, we could think the following solution for the philosopher's life adequate. - -But this solution – although it guarantees that no two neighbours are eating simultaneously – must be rejected because it contains the danger of the deadly embrace (deadlock). When all five philosophers get hungry simultaneously, each will grab his left-hand fork and from that moment onwards the group is stuck. - -In order to be able to give a formal description, we associate with each philosopher a state variable, "C" say, where C[i] = 0 means: philosopher i is thinking, C[i] = 1 means: philosopher i is hungry, C[i] = 2 means: philosopher i is eating. - -Now each philosopher will go cyclically through the states 0, 1, 2, 0, ... - -Each philosopher must alternately think and eat. However, a philosopher can only eat spaghetti when they have both left and right forks. Each fork can be held by only one philosopher at a time and so a philosopher can use the fork only if it is not being used by another philosopher. After an individual philosopher finishes eating, they need to put down both forks so that the forks become available to others. A philosopher can only take the fork on their right or the one on their left as they become available, and they cannot start eating before getting both forks. - -Eating is not limited by the remaining amounts of spaghetti or stomach space; an infinite supply and an infinite demand are assumed. - -The problem is how to design a discipline of behavior (a concurrent algorithm) such that no philosopher will starve; i.e., each can forever continue to alternate between eating and thinking, assuming that no philosopher can know when others may want to eat or think. - -The problem was designed to illustrate the challenges of avoiding deadlock, a system state in which no progress is possible.
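A minimal C++11 sketch of the naive discipline just described (illustrative code, not Dijkstra's original notation) makes the danger concrete: every philosopher locks the left fork first, so if all five threads take their left fork at the same moment, each then blocks forever on its right fork:

#include <mutex>
#include <thread>
#include <vector>

// Naive discipline: left fork first, then right fork.
// If all philosophers grab their left fork at once, the system deadlocks.
int main() {
    const int n = 5;
    std::vector<std::mutex> fork(n);      // one binary "semaphore" per fork
    std::vector<std::thread> diner;
    for (int i = 0; i < n; ++i)
        diner.emplace_back([i, n, &fork] {
            for (;;) {                    // philosophers cycle forever
                fork[i].lock();           // left fork
                fork[(i + 1) % n].lock(); // right fork -- possible deadlock here
                /* eat */
                fork[(i + 1) % n].unlock();
                fork[i].unlock();
                /* think */
            }
        });
    for (auto& t : diner) t.join();
}

The resource hierarchy solution discussed below avoids this by changing only the order in which the two forks are requested.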
- -To see that a proper solution to this problem is not obvious, consider a proposal in which each philosopher is instructed to behave as follows: - -* think until the left fork is available; when it is, pick it up; - -* think until the right fork is available; when it is, pick it up; - -* when both forks are held, eat for a fixed amount of time; - -* then, put the right fork down; - -* then, put the left fork down; - -* repeat from the beginning. - -This attempted solution fails because it allows the system to reach a deadlock state, in which no progress is possible. This is a state in which each philosopher has picked up the fork to the left, and is waiting for the fork to the right to become available. With the given instructions, this state can be reached, and when it is reached, each philosopher will eternally wait for another (the one to the right) to release a fork. - -Resource starvation might also occur independently of deadlock if a particular philosopher is unable to acquire both forks because of a timing problem. For example, there might be a rule that the philosophers put down a fork after waiting ten minutes for the other fork to become available and wait a further ten minutes before making their next attempt. This scheme eliminates the possibility of deadlock (the system can always advance to a different state) but still suffers from the problem of livelock. If all five philosophers appear in the dining room at exactly the same time and each picks up the left fork at the same time the philosophers will wait ten minutes until they all put their forks down and then wait a further ten minutes before they all pick them up again. - -Mutual exclusion is the basic idea of the problem; the dining philosophers create a generic and abstract scenario useful for explaining issues of this type. The failures these philosophers may experience are analogous to the difficulties that arise in real computer programming when multiple programs need exclusive access to shared resources. These issues are studied in concurrent programming. The original problems of Dijkstra were related to external devices like tape drives. However, the difficulties exemplified by the dining philosophers problem arise far more often when multiple processes access sets of data that are being updated. Complex systems such as operating system kernels use thousands of locks and synchronizations that require strict adherence to methods and protocols if such problems as deadlock, starvation, and data corruption are to be avoided. - -Dijkstra's solution uses one mutex, one semaphore per philosopher and one state variable per philosopher. This solution is more complex than the resource hierarchy solution. - -This solution assigns a partial order to the resources (the forks, in this case), and establishes the convention that all resources will be requested in order, and that no two resources unrelated by order will ever be used by a single unit of work at the same time. Here, the resources (forks) will be numbered 1 through 5 and each unit of work (philosopher) will always pick up the lower-numbered fork first, and then the higher-numbered fork, from among the two forks they plan to use. The order in which each philosopher puts down the forks does not matter. In this case, if four of the five philosophers simultaneously pick up their lower-numbered fork, only the highest-numbered fork will remain on the table, so the fifth philosopher will not be able to pick up any fork. 
Moreover, only one philosopher will have access to that highest-numbered fork, so he will be able to eat using two forks. - -While the resource hierarchy solution avoids deadlocks, it is not always practical, especially when the list of required resources is not completely known in advance. For example, if a unit of work holds resources 3 and 5 and then determines it needs resource 2, it must release 5, then 3 before acquiring 2, and then it must re-acquire 3 and 5 in that order. Computer programs that access large numbers of database records would not run efficiently if they were required to release all higher-numbered records before accessing a new record, making the method impractical for that purpose. - -The resource hierarchy solution is not fair. If philosopher 1 is slow to take a fork, and if philosopher 2 is quick to think and pick its forks back up, then philosopher 1 will never get to pick up both forks. A fair solution must guarantee that each philosopher will eventually eat, no matter how slowly that philosopher moves relative to the others. - -The following source code is a C++11 implementation of the resource hierarchy solution for three philosophers; the listing is a reconstruction of a damaged original, so the printed messages and timing constants are illustrative. The sleep_for() function simulates the time normally spent with business logic.

#include <chrono>
#include <cstdlib>
#include <iostream>
#include <mutex>
#include <thread>

using namespace std;

void phil(int ph, mutex& ml, mutex& mh, mutex& mo) {
    for (;;) { // prevent thread from termination
        int duration = rand() % 100 + 1;
        {
            // Block { } limits scope of lock
            lock_guard<mutex> moGuard(mo);
            cout << ph << " thinks for " << duration << "ms" << endl;
        }
        this_thread::sleep_for(chrono::milliseconds(duration));
        {
            lock_guard<mutex> moGuard(mo);
            cout << "\t\t" << ph << " is hungry" << endl;
        }
        lock_guard<mutex> mlGuard(ml); // always take the lower-numbered fork first ...
        this_thread::sleep_for(chrono::milliseconds(30));
        lock_guard<mutex> mhGuard(mh); // ... and only then the higher-numbered fork
        duration = rand() % 100 + 1;
        {
            lock_guard<mutex> moGuard(mo);
            cout << "\t\t\t\t" << ph << " eats for " << duration << "ms" << endl;
        }
        this_thread::sleep_for(chrono::milliseconds(duration));
    } // both forks are released when the lock_guards go out of scope
}

int main() {
    mutex m1, m2, m3; // the three forks, ordered m1 < m2 < m3
    mutex mo;         // serializes console output
    thread t1([&] { phil(1, m1, m2, mo); });
    thread t2([&] { phil(2, m2, m3, mo); });
    thread t3([&] { phil(3, m1, m3, mo); }); // philosopher 3 also takes the lower fork (m1) first
    t1.join();
    t2.join();
    t3.join();
}

Another approach is to guarantee that a philosopher can only pick up both forks or none by introducing an arbitrator, e.g., a waiter. In order to pick up the forks, a philosopher must ask permission of the waiter. The waiter gives permission to only one philosopher at a time until the philosopher has picked up both of their forks. Putting down a fork is always allowed. The waiter can be implemented as a mutex. - -In addition to introducing a new central entity (the waiter), this approach can result in reduced parallelism: if a philosopher is eating and one of his neighbors is requesting the forks, all other philosophers must wait until this request has been fulfilled even if forks for them are still available. - -A solution presented by William Stallings is to allow a maximum of n-1 philosophers to sit down at any time. The last philosopher would have to wait (for example, using a semaphore) for someone to finish dining before they "sit down" and request access to any fork. This guarantees at least one philosopher may always acquire both forks, allowing the system to make progress. - -In 1984, K. Mani Chandy and J. Misra proposed a different solution to the dining philosophers problem to allow for arbitrary agents (numbered P1, ..., Pn) to contend for an arbitrary number of resources, unlike Dijkstra's solution. It is also completely distributed and requires no central authority after initialization. However, it violates the requirement that "the philosophers do not speak to each other" (due to the request messages). - -#For every pair of philosophers contending for a resource, create a fork and give it to the philosopher with the lower ID (n for agent Pn). Each fork can either be dirty or clean. Initially, all forks are dirty.
- -#When a philosopher wants to use a set of resources (i.e., eat), said philosopher must obtain the forks from their contending neighbors. For all such forks the philosopher does not have, they send a request message. - -#When a philosopher with a fork receives a request message, they keep the fork if it is clean, but give it up when it is dirty. If the philosopher sends the fork over, they clean the fork before doing so. - -#After a philosopher is done eating, all their forks become dirty. If another philosopher had previously requested one of the forks, the philosopher that has just finished eating cleans the fork and sends it. - -This solution also allows for a large degree of concurrency, and will solve an arbitrarily large problem. - -It also solves the starvation problem. The clean/dirty labels act as a way of giving preference to the most "starved" processes, and a disadvantage to processes that have just "eaten". One could compare their solution to one where philosophers are not allowed to eat twice in a row without letting others use the forks in between. Chandy and Misra's solution is more flexible than that, but has an element tending in that direction. - -In their analysis, they derive a system of preference levels from the distribution of the forks and their clean/dirty states. They show that this system may describe a directed acyclic graph, and if so, the operations in their protocol cannot turn that graph into a cyclic one. This guarantees that deadlock cannot occur. However, if the system is initialized to a perfectly symmetric state, like all philosophers holding their left side forks, then the graph is cyclic at the outset, and their solution cannot prevent a deadlock. Initializing the system so that philosophers with lower IDs have dirty forks ensures the graph is initially acyclic. diff --git a/wiki/wikipedia/3201.txt b/wiki/wikipedia/3201.txt deleted file mode 100644 index c174c27e65f090a3f00f0250448353851ecfc58a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3201.txt +++ /dev/null @@ -1,11 +0,0 @@ -A term graph is a representation of an expression in a formal language as a generalized graph whose vertices are terms. Term graphs are a more powerful form of representation than expression trees because they can represent not only common subexpressions (i.e. they can take the structure of a directed acyclic graph) but also cyclic/recursive subexpressions (cyclic digraphs). - -Abstract syntax trees are not capable of representing shared subexpressions since each tree node can only have one parent; this simplicity comes at a cost of efficiency due to redundant duplicate computations of identical terms. For this reason term graphs are often used as an intermediate language at a subsequent compilation stage to abstract syntax tree construction via parsing. - -The phrase "term graph rewriting" is often used when discussing graph rewriting methods for transforming expressions in formal languages. Considered from the point of view of graph grammars, term graphs are not regular graphs, but hypergraphs where an n-ary word will have a particular subgraph in first place, another in second place, and so on, a distinction that does not exist in the usual undirected graphs studied in graph theory. - -Term graphs are a prominent topic in programming language research since term graph rewriting rules are capable of formally expressing a compiler's operational semantics. 
Term graphs are also used as abstract machines capable of modelling chemical and biological computations as well as graphical calculi such as concurrency models. Term graphs can perform automated verification and logical programming since they are well-suited to representing quantified statements in first order logic. Symbolic programming software is another application for term graphs, which are capable of representing and performing computation with abstract algebraic structures such as groups, fields and rings. - -The TERMGRAPH conference focuses entirely on research into term graph rewriting and its applications. - -Term graphs are also used in type inference, where the graph structure aids in implementing type unification. diff --git a/wiki/wikipedia/3202.txt b/wiki/wikipedia/3202.txt deleted file mode 100644 index ccf43c273376b6576c809bb46ce132dcb5cdcd12..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3202.txt +++ /dev/null @@ -1,102 +0,0 @@ -In trigonometry, the law of cotangents is a relationship among the lengths of the sides of a triangle and the cotangents of the halves of the three angles. This is also known as the Cot Theorem. - -Just as three quantities whose equality is expressed by the law of sines are equal to the diameter of the circumscribed circle of the triangle (or to its reciprocal, depending on how the law is expressed), so also the law of cotangents relates the radius of the inscribed circle of a triangle (the inradius) to its sides and angles. - -Using the usual notations for a triangle (see the figure at the upper right), where a, b, c are the lengths of the three sides, A, B, C are the vertices opposite those three respective sides, α, β, γ are the corresponding angles at those vertices, s is the semi-perimeter, that is, s = a + b + c/2, and r is the radius of the inscribed circle, the law of cotangents states that -$$ -\frac{\cot\left(\tfrac{\alpha}{2}\right)}{s-a} = \frac{\cot\left(\tfrac{\beta}{2}\right)}{s-b} = \frac{\cot\left(\tfrac{\gamma}{2}\right)}{s-c} = \frac{1}{r} -$$ - -and furthermore that the inradius is given by -$$ -r = \sqrt{\frac{(s-a)(s-b)(s-c)}{s}}. -$$ - -In the upper figure, the points of tangency of the incircle with the sides of the triangle break the perimeter into 6 segments, in 3 pairs. In each pair the segments are of equal length. For example, the 2 segments adjacent to vertex A are equal. If we pick one segment from each pair, their sum will be the semiperimeter s. An example of this is the segments shown in color in the figure. The two segments making up the red line add up to a, so the blue segment must be of length s − a. Obviously, the other five segments must also have lengths s − a, s − b, or s − c, as shown in the lower figure. - -By inspection of the figure, using the definition of the cotangent function, we have -$$ -\cot\left(\frac{\alpha}{2}\right) =\frac{s-a}{r} -$$ - -and similarly for the other two angles, proving the first assertion. - -For the second one—the inradius formula—we start from the general addition formula: -$$ - \cot (u+v+w) = \frac{\cot u +\cot v +\cot w - \cot u \cot v \cot w }{1-\cot u \cot v - \cot v \cot w -\cot w \cot u}. -$$ - -Applying to cot, we obtain: -$$ - \cot\left(\frac{\alpha}{2}\right) \cot \left(\frac{\beta}{2}\right) \cot \left(\frac{\gamma}{2}\right) = \cot\left(\frac{\alpha}{2}\right) + \cot \left(\frac{\beta}{2}\right) + \cot \left(\frac{\gamma}{2}\right). 
-$$ - -(This is also the triple cotangent identity) - -Substituting the values obtained in the first part, we get: -$$ - \frac {(s-a)}r \frac {(s-b)}r \frac {(s-c)}r = \frac {s-a}r + \frac {s-b}r +\frac {s-c}r =\frac{3s-2s}r=\frac{s}r. -$$ - -Multiplying through by $r^3/s$ gives the value of $r^2$, proving the second assertion. - -A number of other results can be derived from the law of cotangents. - -* Heron's formula. Note that the area of triangle ABC is also divided into 6 smaller triangles, also in 3 pairs, with the triangles in each pair having the same area. For example, the two triangles near vertex A, being right triangles of width s − a and height r, each have an area of $\tfrac12 r(s-a)$. So those two triangles together have an area of r(s − a), and the area S of the whole triangle is therefore - - - -\begin{align} - -S &= r(s-a) + r(s-b) + r(s-c) = r\bigl(3s - (a+b+c)\bigr) = r(3s - 2s) = rs \\[8pt] - -\end{align} - - - -This gives the result - -$S = \sqrt{s(s-a)(s-b)(s-c)}$ - -as required. - -*Mollweide's first formula. From the addition formula and the law of cotangents we have -$$ -\frac {\sin \left( \tfrac{\alpha}{2}-\tfrac{\beta}{2} \right) }{\sin \left( \tfrac{\alpha}{2}+\tfrac{\beta}{2} \right) } = \frac {\cot \left( \tfrac{\beta}{2} \right) -\cot \left( \tfrac{\alpha}{2} \right) }{\cot \left( \tfrac{\beta}{2} \right) +\cot \left( \tfrac{\alpha}{2} \right) }= \frac {a-b}{2s-a-b}. -$$ - -This gives the result -$$ -\dfrac {a-b}{c}=\dfrac {\sin \left( \tfrac{\alpha}{2}-\tfrac{\beta}{2} \right)}{\cos \left( \tfrac{\gamma}{2} \right) } -$$ - -as required. - -*Mollweide's second formula. From the addition formula and the law of cotangents we have - - - -\begin{align} - -& \frac {\cos\left( \tfrac{\alpha}{2}-\tfrac{\beta}{2} \right) }{\cos\left( \tfrac{\alpha}{2}+\tfrac{\beta}{2} \right) } = - -\frac {\cot\left( \tfrac{\alpha}{2} \right) \cot\left( \tfrac{\beta}{2} \right) +1}{\cot\left( \tfrac{\alpha}{2} \right) \cot \left( \tfrac{\beta}{2} \right) -1} \\[6pt] - -= {} & \frac {\cot \left( \tfrac{\alpha}{2} \right) +\cot \left( \tfrac{\beta}{2} \right) +2\cot \left( \tfrac{\gamma}{2} \right) }{\cot \left( \tfrac{\alpha}{2} \right) +\cot \left( \tfrac{\beta}{2} \right) } = - -\frac {4s-a-b-2c}{2s-a-b}. - -\end{align} - - - -Here, an extra step is required to transform a product into a sum, according to the sum/product formula. - -This gives the result -$$ -\dfrac {b+a}{c} = \dfrac{\cos \left( \tfrac{\alpha}{2}-\tfrac{\beta}{2} \right) }{\sin \left( \tfrac{\gamma}{2} \right) } -$$ - -as required. - -*The law of tangents can also be derived from this. diff --git a/wiki/wikipedia/3203.txt b/wiki/wikipedia/3203.txt deleted file mode 100644 index 202697c1e47c7377882701ff3dd27dcc69ef13fb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3203.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, the Néron–Ogg–Shafarevich criterion states that if A is an elliptic curve or abelian variety over a local field K and ℓ is a prime not dividing the characteristic of the residue field of K then A has good reduction if and only if the ℓ-adic Tate module $T_\ell$ of A is unramified. Ogg introduced the criterion for elliptic curves. Serre and Tate used the results of Néron to extend it to abelian varieties,
diff --git a/wiki/wikipedia/3204.txt b/wiki/wikipedia/3204.txt deleted file mode 100644 index 52308500a8424ab3ad93918c7c640ca263d47572..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3204.txt +++ /dev/null @@ -1,286 +0,0 @@ -In mathematics, the Schuette–Nesbitt formula is a generalization of the inclusion–exclusion principle. It is named after Donald R. Schuette and Cecil J. Nesbitt. - -The probabilistic version of the Schuette–Nesbitt formula has practical applications in actuarial science, where it is used to calculate the net single premium for life annuities and life insurances based on the general symmetric status. - -Consider a set Ω and subsets A1, ..., Am. Let - -{{NumBlk|:|$ N(\omega)=\sum_{n=1}^m 1_{A_n}(\omega),\qquad \omega\in\Omega,$|}} - -denote the number of subsets to which ω ∈ Ω belongs, where we use the indicator functions of the sets A1, ..., Am. Furthermore, for each k ∈ {0, 1, ..., m}, let - -{{NumBlk|:|$N_k(\omega)=\sum_{\scriptstyle J\subset\{1,\ldots,m\}\atop\scriptstyle|J|=k} 1_{\cap_{j\in J}A_j}(\omega),\qquad\omega\in\Omega,$|}} - -denote the number of intersections of exactly k sets out of A1, ..., Am, to which ω belongs, where the intersection over the empty index set is defined as Ω, hence N0 = 1Ω. Let V denote a vector space over a field R such as the real or complex numbers (or more generally a module over a ring R with multiplicative identity). Then, for every choice of c0, ..., cm ∈ V, - -{{NumBlk|:|$\sum_{n=0}^m 1_{\{N=n\}}c_n = \sum_{k=0}^m N_k\sum_{l=0}^k (-1)^{k-l}\binom klc_l,$|}} - -where 1{N=n} denotes the indicator function of the set of all ω ∈ Ω with N(ω) = n, and $\textstyle\binom kl$ is a binomial coefficient. Equality () says that the two V-valued functions defined on Ω are the same. - -We prove that () holds pointwise. Take ω ∈ Ω and define n = N(ω). - -Then the left-hand side of () equals cn. - -Let I denote the set of all those indices i ∈ {1, ..., m} such that ω ∈ Ai, hence I contains exactly n indices. - -Given J ⊂ {1, ..., m} with k elements, then ω belongs to the intersection ∩j∈JAj if and only if J is a subset of I. - -By the combinatorial interpretation of the binomial coefficient, there are Nk = $\textstyle\binom nk$ such subsets (the binomial coefficient is zero for k > n). - -Therefore the right-hand side of () evaluated at ω equals - -\sum_{k=0}^m \binom nk\sum_{l=0}^k (-1)^{k-l}\binom klc_l - -=\sum_{l=0}^m\underbrace{\sum_{k=l}^n (-1)^{k-l}\binom nk \binom kl}_{=:(*)} c_l, - -where we used that the first binomial coefficient is zero for k > n. - -Note that the sum (*) is empty and therefore defined as zero for n < l. - -Using the factorial formula for the binomial coefficients, it follows that - - - -\begin{align} - -(*) - -&=\sum_{k=l}^n (-1)^{k-l}\frac{n!}{k!(n-k)!}\frac{k!}{l!(k-l)!}\\ - -&=\underbrace{\frac{n!}{l!(n-l)!}}_{=\binom nl}\underbrace{\sum_{k=l}^n (-1)^{k-l}\frac{(n-l)!}{(n-k)!(k-l)!}}_{=:(**)}\\ - -\end{align} - - - -Rewriting (**) with the summation index j = k − l und using the binomial formula for the third equality shows that - - - -\begin{align} - -(**) - -&=\sum_{j=0}^{n-l} (-1)^{j}\frac{(n-l)!}{(n-l-j)!j!}\\ - -&=\sum_{j=0}^{n-l} (-1)^{j}\binom{n-l}{j} - -=(1-1)^{n-l} - -=\delta_{ln}, - -\end{align} - - - -which is the Kronecker delta. Substituting this result into the above formula and noting that n choose l equals 1 for l = n, it follows that the right-hand side of () evaluated at ω also reduces to cn. - -As a special case, take for V the polynomial ring R[x] with the indeterminate x. 
Then () can be rewritten in a more compact way as - -{{NumBlk|:|$\sum_{n=0}^m 1_{\{N=n\}}x^n = \sum_{k=0}^m N_k(x-1)^k.$|}} - -This is an identity for two polynomials whose coefficients depend on ω, which is implicit in the notation. - -Proof of () using (): Substituting cn = xn for n ∈ {0, ..., m} into () and using the binomial formula shows that - - - -\sum_{n=0}^m 1_{\{N=n\}}x^n - -=\sum_{k=0}^m N_k\underbrace{\sum_{l=0}^k \binom kl(-1)^{k-l}x^l}_{=(x-1)^k}, - - - -which proves (). - -Consider the linear shift operator E and the linear difference operator Δ, which we define here on the sequence space of V by - -\begin{align} - -E:V^{\mathbb{N}_0}&\to V^{\mathbb{N}_0},\\ - -E(c_0,c_1,c_2,c_3,\ldots)&\mapsto(c_1,c_2,c_3,\ldots),\\ - -\end{align} - -and - -\begin{align} - -\Delta:V^{\mathbb{N}_0}&\to V^{\mathbb{N}_0},\\ - -\Delta(c_0,c_1,c_2,c_3\ldots)&\mapsto(c_1-c_0,c_2-c_1,c_3-c_2,\ldots).\\ - -\end{align} - -Substituting x = E in () shows that - -{{NumBlk|:|$\sum_{n=0}^m 1_{\{N=n\}}E^n = \sum_{k=0}^m N_k\Delta^k,$|}} - -where we used that Δ = E – I with I denoting the identity operator. Note that E0 and Δ0 equal the identity operator I on the sequence space, Ek and Δk denote the k-fold composition. - -{{math proof|title=Direct proof of () by the operator method|proof= - -To prove (), we first want to verify the equation - -{{NumBlk|:|$\sum_{n=0}^m 1_{\{N=n\}}E^n=\prod_{j=1}^m(1_{A_j^{\mathrm c}}I+1_{A_j}E)$|}} - -involving indicator functions of the sets A1, ..., Am and their complements with respect to Ω. Suppose an ω from Ω belongs to exactly k sets out of A1, ..., Am, where k ∈ {0, ..., m}, for simplicity of notation say that ω only belongs to A1, ..., Ak. Then the left-hand side of () is Ek. On the right-hand side of (), the first k factors equal E, the remaining ones equal I, their product is also Ek, hence the formula () is true. - -Note that - -\begin{align} - -1_{A_j^{\mathrm c}}I+1_{A_j}E - -&=I-1_{A_j}I+1_{A_j}E\\ - -&=I+1_{A_j}(E-I)=I+1_{A_j}\Delta,\qquad j\in\{0,\ldots,m\}. - -\end{align} - -Inserting this result into equation () and expanding the product gives - -\sum_{n=0}^m 1_{\{N=n\}}E^n - -=\sum_{k=0}^m\sum_{\scriptstyle J\subset\{1,\ldots,m\}\atop\scriptstyle|J|=k} - -1_{\cap_{j\in J}A_j}\Delta^k, - - - -because the product of indicator functions is the indicator function of the intersection. Using the definition (), the result () follows. - -}} - -Let (Δkc)0 denote the 0th component of the k-fold composition Δk applied to c = (c0, c1, ..., cm, ...), where Δ0 denotes the identity. Then () can be rewritten in a more compact way as - -{{NumBlk|:|$\sum_{n=0}^m 1_{\{N=n\}}c_n = \sum_{k=0}^m N_k(\Delta^k c)_0.$|}} - -Consider arbitrary events A1, ..., Am in a probability space (Ω, F, $\mathbb{P}$) and let E denote the expectation operator. Then N from () is the random number of these events which occur simultaneously. Using Nk from (), define - -{{NumBlk|:|$S_k=\mathbb{E}[N_k]=\sum_{\scriptstyle J\subset\{1,\ldots,m\}\atop\scriptstyle|J|=k} \mathbb{P}\biggl(\bigcap_{j\in J}A_j\biggr),\qquad k\in\{0,\ldots,m\},$|}} - -where the intersection over the empty index set is again defined as Ω, hence S0 = 1. If the ring R is also an algebra over the real or complex numbers, then taking the expectation of the coefficients in () and using the notation from (), - -{{NumBlk|:|$\sum_{n=0}^m \mathbb{P}(N=n)x^n = \sum_{k=0}^m S_k(x-1)^k$|}} - -in R[x]. If R is the field of real numbers, then this is the probability-generating function of the probability distribution of N. 
- -Similarly, () and () yield - -{{NumBlk|:|$\sum_{n=0}^m \mathbb{P}(N=n)E^n=\sum_{k=0}^m S_k\Delta^k$|}} - -and, for every sequence c = (c0, c1, c2, c3, ..., cm, ...), - -{{NumBlk|:|$\sum_{n=0}^m \mathbb{P}(N=n)c_n=\sum_{k=0}^m S_k(\Delta^kc)_0.$|}} - -The quantity on the left-hand side of () is the expected value of cN. - -#In actuarial science, the name Schuette–Nesbitt formula refers to equation (), where V denotes the set of real numbers. - -#The left-hand side of equation () is a convex combination of the powers of the shift operator E, it can be seen as the expected value of random operator EN. Accordingly, the left-hand side of equation () is the expected value of random component cN. Note that both have a discrete probability distribution with finite support, hence expectations are just the well-defined finite sums. - -#The probabilistic version of the inclusion–exclusion principle can be derived from equation () by choosing the sequence c = (0, 1, 1, ...): the left-hand side reduces to the probability of the event {N ≥ 1}, which is the union of A1, ..., Am, and the right-hand side is S1 – S2 + S3 – ... – (–1)mSm, because (Δ0c)0 = 0 and (Δkc)0 = –(–1)k for k ∈ {1, ..., m}. - -#Equations (), (), () and () are also true when the shift operator and the difference operator are considered on a subspace like the ℓ p spaces. - -#If desired, the formulae (), (), () and () can be considered in finite dimensions, because only the first m + 1 components of the sequences matter. Hence, represent the linear shift operator E and the linear difference operator Δ as mappings of the (m + 1)-dimensional Euclidean space into itself, given by the (m + 1) × (m + 1)-matrices - -E=\begin{pmatrix} - -0&1&0&\cdots&0\\ - -0&0&1&\ddots&\vdots\\ - -\vdots&\ddots&\ddots&\ddots&0\\ - -0&\cdots&0&0&1\\ - -0&\cdots&0&0&0 - -\end{pmatrix}, - -\qquad - -\Delta=\begin{pmatrix} - --1&1&0&\cdots&0\\ - -0&-1&1&\ddots&\vdots\\ - -\vdots&\ddots&\ddots&\ddots&0\\ - -0&\cdots&0&-1&1\\ - -0&\cdots&0&0&-1 - -\end{pmatrix}, - - - -and let I denote the (m + 1)-dimensional identity matrix. Then () and () hold for every vector c = (c0, c1, ..., cm)T in (m + 1)-dimensional Euclidean space, where the exponent T in the definition of c denotes the transpose. - -#Equations () and () hold for an arbitrary linear operator E as long as Δ is the difference of E and the identity operator I. - -#The probabilistic versions (), () and () can be generalized to every finite measure space. - -For textbook presentations of the probabilistic Schuette–Nesbitt formula () and their applications to actuarial science, cf. Gerber. Chapter 8, or Bowers, Chapter 18 and the Appendix, pp. 577–578. - -For independent events, the formula () appeared in a discussion of Robert P. White and T.N.E. Greville's paper by Donald R. Schuette and Cecil J. Nesbitt, see Schuette. In the two-page note Gerber, Hans U. Gerber, called it Schuette–Nesbitt formula and generalized it to arbitrary events. Christian Buchta, see Buchta, noticed the combinatorial nature of the formula and published the elementary combinatorial proof of (). - -Cecil J. Nesbitt, PhD, F.S.A., M.A.A.A., received his mathematical education at the University of Toronto and the Institute for Advanced Study in Princeton. He taught actuarial mathematics at the University of Michigan from 1938 to 1980. He served the Society of Actuaries from 1985 to 1987 as Vice-President for Research and Studies. Professor Nesbitt died in 2001. (Short CV taken from Bowers, page xv.) 
- -Donald Richard Schuette was a PhD student of C. Nesbitt, he later became professor at the University of Wisconsin–Madison. - -The probabilistic version of the Schuette–Nesbitt formula () generalizes much older formulae of Waring, which express the probability of the events {N = n} and {N ≥ n} in terms of S1, S2, ..., Sm. More precisely, with $\textstyle\binom kn$ denoting the binomial coefficient, - -{{NumBlk|:|$\mathbb{P}(N=n)=\sum_{k=n}^m(-1)^{k-n}\binom kn S_k, \qquad n\in\{0,\ldots,m\},$|}} - -and - -{{NumBlk|:|$\mathbb{P}(N\ge n)=\sum_{k=n}^m(-1)^{k-n}\binom{k-1}{n-1}S_k,\qquad n\in\{1,\ldots,m\},$|}} - -see Feller, Sections IV.3 and IV.5, respectively. - -To see that these formulae are special cases of the probabilistic version of the Schuette–Nesbitt formula, note that by the binomial theorem -$$ -\Delta^k=(E-I)^k=\sum_{j=0}^k\binom kj (-1)^{k-j}E^j,\qquad k\in\mathbb{N}_0. -$$ - -Applying this operator identity to the sequence c = (0, ..., 0, 1, 0, 0, ...) with n leading zeros and noting that (E jc)0 = 1 if j = n and (E jc)0 = 0 otherwise, the formula () for {N = n} follows from (). - -Applying the identity to c = (0, ..., 0, 1, 1, 1, ...) with n leading zeros and noting that (E jc)0 = 1 if j ≥ n and (E jc)0 = 0 otherwise, equation () implies that -$$ -\mathbb{P}(N\ge n)=\sum_{k=n}^m S_k\sum_{j=n}^k\binom kj(-1)^{k-j}. -$$ - -Expanding (1 – 1)k using the binomial theorem and using equation (11) of the formulas involving binomial coefficients, we obtain - -\sum_{j=n}^k\binom kj(-1)^{k-j} - -=-\sum_{j=0}^{n-1}\binom kj(-1)^{k-j} - -=(-1)^{k-n}\binom{k-1}{n-1}. - -Hence, we have the formula () for {N ≥ n}. - -Problem: Suppose there are m persons aged x1, ..., xm with remaining random (but independent) lifetimes T1, ..., Tm. Suppose the group signs a life insurance contract which pays them after t years the amount cn if exactly n persons out of m are still alive after t years. How high is the expected payout of this insurance contract in t years? - -Solution: Let Aj denote the event that person j survives t years, which means that Aj = {Tj > t}. In actuarial notation the probability of this event is denoted by t pxj and can be taken from a life table. Use independence to calculate the probability of intersections. Calculate S1, ..., Sm and use the probabilistic version of the Schuette–Nesbitt formula () to calculate the expected value of cN. - -Let σ be a random permutation of the set {1, ..., m} and let Aj denote the event that j is a fixed point of σ, meaning that Aj = {σ(j) = j}. When the numbers in J, which is a subset of {1, ..., m}, are fixed points, then there are (m – |J|)! ways to permute the remaining m – |J| numbers, hence -$$ -\mathbb{P}\biggl(\bigcap_{j\in J}A_j\biggr)=\frac{(m-|J|)!}{m!}. -$$ - -By the combinatorical interpretation of the binomial coefficient, there are $\textstyle\binom mk$ different choices of a subset J of {1, ..., m} with k elements, hence () simplifies to -$$ -S_k=\binom mk \frac{(m-k)!}{m!}=\frac1{k!}. -$$ - -Therefore, using (), the probability-generating function of the number N of fixed points is given by -$$ -\mathbb{E}[x^N]=\sum_{k=0}^m\frac{(x-1)^k}{k!},\qquad x\in\mathbb{R}. -$$ - -This is the partial sum of the infinite series giving the exponential function at x – 1, which in turn is the probability-generating function of the Poisson distribution with parameter 1. Therefore, as m tends to infinity, the distribution of N converges to the Poisson distribution with parameter 1. 
diff --git a/wiki/wikipedia/3205.txt b/wiki/wikipedia/3205.txt deleted file mode 100644 index 8be98d9797e371ec56ff70e43a37234c2f4a5d5c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3205.txt +++ /dev/null @@ -1 +0,0 @@ -#redirect Filling area conjecture diff --git a/wiki/wikipedia/3206.txt b/wiki/wikipedia/3206.txt deleted file mode 100644 index b87400488d2686972137fc732797eb6d5149e29c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3206.txt +++ /dev/null @@ -1,11 +0,0 @@ -In mathematical logic, the Friedman translation is a certain transformation of intuitionistic formulas. Among other things it can be used to show that the Π02-theorems of various first-order theories of classical mathematics are also theorems of intuitionistic mathematics. It is named after its discoverer, Harvey Friedman. - -Let A and B be intuitionistic formulas, where no free variable of B is quantified in A. The translation AB is defined by replacing each atomic subformula C of A by C ∨ B. For purposes of the translation, ⊥ is considered to be an atomic formula as well, hence it is replaced with ⊥ ∨ B (which is equivalent to B). Note that ¬A is defined as an abbreviation for A → ⊥, hence (¬A)B = AB → B. - -The Friedman translation can be used to show the closure of many intuitionistic theories under the Markov rule, and to obtain partial conservativity results. The key condition is that the $\Delta^0_0$ sentences of the logic be decidable, allowing the unquantified theorems of the intuitionistic and classical theories to coincide. - -For example, if A is provable in Heyting arithmetic (HA), then AB is also provable in HA. Moreover, if A is a Σ01-formula, then AB is in HA equivalent to A ∨ B. This implies that: - -*Heyting arithmetic is closed under the primitive recursive Markov rule (MPPR): if the formula ¬¬A is provable in HA, where A is a Σ01-formula, then A is also provable in HA. - -*Peano arithmetic is Π02-conservative over Heyting arithmetic: if Peano arithmetic proves a Π02-formula A, then A is already provable in HA. diff --git a/wiki/wikipedia/3207.txt b/wiki/wikipedia/3207.txt deleted file mode 100644 index 4aab23b84f6f0e910f996bf97bcf9fa18f7e950d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3207.txt +++ /dev/null @@ -1,55 +0,0 @@ -In graph theory, the Lovász conjecture (1969) is a classical problem on Hamiltonian paths in graphs. It says: - -Every finite connected vertex-transitive graph contains a Hamiltonian path. - -Originally László Lovász stated the problem in the opposite way, but - -this version became standard. In 1996, László Babai published a conjecture sharply contradicting this conjecture, but both conjectures remain widely open. It is not even known if a single counterexample would necessarily lead to a series of counterexamples. - -The problem of finding Hamiltonian paths in highly symmetric graphs is quite old. As Donald Knuth describes it in volume 4 of The Art of Computer Programming, the problem originated in British campanology (bell-ringing). Such Hamiltonian paths and cycles are also closely connected to Gray codes. In each case the constructions are explicit. - -Another version of Lovász conjecture states that - -Every finite connected vertex-transitive graph contains a Hamiltonian cycle except the five known counterexamples. 
- -There are 5 known examples of vertex-transitive graphs with no Hamiltonian cycles (but with Hamiltonian paths): the complete graph $K_2$, the Petersen graph, the Coxeter graph and two graphs derived from the Petersen and Coxeter graphs by replacing each vertex with a triangle. - -None of the 5 vertex-transitive graphs with no Hamiltonian cycles is a Cayley graph. This observation leads to a weaker version of the conjecture: - -Every finite connected Cayley graph contains a Hamiltonian cycle. - -The advantage of the Cayley graph formulation is that such graphs correspond to a finite group $G$ and a - -generating set $S$. Thus one can ask for which $G$ and $S$ the conjecture holds rather than attack it in full generality. - -For directed Cayley graphs (digraphs) the Lovász conjecture is false. Various counterexamples were obtained by Robert Alexander Rankin. Still, many of the below results hold in this restrictive setting. - -Every directed Cayley graph of an abelian group has a Hamiltonian path; however, every cyclic group whose order is not a prime power has a directed Cayley graph that does not have a Hamiltonian cycle. - -In 1986, D. Witte proved that the Lovász conjecture holds for the Cayley graphs of p-groups. It is open even for dihedral groups, although for special sets of generators some progress has been made. - -When group $G = S_n$ is a symmetric group, there are many attractive generating sets. For example, the Lovász conjecture holds in the following cases of generating sets: - -* $a = (1,2,\dots,n), b = (1,2)$ (long cycle and a transposition). - -* $s_1 = (1,2), s_2 = (2,3), \dots, s_{n-1} = (n-1,n)$ (Coxeter generators). In this case a Hamiltonian cycle is generated by the Steinhaus–Johnson–Trotter algorithm. - -* any set of transpositions corresponding to a labelled tree on $\{1,2,..,n\}$. - -* $a =(1,2), b = (1,2)(3,4)\cdots, c = (2,3)(4,5)\cdots$ - -Stong has shown that the conjecture holds for the Cayley graph of the wreath product Zm wr Zn with the natural minimal generating set when m is either even or three. In particular this holds for the cube-connected cycles, which can be generated as the Cayley graph of the wreath product Z2 wr Zn. - -For general finite groups, only a few results are known: - -* $S=\{a,b\}, (ab)^2=1$ (Rankin generators) - -* $S=\{a,b,c\}, a^2= b^2=c^2=[a,b]=1$ (Rapaport–Strasser generators) - -* $S=\{a,b,c\}, a^2=1, c = a^{-1}ba$ (Pak–Radoičić generators) - -* $S=\{a,b\}, a^2 = b^s =(ab)^3 = 1, $ where $|G|,s = 2~mod ~4$ (here we have (2,s,3)-presentation, Glover–Marušič theorem). - -Finally, it is known that for every finite group $G$ there exists a generating set of size at most $\log_2 |G|$ such that the corresponding Cayley graph is Hamiltonian (Pak-Radoičić). This result is based on classification of finite simple groups. - -The Lovász conjecture was also established for random generating sets of size $\Omega(\log^5 |G|)$. diff --git a/wiki/wikipedia/3208.txt b/wiki/wikipedia/3208.txt deleted file mode 100644 index 186ad0908e677321227e9fad0c7cc7f7797c5a8a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3208.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, two objects, especially systems of axioms or semantics for them, are called cryptomorphic if they are equivalent but not obviously equivalent. This word is a play on the many morphisms in mathematics, but "cryptomorphism" is only very distantly related to "isomorphism", "homomorphism", or "morphisms". 
The equivalence may possibly be in some informal sense, or may be formalized in terms of a bijection or equivalence of categories between the mathematical objects defined by the two cryptomorphic axiom systems.
-
-The word was coined by Garrett Birkhoff before 1967, for use in the third edition of his book Lattice Theory. Birkhoff did not give it a formal definition, though others working in the field have made some attempts since.
-
-Its informal sense was popularized (and greatly expanded in scope) by Gian-Carlo Rota in the context of matroid theory: there are dozens of equivalent axiomatic approaches to matroids, but two different systems of axioms often look very different.
-
-In his 1997 book Indiscrete Thoughts, Rota describes the situation as follows:
-
-Though there are many cryptomorphic concepts in mathematics outside of matroid theory and universal algebra, the word has not caught on among mathematicians generally. It is, however, in fairly wide use among researchers in matroid theory.
diff --git a/wiki/wikipedia/3209.txt b/wiki/wikipedia/3209.txt
deleted file mode 100644
index c574a0d4379d44bf4f2d7a8ab688072d8e3f811a..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3209.txt
+++ /dev/null
@@ -1,273 +0,0 @@
-Variable neighborhood search (VNS), proposed by Mladenović & Hansen in 1997, is a metaheuristic method for solving a set of combinatorial optimization and global optimization problems.
-
-It explores distant neighborhoods of the current incumbent solution, and moves from there to a new one if and only if an improvement was made. The local search method is applied repeatedly to get from solutions in the neighborhood to local optima.
-
-VNS was designed for approximating solutions of discrete and continuous optimization problems; accordingly, it is aimed at solving linear programming problems, integer programming problems, mixed integer programming problems, nonlinear programming problems, etc.
-
-VNS systematically changes the neighborhood in two phases: firstly, descent to find a local optimum and finally, a perturbation phase to get out of the corresponding valley.
-
-Applications are rapidly increasing in number and pertain to many fields: location theory, cluster analysis, scheduling, vehicle routing, network design, lot-sizing, artificial intelligence, engineering, pooling problems, biology, phylogeny, reliability, geometry, telecommunication design, etc.
-
-There are several books important for understanding VNS, such as: Handbook of Metaheuristics, 2010, Handbook of Metaheuristics, 2003 and Search methodologies, 2005.
-
-Earlier work that motivated this approach can be found in
-
-# Davidon, W.C.
-
-# Fletcher, R., Powell, M.J.D.
-
-# Mladenović, N. and
-
-# Brimberg, J., Mladenović, N.
-
-Recent surveys on VNS methodology as well as numerous applications can be found in 4OR, 2008 and Annals of OR, 2010.
-
-Define a deterministic optimization problem by
-$$
- \min {\{f (x)|x \in X, X \subseteq S\}}
-$$, (1)
-
-where S, X, x, and f are the solution space, the feasible set, a feasible solution, and a real-valued objective function, respectively. If S is a finite but large set, a combinatorial optimization problem is defined. If ${S = R^{n}}$, we have a continuous optimization model.
-
-A solution ${x^* \in X}$ is optimal if
-$$
- {f (x^{*}) \leq f (x), \qquad \forall{x} \in X}
-$$.
-
-An exact algorithm for problem (1) finds an optimal solution x*, together with a proof of its optimality; or, if this is unrealizable, the procedure has to show that there is no feasible solution, i.e., $X =\varnothing$, or that the solution is unbounded. Moreover, the CPU time should be finite and short. For continuous optimization, it is reasonable to allow for some degree of tolerance, i.e., to stop when a feasible solution $x^{*}$ has been found such that
-$$
- {f (x^{*}) \leq f (x) + \epsilon, \qquad \forall{x} \in X}
-$$ or
-$$
- {(f (x^{*})- f (x))/ f (x^{*}) < \epsilon , \qquad \forall{x} \in X}
-$$
-
-Some heuristics quickly deliver an approximate solution, or an optimal solution but without a proof of its optimality.
-
-Some of them have an incorrect certificate, i.e., the solution $x_h$ obtained satisfies
-$$
- {(f (x_{h})- f (x))/ f (x_{h}) \leq \epsilon , \qquad \forall{x} \in X}
-$$
-
-for some $\epsilon$, though this $\epsilon$ is rarely known to be small.
-
-In avoiding unbounded computing time, heuristics face the problem of getting trapped in local optima.
-
-A local optimum $x_L$ of the problem is such that
-$$
- {f (x_{L}) \leq f (x), \qquad \forall{x} \in N(x_{L}) \cap X}
-$$
-
-where $ N(x_{L})$ denotes a neighborhood of $ x_{L} $.
-
-According to (Mladenović, 1995), VNS is a metaheuristic which systematically performs the procedure of neighborhood change, both in descent to local minima and in escape from the valleys which contain them.
-
-VNS is built upon the following perceptions:
-
-# A local minimum with respect to one neighborhood structure is not necessarily a local minimum for another neighborhood structure.
-
-# A global minimum is a local minimum with respect to all possible neighborhood structures.
-
-# For many problems, local minima with respect to one or several neighborhoods are relatively close to each other.
-
-Unlike many other metaheuristics, the basic schemes of VNS and its extensions are simple and require few, and sometimes no, parameters. Therefore, in addition to providing very good solutions, often in simpler ways than other methods, VNS gives insight into the reasons for such a performance, which, in turn, can lead to more efficient and sophisticated implementations.
-
-These aspects are studied in several of the papers mentioned above, e.g. (Hansen and Mladenović 1999, 2001a, 2003, 2005; Moreno-Pérez et al.).
-
-A local search heuristic is performed by choosing an initial solution x, discovering a direction of descent from x within a neighborhood N(x), and proceeding to the minimum of f(x) within N(x) along that direction. If there is no direction of descent, the heuristic stops; otherwise, it is iterated. Usually the steepest direction of descent, also referred to as best improvement, is used. This set of rules is summarized in Algorithm 1, where we assume that an initial solution x is given. The output consists of a local minimum, denoted by x, and its value. Observe that a neighborhood structure N(x) is defined for all x ∈ X. At each step, the neighborhood N(x) of x is explored completely. As this may be time-consuming, an alternative is to use the first descent heuristic. Vectors $x^i \in N(x)$ are then enumerated systematically and a move is made as soon as a direction of descent is found. This is summarized in Algorithm 2.
-
-Function BestImprovement(x)
-1: repeat
-2:     x' ← x
-3:     x ← argmin{ f(y) : y ∈ N(x) }
-4: until ( f(x) ≥ f(x') )
-5: return x'
-
-
-Function FirstImprovement(x)
-1: repeat
-2:     x' ← x; i ← 0
-3:     repeat
-4:         i ← i + 1
-5:         x ← argmin{ f(x), f(x^i) }, x^i ∈ N(x)
-6:     until ( f(x) < f(x^i) or i = |N(x)| )
-7: until ( f(x) ≥ f(x') )
-8: return x'
-
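Algorithms 1 and 2 translate directly into code. The following Python sketch is an illustrative rendering (not taken from the cited papers), assuming the caller supplies an objective `f` and a neighborhood function `N` that returns a finite list of neighbors:

```python
def best_improvement(x, f, N):
    """Algorithm 1: always move to the best neighbor; stop at a local minimum."""
    while True:
        x_prev = x
        x = min(N(x), key=f)          # explore N(x) completely
        if f(x) >= f(x_prev):         # no improving neighbor remains
            return x_prev

def first_improvement(x, f, N):
    """Algorithm 2: enumerate neighbors systematically; move at the first descent."""
    while True:
        x_prev = x
        for y in N(x_prev):
            if f(y) < f(x):           # first direction of descent found
                x = y
                break
        if f(x) >= f(x_prev):         # a full pass found no improvement
            return x_prev

# Toy usage: both descend from 10 to the local minimum of f(x) = (x - 3)^2.
f = lambda x: (x - 3) ** 2
N = lambda x: [x - 1, x + 1]
print(best_improvement(10, f, N), first_improvement(10, f, N))  # 3 3
```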
-
-Let $\mathcal{N}_k (k=1, \ldots, k_{max})$ denote a finite set of pre-selected neighborhood structures, and let $\mathcal{N}_k(x)$ be the set of solutions in the kth neighborhood of x.
-
-One will also use the notation $\mathcal{N'}_k(x), k = 1, \ldots, k'_{max}$ when describing local descent. Neighborhoods $\mathcal{N}_k(x)$ or $\mathcal{N'}_k(x)$ may be induced from one or more metric (or quasi-metric) functions introduced into a solution space S.
-
-An optimal solution $x_{opt}$ (or global minimum) is a feasible solution at which the minimum of the problem is reached. We call x' ∈ X a local minimum of the problem with respect to $\mathcal{N}_k$ if there is no solution $ x \in \mathcal{N}_k(x') \subseteq X $ such that $f(x) < f(x')$.
-
-In order to solve the problem by using several neighborhoods, facts 1–3 can be used in three different ways: (i) deterministic; (ii) stochastic; (iii) both deterministic and stochastic. We first give in Algorithm 3 the steps of the neighborhood change function, which will be used later. The function NeighborhoodChange() compares the new value f(x') with the incumbent value f(x) obtained in the neighborhood k (line 1). If an improvement is obtained, the new incumbent is stored and k is returned to its initial value (line 2). Otherwise, the next neighborhood is considered (line 3).
-
-Function NeighborhoodChange (x, x', k)
-1: if f(x') < f(x) then
-2:     x ← x'; k ← 1 // Make a move and return to the initial neighborhood
-3: else k ← k + 1 // Next neighborhood
-
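A minimal Python rendering of Algorithm 3 (again an illustrative sketch; the function name `neighborhood_change` is our own) might look as follows:

```python
def neighborhood_change(x, x_new, k, f):
    """Algorithm 3: move and reset to the first neighborhood on improvement,
    otherwise keep the incumbent and advance to the next neighborhood."""
    if f(x_new) < f(x):
        return x_new, 1   # make a move, restart from neighborhood 1
    return x, k + 1       # keep incumbent, try the next neighborhood
```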
-
-When VNS does not render a good solution, several steps could help the process, such as comparing first and best improvement strategies in local search, reducing the neighborhood, intensifying shaking, adopting VND, adopting FSS, and experimenting with parameter settings.
-
-The Basic VNS (BVNS) method (Handbook of Metaheuristics, 2010) combines deterministic and stochastic changes of neighborhood. Its steps are given in Algorithm 4. Often successive neighborhoods $ \mathcal{N}_k$ will be nested. Observe that the point x' is generated at random in Step 4 in order to avoid cycling, which might occur if a deterministic rule were applied. In Step 5, the best improvement local search (Algorithm 1) is usually adopted. However, it can be replaced with first improvement (Algorithm 2).
-
-Function VNS (x, kmax, tmax)
-1: repeat
-2:     k ← 1
-3:     repeat
-4:         x' ← Shake(x, k) // Shaking
-5:         x'' ← BestImprovement(x') // Local search
-6:         (x, k) ← NeighborhoodChange(x, x'', k) // Change neighborhood
-7:     until k = kmax
-8:     t ← CpuTime()
-9: until t > tmax
-
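Putting the pieces together, a compact Python sketch of Algorithm 4 could look like the following. This is an illustrative implementation under our own conventions (neighborhoods are given as a list of functions returning finite candidate lists, and the local search is a plain best-improvement descent), not code from the VNS literature:

```python
import random
import time

def best_improvement(x, f, N):
    # Plain best-improvement descent (Algorithm 1).
    while True:
        x_prev = x
        x = min(N(x), key=f)
        if f(x) >= f(x_prev):
            return x_prev

def bvns(x, f, neighborhoods, t_max):
    # Basic VNS (Algorithm 4): shaking, local search, neighborhood change.
    k_max = len(neighborhoods)
    start = time.time()
    while time.time() - start <= t_max:
        k = 1
        while k <= k_max:
            x1 = random.choice(neighborhoods[k - 1](x))     # shaking in N_k(x)
            x2 = best_improvement(x1, f, neighborhoods[0])  # local search
            if f(x2) < f(x):      # NeighborhoodChange: move and reset k
                x, k = x2, 1
            else:                 # otherwise try the next neighborhood
                k += 1
    return x

# Toy usage: minimize a multimodal function over the integers.
f = lambda x: (x % 7) + abs(x - 20) / 10
neighborhoods = [lambda x, d=d: [x - d, x + d] for d in (1, 5, 25)]
print(bvns(0, f, neighborhoods, t_max=0.1))
```

The nested loops mirror Steps 1–9 above: the random shaking step prevents cycling, and the incumbent only changes when the local search finds a strict improvement.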
-
-The basic VNS is a best improvement descent method with randomization. Without much additional effort, it can be transformed into a descent–ascent method: in the NeighborhoodChange() function, also replace x by x'' with some probability, even if the solution is worse than the incumbent. It can also be changed into a first improvement method.
-
-Another variant of the basic VNS is to find the solution x' in the 'Shaking' step as the best among b (a parameter) randomly generated solutions from the kth neighborhood. There are two possible variants of this extension: (1) to perform only one local search from the best among the b points; (2) to perform all b local searches and then choose the best. An algorithm of this kind is given in (Fleszar and Hindi).
-
-* VND
-
-The variable neighborhood descent (VND) method is obtained if a change of neighborhoods is performed in a deterministic way. In the descriptions of its algorithms, we assume that an initial solution x is given. Most local search heuristics use very few neighborhoods in their descent phase. The final solution should be a local minimum with respect to all $k_{max}$ neighborhoods; hence the chances of reaching a global one are larger when using VND than with a single neighborhood structure.
-
-* RVNS
-
-The reduced VNS (RVNS) method is obtained if random points are selected from $\mathcal{N}_k(x)$ and no descent is made. Rather, the values of these new points are compared with that of the incumbent and an update takes place in case of improvement. It is assumed that a stopping condition has been chosen, like the maximum CPU time allowed $t_{max}$ or the maximum number of iterations between two improvements.
-
-To simplify the description of the algorithms, $t_{max}$ is used below. Therefore, RVNS uses two parameters: $t_{max}$ and $k_{max}$. RVNS is useful in very large instances, for which local search is costly. It has been observed that the best value for the parameter $k_{max}$ is often 2. In addition, the maximum number of iterations between two improvements is usually used as a stopping condition. RVNS is akin to a Monte-Carlo method, but is more systematic.
-
-* Skewed VNS
-
-The skewed VNS (SVNS) method (Hansen et al.) addresses the problem of exploring valleys far from the incumbent solution. Indeed, once the best solution in a large region has been found, it is necessary to go some way to obtain an improved one. Solutions drawn at random in distant neighborhoods may differ substantially from the incumbent, and VNS can then degenerate, to some extent, into the Multistart heuristic (in which descents are made iteratively from solutions generated at random, a heuristic which is known not to be very efficient). Consequently, some compensation for distance from the incumbent must be made.
-
-* Variable Neighborhood Decomposition Search
-
-The variable neighborhood decomposition search (VNDS) method (Hansen et al.) extends the basic VNS into a two-level VNS scheme based upon decomposition of the problem. For ease of presentation, but without loss of generality, it is assumed that the solution x represents a set of some elements.
-
-* Parallel VNS
-
-Several ways of parallelizing VNS have recently been proposed for solving the p-Median problem. In García-López et al. three of them are tested: (i) parallelize the local search; (ii) augment the number of solutions drawn from the current neighborhood and perform a local search in parallel from each of them; and (iii) do the same as (ii) but update the information about the best solution found.
Three Parallel VNS strategies are also suggested for solving the Travelling purchaser problem in Ochi et al.
-
-* Primal-dual VNS
-
-For most modern heuristics, the difference in value between the optimal solution and the obtained one is completely unknown. Guaranteed performance of the primal heuristic may be determined if a lower bound on the objective function value is known. To this end, the standard approach is to relax the integrality condition on the primal variables, based on a mathematical programming formulation of the problem.
-
-However, when the dimension of the problem is large, even the relaxed problem may be impossible to solve exactly by standard commercial solvers. Therefore, it seems a good idea to solve dual relaxed problems heuristically as well. This yields guaranteed bounds on the performance of the primal heuristic. In Primal-dual VNS (PD-VNS) (Hansen et al.) one possible general way to attain both the guaranteed bounds and the exact solution is proposed.
-
-* Variable Neighborhood Branching
-
-The mixed integer linear programming (MILP) problem consists of maximizing or minimizing a linear function, subject to equality or inequality constraints, and integrality restrictions on some of the variables.
-
-* Variable Neighborhood Formulation Space Search
-
-FSS is useful because one problem can often be defined by several alternative formulations, and moving between formulations is legitimate. Local search is proved to work within each formulation, yielding a final solution when started from some initial solution in the first formulation. Local search that systematically alternates between different formulations was investigated for the circle packing problem (CPP), where a stationary point of a nonlinear programming formulation of CPP in Cartesian coordinates is not necessarily a stationary point in polar coordinates.
-
-Applications of VNS, and of its variants, are abundant.
Some fields in which collections of scientific papers on VNS can be found:
-
-* Industrial applications
-
-* Design problems in communication
-
-* Location problems
-
-* Data mining
-
-* Graph problems
-
-* Knapsack and packing problems
-
-* Mixed integer problems
-
-* Time tabling
-
-* Scheduling
-
-* Vehicle routing problems
-
-* Arc routing and waste collection
-
-* Fleet sheet problems
-
-* Extended vehicle routing problems
-
-* Problems in biosciences and chemistry
-
-* Continuous optimization
-
-* Other optimization problems
-
-* Discovery science
-
-VNS exhibits several features, presented by Hansen and Mladenović, some of which are listed here:
-
-# Simplicity: VNS is simple, clear and universally applicable
-
-# Precision: VNS is formulated in precise mathematical definitions
-
-# Coherence: all actions of the heuristics for solving problems follow from the VNS principles
-
-# Effectiveness: VNS supplies optimal or near-optimal solutions for all or at least most realistic instances
-
-# Efficiency: VNS takes a moderate computing time to generate optimal or near-optimal solutions
-
-# Robustness: the functioning of VNS is coherent over a variety of instances
-
-# User friendliness: VNS has few or no parameters, so it is easy to understand, express and use
-
-# Innovation: VNS generates new types of applications
-
-# Generality: VNS yields good results for a wide variety of problems
-
-# Interactivity: VNS allows the user to incorporate his knowledge to improve the resolution process
-
-# Multiplicity: VNS is able to produce several near-optimal solutions from which the user can choose
-
-Interest in VNS is growing quickly, evidenced by the increasing number of papers published each year on this topic (10 years ago, only a few; 5 years ago, about a dozen; and about 50 in 2007).
-
-Moreover, the 18th EURO mini-conference held in Tenerife in November 2005 was entirely devoted to VNS. It led to special issues of the IMA Journal of Management Mathematics in 2007, the European Journal of Operational Research (http://www.journals.elsevier.com/european-journal-of-operational-research/), and the Journal of Heuristics (https://www.springer.com/mathematics/applications/journal/10732/) in 2008.
diff --git a/wiki/wikipedia/321.txt b/wiki/wikipedia/321.txt
deleted file mode 100644
index a39f19e13ca78955cf58ca5434f01bcb04a58584..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/321.txt
+++ /dev/null
@@ -1,93 +0,0 @@
-In mathematics, Doob's martingale inequality, also known as Kolmogorov's submartingale inequality, is a result in the study of stochastic processes. It gives a bound on the probability that a stochastic process exceeds any given value over a given interval of time. As the name suggests, the result is usually given in the case that the process is a martingale, but the result is also valid for submartingales.
-
-The inequality is due to the American mathematician Joseph L. Doob.
-
-Let X be a submartingale taking real values, either in discrete or continuous time. That is, for all times s and t with s < t,
-$$
- X_s \leq \operatorname E[X_t \mid \mathcal{F}_s].
-$$
-
-(For a continuous-time submartingale, assume further that the process is càdlàg.) Then, for any constant C > 0,
-$$
- P\left[ \sup_{0 \leq t \leq T} X_t \geq C \right] \leq \frac{\operatorname E[\textrm{max}(X_T,0)]}{C}.
-$$ - -In the above, as is conventional, P denotes a probability measure on the sample space Ω of the stochastic process -$$ -X : [0, T] \times \Omega \to [0, + \infty) -$$ - -and $\operatorname E[X]$ denotes the expected value with respect to the probability measure P, i.e. the integral -$$ -\operatorname E[X_T] = \int_\Omega X_T (\omega) \mathrm{d} P (\omega) -$$ - -in the sense of Lebesgue integration. $\mathcal{F}_{s}$ denotes the σ-algebra generated by all the random variables Xi with i ≤ s; the collection of such σ-algebras forms a filtration of the probability space. - -There are further submartingale inequalities also due to Doob. With the same assumptions on X as above, let -$$ -S_{t} = \sup_{0 \leq s \leq t} X_{s}, -$$ - -and for p ≥ 1 let -$$ -\| X_t \|_p = \| X_t \|_{L^p (\Omega, \mathcal{F}, P)} = \left( \operatorname E[|X_t|^p] \right)^{1/p}. -$$ - -In this notation, Doob's inequality as stated above reads -$$ - P[S_T \geq C] \leq \frac{\| X_T \|_1}{C}. -$$ - -The following inequalities also hold : -$$ -\| S_T \|_1 \leq \frac{e}{e - 1} \left( 1 + \| X_T \log^+ X_T \|_1 \right) -$$ - -and, for p > 1, -$$ -\| X_T \|_p \leq \| S_T \|_p \leq \frac{p}{p-1} \| X_T \|_p. -$$ - -The last of these is sometimes known as Doob's Maximal inequality. - -Doob's inequality for discrete-time martingales implies Kolmogorov's inequality: if X1, X2, ... is a sequence of real-valued independent random variables, each with mean zero, it is clear that - -\begin{align} - -\operatorname E\left[ X_1 + \cdots + X_n + X_{n + 1} \mid X_1, \ldots, X_n \right] &= X_1 + \cdots + X_n + \operatorname E\left[ X_{n + 1} \mid X_1, \ldots, X_n \right] \\ - -&= X_1 + \cdots + X_n, - -\end{align} - -so Sn = X1 + ... + Xn is a martingale. Note that Jensen's inequality implies that |Sn| is a nonnegative submartingale if Sn is a martingale. Hence, taking p = 2 in Doob's martingale inequality, -$$ - P\left[ \max_{1 \leq i \leq n} \left| S_i \right| \geq \lambda \right] \leq \frac{\operatorname E\left[ S_n^2 \right]}{\lambda^2}, -$$ - -which is precisely the statement of Kolmogorov's inequality. - -Let B denote canonical one-dimensional Brownian motion. Then -$$ - P\left[ \sup_{0 \leq t \leq T} B_t \geq C \right] \leq \exp \left( - \frac{C^2}{2T} \right). -$$ - -The proof is just as follows: since the exponential function is monotonically increasing, for any non-negative λ, -$$ -\left\{ \sup_{0 \leq t \leq T} B_{t} \geq C \right\} = \left\{ \sup_{0 \leq t \leq T} \exp ( \lambda B_{t} ) \geq \exp ( \lambda C ) \right\}. -$$ - -By Doob's inequality, and since the exponential of Brownian motion is a positive submartingale, - -\begin{align} - -P\left[ \sup_{0 \leq t \leq T} B_t \geq C \right] & = P\left[ \sup_{0 \leq t \leq T} \exp ( \lambda B_t ) \geq \exp ( \lambda C ) \right] \\[8pt] - -& \leq \frac{\operatorname E[ \exp (\lambda B_T)]}{\exp (\lambda C)} \\[8pt] - -& = \exp \left( \tfrac{1}{2} \lambda^2 T - \lambda C \right) && \operatorname E\left[ \exp (\lambda B_t) \right] = \exp \left( \tfrac{1}{2} \lambda^2 t \right) - -\end{align} - -Since the left-hand side does not depend on λ, choose λ to minimize the right-hand side: λ = C/T gives the desired inequality. diff --git a/wiki/wikipedia/3210.txt b/wiki/wikipedia/3210.txt deleted file mode 100644 index 53764570bf694ef245bd5be25f224f1171655052..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3210.txt +++ /dev/null @@ -1,21 +0,0 @@ -In operator algebra, the Koecher–Vinberg theorem is a reconstruction theorem for real Jordan algebras. 
It was proved independently by Max Koecher in 1957 and Ernest Vinberg in 1961. It provides a one-to-one correspondence between formally real Jordan algebras and so-called domains of positivity. Thus it links operator algebraic and convex order theoretic views on state spaces of physical systems. - -A convex cone $C$ is called regular if $a=0$ whenever both $a$ and $-a$ are in the closure $\overline{C}$. - -A convex cone $C$ in a vector space $A$ with an inner product has a dual cone $C^* = \{ a \in A : \forall b \in C \langle a,b\rangle > 0 \}$. The cone is called self-dual when $C=C^*$. It is called homogeneous when to any two points $a,b \in C$ there is a real linear transformation $T \colon A \to A$ that restricts to a bijection $C \to C$ and satisfies $T(a)=b$. - -The Koecher–Vinberg theorem now states that these properties precisely characterize the positive cones of Jordan algebras. - -Theorem: There is a one-to-one correspondence between formally real Jordan algebras and convex cones that are: - -* open; - -* regular; - -* homogeneous; - -* self-dual. - -Convex cones satisfying these four properties are called domains of positivity or symmetric cones. The domain of positivity associated with a real Jordan algebra $A$ is the interior of the 'positive' cone $A_+ = \{ a^2 \colon a \in A \}$. - -For a proof, see Koecher or Faraut. diff --git a/wiki/wikipedia/3211.txt b/wiki/wikipedia/3211.txt deleted file mode 100644 index b7fd5a128774c5022a0a972c84e8bcd2322ca3f6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3211.txt +++ /dev/null @@ -1,5 +0,0 @@ -Cyberduck is an open-source client for FTP and SFTP, WebDAV, and cloud storage (OpenStack Swift, Amazon S3, Backblaze B2 and Microsoft Azure), available for macOS and Windows (as of version 4.0) licensed under the GPL. Cyberduck is written in Java and C# using the Cocoa user interface framework on macOS and Windows Forms on Windows. It supports FTP/TLS (FTP secured over SSL/TLS), using AUTH TLS as well as directory synchronization. The user interacts with the user interface (GUI), including file transfer by drag and drop and notifications via Growl. It is also able to open some files in external text editors. - -Cyberduck includes a bookmark manager and supports Apple's Keychain and Bonjour networking. It supports multiple languages including English, Catalan, Czech, Chinese (Traditional and Simplified), Danish, Dutch, Finnish, French, German, Hebrew, Hungarian, Indonesian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Slovak, Spanish, Swedish, Thai, Turkish, Ukrainian, and Welsh. - -The Cyberduck creator also provides a version for the Command-line interface (CLI), called duck, available for Windows, macOS and Linux. It has its own website at . The program can be used as FTP and SFTP-client, for operations with different cloud services. diff --git a/wiki/wikipedia/3212.txt b/wiki/wikipedia/3212.txt deleted file mode 100644 index e6d1508d929c432a5684ac400946e432dd27017c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3212.txt +++ /dev/null @@ -1,29 +0,0 @@ -The Engelbert–Schmidt zero–one law is a theorem that gives a mathematical criterion for an event associated with a continuous, non-decreasing additive functional of Brownian motion to have probability either 0 or 1, without the possibility of an intermediate value. This zero-one law is used in the study of questions of finiteness and asymptotic behavior for stochastic differential equations. 
(A Wiener process is a mathematical formalization of Brownian motion used in the statement of the theorem.) This 0-1 law, published in 1981, is named after Hans-Jürgen Engelbert and the probabilist Wolfgang Schmidt (not to be confused with the number theorist Wolfgang M. Schmidt). - -Let $\mathcal{F}$ be a σ-algebra and let $F = (\mathcal{F}_t)_{t \ge 0}$ be an increasing family of sub-σ-algebras of $\mathcal{F}$. Let $(W, F)$ be a Wiener process on the probability space $(\Omega, \mathcal{F}, P)$. - -Suppose that $f$ is a Borel measurable function of the real line into [0,∞]. - -Then the following three assertions are equivalent: - -(i) $ P \Big( \int_0^t f (W_s)\mathrm ds < \infty \text{ for all } t \ge 0 \Big) > 0 $. - -(ii) $ P \Big( \int_0^t f (W_s)\mathrm ds < \infty \text{ for all } t \ge 0 \Big) = 1 $. - -(iii) $ \int_K f (y)\mathrm dy < \infty $ for all compact subsets $K$ of the real line. - -In 1997 Pio Andrea Zanzotto proved the following extension of the Engelbert–Schmidt zero-one law. It contains Engelbert and Schmidt's result as a special case, since the Wiener process is a real-valued stable process of index $\alpha = 2$. - -Let $X$ be a $\mathbb R$-valued stable process of index $\alpha\in(1,2]$ on the filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t), P)$. - -Suppose that $f:\mathbb R \to [0,\infty]$ is a Borel measurable function. - -Then the following three assertions are equivalent: - -(i) $ P \Big( \int_0^t f (X_s)\mathrm ds < \infty \text{ for all } t \ge 0 \Big) > 0 $. - -(ii) $ P \Big( \int_0^t f (X_s)\mathrm ds < \infty \text{ for all } t \ge 0 \Big) = 1 $. - -(iii) $ \int_K f (y)\mathrm dy < \infty $ for all compact subsets $K$ of the real line. - -The proof of Zanzotto's result is almost identical to that of the Engelbert–Schmidt zero-one law. The key object in the proof is the local time process associated with stable processes of index $\alpha\in(1,2]$, which is known to be jointly continuous. diff --git a/wiki/wikipedia/3213.txt b/wiki/wikipedia/3213.txt deleted file mode 100644 index 2f2de83c700aaa7a62d49465886192db527dfc0c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3213.txt +++ /dev/null @@ -1,85 +0,0 @@ -In physics, the no-deleting theorem of quantum information theory is a no-go theorem which states that, in general, given two copies of some arbitrary quantum state, it is impossible to delete one of the copies. It is a time-reversed dual to the no-cloning theorem, which states that arbitrary states cannot be copied. This theorem seems remarkable, because, in many senses, quantum states are fragile; the theorem asserts that, in a particular case, they are also robust. Physicist Arun K. Pati along with Samuel L. Braunstein proved this theorem. - -The no-deleting theorem, together with the no-cloning theorem, underpin the interpretation of quantum mechanics in terms of category theory, and, in particular, as a dagger symmetric monoidal category. This formulation, known as categorical quantum mechanics, in turn allows a connection to be made from quantum mechanics to linear logic as the logic of quantum information theory (in exact analogy to classical logic being founded on Cartesian closed categories). - -Suppose that there are two copies of an unknown quantum state. A pertinent question in this context is to ask if it is possible, given two identical copies, to delete one of them using quantum mechanical operations? It turns out that one cannot. The no-deleting theorem is a consequence of linearity of quantum mechanics. 
Like the no-cloning theorem this has important implications in quantum computing, quantum information theory and quantum mechanics in general. - -The process of quantum deleting takes two copies of an arbitrary, unknown - -quantum state at the input port and outputs a blank state along with the original. Mathematically, - -this can be described by: -$$ -U |\psi\rangle_A |\psi\rangle_B |A\rangle_C = |\psi\rangle_A |0\rangle_B |A'\rangle_C -$$ - -where $U$ is the deleting operation which is not necessarily unitary (but a linear operator), $|\psi\rangle_A$ is the unknown quantum - -state, $|0\rangle_B$ is the blank state, $|A\rangle_C$ is the initial state of - -the deleting machine and $|A'\rangle_C$ is the final state of the machine. - -It may be noted that classical bits can be copied and deleted, as can qubits in orthogonal states. For example, if we have two identical qubits $|00 \rangle $ and $|11 \rangle $ then we can transform to $|00 \rangle $ and $|10 \rangle $. In this case we have deleted the second copy. However, it follows from linearity of quantum theory that there is no $U$ that can perform the deleting operation for any arbitrary state $|\psi\rangle$. - -Let $|\psi\rangle $ be an unknown quantum state in some Hilbert space (and let other states have their usual meaning). Then, - -there is no linear isometric transformation such that -$$ -|\psi\rangle_A |\psi\rangle_B |A\rangle_C \rightarrow |\psi\rangle_A |0\rangle_B |A'\rangle_C -$$, with the final state of the ancilla being independent of -$$ -|\psi\rangle -$$. - -The theorem holds for quantum states in a Hilbert space of any dimension. For simplicity, - -consider the deleting transformation for two identical qubits. If two qubits are in orthogonal states, then deletion requires that -$$ -|0 \rangle_A |0 \rangle_B |A\rangle_C \rightarrow |0\rangle_A |0\rangle_B |A_0\rangle_C -$$, -$$ -|1 \rangle_A |1 \rangle_B |A\rangle_C \rightarrow |1 \rangle_A |0\rangle_B |A_1\rangle_C -$$. - -Let $|\psi\rangle = \alpha |0\rangle + \beta |1 \rangle $ be the state of an unknown qubit. If we have two copies of an unknown qubit, then by linearity of the deleting transformation we have - -|\psi\rangle_A |\psi\rangle_B |A\rangle_C = [\alpha^2 |0 \rangle_A |0\rangle_B + \beta^2 - -|1\rangle_A |1\rangle_B + \alpha \beta (|0\rangle_A |1\rangle_B + |1 \rangle_A |0\rangle_B ) ] - -|A \rangle_C - - \qquad \rightarrow - -\alpha^2 |0 \rangle_A |0\rangle_B |A_0\rangle_C + \beta^2 - -|1\rangle_A |0\rangle_B |A_1\rangle_C+ {\sqrt 2} \alpha \beta |\Phi \rangle_{ABC}. - -In the above expression, the following transformation has been used: -$$ -1/{\sqrt 2}(|0\rangle_A |1\rangle_B + |1 \rangle_A |0\rangle_B ) |A \rangle_C \rightarrow |\Phi \rangle_{ABC} . -$$ - -However, if we are able to delete a copy, then, at the output port of the deleting machine, the combined state should be - - |\psi\rangle_A |0\rangle_B |A'\rangle_C = - -(\alpha |0 \rangle_A |0\rangle_B + \beta |1\rangle_A |0\rangle_B) |A'\rangle_C. - -In general, these states are not identical and hence we can say that the machine fails to delete a copy. If we require that the final output states are same, then we will see that there is only one option: -$$ - |\Phi\rangle = 1/{\sqrt 2}(|0 \rangle_A |0\rangle_B |A_1\rangle_C + |1\rangle_A |0\rangle_B |A_0\rangle_C), -$$ - -and -$$ - |A'\rangle_C = \alpha |A_0\rangle_C + \beta |A_1\rangle_C . 
-$$ - -Since final state $|A' \rangle$ of the ancilla is normalized for all values of $\alpha, \beta$ it must be true that $ |A_0\rangle_C $ and $ |A_1\rangle_C $ are orthogonal. This means that the quantum information is simply in the final state of the ancilla. One can always obtain the unknown state from the final state of the ancilla using local operation on the ancilla Hilbert space. Thus, linearity of quantum theory does not allow an unknown quantum state to be deleted perfectly. - -* If it were possible to delete an unknown quantum state, then, using two pairs of EPR states, we could send signals faster than light. Thus, violation of the no-deleting theorem is inconsistent with the no-signalling condition. - -* The no-cloning and the no-deleting theorems point to the conservation of quantum information. - -* A stronger version of the no-cloning theorem and the no-deleting theorem provide permanence to quantum information. To create a copy one must import the information from some part of the universe and to delete a state one needs to export it to another part of the universe where it will continue to exist. diff --git a/wiki/wikipedia/3214.txt b/wiki/wikipedia/3214.txt deleted file mode 100644 index 287531b5ec1bc30eedd537143ee856810bc0b2db..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3214.txt +++ /dev/null @@ -1,89 +0,0 @@ -In statistics, the Rao–Blackwell theorem, sometimes referred to as the Rao–Blackwell–Kolmogorov theorem, is a result which characterizes the transformation of an arbitrarily crude estimator into an estimator that is optimal by the mean-squared-error criterion or any of a variety of similar criteria. - -The Rao–Blackwell theorem states that if g(X) is any kind of estimator of a parameter θ, then the conditional expectation of g(X) given T(X), where T is a sufficient statistic, is typically a better estimator of θ, and is never worse. Sometimes one can very easily construct a very crude estimator g(X), and then evaluate that conditional expected value to get an estimator that is in various senses optimal. - -The theorem is named after Calyampudi Radhakrishna Rao and David Blackwell. The process of transforming an estimator using the Rao–Blackwell theorem is sometimes called Rao–Blackwellization. The transformed estimator is called the Rao–Blackwell estimator. -$$ - \operatorname{E}[(\delta_1(X)-\theta)^2]=\operatorname{E}[(\delta(X)-\theta)^2]-\operatorname{E}[\operatorname{Var}(\delta(X)\mid T(X))] -$$ - -Since $\operatorname{E}[\operatorname{Var}(\delta(X)\mid T(X))]\ge 0$, the Rao-Blackwell theorem immediately follows. - -The more general version of the Rao–Blackwell theorem speaks of the "expected loss" or risk function: -$$ -\operatorname{E}(L(\delta_1(X)))\leq \operatorname{E}(L(\delta(X))) -$$ - -where the "loss function" L may be any convex function. If the loss function is twice-differentiable, as in the case for mean-squared-error, then we have the sharper inequality -$$ -\operatorname{E}(L(\delta(X)))-\operatorname{E}(L(\delta_1(X)))\ge \frac{1}{2}\operatorname{E}_T\left[\inf_x L(x)\operatorname{Var}(\delta(X)\mid T)\right]. -$$ - -The improved estimator is unbiased if and only if the original estimator is unbiased, as may be seen at once by using the law of total expectation. The theorem holds regardless of whether biased or unbiased estimators are used. - -The theorem seems very weak: it says only that the Rao–Blackwell estimator is no worse than the original estimator. In practice, however, the improvement is often enormous. 
-
-Phone calls arrive at a switchboard according to a Poisson process at an average rate of λ per minute. This rate is not observable, but the numbers X1, ..., Xn of phone calls that arrived during n successive one-minute periods are observed. It is desired to estimate the probability $e^{-\lambda}$ that the next one-minute period passes with no phone calls.
-
-An extremely crude estimator of the desired probability is
-$$
-\delta_0=\left\{\begin{matrix}1 & \text{if}\ X_1=0, \\ 0 & \text{otherwise,}\end{matrix}\right.
-$$
-
-i.e., it estimates this probability to be 1 if no phone calls arrived in the first minute and zero otherwise. Despite the apparent limitations of this estimator, the result given by its Rao–Blackwellization is a very good estimator.
-
-The sum
-$$
- S_n = \sum_{i=1}^n X_{i} = X_1+\cdots+X_n
-$$
-
-can be readily shown to be a sufficient statistic for λ, i.e., the conditional distribution of the data X1, ..., Xn depends on λ only through this sum. Therefore, we find the Rao–Blackwell estimator
-$$
-\delta_1=\operatorname{E}(\delta_0\mid S_n=s_n).
-$$
-
-After doing some algebra we have
-
-\begin{align}
-\delta_1 &= \operatorname{E} \left (\mathbf{1}_{\{X_1=0\}} \Bigg| \sum_{i=1}^n X_{i} = s_n \right ) \\
-&= P \left (X_{1}=0 \Bigg| \sum_{i=1}^n X_{i} = s_n \right ) \\
-&= P \left (X_{1}=0, \sum_{i=2}^n X_{i} = s_n \right ) \times P \left (\sum_{i=1}^n X_{i} = s_n \right )^{-1} \\
-&= e^{-\lambda}\frac{\left((n-1)\lambda\right)^{s_n}e^{-(n-1)\lambda}}{s_n!} \times \left (\frac{(n\lambda)^{s_n}e^{-n\lambda}}{s_n!} \right )^{-1} \\
-&= \frac{\left((n-1)\lambda\right)^{s_n}e^{-n\lambda}}{s_n!} \times \frac{s_n!}{(n\lambda)^{s_n}e^{-n\lambda}} \\
-&= \left(1-\frac{1}{n}\right)^{s_n}
-\end{align}
-
-Since the average number of calls arriving during the first n minutes is nλ, one might not be surprised if this estimator has a fairly high probability (if n is big) of being close to
-$$
-\left(1-{1 \over n}\right)^{n\lambda}\approx e^{-\lambda}.
-$$
-
-So δ1 is clearly a very much improved estimator of that last quantity. In fact, since Sn is complete and δ0 is unbiased, δ1 is the unique minimum variance unbiased estimator by the Lehmann–Scheffé theorem.
-
-Rao–Blackwellization is an idempotent operation. Using it to improve the already improved estimator does not obtain a further improvement, but merely returns as its output the same improved estimator.
-
-If the conditioning statistic is both complete and sufficient, and the starting estimator is unbiased, then the Rao–Blackwell estimator is the unique "best unbiased estimator": see Lehmann–Scheffé theorem.
-
-An example of an improvable Rao–Blackwell improvement, when using a minimal sufficient statistic that is not complete, was provided by Galili and Meilijson in 2016. Let $X_1, \ldots, X_n$ be a random sample from a scale-uniform distribution $X \sim U \left( (1-k) \theta, (1+k) \theta \right),$ with unknown mean $E[X]=\theta$ and known design parameter $k \in (0,1)$. In the search for "best" possible unbiased estimators for $\theta,$ it is natural to consider $X_1$ as an initial (crude) unbiased estimator for $\theta$ and then try to improve it. Since $X_1$ is not a function of $T = \left( X_{(1)}, X_{(n)} \right)$, the minimal sufficient statistic for $\theta$ (where $X_{(1)} = \min( X_i )$ and $X_{(n)} = \max( X_i )$), it may be improved using the Rao–Blackwell theorem as follows:
-$$
-\hat{\theta}_{RB}=E_{\theta} \left [X_1|X_{(1)}, X_{(n)} \right ]=\frac{X_{(1)}+X_{(n)}}{2}.
-$$ - -However, the following unbiased estimator can be shown to have lower variance: -$$ -\hat{\theta}_{LV} = \frac{1}{2 \left (k^2 \frac{n-1}{n+1}+1\right )} \left[ (1-k){{X}_{(1)}}+(1+k){{X}_{(n)}} \right]. -$$ - -And in fact, it could be even further improved when using the following estimator: -$$ -\hat{\theta}_{BAYES} =\frac{n+1}{n} \left[ 1-\frac{\frac{\left( \frac{{{X}_{(1)}}}{1-k} \right)}{\left( \frac{{{X}_{(n)}}}{1+k} \right)}-1}{{{\left[ \frac{\left( \frac{{{X}_{(1)}}}{1-k} \right)}{\left( \frac{{{X}_{(n)}}}{1+k} \right)} \right]}^{n+1}}-1} \right] \frac{X_{(n)}}{1+k} -$$ - -The model is a scale model. Optimal equivariant estimators can then be derived for loss functions that are invariant. diff --git a/wiki/wikipedia/3215.txt b/wiki/wikipedia/3215.txt deleted file mode 100644 index c9a5d7153ccaed9e8ceb18a676120c6e4c6bb9c0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3215.txt +++ /dev/null @@ -1,17 +0,0 @@ -In databases an index is a data structure, part of the database, used by a database system to efficiently navigate access to user data. Index data are system data distinct from user data, and consist primarily of pointers. Changes in a database (by insert, delete, or modify operations), may require indexes to be updated to maintain accurate user data accesses. Index locking is a technique used to maintain index integrity. A portion of an index is locked during a database transaction when this portion is being accessed by the transaction as a result of attempt to access related user data. Additionally, special database system transactions (not user-invoked transactions) may be invoked to maintain and modify an index, as part of a system's self-maintenance activities. When a portion of an index is locked by a transaction, other transactions may be blocked from accessing this index portion (blocked from modifying, and even from reading it, depending on lock type and needed operation). Index Locking Protocol guarantees that phantom read phenomenon won't occur. - -Index locking protocol states: - -* Every relation must have at least one index. - -* A transaction can access tuples only after finding them through one or more indices on the relation - -* A transaction Ti that performs a lookup must lock all the index leaf nodes that it accesses, in S-mode, even if the leaf node does not contain any tuple satisfying the index lookup (e.g. for a range query, no tuple in a leaf is in the range) - -* A transaction Ti that inserts, updates or deletes a tuple ti in a relation r must update all indices to r and it must obtain exclusive locks on all index leaf nodes affected by the insert/update/delete - -* The rules of the two-phase locking protocol must be observed. - -) which are regularly used as database indexes. - -Index locks are used to coordinate threads accessing indexes concurrently, and typically shorter-lived than the common transaction locks on user data. In professional literature, they are often called latches. diff --git a/wiki/wikipedia/3216.txt b/wiki/wikipedia/3216.txt deleted file mode 100644 index d07d152a2d67dff679d03ac5ea3310d71b01f80a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3216.txt +++ /dev/null @@ -1,9 +0,0 @@ -In model theory, a branch of mathematical logic, Chang's conjecture, attributed to Chen Chung Chang by Vaught, states that every model of type (ω21) for a countable language has an elementary submodel of type (ω1, ω). 
A model is of type (α,β) if it is of cardinality α and a unary relation is represented by a subset of cardinality β. The usual notation is $(\omega_2,\omega_1)\twoheadrightarrow(\omega_1,\omega)$.
-
-The axiom of constructibility implies that Chang's conjecture fails. Silver proved the consistency of Chang's conjecture from the consistency of an ω1-Erdős cardinal. Hans-Dieter Donder showed a weak version of the reverse implication: if CC is not only consistent but actually holds, then ω2 is ω1-Erdős in K.
-
-More generally, Chang's conjecture for two pairs (α,β), (γ,δ) of cardinals is the claim
-that every model of type (α,β) for a countable language has an elementary submodel of type (γ,δ).
-
-The consistency of $(\omega_3,\omega_2)\twoheadrightarrow(\omega_2,\omega_1)$ was shown by Laver from the consistency of a huge cardinal.
diff --git a/wiki/wikipedia/3217.txt b/wiki/wikipedia/3217.txt
deleted file mode 100644
index e969e2aa409e97bf7c8ce243c5e78329a753750c..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3217.txt
+++ /dev/null
@@ -1,69 +0,0 @@
-The Newton–Pepys problem is a probability problem concerning the probability of throwing sixes from a certain number of dice.
-
-In 1693 Samuel Pepys and Isaac Newton corresponded over a problem posed by Pepys in relation to a wager he planned to make. The problem was:
-
-Which of the following three propositions has the greatest chance of success?
-
-A. Six fair dice are tossed independently and at least one “6” appears.
-
-B. Twelve fair dice are tossed independently and at least two “6”s appear.
-
-C. Eighteen fair dice are tossed independently and at least three “6”s appear.
-
-Pepys initially thought that outcome C had the highest probability, but Newton correctly concluded that outcome A actually has the highest probability.
-
-The probabilities of outcomes A, B and C are:
-$$
-P(A)=1-\left(\frac{5}{6}\right)^{6} = \frac{31031}{46656} \approx 0.6651 ,
-$$
-
-$$
-P(B)=1-\sum_{x=0}^1\binom{12}{x}\left(\frac{1}{6}\right)^x\left(\frac{5}{6}\right)^{12-x}
-= \frac{1346704211}{2176782336} \approx 0.6187 ,
-$$
-
-$$
-P(C)=1-\sum_{x=0}^2\binom{18}{x}\left(\frac{1}{6}\right)^x\left(\frac{5}{6}\right)^{18-x}
-= \frac{15166600495229}{25389989167104} \approx 0.5973 .
-$$
-
-These results may be obtained by applying the binomial distribution (although Newton obtained them from first principles). In general, if P(N) is the probability of throwing at least n sixes with 6n dice, then:
-$$
-P(N)=1-\sum_{x=0}^{n-1}\binom{6n}{x}\left(\frac{1}{6}\right)^x\left(\frac{5}{6}\right)^{6n-x} .
-$$
-
-As n grows, P(N) decreases monotonically towards an asymptotic limit of 1/2.
-
-The solution outlined above can be implemented in R as follows:
-
-for (s in 1:3) {              # looking for s = 1, 2 or 3 sixes
-  n = 6*s                     # ... in n = 6, 12 or 18 dice
-  q = pbinom(s-1, n, 1/6)     # q = Prob(at most s-1 sixes with n dice)
-  print(1 - q)                # 1 - q = Prob(at least s sixes)
-}
-
-Although Newton correctly calculated the odds of each bet, he provided a separate intuitive explanation to Pepys. He imagined that B and C toss their dice in groups of six, and said that A was most favorable because it required a 6 in only one toss, while B and C required a 6 in each of their tosses. This explanation assumes that a group does not produce more than one 6, so it does not actually correspond to the original problem.
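The same computation can also be done in exact rational arithmetic, which reproduces the fractions quoted above. Here is a small Python sketch (our own illustrative code) using the standard fractions module:

```python
from fractions import Fraction
from math import comb

def p_at_least(s, n, p=Fraction(1, 6)):
    """Exact probability of at least s sixes when n fair dice are thrown."""
    return 1 - sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(s))

for s in (1, 2, 3):
    q = p_at_least(s, 6 * s)
    print(f"at least {s} six(es) with {6 * s} dice: {q} = {float(q):.4f}")
```

Running it prints 31031/46656, 1346704211/2176782336 and 15166600495229/25389989167104, matching P(A), P(B) and P(C) above.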
- -A natural generalization of the problem is to consider n non-necessarily fair dice, with p the probability that each die will select the 6 face when thrown (notice that actually the number of faces of the dice and which face should be selected are irrelevant). If r is the total number of dice selecting the 6 face, then $P(r \ge k ; n, p)$ is the probability of having at least k correct selections when throwing exactly n dice. Then the original Newton–Pepys problem can be generalized as follows: - -Let $\nu_1, \nu_2$ be natural positive numbers s.t. $\nu_1 \le \nu_2$. Is then $P(r \ge \nu_1 k ; \nu_1 n, p)$ not smaller than $P(r \ge \nu_2 k ; \nu_2 n, p)$ for all n, p, k? - -Notice that, with this notation, the original Newton–Pepys problem reads as: is $P(r \ge 1 ; 6, 1/6) \ge P(r \ge 2 ; 12, 1/6) \ge P(r \ge 3 ; 18, 1/6)$? - -As noticed in Rubin and Evans (1961), there are no uniform answers to the generalized Newton–Pepys problem since answers depend on k, n and p. There are nonetheless some variations of the previous questions that admit uniform answers: - -(from Chaundy and Bullard (1960)): - -If $k_1, k_2, n$ are positive natural numbers, and $k_1 < k_2$, then $P(r \ge k_1 ; k_1 n, \frac{1}{n}) > P(r \ge k_2 ; k_2 n, \frac{1}{n})$. - -If $k, n_1, n_2$ are positive natural numbers, and $n_1 < n_2$, then $P(r \ge k ; k n_1, \frac{1}{n_1}) > P(r \ge k ; k n_2, \frac{1}{n_2})$. - -(from Varagnolo, Pillonetto and Schenato (2013)): - -If $\nu_1, \nu_2 , n, k$ are positive natural numbers, and $\nu_1 \le \nu_2, k \le n, p \in [0, 1]$ then $P(r = \nu_1 k ; \nu_1 n, p) \ge P(r = \nu_2 k ; \nu_2 n, p)$. diff --git a/wiki/wikipedia/3218.txt b/wiki/wikipedia/3218.txt deleted file mode 100644 index afd65855d84d4fdb415303b017421d7ec6cb1f57..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3218.txt +++ /dev/null @@ -1,207 +0,0 @@ -The eight queens puzzle is the problem of placing eight chess queens on an 8×8 chessboard so that no two queens threaten each other; thus, a solution requires that no two queens share the same row, column, or diagonal. The eight queens puzzle is an example of the more general n queens problem of placing n non-attacking queens on an n×n chessboard, for which solutions exist for all natural numbers n with the exception of n = 2 and n = 3. - -Chess composer Max Bezzel published the eight queens puzzle in 1848. Franz Nauck published the first solutions in 1850. Nauck also extended the puzzle to the n queens problem, with n queens on a chessboard of n×n squares. - -Since then, many mathematicians, including Carl Friedrich Gauss, have worked on both the eight queens puzzle and its generalized n-queens version. In 1874, S. Gunther proposed a method using determinants to find solutions. - -These solutions exhibit stair-stepped patterns, as in the following examples for n = 8, 9 and 10: - -The examples above can be obtained with the following formulas. Let (i, j) be the square in column i and row j on the n × n chessboard, k an integer. - -One approach is - -# If the remainder from dividing n by 6 is not 2 or 3 then the list is simply all even numbers followed by all odd numbers not greater than n. - -# Otherwise, write separate lists of even and odd numbers (2, 4, 6, 8 – 1, 3, 5, 7). - -# If the remainder is 2, swap 1 and 3 in odd list and move 5 to the end (3, 1, 7, 5). - -# If the remainder is 3, move 2 to the end of even list and 1,3 to the end of odd list (4, 6, 8, 2 – 5, 7, 1, 3). 
- -# Append the odd list to the even list and place queens in the rows given by these numbers, from left to right (a2, b4, c6, d8, e3, f1, g7, h5). - -For n = 8 this results in fundamental solution 1 above. A few more examples follow. - -* 14 queens (remainder 2): 2, 4, 6, 8, 10, 12, 14, 3, 1, 7, 9, 11, 13, 5. - -* 15 queens (remainder 3): 4, 6, 8, 10, 12, 14, 2, 5, 7, 9, 11, 13, 15, 1, 3. - -* 20 queens (remainder 2): 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 3, 1, 7, 9, 11, 13, 15, 17, 19, 5. - -The following tables give the number of solutions for placing n queens on an n × n board, both fundamental and all (tables not reproduced here). - -The six queens puzzle has fewer solutions than the five queens puzzle. - -There is no known formula for the exact number of solutions. The 27×27 board is the highest-order board that has been completely enumerated. - -The number of solutions has asymptotics of the form $\mathcal{Q}(n) = ((1 \pm o(1))ne^{-\alpha})^n$ with $\alpha = 1.942 \pm 3 \times 10^{-3}$, while the asymptotics for a toroidal chessboard is $T(n)\leq ((1+o(1))ne^{-3})^n$ with equality whenever $n \equiv 1,5 \mod 6.$ - -*Higher dimensions - -Find the number of non-attacking queens that can be placed in a d-dimensional chess space of size n. More than n queens can be placed in some higher dimensions (the smallest example is four non-attacking queens in a 3×3×3 chess space), and it is in fact known that for any k, there are higher dimensions where $n^k$ queens do not suffice to attack all spaces. - -*Using pieces other than queens - -On an 8×8 board one can place 32 knights, or 14 bishops, 16 kings or eight rooks, so that no two pieces attack each other. Fairy chess pieces have also been substituted for queens. In the case of knights, an easy solution is to place one on each square of a given color, since they move only to the opposite color. The solution is also easy for rooks and kings. Eight rooks can be placed along a long diagonal (amongst thousands of other solutions), and 16 kings are placed on the board by dividing it into 2 by 2 squares and placing the kings at equivalent points on each square. - -*Chess variations - -Related problems can be asked for chess variations such as shogi. For instance, the n+k dragon kings problem asks to place k shogi pawns and n+k mutually nonattacking dragon kings on an n×n shogi board. - -*Permutation matrix - -In mathematics, a permutation matrix can be regarded geometrically as a set of n points lying on the squares of an n×n chessboard, such that each row or column contains only one point. Thus, an order-n permutation matrix is a solution to an n-rooks puzzle. - -*Nonstandard boards - -Pólya studied the n queens problem on a toroidal ("donut-shaped") board and showed that there is a solution on an n×n board if and only if n is not divisible by 2 or 3. In 2009 Pearson and Pearson algorithmically populated three-dimensional boards (n×n×n) with $n^2$ queens, and proposed that multiples of these can yield solutions for a four-dimensional version of the puzzle. - -*Domination - -Given an n×n board, the domination number is the minimum number of queens (or other pieces) needed to attack or occupy every square. For n = 8 the queen's domination number is 5. - -*Queens and other pieces - -Variants include mixing queens with other pieces; for example, placing m queens and m knights on an n×n board so that no piece attacks another, or placing queens and pawns so that no two queens attack each other.
- -*Magic squares - -In 1992, Demirörs, Rafraf, and Tanik published a method for converting some magic squares into n-queens solutions, and vice versa. - -*Latin squares - -In an n×n matrix, place each digit 1 through n in n locations in the matrix so that no two instances of the same digit are in the same row or column. - -*Exact cover - -Consider a matrix with one primary column for each of the n ranks of the board, one primary column for each of the n files, and one secondary column for each of the 4n − 6 nontrivial diagonals of the board. The matrix has $n^2$ rows: one for each possible queen placement, and each row has a 1 in the columns corresponding to that square's rank, file, and diagonals and a 0 in all the other columns. Then the n queens problem is equivalent to choosing a subset of the rows of this matrix such that every primary column has a 1 in precisely one of the chosen rows and every secondary column has a 1 in at most one of the chosen rows; this is an example of a generalized exact cover problem, of which sudoku is another example. - -* n-Queens Completion - -A 2017 paper investigated the problem "Given an n×n chessboard on which some queens are already placed, can you place a queen in every remaining row so that no two queens attack each other?" and several related problems. The authors asserted that these problems are NP-complete and #P-complete. - -Finding all solutions to the eight queens puzzle is a good example of a simple but nontrivial problem. For this reason, it is often used as an example problem for various programming techniques, including nontraditional approaches such as constraint programming, logic programming or genetic algorithms. Most often, it is used as an example of a problem that can be solved with a recursive algorithm, by phrasing the n queens problem inductively in terms of adding a single queen to any solution to the problem of placing n − 1 queens on an n×n chessboard. The induction bottoms out with the solution to the 'problem' of placing 0 queens on the chessboard, which is the empty chessboard. - -This technique can be used in a way that is much more efficient than the naïve brute-force search algorithm, which considers all $64^8 = 2^{48}$ = 281,474,976,710,656 possible blind placements of eight queens, and then filters these to remove all placements that place two queens either on the same square (leaving only 64!/56! = 178,462,987,637,760 possible placements) or in mutually attacking positions. This very poor algorithm will, among other things, produce the same results over and over again in all the different permutations of the assignments of the eight queens, as well as repeating the same computations over and over again for the different subsets of each solution. A better brute-force algorithm places a single queen on each row, leading to only $8^8 = 2^{24}$ = 16,777,216 blind placements. - -It is possible to do much better than this. - -One algorithm solves the eight rooks puzzle by generating the permutations of the numbers 1 through 8 (of which there are 8! = 40,320), and uses the elements of each permutation as indices to place a queen on each row. - -Then it rejects those boards with diagonal attacking positions. - -The backtracking depth-first search program, a slight improvement on the permutation method, constructs the search tree by considering one row of the board at a time, eliminating most nonsolution board positions at a very early stage in their construction.
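As an illustration of the row-by-row backtracking just described, here is a minimal Python sketch (an illustration for this discussion, not code from the original article) that counts solutions by placing one queen per row and pruning as soon as a column or diagonal is already attacked:

def count_n_queens(n):
    # Place one queen per row; prune any square whose column or diagonal is taken.
    def place(row, cols, diags, antidiags):
        if row == n:
            return 1  # every row has a queen: one complete solution
        total = 0
        for col in range(n):
            if col in cols or (row - col) in diags or (row + col) in antidiags:
                continue  # square is attacked; prune this branch
            total += place(row + 1, cols | {col},
                           diags | {row - col}, antidiags | {row + col})
        return total
    return place(0, frozenset(), frozenset(), frozenset())

print(count_n_queens(8))  # 92, the total number of solutions for n = 8

Because attacked squares are rejected as soon as they arise, only a tiny fraction of the 16,777,216 blind placements mentioned above is ever examined.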
- -Because this backtracking program rejects rook and diagonal attacks even on incomplete boards, it examines only 15,720 possible queen placements. - -A further improvement, which examines only 5,508 possible queen placements, is to combine the permutation-based method with the early pruning method: the permutations are generated depth-first, and the search space is pruned if the partial permutation produces a diagonal attack. - -Constraint programming can also be very effective on this problem. - -An alternative to exhaustive search is an 'iterative repair' algorithm, which typically starts with all queens on the board, for example with one queen per column. It then counts the number of conflicts (attacks), and uses a heuristic to determine how to improve the placement of the queens. The 'minimum-conflicts' heuristic – moving the piece with the largest number of conflicts to the square in the same column where the number of conflicts is smallest – is particularly effective: it finds a solution to the 1,000,000 queen problem in less than 50 steps on average. This assumes that the initial configuration is 'reasonably good' – if a million queens all start in the same row, it will take at least 999,999 steps to fix it. A 'reasonably good' starting point can for instance be found by putting each queen in its own row and column so that it conflicts with the smallest number of queens already on the board. - -Unlike the backtracking search outlined above, iterative repair does not guarantee a solution: like all greedy procedures, it may get stuck on a local optimum. (In such a case, the algorithm may be restarted with a different initial configuration.) On the other hand, it can solve problem sizes that are several orders of magnitude beyond the scope of a depth-first search. - -An animation (not reproduced here) illustrates backtracking to solve the problem: a queen is placed in a column that is known not to cause conflict; if no such column is found, the program returns to the last good state and then tries a different column. - -As an alternative to backtracking, solutions can be counted by recursively enumerating valid partial solutions, one row at a time. Rather than constructing entire board positions, blocked diagonals and columns are tracked with bitwise operations. This does not allow the recovery of individual solutions. - -The following Pascal program, published by Niklaus Wirth in 1976, finds one solution to the eight queens problem. - -program eightqueen1(output); - -var i : integer; q : boolean; - -a : array[ 1 .. 8] of boolean; { a[j]: column j is free } - -b : array[ 2 .. 16] of boolean; { b[i+j]: this diagonal is free } - -c : array[-7 .. 7] of boolean; { c[i-j]: this antidiagonal is free } - -x : array[ 1 .. 8] of integer; { x[i]: column of the queen in row i } - -procedure try( i : integer; var q : boolean); - -var j : integer; - -begin - -j := 0; - -repeat - -j := j + 1; - -q := false; - -if a[ j] and b[ i + j] and c[ i - j] then - -begin - -x[ i ] := j; - -a[ j] := false; - -b[ i + j] := false; - -c[ i - j] := false; - -if i < 8 then - -begin - -try( i + 1, q); - -if not q then - -begin - -a[ j] := true; - -b[ i + j] := true; - -c[ i - j] := true; - -end - -end - -else - -q := true - -end - -until q or (j = 8); - -end; - -begin - -for i := 1 to 8 do a[ i] := true; - -for i := 2 to 16 do b[ i] := true; - -for i := -7 to 7 do c[ i] := true; - -try( 1, q); - -if q then - -for i := 1 to 8 do write( x[ i]:4); - -writeln - -end. - -*In the game The 7th Guest, the 8th Puzzle: "The Queen's Dilemma" in the game room of the Stauf mansion is the de facto eight queens puzzle.
- -*In the game Professor Layton and the Curious Village, the 130th puzzle: "Too Many Queens 5" is an eight queens puzzle. diff --git a/wiki/wikipedia/3219.txt b/wiki/wikipedia/3219.txt deleted file mode 100644 index 7af4da333ca4109720a083c38757dd4de1c6616d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3219.txt +++ /dev/null @@ -1,7 +0,0 @@ -In algebraic geometry, the Ramanujam–Samuel theorem gives conditions for a divisor of a local ring to be principal. - -It was introduced independently by Pierre Samuel, in answer to a question of Grothendieck, and by C. P. Ramanujam, in an appendix to a paper of Seshadri; it was later generalized by Grothendieck. - -Grothendieck's version of the Ramanujam–Samuel theorem is as follows. - -Suppose that A is a local Noetherian ring with maximal ideal m, whose completion is integral and integrally closed, and ρ is a local homomorphism from A to a local Noetherian ring B of larger dimension such that B is formally smooth over A and the residue field of B is finite over that of A. Then a cycle of codimension 1 in Spec(B) that is principal at the point mB is principal. diff --git a/wiki/wikipedia/322.txt b/wiki/wikipedia/322.txt deleted file mode 100644 index 8fdf8b6f287fa41c5348ab9eba16a7295424c856..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/322.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematical finite group theory, Thompson's original uniqueness theorem states that in a minimal simple finite group of odd order there is a unique maximal subgroup containing a given elementary abelian subgroup of rank 3. Bender gave a shorter proof of the uniqueness theorem. diff --git a/wiki/wikipedia/3220.txt b/wiki/wikipedia/3220.txt deleted file mode 100644 index 1aae20433e3289b3da21795684777d761119e043..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3220.txt +++ /dev/null @@ -1,29 +0,0 @@ -The Erdős–Gallai theorem is a result in graph theory, a branch of combinatorial mathematics. It provides one of two known approaches to solving the graph realization problem, i.e. it gives a necessary and sufficient condition for a finite sequence of natural numbers to be the degree sequence of a simple graph. A sequence obeying these conditions is called "graphic". The theorem was published in 1960 by Paul Erdős and Tibor Gallai, after whom it is named. - -A sequence of non-negative integers $d_1\geq\cdots\geq d_n$ can be represented as the degree sequence of a finite simple graph on n vertices if and only if $d_1+\cdots+d_n$ is even and - -$$ -\sum^{k}_{i=1}d_i\leq k(k-1)+ \sum^n_{i=k+1} \min (d_i,k) -$$ - -holds for every k in $1\leq k\leq n$. - -It is not difficult to show that the conditions of the Erdős–Gallai theorem are necessary for a sequence of numbers to be graphic. The requirement that the sum of the degrees be even is the handshaking lemma, already used by Euler in his 1736 paper on the bridges of Königsberg. The inequality between the sum of the $k$ largest degrees and the sum of the remaining degrees can be established by double counting: the left side gives the number of edge–vertex adjacencies among the $k$ highest-degree vertices; each such adjacency must be on an edge with either one or two high-degree endpoints; the $k(k-1)$ term on the right gives the maximum possible number of adjacencies in which both endpoints have high degree; and the remaining term on the right upper-bounds the number of edges that have exactly one high-degree endpoint.
Thus, the more difficult part of the proof is to show that, for any sequence of numbers obeying these conditions, there exists a graph for which it is the degree sequence. - -The original proof of Erdős was long and involved. Choudum cites a shorter proof by Claude Berge, based on ideas of network flow. Choudum instead provides a proof by mathematical induction on the sum of the degrees: he lets $t$ be the first index of a number in the sequence for which $d_t > d_{t+1}$ (or the penultimate number if all are equal), uses a case analysis to show that the sequence formed by subtracting one from $d_t$ and from the last number in the sequence (and removing the last number if this subtraction causes it to become zero) is again graphic, and forms a graph representing the original sequence by adding an edge between the two positions from which one was subtracted. - -Tripathi and Vijay consider a sequence of "subrealizations", graphs whose degrees are upper bounded by the given degree sequence. They show that, if G is a subrealization, and i is the smallest index of a vertex in G whose degree is not equal to $d_i$, then G may be modified in a way that produces another subrealization, increasing the degree of vertex i without changing the degrees of the earlier vertices in the sequence. Repeated steps of this kind must eventually reach a realization of the given sequence, proving the theorem. - -Aigner and Triesch describe close connections between the Erdős–Gallai theorem and the theory of integer partitions. - -Let $m=\sum d_i$; then the sorted integer sequences summing to $m$ may be interpreted as the partitions of $m$. Under majorization of their prefix sums, the partitions form a lattice, in which the minimal change between an individual partition and another partition lower in the partition order is to subtract one from one of the numbers $d_i$ and add it to a number $d_{j}$ that is smaller by at least two ($d_{j}$ could be zero). As Aigner and Triesch show, this operation preserves the property of being graphic, so to prove the Erdős–Gallai theorem it suffices to characterize the graphic sequences that are maximal in this majorization order. They provide such a characterization, in terms of the Ferrers diagrams of the corresponding partitions, and show that it is equivalent to the Erdős–Gallai theorem. - -Similar theorems describe the degree sequences of simple directed graphs, simple directed graphs with loops, and simple bipartite graphs. The first problem is characterized by the Fulkerson–Chen–Anstee theorem. The latter two cases, which are equivalent, are characterized by the Gale–Ryser theorem. - -Tripathi proved that it suffices to consider the $k$th inequality only for those $k$ with $1 \leq k < n$ and $d_k > d_{k+1}$, and for $k = n$. Barrus restricts the set of inequalities in the opposite direction: if an even-summed positive sequence d has no repeated entries other than the maximum and the minimum (and the length exceeds the largest entry), then it suffices to check only the $l$th inequality, where $l = \max\{k \mid d_k \geq k\}$. - -A finite sequence of nonnegative integers $(d_1,\cdots,d_n)$ with $d_1 \geq \cdots \geq d_n$ is graphic if $\sum_{i=1}^{n}d_i$ is even and there exists a sequence $(c_1,\cdots,c_n)$ that is graphic and majorizes $(d_1,\cdots,d_n)$. This result was given by Aigner and Triesch. Mahadev reinvented it and gave a more direct proof.
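To make the statement concrete, here is a short Python sketch (an illustration, not code from the original article) that tests whether a sequence is graphic by checking the parity condition and the Erdős–Gallai inequalities directly:

def is_graphic(degrees):
    # Erdős–Gallai test: a sequence of non-negative integers is the degree
    # sequence of a simple graph iff the sum is even and, after sorting in
    # non-increasing order, every prefix inequality holds.
    d = sorted(degrees, reverse=True)
    n = len(d)
    if any(x < 0 for x in d) or sum(d) % 2 != 0:
        return False
    for k in range(1, n + 1):
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(x, k) for x in d[k:])
        if lhs > rhs:
            return False
    return True

print(is_graphic([3, 3, 3, 3]))  # True: the degree sequence of K4
print(is_graphic([3, 3, 1, 1]))  # False: fails at k = 2, since 6 > 2 + 1 + 1 = 4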
diff --git a/wiki/wikipedia/3221.txt b/wiki/wikipedia/3221.txt deleted file mode 100644 index c72866cb2bfcfb17ff167962e1327af9058be033..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3221.txt +++ /dev/null @@ -1,341 +0,0 @@ -This article collects together a variety of proofs of Fermat's little theorem, which states that -$$ -a^p \equiv a \pmod p -$$ - -for every prime number p and every integer a (see modular arithmetic). - -Some of the proofs of Fermat's little theorem given below depend on two simplifications. - -The first is that we may assume that a is in the range 0 ≤ a ≤ p − 1. This is a simple consequence of the laws of modular arithmetic; we are simply saying that we may first reduce a modulo p. This is consistent with reducing $a^p$ modulo p, as one can check. - -Secondly, it suffices to prove that -$$ -a^{p-1} \equiv 1 \pmod p -$$ - -for a in the range 1 ≤ a ≤ p − 1. Indeed, if the previous assertion holds for such a, multiplying both sides by a yields the original form of the theorem, -$$ -a^p \equiv a \pmod p -$$ - -On the other hand, if a = 0 or a = 1, the theorem holds trivially. - -This is perhaps the simplest known proof, requiring the least mathematical background. It is an attractive example of a combinatorial proof (a proof that involves counting a collection of objects in two different ways). - -The proof given here is an adaptation of Golomb's proof. - -To keep things simple, let us assume that a is a positive integer. Consider all the possible strings of p symbols, using an alphabet with a different symbols. The total number of such strings is $a^p$, since there are a possibilities for each of p positions (see rule of product). - -For example, if p = 5 and a = 2, then we can use an alphabet with two symbols (say A and B), and there are $2^5 = 32$ strings of length 5: - -AAAAA, AAAAB, AAABA, AAABB, AABAA, AABAB, AABBA, AABBB, - -ABAAA, ABAAB, ABABA, ABABB, ABBAA, ABBAB, ABBBA, ABBBB, - -BAAAA, BAAAB, BAABA, BAABB, BABAA, BABAB, BABBA, BABBB, - -BBAAA, BBAAB, BBABA, BBABB, BBBAA, BBBAB, BBBBA, BBBBB. - -We will argue below that if we remove the strings consisting of a single symbol from the list (in our example, AAAAA and BBBBB), the remaining $a^p - a$ strings can be arranged into groups, each group containing exactly p strings. It follows that $a^p - a$ is divisible by p. - -Let us think of each such string as representing a necklace. That is, we connect the two ends of the string together and regard two strings as the same necklace if we can rotate one string to obtain the second string; in this case we will say that the two strings are friends. In our example, the following strings are all friends: - -AAAAB, AAABA, AABAA, ABAAA, BAAAA. - -In full, each line of the following list corresponds to a single necklace, and the entire list comprises all 32 strings. - -AAAAB, AAABA, AABAA, ABAAA, BAAAA, - -AAABB, AABBA, ABBAA, BBAAA, BAAAB, - -AABAB, ABABA, BABAA, ABAAB, BAABA, - -AABBB, ABBBA, BBBAA, BBAAB, BAABB, - -ABABB, BABBA, ABBAB, BBABA, BABAB, - -ABBBB, BBBBA, BBBAB, BBABB, BABBB, - -AAAAA, - -BBBBB. - -Notice that in the above list, each necklace with more than one symbol is represented by 5 different strings, and the number of necklaces represented by just one string is 2, i.e. is the number of distinct symbols. Thus the list shows very clearly why 32 − 2 is divisible by 5.
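The grouping into necklaces can also be checked mechanically. The following Python sketch (illustrative only, not part of the original article) partitions the strings over a 2-symbol alphabet into rotation classes for p = 5 and confirms that every class of non-constant strings has exactly 5 members:

from itertools import product

a, p = 2, 5
strings = ["".join(s) for s in product("AB", repeat=p)]  # all a**p strings

classes = {}
for s in strings:
    canon = min(s[i:] + s[:i] for i in range(p))  # canonical rotation = the necklace
    classes.setdefault(canon, []).append(s)

sizes = [len(group) for group in classes.values()]
print(sorted(sizes))        # [1, 1, 5, 5, 5, 5, 5, 5]: two constant strings, six necklaces of 5
print((a**p - a) % p == 0)  # True: 32 - 2 = 30 is divisible by 5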
- -One can use the following rule to work out how many friends a given string S has: - -If S is built up of several copies of the string T, and T cannot itself be broken down further into repeating strings, then the number of friends of S (including S itself) is equal to the length of T. - -For example, suppose we start with the string S = ABBABBABBABB, which is built up of several copies of the shorter string T = ABB. If we rotate it one symbol at a time, we obtain the following 3 strings: - -ABBABBABBABB, - -BBABBABBABBA, - -BABBABBABBAB. - -There aren't any others, because ABB is exactly 3 symbols long and cannot be broken down into further repeating strings. - -Using the above rule, we can complete the proof of Fermat's little theorem quite easily, as follows. Our starting pool of $a^p$ strings may be split into two categories: - -* Some strings contain p identical symbols. There are exactly a of these, one for each symbol in the alphabet. (In our running example, these are the strings AAAAA and BBBBB.) - -* The rest of the strings use at least two distinct symbols from the alphabet. If we can break up S into repeating copies of some string T, the length of T must divide the length of S. But, since the length of S is the prime p, the only possible length for T is also p. Therefore, the above rule tells us that S has exactly p friends (including S itself). - -The second category contains $a^p - a$ strings, and they may be arranged into groups of p strings, one group for each necklace. Therefore, $a^p - a$ must be divisible by p, as promised. - -This proof uses some basic concepts from dynamical systems. - -We start by considering a family of functions $T_n(x)$, where n ≥ 2 is an integer, mapping the interval [0, 1] to itself by the formula - -T_n(x) = \begin{cases} - -\{ nx \} & 0 \leq x < 1, \\ - -1 & x = 1, - -\end{cases} - -where {y} denotes the fractional part of y. For example, the function $T_3(x)$ is illustrated in a figure (not reproduced here). - -A number $x_0$ is said to be a fixed point of a function f(x) if $f(x_0) = x_0$; in other words, if f leaves $x_0$ fixed. The fixed points of a function can be easily found graphically: they are simply the x coordinates of the points where the graph of f(x) intersects the graph of the line y = x. For example, the fixed points of the function $T_3(x)$ are 0, 1/2, and 1; in the original diagram they are marked by black circles. - -We will require the following two lemmas. - -Lemma 1. For any n ≥ 2, the function $T_n(x)$ has exactly n fixed points. - -Proof. There are 3 fixed points in the illustration above, and the same sort of geometrical argument applies for any n ≥ 2. - -Lemma 2. For any positive integers n and m, and any 0 ≤ x ≤ 1, -$$ -T_m(T_n(x)) = T_{mn}(x). -$$ - -In other words, $T_{mn}(x)$ is the composition of $T_n(x)$ and $T_m(x)$. - -Proof. The proof of this lemma is not difficult, but we need to be slightly careful with the endpoint x = 1. For this point the lemma is clearly true, since -$$ -T_m(T_n(1)) = T_m(1) = 1 = T_{mn}(1). -$$ - -So let us assume that 0 ≤ x < 1. In this case, -$$ -T_n(x) = \{nx\} < 1, -$$ - -so $T_m(T_n(x))$ is given by -$$ -T_m(T_n(x)) = \{m\{nx\}\}. -$$ - -Therefore, what we really need to show is that -$$ -\{m\{nx\}\} = \{mnx\}. -$$ - -To do this we observe that {nx} = nx − k, where k is the integer part of nx; then -$$ -\{m\{nx\}\} = \{mnx - mk\} = \{mnx\}, -$$ - -since mk is an integer. - -Now let us properly begin the proof of Fermat's little theorem, by studying the function $T_{a^p}(x)$. We will assume that a ≥ 2. From Lemma 1, we know that it has $a^p$ fixed points.
By Lemma 2 we know that -$$ -T_{a^p}(x) = \underbrace{T_a(T_a( \cdots T_a(x) \cdots ))}_{p\text{ times}}, -$$ - -so any fixed point of $T_a(x)$ is automatically a fixed point of $T_{a^p}(x)$. - -We are interested in the fixed points of $T_{a^p}(x)$ that are not fixed points of $T_a(x)$. Let us call the set of such points S. There are $a^p - a$ points in S, because by Lemma 1 again, $T_a(x)$ has exactly a fixed points. A diagram (not reproduced here) illustrates the situation for a = 3 and p = 2; the black circles are the points of S, of which there are $3^2 - 3 = 6$. - -The main idea of the proof is now to split the set S up into its orbits under $T_a$. What this means is that we pick a point $x_0$ in S, and repeatedly apply $T_a(x)$ to it, to obtain the sequence of points -$$ - x_0, T_a(x_0), T_a(T_a(x_0)), T_a(T_a(T_a(x_0))), \ldots. -$$ - -This sequence is called the orbit of $x_0$ under $T_a$. By Lemma 2, this sequence can be rewritten as -$$ - x_0, T_a(x_0), T_{a^2}(x_0), T_{a^3}(x_0), \ldots. -$$ - -Since we are assuming that $x_0$ is a fixed point of $T_{a^p}(x)$, after p steps we hit $T_{a^p}(x_0) = x_0$, and from that point onwards the sequence repeats itself. - -However, the sequence cannot begin repeating itself any earlier than that. If it did, the length of the repeating section would have to be a divisor of p, so it would have to be 1 (since p is prime). But this contradicts our assumption that $x_0$ is not a fixed point of $T_a$. - -In other words, the orbit contains exactly p distinct points. This holds for every orbit of S. Therefore, the set S, which contains $a^p - a$ points, can be broken up into orbits, each containing p points, so $a^p - a$ is divisible by p. - -(This proof is essentially the same as the necklace-counting proof given above, simply viewed through a different lens: one may think of the interval [0, 1] as given by sequences of digits in base a (our distinction between 0 and 1 corresponding to the familiar distinction between representing integers as ending in ".0000..." and ".9999..."). $T_{a^n}$ amounts to shifting such a sequence by n many digits. The fixed points of this will be sequences that are cyclic with period dividing n. In particular, the fixed points of $T_{a^p}$ can be thought of as the necklaces of length p, with $T_{a^n}$ corresponding to rotation of such necklaces by n spots. - -This proof could also be presented without distinguishing between 0 and 1, simply using the half-open interval [0, 1); then $T_n$ would only have n − 1 fixed points, but the difference in the numbers of fixed points of $T_{a^p}$ and $T_a$ would still work out to $a^p - a$, as needed.) - -This proof, due to Euler, discovered by James Ivory and rediscovered by Dirichlet, requires some background in modular arithmetic. - -Let us assume that a is positive and not divisible by p. - -The idea is that if we write down the sequence of numbers $a, 2a, 3a, \ldots, (p-1)a$ - -and reduce each one modulo p, the resulting sequence turns out to be a rearrangement of $1, 2, 3, \ldots, p-1.$ - -Therefore, if we multiply together the numbers in each sequence, the results must be identical modulo p: -$$ -a \times 2a \times 3a \times \cdots \times (p-1)a \equiv 1 \times 2 \times 3 \times \cdots \times (p-1) \pmod p. -$$ - -Collecting together the a terms yields -$$ -a^{p-1} (p-1)! \equiv (p-1)! \pmod p. -$$ - -Finally, we may “cancel out” the numbers 1, 2, ..., p − 1 from both sides of this equation, obtaining -$$ -a^{p-1} \equiv 1 \pmod p. -$$ - -There are two steps in the above proof that we need to justify: - -* Why the elements of the sequence $a, 2a, \ldots, (p-1)a$, reduced modulo p, are a rearrangement of $1, 2, \ldots, p-1$, and - -* Why it is valid to “cancel” in the setting of modular arithmetic.
- -We will prove these things below; let us first see an example of this proof in action. - -If a = 3 and p = 7, then the sequence in question is -$$ -3, 6, 9, 12, 15, 18; -$$ - -reducing modulo 7 gives -$$ -3, 6, 2, 5, 1, 4, -$$ - -which is just a rearrangement of -$$ -1, 2, 3, 4, 5, 6. -$$ - -Multiplying them together gives -$$ -3 \times 6 \times 9 \times 12 \times 15 \times 18 \equiv 3 \times 6 \times 2 \times 5 \times 1 \times 4 \equiv 1 \times 2 \times 3 \times 4 \times 5 \times 6 \pmod 7; -$$ - -that is, -$$ -3^6 (1 \times 2 \times 3 \times 4 \times 5 \times 6) \equiv (1 \times 2 \times 3 \times 4 \times 5 \times 6) \pmod 7. -$$ - -Canceling out 1 × 2 × 3 × 4 × 5 × 6 yields -$$ -3^6 \equiv 1 \pmod 7, -$$ - -which is Fermat's little theorem for the case a = 3 and p = 7. - -Let us first explain why it is valid, in certain situations, to “cancel”. The exact statement is as follows. If u, x, and y are integers, and u is not divisible by a prime number p, and if -$$ -ux \equiv uy \pmod p, -$$ - -then we may “cancel” u to obtain -$$ -x \equiv y \pmod p. -$$ - -Our use of this cancellation law in the above proof of Fermat's little theorem was valid, because the numbers 1, 2, ..., p − 1 are certainly not divisible by p (indeed they are smaller than p). - -We can prove the cancellation law easily using Euclid's lemma, which generally states that if a prime p divides a product ab (where a and b are integers), then p must divide a or b. Indeed, the assumption $ux \equiv uy \pmod p$ simply means that p divides ux − uy = u(x − y). Since p is a prime which does not divide u, Euclid's lemma tells us that it must divide x − y instead; that is, $x \equiv y \pmod p$ holds. - -Note that the conditions under which the cancellation law holds are quite strict, and this explains why Fermat's little theorem demands that p is a prime. For example, 2×2 ≡ 2×5 (mod 6), but it is not true that 2 ≡ 5 (mod 6). However, the following generalization of the cancellation law holds: if u, x, y, and z are integers, if u and z are relatively prime, and if -$$ -ux \equiv uy \pmod z, -$$ - -then we may “cancel” u to obtain -$$ -x \equiv y \pmod z. -$$ - -This follows from a generalization of Euclid's lemma. - -Finally, we must explain why the sequence -$$ -a, 2a, 3a, \ldots, (p-1)a, -$$ - -when reduced modulo p, becomes a rearrangement of the sequence -$$ -1, 2, 3, \ldots, p-1. -$$ - -To start with, none of the terms a, 2a, ..., (p − 1)a can be congruent to zero modulo p, since if k is one of the numbers 1, 2, ..., p − 1, then k is relatively prime with p, and so is a, so Euclid's lemma tells us that ka shares no factor with p. Therefore, at least we know that the numbers a, 2a, ..., (p − 1)a, when reduced modulo p, must be found among the numbers 1, 2, 3, ..., p − 1. - -Furthermore, the numbers a, 2a, ..., (p − 1)a must all be distinct after reducing them modulo p, because if -$$ -ka \equiv ma \pmod p, -$$ - -where k and m are one of 1, 2, ..., p − 1, then the cancellation law tells us that -$$ -k \equiv m \pmod p. -$$ - -Since both k and m are between 1 and p − 1, they must be equal. Therefore, the terms a, 2a, ..., (p − 1)a when reduced modulo p must be distinct. - -To summarise: when we reduce the p − 1 numbers a, 2a, ..., (p − 1)a modulo p, we obtain distinct members of the sequence 1, 2, ..., p − 1. Since there are exactly p − 1 of these, the only possibility is that the former are a rearrangement of the latter.
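Both the rearrangement property and the resulting congruence are easy to check numerically; here is a quick Python sketch (an illustration, not part of the proof):

p, a = 7, 3
multiples = sorted(a * k % p for k in range(1, p))
print(multiples == list(range(1, p)))  # True: a, 2a, ..., (p-1)a mod p rearranges 1, ..., p-1
print(pow(a, p - 1, p) == 1)           # True: a^(p-1) == 1 (mod p), Fermat's little theorem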
- -This method can also be used to prove Euler's theorem, with a slight alteration in that the numbers from 1 to p − 1 are substituted by the numbers less than and coprime with some number m (not necessarily prime). Both the rearrangement property and the cancellation law (under the generalized form mentioned above) are still satisfied and can be utilized. - -For example, if m = 10, then the numbers less than m and coprime with m are 1, 3, 7, and 9. Thus we have: -$$ -a \times 3a \times 7a \times 9a \equiv 1 \times 3 \times 7 \times 9 \pmod {10}. -$$ - -Therefore, -$$ -{a^{\varphi(10)}} \equiv 1 \pmod {10}. -$$ - -This proof requires the most basic elements of group theory. - -The idea is to recognise that the set G = {1, 2, …, p − 1}, with the operation of multiplication (taken modulo p), forms a group. The only group axiom that requires some effort to verify is that each element of G is invertible. Taking this on faith for the moment, let us assume that a is in the range 1 ≤ a ≤ p − 1, that is, a is an element of G. Let k be the order of a, that is, k is the smallest positive integer such that $a^k \equiv 1 \pmod p$. Then the numbers $1, a, a^2, \ldots, a^{k-1}$ reduced modulo p form a subgroup of G whose order is k and therefore, by Lagrange's theorem, k divides the order of G, which is p − 1. So p − 1 = km for some positive integer m and then -$$ -a^{p-1} \equiv a^{km} \equiv (a^k)^m \equiv 1^m \equiv 1 \pmod p. -$$ - -To prove that every element b of G is invertible, we may proceed as follows. First, b is coprime to p. Thus Bézout's identity assures us that there are integers x and y such that bx + py = 1. Reading this equality modulo p, we see that x is an inverse for b, since bx ≡ 1 (mod p). Therefore, every element of G is invertible. So, as remarked earlier, G is a group. - -For example, when p = 11, the inverses of each element can be tabulated (table not reproduced here). - -If we take the previous proof and, instead of using Lagrange's theorem, we try to prove it in this specific situation, then we get Euler's third proof, which is the one that he found more natural. Let A be the set whose elements are the numbers $1, a, a^2, \ldots, a^{k-1}$ reduced modulo p. If A = G, then k = p − 1 and therefore k divides p − 1. Otherwise, there is some $b_1 \in G \setminus A$. - -Let $A_1$ be the set whose elements are the numbers $b_1, ab_1, a^2b_1, \ldots, a^{k-1}b_1$ reduced modulo p. Then $A_1$ has k distinct elements, because otherwise there would be two distinct numbers m, n ∈ {0, 1, ..., k − 1} such that $a^m b_1 \equiv a^n b_1 \pmod p$, which is impossible, since it would follow that $a^m \equiv a^n \pmod p$. On the other hand, no element of $A_1$ can be an element of A, because otherwise there would be numbers m, n ∈ {0, 1, …, k − 1} such that $a^m b_1 \equiv a^n \pmod p$, and then $b_1 \equiv a^n a^{k-m} \equiv a^{n+k-m} \pmod p$, which is impossible, since $b_1 \notin A$. - -So, the set $A \cup A_1$ has 2k elements. If it turns out to be equal to G, then 2k = p − 1 and therefore k divides p − 1. Otherwise, there is some $b_2 \in G \setminus (A \cup A_1)$ and we can start all over again, defining $A_2$ as the set whose elements are the numbers $b_2, ab_2, a^2b_2, \ldots, a^{k-1}b_2$ reduced modulo p. Since G is finite, this process must stop at some point and this proves that k divides p − 1. - -For instance, if a = 5 and p = 13, then, since - -* $5^2 = 25 \equiv 12 \pmod{13}$, - -* $5^3 = 125 \equiv 8 \pmod{13}$, - -* $5^4 = 625 \equiv 1 \pmod{13}$, - -we have k = 4 and A = {1, 5, 8, 12}. Clearly, A ≠ G = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}. Let $b_1$ be an element of $G \setminus A$; for instance, take $b_1 = 2$.
Then, since - -* 2×1 = 2, - -* 2×5 = 10, - -* 2×8 = 16 ≡ 3 (mod 13), - -* 2×12 = 24 ≡ 11 (mod 13), - -we have $A_1$ = {2, 3, 10, 11}. Clearly, $A \cup A_1 \neq G$. Let $b_2$ be an element of $G \setminus (A \cup A_1)$; for instance, take $b_2 = 4$. Then, since - -* 4×1 = 4, - -* 4×5 = 20 ≡ 7 (mod 13), - -* 4×8 = 32 ≡ 6 (mod 13), - -* 4×12 = 48 ≡ 9 (mod 13), - -we have $A_2$ = {4, 6, 7, 9}. And now $G = A \cup A_1 \cup A_2$. - -Note that the sets A, $A_1$, and so on are in fact the cosets of A in G. diff --git a/wiki/wikipedia/3222.txt b/wiki/wikipedia/3222.txt deleted file mode 100644 index 220e92fa585e21ea5f1445664b4c87b61395ad85..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3222.txt +++ /dev/null @@ -1,76 +0,0 @@ -In physics, the no-cloning theorem states that it is impossible to create an independent and identical copy of an arbitrary unknown quantum state, a statement which has profound implications in the field of quantum computing among others. The theorem is an evolution of the 1970 no-go theorem authored by James Park, in which he demonstrates that a non-disturbing measurement scheme which is both simple and perfect cannot exist (the same result would be independently derived in 1982 by Wootters and Zurek as well as Dieks the same year). The aforementioned theorems do not preclude the state of one system becoming entangled with the state of another, as cloning specifically refers to the creation of a separable state with identical factors. For example, one might use the controlled NOT gate and the Walsh–Hadamard gate to entangle two qubits without violating the no-cloning theorem, as no well-defined state may be defined in terms of a subsystem of an entangled state. The no-cloning theorem (as generally understood) concerns only pure states, whereas the generalized statement regarding mixed states is known as the no-broadcast theorem. - -The no-cloning theorem has a time-reversed dual, the no-deleting theorem. Together, these underpin the interpretation of quantum mechanics in terms of category theory, and, in particular, as a dagger compact category. This formulation, known as categorical quantum mechanics, allows, in turn, a connection to be made from quantum mechanics to linear logic as the logic of quantum information theory (in the same sense that intuitionistic logic arises from Cartesian closed categories). - -According to Asher Peres and David Kaiser, the publication of the 1982 proof of the no-cloning theorem by Wootters and Zurek was prompted by a proposal of Nick Herbert for a superluminal communication device using quantum entanglement, and GianCarlo Ghirardi had proven the theorem 18 months prior to the published proof by Wootters and Zurek in his referee report to said proposal (as evidenced by a letter from the editor). However, Ortigoso pointed out in 2018 that a complete proof along with an interpretation in terms of the lack of simple nondisturbing measurements in quantum mechanics was already delivered by Park in 1970. - -Suppose we have two quantum systems A and B with a common Hilbert space $H = H_A = H_B$. Suppose we want to have a procedure to copy the state $|\phi\rangle_A$ of quantum system A, over the state $|e\rangle_B$ of quantum system B, for any original state $|\phi\rangle_A$ (see bra–ket notation). That is, beginning with the state $|\phi\rangle_A \otimes |e\rangle_B $, we want to end up with the state $|\phi\rangle_A \otimes |\phi\rangle_B $. To make a "copy" of the state A, we combine it with system B in some unknown initial, or blank, state $|e\rangle_B$ independent of $|\phi\rangle_A$, of which we have no prior knowledge.
- -The state of the initial composite system is then described by the following tensor product: -$$ -|\phi\rangle_A \otimes |e\rangle_B. -$$ - -(in the following we will omit the $\otimes$ symbol and keep it implicit). - -There are only two permissible quantum operations with which we may manipulate the composite system: - -* We can perform an observation, which irreversibly collapses the system into some eigenstate of an observable, corrupting the information contained in the qubit(s). This is obviously not what we want. - -* Alternatively, we could control the Hamiltonian of the combined system, and thus the time-evolution operator U(t), e.g. for a time-independent Hamiltonian, $U(t) = e^{-iHt/\hbar}$. Evolving up to some fixed time $t_0$ yields a unitary operator U on $H \otimes H$, the Hilbert space of the combined system. However, no such unitary operator U can clone all states. - -The no-cloning theorem answers the following question in the negative: Is it possible to construct a unitary operator U, acting on $H_A \otimes H_B = H \otimes H$, under which the state the system B is in always evolves into the state the system A is in, regardless of the state system A is in? - -Theorem: There is no unitary operator U on $H \otimes H$ such that for all normalised states $|\phi \rangle_A$ and $|e\rangle_B$ in $H$ -$$ -U(|\phi\rangle_A |e\rangle_B) = e^{i \alpha(\phi,e)} |\phi\rangle_A |\phi\rangle_B -$$ - -for some real number $\alpha$ depending on $\phi$ and $e$. - -The extra phase factor expresses the fact that a quantum-mechanical state defines a normalised vector in Hilbert space only up to a phase factor i.e. as an element of projectivised Hilbert space. - -To prove the theorem, we select an arbitrary pair of states $|\phi\rangle_A$ and $|\psi\rangle_A$ in the Hilbert space $H$. Because U is supposed to be unitary, we would have - - - -\langle \phi| \psi\rangle \langle e | e \rangle \equiv - -\langle \phi|_A \langle e|_B |\psi\rangle_A |e\rangle_B = - -\langle \phi|_A \langle e|_B U^\dagger U |\psi\rangle_A |e\rangle_B = - -e^{-i(\alpha(\phi, e) - \alpha(\psi, e))} \langle \phi|_A \langle \phi|_B |\psi\rangle_A |\psi\rangle_B \equiv - -e^{-i(\alpha(\phi, e) - \alpha(\psi, e))} \langle \phi |\psi\rangle^2. - - - -Since the quantum state $|e\rangle$ is assumed to be normalized, we thus get -$$ - |\langle \phi | \psi \rangle|^2 = |\langle \phi | \psi \rangle|. -$$ - -This implies that either $|\langle \phi | \psi \rangle| = 1$ or $|\langle \phi | \psi \rangle| = 0$. Hence by the Cauchy–Schwarz inequality either $\phi = e^{i\beta}\psi$ or $\phi$ is orthogonal to $\psi$. However, this cannot be the case for two arbitrary states. Therefore, a single universal U cannot clone a general quantum state. This proves the no-cloning theorem. - -Take a qubit for example. It can be represented by two complex numbers, called probability amplitudes (normalised to 1), that is three real numbers (two polar angles and one radius). Copying three numbers on a classical computer using any copy and paste operation is trivial (up to a finite precision) but the problem manifests if the qubit is unitarily transformed (e.g. by the Hadamard quantum gate) to be polarised (which unitary transformation is a surjective isometry). In such a case the qubit can be represented by just two real numbers (one polar angle and one radius equal to 1), while the value of the third can be arbitrary in such a representation. 
Yet a realisation of a qubit (polarisation-encoded photon, for example) is capable of storing the whole qubit information support within its "structure". Thus no single universal unitary evolution U can clone an arbitrary quantum state according to the no-cloning theorem. It would have to depend on the transformed qubit (initial) state and thus would not have been universal. - -In the statement of the theorem, two assumptions were made: the state to be copied is a pure state and the proposed copier acts via unitary time evolution. These assumptions cause no loss of generality. If the state to be copied is a mixed state, it can be purified. Alternately, a different proof can be given that works directly with mixed states; in this case, the theorem is often known as the no-broadcast theorem. Similarly, an arbitrary quantum operation can be implemented via introducing an ancilla and performing a suitable unitary evolution. Thus the no-cloning theorem holds in full generality. - -*The no-cloning theorem prevents the use of certain classical error correction techniques on quantum states. For example, backup copies of a state in the middle of a quantum computation cannot be created and used for correcting subsequent errors. Error correction is vital for practical quantum computing, and for some time it was unclear whether or not it was possible. In 1995, Shor and Steane showed that it is possible by independently devising the first quantum error correcting codes, which circumvent the no-cloning theorem. - -*Similarly, cloning would violate the no-teleportation theorem, which says that it is impossible to convert a quantum state into a sequence of classical bits (even an infinite sequence of bits), copy those bits to some new location, and recreate a copy of the original quantum state in the new location. This should not be confused with entanglement-assisted teleportation, which does allow a quantum state to be destroyed in one location, and an exact copy to be recreated in another location. - -* The no-cloning theorem is implied by the no-communication theorem, which states that quantum entanglement cannot be used to transmit classical information (whether superluminally, or slower). That is, cloning, together with entanglement, would allow such communication to occur. To see this, consider the EPR thought experiment, and suppose quantum states could be cloned. Assume parts of a maximally entangled Bell state are distributed to Alice and Bob. Alice could send bits to Bob in the following way: If Alice wishes to transmit a "0", she measures the spin of her electron in the z direction, collapsing Bob's state to either $|z+\rangle_B$ or $|z-\rangle_B$. To transmit "1", Alice does nothing to her qubit. Bob creates many copies of his electron's state, and measures the spin of each copy in the z direction. Bob will know that Alice has transmitted a "0" if all his measurements produce the same result; otherwise, his measurements will have outcomes $|z+\rangle_B$ or $|z-\rangle_B$ with equal probability. This would allow Alice and Bob to communicate classical bits between each other (possibly across space-like separations, violating causality). - -* Quantum states cannot be discriminated perfectly. - -* The no-cloning theorem prevents an interpretation of the holographic principle for black holes as meaning that there are two copies of information, one lying at the event horizon and the other in the black hole interior. This leads to more radical interpretations, such as black hole complementarity.
- -* The no-cloning theorem applies to all dagger compact categories: there is no universal cloning morphism for any non-trivial category of this kind. Although the theorem is inherent in the definition of this category, it is not trivial to see that this is so; the insight is important, as this category includes things that are not finite-dimensional Hilbert spaces, including the category of sets and relations and the category of cobordisms. - -Even though it is impossible to make perfect copies of an unknown quantum state, it is possible to produce imperfect copies. This can be done by coupling a larger auxiliary system to the system that is to be cloned, and applying a unitary transformation to the combined system. If the unitary transformation is chosen correctly, several components of the combined system will evolve into approximate copies of the original system. In 1996, V. Buzek and M. Hillery showed that a universal cloning machine can make a clone of an unknown state with the surprisingly high fidelity of 5/6. - -Imperfect quantum cloning can be used as an eavesdropping attack on quantum cryptography protocols, among other uses in quantum information science. diff --git a/wiki/wikipedia/3223.txt b/wiki/wikipedia/3223.txt deleted file mode 100644 index 339807b3f6d7067d63049d517759e4120a52abb1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3223.txt +++ /dev/null @@ -1,149 +0,0 @@ -In physics, the n-body problem is the problem of predicting the individual motions of a group of celestial objects interacting with each other gravitationally. Solving this problem has been motivated by the desire to understand the motions of the Sun, Moon, planets, and visible stars. In the 20th century, understanding the dynamics of globular cluster star systems became an important n-body problem. The n-body problem in general relativity is considerably more difficult to solve due to additional factors like time and space distortions. - -The classical physical problem can be informally stated as follows: given the orbital properties (instantaneous position, velocity and time) of a group of celestial bodies, predict their interactive forces, and consequently, predict their true orbital motions for all future times. - -The two-body problem has been completely solved and is discussed below, as well as the famous restricted three-body problem. - -Knowing three orbital positions of a planet's orbit – positions obtained by Sir Isaac Newton from astronomer John Flamsteed – Newton was able to produce an equation by straightforward analytical geometry, to predict a planet's motion; i.e., to give its orbital properties: position, orbital diameter, period and orbital velocity. Having done so, he and others soon discovered, over the course of a few years, that those equations of motion did not predict some orbits correctly or even very well. Newton realized that this was because gravitational interactive forces amongst all the planets were affecting all their orbits. - -The above discovery goes right to the heart of the matter as to what exactly the n-body problem is physically: as Newton realized, it is not sufficient to just specify the initial position and velocity, or three orbital positions either, to determine a planet's true orbit: the gravitational interactive forces have to be known too. Thus came the awareness and rise of the n-body "problem" in the early 17th century. These gravitational attractive forces do conform to Newton's laws of motion and to his law of universal gravitation, but the many multiple (n-body) interactions have historically made any exact solution intractable. Ironically, this conformity led to the wrong approach.
- -After Newton's time the n-body problem historically was not stated correctly because it did not include a reference to those gravitational interactive forces. Newton does not say it directly but implies in his Principia that the n-body problem is unsolvable because of those gravitational interactive forces. - -The moment of inertia of an n-body system is given by - - I = \sum_{i=1}^n m_i \mathbf{q}_i \cdot \mathbf{q}_i = \sum_{i=1}^n m_i \left\|\mathbf{q}_i\right\|^2 - -and the virial is given by Q = 1/2 dI/dt. Then the Lagrange–Jacobi formula states that - -\frac{d^2I}{dt^2} = 2T - U. - -For systems in dynamic equilibrium, the long-term time average of $\langle d^2I/dt^2 \rangle$ is zero. Then on average the total kinetic energy is half the total potential energy, ⟨T⟩ = 1/2⟨U⟩, which is an example of the virial theorem for gravitational systems. If M is the total mass and R a characteristic size of the system (for example, the radius containing half the mass of the system), then the critical time for a system to settle down to a dynamic equilibrium is - -t_\mathrm{cr} = \sqrt\frac{R^3}{GM}. - -=== Two-body problem === - -Any discussion of planetary interactive forces has always started historically with the two-body problem. The purpose of this section is to relate the real complexity in calculating any planetary forces. Note that several subjects treated in this section and the next (the three-body problem), such as gravity, the barycenter, and Kepler's laws, are discussed on other Wikipedia pages; here they are discussed from the perspective of the n-body problem. - -The two-body problem (n = 2) was completely solved by Johann Bernoulli (1667–1748) by classical theory (and not by Newton) by assuming the main point-mass was fixed; this is outlined here. Consider then the motion of two bodies, say the Sun and the Earth, with the Sun fixed, then: - -\begin{align} - -m_1 \mathbf{a}_1 &= \frac{Gm_1m_2}{r_{12}^3}(\mathbf{r}_2-\mathbf{r}_1) &&\quad\text{Sun–Earth} \\ - -m_2 \mathbf{a}_2 &= \frac{Gm_1m_2}{r_{21}^3}(\mathbf{r}_1-\mathbf{r}_2) &&\quad\text{Earth–Sun} - -\end{align} - -The equation describing the motion of mass m2 relative to mass m1 is readily obtained from the differences between these two equations and after canceling common terms gives: - - \mathbf{a} + \frac{\eta}{r^3} \mathbf{r} = \mathbf{0} - -Where - -*$\mathbf{r} = \mathbf{r}_2 - \mathbf{r}_1$ is the vector position of m2 relative to m1; - -*$\mathbf{a}$ is the relative acceleration $\frac{d^2\mathbf{r}}{dt^2}$; - -*$\eta = G(m_1 + m_2)$. - -The equation $\mathbf{a} + \frac{\eta}{r^3}\mathbf{r} = \mathbf{0}$ is the fundamental differential equation for the two-body problem Bernoulli solved in 1734. Notice for this approach forces have to be determined first, then the equation of motion resolved. This differential equation has elliptic, parabolic, or hyperbolic solutions. - -It is incorrect to think of m1 (the Sun) as fixed in space when applying Newton's law of universal gravitation, and to do so leads to erroneous results. The fixed point for two isolated gravitationally interacting bodies is their mutual barycenter, and this two-body problem can be solved exactly, such as using Jacobi coordinates relative to the barycenter. - -Dr. Clarence Cleminshaw calculated the approximate position of the Solar System's barycenter, a result achieved mainly by combining only the masses of Jupiter and the Sun. Science Program stated in reference to his work: - -The Sun wobbles as it rotates around the galactic center, dragging the Solar System and Earth along with it.
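For illustration, the fundamental two-body equation above can be integrated numerically. The following Python sketch (an illustration only, with assumed units in which η = 1 and assumed initial conditions) uses a simple leapfrog scheme and checks that the specific orbital energy stays nearly constant:

import numpy as np

eta = 1.0                 # eta = G(m1 + m2), set to 1 in these assumed units
r = np.array([1.0, 0.0])  # initial relative position
v = np.array([0.0, 1.2])  # initial relative velocity (a bound, elliptical orbit)
dt = 1e-3

def accel(r):
    return -eta * r / np.linalg.norm(r) ** 3  # a = -eta r / |r|^3

for _ in range(100_000):  # leapfrog (kick-drift-kick): symplectic, good energy behavior
    v += 0.5 * dt * accel(r)
    r += dt * v
    v += 0.5 * dt * accel(r)

energy = 0.5 * v @ v - eta / np.linalg.norm(r)
print(f"specific energy: {energy:.6f}")  # stays close to the initial value 0.5*1.2**2 - 1 = -0.28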
What mathematician Kepler did in arriving at his three famous laws was to curve-fit the apparent motions of the planets using Tycho Brahe's data, not to curve-fit their true circular motions about the Sun (see Figure). Both Robert Hooke and Newton were well aware that Newton's Law of Universal Gravitation did not hold for the forces associated with elliptical orbits. In fact, Newton's Universal Law does not account for the orbit of Mercury, the asteroid belt's gravitational behavior, or Saturn's rings. Newton stated (in section 11 of the Principia) that the main reason, however, for failing to predict the forces for elliptical orbits was that his math model was for a body confined to a situation that hardly existed in the real world, namely, the motions of bodies attracted toward an unmoving center. Some present physics and astronomy textbooks do not emphasize the negative significance of Newton's assumption and end up teaching that his mathematical model is in effect reality. It is to be understood that the classical two-body problem solution above is a mathematical idealization. See also Kepler's first law of planetary motion. - -=== Three-body problem === - -This section relates a historically important n-body problem solution after simplifying assumptions were made. - -In the past not much was known about the n-body problem for n ≥ 3. The case n = 3 has been the most studied. Many earlier attempts to understand the three-body problem were quantitative, aiming at finding explicit solutions for special situations. - -*In 1687, Isaac Newton published in the Principia the first steps in the study of the problem of the movements of three bodies subject to their mutual gravitational attractions, but his efforts resulted in verbal descriptions and geometrical sketches; see especially Book 1, Proposition 66 and its corollaries (Newton, 1687 and 1999 (transl.), see also Tisserand, 1894). - -*In 1767, Euler found collinear motions, in which three bodies of any masses move proportionately along a fixed straight line. Euler's three-body problem is the special case in which two of the bodies are fixed in space (this should not be confused with the circular restricted three-body problem, in which the two massive bodies describe a circular orbit and are only fixed in a synodic reference frame). - -*In 1772, Lagrange discovered two classes of periodic solutions, each for three bodies of any masses. In one class, the bodies lie on a rotating straight line. In the other class, the bodies lie at the vertices of a rotating equilateral triangle. In either case, the paths of the bodies will be conic sections. Those solutions led to the study of central configurations, for which q̈ = kq for some constant k > 0. - -*A major study of the Earth–Moon–Sun system was undertaken by Charles-Eugène Delaunay, who published two volumes on the topic, each of 900 pages in length, in 1860 and 1867. Among many other accomplishments, the work already hints at chaos, and clearly demonstrates the problem of so-called "small denominators" in perturbation theory. - -*In 1917, Forest Ray Moulton published his now classic, An Introduction to Celestial Mechanics (see references) with its plot of the restricted three-body problem solution (see figure below). As an aside, see Meirovitch's book, pages 413–414 for his restricted three-body problem solution.
- -Moulton's solution may be easier to visualize (and definitely easier to solve) if one considers the more massive body (such as the Sun) to be stationary in space, and the less massive body (such as Jupiter) to orbit around it, with the equilibrium points (Lagrangian points) maintaining the 60° spacing ahead of, and behind, the less massive body almost in its orbit (although in reality neither body is truly stationary; they both orbit the center of mass of the whole system, the barycenter). For sufficiently small mass ratio of the primaries, these triangular equilibrium points are stable, such that (nearly) massless particles will orbit about these points as they orbit around the larger primary (Sun). The five equilibrium points of the circular problem are known as the Lagrangian points. See figure below: - -In the restricted three-body problem math model figure above (after Moulton), the Lagrangian points L4 and L5 are where the Trojan planetoids resided (see Lagrangian point); m1 is the Sun and m2 is Jupiter. L2 is a point within the asteroid belt. It has to be realized that, for this model, the whole Sun–Jupiter diagram is rotating about its barycenter. The restricted three-body problem solution predicted the Trojan planetoids before they were first seen. The h-circles and closed loops echo the electromagnetic fluxes issued from the Sun and Jupiter. It is conjectured, contrary to Richard H. Batin's conjecture (see References), that the two h1 points are gravity sinks, in and where gravitational forces are zero, and the reason the Trojan planetoids are trapped there. The total amount of mass of the planetoids is unknown. - -The restricted three-body problem assumes that the mass of one of the bodies is negligible. For a discussion of the case where the negligible body is a satellite of the body of lesser mass, see Hill sphere; for binary systems, see Roche lobe. Specific solutions to the three-body problem result in chaotic motion with no obvious sign of a repetitious path. - -The restricted problem (both circular and elliptical) was worked on extensively by many famous mathematicians and physicists, most notably by Poincaré at the end of the 19th century. Poincaré's work on the restricted three-body problem was the foundation of deterministic chaos theory. In the restricted problem, there exist five equilibrium points. Three are collinear with the masses (in the rotating frame) and are unstable. The remaining two are located on the third vertex of both equilateral triangles of which the two bodies are the first and second vertices. - -Inspired by the circular restricted three-body problem, the four-body problem can be greatly simplified by considering a smaller body to have a small mass compared to the other three massive bodies, which in turn are approximated to describe circular orbits. This is known as the bicircular restricted four-body problem (also known as the bicircular model) and it can be traced back to 1960 in a NASA report written by Su-Shu Huang. This formulation has been highly relevant in astrodynamics, mainly to model spacecraft trajectories in the Earth–Moon system with the addition of the gravitational attraction of the Sun. The former formulation of the bicircular restricted four-body problem can be problematic when modelling systems other than the Earth–Moon–Sun, so the formulation was generalized by Negri and Prado to expand the application range and improve the accuracy without loss of simplicity.
The planetary problem is the n-body problem in the case that one of the masses is much larger than all the others. A prototypical example of a planetary problem is the Sun–Jupiter–Saturn system, where the mass of the Sun is about a thousand times larger than the masses of Jupiter or Saturn. In 1963, Vladimir Arnold proved, using KAM theory, a kind of stability of the planetary problem: there exists a set of positive measure of quasiperiodic orbits in the case of the planetary problem restricted to the plane. In the KAM theory, chaotic planetary orbits would be bounded by quasiperiodic KAM tori. Arnold's result was extended to a more general theorem by Féjoz and Herman in 2004.

A central configuration q1(0), …, qN(0) is an initial configuration such that if the particles were all released with zero velocity, they would all collapse toward the center of mass C. Such a motion is called homothetic. Central configurations may also give rise to homographic motions, in which all masses move along Keplerian trajectories (elliptical, circular, parabolic, or hyperbolic), with all trajectories having the same eccentricity e. For elliptical trajectories, e = 1 corresponds to homothetic motion and e = 0 gives a relative equilibrium motion, in which the configuration remains an isometry of the initial configuration, as if the configuration were a rigid body. Central configurations have played an important role in understanding the topology of invariant manifolds created by fixing the first integrals of a system.

Solutions in which all masses move on the same curve without collisions are called choreographies. A choreography for n = 3 was discovered by Lagrange in 1772, in which three bodies are situated at the vertices of an equilateral triangle in the rotating frame. A figure-eight choreography for n = 3 was found numerically by C. Moore in 1993 and generalized and proven by A. Chenciner and R. Montgomery in 2000. Since then, many other choreographies have been found for n ≥ 3.

For every solution of the problem, not only applying an isometry or a time shift but also a reversal of time (unlike in the case of friction) gives a solution as well.

In the physical literature about the n-body problem (n ≥ 3), sometimes reference is made to the impossibility of solving the n-body problem (via employing the above approach). However, care must be taken when discussing the 'impossibility' of a solution, as this refers only to the method of first integrals (compare the theorems by Abel and Galois about the impossibility of solving algebraic equations of degree five or higher by means of formulas only involving roots).

One way of solving the classical n-body problem is "the n-body problem by Taylor series".

We start by defining the system of differential equations:
$$
\frac{d^2\mathbf{x}_i(t)}{dt^2}=G \sum_{k=1 \atop k\neq i}^n \frac{m_k \left(\mathbf{x}_k(t)-\mathbf{x}_i(t)\right)}{\left|\mathbf{x}_k(t)-\mathbf{x}_i(t)\right|^{3}} .
$$

As xi(t0) and dxi(t0)/dt are given as initial conditions, every d2xi(t)/dt2 is known. Differentiating d2xi(t)/dt2 results in d3xi(t)/dt3, which at t0 is also known, and the Taylor series is constructed iteratively.

In order to generalize Sundman's result for the case n > 3 (or n = 3 and c = 0) one has to face two obstacles:

#As has been shown by Siegel, collisions which involve more than two bodies cannot be regularized analytically, hence Sundman's regularization cannot be generalized.
#The structure of singularities is more complicated in this case: other types of singularities may occur (see below).

Lastly, Sundman's result was generalized to the case of n > 3 bodies by Qiudong Wang in the 1990s. Since the structure of singularities is more complicated, Wang had to leave out entirely the questions of singularities. The central point of his approach is to transform, in an appropriate manner, the equations to a new system, such that the interval of existence for the solutions of this new system is [0,∞).

There can be two types of singularities of the n-body problem:

*collisions of two or more bodies, but for which q(t) (the bodies' positions) remains finite. (In this mathematical sense, a "collision" means that two pointlike bodies have identical positions in space.)

*singularities in which a collision does not occur, but q(t) does not remain finite. In this scenario, bodies diverge to infinity in a finite time, while at the same time tending towards zero separation (an imaginary collision occurs "at infinity").

The latter are called noncollision singularities. Their existence has been conjectured for n > 3 by Painlevé (see Painlevé conjecture). Examples of this behavior for n = 5 have been constructed by Xia, and a heuristic model for n = 4 by Gerver. Donald G. Saari has shown that for 4 or fewer bodies, the set of initial data giving rise to singularities has measure zero.

While there are analytic solutions available for the classical (i.e. nonrelativistic) two-body problem and for selected configurations with n > 2, in general n-body problems must be solved or simulated using numerical methods.

For a small number of bodies, an n-body problem can be solved using direct methods, also called particle–particle methods. These methods numerically integrate the differential equations of motion. Numerical integration for this problem can be a challenge for several reasons. First, the gravitational potential is singular; it goes to infinity as the distance between two particles goes to zero. The gravitational potential may be softened to remove the singularity at small distances:
$$
U_\varepsilon = -\sum_{1 \le i < j \le n} \frac{G m_i m_j}{ \sqrt{\left\| \mathbf{q}_j - \mathbf{q}_i\right\|^2 + \varepsilon^2} }
$$

Second, in general for n > 2, the n-body problem is chaotic, which means that even small errors in integration may grow exponentially in time. Third, a simulation may run over large stretches of model time (e.g. millions of years), and numerical errors accumulate as integration time increases.

There are a number of techniques to reduce errors in numerical integration. Local coordinate systems are used to deal with widely differing scales in some problems, for example an Earth–Moon coordinate system in the context of a solar system simulation. Variational methods and perturbation theory can yield approximate analytic trajectories upon which the numerical integration can be a correction. The use of a symplectic integrator ensures that the simulation obeys Hamilton's equations to a high degree of accuracy and in particular that energy is conserved.

Direct methods using numerical integration require on the order of $\tfrac{1}{2}n^2$ computations to evaluate the potential energy over all pairs of particles, and thus have a time complexity of $O(n^2)$. For simulations with many particles, the $O(n^2)$ factor makes large-scale calculations especially time consuming.
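As an illustration of the direct approach, here is a minimal numpy sketch (ours; the masses, time step, and softening length are illustrative) that integrates the softened equations of motion with a kick-drift-kick leapfrog, a simple symplectic scheme of the kind mentioned above.

```python
import numpy as np

def accelerations(pos, m, G=1.0, eps=1e-3):
    """Softened pairwise gravitational accelerations; direct O(n^2) sum."""
    d = pos[:, None, :] - pos[None, :, :]       # d[i, j] = q_i - q_j
    r2 = (d ** 2).sum(axis=-1) + eps ** 2       # softened squared distances
    np.fill_diagonal(r2, np.inf)                # exclude self-interaction
    return -(G * m[None, :, None] * d / r2[..., None] ** 1.5).sum(axis=1)

def leapfrog(pos, vel, m, dt, steps):
    """Kick-drift-kick leapfrog; symplectic, so energy errors stay bounded."""
    acc = accelerations(pos, m)
    for _ in range(steps):
        vel += 0.5 * dt * acc     # half kick
        pos += dt * vel           # drift
        acc = accelerations(pos, m)
        vel += 0.5 * dt * acc     # half kick
    return pos, vel

# Two equal point masses on a circular orbit about their barycenter:
# the attraction G m^2 / r^2 = 1 must equal the centripetal force
# m v^2 / (r/2), giving v = 1/sqrt(2) in these units.
m = np.array([1.0, 1.0])
pos = np.array([[-0.5, 0.0], [0.5, 0.0]])
vel = np.array([[0.0, -2 ** -0.5], [0.0, 2 ** -0.5]])
pos, vel = leapfrog(pos, vel, m, dt=1e-3, steps=10_000)
print(np.linalg.norm(pos[0] - pos[1]))  # separation stays close to 1
```

The `accelerations` call is the O(n²) bottleneck the text refers to; the approximate methods listed next replace exactly this step.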
- -A number of approximate methods have been developed that reduce the time complexity relative to direct methods: - -* Tree code methods, such as a Barnes–Hut simulation, are collisionless methods used when close encounters among pairs are not important and distant particle contributions do not need to be computed to high accuracy. The potential of a distant group of particles is computed using a multipole expansion of the potential. This approximation allows for a reduction in complexity to O(n log n). - -* Fast multipole methods take advantage of the fact that the multipole-expanded forces from distant particles are similar for particles close to each other. It is claimed that this further approximation reduces the complexity to O(n). - -* Particle mesh methods divide up simulation space into a three dimensional grid onto which the mass density of the particles is interpolated. Then calculating the potential becomes a matter of solving a Poisson equation on the grid, which can be computed in O(n log n) time using fast Fourier transform techniques. Using adaptive mesh refinement or multigrid techniques can further reduce the complexity of the methods. - -*P3M and PM-tree methods are hybrid methods that use the particle mesh approximation for distant particles, but use more accurate methods for close particles (within a few grid intervals). P3M stands for particle–particle, particle–mesh and uses direct methods with softened potentials at close range. PM-tree methods instead use tree codes at close range. As with particle mesh methods, adaptive meshes can increase computational efficiency. - -*Mean field methods approximate the system of particles with a time-dependent Boltzmann equation representing the mass density that is coupled to a self-consistent Poisson equation representing the potential. It is a type of smoothed-particle hydrodynamics approximation suitable for large systems. - -In astrophysical systems with strong gravitational fields, such as those near the event horizon of a black hole, n-body simulations must take into account general relativity; such simulations are the domain of numerical relativity. Numerically simulating the Einstein field equations is extremely challenging and a parameterized post-Newtonian formalism (PPN), such as the Einstein–Infeld–Hoffmann equations, is used if possible. The two-body problem in general relativity is analytically solvable only for the Kepler problem, in which one mass is assumed to be much larger than the other. - -Most work done on the n-body problem has been on the gravitational problem. But there exist other systems for which n-body mathematics and simulation techniques have proven useful. - -In large scale electrostatics problems, such as the simulation of proteins and cellular assemblies in structural biology, the Coulomb potential has the same form as the gravitational potential, except that charges may be positive or negative, leading to repulsive as well as attractive forces. Fast Coulomb solvers are the electrostatic counterpart to fast multipole method simulators. These are often used with periodic boundary conditions on the region simulated and Ewald summation techniques are used to speed up computations. - -In statistics and machine learning, some models have loss functions of a form similar to that of the gravitational potential: a sum of kernel functions over all pairs of objects, where the kernel function depends on the distance between the objects in parameter space. 
Example problems that fit into this form include all-nearest-neighbors in manifold learning, kernel density estimation, and kernel machines. Alternative optimizations that reduce the $O(n^2)$ time complexity to $O(n)$ have been developed, such as dual tree algorithms, which are applicable to the gravitational n-body problem as well. diff --git a/wiki/wikipedia/3224.txt b/wiki/wikipedia/3224.txt deleted file mode 100644 index 2784e369829b79ce34995612307e3bf28cf08c34..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3224.txt +++ /dev/null @@ -1,33 +0,0 @@

Next-fit is an online algorithm for bin packing. Its input is a list of items of different sizes. Its output is a packing - a partition of the items into bins of fixed capacity, such that the sum of sizes of items in each bin is at most the capacity. Ideally, we would like to use as few bins as possible, but minimizing the number of bins is an NP-hard problem. The next-fit algorithm uses the following heuristic (implemented in the sketch below):

* It keeps a current bin, which is initially empty.

* When an item arrives, it checks whether the item fits into the current bin.

** If it fits, it is placed inside it.

** Otherwise, the current bin is closed, a new bin is opened and the incoming item is placed inside this new bin.

Next-Fit is a bounded space algorithm - it requires only one partially-filled bin to be open at any time. The algorithm was studied by David S. Johnson in his doctoral thesis in 1973.

The running time of NextFit can be bounded by $\mathcal{O}(n)$, where $n$ is the number of items in the list.

Denote by NF(L) the number of bins used by NextFit, and by OPT(L) the optimal number of bins possible for the list L.

Then, for each list $L$, $NF(L) \leq 2 \cdot \mathrm{OPT}(L) -1 $. The intuition for the proof is the following: the number of bins used by this algorithm is no more than twice the optimal number of bins, since it is impossible for two bins to each be at most half full. Such a possibility would imply that at some point exactly one bin was at most half full and a new one was opened to accommodate an item of size at most $B/2$. But since the first bin had at least $B/2$ of free space, the algorithm would not open a new bin for any item whose size is at most $B/2$. Only after the bin fills to more than $B/2$, or if an item of size larger than $B/2$ arrives, may the algorithm open a new bin. Thus if we have $K$ bins, at least $K-1$ bins are more than half full. Therefore, $\sum_{i \in I} s(i)>\tfrac{K-1}{2}B$. Because $\tfrac{\sum_{i \in I} s(i)}{B}$ is a lower bound of the optimum value $\mathrm{OPT}$, we get that $K-1<2\mathrm{OPT}$ and therefore $K \leq 2\mathrm{OPT}$.

For each $N \in \mathbb{N}$, there exists a list $L$ such that $\mathrm{OPT}(L) = N$ and $NF(L) = 2 \cdot \mathrm{OPT}(L) -2$.

The family of lists for which $NF(L) = 2 \cdot \mathrm{OPT}(L) - 2$ holds is given by $L := \left(\frac{1}{2},\frac{1}{2(N-1)},\frac{1}{2},\frac{1}{2(N-1)}, \dots, \frac{1}{2},\frac{1}{2(N-1)}\right)$ with $|L| = 4(N-1)$. The optimal solution for this list has $N - 1$ bins containing two items of size $1/2$ and one bin with $2(N-1)$ items of size $1/(2(N-1))$ (i.e., $N$ bins total), while the solution generated by NF has $2(N-1)$ bins with one item of size $1/2$ and one item of size $1/(2(N-1))$.
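The heuristic translates directly into code. The following minimal Python sketch (the function name and capacity convention are ours) also reproduces the worst-case family just described.

```python
def next_fit(items, capacity=1.0):
    """Next-fit bin packing: keep one open bin; close it whenever the next
    item does not fit.  Bounded space, O(n) time for n items."""
    bins = [[]]      # the last list is the single open bin
    level = 0.0      # total size already placed in the open bin
    for size in items:
        if level + size <= capacity:
            bins[-1].append(size)    # item fits into the current bin
            level += size
        else:
            bins.append([size])      # close the bin and open a new one
            level = size
    return bins

# Worst-case family from the text, with N = 4: the optimum uses N = 4 bins,
# while next-fit uses 2(N - 1) = 6.
N = 4
L = [1 / 2, 1 / (2 * (N - 1))] * (2 * (N - 1))
print(len(next_fit(L)))   # prints 6
```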
If the maximum size of an item is $\alpha$, then the asymptotic approximation ratio $R_{NF}^\infty$ satisfies:

* $R_{NF}^\infty(\text{size}\leq\alpha) \leq 2$ for all $\alpha \geq 1/2$;

* $R_{NF}^\infty(\text{size}\leq\alpha) \leq 1/(1-\alpha)$ for all $\alpha \leq 1/2$.

Next-Fit packs a list and its inverse into the same number of bins.

Next-k-Fit is a variant of Next-Fit: instead of keeping only one bin open, the algorithm keeps the last $k$ bins open and chooses the first bin in which the item fits.

For $k\geq 2$, NkF delivers results that are improved compared to the results of NF; however, increasing $k$ to constant values larger than $2$ improves the algorithm no further in its worst-case behavior. If algorithm $A$ is an AlmostAnyFit algorithm and $m = \lfloor 1/\alpha\rfloor \geq 2$ then $R_{A}^{\infty}(\text{size}\leq\alpha)\leq R_{N2F}^{\infty}(\text{size}\leq\alpha) = 1+1/m$. diff --git a/wiki/wikipedia/3225.txt b/wiki/wikipedia/3225.txt deleted file mode 100644 index bc00610ad468ea35c8e29282e59236e58b3bbe2b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3225.txt +++ /dev/null @@ -1,5 +0,0 @@

The Bellman–Ford algorithm is an algorithm that computes shortest paths from a single source vertex to all of the other vertices in a weighted digraph.

It is slower than Dijkstra's algorithm for the same problem, but more versatile, as it is capable of handling graphs in which some of the edge weights are negative numbers.

The algorithm was first proposed by Alfonso Shimbel in 1955, but is instead named after Richard Bellman and Lester Ford Jr., who published it in 1958 and 1956, respectively. Edward F. Moore also published a variation of the algorithm in 1959, and for this reason it is also sometimes called the Bellman–Ford–Moore algorithm. diff --git a/wiki/wikipedia/3226.txt b/wiki/wikipedia/3226.txt deleted file mode 100644 index b5e6976e651c2bc3cef573ff238b2ff41205de9f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3226.txt +++ /dev/null @@ -1,27 +0,0 @@

The Bertrand paradox is a problem within the classical interpretation of probability theory. Joseph Bertrand introduced it in his work Calcul des probabilités (1889), as an example to show that the principle of indifference may not produce definite, well-defined results for probabilities if it is applied uncritically when the domain of possibilities is infinite. Edwin Jaynes proposed a solution to Bertrand's paradox, based on the principle of "maximum ignorance", that we should not use any information that is not given in the statement of the problem. Jaynes pointed out that Bertrand's problem does not specify the position or size of the circle, and argued that therefore any definite and objective solution must be "indifferent" to size and position. In other words: the solution must be both scale and translation invariant.

To illustrate: assume that chords are laid at random onto a circle with a diameter of 2, say by throwing straws onto it from far away and converting them to chords by extension/restriction. Now another circle with a smaller diameter (e.g., 1.1) is laid into the larger circle. Then the distribution of the chords on that smaller circle needs to be the same as the restricted distribution of chords on the larger circle (again using extension/restriction of the generating straws). Thus, if the smaller circle is moved around within the larger circle, the restricted distribution should not change.
It can be seen very easily that there would be a change for method 3: the chord distribution on the small red circle looks qualitatively different from the distribution on the large circle (see the numerical sketch below).
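The translation-invariance test just described is easy to carry out numerically. The following minimal numpy sketch is ours; it assumes the numbering used later in this article (method 1 = random endpoints, method 2 = random radius, method 3 = random midpoint) and represents each chord by a unit normal and a signed offset. A method is translation invariant when the chords of the unit circle that also cut a smaller, off-centre circle reproduce, after rescaling, the same centre-distance distribution as the method applied to the small circle directly.

```python
import numpy as np

rng = np.random.default_rng(0)

def chords(method, R, n):
    """n random chords of the circle of radius R centred at the origin, as
    line coordinates (unit normal nhat, signed offset s): the chord's line
    is {x : nhat . x = s}, and |s| is its distance from the centre."""
    if method == 1:    # random endpoints: two uniform points on the circle
        a, b = rng.uniform(0, 2 * np.pi, (2, n))
        t, s = (a + b) / 2, R * np.cos((a - b) / 2)
    elif method == 2:  # random radius: uniform direction, uniform offset
        t, s = rng.uniform(0, 2 * np.pi, n), rng.uniform(0, R, n)
    else:              # method 3: chord midpoint uniform in the disk
        t, s = rng.uniform(0, 2 * np.pi, n), R * np.sqrt(rng.uniform(0, 1, n))
    return np.stack([np.cos(t), np.sin(t)], axis=1), s

c, r, n = np.array([0.3, 0.0]), 0.5, 500_000  # small circle strictly inside

for m in (1, 2, 3):
    nhat, s = chords(m, 1.0, n)              # chords of the unit circle ...
    d = np.abs(s - nhat @ c)                 # ... and their distance from c
    restricted = d[d < r] / r                # those that also cut the small circle
    direct = np.abs(chords(m, r, n)[1]) / r  # method run on the small circle itself
    print(f"method {m}: restricted mean {restricted.mean():.3f},"
          f" direct mean {direct.mean():.3f}")
```

Only method 2 reports matching means (both 0.5, since both distributions are uniform); for methods 1 and 3 the restricted and direct distributions disagree, exhibiting the failure of translation invariance.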
The same occurs for method 1, though it is harder to see in a graphical representation. Method 2 is the only one that is both scale invariant and translation invariant; method 3 is just scale invariant, method 1 is neither.

However, Jaynes did not just use invariances to accept or reject given methods: this would leave open the possibility that there is another, not yet described, method that would meet his common-sense criteria. Jaynes used the integral equations describing the invariances to directly determine the probability distribution. In this problem, the integral equations indeed have a unique solution, and it is precisely what was called "method 2" above, the random radius method.

In a 2015 article, Alon Drory argued that Jaynes' principle can also yield Bertrand's other two solutions. Drory argues that the mathematical implementation of the above invariance properties is not unique, but depends on the underlying procedure of random selection that one uses (as mentioned above, Jaynes used a straw-throwing method to choose random chords). He shows that each of Bertrand's three solutions can be derived using rotational, scaling, and translational invariance, concluding that Jaynes' principle is just as subject to interpretation as the principle of indifference itself.

For example, we may consider throwing a dart at the circle, and drawing the chord having the chosen point as its center. Then the unique distribution which is translation, rotation, and scale invariant is the one called "method 3" above.

Likewise, "method 1" is the unique invariant distribution for a scenario where a spinner is used to select one endpoint of the chord, and then used again to select the orientation of the chord. Here the invariance in question consists of rotational invariance for each of the two spins. It is also the unique scale and rotation invariant distribution for a scenario where a rod is placed vertically over a point on the circle's circumference, and allowed to drop to the horizontal position (conditional on it landing partly inside the circle).

"Method 2" is the only solution that fulfills the transformation invariances that are present in certain physical systems (such as statistical mechanics and gas physics) in the specific case of Jaynes's proposed experiment of throwing straws from a distance onto a small circle. Nevertheless, one can design other practical experiments that give answers according to the other methods. For example, in order to arrive at the solution of "method 1", the random endpoints method, one can affix a spinner to the center of the circle, and let the results of two independent spins mark the endpoints of the chord. In order to arrive at the solution of "method 3", one could cover the circle with molasses and mark the first point that a fly lands on as the midpoint of the chord. Several observers have designed experiments in order to obtain the different solutions and verified the results empirically.

Nicholas Shackel affirms that after more than a century the paradox remains unresolved, and continues to stand in refutation of the principle of indifference. Shackel contrasts a "distinction" strategy, which treats Bertrand's problem as several distinct problems, with a "well-posing" strategy, of which Edwin Jaynes is a typical representative.

Diederik Aerts and Massimiliano Sassoli de Bianchi consider that a mixed strategy is necessary to tackle Bertrand's paradox.
According to these authors, the problem first needs to be disambiguated by specifying in a very clear way the nature of the entity which is subjected to the randomization; only once this is done can the problem be considered well posed, in the Jaynes sense, so that the principle of maximum ignorance can be used to solve it. To this end, and since the problem does not specify how the chord has to be selected, the principle needs to be applied not at the level of the different possible choices of a chord, but at the much deeper level of the different possible ways of choosing a chord. This requires the calculation of a meta-average over all the possible ways of selecting a chord, which the authors call a universal average. To handle it, they use a discretization method inspired by what is done in the definition of the probability law for Wiener processes. The result they obtain is in agreement with the numerical result of Jaynes, although their well-posed problem is different from that of Jaynes. diff --git a/wiki/wikipedia/3227.txt b/wiki/wikipedia/3227.txt deleted file mode 100644 index 969bea1b3afbe04eb986e70623777b05d77f3fe0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3227.txt +++ /dev/null @@ -1,22 +0,0 @@

In complex analysis, Runge's theorem (also known as Runge's approximation theorem) is named after the German mathematician Carl Runge, who first proved it in 1885. It states the following:

Denoting by C the set of complex numbers, let K be a compact subset of C and let f be a function which is holomorphic on an open set containing K. If A is a set containing at least one complex number from every bounded connected component of C\K, then there exists a sequence $(r_n)_{n\in\N}$ of rational functions which converges uniformly to f on K and such that all the poles of the functions $(r_n)_{n\in\N}$ are in A.

Note that not every complex number in A needs to be a pole of every rational function of the sequence $(r_n)_{n\in\N}$. We merely know that for all members of $(r_n)_{n\in\N}$ that do have poles, those poles lie in A.

One aspect that makes this theorem so powerful is that one can choose the set A arbitrarily. In other words, one can choose any complex numbers from the bounded connected components of C\K and the theorem guarantees the existence of a sequence of rational functions with poles only amongst those chosen numbers.

For the special case in which C\K is a connected set (in particular when K is simply-connected), the set A in the theorem will clearly be empty. Since rational functions with no poles are simply polynomials, we get the following corollary: If K is a compact subset of C such that C\K is a connected set, and f is a holomorphic function on an open set containing K, then there exists a sequence of polynomials $(p_n)$ that approaches f uniformly on K (the assumptions can be relaxed, see Mergelyan's theorem).

Runge's theorem generalises as follows: one can take A to be a subset of the Riemann sphere C∪{∞} and require that A intersect also the unbounded connected component of C\K (which now contains ∞). That is, in the formulation given above, the rational functions may turn out to have a pole at infinity, while in the more general formulation the pole can be chosen instead anywhere in the unbounded connected component of C\K.

An elementary proof, given in Sarason, proceeds as follows. There is a closed piecewise-linear contour Γ in the open set, containing K in its interior.
By Cauchy's integral formula
$$
f(w)={1\over 2\pi i} \int_\Gamma {f(z) dz\over z-w}
$$

for w in K. Riemann approximating sums can be used to approximate the contour integral uniformly over K. Each term in the sum is a scalar multiple of (z − w)−1 for some point z on the contour. This gives a uniform approximation by a rational function with poles on Γ.

To modify this to an approximation with poles at specified points in each component of the complement of K, it is enough to check this for terms of the form (z − w)−1. If z0 is a point in the same component as z, take a piecewise-linear path from z to z0. If two points are sufficiently close on the path, any rational function with poles only at the first point can be expanded as a Laurent series about the second point. That Laurent series can be truncated to give a rational function with poles only at the second point uniformly close to the original function on K. Proceeding by steps along the path from z to z0, the original function (z − w)−1 can be successively modified to give a rational function with poles only at z0.

If z0 is the point at infinity, then by the above procedure the rational function (z − w)−1 can first be approximated by a rational function g with poles at R > 0, where R is so large that K lies in |w| < R. The Taylor series expansion of g about 0 can then be truncated to give a polynomial approximation on K. diff --git a/wiki/wikipedia/3228.txt b/wiki/wikipedia/3228.txt deleted file mode 100644 index a7d11e8196882793b16e2b7c0a444e9b63640c02..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3228.txt +++ /dev/null @@ -1,183 +0,0 @@

Graph cut optimization is a combinatorial optimization method applicable to a family of functions of discrete variables, named after the concept of cut in the theory of flow networks. Thanks to the max-flow min-cut theorem, determining the minimum cut over a graph representing a flow network is equivalent to computing the maximum flow over the network. Given a pseudo-Boolean function $f$, if it is possible to construct a flow network with positive weights such that

* each cut $C$ of the network can be mapped to an assignment of variables $\mathbf{x}$ to $f$ (and vice versa), and

* the cost of $C$ equals $f(\mathbf{x})$ (up to an additive constant)

then it is possible to find the global optimum of $f$ in polynomial time by computing a minimum cut of the graph. The mapping between cuts and variable assignments is done by representing each variable with one node in the graph and, given a cut, each variable will have a value of 0 if the corresponding node belongs to the component connected to the source, or 1 if it belongs to the component connected to the sink.

Not all pseudo-Boolean functions can be represented by a flow network, and in the general case the global optimization problem is NP-hard. There exist sufficient conditions to characterise families of functions that can be optimised through graph cuts, such as submodular quadratic functions. Graph cut optimization can be extended to functions of discrete variables with a finite number of values, which can be approached with iterative algorithms with strong optimality properties, computing one graph cut at each iteration.

Graph cut optimization is an important tool for inference over graphical models such as Markov random fields or conditional random fields, and it has applications in computer vision problems such as image segmentation, denoising, registration and stereo matching.
A pseudo-Boolean function $f: \{0, 1\}^n \to \mathbb{R}$ is said to be representable if there exists a graph $G = (V, E)$ with non-negative weights and with source and sink nodes $s$ and $t$ respectively, and there exists a set of nodes $V_0 = \{v_1, \dots, v_n\} \subset V - \{s, t\}$ such that, for each tuple of values $(x_1, \dots, x_n) \in \{0, 1\}^n$ assigned to the variables, $f(x_1, \dots, x_n)$ equals (up to a constant) the value of the flow determined by a minimum cut $C = (S, T)$ of the graph $G$ such that $v_i \in S$ if $x_i = 0$ and $v_i \in T$ if $x_i = 1$.

It is possible to classify pseudo-Boolean functions according to their order, determined by the maximum number of variables contributing to each single term. All first order functions, where each term depends upon at most one variable, are always representable. Quadratic functions
$$
 f(\mathbf{x}) = w_0 + \sum_i w_i(x_i) + \sum_{i < j} w_{ij}(x_i, x_j)
$$
are representable if and only if they are submodular, i.e. for each quadratic term $w_{ij}$ the following condition is satisfied
$$
 w_{ij}(0, 0) + w_{ij}(1, 1) \le w_{ij}(0, 1) + w_{ij}(1, 0) .
$$

Cubic functions
$$
 f(\mathbf{x}) = w_0 + \sum_i w_i(x_i) + \sum_{i < j} w_{ij}(x_i, x_j) + \sum_{i < j < k} w_{ijk}(x_i, x_j, x_k)
$$
are representable if and only if they are regular, i.e. all possible binary projections to two variables, obtained by fixing the value of the remaining variable, are submodular. For higher-order functions, regularity is a necessary condition for representability.

Graph construction for a representable function is simplified by the fact that the sum of two representable functions $f'$ and $f''$ is representable, and its graph $G = (V' \cup V'', E' \cup E'')$ is the union of the graphs $G' = (V', E')$ and $G'' = (V'', E'')$ representing the two functions. This theorem makes it possible to build separate graphs representing each term and combine them to obtain a graph representing the entire function.

The graph representing a quadratic function of $n$ variables contains $n + 2$ vertices, two of them representing the source and sink and the others representing the variables. When representing higher-order functions, the graph contains auxiliary nodes that make it possible to model higher-order interactions.

A unary term $w_i$ depends only on one variable $x_i$ and can be represented by a graph with one non-terminal node $v_i$ and one edge $s \rightarrow v_i$ with weight $w_i(1) - w_i(0)$ if $w_i(1) \ge w_i(0)$, or $v_i \rightarrow t$ with weight $w_i(0) - w_i(1)$ if $w_i(1) < w_i(0)$.

A quadratic (or binary) term $w_{ij}$ can be represented by a graph containing two non-terminal nodes $v_i$ and $v_j$. The term can be rewritten as
$$
w_{ij}(x_i, x_j) = w_{ij}(0, 0) + k_i x_i + k_j x_j + k_{ij} \left( (1 - x_i) x_j + x_i (1 - x_j) \right)
$$

with

\begin{align}
k_i &= \frac{1}{2} (w_{ij}(1, 0) - w_{ij}(0, 0)) \\
k_j &= \frac{1}{2} (w_{ij}(1, 1) - w_{ij}(1, 0)) \\
k_{ij} &= \frac{1}{2} (w_{ij}(0, 1) + w_{ij}(1, 0) - w_{ij}(0, 0) - w_{ij}(1, 1)) .
\end{align}

In this expression, the first term is constant and it is not represented by any edge, the two following terms depend on one variable and are represented by one edge, as shown in the previous section for unary terms, while the third term is represented by an edge $v_i \rightarrow v_j$ with weight $w_{ij}(0, 1) + w_{ij}(1, 0) - w_{ij}(0, 0) - w_{ij}(1, 1)$ (submodularity guarantees that the weight is non-negative).
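The unary and binary constructions can be exercised end to end with any max-flow implementation. The following minimal Python sketch (ours) uses networkx's minimum_cut and an equivalent decomposition that charges the whole pairwise weight on the single configuration $(x_i, x_j) = (0, 1)$; the helper names and the example coefficients are illustrative assumptions.

```python
import networkx as nx

G = nx.DiGraph()
const = 0.0  # additive constant of f not represented by any edge

def add_edge(u, v, w):
    """Accumulate non-negative capacity on the edge u -> v."""
    if G.has_edge(u, v):
        G[u][v]["capacity"] += w
    else:
        G.add_edge(u, v, capacity=w)

def add_unary(i, e0, e1):
    """Term w_i(x_i): an edge s -> v_i is cut when x_i = 1, and v_i -> t
    when x_i = 0; the cheaper value is folded into the constant."""
    global const
    if e1 >= e0:
        add_edge("s", i, e1 - e0)
        const += e0
    else:
        add_edge(i, "t", e0 - e1)
        const += e1

def add_binary(i, j, e00, e01, e10, e11):
    """Submodular term w_ij: two unary pieces plus one edge v_i -> v_j
    that is cut exactly when (x_i, x_j) = (0, 1)."""
    global const
    assert e00 + e11 <= e01 + e10, "term must be submodular"
    const += e00
    add_unary(i, 0.0, e10 - e00)
    add_unary(j, 0.0, e11 - e10)
    add_edge(i, j, e01 + e10 - e00 - e11)

# Example (coefficients ours): three variables with unary biases and
# Ising-like couplings that penalize disagreement of neighbours.
for i, (e0, e1) in enumerate([(0.0, 2.5), (1.0, 0.0), (0.5, 1.0)]):
    add_unary(i, e0, e1)
for i, j in [(0, 1), (1, 2)]:
    add_binary(i, j, 0.0, 1.0, 1.0, 0.0)

cut_value, (S, T) = nx.minimum_cut(G, "s", "t")
x = [0 if i in S else 1 for i in range(3)]
print("min f =", cut_value + const, "at x =", x)  # min f = 1.5 at x = [0, 0, 0]
```

Enumerating all $2^3$ assignments by brute force confirms the minimum of 1.5 at (0, 0, 0) for these coefficients, matching the cut value plus the accumulated constant.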
A cubic (or ternary) term $w_{ijk}$ can be represented by a graph with four non-terminal nodes, three of them ($v_i$, $v_j$ and $v_k$) associated to the three variables plus one fourth auxiliary node $v_{ijk}$. A generic ternary term can be rewritten as the sum of a constant, three unary terms, three binary terms, and a ternary term in simplified form. There may be two different cases, according to the sign of $p = w_{ijk}(0, 0, 0) + w_{ijk}(0, 1, 1) + w_{ijk}(1, 0, 1) + w_{ijk}(1, 1, 0)$. If $p > 0$ then
$$
w_{ijk}(x_i, x_j, x_k) = w_{ijk}(0, 0, 0) + p_1 (x_i - 1) + p_2 (x_j - 1) + p_3 (x_k - 1) + p_{23}(x_j - 1) x_k + p_{31} x_i (x_k - 1) + p_{12} (x_i - 1) x_j - p x_i x_j x_k
$$

with

\begin{align}
p_1 &= w_{ijk}(1, 0, 1) - w_{ijk}(0, 0, 1) \\
p_2 &= w_{ijk}(1, 1, 0) - w_{ijk}(1, 0, 1) \\
p_3 &= w_{ijk}(0, 1, 1) - w_{ijk}(0, 1, 0) \\
p_{23} &= w_{ijk}(0, 0, 1) + w_{ijk}(0, 1, 0) - w_{ijk}(0, 0, 0) - w_{ijk}(0, 1, 1) \\
p_{31} &= w_{ijk}(0, 0, 1) + w_{ijk}(1, 0, 0) - w_{ijk}(0, 0, 0) - w_{ijk}(1, 0, 1) \\
p_{12} &= w_{ijk}(0, 1, 0) + w_{ijk}(1, 0, 0) - w_{ijk}(0, 0, 0) - w_{ijk}(1, 1, 0) .
\end{align}

If $p < 0$ the construction is similar, but the variables take opposite values. If the function is regular, then all its projections onto two variables are submodular, implying that $p_{23}$, $p_{31}$ and $p_{12}$ are positive and then all terms in the new representation are submodular.

In this decomposition, the constant, unary and binary terms can be represented as shown in the previous sections. If $p > 0$ the ternary term can be represented with a graph with four edges $v_i \rightarrow v_{ijk}$, $v_j \rightarrow v_{ijk}$, $v_k \rightarrow v_{ijk}$, $v_{ijk} \rightarrow t$, all with weight $p$, while if $p < 0$ the term can be represented by four edges $v_{ijk} \rightarrow v_i$, $v_{ijk} \rightarrow v_j$, $v_{ijk} \rightarrow v_k$, $s \rightarrow v_{ijk}$ with weight $-p$.

After building a graph representing a pseudo-Boolean function, it is possible to compute a minimum cut using one among the various algorithms developed for flow networks, such as the Ford–Fulkerson, Edmonds–Karp, and Boykov–Kolmogorov algorithms. The result is a partition of the graph into two connected components $S$ and $T$ such that $s \in S$ and $t \in T$, and the function attains its global minimum when $x_i = 0$ for each $i$ such that the corresponding node $v_i \in S$, and $x_i = 1$ for each $i$ such that the corresponding node $v_i \in T$.

Max-flow algorithms such as Boykov–Kolmogorov's are very efficient in practice for sequential computation, but they are difficult to parallelise, making them unsuitable for distributed computing applications and preventing them from exploiting the potential of modern CPUs. Parallel max-flow algorithms were developed, such as push-relabel and jump-flood, that can also take advantage of hardware acceleration in GPGPU implementations.

The previous construction allows global optimization of pseudo-Boolean functions only, but it can be extended to quadratic functions of discrete variables with a finite number of values, of the form
$$
f(\mathbf{x}) = \sum_{i \in V} D(x_i) + \sum_{(i, j) \in E} S(x_i, x_j)
$$
where $E \subseteq V \times V$ and $x_i \in \Lambda = \{1, \dots, k\}$. The function $D(x_i)$ represents the unary contribution of each variable (often referred to as the data term), while the function $S(x_i, x_j)$ represents binary interactions between variables (the smoothness term).
In the general case, optimization of such functions is an NP-hard problem, and stochastic optimization methods such as simulated annealing are sensitive to local minima; in practice they can generate arbitrarily sub-optimal results. With graph cuts it is possible to construct move-making algorithms that reach, in polynomial time, a local minimum with strong optimality properties for a wide family of quadratic functions of practical interest (when the binary interaction $S(x_i, x_j)$ is a metric or a semimetric), such that the value of the function at the solution lies within a constant and known factor of the global optimum.

Given a function $f: \Lambda^n \to \mathbb{R}$ with $\Lambda = \{1, \dots, k\}$, and a certain assignment of values $\mathbf{x} = (x_1, \dots, x_n) \in \Lambda^n$ to the variables, it is possible to associate each assignment $\mathbf{x}$ to a partition $P = \{P_l | l \in \Lambda \}$ of the set of variables, such that $P_l = \{ x_i | x_i = l \in \Lambda \}$. Given two distinct assignments $P$ and $P'$ and a value $\alpha \in \Lambda$, a move that transforms $P$ into $P'$ is said to be an $\alpha$-expansion if $P_\alpha \subset P'_\alpha$ and $P'_l \subset P_l \ \forall l \in \Lambda - \{ \alpha \}$. Given a pair of values $\alpha$ and $\beta$, a move is said to be an $\alpha\beta$-swap if $P_l = P'_l \ \forall l \in \Lambda - \{ \alpha, \beta \}$. Intuitively, an $\alpha$-expansion move from $\mathbf{x}$ assigns the value of $\alpha$ to some variables that have a different value in $\mathbf{x}$, while an $\alpha\beta$-swap move assigns $\alpha$ to some variables that have value $\beta$ in $\mathbf{x}$ and vice versa.

For each iteration, the $\alpha$-expansion algorithm computes, for each possible value $\alpha$, the minimum of the function among all assignments $A_\alpha(\mathbf{x})$ that can be reached with a single $\alpha$-expansion move from the current temporary solution $\mathbf{x}$, and takes it as the new temporary solution:

 $\mathbf{x} :=$ arbitrary value in $\Lambda^n$
 exit := 0
 while exit $\ne$ 1:
     exit := 1
     foreach $\alpha \in \Lambda$:
         $\mathbf{\hat{x}} := \arg \min_{\mathbf{y} \in A_\alpha(\mathbf{x})} f(\mathbf{y})$
         if $f(\mathbf{\hat{x}}) < f(\mathbf{x})$:
             $\mathbf{x} := \mathbf{\hat{x}}$
             exit := 0

The $\alpha\beta$-swap algorithm is similar, but it searches for the minimum among all assignments $A_{\alpha\beta}(\mathbf{x})$ reachable with a single $\alpha\beta$-swap move from $\mathbf{x}$:

 $\mathbf{x} :=$ arbitrary value in $\Lambda^n$
 exit := 0
 while exit $\ne$ 1:
     exit := 1
     foreach $(\alpha, \beta) \in \Lambda^2$:
         $\mathbf{\hat{x}} := \arg \min_{\mathbf{y} \in A_{\alpha\beta}(\mathbf{x})} f(\mathbf{y})$
         if $f(\mathbf{\hat{x}}) < f(\mathbf{x})$:
             $\mathbf{x} := \mathbf{\hat{x}}$
             exit := 0

In both cases, the optimization problem in the innermost loop can be solved exactly and efficiently with a graph cut. Both algorithms are guaranteed to terminate in a finite number of iterations of the outer loop, and in practice that number is small, with most of the improvement happening at the first iteration. The algorithms can generate different solutions depending on the initial guess, but in practice they are robust with respect to initialisation, and starting with a point where all variables are assigned to the same random value is usually sufficient to produce good quality results.
The solution generated by such algorithms is not necessarily a global optimum, but it has strong guarantees of optimality. If $S(x_i, x_j)$ is a metric and $\mathbf{x}$ is a solution generated by the $\alpha$-expansion algorithm, or if $S(x_i, x_j)$ is a semimetric and $\mathbf{x}$ is a solution generated by the $\alpha\beta$-swap algorithm, then $f(\mathbf{x})$ lies within a known and constant factor from the global minimum $f(\mathbf{x}^*)$:
$$
f(\mathbf{x}) \le 2 \frac{ \max_{\alpha \ne \beta \in \Lambda} S(\alpha, \beta) }{ \min_{\alpha \ne \beta \in \Lambda} S(\alpha, \beta) } f(\mathbf{x}^*) .
$$

Generally speaking, the problem of optimizing a non-submodular pseudo-Boolean function is NP-hard and cannot be solved in polynomial time with a simple graph cut. The simplest approach is to approximate the function with a similar but submodular one, for instance truncating all non-submodular terms or replacing them with similar submodular expressions. Such an approach is generally sub-optimal, and it produces acceptable results only if the number of non-submodular terms is relatively small.

In case of quadratic non-submodular functions, it is possible to compute in polynomial time a partial solution using algorithms such as QPBO. Higher-order functions can be reduced in polynomial time to a quadratic form that can be optimised with QPBO.

Quadratic functions have been extensively studied and characterised in detail, but more general results have also been derived for higher-order functions. While quadratic functions can indeed model many problems of practical interest, they are limited by the fact that they can represent only binary interactions between variables. The possibility of capturing higher-order interactions allows the nature of the problem to be modeled more faithfully, and it can provide higher quality results that could be difficult to achieve with quadratic models. For instance, in computer vision applications, where each variable represents a pixel or voxel of the image, higher-order interactions can be used to model texture information, which would be difficult to capture using only quadratic functions.

Sufficient conditions analogous to submodularity were developed to characterise higher-order pseudo-Boolean functions that can be optimised in polynomial time, and there exist algorithms analogous to $\alpha$-expansion and $\alpha\beta$-swap for some families of higher-order functions. The problem is NP-hard in the general case, and approximate methods were developed for fast optimization of functions that do not satisfy such conditions. diff --git a/wiki/wikipedia/3229.txt b/wiki/wikipedia/3229.txt deleted file mode 100644 index dba3845241170a8dde4129ded3113e70a96bed0d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3229.txt +++ /dev/null @@ -1,91 +0,0 @@

In mathematics, a well-order (or well-ordering or well-order relation) on a set S is a total order on S with the property that every non-empty subset of S has a least element in this ordering. The set S together with the well-order relation is then called a well-ordered set. In some academic articles and textbooks these terms are instead written as wellorder, wellordered, and wellordering or well order, well ordered, and well ordering.

Every non-empty well-ordered set has a least element. Every element s of a well-ordered set, except a possible greatest element, has a unique successor (next element), namely the least element of the subset of all elements greater than s.
There may be elements besides the least element which have no predecessor (see below for an example). A well-ordered set S contains for every subset T with an upper bound a least upper bound, namely the least element of the subset of all upper bounds of T in S. - -If ≤ is a non-strict well ordering, then < is a strict well ordering. A relation is a strict well ordering if and only if it is a well-founded strict total order. The distinction between strict and non-strict well orders is often ignored since they are easily interconvertible. - -Every well-ordered set is uniquely order isomorphic to a unique ordinal number, called the order type of the well-ordered set. The well-ordering theorem, which is equivalent to the axiom of choice, states that every set can be well ordered. If a set is well ordered (or even if it merely admits a well-founded relation), the proof technique of transfinite induction can be used to prove that a given statement is true for all elements of the set. - -The observation that the natural numbers are well ordered by the usual less-than relation is commonly called the well-ordering principle (for natural numbers). - -Every well-ordered set is uniquely order isomorphic to a unique ordinal number, called the order type of the well-ordered set. The position of each element within the ordered set is also given by an ordinal number. In the case of a finite set, the basic operation of counting, to find the ordinal number of a particular object, or to find the object with a particular ordinal number, corresponds to assigning ordinal numbers one by one to the objects. The size (number of elements, cardinal number) of a finite set is equal to the order type. Counting in the everyday sense typically starts from one, so it assigns to each object the size of the initial segment with that object as last element. Note that these numbers are one more than the formal ordinal numbers according to the isomorphic order, because these are equal to the number of earlier objects (which corresponds to counting from zero). Thus for finite n, the expression "n-th element" of a well-ordered set requires context to know whether this counts from zero or one. In a notation "β-th element" where β can also be an infinite ordinal, it will typically count from zero. - -For an infinite set the order type determines the cardinality, but not conversely: well-ordered sets of a particular cardinality can have many different order types, see Section #Natural numbers for a simple example. For a countably infinite set, the set of possible order types is even uncountable. - -The standard ordering ≤ of the natural numbers is a well ordering and has the additional property that every non-zero natural number has a unique predecessor. - -Another well ordering of the natural numbers is given by defining that all even numbers are less than all odd numbers, and the usual ordering applies within the evens and the odds: - -0 2 4 6 8 ... 1 3 5 7 9 ... - -This is a well-ordered set of order type ω + ω. Every element has a successor (there is no largest element). Two elements lack a predecessor: 0 and 1. - -Unlike the standard ordering ≤ of the natural numbers, the standard ordering ≤ of the integers is not a well ordering, since, for example, the set of negative integers does not contain a least element. 
- -The following relation R is an example of well ordering of the integers: x R y if and only if one of the following conditions holds: - -# x = 0 - -# x is positive, and y is negative - -# x and y are both positive, and x ≤ y - -# x and y are both negative, and |x| ≤ |y| - -This relation R can be visualized as follows: - -0 1 2 3 4 ... −1 −2 −3 ... - -R is isomorphic to the ordinal number ω + ω. - -Another relation for well ordering the integers is the following definition: x ≤z y if and only if (|x| < |y| or (|x| = |y| and x ≤ y)). This well order can be visualized as follows: - -0 −1 1 −2 2 −3 3 −4 4 ... - -This has the order type ω. - -The standard ordering ≤ of any real interval is not a well ordering, since, for example, the open interval (0, 1) ⊆ [0,1] does not contain a least element. From the ZFC axioms of set theory (including the axiom of choice) one can show that there is a well order of the reals. Also Wacław Sierpiński proved that ZF + GCH (the generalized continuum hypothesis) imply the axiom of choice and hence a well order of the reals. Nonetheless, it is possible to show that the ZFC+GCH axioms alone are not sufficient to prove the existence of a definable (by a formula) well order of the reals. However it is consistent with ZFC that a definable well ordering of the reals exists—for example, it is consistent with ZFC that V=L, and it follows from ZFC+V=L that a particular formula well orders the reals, or indeed any set. - -An uncountable subset of the real numbers with the standard ordering ≤ cannot be a well order: Suppose X is a subset of R well ordered by ≤. For each x in X, let s(x) be the successor of x in ≤ ordering on X (unless x is the last element of X). Let A = { (x, s(x)) | x ∈ X } whose elements are nonempty and disjoint intervals. Each such interval contains at least one rational number, so there is an injective function from A to Q. There is an injection from X to A (except possibly for a last element of X which could be mapped to zero later). And it is well known that there is an injection from Q to the natural numbers (which could be chosen to avoid hitting zero). Thus there is an injection from X to the natural numbers which means that X is countable. On the other hand, a countably infinite subset of the reals may or may not be a well order with the standard "≤". For example, - -* The natural numbers are a well order under the standard ordering ≤. - -* The set {1/n : n =1,2,3,...} has no least element and is therefore not a well order under standard ordering ≤. - -Examples of well orders: - -*The set of numbers { − 2−n | 0 ≤ n < ω } has order type ω. - -*The set of numbers { − 2−n − 2−m−n | 0 ≤ m,n < ω } has order type ω2. The previous set is the set of limit points within the set. Within the set of real numbers, either with the ordinary topology or the order topology, 0 is also a limit point of the set. It is also a limit point of the set of limit points. - -*The set of numbers { − 2−n | 0 ≤ n < ω } ∪ { 1 } has order type ω + 1. With the order topology of this set, 1 is a limit point of the set. With the ordinary topology (or equivalently, the order topology) of the real numbers it is not. - -If a set is totally ordered, then the following are equivalent to each other: - -# The set is well ordered. That is, every nonempty subset has a least element. - -# Transfinite induction works for the entire ordered set. 
- -# Every strictly decreasing sequence of elements of the set must terminate after only finitely many steps (assuming the axiom of dependent choice). - -# Every subordering is isomorphic to an initial segment. - -Every well-ordered set can be made into a topological space by endowing it with the order topology. - -With respect to this topology there can be two kinds of elements: - -*isolated points — these are the minimum and the elements with a predecessor. - -*limit points — this type does not occur in finite sets, and may or may not occur in an infinite set; the infinite sets without limit point are the sets of order type ω, for example N. - -For subsets we can distinguish: - -*Subsets with a maximum (that is, subsets which are bounded by themselves); this can be an isolated point or a limit point of the whole set; in the latter case it may or may not be also a limit point of the subset. - -*Subsets which are unbounded by themselves but bounded in the whole set; they have no maximum, but a supremum outside the subset; if the subset is non-empty this supremum is a limit point of the subset and hence also of the whole set; if the subset is empty this supremum is the minimum of the whole set. - -*Subsets which are unbounded in the whole set. - -A subset is cofinal in the whole set if and only if it is unbounded in the whole set or it has a maximum which is also maximum of the whole set. - -A well-ordered set as topological space is a first-countable space if and only if it has order type less than or equal to ω1 (omega-one), that is, if and only if the set is countable or has the smallest uncountable order type. diff --git a/wiki/wikipedia/323.txt b/wiki/wikipedia/323.txt deleted file mode 100644 index 8fc1170261f42fde8cb83286a2ddeff3effc9647..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/323.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, the Denjoy–Luzin–Saks theorem states that a function of generalized bounded variation in the restricted sense has a derivative almost everywhere, and gives further conditions of the set of values of the function where the derivative does not exist. - -N. N. Luzin and A. Denjoy proved a weaker form of the theorem, and later strengthened their theorem. diff --git a/wiki/wikipedia/3230.txt b/wiki/wikipedia/3230.txt deleted file mode 100644 index 06044e28b3840d787787bd5186054e56306713cb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3230.txt +++ /dev/null @@ -1,65 +0,0 @@ -"Korobeiniki" () is a nineteenth-century Russian folk song that tells the story of a meeting between a peddler and a girl, describing their haggling over goods in a metaphor for courtship. - -Outside Russia, "Korobeiniki" is widely known as the Tetris theme (titled "A-Type" in the game), from its appearance in Nintendo's 1989 Game Boy version of the game as arranged by Hirokazu Tanaka. - -The song "Korobeiniki" is based on a poem of the same name by Nikolay Nekrasov, which was first printed in the Sovremennik magazine in 1861. Its increasing tempo and the associated dance style led to it quickly becoming a popular Russian folk song. - -Korobeiniki were peddlers with trays, who sold fabric, haberdashery, books and other small items in pre-revolutionary Russia. Nekrasov's much longer poem tells the story of a young peddler who seduces a peasant girl named Katya one night in a field of rye. He offers her some of his wares as gifts in exchange for a kiss and, as it is implied, sexual favours. 
She rejects all but one of his gifts, a turquoise ring, reasoning that having his wares but not him would be unbearable. The next morning, he pledges to marry her when he returns from selling his wares at the market. The song's narrative ends here; however, the poem concludes with the peddler being robbed and killed by a forest ranger whom he asks for directions while returning home with the profits made during his successful day at the market. - -In 1989, the song was adapted into the theme of the video game Tetris. - - - -\relative a' { - -\time 4/4 - -e4. gis8 b4 gis8 e8 a4. c8 e4 d8 c8 b4. c8 d4 e4 c4 a4 a2 - -\repeat volta 2 { - -f'4. g8 a4 g8 f8 e4. f8 e4 d8 c8 b4. c8 d4 e4 c4 a4 a2 } - -} - - - -The official Tetris website wrote that Korobeiniki was "memorable enough on its own as both a poem and folk tune", independent of its adaption into the Tetris theme. - -After arrangements of "Korobeiniki" first appeared in Spectrum Holobyte's Apple IIgs and Mac versions of Tetris, the song was re-arranged in 1989 by Hirokazu Tanaka as the "Type A" accompaniment in Nintendo's Game Boy version 1.1. It has since become closely associated with the game in Western popular culture. In 2008, UGO listed the song as the 3rd best videogame music of all time. - -Though The Tetris Company holds a sound trademark on this variation of the song for use in video games, the song has appeared in Dance Maniax 2nd Mix under the title "Happy-hopper". - -* Doctor Spin's 1992 novelty Eurodance cover version (under the name "Tetris") reached #6 on the UK singles chart. - -* Tokyo Ska Paradise Orchestra has recorded and performed versions of the song under the title "Peddlers" (sometimes "Pedorazu") since their eponymous debut EP in 1989. Most recently it can be found on their "Ska Me Forever" (2014) album. - -* The string quartet Bond included a version on their 2000 debut album Born called Korobushka which they often perform at their live concerts. - -* American rock band Ozma released a rock version on their 2001 album The Doubble Donkey Disc, used in 2013 on the movie Kick Ass 2. - -* An Italian house remix of the song called "Cammino Contento" was featured in the 2005 compilation album by Gigi D'Agostino, Disco Tanz. - -* A remix of Tetris Type-A played during High Rank of Tire & Ice level in Crash Tag Team Racing. - -*A version of the song is used for the Nintendo Wii's video game Super Smash Bros. Brawl, in a part of the game related to Tetris. Arranged by Yoko Shimomura. The song is retained in later versions of Smash Bros. as well. - -* A trance cover arranged by Ryu* is featured on the Exit Trance release under the title "Korobushka". The song was later included on his album Ageha as "Korobushka (Ryu*Remix)". - -* The PlayStation Portable title Ape Escape Academy (Ape Academy in Europe) also features this song in one of the 'Camp-Side Fire' mini-games (essentially a short rhythm game-like sequence), also under the title 'Korobushka'. - -* In 2006, Jablay, sung by Titi Kamal for original soundtrack of Mendadak Dangdut samples and slightly modifies Korobeiniki on chorus part. - -* Canadian fingerstyle guitarist Ewan Dobson performs an acoustic guitar version of the song on his first album. - -* The Timbers Army sings this melody with altered lyrics during Portland Timbers games, usually accompanied by a simple dance with a large visual effect. - -* The German Techno Band Scooter used the melody for it in their song Whistling Dave from the 2007 album Jumping All Over the World. 
- -*A version by the duo Pig With The Face Of A Boy, known as "A Complete History of the Soviet Union as Told by a Humble Worker, Arranged to the Melody of Tetris", details a simplified telling of the tale of the Soviet Union, along with the two ending verses to the tune of the Game Over screen in Tetris. - -* British folk-punk band The Mechanisms sample the song's melody on the third-to-last song of their first album Once Upon A Time (In Space), "No Happy Ending". - -*A version of this song was arranged for the game The End Is Nigh titled ??? – Korobeiniki (Russian Folk 1861) by video game composer team Ridiculon (Matthias Bossi and Jon Evans). - -*The music was rearranged by Vitali Klatschkov and Diana May for a promotional session by Ortel Mobile in 2021. diff --git a/wiki/wikipedia/3231.txt b/wiki/wikipedia/3231.txt deleted file mode 100644 index a80e8547062d5ebd557ff463669227d13c731524..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3231.txt +++ /dev/null @@ -1,7 +0,0 @@ -In mathematics – especially in the area of abstract algebra dealing with ordered structures on abelian groups – the Hahn embedding theorem gives a simple description of all linearly ordered abelian groups. It is named after Hans Hahn. - -The theorem states that every linearly ordered abelian group G can be embedded as an ordered subgroup of the additive group ℝΩ endowed with a lexicographical order, where ℝ is the additive group of real numbers (with its standard order), Ω is the set of Archimedean equivalence classes of G, and ℝΩ is the set of all functions from Ω to ℝ which vanish outside a well-ordered set. - -Let 0 denote the identity element of G. For any nonzero element g of G, exactly one of the elements g or -g is greater than 0; denote this element by |g|. Two nonzero elements g and h of G are Archimedean equivalent if there exist natural numbers N and M such that N|g| > |h| and M|h| > |g|. Intuitively, this means that neither g nor h is "infinitesimal" with respect to the other. The group G is Archimedean if all nonzero elements are Archimedean-equivalent. In this case, Ω is a singleton, so ℝΩ is just the group of real numbers. Then Hahn's Embedding Theorem reduces to Hölder's theorem (which states that a linearly ordered abelian group is Archimedean if and only if it is a subgroup of the ordered additive group of the real numbers). - -Gravett gives a clear statement and proof of the theorem. The papers of Clifford and Hausner together provide another proof. See also Fuchs. diff --git a/wiki/wikipedia/3232.txt b/wiki/wikipedia/3232.txt deleted file mode 100644 index d38434aad75bd2cbec28ed860f788f320a8d5a4d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3232.txt +++ /dev/null @@ -1,331 +0,0 @@ -In trigonometry, tangent half-angle formulas relate the tangent of half of an angle to trigonometric functions of the entire angle. The tangent of half an angle is the stereographic projection of the circle onto a line. 
Among these formulas are the following: - - - -\begin{align} - -\tan \tfrac{1}{2}( \eta \pm \theta) - -&= \frac{\tan \tfrac{1}{2} \eta \pm \tan \tfrac{1}{2} \theta}{1 \mp \tan \tfrac{1}{2} \eta \tan \tfrac{1}{2} \theta} - -= \frac{\sin\eta \pm \sin\theta}{\cos\eta + \cos\theta} - -= -\frac{\cos\eta - \cos\theta}{\sin\eta \mp \sin\theta}, \\[10pt] - -\tan \pm\tfrac{1}{2} \theta - -&= \frac{\pm\sin\theta}{1 + \cos\theta} - -= \frac{\pm\tan\theta}{\sec\theta + 1} - -= \frac{\pm 1}{\csc\theta + \cot\theta}, - -& & (\eta = 0) \\[10pt] - -\tan \pm\tfrac{1}{2} \theta - -&= \frac{1-\cos\theta}{\pm\sin\theta} - -= \frac{\sec\theta-1}{\pm\tan\theta} - -= \pm(\csc\theta-\cot\theta), - -& & (\eta = 0) \\[10pt] - -\tan \tfrac{1}{2} \big(\theta \pm \tfrac{1}{2}\pi \big) - -&= \frac{1 \pm \sin\theta}{\cos\theta} - -= \sec\theta \pm \tan\theta - -= \frac{\csc\theta \pm 1}{\cot\theta}, - -& & \big(\eta = \tfrac{1}{2}\pi \big) \\[10pt] - -\tan \tfrac{1}{2} \big(\theta \pm \tfrac{1}{2}\pi \big) - -&= \frac{\cos\theta}{1 \mp \sin\theta} - -= \frac{1}{\sec\theta \mp \tan\theta} - -= \frac{\cot\theta}{\csc\theta \mp 1}, - -& & \big(\eta = \tfrac{1}{2}\pi \big) \\[10pt] - -\frac{1 - \tan \tfrac{1}{2}\theta}{1 + \tan \tfrac{1}{2}\theta} - -&= \pm\sqrt{\frac{1 - \sin\theta}{1 + \sin\theta}} \\[10pt] - -\tan \tfrac{1}{2} \theta - -&= \pm \sqrt{\frac{1 - \cos\theta}{1 + \cos\theta}} \\[10pt] - -\end{align} - - - -From these one can derive identities expressing the sine, cosine, and tangent as functions of tangents of half-angles: - - - -\begin{align} - -\sin \alpha & = \frac{2\tan \tfrac{1}{2} \alpha}{1 + \tan ^2 \tfrac{1}{2} \alpha} \\[7pt] - -\cos \alpha & = \frac{1 - \tan ^2 \tfrac{1}{2} \alpha}{1 + \tan ^2 \tfrac{1}{2} \alpha} \\[7pt] - -\tan \alpha & = \frac{2\tan \tfrac{1}{2} \alpha}{1 - \tan ^2 \tfrac{1}{2} \alpha} - -\end{align} - - - -Using double-angle formulae and the Pythagorean identity $1 + \tan^2 \alpha = 1 \big/ \cos^2 \alpha$ gives - - - -\sin \alpha - -= 2\sin \tfrac 1 2 \alpha \cos \tfrac 1 2 \alpha - -= \frac{ 2 \sin \tfrac 1 2 \alpha \cos \tfrac 1 2 \alpha - -\Big/ \cos^2 \tfrac 1 2 \alpha} - -{1 + \tan^2 \tfrac 1 2 \alpha} - -= \frac{2\tan \tfrac 1 2 \alpha}{1 + \tan^2 \tfrac 1 2 \alpha}, - -\quad \text{and} - - - - - -\cos \alpha - -= \cos^2 \tfrac 1 2 \alpha - \sin^2 \tfrac 1 2 \alpha - -= \frac{ \left(\cos^2 \tfrac 1 2 \alpha - \sin^2 \tfrac 1 2 \alpha\right) - -\Big/ \cos^2 \tfrac 1 2 \alpha} - -{ 1 + \tan^2 \tfrac 1 2 \alpha} - -= \frac{1 - \tan^2 \tfrac 1 2 \alpha}{1 + \tan^2 \tfrac 1 2 \alpha}. - - - -Taking the quotient of the formulae for sine and cosine yields -$$ -\tan \alpha = \frac{2\tan \tfrac 1 2 \alpha}{1 - \tan ^2 \tfrac 1 2 \alpha}. -$$ - -Combining the Pythagorean identity with the double-angle formula for the cosine, $ \cos 2\alpha = \cos^2 \alpha - \sin^2 \alpha = 1 - 2\sin^2 \alpha = 2\cos^2 \alpha - 1 $, - -rearranging, and taking the square roots yields -$$ - |\sin \alpha| = \sqrt {\frac{1-\cos2\alpha}{2}} -$$ and $ |\cos \alpha|= \sqrt {\frac{1+\cos2\alpha}{2}} $ - -which, upon division, gives -$$ - |\tan \alpha| = \frac {\sqrt {1 - \cos 2\alpha}}{\sqrt {1 + \cos 2\alpha}} = \frac { {\sqrt {1 - \cos 2\alpha}}{\sqrt {1 + \cos 2\alpha}} }{1 + \cos 2\alpha} =\frac{{\sqrt {1 - \cos^2 2\alpha}}}{1 + \cos 2\alpha} = \frac{|\sin 2\alpha|}{1 + \cos 2\alpha}. 
-$$ - -Alternatively, -$$ - |\tan \alpha| = \frac {\sqrt {1 - \cos 2\alpha}}{\sqrt {1 + \cos 2\alpha}} = \frac {1 - \cos 2\alpha}{ {\sqrt {1 + \cos 2\alpha}}{\sqrt {1 - \cos 2\alpha}} } = \frac{1 - \cos 2\alpha}{{\sqrt {1 - \cos^2 2\alpha}}} = \frac{1 - \cos 2\alpha}{|\sin 2\alpha|}. -$$ - -The absolute value signs may be dropped when working only in the first quadrant. - -Also, using the angle addition and subtraction formulae for both the sine and cosine one obtains: -$$ - \cos (a+b) = \cos a \cos b - \sin a \sin b -$$ -$$ - \cos (a-b) = \cos a \cos b + \sin a \sin b -$$ -$$ - \sin (a+b) = \sin a \cos b + \cos a \sin b -$$ -$$ - \sin (a-b) = \sin a \cos b - \cos a \sin b -$$ - -Pairwise addition of the above four formulae yields: - - - -\begin{align} - -&\sin (a+b) + \sin (a-b) \\[5mu] - -&\quad= \sin a \cos b + \cos a \sin b + \sin a \cos b - \cos a \sin b \\[5mu] - -&\quad = 2 \sin a \cos b \\[15mu] - -&\cos (a+b) + \cos (a-b) \\[5mu] - -&\quad= \cos a \cos b - \sin a \sin b + \cos a \cos b + \sin a \sin b \\[5mu] - -&\quad= 2 \cos a \cos b - -\end{align} - - - -Setting $a= \tfrac 1 2 (p+q)$ and $b= \tfrac 1 2 (p-q)$ and substituting yields: - - - -\begin{align} - -& \sin p + \sin q \\[5mu] - -&\quad= \sin \left(\tfrac 1 2 (p+q) + \tfrac 1 2 (p-q)\right) + \sin\left(\tfrac 1 2(p+q) - \tfrac 1 2 (p-q)\right) \\[5mu] - -&\quad= 2 \sin \tfrac 1 2(p+q) \cos \tfrac 1 2(p-q) \\[15mu] - -& \cos p + \cos q \\[5mu] - -&\quad= \cos\left(\tfrac 1 2(p+q) + \tfrac 1 2 (p-q)\right) + \cos\left(\tfrac 1 2(p+q) - \tfrac 1 2(p-q)\right) \\[5mu] - -&\quad= 2 \cos\tfrac 1 2(p+q) \cos\tfrac 1 2(p-q) - -\end{align} - - - -Dividing the sum of sines by the sum of cosines, one arrives at: - - - -\frac{\sin p + \sin q}{\cos p + \cos q} - -= \frac{2 \sin \tfrac 1 2(p+q) \cos \tfrac 1 2(p-q)}{2 \cos \tfrac 1 2(p+q) \cos \tfrac 1 2(p-q)} = \tan \tfrac 1 2(p+q) - -Applying the formulae derived above to the rhombus figure on the right, it is readily shown that -$$ -\tan \tfrac 1 2 (a+b) = \frac{\sin \tfrac 1 2 (a + b)}{\cos \tfrac 1 2 (a + b)} = \frac{\sin a + \sin b}{\cos a + \cos b}. -$$ - -In the unit circle, application of the above shows that $t = \tan \tfrac 1 2 \varphi$. By similarity of triangles, -$$ -\frac{t}{\sin \varphi} = \frac{1}{1+ \cos \varphi} -$$. It follows that $t = \frac{\sin \varphi}{1+ \cos \varphi} = \frac{\sin \varphi(1- \cos \varphi)}{(1+ \cos \varphi)(1- \cos \varphi)} = \frac{1- \cos \varphi}{\sin \varphi}.$ - -In various applications of trigonometry, it is useful to rewrite the trigonometric functions (such as sine and cosine) in terms of rational functions of a new variable $t$. These identities are known collectively as the tangent half-angle formulae because of the definition of $t$. These identities can be useful in calculus for converting rational functions in sine and cosine to functions of t in order to find their antiderivatives. - -Technically, the existence of the tangent half-angle formulae stems from the fact that the circle is an algebraic curve of genus 0. One then expects that the circular functions should be reducible to rational functions. - -Geometrically, the construction goes like this: for any point (cos φ, sin φ) on the unit circle, draw the line passing through it and the point (−1, 0). This line crosses the y-axis at some point y = t. One can show using simple geometry that t = tan(φ/2). The equation for the drawn line is y = (1 + x)t. The equation for the intersection of the line and circle is then a quadratic equation involving t. 
The two solutions to this equation are (−1, 0) and (cos φ, sin φ). This allows us to write the latter as rational functions of t (solutions are given below). - -The parameter t represents the stereographic projection of the point (cos φ, sin φ) onto the y-axis with the center of projection at (−1, 0). Thus, the tangent half-angle formulae give conversions between the stereographic coordinate t on the unit circle and the standard angular coordinate φ. - -Then we have - - - -\begin{align} - -& \sin\varphi = \frac{2t}{1 + t^2}, - -& & \cos\varphi = \frac{1 - t^2}{1 + t^2}, \\[8pt] - -& \tan\varphi = \frac{2t}{1 - t^2} - -& & \cot\varphi = \frac{1 - t^2}{2t}, \\[8pt] - -& \sec\varphi = \frac{1 + t^2}{1 - t^2}, - -& & \csc\varphi = \frac{1 + t^2}{2t}, - -\end{align} - - - -and - -e^{i \varphi} = \frac{1 + i t}{1 - i t}, \qquad - -e^{-i \varphi} = \frac{1 - i t}{1 + i t}. - - - -By eliminating phi between the directly above and the initial definition of $t$, one arrives at the following useful relationship for the arctangent in terms of the natural logarithm -$$ -2 \arctan t = -i \ln\frac{1+it}{1-it}. -$$ - -In calculus, the Weierstrass substitution is used to find antiderivatives of rational functions of sin φ and cos φ. After setting -$$ -t=\tan\tfrac{1}{2}\varphi. -$$ - -This implies that -$$ -\varphi=2\arctan(t)+2\pi n , -$$ - -for some integer n, and therefore -$$ -d\varphi = {{2dt} \over {1 + t^2}}. -$$ - -One can play an entirely analogous game with the hyperbolic functions. A point on (the right branch of) a hyperbola is given by (cosh θ, sinh θ). Projecting this onto y-axis from the center (−1, 0) gives the following: -$$ -t = \tanh\tfrac{1}{2}\theta = \frac{\sinh\theta}{\cosh\theta+1} = \frac{\cosh\theta-1}{\sinh\theta} -$$ - -with the identities - - - -\begin{align} - -& \sinh\theta = \frac{2t}{1 - t^2}, - -& & \cosh\theta = \frac{1 + t^2}{1 - t^2}, \\[8pt] - -& \tanh\theta = \frac{2t}{1 + t^2}, - -& & \coth\theta = \frac{1 + t^2}{2t}, \\[8pt] - -& \operatorname{sech}\theta = \frac{1 - t^2}{1 + t^2}, - -& & \operatorname{csch}\theta = \frac{1 - t^2}{2t}, - -\end{align} - - - -and - -e^\theta = \frac{1 + t}{1 - t}, \qquad - -e^{-\theta} = \frac{1 - t}{1 + t}. - -Finding θ in terms of t leads to following relationship between the area hyperbolic tangent and the natural logarithm: -$$ -2 \operatorname{artanh} t = \ln\frac{1+t}{1-t}. -$$ - -("ar-" is used rather than "arc-" because "arc" is about arc length and "ar" abbreviates "area". It is the area between two rays and a hyperbola, rather than the arc length between two rays measured along an arc of a circle.) - -Comparing the hyperbolic identities to the circular ones, one notices that they involve the same functions of t, just permuted. If we identify the parameter t in both cases we arrive at a relationship between the circular functions and the hyperbolic ones. That is, if -$$ -t = \tan\tfrac 1 2 \varphi = \tanh\tfrac 1 2 \theta -$$ - -then -$$ -\varphi = 2\tan^{-1}\tanh \tfrac 1 2 \theta \equiv \operatorname{gd} \theta. -$$ - -where gd(θ) is the Gudermannian function. The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic ones that does not involve complex numbers. The above descriptions of the tangent half-angle formulae (projection the unit circle and standard hyperbola onto the y-axis) give a geometric interpretation of this function. 
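As a brief worked example of the substitution just described (a standard computation, stated here using only the identities given above), consider the antiderivative of $1/(1+\cos\varphi)$. Writing $t = \tan\tfrac{1}{2}\varphi$, so that $1+\cos\varphi = 1 + \frac{1-t^2}{1+t^2} = \frac{2}{1+t^2}$ and $d\varphi = \frac{2 dt}{1+t^2}$, the integrand collapses to a constant:
$$
\int \frac{d\varphi}{1+\cos\varphi} = \int \frac{1+t^2}{2} \cdot \frac{2 dt}{1+t^2} = \int dt = t + C = \tan\tfrac{1}{2}\varphi + C.
$$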
- -The tangent of half of an acute angle of a right triangle whose sides are a Pythagorean triple will necessarily be a rational number in the interval (0, 1). Vice versa, when a half-angle tangent is a rational number in the interval (0, 1), there is a right triangle that has the full angle and that has side lengths that are a Pythagorean triple. diff --git a/wiki/wikipedia/3233.txt b/wiki/wikipedia/3233.txt deleted file mode 100644 index dafcf50dc37f2ab8b2c5dfb9d0072c9513c295f5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3233.txt +++ /dev/null @@ -1,267 +0,0 @@ -In computer science, read-copy-update (RCU) is a synchronization mechanism that avoids the use of lock primitives while multiple threads concurrently read and update elements that are linked through pointers and that belong to shared data structures (e.g., linked lists, trees, hash tables). - -Whenever a thread is inserting or deleting elements of data structures in shared memory, all readers are guaranteed to see and traverse either the older or the new structure, therefore avoiding inconsistencies (e.g., dereferencing null pointers). - -So the typical RCU update sequence goes something like the following: - -# Ensure that all readers accessing RCU-protected data structures carry out their references from within an RCU read-side critical section. - -# Remove pointers to a data structure, so that subsequent readers cannot gain a reference to it. - -# Wait for a grace period to elapse, so that all previous readers (which might still have pointers to the data structure removed in the prior step) will have completed their RCU read-side critical sections. - -# At this point, there cannot be any readers still holding references to the data structure, so it now may safely be reclaimed (e.g., freed). - -In the above procedure (which matches the earlier diagram), the updater is performing both the removal and the reclamation step, but it is often helpful for an entirely different thread to do the reclamation. Reference counting can be used to let the reader perform removal so, even if the same thread performs both the update step (step (2) above) and the reclamation step (step (4) above), it is often helpful to think of them separately. - -RCU is perhaps the most common non-blocking algorithm for a shared data structure. RCU is completely wait-free for any number of readers. - -Single-writer implementations RCU are also lock-free for the writer. - -Some multi-writer implementations of RCU are lock-free. - -Other multi-writer implementations of RCU serialize writers with a lock. - -By early 2008, there were almost 2,000 uses of the RCU API within the Linux kernel including the networking protocol stacks and the memory-management system. - -, there were more than 9,000 uses. - -Since 2006, researchers have applied RCU and similar techniques to a number of problems, including management of metadata used in dynamic analysis, managing the lifetime of clustered objects, managing object lifetime in the K42 research operating system, and optimizing software transactional memory implementations. Dragonfly BSD uses a technique similar to RCU that most closely resembles Linux's Sleepable RCU (SRCU) implementation. - -The ability to wait until all readers are done allows RCU readers to use much lighter-weight synchronization-in some cases, absolutely no synchronization at all. 
In contrast, in more conventional lock-based schemes, readers must use heavy-weight synchronization in order to prevent an updater from deleting the data structure out from under them. The reason is that lock-based updaters typically update data in place, and must therefore exclude readers. In contrast, RCU-based updaters typically take advantage of the fact that writes to single aligned pointers are atomic on modern CPUs, allowing atomic insertion, removal, and replacement of data in a linked structure without disrupting readers. Concurrent RCU readers can then continue accessing the old versions, and can dispense with the atomic read-modify-write instructions, memory barriers, and cache misses that are so expensive on modern SMP computer systems, even in absence of lock contention. The lightweight nature of RCU's read-side primitives provides additional advantages beyond excellent performance, scalability, and real-time response. For example, they provide immunity to most deadlock and livelock conditions. - -Of course, RCU also has disadvantages. For example, RCU is a specialized technique that works best in situations with mostly reads and few updates, but is often less applicable to update-only workloads. For another example, although the fact that RCU readers and updaters may execute concurrently is what enables the lightweight nature of RCU's read-side primitives, some algorithms may not be amenable to read/update concurrency. - -Despite well over a decade of experience with RCU, the exact extent of its applicability is still a research topic. - -The technique is covered by U.S. software patent 5,442,758, issued August 15, 1995 and assigned to Sequent Computer Systems, as well as by 5,608,893 (expired 2009-03-30), 5,727,209 (expired 2010-04-05), 6,219,690 (expired 2009-05-18), and 6,886,162 (expired 2009-05-25). The now-expired US Patent 4,809,168 covers a closely related technique. RCU is also the topic of one claim in the SCO v. IBM lawsuit. - -RCU is available in a number of operating systems, and was added to the Linux kernel in October 2002. User-level implementations such as are also available. - -The implementation of RCU in version 2.6 of the Linux kernel is among the better-known RCU implementations, and will be used as an inspiration for the RCU API in the remainder of this article. The core API (Application Programming Interface) is quite small: - -* rcu_read_lock(): Marks an RCU-protected data structure so that it won't be reclaimed for the full duration of that critical section. - -* rcu_read_unlock(): Used by a reader to inform the reclaimer that the reader is exiting an RCU read-side critical section. Note that RCU read-side critical sections may be nested and/or overlapping. - -* synchronize_rcu(): Blocks until all pre-existing RCU read-side critical sections on all CPUs have completed. Note that synchronize_rcu will not necessarily wait for any subsequent RCU read-side critical sections to complete. For example, consider the following sequence of events: - -
    -
    -	 CPU 0                    CPU 1                          CPU 2
    -	 -----------------        -------------------------      ---------------
    -	 1. rcu_read_lock()
    -	                          2. enters synchronize_rcu()
    -	                                                         3. rcu_read_lock()
    -	 4. rcu_read_unlock()
    -	                          5. exits synchronize_rcu()
    -	                                                         6. rcu_read_unlock()
    -
    - -Since synchronize_rcu is the API that must figure out when readers are done, its implementation is key to RCU. For RCU to be useful in all but the most read-intensive situations, synchronize_rcu's overhead must also be quite small. - -Alternatively, instead of blocking, synchronize_rcu may register a callback to be invoked after all ongoing RCU read-side critical sections have completed. This callback variant is called call_rcu in the Linux kernel. - -* rcu_assign_pointer(): The updater uses this function to assign a new value to an RCU-protected pointer, in order to safely communicate the change in value from the updater to the reader. This function returns the new value, and also executes any memory barrier instructions required for a given CPU architecture. Perhaps more importantly, it serves to document which pointers are protected by RCU. - -* rcu_dereference(): The reader uses rcu_dereference to fetch an RCU-protected pointer, which returns a value that may then be safely dereferenced. It also executes any directives required by the compiler or the CPU, for example, a volatile cast for gcc, a memory_order_consume load for C/C++11 or the memory-barrier instruction required by the old DEC Alpha CPU. The value returned by rcu_dereference is valid only within the enclosing RCU read-side critical section. As with rcu_assign_pointer, an important function of rcu_dereference is to document which pointers are protected by RCU. - -The diagram on the right shows how each API communicates among the reader, updater, and reclaimer. - -The RCU infrastructure observes the time sequence of rcu_read_lock, rcu_read_unlock, synchronize_rcu, and call_rcu invocations in order to determine when (1) synchronize_rcu invocations may return to their callers and (2) call_rcu callbacks may be invoked. Efficient implementations of the RCU infrastructure make heavy use of batching in order to amortize their overhead over many uses of the corresponding APIs. - -RCU has extremely simple "toy" implementations that can aid understanding of RCU. This section presents one such "toy" implementation that works in a non-preemptive environment. - - - -void rcu_read_lock(void) { } - -void rcu_read_unlock(void) { } - -void call_rcu(void (*callback) (void *), void *arg) - -{ - -// add callback/arg pair to a list - -} - -void synchronize_rcu(void) - -{ - -int cpu, ncpus = 0; - -for each_cpu(cpu) - -schedule_current_task_to(cpu); - -for each entry in the call_rcu list - -entry->callback (entry->arg); - -} - - - -In the code sample, rcu_assign_pointer and rcu_dereference can be ignored without missing much. However, they are needed in order to suppress harmful compiler optimization and to prevent CPUs from reordering accesses. - - - -#define rcu_assign_pointer(p, v) ({ \ - -smp_wmb(); /* Order previous writes. */ \ - -ACCESS_ONCE(p) = (v); \ - -}) - -#define rcu_dereference(p) ({ \ - -typeof(p) _value = ACCESS_ONCE(p); \ - -smp_read_barrier_depends(); /* nop on most architectures */ \ - -(_value); \ - -}) - - - -Note that rcu_read_lock and rcu_read_unlock do nothing. This is the great strength of classic RCU in a non-preemptive kernel: read-side overhead is precisely zero, as smp_read_barrier_depends() is an empty macro on all but DEC Alpha CPUs; such memory barriers are not needed on modern CPUs. The ACCESS_ONCE() macro is a volatile cast that generates no additional code in most cases. 
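To make the update sequence concrete, here is a minimal sketch of the canonical usage pattern built from the API members just described. It is written in the style of the user-level RCU implementations mentioned earlier (the urcu.h header is the liburcu convention; the struct config type, the global pointer, and both function names are illustrative assumptions, and error handling and updater locking are elided):

/* Sketch: one RCU-protected pointer, one reader, one updater.
 * Assumes a liburcu-style environment: reader threads are assumed to
 * have called rcu_register_thread(), and concurrent updaters are
 * assumed to be serialized by a lock that is not shown here. */
#include <stdlib.h>
#include <urcu.h>               /* assumed: userspace RCU library */

struct config { int threshold; };           /* illustrative payload */

static struct config *global_conf;          /* RCU-protected pointer */

int read_threshold(void)                    /* reader: never blocks */
{
    rcu_read_lock();                        /* enter read-side critical section */
    struct config *c = rcu_dereference(global_conf);
    int v = c ? c->threshold : -1;          /* use the snapshot */
    rcu_read_unlock();                      /* c may be reclaimed after this */
    return v;
}

void set_threshold(int t)                   /* updater: copy, publish, reclaim */
{
    struct config *old = global_conf;       /* safe: updaters are serialized */
    struct config *fresh = malloc(sizeof *fresh);
    if (old)
        *fresh = *old;                      /* copy... */
    fresh->threshold = t;                   /* ...update... */
    rcu_assign_pointer(global_conf, fresh); /* ...publish atomically */
    synchronize_rcu();                      /* wait out pre-existing readers */
    free(old);                              /* reclaim: no reader can hold it */
}

The grace period separates the removal step (the pointer swap) from the reclamation step (the free), exactly as in the four-step update sequence listed at the start of this article, and with the toy implementation above the reader side of this sketch compiles to no synchronization instructions at all.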
And there is no way that rcu_read_lock can participate in a deadlock cycle, cause a realtime process to miss its scheduling deadline, precipitate priority inversion, or result in high lock contention. However, in this toy RCU implementation, blocking within an RCU read-side critical section is illegal, just as is blocking while holding a pure spinlock. - -The implementation of synchronize_rcu moves the caller of synchronize_cpu to each CPU, thus blocking until all CPUs have been able to perform the context switch. Recall that this is a non-preemptive environment and that blocking within an RCU read-side critical section is illegal, which imply that there can be no preemption points within an RCU read-side critical section. Therefore, if a given CPU executes a context switch (to schedule another process), we know that this CPU must have completed all preceding RCU read-side critical sections. Once all CPUs have executed a context switch, then all preceding RCU read-side critical sections will have completed. - -Although RCU can be used in many different ways, a very common use of RCU is analogous to reader–writer locking. The following side-by-side code display shows how closely related reader–writer locking and RCU can be. - - - -/* reader-writer locking */ /* RCU */ - -1 struct el { 1 struct el { - -2 struct list_head lp; 2 struct list_head lp; - -3 long key; 3 long key; - -4 spinlock_t mutex; 4 spinlock_t mutex; - -5 int data; 5 int data; - -6 /* Other data fields */ 6 /* Other data fields */ - -7 }; 7 }; - -8 DEFINE_RWLOCK(listmutex); 8 DEFINE_SPINLOCK(listmutex); - -9 LIST_HEAD(head); 9 LIST_HEAD(head); - -1 int search(long key, int *result) 1 int search(long key, int *result) - -2 { 2 { - -3 struct el *p; 3 struct el *p; - -4 4 - -5 read_lock(&listmutex); 5 rcu_read_lock(); - -6 list_for_each_entry(p, &head, lp) { 6 list_for_each_entry_rcu(p, &head, lp) { - -7 if (p->key == key) { 7 if (p->key == key) { - -8 *result = p->data; 8 *result = p->data; - -9 read_unlock(&listmutex); 9 rcu_read_unlock(); - -10 return 1; 10 return 1; - -11 } 11 } - -12 } 12 } - -13 read_unlock(&listmutex); 13 rcu_read_unlock(); - -14 return 0; 14 return 0; - -15 } 15 } - -1 int delete(long key) 1 int delete(long key) - -2 { 2 { - -3 struct el *p; 3 struct el *p; - -4 4 - -5 write_lock(&listmutex); 5 spin_lock(&listmutex); - -6 list_for_each_entry(p, &head, lp) { 6 list_for_each_entry(p, &head, lp) { - -7 if (p->key == key) { 7 if (p->key == key) { - -8 list_del(&p->lp); 8 list_del_rcu(&p->lp); - -9 write_unlock(&listmutex); 9 spin_unlock(&listmutex); - -10 synchronize_rcu(); - -10 kfree(p); 11 kfree(p); - -11 return 1; 12 return 1; - -12 } 13 } - -13 } 14 } - -14 write_unlock(&listmutex); 15 spin_unlock(&listmutex); - -15 return 0; 16 return 0; - -16 } 17 } - - - -The differences between the two approaches are quite small. Read-side locking moves to rcu_read_lock and rcu_read_unlock, update-side locking moves from a reader-writer lock to a simple spinlock, and a synchronize_rcu precedes the kfree. - -However, there is one potential catch: the read-side and update-side critical sections can now run concurrently. In many cases, this will not be a problem, but it is necessary to check carefully regardless. For example, if multiple independent list updates must be seen as a single atomic update, converting to RCU will require special care. - -Also, the presence of synchronize_rcu means that the RCU version of delete can now block. 
If this is a problem, call_rcu could be used like call_rcu (kfree, p) in place of synchronize_rcu. This is especially useful in combination with reference counting. - -Techniques and mechanisms resembling RCU have been independently invented multiple times: - -# H. T. Kung and Q. Lehman described use of garbage collectors to implement RCU-like access to a binary search tree. - -# Udi Manber and Richard Ladner extended Kung's and Lehman's work to non-garbage-collected environments by deferring reclamation until all threads running at removal time have terminated, which works in environments that do not have long-lived threads. - -# Richard Rashid et al. described a lazy translation lookaside buffer (TLB) implementation that deferred reclaiming virtual-address space until all CPUs flushed their TLB, which is similar in spirit to some RCU implementations. - -# James P. Hennessy, Damian L. Osisek, and Joseph W. Seigh, II were granted US Patent 4,809,168 in 1989 (since lapsed). This patent describes an RCU-like mechanism that was apparently used in VM/XA on IBM mainframes. - -# William Pugh described an RCU-like mechanism that relied on explicit flag-setting by readers. - -# Aju John proposed an RCU-like implementation where updaters simply wait for a fixed period of time, under the assumption that readers would all complete within that fixed time, as might be appropriate in a hard real-time system. Van Jacobson proposed a similar scheme in 1993 (verbal communication). - -# J. Slingwine and P. E. McKenney received US Patent 5,442,758 in August 1995, which describes RCU as implemented in DYNIX/ptx and later in the Linux kernel. - -# B. Gamsa, O. Krieger, J. Appavoo, and M. Stumm described an RCU-like mechanism used in the University of Toronto and the closely related IBM Research K42 research operating systems. - -# Rusty Russell and Phil Rumpf described RCU-like techniques for handling unloading of Linux kernel modules. - -# D. Sarma added RCU to in October 2002. - -# Robert Colvin et al. formally verified a lazy concurrent list-based set algorithm that resembles RCU. - -# M. Desnoyers et al. published a description of user-space RCU. - -# A. Gotsman et al. derived formal semantics for RCU based on separation logic. - -# Ilan Frenkel, Roman Geller, Yoram Ramberg, and Yoram Snir were granted US Patent 7,099,932 in 2006. This patent describes an RCU-like mechanism for retrieving and storing quality of service policy management information using a directory service in a manner that enforces read/write consistency and enables read/write concurrency. diff --git a/wiki/wikipedia/3234.txt b/wiki/wikipedia/3234.txt deleted file mode 100644 index e2e5ac208c44a3802c27a6df3187fff9ae9d074a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3234.txt +++ /dev/null @@ -1,58 +0,0 @@ -In mathematical economics, Topkis's theorem is a result that is useful for establishing comparative statics. The theorem allows researchers to understand how the optimal value for a choice variable changes when a feature of the environment changes. The result states that if f is supermodular in (x,θ), and D is a lattice, then $x^*(\theta)=\arg\max_{x\in D}f(x,\theta)$ is nondecreasing in θ. The result is especially helpful for establishing comparative static results when the objective function is not differentiable. The result is named after Donald M. Topkis. - -This example will show how using Topkis's theorem gives the same result as using more standard tools. 
The advantage of using Topkis's theorem is that it can be applied to a wider class of problems than can be studied with standard economics tools. - -A driver is driving down a highway and must choose a speed, s. Going faster is desirable, but is more likely to result in a crash. There is some prevalence of potholes, p. The presence of potholes increases the probability of crashing. Note that s is a choice variable and p is a parameter of the environment that is fixed from the perspective of the driver. The driver seeks to $\max_{s}U(s,p)$. - -We would like to understand how the driver's speed (a choice variable) changes with the amount of potholes: -$$ -\frac{\partial s^{\ast }(p)}{\partial p}. -$$ - -If one wanted to solve the problem with standard tools such as the implicit function theorem, one would have to assume that the problem is well behaved: U(.) is twice continuously differentiable, concave in s, that the domain over which s is defined is convex, and that it there is a unique maximizer $s^{\ast }(p)$ for every value of p and that $s^{\ast }(p)$ is in the interior of the set over which s is defined. Note that the optimal speed is a function of the amount of potholes. Taking the first order condition, we know that at the optimum, $U_{s}(s^{\ast }(p),p)=0$. Differentiating the first order condition, with respect to p and using the implicit function theorem, we find that -$$ -U_{ss}(s^{\ast}(p),p)(\partial s^{\ast }(p)/(\partial p))+U_{sp}(s^{\ast }(p),p)=0 -$$ - -or that - -\frac{\partial s^{\ast }(p)}{\partial p} - -= \underset{\text{negative since we assumed }U(.)\text{ was concave in }s}{\underbrace{\frac{-U_{sp}(s^{\ast }(p),p)}{U_{ss}(s^{\ast }(p),p)}}}. - -So, -$$ -\frac{\partial s^{\ast }(p)}{\partial p}\overset{\text{sign}}{=}U_{sp}(s^{\ast }(p),p). -$$ - -If s and p are substitutes, -$$ -U_{sp}(s^\ast (p),p)<0 -$$ - -and hence -$$ -\frac{\partial s^{\ast }(p)}{\partial p}<0 -$$ - -and more potholes causes less speeding. Clearly it is more reasonable to assume that they are substitutes. - -The problem with the above approach is that it relies on the differentiability of the objective function and on concavity. We could get at the same answer using Topkis's theorem in the following way. We want to show that $U(s,p)$ is submodular (the opposite of supermodular) in $\left( s,p\right) $. Note that the choice set is clearly a lattice. The cross partial of U being negative, $\frac{\partial^2 U}{\partial s\partial p}<0$, is a sufficient condition. Hence if $ \frac{\partial^2 U}{\partial s\partial p}<0,$ we know that $\frac{\partial s^{\ast }(p)}{\partial p}<0$. - -Hence using the implicit function theorem and Topkis's theorem gives the same result, but the latter does so with fewer assumptions. - -* - -* - -* - -Category:Comparative statics - -Category:Economics theorems - -Category:Theorems in lattice theory - -Category:Order theory - -Category:Optimization of ordered sets diff --git a/wiki/wikipedia/3235.txt b/wiki/wikipedia/3235.txt deleted file mode 100644 index 05ea137a23d12375c713b1ec9669e4944ad46b35..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3235.txt +++ /dev/null @@ -1 +0,0 @@ -In algebra, Quillen's lemma states that an endomorphism of a simple module over the enveloping algebra of a finite-dimensional Lie algebra over a field k is algebraic over k. In contrast to a version of Schur's lemma due to Dixmier, it does not require k to be uncountable. Quillen's original short proof uses generic flatness. 
diff --git a/wiki/wikipedia/3236.txt b/wiki/wikipedia/3236.txt deleted file mode 100644 index 1fddb3b2991f8594a157f4c14155eed91cb3fcf6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3236.txt +++ /dev/null @@ -1,54 +0,0 @@ -In mathematics, and more particularly in analytic number theory, Perron's formula is a formula due to Oskar Perron to calculate the sum of an arithmetic function, by means of an inverse Mellin transform. - -Let $\{a(n)\}$ be an arithmetic function, and let -$$ - g(s)=\sum_{n=1}^{\infty} \frac{a(n)}{n^{s}} -$$ - -be the corresponding Dirichlet series. Presume the Dirichlet series to be uniformly convergent for $\Re(s)>\sigma$. Then Perron's formula is -$$ - A(x) = {\sum_{n\le x}}' a(n) =\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} g(z)\frac{x^{z}}{z} dz. -$$ - -Here, the prime on the summation indicates that the last term of the sum must be multiplied by 1/2 when x is an integer. The integral is not a convergent Lebesgue integral; it is understood as the Cauchy principal value. The formula requires that c > 0, c > σ, and x > 0. - -An easy sketch of the proof comes from taking Abel's sum formula -$$ - g(s)=\sum_{n=1}^{\infty} \frac{a(n)}{n^{s} }=s\int_{1}^{\infty} A(x)x^{-(s+1) } dx. -$$ - -This is nothing but a Laplace transform under the variable change $x = e^t.$ Inverting it one gets Perron's formula. - -Because of its general relationship to Dirichlet series, the formula is commonly applied to many number-theoretic sums. Thus, for example, one has the famous integral representation for the Riemann zeta function: -$$ -\zeta(s)=s\int_1^\infty \frac{\lfloor x\rfloor}{x^{s+1}}dx -$$ - -and a similar formula for Dirichlet L-functions: -$$ -L(s,\chi)=s\int_1^\infty \frac{A(x)}{x^{s+1}}dx -$$ - -where -$$ -A(x)=\sum_{n\le x} \chi(n) -$$ - -and $\chi(n)$ is a Dirichlet character. Other examples appear in the articles on the Mertens function and the von Mangoldt function. - -Perron's formula is just a special case of the Mellin discrete convolution -$$ - \sum_{n=1}^{\infty} a(n)f(n/x)= \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty}F(s)G(s)x^{s}ds -$$ - -where -$$ -G(s)= \sum_{n=1}^{\infty} \frac{a(n)}{n^{s}} -$$ - -and -$$ - F(s)= \int_{0}^{\infty}f(x)x^{s-1}dx -$$ - -the Mellin transform. The Perron formula is just the special case of the test function $f(1/x)=\theta (x-1),$ for $ \theta(x) $ the Heaviside step function. diff --git a/wiki/wikipedia/3237.txt b/wiki/wikipedia/3237.txt deleted file mode 100644 index 3b448ba0145bea802dc1901edef5b2af11fc9aab..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3237.txt +++ /dev/null @@ -1,366 +0,0 @@ -In geometry, the area enclosed by a circle of radius r is πr2. Here the Greek letter pi represents the constant ratio of the circumference of any circle to its diameter, approximately equal to 3.1416. - -One method of deriving this formula, which originated with Archimedes, involves viewing the circle as the limit of a sequence of regular polygons. The area of a regular polygon is half its perimeter multiplied by the distance from its center to its sides, and the corresponding formula-that the area is half the perimeter times the radius-namely, A = 1/2 × 2πr × r, holds in the limit for a circle. - -Although often referred to as the area of a circle in informal contexts, strictly speaking the term disk refers to the interior of the circle, while circle is reserved for the boundary only, which is a curve and covers no area itself. 
Therefore, the area of a disk is the more precise phrase for the area enclosed by a circle. - -Modern mathematics can obtain the area using the methods of integral calculus or its more sophisticated offspring, real analysis. However, the area of a disk was studied by the Ancient Greeks. Eudoxus of Cnidus in the fifth century B.C. had found that the area of a disk is proportional to its radius squared. Archimedes used the tools of Euclidean geometry to show, in his book Measurement of a Circle, that the area inside a circle is equal to that of a right triangle whose base has the length of the circle's circumference and whose height equals the circle's radius. The circumference is $2\pi r$, and the area of a triangle is half the base times the height, yielding the area $\pi r^2$ for the disk. Prior to Archimedes, Hippocrates of Chios was the first to show that the area of a disk is proportional to the square of its diameter, as part of his quadrature of the lune of Hippocrates, but did not identify the constant of proportionality. - -A variety of arguments have been advanced historically to establish the equation $A=\pi r^2$ to varying degrees of mathematical rigor. The most famous of these is Archimedes' method of exhaustion, one of the earliest uses of the mathematical concept of a limit, as well as the origin of Archimedes' axiom which remains part of the standard analytical treatment of the real number system. The original proof of Archimedes is not rigorous by modern standards, because it assumes that we can compare the length of arc of a circle to the length of a secant and a tangent line, and similar statements about the area, as geometrically evident. - -The area of a regular polygon is half its perimeter times the apothem. As the number of sides of the regular polygon increases, the polygon tends to a circle, and the apothem tends to the radius. This suggests that the area of a disk is half the circumference of its bounding circle times the radius. - -Following Archimedes' argument in The Measurement of a Circle (c. 260 BCE), compare the area enclosed by a circle to a right triangle whose base has the length of the circle's circumference and whose height equals the circle's radius. If the area of the circle is not equal to that of the triangle, then it must be either greater or less. We eliminate each of these by contradiction, leaving equality as the only possibility. We use regular polygons in the same way. - -Suppose that the area C enclosed by the circle is greater than the area T = $\tfrac{1}{2}cr$ of the triangle. Let E denote the excess amount. Inscribe a square in the circle, so that its four corners lie on the circle. Between the square and the circle are four segments. If the total area of those gaps, G4, is greater than E, split each arc in half. This makes the inscribed square into an inscribed octagon, and produces eight segments with a smaller total gap, G8. Continue splitting until the total gap area, Gn, is less than E. Now the area of the inscribed polygon, Pn = C − Gn, must be greater than that of the triangle. - -\begin{align} - -E &{}= C - T \\ - -&{}> G_n \\ - -P_n &{}= C - G_n \\ - -&{}> C - E \\ - -P_n &{}> T - -\end{align} - -But this forces a contradiction, as follows. Draw a perpendicular from the center to the midpoint of a side of the polygon; its length, h, is less than the circle radius. Also, let each side of the polygon have length s; then the sum of the sides, ns, is less than the circle circumference. 
The polygon area consists of n equal triangles with height h and base s, thus equals $\tfrac{1}{2}nhs$. But since h < r and ns < c, the polygon area must be less than the triangle area, $\tfrac{1}{2}cr$, a contradiction. Therefore, our supposition that C might be greater than T must be wrong. - -Suppose that the area enclosed by the circle is less than the area T of the triangle. Let D denote the deficit amount. Circumscribe a square, so that the midpoint of each edge lies on the circle. If the total area gap between the square and the circle, G4, is greater than D, slice off the corners with circle tangents to make a circumscribed octagon, and continue slicing until the gap area is less than D. The area of the polygon, Pn, must be less than T. - -\begin{align} - -D &{}= T - C \\ - -&{}> G_n \\ - -P_n &{}= C + G_n \\ - -&{}< C + D \\ - -P_n &{}< T - -\end{align} - -This, too, forces a contradiction. For, a perpendicular to the midpoint of each polygon side is a radius, of length r. And since the total side length is greater than the circumference, the polygon consists of n identical triangles with total area greater than T. Again we have a contradiction, so our supposition that C might be less than T must be wrong as well. - -Therefore, it must be the case that the area enclosed by the circle is precisely the same as the area of the triangle. This concludes the proof. - -Following Satō Moshun and Leonardo da Vinci, we can use inscribed regular polygons in a different way. Suppose we inscribe a hexagon. Cut the hexagon into six triangles by splitting it from the center. Two opposite triangles both touch two common diameters; slide them along one so the radial edges are adjacent. They now form a parallelogram, with the hexagon sides making two opposite edges, one of which is the base, s. Two radial edges form slanted sides, and the height, h, is equal to its apothem (as in the Archimedes proof). In fact, we can also assemble all the triangles into one big parallelogram by putting successive pairs next to each other. The same is true if we increase the number of sides to eight and so on. For a polygon with 2n sides, the parallelogram will have a base of length ns, and a height h. As the number of sides increases, the length of the parallelogram base approaches half the circle circumference, and its height approaches the circle radius. In the limit, the parallelogram becomes a rectangle with width $\pi r$ and height r. - -There are various equivalent definitions of the constant π. The conventional definition in pre-calculus geometry is the ratio of the circumference of a circle to its diameter: -$$ -\pi=\frac{C}{D}. -$$ - -However, because the circumference of a circle is not a primitive analytical concept, this definition is not suitable in modern rigorous treatments. A standard modern definition is that π is equal to twice the least positive root of the cosine function or, equivalently, the half-period of the sine (or cosine) function. The cosine function can be defined either as a power series, or as the solution of a certain differential equation. This avoids any reference to circles in the definition of π, so that statements about the relation of π to the circumference and area of circles are actually theorems, rather than definitions, that follow from the analytical definitions of concepts like "area" and "circumference". 
- -The analytical definitions are seen to be equivalent, if it is agreed that the circumference of the circle is measured as a rectifiable curve by means of the integral -$$ -C = 2\int_{-R}^R \frac{Rdx}{\sqrt{R^2-x^2}} = 2R\int_{-1}^1\frac{dx}{\sqrt{1-x^2}}. -$$ - -The integral appearing on the right is an abelian integral whose value is a half-period of the sine function, equal to pi. Thus $C=2\pi R=\pi D$ is seen to be true as a theorem. - -Several of the arguments that follow use only concepts from elementary calculus to reproduce the formula $A=\pi r^2$, but in many cases to regard these as actual proofs, they rely implicitly on the fact that one can develop trigonometric functions and the fundamental constant pi in a way that is totally independent of their relation to geometry. We have indicated where appropriate how each of these proofs can be made totally independent of all trigonometry, but in some cases that requires more sophisticated mathematical ideas than those afforded by elementary calculus. - -Using calculus, we can sum the area incrementally, partitioning the disk into thin concentric rings like the layers of an onion. This is the method of shell integration in two dimensions. For an infinitesimally thin ring of the "onion" of radius t, the accumulated area is 2pit dt, the circumferential length of the ring times its infinitesimal width (one can approximate this ring by a rectangle with width=2pit and height=dt). This gives an elementary integral for a disk of radius r. - -\begin{align} - -\mathrm{Area}(r) &{}= \int_0^{r} 2 \pi t dt \\ - -&{}= 2\pi \left[\frac{t^2}{2} \right]_{0}^{r}\\ - -&{}= \pi r^2. - -\end{align} - -It is rigorously justified by the multivariate substitution rule in polar coordinates. Namely, the area is given by a double integral of the constant function 1 over the disk itself. If D denotes the disk, then the double integral can be computed in polar coordinates as follows: - -\begin{align} - -\mathrm{Area}(r) &{}= \iint_D 1\ d(x, y)\\ - -&{} = \iint_D t\ dt\ d\theta\\ - -&{} = \int_0^r \int_0^{2\pi} t\ d\theta\ dt\\ - -&{} = \int_0^r \left[ t\theta \right]_{0}^{2\pi} dt\\ - -&{}= \int_0^{r} 2 \pi t dt \\ - -\end{align} - -which is the same result as obtained above. - -An equivalent rigorous justification, without relying on the special coordinates of trigonometry, uses the coarea formula. Define a function $\rho:\mathbb R^2\to\mathbb R$ by $\rho(x,y)=\sqrt{x^2+y^2}$. Note ρ is a Lipschitz function whose gradient is a unit vector $|\nabla\rho|=1$ (almost everywhere). Let D be the disc $\rho<1$ in $\mathbb R^2$. We will show that $\mathcal L^2(D)=\pi$, where $\mathcal L^2$ is the two-dimensional Lebesgue measure in $\mathbb R^2$. We shall assume that the one-dimensional Hausdorff measure of the circle $\rho=r$ is $2\pi r$, the circumference of the circle of radius r. (This can be taken as the definition of circumference.) Then, by the coarea formula, - -\begin{align}\mathcal L^2(D) &= \iint_D |\nabla \rho|d\mathcal{L}^2\\ - -&= \int_{\mathbb R} \mathcal H^1(\rho^{-1}(r)\cap D)dr\\ - -&= \int_0^1\mathcal H^1(\rho^{-1}(r))dr \\ - -&= \int_0^1 2\pi r dr= \pi. - -\end{align} - -Similar to the onion proof outlined above, we could exploit calculus in a different way in order to arrive at the formula for the area of a disk. Consider unwrapping the concentric circles to straight strips. This will form a right angled triangle with r as its height and 2pir (being the outer slice of onion) as its base. 
- -Finding the area of this triangle will give the area of the disk - -\begin{align} - -\text{Area} &{}= \frac{1}{2} \cdot \text{base} \cdot \text{height} \\ - -&{}= \frac{1}{2} \cdot 2 \pi r \cdot r \\ - -&{}= \pi r^2 - -\end{align} - -The opposite and adjacent angles for this triangle are respectively in degrees 9.0430611..., 80.956939... and in radians 0.1578311..., 1.4129651.... - -Explicitly, we imagine dividing up a circle into triangles, each with a height equal to the circle's radius and a base that is infinitesimally small. The area of each of these triangles is equal to $1/2\cdot r \cdot du$. By summing up (integrating) all of the areas of these triangles, we arrive at the formula for the circle's area: - -\begin{align} - -\mathrm{Area}(r) &{}= \int_0^{2\pi r} \frac{1}{2} r du \\ - -&{}= \left[ \frac{1}{2} r u \right]_{0}^{2 \pi r}\\ - -&{}= \pi r^2. - -\end{align} - -It too can be justified by a double integral of the constant function 1 over the disk by reversing the order of integration and using a change of variables in the above iterated integral: - -\begin{align} - -\mathrm{Area}(r) &{}= \iint_D 1\ d(x, y)\\ - -&{} = \iint_D t\ dt\ d\theta\\ - -&{} = \int_0^{2\pi} \int_0^r t\ dt\ d\theta\\ - -&{} = \int_0^{2\pi} \frac{1}{2}r^2\ d\theta\\ - -\end{align} - -Making the substitution $u = r\theta,\ du=r\ d\theta$ converts the integral to - -\begin{align} - -\int_0^{2\pi r} \frac{1}{2}\frac{r^2}{r} du = \int_0^{2\pi r} \frac{1}{2} r\ du - -\end{align} - - - -which is the same as the above result. - -The triangle proof can be reformulated as an application of Green's theorem in flux-divergence form (i.e. a two-dimensional version of the divergence theorem), in a way that avoids all mention of trigonometry and the constant π. Consider the vector field $\mathbf r=x\mathbf i + y\mathbf j$ in the plane. So the divergence of r is equal to two, and hence the area of a disc D is equal to -$$ -A = \frac12\iint_D \operatorname{div}\mathbf r dA. -$$ - -By Green's theorem, this is the same as the outward flux of r across the circle bounding D: -$$ -A = \frac12\oint_{\partial D} \mathbf r\cdot\mathbf n ds -$$ - -where n is the unit normal and ds is the arc length measure. For a circle of radius R centered at the origin, we have $|\mathbf r|=R$ and $\mathbf n=\mathbf r/R$, so the above equality is -$$ -A = \frac12\oint_{\partial D} \mathbf r\cdot\frac{\mathbf r}{R} ds = \frac{R}{2}\oint_{\partial D} ds. -$$ - -The integral of ds over the whole circle $\partial D$ is just the arc length, which is its circumference, so this shows that the area A enclosed by the circle is equal to $R/2$ times the circumference of the circle. - -Another proof that uses triangles considers the area enclosed by a circle to be made up of an infinite number of triangles (i.e. the triangles each have an angle of $d\theta$ at the centre of the circle), each with an area of $\tfrac{1}{2} \cdot r^2 \cdot d\theta$ (derived from the expression for the area of a triangle: $\tfrac{1}{2} \cdot a \cdot b \cdot \sin\theta$). Note that $\sin(d\theta) \approx d\theta$ due to the small angle approximation. Through summing the areas of the triangles, the expression for the area of the circle can therefore be found: - -\begin{align} - -\mathrm{Area}&{}= \int_0^{2\pi} \frac{1}{2} r^2 d \theta \\ - -&{}= \left[ \frac{1}{2} r^2 \theta \right]_{0}^{2 \pi}\\ - -&{}= \pi r^2. - -\end{align} - -Note that the area of a semicircle of radius r can be computed by the integral $\int_{-r}^r \sqrt{r^2 - x^2}dx$. 
- -By trigonometric substitution, we substitute $x=r \sin\theta $, hence $dx=r\cos \theta d\theta.$ -$$ -\int_{-r}^r \sqrt{r^2 - x^2}dx -$$ -$$ -=\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\sqrt{r^2(1-\sin ^2 \theta)} \cdot r \cos \theta d \theta -$$ -$$ -=2r^2\int_{0}^{\frac{\pi}{2}} \cos ^2 \theta d \theta -$$ -$$ -=\frac{\pi r^2}{2}. -$$ - -The last step follows since the trigonometric identity $\cos(\theta)=\sin(\pi/2-\theta)$ implies that $\cos^2\theta$ and $\sin^2\theta$ have equal integrals over the interval $[0,\pi/2]$, using integration by substitution. But on the other hand, since $\cos^2\theta+\sin^2\theta=1$, the sum of the two integrals is the length of that interval, which is $\pi/2$. Consequently, the integral of $\cos^2 \theta$ is equal to half the length of that interval, which is $\pi/4$. - -Therefore, the area of a circle of radius r, which is twice the area of the semi-circle, is equal to $2 \cdot \frac{\pi r^2}{2} = \pi r^2$. - -This particular proof may appear to beg the question, if the sine and cosine functions involved in the trigonometric substitution are regarded as being defined in relation to circles. However, as noted earlier, it is possible to define sine, cosine, and pi in a way that is totally independent of trigonometry, in which case the proof is valid by the change of variables formula and Fubini's theorem, assuming the basic properties of sine and cosine (which can also be proved without assuming anything about their relation to circles). - -The circle is the closed curve of least perimeter that encloses the maximum area. This is known as the isoperimetric inequality, which states that if a rectifiable Jordan curve in the Euclidean plane has perimeter C and encloses an area A (by the Jordan curve theorem) then -$$ -4\pi A\le C^2. -$$ - -Moreover, equality holds in this inequality if and only if the curve is a circle, in which case $A=\pi r^2$ and $C=2\pi r$. - -The calculations Archimedes used to approximate the area numerically were laborious, and he stopped with a polygon of 96 sides. A faster method uses ideas of Willebrord Snell (Cyclometricus, 1621), further developed by Christiaan Huygens (De Circuli Magnitudine Inventa, 1654), described in Gerretsen. - -Given a circle, let un be the perimeter of an inscribed regular n-gon, and let Un be the perimeter of a circumscribed regular n-gon. Then un and Un are lower and upper bounds for the circumference of the circle that become sharper and sharper as n increases, and their average (un + Un)/2 is an especially good approximation to the circumference. To compute un and Un for large n, Archimedes derived the following doubling formulae: -$$ -u_{2n} = \sqrt{U_{2n} u_{n}} -$$ (geometric mean), and -$$ -U_{2n} = \frac{2 U_{n} u_{n}}{ U_{n} + u_{n}} -$$ (harmonic mean). - -Starting from a hexagon, Archimedes doubled n four times to get a 96-gon, which gave him a good approximation to the circumference of the circle. - -In modern notation, we can reproduce his computation (and go further) as follows. - -For a unit circle, an inscribed hexagon has u6 = 6, and a circumscribed hexagon has U6 = 43. - -Doubling seven times yields - -(Here un + Un/2 approximates the circumference of the unit circle, which is 2pi, so un + Un/4 approximates pi.) - -The last entry of the table has 355113 as one of its best rational approximations; - -i.e., there is no better approximation among rational numbers with denominator up to 113. 
- -The number 355113 is also an excellent approximation to pi, better than any other rational number with denominator less than 16604. - -Snell proposed (and Huygens proved) a tighter bound than Archimedes': -$$ - n \frac{3 \sin \frac{\pi}{n}}{2+\cos\frac{\pi}{n}} < \pi < n \left(2 \sin \frac{\pi}{3 n} + \tan \frac{\pi}{3 n}\right). -$$ - -This for n = 48 gives a better approximation (about 3.14159292) than Archimedes' method for n = 768. - -Let one side of an inscribed regular n-gon have length sn and touch the circle at points A and B. Let A′ be the point opposite A on the circle, so that A′A is a diameter, and A′AB is an inscribed triangle on a diameter. By Thales' theorem, this is a right triangle with right angle at B. Let the length of A′B be cn, which we call the complement of sn; thus cn2+sn2 = (2r)2. Let C bisect the arc from A to B, and let C′ be the point opposite C on the circle. Thus the length of CA is s2n, the length of C′A is c2n, and C′CA is itself a right triangle on diameter C′C. Because C bisects the arc from A to B, C′C perpendicularly bisects the chord from A to B, say at P. Triangle C′AP is thus a right triangle, and is similar to C′CA since they share the angle at C′. Thus all three corresponding sides are in the same proportion; in particular, we have C′A : C′C = C′P : C′A and AP : C′A = CA : C′C. The center of the circle, O, bisects A′A, so we also have triangle OAP similar to A′AB, with OP half the length of A′B. In terms of side lengths, this gives us - -\begin{align} - -c_{2n}^2 &{}= \left( r + \frac{1}{2} c_n \right) 2r \\ - -c_{2n} &{}= \frac{s_n}{s_{2n}} . - -\end{align} - -In the first equation C′P is C′O+OP, length r+12cn, and C′C is the diameter, 2r. For a unit circle we have the famous doubling equation of Ludolph van Ceulen, -$$ - c_{2n} = \sqrt{2+c_n} . -$$ - -If we now circumscribe a regular n-gon, with side A″B″ parallel to AB, then OAB and OA″B″ are similar triangles, with A″B″ : AB = OC : OP. Call the circumscribed side Sn; then this is Sn : sn = 1 : 12cn. (We have again used that OP is half the length of A′B.) Thus we obtain -$$ - c_n = 2\frac{s_n}{S_n} . -$$ - -Call the inscribed perimeter un = nsn, and the circumscribed perimeter Un = nSn. Then combining equations, we have -$$ - c_{2n} = \frac{s_n}{s_{2n}} = 2 \frac{s_{2n}}{S_{2n}} , -$$ - -so that -$$ - u_{2n}^2 = u_n U_{2n} . -$$ - -This gives a geometric mean equation. - -We can also deduce -$$ - 2 \frac{s_{2n}}{S_{2n}} \frac{s_n}{s_{2n}} = 2 + 2 \frac{s_n}{S_n} , -$$ - -or -$$ - \frac{2}{U_{2n}} = \frac{1}{u_n} + \frac{1}{U_n} . -$$ - -This gives a harmonic mean equation. - -When more efficient methods of finding areas are not available, we can resort to "throwing darts". This Monte Carlo method uses the fact that if random samples are taken uniformly scattered across the surface of a square in which a disk resides, the proportion of samples that hit the disk approximates the ratio of the area of the disk to the area of the square. This should be considered a method of last resort for computing the area of a disk (or any shape), as it requires an enormous number of samples to get useful accuracy; an estimate good to 10−n requires about 100n random samples . - -We have seen that by partitioning the disk into an infinite number of pieces we can reassemble the pieces into a rectangle. A remarkable fact discovered relatively recently is that we can dissect the disk into a large but finite number of pieces and then reassemble the pieces into a square of equal area. 
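Returning briefly to the dart-throwing method above: it is simple enough to state directly in code. The following is a minimal, self-contained C sketch (the sample count, the fixed seed, and the use of rand() are illustrative choices, not part of any reference implementation):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const long n = 10000000;        /* ~10^7 darts: only a few digits of accuracy */
    long hits = 0;
    srand(42);                      /* fixed seed so the run is reproducible */
    for (long i = 0; i < n; i++) {
        /* uniform point in the circumscribing square [-1,1] x [-1,1] */
        double x = 2.0 * rand() / RAND_MAX - 1.0;
        double y = 2.0 * rand() / RAND_MAX - 1.0;
        if (x * x + y * y <= 1.0)   /* dart landed inside the unit disk */
            hits++;
    }
    /* hit fraction ~ (disk area)/(square area) = pi/4 */
    printf("estimated disk area: %.6f\n", 4.0 * (double)hits / n);
    return 0;
}

Consistent with the sample counts quoted above, the error of such an estimate shrinks only like $1/\sqrt{n}$, which is why this is a method of last resort.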
The dissection result just described is called Tarski's circle-squaring problem. The nature of Laczkovich's proof is such that it proves the existence of such a partition (in fact, of many such partitions) but does not exhibit any particular partition. - -Circles can be defined in non-Euclidean geometry, and in particular in the hyperbolic and elliptic planes. - -For example, the unit sphere $S^2(1)$ is a model for the two-dimensional elliptic plane. It carries an intrinsic metric that arises by measuring geodesic length. The geodesic circles are the parallels in a geodesic coordinate system. - -More precisely, fix a point $\mathbf z\in S^2(1)$ that we place at the zenith. Associated to that zenith is a geodesic polar coordinate system $(\phi,\theta)$, $0\le\phi\le\pi$, $0\le\theta< 2\pi$, where z is the point $\phi=0$. In these coordinates, the geodesic distance from z to any other point $\mathbf x\in S^2(1)$ having coordinates $(\phi,\theta)$ is the value of $\phi$ at x. A spherical circle is the set of points a geodesic distance R from the zenith point z. Equivalently, with a fixed embedding into $\mathbb R^3$, the spherical circle of radius $R\le\pi$ centered at z is the set of x in $S^2(1)$ such that $\mathbf x\cdot\mathbf z = \cos R$. - -We can also measure the area of the spherical disk enclosed within a spherical circle, using the intrinsic surface area measure on the sphere. The area of the disk of radius R is then given by -$$ -A = \int_0^{2\pi}\int_0^R \sin(\phi)d\phi d\theta = 2\pi(1-\cos R). -$$ - -More generally, if a sphere $S^2(\rho)$ has radius of curvature $\rho$, then the area of the disk of radius R is given by -$$ -A = 2\pi\rho^2(1-\cos(R/\rho)). -$$ - -Observe that, as an application of L'Hôpital's rule, this tends to the Euclidean area $\pi R^2$ in the flat limit $\rho\to\infty$. - -The hyperbolic case is similar, with the area of a disk of intrinsic radius R in the (constant curvature $-1$) hyperbolic plane given by -$$ -A = 2\pi(\cosh R - 1) -$$ - -where cosh is the hyperbolic cosine. More generally, for the constant curvature $-k^2$ hyperbolic plane, the answer is -$$ -A = 2\pi k^{-2}(\cosh(kR) - 1). -$$ - -These identities are important for comparison inequalities in geometry. For example, the area enclosed by a circle of radius R in a flat space is always greater than the area of a spherical circle and smaller than a hyperbolic circle, provided all three circles have the same (intrinsic) radius. That is, -$$ -2\pi(1-\cos R) < \pi R^2 < 2\pi(\cosh R - 1) -$$ - -for all $R>0$. Intuitively, this is because the sphere tends to curve back on itself, yielding circles of smaller area than those in the plane, whilst the hyperbolic plane, when immersed into space, develops fringes that produce additional area. It is more generally true that the area of the circle of a fixed radius R is a strictly decreasing function of the curvature. - -In all cases, if $k$ is the curvature (constant, positive or negative), then the isoperimetric inequality for a domain with area A and perimeter L is -$$ -L^2\ge 4\pi A - kA^2 -$$ - -where equality is achieved precisely for the circle. - -We can stretch a disk to form an ellipse. Because this stretch is a linear transformation of the plane, it has a distortion factor which will change the area but preserve ratios of areas. This observation can be used to compute the area of an arbitrary ellipse from the area of a unit circle. - -Consider the unit circle circumscribed by a square of side length 2. 
The transformation sends the circle to an ellipse by stretching or shrinking the horizontal and vertical diameters to the major and minor axes of the ellipse. The square gets sent to a rectangle circumscribing the ellipse. The ratio of the area of the circle to the square is $\pi/4$, which means the ratio of the ellipse to the rectangle is also $\pi/4$. Suppose a and b are the lengths of the major and minor axes of the ellipse. Since the area of the rectangle is ab, the area of the ellipse is $\pi ab/4$. - -We can also consider analogous measurements in higher dimensions. For example, we may wish to find the volume inside a sphere. When we have a formula for the surface area, we can use the same kind of "onion" approach we used for the disk.
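The doubling recurrences and the "throwing darts" method described above are easy to try numerically. The following is a minimal Python sketch (not part of the original article); the starting values $u_6 = 6$ and $U_6 = 4\sqrt{3}$ are the perimeters of the inscribed and circumscribed regular hexagons of the unit circle:

```python
import math
import random

# Archimedes' perimeter-doubling recurrences for the unit circle:
#   2/U_{2n} = 1/u_n + 1/U_n   (harmonic mean),
#   u_{2n}   = sqrt(u_n * U_{2n})   (geometric mean).
u, U, n = 6.0, 4.0 * math.sqrt(3.0), 6   # regular hexagons
while n < 768:
    U = 2.0 * u * U / (u + U)   # circumscribed perimeter U_{2n}
    u = math.sqrt(u * U)        # inscribed perimeter u_{2n}
    n *= 2
print(f"{n}-gon: {u/2:.8f} < pi < {U/2:.8f}")

# Monte Carlo ("throwing darts"): the fraction of random points in the
# unit square that land inside the inscribed quarter disk approximates
# pi/4.  Accuracy improves only slowly with the number of samples.
samples = 10**6
hits = sum(random.random()**2 + random.random()**2 <= 1.0
           for _ in range(samples))
print("Monte Carlo estimate of pi:", 4 * hits / samples)
```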
diff --git a/wiki/wikipedia/3238.txt b/wiki/wikipedia/3238.txt deleted file mode 100644 index 949a8acc28d9c1b8a3f3f324e7388d84869bdd23..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3238.txt +++ /dev/null @@ -1,17 +0,0 @@ -In mathematics, an autonomous convergence theorem is one of a family of related theorems which specify conditions guaranteeing global asymptotic stability of a continuous autonomous dynamical system. - -The Markus–Yamabe conjecture was formulated as an attempt to give conditions for global stability of continuous dynamical systems in two dimensions. However, the Markus–Yamabe conjecture does not hold for dimensions higher than two, a problem which autonomous convergence theorems attempt to address. The first autonomous convergence theorem was constructed by Russell Smith. This theorem was later refined by Michael Li and James Muldowney. - -A comparatively simple autonomous convergence theorem is as follows: - -Let $x$ be a vector in some space $X \subseteq \mathbb{R}^n$, evolving according to an autonomous differential equation $\dot{x} = f(x)$. Suppose that $X$ is convex and forward invariant under $f$, and that there exists a fixed point $\hat{x} \in X$ such that $f(\hat{x}) = 0$. If there exists a logarithmic norm $\mu$ such that the Jacobian $J(x) = D_x f$ satisfies $\mu(J(x)) < 0$ for all values of $x$, then $\hat{x}$ is the only fixed point, and it is globally asymptotically stable. - -This autonomous convergence theorem is very closely related to the Banach fixed-point theorem. - -Note: the following is an intuitive description of how autonomous convergence theorems guarantee stability, not a strictly mathematical description. - -The key point in the example theorem given above is the existence of a negative logarithmic norm, which is derived from a vector norm. The vector norm effectively measures the distance between points in the vector space on which the differential equation is defined, and the negative logarithmic norm means that distances between points, as measured by the corresponding vector norm, are decreasing with time under the action of $f$. So long as the trajectories of all points in the phase space are bounded, all trajectories must therefore eventually converge to the same point. - -The autonomous convergence theorems by Russell Smith, Michael Li and James Muldowney work in a similar manner, but they rely on showing that the area of two-dimensional shapes in phase space decreases with time. This means that no periodic orbits can exist, as all closed loops must shrink to a point. If the system is bounded, then according to Pugh's closing lemma there can be no chaotic behaviour either, so all trajectories must eventually reach an equilibrium. - -Michael Li has also developed an extended autonomous convergence theorem which is applicable to dynamical systems containing an invariant manifold. diff --git a/wiki/wikipedia/3239.txt b/wiki/wikipedia/3239.txt deleted file mode 100644 index af82ab1b5450defd500e632af3c7506812a31882..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3239.txt +++ /dev/null @@ -1,9 +0,0 @@ -Bottema's theorem is a theorem in plane geometry by the Dutch mathematician Oene Bottema (Groningen, 1901–1992). - -The theorem can be stated as follows: in any given triangle $ABC$, construct squares on any two adjacent sides, for example $AC$ and $BC$.
The midpoint of the line segment that connects the vertices of the squares opposite the common vertex, $C$, of the two sides of the triangle is independent of the location of $C$. - -The theorem is true when the squares are constructed in one of the following ways: - -*Starting from the vertex $A$, follow the triangle vertices clockwise and construct the squares to the left of the sides of the triangle. - -*Follow the triangle in the same way and construct the squares to the right of the sides of the triangle. diff --git a/wiki/wikipedia/324.txt b/wiki/wikipedia/324.txt deleted file mode 100644 index 0a198e861d29e7f061beabb182668b21cf8025ac..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/324.txt +++ /dev/null @@ -1,64 +0,0 @@ -In mathematics, Schur's lemma is an elementary but extremely useful statement in representation theory of groups and algebras. In the group case it says that if M and N are two finite-dimensional irreducible representations - -of a group G and φ is a linear map from M to N that commutes with the action of the group, then either φ is invertible, or φ = 0. An important special case occurs when M = N and φ is a self-map; in particular, any element of the center of a group must act as a scalar operator (a scalar multiple of the identity) on M. The lemma is named after Issai Schur who used it to prove the Schur orthogonality relations and develop the basics of the representation theory of finite groups. Schur's lemma admits generalisations to Lie groups and Lie algebras, the most common of which is due to Jacques Dixmier. - -Representation theory is the study of homomorphisms from a group, G, into the general linear group GL(V) of a vector space V; i.e., into the group of automorphisms of V. (Let us here restrict ourselves to the case when the underlying field of V is $\mathbb{C}$, the field of complex numbers.) Such a homomorphism is called a representation of G on V. A representation on V is a special case of a group action on V, but rather than permit any arbitrary bijections (permutations) of the underlying set of V, we restrict ourselves to invertible linear transformations. - -Let ρ be a representation of G on V. It may be the case that V has a subspace, W, such that for every element g of G, the invertible linear map ρ(g) preserves or fixes W, so that (ρ(g))(w) is in W for every w in W, and (ρ(g))(v) is not in W for any v not in W. In other words, every linear map ρ(g): V→V is also an automorphism of W, ρ(g): W→W, when its domain is restricted to W. We say W is stable under G, or stable under the action of G. It is clear that if we consider W on its own as a vector space, then there is an obvious representation of G on W—the representation we get by restricting each map ρ(g) to W. When W has this property, we call W with the given representation a subrepresentation of V. Every representation of G has itself and the zero vector space as trivial subrepresentations. A representation of G with no non-trivial subrepresentations is called an irreducible representation. Irreducible representations – like the prime numbers, or like the simple groups in group theory – are the building blocks of representation theory. Many of the initial questions and theorems of representation theory deal with the properties of irreducible representations. - -As we are interested in homomorphisms between groups, or continuous maps between topological spaces, we are interested in certain functions between representations of G.
Let V and W be vector spaces, and let $\rho_V$ and $\rho_W$ be representations of G on V and W respectively. Then we define a G-linear map f from V to W to be a linear map from V to W that is equivariant under the action of G; that is, for every g in G, $ \rho_W(g) \circ f = f \circ \rho_V(g)$. In other words, we require that f commutes with the action of G. G-linear maps are the morphisms in the category of representations of G. - -Schur's Lemma is a theorem that describes what G-linear maps can exist between two irreducible representations of G. - -Theorem (Schur's Lemma): Let V and W be vector spaces; and let $\rho_V$ and $\rho_W$ be irreducible representations of G on V and W respectively. - -# If $V$ and $W$ are not isomorphic, then there are no nontrivial G-linear maps between them. - -# If $V=W$ is finite-dimensional over an algebraically closed field (e.g. $\mathbb{C}$), and if $\rho_V = \rho_W$, then the only nonzero G-linear maps are the scalar multiples of the identity. (A scalar multiple of the identity is sometimes called a homothety.) - -Proof: Suppose $f$ is a nonzero G-linear map from $V$ to $W$. We will prove that $V$ and $W$ are isomorphic. Let $V'$ be the kernel, or null space, of $f$ in $V$, the subspace of all $x$ in $V$ for which $f(x)=0$. (It is easy to check that this is a subspace.) By the assumption that $f$ is G-linear, for every $g$ in $G$ and every choice of $x$ in $V'$, $f((\rho_V(g))(x)) = (\rho_W(g))(f(x))=(\rho_W(g))(0) = 0$. But saying that $f(\rho_V(g)(x))=0$ is the same as saying that $\rho_V(g)(x)$ is in the null space of $f:V\rightarrow W$. So $V'$ is stable under the action of G; it is a subrepresentation. Since by assumption $V$ is irreducible, $V'$ must be zero; so $f$ is injective. - -By an identical argument we will show $f$ is also surjective; since $f((\rho_V(g))(x)) = (\rho_W(g))(f(x))$, we can conclude that for an arbitrary choice of $f(x)$ in the image of $f$, $\rho_W(g)$ sends $f(x)$ somewhere else in the image of $f$; namely, to the image of $\rho_V(g)(x)$ under $f$. So the image of $f$ is a subspace $W'$ of $W$ stable under the action of $G$, so it is a subrepresentation and $f$ must be zero or surjective. By assumption it is not zero, so it is surjective, in which case it is an isomorphism. - -In the event that $V=W$ is finite-dimensional over an algebraically closed field and they have the same representation, let $\lambda$ be an eigenvalue of $f$. (An eigenvalue exists for every linear transformation on a finite-dimensional vector space over an algebraically closed field.) Let $f' = f-\lambda I$. Then if $x$ is an eigenvector of $f$ corresponding to $\lambda$, we have $f'(x)=0$. It is clear that $f'$ is a G-linear map, because the sum or difference of G-linear maps is also G-linear. Then we return to the above argument, where we used the fact that a map was G-linear to conclude that the kernel is a subrepresentation, and is thus either zero or equal to all of $V$; because it is not zero (it contains $x$) it must be all of $V$, and so $f'$ is zero and $f = \lambda I$. - -If M and N are two simple modules over a ring R, then any homomorphism f: M → N of R-modules is either invertible or zero. In particular, the endomorphism ring of a simple module is a division ring. - -The condition that f is a module homomorphism means that -$$ - f(rm) = rf(m)\text{ for all }m \in M\text{ and }r \in R.
-$$ - -The group version is a special case of the module version, since any representation of a group G can equivalently be viewed as a module over the group ring of G. - -Schur's lemma is frequently applied in the following particular case. Suppose that R is an algebra over a field k and the vector space M = N is a simple module of R. Then Schur's lemma says that the endomorphism ring of the module M is a division algebra over k. If M is finite-dimensional, this division algebra is finite-dimensional. If k is the field of complex numbers, the only option is that this division algebra is the complex numbers. Thus the endomorphism ring of the module M is "as small as possible". In other words, the only linear transformations of M that commute with all transformations coming from R are scalar multiples of the identity. - -This holds more generally for any algebra $R$ over an uncountable algebraically closed field $k$ and for any simple module $M$ that is at most countably-dimensional: the only linear transformations of $M$ that commute with all transformations coming from $R$ are scalar multiples of the identity. - -When the field is not algebraically closed, the case where the endomorphism ring is as small as possible is still of particular interest. A simple module over a $k$-algebra is said to be absolutely simple if its endomorphism ring is isomorphic to $k$. This is in general stronger than being irreducible over the field $k$, and implies the module is irreducible even over the algebraic closure of $k$. - -We now describe Schur's lemma as it is usually stated in the context of representations of Lie groups and Lie algebras. There are three parts to the result. - -First, suppose that $V_1$ and $V_2$ are irreducible representations of a Lie group or Lie algebra over any field and that $\phi:V_1\rightarrow V_2$ is an intertwining map. Then $\phi$ is either zero or an isomorphism. - -Second, if $V$ is an irreducible representation of a Lie group or Lie algebra over an algebraically closed field and $\phi:V\rightarrow V$ is an intertwining map, then $\phi$ is a scalar multiple of the identity map. - -Third, suppose $V_1$ and $V_2$ are irreducible representations of a Lie group or Lie algebra over an algebraically closed field and $\phi_1, \phi_2:V_1\rightarrow V_2$ are nonzero intertwining maps. Then $\phi_1=\lambda\phi_2$ for some scalar $\lambda$. - -A simple corollary of the second statement is that every complex irreducible representation of an abelian group is one-dimensional. - -Suppose $\mathfrak{g}$ is a Lie algebra and $U(\mathfrak{g})$ is the universal enveloping algebra of $\mathfrak{g}$. Let $\pi:\mathfrak{g}\rightarrow\mathrm{End}(V)$ be an irreducible representation of $\mathfrak{g}$ over an algebraically closed field. The universal property of the universal enveloping algebra ensures that $\pi$ extends to a representation of $U(\mathfrak{g})$ acting on the same vector space. It follows from the second part of Schur's lemma that if $x$ belongs to the center of $U(\mathfrak{g})$, then $\pi(x)$ must be a multiple of the identity operator. In the case when $\mathfrak{g}$ is a complex semisimple Lie algebra, an important example of the preceding construction is the one in which $x$ is the (quadratic) Casimir element $C$. In this case, $\pi(C)=\lambda_\pi I$, where $\lambda_\pi$ is a constant that can be computed explicitly in terms of the highest weight of $\pi$. 
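The "acts as a scalar" consequence of Schur's lemma is easy to observe numerically for a finite group. The following sketch (hypothetical code, not from the article) computes the commutant of the standard two-dimensional irreducible representation of $S_3$, with one convenient choice of generator matrices, and finds it is one-dimensional and spanned by the identity:

```python
import numpy as np

# Generators of the 2-dimensional irreducible representation of S3:
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
A = np.array([[c, -s], [s, c]])            # a 3-cycle acts as a rotation by 120 degrees
B = np.array([[1.0, 0.0], [0.0, -1.0]])    # a transposition acts as a reflection

# Matrix of the linear map X |-> (XA - AX, XB - BX), acting on vec(X).
cols = []
for i in range(2):
    for j in range(2):
        E = np.zeros((2, 2))
        E[i, j] = 1.0
        cols.append(np.concatenate([(E @ A - A @ E).ravel(),
                                    (E @ B - B @ E).ravel()]))
M = np.column_stack(cols)

# The null space of M is exactly the commutant of the representation.
_, sing, Vt = np.linalg.svd(M)
nullity = int(np.sum(sing < 1e-12))
print("dimension of commutant:", nullity)   # 1, as Schur's lemma predicts
X = Vt[-1].reshape(2, 2)
print(np.round(X / X[0, 0], 6))             # a scalar multiple of the identity
```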
The action of the Casimir element plays an important role in the proof of complete reducibility for finite-dimensional representations of semisimple Lie algebras. - -See also Schur complement. - -The one module version of Schur's lemma admits generalizations involving modules M that are not necessarily simple. They express relations between the module-theoretic properties of M and the properties of the endomorphism ring of M. - -A module is said to be strongly indecomposable if its endomorphism ring is a local ring. For the important class of modules of finite length, the following properties are equivalent: - -* A module M is indecomposable; - -* M is strongly indecomposable; - -* Every endomorphism of M is either nilpotent or invertible. - -In general, Schur's lemma cannot be reversed: there exist modules that are not simple, yet their endomorphism algebra is a division ring. Such modules are necessarily indecomposable, and so cannot exist over semi-simple rings such as the complex group ring of a finite group. However, even over the ring of integers, the module of rational numbers has an endomorphism ring that is a division ring, specifically the field of rational numbers. Even for group rings, there are examples when the characteristic of the field divides the order of the group: the Jacobson radical of the projective cover of the one-dimensional representation of the alternating group $A_5$ over the finite field with three elements $F_3$ has $F_3$ as its endomorphism ring. diff --git a/wiki/wikipedia/3240.txt b/wiki/wikipedia/3240.txt deleted file mode 100644 index 5f4c8b5aec214524b0da7e75fc7e055326b221ca..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3240.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, Esenin-Volpin's theorem states that the weight of an infinite compact dyadic space is the supremum of the weights of its points. - -It was introduced by Esenin-Volpin, and was later generalized. diff --git a/wiki/wikipedia/3241.txt b/wiki/wikipedia/3241.txt deleted file mode 100644 index e55d2e17d5e041cfa8c5fe8a4360bf1f2abe6cf5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3241.txt +++ /dev/null @@ -1,17 +0,0 @@ -In mathematics, the nine lemma (or 3×3 lemma) is a statement about commutative diagrams and exact sequences valid in the category of groups and any abelian category. It states: if the diagram shown below is a commutative diagram and all columns as well as the two bottom rows are exact, then the top row is exact as well. Likewise, if all columns as well as the two top rows are exact, then the bottom row is exact as well. Similarly, because the diagram is symmetric about its diagonal, rows and columns may be interchanged in the above as well. - -The nine lemma can be proved by direct diagram chasing, or by applying the snake lemma (to the two bottom rows in the first case, and to the two top rows in the second case). - -Linderholm (p. 201) offers a satirical view of the nine lemma: - -"Draw a noughts-and-crosses board... Do not fill it in with noughts and crosses... Instead, use curved arrows... Wave your hands about in complicated patterns over this board. Make some noughts, but not in the squares; put them at both ends of the horizontal and vertical lines. Make faces. You have now proved: - -(a) the Nine Lemma - -(b) the Sixteen Lemma - -(c) the Twenty-five Lemma..." - -There are two variants of the nine lemma: the sharp nine lemma and the symmetric nine lemma (see Lemmas 3.3 and 3.4 in Chapter XII of ).
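The commutative diagram the nine lemma refers to did not survive in this extract. For reference, a 3×3 diagram of the kind described (with hypothetical object names $A_i$, $B_i$, $C_i$; the three columns, and two of the three rows, are assumed exact) can be typeset as:

```latex
\begin{array}{ccccccccc}
  &     & 0          &     & 0          &     & 0          &     &   \\
  &     & \downarrow &     & \downarrow &     & \downarrow &     &   \\
0 & \to & A_1        & \to & B_1        & \to & C_1        & \to & 0 \\
  &     & \downarrow &     & \downarrow &     & \downarrow &     &   \\
0 & \to & A_2        & \to & B_2        & \to & C_2        & \to & 0 \\
  &     & \downarrow &     & \downarrow &     & \downarrow &     &   \\
0 & \to & A_3        & \to & B_3        & \to & C_3        & \to & 0 \\
  &     & \downarrow &     & \downarrow &     & \downarrow &     &   \\
  &     & 0          &     & 0          &     & 0          &     &
\end{array}
```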
diff --git a/wiki/wikipedia/3242.txt b/wiki/wikipedia/3242.txt deleted file mode 100644 index b303a3222c8d1694c00de1a71bc3702588409eaf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3242.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematical finite group theory, the Baer–Suzuki theorem, proved by Baer and Suzuki, states that if any two elements of a conjugacy class C of a finite group generate a nilpotent subgroup, then all elements of the conjugacy class C are contained in a nilpotent subgroup. Alperin gave a short elementary proof. diff --git a/wiki/wikipedia/3243.txt b/wiki/wikipedia/3243.txt deleted file mode 100644 index 7c463a9008c590508e5399deee535ea7535b6c43..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3243.txt +++ /dev/null @@ -1,11 +0,0 @@ -The Hamiltonian completion problem is to find the minimal number of edges to add to a graph to make it Hamiltonian. - -The problem is clearly NP-hard in the general case (since its solution gives an answer to the NP-complete problem of determining whether a given graph has a Hamiltonian cycle). The associated decision problem of determining whether K edges can be added to a given graph to produce a Hamiltonian graph is NP-complete. - -Moreover, Hamiltonian completion is hard to approximate: it is unlikely that efficient constant ratio approximation algorithms exist for this problem. - -The problem may be solved in polynomial time for certain classes of graphs, including series–parallel graphs and their generalizations, which include outerplanar graphs, as well as for a line graph of a tree or a cactus graph. - -Gamarnik et al. use a linear time algorithm for solving the problem on trees to study the asymptotic number of edges that must be added for sparse random graphs to make them Hamiltonian. diff --git a/wiki/wikipedia/3244.txt b/wiki/wikipedia/3244.txt deleted file mode 100644 index 1e2bbbadf92c385d73100489223873755ed84d35..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3244.txt +++ /dev/null @@ -1,102 +0,0 @@ -In mathematics, the Veblen functions are a hierarchy of normal functions (continuous strictly increasing functions from ordinals to ordinals), introduced by Oswald Veblen in 1908. If $\varphi_0$ is any normal function, then for any non-zero ordinal α, $\varphi_\alpha$ is the function enumerating the common fixed points of $\varphi_\beta$ for β<α. These functions are all normal. - -In the special case when $\varphi_0(\alpha)=\omega^\alpha$, this family of functions is known as the Veblen hierarchy. - -The function $\varphi_1$ is the same as the ε function: $\varphi_1(\alpha)= \varepsilon_\alpha$. If $\alpha < \beta ,$ then $\varphi_{\alpha}(\varphi_{\beta}(\gamma)) = \varphi_{\beta}(\gamma) .$ From this and the fact that $\varphi_\beta$ is strictly increasing we get the ordering: $\varphi_\alpha(\beta) < \varphi_\gamma(\delta) $ if and only if either ($\alpha = \gamma $ and $\beta < \delta $) or ($\alpha < \gamma $ and $\beta < \varphi_\gamma(\delta) $) or ($\alpha > \gamma $ and $\varphi_\alpha(\beta) < \delta $). - -The fundamental sequence for an ordinal with cofinality ω is a distinguished strictly increasing ω-sequence which has the ordinal as its limit. If one has fundamental sequences for α and all smaller limit ordinals, then one can create an explicit constructive bijection between ω and α (i.e. one not using the axiom of choice). Here we will describe fundamental sequences for the Veblen hierarchy of ordinals. The image of n under the fundamental sequence for α will be indicated by α[n].
- -A variation of Cantor normal form used in connection with the Veblen hierarchy is the following: every nonzero ordinal number α can be uniquely written as $\alpha = \varphi_{\beta_1}(\gamma_1) + \varphi_{\beta_2}(\gamma_2) + \cdots + \varphi_{\beta_k}(\gamma_k)$, where k>0 is a natural number and each term after the first is less than or equal to the previous term, $\varphi_{\beta_m}(\gamma_m) \geq \varphi_{\beta_{m+1}}(\gamma_{m+1}) ,$ and each $\gamma_m < \varphi_{\beta_m}(\gamma_m) .$ If a fundamental sequence can be provided for the last term, then that term can be replaced by such a sequence to get $\alpha [n] = \varphi_{\beta_1}(\gamma_1) + \cdots + \varphi_{\beta_{k-1}}(\gamma_{k-1}) + (\varphi_{\beta_k}(\gamma_k) [n]) .$ - -For any β, if γ is a limit with $\gamma < \varphi_{\beta} (\gamma) ,$ then let $\varphi_{\beta}(\gamma) [n] = \varphi_{\beta}(\gamma [n]) .$ - -No such sequence can be provided for $\varphi_0(0) = \omega^0 = 1$ because it does not have cofinality ω. - -For $\varphi_0(\gamma+1) = \omega ^{\gamma+1} = \omega^ \gamma \cdot \omega ,$ we choose $\varphi_0(\gamma+1) [n] = \varphi_0(\gamma) \cdot n = \omega^{\gamma} \cdot n .$ - -For $\varphi_{\beta+1}(0) ,$ we use $\varphi_{\beta+1}(0) [0] = 0 $ and $\varphi_{\beta+1}(0) [n+1] = \varphi_{\beta}(\varphi_{\beta+1}(0) [n]) ,$ i.e. 0, $\varphi_{\beta}(0)$, $\varphi_{\beta}(\varphi_{\beta}(0))$, etc. For example, with $\beta = 0$ this rule gives the fundamental sequence $0,\ 1,\ \omega,\ \omega^\omega,\ \ldots$ for $\varphi_1(0) = \varepsilon_0$. - -For $\varphi_{\beta+1}(\gamma+1)$, we use $\varphi_{\beta+1}(\gamma+1) [0] = \varphi_{\beta+1}(\gamma)+1 $ and $\varphi_{\beta+1}(\gamma+1) [n+1] = \varphi_{\beta} (\varphi_{\beta+1}(\gamma+1) [n]) .$ - -Now suppose that β is a limit: - -If $\beta < \varphi_{\beta}(0)$, then let $\varphi_{\beta}(0) [n] = \varphi_{\beta [n]}(0) .$ - -For $\varphi_{\beta}(\gamma+1)$, use $\varphi_{\beta}(\gamma+1) [n] = \varphi_{\beta [n]}(\varphi_{\beta}(\gamma)+1) .$ - -Otherwise, the ordinal cannot be described in terms of smaller ordinals using $\varphi$ and this scheme does not apply to it. - -The function Γ enumerates the ordinals α such that $\varphi_\alpha(0) = \alpha$. - -$\Gamma_0$ is the Feferman–Schütte ordinal, i.e. it is the smallest α such that $\varphi_\alpha(0) = \alpha$. - -For $\Gamma_0$, a fundamental sequence could be chosen to be $\Gamma_0 [0] = 0 $ and $\Gamma_0 [n+1] = \varphi_{\Gamma_0 [n]} (0) .$ - -For $\Gamma_{\beta+1}$, let $\Gamma_{\beta+1} [0] = \Gamma_{\beta} + 1 $ and $\Gamma_{\beta+1} [n+1] = \varphi_{\Gamma_{\beta+1} [n]} (0) .$ - -For $\Gamma_\beta$ where $\beta < \Gamma_{\beta} $ is a limit, let $\Gamma_{\beta} [n] = \Gamma_{\beta [n]} .$ - -To build the Veblen function of a finite number of arguments (finitary Veblen function), let the binary function $\varphi(\alpha, \gamma)$ be $\varphi_\alpha(\gamma)$ as defined above. - -Let $z$ be an empty string or a string consisting of one or more comma-separated zeros $0,0,...,0$ and $s$ be an empty string or a string consisting of one or more comma-separated ordinals $\alpha _{1},\alpha _{2},...,\alpha _{n}$ with $\alpha _{1}>0$. The binary function $\varphi (\beta ,\gamma )$ can be written as $\varphi (s,\beta ,z,\gamma )$ where both $s$ and $z$ are empty strings.
- -The finitary Veblen functions are defined as follows: - -* $\varphi (\gamma )=\omega ^{\gamma }$ - -* $\varphi (z,s,\gamma )=\varphi (s,\gamma )$ - -* if $\beta >0$, then $\varphi (s,\beta ,z,\gamma )$ denotes the $(1+\gamma )$-th common fixed point of the functions $\xi \mapsto \varphi (s,\delta ,\xi ,z)$ for each $\delta <\beta$ - -For example, $\varphi(1,0,\gamma)$ is the $(1+\gamma)$-th fixed point of the functions $\xi\mapsto\varphi(\xi,0)$, namely $\Gamma_\gamma$; then $\varphi(1,1,\gamma)$ enumerates the fixed points of that function, i.e., of the $\xi\mapsto\Gamma_\xi$ function; and $\varphi(2,0,\gamma)$ enumerates the fixed points of all the $\xi\mapsto\varphi(1,\xi,0)$. Each instance of the generalized Veblen functions is continuous in the last nonzero variable (i.e., if one variable is made to vary and all later variables are kept constantly equal to zero). - -The ordinal $\varphi(1,0,0,0)$ is sometimes known as the Ackermann ordinal. The limit of the $\varphi(1,0,...,0)$, where the number of zeroes ranges over ω, is sometimes known as the "small" Veblen ordinal. - -Every non-zero ordinal $\alpha$ less than the small Veblen ordinal (SVO) can be uniquely written in normal form for the finitary Veblen function: -$$ -\alpha =\varphi (s_{1})+\varphi (s_{2})+\cdots +\varphi (s_{k}) -$$ - -where - -* $k$ is a positive integer - -* $\varphi (s_{1})\geq \varphi (s_{2})\geq \cdots \geq \varphi (s_{k})$ - -* $s_{m}$ is a string consisting of one or more comma-separated ordinals $\alpha _{m,1},\alpha _{m,2},...,\alpha _{m,n_{m}}$ where $\alpha _{m,1}>0$ and each $\alpha _{m,i}<\varphi (s_{m})$ - -For limit ordinals $\alpha$ written in this normal form, fundamental sequences can be chosen as follows: - -* $\varphi(\gamma)[n]=\begin{cases} n & \text{if } \gamma=1 \\ \varphi(\gamma-1)\cdot n & \text{if } \gamma \text{ is a successor ordinal} \\ \varphi(\gamma[n]) & \text{if } \gamma \text{ is a limit ordinal} \end{cases}$ - -* $\varphi(s,\beta,z,\gamma)[0]=0$ and $\varphi(s,\beta,z,\gamma)[n+1]=\varphi(s,\beta-1,\varphi(s,\beta,z,\gamma)[n],z)$ if $\gamma=0$ and $\beta$ is a successor ordinal, - -* $\varphi(s,\beta,z,\gamma)[0]=\varphi(s,\beta,z,\gamma-1)+1$ and $\varphi(s,\beta,z,\gamma)[n+1]=\varphi(s,\beta-1,\varphi(s,\beta,z,\gamma)[n],z)$ if $\gamma$ and $\beta$ are successor ordinals, - -* $\varphi(s,\beta,z,\gamma)[n]=\varphi(s,\beta,z,\gamma[n])$ if $\gamma$ is a limit ordinal, - -* $\varphi(s,\beta,z,\gamma)[n]=\varphi(s,\beta[n],z,\gamma)$ if $\gamma=0$ and $\beta$ is a limit ordinal, - -* $\varphi(s,\beta,z,\gamma)[n]=\varphi(s,\beta[n],\varphi(s,\beta,z,\gamma-1)+1,z)$ if $\gamma$ is a successor ordinal and $\beta$ is a limit ordinal. - -More generally, Veblen showed that φ can be defined even for a transfinite sequence of ordinals $\alpha_\beta$, provided that all but a finite number of them are zero. Notice that if such a sequence of ordinals is chosen from those less than an uncountable regular cardinal κ, then the sequence may be encoded as a single ordinal less than $\kappa^\kappa$. So one is defining a function φ from $\kappa^\kappa$ into κ. - -The definition can be given as follows: let α be a transfinite sequence of ordinals (i.e., an ordinal function with finite support) which ends in zero (i.e., such that $\alpha_0=0$), and let α[0↦γ] denote the same function where the final 0 has been replaced by γ.
Then γ↦φ(α[0↦γ]) is defined as the function enumerating the common fixed points of all functions ξ↦φ(β) where β ranges over all sequences which are obtained by decreasing the smallest-indexed nonzero value of α and replacing some smaller-indexed value with the indeterminate ξ (i.e., $\beta=\alpha[0\mapsto\zeta,\iota\mapsto\xi]$, meaning that for the smallest index $\iota_0$ such that $\alpha_{\iota_0}$ is nonzero the latter has been replaced by some value $\zeta<\alpha_{\iota_0}$, and that for some smaller index $\iota<\iota_0$, the value $\alpha_\iota=0$ has been replaced with ξ). - -For example, if α=(ω↦1) denotes the transfinite sequence with value 1 at ω and 0 everywhere else, then φ(ω↦1) is the smallest fixed point of all the functions ξ↦φ(ξ,0,...,0) with finitely many final zeroes (it is also the limit of the φ(1,0,...,0) with finitely many zeroes, the small Veblen ordinal). - -The smallest ordinal α such that α is greater than φ applied to any function with support in α (i.e., which cannot be reached "from below" using the Veblen function of transfinitely many variables) is sometimes known as the "large" Veblen ordinal. diff --git a/wiki/wikipedia/3245.txt b/wiki/wikipedia/3245.txt deleted file mode 100644 index 07fcde5a95da29e16ed2be651663fb4636d8041d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3245.txt +++ /dev/null @@ -1,55 +0,0 @@ -The Ryu–Takayanagi conjecture is a conjecture within holography that posits a quantitative relationship between the entanglement entropy of a conformal field theory and the geometry of an associated anti-de Sitter spacetime. The formula characterizes "holographic screens" in the bulk; that is, it specifies which regions of the bulk geometry are "responsible to particular information in the dual CFT". The conjecture is named after Shinsei Ryu and Tadashi Takayanagi, who jointly published the result in 2006. As a result, the authors were awarded the 2015 New Horizons in Physics Prize for "fundamental ideas about entropy in quantum field theory and quantum gravity". The formula was generalized to a covariant form in 2007. - -The thermodynamics of black holes suggests certain relationships between the entropy of black holes and their geometry. Specifically, the Bekenstein–Hawking area formula conjectures that the entropy of a black hole is proportional to its surface area: -$$ -S_\text{BH} = \frac{k_\text{B} A}{4\ell_\text{P}^2} -$$ - -The Bekenstein–Hawking entropy $S_\text{BH}$ is a measure of the information lost to external observers due to the presence of the horizon. The horizon of the black hole acts as a "screen" distinguishing one region of the spacetime (in this case the exterior of the black hole) that is not affected by another region (in this case the interior). The Bekenstein–Hawking area law states that the area of this surface is proportional to the entropy of the information lost behind it. - -The Bekenstein–Hawking entropy is a statement about the gravitational entropy of a system; however, there is another type of entropy that is important in quantum information theory, namely the entanglement (or von Neumann) entropy. This form of entropy provides a measure of how far from a pure state a given quantum state is, or, equivalently, how entangled it is. The entanglement entropy is a useful concept in many areas, such as in condensed matter physics and quantum many-body systems. Given its use, and its suggestive similarity to the Bekenstein–Hawking entropy, it is desirable to have a holographic description of entanglement entropy in terms of gravity.
- -The holographic principle states that gravitational theories in a given dimension are dual to a gauge theory in one lower dimension. The AdS/CFT correspondence is one example of such duality. Here, the field theory is defined on a fixed background and is equivalent to a quantum gravitational theory whose different states each correspond to a possible spacetime geometry. The conformal field theory is often viewed as living on the boundary of the higher dimensional space whose gravitational theory it defines. The result of such a duality is a dictionary between the two equivalent descriptions. For example, in a CFT defined on $d$-dimensional Minkowski space the vacuum state corresponds to pure AdS space, whereas the thermal state corresponds to a planar black hole. Important for the present discussion is that the thermal state of a CFT defined on the $d$-dimensional sphere corresponds to the $(d+1)$-dimensional Schwarzschild black hole in AdS space. - -The Bekenstein–Hawking area law, while claiming that the area of the black hole horizon is proportional to the black hole's entropy, fails to provide a sufficient microscopic description of how this entropy arises. The holographic principle provides such a description by relating the black hole system to a quantum system which does admit such a microscopic description. In this case, the CFT has discrete eigenstates and the thermal state is the canonical ensemble of these states. The entropy of this ensemble can be calculated through normal means, and yields the same result as predicted by the area law. This turns out to be a special case of the Ryu–Takayanagi conjecture. - -Consider a spatial slice $ \Sigma $ of an AdS space time on whose boundary we define the dual CFT. The Ryu–Takayanagi formula states: -$$ -S_A = \frac{\text{Area of } \gamma_A}{4G} -$$ - -where $ S_A$ is the entanglement entropy of the CFT in some spatial sub-region $ A \subset \partial \Sigma$ with its complement $B$, and $\gamma_A$ is the Ryu–Takayanagi surface in the bulk. This surface must satisfy three properties: - -# $ \gamma_A $ has the same boundary as $ A $. - -# $ \gamma_A $ is homologous to A. - -# $ \gamma_A $ extremizes the area. If there are multiple extremal surfaces, $ \gamma_A $ is the one with the least area. - -Because of property (3), this surface is typically called the minimal surface when the context is clear. Furthermore, property (1) ensures that the formula preserves certain features of entanglement entropy, such as $ S_A = S_B $ and $ S_{A_1} + S_{A_2} \geq S_{A_1 \cup A_2} $. The conjecture provides an explicit geometric interpretation of the entanglement entropy of the boundary CFT, namely as the area of a surface in the bulk. - -In their original paper, Ryu and Takayanagi show this result explicitly for an example in $ \text{AdS}_3 / \text{CFT}_2 $ where an expression for the entanglement entropy is already known. For an $ \text{AdS}_3 $ space of radius $ R $, the dual CFT has a central charge given by -$$ -c = \frac{3R}{2G} -$$ - -Furthermore, $ \text{AdS}_3 $ has the metric -$$ -ds^2 = R^2(-\cosh^2\rho\, dt^2 + d\rho^2 + \sinh^2\rho\, d\theta^2) -$$ - -in coordinates $(t, \rho, \theta) $ (essentially a stack of hyperbolic disks). Since this metric diverges at $ \rho \to \infty $, $ \rho $ is restricted to $ \rho \leq \rho_0 $. This act of imposing a maximum $ \rho $ is analogous to the corresponding CFT having a UV cutoff.
If $ L $ is the length of the CFT system, in this case the circumference of the cylinder calculated with the appropriate metric, and $ a $ is the lattice spacing, we have -$$ - e^{\rho_0} \sim L/a -$$. - -In this case, the boundary CFT lives at coordinates $(t, \rho_0, \theta) = (t, \theta) $. Consider a fixed $ t $ slice and take the subregion A of the boundary to be $ \theta \in [0, 2\pi l / L]$ where $ l $ is the length of $ A $. The minimal surface is easy to identify in this case, as it is just the geodesic through the bulk that connects $ \theta = 0 $ and $ \theta = 2 \pi l/L$. Remembering the lattice cutoff, the length of the geodesic can be calculated as -$$ - \cosh{(L_{\gamma_A} / R)} = 1 + 2\sinh^2 \rho_0 \sin^2 \frac{\pi l}{L} -$$ - -If it is assumed that $ e^{\rho_0} \gg 1$, then the Ryu–Takayanagi formula can be used to compute the entanglement entropy. Plugging in the length of the minimal surface calculated above and recalling the central charge $c = 3R/2G$, the entanglement entropy is given by -$$ - S_A = \frac{R}{4G}\log{(e^{2\rho_0} \sin^2 \frac{\pi l}{L})} =\frac{c}{3} \log{(e^{\rho_0} \sin{\frac{\pi l}{L}} )} -$$ - -This agrees with the result calculated by usual means. diff --git a/wiki/wikipedia/3246.txt b/wiki/wikipedia/3246.txt deleted file mode 100644 index 4a1e58eeb3f0d3b3c7c0d6e8894ce7d99a997494..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3246.txt +++ /dev/null @@ -1,278 +0,0 @@ -In mathematics, a geometric progression, also known as a geometric sequence, is a sequence of non-zero numbers where each term after the first is found by multiplying the previous one by a fixed, non-zero number called the common ratio. For example, the sequence 2, 6, 18, 54, ... is a geometric progression with common ratio 3. Similarly 10, 5, 2.5, 1.25, ... is a geometric sequence with common ratio 1/2. - -Examples of a geometric sequence are powers $r^k$ of a fixed non-zero number r, such as $2^k$ and $3^k$. The general form of a geometric sequence is -$$ -a,\ ar,\ ar^2,\ ar^3,\ ar^4,\ \ldots -$$ - -where r ≠ 0 is the common ratio and a ≠ 0 is a scale factor, equal to the sequence's start value. - -The distinction between a progression and a series is that a progression is a sequence, whereas a series is a sum. - -The n-th term of a geometric sequence with initial value $a = a_1$ and common ratio r is given by -$$ -a_n = ar^{n-1}. -$$ - -Such a geometric sequence also follows the recursive relation -$$ -a_n = ra_{n-1} -$$ for every integer $n\geq 2.$ - -Generally, to check whether a given sequence is geometric, one simply checks whether successive entries in the sequence all have the same ratio. - -The common ratio of a geometric sequence may be negative, resulting in an alternating sequence, with numbers alternating between positive and negative. For instance - -1, −3, 9, −27, 81, −243, ... - -is a geometric sequence with common ratio −3. - -The behaviour of a geometric sequence depends on the value of the common ratio.
    - -If the common ratio is: - -* positive, the terms will all be the same sign as the initial term. - -* negative, the terms will alternate between positive and negative. - -* greater than 1, there will be exponential growth towards positive or negative infinity (depending on the sign of the initial term). - -* 1, the progression is a constant sequence. - -* between −1 and 1 but not zero, there will be exponential decay towards zero (→ 0). - -* −1, the absolute value of each term in the sequence is constant and terms alternate in sign. - -* less than −1, for the absolute values there is exponential growth towards (unsigned) infinity, due to the alternating sign. - -Geometric sequences (with common ratio not equal to −1, 1 or 0) show exponential growth or exponential decay, as opposed to the linear growth (or decline) of an arithmetic progression such as 4, 15, 26, 37, 48, … (with common difference 11). This result was taken by T.R. Malthus as the mathematical foundation of his Principle of Population. - -Note that the two kinds of progression are related: exponentiating each term of an arithmetic progression yields a geometric progression, while taking the logarithm of each term in a geometric progression with a positive common ratio yields an arithmetic progression. - -An interesting result of the definition of the geometric progression is that any three consecutive terms a, b and c will satisfy the following equation: -$$ -b^2=ac -$$ - -where b is considered to be the geometric mean between a and c. - -==Geometric series== - -
    - -
    - -
- -Computation of the sum 2 + 10 + 50 + 250. The sequence is multiplied term by term by 5, and then subtracted from the original sequence. Two terms remain: the first term, a, and the term one beyond the last, or $ar^n$. The desired result, 312, is found by subtracting these two terms and dividing by 1 - 5. -
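A quick numeric check of the computation described in the caption above (a hypothetical snippet, not from the article):

```python
# Sum 2 + 10 + 50 + 250 via the closed form a(1 - r^n)/(1 - r).
a, r, n = 2, 5, 4
print(sum(a * r**k for k in range(n)))   # 312, term by term
print(a * (1 - r**n) // (1 - r))         # 312, closed form: 2(1-625)/(1-5)
```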
    - -
    - -
- -A geometric series is the sum of the numbers in a geometric progression. For example: -$$ -2 + 10 + 50 + 250 = 2 + 2 \times 5 + 2 \times 5^2 + 2 \times 5^3. -$$ - -Letting a be the first term (here 2), n be the number of terms (here 4), and r be the constant that each term is multiplied by to get the next term (here 5), the sum is given by: -$$ -\frac{a(1-r^n)}{1-r} -$$ - -In the example above, this gives: -$$ -2 + 10 + 50 + 250 = \frac{2(1-5^4)}{1-5} = \frac{-1248}{-4} = 312. -$$ - -The formula works for any real numbers a and r (except r = 1, which results in a division by zero). For example: -$$ --2\pi + 4\pi^2 - 8\pi^3 = -2\pi + (-2\pi)^2 + (-2\pi)^3 = \frac{-2\pi(1 - (-2\pi)^3)}{1-(-2\pi)} = \frac{-2\pi(1 + 8\pi^3)}{1+2\pi} \approx -214.855. -$$ - -Since the derivation (below) does not depend on a and r being real, it holds for complex numbers as well. - -To derive this formula, first write a general geometric series as: -$$ -\sum_{k=1}^{n} ar^{k-1} = ar^0+ar^1+ar^2+ar^3+\cdots+ar^{n-1}. -$$ - -We can find a simpler formula for this sum by multiplying both sides - -of the above equation by 1 − r, and we'll see that - -\begin{align} - -(1-r) \sum_{k=1}^{n} ar^{k-1} & = (1-r)(ar^0 + ar^1+ar^2+ar^3+\cdots+ar^{n-1}) \\ - -& = ar^0 + ar^1+ar^2+ar^3+\cdots+ar^{n-1} - ar^1-ar^2-ar^3-\cdots-ar^{n-1} - ar^n \\ - -& = a - ar^n - -\end{align} - -since all the other terms cancel. If r ≠ 1, we can rearrange the above to get the convenient formula for a geometric series that computes the sum of n terms: -$$ -\sum_{k=1}^{n} ar^{k-1} = \frac{a(1-r^n)}{1-r}. -$$ - -If one were to begin the sum not from k=1, but from a different value, say m, then -$$ -\sum_{k=m}^n ar^k=\frac{a(r^m-r^{n+1})}{1-r}, -$$ - -provided $r \neq 1$. If $r=1$ then the sum is of just the constant $a$ and so equals $a(n-m+1)$. - -Differentiating this formula with respect to r allows us to arrive at formulae for sums of the form -$$ -G_s(n, r) := \sum_{k=0}^n k^s r^k. -$$ - -For example: - -\frac{d}{dr}\sum_{k=0}^nr^k = \sum_{k=1}^n kr^{k-1}= - -\frac{1-r^{n+1}}{(1-r)^2}-\frac{(n+1)r^n}{1-r}. - -For a geometric series containing only even powers of r multiply by 1 - r2 : -$$ -(1-r^2) \sum_{k=0}^{n} ar^{2k} = a-ar^{2n+2}. -$$ - -Then -$$ -\sum_{k=0}^{n} ar^{2k} = \frac{a(1-r^{2n+2})}{1-r^2}. -$$ - -Equivalently, take r2 as the common ratio and use the standard formulation. - -For a series with only odd powers of r -$$ -(1-r^2) \sum_{k=0}^{n} ar^{2k+1} = ar-ar^{2n+3} -$$ - -and -$$ -\sum_{k=0}^{n} ar^{2k+1} = \frac{ar(1-r^{2n+2})}{1-r^2}. -$$ - -An exact formula for the generalized sum $G_s(n, r)$ when $s \in \mathbb{N}$ is expanded by the Stirling numbers of the second kind as -$$ -G_s(n, r) = \sum_{j=0}^s \left\lbrace{s \atop j}\right\rbrace r^j \frac{d^j}{dr^j}\left[\frac{1-r^{n+1}}{1-r}\right]. -$$ - -An infinite geometric series is an infinite series whose successive terms have a common ratio. Such a series converges if and only if the absolute value of the common ratio is less than one (|r| < 1). Its value can then be computed from the finite sum formula -$$ -\sum_{k=0}^\infty ar^k = \lim_{n\to\infty}{\sum_{k=0}^{n} ar^k} = \lim_{n\to\infty}\frac{a(1-r^{n+1})}{1-r}= \frac{a}{1-r} - \lim_{n\to\infty}{\frac{ar^{n+1}}{1-r}} -$$ - -Since: -$$ - r^{n+1} \to 0 \mbox{ as } n \to \infty \mbox{ when } |r| < 1.
-$$ - -Then: -$$ -\sum_{k=0}^\infty ar^k = \frac{a}{1-r} - 0 = \frac{a}{1-r} -$$ - -For a series containing only even powers of $r$, -$$ -\sum_{k=0}^\infty ar^{2k} = \frac{a}{1-r^2} -$$ - -and for odd powers only, -$$ -\sum_{k=0}^\infty ar^{2k+1} = \frac{ar}{1-r^2} -$$ - -In cases where the sum does not start at k = 0, -$$ -\sum_{k=m}^\infty ar^k=\frac{ar^m}{1-r} -$$ - -The formulae given above are valid only for |r| < 1. The latter formula is valid in every Banach algebra, as long as the norm of r is less than one, and also in the field of p-adic numbers if |r|p < 1. As in the case for a finite sum, we can differentiate to calculate formulae for related sums. - -For example, - -\frac{d}{dr}\sum_{k=0}^\infty r^k = \sum_{k=1}^\infty kr^{k-1}= - -\frac{1}{(1-r)^2} - -This formula only works for |r| < 1 as well. From this, it follows that, for |r| < 1, -$$ -\sum_{k=0}^{\infty} k r^k = \frac{r}{\left(1-r\right)^2} ; \sum_{k=0}^{\infty} k^2 r^k = \frac{r \left( 1+r \right)}{\left(1-r\right)^3} ; \sum_{k=0}^{\infty} k^3 r^k = \frac{r \left( 1+4 r + r^2\right)}{\left( 1-r\right)^4} -$$ - -Also, the infinite series 1/2 + 1/4 + 1/8 + 1/16 + ⋯ is an elementary example of a series that converges absolutely. - -It is a geometric series whose first term is 1/2 and whose common ratio is 1/2, so its sum is -$$ -\frac12+\frac14+\frac18+\frac{1}{16}+\cdots=\frac{1/2}{1-(+1/2)} = 1. -$$ - -The inverse of the above series is 1/2 − 1/4 + 1/8 − 1/16 + ⋯ is a simple example of an alternating series that converges absolutely. - -It is a geometric series whose first term is 1/2 and whose common ratio is −1/2, so its sum is -$$ -\frac12-\frac14+\frac18-\frac{1}{16}+\cdots=\frac{1/2}{1-(-1/2)} = \frac13. -$$ - -The summation formula for geometric series remains valid even when the common ratio is a complex number. In this case the condition that the absolute value of r be less than 1 becomes that the modulus of r be less than 1. It is possible to calculate the sums of some non-obvious geometric series. For example, consider the proposition -$$ - \sum_{k=0}^{\infty} \frac{\sin(kx)}{r^k} = \frac{r \sin(x)}{1 + r^2 - 2 r \cos(x)} -$$ - -The proof of this comes from the fact that -$$ -\sin(kx) = \frac{e^{ikx} - e^{-ikx}}{2i} , -$$ - -which is a consequence of Euler's formula. Substituting this into the original series gives -$$ - \sum_{k=0}^{\infty} \frac{\sin(kx)}{r^k} = \frac{1}{2 i} \left[ \sum_{k=0}^{\infty} \left( \frac{e^{ix}}{r} \right)^k - \sum_{k=0}^{\infty} \left(\frac{e^{-ix}}{r}\right)^k\right] -$$. - -This is the difference of two geometric series, and so it is a straightforward application of the formula for infinite geometric series that completes the proof. - -The product of a geometric progression is the product of all terms. It can be quickly computed by taking the geometric mean of the progression's first and last individual terms, and raising that mean to the power given by the number of terms. (This is very similar to the formula for the sum of terms of an arithmetic sequence: take the arithmetic mean of the first and last individual terms, and multiply by the number of terms.) - -As the geometric mean of two numbers equals the square root of their product, the product of a geometric progression is: -$$ -\prod_{i=0}^{n} ar^i = (\sqrt{a \cdot ar^n})^{n+1} = (\sqrt{a^{2}r^n})^{n+1} -$$. 
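The product formula just stated is easy to sanity-check numerically. A hypothetical snippet (not from the article; `math.prod` requires Python 3.8+):

```python
import math

# Check prod_{i=0}^{n} a*r^i == (sqrt(a^2 * r^n))^(n+1) on sample values.
a, r, n = 3.0, 2.0, 5
direct = math.prod(a * r**i for i in range(n + 1))
closed = math.sqrt(a * a * r**n) ** (n + 1)
print(direct, closed)   # both 23887872.0 = 3^6 * 2^15
```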
- -(An interesting aspect of this formula is that, even though it involves taking the square root of a potentially-odd power of a potentially-negative r, it cannot produce a complex result if neither a nor r has an imaginary part. It is possible, should r be negative and n be odd, for the square root to be taken of a negative intermediate result, causing a subsequent intermediate result to be an imaginary number. However, an imaginary intermediate formed in that way will soon afterwards be raised to the power of $\textstyle n + 1$, which must be an even number because n by itself was odd; thus, the final result of the calculation may plausibly be a negative number, but it could never be an imaginary one.) - -Let P represent the product. By definition, one calculates it by explicitly multiplying each individual term together. Written out in full, -$$ -P = a \cdot ar \cdot ar^2 \cdots ar^{n-1} \cdot ar^n -$$. - -Carrying out the multiplications and gathering like terms, -$$ -P = a^{n+1} r^{1+2+3+ \cdots +(n-1)+n} -$$. - -The exponent of r is the sum of an arithmetic sequence. Substituting the formula for that calculation, -$$ -P = a^{n+1} r^\frac{n(n+1)}{2} -$$, - -which enables simplifying the expression to -$$ -P = (ar^\frac{n}{2})^{n+1} = (a\sqrt{r^n})^{n+1} -$$. - -Rewriting a as $\textstyle \sqrt{a^2}$, -$$ -P = (\sqrt{a^{2}r^n})^{n+1} -$$, - -which concludes the proof. - -A clay tablet from the Early Dynastic Period in Mesopotamia, MS 3047, contains a geometric progression with base 3 and multiplier 1/2. It has been suggested to be Sumerian, from the city of Shuruppak. It is the only known record of a geometric progression from before the time of Babylonian mathematics. - -Books VIII and IX of Euclid's Elements analyze geometric progressions (such as the powers of two) and give several of their properties. diff --git a/wiki/wikipedia/3247.txt b/wiki/wikipedia/3247.txt deleted file mode 100644 index 0ca781af95f84dc2c56f74998decd6468e17f21d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3247.txt +++ /dev/null @@ -1,17 +0,0 @@ -Tetris Holding, LLC v. Xio Interactive, Inc., 863 F.Supp.2d 394 (D.N.J. 2012), was a 2012 American legal case related to copyright of video games, confirming that a game's look and feel can be protected under copyright law. Tetris Holding is a company that holds the copyright to the original Tetris game from 1984 and licenses those rights to game developers. Xio Interactive is a game developer that released Mino in 2009, a mobile game based on the gameplay of Tetris. Mino was downloaded millions of times, and Tetris Holding filed a DMCA notice and eventually a lawsuit against Xio for copyright infringement. - -The earliest video game case law had protected the designs in Galaxian and Pac-Man. But later cases such as Data East USA, Inc. v. Epyx, Inc. found that it is permissible to make a video game clone with similar ideas and principles as another game, since copyright does not protect an idea, only the specific expression of that idea. A trial occurred in 2012, the first case in a long time to proceed to trial on this issue. The district court ruled for Tetris Holding, with Judge Wolfson applying the Abstraction-Filtration-Comparison test to determine if any infringement occurred. Although standard gameplay ideas are not copyrightable, Mino was still substantially similar to Tetris in terms of its art style, and those elements are in fact protected by copyright.
This case has since been applied in other copyright disputes to offer broader protection to the look and feel of video games. - -In 1984, Alexey Pajitnov created the puzzle game Tetris for the Dorodnitsyn Computing Centre at the Soviet Academy of Sciences. Within a few years Tetris became one of the most successful video games of all time. Henk Rogers was one of the key people who brought Tetris to the world by going to Moscow to negotiate for the rights. Rogers later befriended Pajitnov and helped the two acquire ownership of the copyrights from a former Soviet agency. By the early 2000s, Rogers and Pajitnov created The Tetris Company to control the Tetris intellectual property, and to license their rights to game developers who comply with certain standards. - -In 2009, a game developer named Xio Interactive released a mobile game called Mino that was based on the gameplay of Tetris. Xio had tried to license the rights to Tetris from The Tetris Company, who refused. At that point Xio researched intellectual property law to see how to design a game similar to Tetris that would not include any legally-protected elements. The game Mino featured the same approach of using falling tetromino blocks to form complete lines on a playfield and score points. The game's marketing materials described it as a "Tetromino game" with "fast-paced, line-clearing features", and ended with a disclaimer: "Mino and Xio Interactive are not affiliated with Tetris or the Tetris Company". - -While there have been many Tetris clones, Mino was eventually downloaded more than six million times. In August 2009, Tetris Holdings sent DMCA notices to Xio via Apple requesting that Apple take Mino down from the App Store. As part of the DMCA process, Xio filed a counter-notice and Apple re-instated the game to their store. Since Apple could not permanently remove the software without a legal order to do so, The Tetris Company filed a lawsuit against Xio Interactive in December 2009 in the United States District Court for the District of New Jersey. Copyright jurisprudence developed a legal doctrine called the idea–expression distinction, which says that copyright does not protect a general idea, only one expression of an idea. Based on this, copyright does not protect scènes à faire, where stock scenes and generic details are common among creative works. There is also the merger doctrine where some ideas may only have a limited number of ways of being expressed, and it would be legally unfair to protect expression if it effectively gives someone a monopoly on an idea. With the costs of filing a lawsuit being very high compared to the expected outcome, many video game copyright holders became hesitant to sue alleged clones. Most lawsuits about alleged clones were settled between the mid-1990s and the mid-2000s, and Xio became a rare case that proceeded to trial on this issue. - -Judge Wolfson ruled early that, as previously established, the gameplay of Tetris was not copyrightable. The New Jersey district court was within the Third Circuit, which had prior case law from Whelan v. Jaslow (797 F.2d 1222 (1987)) that used a purpose-based test to abstract software to determine if copyright was infringed. Wolfson also explored case law from other circuits, using the Abstraction-Filtration-Comparison test (AFC) for substantial similarity that had been first defined in Nichols v. Universal Pictures Corp. (45 F.2d 119 (1930)) and then applied to computer software in Computer Associates International, Inc. v. Altai, Inc. 
(982 F.2d 693 (1992)). Two video game cases, Atari v. Philips (related to Pac-Man and an alleged clone K.C. Munchkin!) and Midway Manufacturing Co. v. Bandai-America, Inc. (related to handheld clones of Midway's Galaxian) were found to have been ruled in the same manner as the AFC test, and Wolfson decided to apply them to Mino. Wolfson explained that the court should compare the games "as they would appear to a layman [by] concentrating upon the gross features rather than an examination of minutiae", essentially comparing the games' respective look and feel; Wolfson further wrote "[i]f one has to squint to find distinctions only at a granular level, then the works are likely to be substantially similar". - -Wolfson discussed which aspects of Tetris were copyrightable as expressive elements, and which aspects are part of the general idea that cannot be protected by copyright. According to Wolfson, copyright cannot protect the idea of vertically falling blocks, or a player rotating those blocks to form lines and earn points, or a player losing the game if those blocks accumulate at the top of the screen. However, Wolfson determined that several aspects of Tetris qualify as unique expression that is protected by copyright. This includes the twenty-by-ten square game board, the display of randomized junk blocks at the start of the game, the display of a block's "shadow" where it will land, and the display of the next piece to fall. Wolfson also granted protection to the blocks changing in color when they land, and the game board filling up when the game is over. - -Wolfson also examined Mino's marketing materials to determine if they infringed the trade dress of Tetris. Where Mino's marketing used the same color and style of the pieces from Tetris, these details were distinct expression and not merely functional ideas in the public domain. Wolfson determined that this created a likelihood that consumers would confuse Mino with Tetris, and held that Mino's trade dress was infringing. The ruling shows the courts using a "high level of understanding of video game mechanics for the first time". - -Susan Corbett argues that "the Tetris decision supports the view that United States courts are becoming more accepting of the possibility of offering broader copyright protection for videogames". Tomasz Grzegorczyk notes that this case shows courts are willing to recognize that the "graphic user interface of the game is subject to protection under copyright in the same manner as audiovisual works". Noting that the copyright-infringing game copied exact shapes and colors, Steven Conway and Jennifer deWinter argue that the decision would not impact other alleged game clones that are less similar. Josh Davenport and Ross Dannenberg suggest that while a "standard game device" may be too generic to warrant copyright protection, a specific selection or arrangement of those devices would qualify as unique expression, and thus be copyrightable. John Kuehl calls this case a potential killing blow to knock-off video games that are near copies of the original. diff --git a/wiki/wikipedia/3248.txt b/wiki/wikipedia/3248.txt deleted file mode 100644 index cfea4bac085b3ebd60c2434cb63103247b657fb2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3248.txt +++ /dev/null @@ -1,50 +0,0 @@ -In number theory, a Wall–Sun–Sun prime or Fibonacci–Wieferich prime is a certain kind of prime number which is conjectured to exist, although none are known. - -Let $p$ be a prime number.
When each term in the sequence of Fibonacci numbers $F_n$ is reduced modulo $p$, the result is a periodic sequence. - -The (minimal) period length of this sequence is called the Pisano period and denoted $\pi(p)$. - -Since $F_0 = 0$, it follows that $p$ divides $F_{\pi(p)}$. A prime $p$ such that $p^2$ divides $F_{\pi(p)}$ is called a Wall–Sun–Sun prime. - -If $\alpha(m)$ denotes the rank of apparition modulo $m$ (i.e., $\alpha(m)$ is the smallest positive index $n$ such that $m$ divides $F_n$), then a Wall–Sun–Sun prime can be equivalently defined as a prime $p$ such that $p^2$ divides $F_{\alpha(p)}$. - -For a prime $p \ne 2, 5$, the rank of apparition $\alpha(p)$ is known to divide $p - \left(\tfrac{p}{5}\right)$, where the Legendre symbol $\textstyle\left(\frac{p}{5}\right)$ has the values -$$ -\left(\frac{p}{5}\right) = \begin{cases} 1 &\text{if }p \equiv \pm1 \pmod 5;\\ -1 &\text{if }p \equiv \pm2 \pmod 5.\end{cases} -$$ - -This observation gives rise to an equivalent characterization of Wall–Sun–Sun primes as primes $p$ such that $p^2$ divides the Fibonacci number $F_{p - \left(\frac{p}{5}\right)}$. - -A prime $p$ is a Wall–Sun–Sun prime if and only if $\pi(p^2) = \pi(p)$. - -A prime $p$ is a Wall–Sun–Sun prime if and only if $L_p \equiv 1 \pmod{p^2}$, where $L_p$ is the $p$-th Lucas number. - -McIntosh and Roettger establish several equivalent characterizations of Lucas–Wieferich primes. - -Dorais and Klyve showed that any Wall–Sun–Sun prime must exceed $9.7\times 10^{14}$. - -In December 2011, another search was started by the PrimeGrid project; however, it was suspended in May 2017. In November 2020, PrimeGrid started another project that searches for Wieferich and Wall–Sun–Sun primes simultaneously. Its leading edge has since passed $300\cdot 10^{15}$. - -Wall–Sun–Sun primes are named after Donald Dines Wall, Zhi Hong Sun and Zhi Wei Sun; Z. H. Sun and Z. W. Sun showed in 1992 that if the first case of Fermat's Last Theorem were false for a certain prime $p$, then $p$ would have to be a Wall–Sun–Sun prime. As a result, prior to Andrew Wiles' proof of Fermat's Last Theorem, the search for Wall–Sun–Sun primes was also the search for a potential counterexample to this centuries-old conjecture. - -A tribonacci–Wieferich prime is a prime $p$ satisfying $h(p) = h(p^2)$, where $h(m)$ is the least positive integer satisfying $[T_h, T_{h+1}, T_{h+2}] \equiv [T_0, T_1, T_2] \pmod m$ and $T_n$ denotes the $n$-th tribonacci number. No tribonacci–Wieferich prime exists below $10^{11}$. - -A Pell–Wieferich prime is a prime $p$ such that $p^2$ divides $P_{p-1}$ when $p \equiv 1, 7 \pmod 8$, or $p^2$ divides $P_{p+1}$ when $p \equiv 3, 5 \pmod 8$, where $P_n$ denotes the $n$-th Pell number. For example, 13, 31, and 1546463 are Pell–Wieferich primes, and no others exist below $10^9$. In fact, Pell–Wieferich primes are exactly the 2-Wall–Sun–Sun primes. - -A prime $p$ such that $F_{p - \left(\frac{p}{5}\right)} \equiv Ap \pmod{p^2}$ with small $|A|$ is called a near-Wall–Sun–Sun prime. Near-Wall–Sun–Sun primes with $A = 0$ would be Wall–Sun–Sun primes. PrimeGrid records cases with $|A| \le 1000$. A dozen cases are known where $A = \pm 1$. - -Wall–Sun–Sun primes can be considered for the field $\mathbb{Q}(\sqrt{D})$ with discriminant $D$. - -For the conventional Wall–Sun–Sun primes, $D = 5$. In the general case, a Lucas–Wieferich prime $p$ associated with $(P, Q)$ is a Wieferich prime to base $Q$ and a Wall–Sun–Sun prime with discriminant $D = P^2 - 4Q$. In this definition, the prime $p$ should be odd and not divide $D$.
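The defining divisibility condition is easy to test by machine. The following minimal Python sketch (the function names and the search bound are our own, purely illustrative choices) computes Fibonacci numbers modulo $p^2$ by fast doubling and checks the equivalent criterion $p^2 \mid F_{p - \left(\frac{p}{5}\right)}$ for small primes; consistent with the conjecture that none are known, it finds no Wall–Sun–Sun prime.

```
def fib_mod(n, m):
    """Fast doubling: return (F(n) mod m, F(n+1) mod m)."""
    if n == 0:
        return (0, 1)
    a, b = fib_mod(n >> 1, m)
    c = (a * ((2 * b - a) % m)) % m  # F(2k)   = F(k) * (2*F(k+1) - F(k))
    d = (a * a + b * b) % m          # F(2k+1) = F(k)^2 + F(k+1)^2
    return (d, (c + d) % m) if n & 1 else (c, d)

def legendre_5(p):
    """Legendre symbol (p/5): +1 if p = ±1 (mod 5), -1 if p = ±2 (mod 5)."""
    return 1 if p % 5 in (1, 4) else -1

def is_wall_sun_sun(p):
    """Test whether p^2 divides F(p - (p/5)), for a prime p != 2, 5."""
    return fib_mod(p - legendre_5(p), p * p)[0] == 0

primes = [q for q in range(7, 10000)
          if all(q % d for d in range(2, int(q ** 0.5) + 1))]
print([q for q in primes if is_wall_sun_sun(q)])  # expected output: []
```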
- -It is conjectured that for every natural number $D$, there are infinitely many Wall–Sun–Sun primes with discriminant $D$. - -The case of $(P,Q) = (k,-1)$ corresponds to the k-Wall–Sun–Sun primes, for which Wall–Sun–Sun primes represent the special case $k = 1$. The k-Wall–Sun–Sun primes can be explicitly defined as primes $p$ such that $p^2$ divides the k-Fibonacci number $F_k(\pi_k(p))$, where $F_k(n) = U_n(k, -1)$ is a Lucas sequence of the first kind with discriminant $D = k^2 + 4$ and $\pi_k(p)$ is the Pisano period of k-Fibonacci numbers modulo $p$. For a prime $p \ne 2$ not dividing $D$, this condition is equivalent to either of the following. - -* $p^2$ divides $F_k\left(p - \left(\tfrac{D}{p}\right)\right)$, where $\left(\tfrac{D}{p}\right)$ is the Kronecker symbol; - -* $V_p(k, -1) \equiv k \pmod{p^2}$, where $V_n(k, -1)$ is a Lucas sequence of the second kind. - -The smallest k-Wall–Sun–Sun primes for $k = 2, 3, \ldots$ are - -13, 241, 2, 3, 191, 5, 2, 3, 2683, ... diff --git a/wiki/wikipedia/3249.txt b/wiki/wikipedia/3249.txt deleted file mode 100644 index 09af6571dfff38e218f95763030cdf9ee020391c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3249.txt +++ /dev/null @@ -1,32 +0,0 @@ -A fundamental tool in statistical mechanics and probabilistic combinatorics (especially random graphs and the probabilistic method), the Ahlswede–Daykin inequality, also known as the four functions theorem (or inequality), - -is a correlation-type inequality for four functions on a finite distributive lattice. - -It states that if $f_1,f_2,f_3,f_4$ are nonnegative functions on a finite distributive lattice such that -$$ -f_1(x)f_2(y)\le f_3(x\vee y)f_4(x\wedge y) -$$ - -for all x, y in the lattice, then -$$ -f_1(X)f_2(Y)\le f_3(X\vee Y)f_4(X\wedge Y) -$$ - -for all subsets X, Y of the lattice, where -$$ -f(X) = \sum_{x\in X}f(x) -$$ - -and -$$ -X\vee Y = \{x\vee y\mid x\in X, y\in Y\} -$$ -$$ -X\wedge Y = \{x\wedge y\mid x\in X, y\in Y\}. -$$ - -The Ahlswede–Daykin inequality can be used to provide a short proof of both the Holley inequality and the FKG inequality. It also implies the XYZ inequality. - -For a proof, see the original article. - -The "four functions theorem" was independently generalized to $2k$ functions. diff --git a/wiki/wikipedia/325.txt b/wiki/wikipedia/325.txt deleted file mode 100644 index 3a96f35210c7d1ed65c459bd2a846a37b3ef1952..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/325.txt +++ /dev/null @@ -1,3 +0,0 @@ -The electromagnetism uniqueness theorem states that providing boundary conditions for Maxwell's equations uniquely fixes a solution for those equations. - -However, this theorem must not be misunderstood as saying that providing boundary conditions (or the field solution itself) uniquely fixes a source distribution. One counterexample is that the field outside a uniformly charged sphere may also be produced by a point charge placed at the center of the sphere instead, i.e. the source needed to produce such a field at a boundary outside the sphere is not unique. diff --git a/wiki/wikipedia/3250.txt b/wiki/wikipedia/3250.txt deleted file mode 100644 index 1b218b7d74bb262b553bb27efdbbb17fdb28364d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3250.txt +++ /dev/null @@ -1,33 +0,0 @@ -String girdling Earth is a mathematical puzzle with a counterintuitive solution. In a version of this puzzle, string is tightly wrapped around the equator of a perfectly spherical Earth.
If the string should be raised one metre off the ground, all the way along the equator, how much longer would the string be? - -Alternatively, one metre of string is spliced into the original string, and the extended string rearranged so that it is at a uniform height above the equator. The question that is then posed is whether the gap between string and Earth will allow the passage of a car, a cat or a thin knife blade. - -As the string must be raised all along the entire circumference, one might expect several kilometres of additional string. Surprisingly, the answer is $2\pi$ metres, or around 6.3 metres. - -In the second phrasing, considering that one metre is almost negligible compared with the circumference, the first response may be that the new position of the string will be no different from the original surface-hugging position. The answer is that a cat will easily pass through the gap, the size of which will be $\tfrac{1}{2\pi}$ metres, or about 16 centimetres. - -Even more surprising is that the size of the sphere or circle around which the string is spanned is irrelevant, and may be anything from the size of an atom to the Milky Way: the result depends only on the amount it is raised. Moreover, as in the coin-rolling problem, the shape the string girdles need not be a circle: $2\pi$ times the offset is added when it is any simple polygon or closed curve which does not intersect itself. If the shape is complex, $2\pi$ times the offset, times the absolute value of its turning number, must be added. - -This diagram gives a visual analogue using a square: regardless of the size of the square, the added perimeter is the sum of the four blue arcs, a circle with the same radius as the offset. - -More formally, let c  be the Earth's circumference, r  its radius, Δc  the added string length and Δr  the added radius. As a circle of radius R  has a circumference of $2\pi R$, - - - -\begin{align} - -c + \varDelta c & = 2 \pi (r + \varDelta r) - -\\ 2 \pi r + \varDelta c & = 2 \pi r + 2 \pi \varDelta r - -\\ \varDelta c & = 2 \pi \varDelta r - -\\ \therefore \varDelta r & = \frac{\varDelta c}{2 \pi} - -\end{align} - - - -regardless of the value of c . - -This observation also means that an athletics track has the same offset between starting lines on each lane, equal to $2\pi$ times the width of the lane, whether the circumference of the stadium is the standard 400 metres or the size of a galaxy. diff --git a/wiki/wikipedia/3251.txt b/wiki/wikipedia/3251.txt deleted file mode 100644 index e91b7f4ebe72f49c43f9e735ee9b54e60532c2ad..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3251.txt +++ /dev/null @@ -1,25 +0,0 @@ -Tetris is an upcoming biographical film directed by Jon S. Baird for Apple TV+. - -The film chronicles the development and release of the video game Tetris. - -*Taron Egerton as Henk Rogers - -*Nikita Efremov as Alexey Pajitnov - -*Roger Allam as Robert Maxwell - -*Anthony Boyle as Kevin Maxwell - -*Togo Igawa as Hiroshi Yamauchi - -*Toby Jones as Robert Stein - -*Ken Yamamura as Minoru Arakawa - -*Ben Miles as Howard Lincoln - -*Matthew Marsh as Mikhail Gorbachev - -In July 2020, it was reported that a biopic was being made about the making of Tetris, which will delve into the legal battles that took place during the Cold War over ownership of the game, with Jon S. Baird directing and Taron Egerton cast to portray the game developer Henk Rogers. Egerton confirmed this report in an August interview, explaining that the film would mirror a tone similar to The Social Network. In November, Apple TV+ acquired the film.
- -Filming began in Glasgow in December 2020, including Glasgow Prestwick Airport on the Ayrshire coast. In February 2021, filming took place in Aberdeen at locations including the University of Aberdeen's Zoology Building, which was used as the headquarters of Soviet firm Elorg. Production then returned to Glasgow for a few days, before wrapping in early March 2021. diff --git a/wiki/wikipedia/3252.txt b/wiki/wikipedia/3252.txt deleted file mode 100644 index 831d49796b2fe52285cd2280bb038d7d2f8b4a36..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3252.txt +++ /dev/null @@ -1,75 +0,0 @@ -In mathematics, Bogoliubov's edge-of-the-wedge theorem implies that holomorphic functions on two "wedges" with an "edge" in common are analytic continuations of each other provided they both give the same continuous function on the edge. It is used in quantum field theory to construct the analytic continuation of Wightman functions. The formulation and the first proof of the theorem were presented by Nikolay Bogoliubov at the International Conference on Theoretical Physics, Seattle, USA (September, 1956) and also published in the book Problems in the Theory of Dispersion Relations. Further proofs and generalizations of the theorem were given by R. Jost and H. Lehmann (1957), F. Dyson (1958), H. Epstein (1960), and by other researchers. - -In one dimension, a simple case of the edge-of-the-wedge theorem can be stated as follows. - -*Suppose that f is a continuous complex-valued function on the complex plane that is holomorphic on the upper half-plane, and on the lower half-plane. Then it is holomorphic everywhere. - -In this example, the two wedges are the upper half-plane and the lower half plane, and their common edge is the real axis. This result can be proved from Morera's theorem. Indeed, a function is holomorphic provided its integral round any contour vanishes; a contour which crosses the real axis can be broken up into contours in the upper and lower half-planes and the integral round these vanishes by hypothesis. - -The more general case is phrased in terms of distributions. This is technically simplest in the case where the common boundary is the unit circle $|z|=1$ in the complex plane. In that case holomorphic functions f, g in the regions $r<|z|<1$ and $1<|z|<R$ whose distributional boundary values on the unit circle coincide are analytic continuations of each other, and so define a single function holomorphic on the annulus $r<|z|<R$. - -In quantum field theory the Wightman distributions are boundary values of Wightman functions $W(z_1, \ldots, z_n)$ depending on variables $z_i$ in the complexification of Minkowski spacetime. They are defined and holomorphic in the wedge where the imaginary part of each $z_i-z_{i-1}$ lies in the open positive timelike cone. By permuting the variables we get n! different Wightman functions defined in n! different wedges. By applying the edge-of-the-wedge theorem (with the edge given by the set of totally spacelike points) one can deduce that the Wightman functions are all analytic continuations of the same holomorphic function, defined on a connected region containing all n! wedges. (The equality of the boundary values on the edge that we need to apply the edge-of-the-wedge theorem follows from the locality axiom of quantum field theory.) - -The edge-of-the-wedge theorem has a natural interpretation in the language of hyperfunctions. A hyperfunction is roughly a sum of boundary values of holomorphic functions, and can also be thought of as something like a "distribution of infinite order". The analytic wave front set of a hyperfunction at each point is a cone in the cotangent space of that point, and can be thought of as describing the directions in which the singularity at that point is moving.
- -In the edge-of-the-wedge theorem, we have a distribution (or hyperfunction) f on the edge, given as the boundary values of two holomorphic functions on the two wedges. If a hyperfunction is the boundary value of a holomorphic function on a wedge, then its analytic wave front set lies in the dual of the corresponding cone. So the analytic wave front set of f lies in the duals of two opposite cones. But the intersection of these duals is empty, so the analytic wave front set of f is empty, which implies that f is analytic. This is the edge-of-the-wedge theorem. - -In the theory of hyperfunctions there is an extension of the edge-of-the-wedge theorem to the case when there are several wedges instead of two, called Martineau's edge-of-the-wedge theorem. See the book by Hörmander for details. diff --git a/wiki/wikipedia/3253.txt b/wiki/wikipedia/3253.txt deleted file mode 100644 index 97ac95fa039a0b8b007255c2ca289b15d9a57218..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3253.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, a number of concepts employ the word harmonic. The similarity of this terminology to that of music is not accidental: the equations of motion of vibrating strings, drums and columns of air are given by formulas involving Laplacians, whose eigenfunctions correspond to the modes of vibration. Thus, the term "harmonic" is applied when one is considering functions with sinusoidal variations, or solutions of Laplace's equation and related concepts. diff --git a/wiki/wikipedia/3254.txt b/wiki/wikipedia/3254.txt deleted file mode 100644 index 34efbbe6c1f2302cbf8c79ef58d496c9bc701558..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3254.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematics, the Carlitz–Wan conjecture classifies the possible degrees of exceptional polynomials over a finite field $\mathbb{F}_q$ of $q$ elements. A polynomial $f(x)$ in $\mathbb{F}_q[x]$ of degree $d$ is called exceptional over $\mathbb{F}_q$ if every irreducible factor (differing from $x - y$) of $(f(x) - f(y))/(x - y)$ over $\mathbb{F}_q$ becomes reducible over the algebraic closure of $\mathbb{F}_q$. If $q > d^4$, then $f(x)$ is exceptional if and only if $f(x)$ is a permutation polynomial over $\mathbb{F}_q$. - -The Carlitz–Wan conjecture states that there are no exceptional polynomials of degree $d$ over $\mathbb{F}_q$ if $\gcd(d, q - 1) > 1$. - -In the special case that $q$ is odd and $d$ is even, this conjecture was proposed by Leonard Carlitz (1966) and proved by Fried, Guralnick, and Saxl (1993). The general form of the Carlitz–Wan conjecture was proposed by Daqing Wan (1993) and later proved by Hendrik Lenstra (1995). diff --git a/wiki/wikipedia/3255.txt b/wiki/wikipedia/3255.txt deleted file mode 100644 index 04d8b0d23905b3451e64c94a1edc65938d2dd759..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3255.txt +++ /dev/null @@ -1,11 +0,0 @@ -In group theory, Higman's embedding theorem states that every finitely generated recursively presented group R can be embedded as a subgroup of some finitely presented group G. This is a result of Graham Higman from the 1960s. - -On the other hand, it is an easy theorem that every finitely generated subgroup of a finitely presented group is recursively presented, so the recursively presented finitely generated groups are (up to isomorphism) exactly the finitely generated subgroups of finitely presented groups. - -Since every countable group is a subgroup of a finitely generated group, the theorem can be restated for those groups.
- -As a corollary, there is a universal finitely presented group that contains all finitely presented groups as subgroups (up to isomorphism); in fact, its finitely generated subgroups are exactly the finitely generated recursively presented groups (again, up to isomorphism). - -Higman's embedding theorem also implies the Novikov-Boone theorem (originally proved in the 1950s by other methods) about the existence of a finitely presented group with algorithmically undecidable word problem. Indeed, it is fairly easy to construct a finitely generated recursively presented group with undecidable word problem. Then any finitely presented group that contains this group as a subgroup will have undecidable word problem as well. - -The usual proof of the theorem uses a sequence of HNN extensions starting with R and ending with a group G which can be shown to have a finite presentation. diff --git a/wiki/wikipedia/3256.txt b/wiki/wikipedia/3256.txt deleted file mode 100644 index f5abfc50eb39f456bd47032d9eadfdccc74fdbf2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3256.txt +++ /dev/null @@ -1,62 +0,0 @@ -In additive number theory, an area of mathematics, the Erdős–Tetali theorem is an existence theorem concerning economical additive bases of every order. More specifically, it states that for every fixed integer $h \geq 2$, there exists a subset of the natural numbers $\mathcal{B} \subseteq \mathbb{N}$ satisfying -$$ -r_{\mathcal{B},h}(n) \asymp \log n, -$$ - -where $r_{\mathcal{B},h}(n)$ denotes the number of ways that a natural number $n$ can be expressed as the sum of $h$ elements of $\mathcal{B}$. - -The theorem is named after Paul Erdős and Prasad V. Tetali, who published it in 1990. - -The original motivation for this result is attributed to a problem posed by S. Sidon in 1932 on economical bases. An additive basis $\mathcal{B}\subseteq\mathbb{N}$ is called economical (or sometimes thin) when it is an additive basis of order h and -$$ -r_{\mathcal{B},h}(n) \ll_{\varepsilon} n^\varepsilon -$$ - -for every $\varepsilon > 0$. In other words, these are additive bases that use as few numbers as possible to represent a given n, and yet represent every natural number. Related concepts include $B_h[g]$-sequences and the Erdős–Turán conjecture on additive bases. - -Sidon's question was whether an economical basis of order 2 exists. A positive answer was given by P. Erdős in 1956, settling the case h = 2 of the theorem. Although the general version was believed to be true, no complete proof appeared in the literature before the paper by Erdős and Tetali. - -The proof is an instance of the probabilistic method, and can be divided into three main steps. First, one starts by defining a random sequence $\omega \subseteq \mathbb{N}$ by -$$ -\Pr(n\in \omega) = C\cdot n^{\frac{1}{h} - 1} (\log n)^{\frac{1}{h}}, -$$ - -where $C>0$ is some large real constant, $h\geq 2$ is a fixed integer and n is sufficiently large so that the above formula is well-defined. A detailed discussion on the probability space associated with this type of construction may be found in Halberstam & Roth (1983). Secondly, one then shows that the expected value of the random variable $r_{\omega,h}(n)$ is of order $\log n$. That is, -$$ -\mathbb{E}(r_{\omega,h}(n)) \asymp \log n. -$$ - -Finally, one shows that $r_{\omega,h}(n)$ almost surely concentrates around its mean.
More explicitly: -$$ -\Pr\big(\exists c_1,c_2>0 ~|~ \text{for all large } n\in\mathbb{N} ,~ c_1\mathbb{E}(r_{\omega,h}(n)) \leq r_{\omega,h}(n) \leq c_2\mathbb{E}(r_{\omega,h}(n))\big) = 1 -$$ - -This is the critical step of the proof. Originally it was dealt with by means of Janson's inequality, a type of concentration inequality for multivariate polynomials. Tao & Vu (2006) present this proof with a more sophisticated two-sided concentration inequality by V. Vu (2000), thereby simplifying this step. Alon & Spencer (2016) classify this proof as an instance of the Poisson paradigm. - -Unsolved problem in mathematics: Let $h\geq 2$ be an integer. If $\mathcal{B}\subseteq\mathbb{N}$ is an infinite set such that $r_{\mathcal{B},h}(n)>0$ for every n, does this imply that -$$ -\limsup_{n\to\infty} \frac{r_{\mathcal{B},h}(n)}{\log n} > 0 -$$? - -The original Erdős–Turán conjecture on additive bases states, in its most general form, that if $\mathcal{B}\subseteq\mathbb{N}$ is an additive basis of order h then -$$ -\limsup_{n\to \infty} r_{\mathcal{B},h}(n) = \infty; -$$ - -that is, $r_{\mathcal{B},h}(n)$ cannot be bounded. In his 1956 paper, P. Erdős asked whether it could be the case that -$$ -\limsup_{n\to\infty} \frac{r_{\mathcal{B},2}(n)}{\log n} > 0 -$$ - -whenever $\mathcal{B}\subseteq\mathbb{N}$ is an additive basis of order 2. In other words, this is saying that $r_{\mathcal{B},2}(n)$ is not only unbounded, but that no function smaller than log can dominate $r_{\mathcal{B},2}(n)$. The question naturally extends to $h\geq 3$, making it a stronger form of the Erdős–Turán conjecture on additive bases. In a sense, what is being conjectured is that there are no additive bases substantially more economical than those guaranteed to exist by the Erdős–Tetali theorem. - -All the known proofs of the Erdős–Tetali theorem are, by the nature of the infinite probability space used, non-constructive proofs. However, Kolountzakis (1995) showed the existence of a recursive set $\mathcal{R}\subseteq \mathbb{N}$ satisfying $r_{\mathcal{R},2}(n) \asymp \log n$ such that $\mathcal{R}\cap\{0,1,\ldots ,n\}$ can be computed in time polynomial in $n$. The question for $h\geq 3$ remains open. - -Given an arbitrary additive basis $\mathcal{A}\subseteq\mathbb{N}$, one can ask whether there exists $\mathcal{B}\subseteq\mathcal{A}$ such that $\mathcal{B}$ is an economical basis. V. Vu (2000) showed that this is the case for Waring bases $\mathbb{N}^{\wedge} k := \{0^k, 1^k, 2^k,\ldots \}$, where for every fixed k there are economical subbases of $\mathbb{N}^{\wedge}k$ of order $s$ for every $s \geq s_k$, for some large computable constant $s_k$. - -Another possible question is whether similar results apply for functions other than log. That is, fixing an integer $h\geq 2$, for which functions f can we find a subset of the natural numbers $\mathcal{B}\subseteq \mathbb{N}$ satisfying $r_{\mathcal{B},h}(n) \asymp f(n)$? It follows from a result of C. Táfula (2019) that if f is a locally integrable, positive real function satisfying - -* $\frac{1}{x}\int_{1}^{x} f(t) \mathrm{d}t \asymp f(x)$, and - -* $\log x \ll f(x) \ll x^{\frac{1}{h-1}}(\log x)^{-\varepsilon}$ for some $\varepsilon > 0$, - -then there exists an additive basis $\mathcal{B}\subseteq \mathbb{N}$ of order h which satisfies $r_{\mathcal{B},h}(n) \asymp f(n)$. The minimal case f(x) = log x recovers Erdős–Tetali's theorem.
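The probabilistic construction at the heart of the proof can be simulated directly. Below is a small Python sketch for the case $h = 2$; the constant $C$, the cutoff $N$, and the random seed are arbitrary illustrative choices, not values from the literature. It samples a random set $\omega$ with $\Pr(n\in\omega) = C\cdot n^{-1/2}(\log n)^{1/2}$ and prints the ratio $r_{\omega,2}(n)/\log n$, which should remain within constant bounds as $n$ grows.

```
import math
import random

random.seed(0)
C, N = 3.0, 200000

# Sample omega: include n with probability C * n^(-1/2) * (log n)^(1/2), capped at 1.
omega = [n for n in range(2, N)
         if random.random() < min(1.0, C * n ** -0.5 * math.log(n) ** 0.5)]
members = set(omega)

def r2(n):
    """Number of representations n = a + b with a <= b and a, b in omega."""
    return sum(1 for a in omega if a <= n - a and (n - a) in members)

for n in (1000, 10000, 100000):
    print(n, r2(n), round(r2(n) / math.log(n), 2))
```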
diff --git a/wiki/wikipedia/3257.txt b/wiki/wikipedia/3257.txt deleted file mode 100644 index 577017e1382382a4e9f8a8636e74d4c64381f891..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3257.txt +++ /dev/null @@ -1,29 +0,0 @@ -The Havel–Hakimi algorithm is an algorithm in graph theory solving the graph realization problem. That is, it answers the following question: Given a finite list of nonnegative integers in non-increasing order, is there a simple graph such that its degree sequence is exactly this list? A simple graph contains no double edges or loops. The degree sequence is a list of numbers in nonincreasing order indicating the number of edges incident to each vertex in the graph. If a simple graph exists for exactly the given degree sequence, the list of integers is called graphic. The Havel-Hakimi algorithm constructs a special solution if a simple graph for the given degree sequence exists, or shows that no such graph exists. This construction is based on a recursive algorithm. The algorithm was published by Havel, and later by Hakimi. - -The Havel-Hakimi algorithm is based on the following theorem. - -Let $A = (s, t_{1},..., t_{s}, d_{1},..., d_{n})$ be a finite list of nonnegative integers that is nonincreasing. Let $A' = (t_{1}-1,..., t_{s}-1, d_{1},..., d_{n})$ be a second finite list of nonnegative integers that is rearranged to be nonincreasing. List $A$ is graphic if and only if list $A'$ is graphic. - -If the given list $A$ is graphic, then the theorem will be applied at most $n-1$ times, setting $A:=A'$ at each further step. Note that it can be necessary to sort this list again. This process ends when the whole list $A'$ consists of zeros. Let $G$ be a simple graph with the degree sequence $A$: Let the vertex $S$ have degree $s$; let the vertices $T_{1},..., T_{s}$ have respective degrees $t_{1},..., t_{s}$; let the vertices $D_{1},..., D_{n}$ have respective degrees $d_{1},..., d_{n}$. In each step of the algorithm, one constructs the edges of a graph with vertices $T_{1},..., T_{s}$; that is, if it is possible to reduce the list $A$ to $A'$, then we add the edges $\{S,T_1\},\{S,T_2\},\cdots,\{S,T_{s}\}$. When the list $A$ cannot be reduced to a list $A'$ of nonnegative integers in any step of this approach, the theorem proves that the list $A$ from the beginning is not graphic. - -The following is a summary based on the proof of the Havel-Hakimi algorithm in Invitation to Combinatorics (Shahriari 2022). - -To prove that the Havel-Hakimi algorithm always works, assume that $A'$ is graphic, and that there exists a simple graph $G'$ with the degree sequence $A' = (t_{1}-1,..., t_{s}-1, d_{1},..., d_{n})$. Then we add a new vertex $v$ adjacent to the $s$ vertices with degrees $t_{1}-1,..., t_{s}-1$ to obtain the degree sequence $A$. - -To prove the other direction, assume that $A$ is graphic, and that there exists a simple graph $G$ with the degree sequence $A = (s, t_{1},..., t_{s}, d_{1},..., d_{n})$ and vertices $S, T_{1},..., T_{s}, D_{1},..., D_{n}$. We do not know which $s$ vertices are adjacent to $S$, so we have two possible cases. - -In the first case, $S$ is adjacent to the vertices $T_{1},..., T_{s}$ in $G$. In this case, we remove $S$ with all its incident edges to obtain the degree sequence $A'$. - -In the second case, $S$ is not adjacent to some vertex $T_{i}$ for some $1 \leq i \leq s$ in $G$. Then we can change the graph $G$ so that $S$ is adjacent to $T_{i}$ while maintaining the same degree sequence $A$.
Since $S$ has degree $s$, the vertex $S$ must be adjacent to some vertex $D_{j}$ in $G$ for $1 \leq j \leq n$. Let the degree of $D_{j}$ be $d_{j}$. We know $t_i \geq d_j$, as the degree sequence $A$ is in non-increasing order. - -Since $t_i \geq d_j$, we have two possibilities: Either $t_i = d_j$, or $t_i > d_j$. If $t_i = d_j$, then by switching the places of the vertices $T_{i}$ and $D_{j}$, we can adjust $G$ so that $S$ is adjacent to $T_{i}$ instead of $D_{j}.$ If $t_i > d_j$, then since $T_{i}$ is adjacent to more vertices than $D_{j}$, let another vertex $W$ be adjacent to $T_{i}$ but not to $D_{j}$. Then we can adjust $G$ by removing the edges $\left \{ S, D_j \right \}$ and $\left \{ T_i, W \right \}$, and adding the edges $\left \{ S, T_i \right \}$ and $\left \{ W, D_j\right \}$. This modification preserves the degree sequence of $G$, but the vertex $S$ is now adjacent to $T_{i}$ instead of $D_{j}$. In this way, any vertex not connected to $S$ can be adjusted accordingly so that $S$ is adjacent to $T_{i}$ while maintaining the original degree sequence $A$ of $G$. Once this has been done, we are in the first case once more, through which we can obtain the degree sequence $A'$. Hence, $A$ is graphic if and only if $A'$ is also graphic. - -The time complexity of the algorithm is $O(n^2)$. - -Let $6, 3, 3, 3, 3, 2, 2, 2, 2, 1, 1$ be a nonincreasing, finite degree sequence of nonnegative integers. To test whether this degree sequence is graphic, we apply the Havel-Hakimi algorithm: - -First, we remove the vertex with the highest degree, in this case $6$, and all its incident edges to get $2, 2, 2, 2, 1, 1, 2, 2, 1, 1$ (assuming the vertex with highest degree is adjacent to the $6$ vertices with next highest degree). We rearrange this sequence in nonincreasing order to get $2, 2, 2, 2, 2, 2, 1, 1, 1, 1$. We repeat the process, removing the vertex with the next highest degree to get $1, 1, 2, 2, 2, 1, 1, 1, 1$ and rearranging to get $2, 2, 2, 1, 1, 1, 1, 1, 1$. We continue this removal to get $1, 1, 1, 1, 1, 1, 1, 1$, and then, after repeatedly pairing up the remaining degree-one vertices, $0, 0, 0, 0$. This sequence is clearly graphic, as it is realized by the empty graph on $4$ isolated vertices. - -To show an example of a non-graphic sequence, let $6, 5, 5, 4, 3, 2, 1$ be a nonincreasing, finite degree sequence of nonnegative integers. Applying the algorithm, we first remove the degree $6$ vertex and all its incident edges to get $4, 4, 3, 2, 1, 0$. Already, we know this degree sequence is not graphic, since it claims to have $6$ vertices with one vertex not adjacent to any of the other vertices; thus, the maximum degree of the other vertices is $4$. This means that the two vertices of degree $4$ are connected to all the other vertices with the exception of the isolated one, so the minimum degree of each non-isolated vertex should be $2$; however, the sequence claims to have a vertex with degree $1$. Thus, the sequence is not graphic. - -For the sake of the algorithm, if we were to reiterate the process, we would get $3, 2, 1, 0, 0$, which is yet more clearly not graphic. One vertex claims to have a degree of $3$, and yet only two other vertices have neighbors. Thus the sequence cannot be graphic.
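The reduction described above translates directly into code. Here is a short Python sketch of the Havel–Hakimi test (the function name and conventions are ours); it reproduces both worked examples.

```
def havel_hakimi(seq):
    """Return True iff the list of nonnegative integers is graphic."""
    seq = sorted(seq, reverse=True)
    while seq and seq[0] > 0:
        s = seq.pop(0)        # remove a vertex of maximum degree s
        if s > len(seq):
            return False      # not enough remaining vertices to connect to
        for i in range(s):
            seq[i] -= 1       # connect it to the s next-highest-degree vertices
            if seq[i] < 0:
                return False
        seq.sort(reverse=True)
    return True               # all remaining degrees are zero

print(havel_hakimi([6, 3, 3, 3, 3, 2, 2, 2, 2, 1, 1]))  # True
print(havel_hakimi([6, 5, 5, 4, 3, 2, 1]))              # False
```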
diff --git a/wiki/wikipedia/3258.txt b/wiki/wikipedia/3258.txt deleted file mode 100644 index 763e8dfb4783de138103afbe5f7699a26666d35c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3258.txt +++ /dev/null @@ -1 +0,0 @@ -#Redirect Overfull graph#Overfull conjecture diff --git a/wiki/wikipedia/3259.txt b/wiki/wikipedia/3259.txt deleted file mode 100644 index 40b4d7fda6d157d02e592d01456a67954f6461b8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3259.txt +++ /dev/null @@ -1,29 +0,0 @@ -In representation learning, knowledge graph embedding (KGE), also referred to as knowledge representation learning (KRL), or multi-relation learning, is a machine learning task of learning a low-dimensional representation of a knowledge graph's entities and relations while preserving their semantic meaning. Leveraging their embedded representation, knowledge graphs (KGs) can be used for various applications such as link prediction, triple classification, entity recognition, clustering, and relation extraction. - -A knowledge graph $\mathcal{G} = \{E, R, F\}$ is a collection of entities $E$, relations $R$, and facts $F$. A fact is a triple $(h, r, t) \in F$ that denotes a link $r \in R$ between the head $h \in E$ and the tail $t \in E$ of the triple. Another notation that is often used in the literature to represent a triple (or fact) is $\langle h, r, t \rangle$. This notation is called the resource description framework (RDF). However, raw knowledge graphs are sparse, and using them directly in a real-world application is computationally inefficient. - -The embedding of a knowledge graph translates each entity and relation of a knowledge graph, $\mathcal{G}$, into a vector of a given dimension $d$, called the embedding dimension. Given $Q$ as the set of all ranked predictions of a model, it is possible to define three different performance indexes: Hits@K, MR, and MRR. In particular, this technique completes a triple by inferring the missing entity or relation. Training this kind of recommender system requires a huge amount of information from the users; however, knowledge graph techniques can address this issue by using a graph already constructed over a prior knowledge of the item correlation and using the embedding to infer from it the recommendation. It is possible to use the task of link prediction to infer a new connection between an already existing drug and a disease by using a biomedical knowledge graph built leveraging the availability of massive literature and biomedical databases. Models of the kind listed below represent entities and relations with simple vectors, making this class of embedding models light and easy to train, even though they suffer from data sparsity and high dimensionality. - -* DistMult: Since the embedding matrix of the relation is a diagonal matrix, the scoring function is symmetric in the head and the tail, so the model cannot distinguish asymmetric facts. - -* ComplEx: Whereas DistMult uses a diagonal matrix to represent the relation embeddings, ComplEx adds a representation in the complex vector space together with the Hermitian product, so it can distinguish symmetric and asymmetric facts. This approach is scalable to a large knowledge graph in terms of time and space cost. - -* ANALOGY: This model encodes in the embedding the analogical structure of the knowledge graph to simulate inductive reasoning. - -* SimplE: This model is the improvement of canonical polyadic decomposition (CPD), in which an embedding vector for the relation and two independent embedding vectors for each entity are learned, depending on whether it is a head or a tail in the knowledge graph fact.
* HolE: HolE uses circular correlation to create an embedded representation of the knowledge graph. - -* TuckER: TuckER sees the knowledge graph as a tensor that could be decomposed using the Tucker decomposition in a collection of vectors, i.e., the embeddings of entities and relations, with a shared core. Each entity and relation has its own embedding dimension, and the size of the core tensor is determined by the shape of the entities and relations that interact. - -* TransE: This model uses a scoring function that forces the embeddings to satisfy a simple vector sum equation in each fact in which they appear: $h + r = t$. - -* TransH: It is an evolution of TransE introducing a hyperplane as geometric space to solve the problem of representing correctly the types of relations. - -* TransR: TransR is an evolution of TransH because it uses two different spaces to represent the embedded representation of the entities and the relations. In TransR, the head and the tail of a fact may belong to two different types of entities; for example, in the fact $(Obama, president\_of, USA)$, Obama and USA are two entities, but one is a person and the other is a country. All the translational models define a score function in their representation space, but they oversimplify this metric loss. - -* STransE: This model is the result of the combination of TransE and of the structure embedding model SE. - -* CrossE: Crossover interactions can be used for related information selection, and could be very useful for the embedding procedure. - -* TorusE: The regularization term of TransE constrains the entity embeddings to lie on a spherical space, and consequently loses the translation properties of the geometric space; TorusE instead embeds on a torus. - -* RotatE: RotatE is inspired by Euler's identity and involves the use of the Hadamard product to represent a relation $r$ as a rotation from the head $h$ to the tail $t$ in the complex space. - -* ConvE: ConvE is an embedding model that represents a good tradeoff between the expressiveness of deep learning models and their computational cost. - -* ConvR: ConvR is an adaptive convolutional network aimed to deeply represent all the possible interactions between the entities and the relations. - -* ConvKB: To compute the score function of a given triple $(h, r, t)$, ConvKB produces an input $[h; r; t]$ of dimension $d \times 3$ without reshaping and passes it to a series of convolutional filters of size $1 \times 3$. - -* CapsE: CapsE implements a capsule network to model a fact $(h, r, t)$. - -During the embedding procedure, it is commonly assumed that similar entities have similar relations. Benchmark datasets in the literature include WN18RR. More recently, it has been discussed that these datasets are far away from real-world applications, and other datasets should be integrated as a standard benchmark. diff --git a/wiki/wikipedia/326.txt b/wiki/wikipedia/326.txt deleted file mode 100644 index 5585a954eb82be8e9d0789855283c654c45817e6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/326.txt +++ /dev/null @@ -1,35 +0,0 @@ -In mathematics, the bounded inverse theorem (or inverse mapping theorem) is a result in the theory of bounded linear operators on Banach spaces. - -It states that a bijective bounded linear operator T from one Banach space to another has bounded inverse $T^{-1}$. It is equivalent to both the open mapping theorem and the closed graph theorem. - -This theorem may not hold for normed spaces that are not complete. - -For example, consider the space X of sequences x : N → R with only finitely many non-zero terms equipped with the supremum norm.
The map T : X → X defined by -$$ -T x = \left( x_{1}, \frac{x_{2}}{2}, \frac{x_{3}}{3}, \dots \right) -$$ - -is bounded, linear and invertible, but $T^{-1}$ is unbounded. - -This does not contradict the bounded inverse theorem since X is not complete, and thus is not a Banach space. - -To see that it is not complete, consider the sequence of sequences x(n) ∈ X given by -$$ -x^{(n)} = \left( 1, \frac1{2}, \dots, \frac1{n}, 0, 0, \dots \right) -$$ - -This sequence converges as n → ∞ to the sequence x(∞) given by -$$ -x^{(\infty)} = \left( 1, \frac1{2}, \dots, \frac1{n}, \dots \right), -$$ - -which has all its terms non-zero, and so does not lie in X. - -The completion of X is the space $c_0$ of all sequences that converge to zero, which is a (closed) subspace of $\ell^\infty(\mathbb{N})$, the space of all bounded sequences. - -However, in this case, the map T is not onto, and thus not a bijection. To see this, one need simply note that the sequence -$$ -x = \left( 1, \frac12, \frac13, \dots \right), -$$ - -is an element of $c_0$, but is not in the range of $T:c_0\to c_0$. diff --git a/wiki/wikipedia/3260.txt b/wiki/wikipedia/3260.txt deleted file mode 100644 index e4d2abd769c4fb78a1250129d7f73aeb7fb8802a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3260.txt +++ /dev/null @@ -1,17 +0,0 @@ -In graph theory, a branch of mathematics, a squaregraph is a type of undirected graph that can be drawn in the plane in such a way that every bounded face is a quadrilateral and every vertex with three or fewer neighbors is incident to an unbounded face. - -The squaregraphs include as special cases trees, grid graphs, gear graphs, and the graphs of polyominoes. - -As well as being planar graphs, squaregraphs are median graphs, meaning that for every three vertices u, v, and w there is a unique median vertex m(u,v,w) that lies on shortest paths between each pair of the three vertices. As with median graphs more generally, squaregraphs are also partial cubes: their vertices can be labeled with binary strings such that the Hamming distance between strings is equal to the shortest path distance between vertices. - -The graph obtained from a squaregraph by making a vertex for each zone (an equivalence class of parallel edges of quadrilaterals) and an edge for each two zones that meet in a quadrilateral is a circle graph determined by a triangle-free chord diagram of the unit disk. Squaregraphs may also be characterized in several other ways: - -*They are the median graphs that do not contain as an induced subgraph any member of an infinite family of forbidden graphs. These forbidden graphs are the cube (the simplex graph of K3), the Cartesian product of an edge and a claw K1,3 (the simplex graph of a claw), and the graphs formed from a gear graph by adding one more vertex connected to the hub of the wheel (the simplex graph of the disjoint union of a cycle with an isolated vertex). - -*They are the graphs that are connected and bipartite, such that (if an arbitrary vertex r is picked as a root) every vertex has at most two neighbors closer to r, and such that at every vertex v, the link of v (a graph with a vertex for each edge incident to v and an edge for each 4-cycle containing v) is either a cycle of length greater than three or a disjoint union of paths. - -*They are the dual graphs of arrangements of lines in the hyperbolic plane that do not have three mutually-crossing lines.
The characterization of squaregraphs in terms of distance from a root and links of vertices can be used together with breadth first search as part of a linear time algorithm for testing whether a given graph is a squaregraph, without any need to use the more complex linear-time algorithms for planarity testing of arbitrary graphs. - -Several algorithmic problems on squaregraphs may be computed more efficiently than in more general planar or median graphs; for instance, Chepoi and his coauthors present linear time algorithms for computing the diameter of squaregraphs, and for finding a vertex minimizing the maximum distance to all other vertices. diff --git a/wiki/wikipedia/3261.txt b/wiki/wikipedia/3261.txt deleted file mode 100644 index 4937edad12b3b1588e25c321de3fa3c6079dbdf0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3261.txt +++ /dev/null @@ -1,19 +0,0 @@ -Legendre's conjecture, proposed by Adrien-Marie Legendre, states that there is a prime number between $n^2$ and $(n + 1)^2$ for every positive integer $n$. The conjecture is one of Landau's problems (1912) on prime numbers; to date, it has neither been proved nor disproved. - -Legendre's conjecture is one of a family of results and conjectures related to prime gaps, that is, to the spacing between prime numbers. - -The prime number theorem suggests that the actual number of primes between $n^2$ and $(n + 1)^2$ is asymptotic to $n/\ln(n)$. Since this number is large for large $n$, this lends credence to Legendre's conjecture. - -If Legendre's conjecture is true, the gap between any prime p and the next prime would always be at most on the order of $\sqrt{p}$; in big O notation, the gaps are $O(\sqrt p)$. Two stronger conjectures, Andrica's conjecture and Oppermann's conjecture, also both imply that the gaps have the same magnitude. - -Harald Cramér conjectured that the gaps are always much smaller, of the order $(\log p)^2$. If Cramér's conjecture is true, Legendre's conjecture would follow for all sufficiently large n. Cramér also proved that the Riemann hypothesis implies a weaker bound of $O(\sqrt p\log p)$ on the size of the largest prime gaps. - -A counterexample near $10^{18}$ would require a prime gap fifty million times the size of the average gap. - -Legendre's conjecture implies that at least one prime can be found in every half revolution of the Ulam spiral. - -It follows from a result by Ingham that for all sufficiently large $n$, there is a prime between the consecutive cubes $n^3$ and $(n+1)^3$. - -Baker, Harman and Pintz proved that there is a prime in the interval $[x,x+O(x^{21/40})]$ for all large $x$. - -A table of maximal prime gaps shows that the conjecture holds to at least $n^2=4\cdot10^{18}$, meaning $n=2\cdot10^9$. diff --git a/wiki/wikipedia/3262.txt b/wiki/wikipedia/3262.txt deleted file mode 100644 index 14171061dd2638a80737b7eac0be248799582b3b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3262.txt +++ /dev/null @@ -1,7 +0,0 @@ -In mathematics, an N-topological space is a set equipped with N arbitrary topologies. If $\tau_1, \tau_2, \ldots, \tau_N$ are N topologies defined on a nonempty set X, then the N-topological space is denoted by $(X,\tau_1,\tau_2,\ldots,\tau_N)$. - -For N = 1, the structure is simply a topological space. - -For N = 2, the structure becomes a bitopological space introduced by J. C. Kelly. - -Let $X = \{x_1, x_2, \ldots, x_n\}$ be any finite set. Suppose $A_r = \{x_1, x_2, \ldots, x_r\}$. Then the collection $\tau_1 = \{\varnothing, A_1, A_2, \ldots, A_n = X\}$ will be a topology on X.
If $\tau_1, \tau_2, \ldots, \tau_m$ are m such topologies (chain topologies) defined on X, then the structure $(X, \tau_1, \tau_2, \ldots, \tau_m)$ is an m-topological space. diff --git a/wiki/wikipedia/3263.txt b/wiki/wikipedia/3263.txt deleted file mode 100644 index b75c7082339f9f68c6ffb9441d53c242200abcd8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3263.txt +++ /dev/null @@ -1,54 +0,0 @@ -The mountain pass theorem is an existence theorem from the calculus of variations, originally due to Antonio Ambrosetti and Paul Rabinowitz. Given certain conditions on a function, the theorem demonstrates the existence of a saddle point. The theorem is unusual in that there are many other theorems regarding the existence of extrema, but few regarding saddle points. - -The assumptions of the theorem are: - -* $I$ is a functional from a Hilbert space H to the reals, - -* $I\in C^1(H,\mathbb{R})$ and $I'$ is Lipschitz continuous on bounded subsets of H, - -* $I$ satisfies the Palais–Smale compactness condition, - -* $I[0]=0$, - -* there exist positive constants r and a such that $I[u]\geq a$ if $\Vert u\Vert =r$, and - -* there exists $v\in H$ with $\Vert v\Vert >r$ such that $I[v]\leq 0$. - -If we define: -$$ -\Gamma=\{\mathbf{g}\in C([0,1];H)\vert\mathbf{g}(0)=0,\mathbf{g}(1)=v\} -$$ - -and: -$$ -c=\inf_{\mathbf{g}\in\Gamma}\max_{0\leq t\leq 1} I[\mathbf{g}(t)], -$$ - -then the conclusion of the theorem is that c is a critical value of I. - -The intuition behind the theorem is in the name "mountain pass." Consider I as describing elevation. Then we know two low spots in the landscape: the origin because $I[0]=0$, and a far-off spot v where $I[v]\leq 0$. In between the two lies a range of mountains (at $\Vert u\Vert =r$) where the elevation is high (higher than a>0). In order to travel along a path g from the origin to v, we must pass over the mountains, that is, we must go up and then down. Since I is somewhat smooth, there must be a critical point somewhere in between. (Think along the lines of the mean-value theorem.) The mountain pass lies along the path that passes at the lowest elevation through the mountains. Note that this mountain pass is almost always a saddle point. - -For a proof, see section 8.5 of Evans. - -Let $X$ be a Banach space. The assumptions of the theorem are: - -* $\Phi\in C(X,\mathbf R)$ has a Gateaux derivative $\Phi'\colon X\to X^*$ which is continuous when $X$ and $X^*$ are endowed with strong topology and weak* topology respectively. - -* There exist $r>0$ and some $x'\in X$ with $\|x'\|>r$ such that -$$ -\max(\Phi(0),\Phi(x'))<\inf\limits_{\|x\|=r}\Phi(x)=:m(r) -$$. - -* $\Phi$ satisfies weak Palais–Smale condition on $\{x\in X\mid m(r)\le\Phi(x)\}$. - -In this case there is a critical point $\overline x\in X$ of $\Phi$ satisfying $m(r)\le\Phi(\overline x)$. Moreover, if we define -$$ -\Gamma=\{c\in C([0,1],X)\mid c(0)=0,c(1)=x'\} -$$ - -then -$$ -\Phi(\overline x)=\inf_{c\in\Gamma}\max_{0\le t\le 1}\Phi(c(t)). -$$ - -For a proof, see section 5.5 of Aubin and Ekeland. diff --git a/wiki/wikipedia/3264.txt b/wiki/wikipedia/3264.txt deleted file mode 100644 index 21158133bbcfaed7ad4c5c20fe1d7eb748c14e81..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3264.txt +++ /dev/null @@ -1,5 +0,0 @@ -In the mathematical field of graph theory, the 26-fullerene graph is a polyhedral graph with V = 26 vertices and E = 39 edges. Its planar embedding has three hexagonal faces (including the one shown as the external face of the illustration) and twelve pentagonal faces.
As a planar graph with only pentagonal and hexagonal faces, meeting in three faces per vertex, this graph is a fullerene. The existence of this fullerene has been known since at least 1968. - -The 26-fullerene graph has $D_{3h}$ prismatic symmetry, the same group of symmetries as the triangular prism. This symmetry group has 12 elements; it has six symmetries that arbitrarily permute the three hexagonal faces of the graph and preserve the orientation of its planar embedding, and another six orientation-reversing symmetries. - -In 2009, The New York Times published a puzzle involving Hamiltonian paths in this graph, taking advantage of the correspondence between its 26 vertices and the 26 letters of the English alphabet. diff --git a/wiki/wikipedia/3265.txt b/wiki/wikipedia/3265.txt deleted file mode 100644 index 27f0751101a752cd1fd32a2c808f50ffbd703c9b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3265.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematics, Scheffé's lemma is a proposition in measure theory concerning the convergence of sequences of integrable functions. It states that, if $f_n$ is a sequence of integrable functions on a measure space $(X,\Sigma,\mu)$ that converges almost everywhere to another integrable function $f$, then $\int |f_n - f| d\mu \to 0$ if and only if $\int | f_n | d\mu \to \int | f | d\mu$. - -Applied to probability theory, Scheffé's theorem, in the form stated here, implies that almost everywhere pointwise convergence of the probability density functions of a sequence of $\mu$-absolutely continuous random variables implies convergence in distribution of those random variables. - -Henry Scheffé published a proof of the statement on convergence of probability densities in 1947. The result is a special case of a theorem by Frigyes Riesz about convergence in Lp spaces published in 1928. diff --git a/wiki/wikipedia/3266.txt b/wiki/wikipedia/3266.txt deleted file mode 100644 index ed02bc0012a28ea6843809bef7dec070a9c3b3f5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3266.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, the von Neumann conjecture stated that a group G is non-amenable if and only if G contains a subgroup that is a free group on two generators. The conjecture was disproved in 1980. - -In 1929, during his work on the Banach–Tarski paradox, John von Neumann defined the concept of amenable groups and showed that no amenable group contains a free subgroup of rank 2. The suggestion that the converse might hold, that is, that every non-amenable group contains a free subgroup on two generators, was made by a number of different authors in the 1950s and 1960s. Although von Neumann's name is popularly attached to the conjecture, its first written appearance seems to be due to Mahlon Marsh Day in 1957. - -The Tits alternative is a fundamental theorem which, in particular, establishes the conjecture within the class of linear groups. - -The historically first potential counterexample is Thompson group F. While its amenability is a wide open problem, the general conjecture was shown to be false in 1980 by Alexander Ol'shanskii; he demonstrated that Tarski monster groups, constructed by him, which are easily seen not to have free subgroups of rank 2, are not amenable. Two years later, Sergei Adian showed that certain Burnside groups are also counterexamples. None of these counterexamples are finitely presented, and for some years it was considered possible that the conjecture held for finitely presented groups.
However, in 2003, Alexander Ol'shanskii and Mark Sapir exhibited a collection of finitely presented groups which do not satisfy the conjecture. - -In 2013, Nicolas Monod found an easy counterexample to the conjecture. Given by piecewise projective homeomorphisms of the line, the group is remarkably simple to understand. Even though it is not amenable, it shares many known properties of amenable groups in a straightforward way. In 2013, Yash Lodha and Justin Tatch Moore isolated a finitely presented non-amenable subgroup of Monod's group. This group provides the first torsion-free finitely presented counterexample; it admits a presentation with 3 generators and 9 relations. Lodha later showed that this group satisfies the property $F_{\infty}$, which is a stronger finiteness property. diff --git a/wiki/wikipedia/3267.txt b/wiki/wikipedia/3267.txt deleted file mode 100644 index 863a28e13a93fefa030c8157f796a306f613978b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3267.txt +++ /dev/null @@ -1,62 +0,0 @@ -In mathematics, the Mellin inversion formula (named after Hjalmar Mellin) tells us conditions under - -which the inverse Mellin transform, or equivalently the inverse two-sided Laplace transform, is defined and recovers the transformed function. - -If $\varphi(s)$ is analytic in the strip $a < \Re(s) < b$, - -and if it tends to zero uniformly as $ \Im(s) \to \pm \infty $ for any real value c between a and b, with its integral along such a line converging absolutely, then if -$$ -f(x)= \{ \mathcal{M}^{-1} \varphi \} = \frac{1}{2 \pi i} \int_{c-i \infty}^{c+i \infty} x^{-s} \varphi(s) ds -$$ - -we have that -$$ -\varphi(s)= \{ \mathcal{M} f \} = \int_0^{\infty} x^s f(x)\frac{dx}{x}. -$$ - -Conversely, suppose f(x) is piecewise continuous on the positive real numbers, taking a value halfway between the limit values at any jump discontinuities, and suppose the integral -$$ -\varphi(s)=\int_0^{\infty} x^s f(x)\frac{dx}{x} -$$ - -is absolutely convergent when $a < \Re(s) < b$. Then f is recoverable via the inverse Mellin transform from its Mellin transform $\varphi$. - -We may strengthen the boundedness condition on $\varphi(s)$ if f(x) is continuous. If $\varphi(s)$ is analytic in the strip $a < \Re(s) < b$, and if $|\varphi(s)| < K |s|^{-2}$, where K is a positive constant, then f(x) as defined by the inversion integral exists and is continuous; moreover the Mellin transform of f is $\varphi$ for at least $a < \Re(s) < b$. - -On the other hand, if we are willing to accept an original f which is a - -generalized function, we may relax the boundedness condition on $\varphi$ to - -simply make it of polynomial growth in any closed strip contained in the open strip $a < \Re(s) < b$. - -We may also define a Banach space version of this theorem. If we call by -$$ -L_{\nu, p}(R^{+}) -$$ the weighted Lp space of complex valued functions f on the positive reals such that -$$ -\|f\| = \left(\int_0^\infty |x^\nu f(x)|^p \frac{dx}{x}\right)^{1/p} < \infty -$$ - -where ν and p are fixed real numbers with p>1. If f(x) is in $L_{\nu, p}(R^{+})$ with $1 < p \le 2$, then $\varphi(s)$ belongs to $L_{\nu, q}(R^{+})$ with $q = p/(p-1)$ and -$$ -f(x)=\frac{1}{2 \pi i} \int_{\nu-i \infty}^{\nu+i \infty} x^{-s} \varphi(s)ds. -$$ - -Here functions, identical everywhere except on a set of measure zero, are identified.
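As a concrete sanity check of the inversion formula, one can verify numerically that the contour integral recovers $f(x) = e^{-x}$ from its Mellin transform $\varphi(s) = \Gamma(s)$, which is analytic and rapidly decaying on vertical lines in the strip $\Re(s) > 0$. The sketch below assumes the third-party mpmath library; the contour $\Re(s) = 1$ is one admissible choice of $c$.

```
import mpmath as mp

phi = mp.gamma  # Mellin transform of f(x) = exp(-x)

def f_recovered(x, c=1.0):
    """Evaluate (1/(2*pi*i)) * integral of x^(-s) * phi(s) ds along Re(s) = c.

    Substituting s = c + i*t turns the contour integral into a real
    integral over t divided by 2*pi.
    """
    integrand = lambda t: mp.power(x, -(c + 1j * t)) * phi(c + 1j * t)
    return mp.quad(integrand, [-mp.inf, mp.inf]) / (2 * mp.pi)

for x in (0.5, 1.0, 2.0):
    print(x, mp.nstr(f_recovered(x).real, 8), mp.nstr(mp.exp(-x), 8))
# the two printed columns agree to the displayed precision
```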
- -Since the two-sided Laplace transform can be defined as -$$ - \left\{\mathcal{B} f\right\}(s) = \left\{\mathcal{M} f(- \ln x) \right\}(s) -$$ - -these theorems can be immediately applied to it also. diff --git a/wiki/wikipedia/3268.txt b/wiki/wikipedia/3268.txt deleted file mode 100644 index 8182a64b1f2c38f6f841baa89d0ccf637f6eb7c0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3268.txt +++ /dev/null @@ -1,97 +0,0 @@ -The greater-than sign is a mathematical symbol that denotes an inequality between two values. The widely adopted form of two equal-length strokes connecting in an acute angle at the right, >, has been found in documents dated as far back as the 1560s. In mathematical writing, the greater-than sign is typically placed between two values being compared and signifies that the first number is greater than the second number. Examples of typical usage include 1.5 > 1 and 1 > −2. The less-than sign and greater-than sign always "point" to the smaller number. Since the development of computer programming languages, the greater-than sign and the less-than sign have been repurposed for a range of uses and operations. - -The earliest known use of the symbols < and > is found in Artis Analyticae Praxis ad Aequationes Algebraicas Resolvendas (The Analytical Arts Applied to Solving Algebraic Equations) by Thomas Harriot, published posthumously in 1631. The text states: "The sign of majority a > b indicates that a is greater than b" and "The sign of minority a < b indicates that a is less than b." - -According to historian Art Johnson, while Harriot was surveying North America, he saw a Native American with a symbol that resembled the greater-than sign, in both backwards and forwards forms. Johnson says it is likely Harriot developed the two symbols from this symbol. - -The 'greater-than sign' is an original ASCII character (hex 3E, decimal 62). - -The Unicode code point is U+003E; this is inherited from the same allocation in ASCII. - -The greater-than sign is sometimes used for an approximation of the closing angle bracket, ⟩ (or "upright chevron"). The proper Unicode character for the closing angle bracket is U+27E9. ASCII does not have angular brackets. - -BASIC and C-family languages (including Java and C++) use the comparison operator > to mean "greater than". In Lisp-family languages, > is a function used to mean "greater than". - -In ColdFusion and Fortran, the operators gt and .gt. respectively mean "greater than". - -The double greater-than sign, >>, is used for an approximation of the much-greater-than sign, ≫. ASCII does not have the much-greater-than sign. - -The double greater-than sign is also used for an approximation of the closing guillemet, ». - -In Java, C, and C++, the operator >> is the right-shift operator. In C++ it is also used to read input from a stream, similar to C input functions such as scanf. - -In Haskell, the function >> is a monadic operator. It is used for sequentially composing two actions, discarding any value produced by the first. In that regard, it is like the statement sequencing operator in imperative languages, such as the semicolon in C. - -In XPath the operator >> returns true if the left operand follows the right operand in document order; otherwise it returns false. - -The triple greater-than sign, >>>, is the unsigned-right-shift operator in JavaScript. Three greater-than signs form the distinctive "three chevron prompt" of the firmware console in MicroVAX, VAXstation, and DEC Alpha computers (known as the SRM console in the latter).
This is also the default prompt of the Python interactive shell, often seen for code examples that can be executed interactively in the interpreter: - - - -$ python - -Python 3.9.2 (default, Feb 20 2021, 18:40:11) - -[GCC 10.2.0] on linux - -Type "help", "copyright", "credits" or "license" for more information. - ->>> print("Hello World") - -Hello World - ->>> - - - -The greater-than sign plus the equals sign, , is sometimes used for an approximation of the greater than or equal to sign, which was not included in the ASCII repertoire. The sign is, however, provided in Unicode, as . - -In BASIC, Lisp-family languages, and C-family languages (including Java and C++), operator means "greater than or equal to". In Sinclair BASIC it is encoded as a single-byte code point token. - -In Fortran, operator means "greater than or equal to". - -In Bourne shell and Windows PowerShell, the operator means "greater than or equal to". - -In Lua, operator means "greater than or equal to" and is used like this - - - -x = math.random(1,9) - -y = 5 - -if x >= y then - -print("x("..x..") is more or equal to y("..y..")") - -else - -print("x("..x..") is less than y("..y..")") - -end - - - -expected output: - -x(number >= 5) is more or equal to y(5) - -or - -x(number < 5) is less than y(5) - -In some programming languages (for example F#), the greater-than sign is used in conjunction with a hyphen-minus to create an arrow (). Arrows like these could also be used in text where other arrow symbols are unavailable. In the R programming language, this can be used as the right assignment operator. In the C, C++, and C# programming languages, this is used as a member access operator. In Swift, it is used to indicate the return value type when defining a function (i.e., func foo() -> MyClass {...}). - -In Bourne shell (and many other shells), greater-than sign is used to redirect output to a file. Greater-than plus ampersand () is used to redirect to a file descriptor. - -Greater-than sign is used in the 'spaceship operator', . - - - -===HTML=== - -In HTML (and SGML and XML), the greater-than sign is used at the end of tags. The greater-than sign may be included with , while produces the greater-than or equal to sign. - -The greater-than sign was used to denote quotations in the e-mail and newsgroup formats, and this has been taken into use also in forums. - -The sign is also used to denote quotations in Markdown. diff --git a/wiki/wikipedia/3269.txt b/wiki/wikipedia/3269.txt deleted file mode 100644 index b3a4c67c02cf1523aff2338eded7c3ec13b7c990..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3269.txt +++ /dev/null @@ -1,51 +0,0 @@ -In mathematics, the Gan–Gross–Prasad conjecture is a restriction problem in the representation theory of real or p-adic Lie groups posed by Gan Wee Teck, Benedict Gross, and Dipendra Prasad. The problem originated from a conjecture of Gross and Prasad for special orthogonal groups but was later generalized to include all four classical groups. In the cases considered, it is known that the multiplicity of the restrictions is at most one - -and the conjecture describes when the multiplicity is precisely one. - -A motivating example is the following classical branching problem in the theory of compact Lie groups. Let $\pi$ be an irreducible finite dimensional representation of the compact unitary group $U(n)$, and consider its restriction to the naturally embedded subgroup $U(n-1)$. 
It is known that this restriction is multiplicity-free, but one may ask precisely which irreducible representations of $U(n-1)$ occur in the restriction. - -By the Cartan–Weyl theory of highest weights, there is a classification of the irreducible representations of $U(n)$ via their highest weights which are in natural bijection with sequences of integers $\underline{a} = (a_1 \geq a_2 \geq \cdots \geq a_n)$. - -Now suppose that $\pi$ has highest weight $\underline{a}$. Then an irreducible representation $\tau$ of $U(n-1)$ with highest weight $\underline{b}$ occurs in the restriction of $\pi$ to $U(n-1)$ (viewed as a subgroup of $U(n)$) if and only if $\underline{a}$ and $\underline{b}$ are interlacing, i.e. $a_1 \geq b_1 \geq a_2 \geq b_2 \geq \cdots \geq b_{n-1} \geq a_n$. - -The Gan–Gross–Prasad conjecture then considers the analogous restriction problem for other classical groups. - -The conjecture has slightly different forms for the different classical groups. The formulation for unitary groups is as follows. - -Let $V$ be a finite-dimensional vector space over a field $k$ not of characteristic $2$ equipped with a non-degenerate sesquilinear form that is $\varepsilon$-Hermitian (i.e. $\varepsilon = 1$ if the form is Hermitian and $\varepsilon = -1$ if the form is skew-Hermitian). Let $W$ be a non-degenerate subspace of $V$ such that $V = W \oplus W^\perp$ and $W^\perp$ is of dimension $(\varepsilon + 1)/2$. Then let $G = G(V) \times G(W)$, where $G(V)$ is the unitary group preserving the form on $V$, and let $H = \Delta G(W)$ be the diagonal subgroup of $G$. - -Let $\pi = \pi_1 \boxtimes \pi_2$ be an irreducible smooth representation of $G$ and let $\nu$ be either the trivial representation (the "Bessel case") or the Weil representation (the "Fourier–Jacobi case"). - -Let $\varphi = \varphi_1 \times \varphi_2$ be a generic L-parameter for $G = G(V) \times G(W)$, and let $\Pi_\varphi$ be the associated Vogan L-packet. - -If $\varphi$ is a local L-parameter for $G$, then -$$ -\sum_{\text{relevant } \pi \in \Pi_\varphi} \dim \operatorname{Hom}_H (\pi \otimes \overline{\nu}, \mathbb{C}) = 1. -$$ - -Letting $\eta_{\mathrm{GP}}$ be the "distinguished character" defined in terms of the Langlands–Deligne local constant, then furthermore -$$ -\operatorname{Hom}_H (\pi(\varphi, \eta) \otimes \overline{\nu}, \mathbb{C}) \neq 0 \text{ if and only if } \eta = \eta_{\mathrm{GP}}. -$$ - -For a quadratic field extension $E/F$, let $L_E(s, \pi_1 \times \pi_2) := L_E(s, \pi_1 \boxtimes \pi_2, \mathrm{std}_n \boxtimes \mathrm{std}_{n-1})$ where $L_E$ is the global L-function obtained as the product of local L-factors given by the local Langlands conjectures. - -The conjecture states that the following are equivalent: - -# The period integral $P_H$ is nonzero when restricted to $\pi$. - -# For all places $v$, the local Hom space $\operatorname{Hom}_{H(F_v)}(\pi_v, \nu_v) \neq 0$ and $L_E(1/2, \pi_1 \times \pi_2) \neq 0$. - -In a series of four papers between 2010 and 2012, Jean-Loup Waldspurger proved the local Gan–Gross–Prasad conjecture for tempered representations of special orthogonal groups over p-adic fields. In 2012, Colette Moeglin and Waldspurger then proved the local Gan–Gross–Prasad conjecture for generic non-tempered representations of special orthogonal groups over p-adic fields.
- -In his 2013 thesis, Raphaël Beuzart-Plessis proved the local Gan–Gross–Prasad conjecture for the tempered representations of unitary groups in the p-adic Hermitian case under the same hypotheses needed to establish the local Langlands conjecture. - -Hongyu He proved the Gan-Gross-Prasad conjectures for discrete series representations of the real unitary group U(p,q). - -In a series of papers between 2004 and 2009, David Ginzburg, Dihua Jiang, and Stephen Rallis showed the (1) implies (2) direction of the global Gan–Gross–Prasad conjecture for all quasisplit classical groups. - -In the Bessel case of the global Gan–Gross–Prasad conjecture for unitary groups, Wei Zhang used the theory of the relative trace formula by Hervé Jacquet and the work on the fundamental lemma by Zhiwei Yun to prove that the conjecture is true subject to certain local conditions in 2014. - -In the Fourier–Jacobi case of the global Gan–Gross–Prasad conjecture for unitary groups, Yifeng Liu and Hang Xue showed that the conjecture holds in the skew-Hermitian case, subject to certain local conditions. - -In the Bessel case of the global Gan–Gross–Prasad conjecture for special orthogonal groups and unitary groups, Dihua Jiang and Lei Zhang used the theory of twisted automorphic descents to prove that (1) implies (2) in its full generality, i.e. for any irreducible cuspidal automorphic representation with a generic global Arthur parameter, and that (2) implies (1) subject to a certain global assumption. diff --git a/wiki/wikipedia/327.txt b/wiki/wikipedia/327.txt deleted file mode 100644 index 1405ad4d2e30c8a0929ae297db90d9c14e337525..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/327.txt +++ /dev/null @@ -1,15 +0,0 @@ -Banach's match problem is a classic problem in probability attributed to Stefan Banach. Feller says that the problem was inspired by a humorous reference to Banach's smoking habit in a speech honouring him by Hugo Steinhaus, but that it was not Banach who set the problem or provided an answer. - -Suppose a mathematician carries two matchboxes at all times: one in his left pocket and one in his right. Each time he needs a match, he is equally likely to take it from either pocket. Suppose he reaches into his pocket and discovers for the first time that the box picked is empty. If it is assumed that each of the matchboxes originally contained $N$ matches, what is the probability that there are exactly $k$ matches in the other box? - -Without loss of generality consider the case where the matchbox in his right pocket has an unlimited number of matches and let $M$ be the number of matches removed from this one before the left one is found to be empty. When the left pocket is found to be empty, the man has chosen that pocket $(N+1)$ times. Then $M$ is the number of successes before $(N+1)$ failures in Bernoulli trials with $p=1/2$, which has the negative binomial distribution and thus -$$ -P[M=m] = \binom{N+m}{m}\left(\frac{1}{2}\right)^{N+1+m} -$$. - -Returning to the original problem, we see that the probability that the left pocket is found to be empty first is $P[M+(p) the number of its positive real roots, counted with their multiplicity, and by v(p) the number of sign variations in the sequence of its coefficients. Descartes's rule of signs asserts that - -v(p) – #+(p) is a nonnegative even integer. - -In particular, if v(p) ≤ 1, then one has #+(p) = v(p). 
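As a small machine check of Descartes's rule of signs (our own illustration, not part of the original article; it assumes the numpy library), the following sketch counts sign variations in the coefficient sequence and compares them with the number of positive real roots, confirming that the difference is a nonnegative even integer:

import numpy as np

def sign_variations(coeffs):
    signs = [c for c in coeffs if c != 0]          # zero coefficients are ignored
    return sum(a * b < 0 for a, b in zip(signs, signs[1:]))

for coeffs in [[1, -6, 11, -6],   # (x-1)(x-2)(x-3): 3 variations, 3 positive roots
               [1, 0, -1],        # x^2 - 1: 1 variation, 1 positive root
               [1, -2, 3]]:       # x^2 - 2x + 3: 2 variations, 0 positive roots
    roots = np.roots(coeffs)
    pos = sum(1 for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    v = sign_variations(coeffs)
    print(coeffs, "v =", v, "#+ =", pos, "difference =", v - pos)

In the last example the difference is 2 rather than 0, which is exactly the slack the rule permits when a pair of roots is complex.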
- -Given a univariate polynomial p(x) with real coefficients, let us denote by #(ℓ,r](p) the number of real roots, counted with their multiplicities, of p in the half-open interval (ℓ, r]. - -Joseph Fourier published a similar theorem in 1820, on which he worked for more than twenty years. - -Because of the similarity between the two theorems, there was a priority controversy, despite the fact that the two theorems were discovered independently. « C'est en m'appuyant sur les principes qu'il a posés, et en imitant ses démonstrations, que j'ai trouvé les nouveaux théorèmes que je vais énoncer. » which translates into « It is by relying upon the principles he has laid out and by imitating his proofs that I have found the new theorems which I am about to present. » - -Because of this, during the 19th century, Fourier's and Sturm's theorems appeared together in almost all books on the theory of equations. - -Fourier and Budan left open the problem of reducing the size of the intervals in which roots are searched in a way that, eventually, the difference between the numbers of sign variations is at most one, making it possible to certify that the final intervals contain at most one root each. This problem was solved in 1834 by Alexandre Joseph Hidulphe Vincent. Roughly speaking, Vincent's theorem consists of using continued fractions for replacing Budan's linear transformations of the variable by Möbius transformations. - -Budan's, Fourier's and Vincent's theorems sank into oblivion at the end of the 19th century. The last author to mention these theorems before the second half of the 20th century was Joseph Alfred Serret. They were introduced again in 1976 by Collins and Akritas, for providing, in computer algebra, an efficient algorithm for real-root isolation on computers. diff --git a/wiki/wikipedia/3273.txt b/wiki/wikipedia/3273.txt deleted file mode 100644 index 034c6a46da49f409e6c79611a6529df006d83255..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3273.txt +++ /dev/null @@ -1,19 +0,0 @@ -A Cauchy problem in mathematics asks for the solution of a partial differential equation that satisfies certain conditions that are given on a hypersurface in the domain. A Cauchy problem can be an initial value problem or a boundary value problem (for this case see also Cauchy boundary condition). It is named after Augustin-Louis Cauchy. - -For a partial differential equation defined on $R^{n+1}$ and a smooth manifold S ⊂ $R^{n+1}$ of dimension n (S is called the Cauchy surface), the Cauchy problem consists of finding the unknown functions $u_1,\dots,u_N$ of the differential equation with respect to the independent variables $t,x_1,\dots,x_n$ that satisfy - -\begin{align}&\frac{\partial^{n_i}u_i}{\partial t^{n_i}} = F_i\left(t,x_1,\dots,x_n,u_1,\dots,u_N,\dots,\frac{\partial^k u_j}{\partial t^{k_0}\partial x_1^{k_1}\dots\partial x_n^{k_n}},\dots\right) \\ - -&\text{for } i,j = 1,2,\dots,N;\ k_0+k_1+\dots+k_n=k\leq n_j;\ k_0<n_j\end{align} - -subject to the condition, for some value $t=t_0$, - -\frac{\partial^k u_i}{\partial t^k}=\phi_i^{(k)}(x_1,\dots,x_n) - -\quad \text{for } k=0,1,2,\dots,n_i-1 - -where $\phi_i^{(k)}(x_1,\dots,x_n)$ are given functions defined on the surface $S$ (collectively known as the Cauchy data of the problem). The derivative of order zero means that the function itself is specified.
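In the simplest setting, with no spatial variables at all (N = 1, n = 0), a Cauchy problem reduces to an initial value problem for an ordinary differential equation and can be solved numerically. The following sketch (our own illustration, assuming the scipy and numpy libraries) solves u' = −2tu with Cauchy datum u(0) = 1, whose exact solution is u(t) = e^{−t²}:

from scipy.integrate import solve_ivp
import numpy as np

F = lambda t, u: -2.0 * t * u                              # right-hand side F(t, u)
sol = solve_ivp(F, (0.0, 2.0), [1.0], dense_output=True)   # Cauchy datum u(0) = 1

ts = np.linspace(0.0, 2.0, 5)
print(sol.sol(ts)[0])          # numerical solution at sample times
print(np.exp(-ts ** 2))        # exact solution, for comparison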
- -The Cauchy–Kowalevski theorem states that if all the functions $F_i$ are analytic in some neighborhood of the point $(t^0,x_1^0,x_2^0,\dots,\phi_{j,k_0,k_1,\dots,k_n}^0,\dots)$, and if all the functions $\phi_j^{(k)}$ are analytic in some neighborhood of the point $(x_1^0,x_2^0,\dots,x_n^0)$, then the Cauchy problem has a unique analytic solution in some neighborhood of the point $(t^0,x_1^0,x_2^0,\dots,x_n^0)$. diff --git a/wiki/wikipedia/3274.txt b/wiki/wikipedia/3274.txt deleted file mode 100644 index 2f70579689627782a426701f09eddb2b809fa1af..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3274.txt +++ /dev/null @@ -1,33 +0,0 @@ -In combinatorics, tripod packing is a problem of finding many disjoint tripods in a three-dimensional grid, where a tripod is an infinite polycube, the union of the grid cubes along three positive axis-aligned rays with a shared apex. - -Several problems of tiling and packing tripods and related shapes were formulated in 1967 by Sherman K. Stein. Stein originally called the tripods of this problem "semicrosses", and they were also called Stein corners by Solomon W. Golomb. A collection of disjoint tripods can be represented compactly as a monotonic matrix, a square matrix whose nonzero entries increase along each row and column and whose equal nonzero entries are placed in a monotonic sequence of cells, and the problem can also be formulated in terms of finding sets of triples satisfying a compatibility condition called "2-comparability", or of finding compatible sets of triangles in a convex polygon. - -The best lower bound known for the number of tripods that can have their apexes packed into an $n\times n\times n$ grid is $\Omega(n^{1.546})$, and the best upper bound is $n^2/\exp \Omega(\log^* n)$, both expressed in big Omega notation. - -The coordinates $(x_i,y_i,z_i)$ of the apexes of a solution to the tripod problem form a 2-comparable set of triples, where two triples are defined as being 2-comparable if there are either at least two coordinates where one triple is smaller than the other, or at least two coordinates where one triple is larger than the other. This condition ensures that the tripods defined from these triples do not have intersecting rays. - -Another equivalent two-dimensional version of the question asks how many cells of an $n\times n$ array of square cells (indexed from $1$ to $n$) can be filled in by the numbers from $1$ to $n$ in such a way that the non-empty cells of each row and each column of the array form strictly increasing sequences of numbers, and the positions holding each value $i$ form a monotonic chain within the array. An array with these properties is called a monotonic matrix. A collection of disjoint tripods with apexes $(x_i,y_i,z_i)$ can be transformed into a monotonic matrix by placing the number $z_i$ in array cell $(x_i,y_i)$ and vice versa. - -The problem is also equivalent to finding as many triangles as possible among the vertices of a convex polygon, such that no two triangles that share a vertex have nested angles at that vertex. This triangle-counting problem was posed by Peter Braß and its equivalence to tripod packing was observed by Aronov et al. - -It is straightforward to find a solution to the tripod packing problem with $\Omega(n^{3/2})$ tripods. For instance, for $k=\lfloor\sqrt{n}\rfloor$, the $\Omega(n^{3/2})$ triples - -\bigl\{ (ak+b+1,bk+c+1,ak+c+1) \big| a,b,c\in[0,k-1]\bigr\} - -are 2-comparable.
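The construction above is easy to check by machine. The following Python sketch (our own, not part of the original article) generates the triples for a small k and verifies pairwise 2-comparability as defined earlier:

from itertools import product, combinations

def two_comparable(t, u):
    # at least two coordinates strictly smaller, or at least two strictly larger
    smaller = sum(x < y for x, y in zip(t, u))
    larger = sum(x > y for x, y in zip(t, u))
    return smaller >= 2 or larger >= 2

k = 4  # grid size n = k*k = 16
triples = [(a * k + b + 1, b * k + c + 1, a * k + c + 1)
           for a, b, c in product(range(k), repeat=3)]

assert all(two_comparable(t, u) for t, u in combinations(triples, 2))
print(len(triples), "pairwise 2-comparable triples for n =", k * k)

With k = 4 this produces k³ = 64 triples in a 16 × 16 × 16 grid, matching the n^{3/2} count claimed for the construction.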
- -After several earlier improvements to this naïve bound, Gowers and Long found solutions to the tripod problem of cardinality $\Omega(n^{1.546})$. - -From any solution to the tripod packing problem, one can derive a balanced tripartite graph whose vertices are three copies of the numbers from $0$ to $n-1$ (one for each of the three coordinates) with a triangle of edges connecting the three vertices corresponding to the coordinates of the apex of each tripod. There are no other triangles in these graphs (they are locally linear graphs) because any other triangle would lead to a violation of 2-comparability. Therefore, by the known upper bounds to the Ruzsa–Szemerédi problem (one version of which is to find the maximum density of edges in a balanced tripartite locally linear graph), the maximum number of disjoint tripods that can be packed in an $n\times n\times n$ grid is $o(n^2)$, and more precisely $n^2/\exp\Omega(\log^* n)$. Although Tiskin writes that "tighter analysis of the parameters" can produce a bound that is less than quadratic by a polylogarithmic factor, he does not supply details and his proof that the number is $o(n^2)$ uses only the same techniques that are known for the Ruzsa–Szemerédi problem, so this stronger claim appears to be a mistake. - -An argument of Dean Hickerson shows that, because tripods cannot pack space with constant density, the same is true for analogous problems in higher dimensions. - -For small instances of the tripod problem, the exact solution is known. The numbers of tripods that can be packed into an $n\times n\times n$ cube, for $n\le 11$, are: - -1, 2, 5, 8, 11, 14, 19, 23, 28, 32, 38, ... - -For instance, 11 tripods can be packed into a $5\times 5\times 5$ cube. - -The number of distinct monotonic matrices of order $n$, for $n=1,2,3,\dots$ is - -2, 19, 712, 87685, 31102080, 28757840751, ... diff --git a/wiki/wikipedia/3275.txt b/wiki/wikipedia/3275.txt deleted file mode 100644 index 82326b017cff68dadcec56c71dba5bbabdfa6154..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3275.txt +++ /dev/null @@ -1,7 +0,0 @@ -In graph drawing, the area used by a drawing is a commonly used way of measuring its quality. - -For a drawing style in which the vertices are placed on the integer lattice, the area of the drawing may be defined as the area of the smallest axis-aligned bounding box of the drawing: that is, it is the product of the largest difference in x-coordinates of two vertices with the largest difference in y-coordinates. For other drawing styles, in which vertices are placed more freely, the drawing may be scaled so that the closest pair of vertices have distance one from each other, after which the area can again be defined as the area of a smallest bounding box of a drawing. Alternatively, the area can be defined as the area of the convex hull of the drawing, again after appropriate scaling. - -For straight-line drawings of planar graphs with n vertices, the optimal worst-case bound on the area of a drawing is Θ(n^2). The nested triangles graph requires this much area no matter how it is embedded, and several methods are known that can draw planar graphs with at most quadratic area. Binary trees, and trees of bounded degree more generally, have drawings with linear or near-linear area, depending on the drawing style.
Every outerplanar graph has a straight-line outerplanar drawing with area subquadratic in its number of vertices. If bends or crossings are allowed, then outerplanar graphs have drawings with near-linear area. However, drawing series–parallel graphs requires an area larger than n multiplied by a superpolylogarithmic factor, even if edges can be drawn as polylines. - -In contrast to these polynomial bounds, some drawing styles may exhibit exponential growth in their areas, implying that these styles may be suitable only for small graphs. An example is upward planar drawing of planar directed acyclic graphs, where the area of an n-vertex drawing may be proportional to 2^n in the worst case. Even plane trees may require exponential area, if they are to be drawn with straight edges that preserve a fixed cyclic order around each vertex and must be equally spaced around the vertex. diff --git a/wiki/wikipedia/3276.txt b/wiki/wikipedia/3276.txt deleted file mode 100644 index ec0afb59b302374e6e241314c9698b265ba532d2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3276.txt +++ /dev/null @@ -1,27 +0,0 @@ -In mathematics, the maximum-minimums identity is a relation between the maximum element of a set S of n numbers and the minima of the 2^n − 1 non-empty subsets of S. - -Let $S = \{x_1, x_2, \ldots, x_n\}$. The identity states that - -\begin{align} - -\max\{x_1,x_2,\ldots,x_{n}\} - -& = \sum_{i=1}^n x_i - \sum_{i<j} \min\{x_i,x_j\} + \sum_{i<j<k} \min\{x_i,x_j,x_k\} - \cdots + (-1)^{n+1} \min\{x_1,x_2,\ldots,x_n\} - -\end{align} - -or conversely - -\begin{align} - -\min\{x_1,x_2,\ldots,x_{n}\} - -& = \sum_{i=1}^n x_i - \sum_{i<j} \max\{x_i,x_j\} + \sum_{i<j<k} \max\{x_i,x_j,x_k\} - \cdots + (-1)^{n+1} \max\{x_1,x_2,\ldots,x_n\} - -\end{align} - -For a probabilistic proof, see the reference. diff --git a/wiki/wikipedia/3277.txt b/wiki/wikipedia/3277.txt deleted file mode 100644 index bfa39ebcec2e3f5bde6cac95ce3d292e77b2b88a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3277.txt +++ /dev/null @@ -1,11 +0,0 @@ -In combinatorial optimization, the set TSP, also known as the generalized TSP, group TSP, One-of-a-Set TSP, Multiple Choice TSP or Covering Salesman Problem, is a generalization of the traveling salesman problem (TSP), whereby it is required to find a shortest tour in a graph which visits all specified subsets of its vertices. The subsets of vertices must be disjoint, since the case of overlapping subsets can be reduced to the case of disjoint ones. The ordinary TSP is a special case of the set TSP when all subsets to be visited are singletons. Therefore, the set TSP is also NP-hard. - -There is a transformation for an instance of the set TSP to an instance of the standard asymmetric TSP. The idea is to connect each subset into a directed cycle with edges of zero weight, and inherit the outgoing edges from the original graph shifting by one vertex backwards along this cycle. The salesman, when visiting a vertex v in some subset, walks around the cycle for free and exits it from the vertex preceding v by an outgoing edge corresponding to an outgoing edge of v in the original graph. - -The set TSP has many interesting applications in path planning problems. For example, a two-vehicle cooperative routing problem can be transformed into a set TSP, and tight lower bounds to the Dubins TSP and the generalized Dubins path problem can be computed by solving a set TSP. - -The one-dimensional cutting stock problem, as applied in the paper and plastic film industries, involves cutting jumbo rolls into smaller ones. This is done by generating cutting patterns, typically to minimise waste.
Once such a solution has been produced, one may seek to minimise the knife changes, by re-sequencing the patterns (up and down in the figure), or moving rolls left or right within each pattern. These moves do not affect the waste of the solution. - -In the above figure, patterns (width no more than 198) are rows; knife changes are indicated by the small white circles; for example, patterns 2-3-4 have a roll of size 42.5 on the left - the corresponding knife does not have to move. Each pattern represents a TSP set, one of whose permutations must be visited. For instance, for the last pattern, which contains two repeated sizes (twice each), there are 5! / (2! × 2!) = 30 permutations. The number of possible solutions to the above instance is 12! × (5!)^6 × (6!)^4 × (7!)^2 / ((2!)^9 × (3!)^2) ≈ 5.3 × 10^35. The above solution contains 39 knife changes, and has been obtained by a heuristic; it is not known whether this is optimal. Transformations into the regular TSP, as described above, would involve a TSP with 5,520 nodes. diff --git a/wiki/wikipedia/3278.txt b/wiki/wikipedia/3278.txt deleted file mode 100644 index d5932f059b31371d30cd54923ea9a794f7d606ed..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3278.txt +++ /dev/null @@ -1,72 +0,0 @@ -In probability theory, the craps principle is a theorem about event probabilities under repeated i.i.d. trials. Let $E_1$ and $E_2$ denote two mutually exclusive events which might occur on a given trial. Then the probability that $E_1$ occurs before $E_2$ equals the conditional probability that $E_1$ occurs given that $E_1$ or $E_2$ occurs on the next trial, which is -$$ -\operatorname{P}[ E_1 \text{ before } E_2]=\operatorname{P}\left[E_1\mid E_1\cup E_2\right]=\frac{\operatorname{P}[E_1]}{\operatorname{P}[E_1]+\operatorname{P}[E_2]} -$$ - -The events $E_1$ and $E_2$ need not be collectively exhaustive (if they are, the result is trivial). - -Let $A$ be the event that $E_1$ occurs before $E_2$. Let $B$ be the event that neither $E_1$ nor $E_2$ occurs on a given trial. Since $B$, $E_1$ and $E_2$ are mutually exclusive and collectively exhaustive for the first trial, we have -$$ - \operatorname{P}(A) = \operatorname{P}(E_1)\operatorname{P}(A \mid E_1) + \operatorname{P}(E_2)\operatorname{P}(A \mid E_2) + \operatorname{P}(B) \operatorname{P}(A \mid B) = \operatorname{P}(E_1) + \operatorname{P}(B) \operatorname{P}(A \mid B) -$$ - -and $\operatorname{P}(B) = 1 - \operatorname{P}(E_1) - \operatorname{P}(E_2)$. - -Since the trials are i.i.d., we have $\operatorname{P}(A \mid B) = \operatorname{P}(A)$. Using $\operatorname{P}(A|E_1)=1,\quad \operatorname{P}(A|E_2)=0$ and solving the displayed equation for $\operatorname{P}(A)$ gives the formula -$$ -\operatorname{P}(A) = \frac{\operatorname{P}(E_1)}{\operatorname{P}(E_1)+\operatorname{P}(E_2)} -$$. - -If the trials are repetitions of a game between two players, and the events are -$$ -E_1:\mathrm{ player\ 1\ wins} -$$ -$$ -E_2:\mathrm{ player\ 2\ wins} -$$ - -then the craps principle gives the respective conditional probabilities of each player winning a certain repetition, given that someone wins (i.e., given that a draw does not occur). In fact, the result is only affected by the relative marginal probabilities of winning $\operatorname{P}[E_1]$ and $\operatorname{P}[E_2]$ ; in particular, the probability of a draw is irrelevant. - -If the game is played repeatedly until someone wins, then the conditional probability above is the probability that the player wins the game.
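A short Monte Carlo sketch (our own illustration, not part of the original article) makes the principle concrete: with per-trial probabilities p1 and p2 for the mutually exclusive events E1 and E2, the estimated probability that E1 occurs before E2 should approach p1/(p1 + p2):

import random

def first_event(p1, p2):
    # repeat i.i.d. trials until E1 or E2 occurs; return True if E1 came first
    while True:
        u = random.random()
        if u < p1:
            return True       # E1 occurred on this trial
        if u < p1 + p2:
            return False      # E2 occurred on this trial

p1, p2 = 3/36, 6/36           # e.g. rolling a 4 vs. rolling a 7 with two dice
trials = 100_000
estimate = sum(first_event(p1, p2) for _ in range(trials)) / trials
print(estimate, "vs", p1 / (p1 + p2))   # both should be near 1/3

Note that the probability of "neither event" per trial never enters the final answer, exactly as the principle asserts.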
This is illustrated below for the original game of craps, using an alternative proof. - -If the game being played is craps, then this principle can greatly simplify the computation of the probability of winning in a certain scenario. Specifically, if the first roll is a 4, 5, 6, 8, 9, or 10, then the dice are repeatedly re-rolled until one of two events occurs: -$$ -E_1:\text{ the original roll (called 'the point') is rolled (a win) } -$$ -$$ -E_2:\text{ a 7 is rolled (a loss) } -$$ - -Since $E_1$ and $E_2$ are mutually exclusive, the craps principle applies. For example, if the original roll was a 4, then the probability of winning is -$$ -\frac{3/36}{3/36 + 6/36}=\frac{1}{3} -$$ - -This avoids having to sum the infinite series corresponding to all the possible outcomes: -$$ -\sum_{i=0}^{\infty}\operatorname{P}[\text{first } i \text{ rolls are ties, } (i+1)^{\text{th}} \text{ roll is 'the point'}] -$$ - -Mathematically, we can express the probability of rolling $i$ ties followed by rolling the point: - -\operatorname{P}[\text{first } i \text{ rolls are ties, } (i+1)^{\text{th}} \text{ roll is 'the point'}] - -= (1-\operatorname{P}[E_1]-\operatorname{P}[E_2])^i\operatorname{P}[E_1] - - - -The summation becomes an infinite geometric series: - -\sum_{i=0}^{\infty} (1-\operatorname{P}[E_1]-\operatorname{P}[E_2])^i\operatorname{P}[E_1] - -= \operatorname{P}[E_1] \sum_{i=0}^{\infty} (1-\operatorname{P}[E_1]-\operatorname{P}[E_2])^i - - - - = \frac{\operatorname{P}[E_1]}{1-(1-\operatorname{P}[E_1]-\operatorname{P}[E_2])} - -= \frac{\operatorname{P}[E_1]}{\operatorname{P}[E_1]+\operatorname{P}[E_2]} - - - -which agrees with the earlier result. diff --git a/wiki/wikipedia/3279.txt b/wiki/wikipedia/3279.txt deleted file mode 100644 index e933ceead7544bf401abe8a733fcffd9113ca4bb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3279.txt +++ /dev/null @@ -1,55 +0,0 @@ -First-fit (FF) is an online algorithm for bin packing. Its input is a list of items of different sizes. Its output is a packing - a partition of the items into bins of fixed capacity, such that the sum of sizes of items in each bin is at most the capacity. Ideally, we would like to use as few bins as possible, but minimizing the number of bins is an NP-hard problem. The first-fit algorithm uses the following heuristic: - -* It keeps a list of open bins, which is initially empty. - -* When an item arrives, it finds the first bin into which the item can fit, if any. - -** If such a bin is found, the new item is placed inside it. - -** Otherwise, a new bin is opened and the new item is placed inside it. - -Denote by FF(L) the number of bins used by First-Fit, and by OPT(L) the optimal number of bins possible for the list L. The analysis of FF(L) was done in several steps. - -* The first upper bound of $FF(L) \leq 1.7\mathrm{OPT}+3$ for FF was proven by Ullman in 1971. - -* In 1972, this upper bound was improved to $FF(L) \leq 1.7\mathrm{OPT}+2$ by Garey, Graham and Ullman, Johnson and Demers. - -* In 1976, it was improved by Garey, Graham, Johnson, and Yao to $FF(L) \leq \lceil 1.7\mathrm{OPT}\rceil$, which is equivalent to $FF(L) \leq 1.7\mathrm{OPT}+0.9$ due to the integrality of $FF(L)$ and $\mathrm{OPT}$. - -* The next improvement, by Xia and Tan in 2010, lowered the bound to $FF(L) \leq 1.7\mathrm{OPT}+0.7$. - -* Finally, in 2013, this bound was improved to $FF(L) \leq \lfloor 1.7\mathrm{OPT}\rfloor$ by Dósa and Sgall. They also present an example input list $L$, for which $FF(L)$ matches this bound.
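The heuristic above translates directly into code. The following is a minimal Python sketch of our own (bin capacity normalized to 1; item sizes are floats in (0, 1]):

def first_fit(items, capacity=1.0):
    bins = []                               # each bin is the list of items placed in it
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:   # first open bin with enough room
                b.append(item)
                break
        else:                               # no open bin fits: open a new bin
            bins.append([item])
    return bins

print(first_fit([0.6, 0.5, 0.4, 0.3, 0.2]))
# [[0.6, 0.4], [0.5, 0.3, 0.2]] -- two bins, which here happens to be optimal

Scanning the open bins in order of creation is what makes this "first-fit"; keeping the bins sorted by residual capacity instead would give the related best-fit heuristic.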
- -Refined-First-Fit (RFF) is another online algorithm for bin packing that improves on the previously developed FF algorithm. It was presented by Andrew Chi-Chih Yao. - -The items are categorized into four classes, according to their sizes: - -* $A$-piece - size in $(1/2,1]$. - -* $B_1$-piece - size in $(2/5,1/2]$. - -* $B_2$-piece - size in $(1/3,2/5]$. - -* $X$-piece - size in $(0,1/3]$. - -Similarly, the bins are categorized into four classes: 1, 2, 3 and 4. - -Let $m \in \{6,7,8,9\}$ be a fixed integer. The next item $i \in L$ is assigned to a bin in: - -* Class 1, if $i$ is an $A$-piece, - -* Class 2, if $i$ is a $B_1$-piece, - -* Class 3, if $i$ is a $B_2$-piece, but not the $(mk)$th $B_2$-piece seen so far, for any integer $k \geq 1$, - -* Class 1, if $i$ is the $(mk)$th $B_2$-piece seen so far, - -* Class 4, if $i$ is an $X$-piece. - -Once the class of the item is selected, it is placed inside bins of that class using first-fit bin packing. - -Note that RFF is not an Any-Fit algorithm since it may open a new bin despite the fact that the current item fits inside an open bin (from another class). - -RFF has an approximation guarantee of $RFF(L) \leq (5/3) \cdot \mathrm{OPT}(L) +5 $. There exists a family of lists $L_k$ with $RFF(L_k) = (5/3)\mathrm{OPT}(L_k) +1/3$ for $\mathrm{OPT}(L) = 6k+1$. diff --git a/wiki/wikipedia/328.txt b/wiki/wikipedia/328.txt deleted file mode 100644 index ccd06dda1376dbc50765c38551faff5b713f0f7b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/328.txt +++ /dev/null @@ -1,3 +0,0 @@ -In group theory, Hajós's theorem states that if a finite abelian group is expressed as the Cartesian product of simplexes, that is, sets of the form $\{e,a,a^2,\dots,a^{s-1}\}$ where $e$ is the identity element, then at least one of the factors is a subgroup. The theorem was proved by the Hungarian mathematician György Hajós in 1941 using group rings. Rédei later proved the statement when the factors are only required to contain the identity element and be of prime cardinality. Rédei's proof of Hajós's theorem was simplified by Tibor Szele. - -An equivalent statement on homogeneous linear forms was originally conjectured by Hermann Minkowski. A consequence is Minkowski's conjecture on lattice tilings, which says that in any lattice tiling of space by cubes, there are two cubes that meet face to face. Keller's conjecture is the same conjecture for non-lattice tilings, which turns out to be false in high dimensions. diff --git a/wiki/wikipedia/3280.txt b/wiki/wikipedia/3280.txt deleted file mode 100644 index f26474b733aa927bbd16374febf65b4d4ed98e07..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3280.txt +++ /dev/null @@ -1,183 +0,0 @@ -In computer science, the Floyd–Warshall algorithm (also known as Floyd's algorithm, the Roy–Warshall algorithm, the Roy–Floyd algorithm, or the WFI algorithm) is an algorithm for finding shortest paths in a directed weighted graph with positive or negative edge weights (but with no negative cycles). A single execution of the algorithm will find the lengths (summed weights) of shortest paths between all pairs of vertices. Although it does not return details of the paths themselves, it is possible to reconstruct the paths with simple modifications to the algorithm. Versions of the algorithm can also be used for finding the transitive closure of a relation $R$, or (in connection with the Schulze voting system) widest paths between all pairs of vertices in a weighted graph.
- -The Floyd–Warshall algorithm is an example of dynamic programming, and was published in its currently recognized form by Robert Floyd in 1962. However, it is essentially the same as algorithms previously published by Bernard Roy in 1959 and also by Stephen Warshall in 1962 for finding the transitive closure of a graph, and is closely related to Kleene's algorithm (published in 1956) for converting a deterministic finite automaton into a regular expression. The modern formulation of the algorithm as three nested for-loops was first described by Peter Ingerman, also in 1962. - -The Floyd–Warshall algorithm compares all possible paths through the graph between each pair of vertices. It is able to do this with $\Theta(|V|^3)$ comparisons in a graph, even though there may be up to $\Omega (|V|^2)$ edges in the graph, and every combination of edges is tested. It does so by incrementally improving an estimate on the shortest path between two vertices, until the estimate is optimal. - -Consider a graph $G$ with vertices $V$ numbered 1 through $N$. Further consider a function $\mathrm{shortestPath}(i,j,k)$ that returns the shortest possible path from $i$ to $j$ using vertices only from the set $\{1,2,\ldots,k\}$ as intermediate points along the way. Now, given this function, our goal is to find the shortest path from each $i$ to each $j$ using any vertex in $\{1,2,\ldots,N\}$. - -For each of these pairs of vertices, the $\mathrm{shortestPath}(i,j,k)$ could be either - -(1) a path that does not go through $k$ (only using vertices in the set $\{1,\ldots,k-1\}$) - -or - -(2) a path that does go through $k$ (from $i$ to $k$ and then from $k$ to $j$, both only using intermediate vertices in $\{1,\ldots,k-1\}$) - -We know that the best path from $i$ to $j$ that only uses vertices $1$ through $k-1$ is defined by $\mathrm{shortestPath}(i,j,k-1)$, and it is clear that if there were a better path from $i$ to $k$ to $j$, then this path would be the concatenation of the shortest path from $i$ to $k$ (only using intermediate vertices in $\{1,\ldots,k-1\}$) and the shortest path from $k$ to $j$ (only using intermediate vertices in $\{1,\ldots,k-1\}$). - -If $w(i,j)$ is the weight of the edge between vertices $i$ and $j$, we can define $\mathrm{shortestPath}(i,j,k)$ in terms of the following recursive formula: the base case is -$$ -\mathrm{shortestPath}(i,j,0) = w(i,j) -$$ - -and the recursive case is -$$ -\mathrm{shortestPath}(i,j,k) = \mathrm{min}\Big(\mathrm{shortestPath}(i,j,k-1), \mathrm{shortestPath}(i,k,k-1)+\mathrm{shortestPath}(k,j,k-1)\Big). -$$ - -This formula is the heart of the Floyd–Warshall algorithm. The algorithm works by first computing $\mathrm{shortestPath}(i,j,k)$ for all $(i,j)$ pairs for $k=1$, then $k=2$, and so on. This process continues until $k=N$, and we have found the shortest path for all $(i,j)$ pairs using any intermediate vertices. Pseudocode for this basic version follows: - -let dist be a |V| × |V| array of minimum distances initialized to ∞ (infinity) - -for each edge (u, v) do - -dist[u][v] ← w(u, v) // The weight of the edge (u, v) - -for each vertex v do - -dist[v][v] ← 0 - -for k from 1 to |V| - -for i from 1 to |V| - -for j from 1 to |V| - -if dist[i][j] > dist[i][k] + dist[k][j] - -dist[i][j] ← dist[i][k] + dist[k][j] - -end if - -The algorithm above is executed on an example graph: - -Prior to the first iteration of the outer loop, labeled k = 0 above, the only known paths correspond to the single edges in the graph.
At k = 1, paths that go through the vertex 1 are found: in particular, the path [2,1,3] is found, replacing the path [2,3] which has fewer edges but is longer (in terms of weight). At k = 2, paths going through the vertices {1,2} are found. The red and blue boxes show how the path [4,2,1,3] is assembled from the two known paths [4,2] and [2,1,3] encountered in previous iterations, with 2 in the intersection. The path [4,2,3] is not considered, because [2,1,3] is the shortest path encountered so far from 2 to 3. At k = 3, paths going through the vertices {1,2,3} are found. Finally, at k = 4, all shortest paths are found. - -The distance matrix is updated at each iteration of k, with the newly improved distances replacing the previous estimates. - -A negative cycle is a cycle whose edges sum to a negative value. There is no shortest path between any pair of vertices $i$, $j$ which form part of a negative cycle, because path-lengths from $i$ to $j$ can be arbitrarily small (negative). For numerically meaningful output, the Floyd–Warshall algorithm assumes that there are no negative cycles. Nevertheless, if there are negative cycles, the Floyd–Warshall algorithm can be used to detect them. The intuition is as follows: - -* The Floyd–Warshall algorithm iteratively revises path lengths between all pairs of vertices $(i,j)$, including where $i=j$; - -* Initially, the length of the path $(i,i)$ is zero; - -* A path $[i,k,\ldots,i]$ can only improve upon this if it has length less than zero, i.e. denotes a negative cycle; - -* Thus, after the algorithm, $(i,i)$ will be negative if there exists a negative-length path from $i$ back to $i$. - -Hence, to detect negative cycles using the Floyd–Warshall algorithm, one can inspect the diagonal of the path matrix, and the presence of a negative number indicates that the graph contains at least one negative cycle. During the execution of the algorithm, if there is a negative cycle, exponentially large numbers can appear, as large as $\Omega(6^{n-1} w_{max})$, where $w_{max}$ is the largest absolute value of a negative edge in the graph. To avoid overflow/underflow problems one should check for negative numbers on the diagonal of the path matrix within the inner for loop of the algorithm. Obviously, in an undirected graph a negative edge creates a negative cycle (i.e., a closed walk) involving its incident vertices. If all edges of the above example graph are considered as undirected, then e.g. the vertex sequence 4 – 2 – 4 is a cycle with weight sum −2. - -The Floyd–Warshall algorithm typically only provides the lengths of the paths between all pairs of vertices. With simple modifications, it is possible to create a method to reconstruct the actual path between any two endpoint vertices. While one may be inclined to store the actual path from each vertex to each other vertex, this is not necessary, and in fact, is very costly in terms of memory. Instead, the shortest-path tree can be calculated for each node in $\Theta(|E|)$ time using $\Theta(|V|)$ memory to store each tree, which allows us to efficiently reconstruct a path between any two connected vertices.
- -let dist be a $|V| \times |V|$ array of minimum distances initialized to $\infty$ (infinity) - -let next be a $|V| \times |V|$ array of vertex indices initialized to null - -procedure FloydWarshallWithPathReconstruction() is - -for each edge (u, v) do - -dist[u][v] ← w(u, v) // The weight of the edge (u, v) - -next[u][v] ← v - -for each vertex v do - -dist[v][v] ← 0 - -next[v][v] ← v - -for k from 1 to |V| do // standard Floyd-Warshall implementation - -for i from 1 to |V| - -for j from 1 to |V| - -if dist[i][j] > dist[i][k] + dist[k][j] then - -dist[i][j] ← dist[i][k] + dist[k][j] - -next[i][j] ← next[i][k] - -procedure Path(u, v) - -if next[u][v] = null then - -return [] - -path = [u] - -while u ≠ v - -u ← next[u][v] - -path.append(u) - -return path - -Let $n$ be $|V|$, the number of vertices. To find all $n^2$ of -$$ -\mathrm{shortestPath}(i,j,k) -$$ (for all $i$ and $j$) from those of -$$ -\mathrm{shortestPath}(i,j,k-1) -$$ requires $2n^2$ operations. Since we begin with -$$ -\mathrm{shortestPath}(i,j,0) = \mathrm{edgeCost}(i,j) -$$ and compute the sequence of $n$ matrices $\mathrm{shortestPath}(i,j,1)$, $\mathrm{shortestPath}(i,j,2)$, $\ldots$, $\mathrm{shortestPath}(i,j,n)$, the total number of operations used is -$$ -n \cdot 2n^2 = 2n^3 -$$. Therefore, the complexity of the algorithm is $\Theta(n^3)$. - -The Floyd–Warshall algorithm can be used to solve the following problems, among others: - -* Shortest paths in directed graphs (Floyd's algorithm). - -* Transitive closure of directed graphs (Warshall's algorithm). In Warshall's original formulation of the algorithm, the graph is unweighted and represented by a Boolean adjacency matrix. Then the addition operation is replaced by logical conjunction (AND) and the minimum operation by logical disjunction (OR). - -* Finding a regular expression denoting the regular language accepted by a finite automaton (Kleene's algorithm, a closely related generalization of the Floyd–Warshall algorithm) - -* Inversion of real matrices (Gauss–Jordan algorithm) - -* Optimal routing. In this application one is interested in finding the path with the maximum flow between two vertices. This means that, rather than taking minima as in the pseudocode above, one instead takes maxima. The edge weights represent fixed constraints on flow. Path weights represent bottlenecks; so the addition operation above is replaced by the minimum operation. - -* Fast computation of Pathfinder networks. - -* Widest paths/Maximum bandwidth paths - -* Computing canonical form of difference bound matrices (DBMs) - -* Computing the similarity between graphs - -* Transitive closure in AND/OR/threshold graphs. - -Implementations are available for many programming languages. - -* For C++, in the library - -* For C#, at - -* For C#, at (A fork of QuickGraph with better compatibility with projects using Portable Class Libraries.) - -* For Java, in the library - -* For JavaScript, in the Cytoscape library - -* For MATLAB, in the package - -* For Perl, in the module - -* For Python, in the SciPy library (module ) or NetworkX library - -* For R, in packages and - -The Floyd–Warshall algorithm is a good choice for computing paths between all pairs of vertices in dense graphs, in which most or all pairs of vertices are connected by edges. 
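For reference, the basic pseudocode above can be transcribed into a short self-contained Python function. This is a sketch of our own, unrelated to the library implementations listed above:

def floyd_warshall(w):
    # w: n-by-n matrix of edge weights, with w[i][i] = 0 and
    # float('inf') marking absent edges
    n = len(w)
    dist = [row[:] for row in w]      # copy so the input is not mutated
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

INF = float('inf')
graph = [[0, 3, INF, 7],
         [8, 0, 2, INF],
         [5, INF, 0, 1],
         [2, INF, INF, 0]]
for row in floyd_warshall(graph):
    print(row)   # all-pairs shortest path lengths

The example weights here are arbitrary; since float('inf') propagates correctly through addition and comparison, no special casing of missing edges is needed.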
For sparse graphs with non-negative edge weights, lower asymptotic complexity can be obtained by running Dijkstra's algorithm from each possible starting vertex, since the worst-case running time of repeated Dijkstra ($O(|E||V|+|V|^2\log|V|)$ using Fibonacci heaps) is smaller than the $O(|V|^3)$ running time of the Floyd–Warshall algorithm when $|E|$ is significantly smaller than $|V|^2$. For sparse graphs with negative edges but no negative cycles, Johnson's algorithm can be used, with the same asymptotic running time as the repeated Dijkstra approach. - -There are also known algorithms using fast matrix multiplication to speed up all-pairs shortest path computation in dense graphs, but these typically make extra assumptions on the edge weights (such as requiring them to be small integers). In addition, because of the high constant factors in their running time, they would only provide a speedup over the Floyd–Warshall algorithm for very large graphs. diff --git a/wiki/wikipedia/3281.txt b/wiki/wikipedia/3281.txt deleted file mode 100644 index 7ecd51a8c45a6be9c017c099c17632383f03dc1f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3281.txt +++ /dev/null @@ -1,10 +0,0 @@ -In graph theory, the meshedness coefficient is a graph invariant of planar graphs that measures the number of bounded faces of the graph, as a fraction of the possible number of faces for other planar graphs with the same number of vertices. It ranges from 0 for trees to 1 for maximal planar graphs. - -The meshedness coefficient is used to compare the general cycle structure of a connected planar graph to two extreme reference cases. At one end are trees, planar graphs with no cycles; at the other are maximal planar graphs, which have as many bounded faces as possible. It has also been used to characterize the network structure of streets in urban areas. - -Using the definition of the average degree $\langle k\rangle = 2m/n $, one can see that in the limit of large graphs (number of edges $m \gg 1$) the meshedness tends to -$$ -\alpha \approx \frac{\langle k\rangle}{4} - \frac{1}{2} -$$ - -Thus, for large graphs, the meshedness does not carry more information than the average degree. diff --git a/wiki/wikipedia/3282.txt b/wiki/wikipedia/3282.txt deleted file mode 100644 index 6f608f7b89d8602d9a8ed7e07283b0b9c4126da2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3282.txt +++ /dev/null @@ -1,3 +0,0 @@ -Given two graphs $G$ and $G'$, the maximum common edge subgraph problem is the problem of finding a graph $H$ with as many edges as possible which is isomorphic to both a subgraph of $G$ and a subgraph of $G'$. - -The maximum common edge subgraph problem on general graphs is NP-complete as it is a generalization of subgraph isomorphism: a graph $H$ is isomorphic to a subgraph of another graph $G$ if and only if the maximum common edge subgraph of $G$ and $H$ has the same number of edges as $H$. Unless the two inputs $G$ and $G'$ to the maximum common edge subgraph problem are required to have the same number of vertices, the problem is APX-hard. diff --git a/wiki/wikipedia/3283.txt b/wiki/wikipedia/3283.txt deleted file mode 100644 index 7935216d5ba99e765b621badca4ee51809f13ff9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3283.txt +++ /dev/null @@ -1,71 +0,0 @@ -In the mathematical field of graph theory, a Hamiltonian path (or traceable path) is a path in an undirected or directed graph that visits each vertex exactly once. A Hamiltonian cycle (or Hamiltonian circuit) is a Hamiltonian path that is a cycle.
Determining whether such paths and cycles exist in graphs is the Hamiltonian path problem, which is NP-complete. - -Hamiltonian paths and cycles are named after William Rowan Hamilton who invented the icosian game, now also known as Hamilton's puzzle, which involves finding a Hamiltonian cycle in the edge graph of the dodecahedron. Hamilton solved this problem using the icosian calculus, an algebraic structure based on roots of unity with many similarities to the quaternions (also invented by Hamilton). This solution does not generalize to arbitrary graphs. - -Despite being named after Hamilton, Hamiltonian cycles in polyhedra had also been studied a year earlier by Thomas Kirkman, who, in particular, gave an example of a polyhedron without Hamiltonian cycles. Even earlier, Hamiltonian cycles and paths in the knight's graph of the chessboard, the knight's tour, had been studied in the 9th century in Indian mathematics by Rudrata, and around the same time in Islamic mathematics by al-Adli ar-Rumi. In 18th century Europe, knight's tours were published by Abraham de Moivre and Leonhard Euler. - -A Hamiltonian path or traceable path is a path that visits each vertex of the graph exactly once. A graph that contains a Hamiltonian path is called a traceable graph. A graph is Hamiltonian-connected if for every pair of vertices there is a Hamiltonian path between the two vertices. - -A Hamiltonian cycle, Hamiltonian circuit, vertex tour or graph cycle is a cycle that visits each vertex exactly once. A graph that contains a Hamiltonian cycle is called a Hamiltonian graph. - -Similar notions may be defined for directed graphs, where each edge (arc) of a path or cycle can only be traced in a single direction (i.e., the vertices are connected with arrows and the edges traced "tail-to-head"). - -A Hamiltonian decomposition is an edge decomposition of a graph into Hamiltonian circuits. - -A Hamilton maze is a type of logic puzzle in which the goal is to find the unique Hamiltonian cycle in a given graph. - -* A complete graph with more than two vertices is Hamiltonian - -* Every cycle graph is Hamiltonian - -* Every tournament has an odd number of Hamiltonian paths (Rédei 1934) - -* Every platonic solid, considered as a graph, is Hamiltonian - -* The Cayley graph of a finite Coxeter group is Hamiltonian (For more information on Hamiltonian paths in Cayley graphs, see the Lovász conjecture.) - -* Cayley graphs on nilpotent groups with cyclic commutator subgroup are Hamiltonian. - -* The flip graph of a convex polygon or equivalently, the rotation graph of binary trees, is Hamiltonian. - -Any Hamiltonian cycle can be converted to a Hamiltonian path by removing one of its edges, but a Hamiltonian path can be extended to Hamiltonian cycle only if its endpoints are adjacent. - -All Hamiltonian graphs are biconnected, but a biconnected graph need not be Hamiltonian (see, for example, the Petersen graph). - -An Eulerian graph G (a connected graph in which every vertex has even degree) necessarily has an Euler tour, a closed walk passing through each edge of G exactly once. This tour corresponds to a Hamiltonian cycle in the line graph L(G), so the line graph of every Eulerian graph is Hamiltonian. Line graphs may have other Hamiltonian cycles that do not correspond to Euler tours, and in particular the line graph L(G) of every Hamiltonian graph G is itself Hamiltonian, regardless of whether the graph G is Eulerian. 
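Since the problem is NP-complete, small cases can still be settled by brute force. The following sketch (our own illustration, not from the original article) tests Hamiltonicity by trying all vertex orderings, confirming that the Petersen graph mentioned above has no Hamiltonian cycle while the 3-dimensional hypercube graph does:

from itertools import permutations

def is_hamiltonian(n, edges):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for perm in permutations(range(1, n)):   # fix vertex 0 to skip rotations
        cycle = (0,) + perm + (0,)
        if all(cycle[i + 1] in adj[cycle[i]] for i in range(n)):
            return True
    return False

petersen = ([(i, (i + 1) % 5) for i in range(5)] +           # outer 5-cycle
            [(5 + i, 5 + (i + 2) % 5) for i in range(5)] +   # inner pentagram
            [(i, i + 5) for i in range(5)])                  # spokes
print(is_hamiltonian(10, petersen))   # False: biconnected but not Hamiltonian

cube = [(u, v) for u in range(8) for v in range(u + 1, 8)
        if bin(u ^ v).count("1") == 1]                       # 3-cube graph Q3
print(is_hamiltonian(8, cube))        # True

The factorial running time of this enumeration is precisely why the degree conditions discussed below, which certify Hamiltonicity without any search, are valuable.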
- -A tournament (with more than two vertices) is Hamiltonian if and only if it is strongly connected. - -The number of different Hamiltonian cycles in a complete undirected graph on n vertices is $(n-1)!/2$ and in a complete directed graph on n vertices is $(n-1)!$. These counts assume that cycles that are the same apart from their starting point are not counted separately. - -The best vertex degree characterization of Hamiltonian graphs was provided in 1972 by the Bondy–Chvátal theorem, which generalizes earlier results by G. A. Dirac (1952) and Øystein Ore. Both Dirac's and Ore's theorems can also be derived from Pósa's theorem (1962). Hamiltonicity has been widely studied with relation to various parameters such as graph density, toughness, forbidden subgraphs, and distance, among other parameters. Dirac and Ore's theorems basically state that a graph is Hamiltonian if it has enough edges. - -The Bondy–Chvátal theorem operates on the closure cl(G) of a graph G with n vertices, obtained by repeatedly adding a new edge uv connecting a nonadjacent pair of vertices u and v with deg(v) + deg(u) ≥ n until no more pairs with this property can be found. - -Bondy–Chvátal Theorem (1976). A graph is Hamiltonian if and only if its closure is Hamiltonian. - -As complete graphs are Hamiltonian, all graphs whose closure is complete are Hamiltonian, which is the content of the following earlier theorems by Dirac and Ore. - -Dirac's Theorem (1952). A simple graph with n vertices ($n\geq 3$) is Hamiltonian if every vertex has degree $\tfrac{n}{2}$ or greater. - -Ore's Theorem (1960). A simple graph with n vertices ($n\geq 3$) is Hamiltonian if, for every pair of non-adjacent vertices, the sum of their degrees is n or greater. - -The following theorems can be regarded as directed versions: - -Ghouila-Houiri (1960). A strongly connected simple directed graph with n vertices is Hamiltonian if every vertex has a full degree greater than or equal to n. - -Meyniel (1973). A strongly connected simple directed graph with n vertices is Hamiltonian if the sum of full degrees of every pair of distinct non-adjacent vertices is greater than or equal to $2n-1$. - -The number of vertices must be doubled because each undirected edge corresponds to two directed arcs and thus the degree of a vertex in the directed graph is twice the degree in the undirected graph. - -Rahman–Kaykobad (2005). A simple graph with n vertices has a Hamiltonian path if, for every pair of non-adjacent vertices, the sum of their degrees and their shortest path length is greater than n. - -The above theorem can only recognize the existence of a Hamiltonian path in a graph and not a Hamiltonian cycle. - -Many of these results have analogues for balanced bipartite graphs, in which the vertex degrees are compared to the number of vertices on a single side of the bipartition rather than the number of vertices in the whole graph. - -Theorem (Whitney, 1931). A 4-connected planar triangulation has a Hamiltonian cycle. - -Theorem (Tutte, 1956). A 4-connected planar graph has a Hamiltonian cycle. - -An algebraic representation of the Hamiltonian cycles of a given weighted digraph (whose arcs are assigned weights from a certain ground field) is the Hamiltonian cycle polynomial of its weighted adjacency matrix defined as the sum of the products of the arc weights of the digraph's Hamiltonian cycles. This polynomial is not identically zero as a function in the arc weights if and only if the digraph is Hamiltonian.
The relationship between the computational complexities of computing it and computing the permanent was shown by Grigoriy Kogan. diff --git a/wiki/wikipedia/3284.txt b/wiki/wikipedia/3284.txt deleted file mode 100644 index af0d5310bd978b7b24710aade29a92cad5cb8a02..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3284.txt +++ /dev/null @@ -1,53 +0,0 @@ -In mathematics, the Babuška–Lax–Milgram theorem is a generalization of the famous Lax–Milgram theorem, which gives conditions under which a bilinear form can be "inverted" to show the existence and uniqueness of a weak solution to a given boundary value problem. The result is named after the mathematicians Ivo Babuška, Peter Lax and Arthur Milgram. - -In the modern, functional-analytic approach to the study of partial differential equations, one does not attempt to solve a given partial differential equation directly, but instead uses the structure of the vector space of possible solutions, e.g. a Sobolev space $W^{k,p}$. Abstractly, consider two real normed spaces U and V with their continuous dual spaces U* and V* respectively. In many applications, U is the space of possible solutions; given some partial differential operator Λ : U → V* and a specified element f ∈ V*, the objective is to find a u ∈ U such that -$$ -\Lambda u = f. -$$ - -However, in the weak formulation, this equation is only required to hold when "tested" against all other possible elements of V. This "testing" is accomplished by means of a bilinear function B : U × V → R which encodes the differential operator Λ; a weak solution to the problem is a u ∈ U such that -$$ -B(u, v) = \langle f, v \rangle \mbox{ for all } v \in V. -$$ - -The achievement of Lax and Milgram in their 1954 result was to specify sufficient conditions for this weak formulation to have a unique solution that depends continuously upon the specified datum f ∈ V*: it suffices that U = V is a Hilbert space, that B is continuous, and that B is strongly coercive, i.e. -$$ -| B(u, u) | \geq c \| u \|^{2} -$$ - -for some constant c > 0 and all u ∈ U. - -For example, in the solution of the Poisson equation on a bounded, open domain Ω ⊂ $R^n$, -$$ -\begin{cases} -\Delta u(x) = f(x), & x \in \Omega; \\ u(x) = 0, & x \in \partial \Omega; \end{cases} -$$ - -the space U could be taken to be the Sobolev space $H_0^1(\Omega)$ with dual $H^{-1}(\Omega)$; the former is a subspace of the $L^p$ space $V = L^2(\Omega)$; the bilinear form B associated to $-\Delta$ is the $L^2(\Omega)$ inner product of the derivatives: -$$ -B(u, v) = \int_{\Omega} \nabla u(x) \cdot \nabla v(x) \mathrm{d} x. -$$ - -Hence, the weak formulation of the Poisson equation, given $f \in L^2(\Omega)$, is to find $u_f$ such that -$$ -\int_{\Omega} \nabla u_{f}(x) \cdot \nabla v(x) \mathrm{d} x = \int_{\Omega} f(x) v(x) \mathrm{d} x \mbox{ for all } v \in H_{0}^{1} (\Omega). -$$ - -In 1971, Babuška provided the following generalization of Lax and Milgram's earlier result, which begins by dispensing with the requirement that U and V be the same space. Let U and V be two real Hilbert spaces and let B : U × V → R be a continuous bilinear functional. Suppose also that B is weakly coercive: for some constant c > 0 and all u ∈ U, -$$ -\sup_{\| v \| = 1} | B(u, v) | \geq c \| u \| -$$ - -and, for all 0 ≠ v ∈ V, -$$ -\sup_{\| u \| = 1} | B(u, v) | > 0 -$$ - -Then, for all f ∈ V*, there exists a unique solution u = u_f ∈ U to the weak problem -$$ -B(u_{f}, v) = \langle f, v \rangle \mbox{ for all } v \in V.
-$$ - -Moreover, the solution depends continuously on the given data: -$$ -\| u_{f} \| \leq \frac{1}{c} \| f \|. -$$ diff --git a/wiki/wikipedia/3285.txt b/wiki/wikipedia/3285.txt deleted file mode 100644 index 2e4973bb840d797549cda2a01c1295310eb638fa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3285.txt +++ /dev/null @@ -1,188 +0,0 @@ -In calculus, the squeeze theorem, also known as the pinching theorem, the sandwich theorem, the sandwich rule, the police theorem, the between theorem and sometimes the squeeze lemma, is a theorem regarding the limit of a function. In Italy, the theorem is also known as the theorem of carabinieri. - -The squeeze theorem is used in calculus and mathematical analysis. It is typically used to confirm the limit of a function via comparison with two other functions whose limits are known or easily computed. It was first used geometrically by the mathematicians Archimedes and Eudoxus in an effort to compute pi, and was formulated in modern terms by Carl Friedrich Gauss. - -In many languages (e.g. French, German, Italian, Hungarian and Russian), the squeeze theorem is also known as the two officers (and a drunk) theorem, or some variation thereof. The story is that if two police officers are escorting a drunk prisoner between them, and both officers go to a cell, then (regardless of the path taken, and the fact that the prisoner may be wobbling about between the officers) the prisoner must also end up in the cell. - -The squeeze theorem is formally stated as follows. - -{{math theorem| - -Let I be an interval having the point a as a limit point. Let g, f, and h be functions defined on I, except possibly at a itself. Suppose that for every x in I not equal to a, we have - -g(x) \leq f(x) \leq h(x) - -and also suppose that - -\lim_{x \to a} g(x) = \lim_{x \to a} h(x) = L. - -Then $\lim_{x \to a} f(x) = L.$ - -}} - -* The functions $g$ and $h$ are said to be lower and upper bounds (respectively) of $f$. - -* Here, $a$ is not required to lie in the interior of $I$. Indeed, if $a$ is an endpoint of $I$, then the above limits are left- or right-hand limits. - -* A similar statement holds for infinite intervals: for example, if $I=(0, \infty)$, then the conclusion holds, taking the limits as $x \to \infty$. - -This theorem is also valid for sequences. Let $(a_n), (c_n)$ be two sequences converging to $\ell$, and $(b_n)$ a sequence. If there exists $N\in\N$ such that $a_n\leq b_n\leq c_n$ for all $n\geq N$, then $(b_n)$ also converges to $\ell$. - -According to the above hypotheses we have, taking the limit inferior and superior: - -L=\lim_{x \to a} g(x)\leq\liminf_{x\to a}f(x) \leq \limsup_{x\to a}f(x)\leq \lim_{x \to a}h(x)=L, - -so all the inequalities are indeed equalities, and the claim immediately follows. - -A direct proof, using the $(\varepsilon, \delta)$-definition of limit, would be to prove that for all real $\varepsilon > 0$ there exists a real $\delta > 0$ such that for all $x$ with $|x - a| < \delta$, we have $|f(x) - L| < \varepsilon$. Symbolically, - - \forall \varepsilon > 0, \exists \delta > 0 : \forall x, (|x - a | < \delta \ \Rightarrow |f(x) - L |< \varepsilon). - -As - -\lim_{x \to a} g(x) = L - -means that - - \forall \varepsilon > 0, \exists \delta_1 > 0 : \forall x, (|x - a | < \delta_1 \ \Rightarrow \ -\varepsilon < g(x) - L < \varepsilon), - -and - -\lim_{x \to a} h(x) = L - -means that - - \forall \varepsilon > 0, \exists \delta_2 > 0 : \forall x, (|x - a | < \delta_2 \ \Rightarrow \ -\varepsilon < h(x) - L < \varepsilon), - -and since we have - -g(x) \leq f(x) \leq h(x) - -g(x) - L\leq f(x) - L\leq h(x) - L - -we can choose $\delta:=\min\left\{\delta_1,\delta_2\right\}$.
Then, if $|x - a| < \delta$, combining the two estimates above, we have - - - \varepsilon < g(x) - L\leq f(x) - L\leq h(x) - L\ < \varepsilon, - - - \varepsilon < f(x) - L < \varepsilon , - -which completes the proof. Q.E.D. - -The proof for sequences is very similar, using the $\varepsilon$-definition of a limit of a sequence. - -There is also the squeeze theorem for series, which can be stated as follows: if $\sum_n a_n, \sum_n c_n$ are two convergent series and $(b_n)$ is a sequence with $a_n\leq b_n\leq c_n$ for all sufficiently large $n$, then $\sum_n b_n$ converges. - -To prove this, note that since $\sum_n a_n$ and $\sum_n c_n$ converge, the sequences of partial sums $\left(\sum_{k=1}^n a_k\right)_{n=1}^{\infty}, \left(\sum_{k=1}^n c_k\right)_{n=1}^{\infty} $ are Cauchy. That is, for fixed $\varepsilon > 0 $, -$$ -\exists N_1\in\mathbb{N} -$$ such that - -\forall n>m>N_1, \left|\sum_{k=1}^na_k-\sum_{k=1}^ma_k\right|<\varepsilon \Longrightarrow \left|\sum_{k=m+1}^n a_k\right|<\varepsilon \Longrightarrow -\varepsilon<\sum_{k=m+1}^n a_k<\varepsilon - -and similarly $\exists N_2\in\N $ such that - -$\forall n>m>N_2, \left|\sum_{k=1}^nc_k-\sum_{k=1}^mc_k\right|<\varepsilon \Longrightarrow \left|\sum_{k=m+1}^n c_k\right| < \varepsilon \Longrightarrow -\varepsilon<\sum_{k=m+1}^n c_k < \varepsilon $ - -We know that $\exists N_3\in\N $ such that $\forall k>N_3, a_k\leq b_k \leq c_k $. Hence, $\forall n>m>\max\{N_1,N_2,N_3\} $, combining the two displayed estimates we have: - -a_k\leq b_k\leq c_k \Longrightarrow \sum_{k=m+1}^n a_k \leq \sum_{k=m+1}^n b_k \leq \sum_{k=m+1}^n c_k \Longrightarrow - \varepsilon < \sum_{k=m+1}^n b_k <\varepsilon \Longrightarrow \left|\sum_{k=m+1}^n b_k\right| < \varepsilon \Longrightarrow \left|\sum_{k=1}^n b_k-\sum_{k=1}^m b_k\right| < \varepsilon . - -Therefore $\left(\sum_{k=1}^n b_k\right)_{n=1}^{\infty} $ is a Cauchy sequence. So $\sum_n b_n $ converges. Q.E.D. - -The limit - -\lim_{x \to 0}x^2 \sin(\tfrac{1}{x}) - -cannot be determined through the limit law - -\lim_{x \to a}(f(x)\cdot g(x)) = - -\lim_{x \to a}f(x)\cdot \lim_{x \to a}g(x), - -because - -\lim_{x\to 0}\sin(\tfrac{1}{x}) - -does not exist. - -However, by the boundedness of the sine function, - --1 \le \sin(\tfrac{1}{x}) \le 1. - -It follows that - --x^2 \le x^2 \sin(\tfrac{1}{x}) \le x^2 - -Since $\lim_{x\to 0}-x^2 = \lim_{x\to 0}x^2 = 0$, by the squeeze theorem, $\lim_{x\to 0} x^2 \sin(\tfrac{1}{x})$ must also be 0. - -Probably the best-known examples of finding a limit by squeezing are the proofs of the equalities - - - -\begin{align} - -& \lim_{x\to 0} \frac{\sin(x)}{x} =1, \\[10pt] - -& \lim_{x\to 0} \frac{1 - \cos(x)}{x} = 0. - -\end{align} - - - -The first limit follows by means of the squeeze theorem from the fact that - - \cos x \leq \frac{\sin(x)}{x} \leq 1 - -for x close enough to 0. Its correctness for positive x can be seen by simple geometric reasoning, which can be extended to negative x as well. The second limit follows from the squeeze theorem and the fact that - - 0 \leq \frac{1 - \cos(x)}{x} \leq x - -for x close enough to 0. This can be derived by replacing $\sin(x)$ in the earlier fact by $ \sqrt{1-\cos^2(x)}$ and squaring the resulting inequality. - -These two limits are used in proofs of the fact that the derivative of the sine function is the cosine function. That fact is relied on in other proofs of derivatives of trigonometric functions. - -It is possible to show that - - \frac{d}{d\theta} \tan\theta = \sec^2\theta - -by squeezing, as follows.
- -In the illustration at right, the area of the smaller of the two shaded sectors of the circle is - - \frac{\sec^2\theta\Delta\theta}{2}, - -since the radius is sec θ and the arc on the unit circle has length Δθ. Similarly, the area of the larger of the two shaded sectors is - - \frac{\sec^2(\theta + \Delta\theta)\Delta\theta}{2}. - -What is squeezed between them is the triangle whose base is the vertical segment whose endpoints are the two dots. The length of the base of the triangle is tan(θ + Δθ) − tan(θ), and the height is 1. The area of the triangle is therefore - - \frac{\tan(\theta + \Delta\theta) - \tan(\theta)}{2}. - -From the inequalities - - \frac{\sec^2\theta\Delta\theta}{2} \le \frac{\tan(\theta + \Delta\theta) - \tan(\theta)}{2} \le \frac{\sec^2(\theta + \Delta\theta)\Delta\theta}{2} - -we deduce that - - \sec^2\theta \le \frac{\tan(\theta + \Delta\theta) - \tan(\theta)}{\Delta\theta} \le \sec^2(\theta + \Delta\theta), - -provided Δθ > 0, and the inequalities are reversed if Δθ < 0. Since the first and third expressions approach $\sec^2\theta$ as Δθ → 0, and the middle expression approaches $\tfrac{d}{d\theta} \tan\theta$, the desired result follows. - -The squeeze theorem can still be used in multivariable calculus, but the lower (and upper) functions must be below (and above) the target function not just along a path but around the entire neighborhood of the point of interest, and it only works if the function really does have a limit there. It can, therefore, be used to prove that a function has a limit at a point, but it can never be used to prove that a function does not have a limit at a point. - -\lim_{(x,y) \to (0, 0)} \frac{x^2 y}{x^2+y^2} - -cannot be found by taking any number of limits along paths that pass through the point, but since - -0 \leq \frac{x^2}{x^2+y^2} \leq 1 - --\left | y \right \vert \leq y \leq \left | y \right \vert - -it follows that - --\left | y \right \vert \leq \frac{x^2 y}{x^2+y^2} \leq \left | y \right \vert - -and since - -\lim_{(x,y) \to (0, 0)} -\left | y \right \vert = 0 - -\lim_{(x,y) \to (0, 0)} \left |y \right \vert = 0 - -we get - -0 \leq \lim_{(x,y) \to (0, 0)} \frac{x^2 y}{x^2+y^2} \leq 0 - -therefore, by the squeeze theorem, - -\lim_{(x,y) \to (0, 0)} \frac{x^2 y}{x^2+y^2} = 0 diff --git a/wiki/wikipedia/3286.txt b/wiki/wikipedia/3286.txt deleted file mode 100644 index 5159ba1175c9e233f0607aef9f10c5f024228325..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3286.txt +++ /dev/null @@ -1,19 +0,0 @@ -In the mathematical field of spectral graph theory, Brouwer's conjecture is a conjecture by Andries Brouwer on upper bounds for the intermediate sums of the eigenvalues of the Laplacian of a graph in terms of its number of edges. - -The conjecture states that if G is a simple undirected graph and L(G) its Laplacian matrix, then its eigenvalues $\lambda_n(L(G)) \leq \lambda_{n-1}(L(G)) \leq \cdots \leq \lambda_1(L(G))$ satisfy - -\sum_{i=1}^{t}\lambda_{i}(L(G))\leq m(G)+\binom{t+1}{2},\quad t=1,\ldots,n - -where m(G) is the number of edges of G. - -Brouwer has confirmed by computation that the conjecture is valid for all graphs with at most 10 vertices. It is also known that the conjecture is valid for any number of vertices if t = 1, 2, n − 1, and n. - -For certain types of graphs, Brouwer's conjecture is known to be valid for all t and for any number of vertices. In particular, it is known to be valid for trees, and for unicyclic and bicyclic graphs.
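The conjectured bound is easy to test on small examples. A minimal sketch (ours, assuming Python with numpy; the helper name and the choice of a 5-cycle are purely illustrative) builds the Laplacian from an edge list, computes its eigenvalues, and checks $\sum_{i=1}^{t}\lambda_{i}\leq m+\binom{t+1}{2}$ for every $t$:

```python
import numpy as np

def brouwer_check(edges, n):
    """Check Brouwer's conjectured bound sum_{i<=t} lambda_i <= m + C(t+1, 2)
    for all t = 1..n, for a simple undirected graph given as an edge list."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1   # degree contributions on the diagonal
        L[v, v] += 1
        L[u, v] -= 1   # adjacency contributions off the diagonal
        L[v, u] -= 1
    lam = np.sort(np.linalg.eigvalsh(L))[::-1]  # eigenvalues, largest first
    m = len(edges)
    return all(lam[:t].sum() <= m + t * (t + 1) / 2 + 1e-9  # C(t+1,2) = t(t+1)/2
               for t in range(1, n + 1))

# 5-cycle C5: m = 5 edges on n = 5 vertices.
print(brouwer_check([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)], 5))  # True
```

Replacing the edge list with any small graph of interest reproduces Brouwer's case-by-case computational verification.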
It was also proved that Brouwer’s conjecture holds for two large families of graphs; the first family of graphs is obtained from a clique by identifying each of its vertices with a vertex of an arbitrary c-cyclic graph, and the second family is composed of the graphs in which the removal of the edges of the maximal complete bipartite subgraph gives a graph each of whose non-trivial components is a c-cyclic graph. - -For certain sequences of random graphs, Brouwer's conjecture holds true with probability tending to one as the number of vertices tends to infinity. diff --git a/wiki/wikipedia/3287.txt b/wiki/wikipedia/3287.txt deleted file mode 100644 index 4867d53d7be4cfa32b3762b55c4cbbefcb46a970..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3287.txt +++ /dev/null @@ -1,10 +0,0 @@ -In discrete geometry, Tverberg's theorem, first stated by Helge Tverberg in 1966, is the result that sufficiently many points in d-dimensional Euclidean space can be partitioned into subsets with intersecting convex hulls. Specifically, for any set of -$$ -(d + 1)(r - 1) + 1 -$$ - -points there exists a point x (not necessarily one of the given points) and a partition of the given points into r subsets, such that x belongs to the convex hull of all of the subsets. The partition resulting from this theorem is known as a Tverberg partition. - -For r = 2, Tverberg's theorem states that any d + 2 points may be partitioned into two subsets with intersecting convex hulls; this special case is known as Radon's theorem. In this case, for points in general position, there is a unique partition. - -The case r = 3 and d = 2 states that any seven points in the plane may be partitioned into three subsets with intersecting convex hulls. The illustration shows an example in which the seven points are the vertices of a regular heptagon. As the example shows, there may be many different Tverberg partitions of the same set of points; these seven points may be partitioned in seven different ways that differ from each other by rotations. diff --git a/wiki/wikipedia/3288.txt b/wiki/wikipedia/3288.txt deleted file mode 100644 index 5628ee145d5a615f088501b158e159a24903e0a2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3288.txt +++ /dev/null @@ -1,185 +0,0 @@ -SyncML (Synchronization Markup Language) is the former name for a platform-independent information synchronization standard. The project is currently referred to as Open Mobile Alliance Data Synchronization and Device Management. The purpose of SyncML is to offer an open standard as a replacement for existing data synchronization solutions, which have mostly been somewhat vendor-, application-, or operating-system-specific. The SyncML 1.0 specification was released on December 17, 2000, and 1.1 on February 26, 2002. - -SyncML works by exchanging commands, which can be requests and responses. As an example: - -* the mobile sends an Alert command for signaling the wish to begin a refresh-only synchronization - -* the computer responds with a Status command for accepting the request - -* the mobile sends one or more Sync commands containing an Add sub-command for each item (e.g., phonebook entry); if the number of entries is large, it does not include the <Final/> tag; - -* in the latter case, the computer requests to continue with an appropriate Alert message, and the mobile sends another chunk of items; otherwise, the computer confirms it received all data with a Status command - -Commands (Alert, Sync, Status, etc.) are grouped into messages.
Each message and each of its commands has an identifier, so that the pair (MsgID, CmdID) uniquely determines a command. Responses like Status commands include the pair identifying the command they are responding to. - -Before commands, messages contain a header specifying various data regarding the transaction. An example message containing the Alert command for beginning a refresh synchronization, as in the previous example, is:

<SyncML>
  <SyncHdr>
    <VerDTD>1.1</VerDTD>
    <VerProto>SyncML/1.1</VerProto>
    <SessionID>1</SessionID>
    <MsgID>1</MsgID>
    <Target><LocURI>PC Suite</LocURI></Target>
    <Source><LocURI>IMEI:3405623856456</LocURI></Source>
    <Meta><MaxMsgSize xmlns="syncml:metinf">8000</MaxMsgSize></Meta>
  </SyncHdr>
  <SyncBody>
    <Alert>
      <CmdID>1</CmdID>
      <Data>203</Data>
      <Item>
        <Target><LocURI>Events</LocURI></Target>
        <Source><LocURI>/telecom/cal.vcs</LocURI></Source>
        <Meta>
          <Anchor xmlns="syncml:metinf"><Next>4242</Next></Anchor>
        </Meta>
      </Item>
    </Alert>
    <Final/>
  </SyncBody>
</SyncML>

The response from the computer could be an XML document like the following (comments added for the sake of explanation):

<SyncML>
  <SyncHdr>
    <VerDTD>1.1</VerDTD>
    <VerProto>SyncML/1.1</VerProto>
    <SessionID>1</SessionID>
    <MsgID>1</MsgID>
    <Target><LocURI>IMEI:3405623856456</LocURI></Target>
    <Source><LocURI>PC Suite</LocURI></Source>
  </SyncHdr>
  <SyncBody>
    <!-- status for the message header: 200 = OK -->
    <Status>
      <CmdID>1</CmdID>
      <MsgRef>1</MsgRef>
      <CmdRef>0</CmdRef>
      <Cmd>SyncHdr</Cmd>
      <TargetRef>PC Suite</TargetRef>
      <SourceRef>IMEI:3405623856456</SourceRef>
      <Data>200</Data>
    </Status>
    <!-- status for the Alert command: the refresh request is accepted -->
    <Status>
      <CmdID>2</CmdID>
      <MsgRef>1</MsgRef>
      <CmdRef>1</CmdRef>
      <Cmd>Alert</Cmd>
      <TargetRef>Events</TargetRef>
      <SourceRef>/telecom/cal.vcs</SourceRef>
      <Item>
        <Data>
          <Anchor xmlns="syncml:metinf"><Next>00</Next></Anchor>
        </Data>
      </Item>
      <Data>200</Data>
    </Status>
    <Final/>
  </SyncBody>
</SyncML>

The transaction then proceeds with a message from the mobile containing the Sync command, and so on. - -This example is a refresh where the mobile sends all its data to the computer and nothing in the other direction. Different codes in the initial Alert command can be used to initiate other kinds of synchronizations. For example, in a "two-way sync", only the changes from the last synchronization are sent to the computer, which does the same. - -The Last and Next tags are used to keep track of a possible loss of sync. Last represents the time of the last operation of synchronization, as measured by each device. For example, a mobile may use progressive numbers (1, 2, 3,...) to represent time, while the computer uses strings like 20140112T213401Z. Next is the current time in the same representation. This latter data is stored and then compared with Last in the next synchronization. Any difference indicates a loss of sync. Appropriate actions involving sending all data can then be taken to put the devices back in sync. - -Anchors are only used to detect a loss of sync; they do not indicate which data is to be sent. Apart from the loss-of-sync case, in a normal (non-refresh) sync, each device sends all changes since the last synchronization. - -SAN = Server Alert Notification. This SyncML Push technology is based on definitions by the Open Mobile Alliance and extends the existing SyncML protocol specification by offering a method of server-initiated synchronization. diff --git a/wiki/wikipedia/3289.txt b/wiki/wikipedia/3289.txt deleted file mode 100644 index ee8abe0a9d283b1d8cde6cccf317bc3d5e83a7ba..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3289.txt +++ /dev/null @@ -1,113 +0,0 @@ -In quantum mechanics, specifically time-dependent density functional theory, the Runge–Gross theorem (RG theorem) shows that for a many-body system evolving from a given initial wavefunction, there exists a one-to-one mapping between the potential (or potentials) in which the system evolves and the density (or densities) of the system. The potentials under which the theorem holds are defined up to an additive purely time-dependent function: such functions only change the phase of the wavefunction and leave the density invariant. Most often the RG theorem is applied to molecular systems where the electronic density, ρ(r,t), changes in response to an external scalar potential, v(r,t), such as a time-varying electric field.
- -The Runge–Gross theorem provides the formal foundation of time-dependent density functional theory. It shows that the density can be used as the fundamental variable in describing quantum many-body systems in place of the wavefunction, and that all properties of the system are functionals of the density. - -The theorem was published by Erich Runge and Eberhard K. U. Gross in 1984. Given such a field denoted by v and the number of electrons, N, which together determine a Hamiltonian Hv, and an initial condition on the wavefunction Ψ(t = t0) = Ψ0, the evolution of the wavefunction is determined by the Schrödinger equation -$$ -\hat{H}_v(t)|\Psi(t)\rangle=i\frac{\partial}{\partial t}|\Psi(t)\rangle. -$$ - -At any given time, the N-electron wavefunction, which depends upon 3N spatial and N spin coordinates, determines the electronic density through integration as -$$ -\rho(\mathbf r,t)=N\sum_{s_1} \cdots \sum_{s_N} \int \ \mathrm d\mathbf r_2 \ \cdots \int\ \mathrm d\mathbf r_N \ |\Psi(\mathbf r_1,s_1,\mathbf r_2,s_2,...,\mathbf r_N,s_N,t)|^2. -$$ - -Two external potentials differing only by an additive time-dependent, spatially independent, function, c(t), give rise to wavefunctions differing only by a phase factor exp(-ic(t)), and therefore the same electronic density. These constructions provide a mapping from an external potential to the electronic density: -$$ -v(\mathbf r,t)+c(t)\rightarrow e^{-ic(t)}|\Psi(t)\rangle\rightarrow\rho(\mathbf r,t). -$$ - -The Runge–Gross theorem shows that this mapping is invertible, modulo c(t). Equivalently, that the density is a functional of the external potential and of the initial wavefunction on the space of potentials differing by more than the addition of c(t): -$$ -\rho(\mathbf r,t)=\rho[v,\Psi_0](\mathbf{r},t)\leftrightarrow v(\mathbf r,t)=v[\rho,\Psi_0](\mathbf r,t) -$$ - -Given two scalar potentials denoted as v(r,t) and v'(r,t), which differ by more than an additive purely time-dependent term, the proof follows by showing that the densities corresponding to the two scalar potentials, obtained by solving the Schrödinger equation, differ. - -The proof relies heavily on the assumption that the external potential can be expanded in a Taylor series about the initial time. The proof also assumes that the density vanishes at infinity, making it valid only for finite systems. - -The Runge–Gross proof first shows that there is a one-to-one mapping between external potentials and current densities by invoking the Heisenberg equation of motion for the current density so as to relate time-derivatives of the current density to spatial derivatives of the external potential. Given this result, the continuity equation is used in a second step to relate time-derivatives of the electronic density to time-derivatives of the external potential. - -The assumption that the two potentials differ by more than an additive spatially independent term, and are expandable in a Taylor series, means that there exists an integer k ≥ 0, such that -$$ -u_{k}(\mathbf{r})\equiv\left.\frac{\partial^k}{\partial t^k}\big(v(\mathbf{r},t)-v'(\mathbf{r},t)\big)\right|_{t=t_0} -$$ - -is not constant in space. This condition is used throughout the argument. - -From the Heisenberg equation of motion, the time evolution of the current density, j(r,t), under the external potential v(r,t) which determines the Hamiltonian Hv, is -$$ -i\frac{\partial\mathbf j(\mathbf r,t)}{\partial t}=\langle\Psi(t)|[\hat{\mathbf{j}}(\mathbf r),\hat{H}_v(t)]|\Psi(t)\rangle. -$$ - -Introducing two potentials v and v', differing by more than an additive spatially constant term, and their corresponding current densities j and j', the Heisenberg equation implies - - - -\begin{align} - -i\left.\frac{\partial}{\partial t}\big(\mathbf j(\mathbf r,t)-\mathbf j'(\mathbf r,t) \big)\right|_{t=t_0} &= \langle\Psi(t_0)|[\hat{\mathbf{j}}(\mathbf r),\hat{H}_{v}(t_0)-\hat{H}_{v'}(t_0)]|\Psi(t_0)\rangle,\\ - -&=\langle\Psi(t_0)|[\hat{\mathbf{j}}(\mathbf r),\hat{V}(t_0)-\hat{V}'(t_0)]|\Psi(t_0)\rangle,\\ - -&= i\rho(\mathbf r,t_0)\nabla\big(v(\mathbf{r},t_0)-v'(\mathbf{r},t_0)\big). - -\end{align} - - - -The final line shows that if the two scalar potentials differ at the initial time by more than a spatially independent function, then the current densities that the potentials generate will differ infinitesimally after t0. If the two potentials do not differ at t0, but uk(r) ≠ 0 for some value of k, then repeated application of the Heisenberg equation shows that -$$ -i^{k+1}\left.\frac{\partial^{k+1}}{\partial t^{k+1}}\big(\mathbf j(\mathbf r,t)-\mathbf j'(\mathbf r,t)\big)\right|_{t=t_0}=i\rho(\mathbf r,t_0)\nabla i^k\left.\frac{\partial^{k}}{\partial t^{k}}\big(v(\mathbf{r},t)-v'(\mathbf{r},t) \big)\right|_{t=t_0}, -$$ - -ensuring that the current densities will differ infinitesimally after t0. - -The electronic density and current density are related by a continuity equation of the form -$$ -\frac{\partial\rho(\mathbf r,t)}{\partial t}+\nabla\cdot\mathbf j(\mathbf r,t)=0. -$$ - -Repeated application of the continuity equation to the difference of the densities ρ and ρ', and current densities j and j', yields - - - -\begin{align} - -\left.\frac{\partial^{k+2}}{\partial t^{k+2}}(\rho(\mathbf r,t)-\rho'(\mathbf r,t))\right|_{t=t_0}&=-\nabla\cdot\left.\frac{\partial^{k+1}}{\partial t^{k+1}}\big(\mathbf j(\mathbf r,t)-\mathbf j'(\mathbf r,t)\big)\right|_{t=t_0},\\ - -&=-\nabla\cdot[\rho(\mathbf r,t_0)\nabla\left.\frac{\partial^k}{\partial t^k}\big(v(\mathbf{r},t_0)-v'(\mathbf{r},t_0)\big)\right|_{t=t_0}],\\ - -&=-\nabla\cdot[\rho(\mathbf r,t_0)\nabla u_k(\mathbf r)]. - -\end{align} - - - -The two densities will then differ if the right-hand side (RHS) is non-zero for some value of k. The non-vanishing of the RHS follows by a reductio ad absurdum argument. Assuming, contrary to our desired outcome, that -$$ -\nabla\cdot(\rho(\mathbf r,t_0)\nabla u_k(\mathbf r)) = 0, -$$ - -we integrate over all space and apply Green's theorem. - - - -\begin{align} - -0&=\int\mathrm d\mathbf r\ u_k(\mathbf r)\nabla\cdot(\rho(\mathbf r,t_0)\nabla u_k(\mathbf r)),\\ - -&=-\int\mathrm d\mathbf r\ \rho(\mathbf r,t_0)(\nabla u_k(\mathbf r))^2+\frac{1}{2}\int \mathrm d\mathbf S\cdot\rho(\mathbf r,t_0)(\nabla u_k^2(\mathbf r)). - -\end{align} - - - -The second term is a surface integral over an infinite sphere. Assuming that the density is zero at infinity (in finite systems, the density decays to zero exponentially) and that ∇uk2(r) increases slower than the density decays, the surface integral vanishes and, because of the non-negativity of the density, -$$ -\rho(\mathbf r,t_0)(\nabla u_k(\mathbf r))^2=0, -$$ - -implying that uk is a constant, contradicting the original assumption and completing the proof. - -The Runge–Gross proof is valid for pure electronic states in the presence of a scalar field. The first extension of the RG theorem was to time-dependent ensembles, which employed the Liouville equation to relate the Hamiltonian and density matrix.
A proof of the RG theorem for multicomponent systems—where more than one type of particle is treated within the full quantum theory—was introduced in 1986. Incorporation of magnetic effects requires the introduction of a vector potential (A(r)) which together with the scalar potential uniquely determine the current density. Time-dependent density functional theories of superconductivity were introduced in 1994 and 1995. Here, scalar, vector, and pairing (D(t)) potentials map between current and anomalous (ΔIP(r,t)) densities. diff --git a/wiki/wikipedia/329.txt b/wiki/wikipedia/329.txt deleted file mode 100644 index 0242be8cee08e74c9e47adfe7edbb87885a5086e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/329.txt +++ /dev/null @@ -1,15 +0,0 @@ -In probability theory, the Palm-Khintchine theorem, the work of Conny Palm and Aleksandr Khinchin, states that a large number of renewal processes, not necessarily Poissonian, when combined ("superimposed") will have Poissonian properties. - -It is used to generalise the behaviour of users or clients in queuing theory. It is also used in dependability and reliability modelling of computing and telecommunications. - -According to Heyman and Sobel (2003), the theorem states that the superposition of a large number of independent equilibrium renewal processes, each with a finite intensity, behaves asymptotically like a Poisson process: - -Let $\{N_i(t),t\geq 0\}, i=1,2,\ldots, m$ be independent renewal processes and $\{N(t),t>0\}$ be the superposition of these processes. Denote by $X_{2jm}$ the time between the first and the second renewal epochs in process $j$. Let $N_{jm}(t)$ be the $j$th counting process, $F_{jm}(t)=P(X_{2jm}\leq t)$ and $\lambda_{jm}=1/E(X_{2jm})$. - -If the following assumptions hold - -1) For all sufficiently large $m$: $ \lambda_{1m}+\lambda_{2m}+\cdots+\lambda_{mm}=\lambda<\infty $ - -2) Given $\varepsilon>0$, for every $t>0$ and sufficiently large $m$: $F_{jm}(t)<\varepsilon$ for all $j$ - -then the superposition $N_{0m}(t)=N_{1m}(t)+N_{2m}(t)+\cdots+N_{mm}(t)$ of the counting processes approaches a Poisson process as $m \to \infty$. diff --git a/wiki/wikipedia/3290.txt b/wiki/wikipedia/3290.txt deleted file mode 100644 index 5f3f69ebd359a588fae99bdfb5e07b26b96ef28d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3290.txt +++ /dev/null @@ -1,21 +0,0 @@ -In mathematics, the first Blakers–Massey theorem, named after Albert Blakers and William S. Massey, gave vanishing conditions for certain triad homotopy groups of spaces. - -This connectivity result may be expressed more precisely, as follows. Suppose X is a topological space which is the pushout of the diagram -$$ -A\xleftarrow{\ f\ } C \xrightarrow{\ g\ } B -$$, - -where f is an m-connected map and g is n-connected. Then the map of pairs -$$ -(A,C)\rightarrow (X,B) -$$ - -induces an isomorphism in relative homotopy groups in degrees $k\le (m+n-1)$ and a surjection in the next degree. - -However, the third paper of Blakers and Massey in this area determines the critical, i.e., first non-zero, triad homotopy group as a tensor product, under a number of assumptions, including some simple connectivity. This condition and some dimension conditions were relaxed in work of Ronald Brown and Jean-Louis Loday. The algebraic result implies the connectivity result, since a tensor product is zero if one of the factors is zero. In the non-simply-connected case, one has to use the nonabelian tensor product of Brown and Loday.
- -The triad connectivity result can be expressed in a number of other ways; for example, it says that the pushout square above behaves like a homotopy pullback up to dimension $m+n$. - -The generalization of the connectivity part of the theorem from traditional homotopy theory to any other infinity-topos with an infinity-site of definition was given by Charles Rezk in 2010. - -In 2013 a fairly short, fully formal proof using homotopy type theory as a mathematical foundation and an Agda variant as a proof assistant was announced by Peter LeFanu Lumsdaine; this became Theorem 8.10.2 of Homotopy Type Theory – Univalent Foundations of Mathematics. This induces an internal proof for any infinity-topos (i.e. without reference to a site of definition); in particular, it gives a new proof of the original result. diff --git a/wiki/wikipedia/3291.txt b/wiki/wikipedia/3291.txt deleted file mode 100644 index b538641acbac96bea80017f1d1a74304f51d3a29..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3291.txt +++ /dev/null @@ -1,111 +0,0 @@ -The term gambler's ruin is a statistical concept, most commonly expressed as the fact that a gambler playing a game with negative expected value will eventually go broke, regardless of their betting system. - -The term's original meaning is that a persistent gambler who raises their bet to a fixed fraction of their bankroll after a win, but does not reduce it after a loss, will eventually and inevitably go broke, even if each bet has a positive expected value. - -Another common meaning is that a persistent gambler with finite wealth, playing a fair game (that is, each bet has expected value zero to both sides) will eventually and inevitably go broke against an opponent with infinite wealth. Such a situation can be modeled by a random walk on the real number line. In that context, the agent will, with virtual certainty, return to his point of origin, which means going broke, and is ruined an infinite number of times if the random walk continues forever. This is a corollary of a general theorem by Christiaan Huygens, which is also known as gambler's ruin. That theorem shows how to compute the probability of each player winning a series of bets that continues until one's entire initial stake is lost, given the initial stakes of the two players and the constant probability of winning. This is the oldest mathematical idea that goes by the name gambler's ruin, but not the first idea to which the name was applied. The term's common usage today is another corollary to Huygens's result. - -Gambler's ruin should not be confused with the gambler's fallacy, a different concept. - -The concept has specific relevance for gamblers; however, it also leads to mathematical theorems with wide application and many related results in probability and statistics. Huygens's result in particular led to important advances in the mathematical theory of probability. - -The earliest known mention of the gambler's ruin problem is a letter from Blaise Pascal to Pierre Fermat in 1656 (two years after the more famous correspondence on the problem of points). Pascal's version was summarized in a 1656 letter from Pierre de Carcavi to Huygens: - -
    Let two men play with three dice, the first player scoring a point whenever 11 is thrown, and the second whenever 14 is thrown. But instead of the points accumulating in the ordinary way, let a point be added to a player's score only if his opponent's score is nil, but otherwise let it be subtracted from his opponent's score. It is as if opposing points form pairs, and annihilate each other, so that the trailing player always has zero points. The winner is the first to reach twelve points; what are the relative chances of each player winning?
    - -Huygens reformulated the problem and published it in De ratiociniis in ludo aleae ("On Reasoning in Games of Chance", 1657): - -
    Problem (2-1) Each player starts with 12 points, and a successful roll of the three dice for a player (getting an 11 for the first player or a 14 for the second) adds one to that player's score and subtracts one from the other player's score; the loser of the game is the first to reach zero points. What is the probability of victory for each player?
    This is the classic gambler's ruin formulation: two players begin with fixed stakes, transferring points until one or the other is "ruined" by getting to zero points. However, the term "gambler's ruin" was not applied until many years later. - -Let "bankroll" be the amount of money a gambler has at his disposal at any moment, and let N be any positive integer. Suppose that he raises his stake to $\frac{\text{bankroll}}{N}$ when he wins, but does not reduce his stake when he loses. This general pattern is not uncommon among real gamblers, and casinos encourage it by "chipping up" winners (giving them higher denomination chips). Under this betting scheme, it will take at most N losing bets in a row to bankrupt him. If his probability of winning each bet is less than 1 (if it is 1, then he is no gambler), he is virtually certain to eventually lose N bets in a row, however big N is. It is not necessary that he follow the precise rule, just that he increase his bet fast enough as he wins. This is true even if the expected value of each bet is positive. - -The gambler playing a fair game (with 0.5 probability of winning) will eventually either go broke or double his wealth. Define the game to end upon either event. These events are equally likely, or the game would not be fair. So he has a 0.5 chance of going broke before doubling his money. If he doubles his money, a new game begins and he again has a 0.5 chance of doubling his money before going broke. After the second game there is a $1/2 \times 1/2$ chance that he has not gone broke in the first and second games. Continuing this way, his chance of not going broke after n successive games is $(1/2)^n$, which approaches 0. His chance of going broke after n successive games is $1/2 + 1/4 + 1/8 + \cdots + 1/2^n = 1 - (1/2)^n$, which approaches 1. - -Huygens's result is illustrated in the next section. - -The eventual fate of a player at a game with negative expected value cannot be better than the player at a fair game, so he will go broke as well. - -Consider a coin-flipping game with two players where each player has a 50% chance of winning with each flip of the coin. After each flip of the coin the loser transfers one penny to the winner. The game ends when one player has all the pennies. - -If there are no other limitations on the number of flips, the probability that the game will eventually end this way is 1. (One way to see this is as follows. Any given finite string of heads and tails will eventually be flipped with certainty: the probability of not seeing this string, while high at first, decays exponentially. In particular, the players would eventually flip a string of heads as long as the total number of pennies in play, by which time the game must have already ended.) - -If player one has n1 pennies and player two n2 pennies, the probabilities P1 and P2 that players one and two, respectively, will end penniless are: - - - -\begin{align} - -P_1 & = \frac{n_2}{n_1+n_2} \\[5pt] - -P_2 & = \frac{n_1}{n_1+n_2} - -\end{align} - - - -Two examples follow: one in which one player has more pennies than the other, and one in which both players have the same number of pennies.
- -In the first case, say player one ($P_1$) has 8 pennies and player two ($P_2$) has 5 pennies; then the probability of each losing is: - - - -\begin{align} - -P_1 & =\frac{5}{8+5} =\frac{5}{13} = 0.3846 \text{ or } 38.46\% \\[6pt] - -P_2 & =\frac{8}{8+5} =\frac{8}{13} = 0.6154 \text{ or } 61.54\% - -\end{align} - - - -It follows that even with equal odds of winning, the player who starts with fewer pennies is more likely to fail. - -In the second case, where both players have the same number of pennies (in this case 6), the likelihood of each losing is: - - - -\begin{align} - -P_1 & =\frac{6}{6+6} = \frac{6}{12} = \frac{1}{2} = 0.5 \\[5pt] - -P_2 & =\frac{6}{6+6} = \frac{6}{12} = \frac{1}{2} = 0.5 - -\end{align} - - - -In the event of an unfair coin, where player one wins each toss with probability p, and player two wins with probability q = 1 − p, then the probability of each ending penniless is: - - - -\begin{align} - -P_1 & = \frac{1-(\frac{p}{q})^{n_2}}{1-(\frac{p}{q})^{n_1+n_2}} \\[5pt] - -P_2 & = \frac{1-(\frac{q}{p})^{n_1}}{1-(\frac{q}{p})^{n_1+n_2}} - -\end{align} - - - -This can be shown as follows: consider the probability of player 1 experiencing gambler's ruin having started with $n > 1$ amount of money, $P(R_n)$. Then, using the Law of Total Probability, we have -$$ -P(R_n) = P(R_n\mid W)P(W) + P(R_n\mid\bar{W})P(\bar{W}), -$$ - -where W denotes the event that player 1 wins the first bet. Then clearly $P(W) = p$ and $P(\bar{W}) = 1 - p = q$. Also $P(R_n \mid W)$ is the probability that player 1 experiences gambler's ruin having started with $n+1$ amount of money: $P(R_{n+1})$; and $P(R_n \mid \bar{W})$ is the probability that player 1 experiences gambler's ruin having started with $n-1$ amount of money: $P(R_{n-1})$. - -Denoting $q_n = P(R_n)$, we get the linear homogeneous recurrence relation -$$ -q_n = q_{n+1} p + q_{n-1} q, -$$ - -which we can solve using the fact that $q_0 = 1$ (i.e. the probability of gambler's ruin given that player 1 starts with no money is 1), and $q_{n_1 + n_2} = 0$ (i.e. the probability of gambler's ruin given that player 1 starts with all the money is 0). For a more detailed description of the method see e.g. Feller (1970), An introduction to probability theory and its applications, 3rd ed. - -The above-described problem (2 players) is a special case of the so-called N-Player Ruin problem. - -Here $ N \geq 2 $ players with initial capital $ x_1, x_2, \ldots, x_N $ dollars, respectively, play a sequence of (arbitrary) independent games and win and lose certain amounts of dollars from and to each other according to fixed rules. The sequence of games ends as soon as at least one player is ruined. Standard Markov chain methods can be applied to solve this more general problem in principle, but the computations quickly become prohibitive as soon as the number of players or their initial capitals increase. For $ N = 2 $ and large initial capitals $ x_1, x_2 $ the solution can be well approximated by using two-dimensional Brownian motion. (For $ N \geq 3 $ this is not possible.) In practice the true problem is to find the solution for the typical cases of $ N \geq 3 $ and limited initial capital.
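For the two-player problem, by contrast, the closed-form answer above is easy to check numerically. The following sketch (ours, purely illustrative; it assumes Python with numpy, and the function names are invented for this example) solves the defining recurrence $q_n = p q_{n+1} + q q_{n-1}$ with boundary conditions $q_0 = 1$ and $q_{n_1+n_2} = 0$ as a linear system and compares the result with the formula:

```python
import numpy as np

def ruin_probability_formula(n1, n2, p):
    """P(player 1 is ruined) starting with n1 pennies vs n2, win prob p per bet."""
    q = 1 - p
    if p == q:                       # fair game: P1 = n2 / (n1 + n2)
        return n2 / (n1 + n2)
    r = p / q
    return (1 - r**n2) / (1 - r**(n1 + n2))

def ruin_probability_recurrence(n1, n2, p):
    """Solve q_n = p*q_{n+1} + (1-p)*q_{n-1}, q_0 = 1, q_N = 0 with N = n1 + n2,
    as a linear system, and return q_{n1}."""
    N = n1 + n2
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    A[0, 0] = 1.0; b[0] = 1.0        # ruin is certain with no money
    A[N, N] = 1.0; b[N] = 0.0        # ruin is impossible with all the money
    for n in range(1, N):
        A[n, n] = 1.0
        A[n, n + 1] = -p
        A[n, n - 1] = -(1 - p)
    return np.linalg.solve(A, b)[n1]

print(ruin_probability_formula(8, 5, 0.5))       # 5/13 ~ 0.3846, as above
print(ruin_probability_recurrence(8, 5, 0.5))    # same value
print(ruin_probability_formula(8, 5, 0.55),
      ruin_probability_recurrence(8, 5, 0.55))   # biased coin: both ~ 0.137
```

Both methods agree on the worked example above (8 versus 5 pennies gives 5/13) as well as for biased coins.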
diff --git a/wiki/wikipedia/3292.txt b/wiki/wikipedia/3292.txt deleted file mode 100644 index 1899f328e15cacb7b882b8e3c609638b25b17382..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3292.txt +++ /dev/null @@ -1,19 +0,0 @@ -In mathematics, the Bochner–Martinelli formula is a generalization of the Cauchy integral formula to functions of several complex variables, introduced by and . - -For ζ, z in $\C^n$ the Bochner–Martinelli kernel ω(ζ,z) is a differential form in ζ of bidegree (n,n−1) defined by - -\omega(\zeta,z) = \frac{(n-1)!}{(2\pi i)^n}\frac{1}{|z-\zeta|^{2n}} - -\sum_{1\le j\le n}(\overline\zeta_j-\overline z_j) d\overline\zeta_1 \land d\zeta_1 \land \cdots \land d\zeta_j \land \cdots \land d\overline\zeta_n \land d\zeta_n - -(where the term d'ζj is omitted). - -Suppose that f is a continuously differentiable function on the closure of a domain D in $\mathbb{C}$n with piecewise smooth boundary ∂D. Then the Bochner–Martinelli formula states that if z is in the domain D then -$$ -\displaystyle f(z) = \int_{\partial D}f(\zeta)\omega(\zeta, z) - \int_D\overline\partial f(\zeta)\land\omega(\zeta,z). -$$ - -In particular if f is holomorphic the second term vanishes, so -$$ -\displaystyle f(z) = \int_{\partial D}f(\zeta)\omega(\zeta, z). -$$ diff --git a/wiki/wikipedia/3293.txt b/wiki/wikipedia/3293.txt deleted file mode 100644 index ee1626928b2bc5f91c5d0ae7c7b5b977a2fc120f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3293.txt +++ /dev/null @@ -1,60 +0,0 @@ -Chern's conjecture for affinely flat manifolds was proposed by Shiing-Shen Chern in 1955 in the field of affine geometry. As of 2018, it remains an unsolved mathematical problem. - -Chern's conjecture states that the Euler characteristic of a compact affine manifold vanishes. - -In case the connection ∇ is the Levi-Civita connection of a Riemannian metric, the Chern–Gauss–Bonnet formula: -$$ -\chi(M) = \left ( \frac{1}{2\pi} \right )^n \int_M \operatorname{Pf}(K) -$$ - -implies that the Euler characteristic is zero. However, not all flat torsion-free connections on $T M$ admit a compatible metric, and therefore, Chern–Weil theory cannot be used in general to write down the Euler class in terms of the curvature. 
- -The conjecture is known to hold in several special cases: - -* when a compact affine manifold is 2-dimensional (as shown by Jean-Paul Benzécri in 1955, and later by John Milnor in 1957) - -* when a compact affine manifold is complete, i.e., affinely diffeomorphic to a quotient space of the affine space under a proper action of a discrete group of affine transformations (the result was shown by Bertram Kostant and Dennis Sullivan in 1975, and would also immediately follow from the Auslander conjecture; Kostant and Sullivan showed that a closed manifold with nonzero Euler characteristic cannot admit a complete affine structure) - -* when a compact affine manifold is a higher-rank irreducible locally symmetric manifold (as shown by William Goldman and Morris Hirsch in 1984; they showed that a higher-rank irreducible locally symmetric manifold can never admit an affine structure) - -* when a compact affine manifold is locally a product of hyperbolic planes (as shown by Michelle Bucher and Tsachik Gelander in 2011) - -* when a compact affine manifold admits a parallel volume form, i.e., has linear holonomy in $\mathrm{SL}(n, \mathbb{R})$ (shown by Bruno Klingler in 2015; this weaker proven case was known as Chern's conjecture for special affine manifolds; a conjecture of Markus predicts this is equivalent to being complete) - -* when a compact affine manifold is a complex hyperbolic surface (as shown by Hester Pieters in 2016) - -Additional related results have been obtained: - -* In 1958, Milnor proved inequalities which completely characterise those oriented rank two bundles over a surface that admit a flat connection - -* In 1977, Smillie proved that the condition that the connection is torsion-free matters. For each even dimension greater than 2, Smillie constructed closed manifolds with non-zero Euler characteristic that admit a flat connection on their tangent bundle - -For flat pseudo-Riemannian manifolds or complex affine manifolds, this follows from the Chern–Gauss–Bonnet theorem. - -Also, as proven by M.W. Hirsch and William Thurston in 1975 for incomplete affine manifolds, the conjecture holds if the holonomy group is a finite extension of a free product of amenable groups (however, their result applies to any flat bundles over manifolds). - -In 1977, John Smillie produced a manifold whose tangent bundle carries a flat connection with nonzero torsion and whose Euler characteristic is nonzero, thus disproving the strong version of the conjecture, which asks whether the Euler characteristic of a closed flat manifold vanishes. - -Later, Hyuk Kim and Hyunkoo Lee proved the conjecture for affine manifolds, and more generally for projective manifolds developing into an affine space, with amenable holonomy, by a different technique using a nonstandard polyhedral Gauss–Bonnet theorem developed by Ethan Bloch and by Kim and Lee. - -In 2002, Suhyoung Choi slightly generalized the result of Hirsch and Thurston: if the holonomy of a closed affine manifold is isomorphic to amenable groups amalgamated or HNN-extended along finite groups, then the Euler characteristic of the manifold is 0. He showed that if an even-dimensional manifold is obtained from a connected sum operation from K(π, 1)s with amenable fundamental groups, then the manifold does not admit an affine structure (generalizing a result of Smillie).
- -In 2008, after Smillie's simple examples of closed manifolds with flat tangent bundles (these would have affine connections with zero curvature, but possibly nonzero torsion), Bucher and Gelander obtained further results in this direction. - -In 2015, Mihail Cocos proposed a possible way to solve the conjecture and proved that the Euler characteristic of a closed even-dimensional affine manifold vanishes. - -In 2016, Huitao Feng and Weiping Zhang, both of Nankai University, claimed to prove the conjecture in the general case, but a serious flaw was found and the claim was thereafter retracted. After the correction, their current result is a formula that counts the Euler number of a flat vector bundle in terms of vertices of transversal open coverings. - -Notably, Chern's intrinsic proof of the Chern–Gauss–Bonnet theorem (which would force the Euler characteristic of a closed affine manifold to vanish) applies only to orthogonal connections, not to general linear ones, which is why the conjecture remains open in this generality (affine manifolds are considerably more complicated than Riemannian manifolds, where metric completeness is equivalent to geodesic completeness). - -There also exists a related conjecture by Mikhail Leonidovich Gromov on the vanishing of bounded cohomology of affine manifolds. - -The conjecture of Chern can be considered a particular case of the following conjecture: - -
    A closed aspherical manifold with nonzero Euler characteristic doesn't admit a flat structure
    - -This conjecture was originally stated for general closed manifolds, not just for aspherical ones (but due to Smillie, there's a counterexample), and it itself can, in turn, also be considered a special case of even more general conjecture: - -
    A closed aspherical manifold with nonzero simplicial volume doesn't admit a flat structure
    - -When generalized in these ways, the statement is known as the generalized Chern conjecture for manifolds that are locally a product of surfaces. diff --git a/wiki/wikipedia/3294.txt b/wiki/wikipedia/3294.txt deleted file mode 100644 index 0b5c628d789a238bdb00b927852c7cb461f38dbf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3294.txt +++ /dev/null @@ -1,41 +0,0 @@ -: For other inequalities named after Wirtinger, see Wirtinger's inequality. - -In mathematics, Wirtinger's inequality for real functions is an inequality historically used in Fourier analysis. It was named after Wilhelm Wirtinger. It was used in 1904 to prove the isoperimetric inequality. A variety of closely related results are today known as Wirtinger's inequality. - -Let $f : \mathbb{R} \to \mathbb{R}$ be a periodic function of period 2π, which is continuous and has a continuous derivative throughout R, and such that -$$ -\int_0^{2\pi}f(x) dx = 0. -$$ - -Then -$$ -\int_0^{2\pi}f'^2(x) dx \ge \int_0^{2\pi}f^2(x) dx -$$ - -with equality if and only if f(x) = a sin(x) + b cos(x) for some a and b (or equivalently f(x) = c sin (x + d) for some c and d). - -This version of the Wirtinger inequality is the one-dimensional Poincaré inequality, with optimal constant. - -The following related inequality is also called Wirtinger's inequality: -$$ -\pi^{2}\int_0^a |f|^2 \le a^2 \int_0^a|f'|^2 -$$ - -whenever f is a C1 function such that f(0) = f(a) = 0. In this form, Wirtinger's inequality is seen as the one-dimensional version of Friedrichs' inequality. - -The proofs of the two versions are similar. Here is a proof of the first version of the inequality. Since Dirichlet's conditions are met, we can write -$$ -f(x)=\frac{1}{2}a_0+\sum_{n\ge 1}\left(a_n\frac{\sin nx}{\sqrt{\pi}}+b_n\frac{\cos nx}{\sqrt{\pi}}\right), -$$ - -and moreover a0 = 0 since the integral of f vanishes. By Parseval's identity, -$$ -\int_0^{2\pi}f^2(x)dx=\sum_{n=1}^\infty(a_n^2+b_n^2) -$$ - -and -$$ -\int_0^{2\pi}f'^2(x) dx = \sum_{n=1}^\infty n^2(a_n^2+b_n^2) -$$ - -and since the summands are all ≥ 0, we get the desired inequality, with equality if and only if an = bn = 0 for all n ≥ 2. diff --git a/wiki/wikipedia/3295.txt b/wiki/wikipedia/3295.txt deleted file mode 100644 index 7f923d47724554224150236e762aa14b0e1eb319..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3295.txt +++ /dev/null @@ -1,5 +0,0 @@ -Cloudike is a brandable file storage platform operated by Cloudike Inc., headquartered in San Jose, California. The platform provides cloud storage, file synchronization and contact synchronization, personal cloud, and client software. - -The company's data storage approach is similar to that of Dropbox, Google Drive, and Apple iCloud, which store user data and provide access to files from smartphones, laptops, tablets, etc. The main difference is that apart from functionality for end-users (Cloudike Personal), Cloudike offers customization for businesses (Cloudike Enterprise) under their own brands, an approach widely known as white-label. - -Cloudike started in 2013 as a SaaS platform that grew into a multi-tier cloud solution used to build white-label enterprise data storage for OEMs and mobile and Internet service providers.
diff --git a/wiki/wikipedia/3296.txt b/wiki/wikipedia/3296.txt deleted file mode 100644 index 2d86ffeb6bcbbfdc474a37d2f4776bad2a9fb0d7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3296.txt +++ /dev/null @@ -1,62 +0,0 @@ -In statistical mechanics, the Lee–Yang theorem states that if partition functions of certain models in statistical field theory with ferromagnetic interactions are considered as functions of an external field, then all zeros are purely imaginary (or on the unit circle after a change of variable). The first version was proved for the Ising model by T. D. Lee and C. N. Yang in 1952. Their result was later extended to more general models by several people. Asano in 1970 extended the Lee–Yang theorem to the Heisenberg model and provided a simpler proof using Asano contractions. Simon extended the Lee–Yang theorem to certain continuous probability distributions by approximating them by a superposition of Ising models. Newman gave a general theorem stating roughly that the Lee–Yang theorem holds for a ferromagnetic interaction provided it holds for zero interaction. Lieb generalized Newman's result from measures on R to measures on higher-dimensional Euclidean space. - -There has been some speculation about a relationship between the Lee–Yang theorem and the Riemann hypothesis about the Riemann zeta function. - -Following the formalization of Newman, the Hamiltonian is given by -$$ -H = -\sum J_{jk} S_j S_k - \sum z_j S_j -$$ - -where the $S_j$ are spin variables and the $z_j$ describe the external field. - -The system is said to be ferromagnetic if all the coefficients $J_{jk}$ in the interaction term are non-negative reals. - -The partition function is given by -$$ -Z = \int e^{- H} d\mu_1(S_1)\cdots d\mu_N(S_N) -$$ - -where each dμj is an even measure on the reals R decreasing at infinity so fast that all Gaussian functions are integrable, i.e. -$$ - \int e^{b S^2} d|\mu_j(S)| < \infty , \forall b \in \mathbb{R}. -$$ - -A rapidly decreasing measure on the reals is said to have the Lee-Yang property if all zeros of its Fourier transform are real, in the following sense: -$$ - \int e^{h S} d\mu_j(S) \neq 0 , \forall h \in \mathbb{H}_{+} := \{ z \in \mathbb{C} \mid \Re(z)>0 \} -$$ - -The Lee–Yang theorem states that if the Hamiltonian is ferromagnetic and all the measures dμj have the Lee-Yang property, and all the numbers zj have positive real part, then - -the partition function is non-zero. -$$ - Z(\{ z_j \}) \neq 0 , \forall z_j \in \mathbb{H}_{+} -$$ - -In particular if all the numbers zj are equal to some number z, then all zeros of the partition function (considered as a function of z) are imaginary. - -In the original Ising model case considered by Lee and Yang, the measures all have support on the two-point set $\{-1, 1\}$, - -so the partition function can be considered a function of the variable $\rho = e^{\pi z}$. With this change of variable the Lee–Yang theorem says that all zeros ρ lie on the unit circle. - -Some examples of measures with the Lee-Yang property are: - -*The measure of the Ising model, which has support consisting of two points (usually 1 and -1) each with weight 1/2. This is the original case considered by Lee and Yang. - -*The distribution of spin n/2, whose support has n+1 equally spaced points, each of weight 1/(n + 1). This is a generalization of the Ising model case. - -*The measure with density uniformly distributed between -1 and 1. - -*The density $\exp(-\lambda\cosh(S))dS$ - -*The density $\exp(-\lambda S^4-bS^2)dS$ for positive λ and real b. This corresponds to the $(\varphi^4)_2$ Euclidean quantum field theory.
- -*The density $\exp(-\lambda S^6- aS^4-bS^2)dS$ for positive λ does not always have the Lee-Yang property. - -*If dμ has the Lee-Yang property, so does $\exp(bS^2) d\mu$ for any positive b. - -*If dμ has the Lee-Yang property, so does Q(S) dμ for any even polynomial Q all of whose zeros are imaginary. - -*The convolution of two measures with the Lee-Yang property also has the Lee-Yang property. diff --git a/wiki/wikipedia/3297.txt b/wiki/wikipedia/3297.txt deleted file mode 100644 index 707f7c46f6f33699b162df9acaf4089a195f3863..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3297.txt +++ /dev/null @@ -1,28 +0,0 @@ -:This page concerns mathematician Sergei Novikov's topology conjecture. For astrophysicist Igor Novikov's conjecture regarding time travel, see Novikov self-consistency principle. - -The Novikov conjecture is one of the most important unsolved problems in topology. It is named for Sergei Novikov, who originally posed the conjecture in 1965. - -The Novikov conjecture concerns the homotopy invariance of certain polynomials in the Pontryagin classes of a manifold, arising from the fundamental group. According to the Novikov conjecture, the higher signatures, which are certain numerical invariants of smooth manifolds, are homotopy invariants. - -The conjecture has been proved for finitely generated abelian groups. It is not yet known whether the Novikov conjecture holds true for all groups. There are no known counterexamples to the conjecture. - -Let $G$ be a discrete group and $BG$ its classifying space, which is an Eilenberg–MacLane space of type $K(G,1)$, and therefore unique up to homotopy equivalence as a CW complex. Let -$$ -f\colon M\rightarrow BG -$$ - -be a continuous map from a closed oriented $n$-dimensional manifold $M$ to $BG$, and -$$ -x \in H^{n-4i} (BG;\mathbb{Q} ). -$$ - -Novikov considered the numerical expression, found by evaluating the cohomology class in top dimension against the fundamental class $[M]$, and known as a higher signature: -$$ -\left\langle f^*(x) \cup L_i(M),[M] \right\rangle \in \mathbb{Q} -$$ - -where $L_i$ is the $i^{\rm th}$ Hirzebruch polynomial, sometimes (less descriptively) called the $i^{\rm th}$ $L$-polynomial. For each $i$, this polynomial can be expressed in the Pontryagin classes of the manifold's tangent bundle. The Novikov conjecture states that the higher signature is an invariant of the oriented homotopy type of $M$ for every such map $f$ and every such class $x$, in other words, if $h\colon M' \rightarrow M$ is an orientation preserving homotopy equivalence, the higher signature associated to $f \circ h$ is equal to that associated to $f$. - -The Novikov conjecture is equivalent to the rational injectivity of the assembly map in L-theory. The Borel conjecture on the rigidity of aspherical manifolds is equivalent to the assembly map being an isomorphism. diff --git a/wiki/wikipedia/3298.txt b/wiki/wikipedia/3298.txt deleted file mode 100644 index 521121b43bd1d95584b16bcecd88a593fe62d6a4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3298.txt +++ /dev/null @@ -1,9 +0,0 @@ -The Tetris Company, Inc. (TTC) is based in Nevada and is owned by Henk Rogers, Alexey Pajitnov and Blue Planet Software. The company is the exclusive licensee of Tetris Holding LLC, the company that owns Tetris rights worldwide. It licenses the Tetris brand to third parties.
Tetris, originally conceived by Alexey Pajitnov, is widely considered one of the most popular games ever since its release in 1984, which was reflected by its mobile edition being the top seller in the industry in 2006. The Tetris Company licenses the Tetris trademark (which includes Tetris trade dress elements, such as the distinct brightly colored blocks and the vertically rectangular play field) to video game development companies and maintains a set of guidelines that each licensed game must meet (for instance, the button controls for game functions must be consistent). The Tetris Company has issued licenses to third parties for the production of Tetris-based games and other products, such as greeting cards and lottery tickets. Tetris Holding, a newly created company into which Pajitnov placed his Tetris rights, and Rogers' Blue Planet Software each own 50 percent of The Tetris Company, which is now the issuer of all Tetris licenses. - -In February 2011, The Tetris Company continued to make copyright claims against independently developed Tetris clones, most notably against Tetrada on the Windows Phone 7 marketplace. The developer, Mario Karagiannis, rejected the claims of copyright infringement on the grounds that copyright does not cover gameplay design, but still removed the game, citing lack of resources to fight what he called "bullying". - -A US District Court judge ruled in June 2012 that the Tetris clone Mino from Xio Interactive infringed on the Tetris Company's copyrights by replicating elements such as the playfield dimensions and the shapes of the blocks. - -In April 2021, a YouTuber called JDH made an operating system that only runs Tetris. Two months later, his GitHub repository was taken offline because of a copyright claim by The Tetris Company. diff --git a/wiki/wikipedia/3299.txt b/wiki/wikipedia/3299.txt deleted file mode 100644 index b504ccfd56fe555468e463f84eea03c7cacd19c9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3299.txt +++ /dev/null @@ -1,218 +0,0 @@ -In mathematics, more specifically in general topology and related branches, a net or Moore-Smith sequence is a generalization of the notion of a sequence. In essence, a sequence is a function whose domain is the natural numbers. The codomain of this function is usually some topological space. - -The motivation for generalizing the notion of a sequence is that, in the context of topology, sequences do not fully encode all information about functions between topological spaces. In particular, the following two conditions are, in general, not equivalent for a map f between topological spaces X and Y: - -#The map f is continuous in the topological sense; - -#Given any point x in X, and any sequence in X converging to x, the composition of f with this sequence converges to f(x) (continuous in the sequential sense). - -While it is necessarily true that condition 1 implies condition 2, the reverse implication is not necessarily true if the topological spaces are not both first-countable. In particular, the two conditions are equivalent for metric spaces. - -The concept of a net was first introduced by E. H. Moore and Herman L. Smith in 1922 to generalize the notion of a sequence so that the above conditions (with "sequence" being replaced by "net" in condition 2) are in fact equivalent for all maps of topological spaces. In particular, rather than being defined on a countable linearly ordered set, a net is defined on an arbitrary directed set.
This allows for theorems similar to the assertion that the conditions 1 and 2 above are equivalent to hold in the context of topological spaces that do not necessarily have a countable or linearly ordered neighbourhood basis around a point. Therefore, while sequences do not encode sufficient information about functions between topological spaces, nets do, because collections of open sets in topological spaces are much like directed sets in behaviour. The term "net" was coined by John L. Kelley. - -Nets are one of the many tools used in topology to generalize certain concepts that may only be general enough in the context of metric spaces. A related notion, that of the filter, was developed in 1937 by Henri Cartan. - -Any function whose domain is a directed set is called a net where if this function takes values in some set $X$ then it may also be referred to as a net in $X$. Elements of a net's domain are called its indices. Explicitly, a net in $X$ is a function of the form $f : A \to X$ where $A$ is some directed set. - -A directed set is a non-empty set $A$ together with a preorder, typically automatically assumed to be denoted by $\leq$ (unless indicated otherwise), with the property that it is also (upward) directed, which means that for any $a, b \in A,$ there exists some $c \in A$ such that $a \leq c$ and $b \leq c.$ - -In words, this property means that given any two elements (of $A$), there is always some element that is "above" both of them (i.e. that is greater than or equal to each of them); in this way, directed sets generalize the notion of "a direction" in a mathematically rigorous way. - -The natural numbers $\N$ together with the usual integer comparison $\leq$ preorder form the archetypical example of a directed set. Indeed, a net whose domain is the natural numbers is a sequence because by definition, a sequence in $X$ is just a function from $\N = \{ 1, 2, \ldots \}$ into $X.$ It is in this way that nets are generalizations of sequences. Importantly though, unlike the natural numbers, directed sets are not required to be total orders or even partial orders. - -Moreover, directed sets are allowed to have greatest elements and/or maximal elements, which is the reason why when using nets, caution is advised when using the induced strict preorder $<$ instead of the original (non-strict) preorder $\leq$; in particular, if a directed set $(A, \leq)$ has a greatest element $a \in A$ then there does not exist any $b \in A$ such that $a < b$ (in contrast, there always exists some $b \in A$ such that $a \leq b$). - -Nets are frequently denoted using notation that is similar to (and inspired by) that used with sequences. - -A net in $X$ may be denoted by $\left(x_a\right)_{a \in A},$ where unless there is reason to think otherwise, it should automatically be assumed that the set $A$ is directed and that its associated preorder is denoted by $\leq.$ - -However, notation for nets varies with some authors using, for instance, angled brackets $\left\langle x_a \right\rangle_{a \in A}$ instead of parentheses. - -A net in $X$ may also be written as $x_{\bull} = \left(x_a\right)_{a \in A},$ which expresses the fact that this net $x_{\bull}$ is a function $x_{\bull} : A \to X$ whose value at an element $a$ in its domain is denoted by $x_a$ instead of the usual parentheses notation $x_{\bull}(a)$ that is typically used with functions (this subscript notation being taken from sequences). 
As in the field of algebraic topology, the filled disk or "bullet" denotes the location where arguments to the net (i.e. elements $a \in A$ of the net's domain) are placed; it helps emphasize that the net is a function and also reduces the number of indices and other symbols that must be written when referring to it later. - -Nets are primarily used in the fields of Analysis and Topology, where they are used to characterize many important topological properties that (in general) sequences are unable to characterize (this shortcoming of sequences motivated the study of sequential spaces and Fréchet–Urysohn spaces). Nets are intimately related to filters, which are also often used in topology. Every net may be associated with a filter and every filter may be associated with a net, where the properties of these associated objects are closely tied together (see the article about Filters in topology for more details). Nets directly generalize sequences and they may often be used very similarly to sequences. Consequently, the learning curve for using nets is typically much less steep than that for filters, which is why many mathematicians, especially analysts, prefer them over filters. However, filters, and especially ultrafilters, have some important technical advantages over nets that ultimately result in nets being encountered much less often than filters outside of the fields of Analysis and Topology. - -A subnet is not merely the restriction of a net $f$ to a directed subset of $A;$ see the linked page for a definition. - -Every non-empty totally ordered set is directed. Therefore, every function on such a set is a net. In particular, the natural numbers with the usual order form such a set, and a sequence is a function on the natural numbers, so every sequence is a net. - -Another important example is as follows. Given a point $x$ in a topological space, let $N_x$ denote the set of all neighbourhoods containing $x.$ Then $N_x$ is a directed set, where the direction is given by reverse inclusion, so that $S \geq T$ if and only if $S$ is contained in $T.$ For $S \in N_x,$ let $x_S$ be a point in $S.$ Then $\left(x_S\right)$ is a net. As $S$ increases with respect to $\geq,$ the points $x_S$ in the net are constrained to lie in decreasing neighbourhoods of $x,$ so intuitively speaking, we are led to the idea that $x_S$ must tend towards $x$ in some sense. We can make this limiting concept precise. - -If $x_{\bull} = \left(x_a\right)_{a \in A}$ is a net from a directed set $A$ into $X,$ and if $S$ is a subset of $X,$ then $x_{\bull}$ is said to be eventually in $S$ (or residually in $S$) if there exists some $a \in A$ such that for every $b \in A$ with $b \geq a,$ the point $x_b \in S.$ - -A point $x \in X$ is called a limit point or limit of the net $x_{\bull}$ in $X$ if (and only if) - -for every open neighborhood $U$ of $x,$ the net $x_{\bull}$ is eventually in $U,$ - -in which case, this net is then also said to converge to $x$ and to have $x$ as a limit. - -If the net $x_{\bull}$ converges in $X$ to a point $x \in X$ then this fact may be expressed by writing any of the following: - - - -\begin{alignat}{4} - -& x_{\bull} && \to && x && \text{ in } X \\ - -& x_a && \to && x && \text{ in } X \\ - -\lim_{} & x_{\bull} && \to && x && \text{ in } X \\ - -\lim_{a \in A} & x_a && \to && x && \text{ in } X \\ - -\lim_{} {}_a & x_a && \to && x && \text{ in } X \\ - -\end{alignat} - - - -where if the topological space $X$ is clear from context then the words "in $X$" may be omitted.
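- -For instance (a standard illustration, recalled here for concreteness), if $X$ is a set with at least two points carrying the indiscrete topology, then the only open neighborhood of any point is $X$ itself, so every net in $X$ is eventually in every neighborhood of every point; consequently every net in $X$ converges to every point of $X$ at once. In particular, limits of nets need not be unique, which motivates the uniqueness discussion that follows.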
- -If $\lim_{} x_{\bull} \to x$ in $X$ and if this limit in $X$ is unique (uniqueness in $X$ means that if $y \in X$ is such that $\lim_{} x_{\bull} \to y,$ then necessarily $x = y$) then this fact may be indicated by writing -$$ -\lim_{} x_{\bull} = x -$$ or $\lim_{} x_a = x$ or $\lim_{a \in A} x_a = x$ - -where an equals sign is used in place of the arrow $\to.$ In a Hausdorff space, every net has at most one limit, so the limit of a convergent net in a Hausdorff space is always unique. - -Some authors instead use the notation "$\lim_{} x_{\bull} = x$" to mean $\lim_{} x_{\bull} \to x$ without also requiring that the limit be unique; however, if this notation is defined in this way then the equals sign $=$ is no longer guaranteed to denote a transitive relationship and so no longer denotes equality. Specifically, without the uniqueness requirement, if $x, y \in X$ are distinct and if each is also a limit of $x_{\bull}$ in $X$ then $\lim_{} x_{\bull} = x$ and $\lim_{} x_{\bull} = y$ could be written (using the equals sign $=$) despite it not being true that $x = y.$ - -Intuitively, convergence of this net means that the values $x_a$ come and stay as close as we want to $x$ for large enough $a.$ - -The example net given above on the neighborhood system of a point $x$ does indeed converge to $x$ according to this definition. - -Given a subbase $\mathcal{B}$ for the topology on $X$ (note that every base for a topology is also a subbase) and given a point $x \in X,$ a net $x_{\bull}$ in $X$ converges to $x$ if and only if it is eventually in every neighborhood $U \in \mathcal{B}$ of $x.$ This characterization extends to neighborhood subbases (and so also neighborhood bases) of the given point $x.$ - -If the set $S := \{ x \} \cup \left\{ x_a : a \in A \right\}$ is endowed with the subspace topology induced on it by $X,$ then $\lim_{} x_{\bull} \to x$ in $X$ if and only if $\lim_{} x_{\bull} \to x$ in $S.$ In this way, the question of whether or not the net $x_{\bull}$ converges to the given point $x$ depends solely on this topological subspace $S$ consisting of $x$ and the image of (i.e. the points of) the net $x_{\bull}.$ - -A net in the product space has a limit if and only if each projection has a limit.
- -Symbolically, suppose that the Cartesian product -$$ -X := \prod_{i \in I} X_i -$$ - -of the spaces $\left(X_i\right)_{i \in I}$ is endowed with the product topology and that for every index $i \in I,$ the canonical projection to $X_i$ is denoted by -$$ -\pi_i : X = \prod_{j \in I} X_j \to X_i -$$ and defined by $\left(x_j\right)_{j \in I} \mapsto x_i.$ - -Let $f_{\bull} = \left(f_a\right)_{a \in A}$ be a net in $X = \prod_{i \in I} X_i$ directed by $A$ and for every index $i \in I,$ let -$$ -\pi_i\left(f_{\bull}\right) ~:=~ \left( \pi_i\left(f_a\right) \right)_{a \in A} -$$ - -denote the result of "plugging $f_{\bull}$ into $\pi_i$", which results in the net $\pi_i\left(f_{\bull}\right) : A \to X_i.$ - -It is sometimes useful to think of this definition in terms of function composition: the net $\pi_i\left(f_{\bull}\right)$ is equal to the composition of the net $f_{\bull} : A \to X$ with the projection $\pi_i : X \to X_i$; that is, $\pi_i\left(f_{\bull}\right) := \pi_i \circ f_{\bull}.$ - -If given $L = \left(L_i\right)_{i \in I} \in X,$ then -$$ -f_{\bull} \to L -$$ in $X = \prod_i X_i$ if and only if for every $i \in I,$ $\pi_i\left(f_{\bull}\right) := \left( \pi_i\left(f_a\right) \right)_{a \in A} \to \pi_i(L) = L_i$ in $X_i.$ - -Tychonoff's theorem and relation to the axiom of choice - -If no $L \in X$ is given but for every $i \in I,$ there exists some $L_i \in X_i$ such that $\pi_i\left(f_{\bull}\right) \to L_i$ in $X_i$ then the tuple defined by $L := \left(L_i\right)_{i \in I}$ will be a limit of $f_{\bull}$ in $X.$ - -However, the axiom of choice might need to be assumed in order to conclude that this tuple $L$ exists; the axiom of choice is not needed in some situations, such as when $I$ is finite or when every $L_i \in X_i$ is the unique limit of the net $\pi_i\left(f_{\bull}\right)$ (because then there is nothing to choose between), which happens, for example, when every $X_i$ is a Hausdorff space. If $I$ is infinite and $X = \prod_{j \in I} X_j$ is not empty, then the axiom of choice would (in general) still be needed to conclude that the projections $\pi_i : X\to X_i$ are surjective maps. - -The axiom of choice is equivalent to Tychonoff's theorem, which states that the product of any collection of compact topological spaces is compact. - -But if every compact space involved is also Hausdorff, then the so-called "Tychonoff's theorem for compact Hausdorff spaces" can be used instead, which is equivalent to the ultrafilter lemma and so strictly weaker than the axiom of choice. - -Nets can be used to give short proofs of both versions of Tychonoff's theorem by using the characterization of net convergence given above together with the fact that a space is compact if and only if every net has a convergent subnet. - -Let $f$ be a net in $X$ based on the directed set $A$ and let $S$ be a subset of $X.$ Then $f$ is said to be frequently in $S$ (or cofinally in $S$) if for every $a \in A$ there exists some $b \in A$ such that $b \geq a$ and $f(b) \in S.$ - -A point $x \in X$ is said to be an accumulation point or cluster point of a net if (and only if) for every neighborhood $U$ of $x,$ the net is frequently in $U.$ - -A net $f$ in a set $X$ is called a universal net or an ultranet if for every subset $S \subseteq X,$ $f$ is eventually in $S$ or $f$ is eventually in $X \setminus S.$ - -Every constant net is an ultranet. - -Ultranets are closely related to ultrafilters. - -* Limit of a sequence and limit of a function: see below. - -* Limits of nets of Riemann sums, in the definition of the Riemann integral.
In this example, the directed set is the set of partitions of the interval of integration, partially ordered by inclusion. - -A sequence $a_1, a_2, \ldots$ in a topological space $X$ can be considered a net in $X$ defined on $\mathbb{N}.$ - -The net is eventually in a subset $S$ of $X$ if there exists an $N \in \mathbb{N}$ such that for every integer $n \geq N,$ the point $a_n$ is in $S.$ - -So $\lim {}_{n} a_n \to L$ if and only if for every neighborhood $V$ of $L,$ the net is eventually in $V.$ - -The net is frequently in a subset $S$ of $X$ if and only if for every $N \in \mathbb{N}$ there exists some integer $n \geq N$ such that $a_n \in S,$ that is, if and only if infinitely many elements of the sequence are in $S.$ Thus a point $y \in X$ is a cluster point of the net if and only if every neighborhood $V$ of $y$ contains infinitely many elements of the sequence. - -Consider a function $f$ from a metric space $M$ to a topological space $X,$ and a point $c \in M.$ We direct the set $M \setminus \{ c \}$ reversely according to distance from $c,$ that is, the relation is "has at least the same distance to $c$ as", so that "large enough" with respect to the relation means "close enough to $c$". The function $f$ is a net in $X$ defined on $M \setminus \{ c \}.$ - -The net $f$ is eventually in a subset $S$ of $X$ if there exists some $y \in M \setminus \{ c \}$ such that for every $x \in M \setminus \{ c \}$ with $d(x, c) \leq d(y, c)$ the point $f(x)$ is in $S.$ - -So $\lim_{x \to c} f(x) \to L$ if and only if for every neighborhood $V$ of $L,$ $f$ is eventually in $V.$ - -The net $f$ is frequently in a subset $S$ of $X$ if and only if for every $y \in M \setminus \{ c \}$ there exists some $x \in M \setminus \{ c \}$ with $d(x, c) \leq d(y, c)$ such that $f(x)$ is in $S.$ - -A point $y \in X$ is a cluster point of the net $f$ if and only if for every neighborhood $V$ of $y,$ the net is frequently in $V.$ - -Consider a well-ordered set $[0, c]$ with limit point $t$ and a function $f$ from $[0, t)$ to a topological space $X.$ This function is a net on $[0, t).$ - -It is eventually in a subset $V$ of $X$ if there exists an $r \in [0, t)$ such that for every $s \in [r, t)$ the point $f(s)$ is in $V.$ - -So $\lim_{x \to t} f(x) \to L$ if and only if for every neighborhood $V$ of $L,$ $f$ is eventually in $V.$ - -The net $f$ is frequently in a subset $V$ of $X$ if and only if for every $r \in [0, t)$ there exists some $s \in [r, t)$ such that $f(s) \in V.$ - -A point $y \in X$ is a cluster point of the net $f$ if and only if for every neighborhood $V$ of $y,$ the net is frequently in $V.$ - -The first example is a special case of this with $c = \omega.$ - -See also ordinal-indexed sequence. - -Virtually all concepts of topology can be rephrased in the language of nets and limits. This may be useful to guide the intuition since the notion of limit of a net is very similar to that of limit of a sequence. The following set of theorems and lemmas helps cement that similarity: - -Characterizations of topologies - -A subset $S \subseteq X$ is open if and only if no net in $X \setminus S$ converges to a point of $S.$ It is this characterization of open subsets that allows nets to characterize topologies. - -Topologies can also be characterized by closed subsets.
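- -A standard example (recalled here as an aside) of why nets rather than sequences are needed for such characterizations: let $X$ be an uncountable set with the co-countable topology, in which a set is open if and only if it is empty or has countable complement. A sequence in $X$ converges only if it is eventually constant, so every subset of $X$ already contains the limits of all of its convergent sequences; yet the closed sets are only $X$ itself and the countable sets. Sequences therefore cannot detect which sets are closed, while the net criteria stated here do.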
A subset $S \subseteq X$ is closed in $X$ if and only if every limit point of every net in $S$ necessarily belongs to $S.$ - -Explicitly, a subset $S \subseteq X$ is closed if and only if whenever $x \in X$ and $s_{\bull} = \left(s_a\right)_{a \in A}$ is a net with elements in $S$ having limit $x$ (that is, such that $s_a \in S \text{ for all } a \in A$ and $\lim{}_{} s_{\bull} \to x \text{ in } X$), then necessarily $x \in S.$ - -More generally, if $S \subseteq X$ is any subset then a point $x \in X$ is in the closure of $S$ if and only if there exists a net $s_{\bull} = \left(s_a\right)_{a \in A}$ in $S$ with limit $x \in X$ and such that $s_a \in S$ for every index $a \in A.$ - -Continuity - -A function $f : X \to Y$ between topological spaces is continuous at the point $x$ if and only if for every net $x_{\bull} = \left(x_a\right)_{a \in A},$ -$$ -\lim_{} x_{\bull} \to x \text{ in } X \quad \text{ implies } \quad \lim{}_a f\left(x_a\right) \to f(x) \text{ in } Y. -$$ - -This theorem is in general not true if "net" is replaced by "sequence"; it is necessary to allow for directed sets other than just the natural numbers if $X$ is not a first-countable space (or not a sequential space). - -Compactness - -A space $X$ is compact if and only if every net $x_{\bull} = \left(x_a\right)_{a \in A}$ in $X$ has a subnet with a limit in $X.$ This can be seen as a generalization of the Bolzano–Weierstrass theorem and Heine–Borel theorem. - -The set of cluster points of a net is equal to the set of limits of its convergent subnets. - -A net has a limit if and only if all of its subnets have limits. In that case, every limit of the net is also a limit of every subnet. - -In general, a net in a space $X$ can have more than one limit, but if $X$ is a Hausdorff space, the limit of a net, if it exists, is unique. Conversely, if $X$ is not Hausdorff, then there exists a net on $X$ with two distinct limits. Thus the uniqueness of the limit is equivalent to the Hausdorff condition on the space, and indeed this may be taken as the definition. This result depends on the directedness condition; a set indexed by a general preorder or partial order may have distinct limit points even in a Hausdorff space. - -If $f : X \to Y$ and $x_{\bull} = \left(x_a\right)_{a \in A}$ is an ultranet on $X,$ then $\left(f\left(x_a\right)\right)_{a \in A}$ is an ultranet on $Y.$ - -A Cauchy net generalizes the notion of Cauchy sequence to nets defined on uniform spaces. - -A net $x_{\bull} = \left(x_a\right)_{a \in A}$ is a Cauchy net if for every entourage $V$ there exists $c \in A$ such that for all $a, b \geq c,$ $\left(x_a, x_b\right)$ is a member of $V.$ For instance, any net $\left(x_a\right)_{a \in A}$ in $X$ induces a filter base of tails $\{ \{ x_a : a \in A, a_0 \leq a \} : a_0 \in A \}$ where the filter in $X$ generated by this filter base is called the net's eventuality filter. This correspondence allows for any theorem that can be proven with one concept to be proven with the other. For instance, continuity of a function from one topological space to the other can be characterized either by the convergence of a net in the domain implying the convergence of the corresponding net in the codomain, or by the same statement with filter bases. - -Robert G. Bartle argues that despite their equivalence, it is useful to have both concepts.
He argues that nets are enough like sequences to make natural proofs and definitions in analogy to sequences, especially ones using sequential elements, as is common in analysis, while filters are most useful in algebraic topology. In any case, he shows how the two can be used in combination to prove various theorems in general topology. - -Limit superior and limit inferior of a net of real numbers can be defined in a similar manner as for sequences. Some authors work even with more general structures than the real line, like complete lattices. - -For a net $\left(x_a\right)_{a \in A},$ put -$$ -\limsup x_a = \lim_{a \in A} \sup_{b \succeq a} x_b = \inf_{a \in A} \sup_{b \succeq a} x_b. -$$ - -Limit superior of a net of real numbers has many properties analogous to the case of sequences. For example, -$$ -\limsup (x_a + y_a) \leq \limsup x_a + \limsup y_a, -$$ - -where equality holds whenever one of the nets is convergent. diff --git a/wiki/wikipedia/33.txt b/wiki/wikipedia/33.txt deleted file mode 100644 index 0422eaa5a244014200a943c65fd20510c5d8036f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/33.txt +++ /dev/null @@ -1,23 +0,0 @@ -Segal's Burnside ring conjecture, or, more briefly, the Segal conjecture, is a theorem in homotopy theory, a branch of mathematics. The theorem relates the Burnside ring of a finite group G to the stable cohomotopy of the classifying space BG. The conjecture was made in the mid 1970s by Graeme Segal and proved in 1984 by Gunnar Carlsson. To this day, this statement is still commonly referred to as the Segal conjecture, even though it now has the status of a theorem. - -The Segal conjecture has several different formulations, not all of which are equivalent. Here is a weak form: there exists, for every finite group G, an isomorphism -$$ -\varprojlim \pi_S^0 \left( BG^{(k)}_+ \right) \to \widehat{A}(G). -$$ - -Here, $\varprojlim$ denotes the inverse limit, $\pi_S^*$ denotes the stable cohomotopy ring, $B$ denotes the classifying space, the superscript $k$ denotes the $k$-skeleton, and the subscript $+$ denotes the addition of a disjoint basepoint. On the right-hand side, the hat denotes the completion of the Burnside ring with respect to its augmentation ideal. - -The Burnside ring of a finite group G is constructed from the category of finite G-sets as a Grothendieck group. More precisely, let M(G) be the commutative monoid of isomorphism classes of finite G-sets, with addition the disjoint union of G-sets and identity element the empty set (which is a G-set in a unique way). Then A(G), the Grothendieck group of M(G), is an abelian group. It is in fact a free abelian group with basis elements represented by the G-sets G/H, where H varies over the subgroups of G. (Note that H is not assumed here to be a normal subgroup of G, for while G/H is not a group in this case, it is still a G-set.) The ring structure on A(G) is induced by the direct product of G-sets; the multiplicative identity is the (isomorphism class of any) one-point set, which becomes a G-set in a unique way. - -The Burnside ring is the analogue of the representation ring in the category of finite sets, as opposed to the category of finite-dimensional vector spaces over a field (see motivation below). It has proven to be an important tool in the representation theory of finite groups. - -For any topological group G admitting the structure of a CW-complex, one may consider the category of principal G-bundles.
One can define a functor from the category of CW-complexes to the category of sets by assigning to each CW-complex X the set of principal G-bundles on X. This functor descends to a functor on the homotopy category of CW-complexes, and it is natural to ask whether the functor so obtained is representable. The answer is affirmative, and the representing object is called the classifying space of the group G and typically denoted BG. If we restrict our attention to the homotopy category of CW-complexes, then BG is unique. Any CW-complex that is homotopy equivalent to BG is called a model for BG. - -For example, if G is the group of order 2, then a model for BG is infinite-dimensional real projective space. It can be shown that if G is finite, then any CW-complex modelling BG has cells of arbitrarily large dimension. On the other hand, if G = Z, the integers, then the classifying space BG is homotopy equivalent to the circle S1. - -The content of the theorem becomes somewhat clearer if it is placed in its historical context. In the theory of representations of finite groups, one can form an object $R[G]$ called the representation ring of $G$ in a way entirely analogous to the construction of the Burnside ring outlined above. The stable cohomotopy is in a sense the natural analog to complex K-theory, which is denoted $KU^*$. Segal was inspired to make his conjecture after Michael Atiyah proved the existence of an isomorphism -$$ -KU^0(BG) \to \widehat{R}[G] -$$ - -which is a special case of the Atiyah–Segal completion theorem. diff --git a/wiki/wikipedia/330.txt b/wiki/wikipedia/330.txt deleted file mode 100644 index 68d7be4bbd764029abdf8ab215eac296c4d6a198..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/330.txt +++ /dev/null @@ -1,35 +0,0 @@ -In philosophy and mathematics, Newcomb's paradox, also known as Newcomb's problem, is a thought experiment involving a game between two players, one of whom is able to predict the future. - -Newcomb's paradox was created by William Newcomb of the University of California's Lawrence Livermore Laboratory. However, it was first analyzed in a philosophy paper by Robert Nozick in 1969, and appeared in the March 1973 issue of Scientific American, in Martin Gardner's "Mathematical Games". Today it is a much debated problem in the philosophical branch of decision theory. - -There is a reliable predictor, another player, and two boxes designated A and B. The player is given a choice between taking only box B, or taking both boxes A and B. The player knows the following: - -* Box A is clear and always contains a visible $1,000. - -* Box B is opaque, and its content has already been set by the predictor: - -** If the predictor has predicted the player will take both boxes A and B, then box B contains nothing. - -** If the predictor has predicted that the player will take only box B, then box B contains $1,000,000. - -The player does not know what the predictor predicted or what box B contains while making the choice. - -In his 1969 article, Nozick noted that "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly." However, these issues can still be explored in the case of an infallible predictor. Under this condition, it seems that taking only B is the correct option. 
This analysis argues that we can ignore the possibilities that return $0 and $1,001,000, as they both require that the predictor has made an incorrect prediction, and the problem states that the predictor is never wrong. Thus, the choice becomes whether to take both boxes with $1,000 or to take only box B with $1,000,000—so taking only box B is always better. - -William Lane Craig has suggested that, in a world with perfect predictors (or time machines, because a time machine could be used as a mechanism for making a prediction), retrocausality can occur. If a person truly knows the future, and that knowledge affects their actions, then events in the future will be causing effects in the past. The chooser's choice will have already caused the predictor's action. Some have concluded that if time machines or perfect predictors can exist, then there can be no free will and choosers will do whatever they're fated to do. Taken together, the paradox is a restatement of the old contention that free will and determinism are incompatible, since determinism enables the existence of perfect predictors. Put another way, this paradox can be equivalent to the grandfather paradox; the paradox presupposes a perfect predictor, implying the "chooser" is not free to choose, yet simultaneously presumes a choice can be debated and decided. This suggests to some that the paradox is an artifact of these contradictory assumptions. - -Gary Drescher argues in his book Good and Real that the correct decision is to take only box B, by appealing to a situation he argues is analogous—a rational agent in a deterministic universe deciding whether or not to cross a potentially busy street. - -Andrew Irvine argues that the problem is structurally isomorphic to Braess's paradox, a non-intuitive but ultimately non-paradoxical result concerning equilibrium points in physical systems of various kinds. - -Simon Burgess has argued that the problem can be divided into two stages: the stage before the predictor has gained all the information on which the prediction will be based, and the stage after it. While the player is still in the first stage, they are presumably able to influence the predictor's prediction, for example by committing to taking only one box. Burgess argues that after the first stage is done, the player can decide to take both boxes A and B without influencing the predictor, thus reaching the maximum payout. This assumes that the predictor cannot predict the player's thought process in the second stage, and that the player can change their mind at the second stage without influencing the predictor's prediction. Burgess says that given his analysis, Newcomb's problem is akin to the toxin puzzle. This is because both problems highlight the fact that one can have a reason to intend to do something without having a reason to actually do it. - -Newcomb's paradox can also be related to the question of machine consciousness, specifically if a perfect simulation of a person's brain will generate the consciousness of that person. Suppose we take the predictor to be a machine that arrives at its prediction by simulating the brain of the chooser when confronted with the problem of which box to choose. If that simulation generates the consciousness of the chooser, then the chooser cannot tell whether they are standing in front of the boxes in the real world or in the virtual world generated by the simulation in the past. The "virtual" chooser would thus tell the predictor which choice the "real" chooser is going to make. 
- -Newcomb's paradox is related to logical fatalism in that they both suppose absolute certainty of the future. In logical fatalism, this assumption of certainty creates circular reasoning ("a future event is certain to happen, therefore it is certain to happen"), while Newcomb's paradox considers whether the participants of its game are able to affect a predestined outcome. - -Many thought experiments similar to or based on Newcomb's problem have been discussed in the literature. - -Another related problem is the meta-Newcomb problem. The setup of this problem is similar to the original Newcomb problem. However, the twist here is that the predictor may elect to decide whether to fill box B after the player has made a choice, and the player does not know whether box B has already been filled. There is also another predictor: a "meta-predictor" who has reliably predicted both the players and the predictor in the past, and who predicts the following: "Either you will choose both boxes, and the predictor will make its decision after you, or you will choose only box B, and the predictor will already have made its decision." - -In this situation, a proponent of choosing both boxes is faced with the following dilemma: if the player chooses both boxes, the predictor will not yet have made its decision, and therefore a more rational choice would be for the player to choose box B only. But if the player so chooses, the predictor will already have made its decision, making it impossible for the player's decision to affect the predictor's decision. diff --git a/wiki/wikipedia/3300.txt b/wiki/wikipedia/3300.txt deleted file mode 100644 index cdcdd6d3e13dae46d1461c8b57c9ac4afe84e863..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3300.txt +++ /dev/null @@ -1,827 +0,0 @@ -In mathematics, trigonometric identities are equalities that involve trigonometric functions and are true for every value of the occurring variables for which both sides of the equality are defined. Geometrically, these are identities involving certain functions of one or more angles. They are distinct from triangle identities, which are identities potentially involving angles but also involving side lengths or other lengths of a triangle. - -These identities are useful whenever expressions involving trigonometric functions need to be simplified. An important application is the integration of non-trigonometric functions: a common technique involves first using the substitution rule with a trigonometric function, and then simplifying the resulting integral with a trigonometric identity. - -The basic relationship between the sine and cosine is given by the Pythagorean identity: - -\sin^2\theta + \cos^2\theta = 1, - -where $\sin^2 \theta$ means $(\sin \theta)^2$ and $\cos^2 \theta$ means $(\cos \theta)^2.$ - -This can be viewed as a version of the Pythagorean theorem, and follows from the equation $x^2 + y^2 = 1$ for the unit circle. This equation can be solved for either the sine or the cosine: - -\begin{align} - -\sin\theta &= \pm \sqrt{1 - \cos^2\theta}, \\ - -\cos\theta &= \pm \sqrt{1 - \sin^2\theta}. 
- -\end{align} - -where the sign depends on the quadrant of $\theta.$ - -Dividing this identity by $\sin^2 \theta$, $\cos^2 \theta$, or both yields the following identities: -$$ -1 + \cot^2\theta = \csc^2\theta -$$ -$$ -\tan^2\theta + 1 = \sec^2\theta -$$ -$$ -\sec^2\theta + \csc^2\theta = \sec^2\theta\csc^2\theta -$$ - -Using these identities, it is possible to express any trigonometric function in terms of any other (up to a plus or minus sign): - -By examining the unit circle, one can establish the following properties of the trigonometric functions. - -When the direction of a Euclidean vector is represented by an angle $\theta,$ this is the angle determined by the free vector (starting at the origin) and the positive $x$-unit vector. The same concept may also be applied to lines in a Euclidean space, where the angle is that determined by a parallel to the given line through the origin and the positive $x$-axis. If a line (vector) with direction $\theta$ is reflected about a line with direction $\alpha,$ then the direction angle $\theta^{\prime}$ of this reflected line (vector) has the value - -\theta^{\prime} = 2 \alpha - \theta. - -The values of the trigonometric functions of these angles $\theta,\theta^{\prime}$ for specific angles $\alpha$ satisfy simple identities: either they are equal, or have opposite signs, or employ the complementary trigonometric function. These are also known as reduction formulae. - -These are also known as the angle addition and subtraction theorems (or formulae). - -\begin{align} - -\sin(\alpha + \beta) &= \sin \alpha \cos \beta + \cos \alpha \sin \beta \\ - -\sin(\alpha - \beta) &= \sin \alpha \cos \beta - \cos \alpha \sin \beta \\ - -\cos(\alpha + \beta) &= \cos \alpha \cos \beta - \sin \alpha \sin \beta \\ - -\cos(\alpha - \beta) &= \cos \alpha \cos \beta + \sin \alpha \sin \beta - -\end{align} - -These identities are summarized in the first two rows of the following table, which also includes sum and difference identities for the other trigonometric functions. - -When the series $\sum_{i=1}^\infty \theta_i$ converges absolutely then - -\sin\left(\sum_{i=1}^\infty \theta_i\right) - -=\sum_{\text{odd}\ k \ge 1} (-1)^\frac{k-1}{2} - -\sum_{\begin{smallmatrix} A \subseteq \{1,2,3,\dots\} \\ \left|A\right| = k\end{smallmatrix}} - -\left(\prod_{i \in A} \sin\theta_i \prod_{i \not \in A} \cos\theta_i\right) - -\cos\left(\sum_{i=1}^\infty \theta_i\right) - -=\sum_{\text{even}\ k \ge 0} ~ (-1)^\frac{k}{2} ~~ - -\sum_{\begin{smallmatrix} A \subseteq \{1,2,3,\dots\} \\ \left|A\right| = k\end{smallmatrix}} - -\left(\prod_{i \in A} \sin\theta_i \prod_{i \not \in A} \cos\theta_i\right) . - -Because the series $\sum_{i=1}^\infty \theta_i$ converges absolutely, it is necessarily the case that $\lim_{i \to \infty} \theta_i = 0,$ $\lim_{i \to \infty} \sin \theta_i = 0,$ and $\lim_{i \to \infty} \cos \theta_i = 1.$ In particular, in these two identities an asymmetry appears that is not seen in the case of sums of finitely many angles: in each product, there are only finitely many sine factors but there are cofinitely many cosine factors. Terms with infinitely many sine factors would necessarily be equal to zero. - -When only finitely many of the angles $\theta_i$ are nonzero then only finitely many of the terms on the right side are nonzero because all but finitely many sine factors vanish. Furthermore, in each term all but finitely many of the cosine factors are unity. 
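- -As a quick numerical check of the angle addition formula above (a routine worked example): taking $\alpha = 45^\circ$ and $\beta = 30^\circ$ gives -$$ -\sin 75^\circ = \sin 45^\circ \cos 30^\circ + \cos 45^\circ \sin 30^\circ = \frac{\sqrt{2}}{2} \cdot \frac{\sqrt{3}}{2} + \frac{\sqrt{2}}{2} \cdot \frac{1}{2} = \frac{\sqrt{6} + \sqrt{2}}{4} \approx 0.9659, -$$ - -which agrees with the value of $\sin 75^\circ$ computed directly.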
- -Let $e_k$ (for $k = 0, 1, 2, 3, \ldots$) be the kth-degree elementary symmetric polynomial in the variables - -x_i = \tan \theta_i - -for $i = 0, 1, 2, 3, \ldots,$ that is, - - - -\begin{align} - -e_0 & = 1 \\[6pt] - -e_1 & = \sum_i x_i & & = \sum_i \tan\theta_i \\[6pt] - -e_2 & = \sum_{i < j} x_i x_j & & = \sum_{i < j} \tan\theta_i \tan\theta_j \\[6pt] - -e_3 & = \sum_{i < j < k} x_i x_j x_k & & = \sum_{i < j < k} \tan\theta_i \tan\theta_j \tan\theta_k \\ - -& {}\ \ \vdots & & {}\ \ \vdots - -\end{align} - - - -Then - -\begin{align}\tan\left(\sum_i \theta_i\right) - -& = \frac{\sin\left(\sum_i \theta_i\right) / \prod_i \cos \theta_i}{\cos\left(\sum_i \theta_i\right) / \prod_i \cos \theta_i} - -\\& = \frac{\displaystyle\sum_{\text{odd}\ k \ge 1} (-1)^\frac{k-1}{2} - -\sum_{\begin{smallmatrix} A \subseteq \{1,2,3,\dots\} \\ \left|A\right| = k\end{smallmatrix}} - -\prod_{i \in A} \tan\theta_i}{\displaystyle\sum_{\text{even}\ k \ge 0} ~ (-1)^\frac{k}{2} ~~ - -\sum_{\begin{smallmatrix} A \subseteq \{1,2,3,\dots\} \\ \left|A\right| = k\end{smallmatrix}} - -\prod_{i \in A} \tan\theta_i} - -= \frac{e_1 - e_3 + e_5 -\cdots}{e_0 - e_2 + e_4 - \cdots} - -\\ - -\cot\left(\sum_i \theta_i\right) - -& = \frac{e_0 - e_2 + e_4 - \cdots}{e_1 - e_3 + e_5 -\cdots} - -\end{align} - -using the sine and cosine sum formulae above. - -The number of terms on the right side depends on the number of terms on the left side. - -For example: - -\begin{align} - -\tan(\theta_1 + \theta_2) & - -= \frac{ e_1 }{ e_0 - e_2 } - -= \frac{ x_1 + x_2 }{ 1 \ - \ x_1 x_2 } - -= \frac{ \tan\theta_1 + \tan\theta_2 }{ 1 \ - \ \tan\theta_1 \tan\theta_2 }, - -\\[8pt] - -\tan(\theta_1 + \theta_2 + \theta_3) & - -= \frac{ e_1 - e_3 }{ e_0 - e_2 } - -= \frac{ (x_1 + x_2 + x_3) \ - \ (x_1 x_2 x_3) }{ 1 \ - \ (x_1x_2 + x_1 x_3 + x_2 x_3) }, - -\\[8pt] - -\tan(\theta_1 + \theta_2 + \theta_3 + \theta_4) & - -= \frac{ e_1 - e_3 }{ e_0 - e_2 + e_4 } \\[8pt] & - -= \frac{ (x_1 + x_2 + x_3 + x_4) \ - \ (x_1 x_2 x_3 + x_1 x_2 x_4 + x_1 x_3 x_4 + x_2 x_3 x_4) }{ 1 \ - \ (x_1 x_2 + x_1 x_3 + x_1 x_4 + x_2 x_3 + x_2 x_4 + x_3 x_4) \ + \ (x_1 x_2 x_3 x_4) }, - -\end{align} - -and so on. The case of only finitely many terms can be proved by mathematical induction. - -\begin{align} - -\sec\left(\sum_i \theta_i\right) & = \frac{\prod_i \sec\theta_i}{e_0 - e_2 + e_4 - \cdots} \\[8pt] - -\csc\left(\sum_i \theta_i \right) & = \frac{\prod_i \sec\theta_i }{e_1 - e_3 + e_5 - \cdots} - -\end{align} - -where $e_k$ is the kth-degree elementary symmetric polynomial in the n variables $x_i = \tan \theta_i,$ $i = 1, \ldots, n,$ and the number of terms in the denominator and the number of factors in the product in the numerator depend on the number of terms in the sum on the left. The case of only finitely many terms can be proved by mathematical induction on the number of such terms. - -For example, - -\begin{align} - -\sec(\alpha+\beta+\gamma) & = \frac{\sec\alpha \sec\beta \sec\gamma}{1 - \tan\alpha\tan\beta - \tan\alpha\tan\gamma - \tan\beta\tan\gamma } \\[8pt] - -\csc(\alpha+\beta+\gamma) & = \frac{\sec\alpha \sec\beta \sec\gamma}{\tan\alpha + \tan\beta + \tan\gamma - \tan\alpha\tan\beta\tan\gamma}. - -\end{align} - -Formulae for twice an angle. 
-$$ -\sin (2\theta) = 2 \sin \theta \cos \theta -$$ -$$ -\cos (2\theta) = \cos^2 \theta - \sin^2 \theta = 2 \cos^2 \theta - 1 = 1 - 2 \sin^2 \theta -$$ -$$ -\tan (2\theta) = \frac{2 \tan \theta} {1 - \tan^2 \theta} -$$ -$$ -\cot (2\theta) = \frac{\cot^2 \theta - 1}{2 \cot \theta} -$$ -$$ -\sec (2\theta) = \frac{\sec^2 \theta}{2 - \sec^2 \theta} -$$ -$$ -\csc (2\theta) = \frac{\sec \theta \csc \theta}{2} -$$ - -Formulae for triple angles. -$$ -\sin (3\theta) = 3 \sin \theta - 4 \sin^3 \theta -$$ -$$ -\cos (3\theta) = 4 \cos^3 \theta - 3 \cos \theta -$$ -$$ -\tan (3\theta) = \frac{3 \tan \theta - \tan^3 \theta}{1 - 3 \tan^2 \theta} -$$ - -Also - -\begin{align} - -\tan\frac{\eta\pm\theta}{2} &= \frac{\sin\eta \pm \sin\theta}{\cos\eta + \cos\theta} \\[3pt] - -\tan\left(\frac{\theta}{2} + \frac{\pi}{4}\right) &= \sec\theta + \tan\theta \\[3pt] - -\sqrt{\frac{1 - \sin\theta}{1 + \sin\theta}} &= \frac{\left|1 - \tan\frac{\theta}{2}\right|}{\left|1 + \tan\frac{\theta}{2}\right|} - -\end{align} - -These can be shown by using either the sum and difference identities or the multiple-angle formulae. - -Formulae for half angles. - -\begin{align} - -\tan \frac{\theta}{2} - -&= \csc \theta - \cot \theta \\ - -&= \pm \sqrt\frac{1 - \cos \theta}{1 + \cos \theta} \\[3pt] - -&= \frac{\sin \theta}{1 + \cos \theta} \\[4pt] - -&= \frac{1 - \cos \theta}{\sin \theta} \\[5pt] - -\tan\frac{\theta}{2} &= \frac{\tan\theta}{1 + \sqrt{1 + \tan^2\theta}} \\ - -&\text{for } \theta \in \left(-\tfrac{\pi}{2},\tfrac{\pi}{2} \right) - -\end{align} - -\begin{align} - -\cot \frac{\theta}{2} - -&= \csc \theta + \cot \theta \\ - -&= \pm \sqrt\frac{1 + \cos \theta}{1 - \cos \theta} \\[3pt] - -&= \frac{\sin \theta}{1 - \cos \theta} \\[4pt] - -&= \frac{1 + \cos \theta}{\sin \theta} - -\end{align} - -The fact that the triple-angle formula for sine and cosine only involves powers of a single function allows one to relate the geometric problem of a compass and straightedge construction of angle trisection to the algebraic problem of solving a cubic equation, which allows one to prove that trisection is in general impossible using the given tools, by field theory. - -A formula for computing the trigonometric identities for the one-third angle exists, but it requires finding the zeroes of the cubic equation $4x^3 - 3x + d = 0,$ where $x$ is the value of the cosine function at the one-third angle and $d$ is the known value of the cosine function at the full angle. However, the discriminant of this equation is positive, so this equation has three real roots (of which only one is the solution for the cosine of the one-third angle). None of these solutions is reducible to a real algebraic expression, as they use intermediate complex numbers under the cube roots. - -The power-reduction formulae are obtained by solving the second and third versions of the cosine double-angle formula. In general, expressions for powers of $\sin \theta$ or $\cos \theta$ in terms of multiple angles can be deduced using De Moivre's formula, Euler's formula and the binomial theorem. - -==Product-to-sum and sum-to-product identities== - -The product-to-sum identities or prosthaphaeresis formulae can be proven by expanding their right-hand sides using the angle addition theorems. See amplitude modulation for an application of the product-to-sum formulae, and beat (acoustics) and phase detector for applications of the sum-to-product formulae. - -Charles Hermite demonstrated the following identity. Suppose $a_1, \ldots, a_n$ are complex numbers, no two of which differ by an integer multiple of $\pi.$ Let -$$ -A_{n,k} = \prod_{\begin{smallmatrix} 1 \le j \le n \\ j \neq k \end{smallmatrix}} \cot(a_k - a_j) -$$ - -(in particular, $A_{1,1},$ being an empty product, is 1). Then -$$ -\cot(z - a_1)\cdots\cot(z - a_n) = \cos\frac{n\pi}{2} + \sum_{k=1}^n A_{n,k} \cot(z - a_k).
-$$ - -The simplest non-trivial example is the case n = 2: -$$ -\cot(z - a_1)\cot(z - a_2) = -1 + \cot(a_1 - a_2)\cot(z - a_1) + \cot(a_2 - a_1)\cot(z - a_2). -$$ - -Ptolemy's theorem can be expressed in the language of modern trigonometry as: - -If w + x + y + z = pi, then: - -\begin{align} - -\sin(w + x)\sin(x + y) - -&= \sin(x + y)\sin(y + z) & \text{(trivial)} \\ - -&= \sin(y + z)\sin(z + w) & \text{(trivial)} \\ - -&= \sin(z + w)\sin(w + x) & \text{(trivial)} \\ - -&= \sin w \sin y + \sin x \sin z. & \text{(significant)} - -\end{align} - -(The first three equalities are trivial rearrangements; the fourth is the substance of this identity.) - -For coprime integers n, m -$$ -\prod_{k=1}^n \left(2a + 2\cos\left(\frac{2 \pi k m}{n} + x\right)\right) = 2\left( T_n(a)+{(-1)}^{n+m}\cos(n x) \right) -$$ - -where Tn is the Chebyshev polynomial. - -The following relationship holds for the sine function -$$ -\prod_{k=1}^{n-1} \sin\left(\frac{k\pi}{n}\right) = \frac{n}{2^{n-1}}. -$$ - -More generally -$$ -\sin(nx) = 2^{n-1}\prod_{k=0}^{n-1} \sin\left(x+\frac{k\pi}{n}\right). -$$ - -For some purposes it is important to know that any linear combination of sine waves of the same period or frequency but different phase shifts is also a sine wave with the same period or frequency, but a different phase shift. This is useful in sinusoid data fitting, because the measured or observed data are linearly related to the a and b unknowns of the in-phase and quadrature components basis below, resulting in a simpler Jacobian, compared to that of $c$ and $\varphi$. - -The linear combination, or harmonic addition, of sine and cosine waves is equivalent to a single sine wave with a phase shift and scaled amplitude, -$$ -a\cos x+b\sin x=c\cos(x+\varphi) -$$ - -where $c$ and $\varphi$ are defined as so: - -\begin{align} - -c &= \sgn(a) \sqrt{a^2 + b^2}, \\ - -\varphi &= \operatorname{arctan} \left(-\frac{b}{a}\right), - -\end{align} - -given that $a \neq 0.$ - -More generally, for arbitrary phase shifts, we have -$$ -a \sin(x + \theta_a) + b \sin(x + \theta_b)= c \sin(x+\varphi) -$$ - -where $c$ and $\varphi$ satisfy: - -\begin{align} - -c^2 &= a^2 + b^2 + 2ab\cos \left(\theta_a - \theta_b \right) , \\ - -\tan \varphi &= \frac{a \sin \theta_a + b \sin \theta_b}{a \cos \theta_a + b \cos \theta_b}. - -\end{align} - -The general case reads - - - -\begin{align} - -\sum_{n=1}^N \sin (n\theta) & = \frac{1}{2}\cot\frac{\theta}{2}-\frac{\cos\left(\left(N+\frac{1}{2}\right)\theta\right)}{2\sin\left(\frac{\theta}{2}\right)}\\[5pt] - -\sum_{n=1}^N \cos (n\theta) & = -\frac{1}{2}+\frac{\sin\left(\left(N+\frac{1}{2}\right)\theta\right)}{2\sin\left(\frac{\theta}{2}\right)} - -\end{align} - - - -for -$$ -\theta \not\equiv 0 (\textrm{mod} 2\pi). -$$ - -A related function is the following function of $x,$ called the Dirichlet kernel. - -1+2\cos x + 2\cos(2x) + 2\cos(3x) + \cdots + 2\cos(nx) - -= \frac{\sin\left(\left(n +\frac{1}{2}\right)x\right)}{\sin\left(\frac{x}{2}\right)}. - -If $f(x)$ is given by the linear fractional transformation -$$ -f(x) = \frac{(\cos\alpha)x - \sin\alpha}{(\sin\alpha)x + \cos\alpha}, -$$ - -and similarly -$$ -g(x) = \frac{(\cos\beta)x - \sin\beta}{(\sin\beta)x + \cos\beta}, -$$ - -then - -f\big(g(x)\big) = g\big(f(x)\big) - -= \frac{\big(\cos(\alpha+\beta)\big)x - \sin(\alpha+\beta)}{\big(\sin(\alpha+\beta)\big)x + \cos(\alpha+\beta)}. - -More tersely stated, if for all $\alpha$ we let $f_{\alpha}$ be what we called $f$ above, then -$$ -f_\alpha \circ f_\beta = f_{\alpha+\beta}. 
-$$ - -If $x$ is the slope of a line, then $f(x)$ is the slope of its rotation through an angle of $- \alpha.$ - -Euler's formula states that, for any real number $x$: -$$ -e^{ix} = \cos x + i\sin x -$$, - -where $i$ is the imaginary unit. Substituting $-x$ for $x$ gives us: -$$ -e^{-ix} = \cos(-x) + i\sin(-x) = \cos x - i\sin x -$$. - -These two equations can be used to solve for cosine and sine in terms of the exponential function. Specifically, -$$ -\cos x = \frac{e^{ix} + e^{-ix}}{2} -$$ -$$ -\sin x = \frac{e^{ix} - e^{-ix}}{2i} -$$ - -These formulae are useful for proving many other trigonometric identities. For example, the fact that -$$ -e^{i(\theta+\varphi)} = e^{i\theta} e^{i\varphi} -$$ - -means that -$$ -\cos(\theta + \varphi) + i \sin(\theta + \varphi) = (\cos \theta + i \sin \theta)(\cos \varphi + i \sin \varphi) = (\cos \theta \cos \varphi - \sin \theta \sin \varphi) + i (\cos \theta \sin \varphi + \sin \theta \cos \varphi). -$$ - -That the real part of the left hand side equals the real part of the right hand side is an angle addition formula for cosine. The equality of the imaginary parts gives an angle addition formula for sine. - -The trigonometric functions and their inverses can also be expressed in terms of the exponential function and the complex logarithm. - -For applications to special functions, the following infinite product formulae for trigonometric functions are useful: - -\begin{align} - -\sin x &= x \prod_{n = 1}^\infty\left(1 - \frac{x^2}{\pi^2 n^2}\right) & - -\cos x &= \prod_{n = 1}^\infty\left(1 - \frac{x^2}{\pi^2\left(n - \frac{1}{2}\right)^2}\right) \\ - -\sinh x &= x \prod_{n = 1}^\infty\left(1 + \frac{x^2}{\pi^2 n^2}\right) & - -\cosh x &= \prod_{n = 1}^\infty\left(1 + \frac{x^2}{\pi^2\left(n - \frac{1}{2}\right)^2}\right) - -\end{align} - -The following identities give the result of composing a trigonometric function with an inverse trigonometric function. - - - -\begin{align} - -\sin(\arcsin x) &=x - -& \cos(\arcsin x) &=\sqrt{1-x^2} - -& \tan(\arcsin x) &=\frac{x}{\sqrt{1 - x^2}} - -\\ - -\sin(\arccos x) &=\sqrt{1-x^2} - -& \cos(\arccos x) &=x - -& \tan(\arccos x) &=\frac{\sqrt{1 - x^2}}{x} - -\\ - -\sin(\arctan x) &=\frac{x}{\sqrt{1+x^2}} - -& \cos(\arctan x) &=\frac{1}{\sqrt{1+x^2}} - -& \tan(\arctan x) &=x - -\\ - -\sin(\arccsc x) &=\frac{1}{x} - -& \cos(\arccsc x) &=\frac{\sqrt{x^2 - 1}}{x} - -& \tan(\arccsc x) &=\frac{1}{\sqrt{x^2 - 1}} - -\\ - -\sin(\arcsec x) &=\frac{\sqrt{x^2 - 1}}{x} - -& \cos(\arcsec x) &=\frac{1}{x} - -& \tan(\arcsec x) &=\sqrt{x^2 - 1} - -\\ - -\sin(\arccot x) &=\frac{1}{\sqrt{1+x^2}} - -& \cos(\arccot x) &=\frac{x}{\sqrt{1+x^2}} - -& \tan(\arccot x) &=\frac{1}{x} - -\\ - -\end{align} - - - -Taking the multiplicative inverse of both sides of each equation above results in the equations for $\csc = \frac{1}{\sin}, \sec = \frac{1}{\cos}, \text{ and } \cot = \frac{1}{\tan}.$ - -In each case, the right hand side of the corresponding formula above is simply flipped. - -For example, the equation for $\cot(\arcsin x)$ is: - -\cot(\arcsin x) = \frac{1}{\tan(\arcsin x)} = \frac{1}{\frac{x}{\sqrt{1 - x^2}}} = \frac{\sqrt{1 - x^2}}{x} - -while the equations for $\csc(\arccos x)$ and $\sec(\arccos x)$ are: - -\csc(\arccos x) = \frac{1}{\sin(\arccos x)} = \frac{1}{\sqrt{1-x^2}} \qquad \text{ and }\quad \sec(\arccos x) = \frac{1}{\cos(\arccos x)} = \frac{1}{x}. - -The following identities are implied by the reflection identities. They hold whenever $x, r, s, -x, -r, \text{ and } -s$ are in the domains of the relevant functions.
- -\begin{alignat}{9} - -\frac{\pi}{2} ~&=~ \arcsin(x) &&+ \arccos(x) ~&&=~ \arctan(r) &&+ \arccot(r) ~&&=~ \arcsec(s) &&+ \arccsc(s) \\[0.4ex] - -\pi ~&=~ \arccos(x) &&+ \arccos(-x) ~&&=~ \arccot(r) &&+ \arccot(-r) ~&&=~ \arcsec(s) &&+ \arcsec(-s) \\[0.4ex] - -0 ~&=~ \arcsin(x) &&+ \arcsin(-x) ~&&=~ \arctan(r) &&+ \arctan(-r) ~&&=~ \arccsc(s) &&+ \arccsc(-s) \\[1.0ex] - -\end{alignat} - -Also, - -\begin{align} - -\arctan x + \arctan \dfrac{1}{x} - -&= \begin{cases} - -\frac{\pi}{2}, & \text{if } x > 0 \\ - -- \frac{\pi}{2}, & \text{if } x < 0 - -\end{cases} \\ - -\arccot x + \arccot \dfrac{1}{x} - -&= \begin{cases} - -\frac{\pi}{2}, & \text{if } x > 0 \\ - -\frac{3\pi}{2}, & \text{if } x < 0 - -\end{cases} \\ - -\end{align} - -\arccos \frac{1}{x} = \arcsec x \qquad \text{ and } \qquad \arcsec \frac{1}{x} = \arccos x - -\arcsin \frac{1}{x} = \arccsc x \qquad \text{ and } \qquad \arccsc \frac{1}{x} = \arcsin x - -In terms of the arctangent function we have -$$ -\arctan \frac{1}{2} = \arctan \frac{1}{3} + \arctan \frac{1}{7}. -$$ - -The curious identity known as Morrie's law, -$$ -\cos 20^\circ\cdot\cos 40^\circ\cdot\cos 80^\circ = \frac{1}{8}, -$$ - -is a special case of an identity that contains one variable: -$$ -\prod_{j=0}^{k-1}\cos\left(2^j x\right) = \frac{\sin\left(2^k x\right)}{2^k\sin x}. -$$ - -Similarly, -$$ -\sin 20^\circ\cdot\sin 40^\circ\cdot\sin 80^\circ = \frac{\sqrt{3}}{8} -$$ - -is a special case of an identity with $x$ = 20°: -$$ -\sin x \cdot \sin \left(60^\circ - x\right) \cdot \sin \left(60^\circ + x\right) = \frac{\sin 3x}{4}. -$$ - -For the case $x$ = 15°, - -\begin{align} - -\sin 15^\circ\cdot\sin 45^\circ\cdot\sin 75^\circ &= \frac{\sqrt{2}}{8}, \\ - -\sin 15^\circ\cdot\sin 75^\circ &= \frac{1}{4}. - -\end{align} - -For the case $x$ = 10°, -$$ -\sin 10^\circ\cdot\sin 50^\circ\cdot\sin 70^\circ = \frac{1}{8}. -$$ - -The same cosine identity is -$$ -\cos x \cdot \cos \left(60^\circ - x\right) \cdot \cos \left(60^\circ + x\right) = \frac{\cos 3x}{4}. -$$ - -Similarly, - -\begin{align} - -\cos 10^\circ\cdot\cos 50^\circ\cdot\cos 70^\circ &= \frac{\sqrt{3}}{8}, \\ - -\cos 15^\circ\cdot\cos 45^\circ\cdot\cos 75^\circ &= \frac{\sqrt{2}}{8}, \\ - -\cos 15^\circ\cdot\cos 75^\circ &= \frac{1}{4}. - -\end{align} - -Similarly, - -\begin{align} - -\tan 50^\circ\cdot\tan 60^\circ\cdot\tan 70^\circ &= \tan 80^\circ, \\ - -\tan 40^\circ\cdot\tan 30^\circ\cdot\tan 20^\circ &= \tan 10^\circ. - -\end{align} - -The following is perhaps not as readily generalized to an identity containing variables (but see explanation below): -$$ -\cos 24^\circ + \cos 48^\circ + \cos 96^\circ + \cos 168^\circ = \frac{1}{2}. -$$ - -Degree measure ceases to be more felicitous than radian measure when we consider this identity with 21 in the denominators: - -\begin{align} - -\cos \frac{2\pi}{21} - -+{} &\cos\left(2\cdot\frac{2\pi}{21}\right) + - -\cos\left(4\cdot\frac{2\pi}{21}\right) \\[10pt] - -{}+{} &\cos\left( 5\cdot\frac{2\pi}{21}\right) + - -\cos\left( 8\cdot\frac{2\pi}{21}\right) + - -\cos\left(10\cdot\frac{2\pi}{21}\right) = \frac{1}{2}. - -\end{align} - -The factors 1, 2, 4, 5, 8, 10 may start to make the pattern clear: they are those integers less than 21/2 that are relatively prime to (or have no prime factors in common with) 21. 
The last several examples are corollaries of a basic fact about the irreducible cyclotomic polynomials: the cosines are the real parts of the zeroes of those polynomials; the sum of the zeroes is the Möbius function evaluated at (in the very last case above) 21; only half of the zeroes are present above. The two identities preceding this last one arise in the same fashion with 21 replaced by 10 and 15, respectively. - -Other cosine identities include: - -\begin{align} - -2\cos \frac{\pi}{3} &= 1, \\ - -2\cos \frac{\pi}{5} \times 2\cos \frac{2\pi}{5} &= 1, \\ - -2\cos \frac{\pi}{7} \times 2\cos \frac{2\pi}{7}\times 2\cos \frac{3\pi}{7} &= 1, - -\end{align} - -and so forth for all odd numbers, and hence -$$ -\cos \frac{\pi}{3}+\cos \frac{\pi}{5} \times \cos \frac{2\pi}{5} + \cos \frac{\pi}{7} \times \cos \frac{2\pi}{7} \times \cos \frac{3\pi}{7} + \dots = 1. -$$ - -Many of those curious identities stem from more general facts like the following: -$$ -\prod_{k=1}^{n-1} \sin\frac{k\pi}{n} = \frac{n}{2^{n-1}} -$$ - -and -$$ -\prod_{k=1}^{n-1} \cos\frac{k\pi}{n} = \frac{\sin\frac{\pi n}{2}}{2^{n-1}}. -$$ - -Combining these gives us -$$ -\prod_{k=1}^{n-1} \tan\frac{k\pi}{n} = \frac{n}{\sin\frac{\pi n}{2}} -$$ - -If $n$ is an odd number ($n = 2 m + 1$) we can make use of the symmetries to get -$$ -\prod_{k=1}^{m} \tan\frac{k\pi}{2m+1} = \sqrt{2m+1} -$$ - -The transfer function of the Butterworth low pass filter can be expressed in terms of polynomials and poles. By setting the frequency as the cutoff frequency, the following identity can be proved: -$$ -\prod_{k=1}^n \sin\frac{\left(2k - 1\right)\pi}{4n} = \prod_{k=1}^{n} \cos\frac{\left(2k-1\right)\pi}{4n} = \frac{\sqrt{2}}{2^n} -$$ - -An efficient way to compute $\pi$ to a large number of digits is based on the following identity without variables, due to Machin. This is known as a Machin-like formula: -$$ -\frac{\pi}{4} = 4 \arctan\frac{1}{5} - \arctan\frac{1}{239} -$$ - -or, alternatively, by using an identity of Leonhard Euler: -$$ -\frac{\pi}{4} = 5 \arctan\frac{1}{7} + 2 \arctan\frac{3}{79} -$$ - -or by using Pythagorean triples: -$$ -\pi = \arccos\frac{4}{5} + \arccos\frac{5}{13} + \arccos\frac{16}{65} = \arcsin\frac{3}{5} + \arcsin\frac{12}{13} + \arcsin\frac{63}{65}. -$$ - -Others include: -$$ -\frac{\pi}{4} = \arctan\frac{1}{2} + \arctan\frac{1}{3}, -$$ -$$ -\pi = \arctan 1 + \arctan 2 + \arctan 3, -$$ -$$ -\frac{\pi}{4} = 2\arctan \frac{1}{3} + \arctan \frac{1}{7}. -$$ - -Generally, for numbers $t_1, \ldots, t_{n-1} \in (-1, 1)$ for which $\theta_n = \sum_{k=1}^{n-1} \arctan t_k \in (\pi/4, 3\pi/4),$ let $t_n = \tan(\pi/2 - \theta_n) = \cot \theta_n.$ This last expression can be computed directly using the formula for the cotangent of a sum of angles whose tangents are $t_1, \ldots, t_{n-1}$ and its value will be in $(-1, 1).$ In particular, the computed $t_n$ will be rational whenever all the $t_1, \ldots, t_{n-1}$ values are rational. With these values, - -\begin{align} - -\frac{\pi}{2} & = \sum_{k=1}^n \arctan(t_k) \\ - -\pi & = \sum_{k=1}^n \operatorname{sign}(t_k) \arccos\left(\frac{1 - t_k^2}{1 + t_k^2}\right) \\ - -\pi & = \sum_{k=1}^n \arcsin\left(\frac{2t_k}{1 + t_k^2}\right) \\ - -\pi & = \sum_{k=1}^n \arctan\left(\frac{2t_k}{1 - t_k^2}\right), - -\end{align} - -where in all but the first expression, we have used tangent half-angle formulae. The first two formulae work even if one or more of the $t_k$ values is not within $(-1, 1).$ Note that if $t = p/q$ is rational, then the $(2t, 1 - t^2, 1 + t^2)$ values in the above formulae are proportional to the Pythagorean triple $(2pq, q^2 - p^2, q^2 + p^2).$
- -For example, for n = 3 terms, -$$ -\frac{\pi}{2} = \arctan\left(\frac{a}{b}\right) + \arctan\left(\frac{c}{d}\right) + \arctan\left(\frac{bd - ac}{ad + bc}\right) -$$ - -for any a, b, c, d > 0. - -Euclid showed in Book XIII, Proposition 10 of his Elements that the area of the square on the side of a regular pentagon inscribed in a circle is equal to the sum of the areas of the squares on the sides of the regular hexagon and the regular decagon inscribed in the same circle. In the language of modern trigonometry, this says: -$$ -\sin^2 18^\circ + \sin^2 30^\circ = \sin^2 36^\circ. -$$ - -Ptolemy used this proposition to compute some angles in his table of chords. - -These identities involve a trigonometric function of a trigonometric function: -$$ -\cos(t \sin x) = J_0(t) + 2 \sum_{k=1}^\infty J_{2k}(t) \cos(2kx) -$$ -$$ -\sin(t \sin x) = 2 \sum_{k=0}^\infty J_{2k+1}(t) \sin\big((2k+1)x\big) -$$ -$$ -\cos(t \cos x) = J_0(t) + 2 \sum_{k=1}^\infty (-1)^kJ_{2k}(t) \cos(2kx) -$$ -$$ -\sin(t \cos x) = 2 \sum_{k=0}^\infty(-1)^k J_{2k+1}(t) \cos\big((2k+1)x\big) -$$ - -where Ji are Bessel functions. - -The following formulae apply to arbitrary plane triangles and follow from $\alpha + \beta + \gamma = 180^{\circ},$ as long as the functions occurring in the formulae are well-defined (the latter applies only to the formulae in which tangents and cotangents occur). - -\begin{align} - -\tan \alpha + \tan \beta + \tan \gamma &= \tan \alpha \tan \beta \tan \gamma \\ - -1 &= \cot \beta \cot \gamma + \cot \gamma \cot \alpha + \cot \alpha \cot \beta \\ - -\cot\left(\frac{\alpha}{2}\right) + \cot\left(\frac{\beta}{2}\right) + \cot\left(\frac{\gamma}{2}\right) &= \cot\left(\frac{\alpha}{2}\right) \cot \left(\frac{\beta}{2}\right) \cot\left(\frac{\gamma}{2}\right) \\ - -1 &= \tan\left(\frac{\beta}{2}\right)\tan\left(\frac{\gamma}{2}\right) + \tan\left(\frac{\gamma}{2}\right)\tan\left(\frac{\alpha}{2}\right) + \tan\left(\frac{\alpha}{2}\right)\tan\left(\frac{\beta}{2}\right) \\ - -\sin \alpha + \sin \beta + \sin \gamma &= 4\cos\left(\frac{\alpha}{2}\right)\cos\left(\frac{\beta}{2}\right)\cos\left(\frac{\gamma}{2}\right) \\ - --\sin \alpha + \sin \beta + \sin \gamma &= 4\cos\left(\frac{\alpha}{2}\right)\sin\left(\frac{\beta}{2}\right)\sin\left(\frac{\gamma}{2}\right) \\ - -\cos \alpha + \cos \beta + \cos \gamma &= 4\sin\left(\frac{\alpha}{2}\right)\sin\left(\frac{\beta}{2}\right)\sin \left(\frac{\gamma}{2}\right) + 1 \\ - --\cos \alpha + \cos \beta + \cos \gamma &= 4\sin\left(\frac{\alpha}{2}\right)\cos\left(\frac{\beta}{2}\right)\cos \left(\frac{\gamma}{2}\right) - 1 \\ - -\sin (2\alpha) + \sin (2\beta) + \sin (2\gamma) &= 4\sin \alpha \sin \beta \sin \gamma \\ - --\sin (2\alpha) + \sin (2\beta) + \sin (2\gamma) &= 4\sin \alpha \cos \beta \cos \gamma \\ - -\cos (2\alpha) + \cos (2\beta) + \cos (2\gamma) &= -4\cos \alpha \cos \beta \cos \gamma - 1 \\ - --\cos (2\alpha) + \cos (2\beta) + \cos (2\gamma) &= -4\cos \alpha \sin \beta \sin \gamma + 1 \\ - -\sin^2\alpha + \sin^2\beta + \sin^2\gamma &= 2 \cos \alpha \cos \beta \cos \gamma + 2 \\ - --\sin^2\alpha + \sin^2\beta + \sin^2\gamma &= 2 \cos \alpha \sin \beta \sin \gamma \\ - -\cos^2\alpha + \cos^2\beta + \cos^2\gamma &= -2 \cos \alpha \cos \beta \cos \gamma + 1 \\ - --\cos^2\alpha + \cos^2\beta + \cos^2\gamma &= -2 \cos \alpha \sin \beta \sin \gamma + 1 \\ - --\sin^2 (2\alpha) + \sin^2 (2\beta) + \sin^2 (2\gamma) &= -2\cos (2\alpha) \sin (2\beta) \sin (2\gamma) \\ - --\cos^2 (2\alpha) + \cos^2 (2\beta) + \cos^2 (2\gamma) &= 2\cos (2\alpha) \sin (2\beta) 
\sin (2\gamma) + 1 \\ - -1 &= \sin^2 \left(\frac{\alpha}{2}\right) + \sin^2 \left(\frac{\beta}{2}\right) + \sin^2 \left(\frac{\gamma}{2}\right) + 2\sin \left(\frac{\alpha}{2}\right) \sin \left(\frac{\beta}{2}\right) \sin \left(\frac{\gamma}{2}\right) - -\end{align} - -The versine, coversine, haversine, and exsecant were used in navigation. For example, the haversine formula was used to calculate the distance between two points on a sphere. They are rarely used today. - -==Miscellaneous== - -The Dirichlet kernel $D_n(x)$ is the function occurring on both sides of the next identity: -$$ -1 + 2\cos x + 2\cos(2x) + 2\cos(3x) + \cdots + 2\cos(nx) = \frac{\sin\left(\left(n + \frac{1}{2}\right)x\right) }{\sin\left(\frac{1}{2}x\right)}. -$$ - -The convolution of any integrable function of period $2 \pi$ with the Dirichlet kernel coincides with the function's $n$th-degree Fourier approximation. The same holds for any measure or generalized function. - -If we set -$$ -t = \tan\frac x 2, -$$ - -then -$$ -\sin x = \frac{2t}{1 + t^2};\qquad \cos x = \frac{1 - t^2}{1 + t^2};\qquad e^{i x} = \frac{1 + i t}{1 - i t} -$$ - -where $e^{i x} = \cos x + i \sin x,$ sometimes abbreviated to cis x. - -When this substitution of $t$ for $\tan\frac{x}{2}$ is used in calculus, it follows that $\sin x$ is replaced by $\frac{2t}{1 + t^2}$, $\cos x$ is replaced by $\frac{1 - t^2}{1 + t^2}$ and the differential $dx$ is replaced by $\frac{2 dt}{1 + t^2}$. Thereby one converts rational functions of $\sin x$ and $\cos x$ to rational functions of $t$ in order to find their antiderivatives. - -There is also the infinite product -$$ -\cos\frac{\theta}{2} \cdot \cos \frac{\theta}{4} \cdot \cos \frac{\theta}{8} \cdots = \prod_{n=1}^\infty \cos \frac{\theta}{2^n} = \frac{\sin \theta}{\theta} = \operatorname{sinc} \theta. -$$ diff --git a/wiki/wikipedia/3301.txt b/wiki/wikipedia/3301.txt deleted file mode 100644 index c37b16a263aa30d5e1c62fe1c89cf90302334054..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3301.txt +++ /dev/null @@ -1,9 +0,0 @@ -In combinatorics, the Dinitz theorem (formerly known as the Dinitz conjecture) is a statement about the extension of arrays to partial Latin squares, proposed in 1979 by Jeff Dinitz, and proved in 1994 by Fred Galvin. - -The Dinitz theorem is that given an n × n square array, a set of m symbols with m ≥ n, and for each cell of the array an n-element set drawn from the pool of m symbols, it is possible to choose a way of labeling each cell with one of those elements in such a way that no row or column repeats a symbol. - -It can also be formulated as a result in graph theory, that the list chromatic index of the complete bipartite graph $K_{n, n}$ equals $n$. That is, if each edge of the complete bipartite graph is assigned a set of $n$ colors, it is possible to choose one of the assigned colors for each edge such that no two edges incident to the same vertex have the same color. - -Galvin's proof generalizes to the statement that, for every bipartite multigraph, the list chromatic index equals its chromatic index. The more general edge list coloring conjecture states that the same holds not only for bipartite graphs, but also for any loopless multigraph. An even more general conjecture states that the list chromatic number of claw-free graphs always equals their chromatic number. The Dinitz theorem is also related to Rota's basis conjecture.
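For small arrays, the Dinitz statement can be verified exhaustively. Below is a brute-force sketch (our illustration, not from the source; Python, with the function name `has_list_edge_coloring` hypothetical) that draws random 2-element lists for the four cells of a 2 × 2 array and confirms that a valid labeling always exists:

```python
import itertools, random

def has_list_edge_coloring(n, lists):
    """Brute force: can each cell (r, c) of an n x n array get a symbol from
    lists[r][c] so that no row or column repeats a symbol?"""
    cells = [(r, c) for r in range(n) for c in range(n)]
    for choice in itertools.product(*(lists[r][c] for r, c in cells)):
        grid = {cell: sym for cell, sym in zip(cells, choice)}
        rows_ok = all(len({grid[(r, c)] for c in range(n)}) == n for r in range(n))
        cols_ok = all(len({grid[(r, c)] for r in range(n)}) == n for c in range(n))
        if rows_ok and cols_ok:
            return True
    return False

random.seed(0)
n, pool = 2, range(5)    # n-element lists drawn from a pool of m = 5 >= n symbols
for _ in range(200):
    lists = [[random.sample(pool, n) for _ in range(n)] for _ in range(n)]
    assert has_list_edge_coloring(n, lists)   # Dinitz: always satisfiable
print("all 200 random instances admit a proper labeling")
```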
diff --git a/wiki/wikipedia/3302.txt b/wiki/wikipedia/3302.txt deleted file mode 100644 index 12dc39edf32fd63298566977b715e96887c778b2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3302.txt +++ /dev/null @@ -1,5 +0,0 @@ -A watermark is an object of a predefined format which provides a point of reference for two systems or datasets attempting to establish delta (incremental) synchronization: any object in the queried data source which was created, modified, or deleted after the watermark value was established is qualified as "above watermark" and should be returned to a delta-querying partner. - -The term watermark is often used in directory synchronization software development projects. For example, products such as Microsoft Exchange Server, Active Directory, Active Directory Application Mode (ADAM), and Microsoft Identity Integration Server 2003/Microsoft Identity Lifecycle Manager Server 2007, as well as Cisco Unified Communications Manager or Sun Microsystems iPlanet and other LDAP-based directory products, use DirSync and consequently consume a "watermark" object to provide efficient synchronization between directories. The watermark object is sometimes referred to as a "cookie". - -The DirSync control implementation can differ from product to product; however, the concept of a watermark allows any product to read changes in the directory incrementally. diff --git a/wiki/wikipedia/3303.txt b/wiki/wikipedia/3303.txt deleted file mode 100644 index 0a3112d7780518a3738ad2d04267cbc42fe4e8ba..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3303.txt +++ /dev/null @@ -1,7 +0,0 @@ -The Nagata–Smirnov metrization theorem in topology characterizes when a topological space is metrizable. The theorem states that a topological space $X$ is metrizable if and only if it is regular, Hausdorff and has a countably locally finite (that is, σ-locally finite) basis. - -A topological space $X$ is called a regular space if every non-empty closed subset $C$ of $X$ and every point p not contained in $C$ admit non-overlapping open neighborhoods. - -A collection in a space $X$ is countably locally finite (or σ-locally finite) if it is the union of a countable family of locally finite collections of subsets of $X.$ - -Unlike Urysohn's metrization theorem, which provides only a sufficient condition for metrizability, this theorem provides both a necessary and sufficient condition for a topological space to be metrizable. The theorem is named after Junichi Nagata and Yuriĭ Mikhaĭlovich Smirnov, whose (independent) proofs were published in 1950 and 1951, respectively. diff --git a/wiki/wikipedia/3304.txt b/wiki/wikipedia/3304.txt deleted file mode 100644 index 5a7b849ecc2d4316d30071a1323c59d241badaf5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3304.txt +++ /dev/null @@ -1,31 +0,0 @@ -In control theory and in particular when studying the properties of a linear time-invariant system in state space form, the Hautus lemma, named after Malo L. J. Hautus, can prove to be a powerful tool. The result first appeared in the control literature of the late 1960s and can today be found in most textbooks on control theory. - -There exist multiple forms of the lemma.
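The rank conditions in the forms stated next are straightforward to test numerically. As a concrete illustration (a sketch of ours, not from the source; Python with NumPy, function name hypothetical), the following fragment implements the eigenvalue version of the controllability test given below:

```python
import numpy as np

def hautus_controllable(A, B, tol=1e-9):
    """PBH-style test: (A, B) is controllable iff rank [lambda*I - A, B] = n
    for every eigenvalue lambda of A (condition 3 in the statement below)."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        M = np.hstack([lam * np.eye(n) - A, B])
        if np.linalg.matrix_rank(M, tol=tol) < n:
            return False
    return True

A = np.array([[0.0, 1.0], [0.0, 0.0]])                    # double integrator
print(hautus_controllable(A, np.array([[0.0], [1.0]])))   # True: controllable
print(hautus_controllable(A, np.array([[1.0], [0.0]])))   # False: input cannot reach the second state
```

The stabilizability, observability, and detectability variants differ only in which eigenvalues are checked and in stacking C below λI − A instead of appending B.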
- -The Hautus lemma for controllability says that given a square matrix $\mathbf{A}\in M_n(\Re)$ and a $\mathbf{B}\in M_{n\times m}(\Re)$, the following are equivalent: - -# The pair $(\mathbf{A},\mathbf{B})$ is controllable - -# For all $\lambda\in\mathbb{C}$ it holds that $\operatorname{rank}[\lambda \mathbf{I}-\mathbf{A},\mathbf{B}]=n$ - -# For all $\lambda\in\mathbb{C}$ that are eigenvalues of $\mathbf{A}$ it holds that $\operatorname{rank}[\lambda \mathbf{I}-\mathbf{A},\mathbf{B}]=n$ - -The Hautus lemma for stabilizability says that given a square matrix $\mathbf{A}\in M_n(\Re)$ and a $\mathbf{B}\in M_{n\times m}(\Re)$, the following are equivalent: - -# The pair $(\mathbf{A},\mathbf{B})$ is stabilizable - -# For all $\lambda\in\mathbb{C}$ that are eigenvalues of $\mathbf{A}$ and for which $\Re(\lambda)\ge 0$ it holds that $\operatorname{rank}[\lambda \mathbf{I}-\mathbf{A},\mathbf{B}]=n$ - -The Hautus lemma for observability says that given a square matrix $\mathbf{A}\in M_n(\Re)$ and a $\mathbf{C}\in M_{m\times n}(\Re)$, the following are equivalent: - -# The pair $(\mathbf{A},\mathbf{C})$ is observable - -# For all $\lambda\in\mathbb{C}$ it holds that $\operatorname{rank}[\lambda \mathbf{I}-\mathbf{A};\mathbf{C}]=n$ - -# For all $\lambda\in\mathbb{C}$ that are eigenvalues of $\mathbf{A}$ it holds that $\operatorname{rank}[\lambda \mathbf{I}-\mathbf{A};\mathbf{C}]=n$ - -The Hautus lemma for detectability says that given a square matrix $\mathbf{A}\in M_n(\Re)$ and a $\mathbf{C}\in M_{m\times n}(\Re)$, the following are equivalent: - -# The pair $(\mathbf{A},\mathbf{C})$ is detectable - -# For all $\lambda\in\mathbb{C}$ that are eigenvalues of $\mathbf{A}$ and for which $\Re(\lambda)\ge 0$ it holds that $\operatorname{rank}[\lambda \mathbf{I}-\mathbf{A};\mathbf{C}]=n$ diff --git a/wiki/wikipedia/3305.txt b/wiki/wikipedia/3305.txt deleted file mode 100644 index 1ce7ac8a0183b3eaaadc04b852dc2129e09aea78..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3305.txt +++ /dev/null @@ -1,5 +0,0 @@ -Thomas Baxter (fl. 1732–1740) was a schoolmaster and mathematician who published an erroneous method of squaring the circle. He was derided as a "pseudo-mathematician" by F. Y. Edgeworth, writing for the Dictionary of National Biography. - -When he was master of a private school at Crathorne, North Yorkshire, Baxter composed a book entitled The Circle squared (London: 1732), published in octavo. The work was the reason Edgeworth gave Baxter the epithet "pseudo-mathematician". - -Baxter published another work, Matho, or the Principles of Astronomy and Natural Philosophy accommodated to the Use of Younger Persons (London: 1740). Unlike Baxter's other work, this volume enjoyed considerable popularity in its time. diff --git a/wiki/wikipedia/3306.txt b/wiki/wikipedia/3306.txt deleted file mode 100644 index ec849882994163a8f9f8534c646bb1b91763494f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3306.txt +++ /dev/null @@ -1,21 +0,0 @@ -The Hewitt–Savage zero–one law is a theorem in probability theory, similar to Kolmogorov's zero–one law and the Borel–Cantelli lemma, that specifies that a certain type of event will either almost surely happen or almost surely not happen. It is sometimes known as the Savage–Hewitt law for symmetric events. It is named after Edwin Hewitt and Leonard Jimmie Savage. - -Let $\left\{ X_n \right\}_{n = 1}^\infty$ be a sequence of independent and identically distributed random variables taking values in a set $\mathbb{X}$.
The Hewitt–Savage zero–one law says that any event whose occurrence or non-occurrence is determined by the values of these random variables, and whose occurrence or non-occurrence is unchanged by finite permutations of the indices, has probability either 0 or 1 (a "finite" permutation is one that leaves all but finitely many of the indices fixed). - -Somewhat more abstractly, define the exchangeable sigma algebra or sigma algebra of symmetric events $\mathcal{E}$ to be the set of events (depending on the sequence of variables $\left\{ X_n \right\}_{n = 1}^\infty$) which are invariant under finite permutations of the indices in the sequence $\left\{ X_n \right\}_{n = 1}^\infty$. Then $A \in \mathcal{E} \implies \mathbb{P} (A) \in \{ 0, 1 \}$. - -Since any finite permutation can be written as a product of transpositions, if we wish to check whether or not an event $A$ is symmetric (lies in $\mathcal{E}$), it is enough to check if its occurrence is unchanged by an arbitrary transposition $(i, j)$, $i, j \in \mathbb{N}$. - -Let the sequence $\left\{ X_n \right\}_{n = 1}^\infty$ take values in $[0, \infty)$. Then the event that the series $\sum_{n = 1}^\infty X_n$ converges (to a finite value) is a symmetric event in $\mathcal{E}$, since its occurrence is unchanged under transpositions (for a finite re-ordering, the convergence or divergence of the series—and, indeed, the numerical value of the sum itself—is independent of the order in which we add up the terms). Thus, the series either converges almost surely or diverges almost surely. If we assume in addition that the common expected value $\mathbb{E}[X_n] > 0$ (which essentially means that $\mathbb{P}(X_n = 0 ) < 1 $ because of the random variables' non-negativity), we may conclude that -$$ -\mathbb{P} \left( \sum_{n = 1}^\infty X_n = + \infty \right) = 1, -$$ - -i.e. the series diverges almost surely. This is a particularly simple application of the Hewitt–Savage zero–one law. In many situations, it can be easy to apply the Hewitt–Savage zero–one law to show that some event has probability 0 or 1, but surprisingly hard to determine which of these two extreme values is the correct one. - -Continuing with the previous example, define -$$ -S_N= \sum_{n = 1}^N X_n, -$$ - -which is the position at step $N$ of a random walk with the i.i.d. increments $X_n$. The event $\{ S_N = 0 \text{ infinitely often} \}$ is invariant under finite permutations. Therefore, the zero–one law is applicable and one infers that the probability of a random walk with real i.i.d. increments visiting the origin infinitely often is either one or zero. Visiting the origin infinitely often is a tail event with respect to the sequence $(S_N)$, but the $S_N$ are not independent, and therefore Kolmogorov's zero–one law is not directly applicable here. diff --git a/wiki/wikipedia/3307.txt b/wiki/wikipedia/3307.txt deleted file mode 100644 index ca2c911264822af8e75e7c52af51427f3e29d6f7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3307.txt +++ /dev/null @@ -1,9 +0,0 @@ -The constant chord theorem is a statement in elementary geometry about a property of certain chords in two intersecting circles. - -The circles $k_1$ and $k_2$ intersect in the points $P$ and $Q$. $Z_1$ is an arbitrary point on $k_1$ different from $P$ and $Q$. The lines $Z_1P$ and $Z_1Q$ intersect the circle $k_2$ in $P_1$ and $Q_1$. The constant chord theorem then states that the length of the chord $P_1Q_1$ in $k_2$ does not depend on the location of $Z_1$ on $k_1$; in other words, the length is constant.
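The statement is easy to check numerically. The short sketch below (ours, not from the source; Python with NumPy) fixes two unit circles meeting at P and Q and prints |P1Q1| for several positions of Z1; the printed lengths agree to machine precision.

```python
import numpy as np

def second_intersection(z, w, center, r):
    """Second point where the line through z and w meets the circle
    centered at `center` with radius r, assuming w lies on that circle."""
    d = w - z
    a = d @ d
    c = (z - center) @ (z - center) - r**2
    # t = 1 is a root of a*t^2 + b*t + c = 0 (the known point w),
    # so the other root equals the product of roots, c/a.
    return z + (c / a) * d

c2, r2 = np.array([1.0, 0.0]), 1.0            # k2; k1 is the unit circle at the origin
P = np.array([0.5,  np.sqrt(3) / 2])          # the two intersection points of k1 and k2
Q = np.array([0.5, -np.sqrt(3) / 2])

for angle in (0.3, 1.8, 2.5, 3.9):            # assorted positions of Z1 on k1
    Z1 = np.array([np.cos(angle), np.sin(angle)])
    P1 = second_intersection(Z1, P, c2, r2)
    Q1 = second_intersection(Z1, Q, c2, r2)
    print(np.linalg.norm(P1 - Q1))            # the same value every time
```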
- -The theorem stays valid when $Z_1$ coincides with $P$ or $Q$, provided one replaces the then-undefined line $Z_1P$ or $Z_1Q$ by the tangent to $k_1$ at $Z_1$. - -A similar theorem exists in three dimensions for the intersection of two spheres. The spheres $k_1$ and $k_2$ intersect in the circle $k_s$. $Z_1$ is an arbitrary point on the surface of the first sphere $k_1$ that is not on the intersection circle $k_s$. The extended cone created by $k_s$ and $Z_1$ intersects the second sphere $k_2$ in a circle. The length of the diameter of this circle is constant; that is, it does not depend on the location of $Z_1$ on $k_1$. - -Nathan Altshiller Court described the constant chord theorem in 1925 in the article "Sur deux cercles sécants" for the Belgian math journal Mathesis. Eight years later he published On Two Intersecting Spheres in the American Mathematical Monthly, which contained the 3-dimensional version. Later it was included in several textbooks, such as Ross Honsberger's Mathematical Morsels and Roger B. Nelsen's Proof Without Words II, where it was given as a problem, or the German geometry textbook Mit harmonischen Verhältnissen zu Kegelschnitten by Halbeisen, Hungerbühler and Läuchli, where it was given as a theorem. diff --git a/wiki/wikipedia/3308.txt b/wiki/wikipedia/3308.txt deleted file mode 100644 index f3593e6da4c787bf34b4585390bd7993e978868d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3308.txt +++ /dev/null @@ -1 +0,0 @@ -Elitzur's theorem is a theorem in quantum and statistical field theory stating that local gauge symmetries cannot be spontaneously broken. The theorem was proposed in 1975 by Shmuel Elitzur, who proved it for Abelian gauge fields on a lattice. It is nonetheless possible to spontaneously break a global symmetry within a theory that has a local gauge symmetry, as in the Higgs mechanism. diff --git a/wiki/wikipedia/3309.txt b/wiki/wikipedia/3309.txt deleted file mode 100644 index b58087cd6f0af571d5cf30cccc8951c1750a8cd3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3309.txt +++ /dev/null @@ -1,31 +0,0 @@ -In logic, Richard's paradox is a semantical antinomy of set theory and natural language first described by the French mathematician Jules Richard in 1905. The paradox is ordinarily used to motivate the importance of distinguishing carefully between mathematics and metamathematics. - -Kurt Gödel specifically cites Richard's antinomy as a semantical analogue to his syntactical incompleteness result in the introductory section of "On Formally Undecidable Propositions in Principia Mathematica and Related Systems I". The paradox was also a motivation for the development of predicative mathematics. - -The original statement of the paradox, due to Richard (1905), is strongly related to Cantor's diagonal argument on the uncountability of the set of real numbers. - -The paradox begins with the observation that certain expressions of natural language define real numbers unambiguously, while other expressions of natural language do not. For example, "The real number the integer part of which is 17 and the nth decimal place of which is 0 if n is even and 1 if n is odd" defines the real number 17.1010101... = 1693/99, whereas the phrase "the capital of England" does not define a real number, nor does the phrase "the smallest positive integer not definable in under sixty letters" (see Berry's paradox).
- -Thus there is an infinite list of English phrases (such that each phrase is of finite length, but the list itself is of infinite length) that define real numbers unambiguously. We first arrange this list of phrases by increasing length, then order all phrases of equal length lexicographically (in dictionary order; e.g., we can use the ASCII code, under which the phrases can only contain codes 32 to 126), so that the ordering is canonical. This yields an infinite list of the corresponding real numbers: $r_1, r_2, \ldots$. Now define a new real number $r$ as follows. The integer part of $r$ is 0, the nth decimal place of $r$ is 1 if the nth decimal place of $r_n$ is not 1, and the nth decimal place of $r$ is 2 if the nth decimal place of $r_n$ is 1. - -The preceding paragraph is an expression in English that unambiguously defines a real number $r$. Thus $r$ must be one of the numbers $r_n$. However, $r$ was constructed so that it cannot equal any of the $r_n$ (thus, $r$ is an undefinable number). This is the paradoxical contradiction. - -Richard's paradox results in an untenable contradiction, which must be analyzed to find an error. - -The proposed definition of the new real number $r$ clearly includes a finite sequence of characters, and hence it seems at first to be a definition of a real number. However, the definition refers to definability-in-English itself. If it were possible to determine which English expressions actually do define a real number, and which do not, then the paradox would go through. Thus the resolution of Richard's paradox is that there is not any way to unambiguously determine exactly which English sentences are definitions of real numbers (see Good 1966). That is, there is not any way to describe in a finite number of words how to tell whether an arbitrary English expression is a definition of a real number. This is not surprising, as the ability to make this determination would also imply the ability to solve the halting problem and perform any other non-algorithmic calculation that can be described in English. - -A similar phenomenon occurs in formalized theories that are able to refer to their own syntax, such as Zermelo–Fraenkel set theory (ZFC). Say that a formula φ(x) defines a real number if there is exactly one real number r such that φ(r) holds. Then it is not possible to define, by ZFC, the set of all (Gödel numbers of) formulas that define real numbers. For, if it were possible to define this set, it would be possible to diagonalize over it to produce a new definition of a real number, following the outline of Richard's paradox above. Note that the set of formulas that define real numbers may exist, as a set F; the limitation of ZFC is that there is not any formula that defines F without reference to other sets. This is related to Tarski's undefinability theorem. - -The example of ZFC illustrates the importance of distinguishing the metamathematics of a formal system from the statements of the formal system itself. The property D(φ) that a formula φ of ZFC defines a unique real number is not itself expressible by ZFC, but must be considered as part of the metatheory used to formalize ZFC. From this viewpoint, Richard's paradox results from treating a construction of the metatheory (the enumeration of all statements in the original system that define real numbers) as if that construction could be performed in the original system. - -A variation of the paradox uses integers instead of real numbers, while preserving the self-referential character of the original.
Consider a language (such as English) in which the arithmetical properties of integers are defined. For example, "the first natural number" defines the property of being the first natural number, one; and "divisible by exactly two natural numbers" defines the property of being a prime number. (It is clear that some properties cannot be defined explicitly, since every deductive system must start with some axioms. But for the purposes of this argument, it is assumed that phrases such as "an integer is the sum of two integers" are already understood.) While the list of all such possible definitions is itself infinite, it is easily seen that each individual definition is composed of a finite number of words, and therefore also a finite number of characters. Since this is true, we can order the definitions, first by length and then lexicographically. - -Now, we may map the definitions to the natural numbers, such that the definition with the smallest number of characters and earliest alphabetical order will correspond to the number 1, the next definition in the series will correspond to 2, and so on. Since each definition is associated with a unique integer, it is possible that occasionally the integer assigned to a definition fits that definition. If, for example, the definition "not divisible by any integer other than 1 and itself" happened to be 43rd, then this would be the case: since 43 is itself not divisible by any integer other than 1 and itself, the number of this definition has the property of the definition itself. However, this may not always be the case. If the definition "divisible by 3" were assigned to the number 58, then the number of the definition would not have the property of the definition itself, since 58 is not divisible by 3. A number of this latter kind is termed Richardian. Thus, if a number is Richardian, then the definition corresponding to that number is a property that the number itself does not have. (More formally, "x is Richardian" is equivalent to "x does not have the property designated by the defining expression with which x is correlated in the serially ordered set of definitions".) Thus in this example, 58 is Richardian, but 43 is not. - -Now, since the property of being Richardian is itself a numerical property of integers, it belongs in the list of all definitions of properties. Therefore, the property of being Richardian is assigned some integer, n. For example, the definition "being Richardian" might be assigned to the number 92. Finally, the paradox becomes: Is 92 Richardian? Suppose 92 is Richardian. This is only possible if 92 does not have the property designated by the defining expression which it is correlated with. In other words, this means 92 is not Richardian, contradicting our assumption. However, if we suppose 92 is not Richardian, then it does have the defining property which it corresponds to. This, by definition, means that it is Richardian, again contrary to assumption. Thus, the statement "92 is Richardian" cannot consistently be designated as either true or false. - -Another opinion concerning Richard's paradox relates to mathematical predicativism. On this view, the real numbers are defined in stages, with each stage only making reference to previous stages and other things that have already been defined.
From a predicative viewpoint it is not valid to quantify over all real numbers in the process of generating a new real number, because this is believed to result in a circularity problem in the definitions. Set theories such as ZFC are not based on this sort of predicative framework, and allow impredicative definitions. - -Richard (1905) presented a solution to the paradox from the viewpoint of predicativism. Richard claimed that the flaw of the paradoxical construction was that the expression for the construction of the real number r does not actually define a real number unambiguously, because the statement refers to the construction of an infinite set of real numbers, of which r itself is a part. Thus, Richard says, the real number r will not be included as any $r_n$, because the definition of r does not meet the criteria for being included in the sequence of definitions used to construct the sequence $r_n$. Contemporary mathematicians agree that the definition of r is invalid, but for a different reason. They believe the definition of r is invalid because there is no well-defined notion of when an English phrase defines a real number, and so there is no unambiguous way to construct the sequence $r_n$. - -Although Richard's solution to the paradox did not gain favor with mathematicians, predicativism is an important part of the study of the foundations of mathematics. Predicativism was first studied in detail by Hermann Weyl in Das Kontinuum, wherein he showed that much of elementary real analysis can be conducted in a predicative manner starting with only the natural numbers. More recently, predicativism has been studied by Solomon Feferman, who has used proof theory to explore the relationship between predicative and impredicative systems. diff --git a/wiki/wikipedia/331.txt b/wiki/wikipedia/331.txt deleted file mode 100644 index 24aabd4d023efe59ac6e6582212686fcce7e401d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/331.txt +++ /dev/null @@ -1,34 +0,0 @@ -In number theory and algebraic geometry, the Tate conjecture is a 1963 conjecture of John Tate that would describe the algebraic cycles on a variety in terms of a more computable invariant, the Galois representation on étale cohomology. The conjecture is a central problem in the theory of algebraic cycles. It can be considered an arithmetic analog of the Hodge conjecture. - -Let V be a smooth projective variety over a field k which is finitely generated over its prime field. Let $k_s$ be a separable closure of k, and let G be the absolute Galois group $\mathrm{Gal}(k_s/k)$ of k. Fix a prime number ℓ which is invertible in k. Consider the ℓ-adic cohomology groups (coefficients in the ℓ-adic integers $\mathbf{Z}_\ell$, scalars then extended to the ℓ-adic numbers $\mathbf{Q}_\ell$) of the base extension of V to $k_s$; these groups are representations of G. For any i ≥ 0, a codimension-i subvariety of V (understood to be defined over k) determines an element of the cohomology group -$$ - H^{2i}(V_{k_s},\mathbf{Q}_{\ell}(i)) = W -$$ - -which is fixed by G. Here $\mathbf{Q}_\ell(i)$ denotes the ith Tate twist, which means that this representation of the Galois group G is tensored with the ith power of the cyclotomic character. - -The Tate conjecture states that the subspace $W^G$ of W fixed by the Galois group G is spanned, as a $\mathbf{Q}_\ell$-vector space, by the classes of codimension-i subvarieties of V. An algebraic cycle means a finite linear combination of subvarieties; so an equivalent statement is that every element of $W^G$ is the class of an algebraic cycle on V with $\mathbf{Q}_\ell$ coefficients.
- -The Tate conjecture for divisors (algebraic cycles of codimension 1) is a major open problem. For example, let f : X → C be a morphism from a smooth projective surface onto a smooth projective curve over a finite field. Suppose that the generic fiber F of f, which is a curve over the function field k(C), is smooth over k(C). Then the Tate conjecture for divisors on X is equivalent to the Birch and Swinnerton-Dyer conjecture for the Jacobian variety of F. By contrast, the Hodge conjecture for divisors on any smooth complex projective variety is known (the Lefschetz (1,1)-theorem). - -Probably the most important known case is that the Tate conjecture is true for divisors on abelian varieties. This is a theorem of Tate for abelian varieties over finite fields, and of Faltings for abelian varieties over number fields, part of Faltings's solution of the Mordell conjecture. Zarhin extended these results to any finitely generated base field. The Tate conjecture for divisors on abelian varieties implies the Tate conjecture for divisors on any product of curves $C_1 \times \cdots \times C_n$. - -The (known) Tate conjecture for divisors on abelian varieties is equivalent to a powerful statement about homomorphisms between abelian varieties. Namely, for any abelian varieties A and B over a finitely generated field k, the natural map -$$ - \text{Hom}(A,B)\otimes_{\mathbf{Z}}\mathbf{Q}_{\ell} \to \text{Hom}_G \left (H_1 \left (A_{k_s},\mathbf{Q}_{\ell} \right), H_1 \left (B_{k_s},\mathbf{Q}_{\ell} \right) \right ) -$$ - -is an isomorphism. In particular, an abelian variety A is determined up to isogeny by the Galois representation on its Tate module $H_1(A_{k_s}, \mathbf{Z}_\ell)$. - -The Tate conjecture also holds for K3 surfaces over finitely generated fields of characteristic not 2. (On a surface, the nontrivial part of the conjecture is about divisors.) In characteristic zero, the Tate conjecture for K3 surfaces was proved by André and Tankeev. For K3 surfaces over finite fields of characteristic not 2, the Tate conjecture was proved by Nygaard, Ogus, Charles, Madapusi Pera, and Maulik. - -Totaro surveys known cases of the Tate conjecture. - -Let X be a smooth projective variety over a finitely generated field k. The semisimplicity conjecture predicts that the representation of the Galois group G = $\mathrm{Gal}(k_s/k)$ on the ℓ-adic cohomology of X is semisimple (that is, a direct sum of irreducible representations). For k of characteristic 0, Moonen showed that the Tate conjecture (as stated above) implies the semisimplicity of -$$ -H^i \left (X \times_k \overline{k}, \mathbf{Q}_\ell(n) \right ). -$$ - -For k finite of order q, Tate showed that the Tate conjecture plus the semisimplicity conjecture would imply the strong Tate conjecture, namely that the order of the pole of the zeta function Z(X, t) at $t = q^{-j}$ is equal to the rank of the group of algebraic cycles of codimension j modulo numerical equivalence. - -Like the Hodge conjecture, the Tate conjecture would imply most of Grothendieck's standard conjectures on algebraic cycles. Namely, it would imply the Lefschetz standard conjecture (that the inverse of the Lefschetz isomorphism is defined by an algebraic correspondence); that the Künneth components of the diagonal are algebraic; and that numerical equivalence and homological equivalence of algebraic cycles are the same.
diff --git a/wiki/wikipedia/3310.txt b/wiki/wikipedia/3310.txt deleted file mode 100644 index 49430295d200cced124d878001928f7f1c30dc4b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3310.txt +++ /dev/null @@ -1,78 +0,0 @@ -In computer science, $\boldsymbol{X}+\boldsymbol{Y}$ sorting is the problem of sorting pairs of numbers by their sums. Applications of the problem include transit fare minimisation, VLSI design, and sparse polynomial multiplication. As with comparison sorting and integer sorting more generally, algorithms for this problem can be based only on comparisons of these sums, or on other operations that work only when the inputs are small integers. - -It is unknown whether this problem has a comparison-based solution whose running time is asymptotically faster than sorting an unstructured list of equally many items. Therefore, research on the problem has focused on two approaches to settle the question of whether such an improvement is possible: the development of algorithms that improve on unstructured sorting in their number of comparisons rather than in their total running time, and lower bounds for the number of comparisons based on counting cells in subdivisions of high-dimensional spaces. Both approaches are historically tied together, in that the first algorithms that used few comparisons were based on the weakness of the cell-counting lower bounds. - -The input to the $X+Y$ sorting problem consists of two finite collections of numbers $X$ and $Y$, both of the same length. The problem's output is the collection of all pairs of a number from $X$ and a number from $Y$, arranged into sorted order by the sum of each pair. As a small example, for the inputs $X=\{1,2,9\}$ and $Y=\{0,4,9\}$, the output should be the list of pairs - -(1,0), (2,0), (1,4), (2,4), (9,0), (1,9), (2,9), (9,4), (9,9) - -of one element from $X$ and one element from $Y$, listed in sorted order by their sums of pairs - -1, 2, 5, 6, 9, 10, 11, 13, 18. - -One way to solve the problem would be to construct the pairs to be sorted (the Cartesian product of the two collections) and use these pairs as input to a standard comparison sorting algorithm such as merge sort or heapsort. When the inputs have length $n$, they form $n^2$ pairs, and the time to sort the pairs in this way is $O(n^2\log n)$. In terms of its big O notation, this method is the fastest known algorithm for $X+Y$ sorting. Whether a faster algorithm exists is an open problem, posed by Elwyn Berlekamp prior to 1975. - -A variant of the problem sorts the sumset, the set of sums of pairs, with duplicate sums condensed to a single value. For this variant, the size of the sumset may be significantly smaller than $n^2$, and output-sensitive algorithms for constructing it have been investigated. - -Steven Skiena recounts a practical application in transit fare minimisation, an instance of the shortest path problem: find the cheapest two-hop airplane ticket between two given cities, from an input that describes both the cost of each hop and which pairs of hops may be combined into a single ticket. Skiena's solution consists of sorting pairs of hops by their total cost as an instance of the $X+Y$ sorting problem, and then testing the resulting pairs in this sorted order until finding one that is allowed. To generate the sorted pairs in this order, Skiena uses a priority queue of pairs, initially containing only a single pair, the one consisting of the two cheapest hops.
Then, when a pair $(x,y)$ is removed from the queue and found to be disallowed, two more pairs are added, with one of these two pairs combining $x$ with the next hop after $y$ in a sorted list of the hops to the destination, and the other pair combining $y$ with the next hop after $x$ in a sorted list of hops from the start. In this way, each successive pair can be found in logarithmic time, and only the pairs up to the first allowable one need to be sorted. -$$ -X+Y -$$ sorting is the most expensive subroutine in an algorithm for a problem in VLSI design, in which one must place two subunits of a VLSI circuit side-by-side along a communications channel in order to minimize the width of the channel needed to route pairs of wires from one subunit to the other. As one subunit is continuously shifted relative to the other, the channel width only changes at discrete positions where the ends of two wires line up with each other, and finding the sorted ordering of these positions in order to compute the sequence of changes to the width can be performed by $X+Y$ sorting. If this sorting problem could be sped up, it would also speed up this VLSI design task. - -Another application involves polynomial multiplication for polynomials of a single variable that may have many fewer terms than their degrees. The product of two polynomials can be expressed as a sum of products of pairs of terms, one from each polynomial, and placing these term-by-term products into degree order amounts to sorting them by the sum of degrees. For example, the instance of $X+Y$ sorting with $n=3$ given as an example above corresponds to the multiplication of two three-term polynomials to produce a nine-term polynomial: - -\begin{align} - -(&x+x^2+x^9)(1+x^4+x^9)\\ - -&=x+x^2+x^5+x^6+x^9+x^{10}+x^{11}+x^{13}+x^{18}.\\ - -\end{align} - -The degrees are always integers, so integer-based algorithms for $X+Y$ sorting may be applied. However, for polynomials whose number of terms is comparable to their degree, FFT-based polynomial multiplication algorithms may be significantly more efficient than term-by-term multiplication. - -A well-known lower bound for unstructured sorting, in the decision tree model, is based on the factorial number of sorted orders that an unstructured list may have. Because each comparison can at best reduce the number of possible orderings by a factor of two, sorting requires a number of comparisons at least equal to the binary logarithm of the factorial, which is $n\log_2 n-O(n)$. Early work on $X+Y$ sorting followed a similar approach by asking how many different sorted orderings are possible for this problem, and proving that this number is at most $O(n^{8n})$. However, because its binary logarithm is at most $8n\log_2 n+O(1)$, much smaller than the known time bounds for $X+Y$ sorting, this method can only lead to weak lower bounds on the number of comparisons. - -The proof of this bound relates $X+Y$ sorting to the complexity of an arrangement of hyperplanes in high-dimensional geometry. The two input collections for the $X+Y$ sorting problem comprise $2n$ numbers, which can alternatively be interpreted as the Cartesian coordinates of a point in the $2n$-dimensional space $\mathbb{R}^{2n}$. This space can be subdivided into cells, so that within a single cell all points correspond to inputs that produce the same sorted order. 
For this subdivision, each boundary between two cells lies within a hyperplane defined by an equality of pairs $x_i+y_j=x_k+y_\ell$, where $(x_i,y_j)$ and $(x_k,y_\ell)$ are two pairs whose ordering changes from one adjacent cell to the other. These hyperplanes are either generated by two disjoint pairs, or they have the simplified forms $x_i=x_k$ or $y_j=y_\ell$, so the number of distinct hyperplanes that can be determined in this way is -$$ -k=2\binom{n}{2}^2+2\binom{n}{2}. -$$ - -The number of cells that this number of hyperplanes can divide a space of dimension $2n$ into is -$$ -\binom{k}{2n}+\binom{k}{2n-1}+\cdots+\binom{k}{0}=O(n^{8n}). -$$ - -Therefore, the set $X+Y$ has $O(n^{8n})$ different possible sorted orderings. - -A similar style of analysis has been more successful in ruling out fast solutions to certain generalizations of $X+Y$ sorting, by showing that they have too many orderings to sort quickly. In particular, Harper et al. suggest separately sorting $X$ and $Y$, and then constructing a two-dimensional matrix of the values of $X+Y$ that is sorted both by rows and by columns before using this partially-sorted data to complete the sort of $X+Y$. This idea of using a row-sorted and column-sorted matrix forms the basis for the method used by Skiena in the transportation application, and it can reduce the number of comparisons by a constant factor relative to naive comparison sorting. However, for matrices whose rows and columns are sorted in this way, the number of possible sorted orderings of the whole matrix is much larger than $O(n^{8n})$, so large that any comparison sorting algorithm that can work for arbitrary $n\times n$ matrices that are sorted by rows and columns still requires $\Omega(n^2\log n)$ comparisons. Therefore, if the $X+Y$ sorting problem is to be solved quickly, the solution must use additional information about the set $X+Y$ beyond this matrix ordering. - -For the classical comparison sorting problem, the time to sort and the number of comparisons needed to sort are within constant factors of each other. - -But for $X + Y$ sorting, the number of comparisons is smaller than the best time bound known: Michael Fredman showed in 1976 that $X + Y$ sorting can be done using only $O(n^2)$ comparisons. More generally, he showed that any set of $N$ elements, whose sorted ordering has already been restricted to a family $\Gamma$ of orderings, can be sorted using $\log_2|\Gamma|+O(N)$ comparisons, by a form of binary insertion sort. For the $X + Y$ sorting problem, $N=n^2$, and $|\Gamma|=O(n^{8n})$, so $\log_2|\Gamma|=O(n\log n)$ and Fredman's bound implies that only $O(n^2)$ comparisons are needed. However, in Fredman's method, the time needed to decide which comparisons to perform may be significantly higher than the bound on the number of comparisons. - -The first explicit algorithm that achieves both $O(n^2)$ comparisons and $O(n^2\log n)$ total complexity was published sixteen years after Fredman by Lambert. The algorithm performs the following steps: - -#Recursively sort the two sets $X+X$ and $Y+Y$. - -#Use the equivalence $x_i-x_j\le x_k-x_\ell \Leftrightarrow x_i+x_\ell\le x_j+x_k$ to infer the sorted orderings of $X-X$ and $Y-Y$ without additional comparisons. - -#Merge the two sets $X-X$ and $Y-Y$ into a single sorted order, using a number of comparisons linear in their total size. - -#Use the merged order and the equivalence $x_i+y_j\le x_k+y_\ell\Leftrightarrow x_i-x_k\le y_\ell-y_j$ to infer the sorted order of $X+Y$ without additional comparisons.
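Before the recursion behind these steps is described further, the two baseline methods discussed earlier are easy to state in code. The sketch below (ours, not from the sources cited; Python) implements the naive $O(n^2\log n)$ sort and the lazy priority-queue generation used in the transit-fare application; it is not Lambert's algorithm.

```python
import heapq

def xy_sorted_naive(X, Y):
    """Baseline: build all n^2 sums and sort them, O(n^2 log n)."""
    return sorted(x + y for x in X for y in Y)

def xy_sorted_lazy(X, Y):
    """Generate the sums of X+Y in nondecreasing order with a heap,
    in the style of the transit-fare application described above."""
    X, Y = sorted(X), sorted(Y)
    out, heap, seen = [], [(X[0] + Y[0], 0, 0)], {(0, 0)}
    while heap:
        s, i, j = heapq.heappop(heap)
        out.append(s)
        for i2, j2 in ((i + 1, j), (i, j + 1)):   # successors in the sorted grid
            if i2 < len(X) and j2 < len(Y) and (i2, j2) not in seen:
                seen.add((i2, j2))
                heapq.heappush(heap, (X[i2] + Y[j2], i2, j2))
    return out

X, Y = [1, 2, 9], [0, 4, 9]   # the example instance from the problem statement
assert xy_sorted_naive(X, Y) == xy_sorted_lazy(X, Y) == [1, 2, 5, 6, 9, 10, 11, 13, 18]
```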
- -The part of the algorithm that recursively sorts $X+X$ (or equivalently $Y+Y$) does so by the following steps: - -#Split $X$ into two equal sublists $A$ and $B$. - -#Recursively sort $A+A$ and $B+B$. - -#Infer the ordering on $A+B$ using only the comparisons from a single merge step as above. - -#Merge the sorted results $A+A$, $B+B$, and $A+B$ together. - -The number of comparisons $C(n)$ needed to perform this recursive algorithm on an input of $n$ items can be analyzed using the recurrence relation -$$ -C(n)\le 2C(n/2)+O(n^2), -$$ - -where the $2C(n/2)$ term of the recurrence counts the number of comparisons in the recursive calls to the algorithm to sort $A+A$ and $B+B$, and the $O(n^2)$ term counts the number of comparisons used to merge the results. The master theorem for recurrence relations of this form shows that $C(n)=O(n^2).$ The total time complexity is slower, $O(n^2\log n)$, because of the steps of the algorithm that use already-made comparisons to infer orderings of other sets. These steps can be performed in time $O(n^2\log n)$ by using a standard comparison-sorting algorithm with its comparison steps replaced by the stated inferences. - -If only comparisons between elements of $X+Y$ are allowed, then there is also a matching lower bound of $\Omega(n^2)$ on the number of comparisons, but with more general comparisons involving linear combinations of constant numbers of elements, only $O(n\log^2 n)$ comparisons are needed. - -Just as integer sorting can be faster than comparison sorting for small-enough integers, the same is true for $X+Y$ sorting. In particular, with integer inputs in the range from $0$ to some upper limit $M$, the problem can be solved in $O(n+M\log M)$ operations by means of the fast Fourier transform. - -Several other problems in computational geometry have equivalent or harder complexity to $X+Y$ sorting, including constructing Minkowski sums of staircase polygons, finding the crossing points of an arrangement of lines in sorted order by their $x$-coordinates, listing pairs of points in sorted order by their distances, and testing whether one rectilinear polygon can be translated to fit within another. - -The problem of testing whether two of the pairs in the $X+Y$ sorting problem have equal sums can be solved by sorting the pairs and then testing consecutive pairs for equality. In turn, it could be used to solve the 3SUM problem, implying that it is unlikely to have a strongly subquadratic algorithm. diff --git a/wiki/wikipedia/3311.txt b/wiki/wikipedia/3311.txt deleted file mode 100644 index 0d8bbfaf453e27548dae114ca97043e134cc6d27..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3311.txt +++ /dev/null @@ -1,35 +0,0 @@ -The Pacman conjecture holds that durable-goods monopolists have complete market power and so can exercise perfect price discrimination, thus extracting the total surplus. This is in contrast to the Coase conjecture, which holds that a durable-goods monopolist has no market power, and so price is equal to the competitive market price. - -In a December 1989 journal article, Mark Bagnoli, Stephen W. Salant, and Joseph E. Swierzbinski theorized that if each consumer could be relied upon to buy a good as soon as its price dipped below a certain point (with different consumers valuing goods differently, but all pursuing the same "get-it-while-you-can" strategy), then a monopolist could set prices very high initially and then "eat his way down the demand curve", extracting maximum profit in what Bagnoli et al.
called "the Pacman strategy" after the voracious video-game character. Specifically, Bagnoli et al. state that "Pacman is a sequential best reply to get-it-while-you-can", a result they call "the Pacman Theorem". Their proof, however, relies strongly on the assumption that there is an infinite time horizon. - -A durable-goods monopolist sells goods which are in finite supply and which last forever, (not depreciating over time). According to the Coase Conjecture, such a monopolist has no market power as it is in competition with itself; the more of the good it sells in period one the less it will be able to sell in future periods. - -Assuming marginal costs are zero. In the first period the monopolist will produce quantity (Q1) where marginal cost = marginal revenue and so extract the monopoly surplus. However, in the second period the monopolist will face a new residual demand curve (Q − Q1) and so will produce quantity where the new marginal revenue is equal to the marginal cost, which is at the competitive market price. - -There is then an incentive for consumers to delay purchase of the good as they realize that its price will decrease over time. If buyers are patient enough they will not buy until the price falls and so durable goods monopolists face a horizontal demand curve at the equilibrium price and so will have no market power. - -The Pacman Conjecture on the other hand holds that consumers realize the price of the good will only fall when they purchase the good; therefore, a patient monopolist can exercise full market power and perfectly price-discriminate. - -The monopolist sets the price of the durable good at time t equal to the highest reservation price of a consumer who hasn't purchased prior to that point t. The consumer then buys the good as soon as it is equal to their reservation price, as they realize price will not fall further unless they purchase it. (Bagnoli et al. refer to buyers exhibiting this behavior as "type ℓ buyers", or "buyers following the get-it-while-you-can strategy".) - -The Pacman Conjecture requires that the monopolist to have perfect information about consumer reservation prices and an infinite time horizon. The buyers must not only follow the get-it-while-you-can strategy, but also must faithfully believe that the monopolist is following a perfect Pacman strategy (as otherwise they would be tempted to match patience with the monopolist in hopes of getting a better deal later). The monopolist will exercise full market power over the buyers in that pool, but will not be able to extract similar surpluses from buyers who come in from outside (for example, the children of the original buyers) without deviating from the pure Pacman strategy. - -*finite set of buyers, - -*infinite time horizon, - -*monopolists have maximum market power, - -*monopolists perfectly price-discriminate. - -*Patient buyers (discount factor greater than or equal to 0.5), - -*non-atomic buyers, - -*infinite time horizon, - -*reservation prices are continuous, - -*monopolists have no market power, - -*price is equal to the perfectly competitive price equilibrium. diff --git a/wiki/wikipedia/3312.txt b/wiki/wikipedia/3312.txt deleted file mode 100644 index 907ed82725093940e21aa76525ddf74e3fda3811..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3312.txt +++ /dev/null @@ -1,7 +0,0 @@ -is a puzzle video game developed by Sonic Team and published by Sega. It is a direct sequel to Puyo Puyo Tetris. 
The game was released for PlayStation 4, Xbox One, Nintendo Switch, PlayStation 5 and Xbox Series X/S in December 2020, with a Microsoft Windows version released in March 2021. The game was released to positive reviews. - -In addition to a new story and characters, the game introduces new modes, such as Skill Battles, which allow character-based skills and items to quickly change the game. It was also stated to have an improved online mode over the first game, allowing for more competition in leagues and free play, as well as new modes. In Adventure mode, the player will traverse an overworld and engage in Skill Battles with other characters in the story, in a structure that acts more like a JRPG. - -On August 26, 2020, during a Nintendo Direct Mini presentation, the game was showcased and slated for a December 2020 release on Nintendo Switch. Later that day, it was discovered that the game would be released on December 8, 2020 alongside Xbox One, Xbox Series X, and PlayStation 4 versions, with the Japanese version releasing two days later. A PlayStation 5 version was also announced and would release during 'Holiday 2020', later revealed to be the same day as the other versions. A Microsoft Windows version was released on Steam on March 23, 2021. An update on January 14, 2021 added Sonic the Hedgehog from the series of the same name as a playable guest character; this was intended to be the last work featuring Sonic to be released before voice actor Roger Craig Smith announced his retirement from the role two weeks later. Smith would confirm in May 2021 that he would be continuing in the role after all. In a second update, released in 2021, four new characters were added from Puyo Puyo 2, the Puyo Pop Fever games, and Puyo Puyo Chronicle. Accessibility options for color-blind players were added, alongside three new songs and multiplayer support for certain modes previously only playable in single-player. New challenge rules were also added. In March 2021, the final update was released, adding four characters from the Puyo Puyo series, the ability for PlayStation 4 and PlayStation 5 players to participate in online play together, four additional songs, a spectator mode allowing players to watch online matches, and a harder "Super Spicy" difficulty setting. Twenty new avatars were also added. - -The game received “generally favorable reviews” according to review aggregator Metacritic. diff --git a/wiki/wikipedia/3313.txt b/wiki/wikipedia/3313.txt deleted file mode 100644 index 8126a9feb9ebee3ce4b56d6a1919e26fcf957279..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3313.txt +++ /dev/null @@ -1,59 +0,0 @@ -The expected linear time MST algorithm is a randomized algorithm for computing the minimum spanning forest of a weighted graph with no isolated vertices. It was developed by David Karger, Philip Klein, and Robert Tarjan. The algorithm relies on techniques from Borůvka's algorithm along with an algorithm for verifying a minimum spanning tree in linear time. It combines the design paradigms of divide and conquer algorithms, greedy algorithms, and randomized algorithms to achieve expected linear performance. - -Deterministic algorithms that find the minimum spanning tree include Prim's algorithm, Kruskal's algorithm, the reverse-delete algorithm, and Borůvka's algorithm. - -The key insight to the algorithm is a random sampling step which partitions a graph into two subgraphs by randomly selecting edges to include in each subgraph.
The algorithm recursively finds the minimum spanning forest of the first subproblem and uses the solution in conjunction with a linear time verification algorithm to discard edges in the graph that cannot be in the minimum spanning tree. A procedure taken from Borůvka's algorithm is also used to reduce the size of the graph at each recursion. - -Each iteration of the algorithm relies on an adaptation of Borůvka's algorithm referred to as a Borůvka step: - -Input: A graph G with no isolated vertices - -1 For each vertex v, select the lightest edge incident on v - -2 Create a contracted graph G′ by replacing each component of G connected by the edges selected in step 1 with a single vertex - -3 Remove all isolated vertices, self-loops, and non-minimal repetitive edges from G′ - -Output: The edges selected in step 1 and the contracted graph G′ - -A Borůvka step is equivalent to the inner loop of Borůvka's algorithm, which runs in O(m) time where m is the number of edges in G. Furthermore, since each edge can be selected at most twice (once by each incident vertex) the maximum number of connected components after step 1 is equal to half the number of vertices. Thus, a Borůvka step reduces the number of vertices in the graph by at least a factor of two and deletes at least n/2 edges where n is the number of vertices in G. - -Example execution of a Borůvka step - -In each iteration the algorithm removes edges with particular properties that exclude them from the minimum spanning tree. These are called F-heavy edges and are defined as follows. Let F be a forest on the graph H. An F-heavy edge is an edge e connecting vertices u,v whose weight is strictly greater than the weight of the heaviest edge on the path from u to v in F. (If a path does not exist in F it is considered to have infinite weight.) Any edge that is not F-heavy is F-light. If F is a subgraph of G then any F-heavy edge in G cannot be in the minimum spanning tree of G by the cycle property. Given a forest, F-heavy edges can be computed in linear time using a minimum spanning tree verification algorithm. - -Input: A graph G with no isolated vertices - -# If G is empty return an empty forest - -# Create a contracted graph G′ by running two successive Borůvka steps on G - -# Create a subgraph H by selecting each edge in G′ with probability 1/2. Recursively apply the algorithm to H to get its minimum spanning forest F. - -# Remove all F-heavy edges from G′ (where F is the forest from step 3) using a linear time minimum spanning tree verification algorithm. - -# Recursively apply the algorithm to G′ to get its minimum spanning forest. - -Output: The minimum spanning forest of G′ and the contracted edges from the Borůvka steps - -Correctness is proved by induction on the number of vertices in the graph. The base case is trivially true. Let T* be the minimum spanning tree of G. Every edge selected in a Borůvka step is in T* by the cut property, and none of the edges removed to form the contracted graph are in T*: non-minimal repetitive edges are excluded by the cycle property, and self-loops cannot occur in any spanning tree. The remaining edges of T* not selected in step 2 form the minimum spanning tree of the contracted graph by the cut property (let each cut be a supernode). Every F-heavy edge deleted is not in the minimum spanning tree by the cycle property. Finally F′ is the minimum spanning tree of the contracted graph by the inductive hypothesis. Thus F′ and the edges contracted in the Borůvka steps form the minimum spanning tree.
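As a concrete companion to the listings above, here is a compact sketch of a single Borůvka step (our illustration, not the authors' code; Python, with vertices numbered 0..n−1, edges as (weight, u, v) tuples, and distinct weights assumed so that ties never arise):

```python
def boruvka_step(n, edges):
    """One Borůvka step: select lightest incident edges, contract, and clean up."""
    # Step 1: for each vertex, select the lightest incident edge.
    lightest = {}
    for w, u, v in edges:
        for x in (u, v):
            if x not in lightest or w < lightest[x][0]:
                lightest[x] = (w, u, v)
    selected = set(lightest.values())

    # Step 2: contract components connected by selected edges (union-find).
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for _, u, v in selected:
        parent[find(u)] = find(v)
    roots = sorted({find(x) for x in range(n)})
    label = {r: i for i, r in enumerate(roots)}   # new vertex names 0..n'-1

    # Step 3: drop self-loops; keep only the cheapest copy of repeated edges.
    best = {}
    for w, u, v in edges:
        a, b = label[find(u)], label[find(v)]
        if a == b:
            continue                               # self-loop after contraction
        key = (min(a, b), max(a, b))
        if key not in best or w < best[key][0]:
            best[key] = (w, *key)
    return selected, len(roots), list(best.values())

# A 4-cycle with weights 1..4 contracts to a single vertex in one step.
sel, n2, contracted = boruvka_step(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 0)])
print(len(sel), n2, contracted)   # 3 selected edges, 1 vertex, no edges remain
```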
- -The expected performance is a result of the random sampling step. The effectiveness of the random sampling step is described by the following lemma, which places a bound on the number of F-light edges in G, thereby restricting the size of the second subproblem. - -Lemma. Let H be a subgraph of G formed by including each edge of G independently with probability p, and let F be the minimum spanning forest of H. The expected number of F-light edges in G is at most n/p, where n is the number of vertices in G. - -To prove the lemma, examine the edges of G as they are being added to H. The number of F-light edges in G is independent of the order in which the edges of H are selected since the minimum spanning forest of H is the same for all selection orders. For the sake of the proof, consider selecting edges for H by taking the edges of G one at a time in order of edge weight from lightest to heaviest. Let e be the current edge being considered. If the endpoints of e are in two disconnected components of H then e is the lightest edge connecting those components and if it is added to H it will be in F by the cut property. This also means e is F-light regardless of whether or not it is added to H since only heavier edges are subsequently considered. If both endpoints of e are in the same component of H then it is (and always will be) F-heavy by the cycle property. Edge e is then added to H with probability p. - -The maximum number of F-light edges added to H is n-1 since any minimum spanning tree of H has at most n-1 edges. Once n-1 F-light edges have been added to H none of the subsequent edges considered are F-light by the cycle property. Thus, the number of F-light edges in G is bounded by the number of F-light edges considered for H before n-1 F-light edges are actually added to H. Since any F-light edge is added with probability p this is equivalent to flipping a coin with probability p of coming up heads until n-1 heads have appeared. The total number of coin flips is equal to the number of F-light edges in G. The distribution of the number of coin flips is given by the negative binomial distribution with parameters n-1 and p. For these parameters the expected value of this distribution is (n-1)/p. - -Ignoring work done in recursive subproblems, the total amount of work done in a single invocation of the algorithm is linear in the number of edges in the input graph. Step 1 takes constant time. Borůvka steps can be executed in time linear in the number of edges as mentioned in the Borůvka step section. Step 3 iterates through the edges and flips a single coin for each one so it is linear in the number of edges. Step 4 can be executed in linear time using a modified linear time minimum spanning tree verification algorithm. Since the work done in one iteration of the algorithm is linear in the number of edges, the work done in one complete run of the algorithm (including all recursive calls) is bounded by a constant factor times the total number of edges in the original problem and all recursive subproblems. - -Each invocation of the algorithm produces at most two subproblems so the set of subproblems forms a binary tree. Each Borůvka step reduces the number of vertices by at least a factor of two, so after two Borůvka steps the number of vertices has been reduced by a factor of four. Thus, if the original graph has n vertices and m edges then at depth d of the tree each subproblem is on a graph of at most $n/4^d$ vertices. Also the tree has at most $\log_4 n$ levels.
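The sampling lemma above lends itself to a quick empirical check. The following sketch (ours; Python; function names hypothetical; the complete graph with random weights and p = 1/2 are arbitrary choices) samples H, builds its minimum spanning forest with Kruskal's algorithm, and counts the F-light edges of G, whose average should stay below n/p:

```python
import random
from collections import defaultdict

def mst_forest(n, edges):
    """Kruskal: return the minimum spanning forest as an adjacency map."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    adj = defaultdict(list)
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            adj[u].append((v, w))
            adj[v].append((u, w))
    return adj

def f_light(adj, w, u, v):
    """Edge (u, v) of weight w is F-light iff w is at most the heaviest
    weight on the u-v path in F, or no such path exists."""
    stack, best = [(u, 0.0)], {u: 0.0}   # max edge weight seen along each path
    while stack:
        x, m = stack.pop()
        for y, wy in adj[x]:
            if y not in best:
                best[y] = max(m, wy)
                stack.append((y, best[y]))
    return v not in best or w <= best[v]

random.seed(1)
n, p, trials, total = 30, 0.5, 200, 0
for _ in range(trials):
    edges = [(random.random(), u, v) for u in range(n) for v in range(u + 1, n)]
    H = [e for e in edges if random.random() < p]
    F = mst_forest(n, H)
    total += sum(f_light(F, w, u, v) for w, u, v in edges)
print(total / trials, "vs bound", n / p)   # empirical mean stays below n/p
```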
- -To reason about the recursion tree let the left child problem be the subproblem in the recursive call in step 3 and the right child problem be the subproblem in the recursive call in step 5. Count the total number of edges in the original problem and all subproblems by counting the number of edges in each left path of the tree. A left path begins at either a right child or the root and includes all nodes reachable through a path of left children. - -Each edge in a left child problem is selected from the edges of its parent problem (less the edges contracted in the Borůvka steps) with probability 1/2. If a parent problem has x edges then the expected number of edges in the left child problem is at most x/2. If x is replaced by a random variable X then by the linearity of expectation the expected number of edges in the left child problem Y is given by $E[Y] \leq E[X]/2$. Thus if the expected number of edges in a problem at the top of a left path is k then the sum of the expected number of edges in each subproblem in the left path is at most $\sum_{d=0}^{\infty} \frac{k}{2^d}=2k$ (see Geometric series). The root has m edges, so the expected total number of edges is at most 2m plus twice the sum of the expected numbers of edges at the tops of the right subproblems. - -The expected number of edges in each right subproblem is equal to the number of F-light edges in the parent problem, where F is the minimum spanning forest of the left subproblem. The number of F-light edges is less than or equal to twice the number of vertices in the subproblem by the sampling lemma (applied with p = 1/2). The number of vertices in a subproblem at depth d is $n/4^d$, so the total number of vertices in all right subproblems is given by $\sum_{d=1}^{\infty}\frac{2^{d-1}n}{4^d}=n/2$. Hence the expected total number of edges entering the right subproblems is at most n, and the expected number of edges in the original problem and all subproblems is at most 2m+2n. Since n is at most 2m for a graph with no isolated vertices, the algorithm runs in expected time O(m). - -The worst case runtime is equivalent to the runtime of Borůvka's algorithm. This occurs if all edges are added to either the left or right subproblem on each invocation. In this case the algorithm is identical to Borůvka's algorithm, which runs in $O(\min\{n^2, m\log n\})$ on a graph with n vertices and m edges. diff --git a/wiki/wikipedia/3314.txt b/wiki/wikipedia/3314.txt deleted file mode 100644 index e81cb28addaecbf04f5db6aa46ffcabb25cfb9fc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3314.txt +++ /dev/null @@ -1,47 +0,0 @@ -In number theory, Cramér's conjecture, formulated by the Swedish mathematician Harald Cramér in 1936, is an estimate for the size of gaps between consecutive prime numbers: intuitively, that gaps between consecutive primes are always small, and the conjecture quantifies asymptotically just how small they must be. It states that -$$ -p_{n+1}-p_n=O((\log p_n)^2),\ -$$ - -where $p_n$ denotes the nth prime number, O is big O notation, and "log" is the natural logarithm. While this is the statement explicitly conjectured by Cramér, his heuristic actually supports the stronger statement -$$ -\limsup_{n\rightarrow\infty} \frac{p_{n+1}-p_n}{(\log p_n)^2} = 1, -$$ - -and sometimes this formulation is called Cramér's conjecture. However, this stronger version is not supported by more accurate heuristic models, which nevertheless support the first version of Cramér's conjecture. Neither form has yet been proven or disproven.
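As a purely illustrative experiment (assuming sympy is available; the cutoff 10^5 is an arbitrary choice of mine), one can list the record prime gaps below a bound and compare them with the conjectured $(\log p_n)^2$ scale. The conjecture is asymptotic, so the smallest primes can exceed the bound.

```python
from math import log
from sympy import nextprime

p, record = 2, 0
while p < 10**5:
    q = nextprime(p)
    gap = q - p
    if gap > record:            # a new record ("maximal") gap
        record = gap
        print(f"gap {gap:3d} after p = {p:6d};  (log p)^2 = {log(p)**2:6.1f}")
    p = q
```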
- -Cramér gave a conditional proof of the much weaker statement that -$$ -p_{n+1}-p_n = O(\sqrt{p_n}\log p_n) -$$ - -on the assumption of the Riemann hypothesis. For the maximal gaps, the formula -$$ -G(x)\sim \log x(\log x-\log\log x), -$$ - -has also been proposed; it is formally identical to the Shanks conjecture but suggests a lower-order term. - -Marek Wolf has proposed the formula for the maximal gaps $G(x)$ - -expressed in terms of the prime-counting function -$$ -\pi(x) -$$: -$$ -G(x)\sim \frac{x}{\pi(x)}(2\log\pi(x)-\log x+c_0), -$$ - -where $c_0=\log(C_2)=0.2778769...$ and $C_2=1.3203236...$ is twice the twin primes constant. Using Gauss's approximation $\pi(x)\sim x/\log(x)$ this gives -$$ -G(x)\sim \log(x)(\log x-2\log\log x), -$$ - -which for large $x$ is also asymptotically equivalent to the Cramér and Shanks conjectures: $G(x)\sim \log^2(x)$. - -Thomas Nicely has calculated many large prime gaps. He measures the quality of fit to Cramér's conjecture by measuring the ratio -$$ -R = \frac{\log p_n}{\sqrt{p_{n+1}-p_n}}. -$$ - -He writes, “For the largest known maximal gaps, $R$ has remained near 1.13.” However, $1/R^2$ is still less than 1. diff --git a/wiki/wikipedia/3315.txt b/wiki/wikipedia/3315.txt deleted file mode 100644 index b5fe1f91c783a5944456be6360a4be999f4d18cd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3315.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, Shimura's reciprocity law, introduced by Goro Shimura, describes the action of ideles of imaginary quadratic fields on the values of modular functions at singular moduli. It forms a part of the Kronecker Jugendtraum, explicit class field theory for such fields. There are also higher-dimensional generalizations. diff --git a/wiki/wikipedia/3316.txt b/wiki/wikipedia/3316.txt deleted file mode 100644 index 03afdbb18c0b7508e28b18a9e5530281d121cdec..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3316.txt +++ /dev/null @@ -1,32 +0,0 @@ -In the mathematical field of Riemannian geometry, M. Gromov's systolic inequality bounds the length of the shortest non-contractible loop on a Riemannian manifold in terms of the volume of the manifold. Gromov's systolic inequality was proved in 1983; it can be viewed as a generalisation, albeit non-optimal, of Loewner's torus inequality and Pu's inequality for the real projective plane. - -Technically, let M be an essential Riemannian manifold of dimension n; denote by sysπ1(M) the homotopy 1-systole of M, that is, the least length of a non-contractible loop on M. Then Gromov's inequality takes the form -$$ - \left(\operatorname{sys\pi}_1(M)\right)^n \leq C_n \operatorname{vol}(M), -$$ - -where $C_n$ is a universal constant only depending on the dimension of M. - -A closed manifold is called essential if its fundamental class defines a nonzero element in the homology of its fundamental group, or more precisely in the homology of the corresponding Eilenberg–MacLane space. Here the fundamental class is taken in homology with integer coefficients if the manifold is orientable, and in coefficients modulo 2, otherwise. - -Examples of essential manifolds include aspherical manifolds, real projective spaces, and lens spaces. - -Gromov's original 1983 proof is about 35 pages long. It relies on a number of techniques and inequalities of global Riemannian geometry. The starting point of the proof is the imbedding of X into the Banach space of Borel functions on X, equipped with the sup norm.
The imbedding is defined by mapping a point p of X to the real function on X given by the distance from the point p. The proof utilizes the coarea inequality, the isoperimetric inequality, the cone inequality, and the deformation theorem of Herbert Federer. - -One of the key ideas of the proof is the introduction of filling invariants, namely the filling radius and the filling volume of X. Namely, Gromov proved a sharp inequality relating the systole and the filling radius, -$$ -\mathrm{sys\pi}_1 \leq 6 \mathrm{FillRad}(X), -$$ - -valid for all essential manifolds X; as well as an inequality -$$ -\mathrm{FillRad}(X) \leq C_n \mathrm{vol}_n{}^{\tfrac{1}{n}}(X), -$$ - -valid for all closed manifolds X. - -It was shown by Brunnbauer that the filling invariants, unlike the systolic invariants, are independent of the topology of the manifold in a suitable sense. - -Guth and Ambrosio developed approaches to the proof of Gromov's systolic inequality for essential manifolds. - -Stronger results are available for surfaces, where the asymptotics when the genus tends to infinity are by now well understood; see systoles of surfaces. A uniform inequality for arbitrary 2-complexes with non-free fundamental groups is available, whose proof relies on the Grushko decomposition theorem. diff --git a/wiki/wikipedia/3317.txt b/wiki/wikipedia/3317.txt deleted file mode 100644 index f49d8f45114984d97f294236c40dd82796f838fd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3317.txt +++ /dev/null @@ -1,5 +0,0 @@ -In Euclidean plane geometry, Lester's theorem states that in any scalene triangle, the two Fermat points, the nine-point center, and the circumcenter lie on the same circle. - -The result is named after June Lester, who published it in 1997, and the circle through these points was called the Lester circle by Clark Kimberling. - -Lester proved the result by using the properties of complex numbers; subsequent authors have given elementary proofs, proofs using vector arithmetic, and computerized proofs. diff --git a/wiki/wikipedia/3318.txt b/wiki/wikipedia/3318.txt deleted file mode 100644 index 1f7c5f71c5bf75a61513cbf9f3ae1eb3dd16aa88..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3318.txt +++ /dev/null @@ -1,13 +0,0 @@ -In the mathematical field of graph theory, the bull graph is a planar undirected graph with 5 vertices and 5 edges, in the form of a triangle with two disjoint pendant edges. - -It has chromatic number 3, chromatic index 3, radius 2, diameter 3 and girth 3. It is also a self-complementary graph, a block graph, a split graph, an interval graph, a claw-free graph, a 1-vertex-connected graph and a 1-edge-connected graph. - -A graph is bull-free if it has no bull as an induced subgraph. The triangle-free graphs are bull-free graphs, since every bull contains a triangle. The strong perfect graph theorem was proven for bull-free graphs long before its proof for general graphs, and a polynomial time recognition algorithm for bull-free perfect graphs is known. - -Maria Chudnovsky and Shmuel Safra have studied bull-free graphs more generally, showing that any such graph must have either a large clique or a large independent set (that is, the Erdős–Hajnal conjecture holds for the bull graph), and developing a general structure theory for these graphs. - -The chromatic polynomial of the bull graph is $(x-2)(x-1)^3x$. Two other graphs are chromatically equivalent to the bull graph. - -Its characteristic polynomial is $-x(x^2-x-3)(x^2+x-1)$.
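As a quick numerical check of this characteristic polynomial, here is a short sketch of my own; the labelling is an assumption (triangle on vertices 0, 1, 2, pendant vertices 3 and 4), and numpy is assumed available.

```python
import numpy as np

# Adjacency matrix of the bull graph under the assumed labelling.
A = np.zeros((5, 5))
for u, v in [(0, 1), (0, 2), (1, 2), (0, 3), (1, 4)]:
    A[u, v] = A[v, u] = 1

# np.poly(A) returns the monic characteristic polynomial det(xI - A);
# -x(x^2 - x - 3)(x^2 + x - 1) expands to -(x^5 - 5x^3 - 2x^2 + 3x).
print(np.poly(A))   # approximately [ 1.  0. -5. -2.  3.  0.]
```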
- -Its Tutte polynomial is $x^4+x^3+x^2y$. diff --git a/wiki/wikipedia/3319.txt b/wiki/wikipedia/3319.txt deleted file mode 100644 index d70db3360bc7a63090a09e2420af78499adc4058..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3319.txt +++ /dev/null @@ -1,5 +0,0 @@ -Maxwell's theorem is the following statement about triangles in the plane. - -For a given triangle $ABC$ and a point $V$ not on the sides of that triangle construct a second triangle $A'B'C'$, such that the side $A'B'$ is parallel to the line segment $CV$, the side $A'C'$ is parallel to the line segment $BV$ and the side $B'C'$ is parallel to the line segment $AV$. Then the parallel to $AB$ through $C'$, the parallel to $BC$ through $A'$ and the parallel to $AC$ through $B'$ intersect in a common point $V'$. - -The theorem is named after the physicist James Clerk Maxwell (1831–1879), who proved it in his work on reciprocal figures, which are of importance in statics. diff --git a/wiki/wikipedia/332.txt b/wiki/wikipedia/332.txt deleted file mode 100644 index acade297fea0a3e63eada1457357687c877545b8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/332.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, an existence theorem is a theorem which asserts the existence of a certain object. It might be a statement which begins with the phrase "there exist(s)", or it might be a universal statement whose last quantifier is existential (e.g., "for all x, y, ... there exist(s) ..."). In the formal terms of symbolic logic, an existence theorem is a theorem with a prenex normal form involving the existential quantifier, even though in practice, such theorems are usually stated in standard mathematical language. For example, the statement that the sine function is continuous everywhere, or any theorem written in big O notation, can be considered as theorems which are existential by nature—since the quantification can be found in the definitions of the concepts used. - -A controversy that goes back to the early twentieth century concerns the issue of purely theoretic existence theorems, that is, theorems which depend on non-constructive foundational material such as the axiom of infinity, the axiom of choice or the law of excluded middle. Such theorems provide no indication as to how to construct (or exhibit) the object whose existence is being claimed. From a constructivist viewpoint, such approaches are not viable, as they lead to mathematics losing its concrete applicability, while the opposing viewpoint is that abstract methods are far-reaching, in a way that numerical analysis cannot be. - -In mathematics, an existence theorem is purely theoretical if the proof given for it does not indicate a construction of the object whose existence is asserted. Such a proof is non-constructive, since the whole approach may not lend itself to construction. In terms of algorithms, purely theoretical existence theorems bypass all algorithms for finding what is asserted to exist. These are to be contrasted with the so-called "constructive" existence theorems, which many constructivist mathematicians working in extended logics (such as intuitionistic logic) believe to be intrinsically stronger than their non-constructive counterparts. - -Despite that, the purely theoretical existence results are nevertheless ubiquitous in contemporary mathematics. For example, John Nash's original proof of the existence of a Nash equilibrium in 1951 was such an existence theorem. A constructive approach was later found in 1962.
- -From the other direction, there has been considerable clarification of what constructive mathematics is—without the emergence of a 'master theory'. For example, according to Errett Bishop's definitions, the continuity of a function such as sin(x) should be proved as a constructive bound on the modulus of continuity, meaning that the existential content of the assertion of continuity is a promise that can always be kept. Accordingly, Bishop rejected the standard idea of pointwise continuity, and proposed that continuity should be defined in terms of "local uniform continuity". Another explanation of existence theorems comes from type theory, in which a proof of an existential statement can come only from a term (which one can see as the computational content). diff --git a/wiki/wikipedia/3320.txt b/wiki/wikipedia/3320.txt deleted file mode 100644 index a8a61a9d25a04beff3357178f43c146df79a0652..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3320.txt +++ /dev/null @@ -1,13 +0,0 @@ -In geometry, Barbier's theorem states that every curve of constant width has perimeter $\pi$ times its width, regardless of its precise shape. This theorem was first published by Joseph-Émile Barbier in 1860. - -The most familiar examples of curves of constant width are the circle and the Reuleaux triangle. For a circle, the width is the same as the diameter; a circle of width w has perimeter $\pi w$. A Reuleaux triangle of width w consists of three arcs of circles of radius w. Each of these arcs has central angle $\pi/3$, so the perimeter of the Reuleaux triangle of width w is equal to half the perimeter of a circle of radius w and therefore is equal to $\pi w$. A similar analysis of other simple examples such as Reuleaux polygons gives the same answer. - -One proof of the theorem uses the properties of Minkowski sums. If K is a body of constant width w, then the Minkowski sum of K and its 180° rotation is a disk with radius w and perimeter $2\pi w$. Moreover, the Minkowski sum acts linearly on the perimeters of convex bodies, so the perimeter of K must be half the perimeter of this disk, which is $\pi w$ as the theorem states. - -Alternatively, the theorem follows immediately from the Crofton formula in integral geometry according to which the length of any curve equals the measure of the set of lines that cross the curve, multiplied by their numbers of crossings. Any two curves that have the same constant width are crossed by sets of lines with the same measure, and therefore they have the same length. Historically, Crofton derived his formula later than, and independently of, Barbier's theorem. - -An elementary probabilistic proof of the theorem can be found at Buffon's noodle. - -The analogue of Barbier's theorem for surfaces of constant width is false. In particular, the unit sphere has surface area $4\pi\approx 12.566$, while the surface of revolution of a Reuleaux triangle with the same constant width has surface area $8\pi-\tfrac{4}{3}\pi^2\approx 11.973$. - -Instead, Barbier's theorem generalizes to bodies of constant brightness, three-dimensional convex sets for which every two-dimensional projection has the same area. These all have the same surface area as a sphere of the same projected area.
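Both the two-dimensional statement and the failure of its three-dimensional analogue reduce to a few lines of arithmetic. The following sketch is my own illustration, using the arc decomposition of the Reuleaux triangle and the surface areas quoted above.

```python
from math import pi

w = 2.0                                  # an arbitrary constant width
arc = (pi / 3) * w                       # one arc: radius w, angle pi/3
reuleaux_perimeter = 3 * arc
print(reuleaux_perimeter, pi * w)        # both ~6.2832, as Barbier predicts

# The 3-D analogue fails: for constant width 1, compare the unit sphere
# with the surface of revolution of a Reuleaux triangle.
print(4 * pi, 8 * pi - (4 / 3) * pi**2)  # ~12.566 vs ~11.973
```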
diff --git a/wiki/wikipedia/3321.txt b/wiki/wikipedia/3321.txt deleted file mode 100644 index 07827981dcacd2e83598b588e55fa3272854231e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3321.txt +++ /dev/null @@ -1,11 +0,0 @@ -In mathematics, the well-ordering theorem, also known as Zermelo's theorem, states that every set can be well-ordered. A set X is well-ordered by a strict total order if every non-empty subset of X has a least element under the ordering. The well-ordering theorem together with Zorn's lemma are the most important mathematical statements that are equivalent to the axiom of choice (often called AC). Ernst Zermelo introduced the axiom of choice as an "unobjectionable logical principle" to prove the well-ordering theorem. One can conclude from the well-ordering theorem that every set is susceptible to transfinite induction, which is considered by mathematicians to be a powerful technique. One famous consequence of the theorem is the Banach–Tarski paradox. - -Georg Cantor considered the well-ordering theorem to be a "fundamental principle of thought". However, it is considered difficult or even impossible to visualize a well-ordering of $\mathbb{R}$; such a visualization would have to incorporate the axiom of choice. In 1904, Gyula Kőnig claimed to have proven that such a well-ordering cannot exist. A few weeks later, Felix Hausdorff found a mistake in the proof. It turned out, though, that the well-ordering theorem is equivalent to the axiom of choice, in the sense that either one together with the Zermelo–Fraenkel axioms is sufficient to prove the other, in first order logic (the same applies to Zorn's Lemma). In second order logic, however, the well-ordering theorem is strictly stronger than the axiom of choice: from the well-ordering theorem one may deduce the axiom of choice, but from the axiom of choice one cannot deduce the well-ordering theorem. - -There is a well-known joke about the three statements, and their relative amenability to intuition:
    The axiom of choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?
- -The Axiom of Choice can be proven from the well-ordering theorem as follows. - -To make a choice function for a collection of non-empty sets, E, take the union of the sets in E and call it X. There exists a well-ordering of X; let R be such an ordering. The function that to each set S of E associates the smallest element of S, as ordered by (the restriction to S of) R, is a choice function for the collection E. - -An essential point of this proof is that it involves only a single arbitrary choice, that of R; applying the well-ordering theorem to each member S of E separately would not work, since the theorem only asserts the existence of a well-ordering, and choosing for each S a well-ordering would not be easier than choosing an element. diff --git a/wiki/wikipedia/3322.txt b/wiki/wikipedia/3322.txt deleted file mode 100644 index e7d46c661b3daada2d3a4273d8f7395841f0d3a6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3322.txt +++ /dev/null @@ -1,45 +0,0 @@ -The Seven Bridges of Königsberg is a historically notable problem in mathematics. Its negative resolution by Leonhard Euler in 1736 laid the foundations of graph theory and prefigured the idea of topology. - -The city of Königsberg in Prussia (now Kaliningrad, Russia) was set on both sides of the Pregel River, and included two large islands—Kneiphof and Lomse—which were connected to each other, or to the two mainland portions of the city, by seven bridges. The problem was to devise a walk through the city that would cross each of those bridges once and only once. - -By way of specifying the logical task unambiguously, solutions involving either - -1 reaching an island or mainland bank other than via one of the bridges, or - -2 accessing any bridge without crossing to its other end - -are explicitly unacceptable. - -Euler proved that the problem has no solution. The difficulty he faced was the development of a suitable technique of analysis, and of subsequent tests that established this assertion with mathematical rigor. - -First, Euler pointed out that the choice of route inside each land mass is irrelevant. The only important feature of a route is the sequence of bridges crossed. This allowed him to reformulate the problem in abstract terms (laying the foundations of graph theory), eliminating all features except the list of land masses and the bridges connecting them. In modern terms, one replaces each land mass with an abstract "vertex" or node, and each bridge with an abstract connection, an "edge", which only serves to record which pair of vertices (land masses) is connected by that bridge. The resulting mathematical structure is a graph. - -Since only the connection information is relevant, the shape of pictorial representations of a graph may be distorted in any way, without changing the graph itself. Only the existence (or absence) of an edge between each pair of nodes is significant. For example, it does not matter whether the edges drawn are straight or curved, or whether one node is to the left or right of another. - -Next, Euler observed that (except at the endpoints of the walk), whenever one enters a vertex by a bridge, one leaves the vertex by a bridge. In other words, during any walk in the graph, the number of times one enters a non-terminal vertex equals the number of times one leaves it.
Now, if every bridge has been traversed exactly once, it follows that, for each land mass (except for the ones chosen for the start and finish), the number of bridges touching that land mass must be even (half of them, in the particular traversal, will be traversed "toward" the landmass; the other half, "away" from it). However, all four of the land masses in the original problem are touched by an odd number of bridges (one is touched by 5 bridges, and each of the other three is touched by 3). Since, at most, two land masses can serve as the endpoints of a walk, the proposition of a walk traversing each bridge once leads to a contradiction. - -In modern language, Euler shows that the possibility of a walk through a graph, traversing each edge exactly once, depends on the degrees of the nodes. The degree of a node is the number of edges touching it. Euler's argument shows that a necessary condition for the walk of the desired form is that the graph be connected and have exactly zero or two nodes of odd degree. This condition turns out also to be sufficient—a result stated by Euler and later proved by Carl Hierholzer. Such a walk is now called an Eulerian path or Euler walk in his honor. Further, if there are nodes of odd degree, then any Eulerian path will start at one of them and end at the other. Since the graph corresponding to historical Königsberg has four nodes of odd degree, it cannot have an Eulerian path. - -An alternative form of the problem asks for a path that traverses all bridges and also has the same starting and ending point. Such a walk is called an Eulerian circuit or an Euler tour. Such a circuit exists if, and only if, the graph is connected, and there are no nodes of odd degree at all. All Eulerian circuits are also Eulerian paths, but not all Eulerian paths are Eulerian circuits. - -Euler's work was presented to the St. Petersburg Academy on 26 August 1735, and published as Solutio problematis ad geometriam situs pertinentis (The solution of a problem relating to the geometry of position) in the journal Commentarii academiae scientiarum Petropolitanae in 1741. It is available in English translation in The World of Mathematics by James R. Newman. - -In the history of mathematics, Euler's solution of the Königsberg bridge problem is considered to be the first theorem of graph theory and the first true proof in the theory of networks, a subject now generally regarded as a branch of combinatorics. Combinatorial problems of other types had been considered since antiquity. - -In addition, Euler's recognition that the key information was the number of bridges and the list of their endpoints (rather than their exact positions) presaged the development of topology. The difference between the actual layout and the graph schematic is a good example of the idea that topology is not concerned with the rigid shape of objects. - -Hence, as Euler recognized, the "geometry of position" is not about "measurements and calculations" but about something more general. That called into question the traditional Aristotelian view that mathematics is the "science of quantity". Though that view fits arithmetic and Euclidean geometry, it does not fit topology and the more abstract structural features studied in modern mathematics. - -Philosophers have noted that Euler's proof is not about an abstraction or a model of reality, but directly about the real arrangement of bridges. Hence the certainty of mathematical proof can apply directly to reality.
The proof is also explanatory, giving insight into why the result must be true. - -Two of the seven original bridges did not survive the bombing of Königsberg in World War II. Two others were later demolished and replaced by a modern highway. The three other bridges remain, although only two of them are from Euler's time (one was rebuilt in 1935). Thus, five bridges now exist at the same sites that were involved in Euler's problem. In terms of graph theory, two of the nodes now have degree 2, and the other two have degree 3. Therefore, an Eulerian path is now possible, but it must begin on one island and end on the other. - -The University of Canterbury in Christchurch has incorporated a model of the bridges into a grass area between the old Physical Sciences Library and the Erskine Building, housing the Departments of Mathematics, Statistics and Computer Science. The rivers are replaced with short bushes and the central island sports a stone tōrō. Rochester Institute of Technology has incorporated the puzzle into the pavement in front of the Gene Polisseni Center, an ice hockey arena that opened in 2014. diff --git a/wiki/wikipedia/3323.txt b/wiki/wikipedia/3323.txt deleted file mode 100644 index c9501e1d7f1078e7a97b85901a3208bddd8022c8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3323.txt +++ /dev/null @@ -1,13 +0,0 @@ -The Banach–Tarski Paradox is a book in mathematics on the Banach–Tarski paradox, the fact that a unit ball can be partitioned into a finite number of subsets and reassembled to form two unit balls. It was written by Stan Wagon and published in 1985 by the Cambridge University Press as volume 24 of their Encyclopedia of Mathematics and its Applications book series. A second printing in 1986 added two pages as an addendum, and a 1993 paperback printing added a new preface. - -In 2016 the Cambridge University Press published a second edition, adding Grzegorz Tomkowicz as a co-author, as volume 163 of the same series. The Basic Library List Committee of the Mathematical Association of America has recommended its inclusion in undergraduate mathematics libraries. - -The Banach–Tarski paradox, proved by Stefan Banach and Alfred Tarski in 1924, states that it is possible to partition a three-dimensional unit ball into finitely many pieces and reassemble them into two unit balls, a single ball of larger or smaller volume, or any other bounded set with a non-empty interior. Although it is a mathematical theorem, it is called a paradox because it is so counter-intuitive; in the preface to the book, Jan Mycielski calls it the most surprising result in mathematics. It is closely related to measure theory and the non-existence of a measure on all subsets of three-dimensional space, invariant under all congruences of space, and to the theory of paradoxical sets in free groups and the representation of these groups by three-dimensional rotations, used in the proof of the paradox. The topic of the book is the Banach–Tarski paradox, its proof, and the many related results that have since become known. - -The book is divided into two parts, the first on the existence of paradoxical decompositions and the second on conditions that prevent their existence. After two chapters of background material, the first part proves the Banach–Tarski paradox itself, considers higher-dimensional spaces and non-Euclidean geometry, studies the number of pieces necessary for a paradoxical decomposition, and finds analogous results to the Banach–Tarski paradox for one- and two-dimensional sets.
The second part includes a related theorem of Tarski that congruence-invariant finitely-additive measures prevent the existence of paradoxical decompositions, a theorem that Lebesgue measure is the only such measure on the Lebesgue measurable sets, material on amenable groups, connections to the axiom of choice and the Hahn–Banach theorem. Three appendices describe Euclidean groups, Jordan measure, and a collection of open problems. - -The second edition adds material on several recent results in this area, in many cases inspired by the first edition of the book. Trevor Wilson proved the existence of a continuous motion from the one-ball assembly to the two-ball assembly, keeping the sets of the partition disjoint at all times; this question had been posed by de Groot in the first edition of the book. Miklós Laczkovich solved Tarski's circle-squaring problem, asking for a dissection of a disk to a square of the same area, in 1990. And Edward Marczewski had asked in 1930 whether the Banach–Tarski paradox could be achieved using only Baire sets; a positive answer was found in 1994 by Randall Dougherty and Matthew Foreman. - -The book is written at a level accessible to mathematics graduate students, but provides a survey of research in this area that should also be useful to more advanced researchers. The beginning parts of the book, including its proof of the Banach–Tarski paradox, should also be readable by undergraduate mathematicians. - -Reviewer Włodzimierz Bzyl writes that "this beautiful book is written with care and is certainly worth reading". Reviewer John J. Watkins writes that the first edition of the book "became the classic text on paradoxical mathematics" and that the second edition "exceeds any possible expectation I might have had for expanding a book I already deeply treasured". diff --git a/wiki/wikipedia/3324.txt b/wiki/wikipedia/3324.txt deleted file mode 100644 index f52e4ecd3ceab467d73e873f11afff1baf91b5a3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3324.txt +++ /dev/null @@ -1,59 +0,0 @@ -In mathematical logic, the Buchholz hydra game is a hydra game, which is a single player game based on the idea of chopping pieces off a mathematical tree. The hydra game can be used to generate a rapidly growing function $BH(n)$, which eventually dominates all recursive functions that are provably total in $\mathsf{\Pi^1_1 - CA + BI}$, and is itself provably total in $\mathsf{\Pi^1_1 - CA + BI}$ + "transfinite induction with respect to TFB". - -The game is played on a hydra, a finite, rooted, two-dimensional, connected mathematical tree $A$ with the following properties: - -* The root of $A$ has a label $+$. - -* Any other node of $A$ has a label $\nu \leq \omega$. - -* All nodes directly above the root of $A$ have a label $0$. - -If the player chops off a head/leaf (i.e. the top node) $\sigma$ of $A$, the hydra will then choose an arbitrary $n \in \N$ (e.g. the current turn number), and then transform itself into a new hydra $A(\sigma, n)$ like so. Let $\tau$ represent the parent of $\sigma$, and let $A^-$ represent the part of the hydra which remains after $\sigma$ has been chopped off. The definition of $A(\sigma, n)$ depends on the label of $\sigma$: - -* If the label of $\sigma$ is 0 and $\tau$ is the root of $A$, then $A(\sigma, n)$ = $A^-$. - -* If the label of $\sigma$ is 0 but $\tau$ is not the root of $A$, we make $n$ copies of $\tau$ and all its children and attach them all to $\tau$'s parent. This new tree is $A(\sigma, n)$. 
- -* If the label of $\sigma$ is $u$ for some $u \in \N$ with $u > 0$, then we label the first node below $\sigma$ with label $v < u$ as $\varepsilon$. $B$ is then the subtree obtained by starting with $A_\varepsilon$ and replacing the label of $\varepsilon$ with $u - 1$ and $\sigma$ with 0. $A(\sigma, n)$ is then obtained by taking $A$ and replacing $\sigma$ with $B$. In this case, the value of $n$ does not matter. - -* If the label of $\sigma$ is $\omega$, $A(\sigma, n)$ is simply obtained by replacing the label of $\sigma$ with $n + 1$. - -If $\sigma$ is the rightmost head of $A$, we write simply $A(n)$. A series of moves is called a strategy, and a strategy is called a winning strategy if after a (finite) number of moves, nothing is left of the hydra except its root. It has been proven that the game always terminates, even though the hydra can grow enormously taller along the way; killing the hydra simply takes a very, very long time. - -It was believed that Wilfried Buchholz showed that there are no losing strategies for any hydra, but the source of this article does not include such a result. Call this statement the hydra theorem. What he actually showed is that the canonical correspondence from a hydra to an infinitary well-founded tree (or the corresponding term in the notation system $T$ associated to Buchholz's function, which does not necessarily belong to the ordinal notation system $OT \subset T$) preserves fundamental sequences, i.e. the strategy to choose the rightmost leaves and the $(n)$ operation on an infinitary well-founded tree (or the $[n]$ operation on the corresponding term in $T$). - -Although this might imply the hydra theorem under some weak set theory, the statement that he showed the hydra theorem is wrong, because he does not even state the hydra theorem. He only referred to the fact that the sequence of the rightmost leaves is a winning strategy. The hydra theorem is unprovable in $\mathsf{\Pi^1_1 - CA + BI}$, but for individual hydras it is believed in this community, without a published source, to be provable. - -Suppose a tree consists of just one branch with $x$ nodes, labeled $+, 0, \omega, ..., \omega$. Call such a tree $R_x$. It cannot be proven in $\mathsf{\Pi^1_1 - CA + BI}$ that for all $x$, there exists $k$ such that $R_x(1)(2)(3)...(k)$ is a winning strategy. (The latter expression means taking the tree $R_x$, then transforming it with $n=1$, then $n=2$, then $n=3$, etc. up to $n=k$.) - -Define $BH(x)$ as the smallest $k$ such that $R_x(1)(2)(3)...(k)$ as defined above is a winning strategy. By the hydra theorem this function is well-defined, but its totality cannot be proven in $\mathsf{\Pi^1_1 - CA + BI}$. Hydras grow extremely fast: the number of turns required to kill $R_x(1)(2)$ is larger than Graham's number or even the number of turns required to kill a Kirby-Paris hydra, and $R_x(1)(2)(3)(4)(5)(6)$ has an entire Kirby-Paris hydra as its branch. To be precise, its rate of growth is believed to be comparable to $f_{\psi_0(\varepsilon_{\Omega_\omega + 1})}(x)$ with respect to an unspecified system of fundamental sequences, although no proof of this is known. Here, $\psi_0$ denotes Buchholz's function, and $\psi_0(\varepsilon_{\Omega_\omega + 1})$ is the Takeuti-Feferman-Buchholz ordinal, which, unsurprisingly, measures the strength of $\mathsf{\Pi^1_1 - CA + BI}$. - -The first two values of the BH function are virtually degenerate: $BH(1) = 0$ and $BH(2) = 1$. Similarly to the tree function, $BH(3)$ is very large, but not extremely so.
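The transformation rules above are concrete enough to sketch in code. The following toy implementation is my own illustration: the tree representation and all names are invented, and it makes one interpretive assumption, namely that in the second rule the n copies of τ's subtree are taken after σ has already been removed.

```python
OMEGA = "w"   # stands in for the label omega

class Node:
    def __init__(self, label, parent=None):
        self.label = label            # "+", a non-negative int, or OMEGA
        self.parent = parent
        self.children = []

def add_child(parent, label):
    child = Node(label, parent)
    parent.children.append(child)
    return child

def clone(node, parent):
    copy = Node(node.label, parent)
    for c in node.children:
        copy.children.append(clone(c, copy))
    return copy

def rightmost_leaf(node):
    return node if not node.children else rightmost_leaf(node.children[-1])

def size(node):
    return 1 + sum(size(c) for c in node.children)

def chop(root, sigma, n):
    """Apply the transformation A(sigma, n) described above."""
    tau = sigma.parent
    if sigma.label == 0:
        tau.children.remove(sigma)                 # pass to A^-
        if tau is not root:
            # n copies of tau's remaining subtree go to tau's parent
            for _ in range(n):
                tau.parent.children.append(clone(tau, tau.parent))
    elif sigma.label == OMEGA:
        sigma.label = n + 1                        # relabel with n + 1
    else:
        # label u > 0: find the nearest node below sigma with label v < u;
        # this stops before the root, since nodes directly above the
        # root carry label 0.
        u = sigma.label
        eps = tau
        while eps.label == OMEGA or eps.label == "+" or eps.label >= u:
            eps = eps.parent
        sigma.label = 0                            # sigma becomes 0 inside B
        B = clone(eps, tau)
        B.label = u - 1                            # epsilon's copy gets u - 1
        tau.children[tau.children.index(sigma)] = B

# The hydra +--0--w, attacked at its rightmost head on turns 1, 2, 3.
root = Node("+")
a = add_child(root, 0)
add_child(a, OMEGA)
for turn in range(1, 4):
    head = rightmost_leaf(root)
    if head is root:
        break
    chop(root, head, turn)
    print(f"after turn {turn}: {size(root)} nodes")   # 3, 4, 6 nodes
```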
- -The Buchholz hydra eventually surpasses TREE(n) and SCG(n), yet it is likely weaker than loader.c as well as numbers from finite promise games. - -It is possible to make a one-to-one correspondence between some hydras and ordinals. To convert a tree or subtree to an ordinal: - -* Inductively convert all the immediate children of the node to ordinals. - -* Add up those child ordinals. If there were no children, this will be 0. - -* If the label of the node is not +, apply $\psi_\alpha$, where $\alpha$ is the label of the node, and $\psi$ is Buchholz's function. - -The resulting ordinal expression is only useful if it is in normal form. - -We can then further extend this system to name larger ordinals: - -* Buchholz hydra ordinal: $+(0(\omega)(\omega)) = \psi_0(\psi_\omega(0) + \psi_\omega(0))$ - -* Super Buchholz hydra ordinal: $+(0(\omega(1))) = \psi_0(\psi_\omega(1))$ - -* Cantor-Buchholz ordinal: $+(0(\omega(\omega))) = \psi_0(\psi_\omega(\omega))$ - -* Feferman-Schütte-Buchholz ordinal: $+(0(\omega(\omega(\omega)))) = \psi_0(\psi_\omega(\psi_\omega(\omega)))$ - -* Small Veblen-Buchholz ordinal: $+(0(\omega(\omega(\omega(1))))) = \psi_0(\psi_\omega(\psi_\omega(\psi_\omega(1))))$ - -* Large Veblen-Buchholz ordinal: $+(0(\omega(\omega(\omega(\omega))))) = \psi_0(\psi_\omega(\psi_\omega(\psi_\omega(\omega))))$ - -This seems to be the limit of the system, as attempting to construct larger ordinals will yield labels greater than $\omega$, which are not allowed. diff --git a/wiki/wikipedia/3325.txt b/wiki/wikipedia/3325.txt deleted file mode 100644 index c39676f2d9567e9b880e6625c395781367685f80..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3325.txt +++ /dev/null @@ -1,13 +0,0 @@ -In group theory, Matsumoto's theorem, proved by Hideya Matsumoto, gives conditions for two reduced words of a Coxeter group to represent the same element. - -If two reduced words represent the same element of a Coxeter group, then Matsumoto's theorem states that the first word can be transformed into the second by repeatedly transforming - -xyxy... to yxyx... (or vice versa) - -where - -xyxy... = yxyx... - -is one of the defining relations of the Coxeter group. - -Matsumoto's theorem implies that there is a natural map (not a group homomorphism) from a Coxeter group to the corresponding braid group, taking any element of the Coxeter group represented by some reduced word in the generators to the same word in the generators of the braid group. diff --git a/wiki/wikipedia/3326.txt b/wiki/wikipedia/3326.txt deleted file mode 100644 index 3965ec9f7a03baaca0cba086e976282fa7e15849..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3326.txt +++ /dev/null @@ -1,256 +0,0 @@ -In mathematics, Abel's identity (also called Abel's formula or Abel's differential equation identity) is an equation that expresses the Wronskian of two solutions of a homogeneous second-order linear ordinary differential equation in terms of a coefficient of the original differential equation. - -The relation can be generalised to nth-order linear ordinary differential equations. The identity is named after the Norwegian mathematician Niels Henrik Abel. - -Since Abel's identity relates the different linearly independent solutions of the differential equation, it can be used to find one solution from the other. It provides useful identities relating the solutions, and is also useful as a part of other techniques such as the method of variation of parameters.
It is especially useful for equations such as Bessel's equation where the solutions do not have a simple analytical form, because in such cases the Wronskian is difficult to compute directly. - -A generalisation to first-order systems of homogeneous linear differential equations is given by Liouville's formula. - -Consider a homogeneous linear second-order ordinary differential equation -$$ - y'' + p(x)y' + q(x)y = 0 -$$ - -on an interval I of the real line with real- or complex-valued continuous functions p and q. Abel's identity states that the Wronskian $W(y_1,y_2)$ of two real- or complex-valued solutions $y_1$ and $y_2$ of this differential equation, that is the function defined by the determinant - -W(y_1,y_2)(x) - -=\begin{vmatrix}y_1(x)&y_2(x)\\y'_1(x)&y'_2(x)\end{vmatrix} - -=y_1(x)y'_2(x) - y'_1(x)y_2(x),\qquad x\in I, - -satisfies the relation -$$ -W(y_1,y_2)(x)=C \exp\biggl(-\int_{x_0}^x p(x') \textrm{d}x'\biggr),\qquad x\in I, -$$ - -for every point $x_0$ in I, where C is an arbitrary constant. - -* In particular, the Wronskian $W(y_1,y_2)$ is either always the zero function or always different from zero with the same sign at every point $x$ in $I$. In the latter case, the two solutions $y_1$ and $y_2$ are linearly independent (see the article about the Wronskian for a proof). - -* It is not necessary to assume that the second derivatives of the solutions $y_1$ and $y_2$ are continuous. - -* Abel's theorem is particularly useful if $p(x)=0$, because it implies that $W$ is constant. - -Differentiating the Wronskian using the product rule gives (writing $W$ for $W(y_1,y_2)$ and omitting the argument $x$ for brevity) - -\begin{align} - -W' &= y_1' y_2' + y_1 y_2'' - y_1'' y_2 - y_1' y_2' \\ - -& = y_1 y_2'' - y_1'' y_2. - -\end{align} - -Solving for $y''$ in the original differential equation yields -$$ - y'' = -(py'+qy). -$$ - -Substituting this result into the derivative of the Wronskian function to replace the second derivatives of $y_1$ and $y_2$ gives - -\begin{align} - -W'&= -y_1(py_2'+qy_2)+(py_1'+qy_1)y_2 \\ - -&= -p(y_1y_2'-y_1'y_2)\\ - -&= -pW. - -\end{align} - -This is a first-order linear differential equation, and it remains to show that Abel's identity gives the unique solution, which attains the value $W(x_0)$ at $x_0$. Since the function $p$ is continuous on $I$, it is bounded on every closed and bounded subinterval of $I$ and therefore integrable, hence -$$ -V(x)=W(x) \exp\left(\int_{x_0}^x p(\xi) \textrm{d}\xi\right), \qquad x\in I, -$$ - -is a well-defined function. Differentiating both sides, using the product rule, the chain rule, the derivative of the exponential function and the fundamental theorem of calculus, one obtains -$$ -V'(x)=\bigl(W'(x)+W(x)p(x)\bigr)\exp\biggl(\int_{x_0}^x p(\xi) \textrm{d}\xi\biggr)=0,\qquad x\in I, -$$ - -due to the differential equation for $W$. Therefore, $V$ has to be constant on $I$, because otherwise we would obtain a contradiction to the mean value theorem (applied separately to the real and imaginary part in the complex-valued case). Since $V(x_0) = W(x_0)$, Abel's identity follows by solving the definition of $V$ for $W(x)$.
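For a concrete check, Bessel's equation in the form $y'' + (1/x)y' + (1 - \nu^2/x^2)y = 0$ has $p(x) = 1/x$, so Abel's identity predicts $W(x) = C/x$; for the standard pair $J_\nu$, $Y_\nu$ the constant is known to be $2/\pi$. A short numerical sketch of my own, assuming scipy is available:

```python
import numpy as np
from scipy.special import jv, jvp, yv, yvp   # Bessel J, Y and derivatives

nu = 0.5
x = np.linspace(1.0, 10.0, 5)
W = jv(nu, x) * yvp(nu, x) - jvp(nu, x) * yv(nu, x)
print(W * x)   # constant ~0.63662 = 2/pi, i.e. W(x) = (2/pi)/x
```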
The generalisation of Abel's identity states that the Wronskian $W(y_1,\ldots,y_n)$ of $n$ real- or complex-valued solutions $y_1,\ldots,y_n$ of this $n$th-order differential equation, that is the function defined by the determinant - -W(y_1,\ldots,y_n)(x) - -=\begin{vmatrix} - -y_1(x) & y_2(x) & \cdots & y_n(x)\\ - -y'_1(x) & y'_2(x)& \cdots & y'_n(x)\\ - -\vdots & \vdots & \ddots & \vdots\\ - -y_1^{(n-1)}(x) & y_2^{(n-1)}(x) & \cdots & y_n^{(n-1)}(x) - -\end{vmatrix},\qquad x\in I, - -satisfies the relation -$$ -W(y_1,\ldots,y_n)(x)=W(y_1,\ldots,y_n)(x_0) \exp\biggl(-\int_{x_0}^x p_{n-1}(\xi) \textrm{d}\xi\biggr),\qquad x\in I, -$$ - -for every point $x_0$ in $I$. - -For brevity, we write $W$ for $W(y_1,\ldots,y_n)$ and omit the argument $x$. It suffices to show that the Wronskian solves the first-order linear differential equation -$$ -W'=-p_{n-1}W, -$$ - -because the remaining part of the proof then coincides with the one for the case $n=2$. - -In the case $n=1$ we have $W=y_1$ and the differential equation for $W$ coincides with the one for $y_1$. Therefore, assume $n \geq 2$ in the following. - -The derivative of the Wronskian $W$ is the derivative of the defining determinant. It follows from the Leibniz formula for determinants that this derivative can be calculated by differentiating every row separately, hence - -\begin{align}W' & = - -\begin{vmatrix} - -y'_1 & y'_2 & \cdots & y'_n\\ - -y'_1 & y'_2 & \cdots & y'_n\\ - -y_1 & y_2 & \cdots & y_n\\ - -y_1 & y_2 & \cdots & y_n\\ - -\vdots & \vdots & \ddots & \vdots\\ - -y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} - -\end{vmatrix} - -+ - -\begin{vmatrix} - -y_1 & y_2 & \cdots & y_n\\ - -y_1 & y_2 & \cdots & y_n\\ - -y_1 & y_2 & \cdots & y_n\\ - -y_1 & y_2 & \cdots & y_n\\ - -\vdots & \vdots & \ddots & \vdots\\ - -y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} - -\end{vmatrix}\\ - -&\qquad+\ \cdots\ + - -\begin{vmatrix} - -y_1 & y_2 & \cdots & y_n\\ - -y'_1 & y'_2 & \cdots & y'_n\\ - -\vdots & \vdots & \ddots & \vdots\\ - -y_1^{(n-3)} & y_2^{(n-3)} & \cdots & y_n^{(n-3)}\\ - -y_1^{(n-2)} & y_2^{(n-2)} & \cdots & y_n^{(n-2)}\\ - -y_1^{(n)} & y_2^{(n)} & \cdots & y_n^{(n)} - -\end{vmatrix}.\end{align} - - - -However, note that every determinant from the expansion contains a pair of identical rows, except the last one. Since determinants with linearly dependent rows are equal to 0, one is only left with the last one: - -W'= - -\begin{vmatrix} - -y_1 & y_2 & \cdots & y_n\\ - -y'_1 & y'_2 & \cdots & y'_n\\ - -\vdots & \vdots & \ddots & \vdots\\ - -y_1^{(n-2)} & y_2^{(n-2)} & \cdots & y_n^{(n-2)}\\ - -y_1^{(n)} & y_2^{(n)} & \cdots & y_n^{(n)} - -\end{vmatrix}. - - - -Since every $y_i$ solves the ordinary differential equation, we have -$$ -y_i^{(n)} + p_{n-2}y_i^{(n-2)} + \cdots + p_1y'_i + p_0y_i = -p_{n-1}y_i^{(n-1)} -$$ - -for every $i \in \lbrace 1,\ldots,n \rbrace$. Hence, adding to the last row of the above determinant $p_0$ times its first row, $p_1$ times its second row, and so on until $p_{n-2}$ times its next to last row, the value of the determinant for the derivative of $W$ is unchanged and we get - -W'= - -\begin{vmatrix} - -y_1 & y_2 & \cdots & y_n \\ - -y'_1 & y'_2 & \cdots & y'_n \\ - -\vdots & \vdots & \ddots & \vdots \\ - -y_1^{(n-2)} & y_2^{(n-2)} & \cdots & y_n^{(n-2)} \\ - --p_{n-1}y_1^{(n-1)} & -p_{n-1}y_2^{(n-1)} & \cdots & -p_{n-1}y_n^{(n-1)} - -\end{vmatrix} - -=-p_{n-1}W. 
- - - -The solutions $y_1,\ldots,y_n$ form the square-matrix valued solution - -\Phi(x)=\begin{pmatrix} - -y_1(x) & y_2(x) & \cdots & y_n(x)\\ - -y'_1(x) & y'_2(x)& \cdots & y'_n(x)\\ - -\vdots & \vdots & \ddots & \vdots\\ - -y_1^{(n-2)}(x) & y_2^{(n-2)}(x) & \cdots & y_n^{(n-2)}(x)\\ - -y_1^{(n-1)}(x) & y_2^{(n-1)}(x) & \cdots & y_n^{(n-1)}(x) - -\end{pmatrix},\qquad x\in I, - -of the $n$-dimensional first-order system of homogeneous linear differential equations - -\begin{pmatrix}y'\\y\\\vdots\\y^{(n-1)}\\y^{(n)}\end{pmatrix} - -=\begin{pmatrix}0&1&0&\cdots&0\\ - -0&0&1&\cdots&0\\ - -\vdots&\vdots&\vdots&\ddots&\vdots\\ - -0&0&0&\cdots&1\\ - --p_0(x)&-p_1(x)&-p_2(x)&\cdots&-p_{n-1}(x)\end{pmatrix} - -\begin{pmatrix}y\\y'\\\vdots\\y^{(n-2)}\\y^{(n-1)}\end{pmatrix}. - -The trace of this matrix is $-p_{n-1}(x)$, hence Abel's identity follows directly from Liouville's formula. diff --git a/wiki/wikipedia/3327.txt b/wiki/wikipedia/3327.txt deleted file mode 100644 index 454b33792971f60a19560daffd47bf42bad8b3ef..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3327.txt +++ /dev/null @@ -1,18 +0,0 @@ -In computational complexity theory, the compression theorem is an important theorem about the complexity of computable functions. - -The theorem states that there exists no largest complexity class, with computable boundary, which contains all computable functions. - -Given a Gödel numbering $\varphi$ of the computable functions and a Blum complexity measure $\Phi$ where a complexity class for a boundary function $f$ is defined as -$$ -\mathrm{C}(f):= \{\varphi_i \in \mathbf{R}^{(1)} | (\forall^\infty x) \Phi_i (x) \leq f(x) \}. -$$ - -Then there exists a total computable function $f$ so that for all $i$ -$$ -\mathrm{Dom}(\varphi_i) = \mathrm{Dom}(\varphi_{f(i)}) -$$ - -and -$$ -\mathrm{C}(\varphi_i) \subsetneq \mathrm{C}(\varphi_{f(i)}). -$$ diff --git a/wiki/wikipedia/3328.txt b/wiki/wikipedia/3328.txt deleted file mode 100644 index 8e09433279a2f69df9d2c546d09a78163a7bc2a3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3328.txt +++ /dev/null @@ -1,138 +0,0 @@ -The Snellius–Pothenot problem is a problem in planar surveying. Given three known points A, B and C, an observer at an unknown point P observes that the segment AC subtends an angle $\alpha$ and the segment CB subtends an angle $\beta$; the problem is to determine the position of the point P. (See figure; the point denoted C is between A and B as seen from P). - -Since it involves the observation of known points from an unknown point, the problem is an example of resection. Historically it was first studied by Snellius, who found a solution around 1615. - -First equation - -Denoting the (unknown) angles CAP as x and CBP as y we get: -$$ -x+y = 2 \pi - \alpha - \beta - C -$$ - -by using the sum of the angles formula for the quadrilateral PACB. The variable C represents the (known) internal angle in this quadrilateral at point C. (Note that in the case where the points C and P are on the same side of the line AB, the angle C will be greater than $\pi$). - -Second equation - -Applying the law of sines in triangles PAC and PBC we can express PC in two different ways: -$$ -\frac{{\rm AC} \sin x }{\sin \alpha} = {\rm PC} = \frac{{\rm BC} \sin y}{\sin \beta}. -$$ - -A useful trick at this point is to define an auxiliary angle $\phi$ such that -$$ -\tan \phi = \frac{{\rm BC} \sin \alpha}{{\rm AC} \sin \beta}. 
-$$ - -(A minor note: we should be concerned about division by zero, but consider that the problem is symmetric, so if one of the two given angles is zero we can, if needed, rename that angle alpha and call the other (non-zero) angle beta, reversing the roles of A and B as well. This will suffice to guarantee that the ratio above is well defined. An alternative approach to the zero angle problem is given in the algorithm below.) - -With this substitution the equation becomes -$$ -\frac{\sin x}{\sin y}=\tan \phi. -$$ - -We can use two known trigonometric identities, namely -$$ -\tan \left(\frac{\pi}{4}-\phi\right) = \frac{1- \tan \phi}{\tan \phi +1} -$$ and -$$ -\frac{\tan [(x-y)/2]}{\tan [(x+y)/2]}=\frac{\sin x- \sin y}{\sin x + \sin y} -$$ - -to put this in the form of the second equation we need: -$$ -\tan \frac{1}{2}(x-y) = \tan \frac{1}{2}(\alpha+\beta+C) \tan \left(\frac{\pi}{4}-\phi\right). -$$ - -We now need to solve these two equations in two unknowns. Once x and y are known the various triangles can be solved straightforwardly to determine the position of P. The detailed procedure is shown below. - -Given are two lengths AC and BC, and three angles $\alpha$, $\beta$ and C, the solution proceeds as follows. - -*calculate $\phi= \operatorname{atan2}( {\rm BC} \sin \alpha, {\rm AC} \sin\beta )$, where atan2 is a computer function, also called the arctangent of two arguments, that returns the arctangent of the ratio of the two values given. Note that in Microsoft Excel the two arguments are reversed, so the proper syntax would be '=ATAN2(AC*SIN(beta), BC*SIN(alpha))'. The atan2 function correctly handles the case where one of the two arguments is zero. - -*calculate $K = 2 \pi -\alpha-\beta-C.$ - -*calculate $W = 2\cdot\operatorname{atan}\left[ \tan(\pi/4 - \phi) \tan\left(\frac{1}{2}(\alpha+\beta+C)\right)\right].$ - -*find $x = (K+W)/2$ and $y = (K-W)/2.$ - -*if $|\sin \beta|>|\sin \alpha|$ calculate ${\rm PC} = \frac{{\rm BC} \sin y}{\sin \beta}$ else use ${\rm PC} = \frac{{\rm AC} \sin x}{\sin \alpha}.$ - -*find ${\rm PA} = \sqrt{{\rm AC}^2+{\rm PC}^2 - 2\cdot{\rm AC}\cdot{\rm PC}\cdot\cos(\pi-\alpha-x)}.$ (This comes from the law of cosines.) - -*find ${\rm PB} = \sqrt{{\rm BC}^2+{\rm PC}^2 - 2\cdot{\rm BC}\cdot{\rm PC}\cdot\cos(\pi-\beta-y)}.$ - -If the coordinates of $A: x_A,y_A$ and $C: x_C,y_C$ are known in some appropriate Cartesian coordinate system then the coordinates of $P$ can be found as well. - -By the inscribed angle theorem the locus of points from which AC subtends an angle $\alpha$ is a circle having its center on the midline of AC; from the center O of this circle AC subtends an angle $2 \alpha$. Similarly the locus of points from which CB subtends an angle $\beta$ is another circle. The desired point P is at the intersection of these two loci. - -Therefore, on a map or nautical chart showing the points A, B, C, the following graphical construction can be used: - -*Draw the segment AC, the midpoint M and the midline, which crosses AC perpendicularly at M. On this line find the point O such that $MO=\frac{AC}{2 \tan \alpha}$. Draw the circle with center at O passing through A and C. - -*Repeat the same construction with points B, C and the angle $\beta$. - -*Mark P at the intersection of the two circles (the two circles intersect at two points; one intersection point is C and the other is the desired point P.) - -This method of solution is sometimes called Cassini's method. - -The following solution is based upon a paper by N. J.
Wildberger. It has the advantage that it is almost purely algebraic. The only place trigonometry is used is in converting the angles to spreads. There is only one square root required. - -*define the following: - -**$s(x) = \sin^2(x)$ - -**$A(x,y,z) = (x + y + z)^2 - 2(x^2 + y^2 + z^2)$ - -**$r_1 = s(\beta)$ - -**$r_2 = s(\alpha)$ - -**$r_3 = s(\alpha + \beta)$ - -**$Q_1 = BC^2$ - -**$Q_2 = AC^2$ - -**$Q_3 = AB^2$ - -*now let: - -**$R_1 = r_2 Q_3 / r_3$ - -**$R_2 = r_1 Q_3 / r_3$ - -**$C_0 = ((Q_1 + Q_2 + Q_3) (r_1 + r_2 + r_3) - 2 (Q_1 r_1 + Q_2 r_2 + Q_3 r_3))/(2 r_3)$ - -**$D_0 = r_1 r_2 A(Q_1,Q_2,Q_3)/r_3$ - -*the following equation gives two possible values for $R_3$: - -**$(R_3 - C_0)^2 = D_0$ - -*choosing the larger of these values, let: - -**$v_1 = 1 - (R_1 + R_3 - Q_2)^2 / (4 R_1 R_3)$ - -**$v_2 = 1 - (R_2 + R_3 - Q_1)^2 / (4 R_2 R_3)$ - -*finally we get: - -**$AP^2 = v_1 R_1 / r_2 = v_1 Q_3 / r_3$ - -**$BP^2 = v_2 R_2 / r_1 = v_2 Q_3 / r_3$ - -When the point P happens to be located on the same circle as A, B and C, the problem has an infinite number of solutions; the reason is that from any other point P' located on the arc APB of this circle the observer sees the same angles alpha and beta as from P (inscribed angle theorem). Thus the solution in this case is not uniquely determined. - -The circle through ABC is known as the "danger circle", and observations made on (or very close to) this circle should be avoided. It is helpful to plot this circle on a map before making the observations. - -A theorem on cyclic quadrilaterals is helpful in detecting the indeterminate situation. The quadrilateral APBC is cyclic iff a pair of opposite angles (such as the angle at P and the angle at C) are supplementary i.e. iff $\alpha+\beta+C = k \pi, (k=1,2,\cdots)$. If this condition is observed the computer/spreadsheet calculations should be stopped and an error message ("indeterminate case") returned. - -(Adapted from Bowser, exercise 140, page 203). A, B and C are three objects such that AC = 435 (yards), CB = 320, and C = 255.8 degrees. From a station P it is observed that APC = 30 degrees and CPB = 15 degrees. Find the distances of P from A, B and C. (Note that in this case the points C and P are on the same side of the line AB, a different configuration from the one considered above). - -Answer: PA = 790, PB = 777, PC = 502. - -A slightly more challenging test case for a computer program uses the same data but this time with CPB = 0. The program should return the answers 843, 1157 and 837. - -The British authority on geodesy, George Tyrrell McCaw (1870–1942) wrote that the proper term in English was Snellius problem, while Snellius-Pothenot was the continental European usage. - -McCaw thought the name of Laurent Pothenot (1650–1732) did not deserve to be included as he had made no original contribution, but merely restated Snellius 75 years later. diff --git a/wiki/wikipedia/3329.txt b/wiki/wikipedia/3329.txt deleted file mode 100644 index b31011747dda6033d2a6274d9f769d4e9aa22f49..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3329.txt +++ /dev/null @@ -1,183 +0,0 @@ -Some elementary examples of groups in mathematics are given on Group (mathematics). - -Further examples are listed here. - -Consider three colored blocks (red, green, and blue), initially placed in the order RGB. Let a be the operation "swap the first block and the second block", and b be the operation "swap the second block and the third block".
- -We can write xy for the operation "first do y, then do x"; so that ab is the operation RGB → RBG → BRG, which could be described as "move the first two blocks one position to the right and put the third block into the first position". If we write e for "leave the blocks as they are" (the identity operation), then we can write the six permutations of the three blocks as follows: - -* e : RGB → RGB - -* a : RGB → GRB - -* b : RGB → RBG - -* ab : RGB → BRG - -* ba : RGB → GBR - -* aba : RGB → BGR - -Note that aa has the effect RGB → GRB → RGB; so we can write aa = e. Similarly, bb = e and (aba)(aba) = e; (ab)(ba) = (ba)(ab) = e; so every element has an inverse. - -By inspection, we can determine associativity and closure; note in particular that (ba)b = bab = b(ab). - -Since it is built up from the basic operations a and b, we say that the set {a, b} generates this group. The group, called the symmetric group S_3, has order 6, and is non-abelian (since, for example, ab ≠ ba). - -A translation of the plane is a rigid movement of every point of the plane for a certain distance in a certain direction. - -For instance "move in the North-East direction for 2 miles" is a translation of the plane. - -Two translations such as a and b can be composed to form a new translation a ∘ b as follows: first follow the prescription of b, then that of a. - -For instance, if - -a = "move North-East for 3 miles" - -and - -b = "move South-East for 4 miles" - -then - -a ∘ b = "move to bearing −8.13° for 5 miles" (bearing is measured counterclockwise and from East) - -Or, if - -a = "move to bearing 36.87° for 4 miles" (bearing is measured counterclockwise and from East) - -and - -b = "move to bearing 306.87° for 3 miles" (bearing is measured counterclockwise and from East) - -then - -a ∘ b = "move East for 5 miles" - -(see Pythagorean theorem for why this is so, geometrically). - -The set of all translations of the plane with composition as the operation forms a group: - -#If a and b are translations, then a ∘ b is also a translation. - -#Composition of translations is associative: (a ∘ b) ∘ c = a ∘ (b ∘ c). - -#The identity element for this group is the translation with prescription "move zero miles in any direction". - -#The inverse of a translation is given by walking in the opposite direction for the same distance. - -This is an abelian group and our first (nondiscrete) example of a Lie group: a group which is also a manifold. - -Groups are very important to describe the symmetry of objects, be they geometrical (like a tetrahedron) or algebraic (like a set of equations). - -As an example, we consider a glass square of a certain thickness (with a letter "F" written on it, just to make the different positions distinguishable). - -In order to describe its symmetry, we form the set of all those rigid movements of the square that don't make a visible difference (except the "F"). - -For instance, if the square turned 90° clockwise still looks the same, the movement is one element of the set; call it a. - -We could also flip it horizontally so that its underside becomes its top side, while the left edge becomes the right edge. - -Again, after performing this movement, the glass square looks the same, so this is also an element of our set and we call it b. - -The movement that does nothing is denoted by e. - -Given two such movements x and y, it is possible to define the composition x ∘ y as above: first the movement y is performed, followed by the movement x. - -The result will leave the square looking as before.
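This composition rule can be tried out directly in code. Below is a minimal Python sketch (the string encoding of the square is our own choice: corner labels listed clockwise from the top-left); it also confirms the relations a^4 = e, b^2 = e and a ∘ b = b ∘ a^3 that are discussed below.

```
# corners listed clockwise from top-left; a = 90-degree clockwise turn,
# b = flip about the vertical axis
def a(s): return s[3] + s[0] + s[1] + s[2]
def b(s): return s[1] + s[0] + s[3] + s[2]
def compose(f, g): return lambda s: f(g(s))   # first do g, then do f

s = "1234"
a2 = compose(a, a)
a3 = compose(a, a2)
print(compose(a, b)(s), compose(b, a3)(s))    # both '3214': a o b = b o a^3
print(compose(a, a3)(s) == s, b(b(s)) == s)   # a^4 = e and b^2 = e: True True
```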
- -The point is that the set of all those movements, with composition as the operation, forms a group. - -This group is the most concise description of the square's symmetry. - -Chemists use symmetry groups of this type to describe the symmetry of crystals and molecules. - -Let's investigate our square's symmetry group some more. Right now, we have the elements a, b and e, but we can easily form more: - -for instance a ∘ a, also written as a^2, is a 180° turn. - -a^3 is a 270° clockwise rotation (or a 90° counter-clockwise rotation). - -We also see that b^2 = e and also a^4 = e. - -Here's an interesting one: what does a ∘ b do? - -First flip horizontally, then rotate. - -Try to visualize that a ∘ b = b ∘ a^3. - -Also, a^2 ∘ b is a vertical flip and is equal to b ∘ a^2. - -We say that elements a and b generate the group. - -This group of order 8 has the following Cayley table: - -For any two elements in the group, the table records what their composition is. Here we wrote "a^3b" as a shorthand for a^3 ∘ b. - -In mathematics this group is known as the dihedral group of order 8, and is either denoted Dih_4, D_4 or D_8, depending on the convention. - -This was an example of a non-abelian group: the operation ∘ here is not commutative, which can be seen from the table; the table is not symmetrical about the main diagonal. - -This version of the Cayley table shows that this group has one normal subgroup shown with a red background. In this table r means rotations, and f means flips. Because the subgroup is normal, the left coset is the same as the right coset. - -The free group with two generators a and b consists of all finite strings/words that can be formed from the four symbols a, a^{-1}, b and b^{-1} such that no a appears directly next to an a^{-1} and no b appears directly next to a b^{-1}. - -Two such strings can be concatenated and converted into a string of this type by repeatedly replacing the "forbidden" substrings with the empty string. - -For instance: "abab^{-1}a^{-1}" concatenated with - -"abab^{-1}a" yields "abab^{-1}a^{-1}abab^{-1}a", which gets reduced to "abaab^{-1}a". - -One can check that the set of those strings with this operation forms a group with the empty string ε := "" being the identity element - -(Usually the quotation marks are left off; this is why the symbol ε is required). - -This is another infinite non-abelian group. - -Free groups are important in algebraic topology; the free group in two generators is also used for a proof of the Banach–Tarski paradox. - -Let G be a group and S a set. The set of maps M(S, G) is itself a group; namely for two maps f, g of S into G we define fg to be the map such that (fg)(x) = f(x)g(x) for every x in S and f^{-1} to be the map such that f^{-1}(x) = f(x)^{-1}. - -Take maps f, g, and h in M(S, G). - -For every x in S, f(x) and g(x) are both in G, and so is (fg)(x). - -Therefore, fg is also in M(S, G), i.e. M(S, G) is closed. - -The operation on M(S, G) is associative because ((fg)h)(x) = (fg)(x)h(x) = (f(x)g(x))h(x) = f(x)(g(x)h(x)) = f(x)(gh)(x) = (f(gh))(x). - -And there is a map i such that i(x) = e where e is the identity element of G. - -The map i is such that for all f in M(S, G) we have - -fi = if = f, i.e. i is the identity element of M(S, G). - -Thus, M(S, G) is actually a group. - -If G is abelian then (fg)(x) = f(x)g(x) = g(x)f(x) = (gf)(x), and therefore so is M(S, G). - -Let G be the set of bijective mappings of a set S onto itself. Then G forms a group under ordinary composition of mappings.
This group is called the symmetric group, and is commonly denoted $\operatorname{Sym}(S)$, Σ_S, or $\mathfrak{S}_{S}$. The identity element of G is the identity map of S. Since any two maps f, g in G are bijective, fg is also bijective. Therefore, G is closed. The composition of maps is associative; hence G is a group. S may be either finite or infinite. - -If n is some positive integer, we can consider the set of all invertible n by n matrices with real number components, say. - -This is a group with matrix multiplication as the operation. It is called the general linear group, and denoted GL_n(R) or GL(n, R) (where R is the set of real numbers). Geometrically, it contains all combinations of rotations, reflections, dilations and skew transformations of n-dimensional Euclidean space that fix a given point (the origin). - -If we restrict ourselves to matrices with determinant 1, then we get another group, the special linear group, SL_n(R) or SL(n, R). - -Geometrically, this consists of all the elements of GL_n(R) that preserve both orientation and volume of the various geometric solids in Euclidean space. - -If instead we restrict ourselves to orthogonal matrices, then we get the orthogonal group O_n(R) or O(n, R). - -Geometrically, this consists of all combinations of rotations and reflections that fix the origin. These are precisely the transformations which preserve lengths and angles. - -Finally, if we impose both restrictions, then we get the special orthogonal group SO_n(R) or SO(n, R), which consists of rotations only. - -These groups are our first examples of infinite non-abelian groups. They also happen to be Lie groups. In fact, most of the important Lie groups (but not all) can be expressed as matrix groups. - -If this idea is generalised to matrices with complex numbers as entries, then we get further useful Lie groups, such as the unitary group U(n). - -We can also consider matrices with quaternions as entries; in this case, there is no well-defined notion of a determinant (and thus no good way to define a quaternionic "volume"), but we can still define a group analogous to the orthogonal group, the symplectic group Sp(n). - -Furthermore, the idea can be treated purely algebraically with matrices over any field, but then the groups are not Lie groups. - -For example, we have the general linear groups over finite fields. The group theorist J. L. Alperin has written that "The typical example of a finite group is GL(n, q), the general linear group of n dimensions over the field with q elements. The student who is introduced to the subject with other examples is being completely misled." diff --git a/wiki/wikipedia/333.txt b/wiki/wikipedia/333.txt deleted file mode 100644 index 57326bf9e7bf089ba873d41889eaa41530202c78..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/333.txt +++ /dev/null @@ -1,103 +0,0 @@ -In probability theory, Cantelli's inequality (also called the Chebyshev-Cantelli inequality and the one-sided Chebyshev inequality) is an improved version of Chebyshev's inequality for one-sided tail bounds. The inequality states that, for $\lambda > 0,$ - - - -\Pr(X-\mathbb{E}[X]\ge\lambda) - -\le \frac{\sigma^2}{\sigma^2 + \lambda^2}, - - - -where -$$ -X -$$ is a real-valued random variable, -$$ -\Pr -$$ is the probability measure, -$$ -\mathbb{E}[X] -$$ is the expected value of $X$, -$$ -\sigma^2 -$$ is the variance of $X$.
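As a quick numerical sanity check of the bound just stated, one can sample from a concrete distribution and compare the empirical one-sided tail with σ²/(σ² + λ²). A minimal Python sketch, using an exponential variable with mean 1 and variance 1 (our choice of example):

```
import random

random.seed(0)
N = 10**6
lam = 2.0
# exponential with rate 1: mean 1, variance 1, so the bound is 1/(1 + lam^2)
samples = [random.expovariate(1.0) for _ in range(N)]
empirical = sum(x - 1.0 >= lam for x in samples) / N
print(empirical, 1 / (1 + lam**2))  # about 0.05 versus the bound 0.2
```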
- -Applying the Cantelli inequality to $-X$ gives a bound on the lower tail, - - - -\Pr(X-\mathbb{E}[X]\le -\lambda) - -\le \frac{\sigma^2}{\sigma^2 + \lambda^2}. - - - -While the inequality is often attributed to Francesco Paolo Cantelli, who published it in 1928, it originates in Chebyshev's work of 1874. When bounding the event that a random variable deviates from its mean in only one direction (positive or negative), Cantelli's inequality gives an improvement over Chebyshev's inequality. The Chebyshev inequality has "higher moments versions" and "vector versions", and so does the Cantelli inequality. - -For one-sided tail bounds, Cantelli's inequality is better, since Chebyshev's inequality can only get - - - -\Pr(X - \mathbb{E}[X] \geq \lambda) \leq \Pr(|X-\mathbb{E}[X]|\ge\lambda) - -\le \frac{\sigma^2}{\lambda^2}. - - - -On the other hand, for two-sided tail bounds, Cantelli's inequality gives - - - -\Pr(|X-\mathbb{E}[X]|\ge\lambda) - -= \Pr(X-\mathbb{E}[X]\ge\lambda) + \Pr(X-\mathbb{E}[X]\le-\lambda) - -\le \frac{2\sigma^2}{\sigma^2 + \lambda^2}, - - - -which is always worse than Chebyshev's inequality (when $\lambda \geq \sigma$; otherwise, both inequalities bound a probability by a value greater than one, and so are trivial). - -Let $X$ be a real-valued random variable with finite variance $\sigma^2$ and expectation $\mu$, and define $Y = X - \mathbb{E}[X]$ (so that $\mathbb{E}[Y] = 0$ and $\operatorname{Var}(Y) = \sigma^2$). - -Then, for any $u\geq 0$, we have - - - -\Pr( X-\mathbb{E}[X]\geq\lambda) - -= \Pr( Y \geq \lambda) - -= \Pr( Y + u \geq \lambda + u) - -\leq \Pr( (Y + u)^2 \geq (\lambda + u)^2 ) - -\leq \frac{\mathbb{E}[(Y + u)^2] }{(\lambda + u)^2} - -= \frac{\sigma^2 + u^2 }{(\lambda + u)^2}, - - - -the last inequality being a consequence of Markov's inequality. As the above holds for any choice of $u\geq 0$, we can choose to apply it with the value that minimizes the function $u \geq 0 \mapsto \frac{\sigma^2 + u^2 }{(\lambda + u)^2}$. By differentiating, this can be seen to be $u_\ast = \frac{\sigma^2}{\lambda}$, leading to - - - -\Pr( X-\mathbb{E}[X] \geq\lambda) \leq \frac{\sigma^2 + u_\ast^2 }{(\lambda + u_\ast)^2} = \frac{\sigma^2}{\lambda^2 + \sigma^2} - -for $\lambda > 0$. - -Using more moments, various stronger inequalities can be shown. - -He, Zhang, and Zhang showed that, when -$$ -\mathbb{E}[X]=0 -$$ and -$$ -\mathbb{E}[X^2]=1 -$$: - - - -\Pr(X\ge\lambda) \le 1- (2\sqrt{3}-3)\frac{(1+\lambda^2)^2}{\mathbb{E}[X^4]+6\lambda^2+\lambda^4} - - diff --git a/wiki/wikipedia/3330.txt b/wiki/wikipedia/3330.txt deleted file mode 100644 index 89ddf8d4670068ab1347a98d2cb5390b2078bcef..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3330.txt +++ /dev/null @@ -1 +0,0 @@ -In algebraic topology, the Dold–Thom theorem, proved by Albrecht Dold and René Thom in 1958, states that the homotopy group $\pi_i(\mathrm{SP}(X))$ of the infinite symmetric product SP(X) of a connected CW complex X is the i-th singular reduced homology group of X, usually denoted by $\widetilde{H}_i(X;\mathbb{Z})$. The Almgren isomorphism theorem is a generalization of the Dold–Thom theorem.
diff --git a/wiki/wikipedia/3331.txt b/wiki/wikipedia/3331.txt deleted file mode 100644 index be0a0b89299badad2036efb07e9131e6507ef91e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3331.txt +++ /dev/null @@ -1,65 +0,0 @@ -In Euclidean geometry, the Erdős–Mordell inequality states that for any triangle ABC and point P inside ABC, the sum of the distances from P to the sides is less than or equal to half of the sum of the distances from P to the vertices. It is named after Paul Erdős and Louis Mordell. Erdős posed the problem of proving the inequality; a proof was provided two years later by Louis Mordell and D. F. Barrow. This solution, however, was not very elementary. Subsequent simpler proofs were then found by Kazarinoff, Bankoff, and Alsina. - -Barrow's inequality is a strengthened version of the Erdős–Mordell inequality in which the distances from P to the sides are replaced by the distances from P to the points where the angle bisectors of ∠APB, ∠BPC, and ∠CPA cross the sides. Although the replaced distances are longer, their sum is still less than or equal to half the sum of the distances to the vertices. - -Let $P$ be an arbitrary point inside a given triangle $ABC$, and let $PL$, $PM$, and $PN$ be the perpendiculars from $P$ to the sides of the triangle. - -(If the triangle is obtuse, one of these perpendiculars may cross through a different side of the triangle and end on the line supporting one of the sides.) Then the inequality states that -$$ -PA+PB+PC\geq 2(PL+PM+PN) -$$ - -Let the sides of ABC be a opposite A, b opposite B, and c opposite C; also let PA = p, PB = q, PC = r, dist(P;BC) = x, dist(P;CA) = y, dist(P;AB) = z. First, we prove that -$$ -cr\geq ax+by. -$$ - -This is equivalent to -$$ -\frac{c(r+z)}2\geq \frac{ax+by+cz}2. -$$ - -The right side is the area of triangle ABC, but on the left side, r + z is at least the height of the triangle; consequently, the left side cannot be smaller than the right side. Now reflect P on the angle bisector at C. We find that cr ≥ ay + bx for P's reflection. Similarly, bq ≥ az + cx and ap ≥ bz + cy. We solve these inequalities for r, q, and p: -$$ -r\geq (a/c)y+(b/c)x, -$$ -$$ -q\geq (a/b)z+(c/b)x, -$$ -$$ -p\geq (b/a)z+(c/a)y. -$$ - -Adding the three up, we get - - - -p + q + r - -\geq - -\left( \frac{b}{c} + \frac{c}{b} \right) x + - -\left( \frac{a}{c} + \frac{c}{a} \right) y + - -\left( \frac{a}{b} + \frac{b}{a} \right) z. - - - -Since the sum of a positive number and its reciprocal is at least 2 by the AM–GM inequality, we are finished. Equality holds only for the equilateral triangle, where P is its centroid. - -Let ABC be a triangle inscribed in a circle (O) and P be a point inside of ABC. Let D, E, F be the orthogonal projections of P onto BC, CA, AB, and let M, N, Q be the orthogonal projections of P onto tangents to (O) at A, B, C respectively; then: -$$ - PM+PN+PQ \ge 2(PD+PE+PF) -$$ - -Equality holds if and only if triangle ABC is equilateral. - -Let $A_1A_2...A_n$ be a convex polygon, and $P$ be an interior point of $A_1A_2...A_n$.
Let $R_i$ be the distance from $P$ to the vertex $A_i$, $r_i$ the distance from $P$ to the side $A_iA_{i+1}$, and $w_i$ the segment of the bisector of the angle $A_iPA_{i+1}$ from $P$ to its intersection with the side $A_iA_{i+1}$; then: -$$ - \sum_{i=1}^{n}R_i \ge \left(\sec{\frac{\pi}{n}}\right)\sum_{i=1}^{n} w_i \ge \left(\sec{\frac{\pi}{n}}\right)\sum_{i=1}^{n} r_i -$$ - -In absolute geometry the Erdős–Mordell inequality is equivalent, as proved in Pambuccian, to the statement - -that the sum of the angles of a triangle is less than or equal to two right angles. diff --git a/wiki/wikipedia/3332.txt b/wiki/wikipedia/3332.txt deleted file mode 100644 index 1da677fa779a79c80d1b4bcf7853cf87b4e4b716..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3332.txt +++ /dev/null @@ -1,54 +0,0 @@ -In mathematics, the Hurewicz theorem is a basic result of algebraic topology, connecting homotopy theory with homology theory via a map known as the Hurewicz homomorphism. The theorem is named after Witold Hurewicz, and generalizes earlier results of Henri Poincaré. - -The Hurewicz theorems are a key link between homotopy groups and homology groups. - -For any path-connected space X and positive integer n there exists a group homomorphism -$$ -h_* \colon \pi_n(X) \to H_n(X), -$$ - -called the Hurewicz homomorphism, from the n-th homotopy group to the n-th homology group (with integer coefficients). It is given in the following way: choose a canonical generator $u_n \in H_n(S^n)$, then a homotopy class of maps $f \in \pi_n(X)$ is taken to $f_*(u_n) \in H_n(X)$. - -For $n=1$ this homomorphism induces an isomorphism -$$ -\tilde{h}_* \colon \pi_1(X)/[ \pi_1(X), \pi_1(X)] \to H_1(X) -$$ - -between the abelianization of the first homotopy group (the fundamental group) and the first homology group. - -If $n\ge 2$ and X is $(n-1)$-connected, the Hurewicz map $h_* \colon \pi_n(X) \to H_n(X)$ is an isomorphism. In addition, the Hurewicz map $h_* \colon \pi_{n+1}(X) \to H_{n+1}(X)$ is an epimorphism in this case. - -For any pair of spaces $(X,A)$ and integer $k>1$ there exists a homomorphism -$$ -h_* \colon \pi_k(X,A) \to H_k(X,A) -$$ - -from relative homotopy groups to relative homology groups. The Relative Hurewicz Theorem states that if both $X$ and $A$ are connected and the pair is $(n-1)$-connected then $H_k(X,A)=0$ for $k<n$ and $H_n(X,A)$ is obtained from $\pi_n(X,A)$ by factoring out the action of $\pi_1(A)$. There is also a version stated in terms of cat$^n$-groups for $n>2$ (crossed modules if $n=2$), which itself is deduced from a higher homotopy van Kampen theorem for relative homotopy groups, whose proof requires development of techniques of a cubical higher homotopy groupoid of a filtered space. - -For any triad of spaces $(X;A,B)$ (i.e., a space X and subspaces A, B) and integer $k>2$ there exists a homomorphism -$$ -h_*\colon \pi_k(X;A,B) \to H_k(X;A,B) -$$ - -from triad homotopy groups to triad homology groups. Note that -$$ -H_k(X;A,B) \cong H_k(X\cup (C(A\cup B))). -$$ - -The Triadic Hurewicz Theorem states that if X, A, B, and $C=A\cap B$ are connected, the pairs $(A,C)$ and $(B,C)$ are $(p-1)$-connected and $(q-1)$-connected, respectively, and the triad $(X;A,B)$ is $(p+q-2)$-connected, then $H_k(X;A,B)=0$ for $k<p+q$ and $H_{p+q}(X;A,B)$ is obtained from $\pi_{p+q}(X;A,B)$ by factoring out the action of $\pi_1(A\cap B)$ and the generalised Whitehead products. The largest known prime quadruplet begins with $p = 667674063382677 \times 2^{33608} - 1$, found by Peter Kaiser.
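Small prime quadruplets are easy to enumerate by brute force; a minimal self-contained Python sketch (trial-division primality is fine at this scale, and the function names are our own):

```
def is_prime(n):
    if n < 2: return False
    if n % 2 == 0: return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0: return False
        d += 2
    return True

def quadruplets(limit):
    # prime quadruplets {p, p+2, p+6, p+8} with p <= limit
    return [p for p in range(5, limit + 1)
            if all(is_prime(p + k) for k in (0, 2, 6, 8))]

print(quadruplets(2000))  # [5, 11, 101, 191, 821, 1481, 1871]
# well-known structural fact: apart from {5, 7, 11, 13}, every
# quadruplet starts at p = 30n + 11 (forced by divisibility by 2, 3, 5)
print(all(p % 30 == 11 for p in quadruplets(10**5)[1:]))  # True
```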
- -The constant representing the sum of the reciprocals of all prime quadruplets, Brun's constant for prime quadruplets, denoted by B4, is the sum of the reciprocals of all prime quadruplets: - -B_4 = \left(\frac{1}{5} + \frac{1}{7} + \frac{1}{11} + \frac{1}{13}\right) - -+ \left(\frac{1}{11} + \frac{1}{13} + \frac{1}{17} + \frac{1}{19}\right) - -+ \left(\frac{1}{101} + \frac{1}{103} + \frac{1}{107} + \frac{1}{109}\right) + \cdots - -with value: - -B4 = 0.87058 83800 ± 0.00000 00005. - -This constant should not be confused with the Brun's constant for cousin primes, prime pairs of the form (p, p + 4), which is also written as B4. - -The prime quadruplet {11, 13, 17, 19} is alleged to appear on the Ishango bone although this is disputed. - -Excluding the first prime quadruplet, the shortest possible distance between two quadruplets {p, p+2, p+6, p+8} and {q, q+2, q+6, q+8} is q - p = 30. The first occurrences of this are for p = 1006301, 2594951, 3919211, 9600551, 10531061, ... (). - -The Skewes number for prime quadruplets {p, p+2, p+6, p+8} is $1172531$ (Tóth). - -If {p, p+2, p+6, p+8} is a prime quadruplet and p-4 or p+12 is also prime, then the five primes form a prime quintuplet which is the closest admissible constellation of five primes. - -The first few prime quintuplets with p+12 are: - -{5, 7, 11, 13, 17}, {11, 13, 17, 19, 23}, {101, 103, 107, 109, 113}, {1481, 1483, 1487, 1489, 1493}, {16061, 16063, 16067, 16069, 16073}, {19421, 19423, 19427, 19429, 19433}, {21011, 21013, 21017, 21019, 21023}, {22271, 22273, 22277, 22279, 22283}, {43781, 43783, 43787, 43789, 43793}, {55331, 55333, 55337, 55339, 55343} ... . - -The first prime quintuplets with p-4 are: - -{7, 11, 13, 17, 19}, {97, 101, 103, 107, 109}, {1867, 1871, 1873, 1877, 1879}, {3457, 3461, 3463, 3467, 3469}, {5647, 5651, 5653, 5657, 5659}, {15727, 15731, 15733, 15737, 15739}, {16057, 16061, 16063, 16067, 16069}, {19417, 19421, 19423, 19427, 19429}, {43777, 43781, 43783, 43787, 43789}, {79687, 79691, 79693, 79697, 79699}, {88807, 88811, 88813, 88817, 88819} ... . - -A prime quintuplet contains two close pairs of twin primes, a prime quadruplet, and three overlapping prime triplets. - -It is not known if there are infinitely many prime quintuplets. Once again, proving the twin prime conjecture might not necessarily prove that there are also infinitely many prime quintuplets. Also, proving that there are infinitely many prime quadruplets might not necessarily prove that there are infinitely many prime quintuplets. - -The Skewes number for prime quintuplets {p, p+2, p+6, p+8, p+12} is $21432401$ (Tóth). - -If both p-4 and p+12 are prime then it becomes a prime sextuplet. The first few: - -{7, 11, 13, 17, 19, 23}, {97, 101, 103, 107, 109, 113}, {16057, 16061, 16063, 16067, 16069, 16073}, {19417, 19421, 19423, 19427, 19429, 19433}, {43777, 43781, 43783, 43787, 43789, 43793} - -Some sources also call {5, 7, 11, 13, 17, 19} a prime sextuplet. Our definition, all cases of primes {p-4, p, p+2, p+6, p+8, p+12}, follows from defining a prime sextuplet as the closest admissible constellation of six primes. - -A prime sextuplet contains two close pairs of twin primes, a prime quadruplet, four overlapping prime triplets, and two overlapping prime quintuplets. - -All prime sextuplets except {7, 11, 13, 17, 19, 23} are of the form {210n + 97, 210n + 101, 210n + 103, 210n + 107, 210n + 109, 210n + 113} for some integer n. (This structure is necessary to ensure that none of the six primes is divisible by 2, 3, 5 or 7). 
- -It is not known if there are infinitely many prime sextuplets. Once again, proving the twin prime conjecture might not necessarily prove that there are also infinitely many prime sextuplets. Also, proving that there are infinitely many prime quintuplets might not necessarily prove that there are infinitely many prime sextuplets. - -In the digital currency riecoin one of the goals is to find prime sextuplets for large prime numbers p using distributed computing. - -The Skewes number for the tuplet {p, p+4, p+6, p+10, p+12, p+16} is $251331775687$ (Tóth). - -Prime quadruplets, quintuplets, and sextuplets are examples of prime constellations, and prime constellations are in turn examples of prime k-tuples. A prime constellation is a grouping of $k$ primes, with minimum prime $p$ and maximum prime $p+n$, meeting the following two conditions: - -* Not all residues modulo $q$ are represented for any prime $q$ - -* For any given $k$, the value of $n$ is the minimum possible - -More generally, a prime k-tuple occurs if the first condition but not necessarily the second condition is met. diff --git a/wiki/wikipedia/3334.txt b/wiki/wikipedia/3334.txt deleted file mode 100644 index e784aa29a13438f4427b8c87729190f5da00aae6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3334.txt +++ /dev/null @@ -1,15 +0,0 @@ -Mindjet is a mind mapping and innovation management software company headquartered in San Francisco, California. Mindjet's software products, including its flagship product MindManager and SpigitEngage, are designed to visually and collaboratively manage information and tasks. Mindjet had approximately sixteen million users. - -Mindjet was founded in 1998 by computer programmer Mike Jetter and his wife, Bettina Jetter, in order to support the development of their mind mapping software, MindManager. Jetter conceived of the idea for the first product while recovering from an illness in hospital, and began developing the program while living in Germany in 1994, aiming to simplify the creation and sharing of mind maps for business users. In August 2001, Mindjet received approximately $5 million in venture capital from London-based investment group 3i, which the company used to market MindManager in the U.S. and Europe. Scott Raskin, the former chief operating officer for Telelogic, was named CEO of Mindjet in 2006. - -In 2011, the company acquired Thinking Space, an Android-based information mapping application. The acquisition of Cohuman enabled Mindjet to launch a new collaborative working service called Mindjet Connect on September 22, 2011. - -Mindjet had 270 employees. The company's headquarters are located in San Francisco; the company is led by a board of directors including founder Mike Jetter; Noah Walley, managing director of Investor Growth Capital; and former Visio Corporation CEO Jeremy Jaech. - -In 2013, Mindjet acquired innovation management company Spigit, and incorporated their software product SpigitEngage into its product suite. - -Mindjet develops mind mapping and innovation management software for Microsoft Windows and Mac OS. - -Until 2012, the company's products focused on mind mapping, collaboration and project management. The company's MindManager displayed information in mind maps using colors, words, images and spatial relationships. Following the acquisition of Cohuman in 2011, Mindjet launched Mindjet Connect, a cloud-based service for collaborative working.
- -In December 2011, Mindjet reported 350,000 downloads for its iOS app and 1.1 million downloads for its Android-based app. The company also changed from a purchase-based model to a subscription-based model. diff --git a/wiki/wikipedia/3335.txt b/wiki/wikipedia/3335.txt deleted file mode 100644 index 63f9550d295b7dee1d186f0c8c906d8ee9a71fed..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3335.txt +++ /dev/null @@ -1,39 +0,0 @@ -In mathematics, Haboush's theorem, often still referred to as the Mumford conjecture, states that for any semisimple algebraic group G over a field K, and for any linear representation ρ of G on a K-vector space V, given v ≠ 0 in V that is fixed by the action of G, there is a G-invariant polynomial F on V, without constant term, such that - -F(v) ≠ 0. - -The polynomial can be taken to be homogeneous, in other words an element of a symmetric power of the dual of V, and if the characteristic is p>0 the degree of the polynomial can be taken to be a power of p. - -When K has characteristic 0 this was well known; in fact Weyl's theorem on the complete reducibility of the representations of G implies that F can even be taken to be linear. Mumford's conjecture about the extension to prime characteristic p was proved by W. J. Haboush, about a decade after the problem had been posed by David Mumford, in the introduction to the first edition of his book Geometric Invariant Theory. - -Haboush's theorem can be used to generalize results of geometric invariant theory from characteristic 0, where they were already known, to characteristic p>0. In particular Nagata's earlier results together with Haboush's theorem show that if a reductive group (over an algebraically closed field) acts on a finitely generated algebra then the fixed subalgebra is also finitely generated. - -Haboush's theorem implies that if G is a reductive algebraic group acting regularly on an affine algebraic variety, then disjoint closed invariant sets X and Y can be separated by an invariant function f (this means that f is 0 on X and 1 on Y). - -C.S. Seshadri (1977) extended Haboush's theorem to reductive groups over schemes. - -It follows from the work of Nagata, Haboush, and Popov that the following conditions are equivalent for an affine algebraic group G over a field K: - -*G is reductive (its unipotent radical is trivial). - -*For any non-zero invariant vector in a rational representation of G, there is an invariant homogeneous polynomial that does not vanish on it. - -*For any finitely generated K algebra on which G acts rationally, the algebra of fixed elements is finitely generated. - -The theorem is proved in several steps as follows: - -*We can assume that the group is defined over an algebraically closed field K of characteristic p>0. - -*Finite groups are easy to deal with as one can just take a product over all elements, so one can reduce to the case of connected reductive groups (as the connected component has finite index). By taking a central extension, which is harmless, one can also assume that the group G is simply connected. - -*Let A(G) be the coordinate ring of G. This is a representation of G with G acting by left translations. Pick an element v′ of the dual of V that has value 1 on the invariant vector v. Map V to A(G) by sending w∈V to the element a∈A(G) with a(g) = v′(g(w)). This sends v to 1∈A(G), so we can assume that V⊂A(G) and v=1. - -*The structure of the representation A(G) is given as follows.
Pick a maximal torus T of G, and let it act on A(G) by right translations (so that it commutes with the action of G). Then A(G) splits as a sum over characters λ of T of the subrepresentations A(G)_λ of elements transforming according to λ. So we can assume that V is contained in the T-invariant subspace A(G)_λ of A(G). - -*The representation A(G)_λ is an increasing union of subrepresentations of the form $E_{\lambda+n\rho}\otimes E_{n\rho}$, where ρ is the Weyl vector for a choice of simple roots of T, n is a positive integer, and $E_\mu$ is the space of sections of the line bundle over G/B corresponding to a character μ of T, where B is a Borel subgroup containing T. - -*If n is sufficiently large then $E_{n\rho}$ has dimension $(n+1)^N$ where N is the number of positive roots. This is because in characteristic 0 the corresponding module has this dimension by the Weyl character formula, and for n large enough that the line bundle over G/B is very ample, $E_{n\rho}$ has the same dimension as in characteristic 0. - -*If $q=p^r$ for a positive integer r, and $n=q-1$, then $E_{n\rho}$ contains the Steinberg representation of $G(F_q)$ of dimension $q^N$. (Here $F_q \subset K$ is the finite field of order q.) The Steinberg representation is an irreducible representation of $G(F_q)$ and therefore of G(K), and for r large enough it has the same dimension as $E_{n\rho}$, so there are infinitely many values of n such that $E_{n\rho}$ is irreducible. - -*If $E_{n\rho}$ is irreducible it is isomorphic to its dual, so $E_{n\rho}\otimes E_{n\rho}$ is isomorphic to End($E_{n\rho}$). Therefore, the T-invariant subspace A(G)_λ of A(G) is an increasing union of subrepresentations of the form End(E) for representations E (of the form $E_{(q-1)\rho}$). However, for representations of the form End(E) an invariant polynomial that separates 0 and 1 is given by the determinant. This completes the sketch of the proof of Haboush's theorem. diff --git a/wiki/wikipedia/3336.txt b/wiki/wikipedia/3336.txt deleted file mode 100644 index 858d659655ab96f7c22b9fc541d1e46a67cf58a7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3336.txt +++ /dev/null @@ -1,50 +0,0 @@ -In mathematics, the comparison test, sometimes called the direct comparison test to distinguish it from similar related tests (especially the limit comparison test), provides a way of deducing the convergence or divergence of an infinite series or an improper integral. In both cases, the test works by comparing the given series or integral to one whose convergence properties are known. - -In calculus, the comparison test for series typically consists of a pair of statements about infinite series with non-negative (real-valued) terms: - -* If the infinite series $\sum b_n$ converges and $0 \le a_n \le b_n$ for all sufficiently large n (that is, for all $n>N$ for some fixed value N), then the infinite series $\sum a_n$ also converges. - -* If the infinite series $\sum b_n$ diverges and $0 \le b_n \le a_n$ for all sufficiently large n, then the infinite series $\sum a_n$ also diverges. - -Note that the series having larger terms is sometimes said to dominate (or eventually dominate) the series with smaller terms. - -Alternatively, the test may be stated in terms of absolute convergence, in which case it also applies to series with complex terms: - -* If the infinite series $\sum b_n$ is absolutely convergent and $|a_n| \le |b_n|$ for all sufficiently large n, then the infinite series $\sum a_n$ is also absolutely convergent.
- -* If the infinite series $\sum b_n$ is not absolutely convergent and $|b_n| \le |a_n|$ for all sufficiently large n, then the infinite series $\sum a_n$ is also not absolutely convergent. - -Note that in this last statement, the series $\sum a_n$ could still be conditionally convergent; for real-valued series, this could happen if the an are not all nonnegative. - -The second pair of statements are equivalent to the first in the case of real-valued series because $\sum c_n$ converges absolutely if and only if $\sum |c_n|$, a series with nonnegative terms, converges. - -The proofs of all the statements given above are similar. Here is a proof of the third statement. - -Let $\sum a_n$ and $\sum b_n$ be infinite series such that $\sum b_n$ converges absolutely (thus $\sum |b_n|$ converges), and without loss of generality assume that $|a_n| \le |b_n|$ for all positive integers n. Consider the partial sums -$$ -S_n = |a_1| + |a_2| + \ldots + |a_n|,\ T_n = |b_1| + |b_2| + \ldots + |b_n|. -$$ - -Since $\sum b_n$ converges absolutely, $\lim_{n\to\infty} T_n = T$ for some real number T. For all n, -$$ - 0 \le S_n = |a_1| + |a_2| + \ldots + |a_n| \le |a_1| + \ldots + |a_n| + |b_{n+1}| + \ldots = S_n + (T-T_n) \le T. -$$ -$$ -S_n -$$ is a nondecreasing sequence and $S_n + (T - T_n)$ is nonincreasing. - -Given $m,n > N$ then both $S_n, S_m$ belong to the interval $[S_N, S_N + (T - T_N)]$, whose length $T - T_N$ decreases to zero as $N$ goes to infinity. - -This shows that $(S_n)_{n=1,2,\ldots}$ is a Cauchy sequence, and so must converge to a limit. Therefore, $\sum a_n$ is absolutely convergent. - -The comparison test for integrals may be stated as follows, assuming continuous real-valued functions f and g on $[a,b)$ with b either $+\infty$ or a real number at which f and g each have a vertical asymptote: - -* If the improper integral $\int_a^b g(x)dx$ converges and $0 \le f(x) \le g(x)$ for $a \le x < b$, then the improper integral $\int_a^b f(x)dx$ also converges with $\int_a^b f(x)dx \le \int_a^b g(x)dx.$ - -* If the improper integral $\int_a^b g(x)dx$ diverges and $0 \le g(x) \le f(x)$ for $a \le x < b$, then the improper integral $\int_a^b f(x)dx$ also diverges. - -Another test for convergence of real-valued series, similar to both the direct comparison test above and the ratio test, is called the ratio comparison test: - -* If the infinite series $\sum b_n$ converges and $a_n>0$, $b_n>0$, and $\frac{a_{n+1}}{a_n} \le \frac{b_{n+1}}{b_n}$ for all sufficiently large n, then the infinite series $\sum a_n$ also converges. - -* If the infinite series $\sum b_n$ diverges and $a_n>0$, $b_n>0$, and $\frac{a_{n+1}}{a_n} \ge \frac{b_{n+1}}{b_n}$ for all sufficiently large n, then the infinite series $\sum a_n$ also diverges. diff --git a/wiki/wikipedia/3337.txt b/wiki/wikipedia/3337.txt deleted file mode 100644 index 6ac49f534465a0719a84f19109061f0f5a2f6868..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3337.txt +++ /dev/null @@ -1,108 +0,0 @@ -Leonhard Euler proved the Euler product formula for the Riemann zeta function in his thesis Variae observationes circa series infinitas (Various Observations about Infinite Series), published by St Petersburg Academy in 1737. 
- -The Euler product formula for the Riemann zeta function reads -$$ -\zeta(s) = \sum_{n=1}^\infty\frac{1}{n^s} = \prod_{p \text{ prime}} \frac{1}{1-p^{-s}} -$$ - -where the left hand side equals the Riemann zeta function: -$$ -\zeta(s) = \sum_{n=1}^\infty\frac{1}{n^s} = 1+\frac{1}{2^s}+\frac{1}{3^s}+\frac{1}{4^s}+\frac{1}{5^s}+ \ldots -$$ - -and the product on the right hand side extends over all prime numbers p: -$$ -\prod_{p \text{ prime}} \frac{1}{1-p^{-s}} = \frac{1}{1-2^{-s}}\cdot\frac{1}{1-3^{-s}}\cdot\frac{1}{1-5^{-s}}\cdot\frac{1}{1-7^{-s}} \cdots \frac{1}{1-p^{-s}} \cdots -$$ - -This sketch of a proof makes use of simple algebra only. This was the method by which Euler originally discovered the formula. There is a certain sieving property that we can use to our advantage: -$$ -\zeta(s) = 1+\frac{1}{2^s}+\frac{1}{3^s}+\frac{1}{4^s}+\frac{1}{5^s}+ \ldots -$$ - -\frac{1}{2^s}\zeta(s) = - -\frac{1}{2^s}+\frac{1}{4^s}+\frac{1}{6^s}+\frac{1}{8^s}+\frac{1}{10^s}+ \ldots - -Subtracting the second equation from the first we remove all elements that have a factor of 2: -$$ -\left(1-\frac{1}{2^s}\right)\zeta(s) = 1+\frac{1}{3^s}+\frac{1}{5^s}+\frac{1}{7^s}+\frac{1}{9^s}+\frac{1}{11^s}+\frac{1}{13^s}+ \ldots -$$ - -Repeating for the next term: -$$ -\frac{1}{3^s}\left(1-\frac{1}{2^s}\right)\zeta(s) = \frac{1}{3^s}+\frac{1}{9^s}+\frac{1}{15^s}+\frac{1}{21^s}+\frac{1}{27^s}+\frac{1}{33^s}+ \ldots -$$ - -Subtracting again we get: -$$ -\left(1-\frac{1}{3^s}\right)\left(1-\frac{1}{2^s}\right)\zeta(s) = 1+\frac{1}{5^s}+\frac{1}{7^s}+\frac{1}{11^s}+\frac{1}{13^s}+\frac{1}{17^s}+ \ldots -$$ - -where all elements having a factor of 3 or 2 (or both) are removed. - -It can be seen that the right side is being sieved. Repeating infinitely for $\frac{1}{p^s}$ where $p$ is prime, we get: -$$ - \ldots \left(1-\frac{1}{11^s}\right)\left(1-\frac{1}{7^s}\right)\left(1-\frac{1}{5^s}\right)\left(1-\frac{1}{3^s}\right)\left(1-\frac{1}{2^s}\right)\zeta(s) = 1 -$$ - -Dividing both sides by everything but the ζ(s) we obtain: -$$ - \zeta(s) = \frac{1}{\left(1-\frac{1}{2^s}\right)\left(1-\frac{1}{3^s}\right)\left(1-\frac{1}{5^s}\right)\left(1-\frac{1}{7^s}\right)\left(1-\frac{1}{11^s}\right) \ldots } -$$ - -This can be written more concisely as an infinite product over all primes p: -$$ - \zeta(s) = \prod_{p \text{ prime}} \frac{1}{1-p^{-s}} -$$ - -To make this proof rigorous, we need only to observe that when $\Re(s) > 1$, the sieved right-hand side approaches 1, which follows immediately from the convergence of the Dirichlet series for $\zeta(s)$. - -An interesting result can be found for ζ(1), the harmonic series: -$$ - \ldots \left(1-\frac{1}{11}\right)\left(1-\frac{1}{7}\right)\left(1-\frac{1}{5}\right)\left(1-\frac{1}{3}\right)\left(1-\frac{1}{2}\right)\zeta(1) = 1 -$$ - -which can also be written as, -$$ - \ldots \left(\frac{10}{11}\right)\left(\frac{6}{7}\right)\left(\frac{4}{5}\right)\left(\frac{2}{3}\right)\left(\frac{1}{2}\right)\zeta(1) = 1 -$$ - -which is, -$$ - \left(\frac{\ldots\cdot10\cdot6\cdot4\cdot2\cdot1}{\ldots\cdot11\cdot7\cdot5\cdot3\cdot2}\right)\zeta(1) = 1 -$$ - -as, -$$ -\zeta(1) = 1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+ \ldots -$$ - -thus, -$$ - 1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+ \ldots = \frac{2\cdot 3\cdot 5\cdot 7\cdot 11\cdot\ldots}{1\cdot 2\cdot 4\cdot 6\cdot 10\cdot\ldots} -$$ - -While the series ratio test is inconclusive for the left-hand side it may be shown divergent by bounding logarithms. 
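For $\Re(s)>1$, by contrast, both sides converge and the identity lends itself to a direct numerical check. A minimal Python sketch at s = 2, where $\zeta(2)=\pi^2/6$ (the cutoffs below are arbitrary choices):

```
import math

def zeta_partial(s, N=10**6):
    return sum(n**-s for n in range(1, N + 1))

def euler_product(s, limit=10**4):
    primes = [p for p in range(2, limit + 1)
              if all(p % d for d in range(2, int(p**0.5) + 1))]
    prod = 1.0
    for p in primes:
        prod /= 1 - p**-s
    return prod

print(zeta_partial(2), euler_product(2), math.pi**2 / 6)
# all three are about 1.64493, agreeing to roughly four decimal places
```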
Similarly for the right-hand side, an infinite product of reals greater than one does not guarantee divergence, e.g., -$$ -\lim_{n \to \infty} \left(1+\frac{1}{n}\right)^n = e -$$. - -Instead, the denominator may be written in terms of the primorial numerator so that divergence is clear -$$ -\frac{p_n\#}{(p_n-1)\#}= e^{-\sum_{k=1}^n \ln \left(1-\frac{1}{p_k}\right)}=\sum_{m=0}^\infty \frac{1}{m!}\left(\sum_{l=1}^\infty\sum_{k=1}^n \frac{1}{lp_k^l}\right)^m -$$ - -given the trivial composed logarithmic divergence of an inverse prime series. - -Each factor (for a given prime p) in the product above can be expanded to a geometric series consisting of the reciprocal of p raised to multiples of s, as follows -$$ -\frac{1}{1-p^{-s}} = 1 + \frac{1}{p^s} + \frac{1}{p^{2s}} + \frac{1}{p^{3s}} + \ldots + \frac{1}{p^{ks}} + \ldots -$$ - -When $\Re(s) > 1$, we have $|p^{-s}| < 1$ and this series converges absolutely. Hence we may take a finite number of factors, multiply them together, and rearrange terms. Taking all the primes p up to some prime number limit q, we have -$$ -\left|\zeta(s) - \prod_{p \le q}\left(\frac{1}{1-p^{-s}}\right)\right| < \sum_{n=q+1}^\infty \frac{1}{n^{\sigma}} -$$ - -where σ is the real part of s. By the fundamental theorem of arithmetic, the partial product when expanded out gives a sum consisting of those terms $n^{-s}$ where n is a product of primes less than or equal to q. The inequality results from the fact that therefore only integers larger than q can fail to appear in this expanded out partial product. Since the difference between the partial product and ζ(s) goes to zero when σ > 1, we have convergence in this region. diff --git a/wiki/wikipedia/3338.txt b/wiki/wikipedia/3338.txt deleted file mode 100644 index 8e64ba97e5a0ecbb826f6bf2600024d10657a590..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3338.txt +++ /dev/null @@ -1,11 +0,0 @@ -The three cups problem, also known as the three cup challenge and other variants, is a mathematical puzzle that, in its most common form, cannot be solved. - -In the beginning position of the problem, one cup is upside-down and the other two are right-side up. The objective is to turn all cups right-side up in no more than six moves, turning over exactly two cups at each move. - -The solvable (but trivial) version of this puzzle begins with one cup right-side up and two cups upside-down. To solve the puzzle in a single move, turn up the two cups that are upside down — after which all three cups are facing up. As a magic trick, a magician can perform the solvable version in a convoluted way, and then ask an audience member to solve the unsolvable version. - -To see that the problem is insolvable (when starting with just one cup upside down), it suffices to concentrate on the number of cups the wrong way up. Denoting this number by $W$, the goal of the problem is to change $W$ from 1 to 0, i.e. by $-1$. The problem is insoluble because any move changes $W$ by an even number. Since a move inverts two cups and each inversion changes $W$ by $+1$ (if the cup was the right way up) or $-1$ (otherwise), a move changes $W$ by the sum of two odd numbers, which is even, completing the proof. - -Another way of looking at it is that, at the start, 2 cups are in the "right" orientation and 1 is "wrong". Changing 1 right cup and 1 wrong cup, the situation remains the same. Changing 2 right cups results in a situation with 3 wrong cups, after which the next move restores the original status of 1 wrong cup.
Thus, any number of moves results in a situation either with 3 wrongs or with 1 wrong, and never with 0 wrongs. - -More generally, this argument shows that for any number of cups, it is impossible to reduce $W$ to 0 if it is initially odd. On the other hand, if $W$ is even, inverting cups two at a time will eventually result in $W$ equaling 0. diff --git a/wiki/wikipedia/3339.txt b/wiki/wikipedia/3339.txt deleted file mode 100644 index 74ce55dfccc55fbc77f597360c6f4c4157af9b78..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3339.txt +++ /dev/null @@ -1,139 +0,0 @@ -In algebra, the Binet–Cauchy identity, named after Jacques Philippe Marie Binet and Augustin-Louis Cauchy, states that - - - -\biggl(\sum_{i=1}^n a_i c_i\biggr) - -\biggl(\sum_{j=1}^n b_j d_j\biggr) = - -\biggl(\sum_{i=1}^n a_i d_i\biggr) - -\biggl(\sum_{j=1}^n b_j c_j\biggr) - -+ \sum_{1\le i < j \le n} - -(a_i b_j - a_j b_i ) - -(c_i d_j - c_j d_i ) - - - -for every choice of real or complex numbers (or more generally, elements of a commutative ring). - -Setting $a_i = c_i$ and $b_j = d_j$, it gives Lagrange's identity, which is a stronger version of the Cauchy–Schwarz inequality for the Euclidean space $\textstyle\mathbb{R}^n$. - -When n = 3, the first and second terms on the right hand side become the squared magnitudes of dot and cross products respectively; in n dimensions these become the magnitudes of the dot and wedge products. We may write it -$$ -(a \cdot c)(b \cdot d) = (a \cdot d)(b \cdot c) + (a \wedge b) \cdot (c \wedge d) -$$ - -where a, b, c, and d are vectors. It may also be written as a formula giving the dot product of two wedge products, as -$$ -(a \wedge b) \cdot (c \wedge d) = (a \cdot c)(b \cdot d) - (a \cdot d)(b \cdot c), -$$ - -which can be written as -$$ -(a \times b) \cdot (c \times d) = (a \cdot c)(b \cdot d) - (a \cdot d)(b \cdot c) -$$ - -in the n = 3 case. - -In the special case a = c and b = d, the formula yields -$$ -|a \wedge b|^2 = |a|^2|b|^2 - |a \cdot b|^2. -$$ - -When both a and b are unit vectors, we obtain the usual relation -$$ -\sin^2 \phi = 1 - \cos^2 \phi -$$ - -where φ is the angle between the vectors. - -This is a special case of the inner product on the exterior algebra of a vector space, which is defined on wedge-decomposable elements as the Gram determinant of their components. - -A relationship between the Levi-Civita symbols and the generalized Kronecker delta is -$$ -\frac{1}{k!}\varepsilon^{\lambda_1\cdots\lambda_k\mu_{k+1}\cdots\mu_{n}} \varepsilon_{\lambda_1\cdots\lambda_k\nu_{k+1}\cdots\nu_{n}} = \delta^{\mu_{k+1}\cdots\mu_{n}}_{\nu_{k+1}\cdots\nu_{n}}. -$$ - -The $(a \wedge b) \cdot (c \wedge d) = (a \cdot c)(b \cdot d) - (a \cdot d)(b \cdot c)$ form of the Binet–Cauchy identity can be written as -$$ -\frac{1}{(n-2)!}\left(\varepsilon^{\mu_1\cdots\mu_{n-2}\alpha\beta} ~ a_{\alpha} ~ b_{\beta} \right)\left( \varepsilon_{\mu_1\cdots\mu_{n-2}\gamma\delta} ~ c^{\gamma} ~ d^{\delta}\right) = \delta^{\alpha\beta}_{\gamma\delta} ~ a_{\alpha} ~ b_{\beta} ~ c^{\gamma} ~ d^{\delta}.
-$$ - -Expanding the last term, - - - -\sum_{1\le i < j \le n} - -(a_i b_j - a_j b_i ) - -(c_i d_j - c_j d_i ) - - - - - -= - -\sum_{1\le i < j \le n} - -(a_i c_i b_j d_j + a_j c_j b_i d_i) - -+\sum_{i=1}^n a_i c_i b_i d_i - -- - -\sum_{1\le i < j \le n} - -(a_i d_i b_j c_j + a_j d_j b_i c_i) - -- - -\sum_{i=1}^n a_i d_i b_i c_i - - - -where the second and fourth terms are the same and artificially added to complete the sums as follows: - - - -= - -\sum_{i=1}^n \sum_{j=1}^n - -a_i c_i b_j d_j - -- - -\sum_{i=1}^n \sum_{j=1}^n - -a_i d_i b_j c_j. - - - -This completes the proof after factoring out the terms indexed by i. - -A general form, also known as the Cauchy–Binet formula, states the following: - -Suppose A is an m×n matrix and B is an n×m matrix. If S is a subset of {1, ..., n} with m elements, we write AS for the m×m matrix whose columns are those columns of A that have indices from S. Similarly, we write BS for the m×m matrix whose rows are those rows of B that have indices from S. - -Then the determinant of the matrix product of A and B satisfies the identity -$$ -\det(AB) = \sum_{\scriptstyle S\subset\{1,\ldots,n\}\atop\scriptstyle|S|=m} \det(A_S)\det(B_S), -$$ - -where the sum extends over all possible subsets S of {1, ..., n} with m elements. - -We get the original identity as special case by setting - - - -A=\begin{pmatrix}a_1&\dots&a_n\\b_1&\dots& b_n\end{pmatrix},\quad - -B=\begin{pmatrix}c_1&d_1\\\vdots&\vdots\\c_n&d_n\end{pmatrix}. - - diff --git a/wiki/wikipedia/334.txt b/wiki/wikipedia/334.txt deleted file mode 100644 index be5593ea6345279ef4e16f6c74930776d76069f7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/334.txt +++ /dev/null @@ -1,71 +0,0 @@ -In theoretical physics, a Fierz identity is an identity that allows one to rewrite bilinears of the product of two spinors as a linear combination of products of the bilinears of the individual spinors. It is named after Swiss physicist Markus Fierz. The Fierz identities are also sometimes called the Fierz–Pauli–Kofink identities, as Pauli and Kofink described a general mechanism for producing such identities. - -There is a version of the Fierz identities for Dirac spinors and there is another version for Weyl spinors. And there are versions for other dimensions besides 3+1 dimensions. Spinor bilinears in arbitrary dimensions are elements of a Clifford algebra; the Fierz identities can be obtained by expressing the Clifford algebra as a quotient of the exterior algebra. - -When working in 4 spacetime dimensions the bivector $\psi \bar{\chi}$ may be decomposed in terms of the Dirac matrices that span the space: -$$ -\psi \bar{\chi} = \frac{1}{4}( c_S \mathbb{1} + c_V^\mu \gamma_\mu + c_T^{\mu\nu} T_{\mu\nu} + c_A^\mu \gamma_\mu \gamma_5 + c_P \gamma_5 ) -$$. - -The coefficients are - - c_S = (\bar\chi \psi), \quad - -c_V^\mu=(\bar\chi \gamma^\mu \psi), \quad - -c_T^{\mu\nu}=-(\bar\chi T^{\mu\nu}\psi), \quad - -c_A^\mu =-(\bar\chi \gamma^\mu \gamma_5\psi), \quad - -c_P=(\bar\chi \gamma_5 \psi) - -and are usually determined by using the orthogonality of the basis under the trace operation. By sandwiching the above decomposition between the desired gamma structures, the identities for the contraction of two Dirac bilinears of the same type can be written with coefficients according to the following table. - -where - -S=\bar\chi \psi, \quad - -V=\bar\chi\gamma^\mu\psi, \quad - -T= \bar\chi[\gamma^\mu, \gamma^\nu]\psi/2 \sqrt{2}, \quad - -A= \bar\chi\gamma_5\gamma^\mu\psi, \quad - -P= \bar\chi\gamma_5\psi . 
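Stepping back to the Cauchy–Binet formula stated earlier, the determinant identity is easy to verify numerically for a small case. A self-contained Python sketch with a 2×3 matrix A and a 3×2 matrix B (the helper functions are ours, kept naive on purpose):

```
from itertools import combinations

def det(M):
    # Laplace expansion along the first row; fine for tiny matrices
    if len(M) == 1: return M[0][0]
    return sum((-1)**j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2, 3], [4, 5, 6]]        # 2 x 3
B = [[7, 8], [9, 10], [11, 12]]   # 3 x 2

lhs = det(matmul(A, B))
rhs = sum(det([[A[i][k] for k in S] for i in range(2)]) *
          det([[B[k][j] for j in range(2)] for k in S])
          for S in combinations(range(3), 2))
print(lhs, rhs)  # 36 36
```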
- -The table is symmetric with respect to reflection across the central element. - -The signs in the table correspond to the case of commuting spinors, otherwise, as is the case of fermions in physics, all coefficients change signs. - -For example, under the assumption of commuting spinors, the V × V product can be expanded as, - - - -\left(\bar\chi\gamma^\mu\psi\right)\left(\bar\psi\gamma_\mu \chi\right)= - -\left(\bar\chi\chi\right)\left(\bar\psi\psi\right)- - -\frac{1}{2}\left(\bar\chi\gamma^\mu\chi\right)\left(\bar\psi\gamma_\mu\psi\right)- - -\frac{1}{2}\left(\bar\chi\gamma^\mu\gamma_5\chi\right)\left(\bar\psi\gamma_\mu\gamma_5\psi\right) - --\left(\bar\chi\gamma_5\chi\right)\left(\bar\psi\gamma_5\psi\right)~. - -Combinations of bilinears corresponding to the eigenvectors of the transpose matrix transform to the same combinations with eigenvalues ±1. For example, again for commuting spinors, V×V + A×A, - - - -(\bar\chi\gamma^\mu\psi )(\bar\psi\gamma_\mu \chi )+ (\bar\chi\gamma_5\gamma^\mu\psi) (\bar\psi\gamma_5\gamma_\mu \chi) - -=-( ~(\bar\chi\gamma^\mu\chi )(\bar\psi\gamma_\mu\psi)+ - -(\bar\chi\gamma_5\gamma^\mu\chi) (\bar\psi\gamma_5\gamma_\mu\psi )~)~. - - - -Simplifications arise when the spinors considered are Majorana spinors, or chiral fermions, as then some terms in the expansion can vanish from symmetry reasons. - -For example, for anticommuting spinors this time, it readily follows from the above that -$$ - \bar{\chi}_1 \gamma^\mu (1+\gamma_5)\psi_2 \bar{\psi}_3 \gamma_\mu (1-\gamma_5) \chi_4 = -2 \bar{\chi}_1 (1-\gamma_5) \chi_4 \bar{\psi}_3 (1+\gamma_5) \psi_2 . -$$ diff --git a/wiki/wikipedia/3340.txt b/wiki/wikipedia/3340.txt deleted file mode 100644 index 6b95aa5f8daec64a6d40b4be0138015797403dfe..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3340.txt +++ /dev/null @@ -1,132 +0,0 @@ -Ceva's theorem is a theorem about triangles in plane geometry. Given a triangle ABC, let the lines AO, BO and CO be drawn from the vertices to a common point O (not on one of the sides of ABC), to meet opposite sides at D, E and F respectively. (The segments AD, BE, and CF are known as cevians.) Then, using signed lengths of segments, -$$ -\frac{AF}{FB} \cdot \frac{BD}{DC} \cdot \frac{CE}{EA} = 1. -$$ - -In other words, the length XY is taken to be positive or negative according to whether X is to the left or right of Y in some fixed orientation of the line. For example, AF/FB is defined as having positive value when F is between A and B and negative otherwise. - -Ceva's theorem is a theorem of affine geometry, in the sense that it may be stated and proved without using the concepts of angles, areas, and lengths (except for the ratio of the lengths of two line segments that are collinear). It is therefore true for triangles in any affine plane over any field. - -A slightly adapted converse is also true: If points D, E and F are chosen on BC, AC and AB respectively so that -$$ -\frac{AF}{FB} \cdot \frac{BD}{DC} \cdot \frac{CE}{EA} = 1, -$$ - -then AD, BE and CF are concurrent, or all three parallel. The converse is often included as part of the theorem. - -The theorem is often attributed to Giovanni Ceva, who published it in his 1678 work De lineis rectis. But it was proven much earlier by Yusuf Al-Mu'taman ibn Hűd, an eleventh-century king of Zaragoza. 
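The statement invites a quick numerical check. The sketch below (our own construction, in Python) picks a triangle and an interior point O, finds where each cevian meets the opposite side, and confirms that the product of the three signed ratios is 1:

```
def foot_param(P, Q, R, S):
    # line through P and Q meets line through R and S at R + t*(S - R)
    (px, py), (qx, qy), (rx, ry), (sx, sy) = P, Q, R, S
    dx1, dy1 = qx - px, qy - py
    dx2, dy2 = sx - rx, sy - ry
    return (dy1 * (rx - px) - dx1 * (ry - py)) / (dx1 * dy2 - dy1 * dx2)

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
O = (1.5, 1.0)  # any point not on a line through two vertices

tD = foot_param(A, O, B, C)  # D on BC, so BD/DC = tD/(1 - tD)
tE = foot_param(B, O, C, A)  # E on CA, so CE/EA = tE/(1 - tE)
tF = foot_param(C, O, A, B)  # F on AB, so AF/FB = tF/(1 - tF)
ratios = [t / (1 - t) for t in (tD, tE, tF)]
print(ratios[0] * ratios[1] * ratios[2])  # 1.0 up to rounding
```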
Associated with the figures are several terms derived from Ceva's name: cevian (the lines AD, BE, CF are the cevians of O), cevian triangle (the triangle DEF is the cevian triangle of O); cevian nest, anticevian triangle, Ceva conjugate. (Ceva is pronounced Chay'va; cevian is pronounced chev'ian.) - -The theorem is very similar to Menelaus' theorem in that their equations differ only in sign. - -Several proofs of the theorem have been given. - -Two proofs are given in the following. - -The first one is very elementary, using only basic properties of triangle areas. However, several cases have to be considered, depending on the position of the point O. - -The second proof uses barycentric coordinates and vectors, but is somewhat more natural and not case dependent. Moreover, it works in any affine plane over any field. - -First, the sign of the left-hand side is positive since either all three of the ratios are positive, the case where O is inside the triangle (upper diagram), or one is positive and the other two are negative, the case O is outside the triangle (lower diagram shows one case). - -To check the magnitude, note that the area of a triangle of a given height is proportional to its base. So -$$ -\frac{|\triangle ABD|}{|\triangle ACD|}=\frac{BD}{DC}=\frac{|\triangle OBD|}{|\triangle OCD|}. -$$ - -Therefore, - -\frac{BD}{DC}= - -\frac{|\triangle ABD|-|\triangle OBD|}{|\triangle ACD|-|\triangle OCD|} - -=\frac{|\triangle ABO|}{|\triangle ACO|}. - -(Replace the minus with a plus if A and O are on opposite sides of BC.) - -Similarly, -$$ -\frac{CE}{EA}=\frac{|\triangle BCO|}{|\triangle ABO|}, -$$ - -and -$$ -\frac{AF}{FB}=\frac{|\triangle ACO|}{|\triangle BCO|}. -$$ - -Multiplying these three equations gives -$$ -\left|\frac{AF}{FB} \cdot \frac{BD}{DC} \cdot \frac{CE}{EA} \right|= 1, -$$ - -as required. - -The theorem can also be proven easily using Menelaus' theorem. From the transversal BOE of triangle ACF, -$$ -\frac{AB}{BF} \cdot \frac{FO}{OC} \cdot \frac{CE}{EA} = -1 -$$ - -and from the transversal AOD of triangle BCF, -$$ -\frac{BA}{AF} \cdot \frac{FO}{OC} \cdot \frac{CD}{DB} = -1. -$$ - -The theorem follows by dividing these two equations. - -The converse follows as a corollary. Let D, E and F be given on the lines BC, AC and AB so that the equation holds. Let AD and BE meet at O and let F′ be the point where CO crosses AB. Then by the theorem, the equation also holds for D, E and F′. Comparing the two, -$$ -\frac{AF}{FB} = \frac{AF'}{F'B} -$$ - -But at most one point can cut a segment in a given ratio so F=F′. - -Given three points A, B, C that are not collinear, and a point O that belongs to the same plane, the barycentric coordinates of O with respect to A, B, C are the unique three numbers $\lambda_A, \lambda_B, \lambda_C$ such that -$$ -\lambda_A + \lambda_B + \lambda_C =1, -$$ - -and -$$ -\overrightarrow{XO}=\lambda_A\overrightarrow{XA} + \lambda_B\overrightarrow{XB} + \lambda_C\overrightarrow{XC}, -$$ - -for every point X (for the definition of this arrow notation and further details, see Affine space). - -For Ceva's theorem, the point O is supposed not to belong to any line passing through two vertices of the triangle. This implies that $\lambda_A \lambda_B \lambda_C\ne 0.$ - -If one takes for X the intersection F of the lines AB and OC (see figures), the last equation may be rearranged into -$$ -\overrightarrow{FO}-\lambda_C\overrightarrow{FC}=\lambda_A\overrightarrow{FA} + \lambda_B\overrightarrow{FB}. -$$ - -The left-hand side of this equation is a vector that has the same direction as the line CF, and the right-hand side has the same direction as the line AB. These lines have different directions since A, B, and C are not collinear.
It follows that the two members of the equation equal the zero vector, and
-$$
-\lambda_A\overrightarrow{FA} + \lambda_B\overrightarrow{FB}=0.
-$$
-
-It follows that
-$$
-\frac{AF}{FB}=\frac{\lambda_B}{\lambda_A},
-$$
-
-where the left-hand-side fraction is the signed ratio of the lengths of the collinear line segments AF and FB.
-
-The same reasoning shows
-$$
-\frac{BD}{DC}=\frac{\lambda_C}{\lambda_B}\quad \text{and}\quad \frac{CE}{EA}=\frac{\lambda_A}{\lambda_C}.
-$$
-
-Ceva's theorem results immediately by taking the product of the three last equations.
-
-The theorem can be generalized to higher-dimensional simplices using barycentric coordinates. Define a cevian of an n-simplex as a ray from each vertex to a point on the opposite (n-1)-face (facet). Then the cevians are concurrent if and only if a mass distribution can be assigned to the vertices such that each cevian intersects the opposite facet at its center of mass. Moreover, the intersection point of the cevians is the center of mass of the simplex.
-
-Another generalization to higher-dimensional simplices extends the conclusion of Ceva's theorem that the product of certain ratios is 1. Starting from a point in a simplex, a point is defined inductively on each k-face. This point is the foot of a cevian that goes from the vertex opposite the k-face, in a (k+1)-face that contains it, through the point already defined on this (k+1)-face. Each of these points divides the face on which it lies into lobes. Given a cycle of pairs of lobes, the product of the ratios of the volumes of the lobes in each pair is 1.
-
-Routh's theorem gives the area of the triangle formed by three cevians in the case that they are not concurrent. Ceva's theorem can be obtained from it by setting the area equal to zero and solving.
-
-The analogue of the theorem for general polygons in the plane has been known since the early nineteenth century.
-
-The theorem has also been generalized to triangles on other surfaces of constant curvature.
-
-The theorem also has a well-known generalization to spherical and hyperbolic geometry, replacing the lengths in the ratios with their sines and hyperbolic sines, respectively. diff --git a/wiki/wikipedia/3341.txt b/wiki/wikipedia/3341.txt deleted file mode 100644 index f1dbe216bac1b70827795d4e64ecff6f6c8ade64..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3341.txt +++ /dev/null @@ -1,26 +0,0 @@ -The freshman's dream is a name sometimes given to the erroneous equation $(x + y)^n = x^n + y^n$, where n is a real number (usually a positive integer greater than 1). Beginning students commonly make this error in computing the power of a sum of real numbers, falsely assuming powers distribute over sums. When n = 2, it is easy to see why this is incorrect: $(x + y)^2$ can be correctly computed as $x^2 + 2xy + y^2$ using distributivity (commonly known as the FOIL method). For larger positive integer values of n, the correct result is given by the binomial theorem.
-
-The name "freshman's dream" also sometimes refers to the theorem that says that for a prime number p, if x and y are members of a commutative ring of characteristic p, then
-
-$(x + y)^p = x^p + y^p$. In this more exotic type of arithmetic, the "mistake" actually gives the correct result, since p divides all the binomial coefficients apart from the first and the last, making all intermediate terms equal to zero.
-
-The identity is actually true in the context of tropical geometry, where multiplication is replaced with addition, and addition is replaced with minimum.
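-
-The prime-characteristic claim is easy to test numerically: reduce each binomial coefficient of $(x + y)^p$ mod p and check that only the first and last survive. This is an illustrative Python sketch; the function name and the sample moduli are our own choices, not from the article:
-
-```python
-from math import comb
-
-def freshmans_dream_holds(p):
-    # Coefficient of x^(p-k) y^k in (x + y)^p, reduced mod p.
-    coeffs = [comb(p, k) % p for k in range(p + 1)]
-    # The identity holds iff only the leading and trailing coefficients remain.
-    return coeffs[0] == coeffs[-1] == 1 and all(c == 0 for c in coeffs[1:-1])
-
-print(freshmans_dream_holds(7))   # True: 7 is prime
-print(freshmans_dream_holds(4))   # False: e.g. comb(4, 2) = 6 is 2 mod 4
-```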
-
-*$(1+4)^2 = 5^2 = 25$, but $1^2+4^2 = 17$.
-
-*$\sqrt{x^2+y^2}$ does not generally equal $\sqrt{x^2}+\sqrt{y^2}=|x|+|y|$. For example, $\sqrt{9+16}=\sqrt{25}=5$, which does not equal $3 + 4 = 7$. In this example, the error is being committed with the exponent $n = 1/2$.
-
-When p is a prime number and x and y are members of a commutative ring of characteristic p, then $(x + y)^p = x^p + y^p$. This can be seen by examining the prime factors of the binomial coefficients: the nth binomial coefficient is
-$$
-\binom{p}{n} = \frac{p!}{n!(p-n)!}.
-$$
-
-The numerator is p factorial, which is divisible by p. However, when 0 < n < p, both n! and (p - n)! are coprime with p since all the factors are less than p and p is prime. Since a binomial coefficient is always an integer, the nth binomial coefficient is divisible by p and hence equal to 0 in the ring. We are left with the zeroth and pth coefficients, which both equal 1, yielding the desired equation.
-
-Thus in characteristic p the freshman's dream is a valid identity. This result demonstrates that exponentiation by p produces an endomorphism, known as the Frobenius endomorphism of the ring.
-
-The demand that the characteristic p be a prime number is central to the truth of the freshman's dream. A related theorem states that if p is prime then $(x + 1)^p \equiv x^p + 1$ in the polynomial ring $\mathbb{Z}_p[x]$. This theorem is a key fact in modern primality testing.
-
-The history of the term "freshman's dream" is somewhat unclear. In a 1940 article on modular fields, Saunders Mac Lane quotes Stephen Kleene's remark that a knowledge of $(a + b)^2 = a^2 + b^2$ in a field of characteristic 2 would corrupt freshman students of algebra. This may be the first connection between "freshman" and binomial expansion in fields of positive characteristic. Since then, authors of undergraduate algebra texts have taken note of the common error. The first actual attestation of the phrase "freshman's dream" seems to be in Hungerford's graduate algebra textbook (1974), where he quotes McBrien. Alternative terms include "freshman exponentiation", used in Fraleigh (1998). The term "freshman's dream" itself, in non-mathematical contexts, has been recorded since the 19th century.
-
-Since the expansion of $(x + y)^n$ is correctly given by the binomial theorem, the freshman's dream is also known as the "child's binomial theorem" or "schoolboy binomial theorem". diff --git a/wiki/wikipedia/3342.txt b/wiki/wikipedia/3342.txt deleted file mode 100644 index 2341c713558588f879f165c2c6fe247a4b3f8544..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3342.txt +++ /dev/null @@ -1,35 +0,0 @@ -The Steinitz exchange lemma is a basic theorem in linear algebra used, for example, to show that any two bases for a finite-dimensional vector space have the same number of elements. The result is named after the German mathematician Ernst Steinitz. The result is often called the Steinitz–Mac Lane exchange lemma, also recognizing the generalization by Saunders Mac Lane of Steinitz's lemma to matroids.
-
-Let $U$ and $W$ be finite subsets of a vector space $V$. If $U$ is a set of linearly independent vectors, and $W$ spans $V$, then:
-
-1. $|U| \leq |W|$;
-
-2. There is a set $W' \subseteq W$ with $|W'|=|W|-|U|$ such that $U \cup W'$ spans $V$.
-
-Suppose $U=\{u_1, \dots, u_m\}$ and $W=\{w_1, \dots, w_n\}$.
We wish to show that for each $k \in \{0, \dots, m\}$, we have that $k \le n$, and that the set $\{u_1, \dotsc, u_k, w_{k + 1}, \dotsc, w_n\}$ spans $V$ (where the $w_j$ have possibly been reordered, and the reordering depends on $k$). We proceed by induction on $k$.
-
-For the base case, suppose $k$ is zero.
-
-In this case, the claim holds because there are no vectors $u_i$, and the set $\{w_1, \dotsc, w_n\}$ spans $V$ by hypothesis.
-
-For the inductive step, assume the proposition is true for some $k < m$. Since $u_{k+1}\in V$, and $\{ u_1,\ldots, u_k,w_{k+1},\ldots,w_n\}$ spans $V$ (by the induction hypothesis), there exist coefficients $\mu_1,\ldots,\mu_n$ such that
-$$
-u_{k+1}=\sum_{j=1}^k \mu_j u_j+\sum_{j=k+1}^n \mu_j w_j
-$$.
-
-At least one of $\{\mu_{k+1},\ldots,\mu_n\}$ must be non-zero, since otherwise this equality would contradict the linear independence of $\{ u_1,\ldots,u_{k+1} \}$; note that this additionally implies that $k+1 \le n$. By reordering the $\mu_{k+1}w_{k+1},\ldots,\mu_{n}w_n$, we may assume that $\mu_{k+1}$ is not zero. Therefore, we have
-$$
-w_{k+1}= \frac{1}{\mu_{k+1}}\left(u_{k+1} - \sum_{j=1}^k \mu_j u_j - \sum_{j=k+2}^n \mu_j w_j\right)
-$$.
-
-In other words, $w_{k+1}$ is in the span of $\{ u_1,\ldots, u_{k+1},w_{k+2},\ldots,w_n\}$. The latter span therefore contains each of the vectors $ u_1, \ldots, u_k, w_{k+1}, w_{k+2}, \ldots, w_n $, and hence must contain the span of these latter vectors as a subset. But since the latter span is $V$ (by the induction hypothesis), this simply means that the span of $\{ u_1,\ldots, u_{k+1},w_{k+2},\ldots,w_n\}$ contains $V$ as a subset (thus is $V$). We have therefore shown that our claim is true for $k+1$, completing the inductive step.
-
-We have thus shown that for each $k \in \{0, \dots, m\}$, we have that $k \le n$, and that the set $\{u_1, \dotsc, u_k, w_{k + 1}, \dotsc, w_n\}$ spans $V$ (where the $w_j$ have possibly been reordered, and the reordering depends on $k$).
-
-The fact that $ m \leq n $ follows from setting $ k = m $ in this result.
-
-The Steinitz exchange lemma is a basic result in computational mathematics, especially in linear algebra and in combinatorial algorithms. diff --git a/wiki/wikipedia/3343.txt b/wiki/wikipedia/3343.txt deleted file mode 100644 index 2a6790375e067905e565b8aa963fa1ddc6afd0f4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3343.txt +++ /dev/null @@ -1,290 +0,0 @@ -In computational complexity theory, the class IP (which stands for Interactive Polynomial time) is the class of problems solvable by an interactive proof system. It is equal to the class PSPACE. The result was established in a series of papers: the first by Lund, Karloff, Fortnow, and Nisan showed that co-NP had multiple prover interactive proofs; and the second, by Shamir, employed their technique to establish that IP = PSPACE. The result is a famous example where the proof does not relativize.
-
-The concept of an interactive proof system was first introduced by Shafi Goldwasser, Silvio Micali, and Charles Rackoff in 1985. An interactive proof system consists of two machines, a prover, P, which presents a proof that a given string n is a member of some language, and a verifier, V, that checks that the presented proof is correct. The prover is assumed to be infinite in computation and storage, while the verifier is a probabilistic polynomial-time machine with access to a random bit string whose length is polynomial in the size of n.
These two machines exchange a polynomial number, p(n), of messages, and once the interaction is completed, the verifier must decide whether or not n is in the language, with only a 1/3 chance of error. (So any language in BPP is in IP, since then the verifier could simply ignore the prover and make the decision on its own.)
-
-A language L belongs to IP if there exist V, P such that for all Q, w:
-$$
-w \in L \Rightarrow \Pr[V \leftrightarrow P\text{ accepts }w] \ge \tfrac{2}{3}
-$$
-$$
-w \not \in L \Rightarrow \Pr[V \leftrightarrow Q\text{ accepts }w] \le \tfrac{1}{3}
-$$
-
-The Arthur–Merlin protocol, introduced by László Babai, is similar in nature, except that the number of rounds of interaction is bounded by a constant rather than a polynomial.
-
-Goldwasser et al. have shown that public-coin protocols, where the random numbers used by the verifier are provided to the prover along with the challenges, are no less powerful than private-coin protocols. At most two additional rounds of interaction are required to replicate the effect of a private-coin protocol. The opposite inclusion is straightforward, because the verifier can always send to the prover the results of their private coin tosses, which proves that the two types of protocols are equivalent.
-
-In the following section we prove that IP = PSPACE, an important theorem in computational complexity, which demonstrates that an interactive proof system can be used to decide, with a polynomial-time verifier, whether a string is a member of a language, even though the traditional PSPACE proof may be exponentially long.
-
-The proof can be divided into two parts: we show that IP ⊆ PSPACE and PSPACE ⊆ IP.
-
-In order to demonstrate that IP ⊆ PSPACE, we present a simulation of an interactive proof system by a polynomial space machine. Now, we can define:
-$$
-\Pr[V\text{ accepts }w\text{ starting at }M_j] = \max\nolimits_P \Pr \left [V \leftrightarrow P\text{ accepts }w\text{ starting at }M_j \right ]
-$$
-
-and for every 0 ≤ j ≤ p and every message history Mj, we inductively define the function NMj:
-
-N_{M_j} = \begin{cases}
-
-0 & j = p\text{ and }m_p = \text{reject}\\
-
-1 & j = p\text{ and }m_p = \text{accept}\\
-
-\max_{m_{j+1}} N_{M_{j+1}} & j < p\text{ and }j\text{ is odd} \\
-
-\text{wt-avg}_{m_{j+1}} N_{M_{j+1}} & j < p\text{ and }j\text{ is even} \\
-
-\end{cases}
-
-where:
-$$
-\text{wt-avg}_{m_{j+1}} N_{M_{j+1}} := \sum\nolimits_{m_{j+1}} \Pr\nolimits_r[V(w,r,M_j)=m_{j+1}]N_{M_{j+1}}
-$$
-
-where Prr is the probability taken over the random string r of length p. This expression is the average of NMj+1, weighted by the probability that the verifier sent message mj+1.
-
-Take M0 to be the empty message sequence. Here we will show that NM0 can be computed in polynomial space, and that NM0 = Pr[V accepts w]. First, to compute NM0, an algorithm can recursively calculate the values NMj for every j and Mj. Since the depth of the recursion is p, only polynomial space is necessary. The second requirement is that we need NM0 = Pr[V accepts w], the value needed to determine whether w is in the language. We use induction to prove this as follows.
-
-We must show that for every 0 ≤ j ≤ p and every Mj, NMj = Pr[V accepts w starting at Mj], and we will do this using induction on j. The base case is to prove for j = p. Then we will use induction to go from p down to 0.
-
-The base case of j = p is fairly simple.
Since mp is either accept or reject, if mp is accept, NMp is defined to be 1 and Pr[V accepts w starting at Mp] = 1 since the message stream indicates acceptance, thus the claim is true. If mp is reject, the argument is very similar.
-
-For the inductive hypothesis, we assume that for some j+1 ≤ p and any message sequence Mj+1, NMj+1 = Pr[V accepts w starting at Mj+1] and then prove the hypothesis for j and any message sequence Mj.
-
-If j is even, mj+1 is a message from V to P. By the definition of NMj,
-$$
-N_{M_j} = \sum\nolimits_{m_{j+1}} \Pr\nolimits_r \left [V(w,r,M_j)=m_{j+1} \right ] N_{M_{j+1}}.
-$$
-
-Then, by the inductive hypothesis, we can say this is equal to
-$$
-\sum\nolimits_{m_{j+1}} \Pr\nolimits_r \left [V(w,r,M_j)=m_{j+1} \right ] \cdot \Pr \left [V\text{ accepts }w\text{ starting at }M_{j+1} \right ].
-$$
-
-Finally, by definition, we can see that this is equal to Pr[V accepts w starting at Mj].
-
-If j is odd, mj+1 is a message from P to V. By definition,
-$$
-N_{M_j} = \max\nolimits_{m_{j+1}} N_{M_{j+1}}.
-$$
-
-Then, by the inductive hypothesis, this equals
-$$
-\max\nolimits_{m_{j+1}} \Pr[V\text{ accepts }w\text{ starting at }M_{j+1}].
-$$
-
-This is equal to Pr[V accepts w starting at Mj] since:
-$$
-\max\nolimits_{m_{j+1}} \Pr[V\text{ accepts }w\text{ starting at }M_{j+1}] \leq \Pr[V\text{ accepts }w\text{ starting at }M_j]
-$$
-
-because the prover on the right-hand side could send the message mj+1 to maximize the expression on the left-hand side, and:
-$$
-\max\nolimits_{m_{j+1}} \Pr\left[V\text{ accepts }w\text{ starting at }M_{j+1} \right] \geq \Pr\left[V\text{ accepts }w\text{ starting at }M_j\right]
-$$
-
-since the same prover cannot do any better than send that same message. Thus, this holds whether j is even or odd, and the proof that IP ⊆ PSPACE is complete.
-
-Here we have constructed a polynomial space machine that uses the best prover P for a particular string w in language A. We use this best prover in place of a prover with random input bits because we are able to try every set of random input bits in polynomial space. Since we have simulated an interactive proof system with a polynomial space machine, we have shown that IP ⊆ PSPACE, as desired.
-
-In order to illustrate the technique that will be used to prove PSPACE ⊆ IP, we will first prove a weaker theorem, which was proven by Lund et al.: #SAT ∈ IP. Then using the concepts from this proof we will extend it to show that TQBF ∈ IP. Since TQBF is PSPACE-complete and TQBF ∈ IP, it follows that PSPACE ⊆ IP.
-
-We begin by showing that #SAT is in IP, where:
-$$
-\#\text{SAT} = \left \{ \langle \varphi, k \rangle \ : \ \varphi \text{ is a CNF-formula with exactly } k \text{ satisfying assignments} \right \}.
-$$
-
-Note that this is different from the normal definition of #SAT, in that it is a decision problem, rather than a function.
-
-First we use arithmetization to map the Boolean formula with n variables, φ(b1, ..., bn), to a polynomial pφ(x1, ..., xn), where pφ mimics φ in that pφ is 1 if φ is true and 0 otherwise, provided that the variables of pφ are assigned Boolean values. The Boolean operations ∨, ∧ and ¬ used in φ are simulated in pφ by replacing the operators in φ as shown in the table below.
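-
-Boolean expression | Polynomial
-a ∧ b | ab
-a ∨ b | a ∗ b := 1 − (1 − a)(1 − b)
-¬a | 1 − a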
-
-As an example, φ = a ∧ (b ∨ ¬c) would be converted into a polynomial as follows:
-
-\begin{align}
-
-p_\varphi &= a \wedge (b \vee \neg c) \\
-
-&= a \wedge \left (b * (1-c) \right ) \\
-
-&= a \wedge \left ( 1 - (1-b)(1 - (1-c)) \right ) \\
-
-&= a \left ( 1 - (1-b)(1 - (1-c)) \right ) \\
-
-&= a - (ac-abc)
-
-\end{align}
-
-The operations ab and a ∗ b each result in a polynomial with a degree bounded by the sum of the degrees of the polynomials for a and b; hence, the degree of any variable is at most the length of φ.
-
-Now let F be a finite field with order $q > 2^n$; also demand that q be at least 1000. For each 0 ≤ i ≤ n, define a function fi on F, having parameters $a_1, \dots, a_{i-1}\in F$ and a single variable ai in F: for 0 ≤ i ≤ n and for $a_1, \dots, a_i \in F$ let
-$$
-f_i(a_1, \dots, a_i) = \sum\nolimits_{a_{i+1}, \dots, a_n \in \{0, 1\}} p(a_1, \dots, a_n).
-$$
-
-Note that the value of f0 is the number of satisfying assignments of φ. f0 is a void function, with no variables.
-
-Now the protocol for #SAT works as follows:
-
-* Phase 0: The prover P chooses a prime $q > 2^n$ and computes f0; it then sends q and f0 to the verifier V. V checks that q is a prime greater than $\max(1000, 2^n)$ and that f0() = k.
-
-* Phase 1: P sends the coefficients of f1(z) as a polynomial in z. V verifies that the degree of f1 is less than n and that f0 = f1(0) + f1(1). (If not, V rejects.) V now sends a random number r1 from F to P.
-
-* Phase i: P sends the coefficients of $f_i(r_1, \dots, r_{i-1}, z)$ as a polynomial in z. V verifies that the degree of fi is less than n and that $f_{i-1}(r_1, \dots, r_{i-1}) = f_i(r_1, \dots, r_{i-1}, 0) + f_i(r_1, \dots, r_{i-1}, 1)$. (If not, V rejects.) V now sends a random number ri from F to P.
-
-* Phase n+1: V evaluates $p(r_1, \dots, r_n)$ to compare to the value $f_n(r_1, \dots, r_n)$. If they are equal V accepts, otherwise V rejects.
-
-Note that this is a public-coin algorithm.
-
-If φ has k satisfying assignments, clearly V will accept. If φ does not have k satisfying assignments we assume there is a prover $\tilde P$ that tries to convince V that φ does have k satisfying assignments. We show that this can only be done with low probability.
-
-To prevent V from rejecting in phase 0, $\tilde P$ has to send an incorrect value $\tilde f_0()$ to V. Then, in phase 1, $\tilde P$ must send an incorrect polynomial $\tilde f_1$ with the property that $\tilde f_1(0)+\tilde f_1(1) = \tilde f_0()$. When V chooses a random r1 to send to P,
-$$
-\Pr \left [\tilde f_1(r_1) = f_1(r_1) \right ] < \tfrac{1}{n^2}.
-$$
-
-This is because a polynomial in a single variable of degree at most d can have no more than d roots (unless it always evaluates to 0). So, any two distinct polynomials in a single variable of degree at most d can be equal in at most d places. Since $|F| > 2^n$, the chance of r1 being one of these values is at most $n/2^n < n/n^3$ if n > 10, or at most $n/1000 \leq n/n^3$ if n ≤ 10.
-
-Generalizing this idea to the other phases, we have, for each 1 ≤ i ≤ n: if
-$$
-\tilde f_{i-1}(r_1, \dots, r_{i-1}) \neq f_{i-1}(r_1, \dots, r_{i-1}),
-$$
-
-then for ri chosen randomly from F,
-$$
-\Pr \left [\tilde f_i(r_1, \dots, r_i) = f_i(r_1, \dots, r_i) \right ] \leq \tfrac{1}{n^2}.
-$$
-
-There are n phases, so the probability that $\tilde P$ is lucky because V selects at some stage a convenient ri is at most $n \cdot (1/n^2) = 1/n$. So, no prover can make the verifier accept with probability greater than 1/n.
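-
-The completeness half of this argument is easy to exercise end to end. The sketch below (Python) plays the honest prover against the verifier for φ = a ∧ (b ∨ ¬c); the field size Q = 1009, the encoding of the prover's messages by polynomial values rather than coefficients, and all identifiers are simplifying assumptions of this sketch, not part of the protocol specification above:
-
-```python
-import itertools
-import random
-
-Q = 1009   # a prime exceeding max(1000, 2^n) for n = 3 variables (our choice)
-N = 3      # number of variables in phi
-D = 3      # number of extra evaluation points standing in for the degree bound
-
-def p(a, b, c):
-    # Arithmetization of phi = a AND (b OR NOT c), as derived above.
-    return a * (1 - (1 - b) * c) % Q
-
-def f(prefix):
-    # f_i: sum of p over all Boolean completions of the fixed prefix.
-    free = N - len(prefix)
-    return sum(p(*prefix, *tail)
-               for tail in itertools.product((0, 1), repeat=free)) % Q
-
-def interpolate(ys, x):
-    # Evaluate, at x, the unique polynomial taking values ys at 0..len(ys)-1
-    # (Lagrange interpolation over the integers mod Q).
-    total = 0
-    for i, yi in enumerate(ys):
-        num = den = 1
-        for j in range(len(ys)):
-            if j != i:
-                num = num * (x - j) % Q
-                den = den * (i - j) % Q
-        total = (total + yi * num * pow(den, Q - 2, Q)) % Q
-    return total
-
-claimed = f(())            # the honest prover's phase-0 message: f_0()
-current, rs = claimed, []
-for _ in range(N):
-    # Prover sends f_i(r_1, ..., r_{i-1}, z), here encoded by its values.
-    ys = [f((*rs, z)) for z in range(D + 1)]
-    assert (ys[0] + ys[1]) % Q == current   # check f_{i-1} = f_i(0) + f_i(1)
-    r = random.randrange(Q)                 # verifier's public random r_i
-    current, rs = interpolate(ys, r), rs + [r]
-assert current == p(*rs)   # phase n+1: compare against p(r_1, ..., r_n)
-print("accepted; claimed satisfying-assignment count:", claimed)
-```
-
-A dishonest prover can be simulated by perturbing claimed or any entry of ys; by the root-counting argument above, the later checks then fail for most choices of the random ri.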
We can also see from the definition that the verifier V operates in probabilistic polynomial time. Thus, #SAT ∈ IP.
-
-In order to show that PSPACE is a subset of IP, we need to choose a PSPACE-complete problem and show that it is in IP. Once we show this, then it is clear that PSPACE ⊆ IP. The proof technique demonstrated here is credited to Adi Shamir.
-
-We know that TQBF is PSPACE-complete. So let ψ be a quantified Boolean expression:
-$$
-\psi = \mathsf Q_1 x_1 \dots \mathsf Q_mx_m[\varphi]
-$$
-
-where φ is a CNF formula and each Qi is a quantifier, either ∃ or ∀. Now fi is the same as in the previous proof, but now it also includes quantifiers.
-
-f_i(a_1, \dots, a_i) = \begin{cases}
-
-1 & \mathsf Q_{i+1}x_{i+1}\dots \mathsf Q_mx_m[\varphi(a_1, \dots, a_i)] \text{ is true}\\
-
-0 & \text{otherwise}
-
-\end{cases}
-
-Here, φ(a1, ..., ai) is φ with a1 to ai substituted for x1 to xi. Thus f0 is the truth value of ψ. In order to arithmetize ψ we must use the following rules:
-
- f_i(a_1, \dots,a_i) = \begin{cases} f_{i+1}(a_1, \dots,a_i,0)\cdot f_{i+1}(a_1, \dots,a_i,1) & \mathsf Q_{i+1} = \forall \\
-
-f_{i+1}(a_1, \dots,a_i,0) * f_{i+1}(a_1, \dots,a_i,1) & \mathsf Q_{i+1} = \exists
-
-\end{cases}
-
-where as before we define x ∗ y = 1 − (1 − x)(1 − y).
-
-If we use the method described for #SAT directly, we face the problem that for each fi the degree of the resulting polynomial may double with each quantifier. In order to prevent this, we must introduce a new reduction operator R, which reduces the degree of a polynomial without changing its behavior on Boolean inputs.
-
-So now before we arithmetize $\psi = \mathsf Q_1x_1\dots \mathsf Q_mx_m[\varphi]$ we introduce a new expression:
-$$
-\psi' = \mathsf Q_1 \mathrm R x_1 \mathsf Q_2 \mathrm R x_1 \mathrm R x_2\dots \mathsf Q_m \mathrm R x_1 \dots \mathrm R x_m [\varphi]
-$$
-
-or put another way:
-$$
-\psi' = \mathsf S_1 y_1\dots \mathsf S_k y_k[\varphi], \qquad \text{ where }\mathsf S_i \in \{ \forall ,\exists , \mathrm R\}, \ y_i \in \{ x_1,\dots,x_m\}
-$$
-
-Now for every i ≤ k we define the function fi. We also define $f_k(x_1,\dots,x_m)$ to be the polynomial p(x1, ..., xm) which is obtained by arithmetizing φ. Now in order to keep the degree of the polynomial low, we define fi in terms of fi+1:
-$$
-\text{If }\mathsf S_{i+1} = \forall, \quad f_i(a_1,\dots,a_i) = f_{i+1}(a_1,\dots,a_i,0) \cdot f_{i+1}(a_1,\dots,a_i,1)
-$$
-$$
-\text{If }\mathsf S_{i+1} = \exists, \quad f_i(a_1,\dots,a_i) = f_{i+1}(a_1,\dots,a_i,0) * f_{i+1}(a_1,\dots,a_i,1)
-$$
-$$
-\text{If }\mathsf S_{i+1} = \mathrm R, \quad f_i(a_1,\dots,a_i,a) = (1-a)f_{i+1}(a_1,\dots,a_i,0) + a f_{i+1}(a_1,\dots,a_i,1)
-$$
-
-Note that the reduction operation R does not increase the degree of the polynomial. It is also important to see that the Rx operation doesn't change the value of the function on Boolean inputs; it merely produces a result that is linear in x. So f0 is still the truth value of ψ. Also, after any $\mathsf Q_i x_i$ we add $\mathrm R_{x_1}\dots \mathrm R_{x_i}$ in ψ′ in order to reduce the degree down to 1 after arithmetizing $\mathsf Q_i$.
-
-Now let us describe the protocol. If n is the length of ψ, all arithmetic operations in the protocol are over a field of size at least $n^4$.
-
-* Phase 0: P → V: P sends f0 to V. V checks that f0 = 1 and rejects if not.
-
-* Phase 1: P → V: P sends f1(z) to V. V uses coefficients to evaluate f1(0) and f1(1).
Then it checks that the polynomial's degree is at most n and that the following identities are true:
-
-f_{0}(\varnothing) = \begin{cases}
-
-f_{1}(0)\cdot f_{1}(1) & \text{ if }\mathsf S = \forall \\
-
-f_{1}(0) * f_{1}(1) & \text{ if }\mathsf S = \exists \\
-
-(1-r)f_{1}(0) + rf_{1}(1) & \text{ if }\mathsf S = \mathrm R
-
-\end{cases}
-
-If any of these checks fails, V rejects.
-
-* Phase i: P → V: P sends $f_i(r_1,\dots,r_{i-1},z)$ as a polynomial in z. Here $r_1,\dots,r_{i-1}$ denote the previously chosen random values.
-
-V uses coefficients to evaluate $f_i(r_1,\dots,r_{i-1},0)$ and $f_i(r_1,\dots,r_{i-1},1)$. Then it checks that the polynomial degree is at most n and that the following identities are true:
-
-f_{i-1}(r_1,\dots,r_{i-1}) = \begin{cases} f_{i}(r_1,\dots,r_{i-1},0)\cdot f_{i}(r_1,\dots, r_{i-1},1) & \mathsf S = \forall \\
-
-f_{i}(r_1,\dots,r_{i-1},0) * f_i(r_1, \dots,r_{i-1},1) & \mathsf S = \exists
-
-\end{cases}
-$$
-f_{i-1}(r_1,\dots,r) = (1-r)f_{i}(r_1,\dots,r_{i-1},0) + rf_{i}(r_1,\dots,r_{i-1},1)\text{ if }\mathsf S = \mathrm R.
-$$
-
-If any of these checks fails, V rejects.
-
-V → P: V picks a random r in F and sends it to P. (If $\mathsf S=\mathrm R$ then this r replaces the previous r.)
-
-Go to phase i + 1, where P must persuade V that $f_i(r_1,\dots,r)$ is correct.
-
-* Phase k + 1: V evaluates $p(r_1,\dots,r_m)$. Then it checks if $p(r_1,\dots,r_m) = f_k(r_1,\dots,r_m)$. If they are equal then V accepts, otherwise V rejects.
-
-This is the end of the protocol description.
-
-If ψ is true then V will accept when P follows the protocol. Conversely, if ψ is false, then a malicious prover $ \tilde{P} $ will need to lie at phase 0 and send some incorrect value for f0. If at phase i, V has an incorrect value for $f_{i-1}(r_1,\dots)$ then $f_i(r_1,\dots,0)$ and $f_i(r_1,\dots,1)$ will likely also be incorrect, and so forth. The probability for $ \tilde{P} $ to get lucky on some random r is at most the degree of the polynomial divided by the field size: $n/n^4$. The protocol runs through $O(n^2)$ phases, so the probability that $ \tilde{P} $ gets lucky at some phase is ≤ 1/n. If $\tilde{P} $ is never lucky, then V will reject at phase k+1.
-
-Since we have now shown that both IP ⊆ PSPACE and PSPACE ⊆ IP, we can conclude that IP = PSPACE as desired. Moreover, we have shown that any IP algorithm may be taken to be public-coin, since the reduction from PSPACE to IP has this property.
-
-There are a number of variants of IP which slightly modify the definition of the interactive proof system. We summarize some of the better-known ones here.
-
-A subset of IP is the deterministic Interactive Proof class, which is similar to IP but has a deterministic verifier (i.e. with no randomness).
-
-This class is equal to NP.
-
-An equivalent definition of IP replaces the condition that the interaction succeeds with high probability on strings in the language with the requirement that it always succeeds:
-$$
-w \in L \Rightarrow \Pr[V \leftrightarrow P\text{ accepts }w] = 1
-$$
-
-This seemingly stronger criterion of "perfect completeness" does not change the complexity class IP, since any language with an interactive proof system may be given an interactive proof system with perfect completeness.
-
-In 1988, Goldwasser et al. created an even more powerful interactive proof system based on IP called MIP in which there are two independent provers. The two provers cannot communicate once the verifier has begun sending messages to them.
Just as it's easier to tell if a criminal is lying if he and his partner are interrogated in separate rooms, it's considerably easier to detect a malicious prover trying to trick the verifier if there is another prover it can double-check with. In fact, this is so helpful that Babai, Fortnow, and Lund were able to show that MIP = NEXPTIME, the class of all problems solvable by a nondeterministic machine in exponential time, a very large class. Moreover, all languages in NP have zero-knowledge proofs in an MIP system, without any additional assumptions; this is only known for IP assuming the existence of one-way functions.
-
-IPP (unbounded IP) is a variant of IP where we replace the BPP verifier by a PP verifier. More precisely, we modify the completeness and soundness conditions as follows:
-
-* Completeness: if a string is in the language, the honest verifier will be convinced of this fact by an honest prover with probability at least 1/2.
-
-* Soundness: if the string is not in the language, no prover can convince the honest verifier that it is in the language, except with probability less than 1/2.
-
-Although IPP also equals PSPACE, IPP protocols behave quite differently from IP with respect to oracles: IPP = PSPACE with respect to all oracles, while IP ≠ PSPACE with respect to almost all oracles.
-
-QIP is a version of IP replacing the BPP verifier by a BQP verifier, where BQP is the class of problems solvable by quantum computers in polynomial time. The messages are composed of qubits. In 2009, Jain, Ji, Upadhyay, and Watrous proved that QIP also equals PSPACE, implying that this change gives no additional power to the protocol. This subsumes a previous result of Kitaev and Watrous that QIP is contained in EXPTIME because QIP = QIP[3], so that more than three rounds are never necessary.
-
-Whereas IPP and QIP give more power to the verifier, a compIP system (competitive IP proof system) weakens the completeness condition in a way that weakens the prover:
-
-* Completeness: if a string is in the language L, the honest verifier will be convinced of this fact by an honest prover with probability at least 2/3. Moreover, the prover will do so in probabilistic polynomial time given access to an oracle for the language L.
-
-Essentially, this makes the prover a BPP machine with access to an oracle for the language, but only in the completeness case, not the soundness case. The concept is that if a language is in compIP, then interactively proving it is in some sense as easy as deciding it. With the oracle, the prover can easily solve the problem, but its limited power makes it much more difficult to convince the verifier of anything. In fact, compIP isn't even known or believed to contain NP.
-
-On the other hand, such a system can solve some problems believed to be hard. Somewhat paradoxically, though such a system is not believed to be able to solve all of NP, it can easily solve all NP-complete problems due to self-reducibility. This stems from the fact that if the language L is not NP-hard, the prover is substantially limited in power (as it can no longer decide all NP problems with its oracle).
-
-Additionally, the graph nonisomorphism problem (which is a classical problem in IP) is also in compIP, since the only hard operation the prover has to do is isomorphism testing, which it can use the oracle to solve. Quadratic non-residuosity and graph isomorphism are also in compIP.
Note that quadratic non-residuosity (QNR) is likely an easier problem than graph isomorphism, as QNR is in UP ∩ co-UP. diff --git a/wiki/wikipedia/3344.txt b/wiki/wikipedia/3344.txt deleted file mode 100644 index 6faf5d5ce2274fdbf09079e1027b5331d012ce1e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3344.txt +++ /dev/null @@ -1,7 +0,0 @@ -Chris Danforth is a computer scientist and an applied mathematics professor at the University of Vermont. He is known for his work with the Hedonometer, a tool developed for measuring collective mood with sentiment analysis.
-
-Danforth directs the Computational Story Lab at Vermont Complex Systems Center. His research focuses on exploring human behavior through social media data.
-
-In 2007, Chris Danforth collaborated with Peter Sheridan Dodds to develop a tool to measure happiness that they called "hedonometer." To create the hedonometer, a team directed by Danforth surveyed speakers of several languages, asking them to rate words on a scale from happiest to saddest.
-
-In collaboration with social psychologist Andrew Reece, Danforth found that depressed people post photos on Instagram whose colors are cooler and darker than those of non-depressed people. In 2020, he found evidence that social media analysis techniques might identify viral outbreaks. diff --git a/wiki/wikipedia/3345.txt b/wiki/wikipedia/3345.txt deleted file mode 100644 index be18ee8be0bfd3beac2b36d50e13c5570c587d04..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3345.txt +++ /dev/null @@ -1,65 +0,0 @@ -Chow's lemma, named after Wei-Liang Chow, is one of the foundational results in algebraic geometry. It roughly says that a proper morphism is fairly close to being a projective morphism. More precisely, a version of it states the following:
-
-If $X$ is a scheme that is proper over a noetherian base $S$, then there exists a projective $S$-scheme $X'$ and a surjective $S$-morphism $f: X' \to X$ that induces an isomorphism $f^{-1}(U) \simeq U$ for some dense open $U\subseteq X.$
-
-The proof here is a standard one.
-
-We can first reduce to the case where $X$ is irreducible. To start, $X$ is noetherian since it is of finite type over a noetherian base. Therefore it has finitely many irreducible components $X_i$, and we claim that for each $X_i$ there is an irreducible proper $S$-scheme $Y_i$ so that $Y_i\to X$ has set-theoretic image $X_i$ and is an isomorphism on the open dense subset $X_i\setminus \cup_{j\neq i} X_j$ of $X_i$. To see this, define $Y_i$ to be the scheme-theoretic image of the open immersion
-$$
-X\setminus \cup_{j\neq i} X_j \to X.
-$$
-
-Since $X\setminus \cup_{j\neq i} X_j$ is set-theoretically noetherian for each $i$, the map $X\setminus \cup_{j\neq i} X_j\to X$ is quasi-compact and we may compute this scheme-theoretic image affine-locally on $X$, immediately proving the two claims. If we can produce for each $Y_i$ a projective $S$-scheme $Y_i'$ as in the statement of the theorem, then we can take $X'$ to be the disjoint union $\coprod Y_i'$ and $f$ to be the composition $\coprod Y_i' \to \coprod Y_i\to X$: this map is projective, and an isomorphism over a dense open set of $X$, while $\coprod Y_i'$ is a projective $S$-scheme since it is a finite union of projective $S$-schemes. Since each $Y_i$ is proper over $S$, we've completed the reduction to the case $X$ irreducible.
-
-Next, we will show that $X$ can be covered by a finite number of open subsets $U_i$ so that each $U_i$ is quasi-projective over $S$.
To do this, we may by quasi-compactness first cover $S$ by finitely many affine opens $S_j$, and then cover the preimage of each $S_j$ in $X$ by finitely many affine opens $X_{jk}$, each with a closed immersion into $\mathbb{A}^n_{S_j}$, since $X\to S$ is of finite type and therefore quasi-compact. Composing this map with the open immersions $\mathbb{A}^n_{S_j}\to \mathbb{P}^n_{S_j}$ and $\mathbb{P}^n_{S_j} \to \mathbb{P}^n_S$, we see that each $X_{jk}$ is a closed subscheme of an open subscheme of $\mathbb{P}^n_S$. As $\mathbb{P}^n_S$ is noetherian, every closed subscheme of an open subscheme is also an open subscheme of a closed subscheme, and therefore each $X_{jk}$ is quasi-projective over $S$.
-
-Now suppose $\{U_i\}$ is a finite open cover of $X$ by quasi-projective $S$-schemes, with $\phi_i:U_i\to P_i$ an open immersion into a projective $S$-scheme. Set $U=\cap_i U_i$, which is nonempty as $X$ is irreducible. The restrictions of the $\phi_i$ to $U$ define a morphism
-$$
-\phi: U \to P = P_1 \times_S \cdots \times_S P_n
-$$
-
-so that $U\to U_i\to P_i = U\stackrel{\phi}{\to} P \stackrel{p_i}{\to} P_i$, where $U\to U_i$ is the canonical injection and $p_i:P\to P_i$ is the projection. Letting $j:U\to X$ denote the canonical open immersion, we define $\psi=(j,\phi)_S: U\to X\times_S P$, which we claim is an immersion. To see this, note that this morphism can be factored as the graph morphism $U\to U\times_S P$ (which is a closed immersion as $P\to S$ is separated) followed by the open immersion $U\times_S P\to X\times_S P$; as $X\times_S P$ is noetherian, we can apply the same logic as before to see that we can swap the order of the open and closed immersions.
-
-Now let $X'$ be the scheme-theoretic image of $\psi$, and factor $\psi$ as
-$$
- \psi:U\stackrel{\psi'}{\to} X'\stackrel{h}{\to} X\times_S P
-$$
-
-where $\psi'$ is an open immersion and $h$ is a closed immersion. Let $q_1:X\times_S P\to X$ and $q_2:X\times_S P\to P$ be the canonical projections.
-
-Set
-$$
-f:X'\stackrel{h}{\to} X\times_S P \stackrel{q_1}{\to} X,
-$$
-$$
-g:X'\stackrel{h}{\to} X\times_S P \stackrel{q_2}{\to} P.
-$$
-
-We will show that $X'$ and $f$ satisfy the conclusion of the theorem.
-
-To show $f$ is surjective, we first note that it is proper and therefore closed. As its image contains the dense open set $U\subset X$, we see that $f$ must be surjective. It is also straightforward to see that $f$ induces an isomorphism on $U$: we may just combine the facts that $f^{-1}(U)=h^{-1}(U\times_S P)$ and $\psi$ is an isomorphism onto its image, as $\psi$ factors as the composition of a closed immersion followed by an open immersion $U\to U\times_S P \to X\times_S P$. It remains to show that $X'$ is projective over $S$.
-
-We will do this by showing that $g:X'\to P$ is an immersion. We define the following four families of open subschemes:
-$$
- V_i = \phi_i(U_i)\subset P_i
-$$
-$$
- W_i = p_i^{-1}(V_i)\subset P
-$$
-$$
- U_i' = f^{-1}(U_i)\subset X'
-$$
-$$
- U_i = g^{-1}(W_i)\subset X'.
-$$
-
-As the $U_i$ cover $X$, the $U_i'$ cover $X'$, and we wish to show that the $U_i$ also cover $X'$. We will do this by showing that $U_i'\subset U_i$ for all $i$. It suffices to show that $p_i\circ g|_{U_i'}:U_i'\to P_i$ is equal to $\phi_i\circ f|_{U_i'}:U_i'\to P_i$ as a map of topological spaces.
Replacing $U_i'$ by its reduction, which has the same underlying topological space, we have that the two morphisms $(U_i')_{red}\to P_i$ are both extensions of the underlying map of topological spaces $U\to U_i\to P_i$, so by the reduced-to-separated lemma they must be equal as $U$ is topologically dense in $U_i'$. Therefore $U_i'\subset U_i$ for all $i$ and the claim is proven.
-
-The upshot is that the $W_i$ cover $g(X')$, and we can check that $g$ is an immersion by checking that $g|_{U_i}:U_i\to W_i$ is an immersion for all $i$. For this, consider the morphism
-$$
- u_i:W_i\stackrel{p_i}{\to} V_i\stackrel{\phi_i^{-1}}{\to} U_i\to X.
-$$
-
-Since $X\to S$ is separated, the graph morphism $\Gamma_{u_i}:W_i\to X\times_S W_i$ is a closed immersion and the graph $T_i=\Gamma_{u_i}(W_i)$ is a closed subscheme of $X\times_S W_i$; if we show that $U\to X\times_S W_i$ factors through this graph (where we consider $U\subset X'$ via our observation that $f$ is an isomorphism over $f^{-1}(U)$ from earlier), then the map from $U_i$ must also factor through this graph by construction of the scheme-theoretic image. Since the restriction of $q_2$ to $T_i$ is an isomorphism onto $W_i$, the restriction of $g$ to $U_i$ will be an immersion into $W_i$, and our claim will be proven. Let $v_i$ be the canonical injection $U\subset X' \to X\times_S W_i$; we have to show that there is a morphism $w_i:U\subset X'\to W_i$ so that $v_i=\Gamma_{u_i}\circ w_i$. By the definition of the fiber product, it suffices to prove that $q_1\circ v_i= u_i\circ q_2\circ v_i$, or by identifying $U\subset X$ and $U\subset X'$, that $q_1\circ\psi=u_i\circ q_2\circ \psi$. But $q_1\circ\psi = j$ and $q_2\circ\psi=\phi$, so the desired conclusion follows from the definition of $\phi:U\to P$. Thus $g$ is an immersion. Since $X'\to S$ is proper, any $S$-morphism out of $X'$ is closed, and thus $g:X'\to P$ is a closed immersion, so $X'$ is projective. $\blacksquare$
-
-In the statement of Chow's lemma, if $X$ is reduced, irreducible, or integral, we can assume that the same holds for $X'$. If both $X$ and $X'$ are irreducible, then $f: X' \to X$ is a birational morphism. diff --git a/wiki/wikipedia/3346.txt b/wiki/wikipedia/3346.txt deleted file mode 100644 index b4aaa45673c161df7470d3057d320defe7d71f01..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3346.txt +++ /dev/null @@ -1,29 +0,0 @@ -The Curtis–Hedlund–Lyndon theorem is a mathematical characterization of cellular automata in terms of their symbolic dynamics. It is named after Morton L. Curtis, Gustav A. Hedlund, and Roger Lyndon; in his 1969 paper stating the theorem, Hedlund credited Curtis and Lyndon as co-discoverers. It has been called "one of the fundamental results in symbolic dynamics".
-
-The theorem states that a function from a shift space to itself represents the transition function of a one-dimensional cellular automaton if and only if it is continuous (with respect to the Cantor topology) and equivariant (with respect to the shift map). More generally, it asserts that the morphisms between any two shift spaces (that is, continuous mappings that commute with the shift) are exactly those mappings which can be defined uniformly by a local rule.
-
-The version of the theorem in Hedlund's paper applied only to one-dimensional finite automata, but a generalization to higher-dimensional integer lattices was soon afterwards published by Richardson, and it can be even further generalized from lattices to discrete groups.
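-
-For intuition about the easy direction of the theorem, any local rule immediately yields a map that commutes with the shift. Here is a minimal spot-check in Python; the choice of elementary rule 110, the use of periodic configurations as finite stand-ins for bi-infinite ones, and all names are illustrative assumptions, not from the theorem:
-
-```python
-import random
-
-RULE = 110   # an arbitrary elementary cellular automaton rule
-
-def step(config):
-    # Apply the radius-1 local rule simultaneously to every cell of a
-    # periodic (circular) configuration of 0s and 1s.
-    n = len(config)
-    return tuple(
-        (RULE >> (4 * config[(i - 1) % n]
-                  + 2 * config[i]
-                  + config[(i + 1) % n])) & 1
-        for i in range(n)
-    )
-
-def shift(config):
-    # The shift map s: (s(x))_i = x_{i-1}.
-    return config[-1:] + config[:-1]
-
-x = tuple(random.randint(0, 1) for _ in range(32))
-assert step(shift(x)) == shift(step(x))   # equivariance: f(s(x)) = s(f(x))
-print("the local rule commutes with the shift on this sample")
-```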
One important consequence of the theorem is that, for reversible cellular automata, the reverse dynamics of the automaton can also be described by a cellular automaton.
-
-An alphabet is any finite set of symbols, which may be thought of as the states of the cells in a cellular automaton. A configuration is a bi-infinite sequence of symbols from the alphabet:
-
-$\dots, x_{-2}, x_{-1}, x_0, x_1, x_2, \dots$
-
-A position in a configuration is an integer, the index of one of the symbols in the sequence; the positions may be thought of as the cells of a cellular automaton. A pattern is a finite set of positions and an assignment of symbols to each of these positions.
-
-The shift space is the set of all possible configurations over a given alphabet. It may be given the structure of a topological space according to the Cantor topology, in which the fundamental open sets are the sets of configurations that match any single pattern and the open sets are arbitrary unions of fundamental open sets. In this topology, a function f from configurations to configurations is continuous if, for any fixed pattern p defining a fundamental open set P, the set $f^{-1}(P)$ of configurations mapped by f into P can itself be described by a (possibly infinite) set S of patterns, with the property that a configuration belongs to $f^{-1}(P)$ if and only if it matches a pattern in S.
-
-The shift map is a particular continuous function s on the shift space that transforms a configuration x into a new configuration y in which each symbol is shifted one position over from its previous position: that is, for every integer i, $y_i = x_{i-1}$. A function f is equivariant under the shift map if the transformation on configurations described by f commutes with the shift map; that is, for every configuration x, it must be the case that f(s(x)) = s(f(x)). Intuitively, this means that every position of the configuration is updated by f using the same rule as every other position.
-
-A cellular automaton is defined by a rule for computing the new value of each position in a configuration based only on the values of cells in a prior-fixed finite neighborhood surrounding the position, with all positions of the configuration being updated simultaneously based on the same update rule. That is, the new value of a position is a function only of the values of the cells in its neighborhood rather than depending more generally on an unbounded number of cells of the previous configuration. The function f that uses this rule to map a configuration of the cellular automaton into its successor configuration is necessarily equivariant with respect to the shift map, by the assumption that all positions use the same update rule. It is also necessarily continuous in the Cantor topology: if p is a fixed pattern, defining a fundamental open set P, then $f^{-1}(P)$ is defined by a finite set of patterns, the assignments to cells in the neighborhood of p that cause f to produce p. The Curtis–Hedlund–Lyndon theorem states that these two properties are sufficient to define cellular automata: every continuous equivariant function is the update rule of a cellular automaton.
-
-Ceccherini-Silberstein and Coornaert provide the following proof of the Curtis–Hedlund–Lyndon theorem.
-
-Suppose f is a continuous shift-equivariant function on the shift space. For each configuration x, let p be the pattern consisting of the single symbol that appears at position zero of f(x).
-
-By continuity of f, there must exist a finite pattern q in x such that, if the positions outside q are changed arbitrarily but the positions within q are fixed to their values in x, then the result of applying f remains the same at position zero. Equivalently, there must exist a fundamental open set $Q_x$ such that x belongs to $Q_x$ and such that for every configuration y in $Q_x$, f(x) and f(y) have the same value at position zero. These fundamental open sets $Q_x$ (for all possible configurations x) form an open cover of the shift space. However, the shift space is a compact space: it is a product of finite topological spaces with the alphabet as their points, so compactness follows from Tychonoff's theorem. By compactness, every open cover has a finite subcover. The finite set of positions appearing in this finite subcover may be used as the neighborhood of position zero in a description of f as a cellular automaton rule.
-
-The same proof applies more generally when the set of integer positions is replaced by any discrete group G, the space of configurations is replaced by the set of functions from G to a finite alphabet, and shift-equivariance is replaced by equivariance under the action of G on itself. In particular, it applies to cellular automata defined on an integer grid of any dimension.
-
-Consider the space of bi-infinite sequences of integers, and define a function f from this space to itself according to the rule that, if f(x) = y, then for every position i, $y_i = x_{i+x_i}$. This rule is the same for each position, so it is shift-equivariant. And it can be shown to be continuous according to the Cantor topology: for each finite pattern p in y, there is a pattern in x with at most twice as many positions that forces f to generate p, consisting of the cells in p together with the cells whose values are copied into p. However, despite being continuous and equivariant, f is not a cellular automaton rule, because the value of any cell can potentially depend on the value of any other cell rather than only depending on the cells in any prior-fixed finite neighborhood. diff --git a/wiki/wikipedia/3347.txt b/wiki/wikipedia/3347.txt deleted file mode 100644 index 5d4e8f8d1286f3fe2a7b8c7a125e4b47d4f5b91c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3347.txt +++ /dev/null @@ -1,21 +0,0 @@ -In logic, the contrapositive of a conditional statement is formed by negating both terms and reversing the direction of inference. More specifically, the contrapositive of the statement "if A, then B" is "if not B, then not A." A statement and its contrapositive are logically equivalent, in the sense that if the statement is true, then its contrapositive is true and vice versa.
-
-In mathematics, proof by contrapositive, or proof by contraposition, is a rule of inference used in proofs, where one infers a conditional statement from its contrapositive. In other words, the conclusion "if A, then B" is inferred by constructing a proof of the claim "if not B, then not A" instead. More often than not, this approach is preferred if the contrapositive is easier to prove than the original conditional statement itself.
-
-Logically, the validity of proof by contrapositive can be demonstrated by the use of the following truth table, where it is shown that p → q and $\lnot$q → $\lnot$p share the same truth values in all scenarios:
-
-p | q | p → q | ¬q → ¬p
-T | T | T | T
-T | F | F | F
-F | T | T | T
-F | F | T | T
-
-Proof by contradiction: To prove $A$, assume $\neg A$ is true and derive a statement $B$ that is known to be false; since $\neg A \to B$ and $B$ is false, $\neg A$ is false, and $A$ is true.
-
-Proof by contrapositive: To prove $A \to B$, prove its contrapositive statement, which is $\neg B \to \neg A$.
-
-Let x be an integer.
-
-To prove: If $x^2$ is even, then x is even.
-
-Although a direct proof can be given, we choose to prove this statement by contraposition. The contrapositive of the above statement is:
-
-If x is not even, then $x^2$ is not even.
-
-This latter statement can be proven as follows: suppose that x is not even; then x is odd. The product of two odd numbers is odd, hence $x^2 = x \cdot x$ is odd. Thus $x^2$ is not even.
-
-Having proved the contrapositive, we can then infer that the original statement is true. diff --git a/wiki/wikipedia/3348.txt b/wiki/wikipedia/3348.txt deleted file mode 100644 index b39984eb0b2c2f2edc0e44405463f63b9faa9159..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3348.txt +++ /dev/null @@ -1,15 +0,0 @@ -Given a connected, undirected graph G, a shortest-path tree rooted at vertex v is a spanning tree T of G, such that the path distance from root v to any other vertex u in T is the shortest path distance from v to u in G.
-
-In connected graphs where shortest paths are well-defined (i.e. where there are no negative-length cycles), we may construct a shortest-path tree using the following algorithm:
-
-# Compute dist(u), the shortest-path distance from root v to vertex u in G, using Dijkstra's algorithm or the Bellman–Ford algorithm.
-
-# For all non-root vertices u, we can assign to u a parent vertex pu such that pu is connected to u, and that dist(pu) + edge_dist(pu,u) = dist(u). In case multiple choices for pu exist, choose pu for which there exists a shortest path from v to pu with as few edges as possible; this tie-breaking rule is needed to prevent loops when there exist zero-length cycles.
-
-# Construct the shortest-path tree using the edges between each node and its parent.
-
-The above algorithm guarantees the existence of shortest-path trees. Like minimum spanning trees, shortest-path trees in general are not unique.
-
-In graphs for which all edge weights equal one, shortest-path trees coincide with breadth-first search trees.
-
-In graphs that have negative cycles, the set of shortest simple paths from v to all other vertices does not necessarily form a tree. diff --git a/wiki/wikipedia/3349.txt b/wiki/wikipedia/3349.txt deleted file mode 100644 index d07e5c64ab94259d980cd1b561a1e591fa359953..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3349.txt +++ /dev/null @@ -1,90 +0,0 @@ -In mathematics, and in particular, algebra, a generalized inverse (or g-inverse) of an element x is an element y that has some properties of an inverse element but not necessarily all of them. Generalized inverses can be defined in any mathematical structure that involves associative multiplication, that is, in a semigroup. This article describes generalized inverses of a matrix $A$.
-
-A matrix $A^\mathrm{g} \in \mathbb{R}^{n \times m}$ is a generalized inverse of a matrix $A \in \mathbb{R}^{m \times n}$ if $ AA^\mathrm{g}A = A.$ The matrix $A^{-1}$ has been termed a regular inverse of $A$ by some authors.
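-
-The Moore–Penrose pseudoinverse computed by common linear-algebra libraries is one concrete generalized inverse, so it must satisfy the defining equation even for a rectangular, rank-deficient matrix. A quick numerical check in Python with NumPy; the sample matrix is an arbitrary illustrative choice:
-
-```python
-import numpy as np
-
-# A is rectangular and rank 1, so no ordinary inverse exists, yet the
-# pseudoinverse still satisfies the defining equation A A^g A = A.
-A = np.array([[1., 2., 3.],
-              [2., 4., 6.]])
-Ag = np.linalg.pinv(A)          # one particular generalized inverse of A
-assert np.allclose(A @ Ag @ A, A)
-```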
-
-The Penrose conditions define different generalized inverses for $A \in \mathbb{R}^{n\times m}$ and $A^{\mathrm g} \in \mathbb{R}^{m\times n}:$
-
-# $ A A^\mathrm{g} A = A $
-
-# $ A^\mathrm{g} A A^\mathrm{g}= A^\mathrm{g} $
-
-# $ (A A^\mathrm{g})^* = A A^\mathrm{g} $
-
-# $ (A^\mathrm{g} A)^* = A^\mathrm{g} A, $
-
-where ${}^*$ indicates conjugate transpose. If $A^\mathrm{g}$ satisfies the first condition, then it is a generalized inverse of $A$. If it satisfies the first two conditions, then it is a reflexive generalized inverse of $A$. If it satisfies all four conditions, then it is a pseudoinverse of $A$, denoted by $A^+$.
-
-When $A$ is non-singular, every generalized inverse satisfies $A^\mathrm{g} = A^{-1}$ and is therefore unique. Otherwise, there are an infinite number of $I$-inverses for a given $I$ with fewer than 4 elements. However, the pseudoinverse is unique. (Note: $I$ and $I$-inverse are defined in the Penrose conditions sub-section below and should not be confused with another common use of $I$ to mean an identity matrix.)
-
-There are other kinds of generalized inverse:
-
-* One-sided inverse (right inverse or left inverse)
-
-** Right inverse: If the matrix $A$ has dimensions $n \times m$ and $ \textrm{rank} (A) = n$, then there exists an $m \times n$ matrix $A_{\mathrm{R}}^{-1}$ called the right inverse of $A$ such that $ A A_{\mathrm{R}}^{-1} = I_n $, where $I_n$ is the $n \times n$ identity matrix.
-
-** Left inverse: If the matrix $A$ has dimensions $n \times m$ and $ \textrm{rank} (A) = m$, then there exists an $m \times n$ matrix $A_{\mathrm{L}}^{-1}$ called the left inverse of $A$ such that $A_{\mathrm{L}}^{-1} A = I_m $, where $I_m$ is the $m \times m$ identity matrix.
-
-* A left inverse of a non-square matrix $A$ is given by $A_\mathrm{L}^{-1} = \left(A^{\intercal} A \right)^{-1} A^{\intercal}$, provided $A$ has full column rank.
-
-* If $A = BC$ is a rank factorization, then $G = C_\mathrm{R}^{-1} B_\mathrm{L}^{-1}$ is a g-inverse of $A$, where $C_\mathrm{R}^{-1}$ is a right inverse of $C$ and $B_\mathrm{L}^{-1}$ is a left inverse of $B$.
-
-* If $A = P \begin{bmatrix}I_r & 0 \\ 0 & 0 \end{bmatrix} Q$ for any non-singular matrices $P$ and $Q$, then $G = Q^{-1} \begin{bmatrix}I_r & U \\ W & V \end{bmatrix} P^{-1}$ is a generalized inverse of $A$ for arbitrary $U, V$ and $W$.
-
-* Let $A$ be of rank $r$. Without loss of generality, let
-$$
-A = \begin{bmatrix}B & C\\ D & E\end{bmatrix},
-$$
-where $B_{r \times r}$ is the non-singular submatrix of $A$. Then,
-$$
-G = \begin{bmatrix} B^{-1} & 0\\ 0 & 0 \end{bmatrix}
-$$
-is a generalized inverse of $A$ if and only if $E=DB^{-1}C$.
-
-Any generalized inverse can be used to determine whether a system of linear equations has any solutions, and if so to give all of them. If any solutions exist for the n × m linear system
-$$
-Ax = b
-$$,
-
-with vector $x$ of unknowns and vector $b$ of constants, all solutions are given by
-$$
-x = A^\mathrm{g}b + \left[I - A^\mathrm{g}A\right]w
-$$,
-
-parametric on the arbitrary vector $w$, where $A^\mathrm{g}$ is any generalized inverse of $A$. Solutions exist if and only if $A^\mathrm{g}b$ is a solution, that is, if and only if $AA^\mathrm{g}b = b$. If A has full column rank, the bracketed expression in this equation is the zero matrix and so the solution is unique.
-
-The Penrose conditions are used to classify and study different generalized inverses.
As noted above, the Penrose conditions are:
-
-# $ A A^\mathrm{g} A = A $
-
-# $ A^\mathrm{g} A A^\mathrm{g} = A^\mathrm{g} $
-
-# $ \left(A A^\mathrm{g}\right)^* = A A^\mathrm{g} $
-
-# $ \left(A^\mathrm{g} A\right)^* = A^\mathrm{g} A. $
-
-In contrast to the above, here the numbering of the properties is relevant; this numbering is used in the literature. An $I$-inverse of $A$, where $I \subset \{1, 2, 3, 4\}$, is a generalized inverse of $A$ which satisfies the Penrose conditions listed in $I$. For example, a reflexive inverse is a $\{1, 2\}$-inverse, a right-inverse is a $\{1, 2, 3\}$-inverse, a left-inverse is a $\{1, 2, 4\}$-inverse, and a pseudoinverse is a $\{1, 2, 3, 4\}$-inverse. Much research has been devoted to the study of the relationships between these different classes of generalized inverses; many such results can be found in the literature. An example of such a result is the following:
-
-* $A^{(1, 4)} A A^{(1, 3)} = A^+$ for any $\{1,4\}$-inverse $A^{(1, 4)}$ and $\{1,3\}$-inverse $A^{(1, 3)}$. In particular, $A^+ A A^{(1, 3)} = A^+$ for any $\{1,3\}$-inverse $A^{(1, 3)}$.
-
-The section on generalized inverses of matrices provides explicit characterizations for the different classes.
-
-The generalized inverses of matrices can be characterized as follows. Let $A \in \mathbb{R}^{m \times n}$, and
-
-A = U \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix} V^\textsf{T}
-
-be its singular-value decomposition. Then for any generalized inverse $A^g$, there exist matrices $X$, $Y$, and $Z$ such that
-
-A^g = V \begin{bmatrix} \Sigma_1^{-1} & X \\ Y & Z \end{bmatrix} U^\textsf{T}.
-
-Conversely, any choice of $X$, $Y$, and $Z$ in a matrix of this form is a generalized inverse of $A$. The $\{1,2\}$-inverses are exactly those for which $Z = Y \Sigma_1 X$, the $\{1,3\}$-inverses are exactly those for which $X = 0$, and the $\{1,4\}$-inverses are exactly those for which $Y = 0$. In particular, the pseudoinverse is given by $X = Y = Z = 0$:
-
-A^+ = V \begin{bmatrix} \Sigma_1^{-1} & 0 \\ 0 & 0 \end{bmatrix} U^\textsf{T}.
-
-In practical applications it is necessary to identify the class of matrix transformations that must be preserved by a generalized inverse. For example, the Moore–Penrose inverse, $A^+,$ satisfies the following definition of consistency with respect to transformations involving unitary matrices U and V:
-$$
-(UAV)^+ = V^* A^+ U^*
-$$.
-
-The Drazin inverse, $A^\mathrm{D}$, satisfies the following definition of consistency with respect to similarity transformations involving a nonsingular matrix S:
-$$
-\left(SAS^{-1}\right)^\mathrm{D} = S A^\mathrm{D} S^{-1}
-$$.
-
-The unit-consistent (UC) inverse, $A^\mathrm{U},$ satisfies the following definition of consistency with respect to transformations involving nonsingular diagonal matrices D and E:
-$$
-(DAE)^\mathrm{U} = E^{-1} A^\mathrm{U} D^{-1}
-$$.
-
-The fact that the Moore–Penrose inverse provides consistency with respect to rotations (which are orthonormal transformations) explains its widespread use in physics and other applications in which Euclidean distances must be preserved. The UC inverse, by contrast, is applicable when system behavior is expected to be invariant with respect to the choice of units on different state variables, e.g., miles versus kilometers.
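-
-Returning to the linear-system application described earlier, the general-solution formula $x = A^\mathrm{g}b + [I - A^\mathrm{g}A]w$ is easy to verify numerically. A Python/NumPy sketch; the particular underdetermined system, the use of the pseudoinverse as the generalized inverse, and the random parameter vectors are illustrative choices:
-
-```python
-import numpy as np
-
-rng = np.random.default_rng(0)
-A = np.array([[1., 0., 1.],
-              [0., 1., 1.]])
-b = np.array([2., 3.])              # consistent: e.g. x = (2, 3, 0) works
-Ag = np.linalg.pinv(A)              # any generalized inverse would do
-
-assert np.allclose(A @ Ag @ b, b)   # solvability test: A A^g b = b
-for _ in range(3):                  # every choice of w yields a solution
-    w = rng.standard_normal(3)
-    x = Ag @ b + (np.eye(3) - Ag @ A) @ w
-    assert np.allclose(A @ x, b)
-print("all sampled parameter vectors produced valid solutions")
-```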
diff --git a/wiki/wikipedia/335.txt b/wiki/wikipedia/335.txt deleted file mode 100644 index 3fdcef6525517e220e0d24ace6e1b7665b614f0c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/335.txt +++ /dev/null @@ -1,178 +0,0 @@ -In probability theory, Markov's inequality gives an upper bound for the probability that a non-negative function of a random variable is greater than or equal to some positive constant. It is named after the Russian mathematician Andrey Markov, although it appeared earlier in the work of Pafnuty Chebyshev (Markov's teacher), and many sources, especially in analysis, refer to it as Chebyshev's inequality (sometimes, calling it the first Chebyshev inequality, while referring to Chebyshev's inequality as the second Chebyshev inequality) or Bienaymé's inequality. - -Markov's inequality (and other similar inequalities) relate probabilities to expectations, and provide (frequently loose but still useful) bounds for the cumulative distribution function of a random variable. - -If X is a nonnegative random variable and a > 0, then the probability that X is at least a is at most the expectation of X divided by a: -$$ -\operatorname{P}(X \geq a) \leq \frac{\operatorname{E}(X)}{a}. -$$ - -Let $ a = \tilde{a} \cdot \operatorname{E}(X) $ (where $ \tilde{a} > 0 $); then we can rewrite the previous inequality as -$$ -\operatorname{P}(X \geq \tilde{a} \cdot \operatorname{E}(X)) \leq \frac{1}{\tilde{a}}. -$$ - -In the language of measure theory, Markov's inequality states that if (X, Σ, μ) is a measure space, $f$ is a measurable extended real-valued function, and ε > 0, then -$$ - \mu(\{x\in X:|f(x)|\geq \varepsilon \}) \leq \frac 1 \varepsilon \int_X |f|d\mu. -$$ - -This measure-theoretic definition is sometimes referred to as Chebyshev's inequality. - -If φ is a monotonically increasing nonnegative function for the nonnegative reals, X is a random variable, a ≥ 0, and φ(a) > 0, then -$$ -\operatorname P (|X| \ge a) \le \frac{\operatorname E(\varphi(|X|))}{\varphi(a)}. -$$ - -An immediate corollary, using higher moments of X supported on values larger than 0, is -$$ -\operatorname P (|X| \ge a) \le \frac{\operatorname E(|X|^n)}{a^n}. -$$ - -We separate the case in which the measure space is a probability space from the more general case because the probability case is more accessible for the general reader. In the probability case, let $I_{(X \geq a)}$ be the indicator random variable of the event $X \geq a$. Then, for $a > 0$, -$$ -aI_{(X \geq a)} \leq X, -$$ - -which is clear if we consider the two possible values of $I_{(X \geq a)}$: if $X < a$, then $I_{(X \geq a)} = 0$ and so $aI_{(X \geq a)} = 0 \leq X$; if $X \geq a$, then $I_{(X \geq a)} = 1$ and $aI_{(X \geq a)} = a \leq X$. Since expectation is monotonic, taking expectations of both sides gives -$$ -a\operatorname{P}(X \geq a) = \operatorname{E}(aI_{(X \geq a)}) \leq \operatorname{E}(X), -$$ - -and since a > 0, we can divide both sides by a. - -We may assume that the function $f$ is non-negative, since only its absolute value enters in the equation. Now, consider the real-valued function s on X given by - - s(x) = \begin{cases} \varepsilon, & \text{if } f(x) \geq \varepsilon \\ 0, & \text{if } f(x) < \varepsilon \end{cases} - -Then $0\leq s(x)\leq f(x)$. By the definition of the Lebesgue integral - - \int_X f(x) d\mu \geq \int_X s(x) d \mu = \varepsilon \mu( \{ x\in X : f(x) \geq \varepsilon \} ) - -and since $\varepsilon >0 $, both sides can be divided by $\varepsilon$, obtaining -$$ -\mu(\{x\in X : f(x) \geq \varepsilon \}) \leq {1\over \varepsilon }\int_X f d\mu. -$$ - -Chebyshev's inequality uses the variance to bound the probability that a random variable deviates far from the mean. Specifically, -$$ -\operatorname{P}(|X-\operatorname{E}(X)| \geq a) \leq \frac{\operatorname{Var}(X)}{a^2}, -$$ - -for any a > 0.
Here Var(X) is the variance of X, defined as: -$$ - \operatorname{Var}(X) = \operatorname{E}[(X - \operatorname{E}(X) )^2]. -$$ - -Chebyshev's inequality follows from Markov's inequality by considering the random variable -$$ - (X - \operatorname{E}(X))^2 -$$ - -and the constant $a^2,$ for which Markov's inequality reads -$$ - \operatorname{P}( (X - \operatorname{E}(X))^2 \ge a^2) \le \frac{\operatorname{Var}(X)}{a^2}. -$$ - -This argument can be summarized (where "MI" indicates use of Markov's inequality): -$$ -\operatorname{P}(|X-\operatorname{E}(X)| \geq a) = \operatorname{P}\left((X-\operatorname{E}(X))^2 \geq a^2\right) \overset{\underset{\mathrm{MI}}{}}{\leq} \frac {\operatorname{E} \left( (X-\operatorname{E}(X))^2 \right)}{a^2} = \frac{\operatorname{Var}(X)}{a^2}. -$$ - -Several further results follow in the same way: - -# The "monotonic" result can be demonstrated by -$$ -\operatorname P (|X| \ge a) = \operatorname P \big(\varphi(|X|) \ge \varphi(a)\big) \overset{\underset{\mathrm{MI}}{}}{\leq} \frac{\operatorname E(\varphi(|X|))}{\varphi(a)}. -$$ - -# For a nonnegative random variable X, the quantile function of X satisfies -$$ -Q_X(1-p) \leq \frac {\operatorname E(X)}{p}, -$$ -the proof using -$$ -p \leq \operatorname P(X \geq Q_X(1-p)) \overset{\underset{\mathrm{MI}}{}}{\leq} \frac {\operatorname E(X)}{Q_X(1-p)}. -$$ - -# Let $ M \succeq 0 $ be a self-adjoint matrix-valued random variable and a > 0. Then -$$ -\operatorname{P}(M \npreceq a \cdot I) \leq \frac{\operatorname{tr}\left( \operatorname{E}(M) \right)}{n a} -$$ -can be shown in a similar manner. - -Assuming no income is negative, Markov's inequality shows that no more than 1/5 of the population can have more than 5 times the average income. diff --git a/wiki/wikipedia/3350.txt b/wiki/wikipedia/3350.txt deleted file mode 100644 index b3cdfd15120f5f088f0d08fbd9142ed5eefce16a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3350.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, the grand Riemann hypothesis is a generalisation of the Riemann hypothesis and generalized Riemann hypothesis. It states that the nontrivial zeros of all automorphic L-functions lie on the critical line $\frac{1}{2} + it$ with $t$ a real number variable and $i$ the imaginary unit. - -The modified grand Riemann hypothesis is the assertion that the nontrivial zeros of all automorphic L-functions lie on the critical line or the real line. diff --git a/wiki/wikipedia/3351.txt b/wiki/wikipedia/3351.txt deleted file mode 100644 index 975e5546fbacfe8d2344651c567fe7629f7bb839..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3351.txt +++ /dev/null @@ -1,28 +0,0 @@ -In mathematics, Anderson's theorem is a result in real analysis and geometry which says that the integral of an integrable, symmetric, unimodal, non-negative function f over an n-dimensional convex body K does not decrease if K is translated inwards towards the origin. This is a natural statement, since the graph of f can be thought of as a hill with a single peak over the origin; however, for n ≥ 2, the proof is not entirely obvious, as there may be points x of the body K where the value f(x) is larger than at the corresponding translate of x. - -Anderson's theorem also has an interesting application to probability theory. - -Let K be a convex body in n-dimensional Euclidean space Rn that is symmetric with respect to reflection in the origin, i.e. K = -K. Let f : Rn → R be a non-negative, symmetric, globally integrable function; i.e.
- -* f(x) ≥ 0 for all x ∈ Rn; - -* f(x) = f(-x) for all x ∈ Rn; - -* $\int_{\mathbb{R}^{n}} f(x) \mathrm{d} x < + \infty.$ - -Suppose also that the super-level sets L(f, t) of f, defined by -$$ -L(f, t) = \{ x \in \mathbb{R}^{n} | f(x) \geq t \}, -$$ - -are convex subsets of Rn for every t ≥ 0. (This property is sometimes referred to as being unimodal.) Then, for any 0 ≤ c ≤ 1 and y ∈ Rn, -$$ -\int_{K} f(x + c y) \mathrm{d} x \geq \int_{K} f(x + y) \mathrm{d} x. -$$ - -Given a probability space (Ω, Σ, Pr), suppose that X : Ω → Rn is an Rn-valued random variable with probability density function f : Rn → [0, +∞) and that Y : Ω → Rn is an independent random variable. The probability density functions of many well-known probability distributions are p-concave for some p, and hence unimodal. If they are also symmetric (e.g. the Laplace and normal distributions), then Anderson's theorem applies, in which case -$$ -\Pr ( X \in K ) \geq \Pr ( X + Y \in K ) -$$ - -for any origin-symmetric convex body K ⊆ Rn. diff --git a/wiki/wikipedia/3352.txt b/wiki/wikipedia/3352.txt deleted file mode 100644 index 1ab83eb83e4585d6c8896c4accdb26a3f9729cde..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3352.txt +++ /dev/null @@ -1,44 +0,0 @@ -In geometry, the truncated icosidodecahedron is an Archimedean solid, one of thirteen convex isogonal nonprismatic solids constructed from two or more types of regular polygon faces. - -It has 62 faces: 30 squares, 20 regular hexagons, and 12 regular decagons. It has the most edges and vertices of all Platonic and Archimedean solids, though the snub dodecahedron has more faces. Of all vertex-transitive polyhedra, it occupies the largest percentage (89.80%) of the volume of a sphere in which it is inscribed, very narrowly beating the snub dodecahedron (89.63%) and the small rhombicosidodecahedron (89.23%), and less narrowly beating the truncated icosahedron (86.74%); it also has by far the greatest volume (206.8 cubic units) when its edge length equals 1. Of all vertex-transitive polyhedra that are not prisms or antiprisms, it has the largest sum of angles (90 + 120 + 144 = 354 degrees) at each vertex; only a prism or antiprism with more than 60 sides would have a larger sum. Since each of its faces has point symmetry (equivalently, 180° rotational symmetry), the truncated icosidodecahedron is a zonohedron. - -The name great rhombicosidodecahedron refers to the relationship with the (small) rhombicosidodecahedron (compare section Dissection).
- -There is a nonconvex uniform polyhedron with a similar name, the nonconvex great rhombicosidodecahedron. - -The surface area A and the volume V of the truncated icosidodecahedron of edge length a are: -$$ -\begin{align} A &= 30 \left (1 + \sqrt{3} + \sqrt{5 + 2\sqrt{5}} \right)a^2 &&\approx 174.2920303a^2. \\ V &= \left( 95 + 50\sqrt{5} \right) a^3 &&\approx 206.803399a^3. \end{align} -$$ - -If a set of all 13 Archimedean solids were constructed with all edge lengths equal, the truncated icosidodecahedron would be the largest. - -Cartesian coordinates for the vertices of a truncated icosidodecahedron with edge length 2φ − 2, centered at the origin, are all the even permutations of: - -(±1/φ, ±1/φ, ±(3 + φ)), - -(±2/φ, ±φ, ±(1 + 2φ)), - -(±1/φ, ±φ², ±(−1 + 3φ)), - -(±(2φ − 1), ±2, ±(2 + φ)) and - -(±φ, ±3, ±2φ), - -where $\varphi = \frac{1 + \sqrt{5}}{2}$ is the golden ratio. - -The truncated icosidodecahedron is the convex hull of a rhombicosidodecahedron with cuboids above its 30 squares, whose height to base ratio is φ. The rest of its space can be dissected into nonuniform cupolas, namely 12 between inner pentagons and outer decagons and 20 between inner triangles and outer hexagons. - -An alternative dissection also has a rhombicosidodecahedral core. It has 12 pentagonal rotundae between inner pentagons and outer decagons. The remaining part is a toroidal polyhedron. - -The truncated icosidodecahedron has seven special orthogonal projections, centered on a vertex, on three types of edges, and three types of faces: square, hexagonal and decagonal. The last two correspond to the A2 and H2 Coxeter planes. - -The truncated icosidodecahedron can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane. - -Schlegel diagrams are similar, with a perspective projection and straight edges. - -Within icosahedral symmetry there are unlimited geometric variations of the truncated icosidodecahedron with isogonal faces. The truncated dodecahedron, rhombicosidodecahedron, and truncated icosahedron appear as degenerate limiting cases. - -In the mathematical field of graph theory, a truncated icosidodecahedral graph (or great rhombicosidodecahedral graph) is the graph of vertices and edges of the truncated icosidodecahedron, one of the Archimedean solids. It has 120 vertices and 180 edges, and is a zero-symmetric and cubic Archimedean graph. - -This polyhedron can be considered a member of a sequence of uniform patterns with vertex figure (4.6.2p). For p < 6, the members of the sequence are omnitruncated polyhedra (zonohedra), which can be represented as spherical tilings. For p > 6, they are tilings of the hyperbolic plane, starting with the truncated triheptagonal tiling. diff --git a/wiki/wikipedia/3353.txt b/wiki/wikipedia/3353.txt deleted file mode 100644 index 06a627ada24a20f0d3b83472fb42df3590688cbe..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3353.txt +++ /dev/null @@ -1,23 +0,0 @@ -MALPAS is a software toolset that provides a means of investigating and proving the correctness of software by applying a rigorous form of static program analysis. The tool uses directed graphs and regular algebra to represent the program under analysis.
Using the automated tools in MALPAS an analyst can describe the structure of a program, classify the use made of data and provide the information relationships between input and output data. It also supports a formal proof that the code meets its specification. - -MALPAS has been used to confirm the correctness of safety-critical applications in the nuclear, aerospace and defence industries. It has also been used to provide compiler validation in the nuclear industry on Sizewell B. Languages that have been analysed include: Ada, C, PLM and Intel Assembler. - -MALPAS is well suited to the independent static analysis required by the UK's Health and Safety Executive guidance for computer based protection systems for nuclear reactors due to its rigour and flexibility in handling many programming languages. - -The MALPAS toolset comprises five specific analysis tools that address various properties of a program. The input to the analysers needs to be written in MALPAS Intermediate Language (IL); this can be hand-written or produced by an automated translation tool from the original source code. Automatic translators exist for common high-level programming languages such as Ada, C and Pascal, as well as assembler languages such as Intel 80*86, PowerPC and 68000. The IL text is input into MALPAS via the "IL Reader", which constructs a directed graph and associated semantics for the program under analysis. The graph is reduced using a series of graph reduction techniques. - -The five analysers are: - -# Control Flow Analyser. This examines the program structure, identifying key features: Entry/Exit points, Loops, Branches and unreachable code. It provides a summary report drawing attention to undesirable constructs and an indication of the complexity of the program structure. - -# Data Use Analyser. This separates the variables and parameters used by the program into distinct classes depending upon their use (e.g. data that is read before being written, data that is written without being read, or data that is written twice without an intervening read); a toy sketch of this kind of classification follows the list. The report can identify errors such as uninitialised data and function outputs not written on all paths. - -# Information Flow Analyser. This identifies the data and branch dependencies for each output variable or parameter. Unwanted or unexpected dependencies can be revealed for all paths through the code. Information is also provided regarding unused variables and redundant statements. - -# Semantic Analyser (also known as symbolic execution). This reveals the exact functional relationship between all inputs and outputs over all semantically-feasible paths through the code. - -# Compliance Analyser. This compares the mathematical behaviour of the code with its formal IL specification, detailing where one differs from the other. The IL specification is written as Preconditions and Postconditions, as well as optional code assertions. Compliance analysis can be used to gain a very high level of confidence in the functional correctness of the code in relation to its specification.
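To make the data-use classification concrete, here is a toy sketch in Python (ours, not MALPAS code; MALPAS operates on its Intermediate Language, while this toy takes a straight-line program as a list of `(target, read_variables)` assignments):

```python
# Toy illustration of data-use classification (not MALPAS itself):
# flag reads of unwritten variables, repeated writes with no intervening
# read, and variables that are written but never read afterwards.

def classify_data_use(statements, inputs):
    """statements: ordered list of (target, read_variables) assignments."""
    written = set()           # variables assigned so far
    unread_since_write = {}   # variable -> True if unread since its last write
    issues = []
    for target, sources in statements:
        for v in sources:
            if v not in written and v not in inputs:
                issues.append(f"'{v}' is read before being written")
            unread_since_write[v] = False
        if unread_since_write.get(target):
            issues.append(f"'{target}' is written twice without an intervening read")
        written.add(target)
        unread_since_write[target] = True
    never_read = sorted(v for v, unread in unread_since_write.items() if unread)
    return issues, never_read

# x = a; x = 0; y = x  --  the first write to x is dead, y is never read
print(classify_data_use([("x", ["a"]), ("x", []), ("y", ["x"])], {"a"}))
```

A real analyser works on the program's directed graph rather than a straight-line statement list, but the classes of data use it reports are the ones mimicked here.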
The original research and initial generations of the toolset were created by the UK's Royal Signals and Radar Establishment (RSRE) in Malvern, England (hence the derivation of the name, MALvern Programming Analysis Suite). It was used extensively in the civil nuclear and weapons field in the 1980s, when it was supported by Rex, Thompson and Partners, who set up the MALPAS User Group, with the first chair being David H Smith (now of Frazer-Nash) and then subsequently by Advantage Technical Consulting (bought by Atkins in 2008). - -The first large scale static analysis task was on the primary reactor protection system for the Sizewell B power station. This was the UK's first nuclear power station to employ a computer-based protection system as its first line of defence against a catastrophic failure. Further to this, CEZ in the Czech Republic employed MALPAS to increase the confidence in the reactor protection system in the Temelin Nuclear Power Station. In 1995 the UK's Royal Air Force commissioned an independent analysis of the Lockheed Martin C130J's avionics software that had been assessed as safety-critical. MALPAS was used for the analysis of this software, apart from the Mission Computer software, which was written in Spark Ada and verified with the Spark Toolset. diff --git a/wiki/wikipedia/3354.txt b/wiki/wikipedia/3354.txt deleted file mode 100644 index 3fd910bf98280ff48d66f153236b6ab70248ba2e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3354.txt +++ /dev/null @@ -1,169 +0,0 @@ -In mathematics, when X is a finite set with at least two elements, the permutations of X (i.e. the bijective functions from X to X) fall into two classes of equal size: the even permutations and the odd permutations. If any total ordering of X is fixed, the parity (oddness or evenness) of a permutation $\sigma$ of X can be defined as the parity of the number of inversions for σ, i.e., of pairs of elements x, y of X such that x < y and σ(x) > σ(y). - -The sign, signature, or signum of a permutation σ is denoted sgn(σ) and defined as +1 if σ is even and −1 if σ is odd. The signature defines the alternating character of the symmetric group Sn. Another notation for the sign of a permutation is given by the more general Levi-Civita symbol ($\varepsilon_\sigma$), which is defined for all maps from X to X, and has value zero for non-bijective maps. - -The sign of a permutation can be explicitly expressed as -$$ -\sgn(\sigma) = (-1)^{N(\sigma)} -$$ -where N(σ) is the number of inversions in σ. - -Alternatively, the sign of a permutation σ can be defined from its decomposition into the product of transpositions as -$$ -\sgn(\sigma) = (-1)^m -$$ -where m is the number of transpositions in the decomposition. Although such a decomposition is not unique, the parity of the number of transpositions in all decompositions is the same, implying that the sign of a permutation is well-defined. - -Consider the permutation σ of the set {1, 2, 3, 4, 5} which turns the initial arrangement 12345 into 34521. - -It can be obtained by three transpositions: first exchange the numbers 2 and 4, then exchange 1 and 5, and finally exchange 3 and 5. This shows that the given permutation σ is odd. Following the method of the cycle notation article, this could be written, composing from left to right, as - -\sigma=\begin{pmatrix}1&2&3&4&5\\ 3&4&5&2&1\end{pmatrix} = \begin{pmatrix}1&3&5\end{pmatrix} \begin{pmatrix}2&4\end{pmatrix} = \begin{pmatrix}1&3\end{pmatrix} \begin{pmatrix}3&5\end{pmatrix} \begin{pmatrix}2&4\end{pmatrix} . - -There are many other ways of writing σ as a composition of transpositions, for instance - -σ = (2 3)(1 2)(2 4)(3 4)(1 5), - -but it is impossible to write it as a product of an even number of transpositions.
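As a quick cross-check of this example (a sketch of ours, not part of the article; the function names are arbitrary), the sign can be computed both by counting inversions and from the cycle structure:

```python
# Sketch: two ways to compute the sign of the permutation 12345 -> 34521.

def sign_by_inversions(perm):
    """perm[i] = image of i+1; count pairs i < j with perm[i] > perm[j]."""
    n = len(perm)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if perm[i] > perm[j])
    return (-1) ** inversions

def sign_by_cycles(perm):
    """sign = (-1)^(n - number of cycles), counting fixed points as cycles."""
    n, seen, cycles = len(perm), set(), 0
    for start in range(n):
        if start not in seen:
            cycles += 1
            i = start
            while i not in seen:
                seen.add(i)
                i = perm[i] - 1
    return (-1) ** (n - cycles)

sigma = [3, 4, 5, 2, 1]                                    # the example above
print(sign_by_inversions(sigma), sign_by_cycles(sigma))    # -1 -1: odd
```

The second function uses the discriminant N(σ) = n minus the number of disjoint cycles, which is discussed further below.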
The identity permutation is an even permutation. It is the kernel of the homomorphism sgn. The odd permutations cannot form a subgroup, since the composite of two odd permutations is even, but they form a coset of An (in Sn). - -If n > 1, then there are just as many even permutations in Sn as there are odd ones; consequently, An contains n!/2 permutations. (The reason is that if σ is even then (1  2)σ is odd, and if σ is odd then (1  2)σ is even, and these two maps are inverse to each other.) - -A cycle is even if and only if its length is odd. This follows from formulas like -$$ -(a\ b\ c\ d\ e)=(d\ e)(c\ e)(b\ e)(a\ e)\text{ or }(a\ b)(b\ c)(c\ d)(d\ e). -$$ - -In practice, in order to determine whether a given permutation is even or odd, one writes the permutation as a product of disjoint cycles. The permutation is odd if and only if this factorization contains an odd number of even-length cycles. - -Another method for determining whether a given permutation is even or odd is to construct the corresponding permutation matrix and compute its determinant; the determinant equals the sign of the permutation. - -Every permutation of odd order must be even. The permutation (1 2)(3 4) in A4 shows that the converse is not true in general. - -This section presents proofs that the parity of a permutation σ can be defined in two equivalent ways: - -* as the parity of the number of inversions in σ (under any ordering); or - -* as the parity of the number of transpositions that σ can be decomposed to (however we choose to decompose it). - -{{hidden|header=Proof 1|content= - -Let σ be a permutation on a ranked domain S. Every permutation can be produced by a sequence of transpositions (2-element exchanges). Let the following be one such decomposition - -σ = T1 T2 ... Tk - -We want to show that the parity of k is equal to the parity of the number of inversions of σ. - -Every transposition can be written as a product of an odd number of transpositions of adjacent elements, e.g. - -(2 5) = (2 3) (3 4) (4 5) (4 3) (3 2). - -Generally, we can write the transposition (i i+d) on the set {1,...,i,...,i+d,...} as the composition of 2d−1 adjacent transpositions by recursion on d: - -* The base case d=1 is trivial. - -* In the recursive case, first rewrite (i, i+d) as (i, i+1) (i+1, i+d) (i, i+1). Then recursively rewrite (i+1, i+d) as adjacent transpositions. - -If we decompose in this way each of the transpositions T1 ... Tk above, we get the new decomposition: - -σ = A1 A2 ... Am - -where all of the A1...Am are adjacent. Also, the parity of m is the same as that of k. - -This is a fact: for every permutation τ and adjacent transposition a, aτ either has one less or one more inversion than τ. In other words, the parity of the number of inversions of a permutation is switched when composed with an adjacent transposition. - -Therefore, the parity of the number of inversions of σ is precisely the parity of m, which is also the parity of k. This is what we set out to prove. - -We can thus define the parity of σ to be that of its number of constituent transpositions in any decomposition. And this must agree with the parity of the number of inversions under any ordering, as seen above. Therefore, the definitions are indeed well-defined and equivalent.
- -}} - -{{hidden|header=Proof 2|content= - -An alternative proof uses the Vandermonde polynomial -$$ -P(x_1,\ldots,x_n)=\prod_{i<j}(x_i-x_j), -$$ -and defines the sign of a permutation σ by -$$ -\sgn(\sigma)=\frac{P(x_{\sigma(1)},\ldots,x_{\sigma(n)})}{P(x_1,\ldots,x_n)}, -$$ -which is ±1 because σ merely permutes the factors of P, up to sign. For a pair of permutations σ and τ, - -\begin{align} \sgn(\sigma\tau) & = \frac{P(x_{\sigma(\tau(1))},\ldots,x_{\sigma(\tau(n))})}{P(x_1,\ldots,x_n)} \\[4pt] & = \frac{P(x_{\sigma(1)},\ldots,x_{\sigma(n)})}{P(x_1,\ldots,x_n)} \cdot \frac{P(x_{\sigma(\tau(1))},\ldots, x_{\sigma(\tau(n))})}{P(x_{\sigma(1)},\ldots,x_{\sigma(n)})} \\[4pt] & = \sgn(\sigma)\cdot\sgn(\tau). \end{align} - -Since with this definition it is furthermore clear that any transposition of two elements has signature −1, we do indeed recover the signature as defined earlier. - -}} - -{{hidden|header=Proof 3|content= - -A third approach uses the presentation of the group Sn in terms of generators τ1, ..., τn−1 and relations - -* $\tau_i^2 = 1$ for all i - -* $\tau_i^{}\tau_{i+1}\tau_i = \tau_{i+1}\tau_i\tau_{i+1}$ for all i < n − 1 - -* $\tau_i^{}\tau_j = \tau_j\tau_i$ if |i − j| ≥ 2. - -[Here the generator $\tau_i$ represents the transposition (i, i + 1).] All relations keep the length of a word the same or change it by two. Starting with an even-length word will thus always result in an even-length word after using the relations, and similarly for odd-length words. It is therefore unambiguous to call the elements of Sn represented by even-length words "even", and the elements represented by odd-length words "odd". - -}} - -The parity of a permutation of $n$ points is also encoded in its cycle structure. - -Let σ = (i1 i2 ... ir+1)(j1 j2 ... js+1)...(ℓ1 ℓ2 ... ℓu+1) be the unique decomposition of σ into disjoint cycles, which can be composed in any order because they commute. A cycle (a b c ... x y z) involving k + 1 points can always be obtained by composing k transpositions (2-cycles): -$$ -(a\ b\ c \dots x\ y\ z)=(a\ b)(b\ c) \dots (x\ y)(y\ z), -$$ - -so call k the size of the cycle, and observe that, under this definition, transpositions are cycles of size 1. From a decomposition into m disjoint cycles we can obtain a decomposition of σ into k1 + k2 + ... + km transpositions, where ki is the size of the ith cycle. The number N(σ) = k1 + k2 + ... + km is called the discriminant of σ, and can also be computed as -$$ -n \text{ minus the number of disjoint cycles in the decomposition of } \sigma -$$ - -if we take care to include the fixed points of σ as 1-cycles. - -Suppose a transposition (a b) is applied after a permutation σ. When a and b are in different cycles of σ then -$$ -(a\ b)(a\ c_1\ c_2 \dots c_r)(b\ d_1\ d_2 \dots d_s) = (a\ c_1\ c_2 \dots c_r\ b\ d_1\ d_2 \dots d_s) -$$, - -and if a and b are in the same cycle of σ then -$$ -(a\ b)(a\ c_1\ c_2 \dots c_r\ b\ d_1\ d_2 \dots d_s) = (a\ c_1\ c_2 \dots c_r)(b\ d_1\ d_2 \dots d_s) -$$. - -In either case, it can be seen that N((a b)σ) = N(σ) ± 1, so the parity of N((a b)σ) will be different from the parity of N(σ). - -If σ = t1t2 ... tr is an arbitrary decomposition of a permutation σ into transpositions, by applying the r transpositions $t_1$ after t2 after ... after tr after the identity (whose N is zero) we observe that N(σ) and r have the same parity. By defining the parity of σ as the parity of N(σ), a permutation that has an even length decomposition is an even permutation and a permutation that has an odd length decomposition is an odd permutation.
- -; Remarks: - -* A careful examination of the above argument shows that r ≥ N(σ), and since any decomposition of σ into cycles whose sizes sum to r can be expressed as a composition of r transpositions, the number N(σ) is the minimum possible sum of the sizes of the cycles in a decomposition of σ, including the cases in which all cycles are transpositions. - -* This proof does not introduce a (possibly arbitrary) order into the set of points on which σ acts. - -Parity can be generalized to Coxeter groups: one defines a length function ℓ(v), which depends on a choice of generators (for the symmetric group, adjacent transpositions), and then the function v ↦ (−1)^{ℓ(v)} gives a generalized sign map. diff --git a/wiki/wikipedia/3355.txt b/wiki/wikipedia/3355.txt deleted file mode 100644 index 4003c34918161f1501f09438ce463517b4727a9e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3355.txt +++ /dev/null @@ -1,31 +0,0 @@ -In classical logic, intuitionistic logic and similar logical systems, the principle of explosion (Latin: ex falso [sequitur] quodlibet, 'from falsehood, anything [follows]'; or ex contradictione [sequitur] quodlibet, 'from contradiction, anything [follows]'), or the principle of Pseudo-Scotus, is the law according to which any statement can be proven from a contradiction. That is, once a contradiction has been asserted, any proposition (including its negation) can be inferred from it; this is known as deductive explosion. - -The proof of this principle was first given by 12th-century French philosopher William of Soissons. Due to the principle of explosion, the existence of a contradiction (inconsistency) in a formal axiomatic system is disastrous; since any statement can be proven, it trivializes the concepts of truth and falsity. Around the turn of the 20th century, the discovery of contradictions such as Russell's paradox at the foundations of mathematics thus threatened the entire structure of mathematics. Mathematicians such as Gottlob Frege, Ernst Zermelo, Abraham Fraenkel, and Thoralf Skolem put much effort into revising set theory to eliminate these contradictions, resulting in the modern Zermelo–Fraenkel set theory. - -As a demonstration of the principle, consider two contradictory statements—"All lemons are yellow" and "Not all lemons are yellow"—and suppose that both are true. If that is the case, anything can be proven, e.g., the assertion that "unicorns exist", by using the following argument: - -# We know that "Not all lemons are yellow", as it has been assumed to be true. - -# We know that "All lemons are yellow", as it has been assumed to be true. - -# Therefore, the two-part statement "All lemons are yellow or unicorns exist" must also be true, since the first part "All lemons are yellow" of the two-part statement is true (as this has been assumed). - -# However, since we know that "Not all lemons are yellow" (as this has been assumed), the first part is false, and hence the second part must be true to ensure the two-part statement to be true, i.e., unicorns exist. - -As a different response to these problems, some mathematicians have devised alternative theories of logic called paraconsistent logics, which eliminate the principle of explosion. These allow some contradictory statements to be proven without affecting other proofs. - -In symbolic logic, the principle of explosion can be expressed schematically in the following way:
    $ P, \lnot P \vdash Q$ - -For any statements P and Q, if P and not-P are both true, then it logically follows that Q is true.
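The principle is also easy to machine-check; the following is a minimal sketch in Lean 4 (our example, not from the article; the hypothesis names are arbitrary), using the core-library lemma `absurd`, which derives any goal from a proof and its refutation:

```lean
-- Sketch: the principle of explosion, P, ¬P ⊢ Q, as a Lean 4 term.
-- `absurd : a → ¬a → b` eliminates the contradiction to prove any goal.
example (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp
```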
- -Below is a formal proof of the principle using symbolic logic: - -# $P$ (assumption) - -# $\lnot P$ (assumption) - -# $P \vee Q$ (disjunction introduction, from 1) - -# $Q$ (disjunctive syllogism, from 3 and 2) - -This is just the symbolic version of the informal argument given in the introduction, with $P$ standing for "all lemons are yellow" and $Q$ standing for "Unicorns exist". We start out by assuming that (1) all lemons are yellow and that (2) not all lemons are yellow. From the proposition that all lemons are yellow, we infer that (3) either all lemons are yellow or unicorns exist. But then from this and the fact that not all lemons are yellow, we infer that (4) unicorns exist by disjunctive syllogism. - -An alternate argument for the principle stems from model theory. A sentence $P$ is a semantic consequence of a set of sentences $\Gamma$ only if every model of $\Gamma$ is a model of $P$. However, there is no model of the contradiction $(P \wedge \lnot P)$. A fortiori, there is no model of $(P \wedge \lnot P)$ that is not a model of $Q$. Thus, vacuously, every model of $(P \wedge \lnot P)$ is a model of $Q$. Thus $Q$ is a semantic consequence of $(P \wedge \lnot P)$. - -Paraconsistent logics have been developed that allow for sub-contrary forming operators. Model-theoretic paraconsistent logicians often deny the assumption that there can be no model of $\{\phi , \lnot \phi \}$ and devise semantical systems in which there are such models. Alternatively, they reject the idea that propositions can be classified as true or false. Proof-theoretic paraconsistent logics usually deny the validity of one of the steps necessary for deriving an explosion, typically including disjunctive syllogism, disjunction introduction, and reductio ad absurdum. - -The metamathematical value of the principle of explosion is that for any logical system where this principle holds, any derived theory which proves ⊥ (or an equivalent form, $\phi \land \lnot \phi$) is worthless because all its statements would become theorems, making it impossible to distinguish truth from falsehood. That is to say, the principle of explosion is an argument for the law of non-contradiction in classical logic, because without it all truth statements become meaningless. - -The reduction in proof strength of logics without ex falso is discussed in minimal logic. diff --git a/wiki/wikipedia/3356.txt b/wiki/wikipedia/3356.txt deleted file mode 100644 index 4d5836756d88a631b510796b7aa0d4dd0d6a0c07..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3356.txt +++ /dev/null @@ -1,13 +0,0 @@ -In physics, the Weyl expansion, also known as the Weyl identity or angular spectrum expansion, expresses an outgoing spherical wave as a linear combination of plane waves. In a Cartesian coordinate system, it can be written as -$$ -\frac{e^{-j k_0 r}}{r}=\frac{1}{j 2\pi} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} dk_x dk_y e^{-j(k_x x + k_y y)} \frac{e^{-jk_z |z|}}{k_z} -$$, - -where $k_x$, $k_y$ and $k_z$ are the wavenumbers in their respective coordinate axes: -$$ -k_0=\sqrt{k_x^2+k_y^2+k_z^2} -$$. - -The expansion is named after Hermann Weyl, who published it in 1919. The Weyl identity is largely used to characterize the reflection and transmission of spherical waves at planar interfaces; it is often used to derive the Green's functions for the Helmholtz equation in layered media. The expansion also covers evanescent wave components. It is often preferred to the Sommerfeld identity when the field needs to be represented in Cartesian coordinates.
- -The resulting Weyl integral is commonly encountered in microwave integrated circuit analysis and electromagnetic radiation over a stratified medium; as in the case of the Sommerfeld integral, it is evaluated numerically. As a result, it is used in the calculation of Green's functions for the method of moments for such geometries. Other uses include the description of dipolar emissions near surfaces in nanophotonics, holographic inverse scattering problems, Green's functions in quantum electrodynamics and acoustic or seismic waves. diff --git a/wiki/wikipedia/3357.txt b/wiki/wikipedia/3357.txt deleted file mode 100644 index 34ab424d95050b0c01b9cbed17ae95bacebfbc3f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3357.txt +++ /dev/null @@ -1,118 +0,0 @@ -In mathematical logic, a sequent is a very general kind of conditional assertion. -$$ -A_1,\dots,A_m \vdash B_1,\dots,B_n. -$$ - -A sequent may have any number m of condition formulas Ai (called "antecedents") and any number n of asserted formulas Bj (called "succedents" or "consequents"). A sequent is understood to mean that if all of the antecedent conditions are true, then at least one of the consequent formulas is true. This style of conditional assertion is almost always associated with the conceptual framework of sequent calculus. - -Sequents are best understood in the context of the following three kinds of logical judgments: - -# Unconditional assertion. No antecedent formulas. - -#* Example: ⊢ B - -#* Meaning: B is true. - -# Conditional assertion. Any number of antecedent formulas. - -## Simple conditional assertion. Single consequent formula. - -##* Example: A1, A2, A3 ⊢ B - -##* Meaning: IF A1 AND A2 AND A3 are true, THEN B is true. - -## Sequent. Any number of consequent formulas. - -##* Example: A1, A2, A3 ⊢ B1, B2, B3, B4 - -##* Meaning: IF A1 AND A2 AND A3 are true, THEN B1 OR B2 OR B3 OR B4 is true. - -Thus sequents are a generalization of simple conditional assertions, which are a generalization of unconditional assertions. - -The word "OR" here is the inclusive OR. The motivation for disjunctive semantics on the right side of a sequent comes from three main benefits. - -# The symmetry of the classical inference rules for sequents with such semantics. - -# The ease and simplicity of converting such classical rules to intuitionistic rules. - -# The ability to prove completeness for predicate calculus when it is expressed in this way. - -All three of these benefits were identified in the founding paper by Gentzen. - -Not all authors have adhered to Gentzen's original meaning for the word "sequent". For example, Lemmon used the word "sequent" strictly for simple conditional assertions with one and only one consequent formula. The same single-consequent definition for a sequent is also used by other authors. - -In a general sequent of the form -$$ -\Gamma\vdash\Sigma -$$ - -both Γ and Σ are sequences of logical formulas, not sets. Therefore both the number and order of occurrences of formulas are significant. In particular, the same formula may appear twice in the same sequence. The full set of sequent calculus inference rules contains rules to swap adjacent formulas on the left and on the right of the assertion symbol (and thereby arbitrarily permute the left and right sequences), and also to insert arbitrary formulas and remove duplicate copies within the left and the right sequences. (However, Smullyan uses sets of formulas in sequents instead of sequences of formulas. Consequently the three pairs of structural rules called "thinning", "contraction" and "interchange" are not required.) - -The symbol ' $\vdash$ ' is often referred to as the "turnstile", "right tack", "tee", "assertion sign" or "assertion symbol". It is often read, suggestively, as "yields", "proves" or "entails". - -Since every formula in the antecedent (the left side) must be true to conclude the truth of at least one formula in the succedent (the right side), adding formulas to either side results in a weaker sequent, while removing them from either side gives a stronger one. This is one of the symmetry advantages which follows from the use of disjunctive semantics on the right hand side of the assertion symbol, whereas conjunctive semantics is adhered to on the left hand side. - -In the extreme case where the list of antecedent formulas of a sequent is empty, the consequent is unconditional. This differs from the simple unconditional assertion because the number of consequents is arbitrary, not necessarily a single consequent. Thus for example, ' ⊢ B1, B2 ' means that either B1, or B2, or both must be true. An empty antecedent formula list is equivalent to the "always true" proposition, called the "verum", denoted "⊤". (See Tee (symbol).) - -In the extreme case where the list of consequent formulas of a sequent is empty, the rule is still that at least one term on the right be true, which is clearly impossible. This is signified by the 'always false' proposition, called the "falsum", denoted "⊥". Since the consequence is false, at least one of the antecedents must be false. Thus for example, ' A1, A2 ⊢ ' means that at least one of the antecedents A1 and A2 must be false. - -One sees here again a symmetry because of the disjunctive semantics on the right hand side. If the left side is empty, then one or more right-side propositions must be true.
If the right side is empty, then one or more of the left-side propositions must be false. - -The doubly extreme case ' ⊢ ', where both the antecedent and consequent lists of formulas are empty is "not satisfiable". In this case, the meaning of the sequent is effectively ' ⊤ ⊢ ⊥ '. This is equivalent to the sequent ' ⊢ ⊥ ', which clearly cannot be valid. - -A sequent of the form ' ⊢ α, β ', for logical formulas α and β, means that either α is true or β is true (or both). But it does not mean that either α is a tautology or β is a tautology. To clarify this, consider the example ' ⊢ B ∨ A, C ∨ ¬A '. This is a valid sequent because either B ∨ A is true or C ∨ ¬A is true. But neither of these expressions is a tautology in isolation. It is the disjunction of these two expressions which is a tautology. - -Similarly, a sequent of the form ' α, β ⊢ ', for logical formulas α and β, means that either α is false or β is false. But it does not mean that either α is a contradiction or β is a contradiction. To clarify this, consider the example ' B ∧ A, C ∧ ¬A ⊢ '. This is a valid sequent because either B ∧ A is false or C ∧ ¬A is false. But neither of these expressions is a contradiction in isolation. It is the conjunction of these two expressions which is a contradiction. - -Most proof systems provide ways to deduce one sequent from another. These inference rules are written with a list of sequents above and below a line. This rule indicates that if everything above the line is true, so is everything under the line. - -A typical rule is: -$$ - \frac{\Gamma,\alpha\vdash\Sigma\qquad \Gamma\vdash\alpha}{\Gamma\vdash\Sigma} -$$ - -This indicates that if we can deduce that $\Gamma,\alpha$ yields $\Sigma$, and that $\Gamma$ yields $\alpha$, then we can also deduce that $\Gamma$ yields $\Sigma$. (See also the full set of sequent calculus inference rules.) - -The assertion symbol in sequents originally meant exactly the same as the implication operator. But over time, its meaning has changed to signify provability within a theory rather than semantic truth in all models. - -In 1934, Gentzen did not define the assertion symbol ' ⊢ ' in a sequent to signify provability. He defined it to mean exactly the same as the implication operator ' ⇒ '. Using ' → ' instead of ' ⊢ ' and ' ⊃ ' instead of ' ⇒ ', he wrote: "The sequent A1, ..., Aμ → B1, ..., Bν signifies, as regards content, exactly the same as the formula (A1 & ... & Aμ) ⊃ (B1 ∨ ... ∨ Bν)". (Gentzen employed the right-arrow symbol between the antecedents and consequents of sequents. He employed the symbol ' ⊃ ' for the logical implication operator.) - -In 1939, Hilbert and Bernays stated likewise that a sequent has the same meaning as the corresponding implication formula. - -In 1944, Alonzo Church emphasized that Gentzen's sequent assertions did not signify provability. - -"Employment of the deduction theorem as primitive or derived rule must not, however, be confused with the use of Sequenzen by Gentzen. For Gentzen's arrow, →, is not comparable to our syntactical notation, ⊢, but belongs to his object language (as is clear from the fact that expressions containing it appear as premisses and conclusions in applications of his rules of inference)." - -Numerous publications after this time have stated that the assertion symbol in sequents does signify provability within the theory where the sequents are formulated. Curry in 1963, Lemmon in 1965, and Huth and Ryan in 2004 all state that the sequent assertion symbol signifies provability. 
However, Ben-Ari states that the assertion symbol in Gentzen-system sequents, which he denotes as ' ⇒ ', is part of the object language, not the metalanguage. - -According to Prawitz (1965): "The calculi of sequents can be understood as meta-calculi for the deducibility relation in the corresponding systems of natural deduction." And furthermore: "A proof in a calculus of sequents can be looked upon as an instruction on how to construct a corresponding natural deduction." In other words, the assertion symbol is part of the object language for the sequent calculus, which is a kind of meta-calculus, but simultaneously signifies deducibility in an underlying natural deduction system. - -A sequent is a formalized statement of provability that is frequently used when specifying calculi for deduction. In the sequent calculus, the name sequent is used for the construct, which can be regarded as a specific kind of judgment, characteristic of this deduction system. - -The intuitive meaning of the sequent $\Gamma\vdash\Sigma$ is that under the assumption of Γ the conclusion of Σ is provable. Classically, the formulae on the left of the turnstile can be interpreted conjunctively while the formulae on the right can be considered as a disjunction. This means that, when all formulae in Γ hold, then at least one formula in Σ also has to be true. If the succedent is empty, this is interpreted as falsity, i.e. $\Gamma\vdash$ means that Γ proves falsity and is thus inconsistent. On the other hand an empty antecedent is assumed to be true, i.e., $\vdash\Sigma$ means that Σ follows without any assumptions, i.e., it is always true (as a disjunction). A sequent of this form, with Γ empty, is known as a logical assertion. - -Of course, other intuitive explanations are possible, which are classically equivalent. For example, $\Gamma\vdash\Sigma$ can be read as asserting that it cannot be the case that every formula in Γ is true and every formula in Σ is false (this is related to the double-negation interpretations of classical intuitionistic logic, such as Glivenko's theorem). - -In any case, these intuitive readings are only pedagogical. Since formal proofs in proof theory are purely syntactic, the meaning of (the derivation of) a sequent is only given by the properties of the calculus that provides the actual rules of inference. - -Barring any contradictions in the technically precise definition above, we can describe sequents in their introductory logical form. $\Gamma$ represents a set of assumptions that we begin our logical process with, for example "Socrates is a man" and "All men are mortal". $\Sigma$ represents a logical conclusion that follows under these premises. For example "Socrates is mortal" follows from a reasonable formalization of the above points and we could expect to see it on the $\Sigma$ side of the turnstile. In this sense, $\vdash$ means the process of reasoning, or "therefore" in English. - -The general notion of sequent introduced here can be specialized in various ways. A sequent is said to be an intuitionistic sequent if there is at most one formula in the succedent (although multi-succedent calculi for intuitionistic logic are also possible). More precisely, the restriction of the general sequent calculus to single-succedent-formula sequents, with the same inference rules as for general sequents, constitutes an intuitionistic sequent calculus. (This restricted sequent calculus is denoted LJ.)
- -Similarly, one can obtain calculi for dual-intuitionistic logic (a type of paraconsistent logic) by requiring that sequents be singular in the antecedent. - -In many cases, sequents are also assumed to consist of multisets or sets instead of sequences. Thus one disregards the order or even the numbers of occurrences of the formulae. For classical propositional logic this does not yield a problem, since the conclusions that one can draw from a collection of premises do not depend on these data. In substructural logic, however, this may become quite important. - -Natural deduction systems use single-consequence conditional assertions, but they typically do not use the same sets of inference rules as Gentzen introduced in 1934. In particular, tabular natural deduction systems, which are very convenient for practical theorem-proving in propositional calculus and predicate calculus, were applied by Suppes and Lemmon for teaching introductory logic in textbooks. - -Historically, sequents were introduced by Gerhard Gentzen in order to specify his famous sequent calculus. In his German publication he used the word "Sequenz". However, in English, the word "sequence" is already used as a translation of the German "Folge" and appears quite frequently in mathematics. The term "sequent" was then created in search of an alternative translation of the German expression. - -Kleene makes the following comment on the translation into English: "Gentzen says 'Sequenz', which we translate as 'sequent', because we have already used 'sequence' for any succession of objects, where the German is 'Folge'." diff --git a/wiki/wikipedia/3358.txt b/wiki/wikipedia/3358.txt deleted file mode 100644 index 520d10929eff238ea58c8eac500d8c1037aaeaee..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3358.txt +++ /dev/null @@ -1,52 +0,0 @@ -The multi-commodity flow problem is a network flow problem with multiple commodities (flow demands) between different source and sink nodes. - -Given a flow network $G(V,E)$ in which edge $(u,v) \in E$ has capacity $c(u,v)$, there are $k$ commodities $K_1,K_2,\dots,K_k$, defined by $K_i=(s_i,t_i,d_i)$, where $s_i$ and $t_i$ are the source and sink of commodity $i$, and $d_i$ is its demand. The variable $f_i(u,v)$ defines the fraction of flow $i$ along edge $(u,v)$, where $f_i(u,v) \in [0,1]$ in case the flow can be split among multiple paths, and $f_i(u,v) \in \{0,1\}$ otherwise (i.e. "single path routing"). Find an assignment of all flow variables which satisfies the following four constraints: - -(1) Link capacity: The sum of all flows routed over a link does not exceed its capacity. -$$ -\forall (u,v)\in E:\sum_{i=1}^{k} f_i(u,v)\cdot d_i \leq c(u,v) -$$ - -(2) Flow conservation on transit nodes: The amount of a flow entering an intermediate node $u$ is the same that exits the node. -$$ -\forall i \in K:\sum_{w \in V} f_i(u,w) - \sum_{w \in V} f_i(w,u) = 0 \quad \mathrm{when} \quad u \neq s_i, t_i -$$ - -(3) Flow conservation at the source: A flow must exit its source node completely. -$$ -\forall i \in K:\sum_{w \in V} f_i(s_i,w) - \sum_{w \in V} f_i(w,s_i) = 1 -$$ - -(4) Flow conservation at the destination: A flow must enter its sink node completely. -$$ -\forall i \in K:\sum_{w \in V} f_i(w,t_i) - \sum_{w \in V} f_i(t_i,w) = 1 -$$ - -Load balancing is the attempt to route flows such that the utilization $U(u,v)$ of all links $(u,v)\in E$ is even, where -$$ -U(u,v)=\frac{\sum_{i=1}^{k} f_i(u,v)\cdot d_i}{c(u,v)} -$$ - -The problem can be solved, for example,
by minimizing $\sum_{u,v\in V} (U(u,v))^2$. A common linearization of this problem is the minimization of the maximum utilization $U_{max}$, where -$$ -\forall (u,v)\in E: U_{max} \geq U(u,v) -$$ - -In the minimum cost multi-commodity flow problem, there is a cost $a(u,v) \cdot f(u,v)$ for sending a flow on $(u,v)$. The problem is then to minimize -$$ -\sum_{(u,v) \in E} \left( a(u,v) \sum_{i=1}^{k} f_i(u,v) \right) -$$ - -In the maximum multi-commodity flow problem, the demand of each commodity is not fixed, and the total throughput is maximized by maximizing the sum of all demands $\sum_{i=1}^{k} d_i$ - -The minimum cost variant of the multi-commodity flow problem is a generalization of the minimum cost flow problem (in which there is merely one source $s$ and one sink $t$). Variants of the circulation problem are generalizations of all flow problems. That is, any flow problem can be viewed as a particular circulation problem. - -Routing and wavelength assignment (RWA) in optical burst switching of optical networks can be approached via multi-commodity flow formulations. - -In the decision version of the problem, producing an integer flow satisfying all demands is NP-complete, even for only two commodities and unit capacities (making the problem strongly NP-complete in this case). - -If fractional flows are allowed, the problem can be solved in polynomial time through linear programming, or through (typically much faster) fully polynomial time approximation schemes. - -* Papers by Clifford Stein about this problem: http://www.columbia.edu/~cs2035/papers/#mcf - -* Software solving the problem: https://web.archive.org/web/20130306031532/http://typo.zib.de/opt-long_projects/Software/Mcf/ diff --git a/wiki/wikipedia/3359.txt b/wiki/wikipedia/3359.txt deleted file mode 100644 index 1da1bd78eb312a13101b2478d1bc957e3a6f33b9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3359.txt +++ /dev/null @@ -1,55 +0,0 @@ -Unit propagation (UP) or Boolean Constraint propagation (BCP) or the one-literal rule (OLR) is a procedure of automated theorem proving that can simplify a set of (usually propositional) clauses. - -The procedure is based on unit clauses, i.e. clauses that are composed of a single literal. Because each clause needs to be satisfied, we know that this literal must be true. If a set of clauses contains the unit clause $l$, the other clauses are simplified by the application of the two following rules: - -# every clause (other than the unit clause itself) containing $l$ is removed (the clause is satisfied if $l$ is); - -# in every clause that contains $\neg l$ this literal is deleted ($\neg l$ can not contribute to it being satisfied). - -The application of these two rules leads to a new set of clauses that is equivalent to the old one. - -For example, the following set of clauses can be simplified by unit propagation because it contains the unit clause $a$. -$$ -\{a \vee b, \neg a \vee c, \neg c \vee d, a\} -$$ - -Since $a \vee b$ contains the literal $a$, this clause can be removed altogether. Since $\neg a \vee c$ contains the negation of the literal in the unit clause, this literal can be removed from the clause. The unit clause $a$ is not removed; this would make the resulting set not equivalent to the original one; this clause can be removed if already stored in some other form (see section "Using a partial model"). The effect of unit propagation can be summarized as follows: the resulting set of clauses $\{c, \neg c \vee d, a\}$ is equivalent to the above one.
The new unit clause $c$ that results from unit propagation can be used for a further application of unit propagation, which would transform $\neg c \vee d$ into $d$. - -The second rule of unit propagation can be seen as a restricted form of resolution, in which one of the two clauses being resolved must always be a unit clause. As for resolution, unit propagation is a correct inference rule, in that it never produces a new clause that was not entailed by the old ones. The differences between unit propagation and resolution are: - -# resolution is a complete refutation procedure while unit propagation is not; in other words, even if a set of clauses is contradictory, unit propagation may not generate an inconsistency; - -# the two clauses that are resolved cannot in general be removed after the generated clause is added to the set; on the contrary, the non-unit clause involved in a unit propagation can be removed when its simplification is added to the set; - -# resolution does not in general include the first rule used in unit propagation. - -Resolution calculi that include subsumption can model rule one by subsumption and rule two by a unit resolution step, followed by subsumption. - -Unit propagation, applied repeatedly as new unit clauses are generated, is a complete satisfiability algorithm for sets of propositional Horn clauses; it also generates a minimal model for the set if satisfiable: see Horn-satisfiability. - -The unit clauses that are present in a set of clauses or can be derived from it can be stored in the form of a partial model (this partial model may also contain other literals, depending on the application). In this case, unit propagation is performed based on the literals of the partial model, and unit clauses are removed if their literal is in the model. In the example above, the unit clause $a$ would be added to the partial model; the simplification of the set of clauses would then proceed as above with the difference that the unit clause $a$ is now removed from the set. The resulting set of clauses is equivalent to the original one under the assumption of validity of the literals in the partial model. - -The direct implementation of unit propagation takes time quadratic in the total size of the set to check, which is defined to be the sum of the size of all clauses, where the size of each clause is the number of literals it contains. - -Unit propagation can however be done in linear time by storing, for each variable, the list of clauses in which each literal is contained. For example, the set above can be represented by numbering each clause as follows: -$$ -\{1: a \vee b, 2: \neg a \vee c, 3: \neg c \vee d, 4: a\} -$$ - -and then storing, for each variable, the list of clauses containing the variable or its negation: -$$ -a : 1\ 2\ 4 -$$ -$$ -b : 1 -$$ -$$ -c : 2\ 3 -$$ -$$ -d : 3 -$$ - -This simple data structure can be built in time linear in the size of the set, and allows finding all clauses containing a variable very easily. Unit propagation of a literal can be performed efficiently by scanning only the list of clauses containing the variable of the literal. More precisely, the total running time for doing unit propagation for all unit clauses is linear in the size of the set of clauses.
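The following is a minimal sketch of the procedure in Python (ours, not from the article; it follows the partial-model variant above, in which satisfied unit clauses migrate into the model rather than staying in the clause set):

```python
# Sketch of unit propagation. Clauses are sets of integer literals:
# 1 = a, -1 = ¬a, and so on.

def unit_propagate(clauses):
    clauses = [set(c) for c in clauses]
    model = set()                       # literals forced true so far
    units = [c for c in clauses if len(c) == 1]
    while units:
        (lit,) = units.pop()
        if -lit in model:
            return None, model          # conflict: refuted by propagation
        if lit in model:
            continue
        model.add(lit)
        new_clauses = []
        for c in clauses:
            if lit in c:
                continue                # rule 1: clause satisfied, drop it
            if -lit in c:
                c = c - {-lit}          # rule 2: delete the falsified literal
                if len(c) == 1:
                    units.append(set(c))
                elif not c:
                    return None, model  # empty clause: conflict
            new_clauses.append(c)
        clauses = new_clauses
    return clauses, model

# The article's example: {a ∨ b, ¬a ∨ c, ¬c ∨ d, a} with a=1, b=2, c=3, d=4.
remaining, model = unit_propagate([{1, 2}, {-1, 3}, {-3, 4}, {1}])
print(remaining, model)   # propagates a, then c, then d
```

This quadratic-time sketch rescans the whole clause set per unit; the linear-time variant described above would instead visit only the clauses indexed under the propagated variable.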
diff --git a/wiki/wikipedia/336.txt b/wiki/wikipedia/336.txt deleted file mode 100644 index 582887034f55416ce3f2f940701107cbfe6a8fa3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/336.txt +++ /dev/null @@ -1,7 +0,0 @@ -In mathematics, especially homological algebra and other applications of abelian category theory, the short five lemma is a special case of the five lemma. - -It states that for a commutative diagram of the form -$$ -\begin{array}{ccccccccc} 0 & \longrightarrow & A & \longrightarrow & B & \longrightarrow & C & \longrightarrow & 0 \\ & & \downarrow{\scriptstyle g} & & \downarrow{\scriptstyle f} & & \downarrow{\scriptstyle h} & & \\ 0 & \longrightarrow & A' & \longrightarrow & B' & \longrightarrow & C' & \longrightarrow & 0 \end{array} -$$ -(in any abelian category, or in the category of groups), if the rows are short exact sequences, and if g and h are isomorphisms, then f is an isomorphism as well. - -It follows immediately from the five lemma. - -The essence of the lemma can be summarized as follows: if you have a homomorphism f from an object B to an object B′, and this homomorphism induces an isomorphism from a subobject A of B to a subobject A′ of B′ and also an isomorphism from the factor object B/A to B′/A′, then f itself is an isomorphism. Note however that the existence of f (such that the diagram commutes) has to be assumed from the start; two objects B and B′ that simply have isomorphic sub- and factor objects need not themselves be isomorphic (for example, in the category of abelian groups, B could be the cyclic group of order four and B′ the Klein four-group). diff --git a/wiki/wikipedia/3360.txt b/wiki/wikipedia/3360.txt deleted file mode 100644 index 68bcc5ad3db2c813946da7f30e5d6c2bbd6b8425..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3360.txt +++ /dev/null @@ -1,68 +0,0 @@ -In spherical trigonometry, the law of cosines (also called the cosine rule for sides) relates the sides and angles of a spherical triangle. For a triangle on the unit sphere with sides a, b, and c, and with C the angle of the corner opposite side c, it states: -$$ -\cos c = \cos a \cos b + \sin a \sin b \cos C -$$ - -Since this is a unit sphere, the lengths a, b, and c are simply equal to the angles (in radians) subtended by those sides from the center of the sphere. (For a non-unit sphere, the lengths are the subtended angles times the radius, and the formula still holds if a, b and c are reinterpreted as the subtended angles). As a special case, for C = π/2, then cos C = 0, and one obtains the spherical analogue of the Pythagorean theorem: -$$ -\cos c = \cos a \cos b -$$ - -If the law of cosines is used to solve for c, the necessity of inverting the cosine magnifies rounding errors when c is small. In this case, the alternative formulation of the law of haversines is preferable. - -A variation on the law of cosines, the second spherical law of cosines, (also called the cosine rule for angles) states: -$$ -\cos C = -\cos A \cos B + \sin A \sin B \cos c -$$ - -where A and B are the angles of the corners opposite to sides a and b, respectively. It can be obtained from consideration of a spherical triangle dual to the given one. - -Let u, v, and w denote the unit vectors from the center of the sphere to those corners of the triangle. The angles and distances do not change if the coordinate system is rotated, so we can rotate the coordinate system so that $\mathbf{u}$ is at the north pole and $\mathbf{v}$ is somewhere on the prime meridian (longitude of 0). With this rotation, the spherical coordinates for $\mathbf{v}$ are $(r, \theta, \phi) = (1, a, 0)$, where θ is the angle measured from the north pole not from the equator, and the spherical coordinates for $\mathbf{w}$ are $(r, \theta, \phi) = (1, b, C)$. The Cartesian coordinates for $\mathbf{v}$ are $(x, y, z) = (\sin a, 0, \cos a)$ and the Cartesian coordinates for $\mathbf{w}$ are $(x, y, z) = (\sin b \cos C, \sin b \sin C, \cos b)$.
The value of $\cos c$ is the dot product of the two Cartesian vectors, which is $\sin a \sin b \cos C + \cos a \cos b$. - -Let u, v, and w denote the unit vectors from the center of the sphere to those corners of the triangle. We have u · u = 1, v · w = cos c, u · v = cos a, and u · w = cos b. The vectors u × v and u × w have lengths sin a and sin b respectively and the angle between them is C, so - -sin a sin b cos C = (u × v) · (u × w) = (u · u)(v · w) − (u · v)(u · w) = cos c − cos a cos b, - -using cross products, dot products, and the Binet–Cauchy identity (p × q) · (r × s) = (p · r)(q · s) − (p · s)(q · r). - -The first and second spherical laws of cosines can be rearranged to put the sides (a, b, c) and angles (A, B, C) on opposite sides of the equations: - -\begin{align} -\cos C &= \frac{\cos c - \cos a \cos b}{\sin a \sin b} \\ -\cos c &= \frac{\cos C + \cos A \cos B}{\sin A \sin B} -\end{align} - -For small spherical triangles, i.e. for small a, b, and c, the spherical law of cosines is approximately the same as the ordinary planar law of cosines, -$$ -c^2 \approx a^2 + b^2 - 2ab\cos C . -$$ - -To prove this, we will use the small-angle approximation obtained from the Maclaurin series for the cosine and sine functions: -$$ -\cos a = 1 - \frac{a^2}{2} + O\left(a^4\right), \sin a = a + O\left(a^3\right) -$$ - -Substituting these expressions into the spherical law of cosines yields: -$$ -1 - \frac{c^2}{2} + O\left(c^4\right) = 1 - \frac{a^2}{2} - \frac{b^2}{2} + \frac{a^2 b^2}{4} + O\left(a^4\right) + O\left(b^4\right) + \cos(C)\left(ab + O\left(a^3 b\right) + O\left(ab^3\right) + O\left(a^3 b^3\right)\right) -$$ - -or after simplifying: -$$ -c^2 = a^2 + b^2 - 2ab\cos C + O\left(c^4\right) + O\left(a^4\right) + O\left(b^4\right) + O\left(a^2 b^2\right) + O\left(a^3 b\right) + O\left(ab^3\right) + O\left(a^3 b^3\right). -$$ - -The big O terms for a and b are dominated by $O\left(a^4\right) + O\left(b^4\right)$ as a and b get small, so we can write this last expression as: -$$ -c^2 = a^2 + b^2 - 2ab\cos C + O\left(a^4\right) + O\left(b^4\right) + O\left(c^4\right). -$$ diff --git a/wiki/wikipedia/3361.txt b/wiki/wikipedia/3361.txt deleted file mode 100644 index 2941c40fac5c11fbd5ae2671a5258b25074f052c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3361.txt +++ /dev/null @@ -1,3 +0,0 @@ -In algebraic geometry, the Kempf vanishing theorem, introduced by George Kempf, states that the higher cohomology group $H^i(G/B,L(\lambda))$ ($i > 0$) vanishes whenever λ is a dominant weight of B. Here G is a reductive algebraic group over an algebraically closed field, B a Borel subgroup, and L(λ) a line bundle associated to λ. In characteristic 0 this is a special case of the Borel–Weil–Bott theorem, but unlike the Borel–Weil–Bott theorem, the Kempf vanishing theorem still holds in positive characteristic. - -Andersen and Haboush found simpler proofs of the Kempf vanishing theorem using the Frobenius morphism. diff --git a/wiki/wikipedia/3362.txt b/wiki/wikipedia/3362.txt deleted file mode 100644 index 0b8ad1283d80d1aaed935f0a9946a7e36d234fef..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3362.txt +++ /dev/null @@ -1,77 +0,0 @@ -In mathematics, the (linear) Peetre theorem, named after Jaak Peetre, is a result of functional analysis that gives a characterisation of differential operators in terms of their effect on generalized function spaces, and without mentioning differentiation in explicit terms. 
The Peetre theorem is an example of a finite order theorem in which a function or a functor, defined in a very general way, can in fact be shown to be a polynomial because of some extraneous condition or symmetry imposed upon it. - -This article treats two forms of the Peetre theorem. The first is the original version which, although quite useful in its own right, is actually too general for most applications. - -Let M be a smooth manifold and let E and F be two vector bundles on M. Let -$$ -\Gamma^\infty (E),\ \hbox{and}\ \Gamma^\infty (F) -$$ - -be the spaces of smooth sections of E and F. An operator -$$ -D:\Gamma^\infty (E)\rightarrow \Gamma^\infty(F) -$$ - -is a morphism of sheaves which is linear on sections such that the support of D is non-increasing: supp Ds ⊆ supp s for every smooth section s of E. The original Peetre theorem asserts that, for every point p in M, there is a neighborhood U of p and an integer k (depending on U) such that D is a differential operator of order k over U. This means that D factors through a linear mapping iD from the k-jet of sections of E into the space of smooth sections of F: -$$ -D=i_D\circ j^k -$$ - -where -$$ -j^k:\Gamma^\infty E\rightarrow J^kE -$$ - -is the k-jet operator and -$$ -i_D:J^kE\rightarrow F -$$ - -is a linear mapping of vector bundles. - -The problem is invariant under local diffeomorphism, so it is sufficient to prove it when M is an open set in Rn and E and F are trivial bundles. At this point, it relies primarily on two lemmas: - -*Lemma 1. If the hypotheses of the theorem are satisfied, then for every x∈M and C > 0, there exists a neighborhood V of x and a positive integer k such that for any y∈V\{x} and for any section s of E whose k-jet vanishes at y (jks(y)=0), we have |Ds(y)| < C. - -The proof of this lemma proceeds by contradiction: if the claim fails, then there is a sequence xk tending to x, a sequence of very disjoint balls Bk around the xk (meaning that the geodesic distance between any two such balls is non-zero), and sections sk of E over each Bk such that jksk(xk)=0 but |Dsk(xk)|≥C>0. - -Let ρ(x) denote a standard bump function for the unit ball at the origin: a smooth real-valued function which is equal to 1 on B1/2(0), which vanishes to infinite order on the boundary of the unit ball. - -Consider every other section s2k. At x2k, these satisfy - -j2ks2k(x2k)=0. - -Suppose that 2k is given. Then, since these functions are smooth and each satisfies j2k(s2k)(x2k)=0, it is possible to specify a smaller ball B′δ(x2k) on which the higher order derivatives of s2k are suitably small; summing appropriate bump-function cutoffs of the s2k then produces a single smooth section on which the bounds |Ds2k(x2k)|≥C are incompatible with the smoothness of its image under D, which yields the required contradiction. - -As an example of an application of the theorem, consider the operator defined by -$$ -Lf(x_0) = \lim_{r \to 0} \frac{2d}{r^2} \frac{1}{|S_r|} \int_{S_r} (f(x)-f(x_0)) dx -$$ - -where $ f \in C^\infty(\mathbb{R}^d) $ and $S_r$ is the sphere centered at $x_0$ with radius $r$. This is in fact the Laplacian. We will show that $L$ is a differential operator by Peetre's theorem. The main idea is that since $ Lf(x_0) $ is defined only in terms of $f$'s behavior near $x_0$, it is local in nature; in particular, if $f$ is locally zero, so is $Lf$, and hence the support cannot grow. - -The technical proof goes as follows. - -Let $ M = \mathbb{R}^d $ and $E$ and $F$ be the rank $1$ trivial bundles. - -Then $\Gamma^\infty(E)$ and $\Gamma^\infty(F)$ are simply the space $C^\infty(\mathbb{R}^d)$ of smooth functions on $\mathbb{R}^d$. As a sheaf, $\mathcal{F}(U)$ is the set of smooth functions on the open set $U$ and restriction is function restriction. - -To see $L$ is indeed a morphism, we need to check $(Lu)|V = L(u|V)$ for open sets $U$ and $V$ such that $V \subseteq U$ and $u \in C^\infty(U)$. 
This is clear because for $x \in V$, both $[(Lu)|V](x)$ and $[L(u|V)](x)$ are simply $ \lim_{r \to 0} \frac{2d}{r^2}\frac{1}{|S_r|} \int_{S_r} (u(y)-u(x)) dy$, as the $ S_r $ eventually sits inside both $U$ and $V$ anyway. - -It is easy to check that $L$ is linear: -$$ -L(f + g) = L(f) + L(g) -$$ and $L(af) = aL(f)$. - -Finally, we check that $ L $ is local in the sense that $ supp Lf \subseteq supp f$. If $ x_0 \notin supp(f) $, then there exists $ r > 0 $ such that $f = 0$ in the ball of radius $ r $ centered at $ x_0 $. Thus, for $ x \in B(x_0, r) $, -$$ -\int_{S_{r'}}(f(y)-f(x)) dy = 0 -$$ - -for $ r' < r - |x - x_0| $, and hence $ (Lf)(x) = 0 $. - -Therefore, $ x_0 \notin supp Lf $. - -So by Peetre's theorem, $ L $ is a differential operator. diff --git a/wiki/wikipedia/3363.txt b/wiki/wikipedia/3363.txt deleted file mode 100644 index dbd7be91405e3f649b79371e7727eb039a84903f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3363.txt +++ /dev/null @@ -1,41 +0,0 @@ -In nonlinear control, Aizerman's conjecture or the Aizerman problem states that a linear system in feedback with a sector nonlinearity would be stable if the linear system is stable for any linear gain of the sector. This conjecture was proven false but led to (valid) sufficient criteria for absolute stability. - -Consider a system with one scalar nonlinearity - - - -\frac{dx}{dt}=Px+qf(e),\quad e=r^*x, \quad x\in\mathbb R^n, - - - -where P is a constant n×n matrix, q and r are constant n-dimensional vectors, ∗ is the operation of transposition, f(e) is a scalar function, and f(0)=0. Suppose that the nonlinearity f is sector bounded, meaning that for some real $k_1$ and $k_2$ with $k_1 < k_2$, the function $f$ satisfies - - - -k_1 < \frac{f(e)}{e}< k_2, \quad \forall e \neq 0. - - - -Then Aizerman's conjecture is that the system is stable in the large (i.e. the unique stationary point is a global attractor) if all linear systems with f(e)=ke, k ∈(k1,k2) are asymptotically stable. - -There are counterexamples to Aizerman's conjecture in which the nonlinearity belongs to the sector of linear stability and a unique stable equilibrium coexists with a stable periodic solution (a hidden oscillation). - -A strengthening of Aizerman's conjecture is Kalman's conjecture (or the Kalman problem), in which, in place of the condition on the nonlinearity itself, it is required that the derivative of the nonlinearity belong to the linear stability sector. diff --git a/wiki/wikipedia/3364.txt b/wiki/wikipedia/3364.txt deleted file mode 100644 index a5eca6ad3cbcdf5d70a0ee2ee1997db5bf984a23..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3364.txt +++ /dev/null @@ -1,61 +0,0 @@ -In mathematics, the Loomis–Whitney inequality is a result in geometry, which in its simplest form, allows one to estimate the "size" of a $d$-dimensional set by the sizes of its $(d-1)$-dimensional projections. The inequality has applications in incidence geometry, the study of so-called "lattice animals", and other areas. - -The result is named after the American mathematicians Lynn Harold Loomis and Hassler Whitney, and was published in 1949. - -Fix a dimension $d\ge 2$ and consider the projections -$$ -\pi_{j} : \mathbb{R}^{d} \to \mathbb{R}^{d - 1}, -$$ -$$ -\pi_{j} : x = (x_{1}, \dots, x_{d}) \mapsto \hat{x}_{j} = (x_{1}, \dots, x_{j - 1}, x_{j + 1}, \dots, x_{d}). -$$ - -For each 1 ≤ j ≤ d, let -$$ -g_{j} : \mathbb{R}^{d - 1} \to [0, + \infty), -$$ -$$ -g_{j} \in L^{d - 1} (\mathbb{R}^{d -1}). 
-$$ - -Then the Loomis–Whitney inequality holds: -$$ -\int_{\mathbb{R}^{d}} \prod_{j = 1}^{d} g_{j} ( \pi_{j} (x) ) \mathrm{d} x \leq \prod_{j = 1}^{d} \| g_{j} \|_{L^{d - 1} (\mathbb{R}^{d - 1})}. -$$ - -Equivalently, taking -$$ -f_{j} (x) = g_{j} (x)^{d - 1}, -$$ -$$ -\int_{\mathbb{R}^{d}} \prod_{j = 1}^{d} f_{j} ( \pi_{j} (x) )^{1 / (d - 1)} \mathrm{d} x \leq \prod_{j = 1}^{d} \left( \int_{\mathbb{R}^{d - 1}} f_{j} (\hat{x}_{j}) \mathrm{d} \hat{x}_{j} \right)^{1 / (d - 1)}. -$$ - -The Loomis–Whitney inequality can be used to relate the Lebesgue measure of a subset of Euclidean space $\mathbb{R}^{d}$ to its "average widths" in the coordinate directions. Let E be some measurable subset of $\mathbb{R}^{d}$ and let -$$ -f_{j} = \mathbf{1}_{\pi_{j} (E)} -$$ - -be the indicator function of the projection of E onto the jth coordinate hyperplane. It follows that for any point x in E, -$$ -\prod_{j = 1}^{d} f_{j} (\pi_{j} (x))^{1 / (d - 1)} = 1. -$$ - -Hence, by the Loomis–Whitney inequality, -$$ -| E | \leq \prod_{j = 1}^{d} | \pi_{j} (E) |^{1 / (d - 1)}, -$$ - -and hence -$$ -| E | \geq \prod_{j = 1}^{d} \frac{| E |}{| \pi_{j} (E) |}. -$$ - -The quantity -$$ -\frac{| E |}{| \pi_{j} (E) |} -$$ - -can be thought of as the average width of $E$ in the $j$th coordinate direction. This interpretation of the Loomis–Whitney inequality also holds if we consider a finite subset of Euclidean space and replace Lebesgue measure by counting measure. - -The Loomis–Whitney inequality is a special case of the Brascamp–Lieb inequality, in which the projections πj above are replaced by more general linear maps, not necessarily all mapping onto spaces of the same dimension. diff --git a/wiki/wikipedia/3365.txt b/wiki/wikipedia/3365.txt deleted file mode 100644 index d928f725c3a135a3d4124fc128c8396200cc6453..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3365.txt +++ /dev/null @@ -1,44 +0,0 @@ -In mathematical analysis, the initial value theorem is a theorem used to relate frequency domain expressions to the time domain behavior as time approaches zero. - -It is also known under the abbreviation IVT. - -Let -$$ - F(s) = \int_0^\infty f(t) e^{-st}dt -$$ - -be the (one-sided) Laplace transform of f(t). If $f$ is bounded on $(0,\infty)$ (or if just $f(t)=O(e^{ct})$) and $\lim_{t\to 0^+}f(t)$ exists, then the initial value theorem says -$$ -\lim_{t\to 0}f(t)=\lim_{s\to\infty}{sF(s)}. -$$ - -Suppose first that $ f$ is bounded. Say $\lim_{t\to 0^+}f(t)=\alpha$. A change of variable in the integral -$$ -\int_0^\infty f(t)e^{-st}dt -$$ shows that -$$ -sF(s)=\int_0^\infty f\left(\frac ts\right)e^{-t}dt -$$. - -Since $f$ is bounded, the Dominated Convergence Theorem shows that -$$ -\lim_{s\to\infty}sF(s)=\int_0^\infty\alpha e^{-t}dt=\alpha. -$$ - -Of course we don't really need DCT here; one can give a very simple proof using only elementary calculus: - -Start by choosing $A$ so that $\int_A^\infty e^{-t}dt<\epsilon$, and then - -note that $\lim_{s\to\infty}f\left(\frac ts\right)=\alpha$ uniformly for $t\in(0,A]$. - -The theorem assuming just that $f(t)=O(e^{ct})$ follows from the theorem for bounded $f$: - -Define $g(t)=e^{-ct}f(t)$. Then $g$ is bounded, so we've shown that $g(0^+)=\lim_{s\to\infty}sG(s)$. 
- -But $f(0^+)=g(0^+)$ and $G(s)=F(s+c)$, so -$$ -\lim_{s\to\infty}sF(s)=\lim_{s\to\infty}(s-c)F(s)=\lim_{s\to\infty}sF(s+c)=\lim_{s\to\infty}sG(s), -$$ - -since $\lim_{s\to\infty}F(s)=0$. diff --git a/wiki/wikipedia/3366.txt b/wiki/wikipedia/3366.txt deleted file mode 100644 index 6a760b9bb0ffcd6978d21eb71ff2675691786d78..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3366.txt +++ /dev/null @@ -1,35 +0,0 @@ -The Isabelle automated theorem prover is a higher-order logic (HOL) theorem prover, written in Standard ML and Scala. As an LCF-style theorem prover, it is based on a small logical core (kernel) to increase the trustworthiness of proofs without requiring, yet still supporting, explicit proof objects. - -Isabelle is available inside a flexible system framework allowing for logically safe extensions, which comprise both theories and implementations for code generation, documentation, and specific support for a variety of formal methods. It can be seen as an IDE for formal methods. In recent years, a substantial number of theories and system extensions have been collected in the Isabelle Archive of Formal Proofs (Isabelle AFP). - -Isabelle was named by Lawrence Paulson after Gérard Huet's daughter. - -The Isabelle theorem prover is free software, released under the revised BSD license. - -Isabelle is generic: it provides a meta-logic (a weak type theory), which is used to encode object logics like first-order logic (FOL), higher-order logic (HOL) or Zermelo–Fraenkel set theory (ZFC). The most widely used object logic is Isabelle/HOL, although significant set theory developments were completed in Isabelle/ZF. Isabelle's main proof method is a higher-order version of resolution, based on higher-order unification. - -Though interactive, Isabelle features efficient automatic reasoning tools, such as a term rewriting engine and a tableaux prover, various decision procedures, and, through the Sledgehammer proof-automation interface, external satisfiability modulo theories (SMT) solvers (including CVC4) and resolution-based automated theorem provers (ATPs), including E and SPASS (the Metis proof method reconstructs resolution proofs generated by these ATPs). It also features two model finders (counterexample generators): Nitpick and Nunchaku. - -Isabelle features locales, which are modules that structure large proofs. A locale fixes types, constants, and assumptions within a specified scope. - -A notable application of Isabelle is the verification of the seL4 (secure embedded L4) microkernel. The proof is constructed and checked in Isabelle/HOL and comprises over 200,000 lines of proof script to verify 7,500 lines of C. The verification covers code, design, and implementation, and the main theorem states that the C code correctly implements the formal specification of the kernel. The proof uncovered 144 bugs in an early version of the C code of the seL4 kernel, and about 150 issues in each of design and specification. - -* The definition of the programming language Lightweight Java was proven type-sound in Isabelle. - -Larry Paulson keeps a list of research projects that use Isabelle. 
- -Several languages and systems provide similar functionality: - -* Agda, written in Haskell - -* Coq, written in OCaml - -* Lean, written in C++ - -* LEGO - -* Mizar system - -* Metamath - -* Prover9 - -* Twelf, written in Standard ML diff --git a/wiki/wikipedia/3367.txt b/wiki/wikipedia/3367.txt deleted file mode 100644 index 30913f7683bbd18fce87b2ca4407b74f079895d3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3367.txt +++ /dev/null @@ -1,55 +0,0 @@ -In geometry, the truncated tetrahedron is an Archimedean solid. It has 4 regular hexagonal faces, 4 equilateral triangle faces, 12 vertices and 18 edges (of two types). It can be constructed by truncating all 4 vertices of a regular tetrahedron at one third of the original edge length. - -A deeper truncation, removing a tetrahedron of half the original edge length from each vertex, is called rectification. The rectification of a tetrahedron produces an octahedron. - -A truncated tetrahedron is the Goldberg polyhedron GIII(1,1), containing triangular and hexagonal faces. - -A truncated tetrahedron can be called a cantic cube, having half of the vertices of the cantellated cube (rhombicuboctahedron). There are two dual positions of this construction, and combining them creates the uniform compound of two truncated tetrahedra. - -The area A and the volume V of a truncated tetrahedron of edge length a are: - -\begin{align} - -A &= 7\sqrt{3}a^2 &&\approx 12.12435565a^2 \\ - -V &= \tfrac{23}{12}\sqrt{2}a^3 &&\approx 2.710575995a^3. \end{align} - -The densest packing of the Archimedean truncated tetrahedron is believed to be Φ = 207/208, as reported by two independent groups using Monte Carlo methods. Although no mathematical proof exists that this is the best possible packing for the truncated tetrahedron, the high proximity to unity and the independence of the findings make it unlikely that an even denser packing is to be found. In fact, if the truncation of the corners is slightly smaller than that of an Archimedean truncated tetrahedron, this new shape can be used to completely fill space. - -Cartesian coordinates for the 12 vertices of a truncated tetrahedron centered at the origin, with edge length √8, are all permutations of (±1,±1,±3) with an even number of minus signs: - -*(+3,+1,+1), (+1,+3,+1), (+1,+1,+3) - -*(−3,−1,+1), (−1,−3,+1), (−1,−1,+3) - -*(−3,+1,−1), (−1,+3,−1), (−1,+1,−3) - -*(+3,−1,−1), (+1,−3,−1), (+1,−1,−3) - -Another simple construction exists in 4-space as cells of the truncated 16-cell, with vertices as coordinate permutations of: - -(0,0,1,2) - -The truncated tetrahedron can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane. - -A lower symmetry version of the truncated tetrahedron (a truncated tetragonal disphenoid with order 8 D2d symmetry) is called a Friauf polyhedron in crystals such as complex metallic alloys. This form fits 5 Friauf polyhedra around an axis, giving a 72-degree dihedral angle on a subset of 6-6 edges. It is named after J. B. Friauf and his 1927 paper "The crystal structure of the intermetallic compound MgCu2". - -Giant truncated tetrahedra were used for the "Man the Explorer" and "Man the Producer" theme pavilions in Expo 67. They were made of massive girders of steel bolted together in a geometric lattice. 
The truncated tetrahedra were interconnected with lattice steel platforms. All of these buildings were demolished after the end of Expo 67, as they had not been built to withstand the severity of the Montreal weather over the years. Their only remnants are in the Montreal city archives, the Public Archives of Canada and the photo collections of tourists of the times. - -The Tetraminx puzzle has a truncated tetrahedral shape. This puzzle shows a dissection of a truncated tetrahedron into 4 octahedra and 6 tetrahedra. It contains 4 central planes of rotation. - -In the mathematical field of graph theory, a truncated tetrahedral graph is an Archimedean graph, the graph of vertices and edges of the truncated tetrahedron, one of the Archimedean solids. It has 12 vertices and 18 edges. It is a connected cubic graph and a connected vertex-transitive graph. - -It is also a part of a sequence of cantic polyhedra and tilings with vertex configuration 3.6.n.6. In this Wythoff construction the edges between the hexagons represent degenerate digons. - -This polyhedron is topologically related as a part of a sequence of uniform truncated polyhedra with vertex configurations (3.2n.2n), and [n,3] Coxeter group symmetry. - - - -File:Truncatedtetrahedron.gif|Truncated tetrahedron in rotation - -File:Tetraedro truncado (Matemateca IME-USP).jpg|Truncated tetrahedron (Matemateca IME-USP) - -File:D4 truncated tetrahedron.JPG|Truncated 4-sided die - - diff --git a/wiki/wikipedia/3368.txt b/wiki/wikipedia/3368.txt deleted file mode 100644 index c91439786322b19980a9333ba166c5d6d2a3a8d8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3368.txt +++ /dev/null @@ -1,33 +0,0 @@ -In computational complexity theory, a numeric algorithm runs in pseudo-polynomial time if its running time is a polynomial in the numeric value of the input (the largest integer present in the input), but not necessarily in the length of the input (the number of bits required to represent it), which is the case for polynomial time algorithms. - -In general, the numeric value of the input is exponential in the input length, which is why a pseudo-polynomial time algorithm does not necessarily run in polynomial time with respect to the input length. - -An NP-complete problem with known pseudo-polynomial time algorithms is called weakly NP-complete. - -An NP-complete problem is called strongly NP-complete if it is proven that it cannot be solved by a pseudo-polynomial time algorithm unless P = NP. The strong/weak kinds of NP-hardness are defined analogously. - -Consider the problem of testing whether a number n is prime, by naively checking whether no number in $\{ 2, 3, \dots, \sqrt{n} \}$ divides $n$ evenly. This approach can take up to $ \sqrt{n} - 1 $ divisions, which is sub-linear in the value of n but exponential in the length of n (which is about $\log(n)$). For example, a number n slightly less than 10,000,000,000 would require up to approximately 100,000 divisions, even though the length of n is only 11 digits. Moreover, one can easily write down an input (say, a 300-digit number) for which this algorithm is impractical. Since computational complexity measures difficulty with respect to the length of the (encoded) input, this naive algorithm is actually exponential. It is, however, pseudo-polynomial time. 
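The naive primality test just described is easy to make concrete. Here is a short Python sketch (an illustration added here, not from the original text); its division count grows like √n, which is exponential in the number of digits of n:

```python
import math

def is_prime_naive(n: int) -> bool:
    """Trial division: up to sqrt(n) - 1 divisions, exponential in len(str(n))."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):   # try every candidate divisor up to floor(sqrt(n))
        if n % d == 0:
            return False
    return True

# Fast for small values, but hopeless for a 300-digit number:
print(is_prime_naive(2147483647))           # 2**31 - 1, a known prime; about 46,000 divisions
```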
- -Contrast this algorithm with a true polynomial numeric algorithm, say, the straightforward algorithm for addition: adding two 9-digit numbers takes around 9 simple steps, and in general the algorithm is truly linear in the length of the input. Compared with the actual numbers being added (in the billions), the algorithm could be called "pseudo-logarithmic time", though such a term is not standard. Thus, adding 300-digit numbers is not impractical. Similarly, long division is quadratic: an m-digit number can be divided by an n-digit number in $O(mn)$ steps (see Big O notation). - -In the case of primality, it turns out there is a different algorithm for testing whether n is prime (discovered in 2002) that runs in time $O((\log {n})^6)$. - -In the knapsack problem, we are given $n$ items with weight $w_i$ and value $v_i$, along with a maximum weight capacity of a knapsack $W$. - -The goal is to solve the following optimization problem; informally, what is the best way to fit the items into the knapsack to maximize value? - -maximize $\sum_{i=1}^n v_i x_i$ - -subject to $\sum_{i=1}^n w_i x_i \leq W$ and $x_i \in \{0,1\}$. - -Solving this problem is NP-hard, so a polynomial time algorithm is impossible unless P = NP. However, an $O(nW)$ time algorithm is possible using dynamic programming; since the number $W$ only needs $\log W$ bits to describe, this algorithm runs in pseudo-polynomial time. - -Although the notion of pseudo-polynomial time is used almost exclusively for numeric problems, the concept can be generalized: - -The function m is pseudo-polynomial if m(n) is no greater than a polynomial function of the problem size n and an additional property of the input, k(n). (Presumably, k is chosen to be something relevant to the problem.) - -This makes numeric polynomial problems a special case by taking k to be the numeric value of the input. - -The distinction between the value of a number and its length is one of encoding: if numeric inputs are always encoded in unary, then pseudo-polynomial would coincide with polynomial. diff --git a/wiki/wikipedia/3369.txt b/wiki/wikipedia/3369.txt deleted file mode 100644 index 335905b284c47553bb3dd75a5efaff52d2ec12a9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3369.txt +++ /dev/null @@ -1,9 +0,0 @@ -In graph drawing styles that represent the edges of a graph by polylines (sequences of line segments connected at bends), it is desirable to minimize the number of bends per edge (sometimes called the curve complexity) or the total number of bends in a drawing. Bend minimization is the algorithmic problem of finding a drawing that minimizes these quantities. - -The prototypical example of bend minimization is Fáry's theorem, which states that every planar graph can be drawn with no bends, that is, with all its edges drawn as straight line segments. - -Drawings of a graph in which the edges are both bendless and axis-aligned are sometimes called rectilinear drawings, and are one way of constructing RAC drawings in which all crossings are at right angles. - -Tamassia showed that bend minimization of orthogonal drawings of planar graphs, in which the vertices are placed in an integer lattice and the edges are drawn as axis-aligned polylines, could be performed in polynomial time by translating the problem into one of minimum-cost network flow. 
However, if the planar embedding of the graph may be changed, then bend minimization becomes NP-complete, and must instead be solved by techniques such as integer programming that do not guarantee both a fast runtime and an exact answer. - -Many graph drawing styles allow bends, but only in a limited way: the curve complexity of these drawings (the maximum number of bends per edge) is bounded by some fixed constant. Allowing this constant to grow larger can be used to improve other aspects of the drawing, such as its area. Alternatively, in some cases, a drawing style may only be possible when bends are allowed; for instance, not every graph has a RAC drawing (a drawing with all crossings at right angles) with no bends, or with curve complexity two, but every graph has such a drawing with curve complexity three. diff --git a/wiki/wikipedia/337.txt b/wiki/wikipedia/337.txt deleted file mode 100644 index 35b258772f9be61342f7f2c3fcada44206eeb87d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/337.txt +++ /dev/null @@ -1,30 +0,0 @@ -In number theory, the Ankeny–Artin–Chowla congruence is a result published in 1953 by N. C. Ankeny, Emil Artin and S. Chowla. It concerns the class number h of a real quadratic field of discriminant d > 0. If the fundamental unit of the field is -$$ -\varepsilon = \frac{t + u \sqrt{d}}{2} -$$ - -with integers t and u, it expresses in another form -$$ -\frac{ht}{u} \pmod{p} -$$ - -for any prime number p > 2 that divides d. In case p > 3 it states that -$$ --2{mht \over u} \equiv \sum_{0 < k < d} {\chi(k) \over k}\lfloor {k/p} \rfloor \pmod {p} -$$ - -where $m = \frac{d}{p}$ and $\chi$ is the Dirichlet character for the quadratic field. For p = 3 there is a factor (1 + m) multiplying the LHS. Here -$$ -\lfloor x\rfloor -$$ - -represents the floor function of x. - -A related result is that if d=p is congruent to one mod four, then -$$ -{u \over t}h \equiv B_{(p-1)/2} \pmod{ p} -$$ - -where Bn is the nth Bernoulli number. - -There are some generalisations of these basic results, in the papers of the authors. diff --git a/wiki/wikipedia/3370.txt b/wiki/wikipedia/3370.txt deleted file mode 100644 index bc01e3f60bd91ef18060a74b33f1b71b53ff5d4f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3370.txt +++ /dev/null @@ -1,96 +0,0 @@ -In mathematical analysis, the intermediate value theorem states that if f is a continuous function whose domain contains the interval [a, b], then it takes on any given value between f(a) and f(b) at some point within the interval. - -This has two important corollaries: - -# If a continuous function has values of opposite sign inside an interval, then it has a root in that interval (Bolzano's theorem). - -# The image of a continuous function over an interval is itself an interval. - -This captures an intuitive property of continuous functions over the real numbers: given f continuous on [1, 2] with the known values f(1) = 3 and f(2) = 5, the graph of y = f(x) must pass through the horizontal line y = 4 while x moves from 1 to 2. It represents the idea that the graph of a continuous function on a closed interval can be drawn without lifting a pencil from the paper. - -The intermediate value theorem states the following: - -Consider an interval $I = [a,b]$ of real numbers $\R$ and a continuous function $f \colon I \to \R$. Then - -*Version I. if $u$ is a number between $f(a)$ and $f(b)$, that is, $\min(f(a),f(b)) < u < \max(f(a),f(b))$, then there is a $c\in (a,b)$ such that $f(c)=u$. - -*Version II. 
the image set $f(I)$ is also an interval, and it contains $\bigl[\min(f(a), f(b)),\max(f(a), f(b))\bigr]$. - -Remark: Version II states that the set of function values has no gap. For any two function values $c < d$, even if they are outside the interval between $f(a)$ and $f(b)$, all points in the interval $\bigl[c,d\bigr]$ are also function values, -$$ -\bigl[c,d\bigr]\subseteq f(I) -$$. - -A subset of the real numbers with no internal gap is an interval. Version I is naturally contained in Version II. - -The theorem depends on, and is equivalent to, the completeness of the real numbers. The intermediate value theorem does not apply to the rational numbers Q because gaps exist between rational numbers; irrational numbers fill those gaps. For example, the function $f(x)=x^2-2$ for $x\in\Q$ satisfies $f(0)=-2$ and $f(2)=2$. However, there is no rational number $x$ such that $f(x)=0$, because $\sqrt2$ is an irrational number. - -The theorem may be proven as a consequence of the completeness property of the real numbers as follows: - -We shall prove the first case, $f(a) < u < f(b)$; the second case is similar. Let $S$ be the set of all $x \in [a,b]$ such that $f(x) < u$. Then $S$ is non-empty ($a$ is an element of $S$) and bounded above by $b$, so by completeness the supremum $c = \sup S$ exists. We claim that $f(c)=u$. Fix some $\varepsilon > 0$. Since $f$ is continuous, there is a $\delta>0$ such that $|f(x) - f(c)| < \varepsilon$ whenever $|x-c| < \delta$. This means that -$$ -f(x)-\varepsilon < f(c) < f(x)+\varepsilon -$$ - -for all $x \in (c-\delta, c+\delta)$. By the properties of the supremum, there is some $a^* \in (c-\delta, c]$ that is contained in $S$, so that -$$ -f(c) < f(a^*)+\varepsilon < u+\varepsilon. -$$ - -Since $c = \sup S$, no point of $(c, c+\delta)$ belongs to $S$, so choosing some $a^{**} \in (c, c+\delta)$ (such a point exists, because $f(b) > u$ forces $c < b$) gives -$$ -f(c) > f(a^{**})-\varepsilon \geq u-\varepsilon. -$$ - -Both inequalities -$$ -u-\varepsilon < f(c) < u+\varepsilon -$$ - -are valid for all $\varepsilon > 0$, from which we deduce $f(c)=u$ as the only possible value, as stated. - -Remark: The intermediate value theorem can also be proved using the methods of non-standard analysis, which places "intuitive" arguments involving infinitesimals on a rigorous footing. - -The theorem was first proved by Bernard Bolzano in 1817. Bolzano used the following formulation of the theorem: - -Let $f, \phi$ be continuous functions on the interval between $\alpha$ and $\beta$ such that $f(\alpha) < \phi(\alpha)$ and $f(\beta) > \phi(\beta)$. Then there is an $x$ between $\alpha$ and $\beta$ such that $f(x) = \phi(x)$. - -The equivalence between this formulation and the modern one can be shown by setting $\phi$ to the appropriate constant function. Augustin-Louis Cauchy provided the modern formulation and a proof in 1821. Both were inspired by the goal of formalizing the analysis of functions and the work of Joseph-Louis Lagrange. The idea that continuous functions possess the intermediate value property has an earlier origin. Simon Stevin proved the intermediate value theorem for polynomials (using a cubic as an example) by providing an algorithm for constructing the decimal expansion of the solution. The algorithm iteratively subdivides the interval into 10 parts, producing an additional decimal digit at each step of the iteration. Before the formal definition of continuity was given, the intermediate value property was given as part of the definition of a continuous function. Proponents include Louis Arbogast, who assumed the functions to have no jumps, satisfy the intermediate value property and have increments whose sizes correspond to the sizes of the increments of the variable. - -Earlier authors held the result to be intuitively obvious and requiring no proof. The insight of Bolzano and Cauchy was to define a general notion of continuity (in terms of infinitesimals in Cauchy's case and using real inequalities in Bolzano's case), and to provide a proof based on such definitions. 
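The constructive flavor of Stevin's approach survives in the modern bisection method: repeatedly halving an interval on which a continuous function changes sign, which Bolzano's theorem guarantees to contain a root. A small Python sketch (an illustration added here; the tolerance and the example function are arbitrary choices):

```python
def bisect(f, a, b, tol=1e-12):
    """Find a root of f in [a, b], assuming f is continuous with f(a), f(b) of opposite signs."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f must change sign on [a, b]"
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fm == 0:
            return m
        if fa * fm < 0:          # sign change in [a, m]: a root lies there
            b, fb = m, fm
        else:                    # otherwise the sign change is in [m, b]
            a, fa = m, fm
    return (a + b) / 2

# f(x) = x^2 - 2 changes sign on [0, 2]; the root is sqrt(2),
# matching the article's example of a gap in the rationals.
print(bisect(lambda x: x * x - 2, 0.0, 2.0))   # ≈ 1.4142135623...
```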
- -The intermediate value theorem is closely linked to the topological notion of connectedness and follows from the basic properties of connected sets in metric spaces and connected subsets of R in particular: - -* If $X$ and $Y$ are metric spaces, $f \colon X \to Y$ is a continuous map, and $E \subset X$ is a connected subset, then $f(E)$ is connected. (*) - -* A subset $E \subset \R$ is connected if and only if it satisfies the following property: $x,y\in E,\ x < r < y \implies r \in E$. (**) - -In fact, connectedness is a topological property and (*) generalizes to topological spaces: If $X$ and $Y$ are topological spaces, $f \colon X \to Y$ is a continuous map, and $X$ is a connected space, then $f(X)$ is connected. The preservation of connectedness under continuous maps can be thought of as a generalization of the intermediate value theorem, a property of real-valued functions of a real variable, to continuous functions in general spaces. - -Recall the first version of the intermediate value theorem, stated previously: - -Consider a closed interval $I=[a,b]$ in the real numbers $\R$ and a continuous function $f\colon I\to\R$. Then, if $ u$ is a real number such that $\min(f(a),f(b))< u < \max(f(a),f(b))$, there exists $c \in (a,b)$ such that $f(c) = u$. - -The intermediate value theorem is an immediate consequence of these two properties of connectedness: by (*), the image $f(I)$ of the connected interval $I$ is connected, and by (**) a connected subset of $\R$ containing $f(a)$ and $f(b)$ must contain every value between them. - -The intermediate value theorem generalizes in a natural way: Suppose that X is a connected topological space and (Y, <) is a totally ordered set equipped with the order topology, and let f : X → Y be a continuous map. If a and b are two points in X and u is a point in Y lying between f(a) and f(b) with respect to <, then there exists c in X such that f(c) = u. The original theorem is recovered by noting that R is connected and that its natural topology is the order topology. - -The Brouwer fixed-point theorem is a related theorem that, in one dimension, gives a special case of the intermediate value theorem. - -A Darboux function is a real-valued function f that has the "intermediate value property," i.e., that satisfies the conclusion of the intermediate value theorem: for any two values a and b in the domain of f, and any y between f(a) and f(b), there is some c between a and b with f(c) = y. The intermediate value theorem says that every continuous function is a Darboux function. However, not every Darboux function is continuous; i.e., the converse of the intermediate value theorem is false. - -As an example, take the function f : [0, ∞) → [−1, 1] defined by f(x) = sin(1/x) for x > 0 and f(0) = 0. This function is not continuous at x = 0 because the limit of f(x) as x tends to 0 does not exist; yet the function has the intermediate value property. Another, more complicated example is given by the Conway base 13 function. - -In fact, Darboux's theorem states that all functions that result from the differentiation of some other function on some interval have the intermediate value property (even though they need not be continuous). - -Historically, this intermediate value property has been suggested as a definition for continuity of real-valued functions; this definition was not adopted. - -A similar result is the Borsuk–Ulam theorem, which says that a continuous map from the $n$-sphere to Euclidean $n$-space will always map some pair of antipodal points to the same place. 
- -In general, for any continuous function whose domain is some closed convex $n$-dimensional shape and any point inside the shape (not necessarily its center), there exist two antipodal points with respect to the given point whose functional value is the same. - -The theorem also underpins the explanation of why rotating a wobbly table will bring it to stability (subject to certain easily met constraints). diff --git a/wiki/wikipedia/3371.txt b/wiki/wikipedia/3371.txt deleted file mode 100644 index 94ddefd64de105e6fb0a2a2da208321a2e7540b4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3371.txt +++ /dev/null @@ -1,67 +0,0 @@ -Instant Insanity is the name given by Parker Brothers to their 1967 version of a puzzle which has existed since antiquity, and which has been marketed by many toy and puzzle makers under a variety of names, including: Devil's Dice (Pressman); DamBlocks (Schaper); Logi-Qubes (Schaeffer); Logi Cubes (ThinkinGames); Daffy Dots (Reiss); Those Blocks (Austin); PsykoNosis (A to Z Ideas), and many others. - -The puzzle consists of four cubes with faces colored with four colors (commonly red, blue, green, and white). The objective of the puzzle is to stack these cubes in a column so that each side of the stack (front, back, left, and right) shows each of the four colors. The distribution of colors on each cube is unique. - -This problem has a graph-theoretic solution in which a graph with four vertices labeled B, G, R, W (for blue, green, red, and white) can be used to represent each cube; there is an edge between two vertices if the two colors are on the opposite sides of the cube, and a loop at a vertex if the opposite sides have the same color. Trial and error is a slow way to solve this problem, as there are 331,776 possible arrangements of the four cubes (6 faces × 4 rotations = 24 orientations of each cube, and 24^4 = 331,776 for four cubes). Moreover, each solution comes with 8 symmetric variants: flipping all four cubes forward yields another valid solution (a move that can be applied 4 times), and each of those can be combined with rotating every cube 180 degrees around its vertical axis, which gives 8 symmetries in total. The chance of randomly tossing the cubes into a solution is therefore 1 in 331,776 / 8 = 41,472. The puzzle is studied by D. E. Knuth in an article on estimating the running time of exhaustive search procedures with backtracking. - -Every position of the puzzle can be solved in eight moves or fewer. - -The first known patented version of the puzzle was created by Frederick A. Schossow in 1900, and marketed as the Katzenjammer puzzle. The puzzle was recreated by Franz Owen Armbruster, also known as Frank Armbruster, and independently published by Parker Brothers and Pressman, in 1967. Over 12 million puzzles were sold by Parker Brothers alone. The puzzle is similar or identical to numerous other puzzles (e.g., The Great Tantalizer, circa 1940, the most popular name prior to Instant Insanity). - -One version of the puzzle is currently being marketed by Winning Moves Games USA. - -Given the already colored cubes, with the four distinct colors (Red, Green, Blue, White), we will try to generate a graph which gives a clear picture of all the positions of colors in all the cubes. The resulting graph will contain four vertices, one for each color, and we will number each edge from one through four (one number for each cube), as in the sketch below. 
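A minimal Python sketch of this construction (added here for illustration; the cube colorings below are hypothetical, chosen only to show the data layout, with each cube given as its three pairs of opposite faces):

```python
# Build the opposite-face multigraph: vertices are colors, and each cube
# contributes one numbered edge per pair of opposite faces.
cubes = [
    [("R", "G"), ("B", "W"), ("R", "W")],   # hypothetical cube 1
    [("G", "G"), ("R", "B"), ("W", "B")],   # hypothetical cube 2
    [("B", "R"), ("W", "G"), ("G", "R")],   # hypothetical cube 3
    [("W", "B"), ("G", "R"), ("B", "W")],   # hypothetical cube 4
]

edges = []                                   # (color1, color2, cube number)
for number, cube in enumerate(cubes, start=1):
    for c1, c2 in cube:
        edges.append((c1, c2, number))       # a loop when c1 == c2

for c1, c2, number in edges:
    print(f"cube {number}: {c1} opposite {c2}")
```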
If an edge connects two vertices (say, Red and Green) and the number of the edge is three, then it means that the third cube has Red and Green faces opposite to each other. - -To find a solution to this problem we need the arrangement of four faces of each of the cubes. To represent the information of two opposite faces of all the four cubes we need a directed subgraph instead of an undirected one, because two directions can only represent two opposite faces, but not whether a face should be at the front or at the back. - -So if we have two directed subgraphs, we can actually represent all the four faces (which matter) of all the four cubes. - -* The first directed subgraph will represent the front and rear faces. - -* The second directed subgraph will represent the left and right faces. - -We cannot select just any two subgraphs, so what are the criteria for selecting them? We need to choose subgraphs such that: - -# the two subgraphs have no edges in common, because a common edge would mean that at least one cube has two pairs of opposite faces with exactly the same colors; that is, if a cube has Red and Blue as its front and rear faces, then the same would be true for its left and right faces; - -# a subgraph contains only one edge from each cube, because the subgraph has to account for all the cubes and one edge can completely represent a pair of opposite faces; - -# a subgraph can contain only vertices of degree two, because degree two means a color can be present on the faces of only two cubes. An easy way to understand this is that there are eight faces to be divided equally among four colors, so two per color. - -After understanding these restrictions, if we try to derive the two subgraphs, we may end up with one possible set as shown in Image 3. Each edge color represents a cube. - -From the first subgraph we will derive the front and the rear face colors of the corresponding cube. E.g.: - -# The black arrow from Yellow to Blue says that the first cube will have Yellow in the front face and Blue at the rear. - -# The blue arrow from Green to Yellow says that the second cube will have Green in the front face and Yellow at the rear. - -# The orange arrow from Blue to Red says that the third cube will have Blue in the front face and Red at the rear. - -# The purple arrow from Red to Green says that the fourth cube will have Red in the front face and Green at the rear. - -From the second subgraph we will derive the left and the right face colors of the corresponding cube. E.g.: - -# The black arrow from Red to Green says that the first cube will have Red in the left face and Green at the right. - -# The blue arrow from Blue to Red says that the second cube will have Blue in the left face and Red at the right. - -# The orange arrow from Yellow to Blue says that the third cube will have Yellow in the left face and Blue at the right. - -# The purple arrow from Green to Yellow says that the fourth cube will have Green in the left face and Yellow at the right. - -The third image shows the derived stack of cubes, which is the solution to the problem. - -It is important to note that: - -# You can arbitrarily label the cubes, as any one solution will render 23 more by swapping the positions of the cubes but not changing their configurations. - -# The two directed subgraphs can represent front-to-back and left-to-right interchangeably, i.e. either one of them can represent front-to-rear or left-to-right. This is because any one solution also renders 3 more just by rotating. 
Adding the effect in note 1, we generate 95 more solutions by providing only one. To put it into perspective, such four cubes can generate 24^3 × 3 = 41,472 configurations. - -# It is not important to take notice of the top and the bottom of the stack of cubes. - -Given n cubes, with the faces of each cube coloured with one of n colours, determining if it is possible to stack the cubes so that each colour appears exactly once on each of the 4 sides of the stack is NP-complete. - -The cube stacking game is a two-player game version of this puzzle. Given an ordered list of cubes, the players take turns adding the next cube to the top of a growing stack of cubes. The loser is the first player to add a cube that causes one of the four sides of the stack to have a color repeated more than once. Robertson and Munro proved that this game is PSPACE-complete, which illustrates the observation that NP-complete puzzles tend to lead to PSPACE-complete games. diff --git a/wiki/wikipedia/3372.txt b/wiki/wikipedia/3372.txt deleted file mode 100644 index 94a3c350e46ee5d48968d0069e09c040d05bd19c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3372.txt +++ /dev/null @@ -1,3 +0,0 @@ -In algebraic topology, Hilton's theorem, proved by Peter Hilton, states that the loop space of a wedge of spheres is homotopy-equivalent to a product of loop spaces of spheres. - -Milnor showed more generally that the loop space of the suspension of a wedge of spaces can be written as an infinite product of loop spaces of suspensions of smash products. diff --git a/wiki/wikipedia/3373.txt b/wiki/wikipedia/3373.txt deleted file mode 100644 index c3ac9df5139734d25b3bf36f001e68f7e8ffea38..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3373.txt +++ /dev/null @@ -1,21 +0,0 @@ -Straight-through processing (STP) is the automated processing of a transaction, such as a payment, from initiation to completion without manual intervention. - -Payments may be non-STP for various reasons, such as missing information, information that is not in a machine-"understandable" form (such as a name and address rather than a code, or human-readable instructions like "Please credit urgently"), or because the payment simply falls outside of the rules for which the bank allows automatic processing (for example, payments of large value or in exotic currencies). - -Traditionally, making payments involves many departments in a bank. Both initiating a payment to be sent and processing a received payment may take days. In the past, payments were initiated through numerous "human-friendly" (aka paper-based) means, such as in person through a paper order, over the phone, or via fax. The payment order was input into the bank's payment system, then confirmed by a supervisor. This confirmation ensured an accurate match of input versus order before sending the payment. When multiple banks, countries or currencies were involved, the process often took several hours. When the complexity of the payment was higher, the amount of labor increased and additional human intervention resulted in more risk of errors, longer processing time, and higher costs. - -In most cases, banks levy charges for non-STP payments or for manual repairs. Alternatively, banks may not charge on a "per repair" basis, but rather levy heavier fees for correspondents that provide lower quality (lower STP) payments. - -The process before STP was very antiquated: sales traders would have to fill in a deal ticket, blue for buy and red for sell. The order was invariably scribbled and mostly unreadable. Upon receiving the order, the trader would often execute an incorrect investment on the market. 
A runner would then pick up the ticket and input the order into the system so that a contract note could be sent out, and errors could compound at each step: if the client wished to purchase 100,000 shares but the trader only executed 10,000, the runner might send out the contract for 1,000. In those days there was T+10 settlement, so any errors were "fixable". However, with the introduction of T+5 settlement, the landscape changed, and STP was born. To reduce the exposure to risk from failed settlement, there could be only one "golden source" of information, and it was the responsibility of the sales trader to be correct, as they had the power to resolve any discrepancies with the client directly. - -The goal of STP is to reduce the time it takes to process a transaction, in order to increase the likelihood that a contract or an agreement is settled on time. The concept has also been transferred into other sectors including energy (oil, gas) trading and banking, and financial planning. - -Currently, the entire trade lifecycle, from initiation to settlement, is a complex labyrinth of manual processes that take several days. Such processing for equities transactions is commonly referred to as T+2 (trade date plus two days) processing, as it usually takes two business days from the "trade" being executed to the trade being settled. This means investors who are selling a security must deliver the certificate within two business days, and investors who are buying securities must send payment within two business days. But this process comes with higher risks through the occurrence of unsettled trades. Market conditions fluctuate, meaning a two-day window brings an inherent risk of unexpected losses, leaving investors possibly unable to pay for, or settle, their transactions. - -Industry practitioners, particularly in the United States, viewed STP as meaning "same-day" settlement or faster, ideally minutes or even seconds. The goal was to minimise settlement risk by having the execution of a trade and its settlement and clearing occur simultaneously. However, for this to be achieved, multiple market participants must realize high levels of STP. In particular, transaction data would need to be made available on a just-in-time basis, which is a considerably harder goal to achieve for the financial services community than the application of STP alone. After all, STP itself is merely an efficient use of computers for transaction processing. - -In the past, STP methods were used to help financial market firms move to one-day trade settlement of equity transactions, as well as to meet the global demand resulting from the rapid growth of online trading. Now the concepts of STP are applied to reduce systemic and operational risk and to improve certainty of settlement and minimize operational costs. - -There is often confusion within the trading world between STP and an electronic communication network (ECN). Although they are similar initiatives, an ECN connects orders with those of other traders as well as those of liquidity providers. An ECN is also typically a bigger pool of orders than a standard STP. - -When fully implemented, STP is able to provide asset managers, brokers and dealers, custodians, banks and other financial services firms with benefits including shorter processing cycles, reduced settlement risk, and lower operating costs. Some industry analysts believe that STP is not an achievable goal in the sense that firms are unlikely to find the cost/benefit to reach 100% automation. 
Instead, they promote the idea of improving levels of internal STP within a firm while encouraging groups of firms to work together to improve the quality of the automation of transaction information between themselves, either bilaterally or as a community of users (external STP). Other analysts, however, believe that STP will be achieved with the emergence of business process interoperability. diff --git a/wiki/wikipedia/3374.txt b/wiki/wikipedia/3374.txt deleted file mode 100644 index a0242781d4a80a007e47d7f2160fd187b4b7cad5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3374.txt +++ /dev/null @@ -1,47 +0,0 @@ -In computer science, the largest differencing method is an algorithm for solving the partition problem and the multiway number partitioning. It is also called the Karmarkar–Karp algorithm after its inventors, Narendra Karmarkar and Richard M. Karp. It is often abbreviated as LDM. - -The input to the algorithm is a set S of numbers, and a parameter k. The required output is a partition of S into k subsets, such that the sums in the subsets are as nearly equal as possible. The main steps of the algorithm are: - -# Order the numbers from large to small. - -# Replace the largest and second-largest numbers by their difference. - -# If two or more numbers remain, return to step 1. - -# Using backtracking, compute the partition. - -For k=2, the main step (2) works as follows. - -* Take the two largest numbers in S, remove them from S, and insert their difference (this represents a decision to put each of these numbers in a different subset). - -* Proceed in this way until a single number remains. This single number is the difference in sums between the two subsets. - -For example, if S = {8,7,6,5,4}, then the resulting difference-sets are 6,5,4,1, then 4,1,1, then 3,1, then 2. - -Step 4 constructs the subsets in the partition by backtracking. The last step corresponds to {2},{}. Then 2 is replaced by 3 in one set and 1 in the other set: {3},{1}, then {4},{1,1}, then {4,5}, {1,6}, then {4,7,5}, {8,6}, where the sum-difference is indeed 2. - -The runtime complexity of this algorithm is dominated by step 1 (sorting), which takes O(n log n). - -Note that this partition is not optimal: in the partition {8,7}, {6,5,4} the sum-difference is 0. However, there is evidence that it provides a "good" partition: - -* If the numbers are uniformly distributed in [0,1], then the expected difference between the two sums is $n^{-\Theta(\log n)}$. This also implies that the expected ratio between the maximum sum and the optimal maximum sum is $1+n^{-\Theta(\log n)}$. - -For any k ≥ 2, the algorithm can be generalized in the following way. - -# Order the numbers from large to small. - -# Replace numbers #1 and #2 by their difference; #3 and #4 by their difference; etc. - -# Sort the list of n/2 differences from large to small. - -# Assign each pair in turn to different sets: the largest in the pair to the set with the smallest sum, and the smallest in the pair to the set with the largest sum. - -For two-way partitioning, when inputs are uniformly-distributed random variables, the expected difference between largest and smallest sum is $O(\log{n}/n^2)$. - -BLDM (Balanced Largest Differencing Method) works in this paired fashion, which guarantees that the resulting subsets have (nearly) equal cardinalities. - -The complete Karmarkar–Karp algorithm (CKK) extends the differencing method to an exact algorithm by searching, at each step, both options: placing the two largest numbers in different subsets (replacing them by their difference) or in the same subset (replacing them by their sum). CKK can also run as an anytime algorithm: it finds the KK solution first, and then finds progressively better solutions as time allows (possibly requiring exponential time to reach optimality, for the worst instances). 
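The following Python sketch (an illustration added here, not from the original description) implements the two-way differencing method. Instead of a separate backtracking pass, each working item carries the two partial subsets it represents, so the final item directly yields the partition:

```python
def karmarkar_karp(numbers):
    """Two-way largest differencing method (LDM / Karmarkar-Karp heuristic).

    Returns (difference, subset_a, subset_b).
    """
    # Each item is (value, side_a, side_b). Committing the two largest numbers
    # to different subsets is recorded by cross-merging their partial partitions.
    items = [(x, [x], []) for x in numbers]
    while len(items) > 1:
        items.sort(key=lambda t: t[0])          # ascending; the two largest at the end
        x_val, x_a, x_b = items.pop()
        y_val, y_a, y_b = items.pop()
        # x goes opposite to y: x's A-side joins y's B-side, and vice versa.
        items.append((x_val - y_val, x_a + y_b, x_b + y_a))
    diff, side_a, side_b = items[0]
    return diff, side_a, side_b

print(karmarkar_karp([8, 7, 6, 5, 4]))
# -> (2, [4, 5, 7], [6, 8])  (sums 16 and 14; difference 2, matching the example)
```

As in the text, the final single value is the difference in sums between the two subsets, and the example input reproduces the partition {4,7,5}, {8,6} derived above.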
- -Combining CKK with the balanced-LDM algorithm (BLDM) yields a complete anytime algorithm for solving the balanced partition problem. - -An algorithm equivalent to the Karmarkar–Karp differencing heuristic is mentioned in ancient Jewish legal texts by Nachmanides and Joseph ibn Habib. The algorithm is used to combine different testimonies about the same loan. diff --git a/wiki/wikipedia/3375.txt b/wiki/wikipedia/3375.txt deleted file mode 100644 index 3e0b25173fdfb754b120c5149f528d46015f4a9c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3375.txt +++ /dev/null @@ -1,45 +0,0 @@ -In combinatorics, Hall-type theorems for hypergraphs are several generalizations of Hall's marriage theorem from graphs to hypergraphs. Such theorems were proved by Ofra Kessler, Ron Aharoni, Penny Haxell, Roy Meshulam, and others. The matching width of a hypergraph H, denoted mw(H), is the maximum, over all matchings M in H, of the minimum size of a subset of E that pins M. - -Let V be a set of vertices. Let C be an abstract simplicial complex on V. Let Vy (for y in Y) be subsets of V. A C-V-transversal is a set in C (an element of C) whose intersection with each Vy contains exactly one vertex. For every subset Y0 of Y, let $V_{Y_0} := \bigcup_{y\in Y_0} V_y$ . Suppose that, for every subset Y0 of Y, the homological connectivity plus 2 of the sub-complex induced by $V_{Y_0}$ is at least |Y0|, that is:
    $\eta_H(C[V_{Y_0}]) \geq |Y_0|$.
    Then there exists a C-V-transversal. That is: there is a set in C that intersects each Vy in exactly one element. - -We consider several bipartite hypergraphs with Y = {1, 2} and X = {A, B; a, b, c}. The Meshulam condition trivially holds for the empty set. It holds for subsets of size 1 iff the neighbor-graph of each vertex in Y is non-empty (so it requires at least one explosion to destroy), which is easy to check. It remains to check the subset Y itself. - -#H = { {1,A,a}; {2,B,b}; {2,B,c} }. Here NH(Y) = { {A,a}, {B,b}, {B,c} }. The graph L(NH(Y)) has three vertices: Aa, Bb and Bc. Only the last two are connected; the vertex Aa is isolated. Hence, Ψ(L(NH(Y)))=∞. Indeed, H admits a Y-perfect matching, e.g. { {1,A,a}; {2,B,b} }. - -#H = { {1,A,a}; {1,B,b}; {2,A,b}; {2,B,a} }. Here L(NH(Y)) has four vertices: Aa, Bb, Ab, Ba, and four edges: {Aa,Ab}, {Aa,Ba}, {Bb,Ba}, {Bb,Ab}. For any edge that CON offers, NON can explode it and destroy all vertices. Hence, Ψ(L(NH(Y)))=1. Indeed, H does not admit a Y-perfect matching. - -#H = { {1,A,a}, {1,A,b}; {1,B,a}, {1,B,b}; {2,A,a}, {2,A,b}; {2,B,a}, {2,B,b} }. Here NH(Y) is the same as in the previous example, so Meshulam's sufficient condition is violated. However, this H does admit a Y-perfect matching, e.g. { {1,A,a}; {2,B,b} }, which shows that this condition is not necessary. - -No necessary-and-sufficient condition using Ψ is known. - -A rainbow matching is a matching in a simple graph, in which each edge has a different "color". By treating the colors as vertices in the set Y, one can see that a rainbow matching is in fact a matching in a bipartite hypergraph. Thus, several sufficient conditions for the existence of a large rainbow matching can be translated to conditions for the existence of a large matching in a hypergraph. - -The following results pertain to tripartite hypergraphs in which each of the 3 parts contains exactly n vertices, the degree of each vertex is exactly n, and the set of neighbors of every vertex is a matching (henceforth "n-tripartite-hypergraph"): - -* Every n-tripartite-hypergraph has a matching of size 2n/3. - -* Every n-tripartite-hypergraph has a matching of size $n - \sqrt{n}$. - -* Every n-tripartite-hypergraph has a matching of size $n - 11\log_2^2(n)$. - -*H. J. Ryser conjectured that, when n is odd, every n-tripartite-hypergraph has a matching of size n. - -* S. K. Stein and Brualdi conjectured that, when n is even, every n-tripartite-hypergraph has a matching of size n-1. (It is known that a matching of size n might not exist in this case.) - -* A more general conjecture of Stein is that a matching of size n-1 exists even without requiring that the set of neighbors of every vertex in Y is a matching. The bound 2n-1 is the best possible: if |Y|=2n-2, then the maximum matching may be of size n-1. - -* Any bipartite hypergraph (X+Y, E) in which |Y|=3n-2, the degree of each vertex y in Y is n, and the neighbor-set of y is a matching, has a matching of size n. It is not known whether this is the best possible. For even n, it is only known that 2n is required; for odd n, it is only known that 2n-1 is required. - -* Rainbow independent set - -A balanced hypergraph is an alternative generalization of a bipartite graph: it is a hypergraph in which every odd cycle C of H has an edge containing at least three vertices of C. - -Let H = (V, E) be a balanced hypergraph. The following are equivalent: - -*H admits a perfect matching (i.e., a matching in which each vertex is matched). 
- -* For all disjoint vertex-sets V1, V2, if |V1| > |V2|, then there exists an edge e in E such that $|e\cap V_1| > |e\cap V_2|$ (equivalently: if $|e\cap V_2| \geq |e\cap V_1|$ for all edges e in E, then |V2| ≥ |V1|). - -A simple graph is bipartite iff it is balanced (it contains no odd cycles and no edges with three vertices). - -Let G = (X+Y, E) be a bipartite graph. Let X0 be a subset of X and Y0 a subset of Y. The condition "$|e\cap X_0| \geq |e\cap Y_0|$ for all edges e in E" means that X0 contains all the neighbors of vertices of Y0. Hence, the CCKV condition becomes:
    "If a subset X0 of X contains the set NH(Y0), then |X0| ≥ |Y0|".
    This is equivalent to Hall's condition. diff --git a/wiki/wikipedia/3376.txt b/wiki/wikipedia/3376.txt deleted file mode 100644 index b5fcfb72b5c299f878800c161603f8af84e5da3b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3376.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, the Browder–Minty theorem (sometimes called the Minty–Browder theorem) states that a bounded, continuous, coercive and monotone function T from a real, separable reflexive Banach space X into its continuous dual space X∗ is automatically surjective. That is, for each continuous linear functional g ∈ X∗, there exists a solution u ∈ X of the equation T(u) = g. (Note that T itself is not required to be a linear map.) - -The theorem is named in honor of Felix Browder and George J. Minty, who independently proved it. diff --git a/wiki/wikipedia/3377.txt b/wiki/wikipedia/3377.txt deleted file mode 100644 index 1b8027b6c99f27bd7fa7ecb7caff0e45088ba67d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3377.txt +++ /dev/null @@ -1,150 +0,0 @@ -In mathematics, specifically functional analysis, Mercer's theorem is a representation of a symmetric positive-definite function on a square as a sum of a convergent sequence of product functions. This theorem, presented in (Mercer 1909), is one of the most notable results of the work of James Mercer (1883–1932). It is an important theoretical tool in the theory of integral equations; it is used in the Hilbert space theory of stochastic processes, for example the Karhunen–Loève theorem; and it is also used to characterize a symmetric positive semi-definite kernel. - -To explain Mercer's theorem, we first consider an important special case; see below for a more general formulation. - -A kernel, in this context, is a symmetric continuous function -$$ - K: [a,b] \times [a,b] \rightarrow \mathbb{R} -$$ - -where symmetric means that $K(x,y) = K(y,x)$ for all $x,y \in [a,b]$. - -K is said to be non-negative definite (or positive semidefinite) if and only if -$$ - \sum_{i=1}^n\sum_{j=1}^n K(x_i, x_j) c_i c_j \geq 0 -$$ - -for all finite sequences of points x1, ..., xn of [a, b] and all choices of real numbers c1, ..., cn (cf. positive-definite kernel). - -Associated to K is a linear operator (more specifically a Hilbert–Schmidt integral operator) on functions defined by the integral -$$ - [T_K \varphi](x) =\int_a^b K(x,s) \varphi(s) ds. -$$ - -For technical considerations we assume $\varphi$ can range through the space - -L2[a, b] (see Lp space) of square-integrable real-valued functions. - -Since TK is a linear operator, we can talk about eigenvalues and eigenfunctions of TK. - -Theorem. Suppose K is a continuous symmetric non-negative definite kernel. Then there is an orthonormal basis - -{ei}i of L2[a, b] consisting of eigenfunctions of TK such that the corresponding - -sequence of eigenvalues {λi}i is nonnegative. The eigenfunctions corresponding to non-zero eigenvalues are continuous on [a, b] and K has the representation -$$ - K(s,t) = \sum_{j=1}^\infty \lambda_j e_j(s) e_j(t) -$$ - -where the convergence is absolute and uniform. - -We now explain in greater detail the structure of the proof of - -Mercer's theorem, particularly how it relates to spectral theory of compact operators. - -* The map K → TK is injective. - -* TK is a non-negative symmetric compact operator on L2[a,b]; moreover K(x, x) ≥ 0.
- -To show compactness, show that the image of the unit ball of L2[a,b] under TK is equicontinuous and apply Ascoli's theorem to conclude that the image of the unit ball is relatively compact in C([a,b]) with the uniform norm, and a fortiori in L2[a,b]. - -Now apply the spectral theorem for compact operators on Hilbert - -spaces to TK to show the existence of the - -orthonormal basis {ei}i of - -L2[a,b] satisfying -$$ - \lambda_i e_i(t)= [T_K e_i](t) = \int_a^b K(t,s) e_i(s) ds. -$$ - -If λi ≠ 0, the eigenvector (eigenfunction) ei is seen to be continuous on [a,b]. Now -$$ - \sum_{i=1}^\infty \lambda_i |e_i(t) e_i(s)| \leq \sup_{x \in [a,b]} |K(x,x)|, -$$ - -which shows that the sequence -$$ - \sum_{i=1}^\infty \lambda_i e_i(t) e_i(s) -$$ - -converges absolutely and uniformly to a kernel K0 which is easily seen to define the same operator as the kernel K. Hence K=K0 from which Mercer's theorem follows. - -Finally, to show non-negativity of the eigenvalues, one can write $\lambda \langle f,f \rangle= \langle f, T_{K}f \rangle$ and express the right-hand side as an integral that is well approximated by its Riemann sums; these are non-negative - -by the positive-definiteness of K, implying $\lambda \langle f,f \rangle \geq 0$ and hence $\lambda \geq 0 $. - -The following is immediate: - -Theorem. Suppose K is a continuous symmetric non-negative definite kernel; TK has a sequence of nonnegative - -eigenvalues {λi}i. Then -$$ - \int_a^b K(t,t) dt = \sum_i \lambda_i. -$$ - -This shows that the operator TK is a trace class operator and -$$ - \operatorname{trace}(T_K) = \int_a^b K(t,t) dt. -$$ - -Mercer's theorem itself is a generalization of the result that any symmetric positive-semidefinite matrix is the Gramian matrix of a set of vectors. - -The first generalization replaces the interval [a, b] with any compact Hausdorff space X; Lebesgue measure on [a, b] is replaced by a finite countably additive measure μ on the Borel algebra of X whose support is X. This means that μ(U) > 0 for any nonempty open subset U of X. - -A recent generalization replaces these conditions by the following: the set X is a first-countable topological space endowed with a Borel (complete) measure μ. X is the support of μ and, for all x in X, there is an open set U containing x and having finite measure. Then essentially the same result holds: - -Theorem. Suppose K is a continuous symmetric positive-definite kernel on X. If the function κ is L1μ(X), where κ(x)=K(x,x), for all x in X, then there is an orthonormal set - -{ei}i of L2μ(X) consisting of eigenfunctions of TK such that the corresponding - -sequence of eigenvalues {λi}i is nonnegative. The eigenfunctions corresponding to non-zero eigenvalues are continuous on X and K has the representation -$$ - K(s,t) = \sum_{j=1}^\infty \lambda_j e_j(s) e_j(t) -$$ - -where the convergence is absolute and uniform on compact subsets of X. - -The next generalization deals with representations of measurable kernels. - -Let (X, M, μ) be a σ-finite measure space. An L2 (or square-integrable) kernel on X is a function -$$ - K \in L^2_{\mu \otimes \mu}(X \times X). -$$ - -L2 kernels define a bounded operator TK by the formula -$$ - \langle T_K \varphi, \psi \rangle = \int_{X \times X} K(y,x) \varphi(y) \psi(x) d[\mu \otimes \mu](y,x). -$$ - -TK is a compact operator (actually it is even a Hilbert–Schmidt operator). If the kernel K is symmetric, by the spectral theorem, TK has an orthonormal basis of eigenvectors.
Those eigenvectors that correspond to non-zero eigenvalues can be arranged in a sequence {ei}i (regardless of separability). - -Theorem. If K is a symmetric positive-definite kernel on (X, M, μ), then -$$ - K(y,x) = \sum_{i \in \mathbb{N}} \lambda_i e_i(y) e_i(x) -$$ - -where the convergence is in the L2 norm. Note that when continuity of the kernel is not assumed, the expansion no longer converges uniformly. - -In mathematics, a real-valued function K(x,y) is said to fulfill Mercer's condition if for all square-integrable functions g(x) one has -$$ - \iint g(x)K(x,y)g(y)dxdy \geq 0. -$$ - -This is analogous to the definition of a positive-semidefinite matrix. This is a matrix $K$ of dimension $N$, which satisfies, for all vectors $g$, the property -$$ -(g,Kg)=g^{T}{\cdot}Kg=\sum_{i=1}^N\sum_{j=1}^Ng_iK_{ij}g_j\geq0 -$$. - -A positive constant function -$$ -K(x, y)=c -$$ - -satisfies Mercer's condition, as the integral then becomes, by Fubini's theorem, -$$ - \iint g(x)cg(y)dx dy = c\int\! g(x) dx \int\! g(y) dy = c\left(\int\! g(x) dx\right)^2 -$$ - -which is indeed non-negative. diff --git a/wiki/wikipedia/3378.txt b/wiki/wikipedia/3378.txt deleted file mode 100644 index ec9db192c622bdb7203d48c3e63817f6b9060719..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3378.txt +++ /dev/null @@ -1,38 +0,0 @@ -The modularity theorem (formerly called the Taniyama–Shimura conjecture, Taniyama–Weil conjecture or modularity conjecture for elliptic curves) states that elliptic curves over the field of rational numbers are related to modular forms. Andrew Wiles proved the modularity theorem for semistable elliptic curves, which was enough to imply Fermat's Last Theorem. Later, a series of papers by Wiles's former students Brian Conrad, Fred Diamond and Richard Taylor, culminating in a joint paper with Christophe Breuil, extended Wiles's techniques to prove the full modularity theorem in 2001. - -The theorem states that any elliptic curve over $\mathbf{Q}$ can be obtained via a rational map with integer coefficients from the classical modular curve $X_0(N)$ for some integer $N$; this is a curve with integer coefficients with an explicit definition. This mapping is called a modular parametrization of level $N$. If $N$ is the smallest integer for which such a parametrization can be found (which by the modularity theorem itself is now known to be a number called the conductor), then the parametrization may be defined in terms of a mapping generated by a particular kind of modular form of weight two and level $N$, a normalized newform with integer $q$-expansion, followed if need be by an isogeny. - -The modularity theorem implies a closely related analytic statement: - -To each elliptic curve E over $\mathbf{Q}$ we may attach a corresponding L-series. The $L$-series is a Dirichlet series, commonly written -$$ -L(E, s) = \sum_{n=1}^\infty \frac{a_n}{n^s}. -$$ - -The generating function of the coefficients $a_n$ is then -$$ -f(E, q) = \sum_{n=1}^\infty a_n q^n. -$$ - -If we make the substitution -$$ -q = e^{2 \pi i \tau} -$$ - -we see that we have written the Fourier expansion of a function $f(E, \tau)$ of the complex variable $\tau$, so the coefficients of the $q$-series are also thought of as the Fourier coefficients of $f$. The function obtained in this way is, remarkably, a cusp form of weight two and level $N$ and is also an eigenform (an eigenvector of all Hecke operators); this is the Hasse–Weil conjecture, which follows from the modularity theorem.
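The coefficients $a_n$ are concrete objects: for a prime $p$ at which $E$ has good reduction, $a_p = p + 1 - \#E(\mathbf{F}_p)$. As a hedged illustration (the function name and brute-force approach are ours, and point counting this way is only feasible for small $p$), the following sketch computes $a_p$ for a curve in short Weierstrass form $y^2 = x^3 + ax + b$:

```python
def a_p(a, b, p):
    """Compute a_p = p + 1 - #E(F_p) for E: y^2 = x^3 + a*x + b over F_p.

    Brute force, so only sensible for small primes p. Assumes p is an odd
    prime of good reduction (p does not divide -16*(4*a**3 + 27*b**2)).
    """
    # number of square roots of each residue t modulo p
    sqrt_count = {}
    for y in range(p):
        t = (y * y) % p
        sqrt_count[t] = sqrt_count.get(t, 0) + 1
    count = 1  # the point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        count += sqrt_count.get(rhs, 0)
    return p + 1 - count
```

For instance, for the curve $y^2 = x^3 - x$ this gives `a_p(-1, 0, 5) == -2`; by the modularity theorem these counts match the $q$-expansion coefficients of the associated weight-two newform.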
- -Some modular forms of weight two, in turn, correspond to holomorphic differentials for an elliptic curve. The Jacobian of the modular curve can (up to isogeny) be written as a product of irreducible Abelian varieties, corresponding to Hecke eigenforms of weight 2. The 1-dimensional factors are elliptic curves (there can also be higher-dimensional factors, so not all Hecke eigenforms correspond to rational elliptic curves). The curve obtained by finding the corresponding cusp form, and then constructing a curve from it, is isogenous to the original curve (but not, in general, isomorphic to it). - -Yutaka Taniyama stated a preliminary (slightly incorrect) version of the conjecture at the 1955 international symposium on algebraic number theory in Tokyo and Nikkō. Goro Shimura and Taniyama worked on improving its rigor until 1957. André Weil rediscovered the conjecture, and showed that it would follow from the (conjectured) functional equations for some twisted $L$-series of the elliptic curve; this was the first serious evidence that the conjecture might be true. Weil also showed that the conductor of the elliptic curve should be the level of the corresponding modular form. The Taniyama–Shimura–Weil conjecture became a part of the Langlands program. - -The conjecture attracted considerable interest when Gerhard Frey suggested that it implies Fermat's Last Theorem. He did this by attempting to show that any counterexample to Fermat's Last Theorem would imply the existence of at least one non-modular elliptic curve. This argument was completed when Jean-Pierre Serre identified a missing link (now known as the epsilon conjecture or Ribet's theorem) in Frey's original work, followed two years later by Ken Ribet's completion of a proof of the epsilon conjecture. - -Even after gaining serious attention, the Taniyama–Shimura–Weil conjecture was seen by contemporary mathematicians as extraordinarily difficult to prove or perhaps even inaccessible to proof. For example, Wiles's Ph.D. supervisor John Coates states that it seemed "impossible to actually prove", and Ken Ribet considered himself "one of the vast majority of people who believed [it] was completely inaccessible". - -Andrew Wiles, with some help from Richard Taylor, proved the Taniyama–Shimura–Weil conjecture for all semistable elliptic curves, which he used to prove Fermat's Last Theorem. The full Taniyama–Shimura–Weil conjecture was finally proved by Diamond; Conrad, Diamond & Taylor; and Breuil, Conrad, Diamond & Taylor, who, building on Wiles's work, incrementally chipped away at the remaining cases until the full result was proved. - -Once fully proven, the conjecture became known as the modularity theorem. - -Several theorems in number theory similar to Fermat's Last Theorem follow from the modularity theorem. For example: no cube can be written as a sum of two coprime $n$-th powers, $n \geq 3$. (The case $n = 3$ was already known by Euler.) - -The modularity theorem is a special case of more general conjectures due to Robert Langlands. The Langlands program seeks to attach an automorphic form or automorphic representation (a suitable generalization of a modular form) to more general objects of arithmetic algebraic geometry, such as to every elliptic curve over a number field. Most cases of these extended conjectures have not yet been proved. However, Freitas, Le Hung & Siksek proved that elliptic curves defined over real quadratic fields are modular.
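The cube statement is easy to sanity-check numerically; a finite search is of course no proof, and the bounds and names below are arbitrary choices of ours. The sketch looks for coprime $x, y$ with $x^n + y^n$ a perfect cube for small $n \geq 3$:

```python
from math import gcd

def is_cube(m):
    """True iff m >= 1 is a perfect cube (float guess corrected by +/-1)."""
    k = round(m ** (1 / 3))
    return any((k + d) ** 3 == m for d in (-1, 0, 1))

def search(limit=60, max_n=6):
    """Look for coprime x, y >= 1 with x**n + y**n a perfect cube, n >= 3."""
    hits = []
    for n in range(3, max_n + 1):
        for x in range(1, limit):
            for y in range(x, limit):
                if gcd(x, y) == 1 and is_cube(x ** n + y ** n):
                    hits.append((x, y, n))
    return hits

print(search())  # prints [] -- no counterexamples in this small range
```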
diff --git a/wiki/wikipedia/3379.txt b/wiki/wikipedia/3379.txt deleted file mode 100644 index 3f3e88f910a388f5005d51d7d3d92f7eef7b0b07..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3379.txt +++ /dev/null @@ -1,25 +0,0 @@ -A correlation coefficient is a numerical measure of some type of correlation, meaning a statistical relationship between two variables. The variables may be two columns of a given data set of observations, often called a sample, or two components of a multivariate random variable with a known distribution. - -Several types of correlation coefficient exist, each with their own definition and own range of usability and characteristics. They all assume values in the range from −1 to +1, where ±1 indicates the strongest possible relationship and 0 indicates no relationship. As tools of analysis, correlation coefficients present certain problems, including the propensity of some types to be distorted by outliers and the possibility of incorrectly being used to infer a causal relationship between the variables (for more, see Correlation does not imply causation). - -There are several different measures for the degree of correlation in data, depending on the kind of data: principally whether the data is a measurement, ordinal, or categorical. - -The Pearson product-moment correlation coefficient, also known as r, R, or Pearson's r, is a measure of the strength and direction of the linear relationship between two variables that is defined as the covariance of the variables divided by the product of their standard deviations. This is the best-known and most commonly used type of correlation coefficient. When the term "correlation coefficient" is used without further qualification, it usually refers to the Pearson product-moment correlation coefficient. - -Intraclass correlation (ICC) is a descriptive statistic that can be used when quantitative measurements are made on units that are organized into groups; it describes how strongly units in the same group resemble each other. - -Rank correlation is a measure of the relationship between the rankings of two variables, or two rankings of the same variable: - -*Spearman's rank correlation coefficient is a measure of how well the relationship between two variables can be described by a monotonic function. - -*The Kendall tau rank correlation coefficient is a measure of the portion of ranks that match between two data sets. - -*Goodman and Kruskal's gamma is a measure of the strength of association of the cross tabulated data when both variables are measured at the ordinal level. - -The polychoric correlation coefficient measures association between two ordered-categorical variables. It is technically defined as the estimate of the Pearson correlation coefficient one would obtain if: - -# The two variables were measured on a continuous scale, instead of as ordered-category variables. - -# The two continuous variables followed a bivariate normal distribution. - -When both variables are dichotomous instead of ordered-categorical, the polychoric correlation coefficient is called the tetrachoric correlation coefficient.
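As a concrete illustration of the definitions above (a minimal sketch; the function names are ours and ties are not handled), Pearson's r is the covariance divided by the product of the standard deviations, and Spearman's coefficient is Pearson's r applied to the ranks:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation (population form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

def spearman_rho(xs, ys):
    """Spearman's rank correlation: Pearson r applied to the ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)  # no tie handling in this sketch
        return r
    return pearson_r(ranks(xs), ranks(ys))

# A perfectly linear relationship gives r = 1; a monotonic but nonlinear
# one gives rho = 1 while r may be smaller.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))        # 1.0
print(spearman_rho([1, 2, 3, 4], [1, 8, 27, 64]))   # 1.0
```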
diff --git a/wiki/wikipedia/338.txt b/wiki/wikipedia/338.txt deleted file mode 100644 index dfb23d0e9d018e4b28369a41887de30263772d7b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/338.txt +++ /dev/null @@ -1,11 +0,0 @@ -In mathematics and its applications, a parametric family or a parameterized family is a family of objects (a set of related objects) whose differences depend only on the chosen values for a set of parameters. - -Common examples are parametrized (families of) functions, probability distributions, curves, shapes, etc. - -For example, the probability density function fX of a random variable X may depend on a parameter θ. In that case, the function may be denoted $ f_X( \cdot ; \theta) $ to indicate the dependence on the parameter θ. θ is not a formal argument of the function as it is considered to be fixed. However, each different value of the parameter gives a different probability density function. Then the parametric family of densities is the set of functions $ \{ f_X( \cdot ; \theta) \mid \theta \in \Theta \} $, where Θ denotes the parameter space, the set of all possible values that the parameter θ can take. As an example, the normal distribution is a family of similarly-shaped distributions parametrized by their mean and their variance. - -In decision theory, two-moment decision models can be applied when the decision-maker is faced with random variables drawn from a location-scale family of probability distributions. - -In economics, the Cobb–Douglas production function is a family of production functions parametrized by the elasticities of output with respect to the various factors of production. - -In algebra, the quadratic equation, for example, is actually a family of equations parametrized by the coefficients of the variable and of its square and by the constant term. diff --git a/wiki/wikipedia/3380.txt b/wiki/wikipedia/3380.txt deleted file mode 100644 index 3c106eeb82a9c770fa9082b0a32db6bafcf2b97d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3380.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, Siegel's theorem on integral points states that for a smooth algebraic curve C of genus g defined over a number field K, presented in affine space in a given coordinate system, there are only finitely many points on C with coordinates in the ring of integers O of K, provided g > 0. - -The theorem was first proved in 1929 by Carl Ludwig Siegel and was the first major result on Diophantine equations that depended only on the genus and not any special algebraic form of the equations. For g > 1 it was superseded by Faltings's theorem in 1983. - -In 1929, Siegel proved the theorem by combining a version of the Thue–Siegel–Roth theorem, from diophantine approximation, with the Mordell–Weil theorem from diophantine geometry (required in Weil's version, to apply to the Jacobian variety of C). - -In 2002, Umberto Zannier and Pietro Corvaja gave a new proof by using a new method based on the subspace theorem. - -Siegel's result was ineffective (see effective results in number theory), since Thue's method in diophantine approximation also is ineffective in describing possible very good rational approximations to algebraic numbers. Effective results in some cases derive from Baker's method. 
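Siegel's theorem is nicely illustrated by Mordell curves $y^2 = x^3 + k$: each has only finitely many integral points, and that finiteness is exactly what the theorem provides. A brute-force search over a finite range (a sketch with names and bounds of our choosing) recovers the classical solutions of $y^2 = x^3 - 2$:

```python
from math import isqrt

def integral_points(k, bound=10_000):
    """All integral points (x, y) on y^2 = x^3 + k with |x| <= bound."""
    pts = []
    for x in range(-bound, bound + 1):
        t = x ** 3 + k
        if t < 0:
            continue  # y^2 cannot be negative
        y = isqrt(t)
        if y * y == t:
            pts.append((x, y))
            if y:
                pts.append((x, -y))
    return pts

print(integral_points(-2))  # [(3, 5), (3, -5)] -- Fermat's classical solutions
```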
diff --git a/wiki/wikipedia/3381.txt b/wiki/wikipedia/3381.txt deleted file mode 100644 index d163ce29966db55ab0f21a23cf73cf1e5d8ad542..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3381.txt +++ /dev/null @@ -1,11 +0,0 @@ -Welltris is a puzzle video game, developed by Doca and licensed to Bullet-Proof Software. Adaptations were made by Sphere, Inc., for Spectrum Holobyte, and by Infogrames. It was originally released for MS-DOS and Macintosh in 1989. It was subsequently ported to the Amiga, Amstrad CPC and Atari ST in 1990 and the ZX Spectrum and Commodore 64 in 1991. - -Welltris was the first Tetris sequel designed by original designer Alexey Pajitnov, with Andrei Sgenov. It retains that game's falling-block puzzle gameplay but extends the pit into three dimensions while the blocks remain two-dimensional, with the board viewed from above. - -As blocks descend into the well, they can be rotated or moved left or right along the walls, from one wall to another if desired. Once a block reaches the floor, it will slide as far as possible until stopped by an edge or another piece. Whenever the player completes a solid horizontal or vertical line, it disappears and the remaining squares slide to fill the open space. - -If a falling block comes to rest with any part of itself still on a wall, that wall is temporarily frozen; no blocks can be moved onto it during this time. Freezing all four walls ends the game. MacUser reviewed the Macintosh version of Welltris, praising the new playstyle as compared to its predecessor, and stating that "Welltris is both thoughtful and highly addictive." Macworld also reviewed the Mac version, praising its gameplay, music and graphics, summarizing their thoughts by stating "[Welltris] successfully extends the Tetris metaphor; cheery folk music and captivating scenes; very challenging." Macworld criticized the steep learning curve and a point in the game where the speed of the falling pieces becomes unmanageable, referring to the latter as the "one annoying habit" that it shares with Tetris. - -The ZX Spectrum version had mixed reviews, with CRASH awarding 79%, Sinclair User awarding 45% and Your Sinclair giving 79%. The actual gameplay and addictiveness were highlighted as good areas, but criticisms included the fiddly controls and minimal sound and looks. - -The Commodore 64 version, with its more colourful graphics, received 80% from Zzap!64. diff --git a/wiki/wikipedia/3382.txt b/wiki/wikipedia/3382.txt deleted file mode 100644 index f980cafb815b095ba15b7633fce8c3845ae6fb9c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3382.txt +++ /dev/null @@ -1,73 +0,0 @@ -This article lists mathematical identities, that is, identically true relations holding in mathematics.
- -* Bézout's identity (despite its usual name, it is not, properly speaking, an identity) - -* Binomial inverse theorem - -* Binomial identity - -* Brahmagupta–Fibonacci two-square identity - -* Candido's identity - -* Cassini and Catalan identities - -* Degen's eight-square identity - -* Difference of two squares - -* Euler's four-square identity - -* Euler's identity - -* Fibonacci's identity: see Brahmagupta–Fibonacci identity or Cassini and Catalan identities - -* Heine's identity - -* Hermite's identity - -* Lagrange's identity - -* Lagrange's trigonometric identities - -* MacWilliams identity - -* Matrix determinant lemma - -* Newton's identity - -* Parseval's identity - -* Pfister's sixteen-square identity - -* Sherman–Morrison formula - -* Sophie Germain identity - -* Sun's curious identity - -* Sylvester's determinant identity - -* Vandermonde's identity - -* Woodbury matrix identity - -* Exterior calculus identities - -* Fibonacci identities: Combinatorial Fibonacci identities and Other Fibonacci identities - -* Hypergeometric function identities - -* List of integrals of logarithmic functions - -* List of topics related to pi - -* List of set identities and relations - -* Logarithmic identities - -* Summation identities - -* List of trigonometric identities - -* Vector calculus identities diff --git a/wiki/wikipedia/3383.txt b/wiki/wikipedia/3383.txt deleted file mode 100644 index 61622f60b80df6994fdb079f91498f6fc1310655..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3383.txt +++ /dev/null @@ -1,31 +0,0 @@ -In mathematics, the Calderón–Zygmund lemma is a fundamental result in Fourier analysis, harmonic analysis, and singular integrals. It is named for the mathematicians Alberto Calderón and Antoni Zygmund. - -Given an integrable function  f  : Rd → C, where Rd denotes Euclidean space and C denotes the complex numbers, the lemma gives a precise way of partitioning Rd into two sets: one where  f  is essentially small; the other a countable collection of cubes where  f  is essentially large, but where some control of the function is retained. - -This leads to the associated Calderón–Zygmund decomposition of  f , wherein  f  is written as the sum of "good" and "bad" functions, using the above sets. - -
    Let  f  : Rd → C be integrable and α be a positive constant. Then there exists an open set Ω such that: - -(1) Ω is a disjoint union of open cubes, Ω = ∪k Qk, such that for each Qk, -$$ -\alpha\le \frac{1}{m(Q_k)} \int_{Q_k} |f(x)| dx \leq 2^d \alpha. -$$ - -(2) $|f(x)| \leq \alpha$ almost everywhere in the complement F of Ω.
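The selection of the cubes Qk can be viewed as a stopping-time algorithm on dyadic cubes. The following one-dimensional sketch (illustrative names, and a numerical average in place of the exact integral) bisects repeatedly, starting from an interval on which the average of |f| is at most α, and selects an interval as soon as its average exceeds α; the parent's bound then forces the average to lie between α and 2α (i.e., $2^d\alpha$ with d = 1):

```python
import numpy as np

def cz_cubes(f, alpha, lo, hi, depth=0, max_depth=16):
    """Dyadic stopping-time selection of Calderon-Zygmund intervals in 1D.

    Assumes the average of |f| over the starting interval [lo, hi] is <= alpha.
    Returns intervals on which alpha < average(|f|) <= 2 * alpha.
    """
    mid = 0.5 * (lo + hi)
    selected = []
    for a, b in ((lo, mid), (mid, hi)):
        xs = np.linspace(a, b, 129)
        avg = np.trapz(np.abs(f(xs)), xs) / (b - a)  # numerical average of |f|
        if avg > alpha:
            # the parent's average was <= alpha, so avg <= 2 * alpha: select
            selected.append((a, b))
        elif depth < max_depth:
            selected.extend(cz_cubes(f, alpha, a, b, depth + 1, max_depth))
    return selected

# Example: a tall narrow spike whose overall average is still below alpha;
# the selected intervals cluster around the spike.
f = lambda x: 30.0 * (np.abs(x - 0.5) < 0.01)
print(cz_cubes(f, alpha=2.0, lo=0.0, hi=1.0))
```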
    - -
    Given  f  as above, we may write  f  as the sum of a "good" function g and a "bad" function b,  f  = g + b. To do this, we define -$$ -g(x) = \begin{cases}f(x), & x \in F, \\ \frac{1}{m(Q_j)}\int_{Q_j}f(t)dt, & x \in Q_j,\end{cases} -$$ - -and let b =  f  − g. Consequently we have that -$$ -b(x) = 0,\ x\in F -$$ -$$ -\frac{1}{m(Q_j)}\int_{Q_j} b(x) dx = 0 -$$ - -for each cube Qj.
    - -The function b is thus supported on a collection of cubes where  f  is allowed to be "large", but has the beneficial property that its average value is zero on each of these cubes. Meanwhile, $|g(x)| \leq \alpha$ for almost every x in F; on each cube in Ω, g is equal to the average value of  f  over that cube, which by the covering chosen is not more than $2^d \alpha$. diff --git a/wiki/wikipedia/3384.txt b/wiki/wikipedia/3384.txt deleted file mode 100644 index 32b6621920cd76748cc996526afbf0f0b7a24c5b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3384.txt +++ /dev/null @@ -1,127 +0,0 @@ -In information technology and computer science, especially in the fields of computer programming, operating systems, multiprocessors, and databases, concurrency control ensures that correct results for concurrent operations are generated, while getting those results as quickly as possible. - -Computer systems, both software and hardware, consist of modules, or components. Each component is designed to operate correctly, i.e., to obey or to meet certain consistency rules. When components that operate concurrently interact by messaging or by sharing accessed data (in memory or storage), a certain component's consistency may be violated by another component. The general area of concurrency control provides rules, methods, design methodologies, and theories to maintain the consistency of components operating concurrently while interacting, and thus the consistency and correctness of the whole system. Introducing concurrency control into a system means applying operation constraints which typically result in some performance reduction. Operation consistency and correctness should be achieved with as good as possible efficiency, without reducing performance below reasonable levels. Concurrency control can require significant additional complexity and overhead in a concurrent algorithm compared to the simpler sequential algorithm. - -For example, a failure in concurrency control can result in data corruption from torn read or write operations. - -Comments: - -# This section is applicable to all transactional systems, i.e., to all systems that use database transactions (atomic transactions; e.g., transactional objects in Systems management and in networks of smartphones which typically implement private, dedicated database systems), not only general-purpose database management systems (DBMSs). - -# DBMSs need to deal also with concurrency control issues not typical just to database transactions but rather to operating systems in general. These issues (e.g., see Concurrency control in operating systems below) are out of the scope of this section. - -Concurrency control in Database management systems (DBMS; e.g., Bernstein et al. 1987, Weikum and Vossen 2001), other transactional objects, and related distributed applications (e.g., Grid computing and Cloud computing) ensures that database transactions are performed concurrently without violating the data integrity of the respective databases. Thus concurrency control is an essential element for correctness in any system where two database transactions or more, executed with time overlap, can access the same data, e.g., virtually in any general-purpose database system. Consequently, a vast body of related research has been accumulated since database systems emerged in the early 1970s.
A well established concurrency control theory for database systems is outlined in the references mentioned above: serializability theory, which allows one to effectively design and analyze concurrency control methods and mechanisms. An alternative theory for concurrency control of atomic transactions over abstract data types is presented in (Lynch et al. 1993), but is not utilized below. This theory is more refined and complex, with a wider scope, and has been less utilized in the Database literature than the classical theory above. Each theory has its pros and cons, emphasis and insight. To some extent they are complementary, and their merging may be useful. - -To ensure correctness, a DBMS usually guarantees that only serializable transaction schedules are generated, unless serializability is intentionally relaxed to increase performance, but only in cases where application correctness is not harmed. For maintaining correctness in cases of failed (aborted) transactions (which can always happen for many reasons) schedules also need to have the recoverability (from abort) property. A DBMS also guarantees that no effect of committed transactions is lost, and no effect of aborted (rolled back) transactions remains in the related database. Overall transaction characterization is usually summarized by the ACID rules below. As databases have become distributed, or needed to cooperate in distributed environments (e.g., Federated databases in the early 1990s, and Cloud computing currently), the effective distribution of concurrency control mechanisms has received special attention. - -The concept of a database transaction (or atomic transaction) has evolved in order to enable both a well understood database system behavior in a faulty environment where crashes can happen any time, and recovery from a crash to a well understood database state. A database transaction is a unit of work, typically encapsulating a number of operations over a database (e.g., reading a database object, writing, acquiring lock, etc.), an abstraction supported in database and also other systems. Each transaction has well defined boundaries in terms of which program/code executions are included in that transaction (determined by the transaction's programmer via special transaction commands). Every database transaction obeys the following rules (by support in the database system; i.e., a database system is designed to guarantee them for the transactions it runs): - -*Atomicity - Either the effects of all or none of its operations remain ("all or nothing" semantics) when a transaction is completed (committed or aborted respectively). In other words, to the outside world a committed transaction appears (by its effects on the database) to be indivisible (atomic), and an aborted transaction does not affect the database at all. Either all the operations are done or none of them are. - -*Consistency - Every transaction must leave the database in a consistent (correct) state, i.e., maintain the predetermined integrity rules of the database (constraints upon and among the database's objects). A transaction must transform a database from one consistent state to another consistent state (however, it is the responsibility of the transaction's programmer to make sure that the transaction itself is correct, i.e., performs correctly what it intends to perform (from the application's point of view) while the predefined integrity rules are enforced by the DBMS).
Thus since a database can be normally changed only by transactions, all the database's states are consistent. - -*Isolation - Transactions cannot interfere with each other (as an end result of their executions). Moreover, usually (depending on concurrency control method) the effects of an incomplete transaction are not even visible to another transaction. Providing isolation is the main goal of concurrency control. - -*Durability - Effects of successful (committed) transactions must persist through crashes (typically by recording the transaction's effects and its commit event in a non-volatile memory). - -The concept of atomic transaction has been extended during the years to what has become Business transactions, which actually implement types of Workflow and are not atomic. However, also such enhanced transactions typically utilize atomic transactions as components. - -If transactions are executed serially, i.e., sequentially with no overlap in time, no transaction concurrency exists. However, if concurrent transactions with interleaving operations are allowed in an uncontrolled manner, some unexpected, undesirable results may occur, such as: - -# The lost update problem: A second transaction writes a second value of a data-item (datum) on top of a first value written by a first concurrent transaction, and the first value is lost to other transactions running concurrently which need, by their precedence, to read the first value. The transactions that have read the wrong value end with incorrect results. - -# The dirty read problem: Transactions read a value written by a transaction that has been later aborted. This value disappears from the database upon abort, and should not have been read by any transaction ("dirty read"). The reading transactions end with incorrect results. - -# The incorrect summary problem: While one transaction takes a summary over the values of all the instances of a repeated data-item, a second transaction updates some instances of that data-item. The resulting summary does not reflect a correct result for any (usually needed for correctness) precedence order between the two transactions (if one is executed before the other), but rather some random result, depending on the timing of the updates, and whether certain update results have been included in the summary or not. - -Most high-performance transactional systems need to run transactions concurrently to meet their performance requirements. Thus, without concurrency control such systems can neither provide correct results nor maintain their databases consistently. - -The main categories of concurrency control mechanisms are: - -* Optimistic - Delay the checking of whether a transaction meets the isolation and other integrity rules (e.g., serializability and recoverability) until its end, without blocking any of its (read, write) operations ("...and be optimistic about the rules being met..."), and then abort the transaction if the desired rules would be violated upon its commit. An aborted transaction is immediately restarted and re-executed, which incurs an obvious overhead (versus executing it to the end only once). If not too many transactions are aborted, then being optimistic is usually a good strategy. - -* Pessimistic - Block an operation of a transaction, if it may cause violation of the rules, until the possibility of violation disappears. Blocking operations typically involves performance reduction.
*Semi-optimistic - Block operations in some situations, if they may cause violation of some rules, and do not block in other situations, while delaying rules checking (if needed) to the transaction's end, as done with optimistic. - -Different categories provide different performance, i.e., different average transaction completion rates (throughput), depending on the mix of transaction types, the level of computing parallelism, and other factors. If selection and knowledge about trade-offs are available, then the category and method should be chosen to provide the highest performance. - -Mutual blocking between two or more transactions (where each one blocks the other) results in a deadlock, where the transactions involved are stalled and cannot reach completion. Most non-optimistic mechanisms (with blocking) are prone to deadlocks, which are resolved by an intentional abort of a stalled transaction (which releases the other transactions in that deadlock), and its immediate restart and re-execution. The likelihood of a deadlock is typically low. - -Blocking, deadlocks, and aborts all result in performance reduction, and hence the trade-offs between the categories. - -Many methods for concurrency control exist. Most of them can be implemented within either main category above. The major methods, each of which has many variants, and which in some cases may overlap or be combined, are: - -#Locking (e.g., Two-phase locking - 2PL) - Controlling access to data by locks assigned to the data. Access of a transaction to a data item (database object) locked by another transaction may be blocked (depending on lock type and access operation type) until lock release. - -#Serialization graph checking (also called Serializability, or Conflict, or Precedence graph checking) - Checking for cycles in the schedule's graph and breaking them by aborts. - -#Timestamp ordering (TO) - Assigning timestamps to transactions, and controlling or checking access to data by timestamp order. - -#Commitment ordering (or Commit ordering; CO) - Controlling or checking transactions' chronological order of commit events to be compatible with their respective precedence order. - -Other major concurrency control types that are utilized in conjunction with the methods above include: - -* Multiversion concurrency control (MVCC) - Increasing concurrency and performance by generating a new version of a database object each time the object is written, and allowing transactions' read operations of several last relevant versions (of each object) depending on scheduling method. - -* Index concurrency control - Synchronizing access operations to indexes, rather than to user data. Specialized methods provide substantial performance gains. - -* Private workspace model (Deferred update) - Each transaction maintains a private workspace for its accessed data, and its changed data become visible outside the transaction only upon its commit (e.g., Weikum and Vossen 2001). This model provides a different concurrency control behavior with benefits in many cases. - -The most common mechanism type in database systems since their early days in the 1970s has been Strong strict Two-phase locking (SS2PL; also called Rigorous scheduling or Rigorous 2PL), which is a special case (variant) of both Two-phase locking (2PL) and Commitment ordering (CO). It is pessimistic. In spite of its long name (for historical reasons) the idea of the SS2PL mechanism is simple: "Release all locks applied by a transaction only after the transaction has ended."
SS2PL (or Rigorousness) is also the name of the set of all schedules that can be generated by this mechanism, i.e., the SS2PL (or Rigorous) schedules, which have the SS2PL (or Rigorousness) property. - -Concurrency control mechanisms firstly need to operate correctly, i.e., to maintain each transaction's integrity rules (as related to concurrency; application-specific integrity rules are out of scope here) while transactions are running concurrently, and thus the integrity of the entire transactional system. Correctness needs to be achieved with performance as good as possible. In addition, increasingly a need exists to operate effectively while transactions are distributed over processes, computers, and computer networks. Other subjects that may affect concurrency control are recovery and replication. - -For correctness, a common major goal of most concurrency control mechanisms is generating schedules with the Serializability property. Without serializability undesirable phenomena may occur, e.g., money may disappear from accounts, or be generated from nowhere. Serializability of a schedule means equivalence (in the resulting database values) to some serial schedule with the same transactions (i.e., in which transactions are sequential with no overlap in time, and thus completely isolated from each other: no concurrent access by any two transactions to the same data is possible). Serializability is considered the highest level of isolation among database transactions, and the major correctness criterion for concurrent transactions. In some cases compromised, relaxed forms of serializability are allowed for better performance (e.g., the popular Snapshot isolation mechanism) or to meet availability requirements in highly distributed systems (see Eventual consistency), but only if the application's correctness is not violated by the relaxation (e.g., no relaxation is allowed for money transactions, since by relaxation money can disappear, or appear from nowhere). - -Almost all implemented concurrency control mechanisms achieve serializability by providing conflict serializability, a broad special case of serializability (i.e., it covers most serializable schedules and does not impose significant additional delay-causing constraints) which can be implemented efficiently. - -See Recoverability in Serializability - -Comment: While in the general area of systems the term "recoverability" may refer to the ability of a system to recover from failure or from an incorrect/forbidden state, within concurrency control of database systems this term has received a specific meaning. - -Concurrency control typically also ensures the Recoverability property of schedules for maintaining correctness in cases of aborted transactions (which can always happen for many reasons). Recoverability (from abort) means that no committed transaction in a schedule has read data written by an aborted transaction. Such data disappear from the database (upon the abort) and are parts of an incorrect database state. Reading such data violates the consistency rule of ACID. Unlike Serializability, Recoverability cannot be compromised or relaxed in any case, since any relaxation results in quick database integrity violation upon aborts. The major methods listed above provide serializability mechanisms. None of them in its general form automatically provides recoverability, and special considerations and mechanism enhancements are needed to support recoverability.
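The SS2PL rule lends itself to a compact sketch. The toy lock manager below (hypothetical names; exclusive locks only, and no deadlock detection, which a real scheduler would need) blocks conflicting accesses pessimistically and releases all of a transaction's locks only at its end; holding every lock to transaction end is also what yields strictness, and hence recoverability:

```python
import threading
from collections import defaultdict

class SS2PLManager:
    """Toy illustration of the SS2PL rule: every lock a transaction takes
    is held until the transaction ends (commits or aborts)."""

    def __init__(self):
        self.locks = {}               # data item -> owning transaction id
        self.held = defaultdict(set)  # transaction id -> items it has locked
        self.cv = threading.Condition()

    def access(self, txn, item):
        """Lock `item` for `txn` before any read/write; block on conflict."""
        with self.cv:
            while self.locks.get(item, txn) != txn:
                self.cv.wait()       # pessimistic: wait for the owner to end
            self.locks[item] = txn
            self.held[txn].add(item)

    def end(self, txn):
        """Commit or abort: only now are all of txn's locks released."""
        with self.cv:
            for item in self.held.pop(txn, ()):
                del self.locks[item]
            self.cv.notify_all()     # wake transactions waiting on these items
```

A transaction thread would call `access` before each read or write and `end` exactly once; because nothing is released early, no other transaction can read its uncommitted writes.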
A commonly utilized special case of recoverability is Strictness, which allows efficient database recovery from failure (but excludes optimistic implementations; e.g., Strict CO (SCO) cannot have an optimistic implementation, but has semi-optimistic ones). - -Comment: Note that the Recoverability property is needed even if no database failure occurs and no database recovery from failure is needed. It is rather needed to correctly automatically handle transaction aborts, which may be unrelated to database failure and recovery from it. - -With the fast technological development of computing, the difference between local and distributed computing over low latency networks or buses is blurring. Thus the quite effective utilization of local techniques in such distributed environments is common, e.g., in computer clusters and multi-core processors. However the local techniques have their limitations and use multi-processes (or threads) supported by multi-processors (or multi-cores) to scale. This often turns transactions into distributed ones, if they themselves need to span multi-processes. In these cases most local concurrency control techniques do not scale well. - -See Distributed serializability in Serializability - -As database systems have become distributed, or started to cooperate in distributed environments (e.g., Federated databases in the early 1990s, and nowadays Grid computing, Cloud computing, and networks with smartphones), some transactions have become distributed. A distributed transaction means that the transaction spans processes, and may span computers and geographical sites. This generates a need for effective distributed concurrency control mechanisms. Achieving the Serializability property of a distributed system's schedule (see Distributed serializability and Global serializability (Modular serializability)) effectively poses special challenges typically not met by most of the regular serializability mechanisms, originally designed to operate locally. This is especially due to a need for costly distribution of concurrency control information amid communication and computer latency. The only known general effective technique for distribution is Commitment ordering, which was disclosed publicly in 1991 (after being patented). Commitment ordering (Commit ordering, CO; Raz 1992) means that transactions' chronological order of commit events is kept compatible with their respective precedence order. CO does not require the distribution of concurrency control information and provides a general effective solution (reliable, high-performance, and scalable) for both distributed and global serializability, also in a heterogeneous environment with database systems (or other transactional objects) with different (any) concurrency control mechanisms. - -* Schedule - -* Isolation (computer science) - -* Distributed concurrency control - -*Philip A. Bernstein, Vassos Hadzilacos, Nathan Goodman (1987): Concurrency Control and Recovery in Database Systems, Addison Wesley Publishing Company, 1987. - -*Gerhard Weikum, Gottfried Vossen (2001): Transactional Information Systems: Theory, Algorithms, and the Practice of Concurrency Control and Recovery, Elsevier. - -*Nancy Lynch, Michael Merritt, William Weihl, Alan Fekete (1993): Atomic Transactions, Morgan Kaufmann (Elsevier), August 1993. - -*Yoav Raz (1992): "The Principle of Commitment Ordering, or Guaranteeing Serializability in a Heterogeneous Environment of Multiple Autonomous Resource Managers Using Atomic Commitment", Proceedings of the Eighteenth International Conference on Very Large Data Bases (VLDB), pp. 292-312, Vancouver, Canada, August 1992.
(also DEC-TR 841, Digital Equipment Corporation, November 1990) - -Multitasking operating systems, especially real-time operating systems, need to maintain the illusion that all tasks running on top of them are all running at the same time, even though only one or a few tasks really are running at any given moment due to the limitations of the hardware the operating system is running on. Such multitasking is fairly simple when all tasks are independent of each other. However, when several tasks try to use the same resource, or when tasks try to share information, it can lead to confusion and inconsistency. The task of concurrent computing is to solve that problem. Some solutions involve "locks" similar to the locks used in databases, but they risk causing problems of their own such as deadlock. Other solutions are Non-blocking algorithms and Read-copy-update. - -* Linearizability - -* Lock (computer science) - -* Mutual exclusion - -* Semaphore (programming) - -* Software transactional memory - -* Transactional Synchronization Extensions diff --git a/wiki/wikipedia/3385.txt b/wiki/wikipedia/3385.txt deleted file mode 100644 index de9be0a4f58b62b105ed662d1809014e81274792..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3385.txt +++ /dev/null @@ -1,75 +0,0 @@ -In mathematics, the Boolean prime ideal theorem states that ideals in a Boolean algebra can be extended to prime ideals. A variation of this statement for filters on sets is known as the ultrafilter lemma. Other theorems are obtained by considering different mathematical structures with appropriate notions of ideals, for example, rings and prime ideals (of ring theory), or distributive lattices and maximal ideals (of order theory). This article focuses on prime ideal theorems from order theory. - -Although the various prime ideal theorems may appear simple and intuitive, they cannot be deduced in general from the axioms of Zermelo–Fraenkel set theory without the axiom of choice (abbreviated ZF). Instead, some of the statements turn out to be equivalent to the axiom of choice (AC), while others—the Boolean prime ideal theorem, for instance—represent a property that is strictly weaker than AC. It is due to this intermediate status between ZF and ZF + AC (ZFC) that the Boolean prime ideal theorem is often taken as an axiom of set theory. The abbreviations BPI or PIT (for Boolean algebras) are sometimes used to refer to this additional axiom. - -An order ideal is a (non-empty) directed lower set. If the considered partially ordered set (poset) has binary suprema (a.k.a. joins), as do the posets within this article, then this is equivalently characterized as a non-empty lower set I that is closed for binary suprema (that is, $x, y \in I$ implies $x \vee y \in I$). An ideal I is prime if its set-theoretic complement in the poset is a filter (that is, $x \wedge y \in I$ implies $x \in I$ or $y \in I$). Ideals are proper if they are not equal to the whole poset. - -Historically, the first statement relating to later prime ideal theorems was in fact referring to filters—subsets that are ideals with respect to the dual order. The ultrafilter lemma states that every filter on a set is contained within some maximal (proper) filter—an ultrafilter. Recall that filters on a set are proper filters of the Boolean algebra of its powerset.
In this special case, maximal filters (i.e. filters that are not strict subsets of any proper filter) and prime filters (i.e. filters that, whenever they contain a union of subsets X and Y, also contain X or Y) coincide. The dual of this statement thus assures that every ideal of a powerset is contained in a prime ideal. - -The above statement led to various generalized prime ideal theorems, each of which exists in a weak and in a strong form. Weak prime ideal theorems state that every non-trivial algebra of a certain class has at least one prime ideal. In contrast, strong prime ideal theorems require that every ideal that is disjoint from a given filter can be extended to a prime ideal that is still disjoint from that filter. In the case of algebras that are not posets, one uses different substructures instead of filters. Many forms of these theorems are actually known to be equivalent, so that the assertion that "PIT" holds is usually taken as the assertion that the corresponding statement for Boolean algebras (BPI) is valid. - -Another variation of similar theorems is obtained by replacing each occurrence of prime ideal by maximal ideal. The corresponding maximal ideal theorems (MIT) are often—though not always—stronger than their PIT equivalents. - -The Boolean prime ideal theorem is the strong prime ideal theorem for Boolean algebras. Thus the formal statement is: - -Let B be a Boolean algebra, let I be an ideal and let F be a filter of B, such that I and F are disjoint. Then I is contained in some prime ideal of B that is disjoint from F. - -The weak prime ideal theorem for Boolean algebras simply states: - -Every Boolean algebra contains a prime ideal. - -We refer to these statements as the weak and strong BPI. The two are equivalent, as the strong BPI clearly implies the weak BPI, and the reverse implication can be achieved by using the weak BPI to find prime ideals in the appropriate quotient algebra. - -The BPI can be expressed in various ways. For this purpose, recall the following theorem: - -For any ideal I of a Boolean algebra B, the following are equivalent: - -* I is a prime ideal. - -* I is a maximal ideal, i.e. for any proper ideal J, if I is contained in J then I = J. - -* For every element a of B, I contains exactly one of {a, ¬a}. - -This theorem is a well-known fact for Boolean algebras. Its dual establishes the equivalence of prime filters and ultrafilters. Note that the last property is in fact self-dual—only the prior assumption that I is an ideal gives the full characterization. All of the implications within this theorem can be proven in ZF. - -Thus the following (strong) maximal ideal theorem (MIT) for Boolean algebras is equivalent to BPI: - -Let B be a Boolean algebra, let I be an ideal and let F be a filter of B, such that I and F are disjoint. Then I is contained in some maximal ideal of B that is disjoint from F. - -Note that one requires "global" maximality, not just maximality with respect to being disjoint from F. Yet, this variation yields another equivalent characterization of BPI: - -Let B be a Boolean algebra, let I be an ideal and let F be a filter of B, such that I and F are disjoint. Then I is contained in some ideal of B that is maximal among all ideals disjoint from F. - -The fact that this statement is equivalent to BPI is easily established by noting the following theorem: For any distributive lattice L, if an ideal I is maximal among all ideals of L that are disjoint to a given filter F, then I is a prime ideal.
The proof for this statement (which can again be carried out in ZF set theory) is included in the article on ideals. Since any Boolean algebra is a distributive lattice, this shows the desired implication. - -All of the above statements are now easily seen to be equivalent. Going even further, one can exploit the fact that the dual orders of Boolean algebras are exactly the Boolean algebras themselves. Hence, when taking the equivalent duals of all former statements, one ends up with a number of theorems that equally apply to Boolean algebras, but where every occurrence of ideal is replaced by filter. It is worth noting that for the special case where the Boolean algebra under consideration is a powerset with the subset ordering, the "maximal filter theorem" is called the ultrafilter lemma. - -Summing up, for Boolean algebras, the weak and strong MIT, the weak and strong PIT, and these statements with filters in place of ideals are all equivalent. It is known that all of these statements are consequences of the axiom of choice, AC (the easy proof makes use of Zorn's lemma), but cannot be proven in ZF (Zermelo–Fraenkel set theory without AC), if ZF is consistent. Yet, the BPI is strictly weaker than the axiom of choice, though the proof of this statement, due to J. D. Halpern and Azriel Lévy, is rather non-trivial. - -The prototypical properties that were discussed for Boolean algebras in the above section can easily be modified to include more general lattices, such as distributive lattices or Heyting algebras. However, in these cases maximal ideals are different from prime ideals, and the relation between PITs and MITs is not obvious. - -Indeed, it turns out that the MITs for distributive lattices and even for Heyting algebras are equivalent to the axiom of choice. On the other hand, it is known that the strong PIT for distributive lattices is equivalent to BPI (i.e. to the MIT and PIT for Boolean algebras). Hence this statement is strictly weaker than the axiom of choice. Furthermore, observe that Heyting algebras are not self dual, and thus using filters in place of ideals yields different theorems in this setting. Perhaps surprisingly, the MIT for the duals of Heyting algebras is not stronger than BPI, which is in sharp contrast to the abovementioned MIT for Heyting algebras. - -Finally, prime ideal theorems also exist for other (not order-theoretical) abstract algebras. For example, the MIT for rings implies the axiom of choice. This situation requires replacing the order-theoretic term "filter" by other concepts—for rings a "multiplicatively closed subset" is appropriate. - -A filter on a set X is a nonempty collection of nonempty subsets of X that is closed under finite intersection and under superset. An ultrafilter is a maximal filter. - -The ultrafilter lemma states that every filter on a set X is a subset of some ultrafilter on X. - -An ultrafilter that does not contain finite sets is called "non-principal". The ultrafilter lemma, and in particular the existence of non-principal ultrafilters (consider the filter of all sets with finite complements), can be proven using Zorn's lemma. - -The ultrafilter lemma is equivalent to the Boolean prime ideal theorem, with the equivalence provable in ZF set theory without the axiom of choice. The idea behind the proof is that the subsets of any set form a Boolean algebra partially ordered by inclusion, and any Boolean algebra is representable as an algebra of sets by Stone's representation theorem.
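The finite case discussed next is completely concrete: on a finite set every filter is principal, and the ultrafilters are exactly those generated by single points. The following brute-force check (purely illustrative, with names of our choosing) enumerates all filters on a three-element set and verifies this:

```python
from itertools import combinations

def powerset(s):
    """All subsets of s as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_filter(F, X):
    """Check the (proper) filter axioms over the finite set X."""
    if not F or frozenset() in F:
        return False                     # nonempty, and no empty set
    for A in F:
        for B in F:
            if A & B not in F:
                return False             # closed under finite intersection
        for B in powerset(X):
            if A <= B and B not in F:
                return False             # closed under superset
    return True

X = frozenset({1, 2, 3})
nonempty = [S for S in powerset(X) if S]
filters = []
for r in range(1, len(nonempty) + 1):
    for combo in combinations(nonempty, r):
        F = frozenset(combo)
        if is_filter(F, X):
            filters.append(F)

ultra = [F for F in filters if not any(F < G for G in filters)]
# every ultrafilter on a finite set is principal, i.e. contains a singleton
assert all(any(frozenset({x}) in F for x in X) for F in ultra)
print(len(filters), len(ultra))  # 7 filters, 3 ultrafilters on a 3-element set
```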
- -If the set X is finite then the ultrafilter lemma can be proven from the axioms of ZF. This is no longer true for infinite sets; an additional axiom must be assumed. Zorn's lemma, the axiom of choice, and Tychonoff's theorem can all be used to prove the ultrafilter lemma. The ultrafilter lemma is strictly weaker than the axiom of choice. - -The ultrafilter lemma has many applications in topology. It can be used to prove the Hahn-Banach theorem and the Alexander subbase theorem. - -Intuitively, the Boolean prime ideal theorem states that there are "enough" prime ideals in a Boolean algebra in the sense that we can extend every ideal to a maximal one. This is of practical importance for proving Stone's representation theorem for Boolean algebras, a special case of Stone duality, in which one equips the set of all prime ideals with a certain topology and can indeed regain the original Boolean algebra (up to isomorphism) from this data. Furthermore, it turns out that in applications one can freely choose either to work with prime ideals or with prime filters, because every ideal uniquely determines a filter: the set of all Boolean complements of its elements. Both approaches are found in the literature. - -Many other theorems of general topology that are often said to rely on the axiom of choice are in fact equivalent to BPI. For example, the theorem that a product of compact Hausdorff spaces is compact is equivalent to it. If we leave out "Hausdorff" we get a theorem equivalent to the full axiom of choice. - -In graph theory, the de Bruijn–Erdős theorem is another equivalent to BPI. It states that, if a given infinite graph requires at least some finite number k of colors in any graph coloring, then it has a finite subgraph that also requires k colors. - -A less well-known application of the Boolean prime ideal theorem is the existence of a non-measurable set (the example usually given is the Vitali set, which requires the axiom of choice). From this and the fact that the BPI is strictly weaker than the axiom of choice, it follows that the existence of non-measurable sets is strictly weaker than the axiom of choice. - -In linear algebra, the Boolean prime ideal theorem can be used to prove that any two bases of a given vector space have the same cardinality. diff --git a/wiki/wikipedia/3386.txt b/wiki/wikipedia/3386.txt deleted file mode 100644 index 57b60f41a91efbd40dac4b7c0b12848b7f35604b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3386.txt +++ /dev/null @@ -1,62 +0,0 @@ -In mathematics, the standard conjectures about algebraic cycles are several conjectures describing the relationship of algebraic cycles and Weil cohomology theories. One of the original applications of these conjectures, envisaged by Alexander Grothendieck, was to prove that his construction of pure motives gave an abelian category that is semisimple. Moreover, as he pointed out, the standard conjectures also imply the hardest part of the Weil conjectures, namely the "Riemann hypothesis" conjecture that remained open at the end of the 1960s and was proved later by Pierre Deligne; for details on the link between Weil and standard conjectures, see Kleiman. The standard conjectures remain open problems, so that their application gives only conditional proofs of results. In quite a few cases, including that of the Weil conjectures, other methods have been found to prove such results unconditionally.
- -The classical formulations of the standard conjectures involve a fixed Weil cohomology theory H. All of the conjectures deal with "algebraic" cohomology classes, which means a morphism on the cohomology of a smooth projective variety - -H ∗(X) → H ∗(X) - -induced by an algebraic cycle with rational coefficients on the product X × X via the cycle class map, which is part of the structure of a Weil cohomology theory. - -Conjecture A is equivalent to Conjecture B (see Grothendieck, p. 196), and so is not listed. - -One of the axioms of a Weil theory is the so-called hard Lefschetz theorem (or axiom): - -Begin with a fixed smooth hyperplane section - -W = H ∩ X, - -where X is a given smooth projective variety in the ambient projective space P N and H is a hyperplane. Then for i ≤ n = dim(X), the Lefschetz operator - -L : H i(X) → H i+2(X), - -which is defined by intersecting cohomology classes with W, gives an isomorphism - -Ln−i : H i(X) → H 2n−i(X). - -Now, for i ≤ n define: - -Λ = (Ln−i+2)−1 ∘ L ∘ (Ln−i) : H i(X) → H i−2(X) - -Λ = (Ln−i) ∘ L ∘ (Ln−i+2)−1 : H 2n−i+2(X) → H 2n−i(X) - -The conjecture states that the Lefschetz operator (Λ) is induced by an algebraic cycle. - -It is conjectured that the projectors - -H ∗(X) ↠ H i(X) ↣ H ∗(X) - -are algebraic, i.e. induced by a cycle π i ⊂ X × X with rational coefficients. This implies that the motive of any smooth projective variety (and more generally, every pure motive) decomposes as -$$ -h(X) = \bigoplus_{i=0}^{2\dim(X)} h^i(X). -$$ - -The motives $h^0(X)$ and $h^{2\dim(X)}(X)$ can always be split off as direct summands. The conjecture therefore immediately holds for curves. It was proved for surfaces by Murre. - -Katz and Messing have used the Weil conjectures to show the conjecture for algebraic varieties defined over finite fields, in arbitrary dimension. - -Šermenev proved the Künneth decomposition for abelian varieties A. - -Deninger refined this result by exhibiting a functorial Künneth decomposition of the Chow motive of A such that the n-multiplication on the abelian variety acts as $n^i$ on the i-th summand $h^i(A)$. - -de Cataldo proved the Künneth decomposition for the Hilbert scheme of points in a smooth surface. - -Conjecture D states that numerical and homological equivalence agree. (In particular, it implies that the latter does not depend on the choice of the Weil cohomology theory.) This conjecture implies the Lefschetz conjecture. If the Hodge standard conjecture holds, then the Lefschetz conjecture and Conjecture D are equivalent. - -This conjecture was shown by Lieberman for varieties of dimension at most 4, and for abelian varieties. - -The Hodge standard conjecture is modelled on the Hodge index theorem. It states the definiteness (positive or negative, according to the dimension) of the cup product pairing on primitive algebraic cohomology classes. If it holds, then the Lefschetz conjecture implies Conjecture D. In characteristic zero the Hodge standard conjecture holds, being a consequence of Hodge theory. In positive characteristic the Hodge standard conjecture is known for surfaces (Grothendieck) and for abelian varieties of dimension 4 (Ancona). - -The Hodge standard conjecture is not to be confused with the Hodge conjecture, which states that for smooth projective varieties over C, every rational (p, p)-class is algebraic. The Hodge conjecture implies the Lefschetz and Künneth conjectures and conjecture D for varieties over fields of characteristic zero.
The Tate conjecture implies Lefschetz, Künneth, and conjecture D for ℓ-adic cohomology over all fields. - -For two algebraic varieties X and Y, Arapura has introduced a condition that Y is motivated by X. The precise condition is that the motive of Y is (in André's category of motives) expressible starting from the motive of X by means of sums, summands, and products. For example, Y is motivated if there is a surjective morphism $X^n \to Y$. If Y is not found in the category, it is unmotivated in that context. For smooth projective complex algebraic varieties X and Y, such that Y is motivated by X, the standard conjectures D (homological equivalence equals numerical), B (Lefschetz), the Hodge conjecture and also the generalized Hodge conjecture hold for Y if they hold for all powers of X. This fact can be applied to show, for example, the Lefschetz conjecture for the Hilbert scheme of points on an algebraic surface. - -Beilinson has shown that the (conjectural) existence of the so-called motivic t-structure on the triangulated category of motives implies the Lefschetz and Künneth standard conjectures B and C. diff --git a/wiki/wikipedia/3387.txt b/wiki/wikipedia/3387.txt deleted file mode 100644 index 377f227f2ea108c015f2233ff86dbd2152100d1f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3387.txt +++ /dev/null @@ -1,13 +0,0 @@ -In mathematics, Fujita's conjecture is a problem in the theories of algebraic geometry and complex manifolds that remains unsolved. It is named after Takao Fujita, who formulated it in 1985. - -In complex geometry, the conjecture states that for a positive holomorphic line bundle L on a compact complex manifold M, the line bundle KM ⊗ L⊗m (where KM is the canonical line bundle of M) is - -* spanned by sections when m ≥ n + 1; - -* very ample when m ≥ n + 2, - -where n is the complex dimension of M. - -Note that for large m the line bundle KM ⊗ L⊗m is very ample by Serre's vanishing theorem (and its complex analytic variant). The Fujita conjecture provides an explicit bound on m, which is optimal for projective spaces. - -For surfaces the Fujita conjecture follows from Reider's theorem. For three-dimensional algebraic varieties, Ein and Lazarsfeld in 1993 proved the first part of the Fujita conjecture, i.e. that m ≥ 4 implies global generation. diff --git a/wiki/wikipedia/3388.txt b/wiki/wikipedia/3388.txt deleted file mode 100644 index 05c1585dbf06df13440ff7c81d9962eb4ffb2906..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3388.txt +++ /dev/null @@ -1,13 +0,0 @@ -MindView is a mind mapping and project management software owned by the company MatchWare. MindView is used for mind mapping, concept mapping, work breakdown structures, timelines, Gantt charts, organizational charts, and other visuals. - -MindView is a visualization tool for mind mapping, concept mapping, work breakdown structures, timelines, Gantt charts, organizational charts, top-down estimations, bottom-up estimates, and other business visuals. MindView is available as a desktop application for Windows and macOS, online, or as a web browser extension. - -MindView's office integration includes Microsoft Office products such as Word, PowerPoint, Excel, Outlook, and Project, including Outlook task lists. MindView also integrates with online solutions like OneDrive and Google Drive. It can import from and export to Microsoft Word, PowerPoint, Project, Excel, and Outlook, and it can also export PDF, HTML, and XML.
Since it can import XML, it may also be possible to import files from other mind mapping tools such as FreeMind or MindManager. - -* Editor's Choice award from PCMag 2011, 2013 - -* 5 star award from Tucows - -* Certificate of Accreditation from Digital Accessibility Centre, 2016 - -* BETT award nomination diff --git a/wiki/wikipedia/3389.txt b/wiki/wikipedia/3389.txt deleted file mode 100644 index 56326cff964048e2bf3e7e1f7a418fcf1edfcebc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3389.txt +++ /dev/null @@ -1,15 +0,0 @@ -Steffensen's inequality is an inequality in mathematics named after Johan Frederik Steffensen. - -It is an integral inequality in real analysis, stating: - -If f : [a, b] → R is a non-negative, monotonically decreasing, integrable function - -and g : [a, b] → [0, 1] is another integrable function, then -$$ -\int_{b - k}^{b} f(x) dx \leq \int_{a}^{b} f(x) g(x) dx \leq \int_{a}^{a + k} f(x) dx, -$$ - -where -$$ -k = \int_{a}^{b} g(x) dx. -$$ diff --git a/wiki/wikipedia/339.txt b/wiki/wikipedia/339.txt deleted file mode 100644 index a73d6a8b622c5a35a966e81da8d0e168729af079..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/339.txt +++ /dev/null @@ -1,17 +0,0 @@ -In number theory, Lochs's theorem concerns the rate of convergence of the continued fraction expansion of a typical real number. A proof of the theorem was published in 1964 by Gustav Lochs. - -The theorem states that for almost all real numbers in the interval (0,1), the number of terms m of the number's continued fraction expansion that are required to determine the first n places of the number's decimal expansion behaves asymptotically as follows: -$$ -\lim_{n\to\infty}\frac{m}{n}=\frac{6\ln(2)\ln(10)}{\pi^2} \approx 0.97027014 -$$ . - -As this limit is only slightly smaller than 1, this can be interpreted as saying that each additional term in the continued fraction representation of a "typical" real number increases the accuracy of the representation by approximately one decimal place. The decimal system is the last positional system for which each digit carries less information than one continued fraction quotient; going to base-11 (changing $\ln(10)$ to $\ln(11)$ in the equation) makes the above value exceed 1. - -The reciprocal of this limit, -$$ -\frac{\pi^2}{6\ln(2)\ln(10)} \approx 1.03064083 -$$ , - -is twice the base-10 logarithm of Lévy's constant. - -A prominent example of a number not exhibiting this behavior is the golden ratio—sometimes known as the "most irrational" number—whose continued fraction terms are all ones, the smallest possible in canonical form. On average it requires approximately 2.39 continued fraction terms per decimal digit. diff --git a/wiki/wikipedia/3390.txt b/wiki/wikipedia/3390.txt deleted file mode 100644 index fa5f13d07c1dd3018e43e7fcdef006c54d0218ff..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3390.txt +++ /dev/null @@ -1,8 +0,0 @@ -For Lebesgue's lemma for open covers of compact spaces in topology see Lebesgue's number lemma. - -In mathematics, Lebesgue's lemma is an important statement in approximation theory. It provides a bound for the projection error. - -Let (V, ||·||) be a normed vector space, U a subspace of V, and P a linear projection onto U. Then for each v in V: -$$ - \|v-Pv\|\leq (1+\|P\|)\inf_{u\in U}\|v-u\|.
-$$ diff --git a/wiki/wikipedia/3391.txt b/wiki/wikipedia/3391.txt deleted file mode 100644 index 83d0ae326887abf5db547d116b1e349564cc38f3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3391.txt +++ /dev/null @@ -1,12 +0,0 @@ -In geometry, Pedoe's inequality (also Neuberg–Pedoe inequality), named after Daniel Pedoe (1910–1998) and Joseph Jean Baptiste Neuberg (1840–1926), states that if a, b, and c are the lengths of the sides of a triangle with area f, and A, B, and C are the lengths of the sides of a triangle with area F, then -$$ -A^2(b^2+c^2-a^2)+B^2(a^2+c^2-b^2)+C^2(a^2+b^2-c^2)\geq 16Ff, -$$ - -with equality if and only if the two triangles are similar with pairs of corresponding sides (A, a), (B, b), and (C, c). - -The expression on the left is not only symmetric under any of the six permutations of the set { (A, a), (B, b), (C, c) } of pairs, but also (perhaps not so obviously) remains the same if a is interchanged with A and b with B and c with C. In other words, it is a symmetric function of the pair of triangles. - -Pedoe's inequality is a generalization of Weitzenböck's inequality, which is the case in which one of the triangles is equilateral. - -Pedoe discovered the inequality in 1941 and published it subsequently in several articles. Later he learned that the inequality was already known in the 19th century to Neuberg, who, however, did not prove that the equality implies the similarity of the two triangles. diff --git a/wiki/wikipedia/3392.txt b/wiki/wikipedia/3392.txt deleted file mode 100644 index 567728354facd1e8e2664dec7d01524264c4ab76..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3392.txt +++ /dev/null @@ -1,16 +0,0 @@ -In mathematics, the Kolmogorov continuity theorem is a theorem that guarantees that a stochastic process that satisfies certain constraints on the moments of its increments will be continuous (or, more precisely, have a "continuous version"). It is credited to the Soviet mathematician Andrey Nikolaevich Kolmogorov. - -Let $(S,d)$ be some complete metric space, and let $X\colon [0, + \infty) \times \Omega \to S$ be a stochastic process. Suppose that for all times $T > 0$, there exist positive constants $\alpha, \beta, K$ such that -$$ -\mathbb{E} [d(X_t, X_s)^\alpha] \leq K | t - s |^{1 + \beta} -$$ - -for all $0 \leq s, t \leq T$. Then there exists a modification $\tilde{X}$ of $X$ that is a continuous process, i.e. a process $\tilde{X}\colon [0, + \infty) \times \Omega \to S$ such that - -* $\tilde{X}$ is sample-continuous; - -* for every time $t \geq 0$, $\mathbb{P} (X_t = \tilde{X}_t) = 1.$ - -Furthermore, the paths of $\tilde{X}$ are locally $\gamma$-Hölder-continuous for every $0<\gamma<\tfrac\beta\alpha$. - -In the case of Brownian motion on $\mathbb{R}^n$, the choice of constants $\alpha = 4$, $\beta = 1$, $K = n (n + 2)$ will work in the Kolmogorov continuity theorem. Moreover, for any positive integer $m$, the constants $\alpha = 2m$, $\beta = m-1$ will work, for some positive value of $K$ that depends on $n$ and $m$.
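The constants quoted for Brownian motion can be checked numerically. The following Python sketch (illustrative; it uses NumPy and relies on the Gaussian fourth-moment identity $\mathbb{E}[d(B_t, B_s)^4] = n(n+2)|t-s|^2$ for the increments) estimates the fourth moment by Monte Carlo and compares it with $K|t-s|^{1+\beta}$ for $\alpha = 4$, $\beta = 1$, $K = n(n+2)$:

```python
import numpy as np

# Monte Carlo check: the increment B_t - B_s of n-dimensional Brownian
# motion is N(0, |t - s| I_n), and E[|B_t - B_s|^4] = n(n + 2)|t - s|^2,
# so Kolmogorov's condition holds with alpha = 4, beta = 1, K = n(n + 2).
rng = np.random.default_rng(0)
n, s, t, samples = 3, 0.2, 0.7, 200_000

increments = rng.normal(0.0, np.sqrt(t - s), size=(samples, n))
empirical = np.mean(np.sum(increments**2, axis=1) ** 2)
theoretical = n * (n + 2) * (t - s) ** 2

print(empirical, theoretical)   # the two agree up to Monte Carlo error
```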
diff --git a/wiki/wikipedia/3393.txt b/wiki/wikipedia/3393.txt deleted file mode 100644 index f32057259fa4989ce0631ea9d4aa31c01e24f5e7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3393.txt +++ /dev/null @@ -1,165 +0,0 @@ -In information theory, the noisy-channel coding theorem (sometimes Shannon's theorem or Shannon's limit) establishes that for any given degree of noise contamination of a communication channel, it is possible to communicate discrete data (digital information) nearly error-free up to a computable maximum rate through the channel. This result was presented by Claude Shannon in 1948 and was based in part on earlier work and ideas of Harry Nyquist and Ralph Hartley. - -The Shannon limit or Shannon capacity of a communication channel refers to the maximum rate of error-free data that can theoretically be transferred over the channel if the link is subject to random data transmission errors, for a particular noise level. It was first described by Shannon (1948), and shortly afterwards published in a book by Shannon and Warren Weaver entitled The Mathematical Theory of Communication (1949). This founded the modern discipline of information theory. - -Stated by Claude Shannon in 1948, the theorem describes the maximum possible efficiency of error-correcting methods versus levels of noise interference and data corruption. Shannon's theorem has wide-ranging applications in both communications and data storage. This theorem is of foundational importance to the modern field of information theory. Shannon only gave an outline of the proof. The first rigorous proof for the discrete case is due to Amiel Feinstein in 1954. - -The Shannon theorem states that given a noisy channel with channel capacity C and information transmitted at a rate R, then if $R < C$ there exist codes that allow the probability of error at the receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit information nearly without error at any rate below a limiting rate, C. - -The converse is also important. If $R > C$, an arbitrarily small probability of error is not achievable. All codes will have a probability of error greater than a certain positive minimal level, and this level increases as the rate increases. So, information cannot be guaranteed to be transmitted reliably across a channel at rates beyond the channel capacity. The theorem does not address the rare situation in which rate and capacity are equal. - -The channel capacity $C$ can be calculated from the physical properties of a channel; for a band-limited channel with Gaussian noise, using the Shannon–Hartley theorem. - -Simple schemes such as "send the message 3 times and use a best 2 out of 3 voting scheme if the copies differ" are inefficient error-correction methods, unable to asymptotically guarantee that a block of data can be communicated free of error. Advanced techniques such as Reed–Solomon codes and, more recently, low-density parity-check (LDPC) codes and turbo codes, come much closer to reaching the theoretical Shannon limit, but at a cost of high computational complexity. Using these highly efficient codes and with the computing power in today's digital signal processors, it is now possible to reach very close to the Shannon limit. In fact, it was shown that LDPC codes can reach within 0.0045 dB of the Shannon limit (for binary additive white Gaussian noise (AWGN) channels, with very long block lengths).
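As an illustration of how such capacities are computed, the following Python sketch evaluates the capacity of a binary symmetric channel (a simple discrete memoryless channel, used here as an assumed example) and of a band-limited Gaussian channel via the Shannon–Hartley theorem mentioned above; the 3 kHz / 30 dB figures are a standard telephone-line textbook example, not taken from this article:

```python
import math

def bsc_capacity(p):
    # Capacity of a binary symmetric channel with crossover probability p:
    # C = 1 - H(p), with H the binary entropy function (bits per channel use).
    if p in (0.0, 1.0):
        return 1.0
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1.0 - h

def shannon_hartley(bandwidth_hz, snr_linear):
    # Capacity of a band-limited AWGN channel (bits per second):
    # C = B * log2(1 + S/N).
    return bandwidth_hz * math.log2(1.0 + snr_linear)

print(bsc_capacity(0.11))            # about 0.5 bits per channel use
print(shannon_hartley(3000, 1000))   # about 29,900 bits per second
```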
- -The basic mathematical model for a communication system is the following: - -\xrightarrow[\text{Message}]{W} - -\begin{array}{ |c| }\hline \text{Encoder} \\ f_n \\ \hline\end{array} \xrightarrow[\mathrm{Encoded \atop sequence}]{X^n} \begin{array}{ |c| }\hline \text{Channel} \\ p(y|x) \\ \hline\end{array} \xrightarrow[\mathrm{Received \atop sequence}]{Y^n} \begin{array}{ |c| }\hline \text{Decoder} \\ g_n \\ \hline\end{array} \xrightarrow[\mathrm{Estimated \atop message}]{\hat W} - -A message W is transmitted through a noisy channel by using encoding and decoding functions. An encoder maps W into a pre-defined sequence of channel symbols of length n. In its most basic model, the channel distorts each of these symbols independently of the others. The output of the channel – the received sequence – is fed into a decoder which maps the sequence into an estimate of the message. In this setting, the probability of error is defined as: -$$ - P_e = \text{Pr}\left\{ \hat{W} \neq W \right\}. -$$ - -Theorem (Shannon, 1948): - -1. For every discrete memoryless channel, the channel capacity, defined in terms of the mutual information $I(X; Y)$ as -$$ -\ C = \sup_{p_X} I(X;Y) -$$ - -has the following property. For any $\epsilon > 0$ and $R < C$, for large enough $n$, there exists a code of length $n$ and rate $\geq R$ and a decoding algorithm, such that the maximal probability of block error is $\leq \epsilon$. - -In the standard random coding argument, with $E_i$ denoting the event that codeword $i$ is jointly typical with the received sequence, the average probability of error is bounded as - -\begin{align} - -P(\text{error}) & {} = P(\text{error}|W=1) \le P(E_1^c) + \sum_{i=2}^{2^{nR}}P(E_i) \\ - -& {} \le P(E_1^c) + (2^{nR}-1)2^{-n(I(X;Y)-3\varepsilon)} \\ - -& {} \le \varepsilon + 2^{-n(I(X;Y)-R-3\varepsilon)}. - -\end{align} - -We can observe that as $n$ goes to infinity, if $R < I(X;Y)$ for the channel, the probability of error will go to 0. - -Finally, given that the average codebook is shown to be "good," we know that there exists a codebook whose performance is better than the average, and so satisfies our need for an arbitrarily low error probability when communicating across the noisy channel. - -Consider a code of $2^{nR}$ codewords. Let W be drawn uniformly over this set as an index. Let $X^n$ and $Y^n$ be the transmitted codewords and received codewords, respectively. - -# $nR = H(W) = H(W|Y^n) + I(W;Y^n)$ using identities involving entropy and mutual information - -# $\le H(W|Y^n) + I(X^n(W);Y^{n})$ since X is a function of W - -# $\le 1 + P_e^{(n)}nR + I(X^n(W);Y^n)$ by the use of Fano's Inequality - -# $\le 1 + P_e^{(n)}nR + nC$ by the fact that capacity is the maximized mutual information. - -The result of these steps is that $ P_e^{(n)} \ge 1 - \frac{1}{nR} - \frac{C}{R} $. As the block length $n$ goes to infinity, we see that $P_e^{(n)}$ is bounded away from 0 if R is greater than C; we can get arbitrarily low probabilities of error only if R is less than C. - -A strong converse theorem, proven by Wolfowitz in 1957, states that, - - - -P_e \geq 1- \frac{4A}{n(R-C)^2} - e^{-\frac{n(R-C)}{2}} - - - -for some finite positive constant $A$. While the weak converse states that the error probability is bounded away from zero as $n$ goes to infinity, the strong converse states that the error goes to 1. Thus, $C$ is a sharp threshold between perfectly reliable and completely unreliable communication. - -We assume that the channel is memoryless, but its transition probabilities change with time, in a fashion known at the transmitter as well as the receiver. - -Then the channel capacity is given by - - - -C=\lim \inf \max_{p^{(X_1)},p^{(X_2)},...}\frac{1}{n}\sum_{i=1}^nI(X_i;Y_i). - - - -The maximum is attained at the capacity achieving distributions for each respective channel.
That is, - - - -C=\lim \inf \frac{1}{n}\sum_{i=1}^n C_i - - - -where $C_i$ is the capacity of the ith channel. - -The proof runs through in almost the same way as that of the channel coding theorem. Achievability follows from random coding with each symbol chosen randomly from the capacity achieving distribution for that particular channel. Typicality arguments use the definition of typical sets for non-stationary sources defined in the asymptotic equipartition property article. - -The technicality of lim inf comes into play when $\frac{1}{n}\sum_{i=1}^n C_i$ does not converge. diff --git a/wiki/wikipedia/3394.txt b/wiki/wikipedia/3394.txt deleted file mode 100644 index 7972c50af3dafd82e32b165cea19c48237050a45..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3394.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematics, the Kneser–Tits problem, introduced by Jacques Tits based on a suggestion by Martin Kneser, asks whether the Whitehead group W(G,K) of a semisimple simply connected isotropic algebraic group G over a field K is trivial. The Whitehead group is the quotient of the rational points of G by the normal subgroup generated by K-subgroups isomorphic to the additive group. - -A special case of the Kneser–Tits problem asks for which fields the Whitehead group of a semisimple almost simple simply connected isotropic algebraic group is always trivial. - -Platonov showed that this Whitehead group is trivial for local fields K, and gave examples of fields for which it is not always trivial. For global fields the combined work of several authors shows that this Whitehead group is always trivial. diff --git a/wiki/wikipedia/3395.txt b/wiki/wikipedia/3395.txt deleted file mode 100644 index af6e91fa55a3ebd35355c3e7bf3ff3b5cc960846..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3395.txt +++ /dev/null @@ -1,52 +0,0 @@ -Kőnig's lemma or Kőnig's infinity lemma is a theorem in graph theory due to the Hungarian mathematician Dénes Kőnig, who published it in 1927. It gives a sufficient condition for an infinite graph to have an infinitely long path. The computability aspects of this theorem have been thoroughly investigated by researchers in mathematical logic, especially in computability theory. This theorem also has important roles in constructive mathematics and proof theory. - -Let G be a connected, locally finite, infinite graph (this means: any two vertices can be connected by a path, the graph has infinitely many vertices, and each vertex is adjacent to only finitely many other vertices). Then G contains a ray: a simple path (a path with no repeated vertices) that starts at one vertex and continues from it through infinitely many vertices. - -A common special case of this is that every infinite tree contains either a vertex of infinite degree or an infinite simple path. - -Start with any vertex v1. Every one of the infinitely many vertices of G can be reached from v1 with a simple path, and each such path must start with one of the finitely many vertices adjacent to v1. There must be one of those adjacent vertices through which infinitely many vertices can be reached without going through v1. If there were not, then the entire graph would be the union of finitely many finite sets, and thus finite, contradicting the assumption that the graph is infinite. We may thus pick one of these vertices and call it v2. - -Now infinitely many vertices of G can be reached from v2 with a simple path which does not include the vertex v1.
Each such path must start with one of the finitely many vertices adjacent to v2. So an argument similar to the one above shows that there must be one of those adjacent vertices through which infinitely many vertices can be reached; pick one and call it v3. - -Continuing in this fashion, an infinite simple path can be constructed using mathematical induction and a weak version of the axiom of dependent choice. At each step, the induction hypothesis states that there are infinitely many nodes reachable by a simple path from a particular node vi that does not go through one of a finite set of vertices. The induction argument is that one of the vertices adjacent to vi satisfies the induction hypothesis, even when vi is added to the finite set. - -The result of this induction argument is that for all n it is possible to choose a vertex vn as the construction describes. The set of vertices chosen in the construction is then a ray in the graph, because each one was chosen to be adjacent to the previous one, and the construction guarantees that the same vertex is never chosen twice. - -The computability aspects of Kőnig's lemma have been thoroughly investigated. The form of Kőnig's lemma most convenient for this purpose is the one which states that any infinite finitely branching subtree of $\omega^{<\omega}$ has an infinite path. Here $\omega$ denotes the set of natural numbers (thought of as an ordinal number) and $\omega^{<\omega}$ the tree whose nodes are all finite sequences of natural numbers, where the parent of a node is obtained by removing the last element from the sequence. Each finite sequence can be identified with a partial function from $\omega$ to itself, and each infinite path can be identified with a total function. This allows for an analysis using the techniques of computability theory. - -A subtree of $\omega^{<\omega}$ in which each sequence has only finitely many immediate extensions (that is, the tree has finite degree when viewed as a graph) is called finitely branching. Not every infinite subtree of $\omega^{<\omega}$ has an infinite path, but Kőnig's lemma shows that any infinite finitely branching subtree must have such a path. - -For any subtree T of $\omega^{<\omega}$ the notation Ext(T) denotes the set of nodes of T through which there is an infinite path. Even when T is computable the set Ext(T) may not be computable. Every subtree T of -$$ -\omega^{<\omega} -$$ that has a path has a path computable from Ext(T). - -It is known that there are non-finitely branching computable subtrees of $\omega^{<\omega}$ that have no arithmetical path, and indeed no hyperarithmetical path. However, every computable subtree of $\omega^{<\omega}$ with a path must have a path computable from Kleene's O, the canonical $\Pi^1_1$ complete set. This is because the set Ext(T) is always $\Sigma^1_1$ (see analytical hierarchy) when T is computable. - -A finer analysis has been conducted for computably bounded trees. A subtree of $\omega^{<\omega}$ is called computably bounded or recursively bounded if there is a computable function f from $\omega$ to $\omega$ such that for every sequence in the tree and every n, the nth element of the sequence is at most f(n). Thus f gives a bound for how “wide” the tree is. The following basis theorems apply to infinite, computably bounded, computable subtrees of $\omega^{< \omega}$. - -* Any such tree has a path computable from $0'$, the canonical Turing complete set that can decide the halting problem. - -* Any such tree has a path that is low.
This is known as the low basis theorem. - -* Any such tree has a path that is hyperimmune-free. This means that any function computable from the path is dominated by a computable function. - -* For any noncomputable subset X of $\omega$ the tree has a path that does not compute X. - -A weak form of Kőnig's lemma, which states that every infinite binary tree has an infinite branch, is used to define the subsystem WKL0 of second-order arithmetic. This subsystem has an important role in reverse mathematics. Here a binary tree is one in which every term of every sequence in the tree is 0 or 1, which is to say the tree is computably bounded via the constant function 2. The full form of Kőnig's lemma is not provable in WKL0, but is equivalent to the stronger subsystem ACA0. - -The proof given above is not generally considered to be constructive, because at each step it uses a proof by contradiction to establish that there exists an adjacent vertex from which infinitely many other vertices can be reached, and because of the reliance on a weak form of the axiom of choice. Facts about the computational aspects of the lemma suggest that no proof can be given that would be considered constructive by the main schools of constructive mathematics. - -The fan theorem of Brouwer is, from a classical point of view, the contrapositive of a form of Kőnig's lemma. A subset S of $\{0,1\}^{<\omega}$ is called a bar if any function from $\omega$ to the set $\{0,1\}$ has some initial segment in S. A bar is detachable if every sequence is either in the bar or not in the bar (this assumption is required because the theorem is ordinarily considered in situations where the law of the excluded middle is not assumed). A bar is uniform if there is some number N so that any function from $\omega$ to $\{0,1\}$ has an initial segment in the bar of length no more than $N$. Brouwer's fan theorem says that any detachable bar is uniform. - -This can be proven in a classical setting by considering the bar as an open covering of the compact topological space $\{0,1\}^\omega$. Each sequence in the bar represents a basic open set of this space, and these basic open sets cover the space by assumption. By compactness, this cover has a finite subcover. The N of the fan theorem can be taken to be the length of the longest sequence whose basic open set is in the finite subcover. This topological proof can be used in classical mathematics to show that the following form of Kőnig's lemma holds: for any natural number k, any infinite subtree of the tree $\{0,\ldots,k\}^{<\omega}$ has an infinite path. - -Kőnig's lemma may be considered to be a choice principle; the first proof above illustrates the relationship between the lemma and the axiom of dependent choice. At each step of the induction, a vertex with a particular property must be selected. Although it is proved that at least one appropriate vertex exists, if there is more than one suitable vertex there may be no canonical choice. In fact, the full strength of the axiom of dependent choice is not needed; as described below, the axiom of countable choice suffices. - -If the graph is countable, its vertices can be well-ordered and one can canonically choose the smallest suitable vertex. In this case, Kőnig's lemma is provable in second-order arithmetic with arithmetical comprehension and, a fortiori, in ZF set theory (without choice).
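The role played by these successive choices can be made explicit in code. Below is a Python sketch of the construction from the proof above (the function names are hypothetical; reaches_infinitely_many is an oracle that is not computable in general, which is exactly what the computability results above measure):

```python
from itertools import islice

def konig_ray(neighbors, reaches_infinitely_many, start):
    # Sketch of the proof's construction.  neighbors(v) returns the finite
    # list of vertices adjacent to v; reaches_infinitely_many(w, banned)
    # decides whether infinitely many vertices are reachable from w by
    # paths avoiding the set banned -- an oracle, not computable in general.
    current, banned = start, {start}
    while True:
        # canonical choice for countable graphs: the smallest suitable vertex
        current = min(w for w in neighbors(current)
                      if w not in banned and reaches_infinitely_many(w, banned))
        banned.add(current)
        yield current

# Example: the one-way infinite path 0 - 1 - 2 - ...; here "w > max(banned)"
# is a valid oracle because the construction only ever bans an initial segment.
ray = konig_ray(lambda v: [v - 1, v + 1] if v > 0 else [1],
                lambda w, banned: w > max(banned),
                0)
print(list(islice(ray, 5)))   # [1, 2, 3, 4, 5]
```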
- -Kőnig's lemma is essentially the restriction of the axiom of dependent choice to entire relations R such that for each x there are only finitely many z such that xRz. Although the axiom of choice is, in general, stronger than the principle of dependent choice, this restriction of dependent choice is equivalent to a restriction of the axiom of choice. - -In particular, when the branching at each node is done on a finite subset of an arbitrary set not assumed to be countable, the form of Kőnig's lemma that says "Every infinite finitely branching tree has an infinite path" is equivalent to the principle that every countable set of finite sets has a choice function, that is to say, the axiom of countable choice for finite sets. This form of the axiom of choice (and hence of Kőnig's lemma) is not provable in ZF set theory. - -In the category of sets, the inverse limit of any inverse system of non-empty finite sets is non-empty. This may be seen as a generalization of Kőnig's lemma and can be proved with Tychonoff's theorem, viewing the finite sets as compact discrete spaces, and then using the finite intersection property characterization of compactness. diff --git a/wiki/wikipedia/3396.txt b/wiki/wikipedia/3396.txt deleted file mode 100644 index a3490fe422b95e8c66c201c8e0ee314ab632e19d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3396.txt +++ /dev/null @@ -1,54 +0,0 @@ -In mathematics, the Parseval-Gutzmer formula states that, if $f$ is an analytic function on a closed disk of radius r with Taylor series -$$ -f(z) = \sum^\infty_{k = 0} a_k z^k, -$$ - -then for $z = re^{i\theta}$ on the boundary of the disk, -$$ -\int^{2\pi}_0 |f(re^{i\theta}) |^2 \mathrm{d}\theta = 2\pi \sum^\infty_{k = 0} |a_k|^2r^{2k}, -$$ - -which may also be written as -$$ -\frac{1}{2\pi }\int^{2\pi}_0 |f(re^{i\theta}) |^2 \mathrm{d}\theta = \sum^\infty_{k = 0} |a_k r^k|^2. -$$ - -The Cauchy Integral Formula for coefficients states that, under the above conditions, -$$ -a_n = \frac{1}{2\pi i} \int^{}_{\gamma} \frac{f(z)}{z^{n+1}} \mathrm{d} z -$$ - -where γ is defined to be the circular path around the origin of radius r.
Also for $x \in \Complex,$ we have $\overline{x}x = |x|^2.$ Applying both of these facts to the problem starting with the second fact: - - \begin{align} - -\int^{2\pi}_0 \left |f \left (re^{i\theta} \right ) \right |^2 \mathrm{d}\theta &= \int^{2\pi}_0 f \left (re^{i\theta} \right ) \overline{f \left (re^{i\theta} \right )} \mathrm{d}\theta\\[6pt] - -&= \int^{2\pi}_0 f \left (re^{i\theta} \right ) \left (\sum^\infty_{k = 0} \overline{a_k \left (re^{i\theta} \right )^k} \right ) \mathrm{d}\theta && \text{Using Taylor expansion on the conjugate} \\[6pt] - -&= \int^{2\pi}_0 f \left (re^{i\theta} \right ) \left (\sum^\infty_{k = 0} \overline{a_k} \left (re^{-i\theta} \right )^k \right ) \mathrm{d}\theta \\[6pt] - -&= \sum^\infty_{k = 0} \int^{2\pi}_0 f \left (re^{i\theta} \right ) \overline{a_k} \left (re^{-i\theta} \right )^k \mathrm{d} \theta && \text{Uniform convergence of Taylor series} \\[6pt] - -&= \sum^\infty_{k = 0} \left (2\pi \overline{a_k} r^{2k} \right ) \left (\frac{1}{2\pi i}\int^{2\pi}_0 \frac{f \left (re^{i\theta} \right )}{(r e^{i\theta})^{k+1}} rie^{i\theta} \mathrm{d}\theta \right ) \\ - -& = \sum^\infty_{k = 0} \left (2\pi \overline{a_k} r^{2k} \right ) a_k && \text{Applying Cauchy Integral Formula} \\ - -& = {2\pi} \sum^\infty_{k = 0} {|a_k|^2 r^{2k}} - -\end{align} - -Using this formula, it is possible to show that -$$ -\sum^\infty_{k = 0} |a_k|^2r^{2k} \leqslant M_r^2 -$$ - -where -$$ -M_r = \sup\{|f(z)| : |z| = r\}. -$$ - -This is done by using the integral -$$ -\int^{2\pi}_0 \left |f \left (re^{i\theta} \right ) \right |^2 \mathrm{d}\theta \leqslant 2\pi \left (\sup_{\theta \in [0,2\pi)} \left |f \left (re^{i\theta} \right ) \right | \right )^2 = 2\pi \left (\sup_{|z|=r} |f(z)| \right )^2 = 2\pi M_r^2 -$$ diff --git a/wiki/wikipedia/3397.txt b/wiki/wikipedia/3397.txt deleted file mode 100644 index c16790cd194b0c403ffb884f684e7ffc642d6768..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3397.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, the Freudenthal spectral theorem is a result in Riesz space theory proved by Hans Freudenthal in 1936. It roughly states that any element dominated by a positive element in a Riesz space with the principal projection property can in a sense be approximated uniformly by simple functions. - -Numerous well-known results may be derived from the Freudenthal spectral theorem. The well-known Radon–Nikodym theorem, the validity of the Poisson formula and the spectral theorem from the theory of normal operators can all be shown to follow as special cases of the Freudenthal spectral theorem. - -Let e be any positive element in a Riesz space E. A positive element p of E is called a component of e if $p\wedge(e-p)=0$. If $p_1,p_2,\ldots,p_n$ are pairwise disjoint components of e, any real linear combination of $p_1,p_2,\ldots,p_n$ is called an e-simple function. - -The Freudenthal spectral theorem states: Let E be any Riesz space with the principal projection property and e any positive element in E. Then for any element f in the principal ideal generated by e, there exist sequences $\{s_n\}$ and $\{t_n\}$ of e-simple functions, such that $\{s_n\}$ is monotone increasing and converges e-uniformly to f, and $\{t_n\}$ is monotone decreasing and converges e-uniformly to f. - -Let $(X,\Sigma)$ be a measure space and $M_\sigma$ the real space of signed $\sigma$-additive measures on $(X,\Sigma)$.
It can be shown that $M_\sigma$ is a Dedekind complete Banach lattice with the total variation norm, and hence has the principal projection property. For any positive measure $\mu$, $\mu$-simple functions (as defined above) can be shown to correspond exactly to $\mu$-measurable simple functions on $(X,\Sigma)$ (in the usual sense). Moreover, since by the Freudenthal spectral theorem any measure $\nu$ in the band generated by $\mu$ can be monotonically approximated from below by $\mu$-measurable simple functions on $(X,\Sigma)$, by Lebesgue's monotone convergence theorem $\nu$ can be shown to correspond to a function in $L^1(X,\Sigma,\mu)$; this correspondence establishes an isometric lattice isomorphism between the band generated by $\mu$ and the Banach lattice $L^1(X,\Sigma,\mu)$. diff --git a/wiki/wikipedia/3398.txt b/wiki/wikipedia/3398.txt deleted file mode 100644 index ce9d58b6c150e05a20352e0e45ab8b14a28a5785..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3398.txt +++ /dev/null @@ -1,10 +0,0 @@ -In mathematics, the measurable Riemann mapping theorem is a theorem proved in 1960 by Lars Ahlfors and Lipman Bers in complex analysis and geometric function theory. Contrary to its name, it is not a direct generalization of the Riemann mapping theorem, but instead a result concerning quasiconformal mappings and solutions of the Beltrami equation. The result was prefigured by earlier results of Charles Morrey from 1938 on quasi-linear elliptic partial differential equations. - -The theorem of Ahlfors and Bers states that if μ is a bounded measurable function on C with $\|\mu\|_\infty < 1$, then there is a unique solution f of the Beltrami equation -$$ - \partial_{\overline{z}} f(z) = \mu(z) \partial_z f(z) -$$ - -for which f is a quasiconformal homeomorphism of C fixing the points 0, 1 and ∞. A similar result is true with C replaced by the unit disk D. Their proof used the Beurling transform, a singular integral operator. diff --git a/wiki/wikipedia/3399.txt b/wiki/wikipedia/3399.txt deleted file mode 100644 index 174b1b05f67a1b8bb028052442ce02fdd931d73f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3399.txt +++ /dev/null @@ -1,15 +0,0 @@ -Purchase-to-pay, often abbreviated to P2P and also called req to check/cheque, refers to the business processes that cover activities of requesting (requisitioning), purchasing, receiving, paying for and accounting for goods and services. Most organisations have a formal process and specialist staff to control this activity so that spending is not wasteful or fraudulent. - -Purchase-to-pay systems automate the full purchase-to-payment process, connecting procurement and invoicing operations through an intertwined business flow that automates the process from identification of a need, planning and budgeting, through to procurement and payment. - -Key benefits are increased financial and procurement visibility, efficiency, cost savings and control. Automation allows for reduced processing times and straight-through processing, where incoming invoices are handled without any manual intervention. - -Purchase-to-pay systems are designed to provide organizations with control and visibility over the entire lifecycle of a transaction – from the way an item is ordered to the way that the final invoice is processed – providing full insight into cash flow and financial commitments. Purchase-to-pay is now deemed an important tool for the proper implementation of Resource Accounting and Budgeting, not least by UK Government Departments such as HM Treasury.
Financial commitments are understood at the point they are committed to rather than when invoiced. - -Organizations automate invoice processing and purchasing policies and procedures to bring financial rigor and process efficiency to the business of buying. - -Purchase order (PO) and non-PO spending – including capital, credit card, and reimbursable spending – can be captured and controlled through automated P2P systems. Finance departments can also enforce internal spending controls and have instant access to data that tells them who is spending, what they are buying and paying for, and with which vendors. - -As a result, efficiency and cost-saving benefits can be substantial. - -The term emerged in the 1990s and is one of a number of buzz phrases (like B2B, B2C, G2C etc.) that emerged as Internet applications became used more widely in business. Although it does not necessarily refer directly to the application of technology to the purchasing process, it is most often used in relation to applications like e-procurement and ERP purchasing and payment modules. diff --git a/wiki/wikipedia/34.txt b/wiki/wikipedia/34.txt deleted file mode 100644 index 7a98d218b2f35e3be6dc80e47a8318615e47dcd5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/34.txt +++ /dev/null @@ -1,13 +0,0 @@ -In mathematical logic, a tolerant sequence is a sequence $T_1,\ldots,T_n$ of formal theories such that there are consistent extensions $S_1,\ldots,S_n$ of these theories with each $S_{i+1}$ interpretable in $S_i$. Tolerance naturally generalizes from sequences of theories to trees of theories. Weak interpretability can be shown to be a special, binary case of tolerance. - -This concept, together with its dual concept of cotolerance, was introduced by Japaridze in 1992, who also proved that, for Peano arithmetic and any stronger theories with effective axiomatizations, tolerance is equivalent to $\Pi_1$-consistency. diff --git a/wiki/wikipedia/340.txt b/wiki/wikipedia/340.txt deleted file mode 100644 index 2d1acbe7aadf57c8a42766a3d1bb118ea8d80d98..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/340.txt +++ /dev/null @@ -1,229 +0,0 @@ -Fermat's Last Theorem is a theorem in number theory, originally stated by Pierre de Fermat in 1637 and proved by Andrew Wiles in 1995. The statement of the theorem involves an integer exponent n larger than 2. In the centuries following the initial statement of the result and before its general proof, various proofs were devised for particular values of the exponent n. Several of these proofs are described below, including Fermat's proof in the case n = 4, which is an early example of the method of infinite descent. - -Fermat's Last Theorem states that no three positive integers (a, b, c) can satisfy the equation an + bn = cn for any integer value of n greater than two. (For n equal to 1, the equation is a linear equation and has a solution for every possible a, b. For n equal to 2, the equation has infinitely many solutions, the Pythagorean triples.) - -A solution (a, b, c) for a given n leads to a solution for all the factors of n: if h is a factor of n then there is an integer g such that n = gh. Then (ag, bg, cg) is a solution for the exponent h: - -(ag)h + (bg)h = (cg)h. - -Therefore, to prove that Fermat's equation has no solutions for n > 2, it suffices to prove that it has no solutions for n = 4 and for all odd primes p.
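This reduction is elementary enough to state in code. A minimal Python sketch (illustrative only; the sanity check uses n = 2, where solutions do exist):

```python
def reduced_solution(a, b, c, n, h):
    # Given a hypothetical solution a^n + b^n = c^n and a divisor h of n,
    # return the corresponding solution for the exponent h.
    assert n % h == 0
    g = n // h
    return a**g, b**g, c**g

a, b, c = 3, 4, 5                      # 3^2 + 4^2 = 5^2
x, y, z = reduced_solution(a, b, c, 2, 1)
assert x**1 + y**1 == z**1             # (9, 16, 25) solves the case h = 1
```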
- -For any such odd exponent p, every positive-integer solution of the equation ap + bp = cp corresponds to a general integer solution to the equation ap + bp + cp = 0. For example, if (3, 5, 8) solves the first equation, then (3, 5, -8) solves the second. Conversely, any solution of the second equation corresponds to a solution to the first. The second equation is sometimes useful because it makes the symmetry between the three variables a, b and c more apparent. - -If two of the three numbers (a, b, c) can be divided by a fourth number d, then all three numbers are divisible by d. For example, if a and c are divisible by d = 13, then b is also divisible by 13. This follows from the equation - -bn = cn - an - -If the right-hand side of the equation is divisible by 13, then the left-hand side is also divisible by 13. Let g represent the greatest common divisor of a, b, and c. Then (a, b, c) may be written as a = gx, b = gy, and c = gz where the three numbers (x, y, z) are pairwise coprime. In other words, the greatest common divisor (GCD) of each pair equals one - -GCD(x, y) = GCD(x, z) = GCD(y, z) = 1 - -If (a, b, c) is a solution of Fermat's equation, then so is (x, y, z), since the equation - -an + bn = cn = gnxn + gnyn = gnzn - -implies the equation - -xn + yn = zn. - -A pairwise coprime solution (x, y, z) is called a primitive solution. Since every solution to Fermat's equation can be reduced to a primitive solution by dividing by their greatest common divisor g, Fermat's Last Theorem can be proven by demonstrating that no primitive solutions exist. - -Integers can be divided into even and odd, those that are evenly divisible by two and those that are not. The even integers are ..., -4, -2, 0, 2, 4, ..., whereas the odd integers are ..., -3, -1, 1, 3, .... The property of whether an integer is even (or not) is known as its parity. If two numbers are both even or both odd, they have the same parity. By contrast, if one is even and the other odd, they have different parity. - -The addition, subtraction and multiplication of even and odd integers obey simple rules. The addition or subtraction of two even numbers or of two odd numbers always produces an even number, e.g., 4 + 6 = 10 and 3 + 5 = 8. Conversely, the addition or subtraction of an odd and an even number is always odd, e.g., 3 + 8 = 11. The multiplication of two odd numbers is always odd, but the multiplication of an even number with any number is always even. An odd number raised to a power is always odd and an even number raised to a power is always even. - -In any primitive solution (x, y, z) to the equation xn + yn = zn, one number is even and the other two numbers are odd. They cannot all be even, for then they would not be coprime; they could all be divided by two. However, they cannot all be odd, since the sum of two odd numbers xn + yn is never an odd number zn. Therefore, at least one number must be even and at least one number must be odd. It follows that the third number is also odd, because the sum of an even and an odd number is itself odd. - -The fundamental theorem of arithmetic states that any natural number can be written in only one way (uniquely) as the product of prime numbers. For example, 42 equals the product of prime numbers 2×3×7, and no other product of prime numbers equals 42, aside from trivial re-arrangements such as 7×3×2. This unique factorization property is the basis on which much of number theory is built.
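The factorization in the example above is easy to compute. A short Python sketch (plain trial division; the uniqueness asserted by the theorem is, of course, not something the code verifies):

```python
def prime_factorization(n):
    # Trial division: returns the multiset of prime factors of n in
    # increasing order.  By the fundamental theorem of arithmetic this
    # multiset is unique.
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factorization(42))   # [2, 3, 7], and no other multiset works
```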
- -One consequence of this unique factorization property is that if a pth power of a number equals a product such as - -xp = uv - -and if u and v are coprime (share no prime factors), then u and v are themselves the pth power of two other numbers, u = rp and v = sp. - -As described below, however, some number systems do not have unique factorization. This fact led to the failure of Lamé's 1847 general proof of Fermat's Last Theorem. - -Since the time of Sophie Germain, Fermat's Last Theorem has been separated into two cases that are proven separately. The first case (case I) is to show that there are no primitive solutions (x, y, z) to the equation xp + yp = zp under the condition that p does not divide the product xyz. The second case (case II) corresponds to the condition that p does divide the product xyz. Since x, y, and z are pairwise coprime, p divides only one of the three numbers. - -Only one mathematical proof by Fermat has survived, in which Fermat uses the technique of infinite descent to show that the area of a right triangle with integer sides can never equal the square of an integer. This result is known as Fermat's right triangle theorem. As shown below, his proof is equivalent to demonstrating that the equation - -x4 - y4 = z2 - -has no primitive solutions in integers (no pairwise coprime solutions). In turn, this is sufficient to prove Fermat's Last Theorem for the case n = 4, since the equation a4 + b4 = c4 can be written as c4 - b4 = (a2)2. Alternative proofs of the case n = 4 were developed later by Frénicle de Bessy, Euler, Kausler, and others. - -Let the right triangle have sides (u, v, w), where the area equals uv/2 and, by the Pythagorean theorem, u2 + v2 = w2. If the area were equal to the square of an integer s, - -uv/2 = s2 - -giving - -2uv = 4s2 - --2uv = -4s2. - -Adding u2 + v2 = w2 to these equations gives - -u2 + 2uv + v2 = w2 + 4s2 - -u2 - 2uv + v2 = w2 - 4s2, - -which can be expressed as - -(u + v)2 = w2 + 4s2 - -(u - v)2 = w2 - 4s2. - -Multiplying these equations together yields - -(u2 - v2)2 = w4 - 16s4. - -But as Fermat proved, there can be no integer solution to the equation - -x4 - y4 = z2 - -of which this is a special case with z = (u2 − v2), x = w and y = 2s. - -The first step of Fermat's proof is to factor the left-hand side - -(x2 + y2)(x2 - y2) = z2 - -Since x and y are coprime (this can be assumed because otherwise the factors could be cancelled), the greatest common divisor of x2 + y2 and x2 - y2 is either 2 (case A) or 1 (case B). The theorem is proven separately for these two cases. - -In case A, both x and y are odd and z is even. Since (y2, z, x2) form a primitive Pythagorean triple, they can be written - -z = 2de - -y2 = d2 - e2 - -x2 = d2 + e2 - -where d and e are coprime and d > e > 0. Thus, - -x2y2 = d4 - e4 - -which produces another solution (d, e, xy) that is smaller (0 < d < x). As before, there must be a lower bound on the size of solutions, while this argument always produces a smaller solution than any given one, and thus the original solution is impossible. - -In case B, the two factors are coprime. Since their product is a square z2, they must each be a square - -x2 + y2 = s2 - -x2 - y2 = t2 - -The numbers s and t are both odd, since s2 + t2 = 2 x2, an even number, and since x and y cannot both be even. Therefore, the sum and difference of s and t are likewise even numbers, so we define integers u and v as - -u = (s + t)/2 - -v = (s - t)/2 - -Since s and t are coprime, so are u and v; only one of them can be even.
Since y2 = 2uv, exactly one of them is even. For illustration, let u be even; since y2 = 2uv with u and v coprime, the coprime numbers u/2 and v have the square (y/2)2 as their product, so they may be written as u = 2m2 and v = k2. Since (u, v, x) form a primitive Pythagorean triple - -(s2 + t2)/2 = u2 + v2 = x2 - -they can be expressed in terms of smaller integers d and e using Euclid's formula - -u = 2de - -v = d2 - e2 - -x = d2 + e2 - -Since u = 2m2 = 2de, and since d and e are coprime, they must be squares themselves, d = g2 and e = h2. This gives the equation - -v = d2 - e2 = g4 - h4 = k2 - -The solution (g, h, k) is another solution to the original equation, but smaller (0 < g < d < x). Applying the same procedure to (g, h, k) would produce another solution, still smaller, and so on. But this is impossible, since natural numbers cannot be shrunk indefinitely. Therefore, the original solution (x, y, z) was impossible. - -Fermat mentioned the case n = 3 in letters of 1636, 1640 and 1657. - -Euler gave a proof of the case n = 3 in a letter sent to Goldbach on 4 August 1753. - -The case n = 3 was proven by Euler in 1770. Independent proofs were published by several other mathematicians, including Kausler, Legendre, Calzolari, Lamé, Tait, Günther, Gambioli, Krey, Rychlik, Stockhaus, Carmichael, van der Corput, Thue, and Duarte. - -As Fermat did for the case n = 4, Euler used the technique of infinite descent. The proof assumes a solution (x, y, z) to the equation x3 + y3 + z3 = 0, where the three non-zero integers x, y, and z are pairwise coprime and not all positive. One of the three must be even, whereas the other two are odd. Without loss of generality, z may be assumed to be even. - -Since x and y are both odd, they cannot be equal. If x = y, then 2x3 = -z3, which implies that x is even, a contradiction. - -Since x and y are both odd, their sum and difference are both even numbers - -2u = x + y - -2v = x - y - -where the non-zero integers u and v are coprime and have different parity (one is even, the other odd). Since x = u + v and y = u - v, it follows that - --z3 = (u + v)3 + (u - v)3 = 2u(u2 + 3v2) - -Since u and v have opposite parity, u2 + 3v2 is always an odd number. Therefore, since z is even, u is even and v is odd. Since u and v are coprime, the greatest common divisor of 2u and u2 + 3v2 is either 1 (case A) or 3 (case B). - -In case A, the two factors of -z3 are coprime. This implies that three does not divide u and that the two factors are cubes of two smaller numbers, r and s - -2u = r3 - -u2 + 3v2 = s3 - -Since u2 + 3v2 is odd, so is s. A crucial lemma shows that if s is odd and if it satisfies an equation s3 = u2 + 3v2, then it can be written in terms of two integers e and f - -s = e2 + 3f2 - -so that - -u = e ( e2 - 9f2) - -v = 3f ( e2 - f2) - -u and v are coprime, so e and f must be coprime, too. Since u is even and v odd, e is even and f is odd. Thus - -r3 = 2u = 2e (e - 3f)(e + 3f) - -The factors 2e, (e - 3f), and (e + 3f) are coprime, since 3 cannot divide e: if e were divisible by 3, then 3 would divide u as well as v = 3f(e2 - f2), violating the designation of u and v as coprime. Since the three factors on the right-hand side are coprime, they must individually equal cubes of smaller integers - --2e = k3 - -e - 3f = l3 - -e + 3f = m3 - -which yields a smaller solution k3 + l3 + m3 = 0. Therefore, by the argument of infinite descent, the original solution (x, y, z) was impossible. - -In case B, the greatest common divisor of 2u and u2 + 3v2 is 3. That implies that 3 divides u, and one may express u = 3w in terms of a smaller integer, w.
Since $u$ is divisible by 4, so is $w$; hence, $w$ is also even. Since $u$ and $v$ are coprime, so are $v$ and $w$. Therefore, neither 3 nor 4 divides $v$. - -Substituting $u = 3w$ in the equation for $z^3$ yields - -$-z^3 = 6w(9w^2 + 3v^2) = 18w(3w^2 + v^2)$ - -Because $v$ and $w$ are coprime, and because 3 does not divide $v$, $18w$ and $3w^2 + v^2$ are also coprime. Therefore, since their product is a cube, they are each the cube of smaller integers, $r$ and $s$ - -$18w = r^3$ - -$3w^2 + v^2 = s^3$ - -By the lemma above, since $s$ is odd and its cube is equal to a number of the form $3w^2 + v^2$, it too can be expressed in terms of smaller coprime numbers, $e$ and $f$. - -$s = e^2 + 3f^2$ - -A short calculation shows that - -$v = e(e^2 - 9f^2)$ - -$w = 3f(e^2 - f^2)$ - -Thus, $e$ is odd and $f$ is even, because $v$ is odd. The expression for $18w$ then becomes - -$r^3 = 18w = 54f(e^2 - f^2) = 54f(e + f)(e - f) = 3^3 \times 2f(e + f)(e - f).$ - -Since $3^3$ divides $r^3$ we have that 3 divides $r$, so $(r/3)^3$ is an integer that equals $2f(e + f)(e - f)$. Since $e$ and $f$ are coprime, so are the three factors $2f$, $e + f$, and $e - f$; therefore, they are each the cube of smaller integers, $k$, $l$, and $m$. - -$-2f = k^3$ - -$e + f = l^3$ - -$e - f = m^3$ - -which yields a smaller solution $k^3 + l^3 + m^3 = 0$. Therefore, by the argument of infinite descent, the original solution $(x, y, z)$ was impossible. - -Fermat's Last Theorem for $n = 5$ states that no three coprime integers $x$, $y$ and $z$ can satisfy the equation - -$x^5 + y^5 + z^5 = 0$ - -This was proven independently by Dirichlet and Legendre around 1825. Alternative proofs were developed by Gauss, Lebesgue, Lamé, Gambioli, and others. - -The case $n = 7$ was proved by Lamé in 1839. His rather complicated proof was simplified in 1840 by Victor-Amédée Lebesgue, and still simpler proofs were published by Angelo Genocchi in 1864, 1874 and 1876. Alternative proofs were developed by Théophile Pépin and Edmond Maillet. - -Fermat's Last Theorem has also been proven for the exponents $n = 6$, 10, and 14. Proofs for $n = 6$ have been published by Kausler, Swift, and Breusch. Similarly, Dirichlet and Terjanian each proved the case $n = 14$, while Kapferer and Breusch each proved the case $n = 10$. Strictly speaking, these proofs are unnecessary, since these cases follow from the proofs for $n = 3$, 5, and 7, respectively. Nevertheless, the reasoning of these even-exponent proofs differs from their odd-exponent counterparts. Dirichlet's proof for $n = 14$ was published in 1832, before Lamé's 1839 proof for $n = 7$. diff --git a/wiki/wikipedia/3400.txt b/wiki/wikipedia/3400.txt deleted file mode 100644 index c3afb97e3054051af2b919035757dbbef490a7a4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3400.txt +++ /dev/null @@ -1,49 +0,0 @@ -In mathematics, in the areas of topology and functional analysis, the Anderson–Kadec theorem states that any two infinite-dimensional, separable Banach spaces, or, more generally, Fréchet spaces, are homeomorphic as topological spaces. The theorem was proved by Mikhail Kadets (1966) and Richard Davis Anderson. 
- -Every infinite-dimensional, separable Fréchet space is homeomorphic to $\R^{\N},$ the Cartesian product of countably many copies of the real line $\R.$ - -Kadec norm: A norm $\|\cdot\|$ on a normed linear space $X$ is called a Kadec norm with respect to a total subset $A \subseteq X^*$ of the dual space $X^*$ if for each sequence $x_n\in X$ the following condition is satisfied: - -* If $\lim_{n\to\infty} x^*\left(x_n\right) = x^*(x_0)$ for $x^* \in A$ and $\lim_{n\to\infty} \left\|x_n\right\| = \left\|x_0\right\|,$ then $\lim_{n\to\infty} \left\|x_n - x_0\right\| = 0.$ - -Eidelheit theorem: A Fréchet space $E$ is either isomorphic to a Banach space, or has a quotient space isomorphic to $\R^{\N}.$ - -Kadec renorming theorem: Every separable Banach space $X$ admits a Kadec norm with respect to a countable total subset $A \subseteq X^*$ of $X^*.$ The new norm is equivalent to the original norm $\|\cdot\|$ of $X.$ The set $A$ can be taken to be any weak-star dense countable subset of the unit ball of $X^*.$ - -In the argument below $E$ denotes an infinite-dimensional separable Fréchet space and $\simeq$ the relation of topological equivalence (existence of a homeomorphism). - -A starting point of the proof of the Anderson–Kadec theorem is Kadec's proof that any infinite-dimensional separable Banach space is homeomorphic to $\R^{\N}.$ - -By the Eidelheit theorem, it is enough to consider Fréchet spaces that are not isomorphic to a Banach space. In that case they have a quotient that is isomorphic to $\R^{\N}.$ A result of Bartle–Graves–Michael proves that then - -$$E \simeq Y \times \R^{\N}$$ - -for some Fréchet space $Y.$ - -On the other hand, $E$ is a closed subspace of a countable infinite product $X = \prod_{i=1}^{\infty} X_i$ of separable Banach spaces. The same result of Bartle–Graves–Michael applied to $X$ gives a homeomorphism - -$$X \simeq E \times Z$$ - -for some Fréchet space $Z.$ From Kadec's result the countable product of infinite-dimensional separable Banach spaces $X$ is homeomorphic to $\R^{\N}.$ - -The proof of the Anderson–Kadec theorem consists of the sequence of equivalences - -$$\begin{align} \R^{\N} &\simeq (E \times Z)^{\N}\\ &\simeq E^\N \times Z^{\N}\\ &\simeq E \times E^{\N} \times Z^{\N}\\ &\simeq E \times \R^{\N}\\ &\simeq Y \times \R^{\N} \times \R^{\N}\\ &\simeq Y \times \R^{\N} \\ &\simeq E \end{align}$$ diff --git a/wiki/wikipedia/3401.txt b/wiki/wikipedia/3401.txt deleted file mode 100644 index 821fa300956a393dd71f73c22d988ccf4e7be02a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3401.txt +++ /dev/null @@ -1,23 +0,0 @@ -In the mathematics of paper folding, map folding and stamp folding are two problems of counting the number of ways that a piece of paper can be folded. In the stamp folding problem, the paper is a strip of stamps with creases between them, and the folds must lie on the creases. In the map folding problem, the paper is a map, divided by creases into rectangles, and the folds must again lie only along these creases. - -Lucas credits the invention of the stamp folding problem to Émile Lemoine. Touchard provides several other early references. - -In the stamp folding problem, the paper to be folded is a strip of square or rectangular stamps, separated by creases, and the stamps can only be folded along those creases. 
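As a concrete illustration of this setup (a brute-force sketch of ours, not part of the original article; all function names are illustrative), a flat folding of a strip of n distinguishable stamps can be modeled as an assignment of stamps to stack levels in which no two creases on the same side of the stack cross:

```python
from itertools import permutations

def is_flat_folding(levels):
    """levels[i] is the vertical position of stamp i in the folded stack."""
    n = len(levels)
    # Crease k joins stamps k and k+1; consecutive creases alternate
    # between the two sides of the stack (encoded by k % 2).
    creases = [(min(levels[k], levels[k + 1]),
                max(levels[k], levels[k + 1]), k % 2) for k in range(n - 1)]
    for i in range(len(creases)):
        for j in range(i + 1, len(creases)):
            (a1, b1, s1), (a2, b2, s2) = creases[i], creases[j]
            # Two creases on the same side must nest or be disjoint.
            if s1 == s2 and (a1 < a2 < b1 < b2 or a2 < a1 < b2 < b1):
                return False
    return True

def count_foldings(n):
    return sum(is_flat_folding(p) for p in permutations(range(n)))

print([count_foldings(n) for n in range(1, 6)])  # [1, 2, 6, 16, 50]
```

Dividing these counts by n gives 1, 1, 2, 4, 10, matching the quotients listed below.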
- -In one commonly considered version of the problem, each stamp is considered to be distinguishable from each other stamp, so two foldings of a strip of stamps are considered equivalent only when they have the same vertical sequence of stamps. The resulting counts are divisible by the number n of stamps, and the quotients of this division are - -1, 1, 2, 4, 10, 24, 66, 174, 504, 1406, 4210, 12198, 37378, 111278, 346846, 1053874, ..., - -the number of topologically distinct ways that a half-infinite curve can make n crossings with a line, called "semimeanders". - -In the 1960s, John E. Koehler and W. F. Lunnon implemented algorithms that, at that time, could calculate these numbers for up to 28 stamps. - -However, the general problem of counting the number of ways to fold a map remains unsolved. The numbers of ways of folding an n × n map are known only for n ≤ 7. They are: - -1, 8, 1368, 300608, 186086600, 123912532224, 129950723279272. - -The map folding and stamp folding problems are related to a problem in the mathematics of origami of whether a square with a crease pattern can be folded to a flat figure. - -If a folding direction (either a mountain fold or a valley fold) is assigned to each crease of a strip of stamps, it is possible to test whether the result can be folded flat in polynomial time. - -Even for a one-dimensional strip of stamps, with its creases already labeled as mountain or valley folds, it is NP-hard to find a way to fold it that minimizes the maximum number of stamps that lie between the two stamps of any crease. diff --git a/wiki/wikipedia/3402.txt b/wiki/wikipedia/3402.txt deleted file mode 100644 index bb50506484d88c9af830ff19315c6a3d08c7bce8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3402.txt +++ /dev/null @@ -1,33 +0,0 @@ -Dia is free and open source general-purpose diagramming software, developed originally by Alexander Larsson. Dia uses a controlled single document interface (SDI) similar to GIMP and Inkscape. - -Dia has a modular design with several shape packages available for different needs: flowchart, network diagrams, circuit diagrams, and more. It does not restrict symbols and connectors of different categories from being placed together. - -Dia has special objects to help draw entity-relationship models, Unified Modeling Language (UML) diagrams, flowcharts, network diagrams, and simple electrical circuits. It is also possible to add support for new shapes by writing simple XML files, using a subset of Scalable Vector Graphics (SVG) to draw the shape. - -Dia loads and saves diagrams in a custom XML format which is, by default, gzipped to save space. It can print large diagrams spanning multiple pages and can also be scripted using the Python programming language. - -Dia can export diagrams to various formats including the following: - -* EPS (Encapsulated PostScript) - -* SVG (Scalable Vector Graphics) - -* DXF (Autocad's Drawing Interchange format) - -* CGM (Computer Graphics Metafile, defined by ISO standard 8632) - -* WMF (Windows Meta File) - -* PNG (Portable Network Graphics) - -* JPEG (Joint Photographic Experts Group) - -* VDX (Microsoft's XML for Visio Drawing) - -Dia was originally created by Alexander Larsson but he moved on to work on GNOME and other projects. James Henstridge then took over as the lead developer, but he also moved on to other projects. He was followed by Cyrille Chepelov and Lars Ræder Clausen in turn. - -Dia is maintained by a group of developers: Hans Breuer, Steffen Macke, and Sameer Sahasrabuddhe. 
- -Dia is written in C, and has an extension system, which also supports writing extensions in Python. diff --git a/wiki/wikipedia/3403.txt b/wiki/wikipedia/3403.txt deleted file mode 100644 index e18fd7950304ec7d9e3c31fd114327b4299a9783..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3403.txt +++ /dev/null @@ -1,39 +0,0 @@ -Euler's quadrilateral theorem or Euler's law on quadrilaterals, named after Leonhard Euler (1707–1783), describes a relation between the sides of a convex quadrilateral and its diagonals. It is a generalisation of the parallelogram law which in turn can be seen as a generalisation of the Pythagorean theorem. Because of the latter, the restatement of the Pythagorean theorem in terms of quadrilaterals is occasionally called the Euler–Pythagoras theorem. - -For a convex quadrilateral with sides $a, b, c, d$, diagonals $e$ and $f$, and $g$ being the line segment connecting the midpoints of the two diagonals, the following equation holds: -$$ -a^2+b^2+c^2+d^2=e^2+f^2+4g^2 -$$ - -If the quadrilateral is a parallelogram, then the midpoints of the diagonals coincide so that the connecting line segment $g$ has length 0. In addition the parallel sides are of equal length, hence Euler's theorem reduces to -$$ -2a^2+2b^2=e^2+f^2 -$$ - -which is the parallelogram law. - -If the quadrilateral is a rectangle, then the equation simplifies further since now the two diagonals are of equal length as well: -$$ -2a^2+2b^2=2e^2 -$$ - -Dividing by 2 yields the Euler–Pythagoras theorem: -$$ -a^2+b^2=e^2 -$$ - -In other words, in the case of a rectangle the relation of the quadrilateral's sides and its diagonals is described by the Pythagorean theorem. - -Euler originally derived the theorem above as a corollary of a slightly different theorem that requires the introduction of an additional point, but provides more structural insight. - -For a given convex quadrilateral $ABCD$ Euler introduced an additional point $E$ such that $ABED$ forms a parallelogram and then the following equality holds: -$$ -|AB|^2+|BC|^2+|CD|^2+|AD|^2=|AC|^2+|BD|^2+|CE|^2 -$$ - -The distance $|CE|$ between the additional point $E$ and the point $C$ of the quadrilateral not being part of the parallelogram can be thought of as measuring how much the quadrilateral deviates from a parallelogram, and $|CE|^2$ is a correction term that needs to be added to the original equation of the parallelogram law. - -$M$ being the midpoint of $AC$ yields $\tfrac{|AC|}{|AM|}=2$. Since $N$ is the midpoint of $BD$ it is also the midpoint of $AE$, as $AE$ and $BD$ are both diagonals of the parallelogram $ABED$. This yields $\tfrac{|AE|}{|AN|}=2$ and hence $\tfrac{|AC|}{|AM|}=\tfrac{|AE|}{|AN|}$. Therefore, it follows from the intercept theorem (and its converse) that $CE$ and $NM$ are parallel and $|CE|^2=(2|NM|)^2=4|NM|^2$, which yields Euler's theorem. - -Euler's theorem can be extended to a larger set of quadrilaterals, that includes crossed and nonplanar ones. It holds for so-called generalized quadrilaterals, which simply consist of four arbitrary points in $\mathbb{R}^n$ connected by edges so that they form a cycle graph. 
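Since the identity holds for arbitrary (even generalized) quadrilaterals, it is easy to verify numerically. The following small sketch (ours, for illustration only; the sample points are arbitrary) checks $a^2+b^2+c^2+d^2=e^2+f^2+4g^2$ for one convex quadrilateral:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# An arbitrary convex quadrilateral ABCD.
A, B, C, D = (0, 0), (4, 0), (5, 3), (1, 4)
sides = dist(A, B)**2 + dist(B, C)**2 + dist(C, D)**2 + dist(D, A)**2
e, f = dist(A, C), dist(B, D)                  # diagonals
M = ((A[0] + C[0]) / 2, (A[1] + C[1]) / 2)     # midpoint of AC
N = ((B[0] + D[0]) / 2, (B[1] + D[1]) / 2)     # midpoint of BD
g = dist(M, N)
print(sides, e**2 + f**2 + 4 * g**2)           # both print 60.0
```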
diff --git a/wiki/wikipedia/3404.txt b/wiki/wikipedia/3404.txt deleted file mode 100644 index 285ba1774ebaa30cf6ec25b475d7b31481b72728..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3404.txt +++ /dev/null @@ -1,61 +0,0 @@ -Packing problems are a class of optimization problems in mathematics that involve attempting to pack objects together into containers. The goal is to either pack a single container as densely as possible or pack all objects using as few containers as possible. Many of these problems can be related to real-life packaging, storage and transportation issues. Each packing problem has a dual covering problem, which asks how many of the same objects are required to completely cover every region of the container, where objects are allowed to overlap. - -In a bin packing problem, you are given: - -* 'containers' (usually a single two- or three-dimensional convex region, or an infinite space) - -* A set of 'objects', some or all of which must be packed into one or more containers. The set may contain different objects with their sizes specified, or a single object of a fixed dimension that can be used repeatedly. - -Usually the packing must be without overlaps between goods and other goods or the container walls. In some variants, the aim is to find the configuration that packs a single container with the maximal density. More commonly, the aim is to pack all the objects into as few containers as possible. In some variants the overlapping (of objects with each other and/or with the boundary of the container) is allowed but should be minimized. - -Many of these problems, when the container size is increased in all directions, become equivalent to the problem of packing objects as densely as possible in infinite Euclidean space. This problem is relevant to a number of scientific disciplines, and has received significant attention. The Kepler conjecture postulated an optimal solution for packing spheres hundreds of years before it was proven correct by Thomas Callister Hales. Many other shapes have received attention, including ellipsoids, Platonic and Archimedean solids. - -Determine the minimum number of cuboid containers (bins) that are required to pack a given set of item cuboids (3-dimensional rectangles). The rectangular cuboids to be packed can be rotated by 90 degrees on each axis. - -The problem of finding the smallest ball such that $k$ disjoint open unit balls may be packed inside it has a simple and complete answer in $n$-dimensional Euclidean space if $k\leq n+1$, and in an infinite-dimensional Hilbert space with no restrictions. It is worth describing in detail here, to give a flavor of the general problem. In this case, a configuration of $k$ pairwise tangent unit balls is available. Place the centers at the vertices $a_1, \dots, a_k$ of a regular $(k-1)$-dimensional simplex with edge 2; this is easily realized starting from an orthonormal basis. A small computation shows that the distance of each vertex from the barycenter is $\sqrt{2\big(1-\frac{1}{k} \big)}$. Moreover, any other point of the space necessarily has a larger distance from at least one of the $k$ vertices. In terms of inclusions of balls, the $k$ open unit balls centered at $a_1, \dots, a_k$ are included in a ball of radius $r_k := 1+\sqrt{2\big(1-\frac{1}{k}\big)}$, which is minimal for this configuration. - -To show that this configuration is optimal, let $x_1, \dots, x_k$ be the centers of $k$ disjoint open unit balls contained in a ball of radius $r$ centered at a point $x_0$. Consider the map from the finite set $\{x_1,\dots, x_k\}$ into $\{a_1,\dots,a_k\}$ taking $x_j$ to the corresponding $a_j$ for each $1 \leq j \leq k$. 
Since for all $1\leq i < j \leq k$, $\|a_i-a_j\|=2\leq\|x_i-x_j\|$, this map is 1-Lipschitz and by the Kirszbraun theorem it extends to a 1-Lipschitz map that is globally defined; in particular, there exists a point $a_0$ such that for all $1\leq j\leq k$ one has $\|a_0-a_j\| \leq \|x_0-x_j\|$, so that also $r_k\leq1+\|a_0-a_j\|\leq 1+\|x_0-x_j\| \leq r$. This shows that there are $k$ disjoint open unit balls in a ball of radius $r$ if and only if $r \geq r_k$. Notice that in an infinite-dimensional Hilbert space this implies that there are infinitely many disjoint open unit balls inside a ball of radius $r$ if and only if $r\geq 1+\sqrt{2}$. For instance, the unit balls centered at $\sqrt{2}e_j$, where $\{e_j\}_j$ is an orthonormal basis, are disjoint and included in a ball of radius $1 + \sqrt{2}$ centered at the origin. Moreover, for $r < 1 + \sqrt{2}$, the maximum number of disjoint open unit balls inside a ball of radius r is $\big\lfloor \frac{2}{2-(r-1)^2}\big\rfloor$. - -Determine the number of spherical objects of given diameter d that can be packed into a cuboid of size a × b × c. - -Determine the minimum height h of a cylinder with given radius R that will pack n identical spheres of radius r (< R). For a small radius R the spheres arrange into ordered structures, called columnar structures. - -Determine the minimum radius R that will pack n identical, unit volume polyhedra of a given shape. - -[Figure: the optimal packing of 10 circles in a circle.] - -Many variants of 2-dimensional packing problems have been studied. See the linked pages for more information. - -You are given n unit circles, and have to pack them in the smallest possible container. Several kinds of containers have been studied: - -* Packing circles in a circle - closely related to spreading points in a unit circle with the objective of finding the greatest minimal separation, $d_n$, between points. Optimal solutions have been proven for n ≤ 13, and n = 19. - -* Packing circles in a square - closely related to spreading points in a unit square with the objective of finding the greatest minimal separation, $d_n$, between points. To convert between these two formulations of the problem, the square side for unit circles will be $L = 2 + 2/d_n$. [Figure: the optimal packing of 15 circles in a square.] Optimal solutions have been proven for n ≤ 30. - -* Packing circles in an isosceles right triangle - good estimates are known for n < 300. - -* Packing circles in an equilateral triangle - optimal solutions are known for n < 13, and conjectures are available for n < 28. - -You are given n unit squares and have to pack them into the smallest possible container, where the container type varies: - -* Packing squares in a square: Optimal solutions have been proven for n = 1–10, 14–16, 22–25, 33–36, 62–64, 79–81, 98–100, and any square integer. The wasted space is asymptotically $O(a^{7/11})$. - -* Packing squares in a circle: Good solutions are known for n up to 35. - -* Packing identical rectangles in a rectangle: The problem of packing multiple instances of a single rectangle of size (l,w), allowing for 90° rotation, in a bigger rectangle of size (L,W) has some applications such as loading of boxes on pallets and, specifically, woodpulp stowage. For example, it is possible to pack 147 rectangles of size (137,95) in a rectangle of size (1600,1230). 
- -* Packing different rectangles in a rectangle: The problem of packing multiple rectangles of varying widths and heights in an enclosing rectangle of minimum area (but with no boundaries on the enclosing rectangle's width or height) has an important application in combining images into a single larger image. A web page that loads a single larger image often renders faster in the browser than the same page loading multiple small images, due to the overhead involved in requesting each image from the web server. The problem is NP-complete in general, but there are fast algorithms for solving small instances. - -In tiling or tessellation problems, there are to be no gaps or overlaps. Many of the puzzles of this type involve packing rectangles or polyominoes into a larger rectangle or other square-like shape. - -There are significant theorems on tiling rectangles (and cuboids) in rectangles (cuboids) with no gaps or overlaps: - -An a × b rectangle can be packed with 1 × n strips iff n divides a or n divides b. - -de Bruijn's theorem: A box can be packed with a harmonic brick a × ab × abc if the box has dimensions ap × abq × abcr for some natural numbers p, q, r (i.e., the box is a multiple of the brick). - -The study of polyomino tilings largely concerns two classes of problems: to tile a rectangle with congruent tiles, and to pack one of each n-omino into a rectangle. - -A classic puzzle of the second kind is to arrange all twelve pentominoes into rectangles sized 3×20, 4×15, 5×12 or 6×10. - -Packing of irregular objects is a problem not lending itself well to closed form solutions; however, the applicability to practical environmental science is quite important. For example, irregularly shaped soil particles pack differently as the sizes and shapes vary, leading to important outcomes for plant species to adapt root formations and to allow water movement in the soil. - -The problem of deciding whether a given set of polygons can fit in a given square container has been shown to be complete for the existential theory of the reals. diff --git a/wiki/wikipedia/3405.txt b/wiki/wikipedia/3405.txt deleted file mode 100644 index 69d22f9932fb1b45c7537c16736f44abf6f815db..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3405.txt +++ /dev/null @@ -1,73 +0,0 @@ -In computational complexity theory, a branch of computer science, Schaefer's dichotomy theorem states necessary and sufficient conditions under which a finite set S of relations over the Boolean domain yields polynomial-time or NP-complete problems when the relations of S are used to constrain some of the propositional variables. - -It is called a dichotomy theorem because the complexity of the problem defined by S is either in P or NP-complete, as opposed to one of the classes of intermediate complexity that is known to exist (assuming P ≠ NP) by Ladner's theorem. - -Special cases of Schaefer's dichotomy theorem include the NP-completeness of SAT (the Boolean satisfiability problem) and its two popular variants 1-in-3 SAT and not-all-equal 3SAT (often denoted by NAE-3SAT). In fact, for these two variants of SAT, Schaefer's dichotomy theorem shows that their monotone versions (where negations of variables are not allowed) are also NP-complete. - -Schaefer defines a decision problem that he calls the Generalized Satisfiability problem for S (denoted by SAT(S)), where $S = \{R_1,\ldots,R_m\}$ is a finite set of relations over propositional variables. An instance of the problem is an S-formula, i.e. 
a conjunction of constraints of the form $R_j(x_{i_1}, \ldots , x_{i_n})$ where $R_j \in S$ and the $x_{i_j}$ are propositional variables. The problem is to determine whether the given formula is satisfiable, in other words whether the variables can be assigned values such that they satisfy all the constraints as given by the relations from S. - -Schaefer identifies six classes of sets of Boolean relations for which SAT(S) is in P and proves that all other sets of relations generate an NP-complete problem. A finite set of relations S over the Boolean domain defines a polynomial-time computable satisfiability problem if any one of the following conditions holds: - -# all relations which are not constantly false are true when all their arguments are true; - -# all relations which are not constantly false are true when all their arguments are false; - -# all relations are equivalent to a conjunction of binary clauses; - -# all relations are equivalent to a conjunction of Horn clauses; - -# all relations are equivalent to a conjunction of dual-Horn clauses; - -# all relations are equivalent to a conjunction of affine formulae. - -Otherwise, the problem SAT(S) is NP-complete. - -A modern, streamlined presentation of Schaefer's theorem is given in an expository paper by Hubie Chen. In modern terms, the problem SAT(S) is viewed as a constraint satisfaction problem over the Boolean domain. In this area, it is standard to denote the set of relations by Γ and the decision problem defined by Γ as CSP(Γ). - -This modern understanding uses algebra, in particular, universal algebra. For Schaefer's dichotomy theorem, the most important concept in universal algebra is that of a polymorphism. An operation $f : D^m \to D$ is a polymorphism of a relation $R \subseteq D^k$ if, for any choice of m tuples $(t_{11}, \dotsc, t_{1k}), \dotsc, (t_{m1}, \dotsc, t_{mk})$ from R, it holds that the tuple obtained from these m tuples by applying f coordinate-wise, i.e. $(f(t_{11}, \dotsc, t_{m1}), \dotsc, f(t_{1k}, \dotsc, t_{mk}))$, is in R. That is, an operation f is a polymorphism of R if R is closed under f: applying f to any tuples in R yields another tuple inside R. A set of relations Γ is said to have a polymorphism f if every relation in Γ has f as a polymorphism. This definition allows for the algebraic formulation of Schaefer's dichotomy theorem. - -Let Γ be a finite constraint language over the Boolean domain. The problem CSP(Γ) is decidable in polynomial time if Γ has one of the following six operations as a polymorphism: - -# the constant unary operation 0; - -# the constant unary operation 1; - -# the binary AND operation ∧; - -# the binary OR operation ∨; - -# the ternary majority operation $\operatorname{Majority}(x,y,z) = (x \wedge y) \vee (x \wedge z) \vee (y \wedge z);$ - -# the ternary minority operation $\operatorname{Minority}(x,y,z) = x \oplus y \oplus z.$ - -Otherwise, the problem CSP(Γ) is NP-complete. - -In this formulation, it is easy to check if any of the tractability conditions hold. - -Given a set Γ of relations, there is a surprisingly close connection between its polymorphisms and the computational complexity of CSP(Γ). - -A relation R is called primitive positive definable, or pp-definable for short, from a set Γ of relations if R(v1, ... , vk) ⇔ ∃x1 ... xm. C holds for some conjunction C of constraints from Γ and equations over the variables {v1,...,vk, x1,...,xm}. 
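Since both the relations and the operations here are finite, the polymorphism condition above can be checked by exhaustive enumeration. The following sketch (ours, not Schaefer's or Chen's; the helper names are illustrative) verifies that the Majority operation preserves a binary-clause relation, as the tractable case 3 above predicts:

```python
from itertools import product

def is_polymorphism(f, m, relation):
    """Check whether the m-ary operation f preserves `relation`
    (a set of equal-length Boolean tuples)."""
    for rows in product(relation, repeat=m):        # choose m tuples from R
        new = tuple(f(*col) for col in zip(*rows))  # apply f coordinate-wise
        if new not in relation:
            return False
    return True

maj = lambda x, y, z: (x & y) | (x & z) | (y & z)
# R encodes the binary clause (x OR y): its satisfying assignments.
R = {(0, 1), (1, 0), (1, 1)}
print(is_polymorphism(maj, 3, R))  # True
```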
- -For example, if Γ consists of the ternary relation nae(x,y,z) holding if x,y,z are not all equal, and R(x,y,z) is x∨y∨z, then R can be pp-defined by R(x,y,z) ⇔ ∃a. nae(0,x,a) ∧ nae(y,z,¬a); this reduction has been used to prove that NAE-3SAT is NP-complete. - -The set of all relations which are pp-definable from Γ is denoted by ≪Γ≫. - -If Γ' ⊆ ≪Γ≫ for some finite constraint sets Γ and Γ', then CSP(Γ') reduces to CSP(Γ). - -Given a set Γ of relations, Pol(Γ) denotes the set of polymorphisms of Γ. - -Conversely, if O is a set of operations, then Inv(O) denotes the set of relations having all operations in O as a polymorphism. - -Pol and Inv together form a Galois connection. - -For any finite set Γ of relations over a finite domain, ≪Γ≫ = Inv(Pol(Γ)) holds, that is, the set of relations pp-definable from Γ can be derived from the polymorphisms of Γ. Moreover, if Pol(Γ) ⊆ Pol(Γ') for two finite relation sets Γ and Γ', then Γ' ⊆ ≪Γ≫ and CSP(Γ') reduces to CSP(Γ). As a consequence, two relation sets having the same polymorphisms lead to the same computational complexity. - -The analysis was later fine-tuned: CSP(Γ) is either solvable in co-NLOGTIME, L-complete, NL-complete, ⊕L-complete, P-complete or NP-complete, and given Γ, one can decide in polynomial time which of these cases holds. - -Schaefer's dichotomy theorem was recently generalized to a larger class of relations. - -If the problem is to count the number of solutions, which is denoted by #CSP(Γ), then a similar result by Creignou and Hermann holds. Let Γ be a finite constraint language over the Boolean domain. The problem #CSP(Γ) is computable in polynomial time if Γ has a Mal'tsev operation as a polymorphism. Otherwise, the problem #CSP(Γ) is #P-complete. A Mal'tsev operation m is a ternary operation that satisfies $m(x,y,y) = m(y,y,x) = x.$ An example of a Mal'tsev operation is the Minority operation given in the modern, algebraic formulation of Schaefer's dichotomy theorem above. Thus, when Γ has the Minority operation as a polymorphism, it is not only possible to decide CSP(Γ) in polynomial time, but to compute #CSP(Γ) in polynomial time. There are a total of 4 Mal'tsev operations on Boolean variables, determined by the values of $m(T,F,T)$ and $m(F,T,F)$. An example of a less symmetric one is given by $m(x,y,z) = (x\wedge z) \vee (\neg y \wedge (x \vee z))$. On other domains, such as groups, examples of Mal'tsev operations include $x - y + z$ and $x y^{-1} z.$ - -For larger domains, even for a domain of size three, the existence of a Mal'tsev polymorphism for Γ is no longer a sufficient condition for the tractability of #CSP(Γ). However, the absence of a Mal'tsev polymorphism for Γ still implies the #P-hardness of #CSP(Γ). diff --git a/wiki/wikipedia/3406.txt b/wiki/wikipedia/3406.txt deleted file mode 100644 index a6c5f85064f2d0e6120f64f2801af6a6aa11df07..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3406.txt +++ /dev/null @@ -1,80 +0,0 @@ -In mathematics, Descartes' rule of signs, first described by René Descartes in his work La Géométrie, is a technique for getting information on the number of positive real roots of a polynomial. It asserts that the number of positive roots is at most the number of sign changes in the sequence of the polynomial's coefficients (omitting the zero coefficients), and that the difference between these two numbers is always even. This implies, in particular, that if the number of sign changes is zero or one, then there are exactly zero or one positive roots, respectively. 
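The counting rule just stated is simple to mechanize. The following short sketch (our illustration, not part of the article) counts sign changes in a coefficient sequence, using the same example polynomial that appears later in the article:

```python
def sign_changes(coeffs):
    """Number of sign changes in a coefficient sequence, zeros omitted."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(a != b for a, b in zip(signs, signs[1:]))

# f(x) = x^3 + x^2 - x - 1: one sign change, hence exactly one positive root.
print(sign_changes([1, 1, -1, -1]))   # 1
# f(-x) = -x^3 + x^2 + x - 1: two sign changes, hence two or zero negative roots.
print(sign_changes([-1, 1, 1, -1]))   # 2
```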
- -By a homographic transformation of the variable, one may use Descartes' rule of signs to obtain similar information on the number of roots in any interval. This is the basic idea of Budan's theorem and the Budan–Fourier theorem. By repeating the division of an interval into two intervals, one eventually gets a list of disjoint intervals containing together all real roots of the polynomial, each containing exactly one real root. Descartes' rule of signs and homographic transformations of the variable are, nowadays, the basis of the fastest algorithms for computer computation of real roots of polynomials (see Real-root isolation). - -Descartes himself used the transformation x → –x in order to apply his rule to obtain information on the number of negative roots. - -The rule states that if the nonzero terms of a single-variable polynomial with real coefficients are ordered by descending variable exponent, then the number of positive roots of the polynomial is either equal to the number of sign changes between consecutive (nonzero) coefficients, or is less than it by an even number. A root of multiplicity k is counted as k roots. - -In particular, if the number of sign changes is zero or one, the number of positive roots equals the number of sign changes. - -As a corollary of the rule, the number of negative roots is the number of sign changes after multiplying the coefficients of odd-power terms by −1, or fewer than it by an even number. This procedure is equivalent to substituting the negation of the variable for the variable itself. - -For example, the negative roots of $ax^3+bx^2+cx+d$ are the positive roots of -$$ -a(-x)^3+b(-x)^2+c(-x)+d = -ax^3+bx^2-cx+d. -$$ - -Thus, applying Descartes' rule of signs to this polynomial gives the maximum number of negative roots of the original polynomial. - -The polynomial -$$ -f(x) = + x^3 + x^2 - x - 1 -$$ - -has one sign change between the second and third terms (the sequence of signs is (+, +, –, –)). Therefore it has exactly one positive root. - -To find the number of negative roots, change the signs of the coefficients of the terms with odd exponents, i.e., apply Descartes' rule of signs to the polynomial $f(-x)$, to obtain the polynomial -$$ -f(-x)= - x^3 + x^2 + x - 1 . -$$ - -This polynomial has two sign changes (the sequence of signs is (–, +, +, –)), meaning that this second polynomial has two or zero positive roots; thus the original polynomial has two or zero negative roots. - -In fact, the factorization of the first polynomial is -$$ -f(x)=(x + 1)^{2}(x - 1), -$$ - -so the roots are –1 (twice) and +1 (once). - -The factorization of the second polynomial is -$$ -f(-x)=-(x - 1)^{2}(x + 1), -$$ - -so here, the roots are +1 (twice) and –1 (once), the negation of the roots of the original polynomial. - -Any nth degree polynomial has exactly n roots in the complex plane, if counted according to multiplicity. So if f(x) is a polynomial which does not have a root at 0 (that is, a polynomial with a nonzero constant term) then the minimum number of nonreal roots is equal to -$$ -n-(p+q), -$$ - -where p denotes the maximum number of positive roots, q denotes the maximum number of negative roots (both of which can be found using Descartes' rule of signs), and n denotes the degree of the equation. - -The polynomial -$$ -f(x) = x^3-1 , -$$ - -has one sign change; so the number of positive real roots is one. As -$$ -f(-x) = -x^3-1 , -$$ - -has no sign change, the original polynomial has no negative real roots. 
So the number of nonreal roots is -$$ -3 - (1+0) = 2 . -$$ - -Since nonreal roots of a polynomial with real coefficients must occur in conjugate pairs, it means that $x^3 - 1$ has exactly two nonreal roots and one real root, which is positive. - -The subtraction of only multiples of 2 from the maximal number of positive roots occurs because the polynomial may have nonreal roots, which always come in pairs since the rule applies to polynomials whose coefficients are real. Thus if the polynomial is known to have all real roots, this rule allows one to find the exact number of positive and negative roots. Since it is easy to determine the multiplicity of zero as a root, the sign of all roots can be determined in this case. - -If the real polynomial P has k real positive roots counted with multiplicity, then for every a > 0 there are at least k changes of sign in the sequence of coefficients of the Taylor series of the function $e^{ax}P(x)$. For sufficiently large a, there are exactly k such changes of sign. - -In the 1970s Askold Khovanskii developed the theory of fewnomials that generalises Descartes' rule. The rule of signs can be thought of as stating that the number of real roots of a polynomial is dependent on the polynomial's complexity, and that this complexity is proportional to the number of monomials it has, not its degree. Khovanskii showed that this holds true not just for polynomials but for algebraic combinations of many transcendental functions, the so-called Pfaffian functions. diff --git a/wiki/wikipedia/3407.txt b/wiki/wikipedia/3407.txt deleted file mode 100644 index 19ff714d8df9d2679dd9cceeed8a65b0771586dd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3407.txt +++ /dev/null @@ -1,55 +0,0 @@ -The Euler tour technique (ETT), named after Leonhard Euler, is a method in graph theory for representing trees. The tree is viewed as a directed graph that contains two directed edges for each edge in the tree. The tree can then be represented as an Eulerian circuit of the directed graph, known as the Euler tour representation (ETR) of the tree. The ETT allows for efficient, parallel computation of solutions to common problems in algorithmic graph theory. It was introduced by Tarjan and Vishkin in 1984. - -Given an undirected tree presented as a set of edges, the Euler tour representation (ETR) can be constructed in parallel as follows: - -* We construct a symmetric list of directed edges: - -** For each undirected edge {u,v} in the tree, insert (u,v) and (v,u) in the edge list. - -* Sort the edge list lexicographically. (Here we assume that the nodes of the tree are ordered, and that the root is the first element in this order.) - -* Construct adjacency lists for each node (called next) and a map from nodes to the first entries of the adjacency lists (called first): - -** For each edge (u,v) in the list, do in parallel: - -*** If the previous edge (x,y) has x ≠ u, i.e. starts from a different node, set first(u) = (u,v) - -*** Else if x = u, i.e. starts from the same node, set next(x,y) = (u,v) - -Construct an edge list (called succ) in Euler tour order by setting pointers succ(u,v) for all edges (u,v) in parallel according to the following rule: - -$$\mathrm{succ}(u,v)=\begin{cases} \mathrm{next}(v,u) & \mathrm{next}(v,u)\neq \mathrm{nil} \\ \mathrm{first}(v)&\text{otherwise}. \end{cases}$$ - -The resulting list succ will be circular. 
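The parallel construction produces the same tour that a sequential depth-first traversal would. The following Python sketch (a sequential stand-in for the parallel algorithm, with helper names of our own choosing) builds the tour and then computes node levels with the +1/−1 prefix-sum idea described below:

```python
def euler_tour(adj, root):
    """Euler tour representation of a rooted tree given as adjacency lists:
    each tree edge {u,v} appears once as (u,v) and once as (v,u)."""
    tour = []
    def dfs(u, parent):
        for v in sorted(adj[u]):
            if v != parent:
                tour.append((u, v))   # advance edge
                dfs(v, u)
                tour.append((v, u))   # retreat edge
    dfs(root, None)
    return tour

def levels(tour, root):
    """Depth of each node via a running sum: +1 on advance, -1 on retreat."""
    depth, seen, d = {root: 0}, {root}, 0
    for (u, v) in tour:
        if v not in seen:             # first visit: advance edge
            d += 1
            seen.add(v)
            depth[v] = d
        else:                         # retreat edge
            d -= 1
    return depth

adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
print(euler_tour(adj, 0))  # [(0, 1), (1, 0), (0, 2), (2, 3), (3, 2), (2, 0)]
print(levels(euler_tour(adj, 0), 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```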
- -The overall construction takes work W(n) = O(sort(n)) (the time it takes to sort n items in parallel) if the tree has n nodes, since in a tree the number of edges is one less than the number of nodes. - -If the tree has a root, we can split the circular list succ at that root. In that case, we can speak of advance and retreat edges: given a pair of nodes u,v, the first occurrence of either (u,v) or (v,u) in the ETR is called the advance edge, and the second occurrence is called the retreat edge. This appeals to the intuition that the first time such an edge is traversed the distance to the root is increased, while the second time the distance decreases. - -Rerooting the tree can be done in constant time O(1) by splitting the circular list succ at the new root. - -All of the following problems can be solved in O(Prefix sum(n)) (the time it takes to solve the prefix sum problem in parallel for a list of n items): - -# Classifying advance and retreat edges: Do list ranking on the ETR and save the result in a two-dimensional array A. Then (u,v) is an advance edge iff A(u,v) < A(v,u), and a retreat edge otherwise. - -# Determine the level of each node: Do a prefix sum on the ETR, where every advance edge counts as 1, and every retreat edge counts as −1. Then the value at the advance edge (u,v) is the level of v. - -# Number of nodes in a subtree rooted at v: assume the parent of v is u, determine the advance edge (u,v) and the retreat edge (v,u) in parallel, and then count the number of advance edges between (u,v) and (v,u) using prefix sum. - -# The depth-first search index of a node v: count the number of advance edges up to and including (u,v). - -# Determine the lowest common ancestor of two nodes. - -Henzinger and King suggest representing a given tree by keeping its Euler tour in a balanced binary search tree, keyed by the index in the tour. So for example, an unbalanced tree having 7 nodes will be represented by a balanced binary tree with 14 nodes, one for each time each node appears on the tour. - -We can represent a forest (an acyclic graph) using a collection of ET trees, one ET tree per forest tree. This representation allows us to quickly answer the question "what is the root of node v?" by just moving to the first node of the ET tree (since nodes in the ET tree are keyed by their location in the Euler tour, and the root is the first and last node in the tour). When the represented forest is updated (e.g. by connecting two trees into a single tree or by splitting a tree into two trees), the corresponding Euler-tour structure can be updated in time O(log(n)). - -Link/cut trees have similar performance guarantees. While LC trees are good for maintaining aggregates on paths of a tree (making them a good choice of data structure in network flow algorithms), ET trees are better at keeping aggregate information on subtrees. diff --git a/wiki/wikipedia/3408.txt b/wiki/wikipedia/3408.txt deleted file mode 100644 index fa61e61c9213ffb99bb698fc09e7f1a50f408b84..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3408.txt +++ /dev/null @@ -1,129 +0,0 @@ -Smale's problems are a list of eighteen unsolved problems in mathematics proposed by Steve Smale in 1998 and republished in 1999. Smale composed this list in reply to a request from Vladimir Arnold, then vice-president of the International Mathematical Union, who asked several mathematicians to propose a list of problems for the 21st century. 
Arnold's inspiration came from the list of Hilbert's problems that had been published at the beginning of the 20th century. - -7th: [...] is minimized for a distribution of N points on a 2-sphere. This is equivalent to the Thomson problem. Unresolved. - -8th: Extend the mathematical model of general equilibrium theory to include price adjustments. Resolved (2013): Gjerstad (2013) extends the deterministic model of price adjustment to a stochastic model and shows that when the stochastic model is linearized around the equilibrium the result is the autoregressive price adjustment model used in applied econometrics. He then tests the model with price adjustment data from a general equilibrium experiment. The model performs well in a general equilibrium experiment with two commodities. - -9th: The linear programming problem: find a strongly-polynomial time algorithm which for a given matrix $A \in \mathbb{R}^{m\times n}$ and $b \in \mathbb{R}^m$ decides whether there exists $x \in \mathbb{R}^n$ with $Ax \geq b$. Unresolved. - -10th: Pugh's closing lemma (higher order of smoothness). Resolved (2016). - -11th: Is one-dimensional dynamics generally hyperbolic? (a) Can a complex polynomial T be approximated by one of the same degree with the property that every critical point tends to a periodic sink under iteration? (b) Can a smooth map T : [0,1] → [0,1] be $C^r$ approximated by one which is hyperbolic, for all r > 1? Resolved (2007). - -12th: For a closed manifold $M$ and any $r \geq 1$ let $\mathrm{Diff}^r(M)$ be the topological group of $C^r$ diffeomorphisms of $M$ onto itself. Given an arbitrary $A \in \mathrm{Diff}^r(M)$, is it possible to approximate it arbitrarily well by $T \in \mathrm{Diff}^r(M)$ that commute only with their iterates? In other words, is the subset of all diffeomorphisms whose centralizers are trivial dense in $\mathrm{Diff}^r(M)$? Resolved (2009). - -13th: Hilbert's 16th problem: describe the relative positions of ovals originating from a real algebraic curve and as limit cycles of a polynomial vector field on the plane. Unresolved. - -14th: Do the properties of the Lorenz attractor exhibit those of a strange attractor? Resolved (2002). - -15th: Do the Navier–Stokes equations in $\mathbb{R}^3$ always have a unique smooth solution that extends for all time? Unresolved. - -16th: Jacobian conjecture: if the Jacobian determinant of F is a non-zero constant and k has characteristic 0, then F has an inverse function $G : k^N \to k^N$, and G is regular (in the sense that its components are polynomials). Unresolved. - -17th: Solving polynomial equations in polynomial time in the average case. Resolved (2008–2016): C. Beltrán and L. M. Pardo found a uniform probabilistic algorithm (average Las Vegas algorithm) for Smale's 17th problem. F. Cucker and P. Bürgisser made the smoothed analysis of a probabilistic algorithm à la Beltrán–Pardo and then exhibited a deterministic algorithm running in time $N^{O(\log\log N)}$. Finally, P. Lairez found an alternative method to de-randomize the algorithm and thus found a deterministic algorithm which runs in average polynomial time. All these works follow Shub and Smale's foundational work (the "Bezout series"). - -18th: Limits of intelligence (it concerns the fundamental problems of intelligence and learning, both from the human and machine side). Unresolved. - -In later versions, Smale also listed three additional problems, "that don't seem important enough to merit a place on our main list, but it would still be nice to solve them:" - -#Mean value problem - -#Is the three-sphere a minimal set (Gottschalk's conjecture)? - -#Is an Anosov diffeomorphism of a compact manifold topologically the same as the Lie group model of John Franks? diff --git a/wiki/wikipedia/3409.txt b/wiki/wikipedia/3409.txt deleted file mode 100644 index 716f025d61fb725c154cb4383d47bb0f5b642b05..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3409.txt +++ /dev/null @@ -1,27 +0,0 @@ -In mathematics, Hudde's rules are two properties of polynomial roots described by Johann Hudde. - -1. If r is a double root of the polynomial equation -$$ -a_0x^n + a_1x^{n-1} + \cdots + a_{n-1}x + a_n = 0 -$$ - -and if $b_0, b_1, \dots, b_{n-1}, b_n$ are numbers in arithmetic progression, then r is also a root of -$$ -a_0b_0x^n + a_1b_1x^{n-1} + \cdots + a_{n-1}b_{n-1}x + a_nb_n = 0. -$$ - -This rule is a form of the modern theorem that if r is a double root of ƒ(x) = 0, then r is a root of ƒ '(x) = 0. - -2. If for x = a the polynomial -$$ -a_0x^n + a_1x^{n-1} + \cdots + a_{n-1}x + a_n -$$ - -takes on a relative maximum or minimum value, then a is a root of the equation -$$ -na_0x^n + (n-1)a_1x^{n-1} + \cdots + 2a_{n-2}x^2 + a_{n-1}x = 0. -$$ - -This rule is a modification of Fermat's theorem in the form that if ƒ(a) is a relative maximum or minimum value of a polynomial ƒ(x), then ƒ '(a) = 0, where ƒ ' is the derivative of ƒ. - -Hudde was working with Frans van Schooten on a Latin edition of La Géométrie of René Descartes. In the 1659 edition of the translation, Hudde contributed two letters: "Epistola prima de Redvctione Ǣqvationvm" (pages 406 to 506), and "Epistola secvnda de Maximus et Minimus" (pages 507 to 516). These letters may be read by the Internet Archive link below. diff --git a/wiki/wikipedia/341.txt b/wiki/wikipedia/341.txt deleted file mode 100644 index 824b3b9b42e3cc2cf214f4c8cf2a5cb050cfcd49..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/341.txt +++ /dev/null @@ -1,79 +0,0 @@ -Lehmer's conjecture, also known as Lehmer's Mahler measure problem, is a problem in number theory raised by Derrick Henry Lehmer. The conjecture asserts that there is an absolute constant $\mu>1$ such that every polynomial with integer coefficients $P(x)\in\mathbb{Z}[x]$ satisfies one of the following properties: - -* The Mahler measure $\mathcal{M}(P(x))$ of $P(x)$ is greater than or equal to $\mu$. - -* $P(x)$ is an integral multiple of a product of cyclotomic polynomials or the monomial $x$, in which case $\mathcal{M}(P(x))=1$. (Equivalently, every complex root of $P(x)$ is a root of unity or zero.) - -There are a number of definitions of the Mahler measure, one of which is to factor $P(x)$ over $\mathbb{C}$ as -$$ -P(x)=a_0 (x-\alpha_1)(x-\alpha_2)\cdots(x-\alpha_D), -$$ - -and then set -$$ -\mathcal{M}(P(x)) = |a_0| \prod_{i=1}^{D} \max(1,|\alpha_i|). -$$
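This definition is easy to evaluate numerically. Here is a small sketch of ours (not from the original article) that computes the Mahler measure directly from the factorization, using numpy's root finder; the example polynomial is an arbitrary illustration:

```python
import numpy as np

def mahler_measure(coeffs):
    """Mahler measure of a polynomial given by its coefficients
    (highest degree first): |a_0| times the product of max(1, |root|)."""
    roots = np.roots(coeffs)
    return abs(coeffs[0]) * float(np.prod(np.maximum(1.0, np.abs(roots))))

# x^2 - x - 1 has roots phi and 1 - phi, so its Mahler measure is the
# golden ratio, approximately 1.618. Applied to Lehmer's polynomial
# (given below), the same function returns approximately 1.17628.
print(mahler_measure([1, -1, -1]))
```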
- -The smallest known Mahler measure (greater than 1) is for "Lehmer's polynomial" -$$ -P(x)= x^{10}+x^9-x^7-x^6-x^5-x^4-x^3+x+1 , -$$ - -for which the Mahler measure is the Salem number -$$ -\mathcal{M}(P(x))=1.176280818\dots \ . -$$ - -It is widely believed that this example represents the true minimal value: that is, $\mu=1.176280818\dots$ in Lehmer's conjecture. - -For the Mahler measure of a one-variable polynomial, Jensen's formula shows that if $P(x)=a_0 (x-\alpha_1)(x-\alpha_2)\cdots(x-\alpha_D)$ then -$$ -\mathcal{M}(P(x)) = |a_0| \prod_{i=1}^{D} \max(1,|\alpha_i|). -$$ - -In this paragraph denote $m(P)=\log\mathcal{M}(P(x))$, which is also called the (logarithmic) Mahler measure. - -If $P$ has integer coefficients, this shows that $\mathcal{M}(P)$ is an algebraic number so $m(P)$ is the logarithm of an algebraic integer. It also shows that $m(P)\ge0$ and that if $m(P)=0$ then $P$ is a product of cyclotomic polynomials, i.e. monic polynomials all of whose roots are roots of unity, or a monomial, i.e. a power $x^n$ for some $n$. - -Lehmer noticed -$$ -\log\mathcal{M}(P(x))\ge \frac{C}{D\log D}. -$$ - -Dobrowolski improved this to -$$ -\log\mathcal{M}(P(x))\ge C\left(\frac{\log\log D}{\log D}\right)^3. -$$ - -Dobrowolski obtained the value C ≥ 1/1200 and asymptotically C > 1-ε for all sufficiently large D. Voutier in 1996 obtained C ≥ 1/4 for D ≥ 2. - -Let $E/K$ be an elliptic curve defined over a number field $K$, and let $\hat{h}_E:E(\bar{K})\to\mathbb{R}$ be the canonical height function. The canonical height is the analogue for elliptic curves of the function $(\deg P)^{-1}\log\mathcal{M}(P(x))$. It has the property that $\hat{h}_E(Q)=0$ if and only if $Q$ is a torsion point in $E(\bar{K})$. The elliptic Lehmer conjecture asserts that there is a constant $C(E/K)>0$ such that -$$ -\hat{h}_E(Q) \ge \frac{C(E/K)}{D} -$$ for all non-torsion points $Q\in E(\bar{K})$, - -where $D=[K(Q):K]$. If the elliptic curve E has complex multiplication, then the analogue of Dobrowolski's result holds: -$$ -\hat{h}_E(Q) \ge \frac{C(E/K)}{D} \left(\frac{\log\log D}{\log D}\right)^3 , -$$ - -due to Laurent. For arbitrary elliptic curves, the best known result is -$$ -\hat{h}_E(Q) \ge \frac{C(E/K)}{D^3(\log D)^2}, -$$ - -due to Masser. For elliptic curves with non-integral j-invariant, this has been improved to -$$ -\hat{h}_E(Q) \ge \frac{C(E/K)}{D^2(\log D)^2}, -$$ - -by Hindry and Silverman. - -Stronger results are known for restricted classes of polynomials or algebraic numbers. - -If P(x) is not reciprocal then -$$ -M(P) \ge M(x^3 -x - 1) \approx 1.3247 -$$ - -and this is clearly best possible. If further all the coefficients of P are odd then diff --git a/wiki/wikipedia/3410.txt b/wiki/wikipedia/3410.txt deleted file mode 100644 index 903ec21d471fa84d05ca91030ff7769c1cdd8679..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3410.txt +++ /dev/null @@ -1,76 +0,0 @@ -The binomial sum variance inequality states that the variance of the sum of binomially distributed random variables will always be less than or equal to the variance of a binomial variable with the same n and p parameters. In probability theory and statistics, the sum of independent binomial random variables is itself a binomial random variable if all the component variables share the same success probability. If success probabilities differ, the probability distribution of the sum is not binomial. The lack of uniformity in success probabilities across independent trials leads to a smaller variance. 
The inequality is a special case of a more general theorem involving the expected value of convex functions. In some statistical applications, the standard binomial variance estimator can be used even if the component probabilities differ, though with a variance estimate that has an upward bias. - -Consider the sum, Z, of two independent binomial random variables, X ~ B(m0, p0) and Y ~ B(m1, p1), where Z = X + Y. Then, the variance of Z is less than or equal to its variance under the assumption that p0 = p1, that is, if Z had a binomial distribution. Symbolically, $Var(Z) \leqslant E[Z] (1 - \tfrac{E[Z]}{m_0+m_1})$. - -We wish to prove that -$$ -Var(Z) \leqslant E[Z] (1 - \frac{E[Z]}{m_0+m_1}) -$$ - -We will prove this inequality by finding an expression for Var(Z) and substituting it on the left-hand side, then showing that the inequality always holds. - -If Z had a binomial distribution with parameters n and p, then the expected value of Z would be given by E[Z] = np and the variance of Z by Var[Z] = np(1 – p). Letting n = m0 + m1 and substituting E[Z] for np shows that the right-hand side above is exactly the variance Z would have if it were binomial: -$$ -E[Z] (1 - \frac{E[Z]}{m_0+m_1}) -$$ - -The random variables X and Y are independent, so the variance of the sum is equal to the sum of the variances, that is -$$ -Var(Z) = E[X] (1-\frac{E[X]}{m_0}) + E[Y] (1-\frac{E[Y]}{m_1}) -$$ - -In order to prove the theorem, it is therefore sufficient to prove that -$$ -E[X](1 - \frac{E[X]}{m_0}) + E[Y](1 - \frac{E[Y]}{m_1}) \leqslant E[Z](1 - \frac{E[Z]}{m_1+m_0}) -$$ - -Substituting E[X] + E[Y] for E[Z] gives -$$ -E[X](1 - \frac{E[X]}{m_0}) + E[Y](1 - \frac{E[Y]}{m_1}) \leqslant (E[X]+E[Y])(1 - \frac{E[X]+E[Y]}{m_0+m_1}) -$$ - -Multiplying out the brackets yields -$$ -E[X] - \frac{E[X]^2}{m_0} + E[Y] - \frac{E[Y]^2}{m_1} \leqslant E[X] + E[Y] - \frac{(E[X]+E[Y])^2}{m_0+m_1} -$$ - -Subtracting E[X] and E[Y] from both sides gives -$$ -- \frac{E[X]^2}{m_0} - \frac{E[Y]^2}{m_1} \leqslant - \frac{(E[X]+E[Y])^2}{m_0+m_1} -$$ - -Multiplying by −1 and reversing the inequality gives -$$ -\frac{E[X]^2}{m_0} + \frac{E[Y]^2}{m_1} \geqslant \frac{(E[X]+E[Y])^2}{m_0+m_1} -$$ - -Expanding the right-hand side gives -$$ -\frac{E[X]^2}{m_0} + \frac{E[Y]^2}{m_1} \geqslant \frac{E[X]^2+2E[X]E[Y]+E[Y]^2}{m_0+m_1} -$$ - -Multiplying by $m_0 m_1 (m_0+m_1)$ yields -$$ -(m_0m_1+{m_1}^2){E[X]^2}+ ({m_0}^2+m_0m_1){E[Y]^2} \geqslant m_0m_1({E[X]}^2+2E[X]E[Y]+{E[Y]}^2) -$$ - -Subtracting the right-hand side gives the relation -$$ -{m_1}^2{E[X]^2} -2m_0m_1E[X]E[Y] + {m_0}^2{E[Y]^2} \geqslant 0 -$$ - -or equivalently -$$ -(m_1E[X] - m_0E[Y])^2 \geqslant 0 -$$ - -The square of a real number is always greater than or equal to zero, so this is true for all independent binomial distributions that X and Y could take. This is sufficient to prove the theorem. - -Although this proof was developed for the sum of two variables, it is easily generalized to greater than two. Additionally, if the individual success probabilities are known, then the variance is known to take the form -$$ - \operatorname{Var}(Z) = n \bar{p} (1 - \bar{p}) - ns^2, -$$ - -where $ s^2 = \frac{1}{n}\sum_{i=1}^n (p_i-\bar{p})^2$. This expression also implies that the variance is always less than that of the binomial distribution with $p=\bar{p}$, because the standard expression for the variance is decreased by $ns^2$, a positive number. - -The inequality can be useful in the context of multiple testing, where many statistical hypothesis tests are conducted within a particular study. 
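As a quick numerical sanity check of the exact formula $n \bar{p}(1-\bar{p}) - n s^2$ above (a sketch of ours, not part of the article; the probabilities are arbitrary), the following snippet compares it with a direct simulation of a sum of independent Bernoulli variables with unequal success probabilities:

```python
import random

def simulate_var(ps, trials=200_000, seed=0):
    """Empirical variance of a sum of independent Bernoulli(p_i) variables."""
    rng = random.Random(seed)
    samples = [sum(rng.random() < p for p in ps) for _ in range(trials)]
    mean = sum(samples) / trials
    return sum((x - mean) ** 2 for x in samples) / trials

ps = [0.2, 0.5, 0.9]
n = len(ps)
pbar = sum(ps) / n
s2 = sum((p - pbar) ** 2 for p in ps) / n
print(n * pbar * (1 - pbar) - n * s2)  # exact: 0.5, equals sum of p(1-p)
print(simulate_var(ps))                # simulation agrees up to noise
```

The exact value equals $\sum_i p_i(1-p_i)$, and the binomial bound $n\bar{p}(1-\bar{p})$ is strictly larger whenever the $p_i$ differ, which is what makes the binomial estimate conservative in the testing setting discussed next.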
Each test can be treated as a Bernoulli variable with a success probability p. Consider the total number of positive tests as a random variable denoted by S. This quantity is important in the estimation of false discovery rates (FDR), which quantify uncertainty in the test results. If the null hypothesis is true for some tests and the alternative hypothesis is true for other tests, then success probabilities are likely to differ between these two groups. However, the variance inequality theorem states that if the tests are independent, the variance of S will be no greater than it would be under a binomial distribution. diff --git a/wiki/wikipedia/3411.txt b/wiki/wikipedia/3411.txt deleted file mode 100644 index 7f1755b2fa1901ebfb66a3254d9787c1bf526e37..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3411.txt +++ /dev/null @@ -1 +0,0 @@ -In algebraic geometry, the Fröberg conjecture is a conjecture about the possible Hilbert functions of a set of forms. It is named after Ralf Fröberg, who introduced it. The Fröberg–Iarrobino conjecture is a generalization of it. diff --git a/wiki/wikipedia/3412.txt b/wiki/wikipedia/3412.txt deleted file mode 100644 index 2f0521337d80eb6141101dd9e16c7558f2aacde3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3412.txt +++ /dev/null @@ -1,8 +0,0 @@ -In mathematics, Hardy's theorem is a result in complex analysis describing the behavior of holomorphic functions. - -Let $f$ be a holomorphic function on the open ball centered at zero and radius $R$ in the complex plane, and assume that $f$ is not a constant function. If one defines -$$ -I(r) = \frac{1}{2\pi} \int_0^{2\pi}\! \left| f(r e^{i\theta}) \right| d\theta -$$ - -for $0< r < R,$ then $I(r)$ is strictly increasing and is a convex function of $\log r$. diff --git a/wiki/wikipedia/3413.txt b/wiki/wikipedia/3413.txt deleted file mode 100644 index 94bb1b6b17be7efc5d68a41007fd344f72eb7c3d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3413.txt +++ /dev/null @@ -1,25 +0,0 @@ -In the mathematical field of graph theory, the friendship graph (or Dutch windmill graph or n-fan) Fn is a planar undirected graph with 2n+1 vertices and 3n edges. - -The friendship graph Fn can be constructed by joining n copies of the cycle graph C3 with a common vertex. - -By construction, the friendship graph Fn is isomorphic to the windmill graph Wd(3, n). It is a unit distance graph with girth 3, diameter 2 and radius 1. The graph F2 is isomorphic to the butterfly graph. - -The friendship theorem states that the finite graphs with the property that every two vertices have exactly one neighbor in common are exactly the friendship graphs. Informally, if a group of people has the property that every pair of people has exactly one friend in common, then there must be one person who is a friend to all the others. However, for infinite graphs, there can be many different graphs with the same cardinality that have this property. - -A combinatorial proof of the friendship theorem was given by Mertzios and Unger. Another proof was given by Craig Huneke. A formalised proof in Metamath was reported by Alexander van der Vekens in October 2018 on the Metamath mailing list. - -The friendship graph has chromatic number 3 and chromatic index 2n. Its chromatic polynomial can be deduced from the chromatic polynomial of the cycle graph C3 and is equal to $(x-2)^n (x-1)^n x$. - -The friendship graph Fn is edge-graceful if and only if n is odd. 
The friendship graph Fn is edge-graceful if and only if n is odd, and graceful if and only if n ≡ 0 (mod 4) or n ≡ 1 (mod 4). - -Every friendship graph is factor-critical. - -According to extremal graph theory, every graph with sufficiently many edges (relative to its number of vertices) must contain a $k$-fan as a subgraph. More specifically, this is true for an $n$-vertex graph if the number of edges is -$$ -\left\lfloor \frac{n^2}{4}\right\rfloor + f(k), -$$ - -where $f(k)$ is $k^2-k$ if $k$ is odd, and $k^2-3k/2$ if $k$ is even. These bounds generalize Turán's theorem on the number of edges in a triangle-free graph, and they are the best possible bounds for this problem, in that for any smaller number of edges there exist graphs that do not contain a $k$-fan. diff --git a/wiki/wikipedia/3414.txt b/wiki/wikipedia/3414.txt deleted file mode 100644 index d2b60b5c4c2c401f059c8ec78e5f857e133710b3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3414.txt +++ /dev/null @@ -1,8 +0,0 @@ -In mathematics - specifically, in measure theory - Malliavin's absolute continuity lemma is a result due to the French mathematician Paul Malliavin that plays a foundational rôle in the regularity (smoothness) theorems of the Malliavin calculus. Malliavin's lemma gives a sufficient condition for a finite Borel measure to be absolutely continuous with respect to Lebesgue measure. - -Let μ be a finite Borel measure on n-dimensional Euclidean space Rn. Suppose that, for every x ∈ Rn, there exists a constant C = C(x) such that -$$ -\left| \int_{\mathbf{R}^{n}} \mathrm{D} \varphi (y) (x) \mathrm{d} \mu(y) \right| \leq C(x) \| \varphi \|_{\infty} -$$ - -for every smooth function φ : Rn → R with compact support. Then μ is absolutely continuous with respect to n-dimensional Lebesgue measure λn on Rn. In the above, Dφ(y) denotes the Fréchet derivative of φ at y and ||φ||∞ denotes the supremum norm of φ. diff --git a/wiki/wikipedia/3415.txt b/wiki/wikipedia/3415.txt deleted file mode 100644 index b987e4aa4f6cac336b9428790c6a45fcf5faff2f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3415.txt +++ /dev/null @@ -1,179 +0,0 @@ -In matrix calculus, Jacobi's formula expresses the derivative of the determinant of a matrix A in terms of the adjugate of A and the derivative of A. - -If A is a differentiable map from the real numbers to n × n matrices, then -$$ -\frac{d}{dt} \det A(t) = \operatorname{tr} \left (\operatorname{adj}(A(t)) \frac{dA(t)}{dt}\right ) = \left(\det A(t) \right) \cdot \operatorname{tr} \left (A(t)^{-1} \cdot \frac{dA(t)}{dt}\right ) -$$ - -where tr(X) is the trace of the matrix X (the last expression is valid when A(t) is invertible). - -As a special case, -$$ -{\partial \det(A) \over \partial A_{ij}} = \operatorname{adj}(A)_{ji}. -$$ - -Equivalently, if dA stands for the differential of A, the general formula is -$$ - d \det (A) = \operatorname{tr} (\operatorname{adj}(A) dA). -$$ - -It is named after the mathematician Carl Gustav Jacob Jacobi. - -We first prove a preliminary lemma: - -Lemma. Let A and B be a pair of square matrices of the same dimension n. Then -$$ -\sum_i \sum_j A_{ij} B_{ij} = \operatorname{tr} (A^{\rm T} B). -$$ - -Proof. The product AB of the pair of matrices has components -$$ -(AB)_{jk} = \sum_i A_{ji} B_{ik}. -$$ - -Replacing the matrix A by its transpose AT is equivalent to permuting the indices of its components: -$$ -(A^{\rm T} B)_{jk} = \sum_i A_{ij} B_{ik}.
$$ - -The result follows by taking the trace of both sides: -$$ -\operatorname{tr} (A^{\rm T} B) = \sum_j (A^{\rm T} B)_{jj} = \sum_j \sum_i A_{ij} B_{ij} = \sum_i \sum_j A_{ij} B_{ij}.\ \square -$$ - -Theorem. (Jacobi's formula) For any differentiable map A from the real numbers to n × n matrices, -$$ -d \det (A) = \operatorname{tr} (\operatorname{adj}(A) dA). -$$ - -Proof. Laplace's formula for the determinant of a matrix A can be stated as -$$ -\det(A) = \sum_j A_{ij} \operatorname{adj}^{\rm T} (A)_{ij}. -$$ - -Notice that the summation is performed over some arbitrary row i of the matrix. - -The determinant of A can be considered to be a function of the elements of A: -$$ -\det(A) = F(A_{11}, A_{12}, \ldots , A_{21}, A_{22}, \ldots , A_{nn}) -$$ - -so that, by the chain rule, its differential is -$$ -d \det(A) = \sum_i \sum_j {\partial F \over \partial A_{ij}} dA_{ij}. -$$ - -This summation is performed over all n×n elements of the matrix. - -To find ∂F/∂Aij, consider that on the right hand side of Laplace's formula, the index i can be chosen at will. (Any other choice would eventually yield the same result, but the calculations could be much harder.) In particular, it can be chosen to match the first index of ∂ / ∂Aij: -$$ -{\partial \det(A) \over \partial A_{ij}} = {\partial \sum_k A_{ik} \operatorname{adj}^{\rm T}(A)_{ik} \over \partial A_{ij}} = \sum_k {\partial (A_{ik} \operatorname{adj}^{\rm T}(A)_{ik}) \over \partial A_{ij}} -$$ - -Thus, by the product rule, -$$ -{\partial \det(A) \over \partial A_{ij}} = \sum_k {\partial A_{ik} \over \partial A_{ij}} \operatorname{adj}^{\rm T}(A)_{ik} + \sum_k A_{ik} {\partial \operatorname{adj}^{\rm T}(A)_{ik} \over \partial A_{ij}}. -$$ - -Now, if an element of a matrix Aij and a cofactor adjT(A)ik of element Aik lie in the same row (or column), then the cofactor will not be a function of Aij, because the cofactor of Aik is expressed in terms of elements not in its own row (nor column). Thus, -$$ -{\partial \operatorname{adj}^{\rm T}(A)_{ik} \over \partial A_{ij}} = 0, -$$ - -so -$$ -{\partial \det(A) \over \partial A_{ij}} = \sum_k \operatorname{adj}^{\rm T}(A)_{ik} {\partial A_{ik} \over \partial A_{ij}}. -$$ - -All the elements of A are independent of each other, i.e. -$$ -{\partial A_{ik} \over \partial A_{ij}} = \delta_{jk}, -$$ - -where δ is the Kronecker delta, so -$$ -{\partial \det(A) \over \partial A_{ij}} = \sum_k \operatorname{adj}^{\rm T}(A)_{ik} \delta_{jk} = \operatorname{adj}^{\rm T}(A)_{ij}. -$$ - -Therefore, -$$ -d(\det(A)) = \sum_i \sum_j \operatorname{adj}^{\rm T}(A)_{ij} d A_{ij}, -$$ - -and applying the Lemma yields -$$ -d(\det(A)) = \operatorname{tr}(\operatorname{adj}(A) dA).\ \square -$$ - -Lemma 1. $\det'(I)=\mathrm{tr}$, where $\det'$ is the differential of $\det$. - -This equation means that the differential of $\det$, evaluated at the identity matrix, is equal to the trace. The differential $\det'(I)$ is a linear operator that maps an n × n matrix to a real number. - -Proof. Using the definition of a directional derivative together with one of its basic properties for differentiable functions, we have -$$ -\det'(I)(T)=\nabla_T \det(I)=\lim_{\varepsilon\to0}\frac{\det(I+\varepsilon T)-\det I}{\varepsilon} -$$ - -The expression $\det(I+\varepsilon T)$ is a polynomial in $\varepsilon$ of order n. It is closely related to the characteristic polynomial of $T$. The constant term ($\varepsilon = 0$) is 1, while the linear term in $\varepsilon$ is $\mathrm{tr}\ T$.
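Lemma 1 can be sanity-checked numerically. The following minimal sketch (assuming Python with numpy; the matrix $T$ and the step size are arbitrary choices) compares the finite-difference directional derivative of $\det$ at the identity with $\mathrm{tr}\ T$:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 4))   # arbitrary direction matrix
eps = 1e-7

# det(I + eps*T) ~ 1 + eps*tr(T), so the difference quotient approximates tr(T).
numeric = (np.linalg.det(np.eye(4) + eps * T) - 1.0) / eps
print(numeric, np.trace(T))       # agree to about 1e-6
```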
Lemma 2. For an invertible matrix A, we have: $\det'(A)(T)=\det A \mathrm{tr}(A^{-1}T)$. - -Proof. Consider the following function of X: -$$ -\det X = \det (A A^{-1} X) = (\det A) \ \det(A^{-1} X) -$$ - -We calculate the differential of $\det X$ and evaluate it at $X = A$ using Lemma 1, the equation above, and the chain rule: -$$ -\det'(A)(T) = \det A \ \det'(I) (A^{-1} T) = \det A \ \mathrm{tr}(A^{-1} T) -$$ - -Theorem. (Jacobi's formula) -$$ -\frac{d}{dt} \det A = \mathrm{tr}\left(\mathrm{adj}\ A\frac{dA}{dt}\right) -$$ - -Proof. If $A$ is invertible, by Lemma 2, with $T = dA/dt$, -$$ -\frac{d}{dt} \det A = \det A \ \mathrm{tr} \left(A^{-1} \frac{dA}{dt}\right) = \mathrm{tr} \left( \mathrm{adj}\ A \frac{dA}{dt} \right) -$$ - -using the equation relating the adjugate of $A$ to $A^{-1}$. The formula then holds for all matrices, since the set of invertible matrices is dense in the space of matrices and both sides of the formula are continuous in the entries of A. - -The following is a useful relation connecting the trace to the determinant of the associated matrix exponential: -$$ - \det e^{tB} = e^{\operatorname{tr} \left(tB\right)} -$$ - -This statement is clear for diagonal matrices, and a proof of the general claim follows. - -For any invertible matrix $A(t)$, in the previous section "Via Chain Rule", we showed that -$$ -\frac{d}{dt} \det A(t) = \det A(t) \operatorname{tr} \left(A(t)^{-1} \frac{d}{dt} A(t)\right) -$$ - -Considering $A(t) = \exp(tB)$ in this equation yields: -$$ -\frac{d}{dt} \det e^{tB} =\operatorname{tr}(B) \det e^{tB} -$$ - -The desired result follows as the solution to this ordinary differential equation. - -Several forms of the formula underlie the Faddeev–LeVerrier algorithm for computing the characteristic polynomial, and explicit applications of the Cayley–Hamilton theorem. For example, starting from the following equation, which was proved above: -$$ -\frac{d}{dt} \det A(t) = \det A(t) \ \operatorname{tr} \left(A(t)^{-1} \frac{d}{dt} A(t)\right) -$$ - -and using $A(t) = t I - B$, we get: -$$ -\frac{d}{dt} \det (tI-B) = \det (tI-B) \operatorname{tr}[(tI-B)^{-1}] = \operatorname{tr}[\operatorname{adj} (tI-B)] -$$ - -where adj denotes the adjugate matrix. diff --git a/wiki/wikipedia/3416.txt b/wiki/wikipedia/3416.txt deleted file mode 100644 index c0dc0c8d1950ba9e48b2bef31515ba24de216aa7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3416.txt +++ /dev/null @@ -1,91 +0,0 @@ -Avalanche is a decentralized, open-source blockchain with smart contract functionality. AVAX is the native cryptocurrency of the platform. - -==History== - -Avalanche started off as a protocol for solving consensus in a network of unreliable machines, where failures may be crash-fault or Byzantine. The protocol's fundamentals were first shared on the InterPlanetary File System (aka IPFS) in May 2018 by a pseudonymous group of enthusiasts going by the name "Team Rocket". It was later developed by a dedicated team of researchers from Cornell University. The research was led by Emin Gün Sirer, a professor of computer science and software engineer, assisted by doctoral students Maofan "Ted" Yin and Kevin Sekniqi. Following the research stage, a startup technology company was founded to develop a blockchain network that would meet complex finance industry requirements.
As a number of sources indicate, "the project was in the realm of academic circles interested in exploring consensus protocols that serve the same purpose of proof-of-work — securing transactions — but are more energy-efficient and have the potential to provide a basis for democratic development and the inclusion of more users in the consensus process". Although the original paper focused on a single protocol, namely Avalanche, it implicitly introduced a broad spectrum of voting-based, or quorum-based, consensus protocols, called the Snow family. - -===Background=== - -Consensus protocols are the basis for the state machine replication problem, which aims to enable a set of machines to achieve agreement over a network even when a subset of the machines are corrupted. There are two major families of consensus protocols to date: classical consensus protocols and Nakamoto consensus protocols. The first achieves consensus through quorums, and thus requires voting. Famous instantiations are Paxos (in the crash-fault-tolerant setting) and PBFT (in the Byzantine-fault-tolerant case). These protocols achieve agreement in a manner similar to a parliament: a proposal (transaction) is proposed and voted on to be accepted or rejected. If sufficient votes cast by the various replicas are accumulated (typically collected through an elected leader replica), then a quorum is achieved, and thus agreement. - -The second family, pioneered by Satoshi Nakamoto with Bitcoin, is that of Nakamoto consensus. Unlike quorum-based protocols, machines operating an instance of Nakamoto consensus achieve agreement on transactions by downloading the longest chain (typically called a fork). In Bitcoin, the longest chain is verified by ensuring that it is the one with the highest degree of work (or proof of work). - -Snow, while quorum-based, appears to be a universal generalization of all quorum-based protocols. Unlike prior work, which requires that quorums be deterministic (i.e. the failure probability is precisely zero), Avalanche loosens this requirement, thus enabling quorum-based protocols to estimate the global network state *with errors*. The assumptions are as follows: - -Processors - -* Processors operate at arbitrary speed. - -* Processors may experience arbitrary failures, even Byzantine ones. - -* Processors with stable storage may re-join the protocol after failures. - -* Processors can collude, lie, or otherwise attempt to subvert the protocol. (That is, Byzantine failures are permissible.) - -Network - -* Processors can send messages to any other processor. - -* Messages are sent asynchronously and may take arbitrarily long to deliver. - -* Messages may be lost, reordered, or duplicated. - -* Messages are delivered without corruption, i.e. an adversary cannot forge digital signatures. - -=== Safety and liveness properties === - -The Snow family generalizes the typical definitions of safety and liveness encountered in quorum-based protocols. For Avalanche specifically, these properties are: - -; Agreement (or consistency, or safety) - -If any node (or machine) finalizes a value *v*, no other node will finalize another value *u* that conflicts with *v* with probability higher than $\epsilon$. - -; Termination (or liveness) - -If the network resumes synchronous operation, then all nodes will achieve agreement. - -Avalanche, like other asynchronous networks, is not guaranteed to terminate and thus does not have the liveness property during asynchrony.
Like Paxos, Avalanche's goal is to ensure fault tolerance: it guarantees safety under asynchrony, but not liveness. This is in contrast to Nakamoto consensus, which guarantees liveness, but not safety under asynchrony. - -== Basic Operation == - -While Yin et al. formalize the Snow family up to Avalanche, the various optimizations introduced to make Avalanche viable for a real-world deployment make its operation complex. However, the basic Snowflake protocol is simple to describe. - -=== Snowflake === - -The basic primitive used by Avalanche, called Snowflake, is similar in operation to the SIR model, in that nodes in the system adopt the state of some subset of other nodes in the system. In other words, instead of the nodes in the system being in a state of susceptible, infectious, or recovered as in the SIR model, they are in a binary state of 0 or 1, and must decide between the two in order to reach agreement. The ability to achieve agreement over a binary decision implies the ability to achieve agreement over any arbitrary number of choices. This therefore enables consensus on transactions. - -==== Initialization ==== - -Nodes in the system are initialized to either of the elements of a binary set. In the original paper, the nomenclature of red and blue is adopted to label the two binary choices. There is no preference between the red and blue colors, and therefore nodes can be initialized arbitrarily between the two. - -==== Algorithm ==== - -The operation is simple: every node u in the system samples a random set of size k of other nodes from its neighbor view and adopts the color that some threshold majority of the k nodes prefer. This procedure is repeated until the same color majority is observed for some number of consecutive rounds, as in the sketch below.
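A minimal simulation sketch of this loop follows (assuming Python; the sample size k, the majority threshold alpha, and the number of consecutive agreeing rounds beta are illustrative parameters, not values mandated by the protocol):

```python
import random

def snowflake(initial_colors, k=10, alpha=7, beta=5, seed=0):
    """Toy Snowflake: repeated sampling until beta consecutive agreeing rounds."""
    rng = random.Random(seed)
    color = dict(initial_colors)         # node id -> 0 or 1
    count = {u: 0 for u in color}        # consecutive-round counters
    undecided = set(color)
    while undecided:
        for u in list(undecided):
            others = [v for v in color if v != u]
            tally = sum(color[v] for v in rng.sample(others, k))
            if tally >= alpha:
                preferred = 1            # threshold majority for color 1
            elif k - tally >= alpha:
                preferred = 0            # threshold majority for color 0
            else:
                continue                 # no threshold majority this round
            count[u] = count[u] + 1 if preferred == color[u] else 1
            color[u] = preferred
            if count[u] >= beta:         # same majority for beta rounds
                undecided.discard(u)
    return color

# 100 nodes initialized arbitrarily between the two colors.
final = snowflake([(i, 0 if i < 40 else 1) for i in range(100)])
print(set(final.values()))               # typically converges to one color
```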
==Industry impact and practical aspects== - -The founders of the protocol claim to have developed a faster consensus protocol that has the potential to close the gap between currently used centralized payment systems and the limitations of blockchains. Bloomberg cites the platform as being able to process 6,500 transactions per second, which is still slower than Visa (which claims to process 24,000 transactions per second) but much faster than current crypto-technologies can provide. For example, Bitcoin processes about 6 transactions per second and Ethereum about 15. The source also states that the technology has the capability to "wall off" parts of the blockchain, thus making financial services available to separate legal entities (for example, countries or financial systems) following specific regulations and laws in different countries. - -== See also == - -* Fault-tolerant computer system - -* Replication (computing) - -* Paxos - -* Virtual synchrony - -* Raft - -* Chandra–Toueg consensus algorithm - -== References == - -Category:Distributed algorithms - -Category:Fault tolerance - -Category:Fault-tolerant computer systems - -Category:Data synchronization diff --git a/wiki/wikipedia/3417.txt b/wiki/wikipedia/3417.txt deleted file mode 100644 index bba550348aad152f5bc992dd131c126564ed1c88..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3417.txt +++ /dev/null @@ -1,35 +0,0 @@ -The Fitting lemma, named after the mathematician Hans Fitting, is a basic statement in abstract algebra. Suppose M is a module over some ring. If M is indecomposable and has finite length, then every endomorphism of M is either an automorphism or nilpotent. - -As an immediate consequence, we see that the endomorphism ring of every finite-length indecomposable module is local. - -A version of Fitting's lemma is often used in the representation theory of groups. This is in fact a special case of the version above, since every K-linear representation of a group G can be viewed as a module over the group algebra KG. - -To prove Fitting's lemma, we take an endomorphism f of M and consider the following two sequences of submodules: - -* The first sequence is the descending sequence $\mathrm{im}(f) \supseteq \mathrm{im}(f^2) \supseteq \mathrm{im}(f^3) \supseteq \ldots$, - -* the second sequence is the ascending sequence $\mathrm{ker}(f) \subseteq \mathrm{ker}(f^2) \subseteq \mathrm{ker}(f^3) \subseteq \ldots$ - -Because $M$ has finite length, both of these sequences must eventually stabilize, so there is some $n$ with $\mathrm{im}(f^n) = \mathrm{im}(f^{n^\prime})$ for all $n^\prime \geq n$, and some $m$ with $\mathrm{ker}(f^m) = \mathrm{ker}(f^{m^\prime})$ for all $m^\prime \geq m$. - -Now let $k = \max\{n, m\}$, and note that by construction $\mathrm{im} (f^{2k}) = \mathrm{im} (f^{k})$ and $\mathrm{ker} (f^{2k}) = \mathrm{ker} (f^{k})$. - -We claim that $\mathrm{ker}\left(f^k\right) \cap \mathrm{im}\left(f^k\right) = 0$. Indeed, every $x\in \mathrm{ker}\left(f^k\right) \cap \mathrm{im}\left(f^k\right)$ satisfies $x=f^k\left(y\right)$ for some $y\in M$ but also $f^k\left(x\right)=0$, so that $0=f^k\left(x\right)=f^k\left(f^k\left(y\right)\right)=f^{2k}\left(y\right)$, therefore $y\in\mathrm{ker}\left(f^{2k}\right)=\mathrm{ker}\left(f^k\right)$ and thus $x=f^k\left(y\right)=0$. - -Moreover, $\mathrm{ker}\left(f^k\right) + \mathrm{im}\left(f^k\right) = M$: for every $x\in M$, there exists some $y\in M$ such that $f^k\left(x\right)=f^{2k}\left(y\right)$ (since $f^k\left(x\right)\in\mathrm{im}\left(f^k\right)=\mathrm{im}\left(f^{2k}\right)$), and thus $f^k\left(x-f^k\left(y\right)\right) = f^k\left(x\right)-f^{2k}\left(y\right)=0$, so that $x-f^k\left(y\right)\in\mathrm{ker}\left(f^k\right)$ and thus $x\in \mathrm{ker}\left(f^k\right)+f^k\left(y\right)\subseteq \mathrm{ker}\left(f^k\right) + \mathrm{im}\left(f^k\right)$. - -Consequently, $M$ is the direct sum of $\mathrm{im}(f^k)$ and $\mathrm{ker}(f^k)$. Because $M$ is indecomposable, one of those two summands must be equal to $M$, and the other must be the trivial submodule. Depending on which of the two summands is zero, we find that $f$ is either bijective or nilpotent. diff --git a/wiki/wikipedia/3418.txt b/wiki/wikipedia/3418.txt deleted file mode 100644 index 70dca9992e339091eefc2e73dfc06a0adc8d0565..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3418.txt +++ /dev/null @@ -1,45 +0,0 @@ -In mathematics, informal logic and argument mapping, a lemma (plural lemmas or lemmata) is a generally minor, proven proposition which is used as a stepping stone to a larger result. For that reason, it is also known as a "helping theorem" or an "auxiliary theorem". In many cases, a lemma derives its importance from the theorem it aims to prove; however, a lemma can also turn out to be more important than originally thought. The word "lemma" derives from Ancient Greek, where it means "anything which is received", such as a gift, profit, or a bribe. - -There is no formal distinction between a lemma and a theorem, only one of intention (see Theorem terminology). However, a lemma can be considered a minor result whose sole purpose is to help prove a more substantial theorem – a step in the direction of proof.
- -A good stepping stone can lead to many others. Some powerful results in mathematics are known as lemmas, first named for their originally minor purpose. These include, among others: - -* Bézout's lemma - -* Dehn's lemma - -* Euclid's lemma - -* Farkas' lemma - -* Fatou's lemma - -* Gauss's lemma - -* Greendlinger's lemma - -* Itô's lemma - -* Jordan's lemma - -* Nakayama's lemma - -* Poincaré's lemma - -* Riesz's lemma - -* Schur's lemma - -* Schwarz's lemma - -* Sperner's lemma - -* Urysohn's lemma - -* Vitali covering lemma - -* Yoneda's lemma - -* Zorn's lemma - -While these results originally seemed too simple or too technical to warrant independent interest, they eventually turned out to be central to the theories in which they occur. diff --git a/wiki/wikipedia/3419.txt b/wiki/wikipedia/3419.txt deleted file mode 100644 index 4a9bd3f63006a0a8f5519a7c054610fa58e90e9d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3419.txt +++ /dev/null @@ -1,21 +0,0 @@ -In graph theory, a Halin graph is a type of planar graph, constructed by connecting the leaves of a tree into a cycle. - -The tree must have at least four vertices, none of which has exactly two neighbors; it should be drawn in the plane so none of its edges cross (this is called a planar embedding), and the cycle connects the leaves in their clockwise ordering in this embedding. Thus, the cycle forms the outer face of the Halin graph, with the tree inside it. - -Halin graphs are named after German mathematician Rudolf Halin, who studied them in 1971. - -The cubic Halin graphs – the ones in which each vertex touches exactly three edges – had already been studied over a century earlier by Kirkman. - -More strongly, every Halin graph is almost pancyclic, in the sense that it has cycles of all lengths from 3 to n with the possible exception of a single even length. Moreover, any Halin graph remains almost pancyclic if a single edge is contracted, and every Halin graph without interior vertices of degree three is pancyclic. - -The incidence chromatic number of a Halin graph G with maximum degree Δ(G) greater than four is Δ(G) + 1. This is the number of colors needed to color all pairs (v,e), where v is a vertex of the graph and e is an edge incident to v, obeying certain constraints on the coloring. - -Pairs that share a vertex or that share an edge are not allowed to have the same color. In addition, a pair (v,e) cannot have the same color as another pair that uses the other endpoint of e. - -For Halin graphs with Δ(G) = 3 or 4, the incidence chromatic number may be as large as 5 or 6 respectively. - -It is possible to test whether a given n-vertex graph is a Halin graph in linear time, by finding a planar embedding of the graph (if one exists), and then testing whether there exists a face that has at least n/2 + 1 vertices, all of degree three. If so, there can be at most four such faces, and it is possible to check in linear time for each of them whether the rest of the graph forms a tree with the vertices of this face as its leaves. On the other hand, if no such face exists, then the graph is not Halin. Alternatively, a graph with n vertices and m edges is Halin if and only if it is planar, 3-connected, and has a face whose number of vertices equals the circuit rank m - n + 1 of the graph, all of which can be checked in linear time.
Other methods for recognizing Halin graphs in linear time include the application of Courcelle's theorem, or a method based on graph rewriting, neither of which relies on knowing the planar embedding of the graph. - -Every Halin graph has treewidth 3. Therefore, many graph optimization problems that are NP-complete for arbitrary planar graphs, such as finding a maximum independent set, may be solved in linear time on Halin graphs using dynamic programming or Courcelle's theorem, or in some cases (such as the construction of Hamiltonian cycles) by direct algorithms. - -These graphs were also studied in 1965 by Hans Rademacher. Rademacher calls these graphs based polyhedra. He defines them as the cubic polyhedral graphs with f faces in which one of the faces has f - 1 sides. The graphs that fit this definition are exactly the cubic Halin graphs. However, these names are ambiguous. Some authors use the name "skirted trees" to refer to planar graphs formed from trees by connecting the leaves into a cycle, but without requiring that the internal vertices of the tree have degree three or more. And like "based polyhedra", the "roofless polyhedra" name may also refer to the cubic Halin graphs. The convex polyhedra whose graphs are Halin graphs have also been called domes. diff --git a/wiki/wikipedia/342.txt b/wiki/wikipedia/342.txt deleted file mode 100644 index 2431390d0e43db316af132e2b157abf3f3f8a750..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/342.txt +++ /dev/null @@ -1,68 +0,0 @@ -In geometry, the spiral of Theodorus (also called square root spiral, Einstein spiral, or Pythagorean spiral) is a spiral composed of right triangles, placed edge-to-edge. It was named after Theodorus of Cyrene. - -The spiral is started with an isosceles right triangle, with each leg having unit length. Another right triangle is formed, an automedian right triangle with one leg being the hypotenuse of the prior triangle (with length $\sqrt{2}$) and the other leg having length of 1; the length of the hypotenuse of this second triangle is $\sqrt{3}$. The process then repeats; the nth triangle in the sequence is a right triangle with side lengths $\sqrt{n}$ and 1, and with hypotenuse $\sqrt{n+1}$. For example, the 16th triangle has sides measuring 4 ($=\sqrt{16}$), 1 and hypotenuse of $\sqrt{17}$. - -Although all of Theodorus' work has been lost, Plato put Theodorus into his dialogue Theaetetus, which tells of his work. It is assumed that Theodorus had proved that all of the square roots of non-square integers from 3 to 17 are irrational by means of the Spiral of Theodorus. - -Plato does not attribute the irrationality of the square root of 2 to Theodorus, because it was well known before him. Theodorus and Theaetetus split the rational numbers and irrational numbers into different categories. - -Each of the triangles' hypotenuses hn gives the square root of the corresponding natural number, with $h_1 = \sqrt{2}$. - -Plato, tutored by Theodorus, questioned why Theodorus stopped at $\sqrt{17}$. The reason is commonly believed to be that the $\sqrt{17}$ hypotenuse belongs to the last triangle that does not overlap the figure. - -In 1958, Erich Teuffel proved that no two hypotenuses will ever coincide, regardless of how far the spiral is continued. Also, if the sides of unit length are extended into a line, they will never pass through any of the other vertices of the total figure. - -Theodorus stopped his spiral at the triangle with a hypotenuse of $\sqrt{17}$. If the spiral is continued to infinitely many triangles, many more interesting characteristics are found.
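The construction itself is easy to carry out numerically. The following minimal sketch (assuming Python) builds the spiral vertex by vertex, attaching a unit leg perpendicular to the current hypotenuse, and confirms that the nth hypotenuse has length $\sqrt{n}$:

```python
import math

x, y = 1.0, 0.0                    # end of the first unit leg
for n in range(1, 18):
    h = math.hypot(x, y)           # current hypotenuse
    assert abs(h - math.sqrt(n)) < 1e-9
    # attach a unit leg perpendicular to the current hypotenuse:
    # the new squared length is x**2 + y**2 + 1 = n + 1
    x, y = x - y / h, y + x / h

print(math.hypot(x, y))            # sqrt(18), were the spiral continued
```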
- -If φn is the angle of the nth triangle (or spiral segment), then: -$$ -\tan\left(\varphi_n\right)=\frac{1}{\sqrt{n}}. -$$ - -Therefore, the growth of the angle φn of the next triangle n is: -$$ -\varphi_n=\arctan\left(\frac{1}{\sqrt{n}}\right). -$$ - -The sum of the angles of the first k triangles is called the total angle φ(k) for the kth triangle. It grows proportionally to the square root of k, with a bounded correction term c2(k): -$$ -\varphi\left (k\right)=\sum_{n=1}^k\varphi_n = 2\sqrt{k}+c_2(k) -$$ - -where -$$ -\lim_{k \to \infty} c_2(k)= -2.157782996659\ldots -$$ - -The growth of the radius of the spiral at a certain triangle n is -$$ -\Delta r=\sqrt{n+1}-\sqrt{n}. -$$ - -The Spiral of Theodorus approximates the Archimedean spiral. Just as the distance between two windings of the Archimedean spiral equals the mathematical constant π, as the number of spins of the spiral of Theodorus approaches infinity, the distance between two consecutive windings quickly approaches π. After only the fifth winding, the distance is already a 99.97% accurate approximation to π. - -The question of how to interpolate the discrete points of the spiral of Theodorus by a smooth curve was proposed and answered by analogy with Euler's formula for the gamma function as an interpolant for the factorial function. Davis found the function -$$ -T(x) = \prod_{k=1}^\infty \frac{1 + i/\sqrt{k}}{1 + i/\sqrt{x+k}} \qquad ( -1 < x < \infty ) -$$ - -which was further studied by his student Leader and by Iserles (in an appendix). An axiomatic characterization of this function has been given: it is the unique function that satisfies the functional equation -$$ -f(x+1) = \left( 1 + \frac{i}{\sqrt{x+1} }\right) \cdot f(x), -$$ - -the initial condition $f(0) = 1,$ and monotonicity in both argument and modulus; alternative conditions and weakenings have also been studied, and an alternative derivation has been given. - -An analytic continuation of Davis' continuous form of the Spiral of Theodorus, extending in the opposite direction from the origin, has also been constructed. - -In the figure the nodes of the original (discrete) Theodorus spiral are shown as small green circles. The blue ones are those added in the opposite direction of the spiral. - -Only nodes $n$ with an integer value of the polar radius $r_n=\pm\sqrt{n}$ are numbered in the figure. - -The dashed circle in the coordinate origin $O$ is the circle of curvature at $O$. diff --git a/wiki/wikipedia/3420.txt b/wiki/wikipedia/3420.txt deleted file mode 100644 index ef20d2397fe30eeffe40148bd88cdb3ce1ea14dd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3420.txt +++ /dev/null @@ -1,17 +0,0 @@ -In the mathematical theory of Kleinian groups, the Ahlfors finiteness theorem describes the quotient of the domain of discontinuity by a finitely generated Kleinian group. The theorem was proved by Ahlfors, apart from a gap that was filled by Greenberg. - -The Ahlfors finiteness theorem states that if Γ is a finitely-generated Kleinian group with region of discontinuity Ω, then - -Ω/Γ has a finite number of components, each of which is a compact Riemann surface with a finite number of points removed. - -The Bers area inequality is a quantitative refinement of the Ahlfors finiteness theorem proved by Bers.
It states that if Γ is a non-elementary finitely-generated Kleinian group with N generators and with region of discontinuity Ω, then - -Area(Ω/Γ) ≤ 4π(N - 1) - -with equality only for Schottky groups. (The area is given by the Poincaré metric in each component.) - -Moreover, if Ω1 is an invariant component then - -Area(Ω/Γ) ≤ 2Area(Ω1/Γ) - -with equality only for Fuchsian groups of the first kind (so in particular there can be at most two invariant components). diff --git a/wiki/wikipedia/3421.txt b/wiki/wikipedia/3421.txt deleted file mode 100644 index ddd0ad3dbb8a39c3aab94ec762c287be5d29af39..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3421.txt +++ /dev/null @@ -1,126 +0,0 @@ -The Zarankiewicz problem, an unsolved problem in mathematics, asks for the largest possible number of edges in a bipartite graph that has a given number of vertices and has no complete bipartite subgraphs of a given size. It belongs to the field of extremal graph theory, a branch of combinatorics, and is named after the Polish mathematician Kazimierz Zarankiewicz, who proposed several special cases of the problem in 1951. - -A bipartite graph $G=(U\cup V,E)$ consists of two disjoint sets of vertices $U$ and $V$, and a set of edges each of which connects a vertex in $U$ to a vertex in $V$. No two edges can both connect the same pair of vertices. A complete bipartite graph is a bipartite graph in which every pair of a vertex from $U$ and a vertex from $V$ is connected to each other. A complete bipartite graph in which $U$ has $s$ vertices and $V$ has $t$ vertices is denoted $K_{s,t}$. If $G=(U\cup V,E)$ is a bipartite graph, and there exists a set of $s$ vertices of $U$ and $t$ vertices of $V$ that are all connected to each other, then these vertices induce a subgraph of the form $K_{s,t}$. (In this formulation, the ordering of $s$ and $t$ is significant: the set of $s$ vertices must be from $U$ and the set of $t$ vertices must be from $V$, not vice versa.) - -The Zarankiewicz function $z(m,n;s,t)$ denotes the maximum possible number of edges in a bipartite graph $G=(U\cup V,E)$ for which $|U|=m$ and $|V|=n$, but which does not contain a subgraph of the form $K_{s,t}$. As a shorthand for an important special case, $z(n;t)$ is the same as $z(n,n;t,t)$. The Zarankiewicz problem asks for a formula for the Zarankiewicz function, or (failing that) for tight asymptotic bounds on the growth rate of $z(n;t)$ assuming that $t$ is a fixed constant, in the limit as $n$ goes to infinity. - -For $s=t=2$ this problem is the same as determining cages with girth six. The Zarankiewicz problem, cages and finite geometry are strongly interrelated. - -The same problem can also be formulated in terms of digital geometry. The possible edges of a bipartite graph $G=(U\cup V,E)$ can be visualized as the points of a $|U|\times |V|$ rectangle in the integer lattice, and a complete subgraph is a set of rows and columns in this rectangle in which all points are present. Thus, $z(m,n;s,t)$ denotes the maximum number of points that can be placed within an $m\times n$ grid in such a way that no subset of rows and columns forms a complete $s\times t$ grid. The case of $z(4;3)$ is relatively simple: a 13-edge bipartite graph with four vertices on each side of the bipartition, and no $K_{3,3}$ subgraph, may be obtained by adding one of the long diagonals to the graph of a cube. In the other direction, if a bipartite graph with 14 edges has four vertices on each side, then two vertices on each side must have degree four. 
Removing these four vertices and their 12 incident edges leaves a nonempty set of edges, any of which together with the four removed vertices forms a $K_{3,3}$ subgraph. - -The Kővári–Sós–Turán theorem provides an upper bound on the solution to the Zarankiewicz problem. It was established by Tamás Kővári, Vera T. Sós and Pál Turán shortly after the problem had been posed: -$$ -z(m,n;s,t) < (s-1)^{1/t} (n-t+1) m^{1-1/t} + (t-1)m. -$$ - -Kővári, Sós, and Turán originally proved this inequality for $z(n;t)$. Shortly afterwards, Hyltén-Cavallius observed that essentially the same argument can be used to prove the above inequality. - -An improvement on the second term of the upper bound on $z(n;t)$ was given by Štefan Znám: -$$ -z(n;t) < (t-1)^{1/t} n^{2-1/t} + \frac{1}{2}(t-1)n. -$$ - -If $s$ and $t$ are assumed to be constant, then asymptotically, using the big O notation, these formulas can be expressed as -$$ -z(m,n;s,t)=O(mn^{1-1/s}+n) -$$; -$$ -z(m,n;s,t)=O(nm^{1-1/t}+m) -$$. - -In the particular case $m=n$, assuming without loss of generality that $s \leq t$, we have the asymptotic upper bound -$$ -z(n,n;s,t)=O(n^{2-1/t}). -$$ - -One can verify that among the two asymptotic upper bounds of $z(m,n;s,t)$ in the previous section, the first bound is better when $m=o(n^{s/t})$, and the second bound becomes better when $m=\omega(n^{s/t})$. Therefore, if one can show a lower bound for $z(n^{s/t},n;s,t)$ that matches the upper bound up to a constant, then by a simple sampling argument (on either an $n^{t/s}\times t$ bipartite graph or an $m\times m^{s/t}$ bipartite graph that achieves the maximum edge number), we can show that for all $m,n$, one of the above two upper bounds is tight up to a constant. This leads to the following question: is it the case that for any fixed $s\leq t$ and $m\leq n^{s/t}$, we have -$$ -z(m,n;s,t)=\Omega(mn^{1-1/s}) -$$? - -In the special case $m=n$, up to constant factors, $z(n,n;s,t)$ has the same order as $\text{ex}(n,K_{s,t})$, the maximum number of edges in an $n$-vertex (not necessarily bipartite) graph that has no $K_{s,t}$ as a subgraph. In one direction, a bipartite graph with $n$ vertices on each side and $z(n,n;s,t)$ edges must have a subgraph with $n$ vertices and at least $z(n,n;s,t)/4$ edges; this can be seen from choosing $n/2$ vertices uniformly at random from each side, and taking the expectation. In the other direction, we can transform a graph with $n$ vertices and no copy of $K_{s,t}$ into a bipartite graph with $n$ vertices on each side of its bipartition, twice as many edges and still no copy of $K_{s,t}$, by taking its bipartite double cover. Same as above, with the convention that $s\leq t$, it has been conjectured that -$$ -z(n,n;s,t)=\Theta(n^{2-1/s}) -$$ - -for all constant values of $s,t$. - -For some specific values of $s,t$ (e.g., for $t$ sufficiently larger than $s$, or for $s=2$), the above statements have been proved using various algebraic and random algebraic constructions. At the same time, the answer to the general question is still unknown to us. - -For $s=t=2$, a bipartite graph with $n$ vertices on each side, $\Omega(n^{3/2})$ edges, and no $K_{2,2}$ may be obtained as the Levi graph, or point-line incidence graph, of a projective plane of order $q$, a system of $q^2+q+1$ points and $q^2+q+1$ lines in which each two points determine a unique line, and each two lines intersect at a unique point. 
We construct a bipartite graph associated to this projective plane, with one vertex part consisting of its points and the other of its lines, such that a point and a line are connected if and only if they are incident in the projective plane. This leads to a $K_{2,2}$-free graph with $q^2+q+1$ vertices on each side and $(q^2+q+1)(q+1)$ edges. - -Since this lower bound matches the upper bound given by I. Reiman, we have the asymptotic -$$ -z(n;2)=(1/2+o(1))n^{3/2}. -$$ - -For $s=t=3$, bipartite graphs with $n$ vertices on each side, $\Omega(n^{5/3})$ edges, and no $K_{3,3}$ may again be constructed from finite geometry, by letting the vertices represent points and spheres (of a carefully chosen fixed radius) in a three-dimensional finite affine space, and letting the edges represent point-sphere incidences. - -More generally, consider $s=2$ and any $t$. Let $\mathbb F_q$ be the $q$-element finite field, and $h$ be an element of multiplicative order $t$, in the sense that $H=\{1,h,\dots, h^{t-1}\}$ forms a $t$-element subgroup of the multiplicative group $\mathbb F_q^*$. We say that two nonzero elements $(a,b),(a',b')\in\mathbb F_q\times \mathbb F_q$ are equivalent if we have $a'=h^da$ and $b'=h^db$ for some $d$. Consider a graph $G$ on the set of all equivalence classes $\langle a,b\rangle$, such that $\langle a,b\rangle$ and $\langle x,y\rangle$ are connected if and only if $ax+by\in H$. One can verify that $G$ is well-defined and free of $K_{2,t+1}$, and every vertex in $G$ has degree $q$ or $q-1$. Hence we have the asymptotically tight bound -$$ -z(n,n;2,t+1)=(t^{1/2}+o(1))n^{3/2}. -$$ - -For $t$ sufficiently larger than $s$, the above conjecture $z(n,n;s,t)=\Theta(n^{2-1/s})$ was verified by Kollár, Rónyai, and Szabó and by Alon, Rónyai, and Szabó using the construction of norm graphs and projective norm graphs over finite fields. - -For $t>s!$, consider the norm graph NormGraphp,s with vertex set $\mathbb F_{p^s}$, such that every two vertices $a,b\in\mathbb F_{p^s}$ are connected if and only if $N(a+b)=1$, where $N:\mathbb F_{p^s}\rightarrow\mathbb F_p$ is the norm map -$$ -N(x)=x\cdot x^p\cdot x^{p^2}\cdot\dots\cdot x^{p^{s-1}}=x^{(p^s-1)/(p-1)}. -$$ - -It is not hard to verify that the graph has $p^s$ vertices and at least $p^{2s-1}/2$ edges. To see that this graph is $K_{s,s!+1}$-free, observe that any common neighbor $x$ of $s$ vertices $y_1,\dots,y_s\in\mathbb F_{p^s}$ must satisfy -$$ -1=N(x+y_i)=(x+y_i)\cdot (x+y_i)^p\cdot\dots\cdot (x+y_i)^{p^{s-1}}=(x+y_i)\cdot (x^p+y_i^p)\cdot\dots\cdot (x^{p^{s-1}}+y_i^{p^{s-1}}) -$$ - -for all $i=1,\dots,s$, which is a system of equations that has at most $s!$ solutions. - -The same result can be proved for all $t>(s-1)!$ using the projective norm graph, a construction slightly stronger than the above. The projective norm graph ProjNormGraphp,s is the graph on vertex set $\mathbb F_{p^{s-1}}\times \mathbb F_p^\times$, such that two vertices $(X,x),(Y,y)$ are adjacent if and only if $N(X+Y)=xy$, where $N:\mathbb F_{p^{s-1}}\rightarrow\mathbb F_p$ is the norm map defined by $N(x)=x^{(p^{s-1}-1)/(p-1)}$. By a similar argument to the above, one can verify that it is a $K_{s,t}$-free graph with $\Omega(n^{2-1/s})$ edges. - -The above norm graph approach also gives tight lower bounds on $z(m,n;s,t)$ for certain choices of $m,n$. In particular, a tight lower bound on $z(m,n;2,t)$ is known for arbitrary $t$: if $m=(1+o(1))n^{t/2}$, then we have -$$ -z(m,n;2,t)=(1+o(1))mn^{1/2} -$$.
- -For $2\leq t\leq n$, we say that a collection of subsets $A_1,\dots,A_\ell\subset[n]$ is a clique partition of $H\subset {[n]\choose t}$ if $\bigcup_{i=1}^\ell{A_i\choose t}$ forms a partition of $H$. Observe that for any $k$, if there exists some $H\subset{[n]\choose t}$ of size $(1-o(1)){n\choose t}$ and $m=(1+o(1)){n\choose t}/{k\choose t}$, such that there is a partition of $H$ into $m$ cliques of size $k$, then we have $z(m,n;2,t)=km$. Indeed, supposing $A_1,\dots,A_m\subset[n]$ is a partition of $H$ into $m$ cliques of size $k$, we can let $G$ be the $m\times n$ bipartite graph with $V_1=\{A_1,\dots,A_m\}$ and $V_2=[n]$, such that $A_i\sim v$ in $G$ if and only if $v\in A_i$. Since the $A_i$ form a clique partition, no two of them can share $t$ common elements, so $G$ cannot contain a copy of $K_{2,t}$. - -It remains to show that such a clique partition exists for any $m=(1+o(1))n^{t/2}$. To show this, let $\mathbb F_q$ be the finite field of size $q$ and $V=\mathbb F_q\times \mathbb F_q$. For every polynomial $p(\cdot)$ of degree at most $t-1$ over $\mathbb F_q$, define $C_p=\{(x,p(x)):x\in\mathbb F_q\}\subset V$. Let $\mathcal C$ be the collection of all $C_p$, so that $|\mathcal C|=q^t=n^{t/2}$ and every $C_p$ has size $q=\sqrt{n}$. Clearly no two members of $\mathcal C$ can share $t$ members. Since the only $t$-sets in $V$ that do not belong to $H$ are those that have at least two points sharing the same first coordinate, we know that almost all $t$-subsets of $V$ are contained in some $C_p$. - -Alternative proofs of $\text{ex}(n,K_{s,t})=\Omega(n^{2-1/s})$ for $t$ sufficiently larger than $s$ were also given by Blagojević, Bukh and Karasev and by Bukh using the method of random algebraic constructions. The basic idea is to take a random polynomial $f:\mathbb F_q^s\times \mathbb F_q^s\rightarrow \mathbb F_q$ and consider the graph $G$ between two copies of $\mathbb F_q^s$ whose edges are all those pairs $(x,y)$ such that $f(x,y) = 0$. - -To start with, let $q$ be a prime power and $n=q^s$. Let -$$ -f\in\mathbb F_q[x_1,\dots,x_s,y_1,\dots,y_s]_{\leq s^2} -$$ - -be a random polynomial with degree at most $s^2$ in $X=(x_1,\dots,x_s)$, degree at most $s^2$ in $Y=(y_1,\dots,y_s)$, and furthermore satisfying $f(X,Y)=f(Y,X)$ for all $X,Y$. Let $G$ be the associated random graph on vertex set $\mathbb F_q^s$, such that two vertices $x$ and $y$ are adjacent if and only if $f(x,y)=0$. - -To prove the asymptotic lower bound, it suffices to show that the expected number of edges in $G$ is $\Omega(q^{2s-1})$. For every $s$-subset $U\subset\mathbb F_q^s$, we let $Z_U$ denote the vertex subset of $\mathbb F_q^s\setminus U$ that "vanishes on $f(\cdot,U)$": -$$ -Z_U=\{x\in \mathbb F_q^s\setminus U:f(x,u)=0\text{ for all }u\in U\} -$$. - -Using the Lang–Weil bound for polynomials $f(\cdot,u)$ in $\mathbb F_q^s$, we can deduce that one always has $|Z_U|\leq C$ or $|Z_U|> q/2$ for some large constant $C$, which implies -$$ -\mathbb P(|Z_U|>C)=\mathbb P(|Z_U|>q/2) -$$. - -Since $f$ is chosen randomly over $\mathbb F_q$, it is not hard to show that the right-hand side probability is small, so the expected number of $s$-subsets $U$ with $|Z_U|>C$ is also small. If we remove a vertex from every such $U$, then the resulting graph is $K_{s,C+1}$-free, and the expected number of remaining edges is still large. This finishes the proof that $\text{ex}(n,K_{s,t})=\Omega(n^{2-1/s})$ for all $t$ sufficiently large with respect to $s$.
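Returning to the clique partition construction above, its key property (no two cliques $C_p$ share $t$ members) can be verified directly for small parameters. A minimal sketch, assuming Python and taking $q$ prime for simplicity:

```python
from itertools import product

q, t = 5, 3   # small illustrative parameters, q prime

def poly_eval(coeffs, x):
    """Evaluate a polynomial of degree < t with the given coefficients over F_q."""
    return sum(c * x**i for i, c in enumerate(coeffs)) % q

# One clique C_p = {(x, p(x))} for each of the q**t polynomials of degree < t.
cliques = [frozenset((x, poly_eval(coeffs, x)) for x in range(q))
           for coeffs in product(range(q), repeat=t)]

print(len(cliques), len(set(cliques)))   # 125 125: q**t distinct cliques
# Two distinct polynomials of degree < t agree on at most t-1 points,
# so no two cliques share t vertices:
assert all(len(a & b) < t for a in cliques for b in cliques if a != b)
```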
More recently, there have been a number of results verifying the conjecture $z(m,n;s,t)=\Omega(n^{2-1/s})$ for different values of $s,t$, using similar ideas but with more tools from algebraic geometry. - -The Kővári–Sós–Turán theorem has been used in discrete geometry to bound the number of incidences between geometric objects of various types. As a simple example, a set of $n$ points and $m$ lines in the Euclidean plane has an incidence graph with no $K_{2,2}$, so by the Kővári–Sós–Turán theorem it has $O(nm^{1/2}+m)$ point-line incidences. This bound is tight when $m$ is much larger than $n$, but not when $m$ and $n$ are nearly equal, in which case the Szemerédi–Trotter theorem provides a tighter $O(n^{2/3}+m^{2/3}+n+m)$ bound. However, the Szemerédi–Trotter theorem may be proven by dividing the points and lines into subsets for which the Kővári–Sós–Turán bound is tight. diff --git a/wiki/wikipedia/3422.txt b/wiki/wikipedia/3422.txt deleted file mode 100644 index bd7c1069ffbfc94275da24a49bf3b994e18f9904..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3422.txt +++ /dev/null @@ -1,21 +0,0 @@ -In computational complexity theory and computability theory, a counting problem is a type of computational problem. If R is a search problem then -$$ -c_R(x)=\vert\{y\mid R(x,y)\}\vert -$$ - -is the corresponding counting function and -$$ -\#R=\{(x,y)\mid y\leq c_R(x)\} -$$ - -denotes the corresponding decision problem. - -Note that cR is a search problem while #R is a decision problem; however, cR can be C Cook-reduced to #R (for appropriate C) using a binary search (the reason #R is defined the way it is, rather than being the graph of cR, is to make this binary search possible). - -If NX is a complexity class associated with non-deterministic machines then #X = {#R | R ∈ NX} is the set of counting problems associated with each search problem in NX. In particular, #P is the class of counting problems associated with NP search problems. - -Just as NP has NP-complete problems via many-one reductions, #P has complete problems via parsimonious reductions, problem transformations that preserve the number of solutions. - -== See also == - -* GapP diff --git a/wiki/wikipedia/3423.txt b/wiki/wikipedia/3423.txt deleted file mode 100644 index 13b0f562212e1168af7ed51959df8def04371932..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3423.txt +++ /dev/null @@ -1,5 +0,0 @@ -Cramér’s decomposition theorem for a normal distribution is a result of probability theory. It is well known that, given independent normally distributed random variables ξ1, ξ2, their sum is normally distributed as well. It turns out that the converse is also true. The latter result, initially announced by Paul Lévy, was proved by Harald Cramér. This became a starting point for a new subfield in probability theory, decomposition theory for random variables as sums of independent variables (also known as arithmetic of probabilistic distributions). - -Let a random variable ξ be normally distributed and admit a decomposition as a sum ξ = ξ1 + ξ2 of two independent random variables. Then the summands ξ1 and ξ2 are normally distributed as well. - -A proof of Cramér's decomposition theorem uses the theory of entire functions.
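The binary search mentioned in the counting problem entry above is the standard reduction from $c_R$ to $\#R$; a minimal sketch (assuming Python, with a toy oracle standing in for the $\#R$ decision procedure):

```python
def count_via_decision(oracle, x, upper):
    """Recover c_R(x) from the decision oracle 'is y <= c_R(x)?' by binary search."""
    lo, hi = 0, upper                 # c_R(x) is assumed to lie in [0, upper]
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if oracle(x, mid):            # does mid <= c_R(x) hold?
            lo = mid
        else:
            hi = mid - 1
    return lo

# Toy instance: R(x, y) holds for exactly 42 witnesses y.
toy_oracle = lambda x, y: y <= 42
print(count_via_decision(toy_oracle, "x", 2**20))   # 42, using ~20 queries
```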
diff --git a/wiki/wikipedia/3424.txt b/wiki/wikipedia/3424.txt deleted file mode 100644 index 57b9210446dd5e9f3741d190fd5d53bcc96ded9a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3424.txt +++ /dev/null @@ -1,20 +0,0 @@ -In graph theory, Grinberg's theorem is a necessary condition for a planar graph to contain a Hamiltonian cycle, based on the lengths of its face cycles. The result has been widely used to construct non-Hamiltonian planar graphs with further properties, such as to give new counterexamples to Tait's conjecture (originally disproved by W.T. Tutte in 1946). This theorem was proved by the Latvian mathematician Emanuel Grinberg in 1968. - -Let G be a finite planar graph with a Hamiltonian cycle C, with a fixed planar embedding. - -Denote by fk and gk the number of k-gonal faces of the embedding that are inside and outside of C, respectively. Then -$$ -\sum_{k \geq 3} (k-2) (f_k-g_k) = 0. -$$ - -The proof is an easy consequence of Euler's formula. - -As a corollary of this theorem, if an embedded planar graph has only one face whose number of sides is not 2 mod 3, and the remaining faces all have numbers of sides that are 2 mod 3, then the graph is not Hamiltonian. In this case, only the first face contributes to the mod-3 value of the sum, and it causes the sum to be nonzero. The factor of k - 2 in the contributions for the other faces causes their contributions to be zero mod 3. For instance, for the graph in the figure, all the bounded faces have 5 or 8 sides, but the unbounded face has 9 sides, so it satisfies this condition and is not Hamiltonian (see the sketch below). - -Grinberg used his theorem to find non-Hamiltonian cubic polyhedral graphs with high cyclic edge connectivity. The cyclic edge connectivity of a graph is the smallest number of edges whose deletion leaves a subgraph with more than one cyclic component. The 46-vertex Tutte graph, and the smaller cubic non-Hamiltonian polyhedral graphs derived from it, have cyclic edge connectivity three. Grinberg used his theorem to find a non-Hamiltonian cubic polyhedral graph with 44 vertices, 24 faces, and cyclic edge connectivity four, and another example (shown in the figure) with 46 vertices, 25 faces, and cyclic edge connectivity five, the maximum possible cyclic edge connectivity for a cubic planar graph other than K4. In the example shown, all of the bounded faces have either five or eight edges, both of which are numbers that are 2 mod 3, but the unbounded face has nine edges, unequal to 2 mod 3. Therefore, by the corollary to Grinberg's theorem, the graph cannot be Hamiltonian. - -Grinberg's theorem has also been used to find planar hypohamiltonian graphs, graphs that are not Hamiltonian but that can be made Hamiltonian by removing any single vertex. The construction again makes all but one face have a number of edges congruent to 2 mod 3. Thomassen uses the theorem in a somewhat more complicated way to find a planar cubic hypohamiltonian graph: the graph he constructs includes a 4-edge face adjacent to four 7-edge faces, and all other faces have five or eight edges. In order to satisfy Grinberg's theorem, a Hamiltonian cycle would have to separate one of the 4- or 7-edge faces from the other four, which is not possible. - -There exist planar non-Hamiltonian graphs in which all faces have five or eight sides. For these graphs, Grinberg's formula taken modulo three is always satisfied by any partition of the faces into two subsets, preventing the application of his theorem to proving non-Hamiltonicity in this case.
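The mod-3 corollary used above lends itself to a direct check. The following is a minimal sketch (assuming Python); the face multiset is chosen to be consistent with a cubic planar graph on 46 vertices (21 pentagons, 3 octagons, one 9-sided outer face), though the exact counts in Grinberg's example may differ:

```python
def grinberg_mod3_obstruction(face_sizes):
    """True if Grinberg's sum can never vanish: exactly one face has (k-2) % 3 != 0."""
    # Faces with k % 3 == 2 contribute (k - 2) % 3 == 0 to Grinberg's sum,
    # regardless of whether they lie inside or outside the cycle.
    nonzero = [k for k in face_sizes if (k - 2) % 3 != 0]
    # With exactly one nonzero contribution, the sum cannot be 0 mod 3
    # for any inside/outside partition, so no Hamiltonian cycle exists.
    return len(nonzero) == 1

faces = [5] * 21 + [8] * 3 + [9]          # sizes sum to 2 * 69 edges
print(grinberg_mod3_obstruction(faces))   # True: the graph is not Hamiltonian
```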
- -It is not possible to use Grinberg's theorem to find counterexamples to Barnette's conjecture, that every cubic bipartite polyhedral graph is Hamiltonian. Every cubic bipartite polyhedral graph has a partition of the faces into two subsets satisfying Grinberg's theorem, regardless of whether it also has a Hamiltonian cycle. diff --git a/wiki/wikipedia/3425.txt b/wiki/wikipedia/3425.txt deleted file mode 100644 index 078467b32f386a3ed7e267561ecd4737aca0b74b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3425.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematical logic, weak interpretability is a notion of translation of logical theories, introduced together with interpretability by Alfred Tarski in 1953. - -Let T and S be formal theories. Slightly simplified, T is said to be weakly interpretable in S if, and only if, the language of T can be translated into the language of S in such a way that the translation of every theorem of T is consistent with S. Of course, there are some natural conditions on admissible translations here, such as the necessity for a translation to preserve the logical structure of formulas. - -A generalization of weak interpretability, tolerance, was introduced by Giorgi Japaridze in 1992. diff --git a/wiki/wikipedia/3426.txt b/wiki/wikipedia/3426.txt deleted file mode 100644 index 12232fb55796ad1c25544492e860c8f85e50b872..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3426.txt +++ /dev/null @@ -1,3 +0,0 @@ -Abelson's paradox is an applied statistics paradox identified by Robert P. Abelson. The paradox concerns the sometimes surprising relationship between the magnitude of the r2 (i.e., coefficient of determination) effect size and its practical meaning. - -Abelson's example was obtained from an analysis of the r2 between batting average in baseball and skill level. Although batting average is considered among the most significant characteristics necessary for success, the effect size was only a tiny 0.003. diff --git a/wiki/wikipedia/3427.txt b/wiki/wikipedia/3427.txt deleted file mode 100644 index 0a51a9b8bf3ed92cfd6c4702c64b03ef0391070f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3427.txt +++ /dev/null @@ -1,23 +0,0 @@ -In mathematics, the soul theorem is a theorem of Riemannian geometry that largely reduces the study of complete manifolds of non-negative sectional curvature to that of the compact case. Cheeger and Gromoll proved the theorem in 1972 by generalizing a 1969 result of Gromoll and Wolfgang Meyer. The related soul conjecture was formulated by Gromoll and Cheeger in 1972 and proved by Grigori Perelman in 1994 with an astonishingly concise proof. - -The soul theorem states: - -If (M, g) is a complete connected Riemannian manifold with sectional curvature K ≥ 0, then there exists a compact totally convex, totally geodesic submanifold S whose normal bundle is diffeomorphic to M. - -(Note that the sectional curvature must be non-negative everywhere, but it does not have to be constant.) Such a submanifold S is called a soul of (M, g). - -The soul is not uniquely determined by (M, g) in general, but any two souls of (M, g) are isometric. This was proven in 1978 by Sharafutdinov, using Sharafutdinov's retraction. - -Every compact manifold is its own soul. Indeed, the theorem is often stated only for non-compact manifolds. - -As a very simple example, take M to be Euclidean space Rn. The sectional curvature is 0 everywhere, and any point of M can serve as a soul of M.
- -Now take the paraboloid M = {(x, y, z) : z = x2 + y2}, with the metric g being the ordinary Euclidean distance coming from the embedding of the paraboloid in Euclidean space R3. Here the sectional curvature is positive everywhere, though not constant. The origin (0, 0, 0) is a soul of M. Not every point x of M is a soul of M, since there may be geodesic loops based at x, in which case $\{x\}$ would not be totally convex. - -One can also consider an infinite cylinder M = {(x, y, z) : x2 + y2 = 1}, again with the induced Euclidean metric. The sectional curvature is 0 everywhere. Any "horizontal" circle {(x, y, z) : x2 + y2 = 1} with fixed z is a soul of M. Non-horizontal cross sections of the cylinder are not souls since they are neither totally convex nor totally geodesic. - -Cheeger and Gromoll's soul conjecture states: - -Suppose (M, g) is complete, connected and non-compact with sectional curvature K ≥ 0, and there exists a point in M where the sectional curvature (in all sectional directions) is strictly positive. Then the soul of M is a point; equivalently M is diffeomorphic to Rn. - -Grigori Perelman proved this statement by establishing that in the general case K ≥ 0, Sharafutdinov's retraction P : M → S is a submersion. Cao and Shaw later provided a different proof that avoids Perelman's flat strip theorem. diff --git a/wiki/wikipedia/3428.txt b/wiki/wikipedia/3428.txt deleted file mode 100644 index f8e43f8995d8f84295f2bad1266d90616ad356db..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3428.txt +++ /dev/null @@ -1,64 +0,0 @@ -In mathematics, a double Mersenne number is a Mersenne number of the form -$$ -M_{M_p} = 2^{2^p-1}-1 -$$ - -where p is prime. - -The first four terms of the sequence of double Mersenne numbers are: -$$ -M_{M_2} = M_3 = 7 -$$ -$$ -M_{M_3} = M_7 = 127 -$$ -$$ -M_{M_5} = M_{31} = 2147483647 -$$ -$$ -M_{M_7} = M_{127} = 170141183460469231731687303715884105727 -$$ - -A double Mersenne number that is prime is called a double Mersenne prime. Since a Mersenne number Mp can be prime only if p is prime (see Mersenne prime for a proof), a double Mersenne number $M_{M_p}$ can be prime only if Mp is itself a Mersenne prime. For the first values of p for which Mp is prime, $M_{M_{p}}$ is known to be prime for p = 2, 3, 5, 7, while explicit factors of $M_{M_{p}}$ have been found for p = 13, 17, 19, and 31. - -Thus, the smallest candidate for the next double Mersenne prime is $M_{M_{61}}$, or $2^{2305843009213693951} - 1$. Being approximately $1.695\times10^{694127911065419641}$, this number is far too large for any currently known primality test. It has no prime factor below $4 \times 10^{33}$. There are probably no other double Mersenne primes than the four known. - -The smallest prime factors of $M_{M_{p}}$ (where p is the nth prime) are - -7, 127, 2147483647, 170141183460469231731687303715884105727, 47, 338193759479, 231733529, 62914441, 2351, 1399, 295257526626031, 18287, 106937, 863, 4703, 138863, 22590223644617, ... (next term is > $4 \times 10^{33}$) - -The recursively defined sequence -$$ -c_0 = 2 -$$ -$$ -c_{n+1} = 2^{c_n}-1 = M_{c_n} -$$ - -is called the sequence of Catalan–Mersenne numbers.
The first terms of the Catalan–Mersenne sequence are: -$$ -c_0 = 2 -$$ -$$ -c_1 = 2^2-1 = 3 -$$ -$$ -c_2 = 2^3-1 = 7 -$$ -$$ -c_3 = 2^7-1 = 127 -$$ -$$ -c_4 = 2^{127}-1 = 170141183460469231731687303715884105727 -$$ -$$ -c_5 = 2^{170141183460469231731687303715884105727}-1 \approx 5.454 \times 10^{51217599719369681875006054625051616349} \approx 10^{10^{37.7094}} -$$ - -Catalan came up with this sequence after the discovery of the primality of $M_{127}=c_4$ by Lucas in 1876. Catalan conjectured that they are prime "up to a certain limit". Although the first five terms are prime, no known methods can prove that any further terms are prime (in any reasonable time) simply because they are too huge. However, if $c_5$ is not prime, there is a chance to discover this by computing $c_5$ modulo some small prime $p$ (using recursive modular exponentiation). If the resulting residue is zero, $p$ is a factor of $c_5$ and thus would disprove its primality. Since $c_5$ is a Mersenne number, such a prime factor $p$ would have to be of the form $2kc_4 + 1$. Additionally, because $2^n-1$ is composite when $n$ is composite, the discovery of a composite term in the sequence would preclude the possibility of any further primes in the sequence. - -In the Futurama movie The Beast with a Billion Backs, the double Mersenne number $M_{M_7}$ is briefly seen in "an elementary proof of the Goldbach conjecture". In the movie, this number is known as a "martian prime". diff --git a/wiki/wikipedia/3429.txt b/wiki/wikipedia/3429.txt deleted file mode 100644 index 5c4ea10c919e60169ecfe53c7f5e5fbb4827eb39..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3429.txt +++ /dev/null @@ -1,55 +0,0 @@ -Transaction Processing Facility (TPF) is an IBM real-time operating system for mainframe computers descended from the IBM System/360 family, including zSeries and System z9. - -TPF delivers fast, high-volume, high-throughput transaction processing, handling large, continuous loads of essentially simple transactions across large, geographically dispersed networks. - -While there are other industrial-strength transaction processing systems, notably IBM's own CICS and IMS, TPF's specialty is extreme volume, large numbers of concurrent users, and very fast response times. For example, it handles VISA credit card transaction processing during the peak holiday shopping season. - -TPF is capable of running on a multiprocessor, that is, on systems in which there is more than one CPU. Within a logical partition (LPAR), the CPUs are referred to as instruction streams or simply I-streams. When running in an LPAR with more than one I-stream, TPF is said to be running tightly coupled. TPF adheres to SMP concepts; no concept of NUMA-based distinctions between memory addresses exists. - -The depth of the CPU ready list is measured as each incoming transaction is received, and the transaction is queued for the I-stream with the lowest demand, thus maintaining continuous load balancing among available processors. In cases where loosely coupled configurations are populated by multiprocessor CPCs (Central Processing Complex, i.e. the physical machine packaged in one system cabinet), SMP takes place within the CPC as described here, whereas sharing of inter-CPC resources takes place as described under Loosely coupled, below. - -In the TPF architecture, all memory (except for a 4KB-sized prefix area) is shared among all I-streams. 
In instances where memory-resident data must or should be kept separated by I-stream, the programmer typically divides a storage area into a number of subsections equal to the number of I-streams, then accesses the desired I-stream-associated area by taking the base address of the allocated area and adding to it the product of the I-stream relative number and the size of each subsection. - -TPF is capable of supporting multiple mainframes (of any size themselves — be it single I-stream to multiple I-stream) connecting to and operating on a common database. Currently, up to 32 IBM mainframes may share the TPF database; if such a system were in operation, it would be called 32-way loosely coupled. The simplest loosely coupled system would be two IBM mainframes sharing one DASD (Direct Access Storage Device). In this case, the same control program would be loaded into the memory of each mainframe, and each program or record on DASD could potentially be accessed by either mainframe. - -In order to serialize accesses between data records on a loosely coupled system, a practice known as record locking must be used. This means that when one mainframe processor obtains a hold on a record, the mechanism must prevent all other processors from obtaining the same hold and communicate to the requesting processors that they are waiting. Within any tightly coupled system, this is easy to manage between I-streams via the use of the Record Hold Table. However, when the lock is obtained offboard of the TPF processor in the DASD control unit, an external process must be used. Historically, the record locking was accomplished in the DASD control unit via an RPQ known as LLF (Limited Locking Facility) and later ELLF (extended). LLF and ELLF were both replaced by the Multipathing Lock Facility (MPLF). To run, clustered (loosely coupled) z/TPF requires either MPLF in all disk control units or an alternative locking device called a Coupling Facility. - -Records that absolutely must be managed by a record locking process are those which are processor shared. In TPF, most record accesses are done by using record type and ordinal. So if a record type 'FRED' were defined in the TPF system and given 100 records (ordinals), then in a processor-shared scheme, record type 'FRED' ordinal '5' would resolve to exactly the same file address on DASD — clearly necessitating the use of a record locking mechanism. - -All processor-shared records on a TPF system are accessed via exactly the same file address, which resolves to exactly the same location. - -A processor-unique record is one that is defined such that each processor expected to be in the loosely coupled complex has a record type of 'FRED' and perhaps 100 ordinals. However, if a user on any two or more processors examines the file address that record type 'FRED', ordinal '5' resolves to, they will note that a different physical address is used (see the sketch below). - -TPF is not a general-purpose operating system. TPF's specialized role is to process transaction input messages, then return output messages on a 1:1 basis at extremely high volume with short maximum elapsed time limits. - -TPF has no built-in graphical user interface functionality, and TPF has never offered direct graphical display facilities: to implement it on the host would be considered an unnecessary and potentially harmful diversion of real-time system resources. 
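Returning to the processor-shared versus processor-unique record addressing described above, here is a hypothetical sketch in Python — the names and layout are invented for illustration and are not actual TPF interfaces:

```python
# Processor-shared records resolve to one file address on every processor
# (hence the need for record locking); processor-unique records fold the
# processor number into the address, so each processor sees its own copy.
ORDINALS_PER_TYPE = 100   # e.g. record type 'FRED' with 100 ordinals (assumed)

def file_address(record_type: str, ordinal: int, processor: int,
                 shared: bool) -> tuple:
    if shared:
        return (record_type, ordinal)
    return (record_type, processor * ORDINALS_PER_TYPE + ordinal)

# 'FRED' ordinal 5 is the same address on processors 0 and 1 when shared...
assert file_address("FRED", 5, 0, True) == file_address("FRED", 5, 1, True)
# ...but resolves to a different physical address when processor-unique.
assert file_address("FRED", 5, 0, False) != file_address("FRED", 5, 1, False)
```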
TPF's user interface is command-line driven with simple text display terminals that scroll upward, and there are no mouse-driven cursors, windows, or icons on a TPF Prime CRAS (Computer room agent set — which is best thought of as the "operator's console"). Character messages are intended to be the mode of communication with human users; all work is accomplished via the use of the command line, similar to UNIX without X. There are several products available which connect to Prime CRAS and provide graphical interface functions to the TPF operator, such as TPF Operations Server. Graphical interfaces for end users, if desired, must be provided by external systems. Such systems perform analysis on character content (see Screen scrape) and convert the message to/from the desired graphical form, depending on its context. - -Being a special-purpose operating system, TPF does not host a compiler/assembler or text editor, nor does it implement the concept of a desktop as one might expect to find in a GPOS. TPF application source code is commonly stored in external systems, and likewise built "offline". Starting with z/TPF 1.1, Linux is the supported build platform; executable programs intended for z/TPF operation must observe the ELF format for s390x-ibm-linux. - -Using TPF requires a knowledge of its Command Guide, since there is no support for an online command "directory" or "man"/help facility to which users might be accustomed. Commands created and shipped by IBM for the system administration of TPF are called "functional messages"—commonly referred to as "Z-messages", as they are all prefixed with the letter "Z". Other letters are reserved so that customers may write their own commands. - -TPF implements debugging in a distributed client-server mode, which is necessary because of the system's headless, multi-processing nature: pausing the entire system in order to trap a single task would be highly counter-productive. Debugger packages have been developed by third-party vendors who took very different approaches to the "break/continue" operations required at the TPF host, implementing unique communications protocols used in traffic between the human developer running the debugger client and the server-side debug controller, as well as the form and function of debugger program operations at the client side. Two examples of third-party debugger packages are Step by Step Trace from Bedford Associates and CMSTPF, TPF/GI, and zTPFGI from TPF Software, Inc. Neither package is wholly compatible with the other, nor with IBM's own offering. IBM's debugging client offering is packaged in an IDE called IBM TPF Toolkit. - -TPF is highly optimized to permit messages from the supported network either to be switched out to another location, to be routed to an application (a specific set of programs), or to permit extremely efficient accesses to database records. - -Historically, all data on the TPF system had to fit in fixed record (and memory block) sizes of 381, 1055 and 4K bytes. This was due in part to the physical record sizes of blocks located on DASD. Much overhead was saved by freeing the operating system from breaking large data entities into smaller ones during file operations, and from reassembling them during read operations. Since IBM hardware does I/O via the use of channels and channel programs, TPF would generate very small and efficient channel programs to do its I/O — all in the name of speed. 
Because the early days also placed a premium on the size of storage media — be it memory or disk — TPF applications evolved into doing very powerful things while using very few resources. - -Today, many of these limitations have been removed. In fact, only because of legacy support are smaller-than-4K DASD records still used. With the advances made in DASD technology, a read/write of a 4K record is just as efficient as a 1055 byte record. The same advances have increased the capacity of each device, so that there is no longer a premium placed on the ability to pack data into as small a space as possible. - -TPF also had its program segments allocated as 381, 1055 and 4K byte-sized records at different points in its history. Each segment consisted of a single record, with a typically comprehensive application requiring perhaps tens or even hundreds of segments. For the first forty years of TPF's history, these segments were never link-edited. Instead, the relocatable object code (direct output from the assembler) was laid out in memory, had its internally (self-referential) relocatable symbols resolved, then the entire image was written to file for later loading into the system. This created a challenging programming environment in which segments related to one another could not directly address each other, with control transfer between them implemented as the ENTER/BACK system service. - -In ACP/TPF's earliest days (circa 1965), memory space was severely limited, which gave rise to a distinction between file-resident and core-resident programs—only the most frequently used application programs were written into memory and never removed (core-residency); the rest were stored on file and read in on demand, with their backing memory buffers released post-execution. - -C language support, introduced to TPF at version 3.0, was first implemented conformant to segment conventions, including the absence of linkage editing. This scheme quickly demonstrated itself to be impractical for anything other than the simplest of C programs. At TPF 4.1, truly and fully linked load modules were introduced to TPF. These were compiled with the z/OS C/C++ compiler using TPF-specific header files and linked with IEWL, resulting in a z/OS-conformant load module, which in no manner could be considered a traditional TPF segment. The TPF loader was extended to read the z/OS-unique load module file format, then lay out file-resident load modules' sections into memory; meanwhile, assembly language programs remained confined to TPF's segment model, creating an obvious disparity between applications written in assembler and those written in higher level languages (HLL). - -At z/TPF 1.1, all source language types were conceptually unified and fully link-edited to conform to the ELF specification. The segment concept became obsolete, meaning that any program written in any source language—including Assembler—may now be of any size. Furthermore, external references became possible, and separate source code programs that had once been segments could now be directly linked together into a shared object. A notable gain is that critical legacy applications can benefit from improved efficiency through simple repackaging—calls made between members of a single shared object module now have a much shorter pathlength at run time as compared to calling the system's ENTER/BACK service. 
Members of the same shared object may now share writeable data regions directly, thanks to copy-on-write functionality also introduced at z/TPF 1.1; this coincidentally reinforces TPF's reentrancy requirements. - -The concepts of file- and memory-residency were also made obsolete, due to a z/TPF design point which sought to have all programs resident in memory at all times. - -Since z/TPF had to maintain a call stack for high-level language programs, which gave HLL programs the ability to benefit from stack-based memory allocation, it was deemed beneficial to extend the call stack to assembly language programs on an optional basis, which can ease memory pressure and ease recursive programming. - -All z/TPF executable programs are now packaged as ELF shared objects. - -Historically, and in step with the above, core blocks—memory—were also 381, 1055 and 4K bytes in size. Since all memory blocks had to be of these sizes, most of the overhead for obtaining memory found in other systems was discarded. The programmer merely needed to decide what size block would fit the need and ask for it. TPF would maintain a list of blocks in use and simply hand out the first block on the available list. - -Physical memory was divided into sections reserved for each size, so a 1055 byte block always came from a section and was returned there; the only overhead needed was to add its address to the appropriate physical block table's list. No compaction or data collection was required. - -As applications became more advanced, demands for memory increased, and once C became available, memory chunks of indeterminate or large size were required. This gave rise to the use of heap storage and some memory management routines. To ease the overhead, TPF memory was broken into frames—4 KB in size (1 MB with z/TPF). If an application needs a certain number of bytes, the number of contiguous frames required to fill that need is granted. diff --git a/wiki/wikipedia/343.txt b/wiki/wikipedia/343.txt deleted file mode 100644 index e50f958b3a07ea66a5c86c6e8e57ff6ad6f1300d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/343.txt +++ /dev/null @@ -1,131 +0,0 @@ -In mathematics, particularly in functional analysis and convex analysis, a convex series is a series of the form $\sum_{i=1}^{\infty} r_i x_i$ where $x_1, x_2, \ldots$ are all elements of a topological vector space $X$, and all $r_1, r_2, \ldots$ are non-negative real numbers that sum to $1$ (that is, such that $\sum_{i=1}^{\infty} r_i = 1$). - -Suppose that $S$ is a subset of $X$ and $\sum_{i=1}^{\infty} r_i x_i$ is a convex series in $X.$ - -* If all $x_1, x_2, \ldots$ belong to $S$ then the convex series $\sum_{i=1}^{\infty} r_i x_i$ is called a convex series with elements of $S$. - -* If the set $\left\{ x_1, x_2, \ldots \right\}$ is a (von Neumann) bounded set then the series is called a b-convex series. - -* The convex series $\sum_{i=1}^{\infty} r_i x_i$ is said to be a convergent series if the sequence of partial sums $\left(\sum_{i=1}^n r_i x_i\right)_{n=1}^{\infty}$ converges in $X$ to some element of $X,$ which is called the sum of the convex series. - -* The convex series is called Cauchy if $\sum_{i=1}^{\infty} r_i x_i$ is a Cauchy series, which by definition means that the sequence of partial sums $\left(\sum_{i=1}^n r_i x_i\right)_{n=1}^{\infty}$ is a Cauchy sequence. - -Convex series allow for the definition of special types of subsets that are well-behaved and useful with very good stability properties. 
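For a concrete illustration (our example, not part of the original article), take $X = \mathbb{R}$, any sequence $x_1, x_2, \ldots$ in $S = [0, 1]$, and coefficients $r_i = 2^{-i}$:
$$
\sum_{i=1}^{\infty} 2^{-i} = 1, \qquad 0 \leq \sum_{i=1}^{\infty} 2^{-i} x_i \leq 1.
$$
Since $\{x_1, x_2, \ldots\}$ is bounded, this is a b-convex series with elements of $S$; its partial sums form a Cauchy sequence in $\mathbb{R}$, so it converges, and its sum again lies in $S$. In fact every convergent convex series with elements of $[0, 1]$ has its sum in $[0, 1]$, so the interval illustrates the cs-closed condition defined next.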
- -If $S$ is a subset of a topological vector space $X$ then $S$ is said to be a: - -* cs-closed set if any convergent convex series with elements of $S$ has its (each) sum in $S.$ - -** In this definition, $X$ is not required to be Hausdorff, in which case the sum may not be unique. In any such case we require that every sum belong to $S.$ - -* lower cs-closed set or an lcs-closed set if there exists a Fréchet space $Y$ such that $S$ is equal to the projection onto $X$ (via the canonical projection) of some cs-closed subset $B$ of $X \times Y.$ Every cs-closed set is lower cs-closed and every lower cs-closed set is lower ideally convex and convex (the converses are not true in general). - -* ideally convex set if any convergent b-convex series with elements of $S$ has its sum in $S.$ - -* lower ideally convex set or a li-convex set if there exists a Fréchet space $Y$ such that $S$ is equal to the projection onto $X$ (via the canonical projection) of some ideally convex subset $B$ of $X \times Y.$ Every ideally convex set is lower ideally convex. Every lower ideally convex set is convex but the converse is in general not true. - -* cs-complete set if any Cauchy convex series with elements of $S$ is convergent and its sum is in $S.$ - -* bcs-complete set if any Cauchy b-convex series with elements of $S$ is convergent and its sum is in $S.$ - -The empty set is convex, ideally convex, bcs-complete, cs-complete, and cs-closed. - -If $X$ and $Y$ are topological vector spaces, $A$ is a subset of $X \times Y,$ and $x \in X$ then $A$ is said to satisfy: - -* condition (Hx): Whenever $\sum_{i=1}^{\infty} r_i (x_i, y_i)$ is a convex series with elements of $A$ such that $\sum_{i=1}^{\infty} r_i y_i$ is convergent in $Y$ with sum $y$ and $\sum_{i=1}^{\infty} r_i x_i$ is Cauchy, then $\sum_{i=1}^{\infty} r_i x_i$ is convergent in $X$ and its sum $x$ is such that $(x, y) \in A.$ - -* condition (Hwx): Whenever $\sum_{i=1}^{\infty} r_i (x_i, y_i)$ is a b-convex series with elements of $A$ such that $\sum_{i=1}^{\infty} r_i y_i$ is convergent in $Y$ with sum $y$ and $\sum_{i=1}^{\infty} r_i x_i$ is Cauchy, then $\sum_{i=1}^{\infty} r_i x_i$ is convergent in $X$ and its sum $x$ is such that $(x, y) \in A.$ - -** If X is locally convex then the statement "and $\sum_{i=1}^{\infty} r_i x_i$ is Cauchy" may be removed from the definition of condition (Hwx). 
- -The following notation and notions are used, where $\mathcal{R} : X \rightrightarrows Y$ and $\mathcal{S} : Y \rightrightarrows Z$ are multifunctions and $S \subseteq X$ is a non-empty subset of a topological vector space $X:$ - -* The graph of $\mathcal{R}$ is the set $\operatorname{gr} \mathcal{R} := \{ (x, y) \in X \times Y : y \in \mathcal{R}(x) \}.$ - -* $\mathcal{R}$ is closed (respectively, cs-closed, lower cs-closed, convex, cs-complete, ideally convex, lower ideally convex, bcs-complete) if the same is true of the graph of $\mathcal{R}$ in $X \times Y.$ - -** The multifunction $\mathcal{R}$ is convex if and only if for all $x_0, x_1 \in X$ and all $r \in [0, 1],$ $r \mathcal{R}\left(x_0\right) + (1 - r) \mathcal{R}\left(x_1\right) \subseteq \mathcal{R} \left(r x_0 + (1 - r) x_1\right).$ - -* The inverse of $\mathcal{R}$ is the multifunction $\mathcal{R}^{-1} : Y \rightrightarrows X$ defined by $\mathcal{R}^{-1}(y) := \left\{ x \in X : y \in \mathcal{R}(x) \right\}.$ For any subset $B \subseteq Y,$ $\mathcal{R}^{-1}(B) := \cup_{y \in B} \mathcal{R}^{-1}(y).$ - -* The domain of $\mathcal{R}$ is $\operatorname{Dom} \mathcal{R} := \left\{ x \in X : \mathcal{R}(x) \neq \emptyset \right\}.$ - -* The image of $\mathcal{R}$ is $\operatorname{Im} \mathcal{R} := \cup_{x \in X} \mathcal{R}(x).$ For any subset $A \subseteq X,$ $\mathcal{R}(A) := \cup_{x \in A} \mathcal{R}(x).$ - -* The composition $\mathcal{S} \circ \mathcal{R} : X \rightrightarrows Z$ is defined by $\left(\mathcal{S} \circ \mathcal{R}\right)(x) := \cup_{y \in \mathcal{R}(x)} \mathcal{S}(y)$ for each $x \in X.$ - -Let $X, Y, \text{ and } Z$ be topological vector spaces, $S \subseteq X, T \subseteq Y,$ and $A \subseteq X \times Y.$ The following implications hold: - -complete $\implies$ cs-complete $\implies$ cs-closed $\implies$ lower cs-closed (lcs-closed) and ideally convex. - -lower cs-closed (lcs-closed) or ideally convex $\implies$ lower ideally convex (li-convex) $\implies$ convex. - -(Hx) $\implies$ (Hwx) $\implies$ convex. - -The converse implications do not hold in general. - -If $X$ is complete then, - -# $S$ is cs-complete (respectively, bcs-complete) if and only if $S$ is cs-closed (respectively, ideally convex). - -# $A$ satisfies (Hx) if and only if $A$ is cs-closed. - -# $A$ satisfies (Hwx) if and only if $A$ is ideally convex. - -If $Y$ is complete then, - -# $A$ satisfies (Hx) if and only if $A$ is cs-complete. - -# $A$ satisfies (Hwx) if and only if $A$ is bcs-complete. - -# If $B \subseteq X \times Y \times Z$ and $y \in Y$ then: - -## $B$ satisfies (H(x, y)) if and only if $B$ satisfies (Hx). - -## $B$ satisfies (Hw(x, y)) if and only if $B$ satisfies (Hwx). - -If $X$ is locally convex and $\operatorname{Pr}_X (A)$ is bounded then, - -# If $A$ satisfies (Hx) then $\operatorname{Pr}_X (A)$ is cs-closed. - -# If $A$ satisfies (Hwx) then $\operatorname{Pr}_X (A)$ is ideally convex. - -Let $X_0$ be a linear subspace of $X.$ Let $\mathcal{R} : X \rightrightarrows Y$ and $\mathcal{S} : Y \rightrightarrows Z$ be multifunctions. - -* If $S$ is a cs-closed (resp. ideally convex) subset of $X$ then $X_0 \cap S$ is also a cs-closed (resp. ideally convex) subset of $X_0.$ - -* If $X$ is first countable then $X_0$ is cs-closed (resp. cs-complete) if and only if $X_0$ is closed (resp. complete); moreover, if $X$ is locally convex then $X_0$ is closed if and only if $X_0$ is ideally convex. - -* $S \times T$ is cs-closed (resp. 
cs-complete, ideally convex, bcs-complete) in $X \times Y$ if and only if the same is true of both $S$ in $X$ and of $T$ in $Y.$ - -* The properties of being cs-closed, lower cs-closed, ideally convex, lower ideally convex, cs-complete, and bcs-complete are all preserved under isomorphisms of topological vector spaces. - -* The intersection of arbitrarily many cs-closed (resp. ideally convex) subsets of $X$ has the same property. - -* The Cartesian product of cs-closed (resp. ideally convex) subsets of arbitrarily many topological vector spaces has that same property (in the product space endowed with the product topology). - -* The intersection of countably many lower ideally convex (resp. lower cs-closed) subsets of $X$ has the same property. - -* The Cartesian product of lower ideally convex (resp. lower cs-closed) subsets of countably many topological vector spaces has that same property (in the product space endowed with the product topology). - -* Suppose $X$ is a Fréchet space and $A$ and $B$ are subsets. If $A$ and $B$ are lower ideally convex (resp. lower cs-closed) then so is $A + B.$ - -* Suppose $X$ is a Fréchet space and $A$ is a subset of $X.$ If $A$ and $\mathcal{R} : X \rightrightarrows Y$ are lower ideally convex (resp. lower cs-closed) then so is $\mathcal{R}(A).$ - -* Suppose $Y$ is a Fréchet space and $\mathcal{R}_2 : X \rightrightarrows Y$ is a multifunction. If $\mathcal{R}, \mathcal{R}_2, \mathcal{S}$ are all lower ideally convex (resp. lower cs-closed) then so are $\mathcal{R} + \mathcal{R}_2 : X \rightrightarrows Y$ and $\mathcal{S} \circ \mathcal{R} : X \rightrightarrows Z.$ - -If $S$ is a non-empty convex subset of a topological vector space $X$ then, - -# If $S$ is closed or open then $S$ is cs-closed. - -# If $X$ is Hausdorff and finite dimensional then $S$ is cs-closed. - -# If $X$ is first countable and $S$ is ideally convex then $\operatorname{int} S = \operatorname{int} \left(\operatorname{cl} S\right).$ - -Let $X$ be a Fréchet space, $Y$ be a topological vector space, $A \subseteq X \times Y,$ and $\operatorname{Pr}_Y : X \times Y \to Y$ be the canonical projection. If $A$ is lower ideally convex (resp. lower cs-closed) then the same is true of $\operatorname{Pr}_Y (A).$ - -If $X$ is a barreled first countable space and if $C \subseteq X$ then: - -# If $C$ is lower ideally convex then $C^i = \operatorname{int} C,$ where $C^i := \operatorname{aint}_X C$ denotes the algebraic interior of $C$ in $X.$ - -# If $C$ is ideally convex then $C^i = \operatorname{int} C = \operatorname{int} \left(\operatorname{cl} C\right) = \left(\operatorname{cl} C\right)^i.$ diff --git a/wiki/wikipedia/3430.txt b/wiki/wikipedia/3430.txt deleted file mode 100644 index 8ee7f50f92dd49e507bb50fa213d60ca16230c81..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3430.txt +++ /dev/null @@ -1,35 +0,0 @@ -In number theory, Glaisher's theorem is an identity useful to the study of integer partitions. It is named for James Whitbread Lee Glaisher. - -It states that the number of partitions of an integer $N$ into parts not divisible by $d$ is equal to the number of partitions of the form -$$ -N=N_1+\cdots+N_k -$$ - -where -$$ -N_i\geq N_{i+1} -$$ - -and -$$ - N_i\geq N_{i+d-1}+1, -$$ - -that is, partitions in which no part is repeated d or more times. - -When $d=2$, this becomes the special case, known as Euler's theorem, that the number of partitions of $N$ into distinct parts is the same as the number of partitions of $N$ into odd parts. 
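Both Glaisher's identity and Euler's special case are easy to check by brute force for small $N$; the following sketch (ours, using only the Python standard library) does exactly that:

```python
# Count partitions of N two ways: parts not divisible by d, versus no part
# repeated d or more times; Glaisher's theorem says the counts agree.
def partitions(n, max_part=None):
    """Yield all partitions of n as non-increasing tuples of parts."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def glaisher_counts(N, d):
    no_multiples = sum(1 for p in partitions(N)
                       if all(part % d != 0 for part in p))
    low_multiplicity = sum(1 for p in partitions(N)
                           if all(p.count(v) < d for v in set(p)))
    return no_multiples, low_multiplicity

for N in range(1, 13):
    for d in (2, 3, 4):
        a, b = glaisher_counts(N, d)
        assert a == b, (N, d)
print("Glaisher's identity verified for all N < 13 and d in {2, 3, 4}")
```

With d = 2 this checks Euler's theorem: for example, 5 has three partitions into distinct parts (5, 4+1, 3+2) and three into odd parts (5, 3+1+1, 1+1+1+1+1).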
-The identity can be proved with a generating function computation: the generating function for partitions in which no part is repeated $d$ or more times is -$$ -(1+x+x^2+\dots+x^{d-1})(1+x^2+x^4+\dots+x^{2d-2})\dots(1+x^k+x^{2k}+\dots+x^{kd-k})\dots -$$ -$$ -=\frac{1-x^d}{1-x}\frac{1-x^{2d}}{1-x^2}\dots\frac{1-x^{kd}}{1-x^k}\dots -$$ - -Every term $1-x^{kd}$ in the numerator cancels with the corresponding multiple of $d$ in the denominator, leaving exactly the factors $\frac{1}{1-x^k}$ with $k$ not divisible by $d$ — the generating function for partitions into parts not divisible by $d$ — which gives the result. - -If instead of counting the number of partitions with distinct parts we count the number of partitions with parts differing by at least 2, a theorem similar to Euler's theorem known as Rogers' theorem (after Leonard James Rogers) is obtained: - -The number of partitions whose parts differ by at least 2 is equal to the number of partitions involving only numbers congruent to 1 or 4 (mod 5). - -For example, there are 6 partitions of 10 into parts differing by at least 2, namely 10, 9+1, 8+2, 7+3, 6+4, 6+3+1; and 6 partitions of 10 involving only 1, 4, 6, 9 ..., namely 9+1, 6+4, 6+1+1+1+1, 4+4+1+1, 4+1+1+1+1+1+1, 1+1+1+1+1+1+1+1+1+1. The theorem was discovered independently by Schur and Ramanujan. diff --git a/wiki/wikipedia/3431.txt b/wiki/wikipedia/3431.txt deleted file mode 100644 index cf8ea5a35545b8982a262c342cd379268245150a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3431.txt +++ /dev/null @@ -1,17 +0,0 @@ -In theoretical computer science, the separating words problem is the problem of finding the smallest deterministic finite automaton that behaves differently on two given strings, meaning that it accepts one of the two strings and rejects the other string. It is an open problem how large such an automaton must be, in the worst case, as a function of the length of the input strings. - -The two strings 0010 and 1000 may be distinguished from each other by a three-state automaton in which the transitions from the start state go to two different states, both of which are terminal in the sense that subsequent transitions from these two states always return to the same state. The state of this automaton records the first symbol of the input string. If one of the two terminal states is accepting and the other is rejecting, then the automaton will accept only one of the strings 0010 and 1000. However, these two strings cannot be distinguished by any automaton with fewer than three states. - -It may also be assumed that the two strings have equal length. For strings of unequal length, there always exists a prime number p whose value is logarithmic in the smaller of the two input lengths, such that the two lengths are different modulo p. An automaton that counts the length of its input modulo p can be used to distinguish the two strings from each other in this case. Therefore, strings of unequal lengths can always be distinguished from each other by automata with few states. - -The problem of bounding the size of an automaton that distinguishes two given strings was first formulated by Goralčík and Koubek, who showed that the automaton size is always sublinear. Later, Robson proved an upper bound of $O(n^{2/5}(\log n)^{3/5})$ on the automaton size that may be required. This was improved by Chase to $O(n^{1/3}(\log n)^{7})$. - -There exist pairs of inputs that are both binary strings of length n for which any automaton that distinguishes the inputs must have size Ω(log n). Closing the gap between this lower bound and Robson's upper bound remains an open problem. Jeffrey Shallit has offered a prize of 100 British pounds for any improvement to Robson's upper bound. 
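The three-state automaton described earlier for 0010 and 1000 is small enough to write out directly; here is a minimal sketch of it (ours, not from the original article) in Python:

```python
# A three-state DFA that records the first input symbol; "saw0" accepts.
# Both non-start states are absorbing, so only the first symbol matters.
def accepts(word: str) -> bool:
    state = "start"
    for symbol in word:
        if state == "start":
            state = "saw0" if symbol == "0" else "saw1"
        # "saw0" and "saw1" transition back to themselves on every symbol.
    return state == "saw0"

assert accepts("0010") and not accepts("1000")  # the automaton separates them
```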
- -Several special cases of the separating words problem are known to be solvable using few states: - -*If two binary words have differing numbers of zeros or ones, then they can be distinguished from each other by counting their Hamming weights modulo a prime of logarithmic size, using a logarithmic number of states. More generally, if a pattern of length k appears a different number of times in the two words, they can be distinguished from each other using O(k log n) states. - -*If two binary words differ from each other within their first or last k positions, they can be distinguished from each other using k + O(1) states. This implies that almost all pairs of binary words can be distinguished from each other with a logarithmic number of states, because only a polynomially small fraction of pairs have no difference in their initial O(log n) positions. - -*If two binary words have Hamming distance d, then there exists a prime p with p = O(d log n) and a position i at which the two strings differ, such that i is not equal modulo p to the position of any other difference. By computing the parity of the input symbols at positions congruent to i modulo p, it is possible to distinguish the words using an automaton with O(d log n) states. diff --git a/wiki/wikipedia/3432.txt b/wiki/wikipedia/3432.txt deleted file mode 100644 index 00e3cbf840341dc2f40b7086584cd61a98a624a9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3432.txt +++ /dev/null @@ -1,15 +0,0 @@ -In mathematical logic, focused proofs are a family of analytic proofs that arise through goal-directed proof-search, and are a topic of study in structural proof theory and reductive logic. They form the most general definition of goal-directed proof-search—in which someone chooses a formula and performs hereditary reductions until the result meets some condition. The extremal case where reduction only terminates when axioms are reached forms the sub-family of uniform proofs. - -A sequent calculus is said to have the focusing property when focused proofs are complete for some terminating condition. For System LK, System LJ, and System LL, uniform proofs are focused proofs where all the atoms are assigned negative polarity. Many other sequent calculi have been shown to have the focusing property, notably the nested sequent calculi of both the classical and intuitionistic variants of the modal logics in the S5 cube. - -In the sequent calculus for an intuitionistic logic, the uniform proofs can be characterised as those in which the upward reading performs all right rules before the left rules. Typically, uniform proofs are not complete for the logic, i.e., not all sequents or formulas admit a uniform proof, so one considers fragments where they are complete, e.g., the hereditary Harrop fragment of Intuitionistic logic. Due to the deterministic behaviour, uniform proof-search has been used as the control mechanism defining the programming language paradigm of logic programming. Occasionally, uniform proof-search is implemented in a variant of the sequent calculus for the given logic where context management is automatic, thereby increasing the fragment for which one can define a logic programming language. - -The focusing principle was originally classified through the disambiguation between synchronous and asynchronous connectives in Linear Logic, i.e., connectives that interact with the context and those that do not, as a consequence of research on logic programming. 
They are now an increasingly important example of control in reductive logic, and can drastically improve proof-search procedures in industry. The essential idea of focusing is to identify and coalesce the non-deterministic choices in a proof, so that a proof can be seen as an alternation of negative phases (where invertible rules are applied eagerly) and positive phases (where applications of the other rules are confined and controlled). - -According to the rules in the sequent calculus, formulas are canonically put into one of two classes called positive and negative, e.g., in LK and LJ the formula $\phi \lor \psi$ is positive. The only freedom is over atoms, which may be assigned a polarity freely. For negative formulas provability is invariant under the application of a right rule; and, dually, for positive formulas provability is invariant under the application of a left rule. In either case one can safely apply rules in any order to hereditary sub-formulas of the same polarity. - -In the case of a right rule applied to a positive formula, or a left rule applied to a negative formula, the result may be an invalid sequent, e.g., in LK and LJ there is no proof of the sequent $B \lor A \implies A \lor B$ beginning with a right rule. A calculus admits the focusing principle if, whenever an original reduct is provable, the hereditary reducts of the same polarity are also provable. That is, one can commit to focusing on decomposing a formula and its sub-formulas of the same polarity without loss of completeness. - -A sequent calculus is often shown to have the focusing property by working in a related calculus where polarity explicitly controls which rules apply. Proofs in such systems are in focused, unfocused, or neutral phases, where the first two are characterised by hereditary decomposition, and the last by forcing a choice of focus. One of the most important operational behaviours a procedure can undergo is backtracking, i.e., returning to an earlier stage in the computation where a choice was made. In focused systems for classical and Intuitionistic logic, the use of backtracking can be simulated by pseudo-contraction. - -Let $\uparrow$ and $\downarrow$ denote change of polarity, the former making a formula negative, and the latter positive; and call a formula with an arrow neutral. Recall that $ \lor $ is positive, and consider the neutral polarized sequent $\downarrow \uparrow \phi \lor \psi \implies \uparrow \phi \lor \psi$, which is interpreted as the actual sequent $\phi \lor \psi \implies \phi \lor \psi$. For neutral sequents such as this, the focused system forces one to make an explicit choice of which formula to focus on, denoted by $ \langle \rangle $. To perform a proof-search the best thing is to choose the left formula, since $ \lor $ is positive; indeed (as discussed above), in some cases there are no proofs where the focus is on the right formula. To overcome this, some focused calculi create a backtracking point such that focusing on the right yields $\downarrow \uparrow \phi \lor \psi \implies \langle \phi \lor \psi \rangle, \uparrow \phi \lor \psi$, which is still interpreted as $\phi \lor \psi \implies \phi \lor \psi$. 
The second formula on the right can be removed only when the focused phase has finished, but if proof-search gets stuck before this happens the sequent may remove the focused component, thereby returning to the choice, e.g., $\downarrow \uparrow B \lor A \implies \langle A \rangle, \uparrow A \lor B$ must be taken to $\downarrow \uparrow B \lor A \implies \uparrow A \lor B$ as no other reductive inference can be made. This is a pseudo-contraction since it has the syntactic form of a contraction on the right, but the actual formula does not exist, i.e., in the interpretation of the proof in the focused system the sequent has only one formula on the right. diff --git a/wiki/wikipedia/3433.txt b/wiki/wikipedia/3433.txt deleted file mode 100644 index e02a0a760c5216bf8978b3bfe32209a50a45fccd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3433.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, the Karoubi conjecture is a conjecture by Max Karoubi that the algebraic and topological K-theories coincide on C*-algebras spatially tensored with the algebra of compact operators. It was proved by Andrei Suslin and Mariusz Wodzicki. diff --git a/wiki/wikipedia/3434.txt b/wiki/wikipedia/3434.txt deleted file mode 100644 index 6fdcc59223e22ec47f5b47da0b854fac30e03422..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3434.txt +++ /dev/null @@ -1,112 +0,0 @@ -In the theory of probability, the Glivenko–Cantelli theorem (sometimes referred to as the Fundamental Theorem of Statistics), named after Valery Ivanovich Glivenko and Francesco Paolo Cantelli, determines the asymptotic behaviour of the empirical distribution function as the number of independent and identically distributed observations grows. - -The uniform convergence of more general empirical measures becomes an important property of the Glivenko–Cantelli classes of functions or sets. The Glivenko–Cantelli classes arise in Vapnik–Chervonenkis theory, with applications to machine learning. Applications can be found in econometrics making use of M-estimators. - -Assume that $X_1,X_2,\dots$ are independent and identically-distributed random variables in $\mathbb{R}$ with common cumulative distribution function $F(x)$. The empirical distribution function for $X_1,\dots,X_n$ is defined by -$$ -F_n(x)=\frac{1}{n}\sum_{i=1}^n I_{[X_i, \infty)}(x) = \frac{1}{n}\left|\left\{1\leq i\leq n| X_i \leq x\right\}\right| -$$ - -where $I_C$ is the indicator function of the set $C$. For every (fixed) $x$, $F_n(x)$ is a sequence of random variables which converges to $F(x)$ almost surely by the strong law of large numbers; that is, $F_n$ converges to $F$ pointwise. Glivenko and Cantelli strengthened this result by proving uniform convergence of $F_n$ to $F$. - -Theorem -$$ -\|F_n - F\|_\infty = \sup_{x\in \mathbb{R}} |F_n(x) - F(x)| \longrightarrow 0 -$$ almost surely. - -This theorem originates with Valery Glivenko and Francesco Cantelli, in 1933. - -Remarks - -*If $X_n$ is a stationary ergodic process, then $F_n(x)$ converges almost surely to $F(x)=E(1_{X_1\le x})$. The Glivenko–Cantelli theorem gives a stronger mode of convergence than this in the iid case. - -*An even stronger uniform convergence result for the empirical distribution function is available in the form of an extended type of law of the iterated logarithm. See asymptotic properties of the empirical distribution function for this and related results. - -For simplicity, consider the case of a continuous random variable $X$. 
Fix $-\infty = x_0 < x_1 < \cdots < x_{m-1} < x_m = \infty$ such that $F(x_j)-F(x_{j-1}) = \frac{1}{m}$ for $j = 1, \dots, m$ (such points exist because $F$ is continuous). Now for all $x \in \mathbb{R}$ there exists $j \in \{1,\dots,m\}$ such that $x \in (x_{j-1}, x_j]$, so that -\begin{align} - -F_n(x)-F(x) &\leq F_n(x_j)-F(x_{j-1}) = F_n(x_j)-F(x_j)+1/m,\\ - -F_n(x)-F(x) &\geq F_n(x_{j-1})-F(x_{j}) = F_n(x_{j-1})-F(x_{j-1})-1/m. - -\end{align} - - - -Therefore, -$$ -||F_n-F||_{\infty} = \sup_{x\in \mathbb{R}}|F_n(x)-F(x)| \leq \max_{j\in\{1,\dots,m\}} |F_n(x_j)-F(x_j)| + 1/m. -$$ - -Since $\max_{j\in\{1,\dots,m\}} |F_n(x_j)-F(x_j)| \to 0 \text{ a.s.}$ by the strong law of large numbers, we can guarantee that for any positive $\varepsilon$ and any integer $m$ such that $1/m<\varepsilon$, we can find $N$ such that for all $n \geq N$, we have $\max_{j\in\{1,\dots,m\}} |F_n(x_j)-F(x_j)|\leq \varepsilon-1/m \text{ a.s.}$. Combined with the above result, this further implies that $||F_n-F||_\infty \leq \varepsilon \text{ a.s.}$, which establishes the almost sure convergence. - -One can generalize the empirical distribution function by replacing the set $(-\infty,x]$ by an arbitrary set C from a class of sets $\mathcal{C}$ to obtain an empirical measure indexed by sets $C \in \mathcal{C}.$ -$$ -P_n(C)=\frac{1}{n}\sum_{i=1}^n I_C(X_i), C\in\mathcal{C} -$$ - -where $I_C(x)$ is the indicator function of the set $C$. - -A further generalization is the map induced by $P_n$ on measurable real-valued functions f, which is given by -$$ -f\mapsto P_nf=\int_Sf dP_n = \frac 1 n \sum_{i=1}^n f(X_i), f\in\mathcal{F}. -$$ - -Then it becomes an important property of these classes that the strong law of large numbers holds uniformly on $\mathcal{F}$ or $\mathcal{C}$. - -Consider a set $\mathcal{S}$ with a sigma algebra of Borel subsets A and a probability measure P. For a class of subsets, -$$ -{\mathcal C}\subset\{C: C \mbox{ is measurable subset of }\mathcal{S}\} -$$ - -and a class of functions -$$ -\mathcal{F}\subset\{f:\mathcal{S}\to \mathbb{R}, f \mbox{ is measurable}\} -$$ - -define random variables -$$ -\|P_n-P\|_{\mathcal C}=\sup_{C\in {\mathcal C}} |P_n(C)-P(C)| -$$ -$$ -\|P_n-P\|_{\mathcal F}=\sup_{f\in {\mathcal F}} |P_nf- Pf| -$$ - -where $P_n(C)$ is the empirical measure, $P_n f$ is the corresponding map, and -$$ -\mathbb{E}f=\int_\mathcal{S} f dP = P f -$$, assuming that it exists. - -Definitions - -* A class $\mathcal C$ is called a Glivenko–Cantelli class (or GC class) with respect to a probability measure P if any of the following equivalent statements is true. - -1. $\|P_n-P\|_\mathcal{C}\to 0$ almost surely as $n\to\infty$. - -2. $\|P_n-P\|_\mathcal{C}\to 0$ in probability as $n\to\infty$. - -3. $\mathbb{E}\|P_n-P\|_\mathcal{C}\to 0$, as $n\to\infty$ (convergence in mean). - -The Glivenko–Cantelli classes of functions are defined similarly. - -*A class is called a universal Glivenko–Cantelli class if it is a GC class with respect to any probability measure P on (S,A). - -*A class is called uniformly Glivenko–Cantelli if the convergence occurs uniformly over all probability measures P on (S,A): -$$ -\sup_{P\in \mathcal{P}(S,A)} \mathbb E \|P_n-P\|_\mathcal{C}\to 0; -$$ -$$ -\sup_{P\in \mathcal{P}(S,A)} \mathbb E \|P_n-P\|_\mathcal{F}\to 0. -$$ - -Theorem (Vapnik and Chervonenkis, 1968) - -A class of sets $\mathcal{C}$ is uniformly GC if and only if it is a Vapnik–Chervonenkis class. - -* Let $S=\mathbb R$ and ${\mathcal C}=\{(-\infty,t]:t\in {\mathbb R}\}$. The classical Glivenko–Cantelli theorem implies that this class is a universal GC class. Furthermore, by Kolmogorov's theorem, -$$ -\sup_{P\in \mathcal{P}(S,A)}\|P_n-P\|_{\mathcal C} \sim n^{-1/2} -$$, that is, $\mathcal{C}$ is a uniformly Glivenko–Cantelli class. 
- -* Let P be a nonatomic probability measure on S and $\mathcal{C}$ be the class of all finite subsets of S. Because $A_n=\{X_1,\ldots,X_n\}\in \mathcal{C}$, $P(A_n)=0$, $P_n(A_n)=1$, we have that $\|P_n-P\|_{\mathcal C}=1$ and so $\mathcal{C}$ is not a GC class with respect to P. diff --git a/wiki/wikipedia/3435.txt b/wiki/wikipedia/3435.txt deleted file mode 100644 index 2f91852eb2abf4b15aff4cfffce5c6a743780f6d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3435.txt +++ /dev/null @@ -1,17 +0,0 @@ -In logic, Nicod's axiom (named after the French logician and philosopher Jean Nicod) is a formula that can be used as the sole axiom of a semantically complete system of propositional calculus. The only connective used in the formulation of Nicod's axiom is the Sheffer stroke. - -The axiom has the following form: - -((φ | (χ | ψ)) | ((τ | (τ | τ)) | ((θ | χ) | ((φ | θ) | (φ | θ))))) - -Nicod showed that the whole propositional logic of Principia Mathematica could be derived from this axiom alone by using one inference rule, called "Nicod's modus ponens": - -1. φ - -2. (φ | (χ | ψ)) - -∴ ψ - -In 1931, the Polish logician Mordechaj Wajsberg discovered an equally powerful and easier-to-work-with alternative: - -((φ | (ψ | χ)) | (((τ | χ) | ((φ | τ) | (φ | τ))) | (φ | (φ | ψ)))) diff --git a/wiki/wikipedia/3436.txt b/wiki/wikipedia/3436.txt deleted file mode 100644 index f2b1e5d0a50fbe2e87600b12e9f17279cf497881..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3436.txt +++ /dev/null @@ -1,29 +0,0 @@ -A quorum is the minimum number of votes that a distributed transaction has to obtain in order to be allowed to perform an operation in a distributed system. A quorum-based technique is implemented to enforce consistent operation in a distributed system. - -Quorum-based voting can be used as a replica control method, - -as well as a commit method to ensure transaction atomicity in the presence of network partitioning. - -In a distributed database system, a transaction could execute its operations at multiple sites. Since atomicity requires every distributed transaction to be atomic, the transaction must have the same fate (commit or abort) at every site. In case of network partitioning, sites are partitioned and the partitions may not be able to communicate with each other. This is where a quorum-based technique comes in. The fundamental idea is that a transaction is executed if the majority of sites vote to execute it. - -Every site in the system is assigned a vote Vi. Let us assume that the total number of votes in the system is V and the abort and commit quorums are Va and Vc, respectively. Then the following rules must be obeyed in the implementation of the commit protocol: - -# Va + Vc > V, where 0 < Vc, Va $\le$ V. - -# Before a transaction commits, it must obtain a commit quorum Vc.
    The total of the votes of at least one site that is prepared to commit, plus zero or more sites that are waiting, must be $\ge$ Vc. - -# Before a transaction aborts, it must obtain an abort quorum Va.
    The total of the votes of the sites that are prepared to abort, plus any sites that are waiting, must be $\ge$ Va. - -The first rule ensures that a transaction cannot be committed and aborted at the same time. The next two rules indicate the votes that a transaction has to obtain before it can terminate one way or the other. - -In replicated databases, a data object has copies present at several sites. To ensure serializability, no two transactions should be allowed to read or write a data item concurrently. In case of replicated databases, a quorum-based replica control protocol can be used to ensure that no two copies of a data item are read or written by two transactions concurrently. - -The quorum-based voting for replica control is due to [Gifford, 1979]. - -Each copy of a replicated data item is assigned a vote. Each operation then has to obtain a read quorum (Vr) or a write quorum (Vw) to read or write a data item, respectively. If a given data item has a total of V votes, the quorums have to obey the following rules: - -# Vr + Vw > V - -# Vw > V/2 - -The first rule ensures that a data item is not read and written by two transactions concurrently. Additionally, it ensures that a read quorum contains at least one site with the newest version of the data item. The second rule ensures that two write operations from two transactions cannot occur concurrently on the same data item. The two rules ensure that one-copy serializability is maintained. diff --git a/wiki/wikipedia/3437.txt b/wiki/wikipedia/3437.txt deleted file mode 100644 index 8cfc43746c97519a44742e413771fd8b90571fbc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3437.txt +++ /dev/null @@ -1,24 +0,0 @@ -In mathematics, the categorical trace is a generalization of the trace of a matrix. - -The trace is defined in the context of a symmetric monoidal category C, i.e., a category equipped with a suitable notion of a product $\otimes$. (The notation reflects that the product is, in many cases, a kind of a tensor product.) An object X in such a category C is called dualizable if there is another object $X^\vee$ playing the rôle of a dual object of X. In this situation, the trace of a morphism $f: X \to X$ is defined as the composition of the following morphisms: -$$ -\mathrm{tr}(f) : 1 \stackrel{coev} \to X \otimes X^\vee \stackrel{f \otimes id} \to X \otimes X^\vee \stackrel{twist} \to X^\vee \otimes X \stackrel{eval} \to 1 -$$ - -where 1 is the monoidal unit and the extremal morphisms are the coevaluation and evaluation, which are part of the definition of dualizable objects. - -The same definition applies, to great effect, also when C is a symmetric monoidal ∞-category. - -If C is the category of vector spaces over a fixed field k, the dualizable objects are precisely the finite-dimensional vector spaces, and the trace in the sense above is the morphism -$$ -k \to k -$$ - -which is multiplication by the trace of the endomorphism f in the usual sense of linear algebra. In this sense, the categorical trace generalizes the linear-algebraic trace. - -If C is the ∞-category of chain complexes of modules (over a fixed commutative ring R), dualizable objects V in C are precisely the perfect complexes. The trace in this setting captures, for example, the Euler characteristic, which is the alternating sum of the ranks of its terms: -$$ -\mathrm{tr}(id_V) = \sum_i (-1)^i \operatorname {rank} V_i. 
-$$ - -Kondyrev and Prikhodko have used categorical trace methods to prove an algebro-geometric version of the Atiyah–Bott fixed point formula, an extension of the Lefschetz fixed point formula. diff --git a/wiki/wikipedia/3438.txt b/wiki/wikipedia/3438.txt deleted file mode 100644 index 9fb89bec2fa2f034236f3c921dea267882ca3ea0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3438.txt +++ /dev/null @@ -1,50 +0,0 @@ -The no-three-in-line problem in discrete geometry asks how many points can be placed in the $n\times n$ grid so that no three points lie on the same line. This number is at most $2n$, because $2n+1$ points in a grid would include a row of three or more points, by the pigeonhole principle. The problem was introduced by Henry Dudeney in 1900. Brass, Moser, and Pach call it "one of the oldest and most extensively studied geometric questions concerning lattice points". - -Although the problem can be solved with $2n$ points for every $n$ up to $46$, it is conjectured that fewer than $2n$ points can be placed in grids of large size. Known methods can place linearly many points in grids of arbitrary size, but the best of these methods place slightly fewer than $3n/2$ points, not $2n$. Several related problems of finding points with no three in line, among other sets of points than grids, have also been studied. - -Although originating in recreational mathematics, the problem has applications in graph drawing and to the Heilbronn triangle problem. - -The problem was first posed by Henry Dudeney in 1900, as a puzzle in recreational mathematics, phrased in terms of placing the 16 pawns of a chessboard onto the board so that no three are in a line. This is exactly the no-three-in-line problem, for the case $n=8$. In a later version of the puzzle, Dudeney modified the problem, making its solution unique, by asking for a solution in which two of the pawns are on squares d4 and e5, attacking each other in the center of the board. - -Many authors have published solutions to this problem for small values of $n$. - -The exact number of points that can be placed, as a function of $n$, is not known. However, both proven and conjectured bounds limit this number to within a range proportional to $n$. - -A solution of Paul Erdős, published by Roth, is based on the observation that when $n$ is a prime number, the set of $n$ grid points $(i,i^2$ mod $n)$, for $0\le i < n$, contains no three collinear points. When $n$ is not prime, one can perform this construction for a $p\times p$ grid contained in the $n\times n$ grid, where $p$ is the largest prime that is at most $n$. Because the gap between consecutive primes is much smaller than the primes themselves, $p$ will always be close to $n$, so this method can be used to place $n-o(n)$ points in the $n\times n$ grid with no three points collinear. - -Erdős' bound has been improved subsequently: Hall et al. show that, when $n/2$ is prime, one can obtain a solution with $3(n-2)/2$ points, by placing points in multiple copies of the hyperbola $xy=k$ (mod $n/2$), where $k$ may be chosen arbitrarily as long as it is nonzero mod $n/2$. Again, for arbitrary $n$ one can perform this construction for a prime near $n/2$ to obtain a solution with -$$ -\tfrac32n-o(n) -$$ points. - -At most $2n$ points may be placed in a grid of any size $n$. For, if more points are placed, then by the pigeonhole principle some three of them would all lie on the same horizontal line of the grid. For small-enough values of $n$, this trivial bound is tight. 
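The Erdős construction above is easy to check computationally; this small sketch (ours, not part of the original article) verifies the no-three-in-line property of the parabola points for a few primes, using the doubled triangle area as a collinearity test. The same exact integer arithmetic also witnesses the Heilbronn-type bound discussed further below: every triangle has doubled area at least 1, so scaling the $p \times p$ grid into the unit square gives triangles of area at least $1/(2p^2)$.

```python
# For prime p, the points (i, i^2 mod p) contain no three collinear points:
# three points are collinear exactly when their doubled triangle area is 0.
from itertools import combinations

def min_doubled_triangle_area(points) -> int:
    """Smallest |cross product| over all point triples; 0 means collinear."""
    return min(
        abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
        for (x1, y1), (x2, y2), (x3, y3) in combinations(points, 3)
    )

for p in (5, 7, 11, 13):
    points = [(i, i * i % p) for i in range(p)]
    assert min_doubled_triangle_area(points) >= 1   # no three in line
print("Erdős parabola construction verified for p in {5, 7, 11, 13}")
```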
- -Although exactly $2n$ points can be placed on small grids, Guy and Kelly conjectured that for large grids, there is a significantly smaller upper bound on the number of points that can be placed. More precisely, they conjectured that the number of points that can be placed is at most a sublinear amount larger than $cn$, with -$$ -c = \sqrt[3]{\frac{2\pi^2}{3}} \approx 1.874. -$$ - -After an error in the heuristic reasoning leading to this conjecture was uncovered, Guy corrected the error and made the stronger conjecture that one cannot do more than sublinearly better than $cn$ with -$$ -c = \frac{\pi}{\sqrt 3} \approx 1.814. -$$ - -Solutions to the no-three-in-line problem can be used to avoid certain kinds of degeneracies in graph drawing. The problem they apply to involves placing the vertices of a given graph at integer coordinates in the plane, and drawing the edges of the graph as straight line segments. For certain graphs, such as the utility graph, crossings between pairs of edges are unavoidable, but one should still avoid placements that cause a vertex to lie on an edge through two other vertices. When the vertices are placed with no three in line, this kind of problematic placement cannot occur, because the entire line through any two vertices, and not just the line segment, is free of other vertices. The fact that the no-three-in-line problem has a solution with linearly many points can be translated into graph drawing terms as meaning that every graph, even a complete graph, can be drawn without unwanted vertex-edge incidences using a grid whose area is quadratic in the number of vertices, and that for complete graphs no such drawing with less than quadratic area is possible. The complete graphs also require a linear number of colors in any graph coloring, but other graphs that can be colored with fewer colors can also be drawn on smaller grids: if a graph has $n$ vertices and a graph coloring with $k$ colors, it can be drawn in a grid with area proportional to $nk$. The no-three-in-line drawing of a complete graph is a special case of this result with $k=n$. - -The no-three-in-line problem also has applications to another problem in discrete geometry, the Heilbronn triangle problem. In this problem, one must place $n$ points, anywhere in a unit square, not restricted to a grid. The goal of the placement is to avoid small-area triangles, and more specifically to maximize the area of the smallest triangle formed by three of the points. For instance, a placement with three points in line would be very bad by this criterion, because these three points would form a degenerate triangle with area zero. On the other hand, if the points can be placed on a grid of side length $\varepsilon$ within the unit square, with no three in a line, then by Pick's theorem every triangle would have area at least $\varepsilon^2/2$, half of a grid square. Therefore, solving an instance of the no-three-in-line problem and then scaling down the integer grid to fit within a unit square produces solutions to the Heilbronn triangle problem where the smallest triangle has area $\Omega(1/n^2)$. This application was the motivation for Paul Erdős to find his solution for the no-three-in-line problem. It remained the best area lower bound known for the Heilbronn triangle problem from 1951 until 1982, when it was improved by a logarithmic factor using a construction that was not based on the no-three-in-line problem. - -In computational geometry, finite sets of points with no three in line are said to be in general position. 
In this terminology, the no-three-in-line problem seeks the largest subset of a grid that is in general position, but researchers have also considered the problem of finding the largest general position subset of other non-grid sets of points. It is NP-hard to find this subset, for certain input sets, and hard to approximate its size to within a constant factor; this hardness of approximation result is summarized by saying that the problem is APX-hard. If the largest subset has size $k$, a solution with the non-constant approximation ratio $O(\sqrt k)$ can be obtained by a greedy algorithm that simply chooses points one at a time until all remaining points lie on lines through pairs of chosen points. - -One can get a finer-grained understanding of the running time of algorithms for finding the exact optimal solution by using parameterized complexity, in which algorithms are analyzed not only in terms of the input size, but in terms of other parameters of the input. In this case, for inputs whose largest general position subset has size $k$, it can be found in an amount of time that is an exponential function of $k$ multiplied by a polynomial in the input size $n$, with the exponent of the polynomial not depending on $k$. Problems with this kind of time bound are called fixed-parameter tractable. - -For point sets $S$ of $n$ points having at most $\ell$ points per line, with $\ell = O(\sqrt{n})$, there exist general-position subsets of size nearly proportional to $\sqrt{n/\log \ell}$. The example of the grid shows that this bound cannot be significantly improved. The proof of existence of these large general-position subsets can be converted into a polynomial-time algorithm for finding a general-position subset of $S$, of size matching the existence bound, using an algorithmic technique known as entropy compression. - -Repeating a suggestion of Adena, Holton, and Kelly, Martin Gardner asked for the smallest subset of an $n\times n$ grid that cannot be extended: it has no three points in a line, but every proper superset has three in a line. Equivalently, this is the smallest set that could be produced by a greedy algorithm that tries to solve the no-three-in-line problem by placing points one at a time until it gets stuck. If only axis-parallel and diagonal lines are considered, then every such set has at least $n-1$ points. However, less is known about the version of the problem where all lines are considered: every greedy placement includes $\Omega(n^{2/3})$ points before getting stuck, but nothing better than the trivial $2n$ upper bound is known. - -Non-collinear sets of points in the three-dimensional grid were considered by Pór and Wood. They proved that the maximum number of points in the $n\times n\times n$ grid with no three points collinear is $\Theta(n^2)$. - -Similarly to Erdős's 2D construction, this can be accomplished by using points $(x,y,x^2+y^2$ mod $p)$, where $p$ is a prime congruent to 3 mod 4. - -Just as the original no-three-in-line problem can be used for two-dimensional graph drawing, one can use this three-dimensional solution to draw graphs in the three-dimensional grid. Here the non-collinearity condition means that a vertex should not lie on a non-adjacent edge, but it is normal to work with the stronger requirement that no two edges cross. - -In much higher dimensions, sets of grid points with no three in line, obtained by choosing points near a hypersphere, have been used for finding large Salem–Spencer sets, sets of integers with no three forming an arithmetic progression. 
However, it does not work well to use this same idea of choosing points near a circle in two dimensions: this method finds points forming convex polygons, which satisfy the requirement of having no three in line, but are too small. The largest convex polygons with vertices in an $n\times n$ grid have only $O(n^{2/3})$ vertices. The cap set problem concerns a similar problem to the no-three-in-line problem in spaces that are both high-dimensional and based on vector spaces over finite fields rather than over the integers.

Another variation on the problem involves converting the grid into a discrete torus by using periodic boundary conditions in which the left side of the torus is connected to the right side, and the top side is connected to the bottom side. This has the effect, on slanted lines through the grid, of connecting them up into longer lines through more points, and therefore making it more difficult to select points with at most two from each line. These extended lines can also be interpreted as normal lines through an infinite grid in the Euclidean plane, taken modulo the dimensions of the torus. For a torus based on an $m\times n$ grid, the maximum number of points that can be chosen with no three in line is at most $2\gcd(m,n)$. When both dimensions are equal, and prime, it is not possible to place exactly one point in each row and column without forming a linear number of collinear triples. Higher-dimensional torus versions of the problem have also been studied.

diff --git a/wiki/wikipedia/3439.txt b/wiki/wikipedia/3439.txt
deleted file mode 100644
index 433d3539702cf0118df59da27de3acade6393e8f..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3439.txt
+++ /dev/null
@@ -1,22 +0,0 @@
In mathematics, the Grothendieck–Katz p-curvature conjecture is a local-global principle for linear ordinary differential equations, related to differential Galois theory and in a loose sense analogous to the result in the Chebotarev density theorem considered as the polynomial case. It is a conjecture of Alexander Grothendieck from the late 1960s, and apparently not published by him in any form.

The general case remains unsolved, despite recent progress; it has been linked to geometric investigations involving algebraic foliations.

In its simplest form, the conjecture can be stated in its essentials for a vector system written as
$$
dv/dz = A(z)v
$$
for a vector v of size n, and an n×n matrix A of algebraic functions with algebraic number coefficients. The question is to give a criterion for when there is a full set of algebraic function solutions, meaning a fundamental matrix (i.e. n vector solutions put into a block matrix). For example, a classical question was for the hypergeometric equation: when does it have a pair of algebraic solutions, in terms of its parameters? The answer is known classically as Schwarz's list. In monodromy terms, the question is that of identifying the cases of finite monodromy group.

By reformulation and passing to a larger system, the essential case is for rational functions in A and rational number coefficients. Then a necessary condition is that for almost all prime numbers p, the system defined by reduction modulo p should also have a full set of algebraic solutions, over the finite field with p elements.

Grothendieck's conjecture is that these necessary conditions, for almost all p, should be sufficient.
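A standard rank-one sanity check makes the statement concrete (this worked example is an illustrative addition, not part of the conjecture's usual exposition). Consider
$$
\frac{dv}{dz} = \frac{\alpha}{z} v ,
$$
whose solution $v = z^{\alpha}$ is algebraic over $\mathbb{Q}(z)$ exactly when $\alpha$ is rational. Iterating the equation gives
$$
\left(\frac{d}{dz}\right)^{p} v = \alpha(\alpha-1)\cdots(\alpha-p+1)\, z^{-p} v \equiv (\alpha^{p}-\alpha)\, z^{-p} v \pmod p ,
$$
using the identity $x^{p}-x = \prod_{i=0}^{p-1}(x-i)$ over $\mathbb{F}_p$. So the p-curvature vanishes modulo $p$ precisely when $\alpha^{p} \equiv \alpha \pmod p$. For rational $\alpha$ this holds at every prime not dividing its denominator, by Fermat's little theorem, while for an irrational algebraic $\alpha$ it fails for a positive density of primes, by the Chebotarev density theorem (the analogy mentioned at the start of this article). The conjecture is thus consistent with the classical picture in this simplest case.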
The connection with p-curvature is that the mod p condition stated is the same as saying the p-curvature, formed by a recurrence operation on A, is zero; so another way to say it is that p-curvature of 0 for almost all p implies enough algebraic solutions of the original equation.

Nicholas Katz has applied Tannakian category techniques to show that this conjecture is essentially the same as saying that the differential Galois group G (or strictly speaking the Lie algebra g of the algebraic group G, which in this case is the Zariski closure of the monodromy group) can be determined by mod p information, for a certain wide class of differential equations.

A wide class of cases has been proved by Benson Farb and Mark Kisin; these equations are on a locally symmetric variety X subject to some group-theoretic conditions. This work is based on the previous results of Katz for Picard–Fuchs equations (in the contemporary sense of the Gauss–Manin connection), as amplified in the Tannakian direction by André. It also applies a version of superrigidity particular to arithmetic groups. Other progress has been made by arithmetic methods.

Nicholas Katz related some cases to deformation theory in 1972, in a paper where the conjecture was published. Since then, reformulations have been published. A q-analogue for difference equations has been proposed.

In responding to Kisin's talk on this work at the 2009 Colloque Grothendieck, Katz gave a brief account from personal knowledge of the genesis of the conjecture. Grothendieck put it forth in public discussion in Spring 1969, but wrote nothing on the topic. He was led to the idea by foundational intuitions in the area of crystalline cohomology, at that time being developed by his student Pierre Berthelot. In some way wishing to equate the notion of "nilpotence" in the theory of connections with the divided power structure technique that became standard in crystalline theory, Grothendieck produced the conjecture as a by-product.

diff --git a/wiki/wikipedia/344.txt b/wiki/wikipedia/344.txt
deleted file mode 100644
index ee1115a16cf553c4e9bec9d31f24b398bbd5ce3b..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/344.txt
+++ /dev/null
@@ -1,131 +0,0 @@
Nqthm is a theorem prover sometimes referred to as the Boyer–Moore theorem prover. It was a precursor to ACL2.

The system was developed by Robert S. Boyer and J Strother Moore, professors of computer science at the University of Texas, Austin. They began work on the system in 1971 in Edinburgh, Scotland. Their goal was to make a fully automatic, logic-based theorem prover. They used a variant of Pure LISP as the working logic.

Definitions are formed as total recursive functions; the system makes extensive use of rewriting and of an induction heuristic that is invoked when rewriting and what they called symbolic evaluation fail.

The system was built on top of Lisp and had some very basic knowledge in what was called "Ground-zero", the state of the machine after bootstrapping it onto a Common Lisp implementation.

This is an example of the proof of a simple arithmetic theorem.
The function TIMES is part of the BOOT-STRAP (called a "satellite") and is defined to be

(DEFN TIMES (X Y)
  (IF (ZEROP X)
      0
      (PLUS Y (TIMES (SUB1 X) Y))))

The formulation of the theorem is also given in a Lisp-like syntax:

(prove-lemma commutativity-of-times (rewrite)
  (equal (times x z) (times z x)))

Should the theorem prove to be true, it will be added to the knowledge base of the system and can be used as a rewrite rule for future proofs.

The proof itself is given in a quasi-natural-language manner. The authors randomly choose typical mathematical phrases for embedding the steps in the mathematical proof, which actually makes the proofs quite readable. There are macros for LaTeX that can transform the Lisp structure into more or less readable mathematical language.

The proof of the commutativity of times continues:

Give the conjecture the name *1.
We will appeal to induction. Two inductions are suggested by terms in the conjecture,
both of which are flawed. We limit our consideration to the two suggested by the
largest number of nonprimitive recursive functions in the conjecture. Since both of
these are equally likely, we will choose arbitrarily. We will induct according to
the following scheme:
(AND (IMPLIES (ZEROP X) (p X Z))
     (IMPLIES (AND (NOT (ZEROP X)) (p (SUB1 X) Z))
              (p X Z))).
Linear arithmetic, the lemma COUNT-NUMBERP, and the definition of ZEROP inform
us that the measure (COUNT X) decreases according to the well-founded relation
LESSP in each induction step of the scheme. The above induction scheme
produces the following two new conjectures:
Case 2. (IMPLIES (ZEROP X)
                 (EQUAL (TIMES X Z) (TIMES Z X))).

and after winding itself through a number of induction proofs, finally concludes that

Case 1. (IMPLIES (AND (NOT (ZEROP Z))
                      (EQUAL 0 (TIMES (SUB1 Z) 0)))
                 (EQUAL 0 (TIMES Z 0))).
This simplifies, expanding the definitions of ZEROP, TIMES, PLUS, and EQUAL, to:
T.
That finishes the proof of *1.1, which also finishes the proof of *1.
Q.E.D.
[ 0.0 1.2 0.5 ]
COMMUTATIVITY-OF-TIMES

Many proofs have been done or confirmed with the system, particularly

* (1971) list concatenation
* (1973) insertion sort
* (1974) a binary adder
* (1976) an expression compiler for a stack machine
* (1978) uniqueness of prime factorizations
* (1983) invertibility of the RSA encryption algorithm
* (1984) unsolvability of the halting problem for Pure Lisp
* (1985) FM8501 microprocessor (Warren Hunt)
* (1986) Gödel's incompleteness theorem (Shankar)
* (1988) CLI Stack (Bill Bevier, Warren Hunt, Matt Kaufmann, J Moore, Bill Young)
* (1990) Gauss' law of quadratic reciprocity (David Russinoff)
* (1992) Byzantine Generals and Clock Synchronization (Bevier and Young)
* (1992) A compiler for a subset of the Nqthm language (Arthur Flatau)
* (1993) bi-phase mark asynchronous communications protocol
* (1993) Motorola MC68020 and Berkeley C String Library (Yuan Yu)
* (1994) Paris–Harrington Ramsey theorem (Kenneth Kunen)
* (1996) The equivalence of NFSA and DFSA (Debora Weber-Wulff)

A more powerful version, called PC-Nqthm (Proof-checker Nqthm), was developed by Matt Kaufmann. This gave the proof tools that the system uses automatically to the user, so that more guidance can be given to the proof. This is a great help, as the system has an unproductive tendency to wander down infinite chains of inductive proofs.
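For readers less fluent in Lisp, here is a small Python sketch (an illustrative addition, not part of Nqthm) that mirrors the recursive definition of TIMES and spot-checks the lemma on small values; the prover itself establishes it for all naturals via the induction scheme shown above.

def times(x, y):
    # Mirror of Nqthm's TIMES: recursion on the first argument.
    if x == 0:                        # (ZEROP X)
        return 0
    return y + times(x - 1, y)        # (PLUS Y (TIMES (SUB1 X) Y))

# Finite spot-check of (EQUAL (TIMES X Z) (TIMES Z X)).
assert all(times(x, z) == times(z, x)
           for x in range(25) for z in range(25))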
* A Computational Logic Handbook, R. S. Boyer and J S. Moore, Academic Press (2nd edition), 1997.

* The Boyer–Moore Theorem Prover and Its Interactive Enhancement, M. Kaufmann and R. S. Boyer, Computers and Mathematics with Applications, 29(2), 1995, pp. 27–62.

In 2005 Robert S. Boyer, Matt Kaufmann, and J Strother Moore received the ACM Software System Award for their work on the Nqthm theorem prover.

diff --git a/wiki/wikipedia/3440.txt b/wiki/wikipedia/3440.txt
deleted file mode 100644
index 857d8da6caf6965054a27ebca270086e20897eeb..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3440.txt
+++ /dev/null
@@ -1,259 +0,0 @@
In mathematics, the symmetry of second derivatives (also called the equality of mixed partials) refers to the possibility under certain conditions (see below) of interchanging the order of taking partial derivatives of a function
$$
f\left(x_1, x_2, \ldots, x_n\right)
$$
of n variables. The symmetry is the assertion that the second-order partial derivatives satisfy the identity
$$
\frac {\partial}{\partial x_i} \left( \frac{\partial f}{\partial x_j} \right) \ = \ \frac {\partial}{\partial x_j} \left( \frac{\partial f}{\partial x_i} \right)
$$
so that they form an n × n symmetric matrix, known as the function's Hessian matrix. This is sometimes known as Schwarz's theorem, Clairaut's theorem, or Young's theorem.

In the context of partial differential equations it is called the Schwarz integrability condition.

In symbols, the symmetry may be expressed as:
$$
\frac {\partial}{\partial x} \left( \frac{\partial f}{\partial y} \right) \ = \ \frac {\partial}{\partial y} \left( \frac{\partial f}{\partial x} \right) \qquad\text{or}\qquad \frac {\partial^2\! f} {\partial x\partial y} \ =\ \frac{\partial^2\! f} {\partial y\partial x}.
$$

Another notation is:
$$
\partial_x\partial_y f = \partial_y\partial_x f \qquad\text{or}\qquad f_{yx} = f_{xy}.
$$

In terms of composition of the differential operator Di which takes the partial derivative with respect to xi:
$$
D_i \circ D_j = D_j \circ D_i .
$$

From this relation it follows that the ring of differential operators with constant coefficients, generated by the Di, is commutative; but this is only true as operators over a domain of sufficiently differentiable functions. It is easy to check the symmetry as applied to monomials, so that one can take polynomials in the xi as a domain. In fact smooth functions are another valid domain.

The result on the equality of mixed partial derivatives under certain conditions has a long history. The list of unsuccessful proposed proofs started with Euler's, published in 1740, although already in 1721 Bernoulli had implicitly assumed the result with no formal justification. Clairaut also published a proposed proof in 1740, with no other attempts until the end of the 18th century. Starting then, for a period of 70 years, a number of incomplete proofs were proposed. The proof of Lagrange (1797) was improved by Cauchy (1823), but assumed the existence and continuity of the partial derivatives $\tfrac{\partial^2 f}{\partial x^2}$ and $\tfrac{\partial^2 f}{\partial y^2}$. Other attempts were made by P. Blanchet (1841), Duhamel (1856), Sturm (1857), Schlömilch (1862), and Bertrand (1864). Finally in 1867 Lindelöf systematically analysed all the earlier flawed proofs and was able to exhibit a specific counterexample where mixed derivatives failed to be equal.
Six years after that, Schwarz succeeded in giving the first rigorous proof. Dini later contributed by finding more general conditions than those of Schwarz. Eventually a clean and more general version was found by Jordan in 1883 that is still the proof found in most textbooks. Minor variants of earlier proofs were published by Laurent (1885), Peano (1889 and 1893), J. Edwards (1892), P. Haag (1893), J. K. Whittemore (1898), Vivanti (1899) and Pierpont (1905). Further progress was made in 1907-1909 when E. W. Hobson and W. H. Young found proofs with weaker conditions than those of Schwarz and Dini. In 1918, Carathéodory gave a different proof based on the Lebesgue integral.

In mathematical analysis, Schwarz's theorem (or Clairaut's theorem on equality of mixed partials), named after Alexis Clairaut and Hermann Schwarz, states that for a function $f \colon \Omega \to \mathbb{R}$ defined on a set $\Omega \subset \mathbb{R}^n$, if $\mathbf{p}\in \mathbb{R}^n$ is a point such that some neighborhood of $\mathbf{p}$ is contained in $\Omega$ and $f$ has continuous second partial derivatives at the point $\mathbf{p}$, then $\forall i, j \in \{1, 2, \ldots, n\}$,
$$
\frac{\partial^2}{\partial x_i \partial x_j}f(\mathbf{p}) = \frac{\partial^2}{\partial x_j \partial x_i}f(\mathbf{p}).
$$

The partial derivatives of this function commute at that point.

One easy way to establish this theorem (in the case where $n = 2$, $i = 1$, and $j = 2$, which readily entails the result in general) is by applying Green's theorem to the gradient of $f.$

An elementary proof for functions on open subsets of the plane is as follows (by a simple reduction, the general case of Schwarz's theorem clearly reduces to the planar case). Let $f$ be a differentiable function on an open rectangle $\Omega$ containing $(a,b)$ and suppose that $df$ is continuous with $\partial_x \partial _y f$ and $\partial_y\partial_x f$ both continuous. Define

\begin{align}
u\left(h, k\right) &= f\left(a + h, b + k\right) - f\left(a + h, b\right), \\
v\left(h, k\right) &= f\left(a + h, b + k\right) - f\left(a, b + k\right), \\
w\left(h, k\right) &= f\left(a + h, b + k\right) - f\left(a + h, b\right) - f\left(a, b + k\right) + f\left(a, b\right).
\end{align}

These functions are defined for $\left|h\right|, \left|k\right| < \varepsilon$, where $\varepsilon > 0$ and $\left[a - \varepsilon, a + \varepsilon\right] \times \left[b - \varepsilon, b + \varepsilon\right] \subset \Omega$.

By the mean value theorem, intermediate values $\theta,\theta^\prime, \phi,\phi^\prime$ can be found in $(0,1)$ with

\begin{align}
w\left(h, k\right) &= u\left(h, k\right) - u\left(0, k\right) = h \partial_x u\left(\theta h, k\right) \\
&= h\left[\partial_x f\left(a + \theta h, b + k\right) - \partial_x f\left(a + \theta h, b\right)\right] \\
&= hk \partial_y \partial_x f\left(a + \theta h, b + \theta^\prime k\right), \\
w\left(h, k\right) &= v\left(h, k\right) - v\left(h, 0\right) = k\partial_y v\left(h, \phi k\right) \\
&= k\left[\partial_y f\left(a + h, b + \phi k\right) - \partial_y f\left(a, b + \phi k\right)\right] \\
&= hk \partial_x\partial_y f \left(a + \phi^\prime h, b + \phi k\right).
\end{align}

Since $h,k \neq 0$, the first equality below can be divided by $hk$:

\begin{align}
hk\partial_y\partial_x f\left(a + \theta h, b + \theta^\prime k\right) &= hk \partial_x\partial_y f\left(a + \phi^\prime h, b + \phi k\right), \\
\partial_y\partial_x f\left(a + \theta h, b + \theta^\prime k\right) &= \partial_x\partial_y f\left(a + \phi^\prime h, b + \phi k\right).
\end{align}

Letting $h,k$ tend to zero in the last equality, the continuity assumptions on $\partial_y\partial_x f$ and $\partial_x\partial_y f$ now imply that
$$
\frac{\partial^2}{\partial x\partial y}f\left(a, b\right) = \frac{\partial^2}{\partial y\partial x}f\left(a, b\right).
$$

This account is a straightforward classical method found in many textbooks, for example in Burkill, Apostol and Rudin.

Although the derivation above is elementary, the approach can also be viewed from a more conceptual perspective, so that the result becomes more apparent. Indeed the difference operators $\Delta^t_x,\Delta^t_y$ commute and $\Delta^t_x f,\Delta^t_y f$ tend to $\partial_x f, \partial_y f$ as $t$ tends to 0, with a similar statement for second-order operators. Here, for $z$ a vector in the plane and $u$ a directional vector, the difference operator is defined by
$$
\Delta^t_u f(z)= {f(z+tu) - f(z)\over t}.
$$

By the fundamental theorem of calculus for $C^1$ functions $f$ on an open interval $I$ with $(a,b) \subset I$,
$$
\int_a^b f^\prime (x) dx = f(b) - f(a).
$$

Hence
$$
|f(b) - f(a)| \le (b-a) \sup_{c\in (a,b)} |f^\prime(c)|.
$$

This is a generalized version of the mean value theorem. Recall that the elementary discussion on maxima or minima for real-valued functions implies that if $f$ is continuous on $[a,b]$ and differentiable on $(a,b)$, then there is a point $c$ in $(a,b)$ such that
$$
{f(b) - f(a) \over b - a} = f^\prime(c).
$$

For vector-valued functions with $V$ a finite-dimensional normed space, there is no analogue of the equality above; indeed it fails. But since $\inf f^\prime \le f^\prime(c) \le \sup f^\prime$, the inequality above is a useful substitute. Moreover, using the pairing of the dual of $V$ with its dual norm yields the following inequality:
$$
\|f(b) - f(a)\| \le (b-a) \sup_{c\in (a,b)} \|f^\prime(c)\|.
$$

These versions of the mean value theorem are discussed in Rudin, Hörmander and elsewhere.

For $f$ a $C^2$ function on an open set in the plane, define $D_1 = \partial_x$ and $D_2 = \partial_y$. Furthermore for $t \ne 0$ set
$$
\Delta_1^t f(x,y) = [f(x+t,y)-f(x,y)]/t, \qquad \Delta^t_2f(x,y)=[f(x,y+t) -f(x,y)]/t.
$$

Then for $(x_0,y_0)$ in the open set, the generalized mean value theorem can be applied twice:
$$
\left|\Delta_1^t\Delta_2^t f(x_0,y_0) - D_1 D_2f(x_0,y_0)\right|\le \sup_{0\le s \le 1} \left|\Delta_1^t D_2 f(x_0,y_0 + ts) -D_1D_2 f(x_0,y_0)\right|\le \sup_{0\le r,s\le 1} \left|D_1D_2f(x_0+tr,y_0+ts) - D_1D_2f(x_0,y_0)\right|.
$$

Thus $\Delta_1^t\Delta_2^t f(x_0,y_0)$ tends to $D_1 D_2f(x_0,y_0)$ as $t$ tends to 0. The same argument shows that $\Delta_2^t\Delta_1^t f(x_0,y_0)$ tends to $D_2 D_1f(x_0,y_0)$. Hence, since the difference operators commute, so do the partial differential operators $D_1$ and $D_2$, as claimed.

Remark. By two applications of the classical mean value theorem,
$$
\Delta_1^t\Delta_2^t f(x_0,y_0)= D_1 D_2 f(x_0+t\theta,y_0 +t\theta^\prime)
$$
for some $\theta$ and $\theta^\prime$ in $(0,1)$. Thus the first elementary proof can be reinterpreted using difference operators.
Conversely, instead of using the generalized mean value theorem in the second proof, the classical mean value theorem could be used.

The properties of repeated Riemann integrals of a continuous function F on a compact rectangle [a,b] × [c,d] are easily established. The uniform continuity of F implies immediately that the functions $g(x)=\int_c^d F(x,y) dy$ and $h(y)=\int_a^b F(x,y) dx$ are continuous. It follows that
$$
\int_a^b \int_c^d F(x,y) dy dx = \int_c^d \int_a^b F(x,y) dx dy;
$$
moreover it is immediate that the iterated integral is positive if F is positive. The equality above is a simple case of Fubini's theorem, involving no measure theory. Titchmarsh proves it in a straightforward way using Riemann approximating sums corresponding to subdivisions of a rectangle into smaller rectangles.

To prove Clairaut's theorem, assume f is a differentiable function on an open set U, for which the mixed second partial derivatives fyx and fxy exist and are continuous. Using the fundamental theorem of calculus twice,
$$
\int_c^d \int_a^b f_{yx}(x,y) dx dy = \int_c^d f_y(b,y) - f_y(a,y) dy = f(b,d)-f(a,d)-f(b,c)+f(a,c).
$$
Similarly
$$
\int_a^b \int_c^d f_{xy}(x,y) dy dx = \int_a^b f_x(x,d) - f_x(x,c) dx = f(b,d)-f(a,d)-f(b,c)+f(a,c).
$$
The two iterated integrals are therefore equal. On the other hand, since fxy(x,y) is continuous, the second iterated integral can be performed by first integrating over x and then afterwards over y. But then the iterated integral of fyx − fxy on [a,b] × [c,d] must vanish. However, if the iterated integral of a continuous function F vanishes for all rectangles, then F must be identically zero; for otherwise F or −F would be strictly positive at some point and therefore by continuity on a rectangle, which is not possible. Hence fyx − fxy must vanish identically, so that fyx = fxy everywhere.

A weaker condition than the continuity of the second partial derivatives (and implied by it), which suffices to ensure symmetry, is that all partial derivatives are themselves differentiable. Another strengthening of the theorem, in which existence of the permuted mixed partial is asserted, was provided by Peano in a short 1890 note in Mathesis:

If $f:E \to \mathbb{R}$ is defined on an open set $E \subset \R^2$; $\partial_1 f(x, y)$ and $\partial_{2,1}f(x, y)$ exist everywhere on $E$; $\partial_{2,1}f$ is continuous at $\left(x_0, y_0\right) \in E$, and if $\partial_{2}f(x, y_0)$ exists in a neighborhood of $x = x_0$, then $\partial_{1,2}f$ exists at $\left(x_0, y_0\right)$ and $\partial_{1,2}f\left(x_0, y_0\right) = \partial_{2,1}f\left(x_0, y_0\right)$.

The theory of distributions (generalized functions) eliminates analytic problems with the symmetry. The derivative of an integrable function can always be defined as a distribution, and symmetry of mixed partial derivatives always holds as an equality of distributions. The use of formal integration by parts to define differentiation of distributions puts the symmetry question back onto the test functions, which are smooth and certainly satisfy this symmetry. In more detail (where f is a distribution, written as an operator on test functions, and φ is a test function),
$$
\left(D_1 D_2 f\right)[\phi] = -\left(D_2f\right)\left[D_1\phi\right] = f\left[D_2 D_1\phi\right] = f\left[D_1 D_2\phi\right] = -\left(D_1 f\right)\left[D_2\phi\right] = \left(D_2 D_1 f\right)[\phi].
$$
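Because the symmetry always holds for smooth functions, any concrete smooth example can be checked mechanically. The following short Python sketch is an illustrative addition (assuming the sympy library is available):

import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x*y) * sp.sin(x + y**2)   # any smooth expression will do

# d/dy(d/dx f) and d/dx(d/dy f) agree, as Schwarz's theorem guarantees.
assert sp.simplify(sp.diff(f, x, y) - sp.diff(f, y, x)) == 0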
Another approach, which defines the Fourier transform of a function, is to note that on such transforms partial derivatives become multiplication operators that commute much more obviously.

The symmetry may be broken if the function fails to have differentiable partial derivatives, which is possible if the hypotheses of Clairaut's theorem are not satisfied (the second partial derivatives are not continuous).

An example of non-symmetry is the function (due to Peano)
$$
f(x, y) = \begin{cases} \frac{xy\left(x^2 - y^2\right)}{x^2 + y^2} & \mbox{ for } (x, y) \ne (0, 0),\\ 0 & \mbox{ for } (x, y) = (0, 0). \end{cases}
$$

This can be visualized by the polar form $f(r \cos(\theta), r\sin(\theta)) = \frac{r^2 \sin(4\theta)}{4}$; it is everywhere continuous, but its derivatives at (0, 0) cannot be computed algebraically. Rather, the limit of difference quotients shows that $f_x(0,0) = f_y(0,0) = 0$, so the graph $z = f(x, y)$ has a horizontal tangent plane at (0, 0), and the partial derivatives $f_x, f_y$ exist and are everywhere continuous. However, the second partial derivatives are not continuous at (0, 0), and the symmetry fails. In fact, along the x-axis the y-derivative is $f_y(x,0) = x$, and so:
$$
f_{yx}(0,0) = \lim_{\varepsilon \to 0} \frac{f_y(\varepsilon,0) - f_y(0,0)}{\varepsilon} = 1.
$$

In contrast, along the y-axis the x-derivative is $f_x(0,y) = -y$, and so $f_{xy}(0,0) = -1$. That is, $f_{yx} \ne f_{xy}$ at (0, 0), although the mixed partial derivatives do exist, and at every other point the symmetry does hold.

The above function, written in a cylindrical coordinate system, can be expressed as
$$
f(r, \theta) = \frac{r^2 \sin{4\theta}}{4},
$$
showing that the function oscillates four times when traveling once around an arbitrarily small loop containing the origin. Intuitively, therefore, the local behavior of the function at (0, 0) cannot be described as a quadratic form, and the Hessian matrix thus fails to be symmetric.

In general, the interchange of limiting operations need not commute. Given two variables near (0, 0), consider the two limiting processes on
$$
f(h, k) - f(h, 0) - f(0, k) + f(0, 0)
$$
corresponding to making h → 0 first, and to making k → 0 first. It can matter, looking at the first-order terms, which is applied first. This leads to the construction of pathological examples in which second derivatives are non-symmetric. This kind of example belongs to the theory of real analysis where the pointwise value of functions matters. When viewed as a distribution the second partial derivative's values can be changed at an arbitrary set of points as long as this has Lebesgue measure 0. Since in the example the Hessian is symmetric everywhere except (0, 0), there is no contradiction with the fact that the Hessian, viewed as a Schwartz distribution, is symmetric.

Consider the first-order differential operators Di to be infinitesimal operators on Euclidean space. That is, Di in a sense generates the one-parameter group of translations parallel to the xi-axis. These groups commute with each other, and therefore the infinitesimal generators do also; the Lie bracket
$$
[D_i, D_j] = 0
$$
is this property's reflection. In other words, the Lie derivative of one coordinate with respect to another is zero.
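As a numerical companion to Peano's example above (a sketch added for illustration, not part of the article), central differences recover the two unequal mixed partials at the origin:

def f(x, y):
    # Peano's example: f = xy(x^2 - y^2)/(x^2 + y^2), with f(0,0) = 0.
    return 0.0 if x == 0 and y == 0 else x*y*(x*x - y*y)/(x*x + y*y)

def f_y(x, eps=1e-7):
    # df/dy at (x, 0), by a central difference.
    return (f(x, eps) - f(x, -eps)) / (2*eps)

def f_x(y, eps=1e-7):
    # df/dx at (0, y), by a central difference.
    return (f(eps, y) - f(-eps, y)) / (2*eps)

h = 1e-4
print((f_y(h) - f_y(0)) / h)   # approx +1, the value of f_yx(0,0)
print((f_x(h) - f_x(0)) / h)   # approx -1, the value of f_xy(0,0)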
The Clairaut–Schwarz theorem is the key fact needed to prove that for every $C^\infty$ (or at least twice differentiable) differential form $\omega\in\Omega^k(M)$, the second exterior derivative vanishes: $d^2\omega := d(d\omega) = 0$. This implies that every differentiable exact form (i.e., a form $\alpha$ such that $\alpha = d\omega$ for some form $\omega$) is closed (i.e., $d\alpha = 0$), since $d\alpha = d(d\omega) = 0$.

In the middle of the 18th century, the theory of differential forms was first studied in the simplest case of 1-forms in the plane, i.e. $Adx + Bdy$, where $A$ and $B$ are functions in the plane. The study of 1-forms and the differentials of functions began with Clairaut's papers in 1739 and 1740. At that stage his investigations were interpreted as ways of solving ordinary differential equations. Formally Clairaut showed that a 1-form $\omega = A dx + B dy$ on an open rectangle is closed, i.e. $d\omega=0$, if and only if $\omega$ has the form $df$ for some function $f$ in the disk. The solution for $f$ can be written by the explicit integral formula
$$
f(x,y)=\int_{x_0}^x A(x,y) dx + \int_{y_0} ^y B(x,y) dy;
$$
while if $\omega= df$, the closed property $d\omega=0$ is the identity $\partial_x\partial_y f = \partial_y\partial_x f$. (In modern language this is one version of the Poincaré lemma.)

diff --git a/wiki/wikipedia/3441.txt b/wiki/wikipedia/3441.txt
deleted file mode 100644
index 01f1f90aca232cc0d9ac5a2de19d19f741e7c073..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3441.txt
+++ /dev/null
@@ -1,105 +0,0 @@
A wormhole (or Einstein–Rosen bridge or Einstein–Rosen wormhole) is a speculative structure linking disparate points in spacetime, and is based on a special solution of the Einstein field equations.

A wormhole can be visualized as a tunnel with two ends at separate points in spacetime (i.e., different locations, different points in time, or both).

Wormholes are consistent with the general theory of relativity, but whether wormholes actually exist remains to be seen. Many scientists postulate that wormholes are merely projections of a fourth spatial dimension, analogous to how a two-dimensional (2D) being could experience only part of a three-dimensional (3D) object.

Theoretically, a wormhole might connect extremely long distances such as a billion light years, or short distances such as a few meters, or different points in time, or even different universes.

In 1995, Matt Visser suggested that there may be many wormholes in the universe if cosmic strings with negative mass were generated in the early universe. Some physicists, such as Frank Tipler and Kip Thorne, have suggested how to make wormholes artificially.

For a simplified notion of a wormhole, space can be visualized as a two-dimensional surface. In this case, a wormhole would appear as a hole in that surface, lead into a 3D tube (the inside surface of a cylinder), then re-emerge at another location on the 2D surface with a hole similar to the entrance. An actual wormhole would be analogous to this, but with the spatial dimensions raised by one. For example, instead of circular holes on a 2D plane, the entry and exit points could be visualized as spherical holes in 3D space leading into a four-dimensional "tube" similar to a spherinder.

Another way to imagine wormholes is to take a sheet of paper and draw two somewhat distant points on one side of the paper.
The sheet of paper represents a plane in the spacetime continuum, and the two points represent a distance to be traveled, but theoretically a wormhole could connect these two points by folding that plane (i.e. the paper) so the points are touching. In this way it would be much easier to traverse the distance since the two points are now touching.

In 1928, German mathematician, philosopher and theoretical physicist Hermann Weyl proposed a wormhole hypothesis of matter in connection with mass analysis of electromagnetic field energy; however, he did not use the term "wormhole" (he spoke of "one-dimensional tubes" instead).

American theoretical physicist John Archibald Wheeler (inspired by Weyl's work) coined the term "wormhole" in 1957; this type of solution is also known as an Einstein–Rosen bridge (named after Albert Einstein and Nathan Rosen). However, in 1962, John Archibald Wheeler and Robert W. Fuller published a paper showing that this type of wormhole is unstable if it connects two parts of the same universe, and that it will pinch off too quickly for light (or any particle moving slower than light) that falls in from one exterior region to make it to the other exterior region.

According to general relativity, the gravitational collapse of a sufficiently compact mass forms a singular Schwarzschild black hole. In the Einstein–Cartan–Sciama–Kibble theory of gravity, however, it forms a regular Einstein–Rosen bridge. This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamic variable. Torsion naturally accounts for the quantum-mechanical, intrinsic angular momentum (spin) of matter. The minimal coupling between torsion and Dirac spinors generates a repulsive spin–spin interaction that is significant in fermionic matter at extremely high densities. Such an interaction prevents the formation of a gravitational singularity. Instead, the collapsing matter reaches an enormous but finite density and rebounds, forming the other side of the bridge.

Although Schwarzschild wormholes are not traversable in both directions, their existence inspired Kip Thorne to imagine traversable wormholes created by holding the "throat" of a Schwarzschild wormhole open with exotic matter (material that has negative mass/energy).

Other non-traversable wormholes include Lorentzian wormholes (first proposed by John Archibald Wheeler in 1957), wormholes creating a spacetime foam in a general relativistic spacetime manifold depicted by a Lorentzian manifold, and Euclidean wormholes (named after Euclidean manifold, a structure of Riemannian manifold).

The Casimir effect shows that quantum field theory allows the energy density in certain regions of space to be negative relative to the ordinary matter vacuum energy, and it has been shown theoretically that quantum field theory allows states where energy can be arbitrarily negative at a given point. Many physicists, such as Stephen Hawking and Kip Thorne, have argued that such effects might make it possible to stabilize a traversable wormhole, and stable versions of such wormholes have been suggested as dark matter candidates. It has also been proposed that, if a tiny wormhole held open by a negative-mass cosmic string had appeared around the time of the Big Bang, it could have been inflated to macroscopic size by cosmic inflation.

Lorentzian traversable wormholes would allow travel in both directions from one part of the universe to another part of that same universe very quickly or would allow travel from one universe to another.
The possibility of traversable wormholes in general relativity was first demonstrated in a 1973 paper by Homer Ellis and independently in a 1973 paper by K. A. Bronnikov. Ellis analyzed the topology and the geodesics of the Ellis drainhole, showing it to be geodesically complete, horizonless, singularity-free, and fully traversable in both directions. The drainhole is a solution manifold of Einstein's field equations for a vacuum spacetime, modified by inclusion of a scalar field minimally coupled to the Ricci tensor with antiorthodox polarity (negative instead of positive). (Ellis specifically rejected referring to the scalar field as 'exotic' because of the antiorthodox coupling, finding arguments for doing so unpersuasive.) The solution depends on two parameters: m, which fixes the strength of its gravitational field, and n, which determines the curvature of its spatial cross sections. When m is set equal to 0, the drainhole's gravitational field vanishes. What is left is the Ellis wormhole, a nongravitating, purely geometric, traversable wormhole.

Kip Thorne and his graduate student Mike Morris, unaware of the 1973 papers by Ellis and Bronnikov, manufactured, and in 1988 published, a duplicate of the Ellis wormhole for use as a tool for teaching general relativity. For this reason, the type of traversable wormhole they proposed, held open by a spherical shell of exotic matter, was from 1988 to 2015 referred to in the literature as a Morris–Thorne wormhole.

Later, other types of traversable wormholes were discovered as allowable solutions to the equations of general relativity, including a variety analyzed in a 1989 paper by Matt Visser, in which a path through the wormhole can be made where the traversing path does not pass through a region of exotic matter. However, in the pure Gauss–Bonnet gravity (a modification to general relativity involving extra spatial dimensions which is sometimes studied in the context of brane cosmology) exotic matter is not needed in order for wormholes to exist: they can exist even with no matter. A type held open by negative-mass cosmic strings was put forth by Visser in collaboration with Cramer et al.

However, according to general relativity, it would not be possible to use a wormhole to travel back to a time earlier than when the wormhole was first converted into a time "machine". Until this time it could not have been noticed or have been used. This means that an observer entering the "younger" end would exit the "older" end at a time when it was the same age as the "younger" end, effectively going back in time as seen by an observer from the outside. One significant limitation of such a time machine is that it is only possible to go as far back in time as the initial creation of the machine. Many physicists believe that the required negative energy may actually be possible due to the Casimir effect in quantum physics. Although early calculations suggested a very large amount of negative energy would be required, later calculations showed that the amount of negative energy can be made arbitrarily small.

In 1993, Matt Visser argued that the two mouths of a wormhole with such an induced clock difference could not be brought together without inducing quantum field and gravitational effects that would either make the wormhole collapse or the two mouths repel each other, or otherwise prevent information from passing through the wormhole. Because of this, the two mouths could not be brought close enough for causality violation to take place.
However, in a 1997 paper, Visser hypothesized that a complex "Roman ring" (named after Tom Roman) configuration of an N number of wormholes arranged in a symmetric polygon could still act as a time machine, although he concludes that this is more likely a flaw in classical quantum gravity theory rather than proof that causality violation is possible.

A possible resolution to the paradoxes resulting from wormhole-enabled time travel rests on the many-worlds interpretation of quantum mechanics.

In 1991 David Deutsch showed that quantum theory is fully consistent (in the sense that the so-called density matrix can be made free of discontinuities) in spacetimes with closed timelike curves. However, it was later shown that such a model of closed timelike curves can have internal inconsistencies, as it will lead to strange phenomena like distinguishing non-orthogonal quantum states and distinguishing proper and improper mixtures. Accordingly, the destructive positive feedback loop of virtual particles circulating through a wormhole time machine, a result indicated by semi-classical calculations, is averted. A particle returning from the future does not return to its universe of origination but to a parallel universe. This suggests that a wormhole time machine with an exceedingly short time jump is a theoretical bridge between contemporaneous parallel universes.

Because a wormhole time-machine introduces a type of nonlinearity into quantum theory, this sort of communication between parallel universes is consistent with Joseph Polchinski's proposal of an Everett phone (named after Hugh Everett) in Steven Weinberg's formulation of nonlinear quantum mechanics.

The possibility of communication between parallel universes has been dubbed interuniversal travel.

A wormhole can also be depicted in a Penrose diagram of a Schwarzschild black hole. In the Penrose diagram, an object traveling faster than light will cross the black hole and will emerge from another end into a different space, time or universe. This will be an interuniversal wormhole.

Theories of wormhole metrics describe the spacetime geometry of a wormhole and serve as theoretical models for time travel. An example of a (traversable) wormhole metric is the following:
$$
ds^2= - c^2 dt^2 + d\ell^2 + (k^2 + \ell^2)(d \theta^2 + \sin^2 \theta d\varphi^2),
$$
first presented by Ellis (see Ellis wormhole) as a special case of the Ellis drainhole.

One type of non-traversable wormhole metric is the Schwarzschild solution:
$$
ds^2= - c^2 \left(1 - \frac{2GM}{rc^2}\right) dt^2 + \frac{dr^2}{1 - \frac{2GM}{rc^2}} + r^2(d \theta^2 + \sin^2 \theta d\varphi^2).
$$

The original Einstein–Rosen bridge was described in an article published in July 1935.

For the Schwarzschild spherically symmetric static solution
$$
ds^2 = - \frac{1}{1 - \frac{2m}{r}} dr^2 - r^2(d\theta^2 + \sin^2 \theta d\varphi^2) + \left(1 - \frac{2m}{r} \right) dt^2,
$$
where $ds$ is the proper time and $c = 1$.
If one replaces $r$ with $u$ according to $u^2 = r - 2m$, then
$$
ds^2 = -4(u^2 + 2m)du^2 - (u^2 + 2m)^2(d\theta^2 + \sin^2 \theta d\varphi^2) + \frac{u^2}{u^2 + 2m} dt^2 .
$$

For the combined field, gravity and electricity, Einstein and Rosen derived the following static spherically symmetric solution
$$
\varphi_1 = \varphi_2 = \varphi_3 = 0, \qquad \varphi_4 = \frac{\varepsilon}{r},
$$
$$
ds^2 = - \frac{1}{\left( 1 - \frac{2m}{r} - \frac{\varepsilon^2}{2 r^2}\right)} dr^2 - r^2 (d\theta^2 + \sin^2 \theta d\varphi^2) + \left(1 - \frac{2m}{r} - \frac{\varepsilon^2}{2 r^2}\right) dt^2,
$$
where $\varepsilon$ is the electric charge.

The field equations without denominators in the case when $m = 0$ can be written
$$
\varphi_{\mu \nu} = \varphi_{\mu,\nu} - \varphi_{\nu,\mu},
$$
$$
g^2 \varphi_{\mu\nu;\sigma}g^{\nu\sigma} = 0,
$$
$$
g^2 \left(R_{ik} + \varphi_{i\alpha}\varphi_k^\alpha - \frac{1}{4} g_{ik} \varphi_{\alpha\beta}\varphi^{\alpha\beta}\right) = 0.
$$

In order to eliminate singularities, if one replaces $r$ by $u$ according to the equation
$$
u^2 = r^2 - \frac{\varepsilon^2}{2}
$$
and with $m = 0$ one obtains
$$
\varphi_1 = \varphi_2 = \varphi_3 = 0, \qquad \varphi_4 = \frac{\varepsilon}{\left( u^2 + \frac{\varepsilon^2}{2} \right)^{1/2}},
$$
$$
ds^2 = - du^2 - \left(u^2 + \frac{\varepsilon^2}{2}\right)(d \theta^2 + \sin^2 \theta d\varphi^2) + \left(\frac{2 u^2}{2 u^2 + \varepsilon^2}\right) dt^2 .
$$

Wormholes are a common element in science fiction because they allow interstellar, intergalactic, and sometimes even interuniversal travel within human lifetime scales. In fiction, wormholes have also served as a method for time travel.

diff --git a/wiki/wikipedia/3442.txt b/wiki/wikipedia/3442.txt
deleted file mode 100644
index f6c7e9f10d2fb0fdb869c467b6f956be372dbcd5..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3442.txt
+++ /dev/null
@@ -1,71 +0,0 @@
Cederbaum's theorem defines hypothetical analog electrical networks which will automatically produce a solution to the minimum s–t cut problem. Alternatively, simulation of such a network will also produce a solution to the minimum s–t cut problem. This article gives basic definitions, a statement of the theorem and a proof of the theorem. The presentation in this article closely follows the presentation of the theorem in the original publication.

Definitions in this article are consistent in all respects with those given in a discussion of the maximum-flow minimum-cut theorem.

Cederbaum's theorem applies to a particular type of directed graph: G = (V, E). V is the set of nodes. $E \subseteq V \times V$ is the set of directed edges.

A positive weight is associated with each edge: wab: E → R+. Two of the nodes must be s and t: $s \in V$ and $t \in V$.

Flow, f : E → R+, is a positive quantity associated with each edge in the graph. Flow is constrained by the weight of the associated edge and by the conservation of flow at each vertex.

Current is defined as a map for each edge pair to the real numbers, iab : Ep → R. Current maps from the voltage to a range that is determined by the weights of the respective forward and reverse edges. Each edge pair is the tuple consisting of the forward and reverse edges for a given pair of vertices. The current in the forward and reverse directions between a pair of nodes are the additive inverses of one another: iab = −iba. Current is conserved at each interior node in the network.
The net current at the $s$ and $t$ nodes is non-zero. The net current at the $s$ node is defined as the input current. For the set of neighbors of the node $s$, $N_s$:
$$
I_{in} = \sum_{b \in N_s}i_{sb}
$$

Voltage is defined as a mapping from the set of edge pairs to real numbers, vab : Ep → R. Voltage is directly analogous to electrical voltage in an electrical network. The voltage in the forward and reverse directions between a pair of nodes are the additive inverses of one another: vab = −vba. The input voltage is the sum of the voltages over a set of edges, $P_{ab}$, that form a path between the $s$ and $t$ nodes.
$$
V_{in} = \sum_{(a,b) \in P_{ab}}v_{ab}
$$

An s–t cut is a partition of the graph into two parts each containing one of either $s$ or $t$. Where $S \cup T = V$, $s \in S$, $t \in T$, the s–t cut is $(S, T)$. The s–t cut set is the set of edges that start in $S$ and end in $T$. The minimum s–t cut is the s–t cut whose cut set has the minimum weight. Formally, the cut set is defined as:
$$
X_C = \{ (a, b) \in E \mid a \in S, b \in T \}
$$

An electrical network is a model that is derived from a flow graph. Each resistive element in the electrical network corresponds to an edge pair in the flow graph. The positive and negative terminals of the electrical network are the nodes corresponding to the $s$ and $t$ terminals of the graph, respectively. The voltage state of the model becomes binary in the limit as the input voltage difference approaches $\infty$. The behavior of the electrical network is defined by Kirchhoff's voltage and current laws. Voltages add to zero around all closed loops and currents add to zero at all nodes.

A resistive element in the context of this theorem is a component of the electrical network that corresponds to an edge pair in the flow graph.

The $iv$ characteristic is the relationship between current and voltage. The requirements are:

(i) Current and voltage are continuous functions with respect to one another.
    - -(ii) Current and voltage are non-decreasing functions with respect to one another.
(iii) The range of the current is limited by the weights of the forward and reverse edges corresponding to the resistive element. The current range may be inclusive or exclusive of the endpoints. The domain of the voltage is exclusive of the maximum and minimum currents:

iab : R → [−wab,wba]   or   (−wab,wba]   or   [−wab,wba)   or   (−wab,wba)

vab : (−wab,wba) → R

The limit of the current Iin between the input terminals of the electrical network, as the input voltage Vin approaches $\infty$, is equal to the weight of the minimum cut set XC:
$$
\lim_{V_{in} \rightarrow \infty} (I_{in})= \min_{X_C}\sum_{(a,b) \in X_C}w_{ab} .
$$

Claim 1: Current at any resistive element in the electrical network, in either direction, is always less than or equal to the maximum flow at the corresponding edge in the graph. Therefore, the maximum current through the electrical network is less than or equal to the weight of the minimum cut of the flow graph:
$$
\lim_{V_{in} \rightarrow \infty} (I_{in}) \leq \min_{X_C}\sum_{(a,b) \in X_C}w_{ab} .
$$

Claim 2: As the input voltage $V_{in}$ approaches infinity, there exists at least one cut set $X'_C$ such that the voltage across the cut set approaches infinity:
$$
\exists X'_C \forall (a,b) \in X'_C \lim_{V_{in} \rightarrow \infty}(v_{ab}) = \infty
$$

This implies that:
$$
\lim_{V_{in} \rightarrow \infty} (I_{in}) = \sum_{(a,b) \in X'_C}w_{ab} \geq \min_{X_C}\sum_{(a,b) \in X_C}w_{ab}.
$$

Given claims 1 and 2 above:
$$
\lim_{V_{in} \rightarrow \infty} (I_{in}) = \min_{X_C}\sum_{(a,b) \in X_C}w_{ab} .
$$

The existence and uniqueness of a solution to the equations of an electrical network composed of monotone resistive elements was established by Duffin.

Cederbaum's maximum flow theorem is the basis for the Simcut algorithm.

diff --git a/wiki/wikipedia/3443.txt b/wiki/wikipedia/3443.txt
deleted file mode 100644
index d6a4661f9f68868c81bf0b01b0500ab651fa405a..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3443.txt
+++ /dev/null
@@ -1,21 +0,0 @@
Petra Mutzel is a German computer scientist, a University Professor of computer science at the University of Bonn. Her research is in the areas of algorithm engineering, graph drawing and combinatorial optimization.

Mutzel earned a diploma in 1990 from the University of Augsburg, in mathematics with computer science. She then earned a doctorate in computer science from the University of Cologne in 1994 under the supervision of Michael Jünger, and her habilitation in 1999 from the Max Planck Institute for Informatics. She held a professorship at the Vienna University of Technology beginning in 1999, moving to the Technical University of Dortmund in 2004 and then to the University of Bonn in 2019.

In graph drawing, Mutzel has contributed to work on planarization, crossing minimization in layered graph drawing, and SPQR trees, and co-edited a book on graph drawing. She was both the program chair and organizational chair of the 9th International Symposium on Graph Drawing, in Vienna in 2001.

Mutzel's other contributions include works on the Ising model, steganography, and Steiner trees. In 2012, she was program committee co-chair of the Meeting on Algorithm Engineering and Experiments (ALENEX).
diff --git a/wiki/wikipedia/3444.txt b/wiki/wikipedia/3444.txt
deleted file mode 100644
index 4cd32f26acc8be66e3637f4a95be34e3218afbeb..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3444.txt
+++ /dev/null
@@ -1,5 +0,0 @@
In descriptive set theory and mathematical logic, Lusin's separation theorem states that if A and B are disjoint analytic subsets of a Polish space, then there is a Borel set C in the space such that A ⊆ C and B ∩ C = ∅. It is named after Nikolai Luzin, who proved it in 1927.

The theorem can be generalized to show that for each sequence (An) of disjoint analytic sets there is a sequence (Bn) of disjoint Borel sets such that An ⊆ Bn for each n.

An immediate consequence is Suslin's theorem, which states that if a set and its complement are both analytic, then the set is Borel.

diff --git a/wiki/wikipedia/3445.txt b/wiki/wikipedia/3445.txt
deleted file mode 100644
index 3ea940bff6b3933450964a3f90eecbe33bf158a7..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3445.txt
+++ /dev/null
@@ -1,31 +0,0 @@
Simpson's paradox, which also goes by several other names, is a phenomenon in probability and statistics in which a trend appears in several groups of data but disappears or reverses when the groups are combined. This result is often encountered in social-science and medical-science statistics, and is particularly problematic when frequency data is unduly given causal interpretations. The paradox can be resolved when confounding variables and causal relations are appropriately addressed in the statistical modeling. Edward H. Simpson first described this phenomenon in a technical paper in 1951, but the statisticians Karl Pearson (in 1899) and Udny Yule (in 1903) had mentioned similar effects earlier. The name Simpson's paradox was introduced by Colin R. Blyth in 1972. It is also referred to as Simpson's reversal, Yule–Simpson effect, amalgamation paradox, or reversal paradox.

One of the best-known examples of Simpson's paradox comes from a study of gender bias among graduate school admissions to the University of California, Berkeley. The admission figures for the fall of 1973 showed that men applying were more likely than women to be admitted, and the difference was so large that it was unlikely to be due to chance.

However, when examining the individual departments, it appeared that 6 out of 85 departments were significantly biased against men, while 4 were significantly biased against women. In total, the pooled and corrected data showed a "small but statistically significant bias in favor of women".

The table below shows the success rates (the term success rate here actually means the success proportion) and numbers of treatments for treatments involving both small and large kidney stones, where Treatment A includes open surgical procedures and Treatment B includes closed surgical procedures. The numbers in parentheses indicate the number of success cases over the total size of the group.

              Treatment A             Treatment B
Small stones  Group 1: 93% (81/87)    Group 2: 87% (234/270)
Large stones  Group 3: 73% (192/263)  Group 4: 69% (55/80)
Both          78% (273/350)           83% (289/350)

The paradoxical conclusion is that treatment A is more effective when used on small stones, and also when used on large stones, yet treatment B appears to be more effective when considering both sizes at the same time. In this example, the "lurking" variable (or confounding variable) causing the paradox is the size of the stones, which was not previously known to researchers to be important until its effects were included.

Which treatment is considered better is determined by which success ratio (successes/total) is larger.
The reversal of the inequality between the two ratios when considering the combined data, which creates Simpson's paradox, happens because two effects occur together:

# The sizes of the groups, which are combined when the lurking variable is ignored, are very different. Doctors tend to give cases with large stones the better treatment A, and the cases with small stones the inferior treatment B. Therefore, the totals are dominated by groups 3 and 2, and not by the two much smaller groups 1 and 4.

# The lurking variable, stone size, has a large effect on the ratios; i.e., the success rate is more strongly influenced by the severity of the case than by the choice of treatment. Therefore, the group of patients with large stones using treatment A (group 3) does worse than the group with small stones, even if the latter used the inferior treatment B (group 2).

Based on these effects, the paradoxical result is seen to arise because the effect of the size of the stones overwhelms the benefits of the better treatment (A). In short, the less effective treatment B appeared to be more effective because it was applied more frequently to the small stones cases, which were easier to treat.

In both 1995 and 1996, the baseball player David Justice had a higher batting average than Derek Jeter did. However, when the two baseball seasons are combined, Jeter shows a higher batting average than Justice. According to Ross, this phenomenon would be observed about once per year among the possible pairs of players.

Simpson's paradox can also be illustrated using a 2-dimensional vector space. A success rate of $\frac{p}{q}$ (i.e., successes/attempts) can be represented by a vector $\vec{A} = (q, p)$, with a slope of $\frac{p}{q}$. A steeper vector then represents a greater success rate. If two rates $\frac{p_1}{q_1}$ and $\frac{p_2}{q_2}$ are combined, as in the examples given above, the result can be represented by the sum of the vectors $(q_1, p_1)$ and $(q_2, p_2)$, which according to the parallelogram rule is the vector $(q_1 + q_2, p_1 + p_2)$, with slope $\frac{p_1 + p_2}{q_1 + q_2}$.

Simpson's paradox says that even if a vector $\vec{L}_1$ (in orange in the figure) has a smaller slope than another vector $\vec{B}_1$ (in blue), and $\vec{L}_2$ has a smaller slope than $\vec{B}_2$, the sum of the two vectors $\vec{L}_1 + \vec{L}_2$ can potentially still have a larger slope than the sum of the two vectors $\vec{B}_1 + \vec{B}_2$, as shown in the example. For this to occur one of the orange vectors must have a greater slope than one of the blue vectors (here $\vec{L}_2$ and $\vec{B}_1$), and these will generally be longer than the alternatively subscripted vectors – thereby dominating the overall comparison.

Simpson's paradox can also arise in correlations, in which two variables appear to have (say) a positive correlation towards one another, when in fact they have a negative correlation, the reversal having been brought about by a "lurking" confounder. Berman et al. give an example from economics, where a dataset suggests overall demand is positively correlated with price (that is, higher prices lead to more demand), in contradiction of expectation. Analysis reveals time to be the confounding variable: plotting both price and demand against time reveals the expected negative correlation over various periods, which then reverses to become positive if the influence of time is ignored by simply plotting demand against price.
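The arithmetic behind the kidney-stone reversal is small enough to check mechanically. The following Python sketch (an illustrative addition, using the group counts from the table above) reproduces it:

# (successes, total) per group, from the kidney-stone table above.
small = {"A": (81, 87),   "B": (234, 270)}   # groups 1 and 2
large = {"A": (192, 263), "B": (55, 80)}     # groups 3 and 4

def rate(s, t):
    return s / t

# Treatment A wins within each stratum...
assert rate(*small["A"]) > rate(*small["B"])    # 0.93 > 0.87
assert rate(*large["A"]) > rate(*large["B"])    # 0.73 > 0.69

# ...but B wins once the strata are pooled.
pooled = {k: (small[k][0] + large[k][0], small[k][1] + large[k][1])
          for k in "AB"}
assert rate(*pooled["A"]) < rate(*pooled["B"])  # 0.78 < 0.83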
- -Psychological interest in Simpson's paradox seeks to explain why people deem sign reversal to be impossible at first, offended by the idea that an action preferred both under one condition and under its negation should be rejected when the condition is unknown. The question is where people get this strong intuition from, and how it is encoded in the mind. - -Simpson's paradox demonstrates that this intuition cannot be derived from either classical logic or probability calculus alone, and thus led philosophers to speculate that it is supported by an innate causal logic that guides people in reasoning about actions and their consequences. Savage's sure-thing principle is an example of what such logic may entail. A qualified version of Savage's sure thing principle can indeed be derived from Pearl's do-calculus and reads: "An action A that increases the probability of an event B in each subpopulation Ci of C must also increase the probability of B in the population as a whole, provided that the action does not change the distribution of the subpopulations." This suggests that knowledge about actions and consequences is stored in a form resembling Causal Bayesian Networks. - -A paper by Pavlides and Perlman presents a proof, due to Hadjicostas, that in a random 2 × 2 × 2 table with uniform distribution, Simpson's paradox will occur with a probability of exactly 1/60. A study by Kock suggests that the probability that Simpson's paradox would occur at random in path models (i.e., models generated by path analysis) with two predictors and one criterion variable is approximately 12.8 percent; slightly higher than 1 occurrence per 8 path models. - -A second, less well-known paradox was also discussed in Edward H. Simpson's 1951 paper. It can occur when the "sensible interpretation" is not necessarily found in the separated data, like in Simpson's paradox, but can instead reside in the combined data. Which form of the data that should be used hinges on the background and the process giving rise to the data, meaning the correct interpretation of the data cannot always be determined by simply observing the tables. diff --git a/wiki/wikipedia/3446.txt b/wiki/wikipedia/3446.txt deleted file mode 100644 index c8cb29947c781028257a72292c77cb26fe02b1db..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3446.txt +++ /dev/null @@ -1,13 +0,0 @@ -In mathematics, the Riemann–von Mangoldt formula, named for Bernhard Riemann and Hans Carl Friedrich von Mangoldt, describes the distribution of the zeros of the Riemann zeta function. - -The formula states that the number N(T) of zeros of the zeta function with imaginary part greater than 0 and less than or equal to T satisfies -$$ -N(T)=\frac{T}{2\pi}\log{\frac{T}{2\pi}}-\frac{T}{2\pi}+O(\log{T}). -$$ - -The formula was stated by Riemann in his notable paper "On the Number of Primes Less Than a Given Magnitude" (1859) and was finally proved by Mangoldt in 1905. - -Backlund gives an explicit form of the error for all T > 2: -$$ -\left\vert{ N(T) - \left({\frac{T}{2\pi}\log{\frac{T}{2\pi}}-\frac{T}{2\pi} } - \frac{7}{8}\right)}\right\vert < 0.137 \log T + 0.443 \log\log T + 4.350 \ . -$$ diff --git a/wiki/wikipedia/3447.txt b/wiki/wikipedia/3447.txt deleted file mode 100644 index 3d79c96365e3567150643f53dbee9ee4178e97f2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3447.txt +++ /dev/null @@ -1,5 +0,0 @@ -The Geometry of Interaction (GoI) was introduced by Jean-Yves Girard shortly after his work on linear logic. 
In linear logic, proofs can be seen as various kinds of networks as opposed to the flat tree structures of sequent calculus. To distinguish the real proof nets from all the possible networks, Girard devised a criterion involving trips in the network. Trips can in fact be seen as some kind of operator acting on the proof. Drawing from this observation, Girard described this operator directly from the proof and gave a formula, the so-called execution formula, encoding the process of cut elimination at the level of operators. - -One of the first significant applications of GoI was a better analysis of Lamping's algorithm for optimal reduction for the lambda calculus. GoI had a strong influence on game semantics for linear logic and PCF. - -GoI has been applied to deep compiler optimisation for lambda calculi. A bounded version of GoI dubbed the Geometry of Synthesis has been used to compile higher-order programming languages directly into static circuits. diff --git a/wiki/wikipedia/3448.txt b/wiki/wikipedia/3448.txt deleted file mode 100644 index 9b2e7d67fadb6a127b29931effbe0fa98fd89497..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3448.txt +++ /dev/null @@ -1,13 +0,0 @@ -The Kleitman–Wang algorithms are two different algorithms in graph theory solving the digraph realization problem, i.e. the question of whether, for a finite list of nonnegative integer pairs, there exists a simple directed graph whose degree sequence is exactly this list. For a positive answer the list of integer pairs is called digraphic. Both algorithms construct a special solution if one exists, or prove that none exists. These constructions are based on recursive algorithms. Kleitman and Wang gave these algorithms in 1973. - -The first algorithm is based on the following theorem. - -Let $S=((a_1,b_1),\dots,(a_n,b_n))$ be a finite list of pairs of nonnegative integers that is in nonincreasing lexicographical order and let $(a_i,b_i)$ be a pair of nonnegative integers with $b_i >0$. List $S$ is digraphic if and only if the finite list $S'=((a_1-1,b_1),\dots,(a_{b_i-1}-1,b_{b_i-1}),(a_{b_i},0),(a_{b_i+1},b_{b_i+1}),(a_{b_i+2},b_{b_i+2}),\dots,(a_n,b_n))$ has nonnegative integer pairs and is digraphic. - -Note that the pair $(a_i,b_i)$ may be chosen arbitrarily, with the exception of pairs $(a_j,0)$. If the given list $S$ is digraphic, then the theorem will be applied at most $n$ times, setting in each further step $S:=S'$. This process ends when the whole list $S'$ consists of $(0,0)$ pairs. In each step of the algorithm one constructs the arcs of a digraph with vertices $v_1,\dots,v_n$, i.e. if it is possible to reduce the list $S$ to $S'$, then we add arcs $(v_i,v_1),(v_i,v_2),\dots,(v_{i},v_{b_i-1}),(v_i,v_{b_i+1})$. When the list $S$ cannot be reduced to a list $S'$ of nonnegative integer pairs in any step of this approach, the theorem proves that the list $S$ from the beginning is not digraphic. - -The second algorithm is based on the following theorem. - -Let $S=((a_1,b_1),\dots,(a_n,b_n))$ be a finite list of pairs of nonnegative integers such that $a_1 \geq a_2 \geq \cdots \geq a_n$ and let $(a_i,b_i)$ be a pair such that $(b_i,a_i)$ is maximal with respect to the lexicographical order among all pairs $(b_1,a_1),\dots,(b_n,a_n)$. List $S$ is digraphic if and only if the finite list $S'=((a_1-1,b_1),\cdots,(a_{b_i-1}-1,b_{b_i-1}),(a_{b_i},0),(a_{b_i+1},b_{b_i+1}),(a_{b_i+2},b_{b_i+2}),\dots,(a_n,b_n))$ has nonnegative integer pairs and is digraphic.
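The reduction step in these theorems lends itself to a direct implementation. The sketch below is a minimal, unoptimized reading of the first version, under the convention that in each step the chosen pair has its second component set to zero while the first components of the leading other pairs are decremented; the function name `is_digraphic` and the choice of always picking the first pair with positive second component are ours.

```python
def is_digraphic(pairs):
    """Test whether a list of (a, b) pairs is digraphic, following the
    Kleitman-Wang reduction (first version, as read from the theorem above)."""
    pairs = list(pairs)
    while True:
        pairs.sort(reverse=True)  # nonincreasing lexicographical order
        # choose a pair with positive second component (the theorem allows
        # any such pair; we take the first one)
        i = next((k for k, (a, b) in enumerate(pairs) if b > 0), None)
        if i is None:
            return all(a == 0 for a, _ in pairs)
        a_i, b_i = pairs[i]
        pairs[i] = (a_i, 0)
        targets = [k for k in range(len(pairs)) if k != i][:b_i]
        if len(targets) < b_i:
            return False  # not enough other vertices to receive the arcs
        for k in targets:
            a, b = pairs[k]
            if a == 0:
                return False  # reduction would leave a negative entry
            pairs[k] = (a - 1, b)

print(is_digraphic([(1, 1), (1, 1), (1, 1)]))  # True: a directed 3-cycle
print(is_digraphic([(1, 1)]))                  # False: would need a self-loop
```

Recording the arcs added in each step, instead of just updating the counts, would produce the realizing digraph itself.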
- -Note that, unlike in the first version, the list $S$ need not be in nonincreasing lexicographical order. If the given list $S$ is digraphic, then the theorem will be applied at most $n$ times, setting in each further step $S:=S'$. This process ends when the whole list $S'$ consists of $(0,0)$ pairs. In each step of the algorithm, one constructs the arcs of a digraph with vertices $v_1,\dots,v_n$, i.e. if it is possible to reduce the list $S$ to $S'$, then one adds arcs $(v_i,v_1),(v_i,v_2),\dots,(v_i,v_{b_i-1}),(v_i,v_{b_i+1})$. When the list $S$ cannot be reduced to a list $S'$ of nonnegative integer pairs in any step of this approach, the theorem proves that the list $S$ from the beginning is not digraphic. diff --git a/wiki/wikipedia/3449.txt b/wiki/wikipedia/3449.txt deleted file mode 100644 index b459ec54790dbec5a8b2c5861412490d85e5af62..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3449.txt +++ /dev/null @@ -1,55 +0,0 @@ -The Steiner tree problem, or minimum Steiner tree problem, named after Jakob Steiner, is an umbrella term for a class of problems in combinatorial optimization. While Steiner tree problems may be formulated in a number of settings, they all require an optimal interconnect for a given set of objects and a predefined objective function. One well-known variant, which is often used synonymously with the term Steiner tree problem, is the Steiner tree problem in graphs. Given an undirected graph with non-negative edge weights and a subset of vertices, usually referred to as terminals, the Steiner tree problem in graphs requires a tree of minimum weight that contains all terminals (but may include additional vertices). Further well-known variants are the Euclidean Steiner tree problem and the rectilinear minimum Steiner tree problem. - -The Steiner tree problem in graphs can be seen as a generalization of two other famous combinatorial optimization problems: the (non-negative) shortest path problem and the minimum spanning tree problem. If a Steiner tree problem in graphs contains exactly two terminals, it reduces to finding the shortest path. If, on the other hand, all vertices are terminals, the Steiner tree problem in graphs is equivalent to the minimum spanning tree problem. However, while both the non-negative shortest path and the minimum spanning tree problem are solvable in polynomial time, the decision variant of the Steiner tree problem in graphs is NP-complete (which implies that the optimization variant is NP-hard); in fact, the decision variant was among Karp's original 21 NP-complete problems. The Steiner tree problem in graphs has applications in circuit layout or network design. However, practical applications usually require variations, giving rise to a multitude of Steiner tree problem variants. - -Most versions of the Steiner tree problem are NP-hard, but some restricted cases can be solved in polynomial time. Despite the pessimistic worst-case complexity, several Steiner tree problem variants, including the Steiner tree problem in graphs and the rectilinear Steiner tree problem, can be solved efficiently in practice, even for large-scale real-world problems. - -The original problem was stated in the form that has become known as the Euclidean Steiner tree problem or geometric Steiner tree problem: Given N points in the plane, the goal is to connect them by lines of minimum total length in such a way that any two points may be interconnected by line segments either directly or via other points and line segments.
It may be shown that the connecting line segments do not intersect each other except at the endpoints and form a tree, hence the name of the problem. - -The problem for N = 3 was considered long ago, and it was quickly extended to the problem of finding a star network with a single hub connecting to all of the N given points, of minimum total length. - -However, although the full Steiner tree problem was formulated in a letter by Gauss, its first serious treatment was in a 1934 paper written in Czech by Vojtěch Jarník and Miloš Kössler. This paper was long overlooked, but it already contains "virtually all general properties of Steiner trees" later attributed to other researchers, including the generalization of the problem from the plane to higher dimensions. - -For the Euclidean Steiner problem, points added to the graph (Steiner points) must have a degree of three, and the three edges incident to such a point must form three 120 degree angles (see Fermat point). It follows that the maximum number of Steiner points that a Steiner tree can have is N - 2, where N is the initial number of given points. - -For N = 3 there are two possible cases: if the triangle formed by the given points has all angles less than 120 degrees, the solution is given by a Steiner point located at the Fermat point; otherwise the solution is given by the two sides of the triangle which meet at the angle of 120 degrees or more. - -For general N, the Euclidean Steiner tree problem is NP-hard, and hence no polynomial-time algorithm for finding an optimal solution is known. However, there is a polynomial-time approximation scheme (PTAS) for Euclidean Steiner trees, i.e., a near-optimal solution can be found in polynomial time. It is not known whether the Euclidean Steiner tree problem is NP-complete, since membership in the complexity class NP is not known. - -The rectilinear Steiner tree problem is a variant of the geometric Steiner tree problem in the plane, in which the Euclidean distance is replaced with the rectilinear distance. The problem arises in physical design, a stage of electronic design automation. In VLSI circuits, wire routing is often constrained by design rules to use only vertical and horizontal directions, so the rectilinear Steiner tree problem can be used to model the routing of nets with more than two terminals. - -Steiner trees have been extensively studied in the context of weighted graphs. The prototype is, arguably, the Steiner tree problem in graphs. Let G = (V, E) be an undirected graph with non-negative edge weights c and let S ⊆ V be a subset of vertices, called terminals. A Steiner tree is a tree in G that spans S. There are two versions of the problem: in the optimization problem associated with Steiner trees, the task is to find a minimum-weight Steiner tree; in the decision problem the edge weights are integers and the task is to determine whether a Steiner tree exists whose total weight does not exceed a predefined natural number k. The decision problem is one of Karp's 21 NP-complete problems; hence the optimization problem is NP-hard. Steiner tree problems in graphs are applied to various problems in research and industry, including multicast routing and bioinformatics. - -A special case of this problem is when G is a complete graph, each vertex v ∈ V corresponds to a point in a metric space, and the edge weights w(e) for each e ∈ E correspond to distances in the space. Put otherwise, the edge weights satisfy the triangle inequality.
This variant is known as the metric Steiner tree problem. Given an instance of the (non-metric) Steiner tree problem, we can transform it in polynomial time into an equivalent instance of the metric Steiner tree problem; the transformation preserves the approximation factor. - -While the Euclidean version admits a PTAS, it is known that the metric Steiner tree problem is APX-complete, i.e., unless P = NP, it is impossible to achieve approximation ratios that are arbitrarily close to 1 in polynomial time. There is a polynomial-time algorithm that approximates the minimum Steiner tree to within a factor of $\ln(4) + \varepsilon\approx1.386$; - -however, approximating within a factor $96/95\approx 1.0105$ is NP-hard. For the restricted case of the Steiner tree problem with distances 1 and 2, a 1.25-approximation algorithm is known. Marek Karpinski and Alexander Zelikovsky constructed a PTAS for dense instances of the Steiner tree problem. - -In a special case of the graph problem, the Steiner tree problem for quasi-bipartite graphs, S is required to include at least one endpoint of every edge in G. - -The Steiner tree problem has also been investigated in higher dimensions and on various surfaces. Algorithms to find the Steiner minimal tree have been found on the sphere, torus, projective plane, wide and narrow cones, and others. - -Other generalizations of the Steiner tree problem are the k-edge-connected Steiner network problem and the k-vertex-connected Steiner network problem, where the goal is to find a k-edge-connected graph or a k-vertex-connected graph rather than any connected graph. - -The Steiner problem has also been stated in the general setting of metric spaces and for possibly infinitely many points. - -The general graph Steiner tree problem can be approximated by computing the minimum spanning tree of the subgraph of the metric closure of the graph induced by the terminal vertices, as first published in 1981 by Kou et al. The metric closure of a graph G is the complete graph in which each edge is weighted by the shortest path distance between the nodes in G. This algorithm produces a tree whose weight is within a 2 − 2/t factor of the weight of the optimal Steiner tree, where t is the number of leaves in the optimal Steiner tree; this can be proven by considering a traveling salesperson tour on the optimal Steiner tree. This approximate solution is computable in O(|S| |V|²) polynomial time by first solving the all-pairs shortest paths problem to compute the metric closure, then by solving the minimum spanning tree problem. - -Another popular algorithm to approximate the Steiner tree in graphs was published by Takahashi and Matsuyama in 1980. Their solution incrementally builds up the Steiner tree by starting from an arbitrary vertex, and repeatedly adding the shortest path from the tree to the nearest vertex in S that has not yet been added. This algorithm also has O(|S| |V|²) running time, and produces a tree whose weight is within a factor of 2 − 2/|S| of optimal. - -In 1986, Wu et al. improved dramatically on the running time by avoiding precomputation of the all-pairs shortest paths. Instead, they take a similar approach to Kruskal's algorithm for computing a minimum spanning tree, by starting from a forest of |S| disjoint trees, and "growing" them simultaneously using a breadth-first search resembling Dijkstra's algorithm but starting from multiple initial vertices. When the search encounters a vertex that does not belong to the current tree, the two trees are merged into one.
This process is repeated until only one tree remains. By using a heap to implement the priority queue and a disjoint-set data structure to track to which tree each visited vertex belongs, this algorithm achieves O(|E| log |V|) running time, although it does not improve on the 2 − 2/t cost ratio from Kou et al. - -A series of papers provided approximation algorithms for the minimum Steiner tree problem with approximation ratios that improved upon the 2 − 2/t ratio. This sequence culminated with Robins and Zelikovsky's algorithm in 2000, which improved the ratio to 1.55 by iteratively improving upon the minimum cost terminal spanning tree. More recently, however, Jaroslaw Byrka et al. proved a $\ln(4) + \varepsilon \le 1.39$ approximation ratio using a linear programming relaxation and a technique called iterative, randomized rounding. - -The general graph Steiner tree problem is known to be fixed-parameter tractable, with the number of terminals as a parameter, by the Dreyfus-Wagner algorithm. The running time of the Dreyfus-Wagner algorithm is $3^{|S|} \text{poly}(n)$, where $n$ is the number of vertices of the graph and $S$ is the set of terminals. Faster algorithms exist, running in $c^{|S|} \text{poly}(n)$ time for any $c > 2$ or, in the case of small weights, $2^{|S|} \text{poly}(n) W$ time, where $W$ is the maximum weight of any edge. A disadvantage of the aforementioned algorithms is that they use exponential space; there exist polynomial-space algorithms running in $2^{|S|} \text{poly}(n) W$ time and $(7.97)^{|S|} \text{poly}(n) \log W$ time. - -It is known that the general graph Steiner tree problem does not have a parameterized algorithm running in $2^{\epsilon t} \text{poly}(n)$ time for any $\epsilon < 1$, where $t$ is the number of edges of the optimal Steiner tree, unless the set cover problem has an algorithm running in $2^{\epsilon n} \text{poly}(m)$ time for some $\epsilon < 1$, where $n$ and $m$ are the number of elements and the number of sets, respectively, of the instance of the set cover problem. - -Furthermore, it is known that the problem does not admit a polynomial kernel unless $\text{coNP} \subseteq \text{NP/poly}$, even when parameterized by the number of edges of the optimal Steiner tree and all edge weights are 1. - -The Steiner ratio is the supremum of the ratio of the total length of the minimum spanning tree to the total length of the minimum Steiner tree for a set of points in the Euclidean plane. - -In the Euclidean Steiner tree problem, the Steiner ratio is conjectured to be $\tfrac{2}{\sqrt{3}}\approx 1.1547$, the ratio that is achieved by three points in an equilateral triangle with a spanning tree that uses two sides of the triangle and a Steiner tree that connects the points through the centroid of the triangle. Despite earlier claims of a proof, the conjecture is still open. The best widely accepted upper bound for the problem is 1.2134, due to Chung and Graham. - -For the rectilinear Steiner tree problem, the Steiner ratio is exactly $\tfrac{3}{2}$, the ratio that is achieved by four points in a square with a spanning tree that uses three sides of the square and a Steiner tree that connects the points through the center of the square. More precisely, for $L_1$ distance the square should be tilted at $45^{\circ}$ with respect to the coordinate axes, while for $L_{\infty}$ distance the square should be axis-aligned.
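To make the metric-closure 2-approximation in the style of Kou et al. described earlier concrete, here is a minimal sketch (the function names are ours, Dijkstra runs per terminal stand in for a full all-pairs computation, and the graph is assumed connected; this changes constants but not the 2 − 2/t guarantee):

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src in a weighted graph {u: {v: w}}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def steiner_2approx_weight(adj, terminals):
    """Weight of an MST (Prim) of the metric closure restricted to terminals.

    Expanding each closure edge back into a shortest path (and pruning any
    cycles) yields a Steiner tree within 2 - 2/t of optimal.
    """
    closure = {t: dijkstra(adj, t) for t in terminals}
    in_tree, rest = {terminals[0]}, set(terminals[1:])
    total = 0.0
    while rest:
        w, t = min((closure[u][v], v) for u in in_tree for v in rest)
        in_tree.add(t)
        rest.remove(t)
        total += w
    return total

# Example: a path a-b-c-d with unit weights; terminals {a, d} give weight 3.
g = {"a": {"b": 1}, "b": {"a": 1, "c": 1}, "c": {"b": 1, "d": 1},
     "d": {"c": 1}}
print(steiner_2approx_weight(g, ["a", "d"]))  # 3.0
```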
diff --git a/wiki/wikipedia/345.txt b/wiki/wikipedia/345.txt deleted file mode 100644 index 417411c99c8f9af46081fd45ce2097c0551481c4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/345.txt +++ /dev/null @@ -1,73 +0,0 @@ -In mathematics, the Brascamp–Lieb inequality is either of two inequalities. The first is a result in geometry concerning integrable functions on n-dimensional Euclidean space $\mathbb{R}^{n}$. It generalizes the Loomis–Whitney inequality and Hölder's inequality. The second is a result of probability theory which gives a concentration inequality for log-concave probability distributions. Both are named after Herm Jan Brascamp and Elliott H. Lieb. - -Fix natural numbers m and n. For 1 ≤ i ≤ m, let ni ∈ N and let ci > 0 so that -$$ -\sum_{i = 1}^m c_i n_i = n. -$$ - -Choose non-negative, integrable functions -$$ -f_i \in L^1 \left( \mathbb{R}^{n_i} ; [0, + \infty] \right) -$$ - -and surjective linear maps -$$ -B_i : \mathbb{R}^n \to \mathbb{R}^{n_i}. -$$ - -Then the following inequality holds: -$$ -\int_{\mathbb{R}^n} \prod_{i = 1}^m f_i \left( B_i x \right)^{c_i} \mathrm{d} x \leq D^{- 1/2} \prod_{i = 1}^m \left( \int_{\mathbb{R}^{n_i}} f_i (y) \mathrm{d} y \right)^{c_i}, -$$ - -where D is given by -$$ -D = \inf \left\{ \left. \frac{\det \left( \sum_{i = 1}^m c_i B_i^{*} A_i B_i \right)}{\prod_{i = 1}^m ( \det A_i )^{c_i}} \right| A_i \text{ is a positive-definite } n_i \times n_i \text{ matrix} \right\}. -$$ - -Another way to state this is that the constant D is what one would obtain by restricting attention to the case in which each $f_{i}$ is a centered Gaussian function, namely $f_{i}(y) = \exp \{-(y, A_{i} y)\}$. - -The geometric Brascamp–Lieb inequality is a special case of the above, and was used by Keith Ball, in 1989, to provide upper bounds for volumes of central sections of cubes. - -For i = 1, ..., m, let ci > 0 and let ui ∈ Sn−1 be a unit vector; suppose that ci and ui satisfy -$$ -x = \sum_{i = 1}^m c_i (x \cdot u_i) u_i -$$ - -for all x in Rn. Let fi ∈ L1(R; [0, +∞]) for each i = 1, ..., m. Then -$$ -\int_{\mathbb{R}^n} \prod_{i = 1}^m f_i (x \cdot u_i)^{c_i} \mathrm{d} x \leq \prod_{i = 1}^m \left( \int_{\mathbb{R}} f_i (y) \mathrm{d} y \right)^{c_i}. -$$ - -The geometric Brascamp–Lieb inequality follows from the Brascamp–Lieb inequality as stated above by taking ni = 1 and Bi(x) = x · ui. Then, for zi ∈ R, -$$ -B_i^{*} (z_i) = z_i u_i. -$$ - -It follows that D = 1 in this case. - -As another special case, take ni = n, Bi = id, the identity map on $\mathbb{R}^{n}$, replacing fi by $f_i^{1/c_i}$, and let ci = 1 / pi for 1 ≤ i ≤ m. Then -$$ -\sum_{i = 1}^m \frac{1}{p_i} = 1 -$$ - -and the log-concavity of the determinant of a positive definite matrix implies that D = 1. This yields Hölder's inequality in $\mathbb{R}^{n}$: -$$ -\int_{\mathbb{R}^n} \prod_{i = 1}^m f_{i} (x) \mathrm{d} x \leq \prod_{i = 1}^{m} \| f_i \|_{p_i}. -$$ - -Consider a probability density function $p(x)=\exp(-\phi(x))$. This probability density function $p(x)$ is said to be a log-concave measure if the function $\phi(x)$ is convex. Such probability density functions have tails which decay exponentially fast, so most of the probability mass resides in a small region around the mode of $p(x)$. The Brascamp–Lieb inequality gives another characterization of the concentration of $p(x)$ by bounding the variance of any statistic $S(x)$. - -Formally, let $S(x)$ be any differentiable function.
The Brascamp–Lieb inequality reads: -$$ - \operatorname{var}_p (S(x)) \leq E_p (\nabla^T S(x) [H \phi(x)]^{-1} \nabla S(x)) -$$ - -where $H \phi(x)$ is the Hessian of $\phi$ and $\nabla$ is the gradient operator. - -The Brascamp–Lieb inequality is an extension of the Poincaré inequality, which concerns only Gaussian probability distributions. - -The Brascamp–Lieb inequality is also related to the Cramér–Rao bound. While Brascamp–Lieb is an upper bound, the Cramér–Rao bound lower-bounds the variance $\operatorname{var}_p (S(x))$. The expressions are almost identical: -$$ - \operatorname{var}_p (S(x)) \geq E_p (\nabla^T S(x) ) [ E_p( H \phi(x) )]^{-1} E_p( \nabla S(x) ) -$$. diff --git a/wiki/wikipedia/3450.txt b/wiki/wikipedia/3450.txt deleted file mode 100644 index a87de2138d173b9ac6a07dc9283b0ef73f263646..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3450.txt +++ /dev/null @@ -1,11 +0,0 @@ -In mathematics, the Brauer–Nesbitt theorem can refer to several different theorems proved by Richard Brauer and Cecil J. Nesbitt in the representation theory of finite groups. - -In modular representation theory, the Brauer–Nesbitt theorem on blocks of defect zero states that a character whose degree is divisible by the highest power of a prime p dividing the order of a finite group remains irreducible when reduced mod p and vanishes on all elements whose order is divisible by p. Moreover, it belongs to a block of defect zero. A block of defect zero contains only one ordinary character and only one modular character. - -Another version states that if k is a field of characteristic zero, A is a k-algebra, V, W are semisimple A-modules which are finite dimensional over k, and TrV = TrW as elements of Homk(A,k), then V and W are isomorphic as A-modules. - -Let $G$ be a group and $E$ be some field. If $\rho_i:G\to GL_n(E),i=1,2$ are two finite-dimensional semisimple representations such that the characteristic polynomials of $\rho_1(g)$ and $\rho_2(g)$ coincide for all $g\in G$, then $\rho_1$ and $\rho_2$ are isomorphic representations. If $char(E)=0$ or $char(E)>n$, then the condition on the characteristic polynomials can be changed to the condition that the traces of $\rho_1(g)$ and $\rho_2(g)$ coincide for all $g\in G$. - -As a consequence, let $\rho:Gal(K^s/K)\to GL_n(\overline{\mathbb{Q}}_l)$ be a semisimple (continuous) $l$-adic representation of the absolute Galois group of some field $K$, unramified outside some finite set of primes $S\subset M_K$. Then the representation is uniquely determined by the values of the traces of $\rho(Frob_p)$ for $p\in M_K^0-S$ (also using the Chebotarev density theorem). diff --git a/wiki/wikipedia/3451.txt b/wiki/wikipedia/3451.txt deleted file mode 100644 index 97940220fb889599c04edad8b49527ea48e77a4a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3451.txt +++ /dev/null @@ -1,49 +0,0 @@ -In probability theory, Lévy's continuity theorem, or Lévy's convergence theorem, named after the French mathematician Paul Lévy, connects convergence in distribution of the sequence of random variables with pointwise convergence of their characteristic functions. - -This theorem is the basis for one approach to prove the central limit theorem, and it is one of the major theorems concerning characteristic functions.
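As a quick numerical illustration of the kind of pointwise convergence the theorem concerns (a sketch; the uniform example, the grid of t values, and the use of numpy are our choices): the exact characteristic functions of standardized sums of i.i.d. uniform random variables converge pointwise to $e^{-t^2/2}$, the characteristic function of the standard normal, and the theorem stated below upgrades this to convergence in distribution, i.e. the central limit theorem for this family.

```python
import numpy as np

# X_k ~ Uniform[-1, 1] has characteristic function sin(t)/t and variance 1/3.
# The standardized sum S_n = (X_1 + ... + X_n) / sqrt(n/3) has characteristic
# function phi_n(t) = (sin(u)/u)^n with u = t * sqrt(3/n).
t = np.linspace(0.1, 3.0, 6)
target = np.exp(-t**2 / 2)          # characteristic function of N(0, 1)
for n in (1, 4, 16, 64, 256):
    u = t * np.sqrt(3.0 / n)
    phi_n = (np.sin(u) / u) ** n
    print(n, float(np.max(np.abs(phi_n - target))))  # error shrinks with n
```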
- -Suppose we have - -# a sequence of random variables $\{X_n\}_{n=1}^\infty$, not necessarily sharing a common probability space, - -# the sequence of corresponding characteristic functions $\{\varphi_n\}_{n=1}^\infty$, which by definition are -$$ -\varphi_n(t) = \operatorname{E} \left[ e^{itX_n} \right] \quad \forall t\in\mathbb{R},\ \forall n\in\mathbb{N}, -$$ -where $\operatorname{E}$ is the expected value operator. - -If the sequence of characteristic functions converges pointwise to some function $\varphi$ -$$ -\varphi_n(t)\to\varphi(t) \quad \forall t\in\mathbb{R}, -$$ - -then the following statements become equivalent: - -# $X_n$ converges in distribution to some random variable X -$$ -X_n\ \xrightarrow{\mathcal D}\ X, -$$ -i.e. the cumulative distribution functions corresponding to random variables converge at every continuity point of the c.d.f. of X; - -# $\{X_n\}_{n=1}^\infty$ is tight: -$$ -\lim_{x\to\infty}\left( \sup_n \operatorname{P}\big[ |X_n|>x \big]\right) = 0; -$$ - -# $\varphi(t)$ is a characteristic function of some random variable X; - -# $\varphi(t)$ is a continuous function of t; - -# $\varphi(t)$ is continuous at t = 0. - -Rigorous proofs of this theorem are available. diff --git a/wiki/wikipedia/3452.txt b/wiki/wikipedia/3452.txt deleted file mode 100644 index 5a1177da65d3dc31443252fdbf27458564362296..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3452.txt +++ /dev/null @@ -1,33 +0,0 @@ -In mathematics, Hilbert's program, formulated by German mathematician David Hilbert in the early part of the 20th century, was a proposed solution to the foundational crisis of mathematics, when early attempts to clarify the foundations of mathematics were found to suffer from paradoxes and inconsistencies. As a solution, Hilbert proposed to ground all existing theories on a finite, complete set of axioms, and provide a proof that these axioms were consistent. Hilbert proposed that the consistency of more complicated systems, such as real analysis, could be proven in terms of simpler systems. Ultimately, the consistency of all of mathematics could be reduced to basic arithmetic. - -Gödel's incompleteness theorems, published in 1931, showed that Hilbert's program was unattainable for key areas of mathematics. In his first theorem, Gödel showed that any consistent system with a computable set of axioms which is capable of expressing arithmetic can never be complete: it is possible to construct a statement that can be shown to be true, but that cannot be derived from the formal rules of the system. In his second theorem, he showed that such a system could not prove its own consistency, so it certainly cannot be used to prove the consistency of anything stronger with certainty. This refuted Hilbert's assumption that a finitistic system could be used to prove the consistency of itself, and therefore anything else. - -The main goal of Hilbert's program was to provide secure foundations for all mathematics. In particular this should include: - -*A formulation of all mathematics; in other words all mathematical statements should be written in a precise formal language, and manipulated according to well defined rules. - -*Completeness: a proof that all true mathematical statements can be proved in the formalism. - -*Consistency: a proof that no contradiction can be obtained in the formalism of mathematics.
This consistency proof should preferably use only "finitistic" reasoning about finite mathematical objects. - -*Conservation: a proof that any result about "real objects" obtained using reasoning about "ideal objects" (such as uncountable sets) can be proved without using ideal objects. - -*Decidability: there should be an algorithm for deciding the truth or falsity of any mathematical statement. - -Kurt Gödel showed that most of the goals of Hilbert's program were impossible to achieve, at least if interpreted in the most obvious way. Gödel's second incompleteness theorem shows that any consistent theory powerful enough to encode addition and multiplication of integers cannot prove its own consistency. This presents a challenge to Hilbert's program: - -*It is not possible to formalize all true mathematical statements within a formal system, as any attempt at such a formalism will omit some true mathematical statements. There is no complete, consistent extension of even Peano arithmetic based on a recursively enumerable set of axioms. - -*A theory such as Peano arithmetic cannot even prove its own consistency, so a restricted "finitistic" subset of it certainly cannot prove the consistency of more powerful theories such as set theory. - -*There is no algorithm to decide the truth (or provability) of statements in any consistent extension of Peano arithmetic. Strictly speaking, this negative solution to the Entscheidungsproblem appeared a few years after Gödel's theorem, because at the time the notion of an algorithm had not been precisely defined. - -Many current lines of research in mathematical logic, such as proof theory and reverse mathematics, can be viewed as natural continuations of Hilbert's original program. Much of it can be salvaged by changing its goals slightly (Zach 2005), and with the following modifications some of it was successfully completed: - -*Although it is not possible to formalize all mathematics, it is possible to formalize essentially all the mathematics that anyone uses. In particular Zermelo–Fraenkel set theory, combined with first-order logic, gives a satisfactory and generally accepted formalism for almost all current mathematics. - -*Although it is not possible to prove completeness for systems that can express at least Peano arithmetic (or, more generally, that have a computable set of axioms), it is possible to prove forms of completeness for many other interesting systems. An example of a non-trivial theory for which completeness has been proved is the theory of algebraically closed fields of given characteristic. - -*The question of whether there are finitary consistency proofs of strong theories is difficult to answer, mainly because there is no generally accepted definition of a "finitary proof". Most mathematicians in proof theory seem to regard finitary mathematics as being contained in Peano arithmetic, and in this case it is not possible to give finitary proofs of reasonably strong theories. On the other hand, Gödel himself suggested the possibility of giving finitary consistency proofs using finitary methods that cannot be formalized in Peano arithmetic, so he seems to have had a more liberal view of what finitary methods might be allowed. A few years later, Gentzen gave a consistency proof for Peano arithmetic. The only part of this proof that was not clearly finitary was a certain transfinite induction up to the ordinal ε0.
If this transfinite induction is accepted as a finitary method, then one can assert that there is a finitary proof of the consistency of Peano arithmetic. More powerful subsets of second-order arithmetic have been given consistency proofs by Gaisi Takeuti and others, and one can again debate about exactly how finitary or constructive these proofs are. (The theories that have been proved consistent by these methods are quite strong, and include most "ordinary" mathematics.) - -*Although there is no algorithm for deciding the truth of statements in Peano arithmetic, there are many interesting and non-trivial theories for which such algorithms have been found. For example, Tarski found an algorithm that can decide the truth of any statement in analytic geometry (more precisely, he proved that the theory of real closed fields is decidable). Given the Cantor–Dedekind axiom, this algorithm can be regarded as an algorithm to decide the truth of any statement in Euclidean geometry. This is a substantial result, as few people would consider Euclidean geometry a trivial theory. diff --git a/wiki/wikipedia/3453.txt b/wiki/wikipedia/3453.txt deleted file mode 100644 index 60d4346dbcea57095dcd5c2f1f22e021d214adf3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3453.txt +++ /dev/null @@ -1,67 +0,0 @@ -In propositional logic, the commutativity of conjunction is a valid argument form and truth-functional tautology. It is considered to be a law of classical logic. It is the principle that the conjuncts of a logical conjunction may switch places with each other, while preserving the truth-value of the resulting proposition. - -Commutativity of conjunction can be expressed in sequent notation as: -$$ -(P \land Q) \vdash (Q \land P) -$$ - -and -$$ -(Q \land P) \vdash (P \land Q) -$$ - -where $\vdash$ is a metalogical symbol meaning that $(Q \land P)$ is a syntactic consequence of $(P \land Q)$, in the one case, and $(P \land Q)$ is a syntactic consequence of $(Q \land P)$ in the other, in some logical system; - -or in rule form: -$$ -\frac{P \land Q}{\therefore Q \land P} -$$ - -and -$$ -\frac{Q \land P}{\therefore P \land Q} -$$ - -where the rule is that wherever an instance of "$(P \land Q)$" appears on a line of a proof, it can be replaced with "$(Q \land P)$" and wherever an instance of "$(Q \land P)$" appears on a line of a proof, it can be replaced with "$(P \land Q)$"; - -or as the statement of a truth-functional tautology or theorem of propositional logic: -$$ -(P \land Q) \to (Q \land P) -$$ - -and -$$ -(Q \land P) \to (P \land Q) -$$ - -where $P$ and $Q$ are propositions expressed in some formal system. - -For any propositions H1, H2, ... Hn, and any permutation σ of the numbers 1 through n, it is the case that: - -H1 $\land$ H2 $\land$ ... $\land$ Hn - -is equivalent to - -Hσ(1) $\land$ Hσ(2) $\land$ ... $\land$ Hσ(n). - -For example, if H1 is - -It is raining - -H2 is - -Socrates is mortal - -and H3 is - -2+2=4 - -then - -It is raining and Socrates is mortal and 2+2=4 - -is equivalent to - -Socrates is mortal and 2+2=4 and it is raining - -and likewise for the other orderings of the conjuncts. diff --git a/wiki/wikipedia/3454.txt b/wiki/wikipedia/3454.txt deleted file mode 100644 index 87ce62f85513bf2a4459fc0a2dc10c2820fe80ff..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3454.txt +++ /dev/null @@ -1,43 +0,0 @@ -In mathematical logic and philosophy, Skolem's paradox is a seeming contradiction that arises from the downward Löwenheim–Skolem theorem.
Thoralf Skolem (1922) was the first to discuss the seemingly contradictory aspects of the theorem, and to discover the relativity of set-theoretic notions now known as non-absoluteness. Although it is not an actual antinomy like Russell's paradox, the result is typically called a paradox, and was described as a "paradoxical state of affairs" by Skolem (1922: p. 295). - -Skolem's paradox is that every countable axiomatisation of set theory in first-order logic, if it is consistent, has a model that is countable. This appears contradictory because it is possible to prove, from those same axioms, a sentence that intuitively says (or that precisely says in the standard model of the theory) that there exist sets that are not countable. Thus the seeming contradiction is that a model that is itself countable, and which therefore contains only countable sets, satisfies the first order sentence that intuitively states "there are uncountable sets". - -A mathematical explanation of the paradox, showing that it is not a contradiction in mathematics, was given by Skolem (1922). Skolem's work was harshly received by Ernst Zermelo, who argued against the limitations of first-order logic, but the result quickly came to be accepted by the mathematical community. - -The philosophical implications of Skolem's paradox have received much study. One line of inquiry questions whether it is accurate to claim that any first-order sentence actually states "there are uncountable sets". This line of thought can be extended to question whether any set is uncountable in an absolute sense. More recently, the paper "Models and Reality" by Hilary Putnam, and responses to it, led to renewed interest in the philosophical aspects of Skolem's result. - -One of the earliest results in set theory, published by Georg Cantor in 1874, was the existence of uncountable sets, such as the powerset of the natural numbers, the set of real numbers, and the Cantor set. An infinite set X is countable if there is a function that gives a one-to-one correspondence between X and the natural numbers, and is uncountable if there is no such correspondence function. When Zermelo proposed his axioms for set theory in 1908, he proved Cantor's theorem from them to demonstrate their strength. - -Löwenheim (1915) and Skolem (1920, 1923) proved the Löwenheim–Skolem theorem. The downward form of this theorem shows that if a countable first-order axiomatisation is satisfied by any infinite structure, then the same axioms are satisfied by some countable structure. In particular, this implies that if the first order versions of Zermelo's axioms of set theory are satisfiable, they are satisfiable in some countable model. The same is true of any consistent first order axiomatisation of set theory. - -Skolem (1922) pointed out the seeming contradiction between the Löwenheim–Skolem theorem on the one hand, which implies that there is a countable model of Zermelo's axioms, and Cantor's theorem on the other hand, which states that uncountable sets exist, and which is provable from Zermelo's axioms. "So far as I know," Skolem writes, "no one has called attention to this peculiar and apparently paradoxical state of affairs. By virtue of the axioms we can prove the existence of higher cardinalities... How can it be, then, that the entire domain B [a countable model of Zermelo's axioms] can already be enumerated by means of the finite positive integers?" (Skolem 1922, p. 
295, translation by Bauer-Mengelberg) - -More specifically, let B be a countable model of Zermelo's axioms. Then there is some set u in B such that B satisfies the first-order formula saying that u is uncountable. For example, u could be taken as the set of real numbers in B. Now, because B is countable, there are only countably many elements c such that c ∈ u according to B, because there are only countably many elements c in B to begin with. Thus it appears that u should be countable. This is Skolem's paradox. - -Skolem went on to explain why there was no contradiction. In the context of a specific model of set theory, the term "set" does not refer to an arbitrary set, but only to a set that is actually included in the model. The definition of countability requires that a certain one-to-one correspondence, which is itself a set, must exist. Thus it is possible to recognise that a particular set u is countable, but not countable in a particular model of set theory, because there is no set in the model that gives a one-to-one correspondence between u and the natural numbers in that model. - -From an interpretation of the model into our conventional notions of these sets, this means that although u maps to an uncountable set, there are many elements in our intuitive notion of u that don't have a corresponding element in the model. The model, however, is consistent, because the absence of these elements cannot be observed through first-order logic. With u as the reals, these missing elements would correspond to undefinable numbers. - -Skolem used the term "relative" to describe this state of affairs, where the same set is included in two models of set theory, is countable in one model, and is not countable in the other model. He described this as the "most important" result in his paper. Contemporary set theorists describe concepts that do not depend on the choice of a transitive model as absolute. From their point of view, Skolem's paradox simply shows that countability is not an absolute property in first order logic. (Kunen 1980 p. 141; Enderton 2001 p. 152; Burgess 1977 p. 406). - -Skolem described his work as a critique of (first-order) set theory, intended to illustrate its weakness as a foundational system: - -"I believed that it was so clear that axiomatisation in terms of sets was not a satisfactory ultimate foundation of mathematics that mathematicians would, for the most part, not be very much concerned with it. But in recent times I have seen to my surprise that so many mathematicians think that these axioms of set theory provide the ideal foundation for mathematics; therefore it seemed to me that the time had come for a critique." (Ebbinghaus and van Dalen, 2000, p. 147) - -A central goal of early research into set theory was to find a first-order axiomatisation for set theory which was categorical, meaning that the axioms would have exactly one model, consisting of all sets. Skolem's result showed this is not possible, creating doubts about the use of set theory as a foundation of mathematics. It took some time for the theory of first-order logic to be developed enough for mathematicians to understand the cause of Skolem's result; no resolution of the paradox was widely accepted during the 1920s. Fraenkel (1928) still described the result as an antinomy: - -"Neither have the books yet been closed on the antinomy, nor has agreement on its significance and possible solution yet been reached." (van Dalen and Ebbinghaus, 2000, p. 147). 
- -In 1925, von Neumann presented a novel axiomatisation of set theory, which developed into NBG set theory. Very much aware of Skolem's 1922 paper, von Neumann investigated countable models of his axioms in detail. In his concluding remarks, Von Neumann comments that there is no categorical axiomatisation of set theory, or any other theory with an infinite model. Speaking of the impact of Skolem's paradox, he wrote, - -"At present we can do no more than note that we have one more reason here to entertain reservations about set theory and that for the time being no way of rehabilitating this theory is known."(Ebbinghaus and van Dalen, 2000, p. 148) - -Zermelo at first considered the Skolem paradox a hoax (van Dalen and Ebbinghaus, 2000, p. 148 ff.), and spoke against it starting in 1929. Skolem's result applies only to what is now called first-order logic, but Zermelo argued against the finitary metamathematics that underlie first-order logic (Kanamori 2004, p. 519 ff.). Zermelo argued that his axioms should instead be studied in second-order logic, a setting in which Skolem's result does not apply. Zermelo published a second-order axiomatisation in 1930 and proved several categoricity results in that context. Zermelo's further work on the foundations of set theory after Skolem's paper led to his discovery of the cumulative hierarchy and formalisation of infinitary logic (van Dalen and Ebbinghaus, 2000, note 11). - -Fraenkel et al. (1973, pp. 303-304) explain why Skolem's result was so surprising to set theorists in the 1920s. Gödel's completeness theorem and the compactness theorem were not proved until 1929. These theorems illuminated the way that first-order logic behaves and established its finitary nature, although Gödel's original proof of the completeness theorem was complicated. Leon Henkin's alternative proof of the completeness theorem, which is now a standard technique for constructing countable models of a consistent first-order theory, was not presented until 1947. Thus, in 1922, the particular properties of first-order logic that permit Skolem's paradox to go through were not yet understood. It is now known that Skolem's paradox is unique to first-order logic; if set theory is studied using higher-order logic with full semantics then it does not have any countable models, due to the semantics being used. - -Current mathematical logicians do not view Skolem's paradox as any sort of fatal flaw in set theory. Kleene (1967, p. 324) describes the result as "not a paradox in the sense of outright contradiction, but rather a kind of anomaly". After surveying Skolem's argument that the result is not contradictory, Kleene concludes "there is no absolute notion of countability." Hunter (1971, p. 208) describes the contradiction as "hardly even a paradox". Fraenkel et al. (1973, p. 304) explain that contemporary mathematicians are no more bothered by the lack of categoricity of first-order theories than they are bothered by the conclusion of Gödel's incompleteness theorem that no consistent, effective, and sufficiently strong set of - -first-order axioms is complete. - -Countable models of ZF have become common tools in the study of set theory. Forcing, for example, is often explained in terms of countable models. The fact that these countable models of ZF still satisfy the theorem that there are uncountable sets is not considered a pathology; van Heijenoort (1967) describes it as "a novel and unexpected feature of formal systems." (van Heijenoort 1967, p. 
290) diff --git a/wiki/wikipedia/3455.txt b/wiki/wikipedia/3455.txt deleted file mode 100644 index 98b3beced2b090ed1c615f6e09be44628b7b42df..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3455.txt +++ /dev/null @@ -1,19 +0,0 @@ -In mathematical analysis, Korn's inequality is an inequality concerning the gradient of a vector field that generalizes the following classical theorem: if the gradient of a vector field is skew-symmetric at every point, then the gradient must be equal to a constant skew-symmetric matrix. Korn's theorem is a quantitative version of this statement, which intuitively says that if the gradient of a vector field is on average not far from the space of skew-symmetric matrices, then the gradient cannot be far from a particular skew-symmetric matrix. The statement that Korn's inequality generalizes thus arises as a special case of rigidity. - -In (linear) elasticity theory, the symmetric part of the gradient is a measure of the strain that an elastic body experiences when it is deformed by a given vector-valued function. The inequality is therefore an important tool as an a priori estimate in linear elasticity theory. - -Let Ω be an open, connected domain in n-dimensional Euclidean space Rn, n ≥ 2. Let H1(Ω) be the Sobolev space of all vector fields v = (v1, ..., vn) on Ω that, along with their (first) weak derivatives, lie in the Lebesgue space L2(Ω). Denoting the partial derivative with respect to the ith component by ∂i, the norm in H1(Ω) is given by -$$ -\| v \|_{H^{1} (\Omega)} := \left( \int_{\Omega} \sum_{i = 1}^{n} | v^{i} (x) |^{2} \mathrm{d} x+\int_{\Omega} \sum_{i, j = 1}^{n} | \partial_{j} v^{i} (x) |^{2} \mathrm{d} x \right)^{1/2}. -$$ - -Then there is a constant C ≥ 0, known as the Korn constant of Ω, such that, for all v ∈ H1(Ω), -$$ -\| v \|_{H^{1} (\Omega)}^{2} \leq C \int_{\Omega} \sum_{i, j = 1}^{n} \left( | v^{i} (x) |^{2} + | (e_{ij} v) (x) |^{2} \right) \mathrm{d} x, -$$ - -where e denotes the symmetrized gradient given by -$$ -e_{ij} v = \frac1{2} ( \partial_{i} v^{j} + \partial_{j} v^{i} ). -$$ - -This inequality is known as Korn's inequality. diff --git a/wiki/wikipedia/3456.txt b/wiki/wikipedia/3456.txt deleted file mode 100644 index 35a14e5ba336b21972539927bf9e9efe73795ba6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3456.txt +++ /dev/null @@ -1,40 +0,0 @@ -At the 1912 International Congress of Mathematicians, Edmund Landau listed four basic problems about prime numbers. These problems were characterised in his speech as "unattackable at the present state of mathematics" and are now known as Landau's problems. They are as follows: - -# Goldbach's conjecture: Can every even integer greater than 2 be written as the sum of two primes? - -# Twin prime conjecture: Are there infinitely many primes p such that p + 2 is prime? - -# Legendre's conjecture: Does there always exist at least one prime between consecutive perfect squares? - -# Are there infinitely many primes p such that p − 1 is a perfect square? In other words: Are there infinitely many primes of the form $n^2 + 1$? - -All four problems remain unresolved. - -Goldbach's weak conjecture, that every odd number greater than 5 can be expressed as the sum of three primes, is a consequence of Goldbach's conjecture. Ivan Vinogradov proved it for large enough n (Vinogradov's theorem) in 1937, and Harald Helfgott extended this to a full proof of Goldbach's weak conjecture in 2013.
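Goldbach's conjecture is easy to probe empirically. The following sketch (the function names are ours) sieves the primes and looks for an even number in a given range that is not a sum of two primes; no counterexample is known.

```python
def prime_sieve(n):
    """Sieve of Eratosthenes: is_prime[k] == 1 iff k is prime."""
    is_prime = bytearray([1]) * (n + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if is_prime[p]:
            is_prime[p * p::p] = bytearray(len(range(p * p, n + 1, p)))
    return is_prime

def goldbach_counterexample(limit):
    """Return the first even number in [4, limit] that is not a sum of two
    primes, or None if there is none (as the conjecture predicts)."""
    is_prime = prime_sieve(limit)
    for n in range(4, limit + 1, 2):
        if not any(is_prime[p] and is_prime[n - p]
                   for p in range(2, n // 2 + 1)):
            return n
    return None

print(goldbach_counterexample(10_000))  # None
```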
- -Chen's theorem, another weakening of Goldbach's conjecture, proves that for all sufficiently large n, $2n=p+q$ where p is prime and q is either prime or semiprime. In 2015, Tomohiro Yamada proved an explicit version of Chen's theorem: every even number greater than $e^{e^{36}} \approx 1.7\cdot10^{1872344071119348}$ is the sum of a prime and a product of at most two primes. - -Montgomery and Vaughan showed that the exceptional set of even numbers not expressible as the sum of two primes has density zero, although the set has not been proven to be finite. The best current bound on the exceptional set is $E(x) < x^{0.72}$ (for large enough x) due to Pintz. - -Linnik proved that every large enough even number can be expressed as the sum of two primes and at most K powers of 2, for some (ineffective) constant K. Following many advances (see Pintz for an overview), Pintz and Ruzsa improved this to K = 8. - -Yitang Zhang showed that there are infinitely many prime pairs with gap bounded by 70 million, and this result has been improved to gaps of length 246 by a collaborative effort of the Polymath Project. Under the generalized Elliott–Halberstam conjecture this was improved to 6, extending earlier work by Maynard and Goldston, Pintz & Yıldırım. - -Chen showed that there are infinitely many primes p (later called Chen primes) such that p + 2 is either a prime or a semiprime. - -For Legendre's conjecture, it suffices to check that each prime gap starting at p is smaller than $2 \sqrt p$. A table of maximal prime gaps shows that the conjecture holds up to $2^{64} \approx 1.8\cdot10^{19}$. A counterexample near that size would require a prime gap a hundred million times the size of the average gap. - -Matomäki shows that there are at most $x^{1/6}$ exceptional primes followed by gaps larger than $\sqrt{2p}$; in particular, -$$ -\sum_{\stackrel{p_{n+1}-p_n > x^{1/2}}{x \leq p_n \leq 2x}}p_{n+1}-p_n\ll x^{2/3}. -$$ - -A result due to Ingham shows that there is a prime between $n^3$ and $(n+1)^3$ for every large enough n. - -Landau's fourth problem asked whether there are infinitely many primes which are of the form $p=n^2+1$ for integer n. The existence of infinitely many such primes would follow as a consequence of other number-theoretic conjectures such as the Bunyakovsky conjecture and the Bateman–Horn conjecture. The problem remains open. - -One example of near-square primes is the Fermat primes. Henryk Iwaniec showed that there are infinitely many numbers of the form $n^2+1$ with at most two prime factors. Nesmith Ankeny proved that, assuming the extended Riemann hypothesis for L-functions on Hecke characters, there are infinitely many primes of the form $x^2+y^2$ with $y=O(\log x)$. Landau's conjecture is the stronger statement that $y=1$ suffices. - -Merikoski, improving on previous work, showed that there are infinitely many numbers of the form $n^2+1$ with greatest prime factor at least $n^{1.279}$. Replacing the exponent with 2 would yield Landau's conjecture. - -The Brun sieve establishes an upper bound on the density of primes having the form $p=n^2+1$: there are $O(\sqrt x/\log x)$ such primes up to $x$. It then follows that almost all numbers of the form $n^2+1$ are composite.
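The fourth problem can likewise be explored numerically. A minimal sketch (trial-division primality and the cutoffs are our choices) counts the primes of the form $n^2+1$ below a few bounds; the counts keep growing in this range, and Landau's conjecture asserts they grow without bound, while the Brun sieve bound above caps how fast they can grow.

```python
def is_prime(m):
    """Simple trial-division primality test; fine for small m."""
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

for bound in (10, 100, 1000):
    count = sum(1 for n in range(1, bound + 1) if is_prime(n * n + 1))
    print(bound, count)  # sparse, but still growing at these scales
```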
diff --git a/wiki/wikipedia/3457.txt b/wiki/wikipedia/3457.txt deleted file mode 100644 index 78081acb1896632f299b501e93304c401f3cab00..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3457.txt +++ /dev/null @@ -1,23 +0,0 @@ -In mathematics, the Schreier refinement theorem of group theory states that any two subnormal series of subgroups of a given group have equivalent refinements, where two series are equivalent if there is a bijection between their factor groups that sends each factor group to an isomorphic one. - -The theorem is named after the Austrian mathematician Otto Schreier who proved it in 1928. It provides an elegant proof of the Jordan–Hölder theorem. It is often proved using the Zassenhaus lemma. Baumslag gives a short proof by intersecting the terms in one subnormal series with those in the other series. - -Consider $\mathbb{Z}/(2) \times S_3$, where $S_3$ is the symmetric group of degree 3. The alternating group $A_3$ is a normal subgroup of $S_3$, so we have the two subnormal series -$$ -\{[0]\} \times \{\operatorname{id}\} \triangleleft \mathbb{Z}/(2) \times \{\operatorname{id}\} \triangleleft \mathbb{Z}/(2) \times S_3, -$$ -$$ -\{[0]\} \times \{\operatorname{id}\} \triangleleft \{[0]\} \times A_3 \triangleleft \mathbb{Z}/(2) \times S_3 -$$, - -with respective factor groups $(\mathbb{Z}/(2),S_3)$ and $(A_3,\mathbb{Z}/(2)\times\mathbb{Z}/(2))$. The two subnormal series are not equivalent, but they have equivalent refinements: -$$ -\{[0]\} \times \{\operatorname{id}\} \triangleleft \mathbb{Z}/(2) \times \{\operatorname{id}\} \triangleleft \mathbb{Z}/(2) \times A_3 \triangleleft \mathbb{Z}/(2) \times S_3 -$$ - -with factor groups isomorphic to $(\mathbb{Z}/(2), A_3, \mathbb{Z}/(2))$ and -$$ -\{[0]\} \times \{\operatorname{id}\} \triangleleft \{[0]\} \times A_3 \triangleleft \{[0]\} \times S_3 \triangleleft \mathbb{Z}/(2) \times S_3 -$$ - -with factor groups isomorphic to $(A_3, \mathbb{Z}/(2), \mathbb{Z}/(2))$. diff --git a/wiki/wikipedia/3458.txt b/wiki/wikipedia/3458.txt deleted file mode 100644 index 68d59a67cd51f06013eaf0fc0401da92bbb90675..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3458.txt +++ /dev/null @@ -1,33 +0,0 @@ -The arboricity of an undirected graph is the minimum number of forests into which its edges can be partitioned. Equivalently it is the minimum number of spanning forests needed to cover all the edges of the graph. The Nash-Williams theorem provides necessary and sufficient conditions for when a graph is k-arboric. - -Consider the complete bipartite graph K4,4: its edges can be partitioned into three forests. K4,4 cannot be partitioned into fewer forests, because any forest on its eight vertices has at most seven edges, while the overall graph has sixteen edges, more than double the number of edges in a single forest. Therefore, the arboricity of K4,4 is three. - -The arboricity of a graph is a measure of how dense the graph is: graphs with many edges have high arboricity, and graphs with high arboricity must have a dense subgraph. - -In more detail, as any n-vertex forest has at most n-1 edges, the arboricity of a graph with n vertices and m edges is at least $\lceil m/(n-1)\rceil$. Additionally, the subgraphs of any graph cannot have arboricity larger than the graph itself, or equivalently the arboricity of a graph must be at least the maximum arboricity of any of its subgraphs.
Nash-Williams proved that these two facts can be combined to characterize arboricity: if we let nS and mS denote the number of vertices and edges, respectively, of any subgraph S of the given graph, then the arboricity of the graph equals $\max_S\{\lceil m_S/(n_S-1)\rceil\}.$ - -Any planar graph with $n$ vertices has at most $3n-6$ edges, from which it follows by Nash-Williams' formula that planar graphs have arboricity at most three. Schnyder used a special decomposition of a planar graph into three forests called a Schnyder wood to find a straight-line embedding of any planar graph into a grid of small area. - -The arboricity of a graph can be expressed as a special case of a more general matroid partitioning problem, in which one wishes to express a set of elements of a matroid as a union of a small number of independent sets. As a consequence, the arboricity can be calculated by a polynomial-time algorithm. - -The anarboricity of a graph is the maximum number of edge-disjoint nonacyclic subgraphs into which the edges of the graph can be partitioned. - -The star arboricity of a graph is the minimum number of star forests (forests in which every tree is a star, i.e. a tree with at most one non-leaf node) into which the edges of the graph can be partitioned. If a tree is not a star itself, its star arboricity is two, as can be seen by partitioning the edges into two subsets at odd and even distances from the tree root respectively. Therefore, the star arboricity of any graph is at least equal to the arboricity, and at most equal to twice the arboricity. - -The linear arboricity of a graph is the minimum number of linear forests (collections of paths) into which the edges of the graph can be partitioned. The linear arboricity of a graph is closely related to its maximum degree and its slope number. - -The pseudoarboricity of a graph is the minimum number of pseudoforests into which its edges can be partitioned. Equivalently, it is the maximum ratio of edges to vertices in any subgraph of the graph, rounded up to an integer. As with the arboricity, the pseudoarboricity has a matroid structure allowing it to be computed efficiently. - -The thickness of a graph is the minimum number of planar subgraphs into which its edges can be partitioned. As any planar graph has arboricity at most three, the thickness of any graph is at least equal to a third of the arboricity, and at most equal to the arboricity. - -The degeneracy of a graph is the maximum, over all induced subgraphs of the graph, of the minimum degree of a vertex in the subgraph. The degeneracy of a graph with arboricity $a$ is at least equal to $a$, and at most equal to $2a-1$. The coloring number of a graph, also known as its Szekeres-Wilf number, is always equal to its degeneracy plus 1. - -The strength of a graph is a fractional value whose integer part gives the maximum number of disjoint spanning trees that can be drawn in a graph. It is the packing problem that is dual to the covering problem raised by the arboricity. The two parameters have been studied together by Tutte and Nash-Williams. - -The fractional arboricity is a refinement of the arboricity, as it is defined for a graph $G$ as $\max\{m_S/(n_S-1) \mid S \subseteq G\}.$ In other terms, the arboricity of a graph is the ceiling of the fractional arboricity. - -The (a,b)-decomposability generalizes the arboricity. A graph is $(a,b)$-decomposable if its edges can be partitioned into $a+1$ sets, each one of them inducing a forest, except one, which induces a graph with maximum degree $b$.
A graph with arboricity $a$ is $(a,0)$-decomposable. - -The tree number is the minimal number of trees covering the edges of a graph. - -Arboricity appears in the Goldberg–Seymour conjecture. diff --git a/wiki/wikipedia/3459.txt b/wiki/wikipedia/3459.txt deleted file mode 100644 index 65e6dd0ffe5fe52740f9ba3bd2648e51a6171f5b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3459.txt +++ /dev/null @@ -1,15 +0,0 @@ -The low birth-weight paradox is an apparently paradoxical observation relating to the birth weights and mortality rate of children born to tobacco smoking mothers. Low birth-weight children born to smoking mothers have a lower infant mortality rate than the low birth weight children of non-smokers. It is an example of Simpson's paradox. - -Traditionally, babies weighing less than a certain amount (which varies between countries) have been classified as having low birth weight. In a given population, low birth weight babies have a significantly higher mortality rate than others; thus, populations with a higher rate of low birth weights typically also have higher rates of child mortality than other populations. - -Based on prior research, the children of smoking mothers are more likely to be of low birth weight than children of non-smoking mothers. Thus, by extension the child mortality rate should be higher among children of smoking mothers. So it is a surprising real-world observation that low birth weight babies of smoking mothers have a lower child mortality than low birth weight babies of non-smokers. - -At first sight these findings seemed to suggest that, at least for some babies, having a smoking mother might be beneficial to one's health. However the paradox can be explained statistically by uncovering a lurking variable between smoking and the two key variables: birth weight and risk of mortality. Both variables are acted on independently by smoking and other adverse conditions — birth weight is lowered and the risk of mortality increases. However, each condition does not necessarily affect both variables to the same extent. - -The birth weight distribution for children of smoking mothers is shifted to lower weights by their mothers' actions. Therefore, otherwise healthy babies (who would weigh more if it were not for the fact their mother smoked) are born underweight. However, they still have a lower mortality rate than children who have other, more severe, medical reasons why they are born underweight. - -In short, smoking is harmful in that it contributes to low birth weight which has higher mortality than normal birth weight, but other causes of low birth weight are generally more harmful than smoking. - -If one corrects and adjusts for the confounding by smoking, via stratification or multivariable regression modelling to statistically control for smoking, one finds that the association between birth weight and mortality may be attenuated towards the null. Nevertheless, most epidemiologic studies of birth weight and mortality have controlled for maternal smoking, and the adjusted results, although attenuated after adjusting for smoking, still indicated a significant association. - -Additional support for the hypothesis that birth weight and mortality can be acted on independently came from the analysis of birth data from Colorado: compared with the birth weight distribution in the US as a whole, the distribution curve in Colorado is also shifted to lower weights. 
The overall child mortality of Colorado children, however, is the same as that for US children, and if one corrects for the lower weights as above, one finds that babies of a given (corrected) weight are just as likely to die, whether they are from Colorado or not. The likely explanation here is that the higher altitude of Colorado affects birth weight, but not mortality. diff --git a/wiki/wikipedia/346.txt deleted file mode 100644 index 093d51c231ecb4f67b2abf70582b30da8dd5fb86..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/346.txt +++ /dev/null @@ -1,40 +0,0 @@ -Whitehead's lemma is a technical result in abstract algebra used in algebraic K-theory. It states that a matrix of the form - -\begin{bmatrix} -u & 0 \\ -0 & u^{-1} \end{bmatrix} - -is equivalent to the identity matrix by elementary transformations (that is, transvections): - -\begin{bmatrix} -u & 0 \\ -0 & u^{-1} \end{bmatrix} = e_{21}(u^{-1}) e_{12}(1-u) e_{21}(-1) e_{12}(1-u^{-1}). - -Here, $e_{ij}(s)$ indicates the elementary matrix whose diagonal entries are $1$, whose $(i,j)$ entry is $s$, and whose remaining entries are $0$. - -The name "Whitehead's lemma" also refers to the closely related result that the derived group of the stable general linear group is the group generated by elementary matrices. In symbols, -$$ -\operatorname{E}(A) = [\operatorname{GL}(A),\operatorname{GL}(A)] -$$. - -This holds for the stable group (the direct limit of matrices of finite size) over any ring, but not in general for the unstable groups, even over a field. For instance for -$$ -\operatorname{GL}(2,\mathbb{Z}/2\mathbb{Z}) -$$ - -one has: -$$ -\operatorname{Alt}(3) \cong [\operatorname{GL}_2(\mathbb{Z}/2\mathbb{Z}),\operatorname{GL}_2(\mathbb{Z}/2\mathbb{Z})] < \operatorname{E}_2(\mathbb{Z}/2\mathbb{Z}) = \operatorname{SL}_2(\mathbb{Z}/2\mathbb{Z}) = \operatorname{GL}_2(\mathbb{Z}/2\mathbb{Z}) \cong \operatorname{Sym}(3), -$$ - -where Alt(3) and Sym(3) denote the alternating and symmetric group, respectively, on 3 letters.
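The factorization above is easy to check mechanically. Here is a minimal sketch using sympy (the helper names `e12` and `e21` are ours; we only assume the convention just stated, that $e_{ij}(s)$ has ones on the diagonal and $s$ in position $(i,j)$):

```python
import sympy as sp

u = sp.symbols('u', nonzero=True)  # a unit, kept symbolic

def e12(s):
    # transvection with entry s above the diagonal
    return sp.Matrix([[1, s], [0, 1]])

def e21(s):
    # transvection with entry s below the diagonal
    return sp.Matrix([[1, 0], [s, 1]])

product = e21(1/u) * e12(1 - u) * e21(-1) * e12(1 - 1/u)
print(sp.simplify(product))  # Matrix([[u, 0], [0, 1/u]])
```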
diff --git a/wiki/wikipedia/3460.txt deleted file mode 100644 index 5df65c542403387c405b55c66da4005996e61157..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3460.txt +++ /dev/null @@ -1,35 +0,0 @@ -The four-vertex theorem of geometry states that the curvature function of a simple, closed, smooth plane curve has at least four local extrema (specifically, at least two local maxima and at least two local minima). The name of the theorem derives from the convention of calling an extreme point of the curvature function a vertex. This theorem has many generalizations, including a version for space curves where a vertex is defined as a point of vanishing torsion. - -An ellipse has exactly four vertices: two local maxima of curvature where it is crossed by the major axis of the ellipse, and two local minima of curvature where it is crossed by the minor axis. In a circle, every point is both a local maximum and a local minimum of curvature, so there are infinitely many vertices. - -Every curve of constant width has at least six vertices. - -The four-vertex theorem was first proved for convex curves (i.e. curves with strictly positive curvature) in 1909 by Syamadas Mukhopadhyaya. His proof utilizes the fact that a point on the curve is an extremum of the curvature function if and only if the osculating circle at that point has fourth-order contact with the curve; in general the osculating circle has only third-order contact with the curve. The four-vertex theorem was proved for more general curves by Adolf Kneser in 1912 using a projective argument. - -For many years the proof of the four-vertex theorem remained difficult, but a simple and conceptual proof was given by Osserman, based on the idea of the minimum enclosing circle. This is a circle that contains the given curve and has the smallest possible radius. If the curve includes an arc of the circle, it has infinitely many vertices. Otherwise, the curve and circle must be tangent at at least two points, because a circle that touched the curve at fewer points could be reduced in size while still enclosing it. At each tangency, the curvature of the curve is greater than that of the circle, for otherwise the curve would continue from the tangency outside the circle rather than inside. However, between each pair of tangencies, the curvature must decrease to less than that of the circle, for instance at a point obtained by translating the circle until it no longer contains any part of the curve between the two points of tangency and considering the last point of contact between the translated circle and the curve. Therefore, there is a local minimum of curvature between each pair of tangencies, giving two of the four vertices. There must be a local maximum of curvature between each pair of local minima (not necessarily at the points of tangency), giving the other two vertices. - -The converse to the four-vertex theorem states that any continuous, real-valued function of the circle that has at least two local maxima and two local minima is the curvature function of a simple, closed plane curve. The converse was proved for strictly positive functions in 1971 by Herman Gluck as a special case of a general theorem on pre-assigning the curvature of n-spheres. The full converse to the four-vertex theorem was proved by Björn Dahlberg shortly before his death in January 1998, and published posthumously. Dahlberg's proof uses a winding number argument which is in some ways reminiscent of the standard topological proof of the Fundamental Theorem of Algebra. - -One corollary of the theorem is that a homogeneous, planar disk rolling on a horizontal surface under gravity has at least 4 balance points. A discrete version of this is that there cannot be a monostatic polygon. - -However, in three dimensions there do exist monostatic polyhedra, and there also exists a convex, homogeneous object with exactly 2 balance points (one stable, and the other unstable), the Gömböc. - -There are several discrete versions of the four-vertex theorem, both for convex and non-convex polygons. Here are some of them: - -* (Bilinski) The sequence of angles of a convex equilateral polygon with at least four vertices has at least four extrema. - -* The sequence of side lengths of a convex equiangular polygon with at least four sides has at least four extrema. - -* (Musin) A circle circumscribed around three consecutive vertices of a polygon with at least four vertices is called extremal if it contains all remaining vertices of the polygon, or has none of them in its interior. Such a convex polygon is generic if it has no four vertices on the same circle. Then every generic convex polygon with at least four vertices has at least four extremal circles. - -* (Legendre–Cauchy) Two convex n-gons with equal corresponding side lengths have either zero or at least 4 sign changes in the cyclic sequence of the corresponding angle differences. - -* (A.D.
Alexandrov) Two convex n-gons with parallel corresponding sides and equal area have either zero or at least 4 sign changes in the cyclic sequence of the differences of corresponding side lengths. - -Some of these variations are stronger than others, and all of them imply the (usual) four-vertex theorem by a limit argument. - -The stereographic projection from the sphere to the plane preserves critical points of geodesic curvature. Thus simple closed spherical curves have four vertices. Furthermore, on the sphere vertices of a curve correspond to points where its torsion vanishes. So for space curves a vertex is defined as a point of vanishing torsion. In 1994 V. D. Sedykh showed that every simple closed space curve which lies on the boundary of a convex body has four vertices. In 2017 Mohammad Ghomi generalized Sedykh's theorem to all curves which bound a locally convex disk. diff --git a/wiki/wikipedia/3461.txt deleted file mode 100644 index 462f7aab577e9deb1942f363eab913a57994181b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3461.txt +++ /dev/null @@ -1,5 +0,0 @@ -In geometry, the Pitot theorem, named after the French engineer Henri Pitot, states that in a tangential quadrilateral (i.e. one in which a circle can be inscribed) the two sums of lengths of opposite sides are the same. Both sums of lengths equal the semiperimeter of the quadrilateral. - -Henri Pitot proved his theorem in 1725, whereas the converse was proved by the Swiss mathematician Jakob Steiner in 1846. - -Pitot's theorem generalizes to tangential 2n-gons, in which case the two sums of alternate sides are equal. diff --git a/wiki/wikipedia/3462.txt deleted file mode 100644 index 9ed1481a3d773d813d049c45d8e541562dee0042..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3462.txt +++ /dev/null @@ -1,19 +0,0 @@ -In mathematics, Tait's conjecture states that "Every 3-connected planar cubic graph has a Hamiltonian cycle (along the edges) through all its vertices". It was proposed by P. G. Tait in 1884 and disproved by W. T. Tutte in 1946, who constructed a counterexample with 25 faces, 69 edges and 46 vertices. Several smaller counterexamples, with 21 faces, 57 edges and 38 vertices, were later proved minimal by Holton and McKay. - -The condition that the graph be 3-regular is necessary due to polyhedra such as the rhombic dodecahedron, which forms a bipartite graph with six degree-four vertices on one side and eight degree-three vertices on the other side; because any Hamiltonian cycle would have to alternate between the two sides of the bipartition, but they have unequal numbers of vertices, the rhombic dodecahedron is not Hamiltonian. - -The conjecture was significant, because if true, it would have implied the four color theorem: as Tait described, the four-color problem is equivalent to the problem of finding 3-edge-colorings of bridgeless cubic planar graphs. In a Hamiltonian cubic planar graph, such an edge coloring is easy to find: use two colors alternately on the cycle, and a third color for all remaining edges. Alternatively, a 4-coloring of the faces of a Hamiltonian cubic planar graph may be constructed directly, using two colors for the faces inside the cycle and two more colors for the faces outside. - -The key to this counterexample is what is now known as Tutte's fragment, shown on the right. - -If this fragment is part of a larger graph, then any Hamiltonian cycle through the graph must go in or out of the top vertex (and either one of the lower ones).
It cannot go in one lower vertex and out the other. - -The fragment can then be used to construct the non-Hamiltonian Tutte graph, by putting together three such fragments as shown on the picture. The "compulsory" edges of the fragments, that must be part of any Hamiltonian path through the fragment, are connected at the central vertex; because any cycle can use only two of these three edges, there can be no Hamiltonian cycle. - -The resulting Tutte graph is 3-connected and planar, so by Steinitz' theorem it is the graph of a polyhedron. In total it has 25 faces, 69 edges and 46 vertices. - -It can be realized geometrically from a tetrahedron (the faces of which correspond to the four large faces in the drawing, three of which are between pairs of fragments and the fourth of which forms the exterior) by multiply truncating three of its vertices. - -As Holton and McKay showed, there are exactly six 38-vertex non-Hamiltonian polyhedra that have nontrivial three-edge cuts. They are formed by replacing two of the vertices of a pentagonal prism by the same fragment used in Tutte's example. diff --git a/wiki/wikipedia/3463.txt deleted file mode 100644 index bca80d587ac8aa2a66668255c3a5c3466be1f8f3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3463.txt +++ /dev/null @@ -1,28 +0,0 @@ -In mathematics, the second neighborhood problem is an unsolved problem about oriented graphs posed by Paul Seymour. Intuitively, it suggests that in a social network described by such a graph, someone will have at least as many friends-of-friends as friends. - -The problem is also known as the second neighborhood conjecture or Seymour's distance two conjecture. - -An oriented graph is a finite directed graph obtained from a simple undirected graph by assigning an orientation to each edge. Equivalently, it is a directed graph that has no self-loops, no parallel edges, and no two-edge cycles. The first neighborhood of a vertex $v$ (also called its open neighborhood) consists of all vertices at distance one from $v$, and the second neighborhood of $v$ consists of all vertices at distance two from $v$. These two neighborhoods form disjoint sets, neither of which contains $v$ itself. - -In 1990, Paul Seymour conjectured that, in every oriented graph, there always exists at least one vertex $v$ whose second neighborhood is at least as large as its first neighborhood. Equivalently, in the square of the graph, the degree of $v$ is at least doubled. The problem was first published by Nathaniel Dean and Brenda J. Latka in 1995, in a paper that studied the problem on a restricted class of oriented graphs, the tournaments (orientations of complete graphs). Dean had previously conjectured that every tournament obeys the second neighborhood conjecture, and this special case became known as Dean's conjecture. - -A vertex in a directed graph whose second neighborhood is at least as large as its first neighborhood is called a Seymour vertex. - -In the second neighborhood conjecture, the condition that the graph have no two-edge cycles is necessary, for in graphs that have such cycles (for instance the complete oriented graph) all second neighborhoods may be empty or small. - -Fisher proved Dean's conjecture, the special case of the second neighborhood problem for tournaments. - -For some graphs, a vertex of minimum out-degree will be a Seymour vertex.
For instance, if a directed graph has a sink, a vertex of out-degree zero, then the sink is automatically a Seymour vertex, because its first and second neighborhoods both have size zero. In a graph without sinks, a vertex of out-degree one is always a Seymour vertex. In orientations of triangle-free graphs, any vertex $v$ of minimum out-degree is again a Seymour vertex, because for any edge from $v$ to another vertex $w$, the out-neighbors of $w$ all belong to the second neighborhood of $v$. - -For arbitrary graphs with higher vertex degrees, the vertices of minimum degree might not be Seymour vertices, but the existence of a low-degree vertex can still lead to the existence of a nearby Seymour vertex. Using this sort of reasoning, the second neighborhood conjecture has been proven to be true for any oriented graph that contains at least one vertex of out-degree ≤ 6. - -Random tournaments and random oriented graphs have many Seymour vertices with high probability. - -Every oriented graph has a vertex whose second neighborhood is at least $\gamma$ times as big as the first neighborhood, - -where -$$ -\gamma=\frac{1}{6}\left(-1+\sqrt[3]{53-6\sqrt{78}}+\sqrt[3]{53+6\sqrt{78}}\right) \approx 0.657 -$$ - -is the real root of the polynomial $2x^3+x^2-1$. diff --git a/wiki/wikipedia/3464.txt deleted file mode 100644 index 3b9272783a43bc676f9ebf94c98d585884cba2fb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3464.txt +++ /dev/null @@ -1,11 +0,0 @@ -Without loss of generality (often abbreviated to WOLOG, WLOG or w.l.o.g.; less commonly stated as without any loss of generality or with no loss of generality) is a frequently used expression in mathematics. The term is used to indicate that the assumption that follows is chosen arbitrarily, narrowing the premise to a particular case, but does not affect the validity of the proof in general. The other cases are sufficiently similar to the one presented that proving them follows by essentially the same logic. As a result, once a proof is given for the particular case, it is trivial to adapt it to prove the conclusion in all other cases. - -In many scenarios, the use of "without loss of generality" is made possible by the presence of symmetry. For example, if some property P(x,y) of real numbers is known to be symmetric in x and y, namely that P(x,y) is equivalent to P(y,x), then in proving that P(x,y) holds for every x and y, one may assume, "without loss of generality", that x ≤ y. There is no loss of generality in this assumption, since once the case x ≤ y ⇒ P(x,y) has been proved, the other case follows by interchanging x and y: y ≤ x ⇒ P(y,x), and by symmetry of P, this implies P(x,y), thereby showing that P(x,y) holds for all cases. - -On the other hand, if such a symmetry (or other form of equivalence) cannot be established, then the use of "without loss of generality" is incorrect and can amount to an instance of proof by example – a logical fallacy of proving a claim by proving a non-representative example. - -Consider the following theorem (which is a case of the pigeonhole principle): if three objects are each painted either red or blue, then there must be at least two objects of the same color. - -A proof: assume, without loss of generality, that the first object is red. If either of the other two objects is red, then we are finished; if not, then the other two objects must both be blue, and we are still finished. - -The above argument works because the exact same reasoning could be applied if the alternative assumption, namely, that the first object is blue, were made, or, similarly, that the words 'red' and 'blue' can be freely exchanged in the wording of the proof. As a result, the use of "without loss of generality" is valid in this case.
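Because the example above is finite, the claim behind this use of "without loss of generality" can also be checked exhaustively. A tiny Python sketch over all $2^3$ colorings:

```python
from itertools import product

# Exhaustive check of the pigeonhole statement behind the WLOG proof:
# among three objects each painted red or blue, two must share a color.
for coloring in product(["red", "blue"], repeat=3):
    assert len(set(coloring)) < 3  # only two colors exist, so some pair repeats
print("verified for all", 2**3, "colorings")
```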
diff --git a/wiki/wikipedia/3465.txt b/wiki/wikipedia/3465.txt deleted file mode 100644 index 705f5113ab115d0aeb0f98230c1a252bc0b876dd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3465.txt +++ /dev/null @@ -1,14 +0,0 @@ -In mathematics, the Farrell–Markushevich theorem, proved independently by O. J. Farrell (1899–1981) and A. I. Markushevich (1908–1979) in 1934, is a result concerning the approximation in mean square of holomorphic functions on a bounded open set in the complex plane by complex polynomials. It states that complex polynomials form a dense subspace of the Bergman space of a domain bounded by a simple closed Jordan curve. The Gram–Schmidt process can be used to construct an orthonormal basis in the Bergman space and hence an explicit form of the Bergman kernel, which in turn yields an explicit Riemann mapping function for the domain. - -Let Ω be the bounded Jordan domain and let Ωn be bounded Jordan domains decreasing to Ω, with Ωn containing the closure of Ωn + 1. By the Riemann mapping theorem there is a conformal mapping fn of Ωn onto Ω, normalised to fix a given point in Ω with positive derivative there. By the Carathéodory kernel theorem fn(z) converges uniformly on compacta in Ω to z. In fact Carathéodory's theorem implies that the inverse maps tend uniformly on compacta to z. Given a subsequence of fn, it has a subsequence, convergent on compacta in Ω. Since the inverse functions converge to z, it follows that the subsequence converges to z on compacta. Hence fn converges to z on compacta in Ω. - -As a consequence the derivative of fn tends to 1 uniformly on compacta. - -Let g be a square integrable holomorphic function on Ω, i.e. an element of the Bergman space A2(Ω). Define gn on Ωn by gn(z) = g(fn(z))fn'(z). By change of variable -$$ -\displaystyle{\|g_n\|^2_{\Omega_n} =\|g\|_\Omega^2.} -$$ - -Let hn be the restriction of gn to Ω. Then the norm of hn is less than that of gn. Thus these norms are uniformly bounded. Passing to a subsequence if necessary, it can therefore be assumed that hn has a weak limit in A2(Ω). On the other hand, hn tends uniformly on compacta - -to g. Since the evaluation maps are continuous linear functions on A2(Ω), g is the weak limit of hn. On the other hand, by Runge's theorem, hn lies in the closed subspace K of A2(Ω) generated by complex polynomials. Hence g lies in the weak closure of K, which is K itself. diff --git a/wiki/wikipedia/3466.txt b/wiki/wikipedia/3466.txt deleted file mode 100644 index 9810a360b64bf8d2df193da5679e76197efc6e5d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3466.txt +++ /dev/null @@ -1,15 +0,0 @@ -The fold-and-cut theorem states that any shape with straight sides can be cut from a single (idealized) sheet of paper by folding it flat and making a single straight complete cut. Such shapes include polygons, which may be concave, shapes with holes, and collections of such shapes (i.e. the regions need not be connected). - -The corresponding problem that the theorem solves is known as the fold-and-cut problem, which asks what shapes can be obtained by the so-called fold-and-cut method. A particular instance of the problem, which asks how a particular shape can be obtained by the fold-and-cut method, is known as a fold-and-cut problem. - -The earliest known description of a fold-and-cut problem appears in Wakoku Chiyekurabe (Mathematical Contests), a book that was published in 1721 by Kan Chu Sen in Japan. 
- -An 1873 article in Harper's New Monthly Magazine describes how Betsy Ross may have proposed that stars on the American flag have five points, because such a shape can easily be obtained by the fold-and-cut method. - -In the 20th century, several magicians published books containing examples of fold-and-cut problems, including Will Blyth, Harry Houdini, and Gerald Loe (1955). - -Inspired by Loe, Martin Gardner wrote about the fold-and-cut problems in Scientific American in 1960. Examples mentioned by Gardner include separating the red squares from the black squares of a checkerboard with one cut, and "an old paper-cutting stunt, of unknown origin" in which one cut splits a piece of paper into both a Latin cross and a set of smaller pieces that can be rearranged to spell the word "hell". Foreshadowing work on the general fold-and-cut theorem, he writes that "more complicated designs present formidable problems". - -The first proof of the fold-and-cut theorem, solving the problem, was published in 1999 by Erik Demaine, Martin Demaine, and Anna Lubiw. - -There are two general methods known for solving instances of the fold-and-cut problem, based on straight skeletons and on circle packing respectively. diff --git a/wiki/wikipedia/3467.txt b/wiki/wikipedia/3467.txt deleted file mode 100644 index a6ba2657a137c94fe800246cacc2541084597806..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3467.txt +++ /dev/null @@ -1,46 +0,0 @@ -The Sz.-Nagy dilation theorem (proved by Béla Szőkefalvi-Nagy) states that every contraction T on a Hilbert space H has a unitary dilation U to a Hilbert space K, containing H, with -$$ -T^n = P_H U^n \vert_H,\quad n\ge 0. -$$ - -Moreover, such a dilation is unique (up to unitary equivalence) when one assumes K is minimal, in the sense that the linear span of ∪nUnH is dense in K. When this minimality condition holds, U is called the minimal unitary dilation of T. - -For a contraction T (i.e., ($\|T\|\le1$), its defect operator DT is defined to be the (unique) positive square root DT = (I - T*T)½. In the special case that S is an isometry, DS* is a projector and DS=0, hence the following is an Sz. Nagy unitary dilation of S with the required polynomial functional calculus property: - -U = - -\begin{bmatrix} S & D_{S^*} \\ D_S & -S^* \end{bmatrix}. - - - -Returning to the general case of a contraction T, every contraction T on a Hilbert space H has an isometric dilation, again with the calculus property, on -$$ -\oplus_{n \geq 0} H -$$ - -given by - -S = - -\begin{bmatrix} T & 0 & 0 & \cdots & \\ D_T & 0 & 0 & & \\ 0 & I & 0 & \ddots \\ 0 & 0 & I & \ddots \\ \vdots & & \ddots & \ddots \end{bmatrix} - -. - -Substituting the S thus constructed into the previous Sz.-Nagy unitary dilation for an isometry S, one obtains a unitary dilation for a contraction T: - - - -T^n = P_H S^n \vert_H = P_H (Q_{H'} U \vert_{H'})^n \vert_H = P_H U^n \vert_H. - - - -The Schaffer form of a unitary Sz. Nagy dilation can be viewed as a beginning point for the characterization of all unitary dilations, with the required property, for a given contraction. - -A generalisation of this theorem, by Berger, Foias and Lebow, shows that if X is a spectral set for T, and -$$ -\mathcal{R}(X) -$$ - -is a Dirichlet algebra, then T has a minimal normal δX dilation, of the form above. A consequence of this is that any operator with a simply connected spectral set X has a minimal normal δX dilation. 
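The isometric dilation displayed above can be test-driven numerically for a scalar contraction. Below is a hedged sketch (the truncation size $N$ and the value $t = 0.7$ are illustrative choices of ours, and the truncated matrix fails to be an isometry only in its final column), verifying the compression property $T^n = P_H S^n \vert_H$ for small $n$:

```python
import numpy as np

# Truncated Schaffer-type isometric dilation of the scalar contraction T = t:
# first column (t, sqrt(1 - t^2), 0, ...), then identity blocks pushed down.
t, N = 0.7, 12
S = np.zeros((N, N))
S[0, 0] = t
S[1, 0] = np.sqrt(1 - t**2)   # defect D_T = (1 - T*T)^(1/2)
for i in range(2, N):
    S[i, i - 1] = 1.0

# Compressing powers of S to H (the first coordinate) recovers powers of T.
for n in range(6):
    assert np.isclose(np.linalg.matrix_power(S, n)[0, 0], t**n)
print("P_H S^n |_H = T^n verified for n = 0..5")
```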
- -To see that this generalises Sz.-Nagy's theorem, note that contraction operators have the unit disc D as a spectral set, and that normal operators with spectrum in the unit circle δD are unitary. diff --git a/wiki/wikipedia/3468.txt b/wiki/wikipedia/3468.txt deleted file mode 100644 index 2176dcf64832d263fb3c9d4c545f2223417c7cee..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3468.txt +++ /dev/null @@ -1,29 +0,0 @@ -In mathematics, Kaplansky's theorem on quadratic forms is a result on simultaneous representation of primes by quadratic forms. It was proved in 2003 by Irving Kaplansky. - -Kaplansky's theorem states that a prime p congruent to 1 modulo 16 is representable by both or none of x2 + 32y2 and x2 + 64y2, whereas a prime p congruent to 9 modulo 16 is representable by exactly one of these quadratic forms. - -This is remarkable since the primes represented by each of these forms individually are not describable by congruence conditions. - -Kaplansky's proof uses the facts that 2 is a 4th power modulo p if and only if p is representable by x2 + 64y2, and that -4 is an 8th power modulo p if and only if p is representable by x2 + 32y2. - -*The prime p = 17 is congruent to 1 modulo 16 and is representable by neither x2 + 32y2 nor x2 + 64y2. - -*The prime p=113 is congruent to 1 modulo 16 and is representable by both x2 + 32y2 and x2+64y2 (since 113 = 92 + 32×12 and 113 = 72 + 64×12). - -*The prime p = 41 is congruent to 9 modulo 16 and is representable by x2 + 32y2 (since 41 = 32 + 32×12), but not by x2 + 64y2. - -*The prime p = 73 is congruent to 9 modulo 16 and is representable by x2 + 64y2 (since 73 = 32 + 64×12), but not by x2 + 32y2. - -Five results similar to Kaplansky's theorem are known: - -*A prime p congruent to 1 modulo 20 is representable by both or none of x2 + 20y2 and x2 + 100y2, whereas a prime p congruent to 9 modulo 20 is representable by exactly one of these quadratic forms. - -*A prime p congruent to 1, 16 or 22 modulo 39 is representable by both or none of x2 + xy + 10y2 and x2 + xy + 127y2, whereas a prime p congruent to 4, 10 or 25 modulo 39 is representable by exactly one of these quadratic forms. - -*A prime p congruent to 1, 16, 26, 31 or 36 modulo 55 is representable by both or none of x2 + xy + 14y2 and x2 + xy + 69y2, whereas a prime p congruent to 4, 9, 14, 34 or 49 modulo 55 is representable by exactly one of these quadratic forms. - -*A prime p congruent to 1, 65 or 81 modulo 112 is representable by both or none of x2 + 14y2 and x2 + 448y2, whereas a prime p congruent to 9, 25 or 57 modulo 112 is representable by exactly one of these quadratic forms. - -*A prime p congruent to 1 or 169 modulo 240 is representable by both or none of x2 + 150y2 and x2 + 960y2, whereas a prime p congruent to 49 or 121 modulo 240 is representable by exactly one of these quadratic forms. - -It is conjectured that there are no other similar results involving definite forms. diff --git a/wiki/wikipedia/3469.txt b/wiki/wikipedia/3469.txt deleted file mode 100644 index 5c70d05651d3a315adb716797190ea47f2d5b318..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3469.txt +++ /dev/null @@ -1,11 +0,0 @@ -In differential geometry the last geometric statement of Jacobi is a conjecture named after Carl Gustav Jacob Jacobi. According to this conjecture: - -
- -Every caustic from any point $p$ on an ellipsoid other than the umbilical points has exactly four cusps.
- -While numerical experiments had indicated the statement is true, it wasn't until 2004 that it was proven rigorously by Itoh and Kiyohara. - -It has since been extended to a wider class of surfaces beyond the ellipsoid. diff --git a/wiki/wikipedia/347.txt deleted file mode 100644 index 16867979b047cbd7b53a46552a7fc75620d00b59..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/347.txt +++ /dev/null @@ -1,41 +0,0 @@ -In number theory, Gillies' conjecture is a conjecture about the distribution of prime divisors of Mersenne numbers and was made by Donald B. Gillies in a 1964 paper in which he also announced the discovery of three new Mersenne primes. The conjecture is a specialization of the prime number theorem and is a refinement of conjectures due to I. J. Good and Daniel Shanks. The conjecture remains an open problem: several papers give empirical support, but it disagrees with the widely accepted (but also open) Lenstra–Pomerance–Wagstaff conjecture. - -If $A < B < \sqrt{M_p}$, then as $B/A$ and $M_p \rightarrow \infty$, the number of prime divisors of $M_p$ in the interval $[A, B]$ is Poisson-distributed with -$$ -\text{mean }\sim \begin{cases} \log(\log B /\log A) & \text{ if }A \ge 2p\\ \log(\log B/\log 2p) & \text{ if } A < 2p \end{cases} -$$ - -He noted that his conjecture would imply that - -# The number of Mersenne primes less than $x$ is $\sim\frac{2}{\log 2} \log\log x$. - -# The expected number of Mersenne primes $M_p$ with $x \le p \le 2x$ is $\sim 2$. - -# The probability that $M_p$ is prime is $\sim\frac{2 \log 2p }{p\log 2}$. - -The Lenstra–Pomerance–Wagstaff conjecture gives different values: - -# The number of Mersenne primes less than $x$ is $\sim\frac{e^\gamma}{\log 2} \log\log x$. - -# The expected number of Mersenne primes $M_p$ with $x \le p \le 2x$ is $\sim e^\gamma$. - -# The probability that $M_p$ is prime is $\sim\frac{e^\gamma\log ap}{p\log 2}$ with a = 2 if p ≡ 3 mod 4 and 6 otherwise. - -Asymptotically these values are about 11% smaller. - -While Gillies' conjecture remains open, several papers have added empirical support to its validity, including Ehrman's 1964 paper. diff --git a/wiki/wikipedia/3470.txt deleted file mode 100644 index b0f4f70d775654636da77b1dbb67e712b7f3f575..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3470.txt +++ /dev/null @@ -1,59 +0,0 @@ -In mathematics, the Selberg conjecture, named after Atle Selberg, is a theorem about the density of zeros of the Riemann zeta function ζ(1/2 + it). It is known that the function has infinitely many zeroes on this line in the complex plane: the point at issue is how densely they are clustered. Results on this can be formulated in terms of N(T), the function counting zeroes on the line for which the value of t satisfies 0 ≤ t ≤ T. - -In 1942 Atle Selberg investigated the problem of the Hardy–Littlewood conjecture 2, and he proved that for any -$$ -\varepsilon > 0 -$$ - -there exist -$$ -T_0 = T_0(\varepsilon) > 0 -$$ - -and -$$ -c = c(\varepsilon) > 0, -$$ - -such that for -$$ -T \geq T_0 -$$ - -and -$$ -H=T^{0.5+\varepsilon} -$$ - -the inequality -$$ -N(T+H)-N(T) \geq cH\log T -$$ - -holds true. - -In his turn, Selberg stated a conjecture relating to shorter intervals, namely that it is possible to decrease the value of the exponent a = 0.5 in -$$ -H=T^{0.5+\varepsilon}. -$$
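Counts of the form N(T + H) − N(T) can be explored empirically. Here is a slow but simple sketch using mpmath's `zetazero`, which computes one nontrivial zero at a time, so only small illustrative values of $T$ are practical; the parameter choices below are ours:

```python
from mpmath import zetazero

# Count zeros of zeta(1/2 + it) with ordinate t in (T, T + H], H = T^(0.5 + eps).
T, eps = 50.0, 0.01
H = T ** (0.5 + eps)
count, n = 0, 1
while True:
    t = float(zetazero(n).imag)   # ordinate of the n-th nontrivial zero
    if t > T + H:
        break
    if t > T:
        count += 1
    n += 1
print(f"{count} zeros with ordinate in ({T}, {T + H:.2f}]")
```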
- -In 1984 Anatolii Karatsuba proved that for a fixed $\varepsilon$ satisfying the condition -$$ -0<\varepsilon < 0.001, -$$ - -a sufficiently large T and -$$ -H = T^{a+\varepsilon}, -$$ $a = \tfrac{27}{82} = \tfrac{1}{3} -\tfrac{1}{246},$ - -the interval in the ordinate t (T, T + H) contains at least cH ln T real zeros of the Riemann zeta function -$$ -\zeta\Bigl(\tfrac{1}{2}+it\Bigr); -$$ - -and thereby confirmed the Selberg conjecture. The estimates of Selberg and Karatsuba cannot be improved in respect of the order of growth as T → +∞. - -In 1992 Karatsuba proved that an analog of the Selberg conjecture holds for "almost all" intervals (T, T + H], H = T^ε, where ε is an arbitrarily small fixed positive number. The Karatsuba method permits one to investigate zeroes of the Riemann zeta-function on "supershort" intervals of the critical line, that is, on the intervals (T, T + H], the length H of which grows slower than any, even arbitrarily small, power of T. - -In particular, he proved that for any given numbers ε, ε1 satisfying the conditions 0 < ε, ε1 < 1, almost all intervals (T, T + H] with H ≥ exp[(ln T)^ε] contain at least H (ln T)^{1−ε1} zeros of the function ζ(1/2 + it). This estimate is quite close to the conditional result that follows from the Riemann hypothesis. diff --git a/wiki/wikipedia/3471.txt deleted file mode 100644 index 49e108503c14146cb629b54de2e0b9c2b276c769..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3471.txt +++ /dev/null @@ -1,475 +0,0 @@ -In calculus, Taylor's theorem gives an approximation of a k-times differentiable function around a given point by a polynomial of degree k, called the kth-order Taylor polynomial. For a smooth function, the Taylor polynomial is the truncation at the order k of the Taylor series of the function. The first-order Taylor polynomial is the linear approximation of the function, and the second-order Taylor polynomial is often referred to as the quadratic approximation. There are several versions of Taylor's theorem, some giving explicit estimates of the approximation error of the function by its Taylor polynomial. - -Taylor's theorem is named after the mathematician Brook Taylor, who stated a version of it in 1715, although an earlier version of the result was already mentioned in 1671 by James Gregory. - -Taylor's theorem is taught in introductory-level calculus courses and is one of the central elementary tools in mathematical analysis. It gives simple arithmetic formulas to accurately compute values of many transcendental functions such as the exponential function and trigonometric functions. - -It is the starting point of the study of analytic functions, and is fundamental in various areas of mathematics, as well as in numerical analysis and mathematical physics. Taylor's theorem also generalizes to multivariate and vector valued functions. - -If a real-valued function f(x) is differentiable at the point x = a, then it has a linear approximation near this point. This means that there exists a function h1(x) such that -$$ - f(x) = f(a) + f'(a)(x - a) + h_1(x)(x - a), \quad \lim_{x \to a} h_1(x) = 0. -$$ - -Here -$$ -P_1(x) = f(a) + f'(a)(x - a) -$$ - -is the linear approximation of f(x) for x near the point a, whose graph y = P_1(x) is the tangent line to the graph y = f(x) at x = a. The error in the approximation is: -$$ -R_1(x) = f(x) - P_1(x) = h_1(x)(x - a). -$$
- -As x tends to a, this error goes to zero much faster than $f'(a)(x{-}a)$, making $f(x)\approx P_1(x)$ a useful approximation. - -For a better approximation to f(x), we can fit a quadratic polynomial instead of a linear function: -$$ -P_2(x) = f(a) + f'(a)(x - a) + \frac{f''(a)}{2}(x - a)^2. -$$ - -Instead of just matching one derivative of f(x) at x = a, this polynomial has the same first and second derivatives, as is evident upon differentiation. - -Taylor's theorem ensures that the quadratic approximation is, in a sufficiently small neighborhood of x = a, more accurate than the linear approximation. Specifically, -$$ -f(x) = P_2(x) + h_2(x)(x - a)^2, \quad \lim_{x \to a} h_2(x) = 0. -$$ - -Here the error in the approximation is -$$ -R_2(x) = f(x) - P_2(x) = h_2(x)(x - a)^2, -$$ - -which, given the limiting behavior of $h_2$, goes to zero faster than $(x - a)^2$ as x tends to a. - -[Figure: Approximation of f(x) = 1/(1 + x^2) (blue) by its Taylor polynomials P_k of order k = 1, …, 16 centered at x = 0 (red) and x = 1 (green). The approximations do not improve at all outside (−1, 1) and (1 − √2, 1 + √2) respectively.] Similarly, we might get still better approximations to f if we use polynomials of higher degree, since then we can match even more derivatives with f at the selected base point. - -In general, the error in approximating a function by a polynomial of degree k will go to zero much faster than $(x-a)^k$ as x tends to a. However, there are functions, even infinitely differentiable ones, for which increasing the degree of the approximating polynomial does not increase the accuracy of approximation: we say such a function fails to be analytic at x = a: it is not (locally) determined by its derivatives at this point. - -Taylor's theorem is of asymptotic nature: it only tells us that the error Rk in an approximation by a k-th order Taylor polynomial Pk tends to zero faster than any nonzero k-th degree polynomial as x → a. It does not tell us how large the error is in any concrete neighborhood of the center of expansion, but for this purpose there are explicit formulas for the remainder term (given below) which are valid under some additional regularity assumptions on f. These enhanced versions of Taylor's theorem typically lead to uniform estimates for the approximation error in a small neighborhood of the center of expansion, but the estimates do not necessarily hold for neighborhoods which are too large, even if the function f is analytic. In that situation one may have to select several Taylor polynomials with different centers of expansion to have reliable Taylor-approximations of the original function. - -There are several ways we might use the remainder term: - -# Estimate the error for a polynomial Pk(x) of degree k estimating f(x) on a given interval (a – r, a + r). (Given the interval and degree, we find the error.) - -# Find the smallest degree k for which the polynomial Pk(x) approximates f(x) to within a given error tolerance on a given interval (a − r, a + r). (Given the interval and error tolerance, we find the degree.) - -# Find the largest interval (a − r, a + r) on which Pk(x) approximates f(x) to within a given error tolerance. (Given the degree and error tolerance, we find the interval.)
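As a sketch of the second use above, take f = sin centered at a = 0, where every derivative is bounded by M = 1; the tolerance and the helper computation are our illustrative choices:

```python
from math import factorial, sin

# Use-case 2: find the smallest degree k with a guaranteed error bound.
# For f = sin at a = 0, |f^(k+1)| <= 1, so the Lagrange form (stated
# below) gives |R_k(x)| <= |x|**(k+1) / (k+1)! on [-1, 1].
tol = 1e-5
k = 0
while 1.0 / factorial(k + 1) >= tol:
    k += 1
print("degree needed:", k)  # 8, since 9! = 362880 > 1e5

# Spot check at x = 1: odd-degree terms of sin up to degree k.
taylor = sum((-1)**j / factorial(2*j + 1) for j in range((k + 1) // 2))
print(abs(taylor - sin(1.0)) < tol)  # True
```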
- -The precise statement of the most basic version of Taylor's theorem is as follows: - -Taylor's theorem. Let k ≥ 1 be an integer and let the function f : R → R be k times differentiable at the point a ∈ R. Then there exists a function hk : R → R such that -$$ - f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots + \frac{f^{(k)}(a)}{k!}(x-a)^k + h_k(x)(x-a)^k, -$$ - -and -$$ -\lim_{x\to a} h_k(x) = 0. -$$ - -This is called the Peano form of the remainder. - -The polynomial appearing in Taylor's theorem is the k-th order Taylor polynomial -$$ -P_k(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots + \frac{f^{(k)}(a)}{k!}(x-a)^k -$$ - -of the function f at the point a. The Taylor polynomial is the unique "asymptotic best fit" polynomial in the sense that if there exists a function hk : R → R and a k-th order polynomial p such that -$$ - f(x) = p(x) + h_k(x)(x-a)^k, \quad \lim_{x\to a} h_k(x) = 0, -$$ - -then p = Pk. Taylor's theorem describes the asymptotic behavior of the remainder term -$$ - R_k(x) = f(x) - P_k(x), -$$ - -which is the approximation error when approximating f with its Taylor polynomial. Using the little-o notation, the statement in Taylor's theorem reads as -$$ -R_k(x) = o(|x-a|^{k}), \quad x\to a. -$$ - -Under stronger regularity assumptions on f there are several precise formulas for the remainder term Rk of the Taylor polynomial, the most common ones being the following. - -Let f : R → R be k + 1 times differentiable on the open interval, with $f^{(k)}$ continuous on the closed interval, between a and x. Then -$$ - R_k(x) = \frac{f^{(k+1)}(\xi_L)}{(k+1)!} (x-a)^{k+1} -$$ - -for some real number ξL between a and x. This is the Lagrange form of the remainder. - -Similarly, -$$ - R_k(x) = \frac{f^{(k+1)}(\xi_C)}{k!}(x-\xi_C)^k(x-a) -$$ - -for some real number ξC between a and x. This is the Cauchy form of the remainder. - -These refinements of Taylor's theorem are usually proved using the mean value theorem, whence the name. Also other similar expressions can be found. For example, if G(t) is continuous on the closed interval and differentiable with a non-vanishing derivative on the open interval between a and x, then -$$ - R_k(x) = \frac{f^{(k+1)}(\xi)}{k!}(x-\xi)^k \frac{G(x)-G(a)}{G'(\xi)} -$$ - -for some number ξ between a and x. This version covers the Lagrange and Cauchy forms of the remainder as special cases, and is proved below using Cauchy's mean value theorem. - -The statement for the integral form of the remainder is more advanced than the previous ones, and requires understanding of Lebesgue integration theory for the full generality. However, it holds also in the sense of Riemann integral provided the (k + 1)th derivative of f is continuous on the closed interval [a,x]. - -Integral form of the remainder. Let $f^{(k)}$ be absolutely continuous on the closed interval between a and x. Then -$$ - R_k(x) = \int_a^x \frac{f^{(k+1)} (t)}{k!} (x - t)^k dt. -$$ - -Due to absolute continuity of $f^{(k)}$ on the closed interval between a and x, its derivative $f^{(k+1)}$ exists as an $L^1$-function, and the result can be proven by a formal calculation using the fundamental theorem of calculus and integration by parts. - -It is often useful in practice to be able to estimate the remainder term appearing in the Taylor approximation, rather than having an exact formula for it. Suppose that f is (k + 1)-times continuously differentiable in an interval I containing a.
Suppose that there are real constants q and Q such that -$$ -q\le f^{(k+1)}(x)\le Q -$$ - -throughout I. Then the remainder term satisfies the inequality -$$ -q\frac{(x-a)^{k+1}}{(k+1)!}\le R_k(x)\le Q\frac{(x-a)^{k+1}}{(k+1)!}, -$$ - -if x > a, and a similar estimate if x < a. This is a simple consequence of the Lagrange form of the remainder. In particular, if -$$ -|f^{(k+1)}(x)|\le M -$$ - -on an interval I = (a − r, a + r) with some $r > 0$, then -$$ -|R_k(x)|\le M\frac{|x-a|^{k+1}}{(k+1)!} -$$ - -for all x ∈ (a − r, a + r). As an example, suppose that we wish to find the approximate value of the function f(x) = e^x on the interval [−1, 1] while ensuring that the error in the approximation is no more than 10^{−5}, using only the facts that e^0 = 1 and that the exponential function is positive and is its own derivative. From these properties it follows that $f^{(k)}(x) = e^x$ for all k, and in particular, $f^{(k)}(0) = 1$. Hence the k-th order Taylor polynomial of f at 0 and its remainder term in the Lagrange form are given by -$$ - P_k(x) = 1+x+\frac{x^2}{2!}+\cdots+\frac{x^k}{k!}, \qquad R_k(x)=\frac{e^\xi}{(k+1)!}x^{k+1}, -$$ - -where ξ is some number between 0 and x. Since e^x is increasing, we can simply use e^x ≤ 1 for x ∈ [−1, 0] to estimate the remainder on the subinterval [−1, 0]. To obtain an upper bound for the remainder on [0,1], we use the property e^ξ < e^x for 0 < ξ < x to deduce that -$$ - e^x \leq \frac{1+x}{1-\frac{x^2}{2}} = 2\frac{1+x}{2-x^2} \leq 4, \qquad 0 \leq x\leq 1 -$$ - -simply by maximizing the numerator and minimizing the denominator. Combining these estimates for e^x we see that -$$ - |R_k(x)| \leq \frac{4|x|^{k+1}}{(k+1)!} \leq \frac{4}{(k+1)!}, \qquad -1\leq x \leq 1, -$$ - -so the required precision is certainly reached, when -$$ - \frac{4}{(k+1)!} < 10^{-5} \quad \Longleftrightarrow \quad 4\cdot 10^5 < (k+1)! \quad \Longleftrightarrow \quad k \geq 9. -$$ - -(See factorial or compute by hand the values 9! = 362880 and 10! = 3628800.) As a conclusion, Taylor's theorem leads to the approximation -$$ - e^x = 1+x+\frac{x^2}{2!} + \cdots + \frac{x^9}{9!} + R_9(x), \qquad |R_9(x)| < 10^{-5}, \qquad -1\leq x \leq 1. -$$ - -For instance, this approximation provides a decimal expression e ≈ 2.71828, correct up to five decimal places. - -Let I ⊂ R be an open interval. By definition, a function f : I → R is real analytic if it is locally defined by a convergent power series. This means that for every a ∈ I there exists some r > 0 and a sequence of coefficients ck ∈ R such that (a − r, a + r) ⊂ I and -$$ - f(x) = \sum_{k=0}^\infty c_k(x-a)^k = c_0 + c_1(x-a) + c_2(x-a)^2 + \cdots, \qquad |x-a| < r. -$$ - -If such a power series converges at some point b, it converges on the whole interval (a − rb, a + rb], where rb = |b − a|; the largest such radius is the radius of convergence R. Here only the convergence of the power series is considered, and it might well be that (a − R, a + R) extends beyond the domain I of f. - -The Taylor polynomials of the real analytic function f at a are simply the finite truncations -$$ - P_k(x) = \sum_{j=0}^k c_j(x-a)^j, \qquad c_j = \frac{f^{(j)}(a)}{j!} -$$ - -of its locally defining power series, and the corresponding remainder terms are locally given by the analytic functions -$$ - R_k(x) = \sum_{j=k+1}^\infty c_j(x-a)^j = (x-a)^k h_k(x), \qquad |x-a| < r. -$$ - -Here the functions - -\begin{align} - -& h_k:(a-r,a+r)\to \R \\ - -& h_k(x) = (x-a)\sum_{j=0}^\infty c_{k+1+j} \left(x - a\right)^j - -\end{align} - -are also analytic, since their defining power series have the same radius of convergence as the original series. Assuming that [a − r, a + r] ⊂ I and r < R, all these series converge uniformly on (a − r, a + r). Naturally, in the case of analytic functions one can estimate the remainder term Rk(x) by the tail of the sequence of the derivatives f′(a) at the center of the expansion, but using complex analysis also another possibility arises, which is described below.
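A quick numerical confirmation of this worked example is straightforward; the grid and variable names below are our illustrative choices:

```python
from math import exp, factorial

# Degree-9 Taylor polynomial of e^x at 0, per the bound derived above.
P9 = lambda x: sum(x**k / factorial(k) for k in range(10))

# Worst error on a grid covering [-1, 1] stays below the 1e-5 target.
worst = max(abs(P9(x) - exp(x)) for x in [i / 100 - 1 for i in range(201)])
print(worst < 1e-5)          # True
print(round(P9(1.0), 5))     # 2.71828, the five-place value quoted above
```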
- -The Taylor series of f will converge in some interval in which all its derivatives are bounded and do not grow too fast as k goes to infinity. (However, even if the Taylor series converges, it might not converge to f, as explained below; f is then said to be non-analytic.) - -One might think of the Taylor series -$$ - f(x) \approx \sum_{k=0}^\infty c_k(x-a)^k = c_0 + c_1(x-a) + c_2(x-a)^2 + \cdots -$$ - -of an infinitely many times differentiable function f : R → R as its "infinite order Taylor polynomial" at a. Now the estimates for the remainder imply that if, for any r, the derivatives of f are known to be bounded over (a − r, a + r), then for any order k and for any r > 0 there exists a constant Mk,r > 0 such that -$$ - |R_k(x)| \leq M_{k,r} \frac{|x-a|^{k+1}}{(k+1)!} -$$ - -for every x ∈ (a − r,a + r). Sometimes the constants Mk,r can be chosen in such way that Mk,r is bounded above, for fixed r and all k. Then the Taylor series of f converges uniformly to some analytic function - -\begin{align} - -& T_f:(a-r,a+r)\to\R \\ - -& T_f(x) = \sum_{k=0}^\infty \frac{f^{(k)}(a)}{k!} \left(x-a\right)^k - -\end{align} - -(One also gets convergence even if Mk,r is not bounded above as long as it grows slowly enough.) - -The limit function Tf is by definition always analytic, but it is not necessarily equal to the original function f, even if f is infinitely differentiable. In this case, we say f is a non-analytic smooth function, for example a flat function: - -\begin{align} - -& f:\R \to \R \\ - -& f(x) = \begin{cases} - -e^{-\frac{1}{x^2}} & x>0 \\ - -0 & x \leq 0 . - -\end{cases} - -\end{align} - -Using the chain rule repeatedly by mathematical induction, one shows that for any order k, - - f^{(k)}(x) = \begin{cases} - -\frac{p_k(x)}{x^{3k}}\cdot e^{-\frac{1}{x^2}} & x>0 \\ - -0 & x \leq 0 - -\end{cases} - -for some polynomial pk of degree 2(k − 1). The function $e^{-\frac{1}{x^2}}$ tends to zero faster than any polynomial as x → 0, so f is infinitely many times differentiable and $f^{(k)}(0) = 0$ for every positive integer k. The above results all hold in this case: - -* The Taylor series of f converges uniformly to the zero function Tf(x) = 0, which is analytic with all coefficients equal to zero. - -* The function f is unequal to this Taylor series, and hence non-analytic. - -* For any order k ∈ N and radius r > 0 there exists Mk,r > 0 satisfying the remainder bound above. - -However, as k increases for fixed r, the value of Mk,r grows more quickly than $r^k$, and the error does not go to zero. - -Taylor's theorem generalizes to functions f : C → C which are complex differentiable in an open subset U ⊂ C of the complex plane. However, its usefulness is dwarfed by other general theorems in complex analysis. Namely, stronger versions of related results can be deduced for complex differentiable functions f : U → C using Cauchy's integral formula as follows. - -Let r > 0 such that the closed disk B(z, r) ∪ S(z, r) is contained in U. Then Cauchy's integral formula with a positive parametrization γ(t) = z + re^{it} of the circle S(z, r) with t ∈ [0, 2π] gives -$$ -f(z) = \frac{1}{2\pi i}\int_\gamma \frac{f(w)}{w-z}dw, \quad f'(z) = \frac{1}{2\pi i}\int_\gamma \frac{f(w)}{(w-z)^2} dw, \quad \ldots, \quad f^{(k)}(z) = \frac{k!}{2\pi i}\int_\gamma \frac{f(w)}{(w-z)^{k+1}} dw. -$$ - -Here all the integrands are continuous on the circle S(z, r), which justifies differentiation under the integral sign.
In particular, if f is once complex differentiable on the open set U, then it is actually infinitely many times complex differentiable on U. One also obtains Cauchy's estimates -$$ - |f^{(k)}(z)| \leq \frac{k!}{2\pi}\int_\gamma \frac{M_r}{|w-z|^{k+1}} |dw| = \frac{k! M_r}{r^k}, -$$ - -where $M_r$ is the maximum of $|f|$ on the circle $S(z, r)$. These estimates imply that the complex Taylor series of f converges on the corresponding disk, with the tail of the series bounded by -$$ -|R_k(z')| \leq \frac{M_r \beta^{k+1}}{1-\beta}, \qquad \frac{|z'-z|}{r} \leq \beta < 1. -$$ - -The function - -\begin{align} - -& f : \R \to \R \\ - -& f(x) = \frac{1}{1+x^2} - -\end{align} - -is real analytic, that is, locally determined by its Taylor series. This function was plotted above to illustrate the fact that some elementary functions cannot be approximated by Taylor polynomials in neighborhoods of the center of expansion which are too large. This kind of behavior is easily understood in the framework of complex analysis. Namely, the function f extends into a meromorphic function - -\begin{align} - -& f:\Complex \cup \{\infty\} \to \Complex \cup \{\infty\} \\ - -& f(z) = \frac{1}{1+z^2} - -\end{align} - -on the compactified complex plane. It has simple poles at z = i and z = −i, and it is analytic elsewhere. Now its Taylor series centered at z0 converges on any disc B(z0, r) with r < |z − z0|, where the same Taylor series converges at z ∈ C. Therefore, the Taylor series of f centered at 0 converges on B(0, 1) and it does not converge for any z ∈ C with |z| > 1 due to the poles at i and −i. For the same reason the Taylor series of f centered at 1 converges on B(1, √2) and does not converge for any z ∈ C with |z − 1| > √2. - -A function f: Rn → R is differentiable at a ∈ Rn if and only if there exists a linear functional L : Rn → R and a function h : Rn → R such that - -f(\boldsymbol{x}) = f(\boldsymbol{a}) + L(\boldsymbol{x}-\boldsymbol{a}) + h(\boldsymbol{x})\lVert\boldsymbol{x}-\boldsymbol{a}\rVert, - -\qquad \lim_{\boldsymbol{x}\to\boldsymbol{a}}h(\boldsymbol{x})=0. - -If this is the case, then L = df(a) is the (uniquely defined) differential of f at the point a. Furthermore, then the partial derivatives of f exist at a and the differential of f at a is given by -$$ - df( \boldsymbol{a} )( \boldsymbol{v} ) = \frac{\partial f}{\partial x_1}(\boldsymbol{a})v_1 + \cdots + \frac{\partial f}{\partial x_n}(\boldsymbol{a})v_n. -$$ - -Introduce the multi-index notation -$$ - |\alpha| = \alpha_1+\cdots+\alpha_n, \quad \alpha!=\alpha_1!\cdots\alpha_n!, \quad \boldsymbol{x}^\alpha=x_1^{\alpha_1}\cdots x_n^{\alpha_n} -$$ - -for α ∈ Nn and x ∈ Rn. If all the k-th order partial derivatives of f : Rn → R are continuous at a ∈ Rn, then by Clairaut's theorem, one can change the order of mixed derivatives at a, so the notation -$$ - D^\alpha f = \frac{\partial^{|\alpha|} f}{\partial x_1^{\alpha_1}\cdots \partial x_n^{\alpha_n}}, \qquad |\alpha|\leq k -$$ - -for the higher order partial derivatives is justified in this situation. The same is true if all the (k − 1)-th order partial derivatives of f exist in some neighborhood of a and are differentiable at a. Then we say that f is k times differentiable at the point a. - -Multivariate version of Taylor's theorem. Let f : Rn → R be a k-times continuously differentiable function at the point a ∈ Rn. Then there exist functions hα : Rn → R, where |α| = k, such that -$$ -f(\boldsymbol{x}) = \sum_{|\alpha|\leq k} \frac{D^\alpha f(\boldsymbol{a})}{\alpha!} (\boldsymbol{x}-\boldsymbol{a})^\alpha + \sum_{|\alpha|=k} h_\alpha(\boldsymbol{x})(\boldsymbol{x}-\boldsymbol{a})^\alpha, \qquad \lim_{\boldsymbol{x}\to\boldsymbol{a}} h_\alpha(\boldsymbol{x}) = 0. -$$ - -If f is (k + 1)-times continuously differentiable in a closed ball B centered at a, then the remainder can be written in an integral form (derived below), and it satisfies the uniform estimate -$$ -|R_k(\boldsymbol{x})| \leq \sum_{|\alpha|=k+1} \frac{|(\boldsymbol{x}-\boldsymbol{a})^\alpha|}{\alpha!} \max_{\boldsymbol{y}\in B} |D^\alpha f(\boldsymbol{y})|, \qquad \boldsymbol{x}\in B.
- -For example, the third-order Taylor polynomial of a smooth function f: R² → R is, denoting x − a = v, - -\begin{align} - -P_3(\boldsymbol{x}) = f ( \boldsymbol{a} ) + {} &\frac{\partial f}{\partial x_1}( \boldsymbol{a} ) v_1 + \frac{\partial f}{\partial x_2}( \boldsymbol{a} ) v_2 + \frac{\partial^2 f}{\partial x_1^2}( \boldsymbol{a} ) \frac {v_1^2}{2!} + \frac{\partial^2 f}{\partial x_1 \partial x_2}( \boldsymbol{a} ) v_1 v_2 + \frac{\partial^2 f}{\partial x_2^2}( \boldsymbol{a} ) \frac{v_2^2}{2!} \\ - -& + \frac{\partial^3 f}{\partial x_1^3}( \boldsymbol{a} ) \frac{v_1^3}{3!} + \frac{\partial^3 f}{\partial x_1^2 \partial x_2}( \boldsymbol{a} ) \frac{v_1^2 v_2}{2!} + \frac{\partial^3 f}{\partial x_1 \partial x_2^2}( \boldsymbol{a} ) \frac{v_1 v_2^2}{2!} + \frac{\partial^3 f}{\partial x_2^3}( \boldsymbol{a} ) \frac{v_2^3}{3!} - -\end{align} - -To prove the Peano form of the remainder, let - -h_k(x) = \begin{cases} - -\frac{f(x) - P(x)}{(x-a)^k} & x\not=a\\ - -0&x=a - -\end{cases} - -where, as in the statement of Taylor's theorem, -$$ -P(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots + \frac{f^{(k)}(a)}{k!}(x-a)^k. -$$ - -It is sufficient to show that -$$ -\lim_{x\to a} h_k(x) =0. -$$ - -The proof here is based on repeated application of L'Hôpital's rule. Note that, for each j = 0, 1, …, k−1, $f^{(j)}(a)=P^{(j)}(a)$. Hence each of the first k−1 derivatives of the numerator in $h_k(x)$ vanishes at $x=a$, and the same is true of the denominator. Also, since the condition that the function f be k times differentiable at a point requires differentiability up to order k−1 in a neighborhood of said point (this is true, because differentiability requires a function to be defined in a whole neighborhood of a point), the numerator and its k − 2 derivatives are differentiable in a neighborhood of a. Clearly, the denominator also satisfies said condition, and additionally, doesn't vanish unless x=a, therefore all conditions necessary for L'Hôpital's rule are fulfilled, and its use is justified. So - -\begin{align} - -\lim_{x\to a} \frac{f(x) - P(x)}{(x-a)^k} &= \lim_{x\to a} \frac{\frac{d}{dx}(f(x) - P(x))}{\frac{d}{dx}(x-a)^k} = \cdots = \lim_{x\to a} \frac{\frac{d^{k-1}}{dx^{k-1}}(f(x) - P(x))}{\frac{d^{k-1}}{dx^{k-1}}(x-a)^k}\\ - -&=\frac{1}{k!}\lim_{x\to a} \frac{f^{(k-1)}(x) - P^{(k-1)}(x)}{x-a}\\ - -&=\frac{1}{k!}(f^{(k)}(a) - f^{(k)}(a)) = 0 - -\end{align} - -where the last equality follows by the definition of the derivative at x = a. - -For the mean value forms of the remainder, let G be any real-valued function, continuous on the closed interval between a and x and differentiable with a non-vanishing derivative on the open interval between a and x, and define - -F(t) = f(t) + f'(t)(x-t) + \frac{f''(t)}{2!}(x-t)^2 + \cdots + \frac{f^{(k)}(t)}{k!}(x-t)^k
- -This is the form of the remainder term mentioned after the actual statement of Taylor's theorem with remainder in the mean value form. - -The Lagrange form of the remainder is found by choosing $ G(t) = (x-t)^{k+1} $ and the Cauchy form by choosing $ G(t) = t-a$. - -Remark. Using this method one can also recover the integral form of the remainder by choosing
$$
 G(t) = \int_a^t \frac{f^{(k+1)}(s)}{k!} (x-s)^k ds,
$$ - -but the requirements for f needed for the use of the mean value theorem are too strong, if one aims to prove the claim in the case that f^{(k)} is only absolutely continuous. However, if one uses the Riemann integral instead of the Lebesgue integral, the assumptions cannot be weakened. - -Due to the absolute continuity of f^{(k)} on the closed interval between a and x, its derivative f^{(k+1)} exists as an L1-function, and we can use the fundamental theorem of calculus and integration by parts. This same proof applies for the Riemann integral assuming that f^{(k)} is continuous on the closed interval and differentiable on the open interval between a and x, and this leads to the same result as using the mean value theorem. - -The fundamental theorem of calculus states that
$$
f(x)=f(a)+ \int_a^x f'(t) dt.
$$ - -Now we can integrate by parts and use the fundamental theorem of calculus again to see that - - \begin{align} - -f(x) &= f(a)+\Big(xf'(x)-af'(a)\Big)-\int_a^x tf''(t) dt \\ - -&= f(a) + x\left(f'(a) + \int_a^x f''(t) dt \right) -af'(a)-\int_a^x tf''(t) dt \\ - -&= f(a)+(x-a)f'(a)+\int_a^x (x-t)f''(t) dt, - -\end{align} - -which is exactly Taylor's theorem with remainder in the integral form in the case k=1. The general statement is proved using induction. Suppose that - -$$ f(x) = f(a) + \frac{f'(a)}{1!}(x - a) + \cdots + \frac{f^{(k)}(a)}{k!}(x - a)^k + \int_a^x \frac{f^{(k+1)} (t)}{k!} (x - t)^k dt. \qquad (2) $$ - -Integrating the remainder term by parts we arrive at - -\begin{align} - -\int_a^x \frac{f^{(k+1)} (t)}{k!} (x - t)^k dt = & - \left[ \frac{f^{(k+1)} (t)}{(k+1)k!} (x - t)^{k+1} \right]_a^x + \int_a^x \frac{f^{(k+2)} (t)}{(k+1)k!} (x - t)^{k+1} dt \\ - -= & \ \frac{f^{(k+1)} (a)}{(k+1)!} (x - a)^{k+1} + \int_a^x \frac{f^{(k+2)} (t)}{(k+1)!} (x - t)^{k+1} dt. - -\end{align} - -Substituting this into the formula (2) shows that if it holds for the value k, it must also hold for the value k + 1. Therefore, since it holds for k = 1, it must hold for every positive integer k. - -We prove the special case, where f : Rn → R has continuous partial derivatives up to the order k+1 in some closed ball B with center a. The strategy of the proof is to apply the one-variable case of Taylor's theorem to the restriction of f to the line segment adjoining x and a. Parametrize the line segment between a and x by u(t) = a + t(x − a). We apply the one-variable version of Taylor's theorem to the function g(t) = f(u(t)):
$$
f(\mathbf{x})=g(1)=g(0)+\sum_{j=1}^k\frac{1}{j!}g^{(j)}(0)\ +\ \int_0^1 \frac{(1-t)^k }{k!} g^{(k+1)}(t) dt.
$$ - -Applying the chain rule for several variables gives - -\begin{align} - -g^{(j)}(t)&=\frac{d^j}{dt^j}f(u(t)) = \frac{d^j}{dt^j} f(\mathbf{a}+t(\mathbf{x}-\mathbf{a})) \\ - -&= \sum_{|\alpha|=j} \left(\begin{matrix} j \\ \alpha\end{matrix} \right) (D^\alpha f) (\mathbf{a}+t(\mathbf{x}-\mathbf{a})) (\mathbf{x}-\mathbf{a})^\alpha - -\end{align} - -where $\tbinom j \alpha$ is the multinomial coefficient.
Since $\tfrac{1}{j!}\tbinom j \alpha=\tfrac{1}{\alpha!}$, we get: - -f(\mathbf x)= f(\mathbf a) + \sum_{1 \leq |\alpha| \leq k}\frac{1}{\alpha!} (D^\alpha f) (\mathbf a)(\mathbf x-\mathbf a)^\alpha+\sum_{|\alpha|=k+1}\frac{k+1}{\alpha!} (\mathbf x-\mathbf a)^\alpha \int_0^1 (1-t)^k (D^\alpha f)(\mathbf a+t(\mathbf x-\mathbf a))dt. diff --git a/wiki/wikipedia/3472.txt b/wiki/wikipedia/3472.txt deleted file mode 100644 index 92edd0401beb2111f012ed20f517536c7548165f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3472.txt +++ /dev/null @@ -1,7 +0,0 @@ -The Equidistribution of Lattice Shapes of Rings of Integers of Cubic, Quartic, and Quintic Number Fields: An Artist's Rendering is a mathematics book by Piper H, based on her Princeton University doctoral thesis of the same title. It has been described as "feminist". In the thesis, Harron and Bhargava showed that, viewed as a lattice in real vector space, the ring of integers of a random number field does not have any special symmetries. Harron intentionally departs from the typical academic format as she is writing for a community of mathematicians who "do not feel that they are encouraged to be themselves". Mathematician Philip Ording calls her approach to communicating mathematical abstractions "generous". - -Her thesis went viral in late 2015, especially within the mathematical community, in part because of the prologue which begins by stating that "respected research math is dominated by men of a certain attitude". She returned determined that, even if she did not do math the "right way", she "could still contribute to the community". Her prologue states that the community lacks diversity and discourages diversity of thought. "It is not my place to make the system comfortable with itself", she concludes. - -A concise proof was published in Compositio Mathematica in 2016. - -Harron earned her doctorate from Princeton in 2016. Harron, who also goes by Piper H., is a postdoctoral researcher at the University of Toronto. diff --git a/wiki/wikipedia/3473.txt b/wiki/wikipedia/3473.txt deleted file mode 100644 index 128d6cb042cb298ee320d64b9616131422478b8e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3473.txt +++ /dev/null @@ -1,32 +0,0 @@ -In complex analysis and numerical analysis, König's theorem, named after the Hungarian mathematician Gyula Kőnig, gives a way to estimate simple poles or simple roots of a function. In particular, it has numerous applications in root finding algorithms like Newton's method and its generalization Householder's method. - -Given a meromorphic function $f(x)=\sum_{n\ge0}c_nx^n$ defined on $|x|<R$ whose only singularity in this disc is a single simple pole $x_0$, König's theorem states that the ratios of consecutive Taylor coefficients converge to that pole: $c_n/c_{n+1}\to x_0$. Applied to $1/g$, whose simple poles are the simple roots of $g$, this yields estimates of roots. - -The Landau–Kolmogorov inequalities bound an intermediate derivative in terms of the function and a higher derivative, $\|f^{(k)}\|_\infty \le C(n,k)\|f\|_\infty^{1-k/n}\|f^{(n)}\|_\infty^{k/n}$; on the real axis the sharp constants $C(n,k)$ were determined by Kolmogorov in terms of the Favard constants. - -Following work by Matorin and others, the extremising functions were found by Isaac Jacob Schoenberg; explicit forms for the sharp constants are, however, still unknown. - -There are many generalisations, which are of the form
$$
-\|f^{(k)}\|_{L_q(T)} \le K \cdot {\|f\|^\alpha_{L_p(T)}} \cdot {\|f^{(n)}\|^{1-\alpha}_{L_r(T)}}\text{ for }1\le k < n.
$$ - -Here all three norms can be different from each other (from $L_1$ to $L_\infty$, with p=q=r=∞ in the classical case) and T may be the real axis, semiaxis or a closed segment. - -The Kallman–Rota inequality generalizes the Landau–Kolmogorov inequalities from the derivative operator to more general contractions on Banach spaces.
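As a quick numerical illustration of the classical k = 1, n = 2 case on the whole real line, where $\|f'\|_\infty^2 \le 2\|f\|_\infty\|f''\|_\infty$, here is a small sketch of our own (not part of the article; the test function is an arbitrary choice and the derivatives are approximated by finite differences):

```
import numpy as np

xs = np.linspace(-10, 10, 200001)
f = np.exp(-xs**2) * np.sin(3 * xs)   # an arbitrary smooth, decaying test function
fp = np.gradient(f, xs)               # numerical f'
fpp = np.gradient(fp, xs)             # numerical f''
lhs = np.max(np.abs(fp)) ** 2
rhs = 2 * np.max(np.abs(f)) * np.max(np.abs(fpp))
print(lhs <= rhs, lhs, rhs)           # True, with room to spare
```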
diff --git a/wiki/wikipedia/3475.txt b/wiki/wikipedia/3475.txt deleted file mode 100644 index e14285bec55a3152f7e1a7d69f9ed349668a5eed..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3475.txt +++ /dev/null @@ -1 +0,0 @@ -Dynamic link matching is a graph-based system for image recognition. It uses wavelet transformations to encode incoming image data. diff --git a/wiki/wikipedia/3476.txt b/wiki/wikipedia/3476.txt deleted file mode 100644 index ea2ae3d2e0b84397d17dd2f188e874a717fe9e5c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3476.txt +++ /dev/null @@ -1,9 +0,0 @@ -Robinson's joint consistency theorem is an important theorem of mathematical logic. It is related to Craig interpolation and Beth definability. - -The classical formulation of Robinson's joint consistency theorem is as follows: - -Let $T_1$ and $T_2$ be first-order theories. If $T_1$ and $T_2$ are consistent and the intersection $T_1\cap T_2$ is complete (in the common language of $T_1$ and $T_2$), then the union $T_1\cup T_2$ is consistent. Note that a theory is complete if it decides every formula, i.e. either $T \vdash \varphi$ or $T \vdash \neg\varphi$. - -Since the completeness assumption is quite hard to fulfill, there is a variant of the theorem: - -Let $T_1$ and $T_2$ be first-order theories. If $T_1$ and $T_2$ are consistent and if there is no formula $\varphi$ in the common language of $T_1$ and $T_2$ such that $T_1 \vdash \varphi$ and $T_2 \vdash \neg\varphi$, then the union $T_1\cup T_2$ is consistent. diff --git a/wiki/wikipedia/3477.txt b/wiki/wikipedia/3477.txt deleted file mode 100644 index c559c2e2c5c7d03c54a0f796c3a3a425ad061c93..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3477.txt +++ /dev/null @@ -1,5 +0,0 @@ -The Concorde TSP Solver is a program for solving the travelling salesman problem. It was written by David Applegate, Robert E. Bixby, Vašek Chvátal, and William J. Cook, in ANSI C, and is freely available for academic use. - -Concorde has been applied to problems of gene mapping, protein function prediction, vehicle routing, conversion of bitmap images to continuous line drawings, scheduling ship movements for seismic surveys, and in studying the scaling properties of combinatorial optimization problems. - -According to Mulder, Concorde “is widely regarded as the fastest TSP solver, for large instances, currently in existence.” In 2001, Concorde won a 5000 guilder prize from CMG for solving a vehicle routing problem the company had posed in 1996. diff --git a/wiki/wikipedia/3478.txt b/wiki/wikipedia/3478.txt deleted file mode 100644 index 39aa784270b03e7eb1eb4bd921bd185f5975aa0c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3478.txt +++ /dev/null @@ -1,180 +0,0 @@ -In Riemannian geometry, Gauss's lemma asserts that any sufficiently small sphere centered at a point in a Riemannian manifold is perpendicular to every geodesic through the point. More formally, let M be a Riemannian manifold, equipped with its Levi-Civita connection, and p a point of M. The exponential map is a mapping from the tangent space at p to M: -$$ -\mathrm{exp} : T_pM \to M -$$ - -which is a diffeomorphism in a neighborhood of zero. Gauss' lemma asserts that the image of a sphere of sufficiently small radius in TpM under the exponential map is perpendicular to all geodesics originating at p. The lemma allows the exponential map to be understood as a radial isometry, and is of fundamental importance in the study of geodesic convexity and normal coordinates. 
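The content of the lemma is easy to test numerically on the round sphere S2 ⊂ R3, where the exponential map at p has the closed form exp_p(v) = cos(|v|) p + sin(|v|) v/|v| for tangent vectors v at p. The sketch below (our illustration, not part of the article) pushes a radial and a tangential direction forward through exp_p by central differences and checks the perpendicularity and radial-isometry claims:

```
import numpy as np

def exp_p(p, v):
    r = np.linalg.norm(v)
    return np.cos(r) * p + np.sin(r) * v / r

p = np.array([0.0, 0.0, 1.0])              # base point: the north pole
v = 0.7 * np.array([1.0, 0.0, 0.0])        # a tangent vector at p
w = np.array([0.0, 1.0, 0.0])              # tangent at p and normal to v
eps = 1e-6

# push-forwards under d(exp_p) at v, by central differences:
radial = (exp_p(p, (1 + eps) * v) - exp_p(p, (1 - eps) * v)) / (2 * eps)
tangential = (exp_p(p, v + eps * w) - exp_p(p, v - eps * w)) / (2 * eps)

print(np.dot(radial, tangential))                  # ~ 0: sphere image is perpendicular to the geodesic
print(np.linalg.norm(radial), np.linalg.norm(v))   # both ~ 0.7: radial isometry
```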
- -We define the exponential map at $p\in M$ by - - - -\exp_p: T_pM\supset B_{\epsilon}(0) \longrightarrow M,\quad v\longmapsto \gamma_{p,v}(1), - - - -where $\gamma_{p,v}$ is the unique geodesic with $\gamma_{p,v}(0)=p$ and tangent $\gamma_{p,v}'(0)=v \in T_pM$ and $\epsilon$ is chosen small enough so that for every $ v \in B_{\epsilon}(0) \subset T_pM $ the geodesic $\gamma_{p,v}$ is defined at $t=1$. So, if $M$ is complete, then, by the Hopf–Rinow theorem, $ \exp_p$ is defined on the whole tangent space. - -Let $\alpha : I\rightarrow T_pM$ be a curve differentiable in $T_pM$ such that $\alpha(0):=0$ and $\alpha'(0):=v$. Since $T_pM\cong \mathbb R^n$, it is clear that we can choose $\alpha(t):=vt$. In this case, by the definition of the differential of the exponential in $0$ applied to $v$, we obtain: - - - -T_0\exp_p(v) = \frac{\mathrm d}{\mathrm d t} \Bigl(\exp_p\circ\alpha(t)\Bigr)\Big\vert_{t=0} = \frac{\mathrm d}{\mathrm d t} \Bigl(\exp_p(vt)\Bigr)\Big\vert_{t=0}=\frac{\mathrm d}{\mathrm d t} \Bigl(\gamma(1,p,vt)\Bigr)\Big\vert_{t=0}= \gamma'(t,p,v)\Big\vert_{t=0}=v. - - - -So (with the right identification $T_0 T_p M \cong T_pM$) the differential of $\exp_p$ is the identity. By the implicit function theorem, $\exp_p$ is a diffeomorphism on a neighborhood of $0 \in T_pM$. The Gauss lemma now tells us that $\exp_p$ is also a radial isometry. - -Let $p\in M$. In what follows, we make the identification $T_vT_pM\cong T_pM\cong \mathbb R^n$. - -Gauss's Lemma states: - -Let $v,w\in B_\epsilon(0)\subset T_vT_pM\cong T_pM$ and $M\ni q:=\exp_p(v)$. Then, - - - -\langle T_v\exp_p(v), T_v\exp_p(w)\rangle_q = \langle v,w\rangle_p. - - - -For $p\in M$, this lemma means that $\exp_p$ is a radial isometry in the following sense: let $v\in B_\epsilon(0)$, i.e. such that $\exp_p$ is well defined at $v$, and let $q:=\exp_p(v)\in M$. Then the exponential $\exp_p$ remains an isometry in $q$ and, more generally, all along the geodesic $\gamma$ (as long as $\gamma(1,p,v)=\exp_p(v)$ is well defined). Radially, in all the directions permitted by the domain of definition of $\exp_p$, it remains an isometry. - -Recall that - - - -T_v\exp_p \colon T_pM\cong T_vT_pM\supset T_vB_\epsilon(0)\longrightarrow T_{\exp_p(v)}M. - - - -We proceed in three steps: - -* $T_v\exp_p(v)=v$ : let us construct a curve
$$
\alpha : \mathbb R \supset I \rightarrow T_pM
$$ such that $\alpha(0):=v\in T_pM$ and $\alpha'(0):=v\in T_vT_pM\cong T_pM$. Since $T_vT_pM\cong T_pM\cong \mathbb R^n$, we can put $\alpha(t):=v(t+1)$. - -Therefore, - - - -T_v\exp_p(v) = \frac{\mathrm d}{\mathrm d t}\Bigl(\exp_p\circ\alpha(t)\Bigr)\Big\vert_{t=0}=\frac{\mathrm d}{\mathrm d t}\Bigl(\exp_p(tv)\Bigr)\Big\vert_{t=1}=\Gamma(\gamma)_p^{\exp_p(v)}v=v, - - - -where $\Gamma$ is the parallel transport operator and $\gamma(t)=\exp_p(tv)$. The last equality is true because $\gamma$ is a geodesic, therefore $\gamma'$ is parallel. - -Now let us calculate the scalar product $\langle T_v\exp_p(v), T_v\exp_p(w)\rangle$. - -We separate $w$ into a component $w_T$ parallel to $v$ and a component $w_N$ normal to $v$. In particular, we put $w_T:=a v$, $a \in \mathbb R$. - -The preceding step implies directly: - - - -\langle T_v\exp_p(v), T_v\exp_p(w)\rangle = \langle T_v\exp_p(v), T_v\exp_p(w_T)\rangle + \langle T_v\exp_p(v), T_v\exp_p(w_N)\rangle - -=a \langle T_v\exp_p(v), T_v\exp_p(v)\rangle + \langle T_v\exp_p(v), T_v\exp_p(w_N)\rangle=\langle v, w_T\rangle + \langle T_v\exp_p(v), T_v\exp_p(w_N)\rangle.
- - - -We must therefore show that the second term vanishes, because, according to Gauss's Lemma, we must have:
$$
\langle T_v\exp_p(v), T_v\exp_p(w_N)\rangle = \langle v, w_N\rangle = 0.
$$ - -* $\langle T_v\exp_p(v), T_v\exp_p(w_N)\rangle = 0$ : - -Let us define the curve - - - -\alpha \colon [-\epsilon, \epsilon]\times [0,1] \longrightarrow T_pM,\qquad (s,t) \longmapsto tv+tsw_N. - - - -Note that - - - -\alpha(0,1) = v,\qquad - -\frac{\partial \alpha}{\partial t}(s,t) = v+sw_N, - -\qquad\frac{\partial \alpha}{\partial s}(0,t) = tw_N. - - - -Let us put: - - - -f \colon [-\epsilon, \epsilon ]\times [0,1] \longrightarrow M,\qquad (s,t)\longmapsto \exp_p(tv+tsw_N), - - - -and we calculate: - - - -T_v\exp_p(v)=T_{\alpha(0,1)}\exp_p\left(\frac{\partial \alpha}{\partial t}(0,1)\right)=\frac{\partial}{\partial t}\Bigl(\exp_p\circ\alpha(s,t)\Bigr)\Big\vert_{t=1, s=0}=\frac{\partial f}{\partial t}(0,1) - - - -and - - - -T_v\exp_p(w_N)=T_{\alpha(0,1)}\exp_p\left(\frac{\partial \alpha}{\partial s}(0,1)\right)=\frac{\partial}{\partial s}\Bigl(\exp_p\circ\alpha(s,t)\Bigr)\Big\vert_{t=1,s=0}=\frac{\partial f}{\partial s}(0,1). - - - -Hence - - - -\langle T_v\exp_p(v), T_v\exp_p(w_N)\rangle = \left\langle \frac{\partial f}{\partial t},\frac{\partial f}{\partial s}\right\rangle(0,1). - - - -We can now verify that this scalar product is actually independent of the variable $t$, and therefore that, for example: - - - -\left\langle\frac{\partial f}{\partial t},\frac{\partial f}{\partial s}\right\rangle(0,1) = \left\langle\frac{\partial f}{\partial t},\frac{\partial f}{\partial s}\right\rangle(0,0) = 0, - - - -because, according to what has been given above: - - - -\lim_{t\rightarrow 0}\frac{\partial f}{\partial s}(0,t) = \lim_{t\rightarrow 0}T_{tv}\exp_p(tw_N) = 0 - - - -since the differential is a linear map. This will therefore prove the lemma. - -* We verify that $\frac{\partial}{\partial t}\left\langle \frac{\partial f}{\partial t},\frac{\partial f}{\partial s}\right\rangle=0$: this is a direct calculation. Since the maps $t\mapsto f(s,t)$ are geodesics, - - - -\frac{\partial}{\partial t}\left\langle \frac{\partial f}{\partial t},\frac{\partial f}{\partial s}\right\rangle=\left\langle\underbrace{\frac{D}{\partial t}\frac{\partial f}{\partial t}}_{=0}, \frac{\partial f}{\partial s}\right\rangle + \left\langle\frac{\partial f}{\partial t},\frac{D}{\partial t}\frac{\partial f}{\partial s}\right\rangle = \left\langle\frac{\partial f}{\partial t},\frac{D}{\partial s}\frac{\partial f}{\partial t}\right\rangle=\frac12\frac{\partial }{\partial s}\left\langle \frac{\partial f}{\partial t}, \frac{\partial f}{\partial t}\right\rangle. - - - -Again, since the maps $t\mapsto f(s,t)$ are geodesics, - -the function $t\mapsto\left\langle\frac{\partial f}{\partial t},\frac{\partial f}{\partial t}\right\rangle$ is constant. Thus, - - - -\frac{\partial }{\partial s}\left\langle \frac{\partial f}{\partial t}, \frac{\partial f}{\partial t}\right\rangle - -=\frac{\partial }{\partial s}\left\langle v+sw_N,v+sw_N\right\rangle - -=2\left\langle v,w_N\right\rangle=0. - - diff --git a/wiki/wikipedia/3479.txt b/wiki/wikipedia/3479.txt deleted file mode 100644 index 82236d7ba065ed2053cd4d8bbf42a1d4ba4fd0d5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3479.txt +++ /dev/null @@ -1,92 +0,0 @@ -The Kantorovich theorem, or Newton–Kantorovich theorem, is a mathematical statement on the semi-local convergence of Newton's method. It was first stated by Leonid Kantorovich in 1948.
It is similar to the form of the Banach fixed-point theorem, although it states existence and uniqueness of a zero rather than a fixed point. - -Newton's method constructs a sequence of points that under certain conditions will converge to a solution $x$ of an equation $f(x)=0$ or a vector solution of a system of equations $F(x)=0$. The Kantorovich theorem gives conditions on the initial point of this sequence. If those conditions are satisfied then a solution exists close to the initial point and the sequence converges to that point. - -Let $X\subset\R^n$ be an open subset and $F:X \subset \R^n \to\R^n$ a differentiable function with a Jacobian $F^{\prime}(\mathbf x)$ that is locally Lipschitz continuous (for instance if $F$ is twice differentiable). That is, it is assumed that for any open subset $U\subset X$ there exists a constant $L>0$ such that for any $\mathbf x,\mathbf y\in U$
$$
\|F'(\mathbf x)-F'(\mathbf y)\|\le L\|\mathbf x-\mathbf y\|
$$ - -holds. The norm on the left is some operator norm that is compatible with the vector norm on the right. This inequality can be rewritten to only use the vector norm. Then for any vector $\mathbf v\in\R^n$ the inequality
$$
\|F'(\mathbf x)(\mathbf v)-F'(\mathbf y)(\mathbf v)\|\le L\|\mathbf x-\mathbf y\|\|\mathbf v\|
$$ - -must hold. - -Now choose any initial point $\mathbf x_0\in X$. Assume that $F'(\mathbf x_0)$ is invertible and construct the Newton step $\mathbf h_0=-F'(\mathbf x_0)^{-1}F(\mathbf x_0).$ - -The next assumption is that not only the next point $\mathbf x_1=\mathbf x_0+\mathbf h_0$ but the entire ball $B(\mathbf x_1,\|\mathbf h_0\|)$ is contained inside the set $X$. Let $M\le L$ be the Lipschitz constant for the Jacobian over this ball. - -As a last preparation, construct recursively, as long as it is possible, the sequences $(\mathbf x_k)_k$, $(\mathbf h_k)_k$, $(\alpha_k)_k$ according to - -\begin{alignat}{2} - -\mathbf h_k&=-F'(\mathbf x_k)^{-1}F(\mathbf x_k)\\[0.4em] - -\alpha_k&=M\|F'(\mathbf x_k)^{-1}\|\|\mathbf h_k\|\\[0.4em] - -\mathbf x_{k+1}&=\mathbf x_k+\mathbf h_k. - -\end{alignat} - -Now if $\alpha_0\le\tfrac12$ then - -#a solution $\mathbf x^*$ of $F(\mathbf x^*)=0$ exists inside the closed ball $\bar B(\mathbf x_1,\|\mathbf h_0\|)$ and - -#the Newton iteration starting in $\mathbf x_0$ converges to $\mathbf x^*$ with at least linear order of convergence. - -A statement that is more precise but slightly more difficult to prove uses the roots $t^\ast\le t^{**}$ of the quadratic polynomial - -$$ p(t)=\left(\tfrac12 M\|F'(\mathbf x_0)^{-1}\|\right)t^2-t+\|\mathbf h_0\|, \qquad t^{\ast/\ast\ast}=\frac{2\|\mathbf h_0\|}{1\pm\sqrt{1-2\alpha_0}}, $$ - -and their ratio - -$$ \theta=\frac{t^*}{t^{**}}=\frac{1-\sqrt{1-2\alpha_0}}{1+\sqrt{1-2\alpha_0}}. $$ - -Then - -#a solution $\mathbf x^*$ exists inside the closed ball $\bar B(\mathbf x_1,\theta\|\mathbf h_0\|)\subset\bar B(\mathbf x_0,t^*)$ - -#it is unique inside the bigger ball $B(\mathbf x_0,t^{*\ast})$ - -#and the convergence to the solution of $F$ is dominated by the convergence of the Newton iteration of the quadratic polynomial $p(t)$ towards its smallest root $t^\ast$, if $t_0=0,t_{k+1}=t_k-\tfrac{p(t_k)}{p'(t_k)}$, then - -#:$\|\mathbf x_{k+p}-\mathbf x_k\|\le t_{k+p}-t_k.$ - -#The quadratic convergence is obtained from the error estimate - -#: - -\|\mathbf x_{n+1}-\mathbf x^*\| - -\le \theta^{2^n}\|\mathbf x_{n+1}-\mathbf x_n\| - -\le\frac{\theta^{2^n}}{2^n}\|\mathbf h_0\|.
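A small numerical sketch of the test $\alpha_0\le\tfrac12$ (ours, not from the article; the example system is an arbitrary choice). For $F(x,y)=(x^2+y^2-1,\ x-y)$ the Jacobian satisfies $\|F'(\mathbf x)-F'(\mathbf y)\|_2 = 2\|\mathbf x-\mathbf y\|_2$ exactly, so one may take $M=2$:

```
import numpy as np

def F(z):
    x, y = z
    return np.array([x**2 + y**2 - 1.0, x - y])

def J(z):
    x, y = z
    return np.array([[2 * x, 2 * y], [1.0, -1.0]])

x0 = np.array([0.8, 0.6])
Jinv = np.linalg.inv(J(x0))
h0 = -Jinv @ F(x0)
M = 2.0                                   # exact Lipschitz constant of J in the 2-norm
alpha0 = M * np.linalg.norm(Jinv, 2) * np.linalg.norm(h0)
print(alpha0, alpha0 <= 0.5)              # ~0.21 <= 1/2: a zero exists in B(x0 + h0, ||h0||)

xk = x0
for _ in range(6):                        # Newton iteration
    xk = xk - np.linalg.solve(J(xk), F(xk))
print(xk)                                 # ~ (0.70710678, 0.70710678) = (1/sqrt(2), 1/sqrt(2))
```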
- - - -In 1986, Yamamoto proved that the error evaluations of the Newton method such as those of Doring (1969), Ostrowski (1971, 1973), Gragg-Tapia (1974), Potra-Ptak (1980), Miel (1981), and Potra (1984) can be derived from the Kantorovich theorem. - -There is a q-analog of the Kantorovich theorem. For other generalizations/variations, see Ortega & Rheinboldt (1970). - -Oishi and Tanabe claimed that the Kantorovich theorem can be applied to obtain reliable solutions of linear programming. diff --git a/wiki/wikipedia/348.txt b/wiki/wikipedia/348.txt deleted file mode 100644 index 88785dcecbe0f0a4871b7f9ec2fbdc9c166f3e4c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/348.txt +++ /dev/null @@ -1,16 +0,0 @@ -In mathematics, the Wielandt theorem characterizes the gamma function, defined for all complex numbers $z$ for which $\mathrm{Re}z > 0$ by
$$
\Gamma(z)=\int_0^{+\infty} t^{z-1} \mathrm e^{-t}\mathrm dt,
$$ - -as the only function $f$ defined on the half-plane $H := \{ z \in \Complex : \operatorname{Re}z > 0\}$ such that: - -* $f$ is holomorphic on $H$; - -* $f(1)=1$; - -* $f(z+1)=zf(z)$ for all $z \in H$ and - -* $f$ is bounded on the strip $\{ z \in \Complex : 1 \leq \operatorname{Re}z \leq 2\}$. - -This theorem is named after the mathematician Helmut Wielandt. diff --git a/wiki/wikipedia/3480.txt b/wiki/wikipedia/3480.txt deleted file mode 100644 index 2ac71df0b2c48f53366f6074fe27723ca9831a02..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3480.txt +++ /dev/null @@ -1,29 +0,0 @@ -Homological mirror symmetry is a mathematical conjecture made by Maxim Kontsevich. It seeks a systematic mathematical explanation for a phenomenon called mirror symmetry first observed by physicists studying string theory. - -In an address to the 1994 International Congress of Mathematicians in Zürich, Kontsevich speculated that mirror symmetry for a pair of Calabi–Yau manifolds X and Y could be explained as an equivalence of a triangulated category constructed from the algebraic geometry of X (the derived category of coherent sheaves on X) and another triangulated category constructed from the symplectic geometry of Y (the derived Fukaya category). - -Edward Witten originally described the topological twisting of the N=(2,2) supersymmetric field theory into what he called the A and B model topological string theories. These models concern maps from Riemann surfaces into a fixed target—usually a Calabi–Yau manifold. Most of the mathematical predictions of mirror symmetry are embedded in the physical equivalence of the A-model on Y with the B-model on its mirror X. When the Riemann surfaces have empty boundary, they represent the worldsheets of closed strings. To cover the case of open strings, one must introduce boundary conditions to preserve the supersymmetry. In the A-model, these boundary conditions come in the form of Lagrangian submanifolds of Y with some additional structure (often called a brane structure). In the B-model, the boundary conditions come in the form of holomorphic (or algebraic) submanifolds of X with holomorphic (or algebraic) vector bundles on them. These are the objects one uses to build the relevant categories. They are often called A and B branes respectively. Morphisms in the categories are given by the massless spectrum of open strings stretching between two branes. - -The closed string A and B models only capture the so-called topological sector—a small portion of the full string theory.
Similarly, the branes in these models are only topological approximations to the full dynamical objects that are D-branes. Even so, the mathematics resulting from this small piece of string theory has been both deep and difficult. - -The School of Mathematics at the Institute for Advanced Study in Princeton plans a special year devoted to Homological Mirror Symmetry during the 2016-17 academic year. Among the distinguished participants will be Paul Seidel from MIT, Maxim Kontsevich from IHÉS, and Denis Auroux, from UC Berkeley. - -Only in a few examples have mathematicians been able to verify the conjecture. In his seminal address, Kontsevich commented that the conjecture could be proved in the case of elliptic curves using theta functions. Following this route, Alexander Polishchuk and Eric Zaslow provided a proof of a version of the conjecture for elliptic curves. Kenji Fukaya was able to establish elements of the conjecture for abelian varieties. Later, Kontsevich and Yan Soibelman provided a proof of the majority of the conjecture for nonsingular torus bundles over affine manifolds using ideas from the SYZ conjecture. In 2003, Paul Seidel proved the conjecture in the case of the quartic surface. In 2002, Hausel explained the SYZ conjecture in the context of the Hitchin system and Langlands duality. - -The dimensions hp,q of spaces of harmonic (p,q)-differential forms (equivalently, the cohomology, i.e., closed forms modulo exact forms) are conventionally arranged in a diamond shape called the Hodge Diamond. These (p, q)-Betti numbers can be computed for complete intersections using a generating function described by Friedrich Hirzebruch. For a three-dimensional manifold, for example, the Hodge diamond has p and q ranging from 0 to 3: - -h3,3 - -h3,2 h2,3 - -h3,1 h2,2 h1,3 - -h3,0 h2,1 h1,2 h0,3 - -h2,0 h1,1 h0,2 - -h1,0 h0,1 - -h0,0 - -Mirror symmetry translates the dimension hp,q of the space of (p, q)-differential forms on the original manifold into hn-p,q for the mirror manifold. Namely, for any Calabi–Yau manifold the Hodge diamond is unchanged by a rotation by π radians and the Hodge diamonds of mirror Calabi–Yau manifolds are related by a rotation by π/2 radians. - -In the case of an elliptic curve, which is viewed as a 1-dimensional Calabi–Yau manifold, the Hodge diamond is especially simple: it is the following figure. - -1 - -1 1 - -1 - -In the case of a K3 surface, which is viewed as a 2-dimensional Calabi–Yau manifold, since the Betti numbers are {1, 0, 22, 0, 1}, their Hodge diamond is the following figure. - -1 - -0 0 - -1 20 1 - -0 0 - -1 - -In the 3-dimensional case, usually called the Calabi–Yau manifold, a very interesting thing happens. There are sometimes mirror pairs, say M and W, that have symmetric Hodge diamonds with respect to each other along a diagonal line. - -M's diamond: - -1 - -0 0 - -0 h1,1 0 - -1 h2,1 h2,1 1 - -0 h1,1 0 - -0 0 - -1 - -W's diamond: - -1 - -0 0 - -0 h2,1 0 - -1 h1,1 h1,1 1 - -0 h2,1 0 - -0 0 - -1 - -M and W correspond to A- and B-model in string theory. Mirror symmetry does not only exchange the homological dimensions but also the symplectic structure and complex structure on the mirror pairs. That is the origin of homological mirror symmetry. - -In 1990-1991, the work of Candelas, de la Ossa, Green, and Parkes had a major impact not only on enumerative algebraic geometry but on mathematics as a whole, and motivated Kontsevich. The mirror pair of two quintic threefolds in this paper have the following Hodge diamonds: for the quintic M, h1,1 = 1 and h2,1 = 101, while for its mirror W, h1,1 = 101 and h2,1 = 1.
diff --git a/wiki/wikipedia/3481.txt b/wiki/wikipedia/3481.txt deleted file mode 100644 index c26b31745b6f1e9f20b716ae06353f80a25d3656..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3481.txt +++ /dev/null @@ -1,212 +0,0 @@ -The Banach–Tarski paradox is a theorem in set-theoretic geometry, which states the following: Given a solid ball in 3‑dimensional space, there exists a decomposition of the ball into a finite number of disjoint subsets, which can then be put back together in a different way to yield two identical copies of the original ball. Indeed, the reassembly process involves only moving the pieces around and rotating them without changing their shape. However, the pieces themselves are not "solids" in the usual sense, but infinite scatterings of points. The reconstruction can work with as few as five pieces. - -A stronger form of the theorem implies that given any two "reasonable" solid objects (such as a small ball and a huge ball), the cut pieces of either one can be reassembled into the other. This is often stated informally as "a pea can be chopped up and reassembled into the Sun" and called the "pea and the Sun paradox". - -The theorem is called a paradox because it contradicts basic geometric intuition. "Doubling the ball" by dividing it into parts and moving them around by rotations and translations, without any stretching, bending, or adding new points, seems to be impossible, since all these operations ought, intuitively speaking, to preserve the volume. The intuition that such operations preserve volumes is not mathematically absurd and it is even included in the formal definition of volumes. However, this is not applicable here because in this case it is impossible to define the volumes of the considered subsets. Reassembling them reproduces a set that has a volume, which happens to be different from the volume at the start. - -Unlike most theorems in geometry, the proof of this result depends on the choice of axioms for set theory in a critical way. It can be proven using the axiom of choice, which allows for the construction of non-measurable sets, i.e., collections of points that do not have a volume in the ordinary sense, and whose construction requires an uncountable number of choices. - -It was shown in 2005 that the pieces in the decomposition can be chosen in such a way that they can be moved continuously into place without running into one another. - -As proved independently by Leroy and Simpson, the Banach–Tarski paradox does not violate volumes if one works with locales rather than topological spaces. In this abstract setting, it is possible to have subspaces which have no points but are still nonempty. The parts of the paradoxical decomposition do intersect a lot in the sense of locales, so much that some of these intersections should be given a positive mass. Allowing for this hidden mass to be taken into account, the theory of locales permits all subsets (and even all sublocales) of the Euclidean space to be satisfactorily measured. - -In a paper published in 1924, Stefan Banach and Alfred Tarski gave a construction of such a paradoxical decomposition, based on earlier work by Giuseppe Vitali concerning the unit interval and on the paradoxical decompositions of the sphere by Felix Hausdorff, and discussed a number of related questions concerning decompositions of subsets of Euclidean spaces in various dimensions.
They proved the following more general statement, the strong form of the Banach–Tarski paradox: - -Given any two bounded subsets A and B of a Euclidean space in at least three dimensions, both of which have a nonempty interior, there are partitions of A and B into a finite number of disjoint subsets, $A=A_1 \cup \cdots\cup A_k$, $B=B_1 \cup \cdots\cup B_k$ (for some integer k), such that for each (integer) i between 1 and k, the sets Ai and Bi are congruent. - -Now let A be the original ball and B be the union of two translated copies of the original ball. Then the proposition means that you can divide the original ball A into a certain number of pieces and then rotate and translate these pieces in such a way that the result is the whole set B, which contains two copies of A. - -The strong form of the Banach–Tarski paradox is false in dimensions one and two, but Banach and Tarski showed that an analogous statement remains true if countably many subsets are allowed. The difference between dimensions 1 and 2 on the one hand, and 3 and higher on the other hand, is due to the richer structure of the group E(n) of Euclidean motions in 3 dimensions. For n = 1, 2 the group is solvable, but for n ≥ 3 it contains a free group with two generators. John von Neumann studied the properties of the group of equivalences that make a paradoxical decomposition possible, and introduced the notion of amenable groups. He also found a form of the paradox in the plane which uses area-preserving affine transformations in place of the usual congruences. - -Tarski proved that amenable groups are precisely those for which no paradoxical decompositions exist. Since only free subgroups are needed in the Banach–Tarski paradox, this led to the long-standing von Neumann conjecture, which was disproved in 1980. - -The Banach–Tarski paradox states that a ball in the ordinary Euclidean space can be doubled using only the operations of partitioning into subsets, replacing a set with a congruent set, and reassembling. Its mathematical structure is greatly elucidated by emphasizing the role played by the group of Euclidean motions and introducing the notions of equidecomposable sets and a paradoxical set. Suppose that G is a group acting on a set X. In the most important special case, X is an n-dimensional Euclidean space (for integral n), and G consists of all isometries of X, i.e. the transformations of X into itself that preserve the distances, usually denoted E(n). Two geometric figures that can be transformed into each other are called congruent, and this terminology will be extended to the general G-action. Two subsets A and B of X are called G-equidecomposable, or equidecomposable with respect to G, if A and B can be partitioned into the same finite number of respectively G-congruent pieces. This defines an equivalence relation among all subsets of X. Formally, if there exist non-empty sets $A_1, \dots, A_k$, $B_1, \dots, B_k$ such that - - - -A=\bigcup_{i=1}^k A_i, \quad B=\bigcup_{i=1}^k B_i, - - -$$ -\quad A_i\cap A_j=B_i\cap B_j=\emptyset\quad\text{for all }1 \leq i < j \leq k, -$$ - -and there exist elements $g_i \in G$ such that -$$ -g_i(A_i) = B_i\text{ for all }1 \le i \le k, -$$ - -then it can be said that A and B are G-equidecomposable using k pieces. If a set E has two disjoint subsets A and B such that A and E, as well as B and E, are G-equidecomposable, then E is called paradoxical. 
- -Using this terminology, the Banach–Tarski paradox can be reformulated as follows: - -A three-dimensional Euclidean ball is equidecomposable with two copies of itself. - -In fact, there is a sharp result in this case, due to Raphael M. Robinson: doubling the ball can be accomplished with five pieces, and fewer than five pieces will not suffice. - -The strong version of the paradox claims: - -Any two bounded subsets of 3-dimensional Euclidean space with non-empty interiors are equidecomposable. - -While apparently more general, this statement is derived in a simple way from the doubling of a ball by using a generalization of the Bernstein–Schroeder theorem due to Banach that implies that if A is equidecomposable with a subset of B and B is equidecomposable with a subset of A, then A and B are equidecomposable. - -The Banach–Tarski paradox can be put in context by pointing out that for two sets in the strong form of the paradox, there is always a bijective function that can map the points in one shape into the other in a one-to-one fashion. In the language of Georg Cantor's set theory, these two sets have equal cardinality. Thus, if one enlarges the group to allow arbitrary bijections of X, then all sets with non-empty interior become congruent. Likewise, one ball can be made into a larger or smaller ball by stretching, or in other words, by applying similarity transformations. Hence, if the group G is large enough, G-equidecomposable sets may be found whose "sizes" vary. Moreover, since a countable set can be made into two copies of itself, one might expect that using countably many pieces could somehow do the trick. - -On the other hand, in the Banach–Tarski paradox, the number of pieces is finite and the allowed equivalences are Euclidean congruences, which preserve the volumes. Yet, somehow, they end up doubling the volume of the ball! While this is certainly surprising, some of the pieces used in the paradoxical decomposition are non-measurable sets, so the notion of volume (more precisely, Lebesgue measure) is not defined for them, and the partitioning cannot be accomplished in a practical way. In fact, the Banach–Tarski paradox demonstrates that it is impossible to find a finitely-additive measure (or a Banach measure) defined on all subsets of a Euclidean space of three (and greater) dimensions that is invariant with respect to Euclidean motions and takes the value one on a unit cube. In his later work, Tarski showed that, conversely, non-existence of paradoxical decompositions of this type implies the existence of a finitely-additive invariant measure. - -The heart of the proof of the "doubling the ball" form of the paradox presented below is the remarkable fact that by a Euclidean isometry (and renaming of elements), one can divide a certain set (essentially, the surface of a unit sphere) into four parts, then rotate one of them to become itself plus two of the other parts. This follows rather easily from an F2-paradoxical decomposition of F2, the free group with two generators. Banach and Tarski's proof relied on an analogous fact discovered by Hausdorff some years earlier: the surface of a unit sphere in space is a disjoint union of three sets B, C, D and a countable set E such that, on the one hand, B, C, D are pairwise congruent, and on the other hand, B is congruent with the union of C and D. This is often called the Hausdorff paradox.
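The F2-paradoxical decomposition invoked here can be checked mechanically on words of bounded length. A sketch of our own (anticipating the explicit construction below; the letters A and B stand for a^-1 and b^-1):

```
from itertools import product

INV = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}   # A = a^-1, B = b^-1

def reduce_word(word):
    out = []
    for c in word:
        if out and out[-1] == INV[c]:
            out.pop()                            # cancel an adjacent inverse pair
        else:
            out.append(c)
    return ''.join(out)

# all reduced words of length <= 4, and the pieces S(g) of words starting with g
words = {reduce_word(''.join(w)) for n in range(5) for w in product('aAbB', repeat=n)}
S = {g: {w for w in words if w.startswith(g)} for g in 'aAbB'}
aSA = {reduce_word('a' + w) for w in S['A']}     # the left translate aS(a^-1)

short = {w for w in words if len(w) <= 3}        # avoid truncation at the length cutoff
print(short <= S['a'] | aSA)                     # True: F2 = S(a) u aS(a^-1)
print(not (S['a'] & aSA))                        # True: and the union is disjoint
```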
- -Banach and Tarski explicitly acknowledge Giuseppe Vitali's 1905 construction of the set bearing his name, Hausdorff's paradox (1914), and an earlier (1923) paper of Banach as the precursors to their work. Vitali's and Hausdorff's constructions depend on Zermelo's axiom of choice ("AC"), which is also crucial to the Banach–Tarski paper, both for proving their paradox and for the proof of another result: - -Two Euclidean polygons, one of which strictly contains the other, are not equidecomposable. - -They remark: - -Le rôle que joue cet axiome dans nos raisonnements nous semble mériter l'attention - -(The role this axiom plays in our reasoning seems to us to deserve attention) - -They point out that while the second result fully agrees with geometric intuition, its proof uses AC in an even more substantial way than the proof of the paradox. Thus Banach and Tarski imply that AC should not be rejected solely because it produces a paradoxical decomposition, for such an argument also undermines proofs of geometrically intuitive statements. - -However, in 1949, A. P. Morse showed that the statement about Euclidean polygons can be proved in ZF set theory and thus does not require the axiom of choice. In 1964, Paul Cohen proved that the axiom of choice is independent from ZF – that is, it cannot be proved from ZF. A weaker version of an axiom of choice is the axiom of dependent choice, DC, and it has been shown that DC is not sufficient for proving the Banach–Tarski paradox, that is, - -The Banach–Tarski paradox is not a theorem of ZF, nor of ZF+DC. - -Large amounts of mathematics use AC. As Stan Wagon points out at the end of his monograph, the Banach–Tarski paradox has been more significant for its role in pure mathematics than for foundational questions: it motivated a fruitful new direction for research, the amenability of groups, which has nothing to do with the foundational questions. - -In 1991, using then-recent results by Matthew Foreman and Friedrich Wehrung, Janusz Pawlikowski proved that the Banach–Tarski paradox follows from ZF plus the Hahn–Banach theorem. The Hahn–Banach theorem does not rely on the full axiom of choice but can be proved using a weaker version of AC called the ultrafilter lemma. So Pawlikowski proved that the set theory needed to prove the Banach–Tarski paradox, while stronger than ZF, is weaker than full ZFC. - -Here a proof is sketched which is similar but not identical to that given by Banach and Tarski. Essentially, the paradoxical decomposition of the ball is achieved in four steps: - -# Find a paradoxical decomposition of the free group in two generators. - -# Find a group of rotations in 3-d space isomorphic to the free group in two generators. - -# Use the paradoxical decomposition of that group and the axiom of choice to produce a paradoxical decomposition of the hollow unit sphere. - -# Extend this decomposition of the sphere to a decomposition of the solid unit ball. - -These steps are discussed in more detail below. - -Figure: Cayley graph of F2, showing decomposition into the sets S(a) and aS(a-1). Traversing a horizontal edge of the graph in the rightward direction represents left multiplication of an element of F2 by a; traversing a vertical edge of the graph in the upward direction represents left multiplication of an element of F2 by b. Elements of the set S(a) are green dots; elements of the set aS(a-1) are blue dots or red dots with blue border. Red dots with blue border are elements of S(a-1), which is a subset of aS(a-1). - -The free group with two generators a and b consists of all finite strings that can be formed from the four symbols a, a-1, b and b-1 such that no a appears directly next to an a-1 and no b appears directly next to a b-1. Two such strings can be concatenated and converted into a string of this type by repeatedly replacing the "forbidden" substrings with the empty string. For instance: abab-1a-1 concatenated with abab-1a yields abab-1a-1abab-1a, which contains the substring a-1a, and so gets reduced to abab-1bab-1a, which contains the substring b-1b, which gets reduced to abaab-1a. One can check that the set of those strings with this operation forms a group with identity element the empty string e. This group may be called F2. - -The group $F_2$ can be "paradoxically decomposed" as follows: Let S(a) be the set of all non-forbidden strings that start with a and define S(a-1), S(b) and S(b-1) similarly. Clearly,
$$
F_2=\{e\}\cup S(a)\cup S(a^{-1})\cup S(b)\cup S(b^{-1})
$$ - -but also
$$
F_2=aS(a^{-1})\cup S(a),
$$ - -and
$$
F_2=bS(b^{-1})\cup S(b),
$$ - -where the notation aS(a-1) means take all the strings in S(a-1) and concatenate them on the left with a. - -This is at the core of the proof. For example, there may be a string $aa^{-1}b$ in the set $aS(a^{-1})$ which, because of the rule that $a$ must not appear next to $a^{-1}$, reduces to the string $b$. Similarly, $aS(a^{-1})$ contains all the strings that start with $a^{-1}$ (for example, the string $aa^{-1}a^{-1}$ which reduces to $a^{-1}$). In this way, $aS(a^{-1})$ contains all the strings that start with $b$, $b^{-1}$ and $a^{-1}$. - -Group F2 has been cut into four pieces (plus the singleton {e}), then two of them "shifted" by multiplying with a or b, then "reassembled" as two pieces to make one copy of $F_2$ and the other two to make another copy of $F_2$. That is exactly what we intend to do to the ball. - -In order to find a free group of rotations of 3D space, i.e. that behaves just like (or "is isomorphic to") the free group F2, two orthogonal axes are taken (e.g. the x and z axes). Then, A is taken to be a rotation of $\theta = \arccos\left(\frac{1}{3}\right)$ about the x axis, and B to be a rotation of $\theta$ about the z axis (there are many other suitable pairs of irrational multiples of π that could be used here as well). - -The group of rotations generated by A and B will be called H. - -Let $\omega$ be an element of H that starts with a positive rotation about the z axis, that is, an element of the form $\omega=\ldots B^{k_3}A^{k_2}B^{k_1}$ with $k_1>0,\ k_2,k_3,\ldots,k_n\ne0,\ n\ge1$. It can be shown by induction that $\omega$ maps the point $(1,0,0)$ to $\left(\frac k {3^N}, \frac{l\sqrt 2}{3^N}, \frac m {3^N}\right)$, for some $ k,l,m \in \mathbb Z,N \in \mathbb N$. Analyzing $ k,l $ and $m$ modulo 3, one can show that $ l\neq 0 $. The same argument repeated (by symmetry of the problem) is valid when $\omega$ starts with a negative rotation about the z axis, or a rotation about the x axis. This shows that if $\omega$ is given by a non-trivial word in A and B, then $ \omega \neq e$. Therefore, the group H is a free group, isomorphic to F2. - -The two rotations behave just like the elements a and b in the group F2: there is now a paradoxical decomposition of H. - -This step cannot be performed in two dimensions since it involves rotations in three dimensions.
If two rotations are taken about the same axis, the resulting group is the abelian circle group and does not have the property required in step 1. - -An alternate arithmetic proof of the existence of free groups in some special orthogonal groups using integral quaternions leads to paradoxical decompositions of the rotation group. - -The unit sphere S2 is partitioned into orbits by the action of our group H: two points belong to the same orbit if and only if there is a rotation in H which moves the first point into the second. (Note that the orbit of a point is a dense set in S2.) The axiom of choice can be used to pick exactly one point from every orbit; collect these points into a set M. The action of H on a given orbit is free and transitive and so each orbit can be identified with H. In other words, every point in S2 can be reached in exactly one way by applying the proper rotation from H to the proper element from M. Because of this, the paradoxical decomposition of H yields a paradoxical decomposition of S2 into four pieces A1, A2, A3, A4 as follows: -$$
A_1=S(a)M \cup M \cup B
$$ -$$
A_2=S(a^{-1})M \setminus B
$$ -$$
\displaystyle A_3=S(b)M
$$ -$$
\displaystyle A_4=S(b^{-1})M
$$ - -where we define
$$
S(a)M = \{s(x) | s \in S(a), x\in M\}
$$ - -and likewise for the other sets, and where we define
$$
B = a^{-1}M \cup a^{-2}M \cup\dots
$$ - -(The five "paradoxical" parts of F2 were not used directly, as they would leave M as an extra piece after doubling, owing to the presence of the singleton {e}!) - -The (majority of the) sphere has now been divided into four sets (each one dense on the sphere), and when two of these are rotated, the result is double what was there before: -$$
aA_2 = A_2 \cup A_3 \cup A_4
$$ -$$
bA_4 = A_1 \cup A_2 \cup A_4
$$ - -Finally, connect every point on S2 with a half-open segment to the origin; the paradoxical decomposition of S2 then yields a paradoxical decomposition of the solid unit ball minus the point at the ball's center. (This center point needs a bit more care; see below.) - -N.B. This sketch glosses over some details. One has to be careful about the set of points on the sphere which happen to lie on the axis of some rotation in H. However, there are only countably many such points, and like the case of the point at the center of the ball, it is possible to patch the proof to account for them all. (See below.) - -In Step 3, the sphere was partitioned into orbits of our group H. To streamline the proof, the discussion of points that are fixed by some rotation was omitted; since the paradoxical decomposition of F2 relies on shifting certain subsets, the fact that some points are fixed might cause some trouble. Since any rotation of S2 (other than the null rotation) has exactly two fixed points, and since H, which is isomorphic to F2, is countable, there are countably many points of S2 that are fixed by some rotation in H. Denote this set of fixed points as D. Step 3 proves that S2 − D admits a paradoxical decomposition. - -What remains to be shown is the Claim: S2 − D is equidecomposable with S2. - -Proof. Let λ be some line through the origin that does not intersect any point in D. This is possible since D is countable. Let J be the set of angles, α, such that for some natural number n, and some P in D, r(nα)P is also in D, where r(nα) is a rotation about λ of nα. Then J is countable. So there exists an angle θ not in J. Let ρ be the rotation about λ by θ.
Then ρ acts on S2 with no fixed points in D, i.e., ρn(D) is disjoint from D, and for natural numbers m < n, ρn(D) is disjoint from ρm(D). Let E be the disjoint union of ρn(D) over n = 0, 1, 2, ... . Then S2 = E ∪ (S2 − E) ~ ρ(E) ∪ (S2 − E) = (E − D) ∪ (S2 − E) = S2 − D, where ~ denotes "is equidecomposable to". - -For step 4, it has already been shown that the ball minus a point admits a paradoxical decomposition; it remains to be shown that the ball minus a point is equidecomposable with the ball. Consider a circle within the ball, containing the point at the center of the ball. Using an argument like that used to prove the Claim, one can see that the full circle is equidecomposable with the circle minus the point at the ball's center. (Basically, a countable set of points on the circle can be rotated to give itself plus one more point.) Note that this involves the rotation about a point other than the origin, so the Banach–Tarski paradox involves isometries of Euclidean 3-space rather than just SO(3). - -Use is made of the fact that if A ~ B and B ~ C, then A ~ C. The decomposition of A into C can be done using a number of pieces equal to the product of the numbers needed for taking A into B and for taking B into C. - -The proof sketched above requires 2 × 4 × 2 + 8 = 24 pieces - a factor of 2 to remove fixed points, a factor 4 from step 1, a factor 2 to recreate fixed points, and 8 for the center point of the second ball. But in step 1 when moving {e} and all strings of the form a^n into S(a−1), do this to all orbits except one. Move {e} of this last orbit to the center point of the second ball. This brings the total down to 16 + 1 pieces. With more algebra, one can also decompose fixed orbits into 4 sets as in step 1. This gives 5 pieces and is the best possible. - -Using the Banach–Tarski paradox, it is possible to obtain k copies of a ball in the Euclidean n-space from one, for any integers n ≥ 3 and k ≥ 1, i.e. a ball can be cut into k pieces so that each of them is equidecomposable to a ball of the same size as the original. Using the fact that the free group F2 of rank 2 admits a free subgroup of countably infinite rank, a similar proof yields that the unit sphere Sn−1 can be partitioned into countably infinitely many pieces, each of which is equidecomposable (with two pieces) to Sn−1 using rotations. By using analytic properties of the rotation group SO(n), which is a connected analytic Lie group, one can further prove that the sphere Sn−1 can be partitioned into as many pieces as there are real numbers (that is, $2^{\aleph_0}$ pieces), so that each piece is equidecomposable with two pieces to Sn−1 using rotations. These results then extend to the unit ball deprived of the origin. A 2010 article by Valeriy Churkin gives a new proof of the continuous version of the Banach–Tarski paradox. - -In the Euclidean plane, two figures that are equidecomposable with respect to the group of Euclidean motions are necessarily of the same area, and therefore, a paradoxical decomposition of a square or disk of Banach–Tarski type that uses only Euclidean congruences is impossible. A conceptual explanation of the distinction between the planar and higher-dimensional cases was given by John von Neumann: unlike the group SO(3) of rotations in three dimensions, the group E(2) of Euclidean motions of the plane is solvable, which implies the existence of a finitely-additive measure on E(2) and R2 which is invariant under translations and rotations, and rules out paradoxical decompositions of non-negligible sets.
Von Neumann then posed the following question: can such a paradoxical decomposition be constructed if one allows a larger group of equivalences? - -It is clear that if one permits similarities, any two squares in the plane become equivalent even without further subdivision. This motivates restricting one's attention to the group SA2 of area-preserving affine transformations. Since the area is preserved, any paradoxical decomposition of a square with respect to this group would be counterintuitive for the same reasons as the Banach–Tarski decomposition of a ball. In fact, the group SA2 contains as a subgroup the special linear group SL(2,R), which in its turn contains the free group F2 with two generators as a subgroup. This makes it plausible that the proof of the Banach–Tarski paradox can be imitated in the plane. The main difficulty here lies in the fact that the unit square is not invariant under the action of the linear group SL(2, R), hence one cannot simply transfer a paradoxical decomposition from the group to the square, as in the third step of the above proof of the Banach–Tarski paradox. Moreover, the fixed points of the group present difficulties (for example, the origin is fixed under all linear transformations). This is why von Neumann used the larger group SA2 including the translations, and he constructed a paradoxical decomposition of the unit square with respect to the enlarged group (in 1929). Applying the Banach–Tarski method, the paradox for the square can be strengthened as follows: - -Any two bounded subsets of the Euclidean plane with non-empty interiors are equidecomposable with respect to the area-preserving affine maps. - -As von Neumann notes: - -"Infolgedessen gibt es bereits in der Ebene kein nichtnegatives additives Maß (wo das Einheitsquadrat das Maß 1 hat), das gegenüber allen Abbildungen von A2 invariant wäre." - -"In accordance with this, already in the plane there is no non-negative additive measure (for which the unit square has a measure of 1), which is invariant with respect to all transformations belonging to A2 [the group of area-preserving affine transformations]." - -To explain further, the question of whether a finitely additive measure (that is preserved under certain transformations) exists or not depends on what transformations are allowed. The Banach measure of sets in the plane, which is preserved by translations and rotations, is not preserved by non-isometric transformations even when they do preserve the area of polygons. The points of the plane (other than the origin) can be divided into two dense sets which may be called A and B. If the A points of a given polygon are transformed by a certain area-preserving transformation and the B points by another, both sets can become subsets of the A points in two new polygons. The new polygons have the same area as the old polygon, but the two transformed sets cannot have the same measure as before (since they contain only part of the A points), and therefore there is no measure that "works". - -The class of groups isolated by von Neumann in the course of the study of the Banach–Tarski phenomenon turned out to be very important for many areas of mathematics: these are amenable groups, or groups with an invariant mean, and include all finite and all solvable groups. Generally speaking, paradoxical decompositions arise when the group used for equivalences in the definition of equidecomposability is not amenable.
- -* 2000: Von Neumann's paper left open the possibility of a paradoxical decomposition of the interior of the unit square with respect to the linear group SL(2,R) (Wagon, Question 7.4). In 2000, Miklós Laczkovich proved that such a decomposition exists. More precisely, let A be the family of all bounded subsets of the plane with non-empty interior and at a positive distance from the origin, and B the family of all planar sets with the property that a union of finitely many translates under some elements of SL(2, R) contains a punctured neighborhood of the origin. Then all sets in the family A are SL(2, R)-equidecomposable, and likewise for the sets in B. It follows that both families consist of paradoxical sets. - -* 2003: It had been known for a long time that the full plane was paradoxical with respect to SA2, and that the minimal number of pieces would equal four provided that there exists a locally commutative free subgroup of SA2. In 2003 Kenzi Satô constructed such a subgroup, confirming that four pieces suffice. - -* 2011: Laczkovich's paper left open the question of whether there exists a free group F of piecewise linear transformations acting on the punctured disk D \ {(0,0)} without fixed points. Grzegorz Tomkowicz constructed such a group, showing that the system of congruences A ≈ B ≈ C ≈ B ∪ C can be realized by means of F and D \ {(0,0)}. - -* 2017: It has been known for a long time that there exists in the hyperbolic plane H2 a set E that is a third, a fourth and ... and a $2^{\aleph_0}$-th part of H2. The requirement was satisfied by orientation-preserving isometries of H2. Analogous results were obtained by John Frank Adams and Jan Mycielski who showed that the unit sphere S2 contains a set E that is a half, a third, a fourth and ... and a $2^{\aleph_0}$-th part of S2. Grzegorz Tomkowicz showed that the Adams and Mycielski construction can be generalized to obtain a set E of H2 with the same properties as in S2. - -* 2017: Von Neumann's paradox concerns the Euclidean plane, but there are also other classical spaces where the paradoxes are possible. For example, one can ask if there is a Banach–Tarski paradox in the hyperbolic plane H2. This was shown by Jan Mycielski and Grzegorz Tomkowicz. Tomkowicz also proved that most of the classical paradoxes are an easy consequence of a graph theoretical result and the fact that the groups in question are rich enough. - -* 2018: In 1984, Jan Mycielski and Stan Wagon constructed a paradoxical decomposition of the hyperbolic plane H2 that uses Borel sets. The paradox depends on the existence of a properly discontinuous subgroup of the group of isometries of H2. A similar paradox was obtained by Grzegorz Tomkowicz, who constructed a free properly discontinuous subgroup G of the affine group SA(3,Z). The existence of such a group implies the existence of a subset E of Z3 such that for any finite subset F of Z3 there exists an element g of G such that $g(E) = E \triangle F$, where $E \triangle F$ denotes the symmetric difference of E and F. - -* 2019: The Banach–Tarski paradox uses finitely many pieces in the duplication. In the case of countably many pieces, any two sets with non-empty interiors are equidecomposable using translations. But allowing only Lebesgue measurable pieces one obtains: If A and B are subsets of Rn with non-empty interiors, then they have equal Lebesgue measures if and only if they are countably equidecomposable using Lebesgue measurable pieces.
Jan Mycielski and Grzegorz Tomkowicz extended this result to finite-dimensional Lie groups and second countable locally compact topological groups that are totally disconnected or have countably many connected components. diff --git a/wiki/wikipedia/3482.txt b/wiki/wikipedia/3482.txt deleted file mode 100644 index 5aa173cb598210fe2a58f9323e2dd51677983822..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3482.txt +++ /dev/null @@ -1,214 +0,0 @@ -In computer science and mathematics, the Josephus problem (or Josephus permutation) is a theoretical problem related to a certain counting-out game. - -People are standing in a circle waiting to be executed. Counting begins at a specified point in the circle and proceeds around the circle in a specified direction. After a specified number of people are skipped, the next person is executed. The procedure is repeated with the remaining people, starting with the next person, going in the same direction and skipping the same number of people, until only one person remains, and is freed. - -The problem, given the number of people, the starting point, the direction, and the number to be skipped, is to choose the position in the initial circle that avoids execution. - -The problem is named after Flavius Josephus, a Jewish historian living in the 1st century. According to Josephus' account of the siege of Yodfat, he and his 40 soldiers were trapped in a cave by Roman soldiers. They chose suicide over capture, and settled on a serial method of committing suicide by drawing lots. Josephus states that by luck or possibly by the hand of God, he and another man remained until the end and surrendered to the Romans rather than killing themselves. The story is given in Book 3, Chapter 8, part 7 of Josephus' The Jewish War, in which Josephus writes of himself in the third person. - -The details of the mechanism used in this feat are rather vague. According to James Dowdy and Michael Mays, in 1612 Claude Gaspard Bachet de Méziriac suggested the specific mechanism of arranging the men in a circle and counting by threes to determine the order of elimination. This story has been often repeated and the specific details vary considerably from source to source. For instance, Israel Nathan Herstein and Irving Kaplansky (1974) have Josephus and 39 comrades stand in a circle with every seventh man eliminated. A history of the problem can be found in S. L. Zabell's letter to the editor of the Fibonacci Quarterly. - -In one telling, Josephus had an accomplice; the problem was then to find the places of the two last remaining survivors (whose conspiracy would ensure their survival). It is alleged that he placed himself and the other man in the 31st and 16th places respectively (for k = 3 below). - -A medieval version of the Josephus problem involves 15 Turks and 15 Christians aboard a ship in a storm which will sink unless half the passengers are thrown overboard. All 30 stand in a circle and every ninth person is to be tossed into the sea. The Christians need to determine where to stand to ensure that only the Turks are tossed. In other versions the roles of Turks and Christians are interchanged. - -A commonly described and studied "standard" variant asks: determine where the last survivor stands if there are n people to start and every second person (k = 2 below) is eliminated. - -A generalization of this problem is as follows. It is supposed that every mth person will be executed from a group of size n, in which the pth person is the survivor.
If there is an addition of x people to the circle, then the survivor is in the (p + mx)-th position if this is less than or equal to n + x. If x is the smallest value for which (p + mx) > (n + x), then the survivor is in position (p + mx) − (n + x). - -In the following, $n$ denotes the number of people in the initial circle, and $k$ denotes the count for each step, that is, $k-1$ people are skipped and the $k$-th is executed. The people in the circle are numbered from $1$ to $n$, the starting position being $1$ and the counting being inclusive. - -The problem is explicitly solved when every second person will be killed, i.e. $k=2$. (For the more general case $k\neq 2$, a solution is outlined below.) - -The solution is expressed recursively. Let $f(n)$ denote the position of the survivor when there are initially n people (and $k=2$). - -The first time around the circle, all of the even-numbered people die. - -The second time around the circle, the new 2nd person dies, then the new 4th person, etc.; it is as though there were no first time around the circle. - -If the initial number of people was even, then the person in position x during the second time around the circle was originally in position $2x - 1$ (for every choice of x). Let $n=2j$. The person in position $f(j)$ who will now survive was originally in position $2f(j) - 1$. This yields the recurrence -$$ -f(2j)=2f(j)-1. -$$ - -If the initial number of people was odd, then person 1 can be thought of as dying at the end of the first time around the circle. Again, during the second time around the circle, the new 2nd person dies, then the new 4th person, etc. - -In this case, the person in position x was originally in position $2x+1$. This yields the recurrence -$$ -f(2j+1)=2f(j)+1. -$$ - -When the values of n and $f(n)$ are tabulated, a pattern emerges: $f(1),\dots,f(16) = 1, 1, 3, 1, 3, 5, 7, 1, 3, 5, 7, 9, 11, 13, 15, 1$. - -This suggests that $f(n)$ is an increasing odd sequence that restarts with $f(n)=1$ whenever the index n is a power of 2. - -Therefore, if m and l are chosen so that $n=2^m+l$ and $0\leq l<2^m$, then $f(n)=2l+1$. - -It is clear that the values in the table satisfy this equation. Alternatively, one can observe that after l people are dead only $2^m$ people remain, and the count is at the $(2l+1)$-th person; this person must be the survivor. So $f(n)=2l+1$. Below, a proof is given by induction. - -Theorem: If $n=2^m+l$ and $0\leq l<2^m$, then $f(n) = 2l+1$. - -Proof: We use strong induction on n. The base case $n=1$ is true. - -The cases are considered separately when n is even and when n is odd. - -If n is even, then choose $l_1$ and $m_1$ such that $n/2 = 2^{m_1}+l_1$ and $0\leq l_1 < 2^{m_1}$. Note that $l_1 = l/2$. We have -$$ -f(n) = 2f(n/2)-1=2(2l_1+1) - 1=2l+1, -$$ where the second equality follows from the induction hypothesis. - -If n is odd, then choose $l_1$ and $m_1$ such that $(n-1)/2 = 2^{m_1}+l_1$ and $0\leq l_1 < 2^{m_1}$. Note that $l_1 = (l-1)/2$. We have -$$ -f(n) = 2f((n-1)/2)+1=2(2l_1+1) + 1=2l+1, -$$ where the second equality follows from the induction hypothesis. This completes the proof. - -Solving for l gives an explicit expression for $f(n)$: -$$ -f(n) = 2(n-2^{\lfloor \log_2(n) \rfloor})+1 -$$ - -The most elegant form of the answer involves the binary representation of n: $f(n)$ can be obtained by a one-bit left cyclic shift of n itself. If n is represented in binary as $n=1 b_1 b_2 b_3\dots b_m$, then the solution is given by $f(n)=b_1 b_2 b_3 \dots b_m 1$. The proof of this follows from the representation of n as $2^m+l$ or from the above expression for $f(n)$.
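To make the equivalence between the recurrence, the closed form, and the bit-rotation description concrete, here is a minimal Python sketch (the function names are illustrative, not from the article or any library) that checks all three against each other:

def josephus_recursive(n):
    # f(1) = 1, f(2j) = 2 f(j) - 1, f(2j + 1) = 2 f(j) + 1
    if n == 1:
        return 1
    return 2 * josephus_recursive(n // 2) + (1 if n % 2 else -1)

def josephus_closed_form(n):
    # f(n) = 2 l + 1 where n = 2^m + l and 0 <= l < 2^m
    m = n.bit_length() - 1
    return 2 * (n - (1 << m)) + 1

def josephus_bit_rotation(n):
    # one-bit left cyclic shift of the binary representation of n
    return int(bin(n)[3:] + bin(n)[2], 2)

for n in range(1, 1025):
    assert josephus_recursive(n) == josephus_closed_form(n) == josephus_bit_rotation(n)

print(josephus_closed_form(41))  # 19, matching the worked example below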
- -Implementation: If n denotes the number of people, the safe position is given by the function $f(n) = 2l+1$, where $n=2^m+l$ and $0\leq l<2^m$. - -Now if the number is represented in binary format, the first bit denotes $2^m$ and the remaining bits denote l. For example, when n = 41, its binary representation is - -n = 1 0 1 0 0 1 - -2ᵐ = 1 0 0 0 0 0 - -l = 0 1 0 0 1

/**
 * @param n the number of people standing in the circle
 * @return the safe position of the person who will survive the execution
 * f(N) = 2L + 1, where N = 2^M + L and 0 <= L < 2^M
 */
public int getSafePosition(int n) {
    // find the value of L: strip the highest set bit of n
    int valueOfL = n - Integer.highestOneBit(n);
    return 2 * valueOfL + 1;
}

- -The easiest way to find the safe position is by using bitwise operators. In this approach, shifting the most-significant set bit of n to the least-significant bit will return the safe position. The input must be a positive integer. - -n = 1 0 1 0 0 1 - -f(n) = 0 1 0 0 1 1

/**
 * @param n (e.g. 41) the number of people standing in the circle
 * @return the safe position of the person who will survive the execution
 */
public int getSafePosition(int n) {
    // (n << 1) | 1 shifts n left by one and sets the lowest bit;
    // ~Integer.highestOneBit(n * 2) then masks off the old leading bit.
    // Together this performs a one-bit left cyclic shift of n.
    return ~Integer.highestOneBit(n * 2) & ((n << 1) | 1);
}

- -In 1997, Lorenz Halbeisen and Norbert Hungerbühler discovered a closed form for the case $k=3$. They showed that there is a certain constant -$$ -\alpha \approx 0.8111... -$$ - -that can be computed to arbitrary precision. Given this constant, choose m to be the greatest integer such that $\operatorname{round}(\alpha \cdot (3/2)^m) \le n$ (this will be either $m^\prime = \operatorname{round}(\log_{3/2} n/\alpha)$ or $m^\prime - 1$). Then the final survivor is -$$ -f(n) = 3\left(n - \operatorname{round}(\alpha \cdot (3/2)^m)\right) + 2 -$$ if $\alpha \cdot (3/2)^m$ was rounded up, and the same expression with $+1$ in place of $+2$ if it was rounded down, - -for all $n \ge 5$. - -As an example computation, Halbeisen and Hungerbühler give $n = 41, k = 3$ (which is actually the original formulation of Josephus' problem). They compute: -$$ -m^\prime \approx \operatorname{round}(\log_{3/2} 41/0.8111) \approx \operatorname{round}(9.68) = 10 -$$ -$$ -\operatorname{round}(\alpha \cdot (3/2)^{m^\prime}) \approx \operatorname{round}(0.8111 \cdot (3/2)^{10}) = 47 -$$ and therefore $m = 9$ -$$ -\operatorname{round}(0.8111 \cdot (3/2)^9) \approx \operatorname{round}(31.18) = 31 -$$ (note that this has been rounded down) -$$ -f(n) = 3(41 - 31) + 1 = 31 -$$ - -This can be verified by looking at each successive pass on the numbers 1 through 41: - -1, 2, 4, 5, 7, 8, 10, 11, 13, 14, 16, 17, 19, 20, 22, 23, 25, 26, 28, 29, 31, 32, 34, 35, 37, 38, 40, 41 - -2, 4, 7, 8, 11, 13, 16, 17, 20, 22, 25, 26, 29, 31, 34, 35, 38, 40 - -2, 4, 8, 11, 16, 17, 22, 25, 29, 31, 35, 38 - -2, 4, 11, 16, 22, 25, 31, 35 - -2, 4, 16, 22, 31, 35 - -4, 16, 31, 35 - -16, 31 - -31 - -Dynamic programming is used to solve this problem in the general case by performing the first step and then using the solution of the remaining problem. When the index starts from one, then the person who is $s$ shifts away from the first person is in position $((s-1)\bmod n)+1$, where n is the total number of persons. Let $f(n,k)$ denote the position of the survivor.
After the $k$-th person is killed, a circle of $n-1$ people remains, and the next count is started with the person whose number in the original problem was $(k\bmod n)+1$. The position of the survivor in the remaining circle would be $f(n-1,k)$ if counting is started at $1$; shifting this to account for the fact that the starting point is $(k\bmod n)+1$ yields the recurrence -$$ -f(n,k)=((f(n-1,k)+k-1) \bmod n)+1,\text{ with }f(1,k)=1, -$$ - -which takes the simpler form -$$ -g(n,k)=(g(n-1,k)+k) \bmod n,\text{ with }g(1,k)=0 -$$ - -if the positions are numbered from $0$ to $n-1$ instead. For example, for $n=5$ and $k=2$ the zero-indexed recurrence gives $g(2,2)=0$, $g(3,2)=2$, $g(4,2)=0$, $g(5,2)=2$, i.e. the survivor is person $3$ in one-indexed terms, in agreement with the value $f(5)=3$ tabulated above. - -This approach has running time $O(n)$, but for small $k$ and large $n$ there is another approach. The second approach also uses dynamic programming but has running time $O(k\log n)$. It is based on considering the killing of the k-th, 2k-th, ..., $(\lfloor n/k \rfloor k)$-th people as one step, then changing the numbering. - -This improved approach takes the form -$$ -g(n,k) = \begin{cases} 0 & \text{if } n=1,\\ (g(n-1,k)+k) \bmod n & \text{if } 1 < n < k,\\ \left\lfloor \dfrac{k\bigl((g(n',k)-n \bmod k) \bmod n'\bigr)}{k-1} \right\rfloor & \text{if } k \le n, \text{ where } n'= n- \left\lfloor \dfrac{n}{k} \right\rfloor. \end{cases} -$$ diff --git a/wiki/wikipedia/3483.txt b/wiki/wikipedia/3483.txt deleted file mode 100644 index 44e592ab14fdaf183c24309baecb662672f1f0b2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3483.txt +++ /dev/null @@ -1,11 +0,0 @@ -Tetris Giant, known in Japan as Tetris Dekaris, is an arcade game released in Japan in 2009 by Sega that features a giant version of the game Tetris. It is played on a large 70" DLP projection monitor, running on the Sega System SP. It is controlled using giant joysticks, the right joystick being slightly lower than the left one, with a built-in shaker "rumble" motor, a device that Sega refers to as the "Deka Lever" (deka is Japanese for large). The playing field is 6 cells wide by 7 cells high, as opposed to the almost universal 10 cells wide by 20 high. The game can be played by up to two players. - -This game is a one- or two-player game with competitive and cooperative modes available. Single Mode includes Line Challenge and Score Challenge modes, and Co-op Mode includes the same modes. During competitive multiplayer, clearing multiple block rows makes the other player's speed increase. During two-player cooperative play, players can swap pieces with each other, up to three times, by tapping a button on the main arcade cabinet. As in a normal game of Tetris, pieces are moved left and right by moving the stick left and right, and pushing down on the stick makes a piece fall into place. Buttons on the top of the joystick can be pressed to rotate pieces. - -Unique to this game, the Tetris Dekaris/Giant base unit is actually a projector: arcade operators can optionally detach it from the default screen and project the gameplay onto a large wall. - -Line Challenge is a time-limited mode aimed at beginners of the game. In it, the player must clear as many lines as possible before the time runs out. Under default settings the timer starts at 120 seconds, though this can be changed in the operator settings. - -Clearing two or more lines at once adds bonus time to the timer (2 sec. for Doubles, 5 sec. for Triples, and 20 sec. for Tetrises).
If a player would top out, either by stacking pieces above the 7th row or by having piece entry blocked, the bottom seven rows will be cleared and the player will be allowed to continue playing (however, 5 seconds will be deducted from the timer). The game ends when either the time limit expires or the 200-line limit is reached. - -Score Challenge is a score-attack mode aimed toward those more familiar with the game. The objective is to get the highest score possible before topping out or reaching the 200-line limit. The scoring system in this game is significantly different from Guideline scoring and puts a high emphasis on clearing Tetrises. The level advances every 10 lines and caps at Level 15, ending the game. The score cap is 99,999 points. The game keeps track of the 1000 highest scores registered on the machine and displays them on the attract screen and in-game. During gameplay, the leaderboard is seen to the side of the playfield, climbing higher as the player's score places higher on the leaderboard. Extra emphasis is put on scores at positions that are multiples of 100 (1000th, 900th, etc.) and on the top 10 scores. diff --git a/wiki/wikipedia/3484.txt b/wiki/wikipedia/3484.txt deleted file mode 100644 index 5315c55e24e2cf82df06034947639609bf18ee83..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3484.txt +++ /dev/null @@ -1,9 +0,0 @@ -casync (content-addressable synchronisation) is a Linux software utility designed to distribute frequently-updated file system images over the Internet. - -According to its creator, Lennart Poettering, casync is inspired by rsync and Git, as well as tar. casync is aimed at Internet of things (IoT), container, virtual machine (VM), portable service, and operating system (OS) images, as well as backups and home directory synchronization. casync splits images into variable-size segments, uses SHA-256 checksums, and aims to work with content delivery networks (CDNs). Available for Linux only, packages are available for Ubuntu, Fedora and Arch Linux. - -Similar software that delivers file system images includes: - -* Docker, with its layered tarballs - -* OSTree diff --git a/wiki/wikipedia/3485.txt b/wiki/wikipedia/3485.txt deleted file mode 100644 index e73a5eec6463fcc0f5ca45aa7c5cf4f36cc8971f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3485.txt +++ /dev/null @@ -1,14 +0,0 @@ -In mathematics, a proof without words, also known as a visual proof, is a proof of an identity or mathematical statement which can be demonstrated as self-evident by a diagram without any accompanying explanatory text. Such proofs can be considered more elegant than formal or mathematically rigorous proofs, due to their self-evident nature. When the diagram demonstrates a particular case of a general statement, to be a proof, it must be generalisable. - -The statement that the sum of all positive odd numbers up to 2n − 1 is a perfect square (more specifically, the perfect square $n^2$, that is, $1 + 3 + 5 + \cdots + (2n-1) = n^2$) can be demonstrated by a proof without words, as shown on the right. The first square is formed by 1 block; 1 is the first square. The next strip, made of white squares, shows how adding 3 more blocks makes another square: four. The next strip, made of black squares, shows how adding 5 more blocks makes the next square. This process can be continued indefinitely. - -The Pythagorean theorem can be proven without words, as shown in the second diagram on the left.
The two different methods for determining the area of the large square give the relation -$$ -a^2 + b^2 = c^2 -$$ - -between the sides. This proof is more subtle than the one above, but it can still be considered a proof without words. - -Jensen's inequality can also be proven graphically, as illustrated on the third diagram. The dashed curve along the X axis is the hypothetical distribution of X, while the dashed curve along the Y axis is the corresponding distribution of Y values. Note that the convex mapping Y(X) increasingly "stretches" the distribution for increasing values of X. - -Mathematics Magazine and the College Mathematics Journal run a regular feature titled "Proof without words" containing, as the title suggests, proofs without words. The Art of Problem Solving and USAMTS websites run Java applets illustrating proofs without words. diff --git a/wiki/wikipedia/3486.txt b/wiki/wikipedia/3486.txt deleted file mode 100644 index 25a96dde0bc36befe24d09e1dd6e8f79d0d2df8c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3486.txt +++ /dev/null @@ -1,13 +0,0 @@ -In the mathematical theory of artificial neural networks, universal approximation theorems are results that establish the density of an algorithmically generated class of functions within a given function space of interest. Typically, these results concern the approximation capabilities of the feedforward architecture on the space of continuous functions between two Euclidean spaces, and the approximation is with respect to the compact convergence topology. However, there are also a variety of results between non-Euclidean spaces and other commonly used architectures and, more generally, algorithmically generated sets of functions, such as the convolutional neural network (CNN) architecture, radial basis functions, or neural networks with specific properties. Most universal approximation theorems can be parsed into two classes. The first quantifies the approximation capabilities of neural networks with an arbitrary number of artificial neurons ("arbitrary width" case) and the second focuses on the case with an arbitrary number of hidden layers, each containing a limited number of artificial neurons ("arbitrary depth" case). - -Universal approximation theorems imply that neural networks can represent a wide variety of interesting functions when given appropriate weights. On the other hand, they typically do not provide a construction for the weights, but merely state that such a construction is possible. - -One of the first versions of the arbitrary width case was proven by George Cybenko in 1989 for sigmoid activation functions. Kurt Hornik showed in 1991 that it is not the specific choice of the activation function, but rather the multilayer feed-forward architecture itself, which gives neural networks the potential of being universal approximators. Moshe Leshno et al. in 1993, and later Allan Pinkus in 1999, showed that the universal approximation property is equivalent to having a nonpolynomial activation function. - -The arbitrary depth case was also studied by a number of authors, such as Zhou Lu et al. in 2017, Boris Hanin and Mark Sellke in 2018, and Patrick Kidger and Terry Lyons in 2020. The minimal width per layer needed for universal approximation was subsequently refined, including for residual networks. - -Several extensions of the theorem exist, such as to discontinuous activation functions. - -The classical form of the universal approximation theorem for arbitrary width and bounded depth is as follows.
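A standard formulation of this classical result, paraphrased in the nonpolynomial-activation form due to Leshno et al. and Pinkus mentioned above (a reconstruction rather than a quotation), is the following. Let $\sigma\colon\mathbb{R}\to\mathbb{R}$ be a continuous activation function and let $K\subset\mathbb{R}^n$ be compact. Then the family of one-hidden-layer networks
$$
\mathcal{N}(\sigma)=\left\{\, x\mapsto \sum_{i=1}^{N}\alpha_i\,\sigma(w_i\cdot x+b_i)\ :\ N\in\mathbb{N},\ \alpha_i, b_i\in\mathbb{R},\ w_i\in\mathbb{R}^n \right\}
$$
is dense in $C(K)$ with respect to the uniform norm if and only if $\sigma$ is not a polynomial.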
- -Achieving useful universal function approximation on graphs (or rather on graph isomorphism classes) has been a longstanding problem. The popular graph convolutional neural networks (GCNs or GNNs) can be made as discriminative as the Weisfeiler–Leman graph isomorphism test. In 2020, a universal approximation theorem result was established by Brüel-Gabrielsson, showing that graph representation with certain injective properties is sufficient for universal function approximation on bounded graphs and restricted universal function approximation on unbounded graphs, with an accompanying method of runtime $O(\#\text{edges}\times\#\text{nodes})$ that performed at the state of the art on a collection of benchmarks. diff --git a/wiki/wikipedia/3487.txt b/wiki/wikipedia/3487.txt deleted file mode 100644 index 39a625035924ed84f2aee3382b3e175a7db1197a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3487.txt +++ /dev/null @@ -1,27 +0,0 @@ -The IBM A2 is an open-source, massively multicore-capable, multithreaded 64-bit Power ISA processor core designed by IBM using the Power ISA v.2.06 specification. Versions of processors based on the A2 core range from a 2.3 GHz version with 16 cores consuming 65 W to a less powerful four-core version consuming 20 W at 1.4 GHz. - -The A2 core is a processor core designed for customization and embedded use in system-on-chip devices, and was developed following IBM's game console processor designs, the Xbox 360 processor and the Cell processor for the PlayStation 3. - -A2I is a 4-way simultaneously multithreaded core which implements the 64-bit Power ISA v.2.06 Book III-E embedded platform specification with support for the embedded hypervisor features. It was designed for implementations with many cores, focusing on high throughput and many simultaneous threads. A2I was written in VHDL. - -The core has 4 × 32 64-bit general-purpose registers (GPRs), that is, 32 per thread, with full support for both little- and big-endian byte ordering, a 16 KB + 16 KB instruction and data cache, and is capable of four-way multithreading. - -It has a fine-grain branch prediction unit (BPU) with eight 1024-entry branch history tables. The L1 cache consists of a 16 KB 8-way set-associative data cache and a 4-way set-associative 16 KB instruction cache. The core executes a simple in-order pipeline capable of issuing two instructions per cycle: one to the 6-stage arithmetic logic unit (ALU) and one to the optional auxiliary execution unit (AXU). - -It includes a memory management unit but no floating-point unit (FPU). Such facilities are handled by the AXU, which has support for any number of standardized or customized macros, such as floating-point units, vector units, DSPs, media accelerators and other units with instruction sets and registers not part of the Power ISA. The core has a system interface unit used to connect to other on-die cores, with a 256-bit interface for data writes and a 128-bit interface for instruction and data reads at full core speed. - -The A2O is a slightly more modern version, written in Verilog, using the Power ISA v.2.07 Book III-E. It is optimized for single-core performance and designed to reach 3 GHz with 45 nm process technology. The A2O differs from its sibling in that it is only two-way multithreaded, has 32 + 32 kB data and instruction L1 caches, and is capable of out-of-order execution. - -When the A2O was released, no actual products had used it. - -In the second half of 2020, IBM released the A2I and A2O cores under a Creative Commons license and published the VHDL and Verilog code on GitHub.
The intention was to add them to the OpenPOWER Foundation's offerings of free and open processor cores. - -As the A2 was designed in 2010, A2I and A2O are not compliant with Power ISA 3.0 or 3.1, which is mandatory for OpenPOWER cores. It is IBM's wish that the cores be updated to comply with the newer versions of the ISA. - -The PowerEN (Power Edge of Network), or the "wire-speed processor", is designed as a hybrid between a regular networking processor, which does switching and routing, and a typical server processor, which manipulates and packages data. It was revealed on February 8, 2010, at ISSCC 2010. - -Each chip uses the A2I core and has 8 MB of cache as well as a multitude of task-specific engines besides the general-purpose processors, such as XML, cryptography, compression and regular expression accelerators, each with an MMU of its own, four 10 Gigabit Ethernet ports and two PCIe lanes. Up to four chips can be linked in an SMP system without any additional support chips. The chips are said to be extremely complex, according to Charlie Johnson, chief architect at IBM, and use 1.43 billion transistors on a die size of 428 mm², fabricated using a 45 nm process. - -The Blue Gene/Q processor is an 18-core chip using the A2I core running at 1.6 GHz, with special features for fast thread context switching, a quad SIMD floating-point unit, a 5D torus chip-to-chip network and 2 GB/s external I/O. The cores are linked by a crossbar switch running at half core speed to a 32 MB eDRAM L2 cache. The L2 cache is multi-versioned and supports transactional memory and speculative execution. A Blue Gene/Q chip has two DDR3 memory controllers running at 1.33 GHz, supporting up to 16 GB of RAM. - -It uses 16 cores for computing and one core for operating system services. This 17th core takes care of interrupts, asynchronous I/O, MPI flow control, and RAS functionality. The 18th core is used as a spare in case one of the other cores is permanently damaged (for instance in manufacturing), and is shut down in functional operation. The Blue Gene/Q chip is manufactured on IBM's copper SOI process at 45 nm, delivers a peak performance of 204.8 GFLOPS at 1.6 GHz, and draws about 55 watts. The chip has a die size of 19×19 mm (359.5 mm²) and uses 1.47 billion transistors. diff --git a/wiki/wikipedia/3488.txt b/wiki/wikipedia/3488.txt deleted file mode 100644 index 82c8e135956bee88261ad6f2a0b490609ad9a6bb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3488.txt +++ /dev/null @@ -1,11 +0,0 @@ -In mathematics, the Weierstrass product inequality states that for any real numbers 0 ≤ a1, ..., an ≤ 1 we have -$$ -(1-a_1)(1-a_2)(1-a_3)(1-a_4)\cdots(1-a_n) \geq 1-S_n, -$$ -$$ -(1+a_1)(1+a_2)(1+a_3)(1+a_4)\cdots(1+a_n) \geq 1+S_n, -$$ - -where $S_n=a_1+a_2+a_3+a_4+\cdots+a_n.$ - -The inequality is named after the German mathematician Karl Weierstrass. It can be proven easily via mathematical induction. diff --git a/wiki/wikipedia/3489.txt b/wiki/wikipedia/3489.txt deleted file mode 100644 index 44e592ab14fdaf183c24309baecb662672f1f0b2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3489.txt +++ /dev/null @@ -1,77 +0,0 @@ -In computing, an odd–even sort or odd–even transposition sort (also known as brick sort or parity sort) is a relatively simple sorting algorithm, developed originally for use on parallel processors with local interconnections. It is a comparison sort related to bubble sort, with which it shares many characteristics.
It functions by comparing all odd/even indexed pairs of adjacent elements in the list and, if a pair is in the wrong order (the first is larger than the second), the elements are switched. The next step repeats this for even/odd indexed pairs (of adjacent elements). Then it alternates between odd/even and even/odd steps until the list is sorted. - -On parallel processors, with one value per processor and only local left–right neighbor connections, the processors all concurrently do a compare–exchange operation with their neighbors, alternating between odd–even and even–odd pairings. This algorithm was originally presented, and shown to be efficient on such processors, by Habermann in 1972. - -The algorithm extends efficiently to the case of multiple items per processor. In the Baudet–Stevenson odd–even merge-splitting algorithm, each processor sorts its own sublist at each step, using any efficient sort algorithm, and then performs a merge splitting, or transposition–merge, operation with its neighbor, with neighbor pairing alternating between odd–even and even–odd on each step. - -A related but more efficient sorting algorithm is the Batcher odd–even mergesort, using compare–exchange operations and perfect-shuffle operations. - -Batcher's method is efficient on parallel processors with long-range connections. - -The single-processor algorithm, like bubble sort, is simple but not very efficient. Here a zero-based index is assumed:

function oddEvenSort(list) {
    // exchange the elements at indices i and j
    function swap(list, i, j) {
        var temp = list[i];
        list[i] = list[j];
        list[j] = temp;
    }
    var sorted = false;
    while (!sorted) {
        sorted = true;
        // compare odd/even indexed pairs: (1,2), (3,4), ...
        for (var i = 1; i < list.length - 1; i += 2) {
            if (list[i] > list[i + 1]) {
                swap(list, i, i + 1);
                sorted = false;
            }
        }
        // compare even/odd indexed pairs: (0,1), (2,3), ...
        for (var i = 0; i < list.length - 1; i += 2) {
            if (list[i] > list[i + 1]) {
                swap(list, i, i + 1);
                sorted = false;
            }
        }
    }
}

- -Claim: Let $a_1, ..., a_n$ be a sequence of data ordered by <. The odd–even sort algorithm correctly sorts this data in $n$ passes. (A pass here is defined to be a full sequence of odd–even, or even–odd, comparisons. The passes occur in order: pass 1: odd–even, pass 2: even–odd, etc.) - -Proof: - -This proof is based loosely on one by Thomas Worsch. - -Since the sorting algorithm only involves comparison-swap operations and is oblivious (the order of comparison-swap operations does not depend on the data), by Knuth's 0–1 sorting principle, it suffices to check correctness when each $a_i$ is either 0 or 1. Assume that there are $e$ 1s. - -Observe that the rightmost 1 can be either in an even or odd position, so it might not be moved by the first odd–even pass. But after the first odd–even pass, the rightmost 1 will be in an even position. It follows that it will be moved to the right by all remaining passes. Since the rightmost 1 starts in a position greater than or equal to $e$, it must be moved at most $n - e$ steps. It follows that it takes at most $n - e + 1$ passes to move the rightmost 1 to its correct position. - -Now, consider the second rightmost 1. After two passes, the 1 to its right will have moved right by at least one step. It follows that, for all remaining passes, we can view the second rightmost 1 as the rightmost 1. The second rightmost 1 starts in a position of at least $e - 1$ and must be moved to a position of at most $n - 1$, so it must be moved at most $(n - 1) - (e - 1) = n - e$ steps.
After at most two passes, the rightmost 1 will have already moved, so the entry to the right of the second rightmost 1 will be 0. Hence, for all passes after the first two, the second rightmost 1 will move to the right. It thus takes at most $n - e + 2$ passes to move the second rightmost 1 to its correct position. - -Continuing in this manner, by induction it can be shown that the $i$-th rightmost 1 is moved to its correct position in at most $n - e + i$ passes. Since $i \leq e$, it follows that the $i$-th rightmost 1 is moved to its correct position in at most $n - e + e = n$ passes. The list is thus correctly sorted in $n$ passes. QED. - -We remark that each pass takes $O(n)$ steps, so this algorithm has $O(n^2)$ complexity. diff --git a/wiki/wikipedia/349.txt b/wiki/wikipedia/349.txt deleted file mode 100644 index 9c349029a4658d8b7ae8d5c3d1f452e68a9b96f3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/349.txt +++ /dev/null @@ -1,75 +0,0 @@ -In operations research, the cutting-stock problem is the problem of cutting standard-sized pieces of stock material, such as paper rolls or sheet metal, into pieces of specified sizes while minimizing material wasted. It is an optimization problem in mathematics that arises from applications in industry. In terms of computational complexity, the problem is an NP-hard problem reducible to the knapsack problem. The problem can be formulated as an integer linear programming problem. - -A paper machine can produce an unlimited number of master (jumbo) rolls, each 5600 mm wide. The following 13 items must be cut, as shown in the table below. - -The important thing about this kind of problem is that many different product units can be made from the same master roll, and the number of possible combinations is itself very large, in general, and not trivial to enumerate. - -The problem therefore is to find an optimum set of patterns of making product rolls from the master roll, such that the demand is satisfied and waste is minimized. - -A simple lower bound is obtained by dividing the total amount of product by the size of each master roll. The total product required is 1380 × 22 + 1520 × 25 + ... + 2200 × 20 = 407160 mm. Each master roll is 5600 mm, requiring a minimum of 72.7 rolls, which means 73 rolls or more are required. - -There are 308 possible patterns for this small instance. The optimal answer requires 73 master rolls and has 0.401% waste; it can be shown computationally that in this case the minimum number of patterns with this level of waste is 10. It can also be computed that 19 different such solutions exist, each with 10 patterns and a waste of 0.401%, one of which is shown below and in the picture. - -Cutting-stock problems can be classified in several ways. One way is the dimensionality of the cutting: the above example illustrates a one-dimensional (1D) problem; other industrial applications of 1D occur when cutting pipes, cables, and steel bars. Two-dimensional (2D) problems are encountered in furniture, clothing and glass production. When either the master item or the required parts are irregular-shaped (a situation often encountered in the leather, textile, and metals industries), this is referred to as the nesting problem. - -Not many three-dimensional (3D) applications involving cutting are known; however the closely related 3D packing problem has many industrial applications, such as packing objects into shipping containers (see e.g.
containerization: the related sphere packing problem has been studied since the 17th century (Kepler conjecture)). - -Industrial applications of cutting-stock problems for high production volumes arise especially when basic material is produced in large rolls that are further cut into smaller units (see roll slitting). This is done e.g. in the paper and plastic film industries but also in the production of flat metals like steel or brass. There are many variants and additional constraints arising from special production constraints due to machinery and process limits, customer requirements and quality issues; some examples are: - -* Two-stage, where the rolls produced in the first stage are then processed a second time. For instance, all office stationery (e.g. A4 size in Europe, Letter size in the US) is produced in such a process. The complication arises because the machinery in the second stage is narrower than the primary. Efficient utilisation of both stages of production is important (from an energy or material use perspective) and what is efficient for the primary stage may be inefficient for the secondary, leading to trade-offs. Metallised film (used in the packaging of snacks), and plastic extrusion on paper (used in liquid packaging, e.g. juice cartons) are further examples of such a process. - -* Winder constraints, where the slitting process has physical or logical constraints: a very common constraint is that only a certain number of slitting knives are available, so that feasible patterns should not contain more than a maximum number of rolls. Because winder machinery is not standardised, very many other constraints are encountered. - -* An example of a customer requirement is when a particular order cannot be satisfied from either of the two edge positions: this is because the edges of the sheet tend to have greater variations in thickness and some applications can be very sensitive to these. - -* An example of a quality issue is when the master roll contains defects that have to be cut around. Expensive materials with demanding quality characteristics such as photographic paper or Tyvek have to be carefully optimised so that the wasted area is minimised. - -* Multi-machine problems arise when orders can be produced on more than one machine and these machines have different widths. Generally, the availability of more than one master roll width reduces the waste considerably; in practice, however, additional order-splitting constraints may have to be taken into account. - -* There is also a semi-continuous problem, where the produced rolls do not have to be of the same diameter, but can vary within a range. This typically occurs with sheet orders. This is sometimes known as a 1½-dimensional problem. This variant also occurs in the production of corrugated fiberboard, where it is called, somewhat confusingly, the corrugator scheduling problem. - -* Because some paper machines are relatively narrow compared to the demanded items, some companies have invested in a skiving (also known as web-welding) secondary process, whereby two reels (produced by slitting the initial jumbo reels) are joined side by side (with a little overlap) to make up a wider roll. Producing narrower reels in the primary process leads to lower overall waste. - -* In the metals industry one key difference is that typically the master rolls are produced earlier and are generally different from each other (both in terms of width and length). Therefore, there are similarities with the multi-machine problem mentioned above.
The presence of length variations creates a 2-D problem, because waste can occur both width-wise and length-wise. - -* The guillotine problem is another 2-D problem of cutting sheets into rectangles of specified sizes; however, only cuts that continue all the way across each sheet are allowed. Industrial applications of this problem can be found in the glass industry. - -* The cutting-stock problem of determining, for the one-dimensional case, the best master size that will meet given demand is known as the assortment problem. - -The cutting-stock problem was first formulated by Kantorovich in 1939. In 1951, before computers became widely available, L. V. Kantorovich and V. A. Zalgaller suggested solving the problem of the economical use of material at the cutting stage with the help of linear programming. The proposed technique was later called the column generation method. - -The standard formulation for the cutting-stock problem (but not the only one) starts with a list of m orders, each requiring $q_j$ pieces, where $j = 1,\ldots,m$. We then construct a list of all possible combinations of cuts (often called "patterns" or "configurations"). Let $C$ be the number of those patterns. We associate with each pattern a positive integer variable $x_i$, representing how many times pattern $i$ is to be used, where $i = 1,\ldots,C$. The linear integer program is then: -$$ -\min\sum_{i=1}^C c_i x_i -$$ -$$ -\text{s.t.}\sum_{i=1}^C a_{ij} x_i \ge q_j, \quad \quad \forall j=1,\dots,m -$$ -$$ -x_i \ge 0, \text{ integer} -$$ - -where $a_{ij}$ is the number of times order $j$ appears in pattern $i$ and $c_i$ is the cost (often the waste) of pattern $i$. The precise nature of the quantity constraints can lead to subtly different mathematical characteristics. The above formulation's quantity constraints are minimum constraints (at least the given amount of each order must be produced, but possibly more). - -When $c_i=1$, the objective minimises the number of utilised master items and, if the constraint for the quantity to be produced is replaced by equality, it is called the bin packing problem. - -The most general formulation has two-sided constraints (and in this case a minimum-waste solution may consume more than the minimum number of master items): -$$ -q_j \le \sum_{i=1}^C a_{ij} x_i \le Q_j, \quad \quad \forall j=1,\dots,m -$$ - -This formulation applies not just to one-dimensional problems. Many variations are possible, including one where the objective is not to minimise the waste, but to maximise the total value of the produced items, allowing each order to have a different value. - -In general, the number of possible patterns grows exponentially as a function of m, the number of orders. As the number of orders increases, it may therefore become impractical to enumerate the possible cutting patterns. - -An alternative approach uses delayed column generation. This method solves the cutting-stock problem by starting with just a few patterns. It generates additional patterns when they are needed. For the one-dimensional case, the new patterns are introduced by solving an auxiliary optimization problem called the knapsack problem, using dual variable information from the linear program. The knapsack problem has well-known methods to solve it, such as branch and bound and dynamic programming. The delayed column generation method can be much more efficient than the original approach, particularly as the size of the problem grows.
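To make the pattern-based formulation concrete, here is a small self-contained Python sketch on a toy instance. The master width, order widths and demands below are made up for illustration, and the solver is exhaustive search over enumerated patterns, not the column-generation method discussed above; it is only viable for tiny instances, precisely because of the pattern blow-up just described:

from functools import lru_cache

MASTER = 10          # hypothetical master roll width
SIZES = (3, 5)       # order widths (toy data)
DEMAND = (4, 2)      # pieces required of each width

def patterns(sizes, capacity):
    """Enumerate every cutting pattern (tuple of piece counts) that fits in one roll."""
    found = []
    def extend(i, remaining, counts):
        if i == len(sizes):
            if any(counts):
                found.append(tuple(counts))
            return
        for c in range(remaining // sizes[i] + 1):
            extend(i + 1, remaining - c * sizes[i], counts + [c])
    extend(0, capacity, [])
    return found

PATTERNS = patterns(SIZES, MASTER)

@lru_cache(maxsize=None)
def min_rolls(demand):
    """Minimum number of master rolls covering `demand`, by exhaustive recursion."""
    if all(d <= 0 for d in demand):
        return 0
    best = None
    for p in PATTERNS:
        # only use a pattern that produces at least one still-needed piece
        if any(c > 0 and d > 0 for c, d in zip(p, demand)):
            rolls = 1 + min_rolls(tuple(max(d - c, 0) for d, c in zip(demand, p)))
            if best is None or rolls < best:
                best = rolls
    return best

print(min_rolls(DEMAND))  # 3 for this toy instance (material bound: 22 mm-units / 10 per roll)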
The column generation approach as applied to the cutting-stock problem was pioneered by Gilmore and Gomory in a series of papers published in the 1960s. Gilmore and Gomory showed that this approach is guaranteed to converge to the (fractional) optimal solution, without needing to enumerate all the possible patterns in advance. - -A limitation of the original Gilmore and Gomory method is that it does not handle integrality, so the solution may contain fractions, e.g. a particular pattern should be produced 3.67 times. Rounding to the nearest integer often does not work, in the sense that it may lead to a sub-optimal solution and/or under- or over-production of some of the orders (and possible infeasibility in the presence of two-sided demand constraints). This limitation is overcome in modern algorithms, which can solve to optimality (in the sense of finding solutions with minimum waste) very large instances of the problem (generally larger than encountered in practice). - -The cutting-stock problem is often highly degenerate, in that multiple solutions with the same amount of waste are possible. This degeneracy arises because it is possible to move items around, creating new patterns, without affecting the amount of waste. This gives rise to a whole collection of related problems which are concerned with some other criterion, such as the following: - -* The minimum pattern count problem: to find a minimum-pattern-count solution amongst the minimum-waste solutions. This is a very hard problem, even when the waste is known. There is a conjecture that any equality-constrained one-dimensional instance with n sizes has at least one minimum-waste solution with no more than n + 1 patterns. This conjecture was first refuted in April 2020 with an example with 9 sizes that requires 11 patterns. - -* The minimum stack problem: this is concerned with the sequencing of the patterns so as not to have too many partially completed orders at any time. This was an open problem until 2007, when an efficient algorithm based on dynamic programming was published. - -* The minimum number of knife changes problem (for the one-dimensional problem): this is concerned with sequencing and permuting the patterns so as to minimise the number of times the slitting knives have to be moved. This is a special case of the generalised travelling salesman problem. diff --git a/wiki/wikipedia/3490.txt b/wiki/wikipedia/3490.txt deleted file mode 100644 index 612c55c2fece69e237c87c56ebaeb6e9c08d98c6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3490.txt +++ /dev/null @@ -1,19 +0,0 @@ -In mathematics, the three-gap theorem, three-distance theorem, or Steinhaus conjecture states that if one places n points on a circle, at angles of θ, 2θ, 3θ, ... from the starting point, then there will be at most three distinct distances between pairs of points in adjacent positions around the circle. When there are three distances, the largest of the three always equals the sum of the other two. Unless θ is a rational multiple of π, there will also be at least two distinct distances. - -This result was conjectured by Hugo Steinhaus, and proved in the 1950s by Vera T. Sós, János Surányi, and Stanisław Świerczkowski; more proofs were added by others later. Applications of the three-gap theorem include the study of plant growth and musical tuning systems, and the theory of light reflection within a mirrored square.
- -In phyllotaxis, the theory of plant growth, it has been observed that each successive leaf on the stems of many plants is turned from the previous leaf by the golden angle, approximately 137.5°. It has been suggested that this angle maximizes the sun-collecting power of the plant's leaves. If one looks end-on at a plant stem that has grown in this way, there will be at most three distinct angles between two leaves that are consecutive in the cyclic order given by this end-on view. - -For example, in the figure, the largest of these three angles occurs three times, between the leaves numbered 3 and 6, between leaves 4 and 7, and between leaves 5 and 8. The second-largest angle occurs five times, between leaves 6 and 1, 9 and 4, 7 and 2, 10 and 5, and 8 and 3. And the smallest angle occurs only twice, between leaves 1 and 9 and between leaves 2 and 10. The phenomenon of having three types of distinct gaps depends only on the fact that the growth pattern uses a constant rotation angle, and not on the relation of this angle to the golden ratio; the same phenomenon would happen for any other rotation angle, and not just for the golden angle. - -In music theory, the three-gap theorem implies that if a tuning system is generated by some number of consecutive multiples of a given musical interval, reduced to a cyclic sequence by considering two tones to be equivalent when they differ by whole numbers of octaves, then there are at most three different intervals between consecutive tones of the scale. For instance, the Pythagorean tuning is constructed in this way from multiples of a perfect fifth. It has only two distinct intervals between its tones, representing two different types of semitones, but if it were extended by one more step then the sequence of intervals between its tones would include a third, much shorter interval, the Pythagorean comma. - -The three-gap theorem also has applications to Sturmian words, infinite sequences of two symbols (for instance, "H" and "V") that can be used to describe the sequence of horizontal and vertical reflections of a light ray within a mirrored square, starting along a line of irrational slope, or equivalently the sequence of horizontal and vertical lines of the integer grid crossed by the starting line. For any positive integer n, such a sequence has exactly n + 1 distinct consecutive subsequences of length n. The three-gap theorem implies that these n + 1 subsequences occur with at most three distinct frequencies. If there are three frequencies, then the largest frequency must equal the sum of the other two. The proof involves partitioning the y-intercepts of the starting lines (modulo 1) into n + 1 subintervals within which the initial n elements of the sequence are the same, and applying the three-gap theorem to this partition. - -The three-gap theorem was conjectured by Hugo Steinhaus, and its first proofs were published in the late 1950s by Vera T. Sós, János Surányi, and Stanisław Świerczkowski. Several later proofs have also been published. - -The following simple proof is due to Frank Liang. Define a gap to be an arc A of the circle that extends between two consecutive points of the given set, and define a gap to be rigid if the arc A + θ of the same length, obtained by rotating A by an angle of θ, is not another gap. If A is not rigid, meaning that A + θ is another gap, its endpoints are farther along in the sequence in which the points were placed.
Repeatedly adding θ will produce equal-length gaps that are farther and farther along this sequence, which cannot continue indefinitely (unless the sequence of angles repeats, in which case there is only one gap). Therefore, if there is more than one length of gap, then every gap has the same length as a rigid gap. However, the only ways for a gap A to be rigid are for one of its two endpoints to be the last point in the placement sequence (so that the corresponding endpoint of A + θ is missing from the given points) or for one of the given points to land within A + θ, preventing it from being a gap. A point can only land within A + θ if it is the first point in the placement ordering, because otherwise its predecessor in the sequence would land within A, contradicting the assumption that A is a gap. So there can be at most three rigid gaps: the two on either side of the last point and the one in which the predecessor of the first point (if it were part of the sequence) would land. Because there are at most three rigid gaps, there are at most three lengths of gaps. - -The same proof additionally shows that, when there are exactly three gap lengths, the longest gap length is the sum of the other two. For, in this case, the rotated copy A + θ that has the first point in it is partitioned by that point into two smaller gaps, which must be the other two gaps. - -A closely related but earlier theorem, also called the three-gap theorem, is that if A is any arc of the circle, then the integer sequence of multiples of θ that land in A has at most three gaps between sequence values. Again, if there are three gaps then one is the sum of the other two. diff --git a/wiki/wikipedia/3491.txt b/wiki/wikipedia/3491.txt deleted file mode 100644 index 0006933dfa246e99726629913ab65ec321b05430..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3491.txt +++ /dev/null @@ -1,29 +0,0 @@ -In mathematical analysis, Young's inequality for integral operators is a bound on the $L^p\to L^q$ operator norm of an integral operator in terms of $L^r$ norms of the kernel itself. - -Assume that $X$ and $Y$ are measurable spaces, $K : X \times Y \to \mathbb{R}$ is measurable and $q,p,r\geq 1$ are such that $\frac{1}{q} = \frac{1}{p} + \frac{1}{r} - 1$. If -$$ -\int_{Y} |K(x, y)|^r \mathrm{d}y \le C^r \quad \text{for all } x\in X -$$ -and -$$ -\int_{X} |K(x, y)|^r \mathrm{d}x \le C^r \quad \text{for all } y\in Y, -$$ -then -$$ -\int_{X} \left|\int_{Y} K(x, y) f(y) \mathrm{d}y\right|^q \mathrm{d}x \le C^q \left( \int_{Y} |f(y)|^p \mathrm{d}y\right)^\frac{q}{p}. -$$ - -If $X = Y = \mathbb{R}^d$ and $K(x, y) = h(x - y)$, then the inequality becomes Young's convolution inequality. diff --git a/wiki/wikipedia/3492.txt b/wiki/wikipedia/3492.txt deleted file mode 100644 index 595ddb37616ecf92fbda0e3e5697cc59b7d48801..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3492.txt +++ /dev/null @@ -1,32 +0,0 @@ -In mathematics, the Golod–Shafarevich theorem was proved in 1964 by Evgeny Golod and Igor Shafarevich. It is a result in non-commutative homological algebra which solves the class field tower problem, by showing that class field towers can be infinite. - -Let A = K⟨x1, ..., xn⟩ be the free algebra over a field K in n = d + 1 non-commuting variables xi. - -Let J be the 2-sided ideal of A generated by homogeneous elements $f_j$ of A of degree $d_j$ with - -$2 \le d_1 \le d_2 \le \cdots$ - -where $d_j$ tends to infinity. Let $r_i$ be the number of $d_j$ equal to i. - -Let $B=A/J$, a graded algebra. Let $b_j = \dim B_j$.
- -The fundamental inequality of Golod and Shafarevich states that -$$ - b_j\ge nb_{j-1} -\sum_{i=2}^{j} b_{j-i} r_i. -$$ - -As a consequence: - -* B is infinite-dimensional if $r_i \le d^2/4$ for all i. - -This result has important applications in combinatorial group theory: - -* If G is a nontrivial finite p-group, then $r > d^2/4$, where $d = \dim H^1(G,\mathbb{Z}/p\mathbb{Z})$ and $r = \dim H^2(G,\mathbb{Z}/p\mathbb{Z})$ (the mod p cohomology groups of G). In particular, if G is a finite p-group with minimal number of generators d and has r relators in a given presentation, then $r > d^2/4$. - -* For each prime p, there is an infinite group G generated by three elements in which each element has order a power of p. The group G provides a counterexample to the generalised Burnside conjecture: it is a finitely generated infinite torsion group, although there is no uniform bound on the order of its elements. - -In class field theory, the class field tower of a number field K is created by iterating the Hilbert class field construction. The class field tower problem asks whether this tower is always finite; Hasse attributed this question to Furtwängler, though Furtwängler said he had heard it from Schreier. Another consequence of the Golod–Shafarevich theorem is that such towers may be infinite (in other words, do not always terminate in a field equal to its Hilbert class field). Specifically: - -* Let K be an imaginary quadratic field whose discriminant has at least 6 prime factors. Then the maximal unramified 2-extension of K has infinite degree. - -More generally, a number field with sufficiently many prime factors in the discriminant has an infinite class field tower. diff --git a/wiki/wikipedia/3493.txt b/wiki/wikipedia/3493.txt deleted file mode 100644 index 30d82e2b0971f3b99c4ca4ac81710df0af862f7d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3493.txt +++ /dev/null @@ -1,27 +0,0 @@ -In statistics, Fieller's theorem allows the calculation of a confidence interval for the ratio of two means. - -The variables a and b may be measured in different units, so there is no way to directly combine the standard errors, as they may also be in different units. The most complete discussion of this is given by Fieller (1954). - -Fieller showed that if a and b are (possibly correlated) means of two samples with expectations $\mu_a$ and $\mu_b$, and variances $\nu_{11}\sigma^2$ and $\nu_{22}\sigma^2$ and covariance $\nu_{12}\sigma^2$, and if $\nu_{11}, \nu_{12}, \nu_{22}$ are all known, then a $(1 - \alpha)$ confidence interval $(m_L, m_U)$ for $\mu_a/\mu_b$ is given by -$$ -(m_L, m_U) = \frac{1}{(1-g)} \left[\frac{a}{b} - \frac{g\nu_{12}}{\nu_{22}} \mp \frac{t_{r,\alpha}s}{b} \sqrt{\nu_{11} - 2\frac{a}{b}\nu_{12} + \frac{a^2}{b^2} \nu_{22} - g\left(\nu_{11} - \frac{\nu_{12}^2}{\nu_{22}}\right)} \right] -$$ - -where -$$ -g=\frac{t^{2}_{r,\alpha}s^2\nu_{22}}{b^2}. -$$ - -Here $s^2$ is an unbiased estimator of $\sigma^2$ based on r degrees of freedom, and $t_{r,\alpha}$ is the $\alpha$-level deviate from the Student's t-distribution based on r degrees of freedom. - -Three features of this formula are important in this context: - -a) The expression inside the square root has to be positive, or else the resulting interval will be imaginary. - -b) When g is very close to 1, the confidence interval is infinite. - -c) When g is greater than 1, the overall divisor outside the square brackets is negative and the confidence interval is exclusive. - -One problem is that, when g is not small, the confidence interval can blow up when using Fieller's theorem.
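As an illustration of how the interval is computed and of the failure modes (a)–(c) above, here is a small Python sketch; the function name is made up for this example, SciPy is assumed for the t quantile, and $t_{r,\alpha}$ is taken as the two-sided quantile:

import math
from scipy.stats import t as student_t

def fieller_interval(a, b, v11, v12, v22, s2, r, alpha=0.05):
    """(1 - alpha) confidence interval for mu_a / mu_b by Fieller's theorem.

    a, b : sample means; v11*s2, v22*s2, v12*s2 : their (co)variances;
    s2   : unbiased estimate of sigma^2 on r degrees of freedom.
    """
    t = student_t.ppf(1 - alpha / 2, r)  # two-sided t quantile (assumption)
    g = t**2 * s2 * v22 / b**2
    if g >= 1:
        # cases (b)/(c): the interval is infinite or exclusive
        raise ValueError("g >= 1: Fieller interval is not a finite inclusive interval")
    radicand = v11 - 2 * (a / b) * v12 + (a / b) ** 2 * v22 - g * (v11 - v12**2 / v22)
    if radicand < 0:
        # case (a): the interval would be imaginary
        raise ValueError("negative radicand: interval is imaginary")
    centre = a / b - g * v12 / v22
    half = t * math.sqrt(s2) / abs(b) * math.sqrt(radicand)
    return ((centre - half) / (1 - g), (centre + half) / (1 - g))

# e.g. fieller_interval(a=5.0, b=2.0, v11=0.1, v12=0.02, v22=0.05, s2=1.0, r=10)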
Andy Grieve has provided a Bayesian solution where the CIs are still sensible, albeit wide. Bootstrapping provides another alternative that does not require the assumption of normality. - -Edgar C. Fieller (1907-1960) first started working on this problem while in Karl Pearson's group at University College London, where he was employed for five years after graduating in Mathematics from King's College, Cambridge. He then worked for the Boots Pure Drug Company as a statistician and operational researcher before becoming deputy head of operational research at RAF Fighter Command during the Second World War, after which he was appointed the first head of the Statistics Section at the National Physical Laboratory. diff --git a/wiki/wikipedia/3494.txt b/wiki/wikipedia/3494.txt deleted file mode 100644 index b9dc0d68ae5470a71ec90344c6cd2d86bb01b12f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3494.txt +++ /dev/null @@ -1,251 +0,0 @@ -In mathematics, Cauchy's integral formula, named after Augustin-Louis Cauchy, is a central statement in complex analysis. It expresses the fact that a holomorphic function defined on a disk is completely determined by its values on the boundary of the disk, and it provides integral formulas for all derivatives of a holomorphic function. Cauchy's formula shows that, in complex analysis, "differentiation is equivalent to integration": complex differentiation, like integration, behaves well under uniform limits – a result that does not hold in real analysis. - -Let U be an open subset of the complex plane C, and suppose the closed disk D defined as -$$ -D = \bigl\{z:|z - z_0| \leq r\bigr\} -$$ - -is completely contained in U. Let f : U → C be a holomorphic function, and let γ be the circle, oriented counterclockwise, forming the boundary of D. Then for every a in the interior of D, -$$ -f(a) = \frac{1}{2\pi i} \oint_\gamma \frac{f(z)}{z-a} dz. -$$ - -The proof of this statement uses the Cauchy integral theorem and like that theorem, it only requires f to be complex differentiable. - -Since $1/(z-a)$ can be expanded as a power series in the variable $a$ -$$ -\frac{1}{z-a} = \frac{1+\frac{a}{z}+\left(\frac{a}{z}\right)^2+\cdots}{z} -$$ - -it follows that holomorphic functions are analytic, i.e. they can be expanded as convergent power series. - -In particular f is actually infinitely differentiable, with -$$ -f^{(n)}(a) = \frac{n!}{2\pi i} \oint_\gamma \frac{f(z)}{\left(z-a\right)^{n+1}} dz. -$$ - -This formula is sometimes referred to as Cauchy's differentiation formula. - -The theorem stated above can be generalized. The circle γ can be replaced by any closed rectifiable curve in U which has winding number one about a. Moreover, as for the Cauchy integral theorem, it is sufficient to require that f be holomorphic in the open region enclosed by the path and continuous on its closure. - -Note that not every continuous function on the boundary can be used to produce a function inside the boundary that fits the given boundary function. For instance, if we put the function f(z) = 1/z, defined for |z| = 1, into the Cauchy integral formula, we get zero for all points inside the circle. In fact, giving just the real part on the boundary of a holomorphic function is enough to determine the function up to an imaginary constant — there is only one imaginary part on the boundary that corresponds to the given real part, up to addition of a constant. 
We can use a combination of a Möbius transformation and the Stieltjes inversion formula to construct the holomorphic function from the real part on the boundary. For example, the function f(z) = i − iz has real part Re f(z) = Im z. On the unit circle this can be written $\frac{i}{2z} - \frac{iz}{2}$. Using the Möbius transformation and the Stieltjes formula we construct the function inside the circle. The i/z term makes no contribution, and we find the function −iz. This has the correct real part on the boundary, and also gives us the corresponding imaginary part, but off by a constant, namely i. - -By using the Cauchy integral theorem, one can show that the integral over C (or the closed rectifiable curve) is equal to the same integral taken over an arbitrarily small circle around a. Since f(z) is continuous, we can choose a circle small enough on which f(z) is arbitrarily close to f(a). On the other hand, the integral -$$ -\oint_C \frac{1}{z-a} dz = 2 \pi i, -$$ - -over any circle C centered at a. This can be calculated directly via a parametrization (integration by substitution) $z(t) = a + \varepsilon e^{it}$ where $0 \leq t \leq 2\pi$ and $\varepsilon$ is the radius of the circle. - -Letting $\varepsilon \to 0$ gives the desired estimate - -\begin{align} - -\left | \frac{1}{2 \pi i} \oint_C \frac{f(z)}{z-a} dz - f(a) \right | - -&= \left | \frac{1}{2 \pi i} \oint_C \frac{f(z)-f(a)}{z-a} dz \right |\\[.5em] - -&= \left | \frac{1}{2\pi i}\int_0^{2\pi}\left(\frac{f\bigl(z(t)\bigr)-f(a)}{\varepsilon e^{it}}\cdot\varepsilon e^{it}i\right )dt\right |\\ - -&\leq \frac{1}{2 \pi} \int_0^{2\pi} \frac{ |f\bigl(z(t)\bigr) - f(a)| } {\varepsilon} \varepsilon dt\\[.5em] - -&\leq \max_{|z-a|=\varepsilon}|f(z) - f(a)| - -\xrightarrow[\varepsilon\to 0]{} 0. - -\end{align} - -Let -$$ -g(z)=\frac{z^2}{z^2+2z+2}, -$$ - -and let C be the contour described by |z| = 2 (the circle of radius 2). - -To find the integral of g(z) around the contour C, we need to know the singularities of g(z). Observe that we can rewrite g as follows: -$$ -g(z)=\frac{z^2}{(z-z_1)(z-z_2)} -$$ - -where $z_1 = -1 + i$ and $z_2 = -1 - i$. - -Thus, g has poles at $z_1$ and $z_2$. The moduli of these points are less than 2 and thus lie inside the contour. This integral can be split into two smaller integrals by the Cauchy–Goursat theorem; that is, we can express the integral around the contour as the sum of the integrals around $z_1$ and $z_2$, where the contour is a small circle around each pole. Call these contours $C_1$ around $z_1$ and $C_2$ around $z_2$. - -Now, each of these smaller integrals can be evaluated by the Cauchy integral formula, but they first must be rewritten to apply the theorem. For the integral around $C_1$, define $f_1$ as $f_1(z) = (z - z_1)g(z)$. This is analytic (since the contour does not contain the other singularity). We can simplify $f_1$ to be: -$$ -f_1(z)=\frac{z^2}{z-z_2} -$$ - -and now -$$ -g(z)=\frac{f_1(z)}{z-z_1}. -$$ - -Since the Cauchy integral formula says that: -$$ -\oint_C \frac{f_1(z)}{z-a} dz=2\pi i\cdot f_1(a), -$$ - -we can evaluate the integral as follows: - - - -\oint_{C_1} g(z)dz - -=\oint_{C_1} \frac{f_1(z)}{z-z_1}dz - -=2\pi i\frac{z_1^2}{z_1-z_2}. - - - -Doing likewise for the other contour: -$$ -f_2(z)=\frac{z^2}{z-z_1}, -$$ - -we evaluate - - - -\oint_{C_2} g(z)dz - -=\oint_{C_2} \frac{f_2(z)}{z-z_2}dz - -=2\pi i\frac{z_2^2}{z_2-z_1}.
- - - -The integral around the original contour C then is the sum of these two integrals: - -\begin{align} - -\oint_C g(z)dz - -&{}= \oint_{C_1} g(z)dz - -+ \oint_{C_2} g(z)dz \\[.5em] - -&{}= 2\pi i\left(\frac{z_1^2}{z_1-z_2}+\frac{z_2^2}{z_2-z_1}\right) \\[.5em] - -&{}= 2\pi i(-2) \\[.3em] - -&{}=-4\pi i. - -\end{align} - -An elementary trick using partial fraction decomposition: - - - -\oint_C g(z)dz - -=\oint_C \left(1-\frac{1}{z-z_1}-\frac{1}{z-z_2}\right) dz - -=0-2\pi i-2\pi i - -=-4\pi i - - - -The integral formula has broad applications. First, it implies that a function which is holomorphic in an open set is in fact infinitely differentiable there. Furthermore, it is an analytic function, meaning that it can be represented as a power series. The proof of this uses the dominated convergence theorem and the geometric series applied to -$$ -f(\zeta) = \frac{1}{2\pi i}\int_C \frac{f(z)}{z-\zeta}dz. -$$ - -The formula is also used to prove the residue theorem, which is a result for meromorphic functions, and a related result, the argument principle. It is known from Morera's theorem that the uniform limit of holomorphic functions is holomorphic. This can also be deduced from Cauchy's integral formula: indeed the formula also holds in the limit and the integrand, and hence the integral, can be expanded as a power series. In addition the Cauchy formulas for the higher order derivatives show that all these derivatives also converge uniformly. - -The analog of the Cauchy integral formula in real analysis is the Poisson integral formula for harmonic functions; many of the results for holomorphic functions carry over to this setting. No such results, however, are valid for more general classes of differentiable or real analytic functions. For instance, the existence of the first derivative of a real function need not imply the existence of higher order derivatives, nor in particular the analyticity of the function. Likewise, the uniform limit of a sequence of (real) differentiable functions may fail to be differentiable, or may be differentiable but with a derivative which is not the limit of the derivatives of the members of the sequence. - -Another consequence is that if $f(z) = \sum a_n z^n$ is holomorphic in $|z| < R$ and $0 < r < R$ then the coefficients $a_n$ satisfy Cauchy's inequality -$$ -|a_n|\le r^{-n} \sup_{|z|=r}|f(z)|. -$$ - -From Cauchy's inequality, one can easily deduce that every bounded entire function must be constant (which is Liouville's theorem). - -A version of Cauchy's integral formula is the Cauchy–Pompeiu formula, and holds for smooth functions as well, as it is based on Stokes' theorem. Let D be a disc in C and suppose that f is a complex-valued $C^1$ function on the closure of D. Then -$$ -f(\zeta) = \frac{1}{2\pi i}\int_{\partial D} \frac{f(z) dz}{z-\zeta} - \frac{1}{\pi}\iint_D \frac{\partial f}{\partial \bar{z}}(z) \frac{dx\wedge dy}{z-\zeta}. -$$ - -One may use this representation formula to solve the inhomogeneous Cauchy–Riemann equations in D. Indeed, if φ is a function in D, then a particular solution f of the equation is a holomorphic function outside the support of μ. Moreover, if in an open set D, -$$ -d\mu = \frac{1}{2\pi i}\varphi dz\wedge d\bar{z} -$$ - -for some $\varphi \in C^k$ (where $k \geq 1$), then $f(\zeta, \bar{\zeta})$ is also in $C^k$ and satisfies the equation -$$ -\frac{\partial f}{\partial\bar{z}} = \varphi(z,\bar{z}).
-$$ - -The first conclusion is, succinctly, that the convolution μ ∗ k(z) of a compactly supported measure with the Cauchy kernel -$$ -k(z) = \operatorname{p.v.}\frac{1}{z} -$$ - -is a holomorphic function off the support of μ. Here p.v. denotes the principal value. The second conclusion asserts that the Cauchy kernel is a fundamental solution of the Cauchy–Riemann equations. Note that for smooth complex-valued functions f of compact support on C the generalized Cauchy integral formula simplifies to -$$ -f(\zeta) = \frac{1}{2\pi i}\iint \frac{\partial f}{\partial \bar{z}}\frac{dz\wedge d\bar{z}}{z-\zeta}, -$$ - -and is a restatement of the fact that, considered as a distribution, $(\pi z)^{-1}$ is a fundamental solution of the Cauchy–Riemann operator ∂/∂z̄. The generalized Cauchy integral formula can be deduced for any bounded open region X with $C^1$ boundary ∂X from this result and the formula for the distributional derivative of the characteristic function χX of X: -$$ - \frac {\partial \chi_X}{\partial \bar z}= \frac{i}{2} \oint_{\partial X} dz, -$$ - -where the distribution on the right hand side denotes contour integration along ∂X. - -In several complex variables, the Cauchy integral formula can be generalized to polydiscs. Let D be the polydisc given as the Cartesian product of n open discs $D_1, \ldots, D_n$: -$$ -D = \prod_{i=1}^n D_i. -$$ - -Suppose that f is a holomorphic function in D continuous on the closure of D. Then -$$ -f(\zeta) = \frac{1}{\left(2\pi i\right)^n}\int\cdots\int_{\partial D_1\times\cdots\times\partial D_n} \frac{f(z_1,\ldots,z_n)}{(z_1-\zeta_1)\cdots(z_n-\zeta_n)} dz_1\cdots dz_n -$$ - -where $\zeta = (\zeta_1,\ldots,\zeta_n) \in D$. - -The Cauchy integral formula is generalizable to real vector spaces of two or more dimensions. The insight into this property comes from geometric algebra, where objects beyond scalars and vectors (such as planar bivectors and volumetric trivectors) are considered, and a proper generalization of Stokes' theorem. - -Geometric calculus defines a derivative operator $\nabla = \hat{e}_i \partial_i$ under its geometric product — that is, for a k-vector field ψ(r), the derivative ∇ψ generally contains terms of grade k + 1 and k − 1. For example, a vector field (k = 1) generally has in its derivative a scalar part, the divergence (k = 0), and a bivector part, the curl (k = 2). This particular derivative operator has a Green's function: -$$ -G\left(\mathbf r, \mathbf r'\right) = \frac{1}{S_n} \frac{\mathbf r - \mathbf r'}{\left|\mathbf r - \mathbf r'\right|^n} -$$ - -where $S_n$ is the surface area of a unit n-ball in the space (that is, $S_2 = 2\pi$, the circumference of a circle with radius 1, and $S_3 = 4\pi$, the surface area of a sphere with radius 1). By definition of a Green's function, -$$ -\nabla G\left(\mathbf r, \mathbf r'\right) = \delta\left(\mathbf r- \mathbf r'\right). -$$ - -It is this useful property that can be used, in conjunction with the generalized Stokes theorem: -$$ -\oint_{\partial V} d\mathbf S f(\mathbf r) = \int_V d\mathbf V \nabla f(\mathbf r) -$$ - -where, for an n-dimensional vector space, dS is an (n − 1)-vector and dV is an n-vector. The function f(r) can, in principle, be composed of any combination of multivectors.
The proof of Cauchy's integral theorem for higher dimensional spaces relies on using the generalized Stokes theorem on the quantity G(r, r′) f(r′) and use of the product rule: - -\oint_{\partial V'} G\left(\mathbf r, \mathbf r'\right) d\mathbf S' f\left(\mathbf r'\right) - -= \int_V \left(\left[\nabla' G\left(\mathbf r, \mathbf r'\right)\right] f\left(\mathbf r'\right) + G\left(\mathbf r, \mathbf r'\right) \nabla' f\left(\mathbf r'\right)\right) d\mathbf V - -When ∇f = 0, f(r) is called a monogenic function, the generalization of holomorphic functions to higher-dimensional spaces — indeed, it can be shown that the Cauchy–Riemann condition is just the two-dimensional expression of the monogenic condition. When that condition is met, the second term in the right-hand integral vanishes, leaving only - -\oint_{\partial V'} G\left(\mathbf r, \mathbf r'\right) d\mathbf S' f\left(\mathbf r'\right) - -= \int_V \left[\nabla' G\left(\mathbf r, \mathbf r'\right)\right] f\left(\mathbf r'\right) d\mathbf V - -= -\int_V \delta\left(\mathbf r - \mathbf r'\right) f\left(\mathbf r'\right) d\mathbf V - -=- i_n f(\mathbf r) - -where $i_n$ is that algebra's unit n-vector, the pseudoscalar. The result is - -f(\mathbf r) - -=- \frac{1}{i_n} \oint_{\partial V} G\left(\mathbf r, \mathbf r'\right) d\mathbf S f\left(\mathbf r'\right) - -= -\frac{1}{i_n} \oint_{\partial V} \frac{\mathbf r - \mathbf r'}{S_n \left|\mathbf r - \mathbf r'\right|^n} d\mathbf S f\left(\mathbf r'\right) - -Thus, as in the two-dimensional (complex analysis) case, the value of an analytic (monogenic) function at a point can be found by an integral over the surface surrounding the point, and this is valid not only for scalar functions but vector and general multivector functions as well. diff --git a/wiki/wikipedia/3495.txt b/wiki/wikipedia/3495.txt deleted file mode 100644 index 2a67d9354019c2c1acaa74efbdc8c6d2438bf79e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3495.txt +++ /dev/null @@ -1,410 +0,0 @@ -In mathematics, especially abstract algebra, loop theory and quasigroup theory are active research areas with many open problems. As in other areas of mathematics, such problems are often made public at professional conferences and meetings. Many of the problems posed here first appeared in the Loops (Prague) conferences and the Mile High (Denver) conferences. - -<br>
    - -Let L be a Moufang loop with normal abelian subgroup (associative subloop) M of odd order such that L/M is a cyclic group of order bigger than 3. (i) Is L a group? (ii) If the orders of M and L/M are relatively prime, is L a group? - -
    - -*Proposed: by Michael Kinyon, based on (Chein and Rajah, 2000) - -*Comments: The assumption that L/M has order bigger than 3 is important, as there is a (commutative) Moufang loop L of order 81 with normal commutative subgroup of order 27. - -
    - -Conjecture: Any finite commutative Moufang loop of period 3 can be embedded into a commutative alternative algebra. - -
    - -*Proposed: by Alexander Grishkov at Loops '03, Prague 2003 - -
    - -Conjecture: Let L be a finite Moufang loop and Φ(L) the intersection of all maximal subloops of L. Then Φ(L) is a normal nilpotent subloop of L. - -
    - -*Proposed: by Alexander Grishkov at Loops '11, Třešť 2011 - -
- -For a group $G$, define $M(G,2)$ on $G \times C_2$ by -$$ -(g,0)(h,0)=(gh,0) -$$, $(g,0)(h,1)=(hg,1)$, $(g,1)(h,0)=(gh^{-1},1)$, $(g,1)(h,1)=(h^{-1}g,0)$. Find a minimal presentation for the Moufang loop $M(G,2)$ with respect to a presentation for $G$. - -<br>
    - -*Proposed: by Petr Vojtěchovský at Loops '03, Prague 2003 - -*Comments: Chein showed in (Chein, 1974) that $M(G,2)$ is a Moufang loop that is nonassociative if and only if $G$ is nonabelian. Vojtěchovský (Vojtěchovský, 2003) found a minimal presentation for $M(G,2)$ when $G$ is a 2-generated group. - -
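Chein's doubling construction is easy to realize on a computer for experimentation. The following Python sketch is our own illustration (the helper names are arbitrary): it implements the four defining rules above for $G = S_3$, represented by permutation tuples, and exhibits a nonassociative triple, in line with Chein's criterion that $M(G,2)$ is nonassociative exactly when $G$ is nonabelian.

```python
from itertools import permutations, product

G = list(permutations(range(3)))                     # S_3 as permutation tuples
mul = lambda g, h: tuple(g[h[i]] for i in range(3))  # composition g o h
inv = lambda g: tuple(sorted(range(3), key=lambda i: g[i]))  # inverse permutation

def m2(x, y):
    """Product in M(G,2), following the four defining rules above."""
    (g, s), (h, t) = x, y
    if (s, t) == (0, 0): return (mul(g, h), 0)
    if (s, t) == (0, 1): return (mul(h, g), 1)
    if (s, t) == (1, 0): return (mul(g, inv(h)), 1)
    return (mul(inv(h), g), 0)

M = list(product(G, (0, 1)))                         # the 12 elements of M(S_3, 2)
witness = next((x, y, z) for x in M for y in M for z in M
               if m2(m2(x, y), z) != m2(x, m2(y, z)))
print(witness)                                       # a triple that fails associativity
```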
- -Let p and q be distinct odd primes. If q is not congruent to 1 modulo p, are all Moufang loops of order $p^2q^3$ groups? What about $pq^4$? - -<br>
- -*Proposed: by Andrew Rajah at Loops '99, Prague 1999 - -*Comments: The former has been solved by Rajah and Chee (2011) where they showed that for distinct odd primes $p_1 < \cdots < p_m < q < r_1 < \cdots < r_n$, all Moufang loops of order $p_1^2\cdots p_m^2q^3r_1^2\cdots r_n^2$ are groups if and only if q is not congruent to 1 modulo $p_i$ for each $i$. - -<br>
    - -Is there a Moufang loop of odd order with trivial nucleus? - -
    - -*Proposed: by Andrew Rajah at Loops '03, Prague 2003 - -
    - -Find presentations for all nonassociative finite simple Moufang loops in the variety of Moufang loops. - -
    - -*Proposed: by Petr Vojtěchovský at Loops '03, Prague 2003 - -*Comments: It is shown in (Vojtěchovský, 2003) that every nonassociative finite simple Moufang loop is generated by 3 elements, with explicit formulas for the generators. - -
    - -Conjecture: Let M be a finite Moufang loop of exponent n with m generators. Then there exists a function f(n,m) such that |M| < f(n,m). - -
- -*Proposed: by Alexander Grishkov at Loops '11, Třešť 2011 - -*Comments: In the case when the exponent n is a prime p different from 3, the conjecture was proved by Grishkov. If p = 3 and M is commutative, it was proved by Bruck. The general case for p = 3 was proved by G. Nagy. The case $n = p^m$ holds by the Grishkov–Zelmanov Theorem. - -<br>
    - -Conjecture: Let L be a finitely generated Moufang loop of exponent 4 or 6. Then L is finite. - -
    - -*Proposed: by Alexander Grishkov at Loops '11, Třešť 2011 - -
- -Let $MF_n$ be the free Moufang loop with $n$ generators. - -Conjecture: $MF_3$ is torsion free but $MF_n$ with $n > 4$ is not. - -<br>
    - -*Proposed: by Alexander Grishkov at Loops '03, Prague 2003 - -
    - -For a left Bol loop Q, find some relation between the nilpotency degree of the left multiplication group of Q and the structure of Q. - -
    - -*Proposed: at Milehigh conference on quasigroups, loops, and nonassociative systems, Denver 2005 - -
- -Let $(Q,*)$, $(Q,+)$ be two quasigroups defined on the same underlying set $Q$. The distance $d(*,+)$ is the number of pairs $(a,b)$ in $Q\times Q$ such that $a*b\ne a+b $. Call a class of finite quasigroups quadratic if there is a positive real number $\alpha$ such that any two quasigroups $(Q,*)$, $(Q,+)$ of order $n$ from the class satisfying $d(*,+) < \alpha n^2$ are isomorphic. Are Moufang loops quadratic? Are Bol loops quadratic? - -<br>
    - -*Proposed: by Aleš Drápal at Loops '99, Prague 1999 - -*Comments: Drápal proved in (Drápal, 1992) that groups are quadratic with $\alpha=1/9$, and in (Drápal, 2000) that 2-groups are quadratic with $\alpha=1/4$. - -
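Since $d(*,+)$ is just a Hamming distance between Cayley tables, it is trivial to compute for explicit examples. A small illustrative Python sketch on the carrier $\{0,1,2,3\}$:

```python
def quasigroup_distance(t1, t2):
    """Number of cells where two Cayley tables on {0,...,n-1} disagree."""
    n = len(t1)
    return sum(t1[a][b] != t2[a][b] for a in range(n) for b in range(n))

# Z_4 versus the Klein four-group, both on the carrier {0, 1, 2, 3}
z4 = [[(a + b) % 4 for b in range(4)] for a in range(4)]
klein = [[a ^ b for b in range(4)] for a in range(4)]
print(quasigroup_distance(z4, klein))  # 4: the tables differ in 4 of the 16 cells
```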
    - -Determine the Campbell–Hausdorff series for analytic Bol loops. - -
    - -*Proposed: by M. A. Akivis and V. V. Goldberg at Loops '99, Prague 1999 - -*Comments: The problem has been partially solved for local analytic Bruck loops in (Nagy, 2002). - -
- -A loop is universally flexible if every one of its loop isotopes is flexible, that is, satisfies $(xy)x = x(yx)$. A loop is middle Bol if every one of its loop isotopes has the antiautomorphic inverse property, that is, satisfies $(xy)^{-1} = y^{-1}x^{-1}$. Is there a finite, universally flexible loop that is not middle Bol? - -<br>
    - -*Proposed: by Michael Kinyon at Loops '03, Prague 2003 - -
    - -Is there a finite simple nonassociative Bol loop with nontrivial conjugacy classes? - -
    - -*Proposed: by Kenneth W. Johnson and Jonathan D. H. Smith at the 2nd Mile High Conference on Nonassociative Mathematics, Denver 2009 - -
    - -Let Q be a loop whose inner mapping group is nilpotent. Is Q nilpotent? Is Q solvable? - -
    - -*Proposed: at Loops '03 and '07, Prague 2003 and 2007 - -*Comments: The answer to the first question is affirmative if Q is finite (Niemenmaa 2009). The problem is open in the general case. - -
    - -Let Q be a loop with abelian inner mapping group. Is Q nilpotent? If so, is there a bound on the nilpotency class of Q? In particular, can the nilpotency class of Q be higher than 3? - -
- -*Proposed: at Loops '07, Prague 2007 - -*Comments: When the inner mapping group Inn(Q) is finite and abelian, then Q is nilpotent (Niemenmaa and Kepka). The first question is therefore open only in the infinite case. Call a loop Q of Csörgõ type if it is nilpotent of class at least 3, and Inn(Q) is abelian. No loop of Csörgõ type of nilpotency class higher than 3 is known. Loops of Csörgõ type exist (Csörgõ, 2004), Buchsteiner loops of Csörgõ type exist (Csörgõ, Drápal and Kinyon, 2007), and Moufang loops of Csörgõ type exist (Nagy and Vojtěchovský, 2007). On the other hand, there are no groups of Csörgõ type (folklore), there are no commutative Moufang loops of Csörgõ type (Bruck), and there are no Moufang p-loops of Csörgõ type for p > 3 (Nagy and Vojtěchovský, 2007). - -<br>
    - -Determine the number of nilpotent loops of order 24 up to isomorphism. - -
    - -*Proposed: by Petr Vojtěchovský at the 2nd Mile High Conference on Nonassociative Mathematics, Denver 2009 - -*Comment: The counts are known for n < 24, see (Daly and Vojtěchovský, 2010). - -
    - -Construct a finite nilpotent loop with no finite basis for its laws. - -
    - -*Proposed: by M. R. Vaughan-Lee in the Kourovka Notebook of Unsolved Problems in Group Theory - -*Comment: There is a finite loop with no finite basis for its laws (Vaughan-Lee, 1979) but it is not nilpotent. - -
    - -Are there infinite simple paramedial quasigroups? - -
    - -*Proposed: by Jaroslav Ježek and Tomáš Kepka at Loops '03, Prague 2003 - -
    - -A variety V of quasigroups is isotopically universal if every quasigroup is isotopic to a member of V. Is the variety of loops a minimal isotopically universal variety? Does every isotopically universal variety contain the variety of loops or its parastrophes? - -
    - -*Proposed: by Tomáš Kepka and Petr Němec at Loops '03, Prague 2003 - -*Comments: Every quasigroup is isotopic to a loop, hence the variety of loops is isotopically universal. - -
    - -Does there exist a quasigroup Q of order q = 14, 18, 26 or 42 such that the operation * defined on Q by x * y = y - xy is a quasigroup operation? - -
    - -*Proposed: by Parascovia Syrbu at Loops '03, Prague 2003 - -*Comments: see (Conselo et al., 1998) - -
- -Construct a latin square $L$ of order $n$ as follows: Let $G = K_{n,n}$ be the complete bipartite graph with distinct weights on its $n^2$ edges. Let $M_1$ be the cheapest matching in $G$, $M_2$ the cheapest matching in $G$ with $M_1$ removed, and so on. Each matching $M_i$ determines a permutation $p_i$ of $1, \ldots, n$. Let $L$ be obtained from $G$ by placing the permutation $p_i$ into row $i$ of $L$. Does this procedure result in a uniform distribution on the space of latin squares of order $n$? - -<br>
    - -*Proposed: by Gábor Nagy at the 2nd Mile High Conference on Nonassociative Mathematics, Denver 2009 - -
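The procedure can be simulated directly. The sketch below is an illustration of ours, assuming SciPy: `linear_sum_assignment` computes the cheapest perfect matching, and used edges are removed by assigning them a prohibitively large weight (valid because an $(n-i)$-regular bipartite graph always retains a perfect matching). Independent uniform weights are one natural reading of "distinct weights".

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def matching_latin_square(n, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.random((n, n))                      # weights on the n^2 edges of K_{n,n}
    big = n + 1.0                               # exceeds the cost of any honest matching
    L = np.zeros((n, n), dtype=int)
    for i in range(n):
        rows, cols = linear_sum_assignment(w)   # cheapest matching M_{i+1}
        L[i, rows] = cols                       # permutation p_{i+1} becomes row i
        w[rows, cols] = big                     # remove the matched edges
    return L

L = matching_latin_square(5)
assert all(len(set(row)) == 5 for row in L)     # rows are permutations
assert all(len(set(col)) == 5 for col in L.T)   # matchings are edge-disjoint
print(L)
```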
    - -For a loop Q, let Mlt(Q) denote the multiplication group of Q, that is, the group generated by all left and right translations. Is |Mlt(Q)| < f(|Q|) for some variety of loops and for some polynomial f? - -
    - -*Proposed: at the Milehigh conference on quasigroups, loops, and nonassociative systems, Denver 2005 - -
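For small loops the multiplication group can be computed by brute force: take all left and right translations (rows and columns of the Cayley table, read as permutations) and close under composition. An illustrative Python sketch; it is only practical when the generated group is small, which is precisely the size the problem asks to control.

```python
def mlt(table):
    """Elements of Mlt(Q) for a loop given by its Cayley table."""
    n = len(table)
    gens = [tuple(table[x]) for x in range(n)]                        # left translations
    gens += [tuple(table[x][y] for x in range(n)) for y in range(n)]  # right translations
    group = set(gens)
    while True:
        new = {tuple(g[h[i]] for i in range(n))
               for g in group for h in gens} - group
        if not new:
            break
        group |= new
    return group

z5 = [[(a + b) % 5 for b in range(5)] for a in range(5)]
print(len(mlt(z5)))  # 5: for an abelian group, Mlt(Q) is just Q itself
```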
    - -Does every finite alternative loop, that is, every loop satisfying x(xy) = (xx)y and x(yy) = (xy)y, have 2-sided inverses? - -
    - -*Proposed: by Warren D. Smith - -*Comments: There are infinite alternative loops without 2-sided inverses, cf. (Ormes and Vojtěchovský, 2007) - -
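Both alternative laws and the existence of two-sided inverses are finitely checkable on a Cayley table, which is how small examples and counterexamples are usually searched for. A short illustrative Python sketch:

```python
def is_alternative(t):
    """Check x(xy) = (xx)y and x(yy) = (xy)y on a finite Cayley table."""
    n = len(t)
    left = all(t[x][t[x][y]] == t[t[x][x]][y] for x in range(n) for y in range(n))
    right = all(t[x][t[y][y]] == t[t[x][y]][y] for x in range(n) for y in range(n))
    return left and right

def has_two_sided_inverses(t, e):
    """Does every x have some y with xy = yx = e?"""
    n = len(t)
    return all(any(t[x][y] == e and t[y][x] == e for y in range(n))
               for x in range(n))

z4 = [[(a + b) % 4 for b in range(4)] for a in range(4)]
print(is_alternative(z4), has_two_sided_inverses(z4, 0))  # True True
```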
    - -Find a nonassociative finite simple automorphic loop, if such a loop exists. - -
- -*Proposed: by Michael Kinyon at Loops '03, Prague 2003 - -*Comments: It is known that such a loop cannot be commutative (Grishkov, Kinyon and Nagy, 2013) nor have odd order (Kinyon, Kunen, Phillips and Vojtěchovský, 2013). - -<br>
- -We say that a variety V of loops satisfies the Moufang theorem if for every loop Q in V the following implication holds: for every x, y, z in Q, if x(yz) = (xy)z then the subloop generated by x, y, z is a group. Is every variety that satisfies the Moufang theorem contained in the variety of Moufang loops? - -<br>
    - -*Proposed by: Andrew Rajah at Loops '11, Třešť 2011 - -
- -A loop is Osborn if it satisfies the identity $x((yz)x) = (x^{\lambda}\backslash y)(zx)$, where $x^{\lambda}$ denotes the left inverse of $x$. Is every Osborn loop universal, that is, is every isotope of an Osborn loop Osborn? If not, is there a nice identity characterizing universal Osborn loops? - -<br>
    - -*Proposed: by Michael Kinyon at Milehigh conference on quasigroups, loops, and nonassociative systems, Denver 2005 - -*Comments: Moufang and conjugacy closed loops are Osborn. See (Kinyon, 2005) for more. - -The following problems were posed as open at various conferences and have since been solved. - -
    - -Is there a Buchsteiner loop that is not conjugacy closed? Is there a finite simple Buchsteiner loop that is not conjugacy closed? - -
    - -*Proposed: at Milehigh conference on quasigroups, loops, and nonassociative systems, Denver 2005 - -*Solved by: Piroska Csörgõ, Aleš Drápal, and Michael Kinyon - -*Solution: The quotient of a Buchsteiner loop by its nucleus is an abelian group of exponent 4. In particular, no nonassociative Buchsteiner loop can be simple. There exists a Buchsteiner loop of order 128 which is not conjugacy closed. - -
    - -Classify nonassociative Moufang loops of order 64. - -
    - -*Proposed: at Milehigh conference on quasigroups, loops, and nonassociative systems, Denver 2005 - -*Solved by: Gábor P. Nagy and Petr Vojtěchovský - -*Solution: There are 4262 nonassociative Moufang loops of order 64. They were found by the method of group modifications in (Vojtěchovský, 2006), and it was shown in (Nagy and Vojtěchovský, 2007) that the list is complete. The latter paper uses a linear-algebraic approach to Moufang loop extensions. - -
    - -Construct a conjugacy closed loop whose left multiplication group is not isomorphic to its right multiplication group. - -
- -*Proposed: by Aleš Drápal at Loops '03, Prague 2003 - -*Solved by: Aleš Drápal - -*Solution: There is such a loop of order 9. It can be obtained in the LOOPS package for GAP by the command CCLoop(9,1). - -<br>
    - -Is there a finite simple Bol loop that is not Moufang? - -
    - -* Proposed at: Loops '99, Prague 1999 - -* Solved by: Gábor P. Nagy, 2007. - -* Solution: A simple Bol loop that is not Moufang will be called proper. - -*: There are several families of proper simple Bol loops. A smallest proper simple Bol loop is of order 24 (Nagy 2008). - -*: There is also a proper simple Bol loop of exponent 2 (Nagy 2009), and a proper simple Bol loop of odd order (Nagy 2008). - -* Comments: The above constructions solved two additional open problems: - -** Is there a finite simple Bruck loop that is not Moufang? Yes, since any proper simple Bol loop of exponent 2 is Bruck. - -** Is every Bol loop of odd order solvable? No, as witnessed by any proper simple Bol loop of odd order. - -
    - -Is there a finite non-Moufang left Bol loop with trivial right nucleus? - -
    - -* Proposed: at Milehigh conference on quasigroups, loops, and nonassociative systems, Denver 2005 - -* Solved by: Gábor P. Nagy, 2007 - -* Solution: There is a finite simple left Bol loop of exponent 2 of order 96 with trivial right nucleus. Also, using an exact factorization of the Mathieu group M24, it is possible to construct a non-Moufang simple Bol loop which is a G-loop. - -
    - -Does every finite Moufang loop have the strong Lagrange property? - -
- -* Proposed: by Orin Chein at Loops '99, Prague 1999 - -* Solved by: Alexander Grishkov and Andrei Zavarnitsine, 2003 - -* Solution: Every finite Moufang loop has the strong Lagrange property (SLP). Here is an outline of the proof: - -** According to (Chein et al. 2003), it suffices to show SLP for nonassociative finite simple Moufang loops (NFSML). - -** It thus suffices to show that the order of a maximal subloop of an NFSML L divides the order of L. - -** A countable class of NFSMLs $M(q)$ was discovered in (Paige 1956), and no other NFSMLs exist by (Liebeck 1987). - -** Grishkov and Zavarnitsine matched maximal subloops of loops $M(q)$ with certain subgroups of groups with triality in (Grishkov and Zavarnitsine, 2003). - -<br>
    - -Is there a Moufang loop whose commutant is not normal? - -
- -* Proposed: by Andrew Rajah at Loops '03, Prague 2003 - -* Solved by: Alexander Grishkov and Andrei Zavarnitsine, 2017 - -* Solution: Yes, there is a Moufang loop of order $3^8$ with non-normal commutant. Gagola had previously claimed the opposite, but later found a hole in his proof. - -<br>
    - -Is the class of cores of Bol loops a quasivariety? - -
    - -* Proposed: by Jonathan D. H. Smith and Alena Vanžurová at Loops '03, Prague 2003 - -* Solved by: Alena Vanžurová, 2004. - -* Solution: No, the class of cores of Bol loops is not closed under subalgebras. Furthermore, the class of cores of groups is not closed under subalgebras. Here is an outline of the proof: - -** Cores of abelian groups are medial, by (Romanowska and Smith, 1985), (Rozskowska-Lech, 1999). - -** The smallest nonabelian group $S_3$ has core containing a submagma $G$ of order 4 that is not medial. - -** If $G$ is a core of a Bol loop, it is a core of a Bol loop of order 4, hence a core of an abelian group, a contradiction. - -
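The key step of this outline can be verified mechanically. The core of a group is the operation $x \circ y = xy^{-1}x$, and a brute-force search over 4-element subsets of $S_3$ (our own illustrative Python sketch below) finds a subset closed under the core operation that fails mediality.

```python
from itertools import permutations, combinations

S3 = list(permutations(range(3)))
mul = lambda g, h: tuple(g[h[i]] for i in range(3))
inv = lambda g: tuple(sorted(range(3), key=lambda i: g[i]))
core = lambda x, y: mul(x, mul(inv(y), x))        # x o y = x y^{-1} x

def is_submagma(S):
    return all(core(x, y) in S for x in S for y in S)

def is_medial(S):  # (x o y) o (u o v) == (x o u) o (y o v)
    return all(core(core(x, y), core(u, v)) == core(core(x, u), core(y, v))
               for x in S for y in S for u in S for v in S)

hits = [S for S in combinations(S3, 4)
        if is_submagma(set(S)) and not is_medial(set(S))]
print(len(hits) > 0)  # True: a non-medial 4-element submagma exists
```

One witness is the identity together with the three transpositions.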
    - -Let I(n) be the number of isomorphism classes of quasigroups of order n. Is I(n) odd for every n? - -
    - -* Proposed: by Douglas S. Stones at 2nd Mile High Conference on Nonassociative Mathematics, Denver 2009 - -* Solved by: Douglas S. Stones, 2010. - -* Solution: I(12) is even. In fact, I(n) is odd for all n ≤ 17 except 12. (Stones 2010) - -
    - -Classify the finite simple paramedial quasigroups. - -
- -*Proposed: by Jaroslav Ježek and Tomáš Kepka at Loops '03, Prague 2003. - -*Solved by: Victor Shcherbacov and Dumitru Pushkashu (2010). - -*Solution: Any finite simple paramedial quasigroup is isotopic to an elementary abelian p-group. Such a quasigroup is either a medial unipotent quasigroup, a medial commutative distributive quasigroup, or a special kind of isotope of a (φ+ψ)-simple medial distributive quasigroup. diff --git a/wiki/wikipedia/3496.txt b/wiki/wikipedia/3496.txt deleted file mode 100644 index b5d520fd07c4592c50a888496320a9ff560511e6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3496.txt +++ /dev/null @@ -1,184 +0,0 @@ -In the mathematical fields of geometry and linear algebra, a principal axis is a certain line in a Euclidean space associated with an ellipsoid or hyperboloid, generalizing the major and minor axes of an ellipse or hyperbola. The principal axis theorem states that the principal axes are perpendicular, and gives a constructive procedure for finding them. - -Mathematically, the principal axis theorem is a generalization of the method of completing the square from elementary algebra. In linear algebra and functional analysis, the principal axis theorem is a geometrical counterpart of the spectral theorem. It has applications to the statistics of principal components analysis and the singular value decomposition. In physics, the theorem is fundamental to the studies of angular momentum and birefringence. - -The equations in the Cartesian plane $\mathbb{R}^2$: - -\begin{align} - -\frac{x^2}{9} + \frac{y^2}{25} &= 1 \\[3pt] - -\frac{x^2}{9} - \frac{y^2}{25} &= 1 - -\end{align} - -define, respectively, an ellipse and a hyperbola. In each case, the x and y axes are the principal axes. This is easily seen, given that there are no cross-terms involving products xy in either expression. However, the situation is more complicated for equations like -$$ -5x^2 + 8xy + 5y^2 = 1. -$$ - -Here some method is required to determine whether this is an ellipse or a hyperbola. The basic observation is that if, by completing the square, the quadratic expression can be reduced to a sum of two squares then the equation defines an ellipse, whereas if it reduces to a difference of two squares then the equation represents a hyperbola: - -\begin{align} - -u(x, y)^2 + v(x, y)^2 &= 1\qquad \text{(ellipse)} \\ - -u(x, y)^2 - v(x, y)^2 &= 1\qquad \text{(hyperbola)}. - -\end{align} - -Thus, in our example expression, the problem is how to absorb the coefficient of the cross-term 8xy into the functions u and v. Formally, this problem is similar to the problem of matrix diagonalization, where one tries to find a suitable coordinate system in which the matrix of a linear transformation is diagonal. The first step is to find a matrix to which the technique of diagonalization can be applied. - -The trick is to write the quadratic form as - -5x^2 + 8xy + 5y^2 = - -\begin{bmatrix} - -x & y - -\end{bmatrix} - -\begin{bmatrix} - -5 & 4 \\ - -4 & 5 - -\end{bmatrix} - -\begin{bmatrix} - -x \\ y - -\end{bmatrix} = - -\mathbf{x}^\textsf{T} A\mathbf{x} - - - -where the cross-term has been split into two equal parts. The matrix A in the above decomposition is a symmetric matrix. In particular, by the spectral theorem, it has real eigenvalues and is diagonalizable by an orthogonal matrix (orthogonally diagonalizable). - -To orthogonally diagonalize A, one must first find its eigenvalues, and then find an orthonormal eigenbasis.
Calculation reveals that the eigenvalues of A are -$$ -\lambda_1 = 1,\quad \lambda_2 = 9 -$$ - -with corresponding eigenvectors - - - -\mathbf{v}_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix},\quad - -\mathbf{v}_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}. - - - -Dividing these by their respective lengths yields an orthonormal eigenbasis: - - - -\mathbf{u}_1 = \begin{bmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{bmatrix},\quad - -\mathbf{u}_2 = \begin{bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix}. - - - -Now the matrix $S = [\mathbf{u}_1\ \mathbf{u}_2]$ is an orthogonal matrix, since it has orthonormal columns, and A is diagonalized by: - -A = SDS^{-1} = SDS^\textsf{T} = - -\begin{bmatrix} - -1/\sqrt{2} & 1/\sqrt{2}\\ - --1/\sqrt{2} & 1/\sqrt{2} - -\end{bmatrix} - -\begin{bmatrix} - -1 & 0 \\ - -0 & 9 - -\end{bmatrix} - -\begin{bmatrix} - -1/\sqrt{2} & -1/\sqrt{2} \\ - -1/\sqrt{2} & 1/\sqrt{2} - -\end{bmatrix}. - - - -This applies to the present problem of "diagonalizing" the quadratic form through the observation that - - - -5x^2 + 8xy + 5y^2 = - -\mathbf{x}^\textsf{T} A\mathbf{x} = - -\mathbf{x}^\textsf{T}\left(SDS^\textsf{T}\right)\mathbf{x} = - -\left(S^\textsf{T} \mathbf{x}\right)^\textsf{T} D\left(S^\textsf{T} \mathbf{x}\right) = - -1\left(\frac{x - y}{\sqrt{2}}\right)^2 + 9\left(\frac{x + y}{\sqrt{2}}\right)^2. - - - -Thus, the equation $5x^2 + 8xy + 5y^2 = 1$ is that of an ellipse, since the left side can be written as the sum of two squares. - -It is tempting to simplify this expression by pulling out factors of 2. However, it is important not to do this. The quantities -$$ -c_1 = \frac{x - y}{\sqrt{2}},\quad c_2 = \frac{x + y}{\sqrt{2}} -$$ - -have a geometrical meaning. They determine an orthonormal coordinate system on $\mathbb{R}^2$. In other words, they are obtained from the original coordinates by the application of a rotation (and possibly a reflection). Consequently, one may use the $c_1$ and $c_2$ coordinates to make statements about length and angles (particularly length), which would otherwise be more difficult in a different choice of coordinates (by rescaling them, for instance). For example, the maximum distance from the origin on the ellipse $c_1^2 + 9c_2^2 = 1$ occurs when $c_2 = 0$, so at the points $c_1 = \pm 1$. Similarly, the minimum distance is where $c_2 = \pm 1/3$. - -It is possible now to read off the major and minor axes of this ellipse. These are precisely the individual eigenspaces of the matrix A, since these are where $c_2 = 0$ or $c_1 = 0$. Symbolically, the principal axes are - - - -E_1 = \text{span}\left(\begin{bmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{bmatrix}\right),\quad - -E_2 = \text{span}\left(\begin{bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix}\right). - - - -To summarize: - -* The equation is for an ellipse, since both eigenvalues are positive. (Otherwise, if one were positive and the other negative, it would be a hyperbola.) - -* The principal axes are the lines spanned by the eigenvectors. - -* The minimum and maximum distances to the origin can be read off the equation in diagonal form. - -Using this information, it is possible to attain a clear geometrical picture of the ellipse: to graph it, for instance. - -The principal axis theorem concerns quadratic forms in $\mathbb{R}^n$, which are homogeneous polynomials of degree 2. Any quadratic form may be represented as -$$ -Q(\mathbf{x}) = \mathbf{x}^\textsf{T} A\mathbf{x} -$$ - -where A is a symmetric matrix. - -The first part of the theorem is contained in the following statements guaranteed by the spectral theorem: - -* The eigenvalues of A are real.
- -* A is diagonalizable, and the eigenspaces of A are mutually orthogonal. - -In particular, A is orthogonally diagonalizable, since one may take a basis of each eigenspace and apply the Gram-Schmidt process separately within the eigenspace to obtain an orthonormal eigenbasis. - -For the second part, suppose that the eigenvalues of A are $\lambda_1, \ldots, \lambda_n$ (possibly repeated according to their algebraic multiplicities) and the corresponding orthonormal eigenbasis is $\mathbf{u}_1, \ldots, \mathbf{u}_n$. Then -$$ -Q(\mathbf{x}) = \lambda_1 c_1^2 + \lambda_2 c_2^2 + \dots + \lambda_n c_n^2, -$$ - -where the $c_i$ are the coordinates with respect to the given eigenbasis. Furthermore, the $i$-th principal axis is the line determined by the $n - 1$ equations $c_j = 0$, $j \neq i$. This axis is the span of the vector $\mathbf{u}_i$. diff --git a/wiki/wikipedia/3497.txt b/wiki/wikipedia/3497.txt deleted file mode 100644 index 0745eb741fbf4808afd805e1ba19de3ab1fe9cf9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3497.txt +++ /dev/null @@ -1,95 +0,0 @@ -This is a list of unusually long mathematical proofs. Such proofs often use computational proof methods and may be considered non-surveyable. - -The longest mathematical proof, measured by number of published journal pages, is the classification of finite simple groups with well over 10000 pages. There are several proofs that would be far longer than this if the details of the computer calculations they depend on were published in full. - -The length of unusually long proofs has increased with time. As a rough rule of thumb, 100 pages in 1900, or 200 pages in 1950, or 500 pages in 2000 is unusually long for a proof. - -*1799 The Abel–Ruffini theorem was nearly proved by Paolo Ruffini, but his proof, spanning 500 pages, was mostly ignored and later, in 1824, Niels Henrik Abel published a proof that required just six pages. - -*1890 Killing's classification of simple complex Lie algebras, including his discovery of the exceptional Lie algebras, took 180 pages in 4 papers. - -*1894 The ruler-and-compass construction of a polygon of 65537 sides by Johann Gustav Hermes took over 200 pages. - -*1905 Emanuel Lasker's original proof of the Lasker–Noether theorem took 98 pages, but has since been simplified: modern proofs are less than a page long. - -*1963 The odd order theorem by Feit and Thompson was 255 pages long, which at the time was over 10 times as long as what had previously been considered a long paper in group theory. - -*1964 Resolution of singularities. Hironaka's original proof was 216 pages long; it has since been simplified considerably down to about 10 or 20 pages. - -*1966 Abhyankar's proof of resolution of singularities for 3-folds in characteristic greater than 6 covered about 500 pages in several papers. In 2009, Cutkosky simplified this to about 40 pages. - -*1966 Discrete series representations of Lie groups. Harish-Chandra's construction of these involved a long series of papers totaling around 500 pages. His later work on the Plancherel theorem for semisimple groups added another 150 pages to these. - -*1968 The Novikov–Adian proof, solving Burnside's problem on finitely generated infinite groups with finite exponents in the negative. The three-part original paper is more than 300 pages long. (Britton later published a 282 page paper attempting to solve the problem, but his paper contained a serious gap.) - -*1960-1970 Fondements de la Géométrie Algébrique, Éléments de géométrie algébrique and Séminaire de géométrie algébrique. 
Grothendieck's work on the foundations of algebraic geometry covers many thousands of pages. Although this is not a proof of a single theorem, there are several theorems in it whose proofs depend on hundreds of earlier pages. - -*1974 N-group theorem. Thompson's classification of N-groups used 6 papers totaling about 400 pages, but also used earlier results of his such as the odd order theorem, which brings the total length up to more than 700 pages. - -*1974 Ramanujan conjecture and the Weil conjectures. While Deligne's final paper proving these conjectures was "only" about 30 pages long, it depended on background results in algebraic geometry and étale cohomology that Deligne estimated to be about 2000 pages long. - -*1974 4-color theorem. Appel and Haken's proof of this took 139 pages, and also depended on long computer calculations. - -*1974 The Gorenstein–Harada theorem classifying finite groups of sectional 2-rank at most 4 was 464 pages long. - -*1976 Eisenstein series. Langlands's proof of the functional equation for Eisenstein series was 337 pages long. - -*1983 Trichotomy theorem. Gorenstein and Lyons's proof for the case of rank at least 4 was 731 pages long, and Aschbacher's proof of the rank 3 case adds another 159 pages, for a total of 890 pages. - -*1983 Selberg trace formula. Hejhal's proof of a general form of the Selberg trace formula consisted of 2 volumes with a total length of 1322 pages. - -*Arthur–Selberg trace formula. Arthur's proofs of the various versions of this cover several hundred pages spread over many papers. - -*2000 Almgren's regularity theorem. Almgren's proof was 955 pages long. - -*2000 Lafforgue's theorem on the Langlands conjecture for the general linear group over function fields. Laurent Lafforgue's proof of this was about 600 pages long, not counting many pages of background results. - -*2003 Poincaré conjecture, Geometrization theorem, Geometrization conjecture. Perelman's original proofs of the Poincaré conjecture and the Geometrization conjecture were not lengthy, but were rather sketchy. Several other mathematicians have published proofs with the details filled in, which come to several hundred pages. - -*2004 Quasithin groups. The classification of the simple quasithin groups by Aschbacher and Smith was 1221 pages long, one of the longest single papers ever written. - -*2004 Classification of finite simple groups. The proof of this is spread out over hundreds of journal articles which makes it hard to estimate its total length, which is probably around 10000 to 20000 pages. - -*2004 Robertson–Seymour theorem. The proof takes about 500 pages spread over about 20 papers. - -*2005 Kepler conjecture. Hales's proof of this involves several hundred pages of published arguments, together with several gigabytes of computer calculations. - -*2006 the strong perfect graph theorem, by Maria Chudnovsky, Neil Robertson, Paul Seymour, and Robin Thomas. 180 pages in the Annals of Mathematics. - -There are many mathematical theorems that have been checked by long computer calculations. If these were written out as proofs many would be far longer than most of the proofs above. There is not really a clear distinction between computer calculations and proofs, as several of the proofs above, such as the 4-color theorem and the Kepler conjecture, use long computer calculations as well as many pages of mathematical argument.
For the computer calculations in this section, the mathematical arguments are only a few pages long, and the length is due to long but routine calculations. Some typical examples of such theorems include: - -*Several proofs of the existence of sporadic simple groups, such as the Lyons group, originally used computer calculations with large matrices or with permutations on billions of symbols. In most cases, such as the baby monster group, the computer proofs were later replaced by shorter proofs avoiding computer calculations. Similarly, the calculation of the maximal subgroups of the larger sporadic groups uses a lot of computer calculations. - -*2004 Verification of the Riemann hypothesis for the first $10^{13}$ zeros of the Riemann zeta function. - -*2007 Verification that Checkers is a draw. - -*2008 Proofs that various Mersenne numbers with around ten million digits are prime. - -*Calculations of large numbers of digits of π. - -*2010 Showing that Rubik's Cube can be solved in 20 moves. - -*2012 Showing that Sudoku needs at least 17 clues. - -*2013 Ternary Goldbach conjecture: Every odd number greater than 5 can be expressed as the sum of three primes. - -*2014 Proof of the Erdős discrepancy conjecture for the particular case C=2: every ±1-sequence of length 1161 has discrepancy at least 3; the original proof, generated by a SAT solver, had a size of 13 gigabytes and was later reduced to 850 megabytes. - -*2016 Solving the boolean Pythagorean triples problem required the generation of 200 terabytes of proof. - -*2017 Marijn Heule, who coauthored the solution to the boolean Pythagorean triples problem, announced a 2-petabyte proof that the 5th Schur number is 161. - -Kurt Gödel showed how to find explicit examples of statements in formal systems that are provable in that system but whose shortest proof is absurdly long. For example, the statement: - -"This statement cannot be proved in Peano arithmetic in less than a googolplex symbols" - -is provable in Peano arithmetic but the shortest proof has at least a googolplex symbols. It has a short proof in a more powerful system: in fact, it is easily provable in Peano arithmetic together with the statement that Peano arithmetic is consistent (which cannot be proved in Peano arithmetic by Gödel's incompleteness theorem). - -In this argument, Peano arithmetic can be replaced by any more powerful consistent system, and a googolplex can be replaced by any number that can be described concisely in the system. - -Harvey Friedman found some explicit natural examples of this phenomenon, giving some explicit statements in Peano arithmetic and other formal systems whose shortest proofs are ridiculously long. For example, the statement - -"there is an integer n such that if there is a sequence of rooted trees $T_1, T_2, \ldots, T_n$ such that $T_k$ has at most $k+10$ vertices, then some tree can be homeomorphically embedded in a later one" - -is provable in Peano arithmetic, but the shortest proof has length at least $A(1000)$, where $A(0)=1$ and $A(n+1)=2^{A(n)}$. The statement is a special case of Kruskal's theorem and has a short proof in second order arithmetic. diff --git a/wiki/wikipedia/3498.txt b/wiki/wikipedia/3498.txt deleted file mode 100644 index 7a0952a1d7e476379ca3aa2d1affe0994598a9b3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3498.txt +++ /dev/null @@ -1,31 +0,0 @@ -In probability theory, Isserlis' theorem or Wick's probability theorem is a formula that allows one to compute higher-order moments of the multivariate normal distribution in terms of its covariance matrix.
It is named after Leon Isserlis. - -This theorem is also particularly important in particle physics, where it is known as Wick's theorem after the work of Wick. Other applications include the analysis of portfolio returns, quantum field theory and the generation of colored noise. - -If $(X_1,\dots, X_{n})$ is a zero-mean multivariate normal random vector, then\operatorname{E} [X_1 X_2\cdots X_{n}] = \sum_{p\in P_n^2}\prod_{\{i,j\}\in p} \operatorname{E}[X_i X_j] = \sum_{p\in P_n^2}\prod_{\{i,j\}\in p} \operatorname{Cov}(X_i, X_j), where the sum is over all the pairings of $\{1,\ldots,n\}$, i.e. all distinct ways of partitioning $\{1,\ldots,n\}$ into pairs $\{i,j\}$, and the product is over the pairs contained in $p$. - -In his original paper, Leon Isserlis proves this theorem by mathematical induction, generalizing the formula for the $4^{\text{th}}$ order moments, which takes the appearance - - - -\operatorname{E}[X_1 X_2 X_3 X_4] = - -\operatorname{E}[X_1X_2]\operatorname{E}[X_3X_4] + - -\operatorname{E}[X_1X_3]\operatorname{E}[X_2X_4] + - -\operatorname{E}[X_1X_4]\operatorname{E}[X_2X_3]. - - - -If $n=2m+1$ is odd, there does not exist any pairing of $\{1,\ldots,2m+1\}$. Under this hypothesis, Isserlis' theorem implies that\operatorname{E}[X_1 X_2\cdots X_{2m+1}] = 0. - -This also follows from the fact that $-X=(-X_1,\dots,-X_n)$ has the same distribution as $X$, which implies that $\operatorname{E}[X_1 \cdots X_{2m+1}]=\operatorname{E}[(-X_1) \cdots (-X_{2m+1})]=-\operatorname{E}[X_1 \cdots X_{2m+1}] = 0$. - -If $n=2m$ is even, there exist $(2m)!/(2^{m}m!) = (2m-1)!!$ (see double factorial) pair partitions of $\{1,\ldots,2m\}$: this yields $(2m)!/(2^{m}m!) = (2m-1)!!$ terms in the sum. For example, for $4^{\text{th}}$ order moments (i.e. $4$ random variables) there are three terms. For $6^{\text{th}}$-order moments there are $3\times 5=15$ terms, and for $8^{\text{th}}$-order moments there are $3\times5\times7 = 105$ terms. - -An equivalent formulation of Wick's probability formula is the Gaussian integration by parts. If $(X_1,\dots, X_{n})$ is a zero-mean multivariate normal random vector, then - -\operatorname{E}(X_1 f(X_1,\ldots,X_n))=\sum_{i=1}^{n} \operatorname{Cov}(X_1, X_i)\operatorname{E}(\partial_{X_i}f(X_1,\ldots,X_n)). Wick's probability formula can be recovered by induction, considering the function $f:\mathbb{R}^n\to\mathbb{R}$ defined by $f(x_1,\ldots,x_n)=x_2\ldots x_n$. Among other things, this formulation is important in Liouville Conformal Field Theory to obtain conformal Ward identities and BPZ equations, and to prove the Fyodorov-Bouchaud formula. - -For non-Gaussian random variables, the moment-cumulants formula replaces Wick's probability formula. If $(X_1,\dots, X_{n})$ is a vector of random variables, then \operatorname{E}(X_1 \ldots X_n)=\sum_{p\in P_n} \prod_{b\in p} \kappa\big((X_i)_{i\in b}\big),where the sum is over all the partitions of $\{1,\ldots,n\}$, the product is over the blocks of $p$ and $\kappa\big((X_i)_{i\in b}\big)$ is the cumulant of $(X_i)_{i\in b}$. diff --git a/wiki/wikipedia/3499.txt b/wiki/wikipedia/3499.txt deleted file mode 100644 index 5d6522a4baa945a4c779fa820676089fb53f27a1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3499.txt +++ /dev/null @@ -1,144 +0,0 @@ -In mathematics, a sequence of nested intervals can be intuitively understood as an ordered collection of intervals $I_n$ on the real number line with natural numbers $n=1,2,3,\dots$ as an index.
Furthermore two conditions have to be met: - -# Every interval is contained in the previous one ($I_{n+1}$ is always a subset of $I_n$). - -# The lengths of the intervals get arbitrarily small (meaning the length falls below every possible threshold $\varepsilon$ after a certain index $N$). - -In other words, the left bound of the interval $I_n$ can only increase ($a_{n+1}\geq a_n$) and the right bound can only decrease ($b_{n+1}\leq b_n$). - -Historically, long before anyone defined nested intervals in a textbook, people implicitly constructed such nestings for concrete calculation purposes. For example, the ancient Babylonians discovered a method for computing square roots of numbers. In contrast, the famed Archimedes constructed sequences of polygons that inscribed and circumscribed a unit circle, in order to get a lower and upper bound for the circle's circumference - which is the circle number Pi ($\pi$). - -The central question to be posed is the nature of the intersection over all the natural numbers; or, put differently, the set of numbers that are found in every interval $I_n$ (thus for all $n\in\mathbb{N}$). In modern mathematics, nested intervals are used as a construction method for the real numbers (in order to complete the field of rational numbers). - -As stated in the introduction, historic users of mathematics discovered the nesting of intervals and closely related algorithms as methods for specific calculations. Some variations and modern interpretations of these ancient techniques will be introduced here: - -One intuitive algorithm is so easy to understand that it could well be found by engaged high school students. When trying to find the square root of a number $x>1$, we can be certain that $1\leq \sqrt{x} \leq x$, which gives us the first interval $I_1=[1, x]$, in which $\sqrt{x}$ has to be found. If we know the next higher perfect square $k^2 > x$, we get an even better candidate for the first interval: $I_1=[1, k]$. - -We now define the other intervals $I_n=[a_n, b_n], n\in\mathbb{N}$ recursively by looking at the sequence of midpoints $m_n=\frac{a_n + b_n}{2}$. Given the interval $I_n$ is already known (starting at $I_1$), we define - -I_{n+1} := \left\{\begin{matrix} - -\left[m_n, b_n\right] && \text{if } m_n^2 \leq x \\ - -\left[a_n, m_n\right] && \text{if } m_n^2 > x - -\end{matrix}\right. - -Put into words, we look at whether the midpoint of $I_{n}$ is smaller (or bigger) than $\sqrt{x}$ and set it as the lower (or upper) bound of our next interval $I_{n+1}$. This guarantees that $\sqrt{x}\in I_{n+1}$. With this construction the intervals are nested and their length $|I_n|$ is halved in every step of the recursion. Therefore it is possible to get lower and upper bounds for $\sqrt{x}$ with arbitrarily good precision (given enough computational time). - -It should be noted here that we can also compute $\sqrt{y}$ when $0<y<1$: in that case $1/y>1$, and the algorithm can be used by setting $x:=1/y$ and calculating the reciprocal after the desired level of precision has been acquired. - -To showcase the algorithm, we try to find $\sqrt{19}$. We note that $1^2<19<5^2$, giving us our first interval $I_1:=[1,5]$, in which $\sqrt{19}$ certainly has to be found.
We continue by looking at the midpoint, checking its square and setting the boundaries of the next interval accordingly: - -\begin{aligned} - -m_1&=\dfrac{1+5}{2}=3 &&\Rightarrow m_1^2=9 \leq 19 &&\Rightarrow I_2=[3, 5]\\ - -m_2&=\dfrac{3+5}{2}=4 &&\Rightarrow m_2^2=16 \leq 19 &&\Rightarrow I_3=[4, 5]\\ - -m_3&=\dfrac{4+5}{2}=4.5 &&\Rightarrow m_3^2=20.25 > 19 &&\Rightarrow I_4=[4, 4.5]\\ - -m_4&=\dfrac{4+4.5}{2}=4.25 &&\Rightarrow m_4^2=18.0625 \leq 19 &&\Rightarrow I_5=[4.25, 4.5]\\ - -m_5&=\dfrac{4.25+4.5}{2}=4.375 &&\Rightarrow m_5^2=19.140625 > 19 &&\Rightarrow I_6=[4.25, 4.375]\\ - -&\vdots & & - -\end{aligned} - -Slowly, this construction of $\sqrt{19}$ inches towards the real value of $\sqrt{19}=4.35889894\dots$. This procedure can be continued for as long as needed for the desired precision. By repeating the steps indefinitely, we arrive at the true value of this square root. - -The Babylonian method used an even better algorithm that yields accurate approximations of $\sqrt{x}$ even faster. The modern description using nested intervals is similar to the algorithm above, but instead of the sequence of midpoints we look at a sequence $(c_n)_{n\in\mathbb{N}}$ given by -$$ -c_{n+1}:=\frac{1}{2}\cdot\left(c_n + \frac{x}{c_n}\right) -$$. - -Then the sequence of intervals given by $I_{n+1}:=\left[\frac{x}{c_n}, c_n\right]$ and $I_1=[0, k]$, where $k^2>x$ (so that one may start with $c_1 = k$), will provide accurate upper and lower bounds for $\sqrt{x}$ very fast. In practice, only $c_n$ has to be considered, which converges to $\sqrt{x}$. This algorithm is a special case of Newton's method. (A short computational sketch of both root-finding procedures is given at the end of this overview.) - -As shown in the image, lower and upper bounds for the circumference of a circle can be obtained with inscribed and circumscribed regular polygons. When examining a circle with diameter $1$, the circumference is (by definition of Pi) the circle number $\pi$. - -Around 250 BCE Archimedes of Syracuse started with regular hexagons, whose side lengths (and therefore circumference) can be directly calculated from the circle diameter. Furthermore, a way to compute the side length of a regular $2n$-gon from the previous $n$-gon can be found, starting at the regular hexagon ($6$-gon). By successively doubling the number of edges until reaching 96-sided polygons, Archimedes reached an interval with $\tfrac{223}{71}< \pi < \tfrac{22}{7}$. The upper bound $22/7 \approx 3.143$ is still often used as a rough, but pragmatic, approximation of $\pi$. - -Around the year 1600 CE, Archimedes' method was still the gold standard for calculating Pi and was used by the Dutch mathematician Ludolph van Ceulen to compute more than thirty digits of $\pi$, which took him decades. Soon after, more powerful methods for the computation were found. - -Early uses of what we recognize as a sequence of nested intervals today (or what we can describe as such with modern mathematics) can be found in the predecessors of calculus (differentiation and integration). In computer science, sequences of nested intervals can be used in algorithms for numerical computation. For example, the bisection method can be used for calculating the roots (zeroes) of continuous functions. In contrast to mathematically infinite sequences, an applied computational algorithm terminates at some point, when the desired zero has been found or sufficiently well approximated. - -In mathematical analysis, nested intervals provide one method of axiomatically introducing the real numbers as the completion of the rational numbers, being a necessity for discussing the concepts of continuity and differentiability.
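As referenced above, here is a minimal Python sketch of the two square-root procedures, interval halving and the Babylonian (Heron) iteration; step counts and variable names are arbitrary choices of this illustration.

```python
def sqrt_bisection(x, steps=50):
    """Interval halving: keep [a, b] with a^2 <= x <= b^2 (assumes x > 1)."""
    a, b = 1.0, float(x)               # I_1 = [1, x]
    for _ in range(steps):
        m = (a + b) / 2
        if m * m <= x:
            a = m                      # sqrt(x) lies in [m, b]
        else:
            b = m                      # sqrt(x) lies in [a, m]
    return a, b                        # lower and upper bound

def sqrt_heron(x, steps=8):
    """Babylonian iteration c_{n+1} = (c_n + x/c_n) / 2, started above sqrt(x)."""
    c = float(x)
    for _ in range(steps):
        c = (c + x / c) / 2
    return c

print(sqrt_bisection(19))   # tight bounds around 4.35889894...
print(sqrt_heron(19))       # 4.358898943540674
```

The bisection bounds shrink by a factor of 2 per step, while the Babylonian iterates converge quadratically, which is why far fewer steps suffice.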
Historically, Isaac Newton's and Gottfried Wilhelm Leibniz's discovery of differential and integral calculus from the late 1600s posed a huge challenge for mathematicians trying to prove their methods rigorously, despite their success in physics, engineering and other sciences. The axiomatic description on the basis of nested intervals (or an equivalent axiom) has become an important foundation for the modern understanding of calculus. - -Let $(I_n)_{n\in\mathbb{N}}$ be a sequence of closed intervals of the type $I_n=[a_n, b_n]$, where $|I_n|:=b_n - a_n$ denotes the length of such an interval. We call $(I_n)_{n\in\mathbb{N}}$ a sequence of nested intervals, if - -# $\quad \forall n \in \mathbb{N}: I_{n+1} \subseteq I_n$ - -# $\quad \forall \varepsilon > 0 \exists N\in\mathbb{N}: |I_N| < \varepsilon $. - -Put into words, property 1 tells us that the intervals are nested according to their index. The second property formalizes the notion that interval sizes get arbitrarily small; meaning that for an arbitrary constant $\varepsilon > 0$ we can always find an interval (with index $N$) with a length strictly smaller than that number $\varepsilon$. We also note that property 1 immediately implies that every interval with an index $n \geq N$ also must have a length $|I_n| < \varepsilon$. - -If $(I_n)_{n\in\mathbb{N}}$ is a sequence of nested intervals, there always exists a real number that is contained in every interval $I_n$. In formal notation this axiom guarantees that -$$ -\exists x\in\mathbb{R}: x\in\bigcap_{n\in\mathbb{N}} I_n -$$. - -Each sequence $(I_n)_{n\in\mathbb{N}}$ of nested intervals contains exactly one real number $x$. - -Proof: This statement can easily be verified by contradiction. Assume that there exist two different numbers $x,y\in\cap_{n\in\mathbb{N}} I_n$. From $x\neq y$ it follows that they differ by $|x-y|>0.$ Since both numbers have to be contained in every interval, we get that $|I_n|\geq |x-y|$ for all $n\in\mathbb{N}$. This contradicts property 2 from the definition of nested intervals; therefore the intersection can contain at most one number $x$. The completeness axiom guarantees that such a real number $x$ exists. $ \square$ - -* This axiom is fundamental in the sense that a sequence of nested intervals does not necessarily contain a rational number - meaning that $\cap_{n\in\mathbb{N}}I_n$ could yield $\emptyset$, if only considering the rationals. - -* The axiom is equivalent to the existence of the infimum and supremum (proof: see below), the convergence of Cauchy sequences and the Bolzano–Weierstrass theorem. This means that one of the four has to be introduced axiomatically, while the other three can be successively proven. - -By generalizing the algorithm shown above for square roots, we can prove that in the real numbers, the equation $x=y^k, k\in\mathbb{N}$ can always be solved for $y=\sqrt[k]{x}=x^{1/k}$; meaning there exists a unique real number $y>0$, such that $x=y^k$. Comparing to the section above, we achieve a sequence of nested intervals for the $k$-th root of $x$, namely $y$, by looking at whether the midpoint $m_n$ of the $n$-th interval satisfies $m_n^k \leq x$ or $m_n^k > x$. - -If $A\subset \mathbb{R}$ has an upper bound, i.e.
there exists a number $b$ such that $x\leq b$ for all $x\in A$, we call the number $s=\sup(A)$ the supremum of $A$ if - -# the number $s$ is an upper bound of $A$, meaning $\forall x \in A: x\leq s$ - -# $s$ is the least upper bound of $A$, meaning $\forall \sigma < s : \exists x\in A: x >\sigma$ - -Only one such number $s$ can exist. Analogously, we can define the infimum ($\inf(B)$) of a set $B\subset \mathbb{R} $ that is bounded from below, as the greatest lower bound of that set. - -Each set $A\subset \mathbb{R}$ has a supremum (infimum) if it is bounded from above (below). - -Proof: Without loss of generality we look at a set $A\subset \mathbb{R}$ that has an upper bound. We now construct a sequence $(I_n)_{n\in\mathbb{N}}$ of nested intervals $I_n=[a_n, b_n]$ that has the following two properties: - -# $b_n$ is an upper bound of $A$ for all $n\in\mathbb{N}$ - -# $a_n$ is never an upper bound of $A$ for any $n\in\mathbb{N}$. - -The construction follows a recursion by starting with any number $a_1$ that is not an upper bound (e.g. $a_1=c - 1$, where $c\in A$) and an arbitrary upper bound $b_1$ of $A$. Given $I_n=[a_n, b_n]$ for some $n\in\mathbb{N}$ we compute the midpoint $m_n:= \frac{a_n+b_n}{2}$ and define
$$
I_{n+1} := \begin{cases} \left[a_n, m_n\right] & \text{if } m_n \text{ is an upper bound of } A \\ \left[m_n, b_n\right] & \text{if } m_n \text{ is not an upper bound} \end{cases}
$$
- -We note that this interval sequence is well defined and obviously a sequence of nested intervals by construction. - -Now let $ s $ be the number contained in every interval (whose existence is guaranteed by the axiom). First, $s$ is an upper bound of $A$: otherwise there would exist a number $x\in A$ such that $x>s$, and this would imply the existence of an interval $I_m=[a_m, b_m]$ with $b_m - a_m < x-s$, from which $b_m - s < x-s$, i.e. $b_m < x$, follows, due to $ s$ also being an element of $I_m$. But this contradicts property 1 of the construction ($b_m$ being an upper bound of $A$). Secondly, assume there were a smaller upper bound $\sigma < s$ of $A$. Then we could find an interval $I_n=[a_n, b_n]$ with $b_n - a_n < s-\sigma$, which implies $a_n > s - (s-\sigma) = \sigma$, since $s\in I_n$. Following the rules of our construction, $a_n$ would then have to be an upper bound of $A$, contradicting property 2 of the construction. - -In two steps, we have shown that $s$ is an upper bound of $A$ and that a smaller upper bound cannot exist. Therefore $s$ is the supremum of $A$ by definition. $\square$ - -As was seen, the existence of suprema and infima of bounded sets is a consequence of the completeness of $\mathbb{R}$. In effect the two are actually equivalent, meaning that either of the two can be introduced axiomatically. - -Proof: Let $(I_n)_{n\in\mathbb{N}}$ with $I_n=[a_n, b_n]$ be a sequence of nested intervals. Then the set $A:=\{a_1, a_2,\dots\}$ is bounded from above, where every $b_n$ is an upper bound. This implies that the least upper bound $s=\sup(A)$ fulfills $a_n\leq s\leq b_n$ for all $n\in\mathbb{N}$. Therefore $s\in I_n$ for all $n\in\mathbb{N}$, i.e. $s\in\cap_{n\in\mathbb{N}} I_n$. $\square$ - -After formally defining the convergence of sequences and accumulation points of sequences, one can also prove the Bolzano–Weierstrass theorem using nested intervals. As a follow-up, the fact that Cauchy sequences are convergent (and that all convergent sequences are Cauchy sequences) can be proven. This in turn allows for a proof of the completeness property above, showing their equivalence. - -Without specifying what we mean by interval, all that can be said about the intersection $\cap_{n\in\mathbb{N}} I_n$ over all the naturals (i.e.
the set of all points common to each interval) is that it is either the empty set $\emptyset$, a point on the number line (called a singleton $\{x\}$), or some interval. - -The possibility of an empty intersection can be illustrated by looking at a sequence of open intervals $I_n=\left(0, \frac{1}{n}\right) = \left\{x\in\mathbb{R}: 0<x<\frac{1}{n}\right\}$. For each $x>0 $ we can find some value of $n\in\mathbb{N}$ (namely any $n>1/x$) such that $1/n<x$. Hence, for each $x>0 $, we can always find intervals $ I_n $ in the sequence such that $ x\notin I_n, $ implying that the intersection has to be empty. - -The situation is different for closed intervals. If we change the situation above by looking at closed intervals of the type $I_n=\left[0, \frac{1}{n}\right] = \left\{x\in\mathbb{R}:0 \leq x \leq \frac{1}{n}\right\}$, we can see this very clearly. Now for each $x>0 $ we still can always find intervals not containing said $x$, but for $x=0$ the property $0\leq x \leq 1/n$ holds true for any $n\in\mathbb{N}$. We conclude that in this case $\cap_{n\in\mathbb{N}} I_n = \{0\}$. - -One can also consider the complement of each interval, written as $(-\infty,a_n) \cup (b_n, \infty)$ - which, in our last example, is $(-\infty,0) \cup (1/n, \infty)$. By De Morgan's laws, the complement of the intersection is a union of two disjoint open sets. By the connectedness of the real line there must be something between them. This shows that the intersection of (even an uncountable number of) nested, closed, and bounded intervals is nonempty. diff --git a/wiki/wikipedia/35.txt b/wiki/wikipedia/35.txt deleted file mode 100644 index cfd8319c7fb2a872253f6acb23a0495941f05470..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/35.txt +++ /dev/null @@ -1,123 +0,0 @@ -Stein's example is an important result in decision theory which can be stated as - -The ordinary decision rule for estimating the mean of a multivariate Gaussian distribution is inadmissible under mean squared error risk in dimension at least 3. - -The following is an outline of its proof. The reader is referred to the main article for more information. - -The risk function of the decision rule $d(\mathbf{x}) = \mathbf{x}$ is -$$ -R(\theta,d) = \operatorname{E}_\theta[ |\mathbf{\theta - X}|^2] -$$ -$$ -=\int (\mathbf{\theta - x})^T(\mathbf{\theta - x}) \left( \frac{1}{2\pi} \right)^{n/2} e^{(-1/2) (\mathbf{\theta - x})^T (\mathbf{\theta - x}) } m(dx) -$$ -$$ - = n. -$$ - -Now consider the decision rule -$$ -d'(\mathbf{x}) = \mathbf{x} - \frac{\alpha}{|\mathbf{x}|^2}\mathbf{x} -$$ - -where $\alpha = n-2$. We will show that $d'$ is a better decision rule than $d$. The risk function is -$$ -R(\theta,d') = \operatorname{E}_\theta\left[ \left|\mathbf{\theta - X} + \frac{\alpha}{|\mathbf{X}|^2}\mathbf{X}\right|^2\right] -$$ -$$ - = \operatorname{E}_\theta\left[ |\mathbf{\theta - X}|^2 + 2(\mathbf{\theta - X})^T\frac{\alpha}{|\mathbf{X}|^2}\mathbf{X} + \frac{\alpha^2}{|\mathbf{X}|^4}|\mathbf{X}|^2 \right] -$$ -$$ - = \operatorname{E}_\theta\left[ |\mathbf{\theta - X}|^2 \right] + 2\alpha\operatorname{E}_\theta\left[\frac{\mathbf{(\theta-X)^T X}}{|\mathbf{X}|^2}\right] + \alpha^2\operatorname{E}_\theta\left[\frac{1}{|\mathbf{X}|^2} \right] -$$ - -- a quadratic in $\alpha$.
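The evaluation of this quadratic continues below. As a numerical sanity check of where the argument is heading (a hedged illustration added here, not part of the original outline; the choice $\theta = (1,\dots,1)$, $n=5$ and the trial count are arbitrary assumptions), one can estimate both risks by Monte Carlo simulation and observe that the shrinkage rule $d'$ with $\alpha = n-2$ has strictly smaller mean squared error:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000
theta = np.full(n, 1.0)  # arbitrary true mean vector (assumption)

X = rng.normal(theta, 1.0, size=(trials, n))
norm2 = np.sum(X**2, axis=1)

risk_d = np.mean(np.sum((X - theta)**2, axis=1))        # ordinary rule: ~ n
shrunk = X * (1 - (n - 2) / norm2)[:, None]             # d' with alpha = n - 2
risk_dp = np.mean(np.sum((shrunk - theta)**2, axis=1))  # strictly below n

print(round(risk_d, 3), round(risk_dp, 3))  # e.g. ~5.0 vs ~3.9
```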
We may simplify the middle term by considering a general "well-behaved" function $h:\mathbf{x} \mapsto h(\mathbf{x}) \in \mathbb{R} $ and using integration by parts. For $1\leq i \leq n$, for any continuously differentiable $h$ growing sufficiently slowly for large $x_i$ we have:
$$
\operatorname{E}_\theta [ (\theta_i - X_i) h(\mathbf{X}) \mid X_j=x_j (j\neq i) ]= \int (\theta_i - x_i) h(\mathbf{x}) \left( \frac{1}{2\pi} \right)^{n/2} e^{ -(1/2)(\mathbf{x-\theta})^T (\mathbf{x-\theta}) } m(dx_i)
$$
$$
= \left[ h(\mathbf{x}) \left( \frac{1}{2\pi} \right)^{n/2} e^{-(1/2) (\mathbf{x-\theta})^T (\mathbf{x-\theta}) } \right]_{x_i=-\infty}^{x_i=\infty} - \int \frac{\partial h}{\partial x_i}(\mathbf{x}) \left( \frac{1}{2\pi} \right)^{n/2} e^{-(1/2)(\mathbf{x-\theta})^T (\mathbf{x-\theta}) } m(dx_i)
$$
$$
= - \operatorname{E}_\theta \left[ \frac{\partial h}{\partial x_i}(\mathbf{X}) \mid X_j=x_j (j\neq i) \right].
$$
- -Therefore,
$$
\operatorname{E}_\theta [ (\theta_i - X_i) h(\mathbf{X})]= - \operatorname{E}_\theta \left[ \frac{\partial h}{\partial x_i}(\mathbf{X}) \right].
$$
- -(This result is known as Stein's lemma.) - -Now, we choose
$$
h(\mathbf{x}) = \frac{x_i}{|\mathbf{x}|^2}.
$$
- -If $h$ met the "well-behaved" condition (it doesn't, but this can be remedied; see below), we would have
$$
\frac{\partial h}{\partial x_i} = \frac{1}{|\mathbf{x}|^2} - \frac{2 x_i^2}{|\mathbf{x}|^4}
$$
- -and so
$$
\operatorname{E}_\theta\left[\frac{\mathbf{(\theta-X)^T X}}{|\mathbf{X}|^2}\right] = \sum_{i=1}^n \operatorname{E}_\theta \left[ (\theta_i - X_i) \frac{X_i}{|\mathbf{X}|^2} \right] = - \sum_{i=1}^n \operatorname{E}_\theta \left[ \frac{1}{|\mathbf{X}|^2} - \frac{2 X_i^2}{|\mathbf{X}|^4} \right] = -(n-2)\operatorname{E}_\theta \left[\frac{1}{|\mathbf{X}|^2}\right].
$$
- -Then returning to the risk function of $d'$:
$$
R(\theta,d') = n - 2\alpha(n-2)\operatorname{E}_\theta\left[\frac{1}{|\mathbf{X}|^2}\right] + \alpha^2\operatorname{E}_\theta\left[\frac{1}{|\mathbf{X}|^2} \right].
$$
- -This quadratic in $\alpha$ is minimized at
$$
\alpha = n-2,
$$
- -giving
$$
R(\theta,d') = R(\theta,d) - (n-2)^2\operatorname{E}_\theta\left[\frac{1}{|\mathbf{X}|^2} \right],
$$
- -which of course satisfies $R(\theta,d') < R(\theta,d)$, making $d$ an inadmissible decision rule. - -It remains to justify the use of
$$
h(\mathbf{X})= \frac{\mathbf{X}}{|\mathbf{X}|^2}.
$$
- -This function is not continuously differentiable, since it is singular at $\mathbf{x}=0$. However, the function
$$
h(\mathbf{X}) = \frac{\mathbf{X}}{\varepsilon + |\mathbf{X}|^2}
$$
- -is continuously differentiable, and after following the algebra through and letting $\varepsilon \to 0$, one obtains the same result. diff --git a/wiki/wikipedia/350.txt b/wiki/wikipedia/350.txt deleted file mode 100644 index 400b82503805e40e0102d7ccf7c8344154757d20..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/350.txt +++ /dev/null @@ -1,9 +0,0 @@ -Kakuro or Kakkuro or Kakoro is a kind of logic puzzle that is often referred to as a mathematical transliteration of the crossword. Kakuro puzzles are regular features in many math-and-logic puzzle publications across the world. In 1966, Canadian Jacob E.
Funk, an employee of Dell Magazines, came up with the original English name Cross Sums; other names such as Cross Addition have also been used, but the Japanese name Kakuro, an abbreviation of the Japanese kasan kurosu (加算クロス, "addition cross"), seems to have gained general acceptance, and the puzzles appear to be titled this way now in most publications. The popularity of Kakuro in Japan is immense, second only to Sudoku among Nikoli's famed logic-puzzle offerings. - -Mathematically, Kakuro puzzles can be represented as integer programming problems, and are NP-complete. See also Yato and Seta, 2004. - -There are two kinds of mathematical symmetry readily identifiable in Kakuro puzzles: minimum and maximum constraints are duals, as are missing and required values. - -All sum combinations can be represented using a bitmapped representation. This representation is useful for determining missing and required values using bitwise logic operations. - -Kakuro puzzles appear in nearly 100 Japanese magazines and newspapers. Kakuro remained the most popular logic puzzle in the Japanese printed press until 1992, when Sudoku took the top spot. In the UK, they first appeared in The Guardian, with The Telegraph and the Daily Mail following. diff --git a/wiki/wikipedia/3500.txt b/wiki/wikipedia/3500.txt deleted file mode 100644 index 079a63948dc4df3b1540917911345bc4b7470297..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3500.txt +++ /dev/null @@ -1,146 +0,0 @@ -A decimal representation of a non-negative real number r is an expression in the form of a sequence of decimal digits traditionally written with a single separator -$$ -r=b_kb_{k-1}\ldots b_0.a_1a_2\ldots, -$$ - -where k is a nonnegative integer and $b_0, \ldots, b_k, a_1, a_2,\ldots$ are integers in the range 0, ..., 9, which are called the digits of the representation. - -This expression represents the infinite sum -$$ - r=\sum_{i=0}^k b_i10^i + \sum_{i=1}^\infty \frac{a_i}{10^i}. -$$ - -The sequence of the $a_i$ (the digits after the dot) may be finite, in which case the missing digits are assumed to be 0. - -Every nonnegative real number has at least one such representation; it has two such representations if and only if one of them has a trailing infinite sequence of zeros and the other has a trailing infinite sequence of nines. Some authors forbid decimal representations with a trailing infinite sequence of nines because this allows a one-to-one correspondence between nonnegative real numbers and decimal representations. - -The integer $\sum_{i=0}^k b_i10^i$, denoted by $a_0$ in the remainder of this article, is called the integer part of r, and the sequence of the $a_i$ represents the number -$$ -0.a_1a_2\ldots = \sum_{i=1}^\infty \frac{a_i}{10^i}, -$$ - -which is called the fractional part of r. - -Any real number can be approximated to any desired degree of accuracy by rational numbers with finite decimal representations. - -Assume $x\geq 0$. Then for every integer $n\geq 1$ there is a finite decimal $r_n=a_0.a_1a_2\cdots a_n$ such that -$$ -r_n\leq x < r_n+\frac{1}{10^n}. -$$ - -Proof: - -Let $r_n = \textstyle\frac{p}{10^n}$, where $p = \lfloor 10^nx\rfloor$. - -Then $p \leq 10^nx < p+1$, and the result follows from dividing all sides by $10^n$. - -(The fact that $r_n$ has a finite decimal representation is easily established.) - -Some real numbers $x$ have two infinite decimal representations. For example, the number 1 may be equally represented by 1.000... as by 0.999...
(where the infinite sequences of trailing 0's or 9's, respectively, are represented by "..."). Conventionally, the decimal representation without trailing 9's is preferred. Moreover, in the standard decimal representation of $x$, an infinite sequence of trailing 0's appearing after the decimal point is omitted, along with the decimal point itself if $x$ is an integer. - -Certain procedures for constructing the decimal expansion of $x$ will avoid the problem of trailing 9's. For instance, the following algorithmic procedure will give the standard decimal representation: Given $x\geq 0$, we first define $a_0$ (the integer part of $x$) to be the largest integer such that $a_0\leq x$ (i.e., $a_0 = \lfloor x\rfloor$). If $x=a_0$ the procedure terminates. Otherwise, for $(a_i)_{i=0}^{k-1}$ already found, we define $a_k$ inductively to be the largest integer such that
    $a_0+\frac{a_1}{10}+\frac{a_2}{10^2}+\cdots+\frac{a_k}{10^k}\leq x.\quad\quad (*)$
    The procedure terminates whenever $a_k$ is found such that equality holds in $(*)$; otherwise, it continues indefinitely to give an infinite sequence of decimal digits. It can be shown that $x = \sup_k\{\sum_{i=0}^{k}\frac{a_i}{10^i}\}$ (conventionally written as $x=a_0.a_1a_2a_3\cdots$), where $a_1,a_2,a_3\ldots \in \{0,1,2,\ldots, 9\},$ and the nonnegative integer $a_0$ is represented in decimal notation. This construction is extended to $x<0$ by applying the above procedure to $-x>0$ and denoting the resultant decimal expansion by $-a_0.a_1a_2a_3\cdots$. - -The decimal expansion of a non-negative real number x will end in zeros (or in nines) if and only if x is a rational number whose denominator is of the form $2^n5^m$, where m and n are non-negative integers. - -Proof: - -If the decimal expansion of x ends in zeros, i.e. $x=\sum_{i=0}^n\frac{a_i}{10^i}=\sum_{i=0}^n10^{n-i}a_i/10^n$ for some n, then the denominator of x is of the form $10^n = 2^n5^n$. - -Conversely, if the denominator of x is of the form $2^n5^m$, then
$$
x=\frac{p}{2^n5^m}=\frac{2^m5^np}{2^{n+m}5^{n+m}}=\frac{2^m5^np}{10^{n+m}}
$$
for some p. Thus x is of the form $\textstyle\frac{p}{10^k}$, and writing $p=\sum_{i=0}^{n}10^ia_i$ in decimal for some n gives $x=\sum_{i=0}^{n}\frac{10^ia_i}{10^k}$, a finite decimal; hence the expansion of x ends in zeros. - -Some real numbers have decimal expansions that eventually get into loops, endlessly repeating a sequence of one or more digits: - -1/3 = 0.33333... - -1/7 = 0.142857142857... - -1318/185 = 7.1243243243... - -Whenever this happens the number is still a rational number (i.e. it can alternatively be represented as a ratio of an integer and a positive integer). - -Also the converse is true: the decimal expansion of a rational number is either finite, or endlessly repeating. - -Every decimal representation of a rational number can be converted to a fraction by converting it into a sum of the integer, non-repeating, and repeating parts and then converting that sum to a single fraction with a common denominator. - -For example, to convert $\pm 8.123\overline{4567}$ to a fraction one notes the lemma:
$$
\begin{align}
0.000\overline{4567} & = 4567\times0.000\overline{0001} \\
& = 4567\times0.\overline{0001}\times\frac{1}{10^3} \\
& = 4567\times\frac{1}{9999}\times\frac{1}{10^3} \\
& = \frac{4567}{9999}\times\frac{1}{10^3} \\
& = \frac{4567}{(10^4 - 1)\times10^3} & \text{The exponents are the number of non-repeating digits after the decimal point (3) and the number of repeating digits (4).}
\end{align}
$$
- -Thus one converts as follows:
$$
\begin{align}
\pm 8.123\overline{4567} & = \pm \left(8 + \frac{123}{10^3} + \frac{4567}{(10^4 - 1) \times 10^3}\right) & \text{from above} \\
& = \pm \frac{8\times(10^4-1)\times10^3+123\times(10^4-1)+4567}{(10^4 - 1) \times 10^3} & \text{common denominator}\\
& = \pm \frac{81226444}{9999000} & \text{multiplying, and summing the numerator}\\
& = \pm \frac{20306611}{2499750} & \text{reducing}\\
\end{align}
$$
- -If there are no repeating digits one assumes that there is a forever repeating 0, e.g. $1.9 = 1.9\overline{0}$, although since that makes the repeating term zero the sum simplifies to two terms and the conversion is simpler.
- -For example:
$$
\begin{align}
\pm 8.1234 & = \pm \left(8 + \frac{1234}{10^4}\right) & \\
& = \pm \frac{8\times10^4+1234}{10^4} & \text{common denominator}\\
& = \pm \frac{81234}{10000} & \text{multiplying, and summing the numerator}\\
& = \pm \frac{40617}{5000} & \text{reducing}\\
\end{align}
$$
- - diff --git a/wiki/wikipedia/3501.txt b/wiki/wikipedia/3501.txt deleted file mode 100644 index eb8ddab22f576b1a912e881ab3ad95aab5bd2cb9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3501.txt +++ /dev/null @@ -1,13 +0,0 @@ -In mathematics, the Kodaira embedding theorem characterises non-singular projective varieties, over the complex numbers, amongst compact Kähler manifolds. In effect it says precisely which complex manifolds are defined by homogeneous polynomials. - -Kunihiko Kodaira's result is that for a compact Kähler manifold M, with a Hodge metric, meaning that the cohomology class in degree 2 defined by the Kähler form ω is an integral cohomology class, there is a complex-analytic embedding of M into complex projective space of some high enough dimension N. - -The fact that M embeds as an algebraic variety follows from its compactness by Chow's theorem. - -A Kähler manifold with a Hodge metric is occasionally called a Hodge manifold (named after W. V. D. Hodge), so Kodaira's result states that Hodge manifolds are projective. - -The converse, that projective manifolds are Hodge manifolds, is more elementary and was already known. - -Kodaira also proved (Kodaira 1963), by recourse to the classification of compact complex surfaces, that every compact Kähler surface is a deformation of a projective Kähler surface. This was later simplified by Buchdahl to remove reliance on the classification (Buchdahl 2008). - -Let X be a compact Kähler manifold, and L a holomorphic line bundle on X. Then L is a positive line bundle if and only if there is a holomorphic embedding $\varphi:X\rightarrow \mathbb P$ of X into some projective space such that $\varphi^*\mathcal O_{\mathbb P}(1)=L^{\otimes m}$ for some m > 0. diff --git a/wiki/wikipedia/3502.txt b/wiki/wikipedia/3502.txt deleted file mode 100644 index a87129a8e13f0131c4f95d2dacb722edda7c0875..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3502.txt +++ /dev/null @@ -1,6 +0,0 @@ -In number theory, octic reciprocity is a reciprocity law relating the residues of 8th powers modulo primes, analogous to the law of quadratic reciprocity, cubic reciprocity, and quartic reciprocity. - -There is a rational reciprocity law for 8th powers, due to Williams. Define the symbol $(x|p)_k$ to be +1 if x is a k-th power modulo the prime p and -1 otherwise. Let p and q be distinct primes congruent to 1 modulo 8, such that (p|q) = (q|p) = +1. Let $p = a^2 + b^2 = c^2 + 2d^2$ and $q = A^2 + B^2 = C^2 + 2D^2$, with aA odd. Then -$$ - (p|q)_8 = (q|p)_8 = (aB-bA|q)_4 (cD-dC|q)_2 \ . -$$ diff --git a/wiki/wikipedia/3503.txt b/wiki/wikipedia/3503.txt deleted file mode 100644 index 4dd83162f2fc13adced5be272bf0f2820ff306b8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3503.txt +++ /dev/null @@ -1,15 +0,0 @@ -In mathematics, a prime triplet is a set of three prime numbers in which the smallest and largest of the three differ by 6. In particular, the sets must have the form (p, p + 2, p + 6) or (p, p + 4, p + 6).
With the exceptions of (2, 3, 5) and (3, 5, 7), this is the closest possible grouping of three prime numbers, since one of every three sequential odd numbers is a multiple of three, and hence not prime (except for 3 itself). - -The first prime triplets are - -(5, 7, 11), (7, 11, 13), (11, 13, 17), (13, 17, 19), (17, 19, 23), (37, 41, 43), (41, 43, 47), (67, 71, 73), (97, 101, 103), (101, 103, 107), (103, 107, 109), (107, 109, 113), (191, 193, 197), (193, 197, 199), (223, 227, 229), (227, 229, 233), (277, 281, 283), (307, 311, 313), (311, 313, 317), (347, 349, 353), (457, 461, 463), (461, 463, 467), (613, 617, 619), (641, 643, 647), (821, 823, 827), (823, 827, 829), (853, 857, 859), (857, 859, 863), (877, 881, 883), (881, 883, 887) - -A prime triplet contains a pair of twin primes (p and p + 2, or p + 4 and p + 6), a pair of cousin primes (p and p + 4, or p + 2 and p + 6), and a pair of sexy primes (p and p + 6). - -A prime can be a member of up to three prime triplets - for example, 103 is a member of (97, 101, 103), (101, 103, 107) and (103, 107, 109). When this happens, the five involved primes form a prime quintuplet. - -A prime quadruplet (p, p + 2, p + 6, p + 8) contains two overlapping prime triplets, (p, p + 2, p + 6) and (p + 2, p + 6, p + 8). - -Similarly to the twin prime conjecture, it is conjectured that there are infinitely many prime triplets. The first known gigantic prime triplet was found in 2008 by Norman Luhn and François Morain. The primes are (p, p + 2, p + 6) with $p = 2072644824759 \times 2^{33333} - 1$. The largest known proven prime triplet contains primes with 20008 digits, namely the primes (p, p + 2, p + 6) with $p = 4111286921397 \times 2^{66420} - 1$. - -The Skewes number for the triplet (p, p + 2, p + 6) is $87613571$, and for the triplet (p, p + 4, p + 6) it is $337867$. diff --git a/wiki/wikipedia/3504.txt b/wiki/wikipedia/3504.txt deleted file mode 100644 index d211f80d91e076270b2092f8849a5323684a95dd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3504.txt +++ /dev/null @@ -1,3 +0,0 @@ -In elementary geometry, Reuschle's theorem describes a property of the cevians of a triangle intersecting in a common point and is named after the German mathematician Karl Gustav Reuschle (1812–1875). It is also known as Terquem's theorem after the French mathematician Olry Terquem (1782–1862), who published it in 1842. - -In a triangle $ABC$ with its three cevians intersecting in a common point other than the vertices $A$, $B$ or $C$ let $P_a$, $P_b$ and $P_c$ denote the intersections of the (extended) triangle sides and the cevians. The circle defined by the three points $P_a$, $P_b$ and $P_c$ intersects the (extended) triangle sides in the (additional) points $P'_a$, $P'_b$ and $P'_c$. Reuschle's theorem now states that the three new cevians $AP'_a$, $BP'_b$ and $CP'_c$ intersect in a common point as well. diff --git a/wiki/wikipedia/3505.txt b/wiki/wikipedia/3505.txt deleted file mode 100644 index 457ccf087c9af04dbac20fd96bd84ee2c11f4132..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3505.txt +++ /dev/null @@ -1,13 +0,0 @@ -In proof theory, an area of mathematical logic, proof compression is the problem of algorithmically compressing formal proofs. The developed algorithms can be used to improve the proofs generated by automated theorem proving tools such as SAT solvers, SMT-solvers, first-order theorem provers and proof assistants.
- -In propositional logic a resolution proof of a clause $\kappa$ from a set of clauses C is a directed acyclic graph (DAG): the input nodes are axiom inferences (without premises) whose conclusions are elements of C, the resolvent nodes are resolution inferences, and the proof has a node with conclusion $\kappa$. - -Proof compression algorithms for propositional resolution proofs include - -RecyclePivots, - -LowerUnits, - -LowerUnivalents, - -Split, - -Reduce&Reconstruct, and Subsumption. diff --git a/wiki/wikipedia/3506.txt b/wiki/wikipedia/3506.txt deleted file mode 100644 index 4e3543491dbf7a8216241f26a3861fc2501ac897..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3506.txt +++ /dev/null @@ -1,104 +0,0 @@ -Operational transformation (OT) is a technology for supporting a range of collaboration functionalities in advanced collaborative software systems. OT was originally invented for consistency maintenance and concurrency control in collaborative editing of plain text documents. Its capabilities have been extended and its applications expanded to include group undo, locking, conflict resolution, operation notification and compression, group-awareness, HTML/XML and tree-structured document editing, collaborative office productivity tools, application-sharing, and collaborative computer-aided media design tools. In 2009 OT was adopted as a core technique behind the collaboration features in Apache Wave and Google Docs. - -Operational Transformation was pioneered by C. Ellis and S. Gibbs in the GROVE (GRoup Outline Viewing Edit) system in 1989. Several years later, some correctness issues were identified and several approaches were independently proposed to solve these issues, which was followed by another decade of continuous effort to extend and improve OT by a community of dedicated researchers. In 1998, a Special Interest Group on Collaborative Editing was set up to promote communication and collaboration among CE and OT researchers. Since then, SIGCE has held annual CE workshops in conjunction with major CSCW (Computer Supported Cooperative Work) conferences, such as ACM, CSCW, GROUP and ECSCW. - -Collaboration systems utilizing Operational Transformations typically use replicated document storage, where each client has their own copy of the document; clients operate on their local copies in a lock-free, non-blocking manner, and the changes are then propagated to the rest of the clients; this ensures high responsiveness for the client in an otherwise high-latency environment such as the Internet. When a client receives the changes propagated from another client, it typically transforms the changes before executing them; the transformation ensures that application-dependent consistency criteria (invariants) are maintained by all sites. This mode of operation results in a system particularly suited for implementing collaboration features, like simultaneous document editing, in a high-latency environment such as the web. - -[Image: Basic idea behind OT] - -The basic idea of OT can be illustrated by using a simple text editing scenario as follows. Given a text document with a string "abc" replicated at two collaborating sites; and two concurrent operations: - -# O1 = Insert[0, "x"] (to insert character "x" at position "0") - -# O2 = Delete[2, "c"] (to delete the character "c" at position "2") - -generated by two users at collaborating sites 1 and 2, respectively. Suppose the two operations are executed in the order of O1 and O2 (at site 1). After executing O1, the document becomes "xabc".
To execute O2 after O1, O2 must be transformed against O1 to become: O2' = Delete[3, "c"], whose positional parameter is incremented by one due to the insertion of one character "x" by O1. Executing O2' on "xabc" deletes the correct character "c" and the document becomes "xab". However, if O2 is executed without transformation, it incorrectly deletes character "b" rather than "c". The basic idea of OT is to transform (or adjust) the parameters of an editing operation according to the effects of previously executed concurrent operations so that the transformed operation can achieve the correct effect and maintain document consistency. - -One functionality of OT is to support consistency maintenance in collaborative editing systems. A number of consistency models have been proposed in the research community, some generally for collaborative editing systems, and some specifically for OT algorithms. - -Following Ellis and Gibbs's 1989 paper "Concurrency control in groupware systems", the CSM and LBT approaches try to formalize alternative conditions that can be proved. The consistency model proposed in these two approaches consists of the following formal conditions: - -* Causality: the same definition as in the CC model - -* Single-operation effects: the effect of executing any operation in any execution state achieves the same effect as in its generation state - -* Multi-operation effects: the effects relation of any two operations is maintained after they are both executed in any states - -The above CSM model requires that a total order of all objects in the system be specified. Effectively, the specification is reduced to new objects introduced by insert operations. However, specification of the total order entails application-specific policies such as those to break insertion ties (i.e., new objects inserted by two concurrent operations at the same position). Consequently, the total order becomes application specific. Moreover, in the algorithm, the total order must be maintained in the transformation functions and control procedure, which increases time/space complexities of the algorithm. - -Alternatively, the CA model is based on the admissibility theory. The CA model includes two aspects: - -* Causality: the same definition as in the CC model - -* Admissibility: The invocation of every operation is admissible in its execution state, i.e., every invocation must not violate any effects relation (object ordering) that has been established by earlier invocations. - -These two conditions imply convergence. All cooperating sites converge in a state in which there is the same set of objects in the same order. Moreover, the ordering is effectively determined by the effects of the operations when they are generated. Since the two conditions also impose additional constraints on object ordering, they are actually stronger than convergence. The CA model and the design/prove approach are elaborated in the 2005 paper. OT has also been extended to further application domains such as 3D model editing. The basic OT data model has been extended into a hierarchy of multiple linear addressing domains, which is capable of modeling a broad range of documents. A data adaptation process is often required to map application-specific data models to an OT-compliant data model. - -There exist two approaches to supporting application level operations in an OT system: - -# Generic operation model approach: devise transformation functions for three primitive operations: insert, delete, and update.
# Application-specific operation model approach: devise transformation functions directly for the operations of the target application. For an application with m different operations, m x m transformation functions are needed for supporting this application. In this approach, transformation functions are application-specific and cannot be reused in different applications. - -Various OT functions have been designed for OT systems with different capabilities and used for different applications. OT functions used in different OT systems may be named differently, but they can be classified into two categories: - -* Inclusion transformation (or forward transformation): IT(Oa, Ob) or $T(op_1,op_2)$, which transforms operation Oa against another operation Ob in such a way that the impact of Ob is effectively included. This is, for example, the case of two insertions at different nodes. - -* Exclusion transformation (or backward transformation): ET(Oa, Ob) or $T^{-1}(op_1,op_2)$, which transforms operation Oa against another operation Ob in such a way that the impact of Ob is effectively excluded. This is, for example, the case of an insertion and a deletion at different nodes. - -For example, suppose a type String with an operation ins(p, c, sid) where p is the position of insertion, c is the character to insert and sid is the identifier of the site that has generated the operation. We can write the following inclusion transformation function: - -T(ins($p_1,c_1,sid_1$),ins($p_2,c_2,sid_2$)) :- - -if ($p_1 < p_2$) return ins($p_1,c_1,sid_1$) - -else if ($p_1=p_2$ and $sid_1 < sid_2$) return ins($p_1,c_1,sid_1$) - -else return ins($p_1+1,c_1,sid_1$) - -Let's also consider the operation del(p,sid) denoting the deletion of one character at position p at site sid. We can write the following exclusion transformation function: - -$T^{-1}$(ins($p_1,c_1,sid_1$),del($p_2,sid_2$)) :- - -if ($p_1 < p_2$) return ins($p_1,c_1,sid_1$) - -else if ($p_1=p_2$ and $sid_1 < sid_2$) return ins($p_1,c_1,sid_1$) - -else return ins($p_1-1,c_1,sid_1$) - -Some OT systems use both IT and ET functions, and some use only IT functions. The complexity of OT function design is determined by various factors: - -* the functionality of the OT system: whether the OT system supports do (consistency maintenance), undo, locking, awareness, application sharing, etc.; - -* the correctness responsibility in the OT system: what transformation properties (CP1/TP1, CP2/TP2, IP2, IP3, RP) to meet; whether ET is used; - -* the operation model of the OT system: whether the OT operation model is generic (e.g. primitive insert, delete, update), or application-specific (all operations of the target application); and - -* the data model of the OT system: whether the data in each operation is character-wise (an individual object), string-wise (a sequence of objects), hierarchical, or other structures. - -Various transformation properties for ensuring OT system correctness have been identified. These properties can be maintained by either the transformation control algorithm or by the transformation functions. Different OT system designs have different divisions of responsibility among these components. The specifications of these properties and the preconditions for requiring them are given below. - -[Figure: Illustration of the TP2 property] - -The following two properties are related to achieving convergence.
- -* CP1/TP1: For every pair of concurrent operations $op_1$ and $op_2$ defined on the same state, the transformation function T satisfies the CP1/TP1 property if and only if: $ op_1 \circ T(op_2,op_1) \equiv op_2 \circ T(op_1,op_2) $, where $op_i \circ op_j$ denotes the sequence of operations containing $op_i$ followed by $op_j$, and where $\equiv$ denotes equivalence of the two sequences of operations. CP1/TP1 precondition: CP1/TP1 is required only if the OT system allows any two operations to be executed in different orders. - -* CP2/TP2: For every three concurrent operations $op_1, op_2$ and $op_3$ defined on the same document state, the transformation function T satisfies the CP2/TP2 property if and only if: $T(op_3, op_1 \circ T(op_2,op_1)) = T(op_3, op_2 \circ T(op_1,op_2))$. CP2/TP2 stipulates equality between two operations transformed with regard to two equivalent sequences of operations: the transformation of $op_3$ against the sequence of operations $op_2$ followed by $T(op_1,op_2)$ must give the same operation as the transformation of $op_3$ against the sequence formed by $op_1$ and $T(op_2,op_1)$. CP2/TP2 precondition: CP2/TP2 is required only if the OT system allows two operations $op_1$ and $op_2$ to be IT-transformed in two different document states (or contexts). - -The following three properties are related to achieving the desired group undo effect. They are: - -* IP1: Given any document state S and the sequence $op \circ \overline{op}$, we have $S \circ op \circ \overline{op} = S$, which means the sequence $op \circ \overline{op}$ is equivalent to a single identity operation I with respect to the effect on the document state. This property is required in an OT system for achieving the correct undo effect, but is not related to IT functions. - -* IP2: The property IP2 expresses that the sequence $op \circ \overline{op}$ has no effect on the transformation of other operations. The transformation functions satisfy IP2 if and only if: $T(op_x, op \circ \overline{op})=op_x$, which means that the outcome of transforming $op_x$ against the sequence $op \circ \overline{op}$ is equivalent to the outcome of transforming $op_x$ against the identity operation I. IP2 precondition: IP2 is required only if the OT system allows an operation $op_x$ to be transformed against a pair of do and undo operations $op \circ \overline{op}$, one-by-one. - -* IP3: Given two concurrent operations $op_1$ and $op_2$ defined on the same document state (or context), define $\overline{op_1}' = T(\overline{op_1}, T(op_2, op_1))$ and $\overline{op_1'} = \overline{T(op_1, op_2)}$. The transformation functions satisfy the IP3 property if and only if $\overline{op_1}' = \overline{op_1'}$, which means that the transformed inverse operation $\overline{op_1}'$ is equal to the inverse of the transformed operation $\overline{op_1'}$. IP3 precondition: IP3 is required only if the OT system allows an inverse operation $\overline{op_1}$ to be transformed against an operation $op_2$ that is concurrent and defined on the same document state as (or context-equivalent to) $op_1$. - -Various OT control algorithms have been designed for OT systems with different capabilities and for different applications. The complexity of OT control algorithm design is determined by multiple factors. A key differentiating factor is whether an algorithm is capable of supporting concurrency control (do) and/or group undo.
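CP1/TP1 can be checked mechanically on concrete cases. Below is a minimal, self-contained Python sketch (an illustration added here, not part of the article; the dictionary encoding of operations and the site-id tie-breaking rule are assumptions modeled on the IT functions shown earlier) that replays the "abc" example from the introduction and confirms that both execution orders converge:

```python
def apply_op(doc, op):
    """Apply a character-wise insert or delete to a string document."""
    p = op["pos"]
    if op["kind"] == "ins":
        return doc[:p] + op["ch"] + doc[p:]
    return doc[:p] + doc[p + 1:]

def it(o1, o2):
    """Inclusion transformation of o1 against a concurrent o2 (same state).
    Insert/insert ties are broken by site id; del/del on the same position
    (which would require an identity op) is omitted for brevity."""
    t = dict(o1)
    p1, p2 = o1["pos"], o2["pos"]
    if o2["kind"] == "ins":
        if p1 > p2 or (p1 == p2 and (o1["kind"] == "del" or o1["sid"] > o2["sid"])):
            t["pos"] = p1 + 1
    else:  # o2 deleted one character, so later positions shift left
        if p1 > p2:
            t["pos"] = p1 - 1
    return t

doc = "abc"
o1 = {"kind": "ins", "pos": 0, "ch": "x", "sid": 1}  # O1 = Insert[0, "x"]
o2 = {"kind": "del", "pos": 2, "ch": "",  "sid": 2}  # O2 = Delete[2, "c"]

left  = apply_op(apply_op(doc, o1), it(o2, o1))  # o1, then transformed o2
right = apply_op(apply_op(doc, o2), it(o1, o2))  # o2, then transformed o1
print(left, right, left == right)                # xab xab True
```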
- -The correctness problems of OT led to the introduction of transformationless post-OT schemes, such as WOOT, Logoot and Causal Trees (CT). "Post-OT" schemes decompose the document into atomic operations, but they work around the need to transform operations by employing a combination of unique symbol identifiers, vector timestamps and/or tombstones. - -While the classic OT approach of defining operations through their offsets in the text seems to be simple and natural, real-world distributed systems raise serious issues: operations propagate with finite speed, states of participants are often different, and thus the resulting combinations of states and operations are extremely hard to foresee and understand. As Li and Li put it, "Due to the need to consider complicated case coverage, formal proofs are very complicated and error-prone, even for OT algorithms that only treat two characterwise primitives (insert and delete)". - -Similarly, Joseph Gentle, a former Google Wave engineer and an author of the Share.JS library, wrote, "Unfortunately, implementing OT sucks. There's a million algorithms with different tradeoffs, mostly trapped in academic papers. […] Wave took 2 years to write and if we rewrote it today, it would take almost as long to write a second time." But later he amends his comment with "I no longer believe that wave would take 2 years to implement now - mostly because of advances in web frameworks and web browsers." - -For OT to work, every single change to the data needs to be captured: "Obtaining a snapshot of the state is usually trivial, but capturing edits is a different matter altogether. […] The richness of modern user interfaces can make this problematic, especially within a browser-based environment." An alternative to OT is differential synchronization. - -Another alternative to OT is using sequence types of conflict-free replicated data type. diff --git a/wiki/wikipedia/3507.txt b/wiki/wikipedia/3507.txt deleted file mode 100644 index 7e764608f3411e8500517bc0bbc28853c10d62c5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3507.txt +++ /dev/null @@ -1,7 +0,0 @@ -In computer science and graph theory, the zero-weight cycle problem is the problem of deciding whether a directed graph with weights on the edges (which may be positive or negative or zero) has a cycle in which the sum of weights is 0. - -A related problem is to decide whether the graph has a cycle in which the sum of weights is less than 0. This related problem can be solved in polynomial time using the Bellman–Ford algorithm. In contrast, detecting a cycle of weight exactly 0 is an NP-complete problem. - -The problem is in NP since, given a cycle, it is easy to verify that its weight is 0. - -The proof of NP-hardness is by reduction from the subset sum problem. In this problem we are given a set of numbers, some positive and some negative, and have to decide whether there exists a subset whose sum is exactly 0. Given an instance of subset-sum with n numbers, construct an instance of zero-weight-cycle as follows. Construct a graph with 2n vertices. For each number $a_i$ the graph contains two vertices: $u_i$ and $v_i$. From each $u_i$, there is only one outgoing edge, which goes to $v_i$ and has weight $a_i$. From each $v_i$, there are n outgoing edges, which go to each $u_j$ and have weight 0. Any cycle in this graph has the form $u_1$-$v_1$-$u_2$-$v_2$-...-$u_k$-$v_k$.
The weight of a cycle is 0, iff the sum of the weights between each $u_i$ and its corresponding $v_i$ is 0, iff the sum of the corresponding $a_i$ is 0, iff there is a subset with a sum of 0. diff --git a/wiki/wikipedia/3508.txt b/wiki/wikipedia/3508.txt deleted file mode 100644 index 62a2efcbfbb92e6728f2000aeb236973e47284bd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3508.txt +++ /dev/null @@ -1,55 +0,0 @@ -Univariate is a term commonly used in statistics to describe a type of data which consists of observations on only a single characteristic or attribute. A simple example of univariate data would be the salaries of workers in industry. Like other kinds of data, univariate data can be visualized using graphs, images or other analysis tools after the data is measured, collected, reported, and analyzed. - -Some univariate data consist of numbers (such as a height of 65 inches or a weight of 100 pounds), while others are nonnumerical (such as eye colors of brown or blue). Generally, the terms categorical univariate data and numerical univariate data are used to distinguish between these types. - -Categorical univariate data consist of non-numerical observations that may be placed in categories. They include labels or names used to identify an attribute of each element. Categorical univariate data usually use either a nominal or an ordinal scale of measurement. - -Numerical univariate data consist of observations that are numbers. They are obtained using either an interval or a ratio scale of measurement. This type of univariate data can be classified even further into two subcategories: discrete and continuous. Numerical univariate data are discrete if the set of all possible values is finite or countably infinite. Discrete univariate data are usually associated with counting (such as the number of books read by a person). Numerical univariate data are continuous if the set of all possible values is an interval of numbers. Continuous univariate data are usually associated with measuring (such as the weights of people). - -Univariate analysis is the simplest form of analyzing data. Uni means one, so in other words the data has only one variable, and univariate analysis examines each variable separately. Data is gathered for the purpose of answering a question, or more specifically, a research question. Univariate data does not answer research questions about relationships between variables, but rather it is used to describe one characteristic or attribute that varies from observation to observation. Usually a researcher has one of two purposes: the first is to answer a research question with a descriptive study, and the second is to learn how an attribute varies, as with the individual effect of a variable in regression analysis. There are several ways to describe patterns found in univariate data, including graphical methods, measures of central tendency and measures of variability. - -The most frequently used graphical illustrations for univariate data are: - -Frequency is how many times a number occurs. The frequency of an observation in statistics tells us the number of times the observation occurs in the data. For example, in the following list of numbers {1, 2, 3, 4, 6, 9, 9, 8, 5, 1, 1, 9, 9, 0, 6, 9}, the frequency of the number 9 is 5 (because it occurs 5 times). - -A bar chart is a graph consisting of rectangular bars. The bars represent the number or percentage of observations in each existing category of a variable.
The length or height of the bars gives a visual representation of the proportional differences among categories. - -Histograms are used to estimate the distribution of the data, with the frequency of values assigned to value ranges called bins. - -A pie chart is a circle divided into portions that represent the relative frequencies or percentages of a population or a sample belonging to different categories. - -Central tendency is one of the most common numerical descriptive measures. It is used to estimate the central location of univariate data by the calculation of the mean, median and mode. Each of these calculations has its own advantages and limitations. The mean has the advantage that its calculation includes each value of the data set, but it is particularly susceptible to the influence of outliers. The median is a better measure when the data set contains outliers. The mode is simple to locate. Importantly, one is not restricted to using only one of these measures of central tendency. If the data being analyzed is categorical, then the only measure of central tendency that can be used is the mode. However, if the data is numerical in nature (ordinal or interval/ratio) then the mode, median, or mean can all be used to describe the data. Using more than one of these measures provides a more accurate descriptive summary of central tendency for the univariate data. - -A measure of variability or dispersion (deviation from the mean) of a univariate data set can reveal the shape of a univariate data distribution more fully. It provides some information about the variation among data values. The measures of variability together with the measures of central tendency give a better picture of the data than the measures of central tendency alone. The three most frequently used measures of variability are range, variance and standard deviation. The appropriateness of each measure depends on the type of data, the shape of the distribution of the data and which measure of central tendency is being used. If the data is categorical, then there is no measure of variability to report. For data that is numerical, all three measures are possible. If the distribution of data is symmetrical, then the measures of variability are usually the variance and standard deviation. However, if the data are skewed, then the measure of variability that would be appropriate for that data set is the range. - -A univariate distribution is the probability distribution of a single random variable, described either by a probability mass function (pmf) for a discrete probability distribution, or by a probability density function (pdf) for a continuous probability distribution. It is not to be confused with a multivariate distribution. - -Uniform distribution (discrete)
    - -Bernoulli distribution
    - -Binomial distribution
    - -Geometric distribution
    - -Negative binomial distribution
    - -Poisson distribution
    - -Hypergeometric distribution
    - -Zeta distribution - -Uniform distribution (continuous)
    - -Normal distribution
    - -Gamma distribution
    - -Exponential distribution
    - -Weibull distribution
    - -Cauchy distribution
    - -Beta distribution diff --git a/wiki/wikipedia/3509.txt b/wiki/wikipedia/3509.txt deleted file mode 100644 index 16fc6285f3708459f408c8dedd75f9f0db52b8af..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3509.txt +++ /dev/null @@ -1,181 +0,0 @@ -In the fields of databases and transaction processing (transaction management), a schedule (or history) of a system is an abstract model to describe execution of transactions running in the system. Often it is a list of operations (actions) ordered by time, performed by a set of transactions that are executed together in the system. If the order in time between certain operations is not determined by the system, then a partial order is used. Examples of such operations are requesting a read operation, reading, writing, aborting, committing, requesting a lock, locking, etc. Not all transaction operation types should be included in a schedule, and typically only selected operation types (e.g., data access operations) are included, as needed to reason about and describe certain phenomena. Schedules and schedule properties are fundamental concepts in database concurrency control theory. - -The following is an example of a schedule: - -
- -;D (schedule table omitted; D is given in list form below) - -
- -In this example, the horizontal axis represents the different transactions in the schedule D. The vertical axis represents the time order of operations. Schedule D consists of three transactions T1, T2, T3. The schedule describes the actions of the transactions as seen by the DBMS. - -First T1 Reads and Writes to object X, and then Commits. Then T2 Reads and Writes to object Y and Commits, and finally, T3 Reads and Writes to object Z and Commits. This is an example of a serial schedule, i.e., sequential with no overlap in time, because the actions of all three transactions are sequential, and the transactions are not interleaved in time. - -Representing the schedule D above by a table (rather than a list) is just for the convenience of identifying each transaction's operations at a glance. This notation is used throughout the article below. A more common way in the technical literature for representing such a schedule is by a list: - -D = R1(X) W1(X) Com1 R2(Y) W2(Y) Com2 R3(Z) W3(Z) Com3 - -Usually, for the purpose of reasoning about concurrency control in databases, an operation is modelled as atomic, occurring at a point in time, without duration. When this is not satisfactory, start and end time-points and possibly other point events are specified (rarely). Real executed operations always have some duration and specified respective times of occurrence of events within them (e.g., "exact" times of beginning and completion), but for concurrency control reasoning usually only the precedence in time of the whole operations (without looking into the quite complex details of each operation) matters, i.e., which operation is before, or after another operation. Furthermore, in many cases, the before/after relationships between two specific operations do not matter and should not be specified, while being specified for other pairs of operations. - -In general, operations of transactions in a schedule can interleave (i.e., transactions can be executed concurrently), while time orders between operations in each transaction remain unchanged as implied by the transaction's program. Since the time orders between all operations of all transactions do not always matter and need not always be specified, a schedule is, in general, a partial order between operations rather than a total order (where order for each pair is determined, as in a list of operations). Also in the general case, each transaction may consist of several processes, and itself be properly represented by a partial order of operations, rather than a total order. Thus, in general, a schedule is a partial order of operations, containing (embedding) the partial orders of all its transactions. - -Time-order between two operations can be represented by an ordered pair of these operations (e.g., the existence of a pair (OP1, OP2) means that OP1 is always before OP2), and a schedule in the general case is a set of such ordered pairs. Such a set, a schedule, is a partial order which can be represented by an acyclic directed graph (or directed acyclic graph, DAG) with operations as nodes and time-order as a directed edge (no cycles are allowed since a cycle means that a first (any) operation on a cycle can be both before and after (any) another second operation on the cycle, which contradicts our perception of time). In many cases, a graphical representation of such a graph is used to demonstrate a schedule.
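In code, the list notation above translates directly into a sequence of operation records. The following minimal Python sketch (an illustration added here, not part of the article; the triple encoding is an assumption) encodes schedule D and checks the serial property:

```python
# Schedule D as (transaction, action, object) triples; "Com" marks a commit.
D = [
    (1, "R", "X"), (1, "W", "X"), (1, "Com", None),
    (2, "R", "Y"), (2, "W", "Y"), (2, "Com", None),
    (3, "R", "Z"), (3, "W", "Z"), (3, "Com", None),
]

def is_serial(schedule):
    """A schedule is serial if each transaction's operations form one
    contiguous block, i.e. no transaction resumes after another has run."""
    seen, last = set(), None
    for tx, _, _ in schedule:
        if tx != last and tx in seen:
            return False
        seen.add(tx)
        last = tx
    return True

print(is_serial(D))  # True: D is the serial schedule described above
```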
- -Comment: Since a list of operations (and the table notation used in this article) always represents a total order between operations, schedules that are not a total order cannot be represented by a list (but can always be represented by a DAG). - -===Serial=== - -The transactions are executed non-interleaved (see example above); i.e., a serial schedule is one in which no transaction starts until a running transaction has ended. - -===Serializable=== - -A schedule that is equivalent (in its outcome) to a serial schedule has the serializability property. - -In schedule E, the order in which the actions of the transactions are executed is not the same as in D, but in the end, E gives the same result as D. - -
- -;E (schedule table omitted) - -
- -Two actions are said to be in conflict (a conflicting pair) if: - -# The actions belong to different transactions. - -# At least one of the actions is a write operation. - -# The actions access the same object (read or write). - -The following set of actions is conflicting: - -* R1(X), W2(X), W3(X) (3 conflicting pairs) - -While the following sets of actions are not: - -* R1(X), R2(X), R3(X) - -* R1(X), W2(Y), R3(X) - -The schedules S1 and S2 are said to be conflict-equivalent if the following two conditions are satisfied: - -# Both schedules S1 and S2 involve the same set of transactions (including the ordering of actions within each transaction). - -# The order of every pair of conflicting operations is the same in both schedules. - -A schedule is said to be conflict-serializable when the schedule is conflict-equivalent to one or more serial schedules. - -Another definition for conflict-serializability is that a schedule is conflict-serializable if and only if its precedence graph/serializability graph, when only committed transactions are considered, is acyclic (if the graph is defined to include also uncommitted transactions, then cycles involving uncommitted transactions may occur without conflict serializability violation). - -
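The acyclicity test above lends itself to a direct implementation. Below is a minimal Python sketch (an illustration added here, not part of the article; the triple encoding of schedules and the example schedules are assumptions): it builds the precedence graph over committed transactions from conflicting pairs and checks for cycles with a Kahn-style topological pass.

```python
from itertools import combinations

def conflict_serializable(schedule):
    """Add an edge t1 -> t2 for every pair of conflicting operations (same
    object, different committed transactions, at least one write) with t1's
    operation occurring first; the schedule is conflict-serializable iff the
    resulting precedence graph is acyclic."""
    committed = {t for t, a, _ in schedule if a == "Com"}
    edges = set()
    for (t1, a1, o1), (t2, a2, o2) in combinations(schedule, 2):
        if (t1 != t2 and t1 in committed and t2 in committed
                and o1 is not None and o1 == o2 and "W" in (a1, a2)):
            edges.add((t1, t2))
    # Kahn-style pass: the graph is acyclic iff every node drains.
    indeg = {t: 0 for t in committed}
    for _, v in edges:
        indeg[v] += 1
    queue = [t for t in committed if indeg[t] == 0]
    drained = 0
    while queue:
        u = queue.pop()
        drained += 1
        for e in [e for e in edges if e[0] == u]:
            edges.remove(e)
            indeg[e[1]] -= 1
            if indeg[e[1]] == 0:
                queue.append(e[1])
    return drained == len(committed)

# Interleaved but conflict-serializable (no conflicting pairs at all):
S1 = [(1, "R", "X"), (2, "R", "Y"), (1, "W", "X"), (2, "W", "Y"),
      (1, "Com", None), (2, "Com", None)]
# Classic non-serializable interleaving on a single object:
S2 = [(1, "R", "X"), (2, "R", "X"), (1, "W", "X"), (2, "W", "X"),
      (1, "Com", None), (2, "Com", None)]
print(conflict_serializable(S1), conflict_serializable(S2))  # True False
```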
- -;G (schedule table omitted) - -
- -Which is conflict-equivalent to the serial schedule , but not . - -A schedule is said to be commitment-ordered (commit-ordered), or commitment-order-serializable, if it obeys the Commitment ordering (CO; also commit-ordering or commit-order-serializability) schedule property. This means that the order in time of transactions' commitment events is compatible with the precedence (partial) order of the respective transactions, as induced by their schedule's acyclic precedence graph (serializability graph, conflict graph). This implies that it is also conflict-serializable. The CO property is especially effective for achieving Global serializability in distributed systems. - -Comment: Commitment ordering, which was discovered in 1990, is obviously not mentioned in (Bernstein et al. 1987). Its correct definition appears in (Weikum and Vossen 2001); however, the description there of its related techniques and theory is partial, inaccurate, and misleading. For extensive coverage of commitment ordering and its sources see Commitment ordering and The History of Commitment Ordering. - -Two schedules S1 and S2 are said to be view-equivalent when the following conditions are satisfied: - -# If the transaction $T_i$ in S1 reads an initial value for object X, so does the transaction $T_i$ in S2. - -# If the transaction $T_i$ in S1 reads the value written by transaction $T_j$ in S1 for object X, so does the transaction $T_i$ in S2. - -# If the transaction $T_i$ in S1 is the final transaction to write the value for an object X, so is the transaction $T_i$ in S2. - -A schedule is said to be view-serializable if it is view-equivalent to some serial schedule. - -Note that by definition, all conflict-serializable schedules are view-serializable. - -
- -;G (schedule table omitted) - -
- -Notice that the above example (which is the same as the example in the discussion of conflict-serializable) is both view-serializable and conflict-serializable at the same time. There are, however, view-serializable schedules that are not conflict-serializable: those schedules with a transaction performing a blind write: - -
- -[Table of schedule H omitted.] - -
- -The above example is not conflict-serializable, but it is view-serializable, since it has a view-equivalent serial schedule. - -Since determining whether a schedule is view-serializable is NP-complete, view-serializability has little practical interest. - -===Recoverable=== - -Transactions commit only after all transactions whose changes they read have committed. - -
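This condition lends itself to a mechanical check. The sketch below is a minimal Python illustration written for this text; the event encoding (('R'|'W', txn, obj) for reads and writes, ('C'|'A', txn) for commits and aborts) is an assumption made for the example, not a standard format.

```python
def is_recoverable(schedule):
    """Check that every transaction commits only after all transactions
    whose (then-uncommitted) writes it read have committed."""
    last_writer = {}   # obj -> txn whose write is currently visible
    reads_from = {}    # txn -> txns whose uncommitted writes it read
    committed = set()
    for ev in schedule:
        kind, t = ev[0], ev[1]
        if kind == 'W':
            last_writer[ev[2]] = t
        elif kind == 'R':
            w = last_writer.get(ev[2])
            if w is not None and w != t and w not in committed:
                reads_from.setdefault(t, set()).add(w)
        elif kind == 'C':
            # t may not commit while a transaction it read from is
            # still uncommitted (or already aborted)
            if any(w not in committed for w in reads_from.get(t, ())):
                return False
            committed.add(t)
        # aborts ('A') need no extra bookkeeping for this check
    return True

# schedule G below: T2 reads T1's write and commits before T1 finishes
print(is_recoverable([('W', 1, 'A'), ('R', 2, 'A'), ('C', 2), ('A', 1)]))  # False
```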
- -[Tables of schedules F and F2 omitted.] - -
- -These schedules are recoverable. F is recoverable because T1 commits before T2, which makes the value read by T2 correct; T2 can then commit itself. In F2, if T1 aborts, T2 has to abort as well, because the value of A it read is incorrect. In both cases, the database is left in a consistent state. - -If a transaction T1 aborts, and a transaction T2 commits but relied on T1, we have an unrecoverable schedule. - -
- -[Table of schedule G omitted.] - -
- -In this example, G is unrecoverable: T2 read the value of A written by T1 and committed, but T1 later aborted, so the value read by T2 is wrong; since T2 had already committed, the schedule cannot be recovered. - -A related property is avoiding cascading aborts (ACA): a schedule avoids cascading aborts if a single transaction abort cannot force a series of further transaction rollbacks. A strategy to prevent cascading aborts is to disallow a transaction from reading the uncommitted changes of another transaction in the same schedule. - -The following examples are the same as the ones in the discussion on recoverability: - -
- -[Tables of schedules F and F2 omitted.] - -
- -In this example, although F2 is recoverable, it does not avoid cascading aborts: if T1 aborts, T2 has to be aborted too in order to maintain the correctness of the schedule, as T2 has already read the uncommitted value written by T1. - -The following is a recoverable schedule which also avoids cascading aborts. Note, however, that the update of A by T1 is always lost (since T1 is aborted). - -
- -[Table of schedule F3 omitted.] - -
- -Note that this schedule would not be serializable if T1 had committed. - -Avoidance of cascading aborts is sufficient but not necessary for a schedule to be recoverable. - -A schedule is strict - has the strictness property - if for any two transactions T1, T2, whenever a write operation of T1 precedes a conflicting operation of T2 (either read or write), the commit or abort event of T1 also precedes that conflicting operation of T2. - -Any strict schedule is cascade-less, but the converse does not hold. Strictness allows efficient recovery of databases from failure. - -The following expressions illustrate the hierarchical (containment) relationships between serializability and recoverability classes: - -* Serial ⊂ commitment-ordered ⊂ conflict-serializable ⊂ view-serializable ⊂ all schedules - -* Serial ⊂ strict ⊂ cascadeless (ACA) ⊂ recoverable ⊂ all schedules - -The Venn diagram (below) illustrates the above clauses graphically. - -In practice, most general purpose database systems employ conflict-serializable and recoverable (primarily strict) schedules. diff --git a/wiki/wikipedia/351.txt b/wiki/wikipedia/351.txt deleted file mode 100644 index 315aba7031d7f66f67fdf2c3106bae8c8c5d314f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/351.txt +++ /dev/null @@ -1,62 +0,0 @@ -In probability theory, Bernstein inequalities give bounds on the probability that the sum of random variables deviates from its mean. In the simplest case, let X1, ..., Xn be independent Bernoulli random variables taking values +1 and -1 with probability 1/2 (this distribution is also known as the Rademacher distribution); then for every positive $\varepsilon$, -$$ -\mathbb{P}\left (\left|\frac{1}{n}\sum_{i=1}^n X_i\right| > \varepsilon \right ) \leq 2\exp \left (-\frac{n\varepsilon^2}{2(1+\frac{\varepsilon}{3})} \right). -$$ - -Bernstein inequalities were proved and published by Sergei Bernstein in the 1920s and 1930s. Later, these inequalities were rediscovered several times in various forms. Thus, special cases of the Bernstein inequalities are also known as the Chernoff bound, Hoeffding's inequality and Azuma's inequality. - -1. Let $X_1, \ldots, X_n$ be independent zero-mean random variables. Suppose that $|X_i|\leq M$ almost surely, for all $i.$ Then, for all positive $t$, -$$ -\mathbb{P} \left (\sum_{i=1}^n X_i \geq t \right ) \leq \exp \left ( -\frac{\tfrac{1}{2} t^2}{\sum_{i = 1}^n \mathbb{E} \left[X_i^2 \right ]+\tfrac{1}{3} Mt} \right ). -$$ - -2. Let $X_1, \ldots, X_n$ be independent zero-mean random variables. Suppose that for some positive real $L$ and every integer $k>1$, -$$ - \mathbb{E} \left[ \left |X_i^k \right |\right ] \leq \frac{1}{2} \mathbb{E} \left[X_i^2\right] L^{k-2} k! -$$ - -Then -$$ -\mathbb{P} \left (\sum_{i=1}^n X_i \geq 2t \sqrt{\sum \mathbb{E} \left [X_i^2 \right ]} \right ) < \exp(-t^2), \qquad \text{for}\quad 0 \leq t \leq \frac{1}{2L}\sqrt{\sum \mathbb{E} \left[X_i^2\right ]}. -$$ - -3. Let $X_1, \ldots, X_n$ be independent zero-mean random variables. Suppose that -$$ - \mathbb{E} \left[ \left |X_i^k \right |\right ] \leq \frac{k!}{4!} \left(\frac{L}{5}\right)^{k-4} -$$ - -for all integers $k>3.$ Denote -$$ - A_k = \sum \mathbb{E} \left [ X_i^k\right ]. -$$ - -Then, -$$ -\mathbb{P} \left( \left| \sum_{j=1}^n X_j - \frac{A_3 t^2}{3A_2} \right|\geq \sqrt{2A_2} t \left[ 1 + \frac{A_4 t^2}{6 A_2^2} \right] \right ) < 2 \exp (- t^2), \qquad \text{for} \quad 0 < t \leq \frac{5 \sqrt{2A_2}}{4L}. -$$ - -4.
Bernstein also proved generalizations of the inequalities above to weakly dependent random variables. For example, inequality (2) can be extended as follows. Let $X_1, \ldots, X_n$ be possibly non-independent random variables. Suppose that for all integers $i>0$, - -\begin{align} - -\mathbb{E} \left[ X_i \mid X_1, \ldots, X_{i-1} \right ] &= 0, \\ - -\mathbb{E} \left[ X_i^2 \mid X_1, \ldots, X_{i-1} \right ] &\leq R_i \mathbb{E} \left [ X_i^2 \right ], \\ - -\mathbb{E} \left[ X_i^k \mid X_1, \ldots, X_{i-1} \right ] &\leq \tfrac{1}{2} \mathbb{E} \left[ X_i^2 \mid X_1, \ldots, X_{i-1} \right ] L^{k-2} k! - -\end{align} - -Then -$$ -\mathbb{P} \left( \sum_{i=1}^n X_i \geq 2t \sqrt{\sum_{i=1}^n R_i \mathbb{E}\left [ X_i^2 \right ]} \right) < \exp(-t^2), \qquad \text{for}\quad 0 < t \leq \frac{1}{2L} \sqrt{\sum_{i=1}^n R_i \mathbb{E} \left [X_i^2 \right ]}. -$$ - -More general results for martingales can be found in Fan et al. (2015). - -The proofs are based on an application of Markov's inequality to the random variable -$$ - \exp \left ( \lambda \sum_{j=1}^n X_j \right ), -$$ - -for a suitable choice of the parameter $\lambda > 0$. diff --git a/wiki/wikipedia/3510.txt b/wiki/wikipedia/3510.txt deleted file mode 100644 index 2730ff65528e2dad02e2a875b883ef9ede632d07..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3510.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, specifically in the study of vector bundles over complex Kähler manifolds, the Nakano vanishing theorem, sometimes called the Akizuki–Nakano vanishing theorem, generalizes the Kodaira vanishing theorem. Given a compact complex manifold M with a holomorphic line bundle F over M, the Nakano vanishing theorem provides a condition on when the cohomology groups $H^q(M; \Omega^p(F))$ equal zero. Here, $\Omega^p(F)$ denotes the sheaf of holomorphic (p,0)-forms taking values in F, and $n$ denotes the complex dimension of M. The theorem states that, if the first Chern class of F is negative, -$$ -H^q(M; \Omega^p(F)) = 0 \text{ when } q + p < n. -$$ - -Alternatively, if the first Chern class of F is positive, -$$ -H^q(M; \Omega^p(F)) = 0 \text{ when } q + p > n. -$$ diff --git a/wiki/wikipedia/3511.txt b/wiki/wikipedia/3511.txt deleted file mode 100644 index 566ae7453f7b9acbee1a65242ca74b2a37363dbf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3511.txt +++ /dev/null @@ -1,129 +0,0 @@ -Here are some of the more commonly known problems that are PSPACE-complete when expressed as decision problems. This list is in no way comprehensive.
- -Generalized versions of: - -* Amazons - -* Atomix - -* Checkers - -* Dyson Telescope Game - -* Cross Purposes - -* Geography - -* Ko-free Go - -* Ladder capturing in Go - -* Gomoku - -* Hex - -* Konane - -* Poset Game - -* Reversi - -* River Crossing - -* Finding optimal play in Mahjong solitaire - -* Sokoban - -* Black Pebble game - -* Black-White Pebble game - -* Acyclic pebble game - -* One-player pebble game - -* Provability in intuitionistic propositional logic - -* Satisfaction in modal logic S4 - -* Linear temporal logic satisfiability and model checking - -* Type inhabitation problem for simply typed lambda calculus - -* Integer circuit evaluation - -* Word problem for linear bounded automata - -* Word problem for quasi-realtime automata - -* Emptiness problem for a nondeterministic two-way finite state automaton - -* Equivalence problem for nondeterministic finite automata - -* Word problem and emptiness problem for non-erasing stack automata - -* Emptiness of intersection of an unbounded number of deterministic finite automata - -* A generalized version of Langton's Ant - -* Minimizing nondeterministic finite automata - -* Word problem for context-sensitive language - -* Intersection emptiness for an unbounded number of regular languages - -* Regular Expression Star-Freeness - -* Equivalence problem for regular expressions - -* Emptiness problem for regular expressions with intersection. - -* Equivalence problem for star-free regular expressions with squaring. - -* Covering for linear grammars - -* Structural equivalence for linear grammars - -* Equivalence problem for Regular grammars - -* Emptiness problem for ET0L grammars - -* Word problem for ET0L grammars - -* Tree transducer language membership problem for top-down finite-state tree transducers - -* succinct versions of many graph problems, with graphs represented as Boolean circuits, ordered binary decision diagrams or other related representations: - -** s-t reachability problem for succinct graphs. This is essentially the same as the simplest plan existence problem in automated planning and scheduling. - -** planarity of succinct graphs - -** acyclicity of succinct graphs - -** connectedness of succinct graphs - -** existence of Eulerian paths in a succinct graph - -* Canadian traveller problem. - -* Determining whether routes selected by the Border Gateway Protocol will eventually converge to a stable state for a given set of path preferences - -* Dynamic graph reliability. - -* Deterministic constraint logic (unbounded) - -* Nondeterministic Constraint Logic (unbounded) - -* Bounded two-player Constraint Logic - -* Finite horizon POMDPs (Partially Observable Markov Decision Processes). - -* Hidden Model MDPs (hmMDPs). - -* Dynamic Markov process. - -* Detection of inclusion dependencies in a relational database - -* Computation of any Nash equilibrium of a 2-player normal-form game that may be obtained via the Lemke–Howson algorithm. - -* The Corridor Tiling Problem: given a set of Wang tiles, a chosen tile $T_0$ and a width $n$ given in unary notation, is there any height $m$ such that an $n\times m$ rectangle can be tiled such that all the border tiles are $T_0$? diff --git a/wiki/wikipedia/3512.txt b/wiki/wikipedia/3512.txt deleted file mode 100644 index 2730ff65528e2dad02e2a875b883ef9ede632d07..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3512.txt +++ /dev/null @@ -1,37 +0,0 @@ -The clique percolation method is a popular approach for analyzing the overlapping community structure of networks.
The term network community (also called a module, cluster or cohesive group) has no widely accepted unique definition; it is usually defined as a group of nodes that are more densely connected to each other than to other nodes in the network. There are numerous alternative methods for detecting communities in networks, for example, the Girvan–Newman algorithm, hierarchical clustering and modularity maximization. - -The clique percolation method builds up the communities from k-cliques, which correspond to complete (fully connected) sub-graphs of k nodes. (E.g., a k-clique at k = 3 is equivalent to a triangle). Two k-cliques are considered adjacent if they share k - 1 nodes. A community is defined as the maximal union of k-cliques that can be reached from each other through a series of adjacent k-cliques. Such communities can be best interpreted with the help of a k-clique template (an object isomorphic to a complete graph of k nodes). Such a template can be placed onto any k-clique in the graph, and rolled to an adjacent k-clique by relocating one of its nodes and keeping its other k - 1 nodes fixed. Thus, the k-clique communities of a network are all those sub-graphs that can be fully explored by rolling a k-clique template in them, but cannot be left by this template. - -This definition allows overlaps between the communities in a natural way, as illustrated in Fig.1, showing four k-clique communities at k = 4. The communities are color-coded and the overlap between them is emphasized in red. The definition above is also local: if a certain sub-graph fulfills the criteria to be considered as a community, then it will remain a community independent of what happens to another part of the network far away. In contrast, when searching for the communities by optimizing with respect to a global quantity, a change far away in the network can reshape the communities in the unperturbed regions as well. Furthermore, it has been shown that global methods can suffer from a resolution limit problem, where the size of the smallest community that can be extracted is dependent on the system size. A local community definition such as here circumvents this problem automatically. - -Since even small networks can contain a vast number of k-cliques, the implementation of this approach is based on locating all maximal cliques rather than the individual k-cliques; the worst-case runtime complexity of this step is exponential in the number of nodes. - -[Fig.1. Illustration of the k-clique communities at k = 4.] - -On a network with directed links a directed k-clique is a complete subgraph with k nodes fulfilling the following condition. The k nodes can be ordered such that between an arbitrary pair of them there exists a directed link pointing from the node with the higher rank towards the node with the lower rank. The directed Clique Percolation Method defines directed network communities as the percolation clusters of directed k-cliques. - -On a network with weighted links a weighted k-clique is a complete subgraph with k nodes such that the geometric mean of the k (k - 1) / 2 link weights within the k-clique is greater than a selected threshold value, I. The weighted Clique Percolation Method defines weighted network communities as the percolation clusters of weighted k-cliques. Note that the geometric mean of link weights within a subgraph is called the intensity of that subgraph.
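Read as an algorithm, the definition above is only a few lines. The following Python sketch is a naive illustration written for this text (it enumerates k-cliques from the maximal cliques and is quadratic in their number, unlike the optimized tools discussed below):

```python
import networkx as nx
from itertools import combinations

def k_clique_communities_naive(G, k):
    """Clique percolation: collect all k-cliques, join two k-cliques
    when they share k - 1 nodes, and merge connected chains of them."""
    kcliques = set()
    for maxc in nx.find_cliques(G):        # maximal cliques of G
        if len(maxc) >= k:
            kcliques.update(frozenset(c) for c in combinations(maxc, k))
    adj = nx.Graph()
    adj.add_nodes_from(kcliques)
    for c1, c2 in combinations(kcliques, 2):
        if len(c1 & c2) == k - 1:          # adjacent k-cliques
            adj.add_edge(c1, c2)
    # each percolation cluster of k-cliques is one overlapping community
    return [frozenset().union(*comp) for comp in nx.connected_components(adj)]
```

For real use, the networkx library ships a built-in k_clique_communities routine implementing this method, which is preferable to the naive sketch above.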
- -Clique percolation methods may be generalized by recording different amounts of overlap between the various k-cliques. This then defines a new type of graph, a clique graph, where each k-clique in the original graph is represented by a vertex in the new clique graph. The edges in the clique graph are used to record the strength of the overlap of cliques in the original graph. One may then apply any community detection method to this clique graph to identify the clusters in the original graph through the k-clique structure. - -For instance in a simple graph, we can define the overlap between two k-cliques to be the number of vertices common to both k-cliques. The Clique Percolation Method is then equivalent to thresholding this clique graph, dropping all edges of weight less than (k-1), with the remaining connected components forming the communities of cliques found in CPM. For k=2 the cliques are the edges of the original graph and the clique graph in this case is the line graph of the original network. - -In practice, using the number of common vertices as a measure of the strength of clique overlap may give poor results, as large cliques in the original graph, those with many more than k vertices, will dominate the clique graph. The problem arises because if a vertex is in n different k-cliques it will contribute to n(n-1)/2 edges in such a clique graph. A simple solution is to let each vertex common to two overlapping k-cliques contribute a weight equal to 1/n when measuring the overlap strength of the two k-cliques. - -In general the clique graph viewpoint is a useful way of finding generalizations of standard clique-percolation methods to get around any problems encountered. It even shows how to describe extensions of these methods based on other motifs, subgraphs other than k-cliques. In this case a clique graph is best thought of as a particular example of a hypergraph. - -The Erdős–Rényi model shows a series of interesting transitions when the probability p of two nodes being connected is increased. For each k one can find a certain threshold probability pc above which the k-cliques organize into a giant community. (The size of the giant community is comparable to the system size; in other words, the giant community occupies a finite part of the system even in the thermodynamic limit.) This transition is analogous to the percolation transition in statistical physics. A similar phenomenon can be observed in many real networks as well: if k is large, only the most densely linked parts are accepted as communities, thus, they usually remain small and dispersed. When k is lowered, both the number and the size of the communities start to grow. However, in most cases a critical k value can be reached, below which a giant community emerges, smearing out the details of the community structure by merging (and making invisible) many smaller communities. - -The clique percolation method has been used to detect communities in settings ranging from studies of cancer metastasis, through various social networks, to document clustering and economic networks. - -There are a number of implementations of clique percolation. The clique percolation method was first implemented and popularized by CFinder (freeware for non-commercial use) software for detecting and visualizing overlapping communities in networks. The program enables customizable visualization and allows easy strolling over the found communities. The package contains a command line version of the program as well, which is suitable for scripting.
- -A faster implementation (under the GPL) has been implemented by another group. Another example, which is also very fast in certain contexts, is the SCP algorithm. - -A parallel version of the clique percolation method was designed and developed by S. Mainardi et al. By exploiting today's multi-core/multi-processor computing architectures, the method enables the extraction of k-clique communities from very large networks such as the Internet. The authors released the source code of the method under the GPL and made it available to the community. diff --git a/wiki/wikipedia/3513.txt b/wiki/wikipedia/3513.txt deleted file mode 100644 index 949f93c73644e76121974dc68fc445bdfca71a9f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3513.txt +++ /dev/null @@ -1,89 +0,0 @@ -In logic and mathematics, proof by contradiction is a form of proof that establishes the truth or the validity of a proposition, by showing that assuming the proposition to be false leads to a contradiction. Proof by contradiction is also known as indirect proof, proof by assuming the opposite, and reductio ad impossibile. - -Proof by contradiction is based on the law of noncontradiction as first formalized as a metaphysical principle by Aristotle. Noncontradiction is also a theorem in propositional logic. This states that an assertion or mathematical statement cannot be both true and false. That is, a proposition Q and its negation $\lnot$Q ("not-Q") cannot both be true. In a proof by contradiction, it is shown that the denial of the statement being proved results in such a contradiction. It has the form of a reductio ad absurdum argument, and usually proceeds as follows: - -#The proposition to be proved, P, is assumed to be false. That is, $\lnot$P is true. - -#It is then shown that $\lnot$P implies two mutually contradictory assertions, Q and $\lnot$Q. - -#Since Q and $\lnot$Q cannot both be true, the assumption that P is false must be wrong, so P must be true. - -The third step is based on the following possible truth-value cases of a valid argument p → q: - -* p(T) → q(T), where x in p(x) is the truth value of a statement p; T for True and F for False. - -* p(F) → q(T). - -* p(F) → q(F). - -These cases tell us that if a false statement is reached via valid logic from an assumed statement, then the assumed statement must itself be false. This fact is used in proof by contradiction. - -Proof by contradiction is formulated as $\text{p}\equiv \text{p}\vee \bot \equiv \lnot\left( \lnot\text{p} \right)\vee \bot\equiv \lnot\text{p}\to \bot$, where $\bot$ is a logical contradiction or a false statement (a statement whose truth value is false). If $\bot$ is reached from $\lnot$P via valid logic, then $\lnot\text{p}\to \bot$ is proved true, and so p is proved true. - -An alternate form of proof by contradiction derives a contradiction with the statement to be proved by showing that $\lnot$P implies P. This is a contradiction, so the assumption $\lnot$P must be false and, equivalently, P must be true. This is formulated as $\text{p}\equiv \text{p}\vee \text{p}\equiv \lnot\left( \lnot\text{p} \right)\vee \text{p}\equiv \lnot\text{p}\to \text{p}$. - -An existence proof by contradiction assumes that some object doesn't exist, and then proves that this would lead to a contradiction; thus, such an object must exist. Although it is quite freely used in mathematical proofs, not every school of mathematical thought accepts this kind of nonconstructive proof as universally valid.
- -Proof by contradiction also depends on the law of the excluded middle, also first formulated by Aristotle. This states that either an assertion or its negation must be true -$$ -\forall P \vdash (P \lor \lnot P) -$$ - -(For all propositions P, either P or not-P is true) - -That is, there is no other truth value besides "true" and "false" that a proposition can take. Combined with the principle of noncontradiction, this means that exactly one of $P$ and $\lnot P$ is true. In proof by contradiction, this permits the conclusion that since the possibility of $\lnot P$ has been excluded, $P$ must be true. - -Intuitionist mathematicians do not accept the law of the excluded middle, and thus reject proof by contradiction as a viable proof technique. - -If the proposition to be proved has itself the form of a negation $\lnot P$, a proof by contradiction can start by assuming that $P$ is true and derive a contradiction from that assumption. It then follows that the assumption was wrong, so $P$ is false. In such cases, the proof does not need to appeal to the law of the excluded middle. An example is the proof of irrationality of the square root of 2 given below. - -Proof by contradiction is closely related to proof by contrapositive, and the two are sometimes confused, though they are distinct methods. The main distinction is that a proof by contrapositive applies only to statements $P$ that can be written in the form $A \rightarrow B$ (i.e., implications), whereas the technique of proof by contradiction applies to statements $P$ of any form: - -* Proof by contradiction (general): assume $ \lnot P$ and derive a contradiction. - -This corresponds, in the framework of propositional logic, to the equivalence $\text{p}\equiv \text{p}\vee \bot \equiv \lnot\left( \lnot\text{p} \right)\vee \bot\equiv \lnot\text{p}\to \bot$, where $\bot$ is a logical contradiction or a false statement (a statement whose truth value is false). - -In the case where the statement to be proven is an implication $ A \rightarrow B$, then the differences between direct proof, proof by contrapositive, and proof by contradiction can be outlined as follows: - -* Direct proof: assume $ A$ and show $ B$. - -* Proof by contrapositive: assume $ \lnot B$ and show $ \lnot A$. - -This corresponds to the equivalence $ A\rightarrow B \equiv \lnot B\rightarrow \lnot A$. - -* Proof by contradiction: assume $ A$ and $ \lnot B$ and derive a contradiction. - -This corresponds to the equivalences $\text{p}\to \text{q}\equiv \lnot\text{p}\vee \text{q}\equiv \lnot\left( \text{p}\wedge \lnot\text{q} \right)\equiv \lnot\left( \text{p}\wedge \lnot\text{q} \right)\vee \bot\equiv \left( \text{p}\wedge \lnot\text{q} \right)\to \bot$. - -A classic proof by contradiction from mathematics is the proof that the square root of 2 is irrational. If it were rational, it would be expressible as a fraction a/b in lowest terms, where a and b are integers, at least one of which is odd. But if a/b = $\sqrt{2}$, then $a^2 = 2b^2$. Therefore, $a^2$ must be even, and because the square of an odd number is odd, that in turn implies that a is itself even, which means that b must be odd because a/b is in lowest terms. - -On the other hand, if a is even, then $a^2$ is a multiple of 4. If $a^2$ is a multiple of 4 and $a^2 = 2b^2$, then $2b^2$ is a multiple of 4, and therefore $b^2$ must be even, which means that b must be even as well. - -So b is both odd and even, a contradiction. Therefore, the initial assumption, that $\sqrt{2}$ can be expressed as a fraction, must be false.
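The schema "assume $\lnot$P and derive a falsehood" can be written directly in a proof assistant. Here is a minimal sketch in Lean 3 (the dialect used in the Lean examples appearing later in this collection); proving double-negation elimination is our own choice of illustration:

```lean
-- proof by contradiction: to prove P, assume ¬P and derive false
example (P : Prop) : ¬¬P → P :=
assume h : ¬¬P,
classical.by_contradiction (assume hn : ¬P, h hn)
```

Note that the proof goes through classical.by_contradiction, reflecting the dependence on the law of the excluded middle discussed above; intuitionistic Lean proofs cannot use it.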
- -The method of proof by contradiction has also been used to show that for any non-degenerate right triangle, the length of the hypotenuse is less than the sum of the lengths of the two remaining sides. By letting c be the length of the hypotenuse and a and b be the lengths of the legs, one can also express the claim more succinctly as a + b > c. A proof by contradiction can then be made by appealing to the Pythagorean theorem. - -First, the claim is negated to assume that a + b ≤ c. Squaring both sides then yields $(a + b)^2 \leq c^2$, or equivalently, $a^2 + 2ab + b^2 \leq c^2$. A triangle is non-degenerate if each of its edges has positive length, so it may be assumed that both a and b are greater than 0. Therefore, $a^2 + b^2 < a^2 + 2ab + b^2 \leq c^2$, and the transitive relation may be reduced further to $a^2 + b^2 < c^2$. - -On the other hand, it is also known from the Pythagorean theorem that $a^2 + b^2 = c^2$. This results in a contradiction, since strict inequality and equality are mutually exclusive. Because the Pythagorean theorem is known to hold, the assumption a + b ≤ c must be false, and hence a + b > c, proving the claim. - -Consider the proposition, P: "there is no smallest rational number greater than 0". In a proof by contradiction, we start by assuming the opposite, ¬P: that there is a smallest rational number, say, r. - -Now, r/2 is a rational number greater than 0 and smaller than r. But that contradicts the assumption that r was the smallest rational number (if "r is the smallest rational number" were Q, then one can infer from "r/2 is a rational number smaller than r" that ¬Q.) This contradiction shows that the original proposition, P, must be true. That is, that "there is no smallest rational number greater than 0". - -For other examples, see proof that the square root of 2 is not rational (where indirect proofs different from the one above can be found) and Cantor's diagonal argument. - -Proofs by contradiction sometimes end with the word "Contradiction!". Isaac Barrow and Baermann used the notation Q.E.A., for "quod est absurdum" ("which is absurd"), along the lines of Q.E.D., but this notation is rarely used today. A graphical symbol sometimes used for contradictions is a downwards zigzag arrow "lightning" symbol (U+21AF: ↯), for example in Davey and Priestley. Others sometimes used include a pair of opposing arrows (as $\rightarrow\!\leftarrow$ or $\Rightarrow\!\Leftarrow$), struck-out arrows ($\nleftrightarrow$), a stylized form of hash (such as U+2A33: ⨳), or the "reference mark" (U+203B: ※), or $\times\!\!\!\!\times$. - -A curious logical consequence of the principle of non-contradiction is that a contradiction implies any statement; if a contradiction is accepted as true, any proposition (including its negation) can be proved from it. This is known as the principle of explosion (ex falso quodlibet, "from a falsehood, anything [follows]", or ex contradictione quodlibet, "from a contradiction, anything follows"), or the principle of Pseudo-Scotus. -$$ -\forall Q: (P \land \lnot P) \rightarrow Q -$$ - -(for all Q, P and not-P implies Q) - -Thus a contradiction in a formal axiomatic system is disastrous; since any theorem can be proven true, it destroys the conventional meaning of truth and falsity.
- -The discovery of contradictions at the foundations of mathematics at the beginning of the 20th century, such as Russell's paradox, threatened the entire structure of mathematics due to the principle of explosion. This motivated a great deal of work during the 20th century to create consistent axiomatic systems to provide a logical underpinning for mathematics. It has also led a few philosophers, such as Newton da Costa, Walter Carnielli and Graham Priest, to reject the principle of non-contradiction, giving rise to theories such as paraconsistent logic and dialetheism, which accepts that there exist statements that are both true and false. - -G. H. Hardy described proof by contradiction as "one of a mathematician's finest weapons", saying "It is a far finer gambit than any chess gambit: a chess player may offer the sacrifice of a pawn or even a piece, but a mathematician offers the game." diff --git a/wiki/wikipedia/3514.txt b/wiki/wikipedia/3514.txt deleted file mode 100644 index 39ce3513220824a3783132cae5a1b052ab42b831..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3514.txt +++ /dev/null @@ -1,15 +0,0 @@ -The Vietoris–Begle mapping theorem is a result in the mathematical field of algebraic topology. It is named for Leopold Vietoris and Edward G. Begle. The statement of the theorem, below, is as formulated by Stephen Smale. - -Let $X$ and $Y$ be compact metric spaces, and let $f:X\to Y$ be surjective and continuous. Suppose that the fibers of $f$ are acyclic, so that -$$ -\tilde H_r(f^{-1}(y)) = 0, -$$ for all $0\leq r\leq n-1$ and all $y\in Y$, - -with $\tilde H_r$ denoting the $r$th reduced Vietoris homology group. Then, the induced homomorphism -$$ -f_*:\tilde H_r(X)\to\tilde H_r(Y) -$$ - -is an isomorphism for $r\leq n-1$ and a surjection for $r=n$. - -Note that as stated the theorem doesn't hold for homology theories like singular homology. For example, Vietoris homology groups of the closed topologist's sine curve and of a segment are isomorphic (since the first projects onto the second with acyclic fibers). But the singular homology differs, since the segment is path connected and the topologist's sine curve is not. diff --git a/wiki/wikipedia/3515.txt b/wiki/wikipedia/3515.txt deleted file mode 100644 index 566ae7453f7b9acbee1a65242ca74b2a37363dbf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3515.txt +++ /dev/null @@ -1,63 +0,0 @@ -Lean is a theorem prover and programming language. It is based on the calculus of constructions with inductive types. - -The Lean project is an open source project, hosted on GitHub. It was launched by Leonardo de Moura at Microsoft Research in 2013. - -Lean has an interface that differentiates it from other interactive theorem provers. Lean can be compiled to JavaScript and accessed in a web browser. It has native support for Unicode symbols. (These can be typed using LaTeX-like sequences, such as "\times" for "×".) Lean also has extensive support for meta-programming. - -Lean has attracted attention from the mathematicians Thomas Hales and Kevin Buzzard. Hales is using it for his Formal Abstracts project. Buzzard uses it for the Xena Project. One of the Xena Project's goals is to rewrite every theorem and proof in the undergraduate math curriculum of Imperial College London in Lean. - -Here is how the natural numbers are defined in Lean. - - - -inductive nat : Type - -| zero : nat - -| succ : nat → nat - - - -Here is the addition operation defined for natural numbers.
- - - -definition add : nat → nat → nat - -| n zero := n - -| n (succ m) := succ (add n m) - - - -This is a simple proof in Lean in term mode. - - - -theorem and_swap (p q : Prop) : p ∧ q → q ∧ p := - -assume h1 : p ∧ q, - -⟨h1.right, h1.left⟩ - - - -This same proof can be accomplished using tactics. - - - -theorem and_swap (p q : Prop) : p ∧ q → q ∧ p := - -begin - -assume h : (p ∧ q), -- assume p ∧ q is true - -cases h, -- extract the individual propositions from the conjunction - -split, -- split the goal conjunction into two cases: prove p and prove q separately - -repeat { assumption } - -end - - diff --git a/wiki/wikipedia/3516.txt b/wiki/wikipedia/3516.txt deleted file mode 100644 index 73573fba93b1f1ddcb57921103d203e08572e102..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3516.txt +++ /dev/null @@ -1,15 +0,0 @@ -The law of averages is the commonly held belief that a particular outcome or event will, over certain periods of time, occur at a frequency that is similar to its probability. Depending on context or application it can be considered a valid common-sense observation or a misunderstanding of probability. This notion can lead to the gambler's fallacy when one becomes convinced that a particular outcome must come soon simply because it has not occurred recently (e.g. believing that because three consecutive coin flips yielded heads, the next coin flip must be virtually guaranteed to be tails). - -As invoked in everyday life, the "law" usually reflects wishful thinking or a poor understanding of statistics rather than any mathematical principle. While there is a real theorem that a random variable will reflect its underlying probability over a very large sample, the law of averages typically assumes that an unnatural short-term "balance" must occur. Typical applications also generally assume no bias in the underlying probability distribution, which is frequently at odds with the empirical evidence. - -The gambler's fallacy is a particular misapplication of the law of averages in which the gambler believes that a particular outcome is more likely because it has not happened recently, or (conversely) that because a particular outcome has recently occurred, it will be less likely in the immediate future. - -As an example, consider a roulette wheel that has landed on red in three consecutive spins. An onlooker might apply the law of averages to conclude that on its next spin it is guaranteed (or at least is much more likely) to land on black. Of course, the wheel has no memory and its probabilities do not change according to past results. So even if the wheel has landed on red in ten or a hundred consecutive spins, the probability that the next spin will be black is still no more than 48.6% (assuming a fair European wheel with only one green zero; it would be exactly 50% if there were no green zero and the wheel were fair, and 47.4% for a fair American wheel with one green "0" and one green "00"). Similarly, there is no statistical basis for the belief that lottery numbers which haven't appeared recently are due to appear soon. (There is some value in choosing lottery numbers that are, in general, less popular than others — not because they are any more or less likely to come up, but because the largest prizes are usually shared among all of the people who chose the winning numbers. The unpopular numbers are just as likely to come up as the popular numbers are, and in the event of a big win, one would likely have to share it with fewer other people. See parimutuel betting.)
- -On the other hand, in some locales, modern slot machines are rigged so they do give wins a certain proportion of the time — the results are not truly random. This is carefully managed so as to encourage people to keep playing, while the casino takes its designated amount of profit. - -Another application of the law of averages is a belief that a sample's behaviour must line up with the expected value based on population statistics. For example, suppose a fair coin is flipped 100 times. Using the law of averages, one might predict that there will be 50 heads and 50 tails. While this is the single most likely outcome, there is only an 8% chance of it occurring, as given by $P(X=50\mid n=100, p=0.5)$ under the binomial distribution. Predictions based on the law of averages are even less useful if the sample does not reflect the population. - -In this example, one tries to increase the probability of a rare event occurring at least once by carrying out more trials. For example, a job seeker might argue, "If I send my résumé to enough places, the law of averages says that someone will eventually hire me." Assuming a non-zero probability, it is true that conducting more trials increases the overall likelihood of the desired outcome. However, there is no particular number of trials that guarantees that outcome; rather, the probability that it will already have occurred approaches but never quite reaches 100%. - -The Steve Goodman song "A Dying Cub Fan's Last Request" mentions the Law of Averages in reference to the Chicago Cubs' lack of championship success. At the time Goodman recorded the song in 1981, the Cubs had not won a National League championship since the year the United States dropped the atomic bomb on Japan (1945), and had not won a World Series since 1908. This futility would continue until the Cubs finally won both in 2016. diff --git a/wiki/wikipedia/3517.txt b/wiki/wikipedia/3517.txt deleted file mode 100644 index a354b39b1615adacb1a5ad12c941c12b11cce883..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3517.txt +++ /dev/null @@ -1,21 +0,0 @@ -In post-quantum cryptography, ring learning with errors (RLWE) is a computational problem which serves as the foundation of new cryptographic algorithms, such as NewHope, designed to protect against cryptanalysis by quantum computers and also to provide the basis for homomorphic encryption. Public-key cryptography relies on construction of mathematical problems that are believed to be hard to solve if no further information is available, but are easy to solve if some information used in the problem construction is known. Some problems of this sort that are currently used in cryptography are at risk of attack if sufficiently large quantum computers can ever be built, so resistant problems are sought. Homomorphic encryption is a form of encryption that allows computation on ciphertext, such as arithmetic on numeric values stored in an encrypted database. - -RLWE is more properly called learning with errors over rings and is simply the larger learning with errors (LWE) problem specialized to polynomial rings over finite fields.
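To make the ring setting concrete, here is a toy Python sketch of how a single RLWE sample is formed; the parameters, the helper name, and the error distribution are illustrative assumptions only, and the sizes are far too small for real security.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 8, 97   # toy ring Z_q[x]/(x^n + 1); deployed schemes use n ~ 1024

def ring_mul(a, b):
    """Multiply two polynomials in Z_q[x]/(x^n + 1): full convolution,
    then fold the upper coefficients back with a sign flip (x^n = -1)."""
    c = np.convolve(a, b)
    r = c[:n].copy()
    r[:len(c) - n] -= c[n:]
    return r % q

# an RLWE sample (a, b): uniform public a, small secret s and error e
a = rng.integers(0, q, n)
s = rng.integers(-1, 2, n)        # coefficients in {-1, 0, 1}
e = rng.integers(-1, 2, n)
b = (ring_mul(a, s) + e) % q      # b = a*s + e in the ring
print(a, b)
```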
An important feature of basing cryptography on the ring learning with errors problem is the fact that the solution to the RLWE problem can be used to solve the NP-hard shortest vector problem (SVP) in a lattice (a polynomial-time reduction from the SVP problem to the RLWE problem has been presented). The relevant problem is commonly known as the approximate shortest vector problem (α-SVP): the problem of finding a vector shorter than α times the shortest vector. The authors of the proof for this equivalence write: - -"... we give a quantum reduction from approximate SVP (in the worst case) on ideal lattices in $\mathbf{R}$ to the search version of ring-LWE, where the goal is to recover the secret $s \in \mathbf{R}_q$ (with high probability, for any $s$) from arbitrarily many noisy products." However, there is not yet a proof to show that the difficulty of the α-SVP for ideal lattices is equivalent to the average-case α-SVP. Rather, we have a proof that if there are any α-SVP instances that are hard to solve in ideal lattices, then the RLWE problem will be hard in random instances. - -Regarding the difficulty of shortest vector problems in ideal lattices, researcher Michael Schneider writes, "So far there is no SVP algorithm making use of the special structure of ideal lattices. It is widely believed that solving SVP (and all other lattice problems) in ideal lattices is as hard as in regular lattices." The difficulty of these problems on regular lattices is provably NP-hard. There are, however, a minority of researchers who do not believe that ideal lattices share the same security properties as regular lattices. - -Peikert believes that these security equivalences make the RLWE problem a good basis for future cryptography. He writes: "There is a mathematical proof that the only way to break the cryptosystem (within some formal attack model) on its random instances is by being able to solve the underlying lattice problem in the worst case" (emphasis in the original). - -A major advantage that RLWE based cryptography has over the original learning with errors (LWE) based cryptography is found in the size of the public and private keys. RLWE keys are roughly the square root of the size of LWE keys. For 128 bits of security an RLWE cryptographic algorithm would use public keys around 7000 bits in length. The corresponding LWE scheme would require public keys of 49 million bits for the same level of security. On the other hand, RLWE keys are larger than the key sizes for currently used public key algorithms like RSA and Elliptic Curve Diffie-Hellman, which require public key sizes of 3072 bits and 256 bits, respectively, to achieve a 128-bit level of security. From a computational standpoint, however, RLWE algorithms have been shown to be the equal of or better than existing public key systems. - -Three groups of RLWE cryptographic algorithms exist: - -The fundamental idea of using LWE and Ring LWE for key exchange was proposed and filed at the University of Cincinnati in 2011 by Jintai Ding. The basic idea comes from the associativity of matrix multiplications, and the errors are used to provide the security. The paper appeared in 2012 after a provisional patent application was filed in 2012. - -In 2014, Peikert presented a key transport scheme following the same basic idea as Ding's, which also utilizes Ding's idea of sending an additional one-bit signal for rounding. An RLWE version of the classic MQV variant of a Diffie-Hellman key exchange was later published by Zhang et al.
The security of both key exchanges is directly related to the problem of finding approximate short vectors in an ideal lattice. - -An RLWE version of the classic Feige–Fiat–Shamir identification protocol was created and converted to a digital signature in 2011 by Lyubashevsky. The details of this signature were extended in 2012 by Güneysu, Lyubashevsky, and Pöppelmann and published in their paper "Practical Lattice Based Cryptography – A Signature Scheme for Embedded Systems." These papers laid the groundwork for a variety of recent signature algorithms, some based directly on the ring learning with errors problem and some which are not tied to the same hard RLWE problems. - -The purpose of homomorphic encryption is to allow the computations on sensitive data to occur on computing devices that should not be trusted with the data. These computing devices are allowed to process the ciphertext which is output from a homomorphic encryption. In 2011, Brakerski and Vaikuntanathan published "Fully Homomorphic Encryption from Ring-LWE and Security for Key Dependent Messages", which builds a homomorphic encryption scheme directly on the RLWE problem. diff --git a/wiki/wikipedia/3518.txt b/wiki/wikipedia/3518.txt deleted file mode 100644 index 08bbf4800e7c0354cb65d43cda937e06e3407df0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3518.txt +++ /dev/null @@ -1,135 +0,0 @@ -In graph theory, a planar graph is a graph that can be embedded in the plane, i.e., it can be drawn on the plane in such a way that its edges intersect only at their endpoints. In other words, it can be drawn in such a way that no edges cross each other. Such a drawing is called a plane graph or planar embedding of the graph. A plane graph can be defined as a planar graph with a mapping from every node to a point on a plane, and from every edge to a plane curve on that plane, such that the extreme points of each curve are the points mapped from its end nodes, and all curves are disjoint except on their extreme points. - -Every graph that can be drawn on a plane can be drawn on the sphere as well, and vice versa, by means of stereographic projection. - -Plane graphs can be encoded by combinatorial maps or rotation systems. - -An equivalence class of topologically equivalent drawings on the sphere, usually with additional assumptions such as the absence of isthmuses, is called a planar map. Although a plane graph has an external or unbounded face, none of the faces of a planar map has a particular status. - -Planar graphs generalize to graphs drawable on a surface of a given genus. In this terminology, planar graphs have genus 0, since the plane (and the sphere) are surfaces of genus 0. See "graph embedding" for other related topics. - -The Polish mathematician Kazimierz Kuratowski provided a characterization of planar graphs in terms of forbidden graphs, now known as Kuratowski's theorem: - -A finite graph is planar if and only if it does not contain a subgraph that is a subdivision of the complete graph K5 or the complete bipartite graph $K_{3,3}$ (utility graph). - -A subdivision of a graph results from inserting vertices into edges (for example, changing an edge •--• to •-•-•) zero or more times. - -Instead of considering subdivisions, Wagner's theorem deals with minors: - -A finite graph is planar if and only if it does not have $K_{5}$ or $K_{3,3}$ as a minor.
- -A minor of a graph results from taking a subgraph and repeatedly contracting an edge into a vertex, with each neighbor of the original end-vertices becoming a neighbor of the new vertex. - -Klaus Wagner asked more generally whether any minor-closed class of graphs is determined by a finite set of "forbidden minors". This is now the Robertson–Seymour theorem, proved in a long series of papers. In the language of this theorem, $K_{5}$ and $K_{3,3}$ are the forbidden minors for the class of finite planar graphs. - -In practice, it is difficult to use Kuratowski's criterion to quickly decide whether a given graph is planar. However, there exist fast algorithms for this problem: for a graph with n vertices, it is possible to determine in time O(n) (linear time) whether the graph may be planar or not (see planarity testing). - -For a simple, connected, planar graph with v vertices and e edges and f faces, the following simple conditions hold for v ≥ 3: - -* Theorem 1. e ≤ 3v - 6; - -* Theorem 2. If there are no cycles of length 3, then e ≤ 2v - 4. - -* Theorem 3. f ≤ 2v - 4. - -In this sense, planar graphs are sparse graphs, in that they have only O(v) edges, asymptotically smaller than the maximum O(v2). The graph K3,3, for example, has 6 vertices, 9 edges, and no cycles of length 3. Therefore, by Theorem 2, it cannot be planar. These theorems provide necessary conditions for planarity that are not sufficient conditions, and therefore can only be used to prove a graph is not planar, not that it is planar. If both theorem 1 and 2 fail, other methods may be used. - -* Whitney's planarity criterion gives a characterization based on the existence of an algebraic dual; - -* Mac Lane's planarity criterion gives an algebraic characterization of finite planar graphs, via their cycle spaces; - -* The Fraysseix–Rosenstiehl planarity criterion gives a characterization based on the existence of a bipartition of the cotree edges of a depth-first search tree. It is central to the left-right planarity testing algorithm; - -* Schnyder's theorem gives a characterization of planarity in terms of partial order dimension; - -* Colin de Verdière's planarity criterion gives a characterization based on the maximum multiplicity of the second eigenvalue of certain Schrödinger operators defined by the graph. - -* The Hanani–Tutte theorem states that a graph is planar if and only if it has a drawing in which each independent pair of edges crosses an even number of times; it can be used to characterize the planar graphs via a system of equations modulo 2. - -Euler's formula states that if a finite, connected, planar graph is drawn in the plane without any edge intersections, and v is the number of vertices, e is the number of edges and f is the number of faces (regions bounded by edges, including the outer, infinitely large region), then -$$ -v-e+f=2. -$$ - -As an illustration, in the butterfly graph given above, v = 5, e = 6 and f = 3. - -In general, if the property holds for all planar graphs of f faces, any change to the graph that creates an additional face while keeping the graph planar would keep v - e + f an invariant. Since the property holds for all graphs with f = 2, by mathematical induction it holds for all cases. Euler's formula can also be proved as follows: if the graph isn't a tree, then remove an edge which completes a cycle. This lowers both e and f by one, leaving v - e + f constant. Repeat until the remaining graph is a tree; trees have v = e + 1 and f = 1, yielding v - e + f = 2, i. 
e., the Euler characteristic is 2. - -In a finite, connected, simple, planar graph, any face (except possibly the outer one) is bounded by at least three edges and every edge touches at most two faces; using Euler's formula, one can then show that these graphs are sparse in the sense that if v ≥ 3: -$$ -e\leq 3v-6. -$$ - -Euler's formula is also valid for convex polyhedra. This is no coincidence: every convex polyhedron can be turned into a connected, simple, planar graph by using the Schlegel diagram of the polyhedron, a perspective projection of the polyhedron onto a plane with the center of perspective chosen near the center of one of the polyhedron's faces. Not every planar graph corresponds to a convex polyhedron in this way: the trees do not, for example. Steinitz's theorem says that the polyhedral graphs formed from convex polyhedra are precisely the finite 3-connected simple planar graphs. More generally, Euler's formula applies to any polyhedron whose faces are simple polygons that form a surface topologically equivalent to a sphere, regardless of its convexity. - -Connected planar graphs with more than one edge obey the inequality $2e \ge 3f$, because each face has at least three face-edge incidences and each edge contributes exactly two incidences. It follows via algebraic transformations of this inequality with Euler's formula $v - e + f = 2$ that for finite planar graphs the average degree is strictly less than 6. Graphs with higher average degree cannot be planar. - -We say that two circles drawn in a plane kiss (or osculate) whenever they intersect in exactly one point. A "coin graph" is a graph formed by a set of circles, no two of which have overlapping interiors, by making a vertex for each circle and an edge for each pair of circles that kiss. The circle packing theorem, first proved by Paul Koebe in 1936, states that a graph is planar if and only if it is a coin graph. - -This result provides an easy proof of Fáry's theorem, that every simple planar graph can be embedded in the plane in such a way that its edges are straight line segments that do not cross each other. If one places each vertex of the graph at the center of the corresponding circle in a coin graph representation, then the line segments between centers of kissing circles do not cross any of the other edges. - -The density $D$ of a planar graph, or network, is defined as a ratio of the number of edges $E$ to the number of possible edges in a network with $N$ nodes, given by a planar graph $(E_{\max}=3N-6)$, giving $D = \frac{E-N+1}{2N-5}$. A completely sparse planar graph has $D = 0$; alternatively, a completely dense planar graph has $D = 1$. - -A simple graph is called maximal planar if it is planar but adding any edge (on the given vertex set) would destroy that property. All faces (including the outer one) are then bounded by three edges, explaining the alternative term plane triangulation. The alternative names "triangular graph" or "triangulated graph" have also been used, but are ambiguous, as they more commonly refer to the line graph of a complete graph and to the chordal graphs respectively. Every maximal planar graph is at least 3-connected. - -If a maximal planar graph has v vertices with v > 2, then it has precisely 3v - 6 edges and 2v - 4 faces. - -Apollonian networks are the maximal planar graphs formed by repeatedly splitting triangular faces into triples of smaller triangles. Equivalently, they are the planar 3-trees.
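These planarity criteria are easy to try out in code. The sketch below uses the networkx library's check_planarity function (which implements a linear-time planarity test); the specific example graphs are our own choices for illustration.

```python
import networkx as nx

# Kuratowski's two obstructions are rejected by the planarity test
print(nx.check_planarity(nx.complete_graph(5))[0])               # False: K5
print(nx.check_planarity(nx.complete_bipartite_graph(3, 3))[0])  # False: K3,3

# a maximal planar graph on v >= 3 vertices attains e = 3v - 6
G = nx.octahedral_graph()   # planar, every face a triangle, v = 6
v, e = G.number_of_nodes(), G.number_of_edges()
print(nx.check_planarity(G)[0], e == 3 * v - 6)                  # True True
```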
- -Strangulated graphs are the graphs in which every peripheral cycle is a triangle. In a maximal planar graph (or more generally a polyhedral graph) the peripheral cycles are the faces, so maximal planar graphs are strangulated. The strangulated graphs also include the chordal graphs, and are exactly the graphs that can be formed by clique-sums (without deleting edges) of complete graphs and maximal planar graphs. - -Outerplanar graphs are graphs with an embedding in the plane such that all vertices belong to the unbounded face of the embedding. Every outerplanar graph is planar, but the converse is not true: K4 is planar but not outerplanar. A theorem similar to Kuratowski's states that a finite graph is outerplanar if and only if it does not contain a subdivision of K4 or of K2,3. The above is a direct corollary of the fact that a graph G is outerplanar if and only if the graph formed from G by adding a new vertex, with edges connecting it to all the other vertices, is a planar graph. - -A 1-outerplanar embedding of a graph is the same as an outerplanar embedding. For k > 1 a planar embedding is k-outerplanar if removing the vertices on the outer face results in a (k - 1)-outerplanar embedding. A graph is k-outerplanar if it has a k-outerplanar embedding. - -A Halin graph is a graph formed from an undirected plane tree (with no degree-two nodes) by connecting its leaves into a cycle, in the order given by the plane embedding of the tree. Equivalently, it is a polyhedral graph in which one face is adjacent to all the others. Every Halin graph is planar. Like outerplanar graphs, Halin graphs have low treewidth, making many algorithmic problems on them more easily solved than in unrestricted planar graphs. - -An apex graph is a graph that may be made planar by the removal of one vertex, and a k-apex graph is a graph that may be made planar by the removal of at most k vertices. - -A 1-planar graph is a graph that may be drawn in the plane with at most one simple crossing per edge, and a k-planar graph is a graph that may be drawn with at most k simple crossings per edge. - -A map graph is a graph formed from a set of finitely many simply-connected interior-disjoint regions in the plane by connecting two regions when they share at least one boundary point. When at most three regions meet at a point, the result is a planar graph, but when four or more regions meet at a point, the result can be nonplanar. - -A toroidal graph is a graph that can be embedded without crossings on the torus. More generally, the genus of a graph is the minimum genus of a two-dimensional surface into which the graph may be embedded; planar graphs have genus zero and nonplanar toroidal graphs have genus one. - -Any graph may be embedded into three-dimensional space without crossings. However, a three-dimensional analogue of the planar graphs is provided by the linklessly embeddable graphs, graphs that can be embedded into three-dimensional space in such a way that no two cycles are topologically linked with each other. In analogy to Kuratowski's and Wagner's characterizations of the planar graphs as being the graphs that do not contain K5 or K3,3 as a minor, the linklessly embeddable graphs may be characterized as the graphs that do not contain as a minor any of the seven graphs in the Petersen family.
In analogy to the characterizations of the outerplanar and planar graphs as being the graphs with Colin de Verdière graph invariant at most two or three, the linklessly embeddable graphs are the graphs that have Colin de Verdière invariant at most four. - -An upward planar graph is a directed acyclic graph that can be drawn in the plane with its edges as non-crossing curves that are consistently oriented in an upward direction. Not every planar directed acyclic graph is upward planar, and it is NP-complete to test whether a given graph is upward planar. - -The asymptotic for the number of (labeled) planar graphs on $n$ vertices is $g\cdot n^{-7/2}\cdot \gamma^n\cdot n!$, where $\gamma\approx 27.22687$ and $g\approx 0.43\times 10^{-5}$. - -Almost all planar graphs have an exponential number of automorphisms. - -The number of unlabeled (non-isomorphic) planar graphs on $n$ vertices is between $27.2^n$ and $30.06^n$. - -The four color theorem states that every planar graph is 4-colorable (i.e., 4-partite). - -Fáry's theorem states that every simple planar graph admits an embedding in the plane such that all edges are straight line segments which don't intersect. A universal point set is a set of points such that every planar graph with n vertices has such an embedding with all vertices in the point set; there exist universal point sets of quadratic size, formed by taking a rectangular subset of the integer lattice. Every simple outerplanar graph admits an embedding in the plane such that all vertices lie on a fixed circle and all edges are straight line segments that lie inside the disk and don't intersect, so n-vertex regular polygons are universal for outerplanar graphs. - -Given an embedding G of a (not necessarily simple) connected graph in the plane without edge intersections, we construct the dual graph G* as follows: we choose one vertex in each face of G (including the outer face) and for each edge e in G we introduce a new edge in G* connecting the two vertices in G* corresponding to the two faces in G that meet at e. Furthermore, this edge is drawn so that it crosses e exactly once and that no other edge of G or G* is intersected. Then G* is again the embedding of a (not necessarily simple) planar graph; it has as many edges as G, as many vertices as G has faces and as many faces as G has vertices. The term "dual" is justified by the fact that G** = G; here the equality is the equivalence of embeddings on the sphere. If G is the planar graph corresponding to a convex polyhedron, then G* is the planar graph corresponding to the dual polyhedron. - -Duals are useful because many properties of the dual graph are related in simple ways to properties of the original graph, enabling results to be proven about graphs by examining their dual graphs. - -While the dual constructed for a particular embedding is unique (up to isomorphism), graphs may have different (i.e. non-isomorphic) duals, obtained from different (i.e. non-homeomorphic) embeddings. - -A Euclidean graph is a graph in which the vertices represent points in the plane, and the edges are assigned lengths equal to the Euclidean distance between those points; see Geometric graph theory. - -A plane graph is said to be convex if all of its faces (including the outer face) are convex polygons. A planar graph may be drawn convexly if and only if it is a subdivision of a 3-vertex-connected planar graph. 
- -Scheinerman's conjecture (now a theorem) states that every planar graph can be represented as an intersection graph of line segments in the plane. - -The planar separator theorem states that every n-vertex planar graph can be partitioned into two subgraphs of size at most 2n/3 by the removal of O(n) vertices. As a consequence, planar graphs also have treewidth and branch-width O(n). - -The planar product structure theorem states that every planar graph is a subgraph of the strong graph product of a graph of treewidth at most 8 and a path. - -This result has been used to show that planar graphs have bounded queue number, bounded non-repetitive chromatic number, and universal graphs of near-linear size. It also has applications to vertex ranking - -and p-centered colouring - -of planar graphs. - -For two planar graphs with v vertices, it is possible to determine in time O(v) whether they are isomorphic or not (see also graph isomorphism problem). - -The meshedness coefficient of a planar graph normalizes its number of bounded faces (the same as the circuit rank of the graph, by Mac Lane's planarity criterion) by dividing it by 2n - 5, the maximum possible number of bounded faces in a planar graph with n vertices. Thus, it ranges from 0 for trees to 1 for maximal planar graphs. - -Word-representable planar graphs include triangle-free planar graphs and, more generally, 3-colourable planar graphs, as well as certain face subdivisions of triangular grid graphs, and certain triangulations of grid-covered cylinder graphs. diff --git a/wiki/wikipedia/3519.txt b/wiki/wikipedia/3519.txt deleted file mode 100644 index e1bbb202403a125c29863816c4a8e69c6d527b38..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3519.txt +++ /dev/null @@ -1,547 +0,0 @@ -In mathematics, Newton's identities, also known as the Girard–Newton formulae, give relations between two types of symmetric polynomials, namely between power sums and elementary symmetric polynomials. Evaluated at the roots of a monic polynomial P in one variable, they allow expressing the sums of the k-th powers of all roots of P (counted with their multiplicity) in terms of the coefficients of P, without actually finding those roots. These identities were found by Isaac Newton around 1666, apparently in ignorance of earlier work (1629) by Albert Girard. They have applications in many areas of mathematics, including Galois theory, invariant theory, group theory, combinatorics, as well as further applications outside mathematics, including general relativity. - -Let x1, ..., xn be variables, denote for k ≥ 1 by pk(x1, ..., xn) the k-th power sum: -$$ -p_k(x_1,\ldots,x_n)=\sum\nolimits_{i=1}^nx_i^k = x_1^k+\cdots+x_n^k, -$$ - -and for k ≥ 0 denote by ek(x1, ..., xn) the elementary symmetric polynomial (that is, the sum of all distinct products of k distinct variables), so - -\begin{align} - -e_0(x_1, \ldots, x_n) &= 1,\\ - -e_1(x_1, \ldots, x_n) &= x_1 + x_2 + \cdots + x_n,\\ - -e_2(x_1,\ldots,x_n) &= \sum_{1 \leq in.\\ - -\end{align} - -Then Newton's identities can be stated as -$$ - ke_k(x_1,\ldots,x_n) = \sum_{i=1}^k(-1)^{i-1} e_{k - i} (x_1, \ldots, x_n) p_i(x_1, \ldots, x_n), -$$ - -valid for all n ≥ 1 and n ≥k ≥ 1. - -Also, one has -$$ - 0 = \sum_{i=k-n}^k(-1)^{i-1} e_{k - i} (x_1, \ldots, x_n) p_i(x_1, \ldots, x_n), -$$ - -for all k > n ≥ 1. 
- -Concretely, one gets for the first few values of k: - -\begin{align} - -e_1(x_1, \ldots, x_n) &= p_1(x_1, \ldots, x_n),\\ - -2e_2(x_1, \ldots, x_n) &= e_1(x_1, \ldots, x_n)p_1(x_1, \ldots, x_n) - p_2(x_1, \ldots, x_n),\\ - -3e_3(x_1, \ldots, x_n) &= e_2(x_1, \ldots, x_n)p_1(x_1, \ldots, x_n) - e_1(x_1, \ldots, x_n)p_2(x_1, \ldots, x_n) + p_3(x_1, \ldots, x_n).\\ - -\end{align} - -The form and validity of these equations do not depend on the number n of variables (although the point where the left-hand side becomes 0 does, namely after the n-th identity), which makes it possible to state them as identities in the ring of symmetric functions. In that ring one has - -\begin{align} - -e_1 &= p_1,\\ - -2e_2 &= e_1p_1-p_2 = p_1^2-p_2,\\ - -3e_3 &= e_2p_1 - e_1p_2 + p_3 = \tfrac12 p_1^3-\tfrac32p_1p_2+p_3,\\ - -4e_4 &= e_3p_1 - e_2p_2 + e_1p_3 - p_4 = \tfrac16p_1^4 - p_1^2p_2 + \tfrac43p_1p_3+\tfrac12p_2^2-p_4,\\ - -\end{align} - -and so on; here the left-hand sides never become zero. - -These equations allow one to recursively express the ei in terms of the pk; to be able to do the inverse, one may rewrite them as - -\begin{align} - -p_1 &= e_1,\\ - -p_2 &= e_1p_1-2e_2 = e_1^2 - 2e_2,\\ - -p_3 &= e_1p_2 - e_2p_1 + 3e_3 = e_1^3-3e_1e_2+3e_3,\\ - -p_4 &= e_1p_3 - e_2p_2 + e_3p_1 - 4e_4 = e_1^4-4e_1^2e_2+4e_1e_3+2e_2^2-4e_4, \\ - -& {}\ \ \vdots - -\end{align} - -In general, we have -$$ - p_k(x_1,\ldots,x_n) = (-1)^{k-1}ke_k(x_1,\ldots,x_n)+\sum_{i=1}^{k-1}(-1)^{k-1+i} e_{k - i} (x_1, \ldots, x_n) p_i(x_1, \ldots, x_n), -$$ - -valid for all n ≥ 1 and n ≥ k ≥ 1. - -Also, one has -$$ - p_k(x_1,\ldots,x_n) = \sum_{i=k-n}^{k-1}(-1)^{k-1+i} e_{k - i} (x_1, \ldots, x_n) p_i(x_1, \ldots, x_n), -$$ - -for all k > n ≥ 1. - -The polynomial with roots xi may be expanded as -$$ - \prod_{i=1}^n (x - x_i) = \sum_{k=0}^n (-1)^k e_k x^{n-k}, -$$ - -where the coefficients $e_k(x_1,\ldots,x_n)$ are the symmetric polynomials defined above. - -Given the power sums of the roots -$$ - p_k(x_1,\ldots,x_n)= \sum_{i=1}^n x_i^k, -$$ - -the coefficients of the polynomial with roots $x_1,\ldots,x_n$ may be expressed recursively in terms of the power sums as - -\begin{align} - -e_0 &= 1,\\[4pt] - --e_1 &=-p_1,\\[4pt] - -e_2 &= \frac{1}{2}(e_1 p_1 - p_2),\\[4pt] - --e_3 &=-\frac{1}{3}(e_2 p_1 - e_1 p_2 + p_3),\\[4pt] - -e_4 &= \frac{1}{4}(e_3 p_1 - e_2 p_2 + e_1 p_3 - p_4), \\ - -& {} \ \ \vdots - -\end{align} - -Formulating polynomials in this way is useful in applying the method of Delves and Lyness to find the zeros of an analytic function. - -When the polynomial above is the characteristic polynomial of a matrix A (in particular when A is the companion matrix of the polynomial), the roots $x_i$ are the eigenvalues of the matrix, counted with their algebraic multiplicity. For any positive integer k, the matrix $A^k$ has as eigenvalues the powers $x_i^k$, and each eigenvalue $x_i$ of A contributes its multiplicity to that of the eigenvalue $x_i^k$ of $A^k$. Then the coefficients of the characteristic polynomial of $A^k$ are given by the elementary symmetric polynomials in those powers $x_i^k$. In particular, the sum of the $x_i^k$, which is the k-th power sum pk of the roots of the characteristic polynomial of A, is given by its trace: -$$ -p_k = \operatorname{tr} ( A^k ). -$$ - -The Newton identities now relate the traces of the powers $A^k$ to the coefficients of the characteristic polynomial of A.
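The identity p_k = tr(A^k) lends itself to a quick numerical check; the following sketch (an illustration assuming numpy; the matrix A is an arbitrary example) compares the traces of matrix powers with the power sums of the eigenvalues.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam = np.linalg.eigvals(A)               # roots of the characteristic polynomial

for k in range(1, 4):
    p_k = np.trace(np.linalg.matrix_power(A, k))   # p_k = tr(A^k)
    assert np.isclose(p_k, np.sum(lam**k).real)
print("tr(A^k) equals the k-th power sum of the eigenvalues for k = 1, 2, 3")
```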
Using them in reverse to express the elementary symmetric polynomials in terms of the power sums, they can be used to find the characteristic polynomial by computing only the powers $A^k$ and their traces. - -This computation requires computing the traces of matrix powers $A^k$ and solving a triangular system of equations. Both can be done in complexity class NC (solving a triangular system can be done by divide-and-conquer). Therefore, the characteristic polynomial of a matrix can be computed in NC. By the Cayley–Hamilton theorem, every matrix satisfies its characteristic polynomial, and a simple transformation allows one to find the adjugate matrix in NC. - -Rearranging the computations into an efficient form leads to the Faddeev–LeVerrier algorithm (1840); a fast parallel implementation of it is due to L. Csanky (1976). Its disadvantage is that it requires division by integers, so in general the field should have characteristic 0. - -For a given n, the elementary symmetric polynomials ek(x1, ..., xn) for k = 1, ..., n form an algebraic basis for the space of symmetric polynomials in x1, ..., xn: every polynomial expression in the xi that is invariant under all permutations of those variables is given by a polynomial expression in those elementary symmetric polynomials, and this expression is unique up to equivalence of polynomial expressions. This is a general fact known as the fundamental theorem of symmetric polynomials, and Newton's identities provide explicit formulae in the case of power sum symmetric polynomials. Applied to the monic polynomial $t^n+\sum_{k=1}^n (-1)^k a_k t^{n-k}$ with all coefficients ak considered as free parameters, this means that every symmetric polynomial expression S(x1,...,xn) in its roots can be expressed instead as a polynomial expression P(a1,...,an) in terms of its coefficients only, in other words without requiring knowledge of the roots. This fact also follows from general considerations in Galois theory (one views the ak as elements of a base field with roots in an extension field whose Galois group permutes them according to the full symmetric group, and the field fixed under all elements of the Galois group is the base field). - -The Newton identities also permit expressing the elementary symmetric polynomials in terms of the power sum symmetric polynomials, showing that any symmetric polynomial can also be expressed in the power sums. In fact the first n power sums also form an algebraic basis for the space of symmetric polynomials. - -There are a number of (families of) identities that, while they should be distinguished from Newton's identities, are very closely related to them. - -Denoting by hk the complete homogeneous symmetric polynomial that is the sum of all monomials of degree k, the power sum polynomials also satisfy identities similar to Newton's identities, but not involving any minus signs. Expressed as identities in the ring of symmetric functions, they read -$$ -kh_k = \sum_{i=1}^kh_{k-i}p_i, -$$ - -valid for all n ≥ k ≥ 1. Contrary to Newton's identities, the left-hand sides do not become zero for large k, and the right-hand sides contain ever more non-zero terms.
For the first few values of k, one has - -\begin{align} - -h_1 &= p_1,\\ - -2h_2 &= h_1p_1 + p_2,\\ - -3h_3 &= h_2p_1 + h_1p_2 + p_3.\\ - -\end{align} - -These relations can be justified by an argument analogous to the comparison of coefficients in power series given above, based in this case on the generating function identity -$$ -\sum_{k=0}^\infty h_k(x_1, \ldots, x_n)t^k = \prod_{i=1}^n\frac1{1 - x_it}. -$$ - -Proofs of Newton's identities, like those given below, cannot be easily adapted to prove these variants of those identities. - -As mentioned, Newton's identities can be used to recursively express elementary symmetric polynomials in terms of power sums. Doing so requires the introduction of integer denominators, so it can be done in the ring ΛQ of symmetric functions with rational coefficients: - -\begin{align} - -e_1 &= p_1,\\ - -e_2 &= \textstyle\frac12p_1^2 - \frac12p_2 &&= \textstyle\frac12 ( p_1^2 - p_2 ),\\ - -e_3 &= \textstyle\frac16p_1^3 - \frac12p_1 p_2 + \frac13p_3 &&= \textstyle\frac{1}{6} ( p_1^3 - 3 p_1 p_2 + 2 p_3 ),\\ - -e_4 &= \textstyle\frac1{24}p_1^4 - \frac14p_1^2 p_2 + \frac18p_2^2 + \frac13p_1 p_3 - \frac14p_4 - -&&= \textstyle\frac1{24} ( p_1^4 - 6 p_1^2 p_2 + 3 p_2^2 + 8 p_1 p_3 - 6 p_4 ),\\ - -&~~\vdots\\ - -e_n &= (-1)^n \sum_{m_1 + 2m_2 + \cdots + nm_n = n \atop m_1 \ge 0, \ldots, m_n \ge 0} \prod_{i=1}^n \frac{(-p_i)^{m_i}}{m_i ! i^{m_i}} \\ - -\end{align} - -and so forth. The general formula can be conveniently expressed as -$$ -e_k = \frac{(-1)^k}{k!} B_{k}(- p_1, -1! p_2, - 2! p_3, \ldots, - (k-1)! p_k ), -$$ - -where $B_k$ is the complete exponential Bell polynomial. This expression also leads to the following identity for generating functions: -$$ - \sum_{k=0}^\infty e_k t^k = \exp\left(\sum_{k=1}^\infty \frac{(-1)^{k+1}}{k} p_k t^k \right). -$$ - -Applied to a monic polynomial, these formulae express the coefficients in terms of the power sums of the roots: replace each ei by ai and each pk by sk. - -The analogous relations involving complete homogeneous symmetric polynomials can be similarly developed, giving equations - -\begin{align} - -h_1 &= p_1,\\ - -h_2 &= \textstyle\frac12p_1^2 + \frac12p_2 &&= \textstyle\frac12 ( p_1^2 + p_2 ),\\ - -h_3 &= \textstyle\frac16p_1^3 + \frac12p_1 p_2 + \frac13p_3 &&= \textstyle\frac{1}{6} ( p_1^3 + 3 p_1 p_2 + 2 p_3 ),\\ - -h_4 &= \textstyle\frac1{24}p_1^4 + \frac14p_1^2 p_2 + \frac18p_2^2 + \frac13p_1 p_3 + \frac14p_4 - -&&= \textstyle\frac1{24} ( p_1^4 + 6 p_1^2 p_2 + 3 p_2^2 + 8 p_1 p_3 + 6 p_4 ),\\ - -&~~\vdots\\ - -h_k &= \sum_{m_1 + 2m_2 + \cdots + km_k = k \atop m_1\ge 0, \ldots, m_k\ge 0} \prod_{i=1}^k \frac{p_i^{m_i}}{m_i ! i^{m_i}} - -\end{align} - -and so forth, in which there are only plus signs. In terms of the complete Bell polynomial, -$$ -h_k = \frac{1}{k!} B_{k}(p_1, 1! p_2, 2! p_3, \ldots, (k-1)! p_k ). -$$ - -These expressions correspond exactly to the cycle index polynomials of the symmetric groups, if one interprets the power sums pi as indeterminates: the coefficient in the expression for hk of any monomial $p_1^{m_1}p_2^{m_2}\cdots p_l^{m_l}$ is equal to the fraction of all permutations of k that have m1 fixed points, m2 cycles of length 2, ..., and ml cycles of length l. Explicitly, this coefficient can be written as $1/N$ where $N=\prod_{i=1}^l(m_i!i^{m_i})$; this N is the number of permutations commuting with any given permutation π of the given cycle type.
The expressions for the elementary symmetric functions have coefficients with the same absolute value, but a sign equal to the sign of π, namely $(-1)^{m_2+m_4+\cdots}$. - -It can be proved by considering the following inductive step: - -\begin{align} - -mf(m; m_1, \ldots, m_n) &= f(m-1; m_1 - 1, \ldots, m_n) + \cdots + f(m - n; m_1, \ldots, m_n - 1) \\ - -m_1\prod_{i=1}^n \frac{1}{i^{m_i} m_i!} + \cdots + nm_n \prod_{i=1}^n \frac{1}{i^{m_i} m_i!} &= m\prod_{i=1}^n \frac{1}{i^{m_i} m_i!} - -\end{align} - -By analogy with the derivation of the generating function of the $e_n$, we can also obtain the generating function of the $h_n$, in terms of the power sums, as: -$$ - \sum_{k=0}^\infty h_k t^k = \exp\left(\sum_{k=1}^\infty \frac{p_k}{k} t^k \right). -$$ - -This generating function is thus the plethystic exponential of $p_1 t = (x_1 + \cdots + x_n)t$. - -One may also use Newton's identities to express power sums in terms of symmetric polynomials, which does not introduce denominators: - -\begin{align} - -p_1 &= e_1,\\ - -p_2 &= e_1^2 - 2 e_2,\\ - -p_3 &= e_1^3 - 3 e_2 e_1 + 3 e_3,\\ - -p_4 &= e_1^4 - 4 e_2 e_1^2 + 4 e_3 e_1 + 2 e_2^2 - 4 e_4,\\ - -p_5 &= e_1^5 - 5 e_2 e_1^3 + 5 e_3 e_1^2 + 5 e_2^2 e_1 - 5 e_4 e_1 - 5 e_3e_2 + 5 e_5,\\ - -p_6 &= e_1^6 - 6 e_2 e_1^4 + 6 e_3 e_1^3 + 9 e_2^2 e_1^2 - 6 e_4 e_1^2 - 12 e_3 e_2 e_1 + 6 e_5 e_1 - 2 e_2^3 + 3 e_3^2 + 6 e_4 e_2 - 6e_6. - -\end{align} - -The first four formulas were obtained by Albert Girard in 1629 (thus before Newton). - -The general formula (for all non-negative integers m) is: -$$ -p_m = (-1)^m m \sum_{r_1 + 2r_2 + \cdots + mr_m = m \atop r_1\ge 0, \ldots, r_m\ge 0} \frac{(r_1 + r_2 + \cdots + r_m - 1)!}{r_1!r_2! \cdots r_m!} \prod_{i=1}^m (-e_i)^{r_i}. -$$ - -This can be conveniently stated in terms of ordinary Bell polynomials as -$$ -p_m = (-1)^m m \sum_{k=1}^m \frac{1}{k} \hat{B}_{m,k}(-e_1,\ldots,-e_{m-k+1}), -$$ - -or equivalently as the generating function: - - - -\begin{align} - -\sum_{k=1}^{\infty} (-1)^{k-1} p_k \frac{t^k}{k} & = \ln\left(1+e_1t+e_2t^2+e_3t^3+\cdots\right) \\ - -& = e_1 t - \frac{1}{2}\left(e_1^2-2e_2\right) t^2 + \frac{1}{3}\left(e_1^3-3e_1e_2+3e_3\right) t^3 + \cdots, - -\end{align} - - - -which is analogous to the Bell polynomial exponential generating function given in the previous subsection. - -The multiple summation formula above can be proved by considering the following inductive step: - -\begin{align} - -f(m; r_1,\ldots,r_n) - -= {} & f(m - 1; r_1 - 1, \cdots, r_n) + \cdots + f(m - n; r_1, \ldots, r_n - 1)\\[8pt] - -= {} & \frac{1}{(r_1 - 1)!\cdots r_n!}(m - 1)(r_1 + \cdots + r_n - 2)!
+ \cdots \\ - -& \cdots + \frac{1}{r_1!\cdots(r_n - 1)!}(m - n)(r_1 + \cdots + r_n - 2)!\\[8pt] - -= {} & \frac{1}{r_1!\cdots r_n!}\left[r_1(m - 1) + \cdots + r_n(m-n)\right]\left[r_1 + \cdots + r_n - 2\right]!\\[8pt] - -= {} & \frac{1}{r_1!\cdots r_n!}\left[m(r_1 + \cdots + r_n) - m\right]\left[r_1 + \cdots + r_n - 2\right]!\\[8pt] - -= {} & \frac{m(r_1 + \cdots + r_n - 1)!}{r_1!\cdots r_n!} - -\end{align} - -Finally one may use the variant identities involving complete homogeneous symmetric polynomials similarly to express power sums in terms of them: - -\begin{align} - -p_1 &= + h_1,\\ - -p_2 &= - h_1^2 + 2 h_2,\\ - -p_3 &= + h_1^3 - 3 h_2 h_1 + 3 h_3,\\ - -p_4 &= - h_1^4 + 4 h_2 h_1^2 - 4 h_3 h_1 - 2 h_2^2 + 4 h_4,\\ - -p_5 &= + h_1^5 - 5 h_2 h_1^3 + 5 h_2^2 h_1 + 5 h_3 h_1^2 - 5 h_3h_2 - 5 h_4 h_1 + 5 h_5,\\ - -p_6 &= - h_1^6 + 6 h_2 h_1^4 - 9 h_2^2 h_1^2 - 6 h_3 h_1^3 + 2 h_2^3 + 12 h_3 h_2 h_1 + 6 h_4 h_1^2 - 3 h_3^2 - 6 h_4 h_2 - 6 h_1 h_5 + 6h_6,\\ - -\end{align} - -and so on. Apart from the replacement of each ei by the corresponding hi, the only change with respect to the previous family of identities is in the signs of the terms, which in this case depend just on the number of factors present: the sign of the monomial $\Pi_{i=1}^l h_i^{m_i}$ is $-(-1)^{m_1+m_2+m_3+\cdots}$. In particular the above description of the absolute value of the coefficients applies here as well. - -The general formula (for all non-negative integers m) is: -$$ -p_m = -\sum_{r_1 + 2r_2 + \cdots + mr_m = m \atop r_1\ge 0, \ldots, r_m \ge 0} \frac{m(r_1 + r_2 + \cdots + r_m - 1)!}{r_1!r_2!\cdots r_m!} \prod_{i=1}^m (-h_i)^{r_i} -$$ - -One can obtain explicit formulas for the above expressions in the form of determinants, by considering the first n of Newton's identities (or its counterparts for the complete homogeneous polynomials) as linear equations in which the elementary symmetric functions are known and the power sums are unknowns (or vice versa), and applying Cramer's rule to find the solution for the final unknown. For instance taking Newton's identities in the form - -\begin{align} - -e_1 &= 1p_1,\\ - -2e_2 &= e_1p_1 - 1p_2,\\ - -3e_3 &= e_2p_1 - e_1p_2 + 1p_3,\\ - -& \vdots \\ - -ne_n &= e_{n-1}p_1 - e_{n-2} p_2 + \cdots + (-1)^ne_1p_{n - 1} + (-1)^{n - 1}p_n - -\end{align} - -we consider $p_1, -p_2, p_3, \ldots, (-1)^np_{n-1}$ and $p_n$ as unknowns, and solve for the final one, giving - -\begin{align} - -p_n ={} &\begin{vmatrix} - -1 & 0 & \cdots & & e_1 \\ - -e_1 & 1 & 0 & \cdots & 2e_2 \\ - -e_2 & e_1 & 1 & & 3e_3 \\ - -\vdots & & \ddots & \ddots & \vdots\\ - -e_{n - 1} & \cdots & e_2 & e_1 & ne_n - -\end{vmatrix}\begin{vmatrix} - -1 & 0 & \cdots & \\ - -e_1 & 1 & 0 & \cdots \\ - -e_2 & e_1 & 1 & \\ - -\vdots & & \ddots & \ddots \\ - -e_{n - 1} & \cdots & e_2 & e_1 & (-1)^{n-1} - -\end{vmatrix}^{-1} \\[7pt] - -= {(-1)^{n-1}} &\begin{vmatrix} - -1 & 0 & \cdots & & e_1 \\ - -e_1 & 1 & 0 & \cdots & 2e_2 \\ - -e_2 & e_1 & 1 & & 3e_3 \\ - -\vdots & & \ddots &\ddots & \vdots \\ - -e_{n - 1} & \cdots & e_2 & e_1 & ne_n - -\end{vmatrix} \\[7pt] - -={} &\begin{vmatrix} - -e_1 & 1 & 0 & \cdots \\ - -2e_2 & e_1 & 1 & 0 & \cdots \\ - -3e_3 & e_2 & e_1 & 1 \\ - -\vdots & & & \ddots & \ddots \\ - -ne_n & e_{n - 1} & \cdots & & e_1 - -\end{vmatrix}. - -\end{align} - -Solving for $e_n$ instead of for $p_n$ is similar, as are the analogous computations for the complete homogeneous symmetric polynomials; in each case the details are slightly messier than the final results, which are (Macdonald 1979, p.
20): - -\begin{align} - -e_n = \frac1{n!}&\begin{vmatrix} - -p_1 & 1 & 0 & \cdots \\ - -p_2 & p_1 & 2 & 0 & \cdots \\ - -\vdots & & \ddots & \ddots \\ - -p_{n-1} & p_{n-2} & \cdots & p_1 & n-1 \\ - -p_n & p_{n-1} & \cdots & p_2 & p_1 - -\end{vmatrix} \\[7pt] - -p_n = (-1)^{n-1}&\begin{vmatrix} - -h_1 & 1 & 0 & \cdots \\ - -2h_2 & h_1 & 1 & 0 & \cdots \\ - -3h_3 & h_2 & h_1 & 1 \\ - -\vdots & & & \ddots & \ddots \\ - -nh_n & h_{n-1} & \cdots & & h_1 - -\end{vmatrix} \\[7pt] - -h_n = \frac1{n!}&\begin{vmatrix} - -p_1 & -1 & 0 & \cdots \\ - -p_2 & p_1 & -2 & 0 & \cdots \\ - -\vdots & & \ddots & \ddots \\ - -p_{n - 1} & p_{n-2} & \cdots & p_1 & 1 - n \\ - -p_n & p_{n-1} & \cdots & p_2 & p_1 - -\end{vmatrix}. - -\end{align} - -Note that the use of determinants means that the formula for $h_n$ has additional minus signs compared to the one for $e_n$, while the situation for the expanded form given earlier is the opposite. As remarked in (Littlewood 1950, p. 84), one can alternatively obtain the formula for $h_n$ by taking the permanent of the matrix for $e_n$ instead of the determinant, and more generally an expression for any Schur polynomial can be obtained by taking the corresponding immanant of this matrix. - -Each of Newton's identities can easily be checked by elementary algebra; however, their validity in general needs a proof. Here are some possible derivations. - -One can obtain the k-th Newton identity in k variables by substitution into -$$ - \prod_{i=1}^k (t - x_i) = \sum_{i=0}^k (-1)^{k-i} e_{k-i}(x_1,\ldots,x_k)t^i -$$ - -as follows. Substituting $x_j$ for t gives -$$ -0= \sum_{i=0}^k (-1)^{k-i} e_{k-i}(x_1,\ldots,x_k){x_j}^i \quad\text{for }1\leq j\leq k -$$ - -Summing over all j gives -$$ -0= (-1)^kke_k(x_1,\ldots,x_k)+\sum_{i=1}^k(-1)^{k-i} e_{k-i}(x_1,\ldots,x_k)p_i(x_1,\ldots,x_k), -$$ - -where the terms for i = 0 were taken out of the sum because p0 is (usually) not defined. This equation immediately gives the k-th Newton identity in k variables. Since this is an identity of symmetric polynomials (homogeneous) of degree k, its validity for any number of variables follows from its validity for k variables. Concretely, the identities in n < k variables can be deduced by setting k − n variables to zero. The k-th Newton identity in n > k variables contains more terms on both sides of the equation than the one in k variables, but its validity will be assured if the coefficients of any monomial match. Because no individual monomial involves more than k of the variables, the monomial will survive the substitution of zero for some set of n − k (other) variables, after which the equality of coefficients is one that arises in the k-th Newton identity in k (suitably chosen) variables. - -Another derivation can be obtained by computations in the ring of formal power series R[[t]], where R is Z[x1, ..., xn], the ring of polynomials in n variables x1, ..., xn over the integers. - -Starting again from the basic relation -$$ -\prod_{i=1}^n (t - x_i) = \sum_{k=0}^n (-1)^{k} a_k t^{n-k} -$$ - -and "reversing the polynomials" by substituting 1/t for t and then multiplying both sides by $t^n$ to remove negative powers of t, gives -$$ -\prod_{i=1}^n (1- x_it) = \sum_{k=0}^n (-1)^{k} a_k t^k.
-$$ - -(the above computation should be performed in the field of fractions of R[[t]]; alternatively, the identity can be obtained simply by evaluating the product on the left side) - -Swapping sides and expressing the ai as the elementary symmetric polynomials they stand for gives the identity -$$ -\sum_{k=0}^n (-1)^{k} e_k(x_1,\ldots,x_n) t^k=\prod_{i=1}^n (1- x_it). -$$ - -One formally differentiates both sides with respect to t, and then (for convenience) multiplies by t, to obtain - -\begin{align} - -\sum_{k=0}^n (-1)^{k}k e_k(x_1,\ldots,x_n) t^k - -&= t \sum_{i=1}^n \left[(-x_i) \prod\nolimits_{j \neq i}(1 - x_jt)\right]\\ - -&= -\left(\sum_{i=1}^n \frac{x_it}{1 - x_it}\right) \prod\nolimits_{j=1}^n (1 - x_jt)\\ - -&= -\left[\sum_{i=1}^n \sum_{j=1}^\infty(x_it)^j\right]\left[\sum_{\ell=0}^n (-1)^\ell e_\ell(x_1,\ldots,x_n) t^\ell\right]\\ - -&= \left[\sum_{j=1}^\infty p_j(x_1, \ldots, x_n)t^j\right] \left[\sum_{\ell=0}^n (-1)^{\ell - 1} e_\ell(x_1, \ldots, x_n) t^\ell\right],\\ - -\end{align} - -where the polynomial on the right hand side was first rewritten as a rational function in order to factor a product out of the summation, then the fraction in the summand was developed as a series in t, using the formula -$$ -\frac{X}{1 - X} = X + X^2 + X^3 + X^4 + X^5 + \cdots, -$$ - -and finally the coefficient of each $t^j$ was collected, giving a power sum. (The series in t is a formal power series, but may alternatively be thought of as a series expansion for t sufficiently close to 0, for those more comfortable with that; in fact one is not interested in the function here, but only in the coefficients of the series.) Comparing coefficients of $t^k$ on both sides one obtains -$$ -(-1)^{k}k e_k(x_1,\ldots,x_n) = \sum_{j=1}^k (-1)^{k-j-1} p_j(x_1,\ldots,x_n)e_{k-j}(x_1,\ldots,x_n), -$$ - -which gives the k-th Newton identity. - -The following derivation, given essentially in (Mead, 1992), is formulated in the ring of symmetric functions for clarity (all identities are independent of the number of variables). Fix some k > 0, and define the symmetric function r(i) for 2 ≤ i ≤ k as the sum of all distinct monomials of degree k obtained by multiplying one variable raised to the power i with k − i distinct other variables (this is the monomial symmetric function mγ where γ is a hook shape (i,1,1,...,1)). In particular r(k) = pk; for r(1) the description would amount to that of ek, but this case was excluded since here monomials no longer have any distinguished variable. All products piek−i can be expressed in terms of the r(j) with the first and last case being somewhat special. One has -$$ -p_ie_{k-i}=r(i)+r(i+1)\quad\text{for }1 < i < k, -$$ - -since each product of terms on the left involving a distinct variable contributes to r(i), while those where the variable from pi already occurs among the variables of the term from ek−i contribute to r(i + 1), and all terms on the right are so obtained exactly once. For i = k one multiplies by e0 = 1, giving trivially -$$ -p_ke_0=p_k=r(k). -$$ - -Finally the product p1ek−1 for i = 1 gives contributions to r(i + 1) = r(2) like for other values i < k, but the remaining contributions produce k times each monomial of ek, since any one of the variables may come from the factor p1; thus -$$ -p_1e_{k-1}=ke_k+r(2). -$$ - -The k-th Newton identity is now obtained by taking the alternating sum of these equations, in which all terms of the form r(i) cancel out.
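The bookkeeping behind these three relations can be spot-checked symbolically; the following sketch (an illustration assuming sympy; it takes k = 3 in three variables, where r(2) is the monomial symmetric function described above) verifies p1e2 = 3e3 + r(2).

```python
from itertools import permutations
from sympy import symbols, expand

x1, x2, x3 = symbols("x1 x2 x3")
e2 = x1*x2 + x1*x3 + x2*x3
e3 = x1*x2*x3
p1 = x1 + x2 + x3

# r(2) for k = 3: all distinct monomials x_i**2 * x_j with i != j
r2 = sum(a**2 * b for a, b in permutations((x1, x2, x3), 2))

assert expand(p1 * e2 - (3*e3 + r2)) == 0      # p_1 e_{k-1} = k e_k + r(2)
print("p1*e2 == 3*e3 + r(2)")
```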
- -A short combinatorial proof of Newton's Identities is given in (Zeilberger, 1984) diff --git a/wiki/wikipedia/352.txt b/wiki/wikipedia/352.txt deleted file mode 100644 index adce51a7c20212b51e975b96fbe689753ca48430..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/352.txt +++ /dev/null @@ -1,256 +0,0 @@ -In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exists a pair of integers (p, q) with q > 1 such that -$$ -0< \left |x- \frac{p}{q} \right| < \frac{1}{q^{n}}. -$$ - -Liouville numbers are "almost rational", and can thus be approximated "quite closely" by sequences of rational numbers. They are precisely the transcendental numbers that can be more closely approximated by rational numbers than any algebraic irrational number. In 1844, Joseph Liouville showed that all Liouville numbers are transcendental, thus establishing the existence of transcendental numbers for the first time. - -However, note that pi and e are not Liouville numbers. - -Here we show that Liouville numbers exist by exhibiting a construction that produces such numbers. - -For any integer b ≥ 2 and any sequence of integers (a1, a2, …, ) such that ak ∈ {0, 1, 2, …, b − 1} for all k and ak ≠ 0 for infinitely many k, define the number -$$ -x = \sum_{k=1}^\infty \frac{a_k}{b^{k!}}. -$$ - -In the special case when b = 10, and ak = 1, for all k, the resulting number x is called Liouville's constant: - -L = 0.11000100000000000000000100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001... - -It follows from the definition of x that its base-b representation is -$$ -x = \left(0.a_{1}a_{2}000a_{3}00000000000000000a_{4}0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000a_{5}\ldots\right)_b -$$ - -where the nth term is in the (n!)th place. - -Since this base-b representation is non-repeating it follows that x is not a rational number. Therefore, for any rational number p/q, we have |x − p/q| > 0. - -Now, for any integer n ≥ 1, define qn and pn as follows: -$$ -q_n = b^{n!}; \quad p_n = q_n \sum_{k=1}^n \frac{a_k}{b^{k!}} = \sum_{k=1}^n {a_k}{b^{n!-k!}} . -$$ - -Then - -\begin{align} - -0 < \left|x - \frac{p_n}{q_n}\right| & = \left|x - \frac{q_n\sum_{k=1}^n \frac{a_k}{b^{k!}}}{q_n}\right| = \left|x - \sum_{k=1}^n \frac{a_k}{b^{k!}}\right| = \left|\sum_{k=1}^\infty \frac{a_k}{b^{k!}} - \sum_{k=1}^n \frac{a_k}{b^{k!}}\right| = \left|\left(\sum_{k=1}^n \frac{a_k}{b^{k!}} + \sum_{k=n+1}^\infty \frac{a_k}{b^{k!}}\right) - \sum_{k=1}^n \frac{a_k}{b^{k!}}\right| = \sum_{k=n+1}^\infty \frac{a_k}{b^{k!}} \\[6pt] - -& \le \sum_{k=n+1}^\infty \frac{b-1}{b^{k!}} < \sum_{k=(n+1)!}^\infty \frac{b-1}{b^k} = \frac{b-1}{b^{(n+1)!}} + \frac{b-1}{b^{(n+1)!+1}} + \frac{b-1}{b^{(n+1)!+2}} + ... = \frac{b-1}{b^{(n+1)!}b^{0}} + \frac{b-1}{b^{(n+1)!}b^{1}} + \frac{b-1}{b^{(n+1)!}b^{2}} + ... = \frac{b-1}{b^{(n+1)!}} \sum_{k=0}^\infty \frac{1}{b^k} \\[6pt] - -& = \frac{b-1}{b^{(n+1)!}}\cdot\frac{b}{b-1} = \frac{b}{b^{(n+1)!}} \le \frac{b^{n!}}{b^{(n+1)!}} = \frac{1}{b^{(n+1)! - n!}} = \frac{1}{b^{(n+1)n! - n!}} = \frac{1}{b^{n(n!) + n! - n!}} = \frac{1}{b^{(n!)n}} = \frac{1}{{q_n}^n} - -\end{align} - -Therefore, we conclude that any such x is a Liouville number. - -# The inequality $\sum_{k=n+1}^\infty \frac{a_k}{b^{k!}} \le \sum_{k=n+1}^\infty \frac{b-1}{b^{k!}}$ follows since ak ∈ {0, 1, 2, …, b−1} for all k, so at most ak = b−1. 
The largest possible sum would occur if the sequence of integers (a1, a2, …) were (b−1, b−1, ...), i.e. ak = b−1, for all k. $\sum_{k=n+1}^\infty \frac{a_k}{b^{k!}}$ will thus be less than or equal to this largest possible sum. - -# The strict inequality \begin{align} - -\sum_{k=n+1}^\infty \frac{b-1}{b^{k!}} < \sum_{k=(n+1)!}^\infty \frac{b-1}{b^k} - -\end{align} is motivated by the wish to reduce the series to one for which a closed formula is known. The inequality in 1. was introduced because of the geometric series formula $\sum_{k=0}^\infty \frac{1}{b^{k}} = \frac{b}{b-1}$: if an inequality can be found that introduces a series with (b−1) in the numerator, and if the denominator term $b^{k!}$ can be further reduced to $b^{k}$, with the series indices shifted so as to run from 0 to $\infty$, then both the series and the (b−1) terms can be eliminated, getting closer to a fraction of the form $\frac{1}{b^{(\text{exponent})\cdot n}}$, which is the end-goal of the proof. Observe that, for any term in $\sum_{k=n+1}^\infty \frac{b-1}{b^{k!}}$, since b ≥ 2, then $\frac{b-1}{b^{k!}} < \frac{b-1}{b^{k}}$, for all k (except for when n=1). Therefore, \begin{align} - -\sum_{k=n+1}^\infty \frac{b-1}{b^{k!}} < \sum_{k=n+1}^\infty \frac{b-1}{b^k} - -\end{align} (since, even if n=1, all subsequent terms are smaller). In order to manipulate the indices so that k starts at 0, a partial sum is selected from within - -\sum_{k=n+1}^\infty \frac{b-1}{b^k} - -(also less than the total value, since it is a partial sum of a series all of whose terms are positive). We choose the partial sum formed by starting at k = (n+1)!, with a view to writing a new series starting at k = 0, by noticing that $b^{(n+1)!} = b^{(n+1)!}b^0$. - -# For the final inequality $\frac{b}{b^{(n+1)!}} \le \frac{b^{n!}}{b^{(n+1)!}}$, this particular inequality (true because b ≥ 2, with equality if and only if n=1) is chosen in order to manipulate $\frac{b}{b^{(n+1)!}}$ into something of the form $\frac{1}{b^{(\text{exponent})\cdot n}}$. It allows the (n+1)! and the numerator to be eliminated, using the property that (n+1)! − n! = (n!)n, thus putting the denominator in ideal form for the substitution $q_n = b^{n!}$. - -Here we will show that the number $~ x = c / d ~,$ where c and d are integers and $~ d > 0 ~,$ cannot satisfy the inequalities that define a Liouville number. Since every rational number can be represented as such $~ c / d ~,$ we will have proven that no Liouville number can be rational. - -More specifically, we show that for any positive integer n large enough that $~ 2^{n - 1} > d > 0~$ [equivalently, for any positive integer $~ n > 1 + \log_2(d) ~$], no pair of integers $~(p,q)~$ exists that simultaneously satisfies the pair of bracketing inequalities -$$ -0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^n}~. -$$ - -If the claim is true, then the desired conclusion follows.
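(As a side check before this proof, the defining inequality for the constant constructed above can be verified in exact rational arithmetic; the following sketch uses Python's fractions module, with the truncation depth K = 6 being an assumption of the illustration, not part of the original argument.)

```python
from fractions import Fraction
from math import factorial

K = 6                                         # truncation depth (assumption)
x = sum(Fraction(1, 10**factorial(k)) for k in range(1, K + 1))

for n in range(1, 5):
    q = 10**factorial(n)                      # q_n = b**(n!) with b = 10
    p = sum(q // 10**factorial(k) for k in range(1, n + 1))   # p_n
    err = abs(x - Fraction(p, q))
    # The tail dropped by the truncation is below 2/10**factorial(K + 1),
    # negligible against 1/q**n here, so the check is meaningful.
    assert 0 < err < Fraction(1, q**n)
print("0 < |x - p_n/q_n| < 1/q_n**n holds exactly for n = 1..4")
```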
- -Let p and q be any integers with $~q > 1~.$ Then we have -$$ - \left| x - \frac{p}{q} \right| = \left| \frac{c}{d} - \frac{p}{q} \right| = \frac{|cq - dp|}{ dq } -$$ - -If $ \left| cq - dp \right| = 0~,$ we would then have -$$ -\left| x - \frac{p}{q}\right|= \frac{|cq - dp|}{ dq } = 0 ~, -$$ - -meaning that such a pair of integers $~(p,q)~$ would violate the first inequality in the definition of a Liouville number, irrespective of any choice of n. - -If, on the other hand, $~\left| cq - dp \right| > 0 ~,$ then, since $~cq - dp~$ is an integer, we can assert the sharper inequality $\left| cq - dp \right| \ge 1 ~.$ From this it follows that -$$ -\left| x - \frac{p}{q}\right|= \frac{| cq - dp |}{dq} \ge \frac{1}{dq} -$$ - -Now for any integer $~n > 1 + \log_2(d)~,$ the last inequality above implies -$$ -\left| x - \frac{p}{q} \right| \ge \frac{1}{dq} > \frac{1}{2^{n-1}q} \ge \frac{1}{q^n} ~. -$$ - -Therefore, in the case $~ \left| cq - dp \right| > 0 ~$ such a pair of integers $~(p,q)~$ would violate the second inequality in the definition of a Liouville number, for some positive integer n. - -We conclude that there is no pair of integers $~(p,q)~,$ with $~ q > 1 ~,$ that would qualify such an $~ x = c / d ~$ as a Liouville number. - -Hence a Liouville number, if it exists, cannot be rational. - -(The section on Liouville's constant proves that Liouville numbers exist by exhibiting the construction of one. The proof given in this section implies that this number must be irrational.) - -Consider, for example, the number - -3.1400010000000000000000050000.... - -3.14(3 zeros)1(17 zeros)5(95 zeros)9(599 zeros)2(4319 zeros)6... - -where the digits are zero except in positions n! where the digit equals the nth digit following the decimal point in the decimal expansion of pi. - -As shown in the section on the existence of Liouville numbers, this number, as well as any other non-terminating decimal with its non-zero digits similarly situated, satisfies the definition of a Liouville number. Since the set of all sequences of non-null digits has the cardinality of the continuum, the same thing occurs with the set of all Liouville numbers. - -Moreover, the Liouville numbers form a dense subset of the set of real numbers. - -From the point of view of measure theory, the set of all Liouville numbers L is small. More precisely, its Lebesgue measure, λ(L), is zero. The proof given follows some ideas by John C. Oxtoby. - -For positive integers n > 2 and q ≥ 2 set: -$$ -V_{n,q}=\bigcup\limits_{p=-\infty}^\infty \left(\frac{p}{q}-\frac{1}{q^n},\frac{p}{q}+\frac{1}{q^n}\right) -$$ - -we have -$$ -L\subseteq \bigcup_{q=2}^\infty V_{n,q}. -$$ - -Observe that for each positive integer n ≥ 2 and m ≥ 1, we also have -$$ -L\cap (-m,m)\subseteq \bigcup\limits_{q=2}^\infty V_{n,q}\cap(-m,m)\subseteq \bigcup\limits_{q=2}^\infty\bigcup\limits_{p=-mq}^{mq} \left( \frac{p}{q}-\frac{1}{q^n},\frac{p}{q}+\frac{1}{q^n}\right). -$$ - -Since -$$ - \left|\left(\frac{p}{q}+\frac{1}{q^n}\right)-\left(\frac{p}{q}-\frac{1}{q^n}\right)\right|=\frac{2}{q^n} -$$ - -and n > 2 we have - - - -\begin{align} - -\lambda(L\cap (-m, m)) & \leq\sum_{q=2}^\infty\sum_{p=-mq}^{mq}\frac{2}{q^n} = \sum_{q=2}^\infty \frac{2(2mq+1)}{q^n} \\[6pt] - -& \leq (4m+1)\sum_{q=2}^\infty\frac{1}{q^{n-1}} \leq (4m+1) \int^\infty_1 \frac{dq}{q^{n-1}}\leq\frac{4m+1}{n-2}. - -\end{align} - - - -Now -$$ -\lim_{n\to\infty}\frac{4m+1}{n-2}=0 -$$ - -and it follows that for each positive integer m, L ∩ (−m, m) has Lebesgue measure zero. Consequently, so has L.
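The displayed bound lends itself to a numerical illustration (a sketch assuming the mpmath library; the choice m = 1 is arbitrary): the series over q is evaluated for several values of n and stays below (4m+1)/(n−2).

```python
from mpmath import nsum, inf

m = 1                                         # an arbitrary choice
for n in (4, 6, 8, 10):
    # sum_{q >= 2} 2*(2*m*q + 1)/q**n, evaluated numerically
    total = nsum(lambda q: 2*(2*m*q + 1) / q**n, [2, inf])
    bound = (4*m + 1) / (n - 2)
    print(n, float(total), "<=", bound)       # the sum stays below the bound
```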
- -In contrast, the Lebesgue measure of the set of all real transcendental numbers is infinite (since the set of algebraic numbers is a null set). - -For each positive integer n, set -$$ -~ U_n = \bigcup\limits_{q=2}^\infty ~ \bigcup\limits_{p=-\infty}^\infty ~ \left\{ x \in \mathbb R : 0 < \left |x- \frac{p}{q} \right |< \frac{1}{q^n}\right\} = \bigcup\limits_{q=2}^\infty ~ \bigcup\limits_{p=-\infty}^\infty ~ \left(\frac{p}{q}-\frac{1}{q^n}~,~\frac{p}{q} + \frac{1}{q^n}\right) \setminus \left\{\frac{p}{q}\right\} ~ -$$ - -The set of all Liouville numbers can thus be written as -$$ -~ L ~=~ \bigcap\limits_{n=1}^\infty U_n ~=~ \bigcap\limits_{n \in \mathbb{N}_1} ~ \bigcup\limits_{ q \geqslant 2} ~ \bigcup \limits_{ p \in \mathbb{Z} }\left(\left(\frac{p}{q} - \frac{1}{q^n}~,~ \frac{p}{q} + \frac{1}{q^n} \right) \setminus \left\{\frac{p}{q}\right\} \right) ~. -$$ - -Each $~ U_n ~$ is an open set; as its closure contains all rationals (the $~p / q~$ from each punctured interval), it is also a dense subset of the real line. Since L is the intersection of countably many such open dense sets, L is comeagre, that is to say, it is a dense Gδ set. - -The Liouville–Roth irrationality measure (irrationality exponent, approximation exponent, or Liouville–Roth constant) of a real number x is a measure of how "closely" it can be approximated by rationals. Generalizing the definition of Liouville numbers, instead of allowing any n in the power of q, we find the largest possible value for μ such that $0< \left| x- \frac{p}{q} \right| < \frac{1}{q^\mu} $ is satisfied by an infinite number of integer pairs (p, q) with q > 0. This maximum value of μ is defined to be the irrationality measure of x. For any value μ less than this upper bound, the infinite set of all rationals p/q satisfying the above inequality yields arbitrarily close approximations of x. Conversely, if μ is greater than the upper bound, then there are at most finitely many (p, q) with q > 0 that satisfy the inequality; thus, the opposite inequality holds for all larger values of q. In other words, given the irrationality measure μ of a real number x, whenever a rational approximation x ≅ p/q, p,q ∈ N yields n + 1 exact decimal digits, we have -$$ -\frac{1}{10^n} \ge \left| x- \frac{p}{q} \right| \ge \frac{1}{q^{\mu+\varepsilon}} -$$ - -for any ε>0, except for at most a finite number of "lucky" pairs (p, q). - -Almost all numbers have an irrationality measure equal to 2. - -Below is a table of known upper and lower bounds for the irrationality measures of certain numbers. - -The irrationality base is a weaker measure of irrationality introduced by J. Sondow, and is regarded as an irrationality measure for Liouville numbers. It is defined as follows: - -Let $\alpha $ be an irrational number. If there exists a real number $ \beta \geq 1 $ with the property that for any $ \varepsilon >0 $, there is a positive integer $ q(\varepsilon)$ such that -$$ - \left| \alpha-\frac{p}{q} \right| > \frac 1 {(\beta+\varepsilon)^q} \text{ for all integers } p,q \text{ with } q \geq q(\varepsilon) -$$, - -then $\beta$ is called the irrationality base of $\alpha$ and is represented as $\beta(\alpha)$. - -If no such $\beta$ exists, then $\alpha$ is called a super Liouville number.
- -Example: The series $\varepsilon_{2e}=1+\frac{1}{2^1}+\frac{1}{4^{2^1}}+\frac{1}{8^{4^{2^1}}}+\frac{1}{16^{8^{4^{2^1}}}}+\frac{1}{32^{16^{8^{4^{2^1}}}}}+\ldots$ is a super Liouville number, while the series $\tau_2 = \sum_{n=1}^\infty{\frac{1}{^{n}2}} = \frac{1}{2} + \frac{1}{2^2} + \frac{1}{2^{2^2}} + \frac{1}{2^{2^{2^2}}} + \frac{1}{2^{2^{2^{2^2}}}} + \ldots$ is a Liouville number with irrationality base 2. (${^{b}a}$ represents tetration.) - -Establishing that a given number is a Liouville number provides a useful tool for proving that the number is transcendental. However, not every transcendental number is a Liouville number. The terms in the continued fraction expansion of every Liouville number are unbounded; using a counting argument, one can then show that there must be uncountably many transcendental numbers which are not Liouville. Using the explicit continued fraction expansion of e, one can show that e is an example of a transcendental number that is not Liouville. Mahler proved in 1953 that pi is another such example. - -The proof proceeds by first establishing a property of irrational algebraic numbers. This property essentially says that irrational algebraic numbers cannot be well approximated by rational numbers, where the condition for "well approximated" becomes more stringent for larger denominators. A Liouville number is irrational but does not have this property, so it cannot be algebraic and must be transcendental. The following lemma is usually known as Liouville's theorem (on Diophantine approximation), there being several results known as Liouville's theorem. - -Below, we will show that no Liouville number can be algebraic. - -Lemma: If α is an irrational number which is the root of a polynomial f of degree n > 0 with integer coefficients, then there exists a real number A > 0 such that, for all integers p, q, with q > 0, -$$ - \left| \alpha - \frac{p}{q} \right | > \frac{A}{q^n} -$$ - -Proof of Lemma: Let M be the maximum value of |f ′(x)| (the absolute value of the derivative of f) over the interval [α − 1, α + 1]. Let α1, α2, ..., αm be the distinct roots of f which differ from α. Select some value A > 0 satisfying -$$ -A< \min \left(1, \frac{1}{M}, \left| \alpha - \alpha_1 \right|, \left| \alpha - \alpha_2 \right|, \ldots , \left| \alpha-\alpha_m \right| \right) -$$ - -Now assume that there exist some integers p, q contradicting the lemma. Then -$$ -\left| \alpha - \frac{p}{q}\right| \le \frac{A}{q^n} \le A< \min\left(1, \frac{1}{M}, \left| \alpha - \alpha_1 \right|, \left|\alpha - \alpha_2 \right|, \ldots , \left| \alpha-\alpha_m \right| \right) -$$ - -Then p/q is in the interval [α − 1, α + 1]; and p/q is not in {α1, α2, ..., αm}, so p/q is not a root of f; and there is no root of f between α and p/q. - -By the mean value theorem, there exists an x0 between p/q and α such that -$$ -f(\alpha)-f(\tfrac{p}{q}) = (\alpha - \frac{p}{q}) \cdot f'(x_0) -$$ - -Since α is a root of f but p/q is not, we see that |f ′(x0)| > 0 and we can rearrange: -$$ -\left|\alpha -\frac{p}{q}\right |= \frac{\left | f(\alpha)- f(\tfrac{p}{q})\right |}{\left | f'(x_0) \right |} = \left | \frac{f(\tfrac{p}{q})}{f'(x_0)} \right | -$$ - -Now, f is of the form $\sum_{i=0}^n c_i x^i$ where each ci is an integer; so we can express |f(p/q)| as -$$ -\left|f \left (\frac{p}{q} \right) \right| = \left| \sum_{i=0}^n c_i p^i q^{-i} \right| = \frac{1}{q^n} \left| \sum_{i=0}^n c_i p^i q^{n-i} \right | \ge \frac {1}{q^n} -$$ - -the last inequality holding because p/q is not a root of f and the ci are integers.
- -Thus we have that |f(p/q)| ≥ 1/q^n. Since |f ′(x0)| ≤ M by the definition of M, and 1/M > A by the definition of A, we have that -$$ -\left | \alpha - \frac{p}{q} \right | = \left|\frac{f(\tfrac{p}{q})}{f'(x_0)}\right| \ge \frac{1}{Mq^n} > \frac{A}{q^n} \ge \left| \alpha - \frac{p}{q} \right| -$$ - -which is a contradiction; therefore, no such p, q exist, proving the lemma. - -Proof of assertion: As a consequence of this lemma, let x be a Liouville number; as noted above, x is then irrational. If x is algebraic, then by the lemma, there exists some integer n and some positive real A such that for all p, q -$$ - \left| x - \frac{p}{q} \right|> \frac{A}{q^{n}} -$$ - -Let r be a positive integer such that 1/(2r) ≤ A. If we let m = r + n, then, since x is a Liouville number, there exist integers a, b where b > 1 such that -$$ -\left|x-\frac ab\right|<\frac1{b^m}=\frac1{b^{r+n}}=\frac1{b^rb^n} \le \frac1{2^r}\frac1{b^n} \le \frac A{b^n} -$$ - -which contradicts the lemma. Hence a Liouville number cannot be algebraic, and therefore must be transcendental. diff --git a/wiki/wikipedia/3520.txt b/wiki/wikipedia/3520.txt deleted file mode 100644 index 1dd7fd27969d5835e96dba43a76eae1c708c9bac..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3520.txt +++ /dev/null @@ -1,18 +0,0 @@ -In complex analysis, Harnack's principle or Harnack's theorem is one of several closely related theorems about the convergence of sequences of harmonic functions, which follow from Harnack's inequality. - -If the functions $ u_1(z)$, $ u_2(z)$, ... are harmonic in an open connected subset $G$ of the complex plane C, and -$$ -u_1(z) \le u_2(z) \le \dots -$$ - -in every point of $G$, then the limit -$$ - \lim_{n\to\infty}u_n(z) -$$ - -either is infinite in every point of the domain $G$ or it is finite in every point of the domain, in both cases uniformly in each compact subset of $G$. In case the limits are finite, the limit function -$$ - u(z) = \lim_{n\to\infty}u_n(z) -$$ - -is harmonic in $ G$. diff --git a/wiki/wikipedia/3521.txt b/wiki/wikipedia/3521.txt deleted file mode 100644 index 720e400b4cf7a373c72c8aed763b21f38a8bf7fd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3521.txt +++ /dev/null @@ -1,6 +0,0 @@ -In geometric probability theory, Wendel's theorem, named after James G. Wendel, gives the probability that N points distributed uniformly at random on an $(n-1)$-dimensional hypersphere all lie on the same "half" of the hypersphere. In other words, one seeks the probability that there is some half-space with the origin on its boundary that contains all N points. Wendel's theorem says that the probability is -$$ - p_{n,N}=2^{-N+1}\sum_{k=0}^{n-1}\binom{N-1}{k}. -$$ - -The statement is equivalent to $ p_{n,N}$ being the probability that the origin is not contained in the convex hull of the N points and holds for any probability distribution on Rn that is symmetric around the origin. In particular this includes all distributions that are rotationally invariant around the origin.
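Wendel's formula can be sanity-checked by simulation. The following sketch (an illustration, not part of the original article; it assumes numpy and draws rotationally symmetric Gaussian points, which the theorem permits) estimates p_{2,4} = 1/2 for four points in the plane.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
n, N, trials = 2, 4, 200_000

def in_common_half_plane(pts):
    """For n = 2: all points lie in a closed half-plane through the origin
    iff the largest gap between their sorted angles is at least pi."""
    ang = np.sort(np.arctan2(pts[:, 1], pts[:, 0]))
    gaps = np.diff(np.append(ang, ang[0] + 2*np.pi))
    return gaps.max() >= np.pi

hits = sum(in_common_half_plane(rng.normal(size=(N, n))) for _ in range(trials))
exact = 2.0**(1 - N) * sum(comb(N - 1, k) for k in range(n))
print(hits / trials, "vs exact", exact)       # both close to 0.5
```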
diff --git a/wiki/wikipedia/3522.txt b/wiki/wikipedia/3522.txt deleted file mode 100644 index 914ce317f32d3edb8b43bb660eb683320700b010..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3522.txt +++ /dev/null @@ -1 +0,0 @@ -In plane geometry, the Conway circle theorem states that when the sides meeting at each vertex of a triangle are extended by the length of the opposite side, the six endpoints of the three resulting line segments lie on a circle whose centre is the incentre of the triangle. The circle on which these six points lie is called the Conway circle of the triangle. The theorem and circle are named after mathematician John Horton Conway. diff --git a/wiki/wikipedia/3523.txt b/wiki/wikipedia/3523.txt deleted file mode 100644 index 78345c1278d7748a0e5825a8a695d9ff25ee5d2b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3523.txt +++ /dev/null @@ -1,7 +0,0 @@ -In model theory, a branch of mathematical logic, the Łoś–Vaught test is a criterion for a theory to be complete, unable to be augmented without becoming inconsistent. For theories in classical logic, this means that for every sentence the theory contains either the sentence or its negation but not both. - -A theory T is κ-categorical for an infinite cardinal κ if T has exactly one model (up to isomorphism) of cardinality κ. - -The Łoś–Vaught test states that if a satisfiable theory is κ-categorical for some κ ≥ ℵ0 and has no finite model, then it is complete. - -This theorem was proved independently by Jerzy Łoś and Robert Vaught, after whom it is named. diff --git a/wiki/wikipedia/3524.txt b/wiki/wikipedia/3524.txt deleted file mode 100644 index 1a1c6ca38efc8b10b6fe067d92f93d98768f19e5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3524.txt +++ /dev/null @@ -1,52 +0,0 @@ -In mathematics, in the topology of 3-manifolds, the loop theorem is a generalization of Dehn's lemma. The loop theorem was first proven by Christos Papakyriakopoulos in 1956, along with Dehn's lemma and the sphere theorem. - -A simple and useful version of the loop theorem states that if for some 3-dimensional manifold M with boundary ∂M there is a map -$$ -f\colon (D^2,\partial D^2)\to (M,\partial M) -$$ - -with $f|\partial D^2$ not nullhomotopic in $\partial M$, then there is an embedding with the same property. - -The following version of the loop theorem, due to John Stallings, is given in the standard 3-manifold treatises (such as Hempel or Jaco): - -Let $M$ be a 3-manifold and let $S$ be a connected surface in $\partial M$. Let $N \subset \pi_1(S)$ be a normal subgroup such that $\mathop{\mathrm{ker}}(\pi_1(S) \to \pi_1(M)) - N \neq \emptyset$. - -Let -$$ -f \colon D^2\to M -$$ - -be a continuous map such that -$$ -f(\partial D^2)\subset S -$$ - -and -$$ -[f|\partial D^2]\notin N. -$$ - -Then there exists an embedding -$$ -g\colon D^2\to M -$$ - -such that -$$ -g(\partial D^2)\subset S -$$ - -and -$$ -[g|\partial D^2]\notin N. -$$ - -Furthermore, if one starts with a map f in general position, then for any neighborhood U of the singularity set of f, we can find such a g with image lying inside the union of the image of f and U. - -Stallings's proof utilizes an adaptation, due to Whitehead and Shapiro, of Papakyriakopoulos' "tower construction". The "tower" refers to a special sequence of coverings designed to simplify lifts of the given map.
The same tower construction was used by Papakyriakopoulos to prove the sphere theorem for 3-manifolds, which states that a nontrivial map of a sphere into a 3-manifold implies the existence of a nontrivial embedding of a sphere. There is also a version of Dehn's lemma for minimal discs due to Meeks and S.-T. Yau, which also crucially relies on the tower construction. - -A proof of the first version of the loop theorem that does not utilize the tower construction also exists. It was essentially given some thirty years earlier by Friedhelm Waldhausen as part of his solution to the word problem for Haken manifolds; although he recognized this gave a proof of the loop theorem, he did not write up a detailed proof. The essential ingredient of this proof is the concept of Haken hierarchy. Proofs were later written up by Klaus Johannson, Marc Lackenby, and Iain Aitchison with Hyam Rubinstein. diff --git a/wiki/wikipedia/3525.txt b/wiki/wikipedia/3525.txt deleted file mode 100644 index e52b8a6a1c10ad10593df3eb6fdc0bf54b6589b4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3525.txt +++ /dev/null @@ -1,11 +0,0 @@ -In statistical analysis, Freedman's paradox, named after David Freedman, is a problem in model selection whereby predictor variables with no relationship to the dependent variable can pass tests of significance – both individually via a t-test, and jointly via an F-test for the significance of the regression. Freedman demonstrated (through simulation and asymptotic calculation) that this is a common occurrence when the number of variables is similar to the number of data points. - -Specifically, if the dependent variable and k regressors are independent normal variables, and there are n observations, then as k and n jointly go to infinity in the ratio k/n=ρ, - -# the R² goes to ρ, - -# the F-statistic for the overall regression goes to 1.0, and - -# the number of spuriously significant regressors goes to αk where α is the chosen critical probability (probability of Type I error for a regressor). This third result is intuitive because it says that the number of Type I errors equals the probability of a Type I error on an individual parameter times the number of parameters for which significance is tested. - -More recently, new information-theoretic estimators have been developed in an attempt to reduce this problem, in addition to the accompanying issue of model selection bias, whereby estimators of predictor variables that have a weak relationship with the response variable are biased. diff --git a/wiki/wikipedia/3526.txt b/wiki/wikipedia/3526.txt deleted file mode 100644 index b2e26e064964b9df8adf65ad2cda5273a91d1ee5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3526.txt +++ /dev/null @@ -1,16 +0,0 @@ -In mathematics, the Grace–Walsh–Szegő coincidence theorem is a result named after John Hilton Grace, Joseph L. Walsh, and Gábor Szegő. - -Suppose ƒ(z1, ..., zn) is a polynomial with complex coefficients, and that it is - -* symmetric, i.e. invariant under permutations of the variables, and - -* multi-affine, i.e. affine in each variable separately. - -Let A be a circular region in the complex plane. If either A is convex or the degree of ƒ is n, then for every $\zeta_1,\ldots,\zeta_n\in A$ there exists $\zeta\in A$ such that -$$ - f(\zeta_1,\ldots,\zeta_n) = f(\zeta,\ldots,\zeta).
-$$ diff --git a/wiki/wikipedia/3527.txt b/wiki/wikipedia/3527.txt deleted file mode 100644 index 439421fff9e63113429e9f131009a898c45e1eea..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3527.txt +++ /dev/null @@ -1,91 +0,0 @@ -In differential geometry, Hilbert's theorem (1901) states that there exists no complete regular surface $S$ of constant negative Gaussian curvature $K$ immersed in $\mathbb{R}^{3}$. This theorem answers the question for the negative case of which surfaces in $\mathbb{R}^{3}$ can be obtained by isometrically immersing complete manifolds with constant curvature. - -*Hilbert's theorem was first treated by David Hilbert in "Über Flächen von konstanter Krümmung" (Trans. Amer. Math. Soc. 2 (1901), 87-99). - -*A different proof was given shortly after by E. Holmgren, "Sur les surfaces à courbure constante négative," (1902). - -*A far-reaching generalization was obtained by Nikolai Efimov in 1975. - -The proof of Hilbert's theorem is elaborate and requires several lemmas. The idea is to show the nonexistence of an isometric immersion -$$ -\varphi = \psi \circ \exp_p: S' \longrightarrow \mathbb{R}^{3} -$$ - -of a plane $S'$ to the real space $\mathbb{R}^{3}$. This proof is basically the same as in Hilbert's paper, although based on the books of Do Carmo and Spivak. - -Observations: In order to have a more manageable treatment, but without loss of generality, the curvature may be considered equal to minus one, $K=-1$. There is no loss of generality, since we are dealing with constant curvatures, and similarities of $\mathbb{R}^{3}$ multiply $K$ by a constant. The exponential map $\exp_p: T_p(S) \longrightarrow S$ is a local diffeomorphism (in fact a covering map, by the Cartan–Hadamard theorem); therefore, it induces an inner product in the tangent space of $S$ at $p$: $T_p(S)$. Furthermore, $S'$ denotes the geometric surface $T_p(S)$ with this inner product. If $\psi:S \longrightarrow \mathbb{R}^{3}$ is an isometric immersion, the same holds for -$$ -\varphi = \psi \circ \exp_p:S' \longrightarrow \mathbb{R}^{3} -$$. - -The first lemma is independent from the other ones, and will be used at the end to contradict the results derived from the other lemmas. - -Lemma 1: The area of $S'$ is infinite.
- -Proof's Sketch: - -The idea of the proof is to create a global isometry between $H$ and $S'$. Then, since $H$ has an infinite area, $S'$ will have one too. - -The fact that the hyperbolic plane $H$ has an infinite area follows by computing the surface integral with the corresponding coefficients of the first fundamental form. To obtain these, the hyperbolic plane can be defined as the plane with the following inner product around a point $q\in \mathbb{R}^{2}$ with coordinates $(u,v)$:
-$$ -E = \left\langle \frac{\partial}{\partial u}, \frac{\partial}{\partial u} \right\rangle = 1 \qquad F = \left\langle \frac{\partial}{\partial u}, \frac{\partial}{\partial v} \right\rangle = \left\langle \frac{\partial}{\partial v}, \frac{\partial}{\partial u} \right\rangle = 0 \qquad G = \left\langle \frac{\partial}{\partial v}, \frac{\partial}{\partial v} \right\rangle = e^{2u} -$$ - -With these coefficients the Gaussian curvature is $-1$ and the area element is $\sqrt{EG-F^{2}} = e^{u}$. Since the hyperbolic plane is unbounded, the limits of the integral are infinite, and the area can be calculated through -$$ -\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{u} du dv = \infty -$$ - -Next, a map is needed which will show that the global information from the hyperbolic plane can be transferred to the surface $S'$, i.e. a global isometry. $\varphi: H \rightarrow S'$ will be the map, whose domain is the hyperbolic plane and whose image is the 2-dimensional manifold $S'$, which carries the inner product from the surface $S$ with negative curvature. $\varphi$ will be defined via the exponential map, its inverse, and a linear isometry between their tangent spaces, -$$ -\psi:T_p(H) \rightarrow T_{p'}(S') -$$. - -That is -$$ -\varphi = \exp_{p'} \circ \psi \circ \exp_p^{-1} -$$, - -where $p\in H, p' \in S'$. That is to say, a point of $H$ goes to the tangent plane of $H$ at $p$ through the inverse of the exponential map, then travels from one tangent plane to the other through the isometry $\psi$, and then down to the surface $S'$ with the other exponential map. - -The following step involves the use of polar coordinates, $(\rho, \theta)$ and $(\rho', \theta')$, around $p$ and $p'$ respectively. The requirement will be that the axes are mapped to each other, that is $\theta=0$ goes to $\theta'=0$. Then $\varphi$ preserves the first fundamental form.
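Both claims about this metric, the curvature value −1 and the divergent area integral, can be confirmed symbolically; the following sketch (an illustration assuming sympy, using the standard curvature formula for an orthogonal metric) does so.

```python
from sympy import symbols, exp, sqrt, diff, simplify, integrate, oo, S

u, v = symbols("u v", real=True)
E, F, G = S(1), S(0), exp(2*u)        # the coefficients fixed above

# Gaussian curvature for an orthogonal metric (F = 0):
#   K = -(1/(2*sqrt(E*G))) * ( d/du(G_u/sqrt(E*G)) + d/dv(E_v/sqrt(E*G)) )
K = -(diff(diff(G, u)/sqrt(E*G), u) + diff(diff(E, v)/sqrt(E*G), v)) / (2*sqrt(E*G))
print(simplify(K))                                    # -1

# Area element sqrt(E*G - F**2) = e^u, so the area integral diverges:
print(integrate(sqrt(E*G - F**2), (u, 0, oo)))        # oo
```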
- -In a geodesic polar system, the Gaussian curvature $K$ can be expressed as -$$ -K = - \frac{(\sqrt{G})_{\rho \rho}}{\sqrt{G}} -$$. - -In addition, $K$ is constant and fulfills the following differential equation -$$ -(\sqrt{G})_{\rho \rho} + K\cdot \sqrt{G} = 0 -$$ - -Since $H$ and $S'$ have the same constant Gaussian curvature, they are locally isometric (Minding's theorem). That means that $\varphi$ is a local isometry between $H$ and $S'$. Furthermore, from Hadamard's theorem it follows that $\varphi$ is also a covering map.
- -Since $S'$ is simply connected, $\varphi$ is a homeomorphism, and hence, a (global) isometry. Therefore, $H$ and $S'$ are globally isometric, and because $H$ has infinite area, $S'=T_p(S)$ has infinite area as well. $\square$ - -Lemma 2: For each $p\in S'$ there exists a parametrization $x:U \subset \mathbb{R}^{2} \longrightarrow S', \qquad p \in x(U)$, such that the coordinate curves of $x$ are asymptotic curves of $ x(U) = V'$ and form a Tchebyshef net. - -Lemma 3: Let $V' \subset S'$ be a coordinate neighborhood of $S'$ such that the coordinate curves are asymptotic curves in $V'$. Then the area A of any quadrilateral formed by the coordinate curves is smaller than $2\pi$. - -The next goal is to show that $x$ is a parametrization of $S'$. - -Lemma 4: For a fixed $t$, the curve $x(s,t), -\infty < s < +\infty $, is an asymptotic curve with $s$ as arc length. - -The following two lemmas, together with Lemma 8, demonstrate the existence of a parametrization $x:\mathbb{R}^{2} \longrightarrow S'$. - -Lemma 5: $x$ is a local diffeomorphism. - -Lemma 6: $x$ is surjective. - -Lemma 7: On $S'$ there are two differentiable linearly independent vector fields which are tangent to the asymptotic curves of $S'$. - -Lemma 8: $x$ is injective. - -Proof of Hilbert's theorem:
- -First, it will be assumed that an isometric immersion from a complete surface $S$ with negative curvature exists: $\psi:S \longrightarrow \mathbb{R}^{3}$ - -As stated in the observations, the tangent plane $T_p(S)$ is endowed with the metric induced by the exponential map $\exp_p: T_p(S) \longrightarrow S$. Moreover, $\varphi = \psi \circ \exp_p:S' \longrightarrow \mathbb{R}^{3}$ is an isometric immersion and Lemmas 5, 6, and 8 show the existence of a parametrization $x:\mathbb{R}^{2} \longrightarrow S'$ of the whole $S'$, such that the coordinate curves of $x$ are the asymptotic curves of $S'$. This result was provided by Lemma 4. Therefore, $S'$ can be covered by a union of "coordinate" quadrilaterals $Q_{n}$ with $ Q_{n} \subset Q_{n+1}$. By Lemma 3, the area of each quadrilateral is smaller than $2 \pi $. On the other hand, by Lemma 1, the area of $S'$ is infinite, so it is unbounded. This is a contradiction and the proof is concluded. $\square$ - -* The Nash embedding theorem states that every Riemannian manifold can be isometrically embedded into some Euclidean space. diff --git a/wiki/wikipedia/3528.txt b/wiki/wikipedia/3528.txt deleted file mode 100644 index 3d921c9d3b9a089f827b1b55218fc781b525d689..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3528.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematics, the Borel fixed-point theorem is a fixed-point theorem in algebraic geometry generalizing the Lie–Kolchin theorem. The result was proved by Armand Borel in 1956. - -If G is a connected, solvable, linear algebraic group acting regularly on a non-empty, complete algebraic variety V over an algebraically closed field k, then there is a G fixed-point of V. - -A more general version of the theorem holds over a field k that is not necessarily algebraically closed. A solvable algebraic group G is split over k or k-split if G admits a composition series whose composition factors are isomorphic (over k) to the additive group $\mathbb G_a$ or the multiplicative group $\mathbb G_m$. If G is a connected, k-split solvable algebraic group acting regularly on a complete variety V having a k-rational point, then there is a G fixed-point of V. diff --git a/wiki/wikipedia/3529.txt b/wiki/wikipedia/3529.txt deleted file mode 100644 index abb10e111fa5285aca5a2888aac92b3a7ccec87b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3529.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, the Ellis–Numakura lemma states that if S is a non-empty semigroup with a topology such that S is compact and the product is semi-continuous, then S has an idempotent element p (that is, with pp = p). The lemma is named after Robert Ellis and Katsui Numakura. - -Applying this lemma to the Stone–Čech compactification βN of the natural numbers shows that there are idempotent elements in βN. The product on βN is not continuous, but is only semi-continuous (right or left, depending on the preferred construction, but never both). - -The proof is as follows: - -*By compactness and Zorn's lemma, there is a minimal non-empty compact subsemigroup of S, so replacing S by this subsemigroup we can assume S is minimal. - -*Choose p in S. The set Sp is a non-empty compact subsemigroup, so by minimality it is S and in particular contains p, so the set of elements q with qp = p is non-empty. - -*The set of all elements q with qp = p is a compact semigroup, and is nonempty by the previous step, so by minimality it is the whole of S and therefore contains p. So pp = p.
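The simplest instance of the lemma is a finite semigroup, which is compact in the discrete topology and whose product is trivially continuous. The following Python sketch (our own illustration, not part of the original article) finds an idempotent in a finite semigroup, using the standard fact that for a semigroup with N elements the power a^(N!) is idempotent:

from math import factorial

def find_idempotent(elements, mul):
    # Returns e with mul(e, e) == e. With N = len(elements), the power
    # a^(N!) is idempotent: the period of a divides N!, and N! is at
    # least the index of a.
    a = elements[0]
    k = factorial(len(elements))
    e, base = None, a
    while k:                        # fast exponentiation: e = a^(N!)
        if k & 1:
            e = base if e is None else mul(e, base)
        base = mul(base, base)
        k >>= 1
    return e

# Example: the subsemigroup {2, 4, 8} of the integers modulo 12.
e = find_idempotent([2, 4, 8], lambda a, b: (a * b) % 12)
print(e, (e * e) % 12)              # 4 4 — indeed pp = p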
diff --git a/wiki/wikipedia/353.txt b/wiki/wikipedia/353.txt deleted file mode 100644 index 4c7c3458556fe188a9daf4063f6528d8bc76d33e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/353.txt +++ /dev/null @@ -1,163 +0,0 @@ -In mathematics, generalized means (or power means or Hölder means, from Otto Hölder) are a family of functions for aggregating sets of numbers, which include as special cases the Pythagorean means (arithmetic, geometric, and harmonic means). For an exponent $p \neq 0$ and positive weights $w_i$ summing to 1, the weighted power mean of positive numbers $x_1,\dots,x_n$ is defined as $M_p(x_1,\dots,x_n) = \left(\sum_{i=1}^n w_ix_{i}^p \right)^{1/p}$, with $M_0$, $M_{\infty}$ and $M_{-\infty}$ defined as the limiting cases below. - -Proof of $\textstyle \lim_{p \to 0} M_p = M_0$ (geometric mean) - -We can rewrite the definition of Mp using the exponential function - -M_p(x_1,\dots,x_n) = \exp{\left( \ln{\left[\left(\sum_{i=1}^n w_ix_{i}^p \right)^{1/p}\right]} \right) } = \exp{\left( \frac{\ln{\left(\sum_{i=1}^n w_ix_{i}^p \right)}}{p} \right) } - -In the limit p → 0, we can apply L'Hôpital's rule to the argument of the exponential function. Differentiating the numerator and denominator with respect to p, we have - -\begin{align} - -\lim_{p \to 0} \frac{\ln{\left(\sum_{i=1}^n w_ix_{i}^p \right)}}{p} &= \lim_{p \to 0} \frac{\frac{\sum_{i=1}^n w_i x_i^p \ln{x_i}}{\sum_{j=1}^n w_j x_j^p}}{1} \\ - -&= \lim_{p \to 0} \frac{\sum_{i=1}^n w_i x_i^p \ln{x_i}}{\sum_{j=1}^n w_j x_j^p} \\ - -&= \sum_{i=1}^n \frac{ w_i \ln{x_i}}{ \lim_{p \to 0} \sum_{j=1}^n w_j \left( \frac{x_j}{x_i} \right)^p} \\ - -&= \sum_{i=1}^n w_i \ln{x_i} \\ - -&= \ln{\left(\prod_{i=1}^n x_i^{w_i} \right)} - -\end{align} - -By the continuity of the exponential function, we can substitute back into the above relation to obtain - -\lim_{p \to 0} M_p(x_1,\dots,x_n) = \exp{\left( \ln{\left(\prod_{i=1}^n x_i^{w_i} \right)} \right)} = \prod_{i=1}^n x_i^{w_i} = M_0(x_1,\dots,x_n) - -as desired. - -Proof of $\textstyle \lim_{p \to \infty} M_p = M_\infty$ and $\textstyle \lim_{p \to -\infty} M_p = M_{-\infty}$ - -Assume (possibly after relabeling and combining terms together) that $x_1 \geq \dots \geq x_n$. Then - -\begin{align} - -\lim_{p \to \infty} M_p(x_1,\dots,x_n) &= \lim_{p \to \infty} \left( \sum_{i=1}^n w_i x_i^p \right)^{1/p} \\ - -&= x_1 \lim_{p \to \infty} \left( \sum_{i=1}^n w_i \left( \frac{x_i}{x_1} \right)^p \right)^{1/p} \\ - -&= x_1 = M_\infty (x_1,\dots,x_n). - -\end{align} - -The formula for $M_{-\infty}$ follows from $M_{-\infty} (x_1,\dots,x_n) = \frac{1}{M_\infty (1/x_1,\dots,1/x_n)}.$ - -Let $x_1, \dots, x_n$ be a sequence of positive real numbers. Then the following properties hold: - -#$\min(x_1, \dots, x_n) \le M_p(x_1, \dots, x_n) \le \max(x_1, \dots, x_n)$ — each generalized mean always lies between the smallest and largest of the x values. - -#$M_p(x_1, \dots, x_n) = M_p(P(x_1, \dots, x_n))$, where $P$ is a permutation operator — each generalized mean is a symmetric function of its arguments; permuting the arguments of a generalized mean does not change its value. - -#$M_p(b x_1, \dots, b x_n) = b \cdot M_p(x_1, \dots, x_n)$ — like most means, the generalized mean is a homogeneous function of its arguments x1, ..., xn. That is, if b is a positive real number, then the generalized mean with exponent p of the numbers $b\cdot x_1,\dots, b\cdot x_n$ is equal to b times the generalized mean of the numbers x1, ..., xn. - -#$M_p(x_1, \dots, x_{n \cdot k}) = M_p\left[M_p(x_1, \dots, x_{k}), M_p(x_{k + 1}, \dots, x_{2 \cdot k}), \dots, M_p(x_{(n - 1) \cdot k + 1}, \dots, x_{n \cdot k})\right]$ — like the quasi-arithmetic means, the computation of the mean can be split into computations of equal sized sub-blocks. This enables use of a divide and conquer algorithm to calculate the means, when desirable; a numerical illustration of this and of the limiting cases is given below.
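As a concrete check of the definition, the limiting cases, and the block-splitting property above, here is a small Python sketch (written for this article, not taken from it; equal weights w_i = 1/n assumed):

import math

def power_mean(xs, p):
    n = len(xs)
    if p == 0:                    # limiting case: the geometric mean M_0
        return math.exp(sum(math.log(x) for x in xs) / n)
    return (sum(x ** p for x in xs) / n) ** (1.0 / p)

xs = [1.0, 2.0, 4.0]
print(power_mean(xs, 1))          # 2.333... (arithmetic mean)
print(power_mean(xs, 0))          # 2.0      (geometric mean)
print(power_mean(xs, -1))         # 1.714... (harmonic mean)
print(power_mean(xs, 1e-9))       # ~2.0: M_p -> M_0 as p -> 0
print(power_mean(xs, 60))         # ~4.0: M_p -> max(xs) as p -> +infinity
print(power_mean(xs, -60))        # ~1.0: M_p -> min(xs) as p -> -infinity

# Property 4 (block splitting), with n·k = 4 values in two blocks of 2:
ys = [1.0, 2.0, 4.0, 8.0]
print(power_mean(ys, 3))
print(power_mean([power_mean(ys[:2], 3), power_mean(ys[2:], 3)], 3))  # equal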
- -In general, if p < q, then - -M_p(x_1, \dots, x_n) \le M_q(x_1, \dots, x_n) - -and the two means are equal if and only if x1 = x2 = ... = xn. - -The inequality is true for real values of p and q, as well as positive and negative infinity values. - -It follows from the fact that, for all real p, - -\frac{\partial}{\partial p}M_p(x_1, \dots, x_n) \geq 0 - -which can be proved using Jensen's inequality. - -In particular, for p ∈ {−1, 0, 1}, the generalized mean inequality implies the Pythagorean means inequality as well as the inequality of arithmetic and geometric means. - -We will prove the weighted power means inequality; for the purpose of the proof we will assume the following without loss of generality: - -\begin{align} - -w_i \in [0, 1] \\ - -\sum_{i=1}^nw_i = 1 - -\end{align} - -The proof for unweighted power means is easily obtained by substituting wi = 1/n. - -Suppose the inequality between power means with exponents p and q holds: - -\sqrt[p]{\sum_{i=1}^nw_ix_i^p}\geq \sqrt[q]{\sum_{i=1}^nw_ix_i^q} - -Applying this to the reciprocals $1/x_i$ gives: - -\sqrt[p]{\sum_{i=1}^n\frac{w_i}{x_i^p}}\geq \sqrt[q]{\sum_{i=1}^n\frac{w_i}{x_i^q}} - -We raise both sides to the power of −1 (a strictly decreasing function in the positive reals): - -\sqrt[-p]{\sum_{i=1}^nw_ix_i^{-p}}=\sqrt[p]{\frac{1}{\sum_{i=1}^nw_i\frac{1}{x_i^p}}}\leq \sqrt[q]{\frac{1}{\sum_{i=1}^nw_i\frac{1}{x_i^q}}}=\sqrt[-q]{\sum_{i=1}^nw_ix_i^{-q}} - -We get the inequality for means with exponents -p and -q, and we can use the same reasoning backwards, thus proving the inequalities to be equivalent, which will be used in some of the later proofs. - -For any q > 0 and non-negative weights summing to 1, the following inequality holds: - -\sqrt[-q]{\sum_{i=1}^n w_i x_i^{-q}} \leq \prod_{i=1}^n x_i^{w_i} \leq \sqrt[q]{\sum_{i=1}^n w_i x_i^q}. - -The proof follows from Jensen's inequality, making use of the fact that the logarithm is concave: - -\log \prod_{i=1}^n x_i^{w_i} = \sum_{i=1}^n w_i\log x_i \leq \log \sum_{i=1}^n w_i x_i. - -By applying the exponential function to both sides and observing that as a strictly increasing function it preserves the sign of the inequality, we get - -\prod_{i=1}^n x_i^{w_i} \leq \sum_{i=1}^n w_i x_i. - -Replacing the xi by their qth powers and taking qth roots, we are done for the inequality with positive q; the case of negative q is identical. - -We are to prove that for any p < q the following inequality holds: - -\sqrt[p]{\sum_{i=1}^nw_ix_i^p}\leq \sqrt[q]{\sum_{i=1}^nw_ix_i^q} - -If p is negative and q is positive, the inequality is equivalent to the one proved above: - -\sqrt[p]{\sum_{i=1}^nw_ix_i^p}\leq \prod_{i=1}^nx_i^{w_i} \leq\sqrt[q]{\sum_{i=1}^nw_ix_i^q} - -The proof for positive p and q is as follows: Define the following function: f : R+ → R+ $f(x)=x^{\frac{q}{p}}$. f is a power function, so it has a second derivative: - -f''(x) = \left(\frac{q}{p} \right) \left( \frac{q}{p}-1 \right)x^{\frac{q}{p}-2} - -which is strictly positive within the domain of f, since q > p, so we know f is convex.
- -Using this and Jensen's inequality, we get: - -\begin{align} - -f \left( \sum_{i=1}^nw_ix_i^p \right) &\leq \sum_{i=1}^nw_if(x_i^p) \\[3pt] - -\sqrt[\frac{p}{q}]{\sum_{i=1}^nw_ix_i^p} &\leq \sum_{i=1}^nw_ix_i^q - -\end{align} - -After raising both sides to the power of 1/q (an increasing function, since 1/q is positive) we get the inequality which was to be proven: - -\sqrt[p]{\sum_{i=1}^nw_ix_i^p}\leq\sqrt[q]{\sum_{i=1}^nw_ix_i^q} - -Using the previously shown equivalence we can prove the inequality for negative p and q by replacing them with -q and -p, respectively. - -The power mean can be generalized further to the generalized f-mean: - - M_f(x_1,\dots,x_n) = f^{-1} \left({\frac{1}{n}\cdot\sum_{i=1}^n{f(x_i)}}\right) - -This covers the geometric mean without using a limit with f(x) = log(x). The power mean is obtained for $f(x) = x^p$. - -A power mean serves as a non-linear moving average which is shifted towards small signal values for small p and emphasizes big signal values for big p. Given an efficient implementation of a moving arithmetic mean called smooth one can implement a moving power mean according to the following Haskell code. - - - --- apply the p-th power pointwise, smooth, then undo the power - -powerSmooth :: Floating a => ([a] -> [a]) -> a -> [a] -> [a] - -powerSmooth smooth p = map (** recip p) . smooth . map (**p) - - - -* For big p it can serve as an envelope detector on a rectified signal. - -* For small p it can serve as a baseline detector on a mass spectrum. diff --git a/wiki/wikipedia/3530.txt b/wiki/wikipedia/3530.txt deleted file mode 100644 index 702809c0ce3ff3e9796eb88f1f116188009725e5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3530.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, Shintani's unit theorem, introduced by Takuro Shintani, is a refinement of Dirichlet's unit theorem and states that a subgroup of finite index of the totally positive units of a number field has a fundamental domain given by a rational polyhedric cone in the Minkowski space of the field. diff --git a/wiki/wikipedia/3531.txt b/wiki/wikipedia/3531.txt deleted file mode 100644 index 158c50a7d95b1464d0a65a58f8721c422e9830bd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3531.txt +++ /dev/null @@ -1,35 +0,0 @@ -In logic, modus non excipiens is a valid rule of inference that is closely related to modus ponens. This argument form was created by Bart Verheij to address certain arguments that have the form of modus ponens arguments but must nevertheless be considered invalid. An instance of a particular modus ponens type argument is - -A large majority accept A as true. Therefore, there exists a presumption in favor of A. - -However, this is an argumentum ad populum, and is not deductively valid. The problem can be addressed by drawing a distinction between two types of inference identified by Verheij: - -Modus ponens: - -Premises: - -*As a rule, if P then Q - -*P - -Conclusion: - -*Q - -and - -Modus non excipiens: - -Premises: - -*As a rule, if P then Q - -*P - -*It is not the case that there is an exception to the rule that if P then Q - -Conclusion: - -*Q - -Category:Rules of inference diff --git a/wiki/wikipedia/3532.txt b/wiki/wikipedia/3532.txt deleted file mode 100644 index 3994177a6093cf4107195bbd6986c784831661c3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3532.txt +++ /dev/null @@ -1,159 +0,0 @@ -In mathematics, Hall's marriage theorem, proved by Philip Hall (1935), is a theorem with two equivalent formulations: - -* The combinatorial formulation deals with a collection of finite sets.
It gives a necessary and sufficient condition for being able to select a distinct element from each set. - -* The graph theoretic formulation deals with a bipartite graph. It gives a necessary and sufficient condition for finding a matching that covers at least one side of the graph. - -Let $S$ be a (possibly infinite) family of finite subsets of $X$, where the members of $S$ are counted with multiplicity (that is, $S$ may contain the same set several times). - -A transversal for $S$ is the image of an injective function $f$ from $S$ to $X$ such that $f(s)$ is an element of the set $s$ for every $s$ in the family $S$. In other words, $f$ selects one representative from each set in $S$ in such a way that no two of these representatives are equal. An alternative term for transversal is system of distinct representatives. - -The collection S satisfies the marriage condition when for each subfamily $W \subseteq S$, -$$ -|W| \le \Bigl|\bigcup_{A \in W} A\Bigr|. -$$ - -Restated in words, the marriage condition asserts that every subfamily of $S$ contains at least as many distinct members as the number of sets in the subfamily. - -If the marriage condition fails then there cannot be a transversal $f$ of $S$. - -Suppose that the marriage condition fails, i.e., that for some subcollection $W_0$ of $S$, $\textstyle |W_0| > |\bigcup_{A \in W_0} A|.$ Suppose, by way of contradiction, that a transversal $f(S)$ of $S$ also exists. - -The restriction of $f$ to the offending subcollection $W_0$ would be an injective function from $W_0$ into $\textstyle \bigcup_{A \in W_0} A$. This is impossible by the pigeonhole principle since $\textstyle |W_0| > |\bigcup_{A \in W_0} A| $. Therefore no transversal can exist if the marriage condition fails. - -Hall's theorem states that the converse is also true: - -A family S of finite sets has a transversal if and only if S satisfies the marriage condition. - -Example 1: Consider S = {A1, A2, A3} with - -A1 = {1, 2, 3} - -A2 = {1, 4, 5} - -A3 = {3, 5}. - -A valid transversal would be (1, 4, 5). (Note this is not unique: (2, 1, 3) works equally well, for example.) - -Example 2: Consider S = {A1, A2, A3, A4} with - -A1 = {2, 3, 4, 5} - -A2 = {4, 5} - -A3 = {5} - -A4 = {4}. - -No valid transversal exists; the marriage condition is violated as is shown by the subfamily W = {A2, A3, A4}. Here the number of sets in the subfamily is |W| = 3, while the union of the three sets A2 ∪ A3 ∪ A4 contains only 2 elements of X. - -Example 3: Consider S = {A1, A2, A3, A4} with - -A1 = {a, b, c} - -A2 = {b, d} - -A3 = {a, b, d} - -A4 = {b, d}. - -The only valid transversals are (c, b, a, d) and (c, d, a, b). - -The standard example of an application of the marriage theorem is to imagine two groups: one of n men and one of n women. For each woman, there is a subset of the men, any one of which she would happily marry; and any man would be happy to marry a woman who wants to marry him. Consider whether it is possible to pair up (in marriage) the men and women so that every person is happy. - -If we let Ai be the set of men that the i-th woman would be happy to marry, then the marriage theorem states that each woman can happily marry a unique man if and only if for any subset $I$ of the women, the number of men whom at least one of these women would be happy to marry, $|\bigcup_{i \in I} A_i|$, is at least as big as the number of women in that subset, $|I|$. - -This condition is necessary, as if it does not hold, there are not enough men to share among the $|I|$ women.
What is interesting is that it is also a sufficient condition. - -Let G be a finite bipartite graph with bipartite sets X and Y (i.e. G := (X +Y, E)). An X-perfect matching (also called an X-saturating matching) is a matching which covers every vertex in X. - -For a subset W of X, let NG(W) denote the neighborhood of W in G, i.e., the set of all vertices in Y adjacent to some element of W. The marriage theorem in this formulation states that there is an X-perfect matching if and only if for every subset W of X: -$$ -|W| \leq |N_G(W)|. -$$ - -In other words: every subset W of X has sufficiently many adjacent vertices in Y. - -Easy direction: we assume that some matching M saturates every vertex of X, and prove that Hall's condition is satisfied for all W ⊆ X. Let M(W) denote the set of all vertices in Y matched by M to a given W. By definition of a matching, |M(W)| = |W|. But M(W) ⊆ NG(W), since all elements of M(W) are neighbours of W. So, |NG(W)| ≥ |M(W)| and hence, |NG(W)| ≥ |W|. - -Hard direction: we assume that there is no X-perfect matching and prove that Hall's condition is violated for at least one W ⊆ X. Let M be a maximum matching, and let u be a vertex of X not saturated by M. Consider all alternating paths (i.e., paths in G alternately using edges outside and inside M) starting from u. Let the set of all points in Y connected to u by these alternating paths be Z, and the set of all points in X connected to u by these alternating paths (including u itself) be W. No maximal alternating path can end in a vertex in Y; otherwise it would be an augmenting path, and we could augment M to a strictly larger matching by toggling the status (belongs to M or not) of all the edges of the path. Thus every vertex in Z is matched by M to a vertex in W \ {u}. Conversely, every vertex v in W \ {u} is matched by M to a vertex in Z (namely, the vertex preceding v on an alternating path ending at v). Thus, M provides a bijection of W \ {u} and Z, which implies |W| = |Z| + 1. On the other hand, NG(W) ⊆ Z. Indeed, let v in NG(W) be connected to a vertex w in W. If the edge (w,v) is in M, then v precedes w in any alternating path that goes from u to w. If the edge (w,v) is not in M, then one can use it to extend any alternating path that goes from u to w. In either case, v is visited by some alternating path issued from u. Hence NG(W) ⊆ Z, and so |NG(W)| ≤ |Z| = |W| − 1 < |W|. - -Define a Hall violator as a subset W of X for which |NG(W)| < |W|. If W is a Hall violator, then there is no matching that saturates all vertices of W. Therefore, there is also no matching that saturates X. Hall's marriage theorem says that a graph contains an X-perfect matching if and only if it contains no Hall violators. The following algorithm proves the hard direction of the theorem: it finds either an X-perfect matching or a Hall violator. It uses, as a subroutine, an algorithm that, given a matching M and an unmatched vertex x0, either finds an M-augmenting path or a Hall violator containing x0. - -# Initialize M := {}. // Empty matching. - -# Assert: M is a matching in G. - -# If M saturates all vertices of X, then return the X-perfect matching M. - -# Let x0 be an unmatched vertex (a vertex in X \ V(M)). - -# Using the Hall violator algorithm, find either a Hall violator or an M-augmenting path. - -# In the first case, return the Hall violator. - -# In the second case, use the M-augmenting path to increase the size of M (by one edge), and go back to step 2.
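The following Python sketch implements this procedure (the graph representation and all names are ours, not from the original text): it returns either an X-perfect matching or a Hall violator, using a depth-first search for augmenting paths.

def match_or_violator(graph):
    # graph maps each vertex x in X to the set of its neighbours in Y.
    match_x, match_y = {}, {}

    def augment(x, visited):
        # Try to find an M-augmenting path starting at x (alternating DFS).
        for y in graph[x]:
            if y not in visited:
                visited.add(y)
                if y not in match_y or augment(match_y[y], visited):
                    match_x[x], match_y[y] = y, x
                    return True
        return False

    for x0 in graph:
        if x0 not in match_x:
            visited = set()
            if not augment(x0, visited):
                # The X-vertices reached by alternating paths from x0
                # form a Hall violator W with |N_G(W)| = |W| - 1.
                return None, {x0} | {match_y[y] for y in visited}
    return match_x, None

g = {1: {'a', 'b'}, 2: {'a'}, 3: {'b'}}
print(match_or_violator(g))    # (None, {1, 2, 3}): here N_G(W) = {a, b}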
At each iteration, M grows by one edge. Hence, this algorithm must end after at most |E| iterations. Each iteration takes at most |X| time. The total runtime complexity is similar to the Ford–Fulkerson method for finding a maximum cardinality matching. - -Let S = (A1, A2, ..., An) where the Ai are finite sets which need not be distinct. Let the set X = {A1, A2, ..., An} (that is, the set of names of the elements of S) and the set Y be the union of all the elements in all the Ai. - -We form a finite bipartite graph G := (X +Y, E), with bipartite sets X and Y, by joining any element in Y to each Ai which it is a member of. A transversal of S is an X-perfect matching (a matching which covers every vertex in X) of the bipartite graph G. Thus a problem in the combinatorial formulation can be easily translated to a problem in the graph-theoretic formulation. - -Hall's theorem can be proved (non-constructively) based on Sperner's lemma. - -The theorem has many other interesting "non-marital" applications. For example, take a standard deck of cards, and deal them out into 13 piles of 4 cards each. Then, using the marriage theorem, we can show that it is always possible to select exactly 1 card from each pile, such that the 13 selected cards contain exactly one card of each rank (Ace, 2, 3, ..., Queen, King). More generally, any regular bipartite graph has a perfect matching. - -More abstractly, let G be a group, and H be a finite index subgroup of G. Then the marriage theorem can be used to show that there is a set T such that T is a transversal for both the set of left cosets and right cosets of H in G. - -The marriage theorem is used in the usual proofs of the fact that an (r × n) Latin rectangle can always be extended to an ((r +1) × n) Latin rectangle when r < n, and so, ultimately, to a Latin square. - -This theorem is part of a collection of remarkably powerful theorems in combinatorics, all of which are related to each other in an informal sense in that it is more straightforward to prove one of these theorems from another of them than from first principles. These include: - -* The König–Egerváry theorem (1931) (Dénes Kőnig, Jenő Egerváry) - -* König's theorem - -* Menger's theorem (1927) - -* The max-flow min-cut theorem (Ford–Fulkerson algorithm) - -* The Birkhoff–Von Neumann theorem (1946) - -* Dilworth's theorem. - -In particular, there are simple proofs of the implications Dilworth's theorem ⇔ Hall's theorem ⇔ König–Egerváry theorem ⇔ König's theorem. - -By examining Philip Hall's original proof carefully, Marshall Hall, Jr. (no relation to Philip Hall) was able to tweak the result in a way that permitted the proof to work for infinite S. This variant refines the marriage theorem and provides a lower bound on the number of transversals that a given S may have. This variant is: - -Suppose that (A1, A2, ..., An), where the Ai are finite sets that need not be distinct, is a family of sets satisfying the marriage condition, and suppose that |Ai| ≥ r for i = 1, ..., n. Then the number of different transversals for the family is at least r! if r ≤ n and r(r − 1) ... (r − n + 1) if r > n. - -Recall that a transversal for a family S is an ordered sequence, so two different transversals could have exactly the same elements. For instance, the family A1 = {1, 2, 3}, A2 = {1, 2, 5} has both (1, 2) and (2, 1) as distinct transversals. - -The following example, due to Marshall Hall, Jr., shows that the marriage condition will not guarantee the existence of a transversal in an infinite family in which infinite sets are allowed.
- -Let S be the family A0 = {1, 2, 3, ...}, A1 = {1}, A2 = {2}, ..., Ai = {i}, ... - -The marriage condition holds for this infinite family, but no transversal can be constructed. - -The more general problem of selecting a (not necessarily distinct) element from each of a collection of non-empty sets (without restriction as to the number of sets or the size of the sets) is permitted in general only if the axiom of choice is accepted. - -The marriage theorem does extend to the infinite case if stated properly. Given a bipartite graph with sides A and B, we say that a subset C of B is smaller than or equal in size to a subset D of A in the graph if there exists an injection in the graph (namely, using only edges of the graph) from C to D, and that it is strictly smaller in the graph if in addition there is no injection in the graph in the other direction. Note that omitting "in the graph" yields the ordinary notion of comparing cardinalities. The infinite marriage theorem states that there exists an injection from A to B in the graph, if and only if there is no subset C of A such that N(C) is strictly smaller than C in the graph. - -A fractional matching in a graph is an assignment of non-negative weights to each edge, such that the sum of weights adjacent to each vertex is at most 1. A fractional matching is X-perfect if the sum of weights adjacent to each vertex is exactly 1. The following are equivalent for a bipartite graph G = (X+Y, E): - -* G admits an X-perfect matching. - -* G admits an X-perfect fractional matching. The implication follows directly from the fact that X-perfect matching is a special case of an X-perfect fractional matching, in which each weight is either 1 (if the edge is in the matching) or 0 (if it is not). - -* G satisfies Hall's marriage condition. The implication holds because, for each subset W of X, the sum of weights near vertices of W is |W|, so the edges adjacent to them are necessarily adjacent to at least |W| vertices of Y. - -When Hall's condition does not hold, the original theorem tells us only that a perfect matching does not exist, but does not tell us what the largest matching is that does exist. To learn this information, we need the notion of deficiency of a graph. Given a bipartite graph G = (X+Y, E), the deficiency of G w.r.t. X is the maximum, over all subsets W of X, of the difference |W| - |NG(W)|. The larger the deficiency, the farther the graph is from satisfying Hall's condition. - -Using Hall's marriage theorem, it can be proved that, if the deficiency of a bipartite graph G is d, then G admits a matching of size at least |X|-d. - -* A generalization of Hall's theorem to general graphs (that are not necessarily bipartite) is provided by the Tutte theorem. - -* A generalization of Hall's theorem to bipartite hypergraphs is provided by various Hall-type theorems for hypergraphs. diff --git a/wiki/wikipedia/3533.txt b/wiki/wikipedia/3533.txt deleted file mode 100644 index bb5fb57fb1d94d397a79036d8bb1c35f816d5f9e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3533.txt +++ /dev/null @@ -1,29 +0,0 @@ -In mathematics, Kronecker's lemma (see, e.g., Shiryaev) is a result about the relationship between convergence of infinite sums and convergence of sequences. The lemma is often used in the proofs of theorems concerning sums of independent random variables such as the strong law of large numbers. The lemma is named after the German mathematician Leopold Kronecker.
- -If $(x_n)_{n=1}^\infty$ is an infinite sequence of real numbers such that -$$ -\sum_{m=1}^\infty x_m = s -$$ - -exists and is finite, then we have for all $0 < b_1 \le b_2 \le b_3 \le \cdots$ with $b_n \to \infty$ that -$$ -\lim_{n \to \infty} \frac1{b_n}\sum_{k=1}^{n} b_k x_k = 0. -$$ - -Proof: Let $S_k$ denote the partial sums of the x's. Using summation by parts, -$$ -\frac1{b_n}\sum_{k=1}^{n} b_k x_k = S_n - \frac1{b_n}\sum_{k=1}^{n-1}(b_{k+1} - b_k)S_k. -$$ - -Pick any ε > 0. Now choose N so that $S_k$ is ε-close to s for k > N. This can be done as the sequence $S_k$ converges to s. Then the right hand side is: -$$ -S_n - \frac1{b_n}\sum_{k=1}^{N-1}(b_{k+1} - b_k)S_k - \frac1{b_n}\sum_{k=N}^{n-1}(b_{k+1} - b_k)S_k -$$ -$$ -= S_n - \frac1{b_n}\sum_{k=1}^{N-1}(b_{k+1} - b_k)S_k - \frac1{b_n}\sum_{k=N}^{n-1}(b_{k+1} - b_k)s - \frac1{b_n}\sum_{k=N}^{n-1}(b_{k+1} - b_k)(S_k - s) -$$ -$$ -= S_n - \frac1{b_n}\sum_{k=1}^{N-1}(b_{k+1} - b_k)S_k - \frac{b_n-b_N}{b_n}s - \frac1{b_n}\sum_{k=N}^{n-1}(b_{k+1} - b_k)(S_k - s). -$$ - -Now, let n go to infinity. The first term goes to s, which cancels with the third term. The second term goes to zero (as the sum is a fixed value). Since the b sequence is increasing, the last term is bounded by $\epsilon (b_n - b_N)/b_n \leq \epsilon$.
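A quick numerical illustration of the lemma (our own, with x_n = (-1)^(n+1)/n, whose series converges to ln 2, and b_n = √n):

import math

x = lambda k: (-1) ** (k + 1) / k   # the series sum of x_k converges
b = lambda k: math.sqrt(k)          # 0 < b_1 <= b_2 <= ..., b_n -> infinity

for n in (10, 1000, 100000):
    print(n, sum(b(k) * x(k) for k in range(1, n + 1)) / b(n))
# The printed values tend to 0, as Kronecker's lemma asserts.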
diff --git a/wiki/wikipedia/3534.txt b/wiki/wikipedia/3534.txt deleted file mode 100644 index 3d94a4d58532dcd9227220f6aaf19049c7ab8f2e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3534.txt +++ /dev/null @@ -1,179 +0,0 @@ -In computer science, a readers–writer lock (also known as a single-writer lock, a multi-reader lock, a push lock, or an MRSW lock) is a synchronization primitive that solves one of the readers–writers problems. An RW lock allows concurrent access for read-only operations, while write operations require exclusive access. This means that multiple threads can read the data in parallel but an exclusive lock is needed for writing or modifying data. When a writer is writing the data, all other writers or readers will be blocked until the writer is finished writing. A common use might be to control access to a data structure in memory that cannot be updated atomically and is invalid (and should not be read by another thread) until the update is complete. - -Readers–writer locks are usually constructed on top of mutexes and condition variables, or on top of semaphores. - -Some RW locks allow the lock to be atomically upgraded from being locked in read-mode to write-mode, as well as being downgraded from write-mode to read-mode. Upgradable RW locks can be tricky to use safely, since whenever two threads holding reader locks both attempt to upgrade to writer locks, a deadlock is created that can only be broken by one of the threads releasing its reader lock. The deadlock can be avoided by allowing only one thread to acquire the lock in "read-mode with intent to upgrade to write" while there are no threads in write mode and possibly non-zero threads in read-mode. - -RW locks can be designed with different priority policies for reader vs. writer access. The lock can either be designed to always give priority to readers (read-preferring), to always give priority to writers (write-preferring) or be unspecified with regards to priority. These policies lead to different tradeoffs with regards to concurrency and starvation. - -* Read-preferring RW locks allow for maximum concurrency, but can lead to write-starvation if contention is high. This is because writer threads will not be able to acquire the lock as long as at least one reading thread holds it. Since multiple reader threads may hold the lock at once, this means that a writer thread may continue waiting for the lock while new reader threads are able to acquire the lock, even to the point where the writer may still be waiting after all of the readers which were holding the lock when it first attempted to acquire it have released the lock. Priority to readers may be weak, as just described, or strong, meaning that whenever a writer releases the lock, any blocking readers always acquire it next. - -* Write-preferring RW locks avoid the problem of writer starvation by preventing any new readers from acquiring the lock if there is a writer queued and waiting for the lock; the writer will acquire the lock as soon as all readers which were already holding the lock have completed. The downside is that write-preferring locks allow for less concurrency in the presence of writer threads, compared to read-preferring RW locks. The lock is also less performant because each operation, taking or releasing the lock for either read or write, is more complex, internally requiring taking and releasing two mutexes instead of one. This variation is sometimes also known as a "write-biased" readers–writer lock. - -* Unspecified priority RW locks do not provide any guarantees with regard to read vs. write access. Unspecified priority can in some situations be preferable if it allows for a more efficient implementation. - -Several implementation strategies for readers–writer locks exist, reducing them to synchronization primitives that are assumed to pre-exist. - -Raynal demonstrates how to implement an R/W lock using two mutexes and a single integer counter. The counter, b, tracks the number of blocking readers. One mutex, r, protects b and is only used by readers; the other, g (for "global") ensures mutual exclusion of writers. This requires that a mutex acquired by one thread can be released by another. The following is pseudocode for the operations:
- -Initialize - -* Set b to 0. - -* r is unlocked. - -* g is unlocked. - -Begin Read - -* Lock r. - -* Increment b. - -* If b = 1, lock g. - -* Unlock r. - -End Read - -* Lock r. - -* Decrement b. - -* If b = 0, unlock g. - -* Unlock r. - -Begin Write - -* Lock g. - -End Write - -* Unlock g. - -This implementation is read-preferring.
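Rendered directly in Python, the pseudocode above might look as follows (a sketch only; it relies on the fact that a threading.Lock may be released by a thread other than the one that acquired it):

import threading

class ReadPreferringRWLock:
    def __init__(self):
        self.r = threading.Lock()     # protects the reader counter b
        self.g = threading.Lock()     # held whenever any reader or writer is active
        self.b = 0                    # number of readers holding the lock

    def begin_read(self):
        with self.r:
            self.b += 1
            if self.b == 1:           # first reader locks out writers
                self.g.acquire()

    def end_read(self):
        with self.r:
            self.b -= 1
            if self.b == 0:           # last reader readmits writers
                self.g.release()

    def begin_write(self):
        self.g.acquire()

    def end_write(self):
        self.g.release()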
- -Alternatively, an RW lock can be implemented in terms of a condition variable, cond, an ordinary (mutex) lock, g, and various counters and flags describing the threads that are currently active or waiting. For a write-preferring RW lock one can use two integer counters and one boolean flag: - -* num_readers_active: the number of readers that have acquired the lock (integer) - -* num_writers_waiting: the number of writers waiting for access (integer) - -* writer_active: whether a writer has acquired the lock (boolean). - -Initially num_readers_active and num_writers_waiting are zero and writer_active is false. - -The lock and release operations can be implemented as
- -Begin Read - -* Lock g - -* While num_writers_waiting > 0 or writer_active: - -** wait cond, g - -* Increment num_readers_active - -* Unlock g. - -End Read - -* Lock g - -* Decrement num_readers_active - -* If num_readers_active = 0: - -** Notify cond (broadcast) - -* Unlock g. - -Begin Write - -* Lock g - -* Increment num_writers_waiting - -* While num_readers_active > 0 or writer_active is true: - -** wait cond, g - -* Decrement num_writers_waiting - -* Set writer_active to true - -* Unlock g. - -End Write - -* Lock g - -* Set writer_active to false - -* Notify cond (broadcast) - -* Unlock g.
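The same pseudocode, rendered as a Python sketch (threading.Condition bundles the mutex g and the condition variable cond into one object):

import threading

class WritePreferringRWLock:
    def __init__(self):
        self.cond = threading.Condition()   # pairs the lock g with cond
        self.num_readers_active = 0
        self.num_writers_waiting = 0
        self.writer_active = False

    def begin_read(self):
        with self.cond:
            while self.num_writers_waiting > 0 or self.writer_active:
                self.cond.wait()
            self.num_readers_active += 1

    def end_read(self):
        with self.cond:
            self.num_readers_active -= 1
            if self.num_readers_active == 0:
                self.cond.notify_all()      # "Notify cond (broadcast)"

    def begin_write(self):
        with self.cond:
            self.num_writers_waiting += 1
            while self.num_readers_active > 0 or self.writer_active:
                self.cond.wait()
            self.num_writers_waiting -= 1
            self.writer_active = True

    def end_write(self):
        with self.cond:
            self.writer_active = False
            self.cond.notify_all()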
- -Readers–writer locks are available in many languages and libraries, including: - -* POSIX standard pthread_rwlock_t and associated operations - -* ReadWriteLock interface and the ReentrantReadWriteLock locks in Java version 5 or above - -* Microsoft System.Threading.ReaderWriterLockSlim lock for C# and other .NET languages - -* std::shared_mutex read/write lock in C++17 - -* boost::shared_mutex and boost::upgrade_mutex locks in Boost C++ Libraries - -* SRWLock, added to the Windows operating system API as of Windows Vista. - -* sync.RWMutex in Go - -* Phase fair reader–writer lock, which alternates between readers and writers - -* std::sync::RwLock read/write lock in Rust - -* Poco::RWLock in POCO C++ Libraries - -* mse::recursive_shared_timed_mutex in the [//github.com/duneroadrunner/SaferCPlusPlus SaferCPlusPlus] library, a version of std::shared_timed_mutex that supports the recursive ownership semantics of std::recursive_mutex - -* txrwlock.ReadersWriterDeferredLock Readers/Writer Lock for Twisted - -The read-copy-update (RCU) algorithm is one solution to the readers–writers problem. RCU is wait-free for readers. The Linux kernel implements a special solution for few writers called seqlock. diff --git a/wiki/wikipedia/3535.txt b/wiki/wikipedia/3535.txt deleted file mode 100644 index dc7f94bdc90b3700314b1be75a0c1b87815d7aa6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3535.txt +++ /dev/null @@ -1,59 +0,0 @@ -In mathematics, the estimation lemma, also known as the ML inequality, gives an upper bound for a contour integral. If f is a complex-valued, continuous function on the contour Γ and if its absolute value |f (z)| is bounded by a constant M for all z on Γ, then -$$ -\left|\int_\Gamma f(z) dz\right| \le M l(\Gamma), -$$ - -where l(Γ) is the arc length of Γ. In particular, we may take the maximum -$$ -M:= \sup_{z\in\Gamma}|f(z)| -$$ - -as the upper bound. Intuitively, the lemma is very simple to understand. If a contour is thought of as many smaller contour segments connected together, then there will be a maximum |f (z)| for each segment. Out of all the maximum |f (z)|s for the segments, there will be an overall largest one. Hence, if the overall largest |f (z)| is summed over the entire path then the integral of f (z) over the path must be less than or equal to it. - -Formally, the inequality can be shown to hold using the definition of contour integral, the absolute value inequality for integrals and the formula for the length of a curve as follows: - -\left|\int_\Gamma f(z) dz \right| - -= \left|\int_\alpha^\beta f(\gamma(t))\gamma'(t) dt \right| - -\leq \int_\alpha^\beta \left|f(\gamma(t))\right|\left|\gamma'(t)\right| dt - -\leq M \int_\alpha^\beta \left|\gamma'(t)\right| dt = M l(\Gamma) - -The estimation lemma is most commonly used as part of the methods of contour integration, to show that the integral over part of a contour goes to zero as |z| goes to infinity. An example of such a case is shown below. - -Problem. - -Find an upper bound for -$$ -\left|\int_\Gamma \frac{1}{(z^2+1)^2} dz\right|, -$$ - -where Γ is the upper half-circle |z| = a with radius a > 1 traversed once in the counterclockwise direction. - -Solution. - -First observe that the length of the path of integration is half the circumference of a circle with radius a, hence -$$ -l(\Gamma)=\tfrac{1}{2}(2\pi a)=\pi a. -$$ - -Next we seek an upper bound M for the integrand when |z| = a. By the triangle inequality we see that -$$ -|z|^2=\left|z^2\right| = \left|z^2+1-1\right| \le \left|z^2+1\right|+1, -$$ - -therefore -$$ -\left|z^2+1\right|\ge |z|^2 - 1 = a^2 - 1>0 -$$ - -because |z| = a > 1 on Γ.
Hence -$$ -\left|\frac{1}{\left(z^2+1\right)^2}\right| \le \frac{1}{\left(a^2-1\right)^2}. -$$ - -Therefore, we apply the estimation lemma with M = 1/(a2 − 1)2. The resulting bound is -$$ -\left|\int_\Gamma \frac{1}{\left(z^2+1\right)^2}dz\right| \le \frac{\pi a}{\left(a^2-1\right)^2}. -$$ diff --git a/wiki/wikipedia/3536.txt b/wiki/wikipedia/3536.txt deleted file mode 100644 index f9d22716570e16eefdfaeecada5bf95519959e0d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3536.txt +++ /dev/null @@ -1,110 +0,0 @@ -In mathematical logic and automated theorem proving, resolution is a rule of inference leading to a refutation-complete theorem-proving technique for sentences in propositional logic and first-order logic. For propositional logic, systematically applying the resolution rule acts as a decision procedure for formula unsatisfiability, solving the (complement of the) Boolean satisfiability problem. For first-order logic, resolution can be used as the basis for a semi-algorithm for the unsatisfiability problem of first-order logic, providing a more practical method than one following from Gödel's completeness theorem. - -The resolution rule can be traced back to Davis and Putnam (1960); however, their algorithm required trying all ground instances of the given formula. This source of combinatorial explosion was eliminated in 1965 by John Alan Robinson's syntactical unification algorithm, which allowed one to instantiate the formula during the proof "on demand" just as far as needed to keep refutation completeness. - -The clause produced by a resolution rule is sometimes called a resolvent. - -The resolution rule in propositional logic is a single valid inference rule that produces a new clause implied by two clauses containing complementary literals. A literal is a propositional variable or the negation of a propositional variable. Two literals are said to be complements if one is the negation of the other (in the following, -$$ -\lnot c -$$ is taken to be the complement to $c$). The resulting clause contains all the literals that do not have complements. - -Formally: - -\frac{ - -a_1 \lor a_2 \lor \cdots \lor c, - -\quad b_1 \lor b_2 \lor \cdots \lor \neg c} - -{a_1 \lor a_2 \lor \cdots \lor b_1 \lor b_2 \lor \cdots} - -where - -all $a_i$, $b_i$, and $c$ are literals, - -the dividing line stands for "entails". - -The above may also be written as: - -\frac{ - -(\neg a_1 \land \neg a_2 \land \cdots) \rightarrow c, - -\quad c \rightarrow (b_1 \lor b_2 \lor \cdots)} - -{(\neg a_1 \land \neg a_2 \land \cdots) \rightarrow (b_1 \lor b_2 \lor \cdots)} - -Or schematically as: - - - -\frac{\Gamma_1 \cup\left\{ \ell\right\} \quad \Gamma_2 \cup\left\{ \overline{\ell}\right\} }{\Gamma_1 \cup\Gamma_2}|\ell| - - - -We have the following terminology: - -* The clauses $\Gamma_1 \cup\left\{ \ell\right\}$ and $\Gamma_2 \cup\left\{ \overline{\ell}\right\} $ are the inference’s premises - -* $\Gamma_1 \cup \Gamma_2$ (the resolvent of the premises) is its conclusion. - -* The literal $\ell$ is the left resolved literal, - -* The literal $\overline{\ell}$ is the right resolved literal, - -* $|\ell|$ is the resolved atom or pivot. - -The clause produced by the resolution rule is called the resolvent of the two input clauses. It is the principle of consensus applied to clauses rather than terms. - -When the two clauses contain more than one pair of complementary literals, the resolution rule can be applied (independently) for each such pair; however, the result is always a tautology.
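As a concrete illustration (ours, not from the original text), the propositional resolution rule fits in a few lines of Python, with a clause represented as a frozenset of literals and a literal as an (atom, polarity) pair:

def resolve(c1, c2, atom):
    # Resolve two clauses on the given atom, if they contain
    # complementary literals for it; otherwise return None.
    pos, neg = (atom, True), (atom, False)
    if pos in c1 and neg in c2:
        return (c1 - {pos}) | (c2 - {neg})
    if neg in c1 and pos in c2:
        return (c1 - {neg}) | (c2 - {pos})
    return None

c1 = frozenset({('a', True), ('c', True)})      # a ∨ c
c2 = frozenset({('b', True), ('c', False)})     # b ∨ ¬c
print(resolve(c1, c2, 'c'))                     # the resolvent a ∨ b
print(resolve(frozenset({('c', True)}),         # resolving c with ¬c
              frozenset({('c', False)}), 'c'))  # yields the empty clause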
- -Modus ponens can be seen as a special case of resolution (of a one-literal clause and a two-literal clause). - -\frac{ - -p \rightarrow q, \quad p} - -{ - -q - -} - -is equivalent to - -\frac{ - -\lnot p \lor q,\quad p} - -{ - -q - -} - -When coupled with a complete search algorithm, the resolution rule yields a sound and complete algorithm for deciding the satisfiability of a propositional formula, and, by extension, the validity of a sentence under a set of axioms. - -This resolution technique uses proof by contradiction and is based on the fact that any sentence in propositional logic can be transformed into an equivalent sentence in conjunctive normal form. The steps are as follows. - -* All sentences in the knowledge base and the negation of the sentence to be proved (the conjecture) are conjunctively connected. - -* The resulting sentence is transformed into a conjunctive normal form with the conjuncts viewed as elements in a set, S, of clauses. - -* The resolution rule is applied to pairs of clauses in S that contain complementary literals, and each resolvent not already present is added to S. - -* If repeated application derives the empty clause, S is unsatisfiable and the original conjecture follows from the knowledge base; if no new clauses can be added and the empty clause has not been derived, S is satisfiable and the conjecture does not follow. - -Non-clausal variants of these techniques are useful mainly in interactive theorem proving, where it is important to preserve human readability of intermediate result formulas. Besides, they avoid combinatorial explosion during transformation to clause-form. - -Notable resolution-based theorem provers include: - -* CARINE - -* GKC - -* Otter - -* Prover9 - -* SNARK - -* SPASS - -* Vampire - -* online prover diff --git a/wiki/wikipedia/3537.txt b/wiki/wikipedia/3537.txt deleted file mode 100644 index ff0782fc92e1fd96fc1e91e9063ca7f616da0069..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3537.txt +++ /dev/null @@ -1,11 +0,0 @@ -In mathematics, specifically additive number theory, Romanov's theorem is a mathematical theorem proved by Nikolai Pavlovich Romanov. It states that given a fixed base b, the set of numbers that are the sum of a prime and a positive integer power of b has a positive lower asymptotic density. - -Romanov initially stated that he had proven the statements "In jedem Intervall (0, x) liegen mehr als ax Zahlen, welche als Summe von einer Primzahl und einer k-ten Potenz einer ganzen Zahl darstellbar sind, wo a eine gewisse positive, nur von k abhängige Konstante bedeutet" and "In jedem Intervall (0, x) liegen mehr als bx Zahlen, welche als Summe von einer Primzahl und einer Potenz von a darstellbar sind. Hier ist a eine gegebene ganze Zahl und b eine positive Konstante, welche nur von a abhängt". These statements translate to "In every interval $(0,x)$ there are more than $\alpha x$ numbers which can be represented as the sum of a prime number and a k-th power of an integer, where $\alpha$ is a certain positive constant that is only dependent on k" and "In every interval $(0,x)$ there are more than $\beta x$ numbers which can be represented as the sum of a prime number and a power of a. Here a is a given integer and $\beta$ is a positive constant that only depends on a" respectively. The second statement is generally accepted as Romanov's theorem, for example in Nathanson's book. - -Precisely, let $d(x)=\frac{\left\vert \{n\le x:n=p+2^k,p\ \textrm{prime,}\ k\in\N\} \right\vert}{x}$ and let $\underline{d}=\liminf_{x\to\infty}d(x)$, $\overline{d}=\limsup_{x\to\infty}d(x)$. Then Romanov's theorem asserts that $\underline{d}>0$. - -Alphonse de Polignac wrote in 1849 that every odd number larger than 3 can be written as the sum of an odd prime and a power of 2. (He soon noticed a counterexample, namely 959.) This corresponds to the case of $a=2$ in the original statement. The counterexample of 959 was, in fact, also mentioned in Euler's letter to Christian Goldbach, but they were working in the opposite direction, trying to find odd numbers that cannot be expressed in the form.
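A brute-force check of the $a=2$ case (a rough sketch written for this article): it confirms that 959 is not of the form p + 2^k and computes a crude empirical estimate of d(x).

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_prime_plus_power_of_two(n):
    power = 1                        # 2^0, 2^1, 2^2, ... (here k >= 0;
    while power < n:                 # starting at 2^1 gives the same verdict for 959)
        if is_prime(n - power):
            return True
        power *= 2
    return False

print(is_prime_plus_power_of_two(959))   # False: de Polignac's counterexample
N = 10000
print(sum(is_prime_plus_power_of_two(n) for n in range(1, N + 1)) / N)
# a crude empirical estimate of d(N)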
In 1934, Romanov proved the theorem. The positive constant $\beta$ mentioned in the case $a=2$ was later known as Romanov's constant. Various estimates of the constant, as well as of $\overline{d}$, have been made. In particular, since $\overline{d}$ is shown to be less than 0.5, the odd numbers that cannot be expressed this way also have positive lower asymptotic density. - -Analogues of Romanov's theorem have been proven in number fields by Riegel in 1961. In 2015, the theorem was also proven for polynomials in finite fields. Also in 2015, an arithmetic progression of Gaussian integers that are not expressible as the sum of a Gaussian prime and a power of 1+i was given. diff --git a/wiki/wikipedia/3538.txt b/wiki/wikipedia/3538.txt deleted file mode 100644 index 6bfd2f5970e69cdf372a482e5c9740f780b55f4a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3538.txt +++ /dev/null @@ -1,3 +0,0 @@ -In geometry, the six circles theorem relates to a chain of six circles together with a triangle, such that each circle is tangent to two sides of the triangle and also to the preceding circle in the chain. The chain closes, in the sense that the sixth circle is always tangent to the first circle. It is assumed in this construction that all circles lie within the triangle, and all points of tangency lie on the sides of the triangle. If the problem is generalized to allow circles that may not be within the triangle, and points of tangency on the lines extending the sides of the triangle, then the sequence of circles eventually reaches a periodic sequence of six circles, but may take arbitrarily many steps to reach this periodicity. - -The name may also refer to Miquel's six circles theorem, the result that if five circles have four triple points of intersection then the remaining four points of intersection lie on a sixth circle. diff --git a/wiki/wikipedia/3539.txt b/wiki/wikipedia/3539.txt deleted file mode 100644 index 2edd4ed958e28631a15786f5a7506f0d424f1d8f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3539.txt +++ /dev/null @@ -1,103 +0,0 @@ -A tetromino is a geometric shape composed of four squares, connected orthogonally (i.e. at the edges and not the corners). Tetrominoes, like dominoes and pentominoes, are a particular type of polyomino. The corresponding polycube, called a tetracube, is a geometric shape composed of four cubes connected orthogonally. - -A popular use of tetrominoes is in the video game Tetris, which refers to them as tetriminos. The tetrominoes used in the game are specifically the one-sided tetrominoes. - -Polyominoes are formed by joining unit squares along their edges. A free polyomino is a polyomino considered up to congruence. That is, two free polyominoes are the same if there is a combination of translations, rotations, and reflections that turns one into the other. A free tetromino is a free polyomino made from four squares. There are five free tetrominoes.
- -The free tetrominoes have the following symmetry: - -* Straight: vertical and horizontal reflection symmetry, and two points of rotational symmetry - -* Square: vertical and horizontal reflection symmetry, and four points of rotational symmetry - -* T: vertical reflection symmetry only - -* L: no symmetry - -* Skew: two points of rotational symmetry only
- -One-sided tetrominoes are tetrominoes that may be translated and rotated but not reflected. They are used by, and are overwhelmingly associated with, Tetris. There are seven distinct one-sided tetrominoes. These tetrominoes are named by the letter of the alphabet they most closely resemble. The "I", "O", and "T" tetrominoes have reflectional symmetry, so it does not matter whether they are considered as free tetrominoes or one-sided tetrominoes. The remaining four tetrominoes, "J", "L", "S", and "Z", exhibit a phenomenon called chirality. J and L are reflections of each other, and S and Z are reflections of each other. - -As free tetrominoes, J is equivalent to L, and S is equivalent to Z. But in two dimensions and without reflections, it is not possible to transform J into L or S into Z.
- -The fixed tetrominoes allow only translation, not rotation or reflection. There are two distinct fixed I-tetrominoes, four J, four L, one O, two S, four T, and two Z, for a total of 19 fixed tetrominoes.
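These counts are small enough to verify by brute force; the following Python sketch (ours) grows all tetrominoes cell by cell and counts them up to translation, rotation, and reflection:

def normalize(cells):
    mx = min(x for x, y in cells)
    my = min(y for x, y in cells)
    return frozenset((x - mx, y - my) for x, y in cells)

def grow(shapes):
    # Extend every shape by one orthogonally adjacent cell.
    out = set()
    for s in shapes:
        for x, y in s:
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if (x + dx, y + dy) not in s:
                    out.add(normalize(s | {(x + dx, y + dy)}))
    return out

shapes = {frozenset({(0, 0)})}
for _ in range(3):
    shapes = grow(shapes)          # all tetrominoes, fixed up to translation

rotate = lambda s: normalize(frozenset((-y, x) for x, y in s))
reflect = lambda s: normalize(frozenset((-x, y) for x, y in s))

def canonical(s, with_reflection):
    forms = []
    for t in (s, reflect(s)) if with_reflection else (s,):
        for _ in range(4):
            t = rotate(t)
            forms.append(tuple(sorted(t)))
    return min(forms)

print(len(shapes))                                    # 19 fixed
print(len({canonical(s, False) for s in shapes}))     # 7 one-sided
print(len({canonical(s, True) for s in shapes}))      # 5 free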
- -A single set of free tetrominoes or one-sided tetrominoes cannot fit in a rectangle. This can be shown with a proof similar to the mutilated chessboard argument. A 5×4 rectangle with a checkerboard pattern has 20 squares, containing 10 light squares and 10 dark squares, but a complete set of free tetrominoes covers 11 dark squares and 9 light squares. This is due to the T tetromino covering 3 dark squares and one light square, while all other tetrominoes each cover 2 dark squares and 2 light squares. Similarly, a 7×4 rectangle has 28 squares, containing 14 squares of each shade, but the set of one-sided tetrominoes covers 15 dark squares and 13 light squares. By extension, any odd number of sets for either type cannot fit in a rectangle. Additionally, the 19 fixed tetrominoes cannot fit in a 4×19 rectangle. This was discovered by exhausting all possibilities in a computer search.
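The colour counts behind this argument are easy to verify directly (a small sketch of ours, with one fixed orientation of each free tetromino; rotations and translations can swap which colour gets 3 cells of the T, but never balance it to 2 and 2):

tetrominoes = {
    'I': [(0, 0), (1, 0), (2, 0), (3, 0)],
    'O': [(0, 0), (1, 0), (0, 1), (1, 1)],
    'T': [(0, 0), (1, 0), (2, 0), (1, 1)],
    'L': [(0, 0), (0, 1), (0, 2), (1, 0)],
    'S': [(1, 0), (2, 0), (0, 1), (1, 1)],
}
for name, cells in tetrominoes.items():
    dark = sum((x + y) % 2 for x, y in cells)
    print(name, dark, 4 - dark)   # T covers 3 of one colour and 1 of the
                                  # other; every other piece covers 2 and 2
# One free set therefore covers 11 squares of one colour and 9 of the
# other, so it cannot tile the 10+10 colouring of a 5×4 rectangle.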
- -However, all three sets of tetrominoes can fit rectangles with holes:
-* All 5 free tetrominoes fit a 7×3 rectangle with a hole.
-
-* All 7 one-sided tetrominoes fit a 6×5 rectangle with two holes of the same "checkerboard color".
-
-* All 19 fixed tetrominoes fit an 11×7 rectangle with a hole.
- -Two sets of free or one-sided tetrominoes can fit into a rectangle in more than one way.
- -The name "tetromino" is a combination of the prefix tetra- 'four' (from Ancient Greek) and "domino". The name was introduced by Solomon W. Golomb in 1953 along with other nomenclature related to polyominoes. - -Each of the five free tetrominoes has a corresponding tetracube, which is the tetromino extruded by one unit. - -J and L are the same tetracube, as are S and Z, because one may be rotated around an axis parallel to the tetromino's plane to form the other. Three more tetracubes are possible, all created by placing a unit cube on the bent tricube:
- -The tetracubes can be packed into two-layer 3D boxes in several different ways, based on the dimensions of the box and the criteria for inclusion. They are shown below as text diagrams; for boxes using two sets of the same pieces, each set is distinguished by capital or lower-case letters. In each text diagram, the top layer is on the left, and the bottom layer is on the right.
-1.) 2×4×5 box filled with two sets of free tetrominoes:
-
-Z Z T t I l T T T i
-
-L Z Z t I l l l t i
-
-L z z t I o o z z i
-
-L L O O I o o O O i
-
-2.) 2×2×10 box filled with two sets of free tetrominoes:
-
-L L L z z Z Z T O O o o z z Z Z T T T l
-
-L I I I I t t t O O o o i i i i t l l l
-
-3.) 2×4×4 box filled with one set of all tetracubes:
-
-F T T T F Z Z B
-
-F F T B Z Z B B
-
-O O L D L L L D
-
-O O D D I I I I
-
-4.) 2×2×8 box filled with one set of all tetracubes:
-
-D Z Z L O T T T D L L L O B F F
-
-D D Z Z O B T F I I I I O B B F
-
-5.) 2×2×7 box filled with tetracubes, with mirror-image pieces removed:
-
-L L L Z Z B B L C O O Z Z B
-
-C I I I I T B C C O O T T T
diff --git a/wiki/wikipedia/354.txt b/wiki/wikipedia/354.txt deleted file mode 100644 index 62c22b259e50cb733d046c62f07fed77d8ce8fde..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/354.txt +++ /dev/null @@ -1,87 +0,0 @@ -In mathematical analysis, Haar's tauberian theorem, named after Alfréd Haar, relates the asymptotic behaviour of a continuous function to properties of its Laplace transform. It is related to the integral formulation of the Hardy–Littlewood tauberian theorem. - -William Feller gives the following simplified form of this theorem: - -Suppose that $f(t)$ is a non-negative and continuous function for $t \geq 0$, having finite Laplace transform -$$ -F(s) = \int_0^\infty e^{-st} f(t)dt -$$ - -for $s>0$. Then $F(s)$ is well defined for any complex value of $s=x+iy$ with $x>0$. Suppose that $F$ verifies the following conditions: - -1. For $y \neq 0$ the function $F(x+iy)$ (which is regular on the right half-plane $x>0$) has continuous boundary values $F(iy)$ as $x \to +0$, for $x \geq 0$ and $y \neq 0$; furthermore for $s=iy$ it may be written as -$$ -F(s) = \frac{C}{s} + \psi(s), -$$ - -where $\psi(iy)$ has finite derivatives $\psi'(iy),\ldots,\psi^{(r)}(iy)$ and $\psi^{(r)}(iy)$ is bounded in every finite interval; - -2. The integral -$$ -\int_0^\infty e^{ity} F(x+iy) dy -$$ - -converges uniformly with respect to $t \geq T$ for fixed $x>0$ and $T>0$; - -3. $F(x+iy) \to 0$ as $y \to \pm\infty$, uniformly with respect to $x \geq 0$; - -4. $F'(iy),\ldots,F^{(r)}(iy)$ tend to zero as $y \to \pm\infty$; - -5. The integrals -$$ -\int_{-\infty}^{y_1} e^{ity} F^{(r)}(iy) dy -$$ and $\int_{y_2}^\infty e^{ity} F^{(r)}(iy) dy$ - -converge uniformly with respect to $t \geq T$ for fixed $y_1 < 0$, $y_2 > 0$ and $T>0$. - -Under these conditions -$$ -\lim_{t \to \infty} t^r[f(t)-C] = 0. -$$ - -A more detailed version is given in - -Suppose that $f(t)$ is a continuous function for $t \geq 0$, having Laplace transform -$$ -F(s) = \int_0^\infty e^{-st} f(t)dt -$$ - -with the following properties: - -1. For all values $s=x+iy$ with $x>a$ the function $F(s)=F(x+iy)$ is regular; - -2. For all $x>a$, the function $F(x+iy)$, considered as a function of the variable $y$, has the Fourier property ("Fourierschen Charakter besitzt") defined by Haar as: for any $\delta>0$ there is a value $\omega$ such that for all $t \geq T$ -$$ -\Big| \int_\alpha^\beta e^{iyt} F(x+iy) dy \Big| < \delta -$$ - -whenever $\alpha,\beta \geq \omega$ or $\alpha,\beta \leq -\omega$. - -3. The function $F(s)$ has a boundary value for $\Re s = a$ of the form -$$ -F(s) = \sum_{j=1}^N \frac{c_j}{(s-s_j)^{\rho_j}} + \psi(s) -$$ - -where $s_j = a + i y_j$ and $\psi(a+iy)$ is an $n$ times differentiable function of $y$ such that the derivative -$$ -\left| \frac{d^n \psi(a+iy)}{dy^n} \right| -$$ - -is bounded on any finite interval (for the variable $y$). - -4. The derivatives -$$ -\frac{d^k F(a+iy)}{dy^k} -$$ - -for $k=0,\ldots,n-1$ have zero limit for $y \to \pm\infty$, and for $k=n$ the derivative has the Fourier property as defined above. - -5.
For sufficiently large $t$ the following holds: -$$ -\lim_{y \to \pm\infty} \int_{a+iy}^{x+iy} e^{st} F(s) ds = 0 -$$ - -Under the above hypotheses we have the following asymptotic formula -$$ -\lim_{t \to \infty} t^n e^{-at} \Big[ f(t) - \sum_{j=1}^{N} \frac{c_j}{\Gamma(\rho_j)} e^{s_j t} t^{\rho_j - 1} \Big] = 0 -$$ diff --git a/wiki/wikipedia/3540.txt b/wiki/wikipedia/3540.txt deleted file mode 100644 index 4fcca4d597facfa8aee852e5910056f7d6b767a1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3540.txt +++ /dev/null @@ -1,22 +0,0 @@ -In econometrics, the Frisch–Waugh–Lovell (FWL) theorem is named after the econometricians Ragnar Frisch, Frederick V. Waugh, and Michael C. Lovell. - -The Frisch–Waugh–Lovell theorem states that if the regression we are concerned with is: -$$ - Y = X_1 \beta_1 + X_2 \beta_2 + u -$$ - -where $X_1$ and $X_2$ are $n \times k_1$ and $n \times k_2$ matrices respectively and where $ \beta_1 $ and $ \beta_2 $ are conformable, then the estimate of $ \beta_2 $ will be the same as the estimate of it from a modified regression of the form: -$$ - M_{X_1} Y = M_{X_1} X_2 \beta_2 + M_{X_1} u, -$$ - -where $M_{X_1}$ projects onto the orthogonal complement of the image of the projection matrix $X_1(X_1^{\mathsf{T}}X_1)^{-1}X_1^{\mathsf{T}} $. Equivalently, $M_{X_1}$ projects onto the orthogonal complement of the column space of $X_1$. Specifically, -$$ - M_{X_1} = I - X_1(X_1^{\mathsf{T}}X_1)^{-1}X_1^{\mathsf{T}}, -$$ - -and this particular orthogonal projection matrix is known as the annihilator matrix. - -The vector $ M_{X_1} Y $ is the vector of residuals from regression of $ Y $ on the columns of $ X_1$. - -The theorem implies that the secondary regression used for obtaining $ M_{X_1}$ is unnecessary when the predictor variables are uncorrelated (this never happens in practice): using projection matrices to make the explanatory variables orthogonal to each other will lead to the same results as running the regression with all non-orthogonal explanators included. diff --git a/wiki/wikipedia/3541.txt b/wiki/wikipedia/3541.txt deleted file mode 100644 index 3537a77086f2d5ad3e6b10a40ed2c00ff167e3a3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3541.txt +++ /dev/null @@ -1,19 +0,0 @@ -Bitcasa, Inc. was an American cloud storage company founded in 2011 in St. Louis, Missouri. The company was later based in Mountain View, California until it shut down in 2017. - -Bitcasa provided client software for Microsoft Windows, OS X, Android and web browsers. An iOS client was pending Apple approval. Its former product, Infinite Drive, once provided centralized storage that included unlimited capacity, client-side encryption, media streaming, file versioning and backups, and multi-platform mobile access. In 2013 Bitcasa moved to a tiered storage model, offering from 1TB for $99/year up to Infinite for $999/year. In October 2014, Bitcasa announced the discontinuation of Infinite Drive; for $999/year, users would get 10TB of storage. Infinite Drive users would be required to migrate to one of the new pricing plans or delete their account. In 2012 Tony Lee was recruited as vice president of engineering and Frank Meehan joined the company's board of directors. In June 2012 Bitcasa closed $9 million of investment; investors included CrunchFund. Bitcasa's encryption method reportedly cloaks the data while it is still on the client's computer, and blocks of data are then sent using an enterprise-grade AES-256 encryption method to the data cloud for storage.
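The client-side scheme described above (called convergent encryption in the article text below) is easy to sketch. The following is an illustration of ours, not Bitcasa's actual code, and it assumes the third-party pycryptodome package: each block is encrypted under a key derived from its own content, so identical blocks yield identical ciphertexts that a server can deduplicate without being able to read them.

```python
# Minimal sketch of convergent encryption (illustrative only; not Bitcasa's
# actual implementation). Requires the third-party pycryptodome package.
import hashlib
from Crypto.Cipher import AES

def convergent_encrypt(block):
    key = hashlib.sha256(block).digest()        # content-derived AES-256 key
    nonce = hashlib.sha256(key).digest()[:12]   # deterministic nonce: same block -> same ciphertext
    cipher = AES.new(key, AES.MODE_GCM, nonce=nonce)
    ciphertext, tag = cipher.encrypt_and_digest(block)
    block_id = hashlib.sha256(ciphertext).hexdigest()  # dedup handle for the server
    return key, block_id, ciphertext + tag

# Two clients holding the same block produce byte-identical ciphertexts, so the
# server stores one copy; only holders of the plaintext can derive the key.
k1, id1, c1 = convergent_encrypt(b"the same block of file data")
k2, id2, c2 = convergent_encrypt(b"the same block of file data")
assert id1 == id2 and c1 == c2
```

The determinism that enables deduplication is also the source of the risks noted below: anyone who can guess a block's exact contents can confirm that a user stores it.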
According to ExtremeTech, this service gives users access and ownership rights to their own data. - -Users could access their Infinite Drive through mobile apps for Android, Windows RT, and browsers, with support for offline viewing of files. The app collects and displays individual media types such as photos, video, music, and documents, independently of the folder hierarchy that they are stored in. Video files are streamed and auto-transcoded based on the device bandwidth. Items may be uploaded or downloaded or shared directly with social media sites. Files of any size can be shared with a web link that can be distributed via email, text or IM. After the initial server migration, only apps for Android, iOS and browsers were updated, effectively rendering other devices unusable with the service. - -A September 2011 article published in Extreme Tech said that Bitcasa's convergent encryption based system is "mostly" safe but has some risks associated with it. - -On November 19, 2013, the company announced that its Infinite Storage offering would increase in price. The move sparked an intense reaction from users at the company's forum, even though existing users were grandfathered into the original pricing plan. Reactions from bloggers were particularly critical. The announcement of the pricing plans change on the Bitcasa blog was commented on heavily by users. Following this post, Bitcasa backtracked, citing 'lack of demand' and 'abuse'. - -The company instead offered previous clients the same packages at the prices that regular users pay. A temporary restraining order (TRO) was obtained against Bitcasa over the migration deadline; Bitcasa filed a response on 18 November, challenging the legality of the TRO. As an apparent result of the restraining order, Bitcasa announced a 5-day extension of the deadline in an email to users on November 16; the email did not mention the restraining order. A hearing was set for 10.00 on 19 November; Bitcasa 'won' the lawsuit. - -In February 2015, the Community Forum was shut down. - -On April 7, 2016, the company switched their free 5GB plan to a free trial tier. Users with this account prior to April 7 would automatically start the trial; after the 60-day trial, if the user had not changed to a paid plan, their account and data would be deleted from the server. - -On April 21, 2016, Bitcasa announced they would discontinue their cloud storage service, and focus on business products. Users had until May 20, 2016 to download their data, after which user data could be deleted. Bitcasa shut down their consumer cloud storage at the end of May 20, 2016, only offering products for developers. - -Four months later, customers had not been refunded and the Bitcasa website was inaccessible. diff --git a/wiki/wikipedia/3542.txt b/wiki/wikipedia/3542.txt deleted file mode 100644 index d00b5cbbf14e4768e2f5c37af40ae75debb9b651..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3542.txt +++ /dev/null @@ -1,33 +0,0 @@ -In mathematical logic, a proof calculus or a proof system is built to prove statements. - -A proof system includes the following components: - -* Language: The set L of formulas admitted by the system, for example, propositional logic or first-order logic. - -* Rules of inference: List of rules that can be employed to prove theorems from axioms and theorems. - -* Axioms: Formulas in L assumed to be valid. All theorems are derived from axioms. - -Usually a given proof calculus encompasses more than a single particular formal system, since many proof calculi are under-determined and can be used for radically different logics.
For example, a paradigmatic case is the sequent calculus, which can be used to express the consequence relations of both intuitionistic logic and relevance logic. Thus, loosely speaking, a proof calculus is a template or design pattern, characterized by a certain style of formal inference, that may be specialized to produce specific formal systems, namely by specifying the actual inference rules for such a system. There is no consensus among logicians on how best to define the term. - -The most widely known proof calculi are those classical calculi that are still in widespread use: - -*The class of Hilbert systems, of which the most famous example is the 1928 Hilbert–Ackermann system of first-order logic; - -*Gerhard Gentzen's calculus of natural deduction, which is the first formalism of structural proof theory, and which is the cornerstone of the formulae-as-types correspondence relating logic to functional programming; - -*Gentzen's sequent calculus, which is the most studied formalism of structural proof theory. - -Many other proof calculi were, or might have been, seminal, but are not widely used today. - -*Aristotle's syllogistic calculus, presented in the Organon, readily admits formalisation. There is still some modern interest in syllogisms, carried out under the aegis of term logic. - -*Gottlob Frege's two-dimensional notation of the Begriffsschrift (1879) is usually regarded as introducing the modern concept of quantifier to logic. - -*C.S. Peirce's existential graph easily might have been seminal, had history worked out differently. - -Modern research in logic teems with rival proof calculi: - -*Several systems have been proposed that replace the usual textual syntax with some graphical syntax. Proof nets and cirquent calculus are among such systems. - -*Recently, many logicians interested in structural proof theory have proposed calculi with deep inference, for instance display logic, hypersequents, the calculus of structures, and bunched implications. diff --git a/wiki/wikipedia/3543.txt b/wiki/wikipedia/3543.txt deleted file mode 100644 index 5fe7989bf9b25e87d5baa2783122d9d277339b21..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3543.txt +++ /dev/null @@ -1,66 +0,0 @@ -In number theory, the fundamental lemma of sieve theory is any of several results that systematize the process of applying sieve methods to particular problems. Halberstam & Richert give one such statement; Diamond & Halberstam attribute the terminology Fundamental Lemma to Jonas Kubilius. - -We use these notations: - -* A is a set of X positive integers, and $A_d$ is its subset of integers divisible by d - -* $w(d)$ and $R_d$ are functions of A and of d that estimate the number of elements of A that are divisible by d, according to the formula -$$ - \left\vert A_d \right\vert = \frac{w(d)}{d} X + R_d . -$$ - -Thus $w(d) / d$ represents an approximate density of members divisible by d, and $R_d$ represents an error or remainder term. - -* P is a set of primes, and $P(z)$ is the product of those primes ≤ z - -* $S(A, P, z)$ is the number of elements of A not divisible by any prime in P that is ≤ z - -* κ is a constant, called the sifting density. Other formulations are in Halberstam & Richert, and in Friedlander & Iwaniec. - -We make the assumptions: - -* w(d) is a multiplicative function.
- -* The sifting density κ satisfies, for some constant C and any real numbers η and ξ with 2 ≤ η ≤ ξ: -$$ -\prod_{\eta \le p \le \xi} \left( 1 - \frac{w(p)}{p} \right) ^{-1} < \left( \frac{\ln \xi}{\ln \eta} \right) ^\kappa \left( 1 + \frac{C}{\ln \eta} \right). -$$ - -There is a parameter u ≥ 1 that is at our disposal. We have uniformly in A, X, z, and u that -$$ -S(A,P,z) = X \prod_{p \le z, p \in P} \left( 1 - \frac{w(p)}{p} \right) \{1 + O(u^{-u/2})\} + O\left(\sum_{d \le z^u, d|P(z)} |R_d| \right). -$$ - -In applications we pick u to get the best error term. In the sieve it represents the number of levels of the inclusion–exclusion principle. - -This formulation is from Halberstam & Richert. Another formulation is in Diamond & Halberstam. - -We make the assumptions: - -* w(d) is a multiplicative function. - -* The sifting density κ satisfies, for some constant C and any real numbers η and ξ with 2 ≤ η ≤ ξ: -$$ - \sum_{\eta \le p \le \xi} \frac{w(p) \ln p}{p} < \kappa \ln \frac{\xi}{\eta} + C. -$$ - -* $w(p) / p < 1 - c$ for some small fixed c and all p - -* $| R_d | \le \omega(d)$ where ω(d) is the number of distinct prime divisors of d. - -The fundamental lemma has almost the same form as for the combinatorial sieve. Write u = ln X / ln z. The conclusion is: -$$ -S(A,P,z) = X \prod_{p \le z,\ p \in P} \left( 1 - \frac{w(p)}{p} \right) \{1 + O(e^{-u/2})\}. -$$ - -Note that u is no longer an independent parameter at our disposal, but is controlled by the choice of z. - -Note that the error term here is weaker than for the fundamental lemma of the combinatorial sieve. Halberstam & Richert remark: "Thus it is not true to say, as has been asserted from time to time in the literature, that Selberg's sieve is always better than Brun's." diff --git a/wiki/wikipedia/3544.txt b/wiki/wikipedia/3544.txt deleted file mode 100644 index 412b14a088261f9336d9e4e6d98c566cf90a949f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3544.txt +++ /dev/null @@ -1,15 +0,0 @@ -In operating systems, a giant lock, also known as a big-lock or kernel-lock, is a lock that may be used in the kernel to provide concurrency control required by symmetric multiprocessing (SMP) systems. - -A giant lock is a solitary global lock that is held whenever a thread enters kernel space and released when the thread returns to user space; a system call is the archetypal example. In this model, threads in user space can run concurrently on any available processors or processor cores, but no more than one thread can run in kernel space; any other threads that try to enter kernel space are forced to wait. In other words, the giant lock eliminates all concurrency in kernel space. - -By isolating the kernel from concurrency, many parts of the kernel no longer need to be modified to support SMP. However, as in giant-lock SMP systems only one processor can run the kernel code at a time, performance for applications spending significant amounts of time in the kernel is not much improved. Accordingly, the giant-lock approach is commonly seen as a preliminary means of bringing SMP support to an operating system, yielding benefits only in user space. Most modern operating systems use a fine-grained locking approach. - -The Linux kernel had a big kernel lock (BKL) from the introduction of SMP until Arnd Bergmann removed it in 2011 in kernel version 2.6.39, with the remaining uses of the big lock removed or replaced by finer-grained locking.
Linux distributions at or above CentOS 7, Debian 7 (Wheezy) and Ubuntu 11.10 are therefore not using the BKL. - -OpenBSD and NetBSD are still using the spl (Unix) family of primitives to facilitate synchronisation of critical sections within the kernel, meaning that many system calls may inhibit the SMP capabilities of the system, and, according to Matthew Dillon, the SMP capabilities of these two systems cannot be considered modern. - -FreeBSD still has support for the Giant mutex, which provides semantics akin to the old spl interface, but performance-critical core components have long been converted to use finer-grained locking. - -It is claimed by Matthew Dillon that out of the open-source software general-purpose operating systems, only Linux, DragonFly BSD and FreeBSD have modern SMP support, with OpenBSD and NetBSD falling behind. - -The NetBSD Foundation views modern SMP support as vital to the direction of The NetBSD Project, and has offered grants to developers willing to work on SMP improvements; NPF (firewall) was one of the projects that arose as a result of these financial incentives, but further improvements to the core networking stack may still be necessary. diff --git a/wiki/wikipedia/3545.txt b/wiki/wikipedia/3545.txt deleted file mode 100644 index b857e5c822c0aa5a1b2eaf5d1ae657789b8fc8d3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3545.txt +++ /dev/null @@ -1,13 +0,0 @@ -Numerical 3-dimensional matching is an NP-complete decision problem. It is given by three multisets of integers $X$, $Y$ and $Z$, each containing $k$ elements, and a bound $b$. The goal is to select a subset $M$ of $X\times Y\times Z$ such that every integer in $X$, $Y$ and $Z$ occurs exactly once and that for every triple $(x,y,z)$ in the subset $x+y+z=b$ holds. - -This problem is labeled as [SP16] in Garey and Johnson. - -Take $X=\{3,4,4\}$, $Y=\{1,4,6\}$ and $Z=\{1,2,5\}$, and $b=10$. This instance has a solution, namely $\{(3,6,1), (4,4,2), (4,1,5)\}$. Note that each triple sums to $b=10$. The set $\{(3,6,1), (3,4,2), (4,1,5)\}$ is not a solution for several reasons: not every number is used (a $4\in X$ is missing), a number is used too often (the $3\in X$) and not every triple sums to $b$ (since $3+4+2=9\neq b=10$). However, there is at least one solution to this problem, which is the property we are interested in with decision problems. - -If we instead take $b=11$ for the same $X$, $Y$ and $Z$, the problem has no solution (all numbers sum to $30$, which is not equal to $k\cdot b=33$ in this case). - -Every instance of the Numerical 3-dimensional matching problem is an instance of both the 3-partition problem, and the 3-dimensional matching problem. - -NP-completeness of the 3-partition problem is stated by Garey and Johnson in "Computers and Intractability; A Guide to the Theory of NP-Completeness", which references this problem with the code [SP16]. It is done by a reduction from 3-dimensional matching via 4-partition. - -To prove NP-completeness of the numerical 3-dimensional matching, the proof is similar, but a reduction from 3-dimensional matching via the numerical 4-dimensional matching problem should be used. diff --git a/wiki/wikipedia/3546.txt b/wiki/wikipedia/3546.txt deleted file mode 100644 index 52afea090e63b8cdca558cd2206c8837fc411e29..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3546.txt +++ /dev/null @@ -1,23 +0,0 @@ -In computer science, serializing tokens are a concept in concurrency control arising from the ongoing development of DragonFly BSD.
According to Matthew Dillon, they are most akin to SPLs, except a token works across multiple CPUs while SPLs only work within a single CPU's domain. - -Serializing tokens allow programmers to write multiprocessor-safe code without themselves or the lower-level subsystems needing to be aware of every single entity that may also be holding the same token. - -Tokens and mutual exclusion (mutex) mechanisms are locks. Unlike mutexes, tokens do not exclude other threads from accessing the resource while they are blocked or asleep. A thread sharing resources with other threads can be stopped and started for a variety of reasons: - -# Timeslicing: the user space (US) scheduler tries to ensure that all threads get a fair chance to run, so it runs each thread for a brief period of time (a timeslice) and then switches to another thread. - -# Concurrent execution: in multiprocessor computers, a thread may be run at exactly the same time as another thread on a different CPU. - -# Preemption: a higher-priority thread, such as a hardware interrupt handler or a lightweight kernel thread, may preempt a lower-priority thread. - -# Voluntary blocking: a thread may sleep if it has to wait for something, has no work to do, or calls a function that blocks. Even the call to acquire a lock can block. - -The following table summarizes the properties of tokens and mutexes. - -Issues such as deadlock and priority inversion can be very difficult to avoid, and require coordination at many levels of the kernel. Because locking with tokens does not deadlock and acquired tokens need not be atomic when later operations block, tokens allow much simpler code than mutexes. - -== Example == - -The following pseudocode and explanations illustrate how serializing tokens work. - -Mac OS X's Darwin kernel uses a similar technique (called a funnel) to serialize access to the BSD portion of the kernel. diff --git a/wiki/wikipedia/3547.txt b/wiki/wikipedia/3547.txt deleted file mode 100644 index 0aa302fefb9e14ce7508b6372bccab9659145741..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3547.txt +++ /dev/null @@ -1,117 +0,0 @@ -Tuxedo (Transactions for Unix, Extended for Distributed Operations) is a middleware platform used to manage distributed transaction processing in distributed computing environments. Tuxedo is a transaction processing system or transaction-oriented middleware, or enterprise application server for a variety of systems and programming languages. - -Developed by AT&T in the 1980s, it became a software product of Oracle Corporation in 2008 when they acquired BEA Systems. - -Tuxedo is now part of the Oracle Fusion Middleware. - -From the beginning in 1983, AT&T designed Tuxedo for high availability and to provide extremely scalable applications to support applications requiring thousands of transactions per second on commonly available distributed systems. - -The original development targeted the creation and administration of operations support systems for the US telephone company that required online transaction processing (OLTP) capabilities. - -The Tuxedo concepts derived from the Loop Maintenance Operations System (LMOS). Tuxedo supported moving the LMOS application off mainframe systems that used Information Management System (IMS) from IBM onto much cheaper distributed systems running (AT&T's own) Unix. - -The original Tuxedo team comprised members of the LMOS team, including Juan M. Andrade, Mark T. Carges, Terrence Dwyer, and Stephen Felts.
- -In 1993 Novell acquired the Unix System Laboratories (USL) division of AT&T which was responsible for the development of Tuxedo at the time. In September 1993 it was called the "best known" distributed transaction processing monitor, running on 25 different platforms. - -In February 1996, BEA Systems made an exclusive agreement with Novell to develop and distribute Tuxedo on non-NetWare platforms, with most Novell employees working with Tuxedo joining BEA. - -In 2008, Oracle Corporation acquired BEA Systems, and TUXEDO was marketed as part of the Oracle Fusion Middleware product line. - -Tuxedo has been used as transactional middleware by a number of multi-tier application development tools. The Open Group used some of the Tuxedo interfaces as the basis of their standards such as X/Open XA and XATMI. - -The Tuxedo developers published papers about it in the early 1990s. - -Later it became the basis of some research projects. - -* Standards based APIs - SCA, The Open Group XATMI, Object Management Group CORBA - -* Communication types - Synchronous, Asynchronous, Conversational, Unsolicited Notifications, Publish/subscribe - -* Typed buffers - -** FML/FML32 - Self-describing fielded buffers similar to Abstract Syntax Notation One or Fast Infoset - -** XML - -** STRING and multibyte strings MBSTRING - -** CARRAY binary blobs - -** VIEW/VIEW32 externally described records - -** RECORD representing COBOL record structures - -* Transaction Management - Global Transactions - Two-phase commit protocol - X/Open XA - -* /D - Clustering - Domains - -* /WS - Remote Clients - -* WTC - Weblogic Tuxedo Connector - -* Java clients - Jolt - -* Java EE (J2EE) Integration - Tuxedo JCA Adapter - -* Bidirectional SOAP and REST Web Services - SALT - -* /Q - Transient (in memory) and Persistent Queues (also called Reliable Queues) - -* Data Dependent Routing (DDR) - -* Event Broker (also called publish and subscribe messaging) - -* Security - Authentication, Authorization, Auditing, and Public key infrastructure based message signing and encryption - -* Programmed Administration and SNMP support - -* System and application performance monitoring - TSAM Plus - -* Load balancing, server spawning and decay - -* Mainframe connectivity - TMA - -* Supports C, C++, COBOL, Python, Ruby, PHP, and Java applications on most Unix platforms, Linux, Microsoft Windows, and other proprietary platforms such as OpenVMS and IBM i. - -Tuxedo is at its core a message routing and queuing system. Requests are sent to named services and Tuxedo uses memory-based inter-process communication facilities to queue the requests to servers. The requester is unaware of where the server that actually processes the request is located or how it is implemented. In essence, Tuxedo provided the elements of service-oriented architecture (SOA) decades before the phrase was coined. Tuxedo can use the content of the message to determine what servers should be utilized to receive the request by means of data-dependent routing. - -The heart of the Tuxedo system is the Bulletin Board (BB). This is a shared memory segment that contains the configuration and state of a Tuxedo domain. Servers, services, transactions, and clients are all registered in the BB providing a global view of their state across the machines within a domain. To coordinate updates to the BB a process called the Bulletin Board Liaison (BBL) runs on each machine to keep the local copy of the BB up-to-date. 
A master machine runs a process called the "Distinguished Bulletin Board Liaison" that coordinates the updates to the BB. This allows each machine to have a view of what servers, services, transactions, and clients are on each machine within the domain. - -Another process on each machine called the Bridge is responsible for passing requests from one machine to another. This allows Tuxedo to spread load across the various machines within a domain and allows servers and services to be running on multiple machines. In addition, the BBL and Bridge monitor each other and restart the other should one fail. In the event of a failure of the master machine, another machine designated as a backup master can take over the function of master machine. Also, since machines within a single domain can be of different architectures (x86, IA32, SPARC, P-Series, etc.), the Bridge is also responsible for handling differences in things like endianness. - -On Oracle Exalogic, Tuxedo leverages the RDMA capabilities of InfiniBand to bypass the Bridge. This allows the client of a service on one machine to directly make a request of a server on another machine. - -Tuxedo applications can utilize a variety of message formats depending upon the type of data that is to be passed. One of the most popular formats is the FML buffer format, which is much like a binary XML or ASN.1 format. FML buffers can contain an arbitrary number of named fields of arbitrary type. Fields can be repeated and nested. As it is a self-describing binary format, the processing of fields incurs very little overhead in comparison to the parsing necessary to support something like XML. VIEW buffers are essentially records, C structures, or COBOL copybooks. A VIEW buffer has an external description which allows Tuxedo to access the fields within it if necessary for things like data-dependent routing. Other buffer formats include XML, CARRAY (opaque binary data), STRING, and MBSTRING (a string buffer containing multibyte characters). Tuxedo can automatically and transparently convert FML buffers to and from XML buffers. - -There is also support for user-developed buffer types (for example JamFlex buffers defined by the Tuxedo version of the Panther RAD toolset). - -For remote clients (Java, CORBA, or /WS), Tuxedo provides communication concentrators called listener/handlers that handle the remote network communication. Clients connect to these communication concentrators, which act as proxies for the clients. As clients make requests, the listener/handler uses the local Tuxedo infrastructure to make the request on behalf of the client. Tuxedo then load-balances the requests across the servers within the domain that offer the service, even if the server is not on the local machine. This is in contrast to most Java EE application servers, where load balancing is done by the client making requests to different machines within the cluster. - -To facilitate the sharing of services across domains, Tuxedo provides domain gateways. A domain gateway allows importing and exporting services from remote domains. This allows the local domain to see services on remote domains as though they were local services. The domain gateways are responsible for propagating security and transaction context to the remote domain. Besides connecting Tuxedo domains together, domain gateways exist for mainframe systems using TCP/IP, IBM's Systems Network Architecture (SNA), or the OSI protocols, and Java Platform, Enterprise Edition application servers.
For the mainframe gateways, each system sees the services imported from the remote system as local services and uses the local system's infrastructure to interact with those services. This means that Tuxedo sees a CICS transaction as a Tuxedo service, and CICS sees a Tuxedo service as a CICS transaction. - -The BBL on each machine monitors the state of all servers and can automatically restart failed servers. It can also detect hung servers and kill/restart them as required. The BRIDGE process in a clustered environment monitors the BBL, so there are no single points of failure. Any transactions that are affected by a server or machine failure and that have not completed the prepare phase are rolled back. Transactions that have completed the prepare phase but not the commit phase will be committed as part of the Tuxedo boot sequence. - -Tuxedo applications can request that all service invocations and their associated updates to any resources controlled by resource managers (such as databases) be controlled by a transaction. Once the application begins a transaction, all subsequent service invocations and nested invocations are included as part of that transaction, even those services that were executed on remote domains. Tuxedo then coordinates the commit processing with the resource managers to ensure atomic updates to all affected resources. Transactions can be controlled by the application or automatically controlled by the Tuxedo configuration, i.e., container-managed transactions. - -Tuxedo provides a queuing subsystem called /Q. This facility provides transient and persistent queues that allow applications to explicitly enqueue and dequeue messages from named queues. Queues can be ordered by message availability time, expiration time, priority, LIFO, FIFO, or a combination. Queues are managed by an XA-compliant resource manager, allowing queue operations to participate in distributed transactions. An automated queue forwarding server is provided that will remove entries from a queue and invoke an associated Tuxedo service, placing the reply message on an associated reply queue. - -The event subsystem within Tuxedo provides support for unsolicited events as well as brokered events. Unsolicited events allow Tuxedo applications to send out-of-band notifications to clients that aren't necessarily waiting for a response. Brokered events allow applications to subscribe to events of interest, and when another application posts an event, all applications subscribed to that event receive it. This allows applications to use an event-driven model instead of the more typical request/response model. This also provides a publish and subscribe messaging model that can be combined with /Q. - -Oracle offers a number of add-on products to Tuxedo. - -In March 2010, Oracle announced two new products. - -Application Runtime for CICS and Batch, along with the associated Oracle Tuxedo Application Rehosting Workbench, allows the migration of IBM Customer Information Control System (CICS) and batch applications onto Tuxedo on distributed systems. By providing automated conversion tools, CICS-equivalent API pre-processor macro expansion, and a JES-2-like Batch execution environment, the migration of mainframe applications is greatly simplified. - -This product provides a bi-directional web services SOAP/HTTP(S) gateway. This gateway allows Tuxedo services to be accessed by external SOAP clients without making any changes to the Tuxedo service.
Likewise, Tuxedo applications can call an external web service as though it were a local Tuxedo service. The latest version of SALT supports WS-AtomicTransactions and includes modules for Apache Web Server, Oracle HTTP Server, and Oracle iPlanet Web Server that allow the creation of dynamic web content by calling Tuxedo services. - -In version 12.1.3 SALT added support for RESTful services. - -This product provides centralized monitoring capabilities for multiple Tuxedo domains. TSAM Plus agents are deployed on the machines in a Tuxedo domain. These agents collect metric data from the running Tuxedo processes based on a configured policy, and send the data back to the TSAM Plus Manager where it is used historically or in real time. TSAM Plus provides configuration information, call path, call pattern, service execution, transaction, and other monitoring metrics. TSAM Plus also monitors Tuxedo ART CICS and Batch applications. An additional component of TSAM Plus is a plug-in for Oracle Enterprise Manager Cloud Control that provides full operation, configuration, administration, and management of a Tuxedo application. - -This product provides a set of gateway processes that run on Tuxedo and communicate with a mainframe using its native protocols. This gateway provides bidirectional integration between mainframe and Tuxedo platforms and makes Tuxedo appear as a remote CICS or IMS region to the mainframe, and the remote CICS or IMS region as another Tuxedo domain to the local Tuxedo application. - -The Tuxedo JCA adapter provides a JCA 1.5 compliant Resource Adapter that can be deployed to any Java EE (J2EE) 1.5 or later JCA container. The adapter supports both the JCA Common Client Interface (CCI) and the JATMI interface supported by the Oracle WebLogic Tuxedo Connector component of Oracle WebLogic Server. Message inflow and outflow are supported, along with distributed transaction support. - -Provides enterprise messaging capabilities that combine the features of Oracle MessageQ with Tuxedo. This extends the existing /Q message queuing facility of Tuxedo by providing things like delivery notification, offline messaging, and store and forward capability. diff --git a/wiki/wikipedia/3548.txt b/wiki/wikipedia/3548.txt deleted file mode 100644 index e5a2a42b4335032c7d41b7e8277a120e6c8a8f8c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3548.txt +++ /dev/null @@ -1,125 +0,0 @@ -In geometry, the hyperplane separation theorem is a theorem about disjoint convex sets in n-dimensional Euclidean space. There are several rather similar versions. In one version of the theorem, if both these sets are closed and at least one of them is compact, then there is a hyperplane in between them and even two parallel hyperplanes in between them separated by a gap. In another version, if both disjoint convex sets are open, then there is a hyperplane in between them, but not necessarily any gap. An axis which is orthogonal to a separating hyperplane is a separating axis, because the orthogonal projections of the convex bodies onto the axis are disjoint. - -The hyperplane separation theorem is due to Hermann Minkowski. The Hahn–Banach separation theorem generalizes the result to topological vector spaces. - -A related result is the supporting hyperplane theorem. - -In the context of support-vector machines, the optimally separating hyperplane or maximum-margin hyperplane is a hyperplane which separates two convex hulls of points and is equidistant from the two.
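For concreteness, here is the standard way to pose the maximum-margin hyperplane as an optimization problem; this is textbook SVM material rather than part of the article above. Given points $x_1, \ldots, x_m$ with labels $y_i \in \{-1, +1\}$ marking which of the two hulls each point belongs to, one solves
$$
\min_{w, b} \ \tfrac{1}{2}\|w\|^2 \quad \text{subject to} \quad y_i \left( \langle w, x_i \rangle + b \right) \ge 1, \qquad i = 1, \ldots, m,
$$

and the hyperplane $\langle w, x \rangle + b = 0$ then sits at distance $1/\|w\|$ from each hull, giving a total margin of $2/\|w\|$.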
- -{{math_theorem|name=Hyperplane separation theorem|Let A and B be two disjoint nonempty convex subsets of Rn. Then there exist a nonzero vector v and a real number c such that -$$ -\langle x, v \rangle \ge c \text{ and } \langle y, v \rangle \le c -$$ - -for all x in A and y in B; i.e., the hyperplane $\langle \cdot, v \rangle = c$, v the normal vector, separates A and B. - -}} - -The proof is based on the following lemma: - -Let $K$ be a nonempty closed convex subset of Rn. Then there exists a unique vector in $K$ of minimum norm (length). - -Proof of lemma: Let $\delta = \inf \{ |x| : x \in K \}.$ Let $x_j$ be a sequence in $K$ such that $|x_j| \to \delta$. Note that $(x_i + x_j)/2$ is in $K$ since $K$ is convex and so $|x_i + x_j|^2 \ge 4 \delta^2$. - -Since -$$ -|x_i - x_j|^2 = 2|x_i|^2 + 2|x_j|^2 - |x_i + x_j|^2 \le 2|x_i|^2 + 2|x_j|^2 - 4\delta^2 \to 0 -$$ - -as $i, j \to \infty$, $x_i$ is a Cauchy sequence and so has limit x in $K$. It is unique since if y is in $K$ and has norm δ, then$|x - y|^2 \le 2|x|^2 + 2|y|^2 - 4\delta^2 = 0$ and x = y. $\square$ - -Proof of theorem: - -Given disjoint nonempty convex sets A, B, let -$$ -K = A + (-B) = \{ x - y \mid x \in A, y \in B \}. -$$ - -Since $-B$ is convex and the sum of convex sets is convex, $K$ is convex. By the lemma, the closure $\overline{K}$ of $K$, which is convex, contains a vector $v$ of minimum norm. Since $\overline{K}$ is convex, for any $n$ in $K$, the line segment -$$ -v + t(n - v), 0 \le t \le 1 -$$ - -lies in $\overline{K}$ and so -$$ -|v|^2 \le |v + t(n - v)|^2 = |v|^2 + 2 t \langle v, n - v \rangle + t^2|n-v|^2 -$$. - -For $0 < t \le 1$, we thus have: -$$ -0 \le 2 \langle v, n \rangle - 2 |v|^2 + t|n-v|^2 -$$ - -and letting $t \to 0$ gives: $\langle n, v \rangle \ge |v|^2$. Hence, for any x in A and y in B, we have: $\langle x - y, v \rangle \ge |v|^2$. Thus, if v is nonzero, the proof is complete since -$$ -\inf_{x \in A} \langle x, v \rangle \ge |v|^2 + \sup_{y \in B} \langle y, v \rangle. -$$ - -More generally (covering the case v = 0), let us first take the case when the interior of $K$ is nonempty. The interior can be exhausted by a nested sequence of nonempty compact convex subsets $K_1\subset K_2\subset K_3\subset\cdots$ (namely, put $ K_j \equiv [-j,j]^n \cap \{ x \in \text{int}(K) : d(x, (\text{int}(K))^c) \ge \frac{1}{j} \} $). Since 0 is not in $K$, each $K_n$ contains a nonzero vector $v_n$ of minimum length and by the argument in the early part, we have: $\langle x, v_n \rangle \ge 0$ for any $x \in K_n$. We can normalize the $v_n$'s to have length one. Then the sequence $v_n$ contains a convergent subsequence (because the n-sphere is compact) with limit v, which is nonzero. We have $\langle x, v \rangle \ge 0$ for any x in the interior of $K$ and by continuity the same holds for all x in $K$. We now finish the proof as before. Finally, if $K$ has empty interior, the affine set that it spans has dimension less than that of the whole space. Consequently $K$ is contained in some hyperplane $\langle \cdot, v \rangle = c$; thus, $\langle x, v \rangle \ge c$ for all x in $K$ and we finish the proof as before. $\square$ - -The number of dimensions must be finite. In infinite-dimensional spaces there are examples of two closed, convex, disjoint sets which cannot be separated by a closed hyperplane (a hyperplane where a continuous linear functional equals some constant) even in the weak sense where the inequalities are not strict. 
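To make the minimum-norm construction in the proof concrete, here is a small numeric sketch (an illustration of ours, not part of the article) for two finite point clouds standing in for compact convex sets via their convex hulls. It uses NumPy and approximates the minimum-norm point of $K = A + (-B)$ with the Frank–Wolfe method, whose linear subproblem over a convex hull is solved exactly at a vertex.

```python
# Approximate the minimum-norm point v of K = conv(A) + (-conv(B)) and check
# that v separates: by the proof, <a - b, v> >= |v|^2 for all a in A, b in B.
import numpy as np

A = np.array([[2.0, 1.0], [3.0, 2.5], [4.0, 1.5]])     # two disjoint point clouds
B = np.array([[-1.0, -0.5], [-2.0, 0.5], [-1.5, -2.0]])

K = (A[:, None, :] - B[None, :, :]).reshape(-1, 2)     # vertices a - b of K

x = K[0].copy()
for t in range(2000):
    s = K[np.argmin(K @ x)]           # vertex of K minimizing <x, .>
    x += 2.0 / (t + 2) * (s - x)      # standard Frank-Wolfe step size

v = x                                 # approximately the minimum-norm point of K
assert (A @ v).min() > (B @ v).max()  # v is a separating direction
print((A @ v).min(), (B @ v).max())
```

With these inputs the minimum-norm point is $(3, 1.5)$, and the projections of the two clouds onto $v$ are separated by exactly the kind of gap the compact version of the theorem promises.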
The above proof also proves the first version of the theorem mentioned in the lede (to see it, note that $K$ in the proof is closed under the hypothesis of the theorem below.) - -Let A and B be two disjoint nonempty closed convex sets, one of which is compact. Then there exist a nonzero vector v and real numbers $c_1 < c_2$ such that -$$ -\langle x, v \rangle > c_2 \text{ and } \langle y, v \rangle < c_1 -$$ - -for all x in A and y in B. - -Here, the compactness in the hypothesis cannot be relaxed; see an example in the next section. This version of the separation theorem does generalize to infinite dimensions; the generalization is more commonly known as the Hahn–Banach separation theorem. - -We also have: - -Let A and B be two disjoint nonempty convex sets. If A is open, then there exist a nonzero vector v and real number $c$ such that -$$ -\langle x, v \rangle > c \text{ and } \langle y, v \rangle \le c -$$ - -for all x in A and y in B. If both sets are open, then there exist a nonzero vector v and real number $c$ such that -$$ -\langle x, v \rangle > c \text{ and } \langle y, v \rangle < c -$$ - -for all x in A and y in B. - -The promised example showing that compactness cannot be dropped from the first version is given by the closed convex sets -$$ -A = \{ (x, y) : y \le 0 \}, \qquad B = \{ (x,y) : x > 0, y \geq 1/x \}. -$$ - -These are disjoint, but the distance between them is zero, so they cannot be separated by two parallel hyperplanes with a gap. (Although, by an instance of the second theorem, there is a hyperplane that separates their interiors.) Another type of counterexample has A compact and B open. For example, A can be a closed square and B can be an open square that touches A. - -In the first version of the theorem, evidently the separating hyperplane is never unique. In the second version, it may or may not be unique. Technically a separating axis is never unique because it can be translated; in the second version of the theorem, a separating axis can be unique up to translation. - -The separating axis theorem (SAT) says that: - -Two convex objects do not overlap if there exists a line (called an axis) onto which the two objects' projections do not overlap. - -SAT suggests an algorithm for testing whether two convex solids intersect or not. - -Regardless of dimensionality, the separating axis is always a line. - -For example, in 3D, the space is separated by planes, but the separating axis is perpendicular to the separating plane. - -The separating axis theorem can be applied for fast collision detection between polygon meshes. Each face's normal or other feature direction is used as a separating axis. Note that this yields possible separating axes, not separating lines/planes. - -In 3D, using face normals alone will fail to separate some edge-on-edge non-colliding cases. Additional axes, consisting of the cross-products of pairs of edges, one taken from each object, are required. - -For increased efficiency, parallel axes may be calculated as a single axis. - -*Dual cone - -*Farkas's lemma - -*Kirchberger's theorem - -*Optimal control diff --git a/wiki/wikipedia/3549.txt b/wiki/wikipedia/3549.txt deleted file mode 100644 index 9e43535b1565dd35ac8b184bda1f23c4d66ac83b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3549.txt +++ /dev/null @@ -1,7 +0,0 @@ -In graph theoretic mathematics, a strangulated graph is a graph in which deleting the edges of any induced cycle of length greater than three would disconnect the remaining graph. That is, they are the graphs in which every peripheral cycle is a triangle. - -In a maximal planar graph, or more generally in every polyhedral graph, the peripheral cycles are exactly the faces of a planar embedding of the graph, so a polyhedral graph is strangulated if and only if all the faces are triangles, or equivalently it is maximal planar.
Every chordal graph is strangulated, because the only induced cycles in chordal graphs are triangles, so there are no longer cycles to delete. - -A clique-sum of two graphs is formed by identifying together two equal-sized cliques in each graph, and then possibly deleting some of the clique edges. For the version of clique-sums relevant to strangulated graphs, the edge deletion step is omitted. A clique-sum of this type between two strangulated graphs results in another strangulated graph, for every long induced cycle in the sum must be confined to one side or the other (otherwise it would have a chord between the vertices at which it crossed from one side of the sum to the other), and the disconnected parts of that side formed by deleting the cycle must remain disconnected in the clique-sum. Every chordal graph can be decomposed in this way into a clique-sum of complete graphs, and every maximal planar graph can be decomposed into a clique-sum of 4-vertex-connected maximal planar graphs. - -As Seymour showed, these are the only possible building blocks of strangulated graphs: the strangulated graphs are exactly the graphs that can be formed as clique-sums of complete graphs and maximal planar graphs. diff --git a/wiki/wikipedia/355.txt b/wiki/wikipedia/355.txt deleted file mode 100644 index 3e4d57ab74e605d22e2f1a9c63a70249c32a8190..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/355.txt +++ /dev/null @@ -1,84 +0,0 @@ -In graph theory, a nowhere-zero flow or NZ flow is a network flow that is nowhere zero. It is intimately connected (by duality) to coloring planar graphs. - -Let G = (V,E) be a digraph and let M be an abelian group. A map φ: E → M is an M-circulation if for every vertex v ∈ V -$$ -\sum_{e \in \delta^+(v)} \phi(e) = \sum_{e \in \delta^-(v)} \phi(e), -$$ - -where δ+(v) denotes the set of edges out of v and δ−(v) denotes the set of edges into v. Sometimes, this condition is referred to as Kirchhoff's law. - -If φ(e) ≠ 0 for every e ∈ E, we call φ a nowhere-zero flow, an M-flow, or an NZ-flow. If k is an integer and 0 < |φ(e)| < k then φ is a k-flow. - -Let G = (V,E) be an undirected graph. An orientation of E is a modular k-flow if for every vertex v ∈ V we have: -$$ -|\delta^+(v)| \equiv |\delta^-(v)| \bmod k. -$$ - -* The set of M-flows does not necessarily form a group as the sum of two flows on one edge may add to 0. - -* (Tutte 1950) A graph G has an M-flow if and only if it has a |M|-flow. As a consequence, a $\Z_k$ flow exists if and only if a k-flow exists. As a consequence if G admits a k-flow then it admits an h-flow where $h \geq k$. - -* Orientation independence. Modify a nowhere-zero flow φ on a graph G by choosing an edge e, reversing it, and then replacing φ(e) with −φ(e). After this adjustment, φ is still a nowhere-zero flow. Furthermore, if φ was originally a k-flow, then the resulting φ is also a k-flow. Thus, the existence of a nowhere-zero M-flow or a nowhere-zero k-flow is independent of the orientation of the graph. Thus, an undirected graph G is said to have a nowhere-zero M-flow or nowhere-zero k-flow if some (and thus every) orientation of G has such a flow. - -Let $N_M(G)$ be the number of M-flows on G. It satisfies the deletion–contraction formula: -$$ -N_M(G) = N_M(G/ e) - N_M(G\setminus e). -$$ - -Combining this with induction we can show $N_M(G)$ is a polynomial in $|M|-1$ where $|M|$ is the order of the group M. We call $N_M(G)$ the flow polynomial of G and abelian group M.
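The recurrence just given can be run directly. Below is a small brute-force sketch (ours, not from the article) that evaluates $N_M(G)$ at $k = |M|$; beyond the stated deletion–contraction rule it uses only two standard base facts: the edgeless graph has exactly one (empty) flow, and a loop can carry any of the $k-1$ nonzero group values.

```python
# Evaluate the flow polynomial N_M(G) at k = |M| by deletion-contraction:
# N(G) = N(G/e) - N(G\e) for a non-loop edge e. Exponential time; toy graphs only.
def flow_poly(edges, k):
    if not edges:
        return 1                      # the empty flow on an edgeless graph
    (u, v), rest = edges[0], edges[1:]
    if u == v:                        # a loop independently takes any of the k-1 nonzero values
        return (k - 1) * flow_poly(rest, k)
    # contract e: merge vertex v into u (parallel copies of e become loops)
    contracted = [(u if a == v else a, u if b == v else b) for (a, b) in rest]
    return flow_poly(contracted, k) - flow_poly(rest, k)

triangle = [(0, 1), (1, 2), (2, 0)]
assert flow_poly(triangle, 5) == 4    # a single cycle admits k - 1 flows

k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(flow_poly(k4, 4))               # 6 = (4-1)(4-2)(4-3), for the self-dual planar K4
```

Graphs with a bridge come out as 0 automatically, matching the fact noted below that no graph with a bridge has a nowhere-zero flow.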
The above implies that two groups of equal order have an equal number of NZ flows. The order is the only group parameter that matters, not the structure of M. In particular $N_{M_1}(G) = N_{M_2}(G)$ if $|M_1| = |M_2|.$ - -The above results were proved by Tutte in 1953 when he was studying the Tutte polynomial, a generalization of the flow polynomial. - -There is a duality between k-face colorings and k-flows for bridgeless planar graphs. To see this, let G be a directed bridgeless planar graph with a proper k-face-coloring with colors $\{0, 1, \ldots, k-1\}.$ Construct a map -$$ -\phi: E(G)\to \{-(k-1), \ldots, -1, 0, 1, \ldots, k-1\} -$$ - -by the following rule: if the edge e has a face of color x to the left and a face of color y to the right, then let φ(e) = x – y. Then φ is a (NZ) k-flow since x and y must be different colors. - -So if G and G* are planar dual graphs and G* is k-colorable (there is a coloring of the faces of G), then G has a NZ k-flow. Using induction on |E(G)| Tutte proved the converse is also true. This can be expressed concisely as: -$$ -\chi(G^*) = \phi(G), -$$ - -where the RHS is the flow number, the smallest k for which G permits a k-flow. - -The duality is true for general M-flows as well: - -* Let $c$ be the face-coloring function with values in M. - -* Define $\phi_c(e) = c(r_1) - c(r_2)$ where r1 is the face to the left of e and r2 is to the right. - -* For every M-circulation $\phi$ there is a coloring function c such that $\phi = \phi_c$ (proven by induction). - -* c is a proper face-coloring if and only if $\phi_c$ is a NZ M-flow (straightforward). - -The duality follows by combining the last two points. We can specialize to $M = \Z_k$ to obtain the similar results for k-flows discussed above. Given this duality between NZ flows and colorings, and since we can define NZ flows for arbitrary graphs (not just planar), we can use this to extend face-colorings to non-planar graphs. - -* G is 2-face-colorable if and only if every vertex has even degree (consider NZ 2-flows). - -* Let $K = \Z_2 \times \Z_2$ be the Klein-4 group. Then a cubic graph has a K-flow if and only if it is 3-edge-colorable. As a corollary a cubic graph that is 3-edge colorable is 4-face colorable. - -*A graph is 4-face colorable if and only if it permits a NZ 4-flow (see Four color theorem). The Petersen graph does not have a NZ 4-flow, and this led to the 4-flow conjecture (see below).
- -The converse of the 4-flow Conjecture does not hold since the complete graph K11 contains a Petersen graph and a 4-flow. For bridgeless cubic graphs with no Petersen minor, 4-flows exist by the snark theorem (Seymour, et al 1998, not yet published). The four color theorem is equivalent to the statement that no snark is planar. diff --git a/wiki/wikipedia/3550.txt b/wiki/wikipedia/3550.txt deleted file mode 100644 index ed8f2ca26903548ccf40c3769d8e994a8b16ea53..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3550.txt +++ /dev/null @@ -1,13 +0,0 @@ -In Euclidean geometry, Musselman's theorem is a property of certain circles defined by an arbitrary triangle. - -Specifically, let $T$ be a triangle, and $A$, $B$, and $C$ its vertices. Let $A^*$, $B^*$, and $C^*$ be the vertices of the reflection triangle $T^*$, obtained by mirroring each vertex of $T$ across the opposite side. Let $O$ be the circumcenter of $T$. Consider the three circles $S_A$, $S_B$, and $S_C$ defined by the points $AOA^*$, $BOB^*$, and $COC^*$, respectively. The theorem says that these three Musselman circles meet in a point $M$, that is the inverse with respect to the circumcenter of $T$ of the isogonal conjugate or the nine-point center of $T$. - -The common point $M$ is point $X_{1157}$ in Clark Kimberling's list of triangle centers. - -The theorem was proposed as an advanced problem by John Rogers Musselman and René Goormaghtigh in 1939, and a proof was presented by them in 1941. A generalization of this result was stated and proved by Goormaghtigh. - -The generalization of Musselman's theorem by Goormaghtigh does not mention the circles explicitly. - -As before, let $A$, $B$, and $C$ be the vertices of a triangle $T$, and $O$ its circumcenter. Let $H$ be the orthocenter of $T$, that is, the intersection of its three altitude lines. Let $A'$, $B'$, and $C'$ be three points on the segments $OA$, $OB$, and $OC$, such that $OA'/OA=OB'/OB=OC'/OC = t$. Consider the three lines $L_A$, $L_B$, and $L_C$, perpendicular to $OA$, $OB$, and $OC$ though the points $A'$, $B'$, and $C'$, respectively. Let $P_A$, $P_B$, and $P_C$ be the intersections of these perpendicular with the lines $BC$, $CA$, and $AB$, respectively. - -It had been observed by Joseph Neuberg, in 1884, that the three points $P_A$, $P_B$, and $P_C$ lie on a common line $R$. Let $N$ be the projection of the circumcenter $O$ on the line $R$, and $N'$ the point on $ON$ such that $ON'/ON = t$. Goormaghtigh proved that $N'$ is the inverse with respect to the circumcircle of $T$ of the isogonal conjugate of the point $Q$ on the Euler line $OH$, such that $QH/QO = 2t$. diff --git a/wiki/wikipedia/3551.txt b/wiki/wikipedia/3551.txt deleted file mode 100644 index b50a70c68efefdd89fd47980f96d40d79a8ed7ef..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3551.txt +++ /dev/null @@ -1,83 +0,0 @@ -Evernote is an app designed for note taking, organizing, task management, and archiving. It is developed by the Evernote Corporation, headquartered in Redwood City, California. The app allows users to create notes, which can be text, drawings, photographs, audio, or saved web content. Notes are stored in notebooks and can be tagged, annotated, edited, searched, given attachments, and exported. - -Evernote is cross-platform, for Android, iOS, macOS, and Microsoft Windows. It is free to use with monthly usage limits, and offers paid plans for expanded or lifted limits. - -Evernote version 10 is a complete re-write of desktop clients. 
When released, it removed almost all preferences, and with them the ability to adjust the application to user needs. This includes the ability to change global (system-wide) shortcuts, which caused particular problems for non-English speaking users. In the Evernote discussion forum it was indicated in late 2020 that updates to v10 were forthcoming to resolve the issues (v10.5.7 had addressed some, but not all, of the problems), with suggestions that users revert to an earlier version. - -Evernote was founded in 2000 by Russian-American computer entrepreneur Stepan Pachikov. In October 2015, the Evernote Corp. announced that the company was laying off 18 percent of its workforce and would be closing three out of 10 global offices. In September 2016, Libin stepped down as Executive Chairman to focus on other business ventures. In February 2017, CEO O'Neill stated in a blog post that the business was now cash-flow positive. - -As well as the keyboard entry of typed notes, Evernote supports image capture from cameras on supported devices, and the recording of voice notes. In some situations, text that appears in captured images can be recognized using OCR and annotated. Evernote also supports touch and tablet screens with handwriting recognition. Evernote web-clipping plugins are available for the most popular Internet browsers that allow marked sections of webpages to be captured and clipped to Evernote. If no section of a webpage has been highlighted, Evernote can clip the full page. Evernote also supports the ability to e-mail notes to the service, allowing for automated note entry via e-mail rules or filters. - -Where suitable hardware is available, Evernote can automatically add geolocation tags to notes. - -As of November 2018, Evernote Pro integrates directly with Google Drive, Microsoft Outlook, Microsoft Teams, and Slack, and Evernote Pro adds an integration with Salesforce. All versions of Evernote also support integrations through IFTTT and Zapier. In 2013, Evernote deprecated its direct integration with Twitter in favor of these third-party services. - - - -File:Evernote Information Model.png|Information model for Evernote - -File:Information Type Note.png|Information type model for notes in Evernote - -File:Information Type User.png|Information type model for users in Evernote - - - -On supported operating systems, Evernote allows users to store and edit notes on their local machine, using a SQLite database in Windows. - -Users with Internet access and an Evernote account can also have their notes automatically synchronized with a master copy held on Evernote's servers. This approach lets a user access and edit their data across multiple machines and operating system platforms, but still view, input and edit data when an Internet connection is not available. However, notes stored on Evernote servers are not encrypted. - -Where Evernote client software is not available, online account-holders can access their note archive via a web interface or through a media device. The service also allows selected files to be shared for viewing and editing by other users. - -The Evernote software can be downloaded and used as "stand-alone" software without using the online portion of an Evernote account (online registration is required for initial setup, however), but it will not be able to upload files to the Evernote server, or use the server to synchronize or share files between different Evernote installations.
Also, no image or Image-PDF (Premium only) recognition and indexing will take place if the software is used entirely offline. - -Evernote is a free online service that allows users to upgrade to Premium or Business accounts. All Free, Plus and Premium Evernote accounts have a maximum limit of 100,000 notes and 250 notebooks. - -Basic customers can upload 60 MB of data each month. Plus customers get a 1 GB upload limit, offline notes on mobile devices, as well as passcode lock for mobile devices. Emails can also be sent to their Evernote account. - -Premium subscribers are granted 10 GB of new uploaded data every month, faster word recognition in images, heightened security, PDF annotation, Context (which surfaces notes and news articles related to the open note), and the ability to search text within PDF documents. They also receive additional options for notebook sharing. Each of the free, Plus, and Premium account types allows notebook sharing with other Evernote users; however, the accounts are distinguished by editing capabilities. With regard to shared notebooks, editing permissions for non-paid account holders may only be granted by premium Evernote subscribers. The free service does not make files available offline on iOS and Android devices; while sometimes they are available from cache, editing these files can cause conflicts when synchronizing. - -With the full version of Evernote Business, users sync and view work documents through a variety of platforms, such as Mac, iPhone and iPad, Web, Windows, and Android devices. Files that can be uploaded include spreadsheets, presentations, notes and design mock-ups. In addition, administrators can monitor company progress and individual employees through the admin console. - -In June 2016, Evernote announced that users of its free Basic account would be limited to two devices per year, and raised prices for its premium service tiers. Non-paying Evernote users are able to sync notes between two devices. Plus lets users store notes offline and upload up to 1GB files, while Premium adds document-parsing features and 10GB of additional storage. - -From early April 2018, Evernote Plus was no longer available for purchase; however, users who already had a Plus subscription could maintain it as long as it remained active. - -Evernote clients are available for Android, iOS (iPad, iPhone, and iPod Touch), macOS, Microsoft Windows, and Web. Additionally, portable versions of Evernote are available for flash drives and U3 drives. There is currently no officially supported native client for BSD or Linux, but the company provides an API for external Linux clients. - -There is substantial variation in supported features on different platforms. For example, it is possible to edit Rich Text Format and sketches on Windows; on Mac it is possible to edit rich text, but only view sketches; and on the iPad only plain text could be edited prior to version 4.1.0 (August 2011). - -Web clipping support is installed by default on the Internet Explorer and Safari browsers when the Evernote software is installed under Windows or macOS. Evernote web-clipping plugins are also available for the Firefox, Google Chrome, Opera, and Yandex Browsers, and need to be downloaded and installed separately from the respective browser. - -The Evernote email-clipper is automatically installed in Microsoft Office Outlook if the desktop version is installed on the same computer.
There is also a Thunderbird email plugin, which must be installed separately from the Thunderbird client. - -Scannable is a companion app that captures paper quickly, transforming it into high-quality scans ready to save or share. - -Skitch is a free screenshot editing and sharing utility for OS X, iOS, Windows, and Android. The app permits the user to add shapes and text to an image, and then share it online. Images can also be exported to various image formats. Originally developed by Plasq, Skitch was acquired by Evernote on August 18, 2011. On December 17, 2015, Evernote announced that it would be ending support for Skitch for Windows, Windows Touch, iOS, and Android on January 22, 2016. Evernote said it would continue to offer Skitch for Mac. - -On August 13, 2013, The New York Times reported that Telefónica Digital and Evernote entered into a global partnership agreement, giving Brazilian customers free access to Evernote Premium for one year. Under this global deal Telefónica users in Costa Rica, Guatemala, Panama, the UK and Spain were also offered the promotion. - -The service has experienced several cases of losing customer data. - -On June 11, 2014, Evernote suffered a crippling distributed denial-of-service attack that prevented customers from accessing their information; the attackers demanded a ransom, which Evernote refused to pay. A denial-of-service attack on August 8, 2014, resulted in a brief period of downtime for evernote.com; service was quickly restored. - -On March 2, 2013, Evernote revealed that hackers had gained access to their network and been able to access user information, including usernames, email addresses, and hashed passwords. All users were asked to reset their passwords. - -In the wake of this, Evernote accelerated plans to implement optional two-factor authentication for all users. - -In December 2016, Evernote announced its privacy policy would be changing in January 2017, leading to claims the policy allowed employees of the firm to access users' content in some situations. In response to the concerns, Evernote apologised and announced the policy would not be implemented, and that its employees would not have access to users' content unless users opted in. - -In late 2020, Evernote released Evernote v10, written from scratch on the Electron framework, to replace older versions on multiple platforms. Some users noted the new app was much slower than the legacy Windows and iOS apps, had many features removed, and did not work with some default keyboard layouts (such as Turkish, Latvian, and Polish) due to conflicts with hardcoded key bindings. Evernote v10 provided no option to disable or change hotkeys.
diff --git a/wiki/wikipedia/3552.txt b/wiki/wikipedia/3552.txt deleted file mode 100644 index 42dd0c424585bb198de96fe67b55add03d5b3859..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3552.txt +++ /dev/null @@ -1,3 +0,0 @@ -#redirect Synchronizing word - -Category:Conjectures diff --git a/wiki/wikipedia/3553.txt b/wiki/wikipedia/3553.txt deleted file mode 100644 index 1dfc0e50a7fa02e65e013af7f23ea3c553afef28..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3553.txt +++ /dev/null @@ -1,20 +0,0 @@ -In geometry, Napoleon's theorem states that if equilateral triangles are constructed on the sides of any triangle, either all outward or all inward, the lines connecting the centres of those equilateral triangles themselves form an equilateral triangle. - -The triangle thus formed is called the inner or outer Napoleon triangle. The difference in the areas of the outer and inner Napoleon triangles equals the area of the original triangle. - -The theorem is often attributed to Napoleon Bonaparte (1769–1821). Some have suggested that it may date back to W. Rutherford's 1825 question published in The Ladies' Diary, four years after the French emperor's death, but the result is covered in three questions set in an examination for a Gold Medal at the University of Dublin in October 1820, whereas Napoleon died the following May. - -In the figure above, ABC is the original triangle. AZB, BXC, and CYA are equilateral triangles constructed on its sides' exteriors, and points L, M, and N are the centroids of those triangles. The theorem for outer triangles states that triangle LMN (green) is equilateral. - -A quick way to see that the triangle LMN is equilateral is to observe that MN becomes CZ under a clockwise rotation of 30° around A and a homothety of ratio $\sqrt{3}$ with the same center, and that LN also becomes CZ after a counterclockwise rotation of 30° around B and a homothety of ratio $\sqrt{3}$ with the same center. The respective spiral similarities are A($\sqrt{3}$, −30°) and B($\sqrt{3}$, 30°). That implies MN = LN and the angle between them must be 60°. - -There are in fact many proofs of the theorem's statement, including a synthetic (coordinate-free) one and a trigonometric one; the latter shows that each of the three sides of the outer Napoleon triangle has a length of -$$ -\text{Side(outer)}=\sqrt{{a^2+b^2+c^2 \over 6} + {\sqrt{(a+b+c)(a+b-c)(a-b+c)(-a+b+c)} \over {2\sqrt{3}}}}. -$$ - -This side length is related to the area statements above by the fact that the area of an equilateral triangle equals the square of the side times $\sqrt{3} / 4.$ - -If isosceles triangles with apex angles 2kπ/n are erected on the sides of an arbitrary n-gon A0, and if this process is repeated with the n-gon formed by the free apices of the triangles, but with a different value of k, and so on until all values 1 ≤ k ≤ n − 2 have been used (in arbitrary order), then a regular n-gon An−2 is formed whose centroid coincides with the centroid of A0. - -The centers of regular n-gons constructed over the sides of an n-gon P form a regular n-gon if and only if P is an affine image of a regular n-gon.
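The side-length formula above is easy to check numerically. Below is a minimal Python sketch (the triangle coordinates and helper names are illustrative choices, not from the article): it erects equilateral triangles outward on each side of a counterclockwise-oriented triangle, takes their centroids, and compares the resulting side lengths with the formula. For a clockwise triangle the rotation sign would flip.

```python
import numpy as np

def rotate(p, center, theta):
    """Rotate point p about center by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    d = p - center
    return center + np.array([c * d[0] - s * d[1], s * d[0] + c * d[1]])

# An arbitrary counterclockwise-oriented triangle.
A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])

def outer_center(P, Q):
    """Centroid of the equilateral triangle erected outward on edge P->Q (CCW triangle)."""
    apex = rotate(Q, P, -np.pi / 3)   # -60 degrees puts the apex outside a CCW triangle
    return (P + Q + apex) / 3

L, M, N = outer_center(B, C), outer_center(C, A), outer_center(A, B)

sides = [np.linalg.norm(L - M), np.linalg.norm(M - N), np.linalg.norm(N - L)]
a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)
formula = np.sqrt((a**2 + b**2 + c**2) / 6
                  + np.sqrt((a+b+c) * (a+b-c) * (a-b+c) * (-a+b+c)) / (2 * np.sqrt(3)))
print(sides)     # all three ~3.7765, so LMN is equilateral
print(formula)   # matches the common side length above
```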
diff --git a/wiki/wikipedia/3554.txt b/wiki/wikipedia/3554.txt deleted file mode 100644 index 2cb04fae991f80315575f3d1a7f68847e92b3e60..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3554.txt +++ /dev/null @@ -1,81 +0,0 @@ -In computability theory, computational complexity theory and proof theory, a fast-growing hierarchy (also called an extended Grzegorczyk hierarchy) is an ordinal-indexed family of rapidly increasing functions fα: N → N (where N is the set of natural numbers {0, 1, ...}, and α ranges up to some large countable ordinal). A primary example is the Wainer hierarchy, or Löb–Wainer hierarchy, which is an extension to all α < ε0. Such hierarchies provide a natural way to classify computable functions according to rate-of-growth and computational complexity. - -Let μ be a large countable ordinal such that to every limit ordinal α < μ there is assigned a fundamental sequence (a strictly increasing sequence of ordinals whose supremum is α). A fast-growing hierarchy of functions fα: N → N, for α < μ, is then defined as follows: - -*$ f_0(n) = n + 1,$ - -*$ f_{\alpha+1}(n) = f_\alpha^n(n),$ - -*$ f_\alpha(n) = f_{\alpha[n]}(n) $ if α is a limit ordinal. - -Here $f_\alpha^n(n) = f_\alpha(f_\alpha(\cdots(f_\alpha(n))\cdots))$ denotes the nth iterate of $f_\alpha$ applied to n, and α[n] denotes the nth element of the fundamental sequence assigned to the limit ordinal α. (An alternative definition takes the number of iterations to be n+1, rather than n, in the second line above.) - -The initial part of this hierarchy, comprising the functions fα with finite index (i.e., α < ω), is often called the Grzegorczyk hierarchy because of its close relationship to the Grzegorczyk hierarchy; note, however, that the former is here an indexed family of functions fn, whereas the latter is an indexed family of sets of functions $\mathcal{E}^n$. (See Points of Interest below.) - -Generalizing the above definition even further, a fast iteration hierarchy is obtained by taking f0 to be any increasing function g: N → N. - -For limit ordinals not greater than ε0, there is a straightforward natural definition of the fundamental sequences (see the Wainer hierarchy below), but beyond ε0 the definition is much more complicated. However, this is possible well beyond the Feferman–Schütte ordinal, Γ0, up to at least the Bachmann–Howard ordinal. Using Buchholz psi functions one can extend this definition easily to the ordinal of transfinitely iterated $\Pi^1_1$-comprehension (see Analytical hierarchy). - -A fully specified extension beyond the recursive ordinals is thought to be unlikely; e.g., Prömel et al. [1991] (p. 348) note that in such an attempt "there would even arise problems in ordinal notation". - -The Wainer hierarchy is the particular fast-growing hierarchy of functions fα (α ≤ ε0) obtained by defining the fundamental sequences as follows [Gallier 1991][Prömel, et al., 1991]: - -For limit ordinals λ < ε0, written in Cantor normal form, - -* if $\lambda = \omega^{\alpha_1} + \cdots + \omega^{\alpha_{k-1}} + \omega^{\alpha_k}$ for $\alpha_1 \geq \cdots \geq \alpha_{k-1} \geq \alpha_k$, then $\lambda[n] = \omega^{\alpha_1} + \cdots + \omega^{\alpha_{k-1}} + \omega^{\alpha_k}[n]$, - -* if $\lambda = \omega^{\alpha+1}$, then $\lambda[n] = \omega^\alpha n$, - -* if $\lambda = \omega^\alpha$ for a limit ordinal α, then $\lambda[n] = \omega^{\alpha[n]}$, - -and - -* if λ = ε0, take $\lambda[0] = 0$ and $\lambda[n+1] = \omega^{\lambda[n]}$ as in [Gallier 1991]; alternatively, take the same sequence except starting with λ[0] = 1 as in [Prömel, et al., 1991]. For n > 0, the alternative version has one additional ω in the resulting exponential tower, i.e. λ[n] is a tower $\omega^{\omega^{\cdot^{\cdot^{\omega}}}}$ of n omegas. - -Some authors use slightly different definitions (e.g., $\omega^{\alpha+1}[n] = \omega^\alpha (n+1)$, instead of $\omega^\alpha n$), and some define this hierarchy only for α < ε0 (thus excluding fε0 from the hierarchy). - -To continue beyond ε0, see the Fundamental sequences for the Veblen hierarchy. - -Following are some relevant points of interest about fast-growing hierarchies: - -* Every fα is a total function. If the fundamental sequences are computable (e.g., as in the Wainer hierarchy), then every fα is a total computable function. - -* In the Wainer hierarchy, fα is dominated by fβ if α < β. (For any two functions f, g: N → N, f is said to dominate g if f(n) > g(n) for all sufficiently large n.) The same property holds in any fast-growing hierarchy with fundamental sequences satisfying the so-called Bachmann property. (This property holds for most natural well orderings.) - -* In the Grzegorczyk hierarchy, every primitive recursive function is dominated by some fα with α < ω. Hence, in the Wainer hierarchy, every primitive recursive function is dominated by fω, which is a variant of the Ackermann function. - -* For n ≥ 3, the set $\mathcal{E}^n$ in the Grzegorczyk hierarchy is composed of just those total multi-argument functions which, for sufficiently large arguments, are computable within time bounded by some fixed iterate $f_{n-1}^k$ evaluated at the maximum argument. - -* In the Wainer hierarchy, every fα with α < ε0 is computable and provably total in Peano arithmetic. - -* Every computable function that is provably total in Peano arithmetic is dominated by some fα with α < ε0 in the Wainer hierarchy. Hence fε0 in the Wainer hierarchy is not provably total in Peano arithmetic. - -* The Goodstein function has approximately the same growth rate (i.e. each is dominated by some fixed iterate of the other) as fε0 in the Wainer hierarchy, dominating every fα for which α < ε0, and hence is not provably total in Peano arithmetic. - -* In the Wainer hierarchy, if α < β < ε0, then fβ dominates every computable function within time and space bounded by some fixed iterate $f_\alpha^k$. - -* Friedman's TREE function dominates fΓ0 in a fast-growing hierarchy described by Gallier (1991). - -* The Wainer hierarchy of functions fα and the Hardy hierarchy of functions hα are related by $f_\alpha = h_{\omega^\alpha}$ for all α < ε0. The Hardy hierarchy "catches up" to the Wainer hierarchy at α = ε0, such that fε0 and hε0 have the same growth rate, in the sense that fε0(n−1) ≤ hε0(n) ≤ fε0(n+1) for all n ≥ 1. (Gallier 1991) - -* Girard and Cichon & Wainer (1983) showed that the slow-growing hierarchy of functions gα attains the same growth rate as the function fε0 in the Wainer hierarchy when α is the Bachmann–Howard ordinal. Girard (1981) further showed that the slow-growing hierarchy gα attains the same growth rate as fα (in a particular fast-growing hierarchy) when α is the ordinal of the theory ID of arbitrary finite iterations of an inductive definition. (Wainer 1989) - -The functions at finite levels (α < ω) of any fast-growing hierarchy coincide with those of the Grzegorczyk hierarchy (using hyperoperation; a short computational sketch follows this list): - -* $f_0(n) = n + 1 = 2[1]n - 1$ - -* $f_1(n) = f_0^n(n) = n + n = 2n = 2[2]n$ - -* $f_2(n) = f_1^n(n) = 2^n \cdot n > 2^n = 2[3]n$ for n ≥ 2 - -* $f_{k+1}(n) = f_k^n(n) > (2[k+1])^n n \geq 2[k+2]n$ for n ≥ 2, k < ω.
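As a quick illustration of the finite levels, here is a small Python sketch (the function name is ours) that implements $f_{\alpha+1}(n) = f_\alpha^n(n)$ directly for natural-number indices. The values grow exactly as the hyperoperation bounds above suggest, so only tiny inputs are feasible.

```python
def f(alpha, n):
    """Fast-growing hierarchy at finite levels: f_0(n) = n + 1, f_{a+1}(n) = f_a^n(n)."""
    if alpha == 0:
        return n + 1
    m = n
    for _ in range(n):          # apply f_{alpha-1} exactly n times, starting at n
        m = f(alpha - 1, m)
    return m

print(f(1, 5))   # 10   = 2 * 5
print(f(2, 3))   # 24   = 2^3 * 3
print(f(3, 2))   # 2048 = f_2(f_2(2)): already explosive at level 3
```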
- -Beyond the finite levels are the functions of the Wainer hierarchy (ω ≤ α ≤ ε0): - -* $f_\omega(n) = f_n(n) > 2[n+1]n > 2[n](n+3) - 3 = A(n, n)$ for n ≥ 4, where A is the Ackermann function (of which fω is a unary version). - -* $f_{\omega+1}(n) = f_\omega^n(n) \geq f_{n[n+2]n}(n)$ for all n > 0, where n[n + 2]n is the nth Ackermann number. - -* $f_{\omega+1}(64) > f_\omega^{64}(5) >$ Graham's number (= g64 in the sequence defined by g0 = 4, gk+1 = 3[gk + 2]3). This follows by noting $f_\omega(n) > 2[n+1]n > 3[n]3 + 2$, and hence $f_\omega(g_k + 2) > g_{k+1} + 2$. - -* fε0(n) is the first function in the Wainer hierarchy that dominates the Goodstein function. diff --git a/wiki/wikipedia/3555.txt b/wiki/wikipedia/3555.txt deleted file mode 100644 index e187786ed8488e0438f2cd87865f47fb42738e95..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3555.txt +++ /dev/null @@ -1,59 +0,0 @@ -In mathematics, the Johnson–Lindenstrauss lemma is a result named after William B. Johnson and Joram Lindenstrauss concerning low-distortion embeddings of points from high-dimensional into low-dimensional Euclidean space. The lemma states that a set of points in a high-dimensional space can be embedded into a space of much lower dimension in such a way that distances between the points are nearly preserved. The map used for the embedding is at least Lipschitz, and can even be taken to be an orthogonal projection. - -The lemma has applications in compressed sensing, manifold learning, dimensionality reduction, and graph embedding. Much of the data stored and manipulated on computers, including text and images, can be represented as points in a high-dimensional space (see vector space model for the case of text). However, the essential algorithms for working with such data tend to become bogged down very quickly as dimension increases. It is therefore desirable to reduce the dimensionality of the data in a way that preserves its relevant structure. The Johnson–Lindenstrauss lemma is a classic result in this vein. - -Also, the lemma is tight up to a constant factor, i.e. there exists a set of points of size m that needs dimension -$$ - \Omega \left(\frac{\log(m)}{\varepsilon^2}\right) -$$ - -in order to preserve the distances between all pairs of points within a factor of $(1 \pm \varepsilon)$. - -Given $0 < \varepsilon < 1$, a set $X$ of $m$ points in $\mathbb{R}^N$, and a number $n > 8 \ln (m)/\varepsilon^2$, there is a linear map $f: \mathbb{R}^N \rightarrow \mathbb{R}^n$ such that -$$ -(1-\varepsilon)\|u-v\|^2 \leq \|f(u) - f(v)\|^2 \leq (1+\varepsilon)\|u-v\|^2 -$$ - -for all $u,v \in X$. - -The formula can be rearranged: -$$ -(1+\varepsilon)^{-1}\|f(u)-f(v)\|^2 \leq \|u-v\|^2 \leq (1-\varepsilon)^{-1}\|f(u)-f(v)\|^2. -$$ - -One proof of the lemma takes ƒ to be a suitable multiple of the orthogonal projection onto a random subspace of dimension $n$ in $\mathbb{R}^N$, and exploits the phenomenon of concentration of measure. - -An orthogonal projection will, in general, reduce the average distance between points, but the lemma can be viewed as dealing with relative distances, which do not change under scaling. In a nutshell, you roll the dice and obtain a random projection, which will reduce the average distance, and then you scale up the distances so that the average distance returns to its previous value. If you keep rolling the dice, you will, in randomized polynomial time, find a projection for which the (scaled) distances satisfy the lemma. - -A related lemma is the distributional JL lemma.
This lemma states that for any $0 < \varepsilon, \delta < 1/2$ and positive integer $d$, there exists a distribution over $\mathbb{R}^{k \times d}$ from which the matrix $A$ is drawn such that for $k = O(\varepsilon^{-2}\log(1/\delta))$ and for any unit-length vector $x \in \mathbb{R}^d$, the claim below holds. -$$ - P(|\Vert Ax\Vert_2^2-1|>\varepsilon)<\delta -$$ - -One can obtain the JL lemma from the distributional version by setting $x = (u-v)/\|u-v\|_2$ and $\delta < 1/n^2$ for some pair u,v both in X. Then the JL lemma follows by a union bound over all such pairs. - -Given A, computing the matrix vector product takes O(kd) time. There has been some work in deriving distributions for which the matrix vector product can be computed in less than O(kd) time. - -There are two major lines of work. The first, Fast Johnson Lindenstrauss Transform (FJLT), was introduced by Ailon and Chazelle in 2006. - -This method allows the computation of the matrix vector product in just $d\log d + k^{2+\gamma}$ time for any constant $\gamma>0$. - -Another approach is to build a distribution supported over matrices that are sparse. - -This method allows keeping only an $\varepsilon$ fraction of the entries in the matrix, which means the computation can be done in just $kd\varepsilon$ time. - -Furthermore, if the vector has only $b$ non-zero entries, the Sparse JL takes time $kb\varepsilon$, which may be much less than the $d\log d$ time used by Fast JL. - -It is possible to combine two JL matrices by taking their so-called face-splitting product, defined as the tensor product of their corresponding rows (proposed by V. Slyusar in 1996 for radar and digital antenna array applications). - -More directly, let ${C}\in\mathbb R^{3\times 3}$ and ${D}\in\mathbb R^{3\times 3}$ be two matrices. Then the face-splitting product ${C}\bullet {D}$ is -$$ -C \bullet D = \begin{bmatrix} C_1 \otimes D_1 \\ C_2 \otimes D_2 \\ C_3 \otimes D_3 \end{bmatrix}, -$$ -where $C_i$ and $D_i$ denote the $i$-th rows of $C$ and $D$. - -In 2020 it was shown that if the matrices $C_1, C_2, \dots, C_c$ are independent $\pm1$ or Gaussian matrices, the combined matrix $C_1 \bullet \dots \bullet C_c$ satisfies the distributional JL lemma if the number of rows is at least -$$ -O(\epsilon^{-2}\log1/\delta + \epsilon^{-1}(\tfrac1c\log1/\delta)^c) -$$. - -For large $\epsilon$ this is as good as the completely random Johnson-Lindenstrauss, but - -a matching lower bound in the same paper shows that this exponential dependency on $(\log1/\delta)^c$ is necessary. - -Alternative JL constructions are suggested to circumvent this. diff --git a/wiki/wikipedia/3556.txt b/wiki/wikipedia/3556.txt deleted file mode 100644 index 49b9fc2307f04c300dac7f11a8481c7dd6262525..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3556.txt +++ /dev/null @@ -1,279 +0,0 @@ -The Banker's algorithm, sometimes referred to as the detection algorithm, is a resource allocation and deadlock avoidance algorithm developed by Edsger Dijkstra that tests for safety by simulating the allocation of predetermined maximum possible amounts of all resources, and then makes an "s-state" check to test for possible deadlock conditions for all other pending activities, before deciding whether allocation should be allowed to continue. - -The algorithm was developed in the design process for the THE operating system and originally described (in Dutch) in EWD108. When a new process enters a system, it must declare the maximum number of instances of each resource type that it may ever claim; clearly, that number may not exceed the total number of resources in the system. Also, when a process gets all its requested resources it must return them in a finite amount of time.
- -For the Banker's algorithm to work, it needs to know three things: - -*How much of each resource each process could possibly request [MAX] - -*How much of each resource each process is currently holding [ALLOCATED] - -*How much of each resource the system currently has available [AVAILABLE] - -Resources may be allocated to a process only if the amount of resources requested is less than or equal to the amount available; otherwise, the process waits until resources are available. - -Some of the resources that are tracked in real systems are memory, semaphores and interface access. - -The Banker's algorithm derives its name from the fact that this algorithm could be used in a banking system to ensure that the bank does not run out of resources, because the bank would never allocate its money in such a way that it can no longer satisfy the needs of all its customers. By using the Banker's algorithm, the bank ensures that when customers request money the bank never leaves a safe state. If the customer's request does not cause the bank to leave a safe state, the cash will be allocated, otherwise the customer must wait until some other customer deposits enough. - -Basic data structures to be maintained to implement the Banker's algorithm: - -Let $n$ be the number of processes in the system and $m$ be the number of resource types. Then we need the following data structures: - -* Available: A vector of length m indicates the number of available resources of each type. If Available[j] = k, there are k instances of resource type Rj available. - -* Max: An $n$ × $m$ matrix defines the maximum demand of each process. If Max[i,j] = k, then Pi may request at most k instances of resource type Rj. - -* Allocation: An $n$ × $m$ matrix defines the number of resources of each type currently allocated to each process. If Allocation[i,j] = k, then process Pi is currently allocated k instances of resource type Rj. - -* Need: An $n$ × $m$ matrix indicates the remaining resource need of each process. If Need[i,j] = k, then Pi may need k more instances of resource type Rj to complete the task. - -Note: Need[i,j] = Max[i,j] - Allocation[i,j]. A short sketch of this bookkeeping in code follows the example below. - -Total system resources are: - -A B C D - -6 5 7 6 - -Available system resources are: - -A B C D - -3 1 1 2 - -Processes (currently allocated resources): - -A B C D - -P1 1 2 2 1 - -P2 1 0 3 3 - -P3 1 2 1 0 - -Processes (maximum resources): - -A B C D - -P1 3 3 2 2 - -P2 1 2 3 4 - -P3 1 3 5 0 - -Need = maximum resources - currently allocated resources - -Processes (possibly needed resources): - -A B C D - -P1 2 1 0 1 - -P2 0 2 0 1 - -P3 0 1 4 0 - -A state (as in the above example) is considered safe if it is possible for all processes to finish executing (terminate). Since the system cannot know when a process will terminate, or how many resources it will have requested by then, the system assumes that all processes will eventually attempt to acquire their stated maximum resources and terminate soon afterward. This is a reasonable assumption in most cases since the system is not particularly concerned with how long each process runs (at least not from a deadlock avoidance perspective). Also, if a process terminates without acquiring its maximum resources, it only makes it easier on the system.
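The Need relation above is easy to reproduce; here is a minimal numpy sketch of the bookkeeping for the example state (the array names are ours, chosen for readability):

```python
import numpy as np

total      = np.array([6, 5, 7, 6])            # total system resources A B C D
allocation = np.array([[1, 2, 2, 1],           # P1
                       [1, 0, 3, 3],           # P2
                       [1, 2, 1, 0]])          # P3
maximum    = np.array([[3, 3, 2, 2],
                       [1, 2, 3, 4],
                       [1, 3, 5, 0]])

available = total - allocation.sum(axis=0)     # [3 1 1 2], as in the example
need = maximum - allocation                    # Need = Max - Allocation
print(available)
print(need)                                    # [[2 1 0 1] [0 2 0 1] [0 1 4 0]]
```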
- -Given that assumption, the algorithm determines if a state is safe by trying to find a hypothetical set of requests by the processes that would allow each to acquire its maximum resources and then terminate (returning its resources to the system). Any state where no such set exists is an unsafe state. - -We can show that the state given in the previous example is a safe state by showing that it is possible for each process to acquire its maximum resources and then terminate. - -#P1 needs 2 A, 1 B and 1 D more resources, achieving its maximum - -#*[available resource: <3 1 1 2> - <2 1 0 1> = <1 0 1 1>] - -#*The system now still has 1 A, no B, 1 C and 1 D resource available - -#P1 terminates, returning 3 A, 3 B, 2 C and 2 D resources to the system - -#*[available resource: <1 0 1 1> + <3 3 2 2> = <4 3 3 3>] - -#*The system now has 4 A, 3 B, 3 C and 3 D resources available - -#P2 acquires 2 B and 1 D extra resources, then terminates, returning all its resources - -#*[available resource: <4 3 3 3> - <0 2 0 1> + <1 2 3 4> = <5 3 6 6>] - -#*The system now has 5 A, 3 B, 6 C and 6 D resources - -#P3 acquires 1 B and 4 C resources and terminates. - -#*[available resource: <5 3 6 6> - <0 1 4 0> + <1 3 5 0> = <6 5 7 6>] - -#*The system now has all resources: 6 A, 5 B, 7 C and 6 D - -#Because all processes were able to terminate, this state is safe - -For an example of an unsafe state, consider what would happen if process 2 was holding 1 unit of resource B at the beginning. - -When the system receives a request for resources, it runs the Banker's algorithm to determine if it is safe to grant the request. - -The algorithm is fairly straightforward once the distinction between safe and unsafe states is understood. - -#Can the request be granted? - -#*If not, the request is impossible and must either be denied or put on a waiting list - -#Assume that the request is granted - -#Is the new state safe? - -#*If so, grant the request - -#*If not, either deny the request or put it on a waiting list - -Whether the system denies or postpones an impossible or unsafe request is a decision specific to the operating system. - -Starting in the same state as the previous example started in, assume process 1 requests 2 units of resource C. - -#There is not enough of resource C available to grant the request - -#The request is denied
    - -On the other hand, assume process 3 requests 1 unit of resource C. - -#There are enough resources to grant the request - -#Assume the request is granted - -#*The new state of the system would be: - -Available system resources - -A B C D - -Free 3 1 0 2 - -Processes (currently allocated resources): - -A B C D - -P1 1 2 2 1 - -P2 1 0 3 3 - -P3 1 2 2 0 - -Processes (maximum resources): - -A B C D - -P1 3 3 2 2 - -P2 1 2 3 4 - -P3 1 3 5 0 - -#Determine if this new state is safe - -##P1 can acquire 2 A, 1 B and 1 D resources and terminate - -##Then, P2 can acquire 2 B and 1 D resources and terminate - -##Finally, P3 can acquire 1 B and 3 C resources and terminate - -##Therefore, this new state is safe - -#Since the new state is safe, grant the request - -
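Putting these pieces together, the request-handling step can be sketched in Python. This is a hedged illustration, not code from the article: the helper names (`is_safe`, `handle_request`) are ours, and the safety check simply simulates processes running to completion in any order whose need fits the available resources.

```python
import numpy as np

def is_safe(available, allocation, maximum):
    """True if some execution order lets every process acquire its maximum and finish."""
    need = maximum - allocation
    work = available.copy()
    finished = np.zeros(len(allocation), dtype=bool)
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            if not finished[i] and np.all(need[i] <= work):
                work += allocation[i]          # process i runs, then releases its resources
                finished[i] = True
                progress = True
    return bool(finished.all())

def handle_request(p, request, available, allocation, maximum):
    """Tentatively grant the request of process p, keeping it only if the result is safe."""
    if np.any(request > maximum[p] - allocation[p]) or np.any(request > available):
        return False                           # impossible request: deny or queue it
    new_available = available - request
    new_allocation = allocation.copy()
    new_allocation[p] += request
    return is_safe(new_available, new_allocation, maximum)

# The example state from the article:
available  = np.array([3, 1, 1, 2])
allocation = np.array([[1, 2, 2, 1], [1, 0, 3, 3], [1, 2, 1, 0]])
maximum    = np.array([[3, 3, 2, 2], [1, 2, 3, 4], [1, 3, 5, 0]])

print(handle_request(2, np.array([0, 0, 1, 0]), available, allocation, maximum))  # True: grant
print(handle_request(1, np.array([0, 1, 0, 0]), available, allocation, maximum))  # False: deny
```

The two calls reproduce the walkthroughs above and below: 1 unit of C for P3 leaves a safe state, while 1 unit of B for P2 does not.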
- -Final example: from the state we started at, assume that process 2 requests 1 unit of resource B. - -#There are enough resources - -#Assuming the request is granted, the new state would be: - -Available system resources: - -A B C D - -Free 3 0 1 2 - -Processes (currently allocated resources): - -A B C D - -P1 1 2 2 1 - -P2 1 1 3 3 - -P3 1 2 1 0 - -Processes (maximum resources): - -A B C D - -P1 3 3 2 2 - -P2 1 2 3 4 - -P3 1 3 5 0 - -#Is this state safe? Assume that P1, P2, and P3 request more of resources B and C. - -#*P1 is unable to acquire enough B resources - -#*P2 is unable to acquire enough B resources - -#*P3 is unable to acquire enough B resources - -#*No process can acquire enough resources to terminate, so this state is not safe - -#Since the state is unsafe, deny the request - -The following Python program implements the safety check described above:

import numpy as np

n_processes = int(input('Number of processes? '))
n_resources = int(input('Number of resources? '))

available_resources = [int(x) for x in input('Claim vector? ').split(' ')]

currently_allocated = np.array([[int(x) for x in input('Currently allocated for process ' + str(i + 1) + '? ').split(' ')] for i in range(n_processes)])

max_demand = np.array([[int(x) for x in input('Maximum demand from process ' + str(i + 1) + '? ').split(' ')] for i in range(n_processes)])

total_available = available_resources - np.sum(currently_allocated, axis=0)

running = np.ones(n_processes)  # an array with n_processes 1's, indicating which processes have yet to run

while np.count_nonzero(running) > 0:
    at_least_one_allocated = False
    for p in range(n_processes):
        if running[p]:
            # Process p can finish if its remaining need fits within what is available
            if all(i >= 0 for i in total_available - (max_demand[p] - currently_allocated[p])):
                at_least_one_allocated = True
                print(str(p) + ' is running')
                running[p] = 0
                # p terminates and releases everything it held
                total_available += currently_allocated[p]
    if not at_least_one_allocated:
        # No process could finish this round: the state is unsafe
        print('Unsafe')
        break
else:
    print('Safe')

- -Like the other algorithms, the Banker's algorithm has some limitations when implemented. Specifically, it needs to know how much of each resource a process could possibly request. In most systems, this information is unavailable, making it impossible to implement the Banker's algorithm. Also, it is unrealistic to assume that the number of processes is static, since in most systems the number of processes varies dynamically. Moreover, the requirement that a process will eventually release all its resources (when it terminates) is sufficient for the correctness of the algorithm, but it is not sufficient for a practical system. Waiting for hours (or even days) for resources to be released is usually not acceptable. diff --git a/wiki/wikipedia/3557.txt b/wiki/wikipedia/3557.txt deleted file mode 100644 index dc6baa485cc59637c5f717375f28630792158b27..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3557.txt +++ /dev/null @@ -1,43 +0,0 @@ -[Figure. Top: in a hexagon vertex set there are 20 partitions which have one three-element subset (green) and three single-element subsets (uncolored). Bottom: of these, there are 4 partitions up to rotation, and 3 partitions up to rotation and reflection.] - -Two mathematical objects a and b are called equal up to an equivalence relation R - -* if a and b are related by R, that is, - -* if aRb holds, that is, - -* if the equivalence classes of a and b with respect to R are equal. - -This figure of speech is mostly used in connection with expressions derived from equality, such as uniqueness or count.
- -For example, x is unique up to R means that all objects x under consideration are in the same equivalence class with respect to the relation R. - -Moreover, the equivalence relation R is often designated rather implicitly by a generating condition or transformation. - -For example, the statement "an integer's prime factorization is unique up to ordering" is a concise way to say that any two lists of prime factors of a given integer are equivalent with respect to the relation R that relates two lists if one can be obtained by reordering (permutation) from the other. As another example, the statement "the solution to an indefinite integral is sin(x), up to addition by a constant" tacitly employs the equivalence relation R between functions, defined by fRg if f-g is a constant function, and means that the solution and the function sin(x) are equal up to this R. - -In the picture, "there are 4 partitions up to rotation" means that the set P has 4 equivalence classes with respect to R defined by aRb if b can be obtained from a by rotation; one representative from each class is shown in the bottom left picture part. - -Equivalence relations are often used to disregard possible differences of objects, so "up to R" can be understood informally as "ignoring the same subtleties as R does". - -In the factorization example, "up to ordering" means "ignoring the particular ordering". - -Further examples include "up to isomorphism", "up to permutations", and "up to rotations", which are described in the Examples section. - -In informal contexts, mathematicians often use the word modulo (or simply "mod") for similar purposes, as in "modulo isomorphism". - -A simple example is "there are seven reflecting tetrominoes, up to rotations", which makes reference to the seven possible contiguous arrangements of tetrominoes (collections of four unit squares arranged to connect on at least one side) and which are frequently thought of as the seven Tetris pieces (O, I, L, J, T, S, Z). One could also say "there are five tetrominoes, up to reflections and rotations", which would then take into account the perspective that L and J (as well as S and Z) can be thought of as the same piece when reflected. The Tetris game does not allow reflections, so the former statement is likely to seem more relevant. - -To add in the exhaustive count, there is no formal notation for the number of pieces of tetrominoes. However, it is common to write that "there are seven reflecting tetrominoes (= 19 total) up to rotations". Here, Tetris provides an excellent example, as one might simply count 7 pieces × 4 rotations as 28, but some pieces (such as the 2×2 O) obviously have fewer than four rotation states. - -In the eight queens puzzle, if the eight queens are considered to be distinct, then there are 3709440 distinct solutions. Normally, however, the queens are considered to be identical, and one usually says "there are 92 ($=\tfrac{3 709 440}{8!}$) unique solutions up to permutations of the queens", or that "there are 92 solutions mod the names of the queens", signifying that two different arrangements of the queens are considered equivalent if the queens have been permuted, but the same squares on the chessboard are occupied by them. 
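The bookkeeping behind such counts amounts to choosing a canonical representative of each equivalence class. As a toy illustration (not from the article), the Python sketch below counts binary strings of length 4 up to rotation by mapping each string to its lexicographically smallest rotation; the same idea, with a larger symmetry group, yields the 12 essentially different queen arrangements discussed next.

```python
from itertools import product

def canonical(s):
    """Smallest rotation of s: one representative per equivalence class up to rotation."""
    return min(s[i:] + s[:i] for i in range(len(s)))

strings = [''.join(bits) for bits in product('01', repeat=4)]  # all 16 strings
classes = {canonical(s) for s in strings}
print(len(classes), sorted(classes))  # 6 classes; '0001' stands for its 4 rotations
```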
- -If, in addition to treating the queens as identical, rotations and reflections of the board were allowed, we would have only 12 distinct solutions up to symmetry and the naming of the queens, signifying that two arrangements that are symmetrical to each other are considered equivalent. - -The regular n-gon, for given n, is unique up to similarity. In other words, if all similar n-gons are considered instances of the same n-gon, then there is only one regular n-gon. - -In group theory, one may have a group G acting on a set X, in which case, one might say that two elements of X are equivalent "up to the group action"—if they lie in the same orbit. - -Another typical example is the statement that "there are two different groups of order 4 up to isomorphism", or "modulo isomorphism, there are two groups of order 4". This means that there are two equivalence classes of groups of order 4—assuming that one considers groups to be equivalent if they are isomorphic. - -A hyperreal x and its standard part st(x) are equal up to an infinitesimal difference. - -In computer science, the term up-to techniques refers to certain precisely defined proof techniques for (weak) bisimulation, used to relate processes that behave similarly up to unobservable steps. diff --git a/wiki/wikipedia/3558.txt b/wiki/wikipedia/3558.txt deleted file mode 100644 index 6685eb9a0a2c4efe2b694c3b2e2073cc781e7414..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3558.txt +++ /dev/null @@ -1,3 +0,0 @@ -In computer science, a linear graph grammar (also a connection graph reduction system or a port graph grammar) is a class of graph grammar in which nodes have a number of ports connected together by edges and edges connect exactly two ports together. Interaction nets are a special subclass of linear graph grammars in which rewriting is confluent. - -Bawden introduces linear graphs in the context of a compiler for a fragment of the Scheme programming language. Bawden and Mairson (1998) describe the design of a distributed implementation in which the linear graph is spread across many computing nodes and may freely migrate in order to make rewrites possible. diff --git a/wiki/wikipedia/3559.txt b/wiki/wikipedia/3559.txt deleted file mode 100644 index 967b98c5945be69dc9925d70ebaf2f0011c7fae9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3559.txt +++ /dev/null @@ -1,21 +0,0 @@ - - -Terminating Reliable Broadcast (TRB) is a problem in distributed computing that encapsulates the task of broadcasting a message to a set of receiving processes in the presence of faults. In particular, the sender and any other process might fail ("crash") at any time. - -A TRB protocol typically organizes the system into a sending process and a set of receiving processes, which may include the sender itself. A process is called "correct" if it does not fail at any point during its execution. The goal of the protocol is to transfer data (the "message") from the sender to the set of receiving processes. A process may perform many I/O operations during protocol execution, but eventually "delivers" a message by passing it to the application on that process that invoked the TRB protocol. - -The protocol must provide important guarantees to the receiving processes. All correct receiving processes, for example, must deliver the sender's message if the sender is also correct.
A receiving process may deliver a special message, $\mathrm{SF}$ ("sender faulty"), if the sender failed, but either all correct processes will deliver $\mathrm{SF}$ or none will. A correct process is therefore guaranteed that data delivered to it was also delivered to all other correct processes. - -More precisely, a TRB protocol must satisfy the four formal properties below. - -* Termination: every correct process delivers some value. - -* Validity: if the sender is correct and broadcasts a message $m$, then every correct process delivers $m$. - -* Integrity: a process delivers a message at most once, and if it delivers some message $m \neq \mathrm{SF}$, then $m$ was broadcast by the sender. - -* Agreement: if a correct process delivers a message $m$, then all correct processes deliver $m$. - -The presence of faults in the system makes these properties more difficult to satisfy. A simple but invalid TRB protocol might have the sender broadcast the message to all processes, and have receiving processes deliver the message as soon as it is received. This protocol, however, does not satisfy agreement if faults can occur: if the sender crashes after sending the message to some processes, but before sending it to others, then the first set of processes may deliver the message while the second set delivers $\mathrm{SF}$. - -TRB is closely related, but not identical, to the fundamental distributed computing problem of consensus. diff --git a/wiki/wikipedia/356.txt b/wiki/wikipedia/356.txt deleted file mode 100644 index f9abf6bc1899885dceebedf1b69eb7c5bc0dcf42..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/356.txt +++ /dev/null @@ -1,24 +0,0 @@ -In mathematics, the Hardy–Ramanujan theorem, proved by G. H. Hardy and Srinivasa Ramanujan (1917), states that the normal order of the number ω(n) of distinct prime factors of a number n is log(log(n)). - -Roughly speaking, this means that most numbers have about this number of distinct prime factors. - -A more precise version states that for every real-valued function ψ(n) that tends to infinity as n tends to infinity -$$ -|\omega(n)-\log\log n|<\psi(n)\sqrt{\log\log n} -$$ - -or more traditionally -$$ -|\omega(n)-\log\log n|<{(\log\log n)}^{\frac12 +\varepsilon} -$$ - -for almost all (all but an infinitesimal proportion of) integers. That is, let g(x) be the number of positive integers n less than x for which the above inequality fails: then g(x)/x converges to zero as x goes to infinity. - -A simple proof of the result was given by Pál Turán, who used the Turán sieve to prove that -$$ -\sum_{n \le x} | \omega(n) - \log\log n|^2 \ll x \log\log x \ . -$$ - -The same results are true of Ω(n), the number of prime factors of n counted with multiplicity. - -This theorem is generalized by the Erdős–Kac theorem, which shows that ω(n) is essentially normally distributed. diff --git a/wiki/wikipedia/3560.txt b/wiki/wikipedia/3560.txt deleted file mode 100644 index bbfd6deb552f8c0e5a09e6ccd16f7c3c86f913fc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3560.txt +++ /dev/null @@ -1,50 +0,0 @@ -The Cauchy convergence test is a method used to test infinite series for convergence. It relies on bounding sums of terms in the series. This convergence criterion is named after Augustin-Louis Cauchy, who published it in his textbook Cours d'Analyse in 1821. - -A series -$$ -\sum_{i=0}^\infty a_i -$$ is convergent if and only if for every $\varepsilon>0$ there is a natural number N such that -$$ -|a_{n+1}+a_{n+2}+\cdots+a_{n+p}|<\varepsilon -$$ - -holds for all n > N and all p ≥ 1.
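A small numerical illustration of the criterion (our own sketch, not from the article): for a convergent geometric series the tails $|a_{n+1}+\cdots+a_{n+p}|$ can be made uniformly small in p by taking n large, while for the divergent harmonic series choosing p = n keeps the tail near ln 2 > 1/2 no matter how large n is, so no N works for ε = 1/2.

```python
def tail(a, n, p):
    """The partial-sum tail |a_{n+1} + ... + a_{n+p}|."""
    return abs(sum(a(k) for k in range(n + 1, n + p + 1)))

geometric = lambda k: 0.5 ** k   # convergent: tails shrink like 0.5**n, uniformly in p
harmonic  = lambda k: 1.0 / k    # divergent: p = n keeps the tail above 1/2

for n in (10, 20, 40, 80):
    print(n, tail(geometric, n, 10000), tail(harmonic, n, n))
```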
- -The test works because the space R of real numbers and the space C of complex numbers (with the metric given by the absolute value) are both complete. Then the series is convergent if and only if the partial sum -$$ -s_n:=\sum_{i=0}^n a_i -$$ - -is a Cauchy sequence. - -A sequence of real or complex numbers $s_n $ is a Cauchy sequence if and only if $s_n $ converges (to some point a in R or C). - -The formal definition states that for every $\varepsilon>0$ there is a number N, such that for all n, m > N holds -$$ -|s_m-s_n|<\varepsilon. -$$ - -We will assume m > n and thus set p = m - n. -$$ -|s_{n+p}-s_n|=|a_{n+1}+a_{n+2}+\cdots+a_{n+p}|<\varepsilon. -$$ - -Showing that a sequence is a Cauchy sequence is useful since we do not need to know the limit of the sequence in question. Cauchy's convergence test can only be used in complete metric spaces (such as R and C), which are spaces where all Cauchy sequences converge. We need only show that its elements become arbitrarily close to each other after a finite progression in the sequence. There are computer applications of the Cauchy sequence, in which an iterative process may be set up to create such sequences. - -We can use the results about convergence of the sequence of partial sums of the infinite series and apply them to the convergence of the infinite series itself. The Cauchy Criterion test is one such application. - -For any real sequence $a_k $, the above results on convergence imply that the infinite series -$$ -\sum_{k=1}^\infty a_k -$$ - -converges if and only if for every $\varepsilon>0$ there is a number N, such that - -m ≥ n ≥ N imply -$$ -|s_m-s_n|=\left|\sum_{k=n+1}^m a_k\right|<\varepsilon -$$ - -Probably the most interesting part of [this theorem] is that the Cauchy condition implies the existence of the limit: this is indeed related to the completeness of the real line. - -The Cauchy criterion can be generalized to a variety of situations, which can all be loosely summarized as "a vanishing oscillation condition is equivalent to convergence". diff --git a/wiki/wikipedia/3561.txt b/wiki/wikipedia/3561.txt deleted file mode 100644 index 07988558172426ecf21f2e7ec9d9ecfc9273ab0d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3561.txt +++ /dev/null @@ -1,11 +0,0 @@ -In the mathematical field of graph theory, the Dürer graph is an undirected graph with 12 vertices and 18 edges. It is named after Albrecht Dürer, whose 1514 engraving Melencolia I includes a depiction of Dürer's solid, a convex polyhedron having the Dürer graph as its skeleton. Dürer's solid is one of only four well-covered simple convex polyhedra. - -Dürer's solid is combinatorially equivalent to a cube with two opposite vertices truncated, although Dürer's depiction of it is not in this form but rather as a truncated rhombohedron or triangular truncated trapezohedron. The exact geometry of the solid depicted by Dürer is a subject of some academic debate, with different hypothetical values for its acute angles ranging from 72° to 82°. - -The Dürer graph is the graph formed by the vertices and edges of the Dürer solid. It is a cubic graph of girth 3 and diameter 4. As well as its construction as the skeleton of Dürer's solid, it can be obtained by applying a Y-Δ transform to the opposite vertices of a cube graph, or as the generalized Petersen graph G(6,2). As with any graph of a convex polyhedron, the Dürer graph is a 3-vertex-connected simple planar graph. 
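The generalized Petersen description G(6,2) is easy to verify computationally. The sketch below (vertex labels are ours) builds the graph explicitly, assuming the networkx library is available, and checks the counts stated in the article; the inner vertices form two triangles, which is where the girth of 3 comes from.

```python
import networkx as nx

G = nx.Graph()
outer  = [(f"u{i}", f"u{(i + 1) % 6}") for i in range(6)]   # hexagon u0..u5
spokes = [(f"u{i}", f"v{i}") for i in range(6)]             # ui -- vi
inner  = [(f"v{i}", f"v{(i + 2) % 6}") for i in range(6)]   # two triangles among v0..v5
G.add_edges_from(outer + spokes + inner)

print(G.number_of_nodes(), G.number_of_edges())   # 12 18
print(set(dict(G.degree()).values()))             # {3}: the graph is cubic
print(nx.diameter(G))                             # 4, as stated in the article
```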
- -The Dürer graph is a well-covered graph, meaning that all of its maximal independent sets have the same number of vertices, four. It is one of four well-covered cubic polyhedral graphs and one of seven well-covered 3-connected cubic graphs. The only other three well-covered simple convex polyhedra are the tetrahedron, triangular prism, and pentagonal prism. - -The Dürer graph is Hamiltonian, with LCF notation [-4,5,2,-4,-2,5;-]. More precisely, it has exactly six Hamiltonian cycles, each pair of which may be mapped into each other by a symmetry of the graph. - -The automorphism group both of the Dürer graph and of the Dürer solid (in either the truncated cube form or the form shown by Dürer) is isomorphic to the dihedral group of order 12 : D6. diff --git a/wiki/wikipedia/3562.txt b/wiki/wikipedia/3562.txt deleted file mode 100644 index 851c0c48f873c11c25d32778e1221f19654a1512..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3562.txt +++ /dev/null @@ -1,193 +0,0 @@ -Nonograms, also known as Hanjie, Paint by Numbers, Picross, Griddlers, Pic-a-Pix, and various other names, are picture logic puzzles in which cells in a grid must be colored or left blank according to numbers at the side of the grid to reveal a hidden picture. In this puzzle type, the numbers are a form of discrete tomography that measures how many unbroken lines of filled-in squares there are in any given row or column. For example, a clue of "4 8 3" would mean there are sets of four, eight, and three filled squares, in that order, with at least one blank square between successive sets. - -These puzzles are often black and white—describing a binary image—but they can also be colored. If colored, the number clues are also colored to indicate the color of the squares. Two differently colored numbers may or may not have a space in between them. For example, a black four followed by a red two could mean four black boxes, some empty spaces, and two red boxes, or it could simply mean four black boxes followed immediately by two red ones. Nonograms have no theoretical limits on size, and are not restricted to square layouts. - -Nonograms were named after Non Ishida, one of the two inventors of the puzzle. - -Nonograms are also known by many other names, including Hanjie puzzle, Paint by Numbers, Griddlers, Pic-a-Pix, Picross, Picma, PrismaPixels, Pixel Puzzles, Crucipixel, Edel, FigurePic, Hanjie, HeroGlyphix, Illust-Logic, Japanese Crosswords, Japanese Puzzles, Kare Karala!, Logic Art, Logic Square, Logicolor, Logik-Puzzles, Logimage, Oekaki Logic, Oekaki-Mate, Paint Logic, Picture Logic, Tsunamii, Paint by Sudoku and Binary Coloring Books. - -In 1987, Non Ishida, a Japanese graphics editor, won a competition in Tokyo by designing grid pictures using skyscraper lights that were turned on or off. This led her to the idea of a puzzle based around filling in certain squares in a grid. Coincidentally, a professional Japanese puzzler named Tetsuya Nishio invented the same puzzles completely independently, and published them in another magazine. - -Paint by numbers puzzles started appearing in Japanese puzzle magazines. Non Ishida published three picture grid puzzles in 1988 in Japan under the name of "Window Art Puzzles". In 1990, James Dalgety in the UK invented the name Nonograms after Non Ishida, and The Sunday Telegraph started publishing them on a weekly basis. By 1993, the first book of nonograms was published by Non Ishida in Japan. 
The Sunday Telegraph published a dedicated puzzle book titled the "Book of Nonograms". Nonograms were also published in Sweden, the United States (originally by Games magazine), South Africa and other countries. The Sunday Telegraph ran a competition in 1998 to choose a new name for their puzzles. Griddlers was the winning name that readers chose. - -Paint by numbers puzzles were implemented by 1995 on hand-held electronic toys such as Game Boy and on other plastic puzzle toys. Nintendo picked up on this puzzle fad and released two "Picross" (picture crossword) titles for the Game Boy and nine for the Super Famicom (eight of which were released in two-month intervals for the Nintendo Power Super Famicom Cartridge Writer as the NP series) in Japan. Only one of these, Mario's Picross for the Game Boy, was released outside Japan. Since then, one of the most prolific Picross game developers has been Jupiter Corporation, who released Picross DS on the Nintendo DS in 2007, 8 titles in the Picross e series for the Nintendo 3DS eShop (along with 5 character-specific titles, including ones featuring Pokémon, Zelda and Sanrio characters), and 6 titles in the Picross S series for the Nintendo Switch (along with two character-specific ones featuring Kemono Friends and Overlord respectively, and another featuring intellectual properties from SEGA's Master System and Genesis). - -Increased popularity in Japan launched new publishers, and soon there were several monthly magazines, some of which contained up to 100 puzzles. The Japanese arcade game Logic Pro was released by Deniam Corp in 1996, with a sequel released the following year. UK games developer Jagex released a nonogram puzzle in 2011 as part of their annual Halloween event for their role-playing game, Runescape. In 2013, Casual Labs released a mobile version of these puzzles called Paint it Back with the theme of restoring an art gallery. Released early in 2017, Pictopix has been presented as a worthy heir to Picross on PC by Rock, Paper, Shotgun. In particular, the game enables players to share their creations. - -Paint by numbers have been published by Sanoma Uitgevers in the Netherlands, Puzzler Media (formerly British European Associated Publishers) in the UK and Nikui Rosh Puzzles in Israel. Magazines with nonogram puzzles are published in the US, UK, Germany, Netherlands, Italy, Hungary, Finland, Ukraine, and many other countries. - -To solve a puzzle, one needs to determine which cells will be boxes and which will be empty. Solvers often use a dot or a cross to mark cells they are certain are spaces. Cells that can be determined by logic should be filled. If guessing is used, a single error can spread over the entire field and completely ruin the solution. An error sometimes comes to the surface only after a while, when it is very difficult to correct the puzzle. The hidden picture plays little or no part in the solving process, as it may mislead. The picture may help find and eliminate an error. - -Many puzzles can be solved by reasoning on a single row or column at a time only, then trying another row or column, and repeating until the puzzle is complete. More difficult puzzles may also require several types of "what if?" reasoning that include more than one row (or column). This works by searching for contradictions, e.g., when a cell cannot be a box because some other cell would produce an error, it must be a space.
- -At the beginning of the solution, a simple method can be used to determine as many boxes as possible. This method uses conjunctions of possible places for each block of boxes. For example, in a row of ten cells with only one clue of 8, the bound block consisting of 8 boxes could spread from - -* the right border, leaving two spaces to the left; - -* the left border, leaving two spaces to the right; - -* or somewhere in between. - -As a result, the block must spread through the six centermost cells in the row. - -The same applies when there are more clues in the row. For example, in a row of ten cells with clues of 4 and 3, the bound blocks of boxes could be - -* crowded to the left, one next to the other, leaving two spaces to the right; - -* crowded to the right, one just next to the other, leaving two spaces to the left; - -* or somewhere between. - -Consequently, the first block of four boxes definitely includes the third and fourth cells, while the second block of three boxes definitely includes the eighth cell. Boxes can therefore be placed in the third, fourth and eighth cells. When determining boxes in this way, boxes can be placed in cells only when the same block overlaps; in this example, there is overlap in the sixth cell, but it is from different blocks, and so it cannot yet be said whether or not the sixth cell will contain a box. - -This method consists of determining spaces by searching for cells that are out of range of any possible blocks of boxes. For example, considering a row of ten cells with boxes in the fourth and ninth cell and with clues of 3 and 1, the block bound to the clue 3 will spread through the fourth cell and clue 1 will be at the ninth cell. - -First, the clue 1 is complete and there will be a space at each side of the bound block. - -Second, the clue 3 can only spread somewhere between the second cell and the sixth cell, because it always has to include the fourth cell; however, this may leave cells that may not be boxes in any case, i.e. the first and the seventh. - -Note: In this example all blocks are accounted for; this is not always the case. The player must be careful for there may be clues or blocks that are not bound to each other yet. - -In this method, the significance of the spaces will be shown. A space placed somewhere in the middle of an uncompleted row may force a large block to one side or the other. Also, a gap that is too small for any possible block may be filled with spaces. - -For example, considering a row of ten cells with spaces in the fifth and seventh cells and with clues of 3 and 2: - -* the clue of 3 would be forced to the left, because it could not fit anywhere else. - -* the empty gap on the sixth cell is too small to accommodate clues like 2 or 3 and may be filled with spaces. - -* finally, the clue of 2 will spread through the ninth cell according to method Simple Boxes above. - -Sometimes, there is a box near the border that is not farther from the border than the length of the first clue. In this case, the first clue will spread through that box and will be forced outward from the border. - -For example, considering a row of ten cells with a box in the third cell and with a clue of 5, the clue of 5 will spread through the third cell and will continue to the fifth cell because of the border. - -Note: This method may also work in the middle of a row, farther away from the borders. - -* A space may act as a border, if the first clue is forced to the right of that space. 
- -* The first clue may also be preceded by some other clues, if all the clues are already bound to the left of the forcing space. - -Boxes closer to each other may be sometimes joined together into one block or split by a space into several blocks. When there are two blocks with an empty cell between, this cell will be: - -* A space if joining the two blocks by a box would produce a too large block - -* A box if splitting the two blocks by a space would produce a too small block that does not have enough free cells remaining - -For example, considering a row of fifteen cells with boxes in the third, fourth, sixth, seventh, eleventh and thirteenth cell and with clues of 5, 2 and 2: - -* The clue of 5 will join the first two blocks by a box into one large block, because a space would produce a block of only 4 boxes that is not enough there. - -* The clues of 2 will split the last two blocks by a space, because a box would produce a block of 3 continuous boxes, which is not allowed there. - -Note: The illustration picture also shows how the clues of 2 are further completed. This is, however, not part of the Joining and splitting technique, but the Glue technique described above. - -To solve the puzzle, it is usually also very important to enclose each bound or completed block of boxes immediately by separating spaces as described in Simple spaces method. Precise punctuating usually leads to more Forcing and may be vital for finishing the puzzle. Note: The examples above did not do that only to remain simple. - -Mercury is a special case of Simple spaces technique. Its name comes from the way mercury pulls back from the sides of a container. - -If there is a box in a row that is in the same distance from the border as the length of the first clue, the first cell will be a space. This is because the first clue would not fit to the left of the box. It will have to spread through that box, leaving the first cell behind. Furthermore, when the box is actually a block of more boxes to the right, there will be more spaces at the beginning of the row, determined by using this method several times. - -Some more difficult puzzles may also require advanced reasoning. When all simple methods above are exhausted, searching for contradictions may help. It is wise to use a pencil (or other color) for that to facilitate corrections. The procedure includes: - -# Trying an empty cell to be a box (or then a space). - -# Using all available methods to solve as much as possible. - -# If an error is found, the tried cell will not be a box for sure. It will be a space (or a box, if space was tried). - -In this example a box is tried in the first row, which leads to a space at the beginning of that row. The space then forces a box in the first column, which glues to a block of three boxes in the fourth row. However, that is wrong because the third column does not allow any boxes there, which leads to a conclusion that the tried cell must not be a box, so it must be a space. - -The problem of this method is that there is no quick way to tell which empty cell to try first. Usually only a few cells lead to any progress, and the other cells lead to dead ends. Most worthy cells to start with may be: - -* cells that have many non-empty neighbors; - -* cells that are close to the borders or close to the blocks of spaces; - -* cells that are within rows that consist of more non-empty cells. 
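All of the single-line deductions above (simple boxes, simple spaces, and their refinements) can be recovered by brute force on one line: enumerate every placement of the blocks consistent with the clues and keep the cells common to all placements. The Python sketch below (function names are ours; the enumeration is exponential, so it is suitable only for short lines) does exactly that. Run on the row of ten cells with clues 4 and 3, it reports the third, fourth and eighth cells, matching the Simple Boxes example.

```python
def placements(clues, length):
    """Yield every 0/1 row of the given length whose blocks of 1s match the clue list."""
    if not clues:
        yield [0] * length
        return
    first, rest = clues[0], clues[1:]
    tail = sum(rest) + len(rest)          # cells reserved for the remaining blocks
    for start in range(length - tail - first + 1):
        head = [0] * start + [1] * first
        if rest:
            head = head + [0]             # consecutive blocks need a separating blank
        for body in placements(rest, length - len(head)):
            yield head + body

def forced_cells(clues, length):
    """Cells that are filled (or blank) in every consistent placement."""
    rows = list(placements(clues, length))
    filled = [i for i in range(length) if all(r[i] for r in rows)]
    blank  = [i for i in range(length) if not any(r[i] for r in rows)]
    return filled, blank

filled, blank = forced_cells([4, 3], 10)
print([i + 1 for i in filled])   # [3, 4, 8] -- the Simple Boxes deduction
```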
- -It is possible to get a start to a puzzle using a mathematical technique to fill in blocks for rows/columns independently of other rows/columns. This is a good "first step" and is a mathematical shortcut to the techniques described above. The process is as follows: - -# Add the clues together, plus 1 for each "space" in between. For example, if the clue is 6 2 3, this step produces the sum 6 + 1 + 2 + 1 + 3 = 13. - -# Subtract this number from the total available in the row (usually the width or height of the puzzle). For example, if the clue in step 1 is in a row 15 cells wide, the difference is 15 - 13 = 2. Note: If spaces can be used on the left or right (top or bottom) borders, this "shrinks" the available area. If it is known that the rightmost cell is a space, the difference is 14 - 13 = 1. - -# Any clues that are greater than the number in step 2 will have some cells filled in. In the example, this applies to the clues 6 and 3, but not 2. - -# For each clue in step 3, subtract the number in step 2 to determine the number of cells that can be filled in. For example, the 6 clue will have (6 - 2 =) 4 cells filled in and the 3 clue will have (3 - 2 =) 1. Note: Applying the same procedure to a clue that "failed" step 3 produces a non-positive number, indicating that no cells will be filled in for this clue. The clue 2 produces the number (2 - 2 =) 0; if there were a 1 clue, it would produce the number (1 - 2 =) -1. - -# To fill in the cells, assume the blocks are all pushed to one side, count from that side "through" the blocks, and backfill the appropriate number of cells. This can be done from either direction. For example, the 6 clue can be done either of two ways as follows: - -## From the left: Since the 6 is the first number, count 6 cells from the left edge, ending in the 6th cell. Now "backfill" 4 cells (the number obtained in step 4), so that cells 3 through 6 are filled. - -## From the right: Starting from the right, the clues that are to the right of the 6 clue must be accounted for. Starting from cell 15, count 3 cells for the 3 clue (to cell 13), then a space (12), then the 2 clue (10), then a space (9), then the 6 clue (3). From the 3rd cell, "backfill" 4 cells, filling cells 3 through 6. The results are the same as doing it from the left in the step above. - -# Repeat step 5 for all clues identified in step 3. - -In the illustration, row 1 shows the cells that are filled under this procedure, rows 2 and 4 show how the blocks are pushed to one side in step 5, and rows 3 and 5 show the cells backfilled in step 5. - -Using this technique for all rows and columns at the start of the puzzle produces a good head start toward completing it. Note: Some rows/columns won't yield any results initially. For example, a row of 20 cells with a clue of 1 4 2 5 will yield 1 + 1 + 4 + 1 + 2 + 1 + 5 = 15. 20 - 15 = 5. None of the clues are greater than 5. Also, this technique can be used on a smaller scale: if there are available spaces in the center or on either side, even if certain clues are already discovered, this method can be used with the remaining clues and available spaces. A small sketch of this procedure in code is given below.
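The procedure above is easy to mechanize. Here is a minimal, hedged sketch in Python (the function name `forced_cells` and the 0-based indexing are our own conventions, not from the source); it assumes a consistent row whose clues fit, and it reproduces both the 6 2 3 worked example and the simple boxes examples from earlier:

```python
def forced_cells(width, clues):
    """Cells (0-indexed) that must be filled in one row, by the slack
    method described above: slack = width minus the minimum span of the
    clues; any clue longer than the slack has (clue - slack) cells forced."""
    slack = width - (sum(clues) + len(clues) - 1)  # steps 1 and 2
    forced = []
    left = 0  # leftmost possible start of the current clue
    for clue in clues:
        # Pushed fully left, the clue covers [left, left+clue-1]; pushed
        # fully right it starts `slack` cells later, so the overlap of the
        # two placements is [left+slack, left+clue-1] (empty unless clue > slack).
        forced.extend(range(left + slack, left + clue))  # steps 3-5
        left += clue + 1  # block plus one separating space
    return forced

print(forced_cells(15, [6, 2, 3]))  # [2, 3, 4, 5, 12]: cells 3-6 and 13
print(forced_cells(10, [8]))        # cells 3-8, the six centermost cells
print(forced_cells(10, [4, 3]))     # cells 3, 4 and 8, as in the example
```

The printed indices are 0-based, while the prose above counts cells from 1.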
Some puzzles may require going deeper with such searching for contradictions. This is, however, not feasible simply with pen and paper, because of the many possibilities that must be searched; this method is practical only for a computer. - -In some cases, reasoning over a set of rows may also lead to the next step of the solution even without contradictions and deeper recursion. However, finding such sets is usually as difficult as finding contradictions. - -There are puzzles that have several feasible solutions (one such is a picture of a simple chessboard). In these puzzles, all solutions are correct by definition, but not all need give a reasonable picture. - -Solving nonogram puzzles is an NP-complete problem. This means that there is no polynomial time algorithm that solves all nonogram puzzles unless P = NP. - -However, certain classes of puzzles, such as those in which each row or column has only one block of cells and all cells are connected, may be solved in polynomial time by transforming the problem into an instance of 2-satisfiability. - -An extensive comparison and discussion of nonogram solving algorithms is found at the WebPBN site (Web Paint-By-Number). - -Some other online and offline solvers include: - -* Teal nonogram puzzle and solver - -* Nonogram Solver - -* Griddlers Solver with Animator - -* nonogram-solver (in Ruby) - -* nonogram-solver (in Python) - -* HTML5 Nonogram Solver (in browser) - -* Nonogram solver in C++ solving lines in quadratic time at most - -* JavaScript Nonogram solver - -* QR code generator and solver (in Mathematica) - -* pynogram solver and animator (in Python) - -* nonogrid solver with browser frontend (in Rust) - -* nonogram solver (interactive and automatic) (in Java) - -Nintendo has published several nonogram video games using the name Picross. The Nintendo Game Boy game Mario's Picross was initially released in Japan on March 14, 1995 as part of the NP Picross series to decent success. However, the game failed to become a hit in the U.S. market, despite a heavy advertising campaign by Nintendo. The game escalates in difficulty, with successive puzzle levels containing larger puzzles. Each puzzle has a limited amount of time in which to be cleared. Hints (line clears) may be requested at the cost of a time penalty, and mistakes earn time penalties as well (the amount increasing for each mistake). Picross 2 was released later for Game Boy and Mario's Super Picross for the Super Famicom, neither of which was translated for the U.S. market (Mario's Super Picross was, however, later released on the Wii Virtual Console's PAL service on September 14, 2007, as part of its Hanabi Festival). Both games also introduced Wario's Picross rounds, featuring Mario's nemesis in the role. These rounds differ in that the hint function is removed and mistakes are not penalized, at the price that mistakes are not even revealed. These rounds can only be cleared when all correct boxes are marked, with no mistakes. The time limit was also removed. Nintendo also released eight Picross volumes on the Japanese Nintendo Power peripheral, each with a new set of puzzles, including puzzles based around various Nintendo characters, such as Mario, The Legend of Zelda, and Pokémon. - -Nintendo released Picross DS for the Nintendo DS portable system in 2007. It contains several stages of varying difficulty, from 5x5 grids to 25x20 grids. Normal mode tells players if they made an error (with a time penalty) and free mode does not. A hint is available before starting the puzzle in all modes; the game reveals a complete row and column at random. Additional puzzles were available through Nintendo Wi-Fi Connection, including some of the original Mario's Picross puzzles. Nintendo made new releases available bi-weekly until the service was shut down on 20 May 2014.
Picross DS was released in Europe and Australia on 11 May 2007 and in the United States on July 30, 2007, and was well received by critics, with Craig Harris, Jessica Wadleigh and Dave McCarthy labelling the game "addictive". A 3D version of the game, titled Picross 3D, was also released for the DS in Japan in 2009 and internationally in 2010. A sequel, Picross 3D: Round 2, was released for the Nintendo 3DS in 2015. A series of downloadable versions, Picross e, Picross e2 and Picross e3, was released for the Nintendo 3DS eShop in 2013, with Picross e4 following in 2014. Nintendo also released a Pokémon spinoff on December 7, 2015, in the form of the freemium game Pokémon Picross for the Nintendo 3DS. My Nintendo Picross: The Legend of Zelda: Twilight Princess was released for the Nintendo 3DS on March 31, 2016, exclusively as a premium reward for My Nintendo. - -Other companies have also released nonogram video games, such as Falcross on iOS, and the Color Cross series of games by Little Worlds Studio on the Nintendo DS, Microsoft Windows, and iOS. In addition, nonogram puzzles have appeared in non-picross puzzle games, such as in Deadly Rooms of Death's fifth installment, The Second Sky. In it, nonogram puzzles (again called "Picross" puzzles) representing in-game objects are optional, unlockable puzzles late in the game; they can be played in the level "The Central Station", and solving them unlocks bonus levels. In 2018, Konami released a game titled Pixel Puzzle Collection, or Picross Puzzle (ピクロジパズル), featuring classic Konami characters and sprites. - -==Other picture logic puzzles== - -Pentomino paint-by-numbers is a variant in which the twelve pentomino shapes must be placed in the grid, without touching each other (even diagonally). - -Triddlers are an offshoot that uses triangle shapes instead of squares. - -Paint by pairs or Link-a-Pix consists of a grid, with numbers filling some squares; pairs of numbers must be located correctly and connected with a line filling a total of squares equal to that number. There is only one unique way to link all the squares in a properly-constructed puzzle. When completed, the squares that have lines are filled; the contrast with the blank squares reveals the picture. (As above, colored versions exist that involve matching numbers of the same color.) - -Fill-a-Pix also uses a grid with numbers within. In this format, each number indicates how many of the squares immediately surrounding it, and itself, will be filled. A square marked "9," for example, will have all eight surrounding squares and itself filled. If it is marked "0", those squares are all blank. - -Maze-a-Pix uses a maze in a standard grid. When the single correct route from beginning to end is located, each 'square' of the solution is filled in (alternatively, all non-solution squares are filled in) to create the picture. - -Tile Paint is another type of picture logic puzzle by Nikoli. It works like regular nonograms except that it only specifies the total number of squares in each row or column that will be filled in; irregular sections within the grid have borders around them indicating that, if one of the squares within a section is filled in, all of them must be filled in.
diff --git a/wiki/wikipedia/3563.txt b/wiki/wikipedia/3563.txt deleted file mode 100644 index f69147984d0d528b2d5e9930abd32c359b6d4f31..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3563.txt +++ /dev/null @@ -1,43 +0,0 @@ -Logic Theorist is a computer program written in 1956 by Allen Newell, Herbert A. Simon and Cliff Shaw. It was the first program deliberately engineered to perform automated reasoning and is called "the first artificial intelligence program". It would eventually prove 38 of the first 52 theorems in Whitehead and Russell's Principia Mathematica, and find new and more elegant proofs for some. - -In 1955, when Newell and Simon began to work on the Logic Theorist, the field of artificial intelligence did not yet exist. Even the term itself ("artificial intelligence") would not be coined until the following summer. - -Simon was a political scientist who had already produced classic work in the study of how bureaucracies function as well as developed his theory of bounded rationality (for which he would later win a Nobel Prize). The study of business organizations requires, like artificial intelligence, an insight into the nature of human problem solving and decision making. Simon remembers consulting at RAND Corporation in the early 1950s and seeing a printer typing out a map, using ordinary letters and punctuation as symbols. He realized that a machine that could manipulate symbols could just as well simulate decision making and possibly even the process of human thought. - -The program that printed the map had been written by Newell, a RAND scientist studying logistics and organization theory. For Newell, the decisive moment was in 1954 when Oliver Selfridge came to RAND to describe his work on pattern matching. Watching the presentation, Newell suddenly understood how the interaction of simple, programmable units could accomplish complex behavior, including the intelligent behavior of human beings. "It all happened in one afternoon," he would later say. It was a rare moment of scientific epiphany. - -
    "I had such a sense of clarity that this was a new path, and one I was going to go down. I haven't had that sensation very many times. I'm pretty skeptical, and so I don't normally go off on a toot, but I did on that one. Completely absorbed in it—without existing with the two or three levels consciousness so that you're working, and aware that you're working, and aware of the consequences and implications, the normal mode of thought. No. Completely absorbed for ten to twelve hours."
    - -Newell and Simon began to talk about the possibility of teaching machines to think. Their first project was a program that could prove mathematical theorems like the ones used in Bertrand Russell and Alfred North Whitehead's Principia Mathematica. They enlisted the help of computer programmer Cliff Shaw, also from RAND, to develop the program. (Newell says "Cliff was the genuine computer scientist of the three"). - -The first version was hand-simulated: they wrote the program onto 3x5 cards and, as Simon recalled:
    In January 1956, we assembled my wife and three children together with some graduate students. To each member of the group, we gave one of the cards, so that each one became, in effect, a component of the computer program ... Here was nature imitating art imitating nature.
    - -They succeeded in showing that the program could successfully prove theorems as well as a talented mathematician. Eventually Shaw was able to run the program on the computer at RAND's Santa Monica facility. - -In the summer of 1956, John McCarthy, Marvin Minsky, Claude Shannon and Nathan Rochester organized a conference on the subject of what they called "artificial intelligence" (a term coined by McCarthy for the occasion). Newell and Simon proudly presented the group with the Logic Theorist and were somewhat surprised when the program received a lukewarm reception. Pamela McCorduck writes "the evidence is that nobody save Newell and Simon themselves sensed the long-range significance of what they were doing." Simon confides that "we were probably fairly arrogant about it all" and adds: - -
    They didn't want to hear from us, and we sure didn't want to hear from them: we had something to show them! ... In a way it was ironic because we already had done the first example of what they were after; and second, they didn't pay much attention to it.
    - -Logic Theorist soon proved 38 of the first 52 theorems in chapter 2 of the Principia Mathematica. The proof of theorem 2.85 was actually more elegant than the proof produced laboriously by hand by Russell and Whitehead. Simon was able to show the new proof to Russell himself who "responded with delight". They attempted to publish the new proof in The Journal of Symbolic Logic but it was rejected on the grounds that a new proof of an elementary mathematical theorem was not notable, apparently overlooking the fact that one of the authors was a computer program. - -Newell and Simon formed a lasting partnership, founding one of the first AI laboratories at the Carnegie Institute of Technology and developing a series of influential artificial intelligence programs and ideas, including GPS, Soar, and their unified theory of cognition. - -Logic Theorist introduced several concepts that would be central to AI research: - -;Reasoning as search : Logic Theorist explored a search tree: the root was the initial hypothesis, each branch was a deduction based on the rules of logic. Somewhere in the tree was the goal: the proposition the program intended to prove. The pathway along the branches that led to the goal was a proof – a series of statements, each deduced using the rules of logic, that led from the hypothesis to the proposition to be proved. - -;Heuristics : Newell and Simon realized that the search tree would grow exponentially and that they needed to "trim" some branches, using "rules of thumb" to determine which pathways were unlikely to lead to a solution. They called these ad hoc rules "heuristics", using a term introduced by George Pólya in his classic book on mathematical proof, How to Solve It. (Newell had taken courses from Pólya at Stanford). Heuristics would become an important area of research in artificial intelligence and remains an important method to overcome the intractable combinatorial explosion of exponentially growing searches. - -;List processing : To implement Logic Theorist on a computer, the three researchers developed a programming language, IPL, which used the same form of symbolic list processing that would later form the basis of McCarthy's Lisp programming language, an important language still used by AI researchers. - -Pamela McCorduck writes that the Logic Theorist was "proof positive that a machine could perform tasks heretofore considered intelligent, creative and uniquely human". And, as such, it represents a milestone in the development of artificial intelligence and our understanding of intelligence in general. - -Simon famously told a graduate class in January 1956, "Over Christmas, Al Newell and I invented a thinking machine," - -and would write: - -
    [We] invented a computer program capable of thinking non-numerically, and thereby solved the venerable mind-body problem, explaining how a system composed of matter can have the properties of mind.
    - -This statement, that machines can have minds just as people do, would later be named "Strong AI" by philosopher John Searle. It remains a serious subject of debate up to the present day. - -Pamela McCorduck also sees in the Logic Theorist the debut of a new theory of the mind, the information processing model (sometimes called computationalism). She writes that "this view would come to be central to their later work, and in their opinion, as central to understanding mind in the twentieth century as Darwin's principle of natural selection had been to understanding biology in the nineteenth century." Newell and Simon would later formalize this proposal as the physical symbol systems hypothesis. diff --git a/wiki/wikipedia/3564.txt b/wiki/wikipedia/3564.txt deleted file mode 100644 index 745be0952e5176adb2b99098355063a83477eb71..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3564.txt +++ /dev/null @@ -1 +0,0 @@ -In geometry, the Murakami–Yano formula, introduced by Murakami and Yano, is a formula for the volume of a hyperbolic or spherical tetrahedron given in terms of its dihedral angles. diff --git a/wiki/wikipedia/3565.txt b/wiki/wikipedia/3565.txt deleted file mode 100644 index 094eafa7ad990a41603cc391dd0062a434e74e50..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3565.txt +++ /dev/null @@ -1,48 +0,0 @@ -In mathematics, the Fraňková-Helly selection theorem is a generalisation of Helly's selection theorem for functions of bounded variation to the case of regulated functions. It was proved in 1991 by the Czech mathematician Dana Fraňková. - -Let X be a separable Hilbert space, and let BV([0, T]; X) denote the normed vector space of all functions f : [0, T] → X with finite total variation over the interval [0, T], equipped with the total variation norm. It is well known that BV([0, T]; X) satisfies the compactness theorem known as Helly's selection theorem: given any sequence of functions (fn)n∈N in BV([0, T]; X) that is uniformly bounded in the total variation norm, there exists a subsequence -$$ -\left( f_{n(k)} \right) \subseteq (f_{n}) \subset \mathrm{BV}([0, T]; X) -$$ - -and a limit function f ∈ BV([0, T]; X) such that fn(k)(t) converges weakly in X to f(t) for every t ∈ [0, T]. That is, for every continuous linear functional λ ∈ X*, -$$ -\lambda \left( f_{n(k)}(t) \right) \to \lambda(f(t)) \mbox{ in } \mathbb{R} \mbox{ as } k \to \infty. -$$ - -Consider now the Banach space Reg([0, T]; X) of all regulated functions f : [0, T] → X, equipped with the supremum norm. Helly's theorem does not hold for the space Reg([0, T]; X): a counterexample is given by the sequence -$$ -f_{n} (t) = \sin (n t), -$$ -which is uniformly bounded in the supremum norm, but whose total variation grows without bound and which has no subsequence converging at every t ∈ [0, T]. - -One may ask, however, if a weaker selection theorem is true, and the Fraňková-Helly selection theorem is such a result. - -As before, let X be a separable Hilbert space and let Reg([0, T]; X) denote the space of regulated functions f : [0, T] → X, equipped with the supremum norm.
Let (fn)n∈N be a sequence in Reg([0, T]; X) satisfying the following condition: for every ε > 0, there exists some Lε > 0 so that each fn may be approximated by a un ∈ BV([0, T]; X) satisfying -$$ -\| f_{n} - u_{n} \|_{\infty} < \varepsilon -$$ - -and -$$ -| u_{n}(0) | + \mathrm{Var}(u_{n}) \leq L_{\varepsilon}, -$$ - -where |·| denotes the norm in X and Var(u) denotes the variation of u, which is defined to be the supremum -$$ -\sup_{\Pi} \sum_{j=1}^{m} | u(t_{j}) - u(t_{j-1}) | -$$ - -over all partitions -$$ -\Pi = \{ 0 = t_{0} < t_{1} < \dots < t_{m} = T , m \in \mathbf{N} \} -$$ - -of [0, T]. Then there exists a subsequence -$$ -\left( f_{n(k)} \right) \subseteq (f_{n}) \subset \mathrm{Reg}([0, T]; X) -$$ - -and a limit function f ∈ Reg([0, T]; X) such that fn(k)(t) converges weakly in X to f(t) for every t ∈ [0, T]. That is, for every continuous linear functional λ ∈ X*, -$$ -\lambda \left( f_{n(k)}(t) \right) \to \lambda(f(t)) \mbox{ in } \mathbb{R} \mbox{ as } k \to \infty. -$$ diff --git a/wiki/wikipedia/3566.txt b/wiki/wikipedia/3566.txt deleted file mode 100644 index c6142682c0dbcc6122e3b1c0fcc4c039b2436678..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3566.txt +++ /dev/null @@ -1,28 +0,0 @@ -In physics, the CHSH inequality can be used in the proof of Bell's theorem, which states that certain consequences of entanglement in quantum mechanics cannot be reproduced by local hidden-variable theories. Experimental verification of violation of the inequalities is seen as experimental confirmation that nature cannot be described by local hidden-variable theories. CHSH stands for John Clauser, Michael Horne, Abner Shimony, and Richard Holt, who described it in a much-cited paper published in 1969 (Clauser et al., 1969). Their derivation was oriented towards the use of "two-channel" detectors, and indeed it is for these that it is generally used, but under their method the only possible outcomes were +1 and -1. In order to adapt to real situations, which at the time meant the use of polarised light and single-channel polarisers, they had to interpret '-' as meaning "non-detection in the '+' channel", i.e. either '-' or nothing. They did not in the original article discuss how the two-channel inequality could be applied in real experiments with real imperfect detectors, though it was later proved (Bell, 1971) that the inequality itself was equally valid. The occurrence of zero outcomes, though, means it is no longer so obvious how the values of E are to be estimated from the experimental data. - -The mathematical formalism of quantum mechanics predicts a maximum value for the test statistic $S = E(a, b) - E(a, b') + E(a', b) + E(a', b')$ of $2\sqrt{2}$ (Tsirelson's bound), which is greater than 2, and CHSH violations are therefore predicted by the theory of quantum mechanics. - -In practice most actual experiments have used light rather than the electrons that Bell originally had in mind. The property of interest is, in the best known experiments (Aspect, 1981-2), the polarisation direction, though other properties can be used. The diagram shows a typical optical experiment. Coincidences (simultaneous detections) are recorded, the results being categorised as '++', '+-', '-+' or '--' and corresponding counts accumulated. - -Four separate subexperiments are conducted, corresponding to the four terms $E(a, b)$ in the test statistic S above.
The settings a, a′, b and b′ are generally in practice chosen to be 0, 45°, 22.5° and 67.5° respectively - the "Bell test angles" - these being the ones for which the quantum mechanical formula gives the greatest violation of the inequality. - -For each selected value of a and b, the numbers of coincidences in each category $\left\{ N_{++}, N_{--}, N_{+-}, N_{-+} \right\}$ are recorded. The experimental estimate for $E(a, b)$ is then calculated as: -$$ -E = \frac {N_{++} - N_{+-} - N_{-+} + N_{--}} {N_{++} + N_{+-} + N_{-+} + N_{--}}. -$$ - -Once all the $E$'s have been estimated, an experimental estimate of S can be found. If it is numerically greater than 2 it has infringed the CHSH inequality and the experiment is declared to have supported the quantum mechanics prediction and ruled out all local hidden-variable theories. - -The CHSH paper lists many preconditions (or "reasonable and/or presumable assumptions") to derive the simplified theorem and formula. For example, for the method to be valid, it has to be assumed that the detected pairs are a fair sample of those emitted. In actual experiments, detectors are never 100% efficient, so that only a sample of the emitted pairs are detected. A subtle, related requirement is that the hidden variables do not influence or determine detection probability in a way that would lead to different samples at each arm of the experiment. - -The original 1969 derivation will not be given here since it is not easy to follow and involves the assumption that the outcomes are all +1 or -1, never zero. Bell's 1971 derivation is more general. He effectively assumes the "Objective Local Theory" later used by Clauser and Horne (Clauser, 1974). Clauser and Horne show that the CHSH inequality can be derived from the CH74 one. As they tell us, in a two-channel experiment the CH74 single-channel test is still applicable and provides four sets of inequalities governing the probabilities p of coincidences. - -Working from the inhomogeneous version of the inequality, we can write: -$$ -- 1 \leq p_{jk}(a, b) - p_{jk}(a, b') + p_{jk}(a', b) + p_{jk}(a', b') - p_{jk}(a') - p_{jk}(b) \leq 0 -$$ - -where j and k are each '+' or '-', indicating which detectors are being considered. - -To obtain the CHSH test statistic S, all that is needed is to multiply the inequalities for which j is different from k by -1 and add these to the inequalities for which j and k are the same. - -Many Bell tests conducted subsequent to Aspect's second experiment in 1982 have used the CHSH inequality, estimating the terms using the formula for E above and assuming fair sampling. Some dramatic violations of the inequality have been reported. Scientific American reported in its Dec 2018 issue methods for greatly improved experimental applications of the CHSH inequality.
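As a small numerical illustration (a hedged sketch, not from the article: the helper names are our own, and we assume the standard quantum prediction E(a, b) = -cos 2(a - b) for polarisation measurements on a singlet-like state), the following Python snippet estimates E from coincidence counts and evaluates S at the Bell test angles:

```python
import math

def correlation(n_pp, n_pm, n_mp, n_mm):
    """Experimental estimate E = (N++ - N+- - N-+ + N--) / N_total."""
    return (n_pp - n_pm - n_mp + n_mm) / (n_pp + n_pm + n_mp + n_mm)

def chsh_statistic(e_ab, e_abp, e_apb, e_apbp):
    """S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return e_ab - e_abp + e_apb + e_apbp

# Assumed singlet-state prediction E(a,b) = -cos 2(a-b), evaluated at the
# Bell test angles a = 0, a' = 45, b = 22.5, b' = 67.5 degrees.
deg = math.pi / 180
a, ap, b, bp = 0 * deg, 45 * deg, 22.5 * deg, 67.5 * deg
E = lambda x, y: -math.cos(2 * (x - y))
S = chsh_statistic(E(a, b), E(a, bp), E(ap, b), E(ap, bp))
print(abs(S))  # 2.828..., i.e. 2*sqrt(2): Tsirelson's bound, greater than 2
```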
diff --git a/wiki/wikipedia/3567.txt b/wiki/wikipedia/3567.txt deleted file mode 100644 index 5c7b0a8d02d4432336de3afc40affb237335ec8f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3567.txt +++ /dev/null @@ -1,69 +0,0 @@ -In mathematics, Specht's theorem gives a necessary and sufficient condition for two matrices to be unitarily equivalent. It is named after Wilhelm Specht, who proved the theorem in 1940. - -Two matrices A and B are said to be unitarily equivalent if there exists a unitary matrix U such that B = U *AU. Two matrices which are unitarily equivalent are also similar. Two similar matrices represent the same linear map, but with respect to a different basis; unitary equivalence corresponds to a change from an orthonormal basis to another orthonormal basis. - -If A and B are unitarily equivalent, then tr AA* = tr BB*, where tr denotes the trace (in other words, the Frobenius norm is a unitary invariant). This follows from the cyclic invariance of the trace: if B = U *AU, then tr BB* = tr U *AUU *A*U = tr AUU *A*UU * = tr AA*, where the second equality is cyclic invariance. - -Thus, tr AA* = tr BB* is a necessary condition for unitary equivalence, but it is not sufficient. Specht's theorem gives infinitely many necessary conditions which together are also sufficient. The formulation of the theorem uses the following definition. A word in two variables, say x and y, is an expression of the form -$$ -W(x,y) = x^{m_1} y^{n_1} x^{m_2} y^{n_2} \cdots x^{m_p}, -$$ - -where m1, n1, m2, n2, …, mp are non-negative integers. The degree of this word is -$$ -m_1 + n_1 + m_2 + n_2 + \cdots + m_p. -$$ - -Specht's theorem: Two matrices A and B are unitarily equivalent if and only if tr W(A, A*) = tr W(B, B*) for all words W. - -The theorem gives an infinite number of trace identities, but it can be reduced to a finite subset. Let n denote the size of the matrices A and B. For the case n = 2, the following three conditions are sufficient: -$$ -\operatorname{tr} A = \operatorname{tr} B, \quad \operatorname{tr} A^2 = \operatorname{tr} B^2, \quad\text{and}\quad \operatorname{tr} AA^* = \operatorname{tr} BB^*. -$$ - -For n = 3, the following seven conditions are sufficient: -$$ -\begin{align} &\operatorname{tr} A = \operatorname{tr} B, \quad \operatorname{tr} A^2 = \operatorname{tr} B^2, \quad \operatorname{tr} AA^* = \operatorname{tr} BB^*, \quad \operatorname{tr} A^3 = \operatorname{tr} B^3, \\ &\operatorname{tr} A^2 A^* = \operatorname{tr} B^2 B^*, \quad \operatorname{tr} A^2 (A^*)^2 = \operatorname{tr} B^2 (B^*)^2, \quad\text{and}\quad \operatorname{tr} A^2 (A^*)^2 A A^* = \operatorname{tr} B^2 (B^*)^2 B B^*. \end{align} -$$ - -For general n, it suffices to show that tr W(A, A*) = tr W(B, B*) for all words of degree at most -$$ -n \sqrt{\frac{2n^2}{n-1} + \frac14} + \frac{n}2 - 2. -$$ - -It has been conjectured that this can be reduced to an expression linear in n.
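As a quick numerical illustration (a sketch under our own conventions; `word_trace` and the encoding of words as (m, n) exponent pairs are not from the source), one can check with NumPy that word traces are indeed invariant under unitary equivalence, here for the three words that suffice when n = 2:

```python
import numpy as np

rng = np.random.default_rng(0)

def word_trace(M, Mstar, word):
    """tr W(M, M*) for the word x^m1 y^n1 x^m2 y^n2 ... given as [(m1, n1), ...]."""
    W = np.eye(M.shape[0], dtype=complex)
    for m, n in word:
        W = W @ np.linalg.matrix_power(M, m) @ np.linalg.matrix_power(Mstar, n)
    return np.trace(W)

A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
Q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
B = Q.conj().T @ A @ Q  # B is unitarily equivalent to A by construction

# The three sufficient invariants for n = 2: tr A, tr A^2, tr AA*.
for word in [[(1, 0)], [(2, 0)], [(1, 1)]]:
    same = np.isclose(word_trace(A, A.conj().T, word),
                      word_trace(B, B.conj().T, word))
    print(word, same)  # True for each of the three invariants
```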
diff --git a/wiki/wikipedia/3568.txt b/wiki/wikipedia/3568.txt deleted file mode 100644 index 99f9de1c22cfc2ca2c1ba0cc2ed1156afb0bbfef..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3568.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematical group theory, the Thompson replacement theorem is a theorem about the existence of certain abelian subgroups of a p-group. The Glauberman replacement theorem is a generalization of it introduced by George Glauberman. - -Suppose that P is a finite p-group for some prime p, and let A be the set of abelian subgroups of P of maximal order. Suppose that B is some abelian subgroup of P. The Thompson replacement theorem says that if A is an element of A that normalizes B but is not normalized by B, then there is another element A* of A such that A*∩B is strictly larger than A∩B, and [A*,A] normalizes A. - -The Glauberman replacement theorem is similar, except p is assumed to be odd and the condition that B is abelian is weakened to the condition that [B,B] commutes with B and with all elements of A. Glauberman says in his paper that he does not know whether the condition that p is odd is necessary. diff --git a/wiki/wikipedia/3569.txt b/wiki/wikipedia/3569.txt deleted file mode 100644 index ff1c1d22777598e91e50eaf1c89f42c9ace1e520..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3569.txt +++ /dev/null @@ -1,102 +0,0 @@ -In geometry, close-packing of equal spheres is a dense arrangement of congruent spheres in an infinite, regular arrangement (or lattice). Carl Friedrich Gauss proved that the highest average density – that is, the greatest fraction of space occupied by spheres – that can be achieved by a lattice packing is -$$ -\frac{\pi}{3\sqrt 2} \approx 0.74048 -$$. - -The same packing density can also be achieved by alternate stackings of the same close-packed planes of spheres, including structures that are aperiodic in the stacking direction. The Kepler conjecture states that this is the highest density that can be achieved by any arrangement of spheres, either regular or irregular. This conjecture was proven by T. C. Hales. The highest density is known only in the cases of 1, 2, 3, 8, and 24 dimensions. - -Many crystal structures are based on a close-packing of a single kind of atom, or a close-packing of large ions with smaller ions filling the spaces between them. The cubic and hexagonal arrangements are very close to one another in energy, and it may be difficult to predict which form will be preferred from first principles. - -There are two simple regular lattices that achieve this highest average density. They are called face-centered cubic (FCC) (also called cubic close packed) and hexagonal close-packed (HCP), based on their symmetry. Both are based upon sheets of spheres arranged at the vertices of a triangular tiling; they differ in how the sheets are stacked upon one another. The FCC lattice is also known to mathematicians as that generated by the A3 root system. - -The problem of close-packing of spheres was first mathematically analyzed by Thomas Harriot around 1587, after a question on piling cannonballs on ships was posed to him by Sir Walter Raleigh on their expedition to America. - -Cannonballs were usually piled in a rectangular or triangular wooden frame, forming a three-sided or four-sided pyramid. Both arrangements produce a face-centered cubic lattice – with different orientation to the ground. Hexagonal close-packing would result in a six-sided pyramid with a hexagonal base. - -The cannonball problem asks which flat square arrangements of cannonballs can be stacked into a square pyramid. Édouard Lucas formulated the problem as the Diophantine equation $\sum_{n=1}^{N} n^2 = M^2$ or $\frac{1}{6} N(N+1)(2N+1) = M^2$ and conjectured that the only solutions are $N = 1, M = 1,$ and $N = 24, M = 70$. Here $N$ is the number of layers in the pyramidal stacking arrangement and $M$ is the number of cannonballs along an edge in the flat square arrangement. - -In both the FCC and HCP arrangements each sphere has twelve neighbors. For every sphere there is one gap surrounded by six spheres (octahedral) and two smaller gaps surrounded by four spheres (tetrahedral). The distance from the centers of the surrounding spheres to the center of such a gap is $\sqrt{3/2}$ for the tetrahedral gap and $\sqrt{2}$ for the octahedral gap, when the sphere radius is 1. - -Relative to a reference layer with positioning A, two more positionings B and C are possible. Every sequence of A, B, and C without immediate repetition of the same one is possible and gives an equally dense packing for spheres of a given radius. - -The most regular ones are - -*FCC = ABC ABC ABC...
(every third layer is the same) - -*HCP = AB AB AB AB... (every other layer is the same). - -There is an uncountably infinite number of disordered arrangements of planes (e.g. ABCACBABABAC...) that are sometimes collectively referred to as "Barlow packings", after crystallographer William Barlow. - -In close-packing, the center-to-center spacing of spheres in the xy plane is a simple honeycomb-like tessellation with a pitch (distance between sphere centers) of one sphere diameter. The distance between sphere centers, projected on the z (vertical) axis, is: -$$ -\text{pitch}_Z = \sqrt{6} \cdot {d\over 3}\approx0.81649658 d, -$$ - -where d is the diameter of a sphere; this follows from the tetrahedral arrangement of close-packed spheres. - -The coordination number of HCP and FCC is 12 and their atomic packing factors (APFs) are equal to the number mentioned above, 0.74. - -When forming any sphere-packing lattice, the first fact to notice is that whenever two spheres touch, a straight line may be drawn from the center of one sphere to the center of the other, passing through the point of contact. The distance between the centers along the shortest path, namely that straight line, will therefore be r1 + r2, where r1 is the radius of the first sphere and r2 is the radius of the second. In close packing all of the spheres share a common radius, r. Therefore two centers would simply have a distance 2r. - -To form an A-B-A-B-... hexagonal close packing of spheres, the coordinate points of the lattice will be the spheres' centers. Suppose the goal is to fill a box with spheres according to HCP. The box would be placed on the x-y-z coordinate space. - -First form a row of spheres. The centers will all lie on a straight line. Their x-coordinates will vary by 2r, since the distance between the centers of touching spheres is 2r. The y-coordinate and z-coordinate will be the same. For simplicity, say that the balls are the first row and that their y- and z-coordinates are simply r, so that their surfaces rest on the zero-planes. Coordinates of the centers of the first row will look like (2r, r, r), (4r, r, r), (6r, r, r), (8r, r, r), ... . - -Now, form the next row of spheres. Again, the centers will all lie on a straight line with x-coordinate differences of 2r, but there will be a shift of distance r in the x-direction so that the center of every sphere in this row aligns with the x-coordinate of where two spheres touch in the first row. This allows the spheres of the new row to slide in closer to the first row until all spheres in the new row are touching two spheres of the first row. Since the new spheres touch two spheres, their centers form an equilateral triangle with those two neighbors' centers. The side lengths are all 2r, so the height or y-coordinate difference between the rows is $\sqrt{3}r$. Thus, this row will have coordinates like this: -$$ -\left(r, r + \sqrt{3}r, r\right),\ \left(3r, r + \sqrt{3}r, r\right),\ \left(5r, r + \sqrt{3}r, r\right),\ \left(7r, r + \sqrt{3}r, r\right), \dots. -$$ - -The first sphere of this row only touches one sphere in the original row, but its location follows suit with the rest of the row. - -The next row follows this pattern of shifting the x-coordinate by r and the y-coordinate by $\sqrt{3}r$. Add rows until reaching the x and y maximum borders of the box. - -In an A-B-A-B-...
stacking pattern, the odd-numbered planes of spheres will have exactly the same coordinates save for a pitch difference in the z-coordinates, and the even-numbered planes of spheres will share the same x- and y-coordinates. Both types of planes are formed using the pattern mentioned above, but the starting place for the first row's first sphere will be different. - -Using the plane described precisely above as plane #1, the A plane, place a sphere on top of this plane so that it lies touching three spheres in the A-plane. The three spheres are all already touching each other, forming an equilateral triangle, and since they all touch the new sphere, the four centers form a regular tetrahedron. All of the sides are equal to 2r because all of the sides are formed by two spheres touching. The height, or the z-coordinate difference between the two "planes", is $\frac{2\sqrt{6}r}{3}$. This, combined with the offsets in the x- and y-coordinates, gives the centers of the first row in the B plane: -$$ -\left(r, r + \frac{\sqrt{3}r}{3}, r + \frac{2\sqrt{6}r}{3}\right),\ \left(3r, r + \frac{\sqrt{3}r}{3}, r + \frac{2\sqrt{6}r}{3}\right),\ \left(5r, r + \frac{\sqrt{3}r}{3}, r + \frac{2\sqrt{6}r}{3}\right),\ \left(7r, r + \frac{\sqrt{3}r}{3}, r + \frac{2\sqrt{6}r}{3}\right), \dots. -$$ - -The second row's coordinates follow the pattern first described above and are: -$$ -\left(2r, r + \frac{4\sqrt{3}r}{3}, r + \frac{2\sqrt{6}r}{3}\right),\ \left(4r, r + \frac{4\sqrt{3}r}{3}, r + \frac{2\sqrt{6}r}{3}\right),\ \left(6r, r + \frac{4\sqrt{3}r}{3}, r + \frac{2\sqrt{6}r}{3}\right),\ \left(8r, r + \frac{4\sqrt{3}r}{3}, r + \frac{2\sqrt{6}r}{3}\right),\dots. -$$ - -The difference to the next plane, the A plane, is again $\frac{2\sqrt{6}r}{3}$ in the z-direction and a shift in the x and y to match those x- and y-coordinates of the first A plane. - -In general, the coordinates of sphere centers can be written as: -$$ -\begin{bmatrix} 2i + ((j\ +\ k) \bmod 2)\\ 1+\sqrt{3}\left[j + \frac{1}{3}(k \bmod 2)\right]\\ 1+\frac{2\sqrt{6}}{3}k \end{bmatrix}r -$$ - -where i, j and k are indices starting at 0 for the x-, y- and z-coordinates. (A short numerical check of this formula in code appears after this section.) - -Crystallographic features of HCP systems, such as vectors and atomic plane families, can be described using a four-value Miller index notation (hkil) in which the third index i denotes a convenient but degenerate component which is equal to −h − k. The h, i and k index directions are separated by 120°, and are thus not orthogonal; the l component is mutually perpendicular to the h, i and k index directions. - -The FCC and HCP packings are the densest known packings of equal spheres with the highest symmetry (smallest repeat units). - -Denser sphere packings are known, but they involve unequal sphere packing. - -A packing density of 1, filling space completely, requires non-spherical shapes, such as honeycombs. - -Replacing each contact point between two spheres with an edge connecting the centers of the touching spheres produces tetrahedrons and octahedrons of equal edge lengths. - -The FCC arrangement produces the tetrahedral-octahedral honeycomb. - -The HCP arrangement produces the gyrated tetrahedral-octahedral honeycomb. - -If, instead, every sphere is augmented with the points in space that are closer to it than to any other sphere, the duals of these honeycombs are produced: the rhombic dodecahedral honeycomb for FCC, and the trapezo-rhombic dodecahedral honeycomb for HCP.
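As the sanity check promised above (a sketch with our own helper name and grid size, not from the source), the following Python snippet generates a small block of HCP centers for unit spheres from the coordinate formula and confirms that the closest pair of centers is exactly 2, i.e. neighbouring spheres just touch:

```python
import itertools
import math

def hcp_center(i, j, k, r=1.0):
    """Center of sphere (i, j, k) in an HCP packing, per the formula above."""
    x = (2 * i + ((j + k) % 2)) * r
    y = (1 + math.sqrt(3) * (j + (k % 2) / 3)) * r
    z = (1 + (2 * math.sqrt(6) / 3) * k) * r
    return (x, y, z)

# A 3x3x3 block of sphere centers with radius r = 1.
pts = [hcp_center(i, j, k) for i, j, k in itertools.product(range(3), repeat=3)]
dmin = min(math.dist(p, q) for p, q in itertools.combinations(pts, 2))
print(round(dmin, 12))  # 2.0: nearest-neighbour centers are one diameter apart
```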
- -Spherical bubbles appear in soapy water in an FCC or HCP arrangement when the water in the gaps between the bubbles drains out. This pattern also approaches the rhombic dodecahedral honeycomb or trapezo-rhombic dodecahedral honeycomb. However, such FCC or HCP foams of very small liquid content are unstable, as they do not satisfy Plateau's laws. The Kelvin foam and the Weaire–Phelan foam are more stable, having smaller interfacial energy in the limit of a very small liquid content. - -There are two types of holes left by HCP and FCC conformations: tetrahedral and octahedral voids. Four spheres surround the tetrahedral hole, with three spheres in one layer and one sphere from the next layer. Six spheres surround an octahedral void, with three spheres coming from one layer and three spheres coming from the next layer. Structures of many simple chemical compounds, for instance, are often described in terms of small atoms occupying tetrahedral or octahedral holes in close-packed systems that are formed from larger atoms. - -Layered structures are formed by alternating empty and filled octahedral planes. Two octahedral layers usually allow for four structural arrangements that can be filled by either an HCP or FCC packing system. In filling tetrahedral holes, a complete filling leads to an FCC field array. In unit cells, hole filling can sometimes lead to polyhedral arrays with a mix of HCP and FCC layering. diff --git a/wiki/wikipedia/357.txt b/wiki/wikipedia/357.txt deleted file mode 100644 index 4b04887213f3e28ca60641f0a8e8297fbc5b43cf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/357.txt +++ /dev/null @@ -1,66 +0,0 @@ -Kirkman's schoolgirl problem is a problem in combinatorics proposed by Rev. Thomas Penyngton Kirkman in 1850 as Query VI in The Lady's and Gentleman's Diary (pg. 48). The problem states: - -
    - -Fifteen young ladies in a school walk out three abreast for seven days in succession: it is required to arrange them daily so that no two shall walk twice abreast. - -
- -A solution to this problem is an example of a Kirkman triple system, which is a Steiner triple system having a parallelism, that is, a partition of the blocks of the triple system into parallel classes which are themselves partitions of the points into disjoint blocks. Such Steiner systems that have a parallelism are also called resolvable. - -There are exactly seven non-isomorphic solutions to the schoolgirl problem, as originally listed by Frank Nelson Cole in Kirkman Parades in 1922. The seven solutions are summarized in the table below, denoting the 15 girls with the letters A to O. - -From the number of automorphisms for each solution and the definition of an automorphism group, the total number of solutions including isomorphic solutions is therefore: -$$ -15! \times \left(\frac{1}{168} + \frac{1}{168} + \frac{1}{24} + \frac{1}{24} + \frac{1}{12} + \frac{1}{12} + \frac{1}{21}\right) -$$ -$$ -= 15! \times \frac{13}{42} -$$ - -= 404,756,352,000 -$$ -= 2^{10} \times 3^5 \times 5^3 \times 7 \times 11 \times 13^2 -$$. - -The problem has a long and storied history. This section is based on historical work done at different times by Robin Wilson and by Louise Duffield Cummings. The history is as follows: - -* In 1844, Wesley Woolhouse, the editor of The Lady's and Gentleman's Diary at the time, asked the general question: "Determine the number of combinations that can be made out of n symbols, p symbols in each; with this limitation, that no combination of q symbols, which may appear in any one of them shall be repeated in any other." Only two answers were received, one incorrect and the other correctly answering the question with $ \frac{n!}{q! (n-q)!} \div \frac{p!}{q! (p-q)!} $. As the question did not ask for anything more than the number of combinations, nothing was received about the conditions on n, p, or q under which such a solution could be achieved. - -* In 1846, Woolhouse asked: "How many triads can be made out of n symbols, so that no pair of symbols shall be comprised more than once among them?". This is equivalent to repeating his 1844 question with the values p = 3 and q = 2. A later paper was the first to lay out all 80 solutions to the Steiner triple system of size 15; these included both resolvable and non-resolvable systems. - -* In 1922, Cole published his paper Kirkman Parades, whose enumeration of the seven solutions is summarized above. - -James Joseph Sylvester in 1850 asked if 13 disjoint Kirkman systems of 35 triples each could be constructed to use all 15C3 = 455 triples on 15 girls. No solution was found until 1974, when R. H. F. Denniston at the University of Leicester constructed it with a computer. - -The Galois field GF(2) with two elements is used with four homogeneous coordinates to form PG(3,2), which has 15 points, 3 points to a line, and 7 points and 7 lines in a plane. A plane can be considered a complete quadrilateral together with the line through its diagonal points. Each point is on 7 lines, and there are 35 lines in all. - -The lines of PG(3,2) are identified by their Plücker coordinates in PG(5,2) with 63 points, 35 of which represent lines of PG(3,2). These 35 points form the surface S known as the Klein quadric. For each of the 28 points off S there are 6 lines through it which do not intersect S. There are 56 spreads and 240 packings.
When Hirschfeld considered the problem in his Finite Projective Spaces of Three Dimensions (1985), he noted that some solutions correspond to packings of PG(3,2), essentially as described by Conwell. It is this generalization of the problem that Kirkman discussed first, while the famous special case $n=15$ was only proposed later. A complete solution to the general case was published by D. K. Ray-Chaudhuri and R. M. Wilson in 1968, though it had already been solved by Lu Jiaxi () in 1965, but had not been published at that time. - -Many variations of the basic problem can be considered. Alan Hartman solves a problem of this type with the requirement that no trio walks in a row of four more than once, using Steiner quadruple systems. - -More recently a similar problem known as the Social Golfer Problem has gained interest. It deals with 32 golfers who want to play with different people each day in groups of 4, over the course of 10 days. - -As this is a regrouping strategy where all groups are orthogonal, this process of organising a large group into smaller groups where no two people share the same group twice can be referred to as orthogonal regrouping. However, this term is currently not commonly used, and evidence suggests that there is no common name for the process. - -The Resolvable Coverings problem considers the general case of $n$ girls and $g$ groups, where each pair of girls must be in the same group at some point, but we want to use as few days as possible. This can, for example, be used to schedule a rotating table plan, in which each pair of guests must at some point be at the same table. - -The Oberwolfach problem, of decomposing a complete graph into edge-disjoint copies of a given 2-regular graph, also generalizes Kirkman's schoolgirl problem. Kirkman's problem is the special case of the Oberwolfach problem in which the 2-regular graph consists of five disjoint triangles. - -* Cooperative learning strategy for increasing interaction within classroom teaching - -* Dobble card game - -* Progressive dinner party designs - -* Speed Networking events - -* Sports Competitions - -* Combinatorics - -* R M Wilson - -* Dijen K. Ray-Chaudhuri - -* Discrete Mathematics diff --git a/wiki/wikipedia/3570.txt b/wiki/wikipedia/3570.txt deleted file mode 100644 index 1835e903efb6df92d8716ee4cd1fc1989b4b9236..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3570.txt +++ /dev/null @@ -1,50 +0,0 @@ -In mathematics, the Erdős–Szekeres theorem asserts that, given r, s, any sequence of distinct real numbers with length at least (r - 1)(s - 1) + 1 contains a monotonically increasing subsequence of length r or a monotonically decreasing subsequence of length s. The proof appeared in the same 1935 paper that mentions the Happy Ending problem. - -It is a finitary result that makes precise one of the corollaries of Ramsey's theorem. While Ramsey's theorem makes it easy to prove that every infinite sequence of distinct real numbers contains a monotonically increasing infinite subsequence or a monotonically decreasing infinite subsequence, the result proved by Paul Erdős and George Szekeres goes further. - -For r = 3 and s = 2, the formula tells us that any permutation of three numbers has an increasing subsequence of length three or a decreasing subsequence of length two.
Among the six permutations of the numbers 1,2,3: - -* 1,2,3 has an increasing subsequence consisting of all three numbers - -* 1,3,2 has a decreasing subsequence 3,2 - -* 2,1,3 has a decreasing subsequence 2,1 - -* 2,3,1 has two decreasing subsequences, 2,1 and 3,1 - -* 3,1,2 has two decreasing subsequences, 3,1 and 3,2 - -* 3,2,1 has three decreasing length-2 subsequences, 3,2, 3,1, and 2,1. - -One can interpret the positions of the numbers in a sequence as x-coordinates of points in the Euclidean plane, and the numbers themselves as y-coordinates; conversely, for any point set in the plane, the y-coordinates of the points, ordered by their x-coordinates, form a sequence of numbers (unless two of the points have equal x-coordinates). With this translation between sequences and point sets, the Erdős–Szekeres theorem can be interpreted as stating that in any set of at least rs - r - s + 2 points we can find a polygonal path of either r - 1 positive-slope edges or s - 1 negative-slope edges. In particular (taking r = s), in any set of at least n points we can find a polygonal path of at least $\lfloor\sqrt{n-1}\rfloor$ edges with same-sign slopes. For instance, taking r = s = 5, any set of at least 17 points has a four-edge path in which all slopes have the same sign. - -An example of rs - r - s + 1 points without such a path, showing that this bound is tight, can be formed by applying a small rotation to an (r - 1) by (s - 1) grid. - -The Erdős–Szekeres theorem may also be interpreted in the language of permutation patterns as stating that every permutation of length at least rs + 1 must contain either the pattern 1, 2, 3, ..., r + 1 or the pattern s + 1, s, ..., 2, 1. - -The Erdős–Szekeres theorem can be proved in several different ways; Steele surveys six different proofs of the Erdős–Szekeres theorem, including the following two. - -Other proofs surveyed by Steele include the original proof by Erdős and Szekeres as well as those of Blackwell, Hammersley, and Lovász. - -Given a sequence of length (r - 1)(s - 1) + 1, label each number $n_i$ in the sequence with the pair $(a_i, b_i)$, where $a_i$ is the length of the longest monotonically increasing subsequence ending with $n_i$ and $b_i$ is the length of the longest monotonically decreasing subsequence ending with $n_i$. Any two numbers in the sequence are labeled with different pairs: if i < j and $n_i < n_j$ then $a_i < a_j$, and on the other hand if $n_i > n_j$ then $b_i < b_j$. But there are only (r - 1)(s - 1) possible labels if $a_i$ is at most r - 1 and $b_i$ is at most s - 1, so by the pigeonhole principle there must exist a value of i for which $a_i$ or $b_i$ is outside this range. If $a_i$ is out of range then $n_i$ is part of an increasing sequence of length at least r, and if $b_i$ is out of range then $n_i$ is part of a decreasing sequence of length at least s. - -Steele credits this proof to the one-page paper of Seidenberg and calls it "the slickest and most systematic" of the proofs he surveys.
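The pigeonhole labels are straightforward to compute. Below is a minimal O(n²) Python sketch (the function name and the example sequence are our own; it assumes distinct values, as in the theorem):

```python
def es_labels(seq):
    """Pigeonhole labels (a_i, b_i): lengths of the longest increasing and
    longest decreasing subsequences ending at position i (distinct values)."""
    a = [1] * len(seq)
    b = [1] * len(seq)
    for j in range(len(seq)):
        for i in range(j):
            if seq[i] < seq[j]:
                a[j] = max(a[j], a[i] + 1)
            else:  # seq[i] > seq[j], since the values are distinct
                b[j] = max(b[j], b[i] + 1)
    return list(zip(a, b))

# Five = (3-1)(3-1)+1 distinct numbers force a monotone subsequence of
# length 3: all label pairs are distinct, so some coordinate reaches 3.
print(es_labels([3, 1, 4, 1.5, 2]))
# [(1, 1), (1, 2), (2, 1), (2, 2), (3, 2)]: the final 3 comes from 1, 1.5, 2
```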
Another of the proofs uses Dilworth's theorem on chain decompositions in partial orders, or its simpler dual (Mirsky's theorem). - -To prove the theorem, define a partial ordering on the members of the sequence, in which x is less than or equal to y in the partial order if x ≤ y as numbers and x is not later than y in the sequence. A chain in this partial order is a monotonically increasing subsequence, and an antichain is a monotonically decreasing subsequence. By Mirsky's theorem, either there is a chain of length r, or the sequence can be partitioned into at most r - 1 antichains; but in that case the largest of the antichains must form a decreasing subsequence with length at least -$$ -\left\lceil\frac{rs-r-s+2}{r-1}\right\rceil=s. -$$ - -Alternatively, by Dilworth's theorem itself, either there is an antichain of length s, or the sequence can be partitioned into at most s - 1 chains, the longest of which must have length at least r. - -The result can also be obtained as a corollary of the Robinson–Schensted correspondence. - -Recall that the Robinson–Schensted correspondence associates to each sequence a Young tableau P whose entries are the values of the sequence. The tableau P has the following properties: - -* The length of the longest increasing subsequence is equal to the length of the first row of P. - -* The length of the longest decreasing subsequence is equal to the length of the first column of P. - -Now, it is not possible to fit (r − 1)(s − 1) + 1 entries in a rectangular box of size (r − 1) by (s − 1), so either the first row is of length at least r or the first column is of length at least s. diff --git a/wiki/wikipedia/3571.txt b/wiki/wikipedia/3571.txt deleted file mode 100644 index c82ff604fba3cfdba9f9bfd69602f75ca5bfebb1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3571.txt +++ /dev/null @@ -1,7 +0,0 @@ -In computer science, jump point search (JPS) is an optimization to the A* search algorithm for uniform-cost grids. It reduces symmetries in the search procedure by means of graph pruning, eliminating certain nodes in the grid based on assumptions that can be made about the current node's neighbors, as long as certain conditions relating to the grid are satisfied. As a result, the algorithm can consider long "jumps" along straight (horizontal, vertical and diagonal) lines in the grid, rather than the small steps from one grid position to the next that ordinary A* considers. - -Jump point search preserves A*'s optimality, while potentially reducing its running time by an order of magnitude. The original 2011 paper by Daniel Harabor and Alban Grastien also presents an algorithm for pre-processing a grid in order to minimise online search times. - -A number of further optimizations were published by the authors in 2014. These optimizations include exploring columns or rows of nodes instead of individual nodes, pre-computing "jumps" on the grid, and stronger pruning rules. - -Although jump point search is limited to uniform-cost grids and homogeneously sized agents, the authors have suggested future research into combining JPS with existing grid-based speed-up techniques such as hierarchical grids.
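To make the "jump" idea concrete, here is a hedged Python sketch of the straight-line case only (the function names, grid convention, and demo grid are our own; a full JPS implementation also recurses along diagonals and feeds these jump points to A* as successors):

```python
def walkable(grid, x, y):
    """True if (x, y) is inside the grid and not an obstacle (truthy cell)."""
    return 0 <= y < len(grid) and 0 <= x < len(grid[0]) and grid[y][x]

def jump(grid, x, y, dx, dy, goal):
    """Advance from (x, y) in cardinal direction (dx, dy) until reaching the
    goal, a node with a forced neighbour, or a wall (returns None for walls).
    Intermediate nodes are skipped: by symmetry, no optimal path needs them."""
    while True:
        x, y = x + dx, y + dy
        if not walkable(grid, x, y):
            return None
        if (x, y) == goal:
            return (x, y)
        if dx != 0:  # moving horizontally: check the cells above and below
            if (not walkable(grid, x, y - 1) and walkable(grid, x + dx, y - 1)) or \
               (not walkable(grid, x, y + 1) and walkable(grid, x + dx, y + 1)):
                return (x, y)  # a forced neighbour appears just past a wall
        else:        # moving vertically: check the cells left and right
            if (not walkable(grid, x - 1, y) and walkable(grid, x - 1, y + dy)) or \
               (not walkable(grid, x + 1, y) and walkable(grid, x + 1, y + dy)):
                return (x, y)

grid = [[1, 1, 1, 1],
        [1, 0, 1, 1],   # one obstacle at (1, 1)
        [1, 1, 1, 1]]
print(jump(grid, 0, 2, 1, 0, (3, 0)))  # (1, 2): the wall ends, forcing a neighbour
```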
diff --git a/wiki/wikipedia/3572.txt b/wiki/wikipedia/3572.txt deleted file mode 100644 index 7ad04a709084ae920d42390f6896ce2a65f39086..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3572.txt +++ /dev/null @@ -1,135 +0,0 @@ -In combinatorics, Bertrand's ballot problem is the question: "In an election where candidate A receives p votes and candidate B receives q votes with p > q, what is the probability that A will be strictly ahead of B throughout the count?" The answer is -$$ -\frac{p-q}{p+q}. -$$ - -The result was first published by W. A. Whitworth in 1878, but is named after Joseph Louis François Bertrand who rediscovered it in 1887. - -In Bertrand's original paper, he sketches a proof based on a general formula for the number of favourable sequences using a recursion relation. He remarks that it seems probable that such a simple result could be proved by a more direct method. Such a proof was given by Désiré André, based on the observation that the unfavourable sequences can be divided into two equally probable cases, one of which (the case where B receives the first vote) is easily computed; he proves the equality by an explicit bijection. A variation of his method is popularly known as André's reflection method, although André did not use any reflections. - -Suppose there are 5 voters, of whom 3 vote for candidate A and 2 vote for candidate B (so p = 3 and q = 2). There are ten possibilities for the order of the votes cast: - -*AAABB - -*AABAB - -*ABAAB - -*BAAAB - -*AABBA - -*ABABA - -*BAABA - -*ABBAA - -*BABAA - -*BBAAA - -For the order AABAB, the tally of the votes as the election progresses is A: 1, 2, 2, 3, 3 against B: 0, 0, 1, 1, 2. - -After each vote counted the tally for A is always larger than the tally for B, so A is always strictly ahead of B. For the order AABBA, the tally of the votes as the election progresses is A: 1, 2, 2, 2, 3 against B: 0, 0, 1, 2, 2. - -For this order, B is tied with A after the fourth vote, so A is not always strictly ahead of B. - -Of the 10 possible orders, A is always ahead of B only for AAABB and AABAB. So the probability that A will always be strictly ahead is -$$ -\frac{2}{10}=\frac{1}{5}, -$$ - -and this is indeed equal to $\frac{3-2}{3+2}$ as the theorem predicts. - -Rather than computing the probability that a random vote counting order has the desired property, one can instead compute the number of favourable counting orders, then divide by the total number of ways in which the votes could have been counted. (This is the method used by Bertrand.) The total number of ways is the binomial coefficient $\tbinom{p+q}{p}$; Bertrand's proof shows that the number of favourable orders in which to count the votes is $\tbinom{p+q-1}{p-1}-\tbinom{p+q-1}{p}$ (though he does not give this number explicitly). And indeed after division this gives $\tfrac{p}{p+q}-\tfrac{q}{p+q}=\tfrac{p-q}{p+q}$. - -Another equivalent problem is to calculate the number of random walks on the integers that consist of n steps of unit length, beginning at the origin and ending at the point m, that never become negative. Assuming n and m have the same parity and n ≥ m ≥ 0, this number is -$$ -\binom{n}{\tfrac{n+m}2}-\binom{n}{\tfrac{n+m}2+1} = \frac{m+1}{\tfrac{n+m}2+1}\binom{n}{\tfrac{n+m}2}. -$$ - -When m = 0 and n is even, this gives the Catalan number $\frac1{\tfrac{n}2+1}\binom{n}{\tfrac{n}2}$. - -For A to be strictly ahead of B throughout the counting of the votes, there can be no ties. Separate the counting sequences according to the first vote. Any sequence that begins with a vote for B must reach a tie at some point, because A eventually wins. For any sequence that begins with A and reaches a tie, reflect the votes up to the point of the first tie (so any A becomes a B, and vice versa) to obtain a sequence that begins with B. Hence every sequence that begins with A and reaches a tie is in one-to-one correspondence with a sequence that begins with B, and the probability that a sequence begins with B is $q/(p+q)$. The probability that A always leads the vote is therefore 1 minus the probability of sequences that tie at some point, and since the tying sequences split evenly between those beginning with A and those beginning with B, this is -$$ -1-2\frac{q}{p+q}=\frac{p-q}{p+q}. -$$ - -Another method of proof is by mathematical induction: - -*We loosen the condition $p > q$ to $p \geq q$.
Clearly, the theorem is correct when $p = q$, since in this case the first candidate will not be strictly ahead after all the votes have been counted (so the probability is 0). - -*Clearly the theorem is true if p > 0 and q = 0 when the probability is 1, given that the first candidate receives all the votes; it is also true when p = q > 0 as we have just seen. - -*Assume it is true both when p = a − 1 and q = b, and when p = a and q = b − 1, with a > b > 0. (We don't need to consider the case $a = b$ here, since we have already disposed of it before.) Then considering the case with p = a and q = b, the last vote counted is either for the first candidate with probability a/(a + b), or for the second with probability b/(a + b). So the probability of the first being ahead throughout the count to the penultimate vote counted (and also after the final vote) is: -$$ -{a \over (a+b)}{(a-1)-b \over (a+b-1)}+{b \over (a+b)}{a-(b-1) \over (a+b-1)}={a-b \over a+b}. -$$ - -*And so it is true for all p and q with p > q > 0. - -A simple proof is based on the beautiful Cycle Lemma of Dvoretzky and Motzkin. - -Call a ballot sequence dominating if A is strictly ahead of B throughout the counting of the votes. The Cycle Lemma asserts that any sequence of $p$ A's and $q$ B's, where $p> q$, has precisely $p-q$ dominating cyclic permutations. To see this, just arrange the given sequence of $p+q$ A's and B's in a circle and repeatedly remove adjacent pairs AB until only $p-q$ A's remain. Each of these A's was the start of a dominating cyclic permutation before anything was removed. So $p-q$ out of the $p+q$ cyclic permutations of any arrangement of $p$ A votes and $q$ B votes are dominating. - -Bertrand expressed the solution as -$$ -\frac{2m-\mu}{\mu} -$$ - -where $\mu=p+q$ is the total number of voters and $m=p$ is the number of voters for the first candidate. He states that the result follows from the formula -$$ -P_{m+1,\mu+1}=P_{m,\mu}+P_{m+1,\mu}, -$$ - -where $P_{m,\mu}$ is the number of favourable sequences, but "it seems probable that such a simple result could be shown in a more direct way". Indeed, a more direct proof was soon produced by Désiré André. His approach is often mistakenly labelled "the reflection principle" by modern authors but in fact uses a permutation. He shows that the "unfavourable" sequences (those that reach an intermediate tie) consist of an equal number of sequences that begin with A as those that begin with B. Every sequence that begins with B is unfavourable, and there are $\tbinom{p+q-1}{q-1}$ such sequences with a B followed by an arbitrary sequence of (q-1) B's and p A's. Each unfavourable sequence that begins with A can be transformed to an arbitrary sequence of (q-1) B's and p A's by finding the first B that violates the rule (by causing the vote counts to tie) and deleting it, and interchanging the order of the remaining parts. To reverse the process, take any sequence of (q-1) B's and p A's and search from the end to find where the number of A's first exceeds the number of B's, and then interchange the order of the parts and place a B in between. For example, the unfavourable sequence AABBABAA corresponds uniquely to the arbitrary sequence ABAAAAB. From this, it follows that the number of favourable sequences of p A's and q B's is -$$ -\binom{p+q}{q}-2\binom{p+q-1}{q-1}=\binom{p+q}{q}\frac{p-q}{p+q} -$$ - -and thus the required probability is -$$ -\frac{p-q}{p+q} -$$ - -as expected. 
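These counting arguments are easy to confirm by exhaustive enumeration for small p and q. The following is a minimal brute-force check (an illustrative sketch of ours, not part of the original article; all names are ours):

```
from itertools import combinations
from fractions import Fraction
from math import comb

def ballot_probability(p, q):
    """Fraction of the C(p+q, p) distinct counting orders in which
    A is strictly ahead of B throughout the count."""
    n = p + q
    favourable = 0
    for a_positions in combinations(range(n), p):  # positions of the A votes
        pos = set(a_positions)
        a = b = 0
        ahead = True
        for i in range(n):
            a, b = a + (i in pos), b + (i not in pos)
            if a <= b:                             # A must lead after every vote
                ahead = False
                break
        favourable += ahead
    return Fraction(favourable, comb(n, p))

# The example from the article: p = 3, q = 2 gives 2/10 = 1/5.
assert ballot_probability(3, 2) == Fraction(1, 5)
# Agrees with (p - q)/(p + q) on a small sweep.
assert all(ballot_probability(p, q) == Fraction(p - q, p + q)
           for p in range(1, 8) for q in range(p))
```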
- -The original problem is to find the probability that the first candidate is always strictly ahead in the vote count. One may instead consider the problem of finding the probability that the second candidate is never ahead (that is, ties are allowed). In this case, the answer is -$$ -\frac{p+1-q}{p+1}. -$$ - -The variant problem can be solved by the reflection method in a similar way to the original problem. The number of possible vote sequences is $\tbinom{p+q}{q}$. Call a sequence "bad" if the second candidate is ever ahead; if the number of bad sequences can be enumerated, then the number of "good" sequences can be found by subtraction and the probability can be computed. - -Represent a voting sequence as a lattice path on the Cartesian plane as follows: - -* Start the path at (0, 0) - -* Each time a vote for the first candidate is received move right 1 unit. - -* Each time a vote for the second candidate is received move up 1 unit. - -Each such path corresponds to a unique sequence of votes and will end at (p, q). A sequence is 'good' exactly when the corresponding path never goes above the diagonal line y = x; equivalently, a sequence is 'bad' exactly when the corresponding path touches the line y = x + 1. - -For each 'bad' path P, define a new path P′ by reflecting the part of P up to the first point it touches the line across it. P′ is a path from (−1, 1) to (p, q). The same operation applied again restores the original P. This produces a one-to-one correspondence between the 'bad' paths and the paths from (−1, 1) to (p, q). The number of these paths is $\tbinom{p+q}{q-1}$ and so that is the number of 'bad' sequences. This leaves the number of 'good' sequences as -$$ -\binom{p+q}{q} - \binom{p+q}{q-1} = \binom{p+q}{q}\frac{p+1-q}{p+1}. -$$ - -Since there are $\tbinom{p+q}{q}$ altogether, the probability of a sequence being good is $\tfrac{p+1-q}{p+1}$. - -In fact, the solutions to the original problem and the variant problem are easily related. For candidate A to be strictly ahead throughout the vote count, they must receive the first vote, and for the remaining votes (ignoring the first) they must be either strictly ahead or tied throughout the count. Hence the solution to the original problem is -$$ -\frac{p}{p+q}\frac{p-1+1-q}{p-1+1}=\frac{p-q}{p+q} -$$ - -as required. - -Conversely, the tie case can be derived from the non-tie case. Note that the number of non-tie sequences with p + 1 votes for A is equal to the number of tie sequences with p votes for A. The number of non-tie sequences with p + 1 votes for A is $\tfrac{p + 1 - q}{p + 1 + q} \tbinom{p + 1 + q}{q} $, which by algebraic manipulation is $\tfrac{p + 1 - q}{p + 1} \tbinom{p + q}{q} $, so the fraction of tie sequences with p votes for A is $\tfrac{p + 1 - q}{p + 1}$.
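The reflection bijection can likewise be checked by enumeration (again an illustrative sketch of ours): the 'bad' sequences should number exactly $\tbinom{p+q}{q-1}$.

```
from itertools import combinations
from math import comb

def bad_sequences(p, q):
    """Count counting orders in which B is at some point strictly ahead of A."""
    n, bad = p + q, 0
    for a_positions in combinations(range(n), p):
        pos = set(a_positions)
        a = b = 0
        for i in range(n):
            a, b = a + (i in pos), b + (i not in pos)
            if b > a:          # path touches the line y = x + 1
                bad += 1
                break
    return bad

# The reflection argument says the bad paths biject with lattice paths
# from (-1, 1) to (p, q), of which there are C(p+q, q-1).
assert all(bad_sequences(p, q) == comb(p + q, q - 1)
           for p in range(1, 7) for q in range(1, p + 1))
```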
diff --git a/wiki/wikipedia/3573.txt b/wiki/wikipedia/3573.txt deleted file mode 100644 index c0b12c26c827680da64e54717a09f2227d5a714b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3573.txt +++ /dev/null @@ -1,9 +0,0 @@ -Pokémon Picross is a freemium puzzle video game featuring Pokémon characters developed by Jupiter Corporation and published by Nintendo and The Pokémon Company for the Nintendo 3DS. The title is part of the "Picross" nonogram series that uses number-based grid puzzles to reveal pictures. It was released as a downloadable title on the Nintendo 3DS eShop worldwide in December 2015. - -Pokémon Picross follows the typical format of nonogram puzzles, in which players must use numbers depicted on a grid to determine which sections to fill in and which to leave blank. In this game a twist is added: when a puzzle is completed, players are rewarded with a Pokémon based on the puzzle they cleared. These Pokémon can be set before starting a puzzle and can utilise various abilities based on their type. For example, Electric-type Pokémon can slow down the levels' timer, while Fire-types can automatically fill in certain areas of the grid in a cross-shape pattern. Each Pokémon has a cooldown period after their ability is used, and their ability may be limited to grids under a certain size, anywhere from 10x10 to 20x15 (the smallest and largest Pokémon puzzle sizes respectively). - -The game's freemium elements revolve around items known as Picrites, which are required to perform various actions such as unlocking new areas, increasing the number of Pokémon that can be set, opening up Mega Evolution and Alt World stages, and instantly restoring the Energy gauge (the latter of which is replenished over time). In addition to purchasing them with Nintendo eShop funds, players can obtain Picrites by clearing certain objectives in each stage (such as using a particular Pokémon or beating the stage within a certain time limit), playing the Daily Challenge (which tasks players with clearing several smaller puzzles in quick succession), and unlocking certain achievements as they play. Clearing certain stage objectives also unlocks Mural Tiles, which contain individual Picross puzzles as part of a larger mural puzzle. The game also features a spending cap in which, if the player spends a certain amount of funds on Picrites, they will be able to receive additional Picrites for free. - -Pokémon Picross was first announced on a November 12, 2015 Nintendo Direct broadcast, with a worldwide release date set for the following month. The title's developer, Jupiter Corporation, had originally planned to release a game called Pokémon Picross on the Game Boy Color 16 years earlier. Although previews appeared in Japanese gaming magazines in Spring 1999, that version was ultimately cancelled. In September 2020, a playable version of the original Game Boy iteration was discovered in the Nintendo "Gigaleak 3" alongside other data and unreleased games. - -Pokémon Picross received "generally favorable" reviews, according to video game review aggregator Metacritic. Kyle Hilliard of Game Informer found that the game was "difficult to recommend universally", and that the free-to-play mechanic was not conducive to a Picross-style puzzle title, stating "Free-to-play is not inherently a bad thing... but unlike most successful free-to-play games, Picross puzzles are not randomized nor worth replaying." The editor did commend the title for not being an "endless money-sink" by having the in-game currency become free after 5,000 are purchased. Destructoid found the game to be an improvement over previous Picross entries, stating "Aside from the strangely disguised pricing scheme, the new additions to Pokémon Picross exceed expectations," giving special mention to its mission mechanic, the unlocking of "mural" images, and "mega rows" that encourage non-standard play.
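As background on the puzzle format described above, the core nonogram mechanic (matching a row of cells against its numeric clue) can be sketched in a few lines; this is purely illustrative code of ours and has nothing to do with the game's actual implementation:

```
def row_fills(clue, width):
    """Enumerate all fill patterns of a nonogram row matching the clue.

    clue  -- run lengths in order, e.g. [1, 2]
    width -- number of cells in the row
    Returns each solution as a string such as '#.##.'.
    """
    if not clue:
        return ["." * width]
    run, rest = clue[0], clue[1:]
    min_rest = sum(rest) + len(rest)   # cells the remaining runs still need
    out = []
    for start in range(width - min_rest - run + 1):
        head = "." * start + "#" * run
        tail_width = width - len(head)
        if rest:
            out += [head + "." + t for t in row_fills(rest, tail_width - 1)]
        else:
            out.append(head + "." * tail_width)
    return out

# A 5-cell row with clue [1, 2] has exactly three solutions:
print(row_fills([1, 2], 5))   # ['#.##.', '#..##', '.#.##']
```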
diff --git a/wiki/wikipedia/3574.txt b/wiki/wikipedia/3574.txt deleted file mode 100644 index d028b0136b99dfbc989085a5fd5ac9c924a1a336..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3574.txt +++ /dev/null @@ -1,43 +0,0 @@ -In mathematics, Schur's inequality, named after Issai Schur, - -establishes that for all non-negative real numbers - -x, y, z and t, -$$ -x^t (x-y)(x-z) + y^t (y-z)(y-x) + z^t (z-x)(z-y) \ge 0 -$$ - -with equality if and only if x = y = z or two of them are equal and the other is zero. When t is an even positive integer, the inequality holds for all real numbers x, y and z. - -When $t=1$, the following well-known special case can be derived: -$$ -x^3 + y^3 + z^3 + 3xyz \geq xy(x+y) + xz(x+z) + yz(y+z) -$$ - -Since the inequality is symmetric in $x,y,z$ we may assume without loss of generality that $ x \geq y \geq z$. Then the inequality -$$ -(x-y)[x^t(x-z)-y^t(y-z)]+z^t(x-z)(y-z) \geq 0 -$$ - -clearly holds, since every term on the left-hand side of the inequality is non-negative. This rearranges to Schur's inequality. - -A generalization of Schur's inequality is the following: - -Suppose a,b,c are positive real numbers. If the triples (a,b,c) and (x,y,z) are similarly sorted, then the following inequality holds: -$$ -a (x-y)(x-z) + b (y-z)(y-x) + c (z-x)(z-y) \ge 0. -$$ - -In 2007, Romanian mathematician Valentin Vornicu showed that a yet further generalized form of Schur's inequality holds: - -Consider $a,b,c,x,y,z \in \mathbb{R}$, where $a \geq b \geq c$, and either $x \geq y \geq z$ or $z \geq y \geq x$. Let $k \in \mathbb{Z}^{+}$, and let $f:\mathbb{R} \rightarrow \mathbb{R}_{0}^{+}$ be either convex or monotonic. Then, -$$ -{f(x)(a-b)^k(a-c)^k+f(y)(b-a)^k(b-c)^k+f(z)(c-a)^k(c-b)^k \geq 0}. -$$ - -The standard form of Schur's inequality is the case of this inequality where x = a, y = b, z = c, k = 1, and $f(m) = m^t$. - -Another possible extension states that if the non-negative real numbers $ x \geq y \geq z \geq v $ and the positive real number t are such that x + v ≥ y + z, then -$$ -x^t (x-y)(x-z)(x-v) + y^t (y-x)(y-z)(y-v) + z^t (z-x)(z-y)(z-v) + v^t (v-x)(v-y)(v-z) \ge 0. -$$
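A cheap sanity check of the inequality and its $t=1$ special case is to sweep random non-negative inputs (an illustrative sketch of ours; the tolerance absorbs floating-point rounding):

```
import random

def schur_lhs(x, y, z, t):
    """Left-hand side of Schur's inequality."""
    return (x**t * (x - y) * (x - z)
            + y**t * (y - z) * (y - x)
            + z**t * (z - x) * (z - y))

random.seed(0)
for _ in range(10000):
    x, y, z = (random.uniform(0, 10) for _ in range(3))
    t = random.uniform(0, 4)
    assert schur_lhs(x, y, z, t) >= -1e-6
    # the t = 1 special case, rearranged
    assert (x**3 + y**3 + z**3 + 3*x*y*z
            >= x*y*(x + y) + x*z*(x + z) + y*z*(y + z) - 1e-6)
```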
diff --git a/wiki/wikipedia/3575.txt b/wiki/wikipedia/3575.txt deleted file mode 100644 index 22164d715299849da0c36bbaa0dcb194a45f9935..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3575.txt +++ /dev/null @@ -1,15 +0,0 @@ -In abstract algebra, the Pierce–Birkhoff conjecture asserts that any piecewise-polynomial function can be expressed as a maximum of finite minima of finite collections of polynomials. It was first stated, albeit in non-rigorous and vague wording, in the 1956 paper of Garrett Birkhoff and Richard S. Pierce in which they first introduced f-rings. The modern, rigorous statement of the conjecture was formulated by Melvin Henriksen and John R. Isbell, who worked on the problem in the early 1960s in connection with their work on f-rings. Their formulation is as follows: - -For every real piecewise-polynomial function $f \colon \R^n \rightarrow \R$, there exists a finite set of polynomials $g_{ij} \in \R[x_1, \ldots, x_n]$ such that $f = \sup_i \inf_j ( g_{ij} )$. - -Isbell is likely the source of the name Pierce–Birkhoff conjecture, and popularized the problem in the 1980s by discussing it with several mathematicians interested in real algebraic geometry. - -The conjecture was proved true for n = 1 and 2 by Louis Mahé. - -In 1989, James J. Madden provided an equivalent statement in terms of the real spectrum of $A = \R[x_1, \ldots, x_n]$ and the novel concepts of local polynomial representatives and separating ideals. - -Denoting the real spectrum of A by $\mathrm{Sper} A$, the separating ideal of α and β in $\mathrm{Sper} A$ is the ideal of A generated by all polynomials $g \in A$ that change sign on $\alpha$ and $\beta$, i.e., $g(\alpha) \ge 0$ and $g(\beta) \le 0$. Any finite covering $\R^n = \cup_i P_i$ of closed, semi-algebraic sets induces a corresponding covering $\mathrm{Sper} A = \cup_i\tilde{P}_i$, so, in particular, when f is piecewise polynomial, there is a polynomial $f_i$ for every $\alpha\in\mathrm{Sper} A$ such that $f|_{P_i} = f_i|_{P_i}$ and $\alpha\in\tilde{P}_i$. This $f_i$ is termed the local polynomial representative of f at $\alpha$. - -Madden's so-called local Pierce–Birkhoff conjecture at $\alpha$ and $\beta$, which is equivalent to the Pierce–Birkhoff conjecture, is as follows: - -Let $\alpha$, $\beta$ be in $\mathrm{Sper} A$ and f be piecewise-polynomial. It is conjectured that for every local representative of f at $\alpha$, $f_\alpha$, and local representative of f at $\beta$, $f_\beta$, $f_\alpha - f_\beta$ is in the separating ideal of $\alpha$ and $\beta$. diff --git a/wiki/wikipedia/3576.txt b/wiki/wikipedia/3576.txt deleted file mode 100644 index f56fab33bb187c80d85b90b0bf2ee4c3ee5bbcc2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3576.txt +++ /dev/null @@ -1,39 +0,0 @@ -In mathematics – and in particular the study of games on the unit square – Parthasarathy's theorem is a generalization of Von Neumann's minimax theorem. It states that a particular class of games has a mixed value, provided that at least one of the players has a strategy that is restricted to absolutely continuous distributions with respect to the Lebesgue measure (in other words, one of the players is forbidden to use a pure strategy). - -The theorem is attributed to the Indian mathematician Thiruvenkatachari Parthasarathy. - -Let $X$ and $Y$ stand for the unit interval $[0,1]$; $\mathcal M_X$ denote the set of probability distributions on $X$ (with $\mathcal M_Y$ defined similarly); and $A_X$ denote the set of absolutely continuous distributions on $X$ (with $A_Y$ defined similarly). - -Suppose that $k(x,y)$ is bounded on the unit square $X \times Y = \{(x,y) : 0\leq x,y\leq 1\}$ and that $k(x,y)$ is continuous except possibly on a finite number of curves of the form $y=\phi_i(x)$ (with $i=1,2,\ldots,n$) where the $\phi_i(x)$ are continuous functions. For $ \mu \in \mathcal M_X, \lambda \in \mathcal M_Y$, define -$$ -k(\mu,\lambda)=\int_{y=0}^1\int_{x=0}^1 k(x,y)d\mu(x)d\lambda(y)= \int_{x=0}^1\int_{y=0}^1 k(x,y)d\lambda(y)d\mu(x). -$$ - -Then -$$ -\max_{\mu\in{\mathcal M}_X}\inf_{\lambda\in A_Y}k(\mu,\lambda)= \inf_{\lambda\in A_Y}\max_{\mu\in{\mathcal M}_X} k(\mu,\lambda). -$$ - -This is equivalent to the statement that the game induced by $k(\cdot,\cdot)$ has a value. Note that one player (WLOG $Y$) is forbidden from using a pure strategy. - -Parthasarathy goes on to exhibit a game in which -$$ -\max_{\mu\in{\mathcal M}_X}\inf_{\lambda\in{\mathcal M}_Y}k(\mu,\lambda)\neq \inf_{\lambda\in{\mathcal M}_Y}\max_{\mu\in{\mathcal M}_X} k(\mu,\lambda) -$$ - -which thus has no value. There is no contradiction because in this case neither player is restricted to absolutely continuous distributions (and the demonstration that the game has no value requires both players to use pure strategies).
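The flavour of a game "having a value" can be seen numerically by discretising a kernel on the unit square and solving the resulting finite matrix game by linear programming (a rough sketch of ours; the grid size, the kernel $k(x,y)=(x-y)^2$, and the use of scipy are all our choices, and the finite game has a value by von Neumann's theorem rather than by Parthasarathy's):

```
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(A):
    """Value of the zero-sum game with payoff matrix A (row player maximises).

    Solves: max v  s.t.  sum_i x_i A[i, j] >= v for all j,  x a distribution.
    """
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                   # minimise -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])      # v - x^T A e_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = [1.0]                                   # x sums to 1
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]

# discretise k(x, y) = (x - y)**2 on the unit square
g = np.linspace(0.0, 1.0, 41)
K = (g[:, None] - g[None, :]) ** 2

v_row = matrix_game_value(K)        # max_mu min_lambda
v_col = -matrix_game_value(-K.T)    # min_lambda max_mu
print(v_row, v_col)                 # the two coincide (about 0.25 here)
```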
diff --git a/wiki/wikipedia/3577.txt b/wiki/wikipedia/3577.txt deleted file mode 100644 index 5618d9d98b5d4e52cca37a18af8ae37543e679a0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3577.txt +++ /dev/null @@ -1,20 +0,0 @@ -The Hauptvermutung (German for main conjecture) of geometric topology is the question of whether any two triangulations of a triangulable space have subdivisions that are combinatorially equivalent, i.e. the subdivided triangulations are built up in the same combinatorial pattern. It was originally formulated as a conjecture in 1908 by Ernst Steinitz and Heinrich Franz Friedrich Tietze, but it is now known to be false. - -The non-manifold version was disproved by John Milnor in 1961 using Reidemeister torsion. - -The manifold version is true in dimensions $m\le 3$. The cases $m = 2$ and $3$ were proved by Tibor Radó and Edwin E. Moise in the 1920s and 1950s, respectively. - -An obstruction to the manifold version was formulated by Andrew Casson and Dennis Sullivan in 1967–69 (originally in the simply-connected case), using the Rochlin invariant and the cohomology group $H^3(M;\mathbb{Z}/2\mathbb{Z})$. - -In dimension $m \ge 5$, a homeomorphism $f \colon N \to M$ of m-dimensional piecewise linear manifolds has an invariant $\kappa(f) \in H^3(M;\Z/2\Z)$ such that $f$ is isotopic to a piecewise linear (PL) homeomorphism if and only if $\kappa(f) = 0$. In the simply-connected case and with $m \ge 5$, $f$ is homotopic to a PL homeomorphism if and only if $[\kappa(f)] = 0 \in [M,G/{\rm PL}]$. - -The obstruction to the manifold Hauptvermutung is now seen as a relative version of the triangulation obstruction of Robion Kirby and Laurent C. Siebenmann, obtained in 1970. The Kirby–Siebenmann obstruction is defined for any compact m-dimensional topological manifold M -$$ -\kappa(M)\in H^4(M;\Z/2\Z) -$$ - -again using the Rochlin invariant. For $m\ge 5$, the manifold M has a PL structure (i.e., it can be triangulated by a PL manifold) if and only if $\kappa(M) = 0$, and if this obstruction is 0, the PL structures are parametrized by $H^3(M;\Z/2\Z)$. In particular there are only a finite number of essentially distinct PL structures on M. - -For compact simply-connected manifolds of dimension 4, Simon Donaldson found examples with an infinite number of inequivalent PL structures, and Michael Freedman found the E8 manifold which not only has no PL structure, but (by work of Casson) is not even homeomorphic to a simplicial complex. - -In 2013, Ciprian Manolescu proved that there exist compact topological manifolds of dimension 5 (and hence of any dimension greater than 5) that are not homeomorphic to a simplicial complex. Thus Casson's example illustrates a more general phenomenon that is not merely limited to dimension 4. diff --git a/wiki/wikipedia/3578.txt b/wiki/wikipedia/3578.txt deleted file mode 100644 index eba600194a3d933686644ddd915c6e923b9b98f7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3578.txt +++ /dev/null @@ -1,17 +0,0 @@ -In mathematics, a characterization of an object is a set of conditions that, while different from the definition of the object, is logically equivalent to it. To say that "Property P characterizes object X" is to say that not only does X have property P, but that X is the only thing that has property P (i.e., P is a defining property of X). Similarly, a set of properties P is said to characterize X, when these properties distinguish X from all other objects. 
Even though a characterization identifies an object in a unique way, several characterizations can exist for a single object. Common mathematical expressions for a characterization of X in terms of P include "P is necessary and sufficient for X", and "X holds if and only if P". - -It is also common to find statements such as "Property Q characterizes Y up to isomorphism". The first type of statement says in different words that the extension of P is a singleton set, while the second says that the extension of Q is a single equivalence class (for isomorphism, in the given example - depending on how up to is being used, some other equivalence relation might be involved). - -A reference on mathematical terminology notes that characteristic originates from the Greek term kharax, "a pointed stake":
    "From Greek kharax came kharakhter, an instrument used to mark or engrave an object. Once an object was marked, it became distinctive, so the character of something came to mean its distinctive nature. The Late Greek suffix -istikos converted the noun character into the adjective characteristic, which, in addition to maintaining its adjectival meaning, later became a noun as well."
Just as in chemistry the characteristic property of a material serves to identify a sample, and in the study of materials structures and properties determine characterization, in mathematics there is a continual effort to express properties that will distinguish a desired feature in a theory or system. Characterization is not unique to mathematics, but since the science is abstract, much of the activity can be described as "characterization". For instance, in Mathematical Reviews, as of 2018, more than 24,000 articles contain the word in the article title, and 93,600 somewhere in the review. - -In an arbitrary context of objects and features, characterizations have been expressed via the heterogeneous relation aRb, meaning that object a has feature b. For example, b may mean abstract or concrete. The objects can be considered the extensions of the world, while the features are expressions of the intensions. A continuing program of characterization of various objects leads to their categorization. - -* A rational number, generally defined as a ratio of two integers, can be characterized as a number with finite or repeating decimal expansion. - -* A parallelogram is a quadrilateral whose opposing sides are parallel. One of its characterizations is that its diagonals bisect each other. This means that the diagonals in all parallelograms bisect each other, and conversely, that any quadrilateral whose diagonals bisect each other must be a parallelogram. The latter statement is only true if inclusive definitions of quadrilaterals are used (so that, for example, rectangles count as parallelograms), which is the dominant way of defining objects in mathematics nowadays. - -* "Among probability distributions on the interval from 0 to ∞ on the real line, memorylessness characterizes the exponential distributions." This statement means that the exponential distributions are the only probability distributions that are memoryless, provided that the distribution is continuous (see Characterization of probability distributions for more). - -* "According to the Bohr–Mollerup theorem, among all functions f such that f(1) = 1 and x f(x) = f(x + 1) for x > 0, log-convexity characterizes the gamma function." This means that among all such functions, the gamma function is the only one that is log-convex. - -* The circle is characterized as a manifold by being one-dimensional, compact and connected; here the characterization, as a smooth manifold, is up to diffeomorphism. diff --git a/wiki/wikipedia/3579.txt b/wiki/wikipedia/3579.txt deleted file mode 100644 index 660bf35e57a149dd3ae1e65fe3faad731723fb78..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3579.txt +++ /dev/null @@ -1,27 +0,0 @@ -In mathematics, particularly in order theory, an upper bound or majorant of a subset S of some preordered set (K, ≤) is an element of K which is greater than or equal to every element of S. - -Dually, a lower bound or minorant of S is defined to be an element of K which is less than or equal to every element of S. - -A set with an upper (respectively, lower) bound is said to be bounded from above or majorized (respectively bounded from below or minorized) by that bound. - -The terms bounded above (bounded below) are also used in the mathematical literature for sets that have upper (respectively lower) bounds. - -For example, 5 is a lower bound for the set S = {5, 8, 42, 34, 13934} (as a subset of the integers or of the real numbers, etc.), and so is 4.
On the other hand, 6 is not a lower bound for S since it is not less than or equal to every element in S. - -The set S = {42} has 42 as both an upper bound and a lower bound; all other numbers are either an upper bound or a lower bound for that S. - -Every subset of the natural numbers has a lower bound since the natural numbers have a least element (0 or 1, depending on convention). An infinite subset of the natural numbers cannot be bounded from above. An infinite subset of the integers may be bounded from below or bounded from above, but not both. An infinite subset of the rational numbers may or may not be bounded from below, and may or may not be bounded from above. - -Every finite subset of a non-empty totally ordered set has both upper and lower bounds. - -The definitions can be generalized to functions and even to sets of functions. - -Given a function f with domain D and a preordered set (K, ≤) as codomain, an element y of K is an upper bound of f if y ≥ f(x) for each x in D. The upper bound is called sharp if equality holds for at least one value of x. It indicates that the constraint is optimal, and thus cannot be further reduced without invalidating the inequality. - -Similarly, a function g defined on domain D and having the same codomain (K, ≤) is an upper bound of f if g(x) ≥ f(x) for each x in D. Function g is further said to be an upper bound of a set of functions if it is an upper bound of each function in that set. - -The notion of lower bound for (sets of) functions is defined analogously, by replacing ≥ with ≤. - -An upper bound is said to be a tight upper bound, a least upper bound, or a supremum, if no smaller value is an upper bound. Similarly, a lower bound is said to be a tight lower bound, a greatest lower bound, or an infimum, if no greater value is a lower bound. - -An upper bound u of a subset S of a preordered set (K, ≤) is said to be an exact upper bound for S if every element of K which is strictly majorized by u is also majorized by some element of S. Exact upper bounds of reduced products of linear orders play an important role in PCF theory. diff --git a/wiki/wikipedia/358.txt b/wiki/wikipedia/358.txt deleted file mode 100644 index 68ecabcae1c4029dcd908110e6a5c1daf60da8dd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/358.txt +++ /dev/null @@ -1,200 +0,0 @@ -The Lucas numbers or Lucas series are an integer sequence named after the mathematician François Édouard Anatole Lucas (1842-91), who studied both that sequence and the closely related Fibonacci numbers. Lucas numbers and Fibonacci numbers form complementary instances of Lucas sequences. - -The Lucas sequence has the same recursive relationship as the Fibonacci sequence, where each term is the sum of the two previous terms, but with different starting values. This produces a sequence where the ratios of successive terms approach the golden ratio, and in fact the terms themselves are roundings of integer powers of the golden ratio. The sequence also has a variety of relationships with the Fibonacci numbers, like the fact that adding any two Fibonacci numbers two terms apart in the Fibonacci sequence results in the Lucas number in between. - -The first few Lucas numbers are - -2, 1, 3, 4, 7, 11, 18, 29, 47, 76, 123, .... - -Similar to the Fibonacci numbers, each Lucas number is defined to be the sum of its two immediately preceding terms, thereby forming a Fibonacci integer sequence. The first two Lucas numbers are $L_0=2$ and $L_1=1$ as opposed to the first two Fibonacci numbers $F_0=0$ and $F_1=1$.
Though closely related in definition, Lucas and Fibonacci numbers exhibit distinct properties. - -The Lucas numbers may thus be defined as follows: - - - -L_n := - -\begin{cases} - -2 & \text{if } n = 0; \\ - -1 & \text{if } n = 1; \\ - -L_{n-1}+L_{n-2} & \text{if } n > 1. - -\end{cases} - - - -(where n belongs to the natural numbers) - -The sequence of the first twelve Lucas numbers is: -$$ -2,1,3,4,7,11,18,29,47,76,123,199, \ldots -$$. - -All Fibonacci-like integer sequences appear in shifted form as a row of the Wythoff array; the Fibonacci sequence itself is the first row and the Lucas sequence is the second row. Also like all Fibonacci-like integer sequences, the ratio between two consecutive Lucas numbers converges to the golden ratio. - -Using $L_{n-2}=L_{n}-L_{n-1}$, one can extend the Lucas numbers to negative integers to obtain a doubly infinite sequence: - -..., -11, 7, -4, 3, -1, 2, 1, 3, 4, 7, 11, ... (terms $L_n$ for $-5\leq{}n\leq5$ are shown). - -The formula for terms with negative indices in this sequence is -$$ -L_{-n}=(-1)^nL_n.\! -$$ - -The Lucas numbers are related to the Fibonacci numbers by many identities. Among these are the following: - -* $L_n = F_{n-1}+F_{n+1}$ - -* $L_{m+n} = L_{m+1}F_{n}+L_mF_{n-1}$ - -* $F_{2n} = L_n F_n$ - -* $F_{n+k} + (-1)^k F_{n-k} = L_k F_n$ - -* $2F_{2n+k} = L_{n} F_{n+k} + L_{n+k} F_{n}$ - -* $L_{2n} = 5 F_n^2 + 2(-1)^n = L_n^2 - 2(-1)^n$, so $\lim_{n\to\infty} \frac{L_n}{F_n}=\sqrt{5}$. - -* $L_{n+k} - (-1)^k L_{n-k} = 5 F_n F_k$; in particular, $F_n = {L_{n-1}+L_{n+1} \over 5}$, so $5F_n + L_n = 2L_{n+1}$. - -Their closed formula is given as: -$$ -L_n = \varphi^n + (1-\varphi)^{n} = \varphi^n + (- \varphi)^{-n}=\left({ 1+ \sqrt{5} \over 2}\right)^n + \left({ 1- \sqrt{5} \over 2}\right)^n , -$$ - -where $\varphi$ is the golden ratio. Alternatively, as for $n>1$ the magnitude of the term $(-\varphi)^{-n}$ is less than 1/2, $L_n$ is the closest integer to $\varphi^n$ or, equivalently, the integer part of $\varphi^n+1/2$, also written as $\lfloor \varphi^n+1/2 \rfloor$. - -Combining the above with Binet's formula, -$$ -F_n = \frac{\varphi^n - (1-\varphi)^{n}}{\sqrt{5}} , -$$ - -a formula for $\varphi^n$ is obtained: -$$ -\varphi^n = {{L_n + F_n \sqrt{5}} \over 2} . -$$ - -Many of the Fibonacci identities have parallels in Lucas numbers. For example, the Cassini identity becomes -$$ -L_n^2 - L_{n-1}L_{n+1} = (-1)^{n}5 -$$ - -Also -$$ -\sum_{k=0}^n L_k = L_{n+2} - 1 -$$ -$$ -\sum_{k=0}^n L_k^2 = L_nL_{n+1} + 2 -$$ -$$ -2L_{n-1}^2 + L_n^2 = L_{2n+1} + 5F_{n-2}^2 -$$ - -where $\textstyle F_n=\frac{L_{n-1}+L_{n+1}}{5}$. -$$ -L_n^k = \sum_{j=0}^{\lfloor \frac{k}{2} \rfloor} (-1)^{jn}\binom{k}{j}L'_{(k-2j)n} -$$ - -where $L'_n=L_n$ except for $L'_0=1$. - -For example, for odd n, $L_n^3 = L'_{3n}-3L'_n$ and $L_n^4 = L'_{4n}-4L'_{2n}+6L'_0$ - -Checking, $L_3=4, 4^3=64=76-3(4)$, and $256=322-4(18)+6$ - -Let -$$ -\Phi(x) = 2 + x + 3x^2 + 4x^3 + \cdots = \sum_{n = 0}^\infty L_nx^n -$$ - -be the generating function of the Lucas numbers.
By a direct computation, - -\begin{align} - -\Phi(x) &= L_0 + L_1x + \sum_{n = 2}^\infty L_nx^n \\ - -&= 2 + x + \sum_{n = 2}^\infty (L_{n - 1} + L_{n - 2})x^n \\ - -&= 2 + x + \sum_{n = 1}^\infty L_nx^{n + 1} + \sum_{n = 0}^\infty L_nx^{n + 2} \\ - -&= 2 + x + x(\Phi(x) - 2) + x^2 \Phi(x) - -\end{align} - -which can be rearranged as -$$ -\Phi(x) = \frac{2 - x}{1 - x - x^2} -$$ -$$ -\Phi(-\frac1x) -$$ gives the generating function for the negative indexed Lucas numbers, $\sum_{n = 0}^\infty (-1)^nL_nx^{-n} = \sum_{n = 0}^\infty L_{-n}x^{-n}$, and -$$ -\Phi(-\frac1x) = \frac{x + 2x^2}{x^2 + x - 1} -$$ -$$ -\Phi(x) -$$ satisfies the functional equation -$$ -\Phi(x) + \Phi(-\frac1x) = 2 -$$ - -As the generating function for the Fibonacci numbers is given by -$$ -s(x) = \frac{x}{1 - x - x^2} -$$ - -we have -$$ -s(x) + \Phi(x) = \frac{2}{1 - x - x^2} -$$ - -which proves that -$$ -F_n + L_n = 2F_{n+1} -$$ - -And -$$ -5s(x) + \Phi(x) = -\frac2x\Phi(-\frac1x) = 2\frac{1}{1 - x - x^2} + 4\frac{x}{1 - x - x^2} -$$ - -proves that -$$ -5F_n + L_n = 2L_{n+1} -$$ - -The partial fraction decomposition is given by -$$ -\Phi(x) = \frac{1}{1 - \phi x} + \frac{1}{1 - \psi x} -$$ - -where $\phi = \frac{1 + \sqrt{5}}{2}$ is the golden ratio and $\psi = \frac{1 - \sqrt{5}}{2}$ is its conjugate. - -This can be used to prove the generating function, as -$$ -\sum_{n = 0}^\infty L_nx^n = \sum_{n = 0}^\infty (\phi^n + \psi^n)x^n = \sum_{n = 0}^\infty \phi^nx^n + \sum_{n = 0}^\infty \psi^nx^n = \frac{1}{1 - \phi x} + \frac{1}{1 - \psi x} = \Phi(x) -$$ - -If $F_n\geq 5$ is a Fibonacci number then no Lucas number is divisible by $F_n$. -$$ -L_n -$$ is congruent to 1 modulo $n$ if $n$ is prime, but some composite values of $n$ also have this property. These are the Fibonacci pseudoprimes. -$$ -L_n-L_{n-4} -$$ is congruent to 0 modulo 5. - -A Lucas prime is a Lucas number that is prime. The first few Lucas primes are - -2, 3, 7, 11, 29, 47, 199, 521, 2207, 3571, 9349, 3010349, 54018521, 370248451, 6643838879, ... . - -The indices of these primes are (for example, L4 = 7) - -0, 2, 4, 5, 7, 8, 11, 13, 16, 17, 19, 31, 37, 41, 47, 53, 61, 71, 79, 113, 313, 353, 503, 613, 617, 863, 1097, 1361, 4787, 4793, 5851, 7741, 8467, ... . - -If Ln is prime then n is 0, prime, or a power of 2. $L_{2^m}$ is prime for m = 1, 2, 3, and 4 and no other known values of m. - -In the same way as Fibonacci polynomials are derived from the Fibonacci numbers, the Lucas polynomials $L_{n}(x)$ are a polynomial sequence derived from the Lucas numbers. - -Lucas numbers are the second most common pattern in sunflowers after Fibonacci numbers, when clockwise and counter-clockwise spirals are counted.
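The recurrence and several of the identities above are straightforward to verify mechanically (an illustrative sketch of ours):

```
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

@lru_cache(maxsize=None)
def lucas(n):
    if n == 0:
        return 2
    if n == 1:
        return 1
    return lucas(n - 1) + lucas(n - 2)

for n in range(1, 30):
    assert lucas(n) == fib(n - 1) + fib(n + 1)      # L_n = F_{n-1} + F_{n+1}
    assert fib(2 * n) == lucas(n) * fib(n)          # F_2n = L_n F_n
    # the Lucas analogue of the Cassini identity
    assert lucas(n) ** 2 - lucas(n - 1) * lucas(n + 1) == 5 * (-1) ** n
    assert 5 * fib(n) + lucas(n) == 2 * lucas(n + 1)
```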
diff --git a/wiki/wikipedia/3580.txt b/wiki/wikipedia/3580.txt deleted file mode 100644 index fa4b66ffd9b37d6f2abf66a96e682b3b2c9530ce..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3580.txt +++ /dev/null @@ -1,55 +0,0 @@ -Tiger Lake is Intel's codename for the 11th generation Intel Core mobile processors based on the new Willow Cove Core microarchitecture, manufactured using Intel's third-generation 10 nm process node known as 10SF ("10 nm SuperFin"). Tiger Lake replaces the Ice Lake family of mobile processors, representing an Optimization step in Intel's process–architecture–optimization model. - -Tiger Lake processors launched on September 2, 2020, are part of the Tiger Lake-U family and include dual-core and quad-core 9 W (7–15 W) TDP and 15 W (12–28 W) TDP models. They power 2020 "Project Athena" laptops. The quad-core 96 EU die measures 13.6 × 10.7 mm (146.1 mm2), which is 19.2% wider than the 11.4 × 10.7 mm (122.5 mm2) quad-core 64 EU Ice Lake die. The 8-core 32 EU die used in Tiger Lake-H is around 190 mm2. According to Yehuda Nissan and his team, the architecture is named after a lake across Puget Sound, Washington. Laptops based on Tiger Lake started to sell in October 2020. - -The Tiger Lake-H35 processors were launched on January 11, 2021. These quad-core processors are designed for "ultraportable gaming" laptops with 28-35 W TDP. Intel also announced that the Tiger Lake-H processors with 45 W TDP and up to eight cores will become available in Q1 2021. Intel officially launched 11th Gen Intel Core-H series on May 11, 2021 and announced 11th Gen Intel Core Tiger Lake Refresh series on May 30, 2021. - -* Intel Willow Cove CPU cores - -* Full memory (RAM) encryption - -* Indirect branch tracking and CET shadow stack - -* Intel Key Locker - -* Intel Xe-LP ("Gen12") GPU with up to 96 execution units (50% uplift compared to Ice Lake, up from 64) with some yet to be announced processors using Intel's discrete GPU, DG1 - -* Fixed-function hardware decoding for HEVC 12-bit, 4:2:2/4:4:4; VP9 12-bit 4:4:4 and AV1 8K 10-bit 4:2:0 - -* Support for a single 8K 12-bit HDR display or two 4K 10-bit HDR displays - -* Hardware accelerated Dolby Vision - -* Sampler Feedback support - -* Dual Queue Support - -* Image Processing Unit, a special co-processor to improve image and video capture quality - -* Not available on embedded models - -* Initially there were , , and models with no IPU but later embedded processors were introduced instead - -* PCI Express 4.0 (Pentium and Celeron CPUs are limited to PCI Express 3.0) - -* Integrated Thunderbolt 4 (includes USB4) - -* LPDDR4X-4267 memory support - -* LPDDR5-5400 "architecture capability" (Intel expected Tiger Lake products with LPDDR5 to be available around Q1 2021). Consumer parts and SO-DIMM DDR5 memory modules are yet to be announced. - -* Miniaturization of CPU and motherboard into an M.2 SSD-sized small circuit board - -* Intel Platform Monitoring Technology added - -* All models support DDR4-3200 memory - -* All models support 20 reconfigurable PCI Express 4.0 lanes, allowing x16 Gen 4 link for discrete GPU and x4 Gen 4 link for M.2 SSDs - -* All models support DDR4-3200 or LPDDR4X-4267 memory - -* Socket: FCBGA1787, a BGA socket, thus these processors are meant only for system integrators - -* Intel Xe UHD Graphics - -* Up to 128 GB DDR4-3200 memory diff --git a/wiki/wikipedia/3581.txt b/wiki/wikipedia/3581.txt deleted file mode 100644 index e20fde530537d7d977ed1f6ac990b55345889b8a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3581.txt +++ /dev/null @@ -1,31 +0,0 @@ -In geometry, Apollonius's theorem is a theorem relating the length of a median of a triangle to the lengths of its sides. - -It states that "the sum of the squares of any two sides of any triangle equals twice the square on half the third side, together with twice the square on the median bisecting the third side". - -Specifically, in any triangle $ABC,$ if $AD$ is a median, then -$$ -|AB|^2 + |AC|^2 = 2 \left(|AD|^2+|BD|^2\right). -$$ - -It is a special case of Stewart's theorem. For an isosceles triangle with $|AB| = |AC|,$ the median $AD$ is perpendicular to $BC$ and the theorem reduces to the Pythagorean theorem for triangle $ADB$ (or triangle $ADC$). From the fact that the diagonals of a parallelogram bisect each other, the theorem is equivalent to the parallelogram law.
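Before the proof, the identity is easy to confirm numerically on randomly generated triangles (our illustrative sketch; the coordinates are arbitrary):

```
import random

random.seed(1)
for _ in range(1000):
    # random triangle ABC; D is the midpoint of BC, so AD is a median
    ax, ay, bx, by, cx, cy = (random.uniform(-10, 10) for _ in range(6))
    dx, dy = (bx + cx) / 2, (by + cy) / 2
    sq = lambda px, py, qx, qy: (px - qx) ** 2 + (py - qy) ** 2
    lhs = sq(ax, ay, bx, by) + sq(ax, ay, cx, cy)          # |AB|^2 + |AC|^2
    rhs = 2 * (sq(ax, ay, dx, dy) + sq(bx, by, dx, dy))    # 2(|AD|^2 + |BD|^2)
    assert abs(lhs - rhs) < 1e-6
```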
- -The theorem is named for the ancient Greek mathematician Apollonius of Perga. - -The theorem can be proved as a special case of Stewart's theorem, or can be proved using vectors (see parallelogram law). The following is an independent proof using the law of cosines. - -Let the triangle have sides $a, b, c$ with a median $d$ drawn to side $a.$ Let $m$ be the length of the segments of $a$ formed by the median, so $m$ is half of $a.$ Let the angles formed between $a$ and $d$ be $\theta$ and $\theta^{\prime},$ where $\theta$ includes $b$ and $\theta^{\prime}$ includes $c.$ Then $\theta^{\prime}$ is the supplement of $\theta$ and $\cos \theta^{\prime} = - \cos \theta.$ The law of cosines for $\theta$ and $\theta^{\prime}$ states that -$$ -\begin{align} -b^2 &= m^2 + d^2 - 2dm\cos\theta \\ -c^2 &= m^2 + d^2 - 2dm\cos\theta' \\ -&= m^2 + d^2 + 2dm\cos\theta. -\end{align} -$$ - -Add the first and third equations to obtain -$$ -b^2 + c^2 = 2(m^2 + d^2) -$$ - -as required. diff --git a/wiki/wikipedia/3582.txt b/wiki/wikipedia/3582.txt deleted file mode 100644 index 74e042772d3cfb488d6d9f7a568608008dfd4aee..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3582.txt +++ /dev/null @@ -1,7 +0,0 @@ -In graph theory, for a connected graph $G$, a spanning tree $T$ is a subgraph of $G$ with the least number of edges that still spans $G$. A number of properties can be proved about $T$: $T$ is acyclic, has $|V|-1$ edges (where $|V|$ is the number of vertices in $G$), and so on. - -A minimum degree spanning tree $T'$ is a spanning tree whose maximum degree is least: the maximum degree of a vertex in $T'$ is the smallest among all possible spanning trees of $G$. - -Finding a minimum degree spanning tree is NP-hard, but a local search algorithm can give a tree whose maximum degree is at most the maximum degree of the optimal tree plus one. - -See Degree-Constrained Spanning Tree.
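Because the problem is NP-hard, small instances are the easiest way to see the definition in action. The following brute-force sketch (ours; it is not the local search algorithm mentioned above) tries every candidate edge subset of a tiny graph:

```
from itertools import combinations

def min_degree_spanning_tree(vertices, edges):
    """Try every |V|-1 edge subset; keep a spanning tree of least maximum degree."""
    best, best_deg = None, None
    for subset in combinations(edges, len(vertices) - 1):
        # union-find: |V|-1 acyclic edges on |V| vertices form a spanning tree
        parent = {v: v for v in vertices}
        def find(v):
            while parent[v] != v:
                v = parent[v]
            return v
        ok = True
        for u, w in subset:
            ru, rw = find(u), find(w)
            if ru == rw:                      # would create a cycle
                ok = False
                break
            parent[ru] = rw
        if not ok:
            continue
        deg = {}
        for u, w in subset:
            deg[u] = deg.get(u, 0) + 1
            deg[w] = deg.get(w, 0) + 1
        d = max(deg.values())
        if best_deg is None or d < best_deg:
            best, best_deg = subset, d
    return best, best_deg

# a star plus a path: the star centre has degree 4 unless the path edges are used
V = [0, 1, 2, 3, 4]
E = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2), (2, 3), (3, 4)]
tree, deg = min_degree_spanning_tree(V, E)
print(tree, deg)    # a Hamiltonian path with maximum degree 2
```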
diff --git a/wiki/wikipedia/3583.txt b/wiki/wikipedia/3583.txt deleted file mode 100644 index 028769de7e5eab6a2d78db5070b0271ece61c59a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3583.txt +++ /dev/null @@ -1,62 +0,0 @@ -In the theory of probability for stochastic processes, the reflection principle for a Wiener process states that if the path of a Wiener process f(t) reaches a value f(s) = a at time t = s, then the subsequent path after time s has the same distribution as the reflection of the subsequent path about the value a. More formally, the reflection principle refers to a lemma concerning the distribution of the supremum of the Wiener process, or Brownian motion. The result relates the distribution of the supremum of Brownian motion up to time t to the distribution of the process at time t. It is a corollary of the strong Markov property of Brownian motion. - -If $ (W(t): t \geq 0) $ is a Wiener process, and $a > 0$ is a threshold (also called a crossing point), then the lemma states: -$$ - \mathbb{P} \left(\sup_{0 \leq s \leq t} W(s) \geq a \right) = 2\mathbb{P}(W(t) \geq a) -$$ - -Assuming $ W(0) = 0 $, due to the continuity of the Wiener process, each path (one sampled realization) of the process on (0,t) which ends up at or above the threshold 'a' at time t ($W(t) \geq a$) must have crossed the threshold ($W(t_a) = a$) at some earliest time $t_a \leq t$. (It can cross level 'a' multiple times on the interval (0,t); we take the earliest.) - -For every such path one can define another sampled path W' on (0,t) that is reflected, or vertically flipped, on the sub-interval $(t_a,t)$ symmetrically around level 'a', i.e. with $a-W'(t)=W(t)-a$. This reflected path also reaches the value $W'(t_a) = a$ on the interval (0,t) and is also a Wiener process or Brownian motion. Together, the original and reflected paths form the set of paths that reach value 'a' on (0,t), and they are twice as many as the paths that end up at or above the threshold at time t (the original paths only). If each path is equally probable (imagine a symmetric random walk from 0 on trees), then reaching the threshold at some time on (0,t) is twice as likely as ending up at or above the threshold at time t. What about paths that reach level 'a' on (0,t) but end up at a value $W(t) < a$ at time t? They are accounted for: they are exactly the reflected paths counted towards the number of paths that reached the threshold, and there are exactly as many of them as there are paths that end up at or above the threshold at time t. Once the Wiener process has reached the threshold, by symmetry it is equally likely (p = 0.5) to end up above or below it at any future time t, so the conditional probability is -$$ - P(W(t) \geq a | W(t_a)=a)=0.5 -$$. - -Paths with $ W(t) < a $ that never reach the threshold 'a' are never considered. - -In a stronger form, the reflection principle says that if $\tau$ is a stopping time then the reflection of the Wiener process starting at $ \tau $, denoted $ (W^\tau(t): t \geq 0)$, is also a Wiener process, where: -$$ - W^\tau(t) = W(t)\chi_{\{t \leq \tau\}} + (2W(\tau) - W(t))\chi_{\{t > \tau\}} -$$ - -and the indicator function $\chi_{\{t \leq \tau\}}= \begin{cases} 1, & \text{if }t \leq \tau \\ 0, & \text{otherwise }\end{cases}$ and $\chi_{\{t > \tau\}}$ is defined similarly. The stronger form implies the original lemma by choosing $\tau = \inf\left\{t \geq 0: W(t) = a\right\}$. - -The earliest stopping time for reaching crossing point a, $ \tau_a := \inf\left\{t: W(t) = a\right\} $, is an almost surely finite stopping time. Then we can apply the strong Markov property to deduce that the relative path subsequent to $\tau_a$, given by $ X_t := W(t + \tau_a) - a $, is also a standard Brownian motion independent of $ \mathcal{F}^W_{\tau_a} $. Then the probability that $W(s)$ is at or above the threshold $a$ somewhere in the time interval $[0,t]$ can be decomposed as - - - -\begin{align} - -\mathbb{P}\left(\sup_{0\leq s\leq t}W(s) \geq a\right) & = \mathbb{P}\left(\sup_{0\leq s\leq t}W(s) \geq a, W(t) \geq a\right) + \mathbb{P}\left(\sup_{0\leq s\leq t}W(s) \geq a, W(t) < a\right)\\ - -& = \mathbb{P}\left(W(t) \geq a\right) + \mathbb{P}\left(\sup_{0\leq s\leq t}W(s) \geq a, X(t-\tau_a) < 0\right)\\ - -\end{align}.
- -By the tower property for conditional expectations, the second term reduces to: - - \begin{align} - -\mathbb{P}\left(\sup_{0\leq s\leq t}W(s) \geq a, X(t-\tau_a) < 0\right) &= - -\mathbb{E}\left[\mathbb{P}\left(\sup_{0\leq s\leq t}W(s) \geq a, X(t-\tau_a) < 0| \mathcal{F}^W_{\tau_a}\right)\right]\\ - -& = \mathbb{E}\left[\chi_{\sup_{0\leq s\leq t}W(s) \geq a} \mathbb{P}\left(X(t-\tau_a) < 0| \mathcal{F}^W_{\tau_a}\right)\right]\\ - -& = \frac{1}{2}\mathbb{P}\left(\sup_{0\leq s\leq t}W(s) \geq a\right) , - -\end{align} - -since $ X(t) $ is a standard Brownian motion independent of $ \mathcal{F}^W_{\tau_a} $ and has probability $ 1/2 $ of being less than $0$. The proof of the lemma is completed by substituting this into the second line of the first equation. - - - -\begin{align} - -\mathbb{P}\left(\sup_{0\leq s\leq t}W(s) \geq a\right) & = \mathbb{P}\left(W(t) \geq a\right) + \frac{1}{2}\mathbb{P}\left(\sup_{0\leq s\leq t}W(s) \geq a\right) \\ - -\mathbb{P}\left(\sup_{0\leq s\leq t}W(s) \geq a\right) &= 2 \mathbb{P}\left(W(t) \geq a\right) - -\end{align}. - -The reflection principle is often used to simplify distributional properties of Brownian motion. Considering Brownian motion on the restricted interval $ (W(t): t \in [0,1]) $, the reflection principle allows us to prove that the location of the maximum $ t_\text{max} $, satisfying $ W(t_\text{max}) = \sup_{0 \leq s \leq 1}W(s) $, has the arcsine distribution. This is one of the Lévy arcsine laws.
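The lemma is also easy to test by simulation (a Monte Carlo sketch of ours; the horizon, threshold and step count are arbitrary, and time discretisation slightly underestimates the supremum):

```
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, t, a = 100_000, 1_000, 1.0, 0.8
dt = t / n_steps

w = np.zeros(n_paths)            # W at the current time, one entry per path
running_max = np.zeros(n_paths)  # running supremum of each path
for _ in range(n_steps):
    w += rng.normal(0.0, np.sqrt(dt), size=n_paths)
    np.maximum(running_max, w, out=running_max)

p_sup = (running_max >= a).mean()   # P(sup_{0<=s<=t} W(s) >= a)
p_end = (w >= a).mean()             # P(W(t) >= a)
print(p_sup, 2 * p_end)             # approximately equal, as the lemma states
```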
diff --git a/wiki/wikipedia/3584.txt b/wiki/wikipedia/3584.txt deleted file mode 100644 index 54fff6a58a985ede5da5f0d411db362dcf8df9a5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3584.txt +++ /dev/null @@ -1,139 +0,0 @@ -Reification is the process by which an abstract idea about a computer program is turned into an explicit data model or other object created in a programming language. A computable/addressable object—a resource—is created in a system as a proxy for a non computable/addressable object. By means of reification, something that was previously implicit, unexpressed, and possibly inexpressible is explicitly formulated and made available to conceptual (logical or computational) manipulation. Informally, reification is often referred to as "making something a first-class citizen" within the scope of a particular system. Some aspect of a system can be reified at language design time, which is related to reflection in programming languages. It can be applied as a stepwise refinement at system design time. Reification is one of the most frequently used techniques of conceptual analysis and knowledge representation. - -In the context of programming languages, reification is the process by which a user program or any aspect of a programming language that was implicit in the translated program and the run-time system is expressed in the language itself. This process makes it available to the program, which can inspect all these aspects as ordinary data. In reflective languages, reification data is causally connected to the related reified aspect such that a modification to one of them affects the other. Therefore, the reification data is always a faithful representation of the related reified aspect. Reification data is often said to be made a first class object. Reification, at least partially, has been present in many languages to date: in early Lisp dialects and in current Prolog dialects, programs have been treated as data, although the causal connection has often been left to the responsibility of the programmer. In Smalltalk-80, the compiler from the source text to bytecode has been part of the run-time system since the very first implementations of the language. - -* The C programming language reifies the low-level detail of memory addresses. Many programming language designs encapsulate the details of memory allocation in the compiler and the run-time system. In the design of the C programming language, the memory address is reified and is available for direct manipulation by other language constructs. For example, the following code may be used when implementing a memory-mapped device driver. The buffer pointer is a proxy for the memory address 0xB800000. - -char* buffer = (char*) 0xB800000; /* treat the fixed device address as an array of bytes */ - -buffer[0] = 10; /* write directly to the memory-mapped region */ - - - -* Functional programming languages based on lambda-calculus reify the concept of a procedure abstraction and procedure application in the form of the Lambda expression. - -* The Scheme programming language reifies continuations (approximately, the call stack). - -* In C#, reification is used to make parametric polymorphism implemented as generics a first-class feature of the language. - -* In the Java programming language, there exist "reifiable types" that are "completely available at run time" (i.e. their information is not erased during compilation). - -* REBOL reifies code as data and vice versa. - -* Many languages, such as Lisp, JavaScript, and Curl, provide an eval or evaluate procedure that effectively reifies the language interpreter. - -* The Logtalk framework for Prolog offers a means to explore reification in the context of logic programming. - -* Smalltalk and Actor languages permit the reification of blocks and messages, which are the equivalents of lambda expressions in Lisp, and thisContext, which is a reification of the currently executing block. - -* Homoiconic languages reify the syntax of the language itself in the form of an abstract syntax tree, typically together with eval. - -Data reification (stepwise refinement) involves finding a more concrete representation of the abstract data types used in a formal specification. - -Data reification is the terminology of the Vienna Development Method (VDM) that most other people would call data refinement. An example is taking a step towards an implementation by replacing a data representation without a counterpart in the intended implementation language, such as sets, by one that does have a counterpart (such as maps with fixed domains that can be implemented by arrays), or at least one that is closer to having a counterpart, such as sequences. The VDM community prefers the word "reification" over "refinement", as the process has more to do with concretising an idea than with refining it. - -For similar usages, see Reification (linguistics). - -Reification is widely used in conceptual modeling. Reifying a relationship means viewing it as an entity. The purpose of reifying a relationship is to make it explicit, when additional information needs to be added to it. Consider the relationship type IsMemberOf(member:Person, Committee). An instance of IsMemberOf is a relationship that represents the fact that a person is a member of a committee. The original article shows an example population of the IsMemberOf relationship in tabular form:
Person P1 is a member of committees C1 and C2. Person P2 is a member of committee C1 only. - -The same fact, however, could also be viewed as an entity. Viewing a relationship as an entity, one can say that the entity reifies the relationship. This is called reification of a relationship. Like any other entity, it must be an instance of an entity type. In the present example, the entity type has been named Membership. For each instance of IsMemberOf, there is one and only one instance of Membership, and vice versa. Now, it becomes possible to add more information to the original relationship. As an example, we can express the fact that "person p1 was nominated to be the member of committee c1 by person p2". Reified relationship Membership can be used as the source of a new relationship IsNominatedBy(Membership, Person). - -For related usages see Reification (knowledge representation). - -(Figure: the UML class diagram for the Membership example.) UML provides an association class construct for defining reified relationship types. The association class is a single model element that is both a kind of association and a kind of a class. The association and the entity type that reifies are both the same model element. Note that attributes cannot be reified. - -In Semantic Web languages, such as Resource Description Framework (RDF) and Web Ontology Language (OWL), a statement is a binary relation. It is used to link two individuals or an individual and a value. Applications sometimes need to describe other RDF statements, for instance, to record information like when statements were made, or who made them, which is sometimes called "provenance" information. As an example, we may want to represent properties of a relation, such as our certainty about it, severity or strength of a relation, relevance of a relation, and so on. - -The example from the conceptual modeling section describes a particular person with URIref person:p1, who is a member of the committee:c1. The RDF triple from that description is - - - -person:p1 committee:isMemberOf committee:c1 . - - - -Consider to store two further facts: (i) to record who nominated this particular person to this committee (a statement about the membership itself), and (ii) to record who added the fact to the database (a statement about the statement). - -The first case is a case of classical reification like above in UML: reify the membership and store its attributes and roles etc.: - - - -committee:Membership rdf:type owl:Class . - -committee:membership12345 rdf:type committee:Membership . - -committee:membership12345 committee:ofPerson person:p1 . - -committee:membership12345 committee:inCommittee committee:c1 . - -person:p2 committee:nominated committee:membership12345 . - - - -Additionally, RDF provides a built-in vocabulary intended for describing RDF statements. A description of a statement using this vocabulary is called a reification of the statement. The RDF reification vocabulary consists of the type rdf:Statement, and the properties rdf:subject, rdf:predicate, and rdf:object. - -Using the reification vocabulary, a reification of the statement about the person's membership would be given by assigning the statement a URIref such as committee:membership12345 so that describing statements can be written as follows: - - - -committee:membership12345Stat rdf:type rdf:Statement . - -committee:membership12345Stat rdf:subject person:p1 . - -committee:membership12345Stat rdf:predicate committee:isMemberOf .
- -committee:membership12345Stat rdf:object committee:c1 . - - - -These statements say that the resource identified by the URIref committee:membership12345Stat is an RDF statement, that the subject of the statement refers to the resource identified by person:p1, the predicate of the statement refers to the resource identified by committee:isMemberOf, and the object of the statement refers to the resource committee:c1. Assuming that the original statement is actually identified by committee:membership12345, it should be clear by comparing the original statement with the reification that the reification actually does describe it. The conventional use of the RDF reification vocabulary always involves describing a statement using four statements in this pattern. Therefore, they are sometimes referred to as the "reification quad". - -Using reification according to this convention, we could record the fact that person:p3 added the statement to the - -database by - - - -person:p3 committee:addedToDatabase committee:membership12345Stat . - - - -It is important to note that in the conventional use of reification, the subject of the reification triples is assumed to identify a particular instance of a triple in a particular RDF document, rather than some arbitrary triple having the same subject, predicate, and object. This particular convention is used because reification is intended for expressing properties such as dates of composition and source information, as in the examples given already, and these properties need to be applied to specific instances of triples. - -Note that the described triple (subject predicate object) itself is not implied by such a reification quad (and it is not necessary that it actually exists in the database). This allows also to use this mechanism to express which triples do not hold. - -The power of the reification vocabulary in RDF is restricted by the lack of a built-in means for assigning URIrefs to statements, so in order to express "provenance" information of this kind in RDF, one has to use some mechanism (outside of RDF) to assign URIs to individual RDF statements, then make further statements about those individual statements, using their URIs to identify them. - -In an XML Topic Map (XTM), only a topic can have a name or play a role in an association. One may use an association to make an assertion about a topic, but one cannot directly make assertions about that assertion. However, it is possible to create a topic that reifies a non-topic construct in a map, thus enabling the association to be named and treated as a topic itself. - -In Semantic Web languages, such as RDF and OWL, a property is a binary relation used to link two individuals or an individual and a value. However, in some cases, the natural and convenient way to represent certain concepts is to use relations to link an individual to more than just one individual or value. These relations are called n-ary relations. Examples are representing relations among multiple individuals, such as a committee, a person who is a committee member and another person who has nominated the first person to become the committee member, or a buyer, a seller, and an object that was bought when describing a purchase of a book. - -A more general approach to reification is to create an explicit new class and n new properties to represent an n-ary relation, making an instance of the relation linking the n individuals an instance of this class. 
In an XML Topic Map (XTM), only a topic can have a name or play a role in an association. One may use an association to make an assertion about a topic, but one cannot directly make assertions about that assertion. However, it is possible to create a topic that reifies a non-topic construct in a map, thus enabling the association to be named and treated as a topic itself. - -In Semantic Web languages, such as RDF and OWL, a property is a binary relation used to link two individuals or an individual and a value. However, in some cases, the natural and convenient way to represent certain concepts is to use relations to link an individual to more than just one individual or value. These relations are called n-ary relations. Examples include relations among multiple individuals, such as a committee, a person who is a committee member and another person who has nominated the first person to become the committee member, or a buyer, a seller, and an object that was bought when describing a purchase of a book. - -A more general approach to reification is to create an explicit new class and n new properties to represent an n-ary relation, making an instance of the relation linking the n individuals an instance of this class. This approach can also be used to represent provenance information and other properties for an individual relation instance. - - - -:p1 - -a :Person ; - -:has_membership _:membership_12345 . - -_:membership_12345 - -a :Membership ; - -:committee :c1; - -:nominated_by :p2 . - - - -It is also important to note that the reification described here is not the same as "quotation" found in other languages. Instead, the reification describes the relationship between a particular instance of a triple and the resources the triple refers to. The reification can be read intuitively as saying "this RDF triple talks about these things", rather than (as in quotation) "this RDF triple has this form." For instance, in the reification example used in this section, the triple: - - - -committee:membership12345 rdf:subject person:p1 . - - - -describing the rdf:subject of the original statement says that the subject of the statement is the resource (the person) identified by the URIref person:p1. It does not state that the subject of the statement is the URIref itself (i.e., a string beginning with certain characters), as quotation would. diff --git a/wiki/wikipedia/3585.txt b/wiki/wikipedia/3585.txt deleted file mode 100644 index fdff6abb7cd233d840bf57548ef265ade9dce17c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3585.txt +++ /dev/null @@ -1,52 +0,0 @@ -In mathematics, the Poincaré inequality is a result in the theory of Sobolev spaces, named after the French mathematician Henri Poincaré. The inequality allows one to obtain bounds on a function using bounds on its derivatives and the geometry of its domain of definition. Such bounds are of great importance in the modern, direct methods of the calculus of variations. A very closely related result is Friedrichs' inequality. - -Let 1 ≤ p < ∞ and let Ω be a subset bounded in at least one direction. Then there exists a constant C, depending only on Ω and p, so that, for every function u of the Sobolev space $W_0^{1,p}(\Omega)$ of zero-trace (a.k.a. zero on the boundary) functions, -$$ -\| u \|_{L^{p} (\Omega)} \leq C \| \nabla u \|_{L^{p} (\Omega)}. -$$ - -Assume that 1 ≤ p ≤ ∞ and that Ω is a bounded connected open subset of the n-dimensional Euclidean space $\mathbf{R}^n$ with a Lipschitz boundary (i.e., Ω is a Lipschitz domain). Then there exists a constant C, depending only on Ω and p, such that for every function u in the Sobolev space $W^{1,p}(\Omega)$, -$$ -\| u - u_{\Omega} \|_{L^{p} (\Omega)} \leq C \| \nabla u \|_{L^{p} (\Omega)}, -$$ - -where -$$ -u_{\Omega} = \frac{1}{|\Omega|} \int_{\Omega} u(y) \mathrm{d} y -$$ - -is the average value of u over Ω, with |Ω| standing for the Lebesgue measure of the domain Ω. When Ω is a ball, the above inequality is called a (p,p)-Poincaré inequality; for more general domains Ω, the above is more familiarly known as a Sobolev inequality. - -In the context of metric measure spaces (for example, sub-Riemannian manifolds), such spaces support a (q,p)-Poincaré inequality for some $1\le q,p<\infty$ if there are constants C and $\lambda\ge 1$ so that for each ball B in the space, -$$ -\mu(B)^{-\frac{1}{q}} \left \|u-u_B \right \|_{L^q(B)}\le C \operatorname{rad}(B) \mu(B)^{-\frac{1}{p}} \| \nabla u\|_{L^p(\lambda B)}. -$$ - -In the context of metric measure spaces, $|\nabla u|$ is the minimal p-weak upper gradient of u in the sense of Heinonen and Koskela [J. Heinonen and P. Koskela, Quasiconformal maps in metric spaces with controlled geometry, Acta Math.
181 (1998), 1–61]. - -There exist other generalizations of the Poincaré inequality to other Sobolev spaces. For example, the following (taken from Garroni) is a Poincaré inequality for the Sobolev space $H^{1/2}(\mathbf{T}^2)$, i.e. the space of functions u in the $L^2$ space of the unit torus $\mathbf{T}^2$ with Fourier transform û satisfying -$$ -[u]_{H^{1/2} (\mathbf{T}^{2})}^2 = \sum_{k \in \mathbf{Z}^2} | k | \left | \hat{u} (k) \right |^2 < + \infty: -$$ - -there exists a constant C such that, for every $u \in H^{1/2}(\mathbf{T}^2)$ with u identically zero on an open set $E \subseteq \mathbf{T}^2$, -$$ -\int_{\mathbf{T}^2} | u(x) |^2 \mathrm{d} x \leq C \left( 1 + \frac1{\operatorname{cap} (E \times \{ 0 \})} \right) [ u ]_{H^{1/2} (\mathbf{T}^2)}^2, -$$ - -where cap(E × {0}) denotes the harmonic capacity of E × {0} when thought of as a subset of $\mathbf{R}^3$. - -The optimal constant C in the Poincaré inequality is sometimes known as the Poincaré constant for the domain Ω. Determining the Poincaré constant is, in general, a very hard task that depends upon the value of p and the geometry of the domain Ω. Certain special cases are tractable, however. For example, if Ω is a bounded, convex, Lipschitz domain with diameter d, then the Poincaré constant is at most d/2 for p = 1 and $d/\pi$ for p = 2, and this is the best possible estimate on the Poincaré constant in terms of the diameter alone. For smooth functions, this can be understood as an application of the isoperimetric inequality to the function's level sets. In one dimension, this is Wirtinger's inequality for functions. - -However, in some special cases the constant C can be determined concretely. For example, for p = 2, it is well known that over the domain of the unit isosceles right triangle, C = 1/π ( < d/π where $d=\sqrt{2}$). (See, for instance, Kikuchi.) - -Furthermore, for a smooth, bounded domain $\Omega$, since the Rayleigh quotient for the Laplace operator in the space $W^{1,2}_0(\Omega)$ is minimized by the eigenfunction corresponding to the minimal eigenvalue $\lambda_1$ of the (negative) Laplacian, it is a simple consequence that, for any $u\in W^{1,2}_0(\Omega)$, -$$ - \|u\|_{L^2}^2\leq \lambda_1^{-1} \left \|\nabla u\right \|_{L^2}^2 -$$ - -and furthermore, that the constant $\lambda_1^{-1}$ is optimal. diff --git a/wiki/wikipedia/3586.txt b/wiki/wikipedia/3586.txt deleted file mode 100644 index 52925a3cc8f4f172c8c95f7261e43b679d2a9c22..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3586.txt +++ /dev/null @@ -1,173 +0,0 @@ -In mathematics, the Bussgang theorem is a theorem of stochastic analysis. The theorem states that the cross-correlations of a Gaussian signal before and after it has passed through a nonlinear operation are equal up to a constant. It was first published by Julian J. Bussgang in 1952 while he was at the Massachusetts Institute of Technology. - -Let $ \left\{X(t)\right\} $ be a zero-mean stationary Gaussian random process and $ \left \{ Y(t) \right\} = g(X(t)) $ where $ g(\cdot) $ is a nonlinear amplitude distortion. - -If $ R_X(\tau) $ is the autocorrelation function of $ \left\{ X(t) \right\}$, then the cross-correlation function of $ \left\{ X(t) \right\}$ and $ \left\{ Y(t) \right\}$ is -$$ - R_{XY}(\tau) = CR_X(\tau), -$$ - -where $C$ is a constant that depends only on $ g(\cdot) $. - -It can be further shown that -$$ - C = \frac{1}{\sigma^3\sqrt{2\pi}}\int_{-\infty}^\infty ug(u)e^{-\frac{u^2}{2\sigma^2}} du.
-$$ - -It is a property of the two-dimensional normal distribution that the joint density of $ y_1 $ and $y_2$ depends only on their covariance and is given explicitly by the expression -$$ - p(y_1,y_2) = \frac{1}{2 \pi \sqrt{1-\rho^2}} e^{-\frac{y_1^2 + y_2^2 - 2 \rho y_1 y_2}{2(1-\rho^2)}} -$$ - -where $ y_1 $ and $ y_2 $ are standard Gaussian random variables with correlation $ \phi_{y_1y_2}=\rho $. - -Assume that $ r_2 = Q(y_2) $; the correlation between $ y_1 $ and $ r_2 $ is then -$$ - \phi_{y_1r_2} = \frac{1}{2 \pi \sqrt{1-\rho^2}} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} y_1 Q(y_2) e^{-\frac{y_1^2 + y_2^2 - 2 \rho y_1 y_2}{2(1-\rho^2)}} dy_1 dy_2 -$$. - -Since -$$ - \int_{-\infty}^{\infty} y_1 e^{-\frac{1}{2(1-\rho^2)} y_1^2 + \frac{\rho y_2}{1-\rho^2} y_1 } dy_1 = \rho \sqrt{2 \pi (1-\rho^2)} y_2 e^{ \frac{\rho^2 y_2^2}{2(1-\rho^2)} } -$$, - -the correlation $\phi_{y_1 r_2}$ may be simplified as -$$ - \phi_{y_1 r_2} = \frac{\rho}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} y_2 Q(y_2) e^{-\frac{y_2^2}{2}} dy_2 -$$. - -The integral above is seen to depend only on the distortion characteristic $Q(\cdot)$ and is independent of $\rho$. - -Remembering that $\rho=\phi_{y_1 y_2}$, we observe that for a given distortion characteristic $Q(\cdot)$, the ratio $\frac{\phi_{y_1 r_2}}{\phi_{y_1 y_2}}$ is $K_Q=\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} y_2 Q(y_2) e^{-\frac{y_2^2}{2}} dy_2$. - -Therefore, the correlation can be rewritten in the form
    $\phi_{y_1 r_2} = K_Q \phi_{y_1 y_2}$.
The above equation is the mathematical expression of Bussgang's theorem as stated above. - -If $Q(x) = \text{sign}(x)$ (also called one-bit quantization), then $K_Q= \frac{2}{\sqrt{2\pi}} \int_{0}^{\infty} y_2 e^{-\frac{y_2^2}{2}} dy_2 = \sqrt{\frac{2}{\pi}}$. - -If the two random variables are both distorted, i.e., $r_1 = Q(y_1)$, $r_2 = Q(y_2)$, the correlation of $r_1$ and $r_2$ is
    $\phi_{r_1 r_2}=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} Q(y_1) Q(y_2) p(y_1, y_2) dy_1 dy_2$.
When $Q(x) = \text{sign}(x)$, the expression becomes
    $\phi_{r_1 r_2}=\frac{1}{2\pi \sqrt{1-\rho^2}} \left[ \int_{0}^{\infty} \int_{0}^{\infty} e^{-\alpha} dy_1 dy_2 + \int_{-\infty}^{0} \int_{-\infty}^{0} e^{-\alpha} dy_1 dy_2 - \int_{0}^{\infty} \int_{-\infty}^{0} e^{-\alpha} dy_1 dy_2 - \int_{-\infty}^{0} \int_{0}^{\infty} e^{-\alpha} dy_1 dy_2 \right]$
where $\alpha = \frac{y_1^2 + y_2^2 - 2\rho y_1 y_2}{2 (1-\rho^2)}$. - -Noticing that - -$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} p(y_1,y_2) dy_1 dy_2 = \frac{1}{2\pi \sqrt{1-\rho^2}} \left[ \int_{0}^{\infty} \int_{0}^{\infty} e^{-\alpha} dy_1 dy_2 + \int_{-\infty}^{0} \int_{-\infty}^{0} e^{-\alpha} dy_1 dy_2 + \int_{0}^{\infty} \int_{-\infty}^{0} e^{-\alpha} dy_1 dy_2 + \int_{-\infty}^{0} \int_{0}^{\infty} e^{-\alpha} dy_1 dy_2 \right]=1$, - -and $\int_{0}^{\infty} \int_{0}^{\infty} e^{-\alpha} dy_1 dy_2 = \int_{-\infty}^{0} \int_{-\infty}^{0} e^{-\alpha} dy_1 dy_2$, $\int_{0}^{\infty} \int_{-\infty}^{0} e^{-\alpha} dy_1 dy_2 = \int_{-\infty}^{0} \int_{0}^{\infty} e^{-\alpha} dy_1 dy_2$, - -we can simplify the expression of $\phi_{r_1r_2}$ as
    $\phi_{r_1 r_2}=\frac{4}{2\pi \sqrt{1-\rho^2}} \int_{0}^{\infty} \int_{0}^{\infty} e^{-\alpha} dy_1 dy_2-1 $
Also, it is convenient to introduce polar coordinates $y_1 = R \cos \theta$, $y_2 = R \sin \theta$. It is thus found that - -$\phi_{r_1 r_2} =\frac{4}{2\pi \sqrt{1-\rho^2}} \int_{0}^{\pi/2} \int_{0}^{\infty} e^{-\frac{R^2 - 2R^2 \rho \cos \theta \sin \theta}{2(1-\rho^2)}} R dR d\theta-1=\frac{4}{2\pi \sqrt{1-\rho^2}} \int_{0}^{\pi/2} \int_{0}^{\infty} e^{-\frac{R^2 (1-\rho \sin 2\theta )}{2(1-\rho^2)}} R dR d\theta -1$. - -Integration gives
    \phi_{r_1 r_2}=\frac{2\sqrt{1-\rho^2}}{\pi} \int_{0}^{\pi/2} \frac{d\theta}{1-\rho \sin 2\theta} - 1= - \frac{2}{\pi} \arctan \left( \frac{\rho-\tan\theta} {\sqrt{1-\rho^2}} \right) \Bigg|_{0}^{\pi/2} -1 =\frac{2}{\pi} \arcsin(\rho) - -
    This is called "Arcsine law", which was first found by J. H. Van Vleck in 1943 and republished in 1966. - -The function $ f(x)=\frac{2}{\pi} \arcsin x $ can be approximated as $ f(x) \approx \frac{2}{\pi} x $ when $ x $ is small. - -Given two jointly normal random variables $y_1$ and $y_2$ with joint probability function
Given two jointly normal random variables $y_1$ and $y_2$ with joint probability function - -$p(y_1,y_2)=\frac{1}{2\pi \sqrt{1-\rho^2}}e^{-\frac{y_1^2+y_2^2-2\rho y_1 y_2}{2(1-\rho^2)}}$,
    we form the mean
    $I(\rho)=E(g(y_1,y_2))=\int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} g(y_1, y_2) p(y_1, y_2) dy_1 dy_2$
of some function $g(y_1,y_2)$ of $(y_1, y_2)$. If $g(y_1, y_2) p(y_1, y_2) \rightarrow 0$ as $(y_1, y_2) \rightarrow \infty$, then
\frac{\partial^n I(\rho)}{\partial \rho^n}=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \frac{\partial ^{2n} g(y_1, y_2)}{\partial y_1^n \partial y_2^n} p(y_1, y_2) dy_1 dy_2 =E \left(\frac{\partial ^{2n} g(y_1, y_2)}{\partial y_1^n \partial y_2^n} \right).
    Proof. The joint characteristic function of the random variables $y_1$ and $y_2$ is by definition the integral
\Phi(\omega_1, \omega_2)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} p(y_1, y_2) e^{j (\omega_1 y_1 + \omega_2 y_2 )} dy_1 dy_2 = \exp \left\{-\frac{\omega_1^2 + \omega_2^2 + 2\rho \omega_1 \omega_2}{2} \right\}.
From the two-dimensional inversion formula of the Fourier transform, it follows that
p(y_1, y_2) = \frac{1}{4 \pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \Phi(\omega_1, \omega_2) e^{-j (\omega_1 y_1 + \omega_2 y_2)} d\omega_1 d\omega_2 =\frac{1}{4 \pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \exp \left\{-\frac{\omega_1^2 + \omega_2^2 + 2\rho \omega_1 \omega_2}{2} \right\} e^{-j (\omega_1 y_1 + \omega_2 y_2)} d\omega_1 d\omega_2.
Therefore, plugging the expression of $p(y_1, y_2)$ into $I(\rho)$ and differentiating $n$ times with respect to $\rho$, we obtain
\begin{align}
\frac{\partial^n I(\rho)}{\partial \rho^n} & = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(y_1, y_2) \frac{\partial^n p(y_1, y_2)}{\partial \rho^n} dy_1 dy_2 \\
& = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(y_1, y_2) \left(\frac{1}{4 \pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \frac{\partial^ {n}\Phi(\omega_1, \omega_2)}{\partial \rho^n} e^{-j(\omega_1 y_1 + \omega_2 y_2)} d\omega_1 d\omega_2 \right) dy_1 dy_2 \\
& = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(y_1, y_2) \left(\frac{(-1)^n}{4 \pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \omega_1^n \omega_2^n \Phi(\omega_1, \omega_2) e^{-j(\omega_1 y_1 + \omega_2 y_2)} d\omega_1 d\omega_2 \right) dy_1 dy_2 \\
& = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(y_1, y_2) \left(\frac{1}{4 \pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \Phi(\omega_1, \omega_2) \frac{\partial^{2n} e^{-j(\omega_1 y_1 + \omega_2 y_2)}}{\partial y_1^n \partial y_2^n} d\omega_1 d\omega_2 \right) dy_1 dy_2 \\
& = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(y_1, y_2) \frac{\partial^{2n} p(y_1, y_2)}{\partial y_1^n \partial y_2^n} dy_1 dy_2
\end{align}
After repeated integration by parts and using the condition at $\infty$, we obtain Price's theorem:
\begin{align}
\frac{\partial^n I(\rho)}{\partial \rho^n} & = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(y_1, y_2) \frac{\partial^{2n} p(y_1, y_2)}{\partial y_1^n \partial y_2^n} dy_1 dy_2 \\
& = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \frac{\partial^{2} g(y_1, y_2)}{\partial y_1 \partial y_2} \frac{\partial^{2n-2} p(y_1, y_2)}{\partial y_1^{n-1} \partial y_2^{n-1}} dy_1 dy_2 \\
&=\cdots \\
&=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \frac{\partial ^{2n} g(y_1, y_2)}{\partial y_1^n \partial y_2^n} p(y_1, y_2) dy_1 dy_2
\end{align}
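Before applying the theorem, here is a quick numerical sanity check for $n=1$ (a sketch assuming only numpy). Take $g(y_1,y_2)=y_1^2 y_2^2$: Price's theorem predicts $\partial I/\partial \rho = E[\partial^2 g/\partial y_1 \partial y_2] = E[4 y_1 y_2] = 4\rho$, consistent with the closed form $I(\rho) = E[y_1^2 y_2^2] = 1 + 2\rho^2$. The derivative can be estimated by a central finite difference with common random numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho, h = 1_000_000, 0.3, 0.01
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)

def I(r):
    # Monte Carlo estimate of E[y1^2 * y2^2], where (y1, y2) are standard
    # bivariate normal with correlation r (common random numbers z1, z2).
    y1, y2 = z1, r * z1 + np.sqrt(1 - r**2) * z2
    return np.mean(y1**2 * y2**2)

# Central difference dI/drho versus the prediction E[4*y1*y2] = 4*rho.
print((I(rho + h) - I(rho - h)) / (2 * h), 4 * rho)
```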
- -If $g(y_1, y_2) = \text{sign}(y_1) \text{sign} (y_2)$, then $\frac{\partial^2 g(y_1, y_2)}{\partial y_1 \partial y_2} = 4 \delta(y_1) \delta(y_2)$, where $\delta(\cdot)$ is the Dirac delta function. - -Substituting into Price's theorem, we obtain
\frac{\partial E(\text{sign} (y_1) \text{sign}(y_2))}{\partial \rho} = \frac{\partial I(\rho)}{\partial \rho}= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} 4 \delta(y_1) \delta(y_2) p(y_1, y_2) dy_1 dy_2=\frac{2}{\pi \sqrt{1-\rho^2}}.
    When $\rho=0$, $I(\rho)=0$. Thus
$E \left(\text{sign}(y_1) \text{sign}(y_2) \right) = I(\rho)=\frac{2}{\pi} \int_{0}^{\rho} \frac{dt}{\sqrt{1-t^2}}=\frac{2}{\pi} \arcsin(\rho)$,
which is Van Vleck's well-known "arcsine law" result. - -This theorem implies that a simplified correlator can be designed. Instead of having to multiply two signals, the cross-correlation problem reduces to the gating of one signal with another. diff --git a/wiki/wikipedia/3587.txt b/wiki/wikipedia/3587.txt deleted file mode 100644 index 3eea0789847cfdfddbc366e8239d4b4038e48a9e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3587.txt +++ /dev/null @@ -1,19 +0,0 @@ -In computer science, double pushout graph rewriting (or DPO graph rewriting) refers to a mathematical framework for graph rewriting. It was introduced as one of the first algebraic approaches to graph rewriting in the article "Graph-grammars: An algebraic approach" (1973). It has since been generalized to allow rewriting structures which are not graphs, and to handle negative application conditions, among other extensions. - -A DPO graph transformation system (or graph grammar) consists of a finite graph, which is the starting state, and a finite or countable set of labeled spans in the category of finite graphs and graph homomorphisms, which serve as derivation rules. The rule spans are generally taken to be composed of monomorphisms, but the details can vary. - -Rewriting is performed in two steps: deletion and addition. - -After a match from the left-hand side of a rule into the host graph $G$ is fixed, nodes and edges that are not in the right-hand side are deleted. The right-hand side is then glued in. - -Gluing graphs is in fact a pushout construction in the category of graphs, and the deletion is the same as finding a pushout complement, hence the name. - -Double pushout graph rewriting allows the specification of graph transformations by specifying a pattern of fixed size and composition to be found and replaced, where part of the pattern can be preserved. The application of a rule is potentially non-deterministic: several distinct matches can be possible. These can be non-overlapping, or share only preserved items, thus showing a kind of concurrency known as parallel independence, or they may be incompatible, in which case either the applications can sometimes be executed sequentially, or one can even preclude the other. - -It can be used as a language for software design and programming (usually a variant working on richer structures than graphs is chosen). Termination for DPO graph rewriting is undecidable because the Post correspondence problem can be reduced to it. - -DPO graph rewriting can be viewed as a generalization of Petri nets. - -The concepts of adhesive category and HLR system are related (an adhesive category with coproducts is an HLR system). - -Hypergraph, typed graph and attributed graph rewriting, for example, can be handled because they can be cast as adhesive HLR systems. diff --git a/wiki/wikipedia/3588.txt b/wiki/wikipedia/3588.txt deleted file mode 100644 index c84f9a4811957e7ebcb3cc8955b6f4228d5b4ab5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3588.txt +++ /dev/null @@ -1,45 +0,0 @@ -The expander mixing lemma intuitively states that the edges of certain $d$-regular graphs are evenly distributed throughout the graph. In particular, the number of edges between two vertex subsets $S$ and $T$ is always close to the expected number of edges between them in a random $d$-regular graph, namely $\frac dn|S||T|$.
- -Define an $(n, d, \lambda)$-graph to be a $d$-regular graph $G$ on $n$ vertices such that all of the eigenvalues of its adjacency matrix $A_G$ except one have absolute value at most $\lambda.$ The $d$-regularity of the graph guarantees that its largest absolute value of an eigenvalue is $d.$ In fact, the all-1's vector $\mathbf1$ is an eigenvector of $A_G$ with eigenvalue $d$, and the eigenvalues of the adjacency matrix will never exceed the maximum degree of $G$ in absolute value. - -If we fix $d$ and $\lambda$ then $(n, d, \lambda)$-graphs form a family of expander graphs with a constant spectral gap. - -Let $G = (V, E)$ be an $(n, d, \lambda)$-graph. For any two subsets $S, T \subseteq V$, let $e(S, T) = |\{(x,y) \in S \times T : xy \in E(G)\}|$ be the number of edges between S and T (counting edges contained in the intersection of S and T twice). Then -$$ -\left|e(S, T) - \frac{d |S| |T|}{n}\right| \leq \lambda \sqrt{|S||T|\left(1-\frac{|S|}{n}\right)\left(1-\frac{|T|}{n}\right)} \leq \lambda \sqrt{|S||T|} -$$. - -To show the tighter bound above, we instead consider the vectors $1_S-\frac{|S|}{n}\mathbf 1$ and $1_T-\frac{|T|}{n}\mathbf 1$, which are both perpendicular to the all-ones eigenvector $\mathbf 1$. We can expand -$$ -1_S^\operatorname{T}A_G1_T=\left(\frac{|S|}{n}\mathbf 1\right)^\operatorname{T}A_G\left(\frac{|T|}{n}\mathbf 1\right)+\left(1_S-\frac{|S|}{n}\mathbf 1\right)^\operatorname{T}A_G\left(1_T-\frac{|T|}{n}\mathbf 1\right) -$$ - -because the other two terms of the expansion are zero. The first term is equal to $\frac{|S||T|}{n^2}\mathbf 1^\operatorname{T}A_G\mathbf 1=\frac dn|S||T|$, so we find that -$$ -\left|e(S,T)-\frac dn|S||T|\right| \leq\left|\left(1_S-\frac{|S|}{n}\mathbf 1\right)^\operatorname{T}A_G\left(1_T-\frac{|T|}{n}\mathbf 1\right)\right| -$$ - -We can bound the right-hand side by $\lambda\left\|1_S-\frac{|S|}{n}\mathbf 1\right\|\left\|1_T-\frac{|T|}{n}\mathbf 1\right\| =\lambda\sqrt{|S||T|\left(1-\frac{|S|}{n}\right)\left(1-\frac{|T|}{n}\right)}$ using the same methods as in the earlier proof. - -The expander mixing lemma can be used to upper bound the size of an independent set within a graph. In particular, the size of an independent set in an $(n, d, \lambda)$-graph is at most $\lambda n/d.$ This is proved by letting $T=S$ in the statement above and using the fact that $e(S,S)=0.$ - -An additional consequence is that, if $G$ is an $(n, d, \lambda)$-graph, then its chromatic number $\chi(G)$ is at least $d/\lambda.$ This is because, in a valid graph coloring, the set of vertices of a given color is an independent set. By the above fact, each independent set has size at most $\lambda n/d,$ so at least $d/\lambda$ such sets are needed to cover all of the vertices. - -A second application of the expander mixing lemma is to provide an upper bound on the maximum possible size of an independent set within a polarity graph. Given a finite projective plane $\pi$ with a polarity $\perp,$ the polarity graph is a graph where the vertices are the points of $\pi$, and vertices $x$ and $y$ are connected if and only if $x\in y^{\perp}.$ In particular, if $\pi$ has order $q,$ then the expander mixing lemma can show that an independent set in the polarity graph can have size at most $q^{3/2} - q + 2q^{1/2} - 1,$ a bound proved by Hobart and Williford. - -Bilu and Linial showed that a converse holds as well: if a $d$-regular graph $G = (V, E)$ satisfies that for any two subsets $S, T \subseteq V$ with $S \cap T = \emptyset$ we have -$$ -\left|e(S, T) - \frac{d |S| |T|}{n}\right| \leq \lambda \sqrt{|S||T|}, -$$ - -then its second-largest (in absolute value) eigenvalue is bounded by $O(\lambda (1+\log(d/\lambda)))$.
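As a quick numerical sanity check of the forward direction of the lemma (a sketch assuming the networkx and numpy libraries), one can take the Petersen graph, whose adjacency spectrum is $\{3, 1, -2\}$, as a concrete $(10, 3, 2)$-graph and test the bound $|e(S,T) - d|S||T|/n| \leq \lambda\sqrt{|S||T|}$ on random vertex subsets:

```python
import networkx as nx
import numpy as np

# The Petersen graph is 3-regular on 10 vertices with second-largest
# absolute eigenvalue 2, i.e. a (10, 3, 2)-graph.
G = nx.petersen_graph()
A = nx.to_numpy_array(G)
n, d = A.shape[0], 3
lam = np.sort(np.abs(np.linalg.eigvalsh(A)))[-2]  # = 2.0

rng = np.random.default_rng(0)
for _ in range(5):
    S = rng.choice(n, size=rng.integers(1, n), replace=False)
    T = rng.choice(n, size=rng.integers(1, n), replace=False)
    # e(S, T) counts ordered pairs, so edges inside S and T count twice.
    e_ST = A[np.ix_(S, T)].sum()
    bound = lam * np.sqrt(len(S) * len(T))
    print(abs(e_ST - d * len(S) * len(T) / n) <= bound + 1e-9)  # always True
```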
- -Friedman and Wigderson proved the following generalization of the mixing lemma to hypergraphs. - -Let $H$ be a $k$-uniform hypergraph, i.e. a hypergraph in which every "edge" is a tuple of $k$ vertices. For any choice of subsets $V_1, \ldots, V_k$ of vertices, -$$ -\left| e(V_1,\ldots,V_k) - \frac{k!|E(H)|}{n^k}|V_1|\cdots|V_k| \right| \le \lambda_2(H)\sqrt{|V_1|\cdots|V_k|}. -$$ diff --git a/wiki/wikipedia/3589.txt b/wiki/wikipedia/3589.txt deleted file mode 100644 index 4f924b64caf41dafc6c7239f7f59061f0359fc87..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3589.txt +++ /dev/null @@ -1,378 +0,0 @@ - - -In classical mechanics, the shell theorem gives gravitational simplifications that can be applied to objects inside or outside a spherically symmetrical body. This theorem has particular application to astronomy. - -Isaac Newton proved the shell theorem and stated that: - -1. A spherically symmetric body affects external objects gravitationally as though all of its mass were concentrated at a point at its center. - -2. If the body is a spherically symmetric shell (i.e., a hollow ball), no net gravitational force is exerted by the shell on any object inside, regardless of the object's location within the shell. - -A corollary is that inside a solid sphere of constant density, the gravitational force within the object varies linearly with distance from the center, becoming zero by symmetry at the center of mass. This can be seen as follows: take a point within such a sphere, at a distance $r$ from the center of the sphere. Then you can ignore all the shells of greater radius, according to the shell theorem. So, the remaining mass $m$ is proportional to $r^3$ (because it is based on volume), and the gravitational force exerted on it is proportional to $\frac{m}{r^2}$ (the inverse square law), so the overall gravitational effect is proportional to $\frac{r^3}{r^2} =r$, and so is linear in $r$. - -These results were important to Newton's analysis of planetary motion; they are not immediately obvious, but they can be proven with calculus. (Alternatively, Gauss's law for gravity offers a much simpler way to prove the same results.) - -In addition to gravity, the shell theorem can also be used to describe the electric field generated by a static spherically symmetric charge density, or similarly for any other phenomenon that follows an inverse square law. The derivations below focus on gravity, but the results can easily be generalized to the electrostatic force. - -There are three steps to proving Newton's shell theorem. First, the equation for a gravitational field due to a ring of mass will be derived. Arranging an infinite number of infinitely thin rings to make a disc, this equation involving a ring will be used to find the gravitational field due to a disc. Finally, arranging an infinite number of infinitely thin discs to make a sphere, this equation involving a disc will be used to find the gravitational field due to a sphere. - -The gravitational field $E$ at a position called $P$ at $(x,y)=(-p,0)$ on the x-axis due to a point mass $M$ at the origin is -$$ -E_\text{point}=\frac{GM}{p^2} -$$ - -
- -Suppose that this mass is moved upwards along the y-axis to the point $(0,R)$. The distance between $P$ and the point mass is now longer than before; it becomes the hypotenuse of the right triangle with legs $p$ and $R$, which is $\sqrt{p^2+R^2}$. Hence, the gravitational field of the elevated point is: -$$ -E_\text{elevated point}=\frac{GM}{p^2+R^2} -$$ - -
- -The magnitude of the gravitational field that would pull a particle at point $P$ in the x-direction is the gravitational field multiplied by $\cos(\theta)$, where $\theta$ is the angle adjacent to the x-axis. In this case, $\cos(\theta)=\frac{p}{\sqrt{p^2+R^2}}$. Hence, the magnitude of the gravitational field in the x-direction, $E_x$, is: -$$ -E_x=\frac{GM\cos{\theta}}{p^2+R^2} -$$ - -Substituting in $\cos(\theta)$ gives -$$ -E_x=\frac{GMp}{\left(p^2+R^2\right)^{3/2}} -$$ - -Suppose that this mass is evenly distributed in a ring centered at the origin and facing point $P$ with the same radius $R$. Because all of the mass is located at the same angle with respect to the x-axis, and the distance between $P$ and each point on the ring is the same as before, the gravitational field in the x-direction at point $P$ due to the ring is the same as that of a point mass located at the point $(0,R)$: -$$ -E_\text{ring}=\frac{GMp}{\left(p^2+R^2\right)^{3/2}} -$$ - -
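The ring formula can be checked directly against a brute-force sum over point masses (a sketch assuming only numpy; the unit choices $G = M = 1$ are arbitrary):

```python
import numpy as np

G = M = 1.0
p, R, N = 2.0, 1.0, 100_000

# Place N equal point masses uniformly around the ring x = 0, y^2 + z^2 = R^2,
# and sum the x-components of their pulls on a test point at (-p, 0, 0).
phi = 2 * np.pi * np.arange(N) / N
y, z = R * np.cos(phi), R * np.sin(phi)
dx, dy, dz = 0.0 - (-p), y - 0.0, z - 0.0  # vector from test point to each mass
s3 = (dx**2 + dy**2 + dz**2) ** 1.5
E_sum = np.sum(G * (M / N) * dx / s3)

E_ring = G * M * p / (p**2 + R**2) ** 1.5
print(E_sum, E_ring)  # the two values agree
```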
- -To find the gravitational field at point $P$ due to a disc, an infinite number of infinitely thin rings facing $P$, each with a radius $y$, width of $dy$, and mass of $dM$, may be placed inside one another to form a disc. The mass of any one of the rings $dM$ is the mass of the disc multiplied by the ratio of the area of the ring $2\pi ydy$ to the total area of the disc $\pi R^2$. So, $dM=\frac{M\cdot 2ydy}{R^2}$. Hence, the small contribution $dE$ to the gravitational field is: -$$ -dE=\frac{GpdM}{(p^2+y^2)^{3/2}} -$$ - -
- -Substituting in $dM$ and integrating both sides gives the gravitational field of the disc: -$$ -E=\int \frac{GMp\cdot \frac{2y dy}{R^2}}{(p^2+y^2)^{3/2}} -$$ - -Adding up the contribution to the gravitational field from each of these rings will yield the expression for the gravitational field due to a disc. This is equivalent to integrating the above expression from $y=0$ to $y=R$, resulting in: -$$ -E_\text{disc}=\frac{2GM}{R^2} \left( 1-\frac{p}{\sqrt{p^2+R^2}}\right) -$$ - -To find the gravitational field at point $P$ due to a sphere centered at the origin, an infinite number of infinitely thin discs facing $P$, each with a radius $R$, width of $dx$, and mass of $dM$, may be placed together. - -These discs' radii $R$ follow the height of the cross section of a sphere (with constant radius $a$), which is an equation of a semi-circle: $R=\sqrt{a^2-x^2}$. Here $x$ varies from $-a$ to $a$. - -The mass of any of the discs $dM$ is the mass of the sphere $M$ multiplied by the ratio of the volume of an infinitely thin disc divided by the volume of a sphere (with constant radius $a$). The volume of an infinitely thin disc is $\pi R^2 dx$, or $\pi\left(a^2-x^2\right) dx$. So, $dM=\frac{\pi M(a^2-x^2)dx}{\frac{4}{3}\pi a^3}$. Simplifying gives $dM=\frac{3M(a^2-x^2)dx}{4a^3}$. - -Each disc's position relative to $P$ will vary with its position within the 'sphere' made of the discs, so $p$ must be replaced with $p+x$. - -Replacing $M$ with $dM$, $R$ with $\sqrt{a^2-x^2}$, and $p$ with $p+x$ in the 'disc' equation yields: -$$ -dE=\frac{\left( \frac{2G\left[3M\left(a^2-x^2\right)\right]}{4a^3} \right) }{a^2-x^2}\cdot \left(1-\frac{p+x}{\sqrt{(p+x)^2+a^2-x^2}}\right) dx -$$ - -Simplifying, -$$ -\int dE=\int_{-a}^a \frac{3GM}{2a^3} \left(1-\frac{p+x}{\sqrt{p^2+a^2+2px}}\right) dx -$$ - -Integrating the gravitational field of each thin disc from $x=-a$ to $x=+a$ with respect to $x$, and doing some careful algebra, yields Newton's shell theorem: -$$ -E=\frac{GM}{p^2} -$$ - -where $p$ is the distance between the center of the spherical mass and an arbitrary point $P$. The gravitational field of a spherical mass may be calculated by treating all the mass as a point particle at the center of the sphere. - -A solid, spherically symmetric body can be modeled as an infinite number of concentric, infinitesimally thin spherical shells. If one of these shells can be treated as a point mass, then a system of shells (i.e. the sphere) can also be treated as a point mass. Consider one such shell, viewed in cross-section: - -(Note: $d\theta$ in such a diagram refers to the small angle, not the arc length. The arc length is $R d\theta$.) - -Applying Newton's Universal Law of Gravitation, the sum of the forces due to the mass elements in the shaded band is -$$ -dF = \frac{Gm}{s^2} dM. -$$ - -However, since there is partial cancellation due to the vector nature of the force in conjunction with the circular band's symmetry, the leftover component (in the direction pointing towards $m$) is given by -$$ -dF_r = \frac{Gm}{s^2} \cos(\varphi) dM -$$ - -The total force on $m$, then, is simply the sum of the force exerted by all the bands. By shrinking the width of each band, and increasing the number of bands, the sum becomes an integral expression: -$$ -F_r = \int dF_r -$$ - -Since $G$ and $m$ are constants, they may be taken out of the integral: -$$ -F_r = Gm \int \frac{\cos(\varphi)}{s^2} dM.
-$$ - -To evaluate this integral, one must first express $dM$ as a function of $d\theta$. - -The total surface area of a spherical shell is -$$ -4\pi R^2 -$$ - -while the surface area of the thin slice between $\theta$ and $\theta+d\theta$ is -$$ -2\pi R\sin(\theta) R d\theta = 2\pi R^2\sin(\theta) d\theta -$$ - -If the mass of the shell is $M$, one therefore has that -$$ -dM = \frac {2\pi R^2\sin(\theta) }{4\pi R^2} Md\theta = \frac{1}{2} M\sin(\theta) d\theta -$$ - -and -$$ -F_r = \frac{GMm}{2} \int \frac{\sin(\theta) \cos(\varphi)} {s^2}d\theta -$$ - -By the law of cosines, -$$ -\cos(\varphi) = \frac{r^2 + s^2 - R^2}{2rs} -$$ - -and -$$ -\cos(\theta) = \frac{r^2 + R^2 - s^2}{2rR}. -$$ - -These two relations link the three parameters $\theta$, $\varphi$ and $s$ that appear in the integral together. As $\theta$ increases from $0$ to $\pi$ radians, $\varphi$ varies from the initial value 0 to a maximal value before finally returning to zero at $\theta=\pi$. At the same time, $s$ increases from the initial value $r-R$ to the final value $r+R$ as $\theta$ increases from 0 to $\pi$ radians. - -(Note: As viewed from $m$, the shaded blue band appears as a thin annulus whose inner and outer radii converge to $R \sin(\theta)$ as $d\theta$ vanishes.) - -To find a primitive function to the integrand, one has to make $s$ the independent integration variable instead of $\theta$. - -Performing an implicit differentiation of the second of the "cosine law" expressions above yields -$$ --\sin(\theta) d\theta = \frac{-2s}{2rR} ds -$$ - -and thus -$$ -\sin(\theta) d\theta = \frac{s}{rR} ds. -$$ - -It follows that -$$ -F_r = \frac{GMm}{2} \frac{1}{rR} \int \frac{s\cos(\varphi)} {s^2}ds = \frac{GMm}{2rR} \int \frac{\cos(\varphi)} s ds -$$ - -where the new integration variable $s$ increases from $r-R$ to $r+R$. - -Inserting the expression for $\cos(\varphi)$ using the first of the "cosine law" expressions above, one finally gets that -$$ -F_r = \frac{GMm}{4r^2 R} \int \left( 1 + \frac{r^2 - R^2}{s^2} \right)\ ds\ . -$$ - -A primitive function to the integrand is -$$ -s - \frac{r^2 - R^2}{s}\ , -$$ - -and inserting the bounds $r-R$ and $r+R$ for the integration variable $s$ in this primitive function, one gets that -$$ -F_r = \frac{GMm}{r^2}, -$$ - -saying that the gravitational force is the same as that of a point mass in the center of the shell with the same mass. - -Finally, integrating over all infinitesimally thin spherical shells with mass $dM$, we obtain the total gravitational contribution of a solid ball to an object outside the ball -$$ -F_{total} = \int dF_r = \frac{Gm}{r^2} \int dM. -$$ - -Between the radii $x$ and $x+dx$, $dM$ can be expressed as a function of $x$, i.e., -$$ -dM = \frac{4 \pi x^2 dx}{\frac{4}{3} \pi R^3} M = \frac{3Mx^2 dx}{R^3} -$$ - -Therefore, the total gravity is -$$ -F_\text{total} = \frac{3GMm}{r^2 R^3} \int_0^R x^2 dx = \frac{GMm}{r^2} -$$ - -which suggests that the gravity of a solid spherical ball on an exterior object can be simplified as that of a point mass in the center of the ball with the same mass. - -For a point inside the shell, the difference is that when $\theta$ is equal to zero, $\varphi$ takes the value $\pi$ radians and $s$ the value $R - r$. When $\theta$ increases from 0 to $\pi$ radians, $\varphi$ decreases from the initial value $\pi$ radians to zero and $s$ increases from the initial value $R - r$ to the value $R + r$.
- -Inserting these bounds into the primitive function -$$ -s - \frac{r^2 - R^2}{s} -$$ - -one gets that, in this case, -$$ -F_r = 0 , -$$ - -saying that the net gravitational forces acting on the point mass from the mass elements of the shell, outside the measurement point, cancel out. - -Generalization: if $f=\frac{k}{r^p}$, the resultant force inside the shell is: -$$ -F(r) = \frac{GMm}{4r^2 R} \int_{R-r}^{R+r} \left( \frac{1}{s^{p-2}} + \frac{r^2 - R^2}{s^p} \right) ds -$$ - -The above integral yields $F(r)$ identically zero if and only if $p=2$. - -Outside the shell (i.e. $r>R$ or $r < -R$): -$$ -F(r) = \frac{GMm}{4r^2 R} \int_{r-R}^{r+R} \left( \frac{1}{s^{p-2}} + \frac{r^2 - R^2}{s^p} \right) ds -$$ - -The shell theorem is an immediate consequence of Gauss's law for gravity saying that -$$ -\int_S {\mathbf g}\cdot d{\mathbf {S}} = -4 \pi GM -$$ - -where M is the mass of the part of the spherically symmetric mass distribution that is inside the sphere with radius r and -$$ -\int_S {\mathbf g}\cdot d{\mathbf {S}} = \int_S {\mathbf g}\cdot {\hat\mathbf{n}}dS -$$ - -is the surface integral of the gravitational field g over any closed surface inside which the total mass is M, the unit vector $\hat\mathbf{n}$ being the outward normal to the surface. - -The gravitational field of a spherically symmetric mass distribution like a mass point, a spherical shell or a homogeneous sphere must also be spherically symmetric. If $\hat\mathbf{n}$ is a unit vector in the direction from the point of symmetry to another point, the gravitational field at this other point must therefore be -$$ - \mathbf g = g(r) \hat\mathbf{n} -$$ - -where g(r) only depends on the distance r to the point of symmetry. - -Selecting the closed surface as a sphere with radius r with center at the point of symmetry, the outward normal to a point on the surface, $\hat\mathbf{n}$, is precisely the direction pointing away from the point of symmetry of the mass distribution. - -One therefore has that -$$ - \mathbf {g} = g(r)\hat\mathbf{n} -$$ - -and -$$ -\int_S \mathbf g \cdot d{\mathbf S} = g(r) \int_S dS = g(r) 4\pi r^2 -$$ - -as the area of the sphere is $4\pi r^2$. - -From Gauss's law it then follows that -$$ - g(r) 4\pi r^2 = -4 \pi GM , -$$ - -or, -$$ - g(r) = -\frac {GM}{r^2}. -$$ - -It is natural to ask whether the converse of the shell theorem is true, namely whether the result of the theorem implies the law of universal gravitation, or if there is some more general force law for which the theorem holds. More specifically, one may ask the question: - -Suppose there is a force $F$ between masses M and m, separated by a distance r, of the form $F = M m f(r)$, such that any spherically symmetric body affects external bodies as if its mass were concentrated at its center. Then what form can the function $f$ take? - -In fact, this allows exactly one more class of force than the (Newtonian) inverse square. The most general such force is: -$$ - F = -\frac{G M m}{r^2} - \frac{\Lambda M m r}{3} -$$ - -where $G$ and $\Lambda$ can be constants taking any value. The first term is the familiar law of universal gravitation; the second is an additional force, analogous to the cosmological constant term in general relativity. - -If we further constrain the force by requiring that the second part of the theorem also holds, namely that there is no force inside a hollow ball, we exclude the possibility of the additional term, and the inverse square law is indeed the unique force law satisfying the theorem. - -On the other hand, if we relax the conditions, and require only that the field everywhere outside a spherically symmetric body is the same as the field from some point mass at the center (of any mass), we allow a new class of solutions given by the Yukawa potential, of which the inverse square law is a special case.
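Returning to the Newtonian theorem itself, both of its statements are easy to confirm numerically by integrating over the ring decomposition used in the proof above (a sketch assuming scipy and numpy; the unit choices $G = M = m = R = 1$ are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

G = M = m = R = 1.0

def shell_force(r):
    # Radial force on a test mass at distance r from the center of a uniform
    # shell of radius R, integrating over rings at polar angle theta:
    # dM = (M/2) sin(theta) dtheta, s^2 = r^2 + R^2 - 2 r R cos(theta),
    # cos(phi) = (r - R cos(theta)) / s.
    def integrand(theta):
        s2 = r**2 + R**2 - 2 * r * R * np.cos(theta)
        cos_phi = (r - R * np.cos(theta)) / np.sqrt(s2)
        return G * m * (M / 2) * np.sin(theta) * cos_phi / s2
    return quad(integrand, 0, np.pi)[0]

print(shell_force(2.0), G * M * m / 2.0**2)  # outside: equals point-mass force
print(shell_force(0.5))                      # inside: numerically zero
```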
- -Another generalization can be made for a disc by observing that -$$ -dM=\frac{R^2}{2} \frac{d\theta \sin^2(\theta)}{\pi R^2}M=\frac{ \sin^2(\theta)}{2 \pi}M d\theta -$$ - -so: -$$ -F_r = \frac{GMm}{2 \pi} \int \frac{ \sin^2 (\theta) \cos(\varphi)} {s^2} d\theta, -$$ - -where $M=\pi R^2 \rho$, and $\rho$ is the density of the body. - -Doing all the intermediate calculations we get: -$$ -F(r) = \frac{G m \rho}{8r^3} \int_{R-r}^{R+r} { \frac{\left(r^2 + s^2 - R^2\right)\sqrt{2\left(r^2 R^2 + r^2 s^2 + R^2 s^2\right) - s^4 - r^4 - R^4} }{s^2} } ds -$$ - -Propositions 70 and 71 consider the force acting on a particle from a hollow sphere with an infinitesimally thin surface, whose mass density is constant over the surface. The force on the particle from a small area of the surface of the sphere is directly proportional to the mass of the area and inversely proportional to the square of its distance from the particle. The first proposition considers the case when the particle is inside the sphere, the second when it is outside. The use of infinitesimals and limiting processes in geometrical constructions are simple and elegant and avoid the need for any integrations. They well illustrate Newton's method of proving many of the propositions in the Principia. - -His proof of Proposition 70 is trivial. In the following, it is considered in slightly greater detail than Newton provides. - -The proof of Proposition 71 is more historically significant. It forms the first part of his proof that the gravitational force of a solid sphere acting on a particle outside it is inversely proportional to the square of its distance from the center of the sphere, provided the density at any point inside the sphere is a function only of its distance from the center of the sphere. - -Although the following are completely faithful to Newton's proofs, very minor changes have been made to attempt to make them clearer. - -Fig. 2 is a cross-section of the hollow sphere through the center S and an arbitrary point P inside the sphere. Through P draw two lines IL and HK such that the angle KPL is very small. JM is the line through P that bisects that angle. From the geometry of circles, the triangles IPH and KPL are similar. The lines KH and IL are rotated about the axis JM to form 2 cones that intersect the sphere in 2 closed curves. In Fig. 1 the sphere is seen from a distance along the line PE and is assumed transparent so both curves can be seen. - -The surface of the sphere that the cones intersect can be considered to be flat, and $ \angle PJI = \angle PMK $. - -Since the intersection of a cone with a plane is an ellipse, in this case the intersections form two ellipses with major axes IH and KL, where $ \frac{IH}{KL} = \frac{PJ}{PM} $. - -By a similar argument, the minor axes are in the same ratio. This is clear if the sphere is viewed from above. Therefore the two ellipses are similar, so their areas are as the squares of their major axes.
As the mass of any section of the surface is proportional to the area of that section, the ratio of the masses of the 2 elliptical areas is $ \propto \frac{PJ^2}{PM^2} $. - -Since the force of attraction on P in the direction JM from either of the elliptic areas is directly as the mass of the area and inversely as the square of its distance from P, it is independent of the distance of P from the sphere. Hence, the forces on P from the 2 infinitesimal elliptical areas are equal and opposite and there is no net force in the direction JM. - -As the position of P and the direction of JM are both arbitrary, it follows that any particle inside a hollow sphere experiences no net force from the mass of the sphere. - -Note: Newton simply describes the arcs IH and KL as 'minimally small' and the areas traced out by the lines IL and HK can be any shape, not necessarily elliptic, but they will always be similar. - -Fig. 1 is a cross-section of the hollow sphere through the center S, with an arbitrary point P outside the sphere. PT is the tangent to the circle at T which passes through P. HI is a small arc on the surface such that PH is less than PT. Extend PI to intersect the sphere at L and draw SF to the point F that bisects IL. Extend PH to intersect the sphere at K and draw SE to the point E that bisects HK, and extend SF to intersect HK at D. Drop a perpendicular IQ on to the line PS joining P to the center S. Let the radius of the sphere be a and the distance PS be D. - -Let arc IH be extended perpendicularly out of the plane of the diagram, by a small distance ζ. The area of the figure generated is $ IH\cdot \zeta $, and its mass is proportional to this product. - -The force due to this mass on the particle at P is $ \propto \frac{IH\cdot \zeta}{PI^2} $ and is along the line PI. - -The component of this force towards the center is $ \propto \frac{IH\cdot PQ\cdot \zeta}{PI^3} $. - -If now the arc HI is rotated completely about the line PS to form a ring of width HI and radius IQ, the length of the ring is $2\pi \cdot IQ$ and its area is $2\pi \cdot IQ \cdot IH$. The component of the force due to this ring on the particle at P in the direction PS becomes $ \propto \frac{IH\cdot IQ\cdot PQ}{PI^3} $. - -The perpendicular components of the force directed towards PS cancel out since the mass in the ring is distributed symmetrically about PS. Therefore, the component in the direction PS is the total force on P due to the ring formed by rotating arc HI about PS. - -From similar triangles: $ \frac{IQ}{PI} = \frac{FS}{D}$; $ \frac{PQ}{PI} = \frac{PF}{D}$, and $ \frac{RI}{PI} = \frac{DF}{PF}$. - -If HI is sufficiently small that it can be taken as a straight line, $ \angle SIH $ is a right angle, and $ \angle RIH = \angle FIS $, so that $ \frac{HI}{RI} = \frac{a}{IF}$. - -Hence the force on P due to the ring is $ \propto \frac{IH\cdot IQ\cdot PQ}{PI^3} = \frac{a\cdot DF\cdot FS\cdot PF}{IF\cdot PF\cdot D\cdot D} = \frac{a\cdot DF\cdot FS}{IF\cdot D^2} $. - -Assume now in Fig. 2 that another particle is outside the sphere at a point p, a different distance d from the center of the sphere, with corresponding points lettered in lower case. For easy comparison, the construction of P in Fig. 1 is also shown in Fig. 2. As before, ph is less than pt. - -Generate a ring with width ih and radius iq by making $ \angle fiS = \angle FIS $ and the slightly larger angle $ \angle dhS = \angle DHS $, so that the distance PS is subtended by the same angle at I as is pS at i. The same holds for H and h, respectively.
- -The total force on p due to this ring is -$$ - \propto \frac{ih\cdot iq\cdot pq}{pi^3} = \frac{a\cdot df\cdot fS}{if\cdot d^2} -$$ - -Clearly $ fS = FS $, $ if = IF $, and $ eS = ES $. - -Newton claims that DF and df can be taken as equal in the limit as the angles DPF and dpf 'vanish together'. Note that angles DPF and dpf are not equal. Although DS and dS become equal in the limit, this does not imply that the ratio of DF to df becomes equal to unity, when DF and df both approach zero. In the finite case DF depends on D, and df on d, so they are not equal. - -Since the ratio of DF to df in the limit is crucial, more detailed analysis is required. From the similar right triangles, $ \frac {DF}{PF} = \frac{ED}{ES}$ and $ ED^2 = (DF + FS)^2 - ES^2 $, giving $ \frac {\left(PF^2 - ES^2\right)DF^2}{PF^2} + 2\cdot FS\cdot DF + FS^2 - ES^2 = 0 $. Solving the quadratic for DF, in the limit as ES approaches FS, the smaller root is $ DF = ES - FS $. More simply, as DF approaches zero, in the limit the $ DF^2 $ term can be ignored: $ 2\cdot FS\cdot DF + FS^2 - ES^2 = 0 $, leading to the same result. Clearly df has the same limit, justifying Newton's claim. - -Comparing the force from the ring HI rotated about PS to the ring hi about pS, the ratio of these 2 forces equals $ \frac{d^2}{D^2} $. - -By dividing up the arcs AT and Bt into corresponding infinitesimal rings, it follows that the ratio of the force due to the arc AT rotated about PS to that of Bt rotated about pS is in the same ratio, and similarly, the ratio of the forces due to arc TB to that of tA, both rotated, are in the same ratio. - -Therefore, the force on a particle any distance D from the center of the hollow sphere is inversely proportional to $ D^2 $, which proves the proposition. - -An analogue of the shell theorem exists in general relativity (GR). - -Spherical symmetry implies that the metric has time-independent Schwarzschild geometry, even if a central mass is undergoing gravitational collapse (Misner et al. 1973; see Birkhoff's theorem). The metric thus has the form -$$ -ds^2 = - (1-2M/r) dt^2 + (1-2M/r)^{-1} dr^2 + r^2 d\Omega^2 -$$ - -(using geometrized units, where $G=c=1$). For $r>R>0$ (where $R$ is the radius of some mass shell), mass acts as a delta function at the origin. For $r < R$, shells of mass may exist externally, but for the metric to be non-singular at the origin, $M$ must be zero in the metric. This reduces the metric to flat Minkowski space; thus external shells have no gravitational effect. - -This result illuminates the gravitational collapse leading to a black hole and its effect on the motion of light-rays and particles outside and inside the event horizon (Hartle 2003, chapter 12). diff --git a/wiki/wikipedia/359.txt b/wiki/wikipedia/359.txt deleted file mode 100644 index 693dc83d1c42a7faa1ed9d05c9685ad262415282..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/359.txt +++ /dev/null @@ -1,19 +0,0 @@ -In geometric graph theory, a branch of mathematics, a polyhedral graph is the undirected graph formed from the vertices and edges of a convex polyhedron. Alternatively, in purely graph-theoretic terms, the polyhedral graphs are the 3-vertex-connected planar graphs. - -The Schlegel diagram of a convex polyhedron represents its vertices and edges as points and line segments in the Euclidean plane, forming a subdivision of an outer convex polygon into smaller convex polygons (a convex drawing of the graph of the polyhedron). It has no crossings, so every polyhedral graph is also a planar graph.
Additionally, by Balinski's theorem, it is a 3-vertex-connected graph. - -According to Steinitz's theorem, these two graph-theoretic properties are enough to completely characterize the polyhedral graphs: they are exactly the 3-vertex-connected planar graphs. That is, whenever a graph is both planar and 3-vertex-connected, there exists a polyhedron whose vertices and edges form an isomorphic graph. Given such a graph, a representation of it as a subdivision of a convex polygon into smaller convex polygons may be found using the Tutte embedding. - -Tait conjectured that every cubic polyhedral graph (that is, a polyhedral graph in which each vertex is incident to exactly three edges) has a Hamiltonian cycle, but this conjecture was disproved by a counterexample of W. T. Tutte, the polyhedral but non-Hamiltonian Tutte graph. If one relaxes the requirement that the graph be cubic, there are much smaller non-Hamiltonian polyhedral graphs. The graph with the fewest vertices and edges is the 11-vertex and 18-edge Herschel graph, and there also exists an 11-vertex non-Hamiltonian polyhedral graph in which all faces are triangles, the Goldner–Harary graph. - -More strongly, there exists a constant α < 1 (the shortness exponent) and an infinite family of polyhedral graphs such that the length of the longest simple path of an n-vertex graph in the family is $O(n^\alpha)$. - -Duijvestijn provides a count of the polyhedral graphs with up to 26 edges; the number of these graphs with 6, 7, 8, ... edges is - -1, 0, 1, 2, 2, 4, 12, 22, 58, 158, 448, 1342, 4199, 13384, 43708, 144810, ... . - -One may also enumerate the polyhedral graphs by their numbers of vertices: for graphs with 4, 5, 6, ... vertices, the number of polyhedral graphs is - -1, 2, 7, 34, 257, 2606, 32300, 440564, 6384634, 96262938, 1496225352, ... . - -A polyhedral graph is the graph of a simple polyhedron if it is cubic (every vertex has three edges), and it is the graph of a simplicial polyhedron if it is a maximal planar graph. The Halin graphs, graphs formed from a planar embedded tree by adding an outer cycle connecting all of the leaves of the tree, form another important subclass of the polyhedral graphs. diff --git a/wiki/wikipedia/3590.txt b/wiki/wikipedia/3590.txt deleted file mode 100644 index ecf6b5157f80a95b1391aab81c18082f1ad7a087..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3590.txt +++ /dev/null @@ -1,12 +0,0 @@ -In mathematics, the principal ideal theorem of class field theory, a branch of algebraic number theory, says that extending ideals gives a mapping from the class group of an algebraic number field to the class group of its Hilbert class field, which sends all ideal classes to the class of a principal ideal. The phenomenon has also been called principalization, or sometimes capitulation. - -For any algebraic number field K and any ideal I of the ring of integers of K, if L is the Hilbert class field of K, then -$$ -IO_L\ -$$ - -is a principal ideal $\alpha O_L$, for $O_L$ the ring of integers of L and some element $\alpha$ in it. - -The principal ideal theorem was conjectured by David Hilbert, and was the last remaining aspect of his program on class fields to be completed, in 1929. - -
diff --git a/wiki/wikipedia/3591.txt b/wiki/wikipedia/3591.txt deleted file mode 100644 index 1a5b5230424b9f1a7a99b157bcc466d95fe3a228..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3591.txt +++ /dev/null @@ -1,13 +0,0 @@ -In mathematics, the Ostrowski–Hadamard gap theorem is a result about the analytic continuation of complex power series whose non-zero terms are of orders that have a suitable "gap" between them. Such a power series is "badly behaved" in the sense that it cannot be extended to be an analytic function anywhere on the boundary of its disc of convergence. The result is named after the mathematicians Alexander Ostrowski and Jacques Hadamard. - -Let 0 < p1 < p2 < ... be a sequence of integers such that, for some λ > 1 and all j ∈ N, -$$ -\frac{p_{j + 1}}{p_{j}} > \lambda. -$$ - -Let (αj)j∈N be a sequence of complex numbers such that the power series -$$ -f(z) = \sum_{j \in \mathbf{N}} \alpha_{j} z^{p_{j}} -$$ - -has radius of convergence 1. Then no point z with |z| = 1 is a regular point for f, i.e. f cannot be analytically extended from the open unit disc D to any larger open set including even a single point of the boundary of D. diff --git a/wiki/wikipedia/3592.txt b/wiki/wikipedia/3592.txt deleted file mode 100644 index ee2085fc27437e94eac8090642c25f0052d7d437..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3592.txt +++ /dev/null @@ -1,23 +0,0 @@ -The out-of-kilter algorithm is an algorithm that computes the solution to the minimum-cost flow problem in a flow network. It was published in 1961 by D. R. Fulkerson and is described here. The analog of steady state flow in a network of nodes and arcs may describe a variety of processes. Examples include transportation systems & personnel assignment actions. Arcs generally have cost & capacity parameters. A recurring problem is trying to determine the minimum cost route between two points in a capacitated network. The idea of the algorithm is to identify out-of-kilter arcs and modify the flow network until all arcs are in-kilter and a minimum cost flow has been reached. The algorithm can be used to minimize the total cost of a constrained flow in an oriented network. - -To begin, the algorithm takes a single cycle and a set of node numbers. It then searches for out-of-kilter arcs. If none are found the algorithm is complete. If the flow needs to be increased or decreased to bring an arc into kilter, the algorithm will look for a path that increases or decreases the flow respectively. If no paths are found to improve the system then there is no feasible flow. This is done until all arcs are in-kilter, at which point the algorithm is complete. - -Suppose that the network has n nodes and m oriented arcs. We write $j ~ (i,i^1)$ if arc $j$ has initial node $i$ and terminal node $i^1$. Let $x(j)$ be the flow along arc $j$ (from node $i$ to node $i^1$). Define $c^-(j)$ and $c^+(j)$ to be the lower and upper capacity bounds on the flow in arc $j$. The capacities may be either finite, or infinite on some or all arcs for either the lower or upper bounds. The problem that is at hand to solve is to minimize: $\sum_{j=1}^md(j) x(j)$ subject to: -$$ -\sum_{j:j~(i,i^1)}x(j) -\sum_{j:j~(i^1,i)}x(j) = 0 -$$ for each $i = 1,....,n$ (1) - -, and: -$$ -c^-(j)\leq x(j)\leq c^+(j) -$$ for each $j = 1,....,n$ (2) - -If a given flow x satisfies (1), then the flow is conserved at each node and we call the flow a circulation. If the flow x satisfies (2) we say it is feasible. 
- -Runtime: - -* The algorithm terminates within $ O(mU) $ iterations, where $U$ is a bound on the arc capacities - -* The dominant computation is the shortest-path computation - -* The total runtime is $ O(m^2 U+mUn\log(n)) $ diff --git a/wiki/wikipedia/3593.txt b/wiki/wikipedia/3593.txt deleted file mode 100644 index 102f420eb24d56c2049a4ceff44246ca5f616eb0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3593.txt +++ /dev/null @@ -1,3 +0,0 @@ -In commutative algebra, the Auslander–Buchsbaum theorem states that regular local rings are unique factorization domains. - -The theorem was first proved by Maurice Auslander and David Buchsbaum. They showed that regular local rings of dimension 3 are unique factorization domains, and it had previously been shown that this implies that all regular local rings are unique factorization domains. diff --git a/wiki/wikipedia/3594.txt b/wiki/wikipedia/3594.txt deleted file mode 100644 index ac77a93dc88fcc032cb6ff4701c6b10a1253a012..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3594.txt +++ /dev/null @@ -1,135 +0,0 @@ -In complexity theory, the Karp–Lipton theorem states that if the Boolean satisfiability problem (SAT) can be solved by Boolean circuits with a polynomial number of logic gates, then -$$ -\Pi_2 = \Sigma_2 -$$ and therefore $\mathrm{PH} = \Sigma_2. $ - -That is, if we assume that NP, the class of nondeterministic polynomial time problems, can be contained in the non-uniform polynomial time complexity class P/poly, then this assumption implies the collapse of the polynomial hierarchy at its second level. Such a collapse is believed unlikely, so the theorem is generally viewed by complexity theorists as evidence for the nonexistence of polynomial size circuits for SAT or for other NP-complete problems. A proof that such circuits do not exist would imply that P ≠ NP. As P/poly contains all problems solvable in randomized polynomial time (Adleman's theorem), the Karp–Lipton theorem is also evidence that the use of randomization does not lead to polynomial time algorithms for NP-complete problems. - -The Karp–Lipton theorem is named after Richard M. Karp and Richard J. Lipton, who first proved it in 1980. (Their original proof collapsed PH to $\Sigma_3$, but Michael Sipser improved it to $\Sigma_2$.) - -Variants of the theorem state that, under the same assumption, MA = AM, and PH collapses to the complexity class $\mathsf{S}_2^P$. There are stronger conclusions possible if PSPACE, or some other complexity classes are assumed to have polynomial-sized circuits; see P/poly. If NP is assumed to be a subset of BPP (which is a subset of P/poly), then the polynomial hierarchy collapses to BPP. If coNP is assumed to be a subset of NP/poly, then the polynomial hierarchy collapses to its third level. - -Suppose that polynomial sized circuits for SAT not only exist, but also that they could be constructed by a polynomial time algorithm. Then this supposition implies that SAT itself could be solved by a polynomial time algorithm that constructs the circuit and then applies it. That is, efficiently constructible circuits for SAT would lead to a stronger collapse, P = NP. - -The assumption of the Karp–Lipton theorem, that these circuits exist, is weaker. But it is still possible for an algorithm in the complexity class $\Sigma_2$ to guess a correct circuit for SAT. The complexity class $\Sigma_2$ describes problems of the form -$$ -\exists x\forall y\psi(x,y) -$$ - -where $\psi$ is any polynomial-time computable predicate.
The existential power of the first quantifier in this predicate can be used to guess a correct circuit for SAT, and the universal power of the second quantifier can be used to verify that the circuit is correct. Once this circuit is guessed and verified, the algorithm in class $\Sigma_2$ can use it as a subroutine for solving other problems. - -To understand the Karp–Lipton proof in more detail, we consider the problem of testing whether a circuit c is a correct circuit for solving SAT instances of a given size, and show that this circuit testing problem belongs to $\Pi_1$. That is, there exists a polynomial time computable predicate V such that c is a correct circuit if and only if, for all polynomially-bounded z, V(c,z) is true. - -The circuit c is a correct circuit for SAT if it satisfies two properties: - -*For every pair (s,x) where s is an instance of SAT and x is a solution to the instance, c(s) must be true - -*For every instance s of SAT for which c(s) is true, s must be solvable. - -The first of these two properties is already in the form of problems in class $\Pi_1$. To verify the second property, we use the self-reducibility property of SAT. - -Self-reducibility describes the phenomenon that, if we can quickly test whether a SAT instance is solvable, we can almost as quickly find an explicit solution to the instance. To find a solution to an instance s, choose one of the Boolean variables x that is input to s, and make two smaller instances $s_0$ and $s_1$, where $s_i$ denotes the formula formed by replacing x with the constant i. Once these two smaller instances have been constructed, apply the test for solvability to each of them. If one of these two tests returns that the smaller instance is satisfiable, continue solving that instance until a complete solution has been derived. - -To use self-reducibility to check the second property of a correct circuit for SAT, we rewrite it as follows: - -*For every instance s of SAT for which c(s) is true, the self-reduction procedure described above finds a valid solution to s. - -Thus, we can test in $\Pi_1$ whether c is a valid circuit for solving SAT. - -(See random self-reducibility for more information.) - -The Karp–Lipton theorem can be restated as a result about Boolean formulas with polynomially-bounded quantifiers. Problems in $\Pi_2$ are described by formulas of this type, with the syntax -$$ -\phi = \forall x \exists y \psi(x, y) -$$ - -where $\psi$ is a polynomial-time computable predicate. The Karp–Lipton theorem states that this type of formula can be transformed in polynomial time into an equivalent formula in which the quantifiers appear in the opposite order; such a formula belongs to $\Sigma_2$. Note that the subformula -$$ -s(x)=\exists y \psi(x, y) -$$ - -is an instance of SAT. That is, if c is a valid circuit for SAT, then this subformula is equivalent to the unquantified formula c(s(x)). Therefore, the full formula for $\phi$ is equivalent (under the assumption that a valid circuit c exists) to the formula -$$ -\exists c\forall (x,z)\bigl(V(c,z)\wedge c(s(x))\bigr) -$$ - -where V is the formula used to verify that c really is a valid circuit using self-reducibility, as described above. This equivalent formula has its quantifiers in the opposite order, as desired.
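As an aside, the self-reduction procedure used above is easy to realize in code. The following is a minimal sketch, with a brute-force satisfiability test standing in for the hypothetical polynomial-size circuit c; the CNF encoding and all names are illustrative, not part of the proof itself.

```python
# Sketch of SAT self-reducibility: turning a yes/no satisfiability
# test into a solution finder, as described above. `is_sat` is a
# brute-force stand-in for the hypothetical SAT circuit c.
from itertools import product

def evaluate(cnf, assignment):
    # literal v > 0 means variable v is true; v < 0 means it is false
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in cnf)

def is_sat(cnf, n_vars):
    """Decision procedure (oracle stand-in): is the formula satisfiable?"""
    return any(evaluate(cnf, dict(enumerate(bits, start=1)))
               for bits in product([False, True], repeat=n_vars))

def restrict(cnf, var, value):
    """Substitute a constant for one variable, as in the s0/s1 step."""
    out = []
    for clause in cnf:
        if any(abs(lit) == var and (lit > 0) == value for lit in clause):
            continue                      # clause already satisfied
        out.append([lit for lit in clause if abs(lit) != var])
    return out

def self_reduce(cnf, n_vars):
    """Find a satisfying assignment using only the decision procedure."""
    assignment = {}
    for var in range(1, n_vars + 1):
        for value in (False, True):
            if is_sat(restrict(cnf, var, value), n_vars):
                cnf = restrict(cnf, var, value)
                assignment[var] = value
                break
    return assignment

print(self_reduce([[1, 2], [-1, 3], [-3]], 3))  # {1: False, 2: True, 3: False}
```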
Therefore, the Karp–Lipton assumption allows us to transpose the order of existential and universal quantifiers in formulas of this type, showing that $\Sigma_2=\Pi_2.$ Repeating the transposition allows formulas with deeper nesting to be simplified to a form in which they have a single existential quantifier followed by a single universal quantifier, showing that $\mathrm{PH}=\Sigma_2.$ - -Assume $\mathsf{NP} \subseteq \mathsf{P/poly}$. Then there exists a family of circuits $C_n$ that solves satisfiability on inputs of length n. Using self-reducibility, there exists a family of circuits $D_n$ which outputs a satisfying assignment on true instances. - -Suppose L is a $\Pi_2$ set -$$ -L = \{z : \forall x. \exists y. \phi(x,y,z)\} -$$ - -Since $\exists y. \phi(x,y,z)$ can be considered an instance of SAT (by the Cook-Levin theorem), there exists a circuit $D_n$, depending on $n = |z|$, such that the formula defining L is equivalent to -$$ -\forall x. \phi(x, D_n(x,z), z) \quad (1) -$$ - -Furthermore, the circuit can be guessed with existential quantification: -$$ -\exists D. \forall x. \phi(x, D(x,z), z) \quad (2) -$$ - -Obviously (1) implies (2). If (1) is false, then $\neg \exists y. \phi(x,y,z)$ for some $x$. In this case, no circuit D can output an assignment making $\phi(x, D(x, z), z)$ true, so (2) is false as well. - -The proof has shown that a $\Pi_2$ set $L$ is in $\Sigma_2$. - -What's more, if the $\Pi_2$ formula is true, then the circuit D will work against any x. If the $\Pi_2$ formula is false, then x making the formula (1) false will work against any circuit. This property means a stronger collapse, namely to the complexity class $\mathsf{S}_2^P$ (i.e. $\Pi_2 \subseteq \mathsf{S}_2^P \subseteq \Sigma_2$). This was observed by Sengupta. - -A modification of the above proof yields -$$ -\mathsf{NP} \subseteq \mathsf{P/poly} \implies \mathsf{AM} = \mathsf{MA} -$$ - -(see Arthur–Merlin protocol). - -Suppose that L is in AM, i.e.: -$$ -z \in L \implies \Pr\nolimits_x[\exists y. \phi(x,y,z)] \geq \tfrac{2}{3} -$$ -$$ -z \notin L \implies \Pr\nolimits_x[\exists y. \phi(x,y,z)] \leq \tfrac{1}{3} -$$ - -and, as previously, rewrite $\exists y. \phi(x,y,z)$ using the circuit $D_n$ that outputs a satisfying assignment if it exists: -$$ -z \in L \implies \Pr\nolimits_x[\phi(x,D_n(x,z),z)] \geq \tfrac{2}{3} -$$ -$$ -z \notin L \implies \Pr\nolimits_x[\phi(x,D_n(x,z),z)] \leq \tfrac{1}{3} -$$ - -Since $D_n$ can be guessed: -$$ -z \in L \implies \exists D. \Pr\nolimits_x[\phi(x,D(x,z),z)] \geq \tfrac{2}{3} -$$ -$$ -z \notin L \implies \forall D. \Pr\nolimits_x[\phi(x,D(x,z),z)] \leq \tfrac{1}{3} -$$ - -which proves $L$ is in the smaller class MA. - -Kannan's theorem states that for any fixed k there exists a language $L$ in $\Sigma_2$ which is not in $\mathsf{SIZE}(n^k)$. (This is a different statement from $\Sigma_2 \not \subseteq \mathsf{P/poly}$, which is currently open and states that there exists a single language that is not in $\mathsf{SIZE}(n^k)$ for any k.) It is a simple circuit lower bound. - -Proof outline: - -There exists a language $L \in \Sigma_4 - \mathsf{SIZE}(n^k)$ (the proof uses a diagonalization technique). Consider two cases: - -* If $\mathsf{SAT} \notin \mathsf{P/poly}$ then $\mathsf{SAT} \notin \mathsf{SIZE}(n^k)$ and the theorem is proved. - -* If $\mathsf{SAT} \in \mathsf{P/poly}$, then by the Karp–Lipton theorem, $\Sigma_4 = \Sigma_2$ and therefore $L \in \Sigma_2 - \mathsf{SIZE}(n^k)$. - -A stronger version of the Karp–Lipton theorem strengthens Kannan's theorem to: for any k, there exists a language $L \in \mathsf{S}_2^P - \mathsf{SIZE}(n^k)$. - -It is also known that PP is not contained in $\mathsf{SIZE}(n^k)$, which was proved by Vinodchandran.
Proof: - -* If $\mathsf{PP} \not \subseteq \mathsf{P/poly}$ then $\mathsf{PP} \not \subseteq \mathsf{SIZE}(n^k)$. - -* Otherwise, $\mathsf{P^{\#P}} \subseteq \mathsf{P/poly}$. Since -$$ -\mathsf{P^{\#P}} \supseteq \mathsf{PP} \supseteq \mathsf{MA} -$$ (by property of MA) -$$ -\mathsf{P^{\#P}} \supseteq \mathsf{PH} \supseteq \Sigma_2 \supseteq \mathsf{MA} -$$ (by Toda's theorem and property of MA) -$$ -\mathsf{P^{\#P}} = \mathsf{MA} -$$ (follows from assumption using interactive protocol for permanent, see P/poly) - -the containments are equalities and we get $\mathsf{PP} = \Sigma_2 \not \subseteq \mathsf{SIZE}(n^k)$ by Kannan's theorem. diff --git a/wiki/wikipedia/3595.txt b/wiki/wikipedia/3595.txt deleted file mode 100644 index 428a298d8ffebb9600b34be98e867615458ec490..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3595.txt +++ /dev/null @@ -1,21 +0,0 @@ -In order theory, the Szpilrajn extension theorem (also called the order-extension principle), proved by Edward Szpilrajn in 1930, states that every strict partial order is contained in a total order. Intuitively, the theorem says that any method of comparing elements that leaves some pairs incomparable can be extended in such a way that every pair becomes comparable. The theorem is one of many examples of the use of the axiom of choice in the form of Zorn's lemma to find a maximal set with certain properties. - -A binary relation $R$ on a set $X$ is formally defined as a set of ordered pairs $(x, y)$ of elements of $X,$ and $(x, y) \in R$ is often abbreviated as $xRy.$ - -A relation is reflexive if $xRx$ holds for every element $x \in X;$ it is transitive if $xRy \text{ and } yRz$ imply $xRz$ for all $x, y, z \in X;$ it is antisymmetric if $xRy \text{ and } yRx$ imply $x = y$ for all $x, y \in X;$ and it is a connex relation if $xRy \text{ or } yRx$ holds for all $x, y \in X.$ A partial order is, by definition, a reflexive, transitive and antisymmetric relation. A total order is a partial order that is connex. - -A relation $R$ is contained in another relation $S$ when all ordered pairs in $R$ also appear in $S;$ that is,$xRy$ implies $xSy$ for all $x, y \in X.$ The extension theorem states that every relation $R$ that is reflexive, transitive and antisymmetric (that is, a partial order) is contained in another relation $S$ which is reflexive, transitive, antisymmetric and connex (that is, a total order). - -The theorem is proved in two steps. First, if a partial order does not compare $x$ and $y,$ it can be extended by first adding the pair $(x, y)$ and then performing the transitive closure, and second, since this operation generates an ordering that strictly contains the original one and can be applied to all pairs of incomparable elements, there exists a relation in which all pairs of elements have been made comparable. - -The first step is proved as a preliminary lemma, in which a partial order where a pair of elements $x$ and $y$ are incomparable is changed to make them comparable. This is done by first adding the pair $xRy$ to the relation, which may result in a non-transitive relation, and then restoring transitivity by adding all pairs $qRp$ such that $qRx \text{ and } yRp.$ This is done on a single pair of incomparable elements $x$ and $y,$ and produces a relation that is still reflexive, antisymmetric and transitive and that strictly contains the original one. - -Next it is shown that the poset of partial orders containing $R,$ ordered by inclusion, has a maximal element. 
The existence of such a maximal element is proved by applying Zorn's lemma to this poset. A chain in this poset is a set of relations containing $R$ such that given any two of these relations, one is contained in the other. - -To apply Zorn's lemma, it must be shown that every chain has an upper bound in the poset. Let $\mathcal{C}$ be such a chain; it remains to show that the union of its elements, $\bigcup \mathcal{C},$ is an upper bound for $\mathcal{C}$ which is in the poset: $\bigcup \mathcal{C}$ contains the original relation $R$ since every element of $\mathcal{C}$ is a partial order containing $R.$ Next, it is shown that $\bigcup \mathcal{C}$ is a transitive relation. Suppose that $(x, y)$ and $(y, z)$ are in $\bigcup \mathcal{C},$ so that there exist $S, T \in \mathcal{C}$ such that $(x, y) \in S \text{ and } (y, z) \in T.$ Since $\mathcal{C}$ is a chain, either $S \subseteq T \text{ or } T \subseteq S.$ Suppose $S \subseteq T;$ the argument for when $T \subseteq S$ is similar. Then $(x, y) \in T.$ Since $T$ is transitive, $(x, z)$ is in $T$ and therefore also in $\bigcup \mathcal{C}.$ Similarly, it can be shown that $\bigcup \mathcal{C}$ is antisymmetric. - -Therefore, by Zorn's lemma, the set of partial orders containing $R$ has a maximal element $Q,$ and it remains only to show that $Q$ is total. Indeed, if $Q$ had a pair of incomparable elements then it is possible to apply the process of the first step to it, leading to another partial order that contains $R$ and strictly contains $Q,$ contradicting that $Q$ is maximal. $Q$ is therefore a total order containing $R,$ completing the proof. - -Arrow stated that every preorder (reflexive and transitive relation) can be extended to a total preorder (transitive and connex relation), and this claim was later proved by Hansson. - -Suzumura proved that a binary relation can be extended to a total preorder if and only if it is Suzumura-consistent, which means that there is no cycle of elements such that $xRy$ for every pair of consecutive elements $(x, y),$ and there is some pair of consecutive elements $(x, y)$ in the cycle for which $yRx$ does not hold. diff --git a/wiki/wikipedia/3596.txt b/wiki/wikipedia/3596.txt deleted file mode 100644 index e2ac6106c4f5bfd68c93d252d1ff78cc4c762e1d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3596.txt +++ /dev/null @@ -1,19 +0,0 @@ -Van Schooten's theorem, named after the Dutch mathematician Frans van Schooten, describes a property of equilateral triangles. It states: - -For an equilateral triangle $\triangle ABC$ with a point $P$ on its circumcircle, the length of the longest of the three line segments $PA, PB, PC$ connecting $P$ with the vertices of the triangle equals the sum of the lengths of the other two. - -The theorem is a consequence of Ptolemy's theorem for concyclic quadrilaterals. Let $a$ be the side length of the equilateral triangle $\triangle ABC$ and $PA$ the longest line segment. The triangle's vertices together with $P$ form a concyclic quadrilateral and hence Ptolemy's theorem yields: - - - -\begin{align} - -& |BC| \cdot |PA| =|AC| \cdot |PB| + |AB| \cdot |PC| \\[6pt] - -\Longleftrightarrow & a \cdot |PA| =a \cdot |PB| + a \cdot |PC| - -\end{align} - - - -Dividing the last equation by $a$ yields Van Schooten's theorem.
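A quick numerical sanity check of the theorem; the circumradius and the position of P are arbitrary illustrative choices.

```python
# Numerical check of Van Schooten's theorem on the unit circumcircle.
import numpy as np

angles = np.array([90.0, 210.0, 330.0]) * np.pi / 180   # equilateral triangle
A, B, C = (np.array([np.cos(t), np.sin(t)]) for t in angles)

t = 1.234                                    # arbitrary position for P
P = np.array([np.cos(t), np.sin(t)])         # P on the circumcircle

d = sorted(np.linalg.norm(P - V) for V in (A, B, C))
print(d[2], d[0] + d[1])                     # longest vs. sum of the other two
assert abs(d[2] - (d[0] + d[1])) < 1e-12
```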
diff --git a/wiki/wikipedia/3597.txt b/wiki/wikipedia/3597.txt deleted file mode 100644 index ee6a71e8cacb077d37b1770d27576c05c1261dba..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3597.txt +++ /dev/null @@ -1,50 +0,0 @@ -Hotelling's lemma is a result in microeconomics that relates the supply of a good to the maximum profit of the producer. It was first shown by Harold Hotelling, and is widely used in the theory of the firm. - -Specifically, it states that the rate of increase of maximized profit with respect to an increase in the output price equals the net supply of the good. In other words, if the firm makes its choices to maximize profits, then the choices can be recovered from the knowledge of the maximum profit function. - -Let $p$ denote a variable price, and $w$ be a constant cost of each input. Let $x:{\mathbb R^+}\rightarrow X$ be a mapping from the price to a set of feasible input choices $X\subset {\mathbb R^+}$. Let $f:{\mathbb R^+}\rightarrow {\mathbb R^+}$ be the production function, and $y(p)\triangleq f(x(p))$ be the net supply. - -The maximum profit can be written as -$$ -\pi (p) = \max_{x} p\cdot y(p) - w \cdot x(p). -$$ - -Then the lemma states that if the profit $\pi$ is differentiable at $p$, the maximizing net supply is given by -$$ -y^*(p) = \frac {d\pi (p)}{d p}. -$$ - -The lemma is a corollary of the envelope theorem. - -Specifically, the maximum profit can be rewritten as $\pi (p, x^*) = p\cdot f(x^*(p)) - w \cdot x^*(p)$ where $x^*$ is the maximizing input corresponding to $y^*$. Due to the optimality, the first order condition gives -$$ -\frac{\partial \pi}{\partial x} \bigg|_{x=x^*} = 0. \qquad (\ast) -$$ - -Differentiating with respect to $p$ at $x^*$, -$$ -\frac{d\pi}{d p} = \frac{\partial \pi}{\partial x}\bigg|_{x=x^*} \frac{\partial x}{\partial p} + \frac{\partial \pi}{\partial p} = \frac{\partial \pi}{\partial p} = f(x^*(p)) = y^*(p) -$$ - -where the second equality is due to $(\ast)$. QED - -Consider the following example. Let output $y$ have price $p$ and inputs $x_1$ and $x_2$ have prices $w_1$ and $w_2$. Suppose the production function is $y = x_1^{1/3}x_2^{1/3}$. The unmaximized profit function is $\pi(p, w_1, w_2, x_1, x_2) = p y - w_1x_1 - w_2x_2$. From this can be derived the profit-maximizing choices of inputs and the maximized profit function, a function just of the input and output prices, which is -$$ -\pi(p, w_1, w_2 ) = \frac{1}{27} \frac{p^3}{w_1w_2} -$$ - -Hotelling's Lemma says that from the maximized profit function we can find the profit-maximizing choices of output and input by taking partial derivatives: -$$ -\frac{ \partial \pi(p, w_1, w_2 )}{\partial p} = y = \frac{1}{9} \frac{p^2}{w_1w_2} -$$ -$$ -\frac{ \partial \pi(p, w_1, w_2 )}{\partial w_1} = -x_1 = -\frac{1}{27} \frac{ p^3}{w_1^2w_2} -$$ -$$ -\frac{ \partial \pi(p, w_1, w_2 )}{\partial w_2} = -x_2 = - \frac{1}{27} \frac{ p^3}{w_1w_2^2} -$$ - -Note that Hotelling's lemma gives the net supplies, which are positive for outputs and negative for inputs, since profit rises with output prices and falls with input prices. - -A number of criticisms have been made with regard to the use and application of Hotelling's lemma in empirical work. - -C. Robert Taylor points out that the accuracy of Hotelling's lemma is dependent on the firm maximizing profits, meaning that it is producing profit-maximizing output $y^*$ and cost-minimizing input $x^*$. If a firm is not producing at these optima, then Hotelling's lemma would not hold.
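The worked example above can be verified symbolically. A minimal sketch with sympy, plugging in the closed-form optima quoted above:

```python
# Symbolic check of the Hotelling's lemma example with sympy.
import sympy as sp

p, w1, w2 = sp.symbols("p w1 w2", positive=True)

# Profit-maximizing input choices for y = x1^(1/3) * x2^(1/3)
x1 = p**3 / (27 * w1**2 * w2)
x2 = p**3 / (27 * w1 * w2**2)
y = x1**sp.Rational(1, 3) * x2**sp.Rational(1, 3)

profit = p * y - w1 * x1 - w2 * x2
assert sp.simplify(profit - p**3 / (27 * w1 * w2)) == 0

# Hotelling's lemma: partial derivatives of the maximized profit
assert sp.simplify(sp.diff(profit, p) - y) == 0      # output supply
assert sp.simplify(sp.diff(profit, w1) + x1) == 0    # input demand (negated)
assert sp.simplify(sp.diff(profit, w2) + x2) == 0
print("Hotelling's lemma verified for this example")
```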
diff --git a/wiki/wikipedia/3598.txt b/wiki/wikipedia/3598.txt deleted file mode 100644 index e869a25f6093afaebf44a481123300ff17ce5530..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3598.txt +++ /dev/null @@ -1,48 +0,0 @@ -Transfinite induction is an extension of mathematical induction to well-ordered sets, for example to sets of ordinal numbers or cardinal numbers. - -Let $P(\alpha)$ be a property defined for all ordinals $\alpha$. Suppose that whenever $P(\beta)$ is true for all $\beta < \alpha$, then $P(\alpha)$ is also true. Then transfinite induction tells us that $P$ is true for all ordinals. - -Usually the proof is broken down into three cases: - -* Zero case: Prove that $P(0)$ is true. - -* Successor case: Prove that for any successor ordinal $\alpha+1$, $P(\alpha+1)$ follows from $P(\alpha)$ (and, if necessary, $P(\beta)$ for all $\beta < \alpha$). - -* Limit case: Prove that for any limit ordinal $\lambda$, $P(\lambda)$ follows from $P(\beta)$ for all $\beta < \lambda$. - -All three cases are identical except for the type of ordinal considered. They do not formally need to be considered separately, but in practice the proofs are typically so different as to require separate presentations. Zero is sometimes considered a limit ordinal and may then be treated in proofs in the same case as limit ordinals. - -Transfinite recursion is similar to transfinite induction; however, instead of proving that something holds for all ordinal numbers, we construct a sequence of objects, one for each ordinal. - -As an example, a basis for a (possibly infinite-dimensional) vector space can be created by choosing a vector $v_0$ and for each ordinal α choosing a vector that is not in the span of the vectors $\{v_{\beta}\mid\beta<\alpha\}$. This process stops when no vector can be chosen. - -More formally, we can state the Transfinite Recursion Theorem as follows: - -* Transfinite Recursion Theorem (version 1). Given a class function G: V → V (where V is the class of all sets), there exists a unique transfinite sequence F: Ord → V (where Ord is the class of all ordinals) such that -$$ -F(\alpha) = G(F \upharpoonright \alpha) -$$ for all ordinals α, where $\upharpoonright$ denotes the restriction of $F$'s domain to ordinals $<\alpha$. - -As in the case of induction, we may treat different types of ordinals separately: another formulation of transfinite recursion is the following: - -* Transfinite Recursion Theorem (version 2). Given a set $g_1$, and class functions $G_2$, $G_3$, there exists a unique function F: Ord → V such that - -* $F(0) = g_1$, - -* $F(\alpha + 1) = G_2(F(\alpha))$, for all $\alpha \in \mathrm{Ord}$, - -* $F(\lambda) = G_3(F \upharpoonright \lambda)$, for all limit λ ≠ 0. - -Note that we require the domains of $G_2$, $G_3$ to be broad enough to make the above properties meaningful. The uniqueness of the sequence satisfying these properties can be proved using transfinite induction. - -More generally, one can define objects by transfinite recursion on any well-founded relation R. (R need not even be a set; it can be a proper class, provided it is a set-like relation; i.e. for any x, the collection of all y such that yRx is a set.) - -Proofs or constructions using induction and recursion often use the axiom of choice to produce a well-ordered relation that can be treated by transfinite induction. However, if the relation in question is already well-ordered, one can often use transfinite induction without invoking the axiom of choice.
For example, many results about Borel sets are proved by transfinite induction on the ordinal rank of the set; these ranks are already well-ordered, so the axiom of choice is not needed to well-order them. - -The following construction of the Vitali set shows one way that the axiom of choice can be used in a proof by transfinite induction: - -First, well-order the real numbers (this is where the axiom of choice enters via the well-ordering theorem), giving a sequence $ \langle r_{\alpha} | \alpha < \beta \rangle $, where $\beta$ is an ordinal with the cardinality of the continuum. Let $v_0$ equal $r_0$. Then let $v_1$ equal $r_{\alpha_1}$, where $\alpha_1$ is least such that $r_{\alpha_1} - v_0$ is not a rational number. Continue; at each step use the least real from the $r$ sequence that does not have a rational difference with any element thus far constructed in the $v$ sequence. Continue until all the reals in the $r$ sequence are exhausted. The final $v$ sequence will enumerate the Vitali set. - -The above argument uses the axiom of choice in an essential way at the very beginning, in order to well-order the reals. After that step, the axiom of choice is not used again. - -Other uses of the axiom of choice are more subtle. For example, a construction by transfinite recursion frequently will not specify a unique value for $A_{\alpha+1}$, given the sequence up to $\alpha$, but will specify only a condition that $A_{\alpha+1}$ must satisfy, and argue that there is at least one set satisfying this condition. If it is not possible to define a unique example of such a set at each stage, then it may be necessary to invoke (some form of) the axiom of choice to select one such at each step. For inductions and recursions of countable length, the weaker axiom of dependent choice is sufficient. Because there are models of Zermelo–Fraenkel set theory of interest to set theorists that satisfy the axiom of dependent choice but not the full axiom of choice, the knowledge that a particular proof only requires dependent choice can be useful. diff --git a/wiki/wikipedia/3599.txt b/wiki/wikipedia/3599.txt deleted file mode 100644 index 5f730ffbe76a16c0c4a4c3c34397dd69fab50573..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3599.txt +++ /dev/null @@ -1,132 +0,0 @@ -In mathematics, Frobenius' theorem gives necessary and sufficient conditions for finding a maximal set of independent solutions of an overdetermined system of first-order homogeneous linear partial differential equations. In modern geometric terms, given a family of vector fields, the theorem gives necessary and sufficient integrability conditions for the existence of a foliation by maximal integral manifolds whose tangent bundles are spanned by the given vector fields. The theorem generalizes the existence theorem for ordinary differential equations, which guarantees that a single vector field always gives rise to integral curves; Frobenius gives compatibility conditions under which the integral curves of r vector fields mesh into coordinate grids on r-dimensional integral manifolds. The theorem is foundational in differential topology and calculus on manifolds. - -In its most elementary form, the theorem addresses the problem of finding a maximal set of independent solutions of a regular system of first-order linear homogeneous partial differential equations. Let -$$ - \left \{ f_k^i : \mathbf{R}^n \to \mathbf{R} \ : \ 1 \leq i \leq n, 1 \leq k \leq r \right \} -$$ - be a collection of $C^1$ functions, with r < n, and such that the matrix $(f_k^i)$ has rank r.
Consider the following system of partial differential equations for a $C^2$ function $u : \mathbf{R}^n \to \mathbf{R}$: - -(1) \quad \begin{cases} - -L_1u\ \stackrel{\mathrm{def}}{=}\ \sum_i f_1^i(x)\frac{\partial u}{\partial x^i} = 0\\ - -L_2u\ \stackrel{\mathrm{def}}{=}\ \sum_i f_2^i(x)\frac{\partial u}{\partial x^i} = 0\\ - -\qquad \cdots \\ - -L_ru\ \stackrel{\mathrm{def}}{=}\ \sum_i f_r^i(x)\frac{\partial u}{\partial x^i} = 0 - -\end{cases} - -One seeks conditions on the existence of a collection of solutions $u_1, \dots, u_{n-r}$ such that the gradients $\nabla u_1, \dots, \nabla u_{n-r}$ are linearly independent. - -The Frobenius theorem asserts that this problem admits a solution locally if, and only if, the operators $L_k$ satisfy a certain integrability condition known as involutivity. Specifically, they must satisfy relations of the form -$$ -L_iL_ju(x)-L_jL_iu(x)=\sum_k c_{ij}^k(x)L_ku(x) -$$ - -for $1 \leq i, j \leq r$, and all $C^2$ functions u, and for some coefficients $c_{ij}^k(x)$ that are allowed to depend on $x$. In other words, the commutators $[L_i, L_j]$ must lie in the linear span of the $L_k$ at every point. The involutivity condition is a generalization of the commutativity of partial derivatives. In fact, the strategy of proof of the Frobenius theorem is to form linear combinations among the operators $L_i$ so that the resulting operators do commute, and then to show that there is a coordinate system $y^i$ for which these are precisely the partial derivatives with respect to $y^1, \dots, y^r$. - -Even though the system is overdetermined, there are typically infinitely many solutions. For example, the system of differential equations - -\begin{cases} \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y} =0\\ \frac{\partial f}{\partial y}+ \frac{\partial f}{\partial z}=0 - -\end{cases} - -clearly permits multiple solutions. Nevertheless, these solutions still have enough structure that they may be completely described. The first observation is that, even if $f_1$ and $f_2$ are two different solutions, the level surfaces of $f_1$ and $f_2$ must overlap. In fact, the level surfaces for this system are all planes in $\mathbf{R}^3$ of the form $x - y + z = C$, for $C$ a constant. The second observation is that, once the level surfaces are known, all solutions can then be given in terms of an arbitrary function. Since the value of a solution f on a level surface is constant by definition, define a function C(t) by: -$$ -f(x,y,z)=C(t) \text{ whenever } x - y + z = t. -$$ - -Conversely, if a function C(t) is given, then each function f given by this expression is a solution of the original equation. Thus, because of the existence of a family of level surfaces, solutions of the original equation are in a one-to-one correspondence with arbitrary functions of one variable. - -Frobenius' theorem allows one to establish a similar such correspondence for the more general case of solutions of (1). Suppose that $u_1, \dots, u_{n-r}$ are solutions of the problem (1) satisfying the independence condition on the gradients. Consider the level sets of $(u_1, \dots, u_{n-r})$ as functions with values in $\mathbf{R}^{n-r}$. If $v_1, \dots, v_{n-r}$ is another such collection of solutions, one can show (using some linear algebra and the mean value theorem) that this has the same family of level sets but with a possibly different choice of constants for each set. Thus, even though the independent solutions of (1) are not unique, the equation (1) nonetheless determines a unique family of level sets.
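The claim about this example can be checked symbolically; a small sympy sketch (the function name g is an arbitrary placeholder for the one free function of one variable):

```python
# Check that f(x, y, z) = g(x - y + z) solves the example system,
# for an arbitrary differentiable function g.
import sympy as sp

x, y, z = sp.symbols("x y z")
g = sp.Function("g")
f = g(x - y + z)

assert sp.simplify(sp.diff(f, x) + sp.diff(f, y)) == 0
assert sp.simplify(sp.diff(f, y) + sp.diff(f, z)) == 0
print("the level sets x - y + z = C give solutions")
```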
Just as in the case of the example, general solutions u of (1) are in a one-to-one correspondence with (continuously differentiable) functions on the family of level sets. - -The level sets corresponding to the maximal independent solution sets of (1) are called the integral manifolds because functions on the collection of all integral manifolds correspond in some sense to constants of integration. Once one of these constants of integration is known, then the corresponding solution is also known. - -The Frobenius theorem can be restated more economically in modern language. Frobenius' original version of the theorem was stated in terms of Pfaffian systems, which today can be translated into the language of differential forms. An alternative formulation, which is somewhat more intuitive, uses vector fields. - -In the vector field formulation, the theorem states that a subbundle of the tangent bundle of a manifold is integrable (or involutive) if and only if it arises from a regular foliation. In this context, the Frobenius theorem relates integrability to foliation; to state the theorem, both concepts must be clearly defined. - -One begins by noting that an arbitrary smooth vector field $X$ on a manifold $M$ defines a family of curves, its integral curves $u:I\to M$ (for intervals $I$). These are the solutions of $\dot u(t) = X_{u(t)}$, which is a system of first-order ordinary differential equations, whose solvability is guaranteed by the Picard-Lindelöf theorem. If the vector field $X$ is nowhere zero then it defines a one-dimensional subbundle of the tangent bundle of $M$, and the integral curves form a regular foliation of $M$. Thus, one-dimensional subbundles are always integrable. - -If the subbundle has dimension greater than one, a condition needs to be imposed. - -One says that a subbundle $E\subset TM$ of the tangent bundle $TM$ is integrable (or involutive) if, for any two vector fields $X$ and $Y$ taking values in $E$, the Lie bracket $[X,Y]$ takes values in $E$ as well. This notion of integrability need only be defined locally; that is, the existence of the vector fields $X$ and $Y$ and their integrability need only be defined on subsets of $M$. - -Several definitions of foliation exist. Here we use the following: - -Definition. A p-dimensional, class $C^r$ foliation of an n-dimensional manifold M is a decomposition of M into a union of disjoint connected submanifolds $\{L_\alpha\}_{\alpha\in A}$, called the leaves of the foliation, with the following property: Every point in M has a neighborhood U and a system of local, class $C^r$ coordinates $x = (x^1, \dots, x^n) : U \to \mathbf{R}^n$ such that for each leaf $L_\alpha$, the components of $U \cap L_\alpha$ are described by the equations $x^{p+1} = \text{constant}, \dots, x^n = \text{constant}$. A foliation is denoted by $\mathcal{F} = \{L_\alpha\}_{\alpha\in A}$. - -Trivially, any foliation of $M$ defines an integrable subbundle, since if $p\in M$ and $N\subset M$ is the leaf of the foliation passing through $p$ then $E_p = T_pN$ is integrable. Frobenius' theorem states that the converse is also true: - -Given the above definitions, Frobenius' theorem states that a subbundle $E$ is integrable if and only if the subbundle $E$ arises from a regular foliation of $M$. - -Let U be an open set in a manifold M, $\Omega^1(U)$ be the space of smooth, differentiable 1-forms on U, and F be a submodule of $\Omega^1(U)$ of rank r, the rank being constant in value over U. The Frobenius theorem states that F is integrable if and only if for every p in U the stalk $F_p$ is generated by r exact differential forms.
- -Geometrically, the theorem states that an integrable module of 1-forms of rank r is the same thing as a codimension-r foliation. The correspondence to the definition in terms of vector fields given in the introduction follows from the close relationship between differential forms and Lie derivatives. Frobenius' theorem is one of the basic tools for the study of vector fields and foliations. - -There are thus two forms of the theorem: one which operates with distributions, that is smooth subbundles D of the tangent bundle TM; and the other which operates with subbundles of the graded ring Ω(M) of all forms on M. These two forms are related by duality. If D is a smooth tangent distribution on M, then the annihilator of D, I(D), consists of all forms $\alpha\in\Omega^k (M)$ (for any $k\in \{1,\dots, \operatorname{dim}M\}$) such that -$$ -\alpha(v_1,\dots,v_k) = 0 -$$ - -for all $v_1,\dots,v_k\in D$. The set I(D) forms a subring and, in fact, an ideal in Ω(M). Furthermore, using the definition of the exterior derivative, it can be shown that I(D) is closed under exterior differentiation (it is a differential ideal) if and only if D is involutive. Consequently, the Frobenius theorem takes on the equivalent form that I(D) is closed under exterior differentiation if and only if D is integrable. - -The theorem may be generalized in a variety of ways. - -One infinite-dimensional generalization is as follows. Let X and Y be Banach spaces, and A ⊂ X, B ⊂ Y a pair of open sets. Let -$$ -F:A\times B \to L(X,Y) -$$ - -be a continuously differentiable function of the Cartesian product (which inherits a differentiable structure from its inclusion into X × Y) into the space L(X,Y) of continuous linear transformations of X into Y. A differentiable mapping u : A → B is a solution of the differential equation -$$ -(1) \quad y' = F(x,y) -$$ - -if -$$ -\forall x \in A: \quad u'(x) = F(x, u(x)). -$$ - -The equation (1) is completely integrable if for each $(x_0, y_0)\in A\times B$, there is a neighborhood U of $x_0$ such that (1) has a unique solution u(x) defined on U such that $u(x_0)=y_0$. - -The conditions of the Frobenius theorem depend on whether the underlying field is R or C. If it is R, then assume F is continuously differentiable. If it is C, then assume F is twice continuously differentiable. Then (1) is completely integrable at each point of A × B if and only if -$$ -D_1F(x,y)\cdot(s_1,s_2) + D_2F(x,y)\cdot(F(x,y)\cdot s_1,s_2) = D_1F(x,y) \cdot (s_2,s_1) + D_2F(x,y)\cdot(F(x,y)\cdot s_2,s_1) -$$ - -for all $s_1, s_2 \in X$. Here $D_1$ (resp. $D_2$) denotes the partial derivative with respect to the first (resp. second) variable; the dot product denotes the action of the linear operator $F(x,y) \in L(X,Y)$, as well as the actions of the operators $D_1F(x,y) \in L(X, L(X,Y))$ and $D_2F(x,y) \in L(Y, L(X,Y))$. - -The infinite-dimensional version of the Frobenius theorem also holds on Banach manifolds. The statement is essentially the same as the finite-dimensional version. - -Let M be a Banach manifold of class at least $C^2$. Let E be a subbundle of the tangent bundle of M. The bundle E is involutive if, for each point p ∈ M and pair of sections X and Y of E defined in a neighborhood of p, the Lie bracket of X and Y evaluated at p lies in $E_p$: -$$ - [X,Y]_p \in E_p -$$ - -On the other hand, E is integrable if, for each p ∈ M, there is an immersed submanifold φ : N → M whose image contains p, such that the differential of φ is an isomorphism of $TN$ with $\phi^{-1}E$.
- -The Frobenius theorem states that a subbundle E is integrable if and only if it is involutive. - -The statement of the theorem remains true for holomorphic 1-forms on complex manifolds - manifolds over $\mathbf{C}$ with biholomorphic transition functions. - -Specifically, if $\omega^1,\dots,\omega^r$ are r linearly independent holomorphic 1-forms on an open set in $\mathbf{C}^n$ such that -$$ -d\omega^j = \sum_{i=1}^r \psi_i^j \wedge \omega^i -$$ - -for some system of holomorphic 1-forms $\psi_i^j$, $1 \leq i, j \leq r$, then there exist holomorphic functions $f_i^j$ and $g^i$ such that, on a possibly smaller domain, -$$ -\omega^j = \sum_{i=1}^r f_i^jdg^i. -$$ - -This result holds locally in the same sense as the other versions of the Frobenius theorem. In particular, the fact that it has been stated for domains in $\mathbf{C}^n$ is not restrictive. - -The statement does not generalize to higher-degree forms, although there are a number of partial results such as Darboux's theorem and the Cartan-Kähler theorem. - -Despite being named for Ferdinand Georg Frobenius, the theorem was first proven by Alfred Clebsch and Feodor Deahna. Deahna was the first to establish the sufficient conditions for the theorem, and Clebsch developed the necessary conditions. Frobenius is responsible for applying the theorem to Pfaffian systems, thus paving the way for its usage in differential topology. - -* In classical mechanics, the integrability of a system's constraint equations determines whether the system is holonomic or nonholonomic. diff --git a/wiki/wikipedia/36.txt b/wiki/wikipedia/36.txt deleted file mode 100644 index 0908c113633eb32f38de98f28763ec704f03d193..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/36.txt +++ /dev/null @@ -1,26 +0,0 @@ -In mathematics and probability, the Borell–TIS inequality is a result bounding the probability of a deviation of the uniform norm of a centered Gaussian stochastic process above its expected value. The result is named for Christer Borell and its independent discoverers Boris Tsirelson, Ildar Ibragimov, and Vladimir Sudakov. The inequality has been described as "the single most important tool in the study of Gaussian processes." - -Let $T$ be a topological space, and let $\{ f_t \}_{t \in T}$ be a centered (i.e. mean zero) Gaussian process on $T$, with -$$ -\| f \|_T := \sup_{t \in T} | f_t | -$$ - -almost surely finite, and let -$$ -\sigma_T^2 := \sup_{t \in T} \operatorname{E}| f_t |^2. -$$ - -Then $\operatorname{E}(\| f \|_T)$ and $\sigma_T$ are both finite, and, for each $u > 0$, -$$ -\operatorname{P} \big( \| f \|_T > \operatorname{E}(\| f \|_T) + u \big) \leq \exp\left( \frac{- u^2}{2\sigma_T^2} \right). -$$ - -Another related statement which is also known as the Borell-TIS inequality is that, under the same conditions as above, -$$ -\operatorname{P}\big(\sup_{t\in T}f_t>\operatorname{E}(\sup_{t\in T}f_t)+u\big) \le \exp\bigg(\frac{-u^2}{2\sigma_T^2}\bigg) -$$, - -and so by symmetry -$$ -\operatorname{P}\big(|\sup_{t\in T}f_t-\operatorname{E}(\sup_{t\in T}f_t)|>u\big) \le 2\exp\bigg(\frac{-u^2}{2\sigma_T^2}\bigg) -$$. diff --git a/wiki/wikipedia/360.txt b/wiki/wikipedia/360.txt deleted file mode 100644 index 37ecfa90fa03c5a19eadff54fb955da94ecf8004..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/360.txt +++ /dev/null @@ -1,59 +0,0 @@ -In mathematics, the Babenko–Beckner inequality (after K. Ivan Babenko and William E. Beckner) is a sharpened form of the Hausdorff–Young inequality having applications to uncertainty principles in the Fourier analysis of $L^p$ spaces.
The (q, p)-norm of the n-dimensional Fourier transform is defined to be -$$ -\|\mathcal F\|_{q,p} = \sup_{f\in L^p(\mathbb R^n)} \frac{\|\mathcal Ff\|_q}{\|f\|_p},\text{ where }1 < p \le 2,\text{ and }\frac 1 p + \frac 1 q = 1. -$$ - -In 1961, Babenko found this norm for even integer values of q. Finally, in 1975, using Hermite functions as eigenfunctions of the Fourier transform, Beckner proved that the value of this norm for all $q \ge 2$ is -$$ -\|\mathcal F\|_{q,p} = \left(p^{1/p}/q^{1/q}\right)^{n/2}. -$$ - -Thus we have the Babenko–Beckner inequality that -$$ -\|\mathcal Ff\|_q \le \left(p^{1/p}/q^{1/q}\right)^{n/2} \|f\|_p. -$$ - -To write this out explicitly, (in the case of one dimension,) if the Fourier transform is normalized so that -$$ -g(y) \approx \int_{\mathbb R} e^{-2\pi ixy} f(x)dx\text{ and }f(x) \approx \int_{\mathbb R} e^{2\pi ixy} g(y)dy, -$$ - -then we have -$$ -\left(\int_{\mathbb R} |g(y)|^q dy\right)^{1/q} \le \left(p^{1/p}/q^{1/q}\right)^{1/2} \left(\int_{\mathbb R} |f(x)|^p dx\right)^{1/p} -$$ - -or more simply - -\left(\sqrt q \int_{\mathbb R} |g(y)|^q dy\right)^{1/q} - -\le \left(\sqrt p \int_{\mathbb R} |f(x)|^p dx\right)^{1/p}. - -Throughout this sketch of a proof, let -$$ -1 < p \le 2, \quad \frac 1 p + \frac 1 q = 1, \quad \text{and} \quad \omega = \sqrt{1-p} = i\sqrt{p-1}. -$$ - -(Except for q, we will more or less follow the notation of Beckner.) - -Let $d\nu(x)$ be the discrete measure with weight $1/2$ at the points $x = \pm 1.$ Then the operator -$$ -C:a+bx \rightarrow a + \omega bx -$$ - -maps $L^p(d\nu)$ to $L^q(d\nu)$ with norm 1; that is, -$$ -\left[\int|a+\omega bx|^q d\nu(x)\right]^{1/q} \le \left[\int|a+bx|^p d\nu(x)\right]^{1/p}, -$$ - -or more explicitly, - -\left[\frac {|a+\omega b|^q + |a-\omega b|^q} 2 \right]^{1/q} - -\le \left[\frac {|a+b|^p + |a-b|^p} 2 \right]^{1/p} - -for any complex a, b. (See Beckner's paper for the proof of his "two-point lemma".) - -The measure $d\nu$ that was introduced above is actually a fair Bernoulli trial with mean 0 and variance 1. Consider the sum of a sequence of n such Bernoulli trials, independent and normalized so that the standard deviation remains 1. We obtain the measure $d\nu_n(x)$ which is the n-fold convolution of $d\nu(\sqrt n x)$ with itself. The next step is to extend the operator C defined on the two-point space above to an operator defined on the (n + 1)-point space of $d\nu_n(x)$ with respect to the elementary symmetric polynomials. - -The sequence $d\nu_n(x)$ converges weakly to the standard normal probability distribution $d\mu(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2} dx$ with respect to functions of polynomial growth. In the limit, the extension of the operator C above in terms of the elementary symmetric polynomials with respect to the measure $d\nu_n(x)$ is expressed as an operator T in terms of the Hermite polynomials with respect to the standard normal distribution. These Hermite functions are the eigenfunctions of the Fourier transform, and the (q, p)-norm of the Fourier transform is obtained as a result after some renormalization. diff --git a/wiki/wikipedia/3600.txt b/wiki/wikipedia/3600.txt deleted file mode 100644 index 450688649b1cb890591f500411b6eb5b4fc875b5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3600.txt +++ /dev/null @@ -1,52 +0,0 @@ -In mathematics, the Schoen–Yau conjecture is a disproved conjecture in hyperbolic geometry, named after the mathematicians Richard Schoen and Shing-Tung Yau. - -It was inspired by a theorem of Erhard Heinz (1952). 
One method of disproof is the use of Scherk surfaces, as used by Harold Rosenberg and Pascal Collin (2006). - -Let $\mathbb{C}$ be the complex plane considered as a Riemannian manifold with its usual (flat) Riemannian metric. Let $\mathbb{H}$ denote the hyperbolic plane, i.e. the unit disc -$$ -\mathbb{H} := \{ (x, y) \in \mathbb{R}^2 | x^2 + y^2 < 1 \} -$$ - -endowed with the hyperbolic metric -$$ -\mathrm{d}s^2 = 4 \frac{\mathrm{d} x^2 + \mathrm{d} y^2}{(1 - (x^2 + y^2))^2}. -$$ - -E. Heinz proved in 1952 that there can exist no harmonic diffeomorphism -$$ -f : \mathbb{H} \to \mathbb{C}. -$$ - -In light of this theorem, Schoen conjectured that there exists no harmonic diffeomorphism -$$ -g : \mathbb{C} \to \mathbb{H}. -$$ - -(It is not clear how Yau's name became associated with the conjecture: in unpublished correspondence with Harold Rosenberg, both Schoen and Yau identify Schoen as having postulated the conjecture). The Schoen(-Yau) conjecture has since been disproved. - -The emphasis is on the existence or non-existence of an harmonic diffeomorphism, and that this property is a "one-way" property. In more detail: suppose that we consider two Riemannian manifolds M and N (with their respective metrics), and write -$$ -M \sim N -$$ - -if there exists a diffeomorphism from M onto N (in the usual terminology, M and N are diffeomorphic). Write -$$ -M \propto N -$$ - -if there exists an harmonic diffeomorphism from M onto N. It is not difficult to show that $\sim$ (being diffeomorphic) is an equivalence relation on the objects of the category of Riemannian manifolds. In particular, $\sim$ is a symmetric relation: -$$ -M \sim N \iff N \sim M. -$$ - -It can be shown that the hyperbolic plane and (flat) complex plane are indeed diffeomorphic: -$$ -\mathbb{H} \sim \mathbb{C}, -$$ - -so the question is whether or not they are "harmonically diffeomorphic". However, as the truth of Heinz's theorem and the falsity of the Schoen–Yau conjecture demonstrate, $\propto$ is not a symmetric relation: -$$ -\mathbb{C} \propto \mathbb{H} \text{ but } \mathbb{H} \not \propto \mathbb{C}. -$$ - -Thus, being "harmonically diffeomorphic" is a much stronger property than simply being diffeomorphic, and can be a "one-way" relation. diff --git a/wiki/wikipedia/3601.txt b/wiki/wikipedia/3601.txt deleted file mode 100644 index 4c6890c63b5d5ae8565d195889186eab902a3434..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3601.txt +++ /dev/null @@ -1,5 +0,0 @@ -The Nielsen–Ninomiya theorem is a no-go theorem in physics, in particular in lattice gauge theory, concerning the possibility of defining a theory of chiral fermions on a lattice in even dimensions. The theorem can be stated as follows: let $S[\psi]$ be the (Euclidean) action describing fermions $\psi$ on a regular lattice of even dimensions with periodic boundary conditions, and suppose that S is local, hermitian and translation invariant; then the theory describes as many left-handed as right-handed states. Equivalently, the theorem implies that there are as many states of chirality +1 as of chirality -1. The proof of the theorem relies on the Poincaré–Hopf theorem or on similar results in algebraic topology. 
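The lattice phenomenon behind the theorem can be glimpsed in one dimension: naively discretizing the Dirac derivative replaces the continuum dispersion, proportional to k, by one proportional to sin(ka), which vanishes twice per Brillouin zone, so a second, unwanted fermion species ("doubler") appears. A toy numerical illustration follows; the lattice spacing and grid size are arbitrary choices.

```python
# Toy illustration of fermion doubling in one dimension: the naive
# lattice dispersion sin(k a)/a vanishes both at k = 0 and at the
# Brillouin-zone boundary k = pi/a, giving one doubler per dimension.
import numpy as np

a = 1.0                                   # lattice spacing (arbitrary)
N = 2000                                  # samples across one Brillouin zone
k = -np.pi / a + (np.arange(N) + 0.5) * (2 * np.pi / (a * N))
disp = np.sin(k * a) / a                  # naive lattice dispersion

# count zeros via sign changes around the periodic Brillouin zone
crossings = int(np.sum(disp * np.roll(disp, -1) < 0))
print(crossings)  # 2: one zero at k = 0 and one at k = +/- pi/a
# the continuum dispersion E(k) = k has only the single zero at k = 0
```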
- -Since the Standard Model is chiral (left- and right-handed fermions are treated differently by weak interactions, for example), the Nielsen–Ninomiya theorem implies that for simulating some Standard Model phenomena at least one of the assumptions of the theorem needs to be violated; otherwise fermion doubling would be introduced into the simulated theory. - -The theorem is named after Holger Bech Nielsen and Masao Ninomiya. diff --git a/wiki/wikipedia/3602.txt b/wiki/wikipedia/3602.txt deleted file mode 100644 index 3edfb2621dbe359c59fcc423b17924720fd7da88..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3602.txt +++ /dev/null @@ -1,16 +0,0 @@ -In mathematics, especially in the study of dynamical systems and differential equations, the stable manifold theorem is an important result about the structure of the set of orbits approaching a given hyperbolic fixed point. It roughly states that the existence of a local diffeomorphism near a fixed point implies the existence of a local stable center manifold containing that fixed point. This manifold has dimension equal to the number of eigenvalues of the Jacobian matrix of the fixed point that are less than 1. - -Let -$$ -f: U \subset \mathbb{R}^n \to \mathbb{R}^n -$$ - -be a smooth map with hyperbolic fixed point at $p$. We denote by $W^{s}(p)$ the stable set and by $W^{u}(p)$ the unstable set of $p$. - -The theorem states that - -* $W^{s}(p)$ is a smooth manifold and its tangent space has the same dimension as the stable space of the linearization of $f$ at $p$. - -* $W^{u}(p)$ is a smooth manifold and its tangent space has the same dimension as the unstable space of the linearization of $f$ at $p$. - -Accordingly $W^{s}(p)$ is a stable manifold and $W^{u}(p)$ is an unstable manifold. diff --git a/wiki/wikipedia/3603.txt b/wiki/wikipedia/3603.txt deleted file mode 100644 index c23958733ddf57ff8c698ec575727516cf9989df..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3603.txt +++ /dev/null @@ -1,58 +0,0 @@ -In mathematics, the Cayley–Bacharach theorem is a statement about cubic curves (plane curves of degree three) in the projective plane P2. The original form states: - -Assume that two cubics C1 and C2 in the projective plane meet in nine (different) points, as they do in general over an algebraically closed field. Then every cubic that passes through any eight of the points also passes through the ninth point. - -A more intrinsic form of the Cayley–Bacharach theorem reads as follows: - -Every cubic curve C over an algebraically closed field that passes through a given set of eight points P1, ..., P8 also passes through (counting multiplicities) a ninth point P9 which depends only on P1, ..., P8. - -A related result on conics was first proved by the French geometer Michel Chasles and later generalized to cubics by Arthur Cayley and Isaak Bacharach. - -If seven of the points P1, ..., P8 lie on a conic, then the ninth point can be chosen on that conic, since C will always contain the whole conic on account of Bézout's theorem. In other cases, we have the following. - -If no seven points out of P1, ..., P8 are co-conic, then the vector space of cubic homogeneous polynomials that vanish on (the affine cones of) P1, ..., P8 (with multiplicity for double points) has dimension two. - -In that case, every cubic through P1, ..., P8 also passes through the intersection of any two different cubics through P1, ..., P8, which has at least nine points (over the algebraic closure) on account of Bézout's theorem.
These points cannot be covered by P1, ..., P8 only, which gives us P9. - -Since degenerate conics are a union of at most two lines, there are always four out of seven points on a degenerate conic that are collinear. Consequently: - -If no seven points out of P1, ..., P8 lie on a non-degenerate conic, and no four points out of P1, ..., P8 lie on a line, then the vector space of cubic homogeneous polynomials that vanish on (the affine cones of) P1, ..., P8 has dimension two. - -On the other hand, assume P1, P2, P3, P4 are collinear and no seven points out of P1, ..., P8 are co-conic. Then no five points of P1, ..., P8 and no three points of P5, P6, P7, P8 are collinear. Since C will always contain the whole line through P1, P2, P3, P4 on account of Bézout's theorem, the vector space of cubic homogeneous polynomials that vanish on (the affine cones of) P1, ..., P8 is isomorphic to the vector space of quadratic homogeneous polynomials that vanish on (the affine cones of) P5, P6, P7, P8, which has dimension two. - -Although the sets of conditions for both dimension two results are different, they are both strictly weaker than full general position: three points are allowed to be collinear, and six points are allowed to lie on a conic (in general two points determine a line and five points determine a conic). For the Cayley-Bacharach theorem, it is necessary to have a family of cubics passing through the nine points, rather than a single one. - -According to Bézout's theorem, two different cubic curves over an algebraically closed field which have no common irreducible component meet in exactly nine points (counted with multiplicity). The Cayley-Bacharach theorem thus asserts that the last point of intersection of any two members in the family of curves does not move if eight intersection points (without seven co-conic ones) are already prescribed. - -A special case is Pascal's theorem, in which case the two cubics in question are all degenerate: given six points on a conic (a hexagon), consider the lines obtained by extending opposite sides – this yields two cubics of three lines each, which intersect in 9 points – the 6 points on the conic, and 3 others. These 3 additional points lie on a line, as the conic plus the line through any two of the points is a cubic passing through 8 of the points. - -A second application is Pappus's hexagon theorem, similar to the above, but the six points are on two lines instead of on a conic. - -Finally, a third case is found for proving the associativity of elliptic curve point addition. Let a first cubic contain the three lines BC, O(A+B) and A(B+C); and a second cubic contain the three lines AB, O(B+C) and C(A+B). The following eight points are common to both cubics: A, B, C, A+B, -A-B, B+C, -B-C, O. Hence their ninth points must be the same point, $-A-(B+C) = -(A+B)-C$, giving associativity. - -One can understand the Cayley–Bacharach theorem, and why it arises for degree 3, by dimension counting. Simply stated, nine points determine a cubic, and in general they determine a unique cubic. Thus if the nine points lie on more than one cubic, equivalently on the intersection of two cubics (as 3 × 3 = 9), they are not in general position – they are overdetermined by one dimension – and thus cubics passing through them satisfy one additional constraint, as reflected in the "eight implies nine" property. The general phenomenon is called superabundance; see Riemann–Roch theorem for surfaces.
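The dimension count can be probed numerically: there are ten cubic monomials, and eight generic point conditions leave a two-dimensional space of cubics. A small sketch, with random affine points standing in for points in general position:

```python
# Numerical probe of the dimension count: cubics through 8 generic
# points form a 2-dimensional space (10 monomials - 8 conditions).
# Random points in the affine chart z = 1 stand in for general position.
import numpy as np

rng = np.random.default_rng(0)
pts = rng.standard_normal((8, 2))            # eight generic points (x, y)

def cubic_monomials(x, y, z=1.0):
    # the ten monomials of degree 3 in (x, y, z)
    return [x**3, x**2*y, x**2*z, x*y**2, x*y*z, x*z**2,
            y**3, y**2*z, y*z**2, z**3]

M = np.array([cubic_monomials(x, y) for x, y in pts])   # 8 x 10 matrix
print(10 - np.linalg.matrix_rank(M))         # nullity = 2: a pencil of cubics
```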
Formally, first recall that given two curves of degree d, they define a pencil (one-parameter linear system) of degree d curves by taking projective linear combinations of the defining equations; this corresponds to two points determining a projective line in the parameter space of curves, which is simply projective space. - -The Cayley–Bacharach theorem arises for high degree because the number of intersection points of two curves of degree d, namely $d^2$ (by Bézout's theorem), grows faster than the number of points needed to define a curve of degree d, which is given by -$$ -\frac{(d+1)(d+2)}{2} - 1 = \frac{d^2 + 3d}{2}. -$$ - -These first agree for d = 3, which is why the Cayley–Bacharach theorem occurs for cubics, and for higher degree $d^2$ is greater, hence the higher-degree generalizations. - -In detail, the number of points required to determine a curve of degree d is the number of monomials of degree d, minus 1 from projectivization. For the first few d these yield: - -* d = 1: 2 and 1: two points determine a line, two lines intersect in a point, - -* d = 2: 5 and 4: five points determine a conic, two conics intersect in four points, - -* d = 3: 9 and 9: nine points determine a cubic, two cubics intersect in nine points, - -* d = 4: 14 and 16. - -Thus these first agree for 3, and the number of intersections is larger when d > 3. - -The meaning of this is that the 9 points of intersection of two cubics are in special position with respect to cubics, a fortiori for higher degree, but unlike for lower degree: two lines intersect in a point, which is trivially in general linear position, and two quadratics intersect in four points, which (assuming the quadratics are irreducible so no three points are collinear) are in general quadratic position because five points determine a quadratic, and any four points (in general linear position) have a pencil of quadratics through them, since the system is underdetermined. For cubics, nine points determine a cubic, but in general they determine a unique cubic – thus having two different cubics pass through them (and thus a pencil) is special – the solution space is one dimension higher than expected, and thus the solutions satisfy an additional constraint, namely the "8 implies 9" property. - -More concretely, because the vector space of homogeneous polynomials P(x, y, z) of degree three in three variables x, y, z has dimension 10, the system of cubic curves passing through eight (different) points is parametrized by a vector space of dimension ≥ 2 (the vanishing of the polynomial at one point imposes a single linear condition). It can be shown that the dimension is exactly two if no four of the points are collinear and no seven points lie on a conic. The Cayley-Bacharach theorem can be deduced from this fact. diff --git a/wiki/wikipedia/3604.txt b/wiki/wikipedia/3604.txt deleted file mode 100644 index 244ee4d0010b58910617db434872317f2d832536..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3604.txt +++ /dev/null @@ -1,61 +0,0 @@ -In mathematical logic, more specifically in the proof theory of first-order theories, extensions by definitions formalize the introduction of new symbols by means of a definition. For example, it is common in naive set theory to introduce a symbol $\emptyset$ for the set which has no member.
In the formal setting of first-order theories, this can be done by adding to the theory a new constant $\emptyset$ and the new axiom $\forall x(x\notin\emptyset)$, meaning "for all x, x is not a member of $\emptyset$". It can then be proved that doing so adds essentially nothing to the old theory, as should be expected from a definition. More precisely, the new theory is a conservative extension of the old one. - -Let $T$ be a first-order theory and $\phi(x_1,\dots,x_n)$ a formula of $T$ such that $x_1$, ..., $x_n$ are distinct and include the variables free in $\phi(x_1,\dots,x_n)$. Form a new first-order theory $T'$ from $T$ by adding a new $n$-ary relation symbol $R$, the logical axioms featuring the symbol $R$ and the new axiom -$$ -\forall x_1\dots\forall x_n(R(x_1,\dots,x_n)\leftrightarrow\phi(x_1,\dots,x_n)) -$$, - -called the defining axiom of $R$. - -If $\psi$ is a formula of $T'$, let $\psi^\ast$ be the formula of $T$ obtained from $\psi$ by replacing any occurrence of $R(t_1,\dots,t_n)$ by $\phi(t_1,\dots,t_n)$ (changing the bound variables in $\phi$ if necessary so that the variables occurring in the $t_i$ are not bound in $\phi(t_1,\dots,t_n)$). Then the following hold: - -# $\psi\leftrightarrow\psi^\ast$ is provable in $T'$, and - -# $T'$ is a conservative extension of $T$. - -The fact that $T'$ is a conservative extension of $T$ shows that the defining axiom of $R$ cannot be used to prove new theorems. The formula $\psi^\ast$ is called a translation of $\psi$ into $T$. Semantically, the formula $\psi^\ast$ has the same meaning as $\psi$, but the defined symbol $R$ has been eliminated. - -Let $T$ be a first-order theory (with equality) and $\phi(y,x_1,\dots,x_n)$ a formula of $T$ such that $y$, $x_1$, ..., $x_n$ are distinct and include the variables free in $\phi(y,x_1,\dots,x_n)$. Assume that we can prove -$$ -\forall x_1\dots\forall x_n\exists !y\phi(y,x_1,\dots,x_n) -$$ - -in $T$, i.e. for all $x_1$, ..., $x_n$, there exists a unique y such that $\phi(y,x_1,\dots,x_n)$. Form a new first-order theory $T'$ from $T$ by adding a new $n$-ary function symbol $f$, the logical axioms featuring the symbol $f$ and the new axiom -$$ -\forall x_1\dots\forall x_n\phi(f(x_1,\dots,x_n),x_1,\dots,x_n) -$$, - -called the defining axiom of $f$. - -Let $\psi$ be any atomic formula of $T'$. We define formula $\psi^\ast$ of $T$ recursively as follows. If the new symbol $f$ does not occur in $\psi$, let $\psi^\ast$ be $\psi$. Otherwise, choose an occurrence of $f(t_1,\dots,t_n)$ in $\psi$ such that $f$ does not occur in the terms $t_i$, and let $\chi$ be obtained from $\psi$ by replacing that occurrence by a new variable $z$. Then since $f$ occurs in $\chi$ one less time than in $\psi$, the formula $\chi^\ast$ has already been defined, and we let $\psi^\ast$ be -$$ -\forall z(\phi(z,t_1,\dots,t_n)\rightarrow\chi^\ast) -$$ - -(changing the bound variables in $\phi$ if necessary so that the variables occurring in the $t_i$ are not bound in $\phi(z,t_1,\dots,t_n)$). For a general formula $\psi$, the formula $\psi^\ast$ is formed by replacing every occurrence of an atomic subformula $\chi$ by $\chi^\ast$. Then the following hold: - -# $\psi\leftrightarrow\psi^\ast$ is provable in $T'$, and - -# $T'$ is a conservative extension of $T$. - -The formula $\psi^\ast$ is called a translation of $\psi$ into $T$. As in the case of relation symbols, the formula $\psi^\ast$ has the same meaning as $\psi$, but the new symbol $f$ has been eliminated. 
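The simpler translation for relation symbols, described earlier, can be sketched programmatically. The following is a toy illustration only: the tuple-based formula encoding, the symbol `subset`, and its defining formula are all made-up, and the bound-variable renaming mentioned above is omitted.

```python
# Toy sketch of eliminating a defined relation symbol: every atom
# R(t1, ..., tn) is replaced by the defining formula phi(t1, ..., tn).
# Formulas are nested tuples; bound-variable renaming is omitted.

def substitute(phi, env):
    """Replace parameter variables of phi by the argument terms."""
    if isinstance(phi, str):
        return env.get(phi, phi)
    op, *rest = phi
    return (op,) + tuple(substitute(r, env) for r in rest)

def translate(psi, symbol, params, phi):
    """Return psi*: psi with `symbol` unfolded via its defining axiom."""
    if isinstance(psi, str):
        return psi
    op, *rest = psi
    if op == symbol:                       # an atom R(t1, ..., tn)
        return substitute(phi, dict(zip(params, rest)))
    return (op,) + tuple(translate(r, symbol, params, phi) for r in rest)

# Defining axiom: subset(a, b) <-> forall u (u in a -> u in b)
phi = ("forall", "u", ("implies", ("in", "u", "a"), ("in", "u", "b")))
psi = ("exists", "s", ("subset", "s", "t"))
print(translate(psi, "subset", ("a", "b"), phi))
# ('exists', 's', ('forall', 'u', ('implies', ('in', 'u', 's'), ('in', 'u', 't'))))
```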
 - -The construction of this paragraph also works for constants, which can be viewed as 0-ary function symbols. - -A first-order theory $T'$ obtained from $T$ by successive introductions of relation symbols and function symbols as above is called an extension by definitions of $T$. Then $T'$ is a conservative extension of $T$, and for any formula $\psi$ of $T'$ we can form a formula $\psi^\ast$ of $T$, called a translation of $\psi$ into $T$, such that $\psi\leftrightarrow\psi^\ast$ is provable in $T'$. Such a formula is not unique, but any two of them can be proved to be equivalent in $T$. - -In practice, an extension by definitions $T'$ of $T$ is not distinguished from the original theory $T$. In fact, the formulas of $T'$ can be thought of as abbreviating their translations into $T$. The manipulation of these abbreviations as actual formulas is then justified by the fact that extensions by definitions are conservative. - -* Traditionally, the first-order set theory ZF has $=$ (equality) and $\in$ (membership) as its only primitive relation symbols, and no function symbols. In everyday mathematics, however, many other symbols are used such as the binary relation symbol $\subseteq$, the constant $\emptyset$, the unary function symbol P (the power set operation), etc. All of these symbols belong in fact to extensions by definitions of ZF. - -* Let $T$ be a first-order theory for groups in which the only primitive symbol is the binary product ×. In $T$, we can prove that there exists a unique element y such that x×y = y×x = x for every x. Therefore we can add to $T$ a new constant e and the axiom -$$ -\forall x(x \times e = x\text{ and }e \times x = x) -$$, - -and what we obtain is an extension by definitions $T'$ of $T$. Then in $T'$ we can prove that for every x, there exists a unique y such that x×y=y×x=e. Consequently, the first-order theory $T''$ obtained from $T'$ by adding a unary function symbol $f$ and the axiom -$$ -\forall x(x \times f(x)=e\text{ and }f(x) \times x=e) -$$ - -is an extension by definitions of $T$. Usually, $f(x)$ is denoted $x^{-1}$. diff --git a/wiki/wikipedia/3605.txt b/wiki/wikipedia/3605.txt deleted file mode 100644 index 135f44f77d569a042d5f3259c4d9876c9657379e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3605.txt +++ /dev/null @@ -1,3 +0,0 @@ -Hilbert's lemma was proposed at the end of the 19th century by mathematician David Hilbert. The lemma describes a property of the principal curvatures of surfaces. It may be used to prove Liebmann's theorem that a compact surface with constant Gaussian curvature must be a sphere. - -Given a smooth, differentiable surface in three dimensions, considered on a patch containing the point p, let k and m denote the principal curvatures and let K(x) be the Gaussian curvature at a point x. If k attains a maximum at p, m attains a minimum at p, and k is strictly greater than m at p, then K(p) is a non-positive real number. diff --git a/wiki/wikipedia/3606.txt b/wiki/wikipedia/3606.txt deleted file mode 100644 index bcb9dd0880d869b9fa9a2a0282ec9f44f3e988e2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3606.txt +++ /dev/null @@ -1,19 +0,0 @@ -In graph theory, Schnyder's theorem is a characterization of planar graphs in terms of the order dimension of their incidence posets. It is named after Walter Schnyder, who published its proof in 1989. - -The incidence poset P(G) of an undirected graph G with vertex set V and edge set E is the partially ordered set of height 2 that has V ∪ E as its elements.
In this partial order, there is an order relation x < y when x is a vertex, y is an edge, and x is one of the two endpoints of y. - -The order dimension of a partial order is the smallest number of total orderings whose intersection is the given partial order; such a set of orderings is called a realizer of the partial order. - -Schnyder's theorem states that a graph G is planar if and only if the order dimension of P(G) is at most three. - -This theorem has been generalized to a tight bound on the dimension of the height-three partially ordered sets formed analogously from the vertices, edges and faces of a convex polyhedron, or more generally of an embedded planar graph: in both cases, the order dimension of the poset is at most four. However, this result cannot be generalized to higher-dimensional convex polytopes, as there exist four-dimensional polytopes whose face lattices have unbounded order dimension. - -Even more generally, for abstract simplicial complexes, the order dimension of the face poset of the complex is at most 1 + d, where d is the minimum dimension of a Euclidean space in which the complex has a geometric realization. - -As Schnyder observes, the incidence poset of a graph G has order dimension two if and only if the graph is a path or a subgraph of a path. For, when an incidence poset has order dimension two, its only possible realizer consists of two total orders that (when restricted to the graph's vertices) are the reverse of each other. Any other two orders would have an intersection that includes an order relation between two vertices, which is not allowed for incidence posets. For these two orders on the vertices, an edge between consecutive vertices can be included in the ordering by placing it immediately following the later of the two edge endpoints, but no other edges can be included. - -If a graph can be colored with four colors, then its incidence poset has order dimension at most four. - -The incidence poset of a complete graph on n vertices has order dimension $\Theta(\log\log n)$. diff --git a/wiki/wikipedia/3607.txt b/wiki/wikipedia/3607.txt deleted file mode 100644 index 15d5a47fcf57479cd1f28292ff6e53e0c637f0ef..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3607.txt +++ /dev/null @@ -1,10 +0,0 @@ -In algebraic geometry, Lange's conjecture is a theorem about stability of vector bundles over curves, introduced by Lange and proved by Montserrat Teixidor i Bigas and Barbara Russo in 1999. - -Let C be a smooth projective curve of genus greater than or equal to 2. For generic vector bundles $E_1$ and $E_2$ on C of ranks and degrees $(r_1, d_1)$ and $(r_2, d_2)$, respectively, a generic extension -$$ -0 \to E_1 \to E \to E_2 \to 0 -$$ - -has E stable provided that $\mu(E_1) < \mu(E_2)$, where $\mu(E_i) = d_i/r_i$ is the slope of the respective bundle. The notion of a generic vector bundle here is a generic point in the moduli space of semistable vector bundles on C, and a generic extension is one that corresponds to a generic point in the vector space $\operatorname{Ext}^1(E_2,E_1)$. - -An original formulation by Lange is that for a pair of integers $(r_1, d_1)$ and $(r_2, d_2)$ such that $d_1/ r_1 < d_2/r_2$, there exists a short exact sequence as above with E stable. This formulation is equivalent because the existence of a short exact sequence like that is an open condition on E in the moduli space of semistable vector bundles on C.
diff --git a/wiki/wikipedia/3608.txt b/wiki/wikipedia/3608.txt deleted file mode 100644 index e7df32e3c56c9df9a92f8eb2791dd9226eaadcb5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3608.txt +++ /dev/null @@ -1,17 +0,0 @@ -In mathematics, the n! conjecture is the conjecture that the dimension of a certain bi-graded module of diagonal harmonics is n!. It was made by A. M. Garsia and M. Haiman and later proved by M. Haiman. It implies Macdonald's positivity conjecture about the Macdonald polynomials. - -The Macdonald polynomials $P_\lambda$ are a two-parameter family of orthogonal polynomials indexed by a positive weight λ of a root system, introduced by Ian G. Macdonald (1987). They generalize several other families of orthogonal polynomials, such as Jack polynomials and Hall–Littlewood polynomials. They are known to have deep relationships with affine Hecke algebras and Hilbert schemes, which were used to prove several conjectures made by Macdonald about them. - -Macdonald introduced a new basis for the space of symmetric functions, which specializes to many of the well-known bases for the symmetric functions, by suitable substitutions for the parameters q and t. - -In fact, we can obtain in this manner the Schur functions, the Hall–Littlewood symmetric functions, the Jack symmetric functions, the zonal symmetric functions, the zonal spherical functions, and the elementary and monomial symmetric functions. - -The so-called q,t-Kostka polynomials are the coefficients of a resulting transition matrix. Macdonald conjectured that they are polynomials in q and t, with non-negative integer coefficients. - -It was Adriano Garsia's idea to construct an appropriate module in order to prove positivity (as was done in his previous joint work with Procesi on Schur positivity of Kostka–Foulkes polynomials). - -In an attempt to prove Macdonald's conjecture, Garsia introduced the bi-graded module $H_\mu$ of diagonal harmonics and conjectured that the (modified) Macdonald polynomials are the Frobenius image of the character generating function of $H_\mu$, under the diagonal action of the symmetric group. - -The proof of Macdonald's conjecture was then reduced to the n! conjecture; i.e., to prove that the dimension of $H_\mu$ is n!. In 2001, Haiman proved that the dimension is indeed n! (see [4]). - -This breakthrough led to the discovery of many hidden connections and new aspects of symmetric group representation theory, as well as combinatorial objects (e.g., insertion tableaux, Haglund's inversion numbers, and the role of parking functions in representation theory). diff --git a/wiki/wikipedia/3609.txt b/wiki/wikipedia/3609.txt deleted file mode 100644 index 7dcbc2e65e1eec66b05662c6f68f89603f6c1d2f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3609.txt +++ /dev/null @@ -1,17 +0,0 @@ -In extremal graph theory, the even circuit theorem is a result of Paul Erdős according to which an n-vertex graph that does not have a simple cycle of length 2k can only have $O(n^{1+1/k})$ edges. For instance, 4-cycle-free graphs have $O(n^{3/2})$ edges, 6-cycle-free graphs have $O(n^{4/3})$ edges, etc. - -The result was stated without proof by Erdős in 1964. Bondy published the first proof, and strengthened the theorem to show that, for n-vertex graphs with $\Omega(n^{1+1/k})$ edges, all even cycle lengths between $2k$ and $2kn^{1/k}$ occur.
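As a concrete illustration of the smallest case $k = 2$ (ours, not part of the original article), the following brute-force Python sketch uses the fact that a graph contains no 4-cycle exactly when no two vertices share two common neighbors, and computes the maximum number of edges of a $C_4$-free graph on up to six vertices; the values stay well below the $\binom{n}{2}$ maximum, in line with the $O(n^{3/2})$ bound.

```python
# Brute-force check of the k = 2 case of the even circuit theorem:
# a graph is C4-free iff no two vertices have >= 2 common neighbors.
from itertools import combinations

def is_c4_free(n, edges):
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    return all(len(adj[u] & adj[v]) <= 1 for u, v in combinations(range(n), 2))

def max_c4_free_edges(n):
    pairs = list(combinations(range(n), 2))
    best = 0
    for mask in range(1 << len(pairs)):      # all graphs on n vertices
        edges = [pairs[i] for i in range(len(pairs)) if mask >> i & 1]
        if len(edges) > best and is_c4_free(n, edges):
            best = len(edges)
    return best

for n in range(2, 7):
    print(n, max_c4_free_edges(n))   # 1, 3, 4, 6, 7 for n = 2..6
```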
 - -Unsolved problem in mathematics: Do there exist $2k$-cycle-free graphs (for $k$ other than $2$, $3$, or $5$) that have $\Omega(n^{1+1/k})$ edges? - -The bound of Erdős's theorem is tight up to constant factors for some small values of k: for k = 2, 3, or 5, there exist graphs with $\Omega(n^{1+1/k})$ edges that have no 2k-cycle. - -The maximum number of edges in a 10-cycle-free graph is at least -$$ -4\left(\frac{n}{5}\right)^{6/5} \approx 0.5798 n^{6/5}. -$$ - -The best proven upper bound on the number of edges, for 2k-cycle-free graphs for arbitrary values of k, is -$$ -n^{1+1/k}\left(k-1+o(1)\right). -$$ diff --git a/wiki/wikipedia/361.txt b/wiki/wikipedia/361.txt deleted file mode 100644 index 9689461be8314aa7d3ff0ada2c9ed697d9005f3b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/361.txt +++ /dev/null @@ -1,116 +0,0 @@ -In mathematics, specifically in algebraic geometry, the Grothendieck–Riemann–Roch theorem is a far-reaching result on coherent cohomology. It is a generalisation of the Hirzebruch–Riemann–Roch theorem, about complex manifolds, which is itself a generalisation of the classical Riemann–Roch theorem for line bundles on compact Riemann surfaces. - -Riemann–Roch type theorems relate Euler characteristics of the cohomology of a vector bundle with their topological degrees, or more generally their characteristic classes in (co)homology or algebraic analogues thereof. The classical Riemann–Roch theorem does this for curves and line bundles, whereas the Hirzebruch–Riemann–Roch theorem generalises this to vector bundles over manifolds. The Grothendieck–Riemann–Roch theorem sets both theorems in a relative situation of a morphism between two manifolds (or more general schemes) and changes the theorem from a statement about a single bundle, to one applying to chain complexes of sheaves. - -The theorem has been very influential, not least for the development of the Atiyah–Singer index theorem. Conversely, complex analytic analogues of the Grothendieck–Riemann–Roch theorem can be proved using the index theorem for families. Alexander Grothendieck gave a first proof in a 1957 manuscript, later published. Armand Borel and Jean-Pierre Serre wrote up and published Grothendieck's proof in 1958. Later, Grothendieck and his collaborators simplified and generalized the proof. - -Let X be a smooth quasi-projective scheme over a field. Under these assumptions, the Grothendieck group $K_0(X)$ of bounded complexes of coherent sheaves is canonically isomorphic to the Grothendieck group of bounded complexes of finite-rank vector bundles. Using this isomorphism, consider the Chern character (a rational combination of Chern classes) as a functorial transformation: -$$ -\mathrm{ch} \colon K_0(X) \to A(X, \Q), -$$ - -where $A_d(X,\Q)$ is the Chow group of cycles on X of dimension d modulo rational equivalence, tensored with the rational numbers. In case X is defined over the complex numbers, the latter group maps to the topological cohomology group: -$$ -H^{2\dim(X) - 2d}(X, \Q). -$$ - -Now consider a proper morphism $f \colon X \to Y$ between smooth quasi-projective schemes and a bounded complex of sheaves ${\mathcal F^\bull}$ on $X.$ - -The Grothendieck–Riemann–Roch theorem relates the pushforward map -$$ -f_{!} = \sum (-1)^i R^i f_* \colon K_0(X) \to K_0(Y) -$$ - -(alternating sum of higher direct images) and the pushforward -$$ -f_* \colon A(X) \to A(Y), -$$ - -by the formula -$$ -\mathrm{ch} (f_{!}{\mathcal F}^\bull) \mathrm{td}(Y) = f_* (\mathrm{ch}({\mathcal F}^\bull) \mathrm{td}(X) ). -$$
 - -Here $\mathrm{td}(X)$ is the Todd genus of (the tangent bundle of) X. Thus the theorem gives a precise measure for the lack of commutativity of taking the pushforwards in the above senses and the Chern character, and shows that the needed correction factors depend on X and Y only. In fact, since the Todd genus is functorial and multiplicative in exact sequences, we can rewrite the Grothendieck–Riemann–Roch formula as -$$ - \mathrm{ch}(f_{!}{\mathcal F}^\bull) = f_* (\mathrm{ch}({\mathcal F}^\bull) \mathrm{td}(T_f) ), -$$ - -where $T_f$ is the relative tangent sheaf of f, defined as the element $TX - f^*(TY)$ in $K_0(X)$. For example, when f is a smooth morphism, $T_f$ is simply a vector bundle, known as the tangent bundle along the fibers of f. - -Using $\mathbb{A}^1$-homotopy theory, the Grothendieck–Riemann–Roch theorem has been extended by Navarro to the situation where f is a proper map between two smooth schemes. - -Generalisations of the theorem can be made to the non-smooth case by considering an appropriate generalisation of the combination $\mathrm{ch}(-)\mathrm{td}(X)$ and to the non-proper case by considering cohomology with compact support. - -The arithmetic Riemann–Roch theorem extends the Grothendieck–Riemann–Roch theorem to arithmetic schemes. - -The Hirzebruch–Riemann–Roch theorem is (essentially) the special case where Y is a point and the field is the field of complex numbers. - -The version of the Riemann–Roch theorem for oriented cohomology theories was proven by Ivan Panin and Alexander Smirnov. It is concerned with multiplicative operations between algebraic oriented cohomology theories (like algebraic cobordism). The Grothendieck–Riemann–Roch theorem is a particular case of it, and the Chern character comes up naturally in that setting. - -A vector bundle $E \to C$ of rank $n$ and degree $d$ (defined as the degree of its determinant, or equivalently the degree of its first Chern class) on a smooth projective curve over a field $k$ has a formula similar to Riemann–Roch for line bundles. If we take $X = C$ and $Y = \{*\}$ a point then the Grothendieck–Riemann–Roch formula can be read as - - \begin{align} -\mathrm{ch}(f_{!}{\mathcal F}^\bull) &= h^0(C,E) - h^1(C,E) \\ -f_*(\mathrm{ch}(E)\mathrm{td}(X))&= f_*((n + c_1(E))(1 + (1/2)c_1(T_C))) \\ -&= f_*(n + c_1(E) + (n/2)c_1(T_C)) \\ -&= f_*(c_1(E) + (n/2)c_1(T_C)) \\ -&= d + n(1-g) -\end{align} - -hence -$$ -\chi(C,E) = d + n(1-g). -$$ - -This formula also holds for coherent sheaves of rank $n$ and degree $d$. - -One of the advantages of the Grothendieck–Riemann–Roch formula is that it can be interpreted as a relative version of the Hirzebruch–Riemann–Roch formula. For example, a smooth morphism $f\colon X \to Y$ has fibers which are all equi-dimensional (and isomorphic as topological spaces when base changing to $\Complex$). This fact is useful in moduli theory when considering a moduli space $\mathcal{M}$ parameterizing smooth proper spaces. For example, David Mumford used this formula to deduce relationships in the Chow ring of the moduli space of algebraic curves. - -For the moduli stack of genus $g$ curves (and no marked points) $\overline{\mathcal{M}}_g$ there is a universal curve $\pi\colon\overline{\mathcal{C}}_g \to \overline{\mathcal{M}}_g$, where $\overline{\mathcal{C}}_g = \overline{\mathcal{M}}_{g,1}$ is the moduli stack of curves of genus $g$ with one marked point.
Then, he defines the tautological classes - -\begin{align} -K_{\overline{\mathcal{C}}_g/\overline{\mathcal{M}}_g} &= c_1(\omega_{\overline{\mathcal{C}}_g/\overline{\mathcal{M}}_g})\\ -\kappa_l &= \pi_*(K^{l+1}_{\overline{\mathcal{C}}_g/\overline{\mathcal{M}}_g}) \\ -\mathbb{E} &= \pi_*(\omega_{\overline{\mathcal{C}}_g/\overline{\mathcal{M}}_g}) \\ -\lambda_l &= c_l(\mathbb{E}) -\end{align} - -where $1 \leq l \leq g$ and $\omega_{\overline{\mathcal{C}}_g/\overline{\mathcal{M}}_g}$ is the relative dualizing sheaf. Note that the fiber of $\omega_{\overline{\mathcal{C}}_g/\overline{\mathcal{M}}_g}$ over a point $[C] \in \overline{\mathcal{M}}_g$ is the dualizing sheaf $\omega_C$. He was able to find relations between the $\lambda_i$ and $\kappa_i$, describing the $\lambda_i$ in terms of a sum of the $\kappa_i$. The moduli space $M_{g,n}$ of $n$-pointed genus $g$ curves has the family of curves -$$ -\pi\colon C_{g,n} \to M_{g,n} -$$ - -with sections -$$ -s_i\colon M_{g,n} \to C_{g,n} -$$ - -corresponding to the marked points. Since each fiber has the canonical bundle $\omega_{C}$, there are the associated line bundles - -\Lambda_{g,n}(\pi) = \det(\mathbf{R}\pi_*(\omega_{C_{g,n}/M_{g,n}})) - -and - -\chi_{g,n}^{(i)} = s_i^*(\omega_{C_{g,n}/M_{g,n}}) . - -It turns out that -$$ -\Lambda_{g,n}(\pi) \otimes \left(\bigotimes_{i=1}^n \chi_{g,n}^{(i)}\right) -$$ - -is an ample line bundle, hence the coarse moduli space $M_{g,n}$ is quasi-projective. - -Alexander Grothendieck's version of the Riemann–Roch theorem was originally conveyed in a letter to Jean-Pierre Serre around 1956–1957. It was made public at the initial Bonn Arbeitstagung in 1957. Serre and Armand Borel subsequently organized a seminar at Princeton University to understand it. The final published paper was in effect the Borel–Serre exposition. - -The significance of Grothendieck's approach rests on several points. First, Grothendieck changed the statement itself: the theorem was, at the time, understood to be a theorem about a variety, whereas Grothendieck saw it as a theorem about a morphism between varieties. By finding the right generalization, the proof became simpler while the conclusion became more general. In short, Grothendieck applied a strong categorical approach to a hard piece of analysis. Moreover, Grothendieck introduced K-groups, as discussed above, which paved the way for algebraic K-theory. diff --git a/wiki/wikipedia/3610.txt b/wiki/wikipedia/3610.txt deleted file mode 100644 index 18e28fe469b0cd2d1a0378eda42b38f5ab566539..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3610.txt +++ /dev/null @@ -1,13 +0,0 @@ -In geometry, the lune of Hippocrates, named after Hippocrates of Chios, is a lune bounded by arcs of two circles, the smaller of which has as its diameter a chord spanning a right angle on the larger circle. Equivalently, it is a non-convex plane region bounded by one 180-degree circular arc and one 90-degree circular arc. It was the first curved figure to have its exact area calculated mathematically. - -Hippocrates wanted to solve the classic problem of squaring the circle, i.e. constructing a square by means of straightedge and compass, having the same area as a given circle. Hippocrates' proof was preserved through the History of Geometry compiled by Eudemus of Rhodes, which has also not survived, but which was excerpted by Simplicius of Cilicia in his commentary on Aristotle's Physics. - -Not until 1882, with Ferdinand von Lindemann's proof of the transcendence of π, was squaring the circle proved to be impossible.
 - -Hippocrates' result can be proved as follows: The center of the circle on which the arc AEB lies is the point D, which is the midpoint of the hypotenuse of the isosceles right triangle ABO. Therefore, the diameter AC of the larger circle ABC is $\sqrt{2}$ times the diameter of the smaller circle on which the arc AEB lies. Consequently, the smaller circle has half the area of the larger circle, and therefore the quarter circle AFBOA is equal in area to the semicircle AEBDA. Subtracting the crescent-shaped area AFBDA from the quarter circle gives triangle ABO, and subtracting the same crescent from the semicircle gives the lune. Since the triangle and lune are both formed by subtracting equal areas from equal areas, they are themselves equal in area. - -Using a similar proof to the one above, the Arab mathematician Hasan Ibn al-Haytham (Latinized name Alhazen, c. 965 – c. 1040) showed that when two lunes are formed on the two sides of a right triangle, whose outer boundaries are semicircles and whose inner boundaries are formed by the circumcircle of the triangle, the areas of these two lunes added together are equal to the area of the triangle. The lunes formed in this way from a right triangle are known as the lunes of Alhazen. The quadrature of the lune of Hippocrates is the special case of this result for an isosceles right triangle. - -In the mid-20th century, two Russian mathematicians, Nikolai Chebotaryov and his student Anatoly Dorodnov, completely classified the lunes that are constructible by compass and straightedge and that have equal area to a given square. All such lunes can be specified by the two angles formed by the inner and outer arcs on their respective circles; in this notation, for instance, the lune of Hippocrates would have the inner and outer angles (90°, 180°). Hippocrates found two other squarable concave lunes, with angles approximately (107.2°, 160.9°) and (68.5°, 205.6°). Two more squarable concave lunes, with angles approximately (46.9°, 234.4°) and (100.8°, 168.0°), were found in 1766, and again in 1840 by Thomas Clausen. - -As Chebotaryov and Dorodnov showed, these five pairs of angles give the only constructible squarable lunes; in particular, there are no constructible squarable convex lunes. diff --git a/wiki/wikipedia/3611.txt b/wiki/wikipedia/3611.txt deleted file mode 100644 index ce7ace8c66e45ed34c120d8d6eea765fd3c5a5ce..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3611.txt +++ /dev/null @@ -1,44 +0,0 @@ -In probability theory, Bennett's inequality provides an upper bound on the probability that the sum of independent random variables deviates from its expected value by more than any specified amount. Bennett's inequality was proved by George Bennett of the University of New South Wales in 1962. - -Let $X_1, \dots, X_n$ be independent random variables with finite variance and assume (for simplicity but without loss of generality) they all have zero expected value. Further assume $X_i \leq a$ almost surely for all $i$, and define $ S_n = \sum_{i = 1}^n X_i - \operatorname{E}(X_i)$ and $ \sigma^2 = \sum_{i=1}^n \operatorname{E}(X_i^2).$ - -Then for any t ≥ 0, - -\Pr\left( S_n > t \right) \leq -\exp\left( - \frac{\sigma^2}{a^2} h\left(\frac{at}{\sigma^2} \right)\right), - -where h(u) = (1 + u)log(1 + u) – u. - -For generalizations see Freedman (1975) and Fan, Grama and Liu (2012) for a martingale version of Bennett's inequality and its improvement, respectively.
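As a numerical illustration (ours, with arbitrarily chosen parameters), the following Python snippet evaluates Bennett's bound for a sum of centered Bernoulli variables and compares it with Hoeffding's bound $\exp(-2t^2/n)$ discussed below; because the summands' variance is small relative to their almost sure bound, Bennett's bound is dramatically smaller.

```python
# Evaluate Bennett's bound exp(-(sigma^2/a^2) h(a t / sigma^2)) for a sum
# of n Bernoulli(p) variables, centered so that X_i - p <= a := 1 - p,
# and compare it with Hoeffding's bound exp(-2 t^2 / n).
import math

def bennett_bound(t, sigma2, a):
    h = lambda u: (1 + u) * math.log(1 + u) - u
    return math.exp(-(sigma2 / a**2) * h(a * t / sigma2))

n, p = 1000, 0.01            # illustrative parameters
sigma2 = n * p * (1 - p)     # sum of the variances of the centered terms
a = 1 - p                    # almost-sure upper bound on each centered term
for t in [10, 20, 40]:
    hoeffding = math.exp(-2 * t**2 / n)
    print(t, bennett_bound(t, sigma2, a), hoeffding)
```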
 - -Hoeffding's inequality only assumes the summands are bounded almost surely, while Bennett's inequality offers some improvement when the variances of the summands are small compared to their almost sure bounds. However, Hoeffding's inequality entails sub-Gaussian tails, whereas in general Bennett's inequality has Poissonian tails. - -Bennett's inequality is most similar to the Bernstein inequalities, the first of which also gives concentration in terms of the variance and almost sure bound on the individual terms. Bennett's inequality is stronger than this bound, but more complicated to compute. - -In both inequalities, unlike some other inequalities or limit theorems, there is no requirement that the component variables have identical or similar distributions. - -Suppose that each $X_i$ is an independent binary random variable with probability p. Then Bennett's inequality says that: - -\Pr\left( \sum_{i = 1}^n X_i > pn + t \right) \leq -\exp\left( - np h\left(\frac{t}{np}\right)\right). - -For $t \geq 10 np$, -$$ -h(\frac{t}{np}) \geq \frac{t}{2np} \log \frac{t}{np}, -$$ - -so - -\Pr\left( \sum_{i = 1}^n X_i > pn + t \right) \leq -\left(\frac{t}{np}\right)^{-t/2} - -for $t \geq 10 np$. - -By contrast, Hoeffding's inequality gives a bound of $\exp(-2 t^2/n)$ and the first Bernstein inequality gives a bound of $\exp(-\frac{t^2}{2np + 2t/3})$. For $t \gg np$, Hoeffding's inequality gives $\exp(-\Theta(t^2/n))$, Bernstein gives $\exp(-\Theta(t))$, and Bennett gives $\exp(-\Theta(t \log \frac{t}{np}))$. diff --git a/wiki/wikipedia/3612.txt b/wiki/wikipedia/3612.txt deleted file mode 100644 index 72d1607033a9f4e57a7fbb2d156e7698382a6231..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3612.txt +++ /dev/null @@ -1,83 +0,0 @@ -In computer science, the Sethi–Ullman algorithm is an algorithm named after Ravi Sethi and Jeffrey D. Ullman, its inventors, for translating abstract syntax trees into machine code that uses as few registers as possible. - -When generating code for arithmetic expressions, the compiler has to decide which is the best way to translate the expression in terms of number of instructions used as well as number of registers needed to evaluate a certain subtree. Especially in the case that free registers are scarce, the order of evaluation can be important to the length of the generated code, because different orderings may lead to larger or smaller numbers of intermediate values being spilled to memory and then restored. The Sethi–Ullman algorithm (also known as Sethi–Ullman numbering) produces code which needs the fewest instructions possible as well as the fewest storage references (under the assumption that at most commutativity and associativity apply to the operators used, but distributive laws, i.e. $a * b + a * c = a * (b + c)$, do not hold). Note that the algorithm also succeeds if neither commutativity nor associativity hold for the expressions used, in which case arithmetic transformations cannot be applied. The algorithm also does not take advantage of common subexpressions or apply directly to expressions represented as general directed acyclic graphs rather than trees. - -The simple Sethi–Ullman algorithm works as follows (for a load/store architecture): - -# Traverse the abstract syntax tree in pre- or postorder - -## For every non-constant leaf node, assign a 1 (i.e. 1 register is needed to hold the variable/field/etc.) if it is the left child of its parent else assign a 0.
For every constant leaf node (RHS of an operation – literals, values), assign a 0. - -## For every non-leaf node n, assign the number of registers needed to evaluate the respective subtrees of n. If the number of registers needed in the left subtree (l) is not equal to the number of registers needed in the right subtree (r), the number of registers needed for the current node n is max(l, r). If l == r, then the number of registers needed for the current node is r + 1. - -# Code emission - -## If the number of registers needed to compute the left subtree of node n is bigger than the number of registers for the right subtree, then the left subtree is evaluated first (since it may be possible that the one more register needed by the right subtree to save the result makes the left subtree spill). If the right subtree needs more registers than the left subtree, the right subtree is evaluated first accordingly. If both subtrees need an equal number of registers, then the order of evaluation is irrelevant. - -For an arithmetic expression $a = (b + c + f * g) * (d + 3)$, the abstract syntax tree looks like this: - -= - -/ \ - -a * - -/ \ - -/ \ - -+ + - -/ \ / \ - -/ \ d 3 - -+ * - -/ \ / \ - -b c f g - -To continue with the algorithm, we need only to examine the arithmetic expression $(b + c + f * g) * (d + 3)$, i.e. we only have to look at the right subtree of the assignment '=': - -* - -/ \ - -/ \ - -+ + - -/ \ / \ - -/ \ d 3 - -+ * - -/ \ / \ - -b c f g - -Now we start traversing the tree (in preorder for now), assigning the number of registers needed to evaluate each subtree (note that the last summand in the expression $(b + c + f * g) * (d + 3)$ is a constant): - -*2 - -/ \ - -/ \ - -+2 +1 - -/ \ / \ - -/ \ d1 30 - -+1 *1 - -/ \ / \ - -b1 c0 f1 g0 - -From this tree it can be seen that we need 2 registers to compute the left subtree of the '*', but only 1 register to compute the right subtree. Nodes 'c' and 'g' do not need registers for the following reasons: If T is a tree leaf, then the number of registers to evaluate T is either 1 or 0 depending on whether T is a left or a right subtree (since an operation such as add R1, A can handle the right component A directly without storing it into a register). Therefore we shall start to emit code for the left subtree first, because we might run into the situation that we only have 2 registers left to compute the whole expression. If we now computed the right subtree first (which needs only 1 register), we would then need a register to hold the result of the right subtree while computing the left subtree (which would still need 2 registers), therefore needing 3 registers concurrently. Computing the left subtree first needs 2 registers, but the result can be stored in 1, and since the right subtree needs only 1 register to compute, the evaluation of the expression can be done with only 2 registers. - -In an advanced version of the Sethi–Ullman algorithm, the arithmetic expressions are first transformed, exploiting the algebraic properties of the operators used.
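The labeling phase of the algorithm above is short enough to state in code. The following Python sketch (our illustration; the Node class and function names are not from the original description) reproduces the register counts of the worked example, returning 2 for the expression $(b + c + f * g) * (d + 3)$.

```python
# Sethi-Ullman labeling: leaves get 1 register if they are a left child
# and 0 otherwise (constants always get 0); an inner node gets
# max(l, r) if its subtree labels differ, and l + 1 if they are equal.

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def label(node, is_left=True):
    if node.left is None and node.right is None:   # leaf
        if isinstance(node.value, int):            # constant
            return 0
        return 1 if is_left else 0
    l = label(node.left, True)
    r = label(node.right, False)
    return max(l, r) if l != r else l + 1

# (b + c + f*g) * (d + 3) from the worked example above:
expr = Node('*',
            Node('+', Node('+', Node('b'), Node('c')),
                      Node('*', Node('f'), Node('g'))),
            Node('+', Node('d'), Node(3)))
print(label(expr))  # 2 -- two registers suffice, matching the labeled tree
```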
diff --git a/wiki/wikipedia/3613.txt b/wiki/wikipedia/3613.txt deleted file mode 100644 index 69381e6486a96328f5499ce1039cca9c64e55eb8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3613.txt +++ /dev/null @@ -1,17 +0,0 @@ -In graph drawing and geometric graph theory, the slope number of a graph is the minimum possible number of distinct slopes of edges in a drawing of the graph in which vertices are represented as points in the Euclidean plane and edges are represented as line segments that do not pass through any non-incident vertex. - -Although closely related problems in discrete geometry had been studied earlier, e.g. by Scott and Jamison, the problem of determining the slope number of a graph was introduced by Wade, who showed that the slope number of an n-vertex complete graph Kn is exactly n. A drawing with this slope number may be formed by placing the vertices of the graph on a regular polygon. - -The slope number of a graph of maximum degree d is clearly at least $\lceil d/2\rceil$, because at most two of the incident edges at a degree-d vertex can share a slope. More precisely, the slope number is at least equal to the linear arboricity of the graph, since the edges of a single slope must form a linear forest, and the linear arboricity in turn is at least $\lceil d/2\rceil$. - -There exist graphs with maximum degree five that have arbitrarily large slope number. However, every graph of maximum degree three has slope number at most four; the result of Wade for the complete graph K4 shows that this is tight. Not every set of four slopes is suitable for drawing all degree-3 graphs: a set of slopes is suitable for this purpose if and only if it forms the slopes of the sides and diagonals of a parallelogram. In particular, any degree 3 graph can be drawn so that its edges are either axis-parallel or parallel to the main diagonals of the integer lattice. It is not known whether graphs of maximum degree four have bounded or unbounded slope number. - -As Keszegh showed, every planar graph has a planar straight-line drawing in which the number of distinct slopes is a function of the degree of the graph. The proof follows a construction of Malitz for bounding the angular resolution of planar graphs as a function of degree, by completing the graph to a maximal planar graph without increasing its degree by more than a constant factor, and applying the circle packing theorem to represent this augmented graph as a collection of tangent circles. If the degree of the initial graph is bounded, the ratio between the radii of adjacent circles in the packing will also be bounded by the ring lemma, which in turn implies that using a quadtree to place each graph vertex on a point within its circle will produce slopes that are ratios of small integers. The number of distinct slopes produced by this construction is exponential in the degree of the graph. - -It is NP-complete to determine whether a graph has slope number two. From this, it follows that it is NP-hard to determine the slope number of an arbitrary graph, or to approximate it with an approximation ratio better than 3/2. - -It is also NP-complete to determine whether a planar graph has a planar drawing with slope number two, and hard for the existential theory of the reals to determine the minimum slope number of a planar drawing.
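Wade's regular-polygon drawing of $K_n$ mentioned above can be checked numerically. The following Python sketch (ours) places the vertices on a regular $n$-gon and counts the distinct slopes among all $\binom{n}{2}$ edges; the angle-based comparison and tolerance are implementation assumptions.

```python
# Count distinct edge slopes of K_n drawn on a regular n-gon.
# Slopes are compared as angles in [0, pi), rounded to absorb
# floating-point noise.
import math
from itertools import combinations

def distinct_slopes(n):
    pts = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
           for k in range(n)]
    slopes = set()
    for (x1, y1), (x2, y2) in combinations(pts, 2):
        angle = math.atan2(y2 - y1, x2 - x1) % math.pi
        if math.pi - angle < 1e-9:   # angles ~pi and ~0 are the same slope
            angle = 0.0
        slopes.add(round(angle, 9))
    return len(slopes)

for n in range(3, 10):
    print(n, distinct_slopes(n))   # each K_n uses exactly n slopes
```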
diff --git a/wiki/wikipedia/3614.txt b/wiki/wikipedia/3614.txt deleted file mode 100644 index be4114e0fd2c86e2bd1307cd1af37077741e3f96..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3614.txt +++ /dev/null @@ -1,30 +0,0 @@ - - -The Star of David theorem is a mathematical result on arithmetic properties of binomial coefficients. It was discovered by Henry W. Gould in 1972. - -The greatest common divisors of the binomial coefficients forming each of the two triangles in the Star of David shape in Pascal's triangle are equal: - - - -\begin{align} - -& \gcd\left\{ \binom{n-1}{k-1}, \binom{n}{k+1}, \binom{n+1}{k}\right\} \\[8pt] - -= {} & \gcd\left\{ \binom{n-1}{k}, \binom{n}{k-1}, \binom{n+1}{k+1}\right\}. - -\end{align} - - - -Rows 8, 9, and 10 of Pascal's triangle are - -1, 8, 28, 56, 70, 56, 28, 8, 1 - -1, 9, 36, 84, 126, 126, 84, 36, 9, 1 - -1, 10, 45, 120, 210, 252, 210, 120, 45, 10, 1 - -For n=9, k=3 or n=9, k=6, the element 84 is surrounded by, in sequence, the elements 28, 56, 126, 210, 120, 36. Taking alternating values, we have gcd(28, 126, 120) = 2 = gcd(56, 210, 36). - -The element 36 is surrounded by the sequence 8, 28, 84, 120, 45, 9, and taking alternating values we have gcd(8, 84, 45) = 1 = gcd(28, 120, 9). - -The above greatest common divisor also equals $\gcd \left({n-1 \choose k-2}, {n-1 \choose k-1}, {n-1 \choose k}, {n-1 \choose k+1}\right). $ Thus in the above example for the element 84 (in its rightmost appearance), we also have gcd(70, 56, 28, 8) = 2. This result in turn has further generalizations. - -The two sets of three numbers which the Star of David theorem says have equal greatest common divisors also have equal products. For example, again observing that the element 84 is surrounded by, in sequence, the elements 28, 56, 126, 210, 120, 36, and again taking alternating values, we have $28\times 126\times 120 = 2^6\times 3^3\times 5\times 7^2 = 56\times 210\times 36$. This result can be confirmed by writing out each binomial coefficient in factorial form, using -$$ -{a \choose b}=\frac{a!}{(a-b)!b!}. -$$ diff --git a/wiki/wikipedia/3615.txt b/wiki/wikipedia/3615.txt deleted file mode 100644 index f6057c7c3d0d5f29a326b62b7257f50584770cb2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3615.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, Noether's theorem on rationality for surfaces is a classical result of Max Noether on complex algebraic surfaces, giving a criterion for a rational surface. Let S be an algebraic surface that is non-singular and projective. Suppose there is a morphism φ from S to the projective line, with general fibre also a projective line. Then the theorem states that S is rational. diff --git a/wiki/wikipedia/3616.txt b/wiki/wikipedia/3616.txt deleted file mode 100644 index a7d587ba555db6f3e7cacd1de9cc47286e9e246e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3616.txt +++ /dev/null @@ -1,7 +0,0 @@ -In mathematical logic, a judgment (or judgement) or assertion is a statement or enunciation in the metalanguage. For example, typical judgments in first-order logic would be that a string is a well-formed formula, or that a proposition is true. Similarly, a judgment may assert the occurrence of a free variable in an expression of the object language, or the provability of a proposition. In general, a judgment may be any inductively definable assertion in the metatheory. - -Judgments are used in formalizing deduction systems: a logical axiom expresses a judgment, premises of a rule of inference are formed as a sequence of judgments, and their conclusion is a judgment as well (thus, hypotheses and conclusions of proofs are judgments).
A characteristic feature of the variants of Hilbert-style deduction systems is that the context is not changed in any of their rules of inference, while both natural deduction and sequent calculus contain some context-changing rules. Thus, if we are interested only in the derivability of tautologies, not hypothetical judgments, then we can formalize the Hilbert-style deduction system in such a way that its rules of inference contain only judgments of a rather simple form. The same cannot be done with the other two deduction systems: as context is changed in some of their rules of inference, they cannot be formalized so that hypothetical judgments could be avoided—not even if we want to use them just for proving derivability of tautologies. - -This basic diversity among the various calculi allows for the following difference: the same basic idea (e.g. the deduction theorem) must be proven as a metatheorem in a Hilbert-style deduction system, while it can be declared explicitly as a rule of inference in natural deduction. - -In type theory, notions analogous to those of mathematical logic are used (giving rise to connections between the two fields, e.g. the Curry–Howard correspondence). The abstraction in the notion of judgment in mathematical logic can also be exploited in the foundations of type theory. diff --git a/wiki/wikipedia/3617.txt b/wiki/wikipedia/3617.txt deleted file mode 100644 index eaf3092825a01e59d6b10b2df346649595e793a5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3617.txt +++ /dev/null @@ -1,393 +0,0 @@ -:This article describes a theorem concerning the dual of a Hilbert space. For the theorems relating linear functionals to measures, see Riesz–Markov–Kakutani representation theorem. - -The Riesz representation theorem, sometimes called the Riesz–Fréchet representation theorem after Frigyes Riesz and Maurice René Fréchet, establishes an important connection between a Hilbert space and its continuous dual space. If the underlying field is the real numbers, the two are isometrically isomorphic; if the underlying field is the complex numbers, the two are isometrically anti-isomorphic. The (anti-) isomorphism is a particularly natural one, as will be described next. - -Let $H$ be a Hilbert space over a field $\mathbb{F},$ where $\mathbb{F}$ is either the real numbers $\R$ or the complex numbers $\C.$ If $\mathbb{F} = \C$ (resp. if $\mathbb{F} = \R$) then $H$ is called a complex Hilbert space (resp. a real Hilbert space). Every real Hilbert space can be extended to be a dense subset of a unique (up to bijective isometry) complex Hilbert space, called its complexification, which is why Hilbert spaces are often automatically assumed to be complex. Real and complex Hilbert spaces have in common many, but by no means all, properties and results/theorems. - -This article is intended for both mathematicians and physicists and will describe the theorem for both. - -In both mathematics and physics, if a Hilbert space is assumed to be real (that is, if $\mathbb{F} = \R$) then this will usually be made clear. Often in mathematics, and especially in physics, unless indicated otherwise, "Hilbert space" is usually automatically assumed to mean "complex Hilbert space." Depending on the author, in mathematics, "Hilbert space" usually means either (1) a complex Hilbert space, or (2) a real or complex Hilbert space.
 - -By definition, an antilinear map (also called a conjugate-linear map) $f : H \to Y$ is a map between vector spaces that is additive: - -f(x + y) = f(x) + f(y) \quad \text{ for all } x, y \in H, - -and antilinear (also called conjugate-linear or conjugate-homogeneous): - -f(c x) = \overline{c} f(x) \quad \text{ for all } x \in H \text{ and all scalars } c \in \mathbb{F}. - -In contrast, a map $f : H \to Y$ is linear if it is additive and homogeneous: - -f(c x) = c f(x) \quad \text{ for all } x \in H \quad \text{ and all scalars } c \in \mathbb{F}. - -The constant $0$ map is both linear and antilinear. If $\mathbb{F} = \R$ then the definitions of linear maps and antilinear maps are completely identical. A linear map from a Hilbert space into a Banach space (or more generally, from any Banach space into any topological vector space) is continuous if and only if it is bounded; the same is true of antilinear maps. The inverse of any antilinear (resp. linear) bijection is again an antilinear (resp. linear) bijection. The composition of two antilinear maps is a linear map. - -Continuous dual and anti-dual spaces - -A functional on $H$ is a function $H \to \mathbb{F}$ whose codomain is the underlying scalar field $\mathbb{F}.$ - -Denote by $H^*$ (resp. by $\overline{H}^*)$ the set of all continuous linear (resp. continuous antilinear) functionals on $H,$ which is called the (continuous) dual space (resp. the (continuous) anti-dual space) of $H.$ - -If $\mathbb{F} = \R$ then linear functionals on $H$ are the same as antilinear functionals and consequently, the same is true for such continuous maps: that is, $H^* = \overline{H}^*.$ - -One-to-one correspondence between linear and antilinear functionals - -Given any functional $f ~:~ H \to \mathbb{F},$ the conjugate of $f$ is the functional - -\begin{alignat}{4} - -\overline{f} : & H && \to && \mathbb{F} \\ - -& h && \mapsto&& \overline{f(h)}. \\ - -\end{alignat} - -This assignment is most useful when $\mathbb{F} = \C$ because if $\mathbb{F} = \R$ then $f = \overline{f}$ and the assignment $f \mapsto \overline{f}$ reduces down to the identity map. - -The assignment $f \mapsto \overline{f}$ defines an antilinear bijective correspondence from the set of - -all functionals (resp. all linear functionals, all continuous linear functionals $H^*$) on $H,$ - -onto the set of - -all functionals (resp. all antilinear functionals, all continuous antilinear functionals $\overline{H}^*$) on $H.$ - -The Hilbert space $H$ has an associated inner product $H \times H \to \mathbb{F}$ valued in $H$'s underlying scalar field $\mathbb{F}$ that is linear in one coordinate and antilinear in the other (as described in detail below). - -If $H$ is a complex Hilbert space (meaning, if $\mathbb{F} = \C$), which is very often the case, then which coordinate is antilinear and which is linear becomes a very important technicality. - -However, if $\mathbb{F} = \R$ then the inner product is a symmetric map that is simultaneously linear in each coordinate (that is, bilinear) and antilinear in each coordinate. Consequently, the question of which coordinate is linear and which is antilinear is irrelevant for real Hilbert spaces.
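For complex spaces the distinction does matter, and it can be seen concretely. In the following numpy illustration (ours, not part of the article), numpy.vdot conjugates its first argument, so vdot(y, x) behaves like the physics bra-ket $\langle y \mid x \rangle$: linear in the second slot and antilinear in the first.

```python
# numpy.vdot(a, b) computes sum(conj(a) * b): it conjugates its *first*
# argument, matching the physics convention <y|x>; the mathematics
# convention <x, y> is obtained by swapping the arguments.
import numpy as np

x = np.array([1 + 2j, 3j])
y = np.array([2 - 1j, 1 + 1j])
c = 2 - 3j

bra_ket = np.vdot(y, x)                                      # <y|x>
assert np.isclose(np.vdot(y, c * x), c * bra_ket)            # linear in x
assert np.isclose(np.vdot(c * y, x), np.conj(c) * bra_ket)   # antilinear in y
print(bra_ket, np.vdot(x, y))   # the two orders are complex conjugates
```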
- -Notation for the inner product - -In mathematics, the inner product on a Hilbert space $H$ is often denoted by $\left\langle \cdot, \cdot \right\rangle$ or $\left\langle \cdot, \cdot \right\rangle_H$ while in physics, the bra-ket notation $\left\langle \cdot \mid \cdot \right\rangle$ or $\left\langle \cdot \mid \cdot \right\rangle_H$ is typically used instead. In this article, these two notations will be related by the equality: - -\left\langle x, y \right\rangle := \left\langle y \mid x \right\rangle \quad \text{ for all } x, y \in H. - -Completing definitions of the inner product - -The maps $\left\langle \cdot, \cdot \right\rangle$ and $\left\langle \cdot \mid \cdot \right\rangle$ are assumed to have the following two properties: - -
    1. The map $\left\langle \cdot, \cdot \right\rangle$ is linear in its first coordinate; equivalently, the map $\left\langle \cdot \mid \cdot \right\rangle$ is linear in its second coordinate. Explicitly, this means that for every fixed $y \in H,$ the map that is denoted by -$$ -\left\langle y\mid \cdot \right\rangle = \left\langle \cdot, y \right\rangle : H \to \mathbb{F} -$$ - -and defined by h \mapsto \left\langle y\mid h \right\rangle = \left\langle h, y \right\rangle \quad \text{ for all } h \in H - -is a linear functional on $H.$ - -* In fact, this linear functional is continuous, so $\left\langle y\mid\cdot \right\rangle = \left\langle \cdot, y \right\rangle \in H^*.$ - -
    2. The map $\left\langle \cdot, \cdot \right\rangle$ is antilinear in its second coordinate; equivalently, the map $\left\langle \cdot \mid \cdot \right\rangle$ is antilinear in its first coordinate. Explicitly, this means that for every fixed $y \in H,$ the map that is denoted by
$$
\left\langle \cdot\mid y \right\rangle = \left\langle y, \cdot \right\rangle : H \to \mathbb{F}
$$

and defined by h \mapsto \left\langle h\mid y \right\rangle = \left\langle y, h \right\rangle \quad \text{ for all } h \in H

is an antilinear functional on $H.$

* In fact, this antilinear functional is continuous, so $\left\langle \cdot\mid y \right\rangle = \left\langle y, \cdot \right\rangle \in \overline{H}^*.$
In mathematics, the prevailing convention (i.e. the definition of an inner product) is that the inner product is linear in the first coordinate and antilinear in the other coordinate. In physics, the convention/definition is unfortunately the opposite, meaning that the inner product is linear in the second coordinate and antilinear in the other coordinate. This article will not choose one definition over the other. Instead, the assumptions made above make it so that the mathematics notation $\left\langle \cdot, \cdot \right\rangle$ satisfies the mathematical convention/definition for the inner product (that is, linear in the first coordinate and antilinear in the other), while the physics bra-ket notation $\left\langle \cdot | \cdot \right\rangle$ satisfies the physics convention/definition for the inner product (that is, linear in the second coordinate and antilinear in the other). Consequently, the above two assumptions make the notation used in each field consistent with that field's convention/definition for which coordinate is linear and which is antilinear. - -If $x = y$ then $\langle x\mid x \rangle = \langle x, x \rangle$ is a non-negative real number and the map - -\|x\| := \sqrt{\langle x, x \rangle} = \sqrt{\langle x \mid x \rangle} - -defines a canonical norm on $H$ that makes $H$ into a normed space. - -As with all normed spaces, the (continuous) dual space $H^*$ carries a canonical norm, called the dual norm, that is defined by - -\|f\|_{H^*} ~:=~ \sup_{\|x\| \leq 1, x \in H} |f(x)| \quad \text{ for every } f \in H^*. - -The canonical norm on the (continuous) anti-dual space $\overline{H}^*,$ denoted by $\|f\|_{\overline{H}^*},$ is defined by using this same equation: - -\|f\|_{\overline{H}^*} ~:=~ \sup_{\|x\| \leq 1, x \in H} |f(x)| \quad \text{ for every } f \in \overline{H}^*. - -This canonical norm on $H^*$ satisfies the parallelogram law, which means that the polarization identity can be used to define a canonical inner product on $H^*,$ which this article will denote by the notations - -\left\langle f, g \right\rangle_{H^*} := \left\langle g \mid f \right\rangle_{H^*}, - -where this inner product turns $H^*$ into a Hilbert space. There are now two ways of defining a norm on $H^*:$ the norm induced by this inner product (that is, the norm defined by $f \mapsto \sqrt{\left\langle f, f \right\rangle_{H^*}}$) and the usual dual norm (defined as the supremum over the closed unit ball). These norms are the same; explicitly, this means that the following holds for every $f \in H^*:$ - -\sup_{\|x\| \leq 1, x \in H} |f(x)| = \|f\|_{H^*} ~=~ \sqrt{\langle f, f \rangle_{H^*}} ~=~ \sqrt{\langle f \mid f \rangle_{H^*}}.
 - -As will be described later, the Riesz representation theorem can be used to give an equivalent definition of the canonical norm and the canonical inner product on $H^*.$ - -The same equations that were used above can also be used to define a norm and inner product on $H$'s anti-dual space $\overline{H}^*.$ - -Canonical isometry between the dual and antidual - -The complex conjugate $\overline{f}$ of a functional $f,$ which was defined above, satisfies - -\|f\|_{H^*} ~=~ \left\|\overline{f}\right\|_{\overline{H}^*} \quad \text{ and } \quad \left\|\overline{g}\right\|_{H^*} ~=~ \|g\|_{\overline{H}^*} - -for every $f \in H^*$ and every $g \in \overline{H}^*.$ - -This says exactly that the canonical antilinear bijection defined by - -\begin{alignat}{4} - -\operatorname{Cong} :&& H^* &&\to & \overline{H}^* \\[0.3ex] - -&& f &&\mapsto& \overline{f} \\ - -\end{alignat} - -as well as its inverse $\operatorname{Cong}^{-1} ~:~ \overline{H}^* \to H^*$ are antilinear isometries and consequently also homeomorphisms. - -The inner products on the dual space $H^*$ and the anti-dual space $\overline{H}^*,$ denoted respectively by $\langle \cdot, \cdot \rangle_{H^*}$ and $\langle \cdot, \cdot \rangle_{\overline{H}^*},$ are related by - -\langle \overline{f} | \overline{g} \rangle_{\overline{H}^*} = \overline{\langle f | g \rangle_{H^*}} = \langle g | f \rangle_{H^*} \qquad \text{ for all } f, g \in H^* - -and - -\langle \overline{f} | \overline{g} \rangle_{H^*} = \overline{\langle f | g \rangle_{\overline{H}^*}} = \langle g | f \rangle_{\overline{H}^*} \qquad \text{ for all } f, g \in \overline{H}^*. - -If $\mathbb{F} = \R$ then $H^* = \overline{H}^*$ and this canonical map $\operatorname{Cong} : H^* \to \overline{H}^*$ reduces down to the identity map. - -Two vectors $x$ and $y$ are orthogonal if $\langle x, y \rangle = 0,$ which happens if and only if $\|y\| \leq \|y + s x\|$ for all scalars $s.$ The orthogonal complement of a subset $C \subseteq H$ is - -C^{\bot} := \{ y \in H : \langle y, c \rangle = 0 \text{ for all } c \in C \}, - -which is always a closed vector subspace of $H.$ - -The Hilbert projection theorem guarantees that for any nonempty closed convex subset $C$ of a Hilbert space there exists a unique vector $m \in C$ such that $\|m\| = \inf_{c \in C} \|c\|;$ that is, $m \in C$ is the (unique) global minimum point of the function $C \to [0, \infty)$ defined by $c \mapsto \|c\|.$ - -Let $H$ be a Hilbert space whose inner product $\left\langle x, y \right\rangle$ is linear in its first argument and antilinear in its second argument (the notation $\langle y \mid x \rangle := \langle x, y \rangle$ is used in physics). For every continuous linear functional $\varphi \in H^*,$ there exists a unique $f_{\varphi} \in H$ such that - -\varphi(x) = \left\langle f_\varphi \mid x \right\rangle = \left\langle x, f_{\varphi} \right\rangle \quad \text{ for all } x \in H. - -* Importantly for complex Hilbert spaces, the vector $f_{\varphi} \in H,$ which is called the Riesz representative of $\varphi,$ is always located in the antilinear coordinate of the inner product (no matter which notation is used). - -Proof sketch: write $K := \ker \varphi,$ which is a closed vector subspace of $H.$ Then $H$ can be written as the direct sum $H = K \oplus K^{\bot}$ (a proof of this is given in the article on the Hilbert projection theorem).
 - -Because $K \neq H,$ there exists some non-zero $p \in K^{\bot}.$ - -For any $h \in H,$ - -\varphi[(\varphi h) p - (\varphi p) h] ~=~ \varphi[(\varphi h) p] - \varphi[(\varphi p) h] ~=~ (\varphi h) \varphi p - (\varphi p) \varphi h = 0, - -which shows that $(\varphi h) p - (\varphi p) h ~\in~ \ker \varphi = K,$ where now $p \in K^{\bot}$ implies - -0 = \langle p | (\varphi h) p - (\varphi p) h \rangle ~=~ \langle p | (\varphi h) p \rangle - \langle p | (\varphi p) h \rangle ~=~ (\varphi h) \langle p | p \rangle - (\varphi p) \langle p | h \rangle. - -Solving for $\varphi h$ shows that - -\varphi h = \frac{(\varphi p) \langle p | h \rangle}{\|p\|^2} = \left\langle \frac{\overline{\varphi p}}{\|p\|^2} p \Bigg| h \right\rangle \quad \text{ for every } h \in H, - -which proves that the vector $f_{\varphi} := \frac{\overline{\varphi p}}{\|p\|^2} p$ satisfies -$$ -\varphi h = \langle f_{\varphi} | h \rangle \text{ for every } h \in H. -$$ - -Applying the norm formula that was proved above with $y := f_{\varphi}$ shows that $\|\varphi\|_{H^*} = \left\|\left\langle f_{\varphi} | \cdot \right\rangle\right\|_{H^*} = \left\|f_{\varphi}\right\|_H.$ - -Also, the vector $u := \frac{p}{\|p\|}$ has norm $\|u\| = 1$ and satisfies $f_{\varphi} = \overline{\varphi(u)} u.$ -$$ -\blacksquare -$$ - -It can now be deduced that $K^{\bot}$ is $1$-dimensional when $\varphi \neq 0.$ - -Let $q \in K^{\bot}$ be any non-zero vector. Replacing $p$ with $q$ in the proof above shows that the vector $g := \frac{\overline{\varphi q}}{\|q\|^2} q$ satisfies $\varphi(h) = \langle g | h \rangle$ for every $h \in H.$ The uniqueness of the (non-zero) vector $f_{\varphi}$ representing $\varphi$ implies that $f_{\varphi} = g,$ which in turn implies that $\overline{\varphi q} \neq 0$ and $q = \frac{\|q\|^2}{\overline{\varphi q}} f_{\varphi}.$ Thus every vector in $K^{\bot}$ is a scalar multiple of $f_{\varphi}.$ $\blacksquare$ - -The formulas for the inner products follow from the polarization identity. - -If $\varphi \in H^*$ then - -\varphi \left(f_{\varphi}\right) = \left\langle f_{\varphi}, f_{\varphi} \right\rangle = \left\|f_{\varphi}\right\|^2 = \|\varphi\|^2. - -So in particular, $\varphi \left(f_{\varphi}\right) \geq 0$ is always real and furthermore, $\varphi \left(f_{\varphi}\right) = 0$ if and only if $f_{\varphi} = 0$ if and only if $\varphi = 0.$ - -Linear functionals as affine hyperplanes - -A non-trivial continuous linear functional $\varphi$ is often interpreted geometrically by identifying it with the affine hyperplane $A := \varphi^{-1}(1)$ (the kernel $\ker\varphi = \varphi^{-1}(0)$ is also often visualized alongside $A := \varphi^{-1}(1)$ although knowing $A$ is enough to reconstruct $\ker \varphi$ because if $A = \varnothing$ then $\ker \varphi = H$ and otherwise $\ker \varphi = A - A$). In particular, the norm of $\varphi$ should somehow be interpretable as the "norm of the hyperplane $A$". When $\varphi \neq 0,$ the Riesz representation theorem provides such an interpretation of $\|\varphi\|$ in terms of the affine hyperplane $A$: explicitly, $\|\varphi\| = \frac{1}{\inf_{a \in A} \|a\|},$ so that the norm of $\varphi$ is the reciprocal of the distance from the origin to $A.$ - -Bra of a linear functional - -Given a continuous linear functional $\psi \in H^*,$ let $\langle \psi\mid$ denote the vector $\Phi^{-1} \psi \in H,$ where $\Phi : H \to H^*$ is the canonical antilinear isometry defined by $\Phi(g) := \langle g \mid \cdot \rangle$; that is, - -\langle \psi\mid ~:=~ \Phi^{-1} \psi.
 - -The assignment $\psi \mapsto \langle \psi\mid$ is just the isometric antilinear isomorphism $\Phi^{-1} ~:~ H^* \to H,$ which is why $~\langle c \psi + \phi\mid ~=~ \overline{c} \langle \psi\mid ~+~ \langle \phi\mid~$ holds for all $\phi, \psi \in H^*$ and all scalars $c.$ - -The defining condition of the vector $\langle \psi | \in H$ is the technically correct but unsightly equality - -\left\langle \langle \psi\mid \mid g \right\rangle_H ~=~ \psi g \quad \text{ for all } g \in H, - -which is why the notation $\left\langle \psi \mid g \right\rangle$ is used in place of $\left\langle \langle \psi\mid \mid g \right\rangle_H = \left\langle g, \langle \psi\mid \right\rangle_H.$ The defining condition becomes - -\left\langle \psi\mid g \right\rangle ~=~ \psi g \quad \text{ for all } g \in H. - -Kets - -For any given vector $g \in H,$ the notation $| g \rangle$ is used to denote $g$; that is, - -\mid g \rangle := g. - -The assignment $g \mapsto | g \rangle$ is just the identity map $\operatorname{Id}_H : H \to H,$ which is why $~\mid c g + h \rangle ~=~ c \mid g \rangle ~+~ \mid h \rangle~$ holds for all $g, h \in H$ and all scalars $c.$ - -The notations $\langle h\mid g \rangle$ and $\langle \psi\mid g \rangle$ are used in place of $\left\langle h\mid \mid g \rangle \right\rangle_H ~=~ \left\langle \mid g \rangle, h \right\rangle_H$ and $\left\langle \psi\mid \mid g \rangle \right\rangle_H ~=~ \left\langle g, \langle \psi\mid \right\rangle_H,$ respectively. As expected, $~\langle \psi\mid g \rangle = \psi g~$ and $~\langle h\mid g \rangle~$ really is just the scalar $~\langle h\mid g \rangle_H ~=~ \langle g, h \rangle_H.$ - -Let $A : H \to Z$ be a continuous linear operator between Hilbert spaces $\left(H, \langle \cdot, \cdot \rangle_H\right)$ and $\left(Z, \langle \cdot, \cdot \rangle_Z \right).$ As before, let $\langle y \mid x \rangle_H := \langle x, y \rangle_H$ and $\langle y \mid x \rangle_Z := \langle x, y \rangle_Z.$ - -Denote by - -\begin{alignat}{4} - -\Phi_H :&& H &&\to & H^* \\[0.3ex] - -&& g &&\mapsto& \langle g \mid \cdot \rangle_H \\ - -\end{alignat} - -\quad \text{ and } \quad - -\begin{alignat}{4} - -\Phi_Z :&& Z &&\to & Z^* \\[0.3ex] - -&& y &&\mapsto& \langle y \mid \cdot \rangle_Z \\ - -\end{alignat} - -the usual bijective antilinear isometries that satisfy: - -\left(\Phi_H g\right) h = \langle g\mid h \rangle_H \quad \text{ for all } g, h \in H \qquad \text{ and } \qquad \left(\Phi_Z y\right) z = \langle y \mid z \rangle_Z \quad \text{ for all } y, z \in Z. - -For every $z \in Z,$ the scalar-valued map $\langle z\mid A (\cdot) \rangle_Z$ on $H$ defined by - -h \mapsto \langle z\mid A h \rangle_Z = \langle A h, z \rangle_Z - -is a continuous linear functional on $H$ and so by the Riesz representation theorem, there exists a unique vector in $H,$ denoted by $A^* z,$ such that $\langle z \mid A (\cdot) \rangle_Z = \left\langle A^* z \mid \cdot \right\rangle_H,$ or equivalently, such that - -\langle z \mid A h \rangle_Z = \left\langle A^* z \mid h \right\rangle_H \quad \text{ for all } h \in H. - -The assignment $z \mapsto A^* z$ thus induces a function $A^* : Z \to H$ called the adjoint of $A : H \to Z$ whose defining condition is - -\langle z \mid A h \rangle_Z = \left\langle A^* z\mid h \right\rangle_H \quad \text{ for all } h \in H \text{ and all } z \in Z. - -The adjoint $A^* : Z \to H$ is necessarily a continuous (equivalently, a bounded) linear operator.
- -If $H$ is finite dimensional with the standard inner product and if $M$ is the transformation matrix of $A$ with respect to the standard orthonormal basis then $M$'s conjugate transpose $\overline{M^{\operatorname{T}}}$ is the transformation matrix of the adjoint $A^*.$ - -It is also possible to define the transpose or algebraic adjoint of $A : H \to Z,$ which is the map ${}^{t}A : Z^* \to H^*$ defined by sending a continuous linear functional $\psi \in Z^*$ to - -{}^{t}A(\psi) := \psi \circ A, - -where $\psi \circ A$ is always a continuous linear functional on $H.$ - -It satisfies $\|A\| = \left\|{}^t A\right\|$ (this is true more generally, when $H$ and $Z$ are merely normed spaces). - -The adjoint $A^* : Z \to H$ is actually just the transpose ${}^{t}A : Z^* \to H^*$ when the Riesz representation theorem is used to identify $Z$ with $Z^*$ and $H$ with $H^*.$ - -Explicitly, the relationship between the adjoint and transpose is:
$$
{}^{t}A ~\circ~ \Phi_Z ~=~ \Phi_H ~\circ~ A^* \qquad (1)
$$ - -To show that (1) holds, fix $z \in Z.$ The definition of ${}^{t}A$ implies \left({}^{t}A \circ \Phi_Z\right) z = {}^{t}A \left(\Phi_Z z\right) = \left(\Phi_Z z\right) \circ A so it remains to show that $\left(\Phi_Z z\right) \circ A = \Phi_H\left(A^* z\right).$ If $h \in H$ then \left(\left(\Phi_Z z\right) \circ A\right) h = \left(\Phi_Z z\right)(A h) = \langle z\mid A h \rangle_Z = \langle A^* z\mid h \rangle_H = \left(\Phi_H(A^* z)\right) h, as desired. $\blacksquare$ - -This can be rewritten as: - -A^* ~=~ \Phi_H^{-1} ~\circ~ {}^{t}A ~\circ~ \Phi_Z \quad \text{ and } \quad {}^{t}A ~=~ \Phi_H ~\circ~ A^* ~\circ~ \Phi_Z^{-1}. - -Given any $z \in Z,$ the left and right hand sides of equality (1) can be rewritten in terms of the inner products: - -\left({}^{t}A ~\circ~ \Phi_Z\right) z = \langle z \mid A (\cdot) \rangle_Z \quad \text{ and } \quad\left(\Phi_H ~\circ~ A^*\right) z = \langle A^* z\mid\cdot \rangle_H - -where as before, $\langle z \mid A (\cdot) \rangle_Z$ denotes the continuous linear functional on $H$ defined by $g \mapsto \langle z\mid A g \rangle_Z.$ - -Assume $Z = H$ and let $\Phi := \Phi_H = \Phi_Z.$ - -Let $A : H \to H$ be a continuous (that is, bounded) linear operator. - -Whether or not $A : H \to H$ is self-adjoint, normal, or unitary depends entirely on whether or not $A$ satisfies certain defining conditions related to its adjoint, which was shown by (1) to essentially be just the transpose ${}^t A : H^* \to H^*.$ - -Because the transpose of $A$ is a map between continuous linear functionals, these defining conditions can consequently be re-expressed entirely in terms of linear functionals, as the remainder of this subsection will now describe in detail. - -The linear functionals that are involved are the simplest possible continuous linear functionals on $H$ that can be defined entirely in terms of $A,$ the inner product $\langle \cdot\mid\cdot \rangle$ on $H,$ and some given vector $h \in H.$ - -These "elementary $A$-induced" continuous linear functionals are $\left\langle A h\mid\cdot \right\rangle$ and $\langle h\mid A (\cdot) \rangle$ where - -\left\langle A h\mid\cdot \right\rangle = \Phi (A h) = (\Phi \circ A) h \quad \text{ and } \quad \langle h\mid A (\cdot) \rangle = \left({}^{t}A \circ \Phi\right) h.
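The finite-dimensional statement above is easy to verify numerically. The following Python/NumPy sketch checks the defining condition $\langle z \mid A h \rangle_Z = \langle A^* z \mid h \rangle_H$ with $A^*$ realized as the conjugate transpose; the dimensions and random matrices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 6  # dim Z = m, dim H = n

A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
A_star = A.conj().T  # conjugate transpose = matrix of the adjoint

h = rng.standard_normal(n) + 1j * rng.standard_normal(n)
z = rng.standard_normal(m) + 1j * rng.standard_normal(m)

# Defining condition of the adjoint: <z | A h>_Z = <A* z | h>_H,
# with <y | x> = conj(y) . x, matching the article's convention.
lhs = np.vdot(z, A @ h)
rhs = np.vdot(A_star @ z, h)
assert np.isclose(lhs, rhs)
```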
- -Self-adjoint operators - -A continuous linear operator $A : H \to H$ is called self-adjoint if it is equal to its own adjoint; that is, if $A = A^*.$ Using (1), this happens if and only if: - -\Phi \circ A = {}^t A \circ \Phi - -where this equality can be rewritten in the following two equivalent forms: - -A = \Phi^{-1} \circ {}^t A \circ \Phi \quad \text{ or } \quad {}^{t}A = \Phi \circ A \circ \Phi^{-1}. - -Unraveling notation and definitions produces the following characterization of self-adjoint operators in terms of the aforementioned "$A$-induced" continuous linear functionals: $A$ is self-adjoint if and only if for all $z \in H,$ the linear functional $\langle z\mid A (\cdot) \rangle$ is equal to the linear functional $\langle A z\mid\cdot \rangle$; that is, if and only if
$$
\langle A z\mid \cdot \rangle = \langle z\mid A (\cdot) \rangle \quad \text{ for all } z \in H. \qquad (2)
$$ - -Normal operators - -A continuous linear operator $A : H \to H$ is called normal if $A A^* = A^* A,$ which happens if and only if for all $z, h \in H,$ - -\left\langle A A^* z\mid h \right\rangle = \left\langle A^* A z\mid h \right\rangle. - -Using (1) and unraveling notation and definitions produces the following characterization of normal operators in terms of inner products of continuous linear functionals: $A$ is a normal operator if and only if
$$
\left\langle \left\langle A h \mid\cdot \right\rangle\mid\left\langle A z \mid\cdot \right\rangle \right\rangle_{H^*} ~=~ \left\langle \left\langle h \mid A(\cdot) \right\rangle\mid\left\langle z \mid A(\cdot) \right\rangle \right\rangle_{H^*} \quad \text{ for all } z, h \in H. \qquad (3)
$$ - -The left hand side of this characterization is also equal to $\overline{\left\langle A h \mid A z \right\rangle}_H = \left\langle A z \mid A h \right\rangle_H.$ - -The continuous linear functionals $\left\langle h \mid A(\cdot) \right\rangle$ and $\left\langle z \mid A(\cdot) \right\rangle$ are defined as above. - -In other words, if it happens to be the case that the assignment $\left\langle A h \mid\cdot \right\rangle ~\mapsto~ \left\langle h \mid A(\cdot) \right\rangle$ is well-defined (or alternatively, if $\left\langle h \mid A(\cdot) \right\rangle ~\mapsto~ \left\langle A h \mid\cdot \right\rangle$ is well-defined) where $h$ ranges over $H,$ which happens (for instance) if $A$ is injective, then $A$ is a normal operator if and only if this assignment preserves the inner product on $H^*.$ - -The fact that every self-adjoint bounded linear operator is normal follows readily by direct substitution of $A^* = A$ into either side of $A^* A = A A^*.$ - -This same fact also follows immediately from the direct substitution of the equalities (2) into either side of (3). - -Alternatively, for a complex Hilbert space, the continuous linear operator $A$ is a normal operator if and only if $\|Az\| = \left\|A^* z\right\|$ for every $z \in H,$ which happens if and only if - -\|Az\|_H = \left\|\langle z \mid A(\cdot) \rangle\right\|_{H^*} \quad \text{ for every } z \in H. - -Unitary operators - -An invertible bounded linear operator $A : H \to H$ is said to be unitary if its inverse is its adjoint: $A^{-1} = A^*.$ - -By using (1), this is seen to be equivalent to $\Phi \circ A^{-1} = {}^{t}A \circ \Phi.$ - -Unraveling notation and definitions, it follows that $A$ is unitary if and only if - -\langle A^{-1} z\mid\cdot \rangle = \langle z\mid A (\cdot) \rangle \quad \text{ for all } z \in H.
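A quick numerical sanity check of these three defining conditions in the finite-dimensional case follows (a Python/NumPy sketch; the constructed matrices are illustrative assumptions, not canonical examples).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
adj = lambda M: M.conj().T   # the adjoint, realized as the conjugate transpose

B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S = B + adj(B)               # self-adjoint: S = S*
U, _ = np.linalg.qr(B)       # unitary: U^{-1} = U*
N = S + 2j * np.eye(n)       # normal but not self-adjoint (S and 2iI commute)

z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
h = rng.standard_normal(n) + 1j * rng.standard_normal(n)

assert np.allclose(S, adj(S))                                  # self-adjoint
assert np.allclose(N @ adj(N), adj(N) @ N)                     # normal
assert np.isclose(np.linalg.norm(N @ z), np.linalg.norm(adj(N) @ z))  # ||Nz|| = ||N*z||
assert np.allclose(adj(U) @ U, np.eye(n))                      # unitary: U*U = Id
assert np.isclose(np.vdot(U @ z, U @ h), np.vdot(z, h))        # <Az | A(.)> = <z | .>
```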
- -The fact that a bounded invertible linear operator $A : H \to H$ is unitary if and only if $A^* A = \operatorname{Id}_H$ (or equivalently, ${}^t A \circ \Phi \circ A = \Phi$) produces another (well-known) characterization: an invertible bounded linear map $A$ is unitary if and only if - -\langle A z\mid A (\cdot) \rangle = \langle z\mid\cdot \rangle \quad \text{ for all } z \in H. - -Because $A : H \to H$ is invertible (and so in particular a bijection), this is also true of the transpose ${}^t A : H^* \to H^*.$ This fact also allows the vector $z \in H$ in the above characterizations to be replaced with $A z$ or $A^{-1} z,$ thereby producing many more equalities. Similarly, $\cdot$ can be replaced with $A(\cdot)$ or $A^{-1}(\cdot).$ diff --git a/wiki/wikipedia/3618.txt b/wiki/wikipedia/3618.txt deleted file mode 100644 index 39bdd86c571628f2494afc0d6f343c80277912cd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3618.txt +++ /dev/null @@ -1,30 +0,0 @@ -In mathematics, Abelian and Tauberian theorems are theorems giving conditions for two methods of summing divergent series to give the same result, named after Niels Henrik Abel and Alfred Tauber. The original examples are Abel's theorem showing that if a series converges to some limit then its Abel sum is the same limit, and Tauber's theorem showing that if the Abel sum of a series exists and the coefficients are sufficiently small (o(1/n)) then the series converges to the Abel sum. More general Abelian and Tauberian theorems give similar results for more general summation methods. - -There is not yet a clear distinction between Abelian and Tauberian theorems, and no generally accepted definition of what these terms mean. Often, a theorem is called "Abelian" if it shows that some summation method gives the usual sum for convergent series, and is called "Tauberian" if it gives conditions for a series summable by some method to be summable in the usual sense. - -In the theory of integral transforms, Abelian theorems give the asymptotic behaviour of the transform based on properties of the original function. Conversely, Tauberian theorems give the asymptotic behaviour of the original function based on properties of the transform, but usually require some restrictions on the original function. - -For any summation method L, its Abelian theorem is the result that if $c = (c_n)$ is a convergent sequence, with limit C, then L(c) = C. An example is given by the Cesàro method, in which L is defined as the limit of the arithmetic means of the first N terms of c, as N tends to infinity. One can prove that if c does converge to C, then so does the sequence $(d_N)$ where
$$
 d_N = \frac{c_1+c_2+\cdots+c_N} N.
$$ - -To see that, subtract C everywhere to reduce to the case C = 0. Then divide the sequence into an initial segment, and a tail of small terms: given any ε > 0 we can take N large enough to make the initial segment of terms up to $c_N$ average to at most ε/2, while each term in the tail is bounded by ε/2 so that the average is also necessarily bounded. - -The name derives from Abel's theorem on power series. In that case L is the radial limit (thought of within the complex unit disk), where we let r tend to the limit 1 from below along the real axis in the power series with term $a_n z^n$, and set $z = r\cdot e^{i\theta}$.
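As a small numerical illustration of this radial limit (a Python sketch; the series $a_n = (-1)^{n+1}/n$, whose ordinary sum is $\log 2$, is an assumed example):

```python
import numpy as np

# Abelian direction: for the convergent series sum a_n = log 2, with
# a_n = (-1)^(n+1)/n, the radial limit f(r) = sum a_n r^n as r -> 1-
# recovers the ordinary sum.
n = np.arange(1, 200001)
a = (-1.0) ** (n + 1) / n

for r in (0.9, 0.99, 0.999):
    print(r, np.sum(a * r ** n))        # approaches log(2) = 0.6931...

print("ordinary sum:", np.sum(a), "log 2:", np.log(2))
```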
That theorem has its main interest in the case that the power series has radius of convergence exactly 1: if the radius of convergence is greater than one, the convergence of the power series is uniform for r in [0,1] so that the sum is automatically continuous and it follows directly that the limit as r tends up to 1 is simply the sum of the $a_n$. When the radius is 1 the power series will have some singularity on |z| = 1; the assertion is that, nonetheless, if the sum of the $a_n$ exists, it is equal to the limit over r. This therefore fits exactly into the abstract picture. - -Partial converses to Abelian theorems are called Tauberian theorems. The original result of Tauber stated that if we assume also $a_n = o(1/n)$ (see Little o notation) and the radial limit exists, then the series obtained by setting z = 1 is actually convergent. This was strengthened by John Edensor Littlewood: we need only assume $a_n = O(1/n)$. A sweeping generalization is the Hardy–Littlewood Tauberian theorem. - -In the abstract setting, therefore, an Abelian theorem states that the domain of L contains the convergent sequences, and its values there are equal to those of the Lim functional. A Tauberian theorem states, under some growth condition, that the domain of L is exactly the convergent sequences and no more. - -If one thinks of L as some generalised type of weighted average, taken to the limit, a Tauberian theorem allows one to discard the weighting, under the correct hypotheses. There are many applications of this kind of result in number theory, in particular in handling Dirichlet series. - -The development of the field of Tauberian theorems received a fresh turn with Norbert Wiener's very general results, namely Wiener's Tauberian theorem and its large collection of corollaries. The central theorem can now be proved by Banach algebra methods, and contains much, though not all, of the previous theory. diff --git a/wiki/wikipedia/3619.txt b/wiki/wikipedia/3619.txt deleted file mode 100644 index e8fadcc56928bcded6cf03f181528a823342c976..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3619.txt +++ /dev/null @@ -1,45 +0,0 @@ -In geometric graph theory, a penny graph is a contact graph of unit circles. That is, it is a graph formed from a collection of non-crossing unit circles by creating a vertex for each circle and an edge for every pair of tangent circles. The circles can be represented physically by pennies, arranged without overlapping on a flat surface, with a vertex for each penny and an edge for each two pennies that touch. - -Penny graphs have also been called unit coin graphs, because they are the coin graphs formed from unit circles. If each vertex is represented by a point at the center of its circle, then two vertices will be adjacent if and only if their distance is the minimum distance among all pairs of points. Therefore, penny graphs have also been called minimum-distance graphs, smallest-distance graphs, or closest-pairs graphs. Similarly, in a mutual nearest neighbor graph that links pairs of points in the plane that are each other's nearest neighbors, each connected component is a penny graph, although edges in different components may have different lengths. - -Every penny graph is a unit disk graph and a matchstick graph. - -Like planar graphs more generally, they obey the four color theorem, but this theorem is easier to prove for penny graphs.
- -Testing whether a graph is a penny graph, or finding its maximum independent set, is NP-hard; however, both upper and lower bounds are known for the size of the maximum independent set, higher than the bounds that are possible for arbitrary planar graphs. - -Every vertex in a penny graph has at most six neighboring vertices; here the number six is the kissing number for circles in the plane. - -However, the pennies whose centers are less than three units from the convex hull of the pennies have fewer neighbors. Based on a more precise version of this argument, one can show - -that every penny graph with n vertices has at most - -\left\lfloor 3n - \sqrt{12n-3}\right\rfloor - -edges. Some penny graphs, formed by arranging the pennies in a triangular grid, have exactly this number of edges. - -By arranging the pennies in a square grid, or in the form of certain squaregraphs, one can form triangle-free penny graphs whose number of edges is at least - -\left\lfloor 2n-2\sqrt{n}\right\rfloor, - -and in any triangle-free penny graph the number of edges is at most - -2n-1.65\sqrt{n}. - -Swanepoel conjectured that the $\left\lfloor 2n-2\sqrt{n}\right\rfloor$ bound is tight. Proving this, or finding a better bound, remains open. - -Every penny graph contains a vertex with at most three neighbors. For instance, such a vertex can be found at one of the corners of the convex hull of the circle centers, or as one of the two farthest-apart circle centers. Therefore, penny graphs have degeneracy at most three. Based on this, one can prove that their graph colorings require at most four colors, much more easily than the proof of the more general four-color theorem. However, despite their restricted structure, there exist penny graphs that do still require four colors. - -Analogously, the degeneracy of every triangle-free penny graph is at most two. Every such graph contains a vertex with at most two neighbors, even though it is not always possible to find this vertex on the convex hull. Based on this one can prove that they require at most three colors, more easily than the proof of the more general Grötzsch's theorem that triangle-free planar graphs are 3-colorable. - -A maximum independent set in a penny graph is a subset of the pennies, no two of which touch each other. Finding maximum independent sets is NP-hard for arbitrary graphs, and remains NP-hard on penny graphs. It is an instance of the maximum disjoint set problem, in which one must find large subsets of non-overlapping regions of the plane. However, as with planar graphs more generally, Baker's technique provides a polynomial-time approximation scheme for this problem. - -In 1983, Paul Erdős asked for the largest number c such that every n-vertex penny graph has an independent set of at least cn vertices. That is, if we place n pennies on a flat surface, there should be a subset of cn of the pennies that don't touch each other. By the four-color theorem, c ≥ 1/4, and the improved bound c ≥ 8/31 ≈ 0.258 was proven by Swanepoel. In the other direction, Pach and Tóth proved that c ≤ 5/16 = 0.3125. As of 2013, these remained the best bounds known for this problem. - -Constructing a penny graph from the locations of its n circles can be performed as an instance of the closest pair of points problem, taking worst-case time O(n log n) or (with randomized time and with the use of the floor function) expected time O(n). 
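For small inputs the construction can also be done by brute force (a Python/NumPy sketch; the quadratic all-pairs approach and the sample points are illustrative assumptions, not the $O(n \log n)$ closest-pair method described above):

```python
import numpy as np

def penny_graph(points, tol=1e-9):
    """Edges of the minimum-distance graph: pairs of points realizing the
    minimum pairwise distance (tangent unit circles after rescaling)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    dmin = d[np.triu_indices(n, k=1)].min()
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(d[i, j] - dmin) <= tol]

# Three pennies in a row plus one on top: a triangular-grid fragment.
pts = [(0, 0), (1, 0), (2, 0), (0.5, np.sqrt(3) / 2)]
print(penny_graph(pts))  # [(0, 1), (0, 3), (1, 2), (1, 3)]
```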
An alternative method with the same worst-case time is to construct the Delaunay triangulation or nearest neighbor graph of the circle centers (both of which contain the penny graph as a subgraph) and then test which edges correspond to circle tangencies. - -However, if a graph is given without geometric positions for its vertices, then testing whether it can be represented as a penny graph is NP-hard. It remains NP-hard even when the given graph is a tree. Similarly, testing whether a graph can be represented as a three-dimensional mutual nearest neighbor graph is also NP-hard. - -It is possible to perform some computational tasks on directed penny graphs, such as testing whether one vertex can reach another, in polynomial time and substantially less than linear space, given an input representing its circles in a form allowing basic computational tasks such as testing adjacency and finding intersections of the circles with axis-parallel lines. - -Penny graphs are a special case of the coin graphs (graphs that can be represented by tangencies of non-crossing circles of arbitrary radii). Because the coin graphs are the same as the planar graphs, all penny graphs are planar. The penny graphs are also unit disk graphs (the intersection graphs of unit circles), unit distance graphs (graphs that can be drawn with all edges having equal lengths, allowing crossings), and matchstick graphs (graphs that can be drawn in the plane with equal-length straight edges and no edge crossings). diff --git a/wiki/wikipedia/362.txt b/wiki/wikipedia/362.txt deleted file mode 100644 index 0ba93667b1511f38385ec02912468e0b1c421338..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/362.txt +++ /dev/null @@ -1,204 +0,0 @@ -The generalized pencil-of-function method (GPOF), also known as the matrix pencil method, is a signal processing technique for estimating a signal or extracting information with complex exponentials. Similar to the Prony and original pencil-of-function methods, it is generally preferred to those for its robustness and computational efficiency. - -The method was originally developed by Yingbo Hua and Tapan Sarkar for estimating the behaviour of electromagnetic systems from their transient response, building on Sarkar's past work on the original pencil-of-function method. The method has a plethora of applications in electrical engineering, particularly related to problems in computational electromagnetics, microwave engineering and antenna theory. - -A transient electromagnetic signal can be represented as:
$$
y(t)=x(t)+n(t) \approx \sum_{i=1}^{M}R_i e^{s_i t} + n(t); 0 \leq t \leq T,
$$ - -where $y(t)$ is the observed time-domain signal, $n(t)$ is the signal noise, $x(t)$ is the actual signal, $R_i$ are the residues, $s_i$ are the poles of the system, defined as $s_i=-\alpha_i+j \omega_i$, $\alpha_i$ are the damping factors and $\omega_i$ are the angular frequencies. - -The same sequence, sampled with a period of $T_s$, can be written as the following:
$$
y[kT_s]=x[kT_s]+n[kT_s] \approx \sum_{i=1}^{M}R_i z_i^{k} + n[kT_s]; k=0,...,N-1; i=1,2,...,M,
$$ - -where $z_i=e^{(- \alpha_i + j \omega_i) T_s}$ by the identities of the Z-transform. The generalized pencil-of-function method estimates the optimal $M$ and the $z_i$'s.
- -For the noiseless case, two $(N-L) \times L$ matrices, $Y_1$ and $Y_2$, are produced: - -[Y_1]= - -\begin{bmatrix} - -x(0) & x(1) & \cdots & x(L-1)\\ - -x(1) & x(2) & \cdots & x(L)\\ - -\vdots & \vdots & \ddots & \vdots\\ - -x(N-L-1) & x(N-L) & \cdots & x(N-2) - -\end{bmatrix}_{(N-L) \times L}; - -[Y_2]= - -\begin{bmatrix} - -x(1) & x(2) & \cdots & x(L)\\ - -x(2) & x(3) & \cdots & x(L+1)\\ - -\vdots & \vdots & \ddots & \vdots\\ - -x(N-L) & x(N-L+1) & \cdots & x(N-1) - -\end{bmatrix}_{(N-L) \times L} - -where $L$ is defined as the pencil parameter. $Y_1$ and $Y_2$ can be decomposed into the following matrices:
$$
[Y_1]=[Z_1][B][Z_2]
$$
$$
[Y_2]=[Z_1][B][Z_0][Z_2]
$$ - -where - -[Z_1]= - -\begin{bmatrix} - -1 & 1 & \cdots & 1\\ - -z_1 & z_2 & \cdots & z_M\\ - -\vdots & \vdots & \ddots & \vdots\\ - -z_1^{(N-L-1)} & z_2^{(N-L-1)} & \cdots & z_M^{(N-L-1)} - -\end{bmatrix}_{(N-L) \times M}; - -[Z_2]= - -\begin{bmatrix} - -1 & z_1 & \cdots & z_1^{L-1}\\ - -1 & z_2 & \cdots & z_2^{L-1}\\ - -\vdots & \vdots & \ddots & \vdots\\ - -1 & z_M & \cdots & z_M^{L-1} - -\end{bmatrix}_{M \times L} - -$[Z_0]$ and $[B]$ are $M \times M$ diagonal matrices with sequentially-placed $z_i$ and $R_i$ values, respectively. - -If $M \leq L \leq N-M$, the generalized eigenvalues of the matrix pencil
$$
[Y_2]-\lambda[Y_1]=[Z_1][B]([Z_0]-\lambda[I])[Z_2]
$$ - -yield the poles of the system, which are $\lambda=z_i$. Then, the generalized eigenvectors $p_i$ can be obtained by the following identities:
$$
[Y_1]^+[Y_1]p_i=p_i;
$$ $i=1,...,M$
$$
[Y_1]^+[Y_2]p_i=z_i p_i;
$$ $i=1,...,M$ - -where the $^+$ denotes the Moore–Penrose inverse, also known as the pseudo-inverse. Singular value decomposition can be employed to compute the pseudo-inverse. - -If noise is present in the system, $[Y_1]$ and $[Y_2]$ are combined in a general data matrix, $[Y]$: - -[Y]= - -\begin{bmatrix} - -y(0) & y(1) & \cdots & y(L)\\ - -y(1) & y(2) & \cdots & y(L+1)\\ - -\vdots & \vdots & \ddots & \vdots\\ - -y(N-L-1) & y(N-L) & \cdots & y(N-1) - -\end{bmatrix}_{(N-L) \times (L+1)} - -where $y$ is the noisy data. For efficient filtering, L is chosen between $ \frac{N}{3}$ and $ \frac{N}{2}$. A singular value decomposition on $[Y]$ yields:
$$
[Y]=[U][\Sigma][V]^H
$$ - -In this decomposition, $[U]$ and $[V]$ are unitary matrices whose columns are the eigenvectors of $[Y][Y]^H$ and $[Y]^H[Y]$, respectively, and $[\Sigma]$ is a diagonal matrix with the singular values of $[Y]$. Superscript $H$ denotes the conjugate transpose. - -Then the parameter $M$ is chosen for filtering. Singular values after $M$, which are below the filtering threshold, are set to zero; for an arbitrary singular value $\sigma_c$, the threshold is given by the following formula:
$$
\frac{\sigma_c}{\sigma_{max}}=10^{-p},
$$ - -where $\sigma_{max}$ and $p$ are the maximum singular value and the number of significant decimal digits, respectively. For data with significant digits accurate up to $p$, singular values below $10^{-p}$ are considered noise. - -$[V_1']$ and $[V_2']$ are obtained by removing the last and the first row of the filtered matrix $[V']$, respectively; the first $M$ columns of $[\Sigma]$ form $[\Sigma']$. Filtered $[Y_1]$ and $[Y_2]$ matrices are obtained as:
$$
[Y_1]=[U][\Sigma'][V_1']^H
$$
$$
[Y_2]=[U][\Sigma'][V_2']^H
$$ - -Prefiltering can be used to combat noise and enhance signal-to-noise ratio (SNR). The band-pass matrix pencil (BPMP) method is a modification of the GPOF method via FIR or IIR band-pass filters. - -GPOF can handle up to 25 dB SNR.
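The noiseless pencil identity above can be exercised directly. The following is a minimal Python/NumPy sketch with assumed poles and residues; it skips the SVD filtering stage, which only matters for noisy data, and computes the poles as eigenvalues of $[Y_1]^+[Y_2]$.

```python
import numpy as np

# Noiseless matrix-pencil sketch: recover poles z_i from samples x[k].
z_true = np.array([0.9 * np.exp(1j * 0.3), 0.7 * np.exp(-1j * 1.1)])
R_true = np.array([1.0 + 0.5j, 2.0 - 1.0j])

N, L, M = 40, 15, len(z_true)          # M <= L <= N - M holds
k = np.arange(N)
x = (R_true[None, :] * z_true[None, :] ** k[:, None]).sum(axis=1)

# Hankel-structured pencil matrices Y1 (x[0..]) and Y2 (shifted by one).
Y1 = np.array([[x[i + j] for j in range(L)] for i in range(N - L)])
Y2 = np.array([[x[i + j + 1] for j in range(L)] for i in range(N - L)])

# The nonzero eigenvalues of pinv(Y1) @ Y2 are the poles z_i.
w = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
z_est = w[np.argsort(-np.abs(w))][:M]  # keep the M largest in modulus
print(np.sort_complex(z_est))
print(np.sort_complex(z_true))
```

The residues could then be recovered from the least-squares Vandermonde system described next.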
For GPOF, as well as for BPMP, the variance of the estimates approximately reaches the Cramér–Rao bound. - -Residues of the complex poles are obtained through the least squares problem: - -\begin{bmatrix} - -y(0) \\ y(1) \\ \vdots \\ y(N-1) - -\end{bmatrix} = - -\begin{bmatrix} - -1 & 1 & \cdots & 1 \\ - -z_1 & z_2 & \cdots & z_M \\ - -\vdots & \vdots & \ddots & \vdots \\ - -z_1^{N-1} & z_2^{N-1} & \cdots & z_M^{N-1} - -\end{bmatrix} - -\begin{bmatrix} - -R_1 \\ R_2 \\ \vdots \\ R_M - -\end{bmatrix} - -The method is generally used for the evaluation of Sommerfeld integrals in the discrete complex image method for the method of moments, where the spectral Green's function is approximated as a sum of complex exponentials. Additionally, the method is used in antenna analysis, S-parameter estimation in microwave integrated circuits, wave propagation analysis, moving target indication and radar signal processing. diff --git a/wiki/wikipedia/3620.txt b/wiki/wikipedia/3620.txt deleted file mode 100644 index 7f175ea3c0f7a9a4c7d5ccb846e84d2414d24d88..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3620.txt +++ /dev/null @@ -1,5 +0,0 @@ -In probability theory, the Mabinogion sheep problem or Mabinogion urn is a problem in stochastic control introduced by David Williams, who named it after a herd of magic sheep in the Welsh epic the Mabinogion. - -At time t = 0 there is a herd of sheep each of which is black or white. At each time t = 1, 2, ... a sheep is selected at random, and a sheep of the opposite color (if one exists) is changed to be the same color as the selected sheep. At any time one may remove as many sheep (of either color) as one wishes from the flock. The problem is to do this in such a way as to maximize the expected final number of black sheep. - -The optimal solution at each step is to remove just enough white sheep so that there are more black sheep than white sheep. diff --git a/wiki/wikipedia/3621.txt b/wiki/wikipedia/3621.txt deleted file mode 100644 index a286f4d416a8cbf2cddd1fdbbba8d0da3aa5f78d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3621.txt +++ /dev/null @@ -1,87 +0,0 @@ -In computational complexity theory, a branch of computer science, the Max/min CSP/Ones classification theorems state necessary and sufficient conditions that determine the complexity classes of problems about satisfying a subset S of boolean relations. They are similar to Schaefer's dichotomy theorem, which classifies the complexity of satisfying finite sets of relations; however, the Max/min CSP/Ones classification theorems give information about the complexity of approximating an optimal solution to a problem defined by S. - -Given a set S of clauses, the Max constraint satisfaction problem (CSP) is to find the maximum number (in the weighted case: the maximal sum of weights) of satisfiable clauses in S. Similarly, the Min CSP problem is to minimize the number of unsatisfied clauses. The Max Ones problem is to maximize the number of boolean variables in S that are set to 1 under the restriction that all clauses are satisfied, and the Min Ones problem is to minimize this number. - -When using the classifications below, the problem's complexity class is determined by the topmost classification that it satisfies. - -We define for brevity some terms here, which are used in the classifications below.
- -* PO stands for Polynomial time optimizable; problems for which finding the optimum can be done in polynomial time, so that approximation to arbitrary precision can also clearly be done in polynomial time. - -* Conjunctive normal form is abbreviated CNF below. - -* X(N)OR-SAT stands for a satisfiability problem which is the AND of several boolean linear equations that can be written as XOR clauses. Exactly one literal in each XOR clause must be negated (e.g. $x_1 \oplus \lnot x_2 \oplus x_3 = 1$). See XOR-SAT. - -* Min UnCut-complete refers to a complexity class historically defined in terms of a problem named Min UnCut. Such problems are APX-hard but with an $O(\sqrt{\log n})$ factor approximation. - -* Min 2CNF-Deletion-complete is another complexity class historically defined via a problem. Such problems are APX-hard but with an $O(\sqrt{\log n})$ approximation. - -* Nearest Codeword-complete is yet another such complexity class. Such problems are inapproximable to within a $2^{\log^{1-\epsilon}(n)}$ factor for some $\epsilon$. - -* Min Horn-Deletion-complete is yet another such complexity class. Such problems are inapproximable to within a $2^{\log^{1-\epsilon}(n)}$ factor for some $\epsilon$, but are in Poly-APX, so they have some polynomial factor approximation. - -The following conditions comprise the classification theorem for Max CSP problems. - -# If setting all variables true or all variables false satisfies all clauses, it is in PO. - -# If all clauses, when converted to disjunctive normal form, have two terms, one consisting of all positive (unnegated) variables and the other all negated variables, it is in PO. - -# Otherwise, the problem is APX-complete. - -The following conditions comprise the classification theorem for Max Ones problems. - -# If setting all variables true satisfies all clauses, it is in PO. - -# If each clause can be written as the CNF of Dual-Horn subclauses, it is in PO. - -# If it is an instance of 2-X(N)OR-SAT, which is X(N)OR-SAT with two variables per linear equation, it is in PO. - -# If it is an instance of X(N)OR-SAT but not 2-X(N)OR-SAT, it is APX-complete. - -# If each clause can be written as the CNF of Horn subclauses, it is Poly-APX-complete. - -# If it is an instance of 2-CNF-SAT, it is Poly-APX-complete. - -# If setting all or all but one variable false satisfies each clause, it is Poly-APX-complete. - -# It is NP-hard to distinguish between an answer of 0 and a nonzero answer if setting all variables false satisfies all clauses. - -# Otherwise, it is NP-hard to find even a feasible solution. - -The following conditions comprise the classification theorem for Min CSP problems. - -# If setting all variables false or all variables true satisfies all clauses, it is in PO. - -# If all clauses, when converted to disjunctive normal form, have two terms, one consisting of all positive (unnegated) variables and the other all negated variables, it is in PO. - -# If all clauses are the OR of O(1) variables, it is APX-complete. - -# If it is an instance of 2-X(N)OR-SAT, it is Min UnCut-complete. - -# If it is an instance of X(N)OR-SAT but not 2-X(N)OR-SAT, it is Nearest Codeword-complete. - -# If it is an instance of 2-CNF-SAT, it is Min 2CNF-Deletion-complete. - -# If all clauses are Horn or Dual-Horn, it is Min Horn Deletion-complete. - -# Otherwise, distinguishing between an answer of 0 and a nonzero answer is NP-complete. - -The following conditions comprise the classification theorem for Min Ones problems. 
- -# If setting all variables false satisfies all clauses, it is in PO. - -# If each clause can be written as a CNF of Horn subclauses, it is in PO. - -# If it is an instance of 2-X(N)OR-SAT, it is in PO. - -# If it is an instance of 2-CNF-SAT, it is APX-complete. - -# If all clauses are the OR of O(1) variables, it is APX-complete. - -# If it is an instance of X(N)OR-SAT but not 2-X(N)OR-SAT, it is Nearest Codeword-complete. - -# If each clause can be written as a CNF of Dual-Horn subclauses, it is Min Horn Deletion-complete. - -# If setting all variables true satisfies all clauses, it is Poly-APX-complete. - -# Otherwise, it is NP-hard to even find a feasible solution. diff --git a/wiki/wikipedia/3622.txt b/wiki/wikipedia/3622.txt deleted file mode 100644 index f98c1fee2043cef751006a367f2b571f313c661a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3622.txt +++ /dev/null @@ -1,33 +0,0 @@ -Biconditional elimination is the name of two valid rules of inference of propositional logic. It allows one to infer a conditional from a biconditional. If $P \leftrightarrow Q$ is true, then one may infer that $P \to Q$ is true, and also that $Q \to P$ is true. For example, if it's true that I'm breathing if and only if I'm alive, then it's true that if I'm breathing, I'm alive; likewise, it's true that if I'm alive, I'm breathing. The rules can be stated formally as:
$$
\frac{P \leftrightarrow Q}{\therefore P \to Q}
$$ - -and
$$
\frac{P \leftrightarrow Q}{\therefore Q \to P}
$$ - -where the rule is that wherever an instance of "$P \leftrightarrow Q$" appears on a line of a proof, either "$P \to Q$" or "$Q \to P$" can be placed on a subsequent line. - -The biconditional elimination rule may be written in sequent notation:
$$
(P \leftrightarrow Q) \vdash (P \to Q)
$$ - -and
$$
(P \leftrightarrow Q) \vdash (Q \to P)
$$ - -where $\vdash$ is a metalogical symbol meaning that $P \to Q$, in the first case, and $Q \to P$ in the other are syntactic consequences of $P \leftrightarrow Q$ in some logical system; - -or as the statement of a truth-functional tautology or theorem of propositional logic:
$$
(P \leftrightarrow Q) \to (P \to Q)
$$
$$
(P \leftrightarrow Q) \to (Q \to P)
$$ - -where $P$ and $Q$ are propositions expressed in some formal system. diff --git a/wiki/wikipedia/3623.txt b/wiki/wikipedia/3623.txt deleted file mode 100644 index 00b0418d8ed21836a1f1f9dcc68a76fcf89648a4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3623.txt +++ /dev/null @@ -1,13 +0,0 @@ -In quantum mechanics, the Landau–Yang theorem is a selection rule for particles that decay into two on-shell photons. The theorem states that a massive particle with spin 1 cannot decay into two photons. - -A photon here is any particle with spin 1, without mass and without internal degrees of freedom. The photon is the only known particle with these properties. - -The theorem has several consequences in particle physics. For example: - -* The meson ρ cannot decay into two photons, unlike the neutral pion, which almost always decays into this final state (98.8% of the time). - -* The boson Z cannot decay into two photons. - -* The Higgs boson, whose spin was not measured before 2013, but whose decay into two photons was observed in 2012, cannot have spin 1 in models that assume the Landau–Yang theorem.
diff --git a/wiki/wikipedia/3624.txt b/wiki/wikipedia/3624.txt deleted file mode 100644 index 52823be73f7de9271e0e26d61a17f1e37c496416..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3624.txt +++ /dev/null @@ -1,13 +0,0 @@ -The rectilinear Steiner tree problem, minimum rectilinear Steiner tree problem (MRST), or rectilinear Steiner minimum tree problem (RSMT) is a variant of the geometric Steiner tree problem in the plane, in which the Euclidean distance is replaced with the rectilinear distance. The problem may be formally stated as follows: given n points in the plane, it is required to interconnect them all by a shortest network which consists only of vertical and horizontal line segments. It can be shown that such a network is a tree whose vertices are the input points plus some extra points (Steiner points). - -It is known that the search for the RSMT may be restricted to the Hanan grid, constructed by drawing vertical and horizontal lines through each vertex. - -The RSMT is an NP-hard problem, and as with other NP-hard problems, common approaches to tackle it are approximation algorithms, heuristic algorithms, and separation of efficiently solvable special cases. An overview of the approaches to the problem may be found in the 1992 book by Hwang, Richards and Winter, The Steiner Tree Problem. - -The single-trunk Steiner tree is a tree that consists of a single horizontal segment and some vertical segments. A minimum single-trunk Steiner tree (MSTST) may be found in linear time. - -The idea is that STSTs for a given point set essentially have only one "degree of freedom", which is the position of the horizontal trunk. Further, it is easy to see that if the Y-axis is split into segments by the Y-coordinates of the input points, then the length of a STST is constant within any such segment. Finally, it will be minimal if the trunk has the closest possible numbers of points below and above it. Therefore an optimal position of the trunk is given by a median of the set of Y-coordinates of the points, which may be found in linear time. Once the trunk is found, the vertical segments may be easily computed. Notice however that while the construction of the connecting net takes linear time, the construction of the tree which involves both input points and Steiner points as its vertices will require O(n log n) time, since it essentially accomplishes sorting of the X-coordinates of the input points (along the split of the trunk into the edges of the tree). - -An MSTST is fast to compute but is a poor approximation of the MRST. A better approximation, called the refined single trunk tree, may be found in O(n log n) time. It is optimal for point sets of sizes up to 4. - -A number of algorithms exist which start from the rectilinear minimum spanning tree (RMST; the minimum spanning tree in the plane with rectilinear distance) and try to decrease its length by introducing Steiner points. The RMST itself may be up to 1.5 times longer than the MRST.
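The trunk-placement idea translates directly into a few lines of code (a Python/NumPy sketch computing only the tree's length; the sample points are an illustrative assumption, and `np.median` stands in for a linear-time selection):

```python
import numpy as np

def single_trunk_length(points):
    """Length of a minimum single-trunk Steiner tree: one horizontal
    trunk at a median y-coordinate plus a vertical drop per point."""
    pts = np.asarray(points, dtype=float)
    xs, ys = pts[:, 0], pts[:, 1]
    trunk_y = np.median(ys)                 # optimal trunk position
    trunk = xs.max() - xs.min()             # horizontal trunk span
    verticals = np.abs(ys - trunk_y).sum()  # vertical segments
    return trunk + verticals

print(single_trunk_length([(0, 0), (1, 2), (2, 1), (3, 3)]))  # 7.0
```

The median minimizes the sum of the vertical distances, which is exactly the balance condition on the numbers of points below and above the trunk described above.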
diff --git a/wiki/wikipedia/3625.txt b/wiki/wikipedia/3625.txt deleted file mode 100644 index ddcffdf6ac7a8dc172f317ae6ba2d74e462c0ff5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3625.txt +++ /dev/null @@ -1,58 +0,0 @@ -In mathematics, the Chern theorem (or the Chern–Gauss–Bonnet theorem after Shiing-Shen Chern, Carl Friedrich Gauss, and Pierre Ossian Bonnet) states that the Euler-Poincaré characteristic (a topological invariant defined as the alternating sum of the Betti numbers of a topological space) of a closed even-dimensional Riemannian manifold is equal to the integral of a certain polynomial (the Euler class) of its curvature form (an analytical invariant). - -It is a highly non-trivial generalization of the classic Gauss–Bonnet theorem (for 2-dimensional manifolds / surfaces) to higher even-dimensional Riemannian manifolds. In 1943, Carl B. Allendoerfer and André Weil proved a special case for extrinsic manifolds. In a classic paper published in 1944, Shiing-Shen Chern proved the theorem in full generality, connecting global topology with local geometry. - -Riemann-Roch and Atiyah-Singer are other generalizations of the Gauss-Bonnet theorem. - -One useful form of the Chern theorem is that for a closed Riemannian manifold $M$ of dimension $2n$ with curvature form $\Omega$,
$$
\chi(M) = \frac{1}{(2\pi)^n} \int_M \operatorname{Pf}(\Omega),
$$ - -where $\operatorname{Pf}$ denotes the Pfaffian. - -The theorem has also found numerous applications in physics, including: - -* adiabatic phase or Berry's phase, - -* string theory, - -* condensed matter physics, - -* topological quantum field theory, - -* topological phases of matter (see the 2016 Nobel Prize in Physics awarded to Duncan Haldane et al.). - -In dimension $2n=4$, for a compact oriented manifold, we get
$$
\chi(M) = \frac{1}{8\pi^2} \int_M \left( |\text{Riem}|^2 - 4 |\text{Ric}|^2 + R^2 \right) d\mu
$$ - -where $\text{Riem}$ is the full Riemann curvature tensor, $\text{Ric}$ is the Ricci curvature tensor, and $R$ is the scalar curvature. This is particularly important in general relativity, where spacetime is viewed as a 4-dimensional manifold. - -When M is a compact, even-dimensional hypersurface in $\mathbb{R}^{n+1}$ we get
$$
\int_M KdV = \frac{1}{2}\gamma_n\chi(M)
$$ - -where dV is the volume element of the hypersurface, $K$ is the Jacobian determinant of the Gauss map, and $\gamma_n$ is the surface area of the unit n-sphere. - -The Gauss–Bonnet theorem is a special case when M is a 2-dimensional manifold. It arises as the special case where the topological index is defined in terms of Betti numbers and the analytical index is defined in terms of the Gauss–Bonnet integrand. - -As with the two-dimensional Gauss–Bonnet theorem, there are generalizations when M is a manifold with boundary. - -A far-reaching generalization of the Gauss–Bonnet theorem is the Atiyah–Singer Index Theorem. - -Let $D$ be a weakly elliptic differential operator between vector bundles. That means that the principal symbol is an isomorphism. Strong ellipticity would furthermore require the symbol to be positive-definite. - -Let $D^*$ be its adjoint operator. Then the analytical index is defined as - -dim(ker(D)) − dim(ker(D*)). - -By ellipticity this is always finite. The index theorem says that this is constant as the elliptic operator is varied smoothly. It is equal to a topological index, which can be expressed in terms of characteristic classes like the Euler class. - -The Chern–Gauss–Bonnet theorem is derived by considering the Dirac operator
$$
D = d + d^*.
$$ - -The Chern formula is defined only for even dimensions because the Euler characteristic vanishes in odd dimensions.
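In the two-dimensional special case the statement reduces to the classical Gauss–Bonnet theorem, which can be checked numerically on the unit sphere, where $K = 1$ and $\chi(S^2) = 2$ (a Python sketch using a simple Riemann sum):

```python
import numpy as np

# Gauss-Bonnet on the unit sphere S^2: K = 1, dA = sin(theta) dtheta dphi,
# so (1/2pi) * integral of K dA should equal chi(S^2) = 2.
theta = np.linspace(0.0, np.pi, 200001)   # polar angle
dtheta = theta[1] - theta[0]
area = np.sum(2.0 * np.pi * np.sin(theta)) * dtheta   # ~ 4*pi
print(area / (2.0 * np.pi))                           # ~ 2.0
```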
There is some research being done on 'twisting' the index theorem in K-theory to give non-trivial results for odd dimensions. - -There is also a version of Chern's formula for orbifolds. - -Shiing-Shen Chern published his proof of the theorem in 1944 while at the Institute for Advanced Study. This was historically the first time that the formula was proven without assuming the manifold to be embedded in a Euclidean space, which is what "intrinsic" means here. The special case for a hypersurface (an (n-1)-dimensional submanifold of an n-dimensional Euclidean space) was proved by H. Hopf, in which the integrand is the Gauss-Kronecker curvature (the product of all principal curvatures at a point of the hypersurface). This was generalized independently by Allendoerfer in 1939 and Fenchel in 1940 to a Riemannian submanifold of a Euclidean space of any codimension, for which they used the Lipschitz-Killing curvature (the average of the Gauss-Kronecker curvature along each unit normal vector over the unit sphere in the normal space; for an even-dimensional submanifold, this is an invariant depending only on the Riemann metric of the submanifold). Their result would be valid for the general case if the Nash embedding theorem could be assumed. However, this theorem was not available then, as John Nash published his famous embedding theorem for Riemannian manifolds in 1956. In 1943 Allendoerfer and Weil published their proof for the general case, in which they first used an approximation theorem of H. Whitney to reduce the case to analytic Riemannian manifolds, then embedded "small" neighborhoods of the manifold isometrically into a Euclidean space with the help of the Cartan-Janet local embedding theorem, so that they could patch these embedded neighborhoods together and apply the above theorem of Allendoerfer and Fenchel to establish the global result. This is, of course, unsatisfactory, because the theorem only involves intrinsic invariants of the manifold, so its validity should not rely on an embedding into a Euclidean space. Weil met Chern in Princeton after Chern arrived in August 1943. He told Chern that he believed there should be an intrinsic proof, which Chern was able to obtain within two weeks. The result is Chern's classic paper "A simple intrinsic proof of the Gauss-Bonnet formula for closed Riemannian manifolds", published in the Annals of Mathematics the next year. The earlier work of Allendoerfer, Fenchel, and Allendoerfer and Weil was cited by Chern in this paper. The work of Allendoerfer and Weil was also cited by Chern in his second paper related to the same topic. diff --git a/wiki/wikipedia/3626.txt b/wiki/wikipedia/3626.txt deleted file mode 100644 index 1fa5149b5df0edd97dc9da121942f6a388f12fc7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3626.txt +++ /dev/null @@ -1,7 +0,0 @@ -In real-time computing, the priority ceiling protocol is a synchronization protocol for shared resources to avoid unbounded priority inversion and mutual deadlock due to wrong nesting of critical sections. In this protocol each resource is assigned a priority ceiling, which is a priority equal to the highest priority of any task which may lock the resource. The protocol works by temporarily raising the priorities of tasks in certain situations, thus it requires a scheduler that supports dynamic priority scheduling. - -There are two variants of the protocol: Original Ceiling Priority Protocol (OCPP) and Immediate Ceiling Priority Protocol (ICPP).
The worst-case behaviour of the two ceiling schemes is identical from a scheduling view point. Both variants work by temporarily raising the priorities of tasks. - -In OCPP, a task X's priority is raised when a higher-priority task Y tries to acquire a resource that X has locked. The task's priority is then raised to the priority ceiling of the resource, ensuring that task X quickly finishes its critical section, unlocking the resource. A task is only allowed to lock a resource if its dynamic priority is higher than the priority ceilings of all resources locked by other tasks. Otherwise the task becomes blocked, waiting for the resource. - -It is also known as "Highest Locker's Priority Protocol" (HLP). diff --git a/wiki/wikipedia/3627.txt b/wiki/wikipedia/3627.txt deleted file mode 100644 index db9679a846bab2495e73778b3f9b132c7d30ae0c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3627.txt +++ /dev/null @@ -1,27 +0,0 @@ -Westi (Westinghouse Teleprocessing Interface System) was one of two early local teleprocessing packages for IBM's DOS/VSE environment. Westi stood for Westinghouse Terminal Interactive. Westi provided an interface and access method for programmers to 'talk' to monitors and handle data entry. Such access methods later became known as APIs and the handlers a form of transaction processing. - -In 1981, WESTI was considered the main competitor to CICS, holding second place in market share for IBM mainframe transaction processing monitors. - -Initially written for the IBM 2260 running under DOS on IBM mainframes, the original product was offered free for IBM users. With the advent of DOS/VS and the IBM 3270 series terminals, Westinghouse realized they could recover part of their development costs and commercialized the product, circa 1970. The company added transparent remote access about 1980. - -Westi consumed less memory than CICS, which was attractive in the highly memory-constrained computing environments of the 1970s, in which 256KB was considered a large amount of memory. - -Westi operated as an application's mainline program and, like IBM's soon to follow CICS, programmers wrote subroutines to read and write data to and from terminals and discs. This real time paradigm became known as transaction processing. - -This differed from Westi's primary competitor, DUCS, which reversed that model in that it was a subroutine package that read from and wrote to monitors. While Westi was not as easy to program and use as DUCS, Westi (like CICS) handled task management. - -In terms of speed, Westi fell between DUCS and the considerably more process-bound CICS. - -Steve Robert O'Donnell wrote the original DOS 2260 package, which was distributed free of charge. Its popularity made Westinghouse realize Westi had potential as a commercial product. - -In 1972, IBM released DOS/VS with the IBM/370 and the first IBM 3270 terminals, and the Westinghouse Software group began a rewrite for new products. Several new team members were assigned, including John Gaston, who took over lead development following the departure of Steve O'Donnell in the latter 1970s. (Steve O'Donnell went on to found GOAL Systems, Inc.) - -Westinghouse Marketing suffered a schism about the same time, and the result was that Europe established an independent subsidiary, Westinghouse Electric Management Systems, SA, or WEMSSA, headquartered in Paris. At that point, the Westinghouse product line, WDU and WESTI, bifurcated, taking independent development paths. 
The original development team moved to Orlando, Florida, where it eventually came under the management of Dr. Ray Ferguson and focused on integration with VSE and matching features with CICS. - -WEMSSA, under the direction of Eric Lutaud, contracted with GOAL Systems and eventually developer Leigh Lundin (author of DUCS Remote) for development, which focused on adding remote teleprocessing in Avignon, France. The result was WestiTAM, a 4k bi-sync module, which the Florida group expressed an interest in. - -In 1978, WEMSSA resumed relations with the Florida group and eventually the two merged, coming under the new director of WEMSSA in London, David Hazlewood. Westinghouse committed to remerging the product line, re-engineering new products under the direction of Dr. Ferguson and Leigh Lundin. However, part way into development, Westinghouse began to break up the division during the outsourcing thrust of the Reaganomics era. Through badly managed negotiations, Westinghouse ended up with neither developers nor outsourcing partners, which spelled the end for one of the industry's foremost software groups. - -Westinghouse Electric Management Systems, SA (WEMSSA), had sales offices in Pittsburgh, San Jose, Paris, Lyon, London, Geneva, Zürich, Munich, and Amsterdam. Development offices were in Orlando, with further development in Columbus, Ohio and Avignon, France. diff --git a/wiki/wikipedia/3628.txt b/wiki/wikipedia/3628.txt deleted file mode 100644 index f38cd68f07d2fef0cd3a147e38f4a53fff73442d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3628.txt +++ /dev/null @@ -1,17 +0,0 @@ -NetworkX is a Python library for studying graphs and networks. NetworkX is free software released under the BSD-new license. Its features include: - -* Classes for graphs and digraphs. - -* Conversion of graphs to and from several formats. - -* Ability to construct random graphs or construct them incrementally. - -* Ability to find subgraphs, cliques, k-cores. - -* Explore adjacency, degree, diameter, radius, center, betweenness, etc. - -* Draw networks in 2D and 3D. - -NetworkX is suitable for operation on large real-world graphs: e.g., graphs in excess of 10 million nodes and 100 million edges. Due to its dependence on a pure-Python "dictionary of dictionaries" data structure, NetworkX is a reasonably efficient, very scalable, highly portable framework for network and social network analysis. - -NetworkX is integrated into SageMath. diff --git a/wiki/wikipedia/3629.txt b/wiki/wikipedia/3629.txt deleted file mode 100644 index 9126fe4f5507f9d80ed392265ba8049c1c34103b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3629.txt +++ /dev/null @@ -1,33 +0,0 @@ -In computer science, in the field of databases, a write–read conflict, also known as reading uncommitted data, is a computational anomaly associated with interleaved execution of transactions. - -Given a schedule S - -S = \begin{bmatrix} - -T1 & T2 \\ - -R(A) & \\ - -W(A) & \\ - -& R(A) \\ - -& W(A)\\ - -& R(B) \\ - -& W(B) \\ - -& Com. \\ - -R(B) & \\ - -W(B) & \\ - -Com. & \end{bmatrix} - -T2 could read a database object A, modified by T1, which hasn't committed. This is a dirty read. - -T1 may write some value into A which makes the database inconsistent. It is possible that interleaved execution can expose this inconsistency and lead to an inconsistent final database state, violating ACID rules. - -Strict 2PL overcomes this inconsistency by locking T2 out from performing a Read/Write on A.
Note however that Strict 2PL can have a number of drawbacks, such as the possibility of deadlocks. diff --git a/wiki/wikipedia/363.txt b/wiki/wikipedia/363.txt deleted file mode 100644 index 81d786cd2316bf6176f9287aed75067d9942a5a9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/363.txt +++ /dev/null @@ -1,25 +0,0 @@ -In computer science, lambda calculi are said to have explicit substitutions if they pay special attention to the formalization of the process of substitution. This is in contrast to the standard lambda calculus where substitutions are performed by beta reductions in an implicit manner which is not expressed within the calculus; the "freshness" conditions in such implicit calculi are a notorious source of errors. The concept has appeared in a large number of published papers in quite different fields, such as in abstract machines, predicate logic, and symbolic computation. - -A simple example of a lambda calculus with explicit substitution is "λx", which adds one new form of term to the lambda calculus, namely the form M⟨x:=N⟩, which reads "M where x will be substituted by N". (The meaning of the new term is the same as the common idiom let x:=N in M from many programming languages.) λx can be written with the following rewriting rules: - -# (λx.M) N → M⟨x:=N⟩ - -# x⟨x:=N⟩ → N - -# x⟨y:=N⟩ → x (x≠y) - -# (M1M2) ⟨x:=N⟩ → (M1⟨x:=N⟩) (M2⟨x:=N⟩) - -# (λx.M) ⟨y:=N⟩ → λx.(M⟨y:=N⟩) (x≠y and x not free in N) - -While making substitution explicit, this formulation still retains the complexity of the lambda calculus "variable convention", requiring arbitrary renaming of variables during reduction to ensure that the "(x≠y and x not free in N)" condition on the last rule is always satisfied before applying the rule. Therefore many calculi of explicit substitution avoid variable names altogether by using a so-called "name-free" De Bruijn index notation. - -Explicit substitutions were sketched in the preface of Curry's book on Combinatory logic - -and grew out of an ‘implementation trick’ used, for example, by AUTOMATH, and became a respectable syntactic theory in lambda calculus and rewriting theory. Though it actually originated with de Bruijn, the idea of a specific calculus where substitutions are part of the object language, and not of the informal meta-theory, is traditionally credited to Abadi, Cardelli, Curien, and Lévy. Their seminal paper on the λσ calculus explains that implementations of lambda calculus need to be very careful when dealing with substitutions. Without sophisticated mechanisms for structure-sharing, substitutions can cause a size explosion, and therefore, in practice, substitutions are delayed and explicitly recorded. This makes the correspondence between the theory and the implementation highly non-trivial and correctness of implementations can be hard to establish. One solution is to make the substitutions part of the calculus, that is, to have a calculus of explicit substitutions. - -Once substitution has been made explicit, however, the basic properties of substitution change from being semantic to syntactic properties. One most important example is the "substitution lemma", which with the notation of λx becomes - -* (M⟨x:=N⟩)⟨y:=P⟩ = (M⟨y:=P⟩)⟨x:=(N⟨y:=P⟩)⟩ (where x≠y and x not free in P) - -A surprising counterexample, due to Melliès, shows that the way this rule is encoded in the original calculus of explicit substitutions is not strongly normalizing. 
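To make the rules above concrete, here is a toy interpreter for them (a Python sketch; the term representation is our own, and the freshness side condition of rule 5 is simply assumed to hold, i.e. no alpha-renaming is performed):

```python
from dataclasses import dataclass

# Terms of λx: variables, abstractions, applications, and explicit
# substitutions Sub(body, var, term), written body⟨var:=term⟩ above.
@dataclass(frozen=True)
class Var: name: str
@dataclass(frozen=True)
class Lam: var: str; body: object
@dataclass(frozen=True)
class App: fun: object; arg: object
@dataclass(frozen=True)
class Sub: body: object; var: str; term: object

def step(t):
    """One application of the rewrite rules 1-5 at the root, assuming all
    bound variables are distinct and never occur free in substituends."""
    if isinstance(t, App) and isinstance(t.fun, Lam):            # rule 1 (beta)
        return Sub(t.fun.body, t.fun.var, t.arg)
    if isinstance(t, Sub):
        b = t.body
        if isinstance(b, Var):                                   # rules 2 and 3
            return t.term if b.name == t.var else b
        if isinstance(b, App):                                   # rule 4
            return App(Sub(b.fun, t.var, t.term), Sub(b.arg, t.var, t.term))
        if isinstance(b, Lam):                                   # rule 5
            return Lam(b.var, Sub(b.body, t.var, t.term))
    return t

def normalize(t):
    """Rewrite to normal form (loops forever on divergent terms,
    just like ordinary beta reduction)."""
    while True:
        if isinstance(t, App):
            t2 = App(normalize(t.fun), normalize(t.arg))
        elif isinstance(t, Lam):
            t2 = Lam(t.var, normalize(t.body))
        elif isinstance(t, Sub):
            t2 = Sub(normalize(t.body), t.var, normalize(t.term))
        else:
            t2 = t
        t2 = step(t2)
        if t2 == t:
            return t
        t = t2

# (λy.y) z reduces through an explicit substitution y⟨y:=z⟩ to z:
print(normalize(App(Lam('y', Var('y')), Var('z'))))  # Var(name='z')
```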
Following this, a multitude of calculi were described trying to offer the best compromise between syntactic properties of explicit substitution calculi. diff --git a/wiki/wikipedia/3630.txt b/wiki/wikipedia/3630.txt deleted file mode 100644 index 93fbad2547aa0d8687fd97fd83d480755270d413..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3630.txt +++ /dev/null @@ -1,13 +0,0 @@ -The Boolean Pythagorean triples problem is a problem from Ramsey theory about whether the positive integers can be colored red and blue so that no Pythagorean triples consist of all red or all blue members. The Boolean Pythagorean triples problem was solved by Marijn Heule, Oliver Kullmann and Victor W. Marek in May 2016 through a computer-assisted proof. - -The problem asks if it is possible to color each of the positive integers either red or blue, so that no Pythagorean triple of integers a, b, c, satisfying $a^2+b^2=c^2$ are all the same color. - -For example, in the Pythagorean triple 3, 4 and 5 ($3^2+4^2=5^2$), if 3 and 4 are colored red, then 5 must be colored blue. - -Marijn Heule, Oliver Kullmann and Victor W. Marek showed that such a coloring is only possible up to the number 7824. The actual statement of the theorem proved is - -{{math theorem|The set {1, . . . , 7824} can be partitioned into two parts, such that no part contains a Pythagorean triple, while this is impossible for {1, . . . , 7825}. where it won the best paper award. The figure below shows a possible family of colorings for the set {1,...,7824} with no monochromatic Pythagorean triple, and the white squares can be colored either red or blue while still satisfying this condition. - -In the 1980s Ronald Graham offered a $100 prize for the solution of the problem, which has now been awarded to Marijn Heule. - -As of 2018, the problem is still open for more than 2 colors, that is, if there exists a k-coloring (k ≥ 3) of the positive integers such that no Pythagorean triples are the same color. diff --git a/wiki/wikipedia/3631.txt b/wiki/wikipedia/3631.txt deleted file mode 100644 index afd313fed72fce102da55d2611fcdc2db5bc9a07..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3631.txt +++ /dev/null @@ -1,126 +0,0 @@ -In probability theory, if a large number of events are all independent of one another and each has probability less than 1, then there is a positive (possibly small) probability that none of the events will occur. The Lovász local lemma allows one to relax the independence condition slightly: As long as the events are "mostly" independent from one another and aren't individually too likely, then there will still be a positive probability that none of them occurs. It is most commonly used in the probabilistic method, in particular to give existence proofs. - -There are several different versions of the lemma. The simplest and most frequently used is the symmetric version given below. A weaker version was proved in 1975 by László Lovász and Paul Erdős in the article Problems and results on 3-chromatic hypergraphs and some related questions. For other versions, see . In 2020, Robin Moser and Gábor Tardos received the Gödel Prize for their algorithmic version of the Lovász Local Lemma. - -Let A1, A2,..., Ak be a sequence of events such that each event occurs with probability at most p and such that each event is independent of all the other events except for at most d of them. - -
Lemma I (Lovász and Erdős 1973; published 1975). If -$$ -4 p d \le 1 -$$ - -then there is a nonzero probability that none of the events occurs.
- -
Lemma II (Lovász 1977; published by Joel Spencer). If -$$ -e p (d+1) \le 1, -$$ - -where e = 2.718... is the base of natural logarithms, then there is a nonzero probability that none of the events occurs.
- -Lemma II today is usually referred to as "Lovász local lemma". - -
Lemma III (Shearer 1985). If -$$ -\begin{cases} p < \frac{(d-1)^{d-1}}{d^d} & d > 1\\ p < \tfrac{1}{2} & d = 1 \end{cases} -$$ - -then there is a nonzero probability that none of the events occurs.
- -The threshold in Lemma III is optimal and it implies that the bound -$$ - epd \le 1 -$$ - -is also sufficient. - -A statement of the asymmetric version (which allows for events with different probability bounds) is as follows: - -
Lemma (asymmetric version). Let $ \mathcal{A} = \{ A_1, \ldots, A_n \}$ be a finite set of events in the probability space Ω. For $ A \in \mathcal{A} $ let $ \Gamma(A)$ denote the neighbours of $A$ in the dependency graph (in the dependency graph, event $A$ is not adjacent to the events that are mutually independent of it). If there exists an assignment of reals $x : \mathcal{A} \to [0,1) $ to the events such that -$$ - \forall A \in \mathcal{A} : \Pr(A) \leq x(A) \prod_{B \in \Gamma(A)} (1-x(B)) -$$ - -then the probability of avoiding all events in $ \mathcal{A} $ is positive, in particular -$$ - \Pr\left(\overline{A_1} \wedge \cdots \wedge \overline{A_n} \right) \geq \prod_{i\in \{1,\ldots,n\}} (1-x(A_i)). -$$
- -The symmetric version follows immediately from the asymmetric version by setting -$$ - \forall A \in \mathcal{A} : x(A) = \frac{1}{d+1} -$$ - -to get the sufficient condition -$$ - p \leq \frac{1}{d+1} \cdot \frac{1}{e} -$$ - -since -$$ -\frac{1}{e} \leq \left (1 - \frac{1}{d+1} \right)^d. -$$ - -Note that, as is often the case with probabilistic arguments, this theorem is nonconstructive and gives no method of determining an explicit element of the probability space in which no event occurs. However, algorithmic versions of the local lemma with stronger preconditions are also known (Beck 1991; Czumaj and Scheideler 2000). More recently, a constructive version of the local lemma was given by Robin Moser and Gábor Tardos requiring no stronger preconditions. - -We prove the asymmetric version of the lemma, from which the symmetric version can be derived. By using the principle of mathematical induction we prove that for all $A$ in $\mathcal{A}$ and all subsets $S$ of $\mathcal{A}$ that do not include $A$, $ \Pr\left(A\mid\bigwedge_{B \in S}\overline{B}\right)\leq x(A)$. The induction here is applied on the size (cardinality) of the set $ S $. For the base case $S=\emptyset$ the statement obviously holds since $ \Pr(A_i) \leq x\left(A_i\right) $. We need to show that the inequality holds for any subset of $\mathcal{A} $ of a certain cardinality given that it holds for all subsets of a lower cardinality. - -Let $S_1 = S\cap \Gamma(A), S_2 = S \setminus S_1$. We have from Bayes' theorem -$$ -\Pr\left(A\mid\bigwedge_{B\in S} \overline{B}\right) = \frac{\Pr\left(A\bigwedge_{B\in S_{1}} \overline{B}\mid \bigwedge_{B\in S_2} \overline{B}\right)}{\Pr\left(\bigwedge_{B\in S_1}\overline{B}\mid\bigwedge_{B\in S_2} \overline{B} \right)}. -$$ - -We bound the numerator and denominator of the above expression separately. For this, let $ S_1=\{B_{j1},B_{j2},\ldots,B_{jl}\} $. First, we exploit the fact that $A$ does not depend upon any event in $ S_2 $: -$$ - \text{Numerator} \leq \Pr\left(A\mid\bigwedge_{B\in S_2} \overline{B}\right) = \Pr(A) \leq x(A) \prod_{B\in\Gamma(A)}(1-x(B)). \qquad (1) -$$ - -Expanding the denominator by using Bayes' theorem and then using the inductive assumption, we get - - - -\begin{align} - -& \text{Denominator} \\ - -= {} & \Pr\left(\overline{B}_{j1}\mid\bigwedge_{t=2}^l \overline{B}_{jt}\wedge\bigwedge_{B\in S_2} \overline{B} \right)\cdot \Pr\left(\overline{B}_{j2}\mid\bigwedge_{t=3}^l\overline{B}_{jt}\wedge\bigwedge_{B\in S_2} \overline{B} \right)\cdots \Pr\left(\overline{B}_{jl}\mid\bigwedge_{B\in S_2} \overline{B} \right) \geq \prod_{B\in S_1} (1-x(B)) \qquad (2) - -\end{align} - - - -The inductive assumption can be applied here since each event is conditioned on a smaller number of other events, i.e. on a subset of cardinality less than $|S|$. From (1) and (2), we get -$$ - \Pr\left(A\mid\bigwedge_{B\in S} \overline{B}\right) \leq x(A)\prod_{B\in \Gamma(A)-S_1}(1-x(B)) \leq x(A), -$$ - -since the value of $x$ is always in $[0,1)$. Note that we have essentially proved $ \Pr\left(\overline{A}\mid\bigwedge_{B\in S} \overline{B}\right) \geq 1-x(A) $. To get the desired probability, we write it in terms of conditional probabilities applying Bayes' theorem repeatedly.
Hence, - - - -\begin{align} - -& \Pr\left(\overline{A_1} \wedge \cdots \wedge \overline{A_n} \right) \\ - -= {} & \Pr\left(\overline{A_1}\mid\overline{A_{2}}\wedge \cdots \overline{A_n}\right)\cdot\Pr\left(\overline{A_2}\mid\overline{A_3}\wedge \cdots \overline{A_n}\right) \cdots \Pr\left(\overline{A_n}\right) \\ - -\geq {} & \prod_{A\in\mathcal{A}}(1-x(A)), - -\end{align} - - - -which is what we had intended to prove. - -Suppose 11n points are placed around a circle and colored with n different colors in such a way that each color is applied to exactly 11 points. In any such coloring, there must be a set of n points containing one point of each color but not containing any pair of adjacent points. - -To see this, imagine picking a point of each color randomly, with all points equally likely (i.e., having probability 1/11) to be chosen. The 11n different events we want to avoid correspond to the 11n pairs of adjacent points on the circle. For each pair, the probability of picking both points in that pair is at most 1/121 (exactly 1/121 if the two points are of different colors, otherwise 0), so we will take p = 1/121. - -Whether a given pair (a, b) of points is chosen depends only on what happens in the colors of a and b, and not at all on whether any other collection of points in the other n − 2 colors is chosen. This implies the event "a and b are both chosen" is dependent only on those pairs of adjacent points which share a color either with a or with b. - -There are 11 points on the circle sharing a color with a (including a itself), each of which is involved with 2 pairs. This means there are 21 pairs other than (a, b) which include the same color as a, and the same holds true for b. The worst that can happen is that these two sets are disjoint, so we can take d = 42 in the lemma. This gives -$$ - e p (d+1) \approx 0.966<1. -$$ - -By the local lemma, there is a positive probability that none of the bad events occur, meaning that our set contains no pair of adjacent points. This implies that a set satisfying our conditions must exist. diff --git a/wiki/wikipedia/3632.txt b/wiki/wikipedia/3632.txt deleted file mode 100644 index e74cd25280e80bff2b488359e0514909e0dde5fa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3632.txt +++ /dev/null @@ -1,11 +0,0 @@ -In commutative algebra and algebraic geometry, Popescu's theorem, introduced by Dorin Popescu, - -states: - -Let A be a Noetherian ring and B a Noetherian algebra over it. Then, the structure map A → B is a regular morphism if and only if B is a direct limit of smooth A-algebras. - -For example, if A is a local G-ring (e.g., a local excellent ring) and B its completion, then the map A → B is regular by definition and the theorem applies. - -Another proof of Popescu's theorem was given by Tetsushi Ogoma, while an exposition of the result was provided by Richard Swan. - -The usual proof of the Artin approximation theorem relies crucially on Popescu's theorem. Popescu's result was proved by an alternate method, and somewhat strengthened, by Mark Spivakovsky.
diff --git a/wiki/wikipedia/3633.txt b/wiki/wikipedia/3633.txt deleted file mode 100644 index f5236a1e7d907b9900d5494bb03c87b7c4a0eae3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3633.txt +++ /dev/null @@ -1,67 +0,0 @@ -In mathematics, Viète's formula is the following infinite product of nested radicals representing twice the reciprocal of the mathematical constant pi: - -\frac2\pi = \frac{\sqrt 2}2 \cdot \frac{\sqrt{2+\sqrt 2}}2 \cdot \frac{\sqrt{2+\sqrt{2+\sqrt 2}}}2 \cdots - -The formula is named after François Viète, who published it in 1593. As the first formula of European mathematics to represent an infinite process, it can be given a rigorous meaning as a limit expression, and marks the beginning of mathematical analysis. It has linear convergence, and can be used for calculations of pi, but other methods before and since have led to greater accuracy. It has also been used in calculations of the behavior of systems of springs and masses, and as a motivating example for the concept of statistical independence. - -The formula can be derived as a telescoping product of either the areas or perimeters of nested polygons converging to a circle. Alternatively, repeated use of the half-angle formula from trigonometry leads to a generalized formula, discovered by Leonhard Euler, that has Viète's formula as a special case. Many similar formulas involving nested roots or infinite products are now known. - -François Viète (1540–1603) was a French lawyer, privy councillor to two French kings, and amateur mathematician. He published this formula in 1593 in his work Variorum de rebus mathematicis responsorum, liber VIII. At this time, methods for approximating pi to (in principle) arbitrary accuracy had long been known. Viète's own method can be interpreted as a variation of an idea of Archimedes of approximating the circumference of a circle by the perimeter of a many-sided polygon, used by Archimedes to find the approximation - -\frac{223}{71} < \pi < \frac{22}{7}. - -By publishing his method as a mathematical formula, Viète formulated the first instance of an infinite product known in mathematics, and the first example of an explicit formula for the exact value of pi. As the first representation in European mathematics of a number as the result of an infinite process rather than of a finite calculation, Eli Maor highlights Viète's formula as marking the beginning of mathematical analysis and Jonathan Borwein calls its appearance "the dawn of modern mathematics". - -Using his formula, Viète calculated pi to an accuracy of nine decimal digits. However, this was not the most accurate approximation to pi known at the time, as the Persian mathematician Jamshīd al-Kāshī had calculated pi to an accuracy of nine sexagesimal digits and 16 decimal digits in 1424. Not long after Viète published his formula, Ludolph van Ceulen used a method closely related to Viète's to calculate 35 digits of pi, which were published only after van Ceulen's death in 1610. - -Beyond its mathematical and historical significance, Viète's formula can be used to explain the different speeds of waves of different frequencies in an infinite chain of springs and masses, and the appearance of pi in the limiting behavior of these speeds. Additionally, a derivation of this formula as a product of integrals involving the Rademacher system, equal to the integral of products of the same functions, provides a motivating example for the concept of statistical independence. 
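Numerically, the product converges quickly. Here is a small Python check (not part of the original article; the helper name is illustrative) that multiplies successive factors $a_i/2$, with $a_1 = \sqrt 2$ and $a_n = \sqrt{2 + a_{n-1}}$ as in the limit formulation given next:

```
import math

def viete_partial_product(n_terms):
    """Product of the first n_terms factors of Viete's formula."""
    a = math.sqrt(2.0)
    product = 1.0
    for _ in range(n_terms):
        product *= a / 2.0
        a = math.sqrt(2.0 + a)
    return product

for n in (1, 5, 10, 20):
    approx_pi = 2.0 / viete_partial_product(n)
    print(n, approx_pi, abs(approx_pi - math.pi))
# The error shrinks by roughly a factor of 4 per extra factor, i.e. about
# 0.6 decimal digits per term, matching the convergence rate quoted below.
```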
- -Viète's formula may be rewritten and understood as a limit expression - -\lim_{n \rightarrow \infty} \prod_{i=1}^n \frac{a_i}{2} = \frac2\pi - -where - -\begin{align} a_1 &= \sqrt{2} \\ a_n &= \sqrt{2+a_{n-1}}. \end{align} - -For each choice of $n$, the expression in the limit is a finite product, and as $n$ gets arbitrarily large these finite products have values that approach the value of Viète's formula arbitrarily closely. Viète did his work long before the concepts of limits and rigorous proofs of convergence were developed in mathematics; the first proof that this limit exists was not given until the work of Ferdinand Rudio in 1891. - -The rate of convergence of a limit governs the number of terms of the expression needed to achieve a given number of digits of accuracy. In Viète's formula, the numbers of terms and digits are proportional to each other: the product of the first n terms in the limit gives an expression for pi that is accurate to approximately 0.6n digits. This convergence rate compares very favorably with the Wallis product, a later infinite product formula for pi. Although Viète himself used his formula to calculate pi only with nine-digit accuracy, an accelerated version of his formula has been used to calculate pi to hundreds of thousands of digits. - -Viète's formula may be obtained as a special case of a formula for the sinc function that has often been attributed to Leonhard Euler, more than a century later: - -\frac{\sin x}{x} = \cos\frac{x}{2} \cdot \cos\frac{x}{4} \cdot \cos\frac{x}{8} \cdots - -Substituting x = pi/2 in this formula yields: - -\frac{2}{\pi} = \cos\frac{\pi}{4} \cdot \cos\frac{\pi}{8} \cdot \cos\frac{\pi}{16} \cdots - -Then, expressing each term of the product on the right as a function of earlier terms using the half-angle formula: - -\cos\frac{x}{2} = \sqrt\frac{1+\cos x}{2} - -gives Viète's formula. - -It is also possible to derive from Viète's formula a related formula for pi that still involves nested square roots of two, but uses only one multiplication: - -\pi = \lim_{k\to\infty} 2^{k} \underbrace{\sqrt{2-\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2+\cdots+\sqrt{2}}}}}}}_{k\text{ square roots}}, - -which can be rewritten compactly as - -\begin{align} \pi &= \lim_{k\to\infty}2^k\sqrt{2-a_k} \\[5px] a_1&=0 \\ a_k&=\sqrt{2+a_{k-1}}. \end{align} - -Many formulae for pi and other constants such as the golden ratio are now known, similar to Viète's in their use of either nested radicals or infinite products of trigonometric functions. - -Viète obtained his formula by comparing the areas of regular polygons with $2^n$ and $2^{n+1}$ sides inscribed in a circle. The first term in the product, $\sqrt{2}/2$, is the ratio of areas of a square and an octagon, the second term is the ratio of areas of an octagon and a hexadecagon, etc. Thus, the product telescopes to give the ratio of areas of a square (the initial polygon in the sequence) to a circle (the limiting case of a $2^n$-gon). Alternatively, the terms in the product may be instead interpreted as ratios of perimeters of the same sequence of polygons, starting with the ratio of perimeters of a digon (the diameter of the circle, counted twice) and a square, the ratio of perimeters of a square and an octagon, etc. - -Another derivation is possible based on trigonometric identities and Euler's formula.
- -Repeatedly applying the double-angle formula - -\sin x = 2\sin\frac{x}{2}\cos\frac{x}{2}, - -leads to a proof by mathematical induction that, for all positive integers n, - -\sin x = 2^n \sin\frac{x}{2^n}\left(\prod_{i=1}^n \cos\frac{x}{2^i}\right). - -The term $2^n \sin\frac{x}{2^n}$ goes to x in the limit as n goes to infinity, from which Euler's formula follows. Viète's formula may be obtained from this formula by the substitution x = pi/2. diff --git a/wiki/wikipedia/3634.txt b/wiki/wikipedia/3634.txt deleted file mode 100644 index 7a70ddf75a0701f851bcdc08df625e1c9773a79f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3634.txt +++ /dev/null @@ -1,24 +0,0 @@ -In mathematics, the Chowla–Selberg formula is the evaluation of a certain product of values of the Gamma function at rational values in terms of values of the Dedekind eta function at imaginary quadratic irrational numbers. The result was essentially found by Mathias Lerch in 1897 and rediscovered by Sarvadaman Chowla and Atle Selberg. - -In logarithmic form, the Chowla–Selberg formula states that in certain cases the sum - - \frac{w}{4}\sum_r \chi(r)\log \Gamma\left( \frac{r}{D} \right) = \frac{h}{2}\log(4\pi\sqrt{D}) - -+\sum_\tau\log\left(\sqrt{\Im(\tau)}|\eta(\tau)|^2\right) - - - -can be evaluated using the Kronecker limit formula. Here χ is the quadratic residue symbol modulo D, where −D is the discriminant of an imaginary quadratic field. The sum is taken over 0 < r < D, with the usual convention χ(r) = 0 if r and D have a common factor. The function η is the Dedekind eta function, h is the class number, and w is the number of roots of unity. - -The origin of such formulae is now seen to be in the theory of complex multiplication, and in particular in the theory of periods of an abelian variety of CM-type. This has led to much research and generalization. In particular there is an analog of the Chowla–Selberg formula for p-adic numbers, involving a p-adic gamma function, called the Gross–Koblitz formula. - -The Chowla–Selberg formula gives a formula for a finite product of values of the eta function. By combining this with the theory of complex multiplication, one can give a formula for the individual absolute values of the eta function as -$$ -\Im(\tau)|\eta(\tau)|^4 = \frac{\alpha}{4\pi\sqrt{|D|}} \prod_r\Gamma(r/|D|)^{\chi(r)\frac{w}{2h}} -$$ - -for some algebraic number α. - -Using the reflection formula for the gamma function gives: - -*$\eta(i) = 2^{-1}\pi^{-3/4}\Gamma(\tfrac{1}{4})$ diff --git a/wiki/wikipedia/3635.txt b/wiki/wikipedia/3635.txt deleted file mode 100644 index 660d25498752763a185c1208aabb5ee590306c90..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3635.txt +++ /dev/null @@ -1,17 +0,0 @@ -In the mathematical field of graph theory, a Platonic graph is a graph that has one of the Platonic solids as its skeleton. There are 5 Platonic graphs, and all of them are regular, polyhedral (and therefore by necessity also 3-vertex-connected, vertex-transitive, edge-transitive and planar graphs), and also Hamiltonian graphs.
- -* Tetrahedral graph – 4 vertices, 6 edges - -* Octahedral graph – 6 vertices, 12 edges - -* Cubical graph – 8 vertices, 12 edges - -* Icosahedral graph – 12 vertices, 30 edges - -* Dodecahedral graph – 20 vertices, 30 edges - -*Regular map (graph theory) - -*Archimedean graph - -*Wheel graph diff --git a/wiki/wikipedia/3636.txt b/wiki/wikipedia/3636.txt deleted file mode 100644 index 482ec3bebfddd397fc33f7a7b3d0c9d819adb5bc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3636.txt +++ /dev/null @@ -1,5 +0,0 @@ -In computer science, Language Of Temporal Ordering Specification (LOTOS) is a formal specification language based on temporal ordering of events. LOTOS is used for communications protocol specification in International Organization for Standardization (ISO) Open Systems Interconnection model (OSI) standards. - -LOTOS is an algebraic language that consists of two parts: a part for the description of data and operations, based on abstract data types, and a part for the description of concurrent processes, based on process calculus. - -Work on the standard was completed in 1988, and it was published as ISO 8807 in 1989. Between 1993 and 2001, an ISO committee worked to define a revised version of the LOTOS standard, which was published in 2001 as E-LOTOS. diff --git a/wiki/wikipedia/3637.txt b/wiki/wikipedia/3637.txt deleted file mode 100644 index 027535826af12029c2a30314a73ada7271dd2d2c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3637.txt +++ /dev/null @@ -1,157 +0,0 @@ -In mathematical logic, Goodstein's theorem is a statement about the natural numbers, proved by Reuben Goodstein in 1944, which states that every Goodstein sequence eventually terminates at 0. Kirby and Paris showed that it is unprovable in Peano arithmetic (but it can be proven in stronger systems, such as second-order arithmetic). This was the third example of a true statement that is unprovable in Peano arithmetic, after the examples provided by Gödel's incompleteness theorem and Gerhard Gentzen's 1943 direct proof of the unprovability of ε0-induction in Peano arithmetic. The Paris–Harrington theorem gave another example. - -Laurence Kirby and Jeff Paris introduced a graph-theoretic hydra game with behavior similar to that of Goodstein sequences: the "Hydra" (named for the mythological multi-headed Hydra of Lerna) is a rooted tree, and a move consists of cutting off one of its "heads" (a branch of the tree), to which the hydra responds by growing a finite number of new heads according to certain rules. Kirby and Paris proved that the Hydra will eventually be killed, regardless of the strategy that Hercules uses to chop off its heads, though this may take a very long time. Just like for Goodstein sequences, Kirby and Paris showed that it cannot be proven in Peano arithmetic alone. - -Goodstein sequences are defined in terms of a concept called "hereditary base-n notation". This notation is very similar to usual base-n positional notation, but the usual notation does not suffice for the purposes of Goodstein's theorem. - -In ordinary base-n notation, where n is a natural number greater than 1, an arbitrary natural number m is written as a sum of multiples of powers of n: -$$ -m = a_k n^k + a_{k-1} n^{k-1} + \cdots + a_0, -$$ - -where each coefficient $a_i$ satisfies $0 \le a_i < n$, and $a_k \ne 0$. For example, in base 2, -$$ -35 = 32 + 2 + 1 = 2^5 + 2^1 + 2^0. -$$ - -Thus the base-2 representation of 35 is 100011, which means $2^5 + 2 + 1$.
Similarly, 100 represented in base-3 is 10201: -$$ -100 = 81 + 18 + 1 = 3^4 + 2 \cdot 3^2 + 3^0. -$$ - -Note that the exponents themselves are not written in base-n notation. For example, the expressions above include $2^5$ and $3^4$, and 5 > 2, 4 > 3. - -To convert a base-n representation to hereditary base-n notation, first rewrite all of the exponents in base-n notation. Then rewrite any exponents inside the exponents, and continue in this way until every number appearing in the expression (except the bases themselves) has been converted to base-n notation. - -For example, while 35 in ordinary base-2 notation is $2^5 + 2 + 1$, it is written in hereditary base-2 notation as -$$ -35 = 2^{2^{2^1}+1}+2^1+1, -$$ - -using the fact that $5 = 2^2 + 1$. Similarly, 100 in hereditary base-3 notation is -$$ -100 = 3^{3^1+1} + 2 \cdot 3^2 + 1. -$$ - -The Goodstein sequence G(m) of a number m is a sequence of natural numbers. The first element in the sequence G(m) is m itself. To get the second, G(m)(2), write m in hereditary base-2 notation, change all the 2s to 3s, and then subtract 1 from the result. In general, the (n + 1)-st term, G(m)(n + 1), of the Goodstein sequence of m is as follows: - -* Take the hereditary base-(n + 1) representation of G(m)(n). - -* Replace each occurrence of the base n + 1 with n + 2. - -* Subtract one. (Note that the next term depends both on the previous term and on the index n.) - -* Continue until the result is zero, at which point the sequence terminates. - -Early Goodstein sequences terminate quickly. For example, G(3) goes 3, 3, 3, 2, 1, 0, terminating at the 6th step. - -Later Goodstein sequences increase for a very large number of steps. For example, G(4) starts 4, 26, 41, 60, 83, 109, and so on. Elements of G(4) continue to increase for a while, but at base $3 \cdot 2^{402653209}$, they reach the maximum of $3 \cdot 2^{402653210} - 1$, stay there for the next $3 \cdot 2^{402653209}$ steps, and then begin their descent. - -The value 0 is reached at base $3 \cdot 2^{402653211} - 1$. (Curiously, this is a Woodall number: $3 \cdot 2^{402653211} - 1 = 402653184 \cdot 2^{402653184} - 1$. This is also the case with all other final bases for starting values greater than 4.) - -However, even G(4) doesn't give a good idea of just how quickly the elements of a Goodstein sequence can increase. G(19) increases much more rapidly; it starts 19, 7625597484990, and its next term is $4^{4^4} + 3 = 2^{512} + 3 \approx 1.3 \cdot 10^{154}$. - -In spite of this rapid growth, Goodstein's theorem states that every Goodstein sequence eventually terminates at 0, no matter what the starting value is. - -Goodstein's theorem can be proved (using techniques outside Peano arithmetic, see below) as follows: Given a Goodstein sequence G(m), we construct a parallel sequence P(m) of ordinal numbers which is strictly decreasing and terminates. Then G(m) must terminate too, and it can terminate only when it reaches 0. A common misunderstanding of this proof is to believe that G(m) goes to 0 because it is dominated by P(m). Actually, the fact that P(m) dominates G(m) plays no role at all. The important point is: G(m)(k) exists if and only if P(m)(k) exists (parallelism). Then if P(m) terminates, so does G(m). And G(m) can terminate only when it reaches 0. - -We define a function $f=f(u,k)$ which computes the hereditary base k representation of u and then replaces each occurrence of the base k with the first infinite ordinal number ω. For example, $f(100,3)=f(3^{3^1+1}+2\cdot3^2+1,3)=\omega^{\omega^1+1} + 2\omega^2 + 1 = \omega^{\omega+1} + 2\omega^2 + 1$.
- -Each term P(m)(n) of the sequence P(m) is then defined as f(G(m)(n),n+1). For example, $G(3)(1) = 3 = 2^1 + 2^0$ and $P(3)(1) = f(2^1 + 2^0, 2) = \omega^1 + \omega^0 = \omega + 1$. Addition, multiplication and exponentiation of ordinal numbers are well defined. - -We claim that $f(G(m)(n),n+1) > f(G(m)(n+1),n+2)$: - -Let $G'(m)(n)$ be G(m)(n) after applying the first, base-changing operation in generating the next element of the Goodstein sequence, but before the second minus 1 operation in this generation. Observe that $G(m)(n+1)= G'(m)(n)-1$. - -Then clearly, $f(G(m)(n),n+1) = f(G'(m)(n),n+2)$. Now we apply the minus 1 operation, and $f(G'(m)(n),n+2) > f(G(m)(n+1),n+2)$, as $ G'(m)(n) = G(m)(n+1)+1$. - -For example, $G(4)(1)=2^2$ and $G(4)(2)=2\cdot 3^2 + 2\cdot 3+2$, so $f(2^2,2)=\omega^\omega$ and $f(2\cdot 3^2 + 2\cdot 3+2,3) = 2\omega^2+2\omega+2$, which is strictly smaller. Note that in order to calculate f(G(m)(n),n+1), we first need to write G(m)(n) in hereditary base n+1 notation, as for instance the expression $\omega^\omega-1$ either makes no sense or is equal to $\omega^\omega$. - -Thus the sequence P(m) is strictly decreasing. As the standard order < on ordinals is well-founded, an infinite strictly decreasing sequence cannot exist; equivalently, every strictly decreasing sequence of ordinals terminates. But P(m)(n) is calculated directly from G(m)(n). Hence the sequence G(m) must terminate as well, meaning that it must reach 0. - -While this proof of Goodstein's theorem is fairly easy, the Kirby–Paris theorem, which shows that Goodstein's theorem is not a theorem of Peano arithmetic, is technical and considerably more difficult. It makes use of countable nonstandard models of Peano arithmetic. - -Suppose the definition of the Goodstein sequence is changed so that instead of replacing each occurrence of the base b with b + 1 it replaces it with b + 2. Would the sequence still terminate? - -More generally, let $b_1, b_2, b_3, \ldots$ be any sequence of integers. Then let the (n + 1)-st term G(m)(n + 1) of the extended Goodstein sequence of m be as follows: take the hereditary base $b_n$ representation of G(m)(n) and replace each occurrence of the base $b_n$ with $b_{n+1}$ and then subtract one. The claim is that this sequence still terminates. - -The extended proof defines $P(m)(n) = f(G(m)(n), n)$ as follows: take the hereditary base $b_n$ representation of G(m)(n), and replace each occurrence of the base $b_n$ with the first infinite ordinal number ω. The base-changing operation of the Goodstein sequence when going from G(m)(n) to G(m)(n + 1) still does not change the value of f. For example, if $b_n = 4$ and $b_{n+1} = 9$, then $f(3 \cdot 4^{4^4} + 4, 4) = 3 \omega^{\omega^\omega} + \omega = f(3 \cdot 9^{9^9} + 9, 9)$, hence the ordinal $f(3 \cdot 4^{4^4} + 4, 4)$ is strictly greater than the ordinal $f\big((3 \cdot 9^{9^9} + 9) - 1, 9\big)$. - -The Goodstein function, $\mathcal{G}: \mathbb{N} \to \mathbb{N} $, is defined such that $\mathcal{G}(n)$ is the length of the Goodstein sequence that starts with n. (This is a total function since every Goodstein sequence terminates.)
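The first values of $\mathcal{G}$ can be explored directly. The following minimal Python sketch (illustrative, not from the article; `bump` and `goodstein` are names chosen here) implements the hereditary base change and generates initial segments of Goodstein sequences:

```
def bump(m, b):
    """Value of m after writing it in hereditary base-b notation
    and replacing every occurrence of b by b + 1."""
    if m == 0:
        return 0
    result, power = 0, 0
    while m > 0:
        m, digit = divmod(m, b)
        if digit:
            # exponents are themselves in hereditary base-b form, so recurse
            result += digit * (b + 1) ** bump(power, b)
        power += 1
    return result

def goodstein(m, max_steps=10):
    """First max_steps terms of the Goodstein sequence G(m)."""
    seq, b = [m], 2
    while m > 0 and len(seq) < max_steps:
        m = bump(m, b) - 1
        b += 1
        seq.append(m)
    return seq

print(goodstein(3))   # [3, 3, 3, 2, 1, 0]  -- so G(3) has length 6
print(goodstein(4))   # [4, 26, 41, 60, 83, 109, 139, 173, 211, 253]
```

Running `goodstein(19, 3)` already produces the 13-digit second term mentioned above, which is why only the first few terms of such sequences are ever tabulated.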
The extreme growth-rate of $\mathcal{G}$ can be calibrated by relating it to various standard ordinal-indexed hierarchies of functions, such as the functions $H_\alpha$ in the Hardy hierarchy, and the functions $f_\alpha$ in the fast-growing hierarchy of Löb and Wainer: - -* Kirby and Paris (1982) proved that $\mathcal{G}$ has approximately the same growth-rate as $H_{\epsilon_0}$ (which is the same as that of $f_{\epsilon_0}$); more precisely, $\mathcal{G}$ dominates $H_\alpha$ for every $\alpha < \epsilon_0$, and $H_{\epsilon_0}$ dominates $\mathcal{G}\!.$ - -(For any two functions $f, g: \mathbb{N} \to \mathbb{N} $, $f$ is said to dominate $g$ if $f(n) > g(n)$ for all sufficiently large $n$.) - -* Cichon (1983) showed that -$$ - \mathcal{G}(n) = H_{R_2^\omega(n+1)}(1) - 1, -$$ - -where $R_2^\omega(n)$ is the result of putting n in hereditary base-2 notation and then replacing all 2s with ω (as was done in the proof of Goodstein's theorem). - -* Caicedo (2007) showed that if $ n = 2^{m_1} + 2^{m_2} + \cdots + 2^{m_k} $ with $ m_1 > m_2 > \cdots > m_k, $ then -$$ - \mathcal{G}(n) = f_{R_2^\omega(m_1)}(f_{R_2^\omega(m_2)}(\cdots(f_{R_2^\omega(m_k)}(3))\cdots)) - 2 -$$. - -(For Ackermann function and Graham's number bounds see fast-growing hierarchy#Functions in fast-growing hierarchies.) - -Goodstein's theorem can be used to construct a total computable function that Peano arithmetic cannot prove to be total. The Goodstein sequence of a number can be effectively enumerated by a Turing machine; thus the function which maps n to the number of steps required for the Goodstein sequence of n to terminate is computable by a particular Turing machine. This machine merely enumerates the Goodstein sequence of n and, when the sequence reaches 0, returns the length of the sequence. Because every Goodstein sequence eventually terminates, this function is total. But because Peano arithmetic does not prove that every Goodstein sequence terminates, Peano arithmetic does not prove that this Turing machine computes a total function. diff --git a/wiki/wikipedia/3638.txt b/wiki/wikipedia/3638.txt deleted file mode 100644 index e9713f2508f138218c5d402a85f189eb8e4de890..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3638.txt +++ /dev/null @@ -1,169 +0,0 @@ -Burrows–Abadi–Needham logic (also known as the BAN logic) is a set of rules for defining and analyzing information exchange protocols. Specifically, BAN logic helps its users determine whether exchanged information is trustworthy, secured against eavesdropping, or both. BAN logic starts with the assumption that all information exchanges happen on media vulnerable to tampering and public monitoring. This has evolved into the popular security mantra, "Don't trust the network." - -A typical BAN logic sequence includes three steps: - -# Verification of message origin - -# Verification of message freshness - -# Verification of the origin's trustworthiness. - -BAN logic uses postulates and definitions – like all axiomatic systems – to analyze authentication protocols. Use of the BAN logic often accompanies a security protocol notation formulation of a protocol and is sometimes given in papers. - -BAN logic, and logics in the same family, are decidable: there exists an algorithm taking BAN hypotheses and a purported conclusion, and that answers whether or not the conclusion is derivable from the hypotheses. The proposed algorithms use a variant of magic sets.
- -BAN logic inspired many other similar formalisms, such as GNY logic. Some of these try to repair one weakness of BAN logic: the lack of a good semantics with a clear meaning in terms of knowledge and possible universes. However, starting in the mid-1990s, crypto protocols were analyzed in operational models (assuming perfect cryptography) using model checkers, and numerous bugs were found in protocols that were "verified" with BAN logic and related formalisms. In some cases a protocol was reasoned to be secure by the BAN analysis but was in fact insecure. This has led to the abandonment of BAN-family logics in favor of proof methods based on standard invariance reasoning. - -The definitions and their implications are below (P and Q are network agents, X is a message, and K is an encryption key): - -*P believes X: P acts as if X is true, and may assert X in other messages. - -*P has jurisdiction over X: P's beliefs about X should be trusted. - -*P said X: At one time, P transmitted (and believed) message X, although P might no longer believe X. - -*P sees X: P receives message X, and can read and repeat X. - -*{X}K: X is encrypted with key K. - -*fresh(X): X has not previously been sent in any message. - -*key(K, P↔Q): P and Q may communicate with shared key K. - -The meaning of these definitions is captured in a series of postulates: - -* If P believes key(K, P↔Q), and P sees {X}K, then P believes (Q said X). - -* If P believes (Q said X) and P believes fresh(X), then P believes (Q believes X). - -P must believe that X is fresh here. If X is not known to be fresh, then it might be an obsolete message, replayed by an attacker. - -* If P believes (Q has jurisdiction over X) and P believes (Q believes X), then P believes X. - -* There are several other technical postulates having to do with composition of messages. For example, if P believes that Q said (X, Y), the concatenation of X and Y, then P also believes that Q said X, and P also believes that Q said Y. - -Using this notation, the assumptions behind an authentication protocol can be formalized. Using the postulates, one can prove that certain agents believe that they can communicate using certain keys. If the proof fails, the point of failure usually suggests an attack which compromises the protocol. - -A very simple protocol - the Wide Mouth Frog protocol - allows two agents, A and B, to establish secure communications, using a trusted authentication server, S, and synchronized clocks all around. Using standard notation the protocol can be specified as follows: -$$ -A \rightarrow S: A,\{T_A, K_{ab}, B\}_{K_{as}} -$$ -$$ -S \rightarrow B: \{T_S, K_{ab}, A\}_{K_{bs}} -$$ - -Agents A and B are equipped with keys Kas and Kbs, respectively, for communicating securely with S. So we have assumptions: - - A believes key(Kas, A↔S) - - S believes key(Kas, A↔S) - - B believes key(Kbs, B↔S) - - S believes key(Kbs, B↔S) - -Agent A wants to initiate a secure conversation with B. It therefore invents a key, Kab, which it will use to communicate with B.
A believes that this key is secure, since it made up the key itself: - - A believes key(Kab, A↔B) - -B is willing to accept this key, as long as it is sure that it came from A: - - B believes (A has jurisdiction over key(K, A↔B)) - -Moreover, B is willing to trust S to accurately relay keys from A: - - B believes (S has jurisdiction over (A believes key(K, A↔B))) - -That is, if B believes that S believes that A wants to use a particular key to communicate with B, then B will trust S and believe it also. - -The goal is to have - - B believes key(Kab, A↔B) - -A reads the clock, obtaining the current time t, and sends the following message: - - 1 A→S: {t, key(Kab, A↔B)}Kas - -That is, it sends its chosen session key and the current time to S, encrypted with its private authentication server key Kas. - -Since S believes that key(Kas, A↔S), and S sees {t, key(Kab, A↔B)}Kas, - -then S concludes that A actually said {t, key(Kab, A↔B)}. (In particular, S believes that the message was not manufactured out of whole cloth by some attacker.) - -Since the clocks are synchronized, we can assume - - S believes fresh(t) - -Since S believes fresh(t) and S believes A said {t, key(Kab, A↔B)}, - -S believes that A actually believes that key(Kab, A↔B). (In particular, S believes that the message was not replayed by some attacker who captured it at some time in the past.) - -S then forwards the key to B: - - 2 S→B: {t, A, A believes key(Kab, A↔B)}Kbs - -Because message 2 is encrypted with Kbs, and B - -believes key(Kbs, B↔S), B now believes that S - -said {t, A, A believes key(Kab, A↔B)}. - -Because the clocks are synchronized, B believes fresh(t), and so - -fresh(A believes key(Kab, A↔B)). Because B - -believes that S's statement is fresh, B believes that S believes that - -(A believes key(Kab, A↔B)). Because B - -believes that S is authoritative about what A believes, B believes - -that (A believes key(Kab, A↔B)). Because B - -believes that A is authoritative about session keys between A and B, B - -believes key(Kab, A↔B). B can now contact A - -directly, using Kab as a secret session key. - -Now let's suppose that we abandon the assumption that the clocks are - -synchronized. In that case, S gets message 1 from A with {t, - -key(Kab, A↔B)}, but it can no longer conclude - -that t is fresh. It knows that A sent this message at some time - -in the past (because it is encrypted with Kas) - -but not that this is a recent message, so S doesn't believe that A - -necessarily wants to continue to use the key - -Kab. This points directly at an attack on the - -protocol: An attacker who can capture messages can guess one of the - -old session keys Kab. (This might take a long - -time.) The attacker then replays the old {t, - -key(Kab, A↔B)} message, sending it to S. If - -the clocks aren't synchronized (perhaps as part of the same attack), S might - -believe this old message and request that B use the old, compromised key - -over again. - -The original Logic of Authentication paper (linked below) contains this example and many others, including analyses of the Kerberos handshake protocol, and two versions of the Andrew Project RPC handshake (one of which is defective). 
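The derivations in the walkthrough above are mechanical enough to automate. Below is a toy Python sketch (not from the original paper; the string-based fact encoding and the `derive` helper are invented for illustration) that forward-chains the three postulates over S's view of message 1:

```
def derive(facts):
    """Close a set of belief facts under the message-meaning,
    nonce-verification and jurisdiction postulates."""
    facts = set(facts)
    while True:
        new = set()
        for (p, kind, x) in facts:
            if kind == 'sees':
                # message-meaning: P believes key(K, P<->Q) and P sees {X}K
                # imply P believes Q said X
                msg, key, q = x
                if (p, 'believes-key', (key, q)) in facts:
                    new.add((p, 'believes-said', (q, msg)))
            if kind == 'believes-said':
                # nonce-verification: Q said X and fresh(X)
                # imply P believes Q believes X
                q, msg = x
                if (p, 'believes-fresh', msg) in facts:
                    new.add((p, 'believes-believes', (q, msg)))
            if kind == 'believes-believes':
                # jurisdiction: Q believes X and Q has jurisdiction over X
                # imply P believes X
                q, msg = x
                if (p, 'jurisdiction', (q, msg)) in facts:
                    new.add((p, 'believes', msg))
        if new <= facts:
            return facts
        facts |= new

# Message 1 of the Wide Mouth Frog protocol, from S's point of view:
facts = {
    ('S', 'believes-key', ('Kas', 'A')),                   # S believes key(Kas, A<->S)
    ('S', 'sees', (('t', 'key(Kab,A<->B)'), 'Kas', 'A')),  # S sees {t, key(Kab,A<->B)}Kas
    ('S', 'believes-fresh', ('t', 'key(Kab,A<->B)')),      # clocks are synchronized
}
for fact in sorted(derive(facts)):
    print(fact)
# S ends up believing that A believes key(Kab, A<->B), matching the text;
# dropping the freshness fact (unsynchronized clocks) blocks that conclusion.
```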
diff --git a/wiki/wikipedia/3639.txt b/wiki/wikipedia/3639.txt deleted file mode 100644 index e7db55f2704d4df2066f1bacab2326857fef0b33..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3639.txt +++ /dev/null @@ -1,144 +0,0 @@ -In computability theory Post's theorem, named after Emil Post, describes the connection between the arithmetical hierarchy and the Turing degrees. - -The statement of Post's theorem uses several concepts relating to definability and recursion theory. This section gives a brief overview of these concepts, which are covered in depth in their respective articles. - -The arithmetical hierarchy classifies certain sets of natural numbers that are definable in the language of Peano arithmetic. A formula is said to be $\Sigma^{0}_m$ if it is an existential statement in prenex normal form (all quantifiers at the front) with $m$ alternations between existential and universal quantifiers applied to a formula with bounded quantifiers only. Formally a formula $\phi(s)$ in the language of Peano arithmetic is a $\Sigma^{0}_m$ formula if it is of the form -$$ -\left(\exists n^1_1\exists n^1_2\cdots\exists n^1_{j_1}\right)\left(\forall n^2_1 \cdots \forall n^2_{j_2}\right)\left(\exists n^3_1\cdots\right)\cdots\left(Q n^m_1 \cdots \right)\rho(n^1_1,\ldots n^m_{j_m},x_1,\ldots,x_k) -$$ - -where $\rho$ contains only bounded quantifiers and Q is $\forall$ if m is even and $\exists$ if m is odd. - -A set of natural numbers $A$ is said to be $\Sigma^0_m$ if it is definable by a $\Sigma^0_m$ formula, that is, if there is a $\Sigma^0_m$ formula $\phi(s)$ such that each number $n$ is in $A$ if and only if $\phi(n)$ holds. It is known that if a set is $\Sigma^0_m$ then it is $\Sigma^0_n$ for any $n > m$, but for each m there is a $\Sigma^0_{m+1}$ set that is not $\Sigma^0_m$. Thus the number of quantifier alternations required to define a set gives a measure of the complexity of the set. - -Post's theorem uses the relativized arithmetical hierarchy as well as the unrelativized hierarchy just defined. A set $A$ of natural numbers is said to be $\Sigma^0_m$ relative to a set $B$, written $\Sigma^{0,B}_m$, if $A$ is definable by a $\Sigma^0_m$ formula in an extended language that includes a predicate for membership in $B$. - -While the arithmetical hierarchy measures definability of sets of natural numbers, Turing degrees measure the level of uncomputability of sets of natural numbers. A set $A$ is said to be Turing reducible to a set $B$, written $A \leq_T B$, if there is an oracle Turing machine that, given an oracle for $B$, computes the characteristic function of $A$. - -The Turing jump of a set $A$ is a form of the Halting problem relative to $A$. Given a set $A$, the Turing jump $A'$ is the set of indices of oracle Turing machines that halt on input $0$ when run with oracle $A$. It is known that every set $A$ is Turing reducible to its Turing jump, but the Turing jump of a set is never Turing reducible to the original set. - -Post's theorem uses finitely iterated Turing jumps. For any set $A$ of natural numbers, the notation $A^{(n)}$ indicates the $n$-fold iterated Turing jump of $A$. Thus $A^{(0)}$ is just $A$, and $A^{(n+1)}$ is the Turing jump of $A^{(n)}$. - -Post's theorem establishes a close connection between the arithmetical hierarchy and the Turing degrees of the form $\emptyset^{(n)}$, that is, finitely iterated Turing jumps of the empty set. (The empty set could be replaced with any other computable set without changing the truth of the theorem.) 
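Before the statement itself, the first level of this correspondence can be illustrated loosely in code: a set defined by a single unbounded existential quantifier over a decidable matrix is exactly one that can be semi-decided by unbounded search, and enumerated by dovetailing. The Python sketch below is illustrative only and not part of the article; an oracle set would enter simply as another callable consulted inside `R`:

```
import itertools

def semi_decide(n, R):
    """Halts (returning a witness m) iff some m satisfies R(n, m)."""
    for m in itertools.count():
        if R(n, m):
            return m          # diverges when no witness exists

def enumerate_set(R, budget=1000):
    """Dovetail over pairs (n, m), yielding each n once a witness appears."""
    pairs = ((a, s - a) for s in itertools.count() for a in range(s + 1))
    seen = set()
    for n, m in itertools.islice(pairs, budget):
        if n not in seen and R(n, m):
            seen.add(n)
            yield n

# Example: R(n, m) holds when m is a nontrivial divisor of n, so the
# enumerated set is the composite numbers.
R = lambda n, m: 2 <= m < n and n % m == 0
print(semi_decide(9, R))            # 3
print(sorted(enumerate_set(R)))     # composites found within the budget
```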
- -Post's theorem states: - -#A set $B$ is $\Sigma^0_{n+1}$ if and only if $B$ is recursively enumerable by an oracle Turing machine with an oracle for $\emptyset^{(n)}$, that is, if and only if $B$ is $\Sigma^{0,\emptyset^{(n)}}_1$. - -#The set $\emptyset^{(n)}$ is $\Sigma^0_n$-complete for every $n > 0$. This means that every $\Sigma^0_n$ set is many-one reducible to $\emptyset^{(n)}$. - -Post's theorem has many corollaries that expose additional relationships between the arithmetical hierarchy and the Turing degrees. These include: - -#Fix a set $C$. A set $B$ is $\Sigma^{0,C}_{n+1}$ if and only if $B$ is $\Sigma^{0,C^{(n)}}_1$. This is the relativization of the first part of Post's theorem to the oracle $C$. - -#A set $B$ is $\Delta^0_{n+1}$ if and only if $B \leq_T \emptyset^{(n)}$. More generally, $B$ is $\Delta^{0,C}_{n+1}$ if and only if $B \leq_T C^{(n)}$. - -#A set is defined to be arithmetical if it is $\Sigma^0_n$ for some $n$. Post's theorem shows that, equivalently, a set is arithmetical if and only if it is Turing reducible to $\emptyset^{(m)}$ for some m. - -The operation of a Turing machine $T$ on input $n$ can be formalized logically in first-order arithmetic. For example, we may use symbols $A_k$, $B_k$, and $C_k$ for the tape configuration, machine state and location along the tape after $k$ steps, respectively. $T$'s transition system determines the relation between $(A_k,B_k,C_k)$ and $(A_{k+1},B_{k+1},C_{k+1})$; their initial values (for $k=0$) are the input, the initial state and zero, respectively. The machine halts if and only if there is a number $k$ such that $B_k$ is the halting state. - -The exact relation depends on the specific implementation of the notion of Turing machine (e.g. their alphabet, allowed mode of motion along the tape, etc.) - -In case $T$ halts at time $n_1$, the relation between $(A_k,B_k,C_k)$ and $(A_{k+1},B_{k+1},C_{k+1})$ must be satisfied only for k bounded from above by $n_1$. - -Thus there is a formula $\varphi(n,n_1)$ in first-order arithmetic with no unbounded quantifiers, such that $T$ halts on input $n$ within $n_1$ steps if and only if $\varphi(n,n_1)$ is satisfied. - -For example, for a prefix-free Turing machine with binary alphabet and no blank symbol, we may use the following notations: - -*$A_k$ is the 1-ary symbol for the configuration of the whole tape after $k$ steps (which we may write as a number with LSB first, the value of the m-th location on the tape being its m-th least significant bit). In particular $A_0$ is the initial configuration of the tape, which corresponds to the input to the machine. - -*$B_k$ is the 1-ary symbol for the Turing machine state after $k$ steps. In particular, $B_0=q_I$, the initial state of the Turing machine. - -*$C_k$ is the 1-ary symbol for the Turing machine location on the tape after $k$ steps. In particular $C_0=0$. - -* $M(q,b)$ is the transition function of the Turing machine, written as a function from a doublet (machine state, bit read by the machine) to a triplet (new machine state, bit written by the machine, +1 or -1 machine movement along the tape). - -*$bit(j,m)$ is the j-th bit of a number $m$. This can be written as a first-order arithmetic formula with no unbounded quantifiers. - -For a prefix-free Turing machine we may use, for input n, the initial tape configuration $t(n) = \mathrm{cat}(2^{\lceil \log_2 n \rceil}-1, 0, n)$ where cat stands for concatenation; thus $t(n)$ is a $\lceil \log_2 n \rceil$-length string of 1s followed by 0 and then by $n$.
- -The operation of the Turing machine at the first $n_1$ steps can thus be written as the conjunction of the initial conditions and of formulas relating $(A_k,B_k,C_k)$ to $(A_{k+1},B_{k+1},C_{k+1})$ via the transition function, quantified over $k$ for all $k < n_1$. Now suppose $\varphi(n)$ is a formula in $\Sigma^0_2$; equivalently, it has $k_1$ existential quantifiers followed by a negation of a formula in $\Sigma^0_1$; the latter formula can be enumerated by a Turing machine and can thus be checked immediately by an oracle for $\emptyset ^{(1)}$. - -We may thus enumerate the $k_1$-tuples of natural numbers and run an oracle machine with an oracle for $\emptyset ^{(1)}$ that goes through all of them until it finds a satisfaction for the formula. This oracle machine halts on precisely the set of natural numbers satisfying $\varphi(n)$, and thus enumerates its corresponding set. - -More generally, suppose every set that is recursively enumerable by an oracle machine with an oracle for $\emptyset ^{(p)}$ is in $\Sigma^0_{p+1}$. Then for an oracle machine with an oracle for $\emptyset ^{(p+1)}$, $\psi^O(m) = \exists m_1: \psi_H(m,m_1)$ is in $\Sigma^0_{p+1}$. - -Since $\psi^O(m)$ is the same as $\varphi(n)$ for the previous Turing jump, it can be constructed (as we have just done with $\varphi(n)$ above) so that $\psi_H(m,m_1)$ is in $\Pi^0_p$. After moving to prenex normal form the new $\varphi(n)$ is in $\Sigma^0_{p+2}$. - -By induction, every set that is recursively enumerable by an oracle machine with an oracle for $\emptyset ^{(p)}$, is in $\Sigma^0_{p+1}$. - -The other direction can be proven by induction as well: Suppose every formula in $\Sigma^0_{p+1}$ can be enumerated by an oracle machine with an oracle for $\emptyset ^{(p)}$. - -Now suppose $\varphi(n)$ is a formula in $\Sigma^0_{p+2}$ with $k_1$ existential quantifiers followed by $k_2$ universal quantifiers etc. Equivalently, $\varphi(n)$ has $k_1$ existential quantifiers followed by a negation of a formula in $\Sigma^0_{p+1}$; the latter formula can be enumerated by an oracle machine with an oracle for $\emptyset ^{(p)}$ and can thus be checked immediately by an oracle for $\emptyset ^{(p+1)}$. - -We may thus enumerate the $k_1$-tuples of natural numbers and run an oracle machine with an oracle for $\emptyset ^{(p+1)}$ that goes through all of them until it finds a satisfaction for the formula. This oracle machine halts on precisely the set of natural numbers satisfying $\varphi(n)$, and thus enumerates its corresponding set. diff --git a/wiki/wikipedia/364.txt b/wiki/wikipedia/364.txt deleted file mode 100644 index c11bb2c3a8752121b38ac5f86b42faa74a2d53f1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/364.txt +++ /dev/null @@ -1,41 +0,0 @@ -In the mathematical discipline of graph theory, a feedback vertex set (FVS) of a graph is a set of vertices whose removal leaves a graph without cycles ("removal" means deleting the vertex and all edges adjacent to it). Equivalently, each FVS contains at least one vertex of any cycle in the graph. The feedback vertex set number of a graph is the size of a smallest feedback vertex set. The minimum feedback vertex set problem is an NP-complete problem; it was among the first problems shown to be NP-complete. It has wide applications in operating systems, database systems, and VLSI chip design. - -The FVS decision problem is as follows: - -INSTANCE: An (undirected or directed) graph $G = (V, E)$ and a positive integer $k$. - -QUESTION: Is there a subset $X \subseteq V$ with $|X| \leq k$ such that, when all vertices of $X$ and their adjacent edges are deleted from $G$, the remainder is cycle-free?
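For small graphs, the decision problem just stated can be settled by brute force. The following Python sketch (illustrative; `has_fvs` and `is_forest` are names chosen here) tries every candidate set $X$ of size at most $k$ on an undirected instance and tests whether the rest of the graph is acyclic:

```
from itertools import combinations

def is_forest(vertices, edges):
    """Union-find acyclicity test: a graph is a forest iff no edge
    joins two vertices that are already connected."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                    # this edge closes a cycle
        parent[ru] = rv
    return True

def has_fvs(vertices, edges, k):
    """Is there a feedback vertex set of size at most k?"""
    for size in range(k + 1):
        for X in combinations(vertices, size):
            rest = set(vertices) - set(X)
            kept = [(u, v) for (u, v) in edges if u in rest and v in rest]
            if is_forest(rest, kept):
                return True
    return False

# Two triangles sharing vertex 0: deleting 0 breaks both cycles.
V = [0, 1, 2, 3, 4]
E = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)]
print(has_fvs(V, E, 1))   # True  (X = {0})
print(has_fvs(V, E, 0))   # False (the graph contains cycles)
```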
- -The graph $G[V \setminus X]$ that remains after removing $X$ from $G$ is an induced forest (resp. an induced directed acyclic graph in the case of directed graphs). Thus, finding a minimum FVS in a graph is equivalent to finding a maximum induced forest (resp. maximum induced directed acyclic graph in the case of directed graphs). - -Karp showed that the minimum FVS problem for directed graphs is NP-complete. The problem remains NP-complete on directed graphs with maximum in-degree and out-degree two, and on directed planar graphs with maximum in-degree and out-degree three. - -Karp's reduction also implies the NP-completeness of the FVS problem on undirected graphs, where the problem stays NP-hard on graphs of maximum degree four. The FVS problem can be solved in polynomial time on graphs of maximum degree at most three. - -The corresponding NP optimization problem of finding the size of a minimum feedback vertex set can be solved in time O(1.7347^n), where n is the number of vertices in the graph. This algorithm actually computes a maximum induced forest, and when such a forest is obtained, its complement is a minimum feedback vertex set. The number of minimal feedback vertex sets in a graph is bounded by O(1.8638^n). The directed feedback vertex set problem can still be solved in time O*(1.9977^n), where n is the number of vertices in the given directed graph. The parameterized versions of the directed and undirected problems are both fixed-parameter tractable. - -In undirected graphs of maximum degree three, the feedback vertex set problem can be solved in polynomial time, by transforming it into an instance of the matroid parity problem for linear matroids. - -The undirected problem is APX-complete. This follows from the following facts. - -* The APX-completeness of the vertex cover problem; - -* The existence of an approximation preserving L-reduction from the vertex cover problem to it; - -* Existing constant-factor approximation algorithms. - -The best known approximation algorithm on undirected graphs is by a factor of two. - -Whether the directed version is polynomial time approximable within constant ratio and thereby APX-complete is an open question. - -According to the Erdős–Pósa theorem, the size of a minimum feedback vertex set is within a logarithmic factor of the maximum number of vertex-disjoint cycles in the given graph. - -* Instead of vertices, one can consider a feedback edge set - a set of edges in an undirected graph, whose removal makes the graph acyclic. The size of a smallest feedback edge set in a graph is called the circuit rank of the graph. In contrast to the FVS number, the circuit rank can be easily computed: it is $|E|-|V|+|C|$, where C is the set of connected components of the graph. The problem of finding a smallest feedback edge set is equivalent to finding a spanning forest, which can be done in polynomial time. - -* The analogous concept in a directed graph is the feedback arc set (FAS) - a set of directed arcs whose removal makes the graph acyclic. Finding a smallest FAS is an NP-hard problem. - -* In operating systems, feedback vertex sets play a prominent role in the study of deadlock recovery. In the wait-for graph of an operating system, each directed cycle corresponds to a deadlock situation. In order to resolve all deadlocks, some blocked processes have to be aborted. A minimum feedback vertex set in this graph corresponds to a minimum number of processes that one needs to abort.
- -* The feedback vertex set problem has applications in VLSI chip design. - -* Another application is in complexity theory. Some computational problems on graphs are NP-hard in general, but can be solved in polynomial time for graphs with bounded FVS number. Some examples are graph isomorphism and the path reconfiguration problem. diff --git a/wiki/wikipedia/3640.txt b/wiki/wikipedia/3640.txt deleted file mode 100644 index 58d8d0c5c7cfe938192bed0f175efe1650238b2f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3640.txt +++ /dev/null @@ -1,75 +0,0 @@ -In mathematics, a conjecture is a conclusion or a proposition which is suspected to be true due to preliminary supporting evidence, but for which no proof or disproof has yet been found. Some conjectures, such as the Riemann hypothesis (still a conjecture) or Fermat's Last Theorem (a conjecture until proven in 1995 by Andrew Wiles), have shaped much of mathematical history as new areas of mathematics are developed in order to prove them. - -In number theory, Fermat's Last Theorem (sometimes called Fermat's conjecture, especially in older texts) states that no three positive integers $a$, $b$, and $c$ can satisfy the equation $a^n + b^n = c^n$ for any integer value of $n$ greater than two. - -This theorem was first conjectured by Pierre de Fermat in 1637 in the margin of a copy of Arithmetica, where he claimed that he had a proof that was too large to fit in the margin. The first successful proof was released in 1994 by Andrew Wiles, and formally published in 1995, after 358 years of effort by mathematicians. The unsolved problem stimulated the development of algebraic number theory in the 19th century, and the proof of the modularity theorem in the 20th century. It is among the most notable theorems in the history of mathematics, and prior to its proof it was in the Guinness Book of World Records for "most difficult mathematical problems". - -In mathematics, the four color theorem, or the four color map theorem, states that given any separation of a plane into contiguous regions, producing a figure called a map, no more than four colors are required to color the regions of the map—so that no two adjacent regions have the same color. Two regions are called adjacent if they share a common boundary that is not a corner, where corners are the points shared by three or more regions. For example, in the map of the United States of America, Utah and Arizona are adjacent, but Utah and New Mexico, which only share a point that also belongs to Arizona and Colorado, are not. - -Möbius mentioned the problem in his lectures as early as 1840. The conjecture was first proposed on October 23, 1852 when Francis Guthrie, while trying to color the map of countries of England, noticed that only four different colors were needed. The five color theorem, which has a short elementary proof, states that five colors suffice to color a map and was proven in the late 19th century; however, proving that four colors suffice turned out to be significantly harder. A number of false proofs and false counterexamples have appeared since the first statement of the four color theorem in 1852. - -The four color theorem was ultimately proven in 1976 by Kenneth Appel and Wolfgang Haken. It was the first major theorem to be proved using a computer. 
Appel and Haken's approach started by showing that there is a particular set of 1,936 maps, each of which cannot be part of a smallest-sized counterexample to the four color theorem (i.e., if they did appear, one could make a smaller counter-example). Appel and Haken used a special-purpose computer program to confirm that each of these maps had this property. Additionally, any map that could potentially be a counterexample must have a portion that looks like one of these 1,936 maps. Showing this with hundreds of pages of hand analysis, Appel and Haken concluded that no smallest counterexample exists because any such counterexample must contain, yet cannot contain, one of these 1,936 maps. This contradiction means there are no counterexamples at all and that the theorem is therefore true. Initially, their proof was not accepted by all mathematicians because the computer-assisted proof was infeasible for a human to check by hand. However, the proof has since then gained wider acceptance, although doubts still remain. - -The Hauptvermutung (German for main conjecture) of geometric topology is the conjecture that any two triangulations of a triangulable space have a common refinement, a single triangulation that is a subdivision of both of them. It was originally formulated in 1908, by Steinitz and Tietze. - -This conjecture is now known to be false. The non-manifold version was disproved by John Milnor in 1961 using Reidemeister torsion. - -The manifold version is true in dimensions m ≤ 3. The cases m = 2 and 3 were proved by Tibor Radó and Edwin E. Moise in the 1920s and 1950s, respectively. - -In mathematics, the Weil conjectures were some highly influential proposals by André Weil (1949) on the generating functions (known as local zeta-functions) derived from counting the number of points on algebraic varieties over finite fields. - -A variety V over a finite field with q elements has a finite number of rational points, as well as points over every finite field with $q^k$ elements containing that field. The generating function has coefficients derived from the numbers $N_k$ of points over the (essentially unique) field with $q^k$ elements. - -Weil conjectured that such zeta-functions should be rational functions, should satisfy a form of functional equation, and should have their zeroes in restricted places. The last two parts were quite consciously modeled on the Riemann zeta function and Riemann hypothesis. The rationality was proved by Dwork, the functional equation by Grothendieck, and the analogue of the Riemann hypothesis was proved by Deligne. - -In mathematics, the Poincaré conjecture is a theorem about the characterization of the 3-sphere, which is the hypersphere that bounds the unit ball in four-dimensional space. The conjecture states that every simply connected, closed 3-manifold is homeomorphic to the 3-sphere. An equivalent form of the conjecture involves a coarser form of equivalence than homeomorphism called homotopy equivalence: if a 3-manifold is homotopy equivalent to the 3-sphere, then it is necessarily homeomorphic to it. - -Originally conjectured by Henri Poincaré, the theorem concerns a space that locally looks like ordinary three-dimensional space but is connected, finite in size, and lacks any boundary (a closed 3-manifold). The Poincaré conjecture claims that if such a space has the additional property that each loop in the space can be continuously tightened to a point, then it is necessarily a three-dimensional sphere. An analogous result has been known in higher dimensions for some time.
-
-After nearly a century of effort by mathematicians, Grigori Perelman presented a proof of the conjecture in three papers made available in 2002 and 2003 on arXiv. The proof followed on from the program of Richard S. Hamilton to use the Ricci flow to attempt to solve the problem. Hamilton later introduced a modification of the standard Ricci flow, called Ricci flow with surgery, to systematically excise singular regions as they develop, in a controlled way, but was unable to prove this method "converged" in three dimensions. Perelman completed this portion of the proof. Several teams of mathematicians have verified that Perelman's proof is correct.
-
-The Poincaré conjecture, before being proven, was one of the most important open questions in topology.
-
-In mathematics, the Riemann hypothesis, proposed by Bernhard Riemann in 1859, is a conjecture that the non-trivial zeros of the Riemann zeta function all have real part 1/2. The name is also used for some closely related analogues, such as the Riemann hypothesis for curves over finite fields.
-
-The Riemann hypothesis implies results about the distribution of prime numbers. Along with suitable generalizations, some mathematicians consider it the most important unresolved problem in pure mathematics. The Riemann hypothesis, along with the Goldbach conjecture, is part of Hilbert's eighth problem in David Hilbert's list of 23 unsolved problems; it is also one of the Clay Mathematics Institute Millennium Prize Problems.
-
-The P versus NP problem is a major unsolved problem in computer science. Informally, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer; it is widely conjectured that the answer is no. It was essentially first mentioned in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether a certain NP-complete problem could be solved in quadratic or linear time. The precise statement of the P=NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" and is considered by many to be the most important open problem in the field. It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute to carry a US$1,000,000 prize for the first correct solution.
-
-Other well-known conjectures include:
-
-* Goldbach's conjecture
-
-* The twin prime conjecture
-
-* The Collatz conjecture
-
-* The Manin conjecture
-
-* The Maldacena conjecture
-
-* The Euler conjecture, proposed by Euler in the 18th century but for which counterexamples for a number of exponents (starting with n=4) were found beginning in the mid 20th century
-
-* The Hardy–Littlewood conjectures are a pair of conjectures concerning the distribution of prime numbers, the first of which expands upon the aforementioned twin prime conjecture. Neither has been proven or disproven, but it has been proven that both cannot simultaneously be true (i.e., at least one must be false). It has not been proven which one is false, but it is widely believed that the first conjecture is true and the second one is false.
-
-* The Langlands program is a far-reaching web of these ideas of 'unifying conjectures' that link different subfields of mathematics (e.g. between number theory and representation theory of Lie groups). Some of these conjectures have since been proved.
-
-Formal mathematics is based on provable truth.
In mathematics, any number of cases supporting a universally quantified conjecture, no matter how large, is insufficient for establishing the conjecture's veracity, since a single counterexample could immediately bring down the conjecture. Mathematical journals sometimes publish the minor results of research teams having extended the search for a counterexample farther than previously done. For instance, the Collatz conjecture, which concerns whether or not certain sequences of integers terminate, has been tested for all integers up to 1.2 × 10^12 (over a trillion). However, the failure to find a counterexample after extensive search does not constitute a proof that the conjecture is true—because the conjecture might be false but with a very large minimal counterexample.
-
-Nevertheless, mathematicians often regard a conjecture as strongly supported by evidence even though not yet proved. That evidence may be of various kinds, such as verification of consequences of it or strong interconnections with known results.
-
-A conjecture is considered proven only when it has been shown that it is logically impossible for it to be false. There are various methods of doing so; see methods of mathematical proof for more details.
-
-One method of proof, applicable when there are only a finite number of cases that could lead to counterexamples, is known as "brute force": in this approach, all possible cases are considered and shown not to give counterexamples. On some occasions, the number of cases is quite large, in which case a brute-force proof may require as a practical matter the use of a computer algorithm to check all the cases. For example, the validity of the 1976 and 1997 brute-force proofs of the four color theorem by computer was initially doubted, but was eventually confirmed in 2005 by theorem-proving software.
-
-When a conjecture has been proven, it is no longer a conjecture but a theorem. Many important theorems were once conjectures, such as the Geometrization theorem (which resolved the Poincaré conjecture), Fermat's Last Theorem, and others.
-
-Conjectures disproven through counterexample are sometimes referred to as false conjectures (cf. the Pólya conjecture and Euler's sum of powers conjecture). In the case of the latter, the first counterexample found for the n=4 case involved numbers in the millions, although it has been subsequently found that the minimal counterexample is actually smaller.
-
-Not every conjecture ends up being proven true or false. The continuum hypothesis, which tries to ascertain the relative cardinality of certain infinite sets, was eventually shown to be independent of the generally accepted set of Zermelo–Fraenkel axioms of set theory. It is therefore possible to adopt this statement, or its negation, as a new axiom in a consistent manner (much as Euclid's parallel postulate can be taken either as true or false in an axiomatic system for geometry).
-
-In this case, if a proof uses this statement, researchers will often look for a new proof that doesn't require the hypothesis (in the same way that it is desirable that statements in Euclidean geometry be proved using only the axioms of neutral geometry, i.e. without the parallel postulate). The one major exception to this in practice is the axiom of choice, as the majority of researchers usually do not worry whether a result requires it—unless they are studying this axiom in particular.
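-
-The brute-force searching described above is easy to mechanize. The following minimal Python sketch (added here as an illustration, with invented helper names; it is not the method of any published search) looks for a Collatz counterexample below a small bound; the record searches use the same idea with vastly larger bounds and heavy optimization:
-
-```python
-def reaches_one(n, max_steps=10**5):
-    """Iterate the Collatz map n -> n/2 (n even), n -> 3n+1 (n odd)."""
-    for _ in range(max_steps):
-        if n == 1:
-            return True
-        n = n // 2 if n % 2 == 0 else 3 * n + 1
-    return False  # did not reach 1 within the step budget: a suspect
-
-def first_suspect(bound):
-    """Return the first suspect below the bound, or None if every value reaches 1."""
-    return next((n for n in range(1, bound) if not reaches_one(n)), None)
-
-print(first_suspect(10**5))  # prints None: no counterexample below 100,000
-```
-
-As stressed above, an empty search result is evidence for the conjecture, never a proof of it.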
-
-Sometimes, a conjecture is called a hypothesis when it is used frequently and repeatedly as an assumption in proofs of other results. For example, the Riemann hypothesis is a conjecture from number theory that — amongst other things — makes predictions about the distribution of prime numbers. Few number theorists doubt that the Riemann hypothesis is true. In fact, in anticipation of its eventual proof, some have even proceeded to develop further proofs which are contingent on the truth of this conjecture. These are called conditional proofs: the conjectures assumed appear in the hypotheses of the theorem, for the time being.
-
-These "proofs", however, would fall apart if it turned out that the hypothesis was false, so there is considerable interest in verifying the truth or falsity of conjectures of this type.
-
-Karl Popper pioneered the use of the term "conjecture" in scientific philosophy. Conjecture is related to hypothesis, which in science refers to a testable conjecture.
diff --git a/wiki/wikipedia/3641.txt b/wiki/wikipedia/3641.txt
deleted file mode 100644
index 556c10bbf0da986acf33fe2ff205c543f9cf498b..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3641.txt
+++ /dev/null
@@ -1,17 +0,0 @@
-Tetris 99 is an online version of the tile-matching video game Tetris, developed by Arika, and published by Nintendo for the Nintendo Switch in February 2019. The software is free, but only to Nintendo Switch Online subscribers. Incorporating elements of the battle royale genre, Tetris 99 has up to 99 players competing simultaneously to complete rows with falling tetrominoes, which in turn send attacks in the form of "garbage rows" to other players with the goal of knocking them out of the game.
-
-Arika would later develop similar games called Super Mario Bros. 35 and Pac-Man 99, which instead involve the games Super Mario Bros. and Pac-Man, respectively. The former also allowed up to a maximum of 35 players.
-
-Tetris 99 is a multiplayer puzzle game in which 99 players play against each other at the same time, with the aim of being the last player remaining. As with the traditional Tetris formula, players rotate and drop shaped bricks known as tetrominoes onto a board. Players can clear tetrominoes by completing rows across the board, whereas players will lose if tetrominoes overflow off the top of the board. As with normal Tetris rules, players have the option to store a tetromino piece to swap out at any time. By clearing multiple lines or performing continuous line clears in a row, players can send "garbage" to other players, which will appear on their board unless they can quickly clear lines in response. More garbage can be sent by completing combination moves in succession, by making a "tetris" (matching 4 lines at once), or by performing a "T-spin" (squeezing the T-shaped tetromino into a position it would otherwise not fall into by rapidly rotating it).
-
-During gameplay, small grids representing the other 98 players are displayed at the sides of the main board. Players can either choose to target individual players, or have the computer automatically target other players based on one of four criteria: random players, those who are targeting the player, those who are close to being defeated, and those who possess badges. Badges are earned by knocking out a player with garbage (or gray lines), which earns them a piece of a badge, along with any other badges or pieces that player had.
The more badges a player completes and possesses, the more lines they can send to other players at a time (up to a 100% boost). At the end of a game, players will earn experience that will increase their level. The game periodically features special "Maximus Cup" events; one of the first Maximus Cups was held in March 2019, where players with the top number of wins over a weekend play period would win rewards within the My Nintendo loyalty program. Many of these events offer players the opportunity to earn special in-game themes based on other Nintendo Switch games.
-
-In May 2019, Nintendo released paid downloadable content (DLC) for the game, named the Big Block DLC. The DLC adds 4 offline modes in total: CPU Battle, where players battle 98 bot players; Marathon, where players play an endless game of Tetris and are challenged to achieve the highest score; Local Arena, where up to eight Nintendo Switch players play in the same arena via local wireless; and Two Player Share Battle, where two players share Joy-Cons and play the same game in local co-op.
-
-Tetris 99 was announced during a Nintendo Direct presentation on February 13, 2019, and made available later that day. It is available for free exclusively to players who have subscribed to the Nintendo Switch Online service. Nintendo released a physical version of the game in Japan on August 9, 2019, in North America on September 6, 2019, and in Europe on September 20, 2019. The physical edition includes the Big Block DLC content and a 12-month Nintendo Switch Online voucher.
-
-Upon release, Tetris 99 received "generally favourable reviews" according to the review aggregator Metacritic. According to IGN, Tetris 99 is a "wondrous pandemonium in a battle royale bottle", and "the massive player count really ups the intensity." The Telegraph said the game is "fiercer than Fortnite" and "as exciting and cutthroat as any video game deathmatch."
-
-During a financial results briefing, Nintendo president Shuntaro Furukawa reported that Tetris 99 had been played by over 2.8 million accounts as of April 2019. Furukawa also noted that the game has boosted "user engagement" with the Nintendo Switch.
-
-Alexey Pajitnov, the creator of the original Tetris, stated that he "love[s] the game" and called it "one of the best games of Tetris of the last year. I really like what was done."
diff --git a/wiki/wikipedia/3642.txt b/wiki/wikipedia/3642.txt
deleted file mode 100644
index fceeb947d523039c96e295c565a022be6709c3df..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3642.txt
+++ /dev/null
@@ -1,65 +0,0 @@
-The ultraproduct is a mathematical construction that appears mainly in abstract algebra and mathematical logic, in particular in model theory and set theory. An ultraproduct is a quotient of the direct product of a family of structures. All factors need to have the same signature. The ultrapower is the special case of this construction in which all factors are equal.
-
-For example, ultrapowers can be used to construct new fields from given ones. The hyperreal numbers, an ultrapower of the real numbers, are a special case of this.
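-
-The quotient construction below can be made half-concrete in code. A genuine ultrafilter on an infinite index set is non-constructive, so the sketch that follows (illustrative Python with invented names) can only approximate the comparison "on a U-large set of indices" by sampling a finite window; the pointwise operations, however, are exactly those of the construction:
-
-```python
-class Seq:
-    """An element of the direct product: a sequence i -> f(i) of reals."""
-    def __init__(self, f):
-        self.f = f
-    def __add__(self, other):              # operations are defined pointwise
-        return Seq(lambda i: self.f(i) + other.f(i))
-    def __mul__(self, other):
-        return Seq(lambda i: self.f(i) * other.f(i))
-
-def apparently_greater(a, b, window=range(10**4, 10**4 + 100)):
-    # Heuristic stand-in for "{i : a_i > b_i} lies in the ultrafilter U".
-    return all(a.f(i) > b.f(i) for i in window)
-
-omega = Seq(lambda i: i)        # (0, 1, 2, ...): exceeds every embedded real
-seven = Seq(lambda i: 7)        # the real number 7, embedded as a constant
-psi   = Seq(lambda i: 2**i)     # grows even faster than omega
-
-print(apparently_greater(omega, seven))   # True
-print(apparently_greater(psi, omega))     # True
-```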
-
-Some striking applications of ultraproducts include very elegant proofs of the compactness theorem and the completeness theorem, Keisler's ultrapower theorem, which gives an algebraic characterization of the semantic notion of elementary equivalence, and the Robinson–Zakon presentation of the use of superstructures and their monomorphisms to construct nonstandard models of analysis, leading to the growth of the area of nonstandard analysis, which was pioneered (as an application of the compactness theorem) by Abraham Robinson.
-
-The general method for getting ultraproducts uses an index set I, a structure Mi for each element i of I (all of the same signature), and an ultrafilter U on I. One usually considers this in the case that I is infinite and U contains all cofinite subsets of I, i.e. U is not a principal ultrafilter. In the principal case the ultraproduct is isomorphic to one of the factors.
-
-Algebraic operations on the Cartesian product
-$$
-\prod_{i \in I} M_i
-$$
-
-are defined pointwise (for example, if $+$ is a binary function then $(a + b)_i = a_i + b_i$), and an equivalence relation is defined by $a \sim b$ if
-$$
-\left\{ i \in I: a_i = b_i \right\}\in U,
-$$
-
-and the ultraproduct is the quotient set with respect to $\sim.$ The ultraproduct is therefore sometimes denoted by
-$$
-\prod_{i\in I}M_i / U.
-$$
-
-One may define a finitely additive measure m on the index set I by saying m(A) = 1 if A ∈ U and m(A) = 0 otherwise. Then two members of the Cartesian product are equivalent precisely if they are equal almost everywhere on the index set. The ultraproduct is the set of equivalence classes thus generated.
-
-Other relations can be extended the same way:
-$$
-R([a^1],\dots,[a^n]) \iff \left\{ i \in I: R^{M_i}(a^1_i,\dots,a^n_i) \right\}\in U,
-$$
-
-where [a] denotes the equivalence class of a with respect to $\sim.$
-
-In particular, if every Mi is an ordered field, then so is the ultraproduct.
-
-An ultrapower is an ultraproduct for which all the factors Mi are equal:
-$$
-M^I/U=\prod_{i\in I}M/U.
-$$
-
-More generally, the construction above can be carried out whenever U is a filter on I; the resulting model $\prod_{i\in I}M_i / U$ is then called a reduced product.
-
-The hyperreal numbers are the ultraproduct of one copy of the real numbers for every natural number, with regard to an ultrafilter over the natural numbers containing all cofinite sets. Their order is the extension of the order of the real numbers. For example, the sequence ω given by ωi = i defines an equivalence class representing a hyperreal number that is greater than any real number.
-
-Analogously, one can define nonstandard integers, nonstandard complex numbers, etc., by taking the ultraproduct of copies of the corresponding structures.
-
-As an example of the carrying over of relations into the ultraproduct, consider the sequence ψ defined by ψi = 2^i. Because ψi = 2^i > ωi = i for all i, it follows that the equivalence class of ψ is greater than the equivalence class of ω, so that it can be interpreted as an infinite number which is greater than the one originally constructed. However, let χi = i for i not equal to 7, but χ7 = 8. The set of indices on which ω and χ agree is a member of any ultrafilter (because ω and χ agree almost everywhere), so ω and χ belong to the same equivalence class.
-
-In the theory of large cardinals, a standard construction is to take the ultraproduct of the whole set-theoretic universe with respect to some carefully chosen ultrafilter U.
Properties of this ultrafilter U have a strong influence on (higher order) properties of the ultraproduct; for example, if U is σ-complete, then the ultraproduct will again be well-founded. (See measurable cardinal for the prototypical example.)
-
-Łoś's theorem, also called the fundamental theorem of ultraproducts, is due to Jerzy Łoś (the surname is pronounced approximately like "wash"). It states that any first-order formula is true in the ultraproduct if and only if the set of indices i such that the formula is true in Mi is a member of U. More precisely:
-
-Let σ be a signature, $ U $ an ultrafilter over a set $ I $, and for each $ i \in I $ let $ M_{i} $ be a σ-structure. Let $ M $ be the ultraproduct of the $ M_{i} $ with respect to $U$, that is, $ M = \prod_{ i\in I }M_i/U.$ Then, for each $ a^{1}, \ldots, a^{n} \in \prod M_{i} $, where $ a^{k} = (a^k_i)_{i \in I} $, and for every σ-formula $\phi$,
-$$
-M \models \phi[[a^1], \ldots, [a^n]] \iff \{ i \in I : M_{i} \models \phi[a^1_{i}, \ldots, a^n_{i} ] \} \in U.
-$$
-
-The theorem is proved by induction on the complexity of the formula $\phi$. The fact that $U$ is an ultrafilter (and not just a filter) is used in the negation clause, and the axiom of choice is needed at the existential quantifier step. As an application, one obtains the transfer theorem for hyperreal fields.
-
-Let R be a unary relation in the structure M, and form the ultrapower of M. Then the set $S=\{x \in M|R x\}$ has an analog *S in the ultrapower, and first-order formulas involving S are also valid for *S. For example, let M be the reals, and let Rx hold if x is a rational number. Then in M we can say that for any pair of rationals x and y, there exists another number z such that z is not rational, and x < z < y. Since this can be translated into a first-order logical formula in the relevant formal language, Łoś's theorem implies that *S has the same property. That is, we can define a notion of the hyperrational numbers, which are a subset of the hyperreals, and they have the same first-order properties as the rationals.
-
-Consider, however, the Archimedean property of the reals, which states that there is no real number x such that x > 1, x > 1 + 1, x > 1 + 1 + 1, ... for every inequality in the infinite list. Łoś's theorem does not apply to the Archimedean property, because the Archimedean property cannot be stated in first-order logic. In fact, the Archimedean property is false for the hyperreals, as shown by the construction of the hyperreal number ω above.
-
-In model theory and set theory, the direct limit of a sequence of ultrapowers is often considered. In model theory, this construction can be referred to as an ultralimit or limiting ultrapower.
-
-Beginning with a structure, A0, and an ultrafilter, D0, form an ultrapower, A1. Then repeat the process to form A2, and so forth. For each n there is a canonical diagonal embedding $A_n\to A_{n+1}$. At limit stages, such as Aω, form the direct limit of earlier stages. One may continue into the transfinite.
diff --git a/wiki/wikipedia/3643.txt b/wiki/wikipedia/3643.txt
deleted file mode 100644
index 89ca3ec1972ef31995027cd8699ee5ecdc2c2211..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3643.txt
+++ /dev/null
@@ -1,164 +0,0 @@
-Pappus's theorem, affine form:
-$$
-Ab\parallel aB, Bc\parallel bC \Rightarrow Ac\parallel aC
-$$
-
-In mathematics, Pappus's hexagon theorem (attributed to Pappus of Alexandria) states that
-
-*given one set of collinear points $A, B, C$, and another set of collinear points $a,b,c$, then the intersection points $X,Y,Z$ of line pairs $Ab$ and $aB, Ac$ and $aC, Bc$ and $bC$ are collinear, lying on the Pappus line. These three points are the points of intersection of the "opposite" sides of the hexagon $AbCaBc$.
-
-It holds in a projective plane over any field, but fails for projective planes over any noncommutative division ring. Projective planes in which the "theorem" is valid are called pappian planes.
-
-If one restricts the projective plane such that the Pappus line $u$ is the line at infinity, one gets the affine version of Pappus's theorem shown in the second diagram.
-
-If the Pappus line $u$ and the lines $g,h$ have a point in common, one gets the so-called little version of Pappus's theorem.
-
-The dual of this incidence theorem states that given one set of concurrent lines $A, B, C$, and another set of concurrent lines $a, b, c$, then the lines $x, y, z$ defined by pairs of points resulting from pairs of intersections $A\cap b$ and $a\cap B, A\cap c$ and $a\cap C, B\cap c$ and $b\cap C$ are concurrent. (Concurrent means that the lines pass through one point.)
-
-Pappus's theorem is a special case of Pascal's theorem for a conic—the limiting case when the conic degenerates into 2 straight lines. Pascal's theorem is in turn a special case of the Cayley–Bacharach theorem.
-
-The Pappus configuration is the configuration of 9 lines and 9 points that occurs in Pappus's theorem, with each line meeting 3 of the points and each point meeting 3 lines. In general, the Pappus line does not pass through the point of intersection of $ABC$ and $abc$. This configuration is self-dual. Since, in particular, the lines $Bc, bC, XY$ have the properties of the lines $x,y,z$ of the dual theorem, and collinearity of $X,Y,Z$ is equivalent to concurrence of $Bc, bC, XY$, the dual theorem is therefore just the same as the theorem itself. The Levi graph of the Pappus configuration is the Pappus graph, a bipartite distance-regular graph with 18 vertices and 27 edges.
-
-If the affine form of the statement can be proven, then the projective form of Pappus's theorem is proven, as the extension of a pappian plane to a projective plane is unique.
-
-Because of the parallelity in an affine plane one has to distinguish two cases: $g \not\parallel h$ and $g \parallel h$. The key to a simple proof is the possibility of introducing a "suitable" coordinate system:
-
-Case 1: The lines $g,h$ intersect at point $S=g\cap h$.
-
-In this case coordinates are introduced such that $S=(0,0), A=(0,1), c=(1,0)$ (see diagram). $B,C$ have the coordinates $B=(0,\gamma), C=(0,\delta), \gamma,\delta \notin \{0,1\}$.
-
-From the parallelity of the lines $Bc, Cb$ one gets $b=(\tfrac{\delta}{\gamma},0)$ and the parallelity of the lines $Ab, Ba$ yields $a=(\delta,0)$. Hence line $Ca$ has slope $-1$ and is parallel to line $Ac$.
-
-Case 2: $g\parallel h \ $ (little theorem).
-
-In this case the coordinates are chosen such that $c=(0,0), b=(1,0), A=(0,1), B=(\gamma,1),\gamma\ne 0$. From the parallelity of $Ab\parallel Ba$ and $cB\parallel bC$ one gets $C=(\gamma+1,1)$ and $a=(\gamma+1,0)$, respectively, and finally the parallelism $Ac\parallel Ca$.
-
-Choose homogeneous coordinates with
-$$
-C = (1, 0, 0), c= (0, 1, 0), X = (0, 0, 1), A = (1, 1, 1)
-$$.
-
-On the lines $AC, Ac, AX$, given by $x_2 = x_3, x_1 =x_3, x_2 = x_1$, take the points $B, Y, b$ to be
-$$
-B = (p, 1, 1), Y = (1, q, 1), b = (1, 1, r)
-$$
-
-for some $p, q, r$. The three lines $XB, CY, cb$ are $x_1 = x_2 p, x_2= x_3 q, x_3 = x_1 r$, so they pass through the same point $a$ if and only if $rqp = 1$. The condition for the three lines $Cb, cB$ and $XY$ with equations $x_2 = x_1 q, x_1 = x_3 p, x_3 = x_2 r$ to pass through the same point $Z$ is $rpq = 1$. So this last set of three lines is concurrent if all the other eight sets are because multiplication is commutative, so $pq = qp$. Equivalently, $X, Y, Z$ are collinear.
-
-The proof above also shows that for Pappus's theorem to hold for a projective space over a division ring it is both sufficient and necessary that the division ring is a (commutative) field. German mathematician Gerhard Hessenberg proved that Pappus's theorem implies Desargues's theorem. In general, Pappus's theorem holds for some projective plane if and only if it is a projective plane over a commutative field. The projective planes in which Pappus's theorem does not hold are Desarguesian projective planes over noncommutative division rings, and non-Desarguesian planes.
-
-The proof is invalid if $C, c, X$ happen to be collinear. In that case an alternative proof can be provided, for example, using a different projective reference.
-
-Because of the principle of duality for projective planes the dual theorem of Pappus is true:
-
-If 6 lines $A,b,C,a,B,c$ are chosen alternately from two pencils with centers $G,H$, the lines
-$$
-X:= (A\cap b) (a\cap B),
-$$
-$$
-Y:= (c\cap A) (C\cap a),
-$$
-$$
-Z:= (b\cap C) (B\cap c)
-$$
-
-are concurrent, that means: they have a point $U$ in common.
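-
-The coordinate computations in the cases above are easy to double-check mechanically. The following Python sketch (the sample coordinates and helper names are our own) verifies the statement for one configuration using exact rational arithmetic:
-
-```python
-from fractions import Fraction as F
-
-def line(p, q):
-    # Line through p and q as coefficients (a, b, c) of ax + by + c = 0.
-    (x1, y1), (x2, y2) = p, q
-    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)
-
-def meet(l1, l2):
-    (a1, b1, c1), (a2, b2, c2) = l1, l2
-    d = a1 * b2 - a2 * b1                 # nonzero unless the lines are parallel
-    return ((b1 * c2 - b2 * c1) / d, (c1 * a2 - c2 * a1) / d)
-
-def collinear(p, q, r):
-    (x1, y1), (x2, y2), (x3, y3) = p, q, r
-    return (x2 - x1) * (y3 - y1) == (x3 - x1) * (y2 - y1)
-
-A, B, C = (F(0), F(0)), (F(1), F(0)), (F(3), F(0))   # on the line y = 0
-a, b, c = (F(0), F(1)), (F(2), F(3)), (F(5), F(6))   # on the line y = x + 1
-
-X = meet(line(A, b), line(a, B))
-Y = meet(line(A, c), line(a, C))
-Z = meet(line(B, c), line(b, C))
-print(collinear(X, Y, Z))   # True, as Pappus's theorem predicts
-```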
-
-The left diagram shows the projective version, the right one an affine version, where the points $G,H$ are points at infinity. If point $U$ is on the line $GH$ then one gets the "dual little theorem" of Pappus's theorem.
-
-Diagrams: dual theorem, projective form; dual theorem, affine form.
-
-If in the affine version of the dual "little theorem" point $U$ is a point at infinity too, one gets Thomsen's theorem, a statement on 6 points on the sides of a triangle (see diagram). The Thomsen figure plays an essential role in coordinatising an axiomatically defined projective plane. The proof of the closure of Thomsen's figure is covered by the proof for the "little theorem", given above. But there exists a simple direct proof, too:
-
-Because the statement of Thomsen's theorem (the closure of the figure) uses only the terms connect, intersect and parallel, the statement is affinely invariant, and one can introduce coordinates such that $P=(0,0), Q=(1,0), R=(0,1)$ (see right diagram). The starting point of the sequence of chords is $(0,\lambda).$ One easily verifies the coordinates of the points given in the diagram, which shows: the last point coincides with the first point.
-
-Diagrams: Thomsen figure (points $1,2,3,4,5,6$ of the triangle $PQR$) as dual theorem of the little theorem of Pappus ($U$ is at infinity, too); Thomsen figure, proof.
-
-In addition to the above characterizations of Pappus's theorem and its dual, the following are equivalent statements:
-
-* If the six vertices of a hexagon lie alternately on two lines, then the three points of intersection of pairs of opposite sides are collinear.
-
-* Arranged in a matrix of nine points (as in the figure and description above) and thought of as evaluating a permanent, if the first two rows and the six "diagonal" triads are collinear, then the third row is collinear.
-$$
-\left|\begin{matrix}
-A & B & C \\
-a & b & c \\
-X & Y & Z \end{matrix}
-\right|
-$$
-
-That is, if $\ ABC, abc, AbZ, BcX, CaY, XbC, YcA, ZaB\ $ are lines, then Pappus's theorem states that $XYZ$ must be a line. Also, note that the same matrix formulation applies to the dual form of the theorem when $(A,B,C)$ etc. are triples of concurrent lines.
-
-* Given three distinct points on each of two distinct lines, pair each point on one of the lines with one from the other line, then the joins of points not paired will meet in (opposite) pairs at points along a line.
-
-* If two triangles are perspective in at least two different ways, then they are perspective in three ways.
-
-* If $AB, CD,$ and $EF$ are concurrent and $DE, FA,$ and $BC$ are concurrent, then $AD, BE,$ and $CF$ are concurrent.
-
-In its earliest known form, Pappus's Theorem is Propositions 138, 139, 141, and 143 of Book VII of Pappus's Collection. These are Lemmas XII, XIII, XV, and XVII in the part of Book VII consisting of lemmas to the first of the three books of Euclid's Porisms.
-
-The lemmas are proved in terms of what today is known as the cross ratio of four collinear points. Three earlier lemmas are used. The first of these, Lemma III, has the diagram below (which uses Pappus's lettering, with G for Γ, D for Δ, J for Θ, and L for Λ).
-
-Here three concurrent straight lines, AB, AG, and AD, are crossed by two lines, JB and JE, which concur at J.
-
-Also KL is drawn parallel to AZ.
-
-Then
-
-KJ : JL :: (KJ : AG & AG : JL) :: (JD : GD & BG : JB).
-
-These proportions might be written today as equations:
-
-KJ/JL = (KJ/AG)(AG/JL) = (JD/GD)(BG/JB).
-
-The last compound ratio (namely JD : GD & BG : JB) is what is known today as the cross ratio of the collinear points J, G, D, and B in that order; it is denoted today by (J, G; D, B). So we have shown that this is independent of the choice of the particular straight line JD that crosses the three straight lines that concur at A. In particular
-
-(J, G; D, B) = (J, Z; H, E).
-
-It does not matter on which side of A the straight line JE falls. In particular, the situation may be as in the next diagram, which is the diagram for Lemma X.
-
-Just as before, we have (J, G; D, B) = (J, Z; H, E). Pappus does not explicitly prove this; but Lemma X is a converse, namely that if these two cross ratios are the same, and the straight lines BE and DH cross at A, then the points G, A, and Z must be collinear.
-
-What we showed originally can be written as (J, ∞; K, L) = (J, G; D, B), with ∞ taking the place of the (nonexistent) intersection of JK and AG. Pappus shows this, in effect, in Lemma XI, whose diagram, however, has different lettering:
-
-What Pappus shows is DE.ZH : EZ.HD :: GB : BE, which we may write as
-
-(D, Z; E, H) = (∞, B; E, G).
-
-The diagram for Lemma XII is:
-
-The diagram for Lemma XIII is the same, but BA and DG, extended, meet at N. In any case, considering straight lines through G as cut by the three straight lines through A (and accepting that equations of cross ratios remain valid after permutation of the entries), we have by Lemma III or XI
-
-(G, J; E, H) = (G, D; ∞, Z).
-
-Considering straight lines through D as cut by the three straight lines through B, we have
-
-(L, D; E, K) = (G, D; ∞, Z).
-
-Thus (E, H; J, G) = (E, K; D, L), so by Lemma X, the points H, M, and K are collinear. That is, the points of intersection of the pairs of opposite sides of the hexagon ADEGBZ are collinear.
-
-Lemmas XV and XVII are that, if the point M is determined as the intersection of HK and BG, then the points A, M, and D are collinear. That is, the points of intersection of the pairs of opposite sides of the hexagon BEKHZG are collinear.
diff --git a/wiki/wikipedia/3644.txt b/wiki/wikipedia/3644.txt
deleted file mode 100644
index 90bf7f94c516f49cad65873f3ad8f41ef1d1a2b1..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3644.txt
+++ /dev/null
@@ -1,37 +0,0 @@
-In order theory, a branch of mathematics, the 1/3–2/3 conjecture states that, if one is comparison sorting a set of items then, no matter what comparisons may have already been performed, it is always possible to choose the next comparison in such a way that it will reduce the number of possible sorted orders by a factor of 2/3 or better. Equivalently, in every finite partially ordered set that is not totally ordered, there exists a pair of elements x and y with the property that at least 1/3 and at most 2/3 of the linear extensions of the partial order place x earlier than y.
-
-The partial order formed by three elements a, b, and c with a single comparability relationship, a ≤ b, has three linear extensions, a ≤ b ≤ c, a ≤ c ≤ b, and c ≤ a ≤ b. In all three of these extensions, a is earlier than b. However, a is earlier than c in only two of them, and later than c in the third. Therefore, the pair of a and c have the desired property, showing that this partial order obeys the 1/3–2/3 conjecture.
-
-This example shows that the constants 1/3 and 2/3 in the conjecture are tight; if q is any fraction strictly between 1/3 and 2/3, then there would not exist a pair x, y in which x is earlier than y in a number of linear extensions that is between q and 1 - q times the total number of linear extensions.
-
-More generally, let P be any series composition of three-element partial orders and of one-element partial orders, such as the one in the figure. Then P forms an extreme case for the 1/3–2/3 conjecture in the sense that, for each pair x, y of elements, one of the two elements occurs earlier than the other in at most 1/3 of the linear extensions of P. Partial orders with this structure are necessarily series-parallel semiorders; they are the only known extreme cases for the conjecture and can be proven to be the only extreme cases with width two.
-
-A partially ordered set is a set X together with a binary relation ≤ that is reflexive, antisymmetric, and transitive. A total order is a partial order in which every pair of elements is comparable. A linear extension of a finite partial order is a sequential ordering of the elements of X, with the property that if x ≤ y in the partial order, then x must come before y in the linear extension. In other words, it is a total order compatible with the partial order. If a finite partially ordered set is totally ordered, then it has only one linear extension, but otherwise it will have more than one. The 1/3–2/3 conjecture states that one can choose two elements x and y such that, among this set of possible linear extensions, between 1/3 and 2/3 of them place x earlier than y, and symmetrically between 1/3 and 2/3 of them place y earlier than x.
-
-There is an alternative and equivalent statement of the 1/3–2/3 conjecture in the language of probability theory.
-
-One may define a uniform probability distribution on the linear extensions in which each possible linear extension is equally likely to be chosen. The 1/3–2/3 conjecture states that, under this probability distribution, there exists a pair of elements x and y such that the probability that x is earlier than y in a random linear extension is between 1/3 and 2/3.
-
-Kahn and Saks define δ(P), for any partially ordered set P, to be the largest real number δ such that P has a pair x, y with x earlier than y in a number of linear extensions that is between δ and 1 - δ of the total number of linear extensions. In this notation, the 1/3–2/3 conjecture states that every finite partial order that is not total has δ(P) ≥ 1/3.
-
-The 1/3–2/3 conjecture was formulated by Kislitsyn, and later made independently by Michael Fredman and by Nathan Linial. It was listed as a featured unsolved problem at the founding of the journal Order, and remains unsolved; Brightwell, Felsner, and Trotter call it "one of the most intriguing problems in the combinatorial theory of posets."
-
-A survey of the conjecture is given by Brightwell.
-
-The 1/3–2/3 conjecture is known to be true for certain special classes of partial orders, including partial orders of width two, partial orders of height two, partial orders with at most 11 elements, partial orders in which each element is incomparable to at most six others, series-parallel partial orders, semiorders, and polytrees. In the limit as n goes to infinity, the proportion of n-element partial orders that obey the 1/3–2/3 conjecture approaches 100%.
-
-Brightwell, Felsner, and Trotter proved that, for any finite partial order P that is not total, δ(P) ≥ 1/2 - √5/10 ≈ 0.276. Their results improve previous weaker bounds of the same type.
They use the probabilistic interpretation of δ(P) to extend its definition to certain infinite partial orders; in that context, they show that their bounds are optimal, in that there exist infinite partial orders with δ(P) = 1/2 - √5/10.
-
-Kahn and Saks proposed the following application for the problem:
-
-suppose one wishes to comparison sort a totally ordered set X, for which some partial order information is already known in the form of a partial order on X. In the worst case, each additional comparison between a pair x and y of elements may yield as little information as possible, by resolving the comparison in a way that leaves as many linear extensions as possible compatible with the comparison result. The 1/3–2/3 conjecture states that, at each step, one may choose a comparison to perform that reduces the remaining number of linear extensions by a factor of 2/3; therefore, if there are E linear extensions of the partial order given by the initial information, the sorting problem can be completed in at most log_{3/2} E additional comparisons.
-
-However, this analysis neglects the complexity of selecting the optimal pair x and y to compare. Additionally, it may be possible to sort a partial order using a number of comparisons that is better than this analysis would suggest, because it may not be possible for this worst-case behavior to occur at each step of a sorting algorithm. In this direction, it has been conjectured that log_φ E comparisons may suffice, where φ denotes the golden ratio.
-
-A closely related class of comparison sorting problems is considered by Fredman, among them the problem of comparison sorting a set X when the sorted order of X is known to lie in some set S of permutations of X. Here S is not necessarily generated as the set of linear extensions of a partial order. Despite this added generality, Fredman shows that X can be sorted using log_2 |S| + O(|X|) comparisons, expressed in big O notation. This same bound applies as well to the case of partial orders and shows that log_2 E + O(n) comparisons suffice.
-
-Kahn and Saks conjectured that, in the limit as w tends to infinity, the value of δ(P) for partially ordered sets of width w should tend to 1/2. In particular, they expect that only partially ordered sets of width two can achieve the worst case value δ(P) = 1/3, and Aigner stated this explicitly as a conjecture. The smallest known value of δ(P) for posets of width three is 14/39, and computer searches have shown that no smaller value is possible for width-3 posets with nine or fewer elements. Another related conjecture, again based on computer searches, states that there is a gap between 1/3 and the other possible values of δ(P): whenever a partial order does not have δ(P) exactly 1/3, it has δ(P) ≥ 0.348843.
-
-Marcin Peczarski has formulated a "gold partition conjecture" stating that in each partial order that is not a total order one can find two consecutive comparisons such that, if t_i denotes the number of linear extensions remaining after i of the comparisons have been made, then (in each of the four possible outcomes of the comparisons) t_0 ≥ t_1 + t_2. If this conjecture is true, it would imply the 1/3–2/3 conjecture: the first of the two comparisons must be between a pair that splits the remaining linear extensions by at worst a 1/3–2/3 ratio. The gold partition conjecture would also imply that a partial order with E linear extensions can be sorted in at most log_φ E comparisons; the name of the conjecture is derived from this connection with the golden ratio.
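-
-For small posets, δ(P) can be computed directly from its definition by enumerating linear extensions. The sketch below (our own illustrative Python) does this for the three-element example from the beginning of the article, the poset with the single relation a ≤ b:
-
-```python
-from itertools import permutations
-
-elements = "abc"
-relations = {("a", "b")}          # the partial order: a must precede b
-
-exts = [p for p in permutations(elements)
-        if all(p.index(x) < p.index(y) for x, y in relations)]
-
-def delta():
-    best = 0
-    for x in elements:
-        for y in elements:
-            if x != y:
-                frac = sum(e.index(x) < e.index(y) for e in exts) / len(exts)
-                best = max(best, min(frac, 1 - frac))
-    return best
-
-print(len(exts), delta())   # 3 0.333... : the pair (a, c) achieves exactly 1/3
-```
-
-Such exhaustive enumeration is feasible only for very small posets; as the next paragraph notes, even computing the relevant proportions exactly is #P-complete in general.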
-
-It is #P-complete, given a finite partial order P and a pair of elements x and y, to calculate the proportion of the linear extensions of P that place x earlier than y.
diff --git a/wiki/wikipedia/3645.txt b/wiki/wikipedia/3645.txt
deleted file mode 100644
index fad612421dfc515e0930a98db7bd9b8b576162d1..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3645.txt
+++ /dev/null
@@ -1,11 +0,0 @@
-Nuprl is a proof development system, providing computer-mediated analysis and proofs of formal mathematical statements, and tools for software verification and optimization. Originally developed in the 1980s by Robert Lee Constable and others, the system is now maintained by the PRL Project at Cornell University. The currently supported version, Nuprl 5, is also known as FDL (Formal Digital Library). Nuprl functions as an automated theorem proving system and can also be used to provide proof assistance.
-
-Nuprl uses a type system based on Martin-Löf intuitionistic type theory to model mathematical statements in a digital library. Mathematical theories can be constructed and analyzed with a variety of editors, including a graphical user interface, a web-based editor, and an Emacs mode. A variety of evaluators and inference engines can operate on the statements in the library. Translators also allow statements to be manipulated with Java and OCaml programs. The overall system is controlled with a variant of ML.
-
-Nuprl 5's architecture is described as "distributed open architecture" and intended primarily to be used as a web service rather than as standalone software. Those interested in using the web service, or migrating theories from older versions of Nuprl, can contact the email address given on the Nuprl System web page.
-
-Nuprl was first released in 1984, and was first described in detail in the book Implementing Mathematics with the Nuprl Proof Development System, published in 1986. Nuprl 2 was the first Unix version. Nuprl 3 provided machine proof for mathematical problems related to Girard's paradox and Higman's lemma. Nuprl 4, the first version developed for the World Wide Web, was used to verify cache coherency protocols and other computer systems.
-
-The current system architecture, implemented in Nuprl 5, was first proposed in a 2000 conference paper. A reference manual for Nuprl 5 was published in 2002. Nuprl has been the subject of many computer science publications.
-
-Both the MetaPRL and RedPRL systems are also based on computational type theory. RedPRL is explicitly "inspired by Nuprl".
diff --git a/wiki/wikipedia/3646.txt b/wiki/wikipedia/3646.txt
deleted file mode 100644
index 2a859fd0192e4e5d2d3830d5f931ad971a3b5dde..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3646.txt
+++ /dev/null
@@ -1,6 +0,0 @@
-In mathematics, the Denjoy–Koksma inequality, introduced by Herman as a combination of work of Arnaud Denjoy and the Koksma–Hlawka inequality of Jurjen Ferdinand Koksma, is a bound for Weyl sums $\sum_{k=0}^{m-1}f(x+k\omega)$ of functions f of bounded variation.
-
-Suppose that a map f from the circle T to itself has irrational rotation number α, and p/q is a rational approximation to α with p and q coprime, |α − p/q| < 1/q^2. Suppose that φ is a function of bounded variation, and μ a probability measure on the circle invariant under f.
Then
-$$
-\left|\sum_{i=0}^{q-1} \phi(f^i(x)) - q\int_T \phi \, d\mu \right| \leqslant \operatorname{Var}(\phi)
-$$
diff --git a/wiki/wikipedia/3647.txt b/wiki/wikipedia/3647.txt
deleted file mode 100644
index 34df7dc3ef1b743f3981b707b9e31d93031e8f6d..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3647.txt
+++ /dev/null
@@ -1,71 +0,0 @@
-The Bunyakovsky conjecture (or Bouniakowsky conjecture) gives a criterion for a polynomial $f(x)$ in one variable with integer coefficients to give infinitely many prime values in the sequence $f(1), f(2), f(3),\ldots$ It was stated in 1857 by the Russian mathematician Viktor Bunyakovsky. The following three conditions are necessary for $f(x)$ to have the desired prime-producing property:
-
-# The leading coefficient is positive.
-
-# The polynomial is irreducible over the integers.
-
-# The values $f(1), f(2), f(3),\ldots$ have no common factor. (In particular, the coefficients of $f(x)$ should be relatively prime.)
-
-Bunyakovsky's conjecture is that these conditions are sufficient: if $f(x)$ satisfies (1)–(3), then $f(n)$ is prime for infinitely many positive integers $n$.
-
-A seemingly weaker yet equivalent statement to Bunyakovsky's conjecture is that for every integer polynomial $f(x)$ that satisfies (1)–(3), $f(n)$ is prime for at least one positive integer $n$. Indeed, since the translated polynomial $f(x+n)$ still satisfies (1)–(3), the weaker statement yields a prime value $f(m)$ for some positive integer $m>n$, so that $f(n)$ is indeed prime for infinitely many positive integers $n$. Bunyakovsky's conjecture is a special case of Schinzel's hypothesis H, one of the most famous open problems in number theory.
-
-We need the first condition because if the leading coefficient is negative then $f(x) < 0$ for all large $x$, and thus $f(n)$ is not a (positive) prime number for large positive integers $n$. (This merely satisfies the sign convention that primes are positive.)
-
-We need the second condition because if $f(x) = g(x)h(x)$ where the polynomials $g(x)$ and $h(x)$ have integer coefficients, then we have $f(n) = g(n)h(n)$ for all integers $n$; but $g(x)$ and $h(x)$ take the values 0 and $\pm 1$ only finitely many times, so $f(n)$ is composite for all large $n$.
-
-The third condition, that the numbers $f(n)$ have gcd 1, is obviously necessary, but is somewhat subtle, and is best understood by a counterexample. Consider $f(x) = x^2 + x + 2$, which has positive leading coefficient and is irreducible, and the coefficients are relatively prime; however $f(n)$ is even for all integers $n$, and so is prime only finitely many times (namely when $f(n)=2$, in fact only at $n =0,-1$).
-
-In practice, the easiest way to verify the third condition is to find one pair of positive integers $m$ and $n$ such that $f(m)$ and $f(n)$ are relatively prime. In general, for any integer-valued polynomial $f(x) = c_0 + c_1x + \cdots + c_dx^d$ we can use $\gcd \{f(n)\}_{n\geq 1} = \gcd(f(m),f(m+1),\dots,f(m+d))$ for any integer $m$, so the gcd is given by values of $f(x)$ at any consecutive $d+1$ integers. In the example above, we have $f(-1)=2, f(0)=2, f(1)=4$ and so the gcd is $2$, which implies that $x^2 + x + 2$ has even values on the integers.
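-
-The gcd criterion just described is immediate to implement. A minimal sketch follows (the helper name is invented); it reproduces the computation for $x^2 + x + 2$ and checks that $x^2 + 1$ passes the third condition:
-
-```python
-from math import gcd
-from functools import reduce
-
-def values_gcd(f, degree, m=0):
-    """gcd of f at degree+1 consecutive integers, which equals gcd{f(n) : n >= 1}."""
-    return reduce(gcd, (f(m + i) for i in range(degree + 1)))
-
-print(values_gcd(lambda x: x * x + x + 2, 2))   # 2: condition (3) fails
-print(values_gcd(lambda x: x * x + 1, 2))       # 1: condition (3) holds
-```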
-
-Alternatively, $f(x)$ can be written in the basis of binomial coefficient polynomials:
-$$
-f(x) = a_0 + a_1\binom{x}{1} + \cdots + a_d\binom{x}{d}
-$$
-
-where each $a_i$ is an integer, and $\gcd\{f(n)\}_{n \geq 1} = \gcd(a_0,a_1,\dots,a_d).$
-
-For the above example, we have:
-$$
-x^2 + x + 2 = 2\binom{x}{2} + 2\binom{x}{1} + 2,
-$$
-
-and the coefficients in the second formula have gcd 2.
-
-Using this gcd formula, it can be proved that $\gcd\{f(n)\}_{n \geq 1} =1$ if and only if there are positive integers $m$ and $n$ such that $f(m)$ and $f(n)$ are relatively prime.
-
-Some prime values of the polynomial $f(x) = x^2 + 1$ are $2, 5, 17, 37, 101, 197, 257, 401, \ldots$, occurring at $x = 1, 2, 4, 6, 10, 14, 16, 20, \ldots$ (both sequences are catalogued in the OEIS).
-
-That $n^2+1$ should be prime infinitely often is a problem first raised by Euler, and it is also the fifth Hardy–Littlewood conjecture and the fourth of Landau's problems. Despite the extensive numerical evidence, it is not known that this sequence extends indefinitely.
-
-The cyclotomic polynomials $\Phi_k(x)$ for $k=1,2,3,\ldots$ satisfy the three conditions of Bunyakovsky's conjecture, so for all k, there should be infinitely many natural numbers n such that $\Phi_k(n)$ is prime. It can be shown that if for all k, there exists an integer n > 1 with $\Phi_k(n)$ prime, then for all k, there are infinitely many natural numbers n with $\Phi_k(n)$ prime.
-
-The following sequence gives the smallest natural number n > 1 such that $\Phi_k(n)$ is prime, for $k=1,2,3,\ldots$:
-
-3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5, 2, 2, 2, 2, 2, 2, 6, 2, 4, 3, 2, 10, 2, 22, 2, 2, 4, 6, 2, 2, 2, 2, 2, 14, 3, 61, 2, 10, 2, 14, 2, 15, 25, 11, 2, 5, 5, 2, 6, 30, 11, 24, 7, 7, 2, 5, 7, 19, 3, 2, 2, 3, 30, 2, 9, 46, 85, 2, 3, 3, 3, 11, 16, 59, 7, 2, 2, 22, 2, 21, 61, 41, 7, 2, 2, 8, 5, 2, 2, ... .
-
-This sequence is known to contain some large terms: the 545th term is 2706, the 601st is 2061, and the 943rd is 2042. This case of Bunyakovsky's conjecture is widely believed, but again it is not known that the sequence extends indefinitely.
-
-Usually, there is an integer 2 ≤ n ≤ φ(k) such that $\Phi_k(n)$ is prime (note that the degree of $\Phi_k(x)$ is φ(k)), but there are exceptions; the exceptional numbers k are
-
-1, 2, 25, 37, 44, 68, 75, 82, 99, 115, 119, 125, 128, 159, 162, 179, 183, 188, 203, 213, 216, 229, 233, 243, 277, 289, 292, ...
-
-To date, the only case of Bunyakovsky's conjecture that has been proved is that of polynomials of degree 1. This is Dirichlet's theorem, which states that when $a$ and $m$ are relatively prime integers there are infinitely many prime numbers $p \equiv a \pmod m$. This is Bunyakovsky's conjecture for $f(x) = a + mx$ (or $a - mx$ if $m < 0$).
-
-The third condition in Bunyakovsky's conjecture for a linear polynomial $mx + a$ is equivalent to $a$ and $m$ being relatively prime.
-
-No single case of Bunyakovsky's conjecture for degree greater than 1 is proved, although numerical evidence in higher degree is consistent with the conjecture.
-
-Given $k \geq 1$ polynomials with positive degrees and integer coefficients, each satisfying the three conditions, assume that for any prime $p$ there is an $n$ such that none of the values of the $k$ polynomials at $n$ are divisible by $p$. Given these assumptions, it is conjectured that there are infinitely many positive integers $n$ such that all values of these $k$ polynomials at $x = n$ are prime.
-
-Note that the polynomials $\{x, x + 2, x + 4\}$ do not satisfy the assumption, since one of their values must be divisible by 3 for any integer $x = n$. Neither do $\{x, x^2 + 2\}$, since one of the values must be divisible by 3 for any $x = n$.
-
-On the other hand, $\{x^2 + 1, 3x - 1, x^2 + x + 41\}$ do satisfy the assumption, and the conjecture implies the polynomials have simultaneous prime values for infinitely many positive integers $x = n$.
-
-This conjecture includes as special cases the twin prime conjecture (when $k = 2$, and the two polynomials are $x$ and $x + 2$) as well as the infinitude of prime quadruplets (when $k = 4$, and the four polynomials are $x, x+ 2, x + 6$, and $x + 8$), sexy primes (when $k = 2$, and the two polynomials are $x$ and $x + 6$), Sophie Germain primes (when $k = 2$, and the two polynomials are $x$ and $2x + 1$), and Polignac's conjecture (when $k = 2$, and the two polynomials are $x$ and $x + a$, with $a$ any even number). When all the polynomials have degree 1 this is Dickson's conjecture.
-
-In fact, this conjecture is equivalent to the Generalized Dickson conjecture.
-
-Except for Dirichlet's theorem, no case of the conjecture has been proved, including the above cases.
diff --git a/wiki/wikipedia/3648.txt b/wiki/wikipedia/3648.txt
deleted file mode 100644
index b3154f9019066860b1e63aced20e6ef5018a83ef..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3648.txt
+++ /dev/null
@@ -1,175 +0,0 @@
-In mathematics, the Schwartz–Zippel lemma (also called the DeMillo–Lipton–Schwartz–Zippel lemma) is a tool commonly used in probabilistic polynomial identity testing, i.e. in the problem of determining whether a given multivariate polynomial is the 0-polynomial (or identically equal to 0). It was discovered independently by Jack Schwartz, Richard Zippel, and Richard DeMillo and Richard J. Lipton, although DeMillo and Lipton's version was shown a year prior to Schwartz and Zippel's result. The finite field version of this bound was proved by Øystein Ore in 1922.
-
-Theorem 1 (Schwartz, Zippel). Let
-$$
-P\in F[x_1,x_2,\ldots,x_n]
-$$
-
-be a non-zero polynomial of total degree d ≥ 0 over a field F. Let S be a finite subset of F and let r1, r2, ..., rn be selected at random independently and uniformly from S. Then
-$$
-\Pr[P(r_1,r_2,\ldots,r_n)=0]\leq\frac{d}{|S|}.
-$$
-
-Equivalently, the Lemma states that for any finite subset S of F, if Z(P) is the zero set of P, then
-$$
-| Z(P) \cap S^n | \leq d \cdot |S|^{n-1}.
-$$
-
-Proof. The proof is by mathematical induction on n. For n = 1, as was mentioned before, P can have at most d roots. This gives us the base case.
-
-Now, assume that the theorem holds for all polynomials in n - 1 variables. We can then consider P to be a polynomial in x1 by writing it as
-$$
-P(x_1,\dots,x_n)=\sum_{i=0}^d x_1^i P_i(x_2,\dots,x_n).
-$$
-
-Since P is not identically 0, there is some i such that $P_i$ is not identically 0. Take the largest such i. Then $\deg P_i\leq d-i$, since the degree of $x_1^iP_i$ is at most d.
-
-Now we randomly pick $r_2,\dots,r_n$ from S. By the induction hypothesis, $\Pr[P_i(r_2,\ldots,r_n)=0]\leq\frac{d-i}{|S|}.$
-
-If $P_i(r_2,\ldots,r_n)\neq 0$, then $P(x_1,r_2,\ldots,r_n)$ is of degree i (and thus not identically zero) so
-$$
-\Pr[P(r_1,r_2,\ldots,r_n)=0|P_i(r_2,\ldots,r_n)\neq 0]\leq\frac{i}{|S|}.
-$$
-
-If we denote the event $P(r_1,r_2,\ldots,r_n)=0$ by A, the event $P_i(r_2,\ldots,r_n)=0$ by B, and the complement of B by $B^c$, we have
-
-\begin{align}
-\Pr[A] & =\Pr[A\cap B]+\Pr[A\cap B^c]
-\\
-&=\Pr[B]\Pr[A|B]+\Pr[B^c]\Pr[A|B^c]
-\\
-&\leq \Pr[B]+\Pr[A|B^c]
-\\
-&\leq \frac{d-i}{|S|}+\frac{i}{|S|}=\frac{d}{|S|}
-\end{align}
-
-The importance of the Schwartz–Zippel theorem and of testing polynomial identities follows from algorithms which are obtained for problems that can be reduced to the problem of polynomial identity testing.
-
-For example, is
-$$
-(x_1 + 3x_2 - x_3)(3x_1 + x_4 - 1) \cdots (x_7 - x_2) \equiv 0\ ?
-$$
-
-To solve this, we can multiply it out and check that all the coefficients are 0. However, this takes exponential time. In general, a polynomial can be algebraically represented by an arithmetic formula or circuit.
-
-Given a pair of polynomials $p_1(x)$ and $p_2(x)$, is
-$$
-p_1(x) \equiv p_2(x)
-$$?
-
-This problem can be solved by reducing it to the problem of polynomial identity testing. It is equivalent to checking if
-$$
-[p_1(x) - p_2(x)] \equiv 0.
-$$
-
-Hence if we can determine that
-$$
-p(x) \equiv 0,
-$$
-
-where
-$$
-p(x) = p_1(x)-p_2(x),
-$$
-
-then we can determine whether the two polynomials are equivalent.
-
-Comparison of polynomials has applications for branching programs (also called binary decision diagrams). A read-once branching program can be represented by a multilinear polynomial which computes (over any field) on {0,1}-inputs the same Boolean function as the branching program, and two branching programs compute the same function if and only if the corresponding polynomials are equal. Thus, identity of Boolean functions computed by read-once branching programs can be reduced to polynomial identity testing.
-
-Comparison of two polynomials (and therefore testing polynomial identities) also has applications in 2D-compression, where the problem of deciding the equality of two 2D-texts A and B is reduced to the problem of comparing equality of two polynomials $p_A(x,y)$ and $p_B(x,y)$.
-
-Given $n \in \mathbb{N}$, is $n$ a prime number?
-
-A simple randomized algorithm developed by Manindra Agrawal and Somenath Biswas can determine probabilistically whether $n$ is prime and uses polynomial identity testing to do so.
-
-They propose that all prime numbers n (and only prime numbers) satisfy the following polynomial identity:
-$$
-(1+z)^n \equiv 1+z^n \pmod{n}.
-$$
-
-This is a consequence of the Frobenius endomorphism.
-
-Let
-$$
-\mathcal{P}_n(z) = (1+z)^n - 1 -z^n.
-$$
-
-Then $\mathcal{P}_n(z) \equiv 0 \pmod{n}$ iff n is prime. The proof can be found in [4]. However, since this polynomial has degree $n$, where $n$ may or may not be a prime, the Schwartz–Zippel method would not work. Agrawal and Biswas use a more sophisticated technique, which divides $\mathcal{P}_n$ by a random monic polynomial of small degree.
-
-Prime numbers are used in a number of applications such as hash table sizing, pseudorandom number generators and in key generation for cryptography. Therefore, finding very large prime numbers (on the order of (at least) $10^{308} \approx 2^{1024}$) becomes very important and efficient primality testing algorithms are required.
-
-Let $G = (V, E)$ be a graph of n vertices where n is even. Does G contain a perfect matching?
-
-Theorem 2: The Tutte matrix determinant is not a 0-polynomial if and only if there exists a perfect matching.
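-
-Theorem 2 yields a simple randomized matching test, sketched below in Python (the helper names are our own; this is an illustration of the idea, not an optimized algorithm). Random residues modulo a large prime p are substituted for the indeterminates of the Tutte matrix defined next, and the determinant is tested for zero; by Theorem 1, a graph with a perfect matching is misclassified with probability at most roughly n/p, while a graph without one is never misclassified:
-
-```python
-import random
-
-P = (1 << 61) - 1   # a large prime modulus
-
-def det_mod_p(M, p=P):
-    """Determinant via Gaussian elimination over GF(p)."""
-    M = [row[:] for row in M]
-    n, det = len(M), 1
-    for col in range(n):
-        pivot = next((r for r in range(col, n) if M[r][col]), None)
-        if pivot is None:
-            return 0
-        if pivot != col:
-            M[col], M[pivot] = M[pivot], M[col]
-            det = -det
-        det = det * M[col][col] % p
-        inv = pow(M[col][col], p - 2, p)
-        for r in range(col + 1, n):
-            f = M[r][col] * inv % p
-            for c in range(col, n):
-                M[r][c] = (M[r][c] - f * M[col][c]) % p
-    return det % p
-
-def probably_has_perfect_matching(n, edges):
-    A = [[0] * n for _ in range(n)]
-    for i, j in edges:                    # random evaluation of the Tutte matrix
-        r = random.randrange(1, P)
-        A[i][j], A[j][i] = r, P - r
-    return det_mod_p(A) != 0
-
-print(probably_has_perfect_matching(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True
-print(probably_has_perfect_matching(4, [(0, 1), (0, 2), (0, 3)]))          # False
-```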
-
-A subset D of E is called a matching if each vertex in V is incident with at most one edge in D. A matching is perfect if each vertex in V has exactly one edge that is incident to it in D. Create a Tutte matrix A in the following way:
-$$
-A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1\mathit{n}} \\ a_{21} & a_{22} & \cdots & a_{2\mathit{n}} \\ \vdots & \vdots & \ddots & \vdots \\ a_{\mathit{n}1} & a_{\mathit{n}2} & \ldots & a_{\mathit{nn}} \end{bmatrix}
-$$
-
-where
-$$
-a_{ij} = \begin{cases} x_{ij} & \mbox{if } (i,j) \in E \mbox{ and } i<j \\ -x_{ji} & \mbox{if } (i,j) \in E \mbox{ and } i>j \\ 0 & \mbox{otherwise}. \end{cases}
-$$
-
-The Tutte matrix determinant (in the variables x_{ij}, i < j) is then a polynomial which is not identically zero if and only if a perfect matching exists, so testing a random evaluation of it for zero gives the randomized matching test described above.
-
-In mathematics, and in particular in the representation theory of finite groups, Frobenius reciprocity relates the operations of inducing and restricting class functions. For a class function $\psi$ on a subgroup $H$ of a group $G$ and a class function $\varphi$ on $G$, it states that
-$$
-\langle\operatorname{Ind}_H^G\psi, \varphi\rangle_G=\langle\psi,\operatorname{Res}_H^G\varphi\rangle_H.
-$$
-
-In other words, $\operatorname{Ind}_H^G$ and $\operatorname{Res}_H^G$ are Hermitian adjoint.
-
-Let $\psi:H\to\mathbb{C}$ and $\varphi:G\to\mathbb{C}$ be class functions.
-
-Proof. Every class function can be written as a linear combination of irreducible characters. As $\langle\cdot,\cdot\rangle$ is a bilinear form, we can, without loss of generality, assume $\psi$ and $\varphi$ to be characters of irreducible representations of $H$ in $W$ and of $G$ in $V,$ respectively.
-
-We define $\psi(s)=0$ for all $s\in G\setminus H.$ Then we have
-
-\begin{align}
-\langle \text{Ind}(\psi), \varphi\rangle_G &= \frac{1}{|G|} \sum_{t\in G} \text{Ind}(\psi)(t) \varphi(t^{-1}) \\
-&= \frac{1}{|G|} \sum_{t\in G} \frac{1}{|H|}\sum_{s\in G \atop s^{-1}ts \in H} \psi(s^{-1}ts) \varphi(t^{-1}) \\
-&= \frac{1}{|G|} \frac{1}{|H|}\sum_{t\in G} \sum_{s\in G} \psi(s^{-1}ts) \varphi((s^{-1}ts)^{-1}) \\
-&= \frac{1}{|G|} \frac{1}{|H|}\sum_{t\in G} \sum_{s\in G} \psi(t) \varphi(t^{-1})\\
-&= \frac{1}{|H|}\sum_{t\in G} \psi(t) \varphi(t^{-1})\\
-&= \frac{1}{|H|}\sum_{t\in H} \psi(t) \varphi(t^{-1})\\
-&= \frac{1}{|H|}\sum_{t\in H} \psi(t) \text{Res}(\varphi)(t^{-1})\\
-&= \langle \psi, \text{Res}(\varphi)\rangle_H
-\end{align}
-
-In the course of this sequence of equations we used only the definition of induction on class functions and the properties of characters. $\Box$
-
-Alternative proof. In terms of the group algebra, i.e. by the alternative description of the induced representation, the Frobenius reciprocity is a special case of a general equation for a change of rings:
-$$
-\text{Hom}_{\Complex [H]}(W,U)=\text{Hom}_{\Complex [G]}(\Complex [G]\otimes_{\Complex [H]}W, U).
-$$
-
-This equation is by definition equivalent to
-$$
-\langle W,\text{Res}(U)\rangle_H=\langle W,U\rangle_H=\langle \text{Ind}(W),U\rangle_G.
-$$
-
-As this bilinear form tallies the bilinear form on the corresponding characters, the theorem follows without calculation. $\Box$
-
-As explained in the section Representation theory of finite groups#Representations, modules and the convolution algebra, the theory of the representations of a group G over a field K is, in a certain sense, equivalent to the theory of modules over the group algebra K[G]. Therefore, there is a corresponding Frobenius reciprocity theorem for K[G]-modules.
-
-Let G be a group with subgroup H, let M be an H-module, and let N be a G-module. In the language of module theory, the induced module $K[G]\otimes_{K[H]} M$ corresponds to the induced representation $\operatorname{Ind}_H^G$, whereas the restriction of scalars ${_{K[H]}}N$ corresponds to the restriction $\operatorname{Res}_H^G$.
Accordingly, Frobenius reciprocity states that the following sets of module homomorphisms are in bijective correspondence: - -\operatorname{Hom}_{K[G]}(K[G]\otimes_{K[H]} M,N)\cong \operatorname{Hom}_{K[H]}(M,{_{K[H]}}N). - -As noted below in the section on category theory, this result applies to modules over all rings, not just modules over group algebras. - -Let G be a group with a subgroup H, and let $\operatorname{Res}_H^G,\operatorname{Ind}_H^G$ be defined as above. For any group A and field K let $\textbf{Rep}_A^K$ denote the category of linear representations of A over K. There is a forgetful functor - -\begin{align} - -\operatorname{Res}_H^G:\textbf{Rep}_G&\longrightarrow\textbf{Rep}_H \\ - -(V,\rho) &\longmapsto \operatorname{Res}_H^G(V,\rho) - -\end{align} - -This functor acts as the identity on morphisms. There is a functor going in the opposite direction: - -\begin{align} - -\operatorname{Ind}_H^G:\textbf{Rep}_H &\longrightarrow\textbf{Rep}_G \\ - -(W,\tau) &\longmapsto \operatorname{Ind}_H^G(W,\tau) - -\end{align} - -These functors form an adjoint pair $\operatorname{Ind}_H^G\dashv\operatorname{Res}_H^G$. In the case of finite groups, they are actually both left- and right-adjoint to one another. This adjunction gives rise to a universal property for the induced representation (for details, see Induced representation#Properties). - -In the language of module theory, the corresponding adjunction is an instance of the more general relationship between restriction and extension of scalars. diff --git a/wiki/wikipedia/365.txt b/wiki/wikipedia/365.txt deleted file mode 100644 index 964a7b6bc0f209d3c37852a41d19495388cf4ae0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/365.txt +++ /dev/null @@ -1,220 +0,0 @@ -In statistics, Cochran's theorem, devised by William G. Cochran, is a theorem used to justify results relating to the probability distributions of statistics that are used in the analysis of variance. - -Suppose U1, ..., UN are i.i.d. standard normally distributed random variables, and $B^{(1)},B^{(2)},\ldots, B^{(k)}$ are positive semidefinite matrices with $\sum_{i=1}^kB^{(i)}=I_N$. Further suppose that $r_1+\cdots +r_k=N$, where ri is the rank of $B^{(i)}$. If we write -$$ -Q_i=\sum_{j=1}^N\sum_{\ell=1}^N U_j B_{j,\ell}^{(i)} U_\ell -$$ - -so that the Qi are quadratic forms, then Cochran's theorem states that the Qi are independent, and each Qi has a chi-squared distribution with ri degrees of freedom. - -Less formally, ri is the number of linear combinations included in the sum of squares defining Qi, provided that these linear combinations are linearly independent. - -We first show that the matrices B(i) can be simultaneously diagonalized and that their non-zero eigenvalues are all equal to +1. We then use the vector basis that diagonalizes them to simplify their characteristic function and show their independence and distribution. - -Each of the matrices B(i) has rank ri and thus ri non-zero eigenvalues. For each i, the sum $C^{(i)} \equiv \sum_{j\ne i}B^{(j)}$ has at most rank $\sum_{j\ne i}r_j = N-r_i$. Since $B^{(i)}+C^{(i)} = I_{N \times N}$, it follows that C(i) has exactly rank N - ri. - -Therefore B(i) and C(i) can be simultaneously diagonalized. This can be shown by first diagonalizing B(i).
In this basis, it is of the form: - -\begin{bmatrix} - -\lambda_1 & 0 & 0 & \cdots & \cdots & & 0 \\ - -0 & \lambda_2 & 0 & \cdots & \cdots & & 0 \\ - -0 & 0 & \ddots & & & & \vdots \\ - -\vdots & \vdots & & \lambda_{r_i} & & \\ - -\vdots & \vdots & & & 0 & \\ - -0 & \vdots & & & & \ddots \\ - -0 & 0 & \ldots & & & & 0 - -\end{bmatrix}. - -Thus the lower $(N-r_i)$ rows are zero. Since $C^{(i)} = I - B^{(i)}$, it follows that these rows in C(i) in this basis contain a lower-right block which is a $(N-r_i)\times(N-r_i)$ unit matrix, with zeros in the rest of these rows. But since C(i) has rank N - ri, it must be zero elsewhere. Thus it is diagonal in this basis as well. It follows that all the non-zero eigenvalues of both B(i) and C(i) are +1. Moreover, the above analysis can be repeated in the diagonal basis for $C^{(1)} = B^{(2)} + \sum_{j>2}B^{(j)}$. In this basis $C^{(1)}$ is the identity on an $(N-r_1)$-dimensional vector space, so it follows that both B(2) and $\sum_{j>2}B^{(j)}$ are simultaneously diagonalizable in this vector space (and hence also together with B(1)). By iteration it follows that all the B(i) are simultaneously diagonalizable. - -Thus there exists an orthogonal matrix $S$ such that for all $i$, $ S^\mathrm{T}B^{(i)} S \equiv B^{(i)\prime} $ is diagonal, where any entry $ B^{(i)\prime}_{x,y} $ with indices $x = y$, $ \sum_{j=1}^{i-1} r_j < x = y \le \sum_{j=1}^i r_j $, is equal to 1, while any entry with other indices is equal to 0. - -Let $U_i^\prime$ denote some specific linear combination of all $U_i$ after transformation by $S$. Note that $\sum_{i=1}^N(U_i^\prime)^2=\sum_{i=1}^NU_i^2$ due to the length preservation of the orthogonal matrix S, that the Jacobian of a linear transformation is the matrix associated with the linear transformation itself, and that the determinant of an orthogonal matrix has modulus 1. - -The characteristic function of Qi is: - -\begin{align} - -\varphi_i(t) ={} & (2\pi)^{-N/2} \int du_1 \int du_2 \cdots \int du_N e^{i t Q_i} \cdot e^{-u_1^2/2} \cdot e^{-u_2^2/2} \cdots e^{-u_N^2/2} \\ - -={} & (2\pi)^{-N/2} \left(\prod_{j=1}^N \int du_j\right) e^{i t Q_i} \cdot e^{-\sum_{j=1}^N u_j^2/2} \\ - -={}& (2\pi)^{-N/2} \left(\prod_{j=1}^N \int du_j^\prime\right) e^{i t\cdot \sum_{m = r_1 + \cdots + r_{i-1}+1}^{r_1+\cdots+r_i} (u_m^\prime)^2} \cdot e^{-\sum_{j=1}^N {u_j^\prime}^2/2} \\ - -={}& (2\pi)^{-N/2} \left(\int e^{u^2(it-\frac{1}{2})} du \right)^{r_i} \left(\int e^{-\frac{u^2}{2}} du \right)^{N-r_i} \\ - -={}& (1 - 2 i t)^{-r_i/2} - -\end{align} - -This is the Fourier transform of the chi-squared distribution with ri degrees of freedom. Therefore this is the distribution of Qi. - -Moreover, the characteristic function of the joint distribution of all the Qi is: - -\begin{align} - -\varphi(t_1, t_2, \ldots, t_k) & = (2\pi)^{-N/2} \left(\prod_{j=1}^N \int dU_j\right) e^{i \sum_{i=1}^k t_i \cdot Q_i} \cdot e^{-\sum_{j=1}^N U_j^2/2} \\ - -& = (2\pi)^{-N/2} \left(\prod_{j=1}^N \int dU_j^\prime\right) e^{i \cdot \sum_{i=1}^k t_i \sum_{m = r_1+\cdots+r_{i-1} + 1}^{r_1+\cdots+r_i} (U_m^\prime)^2} \cdot e^{-\sum_{j=1}^N {U_j^\prime}^2/2} \\ - -& = (2\pi)^{-N/2} \prod_{i=1}^k\left(\int e^{u^2(it_i-\frac{1}{2})} du \right)^{r_i} \\ - -& = \prod_{i=1}^k (1 - 2 i t_i)^{-r_i/2} = \prod_{i=1}^k \varphi_i(t_i) - -\end{align} - -From this it follows that all the Qi are independent.
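As a quick numerical sanity check of the theorem (a sketch, not part of the original article), the following simulation uses the decomposition worked out in detail in the next section, with $B^{(1)}=I_n-J_n/n$ and $B^{(2)}=J_n/n$; numpy is assumed available.

```python
import numpy as np

rng = np.random.default_rng(0)
N, reps = 10, 50_000

J = np.ones((N, N)) / N          # B2 = J/N has rank 1
B1, B2 = np.eye(N) - J, J        # B1 + B2 = I, rank(B1) = N - 1

U = rng.standard_normal((reps, N))
Q1 = np.einsum('ri,ij,rj->r', U, B1, U)   # quadratic forms U^T B1 U
Q2 = np.einsum('ri,ij,rj->r', U, B2, U)

# Cochran's theorem predicts Q1 ~ chi^2_{N-1}, Q2 ~ chi^2_1, independent.
print(Q1.mean(), Q1.var())          # approx N-1 = 9 and 2(N-1) = 18
print(Q2.mean(), Q2.var())          # approx 1 and 2
print(np.corrcoef(Q1, Q2)[0, 1])    # approx 0
```

The empirical means and variances match those of the predicted chi-squared distributions, and the near-zero correlation is consistent with (though of course does not prove) independence.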
- -If X1, ..., Xn are independent normally distributed random variables with mean μ and standard deviation σ then -$$ -U_i = \frac{X_i-\mu}{\sigma} -$$ - -is standard normal for each i. Note that the total Q is equal to the sum of the squared Us, as shown here: - -\sum_iQ_i=\sum_{ijk} U_j B_{jk}^{(i)} U_k = \sum_{jk} U_j U_k \sum_i B_{jk}^{(i)} = - -\sum_{jk} U_j U_k\delta_{jk} = \sum_{j} U_j^2 - -which stems from the original assumption that $B^{(1)} + B^{(2)} + \cdots = I$. - -So instead we will calculate this quantity and later separate it into Qi's. It is possible to write - - - -\sum_{i=1}^n U_i^2=\sum_{i=1}^n\left(\frac{X_i-\overline{X}}{\sigma}\right)^2 - -+ n\left(\frac{\overline{X}-\mu}{\sigma}\right)^2 - - - -(here $\overline{X}$ is the sample mean). To see this identity, multiply throughout by $\sigma^2$ and note that - - - -\sum(X_i-\mu)^2= - -\sum(X_i-\overline{X}+\overline{X}-\mu)^2 - - - -and expand to give - - - -\sum(X_i-\mu)^2= - -\sum(X_i-\overline{X})^2+\sum(\overline{X}-\mu)^2+ - -2\sum(X_i-\overline{X})(\overline{X}-\mu). - - - -The third term is zero because it is equal to a constant times -$$ -\sum(\overline{X}-X_i)=0, -$$ - -and the second term has just n identical terms added together. Thus - - - -\sum(X_i-\mu)^2 = \sum(X_i-\overline{X})^2+n(\overline{X}-\mu)^2 , - - - -and hence - - - -\sum\left(\frac{X_i-\mu}{\sigma}\right)^2= - -\sum\left(\frac{X_i-\overline{X}}{\sigma}\right)^2 - -+n\left(\frac{\overline{X}-\mu}{\sigma}\right)^2= - -\overbrace{\sum_i\left(U_i-\frac{1}{n}\sum_j{U_j}\right)^2}^{Q_1} - -+\overbrace{\frac{1}{n}\left(\sum_j{U_j}\right)^2}^{Q_2}= - -Q_1+Q_2. - - - -Now $B^{(2)}=\frac{J_n}{n}$ with $J_n$ the matrix of ones, which has rank 1. In turn $B^{(1)}= I_n-\frac{J_n}{n}$, given that $I_n=B^{(1)}+B^{(2)}$. This expression can also be obtained by expanding $Q_1$ in matrix notation. It can be shown that the rank of $B^{(1)}$ is $n-1$, as the sum of all its rows is zero. Thus the conditions for Cochran's theorem are met. - -Cochran's theorem then states that Q1 and Q2 are independent, with chi-squared distributions with n - 1 and 1 degree of freedom respectively. This shows that the sample mean and sample variance are independent. This can also be shown by Basu's theorem, and in fact this property characterizes the normal distribution – for no other distribution are the sample mean and sample variance independent. - -The result for the distributions is written symbolically as - - - -\sum\left(X_i-\overline{X}\right)^2 \sim \sigma^2 \chi^2_{n-1}, - - - - - -n(\overline{X}-\mu)^2\sim \sigma^2 \chi^2_1. - - - -Both these random variables are proportional to the true but unknown variance σ2. Thus their ratio does not depend on σ2 and, because they are statistically independent, the distribution of their ratio is given by - - - -\frac{n\left(\overline{X}-\mu\right)^2} - -{\frac{1}{n-1}\sum\left(X_i-\overline{X}\right)^2}\sim \frac{\chi^2_1}{\frac{1}{n-1}\chi^2_{n-1}} - -\sim F_{1,n-1} - - - -where F1,n - 1 is the F-distribution with 1 and n - 1 degrees of freedom (see also Student's t-distribution). The final step here is effectively the definition of a random variable having the F-distribution. - -To estimate the variance σ2, one estimator that is sometimes used is the maximum likelihood estimator of the variance of a normal distribution - - - -\widehat{\sigma}^2= - -\frac{1}{n}\sum\left( - -X_i-\overline{X}\right)^2.
- -Cochran's theorem shows that - - - -\frac{n\widehat{\sigma}^2}{\sigma^2}\sim\chi^2_{n-1} - - - -and the properties of the chi-squared distribution show that - -\begin{align} - -E \left(\frac{n \widehat{\sigma}^2}{\sigma^2}\right) &= E \left(\chi^2_{n-1}\right) \\ - -\frac{n}{\sigma^2}E \left(\widehat{\sigma}^2\right) &= (n-1) \\ - -E \left(\widehat{\sigma}^2\right) &= \frac{\sigma^2 (n-1)}{n} - -\end{align} - -The following version is often seen when considering linear regression. Suppose that $Y\sim N_n(0,\sigma^2I_n)$ is a multivariate normal random vector (here $I_n$ denotes the n-by-n identity matrix), and that $A_1,\ldots,A_k$ are n-by-n symmetric matrices with $\sum_{i=1}^kA_i=I_n$. Then, on defining $r_i= \operatorname{Rank}(A_i)$, any one of the following conditions implies the other two: - -* $\sum_{i=1}^kr_i=n ,$ - -* $Y^TA_iY\sim\sigma^2\chi^2_{r_i}$ (thus the $A_i$ are positive semidefinite) - -* $Y^TA_iY$ is independent of $Y^TA_jY$ for $i\neq j .$ diff --git a/wiki/wikipedia/3650.txt b/wiki/wikipedia/3650.txt deleted file mode 100644 index e70b107381ce9da51f5019d823a971c12111deb7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3650.txt +++ /dev/null @@ -1,107 +0,0 @@ -In concurrency control of databases, transaction processing (transaction management), and other transactional distributed applications, global serializability (or modular serializability) is a property of a global schedule of transactions. A global schedule is the unified schedule of all the individual database (and other transactional object) schedules in a multidatabase environment (e.g., federated database). Complying with global serializability means that the global schedule is serializable, i.e., has the serializability property, while each component database (module) has a serializable schedule as well. In other words, it is usually incorrect to assume that a collection of serializable components provides overall system serializability. The need for correctness across databases in multidatabase systems makes global serializability a major goal for global concurrency control (or modular concurrency control). With the proliferation of the Internet, Cloud computing, Grid computing, and small, portable, powerful computing devices (e.g., smartphones), as well as the increasing sophistication of systems management, the need for atomic distributed transactions, and thus for effective global serializability techniques to ensure correctness in and among distributed transactional applications, seems to increase. - -In a federated database system or any other more loosely defined multidatabase system, which are typically distributed in a communication network, transactions span multiple (and possibly distributed) databases. Enforcing global serializability in such a system, where different databases may use different types of concurrency control, is problematic. Even if every local schedule of a single database is serializable, the global schedule of a whole system is not necessarily serializable. The massive communication exchanges of conflict information needed between databases to reach conflict serializability globally would lead to unacceptable performance, primarily due to computer and communication latency. Achieving global serializability effectively over different types of concurrency control has been open for several years.
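To make the failure mode concrete, here is a small illustrative sketch (not from the original article): each database's local precedence (conflict) graph is acyclic, so each local schedule is conflict serializable, yet the union of the two graphs contains a cycle, so the global schedule is not. The example schedules and the helper function are assumptions for illustration.

```python
# In DB1, transaction T1 precedes T2; in DB2, T2 precedes T1.
local_db1 = {('T1', 'T2')}
local_db2 = {('T2', 'T1')}

def has_cycle(edges):
    """Detect a cycle in a precedence graph by depth-first search."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
        graph.setdefault(v, [])
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}

    def dfs(u):
        color[u] = GRAY
        for w in graph[u]:
            if color[w] == GRAY or (color[w] == WHITE and dfs(w)):
                return True
        color[u] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in graph)

print(has_cycle(local_db1))              # False: DB1 alone is serializable
print(has_cycle(local_db2))              # False: DB2 alone is serializable
print(has_cycle(local_db1 | local_db2))  # True: the global schedule is not
```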
Commitment ordering (or Commit ordering; CO), a serializability technique publicly introduced in 1991 by Yoav Raz from Digital Equipment Corporation (DEC), provides an effective general solution for global (conflict) serializability across any collection of database systems and other transactional objects, with possibly different concurrency control mechanisms. CO does not need the distribution of conflict information, but rather utilizes the already needed (unmodified) atomic commitment protocol messages without any further communication between databases. It also allows optimistic (non-blocking) implementations. CO generalizes strong strict two phase locking (SS2PL), which in conjunction with the two-phase commit (2PC) protocol is the de facto standard for achieving global serializability across (SS2PL based) database systems. As a result, CO compliant database systems (with any, different concurrency control types) can transparently join existing SS2PL based solutions for global serializability. The same applies also to all other multiple (transactional) object systems that use atomic transactions and need global serializability for correctness (see examples above; nowadays such a need is no smaller than with database systems, the origin of atomic transactions). - -#Seamless, low overhead integration with any concurrency control mechanism, neither changing any transaction's operation scheduling nor blocking it, nor adding any new operation. - -#Heterogeneity: Global serializability is achieved across multiple transactional objects (e.g., database management systems) with different (any) concurrency control mechanisms, without interfering with the mechanisms' operations. - -#Modularity: Transactional objects can be added and removed transparently. - -#Autonomy of transactional objects: No need for conflict or equivalent information distribution (e.g., local precedence relations, locks, timestamps, or tickets; no object needs another object's information). - -#Scalability: With "normal" global transactions, computer network size and number of transactional objects can increase unboundedly with no impact on performance, and - -#Automatic global deadlock resolution. - -All these aspects, except the first two, are also possessed by the popular SS2PL, which is a (constrained, blocking) special case of CO and inherits many of CO's qualities. - -The difficulties described above translate into the following problem: - -Find an efficient (high-performance and fault tolerant) method to enforce global serializability (global conflict serializability) in a heterogeneous distributed environment of multiple autonomous database systems. The database systems may employ different concurrency control methods. No limitation should be imposed on the operations of either local transactions (confined to a single database system) or global transactions (spanning two or more database systems). - -Lack of an appropriate solution for the global serializability problem has driven researchers to look for alternatives to serializability as a correctness criterion in a multidatabase environment (e.g., see Relaxing global serializability below), and the problem has been characterized as difficult and open. The following two quotations demonstrate the mindset about it by the end of the year 1991, with similar quotations in numerous other articles: - -*"Without knowledge about local as well as global transactions, it is highly unlikely that efficient global concurrency control can be provided...
Additional complications occur when different component DBMSs [Database Management Systems] and the FDBMSs [Federated Database Management Systems] support different concurrency mechanisms... It is unlikely that a theoretically elegant solution that provides conflict serializability without sacrificing performance (i.e., concurrency and/or response time) and availability exists." - -The above quoted article also proposes a relaxed global serializability solution, while referencing the Commitment ordering (CO) work. The CO solution for global serializability both bridges between different concurrency control protocols with no substantial concurrency reduction (and typically a minor one, if at all), and maintains the autonomy of local DBMSs. Evidently also here CO has been misunderstood. This misunderstanding continued into 2010 in a textbook by some of the same authors, where the same relaxed global serializability technique, Two level serializability, is emphasized and described in detail, and CO is not mentioned at all. - -On the other hand, the following quotation on CO appears in a 2009 book: - -*"Not all concurrency control algorithms use locks... Three other techniques are timestamp ordering, serialization graph testing, and commit ordering. Timestamp ordering assigns each transaction a timestamp and ensures that conflicting operations execute in timestamp order. Serialization graph testing tracks conflicts and ensures that the serialization graph is acyclic. Commit ordering ensures that conflicting operations are consistent with the relative order in which their transactions commit, which can enable interoperability of systems using different concurrency control mechanisms." - -Comments: - -#Beyond the common locking based algorithm SS2PL, which is a CO variant itself, additional variants of CO that use locks exist (see below). However, generic, or "pure" CO does not use locks. - -#Since CO mechanisms order the commit events according to conflicts that have already occurred, it is better to describe CO as "Commit ordering ensures that the relative order in which transactions commit is consistent with the order of their respective conflicting operations." - -The characteristics and properties of the CO solution are discussed below. - -Several solutions, some partial, have been proposed for the global serializability problem. Among them: - -* Global conflict graph (serializability graph, precedence graph) checking - -* Distributed Two phase locking (Distributed 2PL) - -* Distributed Timestamp ordering - -* Tickets (local logical timestamps which define local total orders, and are propagated to determine a global partial order of transactions) - -* Commitment ordering - -The problem of global serializability has been an intensively researched subject in the late 1980s and early 1990s. Commitment ordering (CO) has provided an effective general solution to the problem, insight into it, and understanding about possible generalizations of strong strict two phase locking (SS2PL), which practically and almost exclusively has been utilized (in conjunction with the Two-phase commit protocol (2PC)) since the 1980s to achieve global serializability across databases. An important side-benefit of CO is the automatic global deadlock resolution that it provides (this is applicable also to distributed SS2PL; though global deadlocks have been an important research subject for SS2PL, automatic resolution has been overlooked, except in the CO articles, until today (2009)).
At that time many types of commercial database systems existed, many of them non-relational, and databases were relatively small. Multidatabase systems were considered a key to database scalability through database system interoperability, and global serializability was urgently needed. Since then the tremendous progress in computing power, storage, and communication networks has resulted in order-of-magnitude increases in centralized databases' sizes, transaction rates, and remote database access capabilities, as well as in a blurring of the boundaries between centralized and distributed computing over fast, low-latency local networks (e.g., Infiniband). These, together with progress in database vendors' distributed solutions (primarily the popular SS2PL-with-2PC combination, a de facto standard that allows interoperability among different vendors' (SS2PL-based) databases; both SS2PL and 2PC technologies have gained substantial expertise and efficiency), workflow management systems, and database replication technology, in most cases have provided satisfactory and sometimes better information technology solutions without multidatabase atomic distributed transactions over databases with different concurrency control (bypassing the problem above). As a result, the sense of urgency that existed with the problem at that period, and in general with high-performance distributed atomic transactions over databases with different concurrency control types, has diminished. However, the need for concurrent distributed atomic transactions as a fundamental element of reliability exists in distributed systems also beyond database systems, and so does the need for global serializability as a fundamental correctness criterion for such transactional systems (see also Distributed serializability in Serializability). With the proliferation of the Internet, Cloud computing, Grid computing, small, portable, powerful computing devices (e.g., smartphones), and sophisticated systems management, the need for effective global serializability techniques to ensure correctness in and among distributed transactional applications seems to increase, and thus also the need for Commitment ordering (including the special case SS2PL, popular for databases; SS2PL, though, does not meet the requirements of many other transactional objects). - -Commitment ordering (or Commit ordering; CO) is the only high-performance, fault tolerant solution providing conflict serializability that has been proposed as a fully distributed (no central computing component or data-structure are needed), general mechanism that can be combined seamlessly with any local (to a database) concurrency control mechanism (see technical summary). Since the CO property of a schedule is a necessary condition for global serializability of autonomous databases (in the context of concurrency control), it provides the only general solution for autonomous databases (i.e., if autonomous databases do not comply with CO, then global serializability may be violated). Seemingly by sheer luck, the CO solution possesses many attractive properties: - -#does not interfere with any transaction's operation, in particular neither blocks, restricts, nor delays any data-access operation (read or write) for either local or global transactions (and thus does not cause any extra aborts); thus allows seamless integration with any concurrency control mechanism. - -#allows optimistic implementations (non-blocking, i.e., non data-access blocking).
- -#allows heterogeneity: Global serializability is achieved across multiple transactional objects with different (any) concurrency control mechanisms, without interfering with the mechanisms' operations. - -#allows modularity: Transactional objects can be added and removed transparently. - -#allows full ACID transaction support. - -#maintains each database's autonomy, and does not need any concurrency control information distribution (e.g., local precedence relations, locks, timestamps, or tickets). - -#does not need any knowledge about the transactions. - -#requires no communication overhead since it only uses already needed, unmodified atomic commitment protocol messages (any such protocol; using fault tolerant atomic commitment protocols and database systems makes the CO solution fault tolerant). - -#automatically resolves global deadlocks due to locking. - -#scales up effectively with computer network size and number of databases, almost without any negative impact on performance, since each global transaction is typically confined to a relatively small number of databases and network nodes. - -#requires no additional, artificial transaction access operations (e.g., "take timestamp" or "take ticket"), which typically result in additional, artificial conflicts that reduce concurrency. - -#requires low overhead. - -The only overhead incurred by the CO solution is locally detecting conflicts (which is already done by any known serializability mechanism, both pessimistic and optimistic) and locally ordering, in each database system, both the (local) commits of local transactions and the voting for atomic commitment of global transactions. Such overhead is low. The net effect of CO may be some delays of commit events (but never more delay than SS2PL, and on average less). This makes CO instrumental for global concurrency control of multidatabase systems (e.g., federated database systems). The underlying Theory of Commitment ordering, part of Serializability theory, is both sound and elegant (and even "mathematically beautiful"; referring to structure and dynamics of conflicts, graph cycles, and deadlocks), with interesting implications for transactional distributed applications. - -All the qualities of CO in the list above, except the first three, are also possessed by SS2PL, which is a special case of CO, but blocking and constraining. This partially explains the popularity of SS2PL as a solution (practically, the only solution, for many years) for achieving global serializability. However, property 9 above, automatic resolution of global deadlocks, has not been noticed for SS2PL in the database research literature until today (2009; except in the CO publications). This is because the phenomenon of voting-deadlocks in such environments and their automatic resolution by the atomic commitment protocol has been overlooked. - -Most existing database systems, including all major commercial database systems, are strong strict two phase locking (SS2PL) based and already CO compliant. Thus they can participate in a CO based solution for global serializability in multidatabase environments without any modification (except for the popular multiversioning, where additional CO aspects should be considered).
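As a concrete, much-simplified illustration of the commit ordering rule itself — commit events must follow the local conflict order — consider the sketch below. It is not Raz's full generic CO algorithm (which, on an atomic-commitment vote, aborts the undecided preceding transactions rather than waiting), and the class and method names are invented for illustration.

```python
class CommitOrderer:
    """Toy local scheduler enforcing the CO rule: if T1's conflicting
    operation preceded T2's, then T1 must commit before T2."""

    def __init__(self):
        self.preceders = {}    # txn -> set of txns that must commit first
        self.committed = set()

    def record_conflict(self, first, second):
        # 'first' accessed the conflicting data item before 'second'.
        self.preceders.setdefault(second, set()).add(first)
        self.preceders.setdefault(first, set())

    def try_commit(self, txn):
        # Allowed only once every preceding transaction has committed;
        # a real CO scheduler could instead abort the pending preceders.
        pending = self.preceders.get(txn, set()) - self.committed
        if pending:
            return False, pending
        self.committed.add(txn)
        return True, set()

co = CommitOrderer()
co.record_conflict('T1', 'T2')   # conflict order: T1 -> T2
print(co.try_commit('T2'))       # (False, {'T1'}): T2 must wait
print(co.try_commit('T1'))       # (True, set())
print(co.try_commit('T2'))       # (True, set())
```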
Achieving global serializability across SS2PL based databases using atomic commitment (primarily using two phase commit, 2PC) has been employed for many years (i.e., using the same CO solution for a specific special case; however, no reference prior to CO is known that notices this special case's automatic global deadlock resolution by the atomic commitment protocol's augmented-conflict-graph global cycle elimination process). Virtually all existing distributed transaction processing environments and supporting products rely on SS2PL and provide 2PC. As a matter of fact SS2PL together with 2PC have become a de facto standard. This solution is a homogeneous concurrency control one, suboptimal (when both Serializability and Strictness are needed; see Strict commitment ordering; SCO) but still quite effective in most cases, sometimes at the cost of increased computing power needed relative to the optimum. (However, for better performance relaxed serializability is used whenever applications allow.) It allows inter-operation among different SS2PL-compliant database system types, i.e., allows heterogeneity in aspects other than concurrency control. SS2PL is a very constraining schedule property and "takes over" when combined with any other property. For example, when combined with any optimistic property, the result is not optimistic anymore, but rather characteristically SS2PL. On the other hand, CO does not change data-access scheduling patterns at all, and any combined property's characteristics remain unchanged. Since CO also uses atomic commitment (e.g., 2PC) for achieving global serializability, as SS2PL does, any CO compliant database system or transactional object can transparently join existing SS2PL based environments, use 2PC, and maintain global serializability without any environment change. This makes CO a straightforward, natural generalization of SS2PL for any conflict serializability based database system, for all practical purposes. - -Commitment ordering has been quite widely known inside the transaction processing and databases communities at Digital Equipment Corporation (DEC) since 1990. It has been under company confidentiality due to patenting processes. CO was disclosed outside of DEC by lectures and the distribution of technical reports to database researchers in May 1991, immediately after its first patent filing. It has been misunderstood by many database researchers years after its introduction, as is evident from the quotes above from articles in 1997-1998 referencing Commitment ordering articles. On the other hand, CO has been utilized extensively as a solution for global serializability in works on Transactional processes, and more recently in the related Re:GRIDiT, an approach for transaction management in the converging Grid computing and Cloud computing. - -See more in The History of Commitment Ordering. - -Some techniques have been developed for relaxed global serializability (i.e., they do not guarantee global serializability; see also Relaxing serializability). Among them (with several publications each): - -* Quasi serializability - -* Two-level serializability - -Consequently, Optimistic replication (Lazy replication) is often utilized (e.g., in many products and services by Google, Amazon, Yahoo, and the like), while global serializability is relaxed and compromised for eventual consistency. In this case relaxation is done only for applications that are not expected to be harmed by it.
- -Classes of schedules defined by relaxed global serializability properties either contain the global serializability class, or are incomparable with it. What differentiates techniques for relaxed global conflict serializability (RGCSR) properties from those for relaxed conflict serializability (RCSR) properties that are not RGCSR is typically the different way global cycles (cycles spanning two or more databases) in the global conflict graph are handled. No distinction between global and local cycles exists for RCSR properties that are not RGCSR. RCSR contains RGCSR. Typically RGCSR techniques eliminate local cycles, i.e., provide local serializability (which can be achieved effectively by regular, known concurrency control methods); however, obviously they do not eliminate all global cycles (which would achieve global serializability). diff --git a/wiki/wikipedia/3651.txt b/wiki/wikipedia/3651.txt deleted file mode 100644 index 3473e3d6cd2066a384e800831a9ecee9e36d657c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3651.txt +++ /dev/null @@ -1,63 +0,0 @@ -Safe semantics is a computer hardware consistency model. It describes one type of guarantee that a data register provides when it is shared by several processors in a parallel computer or in a network of computers working together. - -Safe semantics was first defined by Leslie Lamport in 1985. It was formally defined in Lamport's "On Interprocess Communication" in 1986. - -Safe registers have been implemented in many distributed systems. - -Safe semantics are defined for a variable with a single writer but multiple readers (SWMR). A SWMR register is safe if each read operation satisfies these properties: - -# A read operation not concurrent with any write operation returns the value written by the latest write operation. - -# A read operation that is concurrent with a write operation may return any value within the register's allowed range of values (for example, 0,1,2,...). - -In particular, given concurrency of a read and a write operation, the read can return a value that has not been written by a write. The return value need only belong to the register domain. - -A binary safe register can be seen as modeling a flickering bit. Whatever the previous value of the register is, its value could flicker until the write finishes. Therefore, the read that overlaps with a write could return 0 or 1. - -Churn refers to the entry and exit of servers to/from a distributed system. Baldoni et al. show that no register can have the stronger property of regular semantics in a synchronous system under continuous churn. However, a safe register can be implemented under continuous churn in a non-synchronous system. Client systems contain a finite, arbitrary number of processes that are responsible for reading and writing the server system. However, the server system must ensure that read and write operations happen properly. - -The safe register implementation involves: - -* The safe register is maintained by the set of active servers. - -* Clients maintain no register information. - -* An eventually synchronous system. - -* Quora (sets of servers or clients). - -* Read and write operations executed on quora of size n − f − j (n is the number of servers, j is the number of servers that enter and exit, and f is the number of Byzantine failures). - -* Algorithms such as join, read, and write. - -A server (si) that wants to enter the server system broadcasts an inquiry message to the other servers to inform them of its entry and to request the current value of the register.
Once the other servers receive this inquiry, they send reply messages to si. Si collects the replies into a reply set, and once it has enough replies (n − f − j) it picks the most frequently received value. Si also: - -* Updates its local copy of the register - -* Becomes active - -* Replies to the processes in the reply set - -* If it becomes active it sends reply messages to the other servers. Otherwise, it stores the inquiries, replying when it becomes active. - -* When it gets replies from other servers it adds the new reply to the reply set and discards the old value. - -* If the value of the responding server is bigger than si's value, si retains the new value. - -The read algorithm is a basic version of join. The difference is the broadcast mechanism used by the read operation. A client (cr) broadcasts a message to the system, and once a server receives the inquiry, it sends a reply message to the client. Once the client receives enough replies (n − f − j) it stops sending the inquiry. - -The client (cw) sends an inquiry into the system in different rounds and waits until it receives two acknowledgments (sn = sequence number). - -The reason for requiring two acknowledgments is fault tolerance: a process may crash immediately after sending its first acknowledgement (ack), in which case the client never receives confirmation. - -The validity of the safe register (if a read is not concurrent with any write, it returns the last value written) was proved based on the quorum system. Given two quorums $Q_w$ and $Q_r$, $Q_w$ indicates the servers that know about the latest value, and $Q_r$ the servers whose values form the read responses. The size of each quorum is equal to n − f − j. Proving the safe register's validity requires proving -$$ -|(Q_w \cap Q_r)\setminus B| > |Q_r \cap B| -$$ - -where $B$ is the set of Byzantine servers. - -Proof: Let the red region denote $(Q_w\cap Q_r)\setminus B$ and the blue region denote $Q_r\cap B$. From the assumption, the size of each quorum is n − f − j, so the red region contains at least n − 3f − 2j active servers, while the blue region contains at most f servers. Therefore -$$ -n-3f-2j > f \iff n > 4f+2j, -$$ -so whenever $n > 4f+2j$ the red region is strictly larger than the blue region, as required. diff --git a/wiki/wikipedia/3652.txt b/wiki/wikipedia/3652.txt deleted file mode 100644 index 307a66839545344da0c2a2ece179e0304d3fca03..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3652.txt +++ /dev/null @@ -1,29 +0,0 @@ -In graph theory, a branch of mathematics, the Herschel graph is a bipartite undirected graph with 11 vertices and 18 edges, the smallest non-Hamiltonian polyhedral graph. It is named after British astronomer Alexander Stewart Herschel. - -The Herschel graph is a planar graph: it can be drawn in the plane with none of its edges crossing. It is also 3-vertex-connected: the removal of any two of its vertices leaves a connected subgraph. It is a bipartite graph: its vertices can be separated into two subsets of five and six vertices respectively, such that every edge has an endpoint in each subset. - -As with any bipartite graph, the Herschel graph is a perfect graph: the chromatic number of every induced subgraph equals the size of the largest clique of that subgraph. It also has chromatic index 4, girth 4, radius 3 and diameter 4. - -The Herschel graph is planar and 3-vertex-connected, so it follows by Steinitz's theorem that it is a polyhedral graph: there exists a convex polyhedron (an enneahedron) having the Herschel graph as its skeleton.
- -This polyhedron has nine quadrilaterals for faces, which can be chosen to form three rhombi and six kites. - -Its dual polyhedron is a rectified triangular prism, formed as the convex hull of the midpoints of the edges of a triangular prism. - -This polyhedron has the property that its faces cannot be numbered in such a way that consecutive numbers appear on adjacent faces, and such that the first and last number are also on adjacent faces. - -Because polyhedral face numberings of this type are used as "spindown life counters" in the game Magic: The Gathering, Constantinides names the canonical polyhedron realization of this dual polyhedron "the Lich's nemesis". - -As a bipartite graph that has an odd number of vertices, the Herschel graph does not contain a Hamiltonian cycle (a cycle of edges that passes through each vertex exactly once). For, in any bipartite graph, any cycle must alternate between the vertices on either side of the bipartition, and therefore must contain equal numbers of both types of vertex and must have an even length. Thus, a cycle passing once through each of the eleven vertices cannot exist in the Herschel graph. It is the smallest non-Hamiltonian polyhedral graph, whether the size of the graph is measured in terms of its number of vertices, edges, or faces. There exist other polyhedral graphs with 11 vertices and no Hamiltonian cycles (notably the Goldner–Harary graph) but none with fewer edges. - -All but three of the vertices of the Herschel graph have degree three. Tait's conjecture states that a polyhedral graph in which every vertex has degree three must be Hamiltonian, but this was disproved when W. T. Tutte provided a counterexample, the much larger Tutte graph. A refinement of Tait's conjecture, Barnette's conjecture that every bipartite 3-regular polyhedral graph is Hamiltonian, remains open. - -Every maximal planar graph that does not have a Hamiltonian cycle has a Herschel graph as a minor. The Herschel graph is conjectured to be one of three minor-minimal non-Hamiltonian 3-vertex-connected graphs. The other two are the complete bipartite graph $K_{3,4}$ and a graph formed by splitting both the Herschel graph and $K_{3,4}$ into two symmetric halves by three-vertex separators and then combining one half from each graph. - -The Herschel graph also provides an example of a polyhedral graph for which the medial graph cannot be decomposed into two edge-disjoint Hamiltonian cycles. The medial graph of the Herschel graph is a 4-regular graph with 18 vertices, one for each edge of the Herschel graph; two vertices are adjacent in the medial graph whenever the corresponding edges of the Herschel graph are consecutive on one of its faces. - -It is 4-vertex-connected and essentially 6-edge-connected, meaning that for every partition of the vertices into two subsets of at least two vertices, there are at least six edges crossing the partition. - -The Herschel graph is named after British astronomer Alexander Stewart Herschel, who wrote an early paper concerning William Rowan Hamilton's icosian game: the Herschel graph describes the smallest convex polyhedron for which this game has no solution. However, Herschel's paper described solutions for the icosian game only on the graphs of the regular tetrahedron and regular icosahedron; it did not describe the Herschel graph. - -The name "the Herschel graph" makes an early appearance in a graph theory textbook by John Adrian Bondy and U. S. R. Murty, published in 1976.
However, the graph itself was described earlier, for instance by H. S. M. Coxeter. diff --git a/wiki/wikipedia/3653.txt b/wiki/wikipedia/3653.txt deleted file mode 100644 index ba7ea99048381375526fce8278ee8b411ceb1a42..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3653.txt +++ /dev/null @@ -1,219 +0,0 @@ -Proofs of the mathematical result that the rational number 22/7 is greater than pi date back to antiquity. One of these proofs, more recently developed but requiring only elementary techniques from calculus, has attracted attention in modern mathematics due to its mathematical elegance and its connections to the theory of diophantine approximations. Stephen Lucas calls this proof "one of the more beautiful results related to approximating pi". - -Julian Havil ends a discussion of continued fraction approximations of pi with the result, describing it as "impossible to resist mentioning" in that context. - -The purpose of the proof is not primarily to convince its readers that 22/7 (or $3\tfrac{1}{7}$) is indeed bigger than pi; systematic methods of computing the value of pi exist. If one knows that pi is approximately 3.14159, then it trivially follows that pi < 22/7, which is approximately 3.142857. But it takes much less work to show that pi < 22/7 by the method used in this proof than to show that pi is approximately 3.14159. - -22/7 is a widely used Diophantine approximation of pi. It is a convergent in the simple continued fraction expansion of pi. It is greater than pi, as can be readily seen in the decimal expansions of these values: - -\begin{align} - -\frac{22}{7} & = 3.\overline{142857}, \\ - -\pi & = 3.14159265\ldots - -\end{align} - -The approximation has been known since antiquity. Archimedes wrote the first known proof that 22/7 is an overestimate in the 3rd century BCE, although he may not have been the first to use that approximation. His proof proceeds by showing that 22/7 is greater than the ratio of the perimeter of a regular polygon with 96 sides to the diameter of a circle it circumscribes. - -The proof can be expressed very succinctly: -$$ - 0 < \int_0^1 \frac{x^4\left(1-x\right)^4}{1+x^2} dx = \frac{22}{7} - \pi. -$$ - -Therefore, 22/7 > pi. - -The evaluation of this integral was the first problem in the 1968 Putnam Competition. - -It is easier than most Putnam Competition problems, but the competition often features seemingly obscure problems that turn out to refer to something very familiar. This integral has also been used in the entrance examinations for the Indian Institutes of Technology. - -That the integral is positive follows from the fact that the integrand is non-negative, being a quotient involving only sums and products of powers of non-negative real numbers. In addition, one can easily check that the integrand is strictly positive for at least one point in the range of integration, say at 1/2. Since the integrand is continuous at that point and non-negative elsewhere, the integral from 0 to 1 must be strictly positive.
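A simple numerical check makes the positivity (and the value established in the next step) visible; the step count below is an arbitrary assumption of this sketch.

```python
# Midpoint-rule estimate of the integral. Every sampled integrand value
# is non-negative, and the total is close to 22/7 - pi.
n = 100_000
h = 1.0 / n
total = 0.0
for k in range(n):
    x = (k + 0.5) * h
    fx = x**4 * (1 - x)**4 / (1 + x**2)
    assert fx >= 0.0
    total += fx * h

print(total)                        # approx 0.00126448926...
print(22 / 7 - 3.141592653589793)   # approx 0.00126448926...
```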
- -It remains to show that the integral in fact evaluates to the desired quantity: - - - -\begin{align} - -0 & < \int_0^1\frac{x^4\left(1-x\right)^4}{1+x^2} dx \\[8pt] - -& = \int_0^1\frac{x^4-4x^5+6x^6-4x^7+x^8}{1+x^2} dx & \text{expansion of terms in the numerator} \\[8pt] - -& = \int_0^1 \left(x^6-4x^5+5x^4-4x^2+4-\frac{4}{1+x^2}\right) dx & \text{using polynomial long division} \\[8pt] - -& = \left.\left(\frac{x^7}{7}-\frac{2x^6}{3}+ x^5- \frac{4x^3}{3}+4x-4\arctan{x}\right)\right|_0^1 & \text{definite integration} \\[6pt] - -& = \frac{1}{7}-\frac{2}{3}+1-\frac{4}{3}+4-\pi\quad & \text{with }\arctan(1) = \frac{\pi}{4} \text{ and } \arctan(0) = 0 \\[8pt] - -& = \frac{22}{7}-\pi. & \text{addition} - -\end{align} - - - -(See polynomial long division.) - -In Dalzell, it is pointed out that if 1 is substituted for x in the denominator, one gets a lower bound on the integral, and if 0 is substituted for x in the denominator, one gets an upper bound: -$$ -\frac{1}{1260} = \int_0^1\frac{x^4 \left(1-x\right)^4}{2}dx < \int_0^1\frac{x^4 \left(1-x\right)^4}{1+x^2}dx < \int_0^1\frac{x^4 \left(1-x\right)^4}{1}dx = \frac{1}{630}. -$$ - -Thus we have -$$ -\frac{22}{7} - \frac{1}{630} < \pi < \frac{22}{7} - \frac{1}{1260}, -$$ - -hence 3.1412 < pi < 3.1421 in decimal expansion. The bounds deviate by less than 0.015% from pi. See also Dalzell. - -As discussed in Lucas, the well-known Diophantine approximation and far better upper estimate 355/113 for pi follows from the relation -$$ -0<\int_0^1\frac{x^8\left(1-x\right)^8\left(25+816x^2\right)}{3164\left(1+x^2\right)}dx=\frac{355}{113}-\pi. -$$ -$$ -\frac{355}{113}= 3.14159292\ldots, -$$ - -where the first six digits after the decimal point agree with those of pi. Substituting 1 for x in the denominator, we get the lower bound -$$ -\int_0^1\frac{x^8\left(1-x\right)^8\left(25+816x^2\right)}{6328}dx =\frac{911}{5261111856} = 0.000000173\ldots, -$$ - -substituting 0 for x in the denominator, we get twice this value as an upper bound, hence -$$ -\frac{355}{113}-\frac{911}{2630555928}<\pi<\frac{355}{113}-\frac{911}{5261111856}. -$$ - -In decimal expansion, this means 3.141 592 57 < pi < 3.141 592 74, where the bounds agree with pi in the first six digits after the decimal point. - -The above ideas can be generalized to get better approximations of pi; see also Backhouse and Lucas (in both references, however, no calculations are given). For explicit calculations, consider, for every integer n ≥ 1, - - - -\frac1{2^{2n-1}}\int_0^1 x^{4n}(1-x)^{4n}dx - -<\frac1{2^{2n-2}}\int_0^1\frac{x^{4n}(1-x)^{4n}}{1+x^2}dx - -<\frac1{2^{2n-2}}\int_0^1 x^{4n}(1-x)^{4n}dx, - - - -where the middle integral evaluates to - -\begin{align} - -\frac1{2^{2n-2}} & \int_0^1\frac{x^{4n}(1-x)^{4n}}{1+x^2}dx\\[6pt] - -= {} & \sum_{j=0}^{2n-1}\frac{(-1)^j}{2^{2n-j-2}(8n-j-1)\binom{8n-j-2}{4n+j}} - -+(-1)^n\left(\pi-4\sum_{j=0}^{3n-1}\frac{(-1)^j}{2j+1}\right) - -\end{align} - -involving pi. The last sum also appears in Leibniz' formula for pi. The correction term and error bound is given by - -\begin{align}\frac1{2^{2n-1}}\int_0^1 x^{4n}(1-x)^{4n}dx - -&=\frac{1}{2^{2n-1}(8n+1)\binom{8n}{4n}}\\[6pt] - -&\sim\frac{\sqrt{\pi n}}{2^{10n-2}(8n+1)}, - -\end{align} - -where the approximation (the tilde means that the quotient of both sides tends to one for large n) of the central binomial coefficient follows from Stirling's formula and shows the fast convergence of the integrals to pi.
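All of these identities are easy to verify with a computer algebra system; the following sketch assumes sympy is available and checks the basic integral, the 355/113 variant, and the two crude bounds symbolically.

```python
from sympy import symbols, integrate, Rational, pi, simplify

x = symbols('x')

# The basic integral equals 22/7 - pi.
I1 = integrate(x**4 * (1 - x)**4 / (1 + x**2), (x, 0, 1))
print(simplify(I1 - (Rational(22, 7) - pi)))     # 0

# The 355/113 relation quoted above.
I2 = integrate(x**8 * (1 - x)**8 * (25 + 816 * x**2)
               / (3164 * (1 + x**2)), (x, 0, 1))
print(simplify(I2 - (Rational(355, 113) - pi)))  # 0

# Bounds from replacing the denominator 1 + x^2 by 2 and by 1.
print(integrate(x**4 * (1 - x)**4 / 2, (x, 0, 1)))  # 1/1260
print(integrate(x**4 * (1 - x)**4, (x, 0, 1)))      # 1/630
```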
- -Calculation of these integrals: For all integers k ≥ 0 and ℓ ≥ 2 we have - -\begin{align} - -x^k(1-x)^\ell&=(1-2x+x^2)x^k(1-x)^{\ell-2}\\[6pt] - -&=(1+x^2)x^k(1-x)^{\ell-2}-2x^{k+1}(1-x)^{\ell-2}. - -\end{align} - -Applying this formula recursively 2n times yields - -x^{4n}(1-x)^{4n} - -=\left(1+x^2\right)\sum_{j=0}^{2n-1}(-2)^jx^{4n+j}(1-x)^{4n-2(j+1)}+(-2)^{2n}x^{6n}. - -Furthermore, - -\begin{align} - -x^{6n}-(-1)^{3n} - -&=\sum_{j=1}^{3n}(-1)^{3n-j}x^{2j}-\sum_{j=0}^{3n-1}(-1)^{3n-j}x^{2j}\\[6pt] - -&=\sum_{j=0}^{3n-1}\left((-1)^{3n-(j+1)} x^{2(j+1)}-(-1)^{3n-j}x^{2j}\right)\\[6pt] - -&=-(1+x^2)\sum_{j=0}^{3n-1} (-1)^{3n-j}x^{2j}, - -\end{align} - -where the first equality holds because the terms for 1 ≤ j ≤ 3n – 1 cancel, and the second equality arises from the index shift j → j + 1 in the first sum. - -Application of these two results gives - -\begin{align}\frac{x^{4n}(1-x)^{4n}}{2^{2n-2}(1+x^2)} - -=\sum_{j=0}^{2n-1} & \frac{(-1)^j}{2^{2n-j-2}}x^{4n+j}(1-x)^{4n-2j-2}\\[6pt] - -& {} -4\sum_{j=0}^{3n-1}(-1)^{3n-j}x^{2j}+(-1)^{3n}\frac4{1+x^2}.\qquad(1) - -\end{align} - -For integers k, ℓ ≥ 0, using integration by parts ℓ times, we obtain - -\begin{align} - -\int_0^1 x^k(1-x)^\ell dx - -&=\frac \ell{k+1}\int_0^1x^{k+1}(1-x)^{\ell-1}dx\\[6pt] - -&\vdots\\[6pt] - -&=\frac \ell{k+1} \frac{\ell-1}{k+2}\cdots\frac1{k+\ell}\int_0^1x^{k+\ell}dx\\[6pt] - -&=\frac{1}{(k+\ell+1)\binom{k+\ell}{k}}.\qquad(2) - -\end{align} - -Setting k = ℓ = 4n, we obtain -$$ -\int_0^1 x^{4n} (1-x)^{4n}dx = \frac{1}{(8n+1)\binom{8n}{4n}}. -$$ - -Integrating equation (1) from 0 to 1 using equation (2) and arctan(1) = π/4, we get the claimed equation involving pi. - -The results for n = 1 are given above. For n = 2 we get -$$ -\frac14\int_0^1\frac{x^8(1-x)^8}{1+x^2}dx=\pi -\frac{47171}{15015} -$$ - -and -$$ -\frac18\int_0^1 x^8(1-x)^8dx=\frac1{1750320}, -$$ - -hence 3.141 592 31 < pi < 3.141 592 89, where the bounds agree with pi in the first six digits after the decimal point. Similarly for n = 3, -$$ -\frac1{16}\int_0^1\frac{x^{12}\left(1-x\right)^{12}}{1+x^2}dx= \frac{431302721}{137287920}-\pi -$$ - -with correction term and error bound -$$ -\frac1{32}\int_0^1 x^{12} (1-x)^{12}dx=\frac1{2163324800}, -$$ - -hence 3.141 592 653 40 < pi < 3.141 592 653 87. The next step for n = 4 is -$$ -\frac1{64}\int_0^1\frac{x^{16} (1-x)^{16}}{1+x^2}dx= \pi-\frac{741269838109}{235953517800} -$$ - -with -$$ -\frac1{128}\int_0^1 x^{16} (1-x)^{16}dx=\frac1{2538963567360}, -$$ - -which gives 3.141 592 653 589 55 < pi < 3.141 592 653 589 96. diff --git a/wiki/wikipedia/3654.txt b/wiki/wikipedia/3654.txt deleted file mode 100644 index 7c306f2f82b0f98b40165d7ba3884bcd95d94427..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3654.txt +++ /dev/null @@ -1,23 +0,0 @@ -In mathematical general relativity, the Penrose inequality, first conjectured by Sir Roger Penrose, estimates the mass of a spacetime in terms of the total area of its black holes and is a generalization of the positive mass theorem. The Riemannian Penrose inequality is an important special case. Specifically, if (M, g) is an asymptotically flat Riemannian 3-manifold with nonnegative scalar curvature and ADM mass m, and A is the area of the outermost minimal surface (possibly with multiple connected components), then the Riemannian Penrose inequality asserts -$$ -m \geq \sqrt{\frac{A}{16\pi}}.
-$$ - -This is purely a geometrical fact, and it corresponds to the case of a complete three-dimensional, space-like, totally geodesic submanifold of a (3 + 1)-dimensional spacetime. Such a submanifold is often called a time-symmetric initial data set for a spacetime. The condition of (M, g) having nonnegative scalar curvature is equivalent to the spacetime obeying the dominant energy condition. - -This inequality was first proved by Gerhard Huisken and Tom Ilmanen in 1997 in the case where A is the area of the largest component of the outermost minimal surface. Their proof relied on the machinery of weakly defined inverse mean curvature flow, which they developed. In 1999, Hubert Bray gave the first complete proof of the above inequality using a conformal flow of metrics. Both of the papers were published in 2001. - -The original physical argument that led Penrose to conjecture such an inequality invoked the Hawking area theorem and the cosmic censorship hypothesis. - -Both the Bray and Huisken–Ilmanen proofs of the Riemannian Penrose inequality state that under the hypotheses, if -$$ -m = \sqrt{\frac{A}{16\pi}}, -$$ - -then the manifold in question is isometric to a slice of the Schwarzschild spacetime outside of the outermost minimal surface. - -More generally, Penrose conjectured that an inequality as above should hold for spacelike submanifolds of spacetimes that are not necessarily time-symmetric. In this case, nonnegative scalar curvature is replaced with the dominant energy condition, and one possibility is to replace the minimal surface condition with an apparent horizon condition. Proving such an inequality remains an open problem in general relativity, called the Penrose conjecture. - -*In episode 6 of season 8 of the television sitcom The Big Bang Theory, Dr. Sheldon Cooper claims to be in the process of solving the Penrose Conjecture while at the same time composing his Nobel Prize acceptance speech. diff --git a/wiki/wikipedia/3655.txt b/wiki/wikipedia/3655.txt deleted file mode 100644 index d017b265a0f90637192735b32b3ee9fac2776e80..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3655.txt +++ /dev/null @@ -1,3 +0,0 @@ -The Hopf theorem (named after Heinz Hopf) is a statement in differential topology, saying that the topological degree is the only homotopy invariant of continuous maps to spheres. - -Let M be an n-dimensional compact connected oriented manifold, let $S^n$ be the n-sphere, and let $f,g\colon M\to S^n$ be continuous maps. Then $\deg(f)=\deg(g)$ if and only if f and g are homotopic. diff --git a/wiki/wikipedia/3656.txt b/wiki/wikipedia/3656.txt deleted file mode 100644 index 514ad344238d27a30603683d5ecaf1921f0f0ce5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3656.txt +++ /dev/null @@ -1,11 +0,0 @@ -In mathematical writing, the term strict refers to the property of excluding equality and equivalence and often occurs in the context of inequality and monotonic functions. It is often attached to a technical term to indicate that the exclusive meaning of the term is to be understood. The opposite is non-strict, which is often understood to be the case but can be put explicitly for clarity. In some contexts, the word "proper" can also be used as a mathematical synonym for "strict". - -This term is commonly used in the context of inequalities - the phrase "strictly less than" means "less than and not equal to" (likewise "strictly greater than" means "greater than and not equal to").
More generally, a strict partial order, strict total order, and strict weak order exclude equality and equivalence. - -When comparing numbers to zero, the phrases "strictly positive" and "strictly negative" mean "positive and not equal to zero" and "negative and not equal to zero", respectively. In the context of functions, the adverb "strictly" is used to modify the terms "monotonic", "increasing", and "decreasing". - -On the other hand, sometimes one wants to specify the inclusive meanings of terms. In the context of comparisons, one can use the phrases "non-negative", "non-positive", "non-increasing", and "non-decreasing" to make it clear that the inclusive sense of the terms is being used. - -The use of such terms and phrases helps avoid possible ambiguity and confusion. For instance, when reading the phrase "x is positive", it is not immediately clear whether x = 0 is possible, since some authors might use the term positive loosely to mean that x is not less than zero. Such an ambiguity can be mitigated by writing "x is strictly positive" for x > 0, and "x is non-negative" for x ≥ 0. (A precise term like non-negative is never used with the word negative in the wider sense that includes zero.) - -The word "proper" is often used in the same way as "strict". For example, a "proper subset" of a set S is a subset that is not equal to S itself, and a "proper class" is a class which is not also a set. diff --git a/wiki/wikipedia/3657.txt b/wiki/wikipedia/3657.txt deleted file mode 100644 index 3ebf873ed79e8825950da1a41db8cf7c6a88f2b7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3657.txt +++ /dev/null @@ -1 +0,0 @@ -Sphere packing in a sphere is a three-dimensional packing problem with the objective of packing a given number of equal spheres inside a unit sphere. It is the three-dimensional equivalent of the circle packing in a circle problem in two dimensions. diff --git a/wiki/wikipedia/3658.txt b/wiki/wikipedia/3658.txt deleted file mode 100644 index 52a2553ae7d062b8b5729b7da03d59d118e5f4fc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3658.txt +++ /dev/null @@ -1,9 +0,0 @@ -The Pursuit of Perfect Packing is a book on packing problems in geometry. It was written by physicists Tomaso Aste and Denis Weaire, and published in 2000 by Institute of Physics Publishing (doi:10.1887/0750306483) with a second edition published in 2008 by Taylor & Francis. - -The mathematical topics described in the book include sphere packing (including the Tammes problem, the Kepler conjecture, and higher-dimensional sphere packing), the Honeycomb conjecture and the Weaire–Phelan structure, Voronoi diagrams and Delaunay triangulations, Apollonian gaskets, random sequential adsorption, and the physical realizations of some of these structures by sand, soap bubbles, the seeds of plants, and columnar basalt. A broader theme involves the contrast between locally ordered and locally disordered structures, and the interplay between local and global considerations in optimal packings. - -The book also includes biographical sketches of some of the contributors to this field, and histories of their work in this area, including Johannes Kepler, Stephen Hales, Joseph Plateau, Lord Kelvin, Osborne Reynolds, and J. D. Bernal. - -The book is aimed at a general audience rather than at professional mathematicians. Therefore, it avoids mathematical proofs and is otherwise not very technical.
However, it contains pointers to the mathematical literature where readers more expert in these topics can find more detail. Avoiding proofs may have been a necessary decision, as some proofs in this area defy summarization: the proof by Thomas Hales of the Kepler conjecture on optimal sphere packing in three dimensions, announced shortly before the publication of the book and one of its central topics, is hundreds of pages long.
-
-Reviewer Johann Linhart complains that (in the first edition) some figures are inaccurately drawn. Although he finds the book "entertaining and easy to read", William Satzer finds it "frustrating" for the lack of detail in its stories. Nevertheless, Linhart and reviewer Stephen Blundell highly recommend the book, and reviewer Charles Radin calls it "a treasure trove of intriguing examples" and "a real gem". Despite complaining about a format that mixes footnote markers into mathematical formulas, and the illegibility of some figures, Michael Fox recommends it to "any mathematics or science library".
diff --git a/wiki/wikipedia/3659.txt b/wiki/wikipedia/3659.txt
deleted file mode 100644
index 9c6a74e411e711ab42f45a0a4a0c362d00ae454c..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3659.txt
+++ /dev/null
@@ -1,52 +0,0 @@
-In mathematics, a Ford circle is a circle with center at $(p/q,1/(2q^2))$ and radius $1/(2q^2),$ where $p/q$ is an irreducible fraction, i.e. $p$ and $q$ are coprime integers. Each Ford circle is tangent to the horizontal axis $y=0,$ and any two Ford circles are either tangent or disjoint from each other. In the 17th century René Descartes discovered Descartes' theorem, a relationship between the reciprocals of the radii of mutually tangent circles.
-
-The Ford circle associated with the fraction $p/q$ is denoted by $C[p/q]$ or $C[p,q].$ There is a Ford circle associated with every rational number. In addition, the line $y=1$ is counted as a Ford circle – it can be thought of as the Ford circle associated with infinity, which is the case $p=1,q=0.$
-
-Two different Ford circles are either disjoint or tangent to one another. No two interiors of Ford circles intersect, even though there is a Ford circle tangent to the x-axis at each point on it with rational coordinates. If $p/q$ is between 0 and 1, the Ford circles that are tangent to $C[p/q]$ can be described variously as
-
-# the circles $C[r/s]$ where $|ps-qr|=1,$
-
-# the circles associated with the fractions $r/s$ that are the neighbors of $p/q$ in some Farey sequence, or
-
-# the circles $C[r/s]$ where $r/s$ is the next larger or the next smaller ancestor to $p/q$ in the Stern–Brocot tree or where $p/q$ is the next larger or next smaller ancestor to $r/s$.
-
-If $C[p/q]$ and $C[r/s]$ are two tangent Ford circles, then the circle through $(p/q,0)$ and $(r/s,0)$ (the x-coordinates of the centers of the Ford circles) that is perpendicular to the $x$-axis (whose center is on the x-axis) also passes through the point where the two circles are tangent to one another.
-
-Ford circles can also be thought of as curves in the complex plane. The modular group of transformations of the complex plane maps Ford circles to other Ford circles.
-
-Ford circles are a subset of the circles in the Apollonian gasket generated by the lines $y=0$ and $y=1$ and the circle $C[0/1].$
-
-By interpreting the upper half of the complex plane as a model of the hyperbolic plane (the Poincaré half-plane model), Ford circles can be interpreted as horocycles.
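The tangency criterion in (1) can be checked directly from the definition: the squared distance between the centers of $C[p/q]$ and $C[r/s]$ exceeds the squared sum of their radii by exactly $((ps-qr)^2-1)/(qs)^2$. A minimal Python sketch of this check (an editorial aside, not part of the original article; it uses exact arithmetic via the standard `fractions` module):

```python
from fractions import Fraction

def ford_circle(p, q):
    """Center and radius of the Ford circle C[p/q], with p/q in lowest terms."""
    return (Fraction(p, q), Fraction(1, 2 * q * q)), Fraction(1, 2 * q * q)

def tangency_gap(p, q, r, s):
    """Squared center distance minus squared sum of radii (0 -> tangent, > 0 -> disjoint)."""
    (cx1, cy1), r1 = ford_circle(p, q)
    (cx2, cy2), r2 = ford_circle(r, s)
    return (cx1 - cx2) ** 2 + (cy1 - cy2) ** 2 - (r1 + r2) ** 2

# The gap works out to ((p*s - q*r)^2 - 1) / (q*s)^2, so the circles are
# tangent exactly when |ps - qr| = 1.
for (p, q), (r, s) in [((1, 2), (1, 3)), ((1, 2), (2, 3)), ((1, 3), (2, 3))]:
    print(f"C[{p}/{q}] vs C[{r}/{s}]: |ps-qr| = {abs(p*s - q*r)}, gap = {tangency_gap(p, q, r, s)}")
```

For example, $C[1/2]$ and $C[1/3]$ are Farey neighbors ($|1\cdot 3 - 2\cdot 1| = 1$) and the gap is $0$, while $C[1/3]$ and $C[2/3]$ give $|ps-qr|=3$ and a positive gap of $8/81$.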
-
-In hyperbolic geometry any two horocycles are congruent. When these horocycles are circumscribed by apeirogons they tile the hyperbolic plane with an order-3 apeirogonal tiling.
-
-The last question of the 2015A AMC exam asks for the sum of the reciprocals of the circumferences of Ford circles.
-
-There is a link between the area of Ford circles, Euler's totient function $\varphi,$ the Riemann zeta function $\zeta,$ and Apéry's constant $\zeta(3).$ As no two Ford circles intersect, it follows immediately that the total area of the Ford circles
-$$
-\left\{ C[p,q]: 0 < \frac{p}{q} \le 1 \right\}
-$$
-
-is less than 1. In fact the total area of these Ford circles is given by a convergent sum, which can be evaluated. From the definition, the area is
-$$
- A = \sum_{q\ge 1} \sum_{ (p, q)=1 \atop 1 \le p < q }\pi \left( \frac{1}{2 q^2} \right)^2.
-$$
-
-Simplifying this expression gives
-$$
-A = \frac{\pi}{4} \sum_{q\ge 1} \frac{1}{q^4} \sum_{ (p, q)=1 \atop 1 \le p < q } 1 = \frac{\pi}{4} \sum_{q\ge 1} \frac{\varphi(q)}{q^4} = \frac{\pi}{4} \frac{\zeta(3)}{\zeta(4)},
-$$
-
-where the last equality reflects the Dirichlet generating function for Euler's totient function $\varphi(q).$ Since $\zeta(4)=\pi^4/90,$ this finally becomes
-$$
- A = \frac{45}{2} \frac{\zeta(3)}{\pi^3}\approx 0.872284041.
-$$
-
-Note that as a matter of convention, the previous calculations excluded the circle of radius $\frac{1}{2}$ corresponding to the fraction $\frac{0}{1}$. The sum includes the complete circle for $\frac{1}{1}$, half of which lies outside the unit interval, hence the sum is still the fraction of the unit square covered by Ford circles.
-
-The concept of Ford circles can be generalized from the rational numbers to the Gaussian rationals, giving Ford spheres. In this construction, the complex numbers are embedded as a plane in a three-dimensional Euclidean space, and for each Gaussian rational point in this plane one constructs a sphere tangent to the plane at that point. For a Gaussian rational represented in lowest terms as $p/q$, the diameter of this sphere should be $1/q\bar q$ where $\bar q$ represents the complex conjugate of $q$. The resulting spheres are tangent for pairs of Gaussian rationals $P/Q$ and $p/q$ with $|Pq-pQ|=1$, and otherwise they do not intersect each other.
diff --git a/wiki/wikipedia/366.txt b/wiki/wikipedia/366.txt
deleted file mode 100644
index ab2635e63c99d28b5e0b46b0c37ac97a9a5a6f16..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/366.txt
+++ /dev/null
@@ -1,11 +0,0 @@
-Datacopia is a freemium tool that automatically generates charts and infographics from structured and unstructured data.
-
-Representing data using charts is a difficult task from two perspectives. The first is that it is not always clear which chart type best represents a dataset. The second, and more difficult of the two, is understanding what useful information even exists in the data before it can be visualized.
-
-Datacopia attempts to resolve these difficulties by automating both the data analysis and chart selection processes.
-
-Datacopia is built in HTML5 and runs on any platform with a browser that supports the HTML5 canvas element. It makes use of D3.js and the library to provide its interactive graphics. It uses the Heroku PaaS stack.
-
-Datacopia allows generated charts to be posted to social media sites and blogs.
-
-Datacopia offers an API that allows developers to embed Datacopia functionality within their software and websites.
This API is already used by the Qiqqa research management software to automatically turn tables of results in PDFs into charts.
diff --git a/wiki/wikipedia/3660.txt b/wiki/wikipedia/3660.txt
deleted file mode 100644
index 163360725593d1535286e4f81fb0a642c47427c1..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3660.txt
+++ /dev/null
@@ -1,17 +0,0 @@
-In mathematics, Kuiper's theorem (after Nicolaas Kuiper) is a result on the topology of operators on an infinite-dimensional, complex Hilbert space H. It states that the space GL(H) of invertible bounded endomorphisms of H is such that all maps from any finite complex Y to GL(H) are homotopic to a constant, for the norm topology on operators.
-
-A significant corollary, also referred to as Kuiper's theorem, is that this group is weakly contractible, i.e. all its homotopy groups are trivial. This result has important uses in topological K-theory.
-
-For finite dimensional H, this group would be a complex general linear group and not at all contractible. In fact it is homotopy equivalent to its maximal compact subgroup, the unitary group U of H. The proof that the complex general linear group and unitary group have the same homotopy type is by the Gram-Schmidt process, or through the matrix polar decomposition, and carries over to the infinite-dimensional case of separable Hilbert space, basically because the space of upper triangular matrices is contractible as can be seen quite explicitly. The underlying phenomenon is that passing to infinitely many dimensions causes much of the topological complexity of the unitary groups to vanish; but see the section on Bott's unitary group, where the passage to infinity is more constrained, and the resulting group has non-trivial homotopy groups.
-
-It is a surprising fact that the unit sphere, sometimes denoted S, in infinite-dimensional Hilbert space H is a contractible space, while no finite-dimensional spheres are contractible. This result, certainly known decades before Kuiper's, may have the status of mathematical folklore, but it is quite often cited. In fact more is true: S is diffeomorphic to H, which is certainly contractible by its convexity. One consequence is that there are smooth counterexamples to an extension of the Brouwer fixed-point theorem to the unit ball in H. The existence of such counterexamples that are homeomorphisms was shown in 1943 by Shizuo Kakutani, who may have first written down a proof of the contractibility of the unit sphere. But the result was anyway essentially known (in 1935 Andrey Nikolayevich Tychonoff showed that the unit sphere was a retract of the unit ball).
-
-The result on the group of bounded operators was proved by the Dutch mathematician Nicolaas Kuiper, for the case of a separable Hilbert space; the restriction of separability was later lifted. The same result, but for the strong operator topology rather than the norm topology, was published in 1963 by Jacques Dixmier and Adrien Douady. The geometric relationship of the sphere and group of operators is that the unit sphere is a homogeneous space for the unitary group U. The stabiliser of a single vector v of the unit sphere is the unitary group of the orthogonal complement of v; therefore the homotopy long exact sequence predicts that all the homotopy groups of the unit sphere will be trivial.
This shows the close topological relationship, but is not in itself quite enough, since the inclusion of a point will be a weak homotopy equivalence only, and that implies contractibility directly only for a CW complex. In a paper published two years after Kuiper's, Richard Palais provided technical results on infinite-dimensional manifolds sufficient to resolve this issue.
-
-There is another infinite-dimensional unitary group, of major significance in homotopy theory, that to which the Bott periodicity theorem applies. It is certainly not contractible. The difference from Kuiper's group can be explained: Bott's group is the subgroup in which a given operator acts non-trivially only on a subspace spanned by the first N of a fixed orthonormal basis $\{e_i\}$, for some N, being the identity on the remaining basis vectors.
-
-An immediate consequence, given the general theory of fibre bundles, is that every Hilbert bundle is a trivial bundle.
-
-The result on the contractibility of S gives a geometric construction of classifying spaces for certain groups that act freely on it, such as the cyclic group with two elements and the circle group. The unitary group U in Bott's sense has a classifying space BU for complex vector bundles (see Classifying space for U(n)). A deeper application coming from Kuiper's theorem is the proof of the Atiyah–Jänich theorem (after Klaus Jänich and Michael Atiyah), stating that the space of Fredholm operators on H, with the norm topology, represents the functor K(.) of topological (complex) K-theory, in the sense of homotopy theory. This is given by Atiyah.
-
-The same question may be posed about invertible operators on any Banach space of infinite dimension. Here there are only partial results. Some classical sequence spaces have the same property, namely that the group of invertible operators is contractible. On the other hand, there are examples known where it fails to be a connected space. Where all homotopy groups are known to be trivial, the contractibility in some cases may remain unknown.
diff --git a/wiki/wikipedia/3661.txt b/wiki/wikipedia/3661.txt
deleted file mode 100644
index 5e3d5497daef2f5fef781642bb962ef820200451..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3661.txt
+++ /dev/null
@@ -1,58 +0,0 @@
-Uniform convergence in probability is a form of convergence in probability in statistical asymptotic theory and probability theory. It means that, under certain conditions, the empirical frequencies of all events in a certain event-family converge to their theoretical probabilities. Uniform convergence in probability has applications to statistics as well as machine learning as part of statistical learning theory.
-
-The law of large numbers says that, for each single event $A$, its empirical frequency in a sequence of independent trials converges (with high probability) to its theoretical probability. In many applications, however, the need arises to judge simultaneously the probabilities of events of an entire class $S$ from one and the same sample. Moreover, it is required that the relative frequency of the events converge to the probability uniformly over the entire class of events $S$, using the concept of the growth function.
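For reference (an editorial aside, not part of the original article), the growth function invoked here is the standard one from VC theory: for a class $H$ of $\{0,1\}$-valued functions on $X$,
$$
\Pi_H(m) = \max_{x_1,\ldots,x_m \in X} \bigl|\{(h(x_1),\ldots,h(x_m)) : h \in H\}\bigr|,
$$
the largest number of distinct labelings that $H$ can induce on any $m$ points of $X$.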
-
-The statement of the uniform convergence theorem is as follows:
-
-If $H$ is a set of $\{0,1\}$-valued functions defined on a set $X$ and $P$ is a probability distribution on $X$ then for $\varepsilon>0$ and $m$ a positive integer, we have:
-$$
-P^m\{|Q_P(h)-\widehat{Q}_x(h)|\geq\varepsilon \text{ for some } h\in H\}\leq 4\Pi_H(2m)e^{-\varepsilon^2 m/8},
-$$
-where $Q_P(h)=P\{y\in X:h(y)=1\}$ is the true probability, $\widehat{Q}_x(h)=\frac{1}{m}|\{i:1\leq i\leq m, h(x_i)=1\}|$ is the empirical frequency on the sample $x\in X^m$, and $\Pi_H$ is the growth function.
-
-The proof proceeds in three parts: symmetrization, a permutation argument, and a reduction to a finite class. For the permutation step, let $R\subseteq X^{2m}$ and let $\Gamma_m$ be the set of permutations of $\{1,2,\ldots,2m\}$ that swap $i$ and $m+i$ for $i$ in some subset of $\{1,\ldots,m\}$. By symmetry of the product measure,
-\begin{align}
-P^{2m}(R) = {} & \frac{1}{|\Gamma_m|}\sum_{\sigma\in\Gamma_m} \int_{X^{2m}} 1_R(\sigma(x)) dP^{2m}(x) \\[5pt]
-= {} & \int_{X^{2m}} \frac{1}{|\Gamma_m|} \sum_{\sigma\in\Gamma_m} 1_R (\sigma(x)) dP^{2m}(x) \\[5pt]
-& \text{(because } |\Gamma_{m}| \text{ is finite)} \\[5pt]
-= {} & \int_{X^{2m}} \Pr[\sigma(x)\in R] dP^{2m}(x) \quad \text{(the expectation)} \\[5pt]
-\leq {} & \max_{x\in X^{2m}}(\Pr[\sigma(x)\in R]).
-\end{align}
-
-The maximum is guaranteed to exist since there is only a finite set of values that the probability under a random permutation can take.
-
-Lemma: Based on the previous lemma,
-$$
-\max_{x\in X^{2m}}(\Pr[\sigma(x)\in R])\leq 4\Pi_H(2m)e^{-\varepsilon^2 m/8}.
-$$
-
-Proof:
-
-Let us define $x=(x_1,x_2,\ldots,x_{2m})$ and $t=|H_{|x}|$, which is at most $\Pi_H(2m)$. This means there are functions $h_1,h_2,\ldots,h_t\in H$ such that for any $h\in H$ there exists $i$ between $1$ and $t$ with $h_i(x_k)=h(x_k)$ for $1\leq k\leq 2m.$
-
-We see that $\sigma(x)\in R$ iff some $h$ in $H$ satisfies
-$$
-\left|\frac{1}{m}|\{1\leq i\leq m:h(x_{\sigma_{i}})=1\}|-\frac{1}{m}|\{m+1\leq i\leq 2m:h(x_{\sigma_{i}})=1\}|\right|\geq\frac{\varepsilon}{2}.
-$$
-
-Hence we define $w^{j}_{i}=1$ if $h_{j}(x_{i})=1$ and $w^{j}_{i}=0$ otherwise, for $1\leq i\leq 2m$ and $1\leq j\leq t$. Then $\sigma(x)\in R$ iff some $j$ in $\{1,\ldots,t\}$ satisfies $\left|\frac{1}{m} \left(\sum_{i=1}^m w^j_{\sigma(i)}-\sum_{i=1}^m w^j_{\sigma(m+i)}\right)\right|\geq\frac{\varepsilon}{2}$. By the union bound we get
-$$
-\Pr[\sigma(x)\in R]\leq t\cdot \max_j\left(\Pr\left[\left|\frac{1}{m} \left(\sum_i w^j_{\sigma_i} - \sum_i w^j_{\sigma_{m+i}}\right)\right| \geq \frac{\varepsilon}{2}\right]\right)
-$$
-$$
-\leq \Pi_{H}(2m)\cdot \max_j\left(\Pr\left[ \left| \frac{1}{m} \left(\sum_i w^j_{\sigma_i}-\sum_i w^j_{\sigma_{m+i}}\right)\right| \geq \frac{\varepsilon}{2} \right] \right).
-$$
-
-Since the distribution over the permutations $\sigma$ is uniform, for each $i$ the difference $w^j_{\sigma_i}-w^j_{\sigma_{m+i}}$ equals $\pm |w^j_i-w^j_{m+i}|$ with equal probability.
-
-Thus,
-$$
-\Pr\left[\left|\frac{1}{m} \left(\sum_i \left(w^j_{\sigma_i}-w^j_{\sigma_{m+i}}\right)\right)\right|\geq\frac{\varepsilon}{2}\right] = \Pr\left[ \left| \frac{1}{m} \left( \sum_i|w^j_i-w^j_{m+i}|\beta_i\right)\right|\geq\frac{\varepsilon}{2}\right],
-$$
-
-where the probability on the right is over the random signs $\beta_{i}\in\{\pm 1\}$, both possibilities being equally likely. By Hoeffding's inequality, this is at most $2e^{-m\varepsilon^2/8}$.
-
-Finally, combining all three parts of the proof we get the Uniform Convergence Theorem.
diff --git a/wiki/wikipedia/3662.txt b/wiki/wikipedia/3662.txt
deleted file mode 100644
index 4780ba7935f31b6615b8c97eb1d2505fada13dd2..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3662.txt
+++ /dev/null
@@ -1,27 +0,0 @@
-In mathematics, the closed graph theorem may refer to one of several basic results characterizing continuous functions in terms of their graphs.
-
-Each gives conditions when functions with closed graphs are necessarily continuous.
-
-If $f : X \to Y$ is a map between topological spaces then the graph of $f$ is the set $\operatorname{Gr} f := \{ (x, f(x)) : x \in X \}$ or equivalently,
-$$
-\operatorname{Gr} f := \{ (x, y) \in X \times Y : y = f(x) \}.
-$$
-
-It is said that the graph of $f$ is closed if $\operatorname{Gr} f$ is a closed subset of $X \times Y$ (with the product topology).
-
-Any continuous function into a Hausdorff space has a closed graph.
-
-Suppose $L : X \to Y$ is a linear map between two topological vector spaces whose topologies are (Cauchy) complete with respect to translation-invariant metrics. If, in addition, (1a) $L$ is sequentially continuous in the sense of the product topology, then the map $L$ is continuous and its graph, Gr L, is necessarily closed. Conversely, if $L$ is such a linear map and, in place of (1a), (1b) the graph of $L$ is known to be closed in the Cartesian product space $X \times Y$, then $L$ is continuous and therefore necessarily sequentially continuous.
-
-If $X$ is any space then the identity map $\operatorname{Id} : X \to X$ is continuous but its graph, which is the diagonal $\operatorname{Gr} \operatorname{Id} := \{ (x, x) : x \in X \},$ is closed in $X \times X$ if and only if $X$ is Hausdorff. In particular, if $X$ is not Hausdorff then $\operatorname{Id} : X \to X$ is continuous but does not have a closed graph.
-
-Let $X$ denote the real numbers $\R$ with the usual Euclidean topology and let $Y$ denote $\R$ with the indiscrete topology (where note that $Y$ is not Hausdorff and that every function valued in $Y$ is continuous). Let $f : X \to Y$ be defined by $f(0) = 1$ and $f(x) = 0$ for all $x \neq 0$. Then $f : X \to Y$ is continuous but its graph is not closed in $X \times Y$.
-
-In point-set topology, the closed graph theorem states the following: if $f : X \to Y$ is a map from a topological space $X$ into a compact Hausdorff space $Y$, then the graph of $f$ is closed if and only if $f$ is continuous. If $T : X \to Y$ is a linear operator between topological vector spaces (TVSs), then we say that $T$ is a closed operator if the graph of $T$ is closed in $X \times Y$ when $X \times Y$ is endowed with the product topology.
-
-The closed graph theorem is an important result in functional analysis that guarantees that a closed linear operator is continuous under certain conditions.
-
-The original result has been generalized many times.
-
-A well-known version of the closed graph theorem is the following.
diff --git a/wiki/wikipedia/3663.txt b/wiki/wikipedia/3663.txt
deleted file mode 100644
index 8604a4190f6234164d6de971c92b6993a40c2df7..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3663.txt
+++ /dev/null
@@ -1,64 +0,0 @@
-In functional analysis, a branch of mathematics, a selection theorem is a theorem that guarantees the existence of a single-valued selection function from a given multi-valued map. There are various selection theorems, and they are important in the theories of differential inclusions, optimal control, and mathematical economics.
-
-Given two sets X and Y, let F be a multivalued map from X to Y. Equivalently, $F:X\rightarrow\mathcal{P}(Y)$ is a function from X to the power set of Y.
-
-A function $f: X \rightarrow Y$ is said to be a selection of F if
-$$
-\forall x \in X: f(x) \in F(x) .
-$$
-
-In other words, given an input x for which the original function F returns multiple values, the new function f returns a single value. This is a special case of a choice function.
-
-The axiom of choice implies that a selection function always exists; however, it is often important that the selection have some "nice" properties, such as continuity or measurability.
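To make the definition concrete (an editorial aside, not part of the original article), here is a minimal finite sketch in Python: a multivalued map is modeled as a dictionary of nonempty sets, and one particular selection simply picks the least element of each value set. Nothing in this construction guarantees analytic properties such as continuity, which is exactly the concern of the theorems that follow.

```python
# A multivalued map F : X -> P(Y) \ {empty set} on a finite domain, as a dict.
F = {
    1: {4, 7},
    2: {5},
    3: {4, 6, 9},
}

# One concrete selection: pick the least element of F(x) for every x.
def f(x):
    return min(F[x])

# Check the defining property of a selection: f(x) is in F(x) for all x.
assert all(f(x) in F[x] for x in F)
print({x: f(x) for x in F})  # {1: 4, 2: 5, 3: 4}
```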
This is where the selection theorems come into action: they guarantee that, if F satisfies certain properties, then it has a selection f that is continuous or has other desirable properties.
-
-The Michael selection theorem says that the following conditions are sufficient for the existence of a continuous selection:
-
-* X is a paracompact space;
-
-* Y is a Banach space;
-
-* F is lower hemicontinuous;
-
-* for all x in X, the set F(x) is nonempty, convex and closed.
-
-The Deutsch–Kenderov theorem generalizes Michael's theorem as follows:
-
-* X is a paracompact space;
-
-* Y is a normed vector space;
-
-* F is almost lower hemicontinuous, that is, at each $x \in X$, for each neighborhood $V$ of $0$ there exists a neighborhood $U$ of $x$ such that $\bigcap_{u \in U} \{F(u)+V\} \ne \emptyset$;
-
-* for all x in X, the set F(x) is nonempty and convex.
-
-These conditions guarantee that $F$ has a continuous approximate selection, that is, for each neighborhood $V$ of $0$ in $Y$ there is a continuous function $f \colon X \to Y$ such that for each $x \in X$, $f(x) \in F(x) + V$.
-
-In a later note, Xu proved that the Deutsch–Kenderov theorem is also valid if $Y$ is a locally convex topological vector space.
-
-The Yannelis-Prabhakar selection theorem says that the following conditions are sufficient for the existence of a continuous selection:
-
-* X is a paracompact Hausdorff space;
-
-* Y is a linear topological space;
-
-* for all x in X, the set F(x) is nonempty and convex;
-
-* for all y in Y, the inverse set $F^{-1}(y)$ is an open set in X.
-
-The Kuratowski and Ryll-Nardzewski measurable selection theorem says that if X is a Polish space and $\mathcal B$ its Borel σ-algebra, $\mathrm{Cl}(X)$ is the set of nonempty closed subsets of X, $(\Omega, \mathcal F)$ is a measurable space, and $F : \Omega \to \mathrm{Cl}(X)$ is an $\mathcal F$-weakly measurable map (that is, for every open subset $U \subseteq X$ we have $\{\omega \in \Omega : F(\omega) \cap U \neq \emptyset \} \in \mathcal F$), then $F$ has a selection that is $(\mathcal F, \mathcal B)$-measurable.
-
-Other selection theorems for set-valued functions include:
-
-* Bressan–Colombo directionally continuous selection theorem
-
-* Castaing representation theorem
-
-* Fryszkowski decomposable map selection
-
-* Helly's selection theorem
-
-* Zero-dimensional Michael selection theorem
-
-* Robert Aumann measurable selection theorem
-
-* Blaschke selection theorem
diff --git a/wiki/wikipedia/3664.txt b/wiki/wikipedia/3664.txt
deleted file mode 100644
index 592cdf43e76a103ea4d4956f2ae6df1a6646ac55..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3664.txt
+++ /dev/null
@@ -1,23 +0,0 @@
-In computer science, the happened-before relation (denoted: $\to $) is a relation between the result of two events, such that if one event should happen before another event, the result must reflect that, even if those events are in reality executed out of order (usually to optimize program flow). This involves ordering events based on the potential causal relationship of pairs of events in a concurrent system, especially asynchronous distributed systems. It was formulated by Leslie Lamport.
-
-The happened-before relation is formally defined as the least strict partial order on events such that:
-
-* If events $a $ and $b $ occur on the same process, $a \to b$ if the occurrence of event $a $ preceded the occurrence of event $b $.
-
-* If event $a $ is the sending of a message and event $b $ is the reception of the message sent in event $a $, $a \to b$.
-
-If two events happen in different isolated processes (that do not exchange messages directly or indirectly via third-party processes), then the two processes are said to be concurrent, that is, neither $a \to b$ nor $b \to a$ is true.
-
-If there are other causal relationships between events in a given system, such as between the creation of a process and its first event, these relationships are also added to the definition.
-
-For example, in some programming languages such as Java, C, C++ or Rust, a happens-before edge exists if memory written to by statement A is visible to statement B, that is, if statement A completes its write before statement B starts its read.
-
-Like all strict partial orders, the happened-before relation is transitive, irreflexive and antisymmetric, i.e.:
-
-* $\forall a, b, c$, if $a \to b$ and $b \to c$, then $a \to c$ (transitivity). This means that for any three events $a, b, c$, if $a$ happened before $b$, and $b$ happened before $c$, then $a$ must have happened before $c$.
-
-* $\forall a, a \nrightarrow a$ (irreflexivity). This means that no event can happen before itself.
-
-* $\forall a, b,$ where $a \neq b$, if $a \to b$ then $b \nrightarrow a$ (antisymmetry). This means that for any two distinct events $a, b$, if $a$ happened before $b$ then $b$ cannot have happened before $a$.
-
-The processes that make up a distributed system have no knowledge of the happened-before relation unless they use a logical clock, like a Lamport clock or a vector clock. This allows one to design algorithms for mutual exclusion, and for tasks like debugging or optimising distributed systems.
diff --git a/wiki/wikipedia/3665.txt b/wiki/wikipedia/3665.txt
deleted file mode 100644
index dd93502857ad4c45c664a9cab025c821e2918f6b..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3665.txt
+++ /dev/null
@@ -1,13 +0,0 @@
-In computational complexity theory, Blum's speedup theorem, first stated by Manuel Blum in 1967, is a fundamental theorem about the complexity of computable functions.
-
-Each computable function has an infinite number of different program representations in a given programming language. In the theory of algorithms one often strives to find a program with the smallest complexity for a given computable function and a given complexity measure (such a program could be called optimal). Blum's speedup theorem shows that for any complexity measure there are computable functions that are not optimal with respect to that measure. This also rules out the idea that there is a way to assign to arbitrary functions their computational complexity, meaning the assignment to any f of the complexity of an optimal program for f. This does of course not exclude the possibility of finding the complexity of an optimal program for certain specific functions.
-
-Given a Blum complexity measure $(\varphi, \Phi)$ and a total computable function $f$ with two parameters, then there exists a total computable predicate $g$ (a boolean-valued computable function) so that for every program $i$ for $g$, there exists a program $j$ for $g$ so that for almost all $x$
-$$
-f(x, \Phi_j(x)) \leq \Phi_i(x).
-$$
-
-$f$ is called the speedup function.
The fact that it may be as fast-growing as desired (as long as it is computable) means that the phenomenon of always having a program of smaller complexity remains even if by "smaller" we mean "significantly smaller" (for instance, quadratically smaller, exponentially smaller).
diff --git a/wiki/wikipedia/3666.txt b/wiki/wikipedia/3666.txt
deleted file mode 100644
index e84cf78449791475bb9a5466e843b0e8c3d2ebee..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3666.txt
+++ /dev/null
@@ -1 +0,0 @@
-The Journal of Graph Algorithms and Applications is an open access peer-reviewed scientific journal covering the subject of graph algorithms and graph drawing. The journal was established in 1997 and the editor-in-chief is Giuseppe Liotta (University of Perugia). It is abstracted and indexed by Scopus and MathSciNet.
diff --git a/wiki/wikipedia/3667.txt b/wiki/wikipedia/3667.txt
deleted file mode 100644
index 45db0c05ea9eb92835d41a96294faeb722466e8e..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3667.txt
+++ /dev/null
@@ -1,21 +0,0 @@
-Barnette's conjecture is an unsolved problem in graph theory, a branch of mathematics, concerning Hamiltonian cycles in graphs. It is named after David W. Barnette, a professor emeritus at the University of California, Davis; it states that every bipartite polyhedral graph with three edges per vertex has a Hamiltonian cycle.
-
-A planar graph is an undirected graph that can be embedded into the Euclidean plane without any crossings. A planar graph is called polyhedral if and only if it is 3-vertex-connected, that is, if there do not exist two vertices the removal of which would disconnect the rest of the graph. A graph is bipartite if its vertices can be colored with two different colors such that each edge has one endpoint of each color. A graph is cubic (or 3-regular) if each vertex is the endpoint of exactly three edges. Finally, a graph is Hamiltonian if there exists a cycle that passes through each of its vertices exactly once. Barnette's conjecture states that every cubic bipartite polyhedral graph is Hamiltonian.
-
-By Steinitz's theorem, a planar graph represents the edges and vertices of a convex polyhedron if and only if it is polyhedral. A three-dimensional polyhedron has a cubic graph if and only if it is a simple polyhedron.
-
-Also, a planar graph is bipartite if and only if, in a planar embedding of the graph, all face cycles have even length. Therefore, Barnette's conjecture may be stated in an equivalent form: suppose that a three-dimensional simple convex polyhedron has an even number of edges on each of its faces. Then, according to the conjecture, the graph of the polyhedron has a Hamiltonian cycle.
-
-P. G. Tait conjectured that every cubic polyhedral graph is Hamiltonian; this came to be known as Tait's conjecture. It was disproven by W. T. Tutte, who constructed a counterexample with 46 vertices; other researchers later found even smaller counterexamples. However, none of these known counterexamples is bipartite. Tutte himself conjectured that every cubic 3-connected bipartite graph is Hamiltonian, but this was shown to be false by the discovery of a counterexample, the Horton graph. Barnette proposed a weakened combination of Tait's and Tutte's conjectures, stating that every bipartite cubic polyhedron is Hamiltonian, or, equivalently, that every counterexample to Tait's conjecture is non-bipartite.
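For intuition (an editorial aside, not part of the original article), the smallest cubic bipartite polyhedral graph is the graph of the cube, and conjectures of this kind can be explored by brute force on small instances. A minimal Python sketch using a backtracking search for a Hamiltonian cycle:

```python
# Cube graph Q3: vertices are 3-bit integers; edges join integers differing in one bit.
# It is cubic, bipartite (by parity of the bit sum), planar, and 3-connected.
adj = {v: [v ^ (1 << b) for b in range(3)] for v in range(8)}

def hamiltonian_cycle(adj, start=0):
    """Backtracking search; returns a Hamiltonian cycle as a vertex list, or None."""
    n = len(adj)
    path = [start]
    visited = {start}

    def extend():
        if len(path) == n:
            return start in adj[path[-1]]  # can the cycle be closed?
        for w in adj[path[-1]]:
            if w not in visited:
                path.append(w)
                visited.add(w)
                if extend():
                    return True
                visited.remove(path.pop())
        return False

    return path + [start] if extend() else None

print(hamiltonian_cycle(adj))  # e.g. [0, 1, 3, 2, 6, 7, 5, 4, 0]
```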
-
-Kelmans showed that Barnette's conjecture is equivalent to a superficially stronger statement, that for every two edges e and f on the same face of a bipartite cubic polyhedron, there exists a Hamiltonian cycle that contains e but does not contain f. Clearly, if this statement is true, then every bipartite cubic polyhedron contains a Hamiltonian cycle: just choose e and f arbitrarily. In the other direction, Kelmans showed that a counterexample could be transformed into a counterexample to the original Barnette conjecture.
-
-Barnette's conjecture is also equivalent to the statement that the vertices of the dual of every cubic bipartite polyhedral graph can be partitioned into two subsets whose induced subgraphs are trees. The cut induced by such a partition in the dual graph corresponds to a Hamiltonian cycle in the primal graph.
-
-Although the truth of Barnette's conjecture remains unknown, computational experiments have shown that there is no counterexample with fewer than 86 vertices.
-
-If Barnette's conjecture turns out to be false, then it can be shown to be NP-complete to test whether a bipartite cubic polyhedron is Hamiltonian. If a planar graph is bipartite and cubic but only of connectivity 2, then it may be non-Hamiltonian, and it is NP-complete to test Hamiltonicity for these graphs. Another result was obtained by Alt: if the dual graph can be vertex-colored with colors blue, red and green such that every red-green cycle contains a vertex of degree 4, then the primal graph is Hamiltonian.
-
-A related conjecture of Barnette states that every cubic polyhedral graph in which all faces have six or fewer edges is Hamiltonian. Computational experiments have shown that, if a counterexample exists, it would have to have more than 177 vertices. The conjecture was proven by Kardoš.
-
-The intersection of these two conjectures would be that every bipartite cubic polyhedral graph in which all faces have four or six edges is Hamiltonian. This was proved to be true by Goodey.
diff --git a/wiki/wikipedia/3668.txt b/wiki/wikipedia/3668.txt
deleted file mode 100644
index d17457697a311ca6e78ef553228c3138c030008f..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3668.txt
+++ /dev/null
@@ -1,40 +0,0 @@
-In combinatorial mathematics, probability, and computer science, in the longest alternating subsequence problem, one wants to find a subsequence of a given sequence in which the elements are in alternating order, and in which the sequence is as long as possible.
-
-Formally, if $\mathbf{x} = \{x_1, x_2, \ldots, x_n\}$ is a sequence of distinct real numbers, then the subsequence $\{x_{i_1}, x_{i_2}, \ldots, x_{i_k}\}$ is alternating (or zigzag or down-up) if
-$$
-x_{i_1} > x_{i_2} < x_{i_3} > \cdots x_{i_k}\qquad \text{and} \qquad 1\leq i_1 < i_2 < \cdots < i_k \leq n.
-$$
-
-Similarly, $\mathbf{x}$ is reverse alternating (or up-down) if
-$$
-x_{i_1} < x_{i_2} > x_{i_3} < \cdots x_{i_k}\qquad \text{and} \qquad 1\leq i_1 < i_2 < \cdots < i_k \leq n.
-$$
-
-Let ${\rm as}_n(\mathbf{x})$ denote the length (number of terms) of the longest alternating subsequence of $\mathbf{x}$. For example, if we consider some of the permutations of the integers 1,2,3,4,5, we have that
-
-* ${\rm as}_5(1,2,3,4,5) = 2 $, because any sequence of 2 distinct numbers is (by definition) alternating
(for example 1,2 or 1,4 or 3,5);
-
-* ${\rm as}_5(1,5,3,2,4) = 4, $ because 1,5,3,4 and 1,5,2,4 and 1,3,2,4 are all alternating, and there is no alternating subsequence with more elements;
-
-* ${\rm as}_5(5,3,4,1,2) = 5, $ because 5,3,4,1,2 is itself alternating.
-
-The longest alternating subsequence problem is solvable in time $O(n)$, where $n$ is the length of the original sequence.
-
-If $\mathbf{x}$ is a random permutation of the integers $1,2,\ldots,n$ and $A_n \equiv {\rm as}_n(\mathbf{x})$, then it is possible to show that
-$$
- E[A_n] = \frac{2 n}{3} + \frac{1}{6} \qquad \text{and} \qquad \operatorname{Var}[A_n] = \frac{8 n}{45} - \frac{13}{180}.
-$$
-
-Moreover, as $n \rightarrow \infty$, the random variable $A_n$, appropriately centered and scaled, converges to a standard normal distribution.
-
-The longest alternating subsequence problem has also been studied in the setting of online algorithms, in which the elements of $\mathbf{x}$ are presented in an online fashion, and a decision maker needs to decide whether to include or exclude each element at the time it is first presented, without any knowledge of the elements that will be presented in the future, and without the possibility of recalling preceding observations.
-
-Given a sequence $X_1, X_2, \ldots, X_n$ of independent random variables with common continuous distribution $F$, it is possible to construct a selection procedure that maximizes the expected number of alternating selections. This expected value can be tightly estimated; it equals $(2-\sqrt{2})n + O(1)$.
-
-As $n \rightarrow \infty$, the optimal number of online alternating selections, appropriately centered and scaled, converges to a normal distribution.
diff --git a/wiki/wikipedia/3669.txt b/wiki/wikipedia/3669.txt
deleted file mode 100644
index bde63ac1f5d86e2a7a2ac68955df8f6992a5ae34..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3669.txt
+++ /dev/null
@@ -1,152 +0,0 @@
-In geometry, the cissoid of Diocles is a cubic plane curve notable for the property that it can be used to construct two mean proportionals to a given ratio. In particular, it can be used to double a cube. It can be defined as the cissoid of a circle and a line tangent to it with respect to the point on the circle opposite to the point of tangency. In fact, the curve family of cissoids is named for this example and some authors refer to it simply as the cissoid. It has a single cusp at the pole, and is symmetric about the diameter of the circle which is the line of tangency of the cusp. The line is an asymptote. It is a member of the conchoid of de Sluze family of curves and in form it resembles a tractrix.
-
-The word "cissoid" comes from the Greek κισσοειδής kissoeidēs "ivy shaped" from κισσός kissos "ivy" and -οειδής -oeidēs "having the likeness of". The curve is named for Diocles who studied it in the 2nd century BCE.
-
-Let the radius of C be a. By translation and rotation, we may take O to be the origin and the center of the circle to be (a, 0), so A is (2a, 0). Then the polar equations of L and C are:
-$$
-r=2a\sec\theta
-$$
-$$
-r=2a\cos\theta
-$$.
-
-By construction, the distance from the origin to a point on the cissoid is equal to the difference between the distances between the origin and the corresponding points on L and C. In other words, the polar equation of the cissoid is
-$$
-r=2a\sec\theta-2a\cos\theta=2a(\sec\theta-\cos\theta)
-$$.
-
-Applying some trigonometric identities, this is equivalent to
-$$
-r=2a\sin^2\!\theta/\cos\theta=2a\sin\theta\tan\theta
-$$.
-
-Let $t=\tan\theta$ in the above equation. Then
-$$
-x = r\cos\theta = 2a\sin^2\!\theta = \frac{2a\tan^2\!\theta}{\sec^2\!\theta} = \frac{2at^2}{1+t^2}
-$$
-$$
-y = tx = \frac{2at^3}{1+t^2}
-$$
-
-are parametric equations for the cissoid.
-
-Converting the polar form to Cartesian coordinates produces
-$$
-(x^2+y^2)x=2ay^2
-$$
-
-A compass-and-straightedge construction of various points on the cissoid proceeds as follows. Given a line L and a point O not on L, construct the line L' through O parallel to L. Choose a variable point P on L, and construct Q, the orthogonal projection of P on L', then R, the orthogonal projection of Q on OP. Then the cissoid is the locus of points R.
-
-To see this, let O be the origin and L the line x = 2a as above. Let P be the point (2a, 2at); then Q is (0, 2at) and the equation of the line OP is y=tx. The line through Q perpendicular to OP is
-$$
-t(y-2at)+x=0
-$$.
-
-To find the point of intersection R, set y = tx in this equation to get
-$$
-t(tx-2at)+x=0,\ x(t^2+1)=2at^2,\ x=\frac{2at^2}{t^2+1}
-$$
-$$
-y=tx=\frac{2at^3}{t^2+1}
-$$
-
-which are the parametric equations given above.
-
-While this construction produces arbitrarily many points on the cissoid, it cannot trace any continuous segment of the curve.
-
-The following construction was given by Isaac Newton. Let J be a line and B a point not on J. Let BST be a right angle which moves so that ST equals the distance from B to J and T remains on J, while the other leg BS slides along B. Then the midpoint P of ST describes the curve.
-
-To see this, let the distance between B and J be 2a. By translation and rotation, take B = (-a, 0) and J the line x=a. Let P = (x, y) and let ψ be the angle between SB and the x-axis; this is equal to the angle between ST and J. By construction, PT = a, so the distance from P to J is a sin ψ. In other words a-x = a sin ψ. Also, SP = a is the y coordinate of (x, y) if it is rotated by angle ψ, so a = (x+a) sin ψ + y cos ψ. After simplification, this produces parametric equations
-$$
-x=a(1-\sin\psi),y=a\frac{(1-\sin\psi)^2}{\cos\psi}.
-$$
-
-Change parameters by replacing ψ with its complement to get
-$$
-x=a(1-\cos\psi),y=a\frac{(1-\cos\psi)^2}{\sin\psi}
-$$
-
-or, applying double angle formulas,
-$$
-x=2a\sin^2{\psi \over 2},y=a\frac{4\sin^4{\psi \over 2}}{2\sin{\psi \over 2}\cos{\psi \over 2}} = 2a\frac{\sin^3{\psi \over 2}}{\cos{\psi \over 2}}.
-$$
-
-But this is the polar equation
-$$
-r=2a\sin^2\theta/\cos\theta
-$$
-
-given above with $\theta=\psi/2$.
-
-Note that, as with the double projection construction, this can be adapted to produce a mechanical device that generates the curve.
-
-The Greek geometer Diocles used the cissoid to obtain two mean proportionals to a given ratio. This means that given lengths a and b, the curve can be used to find u and v so that a is to u as u is to v as v is to b, i.e. a/u=u/v=v/b, as discovered by Hippocrates of Chios. As a special case, this can be used to solve the Delian problem: how much must the length of a cube be increased in order to double its volume? Specifically, if a is the side of a cube, and b=2a, then the volume of a cube of side u is
-$$
-u^3=a^3(\tfrac{u}{a})^3=a^3(\tfrac{u}{a})(\tfrac{v}{u})(\tfrac{b}{v})=a^3(\tfrac{b}{a})=2a^3
-$$
-
-so u is the side of a cube with double the volume of the original cube.
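As a quick numerical illustration (an editorial aside, not in the original article): taking $a = 1$ and $b = 2a = 2$, the two mean proportionals satisfy $u^3 = a^2 b = 2$ and $v = u^2/a$, so
$$
u = 2^{1/3} \approx 1.2599, \qquad v = 2^{2/3} \approx 1.5874, \qquad \frac{1}{u} = \frac{u}{v} = \frac{v}{2} \approx 0.7937,
$$
and a cube of side $u$ indeed has volume $u^3 = 2$.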
Note however that this solution does not fall within the rules of compass and straightedge construction since it relies on the existence of the cissoid.
-
-Let a and b be given. It is required to find u so that $u^3=a^2b$, giving $u$ and $v=u^2/a$ as the mean proportionals. Let the cissoid
-$$
-(x^2+y^2)x=2ay^2
-$$
-
-be constructed as above, with O the origin, A the point (2a, 0), and J the line x=a, also as given above. Let C be the point of intersection of J with OA. From the given length b, mark B on J so that CB=b. Draw BA and let P = (x, y) be the point where it intersects the cissoid. Draw OP and let it intersect J at U. Then u=CU is the required length.
-
-To see this, rewrite the equation of the curve as
-$$
-y^2=\frac{x^3}{2a-x}
-$$
-
-and let N = (x, 0), so PN is the perpendicular to OA through P.
-
-From the equation of the curve,
-$$
-PN^2=\frac{ON^3}{NA}.
-$$
-
-From this,
-$$
-\frac{PN^3}{ON^3}=\frac{PN}{NA}.
-$$
-
-By similar triangles PN/ON=UC/OC and PN/NA=BC/CA. So the equation becomes
-$$
-\frac{UC^3}{OC^3}=\frac{BC}{CA},
-$$
-
-so
-$$
-\frac{u^3}{a^3}=\frac{b}{a}, u^3=a^2b
-$$
-
-as required.
-
-Diocles did not really solve the Delian problem. The reason is that the cissoid of Diocles cannot be constructed perfectly, at least not with compass and straightedge. To construct the cissoid of Diocles, one would construct a finite number of its individual points, then connect all these points to form a curve. The problem is that there is no well-defined way to connect the points. If they are connected by line segments, then the construction will be well-defined, but it will not be an exact cissoid of Diocles, but only an approximation. Likewise, if the dots are connected with circular arcs, the construction will be well-defined, but incorrect. Or one could simply draw a curve directly, trying to eyeball the shape of the curve, but the result would only be imprecise guesswork.
-
-Once the finite set of points on the cissoid have been drawn, then line PC will probably not intersect one of these points exactly, but will pass between them, intersecting the cissoid of Diocles at some point whose exact location has not been constructed, but has only been approximated. An alternative is to keep adding constructed points to the cissoid which get closer and closer to the intersection with line PC, but the number of steps may very well be infinite, and the Greeks did not recognize approximations as limits of infinite steps (so they were very puzzled by Zeno's paradoxes).
-
-One could also construct a cissoid of Diocles by means of a mechanical tool specially designed for that purpose, but this violates the rule of only using compass and straightedge. This rule was established for reasons of logical - axiomatic - consistency. Allowing construction by new tools would be like adding new axioms, yet axioms are supposed to be simple and self-evident, and such tools are not. So by the rules of classical, synthetic geometry, Diocles did not solve the Delian problem, which actually cannot be solved by such means.
-
-On the other hand, if one accepts that cissoids of Diocles do exist, then there must exist at least one example of such a cissoid. This cissoid could then be translated, rotated, and expanded or contracted in size (without changing its proportional shape) at will to fit into any position. Then one would readily admit that such a cissoid can be used to correctly solve the Delian problem.
-
-The pedal curve of a parabola with respect to its vertex is a cissoid of Diocles.
The geometrical properties of pedal curves in general produce several alternate methods of constructing the cissoid. It is the envelope of circles whose centers lie on a parabola and which pass through the vertex of the parabola. Also, if two congruent parabolas are set vertex-to-vertex and one is rolled along the other, the vertex of the rolling parabola will trace the cissoid.
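This pedal property can be checked numerically (an editorial aside, not part of the original article). For the parabola $x = y^2$ with pedal point at the vertex, the foot of the perpendicular from the origin to the tangent at $(t^2, t)$ works out to $\left(-t^2/(4t^2+1),\ 2t^3/(4t^2+1)\right)$, which satisfies the cissoid equation $(x^2+y^2)x = 2ay^2$ with $2a = -1/4$ (with this orientation the curve opens in the $-x$ direction). A minimal Python sketch:

```python
# Pedal of the parabola x = y^2 with respect to its vertex (the origin):
# foot of the perpendicular from the origin to the tangent at (t^2, t).
def pedal_point(t):
    px, py, dx, dy = t * t, t, 2 * t, 1.0      # point on parabola, tangent direction
    s = (px * dx + py * dy) / (dx * dx + dy * dy)
    return px - s * dx, py - s * dy            # orthogonal projection of O onto the tangent

# Verify (x^2 + y^2) * x == 2a * y^2 with 2a = -1/4 along the curve.
for t in [0.5, 1.0, 2.0, -3.0]:
    x, y = pedal_point(t)
    residual = (x * x + y * y) * x + 0.25 * y * y
    print(f"t={t:5.1f}  residual={residual:.2e}")  # ~0 up to rounding
```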
-
-The cissoid of Diocles can also be defined as the inverse curve of a parabola with the center of inversion at the vertex. To see this, take the parabola to be $x = y^2$, in polar coordinates $r\cos\theta = (r\sin \theta)^2$ or:
-$$
-r=\frac{\cos\theta}{\sin^2\!\theta}.
-$$
-
-The inverse curve is thus:
-$$
-r=\frac{\sin^2\!\theta}{\cos\theta} = \sin\theta \tan\theta,
-$$
-
-which agrees with the polar equation of the cissoid above.
diff --git a/wiki/wikipedia/367.txt b/wiki/wikipedia/367.txt
deleted file mode 100644
index 74ba0b524dd3c2f5e0199717a121b86b9bdd417e..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/367.txt
+++ /dev/null
@@ -1,28 +0,0 @@
-In arithmetic combinatorics, Behrend's theorem states that the subsets of the integers from 1 to $n$ in which no member of the set is a multiple of any other must have a logarithmic density that goes to zero as $n$ becomes large. The theorem is named after Felix Behrend, who published it in 1935.
-
-The logarithmic density of a set of integers from 1 to $n$ can be defined by setting the weight of each integer $i$ to be $1/i$, and dividing the total weight of the set by the $n$th partial sum of the harmonic series (or, equivalently for the purposes of asymptotic analysis, dividing by $\log n$). The resulting number is 1 or close to 1 when the set includes all of the integers in that range, but smaller when many integers are missing, and particularly when the missing integers are themselves small.
-
-A subset of $\{1,\dots n\}$ is called primitive if it has the property that no subset element is a multiple of any other element.
-
-Behrend's theorem states that the logarithmic density of any primitive subset must be small.
-
-More precisely, the logarithmic density of such a set must be $O(1/\sqrt{\log\log n})$.
-
-For infinite primitive sequences, the maximum possible density is smaller, $o(1/\sqrt{\log\log n})$.
-
-There exist large primitive subsets of $\{1,\dots n\}$. However, these sets still have small logarithmic density.
-
-*In the subset $\{\lceil (n+1)/2 \rceil,\dots n\}$, all pairs of numbers are within a factor of less than two of each other, so no two can be multiples. It includes approximately half of the numbers from $1$ to $n$. By Dilworth's theorem (using a partition of the integers into chains of powers of two multiplied by an odd number) this subset has maximum cardinality among all subsets in which no two are multiples. But because all of its elements are large, this subset has low logarithmic density, only $O(1/\log n)$.
-
-*Another primitive subset is the set of prime numbers. Despite there being fewer prime numbers than the number of elements in the previous example, this set has larger logarithmic density, $O(\log\log n/\log n)$, according to the divergence of the sum of the reciprocals of the primes.
-
-Both of these subsets have significantly smaller logarithmic density than the bound given by Behrend's theorem. Resolving a conjecture of G. H. Hardy, both Paul Erdős and Subbayya Sivasankaranarayana Pillai showed that, for $k\approx\log\log n$, the set of numbers with exactly $k$ prime factors (counted with multiplicity) has logarithmic density
-$$
-\frac{1+o(1)}{\sqrt{2\pi\log\log n}},
-$$
-
-exactly matching the form of Behrend's theorem. This example is best possible, in the sense that no other primitive subset has logarithmic density with the same form and a larger leading constant.
-
-This theorem is known as Behrend's theorem because Felix Behrend proved it in 1934, and published it in 1935.
Paul Erdős proved the same result, on a train ride in 1934 in which he traveled from Hungary to Cambridge to escape the growing anti-semitism of Europe at that time, but on his arrival he discovered that Behrend's proof was already known.
diff --git a/wiki/wikipedia/3670.txt b/wiki/wikipedia/3670.txt
deleted file mode 100644
index fcec832fe24798d3621a46613f3a7a79ae1cb56d..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3670.txt
+++ /dev/null
@@ -1,21 +0,0 @@
-MindManager is a commercial mind mapping software application developed by Mindjet. The software provides ways for users to visualize information in mind maps and flowcharts. MindManager can be used to manage projects, organize information, and for brainstorming.
-
-Mindjet had approximately two million users, including notable customers such as Coca Cola, Disney, IBM, and Wal-Mart.
-
-MindManager provides ways for users to visualize information using mind maps, and with the release of MindManager 2016 for Windows, now includes flowchart and concept map creation tools. The digital mind maps can be used as a "virtual whiteboard" for brainstorming, managing and planning projects, compiling research, organizing large amounts of information, and for strategic planning.
-
-MindManager also has features that allow budget calculations and formulas, Gantt chart views of project timelines, and guided brainstorming. Documents can be attached to mind map topics and viewed within the MindManager application. Links, images, and notes can also be added to mind map topics and viewed and searched in a side panel.
-
-The software that became MindManager was originally developed by Mike Jetter in the mid-1990s while he was recovering from a bone marrow transplant to treat leukemia. Jetter's goal was to develop a program that would overcome the limitations of creating mind maps with pen and paper, such as the inability to easily move items around. Following his release from hospital, Jetter decided to sell the software. The software's mind maps were initially based on the method created by Tony Buzan. Over time, however, Mindjet has developed its own style of mind mapping.
-
-The software was originally marketed under the name "MindMan — The Creative MindManager". In 1999, it was rebranded as MindManager. Originally only available for Windows, MindManager expanded to Mac OS X in 2006. With the release of version 7, the Windows version of MindManager adopted the ribbon interface first seen in Microsoft Office 2007 and introduced support for Office Open XML. In 2011, mobile versions of MindManager were released for both iOS and Android. Later that year, the company acquired Thinking Space, an Android-based information mapping application, and Cohuman, a social task management service, which the company developed into a collaborative, cloud-based service to complement MindManager called Mindjet Connect or Project Director.
-
-In September 2012, the Mindjet company combined all of its software, including MindManager, Mindjet Connect, and its mobile offerings into a single product, also called Mindjet.
-
-Mindjet moved away from the single-product offering in mid-2013. The stand-alone mind mapping product was again named MindManager, with a more expansive version tailored to large enterprise adoptions called MindManager Enterprise released in 2014. MindManager Enterprise added sharing options including viewing/editing within Microsoft SharePoint. A MindManager mind map viewer also became available with MindManager Enterprise 2016.
-
-On August 9, 2016, Corel announced that they had acquired the Mindjet MindManager business.
-
-MindManager has received generally positive notice from reviewers. MindManager 2016 for Windows took first place in Biggerplate's MindMapper's Choice poll. MindManager 8 received four out of five stars from TechRadar, while MindManager 9 received 3.5 out of 5 stars from PC Magazine and 4 out of 5 stars from Macworld. MindManager was chosen as one of the top 5 best mind mapping tools.
-
-MindManager also received a number of awards, including "Collaboration Product of the Year" for 2008 by Intranet Journal, a Jolt Productivity award for Design and Modeling tools from Dr. Dobb's Journal, and "Best of CeBIT" in the Personal Software category in 2004.
diff --git a/wiki/wikipedia/3671.txt b/wiki/wikipedia/3671.txt
deleted file mode 100644
index 4d7638b7ceb08ed7a9ae2374e0014a55404104ed..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3671.txt
+++ /dev/null
@@ -1,335 +0,0 @@
-In combinatorial mathematics, the Catalan numbers are a sequence of natural numbers that occur in various counting problems, often involving recursively defined objects. They are named after the French-Belgian mathematician Eugène Charles Catalan (1814–1894).
-
-The nth Catalan number can be expressed directly in terms of binomial coefficients by
-$$
-C_n = \frac{1}{n+1}{2n\choose n} = \frac{(2n)!}{(n+1)!n!} = \prod\limits_{k=2}^{n}\frac{n+k}{k} \qquad\text{for }n\ge 0.
-$$
-
-The first Catalan numbers for n = 0, 1, 2, 3, ... are
-
-1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796, 58786, ... .
-
-An alternative expression for $C_n$ is
-$$
-C_n = {2n\choose n} - {2n\choose n+1}
-$$ for $n\ge 0,$
-
-which is equivalent to the expression given above because $\tbinom{2n}{n+1}=\tfrac{n}{n+1}\tbinom{2n}n$. This expression shows that $C_n$ is an integer, which is not immediately obvious from the first formula given. This expression forms the basis for a proof of the correctness of the formula.
-
-The Catalan numbers satisfy the recurrence relations
-$$
-C_0 = 1 \quad \text{and} \quad C_{n+1}=\sum_{i=0}^{n}C_iC_{n-i}\quad\text{for }n\ge 0
-$$
-
-and
-$$
-C_0 = 1 \quad \text{and} \quad C_{n+1} = \frac{2(2n+1)}{n+2}C_n.
-$$
-
-Asymptotically, the Catalan numbers grow as
-$$
-C_n \sim \frac{4^n}{n^{3/2}\sqrt{\pi}},
-$$
-
-in the sense that the quotient of the nth Catalan number and the expression on the right tends towards 1 as n approaches infinity. This can be proved by using the asymptotic growth of the central binomial coefficients, by Stirling's approximation for $n!$, or via generating functions.
-
-The only Catalan numbers $C_n$ that are odd are those for which $n = 2^k - 1$; all others are even. The only prime Catalan numbers are $C_2 = 2$ and $C_3 = 5$.
-
-The Catalan numbers have the integral representations
-$$
-C_n=\frac {1}{2\pi}\int_0^4 x^n\sqrt{\frac{4-x}{x}}dx
-=\frac{2}{\pi}4^n\int_{-1}^{1} t^{2n}\sqrt{1-t^2}dt.
-$$
-
-The latter representation is closely connected to Wigner's semicircle law for eigenvalue distribution of random symmetric matrices.
-
-There are many counting problems in combinatorics whose solution is given by the Catalan numbers. The book Enumerative Combinatorics: Volume 2 by combinatorialist Richard P. Stanley contains a set of exercises which describe 66 different interpretations of the Catalan numbers. Following are some examples, with illustrations of the cases $C_3 = 5$ and $C_4 = 14$.
-
-* $C_n$ is the number of Dyck words of length 2n.
A Dyck word is a string consisting of n X's and n Y's such that no initial segment of the string has more Y's than X's. For example, the following are the Dyck words of length 6: - -
    XXXYYY XYXXYY XYXYXY XXYYXY XXYXYY.
-
-* Re-interpreting the symbol X as an open parenthesis and Y as a close parenthesis, $C_n$ counts the number of expressions containing n pairs of parentheses which are correctly matched:
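These five words can be recovered by brute force (an editorial aside, not part of the original article); a minimal Python sketch that filters all length-2n strings by the prefix condition:

```python
from itertools import product

def dyck_words(n):
    """All strings of n X's and n Y's in which no prefix has more Y's than X's."""
    words = []
    for letters in product("XY", repeat=2 * n):
        balance, ok = 0, True
        for c in letters:
            balance += 1 if c == "X" else -1
            if balance < 0:          # a prefix with more Y's than X's
                ok = False
                break
        if ok and balance == 0:      # equal numbers of X's and Y's overall
            words.append("".join(letters))
    return words

print(dyck_words(3))       # the five words listed above
print(len(dyck_words(4)))  # 14, matching C_4
```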
    ((())) ()(()) ()()() (())() (()())
-
-* $C_n$ is the number of different ways n + 1 factors can be completely parenthesized (or the number of ways of associating n applications of a binary operator, as in the matrix chain multiplication problem). For n = 3, for example, we have the following five different parenthesizations of four factors:
    ((ab)c)d (a(bc))d (ab)(cd) a((bc)d) a(b(cd))
-
-* Successive applications of a binary operator can be represented in terms of a full binary tree, with each correctly matched bracketing describing an internal node. It follows that $C_n$ is the number of full binary trees with n + 1 leaves, or, equivalently, with a total of n internal nodes.
-
-Also, the interior of the correctly matching closing Y for the first X of a Dyck word contains the description of the left subtree, with the exterior describing the right subtree.
-
-* $C_n$ is the number of non-isomorphic ordered (or plane) trees with n + 1 vertices. See encoding general trees as binary trees.
-
-* $C_n$ is the number of monotonic lattice paths along the edges of a grid with n × n square cells, which do not pass above the diagonal. A monotonic path is one which starts in the lower left corner, finishes in the upper right corner, and consists entirely of edges pointing rightwards or upwards. Counting such paths is equivalent to counting Dyck words: X stands for "move right" and Y stands for "move up".
-
-The diagrams in the original illustration show the case n = 4. This can be represented by listing the Catalan elements by column height:
    [0,0,0,0] [0,0,0,1] [0,0,0,2] [0,0,1,1]
    - -
    [0,1,1,1] [0,0,1,2] [0,0,0,3] [0,1,1,2] [0,0,2,2] [0,0,1,3]
    - -
    [0,0,2,3] [0,1,1,3] [0,1,2,2] [0,1,2,3]
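This column-height encoding is convenient for enumeration: the vectors above are exactly the weakly increasing sequences $[h_1, \dots, h_n]$ with $0 \le h_i \le i-1$. A brute-force sketch reproducing the count of fourteen:

def height_vectors(n):
    # Weakly increasing [h_1, ..., h_n] with 0 <= h_i <= i - 1: exactly the
    # monotonic paths that never pass above the diagonal, one height per column.
    def extend(prefix):
        if len(prefix) == n:
            yield list(prefix)
            return
        lo = prefix[-1] if prefix else 0
        for h in range(lo, len(prefix) + 1):  # next height is at most i - 1
            yield from extend(prefix + [h])
    yield from extend([])

vectors = list(height_vectors(4))
print(len(vectors))  # 14 = C_4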
    - -* A convex polygon with n + 2 sides can be cut into triangles by connecting vertices with non-crossing line segments (a form of polygon triangulation). The number of triangles formed is n and the number of different ways that this can be achieved is Cn. The following hexagons illustrate the case n = 4: - -* Cn is the number of stack-sortable permutations of {1, ..., n}. A permutation w is called stack-sortable if S(w) = (1, ..., n), where S(w) is defined recursively as follows: write w = unv where n is the largest element in w and u and v are shorter sequences, and set S(w) = S(u)S(v)n, with S being the identity for one-element sequences. - -* Cn is the number of permutations of {1, ..., n} that avoid the permutation pattern 123 (or, alternatively, any of the other patterns of length 3); that is, the number of permutations with no three-term increasing subsequence. For n = 3, these permutations are 132, 213, 231, 312 and 321. For n = 4, they are 1432, 2143, 2413, 2431, 3142, 3214, 3241, 3412, 3421, 4132, 4213, 4231, 4312 and 4321. - -* Cn is the number of noncrossing partitions of the set {1, ..., n}. A fortiori, Cn never exceeds the nth Bell number. Cn is also the number of noncrossing partitions of the set {1, ..., 2n} in which every block is of size 2. The conjunction of these two facts may be used in a proof by mathematical induction that all of the free cumulants of degree more than 2 of the Wigner semicircle law are zero. This law is important in free probability theory and the theory of random matrices. - -* Cn is the number of ways to tile a stairstep shape of height n with n rectangles. Cutting across the anti-diagonal and looking at only the edges gives full binary trees. The following figure illustrates the case n = 4: - -* Cn is the number of ways to form a "mountain range" with n upstrokes and n downstrokes that all stay above a horizontal line. The mountain range interpretation is that the mountains will never go below the horizon. - -* Cn is the number of standard Young tableaux whose diagram is a 2-by-n rectangle. In other words, it is the number of ways the numbers 1, 2, ..., 2n can be arranged in a 2-by-n rectangle so that each row and each column is increasing. As such, the formula can be derived as a special case of the hook-length formula. - -* Cn is the number of semiorders on n unlabeled items. - -* Given an infinite perfect binary decision tree and n − 1 votes, $C_n$ is the number of possible voting outcomes, given that at any node you can split your votes anyway you want. - -* $C_n$ is the number of length n sequences that start with $1$, and can increase by either $0$ or $1$, or decrease by any number (to at least $1$). For $n=4$ these are $1234, 1233, 1232, 1231, 1223, 1222, 1221, 1212, 1211, 1123, 1122, 1121, 1112, 1111$. From a Dyck path, start a counter at 0. An X increases the counter by 1 and a Y decreases it by 1. Record the values at only the X's. Compared to the similar representation of the Bell numbers, only $1213$ is missing. - -* $C_n$ is the number of Dyck ${2n+2}$-words that have equality of X 's and Y 's only at the last step. This is proved by adding an XY around each Dyck ${2n}$-word and proving that these are the only ones. - -There are several ways of explaining why the formula -$$ -C_n = \frac{1}{n+1}{2n\choose n} -$$ - -solves the combinatorial problems listed above. The first proof below uses a generating function. 
The other proofs are examples of bijective proofs; they involve literally counting a collection of some kind of object to arrive at the correct formula. - -We first observe that all of the combinatorial problems listed above satisfy Segner's recurrence relation -$$ -C_0 = 1 \quad \text{and} \quad C_{n+1}=\sum_{i=0}^n C_iC_{n-i}\quad\text{for }n\ge 0. -$$ - -For example, every Dyck word w of length ≥ 2 can be written in a unique way in the form - -w = Xw1Yw2 - -with (possibly empty) Dyck words w1 and w2. - -The generating function for the Catalan numbers is defined by -$$ -c(x)=\sum_{n=0}^\infty C_n x^n. -$$ - -The recurrence relation given above can then be summarized in generating function form by the relation -$$ -c(x)=1+xc(x)^2; -$$ - -in other words, this equation follows from the recurrence relation by expanding both sides into power series. On the one hand, the recurrence relation uniquely determines the Catalan numbers; on the other hand, the generating function relation can be algebraically solved to yield -$$ -c(x) = \frac{1\pm\sqrt{1-4x}}{2x}. -$$ - -Choosing the minus sign, the fraction has a power series at 0 so its coefficients must therefore be the Catalan numbers. This solution satisfies -$$ -\lim_{x \to 0^+} c(x) = C_0 = 1. -$$ - -The square root term can be expanded as a power series using the identity -$$ -\sqrt{1+y} = \sum_{n=0}^\infty {\frac12 \choose n} y^n = \sum_{n=0}^\infty \frac{(-1)^{n+1}}{4^n(2n-1)} {2n \choose n} y^n -$$ - -This is a special case of Newton's generalized binomial theorem; as with the general theorem, it can be proved by computing derivatives to produce its Taylor series. - -Setting y = −4x gives -$$ -\sqrt{1-4x} = \sum_{n=0}^\infty \frac{-1}{2n-1} {2n \choose n} x^n -$$ - -and substituting this power series into the expression for c(x) and shifting the summation index n by 1, the expansion simplifies to -$$ -\sum_{n=1}^\infty \frac{1}{2(2n-1)} {2n \choose n} x^{n-1}. -$$ - -Let $N=n-1$, so that -$$ -\sum_{N=0}^\infty \frac{1}{2(2N+1)} {2N+2 \choose N+1} x^N -$$ - -and because $\frac{1}{2(2N+1)} {2N+2 \choose N+1} = C_N$ (see 'proof of recurrence' above) - -we have -$$ -c(x) = \sum_{n=0}^\infty {2n \choose n} \frac{x^n}{n+1}. -$$ - -We count the number of paths which start and end on the diagonal of a n × n grid. All such paths have n right and n up steps. Since we can choose which of the 2n steps are up or right, there are in total $\tbinom{2n}{n}$ monotonic paths of this type. A bad path crosses the main diagonal and touches the next higher diagonal (red in the illustration). - -The part of the path after the higher diagonal is flipped about that diagonal, as illustrated with the red dotted line. This swaps all the right steps to up steps and vice versa. In the section of the path that is not reflected, there is one more up step than right steps, so therefore the remaining section of the bad path has one more right step than up steps. When this portion of the path is reflected, it will have one more up step than right steps. - -Since there are still 2n steps, there are now n + 1 up steps and n − 1 right steps. So, instead of reaching (n,n), all bad paths after reflection end at (n − 1, n + 1). Because every monotonic path in the (n − 1) × (n + 1) grid meets the higher diagonal, and because the reflection process is reversible, the reflection is therefore a bijection between bad paths in the original grid and monotonic paths in the new grid. 
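The reflection is easy to mechanize, which gives a machine check that it sends bad paths injectively into the monotonic paths of the (n − 1) × (n + 1) grid. A brute-force sketch for n = 4 (function names are illustrative):

from itertools import combinations

def reflect_bad(path):
    # Flip R <-> U after the first step that touches the higher diagonal y = x + 1.
    x = y = 0
    for k, step in enumerate(path):
        x, y = (x + 1, y) if step == "R" else (x, y + 1)
        if y == x + 1:
            return path[: k + 1] + path[k + 1 :].translate(str.maketrans("RU", "UR"))
    return None  # a good path: it never rises above the diagonal

n = 4
paths = ["".join("U" if i in ups else "R" for i in range(2 * n))
         for ups in combinations(range(2 * n), n)]
bad = {p: q for p in paths if (q := reflect_bad(p)) is not None}
# every image ends at (n - 1, n + 1), and the map is injective
assert all(q.count("R") == n - 1 for q in bad.values())
assert len(set(bad.values())) == len(bad)
print(len(paths) - len(bad))  # 14 = C_4 good paths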
- -The number of bad paths is therefore: -$$ -{n-1 + n+1 \choose n-1} = {2n \choose n-1} = {2n \choose n+1} -$$ - -and the number of Catalan paths (i.e. good paths) is obtained by removing the number of bad paths from the total number of monotonic paths of the original grid, -$$ -C_n = {2n \choose n} - {2n \choose n+1} = \frac{1}{n+1}{2n \choose n}. -$$ - -In terms of Dyck words, we start with a (non-Dyck) sequence of n X's and n Y's and interchange all X's and Y's after the first Y that violates the Dyck condition. After this Y, note that there is exactly one more Y than there are Xs. - -This bijective proof provides a natural explanation for the term n + 1 appearing in the denominator of the formula for Cn. A generalized version of this proof can be found in a paper of Rukavicka Josef (2011). - -Given a monotonic path, the exceedance of the path is defined to be the number of vertical edges above the diagonal. For example, in Figure 2, the edges above the diagonal are marked in red, so the exceedance of this path is 5. - -Given a monotonic path whose exceedance is not zero, we apply the following algorithm to construct a new path whose exceedance is 1 less than the one we started with. - -* Starting from the bottom left, follow the path until it first travels above the diagonal. - -* Continue to follow the path until it touches the diagonal again. Denote by X the first such edge that is reached. - -* Swap the portion of the path occurring before X with the portion occurring after X. - -In Figure 3, the black dot indicates the point where the path first crosses the diagonal. The black edge is X, and we place the last lattice point of the red portion in the top-right corner, and the first lattice point of the green portion in the bottom-left corner, and place X accordingly, to make a new path, shown in the second diagram. - -The exceedance has dropped from 3 to 2. In fact, the algorithm causes the exceedance to decrease by 1 for any path that we feed it, because the first vertical step starting on the diagonal (at the point marked with a black dot) is the unique vertical edge that passes from above the diagonal to below it - all the other vertical edges stay on the same side of the diagonal. - -It is also not difficult to see that this process is reversible: given any path P whose exceedance is less than n, there is exactly one path which yields P when the algorithm is applied to it. Indeed, the (black) edge X, which originally was the first horizontal step ending on the diagonal, has become the last horizontal step starting on the diagonal. Alternatively, reverse the original algorithm to look for the first edge that passes below the diagonal. - -This implies that the number of paths of exceedance n is equal to the number of paths of exceedance n − 1, which is equal to the number of paths of exceedance n − 2, and so on, down to zero. In other words, we have split up the set of all monotonic paths into n + 1 equally sized classes, corresponding to the possible exceedances between 0 and n. Since there are $\textstyle {2n\choose n}$ monotonic paths, we obtain the desired formula $\textstyle C_n = \frac{1}{n+1}{2n\choose n}.$ - -Figure 4 illustrates the situation for n = 3. Each of the 20 possible monotonic paths appears somewhere in the table. The first column shows all paths of exceedance three, which lie entirely above the diagonal. The columns to the right show the result of successive applications of the algorithm, with the exceedance decreasing one unit at a time. 
There are five rows, that is, C3 = 5, and the last column displays all paths no higher than the diagonal. - -Using Dyck words, start with a sequence from $\textstyle \binom{2n}{n}$. Let $X_d$ be the first X that brings an initial subsequence to equality, and configure the sequence as $(F)X_d(L)$. The new sequence is $LXF$. - -This proof uses the triangulation definition of Catalan numbers to establish a relation between Cn and Cn+1. - -Given a polygon P with n + 2 sides and a triangulation, mark one of its sides as the base, and also orient one of its 2n + 1 total edges. There are (4n + 2)Cn such marked triangulations for a given base. - -Given a polygon Q with n + 3 sides and a (different) triangulation, again mark one of its sides as the base. Mark one of the sides other than the base side (and not an inner triangle edge). There are (n + 2)Cn + 1 such marked triangulations for a given base. - -There is a simple bijection between these two marked triangulations: We can either collapse the triangle in Q whose side is marked (in two ways, and subtract the two that cannot collapse the base), or, in reverse, expand the oriented edge in P to a triangle and mark its new side. - -Thus -$$ -(4n+2)C_n = (n+2)C_{n+1} -$$. - -Write $\textstyle\frac{4n-2}{n+1}C_{n-1} = C_n.$ - -Because $(2n)!=(2n)!!(2n-1)!!=2^nn!(2n-1)!!$ - -then -$$ -\frac{(2n)!}{n!}=2^n(2n-1)!!=(4n-2)!!!!. -$$ - -Applying the recursion with $C_1=1$ gives the result. - -This proof is based on the Dyck words interpretation of the Catalan numbers, so Cn is the number of ways to correctly match n pairs of brackets. We denote a (possibly empty) correct string with c and its inverse (where "[" and "]" are exchanged) with c'. Since any c can be uniquely decomposed into c = [ c1 ] c2, summing over the possible spots to place the closing bracket immediately gives the recursive definition -$$ -C_0 = 1 \quad \text{and} \quad C_{n+1} = \sum_{i=0}^n C_iC_{n-i}\quad\text{for }n\ge 0. -$$ - -Let b stand for a balanced string of length 2n—that is, containing an equal number of "[" and "]", so $\textstyle B_n = {2n\choose n}$. As before, any balanced string can be uniquely decomposed into either [ c ] b or ] c' [ b, so -$$ -B_{n+1} = 2\sum_{i=0}^n B_i C_{n-i}. -$$ - -Any incorrect balanced string starts with c ], and the remaining string has one more [ than ], so -$$ -B_{n+1} - C_{n+1} = \sum_{i=0}^n {2i+1 \choose i} C_{n-i} -$$ - -Also, from the definitions, we have: -$$ -B_{n+1} - C_{n+1} = 2\sum_{i=0}^n B_i C_{n-i} - \sum_{i=0}^n C_iC_{n-i} = \sum_{i=0}^n (2B_i-C_i) C_{n-i}. -$$ - -Therefore -$$ -2B_i-C_i={2i+1 \choose i}= \frac{2i+1}{i+1}B_i, -$$ -$$ -C_i=B_i\left(2-\frac{2i+1}{i+1}\right), -$$ -$$ -C_i=\frac{1}{i+1}\binom{2i}{i}. -$$ - -This proof is based on the Dyck words interpretation of the Catalan numbers and uses the cycle lemma of Dvoretzky and Motzkin. - -We call a sequence of X's and Y's dominating if, reading from left to right, the number of X's is always strictly greater than the number of Y's. The cycle lemma states that any sequence of $m$ X's and $n$ Y's, where $m> n$, has precisely $m-n$ dominating cyclic permutations. To see this, arrange the given sequence of $m+n$ X's and Y's in a circle. Repeatedly removing XY pairs leaves exactly $m-n$ X's. Each of these X's was the start of a dominating cyclic permutation before anything was removed. - -For example, consider $XXYXY$. This is dominating, but none of its cyclic permutations $XYXYX$, $YXYXX$, $XYXXY$ and $YXXYX$ are. 
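For small m and n the cycle lemma can be checked exhaustively. A short sketch (the helper dominating tests prefixes exactly as in the definition):

from itertools import combinations

def dominating(seq):
    # every nonempty prefix has strictly more X's than Y's
    balance = 0
    for ch in seq:
        balance += 1 if ch == "X" else -1
        if balance <= 0:
            return False
    return True

def check_cycle_lemma(m, n):
    length = m + n
    for xs in combinations(range(length), m):
        seq = "".join("X" if i in xs else "Y" for i in range(length))
        doms = sum(dominating(seq[k:] + seq[:k]) for k in range(length))
        assert doms == m - n  # exactly m - n dominating cyclic shifts

for m in range(1, 6):
    for n in range(m):
        check_cycle_lemma(m, n)
print("cycle lemma verified for all m <= 5, n < m")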
In particular, when $m=n+1$, there is exactly one dominating cyclic permutation. Removing the leading X from it (a dominating sequence must begin with X) leaves a Dyck sequence. Since there are $\textstyle {2n+1 \choose n}$ sequences of $n+1$ X's and $n$ Y's in total, and each one belongs to an equivalence class of size 2n+1 (because n, m and 2n+1 are pairwise coprime), we have $\textstyle\frac{1}{2n+1}{2n+1 \choose n}=\frac{1}{n+1}{2n\choose n}=C_n$ distinct cycles of $n+1$ X's and $n$ Y's, each of which corresponds to exactly one Dyck sequence; hence $\textstyle C_n$ counts Dyck sequences.

The n×n Hankel matrix whose (i, j) entry is the Catalan number Ci+j−2 has determinant 1, regardless of the value of n. For example, for n = 4 we have
$$
\det\begin{bmatrix}1 & 1 & 2 & 5 \\ 1 & 2 & 5 & 14 \\ 2 & 5 & 14 & 42 \\ 5 & 14 & 42 & 132\end{bmatrix} = 1.
$$

Moreover, if the indexing is "shifted" so that the (i, j) entry is filled with the Catalan number Ci+j−1, then the determinant is still 1, regardless of the value of n.

For example, for n = 4 we have
$$
\det\begin{bmatrix}1 & 2 & 5 & 14 \\ 2 & 5 & 14 & 42 \\ 5 & 14 & 42 & 132 \\ 14 & 42 & 132 & 429 \end{bmatrix} = 1.
$$

Taken together, these two conditions uniquely define the Catalan numbers.

Another feature unique to the Catalan–Hankel matrices is that the determinant of the n×n submatrix whose entries start at C2 = 2 equals n + 1:
$$
\det\begin{bmatrix} 2 \end{bmatrix} = 2
$$
$$
\det\begin{bmatrix} 2 & 5 \\ 5 & 14 \end{bmatrix} = 3
$$
$$
\det\begin{bmatrix} 2 & 5 & 14\\ 5 & 14 & 42\\ 14 & 42 & 132\end{bmatrix} = 4
$$
$$
\det\begin{bmatrix} 2 & 5 & 14 & 42 \\ 5 & 14 & 42 & 132 \\ 14 & 42 & 132 & 429\\ 42 & 132 & 429 & 1430\end{bmatrix} = 5
$$

et cetera.

The Catalan sequence was described in the 18th century by Leonhard Euler, who was interested in the number of different ways of dividing a polygon into triangles. The sequence is named after Eugène Charles Catalan, who discovered the connection to parenthesized expressions during his exploration of the Towers of Hanoi puzzle. The reflection counting trick (second proof) for Dyck words was found by Désiré André in 1887.

The name "Catalan numbers" originated with John Riordan.

In 1988, it came to light that the Catalan number sequence had been used in China by the Mongolian mathematician Mingantu by 1730. That is when he started to write his book Ge Yuan Mi Lu Jie Fa [The Quick Method for Obtaining the Precise Ratio of Division of a Circle], which was completed by his student Chen Jixin in 1774 but published sixty years later. Peter J. Larcombe (1999) sketched some of the features of the work of Mingantu, including the stimulus of Pierre Jartoux, who brought three infinite series to China early in the 1700s.

For instance, Ming used the Catalan sequence to express series expansions of sin(2α) and sin(4α) in terms of sin(α).

The two-parameter sequence of non-negative integers $\frac{(2m)!(2n)!}{(m+n)!m!n!}$ is a generalization of the Catalan numbers. These are named super-Catalan numbers, per Ira Gessel. They should not be confused with the Schröder–Hipparchus numbers, which are sometimes also called super-Catalan numbers.

For $m=1$, this is just two times the ordinary Catalan numbers, and for $m=n$, the numbers have an easy combinatorial description. However, other combinatorial descriptions are only known for $m=2, 3$ and $4$, and it is an open problem to find a general combinatorial interpretation.
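Both the Hankel determinant evaluations above and the integrality of the super-Catalan numbers are quick to confirm with exact integer arithmetic; a sketch (cofactor expansion is adequate at these sizes):

from math import comb, factorial

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def det(m):
    # exact integer determinant by cofactor expansion along the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

for n in range(1, 6):
    assert det([[catalan(i + j) for j in range(n)] for i in range(n)]) == 1
    assert det([[catalan(i + j + 1) for j in range(n)] for i in range(n)]) == 1
    assert det([[catalan(i + j + 2) for j in range(n)] for i in range(n)]) == n + 1

# super-Catalan numbers (2m)! (2n)! / ((m+n)! m! n!) are integers,
# and m = 1 gives twice the Catalan numbers
assert all(factorial(2 * m) * factorial(2 * n)
           % (factorial(m + n) * factorial(m) * factorial(n)) == 0
           for m in range(8) for n in range(8))
assert all(factorial(2) * factorial(2 * n)
           // (factorial(1 + n) * factorial(n)) == 2 * catalan(n)
           for n in range(10))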
- -Sergey Fomin and Nathan Reading have given a generalized Catalan number associated to any finite crystallographic Coxeter group, namely the number of fully commutative elements of the group; in terms of the associated root system, it is the number of anti-chains (or order ideals) in the poset of positive roots. The classical Catalan number $C_n$ corresponds to the root system of type $A_n$. The classical recurrence relation generalizes: the Catalan number of a Coxeter diagram is equal to the sum of the Catalan numbers of all its maximal proper sub-diagrams. - -The Catalan k-fold convolution is: - - \sum_{i_1+\cdots+i_m=n\atop i_1,\ldots,i_m\ge 0} C_{i_1}\cdots C_{i_m} = \begin{cases} - -\dfrac{m(n+1)(n+2)\cdots (n+m/2-1)}{2(n+m/2+2)(n+m/2+3)\cdots (n+m)}C_{n+m/2}, & m \text{ even,}\\[5 pt] - -\dfrac{m(n+1)(n+2)\cdots (n+(m-1)/2)}{(n+(m+3)/2)(n+(m+3)/2+1)\cdots (n+m)}C_{n+(m-1)/2}, & m \text{ odd.} - -\end{cases} - -The Catalan numbers are a solution of a version of the Hausdorff moment problem. diff --git a/wiki/wikipedia/3672.txt b/wiki/wikipedia/3672.txt deleted file mode 100644 index 495de30355b9cabb70e8a70508a6fbf1ce0446d1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3672.txt +++ /dev/null @@ -1,3 +0,0 @@ -In computing and parallel processing, memory semantics refers to the process logic used to control access to shared memory locations, or at a higher level to shared variables in the presence of multiple threads or processors. - -Memory semantics may also be defined for transactional memory, where issues related to the interaction of transactions and locks, and user-level actions need to be defined and specified. diff --git a/wiki/wikipedia/3673.txt b/wiki/wikipedia/3673.txt deleted file mode 100644 index 2ff44a9f659a0a4abaacf3e5bd818c9745992b5e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3673.txt +++ /dev/null @@ -1,30 +0,0 @@ -"Witt's theorem" or "the Witt theorem" may also refer to the Bourbaki–Witt fixed point theorem of order theory. - -In mathematics, Witt's theorem, named after Ernst Witt, is a basic result in the algebraic theory of quadratic forms: any isometry between two subspaces of a nonsingular quadratic space over a field k may be extended to an isometry of the whole space. An analogous statement holds also for skew-symmetric, Hermitian and skew-Hermitian bilinear forms over arbitrary fields. The theorem applies to classification of quadratic forms over k and in particular allows one to define the Witt group W(k) which describes the "stable" theory of quadratic forms over the field k. - -Let (V, b) be a finite-dimensional vector space over a field k of characteristic different from 2 together with a non-degenerate symmetric or skew-symmetric bilinear form. If f : U → U is an isometry between two subspaces of V then f extends to an isometry of V. - -Witt's theorem implies that the dimension of a maximal totally isotropic subspace (null space) of V is an invariant, called the index or ' of b, and moreover, that the isometry group of (V, b) acts transitively on the set of maximal isotropic subspaces. This fact plays an important role in the structure theory and representation theory of the isometry group and in the theory of reductive dual pairs. - -Let (V, q), (V1, q1), (V2, q2) be three quadratic spaces over a field k. Assume that -$$ - (V_1,q_1)\oplus(V,q) \simeq (V_2,q_2)\oplus(V,q). -$$ - -Then the quadratic spaces (V1, q1) and (V2, q2) are isometric: -$$ - (V_1,q_1)\simeq (V_2,q_2). 
-$$ - -In other words, the direct summand (V, q) appearing in both sides of an isomorphism between quadratic spaces may be "cancelled". - -Let (V, q) be a quadratic space over a field k. Then - -it admits a Witt decomposition: -$$ -(V,q)\simeq (V_0,0)\oplus(V_a, q_a)\oplus (V_h,q_h), -$$ - -where 1=V0 = ker q is the radical of q, (Va, qa) is an anisotropic quadratic space and (Vh, qh) is a split quadratic space. Moreover, the anisotropic summand, termed the core form, and the hyperbolic summand in a Witt decomposition of (V, q) are determined uniquely up to isomorphism. - -Quadratic forms with the same core form are said to be similar or Witt equivalent. diff --git a/wiki/wikipedia/3674.txt b/wiki/wikipedia/3674.txt deleted file mode 100644 index 17fe069b6b68fd9ed107966e2fd31eebe40e01ea..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3674.txt +++ /dev/null @@ -1,163 +0,0 @@ -In mathematics, given a collection $S$ of subsets of a set X, an exact cover is a subcollection $S^{*}$ of $S$ such that each element in $X$ is contained in exactly one subset in $S^{*}$. One says that each element in $X$ is covered by exactly one subset in $S^{*}$. An exact cover is a kind of cover. - -In computer science, the exact cover problem is a decision problem to determine if an exact cover exists. The exact cover problem is NP-complete and is one of Karp's 21 NP-complete problems. The exact cover problem is a kind of constraint satisfaction problem. - -An exact cover problem can be represented by an incidence matrix or a bipartite graph. - -Knuth's Algorithm X is an algorithm that finds all solutions to an exact cover problem. DLX is the name given to Algorithm $X$ when it is implemented efficiently using Donald Knuth's Dancing Links technique on a computer. - -For example, consider the problem of tiling with pentominoes an 8×8 chessboard with the 4 central squares removed: - -The problem involves two kinds of constraints: - -Pentomino: For each of the 12 pentominoes, there is the constraint that it must be placed exactly once. Name these constraints after the corresponding pentominoes: F I L P N T U V W X Y Z. - -Square: For each of the 60 squares, there is the constraint that it must be covered by a pentomino exactly once. Name these constraints after the corresponding squares in the board: ij, where i is the rank and j is the file. - -Thus there are 12+60 = 72 constraints in all. - -As both kinds of constraints are "exactly one" constraints, the problem is an exact cover problem. - -The problem involves many choices, one for each way to place a pentomino on the board. - -It is convenient to consider each choice as satisfying a set of 6 constraints: 1 constraint for the pentomino being placed and 5 constraints for the five squares where it is placed. 
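In code, the universe for this problem is just the set of 72 constraint names, and each choice is a 6-element subset of it. A sketch of the encoding (generation of the placements themselves is elided; names follow the conventions above):

# Universe: 12 pentomino names plus 60 square names "ij" (rank i, file j).
PENTOMINOES = set("FILPNTUVWXYZ")
SQUARES = {f"{r}{c}" for r in range(1, 9) for c in range(1, 9)
           if not (r in (4, 5) and c in (4, 5))}  # drop the 4 central squares
UNIVERSE = PENTOMINOES | SQUARES
assert len(UNIVERSE) == 72

# One choice = one placement, e.g. an F-pentomino covering five named squares:
choice = frozenset({"F", "12", "13", "21", "22", "32"})
assert choice <= UNIVERSE and len(choice) == 6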
- -In the case of an 8×8 chessboard with the 4 central squares removed, there are 1568 such choices, for example: - -* {F, 12, 13, 21, 22, 32} - -* {F, 13, 14, 22, 23, 33} - -* … - -* {I, 11, 12, 13, 14, 15} - -* {I, 12, 13, 14, 15, 16} - -* … - -* {L, 11, 21, 31, 41, 42} - -* {L, 12, 22, 32, 42, 43} - -* … - -One of many solutions of this exact cover problem is the following set of 12 choices: - -* {I, 11, 12, 13, 14, 15} - -* {N, 16, 26, 27, 37, 47} - -* {L, 17, 18, 28, 38, 48} - -* {U, 21, 22, 31, 41, 42} - -* {X, 23, 32, 33, 34, 43} - -* {W, 24, 25, 35, 36, 46} - -* {P, 51, 52, 53, 62, 63} - -* {F, 56, 64, 65, 66, 75} - -* {Z, 57, 58, 67, 76, 77} - -* {T, 61, 71, 72, 73, 81} - -* {V, 68, 78, 86, 87, 88} - -* {Y, 74, 82, 83, 84, 85} - -This set of choices corresponds to the following solution to the pentomino tiling problem: - -A pentomino tiling problem is more naturally viewed as an exact cover problem than an exact hitting set problem, because it is more natural to view each choice as a set of constraints than each constraint as a set of choices. - -Each choice relates to just 6 constraints, which are easy to enumerate. On the other hand, each constraint relates to many choices, which are harder to enumerate. - -Whether viewed as an exact cover problem or an exact hitting set problem, the matrix representation is the same, having 1568 rows corresponding to choices and 72 columns corresponding to constraints. Each row contains a single 1 in the column identifying the pentomino and five 1s in the columns identifying the squares covered by the pentomino. - -Using the matrix, a computer can find all solutions relatively quickly, for example, using Dancing Links. - -Main articles: Sudoku, Mathematics of Sudoku, Sudoku solving algorithms - -The problem in Sudoku is to assign numbers (or digits, values, symbols) to cells (or squares) in a grid so as to satisfy certain constraints. - -In the standard 9×9 Sudoku variant, there are four kinds of constraints: - -Row-Column: Each intersection of a row and column, i.e, each cell, must contain exactly one number. - -Row-Number: Each row must contain each number exactly once - -Column-Number: Each column must contain each number exactly once. - -Box-Number: Each box must contain each number exactly once. - -While the first constraint might seem trivial, it is nevertheless needed to ensure there is only one number per cell. Intuitively, placing a number into a cell prohibits placing that number in any other cell sharing the same column, row, or box and also prohibits placing any other number into the now occupied cell. - -Solving Sudoku is an exact cover problem. - -More precisely, solving Sudoku is an exact hitting set problem, which is equivalent to an exact cover problem, when viewed as a problem to select possibilities such that each constraint set contains (i.e., is hit by) exactly one selected possibility. - -In the notation above for the (generalized) exact cover problem, X is the set of possibilities, Y is a set of constraint sets, and R is the binary relation "is contained in." - -Each possible assignment of a particular number to a particular cell is a possibility (or candidate). When Sudoku is played with pencil and paper, possibilities are often called pencil marks. - -In the standard 9×9 Sudoku variant, in which each of 9×9 cells is assigned one of 9 numbers, there are 9×9×9=729 possibilities. - -Using obvious notation for rows, columns and numbers, the possibilities can be labeled - -R1C1#1, R1C1#2, …, R9C9#9. 
- -The fact that each kind of constraint involves exactly one of something is what makes Sudoku an exact hitting set problem. The constraints can be represented by constraint sets. The problem is to select possibilities such that each constraint set contains (i.e., is hit by) exactly one selected possibility. - -In the standard 9×9 Sudoku variant, there are four kinds of constraints sets corresponding to the four kinds of constraints: - -Row-Column: A row-column constraint set contains all the possibilities for the intersection of a particular row and column, i.e., for a cell. For example, the constraint set for row 1 and column 1, which can be labeled R1C1, contains the 9 possibilities for row 1 and column 1 but different numbers: - -R1C1 = { R1C1#1, R1C1#2, R1C1#3, R1C1#4, R1C1#5, R1C1#6, R1C1#7, R1C1#8, R1C1#9 }. - -Row-Number: A row-number constraint set contains all the possibilities for a particular row and number. For example, the constraint set for row 1 and number 1, which can be labeled R1#1, contains the 9 possibilities for row 1 and number 1 but different columns: - -R1#1 = { R1C1#1, R1C2#1, R1C3#1, R1C4#1, R1C5#1, R1C6#1, R1C7#1, R1C8#1, R1C9#1 }. - -Column-Number: A column-number constraint set contains all the possibilities for a particular column and number. For example, the constraint set for column 1 and number 1, which can be labeled C1#1, contains the 9 possibilities for column 1 and number 1 but different rows: - -C1#1 = { R1C1#1, R2C1#1, R3C1#1, R4C1#1, R5C1#1, R6C1#1, R7C1#1, R8C1#1, R9C1#1 }. - -Box-Number: A box-number constraint set contains all the possibilities for a particular box and number. For example, the constraint set for box 1 (in the upper lefthand corner) and number 1, which can be labeled B1#1, contains the 9 possibilities for the cells in box 1 and number 1: - -B1#1 = { R1C1#1, R1C2#1, R1C3#1, R2C1#1, R2C2#1, R2C3#1, R3C1#1, R3C2#1, R3C3#1 }. - -Since there are 9 rows, 9 columns, 9 boxes and 9 numbers, there are 9×9=81 row-column constraint sets, 9×9=81 row-number constraint sets, 9×9=81 column-number constraint sets, and 9×9=81 box-number constraint sets: 81+81+81+81=324 constraint sets in all. - -In brief, the standard 9×9 Sudoku variant is an exact hitting set problem with 729 possibilities and 324 constraint sets. - -Thus the problem can be represented by a 729×324 matrix. - -Although it is difficult to present the entire 729×324 matrix, the general nature of the matrix can be seen from several snapshots: - -| - -| - -| - -|} - -The complete 729×324 matrix is available from Robert Hanson. - -Note that the set of possibilities RxCy#z can be arranged as a 9×9×9 cube in a 3-dimensional space with coordinates x, y, and z. Then each row Rx, column Cy, or number #z is a 9×9×1 "slice" of possibilities; each box Bw is a 9x3×3 "tube" of possibilities; each row-column constraint set RxCy, row-number constraint set Rx#z, or column-number constraint set Cy#z is a 9x1×1 "strip" of possibilities; each box-number constraint set Bw#z is a 3x3×1 "square" of possibilities; and each possibility RxCy#z is a 1x1×1 "cubie" consisting of a single possibility. Moreover, each constraint set or possibility is the intersection of the component sets. For example, R1C2#3 = R1 ∩ C2 ∩ #3, where ∩ denotes set intersection. - -Although other Sudoku variations have different numbers of rows, columns, numbers and/or different kinds of constraints, they all involve possibilities and constraint sets, and thus can be seen as exact hitting set problems. 
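Knuth's Algorithm X, mentioned earlier, solves any instance in this form: repeatedly pick an uncovered constraint, try each remaining choice that satisfies it, and recurse on the reduced problem; Dancing Links (DLX) is just an efficient data structure for undoing these reductions. A minimal dictionary-based sketch (not DLX), run on a small toy instance rather than the 729×324 Sudoku matrix:

def exact_covers(universe, choices, partial=None):
    # `choices` maps a choice name to the set of constraints it satisfies.
    if partial is None:
        partial = []
    if not universe:
        yield sorted(partial)
        return
    # Branch on the constraint satisfied by the fewest remaining choices.
    candidates = {c: [nm for nm, s in choices.items() if c in s] for c in universe}
    constraint = min(candidates, key=lambda c: len(candidates[c]))
    for name in candidates[constraint]:
        s = choices[name]
        partial.append(name)
        reduced = {nm: t for nm, t in choices.items() if not (t & s)}
        yield from exact_covers(universe - s, reduced, partial)
        partial.pop()

choices = {"A": {1, 4, 7}, "B": {1, 4}, "C": {4, 5, 7},
           "D": {3, 5, 6}, "E": {2, 3, 6, 7}, "F": {2, 7}}
print(list(exact_covers({1, 2, 3, 4, 5, 6, 7}, choices)))  # [['B', 'D', 'F']]

The same function, fed the 729 possibilities and 324 constraint sets described above, solves Sudoku instances directly, though far more slowly than a DLX implementation.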
- -The N queens problem is an example of a generalized exact cover problem. The problem involves four kinds of constraints: - -Rank: For each of the N ranks, there must be exactly one queen. - -File: For each of the N files, there must be exactly one queen. - -Diagonals: For each of the 2N − 1 diagonals, there must be at most one queen. - -Reverse diagonals: For each of the 2N − 1 reverse diagonals, there must be at most one queen. - -Note that the 2N rank and file constraints form the primary constraints, while the 4N − 2 diagonal and reverse diagonals form the secondary constraints. Further, because each of first and last diagonal and reverse diagonals involves only one square on the chessboard, these can be omitted and thus one can reduce the number of secondary constraints to 4N − 6. The matrix for the N queens problem then has N2 rows and 6N − 6 columns, each row for a possible queen placement on each square on the chessboard, and each column for each constraint. diff --git a/wiki/wikipedia/3675.txt b/wiki/wikipedia/3675.txt deleted file mode 100644 index bb6f256dfa3b2ffd04b66460658bf0c06e88eec1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3675.txt +++ /dev/null @@ -1,12 +0,0 @@ -In mathematical analysis, Heine's identity, named after Heinrich Eduard Heine is a Fourier expansion of a reciprocal square root which Heine presented as -$$ -\frac{1}{\sqrt{z-\cos\psi}}=\frac{\sqrt{2}}{\pi}\sum_{m=-\infty}^\infty Q_{m-\frac12}(z) e^{im\psi} -$$ - -where $ Q_{m-\frac12}$ is a Legendre function of the second kind, which has degree, m - 1/2, a half-integer, and argument, z, real and greater than one. This expression can be generalized for arbitrary half-integer powers as follows - -(z-\cos\psi)^{n-\frac12}=\sqrt{\frac{2}{\pi}}\frac{(z^2-1)^{\frac{n}{2}}}{\Gamma(\frac12-n)} - -\sum_{m=-\infty}^{\infty}\frac{\Gamma(m-n+\frac12)}{\Gamma(m+n+\frac12)}Q_{m-\frac12}^n(z)e^{im\psi}, - -where $\scriptstyle\Gamma$ is the Gamma function. diff --git a/wiki/wikipedia/3676.txt b/wiki/wikipedia/3676.txt deleted file mode 100644 index 02745d92d2c402c11d32ffc73aa53ffb6897b19b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3676.txt +++ /dev/null @@ -1,286 +0,0 @@ -In mathematics, Capelli's identity, named after , is an analogue of the formula det(AB) = det(A) det(B), for certain matrices with noncommuting entries, related to the representation theory of the Lie algebra $\mathfrak{gl}_n$. It can be used to relate an invariant ƒ to the invariant Ωƒ, where Ω is Cayley's Ω process. - -Suppose that xij for i,j = 1,...,n are commuting variables. Write Eij for the polarization operator -$$ -E_{ij} = \sum_{a=1}^n x_{ia}\frac{\partial}{\partial x_{ja}}. -$$ - -The Capelli identity states that the following differential operators, expressed as determinants, are equal: - - - -\begin{vmatrix} E_{11}+n-1 & \cdots &E_{1,n-1}& E_{1n} \\ \vdots& \ddots & \vdots&\vdots\\ E_{n-1,1} & \cdots & E_{n-1,n-1}+1&E_{n-1,n} \\ E_{n1} & \cdots & E_{n,n-1}& E_{nn} +0\end{vmatrix} = - -\begin{vmatrix} x_{11} & \cdots & x_{1n} \\ \vdots& \ddots & \vdots\\ x_{n1} & \cdots & x_{nn} \end{vmatrix} - -\begin{vmatrix} \frac{\partial}{\partial x_{11}} & \cdots &\frac{\partial}{\partial x_{1n}} \\ \vdots& \ddots & \vdots\\ \frac{\partial}{\partial x_{n1}} & \cdots &\frac{\partial}{\partial x_{nn}} \end{vmatrix}. - - - -Both sides are differential operators. The determinant on the left has non-commuting entries, and is expanded with all terms preserving their "left to right" order. 
Such a determinant is often called a column-determinant, since it can be obtained by the column expansion of the determinant starting from the first column. It can be formally written as -$$ -\det(A) = \sum_{\sigma \in S_n} \sgn(\sigma) A_{\sigma(1),1}A_{\sigma(2),2}\cdots A_{\sigma(n),n}, -$$ - -where in the product first come the elements from the first column, then from the second and so on. The determinant on the far right is Cayley's omega process, and the one on the left is the Capelli determinant. - -The operators Eij can be written in a matrix form: -$$ -E = X D^t, -$$ - -where $E, X, D$ are matrices with elements Eij, xij, $\frac{\partial}{\partial x_{ij}}$ respectively. If all elements in these matrices would be commutative then clearly $\det(E) = \det(X) \det(D^t)$. The Capelli identity shows that despite noncommutativity there exists a "quantization" of the formula above. The only price for the noncommutativity is a small correction: $(n-i)\delta_{ij}$ on the left hand side. For generic noncommutative matrices formulas like -$$ -\det(AB)=\det(A)\det(B) -$$ - -do not exist, and the notion of the 'determinant' itself does not make sense for generic noncommutative matrices. That is why the Capelli identity still holds some mystery, despite many proofs offered for it. A very short proof does not seem to exist. Direct verification of the statement can be given as an exercise for n = 2, but is already long for n = 3. - -Consider the following slightly more general context. Suppose that $n$ and $m$ are two integers and $x_{ij}$ for $i = 1, \dots, n, \ j = 1, \dots, m$, be commuting variables. Redefine $E_{ij}$ by almost the same formula: -$$ -E_{ij} = \sum_{a=1}^m x_{ia}\frac{\partial}{\partial x_{ja}}. -$$ - -with the only difference that summation index $a$ ranges from $1$ to $m$. One can easily see that such operators satisfy the commutation relations: -$$ -[ E_{ij}, E_{kl}] = \delta_{jk}E_{il}- \delta_{il}E_{kj}.~~~~~~~~~ -$$ - -Here $[a,b]$ denotes the commutator $ab-ba$. These are the same commutation relations which are satisfied by the matrices $e_{ij}$ which have zeros everywhere except the position $(i,j)$, where 1 stands. ($e_{ij}$ are sometimes called matrix units). Hence we conclude that the correspondence $\pi : e_{ij} \mapsto E_{ij} $ defines a representation of the Lie algebra $\mathfrak{gl}_n$ in the vector space of polynomials of $x_{ij}$. - -It is especially instructive to consider the special case m = 1; in this case we have xi1, which is abbreviated as xi: -$$ -E_{ij} = x_i \frac{\partial}{\partial x_j}. -$$ - -In particular, for the polynomials of the first degree it is seen that: -$$ -E_{ij} x_k = \delta_{jk} x_i. ~~~~~~~~~~~~~~ -$$ - -Hence the action of $E_{ij}$ restricted to the space of first-order polynomials is exactly the same as the action of matrix units $e_{ij}$ on vectors in $\mathbb{C}^{n}$. So, from the representation theory point of view, the subspace of polynomials of first degree is a subrepresentation of the Lie algebra $\mathfrak{gl}_n$, which we identified with the standard representation in $\mathbb{C}^{n}$. Going further, it is seen that the differential operators $E_{ij} $ preserve the degree of the polynomials, and hence the polynomials of each fixed degree form a subrepresentation of the Lie algebra $\mathfrak{gl}_n$. One can see further that the space of homogeneous polynomials of degree k can be identified with the symmetric tensor power $S^k \mathbb{C}^n$ of the standard representation $\mathbb C^n$. 
One can also easily identify the highest weight structure of these representations. The monomial $x^k_1$ is a highest weight vector: indeed, $E_{ij} x^k_1=0$ for i < j. Its highest weight equals (k, 0, ... , 0): indeed, $E_{ii} x^k_1= k \delta_{i1}x^k_1$.

Such a representation is sometimes called a bosonic representation of $\mathfrak{gl}_n$. Similar formulas $E_{ij} = \psi_{i}\frac{\partial}{\partial \psi_{j}}$ define the so-called fermionic representation, where the $\psi_{i}$ are anti-commuting variables. Again, polynomials of k-th degree form an irreducible subrepresentation, which is isomorphic to $\Lambda^k \mathbb{C}^{n}$, i.e. the anti-symmetric tensor power of $\mathbb{C}^{n}$. The highest weight of this representation is (0, ..., 0, 1, 0, ..., 0). These representations for k = 1, ..., n are fundamental representations of $\mathfrak{gl}_n$.

Let us return to the Capelli identity. One can prove the following:
$$
\det(E+(n-i)\delta_{ij}) = 0, \qquad n>1.
$$

The motivation for this equality is the following: consider $E^c_{ij} = x_i p_j$ for some commuting variables $x_i, p_j$. The matrix $E^{c}$ is of rank one, and hence its determinant is equal to zero. The elements of the matrix $E$ are defined by similar formulas; however, its elements do not commute. The Capelli identity shows that the commutative identity $\det(E^{c})=0$ can be preserved at the small price of correcting the matrix $E$ by $(n-i)\delta_{ij}$.

Let us also mention that a similar identity can be given for the characteristic polynomial:
$$
\det(t+E+(n-i)\delta_{ij}) = t^{[n]}+ \mathrm{Tr}(E)t^{[n-1]},
$$

where $t^{[k]}=t(t+1) \cdots (t+k-1)$. The commutative counterpart of this is the simple fact that for rank-one matrices the characteristic polynomial contains only the first two coefficients.

Consider an example for n = 2.

\begin{align}
& \begin{vmatrix} t+ E_{11}+1 & E_{12} \\
E_{21} & t+ E_{22}
\end{vmatrix}
=\begin{vmatrix} t+ x_1 \partial_1+1 & x_1 \partial_2 \\
x_2 \partial_1 & t+ x_2 \partial_2
\end{vmatrix} \\[8pt]
& = (t+ x_1 \partial_1+1 ) ( t+ x_2 \partial_2)- x_2 \partial_1 x_1 \partial_2 \\[6pt]
& = t(t+1)+ t( x_1 \partial_1 + x_2 \partial_2)
+x_1 \partial_1 x_2 \partial_2+x_2 \partial_2
-x_2 \partial_1 x_1 \partial_2
\end{align}

Using
$$
\partial_1 x_1= x_1\partial_1+1, \qquad \partial_1 x_2= x_2\partial_1, \qquad x_1x_2=x_2x_1,
$$

we see that this is equal to:

\begin{align}
& {} \quad t(t+1)+ t( x_1 \partial_1 + x_2 \partial_2)
+x_2 x_1 \partial_1 \partial_2+x_2 \partial_2
-x_2 x_1 \partial_1 \partial_2 - x_2 \partial_2 \\[8pt]
& = t(t+1)+ t( x_1 \partial_1 + x_2 \partial_2)=t^{[2]}+ t\mathrm{Tr}(E).
\end{align}

An interesting property of the Capelli determinant is that it commutes with all operators Eij; that is, the commutator $[ E_{ij}, \det(E+(n-i)\delta_{ij})]=0$. This can be generalized:

Consider any elements Eij in any ring such that they satisfy the commutation relation $[ E_{ij}, E_{kl}] = \delta_{jk}E_{il}- \delta_{il}E_{kj}$ (so they can be the differential operators above, the matrix units eij, or any other elements), and define elements Ck as follows:
$$
\det(t+E+(n-i)\delta_{ij}) = t^{[n]}+\sum_{k=n-1,\dots,0} t^{[k]} C_k,
$$

where $t^{[k]}=t(t+1)\cdots(t+k-1)$. Then:

* the elements Ck commute with all elements Eij;

* the elements Ck can be given by formulas similar to the commutative case, namely as sums of (column-determinant) principal minors of E with Capelli-type corrections; in particular, setting t = 0 above shows that C0 is the Capelli determinant considered above.
These statements are interrelated with the Capelli identity, as will be discussed below, and similarly to it no direct short proof seems to exist, despite the simplicity of the formulation.

The universal enveloping algebra
$$
U(\mathfrak{gl}_n)
$$

can be defined as the algebra generated by the Eij subject to the relations
$$
[ E_{ij}, E_{kl}] = \delta_{jk}E_{il}- \delta_{il}E_{kj}
$$

alone. The proposition above shows that the elements Ck belong to the center of $U(\mathfrak{gl}_n)$. It can be shown that they actually are free generators of the center of $U(\mathfrak{gl}_n)$. They are sometimes called Capelli generators. The Capelli identities for them will be discussed below.

Consider an example for n = 2.

\begin{align}
{}\quad \begin{vmatrix} t+ E_{11}+1 & E_{12} \\
E_{21} & t+ E_{22}
\end{vmatrix}
& = (t+ E_{11}+1)(t+ E_{22})-E_{21}E_{12} \\
& = t(t+1)+t(E_{11}+E_{22})+E_{11}E_{22}-E_{21}E_{12}+E_{22}.
\end{align}

It is immediate to check that the element $(E_{11}+E_{22})$ commutes with $E_{ij}$. (This corresponds to the obvious fact that the identity matrix commutes with all other matrices.) It is more instructive to check the commutativity of the second element with $E_{ij}$. Let us do it for $E_{12}$:

\begin{align}
[E_{12}, E_{11}E_{22}-E_{21}E_{12}+E_{22}]
& = [E_{12}, E_{11}] E_{22} + E_{11} [E_{12}, E_{22}]
- [E_{12}, E_{21}] E_{12} - E_{21}[E_{12},E_{12}] + [E_{12},E_{22}] \\
& = -E_{12} E_{22} + E_{11} E_{12}
- (E_{11}- E_{22}) E_{12} - 0 + E_{12} \\
& = -E_{12} E_{22} + E_{22} E_{12} + E_{12} = -E_{12} + E_{12} = 0.
\end{align}

We see that the naive determinant $E_{11}E_{22}-E_{21}E_{12}$ does not commute with $E_{12}$, and Capelli's correction $+E_{22}$ is essential to ensure centrality.

Let us return to the general case:
$$
E_{ij} = \sum_{a=1}^m x_{ia}\frac{\partial}{\partial x_{ja}},
$$

for arbitrary n and m. The definition of the operators Eij can be written in matrix form: $E = X D^t$, where $E$ is the $n \times n$ matrix with elements $E_{ij}$, $X$ is the $n \times m$ matrix with elements $x_{ij}$, and $D$ is the $n \times m$ matrix with elements $\frac{\partial}{\partial x_{ij}}$.

Capelli–Cauchy–Binet identities

For general m the matrix E is given as a product of two rectangular matrices: X and the transpose of D. If all elements of these matrices commuted, then the determinant of E could be expressed by the so-called Cauchy–Binet formula via minors of X and D. An analogue of this formula also exists for the matrix E, again for the same mild price of the correction $E \rightarrow (E+(n-i)\delta_{ij})$:
$$
\det(E+(n-i)\delta_{ij}) = \sum_{I=(1\le i_1<i_2<\cdots<i_n\le m)} \det(X_{I}) \det(D^t_{I}),
$$

and, more generally, for arbitrary minors:
$$
\det\left((E+(s-i)\delta_{ij})_{KL}\right) = \sum_{I=(1\le i_1<i_2<\cdots<i_s\le m)} \det(X_{KI}) \det(D^t_{IL}).
$$

Here K = (k1 < k2 < ... < ks), L = (l1 < l2 < ... < ls) are arbitrary multi-indexes; as usual, $M_{KL}$ denotes the submatrix of M formed by the elements $M_{k_a l_b}$. Pay attention that the Capelli correction now contains s, not n as in the previous formula. Note that for s = 1, the correction (s − i) disappears and we get just the definition of E as a product of X and the transpose of D. Let us also mention that for generic K, L the corresponding minors do not commute with all elements Eij, so the Capelli identity exists not only for central elements.

As a corollary of this formula and the one for the characteristic polynomial in the previous section, let us mention the following:
$$
\det(t+E+(n-i)\delta_{ij}) = t^{[n]}+\sum_{k=n-1,\dots,0}t^{[k]} \sum_{I,J} \det(X_{IJ}) \det(D^t_{JI}),
$$

where the inner summation goes over pairs of multi-indexes $I=(1\le i_1<\cdots<i_{n-k}\le n)$ and $J=(1\le j_1<\cdots<j_{n-k}\le m)$. The only difference with the commutative case is the presence of $t^{[n]}$ instead of $t^n$ at the right hand side.
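As a concrete sanity check, the n = m = 2 case of the original Capelli identity (the "exercise" mentioned earlier) can be verified symbolically. A sympy sketch; the test polynomial is arbitrary:

import sympy as sp

x11, x12, x21, x22 = sp.symbols("x11 x12 x21 x22")
X = [[x11, x12], [x21, x22]]

def E(i, j, f):
    # E_ij = sum_a x_ia * d/dx_ja   (here n = m = 2)
    return sum(X[i][a] * sp.diff(f, X[j][a]) for a in range(2))

f = x11**3 * x22 + 2 * x12 * x21**2  # an arbitrary test polynomial

# column determinant |E_11 + 1, E_12; E_21, E_22| applied to f
lhs = E(0, 0, E(1, 1, f)) + E(1, 1, f) - E(1, 0, E(0, 1, f))
# det(X) * det(D) applied to f
rhs = (x11 * x22 - x12 * x21) * (sp.diff(f, x11, x22) - sp.diff(f, x12, x21))
assert sp.expand(lhs - rhs) == 0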
- -Relation to dual pairs - -Modern interest in these identities has been much stimulated by Roger Howe who considered them in his theory of reductive dual pairs (also known as Howe duality). To make the first contact with these ideas, let us look more precisely on operators $E_{ij} $. Such operators preserve the degree of polynomials. Let us look at the polynomials of degree 1: $E_{ij} x_{kl} = x_{il} \delta_{jk} $, we see that index l is preserved. One can see that from the representation theory point of view polynomials of the first degree can be identified with direct sum of the representations $\mathbb{C}^n \oplus \cdots \oplus \mathbb{C}^n $, here l-th subspace (l=1...m) is spanned by $ x_{il} $, i = 1, ..., n. Let us give another look on this vector space: -$$ -\mathbb{C}^n \oplus \cdots \oplus \mathbb{C}^n = \mathbb{C}^n \otimes \mathbb{C}^m . -$$ - -Such point of view gives the first hint of symmetry between m and n. To deepen this idea consider: -$$ -E_{ij}^\text{dual} = \sum_{a=1}^n x_{ai}\frac{\partial}{\partial x_{aj}}. -$$ - -These operators are given by the same formulas as $E_{ij}$ modula renumeration $i \leftrightarrow j$, hence by the same arguments we can deduce that $E_{ij}^\text{dual} $ form a representation of the Lie algebra $\mathfrak{gl}_m$ in the vector space of polynomials of xij. Before going further we can mention the following property: differential operators $E_{ij}^\text{dual} $ commute with differential operators $E_{kl} $. - -The Lie group $GL_n \times GL_m $ acts on the vector space $ \mathbb{C}^n \otimes \mathbb{C}^m $ in a natural way. One can show that the corresponding action of Lie algebra $\mathfrak{gl}_n \times \mathfrak{gl}_m$ is given by the differential operators $E_{ij}~~~~$ and $ E_{ij}^\text{dual} $ respectively. This explains the commutativity of these operators. - -The following deeper properties actually hold true: - -* The only differential operators which commute with $E_{ij}~~~~$ are polynomials in $ E_{ij}^\text{dual} $, and vice versa. - -* Decomposition of the vector space of polynomials into a direct sum of tensor products of irreducible representations of $GL_n $ and $ GL_m $ can be given as follows: -$$ -\mathbb{C} [x_{ij}] = S(\mathbb{C}^n \otimes \mathbb{C}^m) = \sum_D \rho_n^D \otimes\rho_m^{D'}. -$$ - -The summands are indexed by the Young diagrams D, and representations $\rho^D$ are mutually non-isomorphic. And diagram ${D} $ determine $ {D'} $ and vice versa. - -* In particular the representation of the big group $ GL_n \times GL_m $ is multiplicity free, that is each irreducible representation occurs only one time. - -One easily observe the strong similarity to Schur–Weyl duality. - -Much work have been done on the identity and its generalizations. Approximately two dozens of mathematicians and physicists contributed to the subject, to name a few: R. Howe, B. Kostant Fields medalist A. Okounkov A. Sokal, D. Zeilberger. - -It seems historically the first generalizations were obtained by Herbert Westren Turnbull in 1948, who found the generalization for the case of symmetric matrices (see diff --git a/wiki/wikipedia/3677.txt b/wiki/wikipedia/3677.txt deleted file mode 100644 index b70f07fb0c4d0c6cb59b9e95993255a6dda94adf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3677.txt +++ /dev/null @@ -1,107 +0,0 @@ -In computer science, graph traversal (also known as graph search) refers to the process of visiting (checking and/or updating) each vertex in a graph. 
Such traversals are classified by the order in which the vertices are visited. Tree traversal is a special case of graph traversal. - -Unlike tree traversal, graph traversal may require that some vertices be visited more than once, since it is not necessarily known before transitioning to a vertex that it has already been explored. As graphs become more dense, this redundancy becomes more prevalent, causing computation time to increase; as graphs become more sparse, the opposite holds true. - -Thus, it is usually necessary to remember which vertices have already been explored by the algorithm, so that vertices are revisited as infrequently as possible (or in the worst case, to prevent the traversal from continuing indefinitely). This may be accomplished by associating each vertex of the graph with a "color" or "visitation" state during the traversal, which is then checked and updated as the algorithm visits each vertex. If the vertex has already been visited, it is ignored and the path is pursued no further; otherwise, the algorithm checks/updates the vertex and continues down its current path. - -Several special cases of graphs imply the visitation of other vertices in their structure, and thus do not require that visitation be explicitly recorded during the traversal. An important example of this is a tree: during a traversal it may be assumed that all "ancestor" vertices of the current vertex (and others depending on the algorithm) have already been visited. Both the depth-first and breadth-first graph searches are adaptations of tree-based algorithms, distinguished primarily by the lack of a structurally determined "root" vertex and the addition of a data structure to record the traversal's visitation state. - -Note. — If each vertex in a graph is to be traversed by a tree-based algorithm (such as DFS or BFS), then the algorithm must be called at least once for each connected component of the graph. This is easily accomplished by iterating through all the vertices of the graph, performing the algorithm on each vertex that is still unvisited when examined. - -A depth-first search (DFS) is an algorithm for traversing a finite graph. DFS visits the child vertices before visiting the sibling vertices; that is, it traverses the depth of any particular path before exploring its breadth. A stack (often the program's call stack via recursion) is generally used when implementing the algorithm. - -The algorithm begins with a chosen "root" vertex; it then iteratively transitions from the current vertex to an adjacent, unvisited vertex, until it can no longer find an unexplored vertex to transition to from its current location. The algorithm then backtracks along previously visited vertices, until it finds a vertex connected to yet more uncharted territory. It will then proceed down the new path as it had before, backtracking as it encounters dead-ends, and ending only when the algorithm has backtracked past the original "root" vertex from the very first step. - -DFS is the basis for many graph-related algorithms, including topological sorts and planarity testing. - -* Input: A graph G and a vertex v of G. - -* Output: A labeling of the edges in the connected component of v as discovery edges and back edges. 
procedure DFS(G, v) is
    label v as explored
    for all edges e in G.incidentEdges(v) do
        if edge e is unexplored then
            w ← G.adjacentVertex(v, e)
            if vertex w is unexplored then
                label e as a discovery edge
                recursively call DFS(G, w)
            else
                label e as a back edge

A breadth-first search (BFS) is another technique for traversing a finite graph. BFS visits the sibling vertices before visiting the child vertices, and a queue is used in the search process. This algorithm is often used to find the shortest path from one vertex to another.

* Input: A graph G and a vertex v of G.

* Output: The closest vertex to v satisfying some conditions, or null if no such vertex exists.

procedure BFS(G, v) is
    create a queue Q
    enqueue v onto Q
    mark v
    while Q is not empty do
        w ← Q.dequeue()
        if w is what we are looking for then
            return w
        for all edges e in G.adjacentEdges(w) do
            x ← G.adjacentVertex(w, e)
            if x is not marked then
                mark x
                enqueue x onto Q
    return null

Breadth-first search can be used to solve many problems in graph theory, for example:

* finding all vertices within one connected component;

* Cheney's algorithm;

* finding the shortest path between two vertices;

* testing a graph for bipartiteness;

* Cuthill–McKee algorithm mesh numbering;

* Ford–Fulkerson algorithm for computing the maximum flow in a flow network;

* serialization/deserialization of a binary tree vs serialization in sorted order (allows the tree to be re-constructed in an efficient manner);

* maze generation algorithms;

* flood fill algorithm for marking contiguous regions of a two dimensional image or n-dimensional array;

* analysis of networks and relationships.

The problem of graph exploration can be seen as a variant of graph traversal. It is an online problem, meaning that the information about the graph is only revealed during the runtime of the algorithm. A common model is as follows: given a connected graph G = (V, E) with non-negative edge weights. The algorithm starts at some vertex, and knows all incident outgoing edges and the vertices at the end of these edges—but not more. When a new vertex is visited, then again all incident outgoing edges and the vertices at the end are known. The goal is to visit all n vertices and return to the starting vertex, but the sum of the weights of the tour should be as small as possible. The problem can also be understood as a specific version of the travelling salesman problem, where the salesman has to discover the graph on the go.

For general graphs, the best known algorithm for both undirected and directed graphs is a simple greedy algorithm:

* In the undirected case, the greedy tour is at most O(ln n)-times longer than an optimal tour. The best lower bound known for any deterministic online algorithm is 10/3.

** Unit weight undirected graphs can be explored with a competitive ratio of 2 − ε, which is already a tight bound on tadpole graphs.

* In the directed case, the greedy tour is at most (n − 1)-times longer than an optimal tour. This matches the lower bound of n − 1. An analogous competitive lower bound of Ω(n) also holds for randomized algorithms that know the coordinates of each node in a geometric embedding. If instead of visiting all nodes just a single "treasure" node has to be found, the competitive bounds are $\Theta(n^2)$ on unit weight directed graphs, for both deterministic and randomized algorithms.
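The DFS and BFS procedures above translate almost line for line into Python. A sketch for graphs stored as adjacency lists (a dict mapping each vertex to a list of neighbors; the graph is assumed undirected, so edges are represented as frozensets):

from collections import deque

def dfs(graph, v, explored=None, labels=None):
    # Label every edge of v's component as a "discovery" or a "back" edge.
    if explored is None:
        explored, labels = set(), {}
    explored.add(v)
    for w in graph[v]:
        edge = frozenset((v, w))
        if edge not in labels:        # edge is unexplored
            if w not in explored:     # vertex is unexplored
                labels[edge] = "discovery"
                dfs(graph, w, explored, labels)
            else:
                labels[edge] = "back"
    return labels

def bfs(graph, v, wanted):
    # Return the first vertex satisfying `wanted`, or None ("null").
    queue, marked = deque([v]), {v}
    while queue:
        w = queue.popleft()
        if wanted(w):
            return w
        for x in graph[w]:
            if x not in marked:
                marked.add(x)
                queue.append(x)
    return None

g = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(dfs(g, 1))                    # {1,2} and {2,3}: discovery; {1,3}: back; ...
print(bfs(g, 1, lambda u: u == 4))  # 4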
- -A universal traversal sequence is a sequence of instructions comprising a graph traversal for any regular graph with a set number of vertices and for any starting vertex. A probabilistic proof was used by Aleliunas et al. to show that there exists a universal traversal sequence with number of instructions proportional to O(n5) for any regular graph with n vertices. The steps specified in the sequence are relative to the current node, not absolute. For example, if the current node is vj, and vj has d neighbors, then the traversal sequence will specify the next node to visit, vj+1, as the ith neighbor of vj, where 1 ≤ i ≤ d. diff --git a/wiki/wikipedia/3678.txt b/wiki/wikipedia/3678.txt deleted file mode 100644 index 5b027ed1a6dd5239fb2004ec460a2b039148b368..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3678.txt +++ /dev/null @@ -1,17 +0,0 @@ -In convex analysis, Popoviciu's inequality is an inequality about convex functions. It is similar to Jensen's inequality and was found in 1965 by Tiberiu Popoviciu, a Romanian mathematician. - -Let f be a function from an interval $I \subseteq \mathbb{R}$ to $\mathbb{R}$. If f is convex, then for any three points x, y, z in I, -$$ -\frac{f(x)+f(y)+f(z)}{3} + f\left(\frac{x+y+z}{3}\right) \ge \frac{2}{3}\left[ f\left(\frac{x+y}{2}\right) + f\left(\frac{y+z}{2}\right) + f\left(\frac{z+x}{2}\right) \right]. -$$ - -If a function f is continuous, then it is convex if and only if the above inequality holds for all x, y, z from $I$. When f is strictly convex, the inequality is strict except for x = y = z. - -It can be generalized to any finite number n of points instead of 3, taken on the right-hand side k at a time instead of 2 at a time: - -
    Let f be a continuous function from an interval $I \subseteq \mathbb{R}$ to $\mathbb{R}$. Then f is convex if and only if, for any integers n and k where n ≥ 3 and $2 \leq k \leq n-1$, and any n points $x_1, \dots, x_n$ from I, -$$ -\frac{1}{k} \binom{n-2}{k-2} \left( \frac{n-k}{k-1} \sum_{i=1}^{n}f(x_i) + nf\left(\frac1n\sum_{i=1}^{n}x_i\right) \right)\ge \sum_{1 \le i_1 < \dots < i_k \le n} f\left( \frac1k \sum_{j=1}^{k} x_{i_j} \right) -$$
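As a quick numerical sanity check of the three-point inequality above, the following Python sketch (an added illustration, not part of the original article; the choice of $f(x) = e^x$ as the convex test function is ours) evaluates both sides on random triples:

```python
import math, random

def popoviciu_gap(f, x, y, z):
    """Left side minus right side of Popoviciu's inequality; >= 0 for convex f."""
    lhs = (f(x) + f(y) + f(z)) / 3 + f((x + y + z) / 3)
    rhs = (2 / 3) * (f((x + y) / 2) + f((y + z) / 2) + f((z + x) / 2))
    return lhs - rhs

random.seed(0)
f = math.exp  # a strictly convex test function
for _ in range(10000):
    x, y, z = (random.uniform(-5, 5) for _ in range(3))
    assert popoviciu_gap(f, x, y, z) >= -1e-12  # non-negative up to rounding
print("Popoviciu's inequality held on all sampled triples")
```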
- -Popoviciu's inequality can also be generalized to a weighted inequality. diff --git a/wiki/wikipedia/3679.txt b/wiki/wikipedia/3679.txt deleted file mode 100644 index b252997fa868f260030e69e5b226f460dad7e0aa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3679.txt +++ /dev/null @@ -1,151 +0,0 @@ -In mathematics, the Bott periodicity theorem describes a periodicity in the homotopy groups of classical groups, discovered by Raoul Bott, which proved to be of foundational significance for much further research, in particular in K-theory of stable complex vector bundles, as well as the stable homotopy groups of spheres. Bott periodicity can be formulated in numerous ways, with the periodicity in question always appearing as a period-2 phenomenon, with respect to dimension, for the theory associated to the unitary group. See for example topological K-theory. - -There are corresponding period-8 phenomena for the matching theories, (real) KO-theory and (quaternionic) KSp-theory, associated to the real orthogonal group and the quaternionic symplectic group, respectively. The J-homomorphism is a homomorphism from the homotopy groups of orthogonal groups to stable homotopy groups of spheres, which causes the period 8 Bott periodicity to be visible in the stable homotopy groups of spheres. - -Bott showed that if $O(\infty)$ is defined as the inductive limit of the orthogonal groups, then its homotopy groups are periodic: -$$ -\pi_{n}(O(\infty))\simeq\pi_{n+8}(O(\infty)) -$$ - -and the first 8 homotopy groups are as follows: - -\begin{align} - -\pi_{0}(O(\infty))&\simeq\Z_2 \\ - -\pi_{1}(O(\infty))&\simeq\Z_2 \\ - -\pi_{2}(O(\infty))&\simeq 0 \\ - -\pi_{3}(O(\infty))&\simeq\Z \\ - -\pi_{4}(O(\infty))&\simeq 0 \\ - -\pi_{5}(O(\infty))&\simeq 0 \\ - -\pi_{6}(O(\infty))&\simeq 0 \\ - -\pi_{7}(O(\infty))&\simeq\Z - -\end{align} - -The context of Bott periodicity is that the homotopy groups of spheres, which would be expected to play the basic part in algebraic topology by analogy with homology theory, have proved elusive (and the theory is complicated). The subject of stable homotopy theory was conceived as a simplification, by introducing the suspension (smash product with a circle) operation, and seeing what (roughly speaking) remained of homotopy theory once one was allowed to suspend both sides of an equation, as many times as one wished. The stable theory was still hard to compute with, in practice. - -What Bott periodicity offered was an insight into some highly non-trivial spaces, with central status in topology because of the connection of their cohomology with characteristic classes, for which all the (unstable) homotopy groups could be calculated. These spaces are the (infinite, or stable) unitary, orthogonal and symplectic groups U, O and Sp. In this context, stable refers to taking the union U (also known as the direct limit) of the sequence of inclusions -$$ -U(1)\subset U(2)\subset\cdots\subset U = \bigcup_{k=1}^\infty U(k) -$$ - -and similarly for O and Sp. Note that Bott's use of the word stable in the title of his seminal paper refers to these stable classical groups and not to stable homotopy groups. - -The important connection of Bott periodicity with the stable homotopy groups of spheres $\pi_n^S$ comes via the so-called stable J-homomorphism from the (unstable) homotopy groups of the (stable) classical groups to these stable homotopy groups $\pi_n^S$. Originally described by George W.
Whitehead, it became the subject of the famous Adams conjecture (1963) which was finally resolved in the affirmative by Daniel Quillen (1971). - -Bott's original results may be succinctly summarized in: - -Corollary: The (unstable) homotopy groups of the (infinite) classical groups are periodic: - -\begin{align} - -\pi_k(U) &=\pi_{k+2}(U) \\ - -\pi_k(O) &=\pi_{k+4}(\operatorname{Sp}) \\ - -\pi_k(\operatorname{Sp}) &= \pi_{k+4}(O) && k=0,1,\ldots - -\end{align} - -Note: The second and third of these isomorphisms intertwine to give the 8-fold periodicity results: - -\begin{align} - -\pi_k(O) &=\pi_{k+8}(O) \\ - -\pi_k(\operatorname{Sp}) &=\pi_{k+8}(\operatorname{Sp}), && k=0,1,\ldots - -\end{align} - -For the theory associated to the infinite unitary group, U, the space BU is the classifying space for stable complex vector bundles (a Grassmannian in infinite dimensions). One formulation of Bott periodicity describes the twofold loop space $\Omega^2BU$ of BU. Here, $\Omega$ is the loop space functor, right adjoint to suspension and left adjoint to the classifying space construction. Bott periodicity states that this double loop space is essentially BU again; more precisely, -$$ -\Omega^2BU\simeq \Z\times BU -$$ - -is essentially (that is, homotopy equivalent to) the union of a countable number of copies of BU. An equivalent formulation is -$$ -\Omega^2U\simeq U . -$$ - -Either of these has the immediate effect of showing why (complex) topological K-theory is a 2-fold periodic theory. - -In the corresponding theory for the infinite orthogonal group, O, the space BO is the classifying space for stable real vector bundles. In this case, Bott periodicity states that, for the 8-fold loop space, -$$ -\Omega^8BO\simeq \Z \times BO ; -$$ - -or equivalently, -$$ -\Omega^8O\simeq O , -$$ - -which yields the consequence that KO-theory is an 8-fold periodic theory. Also, for the infinite symplectic group, Sp, the space BSp is the classifying space for stable quaternionic vector bundles, and Bott periodicity states that -$$ -\Omega^8\operatorname{BSp}\simeq \Z \times \operatorname{BSp} ; -$$ - -or equivalently -$$ -\Omega^8 \operatorname{Sp}\simeq \operatorname{Sp}. -$$ - -Thus both topological real K-theory (also known as KO-theory) and topological quaternionic K-theory (also known as KSp-theory) are 8-fold periodic theories. - -One elegant formulation of Bott periodicity makes use of the observation that there are natural embeddings (as closed subgroups) between the classical groups. The loop spaces in Bott periodicity are then homotopy equivalent to the symmetric spaces of successive quotients, with additional discrete factors of Z. - -Over the complex numbers: -$$ - U \times U \subset U \subset U \times U. -$$ - -Over the real numbers and quaternions: -$$ -O \times O \subset O \subset U\subset \operatorname{Sp} \subset \operatorname{Sp} \times \operatorname{Sp} \subset \operatorname{Sp} \subset U \subset O \subset O \times O. -$$ - -These sequences correspond to sequences in Clifford algebras – see classification of Clifford algebras; over the complex numbers: -$$ -\Complex \oplus \Complex \subset \Complex \subset \Complex \oplus \Complex. -$$ - -Over the real numbers and quaternions: -$$ -\R \oplus \R \subset \R\subset \Complex\subset \mathbb{H} \subset \mathbb{H} \oplus \mathbb{H} \subset \mathbb{H} \subset \Complex \subset \R \subset \R \oplus \R, -$$ - -where the division algebras indicate "matrices over that algebra".
- -As they are 2-periodic/8-periodic, they can be arranged in a circle, where they are called the Bott periodicity clock and Clifford algebra clock. - -The Bott periodicity results then refine to a sequence of homotopy equivalences: - -For complex K-theory: - -\begin{align} - -\Omega U &\simeq \Z\times BU = \Z\times U/(U \times U)\\ - -\Omega(\Z\times BU) &\simeq U = (U \times U)/U - -\end{align} - -For real and quaternionic KO- and KSp-theories: - -\begin{align} - -\Omega(\Z\times BO) &\simeq O = (O \times O)/O & \Omega(\Z\times \operatorname{BSp}) &\simeq \operatorname{Sp} = (\operatorname{Sp} \times \operatorname{Sp})/\operatorname{Sp}\\ - -\Omega O &\simeq O/U & \Omega \operatorname{Sp} &\simeq \operatorname{Sp}/U\\ - -\Omega(O/U) &\simeq U/\operatorname{Sp} & \Omega(\operatorname{Sp}/U) &\simeq U/O\\ - -\Omega(U/\operatorname{Sp})&\simeq \Z\times \operatorname{BSp} = \Z\times \operatorname{Sp}/(\operatorname{Sp} \times \operatorname{Sp}) & \Omega(U/O) &\simeq \Z\times BO = \Z \times O/(O \times O) - -\end{align} - -The resulting spaces are homotopy equivalent to the classical reductive symmetric spaces, and are the successive quotients of the terms of the Bott periodicity clock. - -Bott's original proof used Morse theory, which Bott had used earlier to study the homology of Lie groups. Many different proofs have been given. diff --git a/wiki/wikipedia/368.txt b/wiki/wikipedia/368.txt deleted file mode 100644 index c28a99c19a973a8735f4c7d8fa05e36aab9f49c2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/368.txt +++ /dev/null @@ -1,19 +0,0 @@ -Windows Live Devices was an online device management service, part of Windows Live, that allowed users to centrally access and manage the synchronization of files stored on their computers and mobile devices, as well as on other peripherals such as digital photo frames. Windows Live Devices also allowed users to remotely access their computers from the internet using a web browser. - -The service integrated tightly with Windows Live Mesh to allow files and folders on two or more computers to be kept in sync with each other, as well as in sync with files and folders stored in the cloud with SkyDrive (now OneDrive). The combination of the three services (Windows Live Devices, Windows Live Mesh, and SkyDrive) is very similar to the previous Live Mesh technology preview platform from Microsoft, and is based on the same underlying technology. - -Windows Live Devices was released on June 24, 2010, as part of the Windows Live Wave 4 suite of services. - -Microsoft released its Live Mesh software-as-a-service platform on April 23, 2008; it enabled PCs and other devices to connect with each other through the internet using FeedSync technologies. Live Mesh allowed applications, files, and folders to be synchronized across multiple devices. Live Mesh was initially released as a technology preview; however, it was soon updated to a beta on October 30, 2008 and at the same time incorporated as part of the Azure Services Platform - a "cloud" platform hosted at Microsoft data centers.
Live Mesh consisted of the following four elements: - -* Mesh Operating Environment - the software component of Live Mesh that manages the synchronization relationships between devices and data - -* Live Desktop - the online cloud storage service that allows synchronized folders to be accessible via a website - -* Live Mesh Remote Desktop - software that allows users to remotely access, connect to, and manage any of the devices in a synchronization relationship - -* Live Framework - a REST-based application programming interface for accessing the Live Mesh services over HTTP - -In January 2009, the Live Mesh team was merged into the unified Windows Live team at Microsoft so that its incubation technologies would be integrated into Windows Live services. As a result, Live Framework, the developer framework for Live Mesh, was discontinued on September 8, 2009 and was incorporated into Live Services - the central developer resource for all Windows Live services. As part of the merge, the Mesh Operating Environment, or simply the Live Mesh software, was replaced by Windows Live Mesh to support PC-to-PC as well as PC-to-cloud file synchronization, and the online cloud storage service for Live Mesh - Live Desktop - was replaced by SkyDrive synchronized storage. Windows Live Devices was to serve the purposes of managing and providing access to all devices in the synchronization relationship, as well as to replace the Live Mesh Remote Desktop in providing remote access to any devices in a synchronization relationship. - -The Live Mesh technology preview platform supported the management and synchronization of data between Windows and Mac OS X computers, mobile devices, Windows Home Server, Xbox, Zune, car automation systems, as well as other computer devices and peripherals such as printers, digital cameras, and digital photo frames. These capabilities of Live Mesh were expected to be integrated into Windows Live Devices and Windows Live Mesh in future releases. diff --git a/wiki/wikipedia/3680.txt b/wiki/wikipedia/3680.txt deleted file mode 100644 index 5f60745137e92e80c18c79e1815657ef6e574250..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3680.txt +++ /dev/null @@ -1,178 +0,0 @@ -In logic and mathematics, contraposition refers to the inference from a conditional statement to its logically equivalent contrapositive, and an associated proof method known as proof by contraposition. The contrapositive of a statement has its antecedent and consequent negated and swapped. - -In formulas: the contrapositive of the conditional statement $P \rightarrow Q$ is $\neg Q \rightarrow \neg P$. - -If P, then Q. — If not Q, then not P. "If it is raining, then I wear my coat" — "If I don't wear my coat, then it isn't raining." - -The law of contraposition says that a conditional statement is true if, and only if, its contrapositive is true. - -The contrapositive ($\neg Q \rightarrow \neg P$) can be compared with three other statements: - -Inversion (the inverse), $\neg P \rightarrow \neg Q$: "If it is not raining, then I don't wear my coat." Unlike the contrapositive, the inverse's truth value is not at all dependent on whether or not the original proposition was true, as evidenced here. - -Conversion (the converse), $Q \rightarrow P$: "If I wear my coat, then it is raining."
The converse is actually the contrapositive of the inverse, and so always has the same truth value as the inverse (which as stated earlier does not always share the same truth value as that of the original proposition). - -Negation (the logical complement), $\neg (P \rightarrow Q)$: "It is not the case that if it is raining then I wear my coat.", or equivalently, "Sometimes, when it is raining, I don't wear my coat." If the negation is true, then the original proposition (and by extension the contrapositive) is false. - -Note that if $P \rightarrow Q$ is true and one is given that $Q$ is false (i.e., $\neg Q$), then it can logically be concluded that $P$ must also be false (i.e., $\neg P$). This is often called the law of contraposition, or the modus tollens rule of inference. - -In an Euler diagram in which region A lies entirely inside region B, if something is in A, it must be in B as well. So we can interpret "all of A is in B" as: -$$ -A \to B -$$ - -It is also clear that anything that is not within B cannot be within A, either. This statement, which can be expressed as: -$$ -\neg B \to \neg A -$$ - -is the contrapositive of the above statement. Therefore, one can say that -$$ -(A \to B) \leftrightarrow (\neg B \to \neg A) -$$. - -In practice, this equivalence can be used to make proving a statement easier. For example, if one wishes to prove that every girl in the United States (A) has brown hair (B), one can either try to directly prove $A \to B$ by checking that all girls in the United States do indeed have brown hair, or try to prove $\neg B \to \neg A$ by checking that all girls without brown hair are indeed outside the US. In particular, if one were to find at least one girl without brown hair within the US, then one would have disproved $\neg B \to \neg A$, and equivalently $A \to B$. - -In general, for any statement where A implies B, not B always implies not A. As a result, proving or disproving either one of these statements automatically proves or disproves the other, as they are logically equivalent to each other. - -A proposition Q is implied by a proposition P when the following relationship holds: -$$ -(P \to Q) -$$ - -This states that, "if $P$, then $Q$", or, "if Socrates is a man, then Socrates is human." In a conditional such as this, $P$ is the antecedent, and $Q$ is the consequent. One statement is the contrapositive of the other only when its antecedent is the negated consequent of the other, and vice versa. Thus a contrapositive generally takes the form of: -$$ -(\neg Q \to \neg P) -$$. - -That is, "If not-$Q$, then not-$P$", or, more clearly, "If $Q$ is not the case, then $P$ is not the case." Using our example, this is rendered as "If Socrates is not human, then Socrates is not a man." This statement is said to be contraposed to the original and is logically equivalent to it. Due to their logical equivalence, stating one effectively states the other; when one is true, the other is also true, and when one is false, the other is also false. - -Strictly speaking, a contraposition can only exist in two simple conditionals. However, a contraposition may also exist in two complex, universal conditionals, if they are similar. Thus, $\forall{x}(P{x} \to Q{x})$, or "All $P$s are $Q$s," is contraposed to $\forall{x}(\neg Q{x} \to \neg P{x})$, or "All non-$Q$s are non-$P$s."
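The equivalences and non-equivalences described above are easy to confirm mechanically. The following Python sketch (an added illustration, not from the original article) brute-forces the truth tables of a conditional, its contrapositive, its converse, and its inverse:

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

rows = list(product([False, True], repeat=2))
conditional    = [implies(p, q)         for p, q in rows]
contrapositive = [implies(not q, not p) for p, q in rows]
converse       = [implies(q, p)         for p, q in rows]
inverse        = [implies(not p, not q) for p, q in rows]

print(conditional == contrapositive)  # True: logically equivalent
print(converse == inverse)            # True: converse = contrapositive of inverse
print(conditional == converse)        # False: a conditional is not its converse
```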
- -In first-order logic, the conditional is defined as: -$$ -A \to B \leftrightarrow \neg A \lor B -$$ - -which can be made equivalent to its contrapositive, as follows: - - - -\begin{align} - -\neg A \lor B & \leftrightarrow B \lor \neg A \\ & \leftrightarrow \neg B \to \neg A - -\end{align} - - - -Let: -$$ -(A \to B)\land \neg B -$$ - -It is given that, if A is true, then B is true, and it is also given that B is not true. We can then show that A must not be true by contradiction. For if A were true, then B would have to also be true (by modus ponens). However, it is given that B is not true, so we have a contradiction. Therefore, A is not true (assuming that we are dealing with bivalent statements that are either true or false): -$$ -(A \to B) \to (\neg B \to \neg A) -$$ - -We can apply the same process the other way round, starting with the assumption that: -$$ -(\neg B \to \neg A)\land A -$$ - -Here, we also know that B is either true or not true. If B is not true, then A is also not true. However, it is given that A is true, so the assumption that B is not true leads to a contradiction, which means that it is not the case that B is not true. Therefore, B must be true: -$$ -(\neg B \to \neg A) \to (A \to B) -$$ - -Combining the two proved statements together, we obtain the sought-after logical equivalence between a conditional and its contrapositive: -$$ -(A \to B) \equiv (\neg B \to \neg A) -$$ - -Logical equivalence between two propositions means that they are true together or false together. To prove that contrapositives are logically equivalent, we need to understand when material implication is true or false. -$$ -P \to Q -$$ - -This is only false when $P$ is true and $Q$ is false. Therefore, we can reduce this proposition to the statement "False when $P$ and not-$Q$" (i.e. "True when it is not the case that $P$ and not-$Q$"): -$$ -\neg(P \land \neg Q) -$$ - -The elements of a conjunction can be reversed with no effect (by commutativity): -$$ -\neg(\neg Q \land P) -$$ - -We define $R$ as equal to "$\neg Q$", and $S$ as equal to $\neg P$ (from this, $\neg S$ is equal to $\neg\neg P$, which is equal to just $P$): -$$ -\neg(R \land \neg S) -$$ - -This reads "It is not the case that (R is true and S is false)", which is the definition of a material conditional. We can then make this substitution: -$$ -R \to S -$$ - -Substituting $\neg Q$ and $\neg P$ back in for $R$ and $S$, we then obtain the desired contrapositive: -$$ -\neg Q \to \neg P -$$ - -Take the statement "All red objects have color." This can be equivalently expressed as "If an object is red, then it has color." - -* The contrapositive is "If an object does not have color, then it is not red." This follows logically from our initial statement and, like it, it is evidently true. - -* The inverse is "If an object is not red, then it does not have color." An object which is blue is not red, and still has color. Therefore, in this case the inverse is false. - -* The converse is "If an object has color, then it is red." Objects can have other colors, so the converse of our statement is false. - -* The negation is "There exists a red object that does not have color." This statement is false because the initial statement which it negates is true. - -In other words, the contrapositive is logically equivalent to a given conditional statement, though not sufficient for a biconditional. - -Similarly, take the statement "All quadrilaterals have four sides," or equivalently expressed "If a polygon is a quadrilateral, then it has four sides."
- -* The contrapositive is "If a polygon does not have four sides, then it is not a quadrilateral." This follows logically, and as a rule, contrapositives share the truth value of their conditional. - -* The inverse is "If a polygon is not a quadrilateral, then it does not have four sides." In this case, unlike the last example, the inverse of the statement is true. - -* The converse is "If a polygon has four sides, then it is a quadrilateral." Again, in this case, unlike the last example, the converse of the statement is true. - -* The negation is "There is at least one quadrilateral that does not have four sides." This statement is clearly false. - -Since the statement and the converse are both true, it is called a biconditional, and can be expressed as "A polygon is a quadrilateral if, and only if, it has four sides." (The phrase if and only if is sometimes abbreviated as iff.) That is, having four sides is both necessary to be a quadrilateral, and alone sufficient to deem it a quadrilateral. - -* If a statement is true, then its contrapositive is true (and vice versa). - -* If a statement is false, then its contrapositive is false (and vice versa). - -* If a statement's inverse is true, then its converse is true (and vice versa). - -* If a statement's inverse is false, then its converse is false (and vice versa). - -* If a statement's negation is false, then the statement is true (and vice versa). - -* If a statement (or its contrapositive) and the inverse (or the converse) are both true or both false, then it is known as a logical biconditional. - -Because the contrapositive of a statement always has the same truth value (truth or falsity) as the statement itself, it can be a powerful tool for proving mathematical theorems (especially if the truth of the contrapositive is easier to establish than the truth of the statement itself). A proof by contraposition (contrapositive) is a direct proof of the contrapositive of a statement. However, indirect methods such as proof by contradiction can also be used with contraposition, as, for example, in the proof of the irrationality of the square root of 2. By the definition of a rational number, the statement can be made that "If $\sqrt{2}$ is rational, then it can be expressed as an irreducible fraction". This statement is true because it is a restatement of a definition. The contrapositive of this statement is "If $\sqrt{2}$ cannot be expressed as an irreducible fraction, then it is not rational". This contrapositive, like the original statement, is also true. Therefore, if it can be proven that $\sqrt{2}$ cannot be expressed as an irreducible fraction, then it must be the case that $\sqrt{2}$ is not a rational number. The latter can be proved by contradiction. - -The previous example employed the contrapositive of a definition to prove a theorem. One can also prove a theorem by proving the contrapositive of the theorem's statement. To prove that if a positive integer N is a non-square number, its square root is irrational, we can equivalently prove its contrapositive, that if a positive integer N has a square root that is rational, then N is a square number. This can be shown by setting $\sqrt{N}$ equal to the rational expression a/b, with a and b being positive integers with no common prime factor, and squaring to obtain $N = a^2/b^2$, and noting that since N is a positive integer, $b = 1$, so that $N = a^2$, a square number. - -In intuitionistic logic, the statement $P \to Q$ cannot be proven to be equivalent to $\lnot Q \to \lnot P$.
We can prove that $P \to Q$ implies $\lnot Q \to \lnot P$, but the reverse implication, from $\lnot Q \to \lnot P$ to $P \to Q$, requires the law of the excluded middle or an equivalent axiom. - -Contraposition represents an instance of Bayes' theorem, which in a specific form can be expressed as: -$$ -\Pr(\lnot P\mid \lnot Q) = \frac{\Pr(\lnot Q \mid \lnot P)a(\lnot P)}{\Pr(\lnot Q\mid \lnot P)a(\lnot P)+\Pr(\lnot Q\mid P)a(P)} -$$. - -In the equation above, the conditional probability $\Pr(\lnot Q\mid P)$ generalizes the logical statement $P \to \lnot Q$, i.e., in addition to assigning TRUE or FALSE we can also assign any probability to the statement. The term $a(P)$ denotes the base rate (also known as the prior probability) of $P$. Assume that $\Pr(\lnot Q \mid P) = 1$ is equivalent to $P\to \lnot Q$ being TRUE, and that $\Pr(\lnot Q \mid P) = 0$ is equivalent to $P \to \lnot Q$ being FALSE. It is then easy to see that $\Pr(\lnot P \mid \lnot Q) = 1$ when $\Pr(Q\mid P) = 1$, i.e., when $P \to Q$ is TRUE. This is because $\Pr(\lnot Q\mid P) = 1 - \Pr(Q\mid P) = 0$, so that the fraction on the right-hand side of the equation above is equal to 1, and hence $\Pr(\lnot P\mid \lnot Q) = 1$, which is equivalent to $\lnot Q \to \lnot P$ being TRUE. Hence, Bayes' theorem represents a generalization of contraposition. - -Contraposition represents an instance of the subjective Bayes' theorem in subjective logic expressed as: -$$ -(\omega^{A}_{P\tilde\lnot Q}) = (\omega^{A}_{Q|P},\omega^{A}_{Q|\lnot P})\widetilde{\phi} a_{P} -$$, - -where $(\omega^{A}_{Q|P},\omega^{A}_{Q|\lnot P})$ denotes a pair of binomial conditional opinions given by source $A$. The parameter $a_{P}$ denotes the base rate (also known as the prior probability) of $P$. The pair of inverted conditional opinions is denoted $(\omega^{A}_{P\tilde\lnot Q})$. The conditional opinion $\omega^{A}_{Q|P}$ generalizes the logical statement $P \to Q$, i.e., in addition to assigning TRUE or FALSE the source $A$ can assign any subjective opinion to the statement. The case where $\omega^{A}_{Q\mid P}$ is an absolute TRUE opinion is equivalent to source $A$ saying that $P\to Q$ is TRUE, and the case where $\omega^{A}_{Q\mid P}$ is an absolute FALSE opinion is equivalent to source $A$ saying that $P\to Q$ is FALSE. In the case when the conditional opinion $\omega^{A}_{Q|P}$ is absolute TRUE, the subjective Bayes' theorem operator $\widetilde{\phi}$ of subjective logic produces an absolute FALSE conditional opinion $\omega^{A}_{P\widetilde\lnot Q}$, which is equivalent to $\lnot Q \to \lnot P$ being TRUE. Hence, the subjective Bayes' theorem represents a generalization of both contraposition and Bayes' theorem. diff --git a/wiki/wikipedia/3681.txt b/wiki/wikipedia/3681.txt deleted file mode 100644 index 1915dfde7e2742b0993a425264f4ff320c644623..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3681.txt +++ /dev/null @@ -1,49 +0,0 @@ -In computer science, the clique problem is the computational problem of finding cliques (subsets of vertices, all adjacent to each other, also called complete subgraphs) in a graph. It has several different formulations depending on which cliques, and what information about the cliques, should be found.
Common formulations of the clique problem include finding a maximum clique (a clique with the largest possible number of vertices), finding a maximum weight clique in a weighted graph, listing all maximal cliques (cliques that cannot be enlarged), and solving the decision problem of testing whether a graph contains a clique larger than a given size. - -The clique problem arises in the following real-world setting. Consider a social network, where the graph's vertices represent people, and the graph's edges represent mutual acquaintance. Then a clique represents a subset of people who all know each other, and algorithms for finding cliques can be used to discover these groups of mutual friends. Along with its applications in social networks, the clique problem also has many applications in bioinformatics and computational chemistry. - -Most versions of the clique problem are hard. The clique decision problem is NP-complete (one of Karp's 21 NP-complete problems). The problem of finding the maximum clique is both fixed-parameter intractable and hard to approximate. And listing all maximal cliques may require exponential time, as there exist graphs with exponentially many maximal cliques. Therefore, much of the theory about the clique problem is devoted to identifying special types of graph that admit more efficient algorithms, or to establishing the computational difficulty of the general problem in various models of computation. - -To find a maximum clique, one can systematically inspect all subsets, but this sort of brute-force search is too time-consuming to be practical for networks comprising more than a few dozen vertices. - -Although no polynomial time algorithm is known for this problem, more efficient algorithms than the brute-force search are known. For instance, the Bron–Kerbosch algorithm can be used to list all maximal cliques in worst-case optimal time, and it is also possible to list them in polynomial time per clique. - -The study of complete subgraphs in mathematics predates the "clique" terminology. For instance, complete subgraphs make an early appearance in the mathematical literature in the graph-theoretic reformulation of Ramsey theory by Erdős and Szekeres. But the term "clique" and the problem of algorithmically listing cliques both come from the social sciences, where complete subgraphs are used to model social cliques, groups of people who all know each other. Luce and Perry used graphs to model social networks, and adapted the social science terminology to graph theory. They were the first to call complete subgraphs "cliques". The first algorithm for solving the clique problem is that of Harary and Ross, who were motivated by the sociological application. Social science researchers have also defined various other types of cliques and maximal cliques in social networks, "cohesive subgroups" of people or actors in the network all of whom share one of several different kinds of connectivity relation. Many of these generalized notions of cliques can also be found by constructing an undirected graph whose edges represent related pairs of actors from the social network, and then applying an algorithm for the clique problem to this graph. - -Since the work of Harary and Ross, many others have devised algorithms for various versions of the clique problem. - -*In the maximum clique problem, the input is an undirected graph, and the output is a maximum clique in the graph. If there are multiple maximum cliques, one of them may be chosen arbitrarily.
while the independent set problem remains NP-hard on planar graphs. - -A maximal clique, sometimes called inclusion-maximal, is a clique that is not included in a larger clique. Therefore, every clique is contained in a maximal clique. Maximal cliques can be very small. A graph may contain a non-maximal clique with many vertices and a separate clique of size 2 which is maximal. While a maximum (i.e., largest) clique is necessarily maximal, the converse does not hold. There are some types of graphs in which every maximal clique is maximum; these are the complements of the well-covered graphs, in which every maximal independent set is maximum. However, other graphs have maximal cliques that are not maximum. - -A single maximal clique can be found by a straightforward greedy algorithm. Starting with an arbitrary clique (for instance, any single vertex or even the empty set), grow the current clique one vertex at a time by looping through the graph's remaining vertices. For each vertex v that this loop examines, add v to the clique if it is adjacent to every vertex that is already in the clique, and discard v otherwise. This algorithm runs in linear time. Because of the ease of finding maximal cliques, and their potentially small size, more attention has been given to the much harder algorithmic problem of finding a maximum or otherwise large clique. However, some research in parallel algorithms has studied the problem of finding a maximal clique. In particular, the problem of finding the lexicographically first maximal clique (the one found by the algorithm above) has been shown to be complete for the class of polynomial-time functions. This result implies that the problem is unlikely to be solvable within the parallel complexity class NC. - -One can test whether a graph G contains a k-vertex clique, and find any such clique that it contains, using a brute force algorithm. This algorithm examines each subgraph with k vertices and checks to see whether it forms a clique. It takes time $O(n^k k^2)$, as expressed using big O notation. - -This is because there are $O(n^k)$ subgraphs to check, each of which has $O(k^2)$ edges whose presence in G needs to be checked. Thus, the problem may be solved in polynomial time whenever k is a fixed constant. However, when k does not have a fixed value, but instead may vary as part of the input to the problem, the time is exponential. - -The simplest nontrivial case of the clique-finding problem is finding a triangle in a graph, or equivalently determining whether the graph is triangle-free. - -In a graph G with m edges, there may be at most $\Theta(m^{3/2})$ triangles (using big theta notation to indicate that this bound is tight). The worst case for this formula occurs when G is itself a clique. Therefore, algorithms for listing all triangles must take at least $\Omega(m^{3/2})$ time in the worst case (using big omega notation), and algorithms are known that match this time bound. For instance, Chiba and Nishizeki describe an algorithm that sorts the vertices in order from highest degree to lowest and then iterates through each vertex v in the sorted list, looking for triangles that include v and do not include any previous vertex in the list. To do so, the algorithm marks all neighbors of v, searches through all edges incident to a neighbor of v outputting a triangle for every edge that has two marked endpoints, and then removes the marks and deletes v from the graph. As the authors show, the time for this algorithm is proportional to the arboricity of the graph (denoted a(G)) multiplied by the number of edges, which is $O(a(G)\,m)$.
Since the arboricity is at most $O(m^{1/2})$, this algorithm runs in time $O(m^{3/2})$. More generally, all k-vertex cliques can be listed by a similar algorithm that takes time proportional to the number of edges multiplied by the arboricity to the power (k - 2). For graphs of constant arboricity, such as planar graphs (or in general graphs from any non-trivial minor-closed graph family), this algorithm takes $O(m)$ time, which is optimal since it is linear in the size of the input. - -If one desires only a single triangle, or an assurance that the graph is triangle-free, faster algorithms are possible. As Itai and Rodeh observe, the graph contains a triangle if and only if its adjacency matrix and the square of the adjacency matrix contain nonzero entries in the same cell. Therefore, fast matrix multiplication techniques can be applied to find triangles in time $O(n^{\omega})$, where $\omega$ is the exponent of matrix multiplication. Alon, Yuster, and Zwick used fast matrix multiplication to improve the algorithm for finding triangles to $O(m^{2\omega/(\omega+1)})$. These algorithms based on fast matrix multiplication have also been extended to problems of finding k-cliques for larger values of k. - -By a result of Moon and Moser, every n-vertex graph has at most $3^{n/3}$ maximal cliques. They can be listed by the Bron–Kerbosch algorithm, a recursive backtracking procedure of Bron and Kerbosch. The main recursive subroutine of this procedure has three arguments: a partially constructed (non-maximal) clique, a set of candidate vertices that could be added to the clique, and another set of vertices that should not be added (because doing so would lead to a clique that has already been found). The algorithm tries adding the candidate vertices one by one to the partial clique, making a recursive call for each one. After trying each of these vertices, it moves it to the set of vertices that should not be added again. Variants of this algorithm can be shown to have worst-case running time $O(3^{n/3})$, matching the number of cliques that might need to be listed. Therefore, this provides a worst-case-optimal solution to the problem of listing all maximal cliques. Further, the Bron–Kerbosch algorithm has been widely reported as being faster in practice than its alternatives. - -However, when the number of cliques is significantly smaller than its worst case, other algorithms might be preferable. As Tsukiyama et al. showed, it is also possible to list all maximal cliques in a graph in an amount of time that is polynomial per generated clique. An algorithm such as theirs in which the running time depends on the output size is known as an output-sensitive algorithm. Their algorithm is based on the following two observations, relating the maximal cliques of the given graph G to the maximal cliques of a graph G \ v formed by removing an arbitrary vertex v from G: - -*For every maximal clique K of G \ v, either K continues to form a maximal clique in G, or K ⋃ {v} forms a maximal clique in G. Therefore, G has at least as many maximal cliques as G \ v does. - -*Each maximal clique in G that does not contain v is a maximal clique in G \ v, and each maximal clique in G that does contain v can be formed from a maximal clique K in G \ v by adding v and removing the non-neighbors of v from K. - -Using these observations they can generate all maximal cliques in G by a recursive algorithm that chooses a vertex v arbitrarily and then, for each maximal clique K in G \ v, outputs both K and the clique formed by adding v to K and removing the non-neighbors of v.
However, some cliques of G may be generated in this way from more than one parent clique of G \ v, so they eliminate duplicates by outputting a clique in G only when its parent in G \ v is lexicographically maximum among all possible parent cliques. On the basis of this principle, they show that all maximal cliques in G may be generated in time $O(mn)$ per clique, where m is the number of edges in G and n is the number of vertices. Chiba and Nishizeki improve this to $O(ma)$ per clique, where a is the arboricity of the given graph. Makino and Uno provide an alternative output-sensitive algorithm based on fast matrix multiplication. Johnson, Yannakakis, and Papadimitriou show that it is even possible to list all maximal cliques in lexicographic order with polynomial delay per clique. However, the choice of ordering is important for the efficiency of this algorithm: for the reverse of this order, - -there is no polynomial-delay algorithm unless P = NP. - -On the basis of this result, it is possible to list all maximal cliques in polynomial time, for families of graphs in which the number of cliques is polynomially bounded. These families include chordal graphs, complete graphs, triangle-free graphs, interval graphs, graphs of bounded boxicity, and planar graphs. In particular, the planar graphs have cliques, of at most constant size, that can be listed in linear time. The same is true for any family of graphs that is both sparse (having a number of edges at most a constant times the number of vertices) and closed under the operation of taking subgraphs. - -Weak results hinting that the clique problem might be hard to approximate have been known for a long time. Garey and Johnson observed that, because the clique number takes on small integer values and is NP-hard to compute, it cannot have a fully polynomial-time approximation scheme. If too accurate an approximation were available, rounding its value to an integer would give the exact clique number. However, little more was known until the early 1990s, when several authors began to make connections between the approximation of maximum cliques and probabilistically checkable proofs. They used these connections to prove hardness of approximation results for the maximum clique problem. - -After many improvements to these results it is now known that, for every real number ε > 0, there can be no polynomial time algorithm that approximates the maximum clique to within a factor better than $n^{1-\varepsilon}$, unless P = NP. - -The rough idea of these inapproximability results is to form a graph that represents a probabilistically checkable proof system for an NP-complete problem such as the Boolean satisfiability problem. In a probabilistically checkable proof system, a proof is represented as a sequence of bits. An instance of the satisfiability problem should have a valid proof if and only if it is satisfiable. The proof is checked by an algorithm that, after a polynomial-time computation on the input to the satisfiability problem, chooses to examine a small number of randomly chosen positions of the proof string. Depending on what values are found at that sample of bits, the checker will either accept or reject the proof, without looking at the rest of the bits. False negatives are not allowed: a valid proof must always be accepted. However, an invalid proof may sometimes mistakenly be accepted. For every invalid proof, the probability that the checker will accept it must be low.
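Returning to the enumeration algorithms discussed above, the basic Bron–Kerbosch procedure is short enough to state in full. Below is a minimal Python sketch (an added illustration, not the authors' original formulation) of the variant without pivoting, maintaining the three sets described earlier: the growing clique R, the candidate vertices P, and the excluded vertices X.

```python
def bron_kerbosch(graph, r=None, p=None, x=None):
    """Yield every maximal clique of an undirected graph.

    graph: dict mapping each vertex to a set of its neighbors.
    r: current partial clique; p: candidate vertices; x: excluded vertices.
    """
    if r is None:
        r, p, x = set(), set(graph), set()
    if not p and not x:
        yield set(r)  # no way to extend r, and nothing excluded: r is maximal
        return
    for v in list(p):
        yield from bron_kerbosch(graph, r | {v},
                                 p & graph[v], x & graph[v])
        p.remove(v)   # all cliques through v are enumerated; exclude it now
        x.add(v)

# Example: a triangle {1,2,3} plus a pendant edge 3-4.
g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(sorted(sorted(c) for c in bron_kerbosch(g)))  # [[1, 2, 3], [3, 4]]
```

Keeping the excluded set X is what prevents the same maximal clique from being reported once per vertex it contains; a branch that ends with empty P but nonempty X is abandoned because its clique was already found through an excluded vertex.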
diff --git a/wiki/wikipedia/3682.txt b/wiki/wikipedia/3682.txt deleted file mode 100644 index 2a6c4e44c18a4fd2a09c4cab46c047f7850460ef..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3682.txt +++ /dev/null @@ -1,11 +0,0 @@ -In number theory, Lemoine's conjecture, named after Émile Lemoine, also known as Levy's conjecture, after Hyman Levy, states that all odd integers greater than 5 can be represented as the sum of an odd prime number and an even semiprime. - -The conjecture was posed by Émile Lemoine in 1895, but was erroneously attributed by MathWorld to Hyman Levy, who pondered it in the 1960s. - -A similar conjecture by Sun in 2008 states that all odd integers greater than 3 can be represented as the sum of a prime number and the product of two consecutive positive integers (p + x(x+1)). - -To put it algebraically, 2n + 1 = p + 2q always has a solution in primes p and q (not necessarily distinct) for n > 2. The Lemoine conjecture is similar to but stronger than Goldbach's weak conjecture. - -For example, 47 = 13 + 2 × 17 = 37 + 2 × 5 = 41 + 2 × 3 = 43 + 2 × 2. The number of different ways in which 2n + 1 can be represented as p + 2q defines an integer sequence. - -According to MathWorld, the conjecture has been verified by Corbitt up to 10^9. A blog post in June 2019 additionally claimed to have verified the conjecture up to 10^10. diff --git a/wiki/wikipedia/3683.txt b/wiki/wikipedia/3683.txt deleted file mode 100644 index b584a7f3b999dfb0ac43690491806fea62e0b82d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3683.txt +++ /dev/null @@ -1,17 +0,0 @@ -The Erdős–Anning theorem states that an infinite number of points in the plane can have mutual integer distances only if all the points lie on a straight line. It is named after Paul Erdős and Norman H. Anning, who published a proof of it in 1945. - -Although there can be no infinite non-collinear set of points with integer distances, there are infinite non-collinear sets of points whose distances are rational numbers. - -The (still unsolved) Erdős–Ulam problem asks whether there can exist a dense set of points in the plane at rational distances from each other. - -For any finite set S of points at rational distances from each other, it is possible to find a similar set of points at integer distances from each other, by expanding S by a factor of the least common denominator of the distances in S. Therefore, there exist arbitrarily large finite sets of non-collinear points with integer distances from each other. However, including more points into S may cause the expansion factor to increase, so this construction does not allow infinite sets of points at rational distances to be transformed into infinite sets of points at integer distances. - -To prove the Erdős–Anning theorem, it is helpful to state it more strongly, by providing a concrete bound on the number of points in a set with integer distances as a function of the maximum distance between the points. More specifically, if a set of three or more non-collinear points has integer distances, all at most some number $\delta$, then at most $4(\delta+1)^2$ points at integer distances can be added to the set. - -To see this, let A, B and C be three non-collinear members of a set S of points with integer distances, all at most $\delta$, and let $d(A,B)$, $d(A,C)$, and $d(B,C)$ be the three distances between these three points. Let X be any other member of S. From the triangle inequality it follows that $|d(A,X)-d(B,X)|$ is a non-negative integer and is at most $\delta$.
For each of the $\delta+1$ integer values i in this range, the locus of points satisfying the equation $|d(A,X)-d(B,X)|=i$ forms a hyperbola (degenerating to a line when $i=0$) with A and B as its foci, and X must lie on one of these $\delta+1$ hyperbolae. By a symmetric argument, X must also lie on one of a family of $\delta+1$ hyperbolae having B and C as foci. Each pair of distinct hyperbolae, one defined by A and B and the second defined by B and C, can intersect in at most four points, - -and every point of S (including A, B, and C) lies at one of these intersection points. There are at most $4(\delta+1)^2$ intersection points of pairs of hyperbolae, and therefore at most $4(\delta+1)^2$ points in S. - -An alternative way of stating the theorem is that a non-collinear set of points in the plane with integer distances can only be extended by adding finitely many additional points, before no more points can be added. - -A set of points with both integer coordinates and integer distances, to which no more can be added while preserving both properties, forms an Erdős–Diophantine graph. diff --git a/wiki/wikipedia/3684.txt b/wiki/wikipedia/3684.txt deleted file mode 100644 index 7f76c67124f52b8dabbffc1517df50e257b6c47d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3684.txt +++ /dev/null @@ -1,26 +0,0 @@ -In mathematics, Legendre's three-square theorem states that a natural number can be represented as the sum of three squares of integers -$$ -n = x^2 + y^2 + z^2 -$$ - -if and only if n is not of the form $n = 4^a(8b + 7)$ for nonnegative integers a and b. - -The first numbers that cannot be expressed as the sum of three squares (i.e. numbers that can be expressed as $n = 4^a(8b + 7)$) are - -7, 15, 23, 28, 31, 39, 47, 55, 60, 63, 71 ... . - -Pierre de Fermat gave a criterion for numbers of the form 8a + 1 and 8a + 3 to be sums of a square plus twice another square, but did not provide a proof. N. Beguelin noticed in 1774 that every positive integer which is neither of the form 8n + 7, nor of the form 4n, is the sum of three squares, but did not provide a satisfactory proof. In 1796 Gauss proved his Eureka theorem that every positive integer n is the sum of 3 triangular numbers; this is equivalent to the fact that 8n + 3 is a sum of three squares. In 1797 or 1798 A.-M. Legendre obtained the first proof of his three-square theorem. In 1813, A. L. Cauchy noted that Legendre's theorem is equivalent to the statement in the introduction above. Previously, in 1801, C. F. Gauss had obtained a more general result, containing Legendre's theorem of 1797–8 as a corollary. In particular, Gauss counted the number of solutions of the expression of an integer as a sum of three squares, and this is a generalisation of yet another result of Legendre, whose proof is incomplete. This last fact appears to be the reason for later incorrect claims according to which Legendre's proof of the three-square theorem was defective and had to be completed by Gauss. - -With Lagrange's four-square theorem and the two-square theorem of Girard, Fermat and Euler, Waring's problem for k = 2 is entirely solved. - -The "only if" of the theorem is simply because modulo 8, every square is congruent to 0, 1 or 4. There are several proofs of the converse (besides Legendre's proof). One of them is due to J. P. G. L. Dirichlet in 1850, and has become classical. It requires three main lemmas: - -*the quadratic reciprocity law, - -*Dirichlet's theorem on arithmetic progressions, and - -*the equivalence class of the trivial ternary quadratic form.
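The characterization stated at the beginning of this article is easy to test by brute force. The following Python sketch (an added illustration; the bound of 2000 is an arbitrary choice) checks that n is a sum of three squares exactly when it is not of the form $4^a(8b+7)$:

```python
from math import isqrt

def sum_of_three_squares(n):
    """True if n = x^2 + y^2 + z^2 for some nonnegative integers x, y, z."""
    for x in range(isqrt(n) + 1):
        for y in range(x, isqrt(n - x * x) + 1):
            z2 = n - x * x - y * y
            if isqrt(z2) ** 2 == z2:  # is the remainder a perfect square?
                return True
    return False

def excluded_form(n):
    """True if n = 4^a (8b + 7) for nonnegative integers a, b."""
    while n % 4 == 0:
        n //= 4
    return n % 8 == 7

for n in range(1, 2000):
    assert sum_of_three_squares(n) != excluded_form(n)
print("Legendre's criterion verified for all n < 2000")
```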
- -This theorem can be used to prove Lagrange's four-square theorem, which states that all natural numbers can be written as a sum of four squares. Gauss pointed out that the four-square theorem follows easily from the fact that any positive integer that is 1 or 2 mod 4 is a sum of three squares, because any positive integer not divisible by 4 can be reduced to this form by subtracting 0 or 1 from it. - -However, proving the three-square theorem is considerably more difficult than a direct proof of the four-square theorem that does not use the three-square theorem. Indeed, the four-square theorem was proved earlier, in 1770. diff --git a/wiki/wikipedia/3685.txt b/wiki/wikipedia/3685.txt deleted file mode 100644 index 4f7e6fdbcc62d63a150f5fe8d3a9635f32e07910..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3685.txt +++ /dev/null @@ -1,15 +0,0 @@ -In mathematics, the Denjoy–Young–Saks theorem gives some possibilities for the Dini derivatives of a function that hold almost everywhere. - -Denjoy proved the theorem for continuous functions, Young extended it to measurable functions, and Saks extended it to arbitrary functions. - -Saks and Bruckner give historical accounts of the theorem. - -If f is a real-valued function defined on an interval, then with the possible exception of a set of measure 0 on the interval, the Dini derivatives of f satisfy one of the following four conditions at each point: - -*f has a finite derivative - -*$D^+f = D_-f$ is finite, $D^-f = \infty$, $D_+f = -\infty$. - -*$D^-f = D_+f$ is finite, $D^+f = \infty$, $D_-f = -\infty$. - -*$D^-f = D^+f = \infty$, $D_-f = D_+f = -\infty$. diff --git a/wiki/wikipedia/3686.txt b/wiki/wikipedia/3686.txt deleted file mode 100644 index 829aaa9aa9c5ebdcddcd8fc49925f39a33a35331..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3686.txt +++ /dev/null @@ -1,7 +0,0 @@ -In mathematics and theoretical physics, Noether's second theorem relates symmetries of an action functional with a system of differential equations. The action S of a physical system is an integral of a so-called Lagrangian function L, from which the system's behavior can be determined by the principle of least action. - -Specifically, the theorem says that if the action has an infinite-dimensional Lie algebra of infinitesimal symmetries parameterized linearly by k arbitrary functions and their derivatives up to order m, then the functional derivatives of L satisfy a system of k differential equations. - -Noether's second theorem is sometimes used in gauge theory. Gauge theories are the basic elements of all modern field theories of physics, such as the prevailing Standard Model. - -The theorem is named after Emmy Noether. diff --git a/wiki/wikipedia/3687.txt b/wiki/wikipedia/3687.txt deleted file mode 100644 index ee8ca460686fa238dacec3fa8cf48e1298ec0c29..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3687.txt +++ /dev/null @@ -1,140 +0,0 @@ -In statistics, the Fisher–Tippett–Gnedenko theorem (also the Fisher–Tippett theorem or the extreme value theorem) is a general result in extreme value theory regarding the asymptotic distribution of extreme order statistics. The maximum of a sample of iid random variables after proper renormalization can only converge in distribution to one of three possible distributions, the Gumbel distribution, the Fréchet distribution, or the Weibull distribution.
Credit for the extreme value theorem and its convergence details is given to Fréchet (1927), Ronald Fisher and Leonard Henry Caleb Tippett (1928), von Mises (1936), and Gnedenko (1943). - -The role of the extremal types theorem for maxima is similar to that of the central limit theorem for averages, except that the central limit theorem applies to the average of a sample from any distribution with finite variance, while the Fisher–Tippett–Gnedenko theorem only states that if the distribution of a normalized maximum converges, then the limit has to be one of a particular class of distributions. It does not state that the distribution of the normalized maximum does converge. - -Let $X_1,X_2,\ldots, X_n$ be a sequence of independent and identically-distributed random variables with cumulative distribution function $F$. Suppose that there exist two sequences of real numbers $a_n > 0$ and $b_n \in \mathbb{R}$ such that the following limit converges to a non-degenerate distribution function: -$$ - \lim_{n \to \infty}P\left(\frac{\max\{X_1, \dots, X_n\}-b_n}{a_n}\leq x\right) = G(x) -$$, - -or equivalently: -$$ - \lim_{n \to \infty}F^n\left(a_n x + b_n \right) = G(x) -$$. - -In such circumstances, the limit distribution $G$ belongs to either the Gumbel, the Fréchet or the Weibull family. - -In other words, if the limit above converges, we will have $G(x)$ assume the form: -$$ -G_\gamma(x) = \exp\left(-(1 + \gamma x)^{-1/\gamma} \right), \quad 1 + \gamma x > 0 -$$ - -or else -$$ -G_0(x) = \exp\left(-\exp(-x)\right) -$$ - -for some parameter $\gamma.$ This is the cumulative distribution function of the generalized extreme value distribution (GEV) with extreme value index $\gamma$. The GEV distribution groups the Gumbel, Fréchet and Weibull distributions into a single one. Note that the second formula (the Gumbel distribution) is the limit of the first as $\gamma$ goes to zero. - -The Fisher–Tippett–Gnedenko theorem is a statement about the convergence of the limiting distribution $G(x)$ above. The study of conditions for convergence of $G$ to particular cases of the generalized extreme value distribution began with von Mises (1936) and was further developed by Gnedenko (1943). - -Let $F$ be the distribution function of $X$, and $X_1, \dots, X_n$ an i.i.d. sample thereof. Also let $x^*$ be the population maximum, i.e., $x^* = \sup\{ x \mid F(x) < 1 \}$. The limiting distribution of the normalized sample maximum, given by $G$ above, will then be: - -*A Fréchet distribution ($\gamma > 0$) if and only if $x^* = \infty$ and $\lim_{t \rightarrow \infty} \frac{1-F(ut)}{1-F(t)} = u^{-1 /|\gamma|}$ for all $u> 0$. - -This corresponds to what we may call a heavy tail. In this case, possible sequences that will satisfy the theorem conditions are $b_n = 0$ and $a_n=F^{-1}\left(1-\frac{1}{n}\right)$. - -*A Gumbel distribution ($\gamma = 0$), with $x^*$ finite or infinite, if and only if $\lim_{t \rightarrow x^*} \frac{1 - F(t + uf(t))}{1 - F(t)} = e^{-u}$ for all $u>0$ with $f(t) := \frac{\int_t^{x^*} (1 - F(s))\, ds}{1 - F(t)}$. - -Possible sequences here are $b_n = F^{-1}\left(1-\frac{1}{n}\right)$ and $a_n = f\left(F^{-1}\left(1-\frac{1}{n}\right)\right)$. - -*A Weibull distribution ($\gamma < 0$) if and only if $x^*$ is finite and $\lim_{t \rightarrow 0^+} \frac{1-F(x^* - ut)}{1 - F(x^* - t)} = u^{1/|\gamma|}$ for all $u>0$. - -Possible sequences here are $b_n = x^*$ and $a_n=x^* - F^{-1}\left(1-\frac{1}{n}\right)$.
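These domain-of-attraction conditions can be illustrated numerically. The following Python sketch (an added example; the choice of Exp(1) variables, the sample sizes, and the normalizing sequences $a_n = 1$, $b_n = \ln n$ are ours for illustration) compares the empirical distribution of normalized sample maxima with the Gumbel limit $G_0$:

```python
import math, random

random.seed(1)
n, trials = 1000, 2000
b_n = math.log(n)  # for Exp(1) one may take a_n = 1, b_n = ln n

# Normalized sample maxima: max(X_1, ..., X_n) - b_n for Exp(1) draws.
maxima = [max(random.expovariate(1.0) for _ in range(n)) - b_n
          for _ in range(trials)]

gumbel = lambda u: math.exp(-math.exp(-u))  # the Gumbel limit G_0
for u in (-1.0, 0.0, 1.0, 2.0):
    empirical = sum(m <= u for m in maxima) / trials
    print(f"u={u:+.1f}  empirical={empirical:.3f}  G_0(u)={gumbel(u):.3f}")
```

For Exp(1), $P(\max - \ln n \le u) = (1 - e^{-u}/n)^n \to \exp(-e^{-u})$, so the empirical frequencies should closely track the printed Gumbel values.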
- -If we take the Cauchy distribution with density -$$ -f(x)=(\pi^2+x^2)^{-1} -$$ - -the cumulative distribution function is: -$$ -F(x)=1/2+\frac 1\pi\arctan(x/\pi) -$$ - -$1-F(x)$ is asymptotic to $1/x$, or -$$ -\ln F(x)\sim-1/x -$$ - -and we have -$$ -\ln F(x)^n=n\ln F(x)\sim-n/x. -$$ - -Thus we have -$$ -F(x)^n\approx\exp(-n/x) -$$ - -and letting $u=x/n-1$ (and skipping some explanation) -$$ -\lim_{n \to \infty}F(nu+n)^n =\exp(-(1+u)^{-1})= G_1(u) -$$ - -for any $u.$ The expected maximum value therefore goes up linearly with n. - -Let us take the normal distribution with cumulative distribution function -$$ -F(x)=\frac 12\text{erfc}(-x/\sqrt 2). -$$ - -We have -$$ -\ln F(x)\sim-\frac{\exp(-x^2/2)}{\sqrt{2\pi}x} -$$ - -and -$$ -\ln F(x)^n=n\ln F(x)\sim-\frac{n\exp(-x^2/2)}{\sqrt{2\pi}x}. -$$ - -Thus we have -$$ -F(x)^n\approx\exp\left(-\frac{n\exp(-x^2/2)}{\sqrt{2\pi}x}\right). -$$ - -If we define $b_n$ as the value that satisfies -$$ -\frac{n\exp(-b_n^2/2)}{\sqrt{2\pi}b_n}=1 -$$ - -then around $x=b_n$ -$$ -\frac{n\exp(-x^2/2)}{\sqrt{2\pi}x}\approx\exp(b_n(b_n-x)). -$$ - -As n increases, this becomes a good approximation for a wider and wider range of $b_n(b_n-x)$ so letting $u=b_n(x-b_n)$ we find that -$$ -\lim_{n \to \infty}F(u/b_n+b_n)^n =\exp(-\exp(-u))= G_0(u). -$$ - -We can see that $\ln b_n\sim(\ln\ln n)/2$ and then -$$ -b_n\sim\sqrt{2\ln n} -$$ - -so the maximum is expected to climb ever more slowly toward infinity. - -We may take the simplest example, a uniform distribution between 0 and 1, with cumulative distribution function -$$ -F(x)=x -$$ from 0 to 1. - -Approaching 1 we have -$$ -\ln F(x)^n=n\ln F(x)\sim-n(1-x). -$$ - -Then -$$ -F(x)^n\approx\exp(nx-n). -$$ - -Letting $u=1-n(1-x)$ we have -$$ -\lim_{n \to \infty}F((u-1)/n+1)^n=\exp\left(-(1-u)\right)=G_{-1}(u). -$$ - -The expected maximum approaches 1 inversely proportionally to n. diff --git a/wiki/wikipedia/3688.txt b/wiki/wikipedia/3688.txt deleted file mode 100644 index 31e63bd94ccdfed1e99e61acbbdc5bb3c3b1819f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3688.txt +++ /dev/null @@ -1,25 +0,0 @@ -A mathematical problem is a problem that can be represented, analyzed, and possibly solved, with the methods of mathematics. This can be a real-world problem, such as computing the orbits of the planets in the solar system, or a problem of a more abstract nature, such as Hilbert's problems. It can also be a problem referring to the nature of mathematics itself, such as Russell's Paradox. - -Informal "real-world" mathematical problems are questions related to a concrete setting, such as "Adam has five apples and gives John three. How many does he have left?". Such questions are usually more difficult to solve than regular mathematical exercises like "5 - 3", even if one knows the mathematics required to solve the problem. Known as word problems, they are used in mathematics education to teach students to connect real-world situations to the abstract language of mathematics. - -In general, to use mathematics for solving a real-world problem, the first step is to construct a mathematical model of the problem. This involves abstraction from the details of the problem, and the modeller has to be careful not to lose essential aspects in translating the original problem into a mathematical one. After the problem has been solved in the world of mathematics, the solution must be translated back into the context of the original problem. - -Abstract mathematical problems arise in all fields of mathematics.
While mathematicians usually study them for their own sake, by doing so, results may be obtained that find application outside the realm of mathematics. Theoretical physics has historically been a rich source of inspiration. - -Some abstract problems have been rigorously proved to be unsolvable, such as squaring the circle and trisecting the angle using only the compass and straightedge constructions of classical geometry, and solving the general quintic equation algebraically. Also provably unsolvable are so-called undecidable problems, such as the halting problem for Turing machines. - -Some well-known difficult abstract problems that have been solved relatively recently are the four-colour theorem, Fermat's Last Theorem, and the Poincaré conjecture. - -Computers do not need to have a sense of the motivations of mathematicians in order to do what they do. Formal definitions and computer-checkable deductions are absolutely central to mathematical science. - -Mathematics educators who use problem solving for evaluation face an issue phrased by Alan H. Schoenfeld: - -How can one compare test scores from year to year, when very different problems are used? (If similar problems are used year after year, teachers and students will learn what they are, students will practice them: problems become exercises, and the test no longer assesses problem solving). - -The same issue was faced by Sylvestre Lacroix almost two centuries earlier: - -... it is necessary to vary the questions that students might communicate with each other. Though they may fail the exam, they might pass later. Thus distribution of questions, the variety of topics, or the answers, risks losing the opportunity to compare, with precision, the candidates one-to-another. - -Such degradation of problems into exercises is characteristic of mathematics in history. For example, describing the preparations for the Cambridge Mathematical Tripos in the 19th century, Andrew Warwick wrote: - -... many families of the then standard problems had originally taxed the abilities of the greatest mathematicians of the 18th century. diff --git a/wiki/wikipedia/3689.txt b/wiki/wikipedia/3689.txt deleted file mode 100644 index 34cd1a26bf00fa9daadc312008a45645a8f9e1bb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3689.txt +++ /dev/null @@ -1,22 +0,0 @@ -In the theory of von Neumann algebras, the Kaplansky density theorem, due to Irving Kaplansky, is a fundamental approximation theorem. The importance and ubiquity of this technical tool led Gert Pedersen to comment in one of his books that, - -The density theorem is Kaplansky's great gift to mankind. It can be used every day, and twice on Sundays. - -Let $K^{-}$ denote the strong-operator closure of a set $K$ in $B(H)$, the set of bounded operators on the Hilbert space $H$, and let $(K)_1$ denote the intersection of $K$ with the unit ball of $B(H)$. - -Kaplansky density theorem. If $A$ is a self-adjoint algebra of operators in $B(H)$, then each element $a$ in the unit ball of the strong-operator closure of $A$ is in the strong-operator closure of the unit ball of $A$. In other words, $(A)_1^{-} = (A^{-})_1$. If $h$ is a self-adjoint operator in $(A^{-})_1$, then $h$ is in the strong-operator closure of the set of self-adjoint operators in $(A)_1$. - -The Kaplansky density theorem can be used to formulate some approximations with respect to the strong operator topology.
- -1) If $h$ is a positive operator in $(A^{-})_1$, then $h$ is in the strong-operator closure of $(A^{+})_1$, where $A^{+}$ denotes the set of positive operators in $A$. - -2) If $A$ is a C*-algebra acting on the Hilbert space $H$ and $u$ is a unitary operator in $A^{-}$, then $u$ is in the strong-operator closure of the set of unitary operators in $A$. - -In the density theorem and 1) above, the results also hold if one considers a ball of radius $r > 0$, instead of the unit ball. - -The standard proof uses the fact that, for a bounded continuous real-valued function $f$, the continuous functional calculus $a \mapsto f(a)$ is strong-operator continuous. In other words, for a net $\{a_{\alpha}\}$ of self-adjoint operators in $A$, -$$ -\lim f(a_{\alpha}) = f (\lim a_{\alpha}) -$$ - -in the strong operator topology. This shows that the self-adjoint part of the unit ball in $A^{-}$ can be approximated strongly by self-adjoint elements in $A$. A matrix computation in $M_2(A)$, applied to the self-adjoint operator with entries 0 on the diagonal and $a$ and $a^*$ at the other positions, then removes the self-adjointness restriction and proves the theorem. diff --git a/wiki/wikipedia/369.txt b/wiki/wikipedia/369.txt deleted file mode 100644 index 5d7bd1b8d3b2d806457eb68b600092fc6685a343..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/369.txt +++ /dev/null @@ -1,88 +0,0 @@ -In mathematics and economics, the envelope theorem is a major result about the differentiability properties of the value function of a parameterized optimization problem. As we change parameters of the objective, the envelope theorem shows that, in a certain sense, changes in the optimizer of the objective do not contribute to the change in the objective function. The envelope theorem is an important tool for comparative statics of optimization models. - -The term envelope derives from describing the graph of the value function as the "upper envelope" of the graphs of the parameterized family of functions $\left\{ f\left( x,\cdot \right) \right\} _{x\in X}$ that are optimized. - -Let $f(x,\alpha)$ and $g_{j}(x,\alpha), j = 1,2, \ldots, m$ be real-valued continuously differentiable functions on $\mathbb{R}^{n+l}$, where $x \in \mathbb{R}^{n}$ are choice variables and $\alpha \in \mathbb{R}^{l}$ are parameters, and consider the problem of choosing $x$, for a given $\alpha$, so as to: -$$ - \max_{x} f(x, \alpha) -$$ subject to $g_{j}(x,\alpha) \geq 0, j = 1,2, \ldots, m$ and $x \geq 0$. - -The Lagrangian expression of this problem is given by -$$ -\mathcal{L} (x, \lambda, \alpha) = f(x, \alpha) + \lambda \cdot g(x, \alpha) -$$ - -where $\lambda \in \mathbb{R}^{m}$ are the Lagrange multipliers. Now let $x^{\ast}(\alpha)$ and $\lambda^{\ast}(\alpha)$ together be the solution that maximizes the objective function f subject to the constraints (and hence are saddle points of the Lagrangian), -$$ -\mathcal{L}^{\ast} (\alpha) \equiv f(x^{\ast}(\alpha), \alpha) + \lambda^{\ast}(\alpha) \cdot g(x^{\ast}(\alpha), \alpha), -$$ - -and define the value function -$$ -V(\alpha) \equiv f(x^{\ast}(\alpha), \alpha). -$$ - -Then we have the following theorem. - -Theorem: Assume that $V$ and $\mathcal{L}$ are continuously differentiable.
Then -$$ - \frac{\partial V(\alpha)}{\partial \alpha_{k}} = \frac{\partial \mathcal{L}^{\ast} (\alpha)}{\partial \alpha_{k}} = \frac{\partial \mathcal{L} (x^{\ast} (\alpha), \lambda^{\ast} (\alpha), \alpha)}{\partial \alpha_{k}}, \quad k = 1, 2, \ldots, l -$$ - -where $\partial \mathcal{L} / \partial \alpha_{k} = \partial f / \partial \alpha_{k} + \lambda \cdot \partial g / \partial \alpha_{k}$. - -Let $X$ denote the choice set and let the relevant parameter be $t\in \lbrack 0,1]$. Letting $f:X\times \lbrack 0,1]\rightarrow R$ denote the parameterized objective function, the value function $V$ and the optimal choice correspondence (set-valued function) $X^{\ast }$ are given by: -$$ -V(t) =\sup_{x\in X}f(x,t) \qquad (1) -$$ -$$ -X^{\ast }(t) =\{x\in X:f(x,t)=V(t)\} \qquad (2) -$$ - -"Envelope theorems" describe sufficient conditions for the value function $V$ to be differentiable in the parameter $t$ and describe its derivative as -$$ -V^{\prime }\left( t\right) =f_{t}\left( x,t\right) \text{ for each }x\in X^{\ast }\left( t\right) \qquad (3) -$$ - -where $f_{t}$ denotes the partial derivative of $f$ with respect to $t$. Namely, the derivative of the value function with respect to the parameter equals the partial derivative of the objective function with respect to $t$ holding the maximizer fixed at its optimal level. - -Traditional envelope theorem derivations use the first-order condition for (1), which requires that the choice set $X$ have a suitable convex and topological structure and that the objective function $f$ be differentiable in the variable $x$. (The argument is that changes in the maximizer have only a "second-order effect" at the optimum and so can be ignored.) However, in many applications such as the analysis of incentive constraints in contract theory and game theory, nonconvex production problems, and "monotone" or "robust" comparative statics, the choice sets and objective functions generally lack the topological and convexity properties required by the traditional envelope theorems. - -Paul Milgrom and Segal (2002) observe that the traditional envelope formula holds for optimization problems with arbitrary choice sets at any differentiability point of the value function. Envelope-type formulas had earlier been derived and exploited, under stronger assumptions, by Holmstrom (1979), Laffont and Maskin (1980), Riley and Samuelson (1981), Fudenberg and Tirole (1991), and Williams (1999). While these authors derived and exploited the envelope theorem by restricting attention to (piecewise) continuously differentiable choice rules or even narrower classes, it may sometimes be optimal to implement a choice rule that is not piecewise continuously differentiable. (One example is the class of trading problems with linear utility described in chapter 6.5 of Myerson (1991).) Note that the envelope condition (3), in integrated form, still holds in this setting and implies such important results as Holmstrom's lemma (Holmstrom, 1979), the revenue equivalence theorem (for auctions), the Green–Laffont–Holmstrom theorem (Green and Laffont, 1979; Holmstrom, 1979), the Jehiel–Moldovanu impossibility theorems (Jehiel and Moldovanu, 2001), the McAfee–McMillan weak-cartels theorem (McAfee and McMillan, 1992), and Weber's martingale theorem (Weber, 1983), etc. The details of these applications are provided in Chapter 3 of Milgrom (2004), who offers an elegant and unifying framework in auction and mechanism design analysis mainly based on the envelope theorem and other familiar techniques and concepts in demand theory.
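The envelope formula (3) is easy to check numerically on a toy problem. The sketch below (Python with NumPy; the quadratic objective $f(x,t)=tx-x^{2}$ is a hypothetical example chosen for illustration, not one from the papers cited above) compares a finite-difference derivative of a brute-force value function with $f_{t}$ evaluated at the maximizer $x^{\ast}(t)=t/2$.

```python
import numpy as np

# Toy objective: f(x, t) = t*x - x**2, maximized at x*(t) = t/2, so V(t) = t**2/4.
xs = np.linspace(-5.0, 5.0, 200_001)

def V(t):
    return (t * xs - xs**2).max()          # brute-force value function on a grid

t, h = 1.3, 1e-4
dV_numeric = (V(t + h) - V(t - h)) / (2 * h)  # finite-difference V'(t)
x_star = t / 2                                 # argmax, known in closed form here
envelope = x_star                              # f_t(x, t) = x, evaluated at x*(t)

print(dV_numeric, envelope)                    # both ~0.65 = t/2
print(V(t), t**2 / 4)                          # both ~0.4225
```

The point of the exercise is that `dV_numeric` matches `envelope` even though the maximizer itself moves with `t`: the movement of $x^{\ast}$ contributes nothing to $V'(t)$ at the optimum.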
- -For a multidimensional parameter space $T\subseteq \mathbb{R}^{K}$, Theorem 1 can be applied to partial and directional derivatives of the value function. If both the objective function $f$ and the value function $V$ are (totally) differentiable in $t$, Theorem 1 implies the envelope formula for their gradients: $\nabla V\left( t\right) =\nabla _{t}f\left( x,t\right) $ for each $x\in X^{\ast }\left( t\right) $. While total differentiability of the value function may not be easy to ensure, Theorem 2 can still be applied along any smooth path connecting two parameter values $t_{0}$ and $t$. Namely, suppose that functions $f(x,\cdot )$ are differentiable for all $x\in X$ with $|\nabla _{t}f(x,t)|\leq B$ for all $x\in X,$ $t\in T$. A smooth path from $t_{0}$ to $t$ is described by a differentiable mapping $\gamma :\left[ 0,1\right] \rightarrow T$ with a bounded derivative, such that $\gamma \left( 0\right) =t_{0}$ and $\gamma \left( 1\right) =t$. Theorem 2 implies that for any such smooth path, the change of the value function can be expressed as the path integral of the partial gradient $\nabla _{t}f(x^{\ast }(t),t)$ of the objective function along the path: -$$ - V(t)-V(t_{0})=\int_{\gamma }\nabla _{t}f(x^{\ast }(s),s)\cdot ds. -$$ - -In particular, for $t=t_{0}$, this establishes that cyclic path integrals along any smooth closed path $\gamma $ must be zero: -$$ -\int_{\gamma} \nabla _{t}f(x^{\ast }(s),s)\cdot ds=0. -$$ - -This "integrability condition" plays an important role in mechanism design with multidimensional types, constraining what kind of choice rules $x^{\ast }$ can be sustained by mechanism-induced menus $X\subseteq \bar{X}$. In application to producer theory, with $x\in X\subseteq \mathbb{R}^{L}$ being the firm's production vector and $t\in \mathbb{R}^{L}$ being the price vector, $f\left( x,t\right) =t\cdot x$, and the integrability condition says that any rationalizable supply function $x^{\ast }$ must satisfy -$$ - \int_{\gamma} x^{\ast }(s)\cdot ds=0 -$$ - -along any smooth closed path $\gamma$. When $x^{\ast }$ is continuously differentiable, this integrability condition is equivalent to the symmetry of the substitution matrix $\left(\partial x_{i}^{\ast }\left( t\right) /\partial t_{j}\right) _{i,j=1}^{L}$. (In consumer theory, the same argument applied to the expenditure minimization problem yields symmetry of the Slutsky matrix.) - -Suppose now that the feasible set $X\left( t\right) $ depends on the parameter, i.e., -$$ - V(t) =\sup_{x\in X\left( t\right) }f(x,t) -$$ -$$ - X^{\ast }(t) =\{x\in X\left( t\right) :f(x,t)=V(t)\}\text{, } -$$ - -where $ X\left( t\right) =\left\{ x\in X:g\left( x,t\right) \geq 0\right\}$ for some $ g:X\times \left[ 0,1\right] \rightarrow \mathbb{R}^{K}. $ - -Suppose that $X$ is a convex set, $f$ and $g$ are concave in $x$, and there exists $\hat{x}\in X$ such that $g\left( \hat{x},t\right) >0$ for all $t\in \left[ 0,1\right] $. Under these assumptions, it is well known that the above constrained optimization program can be represented as a saddle-point problem for the Lagrangian $L\left( x,\lambda,t\right) =f(x,t)+\lambda\cdot g\left( x,t\right) $, where $\lambda \in \mathbb{R}_{+}^{K}$ is the vector of Lagrange multipliers chosen by the adversary to minimize the Lagrangian. The envelope theorem for saddle-point problems then applies under the additional assumptions that $X$ is a compact set in a normed linear space, $f$ and $g$ are continuous in $x$, and $f_{t}$ and $g_{t}$ are continuous in $\left( x,t\right) $.
In particular, letting $\left( x^{\ast}(t),\lambda^{\ast }\left( t\right) \right) $ denote the Lagrangian's saddle point for parameter value $t$, the theorem implies that $V$ is absolutely continuous and satisfies -$$ - V(t)=V(0)+\int_{0}^{t}L_{t}(x^{\ast }(s),\lambda^{\ast }\left( s\right) ,s)ds. -$$ - -For the special case in which $f\left( x,t\right) $ is independent of $t$, $K=1$, and $g\left( x,t\right) =h\left( x\right) +t$, the formula implies that $V^{\prime }(t)=L_{t}(x^{\ast }(t),\lambda^{\ast }\left( t\right) ,t)=\lambda^{\ast}\left( t\right) $ for a.e. $t$. That is, the Lagrange multiplier $\lambda^{\ast}\left( t\right) $ on the constraint is its "shadow price" in the optimization program. - -Milgrom and Segal (2002) demonstrate that the generalized version of the envelope theorem can also be applied to convex programming, continuous optimization problems, saddle-point problems, and optimal stopping problems. diff --git a/wiki/wikipedia/3690.txt b/wiki/wikipedia/3690.txt deleted file mode 100644 index b1b3c3fee6a33bb82da934f5a66f4dd36b749fcd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3690.txt +++ /dev/null @@ -1,126 +0,0 @@ -The crystallographic restriction theorem in its basic form was based on the observation that the rotational symmetries of a crystal are usually limited to 2-fold, 3-fold, 4-fold, and 6-fold. However, quasicrystals can occur with other diffraction pattern symmetries, such as 5-fold; these were not discovered until 1982, by Dan Shechtman. - -Crystals are modeled as discrete lattices, generated by a list of independent finite translations. Because discreteness requires that the spacings between lattice points have a lower bound, the group of rotational symmetries of the lattice at any point must be a finite group (alternatively, the point is the only system allowing for infinite rotational symmetry). The strength of the theorem is that not all finite groups are compatible with a discrete lattice; in any dimension, we will have only a finite number of compatible groups. - -The special cases of 2D (wallpaper groups) and 3D (space groups) are most heavily used in applications, and they can be treated together. - -A rotation symmetry in dimension 2 or 3 must move a lattice point to a succession of other lattice points in the same plane, generating a regular polygon of coplanar lattice points. We now confine our attention to the plane in which the symmetry acts, illustrated with lattice vectors in the figure. - -[[Image:Crystallographic restriction polygons.png|right|280px|thumb|Lattices restrict polygons:
compatible: 6-fold (3-fold), 4-fold (2-fold); incompatible: 8-fold, 5-fold]] - -Now consider an 8-fold rotation, and the displacement vectors between adjacent points of the polygon. If a displacement exists between any two lattice points, then that same displacement is repeated everywhere in the lattice. So collect all the edge displacements to begin at a single lattice point. The edge vectors become radial vectors, and their 8-fold symmetry implies a regular octagon of lattice points around the collection point. But this is impossible, because the new octagon is about 80% as large as the original. The significance of the shrinking is that it is unlimited. The same construction can be repeated with the new octagon, and again and again until the distance between lattice points is as small as we like; thus no discrete lattice can have 8-fold symmetry. The same argument applies to any k-fold rotation, for k greater than 6. - -A shrinking argument also eliminates 5-fold symmetry. Consider a regular pentagon of lattice points. If it exists, then we can take every other edge displacement and (head-to-tail) assemble a 5-point star, with the last edge returning to the starting point. The vertices of such a star are again vertices of a regular pentagon with 5-fold symmetry, but about 60% smaller than the original. - -Thus the theorem is proved. - -The existence of quasicrystals and Penrose tilings shows that the assumption of a linear translation is necessary. Penrose tilings may have 5-fold rotational symmetry and a discrete lattice, and any local neighborhood of the tiling is repeated infinitely many times, but there is no linear translation for the tiling as a whole. And without the discrete lattice assumption, the above construction not only fails to reach a contradiction, but produces a (non-discrete) counterexample. Thus 5-fold rotational symmetry cannot be eliminated by an argument missing either of those assumptions. A Penrose tiling of the whole (infinite) plane can only have exact 5-fold rotational symmetry (of the whole tiling) about a single point, however, whereas the 4-fold and 6-fold lattices have infinitely many centres of rotational symmetry. - -Consider two lattice points A and B separated by a translation vector r. Consider an angle α such that a rotation of angle α about any lattice point is a symmetry of the lattice. Rotating about point B by α maps point A to a new point A'. Similarly, rotating about point A by α maps B to a point B'. Since both rotations mentioned are symmetry operations, A' and B' must both be lattice points. Due to periodicity of the crystal, the new vector r' which connects them must be equal to an integer multiple of r: -$$ - \mathbf{r}' = m\mathbf{r} -$$ - -with $m$ integer. The four translation vectors, three of length $r=|\mathbf{r}|$ and one, connecting A' and B', of length $r'=|\mathbf{r}'|$, form a trapezium. Therefore, the length of r' is also given by: -$$ - r' = 2r\cos\alpha - r. -$$ - -Combining the two equations gives: -$$ - \cos\alpha = \frac{m+1}{2} = \frac{M}{2} -$$ - -where $M=m+1$ is also an integer. Bearing in mind that $|\cos\alpha|\le 1$, the allowed integers are $M\in\{-2,-1,0,1,2\}$. Solving for possible values of $\alpha$ reveals that the only values in the 0° to 180° range are 0°, 60°, 90°, 120°, and 180°. In radians, the only allowed rotations consistent with lattice periodicity are given by 2π/n, where n = 1, 2, 3, 4, 6.
This corresponds to 1-, 2-, 3-, 4-, and 6-fold symmetry, respectively, and therefore excludes the possibility of 5-fold or greater than 6-fold symmetry. - -Consider a line of atoms A-O-B, separated by distance a. Rotate the entire row by θ = +2π/n and θ = −2π/n, with point O kept fixed. After the rotation by +2π/n, A is moved to the lattice point C and after the rotation by −2π/n, B is moved to the lattice point D. Due to the assumed periodicity of the lattice, the two lattice points C and D will be also in a line directly below the initial row; moreover C and D will be separated by r = ma, with m an integer. But by the geometry, the separation between these points is: -$$ -2a\cos{\theta} = 2a\cos{\frac{2\pi}{n}} -$$. - -Equating the two relations gives: -$$ -2\cos{\frac{2\pi}{n}}=m. -$$ - -This is satisfied by only n = 1, 2, 3, 4, 6. - -For an alternative proof, consider matrix properties. The sum of the diagonal elements of a matrix is called the trace of the matrix. In 2D and 3D every rotation is a planar rotation, and the trace is a function of the angle alone. For a 2D rotation, the trace is 2 cos θ; for a 3D rotation, 1 + 2 cos θ. - -Examples - -*Consider a 60° (6-fold) rotation matrix with respect to an orthonormal basis in 2D. -$$ -\begin{bmatrix} {1/2} & -{\sqrt{3}/2} \\ {\sqrt{3}/2} & {1/2} \end{bmatrix} -$$ - -The trace is precisely 1, an integer. - -*Consider a 45° (8-fold) rotation matrix. -$$ -\begin{bmatrix} {1/\sqrt{2}} & -{1/\sqrt{2}} \\ {1/\sqrt{2}} & {1/\sqrt{2}} \end{bmatrix} -$$ - -The trace is $\sqrt{2}$, not an integer. - -Selecting a basis formed from vectors that span the lattice, neither orthogonality nor unit length is guaranteed, only linear independence. However, the trace of the rotation matrix is the same with respect to any basis. The trace is a similarity invariant under linear transformations. In the lattice basis, the rotation operation must map every lattice point to an integer combination of lattice vectors, so the entries of the rotation matrix in the lattice basis, and hence the trace, are necessarily integers. As in the other proofs, this implies that the only allowed rotational symmetries correspond to 1-, 2-, 3-, 4- or 6-fold invariance. For example, wallpapers and crystals cannot be rotated by 45° and remain invariant; the only possible angles are 360°, 180°, 120°, 90° or 60°. - -Example - -*Consider a 60° (360°/6) rotation matrix with respect to the oblique lattice basis for a tiling by equilateral triangles. -$$ -\begin{bmatrix} 0 & -1 \\ 1 & 1 \end{bmatrix} -$$ - -The trace is still 1. The determinant (always +1 for a rotation) is also preserved. - -The general crystallographic restriction on rotations does not guarantee that a rotation will be compatible with a specific lattice. For example, a 60° rotation will not work with a square lattice; nor will a 90° rotation work with a rectangular lattice. - -When the dimension of the lattice rises to four or more, rotations need no longer be planar; the 2D proof is inadequate. However, restrictions still apply, though more symmetries are permissible. For example, the hypercubic lattice has an eightfold rotational symmetry, corresponding to an eightfold rotational symmetry of the hypercube. This is of interest, not just for mathematics, but for the physics of quasicrystals under the cut-and-project theory. In this view, a 3D quasicrystal with 8-fold rotation symmetry might be described as the projection of a slab cut from a 4D lattice.
- -The following 4D rotation matrix is the aforementioned eightfold symmetry of the hypercube (and the cross-polytope): -$$ -A = \begin{bmatrix} 0 & 0 & 0 & -1 \\ 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \end{bmatrix}. -$$ - -Transforming this matrix to the new coordinates given by -$$ -B = \begin{bmatrix} -1/2 & 0 & -1/2 & \sqrt 2/2 \\ 1/2 & \sqrt 2/2 & -1/2 & 0 \\ -1/2 & 0 & -1/2 & -\sqrt 2/2 \\ -1/2 & \sqrt 2/2 & 1/2 & 0 \end{bmatrix} -$$ will produce: -$$ -B A B^{-1} = \begin{bmatrix} \sqrt 2/2 & \sqrt 2/2 & 0 & 0 \\ -\sqrt 2/2 & \sqrt 2/2 & 0 & 0 \\ 0 & 0 & -\sqrt 2/2 & \sqrt 2/2 \\ 0 & 0 & -\sqrt 2/2 & -\sqrt 2/2 \end{bmatrix}. -$$ - -This third matrix then corresponds to a rotation both by 45° (in the first two dimensions) and by 135° (in the last two). Projecting a slab of hypercubes along the first two dimensions of the new coordinates produces an Ammann–Beenker tiling (another such tiling is produced by projecting along the last two dimensions), which therefore also has 8-fold rotational symmetry on average. - -The A4 lattice and F4 lattice have order 10 and order 12 rotational symmetries, respectively. - -To state the restriction for all dimensions, it is convenient to shift attention away from rotations alone and concentrate on the integer matrices. We say that a matrix $A$ has order $k$ when its $k$-th power (but no lower), $A^k$, equals the identity. Thus a 6-fold rotation matrix in the equilateral triangle basis is an integer matrix with order 6. Let $\mathrm{Ord}_N$ denote the set of integers that can be the order of an $N\times N$ integer matrix. For example, $\mathrm{Ord}_2 = \{1, 2, 3, 4, 6\}$. We wish to state an explicit formula for $\mathrm{Ord}_N$. - -Define a function $\psi$ based on Euler's totient function $\varphi$; it will map positive integers to non-negative integers. For an odd prime $p$ and a positive integer $k$, set $\psi(p^k)$ equal to the totient function value $\varphi(p^k)$, which in this case is $p^k-p^{k-1}$. Do the same for $\psi(2^k)$ when $k > 1$. Set $\psi(2)$ and $\psi(1)$ to 0. Using the fundamental theorem of arithmetic, we can write any other positive integer uniquely as a product of prime powers, $m = \prod_\alpha p_\alpha^{k_\alpha}$; set $\psi(m) = \sum_\alpha \psi(p_\alpha^{k_\alpha})$. This differs from the totient itself, because it is a sum instead of a product. - -The crystallographic restriction in general form states that $\mathrm{Ord}_N$ consists of those positive integers $m$ such that $\psi(m) \leq N$. - -For $m>2$, the values of $\psi(m)$ are equal to twice the algebraic degree of $\cos(2\pi/m)$; therefore, $\psi(m)$ is strictly less than $m$, attaining the maximum value $m-1$ if and only if $m$ is prime. - -These additional symmetries do not allow a planar slice to have, say, 8-fold rotation symmetry. In the plane, the 2D restrictions still apply. Thus the cuts used to model quasicrystals necessarily have thickness. - -Integer matrices are not limited to rotations; for example, a reflection is also a symmetry of order 2. But by insisting on determinant +1, we can restrict the matrices to proper rotations. - -The crystallographic restriction theorem can be formulated in terms of isometries of Euclidean space. A set of isometries can form a group. By a discrete isometry group we will mean an isometry group that maps each point to a discrete subset of $\mathbb{R}^N$, i.e. the orbit of any point is a set of isolated points. With this terminology, the crystallographic restriction theorem in two and three dimensions can be formulated as follows. - -For every discrete isometry group in two- and three-dimensional space which includes translations spanning the whole space, all isometries of finite order are of order 1, 2, 3, 4 or 6.
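The general form lends itself to direct computation. Below is a short sketch (Python; written directly from the definition of $\psi$ above, with helper names of my own choosing) that enumerates $\mathrm{Ord}_N$ for small $N$. It reproduces $\mathrm{Ord}_2 = \{1, 2, 3, 4, 6\}$, consistent with the two- and three-dimensional statement just given, and also the four/five- and six/seven-dimensional order lists quoted below.

```python
def prime_power_factors(m):
    """Yield the (prime, prime-power) components of m, e.g. 12 -> (2, 4), (3, 3)."""
    p = 2
    while p * p <= m:
        if m % p == 0:
            q = 1
            while m % p == 0:
                m //= p
                q *= p
            yield p, q
        p += 1
    if m > 1:
        yield m, m

def psi(m):
    # psi(p**k) = phi(p**k) = p**k - p**(k-1), except psi(1) = psi(2) = 0,
    # summed (not multiplied) over the prime-power components of m.
    return sum(0 if q == 2 else q - q // p for p, q in prime_power_factors(m))

def ord_set(N, limit=100):
    """Positive integers m (below limit) that can be the order of an NxN integer matrix."""
    return [m for m in range(1, limit) if psi(m) <= N]

print(ord_set(2))  # [1, 2, 3, 4, 6]
print(ord_set(4))  # [1, 2, 3, 4, 5, 6, 8, 10, 12]
print(ord_set(6))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 15, 18, 20, 24, 30]
```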
- -Isometries of order n include, but are not restricted to, n-fold rotations. The theorem also excludes $S_8$, $S_{12}$, $D_{4d}$, and $D_{6d}$ (see point groups in three dimensions), even though they have 4- and 6-fold rotational symmetry only. - -Rotational symmetry of any order about an axis is compatible with translational symmetry along that axis. - -The general restriction above implies that for every discrete isometry group in four- and five-dimensional space which includes translations spanning the whole space, all isometries of finite order are of order 1, 2, 3, 4, 5, 6, 8, 10, or 12. - -All isometries of finite order in six- and seven-dimensional space are of order 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 15, 18, 20, 24 or 30. diff --git a/wiki/wikipedia/3691.txt b/wiki/wikipedia/3691.txt deleted file mode 100644 index 9e949ff21619bd3d1f02f4ffa2064d43df1d251a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3691.txt +++ /dev/null @@ -1,62 +0,0 @@ -In number theory, the Green–Tao theorem, proved by Ben Green and Terence Tao in 2004, states that the sequence of prime numbers contains arbitrarily long arithmetic progressions. In other words, for every natural number k, there exist arithmetic progressions of primes with k terms. The proof is an extension of Szemerédi's theorem. The problem can be traced back to investigations of Lagrange and Waring from around 1770. - -Let $\pi(N)$ denote the number of primes less than or equal to $N$. If $A$ is a subset of the prime numbers such that -$$ -\limsup_{N\rightarrow\infty} \dfrac{|A\cap [1,N]|}{\pi(N)}>0 -$$, - -then for all positive integers $k$, the set $A$ contains infinitely many arithmetic progressions of length $k$. In particular, the entire set of prime numbers contains arbitrarily long arithmetic progressions. - -In their later work on the generalized Hardy–Littlewood conjecture, Green and Tao stated and conditionally proved the asymptotic formula -$$ -(\mathfrak{S}_k + o(1))\frac{N^2}{(\log N)^k} -$$ - -for the number of k-tuples of primes $p_1 < p_2 < \dotsb < p_k \leq N$ in arithmetic progression. Here, $\mathfrak{S}_k$ is the constant -$$ -\mathfrak{S}_k := \frac{1}{2(k-1)}\left(\prod_{p \leq k}\frac{1}p\left(\frac{p}{p - 1}\right)^{\!k-1}\right)\!\left(\prod_{p > k}\left(1 - \frac{k-1}p\right)\!\left(\frac{p}{p - 1}\right)^{\!k-1}\right)\!. -$$ - -The result was made unconditional by Green–Tao and Green–Tao–Ziegler. - -Green and Tao's proof has three main components: - -# Szemerédi's theorem, which asserts that subsets of the integers with positive upper density have arbitrarily long arithmetic progressions. It does not a priori apply to the primes because the primes have density zero in the integers. - -# A transference principle that extends Szemerédi's theorem to subsets of the integers which are pseudorandom in a suitable sense. Such a result is now called a relative Szemerédi theorem. - -# A pseudorandom subset of the integers containing the primes as a dense subset. To construct this set, Green and Tao used ideas from Goldston, Pintz, and Yıldırım's work on prime gaps. Once the pseudorandomness of the set is established, the transference principle may be applied, completing the proof. - -Numerous simplifications to the argument in the original paper have been found. Conlon, Fox, and Zhao provide a modern exposition of the proof. - -The proof of the Green–Tao theorem does not show how to find the progressions of primes; it merely proves they exist. There has been separate computational work to find large arithmetic progressions in the primes.
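Verifying that a claimed progression consists entirely of primes is routine by machine, even though finding one is hard. A minimal sketch (Python; assumes the `sympy` library for primality testing, and the helper name is my own) is:

```python
from sympy import isprime

def is_prime_ap(start: int, step: int, length: int) -> bool:
    """Check that start, start+step, ..., start+(length-1)*step are all prime."""
    return all(isprime(start + i * step) for i in range(length))

# A small classical example: 5, 11, 17, 23, 29 are all prime.
print(is_prime_ap(5, 6, 5))   # True

# The record progressions quoted below can be checked the same way, e.g.
# is_prime_ap(56211383760397, 44546738095860, 23)
```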
- -The Green–Tao paper states 'At the time of writing the longest known arithmetic progression of primes is of length 23, and was found in 2004 by Markus Frind, Paul Underwood, and Paul Jobling: 56211383760397 + 44546738095860 · k; k = 0, 1, ..., 22.' - -On January 18, 2007, Jarosław Wróblewski found the first known case of 24 primes in arithmetic progression: - -468,395,662,504,823 + 205,619 · 223,092,870 · n, for n = 0 to 23. - -The constant 223,092,870 here is the product of the prime numbers up to 23, more compactly written 23# in primorial notation. - -On May 17, 2008, Wróblewski and Raanan Chermoni found the first known case of 25 primes: - -6,171,054,912,832,631 + 366,384 · 23# · n, for n = 0 to 24. - -On April 12, 2010, Benoît Perichon, with software by Wróblewski and Geoff Reynolds in a distributed PrimeGrid project, found the first known case of 26 primes: - -43,142,746,595,714,191 + 23,681,770 · 23# · n, for n = 0 to 25. - -In September 2019, Rob Gahan and PrimeGrid found the first known case of 27 primes: - -224,584,605,939,537,911 + 81,292,139 · 23# · n, for n = 0 to 26. - -Many of the extensions of Szemerédi's theorem hold for the primes as well. - -Independently, Tao and Ziegler and Cook, Magyar, and Titichetrakun derived a multidimensional generalization of the Green–Tao theorem. The Tao–Ziegler proof was also simplified by Fox and Zhao. - -In 2006, Tao and Ziegler extended the Green–Tao theorem to cover polynomial progressions. More precisely, given any integer-valued polynomials $P_1, \dots, P_k$ in one unknown $m$, all with constant term 0, there are infinitely many integers $x, m$ such that $x + P_1(m), \dots, x + P_k(m)$ are simultaneously prime. The special case when the polynomials are $m, 2m, \dots, km$ implies the previous result that there are length k arithmetic progressions of primes. - -Tao proved an analogue of the Green–Tao theorem for the Gaussian primes. diff --git a/wiki/wikipedia/3692.txt b/wiki/wikipedia/3692.txt deleted file mode 100644 index 661d7b1c0cbe627256b9f571f189e2f3448ca71d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3692.txt +++ /dev/null @@ -1,57 +0,0 @@ -In mathematics, in the field of differential equations, a boundary value problem is a differential equation together with a set of additional constraints, called the boundary conditions. A solution to a boundary value problem is a solution to the differential equation which also satisfies the boundary conditions. - -Boundary value problems arise in several branches of physics as any physical differential equation will have them. Problems involving the wave equation, such as the determination of normal modes, are often stated as boundary value problems. A large class of important boundary value problems are the Sturm–Liouville problems. The analysis of these problems involves the eigenfunctions of a differential operator. - -To be useful in applications, a boundary value problem should be well posed. This means that given the input to the problem there exists a unique solution, which depends continuously on the input. Much theoretical work in the field of partial differential equations is devoted to proving that boundary value problems arising from scientific and engineering applications are in fact well-posed. - -Among the earliest boundary value problems to be studied is the Dirichlet problem, of finding the harmonic functions (solutions to Laplace's equation); the solution was given by Dirichlet's principle. - -Boundary value problems are similar to initial value problems.
A boundary value problem has conditions specified at the extremes ("boundaries") of the independent variable in the equation, whereas an initial value problem has all of the conditions specified at the same value of the independent variable (and that value is at the lower boundary of the domain, thus the term "initial" value). A boundary value is a data value that corresponds to a minimum or maximum input, internal, or output value specified for a system or component. - -For example, if the independent variable is time over the domain [0,1], a boundary value problem would specify values for $y(t)$ at both $t=0$ and $t=1$, whereas an initial value problem would specify a value of $y(t)$ and $y'(t)$ at time $t=0$. - -Finding the temperature at all points of an iron bar with one end kept at absolute zero and the other end at the freezing point of water would be a boundary value problem. - -If the problem is dependent on both space and time, one could specify the value of the problem at a given point for all time or at a given time for all space. - -Concretely, an example of a boundary value problem (in one spatial dimension) is -$$ -y''(x)+y(x)=0 -$$ - -to be solved for the unknown function $y(x)$ with the boundary conditions -$$ -y(0)=0, \ y(\pi/2)=2. -$$ - -Without the boundary conditions, the general solution to this equation is -$$ -y(x) = A \sin(x) + B \cos(x). -$$ - -From the boundary condition $y(0)=0$ one obtains -$$ -0 = A \cdot 0 + B \cdot 1 -$$ - -which implies that $B=0.$ From the boundary condition $y(\pi/2)=2$ one finds -$$ -2 = A \cdot 1 -$$ - -and so $A=2.$ One sees that imposing boundary conditions allowed one to determine a unique solution, which in this case is -$$ -y(x)=2\sin(x). -$$ - -A boundary condition which specifies the value of the function itself is a Dirichlet boundary condition, or first-type boundary condition. For example, if one end of an iron rod is held at absolute zero, then the value of the problem would be known at that point in space. - -A boundary condition which specifies the value of the normal derivative of the function is a Neumann boundary condition, or second-type boundary condition. For example, if there is a heater at one end of an iron rod, then energy would be added at a constant rate but the actual temperature would not be known. - -If the boundary has the form of a curve or surface that gives a value to the normal derivative and the variable itself then it is a Cauchy boundary condition. - -In summary, for the unknown function $y$, with constants $c_0$ and $c_1$ and known scalar functions $f$ and $g$ specified by the boundary conditions: a Dirichlet condition prescribes the value of $y$ itself (e.g. $y = f$ on the boundary), a Neumann condition prescribes the normal derivative (e.g. $\partial y/\partial n = g$), and a Robin (third-type) condition prescribes a linear combination such as $c_0 y + c_1 \partial y/\partial n = f$. - -Aside from the boundary condition, boundary value problems are also classified according to the type of differential operator involved. For an elliptic operator, one discusses elliptic boundary value problems. For a hyperbolic operator, one discusses hyperbolic boundary value problems. These categories are further subdivided into linear and various nonlinear types. - -In electrostatics, a common problem is to find a function which describes the electric potential of a given region. If the region does not contain charge, the potential must be a solution to Laplace's equation (a so-called harmonic function). The boundary conditions in this case are the Interface conditions for electromagnetic fields. If there is no current density in the region, it is also possible to define a magnetic scalar potential using a similar procedure.
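The worked example above can also be solved numerically. Here is a minimal sketch (Python; assumes SciPy's `solve_bvp`) that rewrites $y''+y=0$ as a first-order system, imposes $y(0)=0$ and $y(\pi/2)=2$, and checks the result against the exact solution $y(x)=2\sin(x)$:

```python
import numpy as np
from scipy.integrate import solve_bvp

# y'' + y = 0 as a first-order system: y0' = y1, y1' = -y0.
def rhs(x, y):
    return np.vstack((y[1], -y[0]))

# Boundary conditions y(0) = 0 and y(pi/2) = 2, written as residuals.
def bc(ya, yb):
    return np.array([ya[0] - 0.0, yb[0] - 2.0])

x = np.linspace(0.0, np.pi / 2, 11)
y_guess = np.zeros((2, x.size))   # any guess works here: the problem is linear
sol = solve_bvp(rhs, bc, x, y_guess)

# Maximum deviation from the exact solution 2*sin(x); should be tiny.
print(np.max(np.abs(sol.sol(x)[0] - 2.0 * np.sin(x))))
```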
diff --git a/wiki/wikipedia/3693.txt b/wiki/wikipedia/3693.txt deleted file mode 100644 index db3d2046f8fb41344d0c6838203557a91211e2ce..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3693.txt +++ /dev/null @@ -1,39 +0,0 @@ -Tetris Party is a puzzle video game by Hudson Soft for WiiWare. An installment of the Tetris series, the game supports the use of Miis and the Wii Balance Board, and features both local and online multiplayer in addition to several single-player modes unique to the game. - -The game was released in Japan on October 14, 2008, in North America on October 20, 2008, and in Europe and Australia on October 24, 2008. - -A retail version called Tetris Party Deluxe, announced by Tetris Online, Inc., Hudson Soft, Nintendo Australia and Majesco Entertainment, was released in 2010 for the Wii and the Nintendo DS systems. - -A DSiWare version called Tetris Party Live was released in North America on November 22, 2010, and later in the PAL region on December 3, 2010. This version is no longer available for purchase as of March 31, 2012. - -Tetris Party introduces a number of new game modes. In addition to the traditional 15-level single-player-only marathon mode, the single and multiplayer versus modes, and the Vs. Hot Lines and Team Battle modes returning from earlier games, as well as the return of Bombliss, these new modes include: - -* Beginner's Tetris: The traditional 15-level game with larger blocks, a smaller playfield and new polyominos such as a three-block line, a two-block line and a small three-block L-shape. - -* Wii Balance Board Tetris: A variation of Beginner's Tetris in which players control falling Tetriminos using the Balance Board, leaning left and right to move the Tetrimino, leaning forward or backward to drop it, and squatting to rotate it in a clockwise direction. This game type also includes a 3-minute high-score mode called "Balance Ultra" and a Vs. Computer battle mode. - -* Co-op Tetris: Two players work together simultaneously on a double-size playfield to clear lines. - -* Field Climber: The player builds layers of blocks to help a tiny man reach the top of the screen. The mode is time-based, and also available in online versus play. - -* Shadow: Players must race to fill in a background image with Tetriminos, while not allowing any pieces to lie outside of the puzzle in the process. This mode features new Tetrimino shapes and a total of 30 puzzles. - -* Stage Racer: The player guides a single Tetrimino downward through a narrow twisting passage, being sure not to get it caught on the sides. - -* Dual Spaces: A Reversi-inspired mode where players section off empty space, and gain points for every empty space they lock out by placing their Tetriminos to create larger areas of their own color. - -The multiplayer versus modes support up to four players offline and six players online through the Nintendo Wi-Fi Connection and feature new powerups that utilize the pointer and motion functions of the Wii Remote. The game also keeps skill charts and statistics and features online leaderboards and more than 130 achievements for players to monitor their progress. - -The game does not include any multiplayer marathon modes. Garbage is not optional in multiplayer mode as it was in The New Tetris. - -Both Hudson and Tetris Online have organized tournaments for players of the game, with the first held in December 2008.
Each tournament involves the different game modes in Tetris Party, with the first and third tournaments featuring four rounds with four different game modes contested in each. - -There were a total of four tournaments; the top 500 players in the first and third tournaments and the top 100 in the second and fourth tournaments were credited with 1,200 Nintendo Points to use in either the Wii Shop or DSiWare Shop. - -Tetris Party received "generally favorable reviews" according to the review aggregation website Metacritic. IGN called it "the best console version [of Tetris] we've seen in years" and a "must buy", though they were slightly disappointed in the somewhat limited online multiplayer. Nintendo Life praised the online play in general and called it "the most robust online Tetris experience money can buy" next to Tetris DS, though they were also disappointed in the limited online play modes and felt the game is geared more towards local multiplayer than towards solo players. Official Nintendo Magazine was very impressed by the "addictive online play" and commented on the "great variety of modes". They also loved the "classic Tetris gameplay" and thought it was "good value for your points." However, they did mark it down as they thought the "Balance Board mode was a bit gimmicky." - -The game was nominated for multiple Wii-specific awards by IGN in its 2008 video game awards, including Best WiiWare Game and Best Puzzle Game. - -The DS version of Tetris Party Deluxe received "generally favorable reviews", while the Wii version received "average" reviews, according to Metacritic. In Japan, Famitsu gave it a score of one eight and three sevens for the DS version, and one eight, one six, one seven, and one eight for the Wii version. - -At the time of release, Tetris Party Live received "average" reviews according to Metacritic. diff --git a/wiki/wikipedia/3694.txt b/wiki/wikipedia/3694.txt deleted file mode 100644 index 8ad4b45fd5f478f157807e6ddc2d7f250de1db8a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3694.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematical logic, the Barwise compactness theorem, named after Jon Barwise, is a generalization of the usual compactness theorem for first-order logic to a certain class of infinitary languages. It was stated and proved by Barwise in 1967. - -Let $A$ be a countable admissible set. Let $L$ be an $A$-finite relational language. Suppose $\Gamma$ is a set of $L_A$-sentences, where $\Gamma$ is a $\Sigma_1$ set with parameters from $A$, and every $A$-finite subset of $\Gamma$ is satisfiable. Then $\Gamma$ is satisfiable. diff --git a/wiki/wikipedia/3695.txt b/wiki/wikipedia/3695.txt deleted file mode 100644 index 59be350020e43dd8dace49fb99f39df3179ebaa5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3695.txt +++ /dev/null @@ -1,26 +0,0 @@ -In mathematics and probability theory, Skorokhod's embedding theorem is either or both of two theorems that allow one to regard any suitable collection of random variables as a Wiener process (Brownian motion) evaluated at a collection of stopping times. Both results are named for the Ukrainian mathematician A. V. Skorokhod. - -Let $X$ be a real-valued random variable with expected value 0 and finite variance; let $W$ denote a canonical real-valued Wiener process.
Then there is a stopping time (with respect to the natural filtration of $W$), $\tau$, such that $W_{\tau}$ has the same distribution as $X$, -$$ -\operatorname{E}[\tau] = \operatorname{E}[X^2] -$$ - -and -$$ -\operatorname{E}[\tau^2] \leq 4 \operatorname{E}[X^4]. -$$ - -Let $X_1, X_2, \ldots$ be a sequence of independent and identically distributed random variables, each with expected value 0 and finite variance, and let -$$ -S_n = X_1 + \cdots + X_n. -$$ - -Then there is a sequence of stopping times $\tau_1 \leq \tau_2 \leq \cdots$ such that the $W_{\tau_{n}}$ have the same joint distributions as the partial sums $S_n$ and $\tau_1, \tau_2 - \tau_1, \tau_3 - \tau_2, \ldots$ are independent and identically distributed random variables satisfying -$$ -\operatorname{E}[\tau_n - \tau_{n - 1}] = \operatorname{E}[X_1^2] -$$ - -and -$$ -\operatorname{E}[(\tau_{n} - \tau_{n - 1})^2] \leq 4 \operatorname{E}[X_1^4]. -$$ diff --git a/wiki/wikipedia/3696.txt b/wiki/wikipedia/3696.txt deleted file mode 100644 index d986faf23cf54a3f189a9757b828a07b551a4ce4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3696.txt +++ /dev/null @@ -1,257 +0,0 @@ -Dijkstra's algorithm is an algorithm for finding the shortest paths between nodes in a graph, which may represent, for example, road networks. It was conceived by computer scientist Edsger W. Dijkstra in 1956 and published three years later. - -The algorithm exists in many variants. Dijkstra's original algorithm found the shortest path between two given nodes, but a more common variant fixes a single node as the "source" node and finds shortest paths from the source to all other nodes in the graph, producing a shortest-path tree. - -For a given source node in the graph, the algorithm finds the shortest path between that node and every other. - -Dijkstra's algorithm uses a data structure for storing and querying partial solutions sorted by distance from the start. While the original algorithm uses a min-priority queue and runs in time $\Theta((|V| + |E|) \log |V|)$ (where $|V|$ is the number of nodes and $|E|$ is the number of edges), it can also be implemented in $\Theta(|V|^2)$ using an array. The idea of this algorithm is also given in Leyzorek et al. (1957). Fredman & Tarjan (1984) propose using a Fibonacci heap min-priority queue to optimize the running time complexity to $\Theta(|E|+|V|\log|V|)$. This is asymptotically the fastest known single-source shortest-path algorithm for arbitrary directed graphs with unbounded non-negative weights. However, specialized cases (such as bounded/integer weights, directed acyclic graphs etc.) can indeed be improved further as detailed in Specialized variants. Additionally, if preprocessing is allowed, algorithms such as contraction hierarchies can be up to seven orders of magnitude faster. - -In some fields, artificial intelligence in particular, Dijkstra's algorithm or a variant of it is known as uniform cost search and formulated as an instance of the more general idea of best-first search. - -Dijkstra thought about the shortest path problem when working at the Mathematical Center in Amsterdam in 1956 as a programmer to demonstrate the capabilities of a new computer called ARMAC. His objective was to choose both a problem and a solution (that would be produced by computer) that non-computing people could understand. He designed the shortest path algorithm and later implemented it for ARMAC for a slightly simplified transportation map of 64 cities in the Netherlands (64, so that 6 bits would be sufficient to encode the city number).
Dijkstra published the algorithm in 1959, two years after Prim and 29 years after Jarník. - -Let the node at which we are starting be called the initial node. Let the distance of node Y be the distance from the initial node to Y. Dijkstra's algorithm will initially start with infinite distances and will try to improve them step by step. - -# Mark all nodes unvisited. Create a set of all the unvisited nodes called the unvisited set. - -# Assign to every node a tentative distance value: set it to zero for our initial node and to infinity for all other nodes. The tentative distance of a node v is the length of the shortest path discovered so far between the node v and the starting node. Since initially no path is known to any vertex other than the source itself (which is a path of length zero), all other tentative distances are initially set to infinity. Set the initial node as current. - -# For the current node, consider all of its unvisited neighbors and calculate their tentative distances through the current node. Compare the newly calculated tentative distance to the current assigned value and assign the smaller one. For example, if the current node A is marked with a distance of 6, and the edge connecting it with a neighbor B has length 2, then the distance to B through A will be 6 + 2 = 8. If B was previously marked with a distance greater than 8 then change it to 8. Otherwise, the current value will be kept. - -# When we are done considering all of the unvisited neighbors of the current node, mark the current node as visited and remove it from the unvisited set. A visited node will never be checked again. - -# If the destination node has been marked visited (when planning a route between two specific nodes) or if the smallest tentative distance among the nodes in the unvisited set is infinity (when planning a complete traversal; occurs when there is no connection between the initial node and remaining unvisited nodes), then stop. The algorithm has finished. - -# Otherwise, select the unvisited node that is marked with the smallest tentative distance, set it as the new current node, and go back to step 3. - -When planning a route, it is actually not necessary to wait until the destination node is "visited" as above: the algorithm can stop once the destination node has the smallest tentative distance among all "unvisited" nodes (and thus could be selected as the next "current"). - -Suppose you would like to find the shortest path between two intersections on a city map: a starting point and a destination. Dijkstra's algorithm initially marks the distance (from the starting point) to every other intersection on the map with infinity. This is done not to imply that there is an infinite distance, but to note that those intersections have not been visited yet. Some variants of this method leave the intersections' distances unlabeled. Now select the current intersection at each iteration. For the first iteration, the current intersection will be the starting point, and the distance to it (the intersection's label) will be zero. For subsequent iterations (after the first), the current intersection will be a closest unvisited intersection to the starting point (this will be easy to find).
This is done by adding the current intersection's distance value to the length of the road connecting it to the unvisited intersection, and then relabeling the unvisited intersection with this sum if it is less than the unvisited intersection's current value. In effect, the intersection is relabeled if the path to it through the current intersection is shorter than the previously known paths. To facilitate shortest path identification, in pencil, mark the road with an arrow pointing to the relabeled intersection if you label/relabel it, and erase all others pointing to it. After you have updated the distances to each neighboring intersection, mark the current intersection as visited and select an unvisited intersection with minimal distance (from the starting point) – or the lowest label – as the current intersection. Intersections marked as visited are labeled with the shortest path from the starting point to them and will not be revisited or returned to. - -Continue this process of updating the neighboring intersections with the shortest distances, marking the current intersection as visited, and moving on to a closest unvisited intersection until you have marked the destination as visited. Once you have marked the destination as visited (as is the case with any visited intersection), you have determined the shortest path to it from the starting point and can trace your way back following the arrows in reverse. In the algorithm's implementations, this is usually done (after the algorithm has reached the destination node) by following the nodes' parents from the destination node up to the starting node; that's why we also keep track of each node's parent. - -This algorithm makes no attempt at direct "exploration" towards the destination as one might expect. Rather, the sole consideration in determining the next "current" intersection is its distance from the starting point. This algorithm therefore expands outward from the starting point, iteratively considering every node that is closer in terms of shortest path distance until it reaches the destination. When understood in this way, it is clear how the algorithm necessarily finds the shortest path. However, it may also reveal one of the algorithm's weaknesses: its relative slowness in some topologies. - -In the following pseudocode algorithm, dist is an array that contains the current distances from the source to other vertices, i.e. dist[u] is the current distance from the source to the vertex u. The prev array contains pointers to previous-hop nodes on the shortest path from source to the given vertex (equivalently, it is the next-hop on the path from the given vertex to the source). The code u ← vertex in Q with min dist[u] searches for the vertex u in the vertex set Q that has the least dist[u] value. length(u, v) returns the length of the edge joining (i.e. the distance between) the two neighbor-nodes u and v. The variable alt on line 17 is the length of the path from the root node to the neighbor node v if it were to go through u. If this path is shorter than the current shortest path recorded for v, that current path is replaced with this alt path.
- -1 function Dijkstra(Graph, source): - -2 - -3 create vertex set Q - -4 - -5 for each vertex v in Graph: - -6 dist[v] ← INFINITY - -7 prev[v] ← UNDEFINED - -8 add v to Q - -9 dist[source] ← 0 - -10 - -11 while Q is not empty: - -12 u ← vertex in Q with min dist[u] - -13 - -14 remove u from Q - -15 - -16 for each neighbor v of u still in Q: - -17 alt ← dist[u] + length(u, v) - -18 if alt < dist[v]: - -19 dist[v] ← alt - -20 prev[v] ← u - -21 - -22 return dist[], prev[] - -If we are only interested in a shortest path between vertices source and target, we can terminate the search after line 15 if u = target. - -Now we can read the shortest path from source to target by reverse iteration: - -1 S ← empty sequence - -2 u ← target - -3 if prev[u] is defined or u = source: // Do something only if the vertex is reachable - -4 while u is defined: // Construct the shortest path with a stack S - -5 insert u at the beginning of S // Push the vertex onto the stack - -6 u ← prev[u] // Traverse from target to source - -Now sequence S is the list of vertices constituting one of the shortest paths from source to target, or the empty sequence if no path exists. - -A more general problem would be to find all the shortest paths between source and target (there might be several different ones of the same length). Then instead of storing only a single node in each entry of prev[] we would store all nodes satisfying the relaxation condition. For example, if both r and source connect to target and both of them lie on different shortest paths through target (because the edge cost is the same in both cases), then we would add both r and source to prev[target]. When the algorithm completes, prev[] data structure will actually describe a graph that is a subset of the original graph with some edges removed. Its key property will be that if the algorithm was run with some starting node, then every path from that node to any other node in the new graph will be the shortest path between those nodes in the original graph, and all paths of that length from the original graph will be present in the new graph. Then to actually find all these shortest paths between two given nodes we would use a path finding algorithm on the new graph, such as depth-first search. - -A min-priority queue is an abstract data type that provides 3 basic operations: add_with_priority(), decrease_priority() and extract_min(). As mentioned earlier, using such a data structure can lead to faster computing times than using a basic queue. Notably, Fibonacci heap or Brodal queue offer optimal implementations for those 3 operations. 
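As a concrete illustration of those three operations, the sketch below (Python; an illustrative implementation using the standard `heapq` module with the common "lazy deletion" idiom, not a true decrease-key structure like a Fibonacci heap) provides add_with_priority, decrease_priority and extract_min:

```python
import heapq

class MinPQ:
    """Min-priority queue; decrease_priority is emulated by lazy deletion."""
    def __init__(self):
        self._heap = []   # (priority, item) pairs, possibly stale
        self._best = {}   # item -> best (lowest) priority seen so far

    def add_with_priority(self, item, priority):
        self._best[item] = priority
        heapq.heappush(self._heap, (priority, item))

    def decrease_priority(self, item, priority):
        if priority < self._best[item]:
            self._best[item] = priority
            heapq.heappush(self._heap, (priority, item))  # old entry goes stale

    def extract_min(self):
        while self._heap:
            priority, item = heapq.heappop(self._heap)
            if self._best.get(item) == priority:          # skip stale entries
                del self._best[item]
                return item, priority
        raise IndexError("extract_min from empty queue")

    def __bool__(self):
        return bool(self._best)
```

The trade-off relative to a Fibonacci heap is that superseded entries stay in the heap until popped, so the structure can hold more than $|V|$ entries, but each operation is a plain $O(\log n)$ heap operation and the constant factors are small.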
As the algorithm is slightly different, we mention it here, in pseudo-code as well: - -1 function Dijkstra(Graph, source): - -2 dist[source] ← 0 // Initialization - -3 - -4 create vertex priority queue Q - -5 - -6 for each vertex v in Graph: - -7 if v ≠ source - -8 dist[v] ← INFINITY // Unknown distance from source to v - -9 prev[v] ← UNDEFINED // Predecessor of v - -10 - -11 Q.add_with_priority(v, dist[v]) - -12 - -13 - -14 while Q is not empty: // The main loop - -15 u ← Q.extract_min() // Remove and return best vertex - -16 for each neighbor v of u: // only v that are still in Q - -17 alt ← dist[u] + length(u, v) - -18 if alt < dist[v] - -19 dist[v] ← alt - -20 prev[v] ← u - -21 Q.decrease_priority(v, alt) - -22 - -23 return dist, prev - -Instead of filling the priority queue with all nodes in the initialization phase, it is also possible to initialize it to contain only source; then, inside the if alt < dist[v] block, the decrease_priority() becomes an add_with_priority() operation if the node is not already in the queue. - -These alternatives can use entirely array-based priority queues without decrease-key functionality, which have been found to achieve even faster computing times in practice. However, the difference in performance was found to be narrower for denser graphs. - -The proof of Dijkstra's algorithm is by induction on the number of visited nodes. - -Invariant hypothesis: For each node v, dist[v] is the shortest distance from source to v when traveling via visited nodes only, or infinity if no such path exists. (Note: we do not assume dist[v] is the actual shortest distance for unvisited nodes.) - -The base case is when there is just one visited node, namely the initial node source, in which case the hypothesis is trivial. - -Otherwise, assume the hypothesis for n-1 visited nodes. In that case, we choose an edge vu where u has the least dist[u] of any unvisited node and the edge vu is such that dist[u] = dist[v] + length[v,u]. dist[u] is considered to be the shortest distance from source to u because if there were a shorter path, and if w were the first unvisited node on that path, then by the invariant hypothesis dist[w] would be smaller than dist[u], contradicting the choice of u as the unvisited node with least tentative distance. Similarly, if there were a shorter path to u using no unvisited nodes, and if the last-but-one node on that path were w, then we would have had dist[u] ≤ dist[w] + length[w,u], since dist[u] was updated when w was processed; this is also a contradiction. - -After processing u it will still be true that for each unvisited node w, dist[w] will be the shortest distance from source to w using visited nodes only, because if there were a shorter path that doesn't go by u we would have found it previously, and if there were a shorter path using u we would have updated it when processing u. - -After all nodes are visited, the shortest path from source to any node v consists only of visited nodes, therefore dist[v] is the shortest distance. - -Bounds of the running time of Dijkstra's algorithm on a graph with edges E and vertices V can be expressed as a function of the number of edges, denoted $|E|$, and the number of vertices, denoted $|V|$, using big-O notation. The complexity bound depends mainly on the data structure used to represent the set Q. In the following, upper bounds can be simplified because $|E|$ is $O(|V|^2)$ for any graph, but that simplification disregards the fact that in some problems, other upper bounds on $|E|$ may hold.
- -For any data structure for the vertex set Q, the running time is in -$$ -\Theta(|E| \cdot T_\mathrm{dk} + |V| \cdot T_\mathrm{em}), -$$ - -where $T_\mathrm{dk}$ and $T_\mathrm{em}$ are the complexities of the decrease-key and extract-minimum operations in Q, respectively. - -The simplest version of Dijkstra's algorithm stores the vertex set Q as a linked list or array, and edges as an adjacency list or matrix. In this case, extract-minimum is simply a linear search through all vertices in Q, so the running time is $\Theta(|E| + |V|^2) = \Theta(|V|^2)$. - -For sparse graphs, that is, graphs with far fewer than $|V|^2$ edges, Dijkstra's algorithm can be implemented more efficiently by storing the graph in the form of adjacency lists and using a self-balancing binary search tree, binary heap, pairing heap, or Fibonacci heap as a priority queue to implement extracting minimum efficiently. To perform decrease-key steps in a binary heap efficiently, it is necessary to use an auxiliary data structure that maps each vertex to its position in the heap, and to keep this structure up to date as the priority queue Q changes. With a self-balancing binary search tree or binary heap, the algorithm requires -$$ -\Theta((|E| + |V|) \log |V|) -$$ - -time in the worst case (where $\log$ denotes the binary logarithm $\log_2$); for connected graphs this time bound can be simplified to $\Theta( | E | \log | V | )$. The Fibonacci heap improves this to -$$ -\Theta(|E| + |V| \log|V|). -$$ - -When using binary heaps, the average case time complexity is lower than the worst-case: assuming edge costs are drawn independently from a common probability distribution, the expected number of decrease-key operations is bounded by $\Theta(|V| \log (|E|/|V|))$, giving a total running time of -$$ -O\left(|E| + |V| \log \frac{|E|}{|V|} \log |V|\right). -$$ - -In common presentations of Dijkstra's algorithm, initially all nodes are entered into the priority queue. This is, however, not necessary: the algorithm can start with a priority queue that contains only one item, and insert new items as they are discovered (instead of doing a decrease-key, check whether the key is in the queue; if it is, decrease its key, otherwise insert it). This variant has the same worst-case bounds as the common variant, but maintains a smaller priority queue in practice, speeding up the queue operations. - -Moreover, not inserting all nodes in a graph makes it possible to extend the algorithm to find the shortest path from a single source to the closest of a set of target nodes on infinite graphs or those too large to represent in memory.
The resulting algorithm is called uniform-cost search (UCS) in the artificial intelligence literature and can be expressed in pseudocode as - -procedure uniform_cost_search(start) is - -node ← start - -frontier ← priority queue containing node only - -explored ← empty set - -do - -if frontier is empty then - -return failure - -node ← frontier.pop() - -if node is a goal state then - -return solution(node) - -explored.add(node) - -for each of node's neighbors n do - -if n is not in explored and not in frontier then - -frontier.add(n) - -else if n is in frontier with higher cost - -replace existing node with n - -The complexity of this algorithm can be expressed in an alternative way for very large graphs: when C* is the length of the shortest path from the start node to any node satisfying the "goal" predicate, each edge has cost at least ε, and the number of neighbors per node is bounded by b, then the algorithm's worst-case time and space complexity are both in $O(b^{1+\lfloor C^*/\varepsilon \rfloor})$. - -Further optimizations of Dijkstra's algorithm for the single-target case include bidirectional variants, goal-directed variants such as the A* algorithm, graph pruning to determine which nodes are likely to form the middle segment of shortest paths (reach-based routing), and hierarchical decompositions of the input graph that reduce s–t routing to connecting s and t to their respective "transit nodes" followed by shortest-path computation between these transit nodes using a "highway". - -Combinations of such techniques may be needed for optimal practical performance on specific problems. - -When arc weights are small integers (bounded by a parameter $C$), specialized queues which take advantage of this fact can be used to speed up Dijkstra's algorithm. The first algorithm of this type was Dial's algorithm for graphs with positive integer edge weights, which uses a bucket queue to obtain a running time $O(|E|+|V|C)$. The use of a Van Emde Boas tree as the priority queue brings the complexity to $O(|E|\log\log C)$. Another interesting variant based on a combination of a new radix heap and the well-known Fibonacci heap runs in time $O(|E|+|V|\sqrt{\log C})$. Finally, the best algorithms in this special case run in $O(|E|\log\log|V|)$ time and in $O(|E| + |V|\min\{(\log|V|)^{1/3+\varepsilon}, (\log C)^{1/4+\varepsilon}\})$ time. - -The functionality of Dijkstra's original algorithm can be extended with a variety of modifications. For example, sometimes it is desirable to present solutions which are less than mathematically optimal. To obtain a ranked list of less-than-optimal solutions, the optimal solution is first calculated. A single edge appearing in the optimal solution is removed from the graph, and the optimum solution to this new graph is calculated. Each edge of the original solution is suppressed in turn and a new shortest-path calculated. The secondary solutions are then ranked and presented after the first optimal solution. - -Dijkstra's algorithm is usually the working principle behind link-state routing protocols, OSPF and IS-IS being the most common ones. - -Unlike Dijkstra's algorithm, the Bellman–Ford algorithm can be used on graphs with negative edge weights, as long as the graph contains no negative cycle reachable from the source vertex s. The presence of such cycles means there is no shortest path, since the total weight becomes lower each time the cycle is traversed.
(This statement assumes that a "path" is allowed to repeat vertices. In graph theory that is normally not allowed. In theoretical computer science it often is allowed.) It is possible to adapt Dijkstra's algorithm to handle negative weight edges by combining it with the Bellman-Ford algorithm (to remove negative edges and detect negative cycles); the resulting method is known as Johnson's algorithm. - -The A* algorithm is a generalization of Dijkstra's algorithm that cuts down on the size of the subgraph that must be explored, if additional information is available that provides a lower bound on the "distance" to the target. This approach can be viewed from the perspective of linear programming: there is a natural linear program for computing shortest paths, and solutions to its dual linear program are feasible if and only if they form a consistent heuristic (speaking roughly, since the sign conventions differ from place to place in the literature). This feasible dual / consistent heuristic defines a non-negative reduced cost, and A* is essentially running Dijkstra's algorithm with these reduced costs. If the dual satisfies the weaker condition of admissibility, then A* is instead more akin to the Bellman–Ford algorithm. - -The process that underlies Dijkstra's algorithm is similar to the greedy process used in Prim's algorithm. Prim's purpose is to find a minimum spanning tree that connects all nodes in the graph; Dijkstra's is concerned with only two nodes. Prim's does not evaluate the total weight of the path from the starting node, only the individual edges. - -Breadth-first search can be viewed as a special case of Dijkstra's algorithm on unweighted graphs, where the priority queue degenerates into a FIFO queue. - -The fast marching method can be viewed as a continuous version of Dijkstra's algorithm which computes the geodesic distance on a triangle mesh. - -From a dynamic programming point of view, Dijkstra's algorithm is a successive approximation scheme that solves the dynamic programming functional equation for the shortest path problem by the Reaching method. - -In fact, Dijkstra's explanation of the logic behind the algorithm is a paraphrasing of Bellman's famous Principle of Optimality in the context of the shortest path problem. - -Least-cost paths are calculated for instance to establish tracks of electricity lines or oil pipelines. The algorithm has also been used to calculate optimal long-distance footpaths in Ethiopia and contrast them with the situation on the ground. diff --git a/wiki/wikipedia/3697.txt b/wiki/wikipedia/3697.txt deleted file mode 100644 index b755c61840e59ddae174580f8fe7e7554a36fe42..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3697.txt +++ /dev/null @@ -1,11 +0,0 @@ -Circle packing in a circle is a two-dimensional packing problem with the objective of packing unit circles into the smallest possible larger circle. - -If more than one equivalent solution exists, all are shown. - -Only 26 optimal packings are thought to be rigid (with no circles able to "rattle"). Numbers in bold are prime: - -* Proven for n = 1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 19 - -* Conjectured for n = 14, 15, 16, 17, 18, 22, 23, 27, 30, 31, 33, 37, 61, 91 - -Of these, solutions for n = 2, 3, 4, 7, 19, and 37 achieve a packing density greater than any smaller number > 1. (Higher density records all have rattles.)
diff --git a/wiki/wikipedia/3698.txt b/wiki/wikipedia/3698.txt deleted file mode 100644 index 74b97c03cac4649b44d703df0e0fbbd74c8255df..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3698.txt +++ /dev/null @@ -1,23 +0,0 @@ -The Game of the Amazons (in Spanish, El Juego de las Amazonas; often called Amazons for short) is a two-player abstract strategy game invented in 1988 by Walter Zamkauskas of Argentina. The game is played by moving pieces and blocking the opponent from squares, and the last player able to move is the winner. It is a member of the territorial game family, a distant relative of Go and chess. - -The Game of the Amazons is played on a 10x10 chessboard (or an international checkerboard). Some players prefer to use a monochromatic board. The two players are White and Black; each player has four amazons (not to be confused with the amazon fairy chess piece), which start on the board in the configuration shown at right. A supply of markers (checkers, poker chips, etc.) is also required. - -White moves first, and the players alternate moves thereafter. Each move consists of two parts. First, one moves one of one's own amazons one or more empty squares in a straight line (orthogonally or diagonally), exactly as a queen moves in chess; it may not cross or enter a square occupied by an amazon of either color or an arrow. Second, after moving, the amazon shoots an arrow from its landing square to another square, using another queenlike move. This arrow may travel in any orthogonal or diagonal direction (even backwards along the same path the amazon just traveled, into or across the starting square if desired). An arrow, like an amazon, cannot cross or enter a square where another arrow has landed or an amazon of either color stands. The square where the arrow lands is marked to show that it can no longer be used. The last player to be able to make a move wins. Draws are impossible. - -The strategy of the game is based on using arrows (as well as one's four amazons) to block the movement of the opponent's amazons and gradually wall off territory, trying to trap the opponents in smaller regions and gain larger areas for oneself. Each move reduces the available playing area, and eventually each amazon finds itself in a territory blocked off from all other amazons. The amazon can then move about its territory firing arrows until it no longer has any room to move. Since it would be tedious to actually play out all these moves, in practice the game usually ends when all of the amazons are in separate territories. The player with the largest amount of territory will be able to win, as the opponent will have to fill in her own territory more quickly. - -Scores are sometimes used for tie-breaking purposes in Amazons tournaments. When scoring, it is important to note that although the number of moves remaining to a player is usually equal to the number of empty squares in the territories occupied by that player's amazons, it is nonetheless possible to have defective territories in which there are fewer moves left than there are empty squares. The simplest such territory is three squares of the same colour, not in a straight line, with the amazon in the middle (for example, a1+b2+c1 with the amazon at b2). - -El Juego de las Amazonas was first published in Spanish in the Argentine puzzle magazine El Acertijo in December 1992. An approved English translation written by Michael Keller appeared in World Game Review in January 1994.
and an updated version with graphics in Visual Basic in 1995. There are Amazons tournaments at the Computer Olympiad, a series of computer-versus-computer competitions. - -El Juego de las Amazonas (The Game of the Amazons) is a trademark of Ediciones de Mente. - -Usually, in the endgame, the board is partitioned into separate "royal chambers", with queens inside each chamber. We define simple Amazons endgames to be endgames where each chamber has at most one queen. - -Determining who wins in a simple Amazons endgame is NP-hard. This is proven by a reduction from finding a Hamiltonian path in a cubic subgraph of the square grid graph. - -Generalized Amazons (that is, determining the winner of a game of Amazons played on a n x n grid, started from an arbitrary configuration) is PSPACE-complete. This can be proved in two ways. - -* The first way is by reducing a generalized Hex position, which is known to be PSPACE-complete, into an Amazons position. - -* The second way is by reducing a certain kind of generalized geography called GEOGRAPHY-BP3, which is PSPACE-complete, to an Amazons position. This Amazons position uses only one black queen and one white queen, thus showing that generalized Amazons is PSPACE-complete even if only one queen on each side is allowed. diff --git a/wiki/wikipedia/3699.txt b/wiki/wikipedia/3699.txt deleted file mode 100644 index 7dc1eca1144e8fd754b80121a82e071b8154c707..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3699.txt +++ /dev/null @@ -1,29 +0,0 @@ -In mathematics, the Cartan–Hadamard theorem is a statement in Riemannian geometry concerning the structure of complete Riemannian manifolds of non-positive sectional curvature. The theorem states that the universal cover of such a manifold is diffeomorphic to a Euclidean space via the exponential map at any point. It was first proved by Hans Carl Friedrich von Mangoldt for surfaces in 1881, and independently by Jacques Hadamard in 1898. Élie Cartan generalized the theorem to Riemannian manifolds in 1928. The theorem was further generalized to a wide class of metric spaces by Mikhail Gromov in 1987; detailed proofs were published by Ballmann for metric spaces of non-positive curvature and by Alexander for general locally convex metric spaces. - -The Cartan–Hadamard theorem in conventional Riemannian geometry asserts that the universal covering space of a connected complete Riemannian manifold of non-positive sectional curvature is diffeomorphic to Rn. In fact, for complete manifolds of non-positive curvature the exponential map based at any point of the manifold is a covering map. - -The theorem holds also for Hilbert manifolds in the sense that the exponential map of a non-positively curved geodesically complete connected manifold is a covering map. Completeness here is understood in the sense that the exponential map is defined on the whole tangent space of a point. - -In metric geometry, the Cartan–Hadamard theorem is the statement that the universal cover of a connected non-positively curved complete metric space X is a Hadamard space. In particular, if X is simply connected then it is a geodesic space in the sense that any two points are connected by a unique minimizing geodesic, and hence contractible.
- -A metric space X is said to be non-positively curved if every point p has a neighborhood U in which any two points are joined by a geodesic, and for any point z in U and constant speed geodesic γ in U, one has -$$ - d(z,\gamma(1/2))^2 \le \frac{1}{2}d(z,\gamma(0))^2 + \frac{1}{2}d(z,\gamma(1))^2 - \frac{1}{4}d(\gamma(0),\gamma(1))^2. -$$ - -This inequality may be usefully thought of in terms of a geodesic triangle Δ = zγ(0)γ(1). The left-hand side is the square distance from the vertex z to the midpoint of the opposite side. The right-hand side represents the square distance from the vertex to the midpoint of the opposite side in a Euclidean triangle having the same side lengths as Δ. This condition, called the CAT(0) condition, is an abstract form of Toponogov's triangle comparison theorem. - -The assumption of non-positive curvature can be weakened, although with a correspondingly weaker conclusion. Call a metric space X convex if, for any two constant speed minimizing geodesics a(t) and b(t), the function -$$ -t\mapsto d(a(t),b(t)) -$$ - -is a convex function of t. A metric space is then locally convex if every point has a neighborhood that is convex in this sense. The Cartan–Hadamard theorem for locally convex spaces states: - -* If X is a locally convex complete connected metric space, then the universal cover of X is a convex geodesic space with respect to the induced length metric d. - -In particular, the universal covering of such a space is contractible. The convexity of the distance function along a pair of geodesics is a well-known consequence of non-positive curvature of a metric space, but it is not equivalent. - -The Cartan–Hadamard theorem provides an example of a local-to-global correspondence in Riemannian and metric geometry: namely, a local condition (non-positive curvature) and a global condition (simple-connectedness) together imply a strong global property (contractibility); or in the Riemannian case, diffeomorphism with Rn. - -The metric form of the theorem demonstrates that a non-positively curved polyhedral cell complex is aspherical. This fact is of crucial importance for modern geometric group theory. diff --git a/wiki/wikipedia/37.txt b/wiki/wikipedia/37.txt deleted file mode 100644 index 11ac89ec1878c33e6e8ec1573104128fba2cd5f7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/37.txt +++ /dev/null @@ -1,106 +0,0 @@ -Purchasing power parity (PPP) rests on a simple idea: ideally, a computer in New York and in Hong Kong should have the same price. If its price is 500 US dollars in New York and the same computer costs 2000 HK dollars in Hong Kong, PPP theory says the exchange rate should be 4 HK dollars for every 1 US dollar. - -Poverty, tariffs, transportation and other frictions prevent trading and purchasing of various goods, so measuring a single good can cause a large error. The PPP term accounts for this by using a basket of goods, that is, many goods with different quantities. PPP then computes an inflation and exchange rate as the ratio of the price of the basket in one location to the price of the basket in the other location. For example, if a basket consisting of 1 computer, 1 ton of rice, and 1 ton of steel was 1800 US dollars in New York and the same goods cost 10800 HK dollars in Hong Kong, the PPP exchange rate would be 6 HK dollars for every 1 US dollar. - -The name purchasing power parity comes from the idea that, with the right exchange rate, consumers in every location will have the same purchasing power.
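To make the basket computation concrete, here is a minimal Python sketch of a PPP exchange rate computed from a basket of goods. It reproduces the computer/rice/steel example above; the per-good price splits (beyond the computer at 500 USD / 2000 HKD) are illustrative assumptions chosen only to match the stated basket totals of 1800 USD and 10800 HKD.

```python
def ppp_exchange_rate(quantities, prices_a, prices_b):
    """PPP rate (units of currency B per unit of currency A) from a basket.

    quantities: dict good -> quantity in the basket
    prices_a, prices_b: dict good -> unit price in each location
    """
    cost_a = sum(q * prices_a[g] for g, q in quantities.items())
    cost_b = sum(q * prices_b[g] for g, q in quantities.items())
    return cost_b / cost_a

basket = {"computer": 1, "rice_ton": 1, "steel_ton": 1}
# Rice and steel prices below are assumed splits of the totals in the text.
new_york = {"computer": 500, "rice_ton": 700, "steel_ton": 600}      # 1800 USD total
hong_kong = {"computer": 2000, "rice_ton": 4400, "steel_ton": 4400}  # 10800 HKD total

print(ppp_exchange_rate(basket, new_york, hong_kong))  # 6.0 HKD per USD
```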
- -The value of the PPP exchange rate is very dependent on the basket of goods chosen. In general, goods are chosen that might closely obey the law of one price: goods that are traded easily and are commonly available in both locations. Organizations that compute PPP exchange rates use different baskets of goods and can come up with different values. - -The PPP exchange rate may not match the market exchange rate. The market rate is more volatile because it reacts to changes in demand at each location. Also, tariffs and differences in the price of labour (see Balassa–Samuelson theorem) can contribute to longer term differences between the two rates. One use of PPP is to predict longer term exchange rates. - -Because PPP exchange rates are more stable and are less affected by tariffs, they are used for many international comparisons, such as comparing countries' GDPs or other national income statistics. These numbers often come with the label PPP-adjusted. - -There can be marked differences between purchasing power adjusted incomes and those converted via market exchange rates. A well-known purchasing power adjustment is the Geary–Khamis dollar (the international dollar). The World Bank's World Development Indicators 2005 estimated that in 2003, one Geary–Khamis dollar was equivalent to about 1.8 Chinese yuan by purchasing power parity—considerably different from the nominal exchange rate. This discrepancy has large implications; for instance, when converted via the nominal exchange rates GDP per capita in India is about US$1,965 while on a PPP basis it is about US$7,197. At the other extreme, Denmark's nominal GDP per capita is around US$53,242, but its PPP figure is US$46,602, in line with other developed nations. - -There are variations to calculating PPP. The EKS method (developed by Ö. Éltető, P. Köves and B. Szulc) uses the geometric mean of the exchange rates computed for individual goods. The EKS-S method (by Éltető, Köves, Szulc, and Sergeev) uses two different baskets, one for each country, and then averages the result. While these methods work for two countries, the exchange rates may be inconsistent if applied to three countries, so further adjustment may be necessary so that the rate from currency A to B times the rate from B to C equals the rate from A to C. - -Relative PPP is a weaker statement based on the law of one price, covering changes in the exchange rate and inflation rates. It seems to mirror the exchange rate more closely than PPP does. - -Purchasing power parity exchange rates are used when comparing national production and consumption and in other places where the prices of non-traded goods are considered important. (Market exchange rates are used for individual goods that are traded.) PPP rates are more stable over time and can be used when that attribute is important. - -PPP exchange rates help costing but exclude profits and above all do not consider the different quality of goods among countries. The same product, for instance, can have a different level of quality and even safety in different countries, and may be subject to different taxes and transport costs. Since market exchange rates fluctuate substantially, when the GDP of one country measured in its own currency is converted to the other country's currency using market exchange rates, one country might be inferred to have higher real GDP than the other country in one year but lower in the other; both of these inferences would fail to reflect the reality of their relative levels of production.
But if one country's GDP is converted into the other country's currency using PPP exchange rates instead of observed market exchange rates, the false inference will not occur. Essentially GDP measured at PPP controls for the different costs of living and price levels, usually relative to the United States dollar, enabling a more accurate estimate of a nation's level of production. - -The exchange rate reflects transaction values for traded goods between countries in contrast to non-traded goods, that is, goods produced for home-country use. Also, currencies are traded for purposes other than trade in goods and services, e.g., to buy capital assets whose prices vary more than those of physical goods. Also, different interest rates, speculation, hedging or interventions by central banks can influence the foreign-exchange market. - -The PPP method is used as an alternative to correct for possible statistical bias. The Penn World Table is a widely cited source of PPP adjustments, and the associated Penn effect reflects such a systematic bias in using exchange rates to compare outputs among countries. - -For example, if the value of the Mexican peso falls by half compared to the US dollar, the Mexican gross domestic product measured in dollars will also halve. However, this exchange rate results from international trade and financial markets. It does not necessarily mean that Mexicans are poorer by a half; if incomes and prices measured in pesos stay the same, they will be no worse off assuming that imported goods are not essential to the quality of life of individuals. Measuring income in different countries using PPP exchange rates helps to avoid this problem, as the metric gives an understanding of relative wealth regarding local goods and services at domestic markets. On the other hand, it is a poor measure of the relative cost of goods and services at international markets. The reason is that it does not take into account how much US$1 is worth in a respective country. Using the above-mentioned example: at an international market Mexicans can buy less than Americans after the fall of their currency, though their GDP measured at PPP has changed little. - -PPP exchange rates are never entirely without value, because market exchange rates tend to move in their general direction over a period of years. There is some value to knowing in which direction the exchange rate is more likely to shift over the long run. - -In neoclassical economic theory, the purchasing power parity theory assumes that the exchange rate between two currencies actually observed in the foreign exchange market is the one that is used in the purchasing power parity comparisons, so that the same amount of goods could actually be purchased in either currency with the same beginning amount of funds. Depending on the particular theory, purchasing power parity is assumed to hold either in the long run or, more strongly, in the short run. Theories that invoke purchasing power parity assume that in some circumstances a fall in either currency's purchasing power (a rise in its price level) would lead to a proportional decrease in that currency's valuation on the foreign exchange market. - -PPP exchange rates are especially useful when official exchange rates are artificially manipulated by governments. Countries with strong government control of the economy sometimes enforce official exchange rates that make their own currency artificially strong. By contrast, the currency's black market exchange rate is artificially weak.
In such cases, a PPP exchange rate is likely the most realistic basis for economic comparison. Similarly, when exchange rates deviate significantly from their long term equilibrium due to speculative attacks or carry trade, a PPP exchange rate offers a better alternative for comparison. - -In 2011, the Big Mac Index was used to identify manipulation of inflation numbers by Argentina. - -The PPP exchange-rate calculation is controversial because of the difficulties of finding comparable baskets of goods to compare purchasing power across countries. - -Estimation of purchasing power parity is complicated by the fact that countries do not simply differ in a uniform price level; rather, the difference in food prices may be greater than the difference in housing prices, while also less than the difference in entertainment prices. People in different countries typically consume different baskets of goods. It is necessary to compare the cost of baskets of goods and services using a price index. This is a difficult task because purchasing patterns and even the goods available to purchase differ across countries. - -Thus, it is necessary to make adjustments for differences in the quality of goods and services. Furthermore, the basket of goods representative of one economy will vary from that of another: Americans eat more bread; Chinese more rice. Hence a PPP calculated using the US consumption as a base will differ from that calculated using China as a base. Additional statistical difficulties arise with multilateral comparisons when (as is usually the case) more than two countries are to be compared. - -Various ways of averaging bilateral PPPs can provide a more stable multilateral comparison, but at the cost of distorting bilateral ones. These are all general issues of indexing; as with other price indices there is no way to reduce complexity to a single number that is equally satisfying for all purposes. Nevertheless, PPPs are typically robust in the face of the many problems that arise in using market exchange rates to make comparisons. - -For example, in 2005 the price of a gallon of gasoline in Saudi Arabia was US$0.91, and in Norway the price was US$6.27. The significant differences in price would not contribute to accuracy in a PPP analysis, despite all of the variables that contribute to the significant differences in price. More comparisons have to be made and used as variables in the overall formulation of the PPP. - -When PPP comparisons are to be made over some interval of time, proper account needs to be made of inflationary effects. - -In addition to methodological issues presented by the selection of a basket of goods, PPP estimates can also vary based on the statistical capacity of participating countries. The International Comparison Program, which PPP estimates are based on, requires the disaggregation of national accounts into production, expenditure or (in some cases) income, and not all participating countries routinely disaggregate their data into such categories. - -Some aspects of PPP comparison are theoretically impossible or unclear. For example, there is no basis for comparison between the Ethiopian laborer who lives on teff and the Thai laborer who lives on rice, because teff is not commercially available in Thailand and rice is not in Ethiopia, so the price of rice in Ethiopia or teff in Thailand cannot be determined. As a general rule, the more similar the price structure between countries, the more valid the PPP comparison.
- -PPP levels will also vary based on the formula used to calculate price matrices. Possible formulas include GEKS-Fisher, Geary-Khamis, IDB, and the superlative method. Each has advantages and disadvantages. - -Linking regions presents another methodological difficulty. In the 2005 ICP round, regions were compared by using a list of some 1,000 identical items for which a price could be found for 18 countries, selected so that at least two countries would be in each region. While this was superior to earlier "bridging" methods, which do not fully take into account differing quality between goods, it may serve to overstate the PPP basis of poorer countries, because the price indexing on which PPP is based will assign to poorer countries the greater weight of goods consumed in greater shares in richer countries. - -There are a number of reasons that different measures do not perfectly reflect standards of living. - -The goods that the currency has the "power" to purchase are a basket of goods of different types: - -# Local, non-tradable goods and services (like electric power) that are produced and sold domestically. - -# Tradable goods such as non-perishable commodities that can be sold on the international market (like diamonds). - -The more that a product falls into category 1, the further its price will be from the currency exchange rate, moving towards the PPP exchange rate. Conversely, category 2 products tend to trade close to the currency exchange rate. (See also Penn effect). - -More processed and expensive products are likely to be tradable, falling into the second category, and drifting from the PPP exchange rate to the currency exchange rate. Even if the PPP "value" of the Ethiopian currency is three times stronger than the currency exchange rate, it won't buy three times as much of internationally traded goods like steel, cars and microchips, but it will buy three times as much of non-traded goods like housing, services ("haircuts"), and domestically produced crops. The relative price differential between tradables and non-tradables from high-income to low-income countries is a consequence of the Balassa–Samuelson effect and gives a big cost advantage to labour-intensive production of tradable goods in low income countries (like Ethiopia), as against high income countries (like Switzerland). - -The corporate cost advantage is nothing more sophisticated than access to cheaper workers, but because the pay of those workers goes farther in low-income countries than high, the relative pay differentials (inter-country) can be sustained for longer than would be the case otherwise. (This is another way of saying that the wage rate is based on average local productivity and that this is below the per capita productivity that factories selling tradable goods to international markets can achieve.) An equivalent cost benefit comes from non-traded goods that can be sourced locally (nearer the PPP-exchange rate than the nominal exchange rate in which receipts are paid). These act as a cheaper factor of production than is available to factories in richer countries. GDP measured at PPP still has difficulty accounting for differences in the quality of goods among countries. - -The Bhagwati–Kravis–Lipsey view provides a somewhat different explanation from the Balassa–Samuelson theory. This view states that price levels for nontradables are lower in poorer countries because of differences in endowment of labor and capital, not because of lower levels of productivity.
Poor countries have more labor relative to capital, so marginal productivity of labor is greater in rich countries than in poor countries. Nontradables tend to be labor-intensive; therefore, because labor is less expensive in poor countries and is used mostly for nontradables, nontradables are cheaper in poor countries. Wages are high in rich countries, so nontradables are relatively more expensive. They cite the example that a dollar in London should purchase the same goods as a dollar in Chicago, which is certainly not the case. - -Nontradables are primarily services and the output of the construction industry. Nontradables also lead to deviations in PPP because the prices of nontradables are not linked internationally. The prices are determined by domestic supply and demand, and shifts in those curves lead to changes in the market basket of some goods relative to the foreign price of the same basket. If the prices of nontradables rise, the purchasing power of any given currency will fall in that country. - -While Gustav Cassel's use of the PPP concept has traditionally been interpreted as his attempt to formulate a positive theory of exchange rate determination, the policy and theoretical context in which Cassel wrote about exchange rates suggests a different interpretation. In the years immediately preceding and following the end of WWI, economists and politicians were involved in discussions on possible ways of restoring the gold standard, which would automatically restore the system of fixed exchange rates among participating nations. The stability of exchange rates was widely believed to be crucial for restoring international trade and for its further stable and balanced growth. Nobody then was mentally prepared for the idea that flexible exchange rates determined by market forces do not necessarily cause chaos and instability in peacetime (and that is what the abandoning of the gold standard during the war was blamed for). Gustav Cassel was among those who supported the idea of restoring the gold standard, although with some alterations. The question which Gustav Cassel tried to answer in his works written during that period was not how exchange rates are determined in the free market, but rather how to determine the appropriate level at which exchange rates were to be fixed during the restoration of the system of fixed exchange rates. His recommendation was to fix exchange rates at the level corresponding to the PPP, as he believed that this would prevent trade imbalances between trading nations. Thus, the PPP doctrine proposed by Cassel was not really a positive (descriptive) theory of exchange rate determination (as Cassel was perfectly aware of numerous factors that prevent exchange rates from stabilizing at PPP level if allowed to float), but rather normative (prescriptive) policy advice, formulated in the context of discussions on returning to the gold standard. - -Each month, the Organisation for Economic Co-operation and Development (OECD) measures the differences in price levels between its member countries by calculating the ratios of PPPs for private final consumption expenditure to exchange rates. The OECD table below indicates the number of US dollars needed in each of the countries listed to buy the same representative basket of consumer goods and services that would cost US$100 in the United States.
- -According to the table, an American living or travelling in Switzerland on an income denominated in US dollars would find that country to be the most expensive of the group, having to spend 47% more US dollars to maintain a standard of living comparable to the US in terms of consumption. - -Since global PPP estimates—such as those provided by the ICP—are not calculated annually, but for a single year, PPP exchange rates for years other than the benchmark year need to be extrapolated. One way of doing this is by using the country's GDP deflator. To calculate a country's PPP exchange rate in Geary–Khamis dollars for a particular year, the calculation proceeds in the following manner: -$$ -\textrm{PPPrate}_{X,i}=\frac{\textrm{PPPrate}_{X,b}\cdot \frac{\textrm{GDPdef}_{X,i}}{\textrm{GDPdef}_{X,b}}}{\textrm{PPPrate}_{U,b}\cdot \frac{\textrm{GDPdef}_{U,i}}{\textrm{GDPdef}_{U,b}}} -$$ - -where PPPrateX,i is the PPP exchange rate of country X for year i, PPPrateX,b is the PPP exchange rate of country X for the benchmark year, PPPrateU,b is the PPP exchange rate of the United States (US) for the benchmark year (equal to 1), GDPdefX,i is the GDP deflator of country X for year i, GDPdefX,b is the GDP deflator of country X for the benchmark year, GDPdefU,i is the GDP deflator of the US for year i, and GDPdefU,b is the GDP deflator of the US for the benchmark year. - -The bank UBS produces its "Prices and Earnings" report every 3 years. The report says, "Our reference basket of goods is based on European consumer habits and includes 122 positions". - -To teach PPP, the basket of goods is often simplified to a single good. - -The Big Mac Index is a simple implementation of PPP where the basket contains a single good: a Big Mac burger from McDonald's restaurants. The index was created and popularized by The Economist as a way to teach economics and to identify over- and under-valued currencies. - -The Big Mac has the value of being a relatively standardized consumer product that includes input costs from a wide range of sectors in the local economy, such as agricultural commodities (beef, bread, lettuce, cheese), labor (blue and white collar), advertising, rent and real estate costs, transportation, etc. - -There are some problems with the Big Mac Index. A Big Mac is perishable and not easily transported. That means the law of one price is not likely to keep prices the same in different locations. McDonald's restaurants are not present in every country, which limits the index's usage. Moreover, Big Macs are not sold at every McDonald's (noticeably in India), which limits its usage further. - -In the white paper "Burgernomics", the authors computed a correlation of 0.73 between the Big Mac Index's prices and prices calculated using the Penn World Tables. This single-good index captures most, but not all, of the effects captured by more professional (and more complex) PPP measurement. The implied PPP exchange rate is 3.58 HK$ per US$. The difference between this and the actual exchange rate of 7.83 suggests that the Hong Kong dollar is 54.2% undervalued (1 − 3.58/7.83 ≈ 0.542). That is, it is cheaper to convert US dollars into HK$ and buy a Big Mac in Hong Kong than it is to buy a Big Mac directly in US dollars. - -Similar to the Big Mac Index, the KFC Index measures PPP with a basket that contains a single item: a KFC Original 12/15 pc. bucket. The Big Mac Index cannot be used for most countries in Africa because most do not have a McDonald's restaurant.
Thus, the KFC Index was created by Sagaci Research (a market research firm focusing solely on Africa) to identify over- and under-valued currencies in Africa. - -For example, the average price of KFC's Original 12 pc. Bucket in the United States in January 2016 was $20.50, while in Namibia it was only $13.40 at market exchange rates. Therefore, the index states the Namibian dollar was undervalued by 33% at that time. - -Like the Big Mac Index, the Nespresso Index measures PPP with a basket that contains a single product: an Arpeggio flavored coffee pod produced and retailed by the Nestlé Group. Its advantage compared to the Big Mac Index is that Nespresso capsules are sold in higher numbers compared to a single Big Mac hamburger. - -For example, one basic Nespresso capsule costs 0.5 CHF in Switzerland and 0.7 USD in the United States. The implied exchange rate is 0.71. The difference between this and the actual exchange rate, 0.93 as of mid-November 2021, suggests the CHF was 22.8% undervalued against the USD. - -Like the Big Mac Index, the iPad index (elaborated by CommSec) compares an item's price in various locations. Unlike the Big Mac, however, each iPad is produced in the same place (except for the model sold in Brazil) and all iPads (within the same model) have identical performance characteristics. Price differences are therefore a function of transportation costs, taxes, and the prices that may be realized in individual markets. In 2013, an iPad cost about twice as much in Argentina as in the United States. diff --git a/wiki/wikipedia/370.txt b/wiki/wikipedia/370.txt deleted file mode 100644 index 1ad7d1eb590e5f3befabd7fcc768b75f1b89e0ad..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/370.txt +++ /dev/null @@ -1,21 +0,0 @@ -In differential geometry, Cohn-Vossen's inequality, named after Stefan Cohn-Vossen, relates the integral of Gaussian curvature of a non-compact surface to the Euler characteristic. It is akin to the Gauss–Bonnet theorem for a compact surface. - -A divergent path within a Riemannian manifold is a smooth curve in the manifold that is not contained within any compact subset of the manifold. A complete manifold is one in which every divergent path has infinite length with respect to the Riemannian metric on the manifold. Cohn-Vossen's inequality states that in every complete Riemannian 2-manifold S with finite total curvature and finite Euler characteristic, we have -$$ - \iint_S K dA \le 2\pi\chi(S), -$$ - -where K is the Gaussian curvature, dA is the element of area, and χ is the Euler characteristic. - -* If S is a compact surface (without boundary), then the inequality is an equality by the usual Gauss-Bonnet theorem for compact manifolds. - -* If S has a boundary, then the Gauss-Bonnet theorem gives -$$ -\iint_S K dA = 2\pi\chi(S) - \int_{\partial S}k_gds -$$ - -where $k_g$ is the geodesic curvature of the boundary; its integral, the total boundary curvature, is necessarily positive for a boundary curve, so the inequality is strict. (A similar result holds when the boundary of S is piecewise smooth.) - -* If S is the plane R2, then the curvature of S is zero, and χ(S) = 1, so the inequality is strict: $0 < 2\pi$. - -* S. E.
Cohn-Vossen, Some problems of differential geometry in the large, Moscow (1959) (in Russian) diff --git a/wiki/wikipedia/3700.txt b/wiki/wikipedia/3700.txt deleted file mode 100644 index 969bcbc69aa1c611db3a0b26240a2a23626069fc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3700.txt +++ /dev/null @@ -1,11 +0,0 @@ -Tetris (styled TETЯIS) is a puzzle game developed by Atari Games and originally released for arcades in 1988. Based on Alexey Pajitnov's Tetris, Atari Games' version features the same gameplay as the computer editions of the game, as players must stack differently shaped falling blocks to form and eliminate horizontal lines from the playing field. The game features several difficulty levels and two-player simultaneous play. - -In 1989, Atari Games released a port of their arcade version under their Tengen label for the Nintendo Entertainment System, despite it not being licensed by Nintendo for the system. There were also issues with the publishing rights for Tetris, and after much legal wrangling, Nintendo itself ended up with the rights to publish console versions, leaving Atari with only the rights to arcade versions. As a result, the Tengen game was only on the shelf for four weeks before Atari was legally required to recall the game and destroy any remaining inventory of its NES version. - -Nintendo produced its own version for the NES as well as a version for the Game Boy. Both versions were commercially successful and Nintendo held the Tetris license for many years. With fewer than 100,000 copies known to exist, the Tengen release has since become a collector's item, due to its short time on the market. Various publications have since noted that Tengen's Tetris was in some ways superior to the official NES release, especially since the Tengen game featured a two-player simultaneous mode not available in Nintendo's version. - -In 1987, Soviet Academy of Sciences researcher Alexey Pajitnov (who invented the original game in 1984) alongside Dmitry Pavlovsky and Vadim Gerasimov developed a new version of Tetris out of a desire to create a two-player puzzle game. Andromeda Software executive Robert Stein approached Pajitnov with an offer to distribute Tetris worldwide, and secured the rights to license the title. He in turn sub-licensed the rights to Mirrorsoft for the European market and Spectrum HoloByte for the North American market. With the rights secured, Atari Games produced an arcade version of Tetris, and under their Tengen subsidiary began development to port the title to the Nintendo Entertainment System (NES) in June 1988. The port was released in May 1989. In June 1989, a month after the release of Tengen's Tetris, U.S. District Court Judge Fern Smith issued an injunction barring Tengen from further distributing the game, and further ordered all existing copies of the game be destroyed. As a result, 268,000 Tetris cartridges were recalled and destroyed after only four weeks on shelves. - -The art which was featured on the Tengen cover was an airbrush painting by well known illustrator Marc Ericksen featuring St. Basil's Cathedral in Red Square, Moscow, and featuring at its base a falling stone concept that mirrored the gameplay. Atari made use of the same art when advertising the new release, as seen in the Atari inset above right, adding a fireworks motif that was not a part of the original art. 
- -In an interview, Ed Logg notes that the Tengen version of Tetris was built completely from scratch, using no source code or material from the original game. After presenting the title at the Consumer Electronics Show in Las Vegas, Tengen president Randy Broweleit requested improvements in the game. The game was originally portrayed solely in black and white; Broweleit requested that the pieces be portrayed in color, and Logg altered the game accordingly prior to the next Consumer Electronics Show. When asked which version of Tetris he liked the most, Logg stated the Nintendo version of Tetris for the NES "wasn't tuned right", citing a lack of logarithmic speed adjustment as the source of that version's overly steep increases in difficulty. diff --git a/wiki/wikipedia/3701.txt b/wiki/wikipedia/3701.txt deleted file mode 100644 index e5fc4c5cb23f980e035ec8b69c74a955e0534072..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3701.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics and statistics, Skorokhod's representation theorem is a result that shows that a weakly convergent sequence of probability measures whose limit measure is sufficiently well-behaved can be represented as the distribution/law of a pointwise convergent sequence of random variables defined on a common probability space. It is named for the Soviet mathematician A. V. Skorokhod. - -Let $(\mu_n)_{n \in \mathbb{N}}$ be a sequence of probability measures on a metric space $S$ such that $\mu_n$ converges weakly to some probability measure $\mu_\infty$ on $S$ as $n \to \infty$. Suppose also that the support of $\mu_\infty$ is separable. Then there exist $S$-valued random variables $X_n$ defined on a common probability space $(\Omega,\mathcal{F},\mathbf{P})$ such that the law of $X_n$ is $\mu_n$ for all $n$ (including $n=\infty$) and such that $(X_n)_{n \in \mathbb{N}}$ converges to $X_\infty$, $\mathbf{P}$-almost surely. diff --git a/wiki/wikipedia/3702.txt b/wiki/wikipedia/3702.txt deleted file mode 100644 index 969bcbc69aa1c611db3a0b26240a2a23626069fc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3702.txt +++ /dev/null @@ -1,3 +0,0 @@ -In geometric measure theory, a field of mathematics, the Almgren regularity theorem, proved by Frederick Almgren, states that the singular set of a mass-minimizing surface has codimension at least 2. Almgren's proof of this was 955 pages long. Within the proof many new ideas are introduced, such as monotonicity of a frequency function and the use of a center manifold to perform a more intricate blow-up procedure. - -A streamlined and more accessible proof of Almgren's regularity theorem, following the same ideas as Almgren, was given by Camillo De Lellis and Emanuele Spadaro in a series of three papers. diff --git a/wiki/wikipedia/3703.txt b/wiki/wikipedia/3703.txt deleted file mode 100644 index ac0fae83281b1be09c85be1503c0af4cc24a9810..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3703.txt +++ /dev/null @@ -1,45 +0,0 @@ -In topology, Urysohn's lemma is a lemma that states that a topological space is normal if and only if any two disjoint closed subsets can be separated by a continuous function. - -Urysohn's lemma is commonly used to construct continuous functions with various properties on normal spaces. It is widely applicable since all metric spaces and all compact Hausdorff spaces are normal. The lemma is generalized by (and usually used in the proof of) the Tietze extension theorem. - -The lemma is named after the mathematician Pavel Samuilovich Urysohn.
- -Two subsets $A$ and $B$ of a topological space $X$ are said to be separated by neighbourhoods if there are neighbourhoods $U$ of $A$ and $V$ of $B$ that are disjoint. In particular $A$ and $B$ are necessarily disjoint. - -Two subsets $A$ and $B$ are said to be separated by a function if there exists a continuous function $f : X \to [0, 1]$ from $X$ into the unit interval $[0, 1]$ such that $f(a) = 0$ for all $a \in A$ and $f(b) = 1$ for all $b \in B.$ Any such function is called a Urysohn function for $A$ and $B.$ In particular $A$ and $B$ are necessarily disjoint. - -It follows that if two subsets $A$ and $B$ are separated by a function then so are their closures. - -Also it follows that if two subsets $A$ and $B$ are separated by a function then $A$ and $B$ are separated by neighbourhoods. - -A normal space is a topological space in which any two disjoint closed sets can be separated by neighbourhoods. Urysohn's lemma states that a topological space is normal if and only if any two disjoint closed sets can be separated by a continuous function. - -The sets $A$ and $B$ need not be precisely separated by $f$, i.e., we do not, and in general cannot, require that $f(x) \neq 0$ and $\neq 1$ for $x$ outside of $A$ and $B.$ The spaces in which this property holds are the perfectly normal spaces. - -Urysohn's lemma has led to the formulation of other topological properties such as the 'Tychonoff property' and 'completely Hausdorff spaces'. For example, a corollary of the lemma is that normal T1 spaces are Tychonoff. - -A topological space $X$ is normal if and only if, for any two non-empty closed disjoint subsets $A$ and $B$ of $X,$ there exists a continuous map $f : X \to [0, 1]$ such that $f(A) = \{ 0 \}$ and $f(B) = \{ 1 \}.$ - -The procedure is an entirely straightforward application of the definition of normality (once one draws some figures representing the first few steps in the induction described below to see what is going on), beginning with two disjoint closed sets. The clever part of the proof is the indexing of the open sets thus constructed by dyadic fractions. - -For every dyadic fraction $r \in (0, 1),$ we are going to construct an open subset $U(r)$ of $X$ such that: - -# $U(r)$ contains $A$ and is disjoint from $B$ for all $r,$ - -# For $r < s,$ the closure of $U(r)$ is contained in $U(s).$ - -Once we have these sets, we define $f(x) = 1$ if $x \not\in U(r)$ for any $r$; otherwise $f(x) = \inf \{ r : x \in U(r) \}$ for every $x \in X,$ where $\inf$ denotes the infimum. Using the fact that the dyadic rationals are dense, it is then not too hard to show that $f$ is continuous and has the property $f(A) \subseteq \{ 0 \}$ and $f(B) \subseteq \{ 1 \}.$ - -In order to construct the sets $U(r),$ we actually do a little bit more: we construct sets $U(r)$ and $V(r)$ such that - -* $A \subseteq U(r)$ and $B \subseteq V(r)$ for all $r,$ - -* $U(r)$ and $V(r)$ are open and disjoint for all $r,$ - -* For $r < s,$ $V(s)$ is contained in the complement of $U(r)$ and the complement of $V(r)$ is contained in $U(s).$ - -Since the complement of $V(r)$ is closed and contains $U(r),$ the latter condition then implies condition (2) from above. - -This construction proceeds by mathematical induction. First define $U(1) = X \setminus B$ and $V(0) = X \setminus A.$ Since $X$ is normal, we can find two disjoint open sets $U(1/2)$ and $V(1/2)$ which contain $A$ and $B,$ respectively. Now assume that $n \geq 1$ and the sets $U\left(k/2^n\right)$ and $V\left(k/2^n\right)$ have already been constructed for $k = 1, \ldots, 2^n - 1.$ Since $X$ is normal, for any $a \in \left\{ 0, 1, \ldots, 2^n - 1 \right\},$ we can find two disjoint open sets which contain $X \setminus V\left(a/2^n\right)$ and $X \setminus U\left((a+1)/2^n\right),$ respectively. Call these two open sets $U\left((2a+1)/2^{n+1}\right)$ and $V\left((2a+1)/2^{n+1}\right),$ and verify the above three conditions. - -The Mizar project has completely formalized and automatically checked a proof of Urysohn's lemma.
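Although the general proof needs the dyadic construction above, for metric spaces (which are all normal) a Urysohn function can be written down explicitly as $f(x) = d(x,A) / (d(x,A) + d(x,B))$. Here is a minimal Python sketch of this formula for finite sets of points in the plane; the helper names are our own choices for illustration.

```python
import math

def dist_to_set(x, S):
    """Euclidean distance from point x to the finite point set S."""
    return min(math.dist(x, s) for s in S)

def urysohn(x, A, B):
    """Explicit Urysohn function for disjoint closed sets A and B in a metric
    space: 0 on A, 1 on B, continuous in between. The denominator is never 0
    because A and B are disjoint and closed."""
    da, db = dist_to_set(x, A), dist_to_set(x, B)
    return da / (da + db)

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(4.0, 0.0)]
print(urysohn((0.0, 0.0), A, B))  # 0.0
print(urysohn((4.0, 0.0), A, B))  # 1.0
print(urysohn((2.0, 0.0), A, B))  # 0.333... (1 unit from A, 2 units from B)
```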
diff --git a/wiki/wikipedia/3704.txt b/wiki/wikipedia/3704.txt deleted file mode 100644 index 82626dc9ae5f16ff40fd8db496561922806ba767..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3704.txt +++ /dev/null @@ -1,66 +0,0 @@ -In mathematics, the infinite series is an example of one of the first infinite series to be summed in the history of mathematics; it was used by Archimedes circa 250–200 BC. As it is a geometric series with first term 1/4 and common ratio 1/4, its sum is -$$ -\sum_{n=1}^\infty \frac{1}{4^n}=\frac {\frac 1 4} {1 - \frac 1 4}=\frac 1 3. -$$ - -The series 1/4 + lends itself to some particularly simple visual demonstrations because a square and a triangle both divide into four similar pieces, each of which contains 1/4 the area of the original. - -In the figure on the left, if the large square is taken to have area 1, then the largest black square has area 1/2 × 1/2 = 1/4. Likewise, the second largest black square has area 1/16, and the third largest black square has area 1/64. The area taken up by all of the black squares together is therefore 1/4 + , and this is also the area taken up by the gray squares and the white squares. Since these three areas cover the unit square, the figure demonstrates that -$$ -3\left(\frac14+\frac{1}{4^2}+\frac{1}{4^3}+\frac{1}{4^4}+\cdots\right) = 1. -$$ - -Archimedes' own illustration, adapted at top, was slightly different, being closer to the equation -$$ -\sum_{n=1}^\infty \frac{3}{4^n}=\frac34+\frac{3}{4^2}+\frac{3}{4^3}+\frac{3}{4^4}+\cdots = 1. -$$ - -See below for details on Archimedes' interpretation. - -The same geometric strategy also works for triangles, as in the figure on the right: if the large triangle has area 1, then the largest black triangle has area 1/4, and so on. The figure as a whole has a self-similarity between the large triangle and its upper sub-triangle. A related construction making the figure similar to all three of its corner pieces produces the Sierpiński triangle. - -Archimedes encounters the series in his work Quadrature of the Parabola. He is finding the area inside a parabola by the method of exhaustion, and he gets a series of triangles; each stage of the construction adds an area 1/4 times the area of the previous stage. His desired result is that the total area is 4/3 times the area of the first stage. To get there, he takes a break from parabolas to introduce an algebraic lemma: - -
    Proposition 23. Given a series of areas A, B, C, D, … , Z, of which A is the greatest, and each is equal to four times the next in order, then -$$ -A + B + C + D + \cdots + Z + \frac13 Z = \frac43 A. -$$
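A quick numeric check of Proposition 23 (an illustrative sketch; the first area and the number of terms are arbitrary choices):

```python
# Verify A + B + ... + Z + Z/3 == (4/3) A when each area is one quarter
# of the one before it.
from fractions import Fraction

A = Fraction(1)                        # the greatest area, chosen arbitrarily
areas = [A / 4**k for k in range(7)]   # A, B = A/4, C = A/16, ..., Z = A/4^6
Z = areas[-1]
assert sum(areas) + Z / 3 == Fraction(4, 3) * A
print(sum(areas) + Z / 3)              # 4/3
```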
- -Archimedes proves the proposition by first calculating - -\begin{array}{rcl} - -\displaystyle B+C+\cdots+Z+\frac{B}{3}+\frac{C}{3}+\cdots+\frac{Z}{3} & = &\displaystyle \frac{4B}{3}+\frac{4C}{3}+\cdots+\frac{4Z}{3} \\[1em] - -& = &\displaystyle \frac13(A+B+\cdots+Y). - -\end{array} - -On the other hand, -$$ -\frac{B}{3}+\frac{C}{3}+\cdots+\frac{Y}{3} = \frac13(B+C+\cdots+Y). -$$ - -Subtracting this equation from the previous equation yields -$$ -B+C+\cdots+Z+\frac{Z}{3} = \frac13 A -$$ - -and adding A to both sides gives the desired result. - -Today, a more standard phrasing of Archimedes' proposition is that the partial sums of the series 1 + 1/4 + 1/16 + ⋯ are: -$$ -1+\frac{1}{4}+\frac{1}{4^2}+\cdots+\frac{1}{4^n}=\frac{1-\left(\frac14\right)^{n+1}}{1-\frac14}. -$$ - -This form can be proved by multiplying both sides by 1 − 1/4 and observing that all but the first and the last of the terms on the left-hand side of the equation cancel in pairs. The same strategy works for any finite geometric series. - -Archimedes' Proposition 24 applies the finite (but indeterminate) sum in Proposition 23 to the area inside a parabola by a double reductio ad absurdum. He does not quite take the limit of the above partial sums, but in modern calculus this step is easy enough: -$$ -\lim_{n\to\infty} \frac{1-\left(\frac14\right)^{n+1}}{1-\frac14} = \frac{1}{1-\frac14} = \frac43. -$$ - -Since the sum of an infinite series is defined as the limit of its partial sums, -$$ -1+\frac14+\frac{1}{4^2}+\frac{1}{4^3}+\cdots = \frac43. -$$ diff --git a/wiki/wikipedia/3705.txt b/wiki/wikipedia/3705.txt deleted file mode 100644 index 89968bde11a4c9462f7febbbb326d6cf74a7795e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3705.txt +++ /dev/null @@ -1,3 +0,0 @@ -The distributed minimum spanning tree (MST) problem involves the construction of a minimum spanning tree by a distributed algorithm, in a network where nodes communicate by message passing. It is radically different from the classical sequential problem, although the most basic approach resembles Borůvka's algorithm. One important application of this problem is to find a tree that can be used for broadcasting. In particular, if the cost for a message to pass through an edge in a graph is significant, an MST can minimize the total cost for a source process to communicate with all the other processes in the network. - -The problem was first suggested and solved in $O(V \log V)$ time in 1983 by Gallager et al. Subsequent algorithms improved on this bound; one such algorithm runs in $O(D+L\log n)$ time, where $D$ is the diameter of the network and $L$ is the local shortest path diameter of the graph. diff --git a/wiki/wikipedia/3706.txt b/wiki/wikipedia/3706.txt deleted file mode 100644 index 3e452c2e8ab872ee268d421dd83ac4874d12f591..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3706.txt +++ /dev/null @@ -1,47 +0,0 @@ -In number theory, Carmichael's theorem, named after the American mathematician R.D. Carmichael, - -states that, for any nondegenerate Lucas sequence of the first kind Un(P,Q) with relatively prime parameters P, Q and positive discriminant, an element Un with n ≠ 1, 2, 6 has at least one prime divisor that does not divide any earlier one except the 12th Fibonacci number F(12)=U12(1, -1)=144 and its equivalent U12(-1, -1)=-144. - -In particular, for n greater than 12, the nth Fibonacci number F(n) has at least one prime divisor that does not divide any earlier Fibonacci number. - -Carmichael (1913, Theorem 21) proved this theorem. Recently, Yabuta (2001) gave a simple proof.
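The Fibonacci case is easy to check by brute force (an illustrative sketch, not part of the original proof):

```python
# For F(1)..F(30), find the indices n at which F(n) has *no* prime divisor
# that misses every earlier Fibonacci number; by the theorem these are
# exactly the exceptional indices 1, 2, 6, 12.
def prime_factors(m):
    fs, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            fs.add(d)
            m //= d
        d += 1
    if m > 1:
        fs.add(m)
    return fs

a, b = 1, 1                  # F(1), F(2)
seen, exceptional = set(), []
for n in range(1, 31):
    primes = prime_factors(a)
    if not primes - seen:    # no primitive prime divisor
        exceptional.append(n)
    seen |= primes
    a, b = b, a + b
print(exceptional)           # [1, 2, 6, 12]
```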
- -Given two coprime integers P and Q, such that $D=P^2-4Q>0$ and PQ ≠ 0, let Un(P,Q) be the Lucas sequence of the first kind defined by - -\begin{align} - -U_0(P,Q)&=0, \\ - -U_1(P,Q)&=1, \\ - -U_n(P,Q)&=P\cdot U_{n-1}(P,Q)-Q\cdot U_{n-2}(P,Q) \qquad\mbox{ for }n>1. - -\end{align} - -Then, for n ≠ 1, 2, 6, Un(P,Q) has at least one prime divisor that does not divide any Um(P,Q) with m < n, except U12(1, -1)=F(12)=144, U12(-1, -1)=-F(12)=-144. - -Such a prime p is called a characteristic factor or a primitive prime divisor of Un(P,Q). - -Indeed, Carmichael showed a slightly stronger theorem: For n ≠ 1, 2, 6, Un(P,Q) has at least one primitive prime divisor not dividing D except U3(1, -2)=U3(-1, -2)=3, U5(1, -1)=U5(-1, -1)=F(5)=5, U12(1, -1)=F(12)=144, U12(-1, -1)=-F(12)=-144. - -Note that D should be > 0, thus the cases U13(1, 2), U18(1, 2) and U30(1, 2), etc. are not included, since in this case D = −7 < 0. - -The only exceptions in the Fibonacci case for n up to 12 are: - -F(1)=1 and F(2)=1, which have no prime divisors - -F(6)=8 whose only prime divisor is 2 (which is F(3)) - -F(12)=144 whose only prime divisors are 2 (which is F(3)) and 3 (which is F(4)) - -The smallest primitive prime divisors of F(n), for n = 1, 2, 3, …, are - -1, 1, 2, 3, 5, 1, 13, 7, 17, 11, 89, 1, 233, 29, 61, 47, 1597, 19, 37, 41, 421, 199, 28657, 23, 3001, 521, 53, 281, 514229, 31, 557, 2207, 19801, 3571, 141961, 107, 73, 9349, 135721, 2161, 2789, 211, 433494437, 43, 109441, ... - -Carmichael's theorem says that every Fibonacci number, apart from the exceptions listed above, has at least one primitive prime divisor. - -If n > 1, then the nth Pell number has at least one prime divisor that does not divide any earlier Pell number. The smallest primitive prime divisors of the nth Pell number, for n = 1, 2, 3, …, are - -1, 2, 5, 3, 29, 7, 13, 17, 197, 41, 5741, 11, 33461, 239, 269, 577, 137, 199, 37, 19, 45697, 23, 229, 1153, 1549, 79, 53, 113, 44560482149, 31, 61, 665857, 52734529, 103, 1800193921, 73, 593, 9369319, 389, 241, ... diff --git a/wiki/wikipedia/3707.txt b/wiki/wikipedia/3707.txt deleted file mode 100644 index ebf2be55a9e6367cd2961d4849d73185b97c3a93..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3707.txt +++ /dev/null @@ -1,25 +0,0 @@ -In probability theory, Slutsky's theorem extends some properties of algebraic operations on convergent sequences of real numbers to sequences of random variables. - -The theorem was named after Eugen Slutsky. Slutsky's theorem is also attributed to Harald Cramér. - -Let $X_n, Y_n$ be sequences of scalar/vector/matrix random elements. - -If $X_n$ converges in distribution to a random element $X$ and $Y_n$ converges in probability to a constant $c$, then - -* $X_n + Y_n \ \xrightarrow{d}\ X + c ;$ - -* $X_nY_n \ \xrightarrow{d}\ Xc ;$ - -* $X_n/Y_n \ \xrightarrow{d}\ X/c,$ provided that c is invertible, - -where $\xrightarrow{d}$ denotes convergence in distribution. - -Notes: - -1. The requirement that Yn converges to a constant is important: if it were to converge to a non-degenerate random variable, the theorem would no longer be valid. For example, let $X_n \sim {\rm Uniform}(0,1)$ and $Y_n = -X_n$. The sum $X_n + Y_n = 0$ for all values of n. Moreover, $Y_n \xrightarrow{d} {\rm Uniform}(-1,0)$, but $X_n + Y_n$ does not converge in distribution to $X + Y$, where $X \sim {\rm Uniform}(0,1)$, $Y \sim {\rm Uniform}(-1,0)$, and $X$ and $Y$ are independent (see the simulation sketch below). - -2. The theorem remains valid if we replace all convergences in distribution with convergences in probability.
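A small simulation of the counterexample in note 1 (an illustrative sketch; numpy is assumed):

```python
# X_n ~ Uniform(0,1) and Y_n = -X_n: the sum X_n + Y_n is identically 0,
# while for *independent* X ~ Uniform(0,1), Y ~ Uniform(-1,0) the sum X + Y
# is a non-degenerate random variable.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
X_n = rng.uniform(0, 1, n)
Y_n = -X_n                     # converges in distribution to Uniform(-1,0)
print((X_n + Y_n).std())       # 0.0: degenerate at 0

X = rng.uniform(0, 1, n)       # independent copies
Y = rng.uniform(-1, 0, n)
print((X + Y).std())           # ~0.41: not degenerate, so the limits differ
```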
- -This theorem follows from the fact that if Xn converges in distribution to X and Yn converges in probability to a constant c, then the joint vector (Xn, Yn) converges in distribution to (X, c). - -Next we apply the continuous mapping theorem, recognizing the functions g(x,y) = x + y, g(x,y) = xy, and g(x,y) = xy⁻¹ are continuous (for the last function to be continuous, y has to be invertible). diff --git a/wiki/wikipedia/3708.txt b/wiki/wikipedia/3708.txt deleted file mode 100644 index d84543b55e7b26f717c6168c1d29086bed20cf21..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3708.txt +++ /dev/null @@ -1,25 +0,0 @@ -In mathematics, the theorem of Bertini is an existence and genericity theorem for smooth connected hyperplane sections for smooth projective varieties over algebraically closed fields, introduced by Eugenio Bertini. This is the simplest and broadest of the "Bertini theorems" applying to a linear system of divisors; simplest because there is no restriction on the characteristic of the underlying field, while the extensions require characteristic 0. - -Let X be a smooth quasi-projective variety over an algebraically closed field, embedded in a projective space $\mathbf P^n$. - -Let $|H|$ denote the complete system of hyperplane divisors in $\mathbf P^n$. Recall that it is the dual space $(\mathbf P^n)^{\star}$ of $\mathbf P^n$ and is isomorphic to $\mathbf P^n$. - -The theorem of Bertini states that the set of hyperplanes not containing X and with smooth intersection with X contains an open dense subset of the total system of divisors $|H|$. The set itself is open if X is projective. If $\dim(X) \ge 2$, then these intersections (called hyperplane sections of X) are connected, hence irreducible. - -The theorem hence asserts that a general hyperplane section not equal to X is smooth, that is: the property of smoothness is generic. - -Over an arbitrary field k, there is a dense open subset of the dual space $(\mathbf P^n)^{\star}$ whose rational points define hyperplanes giving smooth hyperplane sections of X. When k is infinite, this open subset then has infinitely many rational points and there are infinitely many smooth hyperplane sections of X. - -Over a finite field, the above open subset may not contain rational points and in general there are no hyperplanes with smooth intersection with X. However, if we take hypersurfaces of sufficiently large degree, then the theorem of Bertini holds. - -We consider the subfibration of the product variety $X \times |H|$ with fiber above $x\in X$ the linear system of hyperplanes that intersect X non-transversally at x. - -The rank of the fibration in the product is one less than the codimension of $X \subset \mathbf P^n$, so that the total space has dimension less than $n$ and so its projection is contained in a divisor of the complete system $|H|$. - -Over any infinite field $ k $ of characteristic 0, if X is a smooth quasi-projective $k $-variety, a general member of a linear system of divisors on X is smooth away from the base locus of the system. For clarification, this means that given a linear system $ f:X\rightarrow \mathbf{P}^n $, the preimage $ f^{-1}(H)$ of a hyperplane H is smooth, outside the base locus of f, for all hyperplanes H in some dense open subset of the dual projective space $ (\mathbf{P}^n)^\star $. This theorem also holds in characteristic p>0 when the linear system f is unramified. - -The theorem of Bertini has been generalized in various ways.
For example, a result due to Steven Kleiman asserts the following (cf. Kleiman's theorem): for a connected algebraic group G, and any homogeneous G-variety X, and two varieties Y and Z mapping to X, let Yσ be the variety obtained by letting σ ∈ G act on Y. Then, there is an open dense subscheme H of G such that for σ ∈ H, $Y^\sigma \times_X Z$ is either empty or purely of the (expected) dimension dim Y + dim Z - dim X. If, in addition, Y and Z are smooth and the base field has characteristic zero, then H may be taken such that $Y^\sigma \times_X Z$ is smooth for all $\sigma \in H$, as well. The above theorem of Bertini is the special case where $X = \mathbb P^n$ is expressed as the quotient of SLn by the parabolic subgroup of upper triangular matrices, Z is a subvariety and Y is a hyperplane. - -The theorem of Bertini has also been generalized to discrete valuation domains or finite fields, or for étale coverings of X. - -The theorem is often used for induction steps. diff --git a/wiki/wikipedia/3709.txt b/wiki/wikipedia/3709.txt deleted file mode 100644 index dff879f58c5cf55dcdc29fc5f589ce9ba1b6c8eb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3709.txt +++ /dev/null @@ -1,41 +0,0 @@ -F. Riesz's theorem (named after Frigyes Riesz) is an important theorem in functional analysis that states that a Hausdorff topological vector space (TVS) is finite-dimensional if and only if it is locally compact. - -The theorem and its consequences are used ubiquitously in functional analysis, often without being explicitly mentioned. - -Recall that a topological vector space (TVS) $X$ is Hausdorff if and only if the singleton set $\{ 0 \}$ consisting entirely of the origin is a closed subset of $X.$ - -A map between two TVSs is called a TVS-isomorphism or an isomorphism in the category of TVSs if it is a linear homeomorphism. - -F. Riesz theorem: A Hausdorff TVS $X$ over the field $\mathbb{F}$ ($\mathbb{F}$ is either the real or complex numbers) is finite-dimensional if and only if it is locally compact (or equivalently, if and only if there exists a compact neighborhood of the origin). In this case, $X$ is TVS-isomorphic to $\mathbb{F}^{\dim X}.$ - -Throughout, $F, X, Y$ are TVSs (not necessarily Hausdorff) with $F$ a finite-dimensional vector space. - -* Every finite-dimensional vector subspace of a Hausdorff TVS is a closed subspace. - -* All finite-dimensional Hausdorff TVSs are Banach spaces and all norms on such a space are equivalent. - -* Closed + finite-dimensional is closed: If $M$ is a closed vector subspace of a TVS $Y$ and if $F$ is a finite-dimensional vector subspace of $Y$ ($Y, M,$ and $F$ are not necessarily Hausdorff) then $M + F$ is a closed vector subspace of $Y.$ - -* Every vector space isomorphism (i.e. a linear bijection) between two finite-dimensional Hausdorff TVSs is a TVS isomorphism. - -* Uniqueness of topology: If $X$ is a finite-dimensional vector space and if $\tau_1$ and $\tau_2$ are two Hausdorff TVS topologies on $X$ then $\tau_1 = \tau_2.$ - -* Finite-dimensional domain: A linear map $L : F \to Y$ between Hausdorff TVSs is necessarily continuous. - -** In particular, every linear functional of a finite-dimensional Hausdorff TVS is continuous. - -* Finite-dimensional range: Any continuous surjective linear map $L : X \to Y$ with a Hausdorff finite-dimensional range is an open map and thus a topological homomorphism.
- -In particular, the range of $L$ is TVS-isomorphic to $X / L^{-1}(0).$ - -* A TVS $X$ (not necessarily Hausdorff) is locally compact if and only if $X / \overline{\{ 0 \}}$ is finite dimensional. - -* The convex hull of a compact subset of a finite-dimensional Hausdorff TVS is compact. - -** This implies, in particular, that the convex hull of a compact set is equal to the closed convex hull of that set. - -* A Hausdorff locally bounded TVS with the Heine-Borel property is necessarily finite-dimensional. diff --git a/wiki/wikipedia/371.txt b/wiki/wikipedia/371.txt deleted file mode 100644 index 12475eeadf7a2cc0ffa2279dde952a90fdb540af..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/371.txt +++ /dev/null @@ -1,3 +0,0 @@ -In proof theory, a branch of mathematical logic, proof mining (or proof unwinding) is a research program that analyzes formalized proofs, especially in analysis, to obtain explicit bounds or rates of convergence from proofs that, when expressed in natural language, appear to be nonconstructive. - -This research has led to improved results in analysis obtained from the analysis of classical proofs. diff --git a/wiki/wikipedia/3710.txt b/wiki/wikipedia/3710.txt deleted file mode 100644 index dec429667cb545536adfca127ad684fef139c1f1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3710.txt +++ /dev/null @@ -1,29 +0,0 @@ -In general relativity, Buchdahl's theorem, named after Hans Adolf Buchdahl, makes more precise the notion that there is a maximal sustainable density for ordinary gravitating matter. It gives an inequality between the mass and radius that must be satisfied for static, spherically symmetric matter configurations under certain conditions. In particular, for areal radius $R$, the mass $M$ must satisfy -$$ - M < \frac{4 R c^2}{9G} -$$ - -where $ G $ is the gravitational constant and $c$ is the speed of light. This inequality is often referred to as Buchdahl's bound. The bound has historically also been called Schwarzschild's limit as it was first noted by Karl Schwarzschild to exist in the special case of a constant density fluid. However, this terminology should not be confused with the Schwarzschild radius which is notably smaller than the radius at the Buchdahl bound. - -Consider a static, spherically symmetric solution to the Einstein equations (without cosmological constant) with matter confined to areal radius $ R $ that behaves as a perfect fluid whose density does not increase outwards. (An areal radius $ R $ corresponds to a sphere of surface area $ 4 \pi R^2 $. In curved spacetime the proper radius of such a sphere is not necessarily $ R $.) Assume in addition that the density and pressure cannot be negative. The mass of this solution must satisfy -$$ - M < \frac{4 R c^2}{9G} -$$ - -For his proof of the theorem, Buchdahl uses the Tolman-Oppenheimer-Volkoff (TOV) equation. - -The Buchdahl theorem is useful when looking for alternatives to black holes. Such attempts are often inspired by the information paradox, by attempts to explain (part of) the dark matter, or by the criticism that observations of black holes rest on excluding known astrophysical alternatives (such as neutron stars) rather than on direct evidence. However, to provide a viable alternative, the object must sometimes be extremely compact and, in particular, violate the Buchdahl inequality. This implies that one of the assumptions of Buchdahl's theorem must be invalid. A classification scheme can be made based on which assumptions are violated.
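For a sense of scale, a quick numeric illustration (a sketch; the 12 km radius is an arbitrary, roughly neutron-star-sized choice):

```python
# Buchdahl bound M < 4 R c^2 / (9 G) versus the mass at which R would equal
# the Schwarzschild radius 2 G M / c^2, in SI units.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg

R = 12e3               # 12 km
M_buchdahl = 4 * R * c**2 / (9 * G)
M_schwarzschild = R * c**2 / (2 * G)
print(M_buchdahl / M_sun)        # ~3.6 solar masses
print(M_schwarzschild / M_sun)   # ~4.1 solar masses: larger, matching the
                                 # remark that the Schwarzschild radius is
                                 # smaller than the radius at the Buchdahl bound
```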
- -The special case of the incompressible fluid or constant density, $ \rho(r) = \rho_* $ for $ r < R $, is a historically important example as, in 1916, Schwarzschild noted for the first time that the mass could not exceed the value $ \frac{4 R c^2}{9G} $ for a given radius $ R $ or the central pressure would become infinite. It is also a particularly tractable example. Within the star one finds -$$ - m(r) = \frac{4}{3} \pi r^3 \rho_* -$$ - -and using the TOV-equation -$$ - p(r) = \rho_* c^2 \frac{R \sqrt{R-2GM/c^2}-\sqrt{R^3-2GMr^2/c^2}}{\sqrt{R^3-2GMr^2/c^2}-3R\sqrt{R-2GM/c^2}} -$$ - -such that the central pressure, $ p(0) $, diverges as $ R \to 9GM/4c^2 $. - -Extensions to Buchdahl's theorem generally either relax assumptions on the matter or on the symmetry of the problem, for instance by introducing anisotropic matter or rotation. One can also consider analogues of Buchdahl's theorem in other theories of gravity. diff --git a/wiki/wikipedia/3711.txt b/wiki/wikipedia/3711.txt deleted file mode 100644 index 97ce4abf434a7a98dfe64904cc116d07c9336ddb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3711.txt +++ /dev/null @@ -1,35 +0,0 @@ -In mathematical logic, an omega-categorical theory is a theory that has exactly one countably infinite model up to isomorphism. Omega-categoricity is the special case κ = $\aleph_0$ = ω of κ-categoricity, and omega-categorical theories are also referred to as ω-categorical. The notion is most important for countable first-order theories. - -Many conditions on a theory are equivalent to the property of omega-categoricity. In 1959 Erwin Engeler, Czesław Ryll-Nardzewski and Lars Svenonius independently proved several of them. Despite this, the literature still widely refers to the Ryll-Nardzewski theorem as a name for these conditions. The conditions included with the theorem vary between authors. - -Given a countable complete first-order theory T with infinite models, the following are equivalent: - -* The theory T is omega-categorical. - -* Every countable model of T has an oligomorphic automorphism group (that is, there are finitely many orbits on $M^n$ for every n). - -* Some countable model of T has an oligomorphic automorphism group. - -* The theory T has a model which, for every natural number n, realizes only finitely many n-types, that is, the Stone space $S_n(T)$ is finite. - -* For every natural number n, T has only finitely many n-types. - -* For every natural number n, every n-type is isolated. - -* For every natural number n, up to equivalence modulo T there are only finitely many formulas with n free variables, in other words, for every n, the nth Lindenbaum–Tarski algebra of T is finite. - -* Every model of T is atomic. - -* Every countable model of T is atomic. - -* The theory T has a countable atomic and saturated model. - -* The theory T has a saturated prime model. - -The theory of any countably infinite structure which is homogeneous over a finite relational language is omega-categorical.
Hence, the following theories are omega-categorical: - -*The theory of dense linear orders without endpoints (Cantor's isomorphism theorem) - -*The theory of the Rado graph - -*The theory of infinite linear spaces over any finite field diff --git a/wiki/wikipedia/3712.txt b/wiki/wikipedia/3712.txt deleted file mode 100644 index 86261b3601edc796fce663bca529465e655f8c3d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3712.txt +++ /dev/null @@ -1,17 +0,0 @@ -The missing square puzzle is an optical illusion used in mathematics classes to help students reason about geometrical figures; or rather to teach them not to reason using figures, but to use only textual descriptions and the axioms of geometry. It depicts two arrangements made of similar shapes in slightly different configurations. Each apparently forms a 13×5 right-angled triangle, but one has a 1×1 hole in it. - -The key to the puzzle is the fact that neither of the 13×5 "triangles" is truly a triangle, nor would either truly be 13×5 if it were, because what appears to be the hypotenuse is bent. In other words, the "hypotenuse" does not maintain a consistent slope, even though it may appear that way to the human eye. - -A true 13×5 triangle cannot be created from the given component parts. The four figures (the yellow, red, blue and green shapes) total 32 units of area. The apparent triangles formed from the figures are 13 units wide and 5 units tall, so it appears that the area should be S = 13×5/2 = 32.5 units. However, the blue triangle has a ratio of 5:2 (=2.5), while the red triangle has the ratio 8:3 (≈2.667), so the apparent combined hypotenuse in each figure is actually bent. With the bent hypotenuse, the first figure actually occupies a combined 32 units, while the second figure occupies 33, including the "missing" square. - -The amount of bending is approximately 1/28 unit (1.245364267°), which is difficult to see on the diagram of the puzzle. Note the grid point where the red and blue triangles in the lower image meet (5 squares to the right and two units up from the lower left corner of the combined figure), and compare it to the same point on the other figure; the edge is slightly under the mark in the upper image, but goes through it in the lower. Overlaying the hypotenuses from both figures results in a very thin parallelogram (represented with the four red dots) with an area of exactly one grid square, which accounts for the "missing" area. - -According to Martin Gardner, this particular puzzle was invented by a New York City amateur magician, Paul Curry, in 1953. However, the principle of a dissection paradox has been known since the start of the 16th century. - -The integer dimensions of the parts of the puzzle (2, 3, 5, 8, 13) are successive Fibonacci numbers, which leads to the exact unit area in the thin parallelogram (by Cassini's identity, $F_{n-1}F_{n+1} - F_n^2 = (-1)^n$). - -Many other geometric dissection puzzles are based on a few simple properties of the Fibonacci sequence. - -Sam Loyd's chessboard paradox demonstrates two rearrangements of an 8×8 square. In the "larger" rearrangement (the 5×13 rectangle in the image to the right), the gaps between the figures take up one more unit square of combined area than the gaps in the original square arrangement, creating an illusion that the figures there take up more space than those in the original square figure.
In the "smaller" rearrangement (the shape below the 5×13 rectangle), each quadrilateral needs to overlap the triangle by an area of half a unit for its top/bottom edge to align with a grid line, resulting overall loss in one unit square area. - -Mitsunobu Matsuyama's "paradox" uses four congruent quadrilaterals and a small square, which form a larger square. When the quadrilaterals are rotated about their centers they fill the space of the small square, although the total area of the figure seems unchanged. The apparent paradox is explained by the fact that the side of the new large square is a little smaller than the original one. If θ is the angle between two opposing sides in each quadrilateral, then the ratio of the two areas is given by sec2 θ. For θ = 5°, this is approximately 1.00765, which corresponds to a difference of about 0.8%. diff --git a/wiki/wikipedia/3713.txt b/wiki/wikipedia/3713.txt deleted file mode 100644 index 351110629a40883d1e9c1c371f00faabe3de6969..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3713.txt +++ /dev/null @@ -1,15 +0,0 @@ -In topology, the pasting or gluing lemma, and sometimes the gluing rule, is an important result which says that two continuous functions can be "glued together" to create another continuous function. The lemma is implicit in the use of piecewise functions. For example, in the book Topology and Groupoids, where the condition given for the statement below is that $A \setminus B \subseteq \operatorname{Int} A$ and $B \setminus A \subseteq \operatorname{Int} B$. - -The pasting lemma is crucial to the construction of the fundamental group or fundamental groupoid of a topological space; it allows one to concatenate continuous paths to create a new continuous path. - -Let $X,Y$ be both closed (or both open) subsets of a topological space A such that $A = X \cup Y$, and let B also be a topological space. If $f: A \to B$ is continuous when restricted to both X and Y, then f is continuous. - -This result allows one to take two continuous functions defined on closed (or open) subsets of a topological space and create a new one. - -Proof: if U is a closed subset of B, then $f^{-1}(U )\cap X$ and $f^{-1}(U )\cap Y$ are both closed since each is the preimage of f when restricted to X and Y respectively, which by assumption are continuous. Then their union, $f^{-1}(U)$ is also closed, being a finite union of closed sets. - -A similar argument applies when X and Y are both open. $\Box$ - -The infinite analog of this result (where $A=X_1\cup X_2\cup X_3\cup\cdots$) is not true for closed $X_1, X_2, X_3\ldots$. For instance, the inclusion map $\iota:Z\rightarrow R$ from the integers to the real line (with the integers equipped with the cofinite topology) is continuous when restricted to an integer, but the inverse image of a bounded open set in the reals with this map is at most a finite number of points, so not open in Z. - -It is, however, true if the $X_1, X_2, X_3\ldots$ form a locally finite collection since a union of locally finite closed sets is closed. Similarly, it is true if the $X_1, X_2, X_3\ldots$ are instead assumed to be open since a union of open sets is open. 
diff --git a/wiki/wikipedia/3714.txt b/wiki/wikipedia/3714.txt deleted file mode 100644 index 5efc53a5963ad411eacd9876df015fe9211a6757..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3714.txt +++ /dev/null @@ -1,201 +0,0 @@ -In mathematical analysis there is a class of Sobolev inequalities, relating norms, including those of Sobolev spaces. These are used to prove the Sobolev embedding theorem, giving inclusions between certain Sobolev spaces, and the Rellich–Kondrachov theorem showing that under slightly stronger conditions some Sobolev spaces are compactly embedded in others. They are named after Sergei Lvovich Sobolev. - -Let W k,p(Rn) denote the Sobolev space consisting of all real-valued functions on Rn whose first k weak derivatives are functions in Lp. Here k is a non-negative integer and 1 ≤ p < ∞. The first part of the Sobolev embedding theorem states that if k > ℓ, p < n and 1 ≤ p < q < ∞ are two real numbers such that -$$ -\frac{1}{p}-\frac{k}{n} = \frac{1}{q} -\frac{\ell}{n}, -$$ - -then -$$ -W^{k,p}(\mathbf{R}^n)\subseteq W^{\ell,q}(\mathbf{R}^n) -$$ - -and the embedding is continuous. In the special case of k = 1 and ℓ = 0, Sobolev embedding gives -$$ -W^{1,p}(\mathbf{R}^n) \subseteq L^{p^*}(\mathbf{R}^n) -$$ - -where p* is the Sobolev conjugate of p, given by -$$ -\frac{1}{p^*} = \frac{1}{p} - \frac{1}{n}. -$$ - -This special case of the Sobolev embedding is a direct consequence of the Gagliardo–Nirenberg–Sobolev inequality. The result should be interpreted as saying that if a function $f$ in $L^p(\mathbb R^n)$ has one derivative in $L^p$, then $f$ itself has improved local behavior, meaning that it belongs to the space $L^{p^*}$ where $p^*>p$. (Note that $1/p^*<1/p$, so that $p^*>p$.) Thus, any local singularities in $f$ must be milder than for a typical function in $L^p$. - -The second part of the Sobolev embedding theorem applies to embeddings in Hölder spaces C r,α(Rn). If n < pk and -$$ -\frac{1}{p}-\frac{k}{n} = -\frac{r + \alpha}{n}, \mbox{ or, equivalently, } r + \alpha = k - \frac{n}{p} -$$ - -with α ∈ (0, 1) then one has the embedding -$$ -W^{k,p}(\mathbf{R}^n)\subset C^{r,\alpha}(\mathbf{R}^n). -$$ - -This part of the Sobolev embedding is a direct consequence of Morrey's inequality. Intuitively, this inclusion expresses the fact that the existence of sufficiently many weak derivatives implies some continuity of the classical derivatives. If $ \alpha = 1$ then $W^{k,p}(\mathbf{R}^n)\subset C^{r,\gamma}(\mathbf{R}^n)$ for every $ \gamma \in (0,1)$. - -In particular, as long as $pk>n$, the embedding criterion will hold with $r=0$ and some positive value of $\alpha$. That is, for a function $f$ on $\mathbb R^n$, if $f$ has $k$ derivatives in $L^p$ and $pk>n$, then $f$ will be continuous (and actually Hölder continuous with some positive exponent $\alpha$). - -The Sobolev embedding theorem holds for Sobolev spaces W k,p(M) on other suitable domains M. In particular, both parts of the Sobolev embedding hold when - -* M is a bounded open set in Rn with Lipschitz boundary (or whose boundary satisfies the cone condition) - -* M is a compact Riemannian manifold - -* M is a compact Riemannian manifold with boundary and the boundary is Lipschitz (meaning that the boundary can be locally represented as a graph of a Lipschitz continuous function). - -* M is a complete Riemannian manifold with injectivity radius δ > 0 and bounded sectional curvature.
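The exponent bookkeeping from the first part above is easy to make concrete (an illustrative sketch, not library code):

```python
# Sobolev conjugate p* from 1/p* = 1/p - 1/n, defined for p < n.
from fractions import Fraction

def sobolev_conjugate(p, n):
    p, n = Fraction(p), Fraction(n)
    assert p < n, "p* is only defined for p < n"
    return 1 / (1 / p - 1 / n)

print(sobolev_conjugate(2, 3))   # 6: W^{1,2}(R^3) embeds in L^6(R^3)
print(sobolev_conjugate(1, 2))   # 2: W^{1,1}(R^2) embeds in L^2(R^2)
```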
- -If M is a bounded open set in Rn with continuous boundary, then W 1,2(M) is compactly embedded in L2(M). - -On a compact manifold M with C1 boundary, the Kondrachov embedding theorem states that if k > ℓ and $\frac{1}{p}-\frac{k}{n} < \frac{1}{q} -\frac{\ell}{n},$ then the Sobolev embedding -$$ -W^{k,p}(M)\subset W^{\ell,q}(M) -$$ - -is completely continuous (compact). Note that the condition is just as in the first part of the Sobolev embedding theorem, with the equality replaced by an inequality, thus requiring a more regular space W k,p(M). - -Assume that u is a continuously differentiable real-valued function on Rn with compact support. Then for 1 ≤ p < n there is a constant C depending only on n and p such that -$$ -\|u\|_{L^{p^*}(\mathbf{R}^n)}\leq C \|Du\|_{L^{p}(\mathbf{R}^n)}. -$$ - -where 1/p* = 1/p - 1/n. - -The case $ 1< p < n $ is due to Sobolev, $ p =1 $ to Gagliardo and Nirenberg independently. The Gagliardo–Nirenberg–Sobolev inequality directly implies the Sobolev embedding -$$ -W^{1,p}(\mathbf{R}^n) \subset L^{p^*}(\mathbf{R}^n). -$$ - -The embeddings in other orders on Rn are then obtained by suitable iteration. - -Sobolev's original proof of the Sobolev embedding theorem relied on the following, sometimes known as the Hardy–Littlewood–Sobolev fractional integration theorem; an equivalent statement is sometimes known as the Sobolev lemma. - -Let 0 < α < n and 1 < p < q < ∞. Let Iα = (−Δ)−α/2 be the Riesz potential on Rn. Then, for q defined by -$$ -\frac 1 q = \frac 1 p - \frac \alpha n -$$ - -there exists a constant C depending only on p such that -$$ -\left \|I_\alpha f \right \|_q \le C \|f\|_p. -$$ - -If p = 1, then one has two possible replacement estimates. The first is the more classical weak-type estimate: -$$ -m \left \{x : \left |I_\alpha f(x) \right | > \lambda \right \} \le C \left( \frac{\|f\|_1}{\lambda} \right )^q, -$$ - -where 1/q = 1 − α/n. Alternatively one has the estimate -$$ -\left \|I_\alpha f \right \|_q \le C \|Rf\|_1, -$$ - -where $ Rf $ is the vector-valued Riesz transform. The boundedness of the Riesz transforms implies that the latter inequality gives a unified way to write the family of inequalities for the Riesz potential. - -The Hardy–Littlewood–Sobolev lemma implies the Sobolev embedding essentially by the relationship between the Riesz transforms and the Riesz potentials. - -Assume n < p ≤ ∞. Then there exists a constant C, depending only on p and n, such that -$$ -\|u\|_{C^{0,\gamma}(\mathbf{R}^n)}\leq C \|u\|_{W^{1,p}(\mathbf{R}^n)} -$$ - -for all u ∈ C1(Rn) ∩ Lp(Rn), where -$$ -\gamma=1-\frac{n}{p}. -$$ - -Thus if u ∈ W 1,p(Rn), then u is in fact Hölder continuous of exponent γ, after possibly being redefined on a set of measure 0. - -A similar result holds in a bounded domain U with C1 boundary. In this case, -$$ -\|u\|_{C^{0,\gamma}(U)}\leq C \|u\|_{W^{1,p}(U)} -$$ - -where the constant C depends now on n, p and U. This version of the inequality follows from the previous one by applying the norm-preserving extension of W 1,p(U) to W 1,p(Rn). The inequality is named after Charles B. Morrey Jr. - -Let U be a bounded open subset of Rn, with a C1 boundary. (U may also be unbounded, but in this case its boundary, if it exists, must be sufficiently well-behaved.) - -Assume u ∈ W k,p(U). Then we consider two cases, according to whether $k < n/p$ or $k > n/p$. - -First, suppose $k < n/p$. In this case we conclude that u ∈ Lq(U), where -$$ -\frac{1}{q}=\frac{1}{p}-\frac{k}{n}.
-$$ - -We have in addition the estimate -$$ -\|u\|_{L^q(U)}\leq C \|u\|_{W^{k,p}(U)}, -$$ - -the constant C depending only on k, p, n, and U. - -Now suppose $k > n/p$. Here, we conclude that u belongs to a Hölder space, more precisely: -$$ - u \in C^{k-\left[\frac{n}{p}\right]-1,\gamma}(U), -$$ - -where - -\gamma = \begin{cases} - -\left[\frac{n}{p}\right]+1-\frac{n}{p} & \frac{n}{p} \notin \mathbf{Z} \\ - -\text{any element in } (0, 1) & \frac{n}{p} \in \mathbf{Z} - -\end{cases} - -We have in addition the estimate -$$ -\|u\|_{C^{k-\left[\frac{n}{p}\right]-1,\gamma}(U)}\leq C \|u\|_{W^{k,p}(U)}, -$$ - -the constant C depending only on k, p, n, γ, and U. In particular, the condition $k>n/p$ guarantees that $u$ is continuous (and actually Hölder continuous with some positive exponent). - -If $u\in W^{1,n}(\mathbf{R}^n)$, then u is a function of bounded mean oscillation and -$$ -\|u\|_{BMO} \leq C \|Du\|_{L^n(\mathbf{R}^n)}, -$$ - -for some constant C depending only on n. This estimate is a corollary of the Poincaré inequality. - -The Nash inequality, introduced by John Nash, states that there exists a constant C > 0, such that for all u ∈ L1(Rn) ∩ W 1,2(Rn), -$$ -\|u\|_{L^2(\mathbf{R}^n)}^{1+2/n} \leq C\|u\|_{L^1(\mathbf{R}^n)}^{2/n} \| Du\|_{L^2(\mathbf{R}^n)}. -$$ - -The inequality follows from basic properties of the Fourier transform. Indeed, integrating over the complement of the ball of radius ρ, -$$ -\int_{|x| \ge \rho} \left|\hat{u}(x)\right|^2 dx \le \int_{|x| \ge \rho} \frac{|x|^2}{\rho^2} \left|\hat{u}(x)\right|^2 dx \le \rho^{-2} \|Du\|_{L^2(\mathbf{R}^n)}^2 \qquad (1) -$$ - -because $1\le|x|^2/\rho^2$. On the other hand, one has -$$ -|\hat{u}| \le \|u\|_{L^1} -$$ - -which, when integrated over the ball of radius ρ, gives -$$ -\int_{|x| \le \rho} \left|\hat{u}(x)\right|^2 dx \le \omega_n \rho^n \|u\|_{L^1(\mathbf{R}^n)}^2 \qquad (2) -$$ - -where ωn is the volume of the n-ball. Choosing ρ to minimize the sum of (1) and (2) and applying Parseval's theorem: -$$ -\|\hat{u}\|_{L^2} = \|u\|_{L^2} -$$ - -gives the inequality. - -In the special case of n = 1, the Nash inequality can be extended to the Lp case, in which case it is a generalization of the Gagliardo-Nirenberg-Sobolev inequality. In fact, if I is a bounded interval, then for all 1 ≤ r < ∞ and all 1 ≤ q ≤ p < ∞ the following inequality holds -$$ -\| u\|_{L^p(I)}\le C\| u\|^{1-a}_{L^q(I)} \|u\|^a_{W^{1,r}(I)}, -$$ - -where: -$$ -a\left(\frac{1}{q}-\frac{1}{r}+1\right)=\frac{1}{q}-\frac{1}{p}. -$$ - -The simplest of the Sobolev embedding theorems, described above, states that if a function $f$ in $L^p(\mathbb R^n)$ has one derivative in $L^p$, then $f$ itself is in $L^{p^*}$, where -$$ -1/p^*=1/p-1/n. -$$ - -We can see that as $n$ tends to infinity, $p^*$ approaches $p$. Thus, if the dimension $n$ of the space on which $f$ is defined is large, the improvement in the local behavior of $f$ from having a derivative in $L^p$ is small ($p^*$ is only slightly larger than $p$). In particular, for functions on an infinite-dimensional space, we cannot expect any direct analog of the classical Sobolev embedding theorems. - -There is, however, a type of Sobolev inequality, established by Leonard Gross and known as a logarithmic Sobolev inequality, that has dimension-independent constants and therefore continues to hold in the infinite-dimensional setting. The logarithmic Sobolev inequality says, roughly, that if a function $f$ is in $L^p$ with respect to a Gaussian measure and has one derivative that is also in $L^p$, then $f$ is in "$L^p$-log", meaning that the integral of $|f|^p\log|f|$ is finite. The inequality expressing this fact has constants that do not involve the dimension of the space and, thus, the inequality holds in the setting of a Gaussian measure on an infinite-dimensional space.
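For the standard Gaussian measure on the real line and p = 2, the inequality is commonly written Ent(f²) ≤ 2 E|f′|², where Ent(g) = E[g log g] − E[g] log E[g]. A Monte Carlo spot-check for one test function (an illustrative sketch; this constant-2 form is the usual statement for the standard Gaussian, not a quotation from this article, and the test function is an arbitrary choice):

```python
# Check Ent(f^2) <= 2 E[f'(x)^2] for f(x) = cosh(x/2) under x ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1_000_000)

f  = np.cosh(x / 2)
df = np.sinh(x / 2) / 2          # derivative of f

g = f**2
entropy  = np.mean(g * np.log(g)) - np.mean(g) * np.log(np.mean(g))
gradient = 2 * np.mean(df**2)
print(entropy, gradient, bool(entropy <= gradient))   # ... True
```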
It is now known that logarithmic Sobolev inequalities hold for many different types of measures, not just Gaussian measures. - -Although it might seem as if the $L^p$-log condition is a very small improvement over being in $L^p$, this improvement is sufficient to derive an important result, namely hypercontractivity for the associated Dirichlet form operator. This result means that if a function is in the range of the exponential of the Dirichlet form operator—which means that the function has, in some sense, infinitely many derivatives in $L^p$—then the function does belong to $L^{p^*}$ for some $p^*>p$ ( Theorem 6). diff --git a/wiki/wikipedia/3715.txt b/wiki/wikipedia/3715.txt deleted file mode 100644 index af81ed6a7323ccfd9768b9ecc02f94746b2c405f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3715.txt +++ /dev/null @@ -1,59 +0,0 @@ -In mathematics, the axiom of regularity (also known as the axiom of foundation) is an axiom of Zermelo–Fraenkel set theory that states that every non-empty set A contains an element that is disjoint from A. In first-order logic, the axiom reads: -$$ -\forall x(x \neq \varnothing \rightarrow \exists y(y \in x\ \land y \cap x = \varnothing)). -$$ - -The axiom of regularity together with the axiom of pairing implies that no set is an element of itself, and that there is no infinite sequence (an) such that ai+1 is an element of ai for all i. With the axiom of dependent choice (which is a weakened form of the axiom of choice), this result can be reversed: if there are no such infinite sequences, then the axiom of regularity is true. Hence, in this context the axiom of regularity is equivalent to the sentence that there are no downward infinite membership chains. - -The axiom was introduced by von Neumann; it was adopted in a formulation closer to the one found in contemporary textbooks by Zermelo. Virtually all results in the branches of mathematics based on set theory hold even in the absence of regularity; see chapter 3 of Kunen. However, regularity makes some properties of ordinals easier to prove; and it not only allows induction to be done on well-ordered sets but also on proper classes that are well-founded relational structures such as the lexicographical ordering on $\{ (n, \alpha) \mid n \in \omega \land \alpha \text{ is an ordinal } \} .$ - -Given the other axioms of Zermelo–Fraenkel set theory, the axiom of regularity is equivalent to the axiom of induction. The axiom of induction tends to be used in place of the axiom of regularity in intuitionistic theories (ones that do not accept the law of the excluded middle), where the two axioms are not equivalent. - -In addition to omitting the axiom of regularity, non-standard set theories have indeed postulated the existence of sets that are elements of themselves. - -Let A be a set, and apply the axiom of regularity to {A}, which is a set by the axiom of pairing. We see that there must be an element of {A} which is disjoint from {A}. Since the only element of {A} is A, it must be that A is disjoint from {A}. So, since $A \cap \{A\} = \varnothing$, we cannot have A ∈ A (by the definition of disjoint). - -Suppose, to the contrary, that there is a function, f, on the natural numbers with f(n+1) an element of f(n) for each n. Define S = {f(n): n a natural number}, the range of f, which can be seen to be a set from the axiom schema of replacement. Applying the axiom of regularity to S, let B be an element of S which is disjoint from S. 
By the definition of S, B must be f(k) for some natural number k. However, we are given that f(k) contains f(k+1) which is also an element of S. So f(k+1) is in the intersection of f(k) and S. This contradicts the fact that they are disjoint sets. Since our supposition led to a contradiction, there must not be any such function, f. - -The nonexistence of a set containing itself can be seen as a special case where the sequence is infinite and constant. - -Notice that this argument only applies to functions f that can be represented as sets as opposed to undefinable classes. The hereditarily finite sets, Vω, satisfy the axiom of regularity (and all other axioms of ZFC except the axiom of infinity). So if one forms a non-trivial ultrapower of Vω, then it will also satisfy the axiom of regularity. The resulting model will contain elements, called non-standard natural numbers, that satisfy the definition of natural numbers in that model but are not really natural numbers. They are fake natural numbers which are "larger" than any actual natural number. This model will contain infinite descending sequences of elements. For example, suppose n is a non-standard natural number, then $(n-1) \in n$ and $(n-2) \in (n-1)$, and so on. For any actual natural number k, $(n-k-1) \in (n-k)$. This is an unending descending sequence of elements. But this sequence is not definable in the model and thus not a set. So no contradiction to regularity can be proved. - -The axiom of regularity enables defining the ordered pair (a,b) as {a,{a,b}}; see ordered pair for specifics. This definition eliminates one pair of braces from the canonical Kuratowski definition (a,b) = {{a},{a,b}}. - -This was actually the original form of the axiom in von Neumann's axiomatization. - -Suppose x is any set. Let t be the transitive closure of {x}. Let u be the subset of t consisting of unranked sets. If u is empty, then x is ranked and we are done. Otherwise, apply the axiom of regularity to u to get an element w of u which is disjoint from u. Since w is in u, w is unranked. w is a subset of t by the definition of transitive closure. Since w is disjoint from u, every element of w is ranked. Applying the axioms of replacement and union to combine the ranks of the elements of w, we get an ordinal rank for w, to wit $\textstyle \operatorname{rank} (w) = \cup \{ \operatorname{rank} (z) + 1 \mid z \in w \}$. This contradicts the conclusion that w is unranked. So the assumption that u was non-empty must be false and x must have rank. - -Let X and Y be sets. Then apply the axiom of regularity to the set {X,Y} (which exists by the axiom of pairing). We see there must be an element of {X,Y} which is also disjoint from it. It must be either X or Y. By the definition of disjoint then, we must have either Y is not an element of X or vice versa. - -Let the non-empty set S be a counter-example to the axiom of regularity; that is, every element of S has a non-empty intersection with S. We define a binary relation R on S by $aRb :\Leftrightarrow b \in S \cap a$, which is entire by assumption. Thus, by the axiom of dependent choice, there is some sequence (an) in S satisfying anRan+1 for all n in N. As this is an infinite descending chain, we arrive at a contradiction and so, no such S exists. - -Regularity was shown to be relatively consistent with the rest of ZF by Skolem and von Neumann, meaning that if ZF without regularity is consistent, then ZF (with regularity) is also consistent. For his proof in modern notation see Vaught for instance. 
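The rank construction is easy to mimic for hereditarily finite sets (an illustrative sketch of my own, not from the article): modelling sets as nested frozensets, the union $\cup \{ \operatorname{rank}(z) + 1 \mid z \in w \}$ reduces to a maximum in the finite case.

```python
# von Neumann rank of a hereditarily finite set; note that frozensets cannot
# contain themselves, mirroring the consequence of regularity that no set is
# an element of itself.
def rank(s):
    return max((rank(z) + 1 for z in s), default=0)

zero = frozenset()               # the von Neumann ordinal 0
one  = frozenset({zero})         # 1 = {0}
two  = frozenset({zero, one})    # 2 = {0, 1}
print(rank(zero), rank(one), rank(two))          # 0 1 2
print(rank(frozenset({two, frozenset({two})})))  # 4, the rank of {2, {2}}
```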
- -The axiom of regularity was also shown to be independent from the other axioms of ZF(C), assuming they are consistent. The result was announced by Paul Bernays in 1941, although he did not publish a proof until 1954. The proof involves (and led to the study of) Rieger-Bernays permutation models (or method), which were used for other proofs of independence for non-well-founded systems. - -Naive set theory (the axiom schema of unrestricted comprehension and the axiom of extensionality) is inconsistent due to Russell's paradox. In early formalizations of sets, mathematicians and logicians have avoided that contradiction by replacing the axiom schema of comprehension with the much weaker axiom schema of separation. However, this step alone takes one to theories of sets which are considered too weak. So some of the power of comprehension was added back via the other existence axioms of ZF set theory (pairing, union, powerset, replacement, and infinity) which may be regarded as special cases of comprehension. So far, these axioms do not seem to lead to any contradiction. Subsequently, the axiom of choice and the axiom of regularity were added to exclude models with some undesirable properties. These two axioms are known to be relatively consistent. - -In the presence of the axiom schema of separation, Russell's paradox becomes a proof that there is no set of all sets. The axiom of regularity together with the axiom of pairing also prohibit such a universal set. However, Russell's paradox yields a proof that there is no "set of all sets" using the axiom schema of separation alone, without any additional axioms. In particular, ZF without the axiom of regularity already prohibits such a universal set. - -If a theory is extended by adding an axiom or axioms, then any (possibly undesirable) consequences of the original theory remain consequences of the extended theory. In particular, if ZF without regularity is extended by adding regularity to get ZF, then any contradiction (such as Russell's paradox) which followed from the original theory would still follow in the extended theory. - -The existence of Quine atoms (sets that satisfy the formula x = {x}, i.e. have themselves as their only elements) is consistent with the theory obtained by removing the axiom of regularity from ZFC. Various non-wellfounded set theories allow "safe" circular sets, such as Quine atoms, without becoming inconsistent by means of Russell's paradox. - -In ZF it can be proven that the class $ \bigcup_{\alpha} V_\alpha $, called the von Neumann universe, is equal to the class of all sets. This statement is even equivalent to the axiom of regularity (if we work in ZF with this axiom omitted). From any model which does not satisfy the axiom of regularity, a model which satisfies it can be constructed by taking only sets in $ \bigcup_{\alpha} V_\alpha $. - -It has been observed that "The idea of rank is a descendant of Russell's concept of type". Comparing ZF with type theory, Alasdair Urquhart wrote that "Zermelo's system has the notational advantage of not containing any explicitly typed variables, although in fact it can be seen as having an implicit type structure built into it, at least if the axiom of regularity is included. The details of this implicit typing are spelled out in [Zermelo 1930], and again in a well-known article of George Boolos [Boolos 1971]."
- -Dana Scott went further, showing that an axiomatic system based on the inherent properties of the cumulative hierarchy turns out to be equivalent to ZF, including regularity. - -The concept of well-foundedness and rank of a set were both introduced by Dmitry Mirimanoff (1917); cf. Lévy and Hallett. Mirimanoff called a set x "regular" (French: "ordinaire") if every descending chain x ∋ x1 ∋ x2 ∋ ... is finite. Mirimanoff however did not consider his notion of regularity (and well-foundedness) as an axiom to be observed by all sets; in later papers Mirimanoff also explored what are now called non-well-founded sets ("extraordinaire" in Mirimanoff's terminology). - -Skolem and von Neumann pointed out that non-well-founded sets are superfluous (on p. 404 in van Heijenoort's translation) and in the same publication von Neumann gives an axiom (p. 412 in translation) which excludes some, but not all, non-well-founded sets. In a subsequent publication, von Neumann gave the following axiom (rendered in modern notation by A. Rieger): -$$ -\forall x(x \neq \emptyset \rightarrow \exists y \in x(y \cap x = \emptyset)). -$$ - -Urelements are objects that are not sets, but which can be elements of sets. In ZF set theory, there are no urelements, but in some other set theories such as ZFA, there are. In these theories, the axiom of regularity must be modified. The statement "$x \neq \emptyset$" needs to be replaced with a statement that $x$ is not empty and is not an urelement. One suitable replacement is $(\exists y)[y \in x]$, which states that x is inhabited. diff --git a/wiki/wikipedia/3716.txt b/wiki/wikipedia/3716.txt deleted file mode 100644 index b3e0504c85b3b6eb0e1d84ab7dbfd934dbb7a824..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3716.txt +++ /dev/null @@ -1,37 +0,0 @@ -In mathematics, Roth's theorem is a fundamental result in diophantine approximation to algebraic numbers. It is of a qualitative type, stating that algebraic numbers cannot have many rational number approximations that are 'very good'. Over half a century, the meaning of very good here was refined by a number of mathematicians, starting with Joseph Liouville in 1844 and continuing with the work of Axel Thue, Carl Ludwig Siegel, Freeman Dyson, and Klaus Roth. - -Roth's theorem states that every irrational algebraic number $\alpha$ has approximation exponent equal to 2. This means that, for every $\varepsilon>0$, the inequality -$$ -\left|\alpha - \frac{p}{q}\right| < \frac{1}{q^{2 + \varepsilon}} -$$ - -can have only finitely many solutions in coprime integers $p$ and $q$. Roth's proof of this fact resolved a conjecture by Siegel. It follows that every irrational algebraic number α satisfies -$$ -\left|\alpha - \frac{p}{q}\right| > \frac{C(\alpha,\varepsilon)}{q^{2 + \varepsilon}} -$$ - -with $C(\alpha,\varepsilon)$ a positive number depending only on $\varepsilon>0$ and $\alpha$. - -The first result in this direction is Liouville's theorem on approximation of algebraic numbers, which gives an approximation exponent of d for an algebraic number α of degree d ≥ 2. This is already enough to demonstrate the existence of transcendental numbers. Thue realised that an exponent less than d would have applications to the solution of Diophantine equations, and his theorem from 1909 established an exponent of $d/2 + 1 + \varepsilon$. Siegel's theorem improves this to an exponent about $2\sqrt{d}$, and Dyson's theorem of 1947 has exponent about $\sqrt{2d}$.
- -Roth's result with exponent 2 is in some sense the best possible, because this statement would fail on setting $\varepsilon = 0$: by Dirichlet's theorem on diophantine approximation there are infinitely many solutions in this case. However, there is a stronger conjecture of Serge Lang that -$$ -\left|\alpha - \frac{p}{q}\right| < \frac{1}{q^2 \log(q)^{1+\varepsilon}} -$$ - -can have only finitely many solutions in integers p and q. If one lets α run over the whole of the set of real numbers, not just the algebraic reals, then both Roth's conclusion and Lang's hold for almost all $\alpha$. So both the theorem and the conjecture assert that a certain countable set misses a certain set of measure zero. - -The theorem is not currently effective: that is, there is no bound known on the possible values of p,q given $\alpha$. Davenport showed that Roth's techniques could be used to give an effective bound for the number of p/q satisfying the inequality, using a "gap" principle. The fact that we do not actually know $C(\alpha,\varepsilon)$ means that the project of solving the equation, or bounding the size of the solutions, is out of reach. - -The proof technique involves constructing an auxiliary multivariate polynomial in an arbitrarily large number of variables depending upon $\varepsilon$, leading to a contradiction in the presence of too many good approximations. More specifically, one finds a certain number of rational approximations to the irrational algebraic number in question, and then applies the function over each of these simultaneously (i.e. each of these rational numbers serves as the input to a unique variable in the expression defining our function). By its nature, the method is ineffective (see effective results in number theory); this is of particular interest since a major application of this type of result is to bound the number of solutions of some diophantine equations. - -There is a higher-dimensional version, Schmidt's subspace theorem, of the basic result. There are also numerous extensions, for example using the p-adic metric, based on the Roth method. - -William J. LeVeque generalized the result by showing that a similar bound holds when the approximating numbers are taken from a fixed algebraic number field. Define the height H(ξ) of an algebraic number ξ to be the maximum of the absolute values of the coefficients of its minimal polynomial. Fix κ>2. For a given algebraic number α and algebraic number field K, the inequality -$$ - | \alpha - \xi | < \frac{1}{H(\xi)^\kappa} -$$ - -has only finitely many solutions in elements ξ of K. diff --git a/wiki/wikipedia/3717.txt b/wiki/wikipedia/3717.txt deleted file mode 100644 index a23858e4769b991d61c885f04aa0677809caea0a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3717.txt +++ /dev/null @@ -1,33 +0,0 @@ -The hairy ball theorem of algebraic topology (sometimes called the hedgehog theorem in Europe) states that there is no nonvanishing continuous tangent vector field on even-dimensional n-spheres. For the ordinary sphere, or 2‑sphere, if f is a continuous function that assigns a vector in R3 to every point p on a sphere such that f(p) is always tangent to the sphere at p, then there is at least one pole, a point where the field vanishes (a p such that f(p) = 0). - -The theorem was first proved by Henri Poincaré for the 2-sphere in 1885, and extended to higher dimensions in 1912 by Luitzen Egbertus Jan Brouwer.
- -The theorem has been expressed colloquially as "you can't comb a hairy ball flat without creating a cowlick" or "you can't comb the hair on a coconut". - -Every zero of a vector field has a (non-zero) "index", and it can be shown that the sum of all of the indices at all of the zeros must be two, because the Euler characteristic of the 2-sphere is two. Therefore, there must be at least one zero. This is a consequence of the Poincaré–Hopf theorem. In the case of the torus, the Euler characteristic is 0; and it is possible to "comb a hairy doughnut flat". In this regard, it follows that for any compact regular 2-dimensional manifold with non-zero Euler characteristic, any continuous tangent vector field has at least one zero. - -A common problem in computer graphics is to generate a non-zero vector in R3 that is orthogonal to a given non-zero one. There is no single continuous function that can do this for all non-zero vector inputs. This is a corollary of the hairy ball theorem. To see this, consider the given vector as the radius of a sphere and note that finding a non-zero vector orthogonal to the given one is equivalent to finding a non-zero vector that is tangent to the surface of that sphere where it touches the radius. However, the hairy ball theorem says there exists no continuous function that can do this for every point on the sphere (equivalently, for every given vector). - -There is a closely related argument from algebraic topology, using the Lefschetz fixed-point theorem. Since the Betti numbers of a 2-sphere are 1, 0, 1, 0, 0, ... the Lefschetz number (total trace on homology) of the identity mapping is 2. By integrating a vector field we get (at least a small part of) a one-parameter group of diffeomorphisms on the sphere; and all of the mappings in it are homotopic to the identity. Therefore, they all have Lefschetz number 2, also. Hence they have fixed points (since the Lefschetz number is nonzero). Some more work would be needed to show that this implies there must actually be a zero of the vector field. It does suggest the correct statement of the more general Poincaré-Hopf index theorem. - -A consequence of the hairy ball theorem is that any continuous function that maps an even-dimensional sphere into itself has either a fixed point or a point that maps onto its own antipodal point. This can be seen by transforming the function into a tangential vector field as follows. - -Let s be the function mapping the sphere to itself, and let v be the tangential vector function to be constructed. For each point p, construct the stereographic projection of s(p) with p as the point of tangency. Then v(p) is the displacement vector of this projected point relative to p. According to the hairy ball theorem, there is a p such that v(p) = 0, so that s(p) = p. - -This argument breaks down only if there exists a point p for which s(p) is the antipodal point of p, since such a point is the only one that cannot be stereographically projected onto the tangent plane of p. - -The connection with the Euler characteristic χ suggests the correct generalisation: the 2n-sphere has no non-vanishing vector field for n ≥ 1. The difference between even and odd dimensions is that, because the only nonzero Betti numbers of the m-sphere are b0 and bm, their alternating sum χ is 2 for m even, and 0 for m odd. 
- 
-Indeed it is easy to see that an odd-dimensional sphere admits a non-vanishing tangent vector field through a simple process of considering coordinates of the ambient even-dimensional Euclidean space $\mathbb{R}^{2n}$ in pairs. Namely, one may define a tangent vector field to $S^{2n-1}$ by specifying a vector field $v: \mathbb{R}^{2n} \to \mathbb{R}^{2n}$ given by
-$$
- v(x_1,\dots,x_{2n}) = (x_2, -x_1,\dots,x_{2n},-x_{2n-1}).
-$$
-
-In order for this vector field to restrict to a tangent vector field to the unit sphere $S^{2n-1}\subset \mathbb{R}^{2n}$ it is enough to verify that the dot product with a unit vector of the form $x=(x_1,\dots,x_{2n})$ satisfying $\|x\|=1$ vanishes. Due to the pairing of coordinates, one sees
-$$
- v(x_1,\dots,x_{2n}) \bullet (x_1,\dots,x_{2n}) = (x_2 x_1 - x_1 x_2) + \cdots + (x_{2n} x_{2n-1} - x_{2n-1} x_{2n}) = 0.
-$$
-
-For a 2n-sphere, the ambient Euclidean space is $\mathbb{R}^{2n+1}$, which is odd-dimensional, and so this simple process of pairing coordinates is not possible. Whilst this does not preclude the possibility that there may still exist a tangent vector field to the even-dimensional sphere which does not vanish, the hairy ball theorem demonstrates that in fact there is no way of constructing such a vector field.
-
-The hairy ball theorem has numerous physical consequences. For example, rotation of a rigid ball around its fixed axis gives rise to a continuous tangential vector field of velocities of the points located on its surface. This field has two zero-velocity points, which disappear after drilling the ball completely through its center, thereby converting the ball into the topological equivalent of a torus, a body to which the "hairy ball" theorem does not apply. The hairy ball theorem may also be applied to the propagation of electromagnetic waves, in the case when the wave front forms a surface topologically equivalent to a sphere (a surface with Euler characteristic χ = 2): at least one point at which the vectors of the electric and magnetic fields equal zero must then appear on the surface.
diff --git a/wiki/wikipedia/3718.txt b/wiki/wikipedia/3718.txt
deleted file mode 100644
index 2f9108700cb3984bb83c40e0e6ad4c23ba122783..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3718.txt
+++ /dev/null
@@ -1,22 +0,0 @@
-In mathematics, the Bishop–Gromov inequality is a comparison theorem in Riemannian geometry, named after Richard L. Bishop and Mikhail Gromov. It is closely related to Myers' theorem, and is the key point in the proof of Gromov's compactness theorem.
-
-Let $M$ be a complete n-dimensional Riemannian manifold whose Ricci curvature satisfies the lower bound
-$$
-\mathrm{Ric} \geq (n-1) K
-$$
-
-for a constant $K\in \R$. Let $M_K^n$ be the complete n-dimensional simply connected space of constant sectional curvature $K$ (and hence of constant Ricci curvature $(n-1)K$); thus $M_K^n$ is the n-sphere of radius $1/\sqrt{K}$ if $K>0$, or n-dimensional Euclidean space if $K=0$, or an appropriately rescaled version of n-dimensional hyperbolic space if $K<0$. Denote by $B(p,r)$ the ball of radius r around a point p, defined with respect to the Riemannian distance function.
-
-Then, for any $p\in M$ and $p_K\in M_K^n$, the function
-$$
- \phi(r) = \frac{\mathrm{Vol} B(p,r)}{\mathrm{Vol} B(p_K,r)}
-$$
-
-is non-increasing on $(0,\infty)$.
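-
-The monotonicity is easy to observe in the simplest non-trivial example. The following Python sketch (an added numerical illustration, assuming NumPy; the closed-form ball areas are standard) takes $M$ to be the unit 2-sphere, so that $\mathrm{Ric} = 1 \geq (n-1)K$ with $K = 0$, and compares geodesic balls on the sphere with Euclidean disks:
-```python
-import numpy as np
-
-# M = unit 2-sphere (Ric = 1 >= 0), comparison space M_0^2 = Euclidean plane.
-# Area of a geodesic ball of radius r on the unit sphere: 2*pi*(1 - cos r);
-# area of a Euclidean disk of radius r: pi*r^2.
-r = np.linspace(1e-6, np.pi, 1000)
-phi = (2 * np.pi * (1 - np.cos(r))) / (np.pi * r**2)
-
-print(phi[0])                           # ~ 1: the ratio tends to 1 as r -> 0
-print(bool(np.all(np.diff(phi) <= 0)))  # True: phi is non-increasing
-```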
- 
-As r goes to zero, the ratio approaches one, so together with the monotonicity this implies that
-$$
-\mathrm{Vol} B(p,r) \leq \mathrm{Vol} B(p_K,r).
-$$
-
-This is the version first proved by Bishop.
diff --git a/wiki/wikipedia/3719.txt b/wiki/wikipedia/3719.txt
deleted file mode 100644
index c366e9fc80fd98958c7331821caaca9ab31ec8e6..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3719.txt
+++ /dev/null
@@ -1,211 +0,0 @@
-In geometry, a Steiner chain is a set of n circles, all of which are tangent to two given non-intersecting circles (blue and red in Figure 1), where n is finite and each circle in the chain is tangent to the previous and next circles in the chain. In the usual closed Steiner chains, the first and last (nth) circles are also tangent to each other; by contrast, in open Steiner chains, they need not be. The given circles α and β do not intersect, but otherwise are unconstrained; the smaller circle may lie completely inside or outside of the larger circle. In these cases, the centers of Steiner-chain circles lie on an ellipse or a hyperbola, respectively.
-
-Steiner chains are named after Jakob Steiner, who defined them in the 19th century and discovered many of their properties. A fundamental result is Steiner's porism, which states:
-
-If at least one closed Steiner chain of n circles exists for two given circles α and β, then there is an infinite number of closed Steiner chains of n circles; and any circle tangent to α and β in the same way is a member of such a chain.
-
-"Tangent in the same way" means that the arbitrary circle is internally or externally tangent in the same way as a circle of the original Steiner chain. A porism is a type of theorem relating to the number of solutions and the conditions on it. Porisms often describe a geometrical figure that cannot exist unless a condition is met, but otherwise may exist in infinite number; another example is Poncelet's porism.
-
-The method of circle inversion is helpful in treating Steiner chains. Since it preserves tangencies, angles and circles, inversion transforms one Steiner chain into another of the same number of circles. One particular choice of inversion transforms the given circles α and β into concentric circles; in this case, all the circles of the Steiner chain have the same size and can "roll" around in the annulus between the circles like ball bearings. This standard configuration allows several properties of Steiner chains to be derived, e.g., its points of tangency always lie on a circle. Several generalizations of Steiner chains exist, most notably Soddy's hexlet and Pappus chains.
-
-
-
-Image:Steiner_chain_7mer.svg|The 7 circles of this Steiner chain (black) are externally tangent to the inner given circle (red) but internally tangent to the outer given circle (blue).
-
-Image:Steiner_chain_7mer_all_external.svg|The 7 circles of this Steiner chain (black) are externally tangent to both given circles (red and blue), which lie outside one another.
-
-Image:Steiner_chain_8mer_all_but_one_external.svg|Seven of the 8 circles of this Steiner chain (black) are externally tangent to both given circles (red and blue); the 8th circle is internally tangent to both.
-
-
-
-The two given circles α and β cannot intersect; hence, the smaller given circle must lie inside or outside the larger. The circles are usually shown as an annulus, i.e., with the smaller given circle inside the larger one.
In this configuration, the Steiner-chain circles are externally tangent to the inner given circle and internally tangent to the outer circle. However, the smaller circle may also lie completely outside the larger one (Figure 2). The black circles of Figure 2 satisfy the conditions for a closed Steiner chain: they are all tangent to the two given circles and each is tangent to its neighbors in the chain. In this configuration, the Steiner-chain circles have the same type of tangency to both given circles, either externally or internally tangent to both. If the two given circles are tangent at a point, the Steiner chain becomes an infinite Pappus chain, which is often discussed in the context of the arbelos (shoemaker's knife), a geometric figure made from three circles. There is no general name for a sequence of circles tangent to two given circles that intersect at two points.
-
-
-
-Image:Steiner_chain_9mer_annular.svg|Closed Steiner chain of nine circles. The 1st and 9th circles are tangent.
-
-Image:Steiner_chain_open_9mer.svg|Open Steiner chain of nine circles. The 1st and 9th circles overlap.
-
-Image:Steiner_chain_double_17mer.svg|Multicyclic Steiner chain of 17 circles in 2 wraps. The 1st and 17th circles touch.
-
-
-
-The two given circles α and β touch the n circles of the Steiner chain, but each circle Ck of a Steiner chain touches only four circles: α, β, and its two neighbors, Ck-1 and Ck+1. By default, Steiner chains are assumed to be closed, i.e., the first and last circles are tangent to one another. By contrast, an open Steiner chain is one in which the first and last circles, C1 and Cn, are not tangent to one another; these circles are tangent only to three circles. Multicyclic Steiner chains wrap around the inner circle more than once before closing, i.e., before being tangent to the initial circle.
-
-Closed Steiner chains are the systems of circles obtained as the circle packing theorem representation of a bipyramid.
-
-
-
-Image:Steiner_chain_3mer_annular.svg|n = 3
-
-Image:Steiner_chain_6mer_annular.svg|n = 6
-
-Image:Steiner_chain_9mer_annular.svg|n = 9
-
-Image:Steiner_chain_12mer_annular.svg|n = 12
-
-Image:Steiner_chain_20mer_annular.svg|n = 20
-
-
-
-The simplest type of Steiner chain is a closed chain of n circles of equal size surrounding an inscribed circle of radius r; the chain of circles is itself surrounded by a circumscribed circle of radius R. The inscribed and circumscribed given circles are concentric, and the Steiner-chain circles lie in the annulus between them. By symmetry, the angle 2θ between the centers of the Steiner-chain circles is 360°/n. Because Steiner-chain circles are tangent to one another, the distance between their centers equals the sum of their radii, here twice their radius ρ. The bisector (green in the figure) creates two right triangles, with a central angle of θ = 180°/n. The sine of this angle can be written as the length of its opposite segment, divided by the hypotenuse of the right triangle
-
-
-
-\sin \theta = \frac{\rho}{r + \rho}
-
-
-
-Since θ is known from n, this provides an equation for the unknown radius ρ of the Steiner-chain circles
-
-
-
-\rho = \frac{r \sin\theta}{1 - \sin\theta}
-
-
-
-The tangent points of a Steiner chain circle with the inner and outer given circles lie on a line that passes through their common center; hence, the outer radius R = r + 2ρ.
-
-These equations provide a criterion for the feasibility of a Steiner chain for two given concentric circles.
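-
-As a quick check of these formulas (an added Python sketch; the values n = 9 and r = 1 are arbitrary):
-```python
-import math
-
-# Closed annular Steiner chain: inner radius r, n circles, one wrap.
-n, r = 9, 1.0
-theta = math.pi / n                                # theta = 180 deg / n
-rho = r * math.sin(theta) / (1 - math.sin(theta))  # chain-circle radius
-R = r + 2 * rho                                    # circumscribed radius
-
-# Neighboring chain circles must be tangent: their centers lie at
-# distance r + rho from the common center, separated by the angle
-# 2*theta, so the chord between centers should equal 2*rho.
-chord = 2 * (r + rho) * math.sin(theta)
-print(rho, R, abs(chord - 2 * rho) < 1e-12)        # True: the chain closes
-```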
A closed Steiner chain of n circles requires that the ratio of radii R/r of the given circles equal exactly - - - -\frac{R}{r} = 1 + \frac{2 \sin\theta}{1 - \sin\theta} = \frac{1 + \sin\theta}{1 - \sin\theta} = \left[ \sec \theta + \tan \theta \right]^{2} - - - -As shown below, this ratio-of-radii criterion for concentric given circles can be extended to all types of given circles by the inversive distance δ of the two given circles. For concentric circles, this distance is defined as a logarithm of their ratio of radii - - - -\delta = \ln \frac{R}{r} - - - -Using the solution for concentric circles, the general criterion for a Steiner chain of n circles can be written - - - -\delta = 2 \ln \left( \sec\theta + \tan\theta \right). - - - -If a multicyclic annular Steiner chain has n total circles and wraps around m times before closing, the angle between Steiner-chain circles equals - - - -\theta = \frac{m}{n} 180^{\circ} - - - -In other respects, the feasibility criterion is unchanged. - - - -File:Steiner chain 9mer annular angle2.svg|Two circles (pink and cyan) that are internally tangent to both given circles and whose centers are collinear with the center of the given circles intersect at the angle 2θ. - -File:Steiner chain 9mer annular angle4.svg|Under inversion, these lines and circles become circles with the same intersection angle, 2θ. The gold circles intersect the two given circles at right angles, i.e., orthogonally. - -File:Steiner chain 6mer tangent circles.svg|The circles passing through the mutual tangent points of the Steiner-chain circles are orthogonal to the two given circles and intersect one another at multiples of the angle 2θ. - -File:Steiner chain 6mer orthogonal circles.svg|The circles passing through the tangent points of the Steiner-chain circles with the two given circles are orthogonal to the latter and intersect at multiples of the angle 2θ. - - - -Circle inversion transforms one Steiner chain into another with the same number of circles. - -In the transformed chain, the tangent points between adjacent circles of the Steiner chain all lie on a circle, namely the concentric circle midway between the two fixed concentric circles. Since tangencies and circles are preserved under inversion, this property of all tangencies lying on a circle is also true in the original chain. This property is also shared with the Pappus chain of circles, which can be construed as a special limiting case of the Steiner chain. - -In the transformed chain, the tangent lines from O to the Steiner chain circles are separated by equal angles. In the original chain, this corresponds to equal angles between the tangent circles that pass through the center of inversion used to transform the original circles into a concentric pair. - -In the transformed chain, the n lines connecting the pairs of tangent points of the Steiner circles with the concentric circles all pass through O, the common center. Similarly, the n lines tangent to each pair of adjacent circles in the Steiner chain also pass through O. Since lines through the center of inversion are invariant under inversion, and since tangency and concurrence are preserved under inversion, the 2n lines connecting the corresponding points in the original chain also pass through a single point, O. - -A Steiner chain between two non-intersecting circles can always be transformed into another Steiner chain of equally sized circles sandwiched between two concentric circles. 
Therefore, any such Steiner chain belongs to an infinite family of Steiner chains related by rotation of the transformed chain about O, the common center of the transformed bounding circles. - -The centers of the circles of a Steiner chain lie on a conic section. For example, if the smaller given circle lies within the larger, the centers lie on an ellipse. This is true for any set of circles that are internally tangent to one given circle and externally tangent to the other; such systems of circles appear in the Pappus chain, the problem of Apollonius, and the three-dimensional Soddy's hexlet. Similarly, if some circles of the Steiner chain are externally tangent to both given circles, their centers must lie on a hyperbola, whereas those that are internally tangent to both lie on a different hyperbola. - -The circles of the Steiner chain are tangent to two fixed circles, denoted here as α and β, where β is enclosed by α. Let the radii of these two circles be denoted as rα and rβ, respectively, and let their respective centers be the points A and B. Let the radius, diameter and center point of the kth circle of the Steiner chain be denoted as rk, dk and Pk, respectively. - -All the centers of the circles in the Steiner chain are located on a common ellipse, for the following reason. The sum of the distances from the center point of the kth circle of the Steiner chain to the two centers A and B of the fixed circles equals a constant - - - -\overline{\mathbf{P}_k\mathbf{A}} + \overline{\mathbf{P}_k \mathbf{B}} = (r_\alpha - r_k) + \left( r_\beta + r_k \right) = r_\alpha + r_\beta - - - -Thus, for all the centers of the circles of the Steiner chain, the sum of distances to A and B equals the same constant, rα + rβ. This defines an ellipse, whose two foci are the points A and B, the centers of the circles, α and β, that sandwich the Steiner chain of circles. - -The sum of distances to the foci equals twice the semi-major axis a of an ellipse; hence, - - - -2a = r_\alpha + r_\beta - - - -Let p equal the distance between the foci, A and B. Then, the eccentricity e is defined by 2 ae = p, or - - - -e = \frac{p}{2a} = \frac{p}{r_{\alpha} + r_{\beta}} - - - -From these parameters, the semi-minor axis b and the semi-latus rectum L can be determined - - - -b^2 = a^2 \left( 1 - e^2 \right) = a^2 - \frac{p^2}{4} - - - - - -L = \frac{b^2}{a} = a - \frac{p^2}{4a} - - - -Therefore, the ellipse can be described by an equation in terms of its distance d to one focus - - - -d = \frac{L}{1 - e \cos \theta} - - - -where θ is the angle with the line joining the two foci. - - - -Image:Steiner_chain_4mer_outside3.svg|Steiner chain with the two given circles shown in red and blue. - -Image:Steiner_chain_4mer_outside2.svg|Same set of circles, but with a different choice of given circles. - -Image:Steiner_chain_4mer_outside.svg|Same set of circles, but with yet another choice of given circles. - - - -If a Steiner chain has an even number of circles, then any two diametrically opposite circles in the chain can be taken as the two given circles of a new Steiner chain to which the original circles belong. If the original Steiner chain has n circles in m wraps, and the new chain has p circles in q wraps, then the equation holds - - - -\frac{m}{n} + \frac{p}{q} = \frac{1}{2}. - - - -A simple example occurs for Steiner chains of four circles (n = 4) and one wrap (m = 1). 
In this case, the given circles and the Steiner-chain circles are equivalent in that both types of circles are tangent to four others; more generally, Steiner-chain circles are tangent to four circles, but the two given circles are tangent to n circles. In this case, any pair of opposite members of the Steiner chain may be selected as the given circles of another Steiner chain that involves the original given circles. Since m = p = 1 and n = q = 4, Steiner's equation is satisfied:
-
-
-
-\frac{1}{4} + \frac{1}{4} = \frac{1}{2}.
-
-
-
-The simplest generalization of a Steiner chain is to allow the given circles to touch or intersect one another. In the former case, this corresponds to a Pappus chain, which has an infinite number of circles.
-
-Soddy's hexlet is a three-dimensional generalization of a Steiner chain of six circles. The centers of the six spheres (the hexlet) travel along the same ellipse as do the centers of the corresponding Steiner chain. The envelope of the hexlet spheres is a Dupin cyclide, the inversion of a torus. The six spheres are not only tangent to the inner and outer sphere, but also to two other spheres, centered above and below the plane of the hexlet centers.
-
-Multiple rings of Steiner chains are another generalization. An ordinary Steiner chain is obtained by inverting an annular chain of tangent circles bounded by two concentric circles. This may be generalized to inverting three or more concentric circles that sandwich annular chains of tangent circles.
-
-Hierarchical Steiner chains are yet another generalization. If the two given circles of an ordinary Steiner chain are nested, i.e., if one lies entirely within the other, then the larger given circle circumscribes the Steiner-chain circles. In a hierarchical Steiner chain, each circle of a Steiner chain is itself the circumscribing given circle of another Steiner chain within it; this process may be repeated indefinitely, forming a fractal.
diff --git a/wiki/wikipedia/372.txt b/wiki/wikipedia/372.txt
deleted file mode 100644
index 9f038ad7b3286f106ce094e4ec14f4ce1a1b965d..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/372.txt
+++ /dev/null
@@ -1,11 +0,0 @@
-In geometry, the hinge theorem states that if two sides of one triangle are congruent to two sides of another triangle, and the included angle of the first is larger than the included angle of the second, then the third side of the first triangle is longer than the third side of the second triangle. This theorem is actually Proposition 24 of Book 1 of Euclid's Elements (sometimes called the open mouth theorem).
-
-== Euclidean ==
-
-The hinge theorem holds in Euclidean spaces and more generally in simply connected non-positively curved space forms.
-
-It can also be extended from plane Euclidean geometry to higher dimension Euclidean spaces (e.g., to tetrahedra and more generally to simplices), as has been done for orthocentric tetrahedra (i.e., tetrahedra in which altitudes are concurrent) and more generally for orthocentric simplices (i.e., simplices in which altitudes are concurrent).
-
-The converse of the hinge theorem is also true: If the two sides of one triangle are congruent to two sides of another triangle, and the third side of the first triangle is greater than the third side of the second triangle, then the included angle of the first triangle is larger than the included angle of the second triangle.
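-
-In the Euclidean plane the theorem is a direct consequence of the law of cosines, and is easy to check numerically (an added Python sketch; the side lengths 3 and 4 are arbitrary):
-```python
-import math
-
-# Hinge theorem via the law of cosines: with sides a, b fixed, the third
-# side c = sqrt(a^2 + b^2 - 2ab cos(gamma)) grows as the included angle
-# gamma increases on (0, pi), since cos is decreasing there.
-a, b = 3.0, 4.0
-angles = [0.5, 1.0, 1.5, 2.0, 2.5]    # increasing included angles (radians)
-third = [math.sqrt(a*a + b*b - 2*a*b*math.cos(g)) for g in angles]
-print(third)
-print(third == sorted(third))         # True: larger angle, longer third side
-```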
- -In some textbooks, the theorem and its converse are written as the SAS Inequality Theorem and the SSS Inequality Theorem respectively. diff --git a/wiki/wikipedia/3720.txt b/wiki/wikipedia/3720.txt deleted file mode 100644 index 0526b82494d3a0f36d57baeef747e260bd80cd4d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3720.txt +++ /dev/null @@ -1,198 +0,0 @@ -In mathematics, the Riemann series theorem (also called the Riemann rearrangement theorem), named after 19th-century German mathematician Bernhard Riemann, says that if an infinite series of real numbers is conditionally convergent, then its terms can be arranged in a permutation so that the new series converges to an arbitrary real number, or diverges. This implies that a series of real numbers is absolutely convergent if and only if it is unconditionally convergent. - -As an example, the series 1 − 1 + 1/2 − 1/2 + 1/3 − 1/3 + ⋯ converges to 0 (for a sufficiently large number of terms, the partial sum gets arbitrarily near to 0); but replacing all terms with their absolute values gives 1 + 1 + 1/2 + 1/2 + 1/3 + 1/3 + ⋯, which sums to infinity. Thus the original series is conditionally convergent, and can be rearranged (by taking the first two positive terms followed by the first negative term, followed by the next two positive terms and then the next negative term, etc.) to give a series that converges to a different sum: 1 + 1/2 − 1 + 1/3 + 1/4 − 1/2 + ⋯ = ln 2. More generally, using this procedure with p positives followed by q negatives gives the sum ln(p/q). Other rearrangements give other finite sums or do not converge to any sum. - -A series $\sum_{n=1}^\infty a_n$ converges if there exists a value $\ell$ such that the sequence of the partial sums -$$ -(S_1, S_2, S_3, \ldots), \quad S_n = \sum_{k=1}^n a_k, -$$ - -converges to $\ell$. That is, for any ε > 0, there exists an integer N such that if n ≥ N, then -$$ -\left\vert S_n - \ell \right\vert \le \epsilon. -$$ - -A series converges conditionally if the series $\sum_{n=1}^\infty a_n$ converges but the series $\sum_{n=1}^\infty \left\vert a_n \right\vert$ diverges. - -A permutation is simply a bijection from the set of positive integers to itself. This means that if $\sigma$ is a permutation, then for any positive integer $b,$ there exists exactly one positive integer $a$ such that $\sigma (a) = b.$ In particular, if $x \ne y$, then $\sigma (x) \ne \sigma (y)$. - -Suppose that $(a_1, a_2, a_3, \ldots)$ is a sequence of real numbers, and that $ \sum_{n=1}^\infty a_n$ is conditionally convergent. Let $M$ be a real number. Then there exists a permutation $\sigma$ such that -$$ -\sum_{n=1}^\infty a_{\sigma (n)} = M. -$$ - -There also exists a permutation $\sigma$ such that -$$ -\sum_{n=1}^\infty a_{\sigma (n)} = \infty. -$$ - -The sum can also be rearranged to diverge to $-\infty$ or to fail to approach any limit, finite or infinite. - -The alternating harmonic series is a classic example of a conditionally convergent series: - -\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} - -is convergent, whereas - -\sum_{n=1}^\infty \left| \frac{(-1)^{n+1}}{n} \right| = \sum_{n=1}^\infty \frac{1}{n} - -is the ordinary harmonic series, which diverges. Although in standard presentation the alternating harmonic series converges to ln(2), its terms can be arranged to converge to any number, or even to diverge. One instance of this is as follows. 
Begin with the series written in the usual order,
-$$
-1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots
-$$
-
-and rearrange the terms:
-$$
-1 - \frac{1}{2} - \frac{1}{4} + \frac{1}{3} - \frac{1}{6} - \frac{1}{8} + \frac{1}{5} - \frac{1}{10} - \frac{1}{12} + \cdots
-$$
-
-where the pattern is: the first two terms are 1 and −1/2, whose sum is 1/2. The next term is −1/4.
-
-The next two terms are 1/3 and −1/6, whose sum is 1/6. The next term is −1/8.
-
-The next two terms are 1/5 and −1/10, whose sum is 1/10.
-
-In general, the sum is composed of blocks of three:
-$$
-\frac{1}{2k - 1} - \frac{1}{2(2k - 1)} - \frac{1}{4k},\quad k = 1, 2, \dots.
-$$
-
-This is indeed a rearrangement of the alternating harmonic series: every odd integer occurs once positively, and the even integers occur once each, negatively (half of them as multiples of 4, the other half as twice odd integers). Since
-$$
-\frac{1}{2k - 1} - \frac{1}{2(2k - 1)} = \frac{1}{2(2k - 1)},
-$$
-
-this series can in fact be written:
-
-\begin{align}
-
-&\frac{1}{2} - \frac{1}{4} + \frac{1}{6} - \frac{1}{8} + \frac{1}{10} + \cdots + \frac{1}{2(2k - 1)} - \frac{1}{2(2k)} + \cdots \\
-
-={}& \frac{1}{2}\left(1 - \frac{1}{2} + \frac{1}{3} - \cdots\right) = \frac{1}{2} \ln(2)
-
-\end{align}
-
-which is half the usual sum.
-
-An efficient way to recover and generalize the result of the previous section is to use the fact that
-$$
-1 + {1 \over 2} + {1 \over 3} + \cdots + {1 \over n} = \gamma + \ln n + o(1),
-$$
-
-where γ is the Euler–Mascheroni constant, and where the notation o(1) denotes a quantity that depends upon the current variable (here, the variable is n) in such a way that this quantity goes to 0 when the variable tends to infinity.
-
-It follows that the sum of q even terms satisfies
-$$
-{1 \over 2} + {1 \over 4} + {1 \over 6} + \cdots + {1 \over 2 q} = {1 \over 2} \gamma + {1 \over 2} \ln q + o(1),
-$$
-
-and by taking the difference, one sees that the sum of p odd terms satisfies
-$$
-{1} + {1 \over 3} + {1 \over 5} + \cdots + {1 \over 2 p - 1} = {1 \over 2} \gamma + {1 \over 2} \ln p + \ln 2 + o(1).
-$$
-
-Suppose that two positive integers a and b are given, and that a rearrangement of the alternating harmonic series is formed by taking, in order, a positive terms from the alternating harmonic series, followed by b negative terms, and repeating this pattern ad infinitum (the alternating series itself corresponds to a = b = 1, the example in the preceding section corresponds to a = 1, b = 2):
-$$
-{1} + {1 \over 3} + \cdots + {1 \over 2 a - 1} - {1 \over 2} - {1 \over 4} - \cdots - {1 \over 2 b} + {1 \over 2 a + 1} + \cdots + {1 \over 4 a - 1} - {1 \over 2b + 2} - \cdots
-$$
-
-Then the partial sum of order (a+b)n of this rearranged series contains p = an positive odd terms and q = bn negative even terms, hence
-$$
-S_{(a+b)n} = {1 \over 2} \ln p + \ln 2 - {1 \over 2} \ln q + o(1) = {1 \over 2} \ln(a/b) + \ln 2 + o(1).
-$$
-
-It follows that the sum of this rearranged series is
-$$
-{1 \over 2} \ln(a/b) + \ln 2 = \ln\left( 2 \sqrt{a/b} \right).
-$$
-
-Suppose now that, more generally, a rearranged series of the alternating harmonic series is organized in such a way that the ratio $p_n/q_n$ between the number of positive and negative terms in the partial sum of order n tends to a positive limit r.
Then, the sum of such a rearrangement will be
-$$
-\ln\left( 2 \sqrt{r} \right),
-$$
-
-and this shows that any real number x can be obtained as the sum of a rearranged series of the alternating harmonic series: it suffices to form a rearrangement for which the limit r is equal to $e^{2x}/4$.
-
-For simplicity, this proof assumes first that an ≠ 0 for every n. The general case requires a simple modification, given below. Recall that a conditionally convergent series of real terms has both infinitely many negative terms and infinitely many positive terms. First, define two quantities, $a_n^+$ and $a_n^-$ by:
-$$
-a_{n}^{+} = \frac{a_n + |a_n|}{2}, \quad a_{n}^{-} = \frac{a_n - |a_n|}{2}.
-$$
-
-That is, the series $\sum_{n=1}^\infty a_n^{+}$ includes all an positive, with all negative terms replaced by zeroes, and the series $\sum_{n=1}^\infty a_n^{-}$ includes all an negative, with all positive terms replaced by zeroes. Since $\sum_{n=1}^\infty a_n$ is conditionally convergent, both the positive and the negative series diverge. Let M be a positive real number. Take, in order, just enough positive terms $a_{n}^{+}$ so that their sum exceeds M. Suppose we require p terms – then the following statement is true:
-$$
-\sum_{n=1}^{p-1} a_{n}^{+} \leq M < \sum_{n=1}^{p} a_{n}^{+}.
-$$
-
-This is possible for any M > 0 because the partial sums of $a_{n}^{+}$ tend to $+\infty$. Discarding the zero terms one may write
-$$
-\sum_{n=1}^{p} a_{n}^{+} = a_{\sigma(1)} + \cdots + a_{\sigma(m_1)}, \quad a_{\sigma(j)} > 0, \ \ \sigma(1) < \dots < \sigma(m_1) = p.
-$$
-
-Now we add just enough negative terms $a_{n}^{-}$, say q of them, so that the resulting sum is less than M. This is always possible because the partial sums of $a_{n}^{-}$ tend to $-\infty$. Now we have:
-$$
-\sum_{n=1}^{p} a_{n}^{+} + \sum_{n=1}^{q} a_{n}^{-} < M \leq \sum_{n=1}^{p} a_{n}^{+} + \sum_{n=1}^{q - 1} a_{n}^{-}.
-$$
-
-Again, one may write
-$$
-\sum_{n=1}^{p} a_{n}^{+} + \sum_{n=1}^{q} a_{n}^{-} = a_{\sigma(1)} + \cdots + a_{\sigma(m_1)} + a_{\sigma(m_1+1)} + \cdots + a_{\sigma(n_1)},
-$$
-
-with
-$$
- \sigma(m_1+1) < \dots < \sigma(n_1) = q.
-$$
-
-The map σ is injective, and 1 belongs to the range of σ, either as image of 1 (if a1 > 0), or as image of m1 + 1 (if a1 < 0). Now repeat the process of adding just enough positive terms to exceed M, starting with n = p + 1, and then adding just enough negative terms to be less than M, starting with n = q + 1. Extend σ in an injective manner, in order to cover all terms selected so far, and observe that a2 must have been selected now or before, thus 2 belongs to the range of this extension. The process will have infinitely many such "changes of direction". One eventually obtains a rearrangement Σaσ(n). After the first change of direction, each partial sum of Σaσ(n) differs from M by at most the absolute value $a_{p_j}^{+}$ or $|a_{q_j}^{-}|$ of the term that appeared at the latest change of direction. But Σan converges, so as n tends to infinity, each of an, $a_{p_j}^{+}$ and $a_{q_j}^{-}$ goes to 0. Thus, the partial sums of Σaσ(n) tend to M, so the following is true:
-$$
-\sum_{n=1}^\infty a_{\sigma(n)} = M.
-$$
-
-The same method can be used to show convergence to M negative or zero.
-
-One can now give a formal inductive definition of the rearrangement σ, that works in general. For every integer k ≥ 0, a finite set Ak of integers and a real number Sk are defined.
For every k > 0, the induction defines the value σ(k), the set Ak consists of the values σ(j) for j ≤ k and Sk is the partial sum of the rearranged series. The definition is as follows: - -* For k = 0, the induction starts with A0 empty and S0 = 0. - -* For every k ≥ 0, there are two cases: if Sk ≤ M, then σ(k+1) is the smallest integer n ≥ 1 such that n is not in Ak and an ≥ 0; if Sk > M, then σ(k+1) is the smallest integer n ≥ 1 such that n is not in Ak and an < 0. In both cases one sets A_{k+1} = A_k \cup \{\sigma(k+1)\} ; \quad S_{k+1} = S_k + a_{\sigma(k+1)}. - -It can be proved, using the reasonings above, that σ is a permutation of the integers and that the permuted series converges to the given real number M. - -Let $ \sum_{i=1}^\infty a_i$ be a conditionally convergent series. The following is a proof that there exists a rearrangement of this series that tends to $\infty$ (a similar argument can be used to show that $-\infty$ can also be attained). - -Let $p_1 < p_2 < p_3 < \cdots$ be the sequence of indexes such that each $a_{p_i}$ is positive, and define $n_1 < n_2 < n_3 < \cdots$ to be the indexes such that each $a_{n_i}$ is negative (again assuming that $a_i$ is never 0). Each natural number will appear in exactly one of the sequences $(p_i)$ and $(n_i).$ - -Let $b_1$ be the smallest natural number such that -$$ -\sum_{i=1}^{b_1} a_{p_i} \geq |a_{n_1}| + 1. -$$ - -Such a value must exist since $(a_{p_i}),$ the subsequence of positive terms of $(a_i),$ diverges. Similarly, let $b_2$ be the smallest natural number such that: -$$ -\sum_{i=b_1+1}^{b_2} a_{p_i} \geq |a_{n_2}| + 1, -$$ - -and so on. This leads to the permutation -$$ -(\sigma(1),\sigma(2),\sigma(3),\ldots) = (p_1, p_2, \ldots, p_{b_1}, n_1, p_{b_1+1}, p_{b_1+2}, \ldots, p_{b_2}, n_2, \ldots). -$$ - -And the rearranged series, $\sum_{i=1}^\infty a_{\sigma(i)},$ then diverges to $\infty$. - -From the way the $b_i$ were chosen, it follows that the sum of the first $b_1+1$ terms of the rearranged series is at least 1 and that no partial sum in this group is less than 0. Likewise, the sum of the next $b_2 - b_1 + 1$ terms is also at least 1, and no partial sum in this group is less than 0 either. Continuing, this suffices to prove that this rearranged sum does indeed tend to $\infty.$ - -In fact, if $\sum _{n=1}^{\infty} a_{n}$ is conditionally convergent, then there is a rearrangement of it such that the partial sums of the rearranged series form a dense subset of $\Reals.$ - -In Riemann's theorem, the permutation used for rearranging a conditionally convergent series to obtain a given value in $\R\cup\{\infty,-\infty\}$ may have arbitrarily many non-fixed points, i.e. all the indexes of the terms of the series may be rearranged. - -One may ask if it is possible to rearrange only the indexes in a smaller set so that a conditionally convergent series converges to an arbitrarily chosen real number or diverges to (positive or negative) infinity. The answer of this question is yes but only to smaller values: Sierpiński proved that rearranging only the positive terms one can obtain a series converging to any prescribed value less than or equal to the sum of the original series, but larger values in general can not be attained. 
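-
-The closed-form sums computed above are easy to confirm numerically. The following Python sketch (an added illustration; the choice a = 4, b = 1 is arbitrary) rearranges the alternating harmonic series with a positive terms followed by b negative terms and compares the partial sums with ln(2√(a/b)):
-```python
-import math
-from itertools import count
-
-def rearranged_sum(a, b, blocks):
-    # Take a positive terms 1, 1/3, 1/5, ... then b negative terms
-    # -1/2, -1/4, ... and repeat, for the given number of blocks.
-    pos, neg = count(1, 2), count(2, 2)   # odd / even denominators
-    s = 0.0
-    for _ in range(blocks):
-        for _ in range(a):
-            s += 1.0 / next(pos)
-        for _ in range(b):
-            s -= 1.0 / next(neg)
-    return s
-
-a, b = 4, 1
-print(rearranged_sum(a, b, 200_000))    # ~ 1.38629...
-print(math.log(2 * math.sqrt(a / b)))   # limit ln(2*sqrt(4/1)) = ln 4
-```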
- 
-This question has also been explored using the notion of ideals: for instance, Wilczyński proved that it is sufficient to consider rearrangements whose set of non-fixed indices belongs to the ideal of asymptotic density zero (that is, it is sufficient to rearrange a set of indices of asymptotic density zero). Filipów and Szuca proved that other ideals also have this property.
-
-Given a converging series Σan of complex numbers, several cases can occur when considering the set of possible sums for all series Σaσ(n) obtained by rearranging (permuting) the terms of that series:
-
-* the series Σan may converge unconditionally; then, all rearranged series converge, and have the same sum: the set of sums of the rearranged series reduces to one point;
-
-* the series Σan may fail to converge unconditionally; if S denotes the set of sums of those rearranged series that converge, then, either the set S is a line L in the complex plane C, of the form L = \{a + t b : t \in \R \}, \quad a, b \in \Complex, \ b \ne 0, or the set S is the whole complex plane C.
-
-More generally, given a converging series of vectors in a finite-dimensional real vector space E, the set of sums of converging rearranged series is an affine subspace of E.
diff --git a/wiki/wikipedia/3721.txt b/wiki/wikipedia/3721.txt
deleted file mode 100644
index 971776247c2d972c36bf2fdbd5b5978ec50f8f48..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3721.txt
+++ /dev/null
@@ -1,210 +0,0 @@
-In probability theory, the central limit theorem (CLT) establishes that, in many situations, when independent random variables are summed up, their properly normalized sum tends toward a normal distribution (informally a bell curve) even if the original variables themselves are not normally distributed. The theorem is a key concept in probability theory because it implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions. This theorem has seen many changes during the formal development of probability theory. Previous versions of the theorem date back to 1811, but in its modern general form, this fundamental result in probability theory was precisely stated as late as 1920, thereby serving as a bridge between classical and modern probability theory.
-
-If $X_1, X_2 , \dots, X_n$ are $n$ random samples drawn from a population with overall mean $\mu$ and finite variance $\sigma^2$, and if $\bar{X}_n$ is the sample mean, then the limiting distribution of $ Z_n = \sqrt{n} \left( \frac{\bar{X}_n-\mu}{\sigma} \right)$ as $n \to \infty$ is the standard normal distribution.
-
-For example, suppose that a sample is obtained containing many observations, each observation being randomly generated in a way that does not depend on the values of the other observations, and that the arithmetic mean of the observed values is computed. If this procedure is performed many times, the central limit theorem says that the probability distribution of the average will closely approximate a normal distribution. A simple example of this is that if one flips a coin many times, the probability of getting a given number of heads will approach a normal distribution, with the mean equal to half the total number of flips. At the limit of an infinite number of flips, it will equal a normal distribution.
-
-The central limit theorem has several variants. In its common form, the random variables must be identically distributed.
In variants, convergence of the mean to the normal distribution also occurs for non-identical distributions or for non-independent observations, if they comply with certain conditions. - -The earliest version of this theorem, that the normal distribution may be used as an approximation to the binomial distribution, is the de Moivre–Laplace theorem. - -Let $\{X_1, \ldots, X_n\}$ be a random sample of size $n$ — that is, a sequence of independent and identically distributed (i.i.d.) random variables drawn from a distribution of expected value given by $\mu$ and finite variance given by $\sigma^2$. Suppose we are interested in the sample average -$$ -\bar{X}_n \equiv \frac{X_1 + \cdots + X_n}{n} -$$ - -of these random variables. By the law of large numbers, the sample averages converge almost surely (and therefore also converge in probability) to the expected value $\mu$ as $n\to\infty$. The classical central limit theorem describes the size and the distributional form of the stochastic fluctuations around the deterministic number $\mu$ during this convergence. More precisely, it states that as $n$ gets larger, the distribution of the difference between the sample average $\bar{X}_n$ and its limit $\mu$, when multiplied by the factor $\sqrt{n}$ (that is $\sqrt{n}(\bar{X}_n - \mu)$) approximates the normal distribution with mean 0 and variance $\sigma^2$. For large enough n, the distribution of $\bar{X}_n$ is close to the normal distribution with mean $\mu$ and variance $\sigma^2/n$. The usefulness of the theorem is that the distribution of $\sqrt{n}(\bar{X}_n - \mu)$ approaches normality regardless of the shape of the distribution of the individual $X_i$. Formally, the theorem can be stated as follows: - -
    Lindeberg–Lévy CLT. Suppose $\{X_1, \ldots, X_n\}$ is a sequence of i.i.d. random variables with $\mathbb{E}[X_i] = \mu$ and $\operatorname{Var}[X_i] = \sigma^2 < \infty$. Then as $n$ approaches infinity, the random variables $\sqrt{n}(\bar{X}_n - \mu)$ converge in distribution to a normal $\mathcal{N}(0, \sigma^2)$: -$$ -\sqrt{n}\left(\bar{X}_n - \mu\right)\ \xrightarrow{d}\ \mathcal{N}\left(0,\sigma^2\right) . -$$
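-
-As a concrete illustration (a Python sketch added here, assuming NumPy; the exponential distribution, sample size and seed are arbitrary choices), a Monte Carlo experiment shows the standardized sample means approaching normality:
-```python
-import numpy as np
-
-rng = np.random.default_rng(0)
-
-# Sample means of n i.i.d. exponential variables (mu = 1, sigma = 1),
-# a decidedly non-normal distribution, standardized as in the theorem.
-n, reps = 500, 20_000
-means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
-z = np.sqrt(n) * (means - 1.0) / 1.0
-
-# Empirical quantiles are already close to those of N(0, 1).
-print(np.quantile(z, [0.025, 0.5, 0.975]))   # roughly [-1.96, 0.00, 1.96]
-```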
    - -In the case $\sigma > 0$, convergence in distribution means that the cumulative distribution functions of $\sqrt{n}(\bar{X}_n - \mu)$ converge pointwise to the cdf of the $\mathcal{N}(0, \sigma^2)$ distribution: for every real number $z$, -$$ -\lim_{n\to\infty} \mathbb{P}\left[\sqrt{n}(\bar{X}_n-\mu) \le z\right] = \lim_{n\to\infty} \mathbb{P}\left[\frac{\sqrt{n}(\bar{X}_n-\mu)}{\sigma } \le \frac{z}{\sigma}\right]= \Phi\left(\frac{z}{\sigma}\right) , -$$ - -where $\Phi(z)$ is the standard normal cdf evaluated at $z$. The convergence is uniform in $z$ in the sense that -$$ -\lim_{n\to\infty}\sup_{z\in\R}\left|\mathbb{P}\left[\sqrt{n}(\bar{X}_n-\mu) \le z\right] - \Phi\left(\frac{z}{\sigma}\right)\right| = 0~, -$$ - -where $\sup$ denotes the least upper bound (or supremum) of the set. - -The theorem is named after Russian mathematician Aleksandr Lyapunov. In this variant of the central limit theorem the random variables $X_i$ have to be independent, but not necessarily identically distributed. The theorem also requires that random variables $\left| X_i\right|$ have moments of some order $(2+\delta)$, and that the rate of growth of these moments is limited by the Lyapunov condition given below. - -
    Lyapunov CLT. Suppose $\{X_1, \ldots, X_n\}$ is a sequence of independent random variables, each with finite expected value $\mu_i$ and variance $\sigma_i^2$. Define -$$ -s_n^2 = \sum_{i=1}^n \sigma_i^2 . -$$ - -If for some $\delta > 0$, Lyapunov’s condition -$$ -\lim_{n\to\infty} \frac{1}{s_{n}^{2+\delta}} \sum_{i=1}^{n} \mathbb{E}\left[\left|X_{i} - \mu_{i}\right|^{2+\delta}\right] = 0 -$$ - -is satisfied, then a sum of $\frac{X_i - \mu_i}{s_n}$ converges in distribution to a standard normal random variable, as $n$ goes to infinity: -$$ -\frac{1}{s_n}\sum_{i=1}^{n} \left(X_i - \mu_i\right) \ \xrightarrow{d}\ \mathcal{N}(0,1) . -$$
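-
-For a concrete instance (an added Python sketch; the Uniform(0,1) summands are an arbitrary choice), taking $\delta = 1$ the ratio in Lyapunov's condition can be computed in closed form and seen to vanish:
-```python
-import math
-
-# Lyapunov ratio with delta = 1 for independent Uniform(0,1) terms:
-# sigma_i^2 = 1/12 and E|X_i - mu_i|^3 = 1/32, so the ratio equals
-# n * (1/32) / (n/12)**1.5, which decays like 1/sqrt(n).
-for n in [10, 100, 1_000, 10_000]:
-    s_n = math.sqrt(n / 12)
-    print(n, n * (1 / 32) / s_n**3)   # -> 0 as n grows, so the CLT applies
-```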
    - -In practice it is usually easiest to check Lyapunov's condition for $\delta = 1$. - -If a sequence of random variables satisfies Lyapunov's condition, then it also satisfies Lindeberg's condition. The converse implication, however, does not hold. - -In the same setting and with the same notation as above, the Lyapunov condition can be replaced with the following weaker one (from Lindeberg in 1920). - -Suppose that for every $\varepsilon > 0$ -$$ - \lim_{n \to \infty} \frac{1}{s_n^2}\sum_{i = 1}^{n} \mathbb{E}\left[(X_i - \mu_i)^2 \cdot \mathbf{1}_{\left\{X_i : \left| X_i - \mu_i \right| > \varepsilon s_n \right\}} \right] = 0 -$$ - -where $\mathbf{1}_{\{\ldots\}}$ is the indicator function. Then the distribution of the standardized sums -$$ -\frac{1}{s_n}\sum_{i = 1}^n \left( X_i - \mu_i \right) -$$ - -converges towards the standard normal distribution $\mathcal{N}(0, 1)$. - -Proofs that use characteristic functions can be extended to cases where each individual $\mathbf{X}_i$ is a random vector in $\R^k$, with mean vector $\boldsymbol\mu = \mathbb{E}[\mathbf{X}_i]$ and covariance matrix $\mathbf{\Sigma}$ (among the components of the vector), and these random vectors are independent and identically distributed. Summation of these vectors is being done component-wise. The multidimensional central limit theorem states that when scaled, sums converge to a multivariate normal distribution. - -Let -$$ -\mathbf{X}_i = \begin{bmatrix} X_{i(1)} \\ \vdots \\ X_{i(k)} \end{bmatrix} -$$ - -be the k-vector. The bold in $\mathbf{X}_i$ means that it is a random vector, not a random (univariate) variable. Then the sum of the random vectors will be -$$ -\begin{bmatrix} X_{1(1)} \\ \vdots \\ X_{1(k)} \end{bmatrix} + \begin{bmatrix} X_{2(1)} \\ \vdots \\ X_{2(k)} \end{bmatrix} + \cdots + \begin{bmatrix} X_{n(1)} \\ \vdots \\ X_{n(k)} \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{n} \left [ X_{i(1)} \right ] \\ \vdots \\ \sum_{i=1}^{n} \left [ X_{i(k)} \right ] \end{bmatrix} = \sum_{i=1}^{n} \mathbf{X}_i -$$ - -and the average is -$$ - \frac{1}{n} \sum_{i=1}^{n} \mathbf{X}_i= \frac{1}{n}\begin{bmatrix} \sum_{i=1}^{n} X_{i(1)} \\ \vdots \\ \sum_{i=1}^{n} X_{i(k)} \end{bmatrix} = \begin{bmatrix} \bar X_{i(1)} \\ \vdots \\ \bar X_{i(k)} \end{bmatrix} = \mathbf{\bar X_n} -$$ - -and therefore -$$ -\frac{1}{\sqrt{n}} \sum_{i=1}^{n} \left[ \mathbf{X}_i - \mathbb{E} \left( X_i \right) \right] = \frac{1}{\sqrt{n}}\sum_{i=1}^{n} ( \mathbf{X}_i - \boldsymbol\mu ) = \sqrt{n}\left(\overline{\mathbf{X}}_n - \boldsymbol\mu\right)~. 
-$$ - -The multivariate central limit theorem states that -$$ -\sqrt{n}\left( \overline{\mathbf{X}}_n - \boldsymbol\mu \right) \xrightarrow{D}\ \mathcal{N}_k(0,\boldsymbol\Sigma) -$$ - -where the covariance matrix $\boldsymbol{\Sigma}$ is equal to - - \boldsymbol\Sigma=\begin{bmatrix} - -{\operatorname{Var} \left (X_{1(1)} \right)} & \operatorname{Cov} \left (X_{1(1)},X_{1(2)} \right) & \operatorname{Cov} \left (X_{1(1)},X_{1(3)} \right) & \cdots & \operatorname{Cov} \left (X_{1(1)},X_{1(k)} \right) \\ - -\operatorname{Cov} \left (X_{1(2)},X_{1(1)} \right) & \operatorname{Var} \left( X_{1(2)} \right) & \operatorname{Cov} \left(X_{1(2)},X_{1(3)} \right) & \cdots & \operatorname{Cov} \left(X_{1(2)},X_{1(k)} \right) \\ - -\operatorname{Cov}\left (X_{1(3)},X_{1(1)} \right) & \operatorname{Cov} \left (X_{1(3)},X_{1(2)} \right) & \operatorname{Var} \left (X_{1(3)} \right) & \cdots & \operatorname{Cov} \left (X_{1(3)},X_{1(k)} \right) \\ - -\vdots & \vdots & \vdots & \ddots & \vdots \\ - -\operatorname{Cov} \left (X_{1(k)},X_{1(1)} \right) & \operatorname{Cov} \left (X_{1(k)},X_{1(2)} \right) & \operatorname{Cov} \left (X_{1(k)},X_{1(3)} \right) & \cdots & \operatorname{Var} \left (X_{1(k)} \right) \\ - -\end{bmatrix}~. - -The rate of convergence is given by the following Berry–Esseen type result: - -
    Theorem. Let $X_1,\dots, X_n$ be independent $\R^d$-valued random vectors, each having mean zero. Write $S =\sum^n_{i=1}X_i$ and assume $\Sigma = \operatorname{Cov}[S]$ is invertible. Let $Z \sim \mathcal{N}(0,\Sigma)$ be a $d$-dimensional Gaussian with the same mean and same covariance matrix as $S$. Then for all convex sets $U \subseteq \R^d$, -$$ -\left|\mathbb{P}[S \in U] - \mathbb{P}[Z \in U]\right| \le C d^{1/4} \gamma~, -$$ - -where $C$ is a universal constant, $\gamma = \sum^n_{i=1} \mathbb{E} \left[\left\| \Sigma^{-1/2}X_i\right\|^3_2\right]$, and $\|\cdot\|_2$ denotes the Euclidean norm on $\R^d$. - -
-
-It is unknown whether the factor $d^{1/4}$ is necessary.
-
-The central limit theorem states that the sum of a number of independent and identically distributed random variables with finite variances will tend to a normal distribution as the number of variables grows. A generalization due to Gnedenko and Kolmogorov states that the sum of a number of random variables with power-law tail (Paretian tail) distributions decreasing as $|x|^{-\alpha-1}$ where $0 < \alpha < 2$ (and therefore having infinite variance) will tend to a stable distribution $f(x; \alpha, 0, c, 0)$ as the number of summands grows. If $\alpha > 2$ then the sum converges to a stable distribution with stability parameter equal to 2, i.e. a Gaussian distribution.
-
-A useful generalization of a sequence of independent, identically distributed random variables is a mixing random process in discrete time; "mixing" means, roughly, that random variables temporally far apart from one another are nearly independent. Several kinds of mixing are used in ergodic theory and probability theory. See especially strong mixing (also called α-mixing) defined by $\alpha(n) \to 0$ where $\alpha(n)$ is the so-called strong mixing coefficient.
-
-A simplified formulation of the central limit theorem under strong mixing is:
-
-
    Theorem. Suppose that $\{X_1, \ldots, X_n\}$ is stationary and $\alpha$-mixing with $\alpha_n = O\left(n^{-5}\right) $ and that $\mathbb{E}[X_n] = 0$ and $\mathbb{E}[{X_n}^{12}] < \infty$. Denote $S_n = X_1 + \cdots + X_n$, then the limit -$$ - \sigma^2 = \lim_n \frac{\mathbb{E}\left(S_n^2\right)}{n} -$$ - -exists, and if $\sigma \ne 0$ then $\frac{S_n}{\sigma\sqrt{n}}$ converges in distribution to $ \mathcal{N}(0, 1)$.
    - -In fact, -$$ -\sigma^2 = \mathbb{E}\left(X_1^2\right) + 2 \sum_{k=1}^{\infty} \mathbb{E}\left(X_1 X_{1+k}\right), -$$ - -where the series converges absolutely. - -The assumption $\sigma \ne 0$ cannot be omitted, since the asymptotic normality fails for $X_n = Y_n - Y_{n-1}$ where $Y_n$ are another stationary sequence. - -There is a stronger version of the theorem: the assumption $\mathbb{E}\left[{X_n}^{12}\right] < \infty$ is replaced with $\mathbb{E}\left[{\left|X_n\right|}^{2+\delta}\right] < \infty$, and the assumption $\alpha_n = O\left(n^{-5}\right) $ is replaced with -$$ -\sum_n \alpha_n^{\frac\delta{2(2+\delta)}} < \infty. -$$ - -Existence of such $\delta > 0$ ensures the conclusion. For encyclopedic treatment of limit theorems under mixing conditions see . - -
    Theorem. Let a martingale $M_n$ satisfy - -* $ \frac1n \sum_{k=1}^n \mathbb{E} \left(\left(M_k-M_{k-1}\right)^2 | M_1,\dots,M_{k-1}\right) \to 1 $ in probability as n → ∞, - -* for every ε > 0, $ \frac1n \sum_{k=1}^n \mathbb{E} \left( \left(M_k-M_{k-1}\right)^2; |M_k-M_{k-1}| > \varepsilon \sqrt n \right) \to 0 $ as n → ∞, - -then $\frac{M_n}{\sqrt{n}}$ converges in distribution to $N(0, 1)$ as $n \to \infty$.
    - -Caution: The restricted expectation $\mathbb{E}[X; A]$ should not be confused with the conditional expectation $\mathbb{E}[X \mid A] = \frac{\mathbb{E}[X; A]}{\mathbb{P}(A)}$. - -The central limit theorem has a proof using characteristic functions. It is similar to the proof of the (weak) law of large numbers. - -Assume $\{X_1, \ldots, X_n\}$ are independent and identically distributed random variables, each with mean $\mu$ and finite variance $\sigma^2$. The sum $X_1 + \cdots + X_n$ has mean $n\mu$ and variance $n\sigma^2$. Consider the random variable -$$ -Z_n = \frac{X_1+\cdots+X_n - n \mu}{\sqrt{n \sigma^2}} = \sum_{i=1}^n \frac{X_i - \mu}{\sqrt{n \sigma^2}} = \sum_{i=1}^n \frac{1}{\sqrt{n}} Y_i, -$$ - -where in the last step we defined the new random variables $Y_i = \frac{X_i - \mu}{\sigma} $, each with zero mean and unit variance ($\operatorname{var}(Y) = 1$). The characteristic function of $Z_n$ is given by - -\varphi_{Z_n}\!(t) = \varphi_{\sum_{i=1}^n {\frac{1}{\sqrt{n}}Y_i}}\!(t) \ =\ \varphi_{Y_1}\!\!\left(\frac{t}{\sqrt{n}}\right) \varphi_{Y_2}\!\! \left(\frac{t}{\sqrt{n}}\right)\cdots \varphi_{Y_n}\!\! \left(\frac{t}{\sqrt{n}}\right) \ =\ \left[\varphi_{Y_1}\!\!\left(\frac{t}{\sqrt{n}}\right)\right]^n, - - - -where in the last step we used the fact that all of the $Y_i$ are identically distributed. The characteristic function of $Y_1$ is, by Taylor's theorem, -$$ -\varphi_{Y_1}\!\left(\frac{t}{\sqrt{n}}\right) = 1 - \frac{t^2}{2n} + o\!\left(\frac{t^2}{n}\right), \quad \left(\frac{t}{\sqrt{n}}\right) \to 0 -$$ - -where $o(t^2 / n)$ is "little o notation" for some function of $t$ that goes to zero more rapidly than $t^2 / n$. By the limit of the exponential function ($e^x = \lim_{n \to \infty} \left(1 + \frac{x}{n}\right)^n$), the characteristic function of $Z_n$ equals -$$ -\varphi_{Z_n}(t) = \left(1 - \frac{t^2}{2n} + o\left(\frac{t^2}{n}\right) \right)^n \rightarrow e^{-\frac{1}{2} t^2}, \quad n \to \infty. -$$ - -All of the higher order terms vanish in the limit $n\to\infty$. The right hand side equals the characteristic function of a standard normal distribution $N(0, 1)$, which implies through Lévy's continuity theorem that the distribution of $Z_n$ will approach $N(0,1)$ as $n\to\infty$. Therefore, the sample average -$$ -\bar{X}_n = \frac{X_1+\cdots+X_n}{n} -$$ - -is such that -$$ -\frac{\sqrt{n}}{\sigma}(\bar{X}_n - \mu) -$$ - -converges to the normal distribution $N(0, 1)$, from which the central limit theorem follows. - -The central limit theorem gives only an asymptotic distribution. As an approximation for a finite number of observations, it provides a reasonable approximation only when close to the peak of the normal distribution; it requires a very large number of observations to stretch into the tails. - -The convergence in the central limit theorem is uniform because the limiting cumulative distribution function is continuous. If the third central moment $\operatorname{E}\left[(X_1 - \mu)^3\right]$ exists and is finite, then the speed of convergence is at least on the order of $1/\sqrt{n}$ (see Berry–Esseen theorem). Stein's method can be used not only to prove the central limit theorem, but also to provide bounds on the rates of convergence for selected metrics. - -The convergence to the normal distribution is monotonic, in the sense that the entropy of $Z_n$ increases monotonically to that of the normal distribution. - -Dutch mathematician Henk Tijms writes: Pólya referred to the theorem as "central" due to its importance in probability theory. 
According to Le Cam, the French school of probability interprets the word central in the sense that "it describes the behaviour of the centre of the distribution as opposed to its tails". The term itself goes back to the title of Pólya's 1920 paper, On the central limit theorem of calculus of probability and the problem of moments.
-
-A thorough account of the theorem's history, detailing Laplace's foundational work, as well as Cauchy's, Bessel's and Poisson's contributions, is provided by Hald. Two historical accounts, one covering the development from Laplace to Cauchy, the second the contributions by von Mises, Pólya, Lindeberg, Lévy, and Cramér during the 1920s, are given by Hans Fischer. Le Cam describes a period around 1935. Bernstein presents a historical discussion focusing on the work of Pafnuty Chebyshev and his students Andrey Markov and Aleksandr Lyapunov that led to the first proofs of the CLT in a general setting.
-
-A curious footnote to the history of the central limit theorem is that a proof of a result similar to the 1922 Lindeberg CLT was the subject of Alan Turing's 1934 Fellowship Dissertation for King's College at the University of Cambridge. Only after submitting the work did Turing learn it had already been proved. Consequently, Turing's dissertation was not published.
diff --git a/wiki/wikipedia/3722.txt b/wiki/wikipedia/3722.txt
deleted file mode 100644
index 0064b85f0cfb9cfd4718d603babde824eb2e6180..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3722.txt
+++ /dev/null
@@ -1,23 +0,0 @@
-In stochastic calculus, Tanaka's formula states that
-$$
-|B_t| = \int_0^t \sgn(B_s) dB_s + L_t
-$$
-
-where Bt is the standard Brownian motion, sgn denotes the sign function
-$$
-\sgn (x) = \begin{cases} +1, & x > 0; \\0,& x=0 \\-1, & x < 0. \end{cases}
-$$
-
-and Lt is its local time at 0 (the local time spent by B at 0 before time t) given by the L2-limit
-$$
-L_{t} = \lim_{\varepsilon \downarrow 0} \frac1{2 \varepsilon} | \{ s \in [0, t] | B_{s} \in (- \varepsilon, + \varepsilon) \} |.
-$$
-
-Tanaka's formula is the explicit Doob–Meyer decomposition of the submartingale |Bt| into the martingale part (the integral on the right-hand side, which is a Brownian motion), and a continuous increasing process (local time). It can also be seen as the analogue of Itō's lemma for the (nonsmooth) absolute value function $f(x)=|x|$, with $ f'(x) = \sgn(x)$ and $ f''(x) = 2\delta(x) $; see local time for a formal explanation of the Itō term.
-
-The function |x| is not C2 in x at x = 0, so we cannot apply Itō's formula directly. But if we approximate it near zero (i.e. in [-ε, ε]) by parabolas
-$$
-\frac{x^2}{2\varepsilon}+\frac{\varepsilon}{2},
-$$
-
-and use Itō's formula, we can then take the limit as ε → 0, leading to Tanaka's formula.
diff --git a/wiki/wikipedia/3723.txt b/wiki/wikipedia/3723.txt
deleted file mode 100644
index 2ebd50cd384d85dc3f07e09d95e56ff858a4dde8..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3723.txt
+++ /dev/null
@@ -1,110 +0,0 @@
-In number theory, Wilson's theorem states that a natural number n > 1 is a prime number if and only if the product of all the positive integers less than n is one less than a multiple of n. That is (using the notations of modular arithmetic), the factorial $(n - 1)! = 1 \times 2 \times 3 \times \cdots \times (n - 1)$ satisfies
-$$
-(n-1)!\ \equiv -1 \pmod n
-$$
-
-exactly when n is a prime number. In other words, any number n is a prime number if, and only if, (n − 1)! + 1 is divisible by n.
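-
-The criterion translates directly into a (highly inefficient) primality test; a minimal Python sketch, added here for illustration, reproduces the primes in the range n = 2 to 30 covered by the table below:
-```python
-def is_prime_wilson(n: int) -> bool:
-    # Wilson's criterion: n > 1 is prime iff (n-1)! = -1 (mod n).
-    # The running product is reduced mod n at every step to avoid
-    # gigantic factorials, but this is still far slower in practice
-    # than even trial division.
-    if n < 2:
-        return False
-    fact = 1
-    for k in range(2, n):
-        fact = fact * k % n
-    return fact == n - 1          # n - 1 is -1 mod n
-
-print([n for n in range(2, 31) if is_prime_wilson(n)])
-# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
-```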
- 
-This theorem was stated by Ibn al-Haytham (c. 1000 AD), and, in the 18th century, by John Wilson. Edward Waring announced the theorem in 1770, although neither he nor his student Wilson could prove it. Lagrange gave the first proof in 1771. There is evidence that Leibniz was also aware of the result a century earlier, but he never published it.
-
-For each of the values of n from 2 to 30, the following table shows the number (n − 1)! and the remainder when (n − 1)! is divided by n. (In the notation of modular arithmetic, the remainder when m is divided by n is written m mod n.)
-
-The background color is blue for prime values of n, gold for composite values.
-
-
-
-The proofs (for prime moduli) below use the fact that residue classes modulo a prime number are fields; see the article prime field for more details. Lagrange's theorem, which states that in any field a polynomial of degree n has at most n roots, is needed for all the proofs.
-
-If n is composite it is divisible by some prime number q, where 2 ≤ q ≤ n − 2. Because $q$ divides $n$, let $n = qk$ for some integer $k$. Suppose for the sake of contradiction that (n − 1)! were congruent to −1 (mod n) where n is composite. Then (n − 1)! would also be congruent to −1 (mod q), as $(n-1)! \equiv -1 \ (\text{mod} \ n)$ implies that $(n-1)! = nm - 1 = (qk)m - 1 = q(km) - 1$ for some integer $m$, which shows that (n − 1)! is congruent to −1 (mod q). But (n − 1)! ≡ 0 (mod q), by the fact that q is a term in (n − 1)!, making (n − 1)! a multiple of q. A contradiction is now reached.
-
-In fact, more is true. With the sole exception of 4, where 3! = 6 ≡ 2 (mod 4), if n is composite then (n − 1)! is congruent to 0 (mod n). The proof is divided into two cases: First, if n can be factored as the product of two unequal numbers, n = ab, where 2 ≤ a < b ≤ n − 2, then both a and b will appear in the product 1 × 2 × ... × (n − 1) = (n − 1)! and (n − 1)! will be divisible by n. If n has no such factorization, then it must be the square of some prime q, q > 2. But then $2q < q^2 = n$, both q and 2q will be factors of (n − 1)!, and again n divides (n − 1)!.
-
-The result is trivial when p = 2, so assume p is an odd prime, p ≥ 3. Since the residue classes (mod p) are a field, every non-zero a has a unique multiplicative inverse $a^{-1}$. Lagrange's theorem implies that the only values of a for which $a \equiv a^{-1} \pmod p$ are $a \equiv \pm 1 \pmod p$ (because the congruence $a^2 \equiv 1$ can have at most two roots (mod p)). Therefore, with the exception of ±1, the factors of (p − 1)! can be arranged in unequal pairs, where the product of each pair is ≡ 1 (mod p). This proves Wilson's theorem.
-
-For example, if p = 11,
-$$
-10! = [(1\cdot10)]\cdot[(2\cdot6)(3\cdot4)(5\cdot9)(7\cdot8)] \equiv [-1]\cdot[1\cdot1\cdot1\cdot1] \equiv -1 \pmod{11}.
-$$
-
-Again, the result is trivial for p = 2, so suppose p is an odd prime, p ≥ 3. Consider the polynomial
-$$
-g(x)=(x-1)(x-2) \cdots (x-(p-1)).
-$$
-
-g has degree p − 1, leading term $x^{p-1}$, and constant term (p − 1)!. Its p − 1 roots are 1, 2, ..., p − 1.
-
-Now consider
-$$
-h(x)=x^{p-1}-1.
-$$
-
-h also has degree p − 1 and leading term $x^{p-1}$. Modulo p, Fermat's little theorem says it also has the same p − 1 roots, 1, 2, ..., p − 1.
-
-Finally, consider
-$$
-f(x)=g(x)-h(x).
-$$
-
-f has degree at most p − 2 (since the leading terms cancel), and modulo p also has the p − 1 roots 1, 2, ..., p − 1. But Lagrange's theorem says it cannot have more than p − 2 roots.
Therefore, f must be identically zero (mod p), so its constant term is (p − 1)! + 1 ≡ 0 (mod p). This is Wilson's theorem. - -It is possible to deduce Wilson's theorem from a particular application of the Sylow theorems. Let p be a prime. It is immediate to deduce that the symmetric group $ S_p $ has exactly $(p-1)!$ elements of order p, namely the p-cycles $ C_p $. On the other hand, each Sylow p-subgroup in $ S_p $ is a copy of $ C_p $. Hence it follows that the number of Sylow p-subgroups is $ n_p=(p-2)! $. The third Sylow theorem implies -$$ -(p-2)! \equiv 1 \pmod p. -$$ - -Multiplying both sides by (p − 1) gives -$$ -(p-1)! \equiv p-1 \equiv -1 \pmod p, -$$ - -that is, the result. - -Let a be a primitive root modulo p. Then $a^{\frac{p-1}{2}}\equiv -1 \pmod p$, and all other $a^k$ have $a^{p-k-1}$ as a multiplicative inverse, because $a^ka^{p-k-1}=a^{p-1}\equiv 1 \pmod p$. As $a^k$ runs through all the integers coprime to p, we are done. - -In practice, Wilson's theorem is useless as a primality test because computing (n − 1)! modulo n for large n is computationally complex, and much faster primality tests are known (indeed, even trial division is considerably more efficient). - -Using Wilson's Theorem, for any odd prime p = 2m + 1, we can rearrange the left hand side of -$$ -1\cdot 2\cdots (p-1)\ \equiv\ -1\ \pmod{p} -$$ - -to obtain the equality -$$ -1\cdot(p-1)\cdot 2\cdot (p-2)\cdots m\cdot (p-m)\ \equiv\ 1\cdot (-1)\cdot 2\cdot (-2)\cdots m\cdot (-m)\ \equiv\ -1 \pmod{p}. -$$ - -This becomes -$$ -\prod_{j=1}^m\ j^2\ \equiv(-1)^{m+1} \pmod{p} -$$ - -or -$$ -(m!)^2 \equiv(-1)^{m+1} \pmod{p}. -$$ - -We can use this fact to prove part of a famous result: for any prime p such that p ≡ 1 (mod 4), the number (−1) is a square (quadratic residue) mod p. For suppose p = 4k + 1 for some integer k. Then we can take m = 2k above, and we conclude that $(m!)^2$ is congruent to (−1). - -Wilson's theorem has been used to construct formulas for primes, but they are too slow to have practical value. - -Wilson's theorem allows one to define the p-adic gamma function. - -Gauss proved that -$$ -\prod_{k = 1 \atop \gcd(k,m)=1}^m \!\!k \ \equiv \begin{cases} -1 \pmod{m} & \text{if } m=4,p^\alpha,2p^\alpha \\ 1 \pmod{m} & \text{otherwise} \end{cases} -$$ - -where p represents an odd prime and $\alpha$ a positive integer. The values of m for which the product is −1 are precisely the ones where there is a primitive root modulo m. - -This further generalizes to the fact that in any finite abelian group, either the product of all elements is the identity, or there is precisely one element a of order 2 (but not both). In the latter case, the product of all elements equals a. diff --git a/wiki/wikipedia/3724.txt b/wiki/wikipedia/3724.txt deleted file mode 100644 index ff96ecc9a6c887c92c648f71fb0ee8297bec3eb1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3724.txt +++ /dev/null @@ -1,13 +0,0 @@ -In the mathematics of structural rigidity, grid bracing is a problem of adding cross bracing to a square grid to make it into a rigid structure. It can be solved optimally by translating it into a problem in graph theory on the connectivity of bipartite graphs. - -The problem considers a framework in the form of a square grid, with $r$ rows and $c$ columns of squares. The grid has $r(c+1)+(r+1)c$ edges, each of which has unit length and is considered to be a rigid rod, free to move continuously within the Euclidean plane but unable to change its length.
These rods are attached to each other by flexible joints at the $(r+1)(c+1)$ vertices of the grid. A valid continuous motion of this framework is a way of continuously varying the placement of its edges and joints in the plane in such a way that they keep the same lengths and the same attachments, but without requiring them to form squares. Instead, each square of the grid may be deformed to form a rhombus, and the whole grid may form an irregular structure with a different shape for each of its faces, as shown in the figure. - -Unlike squares, triangles made of rigid rods and flexible joints cannot change their shapes: any two triangles with sides of the same lengths must be congruent (this is the SSS postulate). If a square is cross-braced by adding one of its diagonals as another rigid bar, the diagonal divides it into two triangles which similarly cannot change shape, so the square must remain square through any continuous motion of the cross-braced framework. (The same framework could also be placed in the plane in a different way, by folding its two triangles onto each other over their shared diagonal, but this folded placement cannot be obtained by a continuous motion.) Similarly, if all squares of the given grid were cross-braced in the same way, the grid could not change shape; its only continuous motions would be to rotate it or translate it as a single rigid body. However, this method of making the grid rigid, by adding cross-braces to all its squares, uses many more cross-braces than necessary. The grid bracing problem asks for a description of the minimal sets of cross-braces that have the same effect, of making the whole framework rigid. - -As originally observed, the grid bracing problem can be translated into a problem in graph theory by considering an undirected bipartite graph that has a vertex for each row and column of the given grid, and an edge for each cross-braced square of the grid. The cross-braced grid is then rigid if and only if this bipartite graph is connected. It follows that the minimal cross-bracings of the grid correspond to the trees connecting all vertices in the graph, and that they have exactly $r+c-1$ cross-braced squares. Any overbraced but rigid cross-bracing (with more than this number of cross-braced squares) can be reduced to a minimal cross-bracing by finding a spanning tree of its graph. More generally, the number of degrees of freedom in the shape of a cross-braced grid is equal to the number of connected components of the bipartite graph, minus one. If a partially braced grid is to be made rigid by cross-bracing more squares, the minimum number of additional squares that need to be cross-braced is this number of degrees of freedom, and a solution with this number of squares can be obtained by adding this number of edges to the bipartite graph, connecting pairs of its connected components so that after the addition there is only one remaining component; a computational sketch of this criterion is shown below. - -Another version of the problem asks for a "double bracing", a set of cross-braces that is sufficiently redundant that it will stay rigid even if one of the diagonals is removed. This version allows both diagonals of a single square to be used, but it is not required to do so. In this version, a double bracing of a grid corresponds in the same way to an undirected bipartite multigraph that is connected and bridgeless (every edge belongs to at least one cycle). The minimum number of diagonals needed for a double bracing is $\min(2r,2c)$.
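The connectivity criterion above is straightforward to implement. The following Python sketch (the function name and grid encoding are illustrative, not from any standard library) counts the degrees of freedom of a braced grid with a union–find structure:

```python
def bracing_degrees_of_freedom(r, c, braced):
    """braced is a set of (i, j) pairs with 0 <= i < r and 0 <= j < c,
    meaning the square in row i and column j carries a cross-brace.
    Returns the degrees of freedom; the grid is rigid exactly when 0."""
    parent = list(range(r + c))        # vertices: rows 0..r-1, columns r..r+c-1

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for i, j in braced:
        parent[find(i)] = find(r + j)       # a brace joins row i to column j

    components = len({find(v) for v in range(r + c)})
    return components - 1                   # rigid iff the graph is connected

# A 2x2 grid needs r + c - 1 = 3 braces for rigidity:
print(bracing_degrees_of_freedom(2, 2, {(0, 0), (0, 1), (1, 1)}))  # 0 (rigid)
print(bracing_degrees_of_freedom(2, 2, {(0, 0), (1, 1)}))          # 1 (flexes)
```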
In the special case of grids with equal numbers of rows and columns, the only double bracings of this minimum size are Hamiltonian cycles, so determining whether one exists within a larger bracing is NP-complete. However, it is possible to approximate the smallest double bracing subset of a given bracing within a constant approximation ratio. - -An analogous theory, using directed graphs, was discovered for tension bracing, in which squares are braced by wires or strings (which cannot expand past their initial length, but can bend or collapse to a shorter length) instead of by rigid rods. To make a single square rigid in this way, it is necessary to brace both of its diagonals, instead of just one diagonal. One can again represent such a bracing by a bipartite graph, which has an edge directed from a row vertex to a column vertex if the shared square of that row and column is braced by the positively-sloped diagonal, and an edge from a column vertex to a row vertex if the shared square is braced by the negatively-sloped diagonal. The braced structure is rigid if and only if the resulting graph is strongly connected. If not, additional bracing needs to be added to connect together its strongly connected components. The problem of finding a minimal set of additional braces to add is an instance of strong connectivity augmentation, and can be solved in linear time. According to Robbins' theorem, the undirected graphs that can be made strongly connected by directing their edges are exactly the bridgeless graphs; reinterpreting this theorem in terms of grid bracing, a bracing by rigid rods forms a double bracing if and only if its rods can be replaced by wires (possibly on the other diagonals of their squares) to form a rigid tension bracing. diff --git a/wiki/wikipedia/3725.txt b/wiki/wikipedia/3725.txt deleted file mode 100644 index e46a7b9a716b82f49af3f0220d72782deba8ebd8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3725.txt +++ /dev/null @@ -1,11 +0,0 @@ -MASON is a multi-agent simulation environment developed in Java. - -MASON is developed at George Mason University's Evolutionary Computation Laboratory in conjunction with the GMU Center for Social Complexity. First released in 2003, the environment continues to be maintained and kept up to date. The name, as well as referring to the parent institution, derives from the acronym Multi-Agent Simulator Of Neighborhoods (or Networks). - -MASON development started within the Java.net environment, then moved to Google Code and is now at GitHub. - -Whilst MASON is less extensive than other similar libraries, it is designed with simplicity and execution speed as a priority. - -Applets developed using MASON include Craig Reynolds' Boids algorithm, Balls and Bands, a simulation of Hooke's Law, an L-system generator, Conway's Game of Life, Sugarscape and autonomous multi-robot systems. - -MASON may be used with the Eclipse integrated development environment. diff --git a/wiki/wikipedia/3726.txt b/wiki/wikipedia/3726.txt deleted file mode 100644 index 343368ebcad417b847c24ed4bb72465d02f22f1f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3726.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematics, the Neukirch–Uchida theorem shows that all problems about algebraic number fields can be reduced to problems about their absolute Galois groups.
- -Neukirch showed that two algebraic number fields with the same absolute Galois group are isomorphic, and Uchida strengthened this by proving Neukirch's conjecture that automorphisms of the algebraic number field correspond to outer automorphisms of its absolute Galois group. Pop extended the result to infinite fields that are finitely generated over prime fields. - -The Neukirch–Uchida theorem is one of the foundational results of anabelian geometry, whose main theme is to reduce properties of geometric objects to properties of their fundamental groups, provided these fundamental groups are sufficiently non-abelian. diff --git a/wiki/wikipedia/3727.txt b/wiki/wikipedia/3727.txt deleted file mode 100644 index 332d583e343afe42dbd33f7bdfd0ceb71d729eb4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3727.txt +++ /dev/null @@ -1,20 +0,0 @@ -In probability theory, the Doob–Dynkin lemma, named after Joseph L. Doob and Eugene Dynkin, characterizes the situation when one random variable is a function of another by the inclusion of the $\sigma$-algebras generated by the random variables. The usual statement of the lemma is formulated in terms of one random variable being measurable with respect to the $\sigma$-algebra generated by the other. - -The lemma plays an important role in the conditional expectation in probability theory, where it allows replacement of the conditioning on a random variable by conditioning on the $\sigma$-algebra that is generated by the random variable. There are more general versions with applications including the theory of stochastic differential equations. - -In the lemma below, $\mathcal{B}[0,1]$ is the $\sigma$-algebra of Borel sets on $ [0,1]. $ If $T\colon X\to Y,$ and $(Y,{\mathcal Y})$ is a measurable space, then -$$ -\sigma(T)\ \stackrel{\text{def}}{=}\ \{T^{-1}(S)\mid S\in {\mathcal Y}\} -$$ - -is the smallest $\sigma$-algebra on $X$ such that $T$ is $ \sigma(T) / {\mathcal Y} $-measurable. - -Let $T\colon \Omega\rightarrow\Omega'$ be a function, and $(\Omega',\mathcal{A}') $ a measurable space. A function $f\colon \Omega\rightarrow [0,1] $ is $ \sigma(T) / \mathcal{B}[0,1] $-measurable if and only if $f=g\circ T,$ for some $ \mathcal{A}' / \mathcal{B}[0,1] $-measurable $g\colon \Omega' \to [0,1]. $ - -Remark. The "if" part simply states that the composition of two measurable functions is measurable. The "only if" part is proven below. - -Remark. The lemma remains valid if the space $ ([0,1],\mathcal{B}[0,1]) $ is replaced with $ (S,\mathcal{B}(S)), $ where $ S \subseteq [-\infty,\infty], $ $ S $ is bijective with $ [0,1], $ and the bijection is measurable in both directions. - -By definition, the measurability of $ f $ means that $f^{-1}(S)\in \sigma(T)$ for every Borel set $S \subseteq [0,1].$ Therefore $\sigma(f) \subseteq \sigma(T),$ and the lemma may be restated as follows. - -Lemma. Let $T\colon \Omega\rightarrow\Omega', $ $f\colon \Omega\rightarrow [0,1], $ and let $(\Omega',\mathcal{A}') $ be a measurable space. Then $ f = g\circ T, $ for some $ \mathcal{A}' / \mathcal{B}[0,1] $-measurable $g\colon \Omega' \to [0,1], $ if and only if $\sigma(f) \subseteq \sigma(T)$.
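On a finite sample space the lemma is concrete: σ(T) is generated by the level sets of T, and f is σ(T)-measurable exactly when it is constant on each level set, in which case the factor g can be read off directly. A small Python sketch of this finite case (all names illustrative):

```python
def factor_through(omega, T, f):
    """Return a dict g with f = g o T if f is constant on every level set
    of T (the finite analogue of sigma(T)-measurability), else None."""
    g = {}
    for w in omega:
        t = T(w)
        if t in g and g[t] != f(w):
            return None          # f depends on more information than T carries
        g[t] = f(w)
    return g

omega = range(8)
T = lambda w: w % 2              # T records only the parity of w
f1 = lambda w: 3 * (w % 2)       # a function of the parity alone: factors
f2 = lambda w: w // 2            # not determined by the parity: does not factor

print(factor_through(omega, T, f1))  # {0: 0, 1: 3}
print(factor_through(omega, T, f2))  # None
```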
diff --git a/wiki/wikipedia/3728.txt b/wiki/wikipedia/3728.txt deleted file mode 100644 index 2104e56ef8663c90e90d2dc396cf82a66e30ba15..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3728.txt +++ /dev/null @@ -1,69 +0,0 @@ -In abstract algebra, specifically the theory of Lie algebras, Serre's theorem states: given a (finite reduced) root system $\Phi$, there exists a finite-dimensional semisimple Lie algebra whose root system is the given $\Phi$. - -The theorem states that: given a root system $\Phi$ in a Euclidean space with an inner product $(, )$, $\langle \beta, \alpha \rangle = 2(\alpha, \beta)/(\alpha, \alpha)$ for $\beta, \alpha \in E$, and a base $\{ \alpha_1, \dots, \alpha_n \}$ of $\Phi$, the Lie algebra $\mathfrak g$ defined by (1) $3n$ generators $e_i, f_i, h_i$ and (2) the relations -$$ -[h_i, h_j] = 0, -$$ -$$ -[e_i, f_i] = h_i, [e_i, f_j] = 0, i \ne j -$$, -$$ -[h_i, e_j] = \langle \alpha_i, \alpha_j \rangle e_j, [h_i, f_j] = -\langle \alpha_i, \alpha_j \rangle f_j -$$, -$$ -\operatorname{ad}(e_i)^{-\langle \alpha_i, \alpha_j \rangle+1}(e_j) = 0, i \ne j -$$, -$$ -\operatorname{ad}(f_i)^{-\langle \alpha_i, \alpha_j \rangle+1}(f_j) = 0, i \ne j -$$. - -is a finite-dimensional semisimple Lie algebra with the Cartan subalgebra generated by $h_i$'s and with the root system $\Phi$. - -The square matrix $[\langle \alpha_i, \alpha_j \rangle]_{1 \le i, j \le n}$ is called the Cartan matrix. Thus, with this notion, the theorem states that, given a Cartan matrix A, there exists a unique (up to an isomorphism) finite-dimensional semisimple Lie algebra $\mathfrak g(A)$ associated to $A$. The construction of a semisimple Lie algebra from a Cartan matrix can be generalized by weakening the definition of a Cartan matrix. The (generally infinite-dimensional) Lie algebra associated to a generalized Cartan matrix is called a Kac–Moody algebra. - -The proof here follows the standard accounts. - -Let $a_{ij} = \langle \alpha_i, \alpha_j \rangle$ and then let $\widetilde{\mathfrak g}$ be the Lie algebra generated by (1) the generators $e_i, f_i, h_i$ and (2) the relations: - -*$[h_i, h_j] = 0$, - -*$[e_i, f_i] = h_i$, $[e_i, f_j] = 0, i \ne j$, - -*$[h_i, e_j] = a_{ij} e_j, [h_i, f_j] = -a_{ij} f_j$. - -Let $\mathfrak{h}$ be the free vector space spanned by $h_i$, V the free vector space with a basis $v_1, \dots, v_n$ and $T = \bigoplus_{l=0}^{\infty} V^{\otimes l}$ the tensor algebra over it. Consider the following representation of a Lie algebra: -$$ -\pi : \widetilde{\mathfrak g} \to \mathfrak{gl}(T) -$$ - -given by: for $a \in T, h \in \mathfrak{h}, \lambda \in \mathfrak{h}^*$, - -*$\pi(f_i)a = v_i \otimes a,$ - -*$\pi(h)1 = \langle \lambda, h \rangle 1, \pi(h)(v_j \otimes a) = -\langle \alpha_j, h \rangle v_j \otimes a + v_j \otimes \pi(h)a$, inductively, - -*$\pi(e_i)1 = 0, \pi(e_i)(v_j \otimes a) = \delta_{ij} \alpha_i(a) + v_j \otimes \pi(e_i)a$, inductively. - -It is not trivial that this is indeed a well-defined representation, and this has to be checked by hand. From this representation, one deduces the following properties: let $\widetilde{\mathfrak{n}}_+$ (resp. $\widetilde{\mathfrak{n}}_-$) be the subalgebras of $\widetilde{\mathfrak g}$ generated by the $e_i$'s (resp. the $f_i$'s). - -*$\widetilde{\mathfrak{n}}_+$ (resp. $\widetilde{\mathfrak{n}}_-$) is a free Lie algebra generated by the $e_i$'s (resp. the $f_i$'s). - -*As a vector space, $\widetilde{\mathfrak g} = \widetilde{\mathfrak{n}}_+ \bigoplus \mathfrak{h} \bigoplus \widetilde{\mathfrak{n}}_-$.
- -*$\widetilde{\mathfrak{n}}_+ = \bigoplus_{0 \ne \alpha \in Q_+} \widetilde{\mathfrak g}_{\alpha}$ where $\widetilde{\mathfrak g}_{\alpha} = \{ x \in \widetilde{\mathfrak g}|[h, x] = \alpha(h) x, h \in \mathfrak h \}$ and, similarly, $\widetilde{\mathfrak{n}}_- = \bigoplus_{0 \ne \alpha \in Q_+} \widetilde{\mathfrak g}_{-\alpha}$. - -*(root space decomposition) $\widetilde{\mathfrak g} = \left( \bigoplus_{0 \ne \alpha \in Q_+} \widetilde{\mathfrak g}_{-\alpha} \right) \bigoplus \mathfrak h \bigoplus \left( \bigoplus_{0 \ne \alpha \in Q_+} \widetilde{\mathfrak g}_{\alpha} \right)$. - -For each ideal $\mathfrak i$ of $\widetilde{\mathfrak g}$, one can easily show that $\mathfrak i$ is homogeneous with respect to the grading given by the root space decomposition; i.e., $\mathfrak i = \bigoplus_{\alpha} (\widetilde{\mathfrak g}_{\alpha} \cap \mathfrak i)$. It follows that a sum of ideals intersecting $\mathfrak h$ trivially itself intersects $\mathfrak h$ trivially. Let $\mathfrak r$ be the sum of all ideals intersecting $\mathfrak h$ trivially. Then there is a vector space decomposition: $\mathfrak r = (\mathfrak r \cap \widetilde{\mathfrak n}_-) \oplus (\mathfrak r \cap \widetilde{\mathfrak n}_+)$. In fact, it is a $\widetilde{\mathfrak g}$-module decomposition. Let -$$ -\mathfrak g = \widetilde{\mathfrak g}/\mathfrak r -$$. - -Then it contains a copy of $\mathfrak h$, which is identified with $\mathfrak h$ and -$$ -\mathfrak g = \mathfrak{n}_+ \bigoplus \mathfrak{h} \bigoplus \mathfrak{n}_- -$$ - -where $\mathfrak{n}_+$ (resp. $\mathfrak{n}_-$) are the subalgebras generated by the images of $e_i$'s (resp. the images of $f_i$'s). - -One then shows: (1) the derived algebra $[\mathfrak g, \mathfrak g]$ here is the same as $\mathfrak g$ in the lead, (2) it is finite-dimensional and semisimple and (3) $[\mathfrak g, \mathfrak g] = \mathfrak g$. diff --git a/wiki/wikipedia/3729.txt b/wiki/wikipedia/3729.txt deleted file mode 100644 index b8d7d04ec1e0616858a228e83dcce9def6ecdce2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3729.txt +++ /dev/null @@ -1,264 +0,0 @@ -In the calculus of variations and classical mechanics, the Euler–Lagrange equations are a system of second-order ordinary differential equations whose solutions are stationary points of the given action functional. The equations were discovered in the 1750s by Swiss mathematician Leonhard Euler and Italian mathematician Joseph-Louis Lagrange. - -Because a differentiable functional is stationary at its local extrema, the Euler–Lagrange equation is useful for solving optimization problems in which, given some functional, one seeks the function minimizing or maximizing it. This is analogous to Fermat's theorem in calculus, stating that at any point where a differentiable function attains a local extremum its derivative is zero. - -In Lagrangian mechanics, according to Hamilton's principle of stationary action, the evolution of a physical system is described by the solutions to the Euler equation for the action of the system. In this context Euler equations are usually called Lagrange equations. In classical mechanics, it is equivalent to Newton's laws of motion, but it has the advantage that it takes the same form in any system of generalized coordinates, and it is better suited to generalizations. In classical field theory there is an analogous equation to calculate the dynamics of a field.
- -The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point. - -Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. Their correspondence ultimately led to the calculus of variations, a term coined by Euler himself in 1766. - -Let $(X,L)$ be a mechanical system with $n$ degrees of freedom. Here $X$ is the configuration space and $L=L(t,\boldsymbol q, \boldsymbol v)$ the Lagrangian, i.e. a smooth real-valued function such that $\boldsymbol q \in X,$ and $\boldsymbol v$ is an $n$-dimensional "vector of speed". (For those familiar with differential geometry, $X$ is a smooth manifold, and $L : {\mathbb R}_t \times TX \to {\mathbb R},$ where $TX$ is the tangent bundle of $X).$ - -Let ${\cal P}(a,b,\boldsymbol x_a,\boldsymbol x_b)$ be the set of smooth paths $\boldsymbol q: [a,b] \to X$ for which $\boldsymbol q(a) = \boldsymbol x_a$ and $\boldsymbol q(b) = \boldsymbol x_{b}. $ The action functional $S : {\cal P}(a,b,\boldsymbol x_a,\boldsymbol x_b) \to \mathbb{R}$ is defined via -$$ -\displaystyle S[\boldsymbol q] = \int_a^b L(t,\boldsymbol q(t),\dot{\boldsymbol q}(t)) dt. -$$ - -A path $\boldsymbol q \in {\cal P}(a,b,\boldsymbol x_a,\boldsymbol x_b)$ is a stationary point of $S$ if and only if -$$ -\frac{\partial L}{\partial q^i}(t,\boldsymbol q(t),\dot{\boldsymbol q}(t))-\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot q^i}(t,\boldsymbol q(t),\dot{\boldsymbol q}(t)) = 0, \quad i = 1, \dots, n. -$$ - -Here, $\dot{\boldsymbol q}(t) $ is the time derivative of $\boldsymbol q(t).$ - -A standard example is finding the real-valued function y(x) on the interval [a, b], such that y(a) = c and y(b) = d, for which the path length along the curve traced by y is as short as possible. -$$ - \text{s} = \int_{a}^{b} \sqrt{\mathrm{d}x^2+\mathrm{d}y^2} = \int_{a}^{b} \sqrt{1+y'^2}\mathrm{d}x, -$$ - -the integrand function being $L(x, y, y') = \sqrt{1 + y'^2}$. - -The partial derivatives of L are: - -\frac{\partial L(x, y, y')}{\partial y'} = \frac{y'}{\sqrt{1 + y'^2}} \quad \text{and} \quad - -\frac{\partial L(x, y, y')}{\partial y} = 0. - -By substituting these into the Euler–Lagrange equation, we obtain - - - -\begin{align} - -\frac{\mathrm{d}}{\mathrm{d}x} \frac{y'(x)}{\sqrt{1 + (y'(x))^2}} &= 0 \\ - -\frac{y'(x)}{\sqrt{1 + (y'(x))^2}} &= C = \text{constant} \\ - -\Rightarrow y'(x)&= \frac{C}{\sqrt{1-C^2}} =: A \\ - -\Rightarrow y(x) &= Ax + B - -\end{align} - - - -that is, the function must have a constant first derivative, and thus its graph is a straight line.
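The computation above is easy to confirm symbolically. A minimal sketch with Python's sympy (an illustration, not part of the original article):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
yp = y(x).diff(x)

L = sp.sqrt(1 + yp**2)                       # arc-length Lagrangian

# Euler-Lagrange expression: dL/dy - d/dx (dL/dy')
el = sp.diff(L, y(x)) - sp.diff(sp.diff(L, yp), x)
print(sp.simplify(el))                       # -y''/(1 + y'**2)**(3/2)

# The equation vanishes iff y'' = 0, whose solutions are straight lines:
print(sp.dsolve(sp.Eq(y(x).diff(x, 2), 0), y(x)))   # y(x) = C1 + C2*x
```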
- -The stationary values of the functional - - - -I[f] = \int_{x_0}^{x_1} \mathcal{L}(x, f, f', f'', \dots, f^{(k)})~\mathrm{d}x ~;~~ - -f' := \cfrac{\mathrm{d}f}{\mathrm{d}x}, ~f'' := \cfrac{\mathrm{d}^2f}{\mathrm{d}x^2}, ~ - -f^{(k)} := \cfrac{\mathrm{d}^kf}{\mathrm{d}x^k} - - - -can be obtained from the Euler–Lagrange equation - - - -\cfrac{\partial \mathcal{L}}{\partial f} - \cfrac{\mathrm{d}}{\mathrm{d} x}\left(\cfrac{\partial \mathcal{L}}{\partial f'}\right) + \cfrac{\mathrm{d}^2}{\mathrm{d} x^2}\left(\cfrac{\partial \mathcal{L}}{\partial f''}\right) - \dots + - -(-1)^k \cfrac{\mathrm{d}^k}{\mathrm{d} x^k}\left(\cfrac{\partial \mathcal{L}}{\partial f^{(k)}}\right) = 0 - - - -under fixed boundary conditions for the function itself as well as for the first $k-1$ derivatives (i.e. for all $f^{(i)}, i \in \{0, ..., k-1\}$). The endpoint values of the highest derivative $f^{(k)}$ remain flexible. - -If the problem involves finding several functions ($f_1, f_2, \dots, f_m$) of a single independent variable ($x$) that define an extremum of the functional - - - -I[f_1,f_2, \dots, f_m] = \int_{x_0}^{x_1} \mathcal{L}(x, f_1, f_2, \dots, f_m, f_1', f_2', \dots, f_m')~\mathrm{d}x - -~;~~ f_i' := \cfrac{\mathrm{d}f_i}{\mathrm{d}x} - - - -then the corresponding Euler–Lagrange equations are - - - -\begin{align} - -\frac{\partial \mathcal{L}}{\partial f_i} - \frac{\mathrm{d}}{\mathrm{d}x}\left(\frac{\partial \mathcal{L}}{\partial f_i'}\right) = 0 - -\end{align} - - - -A multi-dimensional generalization comes from considering a function on n variables. If $\Omega$ is some surface, then - - - -I[f] = \int_{\Omega} \mathcal{L}(x_1, \dots , x_n, f, f_{1}, \dots , f_{n}) \mathrm{d}\mathbf{x}\! ~;~~ - -f_{j} := \cfrac{\partial f}{\partial x_j} - - - -is extremized only if f satisfies the partial differential equation -$$ - \frac{\partial \mathcal{L}}{\partial f} - \sum_{j=1}^{n} \frac{\partial}{\partial x_j}\left(\frac{\partial \mathcal{L}}{\partial f_{j}}\right) = 0. -$$ - -When n = 2 and functional $\mathcal I$ is the energy functional, this leads to the soap-film minimal surface problem. - -If there are several unknown functions to be determined and several variables such that - - - -I[f_1,f_2,\dots,f_m] = \int_{\Omega} \mathcal{L}(x_1, \dots , x_n, f_1, \dots, f_m, f_{1,1}, \dots , f_{1,n}, \dots, f_{m,1}, \dots, f_{m,n}) \mathrm{d}\mathbf{x}\! ~;~~ - -f_{i,j} := \cfrac{\partial f_i}{\partial x_j} - - - -the system of Euler–Lagrange equations is - - - -\begin{align} - -\frac{\partial \mathcal{L}}{\partial f_1} - \sum_{j=1}^{n} \frac{\partial}{\partial x_j}\left(\frac{\partial \mathcal{L}}{\partial f_{1,j}}\right) &= 0_1 \\ - -\frac{\partial \mathcal{L}}{\partial f_2} - \sum_{j=1}^{n} \frac{\partial}{\partial x_j}\left(\frac{\partial \mathcal{L}}{\partial f_{2,j}}\right) &= 0_2 \\ - -\vdots \qquad \vdots \qquad &\quad \vdots \\ - -\frac{\partial \mathcal{L}}{\partial f_m} - \sum_{j=1}^{n} \frac{\partial}{\partial x_j}\left(\frac{\partial \mathcal{L}}{\partial f_{m,j}}\right) &= 0_m.
- -\end{align} - - - -If there is a single unknown function f to be determined that is dependent on two variables x1 and x2 and if the functional depends on higher derivatives of f up to n-th order such that - - - -\begin{align} - -I[f] & = \int_{\Omega} \mathcal{L}(x_1, x_2, f, f_{1}, f_{2}, f_{11}, f_{12}, f_{22}, - -\dots, f_{22\dots 2}) \mathrm{d}\mathbf{x} \\ - -& \qquad \quad - -f_{i} := \cfrac{\partial f}{\partial x_i} , \quad - -f_{ij} := \cfrac{\partial^2 f}{\partial x_i\partial x_j} , \dots - -\end{align} - - - -then the Euler–Lagrange equation is - - - -\begin{align} - -\frac{\partial \mathcal{L}}{\partial f} - -& - \frac{\partial}{\partial x_1}\left(\frac{\partial \mathcal{L}}{\partial f_{1}}\right) - -- \frac{\partial}{\partial x_2}\left(\frac{\partial \mathcal{L}}{\partial f_{2}}\right) - -+ \frac{\partial^2}{\partial x_1^2}\left(\frac{\partial \mathcal{L}}{\partial f_{11}}\right) - -+ \frac{\partial^2}{\partial x_1\partial x_2}\left(\frac{\partial \mathcal{L}}{\partial f_{12}}\right) - -+ \frac{\partial^2}{\partial x_2^2}\left(\frac{\partial \mathcal{L}}{\partial f_{22}}\right) \\ - -& - \dots - -+ (-1)^n \frac{\partial^n}{\partial x_2^n}\left(\frac{\partial \mathcal{L}}{\partial f_{22\dots 2}}\right) = 0 - -\end{align} - - - -which can be written more compactly as: - - - -\frac{\partial \mathcal{L}}{\partial f} +\sum_{j=1}^n \sum_{\mu_1 \leq \ldots \leq \mu_j} (-1)^j \frac{\partial^j}{\partial x_{\mu_{1}}\dots \partial x_{\mu_{j}}} \left( \frac{\partial \mathcal{L} }{\partial f_{\mu_1\dots\mu_j}}\right)=0 - - - -wherein $\mu_1 \dots \mu_j$ are indices that span the number of variables, that is, here they go from 1 to 2. Here summation over the $\mu_1 \dots \mu_j$ indices is only over $\mu_1 \leq \mu_2 \leq \ldots \leq \mu_j$ in order to avoid counting the same partial derivative multiple times, for example $f_{12} = f_{21}$ appears only once in the previous equation. - -If there are p unknown functions fi to be determined that are dependent on m variables x1 ... xm and if the functional depends on higher derivatives of the fi up to n-th order such that - - - -\begin{align} - -I[f_1,\ldots,f_p] & = \int_{\Omega} \mathcal{L}(x_1, \ldots, x_m; f_1,\ldots,f_p; f_{1,1},\ldots, - -f_{p,m}; f_{1,11},\ldots, f_{p,mm};\ldots; f_{p,1\ldots 1}, \ldots, f_{p,m\ldots m}) \mathrm{d}\mathbf{x} \\ - -& \qquad \quad - -f_{i,\mu} := \cfrac{\partial f_i}{\partial x_\mu} , \quad - -f_{i,\mu_1\mu_2} := \cfrac{\partial^2 f_i}{\partial x_{\mu_1}\partial x_{\mu_2}} , \dots - -\end{align} - - - -where $\mu_1 \dots \mu_j$ are indices that span the number of variables, that is, they go from 1 to m. Then the Euler–Lagrange equation is - - - -\frac{\partial \mathcal{L}}{\partial f_i} +\sum_{j=1}^n \sum_{\mu_1 \leq \ldots \leq \mu_j} (-1)^j \frac{\partial^j}{\partial x_{\mu_{1}}\dots \partial x_{\mu_{j}}} \left( \frac{\partial \mathcal{L} }{\partial f_{i,\mu_1\dots\mu_j}}\right)=0 - - - -where the summation over the $\mu_1 \dots \mu_j$ avoids counting the same derivative $ f_{i,\mu_1\mu_2} = f_{i,\mu_2\mu_1}$ several times, just as in the previous subsection. This can be expressed more compactly as - - - -\sum_{j=0}^n \sum_{\mu_1 \leq \ldots \leq \mu_j} (-1)^j \partial_{ \mu_{1}\ldots \mu_{j} }^j \left( \frac{\partial \mathcal{L} }{\partial f_{i,\mu_1\dots\mu_j}}\right)=0
Then, for functionals $S:C^\infty ([a,b])\to \mathbb{R}$ of the form - - - -S[f]=\int_a^b (L\circ\dot{f})(t)\mathrm{d} t - - - -where $L:TM\to\mathbb{R}$ is the Lagrangian, the statement $\mathrm{d} S_f=0$ is equivalent to the statement that, for all $t\in [a,b]$, each coordinate frame trivialization $(x^i,X^i)$ of a neighborhood of $\dot{f}(t)$ yields the following $\dim M$ equations: - - - -\forall i:\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial X^i}\bigg|_{\dot{f}(t)}=\frac{\partial L}{\partial x^i}\bigg|_{\dot{f}(t)}. - - diff --git a/wiki/wikipedia/373.txt b/wiki/wikipedia/373.txt deleted file mode 100644 index 1f0d460d99ee9c00d5bd2c23d6383e4d0b4bf34f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/373.txt +++ /dev/null @@ -1,5 +0,0 @@ -In computational complexity, an NP-complete (or NP-hard) problem is weakly NP-complete (or weakly NP-hard) if there is an algorithm for the problem whose running time is polynomial in the dimension of the problem and the magnitudes of the data involved (provided these are given as integers), rather than the base-two logarithms of their magnitudes. Such algorithms are technically exponential functions of their input size and are therefore not considered polynomial. - -For example, the NP-hard knapsack problem can be solved by a dynamic programming algorithm requiring a number of steps polynomial in the size of the knapsack and the number of items (assuming that all data are scaled to be integers); however, the runtime of this algorithm is exponential, since the input sizes of the objects and knapsack are logarithmic in their magnitudes. However, as Garey and Johnson (1979) observed, “A pseudo-polynomial-time algorithm … will display 'exponential behavior' only when confronted with instances containing 'exponentially large' numbers, [which] might be rare for the application we are interested in. If so, this type of algorithm might serve our purposes almost as well as a polynomial time algorithm.” Another example of a weakly NP-complete problem is the subset sum problem. - -The related term strongly NP-complete (or unary NP-complete) refers to those problems that remain NP-complete even if the data are encoded in unary, that is, if the data are "small" relative to the overall input size. diff --git a/wiki/wikipedia/3730.txt b/wiki/wikipedia/3730.txt deleted file mode 100644 index 289811cdf43807a2a9e2798be539face76989c68..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3730.txt +++ /dev/null @@ -1,47 +0,0 @@ -In projective geometry, a plane at infinity is the hyperplane at infinity of a three-dimensional projective space, or any plane contained in the hyperplane at infinity of any projective space of higher dimension. This article will be concerned solely with the three-dimensional case. - -There are two approaches to defining the plane at infinity which depend on whether one starts with a projective 3-space or an affine 3-space. - -If a projective 3-space is given, the plane at infinity is any distinguished projective plane of the space. This point of view emphasizes the fact that this plane is not geometrically different from any other plane. On the other hand, given an affine 3-space, the plane at infinity is a projective plane which is added to the affine 3-space in order to give it closure of incidence properties.
This means that the points of the plane at infinity are the points where parallel lines of the affine 3-space will meet, and the lines are the lines where parallel planes of the affine 3-space will meet. The result of the addition is the projective 3-space, $P^3$. This point of view emphasizes the internal structure of the plane at infinity, but does make it look "special" in comparison to the other planes of the space. - -If the affine 3-space is real, $\mathbb{R}^3$, then the addition of a real projective plane $\mathbb{R}P^2$ at infinity produces the real projective 3-space $\mathbb{R}P^3$. - -Since any two projective planes in a projective 3-space are equivalent, we can choose a homogeneous coordinate system so that any point on the plane at infinity is represented as (X:Y:Z:0). - -Any point in the affine 3-space will then be represented as (X:Y:Z:1). The points on the plane at infinity seem to have three degrees of freedom, but homogeneous coordinates are equivalent up to any rescaling: -$$ - (X : Y : Z : 0) \equiv (a X : a Y : a Z : 0) -$$, - -so that the coordinates (X:Y:Z:0) can be normalized, thus reducing the degrees of freedom to two (thus, a surface, namely a projective plane). - -Proposition: Any line which passes through the origin (0:0:0:1) and through a point (X:Y:Z:1) will intersect the plane at infinity at the point (X:Y:Z:0). - -Proof: A line which passes through points (0:0:0:1) and (X:Y:Z:1) will consist of points which are linear combinations of the two given points: -$$ - a (0:0:0:1) + b (X:Y:Z:1) = (bX :bY: bZ: a + b). -$$ - -For such a point to lie on the plane at infinity we must have $ a + b = 0$. So, by choosing $ a = - b$, we obtain the point -$$ -(bX:bY:bZ:0) = (X : Y : Z : 0) -$$, as required. Q.E.D. - -Any pair of parallel lines in 3-space will intersect each other at a point on the plane at infinity. Also, every line in 3-space intersects the plane at infinity at a unique point. This point is determined by the direction—and only by the direction—of the line. To determine this point, consider a line parallel to the given line, but passing through the origin, if the line does not already pass through the origin. Then choose any point, other than the origin, on this second line. If the homogeneous coordinates of this point are (X:Y:Z:1), then the homogeneous coordinates of the point at infinity through which the first and second line both pass is (X:Y:Z:0). - -Example: Consider a line passing through the points (0:0:1:1) and (3:0:1:1). A parallel line passes through points (0:0:0:1) and (3:0:0:1). This second line intersects the plane at infinity at the point (3:0:0:0). But the first line also passes through this point: -$$ - \lambda (3:0:1:1) + \mu (0:0:1:1) -$$ -$$ - = (3 \lambda : 0 : \lambda + \mu : \lambda + \mu) -$$ -$$ - = ( 3 : 0 : 0 : 0) -$$ - -when $\lambda + \mu = 0$. ■ - -Any pair of parallel planes in affine 3-space will intersect each other in a projective line (a line at infinity) in the plane at infinity. Also, every plane in the affine 3-space intersects the plane at infinity in a unique line. This line is determined by the direction—and only by the direction—of the plane. - -Since the plane at infinity is a projective plane, it is homeomorphic to the surface of a "sphere modulo antipodes", i.e. a sphere in which antipodal points are equivalent: S2/{1,-1} where the quotient is understood as a quotient by a group action (see quotient space).
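The recipe above is entirely mechanical, so it is easy to script. A short Python sketch (the helper name is illustrative) that computes the point at infinity of the line through two affine points, reproducing the example:

```python
import numpy as np

def point_at_infinity(p, q):
    """p, q: homogeneous coordinates (X, Y, Z, 1) of distinct affine points.
    The line through them meets the plane at infinity in their direction."""
    d = np.asarray(q, float) - np.asarray(p, float)   # last coordinate is 0
    d /= d[np.nonzero(d)[0][0]]      # normalize the first nonzero coordinate
    return d

# Line through (0:0:1:1) and (3:0:1:1), as in the example:
print(point_at_infinity([0, 0, 1, 1], [3, 0, 1, 1]))   # [1. 0. 0. 0.]

# The parallel line through (0:0:0:1) and (3:0:0:1) meets it at the same point:
print(point_at_infinity([0, 0, 0, 1], [3, 0, 0, 1]))   # [1. 0. 0. 0.]
```

Since homogeneous coordinates are defined only up to rescaling, (1:0:0:0) and (3:0:0:0) are the same projective point.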
diff --git a/wiki/wikipedia/3731.txt b/wiki/wikipedia/3731.txt deleted file mode 100644 index f3952e32e16b42547cab42400c72edaac97c0b9f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3731.txt +++ /dev/null @@ -1,15 +0,0 @@ -In computer science, read–modify–write is a class of atomic operations (such as test-and-set, fetch-and-add, and compare-and-swap) that both read a memory location and write a new value into it simultaneously, either with a completely new value or some function of the previous value. These operations prevent race conditions in multi-threaded applications. Typically they are used to implement mutexes or semaphores. These atomic operations are also heavily used in non-blocking synchronization. - -Maurice Herlihy (1991) ranks atomic operations by their consensus numbers, as follows: - -* ∞: memory-to-memory move and swap, augmented queue, compare-and-swap, fetch-and-cons, sticky byte, load-link/store-conditional (LL/SC) - -* 2n - 2: n-register assignment - -* 2: test-and-set, swap, fetch-and-add, queue, stack - -* 1: atomic read and atomic write - -It is impossible to implement an operation that requires a given consensus number with only operations with a lower consensus number, no matter how many such operations one uses. - -Read–modify–write instructions often produce unexpected results when used on I/O devices, as a write operation may not affect the same internal register that would be accessed in a read operation. - -This term is also associated with RAID levels that perform actual write operations as atomic read–modify–write sequences. Such RAID levels include RAID 4, RAID 5 and RAID 6. diff --git a/wiki/wikipedia/3732.txt b/wiki/wikipedia/3732.txt deleted file mode 100644 index 872d26f17341587c978fd88d80fd12dfad053d96..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3732.txt +++ /dev/null @@ -1,21 +0,0 @@ -In graph theory, Mac Lane's planarity criterion is a characterisation of planar graphs in terms of their cycle spaces, named after Saunders Mac Lane, who published it in 1937. It states that a finite undirected graph is planar if and only if the cycle space of the graph (taken modulo 2) has a cycle basis in which each edge of the graph participates in at most two basis vectors. - -For any cycle c in a graph G, one can form an m-dimensional 0-1 vector (where m is the number of edges of the graph) that has a 1 in the coordinate positions corresponding to edges in c and a 0 in the remaining coordinate positions. The cycle space C(G) of the graph is the vector space formed by all possible linear combinations of vectors formed in this way. In Mac Lane's characterization, C(G) is a vector space over the finite field GF(2) with two elements; that is, in this vector space, vectors are added coordinatewise modulo two. A 2-basis of G is a basis of C(G) with the property that, for each edge e in G, at most two basis vectors have nonzero coordinates in the position corresponding to e. Then, stated more formally, Mac Lane's characterization is that the planar graphs are exactly the graphs that have a 2-basis. - -One direction of the characterisation states that every planar graph has a 2-basis. Such a basis may be found as the collection of boundaries of the bounded faces of a planar embedding of the given graph G. - -If an edge is a bridge of G, it appears twice on a single face boundary and therefore has a zero coordinate in the corresponding vector.
Thus, the only edges that have nonzero coordinates are the ones that separate two different faces; these edges appear either once (if one of the faces is the unbounded one) or twice in the collection of boundaries of bounded faces. It remains to prove that these cycles form a basis. One way to prove this is by induction. As a base case, if G is a tree, then it has no bounded faces and C(G) is zero-dimensional and has an empty basis. Otherwise, removing an edge from the unbounded face of G reduces both the dimension of the cycle space and the number of bounded faces by one and the induction follows. - -Alternatively, it is possible to use Euler's formula to show that the number of cycles in this collection equals the circuit rank of G, which is the dimension of the cycle space. Each nonempty subset of cycles has a vector sum that represents the boundary of the union of the bounded faces in the subset, which cannot be empty (the union includes at least one bounded face and excludes the unbounded face, so there must be some edges separating them). Therefore, there is no subset of cycles whose vectors sum to zero, which means that all the cycles are linearly independent. As a linearly independent set of the same size as the dimension of the space, this collection of cycles must form a basis. - -O'Neil provided the following simple argument for the other direction of the characterization, based on Wagner's theorem characterizing the planar graphs by forbidden minors. As O'Neil observes, the property of having a 2-basis is preserved under graph minors: if one contracts an edge, the same contraction may be performed in the basis vectors; if one removes an edge that has a nonzero coordinate in a single basis vector, then that vector may be removed from the basis; and if one removes an edge that has a nonzero coordinate in two basis vectors, then those two vectors may be replaced by their sum (modulo two). Additionally, if C is a cycle basis for any graph, then it must cover some edges exactly once, for otherwise its sum would be zero (impossible for a basis), and so C can be augmented by one more cycle consisting of these singly-covered edges while preserving the property that every edge is covered at most twice. - -However, the complete graph K5 has no 2-basis: C(G) is six-dimensional, each nontrivial vector in C(G) has nonzero coordinates for at least three edges, and so any augmented basis would have at least 21 nonzeros, exceeding the 20 nonzeros that would be allowed if each of the ten edges were nonzero in at most two basis vectors. By similar reasoning, the complete bipartite graph K3,3 has no 2-basis: C(G) is four-dimensional, and each nontrivial vector in C(G) has nonzero coordinates for at least four edges, so any augmented basis would have at least 20 nonzeros, exceeding the 18 nonzeros that would be allowed if each of the nine edges were nonzero in at most two basis vectors. Since the property of having a 2-basis is minor-closed and is not true of the two minor-minimal nonplanar graphs K5 and K3,3, it is also not true of any other nonplanar graph. - -Lefschetz provided another proof, based on algebraic topology. He uses a slightly different formulation of the planarity criterion, according to which a graph is planar if and only if it has a set of (not necessarily simple) cycles covering every edge exactly twice, such that the only nontrivial relation among these cycles in C(G) is that their sum be zero.
If this is the case, then leaving any one of the cycles out produces a basis satisfying Mac Lane's formulation of the criterion. If a planar graph is embedded on a sphere, its face cycles clearly satisfy Lefschetz's property. Conversely, as Lefschetz shows, whenever a graph G has a set of cycles with this property, they necessarily form the face cycles of an embedding of the graph onto the sphere. - -Ja'Ja' and Simon used Mac Lane's planarity criterion as part of a parallel algorithm for testing graph planarity and finding planar embeddings. Their algorithm partitions the graph into triconnected components, after which there is a unique planar embedding (up to the choice of the outer face) and the cycles in a 2-basis can be assumed to be all the peripheral cycles of the graph. Ja'Ja' and Simon start with a fundamental cycle basis of the graph (a cycle basis generated from a spanning tree by forming, for each edge outside the tree, the cycle consisting of that edge and the tree path connecting its endpoints) and transform it into a 2-basis of peripheral cycles. These cycles form the faces of a planar embedding of the given graph. - -Mac Lane's planarity criterion allows the number of bounded face cycles in a planar graph to be counted easily, as the circuit rank of the graph. This property is used in defining the meshedness coefficient of the graph, a normalized variant of the number of bounded face cycles that is computed by dividing the circuit rank by 2n - 5, the maximum possible number of bounded faces of a planar graph with the same vertex set. diff --git a/wiki/wikipedia/3733.txt b/wiki/wikipedia/3733.txt deleted file mode 100644 index ada20a4d352652af3c64a9a3375d722e94145be8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3733.txt +++ /dev/null @@ -1,3 +0,0 @@ -In arithmetic geometry, Faltings' product theorem gives sufficient conditions for a subvariety of a product of projective spaces to be a product of varieties in the projective spaces. It was introduced by Faltings in his proof of Lang's conjecture that subvarieties of an abelian variety containing no translates of non-trivial abelian subvarieties have only finitely many rational points. - -Evertse and Ferretti gave explicit versions of Faltings' product theorem. diff --git a/wiki/wikipedia/3734.txt b/wiki/wikipedia/3734.txt deleted file mode 100644 index b1883f89ac9323b0be5eb224f2bb9a1bab6a7188..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3734.txt +++ /dev/null @@ -1,90 +0,0 @@ -In mathematics, the trigonometric moment problem is formulated as follows: given a finite sequence {α0, ... αn }, does there exist a positive Borel measure μ on the interval [0, 2π] such that -$$ -\alpha_k = \frac{1}{2 \pi}\int_0 ^{2 \pi} e^{-ikt}d \mu(t). -$$ - -In other words, an affirmative answer to the problem means that {α0, ... αn } are the first n + 1 Fourier coefficients of some positive Borel measure μ on [0, 2π]. - -The trigonometric moment problem is solvable, that is, {αk} is a sequence of Fourier coefficients, if and only if the (n + 1) × (n + 1) Toeplitz matrix - - - -A = - -\left(\begin{matrix} - -\alpha_0 & \alpha_1 & \cdots & \alpha_n \\ - -\bar{\alpha_1} & \alpha_0 & \cdots & \alpha_{n-1} \\ - -\vdots & \vdots & \ddots & \vdots \\ - -\bar{\alpha_n} & \bar{\alpha_{n-1}} & \cdots & \alpha_0 \\ - -\end{matrix}\right) - -is positive semidefinite. - -The "only if" part of the claim can be verified by a direct calculation. - -We sketch an argument for the converse.
The positive semidefinite matrix A defines a sesquilinear product on $\mathbb{C}^{n+1}$, resulting in a Hilbert space -$$ -(\mathcal{H}, \langle , \rangle) -$$ - -of dimension at most n + 1, a typical element of which is an equivalence class denoted by [f]. The Toeplitz structure of A means that a "truncated" shift is a partial isometry on $\mathcal{H}$. More specifically, let { e_1, ... e_{n+1} } be the standard basis of $\mathbb{C}^{n+1}$. Let $\mathcal{E}$ be the subspace generated by { [e_1], ... [e_n] } and $\mathcal{F}$ be the subspace generated by { [e_2], ... [e_{n+1}] }. Define an operator -$$ -V: \mathcal{E} \rightarrow \mathcal{F} -$$ - -by -$$ -V[e_k] = [e_{k+1}] \quad \mbox{for} \quad k = 1 \ldots n. -$$ - -Since -$$ -\langle V[e_j], V[e_k] \rangle = \langle [e_{j+1}], [e_{k+1}] \rangle = A_{j+1, k+1} = A_{j, k} = \langle [e_{j}], [e_{k}] \rangle, -$$ - -V can be extended to a partial isometry acting on all of $\mathcal{H}$. Take a minimal unitary extension U of V, on a possibly larger space (this always exists). According to the spectral theorem, there exists a Borel measure m on the unit circle T such that for every integer k -$$ -\langle (U^*)^k [ e_ {n+1} ], [ e_ {n+1} ] \rangle = \int_{\mathbf{T}} z^{k} dm . -$$ - -For k = 0,...,n, the left hand side is - - - -\langle (U^*)^k [ e_ {n+1} ], [ e_ {n+1} ] \rangle - -= \langle (V^*)^k [ e_ {n+1} ], [ e_{n+1} ] \rangle - -= \langle [e_{n+1-k}], [ e_ {n+1} ] \rangle - -= A_{n+1, n+1-k} - -= \bar{\alpha_k}. - - - -So - - - -\int_{\mathbf{T}} z^{-k} dm - -= \int_{\mathbf{T}} \bar{z}^k dm - -= \alpha_k. - - - -Finally, parametrizing the unit circle T by $e^{it}$ on [0, 2π] gives -$$ -\frac{1}{2 \pi} \int_0 ^{2 \pi} e^{-ikt} d\mu(t) = \alpha_k -$$ - -for some suitable measure μ. - -The above discussion shows that the trigonometric moment problem has infinitely many solutions if the Toeplitz matrix A is invertible. In that case, the solutions to the problem are in bijective correspondence with minimal unitary extensions of the partial isometry V. diff --git a/wiki/wikipedia/3735.txt b/wiki/wikipedia/3735.txt deleted file mode 100644 index 0761cef6eb2bd852869bc2acd114605cee72b166..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3735.txt +++ /dev/null @@ -1,146 +0,0 @@ -In transcendental number theory, a mathematical discipline, Baker's theorem gives a lower bound for the absolute value of linear combinations of logarithms of algebraic numbers. The result, proved by Alan Baker, subsumed many earlier results in transcendental number theory and solved a problem posed by Alexander Gelfond nearly fifteen years earlier. - -Baker used this to prove the transcendence of many numbers, to derive effective bounds for the solutions of some Diophantine equations, and to solve the class number problem of finding all imaginary quadratic fields with class number 1. - -To simplify notation, let $\mathbb{L}$ be the set of logarithms to the base e of nonzero algebraic numbers, that is -$$ -\mathbb{L} = \left \{\lambda \in \Complex : \ e^\lambda \in \overline{\Q} \right \}, -$$ - -where $\Complex$ denotes the set of complex numbers and $\overline{\Q}$ denotes the algebraic numbers (the algebraic completion of the rational numbers $\Q$). Using this notation, several results in transcendental number theory become much easier to state. For example the Hermite–Lindemann theorem becomes the statement that any nonzero element of $\mathbb{L}$ is transcendental. - -In 1934, Alexander Gelfond and Theodor Schneider independently proved the Gelfond–Schneider theorem.
This result is usually stated as: if a is algebraic and not equal to 0 or 1, and if b is algebraic and irrational, then $a^b$ is transcendental. Note that this includes all determinations of $a^b$, which in most cases constitutes infinitely many numbers. Equivalently, though, it says that if $\lambda_1, \lambda_2 \in \mathbb{L}$ are linearly independent over the rational numbers, then they are linearly independent over the algebraic numbers. So if $\lambda_1, \lambda_2 \in \mathbb{L}$ and λ2 isn't zero, then the quotient λ1/λ2 is either a rational number or transcendental. It cannot be an algebraic irrational number like $\sqrt{2}$. - -Although proving this result of "rational linear independence implies algebraic linear independence" for two elements of $\mathbb{L}$ was sufficient for his and Schneider's result, Gelfond felt that it was crucial to extend this result to arbitrarily many elements of $\mathbb{L}.$ - -This problem was solved fourteen years later by Alan Baker and has since had numerous applications not only to transcendence theory but in algebraic number theory and the study of Diophantine equations as well. Baker received the Fields medal in 1970 for both this work and his applications of it to Diophantine equations. - -With the above notation, Baker's theorem is a nonhomogeneous generalization of the Gelfond–Schneider theorem. Specifically it states: - -If $\lambda_1, \ldots, \lambda_n \in \mathbb{L}$ are linearly independent over the rational numbers, then for any algebraic numbers $\beta_0, \ldots, \beta_n,$ not all zero, we have - -\left|\beta_0+\beta_1\lambda_1+\cdots+\beta_n \lambda_n\right| > H^{-C} - -where H is the maximum of the heights of $\beta_i$ and C is an effectively computable number depending on n, $\lambda_i$ and the maximum d of the degrees of $\beta_i.$ (If β0 is nonzero then the assumption that $\lambda_i$ are linearly independent can be dropped.) In particular this number is nonzero, so 1 and $\lambda_i$ are linearly independent over the algebraic numbers. - -Just as the Gelfond–Schneider theorem is equivalent to the statement about the transcendence of numbers of the form $a^b$, so too Baker's theorem implies the transcendence of numbers of the form -$$ -a_1^{b_1}\cdots a_n^{b_n}, -$$ - -where the bi are all algebraic, irrational, and 1, b1, …, bn are linearly independent over the rationals, and the ai are all algebraic and not 0 or 1. - -Baker also gave several versions with explicit constants. For example, if $\exp(\lambda_j) = \alpha_j$ has height at most $A_j \ge 4$ and all the numbers $\beta_j$ have height at most $B \ge 4$ then the linear form -$$ -\Lambda=\beta_0+\beta_1\lambda_1+\cdots+\beta_n\lambda_n -$$ - -is either 0 or satisfies -$$ - \log|\Lambda|>-(16nd)^{200n}\Omega \left (\log\Omega-\log\log A_n \right ) (\log B+\log\Omega) -$$ - -where -$$ -\Omega=\log A_1 \log A_2 \cdots \log A_n -$$ - -and the field generated by $\alpha_i$ and $\beta_i$ over the rationals has degree at most d. In the special case when β0 = 0 and all the $\beta_j$ are rational integers, the rightmost term log Ω can be deleted. - -An explicit result by Baker and Wüstholz for a linear form Λ with integer coefficients yields a lower bound of the form -$$ -\log|\Lambda| >-C h(\alpha_1)h(\alpha_2)\cdots h(\alpha_n) \log \left (\max \left \{|\beta_1|, \ldots, |\beta_n| \right \} \right ), -$$ - -where -$$ -C = 18(n + 1)!
n^{n+1} (32d)^{n+2}\log(2nd), -$$ - -and d is the degree of the number field generated by the $\alpha_i.$ - -Baker's proof of his theorem is an extension of the argument given by Gel'fond. The main ideas of the proof are illustrated by the proof of the following qualitative version of the theorem of Baker described by Serre: - -If the numbers $2\pi i, \log a_1, \ldots, \log a_n$ are linearly independent over the rational numbers, for nonzero algebraic numbers $a_1, \ldots, a_n,$ then they are linearly independent over the algebraic numbers. - -The precise quantitative version of Baker's theory can be proved by replacing the conditions that things are zero by conditions that things are sufficiently small throughout the proof. - -The main idea of Baker's proof is to construct an auxiliary function $\Phi(z_1,\ldots,z_{n-1})$ of several variables that vanishes to high order at many points of the form $z_1 = \cdots = z_{n-1} = l,$ then repeatedly show that it vanishes to lower order at even more points of this form. Finally the fact that it vanishes (to order 1) at enough points of this form implies using Vandermonde determinants that there is a multiplicative relation between the numbers ai. - -Assume there is a relation -$$ -\beta_1\log \alpha_1+\cdots+\beta_{n-1}\log\alpha_{n-1}=\log \alpha_n -$$ - -for algebraic numbers α1, …, αn, β1, …, βn−1. The function Φ is of the form -$$ -\Phi(z_1,\ldots,z_{n-1}) = \sum_{\lambda_1=0}^L\cdots \sum_{\lambda_n=0}^L p(\lambda_1, \ldots,\lambda_n) \alpha_1^{(\lambda_1+\lambda_n\beta_1)z_1} \cdots \alpha_{n-1}^{(\lambda_{n-1}+\lambda_n\beta_{n-1})z_{n-1}} -$$ - -The integer coefficients p are chosen so that they are not all zero and Φ and its derivatives of order at most some constant M vanish at $z_1 = \cdots = z_{n-1} = l,$ for integers $l$ with $0 \leq l \leq h$ for some constant h. This is possible because these conditions are homogeneous linear equations in the coefficients p, which have a non-zero solution provided the number of unknown variables p is larger than the number of equations. The linear relation between the logs of the α's is needed to cut down the number of linear equations that have to be satisfied. Moreover, using Siegel's lemma, the sizes of the coefficients p can be chosen to be not too large. The constants L, h, and M have to be carefully adjusted so that the next part of the proof works, and are subject to some constraints, which are roughly: - -*L must be somewhat smaller than M to make the argument about extra zeros below work. - -*A small power of h must be larger than L to make the final step of the proof work. - -*$L^n$ must be larger than about $M^{n-1}h$ in order that it is possible to solve for the coefficients p. - -The constraints can be satisfied by taking h to be sufficiently large, M to be some fixed power of h, and L to be a slightly smaller power of h. Baker took M to be about $h^2$ and L to be about $h^{2-1/2n}$. - -The linear relation between the logarithms of the α's is used to reduce L slightly; roughly speaking, without it the condition $L^n$ must be larger than about $M^{n-1}h$ would become $L^n$ must be larger than about $M^n h$, which is incompatible with the condition that L is somewhat smaller than M. - -The next step is to show that Φ vanishes to slightly smaller order at many more points of the form $z_1 = \cdots = z_{n-1} = l$ for integers l.
This idea was Baker's key innovation: previous work on this problem involved trying to increase the number of derivatives that vanish while keeping the number of points fixed, which does not seem to work in the multivariable case. This is done by combining two ideas. First, one shows that the derivatives at these points are quite small, by using the fact that many derivatives of Φ vanish at many nearby points. Then one shows that derivatives of Φ at this point are given by algebraic integers times known constants. If an algebraic integer has all its conjugates bounded by a known constant, then it cannot be too small unless it is zero, because the product of all conjugates of a nonzero algebraic integer is at least 1 in absolute value. Combining these two ideas implies that Φ vanishes to slightly smaller order at many more points $z_1 = \cdots = z_{n-1} = l.$ This part of the argument requires that Φ does not increase too rapidly; the growth of Φ depends on the size of L, so requires a bound on the size of L, which turns out to be roughly that L must be somewhat smaller than M. More precisely, Baker showed that since Φ vanishes to order M at h consecutive integers, it also vanishes to order M/2 at $h^{1+1/8n}$ consecutive integers 1, 2, 3, …. Repeating this argument J times shows that Φ vanishes to order $M/2^J$ at $h^{1+J/8n}$ points, provided that h is sufficiently large and L is somewhat smaller than $M/2^J$. - -One then takes J large enough that: -$$ -h^{1 +\frac{J}{8n}} > (L+1)^n. -$$ - -(J larger than about 16n will do if $h^2 > L$) so that: -$$ -\forall l \in \left \{1, 2, \ldots, (L+1)^n \right \}: \qquad \Phi(l, \ldots, l ) = 0. -$$ - -By definition $\Phi(l, \ldots, l) =0$ can be written as: -$$ -\sum_{\lambda_1=0}^L \cdots \sum_{\lambda_n=0}^L p(\lambda_1,\ldots,\lambda_n) \alpha_1^{\lambda_1 l} \cdots \alpha_n^{\lambda_n l} = 0. -$$ - -Therefore as l varies we have a system of $(L+1)^n$ homogeneous linear equations in the $(L+1)^n$ unknowns which by assumption has a non-zero solution, which in turn implies the determinant of the matrix of coefficients must vanish. However, this matrix is a Vandermonde matrix and the formula for the determinant of such a matrix forces an equality between two of the values: -$$ -\alpha_1^{\lambda_1} \cdots \alpha_n^{\lambda_n} -$$ - -so $\alpha_1, \ldots, \alpha_n$ are multiplicatively dependent. Taking logs shows that $2\pi i, \log \alpha_1, \ldots, \log \alpha_n$ are linearly dependent over the rationals. - -Baker in fact gave a quantitative version of the theorem, giving effective lower bounds for the linear form in logarithms. This is done by a similar argument, except statements about something being zero are replaced by statements giving a small upper bound for it, and so on. - -Baker showed how to eliminate the assumption about 2πi in the theorem. This requires a modification of the final step of the proof. One shows that many derivatives of the function $\phi(z) = \Phi(z, \ldots, z) $ vanish at z = 0, by an argument similar to the one above. But these equations for the first $(L+1)^n$ derivatives again give a homogeneous set of linear equations for the coefficients p, so the determinant is zero, and is again a Vandermonde determinant, this time for the numbers $\lambda_1 \log \alpha_1 + \cdots + \lambda_n \log \alpha_n$. So two of these expressions must be the same, which shows that $\log \alpha_1, \ldots, \log \alpha_n$ are linearly dependent over the rationals.
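Both conclusions of this argument can be illustrated numerically. The sketch below is illustrative only and is not part of Baker's proof; it assumes the numpy and mpmath libraries, and the helper name is ours. It first uses PSLQ to search for an integer relation among log 2, log 3, log 5 (their rational linear independence already follows from unique factorization; Baker's theorem strengthens it to linear independence over the algebraic numbers with effective bounds), and then checks the Vandermonde determinant identity driving the final step above.

```
# Two numerical illustrations related to Baker's theorem (sketch only).
import itertools
import numpy as np
from mpmath import mp, log, pslq

# 1. PSLQ finds no integer relation a*log2 + b*log3 + c*log5 = 0 with
#    coefficients up to 10^6, so it returns None.
mp.dps = 60
print(pslq([log(2), log(3), log(5)], maxcoeff=10**6))

# 2. det V(x_1,...,x_n) = prod_{i<j} (x_j - x_i), so a Vandermonde
#    determinant vanishes exactly when two nodes coincide -- the step
#    forcing two values alpha_1^{lambda_1}...alpha_n^{lambda_n} to agree.
def vandermonde_det(xs):
    return np.linalg.det(np.vander(xs, increasing=True))

xs = [2.0, 3.0, 5.0, 7.0]
print(vandermonde_det(np.array(xs)))                                # ~240.0
print(np.prod([b - a for a, b in itertools.combinations(xs, 2)]))   # 240.0
print(vandermonde_det(np.array([2.0, 3.0, 3.0, 7.0])))              # ~0.0
```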
- -Baker gave an inhomogeneous version of the theorem, showing that -$$ -\beta_0 + \beta_1\log \alpha_1+\cdots+\beta_{n}\log\alpha_{n} -$$ - -is nonzero for nonzero algebraic numbers β0, …, βn, α1, …, αn, and moreover giving an effective lower bound for it. The proof is similar to the homogeneous case: one can assume that -$$ -\beta_0+\beta_1\log \alpha_1+\cdots+\beta_{n-1}\log\alpha_{n-1}=\log \alpha_n -$$ - -and one inserts an extra variable z0 into Φ as follows: -$$ -\Phi(z_0,\ldots,z_{n-1}) = \sum_{\lambda_0=0}^L \cdots \sum_{\lambda_n=0}^L p(\lambda_0, \ldots,\lambda_n) z_0^{\lambda_0} e^{\lambda_n\beta_0z_0} \alpha_1^{(\lambda_1+\lambda_n\beta_1)z_1}\cdots\alpha_{n-1}^{(\lambda_{n-1}+\lambda_n\beta_{n-1})z_{n-1}} -$$ - -As mentioned above, the theorem includes numerous earlier transcendence results concerning the exponential function, such as the Hermite–Lindemann theorem and Gelfond–Schneider theorem. It is not quite as encompassing as the still unproven Schanuel's conjecture, and does not imply the six exponentials theorem nor, clearly, the still open four exponentials conjecture. - -The main reason Gelfond desired an extension of his result was not just for a slew of new transcendental numbers. In 1935 he used the tools he had developed to prove the Gelfond–Schneider theorem to derive a lower bound for the quantity -$$ -|\beta_1\lambda_1+\beta_2\lambda_2| -$$ - -where β1 and β2 are algebraic and λ1 and λ2 are in $\mathbb{L}$. Baker's proof gave lower bounds for quantities like the above but with arbitrarily many terms, and he could use these bounds to develop effective means of tackling Diophantine equations and to solve Gauss' class number problem. - -Baker's theorem grants us the linear independence over the algebraic numbers of logarithms of algebraic numbers. This is weaker than proving their algebraic independence. So far no progress has been made on this problem at all. It has been conjectured that if λ1, …, λn are elements of $\mathbb{L}$ that are linearly independent over the rational numbers, then they are algebraically independent too. This is a special case of Schanuel's conjecture, but so far it remains to be proved that there even exist two algebraic numbers whose logarithms are algebraically independent. Indeed, Baker's theorem rules out linear relations between logarithms of algebraic numbers unless there are trivial reasons for them; the next most simple case, that of ruling out homogeneous quadratic relations, is the still open four exponentials conjecture. - -Similarly, extending the result to algebraic independence but in the p-adic setting, and using the p-adic logarithm function, remains an open problem. It is known that proving algebraic independence of linearly independent p-adic logarithms of algebraic p-adic numbers would prove Leopoldt's conjecture on the p-adic ranks of units of a number field. diff --git a/wiki/wikipedia/3736.txt b/wiki/wikipedia/3736.txt deleted file mode 100644 index df405d94b381ade48b74726d88566c218443fd60..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3736.txt +++ /dev/null @@ -1,61 +0,0 @@ -In mathematics, the Clark–Ocone theorem (also known as the Clark–Ocone–Haussmann theorem or formula) is a theorem of stochastic analysis. It expresses the value of some function F defined on the classical Wiener space of continuous paths starting at the origin as the sum of its mean value and an Itô integral with respect to that path. It is named after the contributions of mathematicians J.M.C. 
Clark (1970), Daniel Ocone (1984) and U.G. Haussmann (1978). - -Let C0([0, T]; R) (or simply C0 for short) be classical Wiener space with Wiener measure γ. Let F : C0 → R be a BC1 function, i.e. F is bounded and Fréchet differentiable with bounded derivative DF : C0 → Lin(C0; R). Then -$$ -F(\sigma) = \int_{C_{0}} F(p) \mathrm{d} \gamma(p) + \int_{0}^{T} \mathbf{E} \left[ \left. \frac{\partial}{\partial t} \nabla_{H} F (-) \right| \Sigma_{t} \right] (\sigma) \mathrm{d} \sigma_{t}. -$$ - -In the above - -* F(σ) is the value of the function F on some specific path of interest, σ; - -* the first integral, -$$ -\int_{C_{0}} F(p) \mathrm{d} \gamma(p) = \mathbf{E}[F] -$$ - -is the expected value of F over the whole of Wiener space C0; - -* the second integral, -$$ -\int_0^T \cdots \mathrm{d} \sigma (t) -$$ - -is an Itô integral; - -* Σ is the natural filtration of Brownian motion B : [0, T] × Ω → R: Σt is the smallest σ-algebra containing all $B_s^{-1}(A)$ for times 0 ≤ s ≤ t and Borel sets A ⊆ R; - -* E[·|Σt] denotes conditional expectation with respect to the sigma algebra Σt; - -* $\partial/\partial t$ denotes differentiation with respect to time t; $\nabla_H$ denotes the H-gradient; hence, $\partial/\partial t \nabla_H$ is the Malliavin derivative. - -More generally, the conclusion holds for any F in L2(C0; R) that is differentiable in the sense of Malliavin. - -The Clark–Ocone theorem gives rise to an integration by parts formula on classical Wiener space, and a way of writing Itô integrals as divergences: - -Let B be a standard Brownian motion, and let $L_0^{2,1}$ be the Cameron–Martin space for C0 (see abstract Wiener space). Let V : C0 → $L_0^{2,1}$ be a vector field such that -$$ -\dot{V} = \frac{\partial V}{\partial t} : [0, T] \times C_{0} \to \mathbb{R} -$$ - -is in L2(B) (i.e. is Itô integrable, and hence is an adapted process). Let F : C0 → R be BC1 as above. Then -$$ -\int_{C_{0}} \mathrm{D} F (\sigma) (V(\sigma)) \mathrm{d} \gamma (\sigma) = \int_{C_{0}} F (\sigma) \left( \int_{0}^{T} \dot{V}_{t} (\sigma) \mathrm{d} \sigma_{t} \right) \mathrm{d} \gamma (\sigma), -$$ - -i.e. -$$ -\int_{C_{0}} \left\langle \nabla_{H} F (\sigma), V (\sigma) \right\rangle_{L_{0}^{2, 1}} \mathrm{d} \gamma (\sigma) = - \int_{C_{0}} F (\sigma) \operatorname{div}(V) (\sigma) \mathrm{d} \gamma (\sigma) -$$ - -or, writing the integrals over C0 as expectations: -$$ -\mathbb{E} \big[ \langle \nabla_{H} F, V \rangle \big] = - \mathbb{E} \big[ F \operatorname{div} V \big], -$$ - -where the "divergence" div(V) : C0 → R is defined by -$$ -\operatorname{div} (V) (\sigma) := - \int_{0}^{T} \dot{V}_{t} (\sigma) \mathrm{d} \sigma_{t}. -$$ - -The interpretation of stochastic integrals as divergences leads to concepts such as the Skorokhod integral and the tools of the Malliavin calculus. diff --git a/wiki/wikipedia/3737.txt b/wiki/wikipedia/3737.txt deleted file mode 100644 index 15127070e975acae1c4bf73074298934733fa702..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3737.txt +++ /dev/null @@ -1,9 +0,0 @@ -In algebraic geometry and commutative algebra, the theorems of generic flatness and generic freeness state that under certain hypotheses, a sheaf of modules on a scheme is flat or free. They are due to Alexander Grothendieck. - -Generic flatness states that if Y is an integral locally noetherian scheme, u : X → Y is a finite type morphism of schemes, and F is a coherent $O_X$-module, then there is a non-empty open subset U of Y such that the restriction of F to $u^{-1}(U)$ is flat over U. - -Because Y is integral, U is a dense open subset of Y.
This can be applied to deduce a variant of generic flatness which is true when the base is not integral. Suppose that S is a noetherian scheme, u : X → S is a finite type morphism, and F is a coherent OX module. Then there exists a partition of S into locally closed subsets S1, ..., Sn with the following property: Give each Si its reduced scheme structure, denote by Xi the fiber product X ×S Si, and denote by Fi the restriction F ⊗OS OSi; then each Fi is flat. - -Generic flatness is a consequence of the generic freeness lemma. Generic freeness states that if A is a noetherian integral domain, B is a finite type A-algebra, and M is a finite type B-module, then there exists a non-zero element f of A such that Mf is a free Af-module. Generic freeness can be extended to the graded situation: If B is graded by the natural numbers, A acts in degree zero, and M is a graded B-module, then f may be chosen such that each graded component of Mf is free. - -Generic freeness is proved using Grothendieck's technique of dévissage. Another version of generic freeness can be proved using Noether's normalization lemma. diff --git a/wiki/wikipedia/3738.txt b/wiki/wikipedia/3738.txt deleted file mode 100644 index 7ca4b7d700745b39d660d53776067d23c43c62ce..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3738.txt +++ /dev/null @@ -1,169 +0,0 @@ -This page will attempt to list examples in mathematics. To qualify for inclusion, an article should be about a mathematical object with a fair amount of concreteness. Usually a definition of an abstract concept, a theorem, or a proof would not be an "example" as the term should be understood here (an elegant proof of an isolated but particularly striking fact, as opposed to a proof of a general theorem, could perhaps be considered an "example"). The discussion page for list of mathematical topics has some comments on this. Eventually this page may have its own discussion page. This page links to itself in order that edits to this page will be included among related changes when the user clicks on that button. - -The concrete example within the article titled Rao-Blackwell theorem is perhaps one of the best ways for a probabilist ignorant of statistical inference to get a quick impression of the flavor of that subject. - -*Alexander horned sphere - -*All horses are the same color - -*Cantor function - -*Cantor set - -*Checking if a coin is biased - -*Concrete illustration of the central limit theorem - -*Differential equations of mathematical physics - -*Dirichlet function - -*Discontinuous linear map - -*Efron's non-transitive dice - -*Example of a game without a value - -*Examples of contour integration - -*Examples of differential equations - -*Examples of generating functions - -*Examples of groups - -**List of the 230 crystallographic 3D space groups - -*Examples of Markov chains - -*Examples of vector spaces - -*Fano plane - -*Frieze group - -*Gray graph - -*Hall–Janko graph - -*Higman–Sims graph - -*Hilbert matrix - -*Illustration of a low-discrepancy sequence - -*Illustration of the central limit theorem - -*An infinitely differentiable function that is not analytic - -*Leech lattice - -*Lewy's example on PDEs - -*List of finite simple groups - -*Long line - -*Normally distributed and uncorrelated does not imply independent - -*Pairwise independence of random variables need not imply mutual independence. 
- -*Petersen graph - -*Sierpinski space - -*Simple example of Azuma's inequality for coin flips - -*Proof that 22/7 exceeds π - -*Solenoid (mathematics) - -*Sorgenfrey plane - -*Stein's example - -*Three cards and a top hat - -*Topologist's sine curve - -*Tsirelson space - -*Tutte eight-cage - -*Weierstrass function - -*Wilkinson's polynomial - -*Wallpaper group - -*Uses of trigonometry (The "examples" in that article are not mathematical objects, i.e., numbers, functions, equations, sets, etc., but applications of trigonometry or scientific fields to which trigonometry is applied.) - -*List of algebraic surfaces - -*List of curves - -*List of complexity classes - -*List of examples in general topology - -*List of finite simple groups - -*List of Fourier-related transforms - -*List of mathematical functions - -*List of knots - -**List of mathematical knots and links - -*List of manifolds - -*List of mathematical shapes - -*List of matrices - -*List of numbers - -*List of polygons, polyhedra and polytopes - -*List of prime numbers -not merely a numerical table, but a list of various kinds of prime numbers, each with a table - -*List of regular polytopes - -*List of surfaces - -*List of small groups - -*Table of Lie groups - -See also list of finite simple groups. - -*Baby Monster group - -*Conway group - -*Fischer groups - -*Harada–Norton group - -*Held group - -*Higman–Sims group - -*Janko groups - -*Lyons group - -*The Mathieu groups - -*McLaughlin group - -*Monster group - -*O'Nan group - -*Rudvalis group - -*Suzuki sporadic group - -*Thompson group diff --git a/wiki/wikipedia/3739.txt b/wiki/wikipedia/3739.txt deleted file mode 100644 index f9a9fc11db109edbf10e283a5167d6dbe11b6771..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3739.txt +++ /dev/null @@ -1,106 +0,0 @@ -In mathematics, the Marcinkiewicz interpolation theorem, discovered by Józef Marcinkiewicz, is a result bounding the norms of non-linear operators acting on Lp spaces. - -Marcinkiewicz' theorem is similar to the Riesz–Thorin theorem about linear operators, but also applies to non-linear operators. - -Let f be a measurable function with real or complex values, defined on a measure space (X, F, ω). The distribution function of f is defined by -$$ -\lambda_f(t) = \omega\left\{x\in X\mid |f(x)| > t\right\}. -$$ - -Then f is called weak $L^1$ if there exists a constant C such that the distribution function of f satisfies the following inequality for all t > 0: -$$ -\lambda_f(t)\leq \frac{C}{t}. -$$ - -The smallest constant C in the inequality above is called the weak $L^1$ norm and is usually denoted by $\|f\|_{1,w}$ or $\|f\|_{1,\infty}.$ Similarly the space is usually denoted by L1,w or L1,∞. - -(Note: This terminology is a bit misleading since the weak norm does not satisfy the triangle inequality, as one can see by considering the sum of the functions on $ (0,1) $ given by $ 1/x $ and $ 1/(1-x) $, which has norm 4 not 2.) - -Any $L^1$ function belongs to L1,w and in addition one has the inequality -$$ -\|f\|_{1,w}\leq \|f\|_1. -$$ - -This is nothing but Markov's inequality (also known as Chebyshev's inequality). The converse is not true. For example, the function 1/x belongs to L1,w but not to L1. - -Similarly, one may define the weak $L^p$ space as the space of all functions f such that $|f|^p$ belongs to L1,w, and the weak $L^p$ norm using -$$ -\|f\|_{p,w}= \left \||f|^p \right \|_{1,w}^{\frac{1}{p}}.
-$$ - -More directly, the Lp,w norm is defined as the best constant C in the inequality -$$ -\lambda_f(t) \le \frac{C^p}{t^p} -$$ - -for all t > 0. - -Informally, Marcinkiewicz's theorem is - -Theorem. Let T be a bounded linear operator from $L^p$ to $L^{p,w}$ and at the same time from $L^q$ to $L^{q,w}$. Then T is also a bounded operator from $L^r$ to $L^r$ for any r between p and q. - -In other words, even if you only require weak boundedness on the extremes p and q, you still get regular boundedness inside. To make this more formal, one has to explain that T is bounded only on a dense subset and can be completed. See Riesz-Thorin theorem for these details. - -Where Marcinkiewicz's theorem is weaker than the Riesz-Thorin theorem is in the estimates of the norm. The theorem gives bounds for the $L^r$ norm of T but this bound increases to infinity as r converges to either p or q. Specifically , suppose that -$$ -\|Tf\|_{p,w} \le N_p\|f\|_p, -$$ -$$ -\|Tf\|_{q,w} \le N_q\|f\|_q, -$$ - -so that the operator norm of T from Lp to Lp,w is at most Np, and the operator norm of T from Lq to Lq,w is at most Nq. Then the following interpolation inequality holds for all r between p and q and all f ∈ Lr: -$$ -\|Tf\|_r\le \gamma N_p^\delta N_q^{1-\delta}\|f\|_r -$$ - -where -$$ -\delta=\frac{p(q-r)}{r(q-p)} -$$ - -and -$$ -\gamma=2\left(\frac{r(q-p)}{(r-p)(q-r)}\right)^{1/r}. -$$ - -The constants δ and γ can also be given for q = ∞ by passing to the limit. - -A version of the theorem also holds more generally if T is only assumed to be a quasilinear operator in the following sense: there exists a constant C > 0 such that T satisfies -$$ -|T(f+g)(x)| \le C(|Tf(x)|+|Tg(x)|) -$$ - -for almost every x. The theorem holds precisely as stated, except with γ replaced by -$$ -\gamma=2C\left(\frac{r(q-p)}{(r-p)(q-r)}\right)^{1/r}. -$$ - -An operator T (possibly quasilinear) satisfying an estimate of the form -$$ -\|Tf\|_{q,w}\le C\|f\|_p -$$ - -is said to be of weak type (p,q). An operator is simply of type (p,q) if T is a bounded transformation from Lp to Lq: -$$ -\|Tf\|_q\le C\|f\|_p. -$$ - -A more general formulation of the interpolation theorem is as follows: - -* If T is a quasilinear operator of weak type (p0, q0) and of weak type (p1, q1) where q0 ≠ q1, then for each θ ∈ (0,1), T is of type (p,q), for p and q with p ≤ q of the form -$$ -\frac{1}{p} = \frac{1-\theta}{p_0}+\frac{\theta}{p_1},\quad \frac{1}{q} = \frac{1-\theta}{q_0} + \frac{\theta}{q_1}. -$$ - -The latter formulation follows from the former through an application of Hölder's inequality and a duality argument. - -A famous application example is the Hilbert transform. Viewed as a multiplier, the Hilbert transform of a function f can be computed by first taking the Fourier transform of f, then multiplying by the sign function, and finally applying the inverse Fourier transform. - -Hence Parseval's theorem easily shows that the Hilbert transform is bounded from $L^2$ to $L^2$. A much less obvious fact is that it is bounded from $L^1$ to $L^{1,w}$. Hence Marcinkiewicz's theorem shows that it is bounded from $L^p$ to $L^p$ for any 1 < p < 2. Duality arguments show that it is also bounded for 2 < p < ∞. In fact, the Hilbert transform is really unbounded for p equal to 1 or ∞. - -Another famous example is the Hardy–Littlewood maximal function, which is only sublinear operator rather than linear. 
While $L^p$ to $L^p$ bounds can be derived immediately from the $L^1$ to weak $L^1$ estimate by a clever change of variables, Marcinkiewicz interpolation is a more intuitive approach. Since the Hardy–Littlewood maximal function is trivially bounded from $L^\infty$ to $L^\infty$, strong boundedness for all $p>1$ follows immediately from the weak (1,1) estimate and interpolation. The weak (1,1) estimate can be obtained from the Vitali covering lemma. - -The theorem was first announced by Marcinkiewicz, who showed this result to Antoni Zygmund shortly before he died in World War II. The theorem was almost forgotten by Zygmund, and was absent from his original works on the theory of singular integral operators. Later Zygmund realized that Marcinkiewicz's result could greatly simplify his work, at which time he published his former student's theorem together with a generalization of his own. - -In 1964 Richard A. Hunt and Guido Weiss published a new proof of the Marcinkiewicz interpolation theorem. diff --git a/wiki/wikipedia/374.txt b/wiki/wikipedia/374.txt deleted file mode 100644 index 6f4701f5276d3daa0f621ba51bc194f872593569..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/374.txt +++ /dev/null @@ -1,69 +0,0 @@ -In number theory, the sum of the first n cubes is the square of the nth triangular number. That is, -$$ -1^3+2^3+3^3+\cdots+n^3 = \left(1+2+3+\cdots+n\right)^2. -$$ - -The same equation may be written more compactly using the mathematical notation for summation: -$$ -\sum_{k=1}^n k^3 = \bigg(\sum_{k=1}^n k\bigg)^2. -$$ - -This identity is sometimes called Nicomachus's theorem, after Nicomachus of Gerasa (c. 60 – c. 120 CE). - -Nicomachus, at the end of Chapter 20 of his Introduction to Arithmetic, pointed out that if one writes a list of the odd numbers, the first is the cube of 1, the sum of the next two is the cube of 2, the sum of the next three is the cube of 3, and so on. He does not go further than this, but from this it follows that the sum of the first n cubes equals the sum of the first $n(n+1)/2$ odd numbers, that is, the odd numbers from 1 to $n(n+1)-1$. The average of these numbers is obviously $n(n+1)/2$, and there are $n(n+1)/2$ of them, so their sum is $\bigl(n(n+1)/2\bigr)^2.$ - -Many early mathematicians have studied and provided proofs of Nicomachus's theorem. Stroeker claims that "every student of number theory surely must have marveled at this miraculous fact". Pengelley finds references to the identity not only in the works of Nicomachus in what is now Jordan in the first century CE, but also in those of Aryabhata in India in the fifth century, and in those of Al-Karaji circa 1000 in Persia. Bressoud mentions several additional early mathematical works on this formula, by Al-Qabisi (tenth century Arabia), Gersonides (circa 1300 France), and Nilakantha Somayaji (circa 1500 India); he reproduces Nilakantha's visual proof. - -The sequence of squared triangular numbers is - -0, 1, 9, 36, 100, 225, 441, 784, 1296, 2025, 3025, 4356, 6084, 8281, ... . - -These numbers can be viewed as figurate numbers, a four-dimensional hyperpyramidal generalization of the triangular numbers and square pyramidal numbers. - -As Stein observes, these numbers also count the number of rectangles with horizontal and vertical sides formed in an n × n grid. For instance, the points of a 4 × 4 grid (or a square made up of three smaller squares on a side) can form 36 different rectangles, as the short computation below verifies.
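The rectangle-counting interpretation is easy to check by brute force. The following sketch (illustrative only; the function name is ours) counts rectangles by choosing two vertical and two horizontal grid lines and compares the count against the sum of cubes:

```
# Brute-force check that rectangles in an (n+1) x (n+1) grid of points
# are counted by the squared triangular number T_n^2 = 1^3 + ... + n^3.
from itertools import combinations

def rectangles(n):
    # a rectangle is a choice of two distinct vertical and two distinct
    # horizontal grid lines, so there are C(n+1, 2)^2 of them
    line_pairs = list(combinations(range(n + 1), 2))
    return len(line_pairs) ** 2

for n in range(1, 8):
    assert rectangles(n) == sum(k ** 3 for k in range(1, n + 1))
print(rectangles(3))  # 36, matching the 4 x 4 grid of points above
```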
The number of squares in a square grid is similarly counted by the square pyramidal numbers. - -The identity also admits a natural probabilistic interpretation as follows. Let X, Y, Z, W be four integers independently and uniformly chosen at random between 1 and n. Then, the probability that W is the largest of the four numbers equals the probability that Y is at least as large as X and that W is at least as large as Z. That is, $P[\max(X,Y,Z)\le W]=P[X\le Y\wedge Z\le W]$. For any particular value of W, the combinations of X, Y, and Z that make W largest form a cube 1 ≤ X, Y, Z ≤ n so (adding the size of this cube over all choices of W) the number of combinations of X, Y, Z, W for which W is largest is a sum of cubes, the left hand side of the Nicomachus identity. The sets of pairs (X,Y) with X ≤ Y and of pairs (Z,W) with Z ≤ W form isosceles right triangles, and the set counted by the right hand side of the equation of probabilities is the Cartesian product of these two triangles, so its size is the square of a triangular number on the right hand side of the Nicomachus identity. The probabilities themselves are respectively the left and right sides of the Nicomachus identity, normalized to make probabilities by dividing both sides by $n^4$. - -A particularly simple derivation proceeds by expanding each cube in the sum into a set of consecutive odd numbers. It begins with the identity - -n^3 = \underbrace{\left(n^2-n+1\right) + \left(n^2-n+1+2\right) + \left(n^2-n+1+4\right)+ \cdots + \left(n^2+n-1\right)}_{n \text{ consecutive odd numbers}}. - -That identity is related to triangular numbers $T_n$ in the following way: - -n^3 =\sum _{k=T_{n-1}+1}^{T_{n}} (2 k-1), - -and thus the summands forming $n^3$ start off just after those forming all previous values $1^3$ up to $(n-1)^3$. - -Applying this property, along with another well-known identity: - -n^2 = \sum_{k=1}^n (2k-1), - -produces the following derivation: - - - -\begin{align} - -\sum_{k=1}^n k^3 &= 1 + 8 + 27 + 64 + \cdots + n^3 \\ - -&= \underbrace{1}_{1^3} + \underbrace{3+5}_{2^3} + \underbrace{7 + 9 + 11}_{3^3} + \underbrace{13 + 15 + 17 + 19}_{4^3} + \cdots + \underbrace{\left(n^2-n+1\right) + \cdots + \left(n^2+n-1\right)}_{n^3} \\ - -&= \underbrace{\underbrace{\underbrace{\underbrace{1}_{1^2} + 3}_{2^2} + 5}_{3^2} + \cdots + \left(n^2 + n - 1\right)}_{\left( \frac{n^{2}+n}{2} \right)^{2}} \\ - -&= (1 + 2 + \cdots + n)^2 \\ - -&= \bigg(\sum_{k=1}^n k\bigg)^2. - -\end{align} - -Row obtains another proof by summing the numbers in a square multiplication table in two different ways. The sum of the $i$th row is $i$ times a triangular number, from which it follows that the sum of all the rows is the square of a triangular number. Alternatively, one can decompose the table into a sequence of nested gnomons, each consisting of the products in which the larger of the two terms is some fixed value. The sum within each gnomon is a cube, so the sum of the whole table is a sum of cubes. - -In the more recent mathematical literature, Edmonds provides a proof using summation by parts. Stein uses the rectangle-counting interpretation of these numbers to form a geometric proof of the identity; he observes that it may also be proved easily (but uninformatively) by induction, and states that Toeplitz provides "an interesting old Arabic proof". Kanim provides a purely visual proof, Benjamin provides two additional proofs, and Nelsen gives seven geometric proofs.
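Returning to the probabilistic interpretation above, the claimed equality of probabilities is simple to test empirically. The following Monte Carlo sketch (illustrative only) estimates both sides and compares them with the exact value $T_n^2/n^4$:

```
# Monte Carlo check: P[max(X,Y,Z) <= W] == P[X <= Y and Z <= W],
# both equal to T_n^2 / n^4 (= 0.3025 for n = 10).
import random

n, trials = 10, 200_000
hits_lhs = hits_rhs = 0
for _ in range(trials):
    x, y, z, w = (random.randint(1, n) for _ in range(4))
    hits_lhs += max(x, y, z) <= w
    hits_rhs += x <= y and z <= w
exact = (n * (n + 1) // 2) ** 2 / n ** 4
print(hits_lhs / trials, hits_rhs / trials, exact)
```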
- -A similar result to Nicomachus's theorem holds for all power sums, namely that odd power sums (sums of odd powers) are a polynomial in triangular numbers. - -These are called Faulhaber polynomials, of which the sum of cubes is the simplest and most elegant example. - -However, in no other case is one power sum a square of another. - -Stroeker studies more general conditions under which the sum of a consecutive sequence of cubes forms a square. Garrett and Warnaar study polynomial analogues of the square triangular number formula, in which series of polynomials add to the square of another polynomial. diff --git a/wiki/wikipedia/3740.txt b/wiki/wikipedia/3740.txt deleted file mode 100644 index 30c06f660ccc8c41e574115a85104db337156a83..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3740.txt +++ /dev/null @@ -1,11 +0,0 @@ -Free choice is a phenomenon in natural language where a disjunction appears to receive a conjunctive interpretation when it interacts with a modal operator. For example, the following English sentences can be interpreted to mean that the addressee can watch a movie and that they can also play video games, depending on their preference. - -# You can watch a movie or play video games. - -# You can watch a movie or you can play video games. - -Free choice inferences are a major topic of research in formal semantics and philosophical logic because they are not valid in classical systems of modal logic. If they were valid, then the semantics of natural language would validate the Free Choice Principle. - -# Free Choice Principle: $ (\Diamond P \lor \Diamond Q) \rightarrow (\Diamond P \land \Diamond Q) $ - -This principle is not valid in classical modal logic. Moreover, adding this principle to standard modal logics would allow one to conclude $\Diamond Q$ from $\Diamond P$, for any $P$ and $Q$. This observation is known as the Paradox of Free Choice. Some researchers have proposed ways of deriving free choice inferences as scalar implicatures which arise on the basis of classical lexical entries for disjunction and modality. Indefinite noun phrases give rise to a similar inference which is also referred to as "free choice", though researchers disagree as to whether it forms a natural class with disjunctive free choice. diff --git a/wiki/wikipedia/3741.txt b/wiki/wikipedia/3741.txt deleted file mode 100644 index c37c3f15189c1748f4a286bf3f809165b9c8c9e5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3741.txt +++ /dev/null @@ -1,37 +0,0 @@ -In mathematics, the Christ–Kiselev maximal inequality is a maximal inequality for filtrations, named for mathematicians Michael Christ and Alexander Kiselev. - -A continuous filtration of $(M,\mu)$ is a family of measurable sets $\{A_\alpha\}_{\alpha\in\mathbb{R}}$ such that - -# $A_\alpha\nearrow M$, $\bigcap_{\alpha\in\mathbb{R}}A_\alpha=\emptyset$, and $\mu(A_\beta\setminus A_\alpha)<\infty$ for all $\beta>\alpha$ (stratification) - -# $\lim_{\varepsilon\to0^+}\mu(A_{\alpha+\varepsilon}\setminus A_\alpha)=\lim_{\varepsilon\to0^+}\mu(A_\alpha\setminus A_{\alpha+\varepsilon})=0$ (continuity) - -For example, $\mathbb{R}=M$ with measure $\mu$ that has no pure points and -$$ -A_\alpha:=\begin{cases}\{|x|\le\alpha\},&\alpha>0, \\ \emptyset,&\alpha\le0. \end{cases} -$$ - -is a continuous filtration. - -Let $1\le p<q\le\infty$ and let $T:L^p(M,\mu)\to L^q(N,\nu)$ be a bounded linear operator. The Christ–Kiselev maximal inequality states that the associated maximal operator $T^*f:=\sup_\alpha\left|T(f\chi_{A_\alpha})\right|$, where $\chi_A$ denotes the indicator function of $A$ and the supremum is taken over a continuous filtration $\{A_\alpha\}$, is then also bounded from $L^p(M,\mu)$ to $L^q(N,\nu)$. An analogous statement holds for a discrete filtration such as $A_\alpha:=\begin{cases}\{|x|\le\alpha\},&\alpha>0\\\emptyset,&\alpha\le0\end{cases}$. - -The discrete version can be proved from the continuum version through constructing $T:L^p(\mathbb{R},dx)\to L^q(N,\nu)$.
- -The Christ–Kiselev maximal inequality has applications to the Fourier transform and convergence of Fourier series, as well as to the study of Schrödinger operators. diff --git a/wiki/wikipedia/3742.txt b/wiki/wikipedia/3742.txt deleted file mode 100644 index 2fc83be4de61f61a6bd2a208a8e4572073023579..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3742.txt +++ /dev/null @@ -1,100 +0,0 @@ -In mathematics, Parseval's theorem usually refers to the result that the Fourier transform is unitary; loosely, that the sum (or integral) of the square of a function is equal to the sum (or integral) of the square of its transform. It originates from a 1799 theorem about series by Marc-Antoine Parseval, which was later applied to the Fourier series. It is also known as Rayleigh's energy theorem, or Rayleigh's identity, after John William Strutt, Lord Rayleigh. - -Although the term "Parseval's theorem" is often used to describe the unitarity of any Fourier transform, especially in physics, the most general form of this property is more properly called the Plancherel theorem. - -Suppose that $A(x)$ and $B(x)$ are two complex-valued functions on $\mathbb{R}$ of period $2 \pi$ that are square integrable (with respect to the Lebesgue measure) over intervals of period length, with Fourier series -$$ -A(x)=\sum_{n=-\infty}^\infty a_ne^{inx} -$$ - -and
-$$ -B(x)=\sum_{n=-\infty}^\infty b_ne^{inx} -$$ - -respectively. Then -$$ -\sum_{n=-\infty}^\infty a_n\overline{b_n} = \frac{1}{2\pi} \int_{-\pi}^\pi A(x)\overline{B(x)} \mathrm{d}x, -$$ - -where $i$ is the imaginary unit and horizontal bars indicate complex conjugation. Substituting $A(x)$ and $\overline{B(x)}$: - - - -\begin{align} - -\sum_{n=-\infty}^\infty a_n\overline{b_n} - -&= \frac{1}{2\pi} \int_{-\pi}^\pi \left( \sum_{n=-\infty}^\infty a_ne^{inx} \right) \left( \sum_{n=-\infty}^\infty \overline{b_n}e^{-inx} \right) \mathrm{d}x \\[6pt] - -&= \frac{1}{2\pi} \int_{-\pi}^\pi \left(a_1e^{i1x} + a_2e^{i2x} + \cdots\right) \left(\overline{b_1}e^{-i1x} + \overline{b_2}e^{-i2x} + \cdots\right) \mathrm{d}x \\[6pt] - -&= \frac{1}{2\pi} \int_{-\pi}^\pi \left(a_1e^{i1x} \overline{b_1}e^{-i1x} + a_1e^{i1x} \overline{b_2}e^{-i2x} + a_2e^{i2x} \overline{b_1}e^{-i1x} + a_2e^{i2x} \overline{b_2}e^{-i2x} + \cdots \right) \mathrm{d}x \\[6pt] - -&= \frac{1}{2\pi} \int_{-\pi}^\pi \left(a_1 \overline{b_1} + a_1 \overline{b_2}e^{-ix} + a_2 \overline{b_1}e^{ix} + a_2 \overline{b_2} + \cdots\right) \mathrm{d}x - -\end{align} - - - -As is the case with the middle terms in this example, many terms will integrate to $0$ over a full period of length $2\pi$ (see harmonics): - - - -\begin{align} - -\sum_{n=-\infty}^\infty a_n\overline{b_n} &= \frac{1}{2\pi} \left[a_1 \overline{b_1} x + i a_1 \overline{b_2}e^{-ix} - i a_2 \overline{b_1}e^{ix} + a_2 \overline{b_2} x + \cdots\right] _{-\pi} ^{+\pi} \\[6pt] - -&= \frac{1}{2\pi} \left(2\pi a_1 \overline{b_1} + 0 + 0 + 2\pi a_2 \overline{b_2} + \cdots\right) \\[6pt] - -&= a_1 \overline{b_1} + a_2 \overline{b_2} + \cdots - -\end{align} - -More generally, given an abelian locally compact group G with Pontryagin dual $\hat{G}$, Parseval's theorem says the Pontryagin–Fourier transform is a unitary operator between Hilbert spaces $L^2(G)$ and $L^2(\hat{G})$ (with integration being against the appropriately scaled Haar measures on the two groups). When G is the unit circle T, $\hat{G}$ is the integers and this is the case discussed above. When G is the real line $\mathbb{R}$, $\hat{G}$ is also $\mathbb{R}$ and the unitary transform is the Fourier transform on the real line. When G is the cyclic group $\mathbb{Z}_n$, again it is self-dual and the Pontryagin–Fourier transform is what is called the discrete Fourier transform in applied contexts. - -Parseval's theorem can also be expressed as follows: - -Suppose $f(x)$ is a square-integrable function over $[-\pi, \pi]$ (i.e., $f(x)$ and $f^2(x)$ are integrable on that interval), with the Fourier series -$$ -f(x) \simeq \frac{a_0}{2} + \sum_{n=1}^{\infty} (a_n \cos(nx) + b_n \sin(nx)). -$$ - -Then -$$ -\frac{1}{\pi} \int_{-\pi}^{\pi} f^2(x) \mathrm{d}x = \frac{a_0^2}{2} + \sum_{n=1}^{\infty} \left(a_n^2 + b_n^2 \right). -$$
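As a quick numerical sanity check of the identity just stated, take f(x) = x on [−π, π], for which $a_n = 0$ and $b_n = 2(-1)^{n+1}/n$, so both sides equal $2\pi^2/3$. The sketch below (illustrative only; it assumes numpy) also verifies the unnormalized-DFT form of the theorem quoted later in this article:

```
# Numerical check of Parseval's identity (illustrative sketch).
import numpy as np

# Real-form identity for f(x) = x on [-pi, pi]: a_n = 0 and
# b_n = 2*(-1)^(n+1)/n, so sum b_n^2 -> (1/pi) * int f^2 = 2*pi^2/3.
k = np.arange(1, 200_000)
b = 2 * (-1.0) ** (k + 1) / k
print((b ** 2).sum(), 2 * np.pi ** 2 / 3)   # both ~6.5797

# DFT form with numpy's unnormalized FFT convention:
# sum |x[n]|^2 == (1/N) * sum |X[k]|^2.
x = np.random.randn(256) + 1j * np.random.randn(256)
X = np.fft.fft(x)
print((np.abs(x) ** 2).sum(), (np.abs(X) ** 2).sum() / len(x))
```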
- -For discrete time signals, the theorem becomes: -$$ -\sum_{n=-\infty}^\infty | x[n] |^2 = \frac{1}{2\pi} \int_{-\pi}^\pi | X_{2\pi}({\phi}) |^2 \mathrm{d}\phi -$$ - -where $X_{2\pi}$ is the discrete-time Fourier transform (DTFT) of $x$ and $\phi$ represents the angular frequency (in radians per sample) of $x$. - -Alternatively, for the discrete Fourier transform (DFT), the relation becomes: -$$ - \sum_{n=0}^{N-1} | x[n] |^2 = \frac{1}{N} \sum_{k=0}^{N-1} | X[k] |^2 -$$ - -where $X[k]$ is the DFT of $x[n]$, both of length $N$. - -We show the DFT case below. For the other cases, the proof is similar. By using the definition of inverse DFT of $X[k]$, we can derive - - \frac{1}{N} \sum_{k=0}^{N-1} | X[k] |^2 = \frac{1}{N} \sum_{k=0}^{N-1} X[k]\cdot X^*[k] = \frac{1}{N} \sum_{k=0}^{N-1} \left[\sum_{n=0}^{N-1} x[n]\exp\left(-j\frac{2\pi}{N}kn\right)\right] X^*[k] - -= \frac{1}{N} \sum_{n=0}^{N-1} x[n] \left[\sum_{k=0}^{N-1} X^*[k]\exp\left(-j\frac{2\pi}{N}kn\right)\right] - -= \frac{1}{N} \sum_{n=0}^{N-1} x[n] (N \cdot x^*[n]) = \sum_{n=0}^{N-1} | x[n] |^2, - - - -where $*$ represents complex conjugate. diff --git a/wiki/wikipedia/3743.txt b/wiki/wikipedia/3743.txt deleted file mode 100644 index ca42901075de783bbc32c543d944cbd2f8f39d7e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3743.txt +++ /dev/null @@ -1,351 +0,0 @@ -Solution of triangles () is the main trigonometric problem of finding the characteristics of a triangle (angles and lengths of sides), when some of these are known. The triangle can be located on a plane or on a sphere. Applications requiring triangle solutions include geodesy, astronomy, construction, and navigation. - -A general form triangle has six main characteristics (see picture): three linear (side lengths a, b, c) and three angular (α, β, γ). The classical plane trigonometry problem is to specify three of the six characteristics and determine the other three. A triangle can be uniquely determined in this sense when given any of the following: - -*Three sides (SSS) - -*Two sides and the included angle (SAS) - -*Two sides and an angle not included between them (SSA), if the side length adjacent to the angle is shorter than the other side length. - -*A side and the two angles adjacent to it (ASA) - -*A side, the angle opposite to it and an angle adjacent to it (AAS). - -For all cases in the plane, at least one of the side lengths must be specified. If only the angles are given, the side lengths cannot be determined, because any similar triangle is a solution. - -The standard method of solving the problem is to use fundamental relations. - -;Law of cosines -$$ - a^2 = b^2 + c^2 - 2 b c \cos \alpha -$$ -$$ - b^2 = a^2 + c^2 - 2 a c \cos \beta -$$ -$$ - c^2 = a^2 + b^2 - 2 a b \cos \gamma -$$ - -;Law of sines -$$ -\frac{a}{\sin\alpha} = \frac{b}{\sin\beta} = \frac{c}{\sin\gamma} -$$ - -;Sum of angles: -$$ -\alpha + \beta + \gamma = 180^\circ -$$ - -;Law of tangents -$$ -\frac{a-b}{a+b} = \frac{\tan[\frac{1}{2}(\alpha-\beta)]}{\tan[\frac 1 2 (\alpha+\beta)]}. -$$ - -There are other (sometimes practically useful) universal relations: the law of cotangents and Mollweide's formula. - -#To find an unknown angle, the law of cosines is safer than the law of sines. The reason is that the value of sine for the angle of the triangle does not uniquely determine this angle. For example, if sin β = 0.5, the angle β can equal either 30° or 150°. 
Using the law of cosines avoids this problem: within the interval from 0° to 180° the cosine value unambiguously determines its angle. On the other hand, if the angle is small (or close to 180°), then it is more robust numerically to determine it from its sine than its cosine because the arc-cosine function has a divergent derivative at 1 (or −1). - -#We assume that the relative position of specified characteristics is known. If not, the mirror reflection of the triangle will also be a solution. For example, three side lengths uniquely define either a triangle or its reflection. - -Let three side lengths a, b, c be specified. To find the angles α, β, the law of cosines can be used: - - - -\begin{align} - -\alpha & = \arccos \frac{b^2 + c^2 - a^2} {2 b c} \\[4pt] - -\beta & = \arccos \frac{a^2 + c^2 - b^2} {2 a c}. - -\end{align} - - - -Then angle γ = 180° − α − β. - -Some sources recommend to find angle β from the law of sines but (as Note 1 above states) there is a risk of confusing an acute angle value with an obtuse one. - -Another method of calculating the angles from known sides is to apply the law of cotangents. - -Here the lengths of sides a, b and the angle γ between these sides are known. The third side can be determined from the law of cosines: -$$ -c = \sqrt{a^2+b^2-2ab\cos\gamma}. -$$ - -Now we use law of cosines to find the second angle: -$$ - \alpha = \arccos \frac{b^2 + c^2 - a^2} {2 b c}. -$$ - -Finally, β = 180° − α − γ. - -This case is not solvable in all cases; a solution is guaranteed to be unique only if the side length adjacent to the angle is shorter than the other side length. Assume that two sides b, c and the angle β are known. The equation for the angle γ can be implied from the law of sines: -$$ -\sin\gamma = \frac c b \sin\beta. -$$ - -We denote further D = c/b sin β (the equation's right side). There are four possible cases: - -#If D > 1, no such triangle exists because the side b does not reach line BC. For the same reason a solution does not exist if the angle β ≥ 90° and b ≤ c. - -#If D = 1, a unique solution exists: γ = 90°, i.e., the triangle is right-angled. - -# If D < 1 two alternatives are possible. - -## If b ≥ c, then β ≥ γ (the larger side corresponds to a larger angle). Since no triangle can have two obtuse angles, γ is an acute angle and the solution γ = arcsin D is unique. - -## If b < c, the angle γ may be acute: γ = arcsin D or obtuse: γ′ = 180° − γ. The figure on right shows the point C, the side b and the angle γ as the first solution, and the point C′, side b′ and the angle γ′ as the second solution. - -Once γ is obtained, the third angle α = 180° − β − γ. - -The third side can then be found from the law of sines: -$$ -a = b\ \frac{\sin\alpha}{\sin\beta} -$$ - -or from the law of cosines: -$$ -a = c\cos\beta \pm \sqrt{b^2 -c^2\sin^2\beta} -$$ - -The known characteristics are the side c and the angles α, β. The third angle γ = 180° − α − β. - -Two unknown sides can be calculated from the law of sines: -$$ -a = c\ \frac{\sin\alpha}{\sin\gamma}; \quad b = c\ \frac{\sin\beta}{\sin\gamma}. -$$ - -or -$$ -a = c \frac{\sin\alpha}{\sin\alpha \cos \beta +\sin\beta \cos \alpha} -$$ -$$ -b = c \frac{\sin\beta}{\sin\alpha \cos \beta +\sin\beta \cos \alpha} -$$ - -The procedure for solving an AAS triangle is same as that for an ASA triangle: First, find the third angle by using the angle sum property of a triangle, then find the other two sides using the law of sines. 
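The SSS and SAS recipes above translate directly into code. The following is a minimal sketch (the function names are ours, not standard), using the law of cosines for the angles as recommended in the notes:

```
# Minimal planar triangle solver for the SSS and SAS cases (sketch).
from math import acos, cos, sqrt, pi, degrees

def solve_sss(a, b, c):
    """Return angles (alpha, beta, gamma) opposite sides (a, b, c)."""
    alpha = acos((b * b + c * c - a * a) / (2 * b * c))
    beta = acos((a * a + c * c - b * b) / (2 * a * c))
    return alpha, beta, pi - alpha - beta

def solve_sas(a, b, gamma):
    """Given sides a, b and the included angle gamma, return (c, alpha, beta)."""
    c = sqrt(a * a + b * b - 2 * a * b * cos(gamma))
    alpha, beta, _ = solve_sss(a, b, c)
    return c, alpha, beta

print([degrees(t) for t in solve_sss(3, 4, 5)])  # ~[36.87, 53.13, 90.0]
```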
- -In many cases, triangles can be solved given three pieces of information some of which are the lengths of the triangle's medians, altitudes, or angle bisectors. Posamentier and Lehmann list the results for the question of solvability using no higher than square roots (i.e., constructibility) for each of the 95 distinct cases; 63 of these are constructible. - -The general spherical triangle is fully determined by three of its six characteristics (3 sides and 3 angles). The lengths of the sides a, b, c of a spherical triangle are their central angles, measured in angular units rather than linear units. (On a unit sphere, the angle (in radians) and length around the sphere are numerically the same. On other spheres, the angle (in radians) is equal to the length around the sphere divided by the radius.) - -Spherical geometry differs from planar Euclidean geometry, so the solution of spherical triangles is built on different rules. For example, the sum of the three angles α + β + γ depends on the size of the triangle. In addition, similar triangles cannot be unequal, so the problem of constructing a triangle with specified three angles has a unique solution. The basic relations used to solve a problem are similar to those of the planar case: see Spherical law of cosines and Spherical law of sines. - -Among other relationships that may be useful are the half-side formula and Napier's analogies: - -*$\tan\frac c 2 \cos\frac{\alpha-\beta} 2 = \tan\frac{a+b}{2} \cos\frac{\alpha+\beta}{2}$ - -*$\tan\frac c 2 \sin\frac{\alpha-\beta} 2 = \tan\frac{a-b} 2 \sin\frac{\alpha+\beta} 2$ - -*$\cot\frac{\gamma} 2 \cos\frac{a-b}{2} = \tan\frac{\alpha+\beta} 2 \cos\frac{a+b} 2$ - -*$\cot\frac{\gamma} 2 \sin\frac{a-b}{2} = \tan\frac{\alpha-\beta} 2 \sin\frac{a+b} 2.$ - -Known: the sides a, b, c (in angular units). The triangle's angles are computed using the spherical law of cosines: -$$ -\alpha = \arccos\left(\frac{\cos a-\cos b\ \cos c}{\sin b\ \sin c}\right), -$$ -$$ -\beta = \arccos\left(\frac{\cos b-\cos c\ \cos a}{\sin c\ \sin a}\right), -$$ -$$ -\gamma = \arccos\left(\frac{\cos c-\cos a\ \cos b}{\sin a\ \sin b}\right). -$$ - -Known: the sides a, b and the angle γ between them. The side c can be found from the spherical law of cosines: -$$ -c = \arccos \left(\cos a\cos b + \sin a\sin b\cos\gamma \right). -$$ - -The angles α, β can be calculated as above, or by using Napier's analogies: -$$ -\alpha = \arctan\ \frac{2\sin a}{\tan(\frac{\gamma}{2}) \sin (b+a) + \cot(\frac{\gamma}{2})\sin(b-a)}, -$$ -$$ -\beta = \arctan\ \frac{2\sin b}{\tan(\frac{\gamma}{2}) \sin (a+b) + \cot(\frac{\gamma}{2})\sin(a-b)}. -$$ - -This problem arises in the navigation problem of finding the great circle between two points on the earth specified by their latitude and longitude; in this application, it is important to use formulas which are not susceptible to round-off errors. For this purpose, the following formulas (which may be derived using vector algebra) can be used: - -\begin{align} - -c &= \arctan\frac - -{\sqrt{(\sin a\cos b - \cos a \sin b \cos \gamma)^2 + (\sin b\sin\gamma)^2}} - -{\cos a \cos b + \sin a\sin b\cos\gamma},\\ - -\alpha &= \arctan\frac - -{\sin a\sin\gamma} - -{\sin b\cos a - \cos b\sin a\cos\gamma},\\ - -\beta &= \arctan\frac - -{\sin b\sin\gamma} - -{\sin a\cos b - \cos a\sin b\cos\gamma}, - -\end{align} - -where the signs of the numerators and denominators in these expressions should be used to determine the quadrant of the arctangent. 
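The round-off-robust formulas just given are straightforward to implement with atan2, which selects the correct quadrant automatically. The sketch below (illustrative only; the helper names are ours) computes the side c and applies it to the great-circle distance problem, anticipating the globe application described later in the article:

```
# Robust spherical SAS solution via atan2 (sketch).
from math import sin, cos, atan2, sqrt, radians

def spherical_side_c(a, b, gamma):
    """Side c opposite angle gamma, given sides a, b (all in radians)."""
    num = sqrt((sin(a) * cos(b) - cos(a) * sin(b) * cos(gamma)) ** 2
               + (sin(b) * sin(gamma)) ** 2)
    den = cos(a) * cos(b) + sin(a) * sin(b) * cos(gamma)
    return atan2(num, den)

def great_circle_km(lat_a, lon_a, lat_b, lon_b, radius_km=6371.0):
    """Distance on the globe: the sides are the colatitudes and the
    included angle is the longitude difference."""
    a = radians(90.0 - lat_b)
    b = radians(90.0 - lat_a)
    gamma = radians(lon_a - lon_b)
    return radius_km * spherical_side_c(a, b, gamma)

print(great_circle_km(51.5, -0.13, 48.85, 2.35))  # London-Paris, ~340 km
```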
- -This problem is not solvable in all cases; a solution is guaranteed to be unique only if the side length adjacent to the angle is shorter than the other side length. Known: the sides b, c and the angle β not between them. A solution exists if the following condition holds: -$$ -b > \arcsin (\sin c\sin\beta). -$$ - -The angle γ can be found from the spherical law of sines: -$$ -\gamma = \arcsin \left(\frac{\sin c\sin\beta}{\sin b}\right). -$$ - -As for the plane case, if b < c then there are two solutions: γ and 180° - γ. - -We can find other characteristics by using Napier's analogies: - - - -\begin{align} - -a & = 2\arctan \left[ \tan\left(\tfrac12(b-c)\right) \frac{\sin \left(\tfrac12(\beta+\gamma)\right)}{\sin\left(\tfrac12(\beta-\gamma)\right)} \right], \\[4pt] - -\alpha & = 2\arccot \left[\tan\left(\tfrac12(\beta-\gamma)\right) \frac{\sin \left(\tfrac12(b+c)\right)}{\sin \left(\tfrac12(b-c)\right)} \right]. - -\end{align} - - - -Known: the side c and the angles α, β. First we determine the angle γ using the spherical law of cosines: -$$ -\gamma = \arccos(\sin\alpha\sin\beta\cos c -\cos\alpha\cos\beta). -$$ - -We can find the two unknown sides from the spherical law of cosines (using the calculated angle γ): -$$ -a=\arccos\left(\frac{\cos\alpha+\cos\beta\cos\gamma}{\sin\beta\sin\gamma}\right), -$$ -$$ -b=\arccos\left(\frac{\cos\beta+\cos\alpha\cos\gamma}{\sin\alpha\sin\gamma}\right), -$$ - -or by using Napier's analogies: - - - -\begin{align} - -a & = \arctan\left[\frac{2\sin\alpha}{\cot(\frac c 2) \sin(\beta+\alpha) + \tan(\frac c 2) \sin(\beta-\alpha)}\right], \\[4pt] - -b & = \arctan\left[\frac{2\sin\beta} {\cot(\frac c 2) \sin(\alpha+\beta) + \tan(\frac c 2)\sin(\alpha-\beta)}\right]. - -\end{align} - - - -Known: the side a and the angles α, β. The side b can be found from the spherical law of sines: -$$ -b = \arcsin \left( \frac{\sin a\sin \beta}{\sin \alpha} \right). -$$ - -If the angle for the side a is acute and α > β, another solution exists: -$$ -b = \pi - \arcsin \left( \frac{\sin a\sin \beta}{\sin \alpha} \right). -$$ - -We can find other characteristics by using Napier's analogies: - - - -\begin{align} - -c & = 2\arctan \left[ \tan\left(\tfrac12(a-b)\right) \frac{\sin\left(\tfrac12(\alpha+\beta)\right)}{\sin\left(\frac12(\alpha-\beta)\right)}\right], \\[4pt] - -\gamma & = 2\arccot \left[\tan\left(\tfrac12(\alpha-\beta)\right) \frac{\sin \left(\tfrac12(a+b)\right)}{\sin \left(\frac12(a-b)\right)} \right]. - -\end{align} - - - -Known: the angles α, β, γ. From the spherical law of cosines we infer: -$$ -a=\arccos\left(\frac{\cos\alpha+\cos\beta\cos\gamma}{\sin\beta\sin\gamma}\right), -$$ -$$ -b=\arccos\left(\frac{\cos\beta+\cos\gamma\cos\alpha}{\sin\gamma\sin\alpha}\right), -$$ -$$ -c=\arccos\left(\frac{\cos\gamma+\cos\alpha\cos\beta}{\sin\alpha\sin\beta}\right). -$$ - -The above algorithms become much simpler if one of the angles of a triangle (for example, the angle C) is the right angle. Such a spherical triangle is fully defined by its two elements, and the other three can be calculated using Napier's Pentagon or the following relations. 
-$$ -\sin a = \sin c \cdot \sin A -$$ (from the spherical law of sines) -$$ -\tan a = \sin b \cdot \tan A -$$ -$$ -\cos c = \cos a \cdot \cos b -$$ (from the spherical law of cosines) -$$ -\tan b = \tan c \cdot \cos A -$$ -$$ -\cos A = \cos a \cdot \sin B -$$ (also from the spherical law of cosines) -$$ -\cos c = \cot A \cdot \cot B -$$ - -If one wants to measure the distance d from shore to a remote ship via triangulation, one marks on the shore two points with known distance l between them (the baseline). Let α, β be the angles between the baseline and the direction to the ship. - -From the formulae above (ASA case, assuming planar geometry) one can compute the distance as the triangle height: -$$ -d = \frac{\sin\alpha\sin\beta}{\sin(\alpha+\beta)} \ell = \frac{\tan\alpha\tan\beta}{\tan\alpha+\tan\beta} \ell. -$$ - -For the spherical case, one can first compute the length of side from the point at α to the ship (i.e. the side opposite to β) via the ASA formula -$$ - \tan b =\frac{2\sin\beta}{\cot(l/2)\sin(\alpha+\beta)+\tan(l/2)\sin(\alpha-\beta)}, -$$ - -and insert this into the AAS formula for the right subtriangle that contains the angle α and the sides b and d: -$$ - \sin d = \sin b \sin\alpha = \frac{\tan b}{\sqrt{1+\tan^2 b}}\sin\alpha. -$$ - -(The planar formula is actually the first term of the Taylor expansion of d of the spherical solution in powers of l.) - -This method is used in cabotage. The angles α, β are defined by observation of familiar landmarks from the ship. - -As another example, if one wants to measure the height h of a mountain or a high building, the angles α, β from two ground points to the top are specified. Let ℓ be the distance between these points. From the same ASA case formulas we obtain: -$$ - h = \frac{\sin\alpha\sin\beta}{\sin(\beta-\alpha)} \ell = \frac{\tan\alpha\tan\beta}{\tan\beta-\tan\alpha} \ell. -$$ - -To calculate the distance between two points on the globe, - -Point A: latitude λ_A, longitude L_A, and - -Point B: latitude λ_B, longitude L_B - -we consider the spherical triangle ABC, where C is the North Pole. Some characteristics are: -$$ -a = 90^\mathrm{o} - \lambda_\mathrm{B}, -$$ -$$ -b = 90^\mathrm{o} - \lambda_\mathrm{A}, -$$ -$$ -\gamma = L_\mathrm{A}-L_\mathrm{B}. -$$ - -If two sides and the included angle given, we obtain from the formulas -$$ -\mathrm{AB} = R \arccos\left[\sin \lambda_\mathrm{A} \sin \lambda_\mathrm{B} + \cos \lambda_\mathrm{A} \cos \lambda_\mathrm{B} \cos \left(L_\mathrm{A}-L_\mathrm{B}\right)\right]. -$$ - -Here R is the Earth's radius. diff --git a/wiki/wikipedia/3744.txt b/wiki/wikipedia/3744.txt deleted file mode 100644 index bc923ca3d609ed45c54a0f1f6f1972280513e47e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3744.txt +++ /dev/null @@ -1,25 +0,0 @@ -In graph drawing, the angular resolution of a drawing of a graph is the sharpest angle formed by any two edges that meet at a common vertex of the drawing. - -Formann observed that every straight-line drawing of a graph with maximum degree d has angular resolution at most 2π/d: if v is a vertex of degree d, then the edges incident to v partition the space around v into d wedges with total angle 2π, and the smallest of these wedges must have an angle of at most 2π/d. More strongly, if a graph is d-regular, it must have angular resolution less than $\frac{\pi}{d-1}$, because this is the best resolution that can be achieved for a vertex on the convex hull of the drawing. 
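The definition of angular resolution is easy to compute for a concrete straight-line drawing. The sketch below (illustrative only; the function name is ours) takes vertex positions and an edge list, and returns the sharpest angle between edges sharing an endpoint; a degree-4 star with evenly spaced edges attains the 2π/d bound:

```
# Compute the angular resolution of a straight-line drawing (sketch).
from math import atan2, pi

def angular_resolution(pos, edges):
    incident = {v: [] for v in pos}
    for u, v in edges:
        incident[u].append(v)
        incident[v].append(u)
    best = 2 * pi
    for v, nbrs in incident.items():
        if len(nbrs) < 2:
            continue
        x, y = pos[v]
        angles = sorted(atan2(py - y, px - x)
                        for px, py in (pos[u] for u in nbrs))
        gaps = [b - a for a, b in zip(angles, angles[1:])]
        gaps.append(2 * pi - (angles[-1] - angles[0]))  # wrap-around wedge
        best = min(best, min(gaps))
    return best

pos = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (-1, 0), 4: (0, -1)}
edges = [(0, 1), (0, 2), (0, 3), (0, 4)]
print(angular_resolution(pos, edges), pi / 2)  # both ~1.5708 = 2*pi/4
```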
- -As Formann showed, the largest possible angular resolution of a graph G is closely related to the chromatic number of the square $G^2$, the graph on the same vertex set in which pairs of vertices are connected by an edge whenever their distance in G is at most two. If $G^2$ can be colored with χ colors, then G may be drawn with angular resolution π/χ − ε, for any ε > 0, by assigning distinct colors to the vertices of a regular χ-gon and placing each vertex of G close to the polygon vertex with the same color. Using this construction, they showed that every graph with maximum degree d has a drawing with angular resolution proportional to $1/d^2$. This bound is close to tight: they used the probabilistic method to prove the existence of graphs with maximum degree d whose drawings all have angular resolution $O\left(\frac{\log d}{d^2}\right)$. - -Formann provided an example showing that there exist graphs that do not have a drawing achieving the maximum possible angular resolution; instead, these graphs have a family of drawings whose angular resolutions tend towards some limiting value without reaching it. Specifically, they exhibited an 11-vertex graph that has drawings of angular resolution π/3 − ε for any ε > 0, but that does not have a drawing of angular resolution exactly π/3. - -Every tree may be drawn in such a way that the edges are equally spaced around each vertex, a property known as perfect angular resolution. Moreover, if the edges may be freely permuted around each vertex, then such a drawing is possible, without crossings, with all edges unit length or higher, and with the entire drawing fitting within a bounding box of polynomial area. However, if the cyclic ordering of the edges around each vertex is already determined as part of the input to the problem, then achieving perfect angular resolution with no crossings may sometimes require exponential area. - -Perfect angular resolution is not always possible for outerplanar graphs, because vertices on the convex hull of the drawing with degree greater than one cannot have their incident edges equally spaced around them. Nonetheless, every outerplanar graph of maximum degree d has an outerplanar drawing with angular resolution proportional to 1/d. - -For planar graphs with maximum degree d, the square-coloring technique of Formann provides a drawing with angular resolution proportional to 1/d, because the square of a planar graph must have chromatic number proportional to d. More precisely, Wegner conjectured in 1977 that the chromatic number of the square of a planar graph is at most $\max\left(d+5, \frac{3d}{2}+1\right)$, and it is known that the chromatic number is at most $\frac{5d}{3}+O(1)$. However, the drawings resulting from this technique are generally not planar. - -For some planar graphs, the optimal angular resolution of a planar straight-line drawing is $O(1/d^3)$, where d is the degree of the graph. Additionally, such a drawing may be forced to use very long edges, longer by an exponential factor than the shortest edges in the drawing. - -Malitz used the circle packing theorem and ring lemma to show that every planar graph with maximum degree d has a planar drawing whose angular resolution is at worst an exponential function of d, independent of the number of vertices in the graph. - -It is NP-hard to determine whether a given graph of maximum degree d has a drawing with angular resolution 2π/d, even in the special case that d = 4.
However, for certain restricted classes of drawings, including drawings of trees in which extending the leaves to infinity produces a convex subdivision of the plane, as well as drawings of planar graphs in which each bounded face is a centrally-symmetric polygon, a drawing of optimal angular resolution may be found in polynomial time. - -Angular resolution was first defined by Formann. - -Although originally defined only for straight-line drawings of graphs, later authors have also investigated the angular resolution of drawings in which the edges are polygonal chains, circular arcs, or spline curves. - -The angular resolution of a graph is closely related to its crossing resolution, the angle formed by crossings in a drawing of the graph. In particular, RAC drawing seeks to ensure that these angles are all right angles, the largest crossing angle possible. diff --git a/wiki/wikipedia/3745.txt b/wiki/wikipedia/3745.txt deleted file mode 100644 index 702d01cfff5e4fe3bce4b06b0fb429009cc66f26..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3745.txt +++ /dev/null @@ -1,265 +0,0 @@ -Gödel's incompleteness theorems are two theorems of mathematical logic that are concerned with the limits of provability in formal axiomatic theories. These results, published by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The theorems are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible. - -The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency. - -Employing a diagonal argument, Gödel's incompleteness theorems were the first of several closely related theorems on the limitations of formal systems. They were followed by Tarski's undefinability theorem on the formal undefinability of truth, Church's proof that Hilbert's Entscheidungsproblem is unsolvable, and Turing's theorem that there is no algorithm to solve the halting problem. - -The incompleteness theorems apply to formal systems that are of sufficient complexity to express the basic arithmetic of the natural numbers and which are consistent and effectively axiomatized; these concepts are detailed below. Particularly in the context of first-order logic, formal systems are also called formal theories. In general, a formal system is a deductive apparatus that consists of a particular set of axioms along with rules of symbolic manipulation (or rules of inference) that allow for the derivation of new theorems from the axioms. One example of such a system is first-order Peano arithmetic, a system in which all variables are intended to denote natural numbers. In other systems, such as set theory, only some sentences of the formal system express statements about the natural numbers. The incompleteness theorems are about formal provability within these systems, rather than about "provability" in an informal sense. - -There are several properties that a formal system may have, including completeness, consistency, and the existence of an effective axiomatization.
The incompleteness theorems show that systems which contain a sufficient amount of arithmetic cannot possess all three of these properties. - -A formal system is said to be effectively axiomatized (also called effectively generated) if its set of theorems is a recursively enumerable set. - -This means that there is a computer program that, in principle, could enumerate all the theorems of the system without listing any statements that are not theorems. Examples of effectively generated theories include Peano arithmetic and Zermelo–Fraenkel set theory (ZFC). - -The theory known as true arithmetic consists of all true statements about the standard integers in the language of Peano arithmetic. This theory is consistent and complete, and contains a sufficient amount of arithmetic. However, it does not have a recursively enumerable set of axioms, and thus does not satisfy the hypotheses of the incompleteness theorems. - -A set of axioms is (syntactically, or negation-) complete if, for any statement in the axioms' language, that statement or its negation is provable from the axioms. This is the notion relevant for Gödel's first incompleteness theorem. It is not to be confused with semantic completeness, which means that the set of axioms proves all the semantic tautologies of the given language. In his completeness theorem (not to be confused with the incompleteness theorems described here), Gödel proved that first-order logic is semantically complete. But it is not syntactically complete, since there are sentences expressible in the language of first-order logic that can be neither proved nor disproved from the axioms of logic alone. - -In a mere system of logic it would be absurd to expect syntactic completeness. But in a system of mathematics, thinkers such as Hilbert believed that it was just a matter of time before an axiomatization was found that would allow one to either prove or disprove (by proving its negation) each and every mathematical formula. - -A formal system might be syntactically incomplete by design, as logics generally are. Or it may be incomplete simply because not all the necessary axioms have been discovered or included. For example, Euclidean geometry without the parallel postulate is incomplete, because some statements in the language (such as the parallel postulate itself) cannot be proved from the remaining axioms. Similarly, the theory of dense linear orders is not complete, but becomes complete with an extra axiom stating that there are no endpoints in the order. The continuum hypothesis is a statement in the language of ZFC that is not provable within ZFC, so ZFC is not complete. In this case, there is no obvious candidate for a new axiom that resolves the issue. - -The theory of first-order Peano arithmetic seems to be consistent. Assuming this is indeed the case, note that it has an infinite but recursively enumerable set of axioms, and can encode enough arithmetic for the hypotheses of the incompleteness theorem. Thus by the first incompleteness theorem, Peano arithmetic is not complete. The theorem gives an explicit example of a statement of arithmetic that is neither provable nor disprovable in Peano arithmetic. Moreover, this statement is true in the usual model. In addition, no effectively axiomatized, consistent extension of Peano arithmetic can be complete. - -A set of axioms is (simply) consistent if there is no statement such that both the statement and its negation are provable from the axioms, and inconsistent otherwise.
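For reference, the three properties just defined can be written symbolically. This is a standard formalization rather than notation used in the article itself; here $T \vdash \varphi$ means that the sentence φ is provable in the system T, and $\#\varphi$ denotes a Gödel number of φ:

$\text{(completeness)} \qquad \text{for every sentence } \varphi, \text{ either } T \vdash \varphi \text{ or } T \vdash \neg\varphi$

$\text{(consistency)} \qquad \text{there is no sentence } \varphi \text{ with both } T \vdash \varphi \text{ and } T \vdash \neg\varphi$

$\text{(effective axiomatization)} \qquad \{\, \#\varphi : T \vdash \varphi \,\} \text{ is recursively enumerable}$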
- -Peano arithmetic is provably consistent from ZFC, but not from within itself. Similarly, ZFC is not provably consistent from within itself, but ZFC + "there exists an inaccessible cardinal" proves ZFC is consistent because if κ is the least such cardinal, then $V_\kappa$ sitting inside the von Neumann universe is a model of ZFC, and a theory is consistent if and only if it has a model. - -If one takes all statements in the language of Peano arithmetic as axioms, then this theory is complete, has a recursively enumerable set of axioms, and can describe addition and multiplication. However, it is not consistent. - -Additional examples of inconsistent theories arise from the paradoxes that result when the axiom schema of unrestricted comprehension is assumed in set theory. - -The incompleteness theorems apply only to formal systems which are able to prove a sufficient collection of facts about the natural numbers. One sufficient collection is the set of theorems of Robinson arithmetic Q. Some systems, such as Peano arithmetic, can directly express statements about natural numbers. Others, such as ZFC set theory, are able to interpret statements about natural numbers into their language. Either of these options is appropriate for the incompleteness theorems. - -The theory of algebraically closed fields of a given characteristic is complete, consistent, and has an infinite but recursively enumerable set of axioms. However, it is not possible to encode the integers into this theory, and the theory cannot describe arithmetic of integers. A similar example is the theory of real closed fields, which is essentially equivalent to Tarski's axioms for Euclidean geometry. So Euclidean geometry itself (in Tarski's formulation) is an example of a complete, consistent, effectively axiomatized theory. - -The system of Presburger arithmetic consists of a set of axioms for the natural numbers with just the addition operation (multiplication is omitted). Presburger arithmetic is complete, consistent, and recursively enumerable, and can encode addition but not multiplication of natural numbers, showing that for Gödel's theorems one needs the theory to encode not just addition but also multiplication. - -Dan Willard has studied some weak families of arithmetic systems which allow enough arithmetic as relations to formalise Gödel numbering, but which are not strong enough to have multiplication as a function, and so fail to prove the second incompleteness theorem; these systems are consistent and capable of proving their own consistency (see self-verifying theories). - -In choosing a set of axioms, one goal is to be able to prove as many correct results as possible, without proving any incorrect results. For example, we could imagine a set of true axioms which allow us to prove every true arithmetical claim about the natural numbers. In the standard system of first-order logic, an inconsistent set of axioms will prove every statement in its language (this is sometimes called the principle of explosion), and is thus automatically complete. A set of axioms that is both complete and consistent, however, proves a maximal set of non-contradictory theorems. - -The pattern illustrated in the previous sections with Peano arithmetic, ZFC, and ZFC + "there exists an inaccessible cardinal" cannot generally be broken. Here ZFC + "there exists an inaccessible cardinal" cannot, from within itself, be proved consistent.
It is also not complete, as illustrated by the continuum hypothesis, which remains unresolved in the theory ZFC + "there exists an inaccessible cardinal". - -The first incompleteness theorem shows that, in formal systems that can express basic arithmetic, a complete and consistent finite list of axioms can never be created: each time an additional, consistent statement is added as an axiom, there are other true statements that still cannot be proved, even with the new axiom. If an axiom is ever added that makes the system complete, it does so at the cost of making the system inconsistent. It is not even possible for an infinite list of axioms to be complete, consistent, and effectively axiomatized. - -Gödel's first incompleteness theorem first appeared as "Theorem VI" in Gödel's 1931 paper "On Formally Undecidable Propositions of Principia Mathematica and Related Systems I". The hypotheses of the theorem were improved shortly thereafter by J. Barkley Rosser using Rosser's trick. The resulting theorem (incorporating Rosser's improvement) may be paraphrased in English as follows, where "formal system" includes the assumption that the system is effectively generated. -
    First Incompleteness Theorem: "Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F." (Raatikainen 2015)
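Stated symbolically (a standard modern rendering of the theorem in its Gödel–Rosser form, not a quotation from the sources above): if F is consistent, effectively axiomatized, and able to carry out elementary arithmetic, then there is a sentence $G_F$ in the language of F such that

$F \not\vdash G_F \qquad \text{and} \qquad F \not\vdash \neg G_F.$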
- -The unprovable statement $G_F$ referred to by the theorem is often called "the Gödel sentence" for the system F. The proof constructs a particular Gödel sentence for the system F, but there are infinitely many statements in the language of the system that share the same properties, such as the conjunction of the Gödel sentence and any logically valid sentence. - -Each effectively generated system has its own Gödel sentence. It is possible to define a larger system F′ that contains the whole of F plus $G_F$ as an additional axiom. This will not result in a complete system, because Gödel's theorem will also apply to F′, and thus F′ also cannot be complete. In this case, $G_F$ is indeed a theorem in F′, because it is an axiom. Because $G_F$ states only that it is not provable in F, no contradiction is presented by its provability within F′. However, because the incompleteness theorem applies to F′, there will be a new Gödel statement $G_{F'}$ for F′, showing that F′ is also incomplete. $G_{F'}$ will differ from $G_F$ in that $G_{F'}$ will refer to F′, rather than F. - -The Gödel sentence is designed to refer, indirectly, to itself. The sentence states that, when a particular sequence of steps is used to construct another sentence, that constructed sentence will not be provable in F. However, the sequence of steps is such that the constructed sentence turns out to be $G_F$ itself. In this way, the Gödel sentence $G_F$ indirectly states its own unprovability within F. - -To prove the first incompleteness theorem, Gödel demonstrated that the notion of provability within a system could be expressed purely in terms of arithmetical functions that operate on Gödel numbers of sentences of the system. Therefore, the system, which can prove certain facts about numbers, can also indirectly prove facts about its own statements, provided that it is effectively generated. Questions about the provability of statements within the system are represented as questions about the arithmetical properties of numbers themselves, which would be decidable by the system if it were complete. - -Thus, although the Gödel sentence refers indirectly to sentences of the system F, when read as an arithmetical statement the Gödel sentence directly refers only to natural numbers. It asserts that no natural number has a particular property, where that property is given by a primitive recursive relation. As such, the Gödel sentence can be written in the language of arithmetic with a simple syntactic form. In particular, it can be expressed as a formula in the language of arithmetic consisting of a number of leading universal quantifiers followed by a quantifier-free body (these formulas are at level $\Pi^0_1$ of the arithmetical hierarchy). Via the MRDP theorem, the Gödel sentence can be re-written as a statement that a particular polynomial in many variables with integer coefficients never takes the value zero when integers are substituted for its variables. - -The first incompleteness theorem shows that the Gödel sentence $G_F$ of an appropriate formal theory F is unprovable in F. Because, when interpreted as a statement about arithmetic, this unprovability is exactly what the sentence (indirectly) asserts, the Gödel sentence is, in fact, true. For this reason, the sentence $G_F$ is often said to be "true but unprovable". However, since the Gödel sentence cannot itself formally specify its intended interpretation, the truth of the sentence $G_F$ may only be arrived at via a meta-analysis from outside the system.
In general, this meta-analysis can be carried out within the weak formal system known as primitive recursive arithmetic, which proves the implication $\text{Con}(F) \rightarrow G_F$, where $\text{Con}(F)$ is a canonical sentence asserting the consistency of F. - -Although the Gödel sentence of a consistent theory is true as a statement about the intended interpretation of arithmetic, the Gödel sentence will be false in some nonstandard models of arithmetic, as a consequence of Gödel's completeness theorem. That theorem shows that, when a sentence is independent of a theory, the theory will have models in which the sentence is true and models in which the sentence is false. As described earlier, the Gödel sentence of a system F is an arithmetical statement which claims that no number exists with a particular property. The incompleteness theorem shows that this claim will be independent of the system F, and the truth of the Gödel sentence follows from the fact that no standard natural number has the property in question. Any model in which the Gödel sentence is false must contain some element which satisfies the property within that model. Such a model must be "nonstandard": it must contain elements that do not correspond to any standard natural number. - -Gödel specifically cites Richard's paradox and the liar paradox as semantical analogues to his syntactical incompleteness result in the introductory section of "On Formally Undecidable Propositions in Principia Mathematica and Related Systems I". The liar paradox is the sentence "This sentence is false." An analysis of the liar sentence shows that it cannot be true (for then, as it asserts, it is false), nor can it be false (for then, it is true). A Gödel sentence G for a system F makes a similar assertion to the liar sentence, but with truth replaced by provability: G says "G is not provable in the system F." The analysis of the truth and provability of G is a formalized version of the analysis of the truth of the liar sentence. - -It is not possible to replace "not provable" with "false" in a Gödel sentence because the predicate "Q is the Gödel number of a false formula" cannot be represented as a formula of arithmetic. This result, known as Tarski's undefinability theorem, was discovered independently both by Gödel, when he was working on the proof of the incompleteness theorem, and by the theorem's namesake, Alfred Tarski. - -Compared to the theorems stated in Gödel's 1931 paper, many contemporary statements of the incompleteness theorems are more general in two ways. These generalized statements are phrased to apply to a broader class of systems, and they are phrased to incorporate weaker consistency assumptions. - -Gödel demonstrated the incompleteness of the system of Principia Mathematica, a particular system of arithmetic, but a parallel demonstration could be given for any effective system of a certain expressiveness. Gödel commented on this fact in the introduction to his paper, but restricted the proof to one system for concreteness. In modern statements of the theorem, it is common to state the effectiveness and expressiveness conditions as hypotheses for the incompleteness theorem, so that it is not limited to any particular formal system. The terminology used to state these conditions was not yet developed in 1931 when Gödel published his results. - -Gödel's original statement and proof of the incompleteness theorem requires the assumption that the system is not just consistent but ω-consistent.
A system is ω-consistent if it is not ω-inconsistent, and is ω-inconsistent if there is a predicate P such that for every specific natural number m the system proves ~P(m), and yet the system also proves that there exists a natural number n such that P(n). That is, the system says that a number with property P exists while denying that it has any specific value. The ω-consistency of a system implies its consistency, but consistency does not imply ω-consistency. J. Barkley Rosser strengthened the incompleteness theorem by finding a variation of the proof (Rosser's trick) that only requires the system to be consistent, rather than ω-consistent. This is mostly of technical interest, because all true formal theories of arithmetic (theories whose axioms are all true statements about natural numbers) are ω-consistent, and thus Gödel's theorem as originally stated applies to them. The stronger version of the incompleteness theorem that only assumes consistency, rather than ω-consistency, is now commonly known as Gödel's incompleteness theorem and as the Gödel–Rosser theorem. - -For each formal system F containing basic arithmetic, it is possible to canonically define a formula Cons(F) expressing the consistency of F. This formula expresses the property that "there does not exist a natural number coding a formal derivation within the system F whose conclusion is a syntactic contradiction." The syntactic contradiction is often taken to be "0=1", in which case Cons(F) states "there is no natural number that codes a derivation of '0=1' from the axioms of F." - -Gödel's second incompleteness theorem shows that, under general assumptions, this canonical consistency statement Cons(F) will not be provable in F. The theorem first appeared as "Theorem XI" in Gödel's 1931 paper "On Formally Undecidable Propositions in Principia Mathematica and Related Systems I". In the following statement, the term "formalized system" also includes an assumption that F is effectively axiomatized. -
    Second Incompleteness Theorem: "Assume F is a consistent formalized system which contains elementary arithmetic. Then $F \not \vdash \text{Cons}(F)$."
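One standard route to this result (a sketch of the usual textbook argument, not of Gödel's original presentation) is to formalize the first incompleteness theorem inside F itself. Writing $\text{Prov}_F$ for the provability predicate of F, $G_F$ for its Gödel sentence, and $\#(G_F)$ for the Gödel number of $G_F$, formalizing the unprovability half of the first theorem yields

$F \vdash \text{Cons}(F) \rightarrow \neg\text{Prov}_F(\#(G_F)),$

and combining this with the diagonal equivalence $F \vdash G_F \leftrightarrow \neg\text{Prov}_F(\#(G_F))$ gives $F \vdash \text{Cons}(F) \rightarrow G_F$. A proof of Cons(F) in F would therefore yield a proof of $G_F$ in F, contradicting the first incompleteness theorem; hence F cannot prove Cons(F).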
- -This theorem is stronger than the first incompleteness theorem because the statement constructed in the first incompleteness theorem does not directly express the consistency of the system. The proof of the second incompleteness theorem is obtained by formalizing the proof of the first incompleteness theorem within the system F itself. - -There is a technical subtlety in the second incompleteness theorem regarding the method of expressing the consistency of F as a formula in the language of F. There are many ways to express the consistency of a system, and not all of them lead to the same result. The formula Cons(F) from the second incompleteness theorem is a particular expression of consistency. - -Other formalizations of the claim that F is consistent may be inequivalent in F, and some may even be provable. For example, first-order Peano arithmetic (PA) can prove that "the largest consistent subset of PA" is consistent. But, because PA is consistent, the largest consistent subset of PA is just PA, so in this sense PA "proves that it is consistent". What PA does not prove is that the largest consistent subset of PA is, in fact, the whole of PA. (The term "largest consistent subset of PA" is meant here to be the largest consistent initial segment of the axioms of PA under some particular effective enumeration.) - -The standard proof of the second incompleteness theorem assumes that the provability predicate $\text{Prov}_A(P)$ satisfies the Hilbert–Bernays provability conditions. Letting #(P) represent the Gödel number of a formula P, the provability conditions say: - -1. If F proves P, then F proves $\text{Prov}_A(\#(P))$. - -2. F proves condition 1; that is, F proves $\text{Prov}_A(\#(P)) \rightarrow \text{Prov}_A(\#(\text{Prov}_A(\#(P))))$. - -3. F proves $\text{Prov}_A(\#(P \rightarrow Q)) \wedge \text{Prov}_A(\#(P)) \rightarrow \text{Prov}_A(\#(Q))$ (an analogue of modus ponens). - -There are systems, such as Robinson arithmetic, which are strong enough to meet the assumptions of the first incompleteness theorem, but which do not prove the Hilbert–Bernays conditions. Peano arithmetic, however, is strong enough to verify these conditions, as are all theories stronger than Peano arithmetic. - -Gödel's second incompleteness theorem also implies that a system $F_1$ satisfying the technical conditions outlined above cannot prove the consistency of any system $F_2$ that proves the consistency of $F_1$. This is because such a system $F_1$ can prove that if $F_2$ proves the consistency of $F_1$, then $F_1$ is in fact consistent. For the claim that $F_1$ is consistent has the form "for all numbers n, n has the decidable property of not being a code for a proof of contradiction in $F_1$". If $F_1$ were in fact inconsistent, then $F_2$ would prove for some n that n is the code of a contradiction in $F_1$. But if $F_2$ also proved that $F_1$ is consistent (that is, that there is no such n), then it would itself be inconsistent. This reasoning can be formalized in $F_1$ to show that if $F_2$ is consistent, then $F_1$ is consistent. Since, by the second incompleteness theorem, $F_1$ does not prove its consistency, it cannot prove the consistency of $F_2$ either. - -This corollary of the second incompleteness theorem shows that there is no hope of proving, for example, the consistency of Peano arithmetic using any finitistic means that can be formalized in a system the consistency of which is provable in Peano arithmetic (PA). For example, the system of primitive recursive arithmetic (PRA), which is widely accepted as an accurate formalization of finitistic mathematics, is provably consistent in PA. Thus PRA cannot prove the consistency of PA.
This fact is generally seen to imply that Hilbert's program, which aimed to justify the use of "ideal" (infinitistic) mathematical principles in the proofs of "real" (finitistic) mathematical statements by giving a finitistic proof that the ideal principles are consistent, cannot be carried out. - -The corollary also indicates the epistemological relevance of the second incompleteness theorem. It would actually provide no interesting information if a system F proved its consistency. This is because inconsistent theories prove everything, including their consistency. Thus a consistency proof of F in F would give us no clue as to whether F really is consistent; no doubts about the consistency of F would be resolved by such a consistency proof. The interest in consistency proofs lies in the possibility of proving the consistency of a system F in some system F′ that is in some sense less doubtful than F itself, for example weaker than F. For many naturally occurring theories F and F′, such as F = Zermelo–Fraenkel set theory and F′ = primitive recursive arithmetic, the consistency of F′ is provable in F, and thus F′ cannot prove the consistency of F by the above corollary of the second incompleteness theorem. - -The second incompleteness theorem does not rule out altogether the possibility of proving the consistency of some theory T; it only rules out doing so in a theory that T itself can prove to be consistent. For example, Gerhard Gentzen proved the consistency of Peano arithmetic in a different system that includes an axiom asserting that the ordinal called $\varepsilon_0$ is well-founded; see Gentzen's consistency proof. Gentzen's theorem spurred the development of ordinal analysis in proof theory. - -There are two distinct senses of the word "undecidable" in mathematics and computer science. The first of these is the proof-theoretic sense used in relation to Gödel's theorems, that of a statement being neither provable nor refutable in a specified deductive system. The second sense, which will not be discussed here, is used in relation to computability theory and applies not to statements but to decision problems, which are countably infinite sets of questions each requiring a yes or no answer. Such a problem is said to be undecidable if there is no computable function that correctly answers every question in the problem set (see undecidable problem). - -Because of the two meanings of the word undecidable, the term independent is sometimes used instead of undecidable for the "neither provable nor refutable" sense. - -Undecidability of a statement in a particular deductive system does not, in and of itself, address the question of whether the truth value of the statement is well-defined, or whether it can be determined by other means. Undecidability only implies that the particular deductive system being considered does not prove the truth or falsity of the statement. Whether there exist so-called "absolutely undecidable" statements, whose truth value can never be known or is ill-specified, is a controversial point in the philosophy of mathematics. - -The combined work of Gödel and Paul Cohen has given two concrete examples of undecidable statements (in the first sense of the term): The continuum hypothesis can neither be proved nor refuted in ZFC (the standard axiomatization of set theory), and the axiom of choice can neither be proved nor refuted in ZF (which is all the ZFC axioms except the axiom of choice). These results do not require the incompleteness theorem.
Gödel proved in 1940 that neither of these statements could be disproved in ZF or ZFC set theory. In the 1960s, Cohen proved that neither is provable from ZF, and that the continuum hypothesis cannot be proved from ZFC. - -In 1973, Saharon Shelah showed that the Whitehead problem in group theory is undecidable, in the first sense of the term, in standard set theory. - -Gregory Chaitin produced undecidable statements in algorithmic information theory and proved another incompleteness theorem in that setting. Chaitin's incompleteness theorem states that for any system that can represent enough arithmetic, there is an upper bound c such that no specific number can be proved in that system to have Kolmogorov complexity greater than c. While Gödel's theorem is related to the liar paradox, Chaitin's result is related to Berry's paradox. - -There are natural mathematical statements that are equivalents of the Gödel "true but undecidable" sentence. They can be proved in a larger system which is generally accepted as a valid form of reasoning, but are undecidable in a more limited system such as Peano arithmetic. - -In 1977, Paris and Harrington proved that the Paris–Harrington principle, a version of the infinite Ramsey theorem, is undecidable in (first-order) Peano arithmetic, but can be proved in the stronger system of second-order arithmetic. Kirby and Paris later showed that Goodstein's theorem, a statement about sequences of natural numbers somewhat simpler than the Paris–Harrington principle, is also undecidable in Peano arithmetic. - -Kruskal's tree theorem, which has applications in computer science, is also undecidable from Peano arithmetic but provable in set theory. In fact Kruskal's tree theorem (or its finite form) is undecidable in a much stronger system codifying the principles acceptable based on a philosophy of mathematics called predicativism. The related but more general graph minor theorem (2003) has consequences for computational complexity theory. - -The incompleteness theorem is closely related to several results about undecidable sets in recursion theory. - -Kleene presented a proof of Gödel's incompleteness theorem using basic results of computability theory. One such result shows that the halting problem is undecidable: there is no computer program that can correctly determine, given any program P as input, whether P eventually halts when run with a particular given input. Kleene showed that the existence of a complete effective system of arithmetic with certain consistency properties would force the halting problem to be decidable, a contradiction. This method of proof has also been presented by Shoenfield, Charlesworth, and Hopcroft. - -Franzén explains how Matiyasevich's solution to Hilbert's 10th problem can be used to obtain a proof of Gödel's first incompleteness theorem. Matiyasevich proved that there is no algorithm that, given a multivariate polynomial $p(x_1, x_2, \ldots, x_k)$ with integer coefficients, determines whether there is an integer solution to the equation p = 0. Because polynomials with integer coefficients, and integers themselves, are directly expressible in the language of arithmetic, if a multivariate integer polynomial equation p = 0 does have a solution in the integers then any sufficiently strong system of arithmetic T will prove this. Moreover, if the system T is ω-consistent, then it will never prove that a particular polynomial equation has a solution when in fact there is no solution in the integers.
Thus, if T were complete and ω-consistent, it would be possible to determine algorithmically whether a polynomial equation has a solution by merely enumerating proofs of T until either "p has a solution" or "p has no solution" is found, in contradiction to Matiyasevich's theorem. Hence it follows that T cannot be ω-consistent and complete. Moreover, for each consistent effectively generated system T, it is possible to effectively generate a multivariate polynomial p over the integers such that the equation p = 0 has no solutions over the integers, but the lack of solutions cannot be proved in T. - -Smorynski shows how the existence of recursively inseparable sets can be used to prove the first incompleteness theorem. This proof is often extended to show that systems such as Peano arithmetic are essentially undecidable. - -Chaitin's incompleteness theorem gives a different method of producing independent sentences, based on Kolmogorov complexity. Like the proof presented by Kleene that was mentioned above, Chaitin's theorem only applies to theories with the additional property that all their axioms are true in the standard model of the natural numbers. Gödel's incompleteness theorem is distinguished by its applicability to consistent theories that nonetheless include statements that are false in the standard model; these theories are known as ω-inconsistent. - -The proof by contradiction has three essential parts. To begin, choose a formal system that meets the proposed criteria: - -1. Statements in the system can be represented by natural numbers (known as Gödel numbers). The significance of this is that properties of statements—such as their truth and falsehood—will be equivalent to determining whether their Gödel numbers have certain properties, and that properties of the statements can therefore be demonstrated by examining their Gödel numbers. This part culminates in the construction of a formula expressing the idea that "statement S is provable in the system" (which can be applied to any statement "S" in the system). - -2. In the formal system it is possible to construct a number whose matching statement, when interpreted, is self-referential and essentially says that it (i.e. the statement itself) is unprovable. This is done using a technique called "diagonalization" (so-called because of its origins as Cantor's diagonal argument). - -3. Within the formal system this statement permits a demonstration that it is neither provable nor disprovable in the system, and therefore the system cannot in fact be ω-consistent. Hence the original assumption that the proposed system met the criteria is false. - -The main problem in fleshing out the proof described above is that it seems at first that to construct a statement p that is equivalent to "p cannot be proved", p would somehow have to contain a reference to p, which could easily give rise to an infinite regress. Gödel's ingenious technique is to show that statements can be matched with numbers (often called the arithmetization of syntax) in such a way that "proving a statement" can be replaced with "testing whether a number has a given property". This allows a self-referential formula to be constructed in a way that avoids any infinite regress of definitions. The same technique was later used by Alan Turing in his work on the Entscheidungsproblem.
- -In simple terms, a method can be devised so that every formula or statement that can be formulated in the system gets a unique number, called its Gödel number, in such a way that it is possible to mechanically convert back and forth between formulas and Gödel numbers. The numbers involved might be very long indeed (in terms of number of digits), but this is not a barrier; all that matters is that such numbers can be constructed. A simple example is how English can be stored as a sequence of numbers for each letter and then combined into a single larger number: - -* The word hello is encoded as 104-101-108-108-111 in ASCII, which can be converted into the number 104101108108111. - -* The logical statement x=y => y=x is encoded as 120-061-121-032-061-062-032-121-061-120 in ASCII, which can be converted into the number 120061121032061062032121061120. - -In principle, proving a statement true or false can be shown to be equivalent to proving that the number matching the statement does or doesn't have a given property. Because the formal system is strong enough to support reasoning about numbers in general, it can support reasoning about numbers that represent formulae and statements as well. Crucially, because the system can support reasoning about properties of numbers, the results are equivalent to reasoning about provability of their equivalent statements. - -Having shown that in principle the system can indirectly make statements about provability, by analyzing properties of those numbers representing statements it is now possible to show how to create a statement that actually does this. - -A formula F(x) that contains exactly one free variable x is called a statement form or class-sign. As soon as x is replaced by a specific number, the statement form turns into a bona fide statement, and it is then either provable in the system, or not. For certain formulas one can show that for every natural number n, F(n) is true if and only if it can be proved (the precise requirement in the original proof is weaker, but for the proof sketch this will suffice). In particular, this is true for every specific arithmetic operation between a finite number of natural numbers, such as "2 × 3 = 6". - -Statement forms themselves are not statements and therefore cannot be proved or disproved. But every statement form F(x) can be assigned a Gödel number denoted by G(F). The choice of the free variable used in the form F(x) is not relevant to the assignment of the Gödel number G(F). - -The notion of provability itself can also be encoded by Gödel numbers, in the following way: since a proof is a list of statements which obey certain rules, the Gödel number of a proof can be defined. Now, for every statement p, one may ask whether a number x is the Gödel number of its proof. The relation between the Gödel number of p and x, the potential Gödel number of its proof, is an arithmetical relation between two numbers. Therefore, there is a statement form Bew(y) that uses this arithmetical relation to state that a Gödel number of a proof of y exists: - -Bew(y) = ∃ x (y is the Gödel number of a formula and x is the Gödel number of a proof of the formula encoded by y). - -The name Bew is short for beweisbar, the German word for "provable"; this name was originally used by Gödel to denote the provability formula just described. Note that "Bew(y)" is merely an abbreviation that represents a particular, very long, formula in the original language of T; the string "Bew" itself is not claimed to be part of this language.
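The toy ASCII encoding in the bullet points above is easy to make concrete. The following sketch implements only that toy scheme (practical Gödel numberings instead use codings, such as prime-power exponents, for which decoding is primitive recursive):

```python
# Toy Goedel numbering from the text: concatenate three-digit ASCII codes.
def encode(s: str) -> int:
    return int("".join(f"{ord(c):03d}" for c in s))

def decode(n: int) -> str:
    digits = str(n)
    # int() drops leading zeros (a leading space would encode as "032"),
    # so left-pad back to a multiple of three digits before splitting.
    digits = digits.zfill((len(digits) + 2) // 3 * 3)
    return "".join(chr(int(digits[i:i + 3])) for i in range(0, len(digits), 3))

assert encode("hello") == 104101108108111
assert decode(encode("x=y => y=x")) == "x=y => y=x"
```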
- -An important feature of the formula Bew(y) is that if a statement p is provable in the system then Bew(G(p)) is also provable. This is because any proof of p would have a corresponding Gödel number, the existence of which causes Bew(G(p)) to be satisfied. - -The next step in the proof is to obtain a statement which, indirectly, asserts its own unprovability. Although Gödel constructed this statement directly, the existence of at least one such statement follows from the diagonal lemma, which says that for any sufficiently strong formal system and any statement form F there is a statement p such that the system proves - -p ↔ F(G(p)). - -By letting F be the negation of Bew(x), we obtain the theorem - -p ↔ ~Bew(G(p)) - -and the p defined by this roughly states that its own Gödel number is the Gödel number of an unprovable formula. - -The statement p is not literally equal to ~Bew(G(p)); rather, p states that if a certain calculation is performed, the resulting Gödel number will be that of an unprovable statement. But when this calculation is performed, the resulting Gödel number turns out to be the Gödel number of p itself. This is similar to the following sentence in English: - -", when preceded by itself in quotes, is unprovable.", when preceded by itself in quotes, is unprovable. - -This sentence does not directly refer to itself, but when the stated transformation is made the original sentence is obtained as a result, and thus this sentence indirectly asserts its own unprovability. The proof of the diagonal lemma employs a similar method. - -Now, assume that the axiomatic system is ω-consistent, and let p be the statement obtained in the previous section. - -If p were provable, then Bew(G(p)) would be provable, as argued above. But p asserts the negation of Bew(G(p)). Thus the system would be inconsistent, proving both a statement and its negation. This contradiction shows that p cannot be provable. - -If the negation of p were provable, then Bew(G(p)) would be provable (because p was constructed to be equivalent to the negation of Bew(G(p))). However, for each specific number x, x cannot be the Gödel number of the proof of p, because p is not provable (from the previous paragraph). Thus on one hand the system proves there is a number with a certain property (that it is the Gödel number of the proof of p), but on the other hand, for every specific number x, we can prove that it does not have this property. This is impossible in an ω-consistent system. Thus the negation of p is not provable. - -Thus the statement p is undecidable in our axiomatic system: it can neither be proved nor disproved within the system. - -In fact, to show that p is not provable only requires the assumption that the system is consistent. The stronger assumption of ω-consistency is required to show that the negation of p is not provable. Thus, if p is constructed for a particular system: - -*If the system is ω-consistent, it can prove neither p nor its negation, and so p is undecidable. - -*If the system is consistent, it may have the same situation, or it may prove the negation of p. In the latter case, we have a statement ("not p") which is false but provable, and the system is not ω-consistent. - -If one tries to "add the missing axioms" to avoid the incompleteness of the system, then one has to add either p or "not p" as axioms. But then the definition of "being a Gödel number of a proof" of a statement changes, which means that the formula Bew(x) is now different.
Thus when we apply the diagonal lemma to this new Bew, we obtain a new statement p, different from the previous one, which will be undecidable in the new system if it is ω-consistent. - -George Boolos sketches an alternative proof of the first incompleteness theorem that uses Berry's paradox rather than the liar paradox to construct a true but unprovable formula. A similar proof method was independently discovered by Saul Kripke. Boolos's proof proceeds by constructing, for any computably enumerable set S of true sentences of arithmetic, another sentence which is true but not contained in S. This gives the first incompleteness theorem as a corollary. According to Boolos, this proof is interesting because it provides a "different sort of reason" for the incompleteness of effective, consistent theories of arithmetic. - -The incompleteness theorems are among a relatively small number of nontrivial theorems that have been transformed into formalized theorems that can be completely verified by proof assistant software. Gödel's original proofs of the incompleteness theorems, like most mathematical proofs, were written in natural language intended for human readers. - -Computer-verified proofs of versions of the first incompleteness theorem were announced by Natarajan Shankar in 1986 using Nqthm, by Russell O'Connor in 2003 using Coq, and by John Harrison in 2009 using HOL Light. A computer-verified proof of both incompleteness theorems was announced by Lawrence Paulson in 2013 using Isabelle. - -The main difficulty in proving the second incompleteness theorem is to show that various facts about provability used in the proof of the first incompleteness theorem can be formalized within a system S using a formal predicate for provability. Once this is done, the second incompleteness theorem follows by formalizing the entire proof of the first incompleteness theorem within the system S itself. - -Let p stand for the undecidable sentence constructed above, and assume for purposes of obtaining a contradiction that the consistency of the system S can be proved from within the system S itself. This is equivalent to proving the statement "System S is consistent". - -Now consider the statement c, where c = "If the system S is consistent, then p is not provable". The proof of sentence c can be formalized within the system S, and therefore the statement c, whose conclusion is "p is not provable" (or identically, "not P(p)"), can be proved in the system S. - -Observe then that if we can prove that the system S is consistent (i.e. the statement in the hypothesis of c), then we have proved that p is not provable. But this is a contradiction, since by the first incompleteness theorem, this sentence (i.e. what is implied in the sentence c, "p is not provable") is what we constructed to be unprovable. Notice that this is why we require formalizing the first incompleteness theorem in S: to prove the second incompleteness theorem, we obtain a contradiction with the first incompleteness theorem, which we can do only by showing that the theorem holds in S. So we cannot prove that the system S is consistent. And the second incompleteness theorem follows. - -The incompleteness results affect the philosophy of mathematics, particularly versions of formalism, which use a single system of formal logic to define their principles. - -The incompleteness theorem is sometimes thought to have severe consequences for the program of logicism proposed by Gottlob Frege and Bertrand Russell, which aimed to define the natural numbers in terms of logic.
Bob Hale and Crispin Wright argue that it is not a problem for logicism because the incompleteness theorems apply equally to first-order logic as they do to arithmetic. They argue that only those who believe that the natural numbers are to be defined in terms of first-order logic have this problem. - -Many logicians believe that Gödel's incompleteness theorems struck a fatal blow to David Hilbert's second problem, which asked for a finitary consistency proof for mathematics. The second incompleteness theorem, in particular, is often viewed as making the problem impossible. Not all mathematicians agree with this analysis, however, and the status of Hilbert's second problem is not yet decided (see "Modern viewpoints on the status of the problem"). - -Authors including the philosopher J. R. Lucas and physicist Roger Penrose have debated what, if anything, Gödel's incompleteness theorems imply about human intelligence. Much of the debate centers on whether the human mind is equivalent to a Turing machine, or by the Church–Turing thesis, any finite machine at all. If it is, and if the machine is consistent, then Gödel's incompleteness theorems would apply to it. - -Hilary Putnam suggested that while Gödel's theorems cannot be applied to humans, since they make mistakes and are therefore inconsistent, they may be applied to the human faculty of science or mathematics in general. Assuming that it is consistent, either its consistency cannot be proved or it cannot be represented by a Turing machine. - -Avi Wigderson has proposed that the concept of mathematical "knowability" should be based on computational complexity rather than logical decidability. He writes that "when knowability is interpreted by modern standards, namely via computational complexity, the Gödel phenomena are very much with us." - -Douglas Hofstadter, in his books Gödel, Escher, Bach and I Am a Strange Loop, cites Gödel's theorems as an example of what he calls a strange loop, a hierarchical, self-referential structure existing within an axiomatic formal system. He argues that this is the same kind of structure which gives rise to consciousness, the sense of "I", in the human mind. While the self-reference in Gödel's theorem comes from the Gödel sentence asserting its own unprovability within the formal system of Principia Mathematica, the self-reference in the human mind comes from the way in which the brain abstracts and categorises stimuli into "symbols", or groups of neurons which respond to concepts, in what is effectively also a formal system, eventually giving rise to symbols modelling the concept of the very entity doing the perception. - -Hofstadter argues that a strange loop in a sufficiently complex formal system can give rise to a "downward" or "upside-down" causality, a situation in which the normal hierarchy of cause-and-effect is flipped upside-down. In the case of Gödel's theorem, this manifests, in short, as the following: - -"Merely from knowing the formula's meaning, one can infer its truth or falsity without any effort to derive it in the old-fashioned way, which requires one to trudge methodically "upwards" from the axioms. This is not just peculiar; it is astonishing. Normally, one cannot merely look at what a mathematical conjecture says and simply appeal to the content of that statement on its own to deduce whether the statement is true or false." (I Am a Strange Loop.)
- -In the case of the mind, a far more complex formal system, this "downward causality" manifests, in Hofstadter's view, as the ineffable human instinct that the causality of our minds lies on the high level of desires, concepts, personalities, thoughts and ideas, rather than on the low level of interactions between neurons or even fundamental particles, even though according to physics the latter seems to possess the causal power. - -"There is thus a curious upside-downness to our normal human way of perceiving the world: we are built to perceive “big stuff” rather than “small stuff”, even though the domain of the tiny seems to be where the actual motors driving reality reside." (I Am a Strange Loop.) - -Although Gödel's theorems are usually studied in the context of classical logic, they also have a role in the study of paraconsistent logic and of inherently contradictory statements (dialetheia). Graham Priest argues that replacing the notion of formal proof in Gödel's theorem with the usual notion of informal proof can be used to show that naive mathematics is inconsistent, and uses this as evidence for dialetheism. The cause of this inconsistency is the inclusion of a truth predicate for a system within the language of the system. Others have given a more mixed appraisal of the applications of Gödel's theorems to dialetheism. - -Appeals and analogies are sometimes made to the incompleteness theorems in support of arguments that go beyond mathematics and logic. Several authors have commented negatively on such extensions and interpretations, including Torkel Franzén (2005) and Panu Raatikainen. Bricmont, for example, quotes from Rebecca Goldstein's comments on the disparity between Gödel's avowed Platonism and the anti-realist uses to which his ideas are sometimes put. Sokal and Bricmont criticize Régis Debray's invocation of the theorem in the context of sociology; Debray has defended this use as metaphorical (ibid.). - -After Gödel published his proof of the completeness theorem as his doctoral thesis in 1929, he turned to a second problem for his habilitation. His original goal was to obtain a positive solution to Hilbert's second problem. At the time, theories of the natural numbers and real numbers similar to second-order arithmetic were known as "analysis", while theories of the natural numbers alone were known as "arithmetic". - -Gödel was not the only person working on the consistency problem. Ackermann had published a flawed consistency proof for analysis in 1925, in which he attempted to use the method of ε-substitution originally developed by Hilbert. Later that year, von Neumann was able to correct the proof for a system of arithmetic without any axioms of induction. By 1928, Ackermann had communicated a modified proof to Bernays; this modified proof led Hilbert to announce his belief in 1929 that the consistency of arithmetic had been demonstrated and that a consistency proof of analysis would likely soon follow. After the publication of the incompleteness theorems showed that Ackermann's modified proof must be erroneous, von Neumann produced a concrete example showing that its main technique was unsound. - -In the course of his research, Gödel discovered that although a sentence which asserts its own falsehood leads to paradox, a sentence that asserts its own non-provability does not. In particular, Gödel was aware of the result now called Tarski's indefinability theorem, although he never published it.
Gödel announced his first incompleteness theorem to Carnap, Feigl and Waismann on August 26, 1930; all four would attend the Second Conference on the Epistemology of the Exact Sciences, a key conference in Königsberg the following week. - -The 1930 Königsberg conference was a joint meeting of three academic societies, with many of the key logicians of the time in attendance. Carnap, Heyting, and von Neumann delivered one-hour addresses on the mathematical philosophies of logicism, intuitionism, and formalism, respectively. The conference also included Hilbert's retirement address, as he was leaving his position at the University of Göttingen. Hilbert used the speech to argue his belief that all mathematical problems can be solved. He ended his address by saying, "Wir müssen wissen. Wir werden wissen!" - -This speech quickly became known as a summary of Hilbert's beliefs on mathematics (its final six words, "Wir müssen wissen. Wir werden wissen!" ("We must know. We will know!"), were used as Hilbert's epitaph in 1943). Although Gödel was likely in attendance for Hilbert's address, the two never met face to face. - -Gödel announced his first incompleteness theorem at a roundtable discussion session on the third day of the conference. The announcement drew little attention apart from that of von Neumann, who pulled Gödel aside for conversation. Later that year, working independently with knowledge of the first incompleteness theorem, von Neumann obtained a proof of the second incompleteness theorem, which he announced to Gödel in a letter dated November 20, 1930. Gödel had independently obtained the second incompleteness theorem and included it in his submitted manuscript, which was received by Monatshefte für Mathematik on November 17, 1930. - -Gödel's paper was published in the Monatshefte in 1931 under the title "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I" ("On Formally Undecidable Propositions in Principia Mathematica and Related Systems I"). As the title implies, Gödel originally planned to publish a second part of the paper in the next volume of the Monatshefte; the prompt acceptance of the first paper was one reason he changed his plans. - -Gödel gave a series of lectures on his theorems at Princeton in 1933–1934 to an audience that included Church, Kleene, and Rosser. By this time, Gödel had grasped that the key property his theorems required is that the system must be effective (at the time, the term "general recursive" was used). Rosser proved in 1936 that the hypothesis of ω-consistency, which was an integral part of Gödel's original proof, could be replaced by simple consistency, if the Gödel sentence was changed in an appropriate way. These developments left the incompleteness theorems in essentially their modern form. - -Gentzen published his consistency proof for first-order arithmetic in 1936. Hilbert accepted this proof as "finitary" although (as Gödel's theorem had already shown) it cannot be formalized within the system of arithmetic that is being proved consistent. - -The impact of the incompleteness theorems on Hilbert's program was quickly realized. Bernays included a full proof of the incompleteness theorems in the second volume of Grundlagen der Mathematik (1939), along with additional results of Ackermann on the ε-substitution method and Gentzen's consistency proof of arithmetic. This was the first full published proof of the second incompleteness theorem.
- -Paul Finsler used a version of Richard's paradox to construct an expression that was false but unprovable in a particular, informal framework he had developed. Gödel was unaware of this paper when he proved the incompleteness theorems (Collected Works Vol. IV., p. 9). Finsler wrote to Gödel in 1931 to inform him about this paper, which Finsler felt had priority for an incompleteness theorem. Finsler's methods did not rely on formalized provability, and had only a superficial resemblance to Gödel's work. Gödel read the paper but found it deeply flawed, and his response to Finsler laid out concerns about the lack of formalization. Finsler continued to argue for his philosophy of mathematics, which eschewed formalization, for the remainder of his career. - -In September 1931, Ernst Zermelo wrote to Gödel to announce what he described as an "essential gap" in Gödel's argument. In October, Gödel replied with a 10-page letter, where he pointed out that Zermelo mistakenly assumed that the notion of truth in a system is definable in that system (which is not true in general by Tarski's undefinability theorem). But Zermelo did not relent and published his criticisms in print with "a rather scathing paragraph on his young competitor". Gödel decided that to pursue the matter further was pointless, and Carnap agreed. Much of Zermelo's subsequent work was related to logics stronger than first-order logic, with which he hoped to show both the consistency and categoricity of mathematical theories. - -Ludwig Wittgenstein wrote several passages about the incompleteness theorems that were published posthumously in his 1953 Remarks on the Foundations of Mathematics, in particular one section sometimes called the "notorious paragraph" where he seems to confuse the notions of "true" and "provable" in Russell's system. Gödel was a member of the Vienna Circle during the period in which Wittgenstein's early ideal language philosophy and Tractatus Logico-Philosophicus dominated the circle's thinking. There has been some controversy about whether Wittgenstein misunderstood the incompleteness theorem or just expressed himself unclearly. Writings in Gödel's Nachlass express the belief that Wittgenstein misread his ideas. - -Multiple commentators have read Wittgenstein as misunderstanding Gödel, although Juliet Floyd and Hilary Putnam, among others, have provided textual readings arguing that most commentary misunderstands Wittgenstein. On their release, Bernays, Dummett, and Kreisel wrote separate reviews of Wittgenstein's remarks, all of which were extremely negative. The unanimity of this criticism caused Wittgenstein's remarks on the incompleteness theorems to have little impact on the logic community. In 1972, Gödel stated: "Has Wittgenstein lost his mind? Does he mean it seriously? He intentionally utters trivially nonsensical statements", and wrote to Karl Menger that Wittgenstein's comments demonstrate a misunderstanding of the incompleteness theorems. - -Since the publication of Wittgenstein's Nachlass in 2000, a series of papers in philosophy have sought to evaluate whether the original criticism of Wittgenstein's remarks was justified. Floyd and Putnam argue that Wittgenstein had a more complete understanding of the incompleteness theorem than was previously assumed. They are particularly concerned with the interpretation of a Gödel sentence for an ω-inconsistent system as actually saying "I am not provable", since the system has no models in which the provability predicate corresponds to actual provability.
Rodych argues that their interpretation of Wittgenstein is not historically justified, while Bays argues against Floyd and Putnam's philosophical analysis of the provability predicate. Berto explores the relationship between Wittgenstein's writing and theories of paraconsistent logic. diff --git a/wiki/wikipedia/3746.txt b/wiki/wikipedia/3746.txt deleted file mode 100644 index 5ad3789fc022663110afd9dafd4ddab877b8527a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3746.txt +++ /dev/null @@ -1,23 +0,0 @@ -In mathematics, Milliken's tree theorem in combinatorics is a partition theorem generalizing Ramsey's theorem to infinite trees, objects with more structure than sets. - -Let T be a finitely splitting rooted tree of height ω, n a positive integer, and $\mathbb{S}^n_T$ the collection of all strongly embedded subtrees of T of height n. In one of its simple forms, Milliken's tree theorem states that if $\mathbb{S}^n_T=C_1 \cup \cdots \cup C_r$ then for some strongly embedded infinite subtree R of T, $\mathbb{S}^n_R \subset C_i$ for some i ≤ r. - -This immediately implies Ramsey's theorem; take the tree T to be a linear ordering on ω vertices. - -Define $\mathbb{S}^n= \bigcup_T \mathbb{S}^n_T$ where T ranges over finitely splitting rooted trees of height ω. Milliken's tree theorem says that not only is $\mathbb{S}^n$ partition regular for each n < ω, but that the homogeneous subtree R guaranteed by the theorem is strongly embedded in T. - -Call T an α-tree if each branch of T has cardinality α. Define Succ(p, P) = $\{ q \in P : q \geq p \}$, and $IS(p,P)$ to be the set of immediate successors of p in P. Suppose S is an α-tree and T is a β-tree, with 0 ≤ α ≤ β ≤ ω. S is strongly embedded in T if: - -* $S \subset T$, and the partial order on S is induced from T, - -* if $s \in S$ is nonmaximal in S and $t \in IS(s,T)$, then $|Succ(t,T) \cap IS(s,S)|=1$, - -* there exists a strictly increasing function $f$ from $\alpha$ to $\beta$ such that $S(n) \subset T(f(n))$. - -Intuitively, for S to be strongly embedded in T, - -* S must be a subset of T with the induced partial order, - -* S must preserve the branching structure of T; i.e., if a nonmaximal node in S has n immediate successors in T, then it has n immediate successors in S, - -* S must preserve the level structure of T; all nodes on a common level of S must be on a common level in T. diff --git a/wiki/wikipedia/3747.txt b/wiki/wikipedia/3747.txt deleted file mode 100644 index 40fc79b8f0e6ab51dd9ce9b58acb63138888f38a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3747.txt +++ /dev/null @@ -1,166 +0,0 @@ -The Arzelà–Ascoli theorem is a fundamental result of mathematical analysis giving necessary and sufficient conditions to decide whether every sequence of a given family of real-valued continuous functions defined on a closed and bounded interval has a uniformly convergent subsequence. The main condition is the equicontinuity of the family of functions. The theorem is the basis of many proofs in mathematics, including that of the Peano existence theorem in the theory of ordinary differential equations, Montel's theorem in complex analysis, the Peter–Weyl theorem in harmonic analysis, and various results concerning compactness of integral operators. - -The notion of equicontinuity was introduced in the late 19th century by the Italian mathematicians Cesare Arzelà and Giulio Ascoli.
A weak form of the theorem was proven by Ascoli, who established the sufficient condition for compactness, and by Arzelà, who established the necessary condition and gave the first clear presentation of the result. A further generalization of the theorem was proven by Fréchet, to sets of real-valued continuous functions with domain a compact metric space. Modern formulations of the theorem allow for the domain to be compact Hausdorff and for the range to be an arbitrary metric space. More general formulations of the theorem exist that give necessary and sufficient conditions for a family of functions from a compactly generated Hausdorff space into a uniform space to be compact in the compact-open topology; see Kelley. - -By definition, a sequence { fn }n∈N of continuous functions on an interval I = [a, b] is uniformly bounded if there is a number M such that -$$ -\left|f_n(x)\right| \le M -$$ - -for every function  fn  belonging to the sequence, and every x ∈ [a, b]. (Here, M must be independent of n and x.) - -The sequence is said to be uniformly equicontinuous if, for every ε > 0, there exists a δ > 0 such that -$$ -\left|f_n(x)-f_n(y)\right| < \varepsilon -$$ - -whenever $|x - y| < \delta$, for all functions  fn  in the sequence. (Here, δ may depend on ε, but not x, y or n.) - -One version of the theorem can be stated as follows: - -Consider a sequence of real-valued continuous functions { fn }n ∈ N defined on a closed and bounded interval [a, b] of the real line. If this sequence is uniformly bounded and uniformly equicontinuous, then there exists a subsequence { fnk }k ∈ N that converges uniformly. - -The converse is also true, in the sense that if every subsequence of { fn } itself has a uniformly convergent subsequence, then { fn } is uniformly bounded and equicontinuous. - -{{Math proof|drop=hidden|proof= - -The proof is essentially based on a diagonalization argument. The simplest case is of real-valued functions on a closed and bounded interval: - -* Let I = [a, b] ⊂ R be a closed and bounded interval. If F is an infinite set of functions  f  : I → R which is uniformly bounded and equicontinuous, then there is a sequence fn of elements of F such that fn converges uniformly on I. - -Fix an enumeration {xi}i ∈N of rational numbers in I. Since F is uniformly bounded, the set of points {f(x1)}f∈F is bounded, and hence by the Bolzano–Weierstrass theorem, there is a sequence {fn1} of distinct functions in F such that {fn1(x1)} converges. Repeating the same argument for the sequence of points {fn1(x2)}, there is a subsequence {fn2} of {fn1} such that {fn2(x2)} converges. - -By induction this process can be continued forever, and so there is a chain of subsequences -$$ -\left \{f_{n_1} \right \} \supseteq \left \{f_{n_2} \right \} \supseteq \cdots -$$ - -such that, for each k = 1, 2, 3, ..., the subsequence {fnk} converges at x1, ..., xk. Now form the diagonal subsequence {fm} whose mth term fm is the mth term in the mth subsequence {fnm}. By construction, fm converges at every rational point of I. - -Therefore, given any ε > 0 and rational xk in I, there is an integer N = N(ε, xk) such that -$$ -|f_n(x_k) - f_m(x_k)| < \tfrac{\varepsilon}{3}, \qquad n, m \ge N. -$$ - -Since the family F is equicontinuous, for this fixed ε and for every x in I, there is an open interval Ux containing x such that -$$ -|f(s)-f(t)| < \tfrac{\varepsilon}{3} -$$ - -for all f ∈ F and all s, t in I such that s, t ∈ Ux. - -The collection of intervals Ux, x ∈ I, forms an open cover of I.
Since I  is compact, by the Heine-Borel theorem this covering admits a finite subcover U1, ..., UJ. There exists an integer K such that each open interval Uj, 1 ≤ j ≤ J, contains a rational xk with 1 ≤ k ≤ K. Finally, for any t ∈ I, there are j and k so that t and xk belong to the same interval Uj. For this choice of k, - -\begin{align} - -\left |f_n(t)-f_m(t) \right| &\le \left|f_n(t) - f_n(x_k) \right| + |f_n(x_k) - f_m(x_k)| + |f_m(x_k) - f_m(t)| \\ - -&< \tfrac{\varepsilon}{3} + \tfrac{\varepsilon}{3} + \tfrac{\varepsilon}{3} = \varepsilon - -\end{align} - -for all n, m > N = max{N(ε, x1), ..., N(ε, xK)}. Consequently, the sequence {fn} is uniformly Cauchy, and therefore converges to a continuous function, as claimed. This completes the proof. - -}} - -The hypotheses of the theorem are satisfied by a uniformly bounded sequence { fn } of differentiable functions with uniformly bounded derivatives. Indeed, uniform boundedness of the derivatives implies by the mean value theorem that for all x and y, -$$ -\left|f_n(x) - f_n(y)\right| \le K |x-y|, -$$ - -where K is the supremum of the derivatives of functions in the sequence and is independent of n. So, given ε > 0, let δ = ε/2K to verify the definition of equicontinuity of the sequence. This proves the following corollary: - -* Let {fn} be a uniformly bounded sequence of real-valued differentiable functions on [a, b] such that the derivatives {fn′} are uniformly bounded. Then there exists a subsequence {fnk} that converges uniformly on [a, b]. - -If, in addition, the sequence of second derivatives is also uniformly bounded, then the derivatives also converge uniformly (up to a subsequence), and so on. Another generalization holds for continuously differentiable functions. Suppose that the functions  fn  are continuously differentiable with derivatives fn′. Suppose that fn′ are uniformly equicontinuous and uniformly bounded, and that the sequence { fn } is pointwise bounded (or just bounded at a single point). Then there is a subsequence of the { fn } converging uniformly to a continuously differentiable function. - -The diagonalization argument can also be used to show that a family of infinitely differentiable functions, whose derivatives of each order are uniformly bounded, has a uniformly convergent subsequence, all of whose derivatives are also uniformly convergent. This is particularly important in the theory of distributions. - -The argument given above proves slightly more, specifically - -* If { fn } is a uniformly bounded sequence of real-valued functions on [a, b] such that each  fn  is Lipschitz continuous with the same Lipschitz constant K: -$$ -\left|f_n(x) - f_n(y)\right| \le K|x-y| -$$ - -for all x, y ∈ [a, b] and all  fn , then there is a subsequence that converges uniformly on [a, b]. - -The limit function is also Lipschitz continuous with the same value K for the Lipschitz constant. A slight refinement is - -* A set F of functions  f  on [a, b] that is uniformly bounded and satisfies a Hölder condition of order α, 0 < α ≤ 1, with a fixed constant M, -$$ -\left|f(x) - f(y)\right| \le M |x - y|^\alpha, \qquad x, y \in [a, b] -$$ - -is relatively compact in C([a, b]). In particular, the unit ball of the Hölder space C0,α([a, b]) is compact in C([a, b]). - -This holds more generally for scalar functions on a compact metric space X satisfying a Hölder condition with respect to the metric on X.
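-
-As a brief added illustration of why the equicontinuity hypothesis cannot be dropped (a standard counterexample, not part of the statements above), consider the sequence $f_n(x) = x^n$ on [0, 1]. It is uniformly bounded by 1, yet
-$$
-\lim_{n\to\infty} x^n = \begin{cases} 0, & 0 \le x < 1, \\ 1, & x = 1, \end{cases}
-$$
-so every subsequence converges pointwise to this discontinuous limit, and hence no subsequence can converge uniformly. Correspondingly, the family is not equicontinuous at x = 1, and the derivatives $n x^{n-1}$ are not uniformly bounded.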
- -The Arzelà–Ascoli theorem holds, more generally, if the functions  fn  take values in d-dimensional Euclidean space Rd, and the proof is very simple: just apply the R-valued version of the Arzelà–Ascoli theorem d times to extract a subsequence that converges uniformly in the first coordinate, then a sub-subsequence that converges uniformly in the first two coordinates, and so on. The above examples generalize easily to the case of functions with values in Euclidean space. - -The definitions of boundedness and equicontinuity can be generalized to the setting of arbitrary compact metric spaces and, more generally still, compact Hausdorff spaces. Let X be a compact Hausdorff space, and let C(X) be the space of real-valued continuous functions on X. A subset F ⊂ C(X) is said to be equicontinuous if for every x ∈ X and every ε > 0, x has a neighborhood Ux such that -$$ -\forall y \in U_x, \forall f \in \mathbf{F} : \qquad |f(y) - f(x)| < \varepsilon. -$$ - -A set F ⊂ C(X, R) is said to be pointwise bounded if for every x ∈ X, -$$ -\sup \{ | f(x) | : f \in \mathbf{F} \} < \infty. -$$ - -A version of the theorem also holds in the space C(X) of real-valued continuous functions on a compact Hausdorff space X: - -Let X be a compact Hausdorff space. Then a subset F of C(X) is relatively compact in the topology induced by the uniform norm if and only if it is equicontinuous and pointwise bounded. - -The Arzelà–Ascoli theorem is thus a fundamental result in the study of the algebra of continuous functions on a compact Hausdorff space. - -Various generalizations of the above-quoted result are possible. For instance, the functions can assume values in a metric space or (Hausdorff) topological vector space with only minimal changes to the statement (see, for instance, Kelley): - -Let X be a compact Hausdorff space and Y a metric space. Then F ⊂ C(X, Y) is compact in the compact-open topology if and only if it is equicontinuous, pointwise relatively compact and closed. - -Here pointwise relatively compact means that for each x ∈ X, the set Fx = { f (x) :  f  ∈ F} is relatively compact in Y. - -The proof given can be generalized in a way that does not rely on the separability of the domain. On a compact Hausdorff space X, for instance, the equicontinuity is used to extract, for each ε = 1/n, a finite open covering of X such that the oscillation of any function in the family is less than ε on each open set in the cover. The role of the rationals can then be played by a set of points drawn from each open set in each of the countably many covers obtained in this way, and the main part of the proof proceeds exactly as above. - -Solutions of numerical schemes for parabolic equations are usually piecewise constant, and therefore not continuous, in time. As their jumps nevertheless tend to become small as the time step goes to $0$, it is possible to establish uniform-in-time convergence properties using a generalisation to non-continuous functions of the classical Arzelà–Ascoli theorem (see e.g. Droniou). - -Denote by $S(X,Y)$ the space of functions from $X$ to $Y$ endowed with the uniform metric -$$ -d_S(v,w)=\sup_{t\in X}d_Y(v(t),w(t)). -$$ - -Then we have the following: - -Let $X$ be a compact metric space and $Y$ a complete metric space.
Let $\{v_n\}_{n\in\mathbb{N}}$ be a sequence in $S(X,Y)$ such that there exists a function $\omega:X\times X\to[0,\infty]$ and a sequence $\{\delta_n\}_{n\in\mathbb{N}}\subset[0,\infty)$ satisfying -$$ -\lim_{d_X(t,t')\to0}\omega(t,t')=0,\quad\lim_{n\to\infty}\delta_n=0, -$$ -$$ -\forall(t,t')\in X\times X,\quad \forall n\in\mathbb{N},\quad d_Y(v_n(t),v_n(t'))\leq \omega(t,t')+\delta_n. -$$ - -Assume also that, for all $t\in X$, $\{v_n(t):n\in\mathbb{N}\}$ is relatively compact in $Y$. Then $\{v_n\}_{n\in\mathbb{N}}$ is relatively compact in $S(X,Y)$, and any limit of $\{v_n\}_{n\in\mathbb{N}}$ in this space is in $C(X,Y)$. - -Whereas most formulations of the Arzelà–Ascoli theorem assert sufficient conditions for a family of functions to be (relatively) compact in some topology, these conditions are typically also necessary. For instance, if a set F is compact in C(X), the Banach space of real-valued continuous functions on a compact Hausdorff space with respect to its uniform norm, then it is bounded in the uniform norm on C(X) and in particular is pointwise bounded. Let N(ε, U) be the set of all functions in F whose oscillation over an open subset U ⊂ X is less than ε: -$$ -N(\varepsilon, U) = \{f \mid \operatorname{osc}_U f < \varepsilon\}. -$$ - -For a fixed x ∈ X and ε, the sets N(ε, U) form an open covering of F as U varies over all open neighborhoods of x. Choosing a finite subcover then gives equicontinuity. - -* To every function g that is p-integrable on [0, 1], with 1 < p ≤ ∞, associate the function G defined on [0, 1] by -$$ -G(x) = \int_0^x g(t) \, \mathrm{d}t. -$$ - -Let F be the set of functions G corresponding to functions g in the unit ball of the space Lp([0, 1]). If q is the Hölder conjugate of p, defined by 1/p + 1/q = 1, then Hölder's inequality implies that all functions in F satisfy a Hölder condition with α = 1/q and constant M = 1. - -It follows that F is compact in C([0, 1]). This means that the correspondence g → G defines a compact linear operator T between the Banach spaces Lp([0, 1]) and C([0, 1]). Composing with the injection of C([0, 1]) into Lp([0, 1]), one sees that T acts compactly from Lp([0, 1]) to itself. The case p = 2 can be seen as a simple instance of the fact that the injection from the Sobolev space $H^1_0(\Omega)$ into L2(Ω), for Ω a bounded open set in Rd, is compact. - -*When T is a compact linear operator from a Banach space X to a Banach space Y, its transpose T ∗ is compact from the (continuous) dual Y ∗ to X ∗. This can be checked by the Arzelà-Ascoli theorem. - -Indeed, the image T(B) of the closed unit ball B of X is contained in a compact subset K of Y. The unit ball B∗ of Y ∗ defines, by restricting from Y to K, a set F of (linear) continuous functions on K that is bounded and equicontinuous. By Arzelà-Ascoli, for every sequence $\{y^*_n\}$ in B∗, there is a subsequence that converges uniformly on K, and this implies that the image $T^*(y^*_{n_k})$ of that subsequence is Cauchy in X ∗. - -*When  f  is holomorphic in an open disk D1 = B(z0, r), with modulus bounded by M, then (for example by Cauchy's formula) its derivative  f ′ has modulus bounded by 2M/r in the smaller disk D2 = B(z0, r/2). If a family of holomorphic functions on D1 is bounded by M on D1, it follows that the family F of restrictions to D2 is equicontinuous on D2. Therefore, a sequence converging uniformly on D2 can be extracted. This is a first step in the direction of Montel's theorem.
- -* Let $C([0,T],L^1(\mathbb{R}^N))$ be endowed with the uniform metric $\textstyle\sup_{t\in [0,T]}\|v(\cdot,t)-w(\cdot,t)\|_{L^1(\mathbb{R}^N)}.$ Assume that $\{u_n\}_{n\in\mathbb{N}}\subset C([0,T];L^1(\mathbb{R}^N))$, with $u_n=u_n(x,t)$, is a sequence of solutions of a certain partial differential equation (PDE), where the PDE ensures the following a priori estimates: $x\mapsto u_n(x,t)$ is equicontinuous for all $t$, $x\mapsto u_n(x,t)$ is equitight for all $t$, and, for all $(t,t')\in [0,T]\times[0,T]$ and all $n\in\mathbb{N}$, $\|u_n(\cdot,t)-u_n(\cdot,t')\|_{L^1(\mathbb{R}^N)}$ is small enough when $|t-t'|$ is small enough. Then by the Fréchet–Kolmogorov theorem, we can conclude that $\{x\mapsto u_n(x,t):n\in\mathbb{N}\}$ is relatively compact in $L^1(\mathbb{R}^N)$. Hence, we can, by (a generalization of) the Arzelà-Ascoli theorem, conclude that $\{u_n:n\in\mathbb{N}\}$ is relatively compact in $C([0,T],L^1(\mathbb{R}^N)).$ diff --git a/wiki/wikipedia/3748.txt b/wiki/wikipedia/3748.txt deleted file mode 100644 index 4dde1d2f6c05447f6f4222cc3bc416717a4ed0b8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3748.txt +++ /dev/null @@ -1,22 +0,0 @@ -In mathematics, the Carathéodory kernel theorem is a result in complex analysis and geometric function theory established by the Greek mathematician Constantin Carathéodory in 1912. The uniform convergence on compact sets of a sequence of holomorphic univalent functions, defined on the unit disk in the complex plane and fixing 0, can be formulated purely geometrically in terms of the limiting behaviour of the images of the functions. The kernel theorem has wide application in the theory of univalent functions and in particular provides the geometric basis for the Loewner differential equation. - -Let Un be a sequence of open sets in C containing 0. Let Vn be the connected component of the interior of Un ∩ Un + 1 ∩ ... containing 0. The kernel of the sequence is defined to be the union of the Vn's, provided it is non-empty; otherwise it is defined to be $ \{0\}$. Thus the kernel is either a connected open set containing 0 or the one point set $ \{0\}$. The sequence is said to converge to a kernel if each subsequence has the same kernel. - -Examples - -*If Un is an increasing sequence of connected open sets containing 0, then the kernel is just the union. - -*If Un is a decreasing sequence of connected open sets containing 0, then, if 0 is an interior point of U1 ∩ U2 ∩ ..., the sequence converges to the component of the interior containing 0. Otherwise, if 0 is not an interior point, the sequence converges to $ \{0\}$. - -Let fn(z) be a sequence of holomorphic univalent functions on the unit disk D, normalised so that fn(0) = 0 and fn′(0) > 0. Then fn converges uniformly on compacta in D to a function f if and only if Un = fn(D) converges to its kernel and this kernel is not C. If the kernel is $ \{0\}$, then f = 0. Otherwise the kernel is a connected open set U, f is univalent on D and f(D) = U. - -Using Hurwitz's theorem and Montel's theorem, it is straightforward to check that if fn tends uniformly on compacta to f then each subsequence of Un has kernel U = f(D). - -Conversely if Un converges to a kernel not equal to C, then by the Koebe quarter theorem Un contains the disk of radius fn′(0)/4 with centre 0. The assumption that U ≠ C implies that these radii are uniformly bounded. By the Koebe distortion theorem -$$ - |f_n(z)| \le f_n^\prime(0) {|z|\over (1-|z|)^2}. -$$ - -Hence the sequence fn is uniformly bounded on compact sets.
If two subsequences converge to holomorphic limits f and g, then f(0) = g(0) = 0 and f'(0), g'(0) ≥ 0. By the first part and the assumptions it follows that f(D) = g(D). Uniqueness in the Riemann mapping theorem forces f = g, so the original sequence fn is uniformly convergent on compact sets. diff --git a/wiki/wikipedia/3749.txt b/wiki/wikipedia/3749.txt deleted file mode 100644 index e714abef500c71be682143e5310954c5a20f9a4f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3749.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, Spijker's lemma is a result in the theory of rational mappings of the Riemann sphere. It states that the image of a circle under a complex rational map with numerator and denominator having degree at most n has length at most 2nπ. diff --git a/wiki/wikipedia/375.txt b/wiki/wikipedia/375.txt deleted file mode 100644 index 42720c9e01f0c0aacdcc0e049838f5c95844aae8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/375.txt +++ /dev/null @@ -1,139 +0,0 @@ -In computer science, a semaphore is a variable or abstract data type used to control access to a common resource by multiple threads and avoid critical section problems in a concurrent system such as a multitasking operating system. A trivial semaphore is a plain variable that is changed (for example, incremented or decremented, or toggled) depending on programmer-defined conditions. - -A useful way to think of a semaphore as used in a real-world system is as a record of how many units of a particular resource are available, coupled with operations to adjust that record safely (i.e., to avoid race conditions) as units are acquired or become free, and, if necessary, wait until a unit of the resource becomes available. - -Semaphores are a useful tool in the prevention of race conditions; however, their use is by no means a guarantee that a program is free from these problems. Semaphores which allow an arbitrary resource count are called counting semaphores, while semaphores which are restricted to the values 0 and 1 (or locked/unlocked, unavailable/available) are called binary semaphores and are used to implement locks. - -The semaphore concept was invented by Dutch computer scientist Edsger Dijkstra in 1962 or 1963, when Dijkstra and his team were developing an operating system for the Electrologica X8. That system eventually became known as THE multiprogramming system. - -Suppose a library has 10 identical study rooms, to be used by one student at a time. Students must request a room from the front desk if they wish to use a study room. If no rooms are free, students wait at the desk until someone relinquishes a room. When a student has finished using a room, the student must return to the desk and indicate that one room has become free. - -In the simplest implementation, the clerk at the front desk knows only the number of free rooms available, a count that is correct only if every student actually uses a room while signed up for it and returns it when done. When a student requests a room, the clerk decreases this number. When a student releases a room, the clerk increases this number. The room can be used for as long as desired, and so it is not possible to book rooms ahead of time. - -In this scenario the front desk count-holder represents a counting semaphore, the rooms are the resource, and the students represent processes/threads. The value of the semaphore in this scenario is initially 10, with all rooms empty.
When a student requests a room, they are granted access, and the value of the semaphore is changed to 9. After the next student comes, it drops to 8, then 7 and so on. If someone requests a room and the current value of the semaphore is 0, they are forced to wait until a room is freed (when the count is increased from 0). If one of the rooms was released, but there are several students waiting, then any method can be used to select the one who will occupy the room (like FIFO or flipping a coin). And of course, a student must inform the clerk about releasing their room only after actually leaving it; otherwise an awkward situation can arise in which the departing student is still packing up their textbooks when another student enters the room. - -When used to control access to a pool of resources, a semaphore tracks only how many resources are free; it does not keep track of which of the resources are free. Some other mechanism (possibly involving more semaphores) may be required to select a particular free resource. - -The paradigm is especially powerful because the semaphore count may serve as a useful trigger for a number of different actions. The librarian above may turn the lights off in the study hall when there are no students remaining, or may place a sign that says the rooms are very busy when most of the rooms are occupied. - -The success of the protocol requires applications to follow it correctly. Fairness and safety are likely to be compromised (which practically means a program may behave slowly, act erratically, hang or crash) if even a single process acts incorrectly. This includes: - -* requesting a resource and forgetting to release it; - -* releasing a resource that was never requested; - -* holding a resource for a long time without needing it; - -* using a resource without requesting it first (or after releasing it). - -Even if all processes follow these rules, multi-resource deadlock may still occur when there are different resources managed by different semaphores and when processes need to use more than one resource at a time, as illustrated by the dining philosophers problem. - -Counting semaphores are equipped with two operations, historically denoted as P and V (see below for alternative names). Operation V increments the semaphore S, and operation P decrements it. - -The value of the semaphore S is the number of units of the resource that are currently available. The P operation wastes time or sleeps until a resource protected by the semaphore becomes available, at which time the resource is immediately claimed. The V operation is the inverse: it makes a resource available again after the process has finished using it. - -One important property of semaphore S is that its value cannot be changed except by using the V and P operations. - -A simple way to understand wait (P) and signal (V) operations is: - -* wait: Decrements the value of semaphore variable by 1. If the new value of the semaphore variable is negative, the process executing wait is blocked (i.e., added to the semaphore's queue). Otherwise, the process continues execution, having used a unit of the resource. - -* signal: Increments the value of semaphore variable by 1. After the increment, if the pre-increment value was negative (meaning there are processes waiting for a resource), it transfers a blocked process from the semaphore's waiting queue to the ready queue.
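-
-To make the wait/signal pair concrete, the following is a minimal sketch in Python of a counting semaphore built on a condition variable; the class and method names are illustrative, and real Python code should simply use the standard library's threading.Semaphore:
-
-import threading
-
-class CountingSemaphore:
-    def __init__(self, value=1):
-        if value < 0:
-            raise ValueError("semaphore value must be non-negative")
-        self._value = value                 # units currently available
-        self._cond = threading.Condition()  # guards _value and parks waiters
-
-    def P(self):
-        # wait/acquire: claim one unit, sleeping while none is free
-        with self._cond:
-            while self._value == 0:
-                self._cond.wait()
-            self._value -= 1
-
-    def V(self):
-        # signal/release: return one unit and wake one waiting thread
-        with self._cond:
-            self._value += 1
-            self._cond.notify()
-
-rooms = CountingSemaphore(10)  # the ten study rooms from the example above
-rooms.P()                      # a student takes a room (blocks if none is free)
-rooms.V()                      # the student hands the room back
-
-One design choice is worth noting: this sketch blocks while the count is zero and never lets it go negative, whereas the formulation above lets the value go negative so that its magnitude records how many processes are waiting; from the caller's point of view the two variants behave the same.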
- -Many operating systems provide efficient semaphore primitives that unblock a waiting process when the semaphore is incremented. This means that processes do not waste time checking the semaphore value unnecessarily. - -The counting semaphore concept can be extended with the ability to claim or return more than one "unit" from the semaphore, a technique implemented in Unix. The modified V and P operations are as follows, using square brackets to indicate atomic operations, i.e., operations which appear indivisible from the perspective of other processes: - -function V(semaphore S, integer I): - -[S ← S + I] - -function P(semaphore S, integer I): - -repeat: - -[if S ≥ I: - -S ← S − I - -break] - -However, the remainder of this section refers to semaphores with unary V and P operations, unless otherwise specified. - -To avoid starvation, a semaphore has an associated queue of processes (usually with FIFO semantics). If a process performs a P operation on a semaphore that has the value zero, the process is added to the semaphore's queue and its execution is suspended. When another process increments the semaphore by performing a V operation, and there are processes on the queue, one of them is removed from the queue and resumes execution. When processes have different priorities the queue may be ordered by priority, so that the highest priority process is taken from the queue first. - -If the implementation does not ensure atomicity of the increment, decrement and comparison operations, then there is a risk of increments or decrements being forgotten, or of the semaphore value becoming negative. Atomicity may be achieved by using a machine instruction that is able to read, modify and write the semaphore in a single operation. In the absence of such a hardware instruction, an atomic operation may be synthesized through the use of a software mutual exclusion algorithm. On uniprocessor systems, atomic operations can be ensured by temporarily suspending preemption or disabling hardware interrupts. This approach does not work on multiprocessor systems where it is possible for two programs sharing a semaphore to run on different processors at the same time. To solve this problem in a multiprocessor system a locking variable can be used to control access to the semaphore. The locking variable is manipulated using a test-and-set-lock command. - -Consider a variable A and a boolean variable S. A is only accessed when S is marked true. Thus, S is a semaphore for A. - -One can imagine a stoplight signal (S) just before a train station (A). In this case, if the signal is green, then one can enter the train station. If it is yellow or red (or any other color), the train station cannot be accessed. - -Consider a system that can only support ten users (S=10). Whenever a user logs in, P is called, decrementing the semaphore S by 1. Whenever a user logs out, V is called, incrementing S by 1 representing a login slot that has become available. When S is 0, any users wishing to log in must wait until S increases; the login request is enqueued onto a FIFO queue until a slot is freed. Mutual exclusion is used to ensure that requests are enqueued in order. Whenever S increases (login slots available), a login request is dequeued, and the user owning the request is allowed to log in. If S is already greater than 0, then login requests are immediately dequeued. - -In the producer–consumer problem, one process (the producer) generates data items and another process (the consumer) receives and uses them.
They communicate using a queue of maximum size N and are subject to the following conditions: - -* the consumer must wait for the producer to produce something if the queue is empty; - -* the producer must wait for the consumer to consume something if the queue is full. - -The semaphore solution to the producer–consumer problem tracks the state of the queue with two semaphores: emptyCount, the number of empty places in the queue, and fullCount, the number of elements in the queue. To maintain integrity, emptyCount may be lower (but never higher) than the actual number of empty places in the queue, and fullCount may be lower (but never higher) than the actual number of items in the queue. Empty places and items represent two kinds of resources, empty boxes and full boxes, and the semaphores emptyCount and fullCount maintain control over these resources. - -The binary semaphore useQueue ensures that the integrity of the state of the queue itself is not compromised, for example by two producers attempting to add items to an empty queue simultaneously, thereby corrupting its internal state. Alternatively a mutex could be used in place of the binary semaphore. - -The emptyCount is initially N, fullCount is initially 0, and useQueue is initially 1. - -The producer does the following repeatedly: - -produce: - -P(emptyCount) - -P(useQueue) - -putItemIntoQueue(item) - -V(useQueue) - -V(fullCount) - -The consumer does the following repeatedly: - -consume: - -P(fullCount) - -P(useQueue) - -item ← getItemFromQueue() - -V(useQueue) - -V(emptyCount) - -Below is a substantive example: - -# A single consumer enters its critical section. Since fullCount is 0, the consumer blocks. - -# Several producers enter the producer critical section. No more than N producers may enter their critical section due to emptyCount constraining their entry. - -# The producers, one at a time, gain access to the queue through useQueue and deposit items in the queue. - -# Once the first producer exits its critical section, fullCount is incremented, allowing one consumer to enter its critical section. - -Note that emptyCount may be much lower than the actual number of empty places in the queue, for example in the case where many producers have decremented it but are waiting their turn on useQueue before filling empty places. Note that emptyCount + fullCount ≤ N always holds, with equality if and only if no producers or consumers are executing their critical sections. - -The canonical names V and P come from the initials of Dutch words. V is generally explained as verhogen ("increase"). Several explanations have been offered for P, including proberen ("to test" or "to try"), passeren ("pass"), and pakken ("grab"). Dijkstra's earliest paper on the subject - -In ALGOL 68, the Linux kernel, and in some English textbooks, the V and P operations are called, respectively, up and down. In software engineering practice, they are often called signal and wait, release and acquire (which the standard Java library uses), or post and pend. Some texts call them vacate and procure to match the original Dutch initials. - -A mutex is a locking mechanism that sometimes uses the same basic implementation as the binary semaphore. The differences between them are in how they are used. While a binary semaphore may be colloquially referred to as a mutex, a true mutex has a more specific use-case and definition, in that only the task that locked the mutex is supposed to unlock it.
This constraint aims to handle some potential problems of using semaphores: - -# Priority inversion: If the mutex knows who locked it and is supposed to unlock it, it is possible to promote the priority of that task whenever a higher-priority task starts waiting on the mutex. - -# Premature task termination: Mutexes may also provide deletion safety, where the task holding the mutex cannot be accidentally deleted. - -# Termination deadlock: If a mutex-holding task terminates for any reason, the OS can release the mutex and signal waiting tasks of this condition. - -# Recursion deadlock: a task is allowed to lock a reentrant mutex multiple times, provided it unlocks it an equal number of times. - -# Accidental release: An error is raised on the release of the mutex if the releasing task is not its owner. diff --git a/wiki/wikipedia/3750.txt b/wiki/wikipedia/3750.txt deleted file mode 100644 index d6c51b487344278555822b5993842a898a322dca..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3750.txt +++ /dev/null @@ -1,27 +0,0 @@ -The Boy or Girl paradox surrounds a set of questions in probability theory, which are also known as The Two Child Problem or Mr. Smith's Children. The question came to wide attention when Martin Gardner featured it in his October 1959 "Mathematical Games" column in Scientific American. He titled it The Two Children Problem, and phrased the paradox as follows: - -*Mr. Jones has two children. The older child is a girl. What is the probability that both children are girls? - -*Mr. Smith has two children. At least one of them is a boy. What is the probability that both children are boys? - -Gardner initially gave the answers 1/2 and 1/3, respectively, but later acknowledged that the second question was ambiguous. - -Other variants of this question, with varying degrees of ambiguity, have been popularized by Ask Marilyn in Parade Magazine, John Tierney of The New York Times, and Leonard Mlodinow in The Drunkard's Walk. One scientific study showed that when identical information was conveyed, but with different partially ambiguous wordings that emphasized different points, the percentage of MBA students who answered 1/2 changed from 85% to 39%. - -The two possible answers share a number of assumptions. First, it is assumed that the space of all possible events can be easily enumerated, providing an extensional definition of outcomes: {BB, BG, GB, GG}, and that the probability of these outcomes is absolute, not conditional. They leave it to the reader to decide whether the procedure that yields 1/3 as the answer is reasonable for the problem as stated above. - -For example, say an observer sees Mr. Smith on a walk with just one of his children. If he has two boys then that child must be a boy. But if he has a boy and a girl, that child could have been a girl. So seeing him with a boy eliminates not only the combinations where he has two girls, but also the combinations where he has a son and a daughter and chooses the daughter to walk with. - -So, while it is certainly true that every possible Mr. Smith has at least one boy (i.e., the condition is necessary), it cannot be assumed that every Mr. Smith with at least one boy is intended. That is, the problem statement does not say that having a boy is a sufficient condition for Mr. Smith to be identified as having a boy this way.
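-
-The difference between these two sampling procedures is easy to check empirically. The following small Monte Carlo sketch in Python (an added illustration, not part of the original problem statements) draws families uniformly from {BB, BG, GB, GG} and applies the two conditioning procedures just described:
-
-import random
-
-def simulate(trials=200_000, seed=0):
-    rng = random.Random(seed)
-    have_boy = both_a = 0  # procedure A: family known to have at least one boy
-    seen_boy = both_b = 0  # procedure B: a randomly seen child happens to be a boy
-    for _ in range(trials):
-        kids = (rng.choice("BG"), rng.choice("BG"))
-        if "B" in kids:
-            have_boy += 1
-            both_a += kids == ("B", "B")
-        if rng.choice(kids) == "B":
-            seen_boy += 1
-            both_b += kids == ("B", "B")
-    print("P(both boys | at least one boy)  =", both_a / have_boy)   # about 1/3
-    print("P(both boys | seen child is boy) =", both_b / seen_boy)   # about 1/2
-
-simulate()
-
-Procedure A keeps every family with at least one boy and estimates 1/3; procedure B, matching the walking-observer story above, keeps only families in which a randomly chosen child turned out to be a boy and estimates 1/2.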
- -Commenting on Gardner's version of the problem, Bar-Hillel and Falk note that the answer depends on the process by which it came to be known that at least one child is a boy. - -However, the "1/3" answer is obtained only by assuming P(ALOB|BG) = P(ALOB|GB) = 1 (writing ALOB for the event that "at least one is a boy" is reported, and ALOG for the corresponding report about a girl), which implies P(ALOG|BG) = P(ALOG|GB) = 0, that is, the other child's sex is never mentioned although it is present. As Marks and Smith say, "This extreme assumption is never included in the presentation of the two-child problem, however, and is surely not what people have in mind when they present it." - -Vos Savant's articles were discussed by Carlton and Stansfield. In this study, the paradox was posed to participants in two ways: - -*"Mr. Smith says: 'I have two children and at least one of them is a boy.' Given this information, what is the probability that the other child is a boy?" - -*"Mr. Smith says: 'I have two children and it is not the case that they are both girls.' Given this information, what is the probability that both children are boys?" - -The authors argue that the first formulation gives the reader the mistaken impression that there are two possible outcomes for the "other child", whereas the second formulation gives the reader the impression that there are four possible outcomes, of which one has been rejected (resulting in 1/3 being the probability of both children being boys, as there are 3 remaining possible outcomes, only one of which is that both of the children are boys). The study found that 85% of participants answered 1/2 for the first formulation, while only 39% responded that way to the second formulation. The authors argued that the reason people respond differently to each question (along with other similar problems, such as the Monty Hall Problem and Bertrand's box paradox) is because of the use of naive heuristics that fail to properly define the number of possible outcomes. diff --git a/wiki/wikipedia/3751.txt b/wiki/wikipedia/3751.txt deleted file mode 100644 index 5e30df24298e6c1fd106f2829038d9a2dbce5fa9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3751.txt +++ /dev/null @@ -1,3 +0,0 @@ -Dick Herman Jacobus de Jongh (born 19 October 1939, Enschede) is a Dutch logician and mathematician and a retired professor at the University of Amsterdam. - -He received his PhD degree in 1968 from the University of Wisconsin–Madison under the supervision of Stephen Kleene with a dissertation entitled Investigations on the Intuitionistic Propositional Calculus. De Jongh is mostly known for his work on proof theory, provability logic and intuitionistic logic. De Jongh is a member of the group collectively publishing under the pseudonym L. T. F. Gamut. In 2004, on the occasion of his retirement, the Institute for Logic, Language and Computation at the University of Amsterdam published a festschrift in his honor. diff --git a/wiki/wikipedia/3752.txt b/wiki/wikipedia/3752.txt deleted file mode 100644 index 9e33f2714915b34825c6319e09c86b3e0e7852dd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3752.txt +++ /dev/null @@ -1,93 +0,0 @@ -In social choice theory, Arrow's impossibility theorem, the general possibility theorem or Arrow's paradox is an impossibility theorem stating that when voters have three or more distinct alternatives (options), no ranked voting electoral system can convert the ranked preferences of individuals into a community-wide (complete and transitive) ranking while also meeting a specified set of criteria: unrestricted domain, non-dictatorship, Pareto efficiency, and independence of irrelevant alternatives.
The theorem is often cited in discussions of voting theory as it is further interpreted by the Gibbard–Satterthwaite theorem. The theorem is named after economist and Nobel laureate Kenneth Arrow, who demonstrated the theorem in his doctoral thesis and popularized it in his 1951 book Social Choice and Individual Values. The original paper was titled "A Difficulty in the Concept of Social Welfare". - -In short, the theorem states that no rank-order electoral system can be designed that always satisfies these three "fairness" criteria: - -* If every voter prefers alternative X over alternative Y, then the group prefers X over Y. - -* If every voter's preference between X and Y remains unchanged, then the group's preference between X and Y will also remain unchanged (even if voters' preferences between other pairs like X and Z, Y and Z, or Z and W change). - -* There is no "dictator": no single voter possesses the power to always determine the group's preference. - -Cardinal voting electoral systems are not covered by the theorem, as they convey more information than rank orders. However, Gibbard's theorem shows that strategic voting remains a problem. - -The axiomatic approach Arrow adopted can treat all conceivable rules (that are based on preferences) within one unified framework. In that sense, the approach is qualitatively different from the earlier one in voting theory, in which rules were investigated one by one. One can therefore say that the contemporary paradigm of social choice theory started from this theorem. - -The practical consequences of the theorem are debatable: Arrow has said "Most systems are not going to work badly all of the time. All I proved is that all can work badly at times." A later version of Arrow's theorem replaced the monotonicity and non-imposition criteria with: - -; Pareto efficiency, or unanimity: If every individual prefers a certain option to another, then so must the resulting societal preference order. This, again, is a demand that the social welfare function will be minimally sensitive to the preference profile. - -This later version is more general, having weaker conditions. The axioms of monotonicity, non-imposition, and IIA together imply Pareto efficiency, whereas Pareto efficiency (itself implying non-imposition) and IIA together do not imply monotonicity. - -The IIA condition has three purposes (or effects): - -;Normative: Irrelevant alternatives should not matter. - -;Practical: Use of minimal information. - -;Strategic: Providing the right incentives for the truthful revelation of individual preferences. Though the strategic property is conceptually different from IIA, it is closely related. - -Arrow's death-of-a-candidate example (1963, page 26) and Kirman and Sondermann point out that when one drops the assumption that there are only finitely many individuals, one can find aggregation rules that satisfy all of Arrow's other conditions. - -However, such aggregation rules are practically of limited interest, since they are based on ultrafilters, highly non-constructive mathematical objects. In particular, Kirman and Sondermann argue that there is an "invisible dictator" behind such a rule. It has also been shown that such a rule violates algorithmic computability. These results can be seen to establish the robustness of Arrow's theorem. - -On the other hand, the ultrafilters (indeed, constructing them in an infinite model relies on the axiom of choice) are inherent in finite models as well (with no need of the axiom of choice).
- -They can be interpreted as decisive hierarchies, with the only difference that the hierarchy's top level - Arrow's dictator - always exists in a finite model but can be unattainable (= missing) in an infinite hierarchy. - -In the latter case, the "invisible dictator" is nothing else but the infinite decisive hierarchy itself. - -If desired, it can be complemented with a limit point, which then becomes a "visible dictator". - -Since dictators are inseparable from decisive hierarchies, the Dictatorship prohibition automatically prohibits decisive hierarchies, which is much less self-evident than the Dictatorship prohibition. - -See also paragraph "Relaxing the Dictatorship prohibition". - -When there are only two alternatives to choose from, May's theorem shows that only simple majority rule satisfies a certain set of criteria (e.g., equal treatment of individuals and of alternatives; increased support for a winning alternative should not make it into a losing one). On the other hand, when there are at least three alternatives, Arrow's theorem points out the difficulty of collective decision making. Why is there such a sharp difference between the case of fewer than three alternatives and that of at least three alternatives? - -Nakamura's theorem (about the core of simple games) gives an answer more generally. It establishes that if the number of alternatives is less than a certain integer called the Nakamura number, then the rule in question will identify "best" alternatives without any problem; if the number of alternatives is greater than or equal to the Nakamura number, then the rule will not always work, since for some profile a voting paradox (a cycle such as alternative A socially preferred to alternative B, B to C, and C to A) will arise. Since the Nakamura number of majority rule is 3 (except the case of four individuals), one can conclude from Nakamura's theorem that majority rule can deal with up to two alternatives rationally. Some super-majority rules (such as those requiring 2/3 of the votes) can have a Nakamura number greater than 3, but such rules violate other conditions given by Arrow. - -A common way "around" Arrow's paradox is limiting the alternative set to two alternatives. Thus, whenever more than two alternatives should be put to the test, it seems very tempting to use a mechanism that pairs them and votes by pairs. As tempting as this mechanism seems at first glance, it is generally far from satisfying even Pareto efficiency, not to mention IIA. The specific order by which the pairs are decided strongly influences the outcome. This means that the person controlling the order by which the choices are paired (the agenda maker) has great control over the outcome. This is not necessarily a bad feature of the mechanism. Many sports use the tournament mechanism—essentially a pairing mechanism—to choose a winner. This gives considerable opportunity for weaker teams to win, thus adding interest and tension throughout the tournament. In any case, when viewing the entire voting process as one game, Arrow's theorem still applies. - -Another approach is relaxing the universality condition, which means restricting the domain of aggregation rules. The best-known result along this line assumes "single peaked" preferences. - -Duncan Black has shown that if there is only one dimension on which every individual has a "single-peaked" preference, then all of Arrow's conditions are met by majority rule. Suppose that there is some predetermined linear ordering of the alternative set.
An individual's preference is single-peaked with respect to this ordering if he has some special place that he likes best along that line, and his dislike for an alternative grows larger as the alternative goes further away from that spot (i.e., the graph of his utility function has a single peak if alternatives are placed according to the linear ordering on the horizontal axis). For example, if voters were voting on where to set the volume for music, it would be reasonable to assume that each voter had their own ideal volume preference and that as the volume got progressively too loud or too quiet they would be increasingly dissatisfied. - -If the domain is restricted to profiles in which every individual has a single-peaked preference with respect to the linear ordering, then simple aggregation rules, which include majority rule, have an acyclic (defined below) social preference, hence a "best" alternative. In particular, when there is an odd number of individuals, then the social preference becomes transitive, and the socially "best" alternative is equal to the median of all the peaks of the individuals (Black's median voter theorem). Under single-peaked preferences, the majority rule is in some respects the most natural voting mechanism. - -One can define the notion of "single-peaked" preferences on higher-dimensional sets of alternatives. However, one can identify the "median" of the peaks only in exceptional cases. Instead, we typically have the destructive situation suggested by McKelvey's Chaos Theorem: for any x and y, one can find a sequence of alternatives such that x is beaten by x1 by a majority, x1 by x2, up to xk by y. - -By relaxing the transitivity of social preferences, we can find aggregation rules that satisfy Arrow's other conditions. If we impose neutrality (equal treatment of alternatives) on such rules, however, there exists an individual who has a "veto". So the possibility provided by this approach is also very limited. - -First, suppose that a social preference is quasi-transitive (instead of transitive); this means that the strict preference $\succ$ ("better than") is transitive: if $x \succ y$ and $y \succ z$, then $x \succ z$. Then, there do exist non-dictatorial aggregation rules satisfying Arrow's conditions, but such rules are oligarchic. This means that there exists a coalition L such that L is decisive (if every member in L prefers x to y, then the society prefers x to y), and each member in L has a veto (if she prefers x to y, then the society cannot prefer y to x). - -Second, suppose that a social preference is acyclic (instead of transitive): there do not exist alternatives $x_1, \ldots, x_k$ that form a cycle ($x_1 \succ x_2, x_2 \succ x_3, \ldots, x_{k-1} \succ x_k, x_k \succ x_1$). Then, provided that there are at least as many alternatives as individuals, an aggregation rule satisfying Arrow's other conditions is collegial. This means that there are individuals who belong to the intersection ("collegium") of all decisive coalitions. If there is someone who has a veto, then he belongs to the collegium. If the rule is assumed to be neutral, then it does have someone who has a veto. - -Finally, Brown's theorem left open the case of acyclic social preferences where the number of alternatives is less than the number of individuals. One can give a definite answer for that case using the Nakamura number. See limiting the number of alternatives. - -There are numerous examples of aggregation rules satisfying Arrow's conditions except IIA.
The Borda rule is one of them. These rules, however, are susceptible to strategic manipulation by individuals. - -See also Interpretations of the theorem above. - -Wilson (1972) shows that if an aggregation rule is non-imposed and non-null, then there is either a dictator or an inverse dictator, provided that Arrow's conditions other than Pareto are also satisfied. Here, an inverse dictator is an individual i such that whenever i prefers x to y, then the society prefers y to x. - -Amartya Sen offered both relaxation of transitivity and removal of the Pareto principle. He demonstrated another interesting impossibility result, known as the "impossibility of the Paretian Liberal" (see liberal paradox for details). Sen went on to argue that this demonstrates the futility of demanding Pareto optimality in relation to voting mechanisms. - -Andranik Tangian (2010) introduced measures of the dictator's "representativeness", for instance, the "popularity index" defined as the average size of the social group whose pairwise preferences are shared (= represented) by the dictator, averaged over all pairs of alternatives and all preference profiles. It was shown that there always exist "good" Arrow's dictators who on the average represent a majority. - -Since such dictators are rather representatives of the society, like democratically elected presidents, there are no self-evident reasons to prohibit them. When the notion of dictator is restricted to "bad" ones only, i.e. those who on the average represent a minority, Arrow's axioms were proven to be consistent. - -In social decision making, to rank all alternatives is not usually a goal. It often suffices to find some alternative. The approach focusing on choosing an alternative investigates either social choice functions (functions that map each preference profile into an alternative) or social choice rules (functions that map each preference profile into a subset of alternatives). - -As for social choice functions, the well-known Gibbard–Satterthwaite theorem states that if a social choice function whose range contains at least three alternatives is strategy-proof, then it is dictatorial. - -As for social choice rules, we should assume there is a social preference behind them. That is, we should regard a rule as choosing the maximal elements ("best" alternatives) of some social preference. The set of maximal elements of a social preference is called the core. Conditions for existence of an alternative in the core have been investigated in two approaches. The first approach assumes that preferences are at least acyclic (which is necessary and sufficient for the preferences to have a maximal element on any finite subset). For this reason, it is closely related to relaxing transitivity. The second approach drops the assumption of acyclic preferences. Kumabe and Mihara adopt this approach. They make a more direct assumption that individual preferences have maximal elements, and examine conditions for the social preference to have a maximal element. See Nakamura number for details of these two approaches. - -Arrow originally rejected cardinal utility as a meaningful tool for expressing social welfare, and so focused his theorem on preference rankings, but later stated that a cardinal score system with three or four classes "is probably the best".
- -Whether such a claim is correct depends on how each condition is reformulated.{{refn - -| No voting method that nontrivially uses cardinal utility satisfies Arrow's IIA (in which preference profiles are replaced by lists of ballots or lists of utilities). For this reason, a weakened notion of IIA is proposed (e.g., Sen). The notion requires that the social ranking of two alternatives depend only on the levels of utility attained by individuals at the two alternatives. (More formally, a social welfare functional $F$ is a function that maps each list $u=(u_1, \ldots, u_n)$ of utility functions into a social preference. $F$ satisfies IIA (for social welfare functionals) if for all lists $u, u'$ and for all alternatives $x, y$, if $u_i(x)=u'_i(x)$ and $u_i(y)=u'_i(y)$ for all $i$, then $xF(u)y \text{ iff } xF(u')y$.) Many cardinal voting methods (including range voting) satisfy the weakened version of IIA. - -}} Other rated electoral systems which pass certain generalizations of Arrow's criteria include approval voting and majority judgment. Note that Arrow's theorem does not apply to single-winner methods such as these, but Gibbard's theorem still does: no non-defective electoral system is fully strategy-free, so the informal dictum that "no electoral system is perfect" still has a mathematical basis. - -Finally, though not an approach investigating some kind of rules, there is a criticism by James M. Buchanan, Charles Plott, and others. It argues that it is silly to think that there might be social preferences that are analogous to individual preferences. Arrow (1963, Chapter 8) answers this sort of criticism, seen in the early period, which comes at least partly from misunderstanding. diff --git a/wiki/wikipedia/3753.txt b/wiki/wikipedia/3753.txt deleted file mode 100644 index e3882697c12bebe17c6602359f0bf8c7f1f814f4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3753.txt +++ /dev/null @@ -1,104 +0,0 @@ -In complex analysis, a branch of mathematics, the Casorati–Weierstrass theorem describes the behaviour of holomorphic functions near their essential singularities. It is named for Karl Theodor Wilhelm Weierstrass and Felice Casorati. In Russian literature it is called Sokhotski's theorem. - -Start with some open subset $U$ in the complex plane containing the number $z_0$, and a function $f$ that is holomorphic on $U\ \backslash\ \{z_0\}$, but has an essential singularity at $z_0$. The Casorati–Weierstrass theorem then states that - -if $V$ is any neighbourhood of $z_0$ contained in $U$, then $f(V\ \backslash\ \{z_0\})$ is dense in $\mathbb{C}$. - -This can also be stated as follows: - -for any $\varepsilon > 0$, $\delta > 0$, and any complex number $w$, there exists a complex number $z$ in $U$ with $0<|z-z_0|<\delta$ and $|f(z)-w|<\varepsilon$. - -Or in still more descriptive terms: $f$ comes arbitrarily close to any complex value in every neighbourhood of $z_0$. - -The theorem is considerably strengthened by Picard's great theorem, which states, in the notation above, that $f$ assumes every complex value, with one possible exception, infinitely often on $V$. - -In the case that $f$ is an entire function and $z_0=\infty$, the theorem says that the values $f(z)$ approach every complex number and $\infty$, as $z$ tends to infinity. - -It is remarkable that this does not hold for holomorphic maps in higher dimensions, as the famous example of Pierre Fatou shows.
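-
-Before the worked example below, here is a quick added numerical check of this behaviour (a sketch; the target value w is an arbitrary choice). For $f(z)=e^{1/z}$, solving $e^{1/z}=w$ gives $z_k = 1/(\log w + 2\pi i k)$, so $z_k \to 0$ as $k \to \infty$ while $f(z_k) = w$ exactly; every nonzero w is therefore attained in every punctured neighbourhood of 0:
-
-import cmath
-
-def z_k(w, k):
-    # Solve exp(1/z) = w by taking 1/z = Log w + 2*pi*i*k for an integer k.
-    return 1 / (cmath.log(w) + 2j * cmath.pi * k)
-
-w = 3 - 4j  # arbitrary nonzero target value
-for k in (1, 10, 100, 1000):
-    z = z_k(w, k)
-    print(abs(z), cmath.exp(1 / z))  # |z| shrinks toward 0; exp(1/z) stays w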
- -The function f(z) = exp(1/z) has an essential singularity at 0, but the function g(z) = 1/z^3 does not (it has a pole at 0). - -Consider the function -$$ -f(z)=e^{1/z}. -$$ - -This function has the following Laurent series about the essential singular point at 0: -$$ -f(z)=\displaystyle\sum_{n=0}^{\infty}\frac{1}{n!}z^{-n}. -$$ - -Because $f'(z) =\frac{-e^{\frac{1}{z}}}{z^{2}}$ exists for all points z ≠ 0, we know that f(z) is analytic in a punctured neighborhood of z = 0. Hence z = 0 is an isolated singularity, as well as being an essential singularity. - -Using a change of variable to polar coordinates $z=re^{i \theta }$, our function $f(z) = e^{1/z}$ becomes: -$$ -f(z)=e^{\frac{1}{r}e^{-i\theta}}=e^{\frac{1}{r}\cos(\theta)}e^{-\frac{1}{r}i \sin(\theta)}. -$$ - -Taking the absolute value of both sides: -$$ -\left| f(z) \right| = \left| e^{\frac{1}{r}\cos \theta} \right| \left| e^{-\frac{1}{r}i \sin(\theta)} \right | =e^{\frac{1}{r}\cos \theta}. -$$ - -Thus, for values of θ such that cos θ > 0, we have $|f(z)|\rightarrow\infty$ as $r \rightarrow 0$, and for $\cos \theta <0$, $f(z) \rightarrow 0$ as $r \rightarrow 0$. - -Consider what happens, for example, when z takes values on a circle of diameter 1/R tangent to the imaginary axis. This circle is given by r = (1/R) cos θ. Then, -$$ -f(z) = e^{R} \left[ \cos \left( R\tan \theta \right) - i \sin \left( R\tan \theta \right) \right] -$$ - -and -$$ -\left| f(z) \right| = e^R. -$$ - -Thus, $\left| f(z) \right|$ may take any positive value by the appropriate choice of R. As $z \rightarrow 0$ on the circle, $ \theta \rightarrow \frac{\pi}{2}$ with R fixed. So this part of the equation: -$$ -\left[ \cos \left( R \tan \theta \right) - i \sin \left( R \tan \theta \right) \right] -$$ - -takes on all values on the unit circle infinitely often. Hence f(z) takes on the value of every number in the complex plane except for zero infinitely often. - -A short proof of the theorem is as follows: - -Take as given that function f is meromorphic on some punctured neighborhood V \ {z0}, and that z0 is an essential singularity. Assume by way of contradiction that some value b exists that the function can never get close to; that is: assume that there is some complex value b and some ε > 0 such that |f(z) - b| ≥ ε for all z in V at which f is defined. - -Then the new function: -$$ -g(z) = \frac{1}{f(z) - b} -$$ - -must be holomorphic on V \ {z0}, with zeroes at the poles of f, and bounded by 1/ε. It can therefore be analytically continued (or continuously extended, or holomorphically extended) to all of V by Riemann's theorem on removable singularities. So the original function can be expressed in terms of g: -$$ -f(z) = \frac{1}{g(z)} + b -$$ - -for all arguments z in V \ {z0}. Consider the two possible cases for -$$ -\lim_{z \to z_0} g(z). -$$ - -If the limit is 0, then f has a pole at z0. If the limit is not 0, then z0 is a removable singularity of f. Both possibilities contradict the assumption that the point z0 is an essential singularity of the function f. Hence the assumption is false and the theorem holds. - -The history of this important theorem is described by Collingwood and Lohwater. - -It was published by Weierstrass in 1876 (in German) and by Sokhotski in 1868 in his Master thesis (in Russian). So it was called Sokhotski's theorem in the Russian literature and Weierstrass's theorem in the Western literature. - -The same theorem was published by Casorati in 1868, and by Briot and Bouquet in the first edition of their book (1859).
- -However, Briot and Bouquet removed this theorem from the second edition (1875). diff --git a/wiki/wikipedia/3754.txt b/wiki/wikipedia/3754.txt deleted file mode 100644 index b984480bef8dab03ccd16ee43db3fc32226bb433..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3754.txt +++ /dev/null @@ -1,11 +0,0 @@ -In set theory, when dealing with sets of infinite size, the term almost or nearly is used to refer to all but a negligible amount of elements in the set. The notion of "negligible" depends on the context, and may mean "of measure zero" (in a measure space), "finite" (when infinite sets are involved), or "countable" (when uncountably infinite sets are involved). - -For example: - -*The set $S = \{ n \in \mathbb{N} \mid n \ge k \}$ is almost $\mathbb{N}$ for any $k$ in $\mathbb{N}$, because only finitely many natural numbers are less than $k$. - -*The set of prime numbers is not almost $\mathbb{N}$, because there are infinitely many natural numbers that are not prime numbers. - -*The set of transcendental numbers is almost $\mathbb{R}$, because the algebraic real numbers form a countable subset of the set of real numbers (which is uncountable). - -*The Cantor set is uncountably infinite, but has Lebesgue measure zero. So almost all real numbers in (0, 1) are members of the complement of the Cantor set. diff --git a/wiki/wikipedia/3755.txt b/wiki/wikipedia/3755.txt deleted file mode 100644 index 5fa6289199a62963a3b57ecaacf914fca6baa0c6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3755.txt +++ /dev/null @@ -1,15 +0,0 @@ -Remote Differential Compression (RDC) is a client–server synchronization algorithm that allows the contents of two files to be synchronized by communicating only the differences between them. It was introduced with Microsoft Windows Server 2003 R2 and is included with later Windows client and server operating systems, but as of 2019 it is no longer being developed and is not used by any Microsoft product. - -Unlike Binary Delta Compression (BDC), which is designed to operate only on known versions of a single file, RDC does not make assumptions about file similarity or versioning. The differences between files are computed on the fly, so RDC is suitable for efficient synchronization of files that have been updated independently, where network bandwidth is small, or where the files are large but the differences between them are small. - -The algorithm used is based on fingerprinting blocks of each file locally at both replication partners. Since many types of file changes can cause the file contents to move without other significant change (for example, a small insertion or deletion at the beginning of a file can cause the rest of the file to become misaligned with the original content), the blocks used for comparison are not based on static arbitrary cut points but on cut points defined by the contents of each file segment. This means that if a part of a file changes in length, or blocks of the contents get moved to other parts of the file, the block boundaries for the parts that have not changed remain fixed relative to the contents, and thus the series of fingerprints for those blocks does not change; the blocks merely change position. By comparing all hashes in a file to the hashes for the same file at the other end of the replication pair, RDC is able to identify which blocks of the file have changed and which have not, even if the contents of the file have been significantly reshuffled.
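The cut-point idea can be sketched in a few lines of Python. The code below is a generic content-defined-chunking illustration of ours, with a crude windowed byte sum standing in for the rolling hash; it is not Microsoft's actual RDC algorithm, but it shows why an insertion near the start of a file leaves most downstream block signatures reusable.

```python
# Toy content-defined chunking in the spirit of RDC: cut points come from the
# data itself (here: wherever a windowed byte sum is 0 mod 64), so an
# insertion near the start of a file only disturbs nearby block boundaries.
# Illustrative sketch only; not Microsoft's actual RDC algorithm.
import hashlib
import random

WINDOW, MODULUS = 16, 64

def chunk_signatures(data: bytes):
    """Split data at content-defined cut points; return a strong hash per chunk."""
    sigs, start = [], 0
    for i in range(WINDOW, len(data)):
        if sum(data[i - WINDOW:i]) % MODULUS == 0 and i - start >= WINDOW:
            sigs.append(hashlib.sha256(data[start:i]).hexdigest())
            start = i
    sigs.append(hashlib.sha256(data[start:]).hexdigest())
    return sigs

random.seed(0)
old = bytes(random.randrange(256) for _ in range(20000))
new = old[:50] + b"inserted!" + old[50:]        # small edit near the front

old_sigs, new_sigs = chunk_signatures(old), chunk_signatures(new)
shared = set(old_sigs) & set(new_sigs)
print(f"{len(shared)} of {len(old_sigs)} block signatures are reusable")
```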
- -Since comparing large files could require making large numbers of signature comparisons, the algorithm is recursively applied to the hash sets to detect which blocks of hashes have changed or moved around, significantly reducing the amount of data that needs to be transmitted for comparing files. - -Later versions of Windows support cross-file RDC, which finds files similar to the one being replicated, and uses blocks of the similar files that are identical to the replicating file to minimize data transferred over the WAN. Cross-file RDC can use blocks of up to five similar files. - -RDC is similar in many ways to the older (1996) rsync protocol, but with some useful innovations, in particular the recursive algorithm and cross-file RDC. - -RDC is implemented in Windows operating systems by a DLL file, MSRDC.DLL, which will be present in the %SYSTEMROOT%\System32 directory if and only if RDC is enabled. Very little software is available which makes use of it, particularly on non-server systems. Internet rumor holds that enabling RDC significantly slows local file transfers and that it should therefore be disabled; a Microsoft TechNet web page disputes this in detail, although anecdotal posts frequently report that removing it restored transfer speeds. - -With the release of Microsoft's Windows Server 2019, RDC support was included in the list of "Features we're no longer developing (which may be removed from a future update)", with the comment "This support isn't currently used by any Microsoft product". diff --git a/wiki/wikipedia/3756.txt b/wiki/wikipedia/3756.txt deleted file mode 100644 index 3508451b469255c77caad9feca07897aed095029..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3756.txt +++ /dev/null @@ -1,108 +0,0 @@ -In calculus, Rolle's theorem or Rolle's lemma essentially states that any real-valued differentiable function that attains equal values at two distinct points must have at least one stationary point somewhere between them—that is, a point where the first derivative (the slope of the tangent line to the graph of the function) is zero. The theorem is named after Michel Rolle. - -If a real-valued function f is continuous on a proper closed interval [a, b], differentiable on the open interval (a, b), and f (a) = f (b), then there exists at least one c in the open interval (a, b) such that -$$ -f'(c) = 0 -$$. - -This version of Rolle's theorem is used to prove the mean value theorem, of which Rolle's theorem is indeed a special case. It is also the basis for the proof of Taylor's theorem. - -Although the theorem is named after Michel Rolle, Rolle's 1691 proof covered only the case of polynomial functions. His proof did not use the methods of differential calculus, which at that point in his life he considered to be fallacious. The theorem was first proved by Cauchy in 1823 as a corollary of a proof of the mean value theorem. The name "Rolle's theorem" was first used by Moritz Wilhelm Drobisch of Germany in 1834 and by Giusto Bellavitis of Italy in 1846. - -For a radius r > 0, consider the function -$$ -f(x)=\sqrt{r^2-x^2},\quad x\in[-r,r]. -$$ - -Its graph is the upper semicircle centered at the origin. This function is continuous on the closed interval [−r, r] and differentiable in the open interval (−r, r), but not differentiable at the endpoints −r and r. Since f (−r) = f (r), Rolle's theorem applies, and indeed, there is a point where the derivative of f is zero.
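A quick numerical check of this example (our sketch, not part of the article): bisection on the derivative of the semicircle function locates the promised stationary point at $c = 0$.

```python
# Numeric check of Rolle's theorem for f(x) = sqrt(r^2 - x^2) on [-r, r]:
# f(-r) = f(r) = 0, so some c in (-r, r) must satisfy f'(c) = 0.
# We locate c by bisection on the (strictly decreasing) derivative.
import math

r = 1.0
f = lambda x: math.sqrt(r * r - x * x)
df = lambda x: -x / math.sqrt(r * r - x * x)   # defined only on the open interval

lo, hi = -r + 1e-9, r - 1e-9                   # stay strictly inside (-r, r)
for _ in range(80):                            # bisection: df(lo) > 0 > df(hi)
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if df(mid) > 0 else (lo, mid)

c = (lo + hi) / 2
print(c, df(c))   # c is (numerically) 0, the top of the semicircle
```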
Note that the theorem applies even when the function cannot be differentiated at the endpoints, because it only requires the function to be differentiable in the open interval. - -If differentiability fails at an interior point of the interval, the conclusion of Rolle's theorem may not hold. Consider the absolute value function -$$ -f(x) = |x|,\qquad x\in[-1,1]. -$$ - -Then f (−1) = f (1), but there is no c between −1 and 1 for which f ′(c) is zero. This is because that function, although continuous, is not differentiable at x = 0. Note that the derivative of f changes its sign at x = 0, but without attaining the value 0. The theorem cannot be applied to this function because it does not satisfy the condition that the function must be differentiable for every x in the open interval. However, when the differentiability requirement is dropped from Rolle's theorem, f will still have a critical number in the open interval (a, b), but it may not yield a horizontal tangent (as in the case of the absolute value function). - -The second example illustrates the following generalization of Rolle's theorem: - -Consider a real-valued, continuous function f on a closed interval [a, b] with f (a) = f (b). If for every x in the open interval (a, b) the right-hand limit -$$ -f'(x^+):=\lim_{h \to 0^+}\frac{f(x+h)-f(x)}{h} -$$ - -and the left-hand limit -$$ -f'(x^-):=\lim_{h \to 0^-}\frac{f(x+h)-f(x)}{h} -$$ - -exist in the extended real line [−∞, ∞], then there is some number c in the open interval (a, b) such that one of the two limits -$$ -f'(c^+)\quad\text{and}\quad f'(c^-) -$$ - -is ≥ 0 and the other one is ≤ 0 (in the extended real line). If the right- and left-hand limits agree for every x, then they agree in particular for c, hence the derivative of f exists at c and is equal to zero. - -*If f is convex or concave, then the right- and left-hand derivatives exist at every inner point, hence the above limits exist and are real numbers. - -*This generalized version of the theorem is sufficient to prove convexity when the one-sided derivatives are monotonically increasing: $f'(x^-) \le f'(x^+) \le f'(y^-), \qquad x < y.$ - -Since the proof for the standard version of Rolle's theorem and the generalization are very similar, we prove the generalization. - -The idea of the proof is to argue that if f (a) = f (b), then f must attain either a maximum or a minimum somewhere between a and b, say at c, and the function must change from increasing to decreasing (or the other way around) at c. In particular, if the derivative exists, it must be zero at c. - -By assumption, f is continuous on [a, b], and by the extreme value theorem attains both its maximum and its minimum in [a, b]. If these are both attained at the endpoints of [a, b], then f is constant on [a, b] and so the derivative of f is zero at every point in (a, b). - -Suppose then that the maximum is obtained at an interior point c of (a, b) (the argument for the minimum is very similar, just consider −f ). We shall examine the above right- and left-hand limits separately. - -For a real h such that c + h is in [a, b], the value f (c + h) is less than or equal to f (c) because f attains its maximum at c. Therefore, for every h > 0, -$$ -\frac{f(c+h)-f(c)}{h}\le0, -$$ - -hence -$$ -f'(c^+):=\lim_{h \to 0^+}\frac{f(c+h)-f(c)}{h}\le0, -$$ - -where the limit exists by assumption; it may be minus infinity.
- -Similarly, for every h < 0, the inequality turns around because the denominator is now negative and we get -$$ -\frac{f(c+h)-f(c)}{h}\ge0, -$$ - -hence -$$ -f'(c^-):=\lim_{h \to 0^-}\frac{f(c+h)-f(c)}{h}\ge0, -$$ - -where the limit might be plus infinity. - -Finally, when the above right- and left-hand limits agree (in particular when f is differentiable), then the derivative of f at c must be zero. - -(Alternatively, we can apply Fermat's stationary point theorem directly.) - -We can also generalize Rolle's theorem by requiring that f has more points with equal values and greater regularity. Specifically, suppose that - -* the function f is n − 1 times continuously differentiable on the closed interval [a, b] and the nth derivative exists on the open interval (a, b), and - -* there are n intervals given by $a_1 < b_1 \le a_2 < b_2 \le \cdots \le a_n < b_n$ in [a, b] such that $f(a_k) = f(b_k)$ for every k from 1 to n. - -Then there is a number c in (a, b) such that the nth derivative of f at c is zero. - -The requirements concerning the nth derivative of f can be weakened as in the generalization above, giving the corresponding (possibly weaker) assertions for the right- and left-hand limits defined above, with $f^{(n-1)}$ in place of f. - -Particularly, this version of the theorem asserts that if a function differentiable enough times has n roots (so they have the same value, that is 0), then there is an internal point where $f^{(n-1)}$ vanishes. - -The proof uses mathematical induction. The case n = 1 is simply the standard version of Rolle's theorem. For n > 1, take as the induction hypothesis that the generalization is true for n − 1. We want to prove it for n. Assume the function f satisfies the hypotheses of the theorem. By the standard version of Rolle's theorem, for every integer k from 1 to n, there exists a $c_k$ in the open interval $(a_k, b_k)$ such that $f'(c_k) = 0$. Hence, the first derivative satisfies the assumptions on the n − 1 closed intervals $[c_1, c_2], \ldots, [c_{n-1}, c_n]$. By the induction hypothesis, there is a c such that the (n − 1)st derivative of f ′ at c is zero. - -Rolle's theorem is a property of differentiable functions over the real numbers, which are an ordered field. As such, it does not generalize to other fields, but the following corollary does: if a real polynomial factors (has all of its roots) over the real numbers, then its derivative does as well. One may call this property of a field Rolle's property. More general fields do not always have differentiable functions, but they do always have polynomials, which can be symbolically differentiated. Similarly, more general fields may not have an order, but one has a notion of a root of a polynomial lying in a field. - -Thus Rolle's theorem shows that the real numbers have Rolle's property. Any algebraically closed field such as the complex numbers has Rolle's property. However, the rational numbers do not – for example, $x^3 - x = x(x - 1)(x + 1)$ factors over the rationals, but its derivative, -$$ -3x^2-1 = 3 \left (x-\tfrac{1}{\sqrt 3} \right ) \left (x+\tfrac{1}{\sqrt 3} \right ) , -$$ - -does not. The question of which fields satisfy Rolle's property was raised by Kaplansky in 1972. For finite fields, the answer is that only $F_2$ and $F_4$ have Rolle's property. - -For a complex version, see Voorhoeve index.
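The rational-numbers example can be verified mechanically. The following sketch (ours, not from the article) uses sympy's factor_list, which factors over the rationals by default.

```python
# Rolle's property fails over the rationals: x^3 - x splits into linear
# factors with rational roots, but its derivative 3x^2 - 1 stays irreducible
# over Q (its roots are +-1/sqrt(3), which are irrational).
from sympy import symbols, diff, factor_list

x = symbols('x')
p = x**3 - x

print(factor_list(p))           # factors (x)(x - 1)(x + 1) over the rationals
print(factor_list(diff(p, x)))  # 3*(x**2 - 1/3): no rational linear factors
```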
diff --git a/wiki/wikipedia/3757.txt b/wiki/wikipedia/3757.txt deleted file mode 100644 index 38271662e4128c6a58e18a59b6421fe9a115149e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3757.txt +++ /dev/null @@ -1,11 +0,0 @@ -The Chetaev instability theorem for dynamical systems states that if there exists, for the system $\dot{\textbf{x}} = X(\textbf{x})$ with an equilibrium point at the origin, a continuously differentiable function V(x) such that - -# the origin is a boundary point of the set $G = \{\mathbf{x} \mid V(\mathbf{x})>0\}$; - -# there exists a neighborhood $U$ of the origin such that $\dot{V}(\textbf{x})>0$ for all $\mathbf{x} \in G \cap U$, - -then the origin is an unstable equilibrium point of the system. - -This theorem is somewhat less restrictive than the Lyapunov instability theorems, since it does not require V and $\dot{V}$ to have the same sign on a complete sphere (circle) around the origin. - -It is named after Nicolai Gurevich Chetaev. diff --git a/wiki/wikipedia/3758.txt b/wiki/wikipedia/3758.txt deleted file mode 100644 index db7e65cbe07fea4f3209e1ffe71d2605b028b4fd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3758.txt +++ /dev/null @@ -1,21 +0,0 @@ -In mathematics, Newton's theorem about ovals states that the area cut off by a secant of a smooth convex oval is not an algebraic function of the secant. - -Isaac Newton stated it as lemma 28 of section VI of book 1 of his Principia, and used it to show that the position of a planet moving in an orbit is not an algebraic function of time. There has been some controversy about whether or not this theorem is correct because Newton did not state exactly what he meant by an oval, and for some interpretations of the word oval the theorem is correct, while for others it is false. If "oval" means "continuous convex curve", then there are counterexamples, such as triangles or one of the lobes of Huygens' lemniscate $y^2 = x^2 - x^4$, while Arnold pointed out that if "oval" means "infinitely differentiable convex curve" then Newton's claim is correct and his argument has the essential steps of a rigorous proof. - -Vassiliev generalized Newton's theorem to higher dimensions. - -An English translation of Newton's original statement is: - -"There is no oval figure whose area, cut off by right lines at pleasure, can be universally found by means of equations of any number of finite terms and dimensions." - -In modern mathematical language, Newton essentially proved the following theorem: - -There is no convex smooth (meaning infinitely differentiable) curve such that the area cut off by a line ax + by = c is an algebraic function of a, b, and c. - -In other words, "oval" in Newton's statement should mean "convex smooth curve". The infinite differentiability at all points is necessary: for any positive integer n there are algebraic curves that are smooth at all but one point and differentiable n times at the remaining point for which the area cut off by a secant is algebraic. - -Newton observed that a similar argument shows that the arclength of a (smooth convex) oval between two points is not given by an algebraic function of the points. - -Newton took the origin P inside the oval, and considered the spiral of points (r, θ) in polar coordinates whose distance r from P is the area cut off by the lines from P with angles 0 and θ.
- -He then observed that this spiral cannot be algebraic as it has an infinite number of intersections with a line through P, so the area cut off by a secant cannot be an algebraic function of the secant. - -This proof requires that the oval and therefore the spiral be smooth; otherwise the spiral might be an infinite union of pieces of different algebraic curves. This is what happens in the various "counterexamples" to Newton's theorem for non-smooth ovals. diff --git a/wiki/wikipedia/3759.txt b/wiki/wikipedia/3759.txt deleted file mode 100644 index f26229fc3d06a13d25b614245d2bfd15a73f82c7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3759.txt +++ /dev/null @@ -1,15 +0,0 @@ -The maximal ergodic theorem is a theorem in ergodic theory, a discipline within mathematics. - -Suppose that $(X, \mathcal{B},\mu)$ is a probability space, that $T : X\to X$ is a (possibly noninvertible) measure-preserving transformation, and that $f\in L^1(\mu,\mathbb{R})$. Define $f^*$ by -$$ -f^* = \sup_{N\geq 1} \frac{1}{N} \sum_{i=0}^{N-1} f \circ T^i. -$$ - -Then the maximal ergodic theorem states that -$$ - \int_{f^{*} > \lambda} f \, d\mu \ge \lambda \cdot \mu\{ f^{*} > \lambda\} -$$ - -for any λ ∈ R. - -This theorem is used to prove the pointwise ergodic theorem. diff --git a/wiki/wikipedia/376.txt b/wiki/wikipedia/376.txt deleted file mode 100644 index 2001dc2d2841f7d3086740ddf379751083e0b977..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/376.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, especially several complex variables, the Behnke–Stein theorem states that a union of an increasing sequence $G_k \subset \mathbb{C}^n$ (i.e., $G_k \subset G_{k + 1}$) of domains of holomorphy is again a domain of holomorphy. It was proved by Heinrich Behnke and Karl Stein in 1938. - -This is related to the fact that an increasing union of pseudoconvex domains is pseudoconvex, and so it can be proven using that fact and the solution of the Levi problem. Historically, however, this theorem was in fact used to solve the Levi problem, and the theorem itself was proved using the Oka–Weil theorem. This theorem again holds for Stein manifolds, but it is not known if it holds for Stein spaces. diff --git a/wiki/wikipedia/3760.txt b/wiki/wikipedia/3760.txt deleted file mode 100644 index da95f6ff956ff1febad5d1f61cfb7518569294a2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3760.txt +++ /dev/null @@ -1,5 +0,0 @@ -SNARK (SRI's New Automated Reasoning Kit) is a theorem prover for multi-sorted first-order logic intended for applications in artificial intelligence and software engineering, developed at SRI International. - -SNARK's principal inference mechanisms are resolution and paramodulation; in addition it offers specialized decision procedures for particular domains, e.g., a constraint solver for Allen's temporal interval logic. In contrast to many other theorem provers, SNARK is fully automated (non-interactive). SNARK offers many strategic controls for adjusting its search behavior and thus tuning its performance to particular applications. This, together with its use of multi-sorted logic and facilities for integrating special-purpose reasoning procedures with general-purpose inference, makes it particularly suited as a reasoner for large sets of assertions. - -SNARK is used as a reasoning component in the NASA Intelligent Systems Project. It is written in Common Lisp and available under the Mozilla Public License.
diff --git a/wiki/wikipedia/3761.txt b/wiki/wikipedia/3761.txt deleted file mode 100644 index ebf3750be4b8416e4ad4e3608ace2e2b30926766..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3761.txt +++ /dev/null @@ -1,84 +0,0 @@ -The MU puzzle is a puzzle stated by Douglas Hofstadter and found in Gödel, Escher, Bach involving a simple formal system called "MIU". Hofstadter's motivation is to contrast reasoning within a formal system (i.e., deriving theorems) against reasoning about the formal system itself. MIU is an example of a Post canonical system and can be reformulated as a string rewriting system. - -Suppose there are the symbols M, I, and U which can be combined to produce strings of symbols. The MU puzzle asks one to start with the "axiomatic" string MI and transform it into the string MU using in each step one of the following transformation rules: - -# If a string ends in I, a U may be appended (xI → xIU). - -# The string after the leading M may be doubled (Mx → Mxx). - -# Any three consecutive I's (III) may be replaced with a single U. - -# Any two consecutive U's (UU) may be deleted. - -The puzzle cannot be solved: it is impossible to change the string MI into MU by repeatedly applying the given rules. In other words, MU is not a theorem of the MIU formal system. To prove this, one must step "outside" the formal system itself. - -In order to prove assertions like this, it is often beneficial to look for an invariant; that is, some quantity or property that doesn't change while applying the rules. - -In this case, one can look at the total number of I's in a string. Only the second and third rules change this number. In particular, rule two will double it while rule three will reduce it by 3. Now, the invariant property is that the number of I's is not divisible by 3: - -* In the beginning, the number of I's is 1, which is not divisible by 3. - -* Doubling a number that is not divisible by 3 does not make it divisible by 3. - -* Subtracting 3 from a number that is not divisible by 3 does not make it divisible by 3 either. - -Thus, the goal of MU, with zero I's, cannot be achieved because 0 is divisible by 3. - -In the language of modular arithmetic, the number n of I's obeys the congruence -$$ -n \equiv 2^a \not\equiv 0 \pmod 3, -$$ - -where a counts how often the second rule is applied. - -More generally, an arbitrarily given string x can be derived from MI by the above four rules if, and only if, x respects the three following properties: - -# x is composed of exactly one M and any number of I's and U's, - -# x begins with M, and - -# the number of I's in x is not divisible by 3. - -Only if: No rule moves the M, changes the number of M's, or introduces any character other than M, I, and U. Therefore, every x derived from MI respects properties 1 and 2. As shown before, it also respects property 3. - -If: If x respects properties 1 to 3, let $N_{I}$ and $N_{U}$ be the number of I's and U's in x, respectively, and let $N = N_{I} + 3N_{U}$. By property 3, the number $N_{I}$ cannot be divisible by 3, hence, $N$ cannot be, either. That is, $N \equiv 1 \text{ or } N \equiv 2 \pmod 3$. Let $n \in \N$ such that $2^n > N$ and $2^n \equiv N \pmod 3$. Beginning from the axiom MI, applying the second rule $n$ times obtains MII...I with $2^n$ I's. Since $2^n - N$ is divisible by 3, by construction of $n$, applying the third rule $\frac{2^n - N}{3}$ times will obtain a string with exactly $N$ I's together with some number of U's. The U count can always be made even by applying the first rule once at a point when the string still ends in I, if necessary. Applying the fourth rule sufficiently often, all U's can then be deleted, thus obtaining MII...I with $N = N_{I} + 3N_{U}$ I's. Applying the third rule to reduce triplets of I's into a U in the right spots will obtain x. Altogether, x has been derived from MI.
- -To illustrate the construction in the If part of the proof, the string MIIUII, which respects properties 1 to 3, leads to $N_I=4$, $N_U=1$, $N=7$, $n=4$; it can be hence derived as follows: -$$ -MI \stackrel{2}{\to} MII \stackrel{2}{\to} MIIII \stackrel{2}{\to} MIIIIIIII \stackrel{2}{\to} MIIIIIIIIIIIIIIII \stackrel{3}{\to} MUIIIIIIIIIIIII \stackrel{3}{\to} MUUIIIIIIIIII \stackrel{1}{\to} MUUIIIIIIIIIIU \stackrel{3}{\to} MUUIIIIIIIUU \stackrel{4}{\to} MIIIIIIIUU \stackrel{4}{\to} MIIIIIII \stackrel{3}{\to} MIIUII. -$$ - -Chapter XIX of Gödel, Escher, Bach gives a mapping of the MIU system to arithmetic, as follows. First, every MIU-string can be translated to an integer by mapping the letters M, I, and U to the numbers 3, 1, and 0, respectively. (For example, the string MIUIU would be mapped to 31010.) - -Second, the single axiom of the MIU system, namely the string MI, becomes the number 31. - -Third, the four formal rules given above become the following: - -# If we have made 10k + 1, then we can make 10 × (10k + 1). - -# If we have made $3 \times 10^m + n$ (where $n < 10^m$), then we can make $10^m \times (3 \times 10^m + n) + n$. - -# If we have made $k \times 10^{m+3} + 111 \times 10^m + n$ (where $n < 10^m$), then we can make $k \times 10^{m+1} + n$. - -# If we have made $k \times 10^{m+2} + n$ (where $n < 10^m$), then we can make $k \times 10^m + n$. - -(NB: The rendering of the first rule above differs superficially from that in the book, where it is written as "[i]f we have made 10m + 1, then we can make 10 × (10m + 1)". Here, however, the variable m was reserved for use in exponents of 10 only, and therefore it was replaced by k in the first rule. Also, in this rendering, the arrangement of factors in this rule was made consistent with that of the other three rules.) - -The MIU system illustrates several important concepts in logic by means of analogy. - -It can be interpreted as an analogy for a formal system, that is, an encapsulation of mathematical and logical concepts using symbols. The MI string is akin to a single axiom, and the four transformation rules are akin to rules of inference. - -The MU string and the impossibility of its derivation is then analogous to a statement of mathematical logic which cannot be proven or disproven by the formal system. - -It also demonstrates the contrast between interpretation on the "syntactic" level of symbols and on the "semantic" level of meanings. On the syntactic level, there is no knowledge of the MU puzzle's insolubility. The system does not refer to anything: it is simply a game involving meaningless strings. Working within the system, an algorithm could successively generate every valid string of symbols in an attempt to generate MU, and though it would never succeed, it would search forever, never deducing that the quest was futile. For a human player however, after a number of attempts, one soon begins to suspect that the puzzle may be impossible. One then "jumps out of the system" and starts to reason about the system, rather than working within it. Eventually, one realises that the system is in some way about divisibility by three. This is the "semantic" level of the system: a level of meaning that the system naturally attains. On this level, the MU puzzle can be seen to be impossible. - -The inability of the MIU system to express or deduce facts about itself, such as the inability to derive MU, is a consequence of its simplicity. However, more complex formal systems, such as systems of mathematical logic, may possess this ability. This is the key idea behind Gödel's incompleteness theorem. - -In her textbook, Discrete Mathematics with Applications, Susanna S. Epp uses the MU puzzle to introduce the concept of recursive definitions, and begins the relevant chapter with a quote from GEB.
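The invariant argument can also be checked mechanically. The following breadth-first search (a sketch of ours, with an arbitrary length cap) enumerates strings reachable from MI and confirms that every one of them has an I-count not divisible by 3, so MU is never produced.

```python
# Mechanical check of the invariant: breadth-first search from the axiom MI
# over the four MIU rules. Every reachable string keeps an I-count that is
# not divisible by 3, so MU (zero I's) is never produced.
def successors(s):
    out = set()
    if s.endswith('I'):
        out.add(s + 'U')                      # rule 1: xI -> xIU
    out.add(s + s[1:])                        # rule 2: Mx -> Mxx
    for i in range(1, len(s) - 2):
        if s[i:i + 3] == 'III':
            out.add(s[:i] + 'U' + s[i + 3:])  # rule 3: III -> U
    for i in range(1, len(s) - 1):
        if s[i:i + 2] == 'UU':
            out.add(s[:i] + s[i + 2:])        # rule 4: delete UU
    return out

reachable, frontier = {'MI'}, {'MI'}
for _ in range(8):                            # a few BFS rounds, capped length
    frontier = {t for s in frontier for t in successors(s) if len(t) <= 12}
    reachable |= frontier

assert all(s.count('I') % 3 != 0 for s in reachable)
assert 'MU' not in reachable
print(len(reachable), "strings reached; MU is not among them")
```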
diff --git a/wiki/wikipedia/3762.txt b/wiki/wikipedia/3762.txt deleted file mode 100644 index f65b2bbe9f97dc2a032f4b8b67f0c964a416f497..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3762.txt +++ /dev/null @@ -1,8 +0,0 @@ -In commutative algebra, a field of mathematics, the monomial conjecture of Melvin Hochster says the following: - -Let A be a Noetherian local ring of Krull dimension d and let $x_1, \dots, x_d$ be a system of parameters for A (so that $A/(x_1, \dots, x_d)$ is an Artinian ring). Then for all positive integers t, we have -$$ - x_1^t \cdots x_d^t \not\in (x_1^{t+1},\dots,x_d^{t+1}). -$$ - -The statement can relatively easily be shown in characteristic zero. diff --git a/wiki/wikipedia/3763.txt b/wiki/wikipedia/3763.txt deleted file mode 100644 index b995d263169194d6f59aa2cc1a53b95aec7ec706..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3763.txt +++ /dev/null @@ -1,78 +0,0 @@ -In mathematics, an Apollonian gasket or Apollonian net is a fractal generated by starting with a triple of circles, each tangent to the other two, and successively filling in more circles, each tangent to another three. It is named after Greek mathematician Apollonius of Perga. - -An Apollonian gasket can be constructed as follows. Start with three circles $C_1$, $C_2$ and $C_3$, each one of which is tangent to the other two (in the general construction, these three circles have to be different sizes, and they must have a common tangent). Apollonius discovered that there are two other non-intersecting circles, $C_4$ and $C_5$, which have the property that they are tangent to all three of the original circles – these are called Apollonian circles. Adding the two Apollonian circles to the original three, we now have five circles. - -Take one of the two Apollonian circles – say $C_4$. It is tangent to $C_1$ and $C_2$, so the triplet of circles $C_4$, $C_1$ and $C_2$ has its own two Apollonian circles. We already know one of these – it is $C_3$ – but the other is a new circle $C_6$. - -In a similar way we can construct another new circle $C_7$ that is tangent to $C_4$, $C_2$ and $C_3$, and another circle $C_8$ from $C_4$, $C_3$ and $C_1$. This gives us 3 new circles. We can construct another three new circles from $C_5$, giving six new circles altogether. Together with the circles $C_1$ to $C_5$, this gives a total of 11 circles. - -Continuing the construction stage by stage in this way, we can add $2 \cdot 3^n$ new circles at stage $n$, giving a total of $3^{n+1} + 2$ circles after $n$ stages. In the limit, this set of circles is an Apollonian gasket. - -The sizes of the new circles are determined by Descartes' theorem. Let $k_i$ (for $i = 1, \dots, 4$) denote the curvatures of four mutually tangent circles. Then Descartes' theorem states -$$ -(k_1 + k_2 + k_3 + k_4)^2 = 2\left(k_1^2 + k_2^2 + k_3^2 + k_4^2\right). -$$ - -The Apollonian gasket has a Hausdorff dimension of about 1.3057. - -The curvature of a circle (bend) is defined to be the reciprocal of its radius. - -* Negative curvature indicates that all other circles are internally tangent to that circle. This is the bounding circle. - -* Zero curvature gives a line (circle with infinite radius). - -* Positive curvature indicates that all other circles are externally tangent to that circle. This circle lies in the interior of the circle with negative curvature. - -An Apollonian gasket can also be constructed by replacing one of the generating circles by a straight line, which can be regarded as a circle passing through the point at infinity. - -Alternatively, two of the generating circles may be replaced by parallel straight lines, which can be regarded as being tangent to one another at infinity.
In this construction, the additional circles form a family of Ford circles. - -The three-dimensional equivalent of the Apollonian gasket is the Apollonian sphere packing. - -If two of the original generating circles have the same radius and the third circle has a radius that is two-thirds of this, then the Apollonian gasket has two lines of reflective symmetry; one line is the line joining the centres of the equal circles; the other is their mutual tangent, which passes through the centre of the third circle. These lines are perpendicular to one another, so the Apollonian gasket also has rotational symmetry of degree 2; the symmetry group of this gasket is D2. - -If all three of the original generating circles have the same radius then the Apollonian gasket has three lines of reflective symmetry; these lines are the mutual tangents of each pair of circles. Each mutual tangent also passes through the centre of the third circle and the common centre of the first two Apollonian circles. These lines of symmetry are at angles of 60 degrees to one another, so the Apollonian gasket also has rotational symmetry of degree 3; the symmetry group of this gasket is D3. - -The three generating circles, and hence the entire construction, are determined by the location of the three points where they are tangent to one another. Since there is a Möbius transformation which maps any three given points in the plane to any other three points, and since Möbius transformations preserve circles, there is a Möbius transformation which maps any two Apollonian gaskets to one another. - -Möbius transformations are also isometries of the hyperbolic plane, so in hyperbolic geometry all Apollonian gaskets are congruent. In a sense, there is therefore only one Apollonian gasket, up to (hyperbolic) isometry. - -The Apollonian gasket is the limit set of a group of Möbius transformations known as a Kleinian group. - -[Figures: integral Apollonian circle packings defined by circle curvatures of (−1, 2, 2, 3), (−3, 5, 8, 8), (−12, 25, 25, 28), (−6, 10, 15, 19), and (−10, 18, 23, 27).] - -If any four mutually tangent circles in an Apollonian gasket all have integer curvature then all circles in the gasket will have integer curvature. - -Since the equation relating curvatures in an Apollonian gasket, integral or not, is -$$ -a^2 + b^2 + c^2 + d^2 = 2ab + 2ac + 2ad + 2bc + 2bd + 2cd, -$$ - -it follows that one may move from one quadruple of curvatures to another by Vieta jumping, just as when finding a new Markov number. - -The first few of these integral Apollonian gaskets are listed in the following table. The table lists the curvatures of the largest circles in the gasket. Only the first three curvatures (of the five displayed in the table) are needed to completely describe each gasket – all other curvatures can be derived from these three.
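The Vieta jump is a one-line computation: since the equation above is quadratic in each variable, if $(a, b, c, d)$ is a solution, so is $(a, b, c, 2(a+b+c) - d)$. The following sketch (our illustration, not from the article) starts from the root quadruple (−1, 2, 2, 3) and enumerates further integer curvature quadruples, checking Descartes' equation at every step.

```python
# Generating integer curvature quadruples of an Apollonian gasket by Vieta
# jumping: if (a, b, c, d) satisfies Descartes' equation, so does the
# quadruple with d replaced by 2*(a + b + c) - d, and likewise in each
# coordinate. The root quadruple (-1, 2, 2, 3) has bounding curvature -1.
def descartes_ok(q):
    a, b, c, d = q
    return (a + b + c + d) ** 2 == 2 * (a * a + b * b + c * c + d * d)

def jump(q, i):
    """Replace the i-th curvature by the other root of Descartes' equation."""
    s = 2 * (sum(q) - q[i])
    return tuple(sorted(q[:i] + (s - q[i],) + q[i + 1:]))

root = (-1, 2, 2, 3)
assert descartes_ok(root)

seen, frontier = {root}, {root}
for _ in range(3):                      # three generations of new circles
    frontier = {jump(q, i) for q in frontier for i in range(4)} - seen
    assert all(descartes_ok(q) for q in frontier)
    seen |= frontier

print(sorted(seen)[:5])                 # quadruples with small curvatures
```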
- -If none of the curvatures are repeated within the first five, the gasket contains no symmetry, which is represented by symmetry group C1; the gasket described by curvatures (−10, 18, 23, 27) is an example. - -Whenever two of the largest five circles in the gasket have the same curvature, that gasket will have D1 symmetry, which corresponds to a reflection along a diameter of the bounding circle, with no rotational symmetry. - -If two different curvatures are repeated within the first five, the gasket will have D2 symmetry; such a symmetry consists of two reflections (perpendicular to each other) along diameters of the bounding circle, with a two-fold rotational symmetry of 180°. The gasket described by curvatures (−1, 2, 2, 3) is the only Apollonian gasket (up to a scaling factor) to possess D2 symmetry. - -If the three circles with smallest positive curvature have the same curvature, the gasket will have D3 symmetry, which corresponds to three reflections along diameters of the bounding circle (spaced 120° apart), along with three-fold rotational symmetry of 120°. In this case the ratio of the curvature of the bounding circle to the three inner circles is $2\sqrt{3} - 3$. As this ratio is not rational, no integral Apollonian circle packings possess this D3 symmetry, although many packings come close; in other words, there are no integer gaskets with D3 symmetry. - -The figure at left is an integral Apollonian gasket that appears to have D3 symmetry. The same figure is displayed at right, with labels indicating the curvatures of the interior circles, illustrating that the gasket actually possesses only the D1 symmetry common to many other integral Apollonian gaskets. - -The following table lists more of these almost-D3 integral Apollonian gaskets. The sequence has some interesting properties, and the table lists a factorization of the curvatures, along with the multiplier needed to go from the previous set to the current one. The absolute values of the curvatures of the "a" disks obey the recurrence relation $a(n) = 4a(n-1) - a(n-2)$, from which it follows that the multiplier converges to $\sqrt{3} + 2 \approx 3.732050807$. - -For any integer n > 0, there exists an Apollonian gasket defined by the following curvatures:
    (−n, n + 1, n(n + 1), n(n + 1) + 1).
    For example, the gaskets defined by (−2, 3, 6, 7), (−3, 4, 12, 13), (−8, 9, 72, 73), and (−9, 10, 90, 91) all follow this pattern. Because every interior circle that is defined by n + 1 can become the bounding circle (defined by −n) in another gasket, these gaskets can be nested. This is demonstrated in the figure at right, which contains these sequential gaskets with n running from 2 through 20. diff --git a/wiki/wikipedia/3764.txt b/wiki/wikipedia/3764.txt deleted file mode 100644 index 99a9fb4b31a2cbfb5484907ff5da31062c0ec869..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3764.txt +++ /dev/null @@ -1,95 +0,0 @@ -The Collatz conjecture in mathematics asks whether repeating certain simple arithmetic operations will eventually transform every positive integer into one. It concerns sequences of integers in which each term is obtained from the previous term as follows: if the previous term is even, the next term is one half of the previous term. If the previous term is odd, the next term is 3 times the previous term plus 1. The conjecture is that these sequences always reach 1, no matter which positive integer is chosen to start the sequence. - -The conjecture is named after Lothar Collatz, who introduced the idea in 1937, two years after receiving his doctorate. It is also known as the 3n + 1 problem, the 3n + 1 conjecture, the Ulam conjecture (after Stanisław Ulam), Kakutani's problem (after Shizuo Kakutani), the Thwaites conjecture (after Sir Bryan Thwaites), Hasse's algorithm (after Helmut Hasse), or the Syracuse problem. - -The sequence of numbers involved is sometimes referred to as the hailstone sequence or hailstone numbers (because the values are usually subject to multiple descents and ascents like hailstones in a cloud), or as wondrous numbers. - -Paul Erdős said about the Collatz conjecture: "Mathematics may not be ready for such problems." - -These numbers are the lowest ones with the indicated step count, but not necessarily the only ones below the given limit. As an example, 9780657631 has 1132 steps, as does 9780657630. - -The starting values having the smallest total stopping time with respect to their number of digits (in base 2) are the powers of two, since $2^n$ is halved $n$ times to reach 1, and is never increased. - -[Figures: a directed graph showing the orbits of the first 1000 numbers; a plot of starting number against the highest number reached during the chain to 1 (some starting values produce intermediates as high as $2.7 \times 10^7$, for example x = 9663); the tree of all the numbers having fewer than 20 steps; and the number of iterations it takes to get to one for the first 100 million numbers.] - -Although the conjecture has not been proven, most mathematicians who have looked into the problem think the conjecture is true because experimental evidence and heuristic arguments support it. - -The conjecture has been checked by computer for all starting values up to $2^{68} \approx 2.95 \times 10^{20}$. All initial values tested so far eventually end in the repeating cycle (4, 2, 1) of period 3. - -This computer evidence is not sufficient to prove that the conjecture is true for all starting values.
As in the case of some disproved conjectures, like the Pólya conjecture, counterexamples might be found when considering very large numbers. - -However, such verifications may have other implications. For example, one can derive additional constraints on the period and structural form of a non-trivial cycle. - -In a computer-aided proof, Krasikov and Lagarias showed that the number of integers in the interval [1,x] that eventually reach 1 is at least equal to $x^{0.84}$ for all sufficiently large x. - -In this part, consider the shortcut form of the Collatz function -$$ - f(n) = \begin{cases} \frac{n}{2} &\text{if } n \equiv 0 \pmod{2},\\[4px] \frac{3n+1}{2} & \text{if } n \equiv 1 \pmod{2}. \end{cases} -$$ - -A cycle is a sequence $(a_0, a_1, \ldots, a_q)$ of distinct positive integers where $f(a_0) = a_1$, $f(a_1) = a_2$, ..., and $f(a_q) = a_0$. - -The only known cycle is (1,2) of period 2, called the trivial cycle. - -The length of a non-trivial cycle is known to be at least 17087915. The binary representation of such an n therefore holds the repetends of $1/3^h$, where each repetend is optionally rotated and then replicated up to a finite number of bits. It is only in binary that this occurs. Conjecturally, every binary string s that ends with a '1' can be reached by a representation of this form (where we may add or delete leading '0's to s). - -Repeated applications of the Collatz function can be represented as an abstract machine that handles strings of bits. The machine will perform the following three steps on any odd number until only one 1 remains: - -# Append 1 to the (right) end of the number in binary (giving 2n + 1); - -# Add this to the original number by binary addition (giving 2n + 1 + n = 3n + 1); - -# Remove all trailing 0s (that is, repeatedly divide by 2 until the result is odd). - -The starting number 7 is written in base two as 111. The resulting Collatz sequence is:
111, 1111, 10110, 10111, 100010, 100011, 110100, 11011, 101000, 1011, 10000.
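This run can be reproduced by transcribing the three steps literally into Python (our sketch, not part of the article):

```python
# The three-step machine above, run literally on binary strings: append '1'
# (giving 2n+1), binary-add to the original (giving 3n+1), then strip the
# trailing zeros. Started at 111 (seven) it reproduces the run just shown.
def collatz_binary(s):
    trace = [s]
    while s != '1':
        appended = s + '1'                              # step 1: n -> 2n + 1
        added = bin(int(s, 2) + int(appended, 2))[2:]   # step 2: n + (2n+1) = 3n + 1
        trace += [appended, added]
        s = added.rstrip('0')                           # step 3: drop trailing 0s
    return trace

print(' '.join(collatz_binary('111')))
# 111 1111 10110 10111 100010 100011 110100 11011 101000 1011 10000
```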
- -For this section, consider the Collatz function in the slightly modified form -$$ - f(n) = \begin{cases} \frac{n}{2} &\text{if } n \equiv 0 \\[4px] \frac{3n + 1}{2} & \text{if } n \equiv 1 \end{cases} \pmod{2}. -$$ - -This can be done because when n is odd, 3n + 1 is always even. - -If P(…) is the parity of a number, that is P(2n) = 0 and P(2n + 1) = 1, then we can define the Collatz parity sequence (or parity vector) for a number n as $p_i = P(a_i)$, where $a_0 = n$, and $a_{i+1} = f(a_i)$. - -Which operation is performed, (3n + 1)/2 or n/2, depends on the parity. The parity sequence is the same as the sequence of operations. - -Using this form for f(n), it can be shown that the parity sequences for two numbers m and n will agree in the first k terms if and only if m and n are equivalent modulo $2^k$. This implies that every number is uniquely identified by its parity sequence, and moreover that if there are multiple hailstone cycles, then their corresponding parity cycles must be different. Generalizations of the Collatz iteration replace the two branches by a function g that is affine on each residue class modulo some fixed modulus P; Conway showed that the behaviour of such generalized Collatz functions is algorithmically undecidable. Kurtz and Simon proved that the problem of deciding whether every n eventually reaches 1 under such a g is, in fact, undecidable and even higher in the arithmetical hierarchy, specifically $\Pi^0_2$-complete. This hardness result holds even if one restricts the class of functions g by fixing the modulus P to 6480. - -In the movie Incendies, a graduate student in pure mathematics explains the Collatz conjecture to a group of undergraduates. She puts her studies on hold for a time to address some unresolved questions about her family's past. Late in the movie, the Collatz conjecture turns out to have foreshadowed a disturbing and difficult discovery that she makes about her family. diff --git a/wiki/wikipedia/3765.txt b/wiki/wikipedia/3765.txt deleted file mode 100644 index 60801a6471a0fa646c3adca117e2c26ae6927348..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3765.txt +++ /dev/null @@ -1,9 +0,0 @@ -Gauss's lemma can mean any of several lemmas named after Carl Friedrich Gauss: - -* Gauss's lemma (polynomials) - -* Gauss's lemma (number theory) - -* Gauss's lemma (Riemannian geometry) - -* A generalization of Euclid's lemma is sometimes called Gauss's lemma diff --git a/wiki/wikipedia/3766.txt b/wiki/wikipedia/3766.txt deleted file mode 100644 index 6b1fff5323ff9d1a2f94a49e9dbbd2fe7b8c9598..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3766.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, the Abhyankar–Moh theorem states that if $L$ is a complex line in the complex affine plane $\mathbb{C}^2$, then every embedding of $L$ into $\mathbb{C}^2$ extends to an automorphism of the plane. It is named after Shreeram Shankar Abhyankar and Tzuong-Tsieng Moh, who published it in 1975. More generally, the same theorem applies to lines and planes over any algebraically closed field of characteristic zero, and to certain well-behaved subsets of higher-dimensional complex affine spaces. diff --git a/wiki/wikipedia/3767.txt b/wiki/wikipedia/3767.txt deleted file mode 100644 index afb791b48df5c643356d5a5af8945b3404c68890..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3767.txt +++ /dev/null @@ -1,69 +0,0 @@ -In combinatorial mathematics, an Apollonian network is an undirected graph formed by a process of recursively subdividing a triangle into three smaller triangles. Apollonian networks may equivalently be defined as the planar 3-trees, the maximal planar chordal graphs, the uniquely 4-colorable planar graphs, and the graphs of stacked polytopes. They are named after Apollonius of Perga, who studied a related circle-packing construction.
- -An Apollonian network may be formed, starting from a single triangle embedded in the Euclidean plane, by repeatedly selecting a triangular face of the embedding, adding a new vertex inside the face, and connecting the new vertex to each vertex of the face containing it. In this way, the triangle containing the new vertex is subdivided into three smaller triangles, which may in turn be subdivided in the same way. - -The complete graphs on three and four vertices, K3 and K4, are both Apollonian networks. K3 is formed by starting with a triangle and not performing any subdivisions, while K4 is formed by making a single subdivision before stopping. - -The Goldner–Harary graph is an Apollonian network that forms the smallest non-Hamiltonian maximal planar graph. Another more complicated Apollonian network was used by Nishizeki to provide an example of a 1-tough non-Hamiltonian maximal planar graph. - -As well as being defined by recursive subdivision of triangles, the Apollonian networks have several other equivalent mathematical characterizations. They are the chordal maximal planar graphs, the chordal polyhedral graphs, and the planar 3-trees. They are the uniquely 4-colorable planar graphs, and the planar graphs with a unique Schnyder wood decomposition into three trees. They are the maximal planar graphs with treewidth three, a class of graphs that can be characterized by their forbidden minors or by their reducibility under Y-Δ transforms. They are the maximal planar graphs with degeneracy three. They are also the planar graphs on a given number of vertices that have the largest possible number of triangles, the largest possible number of tetrahedral subgraphs, the largest possible number of cliques, and the largest possible number of pieces after decomposing by separating triangles. - -Apollonian networks are examples of maximal planar graphs, graphs to which no additional edges can be added without destroying planarity, or equivalently graphs that can be drawn in the plane so that every face (including the outer face) is a triangle. They are also chordal graphs, graphs in which every cycle of four or more vertices has a diagonal edge connecting two non-consecutive cycle vertices, and the order in which vertices are added in the subdivision process that forms an Apollonian network is an elimination ordering as a chordal graph. This forms an alternative characterization of the Apollonian networks: they are exactly the chordal maximal planar graphs or equivalently the chordal polyhedral graphs. - -In an Apollonian network, every maximal clique is a complete graph on four vertices, formed by choosing any vertex and its three earlier neighbors. Every minimal clique separator (a clique that partitions the graph into two disconnected subgraphs) is one of the subdivided triangles. A chordal graph in which all maximal cliques and all minimal clique separators have the same size is a k-tree, and Apollonian networks are examples of 3-trees. Not every 3-tree is planar, but the planar 3-trees are exactly the Apollonian networks. - -Every Apollonian network is also a uniquely 4-colorable graph. Because it is a planar graph, the four color theorem implies that it has a graph coloring with only four colors, but once the three colors of the initial triangle are selected, there is only one possible choice for the color of each successive vertex, so up to permutation of the set of colors it has exactly one 4-coloring. 
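The forced nature of this coloring is easy to demonstrate computationally. The sketch below (ours, using arbitrary random face choices) builds an Apollonian network by the face-subdivision process described above and verifies that, once the outer triangle's colors are fixed, each later vertex has exactly one available color.

```python
# Sketch of the recursive construction and its forced 4-coloring: every new
# vertex is adjacent to exactly the three (mutually adjacent) vertices of the
# face it subdivides, so once the outer triangle is colored, each later
# vertex has exactly one legal color left.
import random
random.seed(1)

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}   # outer triangle
faces = [(0, 1, 2)]                        # faces available for subdivision
order = [0, 1, 2]                          # construction (elimination) order

for new in range(3, 15):                   # insert 12 vertices, one per face
    a, b, c = faces.pop(random.randrange(len(faces)))
    adj[new] = {a, b, c}
    for v in (a, b, c):
        adj[v].add(new)
    faces += [(a, b, new), (a, c, new), (b, c, new)]
    order.append(new)

color = {0: 0, 1: 1, 2: 2}                 # fix the outer triangle's colors
for v in order[3:]:
    used = {color[u] for u in adj[v] if u in color}
    free = {0, 1, 2, 3} - used
    assert len(free) == 1                  # the color of v is forced
    color[v] = free.pop()
print(color)
```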
It is more difficult to prove, but also true, that every uniquely 4-colorable planar graph is an Apollonian network. Therefore, Apollonian networks may also be characterized as the uniquely 4-colorable planar graphs. Apollonian networks also provide examples of planar graphs having as few k-colorings as possible for k > 4. - -The Apollonian networks are also exactly the maximal planar graphs that (once an exterior face is fixed) have a unique Schnyder wood, a partition of the edges of the graph into three interleaved trees rooted at the three vertices of the exterior face. - -The Apollonian networks do not form a family of graphs that is closed under the operation of taking graph minors, as removing edges but not vertices from an Apollonian network produces a graph that is not an Apollonian network. However, the planar partial 3-trees, subgraphs of Apollonian networks, are minor-closed. Therefore, according to the Robertson–Seymour theorem, they can be characterized by a finite number of forbidden minors. The minimal forbidden minors for the planar partial 3-trees are the four minimal graphs among the forbidden minors for the planar graphs and the partial 3-trees: the complete graph K5, the complete bipartite graph K3,3, the graph of the octahedron, and the graph of the pentagonal prism. The Apollonian graphs are the maximal graphs that do not have any of these four graphs as a minor. - -A Y-Δ transform, an operation that replaces a degree-three vertex in a graph by a triangle connecting its neighbors, is sufficient (together with the removal of parallel edges) to reduce any Apollonian network to a single triangle, and more generally the planar graphs that can be reduced to a single edge by Y-Δ transforms, removal of parallel edges, removal of degree-one vertices, and compression of degree-two vertices are exactly the planar partial 3-trees. The dual graphs of the planar partial 3-trees form another minor-closed graph family and are exactly the planar graphs that can be reduced to a single edge by Δ-Y transforms, removal of parallel edges, removal of degree-one vertices, and compression of degree-two vertices. - -In every subgraph of an Apollonian network, the most recently added vertex has degree at most three, so Apollonian networks have degeneracy three. The order in which the vertices are added to create the network is therefore a degeneracy ordering, and the Apollonian networks coincide with the 3-degenerate maximal planar graphs. - -Another characterization of the Apollonian networks involves their connectivity. Any maximal planar graph may be decomposed into 4-vertex-connected maximal planar subgraphs by splitting it along its separating triangles (triangles that are not faces of the graph): given any non-facial triangle: one can form two smaller maximal planar graphs, one consisting of the part inside the triangle and the other consisting of the part outside the triangle. The maximal planar graphs without separating triangles that may be formed by repeated splits of this type are sometimes called blocks, although that name has also been used for the biconnected components of a graph that is not itself biconnected. An Apollonian network is a maximal planar graph in which all of the blocks are isomorphic to the complete graph K4. - -In extremal graph theory, Apollonian networks are also exactly the n-vertex planar graphs in which the number of blocks achieves its maximum, n - 3, and the planar graphs in which the number of triangles achieves its maximum, 3n - 8. 
Since each K4 subgraph of a planar graph must be a block, these are also the planar graphs in which the number of K4 subgraphs achieves its maximum, n - 3, and the graphs in which the number of cliques of any type achieves its maximum, 8n - 16. - -Apollonian networks are named after Apollonius of Perga, who studied the Problem of Apollonius of constructing a circle tangent to three other circles. One method of constructing Apollonian networks is to start with three mutually-tangent circles and then repeatedly inscribe another circle within the gap formed by three previously-drawn circles. The fractal collection of circles produced in this way is called an Apollonian gasket. - -If the process of producing an Apollonian gasket is stopped early, with only a finite set of circles, then the graph that has one vertex for each circle and one edge for each pair of tangent circles is an Apollonian network. The existence of a set of tangent circles whose tangencies represent a given Apollonian network forms a simple instance of the Koebe–Andreev–Thurston circle-packing theorem, which states that any planar graph can be represented by tangent circles in the same way. - -Apollonian networks are planar 3-connected graphs and therefore, by Steinitz's theorem, can always be represented as the graphs of convex polyhedra. The convex polyhedron representing an Apollonian network is a 3-dimensional stacked polytope. Such a polytope can be obtained from a tetrahedron by repeatedly gluing additional tetrahedra one at a time onto its triangular faces. Therefore, Apollonian networks may also be defined as the graphs of stacked 3d polytopes. It is possible to find a representation of any Apollonian network as convex 3d polyhedron in which all of the coordinates are integers of polynomial size, better than what is known for other planar graphs. - -The recursive subdivision of triangles into three smaller triangles was investigated as an image segmentation technique in computer vision by Elcock; in this context, they called it the ternary scalene triangle decomposition. They observed that, by placing each new vertex at the centroid of its enclosing triangle, the triangulation could be chosen in such a way that all triangles have equal areas, although they do not all have the same shape. More generally, - -Apollonian networks may be drawn in the plane with any prescribed area in each face; if the areas are rational numbers, so are all of the vertex coordinates. - -It is also possible to carry out the process of subdividing a triangle to form an Apollonian network in such a way that, at every step, the edge lengths are rational numbers; it is an open problem whether every planar graph has a drawing with this property. It is possible in polynomial time to find a drawing of a planar 3-tree with integer coordinates minimizing the area of the bounding box of the drawing, and to test whether a given planar 3-tree may be drawn with its vertices on a given set of points. - -Plummer used Apollonian networks to construct an infinite family of maximal planar graphs with an even number of vertices but with no perfect matching. Plummer's graphs are formed in two stages. In the first stage, starting from a triangle abc, one repeatedly subdivides the triangular face of the subdivision that contains edge bc: the result is a graph consisting of a path from a to the final subdivision vertex together with an edge from each path vertex to each of b and c. 
In the second stage, each of the triangular faces of the resulting planar graph is subdivided one more time. If the path from a to the final subdivision vertex of the first stage has even length, then the number of vertices in the overall graph is also even. However, approximately 2/3 of the vertices are the ones inserted in the second stage; these form an independent set, and cannot be matched to each other, nor are there enough vertices outside the independent set to find matches for all of them. - -Although Apollonian networks themselves may not have perfect matchings, the planar dual graphs of Apollonian networks are 3-regular graphs with no cut edges, so by a theorem of Petersen they are guaranteed to have at least one perfect matching. However, in this case more is known: the duals of Apollonian networks always have an exponential number of perfect matchings. László Lovász and Michael D. Plummer conjectured that a similar exponential lower bound holds more generally for every 3-regular graph without cut edges, a result that was later proven. - -Andrade studied power laws in the degree sequences of a special case of networks of this type, formed by subdividing all triangles the same number of times. They used these networks to model packings of space by particles of varying sizes. Based on their work, other authors introduced random Apollonian networks, formed by repeatedly choosing a random face to subdivide, and they showed that these also obey power laws in their degree distribution and have small average distances. Alan M. Frieze and Charalampos E. Tsourakakis analyzed the highest degrees and the eigenvalues of random Apollonian networks. Andrade et al. also observed that their networks satisfy the small world effect: all vertices are within a small distance of each other. Based on numerical evidence they estimated the average distance between randomly selected pairs of vertices in an n-vertex network of this type to be proportional to $(\log n)^{3/4}$, but later researchers showed that the average distance is actually proportional to log n. - -Butler observed that if each new vertex is placed at the incenter of its triangle, so that the edges to the new vertex bisect the angles of the triangle, then the set of triples of angles of triangles in the subdivision, when reinterpreted as triples of barycentric coordinates of points in an equilateral triangle, converges in shape to the Sierpinski triangle as the number of levels of subdivision grows. - -Takeo claimed erroneously that all Apollonian networks have Hamiltonian cycles; however, the Goldner–Harary graph provides a counterexample. If an Apollonian network has toughness greater than one (meaning that removing any set of vertices from the graph leaves a smaller number of connected components than the number of removed vertices) then it necessarily has a Hamiltonian cycle, but there exist non-Hamiltonian Apollonian networks whose toughness is equal to one. - -The combinatorial enumeration problem of counting Apollonian triangulations was studied by Takeo, who showed that they have the simple generating function f(x) described by the equation $f(x) = 1 + x(f(x))^3$. - -In this generating function, the term of degree n counts the number of Apollonian networks with a fixed outer triangle and n + 3 vertices. - -Thus, the numbers of Apollonian networks (with a fixed outer triangle) on 3, 4, 5, ... vertices are: - -1, 1, 3, 12, 55, 273, 1428, 7752, 43263, 246675, ...
, - -a sequence that also counts ternary trees and dissections of convex polygons into odd-sided polygons. - -For instance, there are 12 6-vertex Apollonian networks: three formed by subdividing the outer triangle once and then subdividing two of the resulting triangles, - -and nine formed by subdividing the outer triangle once, subdividing one of its triangles, and then subdividing one of the resulting smaller triangles. - -Birkhoff is an early paper that uses a dual form of Apollonian networks, the planar maps formed by repeatedly placing new regions at the vertices of simpler maps, as a class of examples of planar maps with few colorings. - -Geometric structures closely related to Apollonian networks have been studied in polyhedral combinatorics since at least the early 1960s, when they were used by Grünbaum to describe graphs that can be realized as the graph of a polytope in only one way, without dimensional or combinatorial ambiguities, and by Moon to find simplicial polytopes with no long paths. In graph theory, the close connection between planarity and treewidth goes back to Robertson, who showed that every minor-closed family of graphs either has bounded treewidth or contains all of the planar graphs. Planar 3-trees, as a class of graphs, were explicitly considered by Hakimi, Alon, Patil, and many authors since them. - -The name "Apollonian network" was given by Andrade to the networks they studied in which the level of subdivision of triangles is uniform across the network; these networks correspond geometrically to a type of stacked polyhedron called a Kleetope. Other authors applied the same name more broadly to planar 3-trees in their work generalizing the model of Andrade et al. to random Apollonian networks. The triangulations generated in this way have also been named "stacked triangulations" or "stack-triangulations". diff --git a/wiki/wikipedia/3768.txt b/wiki/wikipedia/3768.txt deleted file mode 100644 index 4bf96de0cc4a51a724e22b0066ffccbb19f23b5b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3768.txt +++ /dev/null @@ -1,64 +0,0 @@ -The circulation problem and its variants are a generalisation of network flow problems, with the added constraint of a lower bound on edge flows, and with flow conservation also being required for the source and sink (i.e. there are no special nodes). In variants of the problem, there are multiple commodities flowing through the network, and a cost on the flow. - -Given flow network $G(V,E)$ with: -$$ -l(v,w) -$$, lower bound on flow from node $v$ to node $w$, -$$ -u(v,w) -$$, upper bound on flow from node $v$ to node $w$, -$$ -c(v,w) -$$, cost of a unit of flow on $(v,w)$ - -and the constraints: -$$ -l(v,w) \leq f(v,w) \leq u(v,w) -$$, -$$ -\sum_{w \in V} f(u,w) = 0 -$$ (flow cannot appear or disappear in nodes). - -Finding a flow assignment satisfying the constraints gives a solution to the given circulation problem. - -In the minimum cost variant of the problem, minimize -$$ -\sum_{(v,w) \in E} c(v,w) \cdot f(v,w). -$$ - -In a multi-commodity circulation problem, you also need to keep track of the flow of the individual commodities: - -There is also a lower bound on each flow of commodity. - -The conservation constraint must be upheld individually for the commodities: -$$ -\ \sum_{w \in V} f_i(u,w) = 0. -$$ - -For the circulation problem, many polynomial algorithms have been developed (e.g., Edmonds–Karp algorithm, 1972; Tarjan 1987-1988). Tardos found the first strongly polynomial algorithm. 
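Because the constraints above are all linear, a small circulation instance can be solved directly with a general-purpose LP solver. The sketch below is only an illustration of the formulation, not a dedicated circulation algorithm: the three-node cycle, its bounds, and its costs are invented, `scipy.optimize.linprog` is the assumed solver, and conservation is written as out-flow minus in-flow at each node.

```
# A toy minimum-cost circulation posed as a linear program.
import numpy as np
from scipy.optimize import linprog

nodes = ["a", "b", "c"]
edges = [  # (tail, head, lower bound l, upper bound u, unit cost c)
    ("a", "b", 1, 4, 2),
    ("b", "c", 0, 4, 1),
    ("c", "a", 0, 4, 1),
]

cost = [c for (*_, c) in edges]                 # minimize sum of c(v,w) f(v,w)
bounds = [(l, u) for (_, _, l, u, _) in edges]  # l(v,w) <= f(v,w) <= u(v,w)

# Flow conservation at every node: out-flow minus in-flow is zero
# (a circulation has no special source or sink nodes).
A_eq = np.zeros((len(nodes), len(edges)))
for j, (tail, head, *_) in enumerate(edges):
    A_eq[nodes.index(tail), j] += 1.0
    A_eq[nodes.index(head), j] -= 1.0
b_eq = np.zeros(len(nodes))

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)  # [1. 1. 1.]: one unit forced around the cycle, total cost 4
```

For a single commodity with integral bounds the constraint matrix is totally unimodular, so the LP optimum here is automatically integral; for multiple commodities that guarantee disappears, matching the NP-completeness of integral multi-commodity flow noted below.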
- -For the case of multiple commodities, the problem is NP-complete for integer flows. For fractional flows, it is solvable in polynomial time, as one can formulate the problem as a linear program. - -Below are some problems, and how to solve them with the general circulation setup given above. - -* Minimum cost multi-commodity circulation problem - Use all the constraints given above. - -* Minimum cost circulation problem - Use a single commodity. - -* Multi-commodity circulation - Solve without optimising cost. - -* Simple circulation - Just use one commodity, and no cost. - -* Multi-commodity flow - If $K_i(s_i,t_i,d_i)$ denotes a demand of $d_i$ for commodity $i$ from $s_i$ to $t_i$, create an edge $(t_i,s_i)$ with $l_i(t_i,s_i) = u(t_i,s_i) = d_i$ for all commodities $i$. Let $l_i(u,v)=0$ for all other edges. - -* Minimum cost multi-commodity flow problem - As above, but minimize the cost. - -* Minimum cost flow problem - As above, with 1 commodity. - -* Maximum flow problem - Set all costs to 0, and add an edge from the sink $t$ to the source $s$ with $l(t,s)=0$, $u(t,s)=$∞ and $c(t,s)=-1$. - -* Minimum cost maximum flow problem - First find the maximum flow amount $m$. Then solve with $l(t,s)=u(t,s)=m$ and $c(t,s)=0$. - -* Single-source shortest path - Let $l(u,v)=0$ and $c(u,v)=1$ for all edges in the graph, and add an edge $(t,s)$ with $l(t,s)=u(t,s)=1$ and $c(t,s)=0$. - -* All-pairs shortest path - Let all capacities be unlimited, and find a flow of 1 for $v(v-1)/2$ commodities, one for each pair of nodes. diff --git a/wiki/wikipedia/3769.txt b/wiki/wikipedia/3769.txt deleted file mode 100644 index 97292a0f7c9788bbb404cea26618c2e7bff13e62..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3769.txt +++ /dev/null @@ -1,9 +0,0 @@ -Zendoku is a 2007 puzzle video game developed by Zoonami and published by Eidos Interactive for the Nintendo DS and PlayStation Portable handheld consoles. - -Zendoku is a variation of Sudoku, offering a slightly more combative experience than simply lining up numbers. Players must insert symbols rather than the standard numbers. Players choose characters and choose to attack or defend against a challenger, which takes place via mini-games started upon filling in a group of numbers. The game is set against anime-influenced backdrops and has a "light-hearted martial arts" theme. The game also offers single-player puzzles. - -Zendoku was developed by Zoonami and published by Eidos Interactive for the Nintendo DS and PlayStation Portable handheld consoles. According to Zoonami CEO Martin Hollis, the game's focus is on fun and innovation, stating that the company wanted to turn "a familiar paper and pencil game into a puzzling, battling, micro-gaming, fighting extravaganza." In North America, Zendoku was released to retailers on June 18, 2007. - -Zendoku received generally mixed reviews from critics. The UK Nintendo magazine ONM praised the DS version of the game for its colourful graphics, but criticized the confusing system of using symbols instead of numbers, as well as the minigames feeling tacked on and more of an annoyance, finishing by saying, "One for multiplayer only". - -NGamer praised the multiplayer game highly, saying, "The thrill of using logical reasoning to kick someone's arse is a phenomenal experience."
diff --git a/wiki/wikipedia/377.txt b/wiki/wikipedia/377.txt deleted file mode 100644 index a5b00f8b19776259e596138bd2b1b30fd996a83d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/377.txt +++ /dev/null @@ -1,32 +0,0 @@ -The Erdős–Turán conjecture is an old unsolved problem in additive number theory (not to be confused with Erdős conjecture on arithmetic progressions) posed by Paul Erdős and Pál Turán in 1941. - -The question concerns subsets of the natural numbers, typically denoted by $ \mathbb{N} $, called additive bases. A subset $B$ is called an (asymptotic) additive basis of finite order if there is some positive integer $h$ such that every sufficiently large natural number $n$ can be written as the sum of at most $h$ elements of $B$. For example, the natural numbers are themselves an additive basis of order 1, since every natural number is trivially a sum of at most one natural number. Lagrange's four-square theorem says that the set of positive square numbers is an additive basis of order 4. Another highly non-trivial and celebrated result along these lines is Vinogradov's theorem. - -One is naturally inclined to ask whether these results are optimal. It turns out that Lagrange's four-square theorem cannot be improved, as there are infinitely many positive integers which are not the sum of three squares. This is because no positive integer which is the sum of three squares can leave a remainder of 7 when divided by 8. However, one should perhaps expect that a set $B$ which is about as sparse as the squares (meaning that in a given interval $[1,N]$, roughly $N^{1/2}$ of the integers in $[1,N]$ lie in $B$) which does not have this obvious deficit should have the property that every sufficiently large positive integer is the sum of three elements from $B$. This follows from the following probabilistic model: suppose that $N/2 < n \leq N$ is a positive integer, and $x_1, x_2, x_3$ are 'randomly' selected from $B \cap [1,N]$. Then the probability of a given element from $B$ being chosen is roughly $1/N^{1/2}$. One can then estimate the expected value, which in this case will be quite large. Thus, we 'expect' that there are many representations of $n$ as a sum of three elements from $B$, unless there is some arithmetic obstruction (which means that $ B $ is somehow quite different than a 'typical' set of the same density), like with the squares. Therefore, one should expect that the squares are quite inefficient at representing positive integers as the sum of four elements, since there should already be lots of representations as sums of three elements for those positive integers $n$ that passed the arithmetic obstruction. Examining Vinogradov's theorem quickly reveals that the primes are also very inefficient at representing positive integers as the sum of four primes, for instance. - -This begets the question: suppose that $B$, unlike the squares or the prime numbers, is very efficient at representing positive integers as a sum of $h$ elements of $B$. How efficient can it be? The best possibility is that we can find a positive integer $h$ and a set $B$ such that every positive integer $n$ is the sum of at most $ h $ elements of $ B $ in exactly one way. Failing that, perhaps we can find a $B$ such that every positive integer $n$ is the sum of at most $h$ elements of $B$ in at least one way and at most $S(h)$ ways, where $S$ is a function of $h$. - -This is basically the question that Paul Erdős and Pál Turán asked in 1941. 
Indeed, they conjectured a negative answer to this question, namely that if $B$ is an additive basis of order $h$ of the natural numbers, then it cannot represent positive integers as a sum of at most $h$ too efficiently; the number of representations of $n$, as a function of $n$, must tend to infinity. - -The conjecture was made jointly by Paul Erdős and Pál Turán in 1941. In the original paper, they write - -"(2) If $f(n) > 0$ for $n > n_0$, then $\varlimsup_{n \rightarrow \infty} f(n) = \infty$", - -where $\varlimsup_{n \rightarrow \infty}$ denotes the limit superior. Here $f(n)$ is the number of ways one can write the natural number $n$ as the sum of two (not necessarily distinct) elements of $B$. If $f(n)$ is always positive for sufficiently large $n$, then $B$ is called an additive basis (of order 2). This problem has attracted significant attention but remains unsolved. - -In 1964, Erdős published a multiplicative version of this conjecture. - -While the conjecture remains unsolved, there have been some advances on the problem. First, we express the problem in modern language. For a given subset $B \subset \mathbb{N}$, we define its representation function $r_B(n) = \#\{(a_1, a_2) \in B^2 \mid a_1 + a_2 = n \}$. Then the conjecture states that if $ r_B(n) > 0 $ for all $n$ sufficiently large, then $ \limsup_{n \rightarrow \infty} r_B(n) = \infty $. - -More generally, for any $h \in \mathbb{N}$ and subset $B \subset \mathbb{N}$, we can define the $h$ representation function as $r_{B,h}(n) = \#\{(a_1, \cdots, a_h) \in B^h \mid a_1 + \cdots + a_h = n \}$. We say that $B$ is an additive basis of order $h$ if $r_{B,h}(n) > 0$ for all $n$ sufficiently large. One can see from an elementary argument that if $B$ is an additive basis of order $h$, then -$$ -\displaystyle n \leq \sum_{m=1}^n r_{B,h}(m) \leq |B \cap [1,n]|^h -$$ - -So we obtain the lower bound $n^{1/h} \leq |B \cap [1,n]|$. - -The original conjecture spawned as Erdős and Turán sought a partial answer to Sidon's problem (see: Sidon sequence). Later, Erdős set out to answer the following question posed by Sidon: how close to the lower bound $ |B \cap [1,n]| \geq n^{1/h} $ can an additive basis $ B $ of order $ h $ get? This question was answered in the case $h=2$ by Erdős in 1956. Erdős proved that there exists an additive basis $B$ of order 2 and constants $c_1, c_2 > 0 $ such that $c_1 \log n \leq r_B(n) \leq c_2 \log n $ for all $n $ sufficiently large. In particular, this implies that there exists an additive basis $B$ such that $r_B(n) = n^{1/2 + o(1)} $, which is essentially best possible. This motivated Erdős to make the following conjecture: - -If $B$ is an additive basis of order $h$, then $ \limsup_{n \rightarrow \infty} r_B(n)/\log n > 0.$ - -In 1986, Eduard Wirsing proved that a large class of additive bases, including the prime numbers, contains a subset that is an additive basis but significantly thinner than the original. In 1990, Erdős and Prasad V. Tetali extended Erdős's 1956 result to bases of arbitrary order. In 2000, V. Vu proved that thin subbases exist in the Waring bases using the Hardy–Littlewood circle method and his polynomial concentration results. In 2006, Borwein, Choi, and Chu proved that for all additive bases $B$, $f(n)$ eventually exceeds 7. 
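The modern definitions above lend themselves to quick numerical experiments. As a rough illustration (the choice B = positive squares and both cutoffs are arbitrary), the brute-force sketch below counts ordered representations and lists the many n with $r_B(n) = 0$, which is one way of seeing that the squares are far from being an additive basis of order 2:

```
# Brute-force sketch of the representation function r_{B,h}(n).
from itertools import product

def r(B, n, h=2):
    """Number of ordered h-tuples of elements of B summing to n."""
    return sum(1 for t in product(B, repeat=h) if sum(t) == n)

B = [k * k for k in range(1, 40)]  # positive squares, truncated
print([n for n in range(1, 30) if r(B, n) == 0])
# -> [1, 3, 4, 6, 7, 9, 11, 12, 14, 15, 16, 19, 21, 22, 23, 24, 27, 28]
```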
diff --git a/wiki/wikipedia/3770.txt b/wiki/wikipedia/3770.txt deleted file mode 100644 index 03064f09a3bcc97384a198a8a7b1d34917da7786..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3770.txt +++ /dev/null @@ -1,45 +0,0 @@ -In mathematics, the term undefined is often used to refer to an expression which is not assigned an interpretation or a value (such as an indeterminate form, which has the propensity of assuming different values). The term can take on several different meanings depending on the context. For example: - -* In various branches of mathematics, certain concepts are introduced as primitive notions (e.g., the terms "point", "line" and "angle" in geometry). As these terms are not defined in terms of other concepts, they may be referred to as "undefined terms". - -* A function is said to be "undefined" at points outside of its domainfor example, the real-valued function $ f(x)=\sqrt{x} $ is undefined for negative $x$ (i.e., it assigns no value to negative arguments). - -* In algebra, some arithmetic operations may not assign a meaning to certain values of its operands (e.g., division by zero). In which case, the expressions involving such operands are termed "undefined". - -In ancient times, geometers attempted to define every term. For example, Euclid defined a point as "that which has no part". In modern times, mathematicians recognize that attempting to define every word inevitably leads to circular definitions, and therefore leave some terms (such as "point") undefined (see primitive notion for more). - -This more abstract approach allows for fruitful generalizations. In topology, a topological space may be defined as a set of points endowed with certain properties, but in the general setting, the nature of these "points" is left entirely undefined. Likewise, in category theory, a category consists of "objects" and "arrows", which are again primitive, undefined terms. This allows such abstract mathematical theories to be applied to very diverse concrete situations. - -The expression 0/0 is undefined in arithmetic, as explained in division by zero (the same expression is used in calculus to represent an indeterminate form). - -Mathematicians have different opinions as to whether 0^0 should be defined to equal 1, or be left undefined. - -The set of numbers for which a function is defined is called the domain of the function. If a number is not in the domain of a function, the function is said to be "undefined" for that number. Two common examples are $ f(x)=\frac{1}{x}$, which is undefined for $x=0$, and $ f(x)=\sqrt{x}$, which is undefined (in the real number system) for negative $ x $. - -In trigonometry, the functions $\tan \theta$ and $\sec \theta$ are undefined for all $\theta = 180^\circ \left(n - \frac{1}{2}\right)$, while the functions $\cot \theta$ and $\csc \theta$ are undefined for all $\theta = 180^\circ(n)$. - -In computability theory, if $ f$ is a partial function on $ S$ and $ a$ is an element of $ S$, then this is written as $ f(a)\downarrow$, and is read as "f(a) is defined." - -If $ a$ is not in the domain of $ f$, then this is written as $ f(a)\uparrow$, and is read as "$ f(a)$ is undefined". - -In analysis, measure theory and other mathematical disciplines, the symbol $\infty$ is frequently used to denote an infinite pseudo-number, along with its negative, $ -\infty$. 
The symbol has no well-defined meaning by itself, but an expression like $\left\{a_n\right\}\rightarrow\infty$ is shorthand for a divergent sequence, whose terms eventually become larger than any given real number. - -Performing standard arithmetic operations with the symbols $\pm\infty$ is undefined. Some extensions, though, define the following conventions of addition and multiplication: - -* $x+\infty=\infty$ for all $ x \in\R\cup\{\infty\}$. - -* $-\infty+x=-\infty$ for all $ x\in\R\cup\{-\infty\}$. - -* $x\cdot\infty=\infty$ for all $ x\in\R^{+}$. - -No sensible extension of addition and multiplication with $\infty$ exists in the following cases: - -* $\infty-\infty$ - -* $0\cdot\infty$ (although in measure theory, this is often defined as $0$) - -* $\frac{\infty}{\infty}$ - -For more detail, see extended real number line. - -In complex analysis, a point $z\in\mathbb{C}$ where a holomorphic function is undefined is called a singularity. One distinguishes between removable singularities (i.e., the function can be extended holomorphically to $z$), poles (i.e., the function can be extended meromorphically to $z$), and essential singularities (i.e., no meromorphic extension to $z$ can exist). diff --git a/wiki/wikipedia/3771.txt b/wiki/wikipedia/3771.txt deleted file mode 100644 index 3914fe1d56f1b997a6080547c67a5037249a22b7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3771.txt +++ /dev/null @@ -1,27 +0,0 @@ -In mathematics, and in particular the necklace splitting problem, the Hobby–Rice theorem is a result that is useful in establishing the existence of certain solutions. It was proved in 1965 by Charles R. Hobby and John R. Rice; a simplified proof was given in 1976 by A. Pinkus. - -Given an integer n, define a partition of the interval [0,1] as a sequence of numbers which divide the interval into $n+1$ subintervals: -$$ -0=z_0 < z_1 < \dotsb < z_n < z_{n+1} = 1 -$$ - -Define a signed partition as a partition in which each subinterval $i$ has an associated sign $\delta_i$: -$$ -\delta_1,\dotsc,\delta_{n+1}\in\left\{+1,-1\right\} -$$ - -The Hobby–Rice theorem says that for every n continuously integrable functions: -$$ -g_1,\dotsc,g_n\colon[0,1]\longrightarrow\mathbb{R} -$$ - -there exists a signed partition of [0,1] such that: -$$ -\sum_{i=1}^{n+1}\delta_i\!\int_{z_{i-1}}^{z_i} g_j(z)dz=0\text{ for }1\leq j\leq n. -$$ - -(in other words: for each of the n functions, its integral over the positive subintervals equals its integral over the negative subintervals). - -The theorem was used by Noga Alon in the context of necklace splitting in 1987. - -Suppose the interval [0,1] is a cake. There are n partners and each of the n functions is a value-density function of one partner. We want to divide the cake into two parts such that all partners agree that the parts have the same value. This fair-division challenge is sometimes referred to as the consensus-halving problem. The Hobby–Rice theorem implies that this can be done with n cuts. diff --git a/wiki/wikipedia/3772.txt b/wiki/wikipedia/3772.txt deleted file mode 100644 index 51d1d4b593cc372b98005a12b0183c6a2854c459..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3772.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, Tsen's theorem states that a function field K of an algebraic curve over an algebraically closed field is quasi-algebraically closed (i.e., C1). This implies that the Brauer group of any such field vanishes, and more generally that all the Galois cohomology groups H^i(K, K*) vanish for i ≥ 1.
This result is used to calculate the étale cohomology groups of an algebraic curve. - -The theorem was published by Chiungtze C. Tsen in 1933. diff --git a/wiki/wikipedia/3773.txt b/wiki/wikipedia/3773.txt deleted file mode 100644 index 590c19205cbf1785d17d3bb21c9c7f23ec043186..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3773.txt +++ /dev/null @@ -1,75 +0,0 @@ -In mathematics, the Schwarz lantern is a pathological example of the difficulty of defining the area of a smooth (curved) surface as the limit of the areas of polyhedra. It consists of a family of polyhedral approximations to a right circular cylinder that converge pointwise to the cylinder but whose areas do not converge to the area of the cylinder. It is also known as the Chinese lantern, because of its resemblance to a cylindrical paper lantern, or as Schwarz's boot. The "Schwarz lantern" and "Schwarz's boot" names are from mathematician Hermann Schwarz. - -The sum of the angles at each vertex is equal to two flat angles ($2\pi$ radians). This has as a consequence that the Schwarz lantern can be folded out of a flat piece of paper. The crease pattern for this folded surface, a tessellation of the paper by isosceles triangles, has also been called the Yoshimura pattern, after the work of Y. Yoshimura on the Yoshimura buckling pattern of cylindrical surfaces under axial compression, which can be similar in shape to the Schwarz lantern. - -The discrete polyhedral approximation considered by Schwarz can be described by two parameters, $m$ and $n$. The cylinder is sliced by parallel planes into $2n$ circles. Each of these circles contains $2m$ vertices of the Schwarz lantern, placed with equal spacing around the circle at (for unit circles) a circumferential distance of $\pi / m$ from each other. Importantly, the vertices are placed so they shift in phase by $ \pi/2m$ with each slice. - -From these vertices, the Schwarz lantern is defined as a polyhedral surface formed from isosceles triangles. Each triangle has as its base two consecutive vertices along one of the circular slices, and as its apex a vertex from an adjacent cycle. These triangles meet edge-to-edge to form a polyhedral manifold, topologically equivalent to the cylinder that is being approximated. - -As Schwarz showed, it is not sufficient to simply increase $m$ and $n$ if we wish for the surface area of the polyhedron to converge to the surface area of the curved surface. Depending on the relation of $m$ and $n$ the area of the lantern can converge to the area of the cylinder, to a limit arbitrarily larger than the area of the cylinder, to infinity or in other words to diverge. Thus, the Schwarz lantern demonstrates that simply connecting inscribed vertices is not enough to ensure surface area convergence. - -In the work of Archimedes it already appears that the length of a circle can be approximated by the length of regular polyhedra inscribed or circumscribed in the circle. - -In general, for smooth or rectifiable curves their length can be defined as the supremum of the lengths of polygonal curves inscribed in them. The Schwarz lantern shows that surface area cannot be defined as the supremum of inscribed polyhedral surfaces. - -Schwarz devised his construction in the late 19th century as a counterexample to the erroneous definition in J. A. Serret's book , which incorrectly states that: - -In English: - -Independently of Schwarz, Giuseppe Peano found the same counterexample. 
At the time, Peano was a student of Angelo Genocchi, who already knew about the difficulty on defining surface area from communication with Schwarz. Genocchi informed Charles Hermite, who had been using Serret's erroneous definition in his course. Hermite asked Schwarz for details, revised his course, and published the example in the second edition of his lecture notes (1883). The original note from Schwarz was not published until the second edition of his collected works in 1890. - -A straight circular cylinder of radius $r$ and height $h$ can be parametrized in Cartesian coordinates using the equations -$$ -x = r\cos(u) -$$ -$$ -y = r\sin(u) -$$ -$$ -z = v -$$ - -for $0\leq u\leq 2\pi$ and $0\leq v\leq h$. The Schwarz lantern is a polyhedron with $4mn$ triangular faces inscribed in the cylinder. - -The vertices of the polyhedron correspond in the parametrization to the points -$$ -u=\frac{2\mu\pi}{m} -$$ -$$ -v=\frac{\nu h}{n} -$$ - -and the points -$$ -u=\frac{(2\mu+1)\pi}{m} -$$ -$$ -v=\frac{(2\nu+1) h}{2n} -$$ - -with $\mu=0,1,2,\ldots,m-1$ and $\nu=0,1,2,\ldots,n-1$. - -All the faces are isosceles triangles congruent to each other. - -The base and the height of each of these triangles have lengths -$$ -2r\sin\left(\frac{\pi}{m}\right)\text{ and }\sqrt{r^2\left[1-\cos\left(\frac{\pi}{m}\right)\right]^2+\left(\frac{h}{2n}\right)^2} -$$ - -respectively. This gives a total surface area for the Schwarz lantern -$$ -S(m,n)=4mnr\sin\left(\frac{\pi}{m}\right)\sqrt{4r^2\sin^4\left(\frac{\pi}{2m}\right)+\left(\frac{h}{2n}\right)^2} -$$. - -Simplifying sines when $m\to\infty$ -$$ -S(m,n)\simeq 4\pi nr\sqrt{\left(\frac{\pi^2 r}{2m^2}\right)^2+\left(\frac{h}{2n}\right)^2} = 2\pi r\sqrt{\left(\pi^2 r\frac{n}{m^2}\right)^2+h^2} -$$. - -From this formula it follows that: - -#If $n=am$ for some constant $a$, then $S(m,am)\to2\pi rh$ when $m\to\infty$. This limit is the surface area of the cylinder in which the Schwarz lantern is inscribed. - -#If $n=am^2$ for some constant $a$, then $S(m,am^2)\to 2\pi r\sqrt{\pi^4 r^2 a^2+h^2}$ when $m\to\infty$. This limit depends on the value of $a$ and can be made equal to any number not smaller than the area of the cylinder $2r\pi h$. - -#If $n=am^3$, then $S(m,am^3)\to\infty$ as $m\to\infty$. diff --git a/wiki/wikipedia/3774.txt b/wiki/wikipedia/3774.txt deleted file mode 100644 index b8c301f982d22ccb2a6da93abd90595ff5a2bfff..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3774.txt +++ /dev/null @@ -1,278 +0,0 @@ -Cubic reciprocity is a collection of theorems in elementary and algebraic number theory that state conditions under which the congruence x3 ≡ p (mod q) is solvable; the word "reciprocity" comes from the form of the main theorem, which states that if p and q are primary numbers in the ring of Eisenstein integers, both coprime to 3, the congruence x3 ≡ p (mod q) is solvable if and only if x3 ≡ q (mod p) is solvable. - -Sometime before 1748 Euler made the first conjectures about the cubic residuacity of small integers, but they were not published until 1849, after his death. - -Gauss's published works mention cubic residues and reciprocity three times: there is one result pertaining to cubic residues in the Disquisitiones Arithmeticae (1801). In the introduction to the fifth and sixth proofs of quadratic reciprocity (1818) he said that he was publishing these proofs because their techniques (Gauss's lemma and Gaussian sums, respectively) can be applied to cubic and biquadratic reciprocity. 
Finally, a footnote in the second (of two) monographs on biquadratic reciprocity (1832) states that cubic reciprocity is most easily described in the ring of Eisenstein integers. - -From his diary and other unpublished sources, it appears that Gauss knew the rules for the cubic and quartic residuacity of integers by 1805, and discovered the full-blown theorems and proofs of cubic and biquadratic reciprocity around 1814. Proofs of these were found in his posthumous papers, but it is not clear if they are his or Eisenstein's. - -Jacobi published several theorems about cubic residuacity in 1827, but no proofs. In his Königsberg lectures of 1836-37 Jacobi presented proofs. - -As is often the case in number theory, it is easier to work modulo prime numbers, so in this section all moduli p, q, etc., are assumed to be positive, odd primes. - -We first note that if q ≡ 2 (mod 3) is a prime then every number is a cubic residue modulo q. Let q = 3n + 2; since 0 = 03 is obviously a cubic residue, assume x is not divisible by q. Then by Fermat's little theorem, -$$ -x^q \equiv x \bmod{q}, \qquad x^{q - 1} \equiv 1 \bmod{q} -$$ - -Multiplying the two congruences we have -$$ - x^{2q-1} \equiv x \bmod{q} -$$ - -Now substituting 3n + 2 for q we have: -$$ - x^{2q-1} = x^{6n + 3} = \left (x^{2n+1} \right )^3. -$$ - -Therefore, the only interesting case is when the modulus p ≡ 1 (mod 3). In this case the non-zero residue classes (mod p) can be divided into three sets, each containing (p−1)/3 numbers. Let e be a cubic non-residue. The first set is the cubic residues; the second one is e times the numbers in the first set, and the third is e2 times the numbers in the first set. Another way to describe this division is to let e be a primitive root (mod p); then the first (resp. second, third) set is the numbers whose indices with respect to this root are congruent to 0 (resp. 1, 2) (mod 3). In the vocabulary of group theory, the first set is a subgroup of index 3 of the multiplicative group $(\Z/p\Z)^{\times}$ and the other two are its cosets. - -A theorem of Fermat states that every prime p ≡ 1 (mod 3) can be written as p = a2 + 3b2 and (except for the signs of a and b) this representation is unique. - -Letting m = a + b and n = a − b, we see that this is equivalent to p = m2 − mn + n2 (which equals (n − m)2 − (n − m)n + n2 = m2 + m(n − m) + (n − m)2, so m and n are not determined uniquely). Thus, - -\begin{align} - -4p &= (2m-n)^2 + 3n^2 \\ - -&= (2n-m)^2 + 3m^2 \\ - -&= (m+n)^2 + 3(m-n)^2 - -\end{align} - -and it is a straightforward exercise to show that exactly one of m, n, or m − n is a multiple of 3, so -$$ -p = \frac14 (L^2+ 27M^2), -$$ - -and this representation is unique up to the signs of L and M. - -For relatively prime integers m and n define the rational cubic residue symbol as -$$ -\left[\frac{m}{n}\right]_3 = \begin{cases} 1 & m \text{ is a cubic residue } \bmod n \\ -1 & m \text{ is a cubic non-residue }\bmod n \end{cases} -$$ - -It is important to note that this symbol does not have the multiplicative properties of the Legendre symbol; for this, we need the true cubic character defined below. - -Euler's Conjectures. Let p = a2 + 3b2 be a prime. 
Then the following hold: - -\begin{align} - -\left[\tfrac{2}{p}\right]_3 =1 \quad &\Longleftrightarrow \quad 3\mid b\\ - -\left[\tfrac{3}{p}\right]_3 =1 \quad &\Longleftrightarrow \quad 9\mid b \text{ or } 9\mid(a\pm b)\\ - -\left[\tfrac{5}{p}\right]_3 =1 \quad &\Longleftrightarrow \quad 15\mid b \text{ or } 3\mid b \text{ and } 5\mid a \text{ or } 15\mid(a\pm b) \text{ or } 15\mid(2a\pm b)\\ - -\left[\tfrac{6}{p}\right]_3 =1 \quad &\Longleftrightarrow \quad 9\mid b \text{ or } 9\mid(a\pm 2b)\\ - -\left[\tfrac{7}{p}\right]_3 =1 \quad &\Longrightarrow \quad (3\mid b\text{ and }7\mid a) \text{ or } 21\mid (b\pm a) \text{ or } 7\mid(4b\pm a) \text{ or } 21\mid b \text{ or } 7\mid(b\pm 2a) - -\end{align} - -The first two can be restated as follows. Let p be a prime that is congruent to 1 modulo 3. Then: - -* 2 is a cubic residue of p if and only if p = a2 + 27b2. - -* 3 is a cubic residue of p if and only if 4p = a2 + 243b2. - -Gauss's Theorem. Let p be a positive prime such that -$$ -p = 3n + 1= \tfrac14 \left(L^2+ 27M^2\right). -$$ - -Then $ L(n!)^3\equiv 1 \bmod p.$ - -One can easily see that Gauss's Theorem implies: -$$ -\left[\tfrac{L}{p}\right]_3 = \left[\tfrac{M}{p}\right]_3 =1. -$$ - -Jacobi's Theorem (stated without proof). Let q ≡ p ≡ 1 (mod 6) be positive primes. Obviously both p and q are also congruent to 1 modulo 3, therefore assume: -$$ -p = \tfrac14 \left(L^2+ 27M^2\right), \qquad q = \tfrac14 \left(L'^2+ 27M'^2\right). -$$ - -Let x be a solution of x2 ≡ −3 (mod q). Then -$$ -x\equiv\pm \frac{L'}{3M'}\bmod q, -$$ - -and we have: - -\begin{align} - -\left[\frac{q}{p}\right]_3 =1 \quad &\Longleftrightarrow \quad \left[\frac{\frac{L+3Mx}{2}p}{q}\right]_3 =1 \quad \Longleftrightarrow \quad \left[\frac{\frac{L+3Mx}{L-3Mx}}{q}\right]_3 =1 \\ - -\left[\frac{q}{p}\right]_3 =1 \quad &\Longrightarrow \quad \left[\frac{\frac{LM'+L'M}{LM'-L'M}}{q}\right]_3 =1 - -\end{align} - -Lehmer's Theorem. Let q and p be primes, with $p = \tfrac14 \left(L^2+ 27M^2\right).$ Then: -$$ -\left[\frac{q}{p}\right]_3 = 1 \quad \Longleftrightarrow \quad q \mid LM \text{ or } L\equiv\pm \frac{9r}{2u+1} M\bmod{q}, -$$ - -where -$$ -u\not\equiv 0,1,-\tfrac12, -\tfrac13 \bmod q \quad \text{and} \quad 3u+1 \equiv r^2 (3u-3)\bmod q. -$$ - -Note that the first condition implies: that any number that divides L or M is a cubic residue (mod p). - -The first few examples of this are equivalent to Euler's conjectures: - -\begin{align} - -\left[\frac{2}{p}\right]_3 =1 \quad &\Longleftrightarrow \quad L \equiv M \equiv 0 \bmod 2 \\ - -\left[\frac{3}{p}\right]_3 =1 \quad &\Longleftrightarrow \quad M \equiv 0 \bmod 3 \\ - -\left[\frac{5}{p}\right]_3 =1 \quad &\Longleftrightarrow \quad LM \equiv 0 \bmod 5 \\ - -\left[\frac{7}{p}\right]_3 =1 \quad &\Longleftrightarrow \quad LM \equiv 0 \bmod 7 - -\end{align} - -Since obviously L ≡ M (mod 2), the criterion for q = 2 can be simplified as: -$$ - \left[\frac{2}{p}\right]_3 =1 \quad \Longleftrightarrow \quad M \equiv 0 \bmod 2. -$$ - -Martinet's theorem. Let p ≡ q ≡ 1 (mod 3) be primes, $ pq = \tfrac14 (L^2+ 27M^2).$ Then -$$ -\left[\frac{L}{p}\right]_3 \left[\frac{L}{q}\right]_3 =1\quad \Longleftrightarrow \quad \left[\frac{q}{p}\right]_3 \left[\frac{p}{q}\right]_3 =1. -$$ - -Sharifi's theorem. Let p = 1 + 3x + 9x2 be a prime. Then any divisor of x is a cubic residue (mod p). - -In his second monograph on biquadratic reciprocity, Gauss says: - -
    The theorems on biquadratic residues gleam with the greatest simplicity and genuine beauty only when the field of arithmetic is extended to imaginary numbers, so that without restriction, the numbers of the form a + bi constitute the object of study ... we call such numbers integral complex numbers. [bold in the original]
    - -These numbers are now called the ring of Gaussian integers, denoted by Z[i]. Note that i is a fourth root of 1. - -In a footnote he adds - -
The theory of cubic residues must be based in a similar way on a consideration of numbers of the form a + bh where h is an imaginary root of the equation h^3 = 1 ... and similarly the theory of residues of higher powers leads to the introduction of other imaginary quantities.
    - -In his first monograph on cubic reciprocity Eisenstein developed the theory of the numbers built up from a cube root of unity; they are now called the ring of Eisenstein integers. Eisenstein said (paraphrasing) "to investigate the properties of this ring one need only consult Gauss's work on Z[i] and modify the proofs". This is not surprising since both rings are unique factorization domains. - -The "other imaginary quantities" needed for the "theory of residues of higher powers" are the rings of integers of the cyclotomic number fields; the Gaussian and Eisenstein integers are the simplest examples of these. - -Let -$$ -\omega = \frac{-1 + i\sqrt 3}{2} = e^\frac{2\pi i}{3}, \qquad \omega^3 = 1. -$$ - -And consider the ring of Eisenstein integers: -$$ -\Z[\omega] = \left \{ a + b \omega \ : \ a, b \in \Z \right \}. -$$ - -This is a Euclidean domain with the norm function given by: -$$ -N(a + b \omega) = a^2 -ab + b^2. -$$ - -Note that the norm is always congruent to 0 or 1 (mod 3). - -The group of units in $\Z[\omega]$ (the elements with a multiplicative inverse or equivalently those with unit norm) is a cyclic group of the sixth roots of unity, -$$ -\left \{ \pm 1, \pm \omega, \pm \omega^2\right \}. -$$ -$$ -\Z[\omega] -$$ is a unique factorization domain. The primes fall into three classes: - -* 3 is a special case: -$$ - 3 = -\omega^2 (1-\omega)^2. -$$ - -It is the only prime in $\Z$ divisible by the square of a prime in $\Z[\omega]$. The prime 3 is said to ramify in $\Z[\omega]$. - -* Positive primes in $\Z$ congruent to 2 (mod 3) are also primes in $\Z[\omega]$. These primes are said to remain inert in $\Z[\omega]$. Note that if $q$ is any inert prime then: -$$ -N(q) = q^2 \equiv 1 \bmod{3}. -$$ - -* Positive primes in $\Z$ congruent to 1 (mod 3) are the product of two conjugate primes in $\Z[\omega]$. These primes are said to split in $\Z[\omega]$. Their factorization is given by: -$$ -p=N (\pi) = N (\overline{\pi})= \pi \overline{\pi}. -$$ - -for example -$$ - 7 = ( 3 + \omega) ( 2 - \omega). -$$ - -A number is primary if it is coprime to 3 and congruent to an ordinary integer modulo $(1-\omega)^2,$ which is the same as saying it is congruent to $\pm 2$ modulo 3. If $\gcd(N(\lambda), 3) = 1$ one of $\lambda, \omega \lambda,$ or $\omega^2 \lambda$ is primary. Moreover, the product of two primary numbers is primary and the conjugate of a primary number is also primary. - -The unique factorization theorem for $\Z[\omega]$ is: if $\lambda \neq 0,$ then -$$ -\lambda = \pm\omega^\mu(1-\omega)^\nu\pi_1^{\alpha_1}\pi_2^{\alpha_2}\pi_3^{\alpha_3} \cdots, \qquad \mu \in \{0, 1, 2\}, \quad \nu, \alpha_1, \alpha_2, \ldots \geqslant 0 -$$ - -where each $\pi_i$ is a primary (under Eisenstein's definition) prime. And this representation is unique, up to the order of the factors. - -The notions of congruence and greatest common divisor are defined the same way in $\Z[\omega]$ as they are for the ordinary integers $\Z$. Because the units divide all numbers, a congruence modulo $\lambda$ is also true modulo any associate of $\lambda$, and any associate of a GCD is also a GCD. - -An analogue of Fermat's little theorem is true in $\Z[\omega]$: if $\alpha$ is not divisible by a prime $\pi$, -$$ -\alpha^{N (\pi) - 1} \equiv 1 \bmod{\pi}. 
-$$ - -Now assume that $N(\pi) \neq 3$ so that $N(\pi) \equiv 1 \bmod{3}.$ Or put differently $3\mid N(\pi) -1.$ Then we can write: -$$ -\alpha^{\frac{N ( \pi )- 1}{3}}\equiv \omega^k \bmod\pi, -$$ - -for a unique unit $\omega^k.$ This unit is called the cubic residue character of $\alpha$ modulo $\pi$ and is denoted by -$$ -\left(\frac{\alpha}{\pi}\right)_3 = \omega^k \equiv \alpha^{\frac{N(\pi) - 1}{3}} \bmod{\pi}. -$$ - -The cubic residue character has formal properties similar to those of the Legendre symbol: - -* If $\alpha \equiv \beta \bmod{\pi}$ then $\left (\tfrac{\alpha}{\pi}\right )_3=\left (\tfrac{\beta}{\pi}\right )_3.$ - -* $\left (\tfrac{\alpha\beta}{\pi}\right )_3=\left (\tfrac{\alpha}{\pi}\right )_3\left (\tfrac{\beta}{\pi}\right )_3.$ - -* $\overline{\left (\tfrac{\alpha}{\pi}\right )_3}=\left (\tfrac{\overline{\alpha}}{\overline{\pi}}\right )_3,$ where the bar denotes complex conjugation. - -* If $\pi$ and $\theta$ are associates then $\left (\tfrac{\alpha}{\pi}\right )_3=\left (\tfrac{\alpha}{\theta}\right )_3$ - -* The congruence $x^3 \equiv \alpha \bmod{\pi}$ has a solution in $\Z[\omega]$ if and only if $\left(\tfrac{\alpha}{\pi}\right)_3 = 1.$ - -* If $a, b \in \Z$ are such that $\gcd(a, b) = \gcd(b, 3) = 1,$ then $\left(\tfrac{a}{b}\right)_3 = 1.$ - -* The cubic character can be extended multiplicatively to composite numbers (coprime to 3) in the "denominator" in the same way the Legendre symbol is generalized into the Jacobi symbol. Like the Jacobi symbol, if the "denominator" of the cubic character is composite, then if the "numerator" is a cubic residue mod the "denominator" the symbol will equal 1, if the symbol does not equal 1 then the "numerator" is a cubic non-residue, but the symbol can equal 1 when the "numerator" is a non-residue: -$$ -\left(\frac{\alpha}{\lambda}\right)_3 = \left(\frac{\alpha}{\pi_1}\right)_3^{\alpha_1} \left(\frac{\alpha}{\pi_2}\right)_3^{\alpha_2} \cdots, -$$ - -where -$$ -\lambda = \pi_1^{\alpha_1}\pi_2^{\alpha_2}\pi_3^{\alpha_3} \cdots -$$ - -Let α and β be primary. Then -$$ -\Bigg(\frac{\alpha}{\beta}\Bigg)_3 = \Bigg(\frac{\beta}{\alpha}\Bigg)_3. -$$ - -There are supplementary theorems for the units and the prime 1 − ω: - -Let α = a + bω be primary, a = 3m + 1 and b = 3n. (If a ≡ 2 (mod 3) replace α with its associate −α; this will not change the value of the cubic characters.) Then - - - -\Bigg(\frac{\omega}{\alpha}\Bigg)_3 = \omega^\frac{1-a-b}{3}= \omega^{-m-n}, - -\Bigg(\frac{1-\omega}{\alpha}\Bigg)_3 = \omega^\frac{a-1}{3}= \omega^m, - -\Bigg(\frac{3}{\alpha}\Bigg)_3 = \omega^\frac{b}{3}= \omega^n. - - diff --git a/wiki/wikipedia/3775.txt b/wiki/wikipedia/3775.txt deleted file mode 100644 index 10bd33691838c5ea7925817eb1a69bb2aa954473..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3775.txt +++ /dev/null @@ -1,11 +0,0 @@ -In mathematics, the Bishop–Phelps theorem is a theorem about the topological properties of Banach spaces named after Errett Bishop and Robert Phelps, who published its proof in 1961. - -Its statement is as follows. - -Let $B \subseteq X$ be a bounded, closed, convex set of a real Banach space $B \subseteq X.$ Then the set - -\left\{ x^{\prime} \in X^{\prime} : x^{\prime} \text{ attains its supremum on } B \right\} - -is norm-dense in the continuous dual space $X^{\prime}$ of $X.$ - -Importantly, this theorem fails for complex Banach spaces. 
diff --git a/wiki/wikipedia/3776.txt b/wiki/wikipedia/3776.txt deleted file mode 100644 index 685d0561f1a5675e970b8427beae8d1a55d68f9d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3776.txt +++ /dev/null @@ -1,50 +0,0 @@ -In mathematical analysis, a Besicovitch cover, named after Abram Samoilovitch Besicovitch, is an open cover of a subset E of the Euclidean space RN by balls such that each point of E is the center of some ball in the cover. - -The Besicovitch covering theorem asserts that there exists a constant cN depending only on the dimension N with the following property: - -* Given any Besicovitch cover F of a bounded set E, there are cN subcollections of balls A1 = {Bn1}, …, AcN = {BncN} contained in F such that each collection Ai consists of disjoint balls, and -$$ - E \subseteq \bigcup_{i=1}^{c_N} \bigcup_{B\in A_i} B. -$$ - -Let G denote the subcollection of F consisting of all balls from the cN disjoint families A1,...,AcN. - -The less precise following statement is clearly true: every point x ∈ RN belongs to at most cN different balls from the subcollection G, and G remains a cover for E (every point y ∈ E belongs to at least one ball from the subcollection G). This property gives actually an equivalent form for the theorem (except for the value of the constant). - -* There exists a constant bN depending only on the dimension N with the following property: Given any Besicovitch cover F of a bounded set E, there is a subcollection G of F such that G is a cover of the set E and every point x ∈ E belongs to at most bN different balls from the subcover G. - -In other words, the function SG equal to the sum of the indicator functions of the balls in G is larger than 1E and bounded on RN by the constant bN, -$$ - \mathbf{1}_E \le S_{\mathbf {G}} := \sum_{B \in \mathbf{G}} \mathbf{1}_B \le b_N. -$$ - -Let μ be a Borel non-negative measure on RN, finite on compact subsets and let f be a μ-integrable function. Define the maximal function $f^*$ by setting for every x (using the convention $\infty \times 0 = 0$) -$$ -f^*(x) = \sup_{r > 0} \Bigl( \mu(B(x, r))^{-1} \int_{B(x, r)} |f(y)| d\mu(y) \Bigr). -$$ - -This maximal function is lower semicontinuous, hence measurable. The following maximal inequality is satisfied for every λ > 0 : -$$ -\lambda \mu \bigl( \{ x : f^*(x) > \lambda \} \bigr) \le b_N \int |f| d\mu. -$$ - -;Proof. - -The set Eλ of the points x such that $f^*(x) > \lambda$ clearly admits a Besicovitch cover Fλ by balls B such that -$$ -\int \mathbf{1}_B |f| \ d\mu = \int_{B} |f(y)| d\mu(y) > \lambda \mu(B). -$$ - -For every bounded Borel subset E´ of Eλ, one can find a subcollection G extracted from Fλ that covers E´ and such that SG ≤ bN, hence - -\begin{align} - -\lambda \mu(E') &\le \lambda \sum_{B \in \mathbf{G}} \mu(B)\\ - -&\le \sum_{B \in \mathbf{G}} \int \mathbf{1}_B |f| d\mu = \int S_{\mathbf {G}} |f| d\mu \le b_N \int |f| d\mu, - -\end{align} - -which implies the inequality above. - -When dealing with the Lebesgue measure on RN, it is more customary to use the easier (and older) Vitali covering lemma in order to derive the previous maximal inequality (with a different constant). diff --git a/wiki/wikipedia/3777.txt b/wiki/wikipedia/3777.txt deleted file mode 100644 index 5218d713d20de335ba75979a10eb8659d2681ba4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3777.txt +++ /dev/null @@ -1,50 +0,0 @@ -In mathematics, Schilder's theorem is a result in the large deviations theory of stochastic processes. 
Roughly speaking, Schilder's theorem gives an estimate for the probability that a (scaled-down) sample path of Brownian motion will stray far from the mean path (which is constant with value 0). This statement is made precise using rate functions. Schilder's theorem is generalized by the Freidlin–Wentzell theorem for Itō diffusions. - -Let B be a standard Brownian motion in d-dimensional Euclidean space Rd starting at the origin, 0 ∈ Rd; let W denote the law of B, i.e. classical Wiener measure. For ε > 0, let Wε denote the law of the rescaled process $\sqrt{\varepsilon} B$. Then, on the Banach space C0 = C0([0, T]; Rd) of continuous functions $ f : [0,T] \longrightarrow \mathbf{R}^d$ such that $f(0)=0$, equipped with the supremum norm ||·||∞, the probability measures Wε satisfy the large deviations principle with good rate function I : C0 → R ∪ {+∞} given by -$$ -I(\omega) = \frac{1}{2} \int_{0}^{T} | \dot{\omega}(t) |^{2} \mathrm{d} t -$$ - -if ω is absolutely continuous, and I(ω) = +∞ otherwise. In other words, for every open set G ⊆ C0 and every closed set F ⊆ C0, -$$ -\limsup_{\varepsilon \downarrow 0} \varepsilon \log \mathbf{W}_{\varepsilon} (F) \leq - \inf_{\omega \in F} I(\omega) -$$ - -and -$$ -\liminf_{\varepsilon \downarrow 0} \varepsilon \log \mathbf{W}_{\varepsilon} (G) \geq - \inf_{\omega \in G} I(\omega). -$$ - -Taking ε = 1/c^2, one can use Schilder's theorem to obtain estimates for the probability that a standard Brownian motion B strays further than c from its starting point over the time interval [0, T], i.e. the probability -$$ -\mathbf{W} (C_0 \smallsetminus \mathbf{B}_c (0; \| \cdot \|_\infty)) \equiv \mathbf{P} \big[ \| B \|_\infty > c \big], -$$ - -as c tends to infinity. Here B_c(0; ||·||∞) denotes the open ball of radius c about the zero function in C0, taken with respect to the supremum norm. First note that -$$ -\| B \|_\infty > c \iff \sqrt{\varepsilon} B \in A := \left \{ \omega \in C_0 \mid |\omega(t)| > 1 \text{ for some } t \in [0, T] \right\}. -$$ - -Since the rate function is continuous on A, Schilder's theorem yields - -\begin{align} - -\lim_{c \to \infty} \frac{\log \left (\mathbf{P} \left [ \| B \|_\infty > c \right] \right )}{c^2} &= \lim_{\varepsilon \to 0} \varepsilon \log \left (\mathbf{P} \left[ \sqrt{\varepsilon} B \in A \right] \right ) \\[6pt] -&= - \inf \left\{ \left. \frac{1}{2} \int_0^T | \dot{\omega}(t) |^2 \mathrm{d} t \right| \omega \in A \right\} \\[6pt] -&= - \frac{1}{2} \int_0^T \frac{1}{T^2} \mathrm{d} t \\[6pt] -&= - \frac{1}{2 T}, - -\end{align} - -making use of the fact that the infimum over paths in the collection A is attained for ω(t) = t/T. This result can be heuristically interpreted as saying that, for large c and/or large T -$$ -\frac{\log \left (\mathbf{P} \left [ \| B \|_\infty > c \right] \right )}{c^2} \approx - \frac{1}{2T} \qquad \text{or} \qquad \mathbf{P} \left[ \| B \|_\infty > c \right ] \approx \exp \left( - \frac{c^2}{2 T} \right). -$$ - -In fact, the above probability can be estimated more precisely: for B a standard Brownian motion in Rn, and any T, c and ε > 0, we have: -$$ -\mathbf{P} \left[ \sup_{0 \leq t \leq T} \left| \sqrt{\varepsilon} B_t \right | \geq c \right] \leq 4 n \exp \left( - \frac{c^2}{2 n T \varepsilon} \right). 
-$$ diff --git a/wiki/wikipedia/3778.txt b/wiki/wikipedia/3778.txt deleted file mode 100644 index a10bb72fb488b28addd978b504b5a98ee9831fec..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3778.txt +++ /dev/null @@ -1,14 +0,0 @@ -In mathematics, Kummer's theorem is a formula for the exponent of the highest power of a prime number p that divides a given binomial coefficient. In other words, it gives the p-adic valuation of a binomial coefficient. The theorem is named after Ernst Kummer, who proved it in an 1852 paper. - -Kummer's theorem states that for given integers n ≥ m ≥ 0 and a prime number p, the p-adic valuation $\nu_p\left( \tbinom n m \right)$ is equal to the number of carries when m is added to n - m in base p. - -It can be proved by writing $\tbinom{n}{m}$ as $\tfrac{n!}{m! (n-m)!}$ and using Legendre's formula. - -To compute the largest power of 2 dividing the binomial coefficient $\tbinom{10}{3}$ write m = 3 and n − m = 7 in base p = 2 as $3 = 11_2$ and $7 = 111_2$. Carrying out the addition $11_2 + 111_2 = 1010_2$ in base 2 requires three carries. And the largest power of 2 that divides $\tbinom{10}{3} = 120 = 2^3 \cdot 15$ is $2^3$. - -Kummer's theorem can be generalized to multinomial coefficients $ \tbinom n {m_1,\ldots,m_k} = \tfrac{n!}{m_1!\cdots m_k!}$ as follows: - -Write the base-$p$ expansion of the integer $n$ as $n=n_0+n_1p+n_2p^2+\cdots+n_rp^r$, and define $S_p(n):=n_0+n_1+\cdots+n_r$ to be the sum of the base-$p$ digits. Then -$$ -\nu_p\left( \binom n {m_1,\ldots,m_k} \right) = \dfrac{1}{p-1} \left( \sum_{i=1}^k S_p(m_i) - S_p(n)\right). -$$ diff --git a/wiki/wikipedia/3779.txt b/wiki/wikipedia/3779.txt deleted file mode 100644 index 56119b024290d547dd1e5d1da6aaf88d7e2c379d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3779.txt +++ /dev/null @@ -1,29 +0,0 @@ -Multiversion concurrency control (MCC or MVCC) is a concurrency control method commonly used by database management systems to provide concurrent access to the database and in programming languages to implement transactional memory. - -Without concurrency control, if someone is reading from a database at the same time as someone else is writing to it, it is possible that the reader will see a half-written or inconsistent piece of data. For instance, when making a wire transfer between two bank accounts, if a reader reads the balance at the bank when the money has been withdrawn from the original account and before it was deposited in the destination account, it would seem that money has disappeared from the bank. Isolation is the property that provides guarantees in the concurrent accesses to data. Isolation is implemented by means of a concurrency control protocol. The simplest way is to make all readers wait until the writer is done, which is known as a read-write lock. Locks are known to create contention, especially between long read transactions and update transactions. MVCC aims at solving the problem by keeping multiple copies of each data item. In this way, each user connected to the database sees a snapshot of the database at a particular instant in time. Any changes made by a writer will not be seen by other users of the database until the changes have been completed (or, in database terms: until the transaction has been committed.) - -When an MVCC database needs to update a piece of data, it will not overwrite the original data item with new data, but instead creates a newer version of the data item. Thus there are multiple versions stored.
The version that each transaction sees depends on the isolation level implemented. The most common isolation level implemented with MVCC is snapshot isolation. With snapshot isolation, a transaction observes a state of the data as of when the transaction started. - -MVCC provides point-in-time consistent views. Read transactions under MVCC typically use a timestamp or transaction ID to determine what state of the DB to read, and read these versions of the data. Read and write transactions are thus isolated from each other without any need for locking. However, despite locks being unnecessary, they are used by some MVCC databases such as Oracle. Writes create a newer version, while concurrent reads access an older version. - -MVCC introduces the challenge of how to remove versions that become obsolete and will never be read. In some cases, a process to periodically sweep through and delete the obsolete versions is implemented. This is often a stop-the-world process that traverses a whole table and rewrites it with the last version of each data item. PostgreSQL can use this approach with its VACUUM process. Other databases split the storage blocks into two parts: the data part and an undo log. The data part always keeps the last committed version. The undo log enables the recreation of older versions of data. The main inherent limitation of this latter approach is that when there are update-intensive workloads, the undo log part runs out of space and then transactions are aborted as they cannot be given their snapshot. For a document-oriented database it also allows the system to optimize documents by writing entire documents onto contiguous sections of disk: when updated, the entire document can be re-written rather than bits and pieces cut out or maintained in a linked, non-contiguous database structure. - -MVCC uses timestamps (TS), and incrementing transaction IDs, to achieve transactional consistency. MVCC ensures a transaction (T) never has to wait to Read a database object (P) by maintaining several versions of the object. Each version of object P has both a Read Timestamp (RTS) and a Write Timestamp (WTS), which lets a particular transaction Ti read the most recent version of the object which precedes the transaction's Read Timestamp RTS(Ti). - -A Write, on the other hand, can be forced to wait or fail: a Write to an object cannot complete if there are other outstanding transactions with an earlier Read Timestamp (RTS) to the same object. Like standing in line at the store, you cannot complete your checkout transaction until those in front of you have completed theirs. - -To restate: every object (P) has a Timestamp (TS). However, if transaction Ti wants to Write to an object, and the transaction has a Timestamp (TS) that is earlier than the object's current Read Timestamp, TS(Ti) < RTS(P), then the transaction is aborted and restarted. (This is because a later transaction already depends on the old value.) Otherwise, Ti creates a new version of object P and sets the read/write timestamp TS of the new version to the timestamp of the transaction, TS ← TS(Ti). - -The drawback to this system is the cost of storing multiple versions of objects in the database. On the other hand, reads are never blocked, which can be important for workloads mostly involving reading values from the database. MVCC is particularly adept at implementing true snapshot isolation, something which other methods of concurrency control frequently do either incompletely or with high performance costs.
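These rules can be sketched in a few lines of code. The toy store below is a single-threaded illustration with invented names; it ignores aborts, write-write conflicts, and garbage collection, and only shows the core idea of append-only versions plus snapshot reads. The worked example that follows behaves the same way.

```
# A toy sketch of MVCC-style snapshot reads: writers append new
# versions, readers see the newest version at or before their snapshot.

class MVCCStore:
    def __init__(self):
        self.versions = {}  # key -> list of (write timestamp, value)
        self.ts = 0         # monotonically increasing transaction ID

    def write(self, key, value):
        self.ts += 1        # writers never overwrite; they append
        self.versions.setdefault(key, []).append((self.ts, value))

    def read(self, key, snapshot_ts):
        visible = [(t, v) for t, v in self.versions.get(key, []) if t <= snapshot_ts]
        return max(visible)[1] if visible else None  # newest visible version

store = MVCCStore()
store.write("Object 1", "Foo")             # committed at ts = 1
snapshot = store.ts                        # a reader takes its snapshot here
store.write("Object 1", "Hello")           # committed at ts = 2
print(store.read("Object 1", snapshot))    # "Foo": the reader's snapshot
print(store.read("Object 1", store.ts))    # "Hello": a later transaction
```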
At Time = 1, the state of a database could be as follows. - -T0 wrote Object 1="Foo" and Object 2="Bar". After that T1 wrote Object 1="Hello", leaving Object 2 at its original value. The new value of Object 1 will supersede the value at 0 for all transactions that start after T1 commits, at which point version 0 of Object 1 can be garbage collected. - -If a long running transaction T2 starts a read operation of Object 2 and Object 1 after T1 committed, and there is a concurrent update transaction T3 which deletes Object 2 and adds Object 3="Foo-Bar", the database state will look like this at time 2: - -There is a new version as of time 2 of Object 2, which is marked as deleted, and a new Object 3. Since T2 and T3 run concurrently, T2 sees the version of the database before time 2, i.e. before T3 committed its writes; as such, T2 reads Object 2="Bar" and Object 1="Hello". This is how multiversion concurrency control allows snapshot isolation reads without any locks. - -Multiversion concurrency control is described in some detail in the 1981 paper "Concurrency Control in Distributed Database Systems" by Phil Bernstein and Nathan Goodman, then employed by the Computer Corporation of America. Bernstein and Goodman's paper cites a 1978 dissertation by David P. Reed which quite clearly describes MVCC and claims it as an original work. - -The first shipping, commercial database software product featuring MVCC was VAX Rdb/ELN, released in 1984, and created at Digital Equipment Corporation by Jim Starkey. Starkey went on to create the second commercially successful MVCC database, InterBase. diff --git a/wiki/wikipedia/378.txt b/wiki/wikipedia/378.txt deleted file mode 100644 index daee7713fe041d6f265d3885d71e50deddaca77c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/378.txt +++ /dev/null @@ -1,19 +0,0 @@ -In computer science, greedy number partitioning is a class of greedy algorithms for multiway number partitioning. The input to the algorithm is a set S of numbers, and a parameter k. The required output is a partition of S into k subsets, such that the sums in the subsets are as nearly equal as possible. Greedy algorithms process the numbers sequentially, and insert the next number into a bin in which the sum of numbers is currently smallest. - -The simplest greedy partitioning algorithm is called list scheduling. It just processes the inputs in any order they arrive. It always returns a partition in which the largest sum is at most $2-\frac{1}{k}$ times the optimal (minimum) largest sum. This heuristic can be used as an online algorithm, when the order in which the items arrive cannot be controlled. - -An improved greedy algorithm is called LPT scheduling. It processes the inputs by descending order of value, from large to small. Since it needs to pre-order the inputs, it can be used only as an offline algorithm. It guarantees that the largest sum is at most $\frac{4 k-1}{3 k}$ times the optimal (minimum) largest sum, and the smallest sum is at least $\frac{3k-1}{4k-2}$ times the optimal (maximum) smallest sum. See LPT scheduling for more details. - -The complete greedy algorithm (CGA) is an exact algorithm, i.e., it always finds an optimal solution. It works in the following way. After sorting the numbers in descending order (as in LPT), it constructs a k-ary tree. Each level corresponds to a number, and each of the k branches corresponds to a different set in which the current number can be put. Traversing the tree in depth-first order requires only O(n) space, but might take O(k^n) time.
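As a rough sketch of the greedy insertion step shared by list scheduling and LPT (illustrative code; a heap keeps the running subset sums), sorting the input first gives LPT, while processing it in arrival order gives plain list scheduling:

```
# Greedy multiway number partitioning: each number goes into the
# subset whose current sum is smallest. Input values are illustrative.
import heapq

def greedy_partition(numbers, k, lpt=True):
    if lpt:
        numbers = sorted(numbers, reverse=True)  # largest first (LPT)
    heap = [(0, i) for i in range(k)]            # (subset sum, subset index)
    subsets = [[] for _ in range(k)]
    for x in numbers:
        total, i = heapq.heappop(heap)           # currently smallest subset
        subsets[i].append(x)
        heapq.heappush(heap, (total + x, i))
    return subsets

print(greedy_partition([4, 5, 6, 7, 8], 2))      # [[8, 5, 4], [7, 6]]
```

On this toy input LPT returns subset sums 17 and 13, while the optimum is 15 and 15, consistent with the $\frac{4 k-1}{3 k}$ bound quoted above.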
The runtime can be improved by using the greedy heuristic: in each level, develop first the branch in which the current number is put in the set with the smallest sum. This algorithm finds the greedy (LPT) solution first, but then proceeds to look for better solutions. - -Several additional heuristics can be used in the case k=2 to improve the runtime: - -* In a node in which the current sum-difference is at least the sum of all remaining numbers, the remaining numbers can just be put in the smallest-sum subset. - -* If we reach a leaf in which the sum-difference is 0 or 1, then the algorithm can terminate since this is the optimum. - -* If the subset sums in the current node are equal, then we can put the current number only in one subset, thus reducing the size of the subtree by half. - -* The last number can be assigned only to the subset with the smaller sum. - -In the fair item allocation problem, there are n items and k people, each of which assigns a possibly different value to each item. The goal is to partition the items among the people in as fair a way as possible. The natural generalization of the greedy number partitioning algorithm is the envy-graph algorithm. It guarantees that the allocation is envy-free up to at most one item (EF1). Moreover, if the instance is ordered (all agents rank the items in the same order), then the outcome is EFX, and guarantees to each agent at least $\frac{2n}{3n-1}$ of his maximin share. If the items are chores, then a similar algorithm guarantees $\frac{4n-1}{3n}$ MMS. diff --git a/wiki/wikipedia/3780.txt b/wiki/wikipedia/3780.txt deleted file mode 100644 index 32ec022531317d5349b905052591a8bc1282b750..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3780.txt +++ /dev/null @@ -1,41 +0,0 @@ -In graph theory, an outerplanar graph is a graph that has a planar drawing for which all vertices belong to the outer face of the drawing. - -Outerplanar graphs may be characterized (analogously to Wagner's theorem for planar graphs) by the two forbidden minors K4 and K2,3, or by their Colin de Verdière graph invariants. - -They have Hamiltonian cycles if and only if they are biconnected, in which case the outer face forms the unique Hamiltonian cycle. Every outerplanar graph is 3-colorable, and has degeneracy and treewidth at most 2. - -The outerplanar graphs are a subset of the planar graphs, the subgraphs of series–parallel graphs, and the circle graphs. The maximal outerplanar graphs, those to which no more edges can be added while preserving outerplanarity, are also chordal graphs and visibility graphs. - -Outerplanar graphs were first studied and named by Chartrand and Harary, in connection with the problem of determining the planarity of graphs formed by using a perfect matching to connect two copies of a base graph (for instance, many of the generalized Petersen graphs are formed in this way from two copies of a cycle graph). As they showed, when the base graph is biconnected, a graph constructed in this way is planar if and only if its base graph is outerplanar and the matching forms a dihedral permutation of its outer cycle. Chartrand and Harary also proved an analogue of Kuratowski's theorem for outerplanar graphs, that a graph is outerplanar if and only if it does not contain a subdivision of one of the two graphs K4 or K2,3. - -An outerplanar graph is an undirected graph that can be drawn in the plane without crossings in such a way that all of the vertices belong to the unbounded face of the drawing.
That is, no vertex is totally surrounded by edges. Alternatively, a graph G is outerplanar if the graph formed from G by adding a new vertex, with edges connecting it to all the other vertices, is a planar graph. - -A maximal outerplanar graph is an outerplanar graph that cannot have any additional edges added to it while preserving outerplanarity. Every maximal outerplanar graph with n vertices has exactly 2n - 3 edges, and every bounded face of a maximal outerplanar graph is a triangle. - -Outerplanar graphs have a forbidden graph characterization analogous to Kuratowski's theorem and Wagner's theorem for planar graphs: a graph is outerplanar if and only if it does not contain a subdivision of the complete graph K4 or the complete bipartite graph K2,3. Alternatively, a graph is outerplanar if and only if it does not contain K4 or K2,3 as a minor, a graph obtained from it by deleting and contracting edges. - -A triangle-free graph is outerplanar if and only if it does not contain a subdivision of K2,3. - -All loopless outerplanar graphs can be colored using only three colors; this fact features prominently in the simplified proof of Chvátal's art gallery theorem by Fisk. A 3-coloring may be found in linear time by a greedy coloring algorithm that removes any vertex of degree at most two, colors the remaining graph recursively, and then adds back the removed vertex with a color different from the colors of its two neighbors. - -According to Vizing's theorem, the chromatic index of any graph (the minimum number of colors needed to color its edges so that no two adjacent edges have the same color) is either the maximum degree of any vertex of the graph or one plus the maximum degree. However, in a connected outerplanar graph, the chromatic index is equal to the maximum degree except when the graph forms a cycle of odd length. An edge coloring with an optimal number of colors can be found in linear time based on a breadth-first traversal of the weak dual tree. - -Outerplanar graphs have degeneracy at most two: every subgraph of an outerplanar graph contains a vertex with degree at most two. - -Outerplanar graphs have treewidth at most two, which implies that many graph optimization problems that are NP-complete for arbitrary graphs may be solved in polynomial time by dynamic programming when the input is outerplanar. More generally, k-outerplanar graphs have treewidth O(k). - -Every outerplanar graph can be represented as an intersection graph of axis-aligned rectangles in the plane, so outerplanar graphs have boxicity at most two. - -Every outerplanar graph is a planar graph. Every outerplanar graph is also a subgraph of a series–parallel graph. However, not all planar series–parallel graphs are outerplanar. The complete bipartite graph K2,3 is planar and series–parallel but not outerplanar. On the other hand, the complete graph K4 is planar but neither series–parallel nor outerplanar. Every forest and every cactus graph are outerplanar. - -The weak planar dual graph of an embedded outerplanar graph (the graph that has a vertex for every bounded face of the embedding, and an edge for every pair of adjacent bounded faces) is a forest, and the weak planar dual of a Halin graph is an outerplanar graph. A planar graph is outerplanar if and only if its weak dual is a forest, and it is Halin if and only if its weak dual is biconnected and outerplanar. - -There is a notion of degree of outerplanarity. A 1-outerplanar embedding of a graph is the same as an outerplanar embedding. 
For k > 1 a planar embedding is said to be k-outerplanar if removing the vertices on the outer face results in a (k - 1)-outerplanar embedding. - -A graph is k-outerplanar if it has a k-outerplanar embedding. - -An outer-1-planar graph, analogously to 1-planar graphs, can be drawn in a disk, with the vertices on the boundary of the disk, and with at most one crossing per edge. - -Every maximal outerplanar graph is a chordal graph. Every maximal outerplanar graph is the visibility graph of a simple polygon. Maximal outerplanar graphs are also formed as the graphs of polygon triangulations. They are examples of 2-trees, of series–parallel graphs, and of chordal graphs. - -Every outerplanar graph is a circle graph, the intersection graph of a set of chords of a circle. diff --git a/wiki/wikipedia/3781.txt b/wiki/wikipedia/3781.txt deleted file mode 100644 index b8ff6fe89de59b823e1fa061dbd233b8673e8362..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3781.txt +++ /dev/null @@ -1,61 +0,0 @@ -MINimum Relevant Variables in Linear System (Min-RVLS) is a problem in mathematical optimization. Given a linear program, it is required to find a feasible solution in which the number of non-zero variables is as small as possible. - -The problem is known to be NP-hard and even hard to approximate. - -A Min-RVLS problem is defined by: - -* A binary relation R, which is one of {=, ≥, >, ≠}; - -* An m-by-n matrix A (where m is the number of constraints and n the number of variables); - -* An m-by-1 vector b. - -The linear system is given by: A x R b. It is assumed to be feasible (i.e., satisfied by at least one x). Depending on R, there are four different variants of this system: A x = b, A x ≥ b, A x > b, A x ≠ b. - -The goal is to find an n-by-1 vector x that satisfies the system A x R b and, subject to that, contains as few nonzero elements as possible. - -The problem Min-RVLS[=] was presented by Garey and Johnson, who called it "minimum weight solution to linear equations". They proved it was NP-hard, but did not consider approximations. - -The Min-RVLS problem is important in machine learning and linear discriminant analysis. Given a set of positive and negative examples, it is required to minimize the number of features that are required to correctly classify them. The problem is known as the minimum feature set problem. An algorithm that approximates Min-RVLS within a factor of $O(\log(m))$ could substantially reduce the number of training samples required to attain a given accuracy level. - -The shortest codeword problem in coding theory is the same problem as Min-RVLS[=] when the coefficients are in GF(2). - -In MINimum Unsatisfied Linear Relations (Min-ULR), we are given a binary relation R and a linear system A x R b, which is now assumed to be infeasible. The goal is to find a vector x that violates as few relations as possible, while satisfying all the others. - -Min-ULR[≠] is trivially solvable, since any system with real variables and a finite number of inequality constraints is feasible. As for the other three variants: - -* Min-ULR[=,>,≥] are NP-hard even with homogeneous systems and bipolar coefficients (coefficients in {1,-1}). - -* The NP-complete problem Minimum feedback arc set reduces to Min-ULR[≥], with exactly one 1 and one -1 in each constraint, and all right-hand sides equal to 1. - -* Min-ULR[=,>,≥] are polynomial if the number of variables n is constant: they can be solved using an algorithm of Greer in time $O(n\cdot m^n / 2^{n-1})$.
- -* Min-ULR[=,>,≥] are linear if the number of constraints m is constant, since all subsystems can be checked in time O(n). - -* Min-ULR[≥] is polynomial in some special cases. - -* Min-ULR[=,>,≥] can be approximated within n + 1 in polynomial time. - -* Min-ULR[>,≥] are minimum-dominating-set-hard, even with homogeneous systems and ternary coefficients (in {−1,0,1}). - -* Min-ULR[=] cannot be approximated within a factor of $2^{\log^{1-\varepsilon}n}$, for any $\varepsilon>0$, unless NP is contained in DTIME($n^{\operatorname{polylog}(n)}$). - -In the complementary problem MAXimum Feasible Linear Subsystem (Max-FLS), the goal is to find a maximum subset of the constraints that can be satisfied simultaneously. - -* Max-FLS[≠] can be solved in polynomial time. - -* Max-FLS[=] is NP-hard even with homogeneous systems and bipolar coefficients. With integer coefficients, it is hard to approximate within $m^{\varepsilon}$. With coefficients over GF[q], it is q-approximable. - -* Max-FLS[>] and Max-FLS[≥] are NP-hard even with homogeneous systems and bipolar coefficients. They are 2-approximable, but they cannot be approximated within any smaller constant factor. - -All four variants of Min-RVLS are hard to approximate. In particular all four variants cannot be approximated within a factor of $2^{\log^{1-\varepsilon}n}$, for any $\varepsilon>0$, unless NP is contained in DTIME($n^{\operatorname{polylog}(n)}$). The hardness is proved by reductions: - -* There is a reduction from Min-ULR[=] to Min-RVLS[=]. It also applies to Min-RVLS[≥] and Min-RVLS[>], since each equation can be replaced by two complementary inequalities. - -* There is a reduction from minimum-dominating-set to Min-RVLS[≠]. - -On the other hand, there is a reduction from Min-RVLS[=] to Min-ULR[=]. It also applies to Min-ULR[≥] and Min-ULR[>], since each equation can be replaced by two complementary inequalities. - -Therefore, when R is in {=,>,≥}, Min-ULR and Min-RVLS are equivalent in terms of approximation hardness. diff --git a/wiki/wikipedia/3782.txt b/wiki/wikipedia/3782.txt deleted file mode 100644 index 9306b93607442cefbd30c3648e2c22490cb3a3ac..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3782.txt +++ /dev/null @@ -1,22 +0,0 @@ -The initial attractiveness is a possible extension of the Barabási–Albert model (preferential attachment model). The Barabási–Albert model generates scale-free networks where the degree distribution can be described by a pure power law. However, the degree distribution of most real life networks cannot be described by a power law alone. The most common discrepancies regarding the degree distribution found in real networks are the high degree cut-off (or structural cut-off) and the low degree cut-off. The inclusion of initial attractiveness in the Barabási–Albert model addresses the low-degree cut-off phenomenon. - -The Barabási–Albert model defines the following linear preferential attachment rule: $\Pi\left(k_i\right)=\frac{k_i}{\sum_j k_j}$. This implies that the probability that a new node will attach to a node that has degree zero is zero – $\Pi(0)=0$. The preferential attachment function of the Barabási–Albert model can be modified as follows: $\Pi(k)=A+k$, as proposed by Dorogovtsev, Mendes and Samukhin. The constant $A$ denotes the initial attractiveness of the node.
From this, the preferential attachment rule with initial attractiveness becomes:
$$
\Pi(k_i)=\frac{A_i+k_i}{\sum\limits_j (A_j+ k_j)}
$$ - -Based on this attachment rule it can be inferred that $\Pi(0)\sim A$. This means that even isolated nodes (those with $k=0$) have a chance to obtain connections with the newly arriving nodes. - -The presence of initial attractiveness results in two important consequences: one is the small degree cut-off (or small degree saturation); the other is an increased degree exponent of the degree distribution. - -The small degree saturation is an empirical regularity – the number of nodes with low degree is smaller than would be expected if a power law described the degree distribution. The reason this appears is the following: initial attractiveness increases the probability that a node obtains a connection with an arriving node. This increased attachment probability becomes marginal as the node obtains more connections – it does not affect the right tail of the distribution. The degree distribution of a model with initial attractiveness can be described by the following: $p_k=C\cdot (k+A)^{-\gamma}$. - -There are numerous real life networks in which the degree distribution shows some kind of small degree cut-off. The following list offers some examples: - -* Scientific collaboration network - -* Co-stardom network - -* Citation network - -Importantly, in the case of the Barabási–Albert model the exponent of the degree distribution, here denoted by $\gamma$, has a value of 3. In the case of the Barabási–Albert model with initial attractiveness the degree exponent is simply $\gamma=2+\frac{A}{m}$. Here $m$ denotes the number of links a new node forms when entering the network. Since $\gamma$ is greater than 3, the network is in the random network regime; as $m$ grows, $\gamma$ approaches 3 and the model converges to the scale-free regime. Conversely, the higher the initial attractiveness $A$, the deeper the network is in the random network regime. This means that the number of nodes with relatively high degrees will be lower than it would be in the Barabási–Albert model. The higher degree exponent generally implies that the network is more homogeneous – most nodes have close to the average degree. diff --git a/wiki/wikipedia/3783.txt b/wiki/wikipedia/3783.txt deleted file mode 100644 index 11723ed3dffa20abbb405e3477255ab65bb3dba5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3783.txt +++ /dev/null @@ -1,5 +0,0 @@ -William Alvin Howard (born 1926) is a proof theorist best known for his work demonstrating formal similarity between intuitionistic logic and the simply typed lambda calculus that has come to be known as the Curry–Howard correspondence. He has also been active in the theory of proof-theoretic ordinals. He earned his Ph.D. at the University of Chicago in 1956 for a dissertation entitled "k-fold recursion and well-ordering". He was a student of Saunders Mac Lane. - -The Howard ordinal (also known as the Bachmann–Howard ordinal) was named after him. - -He was elected to the 2018 class of fellows of the American Mathematical Society. diff --git a/wiki/wikipedia/3784.txt b/wiki/wikipedia/3784.txt deleted file mode 100644 index fc6714cd523320fd644f3a2de26985d453d34b37..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3784.txt +++ /dev/null @@ -1,5 +0,0 @@ -In computer science, a lock convoy is a performance problem that can occur when using locks for concurrency control in a multithreaded application.
- -A lock convoy occurs when multiple threads of equal priority contend repeatedly for the same lock. Unlike deadlock and livelock situations, the threads in a lock convoy do progress; however, each time a thread attempts to acquire the lock and fails, it relinquishes the remainder of its scheduling quantum and forces a context switch. The overhead of repeated context switches and underutilization of scheduling quanta degrade overall performance. - -Lock convoys often occur when concurrency control primitives such as locks serialize access to a commonly used resource, such as a memory heap or a thread pool. They can sometimes be addressed by using non-locking alternatives such as lock-free algorithms or by altering the relative priorities of the contending threads. diff --git a/wiki/wikipedia/3785.txt b/wiki/wikipedia/3785.txt deleted file mode 100644 index db16c8fcd9e329bb002f84be7dbfbc7abd92a20d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3785.txt +++ /dev/null @@ -1,19 +0,0 @@ -In mathematics, statistics, and computational modelling, a grey box model combines a partial theoretical structure with data to complete the model. The theoretical structure may vary from information on the smoothness of results, to models that need only parameter values from data or existing literature. Thus, almost all models are grey box models, as opposed to black box models, where no model form is assumed, or white box models, which are purely theoretical. Some models assume a special form such as a linear regression or neural network. These have special analysis methods. In particular, linear regression techniques are much more efficient than most non-linear techniques. The model can be deterministic or stochastic (i.e. containing random components) depending on its planned use. - -The general case is a non-linear model with a partial theoretical structure and some unknown parts derived from data. Models with differing theoretical structures need to be evaluated individually, possibly using simulated annealing or genetic algorithms. - -Within a particular model structure, parameters or variable parameter relations may need to be found. For a particular structure it is arbitrarily assumed that the data consists of sets of feed vectors f, product vectors p, and operating condition vectors c. - -m(f,p,q) - -where the vector function m gives the errors between the data p and the model predictions. The vector q gives some variable parameters that are the model's unknown parts. - -The parameters q vary with the operating conditions c in a manner to be determined. A common assumption is a linear relation q = Ac, where A is a matrix of unknown coefficients and c, as in linear regression, may include a constant term and transformed values of the original operating conditions, in order to capture non-linear relations between the original operating conditions and q. It is then a matter of selecting which terms in A are non-zero and assigning their values. The model completion becomes an optimization problem to determine the non-zero values in A that minimize the error terms m(f,p,Ac) over the data. - -Once a selection of non-zero values is made, the remaining coefficients in A can be determined by minimizing m(f,p,Ac) over the data with respect to the nonzero values in A, typically by non-linear least squares. Selection of the nonzero terms can be done by optimization methods such as simulated annealing and evolutionary algorithms. The non-linear least squares fit can also provide accuracy estimates. - -It is sometimes possible to calculate values of q for each data set, directly or by non-linear least squares. Then the more efficient linear regression can be used to predict q using c, thus selecting the non-zero values in A and estimating their values.
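A minimal sketch of this two-stage estimation in Python, under an invented toy model (the model form, the data, and all names here are hypothetical illustrations, not anything prescribed by the text):

```python
# Stage 1: fit q for each operating condition by non-linear least squares.
# Stage 2: linear regression q ~ c exposes which coefficients of A matter.
import numpy as np
from numpy.linalg import lstsq
from scipy.optimize import least_squares

def residuals(q, f, p):               # m(f, p, q) for a toy model p = q0 * f**q1
    return p - q[0] * f ** q[1]

rng = np.random.default_rng(0)
datasets = []
for temp in (300.0, 350.0, 400.0):    # operating condition c (a temperature here)
    f = rng.uniform(1.0, 2.0, size=20)
    true_q = np.array([0.5 + 0.001 * temp, 1.3])
    p = true_q[0] * f ** true_q[1] + rng.normal(0.0, 0.01, size=20)
    datasets.append((f, p, temp))

qs, cs = [], []
for f, p, c in datasets:
    fit = least_squares(residuals, x0=[1.0, 1.0], args=(f, p))
    qs.append(fit.x)
    cs.append([1.0, c])               # constant term plus operating condition

A, *_ = lstsq(np.array(cs), np.array(qs), rcond=None)
print(A.T)                            # approximately [[0.5, 0.001], [1.3, 0.0]]
```

The recovered rows of A show q's constant terms and their dependence on the operating condition, which is exactly the information used in the refinement step described next.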
Once the non-zero values are located, non-linear least squares can be used on the original model m(f,p,Ac) to refine these values. Validating the completed model is not straightforward: the chi squared test requires known standard deviations, which are seldom available, and failed tests give no indication of how to improve the model. There is a range of methods to compare both nested and non-nested models. These include comparison of model predictions with repeated data. - -An attempt to predict the residuals m(f,p,Ac) with the operating conditions c using linear regression will show if the residuals can be predicted. Residuals that cannot be predicted offer little prospect of improving the model using the current operating conditions. Terms that do predict the residuals are prospective terms to incorporate into the model to improve its performance. - -The model inversion technique above can be used as a method of determining whether a model can be improved. In this case selection of nonzero terms is not so important and linear prediction can be done using the significant eigenvectors of the regression matrix. The values in A determined in this manner need to be substituted into the nonlinear model to assess improvements in the model errors. The absence of a significant improvement indicates the available data is not able to improve the current model form using the defined parameters. Extra parameters can be inserted into the model to make this test more comprehensive. diff --git a/wiki/wikipedia/3786.txt b/wiki/wikipedia/3786.txt deleted file mode 100644 index 15aee66bc74600cffe2cc5534fae665583deb5cd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3786.txt +++ /dev/null @@ -1,18 +0,0 @@ -In the theory of functions of several complex variables, Hartogs's extension theorem is a statement about the singularities of holomorphic functions of several variables. Informally, it states that the support of the singularities of such functions cannot be compact; therefore, the singular set of a function of several complex variables must (loosely speaking) 'go off to infinity' in some direction. More precisely, it shows that an isolated singularity is always a removable singularity for any analytic function of n > 1 complex variables. A first version of this theorem was proved by Friedrich Hartogs, and as such it is known also as Hartogs's lemma and Hartogs's principle: in earlier Soviet literature, it is also called the Osgood–Brown theorem, acknowledging later work by Arthur Barton Brown and William Fogg Osgood. This property of holomorphic functions of several variables is also called Hartogs's phenomenon: however, the locution "Hartogs's phenomenon" is also used to identify the property of solutions of systems of partial differential or convolution equations satisfying Hartogs type theorems. - -The original proof was given by Friedrich Hartogs in 1906, using Cauchy's integral formula for functions of several complex variables. Today, usual proofs rely on either the Bochner–Martinelli–Koppelman formula or the solution of the inhomogeneous Cauchy–Riemann equations with compact support. The latter approach is due to Leon Ehrenpreis, who initiated it. Yet another very simple proof of this result was given by Gaetano Fichera, using his solution of the Dirichlet problem for holomorphic functions of several variables and the related concept of CR-function: later he extended the theorem to a certain class of partial differential operators, and his ideas were later further explored by Giuliano Bratti.
Also the Japanese school of the theory of partial differential operators worked much on this topic, with notable contributions by Akira Kaneko. Their approach is to use Ehrenpreis's fundamental principle. - -For example, in two variables, consider the interior domain
$$
H_\varepsilon = \{z=(z_1,z_2)\in\Delta^2:|z_1|<\varepsilon\ \ \text{or}\ \ 1-\varepsilon< |z_2|\}
$$
in the two-dimensional polydisk $\Delta^2=\{z\in\mathbb{C}^2;|z_1|<1,|z_2|<1\}$ where $0 <\varepsilon < 1$. - -Theorem (Hartogs): every holomorphic function $f$ on $H_\varepsilon$ extends analytically to $\Delta^2$. Namely, there is a holomorphic function $F$ on $\Delta^2$ such that $F=f$ on $H_\varepsilon$. - -Such a phenomenon is called Hartogs's phenomenon, which led to the notions of the Hartogs extension theorem and the domain of holomorphy. - -Let f be a holomorphic function on a set G \ K, where G is an open subset of Cn (n ≥ 2) and K is a compact subset of G. If the complement G \ K is connected, then f can be extended to a unique holomorphic function on G. - -The theorem does not hold when n = 1. To see this, it suffices to consider the function f(z) = z−1, which is clearly holomorphic in C \ {0}, but cannot be continued as a holomorphic function on the whole of C. Therefore, Hartogs's phenomenon is an elementary phenomenon that highlights the difference between the theory of functions of one and several complex variables. diff --git a/wiki/wikipedia/3787.txt b/wiki/wikipedia/3787.txt deleted file mode 100644 index 3d2627f86802c7ca1b5ffcb2912ad9af36bbf985..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3787.txt +++ /dev/null @@ -1,5 +0,0 @@ -[Figure: a suffix tree of the string ATCGATCGA$.] In computer science, the longest repeated substring problem is the problem of finding the longest substring of a string that occurs at least twice. - -This problem can be solved in linear time and space $\Theta(n)$ by building a suffix tree for the string (with a special end-of-string symbol like '$' appended), and finding the deepest internal node in the tree with more than one child. Depth is measured by the number of characters traversed from the root. The string spelled by the edges from the root to such a node is a longest repeated substring. The problem of finding the longest substring with at least $k$ occurrences can be solved by first preprocessing the tree to count the number of leaf descendants for each internal node, and then finding the deepest node with at least $k$ leaf descendants. To avoid overlapping repeats, one can additionally check that the list of suffix lengths has no consecutive elements that differ by less than the candidate prefix length. - -In the figure with the string "ATCGATCGA$", the longest substring that repeats at least twice is "ATCGA". diff --git a/wiki/wikipedia/3788.txt b/wiki/wikipedia/3788.txt deleted file mode 100644 index 18694011df8fc33e4c77cef9ae6e33f89c4e3393..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3788.txt +++ /dev/null @@ -1,69 +0,0 @@ -The Baire category theorem (BCT) is an important result in general topology and functional analysis. The theorem has two forms, each of which gives sufficient conditions for a topological space to be a Baire space (a topological space such that the intersection of countably many dense open sets is still dense). - -Versions of the Baire category theorem were first proved independently in 1897 and 1899 by Osgood and Baire, respectively. This theorem says that every complete metric space is a Baire space.
- -A Baire space is a topological space with the property that for each countable collection of open dense sets $U_1, U_2, \ldots,$ their intersection $\bigcap_{n \in \N} U_n$ is dense. - -* (BCT1) Every complete pseudometric space is a Baire space. Thus every completely metrizable topological space is a Baire space. More generally, every topological space that is homeomorphic to an open subset of a complete pseudometric space is a Baire space. - -* (BCT2) Every locally compact Hausdorff space is a Baire space. The proof is similar to the preceding statement; the finite intersection property takes the role played by completeness. - -Neither of these statements directly implies the other, since there are complete metric spaces that are not locally compact (the irrational numbers with the metric defined below; also, any Banach space of infinite dimension), and there are locally compact Hausdorff spaces that are not metrizable (for instance, any uncountable product of non-trivial compact Hausdorff spaces is such; also, several function spaces used in functional analysis; the uncountable Fort space). - -See Steen and Seebach in the references below. - -* (BCT3) A non-empty complete metric space, or any of its subsets with nonempty interior, is not the countable union of nowhere-dense sets. - -This formulation is equivalent to BCT1 and is sometimes more useful in applications. - -Also: if a non-empty complete metric space is the countable union of closed sets, then one of these closed sets has non-empty interior. - -The proof of BCT1 for arbitrary complete metric spaces requires some form of the axiom of choice, and in fact BCT1 is equivalent over ZF to the axiom of dependent choice, a weak form of the axiom of choice. - -A restricted form of the Baire category theorem, in which the complete metric space is also assumed to be separable, is provable in ZF with no additional choice principles. - -This restricted form applies in particular to the real line, the Baire space $\omega^{\omega},$ the Cantor space $2^{\omega},$ and a separable Hilbert space such as the $L^p$ space $L^2\left(\R^n\right).$ - -BCT1 is used in functional analysis to prove the open mapping theorem, the closed graph theorem and the uniform boundedness principle. - -BCT1 also shows that every complete metric space with no isolated points is uncountable. (If $X$ is a countable complete metric space with no isolated points, then each singleton $\{x\}$ in $X$ is nowhere dense, and so $X$ is of first category in itself.) In particular, this proves that the set of all real numbers is uncountable. - -BCT1 shows that each of the following is a Baire space: - -* The space $\R$ of real numbers - -* The irrational numbers, with the metric defined by $d(x, y) = \tfrac{1}{n+1},$ where $n$ is the first index for which the continued fraction expansions of $x$ and $y$ differ (this is a complete metric space) - -* The Cantor set - -By BCT2, every finite-dimensional Hausdorff manifold is a Baire space, since it is locally compact and Hausdorff. This is so even for non-paracompact (hence nonmetrizable) manifolds such as the long line. - -BCT is used to prove Hartogs's theorem, a fundamental result in the theory of several complex variables. - -BCT3 is used to prove that a Banach space cannot have countably infinite dimension. - -The following is a standard proof that a complete pseudometric space $X$ is a Baire space. - -Let $U_1, U_2, \ldots$ be a countable collection of open dense subsets.
It remains to show that the intersection $U_1 \cap U_2 \cap \ldots$ is dense. - -A subset is dense if and only if every nonempty open subset intersects it. Thus to show that the intersection is dense, it suffices to show that any nonempty open subset $W$ of $X$ has some point $x$ in common with all of the $U_n$. - -Because $U_1$ is dense, $W$ intersects $U_1;$ consequently, there exists a point $x_1$ and a number $0 < r_1 < 1$ such that: - -\overline{B}\left(x_1, r_1\right) \subseteq W \cap U_1 - -where $B(x, r)$ and $\overline{B}(x, r)$ denote an open and closed ball, respectively, centered at $x$ with radius $r.$ - -Since each $U_n$ is dense, this construction can be continued recursively to find a pair of sequences $x_n$ and $0 < r_n < \tfrac{1}{n}$ such that: - -\overline{B}\left(x_n, r_n\right) \subseteq B\left(x_{n-1}, r_{n-1}\right) \cap U_n. - -(This step relies on the axiom of choice and the fact that a finite intersection of open sets is open and hence an open ball can be found inside it centered at $x_n$.) - -The sequence $\left(x_n\right)$ is Cauchy because $x_n \in B\left(x_m, r_m\right)$ whenever $n > m,$ and hence $\left(x_n\right)$ converges to some limit $x$ by completeness. - -If $n$ is a positive integer then $x \in \overline{B}\left(x_n, r_n\right)$ (because this set is closed). - -Thus $x \in W$ and $x \in U_n$ for all $n.$ $\blacksquare$ - -M. Baker gave an alternative proof of the theorem using Choquet's game. diff --git a/wiki/wikipedia/3789.txt b/wiki/wikipedia/3789.txt deleted file mode 100644 index 9d93b88073e0df43e272cd0901a8eb3107e6cd65..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3789.txt +++ /dev/null @@ -1,482 +0,0 @@ -The following are important identities involving derivatives and integrals in vector calculus. - -For a function $f(x, y, z)$ in three-dimensional Cartesian coordinate variables, the gradient is the vector field: - -
\operatorname{grad}(f) = \nabla f =
\begin{pmatrix} \frac{\partial }{\partial x},\ \frac{\partial }{\partial y},\ \frac{\partial }{\partial z} \end{pmatrix} f =
\frac{\partial f}{\partial x} \mathbf{i} + \frac{\partial f}{\partial y} \mathbf{j} + \frac{\partial f}{\partial z} \mathbf{k}
- -where i, j, k are the standard unit vectors for the x, y, z-axes. More generally, for a function of n variables $\psi(x_1, \ldots, x_n)$, also called a scalar field, the gradient is the vector field: - -
\nabla\psi = \begin{pmatrix}\frac{\partial}{\partial x_1}, \ldots,\ \frac{\partial}{\partial x_n}\end{pmatrix}\psi =
\frac{\partial\psi}{\partial x_1}\mathbf{e}_1 + \ldots + \frac{\partial\psi}{\partial x_n}\mathbf{e}_n
. - -where $\mathbf{e}_{i}$ are orthogonal unit vectors in arbitrary directions. - -For a vector field $\mathbf{A} = \left(A_1, \ldots, A_n\right)$ written as a 1 × n row vector, also called a tensor field of order 1, the gradient or covariant derivative is the n × n Jacobian matrix: -$$ -\mathbf{J}_{\mathbf{A}}= (\nabla \!\mathbf{A})^\mathrm{T} = \left(\frac{\partial A_i}{\partial x_j}\right)_{\!ij}. -$$ - -For a tensor field $\mathbf{A}$ of any order k, the gradient $\operatorname{grad}(\mathbf{A}) = (\nabla\!\mathbf{A})^\mathrm{T}$ is a tensor field of order k + 1.
- -In Cartesian coordinates, the divergence of a continuously differentiable vector field $\mathbf{F} = F_x\mathbf{i} + F_y\mathbf{j} + F_z\mathbf{k}$ is the scalar-valued function: - -
\operatorname{div}\mathbf{F} = \nabla\cdot\mathbf{F} =
\begin{pmatrix}\frac{\partial}{\partial x},\ \frac{\partial}{\partial y},\ \frac{\partial}{\partial z}\end{pmatrix} \cdot \begin{pmatrix}F_{x},\ F_{y},\ F_{z}\end{pmatrix} =
\frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z}.
- -The divergence of a tensor field $\mathbf{A}$ of non-zero order k is written as $\operatorname{div}(\mathbf{A}) = \nabla \cdot \mathbf{A}$, a contraction to a tensor field of order k − 1. Specifically, the divergence of a vector is a scalar. The divergence of a higher order tensor field may be found by decomposing the tensor field into a sum of outer products and using the identity, -$$ -\nabla \cdot \left(\mathbf{B} \otimes \hat{\mathbf{A}}\right) = \hat{\mathbf{A}} (\nabla \cdot \mathbf{B}) + (\mathbf{B} \cdot \nabla) \hat{\mathbf{A}} -$$ - -where $\mathbf{B} \cdot \nabla$ is the directional derivative in the direction of $\mathbf{B}$ multiplied by its magnitude. Specifically, for the outer product of two vectors, -$$ -\nabla \cdot \left(\mathbf{b} \mathbf{a}^\mathsf{T}\right) = \mathbf{a}\left(\nabla \cdot \mathbf{b}\right) + \left(\mathbf{b} \cdot \nabla\right) \mathbf{a}. -$$ - -In Cartesian coordinates, for $\mathbf{F} = F_x\mathbf{i} + F_y\mathbf{j} + F_z\mathbf{k}$ the curl is the vector field: - -
\operatorname{curl}\mathbf{F} = \nabla \times \mathbf{F} =
\begin{pmatrix}\frac{\partial}{\partial x},\ \frac{\partial}{\partial y},\ \frac{\partial}{\partial z}\end{pmatrix} \times \begin{pmatrix}F_{x},\ F_{y},\ F_{z}\end{pmatrix}
=
\left|\begin{matrix}
\mathbf{i} & \mathbf{j} & \mathbf{k} \\
\frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\
F_x & F_y & F_z
\end{matrix}\right| =
\left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\right) \mathbf{i} +
\left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x}\right) \mathbf{j} +
\left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right) \mathbf{k}
- -where i, j, and k are the unit vectors for the x-, y-, and z-axes, respectively. In Einstein notation, the vector field $\mathbf{F} = \begin{pmatrix}F_1 & F_2 & F_3\end{pmatrix}$ has curl given by: -$$ -\nabla \times \mathbf{F} = \varepsilon^{ijk}\mathbf{e}_i \frac{\partial F_k}{\partial x^j} -$$ - -where $\varepsilon$ = ±1 or 0 is the Levi-Civita parity symbol. - -In Cartesian coordinates, the Laplacian of a function $f(x,y,z)$ is -$$ -\Delta f = \nabla^2\! f = (\nabla \cdot \nabla) f = \frac{\partial^2\! f}{\partial x^2} + \frac{\partial^2\! f}{\partial y^2} + \frac{\partial^2\! f}{\partial z^2}. -$$ - -For a tensor field, $\mathbf{A}$, the Laplacian is generally written as: -$$ -\Delta\mathbf{A} = \nabla^2\! \mathbf{A} = (\nabla \cdot \nabla) \mathbf{A} -$$ - -and is a tensor field of the same order. - -When the Laplacian is equal to 0, the function is called a harmonic function. That is, -$$ -\Delta f = 0 -$$
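The operators defined so far are easy to experiment with symbolically. Here is a small sketch using sympy's vector module; the particular fields are arbitrary examples chosen for this illustration:

```python
from sympy import exp, sin
from sympy.vector import CoordSys3D, curl, divergence, gradient

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

f = x**2 * sin(y) + exp(z)                 # a scalar field
F = x*y*N.i + y*z*N.j + z*x*N.k            # a vector field

print(gradient(f))               # equals 2*x*sin(y) i + x**2*cos(y) j + exp(z) k
print(divergence(F))             # equals x + y + z
print(curl(F))                   # equals -y i - z j - x k
print(divergence(gradient(f)))   # the Laplacian of f
```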
    - -In Feynman subscript notation, -$$ -\nabla_\mathbf{B}\! \left( \mathbf{A {\cdot} B} \right) = \mathbf{A} {\times}\! \left( \nabla {\times} \mathbf{B} \right) + \left( \mathbf{A} {\cdot} \nabla \right) \mathbf{B} -$$ - -where the notation ∇B means the subscripted gradient operates on only the factor B. - -Less general but similar is the Hestenes overdot notation in geometric algebra. The above identity is then expressed as: -$$ -\dot{\nabla} \left( \mathbf{A} {\cdot} \dot{\mathbf{B}} \right) = \mathbf{A} {\times}\! \left( \nabla {\times} \mathbf{B} \right) + \left( \mathbf{A} {\cdot} \nabla \right) \mathbf{B} -$$ - -where overdots define the scope of the vector derivative. The dotted vector, in this case B, is differentiated, while the (undotted) A is held constant. - -For the remainder of this article, Feynman subscript notation will be used where appropriate. - -For scalar fields $\psi$, $\phi$ and vector fields $\mathbf{A}$, $\mathbf{B}$, we have the following derivative identities. - -\begin{align} - -\nabla ( \psi + \phi ) &= \nabla \psi + \nabla \phi \\ - -\nabla ( \mathbf{A} + \mathbf{B} ) &= \nabla \mathbf{A} + \nabla \mathbf{B} \\ - -\nabla \cdot ( \mathbf{A} + \mathbf{B} ) &= \nabla {\cdot} \mathbf{A} + \nabla \cdot \mathbf{B} \\ - -\nabla \times ( \mathbf{A} + \mathbf{B} ) &= \nabla \times \mathbf{A} + \nabla \times \mathbf{B} - -\end{align} - -We have the following generalizations of the product rule in single variable calculus. - -\begin{align} - -\nabla ( \psi \phi ) &= \phi \nabla \psi + \psi \nabla \phi \\ - -\nabla ( \psi \mathbf{A} ) &= (\nabla \psi)^{\mathbf{T}} \mathbf{A} + \psi \nabla \mathbf{A} - -\ =\ \nabla \psi \otimes \mathbf{A} + \psi \nabla \mathbf{A} \\ - -\nabla \cdot ( \psi \mathbf{A} ) &= \psi \nabla {\cdot} \mathbf{A} + ( \nabla \psi ) {\cdot} \mathbf{A} \\ - -\nabla {\times} ( \psi \mathbf{A} ) &= \psi \nabla {\times} \mathbf{A} + ( \nabla \psi ) {\times} \mathbf{A} \\ - -\nabla^{2}(f g) &= f\nabla^{2\!}g + 2\nabla\! f\cdot\!\nabla g+g \nabla^{2\!}f - -\end{align} - -In the second formula, the transposed gradient $(\nabla \psi)^{\mathbf{T}}$ is an n × 1 column vector, $\mathbf{A}$ is a 1 × n row vector, and their product is an n × n matrix (or more precisely, a dyad); This may also be considered as the tensor product $\otimes$ of two vectors, or of a covector and a vector. - -\begin{align} - -\nabla\left(\frac{\psi}{\phi}\right) - -&= \frac{\phi\nabla \psi - \psi\nabla\phi}{\phi^2} \\[1em] - -\nabla\left(\frac{\mathbf{A}}{\phi}\right) - -&= \frac{\phi\nabla \mathbf{A} - \nabla\phi \otimes \mathbf{A}}{\phi^2} \\[1em] - -\nabla \cdot \left(\frac{\mathbf{A}}{\phi}\right) - -&= \frac{\phi \nabla{\cdot} \mathbf{A} - \nabla\!\phi \cdot \mathbf{A}}{\phi^2} \\[1em] - -\nabla \times \left(\frac{\mathbf{A}}{\phi}\right) - -&= \frac{\phi \nabla {\times} \mathbf{A} - \nabla\!\phi {\times} \mathbf{A}}{\phi^2} - -\end{align} - -Let $f(x)$ be a one-variable function from scalars to scalars, $\mathbf{r}(t) = (r_1(t),\ldots,r_n(t))$ a parametrized curve, and $F:\mathbb{R}^n\to\mathbb{R}$ a function from vectors to scalars. We have the following special cases of the multi-variable chain rule. 
- -\begin{align}
\nabla(f \circ F) &= \left(f' \circ F\right) \nabla F \\
(F \circ \mathbf{r})' &= (\nabla F \circ \mathbf{r}) \cdot \mathbf{r}' \\
\nabla(F \circ \mathbf{A}) &= (\nabla F \circ \mathbf{A}) \nabla \mathbf{A}
\end{align} - -For a coordinate parametrization $\Phi:\mathbb{R}^n \to \mathbb{R}^n$ we have: -$$ -\nabla \cdot (\mathbf{A} \circ \Phi) = \mathrm{tr} \left((\nabla\mathbf{A} \circ \Phi) \mathbf{J}_\Phi\right) -$$ - -Here we take the trace of the product of two n × n matrices: the gradient of A and the Jacobian of $\Phi$. - -\begin{align}
\nabla(\mathbf{A} \cdot \mathbf{B}) &\ =\ (\mathbf{A} \cdot \nabla)\mathbf{B} + (\mathbf{B} \cdot \nabla)\mathbf{A} + \mathbf{A} {\times} (\nabla {\times} \mathbf{B}) + \mathbf{B} {\times} (\nabla {\times} \mathbf{A}) \\
&\ =\ \mathbf{A}\cdot\mathbf{J}_\mathbf{B} + \mathbf{B}\cdot\mathbf{J}_\mathbf{A}
\ =\
(\nabla\mathbf{B})\cdot \mathbf{A} + (\nabla\mathbf{A}) \cdot\mathbf{B}
\end{align} - -where $\mathbf{J}_{\mathbf{A}} = (\nabla \!\mathbf{A})^\mathrm{T} = (\partial A_i/\partial x_j)_{ij}$ denotes the Jacobian matrix of the vector field $\mathbf{A} = (A_1,\ldots,A_n)$. - -Alternatively, using Feynman subscript notation, -$$ - \nabla(\mathbf{A} \cdot \mathbf{B}) = \nabla_\mathbf{A}(\mathbf{A} \cdot \mathbf{B}) + \nabla_\mathbf{B} (\mathbf{A} \cdot \mathbf{B}) \ . -$$ - -As a special case, when A = B, - -
\tfrac{1}{2} \nabla \left( \mathbf{A} \cdot \mathbf{A} \right)
\ =\ \mathbf{A} \cdot \mathbf{J}_\mathbf{A} \ =\ (\nabla \mathbf{A})\cdot \mathbf{A}\ =\ (\mathbf{A} {\cdot} \nabla) \mathbf{A} + \mathbf{A} {\times} (\nabla {\times} \mathbf{A}) \ =\ A \nabla A
- -where $A = |\mathbf{A}|$ denotes the magnitude of $\mathbf{A}$. - -The generalization of the dot product formula to Riemannian manifolds is a defining property of a Riemannian connection, which differentiates a vector field to give a vector-valued 1-form. - -\begin{align}
\nabla \cdot (\mathbf{A} \times \mathbf{B})
&\ =\ (\nabla {\times} \mathbf{A}) \cdot \mathbf{B}
- \mathbf{A} \cdot (\nabla {\times} \mathbf{B}) \\[5pt]
\nabla \times (\mathbf{A} \times \mathbf{B})
&\ =\ \mathbf{A}(\nabla {\cdot} \mathbf{B}) - \mathbf{B}(\nabla {\cdot} \mathbf{A})
+ (\mathbf{B} {\cdot} \nabla) \mathbf{A} - (\mathbf{A} {\cdot} \nabla) \mathbf{B} \\[2pt]
&\ =\ (\nabla {\cdot} \mathbf{B} + \mathbf{B} {\cdot} \nabla)\mathbf{A}
- (\nabla {\cdot} \mathbf{A} + \mathbf{A} {\cdot} \nabla) \mathbf{B} \\[2pt]
&\ =\ \nabla {\cdot} \left(\mathbf{B} \mathbf{A}^\mathrm{T}\right)
- \nabla {\cdot} \left(\mathbf{A} \mathbf{B}^\mathrm{T}\right) \\[2pt]
&\ =\ \nabla {\cdot} \left(\mathbf{B} \mathbf{A}^\mathrm{T} - \mathbf{A} \mathbf{B}^\mathrm{T}\right) \\
\mathbf{A} \times (\nabla \times \mathbf{B})
&\ =\ \nabla_{\mathbf{B}}(\mathbf{A} {\cdot} \mathbf{B}) - (\mathbf{A} {\cdot} \nabla) \mathbf{B} \\[2pt]
&\ =\ \mathbf{A} \cdot \mathbf{J}_\mathbf{B} - (\mathbf{A} {\cdot} \nabla) \mathbf{B} =\ (\nabla\mathbf{B})\cdot\mathbf{A} - (\mathbf{A} {\cdot} \nabla) \mathbf{B} \\[5pt]
&\ =\ \mathbf{A} \cdot (\mathbf{J}_\mathbf{B} - \mathbf{J}_\mathbf{B}^\mathrm{T})\\[5pt]
(\mathbf{A} \times \nabla) \times \mathbf{B}
&\ =\ (\nabla\mathbf{B}) \cdot \mathbf{A} - \mathbf{A} (\nabla {\cdot} \mathbf{B})\\
&\ =\ \mathbf{A} \times (\nabla \times \mathbf{B}) + (\mathbf{A} {\cdot} \nabla) \mathbf{B} - \mathbf{A} (\nabla {\cdot} \mathbf{B})
\end{align} - -Note that the matrix $ \mathbf{J}_\mathbf{B} - \mathbf{J}_\mathbf{B}^\mathrm{T}$ is antisymmetric.
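The dot-product rule can be spot-checked symbolically. In the following sketch the component functions are arbitrary unspecified smooth functions, and the directional derivative (A · ∇)B is computed componentwise:

```python
from sympy import Function, simplify
from sympy.vector import CoordSys3D, curl, gradient

N = CoordSys3D('N')

def field(name):
    # A generic vector field with unspecified smooth components.
    fx = Function(name + 'x')(N.x, N.y, N.z)
    fy = Function(name + 'y')(N.x, N.y, N.z)
    fz = Function(name + 'z')(N.x, N.y, N.z)
    return fx*N.i + fy*N.j + fz*N.k

def adv(A, B):
    # (A . del) B, applied to each component of B.
    return (A.dot(gradient(B.dot(N.i))) * N.i
            + A.dot(gradient(B.dot(N.j))) * N.j
            + A.dot(gradient(B.dot(N.k))) * N.k)

A, B = field('A'), field('B')
lhs = gradient(A.dot(B))
rhs = adv(A, B) + adv(B, A) + A.cross(curl(B)) + B.cross(curl(A))
print([simplify((lhs - rhs).dot(e)) for e in (N.i, N.j, N.k)])   # [0, 0, 0]
```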
- -The divergence of the curl of any vector field A is always zero: -$$ -\nabla \cdot ( \nabla \times \mathbf{A} ) = 0 -$$ - -This is a special case of the vanishing of the square of the exterior derivative in the De Rham chain complex. - -The Laplacian of a scalar field is the divergence of its gradient: -$$ -\Delta \psi = \nabla^2 \psi = \nabla \cdot (\nabla \psi) -$$ - -The result is a scalar quantity. - -Divergence of a vector field A is a scalar, and you cannot take the divergence of a scalar quantity. Therefore: -$$ - \nabla \cdot (\nabla \cdot \mathbf{A})\text{ is undefined} -$$ - -The curl of the gradient of any continuously twice-differentiable scalar field $\varphi $ is always the zero vector: -$$ -\nabla \times ( \nabla \varphi ) = \mathbf{0} -$$ - -This is a special case of the vanishing of the square of the exterior derivative in the De Rham chain complex. -$$ - \nabla \times \left( \nabla \times \mathbf{A} \right) \ =\ \nabla(\nabla {\cdot} \mathbf{A}) - \nabla^{2\!}\mathbf{A} -$$ - -Here ∇2 is the vector Laplacian operating on the vector field A. - -The divergence of a vector field A is a scalar, and you cannot take the curl of a scalar quantity. Therefore -$$ - \nabla \times (\nabla \cdot \mathbf{A})\ \text{is undefined} -$$ - -[Figure: DCG chart — some rules for second derivatives.] - -The figure to the right is a mnemonic for some of these identities. The abbreviations used are: - -* D: divergence, - -* C: curl, - -* G: gradient, - -* L: Laplacian, - -* CC: curl of curl. - -Each arrow is labeled with the result of an identity, specifically, the result of applying the operator at the arrow's tail to the operator at its head. The blue circle in the middle means curl of curl exists, whereas the other two red circles (dashed) mean that DD and GG do not exist.
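Two of the chart's rules, together with the curl-of-curl identity, can be verified symbolically along the same lines (again with arbitrary smooth components; the vector Laplacian is built here from the scalar one):

```python
from sympy import Function, simplify
from sympy.vector import CoordSys3D, curl, divergence, gradient

N = CoordSys3D('N')
f = Function('f')(N.x, N.y, N.z)
Ax, Ay, Az = (Function(n)(N.x, N.y, N.z) for n in ('Ax', 'Ay', 'Az'))
A = Ax*N.i + Ay*N.j + Az*N.k

print(simplify(divergence(curl(A))))       # 0: div of curl vanishes
print(curl(gradient(f)))                   # 0: curl of grad is the zero vector

lap = lambda s: divergence(gradient(s))    # scalar Laplacian as div(grad)
vector_lap = lap(Ax)*N.i + lap(Ay)*N.j + lap(Az)*N.k
diff = curl(curl(A)) - (gradient(divergence(A)) - vector_lap)
print([simplify(diff.dot(e)) for e in (N.i, N.j, N.k)])          # [0, 0, 0]
```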
- -*$\nabla(\psi+\phi)=\nabla\psi+\nabla\phi $ - -*$\nabla(\psi \phi) = \phi\nabla \psi + \psi \nabla \phi $ - -*$\nabla(\psi \mathbf{A} ) = \nabla \psi \otimes \mathbf{A} + \psi \nabla \mathbf{A}$ - -*\nabla(\mathbf{A} \cdot \mathbf{B}) = (\mathbf{A} \cdot \nabla)\mathbf{B} + (\mathbf{B} \cdot \nabla)\mathbf{A} + \mathbf{A} \times (\nabla \times \mathbf{B}) - -+ \mathbf{B} \times (\nabla \times \mathbf{A}) - -*$ \nabla\cdot(\mathbf{A}+\mathbf{B})= \nabla\cdot\mathbf{A}+\nabla\cdot\mathbf{B} $ - -*$ \nabla\cdot\left(\psi\mathbf{A}\right)= \psi\nabla\cdot\mathbf{A}+\mathbf{A}\cdot\nabla \psi$ - -*$ \nabla\cdot\left(\mathbf{A}\times\mathbf{B}\right)= (\nabla\times\mathbf{A})\cdot \mathbf{B}-(\nabla\times\mathbf{B})\cdot \mathbf{A}$ - -*$\nabla\times(\mathbf{A}+\mathbf{B})=\nabla\times\mathbf{A}+\nabla\times\mathbf{B} $ - -*$\nabla\times\left(\psi\mathbf{A}\right)=\psi(\nabla\times\mathbf{A})-(\mathbf{A}\times\nabla)\psi=\psi(\nabla\times\mathbf{A})+(\nabla\psi)\times\mathbf{A}$ - -*$\nabla\times\left(\psi\nabla\phi\right)= \nabla \psi \times \nabla \phi$ - -*$\nabla\times\left(\mathbf{A}\times\mathbf{B}\right)= \mathbf{A}\left(\nabla\cdot\mathbf{B}\right)-\mathbf{B} \left( \nabla\cdot\mathbf{A}\right)+\left(\mathbf{B}\cdot\nabla\right)\mathbf{A}- \left(\mathbf{A}\cdot\nabla\right)\mathbf{B} $ - -*$(\mathbf{A} \cdot \nabla)\mathbf{B} = \frac{1}{2}\bigg[\nabla(\mathbf{A} \cdot \mathbf{B}) - \nabla\times(\mathbf{A} \times \mathbf{B}) - \mathbf{B}\times(\nabla \times \mathbf{A}) - \mathbf{A}\times(\nabla \times \mathbf{B}) - \mathbf{B}(\nabla \cdot \mathbf{A}) + \mathbf{A}(\nabla \cdot\mathbf{B})\bigg]$ - -*$(\mathbf{A} \cdot \nabla)\mathbf{A} = \frac{1}{2}\nabla |\mathbf{A}|^2-\mathbf{A}\times(\nabla\times\mathbf{A}) $ - -*$\nabla \cdot (\nabla \times \mathbf{A}) = 0$ - -*$\nabla \times (\nabla\psi) = \mathbf{0}$ - -*$\nabla \cdot (\nabla\psi) = \nabla^2\psi$ (scalar Laplacian) - -*$\nabla\left(\nabla \cdot \mathbf{A}\right) - \nabla \times \left(\nabla \times \mathbf{A}\right) = \nabla^2\mathbf{A}$ (vector Laplacian) - -*$\nabla \cdot (\phi\nabla\psi) = \phi\nabla^2\psi + \nabla\phi \cdot \nabla\psi$ - -*$\psi\nabla^2\phi - \phi\nabla^2\psi = \nabla \cdot \left(\psi\nabla\phi - \phi\nabla\psi\right)$ - -*$\nabla^2(\phi\psi) = \phi\nabla^2\psi + 2(\nabla\phi) \cdot(\nabla\psi) + \left(\nabla^2\phi\right)\psi$ - -*$\nabla^2(\psi\mathbf{A}) = \mathbf{A}\nabla^2\psi + 2(\nabla\psi \cdot \nabla)\mathbf{A} + \psi\nabla^2\mathbf{A}$ - -*$\nabla^2(\mathbf{A} \cdot \mathbf{B}) = \mathbf{A} \cdot \nabla^2\mathbf{B} - \mathbf{B} \cdot \nabla^2\!\mathbf{A} + 2\nabla \cdot ((\mathbf{B} \cdot \nabla)\mathbf{A} + \mathbf{B} \times (\nabla \times \mathbf{A}))$ (Green's vector identity) - -*$ \nabla^2(\nabla\psi) = \nabla(\nabla \cdot (\nabla\psi)) = \nabla\left(\nabla^2\psi\right)$ - -*$ \nabla^2(\nabla \cdot \mathbf{A}) = \nabla \cdot (\nabla(\nabla \cdot \mathbf{A})) = \nabla \cdot \left(\nabla^2\mathbf{A}\right)$ - -*$ \nabla^{2}(\nabla\times\mathbf{A}) = -\nabla \times (\nabla \times (\nabla \times \mathbf{A})) = \nabla \times \left(\nabla^2\mathbf{A}\right)$ - -Below, the curly symbol ∂ means "boundary of" a surface or solid. 
In the following surface–volume integral theorems, V denotes a three-dimensional volume with a corresponding two-dimensional boundary S = ∂V (a closed surface): - -* $\oiint_{\partial V} \mathbf{A} \cdot d\mathbf{S}\ =\ \iiint_V \left(\nabla \cdot \mathbf{A}\right)dV$ (divergence theorem) - -* $\oiint_{\partial V} \psi\,d\mathbf{S}\ =\ \iiint_V \nabla\psi\,dV$ - -* $\oiint_{\partial V} \mathbf{A} \times d\mathbf{S}\ =\ -\iiint_V \nabla \times \mathbf{A}\,dV$ - -* $\oiint_{\partial V} \psi \nabla\!\varphi \cdot d\mathbf{S}\ =\ \iiint_V \left(\psi\nabla^2 \!\varphi + \nabla\!\varphi \cdot \nabla\!\psi\right)dV$ (Green's first identity) - -* $\oiint_{\partial V} \left(\psi\nabla\!\varphi - \varphi\nabla\!\psi\right) \cdot d\mathbf{S}\ =\ \oiint_{\partial V} \left(\psi\frac{\partial\varphi}{\partial n} - \varphi\frac{\partial\psi}{\partial n}\right)dS\ =\ \iiint_{V}\left(\psi\nabla^2\!\varphi - \varphi\nabla^2\!\psi\right)dV$ (Green's second identity) - -* $\iiint_V \mathbf{A} \cdot \nabla\psi\,dV\ =\ \oiint_{\partial V} \psi\mathbf{A} \cdot d\mathbf{S} - \iiint_V \psi\nabla \cdot \mathbf{A}\,dV$ (integration by parts) - -* $\iiint_V \psi\nabla \cdot \mathbf{A}\,dV\ =\ \oiint_{\partial V} \psi\mathbf{A} \cdot d\mathbf{S} - \iiint_V \mathbf{A} \cdot \nabla\psi\,dV$ (integration by parts) - -In the following curve–surface integral theorems, S denotes a 2d open surface with a corresponding 1d boundary C = ∂S (a closed curve): - -* $\oint_{\partial S}\mathbf{A}\cdot d\boldsymbol{\ell}\ =\ \iint_{S}\left(\nabla \times \mathbf{A}\right)\cdot d\mathbf{S}$ (Stokes' theorem) - -* $\oint_{\partial S}\psi d\boldsymbol{\ell}\ =\ -\iint_{S} \nabla\psi \times d\mathbf{S}$ - -Integration around a closed curve in the clockwise sense is the negative of the same line integral in the counterclockwise sense (analogous to interchanging the limits in a definite integral): -$$ -\oint_{\partial S\,(\text{clockwise})}\mathbf{A}\cdot{\rm d}\boldsymbol{\ell}\ =\ -\oint_{\partial S\,(\text{counterclockwise})}\mathbf{A}\cdot{\rm d}\boldsymbol{\ell}. -$$ diff --git a/wiki/wikipedia/379.txt b/wiki/wikipedia/379.txt deleted file mode 100644 index fa94ead8f20ccaf0d882e8a2be3b65a9f4c4155f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/379.txt +++ /dev/null @@ -1,98 +0,0 @@ -In mathematics, a Paley–Wiener theorem is any theorem that relates decay properties of a function or distribution at infinity with analyticity of its Fourier transform. The theorem is named for Raymond Paley (1907–1933) and Norbert Wiener (1894–1964). The original theorems did not use the language of distributions, and instead applied to square-integrable functions. The first such theorem using distributions was due to Laurent Schwartz. These theorems heavily rely on the triangle inequality (to interchange the absolute value and integration).
The classical Paley–Wiener theorems make use of the holomorphic Fourier transform on classes of square-integrable functions supported on the real line. Formally, the idea is to take the integral defining the (inverse) Fourier transform -$$ -f(\zeta) = \int_{-\infty}^\infty F(x)e^{i x \zeta}dx -$$ - -and allow ζ to be a complex number in the upper half-plane. One may then expect to differentiate under the integral in order to verify that the Cauchy–Riemann equations hold, and thus that f defines an analytic function. However, this integral may not be well-defined, even for F in L2(R) — indeed, since ζ is in the upper half plane, the modulus of eixζ grows exponentially as $x \rightarrow -\infty$ — so differentiation under the integral sign is out of the question. One must impose further restrictions on F in order to ensure that this integral is well-defined. - -The first such restriction is that F be supported on R+: that is, F ∈ L2(R+). The Paley–Wiener theorem now asserts the following: The holomorphic Fourier transform of F, defined by -$$ -f(\zeta) = \int_0^\infty F(x) e^{i x\zeta} dx -$$ - -for ζ in the upper half-plane is a holomorphic function. Moreover, by Plancherel's theorem, one has -$$ -\int_{-\infty}^\infty \left |f(\xi+i\eta) \right|^2 d\xi \le \int_0^\infty |F(x)|^2 dx -$$ - -and by dominated convergence, -$$ -\lim_{\eta\to 0^+}\int_{-\infty}^\infty \left|f(\xi+i\eta)-f(\xi) \right|^2d\xi = 0. -$$ - -Conversely, if f is a holomorphic function in the upper half-plane satisfying -$$ -\sup_{\eta>0} \int_{-\infty}^\infty \left |f(\xi+i\eta) \right|^2d\xi = C < \infty -$$ - -then there exists F in L2(R+) such that f is the holomorphic Fourier transform of F. - -In abstract terms, this version of the theorem explicitly describes the Hardy space H2(R). The theorem states that -$$ - \mathcal{F}H^2(\mathbf{R})=L^2(\mathbf{R_+}). -$$ - -This is a very useful result as it enables one to pass to the Fourier transform of a function in the Hardy space and perform calculations in the easily understood space L2(R+) of square-integrable functions supported on the positive axis. - -By imposing the alternative restriction that F be compactly supported, one obtains another Paley–Wiener theorem. Suppose that F is supported in [−A, A], so that F ∈ L2(−A,A). Then the holomorphic Fourier transform -$$ -f(\zeta) = \int_{-A}^A F(x)e^{i x\zeta}dx -$$ - -is an entire function of exponential type A, meaning that there is a constant C such that -$$ -|f(\zeta)|\le Ce^{A|\zeta|}, -$$ - -and moreover, f is square-integrable over horizontal lines: -$$ -\int_{-\infty}^{\infty} |f(\xi+i\eta)|^2d\xi < \infty. -$$ - -Conversely, any entire function of exponential type A which is square-integrable over horizontal lines is the holomorphic Fourier transform of an L2 function supported in [−A, A]. - -Schwartz's Paley–Wiener theorem asserts that the Fourier transform of a distribution of compact support on Rn is an entire function on Cn and gives estimates on its growth at infinity. It was proven by Laurent Schwartz (1952). The formulation presented here is from Hörmander. - -Generally, the Fourier transform can be defined for any tempered distribution; moreover, any distribution of compact support v is a tempered distribution. If v is a distribution of compact support and f is an infinitely differentiable function, the expression -$$ - v(f) = v(x\mapsto f(x)) -$$ - -is well defined.
It can be shown that the Fourier transform of v is a function (as opposed to a general tempered distribution) given at the value s by -$$ - \hat{v}(s) = (2 \pi)^{-\frac{n}{2}} v\left(x\mapsto e^{-i \langle x, s\rangle}\right) -$$ - -and that this function can be extended to values of s in the complex space Cn. This extension of the Fourier transform to the complex domain is called the Fourier–Laplace transform. - -An entire function F on Cn is the Fourier–Laplace transform of a distribution v of compact support if and only if for all z ∈ Cn, -$$ - |F(z)| \leq C (1 + |z|)^N e^{B|\text{Im}(z)|} -$$ - -for some constants C, N, B. The distribution v in fact will be supported in the closed ball of center 0 and radius B. - -Additional growth conditions on the entire function F impose regularity properties on the distribution v. For instance: - -If for every positive N there is a constant CN such that for all z ∈ Cn, -$$ - |F(z)| \leq C_N (1 + |z|)^{-N} e^{B|\text{Im}(z)|} -$$ - -then v is an infinitely differentiable function, and vice versa. - -Sharper results giving good control over the singular support of v have been formulated by Hörmander. In particular, let K be a convex compact set in Rn with supporting function H, defined by -$$ -H(x) = \sup_{y\in K} \langle x,y\rangle. -$$ - -Then the singular support of v is contained in K if and only if there is a constant N and sequence of constants Cm such that -$$ -|\hat{v}(\zeta)| \le C_m(1+|\zeta|)^Ne^{H(\text{Im}(\zeta))} -$$ - -for $|\text{Im}(\zeta)| \le m \log(| \zeta |+1).$ diff --git a/wiki/wikipedia/3790.txt b/wiki/wikipedia/3790.txt deleted file mode 100644 index 13858c993923e8dea9819c657a34611a95fc195f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3790.txt +++ /dev/null @@ -1,17 +0,0 @@ -The inscribed square problem, also known as the square peg problem or Toeplitz's conjecture, is an unsolved question in geometry: Does every plane simple closed curve contain all four vertices of some square? This is true if the curve is convex or piecewise smooth and in other special cases. The problem was proposed by Otto Toeplitz in 1911. Some early positive results were obtained by Arnold Emch and Lev Schnirelmann. The general case remains open. - -Emch showed that piecewise analytic curves always have inscribed squares. In particular this is true for polygons. Emch's proof considers the curves traced out by the midpoints of secant line segments to the curve, parallel to a given line. He shows that, when these curves are intersected with the curves generated in the same way for a perpendicular family of secants, there are an odd number of crossings. Therefore, there always exists at least one crossing, which forms the center of a rhombus inscribed in the given curve. By rotating the two perpendicular lines continuously through a right angle, and applying the intermediate value theorem, he shows that at least one of these rhombi is a square. - -A related result of Stromquist shows that every locally monotone plane simple curve admits an inscribed square. The condition for local monotonicity is that, for any point p, the curve C should be locally representable as a graph of a function y=f(x). - -In more precise terms, for any given point p on C, there is a neighborhood U(p) and a fixed direction n(p) (the direction of the "y-axis") such that no chord of C in this neighborhood is parallel to n(p). - -Locally monotone curves include all types of polygons, all closed convex curves, and all piecewise C1 curves without any cusps.
- -An even weaker condition on the curve than local monotonicity is that, for some ε > 0, the curve does not have any inscribed special trapezoids of size ε. A special trapezoid is an isosceles trapezoid with three equal sides, each longer than the fourth side, inscribed in the curve with a vertex ordering consistent with the clockwise ordering of the curve itself. Its size is the length of the part of the curve that extends around the three equal sides. Here, this length is measured in the domain of a fixed parametrization of C, as C may not be rectifiable. Instead of a limit argument, the proof is based on relative obstruction theory. This condition is open and dense in the space of all Jordan curves with respect to the compact-open topology. In this sense, the inscribed square problem is solved for generic curves. - -In 2017, Terence Tao published a proof of the existence of a square in curves formed by the union of the graphs of two functions, both of which have the same value at the endpoints of the curves and both of which obey a Lipschitz continuity condition with Lipschitz constant less than one. Tao also formulated several related conjectures. - -One may ask whether other shapes can be inscribed into an arbitrary Jordan curve. It is known that for any triangle T and Jordan curve C, there is a triangle similar to T and inscribed in C. Moreover, the set of the vertices of such triangles is dense in C. In particular, there is always an inscribed equilateral triangle. - -It is also known that any Jordan curve admits an inscribed rectangle. This was proved by Vaughan by reducing the problem to the non-embeddability of the projective plane in R^3; his proof is published in Meyerson. In 2020, Joshua Evan Greene and Andrew Lobb proved that for every smooth Jordan curve C and rectangle R in the Euclidean plane there exists a rectangle similar to R whose vertices lie on C. This generalizes both the existence of rectangles (of arbitrary shape) and the existence of squares on smooth curves, which has been known since the work of Šnirel'man. - -Some generalizations of the inscribed square problem consider inscribed polygons for curves and even more general continua in higher dimensional Euclidean spaces. For example, Stromquist proved that every continuous closed curve C in Rn satisfying "Condition A" that no two chords of C in a suitable neighborhood of any point are perpendicular admits an inscribed quadrilateral with equal sides and equal diagonals. This class of curves includes all C2 curves. Nielsen and Wright proved that any symmetric continuum K in Rn contains many inscribed rectangles. diff --git a/wiki/wikipedia/3791.txt b/wiki/wikipedia/3791.txt deleted file mode 100644 index c82c60d8510791fb70364b2c4d4a311ea78a3984..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3791.txt +++ /dev/null @@ -1,133 +0,0 @@ -In mathematics, a theorem is a statement that has been proved, or can be proved. The proof of a theorem is a logical argument that uses the inference rules of a deductive system to establish that the theorem is a logical consequence of the axioms and previously proved theorems. - -In the mainstream of mathematics, the axioms and the inference rules are commonly left implicit, and, in this case, they are almost always those of Zermelo–Fraenkel set theory with the axiom of choice, or of a less powerful theory, such as Peano arithmetic. 
A notable exception is Wiles's proof of Fermat's Last Theorem, which involves Grothendieck universes, whose existence requires adding a new axiom to set theory. Generally, an assertion that is explicitly called a theorem is a proved result that is not an immediate consequence of other known theorems. Moreover, many authors qualify as theorems only the most important results, and use the terms lemma, proposition and corollary for less important theorems.
-
-In mathematical logic, the concepts of theorems and proofs have been formalized in order to allow mathematical reasoning about them. In this context, statements become well-formed formulas of some formal language. A theory consists of some basis statements called axioms, and some deducing rules (sometimes included in the axioms). The theorems of the theory are the statements that can be derived from the axioms by using the deducing rules. This formalization led to proof theory, which allows proving general theorems about theorems and proofs. In particular, Gödel's incompleteness theorems show that every consistent theory containing the natural numbers has true statements on natural numbers that are not theorems of the theory (that is, they cannot be proved inside the theory).
-
-As the axioms are often abstractions of properties of the physical world, theorems may be considered as the expression of some truth, but in contrast to the notion of a scientific law, which is experimental, the justification of the truth of a theorem is purely deductive.
-
-Until the end of the 19th century and the foundational crisis of mathematics, all of mathematics was built from a few basic properties that were considered as self-evident; for example, the facts that every natural number has a successor, and that there is exactly one line passing through two given distinct points. Those basic properties that were not considered as absolutely evident were called postulates; for example Euclid's postulates. All theorems were proved by using implicitly or explicitly these basic properties, and, because of the evidence of these basic properties, a proved theorem was considered as a definitive truth, unless there was an error in the proof. For example, the sum of the interior angles of a triangle equals 180°, and this was considered as an unquestionable fact.
-
-One aspect of the foundational crisis of mathematics was the discovery of non-Euclidean geometries that do not lead to any contradiction, although, in such geometries, the sum of the angles of a triangle is different from 180°. So, the property "the sum of the angles of a triangle equals 180°" is either true or false, depending on whether Euclid's postulates are assumed. Similarly, the use of "evident" basic properties of sets leads to the contradiction of Russell's paradox. This has been resolved by elaborating the rules that are allowed for manipulating sets.
-
-This crisis has been resolved by revisiting the foundations of mathematics to make them more rigorous. In these new foundations, a theorem is a well-formed formula of a mathematical theory that can be proved from the axioms and inference rules of the theory. So, the above theorem on the sum of the angles of a triangle becomes: Under the axioms and inference rules of Euclidean geometry, the sum of the interior angles of a triangle equals 180°. Similarly, Russell's paradox disappears because, in an axiomatized set theory, the set of all sets cannot be expressed with a well-formed formula.
More precisely, if the set of all sets can be expressed with a well-formed formula, this implies that the theory is inconsistent, and every well-formed assertion, as well as its negation, is a theorem.
-
-In this context, the validity of a theorem depends only on the correctness of its proof. It is independent of the truth, or even the significance, of the axioms. This does not mean that the significance of the axioms is uninteresting, but only that the validity (truth) of a theorem is independent of the significance of the axioms. This independence may be useful by allowing the use of results of some area of mathematics in apparently unrelated areas.
-
-An important consequence of this way of thinking about mathematics is that it allows defining mathematical theories and theorems as mathematical objects, and proving theorems about them. Examples are Gödel's incompleteness theorems. In particular, there are well-formed assertions that can be proved not to be theorems of the ambient theory, although they can be proved in a wider theory. An example is Goodstein's theorem, which can be stated in Peano arithmetic, but is proved to be not provable in Peano arithmetic. However, it is provable in some more general theories, such as Zermelo–Fraenkel set theory.
-
-Many mathematical theorems are conditional statements, whose proofs deduce conclusions from conditions known as hypotheses or premises. In light of the interpretation of proof as justification of truth, the conclusion is often viewed as a necessary consequence of the hypotheses. Namely, that the conclusion is true in case the hypotheses are true—without any further assumptions. However, the conditional could also be interpreted differently in certain deductive systems, depending on the meanings assigned to the derivation rules and the conditional symbol (e.g., non-classical logic).
-
-Although theorems can be written in a completely symbolic form (e.g., as propositions in propositional calculus), they are often expressed informally in a natural language such as English for better readability. The same is true of proofs, which are often expressed as logically organized and clearly worded informal arguments, intended to convince readers of the truth of the statement of the theorem beyond any doubt, and from which a formal symbolic proof can in principle be constructed.
-
-In addition to the better readability, informal arguments are typically easier to check than purely symbolic ones—indeed, many mathematicians would express a preference for a proof that not only demonstrates the validity of a theorem, but also explains in some way why it is obviously true. In some cases, one might even be able to substantiate a theorem by using a picture as its proof.
-
-Because theorems lie at the core of mathematics, they are also central to its aesthetics. Theorems are often described as being "trivial", or "difficult", or "deep", or even "beautiful". These subjective judgments vary not only from person to person, but also with time and culture: for example, as a proof is obtained, simplified or better understood, a theorem that was once difficult may become trivial. On the other hand, a deep theorem may be stated simply, but its proof may involve surprising and subtle connections between disparate areas of mathematics. Fermat's Last Theorem is a particularly well-known example of such a theorem.
-
-Logically, many theorems are of the form of an indicative conditional: If A, then B.
Such a theorem does not assert B — only that B is a necessary consequence of A. In this case, A is called the hypothesis of the theorem ("hypothesis" here means something very different from a conjecture), and B the conclusion of the theorem. The two together (without the proof) are called the proposition or statement of the theorem (e.g. "If A, then B" is the proposition). Alternatively, A and B can be also termed the antecedent and the consequent, respectively. The theorem "If n is an even natural number, then n/2 is a natural number" is a typical example in which the hypothesis is "n is an even natural number", and the conclusion is "n/2 is also a natural number".
-
-In order for a theorem to be proved, it must be in principle expressible as a precise, formal statement. However, theorems are usually expressed in natural language rather than in a completely symbolic form—with the presumption that a formal statement can be derived from the informal one.
-
-It is common in mathematics to choose a number of hypotheses within a given language and declare that the theory consists of all statements provable from these hypotheses. These hypotheses form the foundational basis of the theory and are called axioms or postulates. The field of mathematics known as proof theory studies formal languages, axioms and the structure of proofs.
-
-Some theorems are "trivial", in the sense that they follow from definitions, axioms, and other theorems in obvious ways and do not contain any surprising insights. Some, on the other hand, may be called "deep", because their proofs may be long and difficult, involve areas of mathematics superficially distinct from the statement of the theorem itself, or show surprising connections between disparate areas of mathematics. A theorem might be simple to state and yet be deep. An excellent example is Fermat's Last Theorem, and there are many other examples of simple yet deep theorems in number theory and combinatorics, among other areas.
-
-Other theorems have a known proof that cannot easily be written down. The most prominent examples are the four color theorem and the Kepler conjecture. Both of these theorems are only known to be true by reducing them to a computational search that is then verified by a computer program. Initially, many mathematicians did not accept this form of proof, but it has become more widely accepted. The mathematician Doron Zeilberger has even gone so far as to claim that these are possibly the only nontrivial results that mathematicians have ever proved. Many mathematical theorems can be reduced to more straightforward computation, including polynomial identities, trigonometric identities and hypergeometric identities.
-
-Theorems in mathematics and theories in science are fundamentally different in their epistemology. A scientific theory cannot be proved; its key attribute is that it is falsifiable, that is, it makes predictions about the natural world that are testable by experiments. Any disagreement between prediction and experiment demonstrates the incorrectness of the scientific theory, or at least limits its accuracy or domain of validity. Mathematical theorems, on the other hand, are purely abstract formal statements: the proof of a theorem cannot involve experiments or other empirical evidence in the same way such evidence is used to support scientific theories.
-
-Nonetheless, there is some degree of empiricism and data collection involved in the discovery of mathematical theorems.
By establishing a pattern, sometimes with the use of a powerful computer, mathematicians may have an idea of what to prove, and in some cases even a plan for how to set about doing the proof. It is also possible to find a single counter-example and so establish the impossibility of a proof for the proposition as-stated, and possibly suggest restricted forms of the original proposition that might have feasible proofs.
-
-For example, both the Collatz conjecture and the Riemann hypothesis are well-known unsolved problems; they have been extensively studied through empirical checks, but remain unproven. The Collatz conjecture has been verified for start values up to about 2.88 × 10^18. The Riemann hypothesis has been verified to hold for the first 10 trillion non-trivial zeroes of the zeta function. Although most mathematicians can tolerate supposing that the conjecture and the hypothesis are true, neither of these propositions is considered proved.
-
-Such evidence does not constitute proof. For example, the Mertens conjecture is a statement about natural numbers that is now known to be false, but no explicit counterexample (i.e., a natural number n for which the Mertens function M(n) equals or exceeds the square root of n) is known: all numbers less than 10^14 have the Mertens property, and the smallest number that does not have this property is only known to be less than the exponential of 1.59 × 10^40, which is approximately 10 to the power 4.3 × 10^39. Since the number of particles in the universe is generally considered less than 10 to the power 100 (a googol), there is no hope of finding an explicit counterexample by exhaustive search.
-
-The word "theory" also exists in mathematics, to denote a body of mathematical axioms, definitions and theorems, as in, for example, group theory (see mathematical theory). There are also "theorems" in science, particularly physics, and in engineering, but they often have statements and proofs in which physical assumptions and intuition play an important role; the physical axioms on which such "theorems" are based are themselves falsifiable.
-
-A number of different terms for mathematical statements exist; these terms indicate the role statements play in a particular subject. The distinction between different terms is sometimes rather arbitrary, and the usage of some terms has evolved over time.
-
-* An axiom or postulate is a fundamental assumption regarding the object of study, that is accepted without proof. A related concept is that of a definition, which gives the meaning of a word or a phrase in terms of known concepts. Classical geometry distinguishes between axioms, which are general statements; and postulates, which are statements about geometrical objects. Historically, axioms were regarded as "self-evident"; today they are merely assumed to be true.
-
-* A conjecture is an unproved statement that is believed to be true. Conjectures are usually made in public, and named after their maker (for example, Goldbach's conjecture and Collatz conjecture). The term hypothesis is also used in this sense (for example, Riemann hypothesis), which should not be confused with "hypothesis" as the premise of a proof. Other terms are also used on occasion, for example, problem, when people are not sure whether the statement should be believed to be true. Fermat's Last Theorem was historically called a theorem, although, for centuries, it was only a conjecture.
-
-* A theorem is a statement that has been proven to be true based on axioms and other theorems.
-
-* A proposition is a theorem of lesser importance, or one that is considered so elementary or immediately obvious, that it may be stated without proof. This should not be confused with "proposition" as used in propositional logic. In classical geometry the term "proposition" was used differently: in Euclid's Elements, all theorems and geometric constructions were called "propositions" regardless of their importance.
-
-* A lemma is an "accessory proposition" - a proposition with little applicability outside its use in a particular proof. Over time a lemma may gain in importance and be considered a theorem, though the term "lemma" is usually kept as part of its name (e.g. Gauss's lemma, Zorn's lemma, and the fundamental lemma).
-
-* A corollary is a proposition that follows immediately from another theorem or axiom, with little or no required proof. A corollary may also be a restatement of a theorem in a simpler form, or for a special case: for example, the theorem "all internal angles in a rectangle are right angles" has a corollary that "all internal angles in a square are right angles" - a square being a special case of a rectangle.
-
-* A generalization of a theorem is a theorem with a similar statement but a broader scope, from which the original theorem can be deduced as a special case (a corollary).
-
-Other terms may also be used for historical or customary reasons, for example:
-
-* An identity is a theorem stating an equality between two expressions, that holds for any value within its domain (e.g. Bézout's identity and Vandermonde's identity).
-
-* A rule is a theorem that establishes a useful formula (e.g. Bayes' rule and Cramer's rule).
-
-* A law or principle is a theorem with wide applicability (e.g. the law of large numbers, law of cosines, Kolmogorov's zero–one law, Harnack's principle, the least-upper-bound principle, and the pigeonhole principle).
-
-A few well-known theorems have even more idiosyncratic names, for example, the division algorithm, Euler's formula, and the Banach–Tarski paradox.
-
-A theorem and its proof are typically laid out as follows:
-
-Theorem (name of the person who proved it, along with year of discovery or publication of the proof)
-
-Statement of theorem (sometimes called the proposition)
-
-Proof
-
-Description of proof
-
-End
-
-The end of the proof may be signaled by the letters Q.E.D. (quod erat demonstrandum) or by one of the tombstone marks, such as "□" or "∎", meaning "end of proof", introduced by Paul Halmos following their use in magazines to mark the end of an article.
-
-The exact style depends on the author or publication. Many publications provide instructions or macros for typesetting in the house style.
-
-It is common for a theorem to be preceded by definitions describing the exact meaning of the terms used in the theorem. It is also common for a theorem to be preceded by a number of propositions or lemmas which are then used in the proof. However, lemmas are sometimes embedded in the proof of a theorem, either with nested proofs, or with their proofs presented after the proof of the theorem.
-
-Corollaries to a theorem are either presented between the theorem and the proof, or directly after the proof. Sometimes, corollaries have proofs of their own that explain why they follow from the theorem.
-
-It has been estimated that over a quarter of a million theorems are proved every year.
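-
-The even-number example above can also be written in the fully machine-checked style, where the final proof term plays the role of the Q.E.D. mark. The following is an illustrative sketch in Lean 3 syntax (the statement and its encoding of evenness as an explicit witness are choices made here for self-containedness):
-```lean
--- "If n is an even natural number (witnessed by m with n = m + m),
--- then n is a sum k + k for some natural number k."
-theorem even_witness (n m : ℕ) (h : n = m + m) : ∃ k : ℕ, n = k + k :=
-⟨m, h⟩  -- the anonymous-constructor term supplies the witness and is the entire proof
-```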
- -The well-known aphorism, "A mathematician is a device for turning coffee into theorems", is probably due to Alfréd Rényi, although it is often attributed to Rényi's colleague Paul Erdős (and Rényi may have been thinking of Erdős), who was famous for the many theorems he produced, the number of his collaborations, and his coffee drinking. - -The classification of finite simple groups is regarded by some to be the longest proof of a theorem. It comprises tens of thousands of pages in 500 journal articles by some 100 authors. These papers are together believed to give a complete proof, and several ongoing projects hope to shorten and simplify this proof. Another theorem of this type is the four color theorem whose computer generated proof is too long for a human to read. It is among the longest known proofs of a theorem whose statement can be easily understood by a layman. - -In mathematical logic, a formal theory is a set of sentences within a formal language. A sentence is a well-formed formula with no free variables. A sentence that is a member of a theory is one of its theorems, and the theory is the set of its theorems. Usually a theory is understood to be closed under the relation of logical consequence. Some accounts define a theory to be closed under the semantic consequence relation ($\models$), while others define it to be closed under the syntactic consequence, or derivability relation ($\vdash$). - -For a theory to be closed under a derivability relation, it must be associated with a deductive system that specifies how the theorems are derived. The deductive system may be stated explicitly, or it may be clear from the context. The closure of the empty set under the relation of logical consequence yields the set that contains just those sentences that are the theorems of the deductive system. - -In the broad sense in which the term is used within logic, a theorem does not have to be true, since the theory that contains it may be unsound relative to a given semantics, or relative to the standard interpretation of the underlying language. A theory that is inconsistent has all sentences as theorems. - -The definition of theorems as sentences of a formal language is useful within proof theory, which is a branch of mathematics that studies the structure of formal proofs and the structure of provable formulas. It is also important in model theory, which is concerned with the relationship between formal theories and structures that are able to provide a semantics for them through interpretation. - -Although theorems may be uninterpreted sentences, in practice mathematicians are more interested in the meanings of the sentences, i.e. in the propositions they express. What makes formal theorems useful and interesting is that they may be interpreted as true propositions and their derivations may be interpreted as a proof of their truth. A theorem whose interpretation is a true statement about a formal system (as opposed to within a formal system) is called a metatheorem. 
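-
-The phrase "the theory is the set of its theorems", i.e. the closure of the axioms under the deductive system, can be made concrete in a few lines of code. The sketch below is an illustrative toy (with a deliberately naive string encoding of conditionals, chosen only for brevity) whose single inference rule is modus ponens:
-```python
-# "Theorems" of a tiny Hilbert-style system: the closure of the axioms
-# under modus ponens. A formula "a -> b" is a conditional with parts a, b.
-axioms = {"p", "p -> q", "q -> r"}
-
-def theorems(axioms):
-    thms = set(axioms)
-    changed = True
-    while changed:
-        changed = False
-        for f in list(thms):
-            if " -> " in f:
-                a, b = f.split(" -> ", 1)
-                if a in thms and b not in thms:  # modus ponens: from a and a -> b, derive b
-                    thms.add(b)
-                    changed = True
-    return thms
-
-print(sorted(theorems(axioms)))  # ['p', 'p -> q', 'q', 'q -> r', 'r']
-```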
-
-Some important theorems in mathematical logic are:
-
-* Compactness of first-order logic
-
-* Completeness of first-order logic
-
-* Gödel's incompleteness theorems of first-order arithmetic
-
-* Consistency of first-order arithmetic
-
-* Tarski's undefinability theorem
-
-* Church–Turing theorem of undecidability
-
-* Löb's theorem
-
-* Löwenheim–Skolem theorem
-
-* Lindström's theorem
-
-* Craig's theorem
-
-* Cut-elimination theorem
-
-The concept of a formal theorem is fundamentally syntactic, in contrast to the notion of a true proposition, which introduces semantics. Different deductive systems can yield other interpretations, depending on the presumptions of the derivation rules (i.e. belief, justification or other modalities). The soundness of a formal system depends on whether or not all of its theorems are also validities. A validity is a formula that is true under any possible interpretation (for example, in classical propositional logic, validities are tautologies). A formal system is considered semantically complete when all of its theorems are also tautologies.
diff --git a/wiki/wikipedia/3792.txt b/wiki/wikipedia/3792.txt
deleted file mode 100644
index 1de159eaf1b381127f257adc616d67434c18789d..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3792.txt
+++ /dev/null
@@ -1,50 +0,0 @@
-In optimization theory, maximum flow problems involve finding a feasible flow through a flow network that obtains the maximum possible flow rate.
-
-The maximum flow problem can be seen as a special case of more complex network flow problems, such as the circulation problem. The maximum value of an s-t flow (i.e., flow from source s to sink t) is equal to the minimum capacity of an s-t cut (i.e., cut severing s from t) in the network, as stated in the max-flow min-cut theorem.
-
-The maximum flow problem was first formulated in 1954 by T. E. Harris and F. S. Ross as a simplified model of Soviet railway traffic flow.
-
-In 1955, Lester R. Ford, Jr. and Delbert R. Fulkerson created the first known algorithm, the Ford–Fulkerson algorithm. In their 1955 paper,
-
-Another version of airline scheduling is finding the minimum number of crews needed to perform all the flights. In order to find an answer to this problem, a bipartite graph G' = (A ∪ B, E) is created where each flight has a copy in set A and set B. If the same plane can perform flight j after flight i, i∈A is connected to j∈B. A matching in G' induces a schedule for F, and obviously a maximum bipartite matching in this graph produces an airline schedule with the minimum number of crews. As mentioned in the applications part of this article, maximum cardinality bipartite matching is an application of the maximum flow problem.
-
-There are some factories that produce goods and some villages where the goods have to be delivered. They are connected by a network of roads, with each road having a capacity c for the maximum amount of goods that can flow through it. The problem is to find whether there is a circulation that satisfies the demand.
-
-This problem can be transformed into a maximum-flow problem.
-
-1. Add a source node s and add edges from it to every factory node fi with capacity pi, where pi is the production rate of factory fi.
-
-2. Add a sink node t and add edges from all villages vi to t with capacity di, where di is the demand rate of village vi.
-
-Let G = (V, E) be this new network. There exists a circulation that satisfies the demand if and only if:
-
-Maximum flow value(G) $= \sum_{i} d_i$, the total demand of the villages.
-
-If there exists a circulation, looking at the max-flow solution gives the answer as to how many goods have to be sent on a particular road in order to satisfy the demands.
-
-The problem can be extended by adding a lower bound on the flow on some edges.
-
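-
-The reduction just described is mechanical enough to sketch in code. The following minimal illustration assumes the networkx library (its maximum_flow routine computes a maximum s-t flow); the production rates, demands, and road capacities are made-up example data:
-```python
-import networkx as nx
-
-# Made-up example data: two factories, two villages, roads with capacities.
-production = {"f1": 4, "f2": 3}                              # p_i of factory f_i
-demand = {"v1": 5, "v2": 2}                                  # d_i of village v_i
-roads = [("f1", "v1", 4), ("f1", "v2", 2), ("f2", "v1", 3)]  # (from, to, capacity c)
-
-G = nx.DiGraph()
-for u, v, c in roads:
-    G.add_edge(u, v, capacity=c)
-for f, p in production.items():   # source s feeds each factory at its production rate
-    G.add_edge("s", f, capacity=p)
-for v, d in demand.items():       # each village drains into sink t at its demand rate
-    G.add_edge(v, "t", capacity=d)
-
-flow_value, flow_dict = nx.maximum_flow(G, "s", "t")
-# A circulation satisfying the demand exists iff the max flow equals the total demand.
-print(flow_value == sum(demand.values()))  # True for this data
-print(flow_dict["f1"]["v1"])               # goods to send along road f1 -> v1
-```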
-
-In their book, Kleinberg and Tardos present an algorithm for segmenting an image, that is, for finding the background and the foreground in an image. More precisely, the algorithm takes a bitmap as an input modelled as follows: ai ≥ 0 is the likelihood that pixel i belongs to the foreground, bi ≥ 0 is the likelihood that pixel i belongs to the background, and pij is the penalty if two adjacent pixels i and j are placed one in the foreground and the other in the background. The goal is to find a partition (A, B) of the set of pixels that maximizes the following quantity:
-$$
-q(A, B) = \sum_{i \in A} a_i + \sum_{i \in B} b_i - \sum_{\begin{matrix}i, j \text{ adjacent} \\ |A \cap \{i, j\}| = 1 \end{matrix}} p_{ij}
-$$
-
-Indeed, for pixels in A (considered as the foreground), we gain ai; for all pixels in B (considered as the background), we gain bi. On the border, between two adjacent pixels i and j, we lose pij. It is equivalent to minimizing the quantity
-$$
-q'(A, B) = \sum_{i \in A} b_i + \sum_{i \in B} a_i + \sum_{\begin{matrix}i, j \text{ adjacent} \\ |A \cap \{i, j\}| = 1 \end{matrix}} p_{ij}
-$$
-
-because
-$$
-q(A, B) = \sum_{i \in A\cup B} a_i + \sum_{i \in A\cup B} b_i - q'(A, B).
-$$
-
-We now construct the network whose nodes are the pixels, plus a source and a sink. We connect the source to pixel i by an edge of weight ai. We connect pixel i to the sink by an edge of weight bi. We connect pixel i to pixel j with weight pij. Now, it remains to compute a minimum cut in that network (or equivalently a maximum flow); the minimum cut yields the desired partition.
-
-1. In the minimum-cost flow problem, each edge (u,v) also has a cost-coefficient auv in addition to its capacity. If the flow through the edge is fuv, then the total cost is auvfuv. It is required to find a flow of a given size d, with the smallest cost. In most variants, the cost-coefficients may be either positive or negative. There are various polynomial-time algorithms for this problem.
-
-2. The maximum-flow problem can be augmented by disjunctive constraints: a negative disjunctive constraint says that a certain pair of edges cannot simultaneously have a nonzero flow; a positive disjunctive constraint says that, in a certain pair of edges, at least one must have a nonzero flow. With negative constraints, the problem becomes strongly NP-hard even for simple networks. With positive constraints, the problem is polynomial if fractional flows are allowed, but may be strongly NP-hard when the flows must be integral.
-
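-
-The segmentation construction above also translates directly into code. Below is an illustrative sketch, again assuming networkx (whose minimum_cut returns a cut value and the two sides of the partition), on a toy 2×2 image with made-up likelihoods and a single made-up penalty value:
-```python
-import networkx as nx
-from itertools import product
-
-# Toy 2x2 image: a[i] = foreground likelihood, b[i] = background likelihood.
-a = {(0, 0): 9, (0, 1): 8, (1, 0): 2, (1, 1): 1}
-b = {(0, 0): 1, (0, 1): 2, (1, 0): 8, (1, 1): 9}
-penalty = 3  # p_ij, one made-up value for every adjacent pair
-
-G = nx.DiGraph()
-for px in a:
-    G.add_edge("s", px, capacity=a[px])  # source -> pixel i with weight a_i
-    G.add_edge(px, "t", capacity=b[px])  # pixel i -> sink with weight b_i
-for (i, j), (k, l) in product(a, a):
-    if abs(i - k) + abs(j - l) == 1:     # 4-adjacency
-        G.add_edge((i, j), (k, l), capacity=penalty)  # separation penalty p_ij
-
-cut_value, (A, B) = nx.minimum_cut(G, "s", "t")
-foreground = A - {"s"}  # pixels on the source side of the minimum cut
-print(foreground)       # {(0, 0), (0, 1)} for this data
-```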
diff --git a/wiki/wikipedia/3793.txt b/wiki/wikipedia/3793.txt
deleted file mode 100644
index a77df6251adba9b91136f8c8bf1600f0ef719cd2..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3793.txt
+++ /dev/null
@@ -1,111 +0,0 @@
-Unsolved problem in mathematics: Are there infinitely many regular primes, and if so, is their relative density $e^{-1/2}$?
-
-In number theory, a regular prime is a special kind of prime number, defined by Ernst Kummer in 1850 to prove certain cases of Fermat's Last Theorem. Regular primes may be defined via the divisibility of either class numbers or of Bernoulli numbers.
-
-The first few regular odd primes are:
-
-3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 41, 43, 47, 53, 61, 71, 73, 79, 83, 89, 97, 107, 109, 113, 127, 137, 139, 151, 163, 167, 173, 179, 181, 191, 193, 197, 199, ... .
-
-In 1850, Kummer proved that Fermat's Last Theorem is true for a prime exponent p if p is regular. This focused attention on the irregular primes. In 1852, Genocchi was able to prove that the first case of Fermat's Last Theorem is true for an exponent p, if (p, p − 3) is not an irregular pair. Kummer improved this further in 1857 by showing that for the "first case" of Fermat's Last Theorem (see Sophie Germain's theorem) it is sufficient to establish that either (p, p − 3) or (p, p − 5) fails to be an irregular pair.
-
-Kummer found the irregular primes less than 165. In 1963, Lehmer reported results up to 10000, and Selfridge and Pollack announced in 1964 that they had completed the table of irregular primes up to 25000. Although the two latter tables did not appear in print, Johnson found that (p, p − 3) is in fact an irregular pair for p = 16843 and that this is the first and only time this occurs for p < 30000. It was found in 1993 that the next time this happens is for p = 2124679; see Wolstenholme prime.
-
-An odd prime number p is defined to be regular if it does not divide the class number of the p-th cyclotomic field Q(ζp), where ζp is a primitive p-th root of unity. The prime number 2 is often considered regular as well.
-
-The class number of the cyclotomic field is the number of ideals of the ring of integers Z[ζp] up to equivalence. Two ideals I, J are considered equivalent if there is a nonzero u in Q(ζp) so that I = uJ.
-
-Ernst Kummer showed that an equivalent criterion for regularity is that p does not divide the numerator of any of the Bernoulli numbers Bk for k = 2, 4, 6, ..., p − 3.
-
-Kummer's proof that this is equivalent to the class number definition is strengthened by the Herbrand–Ribet theorem, which states certain consequences of p dividing one of these Bernoulli numbers.
-
-It has been conjectured that there are infinitely many regular primes. More precisely, it has been conjectured that $e^{-1/2}$, or about 60.65%, of all prime numbers are regular, in the asymptotic sense of natural density. Neither conjecture has been proven to date.
-
-An odd prime that is not regular is an irregular prime (or Bernoulli irregular or B-irregular, to distinguish from other types of irregularity discussed below). The first few irregular primes are:
-
-37, 59, 67, 101, 103, 131, 149, 157, 233, 257, 263, 271, 283, 293, 307, 311, 347, 353, 379, 389, 401, 409, 421, 433, 461, 463, 467, 491, 523, 541, 547, 557, 577, 587, 593, ...
-
-K. L. Jensen (an otherwise unknown student of Nielsen) proved in 1915 that there are infinitely many irregular primes of the form 4n + 3.
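-
-Kummer's Bernoulli-number criterion is easy to test by machine for small primes. The following is a minimal sketch, assuming the sympy library (its bernoulli function returns Bernoulli numbers as exact rationals, whose .p attribute is the numerator):
-```python
-from sympy import bernoulli, primerange
-
-def is_regular(p):
-    """Kummer's criterion: an odd prime p is regular iff p divides the
-    numerator of no Bernoulli number B_k for k = 2, 4, ..., p - 3."""
-    return all(bernoulli(k).p % p != 0 for k in range(2, p - 2, 2))
-
-print([p for p in primerange(3, 150) if not is_regular(p)])
-# [37, 59, 67, 101, 103, 131, 149], matching the first few irregular primes above
-```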
-
-In 1954 Carlitz gave a simple proof of the weaker result that there are in general infinitely many irregular primes.
-
-Metsänkylä proved that for any integer T > 6, there are infinitely many irregular primes not of the form mT + 1 or mT − 1, and later generalized it.
-
-If p is an irregular prime and p divides the numerator of the Bernoulli number B2k for 0 < 2k < p − 1, then (p, 2k) is called an irregular pair. In other words, an irregular pair is a bookkeeping device to record, for an irregular prime p, the particular indices of the Bernoulli numbers at which regularity fails. The first few irregular pairs (when ordered by k) are:
-
-(691, 12), (3617, 16), (43867, 18), (283, 20), (617, 20), (131, 22), (593, 22), (103, 24), (2294797, 24), (657931, 26), (9349, 28), (362903, 28), ... .
-
-The smallest even k such that the nth irregular prime divides Bk are
-
-32, 44, 58, 68, 24, 22, 130, 62, 84, 164, 100, 84, 20, 156, 88, 292, 280, 186, 100, 200, 382, 126, 240, 366, 196, 130, 94, 292, 400, 86, 270, 222, 52, 90, 22, ...
-
-For a given prime p, the number of such pairs is called the index of irregularity of p. Hence, a prime is regular if and only if its index of irregularity is zero. Similarly, a prime is irregular if and only if its index of irregularity is positive.
-
-It was discovered that (p, p − 3) is in fact an irregular pair for p = 16843, as well as for p = 2124679. There are no more occurrences for p < 10^9.
-
-An odd prime p has irregular index n if and only if there are n values of k for which p divides B2k and these values of k are less than (p − 1)/2. The first irregular prime with irregular index greater than 1 is 157, which divides B62 and B110, so it has irregular index 2. Clearly, the irregular index of a regular prime is 0.
-
-The irregular index of the nth prime is
-
-0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 2, 0, ... (starting with n = 2, that is, with the prime 3)
-
-The irregular index of the nth irregular prime is
-
-1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 2, 3, 1, 1, 2, 1, 1, 2, 1, 1, 1, 3, 1, 2, 3, 1, 1, 2, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 3, 1, 1, 1, ...
-
-The primes having irregular index 1 are
-
-37, 59, 67, 101, 103, 131, 149, 233, 257, 263, 271, 283, 293, 307, 311, 347, 389, 401, 409, 421, 433, 461, 463, 523, 541, 557, 577, 593, 607, 613, 619, 653, 659, 677, 683, 727, 751, 757, 761, 773, 797, 811, 821, 827, 839, 877, 881, 887, 953, 971, ...
-
-The primes having irregular index 2 are
-
-157, 353, 379, 467, 547, 587, 631, 673, 691, 809, 929, 1291, 1297, 1307, 1663, 1669, 1733, 1789, 1933, 1997, 2003, 2087, 2273, 2309, 2371, 2383, 2423, 2441, 2591, 2671, 2789, 2909, 2957, ...
-
-The primes having irregular index 3 are
-
-491, 617, 647, 1151, 1217, 1811, 1847, 2939, 3833, 4003, 4657, 4951, 6763, 7687, 8831, 9011, 10463, 10589, 12073, 13217, 14533, 14737, 14957, 15287, 15787, 15823, 16007, 17681, 17863, 18713, 18869, ...
-
-The least primes having irregular index n are
-
-2, 3, 37, 157, 491, 12613, 78233, 527377, 3238481, ... (This sequence defines "the irregular index of 2" as −1, and also starts at n = −1.)
-
-Similarly, we can define an Euler irregular prime (or E-irregular) as a prime p that divides at least one Euler number E2n with 0 < 2n ≤ p − 3.
The first few Euler irregular primes are
-
-19, 31, 43, 47, 61, 67, 71, 79, 101, 137, 139, 149, 193, 223, 241, 251, 263, 277, 307, 311, 349, 353, 359, 373, 379, 419, 433, 461, 463, 491, 509, 541, 563, 571, 577, 587, ...
-
-The Euler irregular pairs are
-
-(61, 6), (277, 8), (19, 10), (2659, 10), (43, 12), (967, 12), (47, 14), (4241723, 14), (228135437, 16), (79, 18), (349, 18), (84224971, 18), (41737, 20), (354957173, 20), (31, 22), (1567103, 22), (1427513357, 22), (2137, 24), (111691689741601, 24), (67, 26), (61001082228255580483, 26), (71, 28), (30211, 28), (2717447, 28), (77980901, 28), ...
-
-Vandiver proved that Fermat's Last Theorem (x^p + y^p = z^p) has no solution for integers x, y, z with gcd(xyz, p) = 1 if p is Euler-regular. Gut proved that x^{2p} + y^{2p} = z^{2p} has no solution if p has an E-irregularity index less than 5.
-
-It was proven that there is an infinity of E-irregular primes. A stronger result was obtained: there is an infinity of E-irregular primes congruent to 1 modulo 8. As in the case of Kummer's B-regular primes, there is as yet no proof that there are infinitely many E-regular primes, though this seems likely to be true.
-
-A prime p is called strong irregular if it is both B-irregular and E-irregular (the indexes of the Bernoulli and Euler numbers that are divisible by p can be either the same or different). The first few strong irregular primes are
-
-67, 101, 149, 263, 307, 311, 353, 379, 433, 461, 463, 491, 541, 577, 587, 619, 677, 691, 751, 761, 773, 811, 821, 877, 887, 929, 971, 1151, 1229, 1279, 1283, 1291, 1307, 1319, 1381, 1409, 1429, 1439, ...
-
-Proving Fermat's Last Theorem for a strong irregular prime p is more difficult (Kummer proved the first case of Fermat's Last Theorem for B-regular primes, and Vandiver proved the first case of Fermat's Last Theorem for E-regular primes); the most difficult case is when p is not only a strong irregular prime, but 2p + 1, 4p + 1, 8p + 1, 10p + 1, 14p + 1, and 16p + 1 are also all composite (Legendre proved the first case of Fermat's Last Theorem for primes p such that at least one of 2p + 1, 4p + 1, 8p + 1, 10p + 1, 14p + 1, and 16p + 1 is prime). The first few such p are
-
-263, 311, 379, 461, 463, 541, 751, 773, 887, 971, 1283, ...
-
-A prime p is weak irregular if it is either B-irregular or E-irregular (or both). The first few weak irregular primes are
-
-19, 31, 37, 43, 47, 59, 61, 67, 71, 79, 101, 103, 131, 137, 139, 149, 157, 193, 223, 233, 241, 251, 257, 263, 271, 277, 283, 293, 307, 311, 347, 349, 353, 373, 379, 389, 401, 409, 419, 421, 433, 461, 463, 491, 509, 523, 541, 547, 557, 563, 571, 577, 587, 593, ...
-
-Like the Bernoulli irregularity, the weak regularity relates to the divisibility of class numbers of cyclotomic fields. In fact, a prime p is weak irregular if and only if p divides the class number of the 4p-th cyclotomic field Q(ζ4p).
-
-In this section, "an" means the numerator of the nth Bernoulli number if n is even, and "an" means the (n − 1)th Euler number if n is odd.
-
-Since, for every odd prime p, p divides ap if and only if p is congruent to 1 mod 4, and since p divides the denominator of the (p − 1)th Bernoulli number for every odd prime p, no odd prime p can divide ap−1. Moreover, an odd prime p divides an (with 2p not dividing n) if and only if p also divides an+k(p−1)
for every integer k with n + k(p − 1) > 1. (If 2p divides n, the statement should be changed to "p also divides an+2kp"; in fact, if 2p divides n and p(p − 1) does not divide n, then p divides an.) For example, since 19 divides a11 and 2 × 19 = 38 does not divide 11, 19 divides a18k+11 for all k. Thus, in the definition of an irregular pair (p, n), n should be at most p − 2.
-
-The following table shows all irregular pairs with odd prime p ≤ 661:
-
-The only primes below 1000 with weak irregular index 3 are 307, 311, 353, 379, 577, 587, 617, 619, 647, 691, 751, and 929. Besides, 491 is the only prime below 1000 with weak irregular index 4, and all other odd primes below 1000 have weak irregular index 0, 1, or 2. (The weak irregular index is defined as the number of integers 0 ≤ n ≤ p − 2 such that p divides an.)
-
-The following table shows all irregular pairs with n ≤ 63. (To get these irregular pairs, we only need to factorize an. For example, a34 = 17 × 151628697551, but 17 < 34 + 2, so the only irregular pair with n = 34 is (151628697551, 34). More extensive data exists for even n up to 300 and odd n up to 201.)
-
-The following table shows irregular pairs (p, p − n) (n ≥ 2); it is conjectured that there are infinitely many irregular pairs (p, p − n) for every natural number n ≥ 2, but only a few have been found for fixed n. For some values of n, there is no known such prime p.
diff --git a/wiki/wikipedia/3795.txt b/wiki/wikipedia/3795.txt
deleted file mode 100644
index 059c8b970eb6758758c7a291253298e38573631c..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3795.txt
+++ /dev/null
@@ -1,29 +0,0 @@
-In traditional logic, contraposition is a form of immediate inference in which a proposition is inferred from another and where the former has for its subject the contradictory of the original logical proposition's predicate. In some cases, contraposition involves a change of the former's quality (i.e. affirmation or negation). For its symbolic expression in modern logic, see the rule of transposition.
Contraposition also has philosophical application distinct from the other traditional inference processes of conversion and obversion, where equivocation varies with different proposition types.
-
-In traditional logic, the process of contraposition is a schema composed of several steps of inference involving categorical propositions and classes. A categorical proposition contains a subject and predicate where the existential impact of the copula implies that the proposition refers to a class with at least one member, in contrast to the conditional form of hypothetical or materially implicative propositions, which are compounds of other propositions, e.g. "If P, then Q" (P and Q are both propositions), and whose existential impact is dependent upon further propositions where quantified existence is instantiated (existential instantiation), not on the hypothetical or materially implicative propositions themselves.
-
-Full contraposition is the simultaneous interchange and negation of the subject and predicate, and is valid only for the type "A" and type "O" propositions of Aristotelian logic, while it is conditionally valid for "E" type propositions if a change in quantity from universal to particular is made (partial contraposition). Since the valid obverse is obtained for all the four types (A, E, I, and O types) of traditional propositions, yielding propositions with the contradictory of the original predicate, (full) contraposition is obtained by converting the obverse of the original proposition. For "E" statements, partial contraposition can be obtained by additionally making a change in quantity. Because nothing is said in the definition of contraposition with regard to the predicate of the inferred proposition, it can be either the original subject, or its contradictory, resulting in two contrapositives which are the obverses of one another in the "A", "O", and "E" type propositions.
-
-For example, from an original 'A' type categorical proposition,
-
-All residents are voters,
-
-which presupposes that all classes have members and the existential import presumed in the form of categorical propositions, one can derive first by obversion the 'E' type proposition,
-
-No residents are non-voters.
-
-The contrapositive of the original proposition is then derived by conversion to another 'E' type proposition,
-
-No non-voters are residents.
-
-The process is completed by further obversion, resulting in the 'A' type proposition that is the obverted contrapositive of the original proposition,
-
-All non-voters are non-residents.
-
-The schema of contraposition:
-
-Notice that contraposition is a valid form of immediate inference only when applied to "A" and "O" propositions. It is not valid for "I" propositions, where the obverse is an "O" proposition which has no valid converse. The contraposition of the "E" proposition is valid only with limitations (per accidens). This is because the obverse of the "E" proposition is an "A" proposition which cannot be validly converted except by limitation, that is, contraposition plus a change in the quantity of the proposition from universal to particular.
-
-Also, notice that contraposition is a method of inference which may require the use of other rules of inference. The contrapositive is the product of the method of contraposition, with different outcomes depending upon whether the contraposition is full, or partial. The successive applications of conversion and obversion within the process of contraposition may be given by a variety of names.
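-
-The propositional core of contraposition is the rule of transposition, i.e. the equivalence of "If P, then Q" with "If not Q, then not P", and that equivalence can be verified mechanically by exhausting the truth table. A small illustrative sketch:
-```python
-from itertools import product
-
-def implies(p, q):
-    """Material conditional: p -> q is false only when p is true and q is false."""
-    return (not p) or q
-
-# Transposition: P -> Q is logically equivalent to (not Q) -> (not P).
-assert all(
-    implies(p, q) == implies(not q, not p)
-    for p, q in product([True, False], repeat=2)
-)
-print("P -> Q and not-Q -> not-P agree on all four valuations")
-```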
-
-The logical equivalence of a statement and its contrapositive, as defined in traditional class logic, is not one of the axioms of propositional logic. In traditional logic there is more than one contrapositive inferred from each original statement. In regard to the "A" proposition this is circumvented in the symbolism of modern logic by the rule of transposition, or the law of contraposition. In its technical usage within the field of philosophic logic, the term "contraposition" may be limited by logicians (e.g. Irving Copi, Susan Stebbing) to traditional logic and categorical propositions. In this sense, the inference is usually referred to as "transposition" when applied to hypothetical propositions or material implications.
diff --git a/wiki/wikipedia/3796.txt b/wiki/wikipedia/3796.txt
deleted file mode 100644
index 8c41691f941c79752eae6833263bdb70ecf4b0af..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3796.txt
+++ /dev/null
@@ -1,86 +0,0 @@
-In classical logic, disjunctive syllogism (historically known as modus tollendo ponens (MTP), Latin for "mode that affirms by denying") is a valid argument form which is a syllogism having a disjunctive statement for one of its premises.
-
-An example in English:
-
-The breach is a safety violation, or it is not subject to fines.
-
-The breach is not a safety violation.
-
-Therefore, it is not subject to fines.
-
-In propositional logic, disjunctive syllogism (also known as disjunction elimination and or elimination, or abbreviated ∨E) is a valid rule of inference. If we are told that at least one of two statements is true, and also told that it is not the former that is true, we can infer that it has to be the latter that is true. If P is true or Q is true and P is false, then Q is true. The reason this is called "disjunctive syllogism" is that, first, it is a syllogism, a three-step argument, and second, it contains a logical disjunction, which simply means an "or" statement. "P or Q" is a disjunction; P and Q are called the statement's disjuncts. The rule makes it possible to eliminate a disjunction from a logical proof. It is the rule that:
-$$
-\frac{P \lor Q, \neg P}{\therefore Q}
-$$
-
-where the rule is that whenever instances of "$P \lor Q$" and "$\neg P$" appear on lines of a proof, "$Q$" can be placed on a subsequent line.
-
-Disjunctive syllogism is closely related and similar to hypothetical syllogism, in that it is also a type of syllogism, and also the name of a rule of inference. It is also related to the law of noncontradiction, one of the three traditional laws of thought.
-
-The disjunctive syllogism rule may be written in sequent notation:
-$$
- P \lor Q, \lnot P \vdash Q
-$$
-
-where $\vdash$ is a metalogical symbol meaning that $Q$ is a syntactic consequence of $P \lor Q$ and $\lnot P$ in some logical system;
-
-and expressed as a truth-functional tautology or theorem of propositional logic:
-$$
- ((P \lor Q) \land \neg P) \to Q
-$$
-
-where $P$ and $Q$ are propositions expressed in some formal system.
-
-Here is an example:
-
-I will choose soup or I will choose salad.
-
-I will not choose soup.
-
-Therefore, I will choose salad.
-
-Here is another example:
-
-It is red or it is blue.
-
-It is not blue.
-
-Therefore, it is red.
-
-Note that the disjunctive syllogism works whether 'or' is considered 'exclusive' or 'inclusive' disjunction. See below for the definitions of these terms.
- -There are two kinds of logical disjunction: - -* inclusive means "and/or"—at least one of them is true, or maybe both. - -* exclusive ("xor") means exactly one must be true, but they cannot both be. - -The widely used English language concept of or is often ambiguous between these two meanings, but the difference is pivotal in evaluating disjunctive arguments. - -This argument: - -P or Q. - -Not P. - -Therefore, Q. - -is valid and indifferent between both meanings. However, only in the exclusive meaning is the following form valid: - -Either (only) P or (only) Q. - -P. - -Therefore, not Q. - -With the inclusive meaning you could draw no conclusion from the first two premises of that argument. See affirming a disjunct. - -Unlike modus ponens and modus ponendo tollens, with which it should not be confused, disjunctive syllogism is often not made an explicit rule or axiom of logical systems, as the above arguments can be proven with a (slightly devious) combination of reductio ad absurdum and disjunction elimination. - -Other forms of syllogism include: - -*hypothetical syllogism - -*categorical syllogism - -Disjunctive syllogism holds in classical propositional logic and intuitionistic logic, but not in some paraconsistent logics. diff --git a/wiki/wikipedia/3797.txt b/wiki/wikipedia/3797.txt deleted file mode 100644 index de230840831a3f4eb5980249f6bd3565db9c6e12..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3797.txt +++ /dev/null @@ -1,19 +0,0 @@ -Transaction Workflow Innovation Standards Team (Twist) is a not-for-profit industry standards group. It does not charge anything for involvement. The main goal of Twist is to create non-proprietary XML message standards for the financial services industry. To this end it provides a message format validation service. - -Its focus on financial transaction processing covers the aspects of: - -* Payments and collections (on invoices to suppliers and from customers) - -* Cash management (cash flow, position keeping across accounts and timing) - -* Working capital finance (short term investment of spare cash to make gains) - -* Wholesale financial market access (raising capital through stocks and bonds) - -The focus of these combined standards is to create and improve straight-through processing (STP). As any STP system of systems will invariably involve many market participants such standards are essential for success. Twist provides a check-list for ensuring STP is implemented in accordance with its standards. The word ‘standards’ is used rather than protocols as Twist endeavours to also define business process best practices. Such practices are typically implemented in a workflow, or business process management system (BPMS). Considerable emphasis is placed on how to handle exceptions so that as much traffic can be handled automatically with as little as possible going to manual intervention. One major standard introduced by TWIST is the Bank Services Billing Standard (BSB). - -The leading party in Twist is the Royal Dutch Shell Group’s corporate treasury department, giving the Twist standards more of a corporate finance perspective as opposed to a banking perspective. Participants in Twist come from across the broad supply chain of banking and its support industries. A common role for Twist is to draw on expertise in corporate treasury and channel it to software vendors so that they can create the products required. Geographically Twist has good traction in the United Kingdom, Singapore and the Netherlands. 
It has momentum in the foreign exchange (FX), fixed income (FI), and money markets (MM) asset classes.
-
-Related XML financial services standards include FIX, FpML and SWIFT XML. Where related financial services message standards exist, such as these XML standards or classic SWIFT, Twist aims to leverage them and interoperate with them rather than compete.
-
-Reuters is working with Twist, as of May 2005, to tie into its FX trading service (known as the RTFX service). Their aim is to deliver real-time trade confirmations directly into corporate treasurers' STP systems.
diff --git a/wiki/wikipedia/3798.txt b/wiki/wikipedia/3798.txt
deleted file mode 100644
index de230840831a3f4eb5980249f6bd3565db9c6e12..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3798.txt
+++ /dev/null
@@ -1,62 +0,0 @@
-In complex analysis, the Phragmén–Lindelöf principle (or method), first formulated by Lars Edvard Phragmén (1863–1937) and Ernst Leonard Lindelöf (1870–1946) in 1908, is a technique which employs an auxiliary, parameterized function to prove the boundedness of a holomorphic function $f$ (i.e., $|f(z)|<M$ for some constant $M$) on an unbounded region $S$ when an additional condition constraining the growth of $|f|$ on $S$ is given. The argument hinges on combining $f$ with an auxiliary function $h_\epsilon$, defined for each $\epsilon>0$, such that (i): $|fh_\epsilon|\leq M$ on the boundary $\partial S_{\mathrm{bdd}}$ of an appropriate bounded subregion $S_{\mathrm{bdd}}\subset S$; and (ii): the asymptotic behavior of $fh_\epsilon$ allows us to establish that $|fh_\epsilon|\leq M$ for $z\in S\setminus \overline{S_{\mathrm{bdd}}}$ (i.e., the unbounded part of $S$ outside the closure of the bounded subregion). This allows us to apply the maximum modulus principle to first conclude that $|fh_\epsilon|\leq M$ on $\overline{S_{\mathrm{bdd}}}$ and then extend the conclusion to all $z\in S$. Finally, we let $\epsilon\to 0$ so that $f(z)h_\epsilon(z)\to f(z)$ for every $z\in S$ in order to conclude that $|f|\leq M$ on $S$.
-
-In the literature of complex analysis, there are many examples of the Phragmén–Lindelöf principle applied to unbounded regions of differing types, and also a version of this principle may be applied in a similar fashion to subharmonic and superharmonic functions.
-
-To continue the example above, we can impose a growth condition on a holomorphic function $f$ that prevents it from "blowing up" and allows the Phragmén–Lindelöf principle to be applied. To this end, we now include the condition that
-$$
-|f(z)|<\exp\big(A\exp(c\cdot|\Re(z)|)\big)
-$$
-
-for some real constants $c<1$ and $A<\infty$, for all $z\in S$. It can then be shown that $|f(z)|\leq 1$ for all $z\in\partial S$ implies that $|f(z)|\leq 1$ in fact holds for all $z\in S$. Thus, we have the following proposition:
-
-Proposition. Let
-$$
-S=\Big\{z:\Im(z)\in \big(-\frac{\pi}{2},\frac{\pi}{2}\big)\Big\},\quad \overline{S}=\Big\{z:\Im(z)\in \big[-\frac{\pi}{2},\frac{\pi}{2}\big]\Big\}.
-$$
-
-Let $f$ be holomorphic on $S$ and continuous on $\overline{S}$, and suppose there exist real constants $c<1,\ A<\infty$ such that
-$$
-|f(z)|<\exp\big(A\exp(c\cdot|\Re(z)|)\big)
-$$
-
-for all $z\in S$ and $|f(z)|\leq 1$ for all $z\in\overline{S}\setminus S=\partial S$. Then $|f(z)|\leq 1$ for all $z\in S$.
-
-Note that this conclusion fails when $c=1$, precisely as the motivating counterexample in the previous section demonstrates. The proof of this statement employs a typical Phragmén–Lindelöf argument:
-
-Proof: (Sketch) We fix $b\in(c,1)$ and define for each $\epsilon>0$ the auxiliary function $h_\epsilon$ by $h_\epsilon(z)=e^{-\epsilon(e^{b z}+e^{-b z})}$.
Moreover, for a given $a>0$, we define $S_{a}$ to be the open rectangle in the complex plane enclosed within the vertices $\{a\pm i\pi/2,-a\pm i\pi/2\}$. Now, fix $\epsilon>0$ and consider the function $fh_\epsilon$. It can be shown that $|f(z)h_\epsilon(z)|\to 0$ as $|\Re(z)|\to\infty$. This allows us to find an $x_0$ such that $|f(z)h_\epsilon(z)|\leq1$ whenever $z\in\overline{S}$ and $|\Re(z)|\geq x_0$. Because $S_{x_0}$ is a bounded region, and $|f(z)h_\epsilon(z)|\leq 1$ for all $z\in\partial S_{x_0}$, the maximum modulus principle implies that $|f(z)h_\epsilon(z)|\leq 1$ for all $z\in \overline{S_{x_0}}$. Since $|f(z)h_\epsilon(z)|\leq1$ whenever $z\in S$ and $|\Re(z)|> x_0$, $|f(z)h_\epsilon(z)|\leq1$ in fact holds for all $z\in S$. Finally, because $fh_\epsilon\to f$ as $\epsilon\to 0$, we conclude that $|f(z)|\leq 1$ for all $z\in S$. - -A particularly useful statement proved using the Phragmén–Lindelöf principle bounds holomorphic functions on a sector of the complex plane if it is bounded on its boundary. This statement can be used to give a complex analytic proof of Hardy's uncertainty principle, which states that a function and its Fourier transform cannot both decay faster than exponentially. - -Proposition. Let $F$ be a function that is holomorphic in a sector -$$ - S = \left\{ z \big| \alpha < \arg z < \beta \right\} -$$ - -of central angle $\beta-\alpha=\pi/\lambda$, and continuous on its boundary. If -$$ -|F(z)| \leq M \qquad (1) -$$ - -for $z\in\partial S$ and some constant $M$, and -$$ -|F(z)| \leq e^{C |z|^\rho} \qquad (2) -$$ - -for all $z\in S$, where $\rho\in[0,\lambda)$ and $C>0$, then (1) holds also for all $z\in S$. - -* The condition (2) can be relaxed to -$$ -\liminf_{r \to \infty} \sup_{\alpha < \theta < \beta} \frac{\log|F(re^{i\theta})|}{r^\rho} = 0 \quad \text{for some} \quad 0 \leq \rho < \lambda~, -$$ - -with the same conclusion. - -In practice the point 0 is often transformed into the point ∞ of the Riemann sphere. This gives a version of the principle that applies to strips, for example bounded by two lines of constant real part in the complex plane. This special case is sometimes known as Lindelöf's theorem. - -Carlson's theorem is an application of the principle to functions bounded on the imaginary axis. diff --git a/wiki/wikipedia/3799.txt b/wiki/wikipedia/3799.txt deleted file mode 100644 index 7dc75d514a545103648004c12acbc98eb0085f2d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3799.txt +++ /dev/null @@ -1,139 +0,0 @@ -A minimum spanning tree (MST) or minimum weight spanning tree is a subset of the edges of a connected, edge-weighted undirected graph that connects all the vertices together, without any cycles and with the minimum possible total edge weight. That is, it is a spanning tree whose sum of edge weights is as small as possible. More generally, any edge-weighted undirected graph (not necessarily connected) has a minimum spanning forest, which is a union of the minimum spanning trees for its connected components. - -There are many use cases for minimum spanning trees. One example is a telecommunications company trying to lay cable in a new neighborhood. If it is constrained to bury the cable only along certain paths (e.g. roads), then there would be a graph containing the points (e.g. houses) connected by those paths. Some of the paths might be more expensive, because they are longer, or require the cable to be buried deeper; these paths would be represented by edges with larger weights.
Currency is an acceptable unit for edge weight – there is no requirement for edge lengths to obey normal rules of geometry such as the triangle inequality. A spanning tree for that graph would be a subset of those paths that has no cycles but still connects every house; there might be several spanning trees possible. A minimum spanning tree would be one with the lowest total cost, representing the least expensive path for laying the cable. - -If there are n vertices in the graph, then each spanning tree has n − 1 edges. - -There may be several minimum spanning trees of the same weight; in particular, if all the edge weights of a given graph are the same, then every spanning tree of that graph is minimum. - -If each edge has a distinct weight then there will be only one, unique minimum spanning tree. This is true in many realistic situations, such as the telecommunications company example above, where it's unlikely any two paths have exactly the same cost. This generalizes to spanning forests as well. - -Proof: - -# Assume the contrary, that there are two different MSTs A and B. - -# Since A and B differ despite containing the same nodes, there is at least one edge that belongs to one but not the other. Among such edges, let e1 be the one with least weight; this choice is unique because the edge weights are all distinct. Without loss of generality, assume e1 is in A. - -# As B is an MST, {e1} $\cup$ B must contain a cycle C with e1. - -# As a tree, A contains no cycles, therefore C must have an edge e2 that is not in A. - -# Since e1 was chosen as the unique lowest-weight edge among those belonging to exactly one of A and B, the weight of e2 must be greater than the weight of e1. - -# As e1 and e2 are part of the cycle C, replacing e2 with e1 in B therefore yields a spanning tree with a smaller weight. - -# This contradicts the assumption that B is an MST. - -More generally, if the edge weights are not all distinct then only the (multi-)set of weights in minimum spanning trees is certain to be unique; it is the same for all minimum spanning trees. - -If the weights are positive, then a minimum spanning tree is in fact a minimum-cost subgraph connecting all vertices, since subgraphs containing cycles necessarily have more total weight. - -For any cycle C in the graph, if the weight of an edge e of C is larger than the individual weights of all other edges of C, then this edge cannot belong to an MST. - -Proof: Assume the contrary, i.e. that e belongs to an MST T1. Then deleting e will break T1 into two subtrees with the two ends of e in different subtrees. The remainder of C reconnects the subtrees, hence there is an edge f of C with ends in different subtrees, i.e., it reconnects the subtrees into a tree T2 with weight less than that of T1, because the weight of f is less than the weight of e. - -For any cut C of the graph, if the weight of an edge e in the cut-set of C is strictly smaller than the weights of all other edges of the cut-set of C, then this edge belongs to all MSTs of the graph. - -Proof: Assume that there is an MST T that does not contain e. Adding e to T will produce a cycle, that crosses the cut once at e and crosses back at another edge e' . Deleting e' we get a spanning tree T∖{e'}∪{e} of strictly smaller weight than T. This contradicts the assumption that T was a MST. - -By a similar argument, if more than one edge is of minimum weight across a cut, then each such edge is contained in some minimum spanning tree. 
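These cut and cycle properties are exactly what make greedy constructions correct. As a concrete illustration, here is a minimal sketch of Kruskal's algorithm in Python (the function name and edge format are illustrative choices, not from the original article): it repeatedly accepts the lightest edge joining two different components, which by the cut property belongs to some minimum spanning tree.
```python
def kruskal(num_vertices, edges):
    """Minimal Kruskal sketch. edges: iterable of (weight, u, v) tuples
    with vertices numbered 0 .. num_vertices - 1."""
    parent = list(range(num_vertices))

    def find(x):
        # union-find representative lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):          # lightest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                       # u and v lie in different components,
            parent[ru] = rv                # so this edge crosses the cut between them
            mst.append((w, u, v))
    return mst

# A 4-cycle with a heavy chord: the three lightest cycle edges form the MST.
print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 0), (5, 0, 2)]))
# [(1, 0, 1), (2, 1, 2), (3, 2, 3)]
```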
- -If the minimum cost edge e of a graph is unique, then this edge is included in any MST. - -Proof: if e was not included in the MST, removing any of the (larger cost) edges in the cycle formed after adding e to the MST would yield a spanning tree of smaller weight. - -If T is a tree of MST edges, then we can contract T into a single vertex while maintaining the invariant that the MST of the contracted graph plus T gives the MST for the graph before contraction. This contraction is the basis of Chazelle's algorithm, the fastest known deterministic comparison-based algorithm, which combines it with the soft heap, an approximate priority queue. Its running time is O(m α(m,n)), where α is the classical functional inverse of the Ackermann function. The function α grows extremely slowly, so that for all practical purposes it may be considered a constant no greater than 4; thus Chazelle's algorithm takes very close to linear time. - -If the graph is dense (i.e. m/n ≥ log log log n), then a deterministic algorithm by Fredman and Tarjan finds the MST in time O(m). The algorithm executes a number of phases. Each phase executes Prim's algorithm many times, each for a limited number of steps. The run-time of each phase is O(m+n). If the number of vertices before a phase is $n'$, the number of vertices remaining after a phase is at most $n' / 2^{m/n'}$. Hence, at most $\log^*{n}$ phases are needed, which gives a linear run-time for dense graphs. The following is a simplified description of the provably optimal algorithm of Pettie and Ramachandran. - -# Let $r = \log \log \log n$, where n is the number of vertices. Find all optimal decision trees on r vertices. This can be done in time O(n) (see Decision trees above). - -# Partition the graph into components with at most r vertices in each component. This partition uses a soft heap, which "corrupts" a small number of the edges of the graph. - -# Use the optimal decision trees to find an MST for the uncorrupted subgraph within each component. - -# Contract each connected component spanned by the MSTs to a single vertex, and apply any algorithm which works on dense graphs in time O(m) to the contraction of the uncorrupted subgraph. - -# Add back the corrupted edges to the resulting forest to form a subgraph guaranteed to contain the minimum spanning tree, and smaller by a constant factor than the starting graph. Apply the optimal algorithm recursively to this graph. - -The runtime of all steps in the algorithm is O(m), except for the step of using the decision trees. The runtime of this step is unknown, but it has been proved that it is optimal: no algorithm can do better than the optimal decision tree. Thus, this algorithm has the peculiar property that it is provably optimal although its runtime complexity is unknown. - -Research has also considered parallel algorithms for the minimum spanning tree problem. - -With a linear number of processors it is possible to solve the problem in $O(\log n)$ time. - -Bader and Cong demonstrated an algorithm that can compute MSTs 5 times faster on 8 processors than an optimized sequential algorithm. - -Other specialized algorithms have been designed for computing minimum spanning trees of a graph so large that most of it must be stored on disk at all times. These external storage algorithms, for example as described in "Engineering an External Memory Minimum Spanning Tree Algorithm" by Roman Dementiev et al., can operate, by the authors' claims, as little as 2 to 5 times slower than a traditional in-memory algorithm. They rely on efficient external storage sorting algorithms and on graph contraction techniques for reducing the graph's size efficiently. - -The problem can also be approached in a distributed manner.
If each node is considered a computer and no node knows anything except its own connected links, one can still calculate the distributed minimum spanning tree. - -Alan M. Frieze showed that given a complete graph on n vertices, with edge weights that are independent identically distributed random variables with distribution function $F$ satisfying $F'(0) > 0$, as n approaches +∞ the expected weight of the MST approaches $\zeta(3)/F'(0)$, where $\zeta$ is the Riemann zeta function (more specifically, $\zeta(3)$ is Apéry's constant). Frieze and Steele also proved convergence in probability. Svante Janson proved a central limit theorem for the weight of the MST. - -For uniform random weights in $[0,1]$, the exact expected size of the minimum spanning tree has been computed for small complete graphs. - -Minimum spanning trees have direct applications in the design of networks, including computer networks, telecommunications networks, transportation networks, water supply networks, and electrical grids (which they were first invented for, as mentioned above). They are invoked as subroutines in algorithms for other problems, including the Christofides algorithm for approximating the traveling salesman problem, approximating the multi-terminal minimum cut problem (which is equivalent in the single-terminal case to the maximum flow problem), and approximating the minimum-cost weighted perfect matching. - -Other practical applications based on minimal spanning trees include: - -* Taxonomy. - -* Cluster analysis: clustering points in the plane, single-linkage clustering (a method of hierarchical clustering), graph-theoretic clustering, and clustering gene expression data. - -* Constructing trees for broadcasting in computer networks. - -* Image registration and segmentation – see minimum spanning tree-based segmentation. - -* Curvilinear feature extraction in computer vision. - -* Handwriting recognition of mathematical expressions. - -* Circuit design: implementing efficient multiple constant multiplications, as used in finite impulse response filters. - -* Regionalisation of socio-geographic areas, the grouping of areas into homogeneous, contiguous regions. - -* Comparing ecotoxicology data. - -* Topological observability in power systems. - -* Measuring homogeneity of two-dimensional materials. - -* Minimax process control. - -* Minimum spanning trees can also be used to describe financial markets. A correlation matrix can be created by calculating a coefficient of correlation between any two stocks. This matrix can be represented topologically as a complex network and a minimum spanning tree can be constructed to visualize relationships. - -The problem of finding the Steiner tree of a subset of the vertices, that is, the minimum tree that spans the given subset, is known to be NP-complete. - -A related problem is the k-minimum spanning tree (k-MST), which is the tree that spans some subset of k vertices in the graph with minimum weight. - -A set of k-smallest spanning trees is a subset of k spanning trees (out of all possible spanning trees) such that no spanning tree outside the subset has smaller weight. (Note that this problem is unrelated to the k-minimum spanning tree.) - -The Euclidean minimum spanning tree is a spanning tree of a graph with edge weights corresponding to the Euclidean distance between vertices which are points in the plane (or space).
- -The rectilinear minimum spanning tree is a spanning tree of a graph with edge weights corresponding to the rectilinear distance between vertices which are points in the plane (or space). - -In the distributed model, where each node is considered a computer and no node knows anything except its own connected links, one can consider the distributed minimum spanning tree problem. The mathematical definition of the problem is the same but there are different approaches for a solution. - -The capacitated minimum spanning tree is a tree that has a marked node (origin, or root) and each of the subtrees attached to the node contains no more than c nodes; c is called the tree capacity. Solving CMST optimally is NP-hard, but good heuristics such as Esau-Williams and Sharma produce solutions close to optimal in polynomial time. - -The degree constrained minimum spanning tree is a minimum spanning tree in which each vertex is connected to no more than d other vertices, for some given number d. The case d = 2 is a special case of the traveling salesman problem, so the degree constrained minimum spanning tree is NP-hard in general. - -For directed graphs, the minimum spanning tree problem is called the Arborescence problem and can be solved in quadratic time using the Chu–Liu/Edmonds algorithm. - -A maximum spanning tree is a spanning tree with weight greater than or equal to the weight of every other spanning tree. Such a tree can be found with algorithms such as Prim's or Kruskal's after multiplying the edge weights by -1 and solving the MST problem on the new graph. A path in the maximum spanning tree is the widest path in the graph between its two endpoints: among all possible paths, it maximizes the weight of the minimum-weight edge. - -Maximum spanning trees find applications in parsing algorithms for natural languages and in training algorithms for conditional random fields. - -The dynamic MST problem concerns the update of a previously computed MST after an edge weight change in the original graph or the insertion/deletion of a vertex. - -The minimum labeling spanning tree problem is to find a spanning tree with the fewest types of labels if each edge in a graph is associated with a label from a finite label set instead of a weight. - -A bottleneck edge is the highest weighted edge in a spanning tree. A spanning tree is a minimum bottleneck spanning tree (or MBST) if the graph does not contain a spanning tree with a smaller bottleneck edge weight. An MST is necessarily an MBST (provable by the cut property), but an MBST is not necessarily an MST. diff --git a/wiki/wikipedia/38.txt b/wiki/wikipedia/38.txt deleted file mode 100644 index a0f9834b2b3251d4fa54eae73f2d8cca4631be2f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/38.txt +++ /dev/null @@ -1,121 +0,0 @@ -In mathematics, Pythagorean addition is the following binary operation on the real numbers: - -a \oplus b = \sqrt{a^2+b^2}. - -The name recalls the Pythagorean theorem, which states that the length of the hypotenuse of a right triangle is a ⊕ b, where a and b are the lengths of the other sides. In its applications to signal processing and propagation of measurement uncertainty, the same operation is also called addition in quadrature. - -This operation provides a simple notation and terminology when the summands are complicated; for example, the energy-momentum relation in physics becomes - -E = mc^2 \oplus pc.
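As a quick illustration (a minimal sketch using Python's standard library, anticipating the hypot function discussed next), the operation and its n-fold form can be computed directly:
```python
import math
from functools import reduce

def pythagorean_add(a, b):
    # a ⊕ b = sqrt(a**2 + b**2); math.hypot computes exactly this quantity
    return math.hypot(a, b)

print(pythagorean_add(3, 4))               # 5.0
# Associativity lets an n-fold quadrature sum be computed by folding:
print(reduce(pythagorean_add, [6, 3, 2]))  # 7.0 (up to rounding)
```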
- -It is implemented in many programming libraries as the hypot function, in a way designed to avoid errors arising due to limited-precision calculations performed on computers. - -If measurements X, Y, Z, ... have independent errors ΔX, ΔY, ΔZ, ..., respectively, the quadrature method gives the overall error, - -\varDelta_o = \sqrt{{\varDelta_X}^2 + {\varDelta_Y}^2 + {\varDelta_Z}^2 + \cdots} - -whereas the upper limit of the overall error is - -\varDelta_u = \varDelta_X + \varDelta_Y + \varDelta_Z + \cdots - -if the errors were not independent. - -This is equivalent to finding the magnitude of the resultant of adding orthogonal vectors, each with magnitude equal to the uncertainty, using the Pythagorean theorem. - -In signal processing, addition in quadrature is used to find the overall noise from independent sources of noise. For example, if an image sensor gives 6 digital numbers of shot noise, 3 of dark current noise and 2 of Johnson–Nyquist noise under a specific condition, the overall noise, - -\sigma = 6 \oplus 3 \oplus 2 = \sqrt{6^2 + 3^2 + 2^2} = 7 - -digital numbers, showing the dominance of larger sources of noise. - -The operation ⊕ is associative and commutative, and - -\sqrt{x_1^2 + x_2^2 + \cdots + x_n^2} = x_1 \oplus x_2 \oplus \cdots \oplus x_n. - -This is enough to form the real numbers into a commutative semigroup. However, ⊕ is not a group operation for the following reasons. - -The only element which could potentially act as an identity element is 0, since an identity e must satisfy e⊕e = e. This yields the equation $\sqrt{2}e=e$, but if e is nonzero that implies $\sqrt{2}=1$, so e could only be zero. Unfortunately 0 does not work as an identity element after all, since 0⊕(−1) = 1. This does indicate, however, that if the operation ⊕ is restricted to nonnegative real numbers, then 0 does act as an identity. Consequently, the operation ⊕ acting on the nonnegative real numbers forms a commutative monoid. - -Hypot is a mathematical function defined to calculate the length of the hypotenuse of a right-angle triangle. It was designed to avoid errors arising due to limited-precision calculations performed on computers. Calculating the length of the hypotenuse of a triangle is possible using the square-root function on the sum of two squares, but hypot(x, y) avoids problems that occur when squaring very large or very small numbers. If calculated using the natural formula, - -r = \sqrt{x^2 + y^2}, - -the squares of very large or small values of x and y may exceed the range of machine precision when calculated on a computer, leading to an inaccurate result caused by arithmetic underflow and/or arithmetic overflow. The hypot function was designed to calculate the result without causing this problem. - -The hypot function is often used together with the atan2 function to convert from Cartesian coordinates to polar coordinates: - -r = \operatorname{hypot}(x, y), \quad \theta = \operatorname{atan2}(y, x). - -If either input is infinite, the result is infinite, i.e. - -\operatorname{hypot}(x, \pm\infty) = +\infty. - -Because this is true for all possible values of x, including infinity, the IEEE 754 floating-point standard requires that this definition also applies if x is not a number (NaN). - -The difficulty with the naive implementation is that $x^2$ or $y^2$ may overflow or underflow, unless the intermediate result is computed with extended precision.
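A minimal sketch of this failure mode (the values are only illustrative):
```python
import math

def naive_hypot(x, y):
    # x * x overflows to inf here, even though the true result 5e200
    # is comfortably within double-precision range
    return math.sqrt(x * x + y * y)

x, y = 3e200, 4e200
print(naive_hypot(x, y))   # inf
print(math.hypot(x, y))    # 5e+200
```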
A common implementation technique is to exchange the values, if necessary, so that |x| ≥ |y|, and then use the equivalent form - -\begin{align} - -r &= \sqrt{x^2 + y^2} \\ - -&= \sqrt{x^2 \left( 1 + \left(\tfrac{y}{x}\right)^2\right)} \\ - -&= |x| \sqrt{1 + \left(\tfrac{y}{x}\right)^2} - -\left(= |x| + \frac{y^2/|x|}{1 + \sqrt{1 + \left(\tfrac{y}{x}\right)^2 }}\right). - -\end{align} - -The computation of y/x cannot overflow unless both x and y are 0. If y/x underflows, the final result is equal to |x|, which is correct within the precision of the calculation. The square root is computed of a value between 1 and 2. Finally, the multiplication by |x| cannot underflow, and overflows only when the result is too large to represent. - -This implementation has the downside that it requires an additional floating-point division, which can double the cost of the naive implementation, as multiplication and addition are typically far faster than division and square root. - -More complex implementations avoid this by dividing the inputs into more cases: - -* x ≫ y: hypot(x, y) = |x|, to within machine precision. - -* $x^2$ overflows: Multiply both x and y by a small scaling factor (e.g. $2^{-64}$ for IEEE single precision), use the naive algorithm which will now not overflow, and multiply the result by the (large) inverse (e.g. $2^{64}$). - -* $y^2$ underflows: As above, but reverse the scaling factors to scale up the intermediate values. - -* Otherwise: The naive algorithm is safe to use. - -Additional techniques allow the result to be computed more accurately, e.g. to less than one ulp. - -The function is present in several programming languages: - -*C99 - -*CSS - -*C++11 - -*D (programming language) - -*Fortran 2008 - -*Julia (programming language) - -*Swift (programming language) - -*Python (programming language) - -*Apple's PowerPC Numerics - -*MATLAB - -*Pascal - -*PHP - -*Java (programming language) (since version 1.5) - -*Kotlin - -*Ruby - -*Go - -*Rust - -*JavaScript (since ES2015) - -*Some C90 and C++ libraries have provided a hypot function. - -*Scala diff --git a/wiki/wikipedia/380.txt b/wiki/wikipedia/380.txt deleted file mode 100644 index 7e0a134de484302266db41642ca3c091d56a11be..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/380.txt +++ /dev/null @@ -1,113 +0,0 @@ -In mathematics, specifically in the field of finite group theory, the Sylow theorems are a collection of theorems named after the Norwegian mathematician Peter Ludwig Sylow that give detailed information about the number of subgroups of fixed order that a given finite group contains. The Sylow theorems form a fundamental part of finite group theory and have very important applications in the classification of finite simple groups. - -For a prime number $p$, a Sylow p-subgroup (sometimes p-Sylow subgroup) of a group $G$ is a maximal $p$-subgroup of $G$, i.e., a subgroup of $G$ that is a p-group (meaning its cardinality is a power of $p,$ or equivalently, the order of every group element is a power of $p$) that is not a proper subgroup of any other $p$-subgroup of $G$. The set of all Sylow $p$-subgroups for a given prime $p$ is sometimes written $\text{Syl}_p(G)$. - -The Sylow theorems assert a partial converse to Lagrange's theorem. Lagrange's theorem states that for any finite group $G$ the order (number of elements) of every subgroup of $G$ divides the order of $G$.
The Sylow theorems state that for every prime factor $p$ of the order of a finite group $G$, there exists a Sylow $p$-subgroup of $G$ of order $p^n$, the highest power of $p$ that divides the order of $G$. Moreover, every subgroup of order $p^n$ is a Sylow $p$-subgroup of $G$, and the Sylow $p$-subgroups of a group (for a given prime $p$) are conjugate to each other. Furthermore, the number of Sylow $p$-subgroups of a group for a given prime $p$ is congruent to $1 \text{ mod } p$. - -The Sylow theorems are powerful statements about the structure of groups in general, but are also powerful in applications of finite group theory. This is because they give a method for using the prime decomposition of the cardinality of a finite group $G$ to give statements about the structure of its subgroups: essentially, they give a technique to transport basic number-theoretic information about a group to its group structure. From this observation, classifying finite groups becomes a game of finding which combinations/constructions of groups of smaller order can be applied to construct a group. For example, a typical application of these theorems is in the classification of finite groups of some fixed cardinality, e.g. $|G| = 60$. - -Collections of subgroups that are each maximal in one sense or another are common in group theory. The surprising result here is that in the case of $\operatorname{Syl}_p(G)$, all members are actually isomorphic to each other and have the largest possible order: if $|G|=p^nm$ with $n > 0$ where p does not divide m, then every Sylow p-subgroup P has order $|P| = p^n$. That is, P is a p-group and $\text{gcd}(|G:P|, p) = 1$. These properties can be exploited to further analyze the structure of G. - -The following theorems were first proposed and proven by Ludwig Sylow in 1872, and published in Mathematische Annalen. - -Theorem 1: For every prime factor p with multiplicity n of the order of a finite group G, there exists a Sylow p-subgroup of G, of order $p^n$. - -The following weaker version of Theorem 1 was first proved by Augustin-Louis Cauchy, and is known as Cauchy's theorem: given a finite group G and a prime number p dividing the order of G, there exists an element (and thus a cyclic subgroup generated by this element) of order p in G. - -Theorem 2: Given a finite group G and a prime number p, all Sylow p-subgroups of G are conjugate to each other. That is, if H and K are Sylow p-subgroups of G, then there exists an element $g \in G$ with $g^{-1}Hg = K$. - -Theorem 3: Let p be a prime factor with multiplicity n of the order of a finite group G, so that the order of G can be written as $p^nm$, where $n > 0$ and p does not divide m. Let $n_p$ be the number of Sylow p-subgroups of G. Then the following hold: - -* $n_p$ divides m, which is the index of the Sylow p-subgroup in G. - -* $n_p \equiv 1 \bmod p$ - -* $n_p = |G:N_G(P)|$, where P is any Sylow p-subgroup of G and $N_G$ denotes the normalizer. - -The Sylow theorems imply that for a prime number $p$ every Sylow $p$-subgroup is of the same order, $p^n$. Conversely, if a subgroup has order $p^n$, then it is a Sylow $p$-subgroup, and so is isomorphic to every other Sylow $p$-subgroup. Due to the maximality condition, if $H$ is any $p$-subgroup of $G$, then $H$ is a subgroup of a $p$-subgroup of order $p^n$. - -A very important consequence of Theorem 2 is that the condition $n_p = 1$ is equivalent to saying that the Sylow $p$-subgroup of $G$ is a normal subgroup.
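(Indeed, conjugation by any element of $G$ permutes the Sylow $p$-subgroups, so if $n_p = 1$ the unique Sylow $p$-subgroup is mapped to itself by every conjugation, i.e. it is normal; conversely, a normal Sylow $p$-subgroup coincides with all of its conjugates, and by Theorem 2 these exhaust the Sylow $p$-subgroups, so $n_p = 1$.)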
However, there are groups that have normal subgroups but no normal Sylow subgroups, such as $S_4$. - -There is an analogue of the Sylow theorems for infinite groups. One defines a Sylow p-subgroup in an infinite group to be a p-subgroup (that is, every element in it has p-power order) that is maximal for inclusion among all p-subgroups in the group. Such subgroups exist by Zorn's lemma. Let $\operatorname{Cl}(K)$ denote the set of conjugates of a subgroup $K \subset G$. If $K$ is a Sylow $p$-subgroup of $G$ and $|\operatorname{Cl}(K)|$ is finite, then every Sylow p-subgroup is conjugate to K, and $n_p \equiv 1 \bmod p$. - -A simple illustration of Sylow subgroups and the Sylow theorems is the dihedral group of the n-gon, D2n. For n odd, $2 = 2^1$ is the highest power of 2 dividing the order, and thus subgroups of order 2 are Sylow subgroups. These are the groups generated by a reflection, of which there are n, and they are all conjugate under rotations; geometrically the axes of symmetry pass through a vertex and a side. - -By contrast, if n is even, then 4 divides the order of the group, and the subgroups of order 2 are no longer Sylow subgroups, and in fact they fall into two conjugacy classes, geometrically according to whether they pass through two vertices or two faces. These are related by an outer automorphism, which can be represented by rotation through π/n, half the minimal rotation in the dihedral group. - -Another example is given by the Sylow p-subgroups of GL2(Fq), where p and q are primes ≥ 3 and q ≡ 1 (mod p), which are all abelian. The order of GL2(Fq) is $(q^2 - 1)(q^2 - q) = q(q + 1)(q - 1)^2$. Since $q = p^n m + 1$, the order of GL2(Fq) is $p^{2n} m'$. Thus by Theorem 1, the order of the Sylow p-subgroups is $p^{2n}$. - -One such subgroup P is the set of diagonal matrices $\begin{bmatrix}x^{im} & 0 \\0 & x^{jm} \end{bmatrix}$, where x is any primitive root of Fq. Since the order of the multiplicative group of Fq is q − 1, its primitive roots have order q − 1, which implies that $x^{(q-1)/p^n} = x^m$ and all its powers have an order which is a power of p. So, P is a subgroup where all its elements have orders which are powers of p. There are $p^n$ choices for each diagonal entry, making $|P| = p^{2n}$. This means P is a Sylow p-subgroup, which is abelian, as all diagonal matrices commute, and because Theorem 2 states that all Sylow p-subgroups are conjugate to each other, the Sylow p-subgroups of GL2(Fq) are all abelian. - -Since Sylow's theorem ensures the existence of p-subgroups of a finite group, it's worthwhile to study groups of prime power order more closely. Most of the examples use Sylow's theorem to prove that a group of a particular order is not simple. For groups of small order, the congruence condition of Sylow's theorem is often sufficient to force the existence of a normal subgroup. - -;Example-1: Groups of order pq, p and q primes with p < q. - -;Example-2: Group of order 30, groups of order 20, groups of order $p^2q$, p and q distinct primes are some of the applications. - -;Example-3: (Groups of order 60): If the order |G| = 60 and G has more than one Sylow 5-subgroup, then G is simple. - -Some non-prime numbers n are such that every group of order n is cyclic. One can show that n = 15 is such a number using the Sylow theorems: Let G be a group of order 15 = 3 · 5 and n3 be the number of Sylow 3-subgroups. Then n3 $\mid$ 5 and n3 ≡ 1 (mod 3). The only value satisfying these constraints is 1; therefore, there is only one subgroup of order 3, and it must be normal (since it has no distinct conjugates).
Similarly, n5 must divide 3, and n5 must equal 1 (mod 5); thus G must also have a single normal subgroup of order 5. Since 3 and 5 are coprime, the intersection of these two subgroups is trivial, and so G must be the internal direct product of groups of order 3 and 5, that is the cyclic group of order 15. Thus, there is only one group of order 15 (up to isomorphism). - -A more complex example involves the order of the smallest simple group that is not cyclic. Burnside's $p^a q^b$ theorem states that if the order of a group is the product of one or two prime powers, then it is solvable, and so the group is not simple, or is of prime order and is cyclic. This rules out every group up to order 30 (= 2 · 3 · 5). - -If G is simple, and |G| = 30, then n3 must divide 10 ( = 2 · 5), and n3 must equal 1 (mod 3). Therefore, n3 = 10, since neither 4 nor 7 divides 10, and if n3 = 1 then, as above, G would have a normal subgroup of order 3, and could not be simple. G then has 10 distinct cyclic subgroups of order 3, each of which has 2 elements of order 3 (plus the identity). This means G has at least 20 distinct elements of order 3. - -As well, n5 = 6, since n5 must divide 6 ( = 2 · 3), and n5 must equal 1 (mod 5). So G also has 24 distinct elements of order 5. But the order of G is only 30, so a simple group of order 30 cannot exist. - -Next, suppose |G| = 42 = 2 · 3 · 7. Here n7 must divide 6 ( = 2 · 3) and n7 must equal 1 (mod 7), so n7 = 1. So, as before, G cannot be simple. - -On the other hand, for |G| = 60 = $2^2 \cdot 3 \cdot 5$, then n3 = 10 and n5 = 6 is perfectly possible. And in fact, the smallest simple non-cyclic group is $A_5$, the alternating group on 5 elements. It has order 60, and has 24 cyclic permutations of order 5, and 20 of order 3. - -Part of Wilson's theorem states that -$$ -(p-1)!\ \equiv\ -1 \pmod p -$$ - -for every prime p. One may easily prove this theorem by Sylow's third theorem. Indeed, observe that the number np of Sylow p-subgroups in the symmetric group $S_p$ is $(p - 2)!$. On the other hand, np ≡ 1 (mod p). Hence, $(p - 2)! \equiv 1 \pmod p$. So, $(p - 1)! \equiv -1 \pmod p$. - -Frattini's argument shows that a Sylow subgroup of a normal subgroup provides a factorization of a finite group. A slight generalization known as Burnside's fusion theorem states that if G is a finite group with Sylow p-subgroup P and two subsets A and B normalized by P, then A and B are G-conjugate if and only if they are $N_G(P)$-conjugate. The proof is a simple application of Sylow's theorem: If $B = A^g$, then the normalizer of B contains not only P but also $P^g$ (since $P^g$ is contained in the normalizer of $A^g$). By Sylow's theorem P and $P^g$ are conjugate not only in G, but in the normalizer of B. Hence $gh^{-1}$ normalizes P for some h that normalizes B, and then $A^{gh^{-1}} = B^{h^{-1}} = B$, so that A and B are $N_G(P)$-conjugate. Burnside's fusion theorem can be used to give a more powerful factorization called a semidirect product: if G is a finite group whose Sylow p-subgroup P is contained in the center of its normalizer, then G has a normal subgroup K of order coprime to p, G = PK and P∩K = {1}, that is, G is p-nilpotent. - -Less trivial applications of the Sylow theorems include the focal subgroup theorem, which studies the control a Sylow p-subgroup of the derived subgroup has on the structure of the entire group.
This control is exploited at several stages of the classification of finite simple groups, and for instance defines the case divisions used in the Alperin–Brauer–Gorenstein theorem classifying finite simple groups whose Sylow 2-subgroup is a quasi-dihedral group. These rely on J. L. Alperin's strengthening of the conjugacy portion of Sylow's theorem to control what sorts of elements are used in the conjugation. - -The Sylow theorems have been proved in a number of ways, and the history of the proofs themselves is the subject of many papers, including Waterhouse, Scharlau, Casadio and Zappa, Gow, and to some extent Meo. - -One proof of the Sylow theorems exploits the notion of group action in various creative ways. The group G acts on itself or on the set of its p-subgroups in various ways, and each such action can be exploited to prove one of the Sylow theorems. The following proofs are based on combinatorial arguments of Wielandt. In the following, we use $a \mid b$ as notation for "a divides b" and $a \nmid b$ for the negation of this statement. - -Theorem 1: A finite group G whose order $|G|$ is divisible by a prime power $p^k$ has a subgroup of order $p^k$. - -Proof: Let $|G| = p^km = p^{k+r}u$ such that $p \nmid u$, and let Ω denote the set of subsets of G of size $p^k$. G acts on Ω by left multiplication: for $g \in G$ and $\omega \in \Omega$, $g\cdot\omega = \{gx : x \in \omega\}$. For a given set $\omega \in \Omega$, write $G_\omega$ for its stabilizer subgroup $\{g \in G : g\cdot\omega = \omega\}$ and $G\omega$ for its orbit $\{g\cdot\omega : g \in G\}$ in Ω. - -The proof will show the existence of some $\omega \in \Omega$ for which $G_\omega$ has $p^k$ elements, providing the desired subgroup. This is the maximal possible size of a stabilizer subgroup $G_\omega$, since for any fixed element $\alpha \in \omega \subseteq G$, the right coset $G_\omega\alpha$ is contained in ω; therefore, $|G_\omega| = |G_\omega\alpha| \leq |\omega| = p^k$. - -By the orbit-stabilizer theorem we have $|G_\omega| |G\omega| = |G|$ for each $\omega \in \Omega$, and therefore using the additive p-adic valuation $\nu_p$, which counts the number of factors p, one has $\nu_p(|G_\omega|) + \nu_p(|G\omega|) = \nu_p(|G|) = k + r$. This means that for those ω with $|G_\omega| = p^k$, the ones we are looking for, one has $\nu_p(|G\omega|) = r$, while for any other ω one has $\nu_p(|G\omega|) > r$ (as $0 < |G_\omega| < p^k$ implies $\nu_p(|G_\omega|) < k$). Since $|\Omega|$ is the sum of $|G\omega|$ over all distinct orbits $G\omega$, one can show the existence of ω of the former type by showing that $\nu_p(|\Omega|) = r$ (if none existed, that valuation would exceed r). This is an instance of Kummer's theorem (since in base p notation the number $|G|$ ends with precisely $k + r$ digits zero, subtracting $p^k$ from it involves a carry in r places), and can also be shown by a simple computation: -$$ -|\Omega | ={p^km \choose p^k} = \prod_{j=0}^{p^k - 1} \frac{p^k m - j}{p^k - j} = m\prod_{j=1}^{p^{k} - 1} \frac{p^{k - \nu_p(j)} m - j/p^{\nu_p(j)}}{p^{k - \nu_p(j)} - j/p^{\nu_p(j)}} -$$ - -and no power of p remains in any of the factors inside the product on the right. Hence $\nu_p(|\Omega|) = \nu_p(m) = r$, completing the proof. - -It may be noted that conversely every subgroup H of order $p^k$ gives rise to sets $\omega \in \Omega$ for which $G_\omega = H$, namely any one of the m distinct cosets Hg. - -Lemma: Let H be a finite p-group, let Ω be a finite set acted on by H, and let $\Omega_0$ denote the set of points of Ω that are fixed under the action of H. Then $|\Omega| \equiv |\Omega_0| \pmod p$. - -Theorem 2: If H is a p-subgroup of G and P is a Sylow p-subgroup of G, then there exists an element g in G such that $g^{-1}Hg \leq P$. In particular, all Sylow p-subgroups of G are conjugate to each other (and therefore isomorphic), that is, if H and K are Sylow p-subgroups of G, then there exists an element g in G with $g^{-1}Hg = K$.
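Proof (sketch, via the Lemma): Let H act by left multiplication on the set $\Omega = G/P$ of left cosets of P. Since $|\Omega| = [G : P]$ is coprime to p, the Lemma gives $|\Omega_0| \equiv |\Omega| \not\equiv 0 \pmod p$, so some coset $gP$ is fixed by H, i.e. $HgP = gP$. Hence $g^{-1}Hg \leq P$, and when H and K are both Sylow p-subgroups, applying this with P = K and comparing orders gives $g^{-1}Hg = K$.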
- -Theorem 3: Let q denote the order of any Sylow p-subgroup P of a finite group G. Let $n_p$ denote the number of Sylow p-subgroups of G. Then (a) $n_p = [G : N_G(P)]$ (where $N_G(P)$ is the normalizer of P), (b) $n_p$ divides $|G|/q$, and (c) $n_p \equiv 1 \pmod p$. - -Proof: Let Ω be the set of all Sylow p-subgroups of G and let G act on Ω by conjugation. Let P ∈ Ω be a Sylow p-subgroup. By Theorem 2, the orbit of P has size $n_p$, so by the orbit-stabilizer theorem $n_p = [G : G_P]$. For this group action, the stabilizer $G_P$ is given by $\{g \in G : gPg^{-1} = P\} = N_G(P)$, the normalizer of P in G. Thus, $n_p = [G : N_G(P)]$, and it follows that this number is a divisor of $[G : P] = |G|/q$. - -Now let P act on Ω by conjugation, and again let $\Omega_0$ denote the set of fixed points of this action. Let $Q \in \Omega_0$ and observe that then $Q = xQx^{-1}$ for all $x \in P$ so that $P \leq N_G(Q)$. By Theorem 2, P and Q are conjugate in $N_G(Q)$ in particular, and Q is normal in $N_G(Q)$, so then P = Q. It follows that $\Omega_0 = \{P\}$ so that, by the Lemma, $|\Omega| \equiv |\Omega_0| = 1 \pmod p$. - -The problem of finding a Sylow subgroup of a given group is an important problem in computational group theory. - -One proof of the existence of Sylow p-subgroups is constructive: if H is a p-subgroup of G and the index [G:H] is divisible by p, then the normalizer $N = N_G(H)$ of H in G is also such that [N : H] is divisible by p. In other words, a polycyclic generating system of a Sylow p-subgroup can be found by starting from any p-subgroup H (including the identity) and taking elements of p-power order contained in the normalizer of H but not in H itself. The algorithmic version of this (and many improvements) is described in textbook form in Butler, including the algorithm described in Cannon. These versions are still used in the GAP computer algebra system. - -In permutation groups, it has been proven, in Kantor and Kantor and Taylor, that a Sylow p-subgroup and its normalizer can be found in time polynomial in the size of the input (the degree of the group times the number of generators). These algorithms are described in textbook form in Seress, and are now becoming practical as the constructive recognition of finite simple groups becomes a reality. In particular, versions of this algorithm are used in the Magma computer algebra system. diff --git a/wiki/wikipedia/3800.txt b/wiki/wikipedia/3800.txt deleted file mode 100644 index 93a7283b90ee0598c792fa0fe0aef1cf51f1276b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3800.txt +++ /dev/null @@ -1,21 +0,0 @@ -In set theory, Cantor's paradox states that there is no set of all cardinalities. This is derived from the theorem that there is no greatest cardinal number. In informal terms, the paradox is that the collection of all possible "infinite sizes" is not only infinite, but so infinitely large that its own infinite size cannot be any of the infinite sizes in the collection. The difficulty is handled in axiomatic set theory by declaring that this collection is not a set but a proper class; in von Neumann–Bernays–Gödel set theory it follows from this and the axiom of limitation of size that this proper class must be in bijection with the class of all sets. Thus, not only are there infinitely many infinities, but this infinity is larger than any of the infinities it enumerates. - -This paradox is named for Georg Cantor, who is often credited with first identifying it in 1899 (or between 1895 and 1897).
Like a number of "paradoxes" it is not actually contradictory but merely indicative of a mistaken intuition, in this case about the nature of infinity and the notion of a set. Put another way, it is paradoxical within the confines of naïve set theory and therefore demonstrates that a careless axiomatization of this theory is inconsistent. - -In order to state the paradox it is necessary to understand that the cardinal numbers admit an ordering, so that one can speak about one being greater or less than another. Then Cantor's paradox is: - -Theorem: There is no greatest cardinal number. - -This fact is a direct consequence of Cantor's theorem on the cardinality of the power set of a set. - -Proof: Assume the contrary, and let C be the largest cardinal number. Then (in the von Neumann formulation of cardinality) C is a set and therefore has a power set $2^C$ which, by Cantor's theorem, has cardinality strictly larger than C. Demonstrating a cardinality (namely that of $2^C$) larger than C, which was assumed to be the greatest cardinal number, falsifies the definition of C. This contradiction establishes that such a cardinal cannot exist. - -Another consequence of Cantor's theorem is that the cardinal numbers constitute a proper class. That is, they cannot all be collected together as elements of a single set. Here is a somewhat more general result. - -Theorem: If S is any set then S cannot contain elements of all cardinalities. In fact, there is a strict upper bound on the cardinalities of the elements of S. - -Proof: Let S be a set, and let T be the union of the elements of S. Then every element of S is a subset of T, and hence is of cardinality less than or equal to the cardinality of T. Cantor's theorem then implies that every element of S is of cardinality strictly less than the cardinality of $2^T$. - -Since the cardinal numbers are well-ordered by indexing with the ordinal numbers (see Cardinal number, formal definition), this also establishes that there is no greatest ordinal number; conversely, the latter statement implies Cantor's paradox. By applying this indexing to the Burali-Forti paradox we obtain another proof that the cardinal numbers are a proper class rather than a set, and (at least in ZFC or in von Neumann–Bernays–Gödel set theory) it follows from this that there is a bijection between the class of cardinals and the class of all sets. Since every set is a subset of this latter class, and every cardinality is the cardinality of a set (by definition!) this intuitively means that the "cardinality" of the collection of cardinals is greater than the cardinality of any set: it is more infinite than any true infinity. This is the paradoxical nature of Cantor's "paradox". - -While Cantor is usually credited with first identifying this property of cardinal sets, some mathematicians award this distinction to Bertrand Russell, who defined a similar theorem in 1899 or 1901. diff --git a/wiki/wikipedia/3801.txt b/wiki/wikipedia/3801.txt deleted file mode 100644 index 4b8f08d9f3548a819350b295724038bea388c80c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3801.txt +++ /dev/null @@ -1,6 +0,0 @@ -In mathematics - specifically, differential geometry - the Bochner identity is an identity concerning harmonic maps between Riemannian manifolds. The identity is named after the American mathematician Salomon Bochner. - -Let M and N be Riemannian manifolds and let u : M → N be a harmonic map.
Let du denote the derivative (pushforward) of u, ∇ the gradient, Δ the Laplace–Beltrami operator, RiemN the Riemann curvature tensor on N and RicM the Ricci curvature tensor on M. Then -$$ -\frac12 \Delta \big( | \nabla u |^{2} \big) = \big| \nabla ( \mathrm{d} u ) \big|^{2} + \big\langle \mathrm{Ric}_{M} \nabla u, \nabla u \big\rangle - \big\langle \mathrm{Riem}_{N} (u) (\nabla u, \nabla u) \nabla u, \nabla u \big\rangle. -$$ diff --git a/wiki/wikipedia/3802.txt b/wiki/wikipedia/3802.txt deleted file mode 100644 index 0da5e325c9bc011ffab622fac63ebb69715c8901..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3802.txt +++ /dev/null @@ -1,15 +0,0 @@ -In mathematics, Fuchs' theorem, named after Lazarus Fuchs, states that a second-order differential equation of the form - -y'' + p(x)y' + q(x)y = g(x) - -has a solution expressible by a generalised Frobenius series when $p(x)$, $q(x)$ and $g(x)$ are analytic at $x = a$ or $a$ is a regular singular point. That is, any solution to this second-order differential equation can be written as - - y = \sum_{n=0}^\infty a_n (x - a)^{n + s}, \quad a_0 \neq 0 - -for some positive real s, or - - y = y_0 \ln(x - a) + \sum_{n=0}^\infty b_n(x - a)^{n + r}, \quad b_0 \neq 0 - -for some positive real r, where $y_0$ is a solution of the first kind. - -The radius of convergence of the series is at least as large as the minimum of the radii of convergence of $p(x)$, $q(x)$ and $g(x)$. diff --git a/wiki/wikipedia/3803.txt b/wiki/wikipedia/3803.txt deleted file mode 100644 index c8d5ec86d491bf1723ee21fd3eb5f2352276dd8b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3803.txt +++ /dev/null @@ -1,123 +0,0 @@ -In mathematics, the Vitali covering lemma is a combinatorial and geometric result commonly used in measure theory of Euclidean spaces. This lemma is an intermediate step, of independent interest, in the proof of the Vitali covering theorem. The covering theorem is credited to the Italian mathematician Giuseppe Vitali. The theorem states that it is possible to cover, up to a Lebesgue-negligible set, a given subset E of Rd by a disjoint family extracted from a Vitali covering of E. - -[Figure: on the top, a collection of balls, with the green balls forming the disjoint subcollection; on the bottom, the subcollection with three times the radius covers all the balls.] There are two basic versions of the lemma, a finite version and an infinite version. Both lemmas can be proved in the general setting of a metric space; typically these results are applied to the special case of the Euclidean space $\mathbb{R}^d$. In both theorems we will use the following notation: if $B = B(x,r)$ is a ball and $c \in \mathbb{R}$, we will write $cB$ for the ball $B(x,cr)$. - -Theorem (Finite Covering Lemma). Let $ B_{1}, \dots, B_{n} $ be any finite collection of balls contained in an arbitrary metric space. Then there exists a subcollection $ B_{j_{1}}, B_{j_{2}}, \dots, B_{j_{m}} $ of these balls which are disjoint and satisfy B_{1}\cup B_{2}\cup\dots \cup B_{n}\subseteq 3B_{j_1} \cup 3B_{j_2} \cup \dots \cup 3B_{j_m}. Proof: Without loss of generality, we assume that the collection of balls is not empty; that is, n > 0. Let $B_{j_1}$ be the ball of largest radius. Inductively, assume that $B_{j_1},\dots,B_{j_k}$ have been chosen. If there is some ball in $B_1,\dots,B_n$ that is disjoint from $B_{j_1}\cup B_{j_2}\cup\dots\cup B_{j_k}$, let $B_{j_{k+1}}$ be such a ball with maximal radius (breaking ties arbitrarily); otherwise, we set m := k and terminate the inductive definition.
- -Now set $X:=\bigcup_{k=1}^m 3B_{j_k}$. It remains to show that $ B_i\subset X$ for every $i=1,2,\dots,n$. This is clear if $i\in\{j_1,\dots,j_m\}$. Otherwise, there necessarily is some $k \in \{1,\dots,m\}$ such that $B_i$ intersects $B_{j_k}$ and the radius of $B_{j_k}$ is at least as large as that of $B_i$. The triangle inequality then easily implies that $B_i\subset 3B_{j_k}\subset X$, as needed. This completes the proof of the finite version. - -Theorem (Infinite Covering Lemma). Let $ \mathbf{F}$ be an arbitrary collection of balls in a separable metric space such that R := \sup \{ \mathrm{rad}(B) : B \in \mathbf{F} \} <\infty where $ \mathrm{rad}(B) $ denotes the radius of the ball $B$. Then there exists a countable sub-collection $ \mathbf{G} \subset \mathbf{F}$ such that the balls of $ \mathbf{G}$ are pairwise disjoint, and satisfy \bigcup_{B \in \mathbf{F}} B \subseteq \bigcup_{C \in \mathbf{G}} 5C. Moreover, each $B \in \mathbf{F}$ intersects some $C \in \mathbf{G}$ with $B \subset 5C$. - -Proof: Consider the partition of F into subcollections $\mathbf{F}_n$, n ≥ 0, defined by -$$ - \mathbf{F}_n = \{ B \in \mathbf{F} : 2^{-n-1}R < \text{rad}(B) \leq 2^{-n}R \}. -$$ - -That is, $\mathbf{F}_n$ consists of the balls B whose radius is in $(2^{-n-1}R, 2^{-n}R]$. A sequence $\mathbf{G}_n$, with $\mathbf{G}_n \subset \mathbf{F}_n$, is defined inductively as follows. First, set $\mathbf{H}_0 = \mathbf{F}_0$ and let $\mathbf{G}_0$ be a maximal disjoint subcollection of $\mathbf{H}_0$ (such a subcollection exists by Zorn's lemma). Assuming that $\mathbf{G}_0,\dots,\mathbf{G}_n$ have been selected, let -$$ - \mathbf{H}_{n+1} = \{ B \in \mathbf{F}_{n+1} : \ B \cap C = \emptyset, \ \ \forall C \in \mathbf{G}_0 \cup \mathbf{G}_1 \cup \dots \cup \mathbf{G}_n \}, -$$ - -and let $\mathbf{G}_{n+1}$ be a maximal disjoint subcollection of $\mathbf{H}_{n+1}$. The subcollection -$$ -\mathbf{G} := \bigcup_{n=0}^\infty \mathbf{G}_n -$$ - -of F satisfies the requirements of the theorem: G is a disjoint collection, and is thus countable since the given metric space is separable. Moreover, every ball B ∈ F intersects a ball C ∈ G such that B ⊂ 5 C.
- Indeed, if we are given some $B \in \mathbf{F}$, there must be some n such that B belongs to $\mathbf{F}_n$. Either B does not belong to $\mathbf{H}_n$, which implies n > 0 and means that B intersects a ball from the union of $\mathbf{G}_0, \dots, \mathbf{G}_{n-1}$, or $B \in \mathbf{H}_n$ and by maximality of $\mathbf{G}_n$, B intersects a ball in $\mathbf{G}_n$. In any case, B intersects a ball C that belongs to the union of $\mathbf{G}_0, \dots, \mathbf{G}_n$. Such a ball C must have a radius larger than $2^{-n-1}R$. Since the radius of B is less than or equal to $2^{-n}R$, we can conclude by the triangle inequality that B ⊂ 5 C, as claimed. From this $ \bigcup_{B \in \mathbf{F}} B \subseteq \bigcup_{C \in \mathbf{G}} 5C $ immediately follows, completing the proof. - -Remarks - -*In the infinite version, the initial collection of balls can be countable or uncountable. In a separable metric space, any pairwise disjoint collection of balls must be countable. In a non-separable space, the same argument shows a pairwise disjoint subfamily exists, but that family need not be countable. - -*The result may fail if the radii are not bounded: consider the family of all balls centered at 0 in Rd; any disjoint subfamily consists of only one ball B, and 5 B does not contain all the balls in this family. - -*The constant 5 is not optimal. If the scale $c^{-n}$, c > 1, is used instead of $2^{-n}$ for defining $\mathbf{F}_n$, the final value is 1 + 2c instead of 5. Any constant larger than 3 gives a correct statement of the lemma, but not 3. - -*Using a finer analysis, when the original collection F is a Vitali covering of a subset E of Rd, one shows that the subcollection G, defined in the above proof, covers E up to a Lebesgue-negligible set. - -An application of the Vitali lemma is in proving the Hardy–Littlewood maximal inequality. As in this proof, the Vitali lemma is frequently used when we are, for instance, considering the d-dimensional Lebesgue measure, $\lambda_d$, of a set E ⊂ Rd, which we know is contained in the union of a certain collection of balls $ \{B_{j}:j\in J\}$, each of which has a measure we can more easily compute, or has a special property one would like to exploit. Hence, if we compute the measure of this union, we will have an upper bound on the measure of E. However, it is difficult to compute the measure of the union of all these balls if they overlap. By the Vitali lemma, we may choose a subcollection $ \left\{ B_{j} : j\in J' \right\} $ which is disjoint and such that $\bigcup_{j\in J'}5 B_j\supset \bigcup_{j\in J} B_j\supset E$. Therefore, -$$ - \lambda_d(E)\leq \lambda_d \biggl( \bigcup_{j\in J}B_{j} \biggr) \leq \lambda_d \biggl( \bigcup_{j\in J'}5B_{j} \biggr) \leq \sum_{j\in J'} \lambda_d(5 B_{j}). -$$ - -Now, since increasing the radius of a d-dimensional ball by a factor of five increases its volume by a factor of $5^d$, we know that -$$ - \sum_{j\in J'} \lambda_d(5B_{j}) = 5^d \sum_{j\in J'} \lambda_d(B_{j}) -$$ - -and thus -$$ - \lambda_d(E) \leq 5^{d} \sum_{j\in J'}\lambda_d(B_{j}). -$$ - -In the covering theorem, the aim is to cover, up to a "negligible set", a given set E ⊆ Rd by a disjoint subcollection extracted from a Vitali covering for E: a Vitali class or Vitali covering $ \mathcal{V} $ for E is a collection of sets such that, for every x ∈ E and δ > 0, there is a set U in the collection $\mathcal{V}$ such that x ∈ U and the diameter of U is non-zero and less than δ.
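For instance, for any set E ⊆ Rd, the collection of all closed balls centered at points of E and of radius at most 1 is a Vitali covering of E, since every x ∈ E lies in balls centered at x of arbitrarily small positive diameter.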
- In the classical setting of Vitali, the negligible set is a Lebesgue negligible set, but measures other than the Lebesgue measure, and spaces other than Rd, have also been considered, as is shown in the relevant section below. - -The following observation is useful: if $\mathcal{V}$ is a Vitali covering for E and if E is contained in an open set Ω ⊆ Rd, then the subcollection of sets U in $\mathcal{V}$ that are contained in Ω is also a Vitali covering for E. - -The next covering theorem for the Lebesgue measure $\lambda_d$ is due to Lebesgue. A collection $ \mathcal{V} $ of measurable subsets of Rd is a regular family (in the sense of Lebesgue) if there exists a constant C such that -$$ -\operatorname{diam}(V)^d \le C \lambda_d(V) -$$ - -for every set V in the collection $\mathcal{V}$.
- The family of cubes is an example of a regular family $\mathcal{V}$, as is the family $\mathcal{V}(m)$ of rectangles in R2 such that the ratio of sides stays between 1/m and m, for some fixed m ≥ 1. If an arbitrary norm is given on Rd, the family of balls for the metric associated to the norm is another example. To the contrary, the family of all rectangles in R2 is not regular. - -Theorem. Let E ⊆ Rd be a measurable set with finite Lebesgue measure, and let $\mathcal{V}$ be a regular family of closed subsets of Rd that is a Vitali covering for E. Then there exists a finite or countably infinite disjoint subcollection $\{U_{j}\}\subseteq \mathcal{V}$ such that -$$ - \lambda_d \biggl( E \setminus \bigcup_{j}U_{j} \biggr) = 0. -$$ - -The original result of Vitali is a special case of this theorem, in which d = 1 and $\mathcal{V}$ is a collection of intervals that is a Vitali covering for a measurable subset E of the real line having finite measure.
- The theorem above remains true without assuming that E has finite measure. This is obtained by applying the covering result in the finite measure case, for every integer n ≥ 0, to the portion of E contained in the open annulus $\Omega_n$ of points x such that n < |x| < n+1. - -A somewhat related covering theorem is the Besicovitch covering theorem. To each point a of a subset A ⊆ Rd, a Euclidean ball $B(a, r_a)$ with center a and positive radius $r_a$ is assigned. Then, as in the Vitali theorem, a subcollection of these balls is selected in order to cover A in a specific way. The main differences with the Vitali covering theorem are that on one hand, the disjointness requirement of Vitali is relaxed to the fact that the number $N_x$ of the selected balls containing an arbitrary point x ∈ Rd is bounded by a constant $B_d$ depending only upon the dimension d; on the other hand, the selected balls do cover the set A of all the given centers. - -One may have a similar objective when considering Hausdorff measure instead of Lebesgue measure. The following theorem applies in that case. - -Theorem. Let $H^s$ denote s-dimensional Hausdorff measure, let E ⊆ Rd be an $H^s$-measurable set and $\mathcal{V}$ a Vitali class of closed sets for E. Then there exists a (finite or countably infinite) disjoint subcollection $\{U_{j}\}\subseteq \mathcal{V}$ such that either -$$ - H^{s} \left( E\setminus \bigcup_{j}U_{j} \right)=0 \quad \text{or} \quad \sum_{j} \operatorname{diam} (U_{j})^{s}=\infty. -$$ - -Furthermore, if E has finite s-dimensional Hausdorff measure, then for any ε > 0, we may choose this subcollection $\{U_j\}$ such that -$$ - H^{s}(E)\leq \sum_{j} \mathrm{diam} (U_{j})^{s}+\varepsilon. -$$ - -This theorem implies the result of Lebesgue given above. Indeed, when s = d, the Hausdorff measure $H^s$ on Rd coincides with a multiple of the d-dimensional Lebesgue measure. If a disjoint collection $\{U_{j}\}$ is regular and contained in a measurable region B with finite Lebesgue measure, then -$$ -\sum_j \operatorname{diam}(U_j)^d \le C \sum_j \lambda_d(U_j) \le C \lambda_d(B) < +\infty -$$ - -which excludes the second possibility in the first assertion of the previous theorem. It follows that E is covered, up to a Lebesgue-negligible set, by the selected disjoint subcollection. - -The covering lemma can be used as an intermediate step in the proof of the following basic form of the Vitali covering theorem. - -Theorem. Let E ⊆ Rd and let $\mathbf{F}$ be a Vitali covering of E by closed balls. Then there exists a countable disjoint subcollection $\mathbf{G} \subseteq \mathbf{F}$ such that $\lambda_d \bigl( E \setminus \bigcup_{C \in \mathbf{G}} C \bigr) = 0$. - -Proof: Without loss of generality, one can assume that all balls in F are nondegenerate and have radius less than or equal to 1. By the infinite form of the covering lemma, there exists a countable disjoint subcollection $\mathbf{G}$ of F such that every ball B ∈ F intersects a ball C ∈ G for which B ⊂ 5 C. Let r > 0 be given, and let Z denote the set of points z ∈ E that are not contained in any ball from G and belong to the open ball B(r) of radius r, centered at 0. It is enough to show that Z is Lebesgue-negligible, for every given r. - -Let $\mathbf{G}_r = \{ C_n\}_{n}$ denote the subcollection of those balls in G that meet B(r). Note that $\mathbf{G}_r$ may be finite or countably infinite. Let z ∈ Z be fixed. For each N, z does not belong to the closed set $K = \bigcup_{n \leq N} C_n$ by the definition of Z. But by the Vitali cover property, one can find a ball B ∈ F containing z, contained in B(r), and disjoint from K. By the property of G, the ball B intersects some ball $C_i \in \mathbf{G}$ and is contained in $5C_i$. But because K and B are disjoint, we must have i > N.
So $z \in 5C_i$ for some i > N, and therefore
-$$
- Z \subset \bigcup_{n > N} 5C_n.
-$$
-
-This gives for every N the inequality
-$$
- \lambda_d(Z) \le \sum_{n > N} \lambda_d(5C_n) = 5^d \sum_{n > N} \lambda_d(C_n).
-$$
-
-But since the balls of $\mathbf{G}_r$ are contained in B(r+2), and these balls are disjoint, we see
-$$
-\sum_n \lambda_d(C_n) < \infty.
-$$
-
-Therefore, the term on the right side of the above inequality converges to 0 as N goes to infinity, which shows that Z is negligible as needed.
-
-The Vitali covering theorem is not valid in infinite-dimensional settings. The first result in this direction was given by David Preiss in 1979: there exists a Gaussian measure γ on an (infinite-dimensional) separable Hilbert space H so that the Vitali covering theorem fails for (H, Borel(H), γ). This result was strengthened in 2003 by Jaroslav Tišer: the Vitali covering theorem in fact fails for every infinite-dimensional Gaussian measure on any (infinite-dimensional) separable Hilbert space. diff --git a/wiki/wikipedia/3804.txt deleted file mode 100644 index 96a48104b354020031edcd7b15f71d5d4e5c2715..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3804.txt +++ /dev/null @@ -1,105 +0,0 @@ -In mathematical logic, Löb's theorem states that in Peano arithmetic (PA) (or any formal system including PA), for any formula P, if it is provable in PA that "if P is provable in PA then P is true", then P is provable in PA. More formally, if Prov(P) means that the formula P is provable, then
-$$
-\mathrm{if}\ PA \vdash ({\rm Prov}(P) \rightarrow P)\mathrm{, then}\ PA \vdash P,
-$$
-
-or
-$$
-\dfrac{ PA \vdash {\rm Prov}(P) \rightarrow P }{ PA \vdash P }.
-$$
-
-An immediate corollary (the contrapositive) of Löb's theorem is that, if P is not provable in PA, then "if P is provable in PA, then P is true" is not provable in PA. For example, "If $1+1=3$ is provable in PA, then $1+1=3$" is not provable in PA.
-
-Löb's theorem is named for Martin Hugo Löb, who formulated it in 1955. It is related to Curry's paradox.
-
-Provability logic abstracts away from the details of encodings used in Gödel's incompleteness theorems by expressing the provability of $\phi$ in the given system in the language of modal logic, by means of the modality $\Box$.
-
-Then we can formalize Löb's theorem by the axiom
-$$
-\Box(\Box P\rightarrow P)\rightarrow \Box P,
-$$
-
-known as axiom GL, for Gödel–Löb. This is sometimes formalized by means of an inference rule that infers
-$$
- P
-$$
-
-from
-$$
-\Box P\rightarrow P.
-$$
-
-The provability logic GL that results from taking the modal logic K4 (or K, since the axiom schema 4, $\Box A\rightarrow\Box\Box A$, then becomes redundant) and adding the above axiom GL is the most intensely investigated system in provability logic.
-
-Löb's theorem can be proved within modal logic using only some basic rules about the provability operator (the K4 system) plus the existence of modal fixed points.
-
-We will assume the following grammar for formulas:
-
-# If $X$ is a propositional variable, then $X$ is a formula.
-
-# If $K$ is a propositional constant, then $K$ is a formula.
-
-# If $A$ is a formula, then $\Box A$ is a formula.
-
-# If $A$ and $B$ are formulas, then so are $\neg A$, $A \rightarrow B$, $A \wedge B$, $A \vee B$, and $A \leftrightarrow B$.
-
-A modal sentence is a modal formula that contains no propositional variables. We use $\vdash A$ to mean $A$ is a theorem.
- -If $F(X)$ is a modal formula with only one propositional variable $X$, then a modal fixed point of $F(X)$ is a sentence $\Psi$ such that -$$ -\vdash \Psi \leftrightarrow F(\Box \Psi) -$$ - -We will assume the existence of such fixed points for every modal formula with one free variable. This is of course not an obvious thing to assume, but if we interpret $\Box$ as provability in Peano Arithmetic, then the existence of modal fixed points follows from the diagonal lemma. - -In addition to the existence of modal fixed points, we assume the following rules of inference for the provability operator $\Box$, known as Hilbert–Bernays provability conditions: - -# (necessitation) From $\vdash A$ conclude $\vdash \Box A$: Informally, this says that if A is a theorem, then it is provable. - -# (internal necessitation) $\vdash \Box A \rightarrow \Box \Box A$: If A is provable, then it is provable that it is provable. - -# (box distributivity) $\vdash \Box (A \rightarrow B) \rightarrow (\Box A \rightarrow \Box B)$: This rule allows you to do modus ponens inside the provability operator. If it is provable that A implies B, and A is provable, then B is provable. - -# Assume that there is a modal sentence $P$ such that $\vdash \Box P \rightarrow P$.
Roughly speaking, it is a theorem that if $P$ is provable, then it is, in fact, true.
This is a claim of soundness.
-
-# From the existence of modal fixed points for every formula (in particular, the formula $X \rightarrow P$) it follows that there exists a sentence $\Psi$ such that $\vdash \Psi \leftrightarrow (\Box \Psi \rightarrow P)$.
-
-# From 2, it follows that $\vdash \Psi \rightarrow (\Box \Psi \rightarrow P)$.
-
-# From the necessitation rule, it follows that $\vdash \Box(\Psi \rightarrow (\Box \Psi \rightarrow P))$.
-
-# From 4 and the box distributivity rule, it follows that $\vdash \Box\Psi \rightarrow \Box(\Box \Psi \rightarrow P)$.
-
-# Applying the box distributivity rule with $ A = \Box \Psi $ and $ B= P$ gives us $\vdash \Box(\Box \Psi \rightarrow P) \rightarrow (\Box\Box\Psi \rightarrow \Box P)$.
-
-# From 5 and 6, it follows that $\vdash \Box \Psi \rightarrow (\Box\Box\Psi \rightarrow \Box P)$.
-
-# From the internal necessitation rule, it follows that $\vdash \Box \Psi \rightarrow \Box \Box \Psi$.
-
-# From 7 and 8, it follows that $\vdash \Box \Psi \rightarrow \Box P$.
-
-# From 1 and 9, it follows that $\vdash \Box \Psi \rightarrow P$.
-
-# From 2, it follows that $\vdash (\Box \Psi \rightarrow P) \rightarrow \Psi$.
-
-# From 10 and 11, it follows that $\vdash \Psi$.
-
-# From 12 and the necessitation rule, it follows that $\vdash \Box \Psi$.
-
-# From 13 and 10, it follows that $\vdash P$.
-
-An immediate corollary of Löb's theorem is that, if P is not provable in PA, then "if P is provable in PA, then P is true" is not provable in PA. Given that we know PA is consistent (but PA does not know PA is consistent), here are some simple examples:
-
-* "If $1+1=3$ is provable in PA, then $1+1=3$" is not provable in PA, as $1+1=3$ is not provable in PA (as it is false).
-
-* "If $1+1=2$ is provable in PA, then $1+1=2$" is provable in PA, as is any statement of the form "If X, then $1+1=2$".
-
-* "If the strengthened finite Ramsey theorem is provable in PA, then the strengthened finite Ramsey theorem is true" is not provable in PA, as "The strengthened finite Ramsey theorem is true" is not provable in PA (despite being true).
-
-In doxastic logic, Löb's theorem shows that any system classified as a reflexive "type 4" reasoner must also be "modest": such a reasoner can never believe "my belief in P would imply that P is true" without first believing that P is true.
-
-Gödel's second incompleteness theorem follows from Löb's theorem by substituting the false statement $\bot$ for P.
-
-Not only does the existence of modal fixed points imply Löb's theorem, but the converse is valid, too. When Löb's theorem is given as an axiom (schema), the existence of a fixed point (up to provable equivalence) $p\leftrightarrow A(p)$ for any formula A(p) modalized in p can be derived. Thus in normal modal logic, Löb's axiom is equivalent to the conjunction of the axiom schema 4, $(\Box A\rightarrow \Box\Box A)$, and the existence of modal fixed points. diff --git a/wiki/wikipedia/3805.txt deleted file mode 100644 index 24fa3f0f84741b261db6b995e19f9ffe499f3bf6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3805.txt +++ /dev/null @@ -1,80 +0,0 @@ -In mathematics, especially in homological algebra and algebraic topology, a Künneth theorem, also called a Künneth formula, is a statement relating the homology of two objects to the homology of their product. The classical statement of the Künneth theorem relates the singular homology of two topological spaces X and Y and their product space $X \times Y$.
In the simplest possible case, the relationship is that of a tensor product, but for applications it is very often necessary to apply certain tools of homological algebra to express the answer.
-
-A Künneth theorem or Künneth formula is true in many different homology and cohomology theories, and the name has become generic. These many results are named for the German mathematician Hermann Künneth.
-
-Let X and Y be two topological spaces. In general one uses singular homology; but if X and Y happen to be CW complexes, then this can be replaced by cellular homology, because that is isomorphic to singular homology. The simplest case is when the coefficient ring for homology is a field F. In this situation, the Künneth theorem (for singular homology) states that for any integer k,
-$$
-\bigoplus_{i + j = k} H_i(X; F) \otimes H_j(Y; F) \cong H_k(X \times Y; F)
-$$.
-
-Furthermore, the isomorphism is a natural isomorphism. The map from the sum to the homology group of the product is called the cross product. More precisely, there is a cross product operation by which an i-cycle on X and a j-cycle on Y can be combined to create an $(i+j)$-cycle on $X \times Y$, so there is an explicit linear mapping defined from the direct sum to $H_k(X \times Y)$.
-
-A consequence of this result is that the Betti numbers, the dimensions of the homology with $\Q$ coefficients, of $X \times Y$ can be determined from those of X and Y. If $p_Z(t)$ is the generating function of the sequence of Betti numbers $b_k(Z)$ of a space Z, then
-$$
-p_{X \times Y}(t) = p_X(t) p_Y(t).
-$$
-
-Here, when there are finitely many Betti numbers of X and Y, each of which is a natural number rather than $\infty$, this reads as an identity on Poincaré polynomials. In the general case these are formal power series with possibly infinite coefficients, and have to be interpreted accordingly. Furthermore, the above statement holds not only for the Betti numbers but also for the generating functions of the dimensions of the homology over any field. (If the integer homology is not torsion-free, then these numbers may differ from the standard Betti numbers.)
-
-The above formula is simple because vector spaces over a field have very restricted behavior. As the coefficient ring becomes more general, the relationship becomes more complicated. The next simplest case is the case when the coefficient ring is a principal ideal domain. This case is particularly important because the integers $\Z$ are a PID.
-
-In this case the equation above is no longer always true. A correction factor appears to account for the possibility of torsion phenomena. This correction factor is expressed in terms of the Tor functor, the first derived functor of the tensor product.
-
-When R is a PID, then the correct statement of the Künneth theorem is that for any topological spaces X and Y there are natural short exact sequences
-$$
-0 \to \bigoplus_{i + j = k} H_i(X; R) \otimes_R H_j(Y; R) \to H_k(X \times Y; R) \to \bigoplus_{i + j = k-1} \mathrm{Tor}_1^R(H_i(X; R), H_j(Y; R)) \to 0.
-$$
-
-Furthermore, these sequences split, but not canonically.
-
-The short exact sequences just described can easily be used to compute the homology groups with integer coefficients of the product $\mathbb{RP}^2 \times \mathbb{RP}^2$ of two real projective planes, in other words, $H_k(\mathbb{RP}^2 \times \mathbb{RP}^2; \Z)$. These spaces are CW complexes.
Denoting the homology group $H_i(\mathbb{RP}^2;\Z)$ by $h_i$ for brevity's sake, one knows from a simple calculation with cellular homology that
-$$
-h_0\cong \Z
-$$,
-$$
-h_1\cong \Z/2\Z
-$$,
-$$
-h_i= 0
-$$ for all other values of i.
-
-The only non-zero Tor group (torsion product) which can be formed from these values of $h_i$ is
-$$
-\mathrm{Tor}^{\Z}_1(h_1, h_1) \cong \mathrm{Tor}^{\Z}_1(\Z/2\Z,\Z/2\Z)\cong \Z/2\Z
-$$.
-
-Therefore, the Künneth short exact sequence reduces in every degree to an isomorphism, because there is a zero group in each case on either the left or the right side in the sequence. The result is
-$$
-\begin{align}
-H_0 \left (\mathbb{RP}^2 \times \mathbb{RP}^2;\Z \right ) &\cong h_0 \otimes h_0 \cong \Z \\
-H_1 \left (\mathbb{RP}^2 \times \mathbb{RP}^2;\Z \right ) &\cong h_0 \otimes h_1 \oplus h_1 \otimes h_0 \cong \Z/2\Z\oplus \Z/2\Z \\
-H_2 \left (\mathbb{RP}^2 \times \mathbb{RP}^2;\Z \right ) &\cong h_1 \otimes h_1 \cong \Z/2\Z \\
-H_3 \left (\mathbb{RP}^2 \times \mathbb{RP}^2;\Z \right ) &\cong \mathrm{Tor}^{\Z}_1(h_1,h_1) \cong \Z/2\Z
-\end{align}
-$$
-
-and all the other homology groups are zero.
-
-For a general commutative ring R, the homology of X and Y is related to the homology of their product by a Künneth spectral sequence
-$$
-E_{pq}^2 = \bigoplus_{q_1 + q_2 = q} \mathrm{Tor}^R_p(H_{q_1}(X; R), H_{q_2}(Y; R)) \Rightarrow H_{p+q}(X \times Y; R).
-$$
-
-In the cases described above, this spectral sequence collapses to give an isomorphism or a short exact sequence.
-
-The chain complex of the space X × Y is related to the chain complexes of X and Y by a natural quasi-isomorphism
-$$
-C_*(X \times Y) \cong C_*(X) \otimes C_*(Y).
-$$
-
-For singular chains this is the theorem of Eilenberg and Zilber. For cellular chains on CW complexes, it is a straightforward isomorphism. Then the homology of the tensor product on the right is given by the spectral Künneth formula of homological algebra.
-
-The freeness of the chain modules means that in this geometric case it is not necessary to use any hyperhomology or total derived tensor product.
-
-There are analogues of the above statements for singular cohomology and sheaf cohomology. For sheaf cohomology on an algebraic variety, Alexander Grothendieck found six spectral sequences relating the possible hyperhomology groups of two chain complexes of sheaves and the hyperhomology groups of their tensor product.
-
-There are many generalized (or "extraordinary") homology and cohomology theories for topological spaces. K-theory and cobordism are the best-known. Unlike ordinary homology and cohomology, they typically cannot be defined using chain complexes. Thus Künneth theorems can not be obtained by the above methods of homological algebra. Nevertheless, Künneth theorems in just the same form have been proved in very many cases by various other methods. The first were Michael Atiyah's Künneth theorem for complex K-theory and Pierre Conner and Edwin E. Floyd's result in cobordism. A general method of proof emerged, based upon a homotopical theory of modules over highly structured ring spectra. The homotopy category of such modules closely resembles the derived category in homological algebra.
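-
-As a small executable illustration of the field-coefficient case discussed above, multiplying Poincaré polynomials (represented here as plain coefficient lists in Python; the function name is ad hoc) reproduces the Betti numbers of a product space:
-
-```python
-def poincare_product(p, q):
-    """Coefficient list of p(t) * q(t); p[k] is the k-th Betti number."""
-    out = [0] * (len(p) + len(q) - 1)
-    for i, a in enumerate(p):
-        for j, b in enumerate(q):
-            out[i + j] += a * b
-    return out
-
-circle = [1, 1]                          # S^1: b_0 = b_1 = 1
-print(poincare_product(circle, circle))  # torus S^1 x S^1 -> [1, 2, 1]
-
-sphere = [1, 0, 1]                       # S^2: b_0 = b_2 = 1
-print(poincare_product(sphere, circle))  # S^2 x S^1 -> [1, 1, 1, 1]
-```
-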
diff --git a/wiki/wikipedia/3806.txt deleted file mode 100644 index fdd07f785c631b90a837899d578530f3e77c63f8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3806.txt +++ /dev/null @@ -1,17 +0,0 @@ -The Euclid–Euler theorem is a theorem in number theory that relates perfect numbers to Mersenne primes. It states that an even number is perfect if and only if it has the form $2^{p-1}(2^p - 1)$, where $2^p - 1$ is a prime number. The theorem is named after mathematicians Euclid and Leonhard Euler, who respectively proved the "if" and "only if" aspects of the theorem.
-
-It has been conjectured that there are infinitely many Mersenne primes. Although the truth of this conjecture remains unknown, it is equivalent, by the Euclid–Euler theorem, to the conjecture that there are infinitely many even perfect numbers. However, it is also unknown whether there exists even a single odd perfect number. The perfect number 6 comes from p = 2 in this way, as $2^{2-1}M_2 = 2 \times 3 = 6$, and the Mersenne prime 7 corresponds in the same way to the perfect number 28.
-
-Euclid proved that $2^{p-1}(2^p - 1)$ is an even perfect number whenever $2^p - 1$ is prime. This is the final result on number theory in Euclid's Elements; the later books in the Elements instead concern irrational numbers, solid geometry, and the golden ratio. Euclid expresses the result by stating that if a finite geometric series beginning at 1 with ratio 2 has a prime sum q, then this sum multiplied by the last term t in the series is perfect. Expressed in these terms, the sum q of the finite series is the Mersenne prime $2^p - 1$ and the last term t in the series is the power of two $2^{p-1}$. Euclid proves that qt is perfect by observing that the geometric series with ratio 2 starting at q, with the same number of terms, is proportional to the original series; therefore, since the original series sums to q = 2t − 1, the second series sums to q(2t − 1) = 2qt − q, and both series together add to 2qt, two times the supposed perfect number. However, these two series are disjoint from each other and (by the primality of q) exhaust all the divisors of qt, so qt has divisors that sum to 2qt, showing that it is perfect.
-
-Over a millennium after Euclid, Alhazen conjectured that every even perfect number is of the form $2^{p-1}(2^p - 1)$ where $2^p - 1$ is prime, but he was not able to prove this result. It was not until the 18th century, over 2000 years after Euclid, that Leonhard Euler proved that the formula $2^{p-1}(2^p - 1)$ will yield all the even perfect numbers.
-
-In the other direction, suppose that an even perfect number has been given, and partially factor it as $2^k x$, where x is odd. For $2^k x$ to be perfect, the sum of its divisors must be twice its value:
-
-{{NumBlk|:|$2^{k+1}x = \sigma(2^k x) = (2^{k+1} - 1)\sigma(x).$|∗}}
-
-The odd factor $2^{k+1} - 1$ on the right side of (∗) is at least 3, and it must divide x, the only odd factor on the left side, so $y = x/(2^{k+1} - 1)$ is a proper divisor of x. Dividing both sides of (∗) by the common factor $2^{k+1} - 1$ and taking into account the known divisors x and y of x gives
-$$
-2^{k+1}y = \sigma(x) = x + y + \text{other divisors} = 2^{k+1}y + \text{other divisors}.
-$$
-
-For this equality to be true, there can be no other divisors. Therefore, y must be 1, and x must be a prime of the form $2^{k+1} - 1$.
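-
-A quick numerical check of the Euclid direction (a sketch in plain Python; `sigma` and `is_prime` are naive helpers written here only for illustration): whenever $2^p - 1$ is prime, the number $2^{p-1}(2^p - 1)$ has divisors summing to twice itself.
-
-```python
-def sigma(n):
-    """Sum of all divisors of n, including n itself (naive trial division)."""
-    return sum(d for d in range(1, n + 1) if n % d == 0)
-
-def is_prime(n):
-    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
-
-for p in range(2, 8):
-    m = 2 ** p - 1                  # candidate Mersenne prime
-    if is_prime(m):
-        n = 2 ** (p - 1) * m        # Euclid's even perfect number
-        assert sigma(n) == 2 * n    # perfect: divisor sum equals 2n
-        print(p, m, n)              # (2,3,6), (3,7,28), (5,31,496), (7,127,8128)
-```
-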
diff --git a/wiki/wikipedia/3807.txt b/wiki/wikipedia/3807.txt deleted file mode 100644 index 240d12b41fea6ecd8f6d0b34ea9ccd96aa63af60..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3807.txt +++ /dev/null @@ -1,33 +0,0 @@ -In complex analysis, Picard's great theorem and Picard's little theorem are related theorems about the range of an analytic function. They are named after Émile Picard. - -
    Little Picard Theorem: If a function f : C → C is entire and non-constant, then the set of values that f(z) assumes is either the whole complex plane or the plane minus a single point.
    - -
Sketch of Proof: Picard's original proof was based on properties of the modular lambda function, usually denoted by λ, which performs, using modern terminology, the holomorphic universal covering of the twice punctured plane by the unit disc. This function is explicitly constructed in the theory of elliptic functions. If f omits two values, then the composition of f with the inverse of the modular function maps the plane into the unit disc, which implies that f is constant by Liouville's theorem.
-
-This theorem is a significant strengthening of Liouville's theorem, which states that the image of an entire non-constant function must be unbounded. Many different proofs of Picard's theorem were later found, and Schottky's theorem is a quantitative version of it. In the case where the values of f are missing a single point, this point is called a lacunary value of the function.
-
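-A short numerical illustration of a lacunary value (plain Python; nothing here is from the article beyond the function $e^z$): the exponential function omits exactly the value 0, and a preimage of any nonzero target can be computed directly with a complex logarithm.
-
-```python
-import cmath
-
-w = 2 - 3j              # any nonzero target value
-z = cmath.log(w)        # one preimage of w under exp (add 2*pi*i*k for others)
-print(cmath.exp(z))     # ~ (2-3j); the value 0 alone is never attained
-```
-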
    Great Picard's Theorem: If an analytic function f has an essential singularity at a point w, then on any punctured neighborhood of w, f(z) takes on all possible complex values, with at most a single exception, infinitely often.
-
-This is a substantial strengthening of the Casorati–Weierstrass theorem, which only guarantees that the range of f is dense in the complex plane. A consequence of the Great Picard Theorem is that any entire, non-polynomial function attains all possible complex values infinitely often, with at most one exception.
-
-The "single exception" is needed in both theorems, as demonstrated here:
-
-*$e^z$ is an entire non-constant function that is never 0,
-
-*$e^{1/z}$ has an essential singularity at 0, but still never attains 0 as a value.
-
-Great Picard's theorem is true in a slightly more general form that also applies to meromorphic functions:
-
    Great Picard's Theorem (meromorphic version): If M is a Riemann surface, w a point on M, P1(C) = C ∪ {∞} denotes the Riemann sphere and f : M\{w} → P1(C) is a holomorphic function with essential singularity at w, then on any open subset of M containing w, the function f(z) attains all but at most two points of P1(C) infinitely often.
-
-Example: The function $f(z) = 1/(1 - e^{1/z})$ is meromorphic on C* = C - {0}, the complex plane with the origin deleted. It has an essential singularity at z = 0 and attains the value ∞ infinitely often in any neighborhood of 0; however, it does not attain the values 0 or 1.
-
-With this generalization, the Little Picard Theorem follows from the Great Picard Theorem, because an entire function is either a polynomial or it has an essential singularity at infinity. As with the little theorem, the (at most two) points that are not attained are lacunary values of the function.
-
-The following conjecture is related to "Great Picard's Theorem":
-
    Conjecture: Let {U1, ..., Un} be a collection of open connected subsets of C that cover the punctured unit disk D \ {0}. Suppose that on each Uj there is an injective holomorphic function fj, such that dfj = dfk on each intersection Uj ∩ Uk. Then the differentials glue together to a meromorphic 1-form on D.
-
-It is clear that the differentials glue together to a holomorphic 1-form g dz on D \ {0}. In the special case where the residue of g at 0 is zero, the conjecture follows from the "Great Picard's Theorem". diff --git a/wiki/wikipedia/3808.txt deleted file mode 100644 index 9300418c6d95349afe42b5957e09342cb725f3d9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3808.txt +++ /dev/null @@ -1,19 +0,0 @@ -Erdős on Graphs: His Legacy of Unsolved Problems is a book on unsolved problems in mathematics collected by Paul Erdős in the area of graph theory. It was written by Fan Chung and Ronald Graham, based on a 1997 survey paper by Chung, and published in 1998 by A K Peters. A softcover edition with some updates and corrections followed in 1999.
-
-The book has eight chapters, the first being a short introduction. Its main content consists of six chapters of unsolved problems, grouped by subtopic.
-
-Chapters two and three are on Ramsey theory and extremal graph theory. The fourth covers topics in graph coloring, packing problems, and covering problems. The fifth concerns graph enumeration and random graphs, the sixth generalizes from graphs to hypergraphs, and the seventh concerns infinite graphs. The book concludes with a chapter of stories about Erdős from one of his oldest friends, Andrew Vázsonyi.
-
-Each chapter begins with a survey of the history and major results in the subtopic of graph theory that it covers; Erdős himself figures prominently in the history of several of these subtopics. The individual history, motivation, known progress, and bibliographic references for each problem are included, along with (in some cases) prizes for a solution originally offered by Erdős and maintained by Chung and Graham.
-
-One target audience for the book is researchers in graph theory, for whom these problems may provide material for much future research. They may also provide an inspiration for students of mathematics, and reviewer Arthur Hobbs suggests that the book could even be used as the basis for a graduate course.
-
-Additionally, reviewers Robert Beezer and W. T. Tutte suggest that the book may be of interest to mathematicians in other areas, and to historians of mathematics, for the insight it provides into Erdős's life and work. Ralph Faudree writes that the book is suitable both as reference material and for browsing.
-
-Tutte notes, for those not familiar with the topic, that in mathematics, a well-posed and unsolved problem can itself be a significant contribution, a success rather than a failure. In a similar vein of thought, Faudree adds that the book provides "an appropriate tribute" to Erdős and his history of both formulating and solving problems. diff --git a/wiki/wikipedia/3809.txt deleted file mode 100644 index 7a3314f967f52d7eaf16e8b3bc7198a11f87b89a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3809.txt +++ /dev/null @@ -1 +0,0 @@ -#redirect Artin L-function#The Artin conjecture diff --git a/wiki/wikipedia/381.txt deleted file mode 100644 index 2093a567f51229dce4a18640b99639a512e27a61..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/381.txt +++ /dev/null @@ -1,5 +0,0 @@ -Pollock's conjectures are two closely related unproven conjectures in additive number theory.
They were first stated in 1850 by Sir Frederick Pollock.
-
-*Tetrahedral numbers conjecture: Every positive integer is the sum of at most five tetrahedral numbers.
-
-The numbers that are not the sum of at most 4 tetrahedral numbers are given by the sequence 17, 27, 33, 52, 73, ..., of 241 terms, with 343867 being almost certainly the last such number. This conjecture has been proven for all but finitely many positive integers.
-
-*Polyhedral numbers conjecture: Let m be the number of vertices of a platonic solid "regular n-hedron" (n is 4, 6, 8, 12, or 20); then every positive integer is the sum of at most m+1 n-hedral numbers. (i.e. every positive integer is the sum of at most 5 tetrahedral numbers, or the sum of at most 9 cube numbers, or the sum of at most 7 octahedral numbers, or the sum of at most 21 dodecahedral numbers, or the sum of at most 13 icosahedral numbers) diff --git a/wiki/wikipedia/3810.txt deleted file mode 100644 index e8f0e97f8515368be5ef45b20f64cfd04ccb895e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3810.txt +++ /dev/null @@ -1,3 +0,0 @@ -In computing, CloudTran, a transaction management product, enables applications running in distributed computing and cloud computing architectures to embed logical business transactions that adhere to the properties of ACID transactions. Specifically, CloudTran coordinates ACID transactionality for data stored within in-memory data grids (e.g., Oracle Coherence, GigaSpaces, and Gemfire), as well as from the data grid to persistent storage systems (e.g., Oracle, MySQL, Microsoft SQL Server, MongoDB).
-
-Distributed computing has traditionally relied on distributed transactions, a technique for coordinating the storage of a logically related set of data across more than one database or computer. CloudTran aims to address issues with this approach in order to improve performance, scalability, and ease of implementation for application developers. In doing so, CloudTran enables a broad range of developers to implement highly scalable applications that run in cloud computing environments and distributed architectures. In addition, CloudTran is a manifestation of "Cloud Transaction Processing", or "CloudTP". diff --git a/wiki/wikipedia/3811.txt deleted file mode 100644 index 019effce10c75ea50dda4c241bdd774b61b5f600..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3811.txt +++ /dev/null @@ -1,42 +0,0 @@ -The Euclid–Mullin sequence is an infinite sequence of distinct prime numbers, in which each element is the least prime factor of one plus the product of all earlier elements. It is named after the ancient Greek mathematician Euclid, because its definition relies on an idea in Euclid's proof that there are infinitely many primes, and after Albert A. Mullin, who asked about the sequence in 1963.
-
-The first 51 elements of the sequence are
-
-2, 3, 7, 43, 13, 53, 5, 6221671, 38709183810571, 139, 2801, 11, 17, 5471, 52662739, 23003, 30693651606209, 37, 1741, 1313797957, 887, 71, 7127, 109, 23, 97, 159227, 643679794963466223081509857, 103, 1079990819, 9539, 3143065813, 29, 3847, 89, 19, 577, 223, 139703, 457, 9649, 61, 4357, 87991098722552272708281251793312351581099392851768893748012603709343, 107, 127, 3313, 227432689108589532754984915075774848386671439568260420754414940780761245893, 59, 31, 211...
-
-These are the only known elements. Finding the next one requires finding the least prime factor of a 335-digit number (which is known to be composite).
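-
-A minimal sketch that reproduces the first several elements by trial division (plain Python; the helper name is ad hoc). Only the early terms are feasible this way, since later steps require factoring enormous numbers:
-
-```python
-def least_prime_factor(n):
-    d = 2
-    while d * d <= n:
-        if n % d == 0:
-            return d
-        d += 1
-    return n  # n itself is prime
-
-seq, product = [], 1
-for _ in range(8):          # term 9 already needs ~6 million trial divisions
-    p = least_prime_factor(product + 1)
-    seq.append(p)
-    product *= p
-print(seq)                  # [2, 3, 7, 43, 13, 53, 5, 6221671]
-```
-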
-
-The $n$th element of the sequence, $a_n$, is the least prime factor of
-$$
-\Bigl(\prod_{i < n} a_i\Bigr)+1.
-$$
-
-The first element is therefore the least prime factor of the empty product plus one, which is 2. The third element is the least prime factor of (2 × 3) + 1 = 7, which is 7 itself since 7 is prime. A better illustration is the fifth element in the sequence, 13. This is calculated by (2 × 3 × 7 × 43) + 1 = 1806 + 1 = 1807, the product of two primes, 13 × 139. Of these two primes, 13 is the smaller and so is included in the sequence. Similarly, the seventh element, 5, is the result of (2 × 3 × 7 × 43 × 13 × 53) + 1 = 1,244,335, the prime factors of which are 5 and 248,867. These examples illustrate why the sequence can leap from very large to very small numbers.
-
-The sequence is infinitely long and does not contain repeated elements. This can be proved using the method of Euclid's proof that there are infinitely many primes. That proof is constructive, and the sequence is the result of performing a version of that construction.
-
-Mullin asked whether every prime number appears in the Euclid–Mullin sequence and, if not, whether the problem of testing a given prime for membership in the sequence is computable. It has been conjectured, on the basis of heuristic assumptions that the distribution of primes is random, that every prime does appear in the sequence. However, although similar recursive sequences over other domains do not contain all primes, these problems both remain open for the original Euclid–Mullin sequence. The least prime number not known to be an element of the sequence is 41.
-
-The positions of the prime numbers from 2 to 97 are:
-
-2:1, 3:2, 5:7, 7:3, 11:12, 13:5, 17:13, 19:36, 23:25, 29:33, 31:50, 37:18, 41:?, 43:4, 47:?, 53:6, 59:49, 61:42, 67:?, 71:22, 73:?, 79:?, 83:?, 89:35, 97:26
-
-where ? indicates that the position (or whether it exists at all) is unknown as of 2012.
-
-A related sequence of numbers determined by the largest prime factor of one plus the product of the previous numbers (rather than the smallest prime factor) is also known as the Euclid–Mullin sequence. It grows more quickly, but is not monotonic. The numbers in this sequence are
-
-2, 3, 7, 43, 139, 50207, 340999, 2365347734339, 4680225641471129, 1368845206580129, 889340324577880670089824574922371, … .
-
-Not every prime number appears in this sequence, and the sequence of missing primes,
-
-5, 11, 13, 17, 19, 23, 29, 31, 37, 41, 47, 53, 59, 61, 67, 71, 73, ...
-
-has been proven to be infinite.
-
-It is also possible to generate modified versions of the Euclid–Mullin sequence by using the same rule of choosing the smallest prime factor at each step, but beginning with a different prime than 2.
-
-Alternatively, taking each number to be one plus the product of the previous numbers (rather than factoring it) gives Sylvester's sequence. The sequence constructed by repeatedly appending all factors of one plus the product of the previous numbers is the same as the sequence of prime factors of Sylvester's sequence. Like the Euclid–Mullin sequence, this is a non-monotonic sequence of primes, but it is known not to include all primes. diff --git a/wiki/wikipedia/3812.txt deleted file mode 100644 index d28d5ba7b86fafd802a829eebd9979e57dc1ce1a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3812.txt +++ /dev/null @@ -1,27 +0,0 @@ -Egnyte is a software company that provides a cloud platform for enterprise file synchronization and sharing as well as content and data governance for business customers.
It offers storage, collaboration, and sharing capabilities using a cloud infrastructure, and users can access files from on-premises and cloud environments.
-
-Headquartered in Mountain View, California, Egnyte was founded in 2007, and incorporated in 2008.
-
-Egnyte received $1 million in seed venture capital in 2007 and $6 million in July 2009.
-
-On 20 January 2015, the company announced what it called adaptive enterprise file services, along with content intelligence and smart reporting and auditing services.
-
-In March 2015, the company unveiled Egnyte for Google Apps, which allows customers to move files between Google Drive and their own data center. Google Apps administrators can set up the integration and then manage permissions.
-
-Egnyte also unveiled new mobile apps in May 2015 for file sharing on platforms including the Apple Watch.
-
-By June 2016, Egnyte had announced a move into data protection technology, as the file synchronization market was seen to mature.
-
-In October 2016, the company announced it would recommend Microsoft Azure as a cloud provider, despite Microsoft operating the somewhat competing OneDrive service.
-
-By the end of 2019, the company reported having more than 16,000 customers worldwide, and had reached more than $100 million in annual recurring revenue.
-
-As of 2020, the Egnyte Content Services Platform emphasizes specific areas of enterprise content management:
-
-* Content governance
-
-* Compliance
-
-* Content intelligence
-
-* Remote working diff --git a/wiki/wikipedia/3813.txt deleted file mode 100644 index dc5bcd4cd4eedd7d9d78cac5f57e9c22d7e0a4e5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3813.txt +++ /dev/null @@ -1,18 +0,0 @@ -In ergodic theory, Kac's lemma, proved by mathematician Mark Kac in 1947, states that in a measure space the orbit of almost every point of a set $A$ of measure $\mu(A)$ returns to $A$ within an average time inversely proportional to $\mu(A)$.
-
-The lemma strengthens the Poincaré recurrence theorem, which shows only that such points return to $A$ infinitely many times.
-
-Since the phase space of a bounded dynamical system with $n$ variables (that is, with each of the $n$ variables having a minimum and a maximum) is, by Liouville's theorem, a measure space, the lemma implies that, given a configuration of the system (a point of the space), the average period of return close to this configuration (in a neighbourhood of the point) is inversely proportional to the size of the volume considered around the configuration.
-
-Normalizing the total measure to 1 turns the measure space into a probability space, and the measure $P(A)$ of a set $A$ represents the probability of finding the system in one of the states represented by the points of that set. In this case the lemma implies that the smaller the probability of being in a certain state (or close to it), the longer the time of return near that state.
-
-In formulas, if $A$ is the region close to the starting point and $T_R$ is the return period, its average value is:
-$$
-\langle T_R \rangle = \tau/P(A)
-$$
-
-where $\tau$ is a characteristic time of the system in question.
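-
-A quick empirical check of the lemma (a sketch in plain Python, not from the article): iterate an irrational rotation of the circle, which preserves Lebesgue measure and is ergodic, and compare the average return time to a small interval $A$ with $1/\mu(A)$ (here $\tau$ is one iteration step).
-
-```python
-import math
-
-alpha = (math.sqrt(5) - 1) / 2      # irrational rotation angle
-A = (0.0, 0.05)                     # target set with mu(A) = 0.05
-
-x, last_visit, gaps = 0.0, None, []
-for t in range(1, 200_000):
-    x = (x + alpha) % 1.0           # measure-preserving, ergodic rotation
-    if A[0] <= x < A[1]:
-        if last_visit is not None:
-            gaps.append(t - last_visit)
-        last_visit = t
-
-print(sum(gaps) / len(gaps))        # ~ 1 / mu(A) = 20, as Kac's lemma predicts
-```
-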
-
-Note that the volume of $A$, and therefore $P(A)$, depends exponentially on the number of variables in the system ($P(A) = \epsilon^n$ for a small cube of side $\epsilon < 1$ in $n$ dimensions), so $P(A)$ decreases very rapidly as the number of variables increases and, consequently, the return period grows exponentially.
-
-In practice, as the variables needed to describe the system increase, the return period increases rapidly. diff --git a/wiki/wikipedia/3814.txt deleted file mode 100644 index fd209a9e108ac3d341bd72c92d7d8e1764dc32f3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3814.txt +++ /dev/null @@ -1,43 +0,0 @@ -Viviani's theorem, named after Vincenzo Viviani, states that the sum of the distances from any interior point to the sides of an equilateral triangle equals the length of the triangle's altitude. It is commonly employed in math competitions and secondary school mathematics examinations, and it has wide applicability to many problems in the real world.
-
-[[File:Viviani_theorem_visual_proof.svg|thumb|Visual proof of Viviani's theorem]]
-
-This proof depends on the readily-proved proposition that the area of a triangle is half its base times its height—that is, half the product of one side with the altitude from that side.
-
-Let ABC be an equilateral triangle whose height is h and whose side is a.
-
-Let P be any point inside the triangle, and u, s, t the distances of P from the sides. Draw a line from P to each of A, B, and C, forming three triangles PAB, PBC, and PCA.
-
-Now, the areas of these triangles are $\frac{u \cdot a}{2}$, $\frac{s \cdot a}{2}$, and $\frac{t \cdot a}{2}$. They exactly fill the enclosing triangle, so the sum of these areas is equal to the area of the enclosing triangle.
-
-So we can write:
-$$
-\frac{u \cdot a}{2} + \frac{s \cdot a}{2} + \frac{t \cdot a}{2} = \frac{h \cdot a}{2}
-$$
-
-and thus
-$$
-u + s + t = h
-$$
-
-Q.E.D.
-
-The converse also holds: If the sum of the distances from an interior point of a triangle to the sides is independent of the location of the point, the triangle is equilateral.
-
-Viviani's theorem means that lines parallel to the sides of an equilateral triangle give coordinates for making ternary plots, such as flammability diagrams.
-
-More generally, they allow one to give coordinates on a regular simplex in the same way.
-
-The sum of the distances from any interior point of a parallelogram to the sides is independent of the location of the point. The converse also holds: If the sum of the distances from a point in the interior of a quadrilateral to the sides is independent of the location of the point, then the quadrilateral is a parallelogram.
-
-The result generalizes to any 2n-gon with opposite sides parallel. Since the sum of distances between any pair of opposite parallel sides is constant, it follows that the sum of all pairwise sums between the pairs of parallel sides is also constant. The converse in general is not true, as the result holds for an equilateral hexagon, which does not necessarily have opposite sides parallel.
-
-If a polygon is regular (both equiangular and equilateral), the sum of the distances to the sides from an interior point is independent of the location of the point. Specifically, it equals n times the apothem, where n is the number of sides and the apothem is the distance from the center to a side. However, the converse does not hold; the non-square parallelogram is a counterexample.
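-
-A quick numerical check of the planar theorem above (a Python sketch written for illustration; the interior point is sampled in barycentric coordinates, and `dist_to_line` is an ad hoc helper):
-
-```python
-import math, random
-
-a = 1.0
-A, B, C = (0.0, 0.0), (a, 0.0), (a / 2, a * math.sqrt(3) / 2)
-h = a * math.sqrt(3) / 2                      # altitude of the triangle
-
-def dist_to_line(p, q, r):
-    """Distance from point p to the line through q and r."""
-    (px, py), (qx, qy), (rx, ry) = p, q, r
-    cross = (rx - qx) * (qy - py) - (ry - qy) * (qx - px)
-    return abs(cross) / math.dist(q, r)
-
-for _ in range(5):
-    u, v = sorted(random.random() for _ in range(2))
-    w1, w2, w3 = u, v - u, 1 - v              # barycentric weights, sum to 1
-    P = (w1 * A[0] + w2 * B[0] + w3 * C[0],
-         w1 * A[1] + w2 * B[1] + w3 * C[1])
-    s = sum(dist_to_line(P, *side) for side in ((A, B), (B, C), (C, A)))
-    assert abs(s - h) < 1e-9                  # u + s + t = h, as claimed
-```
-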
-
-The sum of the distances from an interior point to the sides of an equiangular polygon does not depend on the location of the point.
-
-A necessary and sufficient condition for a convex polygon to have a constant sum of distances from any interior point to the sides is that there exist three non-collinear interior points with equal sums of distances.
-
-The sum of the distances from any point in the interior of a regular polyhedron to the faces is independent of the location of the point. However, the converse does not hold, not even for tetrahedra. diff --git a/wiki/wikipedia/3815.txt deleted file mode 100644 index 5ff596a69eb22f827df28d6b539e78a7ecd2111f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3815.txt +++ /dev/null @@ -1,31 +0,0 @@ -Transaction processing is information processing in computer science that is divided into individual, indivisible operations called transactions. A transaction has the following properties:
-
-Atomicity: A transaction's changes to the state are atomic: either all happen or none happen. These changes include database changes, messages, and actions on transducers.
-
-Consistency: A transaction is a correct transformation of the state. The actions taken as a group do not violate any of the integrity constraints associated with the state.
-
-Isolation: Even though transactions execute concurrently, it appears to each transaction T that others executed either before T or after T, but not both.
-
-Durability: Once a transaction completes successfully (commits), its changes to the database survive failures.
-
-Transaction processing has these benefits:
-
-*It allows sharing of computer resources among many users
-
-*It shifts the time of job processing to when the computing resources are less busy
-
-*It avoids idling the computing resources without minute-by-minute human interaction and supervision
-
-*It is used on expensive classes of computers to help amortize the cost by keeping high rates of utilization of those expensive resources
-
-It also has these drawbacks:
-
-*They have relatively expensive setup costs
-
-*There is a lack of standard formats
-
-*Hardware and software incompatibility
-
-Standard transaction-processing software, such as IBM's Information Management System, was first developed in the 1960s, and was often closely coupled to particular database management systems. Client–server computing implemented similar principles in the 1980s with mixed success. However, in more recent years, the distributed client–server model has become considerably more difficult to maintain. As the number of transactions grew in response to various online services (especially the Web), a single distributed database was not a practical solution. In addition, most online systems consist of a whole suite of programs operating together, as opposed to a strict client–server model where the single server could handle the transaction processing. Today a number of transaction processing systems are available that work at the inter-program level and which scale to large systems, including mainframes.
-
-One effort is the X/Open Distributed Transaction Processing (DTP) standard (see also Java Transaction API (JTA)). However, proprietary transaction-processing environments such as IBM's CICS are still very popular, although CICS has evolved to include open industry standards as well.
-
-The term extreme transaction processing (XTP) was used to describe transaction processing systems with uncommonly challenging requirements, particularly throughput requirements (transactions per second). Such systems may be implemented via distributed or cluster style architectures. The term was in use at least by 2011.
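-
-To make atomicity concrete, a minimal sketch using Python's built-in sqlite3 module (chosen here purely for illustration; it is unrelated to the products named above). A transaction that fails midway leaves the database exactly as it was:
-
-```python
-import sqlite3
-
-conn = sqlite3.connect(":memory:")
-conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
-conn.execute("INSERT INTO account VALUES ('alice', 100), ('bob', 0)")
-conn.commit()
-
-try:
-    with conn:  # opens a transaction; commits on success, rolls back on error
-        conn.execute("UPDATE account SET balance = balance - 60 "
-                     "WHERE name = 'alice'")
-        raise RuntimeError("simulated crash between the debit and the credit")
-except RuntimeError:
-    pass
-
-# Atomicity: the partial debit was rolled back, so nothing changed.
-print(list(conn.execute("SELECT * FROM account")))  # [('alice', 100), ('bob', 0)]
-```
-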
diff --git a/wiki/wikipedia/3816.txt deleted file mode 100644 index 2e07030921b5224dea44f17090eca02006b790d8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3816.txt +++ /dev/null @@ -1,252 +0,0 @@ -The travelling salesman problem (also called the travelling salesperson problem or TSP) asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in theoretical computer science and operations research.
-
-The travelling purchaser problem and the vehicle routing problem are both generalizations of TSP.
-
-In the theory of computational complexity, the decision version of the TSP (where given a length L, the task is to decide whether the graph has a tour of at most L) belongs to the class of NP-complete problems. Thus, it is possible that the worst-case running time for any algorithm for the TSP increases superpolynomially (but no more than exponentially) with the number of cities.
-
-The problem was first formulated in 1930 and is one of the most intensively studied problems in optimization. It is used as a benchmark for many optimization methods. Even though the problem is computationally difficult, many heuristics and exact algorithms are known, so that some instances with tens of thousands of cities can be solved completely and even problems with millions of cities can be approximated within a small fraction of 1%.
-
-The TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept city represents, for example, customers, soldering points, or DNA fragments, and the concept distance represents travelling times or cost, or a similarity measure between DNA fragments. The TSP also appears in astronomy, as astronomers observing many sources will want to minimize the time spent moving the telescope between the sources; in such problems, the TSP can be embedded inside an optimal control problem. In many applications, additional constraints such as limited resources or time windows may be imposed.
-
-The origins of the travelling salesman problem are unclear. A handbook for travelling salesmen from 1832 mentions the problem and includes example tours through Germany and Switzerland, but contains no mathematical treatment.
-
-The travelling salesman problem was mathematically formulated in the 19th century by the Irish mathematician W.R. Hamilton and by the British mathematician Thomas Kirkman. Hamilton's icosian game was a recreational puzzle based on finding a Hamiltonian cycle. The general form of the TSP appears to have been first studied by mathematicians during the 1930s in Vienna and at Harvard, notably by Karl Menger, who defined the problem, considered the obvious brute-force algorithm, and observed the non-optimality of the nearest neighbour heuristic.
-
-It was first considered mathematically in the 1930s by Merrill M. Flood, who was looking to solve a school bus routing problem.
-
-Hassler Whitney at Princeton University generated interest in the problem, which he called the "48 states problem".
The earliest publication using the phrase "travelling salesman problem" was the 1949 RAND Corporation report by Julia Robinson, "On the Hamiltonian game (a traveling salesman problem)."
-
-In the 1950s and 1960s, the problem became increasingly popular in scientific circles in Europe and the US after the RAND Corporation in Santa Monica offered prizes for steps in solving the problem. In 1976, Christofides and Serdyukov independently made a notable advance: the Christofides–Serdyukov algorithm yields a solution that, in the worst case, is at most 1.5 times longer than the optimal solution. As the algorithm was simple and quick, many hoped it would give way to a near-optimal solution method. However, this hoped-for improvement did not immediately materialize, and Christofides–Serdyukov remained the method with the best worst-case guarantee until 2011, when a (very) slightly improved approximation algorithm was developed for the subset of "graphical" TSPs. In 2020 this tiny improvement was extended to the full (metric) TSP.
-
-Richard M. Karp showed in 1972 that the Hamiltonian cycle problem was NP-complete, which implies the NP-hardness of TSP. This supplied a mathematical explanation for the apparent computational difficulty of finding optimal tours.
-
-Great progress was made in the late 1970s and 1980, when Grötschel, Padberg, Rinaldi and others managed to exactly solve instances with up to 2,392 cities, using cutting planes and branch and bound.
-
-In the 1990s, Applegate, Bixby, Chvátal, and Cook developed the program Concorde, which has been used in many recent record solutions. Gerhard Reinelt published the TSPLIB in 1991, a collection of benchmark instances of varying difficulty, which has been used by many research groups for comparing results. In 2006, Cook and others computed an optimal tour through an 85,900-city instance given by a microchip layout problem, currently the largest solved TSPLIB instance. For many other instances with millions of cities, solutions can be found that are guaranteed to be within 2–3% of an optimal tour.
-
-TSP can be modelled as an undirected weighted graph, such that cities are the graph's vertices, paths are the graph's edges, and a path's distance is the edge's weight. It is a minimization problem starting and finishing at a specified vertex after having visited each other vertex exactly once. Often, the model is a complete graph (i.e., each pair of vertices is connected by an edge). If no path exists between two cities, adding a sufficiently long edge will complete the graph without affecting the optimal tour.
-
-In the symmetric TSP, the distance between two cities is the same in each opposite direction, forming an undirected graph. This symmetry halves the number of possible solutions. In the asymmetric TSP, paths may not exist in both directions or the distances might be different, forming a directed graph. Traffic collisions, one-way streets, and airfares for cities with different departure and arrival fees are examples of how this symmetry could break down.
-
-* An equivalent formulation in terms of graph theory is: Given a complete weighted graph (where the vertices would represent the cities, the edges would represent the roads, and the weights would be the cost or distance of that road), find a Hamiltonian cycle with the least weight.
-
-* The requirement of returning to the starting city does not change the computational complexity of the problem; see Hamiltonian path problem.
-
* Another related problem is the bottleneck travelling salesman problem (bottleneck TSP): Find a Hamiltonian cycle in a weighted graph with the minimal weight of the weightiest edge (for example, avoiding narrow streets with big buses). The problem is of considerable practical importance, apart from evident transportation and logistics areas. A classic example is in printed circuit manufacturing: scheduling of a route of the drill machine to drill holes in a PCB. In robotic machining or drilling applications, the "cities" are parts to machine or holes (of different sizes) to drill, and the "cost of travel" includes time for retooling the robot (single machine job sequencing problem).
-
-* The generalized travelling salesman problem, also known as the "travelling politician problem", deals with "states" that have (one or more) "cities", and the salesman has to visit exactly one "city" from each "state". One application is encountered in ordering a solution to the cutting stock problem in order to minimize knife changes. Another is concerned with drilling in semiconductor manufacturing. Noon and Bean demonstrated that the generalized travelling salesman problem can be transformed into a standard travelling salesman problem with the same number of cities, but a modified distance matrix.
-
-* The sequential ordering problem deals with the problem of visiting a set of cities where precedence relations between the cities exist.
-
-* A common interview question at Google is how to route data among data processing nodes; routes vary by time to transfer the data, but nodes also differ by their computing power and storage, compounding the problem of where to send data.
-
-* The travelling purchaser problem deals with a purchaser who is charged with purchasing a set of products. He can purchase these products in several cities, but at different prices, and not all cities offer the same products. The objective is to find a route between a subset of the cities that minimizes total cost (travel cost + purchasing cost) and enables the purchase of all required products.
-
-The TSP can be formulated as an integer linear program. Several formulations are known. Two notable formulations are the Miller–Tucker–Zemlin (MTZ) formulation and the Dantzig–Fulkerson–Johnson (DFJ) formulation. The DFJ formulation is stronger, though the MTZ formulation is still useful in certain settings.
-
-Label the cities with the numbers $1,\ldots,n$ and define:
-$$
- x_{ij} = \begin{cases} 1 & \text{the path goes from city } i \text{ to city } j \\ 0 & \text{otherwise} \end{cases}
-$$
-
-For $i=1,\ldots,n$, let $u_i$ be a dummy variable, and finally take $c_{ij} > 0$ to be the distance from city $i$ to city $j$. Then TSP can be written as the following integer linear programming problem:
-$$
-\begin{align}
-\min &\sum_{i=1}^n \sum_{j\ne i,j=1}^nc_{ij}x_{ij}\colon && \\
-& x_{ij} \in \{0,1\} && i,j=1, \ldots, n; \\
-& u_{i} \in \mathbf{Z} && i=2, \ldots, n; \\
-& \sum_{i=1,i\ne j}^n x_{ij} = 1 && j=1, \ldots, n; \\
-& \sum_{j=1,j\ne i}^n x_{ij} = 1 && i=1, \ldots, n; \\
-& u_i-u_j +nx_{ij} \le n-1 && 2 \le i \ne j \le n; \\
-& 1 \le u_i \le n-1 && 2 \le i \le n.
-\end{align}
-$$
-
-The first set of equalities requires that each city is arrived at from exactly one other city, and the second set of equalities requires that from each city there is a departure to exactly one other city.
The last constraints enforce that there is only a single tour covering all cities, and not two or more disjointed tours that only collectively cover all cities. To prove this, it is shown below (1) that every feasible solution contains only one closed sequence of cities, and (2) that for every single tour covering all cities, there are values for the dummy variables $u_i$ that satisfy the constraints. (The dummy variables indicate tour ordering, such that $u_i < u_j$ implies city $i$ is visited before city $j$. This may be accomplished by incrementing $u_i$ each time it is visited.)
-
-To prove that every feasible solution contains only one closed sequence of cities, it suffices to show that every subtour in a feasible solution passes through city 1 (noting that the equalities ensure there can only be one such tour). For if we sum all the inequalities corresponding to $x_{ij}=1$ for any subtour of k steps not passing through city 1, we obtain:
-$$
-nk \leq (n-1)k,
-$$
-
-which is a contradiction.
-
-It now must be shown that for every single tour covering all cities, there are values for the dummy variables $u_i$ that satisfy the constraints.
-
-Without loss of generality, define the tour as originating (and ending) at city 1. Choose $u_{i}=t$ if city $i$ is visited in step $t$ $(i,t=1,2,\ldots,n)$. Then
-$$
-u_i-u_j\le n-1,
-$$
-
-since $u_i$ can be no greater than $n$ and $u_j$ can be no less than 1; hence the constraints are satisfied whenever $x_{ij}=0.$ For $x_{ij}=1$, we have:
-$$
- u_{i} - u_{j} + nx_{ij} = (t) - (t+1) + n = n-1,
-$$
-
-satisfying the constraint.
-
-Label the cities with the numbers 1, …, n and define:
-$$
- x_{ij} = \begin{cases} 1 & \text{the path goes from city } i \text{ to city } j \\ 0 & \text{otherwise} \end{cases}
-$$
-
-Take $c_{ij} > 0$ to be the distance from city i to city j. Then TSP can be written as the following integer linear programming problem:
-$$
-\begin{align}
-\min &\sum_{i=1}^n \sum_{j\ne i,j=1}^nc_{ij}x_{ij}\colon && \\
-& \sum_{i=1,i\ne j}^n x_{ij} = 1 && j=1, \ldots, n; \\
-& \sum_{j=1,j\ne i}^n x_{ij} = 1 && i=1, \ldots, n; \\
-& \sum_{i \in Q}{\sum_{j \ne i, j \in Q}{x_{ij}}} \leq |Q|-1 && \forall Q \subsetneq \{1, \ldots, n\}, |Q| \geq 2
-\end{align}
-$$
-
-The last constraint of the DFJ formulation ensures no proper subset Q can form a sub-tour, so the solution returned is a single tour and not the union of smaller tours. Because this leads to an exponential number of possible constraints, in practice it is solved by generating the violated constraints lazily, adding subtour-elimination cuts as they are found.
-
-The traditional lines of attack for the NP-hard problems are the following:
-
-* Devising exact algorithms, which work reasonably fast only for small problem sizes.
-
-* Devising "suboptimal" or heuristic algorithms, i.e., algorithms that deliver approximated solutions in a reasonable time.
-
-* Finding special cases for the problem ("subproblems") for which either better or exact heuristics are possible.
-
-The most direct solution would be to try all permutations (ordered combinations) and see which one is cheapest (using brute-force search). The running time for this approach lies within a polynomial factor of $O(n!)$, the factorial of the number of cities, so this solution becomes impractical even for only 20 cities.
-
-One of the earliest applications of dynamic programming is the Held–Karp algorithm, which solves the problem in time $O(n^2 2^n)$. This bound has also been reached by inclusion–exclusion, in an attempt preceding the dynamic programming approach.
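-
-A compact illustration of the Held–Karp recurrence (a Python sketch, not production code; the function name is ad hoc). It tabulates, for each subset S of cities and each endpoint j, the cheapest path from city 0 through all of S ending at j:
-
-```python
-from itertools import combinations
-
-def held_karp(dist):
-    """Exact TSP tour length by dynamic programming, O(n^2 * 2^n) time.
-
-    dist[i][j] is the distance from city i to city j; the tour starts
-    and ends at city 0.
-    """
-    n = len(dist)
-    # C[(S, j)]: cost of the cheapest path starting at 0, visiting every
-    # city in frozenset S exactly once, and ending at j (j in S, 0 not in S).
-    C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
-    for size in range(2, n):
-        for subset in combinations(range(1, n), size):
-            S = frozenset(subset)
-            for j in S:
-                C[(S, j)] = min(C[(S - {j}, k)] + dist[k][j] for k in S - {j})
-    full = frozenset(range(1, n))
-    return min(C[(full, j)] + dist[j][0] for j in full)
-
-print(held_karp([[0, 1, 4], [1, 0, 2], [4, 2, 0]]))  # 7 = tour 0-1-2-0
-```
-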
Improving these time bounds seems to be difficult. For example, it has not been determined whether an exact algorithm for TSP that runs in time $O(1.9999^n)$ exists.
-
-Other approaches include:
-
-* Various branch-and-bound algorithms, which can be used to process TSPs containing 40–60 cities.
-
-* Progressive improvement algorithms, which use techniques reminiscent of linear programming and work well for up to 200 cities.
-
-* Implementations of branch-and-bound and problem-specific cut generation (branch-and-cut); this is the method of choice for solving large instances. This approach holds the current record, solving an instance with 85,900 cities.
-
-An exact solution for 15,112 German towns from TSPLIB was found in 2001 using the cutting-plane method proposed by George Dantzig, Ray Fulkerson, and Selmer M. Johnson in 1954, based on linear programming. The computations were performed on a network of 110 processors located at Rice University and Princeton University. The total computation time was equivalent to 22.6 years on a single 500 MHz Alpha processor. In May 2004, the travelling salesman problem of visiting all 24,978 towns in Sweden was solved: a tour of length approximately 72,500 kilometres was found and it was proven that no shorter tour exists. In March 2005, the travelling salesman problem of visiting all 33,810 points in a circuit board was solved using Concorde TSP Solver: a tour of length 66,048,945 units was found and it was proven that no shorter tour exists. The computation took approximately 15.7 CPU-years (Cook et al. 2006). In April 2006 an instance with 85,900 points was solved using Concorde TSP Solver, taking over 136 CPU-years.
-
-Various heuristics and approximation algorithms, which quickly yield good solutions, have been devised. These include the Multi-fragment algorithm. Modern methods can find solutions for extremely large problems (millions of cities) within a reasonable time that are, with high probability, just 2–3% away from the optimal solution.
-
-The Lin–Kernighan heuristic is a special case of the V-opt or variable-opt technique. It involves the following steps:
-
-# Given a tour, delete k mutually disjoint edges.
-
-# Reassemble the remaining fragments into a tour, leaving no disjoint subtours (that is, don't connect a fragment's endpoints together). This in effect simplifies the TSP under consideration into a much simpler problem.
-
-# Each fragment endpoint can be connected to 2k − 2 other possibilities: of 2k total fragment endpoints available, the two endpoints of the fragment under consideration are disallowed. Such a constrained 2k-city TSP can then be solved with brute force methods to find the least-cost recombination of the original fragments.
-
-The most popular of the k-opt methods is 3-opt, as introduced by Shen Lin of Bell Labs in 1965. A special case of 3-opt is where the edges are not disjoint (two of the edges are adjacent to one another). In practice, it is often possible to achieve substantial improvement over 2-opt without the combinatorial cost of the general 3-opt by restricting the 3-changes to this special subset where two of the removed edges are adjacent. This so-called two-and-a-half-opt typically falls roughly midway between 2-opt and 3-opt, both in terms of the quality of tours achieved and the time required to achieve those tours.
-
-The variable-opt method is related to, and a generalization of, the k-opt method.
Whereas the k-opt methods remove a fixed number (k) of edges from the original tour, the variable-opt methods do not fix the size of the edge set to remove. Instead they grow the set as the search process continues. The best known method in this family is the Lin–Kernighan method (mentioned above as a misnomer for 2-opt). Shen Lin and Brian Kernighan first published their method in 1972, and it was the most reliable heuristic for solving travelling salesman problems for nearly two decades. More advanced variable-opt methods were developed at Bell Labs in the late 1980s by David Johnson and his research team. These methods (sometimes called Lin–Kernighan–Johnson) build on the Lin–Kernighan method, adding ideas from tabu search and evolutionary computing. The basic Lin–Kernighan technique gives results that are guaranteed to be at least 3-opt. The Lin–Kernighan–Johnson methods compute a Lin–Kernighan tour, and then perturb the tour by what has been described as a mutation that removes at least four edges and reconnects the tour in a different way, then V-opts the new tour. The mutation is often enough to move the tour from the local minimum identified by Lin–Kernighan. V-opt methods are widely considered the most powerful heuristics for the problem, and are able to address special cases, such as the Hamilton Cycle Problem and other non-metric TSPs that other heuristics fail on. For many years Lin–Kernighan–Johnson had identified optimal solutions for all TSPs where an optimal solution was known and had identified the best known solutions for all other TSPs on which the method had been tried. - -Optimized Markov chain algorithms which use local searching heuristic sub-algorithms can find a route extremely close to the optimal route for 700 to 800 cities. - -TSP is a touchstone for many general heuristics devised for combinatorial optimization such as genetic algorithms, simulated annealing, tabu search, ant colony optimization, river formation dynamics (see swarm intelligence) and the cross entropy method. - -Artificial intelligence researcher Marco Dorigo described in 1993 a method of heuristically generating "good solutions" to the TSP using a simulation of an ant colony called ACS (ant colony system). It models behaviour observed in real ants to find short paths between food sources and their nest, an emergent behaviour resulting from each ant's preference to follow trail pheromones deposited by other ants. - -ACS sends out a large number of virtual ant agents to explore many possible routes on the map. Each ant probabilistically chooses the next city to visit based on a heuristic combining the distance to the city and the amount of virtual pheromone deposited on the edge to the city. The ants explore, depositing pheromone on each edge that they cross, until they have all completed a tour. At this point the ant which completed the shortest tour deposits virtual pheromone along its complete tour route (global trail updating). The amount of pheromone deposited is inversely proportional to the tour length: the shorter the tour, the more it deposits. - -In the metric TSP, also known as delta-TSP or Δ-TSP, the intercity distances satisfy the triangle inequality. - -A very natural restriction of the TSP is to require that the distances between cities form a metric satisfying the triangle inequality; that is, the direct connection from A to B is never farther than the route via an intermediate C: -$$ -d_{AB} \le d_{AC} + d_{CB} -$$. - -The edge lengths then define a metric on the set of vertices.
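Whether a given distance matrix satisfies this condition can be checked directly from the definition; a minimal illustrative sketch:

```python
def is_metric(d, tol=1e-9):
    """Check that the matrix d is symmetric and satisfies the triangle
    inequality d[a][b] <= d[a][c] + d[c][b] for all a, b, c.
    (Zero diagonal and nonnegativity are not checked here.)"""
    n = len(d)
    for a in range(n):
        for b in range(n):
            if abs(d[a][b] - d[b][a]) > tol:
                return False
            for c in range(n):
                if d[a][b] > d[a][c] + d[c][b] + tol:
                    return False
    return True
```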
When the cities are viewed as points in the plane, many natural distance functions are metrics, and so many natural instances of TSP satisfy this constraint. - -The following are some examples of metric TSPs for various metrics. - -*In the Euclidean TSP (see below) the distance between two cities is the Euclidean distance between the corresponding points. - -*In the rectilinear TSP the distance between two cities is the sum of the absolute values of the differences of their x- and y-coordinates. This metric is often called the Manhattan distance or city-block metric. - -*In the maximum metric, the distance between two points is the maximum of the absolute values of the differences of their x- and y-coordinates. - -The last two metrics appear, for example, in routing a machine that drills a given set of holes in a printed circuit board. The Manhattan metric corresponds to a machine that adjusts first one co-ordinate, and then the other, so the time to move to a new point is the sum of both movements. The maximum metric corresponds to a machine that adjusts both co-ordinates simultaneously, so the time to move to a new point is the slower of the two movements. - -In its definition, the TSP does not allow cities to be visited twice, but many applications do not need this constraint. In such cases, a symmetric, non-metric instance can be reduced to a metric one. This replaces the original graph with a complete graph in which the inter-city distance $d_{AB}$ is replaced by the shortest path length between A and B in the original graph. - -When the input numbers can be arbitrary real numbers, Euclidean TSP is a particular case of metric TSP, since distances in a plane obey the triangle inequality. When the input numbers must be integers, comparing lengths of tours involves comparing sums of square-roots. - -Like the general TSP, Euclidean TSP is NP-hard in either case. With rational coordinates and discretized metric (distances rounded up to an integer), the problem is NP-complete. With rational coordinates and the actual Euclidean metric, Euclidean TSP is known to be in the Counting Hierarchy, a subclass of PSPACE. With arbitrary real coordinates, Euclidean TSP cannot be in such classes, since there are uncountably many possible inputs. However, Euclidean TSP is probably the easiest version for approximation. For example, the minimum spanning tree of the graph associated with an instance of the Euclidean TSP is a Euclidean minimum spanning tree, and so can be computed in expected O(n log n) time for n points (considerably less than the number of edges). This enables the simple 2-approximation algorithm for TSP with triangle inequality above to operate more quickly. - -In general, for any c > 0, where d is the number of dimensions in the Euclidean space, there is a polynomial-time algorithm that finds a tour of length at most (1 + 1/c) times the optimal for geometric instances of TSP in -$$ -O\left(n (\log n)^{(O(c \sqrt{d}))^{d-1}}\right) -$$ - -time; this is called a polynomial-time approximation scheme (PTAS). Sanjeev Arora and Joseph S. B. Mitchell were awarded the Gödel Prize in 2010 for their concurrent discovery of a PTAS for the Euclidean TSP. - -In practice, simpler heuristics with weaker guarantees continue to be used. - -In most cases, the distance between two nodes in the TSP network is the same in both directions. The case where the distance from A to B is not equal to the distance from B to A is called asymmetric TSP.
A practical application of an asymmetric TSP is route optimization using street-level routing (which is made asymmetric by one-way streets, slip-roads, motorways, etc.). - -Solving an asymmetric TSP graph can be somewhat complex. Consider, for example, a 3×3 matrix containing all possible path weights between the nodes A, B and C. One option is to turn an asymmetric matrix of size N into a symmetric matrix of size 2N. - -To double the size, each of the nodes in the graph is duplicated, creating a second ghost node, linked to the original node with a "ghost" edge of very low (possibly negative) weight, here denoted −w. (Alternatively, the ghost edges have weight 0, and weight w is added to all other edges.) In the resulting 6×6 matrix, the original 3×3 matrix occupies the bottom-left block and the transpose of the original the top-right block. Both copies of the matrix have had their diagonals replaced by the low-cost hop paths, represented by −w. In the new graph, no edge directly links original nodes and no edge directly links ghost nodes. - -The weight −w of the "ghost" edges linking the ghost nodes to the corresponding original nodes must be low enough to ensure that all ghost edges must belong to any optimal symmetric TSP solution on the new graph (w=0 is not always low enough). As a consequence, in the optimal symmetric tour, each original node appears next to its ghost node (e.g. a possible path is $\mathrm{A \to A' \to C \to C' \to B \to B' \to A}$) and by merging the original and ghost nodes again we get an (optimal) solution of the original asymmetric problem (in our example, $\mathrm{A \to C \to B \to A}$). - -There is an analogous problem in geometric measure theory which asks the following: under what conditions may a subset E of Euclidean space be contained in a rectifiable curve (that is, when is there a curve with finite length that visits every point in E)? This problem is known as the analyst's travelling salesman problem. - -Suppose $X_1,\ldots,X_n$ are $n$ independent random variables with uniform distribution in the square $[0,1]^2$, and let $L^\ast_n$ be the shortest path length (i.e. TSP solution) for this set of points, according to the usual Euclidean distance. It is known that, almost surely, -$$ -\frac{L^*_n}{\sqrt n}\rightarrow \beta\qquad\text{when }n\to\infty, -$$ - -where $\beta$ is a positive constant that is not known explicitly. Since $L^*_n\le2\sqrt n+2$ (see below), it follows from the bounded convergence theorem that $\beta=\lim_{n\to\infty} \mathbb E[L^*_n]/\sqrt n$, hence lower and upper bounds on $\beta$ follow from bounds on $\mathbb E[L^*_n]$. - -The almost sure limit $\frac{L^*_n}{\sqrt n}\rightarrow \beta$ as $n\to\infty$ may not exist if the independent locations $X_1,\ldots,X_n$ are replaced with observations from a stationary ergodic process with uniform marginals. - -*One has $L^*_n\le 2\sqrt{n}+2$, and therefore $\beta\leq 2$, by using a naive path which visits the points monotonically inside each of $\sqrt n$ slices of width $1/\sqrt{n}$ in the square. - -*Few proved $L^*_n\le\sqrt{2n}+1.75$, hence $\beta\le\sqrt 2$, later improved by Karloff (1987): $\beta\le0.984\sqrt2$. - -* Fietcher showed an upper bound of $\beta\le 0.73\dots$. - -*By observing that $\mathbb E[L^*_n]$ is greater than $n$ times the distance between $X_0$ and the closest point $X_i\ne X_0$, one gets (after a short computation) -$$ -\mathbb E[L^*_n]\ge\tfrac{1}{2} \sqrt{n}.
-$$ - -*A better lower bound is obtained by observing that $\mathbb E[L^*_n]$ is greater than $\tfrac12n$ times the sum of the distances between $X_0$ and the closest and second closest points $X_i,X_j\ne X_0$, which gives -$$ -\mathbb E[L^*_n]\ge \left( \tfrac{1}{4} + \tfrac{3}{8} \right)\sqrt{n} = \tfrac{5}{8}\sqrt{n}, -$$ - -*The currently best lower bound is -$$ -\mathbb E[L^*_n]\ge (\tfrac{5}{8} + \tfrac{19}{5184})\sqrt{n}, -$$ - -*Held and Karp gave a polynomial-time algorithm that provides numerical lower bounds for $L^*_n$, and thus for $\beta \simeq L^*_n/\sqrt n$, which seem to be accurate to within roughly 1%. In particular, David S. Johnson obtained a lower bound by computer experiment: -$$ -L^*_n\gtrsim 0.7080\sqrt{n}+0.522, -$$ - -where 0.522 accounts for points near the square boundary, which have fewer neighbours, - -and Christine L. Valenzuela and Antonia J. Jones obtained another numerical lower bound: -$$ -L^*_n\gtrsim 0.7078\sqrt{n}+0.551 -$$. - -The problem has been shown to be NP-hard (more precisely, it is complete for the complexity class $\mathrm{FP^{NP}}$; see function problem), and the decision problem version ("given the costs and a number x, decide whether there is a round-trip route cheaper than x") is NP-complete. The bottleneck travelling salesman problem is also NP-hard. The problem remains NP-hard even for the case when the cities are in the plane with Euclidean distances, as well as in a number of other restrictive cases. Removing the condition of visiting each city "only once" does not remove the NP-hardness, since in the planar case there is an optimal tour that visits each city only once (otherwise, by the triangle inequality, a shortcut that skips a repeated visit would not increase the tour length). - -In the general case, finding a shortest travelling salesman tour is NPO-complete. If the distance measure is a metric (and thus symmetric), the problem becomes APX-complete and the algorithm of Christofides and Serdyukov approximates it within 1.5. The first issue of the Journal of Problem Solving was devoted to the topic of human performance on TSP, and a 2011 review listed dozens of papers on the subject. diff --git a/wiki/wikipedia/3817.txt b/wiki/wikipedia/3817.txt deleted file mode 100644 index b115583a659edf2db23b9b11b1154ae3a23559c4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3817.txt +++ /dev/null @@ -1,45 +0,0 @@ -In mathematics, Abel's inequality, named after Niels Henrik Abel, supplies a simple bound on the absolute value of the inner product of two vectors in an important special case. - -Let $\{a_1, a_2,\ldots\}$ be a sequence of real numbers that is either nonincreasing or nondecreasing, and let $\{b_1, b_2,\ldots\}$ be a sequence of real or complex numbers. - -If $\{a_n\}$ is nondecreasing, it holds that - - - -\left |\sum_{k=1}^n a_k b_k \right | \le \operatorname{max}_{k=1,\dots,n} |B_k| (|a_n| + a_n - a_1), - - - -and if $\{a_n\}$ is nonincreasing, it holds that - - - -\left |\sum_{k=1}^n a_k b_k \right | \le \operatorname{max}_{k=1,\dots,n} |B_k| (|a_n| - a_n + a_1), - - - -where - - - -B_k =b_1+\cdots+b_k.
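These bounds are easy to sanity-check numerically. A minimal illustrative sketch (not from the article) that compares both sides of the nondecreasing case for random sequences:

```python
import random

def abel_bound_check(trials=1000, n=20):
    """Empirically verify |sum a_k b_k| <= max_k |B_k| * (|a_n| + a_n - a_1)
    for nondecreasing real sequences a and arbitrary real sequences b."""
    for _ in range(trials):
        a = sorted(random.uniform(-1, 1) for _ in range(n))  # nondecreasing
        b = [random.uniform(-1, 1) for _ in range(n)]
        B, partial = [], 0.0
        for x in b:                      # partial sums B_k = b_1 + ... + b_k
            partial += x
            B.append(partial)
        lhs = abs(sum(ai * bi for ai, bi in zip(a, b)))
        rhs = max(abs(Bk) for Bk in B) * (abs(a[-1]) + a[-1] - a[0])
        assert lhs <= rhs + 1e-12
    return True
```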
- - - -In particular, if the sequence $\{a_n\}$ is nonincreasing and nonnegative, it follows that - - - -\left |\sum_{k=1}^n a_k b_k \right | \le \operatorname{max}_{k=1,\dots,n} |B_k| a_1. - - - -Abel's inequality follows easily from Abel's transformation, which is the discrete version of integration by parts: if $\{a_1, a_2, \ldots\}$ and $\{b_1, b_2, \ldots\}$ are sequences of real or complex numbers, it holds that - - - -\sum_{k=1}^n a_k b_k = a_n B_n - \sum_{k=1}^{n-1} B_k (a_{k+1} - a_k). - - diff --git a/wiki/wikipedia/3818.txt b/wiki/wikipedia/3818.txt deleted file mode 100644 index 3b2241e27c043dbe0882673212444e7f57e6e183..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3818.txt +++ /dev/null @@ -1,59 +0,0 @@ -In mathematics, the Birch and Swinnerton-Dyer conjecture describes the set of rational solutions to equations defining an elliptic curve. It is an open problem in the field of number theory and is widely recognized as one of the most challenging mathematical problems. It is named after mathematicians Bryan John Birch and Peter Swinnerton-Dyer, who developed the conjecture during the first half of the 1960s with the help of machine computation. To date, only special cases of the conjecture have been proven. - -The modern formulation of the conjecture relates arithmetic data associated with an elliptic curve E over a number field K to the behaviour of the Hasse–Weil L-function L(E, s) of E at s = 1. More specifically, it is conjectured that the rank of the abelian group E(K) of points of E is the order of the zero of L(E, s) at s = 1, and the first non-zero coefficient in the Taylor expansion of L(E, s) at s = 1 is given by more refined arithmetic data attached to E over K. - -The conjecture was chosen as one of the seven Millennium Prize Problems listed by the Clay Mathematics Institute, which has offered a $1,000,000 prize for the first correct proof. - -Mordell proved Mordell's theorem: the group of rational points on an elliptic curve has a finite basis. This means that for any elliptic curve there is a finite subset of the rational points on the curve, from which all further rational points may be generated. - -If the number of rational points on a curve is infinite then some point in a finite basis must have infinite order. The number of independent basis points with infinite order is called the rank of the curve, and is an important invariant property of an elliptic curve. - -If the rank of an elliptic curve is 0, then the curve has only a finite number of rational points. On the other hand, if the rank of the curve is greater than 0, then the curve has an infinite number of rational points. - -Although Mordell's theorem shows that the rank of an elliptic curve is always finite, it does not give an effective method for calculating the rank of every curve. The rank of certain elliptic curves can be calculated using numerical methods but (in the current state of knowledge) it is unknown whether these methods handle all curves. - -An L-function L(E, s) can be defined for an elliptic curve E by constructing an Euler product from the number of points on the curve modulo each prime p. This L-function is analogous to the Riemann zeta function and the Dirichlet L-series that is defined for a binary quadratic form. It is a special case of a Hasse–Weil L-function. - -The natural definition of L(E, s) only converges for values of s in the complex plane with Re(s) > 3/2. Helmut Hasse conjectured that L(E, s) could be extended by analytic continuation to the whole complex plane.
This conjecture was first proved by Deuring for elliptic curves with complex multiplication. It was subsequently shown to be true for all elliptic curves over Q, as a consequence of the modularity theorem. - -Finding rational points on a general elliptic curve is a difficult problem. Finding the points on an elliptic curve modulo a given prime p is conceptually straightforward, as there are only a finite number of possibilities to check. However, for large primes it is computationally intensive. - -In the early 1960s Peter Swinnerton-Dyer used the EDSAC-2 computer at the University of Cambridge Computer Laboratory to calculate the number of points modulo p (denoted by $N_p$) for a large number of primes p on elliptic curves whose rank was known. From these numerical results Birch and Swinnerton-Dyer conjectured that $N_p$ for a curve E with rank r obeys an asymptotic law -$$ -\prod_{p\leq x} \frac{N_p}{p} \approx C\log (x)^r \mbox{ as } x \rightarrow \infty -$$ - -where C is a constant. - -Initially this was based on somewhat tenuous trends in graphical plots; this induced a measure of skepticism in J. W. S. Cassels (Birch's Ph.D. advisor). Over time the numerical evidence stacked up. - -This in turn led them to make a general conjecture about the behaviour of a curve's L-function L(E, s) at s = 1, namely that it would have a zero of order r at this point. This was a far-sighted conjecture for the time, given that the analytic continuation of L(E, s) there was only established for curves with complex multiplication, which were also the main source of numerical examples. (NB that the reciprocal of the L-function is from some points of view a more natural object of study; on occasion this means that one should consider poles rather than zeroes.) - -The conjecture was subsequently extended to include the prediction of the precise leading Taylor coefficient of the L-function at s = 1. It is conjecturally given by -$$ -\frac{L^{(r)}(E,1)}{r!} = \frac{\#\mathrm{Sha}(E)\Omega_E R_E \prod_{p|N}c_p}{(\#E_{\mathrm{Tor}})^2} -$$ - -where the quantities on the right hand side are invariants of the curve, studied by Cassels, Tate, Shafarevich and others: these include the order of the torsion group, the order of the Tate–Shafarevich group, and the canonical heights of a basis of rational points. - -The Birch and Swinnerton-Dyer conjecture has been proved only in special cases: - -# John Coates and Andrew Wiles proved that if E is a curve over a number field F with complex multiplication by an imaginary quadratic field K of class number 1, F = K or Q, and L(E, 1) is not 0 then E(F) is a finite group. This was extended to the case where F is any finite abelian extension of K by Arthaud. - -# Gross and Zagier showed that if a modular elliptic curve has a first-order zero at s = 1 then it has a rational point of infinite order; see Gross–Zagier theorem. - -# Kolyvagin showed that a modular elliptic curve E for which L(E, 1) is not zero has rank 0, and a modular elliptic curve E for which L(E, 1) has a first-order zero at s = 1 has rank 1. - -# Rubin showed that for elliptic curves defined over an imaginary quadratic field K with complex multiplication by K, if the L-series of the elliptic curve was not zero at s = 1, then the p-part of the Tate–Shafarevich group had the order predicted by the Birch and Swinnerton-Dyer conjecture, for all primes p > 7.
- - - -# Breuil, Conrad, Diamond, and Taylor, extending work of Wiles, proved that all elliptic curves defined over the rational numbers are modular, which extends results #2 and #3 to all elliptic curves over the rationals, and shows that the L-functions of all elliptic curves over Q are defined at s = 1. - -# Bhargava and Shankar proved that the average rank of the Mordell–Weil group of an elliptic curve over Q is bounded above by 7/6. Combining this with the p-parity theorem of Nekovář and Dokchitser and with the proof of the main conjecture of Iwasawa theory for GL(2) by Skinner and Urban, they conclude that a positive proportion of elliptic curves over Q have analytic rank zero, and hence, by Kolyvagin, satisfy the Birch and Swinnerton-Dyer conjecture. - -Nothing has been proved for curves with rank greater than 1, although there is extensive numerical evidence for the truth of the conjecture. - -Much like the Riemann hypothesis, this conjecture has multiple consequences, including the following two: - -* Let n be an odd square-free integer. Assuming the Birch and Swinnerton-Dyer conjecture, n is the area of a right triangle with rational side lengths (a congruent number) if and only if the number of triplets of integers (x, y, z) satisfying $2x^2 + y^2 + 8z^2 = n$ is twice the number of triplets satisfying $2x^2 + y^2 + 32z^2 = n$. This statement, due to Tunnell's theorem, is related to the fact that n is a congruent number if and only if the elliptic curve $y^2 = x^3 - n^2x$ has a rational point of infinite order (thus, under the Birch and Swinnerton-Dyer conjecture, its L-function has a zero at s = 1). The interest in this statement is that the condition is easily verified. - -*In a different direction, certain analytic methods allow for an estimation of the order of zero in the center of the critical strip of families of L-functions. Admitting the BSD conjecture, these estimations correspond to information about the rank of families of elliptic curves in question. For example, assuming the generalized Riemann hypothesis and the BSD conjecture, the average rank of curves given by $y^2 = x^3 + ax + b$ is smaller than 2. diff --git a/wiki/wikipedia/3819.txt b/wiki/wikipedia/3819.txt deleted file mode 100644 index 466ef292a930bc6aaf509f4293c3b83ff6c18a62..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3819.txt +++ /dev/null @@ -1,30 +0,0 @@ -In algebraic number theory, the prime ideal theorem is the number field generalization of the prime number theorem. It provides an asymptotic formula for counting the number of prime ideals of a number field K, with norm at most X. - -What to expect can be seen already for the Gaussian integers. There, for any prime number p of the form 4n + 1, p factors as a product of two Gaussian primes of norm p. Primes of the form 4n + 3 remain prime, giving a Gaussian prime of norm $p^2$. Therefore, we should estimate -$$ -2r(X)+r^\prime(\sqrt{X}) -$$ - -where r counts primes in the arithmetic progression 4n + 1, and r′ in the arithmetic progression 4n + 3. By the quantitative form of Dirichlet's theorem on primes, each of r(Y) and r′(Y) is asymptotically -$$ -\frac{Y}{2\log Y}. -$$ - -Therefore, the 2r(X) term predominates, and is asymptotically -$$ -\frac{X}{\log X}. -$$ - -This general pattern holds for number fields in general, so that the prime ideal theorem is dominated by the ideals of norm a prime number. As Edmund Landau proved in 1903, for norm at most X the same asymptotic formula -$$ -\frac{X}{\log X} -$$ - -always holds.
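The Gaussian-integer count above is easy to test empirically. A minimal illustrative sketch (assuming sympy for prime generation) that counts prime ideals of Z[i] with norm at most X and compares against X/log X:

```python
from math import log
from sympy import primerange

def count_prime_ideals(X):
    """Count prime ideals of Z[i] with norm at most X."""
    count = 0
    for p in primerange(2, X + 1):
        if p == 2:
            count += 1       # (1+i) is ramified: one ideal of norm 2
        elif p % 4 == 1:
            count += 2       # p splits: two ideals of norm p
        elif p * p <= X:
            count += 1       # p inert: one ideal of norm p^2
    return count

X = 10**6
print(count_prime_ideals(X), X / log(X))  # both on the order of X / log X
```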
Heuristically this is because the logarithmic derivative of the Dedekind zeta-function of K always has a simple pole with residue -1 at s = 1. - -As with the Prime Number Theorem, a more precise estimate may be given in terms of the logarithmic integral function. The number of prime ideals of norm ≤ X is -$$ - \mathrm{Li}(X) + O_K\left(X \exp\left(-c_K \sqrt{\log X}\right)\right), -$$ - -where $c_K$ is a constant depending on K. diff --git a/wiki/wikipedia/382.txt b/wiki/wikipedia/382.txt deleted file mode 100644 index f9d855a7eacf2e59afea68e6950da2fc7da38a67..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/382.txt +++ /dev/null @@ -1,116 +0,0 @@ -In mathematics, Eisenstein's criterion gives a sufficient condition for a polynomial with integer coefficients to be irreducible over the rational numbers — that is, for it to not be factorizable into the product of non-constant polynomials with rational coefficients. - -This criterion is not applicable to all polynomials with integer coefficients that are irreducible over the rational numbers, but it does allow in certain important cases for irreducibility to be proved with very little effort. It may apply either directly or after transformation of the original polynomial. - -This criterion is named after Gotthold Eisenstein. In the early 20th century, it was also known as the Schönemann–Eisenstein theorem because Theodor Schönemann was the first to publish it, in 1846 in Crelle's Journal; his statement reads in translation - -
    That $(x - a)^n + pF(x)$ will be irreducible to the modulus $p^2$ when $F(x)$ to the modulus $p$ does not contain a factor $x - a$.
- -This formulation already incorporates a shift to a in place of 0; the condition on F(x) means that F(a) is not divisible by p, and so pF(a) is divisible by p but not by $p^2$. As stated it is not entirely correct in that it makes no assumptions on the degree of the polynomial F(x), so that the polynomial considered need not be of the degree n that its expression suggests; the example $x^2 + p(x^3 + 1) \equiv (x^2 + p)(px + 1) \pmod{p^2}$ shows the conclusion is not valid without such a hypothesis. Assuming that the degree of F(x) does not exceed n, the criterion is correct however, and somewhat stronger than the formulation given above, since if $(x - a)^n + pF(x)$ is irreducible modulo $p^2$, it certainly cannot decompose in Z[x] into non-constant factors. - -Subsequently Eisenstein published a somewhat different version in 1850, also in Crelle's Journal. This version reads in translation - -
    When in a polynomial F(x) in x of arbitrary degree the coefficient of the highest term is 1, and all following coefficients are whole (real, complex) numbers, which a certain (real resp. complex) prime number m divides, and when furthermore the last coefficient is equal to εm, where ε denotes a number not divisible by m: then it is impossible to bring F(x) into the form -$$ -\left (x^{\mu} + a_1 x^{\mu-1} + \cdots + a_{\mu} \right) \left (x^{\nu} + b_1 x^{\nu-1} + \cdots + b_{\nu} \right) -$$ - -where μ, ν ≥ 1, μ + ν = deg(F(x)), and all a and b are whole (real resp. complex) numbers; the equation F(x) = 0 is therefore irreducible.
- -Here "whole real numbers" are ordinary integers and "whole complex numbers" are Gaussian integers; one should similarly interpret "real and complex prime numbers". The application for which Eisenstein developed his criterion was establishing the irreducibility of certain polynomials with coefficients in the Gaussian integers that arise in the study of the division of the lemniscate into pieces of equal arc-length. - -Remarkably, Schönemann and Eisenstein, once having formulated their respective criteria for irreducibility, both immediately applied it to give an elementary proof of the irreducibility of the cyclotomic polynomials for prime numbers, a result that Gauss had obtained in his Disquisitiones Arithmeticae with a much more complicated proof. In fact, Eisenstein adds in a footnote that the only proof for this irreducibility known to him, other than that of Gauss, is one given by Kronecker in 1845. This shows that he was unaware of the two different proofs of this statement that Schönemann had given in his 1846 article, where the second proof was based on the above-mentioned criterion. This is all the more surprising given the fact that two pages further, Eisenstein actually refers (for a different matter) to the first part of Schönemann's article. In a note ("Notiz") that appeared in the following issue of the Journal, Schönemann points this out to Eisenstein, and indicates that the latter's method is not essentially different from the one he used in the second proof. - -To prove the validity of the criterion, suppose Q satisfies the criterion for the prime number p, but that it is nevertheless reducible in Q[x], from which we wish to obtain a contradiction. From Gauss' lemma it follows that Q is reducible in Z[x] as well, and in fact can be written as the product Q = GH of two non-constant polynomials G, H (in case Q is not primitive, one applies the lemma to the primitive polynomial Q/c (where the integer c is the content of Q) to obtain a decomposition for it, and multiplies c into one of the factors to obtain a decomposition for Q). Now reduce Q = GH modulo p to obtain a decomposition in (Z/pZ)[x]. But by hypothesis this reduction for Q leaves its leading term, of the form $ax^n$ for a non-zero constant a ∈ Z/pZ, as the only nonzero term. But then necessarily the reductions modulo p of G and H also make all non-leading terms vanish (and cannot make their leading terms vanish), since no other decompositions of $ax^n$ are possible in (Z/pZ)[x], which is a unique factorization domain. In particular the constant terms of G and H vanish in the reduction, so they are divisible by p, but then the constant term of Q, which is their product, is divisible by $p^2$, contrary to the hypothesis, and one has a contradiction. - -A second proof of Eisenstein's criterion also starts with the assumption that the polynomial Q(x) is reducible. It is shown that this assumption entails a contradiction. - -The assumption that -$$ -Q(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0 -$$ - -is reducible means that there are polynomials - -\begin{align} - -G(x) &= c_r x^r + c_{r-1} x^{r-1} + \cdots + c_0 && r \ge 1 \\ - -H(x) &= d_s x^s + d_{s-1} x^{s-1} + \cdots + d_0 && s \ge 1 - -\end{align} - -such that -$$ -Q(x)=G(x)\cdot H(x), \qquad n = r+s. -$$ - -The coefficient $a_0$ of the polynomial Q(x) is divisible by the prime p but not by $p^2$. Since $a_0 = c_0 d_0$, it is possible to divide $c_0$ or $d_0$ by p, but not both.
One may without loss of generality proceed - -*with a coefficient $c_0$ that can be divided by $p$ and - -*with a coefficient $d_0$ that cannot be divided by $p$. - -By the assumption, $p$ does not divide $a_n$. Because $a_n = c_r d_s$, neither $c_r$ nor $d_s$ can be divided by $p$. Thus, if $a_r$ is the $r$-th coefficient of the reducible polynomial $Q$, then (possibly with $d_t = 0$ in case $t>s$) -$$ -a_r=c_r d_0 + c_{r-1}d_1 + \cdots + c_0 d_r -$$ - -wherein $c_r d_0$ cannot be divided by $p$, because neither $d_0$ nor $c_r$ can be divided by $p$. - -We will prove that $c_0, c_1, \ldots, c_{r-1}$ are all divisible by p. As $a_r$ is also divisible by p (by hypothesis of the criterion), this implies that -$$ -c_r d_0 = a_r- \left (c_{r-1}d_1 + \cdots + c_0 d_r \right ) -$$ - -is divisible by p, a contradiction proving the criterion. - -It is possible to divide $c_0 d_r$ by $p$, because $c_0$ can be divided by $p$. - -By initial assumption, it is possible to divide the coefficient $a_1$ of the polynomial Q(x) by p. Since -$$ -a_1=c_0 d_1 + c_1 d_0 -$$ - -and since $d_0$ is not a multiple of p it must be possible to divide $c_1$ by p. Analogously, by induction, $c_i$ is a multiple of $p$ for all $i < r$, which completes the proof of the claim. - -Eisenstein's criterion can also be understood through the theory of Newton polygons for the p-adic number field: for an Eisenstein polynomial, one takes the lower convex envelope of the points - -(0, 1), (1, $v_1$), (2, $v_2$), ..., (n − 1, $v_{n-1}$), (n, 0), - -where $v_i$ is the p-adic valuation of $a_i$ (i.e. the highest power of p dividing it). Now the data we are given on the $v_i$ for 0 < i < n, namely that they are at least one, is just what we need to conclude that the lower convex envelope is exactly the single line segment from (0, 1) to (n, 0), the slope being −1/n. - -This tells us that each root of Q has p-adic valuation 1/n and hence that Q is irreducible over the p-adic field (since, for instance, no product of any proper subset of the roots has integer valuation); and a fortiori over the rational number field. - -This argument is much more complicated than the direct argument by reduction mod p. It does however allow one to see, in terms of algebraic number theory, how frequently Eisenstein's criterion might apply, after some change of variable; and so limit severely the possible choices of p with respect to which the polynomial could have an Eisenstein translate (that is, become Eisenstein after an additive change of variables as in the case of the p-th cyclotomic polynomial). - -In fact only primes p ramifying in the extension of Q generated by a root of Q have any chance of working. These can be found in terms of the discriminant of Q. For example, in the case $x^2 + x + 2$ given above, the discriminant is −7, so that 7 is the only prime that has a chance of making it satisfy the criterion. Modulo 7, it becomes $(x - 3)^2$; a repeated root is inevitable, since the discriminant is 0 mod 7. Therefore the variable shift is actually something predictable. - -Again, for the cyclotomic polynomial, it becomes - -$(x - 1)^{p-1} \bmod p$; - -the discriminant can be shown to be (up to sign) $p^{p-2}$, by linear algebra methods. - -More precisely, only totally ramified primes have a chance of being Eisenstein primes for the polynomial. (In quadratic fields, ramification is always total, so the distinction is not seen in the quadratic case like $x^2 + x + 2$ above.) In fact, Eisenstein polynomials are directly linked to totally ramified primes, as follows: if a field extension of the rationals is generated by the root of a polynomial that is Eisenstein at p then p is totally ramified in the extension, and conversely if p is totally ramified in a number field then the field is generated by the root of an Eisenstein polynomial at p.
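Since the criterion is entirely about divisibility of coefficients, it is straightforward to mechanize. A minimal illustrative sketch (assuming sympy for factoring; coefficients listed from the constant term $a_0$ up to the leading term $a_n$, with $a_0 \neq 0$):

```python
from sympy import primefactors

def eisenstein_primes(coeffs):
    """Return the primes p at which the integer polynomial
    a_0 + a_1 x + ... + a_n x^n (coeffs[i] = a_i, a_0 != 0) satisfies
    Eisenstein's criterion: p divides a_i for all i < n, p does not
    divide a_n, and p^2 does not divide a_0."""
    a0, an = coeffs[0], coeffs[-1]
    hits = []
    for p in primefactors(abs(a0)):  # any Eisenstein prime must divide a_0
        if (all(c % p == 0 for c in coeffs[:-1])
                and an % p != 0 and a0 % (p * p) != 0):
            hits.append(p)
    return hits

# x^3 + 2x + 2 is Eisenstein at p = 2, hence irreducible over Q:
print(eisenstein_primes([2, 2, 0, 1]))  # [2]
```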
- -Given an integral domain D, let -$$ -Q=\sum_{i=0}^n a_ix^i -$$ - -be an element of D[x], the polynomial ring with coefficients in D. - -Suppose there exists a prime ideal p of D such that - -* $a_i \in p$ for each $i \ne n$, - -* $a_n \notin p$, and - -* $a_0 \notin p^2$, where $p^2$ is the ideal product of p with itself. - -Then Q cannot be written as a product of two non-constant polynomials in D[x]. If in addition Q is primitive (i.e., it has no non-trivial constant divisors), then it is irreducible in D[x]. If D is a unique factorization domain with field of fractions F, then by Gauss's lemma Q is irreducible in F[x], whether or not it is primitive (since constant factors are invertible in F[x]); in this case a possible choice of prime ideal is the principal ideal generated by any irreducible element of D. The latter statement gives the original theorem for D = Z or (in Eisenstein's formulation) for D = Z[i]. - -The proof of this generalization is similar to the one for the original statement, considering the reduction of the coefficients modulo p; the essential point is that a single-term polynomial over the integral domain D/p cannot decompose as a product in which at least one of the factors has more than one term (because in such a product there can be no cancellation in the coefficient either of the highest or the lowest possible degree). - -After Z, one of the basic examples of an integral domain is the polynomial ring D = k[u] in the variable u over the field k. In this case, the principal ideal generated by u is a prime ideal. Eisenstein's criterion can then be used to prove the irreducibility of a polynomial such as $Q(x) = x^3 + ux + u$ in D[x]. Indeed, $u$ does not divide $a_3$, $u^2$ does not divide $a_0$, and $u$ divides $a_0$, $a_1$ and $a_2$. This shows that this polynomial satisfies the hypotheses of the generalization of Eisenstein's criterion for the prime ideal p = (u) since, for a principal ideal (u), being an element of (u) is equivalent to being divisible by u. diff --git a/wiki/wikipedia/3820.txt b/wiki/wikipedia/3820.txt deleted file mode 100644 index 1bcae2a4b537c123da08e74b4b4167d0ac4bcc2d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3820.txt +++ /dev/null @@ -1,64 +0,0 @@ -In coding theory and information theory, a Z-channel (binary asymmetric channel) is a communications channel used to model the behaviour of some data storage systems. - -A Z-channel is a channel with binary input and binary output, where each 0 bit is transmitted correctly, but each 1 bit has probability p of being transmitted incorrectly as a 0, and probability 1 − p of being transmitted correctly as a 1.
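Before the formal description, a minimal illustrative simulation sketch of this behaviour (`p` is the probability that a transmitted 1 is received as a 0):

```python
import random

def z_channel(bits, p):
    """Transmit bits over a Z-channel: 0s pass through untouched,
    each 1 is independently flipped to 0 with probability p."""
    return [0 if b == 1 and random.random() < p else b for b in bits]

print(z_channel([0, 1, 1, 0, 1], p=0.3))  # only the 1s are ever corrupted
```

The conditional probabilities below make this asymmetry precise.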
In other words, if X and Y are the random variables describing the probability distributions of the input and the output of the channel, respectively, then the crossovers of the channel are characterized by the conditional probabilities: - -\begin{align} - -\operatorname {Pr} [ Y = 0 | X = 0 ] &= 1 \\ - -\operatorname {Pr} [ Y = 0 | X = 1 ] &= p \\ - -\operatorname {Pr} [ Y = 1 | X = 0 ] &= 0 \\ - -\operatorname {Pr} [ Y = 1 | X = 1 ] &= 1 - p - -\end{align} - -The channel capacity $\mathsf{cap}(\mathbb{Z})$ of the Z-channel $\mathbb{Z}$ with the crossover 1 → 0 probability p, when the input random variable X is distributed according to the Bernoulli distribution with probability $\alpha$ for the occurrence of 0, is given by the following equation: -$$ -\mathsf{cap}(\mathbb{Z}) = \mathsf{H}\left(\frac{1}{1+2^{\mathsf{s}(p)}}\right) - \frac{\mathsf{s}(p)}{1+2^{\mathsf{s}(p)}} = \log_2(1{+}2^{-\mathsf{s}(p)}) = \log_2\left(1+(1-p) p^{p/(1-p)}\right) -$$ - -where $\mathsf{s}(p) = \frac{\mathsf{H}(p)}{1-p}$ for the binary entropy function $\mathsf{H}(\cdot)$. - -This capacity is obtained when the input variable X has Bernoulli distribution with probability $\alpha$ of having value 0 and $1-\alpha$ of value 1, where: -$$ -\alpha = 1 - \frac{1}{(1-p)(1+2^{\mathsf{H}(p)/(1-p)})}, -$$ - -For small p, the capacity is approximated by -$$ - \mathsf{cap}(\mathbb{Z}) \approx 1- 0.5 \mathsf{H}(p) -$$ - -as compared to the capacity $1{-}\mathsf{H}(p)$ of the binary symmetric channel with crossover probability p. - -For any p, $\alpha<0.5$ (i.e. more 0s should be transmitted than 1s) because transmitting a 1 introduces noise. As $p\rightarrow 1$, the limiting value of $\alpha$ is $\frac{1}{e}$. - -Define the following distance function $\mathsf{d}_A(\mathbf{x}, \mathbf{y})$ on the words $\mathbf{x}, \mathbf{y} \in \{0,1\}^n$ of length n transmitted via a Z-channel -$$ -\mathsf{d}_A(\mathbf{x}, \mathbf{y}) \stackrel{\vartriangle}{=} \max\left\{ \big|\{i \mid x_i = 0, y_i = 1\}\big| , \big|\{i \mid x_i = 1, y_i = 0\}\big| \right\}. -$$ - -Define the sphere $V_t(\mathbf{x})$ of radius t around a word $\mathbf{x} \in \{0,1\}^n$ of length n as the set of all the words at distance t or less from $\mathbf{x}$, in other words, -$$ -V_t(\mathbf{x}) = \{\mathbf{y} \in \{0, 1\}^n \mid \mathsf{d}_A(\mathbf{x}, \mathbf{y}) \leq t\}. -$$ - -A code $\mathcal{C}$ of length n is said to be t-asymmetric-error-correcting if for any two codewords $\mathbf{c}\ne \mathbf{c}' \in \{0,1\}^n$, one has $V_t(\mathbf{c}) \cap V_t(\mathbf{c}') = \emptyset$. Denote by $M(n,t)$ the maximum number of codewords in a t-asymmetric-error-correcting code of length n. - -The Varshamov bound. - -For n≥1 and t≥1, -$$ -M(n,t) \leq \frac{2^{n+1}}{\sum_{j = 0}^t{\left( \binom{\lfloor n/2\rfloor}{j}+\binom{\lceil n/2\rceil}{j}\right)}}. -$$ - -The constant-weight code bound. - -For n > 2t ≥ 2, let the sequence B0, B1, ..., Bn-2t-1 be defined as -$$ -B_0 = 2, \quad B_i = \min_{0 \leq j < i}\{ B_j + A(n{+}t{+}i{-}j{-}1, 2t{+}2, t{+}i)\} -$$ for $i > 0$. - -Then $M(n,t) \leq B_{n-2t-1}.$ diff --git a/wiki/wikipedia/3821.txt b/wiki/wikipedia/3821.txt deleted file mode 100644 index 44c2af842ceec2c0aeed7f0ad0a964bdc55dd9e6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3821.txt +++ /dev/null @@ -1,26 +0,0 @@ -In number theory, the Elliott–Halberstam conjecture is a conjecture about the distribution of prime numbers in arithmetic progressions. It has many applications in sieve theory. It is named for Peter D. T. A. 
Elliott and Heini Halberstam, who stated the conjecture in 1968. - -Stating the conjecture requires some notation. Let $\pi(x)$, the prime-counting function, denote the number of primes less than or equal to $x$. If $q$ is a positive integer and $a$ is coprime to $q$, we let $\pi(x;q,a)$ denote the number of primes less than or equal to $x$ which are congruent to $a$ modulo $q$. Dirichlet's theorem on primes in arithmetic progressions then tells us that -$$ - \pi(x;q,a) \approx \frac{\pi(x)}{\varphi(q)} -$$ - -where $\varphi$ is Euler's totient function. If we then define the error function -$$ - E(x;q) = \max_{\text{gcd}(a,q) = 1} \left|\pi(x;q,a) - \frac{\pi(x)}{\varphi(q)}\right| -$$ - -where the max is taken over all $a$ coprime to $q$, then the Elliott–Halberstam conjecture is the assertion that - -for every $\theta < 1$ and $A > 0$ there exists a constant $C > 0$ such that -$$ - \sum_{1 \leq q \leq x^\theta} E(x;q) \leq \frac{C x}{\log^A x} -$$ - -for all $x > 2$. - -This conjecture was proven for all $\theta < 1/2$ by Enrico Bombieri and A. I. Vinogradov (the Bombieri–Vinogradov theorem, sometimes known simply as "Bombieri's theorem"); this result is already quite useful, being an averaged form of the generalized Riemann hypothesis. It is known that the conjecture fails at the endpoint $\theta = 1$. - -The Elliott–Halberstam conjecture has several consequences. One striking one is the result announced by Dan Goldston, János Pintz, and Cem Yıldırım, which shows (assuming this conjecture) that there are infinitely many pairs of primes which differ by at most 16. In November 2013, James Maynard showed that subject to the Elliott–Halberstam conjecture, one can show the existence of infinitely many pairs of consecutive primes that differ by at most 12. In August 2014, the Polymath group showed that subject to the generalized Elliott–Halberstam conjecture, one can show the existence of infinitely many pairs of consecutive primes that differ by at most 6. Without assuming any form of the conjecture, the lowest proven bound is 246. diff --git a/wiki/wikipedia/3822.txt b/wiki/wikipedia/3822.txt deleted file mode 100644 index c448eb1f69aa3975ff7f322df18c8f95adb603c2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3822.txt +++ /dev/null @@ -1,37 +0,0 @@ -In graph theory, a domatic partition of a graph $G = (V,E)$ is a partition of $V$ into disjoint sets $V_1, V_2, \ldots, V_K$ such that each $V_i$ is a dominating set for G. The figure on the right shows a domatic partition of a graph; here the dominating set $V_1$ consists of the yellow vertices, $V_2$ consists of the green vertices, and $V_3$ consists of the blue vertices. - -The domatic number is the maximum size of a domatic partition, that is, the maximum number of disjoint dominating sets. The graph in the figure has domatic number 3. It is easy to see that the domatic number is at least 3 because we have presented a domatic partition of size 3. To see that the domatic number is at most 3, we first review a simple upper bound. - -Let $\delta$ be the minimum degree of the graph $G$. The domatic number of $G$ is at most $\delta + 1$. To see this, consider a vertex $v$ of degree $\delta$. Let $N$ consist of $v$ and its neighbours. We know that (1) each dominating set $V_i$ must contain at least one vertex in $N$ (domination), and (2) each vertex in $N$ is contained in at most one dominating set $V_i$ (disjointness). Therefore, there are at most $|N| = \delta + 1$ disjoint dominating sets.
- - - -The graph in the figure has minimum degree $\delta = 2$, and therefore its domatic number is at most 3. Hence we have shown that its domatic number is exactly 3; the figure shows a maximum-size domatic partition. - -If there is no isolated vertex in the graph (that is, $\delta$ ≥ 1), then the domatic number is at least 2. To see this, note that (1) a weak 2-coloring is a domatic partition if there is no isolated vertex, and (2) any graph has a weak 2-coloring. Alternatively, (1) a maximal independent set is a dominating set, and (2) the complement of a maximal independent set is also a dominating set if there are no isolated vertices. - -The figure on the right shows a weak 2-coloring, which is also a domatic partition of size 2: the dark nodes are a dominating set, and the light nodes are another dominating set (the light nodes form a maximal independent set). See weak coloring for more information. - -Finding a domatic partition of size 1 is trivial: let $V_1 = V$. Finding a domatic partition of size 2 (or determining that it does not exist) is easy: check if there are isolated nodes, and if not, find a weak 2-coloring. - -However, finding a maximum-size domatic partition is computationally hard. Specifically, the following decision problem, known as the domatic number problem, is NP-complete: given a graph $G$ and an integer $K$, determine whether the domatic number of $G$ is at least $K$. Therefore, the problem of determining the domatic number of a given graph is NP-hard, and the problem of finding a maximum-size domatic partition is NP-hard as well. - -There is a polynomial-time approximation algorithm with a logarithmic approximation guarantee, that is, it is possible to find a domatic partition whose size is within a factor $O(\log |V|)$ of the optimum. - -However, under plausible complexity-theoretic assumptions, there is no polynomial-time approximation algorithm with a sub-logarithmic approximation factor. More specifically, a polynomial-time approximation algorithm for domatic partition with the approximation factor $(1-\epsilon) \ln |V|$ for a constant $\epsilon > 0$ would imply that all problems in NP can be solved in slightly super-polynomial time $n^{O(\log \log n)}$. - -; Domatic partition - -Partition of vertices into disjoint dominating sets. The domatic number is the maximum number of such sets. - -; Vertex coloring - -Partition of vertices into disjoint independent sets. The chromatic number is the minimum number of such sets. - -; Clique partition - -Partition of vertices into disjoint cliques. Equal to vertex coloring in the complement graph. - -; Edge coloring - -Partition of edges into disjoint matchings. The edge chromatic number is the minimum number of such sets. - -Let G = (U ∪ V, E) be a bipartite graph without isolated nodes; all edges are of the form {u, v} ∈ E with u ∈ U and v ∈ V. Then {U, V} is both a vertex 2-coloring and a domatic partition of size 2; the sets U and V are independent dominating sets. The chromatic number of G is exactly 2; there is no vertex 1-coloring. The domatic number of G is at least 2. It is possible that there is a larger domatic partition; for example, the complete bipartite graph $K_{n,n}$ for any n ≥ 2 has domatic number n.
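Although finding a maximum domatic partition is NP-hard, verifying a claimed one is simple. A minimal illustrative sketch (the graph is given as a dict mapping each vertex to its set of neighbours):

```python
def is_domatic_partition(adj, parts):
    """Check that `parts` partitions the vertices of the graph `adj`
    (vertex -> set of neighbours) and that every part is dominating."""
    vertices = set(adj)
    # The parts must be disjoint and must cover every vertex.
    if sorted(v for part in parts for v in part) != sorted(vertices):
        return False
    # Domination: every vertex is in, or adjacent to, each part.
    return all(part & ({v} | adj[v]) for part in parts for v in vertices)

# A 4-cycle has domatic number 2: opposite vertices dominate the cycle.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_domatic_partition(cycle, [{0, 2}, {1, 3}]))  # True
```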
diff --git a/wiki/wikipedia/3823.txt b/wiki/wikipedia/3823.txt deleted file mode 100644 index 44c2af842ceec2c0aeed7f0ad0a964bdc55dd9e6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3823.txt +++ /dev/null @@ -1,26 +0,0 @@ -A jury theorem is a mathematical theorem proving that, under certain assumptions, a decision attained using majority voting in a large group is more likely to be correct than a decision attained by a single expert. It serves as a formal argument for the idea of wisdom of the crowd, for decision of questions of fact by jury trial, and for democracy in general. - -The first and most famous jury theorem is Condorcet's jury theorem. It assumes that all voters have independent probabilities to vote for the correct alternative, these probabilities are larger than 1/2, and are the same for all voters. Under these assumptions, the probability that the majority decision is correct is strictly larger when the group is larger; and when the group size tends to infinity, the probability that the majority decision is correct tends to 1. - -There are many other jury theorems, relaxing some or all of these assumptions. - -The premise of all jury theorems is that there is an objective truth, which is unknown to the voters. Most theorems focus on binary issues (issues with two possible states), for example, whether a certain defendant is guilty or innocent, whether a certain stock is going to rise or fall, etc. There are $n$ voters (or jurors), and their goal is to reveal the truth. Each voter has an opinion about which of the two options is correct. The opinion of each voter is either correct (i.e., equals the true state), or wrong (i.e., differs from the true state). This is in contrast to other settings of voting, in which the opinion of each voter represents his/her subjective preferences and is thus always "correct" for this specific voter. The opinion of a voter can be considered a random variable: for each voter, there is a positive probability that his opinion equals the true state. - -The group decision is determined by the majority rule. For example, if a majority of voters says "guilty" then the decision is "guilty", while if a majority says "innocent" then the decision is "innocent". To avoid ties, it is often assumed that the number of voters $n$ is odd. Alternatively, if $n$ is even, then ties are broken by tossing a fair coin. - -Jury theorems are interested in the probability of correctness - the probability that the majority decision coincides with the objective truth. Typical jury theorems make two kinds of claims on this probability: Growing Reliability (the probability of correctness is larger when the group is larger) and Crowd Infallibility (the probability of correctness tends to 1 as the group size tends to infinity). - -There are several jury theorems that weaken the Independence assumption in various ways. - -In binary decision problems, there is often one option that is easier to detect than the other one. For example, it may be easier to detect that a defendant is guilty (as there is clear evidence for guilt) than to detect that he is innocent. In this case, the probability that the opinion of a single voter is correct is represented by two different numbers: probability given that option #1 is correct, and probability given that option #2 is correct. This also implies that opinions of different voters are correlated. This motivates the following relaxations of the above assumptions: - -# Conditional Independence: for each of the two options, the voters' opinions given that this option is the true one are independent random variables.
- - - -# Conditional Competence: for each of the two options, the probability that a single voter's opinion is correct given that this option is true is larger than 1/2. - -# Conditional Uniformity: for each of the two options, all voters have the same probability of being correct given that this option is true. - -Growing Reliability and Crowd Infallibility continue to hold under these weaker assumptions. - -Uniformity can be dismissed if the Competence assumption is strengthened. There are several ways to strengthen it: - -* Strong Competence: for each voter i, the probability of correctness $p_i$ is at least $1/2+e$, where $e>0$ is fixed for all voters. In other words: the competence is bounded away from a fair coin toss. A jury theorem by Paroush shows that Crowd Infallibility holds under this assumption. - -As an example of an indirect majority system, suppose there are 15 voters. In a direct majority system, a decision is accepted whenever at least 8 votes support it. Suppose now that the voters are grouped into 3 groups of size 5 each. A decision is accepted whenever at least 2 groups support it, and in each group, a decision is accepted whenever at least 3 voters support it. Therefore, a decision may be accepted even if only 6 voters support it. - -Boland, Proschan and Tong prove that, when the voters are independent and $p>1/2$, a direct majority system, as in Condorcet's theorem, always has a higher chance of accepting the correct decision than any indirect majority system. - -Berg and Paroush consider multi-tier voting hierarchies, which may have several levels with different decision-making rules in each level. They study the optimal voting structure, and compare the competence against the benefit of time-saving and other expenses. - -Goodin and Spiekermann compute the amount by which a small group of experts should be better than the average voters, in order for them to accept better decisions. - -It is well-known that, when there are three or more alternatives, and voters have different preferences, they may engage in strategic voting, for example, vote for the second-best option in order to prevent the worst option from being elected. Surprisingly, strategic voting might occur even with two alternatives and when all voters have the same preference, which is to reveal the truth. For example, suppose the question is whether a defendant is guilty or innocent, and suppose a certain juror thinks the true answer is "guilty". However, he also knows that his vote is effective only if the other votes are tied. But, if other votes are tied, it means that the probability that the defendant is guilty is close to 1/2. Taking this into account, our juror might decide that this probability is not sufficient for deciding "guilty", and thus will vote "innocent". But if all other voters do the same, the wrong answer is derived. In game-theoretic terms, truthful voting might not be a Nash equilibrium. This problem has been termed the swing voter's curse, as it is analogous to the winner's curse in auction theory. - -A jury theorem by Peleg and Zamir shows necessary and sufficient conditions for the existence of a Bayesian-Nash equilibrium that satisfies Condorcet's jury theorem. Bozbay, Dietrich and Peters show voting rules that lead to efficient aggregation of the voters' private information even with strategic voting. - -In practice, this problem may not be very severe, since most voters care not only about the final outcome, but also about voting correctly by their conscience. Moreover, most voters are not sophisticated enough to vote strategically.
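The probabilities behind Condorcet's theorem and the direct-versus-indirect comparison above are elementary binomial sums. A minimal illustrative sketch (the 15-voter example, with an assumed competence of p = 0.6):

```python
from math import comb

def maj_correct(n, p):
    """P(a strict majority of n independent voters, each correct with
    probability p, is correct); n is assumed odd so ties cannot occur."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.6
direct = maj_correct(15, p)        # at least 8 of 15 voters correct
group = maj_correct(5, p)          # one group of 5 decides correctly
indirect = maj_correct(3, group)   # at least 2 of 3 groups correct
print(direct, indirect)            # direct is larger, as Boland et al. prove
```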
- -The notion of "correctness" may not be meaningful when making policy decisions, which are based on values or preferences, rather than just on facts. - -Some defenders of the theorem hold that it is applicable when voting is aimed at determining which policy best promotes the public good, rather than at merely expressing individual preferences. On this reading, what the theorem says is that although each member of the electorate may only have a vague perception of which of two policies is better, majority voting has an amplifying effect. The "group competence level", as represented by the probability that the majority chooses the better alternative, increases towards 1 as the size of the electorate grows, assuming that each voter is more often right than wrong. - -Several papers show that, under reasonable conditions, large groups are better trackers of the majority preference. - -* Law of large numbers: a mathematical generalization of jury theorems. - -*Evolution in collective decision making. - -*Realizing Epistemic Democracy: a criticism of the assumptions of jury theorems. - -*The Epistemology of Democracy: a comparison of jury theorems to two other epistemic models of democracy: experimentalism and Diversity trumps ability. diff --git a/wiki/wikipedia/3824.txt b/wiki/wikipedia/3824.txt deleted file mode 100644 index 75c79b2bf774f4cf706ec96abf6dc968fc489b90..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3824.txt +++ /dev/null @@ -1,37 +0,0 @@ -In algebraic geometry, Mnëv's universality theorem is a result which can be used to represent algebraic (or semi-algebraic) varieties as realizations of oriented matroids, a notion of combinatorics. - -For the purposes of Mnëv's universality, an oriented matroid of a finite subset $S\subset {\mathbb R}^n$ is a list of all partitions of points in S induced by hyperplanes in ${\mathbb R}^n$. In particular, the structure of oriented matroid contains full information on the incidence relations in S, inducing on S a matroid structure. - -The realization space of an oriented matroid is the space of all configurations of points $S\subset {\mathbb R}^n$ inducing the same oriented matroid structure on S. - -For the purposes of Mnëv's universality, the stable equivalence of semialgebraic sets is defined as follows. - -Let U, V be semialgebraic sets, obtained as a disconnected union of connected semialgebraic sets -$$ -U=U_1\coprod \cdots\coprod U_k -$$, $V=V_1\coprod \cdots\coprod V_k$ - -We say that U and V are rationally equivalent if there exist homeomorphisms $U_i \stackrel {\varphi_i} \mapsto V_i$ defined by rational maps. - -Let $U\subset {\mathbb R}^{n+d}, V\subset {\mathbb R}^{n}$ be semialgebraic sets, -$$ -U=U_1\coprod \cdots\coprod U_k -$$, $V=V_1\coprod \cdots\coprod V_k$ - -with $U_i$ mapping to $V_i$ under the natural projection $\pi$ deleting the last d coordinates. We say that $\pi: U \mapsto V$ is a stable projection if there exist integer polynomial maps -$$ - \varphi_1, \ldots, \varphi_\ell, \psi_1, \dots, \psi_m: {\mathbb R}^n \mapsto ({\mathbb R}^d)^* -$$ - -such that -$$ - U_i =\{ (v,v') \in {\mathbb R}^{n+d}\mid v\in V_i \text{ and } \langle \varphi_a(v), v'\rangle >0, \langle \psi_b(v), v'\rangle=0 \text{ for } a=1,\dots, \ell, b = 1, \dots, m\}. -$$ - -The stable equivalence is an equivalence relation on semialgebraic subsets generated by stable projections and rational equivalence. - -THEOREM (Mnëv's universality theorem) - -Let V be a semialgebraic subset in ${\mathbb R}^n$ defined over integers.
Then V is stably equivalent to a realization space of a certain oriented matroid. - -Mnëv's universality theorem was discovered by Nikolai Mnëv in his 1986 Ph.D. thesis. It has numerous applications in algebraic geometry, due to Laurent Lafforgue, Ravi Vakil and others, allowing one to construct moduli spaces with arbitrarily bad behaviour. diff --git a/wiki/wikipedia/3825.txt b/wiki/wikipedia/3825.txt deleted file mode 100644 index d78ff8eabb5dc90d79d3ee411ad207a79cea5289..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3825.txt +++ /dev/null @@ -1,58 +0,0 @@ -In the mathematics of graph drawing, Turán's brick factory problem asks for the minimum number of crossings in a drawing of a complete bipartite graph. The problem is named after Pál Turán, who formulated it while being forced to work in a brick factory during World War II. - -A drawing method found by Kazimierz Zarankiewicz has been conjectured to give the correct answer for every complete bipartite graph, and the statement that this is true has come to be known as the Zarankiewicz crossing number conjecture. The conjecture remains open, with only some special cases solved. - -During World War II, Hungarian mathematician Pál Turán was forced to work in a brick factory, pushing wagon loads of bricks from kilns to storage sites. The factory had tracks from each kiln to each storage site, and the wagons were harder to push at the points where tracks crossed each other. Turán was inspired by this situation to ask how the factory might be redesigned to minimize the number of crossings between these tracks. - -Mathematically, this problem can be formalized as asking for a graph drawing of a complete bipartite graph, whose vertices represent kilns and storage sites, and whose edges represent the tracks from each kiln to each storage site. - -The graph should be drawn in the plane with each vertex as a point, each edge as a curve connecting its two endpoints, and no vertex placed on an edge that it is not incident to. A crossing is counted whenever two edges that are disjoint in the graph have a nonempty intersection in the plane. The question is then, what is the minimum number of crossings in such a drawing? - -Turán's formulation of this problem is often recognized as one of the first studies of the crossing numbers of graphs. - -(Another independent formulation of the same concept occurred in sociology, in methods for drawing sociograms, and a much older puzzle, the three utilities problem, can be seen as a special case of the brick factory problem with three kilns and three storage facilities.) - -Crossing numbers have since gained greater importance, as a central object of study in graph drawing - -and as an important tool in VLSI design - -and discrete geometry. - -Both Zarankiewicz and Kazimierz Urbanik saw Turán speak about the brick factory problem in different talks in Poland in 1952, - -and independently published attempted solutions of the problem, with equivalent formulas for the number of crossings. - -As both of them showed, it is always possible to draw the complete bipartite graph Km,n (a graph with m vertices on one side, n vertices on the other side, and mn edges connecting the two sides) with a number of crossings equal to -$$ -\operatorname{cr}(K_{m,n}) \le \biggl\lfloor\frac{n}{2}\biggr\rfloor\biggl\lfloor\frac{n-1}{2}\biggr\rfloor\biggl\lfloor\frac{m}{2}\biggr\rfloor\biggl\lfloor\frac{m-1}{2}\biggr\rfloor. 
-$$ - -The construction is simple: place m vertices on the x-axis of the plane, avoiding the origin, with equal or nearly-equal numbers of points to the left and right of the y-axis. - -Similarly, place n vertices on the y-axis of the plane, avoiding the origin, with equal or nearly-equal numbers of points above and below the x-axis. - -Then, connect every point on the x-axis by a straight line segment to every point on the y-axis. - -However, their proofs that this formula is optimal, that is, that there can be no drawings with fewer crossings, were erroneous. The gap was not discovered until eleven years after publication, nearly simultaneously by Gerhard Ringel and Paul Kainen. - -Nevertheless, it is conjectured that Zarankiewicz's and Urbanik's formula is optimal. This has come to be known as the Zarankiewicz crossing number conjecture. Although some special cases of it are known to be true, the general case remains open. - -Since Km,n and Kn,m are isomorphic, it is enough to consider the case where m ≤ n. In addition, for m ≤ 2 Zarankiewicz's construction gives no crossings, which of course cannot be bested. So the only nontrivial cases are those for which m and n are both ≥ 3. - -Zarankiewicz's attempted proof of the conjecture, although invalid for the general case of Km,n, works for the case m = 3. - -It has since been extended to other small values of m, and - -the Zarankiewicz conjecture is known to be true for the complete bipartite graphs Km,n with m ≤ 6. - -The conjecture is also known to be true for K7,7, K7,8, and K7,9. - -If a counterexample exists, that is, a graph Km,n requiring fewer crossings than the Zarankiewicz bound, then in the smallest counterexample both m and n must be odd. - -For each fixed choice of m, the truth of the conjecture for all Km,n can be verified by testing only a finite number of choices of n. - -More generally, it has been proven that every complete bipartite graph requires a number of crossings that is (for sufficiently large graphs) at least 83% of the number given by the Zarankiewicz bound. Closing the gap between this lower bound and the upper bound remains an open problem. - -If edges are required to be drawn as straight line segments, rather than arbitrary curves, then some graphs need more crossings than they would when drawn with curved edges. - -However, the upper bound established by Zarankiewicz for the crossing numbers of complete bipartite graphs can be achieved using only straight edges. Therefore, if the Zarankiewicz conjecture is correct, then the complete bipartite graphs have rectilinear crossing numbers equal to their crossing numbers. diff --git a/wiki/wikipedia/3826.txt b/wiki/wikipedia/3826.txt deleted file mode 100644 index 403b8ef7e1f75af351a5469a38d6781ced23d097..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3826.txt +++ /dev/null @@ -1,13 +0,0 @@ -The Blaschke selection theorem is a result in topology and convex geometry about sequences of convex sets. Specifically, given a sequence $\{K_n\}$ of convex sets contained in a bounded set, the theorem guarantees the existence of a subsequence $\{K_{n_m}\}$ and a convex set $K$ such that $K_{n_m}$ converges to $K$ in the Hausdorff metric. The theorem is named for Wilhelm Blaschke. - -* A succinct statement of the theorem is that the metric space of convex bodies is locally compact. - -* Using the Hausdorff metric on sets, every infinite collection of compact subsets of the unit ball has a limit point (and that limit point is itself a compact set). 
- -As an example of its use, the isoperimetric problem can be shown to have a solution. That is, there exists a curve of fixed length that encloses the maximum area possible. Other problems likewise can be shown to have a solution: - -* Lebesgue's universal covering problem for a convex universal cover of minimal size for the collection of all sets in the plane of unit diameter, - -* the maximum inclusion problem, - -* and Moser's worm problem for a convex universal cover of minimal size for the collection of planar curves of unit length. diff --git a/wiki/wikipedia/3827.txt b/wiki/wikipedia/3827.txt deleted file mode 100644 index dac2a081e139cb16c10d45fd5f7e6d4d518e36c2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3827.txt +++ /dev/null @@ -1,149 +0,0 @@ -In number theory, a Wieferich prime is a prime number p such that $p^2$ divides $2^{p-1}-1$, therefore connecting these primes with Fermat's little theorem, which states that every odd prime p divides $2^{p-1}-1$. Wieferich primes were first described by Arthur Wieferich in 1909 in works pertaining to Fermat's Last Theorem, at which time both of Fermat's theorems were already well known to mathematicians. - -Since then, connections between Wieferich primes and various other topics in mathematics have been discovered, including other types of numbers and primes, such as Mersenne and Fermat numbers, specific types of pseudoprimes and some types of numbers generalized from the original definition of a Wieferich prime. Over time, those connections discovered have extended to cover more properties of certain prime numbers as well as more general subjects such as number fields and the abc conjecture. - -To date, the only known Wieferich primes are 1093 and 3511. - -The stronger version of Fermat's little theorem, which a Wieferich prime satisfies, is usually expressed as a congruence relation $2^{p-1} \equiv 1 \pmod{p^2}$. From the definition of the congruence relation on integers, it follows that this property is equivalent to the definition given at the beginning. Thus if a prime p satisfies this congruence, this prime divides the Fermat quotient $\tfrac{2^{p-1}-1}{p}$. The following are two illustrative examples using the primes 11 and 1093: - -For p = 11, we get $\tfrac{2^{10}-1}{11}$ which is 93 and leaves a remainder of 5 after division by 11, hence 11 is not a Wieferich prime. For p = 1093, we get $\tfrac{2^{1092}-1}{1093}$ or 485439490310...852893958515 (302 intermediate digits omitted for clarity), which leaves a remainder of 0 after division by 1093 and thus 1093 is a Wieferich prime. - -Wieferich primes can be defined by other equivalent congruences. If p is a Wieferich prime, one can multiply both sides of the congruence $2^{p-1} \equiv 1 \pmod{p^2}$ by 2 to get $2^p \equiv 2 \pmod{p^2}$. Raising both sides of the congruence to the power p shows that a Wieferich prime also satisfies $2^{p^2} \equiv 2^p \equiv 2 \pmod{p^2}$, and hence $2^{p^k} \equiv 2 \pmod{p^2}$ for all k ≥ 1. The converse is also true: $2^{p^k} \equiv 2 \pmod{p^2}$ for some k ≥ 1 implies that the multiplicative order of 2 modulo $p^2$ divides $\gcd(p^k - 1, \varphi(p^2)) = p - 1$, that is, $2^{p-1} \equiv 1 \pmod{p^2}$ and thus p is a Wieferich prime. This also implies that Wieferich primes can be defined as primes p such that the multiplicative orders of 2 modulo p and modulo $p^2$ coincide: $\operatorname{ord}_{p^2} 2 = \operatorname{ord}_p 2$. (Incidentally, $\operatorname{ord}_{1093^2} 2 = 364$, and $\operatorname{ord}_{3511^2} 2 = 1755$.) - -H. S. Vandiver proved that $2^{p-1} \equiv 1 \pmod{p^3}$ if and only if $1 + \tfrac{1}{3} + \dots + \tfrac{1}{p-2} \equiv 0 \pmod{p^2}$.
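The defining congruence is cheap to test with modular exponentiation. The following self-contained Python check (our illustration, not part of the article) reproduces the two known examples and the Fermat-quotient computation for p = 11:

```python
def is_prime(n):
    """Trial division; adequate for this small demonstration."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_wieferich(p):
    """A prime p is a Wieferich prime iff 2^(p-1) is 1 modulo p^2."""
    return pow(2, p - 1, p * p) == 1

print([p for p in range(2, 4000) if is_prime(p) and is_wieferich(p)])
# -> [1093, 3511]

print((2 ** 10 - 1) // 11 % 11)
# -> 5: the Fermat quotient for p = 11 leaves remainder 5, as in the text
```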
- -In 1902, Meyer proved a theorem about solutions of the congruence $a^{p-1} \equiv 1 \pmod{p^r}$. Later in that decade Arthur Wieferich showed specifically that if the first case of Fermat's last theorem has solutions for an odd prime exponent, then that prime must satisfy that congruence for a = 2 and r = 2. In other words, if there exist solutions to $x^p + y^p + z^p = 0$ in integers x, y, z and p an odd prime with p ∤ xyz, then p satisfies $2^{p-1} \equiv 1 \pmod{p^2}$. In 1913, Bachmann examined the residues of $\tfrac{2^{p-1}-1}{p}\bmod p$. He asked the question when this residue vanishes and tried to find expressions for answering this question. - -The prime 1093 was found to be a Wieferich prime by Meissner in 1913 and confirmed to be the only such prime below 2000. He calculated the smallest residue of $\tfrac{2^{t}-1}{p}\bmod p$ for all primes p < 2000 and found this residue to be zero for t = 364 and p = 1093, thereby providing a counterexample to a conjecture by Grave about the impossibility of the Wieferich congruence. The correctness of Meissner's congruence was later verified via only elementary calculations. Inspired by an earlier work of Euler, Meissner's proof was simplified by showing that $1093^2 \mid (2^{182} + 1)$ and remarking that $(2^{182} + 1)$ is a factor of $(2^{364} - 1)$. It was also shown that it is possible to prove that 1093 is a Wieferich prime without using complex numbers, contrary to the method used by Meissner, although Meissner himself hinted that he was aware of a proof without complex values. - -In 2007–2016, a search for Wieferich primes was performed by the distributed computing project Wieferich@Home. In 2011–2017, another search was performed by the PrimeGrid project, although later the work done in this project was claimed to have been wasted. While these projects searched well beyond the earlier bounds, neither of them found a new Wieferich prime. - -In 2020, PrimeGrid started another project that searches for Wieferich and Wall–Sun–Sun primes simultaneously. The new project uses checksums to enable independent double-checking of each subinterval, thus minimizing the risk of missing an instance because of faulty hardware. The leading edge of this search now lies well beyond the bounds reached by the earlier projects. - -It has been conjectured (as for Wilson primes) that infinitely many Wieferich primes exist, and that the number of Wieferich primes below x is approximately log(log(x)), which is a heuristic result that follows from the plausible assumption that for a prime p, the (p − 1)-th degree roots of unity modulo $p^2$ are uniformly distributed in the multiplicative group of integers modulo $p^2$. - -In 1910, Mirimanoff expanded the theorem by showing that, if the preconditions of the theorem hold true for some prime p, then $p^2$ must also divide $3^{p-1} - 1$. Granville and Monagan further proved that $p^2$ must actually divide $m^{p-1} - 1$ for every prime m ≤ 89. Suzuki extended the proof to all primes m ≤ 113. - -Let $H_p$ be a set of pairs of integers with 1 as their greatest common divisor, p being prime to x, y and x + y, $(x + y)^{p-1} \equiv 1 \pmod{p^2}$, (x + ξy) being the p-th power of an ideal of K with ξ defined as cos 2π/p + i sin 2π/p. K = Q(ξ) is the field extension obtained by adjoining all polynomials in the algebraic number ξ to the field of rational numbers (such an extension is known as a number field or, in this particular case, where ξ is a root of unity, a cyclotomic number field). The sets of Wieferich and non-Wieferich primes are complementary, so if one of them is shown to be finite, the other one would necessarily have to be infinite, because both are proper subsets of the set of prime numbers.
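Meissner's 1913 computation mentioned above can be replayed instantly today. A short Python sketch (ours, with ad-hoc variable names), relying only on the facts quoted in the text:

```python
p, t = 1093, 364

r = pow(2, t, p * p) - 1   # 2^364 - 1 reduced modulo 1093^2
assert r % p == 0          # 1093 divides 2^364 - 1
print((r // p) % p)        # -> 0: the residue vanishes, so 1093^2 | 2^364 - 1

# The two observations from the simplified proof:
assert (pow(2, 182, p * p) + 1) % (p * p) == 0   # 1093^2 | 2^182 + 1
assert (2 ** 364 - 1) % (2 ** 182 + 1) == 0      # 2^182 + 1 divides 2^364 - 1
```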
- -It was later shown that the existence of infinitely many non-Wieferich primes already follows from a weaker version of the abc conjecture, called the ABC-(k, ε) conjecture. Additionally, the existence of infinitely many non-Wieferich primes would also follow if there exist infinitely many square-free Mersenne numbers, as well as if there exists a real number ξ such that the set $\{n \in \mathbb{N} : \lambda(2^n - 1) < 2 - \xi\}$ is of density one, where the index of composition λ(n) of an integer n is defined as $\tfrac{\log n}{\log \gamma (n)}$ and $\gamma (n) = \prod_{p \mid n} p$, meaning $\gamma (n)$ gives the product of all prime factors of n. It has been observed that the two known Wieferich primes are one greater than numbers with periodic binary expansions ($1092 = 010001000100_2 = 444_{16}$; $3510 = 110110110110_2 = 6666_8$). The Wieferich@Home project searched for Wieferich primes by testing numbers that are one greater than a number with a periodic binary expansion, but up to a "bit pseudo-length" of 3500, testing binary numbers generated by combining bit strings of bit length up to 24, it did not find a new Wieferich prime. - -It has been noted that the known Wieferich primes are one greater than mutually friendly numbers (the shared abundancy index being 112/39). - -It was observed that the two known Wieferich primes are the square factors of all non-square-free base-2 Fermat pseudoprimes up to $25 \cdot 10^9$. Later computations showed that the only repeated factors of the pseudoprimes up to $10^{12}$ are 1093 and 3511. In addition, the following connection exists: - -Let n be a base 2 pseudoprime and p be a prime divisor of n. If $\tfrac{2^{n-1}-1}{n}\not\equiv 0 \pmod{p}$, then also $\tfrac{2^{p-1}-1}{p}\not\equiv 0 \pmod{p}$. It was shown that for all odd prime numbers either $L(p^{n+1}) = p \cdot L(p^n)$ or $L(p^{n+1}) = L(p^n)$. This also holds if the conditions p ≡ −1 (mod q) and p ≢ −1 (mod $q^3$) are replaced by p ≡ −3 (mod q) and p ≢ −3 (mod $q^3$), as well as when the condition p ≡ −1 (mod q) is replaced by p ≡ −5 (mod q) (in which case q is a Wall–Sun–Sun prime) and the incongruence condition replaced by p ≢ −5 (mod $q^3$). - -A prime p satisfying the congruence $2^{(p-1)/2} \equiv \pm 1 + Ap \pmod{p^2}$ with small |A| is commonly called a near-Wieferich prime. Near-Wieferich primes with A = 0 represent Wieferich primes. Recent searches, in addition to their primary search for Wieferich primes, also tried to find near-Wieferich primes, tabulating all near-Wieferich primes with |A| ≤ 10 in their search intervals. This search bound was reached in 2006 in a search effort by P. Carlisle, R. Crandall and M. Rodenkirch. Larger entries were found by PrimeGrid. - -The sign +1 or -1 above can be easily predicted by Euler's criterion (and the second supplement to the law of quadratic reciprocity). - -Dorais and Klyve carried the near-Wieferich search further without finding a new Wieferich prime. It was shown that $a^{p^2-1} \equiv 1 \pmod{p^2}$ if and only if $a^{p-1} \equiv 1 \pmod{p^2}$. - -Known solutions of $a^{p-1} \equiv 1 \pmod{p^2}$ for small values of a have been tabulated (checked up to $5 \times 10^{13}$). (Note that the set of solutions for $a = b^k$ is the union of the prime divisors of k that do not divide b and the set of solutions for a = b.) - -The smallest solutions of $n^{p-1} \equiv 1 \pmod{p^2}$, starting with n = 1, are - -2, 1093, 11, 1093, 2, 66161, 5, 3, 2, 3, 71, 2693, 2, 29, 29131, 1093, 2, 5, 3, 281, 2, 13, 13, 5, 2, 3, 11, 3, 2, 7, 7, 5, 2, 46145917691, 3, 66161, 2, 17, 8039, 11, 2, 23, 5, 3, 2, 3, ...
(The next term > 4.9×1013) - -There are no known solutions of np−1 ≡ 1 (mod p2) for n = 47, 72, 186, 187, 200, 203, 222, 231, 304, 311, 335, 355, 435, 454, 546, 554, 610, 639, 662, 760, 772, 798, 808, 812, 858, 860, 871, 983, 986, 1002, 1023, 1130, 1136, 1138, .... - -It is a conjecture that there are infinitely many solutions of ap−1 ≡ 1 (mod p2) for every natural number a. - -The bases b < p2 which p is a Wieferich prime are (for b > p2, the solutions are just shifted by k·p2 for k > 0), and there are p − 1 solutions < p2 of p and the set of the solutions congruent to p are {1, 2, 3, ..., p − 1}) - -The least base b > 1 which prime(n) is a Wieferich prime are - -5, 8, 7, 18, 3, 19, 38, 28, 28, 14, 115, 18, 51, 19, 53, 338, 53, 264, 143, 11, 306, 31, 99, 184, 53, 181, 43, 164, 96, 68, 38, 58, 19, 328, 313, 78, 226, 65, 253, 259, 532, 78, 176, 276, 143, 174, 165, 69, 330, 44, 33, 332, 94, 263, 48, 79, 171, 747, 731, 20, ... - -We can also consider the formula $(a+1)^{p-1}-a^{p-1} \equiv 0 \pmod{p^2}$, (because of the generalized Fermat little theorem, $(a+1)^{p-1}-a^{p-1} \equiv 0 \pmod{p^2}$ is true for all prime p and all natural number a such that both a and a + 1 are not divisible by p). It's a conjecture that for every natural number a, there are infinitely many primes such that $(a+1)^{p-1}-a^{p-1} \equiv 0 \pmod{p^2}$. - -Known solutions for small a are: (checked up to 4 × 1011) - -A Wieferich pair is a pair of primes p and q that satisfy - -pq − 1 ≡ 1 (mod q2) and qp − 1 ≡ 1 (mod p2) - -so that a Wieferich prime p ≡ 1 (mod 4) will form such a pair (p, 2): the only known instance in this case is p = 1093. There are only 7 known Wieferich pairs. - -(2, 1093), (3, 1006003), (5, 1645333507), (5, 188748146801), (83, 4871), (911, 318917), and (2903, 18787) (sequence in OEIS) - -Start with a(1) any natural number (>1), a(n) = the smallest prime p such that (a(n − 1))p − 1 = 1 (mod p2) but p2 does not divide a(n − 1) − 1 or a(n − 1) + 1. (If p2 divides a(n − 1) − 1 or a(n − 1) + 1, then the solution is a trivial solution) It is a conjecture that every natural number k = a(1) > 1 makes this sequence become periodic, for example, let a(1) = 2: - -2, 1093, 5, 20771, 18043, 5, 20771, 18043, 5, ..., it gets a cycle: {5, 20771, 18043}. - -Let a(1) = 83: - -83, 4871, 83, 4871, 83, 4871, 83, ..., it gets a cycle: {83, 4871}. - -Let a(1) = 59 (a longer sequence): - -59, 2777, 133287067, 13, 863, 7, 5, 20771, 18043, 5, ..., it also gets 5. - -However, there are many values of a(1) with unknown status, for example, let a(1) = 3: - -3, 11, 71, 47, ? (There are no known Wieferich primes in base 47). - -Let a(1) = 14: - -14, 29, ? (There are no known Wieferich prime in base 29 except 2, but 22 = 4 divides 29 − 1 = 28) - -Let a(1) = 39 (a longer sequence): - -39, 8039, 617, 101, 1050139, 29, ? (It also gets 29) - -It is unknown that values for a(1) > 1 exist such that the resulting sequence does not eventually become periodic. - -When a(n − 1)=k, a(n) will be (start with k = 2): 1093, 11, 1093, 20771, 66161, 5, 1093, 11, 487, 71, 2693, 863, 29, 29131, 1093, 46021, 5, 7, 281, ?, 13, 13, 25633, 20771, 71, 11, 19, ?, 7, 7, 5, 233, 46145917691, 1613, 66161, 77867, 17, 8039, 11, 29, 23, 5, 229, 1283, 829, ?, 257, 491531, ?, ... (For k = 21, 29, 47, 50, even the next value is unknown) - -A Wieferich number is an odd natural number n satisfying the congruence 2φ(n) ≡ 1 (mod n2), where φ denotes the Euler's totient function (according to Euler's theorem, 2φ(n) ≡ 1 (mod n) for every odd natural number n). 
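The first entries of this sequence are quick to recompute from the definition. Below is a small Python sketch (our illustration; `phi` is a plain trial-division totient we wrote for self-containment), whose output matches the sequence quoted in the next paragraph:

```python
def phi(n):
    """Euler's totient function via trial-division factorization."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def is_wieferich_number(n):
    """Test whether odd n satisfies 2^phi(n) ≡ 1 (mod n^2)."""
    return pow(2, phi(n), n * n) == 1 % (n * n)

print([n for n in range(1, 20000, 2) if is_wieferich_number(n)])
# -> [1, 1093, 3279, 3511, 7651, 10533, 14209, 17555]
```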
If Wieferich number n is prime, then it is a Wieferich prime. The first few Wieferich numbers are: - -1, 1093, 3279, 3511, 7651, 10533, 14209, 17555, 22953, 31599, 42627, 45643, 52665, 68859, 94797, 99463, ... - -It can be shown that if there are only finitely many Wieferich primes, then there are only finitely many Wieferich numbers. In particular, if the only Wieferich primes are 1093 and 3511, then there exist exactly 104 Wieferich numbers, which matches the number of Wieferich numbers currently known. - -More generally, a natural number n is a Wieferich number to base a, if aφ(n) ≡ 1 (mod n2). - -Another definition specifies a Wieferich number as odd natural number n such that n and $\tfrac{2^m-1}{n}$ are not coprime, where m is the multiplicative order of 2 modulo n. The first of these numbers are: - -21, 39, 55, 57, 105, 111, 147, 155, 165, 171, 183, 195, 201, 203, 205, 219, 231, 237, 253, 273, 285, 291, 301, 305, 309, 327, 333, 355, 357, 385, 399, ... - -As above, if Wieferich number q is prime, then it is a Wieferich prime. - -A weak Wieferich prime to base a is a prime p satisfies the condition - -ap ≡ a (mod p2) - -Every Wieferich prime to base a is also a weak Wieferich prime to base a. If the base a is squarefree, then a prime p is a weak Wieferich prime to base a if and only if p is a Wieferich prime to base a. - -Smallest weak Wieferich prime to base n are (start with n = 0) - -2, 2, 1093, 11, 2, 2, 66161, 5, 2, 2, 3, 71, 2, 2, 29, 29131, 2, 2, 3, 3, 2, 2, 13, 13, 2, 2, 3, 3, 2, 2, 7, 7, 2, 2, 46145917691, 3, 2, 2, 17, 8039, 2, 2, 23, 5, 2, 2, 3, ... - -For integer n ≥2, a Wieferich prime to base a with order n is a prime p satisfies the condition - -ap−1 ≡ 1 (mod pn) - -Clearly, a Wieferich prime to base a with order n is also a Wieferich prime to base a with order m for all 2 ≤ m ≤ n, and Wieferich prime to base a with order 2 is equivalent to Wieferich prime to base a, so we can only consider the n ≥ 3 case. However, there are no known Wieferich prime to base 2 with order 3. The first base with known Wieferich prime with order 3 is 9, where 2 is a Wieferich prime to base 9 with order 3. Besides, both 5 and 113 are Wieferich prime to base 68 with order 3. - -Let P and Q be integers. The Lucas sequence of the first kind associated with the pair (P, Q) is defined by - -\begin{align} - -U_0(P,Q)&=0, \\ - -U_1(P,Q)&=1, \\ - -U_n(P,Q)&=P\cdot U_{n-1}(P,Q)-Q\cdot U_{n-2}(P,Q) - -\end{align} - - - -for all $n \geq 2$. A Lucas–Wieferich prime associated with (P, Q) is a prime p such that Up−ε(P, Q) ≡ 0 (mod p2), where ε equals the Legendre symbol $\left(\tfrac{P^2-4Q}p\right)$. All Wieferich primes are Lucas–Wieferich primes associated with the pair (3, 2). - -Let Q = −1. For every natural number P, the Lucas–Wieferich primes associated with (P, −1) are called P-Fibonacci–Wieferich primes or P-Wall–Sun–Sun primes. If P = 1, they are called Fibonacci–Wieferich primes. If P = 2, they are called Pell–Wieferich primes. - -For example, 241 is a Lucas–Wieferich prime associated with (3, −1), so it is a 3-Fibonacci–Wieferich prime or 3-Wall–Sun–Sun prime. In fact, 3 is a P-Fibonacci–Wieferich prime if and only if P congruent to 0, 4, or 5 (mod 9), which is analogous to the statement for traditional Wieferich primes that 3 is a base-n Wieferich prime if and only if n congruent to 1 or 8 (mod 9). - -Let K be a global field, i.e. a number field or a function field in one variable over a finite field and let E be an elliptic curve. 
If v is a non-archimedean place of norm qv of K and a ∈ K, with v(a) = 0 then v(aqv − 1 − 1) ≥ 1. v is called a Wieferich place for base a, if v(aqv − 1 − 1) > 1, an elliptic Wieferich place for base P ∈ E, if NvP ∈ E2 and a strong elliptic Wieferich place for base P ∈ E if nvP ∈ E2, where nv is the order of P modulo v and Nv gives the number of rational points (over the residue field of v) of the reduction of E at v. diff --git a/wiki/wikipedia/3828.txt b/wiki/wikipedia/3828.txt deleted file mode 100644 index 972ca6752bb04f5efbe89f388faa5a12af2c2c5e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3828.txt +++ /dev/null @@ -1 +0,0 @@ -#redirect Weak formulation diff --git a/wiki/wikipedia/3829.txt b/wiki/wikipedia/3829.txt deleted file mode 100644 index d5df0776a9acbf1f36b396e43aa66b424f84f471..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3829.txt +++ /dev/null @@ -1,89 +0,0 @@ -In calculus, the quotient rule is a method of finding the derivative of a function that is the ratio of two differentiable functions. Let $f(x)=g(x)/h(x),$ where both g and h are differentiable and $h(x)\neq 0.$ The quotient rule states that the derivative of f(x) is -$$ -f'(x) = \frac{g'(x)h(x) - g(x)h'(x)}{h(x)^2}. -$$ - -#A basic example: - -#:\begin{align} - -\frac{d}{dx} \frac{e^x}{x^2} &= \frac{\left(\frac{d}{dx}e^x\right)(x^2) - (e^x)\left(\frac{d}{dx} x^2\right)}{(x^2)^2} \\ - -&= \frac{(e^x)(x^2) - (e^x)(2x)}{x^4} \\ - -&= \frac{e^x(x - 2)}{x^3}. - -\end{align} - -#The quotient rule can be used to find the derivative of $f(x) = \tan x = \tfrac{\sin x}{\cos x}$ as follows. - -#:\begin{align} - -\frac{d}{dx} \tan x &= \frac{d}{dx} \frac{\sin x}{\cos x} \\ - -&= \frac{\left(\frac{d}{dx}\sin x\right)(\cos x) - (\sin x)\left(\frac{d}{dx}\cos x\right)}{\cos^2 x} \\ - -&= \frac{\cos^2 x + \sin^2 x}{\cos^2 x} \\ - -&= \frac{1}{\cos^2 x} = \sec^2 x. - -\end{align} - -Let $f(x) = \frac{g(x)}{h(x)}.$ Applying the definition of the derivative and properties of limits gives the following proof. - -\begin{align} - -f'(x) &= \lim_{k\to 0} \frac{f(x+k) - f(x)}{k} \\ - -&= \lim_{k\to 0} \frac{\frac{g(x+k)}{h(x+k)} - \frac{g(x)}{h(x)}}{k} \\ - -&= \lim_{k\to 0} \frac{g(x+k)h(x) - g(x)h(x+k)}{k \cdot h(x)h(x+k)} \\ - -&= \lim_{k\to 0} \frac{g(x+k)h(x) - g(x)h(x+k)}{k} \cdot \lim_{k\to 0}\frac{1}{h(x)h(x+k)} \\ - -&= \left(\lim_{k\to 0} \frac{g(x+k)h(x) - g(x)h(x) + g(x)h(x) - g(x)h(x+k)}{k} \right) \cdot \frac{1}{h(x)^2} \\ - -&= \left(\lim_{k\to 0} \frac{g(x+k)h(x) - g(x)h(x)}{k} - \lim_{k\to 0}\frac{g(x)h(x+k) - g(x)h(x)}{k} \right) \cdot \frac{1}{h(x)^2} \\ - -&= \left(h(x)\lim_{k\to 0} \frac{g(x+k) - g(x)}{k} - g(x)\lim_{k\to 0}\frac{h(x+k) - h(x)}{k} \right) \cdot \frac{1}{h(x)^2} \\ - -&= \frac{g'(x)h(x) - g(x)h'(x)}{h(x)^2}. - -\end{align} - -Let $f(x) = \frac{g(x)}{h(x)},$ so $g(x) = f(x)h(x).$ The product rule then gives $g'(x)=f'(x)h(x) + f(x)h'(x).$ Solving for $f'(x)$ and substituting back for $f(x)$ gives: - -\begin{align} - -f'(x) &= \frac{g'(x) - f(x)h'(x)}{h(x)} \\ - -&= \frac{g'(x) - \frac{g(x)}{h(x)}\cdot h'(x)}{h(x)} \\ - -&= \frac{g'(x)h(x) - g(x)h'(x)}{h(x)^2}. - -\end{align} - -Let $f(x) = \frac{g(x)}{h(x)} = g(x)h(x)^{-1}.$ Then the product rule gives -$$ -f'(x) = g'(x)h(x)^{-1} + g(x) \cdot \frac{d}{dx}(h(x)^{-1}). -$$ - -To evaluate the derivative in the second term, apply the power rule along with the chain rule: -$$ -f'(x) = g'(x)h(x)^{-1} + g(x) \cdot (-1) h(x)^{-2} h'(x). 
-$$ - -Finally, rewrite as fractions and combine terms to get - -\begin{align} - -f'(x) &= \frac{g'(x)}{h(x)} - \frac{g(x)h'(x)}{h(x)^2} \\ - -&= \frac{g'(x)h(x) - g(x)h'(x)}{h(x)^2}. - -\end{align} - -Implicit differentiation can be used to compute the nth derivative of a quotient (partially in terms of its first n - 1 derivatives). For example, differentiating $fh=g$ twice (resulting in $f''h + 2f'h' + fh'' = g''$) and then solving for $f''$ yields -$$ -f'' = \left(\frac{g}{h}\right)'' = \frac{g'' - 2f'h' - fh''}{h}. -$$ diff --git a/wiki/wikipedia/383.txt b/wiki/wikipedia/383.txt deleted file mode 100644 index 3756a98a11e6667f1d9bbce707476244abc8feb3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/383.txt +++ /dev/null @@ -1,10 +0,0 @@ -In mathematics, Ono's inequality is a theorem about triangles in the Euclidean plane. In its original form, as conjectured by T. Ono in 1914, the inequality is actually false; however, the statement is true for acute triangles and right triangles, as shown by F. Balitrand in 1916. - -Consider an acute or right triangle in the Euclidean plane with side lengths a, b and c and area A. Then -$$ -27 (b^2 + c^2 - a^2)^2 (c^2 + a^2 - b^2)^2 (a^2 + b^2 - c^2)^2 \leq (4 A)^6. -$$ - -This inequality fails for general triangles (to which Ono's original conjecture applied), as shown by the counterexample $a=2, b=3, c=4, A=3\sqrt{15}/4.$ - -The inequality holds with equality in the case of an equilateral triangle, in which up to similarity we have sides $1,1,1$ and area $\sqrt{3}/4.$ diff --git a/wiki/wikipedia/3830.txt b/wiki/wikipedia/3830.txt deleted file mode 100644 index e08b3820a7bedd87d82884e79163f7139f86c7dd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3830.txt +++ /dev/null @@ -1,93 +0,0 @@ -Borůvka's algorithm is a greedy algorithm for finding a minimum spanning tree in a graph, - -or a minimum spanning forest in the case of a graph that is not connected. - -It was first published in 1926 by Otakar Borůvka as a method of constructing an efficient electricity network for Moravia. - -The algorithm was rediscovered by Choquet in 1938; again by Florek, Łukasiewicz, Perkal, Steinhaus, and Zubrzycki in 1951; and again by Georges Sollin in 1965. This algorithm is frequently called Sollin's algorithm, especially in the parallel computing literature. - -The algorithm begins by finding the minimum-weight edge incident to each vertex of the graph, and adding all of those edges to the forest. - -Then, it repeats a similar process of finding the minimum-weight edge from each tree constructed so far to a different tree, and adding all of those edges to the forest. - -Each repetition of this process reduces the number of trees, within each connected component of the graph, to at most half of its former value, - -so after logarithmically many repetitions the process finishes. When it does, the set of edges it has added forms the minimum spanning forest. - -The following pseudocode illustrates a basic implementation of Borůvka's algorithm. - -In the conditional clauses, every edge uv is considered cheaper than "None". The purpose of the completed variable is to determine whether the forest F is yet a spanning forest. - -If edges do not have distinct weights, then a consistent tie-breaking rule must be used, e.g. based on some total order on vertices or edges. - -This can be achieved by representing vertices as integers and comparing them directly; comparing their memory addresses; etc.
- -A tie-breaking rule is necessary to ensure that the created graph is indeed a forest, that is, it does not contain cycles. For example, consider a triangle graph with nodes {a,b,c} and all edges of weight 1. Then a cycle could be created if we select ab as the minimal weight edge for {a}, bc for {b}, and ca for {c}. - -A tie-breaking rule which orders edges first by source, then by destination, will prevent creation of a cycle, resulting in the minimal spanning tree {ab, bc}. - -algorithm Borůvka is - -input: A weighted undirected graph G = (V, E). - -output: F, a minimum spanning forest of G. - -Initialize a forest F to (V, E') where E' = {}. - -completed := false - -while not completed do - -Find the connected components of F and assign to each vertex its component - -Initialize the cheapest edge for each component to "None" - -for each edge uv in E, where u and v are in different components of F: - -let wx be the cheapest edge for the component of u - -if is-preferred-over(uv, wx) then - -Set uv as the cheapest edge for the component of u - -let yz be the cheapest edge for the component of v - -if is-preferred-over(uv, yz) then - -Set uv as the cheapest edge for the component of v - -if all components have cheapest edge set to "None" then - -// no more trees can be merged -- we are finished - -completed := true - -else - -completed := false - -for each component whose cheapest edge is not "None" do - -Add its cheapest edge to E' - -function is-preferred-over(edge1, edge2) is - -return (edge1 is "None") or - -(weight(edge1) < weight(edge2)) or - -(weight(edge1) = weight(edge2) and tie-breaking-rule(edge1, edge2)) - -function tie-breaking-rule(edge1, edge2) is - -The tie-breaking rule; returns true if and only if edge1 - -is preferred over edge2 in the case of a tie. - -As an optimization, one could remove from G each edge that is found to connect two vertices in the same component, so that it does not contribute to the time for searching for cheapest edges in later components. - -Borůvka's algorithm can be shown to take O(log V) iterations of the outer loop until it terminates, and therefore to run in time O(E log V), where E is the number of edges, and V is the number of vertices in G (assuming E ≥ V). In planar graphs, and more generally in families of graphs closed under graph minor operations, it can be made to run in linear time, by removing all but the cheapest edge between each pair of components after each stage of the algorithm. - -Other algorithms for this problem include Prim's algorithm and Kruskal's algorithm. Fast parallel algorithms can be obtained by combining Prim's algorithm with Borůvka's. - -A faster randomized minimum spanning tree algorithm based in part on Borůvka's algorithm due to Karger, Klein, and Tarjan runs in expected O(E) time. The best known (deterministic) minimum spanning tree algorithm by Bernard Chazelle is also based in part on Borůvka's and runs in O(E α(E,V)) time, where α is the inverse of the Ackermann function. These randomized and deterministic algorithms combine steps of Borůvka's algorithm, reducing the number of components that remain to be connected, with steps of a different type that reduce the number of edges between pairs of components.
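To complement the pseudocode, here is a compact runnable version in Python (our own sketch, not from the article). It uses a union-find structure for the components and breaks ties by comparing (weight, endpoint, endpoint) tuples, which is one consistent total order of the kind discussed above:

```python
def boruvka_mst(num_vertices, edges):
    """Minimum spanning forest of an undirected graph.

    edges: list of (weight, u, v) tuples with u, v in range(num_vertices).
    Returns the list of edges chosen for the forest.
    """
    parent = list(range(num_vertices))

    def find(x):  # union-find root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    forest = []
    while True:
        cheapest = {}  # component root -> cheapest incident edge
        for edge in edges:
            _, u, v = edge
            ru, rv = find(u), find(v)
            if ru == rv:
                continue  # both endpoints already in the same tree
            for root in (ru, rv):
                if root not in cheapest or edge < cheapest[root]:
                    cheapest[root] = edge
        if not cheapest:
            break  # no component has an outgoing edge: forest is complete
        for edge in sorted(set(cheapest.values())):
            _, u, v = edge
            ru, rv = find(u), find(v)
            if ru != rv:  # re-check: an earlier merge may have joined them
                parent[ru] = rv
                forest.append(edge)
    return forest

# The weight-1 triangle from the tie-breaking example above:
print(boruvka_mst(3, [(1, 0, 1), (1, 1, 2), (1, 0, 2)]))
# -> [(1, 0, 1), (1, 0, 2)]: two edges, no cycle
```

Note the re-check before each merge; together with the consistent tuple ordering it prevents the cycle problem described above. This particular ordering selects edges ab and ac rather than the {ab, bc} chosen by the source-then-destination rule in the text; both are valid minimum spanning trees of the triangle.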
diff --git a/wiki/wikipedia/3831.txt b/wiki/wikipedia/3831.txt deleted file mode 100644 index 1f3c3863aeb8e8dc4a0e2228db573eb14bd10042..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3831.txt +++ /dev/null @@ -1,12 +0,0 @@ -In mathematics, the Minkowski–Hlawka theorem is a result on the lattice packing of hyperspheres in dimension n > 1. It states that there is a lattice in Euclidean space of dimension n, such that the corresponding best packing of hyperspheres with centres at the lattice points has density Δ satisfying -$$ -\Delta \geq \frac{\zeta(n)}{2^{n-1}}, -$$ - -with ζ the Riemann zeta function. Here as n → ∞, ζ(n) → 1. The proof of this theorem is indirect and does not give an explicit example, however, and there is still no known simple and explicit way to construct lattices with packing densities exceeding this bound for arbitrary n. In principle one can find explicit examples: for example, even just picking a few "random" lattices will work with high probability. The problem is that testing these lattices to see if they are solutions requires finding their shortest vectors, and the number of cases to check grows very fast with the dimension, so this could take a very long time. - -This result was stated without proof by and proved by . The result is related to a linear lower bound for the Hermite constant. - -Siegel proved the following generalization of the Minkowski–Hlawka theorem. If S is a bounded set in Rn with Jordan volume vol(S) then the average number of nonzero lattice vectors in S is vol(S)/D, where the average is taken over all lattices with a fundamental domain of volume D, and similarly the average number of primitive lattice vectors in S is vol(S)/Dζ(n). - -The Minkowski–Hlawka theorem follows easily from this, using the fact that if S is a star-shaped centrally symmetric body (such as a ball) containing less than 2 primitive lattice vectors then it contains no nonzero lattice vectors. diff --git a/wiki/wikipedia/3832.txt b/wiki/wikipedia/3832.txt deleted file mode 100644 index 172f1c7dfcb0e3d0f168e4dde83c8b57082225a9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3832.txt +++ /dev/null @@ -1,23 +0,0 @@ -In mathematics, the discussion of vector fields on spheres was a classical problem of differential topology, beginning with the hairy ball theorem, and early work on the classification of division algebras. - -Specifically, the question is how many linearly independent smooth nowhere-zero vector fields can be constructed on a sphere in N-dimensional Euclidean space. A definitive answer was provided in 1962 by Frank Adams. It was already known, by direct construction using Clifford algebras, that there were at least ρ(N)-1 such fields (see definition below). Adams applied homotopy theory and topological K-theory to prove that no more independent vector fields could be found. Hence ρ(N)-1 is the exact number of pointwise linearly independent vector fields that exist on an (N-1)-dimensional sphere. - -In detail, the question applies to the 'round spheres' and to their tangent bundles: in fact since all exotic spheres have isomorphic tangent bundles, the Radon–Hurwitz numbers ρ(N) determine the maximum number of linearly independent sections of the tangent bundle of any homotopy sphere. The case of N odd is taken care of by the Poincaré–Hopf index theorem (see hairy ball theorem), so the case N even is an extension of that. 
Adams showed that the maximum number of continuous (smooth would be no different here) pointwise linearly-independent vector fields on the (N - 1)-sphere is exactly ρ(N) - 1. - -The construction of the fields is related to the real Clifford algebras, which is a theory with a periodicity modulo 8 that also shows up here. By the Gram–Schmidt process, it is the same to ask for (pointwise) linear independence or fields that give an orthonormal basis at each point. - -The Radon–Hurwitz numbers ρ(n) occur in earlier work of Johann Radon (1922) and Adolf Hurwitz (1923) on the Hurwitz problem on quadratic forms. For N written as the product of an odd number A and a power of two $2^B$, write - -B = c + 4d, 0 ≤ c < 4. - -Then - -ρ(N) = $2^c + 8d$. - -The first few values of ρ(2n), for n = 1, 2, 3, ..., are: - -2, 4, 2, 8, 2, 4, 2, 9, 2, 4, 2, 8, 2, 4, 2, 10, ... - -For odd n, the value of the function ρ(n) is one. - -These numbers occur also in other, related areas. In matrix theory, the Radon–Hurwitz number counts the maximum size of a linear subspace of the real n×n matrices, for which each non-zero matrix is a similarity transformation, i.e. a product of an orthogonal matrix and a scalar matrix. In quadratic forms, the Hurwitz problem asks for multiplicative identities between quadratic forms. The classical results were revisited in 1952 by Beno Eckmann. They are now applied in areas including coding theory and theoretical physics. diff --git a/wiki/wikipedia/3833.txt b/wiki/wikipedia/3833.txt deleted file mode 100644 index e45b31ed84ae8209d28c3241d4414c434f5eff1e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3833.txt +++ /dev/null @@ -1 +0,0 @@ -The Weyl–Schouten theorem in mathematics (named after Hermann Weyl and Jan Arnoldus Schouten) says that a Riemannian manifold of dimension n with n ≥ 3 is conformally flat if and only if the Schouten tensor is a Codazzi tensor for n = 3, or the Weyl tensor vanishes for n > 3. diff --git a/wiki/wikipedia/3834.txt b/wiki/wikipedia/3834.txt deleted file mode 100644 index 475d22e0634a892109c820e151a44f57720de64b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3834.txt +++ /dev/null @@ -1,15 +0,0 @@ -Slicing the Truth: On the Computability Theoretic and Reverse Mathematical Analysis of Combinatorial Principles is a book on reverse mathematics in combinatorics, the study of the axioms needed to prove combinatorial theorems. It was written by Denis R. Hirschfeldt, based on a course given by Hirschfeldt at the National University of Singapore in 2010, and published in 2014 by World Scientific, as volume 28 of the Lecture Notes Series of the Institute for Mathematical Sciences, National University of Singapore. - -The book begins with five chapters that discuss the field of reverse mathematics, which has the goal of classifying mathematical theorems by the axiom schemes needed to prove them, and the big five subsystems of second-order arithmetic into which many theorems of mathematics have been classified. These chapters also review some of the tools needed in this study, including computability theory, forcing, and the low basis theorem. - -Chapter six, "the real heart of the book", applies this method to an infinitary form of Ramsey's theorem: every edge coloring of a countably infinite complete graph or complete uniform hypergraph, using finitely many colors, contains a monochromatic infinite induced subgraph. The standard proof of this theorem uses the arithmetical comprehension axiom, falling into one of the big five subsystems, ACA0.
However, as David Seetapun originally proved, the version of the theorem for graphs is weaker than ACA0, and it turns out to be inequivalent to any one of the big five subsystems. The version for uniform hypergraphs of fixed order greater than two is equivalent to ACA0, and the version of the theorem stated for all numbers of colors and all orders of hypergraphs simultaneously is stronger than ACA0. - -Chapter seven discusses conservative extensions of theories, in which the statements of a powerful theory (such as one of the forms of second-order arithmetic) that are both provable in that theory and expressible in a weaker theory (such as Peano arithmetic) are only the ones that are already provably in the weaker theory. Chapter eight summarizes the results so far in diagrammatic form. Chapter nine discusses ways to weaken Ramsey's theorem, and the final chapter discusses stronger theorems in combinatorics including the Dushnik–Miller theorem on self-embedding of infinite linear orderings, Kruskal's tree theorem, Laver's theorem on order embedding of countable linear orders, and Hindman's theorem on IP sets. An appendix provides a proof of a theorem of Jiayi Liu, part of the collection of results showing that the graph Ramsey theorem does not fall into the big five subsystems. - -This is a technical monograph, requiring its readers to have some familiarity with computability theory and Ramsey theory. Prior knowledge of reverse mathematics is not required. It is written in a somewhat informal style, and includes many exercises, making it usable as a graduate textbook or beginning work in reverse mathematics; reviewer François Dorais writes that it is an "excellent introduction to reverse mathematics and the computability theory of combinatorial principles" as well as a case study in the methods available for proving results in reverse mathematics. - -Reviewer William Gasarch complains about two missing topics, the work of Joe Mileti on the reverse mathematics of canonical versions of Ramsey's theorem, and the work of James Schmerl on the reverse mathematics of graph coloring. Nevertheless he recommends this book to anyone interested in reverse mathematics and Ramsey theory. And reviewer Benedict Easthaugh calls it "a welcome addition ... providing a fresh and accessible look at a central aspect of contemporary reverse mathematical research." - -A "classic reference" in reverse mathematics is the book Subsystems of Second Order Arithmetic (2009) by Stephen Simpson; it is centered around the big five subsystems and contains many more examples of results equivalent in strength to one of these five. Dorais suggests using the two books together as companion volumes. - -Reviewer Jeffry Hirst suggests Computability Theory by Rebecca Weber as a good source for the background needed to read this book. diff --git a/wiki/wikipedia/3835.txt b/wiki/wikipedia/3835.txt deleted file mode 100644 index 7399119539d207f8a36f0ea9c9f9225b90181a8e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3835.txt +++ /dev/null @@ -1,280 +0,0 @@ -In mathematical logic, the implicational propositional calculus is a version of classical propositional calculus which uses only one connective, called implication or conditional. In formulas, this binary operation is indicated by "implies", "if ..., then ...", "→", "$\rightarrow $", etc.. - -Implication alone is not functionally complete as a logical operator because one cannot form all other two-valued truth functions from it. 
- -For example, the two-place truth function that always returns false is not definable from → and arbitrary sentence variables: any formula constructed from → and propositional variables must receive the value true when all of its variables are evaluated to true. - -It follows that {→} is not functionally complete. - -However, if one adds a nullary connective ⊥ for falsity, then one can define all other truth functions. Formulas over the resulting set of connectives {→, ⊥} are called f-implicational. If P and Q are propositions, then: - -*¬P is equivalent to P → ⊥ - -*P ∧ Q is equivalent to (P → (Q → ⊥)) → ⊥ - -*P ∨ Q is equivalent to (P → Q) → Q - -*P ↔ Q is equivalent to ((P → Q) → ((Q → P) → ⊥)) → ⊥ - -Since the above operators are known to be functionally complete, it follows that any truth function can be expressed in terms of → and ⊥. - -The following statements are considered tautologies (irreducible and intuitively true, by definition). - -*Axiom schema 1 is P → (Q → P). - -*Axiom schema 2 is (P → (Q → R)) → ((P → Q) → (P → R)). - -*Axiom schema 3 (Peirce's law) is ((P → Q) → P) → P. - -*The one non-nullary rule of inference (modus ponens) is: from P and P → Q infer Q. - -Here, in each case, P, Q, and R may be replaced by any formulas which contain only "→" as a connective. If Γ is a set of formulas and A a formula, then $\Gamma\vdash A$ means that A is derivable using the axioms and rules above and formulas from Γ as additional hypotheses. - -Łukasiewicz (1948) found an axiom system for the implicational calculus, which replaces the schemas 1–3 above with a single schema - -*((P → Q) → R) → ((R → P) → (S → P)). - -He also argued that there is no shorter axiom system. - -Since all axioms and rules of the calculus are schemata, derivation is closed under substitution: - -If $\Gamma\vdash A,$ then $\sigma(\Gamma)\vdash\sigma(A),$ - -where σ is any substitution (of formulas using only implication). - -The implicational propositional calculus also satisfies the deduction theorem: - -If $\Gamma,A\vdash B$, then $\Gamma\vdash A\to B.$ - -As explained in the deduction theorem article, this holds for any axiomatic extension of the system containing axiom schemas 1 and 2 above and modus ponens. - -The implicational propositional calculus is semantically complete with respect to the usual two-valued semantics of classical propositional logic. That is, if Γ is a set of implicational formulas, and A is an implicational formula entailed by Γ, then $\Gamma\vdash A$. - -A proof of the completeness theorem is outlined below. First, using the compactness theorem and the deduction theorem, we may reduce the completeness theorem to its special case with empty Γ, i.e., we only need to show that every tautology is derivable in the system. - -The proof is similar to completeness of full propositional logic, but it also uses the following idea to overcome the functional incompleteness of implication. If A and F are formulas, then A → F is equivalent to (¬A*) ∨ F, where A* is the result of replacing in A all, some, or none of the occurrences of F by falsity. Similarly, (A → F) → F is equivalent to A* ∨ F. So under some conditions, one can use them as substitutes for saying A* is false or A* is true respectively. - -We first observe some basic facts about derivability: - -(1) $A\to B,B\to C\vdash A\to C$ - -Indeed, from the hypothesis B → C we can derive A → (B → C) using Axiom 1, and then derive A → C by modus ponens (twice) from Ax. 2. - -(2) $A\to B\vdash(B\to C)\to(A\to C)$ - -This follows from (1) by the deduction theorem.
- -(3) $A\to C,(A\to B)\to C\vdash C$ - -If we further assume C → B, we can derive A → B using (1), then we derive C by modus ponens. This shows $A\to C,(A\to B)\to C,C\to B\vdash C$, and the deduction theorem gives $A\to C,(A\to B)\to C\vdash(C\to B)\to C$. We apply Ax. 3 to obtain (3). - -Let F be an arbitrary fixed formula. For any formula A, we define $A^0 = (A \to F)$ and $A^1 = ((A \to F) \to F)$. Consider only formulas in propositional variables $p_1, \dots, p_n$. We claim that for every formula A in these variables and every truth assignment e, - -(4) $p_1^{e(p_1)},\dots,p_n^{e(p_n)}\vdash A^{e(A)}.$ - -We prove (4) by induction on A. The base case $A = p_i$ is trivial. Let A = (B → C). We distinguish three cases: - -#e(C) = 1. Then also e(A) = 1. We have - -#::$(C\to F)\to F\vdash((B\to C)\to F)\to F$ - -#:by applying (2) twice to the axiom C → (B → C). Since we have derived (C → F) → F by the induction hypothesis, we can infer ((B → C) → F) → F. - -#e(B) = 0. Then again e(A) = 1. The deduction theorem applied to (3) gives - -#::$B\to F\vdash((B\to C)\to F)\to F.$ - -#:Since we have derived B → F by the induction hypothesis, we can infer ((B → C) → F) → F. - -#e(B) = 1 and e(C) = 0. Then e(A) = 0. We have - -#::$\begin{align}(B\to F)\to F,C\to F,B\to C&\vdash B\to F&&\text{by (1)}\\&\vdash F&&\text{by modus ponens,}\end{align}$ - -#:thus $(B\to F)\to F,C\to F\vdash(B\to C)\to F$ by the deduction theorem. We have derived (B → F) → F and C → F by the induction hypothesis, hence we can infer (B → C) → F. This completes the proof of (4). - -Now let F be a tautology in variables $p_1, \dots, p_n$. We will prove by reverse induction on k = n,...,0 that for every assignment e, - -(5) $p_1^{e(p_1)},\dots,p_k^{e(p_k)}\vdash F.$ - -The base case k = n follows from a special case of (4) using -$$ - F^{e(F)} = F^1 = ((F \to F) \to F) -$$ - -and the fact that F→F is a theorem by the deduction theorem. - -Assume that (5) holds for k + 1; we will show it for k. By applying the deduction theorem to the induction hypothesis, we obtain - -\begin{align}p_1^{e(p_1)},\dots,p_k^{e(p_k)}&\vdash(p_{k+1}\to F)\to F,\\ - -p_1^{e(p_1)},\dots,p_k^{e(p_k)}&\vdash((p_{k+1}\to F)\to F)\to F,\end{align} - -by first setting $e(p_{k+1}) = 0$ and second setting $e(p_{k+1}) = 1$. From this we derive (5) using modus ponens. - -For k = 0 we obtain that the tautology F is provable without assumptions. This is what was to be proved. - -This proof is constructive. That is, given a tautology, one could actually follow the instructions and create a proof of it from the axioms. However, the length of such a proof increases exponentially with the number of propositional variables in the tautology, hence it is not a practical method for any but the very shortest tautologies. - -The Bernays–Tarski axiom system is often used. In particular, Łukasiewicz's paper derives the Bernays–Tarski axioms from Łukasiewicz's sole axiom as a means of showing its completeness.
    - -It differs from the axiom schemas above by replacing axiom schema 2, (P→(Q→R))→((P→Q)→(P→R)), with - -* Axiom schema 2': (P→Q)→((Q→R)→(P→R)) - -which is called hypothetical syllogism. - -This makes derivation of the deduction meta-theorem a little more difficult, but it can still be done. - -We show that from P→(Q→R) and P→Q one can derive P→R. This fact can be used in lieu of axiom schema 2 to get the meta-theorem. - -# P→(Q→R) given - -# P→Q given - -# (P→Q)→((Q→R)→(P→R)) ax 2' - -# (Q→R)→(P→R) mp 2,3 - -# (P→(Q→R))→(((Q→R)→(P→R))→(P→(P→R))) ax 2' - -# ((Q→R)→(P→R))→(P→(P→R)) mp 1,5 - -# P→(P→R) mp 4,6 - -# (P→(P→R))→(((P→R)→R)→(P→R)) ax 2' - -# ((P→R)→R)→(P→R) mp 7,8 - -# (((P→R)→R)→(P→R))→(P→R) ax 3 - -# P→R mp 9,10 qed - -Satisfiability in the implicational propositional calculus is trivial, because every formula is satisfiable: just set all variables to true. - -Falsifiability in the implicational propositional calculus is NP-complete, meaning that validity (tautology) is co-NP-complete. - -In this case, a useful technique is to presume that the formula is not a tautology and attempt to find a valuation which makes it false. If one succeeds, then it is indeed not a tautology. If one fails, then it is a tautology. - -Example of a non-tautology: - -Suppose [(A→B)→((C→A)→E)]→([F→((C→D)→E)]→[(A→F)→(D→E)]) is false. - -Then (A→B)→((C→A)→E) is true; F→((C→D)→E) is true; A→F is true; D is true; and E is false. - -Since D is true, C→D is true. So the truth of F→((C→D)→E) is equivalent to the truth of F→E. - -Then since E is false and F→E is true, we get that F is false. - -Since A→F is true, A is false. Thus A→B is true and (C→A)→E is true. - -C→A is false, so C is true. - -The value of B does not matter, so we can arbitrarily choose it to be true. - -Summing up, the valuation which sets B, C and D to be true and A, E and F to be false will make [(A→B)→((C→A)→E)]→([F→((C→D)→E)]→[(A→F)→(D→E)]) false. So it is not a tautology. - -Example of a tautology: - -Suppose ((A→B)→C)→((C→A)→(D→A)) is false. - -Then (A→B)→C is true; C→A is true; D is true; and A is false. - -Since A is false, A→B is true. So C is true. Thus A must be true, contradicting the fact that it is false. - -Thus there is no valuation which makes ((A→B)→C)→((C→A)→(D→A)) false. Consequently, it is a tautology. - -What would happen if another axiom schema were added to those listed above? There are two cases: (1) it is a tautology; or (2) it is not a tautology. - -If it is a tautology, then the set of theorems remains the set of tautologies as before. However, in some cases it may be possible to find significantly shorter proofs for theorems. Nevertheless, the minimum length of proofs of theorems will remain unbounded, that is, for any natural number n there will still be theorems which cannot be proved in n or fewer steps. - -If the new axiom schema is not a tautology, then every formula becomes a theorem (which makes the concept of a theorem useless in this case). What is more, there is then an upper bound on the minimum length of a proof of every formula, because there is a common method for proving every formula. For example, suppose the new axiom schema were ((B→C)→C)→B. Then ((A→(A→A))→(A→A))→A is an instance (one of the new axioms) and also not a tautology. But [((A→(A→A))→(A→A))→A]→A is a tautology and thus a theorem due to the old axioms (using the completeness result above). Applying modus ponens, we get that A is a theorem of the extended system. 
Then all one has to do to prove any formula is to replace A by the desired formula throughout the proof of A. This proof will have the same number of steps as the proof of A. - -The axioms listed above primarily work through the deduction metatheorem to arrive at completeness. Here is another axiom system which aims directly at completeness without going through the deduction metatheorem. - -First we have axiom schemas which are designed to efficiently prove the subset of tautologies which contain only one propositional variable. - -* aa 1: ꞈA→A - -* aa 2: (A→B)→ꞈ(A→(C→B)) - -* aa 3: A→((B→C)→ꞈ((A→B)→C)) - -* aa 4: A→ꞈ(B→A) - -The proof of each such tautology would begin with two parts (hypothesis and conclusion) which are the same. Then insert additional hypotheses between them. Then insert additional tautological hypotheses (which are true even when the sole variable is false) into the original hypothesis. Then add more hypotheses outside (on the left). This procedure will quickly give every tautology containing only one variable. (The symbol "ꞈ" in each axiom schema indicates where the conclusion used in the completeness proof begins. It is merely a comment, not a part of the formula.) - -Consider any formula Φ which may contain A, B, C1, ..., Cn and ends with A as its final conclusion. Then we take - -* aa 5: Φ-→(Φ+→ꞈΦ) - -as an axiom schema where Φ- is the result of replacing B by A throughout Φ and Φ+ is the result of replacing B by (A→A) throughout Φ. This is a schema for axiom schemas since there are two level of substitution: in the first Φ is substituted (with variations); in the second, any of the variables (including both A and B) may be replaced by arbitrary formulas of the implicational propositional calculus. This schema allows one to prove tautologies with more than one variable by considering the case when B is false Φ- and the case when B is true Φ+. - -If the variable which is the final conclusion of a formula takes the value true, then the whole formula takes the value true regardless of the values of the other variables. Consequently if A is true, then Φ, Φ-, Φ+ and Φ-→(Φ+→Φ) are all true. So without loss of generality, we may assume that A is false. Notice that Φ is a tautology if and only if both Φ- and Φ+ are tautologies. But while Φ has n+2 distinct variables, Φ- and Φ+ both have n+1. So the question of whether a formula is a tautology has been reduced to the question of whether certain formulas with one variable each are all tautologies. Also notice that Φ-→(Φ+→Φ) is a tautology regardless of whether Φ is, because if Φ is false then either Φ- or Φ+ will be false depending on whether B is false or true. 
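The truth-table checks used in the non-tautology and tautology examples above can be mechanized, giving an independent check on the derivations that follow. Here is a small Python sketch (ours, not from the article; formulas are encoded as nested (antecedent, consequent) tuples, an encoding chosen only for brevity):

```python
from itertools import product

def variables(formula):
    """Collect variable names from a formula: a string is a variable,
    a pair (a, b) represents the implication a -> b."""
    if isinstance(formula, str):
        return {formula}
    return variables(formula[0]) | variables(formula[1])

def evaluate(formula, valuation):
    if isinstance(formula, str):
        return valuation[formula]
    a, b = formula
    return (not evaluate(a, valuation)) or evaluate(b, valuation)

def is_tautology(formula):
    vs = sorted(variables(formula))
    return all(evaluate(formula, dict(zip(vs, vals)))
               for vals in product([False, True], repeat=len(vs)))

# Peirce's law ((P -> Q) -> P) -> P:
print(is_tautology(((("P", "Q"), "P"), "P")))  # -> True

# Lukasiewicz' sole axiom ((P -> Q) -> R) -> ((R -> P) -> (S -> P)):
luk = ((("P", "Q"), "R"), (("R", "P"), ("S", "P")))
print(is_tautology(luk))  # -> True, checking all 2**4 valuations
```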
- -Examples: - -Deriving Peirce's law - -# [((P→P)→P)→P]→([((P→(P→P))→P)→P]→[((P→Q)→P)→P]) aa 5 - -# P→P aa 1 - -# (P→P)→((P→P)→(((P→P)→P)→P)) aa 3 - -# (P→P)→(((P→P)→P)→P) mp 2,3 - -# ((P→P)→P)→P mp 2,4 - -# [((P→(P→P))→P)→P]→[((P→Q)→P)→P] mp 5,1 - -# P→(P→P) aa 4 - -# (P→(P→P))→((P→P)→(((P→(P→P))→P)→P)) aa 3 - -# (P→P)→(((P→(P→P))→P)→P) mp 7,8 - -# ((P→(P→P))→P)→P mp 2,9 - -# ((P→Q)→P)→P mp 10,6 qed - -Deriving Łukasiewicz' sole axiom - -# [((P→Q)→P)→((P→P)→(S→P))]→([((P→Q)→(P→P))→(((P→P)→P)→(S→P))]→[((P→Q)→R)→((R→P)→(S→P))]) aa 5 - -# [((P→P)→P)→((P→P)→(S→P))]→([((P→(P→P))→P)→((P→P)→(S→P))]→[((P→Q)→P)→((P→P)→(S→P))]) aa 5 - -# P→(S→P) aa 4 - -# (P→(S→P))→(P→((P→P)→(S→P))) aa 2 - -# P→((P→P)→(S→P)) mp 3,4 - -# P→P aa 1 - -# (P→P)→((P→((P→P)→(S→P)))→[((P→P)→P)→((P→P)→(S→P))]) aa 3 - -# (P→((P→P)→(S→P)))→[((P→P)→P)→((P→P)→(S→P))] mp 6,7 - -# ((P→P)→P)→((P→P)→(S→P)) mp 5,8 - -# [((P→(P→P))→P)→((P→P)→(S→P))]→[((P→Q)→P)→((P→P)→(S→P))] mp 9,2 - -# P→(P→P) aa 4 - -# (P→(P→P))→((P→((P→P)→(S→P)))→[((P→(P→P))→P)→((P→P)→(S→P))]) aa 3 - -# (P→((P→P)→(S→P)))→[((P→(P→P))→P)→((P→P)→(S→P))] mp 11,12 - -# ((P→(P→P))→P)→((P→P)→(S→P)) mp 5,13 - -# ((P→Q)→P)→((P→P)→(S→P)) mp 14,10 - -# [((P→Q)→(P→P))→(((P→P)→P)→(S→P))]→[((P→Q)→R)→((R→P)→(S→P))] mp 15,1 - -# (P→P)→((P→(S→P))→[((P→P)→P)→(S→P)]) aa 3 - -# (P→(S→P))→[((P→P)→P)→(S→P)] mp 6,17 - -# ((P→P)→P)→(S→P) mp 3,18 - -# (((P→P)→P)→(S→P))→[((P→Q)→(P→P))→(((P→P)→P)→(S→P))] aa 4 - -# ((P→Q)→(P→P))→(((P→P)→P)→(S→P)) mp 19,20 - -# ((P→Q)→R)→((R→P)→(S→P)) mp 21,16 qed - -Using a truth table to verify Łukasiewicz' sole axiom would require consideration of 16 = 2^4 cases, since it contains 4 distinct variables. In this derivation, we were able to restrict consideration to merely 3 cases: R is false and Q is false, R is false and Q is true, and R is true. However, because we are working within the formal system of logic (instead of outside it, informally), each case required much more effort. diff --git a/wiki/wikipedia/3836.txt b/wiki/wikipedia/3836.txt deleted file mode 100644 index a779eea06f2f7a387d2bb31d553d1a73cd80b7e4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3836.txt +++ /dev/null @@ -1,7 +0,0 @@ -Maekawa's theorem is a theorem in the mathematics of paper folding named after Jun Maekawa. It relates to flat-foldable origami crease patterns and states that at every vertex, the numbers of valley and mountain folds always differ by two in either direction. The same result was also discovered by Jacques Justin and, even earlier, by S. Murata. - -One consequence of Maekawa's theorem is that the total number of folds at each vertex must be an even number. This implies (via a form of planar graph duality between Eulerian graphs and bipartite graphs) that, for any flat-foldable crease pattern, it is always possible to color the regions between the creases with two colors, such that each crease separates regions of differing colors. The same result can also be seen by considering which side of the sheet of paper is uppermost in each region of the folded shape. - -Maekawa's theorem does not completely characterize the flat-foldable vertices, because it only takes into account the numbers of folds of each type, and not their angles. - -Kawasaki's theorem gives a complementary condition on the angles between the folds at a vertex (regardless of which folds are mountain folds and which are valley folds) that is also necessary for a vertex to be flat-foldable.
diff --git a/wiki/wikipedia/3837.txt b/wiki/wikipedia/3837.txt deleted file mode 100644 index 418a71f8d7b1de5e2421a52fb1f5d4eca62813cf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3837.txt +++ /dev/null @@ -1,27 +0,0 @@ -In logic, a logical framework provides a means to define (or present) a logic as a signature in a higher-order type theory in such a way that provability of a formula in the original logic reduces to a type inhabitation problem in the framework type theory. This approach has been used successfully for (interactive) automated theorem proving. The first logical framework was Automath; however, the name of the idea comes from the more widely known Edinburgh Logical Framework, LF. Several more recent proof tools like Isabelle are based on this idea. - -A logical framework is based on a general treatment of syntax, rules and proofs by means of a dependently typed lambda calculus. Syntax is treated in a style similar to, but more general than, Per Martin-Löf's system of arities. - -To describe a logical framework, one must provide the following: - -# A characterization of the class of object-logics to be represented; - -# An appropriate meta-language; - -# A characterization of the mechanism by which object-logics are represented. - -This is summarized by: - -"Framework = Language + Representation." - -In the case of the LF logical framework, the meta-language is the λΠ-calculus. This is a system of first-order dependent function types which are related by the propositions as types principle to first-order minimal logic. The key features of the λΠ-calculus are that it consists of entities of three levels: objects, types and kinds (or type classes, or families of types). It is predicative; all well-typed terms are strongly normalizing and Church–Rosser; and the property of being well-typed is decidable. However, type inference is undecidable. - -A logic is represented in the LF logical framework by the judgements-as-types representation mechanism. This is inspired by Per Martin-Löf's development of Kant's notion of judgement, in the 1983 Siena Lectures. The two higher-order judgements, the hypothetical $J\vdash K$ and the general, $\Lambda x\in J. K(x)$, correspond to the ordinary and dependent function space, respectively. The methodology of judgements-as-types is that judgements are represented as the types of their proofs. A logical system ${\mathcal L}$ is represented by its signature, which assigns kinds and types to a finite set of constants that represents its syntax, its judgements and its rule schemes. An object-logic's rules and proofs are seen as primitive proofs of hypothetico-general judgements $\Lambda x\in C. J(x)\vdash K$. - -An implementation of the LF logical framework is provided by the Twelf system at Carnegie Mellon University. Twelf includes - -* a logic programming engine - -* meta-theoretic reasoning about logic programs (termination, coverage, etc.) - -* an inductive meta-logical theorem prover diff --git a/wiki/wikipedia/3838.txt b/wiki/wikipedia/3838.txt deleted file mode 100644 index 40ccca3c04a7e6f6f9a30f41ad0f5e349c7aaed6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3838.txt +++ /dev/null @@ -1,41 +0,0 @@ -The Fueter–Pólya theorem, first proved by Rudolf Fueter and George Pólya, states that the only quadratic pairing functions are the Cantor polynomials.
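The Cantor polynomial $P(x,y) = \tfrac{1}{2}((x+y)^2+3x+y)$, presented in detail below, can also be checked numerically to be a pairing function on an initial segment; the following is a quick illustrative sketch (added here, not part of the original article):

```python
def cantor(x, y):
    """The Cantor polynomial P(x, y) = ((x + y)^2 + 3x + y) / 2."""
    return ((x + y) ** 2 + 3 * x + y) // 2

# On the diagonals x + y = 0, 1, ..., n the values of P are exactly
# 0, 1, ..., (n + 1)(n + 2)/2 - 1, each taken once -- a bijection
# between an initial segment of N^2 and an initial segment of N.
n = 100
values = sorted(cantor(x, s - x) for s in range(n + 1) for x in range(s + 1))
assert values == list(range((n + 1) * (n + 2) // 2))
```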
- -In 1873, Georg Cantor showed that the so-called Cantor polynomial
$$
P(x,y) := \frac{1}{2} ((x+y)^2+3x+y) = \frac{x^2 + 2xy + 3x + y^2 + y}{2} = x + \frac{(x+y)(x+y+1)}{2} = \binom{x}{1} + \binom{x+y+1}{2}
$$
is a bijective mapping from $\N^2$ to $\N$. - -The polynomial given by swapping the variables is also a pairing function. - -Fueter was investigating whether there are other quadratic polynomials with this property, and concluded that this is not the case assuming $P(0,0)=0$. He then wrote to Pólya, who showed the theorem does not require this condition. - -If $P$ is a real quadratic polynomial in two variables whose restriction to $\N^2$ is a bijection from $\N^2$ to $\N$ then it is -$$ -P(x,y) := \frac{1}{2} ((x+y)^2+3x+y) -$$ - -or -$$ -P(x,y) := \frac{1}{2} ((y+x)^2+3y+x). -$$ - -The original proof is surprisingly difficult, using the Lindemann–Weierstrass theorem to prove the transcendence of -$$ -e^a -$$ for a nonzero algebraic number $a$. - -In 2002, M. A. Vsemirnov published an elementary proof of this result. - -The theorem states that the Cantor polynomial and its variable-swapped counterpart are the only quadratic pairing polynomials of $\N^2$ and $\N$. The conjecture is that these are the only such pairing polynomials, of any degree. - -A generalization of the Cantor polynomial in higher dimensions is as follows: -$$ -P_n(x_1,\ldots,x_n) = \sum_{k=1}^n \binom{k-1+\sum_{j=1}^k x_j}{k} = x_1 + \binom{x_1+x_2+1}{2} + \cdots +\binom{x_1+\cdots +x_n+n-1}{n} -$$ - -The sum of these binomial coefficients yields a polynomial of degree $n$ in $n$ variables. This is just one of at least $(n-1)!$ inequivalent packing polynomials for $n$ dimensions. diff --git a/wiki/wikipedia/3839.txt b/wiki/wikipedia/3839.txt deleted file mode 100644 index b0eb044b3d5dfed87fab34523b431b0ea29814cc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3839.txt +++ /dev/null @@ -1,51 +0,0 @@ -In computational complexity theory, a probabilistically checkable proof (PCP) is a type of proof that can be checked by a randomized algorithm using a bounded amount of randomness and reading a bounded number of bits of the proof. The algorithm is then required to accept correct proofs and reject incorrect proofs with very high probability. A standard proof (or certificate), as used in the verifier-based definition of the complexity class NP, also satisfies these requirements, since the checking procedure deterministically reads the whole proof, always accepts correct proofs and rejects incorrect proofs. However, what makes them interesting is the existence of probabilistically checkable proofs that can be checked by reading only a few bits of the proof using randomness in an essential way. - -Probabilistically checkable proofs give rise to many complexity classes depending on the number of queries required and the amount of randomness used. The class PCP[r(n),q(n)] refers to the set of decision problems that have probabilistically checkable proofs that can be verified in polynomial time using at most r(n) random bits and by reading at most q(n) bits of the proof. Unless specified otherwise, correct proofs should always be accepted, and incorrect proofs should be rejected with probability greater than 1/2. The PCP theorem, a major result in computational complexity theory, states that PCP[O(log n),O(1)] = NP. - -Given a decision problem L (or a language L with its alphabet set Σ), a probabilistically checkable proof system for L with completeness c(n) and soundness s(n), where 0 ≤ s(n) ≤ c(n) ≤ 1, consists of a prover and a verifier.
Given a claimed solution x with length n, which might be false, the prover produces a proof π which states that x solves L (x ∈ L; the proof is a string ∈ Σ*). The verifier is a randomized oracle Turing machine V that checks the proof π for the statement that x solves L (or x ∈ L) and decides whether to accept the statement. The system has the following properties: - -* Completeness: For any x ∈ L, given the proof π produced by the prover of the system, the verifier accepts the statement with probability at least c(n). - -* Soundness: For any x ∉ L and any proof π, the verifier mistakenly accepts the statement with probability at most s(n). - -To measure the computational complexity of the verifier, we define the randomness complexity r(n) as the maximum number of random bits that V uses over all x of length n, and the query complexity q(n) as the maximum number of queries that V makes to π over all x of length n. - -In the above definition, the length of the proof is not mentioned, since usually it includes the alphabet set and all the witnesses. For the prover, we do not care how it arrives at the solution to the problem; we care only about the proof it gives of the solution's membership in the language. - -The verifier is said to be non-adaptive if it makes all its queries before it receives any of the answers to previous queries. - -The complexity class PCP_{c(n), s(n)}[r(n), q(n)] is the class of all decision problems having probabilistically checkable proof systems over a binary alphabet of completeness c(n) and soundness s(n), where the verifier is nonadaptive, runs in polynomial time, and has randomness complexity r(n) and query complexity q(n). - -The shorthand notation PCP[r(n), q(n)] is sometimes used for PCP_{1, 1/2}[r(n), q(n)]. The complexity class PCP is defined as PCP_{1, 1/2}[O(log n), O(1)]. - -The theory of probabilistically checkable proofs studies the power of probabilistically checkable proof systems under various restrictions of the parameters (completeness, soundness, randomness complexity, query complexity, and alphabet size). It has applications to computational complexity (in particular hardness of approximation) and cryptography. - -The definition of a probabilistically checkable proof was explicitly introduced by Arora and Safra in 1992, although their properties were studied earlier. In 1990 Babai, Fortnow, and Lund proved that PCP[poly(n), poly(n)] = NEXP, providing the first nontrivial equivalence between standard proofs (NEXP) and probabilistically checkable proofs. The PCP theorem proved in 1992 states that PCP[O(log n),O(1)] = NP. - -The theory of hardness of approximation requires a detailed understanding of the role of completeness, soundness, alphabet size, and query complexity in probabilistically checkable proofs. - -From the computational complexity point of view, for extreme settings of the parameters, the definition of probabilistically checkable proofs is easily seen to be equivalent to standard complexity classes. For example, we have the following for different settings of PCP[r(n), q(n)]: - -*PCP[0, 0] = P (P is defined to have no randomness and no access to a proof.) - -*PCP[O(log(n)), 0] = P (A logarithmic number of random bits doesn't help a polynomial time Turing machine, since it could try all possible random strings of logarithmic length in polynomial time.) - -*PCP[0,O(log(n))] = P (Without randomness, the proof can be thought of as a fixed logarithmic-sized string.
A polynomial time machine could try all possible logarithmic-sized proofs in polynomial time.) - -*PCP[poly(n), 0] = coRP (By definition of coRP.) - -*PCP[0, poly(n)] = NP (By the verifier-based definition of NP.) - -The PCP theorem and MIP = NEXP can be characterized as follows: - -*PCP[O(log n),O(1)] = NP (the PCP theorem) - -*PCP[poly(n),O(1)] = PCP[poly(n),poly(n)] = NEXP (MIP = NEXP). - -It is also known that PCP[r(n), q(n)] ⊆ NTIME(poly(n, 2^{O(r(n))} q(n))). - -In particular, PCP[log n, poly(n)] = NP. - -On the other hand, if NP ⊆ PCP[o(log n),o(log n)] then P = NP. - -A linear PCP is one in which, given a query q, the oracle performs only linear operations on the proof $\pi$. That is to say, the response from the oracle to the query q is a linear function $f(q,\pi)$. diff --git a/wiki/wikipedia/384.txt b/wiki/wikipedia/384.txt deleted file mode 100644 index a132982bb6fe5b8300ca9ec770b97085987ff43f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/384.txt +++ /dev/null @@ -1,37 +0,0 @@ -In algebra, Serre's criterion for normality, introduced by Jean-Pierre Serre, gives necessary and sufficient conditions for a commutative Noetherian ring A to be a normal ring. The criterion involves the following two conditions for A: - -*$R_k: A_{\mathfrak{p}}$ is a regular local ring for any prime ideal $\mathfrak{p}$ of height ≤ k. - -*$S_k: \operatorname{depth} A_{\mathfrak{p}} \ge \inf \{k, \operatorname{ht}(\mathfrak{p}) \}$ for any prime ideal $\mathfrak{p}$. - -The statement is: - -*A is a reduced ring $\Leftrightarrow R_0, S_1$ hold. - -*A is a normal ring $\Leftrightarrow R_1, S_2$ hold. - -*A is a Cohen–Macaulay ring $\Leftrightarrow S_k$ holds for all k. - -Items 1, 3 trivially follow from the definitions. Item 2 is much deeper. - -For an integral domain, the criterion is due to Krull. The general case is due to Serre. - -(After EGA IV. Theorem 5.8.6.) - -Suppose A satisfies S2 and R1. Then A in particular satisfies S1 and R0; hence, it is reduced. If $\mathfrak{p}_i, 1 \le i \le r$ are the minimal prime ideals of A, then the total ring of fractions K of A is the direct product of the residue fields $\kappa(\mathfrak{p}_i) = Q(A/\mathfrak{p}_i)$: see total ring of fractions of a reduced ring. That means we can write $1 = e_1 + \dots + e_r$ where $e_i$ are idempotents in $\kappa(\mathfrak{p}_i)$ and such that $e_i e_j = 0, i \ne j$. Now, if A is integrally closed in K, then each $e_i$ is integral over A and so is in A; consequently, A is a direct product of the integrally closed domains $Ae_i$, and we are done. Thus, it is enough to show that A is integrally closed in K. - -To this end, suppose -$$ -(f/g)^n + a_1 (f/g)^{n-1} + \dots + a_n = 0 -$$ - -where all f, g, $a_i$ are in A and g is moreover a non-zerodivisor. We want to show: -$$ -f \in gA -$$. - -Now, the condition S2 says that $gA$ is unmixed of height one; i.e., each associated prime $\mathfrak{p}$ of $A/gA$ has height one. This is because if $\mathfrak{p}$ had height greater than one, then $\mathfrak{p}$ would contain a non-zerodivisor in $A/gA$. However, $\mathfrak{p}$ is associated to the zero ideal in $A/gA$, so it can only contain zerodivisors. By the condition R1, the localization $A_{\mathfrak{p}}$ is integrally closed and so $\phi(f) \in \phi(g)A_{\mathfrak{p}}$, where $\phi: A \to A_{\mathfrak{p}}$ is the localization map, since the integral equation persists after localization.
If $gA = \cap_i \mathfrak{q}_i$ is the primary decomposition, then, for any i, the radical of $\mathfrak{q}_i$ is an associated prime $\mathfrak{p}$ of $A/gA$ and so $f \in \phi^{-1}(\mathfrak{q}_i A_{\mathfrak{p}}) = \mathfrak{q}_i$; the equality here is because $\mathfrak{q}_i$ is a $\mathfrak{p}$-primary ideal. Hence, the assertion holds. - -Suppose A is a normal ring. For S2, let $\mathfrak{p}$ be an associated prime of $A/fA$ for a non-zerodivisor f; we need to show it has height one. Replacing A by a localization, we can assume A is a local ring with maximal ideal $\mathfrak{p}$. By definition, there is an element g in A such that $\mathfrak{p} = \{ x \in A | xg \equiv 0 \text{ mod }fA \}$ and $g \not\in fA$. Put y = g/f in the total ring of fractions. If $y \mathfrak{p} \subset \mathfrak{p}$, then $\mathfrak{p}$ is a faithful $A[y]$-module and is a finitely generated A-module; consequently, $y$ is integral over A and thus in A, a contradiction. Hence, $y \mathfrak{p} = A$ or $\mathfrak{p} = (f/g)A$, which implies $\mathfrak{p}$ has height one (Krull's principal ideal theorem). - -For R1, we argue in the same way: let $\mathfrak{p}$ be a prime ideal of height one. Localizing at $\mathfrak{p}$ we assume $\mathfrak{p}$ is a maximal ideal, and a similar argument as above shows that $\mathfrak{p}$ is in fact principal. Thus, A is a regular local ring. $\square$ diff --git a/wiki/wikipedia/3840.txt b/wiki/wikipedia/3840.txt deleted file mode 100644 index 474b10d4d1006ff7d936445b2d985cf60facd0d9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3840.txt +++ /dev/null @@ -1,40 +0,0 @@ -In mathematics, specifically group theory, Cauchy's theorem states that if G is a finite group and p is a prime number dividing the order of G (the number of elements in G), then G contains an element of order p. That is, there is x in G such that p is the smallest positive integer with x^p = e, where e is the identity element of G. It is named after Augustin-Louis Cauchy, who discovered it in 1845. - -The theorem is related to Lagrange's theorem, which states that the order of any subgroup of a finite group G divides the order of G. Cauchy's theorem implies that for any prime divisor p of the order of G, there is a subgroup of G whose order is p—the cyclic group generated by the element in Cauchy's theorem. - -Cauchy's theorem is generalized by Sylow's first theorem, which implies that if p^n is the maximal power of p dividing the order of G, then G has a subgroup of order p^n (and using the fact that a p-group is solvable, one can show that G has subgroups of order p^r for any r less than or equal to n). - -Many texts prove the theorem with the use of strong induction and the class equation, though considerably less machinery is required to prove the theorem in the abelian case. One can also invoke group actions for the proof. - -Let G be a finite group and p be a prime. If p divides the order of G, then G has an element of order p. - -We first prove the special case where G is abelian, and then the general case; both proofs are by induction on n = |G|, and have as starting case n = p, which is trivial because any non-identity element now has order p. Suppose first that G is abelian. Take any non-identity element a, and let H be the cyclic group it generates. If p divides |H|, then a^{|H|/p} is an element of order p. If p does not divide |H|, then it divides the order [G:H] of the quotient group G/H, which therefore contains an element of order p by the inductive hypothesis.
That element is a class xH for some x in G, and if m is the order of x in G, then x^m = e in G gives (xH)^m = eH in G/H, so p divides m; as before x^{m/p} is now an element of order p in G, completing the proof for the abelian case. - -In the general case, let Z be the center of G, which is an abelian subgroup. If p divides |Z|, then Z contains an element of order p by the case of abelian groups, and this element works for G as well. So we may assume that p does not divide the order of Z. Since p does divide |G|, and G is the disjoint union of Z and of the conjugacy classes of non-central elements, there exists a conjugacy class of a non-central element a whose size is not divisible by p. But the class equation shows that size is [G : C_G(a)], so p divides the order of the centralizer C_G(a) of a in G, which is a proper subgroup because a is not central. This subgroup contains an element of order p by the inductive hypothesis, and we are done. - -This proof uses the fact that for any action of a (cyclic) group of prime order p, the only possible orbit sizes are 1 and p, which is immediate from the orbit–stabilizer theorem. - -The set that our cyclic group shall act on is the set -$$ - X = \{(x_1,\ldots,x_p) \in G^p : x_1x_2\cdots x_p = e \} -$$ - -of p-tuples of elements of G whose product (in order) gives the identity. Such a p-tuple is uniquely determined by all its components except the last one, as the last element must be the inverse of the product of those preceding elements. One also sees that those p − 1 elements can be chosen freely, so X has |G|^{p−1} elements, which is divisible by p. - -Now from the fact that in a group if ab = e then also ba = e, it follows that any cyclic permutation of the components of an element of X again gives an element of X. Therefore one can define an action of the cyclic group C_p of order p on X by cyclic permutations of components, in other words in which a chosen generator of C_p sends -$$ -(x_1,x_2,\ldots,x_p)\mapsto(x_2,\ldots,x_p,x_1) -$$. - -As remarked, orbits in X under this action either have size 1 or size p. The former happens precisely for those tuples $(x,x,\ldots,x)$ for which $x^p=e$. Counting the elements of X by orbits, and reducing modulo p, one sees that the number of elements satisfying $x^p=e$ is divisible by p. But x = e is one such element, so there must be at least p − 1 other solutions for x, and these solutions are elements of order p. This completes the proof. - -A practically immediate consequence of Cauchy's theorem is a useful characterization of finite p-groups, where p is a prime. In particular, a finite group G is a p-group (i.e. all of its elements have order p^k for some natural number k) if and only if G has order p^n for some natural number n. One may use the abelian case of Cauchy's theorem in an inductive proof of the first of Sylow's theorems, similar to the first proof above, although there are also proofs that avoid doing this special case separately. - -Let G be a finite group in which x^2 = e for every element x of G. Then G has order 2^n for some non-negative integer n. Let m = |G|. If m = 1, then G = {e}. If m ≥ 2, then any odd prime factor p of m would, by Cauchy's theorem, give an element x with x^p = e, contradicting the assumption; therefore m must be 2^n. Such a G is an abelian group (since xy = (xy)^{-1} = y^{-1}x^{-1} = yx), and it is called an elementary abelian 2-group or Boolean group. A well-known example is the Klein four-group. - -An abelian simple group is either {e} or a cyclic group C_p whose order is a prime number p.
Let G be an abelian group; then all subgroups of G are normal subgroups. So, if G is a simple group, its only normal subgroups are {e} and G. If |G| = 1, then G = {e}, which is of the stated form. If |G| ≥ 2, let a ∈ G be an element different from e; then the cyclic group $\langle a \rangle$ is a subgroup of G and $\langle a \rangle$ is not {e}, so $G = \langle a \rangle.$ Let n be the order of $\langle a \rangle$. If n is infinite, then -$$ -G = \langle a \rangle \supsetneqq \langle a^2 \rangle \supsetneqq \{e\}, -$$ - -so G is not simple in this case. Hence n is finite. If n is composite, then n is divisible by a prime q which is less than n, and by Cauchy's theorem G has a subgroup H of order q, again contradicting simplicity. Therefore, n must be a prime number. diff --git a/wiki/wikipedia/3841.txt b/wiki/wikipedia/3841.txt deleted file mode 100644 index be531e40d6359582e8eb30d4679a015de295c61b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3841.txt +++ /dev/null @@ -1,211 +0,0 @@ -Short integer solution (SIS) and ring-SIS problems are two average-case problems that are used in lattice-based cryptography constructions. Lattice-based cryptography began in 1996 from a seminal work by Ajtai, who presented a family of one-way functions based on the SIS problem. He showed that it is secure in an average case if the shortest vector problem $\mathrm{SVP}_\gamma$ (where $\gamma = n^c$ for some constant $c>0$) is hard in a worst-case scenario. - -Average-case problems are problems that are hard to solve for randomly selected instances. For cryptography applications, worst-case complexity is not sufficient: we need to guarantee that cryptographic constructions are hard with respect to average-case complexity. - -A full-rank lattice $ \mathfrak{L} \subset \R^n $ is a set of integer linear combinations of $ n $ linearly independent vectors $ \{b_1,\ldots,b_n\} $, named a basis: - - - -\mathfrak{L}(b_1,\ldots,b_n) = \left\{ \sum_{i=1}^n z_i b_i: z_i \in \Z \right\} = \{ B\boldsymbol{z}: \boldsymbol{z} \in \Z^n \} - - - -where $ B \in \R ^{n\times n} $ is a matrix having the basis vectors in its columns. - -Remark: Given two bases $ B_1,B_2 $ for a lattice $ \mathfrak{L} $, there exists a unimodular matrix $ U_1 $ such that $ B_1 = B_2U_1^{-1}, B_2 = B_1U_1 $. - -Definition: The rotational shift operator on $ \R^n (n \geq 2) $ is denoted by $ \operatorname{rot} $, and is defined as: - - - -\forall \boldsymbol{x} = (x_1,\ldots,x_{n-1},x_n) \in \R^n: \operatorname{rot}(x_1,\ldots,x_{n-1},x_n) = (x_n,x_1,\ldots,x_{n-1}) - - - -Micciancio introduced cyclic lattices in his work generalizing the compact knapsack problem to arbitrary rings. A cyclic lattice is a lattice that is closed under the rotational shift operator. Formally, cyclic lattices are defined as follows: - -Definition: A lattice $ \mathfrak{L} \subseteq \Z^n $ is cyclic if $ \forall \boldsymbol{x} \in \mathfrak{L}: \operatorname{rot}(\boldsymbol{x}) \in \mathfrak{L} $. - -Examples: - -# $ \Z^n $ itself is a cyclic lattice. - -# Lattices corresponding to any ideal in the quotient polynomial ring $ R = \Z[x]/(x^n-1) $ are cyclic: - -consider the quotient polynomial ring $ R = \Z[x]/(x^n-1) $, and let $ p(x) $ be some polynomial in $ R $, i.e. $ p(x) = \sum_{i=0}^{n-1}a_ix^i $ where $ a_i \in \Z $ for $ i = 0,\ldots, n-1 $. - -Define the embedding coefficient $ \Z $-module isomorphism $ \rho $ as: - - - -\begin{align} - -\quad \rho: R & \rightarrow \Z^n \\[4pt] - -\rho(x) = \sum_{i=0}^{n-1}a_ix^i & \mapsto (a_0,\ldots,a_{n-1}) - -\end{align} - - - -Let $ I \subset R $ be an ideal.
The lattice corresponding to the ideal $ I \subset R $, denoted by $ \mathfrak{L}_I $, is a sublattice of $ \Z^n $, and is defined as -$$ - \mathfrak{L}_I := \rho(I) = \left\{ (a_0,\ldots,a_{n-1}) \mid \sum_{i=0}^{n-1}a_ix^i \in I \right\} \subset \Z^n. -$$ - -Theorem: $ \mathfrak{L} \subset \Z^n $ is cyclic if and only if $ \mathfrak{L} $ corresponds to some ideal $ I $ in the quotient polynomial ring $ R = \Z[x]/(x^n-1) $. - -Proof: -$$ - \Leftarrow) -$$ We have: - - - -\mathfrak{L} = \mathfrak{L}_I := \rho(I) = \left\{ (a_0,\ldots,a_{n-1}) \mid \sum_{i=0}^{n-1}a_ix^i \in I\right\} - - - -Let $ (a_0,\ldots,a_{n-1}) $ be an arbitrary element in $ \mathfrak{L} $. Then, define $ p(x) = \sum_{i=0}^{n-1}a_ix^i \in I $. But since $ I $ is an ideal, we have $ xp(x) \in I $. Then, $ \rho(xp(x)) \in \mathfrak{L}_I $. But, $ \rho(xp(x)) = \operatorname{rot}(a_0,\ldots,a_{n-1}) \in \mathfrak{L}_I $. Hence, $ \mathfrak{L} $ is cyclic. -$$ - \Rightarrow) -$$ - -Let $ \mathfrak{L} \subset \Z^n $ be a cyclic lattice. Hence $ \forall (a_0,\ldots,a_{n-1}) \in \mathfrak{L}: \operatorname{rot}(a_0,\ldots,a_{n-1}) \in \mathfrak{L} $. - -Define the set of polynomials $ I: = \left\{ \sum_{i=0}^{n-1}a_ix^i \mid (a_0,\ldots,a_{n-1}) \in \mathfrak{L} \right\} $: - -# Since $ \mathfrak{L} $ is a lattice and hence an additive subgroup of $ \Z^n $, $ I \subset R $ is an additive subgroup of $ R $. - -# Since $ \mathfrak{L} $ is cyclic, $ \forall p(x) \in I: xp(x) \in I $. - -Hence, $ I \subset R $ is an ideal, and consequently, $ \mathfrak{L} = \mathfrak{L}_I $. - -Let $ f(x) \in \Z[x] $ be a monic polynomial of degree $ n $. For cryptographic applications, $ f(x) $ is usually selected to be irreducible. The ideal generated by $ f(x) $ is: - - - -(f(x)) := f(x) \cdot\Z[x] = \{ f(x)g(x): g(x) \in \Z[x] \}. - - - -The quotient polynomial ring $R = \Z[x]/(f(x))$ partitions $\Z[x]$ into equivalence classes of polynomials of degree at most $n-1$: - - - -R = \Z[x]/(f(x)) = \left\{ \sum_{i=0}^{n-1}a_ix^i: a_i \in \Z \right\} - - - -where addition and multiplication are reduced modulo $f(x)$. - -Consider the embedding coefficient $\Z$-module isomorphism $\rho$. Then, each ideal in $R$ defines a sublattice of $\Z^n$ called an ideal lattice. - -Definition: $\mathfrak{L}_I$, the lattice corresponding to an ideal $I$, is called an ideal lattice. More precisely, consider a quotient polynomial ring $R = \Z[x]/(p(x))$, where $(p(x))$ is the ideal generated by a degree $n$ polynomial $p(x) \in \Z[x]$. $\mathfrak{L}_I$ is a sublattice of $\Z^n$, and is defined as: - - - -\mathfrak{L}_I := \rho(I) = \left\{ (a_0,\ldots,a_{n-1}) \mid \sum_{i=0}^{n-1}a_i x^i \in I \right\} \subset \Z^n. - - - -Remark: - -# It turns out that $\text{GapSVP}_\gamma$ for even small $\gamma = \operatorname{poly}(n)$ is typically easy on ideal lattices. The intuition is that the algebraic symmetries cause the minimum distance of an ideal to lie within a narrow, easily computable range. - -# By exploiting the provided algebraic symmetries in ideal lattices, one can convert a short nonzero vector into $n$ linearly independent ones of (nearly) the same length. Therefore, on ideal lattices, $\mathrm{SVP}_\gamma$ and $\mathrm{SIVP}_\gamma$ are equivalent with a small loss. Furthermore, even for quantum algorithms, $\mathrm{SVP}_\gamma$ and $\mathrm{SIVP}_\gamma$ are very hard in the worst-case scenario. - -With these definitions in place, the SIS and Ring-SIS problems are stated as follows.
Lattice-based cryptography began in 1996 from a seminal work by Ajtai who presented a family of one-way functions based on SIS problem. He showed that it is secure in an average case if $\mathrm{SVP}_{\gamma}$ (where $\gamma = n^c$ for some constant $c>0$) is hard in a worst-case scenario. - -Let $A \in \Z^{n\times m}_q$ be an $n\times m$ matrix with entries in $\Z_q$ that consists of $m$ uniformly random vectors $\boldsymbol{a_i} \in \Z^n_q$: $A = [\boldsymbol{a_1}|\cdots|\boldsymbol{a_m}]$. Find a nonzero vector $\boldsymbol{x} \in \Z^m$ such that: - -* $ \|\boldsymbol{x}\| \leq \beta$ - -* $f_A(\boldsymbol{x}) := A\boldsymbol{x} = \boldsymbol{0} \in \Z^n_q$ - -A solution to SIS without the required constraint on the length of the solution ($\|\boldsymbol{x}\| \leq \beta$) is easy to compute by using Gaussian elimination technique. We also require $\beta < q$, otherwise $\boldsymbol{x} = (q,0,\ldots,0) \in \Z^m$ is a trivial solution. - -In order to guarantee $f_A(\boldsymbol{x})$ has non-trivial, short solution, we require: - -* $\beta \geq \sqrt{n\log q}$, and - -* $m \geq n\log q$ - -Theorem: For any $m = \operatorname{poly}(n)$, any $\beta > 0$, and any sufficiently large $q \geq \beta n^c$ (for any constant $c >0$), solving $\operatorname{SIS}_{n,m,q,\beta}$ with nonnegligible probability is at least as hard as solving the $\operatorname{GapSVP}_\gamma$ and $\operatorname{SIVP}_\gamma$ for some $\gamma = \beta \cdot O(\sqrt{n})$ with a high probability in the worst-case scenario. - -Ring-SIS problem, a compact ring-based analogue of SIS problem, was studied in. - -They consider quotient polynomial ring $R = \Z[x]/(f(x))$ with $f(x) = x^n-1$ and $x^{2^k}+1$, respectively, and extend the definition of norm on vectors in $\R^n$ to vectors in $R^m$ as follows: - -Given a vector $\vec{\boldsymbol{z}} = (p_1,\ldots,p_m)\in R^m$ where $p_i(x)$ are some polynomial in $R$. Consider the embedding coefficient $\Z$-module isomorphism $\rho$: - - - -\begin{align} - -& \rho: R \rightarrow \Z^n \\[3pt] - -\rho(x) & = \sum_{i=0}^{n-1}a_ix^i \mapsto (a_0,\ldots,a_{n-1}) - -\end{align} - - - -Let $\boldsymbol{z_i} = \rho(p_i(x)) \in \Z^n$. Define norm $\vec{\boldsymbol{z}}$ as: - - - -\|\vec{\boldsymbol{z}}\| := \sqrt{\sum_{i=1}^m \|\boldsymbol{z_i}\|^2} - - - -Alternatively, a better notion for norm is achieved by exploiting the canonical embedding. The canonical embedding is defined as: - - - -\begin{align} - -\sigma: R = \Z/(f(x)) & \rightarrow \mathbb{C}^n \\ - -p(x) & \mapsto (p(\alpha_1),\ldots,p(\alpha_{n})) - -\end{align} - - - -where $\alpha_i$ is the $i^{th}$ complex root of $f(x)$ for $i=1,\ldots, n$. - -Given the quotient polynomial ring $R = \Z[x]/(f(x))$, define -$$ -R_q:= R/qR = \Z_q[x]/(f(x)) -$$. Select $m$ independent uniformly random elements $a_i \in R_q$. Define vector $\vec{\boldsymbol{a}}:=(a_1,\ldots,a_m) \in R_q^m$. Find a nonzero vector $\vec{\boldsymbol{z}}:=(p_1,\ldots,p_m) \in R^m$ such that: - -* $ \|\vec{\boldsymbol{z}}\| \leq \beta$ - -* $f_{\vec{\boldsymbol{a}}}(\vec{\boldsymbol{z}}) := \vec{\boldsymbol{a}}^{t}.\vec{\boldsymbol{z}} = \sum_{i=1}^m a_i.p_i = 0 \in R_q$ - -Recall that to guarantee existence of a solution to SIS problem, we require $m \approx n\log q$. However, Ring-SIS problem provide us with more compactness and efficacy: to guarantee existence of a solution to Ring-SIS problem, we require $m \approx \log q$. 
- -Definition: The nega-circulant matrix of $b$ is defined as: - - - -\text{for} \quad b = \sum_{i=0}^{n-1}b_ix^i \in R, \quad - -\operatorname{rot}(b) := \begin{bmatrix} - -b_0 & -b_{n-1} & \ldots & -b_1 \\[0.3em] - -b_1 & b_0 & \ldots & -b_2 \\[0.3em] - -\vdots & \vdots & \ddots & \vdots \\[0.3em] - -b_{n-1} & b_{n-2} & \ldots & b_0 - -\end{bmatrix} - - - -When the quotient polynomial ring is $R = \Z[x]/(x^n+1)$ for $n = 2^k$, the ring multiplication $a_i.p_i$ can be efficiently computed by first forming $\operatorname{rot}(a_i)$, the nega-circulant matrix of $a_i$, and then multiplying $\operatorname{rot}(a_i)$ with $\rho(p_i(x)) \in \Z^n$, the embedding coefficient vector of $p_i$ (or, alternatively, with $\sigma(p_i(x)) \in \mathbb{C}^n$, the canonical coefficient vector). - -Moreover, the R-SIS problem is a special case of the SIS problem in which the matrix $A$ is restricted to negacirculant blocks: $A = [\operatorname{rot}(a_1)|\cdots|\operatorname{rot}(a_m)]$. diff --git a/wiki/wikipedia/3842.txt b/wiki/wikipedia/3842.txt deleted file mode 100644 index fbdf6f15cbf6e38a26435fca81e035e1af0ab8e0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3842.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, Delzant's theorem, introduced by Thomas Delzant, classifies effective Hamiltonian actions of a torus on a compact connected symplectic manifold of twice the dimension by their image under the momentum mapping (the Delzant polytope). A Delzant polytope is a convex polytope in R^n such that the slopes of the edges at each vertex are given by a basis of Z^n. - -As a corollary, these symplectic manifolds have a complex structure and can be realized as toric varieties, with invariant Kähler structures. diff --git a/wiki/wikipedia/3843.txt b/wiki/wikipedia/3843.txt deleted file mode 100644 index 1b22a38fd99ad229030c6a6a8377816803241742..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3843.txt +++ /dev/null @@ -1,37 +0,0 @@ -In mathematics, Proizvolov's identity is an identity concerning sums of differences of positive integers. The identity was posed by Vyacheslav Proizvolov as a problem in the 1985 All-Union Soviet Student Olympiads. - -To state the identity, take the first 2N positive integers, - -1, 2, 3, ..., 2N - 1, 2N, - -and partition them into two subsets of N numbers each. Arrange one subset in increasing order: -$$ - A_1 < A_2 < \cdots < A_N. -$$ - -Arrange the other subset in decreasing order: -$$ - B_1 > B_2 > \cdots > B_N. -$$ - -Then the sum -$$ - |A_1-B_1| + |A_2-B_2| + \cdots + |A_N-B_N| -$$ - -is always equal to N^2. - -Take for example N = 3. The set of numbers is then {1, 2, 3, 4, 5, 6}. Select three numbers of this set, say 2, 3 and 5. Then the sequences A and B are: - -A_1 = 2, A_2 = 3, and A_3 = 5; - -B_1 = 6, B_2 = 4, and B_3 = 1. - -The sum is -$$ -|A_1-B_1| + |A_2-B_2| + |A_3-B_3| = |2-6| + |3-4| + |5-1| = 4+1+4 = 9, -$$ - -which indeed equals 3^2. - -A slick proof of the identity is as follows. Note that for any $a, b$ we have $|a-b|=\max\{a,b\}-\min\{a,b\}$. For this reason, it suffices to establish that the sets $\{\max\{A_i,B_i\}:1\le i\le N\}$ and $\{N+1,N+2,\dots,2N\}$ coincide. Since the numbers $A_i, B_i$ are all distinct, it therefore suffices to show that for any $1\le k\le N$, $\max\{A_k,B_k\}>N$. Assume on the contrary that this is false for some $k$, and consider the $N+1$ positive integers $A_1,A_2,\dots,A_k,B_k,B_{k+1},\dots,B_N$. These numbers are all distinct (due to the construction), but they are at most $N$: a contradiction is reached.
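The identity is also easy to confirm exhaustively for small N; the following sketch (added here for illustration, not part of the original article) checks every partition:

```python
from itertools import combinations

def proizvolov_sum(subset, N):
    """Sum of |A_i - B_i| for the partition of {1, ..., 2N} given by subset."""
    A = sorted(subset)                                    # increasing order
    B = sorted(set(range(1, 2 * N + 1)) - set(subset),
               reverse=True)                              # decreasing order
    return sum(abs(a - b) for a, b in zip(A, B))

# Every choice of N numbers out of {1, ..., 2N} yields the sum N^2.
for N in range(1, 6):
    assert all(proizvolov_sum(S, N) == N * N
               for S in combinations(range(1, 2 * N + 1), N))
```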
diff --git a/wiki/wikipedia/3844.txt b/wiki/wikipedia/3844.txt deleted file mode 100644 index 29cfd25afc844d3c3185ba91fb0b90f1224d7ec1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3844.txt +++ /dev/null @@ -1,9 +0,0 @@ -The graph realization problem is a decision problem in graph theory. Given a finite sequence $(d_1,\dots,d_n)$ of natural numbers, the problem asks whether there is a labeled simple graph such that $(d_1,\dots,d_n)$ is the degree sequence of this graph. - -The problem can be solved in polynomial time. One method of showing this uses the Havel–Hakimi algorithm, which constructs a special solution recursively. Alternatively, following the characterization given by the Erdős–Gallai theorem, the problem can be solved by testing the validity of $n$ inequalities. - -The problem can also be stated in terms of symmetric matrices of zeros and ones. The connection can be seen if one realizes that each graph has an adjacency matrix where the column sums and row sums correspond to $(d_1,\ldots,d_n)$. The problem is then sometimes described as the existence of symmetric 0-1 matrices with given row sums. - -Similar problems describe the degree sequences of simple bipartite graphs or the degree sequences of simple directed graphs. The first problem is the so-called bipartite realization problem. The second is known as the digraph realization problem. - -The problem of constructing a solution for the graph realization problem with the additional constraint that each such solution comes with the same probability was shown to have a polynomial-time approximation scheme for the degree sequences of regular graphs by Cooper, Martin, and Greenhill. The general problem is still unsolved. diff --git a/wiki/wikipedia/3845.txt b/wiki/wikipedia/3845.txt deleted file mode 100644 index 84655bf6a603f156fcf2ce718a0f30f30a10576a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3845.txt +++ /dev/null @@ -1,57 +0,0 @@ -In the study of graph algorithms, Courcelle's theorem is the statement that every graph property definable in the monadic second-order logic of graphs can be decided in linear time on graphs of bounded treewidth. The result was first proved by Bruno Courcelle in 1990 and independently rediscovered by Borie. - -It is considered the archetype of algorithmic meta-theorems. - -In one variation of monadic second-order graph logic known as MSO1, the graph is described by a set of vertices and a binary adjacency relation $\operatorname{adj}(.,.)$, and the restriction to monadic logic means that the graph property in question may be defined in terms of sets of vertices of the given graph, but not in terms of sets of edges, or sets of tuples of vertices.
- -As an example, the property of a graph being colorable with three colors (represented by three sets of vertices $R$, $G$, and $B$) may be defined by the monadic second-order formula - - - -\begin{align} - -\exists R\ \exists G\ \exists B\ \Bigl( & \forall v\ (v\in R \vee v\in G \vee v\in B) \wedge{} \\ - -& \forall u\ \forall v\ \bigl( (u\in R\wedge v\in R)\rightarrow\lnot\operatorname{adj}(u,v) \bigr) \wedge{} \\ - -& \forall u\ \forall v\ \bigl( (u\in G\wedge v\in G)\rightarrow\lnot\operatorname{adj}(u,v) \bigr) \wedge{} \\ - -& \forall u\ \forall v\ \bigl( (u\in B\wedge v\in B)\rightarrow\lnot\operatorname{adj}(u,v) \bigr) \Bigr) - -\end{align} - - - -with the naming convention that uppercase variables denote sets of vertices and lowercase variables denote individual vertices (so that an explicit declaration of which is which can be omitted from the formula). The first part of this formula ensures that the three color classes cover all the vertices of the graph, and the rest ensures that they each form an independent set. (It would also be possible to add clauses to the formula to ensure that the three color classes are disjoint, but this makes no difference to the result.) Thus, by Courcelle's theorem, 3-colorability of graphs of bounded treewidth may be tested in linear time. - -For this variation of graph logic, Courcelle's theorem can be extended from treewidth to clique-width: for every fixed MSO1 property $\Pi$, and every fixed bound $b$ on the clique-width of a graph, there is a linear-time algorithm for testing whether a graph of clique-width at most $b$ has property $\Pi$. The original formulation of this result required the input graph to be given together with a construction proving that it has bounded clique-width, but later approximation algorithms for clique-width removed this requirement. - -Courcelle's theorem may also be used with a stronger variation of monadic second-order logic known as MSO2. In this formulation, a graph is represented by a set V of vertices, a set - -E of edges, and an incidence relation between vertices and edges. This variation allows quantification over sets of vertices or edges, but not over more complex relations on tuples of vertices or edges. - -For instance, the property of having a Hamiltonian cycle may be expressed in MSO2 by describing the cycle as a set of edges that includes exactly two edges incident to each vertex, such that every nonempty proper subset of vertices has an edge in the putative cycle with exactly one endpoint in the subset. However, Hamiltonicity cannot be expressed in MSO1. - -It is possible to apply the same results to graphs in which the vertices or edges have labels from a fixed finite set, either by augmenting the graph logic to incorporate predicates describing the labels, or by representing the labels by unquantified vertex set or edge set variables. - -Rather than bounding the time complexity of an algorithm that recognizes an MSO property on bounded-treewidth graphs, it is also possible to analyze the space complexity of such an algorithm; that is, the amount of memory needed above and beyond the size of the input itself (which is assumed to be represented in a read-only way so that its space requirements cannot be put to other purposes). - -In particular, it is possible to recognize the graphs of bounded treewidth, and any MSO property on these graphs, by a deterministic Turing machine that uses only logarithmic space. 
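For comparison with the MSO1 formula for 3-colorability given above, here is a brute-force counterpart (an illustrative sketch added here, exponential-time rather than the linear time promised by Courcelle's theorem): it searches over all assignments of vertices to the three color classes R, G, B, exactly as the formula quantifies over the three vertex sets:

```python
from itertools import product

def is_three_colorable(vertices, edges):
    """Try every assignment of vertices to the classes R, G, B and check
    that each class is an independent set (no edge inside a class)."""
    vertices = list(vertices)
    for assignment in product('RGB', repeat=len(vertices)):
        color = dict(zip(vertices, assignment))
        if all(color[u] != color[v] for u, v in edges):
            return True
    return False

# A 5-cycle is 3-colorable; the complete graph K4 is not.
assert is_three_colorable(range(5), [(i, (i + 1) % 5) for i in range(5)])
assert not is_three_colorable(range(4),
                              [(i, j) for i in range(4) for j in range(i)])
```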
- -The typical approach to proving Courcelle's theorem involves the construction of a finite bottom-up tree automaton that acts on the tree decompositions of the given graph. Courcelle conjectured a converse: that every family of graphs of bounded treewidth that can be recognized by such a tree automaton can also be defined in counting monadic second-order logic. Until 2016, only a few special cases were resolved: in particular, the conjecture had been proved for graphs of treewidth at most three, for k-connected graphs of treewidth k, for graphs of constant treewidth and chordality, and for k-outerplanar graphs. - -The general version of the conjecture was finally proved by Mikołaj Bojańczyk and Michał Pilipczuk. - -Moreover, for Halin graphs (a special case of treewidth three graphs) counting is not needed: for these graphs, every property that can be recognized by a tree automaton can also be defined in monadic second-order logic. The same is true more generally for certain classes of graphs in which a tree decomposition can itself be described in MSOL. However, it cannot be true for all graphs of bounded treewidth, because in general counting adds extra power over monadic second-order logic without counting. For instance, the graphs with an even number of vertices can be recognized using counting, but not without. - -The satisfiability problem for a formula of monadic second-order logic is the problem of determining whether there exists at least one graph (possibly within a restricted family of graphs) for which the formula is true. For arbitrary graph families, and arbitrary formulas, this problem is undecidable. However, satisfiability of MSO2 formulas is decidable for the graphs of bounded treewidth, and satisfiability of MSO1 formulas is decidable for graphs of bounded clique-width. The proof involves building a tree automaton for the formula and then testing whether the automaton has an accepting path. - -As a partial converse, Seese proved that, whenever a family of graphs has a decidable MSO2 satisfiability problem, the family must have bounded treewidth. The proof is based on a theorem of Robertson and Seymour that the families of graphs with unbounded treewidth have arbitrarily large grid minors. Seese also conjectured that every family of graphs with a decidable MSO1 satisfiability problem must have bounded clique-width; this has not been proven, but a weakening of the conjecture that replaces MSO1 by CMSO1 is true. - -Grohe used Courcelle's theorem to show that computing the crossing number of a graph G is fixed-parameter tractable with a quadratic dependence on the size of G, improving a cubic-time algorithm based on the Robertson–Seymour theorem. An additional later improvement to linear time by Kawarabayashi follows the same approach. If the given graph G has small treewidth, Courcelle's theorem can be applied directly to this problem. On the other hand, if G has large treewidth, then it contains a large grid minor, within which the graph can be simplified while leaving the crossing number unchanged. Grohe's algorithm performs these simplifications until the remaining graph has a small treewidth, and then applies Courcelle's theorem to solve the reduced subproblem. - -Gottlob observed that Courcelle's theorem applies to several problems of finding minimum multi-way cuts in a graph, when the structure formed by the graph and the set of cut pairs has bounded treewidth. As a result, they obtain a fixed-parameter tractable algorithm for these problems, parameterized by a single parameter, treewidth, improving previous solutions that had combined multiple parameters.
- -In computational topology, Burton and coauthors extend Courcelle's theorem from MSO2 to a form of monadic second-order logic on simplicial complexes of bounded dimension that allows quantification over simplices of any fixed dimension. As a consequence, they show how to compute certain quantum invariants of 3-manifolds as well as how to solve certain problems in discrete Morse theory efficiently, when the manifold has a triangulation (avoiding degenerate simplices) whose dual graph has small treewidth. - -Methods based on Courcelle's theorem have also been applied to database theory, knowledge representation and reasoning, automata theory, and model checking. diff --git a/wiki/wikipedia/3846.txt b/wiki/wikipedia/3846.txt deleted file mode 100644 index 103fb61fe2a2ed6bc157c70ee80b2800e84d6d54..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3846.txt +++ /dev/null @@ -1,133 +0,0 @@ -In mathematics, Thomae's formula is a formula introduced by Thomae relating theta constants to the branch points of a hyperelliptic curve. - -In 1824 the Abel–Ruffini theorem established that polynomial equations of degree five or higher could have no solutions in radicals. It then became clear to mathematicians that one needed to go beyond radicals in order to express the solutions to equations of the fifth and higher degrees. In 1858, Charles Hermite, Leopold Kronecker, and Francesco Brioschi independently discovered that the quintic equation could be solved with elliptic transcendents. This proved to be a generalization of the radical, which can be written as: -$$ -\sqrt[n]{x}=\exp \left({{\frac {1}{n}}\ln x}\right) = \exp \left(\frac{1}{n}\int^x_1\frac{dt}{t}\right). -$$ - -With the restriction to only this exponential, as shown by Galois theory, only compositions of Abelian extensions may be constructed, which suffices only for equations of the fourth degree and below. Something more general is required for equations of higher degree, so to solve the quintic, Hermite et al. replaced the exponential by an elliptic modular function and the integral (logarithm) by an elliptic integral. Kronecker believed that this was a special case of a still more general method. Camille Jordan showed that any algebraic equation may be solved by use of modular functions. This was accomplished by Thomae in 1870. The process involved replacing the exponential in the nth root and the elliptic modular function in the approach of Hermite et al. by still more general Siegel modular forms, and the integral by a hyperelliptic integral. Hiroshi Umemura expressed these modular functions in terms of higher genus theta functions.
- -If we have a polynomial function: -$$ -f(x) = a_0x^n + a_1x^{n-1} + \cdots + a_n -$$ - -with $a_0 \ne 0$ irreducible over a certain subfield of the complex numbers, then its roots $x_k$ may be expressed by the following equation involving theta functions of zero argument (theta constants): - - - -\begin{align} - -x_k = {} & \left[\theta\left( - -\begin{matrix} - -1 & 0 & \cdots & 0 \\ - -0 & \cdots & 0 & 0 - -\end{matrix} - -\right)(\Omega)\right]^4 \left[\theta\left( - -\begin{matrix} - -1 & 1 & 0 & \cdots & 0 \\ - -0 & \cdots & 0 & 0 & 0 - -\end{matrix} - -\right)(\Omega)\right]^4 \\[6pt] - -& {} + \left[\theta\left( - -\begin{matrix} - -0 & \cdots & 0 \\ - -0 & \cdots & 0 - -\end{matrix} - -\right)(\Omega)\right]^4 \left[\theta\left( - -\begin{matrix} - -0 & 1 & 0 & \cdots & 0 \\ - -0 & 0 & 0 & \cdots & 0 - -\end{matrix} - -\right)(\Omega)\right]^4 \\[6pt] - -& {} - \frac{\left[\theta\left( - -\begin{matrix} - -0 & 0 & \cdots & 0 \\ - -1 & 0 & \cdots & 0 - -\end{matrix} - -\right)(\Omega)\right]^4 \left[\theta\left( - -\begin{matrix} - -0 & 1 & 0 & \cdots & 0 \\ - -1 & 0 & \cdots & 0 & 0 - -\end{matrix} - -\right)(\Omega)\right]^4}{ - -2 \left[\theta\left( - -\begin{matrix} - -1 & 0 & \cdots & 0 \\ - -0 & \cdots & 0 & 0 - -\end{matrix} - -\right)(\Omega)\right]^4 \left[\theta\left( - -\begin{matrix} - -1 & 1 & 0 & \cdots & 0 \\ - -0 & 0 & \cdots & 0 & 0 - -\end{matrix} - -\right)(\Omega)\right]^4 - -} - -\end{align} - - - -where $\Omega$ is the period matrix derived from one of the following hyperelliptic integrals: - - - -u(a) = \int^a_1 \frac{dx}{\sqrt{x(x-1)f(x)}} - - - -if $f(x)$ is of odd degree, or - - - -u(a) = \int^a_1 \frac{dx}{\sqrt{x(x-1)(x-2)f(x)}} - - - -if $f(x)$ is of even degree. - -This formula applies to any algebraic equation of any degree without need for a Tschirnhaus transformation or any other manipulation to bring the equation into a specific normal form, such as the Bring–Jerrard form for the quintic. However, application of this formula in practice is difficult because the relevant hyperelliptic integrals and higher genus theta functions are very complex. diff --git a/wiki/wikipedia/3847.txt b/wiki/wikipedia/3847.txt deleted file mode 100644 index 63cd0676ea3169ed1762ac1261857bf3368e4764..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3847.txt +++ /dev/null @@ -1,65 +0,0 @@ -In mathematics, specifically in the calculus of variations, a variation δf of a function f can be concentrated on an arbitrarily small interval, but not a single point. - -Accordingly, the necessary condition of extremum (functional derivative equal zero) appears in a weak formulation (variational form) integrated with an arbitrary function δf. The fundamental lemma of the calculus of variations is typically used to transform this weak formulation into the strong formulation (differential equation), free of the integration with arbitrary function. The proof usually exploits the possibility to choose δf concentrated on an interval on which f keeps sign (positive or negative). Several versions of the lemma are in use. Basic versions are easy to formulate and prove. More powerful versions are used when needed. - -If a continuous function $f$ on an open interval $(a,b)$ satisfies the equality -$$ - \int_a^b f(x)h(x)\mathrm{d}x = 0 -$$ - -for all compactly supported smooth functions $h$ on $(a,b)$, then $f$ is identically zero. - -Here "smooth" may be interpreted as "infinitely differentiable", - -The special case for g = 0 is just the basic version. 
- -Here is the special case for f = 0 (often sufficient). - -If a continuous function g on an interval (a,b) satisfies the equality -$$ - \int_a^b g(x) h'(x) \mathrm{d}x = 0 -$$ - -for all smooth functions h on (a,b) such that $h(a)=h(b)=0$, then g is constant. - -If, in addition, continuous differentiability of g is assumed, then integration by parts reduces both statements to the basic version; this case is attributed to Joseph-Louis Lagrange, while the proof of differentiability of g is due to Paul du Bois-Reymond. - -The given functions (f, g) may be discontinuous, provided that they are locally integrable (on the given interval). In this case, Lebesgue integration is meant, the conclusions hold almost everywhere (thus, in all continuity points), and differentiability of g is interpreted as local absolute continuity (rather than continuous differentiability). Sometimes the given functions are assumed to be piecewise continuous, in which case Riemann integration suffices, and the conclusions are stated everywhere except the finite set of discontinuity points. - -This necessary condition is also sufficient, since the integrand becomes $ (u_0 h)' + (u_1 h')' + \dots + (u_{n-1} h^{(n-1)})'. $ - -The case n = 1 is just the version for two given functions, since $ f=f_0=u'_0 $ and $ f_1=u_0, $ thus, $ f_0-f'_1 = 0. $ - -In contrast, the case n=2 does not lead to the relation $ f_0 - f'_1 + f_2 = 0, $ since the function $ f_2 = u_1 $ need not be differentiable twice. The sufficient condition $ f_0 - f'_1 + f_2 = 0 $ is not necessary. Rather, the necessary and sufficient condition may be written as $ f_0 - (f_1 - f'_2)' = 0 $ for n=2, $ f_0 - (f_1 - (f_2-f'_3)')' = 0 $ for n=3, and so on; in general, the brackets cannot be opened because of non-differentiability. - -Generalization to vector-valued functions $(a,b)\to\mathbb{R}^d$ is straightforward; one applies the results for scalar functions to each coordinate separately, or treats the vector-valued case from the beginning. - -If a continuous multivariable function f on an open set $\Omega\subset\mathbb{R}^d$ satisfies the equality -$$ - \int_\Omega f(x) h(x) \mathrm{d}x = 0 -$$ - -for all compactly supported smooth functions h on Ω, then f is identically zero. - -Similarly to the basic version, one may consider a continuous function f on the closure of Ω, assuming that h vanishes on the boundary of Ω (rather than compactly supported). - -Here is a version for discontinuous multivariable functions. - -Let $\Omega\subset\mathbb{R}^d$ be an open set, and $f\in L^2(\Omega)$ satisfy the equality -$$ - \int_\Omega f(x) h(x) \mathrm{d}x = 0 -$$ - -for all compactly supported smooth functions h on Ω. Then f=0 (in L2, that is, almost everywhere). - -This lemma is used to prove that extrema of the functional -$$ - J[y] = \int_{x_0}^{x_1} L(t,y(t),\dot y(t)) \mathrm{d}t -$$ - -are weak solutions $y:[x_0,x_1]\to V$ (for an appropriate vector space $V$) of the Euler–Lagrange equation -$$ - {\partial L(t,y(t),\dot y(t)) \over \partial y} = {\mathrm{d} \over \mathrm{d}t} {\partial L(t,y(t),\dot y(t)) \over \partial \dot y} . -$$ - -The Euler–Lagrange equation plays a prominent role in classical mechanics and differential geometry. 
diff --git a/wiki/wikipedia/3848.txt b/wiki/wikipedia/3848.txt deleted file mode 100644 index f60fee33c77244b5c4d03e7d80f2a82e57d7bac3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3848.txt +++ /dev/null @@ -1,109 +0,0 @@ -In linear algebra, the Rouché–Capelli theorem determines the number of solutions for a system of linear equations, given the ranks of its coefficient matrix and of its augmented matrix. The theorem is variously known as the: - -* Rouché–Capelli theorem in English-speaking countries, Italy and Brazil; - -* Kronecker–Capelli theorem in Austria, Poland, Romania and Russia; - -* Rouché–Fontené theorem in France; - -* Rouché–Frobenius theorem in Spain and many countries in Latin America; - -* Frobenius theorem in the Czech Republic and in Slovakia. - -A system of linear equations with n variables has a solution if and only if the rank of its coefficient matrix A is equal to the rank of its augmented matrix [A|b]. If there are solutions, they form an affine subspace of $\mathbb{R}^n$ of dimension n − rank(A). In particular: - -* if n = rank(A), the solution is unique, - -* otherwise there are infinitely many solutions. - -Consider the system of equations - -x + y + 2z = 3, - -x + y + z = 1, - -2x + 2y + 2z = 2. - -The coefficient matrix is - - - -A = - -\begin{bmatrix} - -1 & 1 & 2 \\ - -1 & 1 & 1 \\ - -2 & 2 & 2 \\ - -\end{bmatrix}, - - - -and the augmented matrix is - - - -(A|B) = - -\left[\begin{array}{ccc|c} - -1 & 1 & 2 & 3\\ - -1 & 1 & 1 & 1 \\ - -2 & 2 & 2 & 2 - -\end{array}\right]. - - - -Since both of these have the same rank, namely 2, there exists at least one solution; and since their rank is less than the number of unknowns, the latter being 3, there are infinitely many solutions. - -In contrast, consider the system - -x + y + 2z = 3, - -x + y + z = 1, - -2x + 2y + 2z = 5. - -The coefficient matrix is - - - -A = - -\begin{bmatrix} - -1 & 1 & 2 \\ - -1 & 1 & 1 \\ - -2 & 2 & 2 \\ - -\end{bmatrix}, - - - -and the augmented matrix is - - - -(A|B) = - -\left[\begin{array}{ccc|c} - -1 & 1 & 2 & 3\\ - -1 & 1 & 1 & 1 \\ - -2 & 2 & 2 & 5 - -\end{array}\right]. - - - -In this example the coefficient matrix has rank 2, while the augmented matrix has rank 3; so this system of equations has no solution. Indeed, an increase in the number of linearly independent columns has made the system of equations inconsistent. diff --git a/wiki/wikipedia/3849.txt b/wiki/wikipedia/3849.txt deleted file mode 100644 index 06a350a6f81a0fea3372bb5a3318700ddf3a89ba..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3849.txt +++ /dev/null @@ -1,21 +0,0 @@ -In mathematics and logic, a corollary is a theorem of less importance which can be readily deduced from a previous, more notable statement. A corollary could for instance be a proposition which is incidentally proved while proving another proposition, while it could also be used more casually to refer to something which naturally or incidentally accompanies something else (e.g., violence as a corollary of revolutionary social changes). - -In mathematics, a corollary is a theorem connected by a short proof to an existing theorem. The use of the term corollary, rather than proposition or theorem, is intrinsically subjective. More formally, proposition B is a corollary of proposition A if B can be readily deduced from A or is self-evident from its proof.
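As a numerical companion to the Rouché–Capelli examples above, the two ranks can be compared directly; a quick sketch using numpy (not part of the original article):

import numpy as np

A = np.array([[1, 1, 2], [1, 1, 1], [2, 2, 2]])

for b in ([3, 1, 2], [3, 1, 5]):
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    # Consistent iff rank(A) == rank([A|b]); the solution is unique iff that rank equals n.
    print(rank_A, rank_Ab, "consistent" if rank_A == rank_Ab else "inconsistent")

This prints 2 2 consistent for the first system and 2 3 inconsistent for the second, matching the analysis above.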
- -In many cases, a corollary corresponds to a special case of a larger theorem, which makes the theorem easier to use and apply, even though its importance is generally considered to be secondary to that of the theorem. In particular, B is unlikely to be termed a corollary if its mathematical consequences are as significant as those of A. A corollary might have a proof that explains its derivation, even though such a derivation might be considered rather self-evident on some occasions (e.g., the Pythagorean theorem as a corollary of the law of cosines). - -Charles Sanders Peirce held that the most important division of kinds of deductive reasoning is that between corollarial and theorematic. He argued that while all deduction ultimately depends in one way or another on mental experimentation on schemata or diagrams, in corollarial deduction: - -"it is only necessary to imagine any case in which the premises are true in order to perceive immediately that the conclusion holds in that case" - -while in theorematic deduction: - -"It is necessary to experiment in the imagination upon the image of the premise in order from the result of such experiment to make corollarial deductions to the truth of the conclusion." - -Peirce also held that corollarial deduction matches Aristotle's conception of direct demonstration, which Aristotle regarded as the only thoroughly satisfactory demonstration, while theorematic deduction is: - -# The kind more prized by mathematicians - -# Peculiar to mathematics - -# Involves in its course the introduction of a lemma or at least a definition uncontemplated in the thesis (the proposition that is to be proved); in remarkable cases that definition is of an abstraction that "ought to be supported by a proper postulate." diff --git a/wiki/wikipedia/385.txt b/wiki/wikipedia/385.txt deleted file mode 100644 index 51ead5f592fa2841b5df602e9219210c4ece6274..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/385.txt +++ /dev/null @@ -1,7 +0,0 @@ -In polyhedral combinatorics, a branch of mathematics, Balinski's theorem is a statement about the graph-theoretic structure of three-dimensional convex polyhedra and higher-dimensional convex polytopes. It states that, if one forms an undirected graph from the vertices and edges of a d-dimensional convex polyhedron or polytope (its skeleton), then the resulting graph is at least d-vertex-connected: the removal of any d - 1 vertices leaves a connected subgraph. For instance, for a three-dimensional polyhedron, even if two of its vertices (together with their incident edges) are removed, for any pair of vertices there will still exist a path of vertices and edges connecting the pair. - -Balinski's theorem is named after mathematician Michel Balinski, who published its proof in 1961, although the three-dimensional case dates back to the earlier part of the 20th century and the discovery of Steinitz's theorem that the graphs of three-dimensional polyhedra are exactly the three-connected planar graphs. - -Balinski proves the result based on the correctness of the simplex method for finding the minimum or maximum of a linear function on a convex polytope (the linear programming problem). The simplex method starts at an arbitrary vertex of the polytope and repeatedly moves towards an adjacent vertex that improves the function value; when no improvement can be made, the optimal function value has been reached.
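To make the vertex-to-vertex improvement underlying this argument concrete, here is a toy Python sketch (illustrative code, not from Balinski's paper) that walks the skeleton of the 3-dimensional cube toward the maximum of a linear function, moving to a better neighbor at each step:

def climb(v, f):
    """Walk the graph of the 3-cube: repeatedly move to a neighboring
    vertex (differing in one coordinate) that strictly improves f."""
    while True:
        neighbors = [v[:i] + (1 - v[i],) + v[i + 1:] for i in range(3)]
        better = [w for w in neighbors if f(w) > f(v)]
        if not better:
            return v  # no improving neighbor: for linear f this is the optimum
        v = max(better, key=f)

f = lambda v: 3 * v[0] + 2 * v[1] + v[2]
print(climb((0, 0, 0), f))  # (1, 1, 1), the maximizing vertex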
-If S is a set of fewer than d vertices to be removed from the graph of the polytope, Balinski adds one more vertex v0 to S and finds a linear function ƒ that has the value zero on the augmented set but is not identically zero on the whole space. Then, any remaining vertex at which ƒ is non-negative (including v0) can be connected by simplex steps to the vertex with the maximum value of ƒ, while any remaining vertex at which ƒ is non-positive (again including v0) can be similarly connected to the vertex with the minimum value of ƒ. Therefore, the entire remaining graph is connected. diff --git a/wiki/wikipedia/3850.txt b/wiki/wikipedia/3850.txt deleted file mode 100644 index 325bc8fc6c895b69b07916b0078fe14915a2b800..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3850.txt +++ /dev/null @@ -1,88 +0,0 @@ -In mathematics, Wolstenholme's theorem states that for a prime number $p \geq 5$, the congruence -$$ -{2p-1 \choose p-1} \equiv 1 \pmod{p^3} -$$ - -holds, where the parentheses denote a binomial coefficient. For example, with p = 7, this says that 1716 is one more than a multiple of 343. The theorem was first proved by Joseph Wolstenholme in 1862. In 1819, Charles Babbage showed the same congruence modulo $p^2$, which holds for $p \geq 3$. An equivalent formulation is the congruence -$$ -{ap \choose bp} \equiv {a \choose b} \pmod{p^3} -$$ - -for $p \geq 5$, which is due to Wilhelm Ljunggren (and, in the special case $b = 1$, to J. W. L. Glaisher) and is inspired by Lucas' theorem. - -No known composite numbers satisfy Wolstenholme's theorem and it is conjectured that there are none (see below). A prime that satisfies the congruence modulo $p^4$ is called a Wolstenholme prime (see below). - -As Wolstenholme himself established, his theorem can also be expressed as a pair of congruences for (generalized) harmonic numbers: -$$ -1+{1 \over 2}+{1 \over 3}+...+{1 \over p-1} \equiv 0 \pmod{p^2} \mbox{, and} -$$ -$$ -1+{1 \over 2^2}+{1 \over 3^2}+...+{1 \over (p-1)^2} \equiv 0 \pmod p. -$$ - -(Congruences with fractions make sense, provided that the denominators are coprime to the modulus.) - -For example, with p=7, the first of these says that the numerator of 49/20 is a multiple of 49, while the second says the numerator of 5369/3600 is a multiple of 7. - -A prime p is called a Wolstenholme prime iff the following condition holds: -$$ -{{2p-1}\choose{p-1}} \equiv 1 \pmod{p^4}. -$$ - -If p is a Wolstenholme prime, then Glaisher's theorem holds modulo $p^4$. The only known Wolstenholme primes so far are 16843 and 2124679; any other Wolstenholme prime must be greater than $10^9$. This result is consistent with the heuristic argument that the residue modulo $p^4$ is a pseudo-random multiple of $p^3$. This heuristic predicts that the number of Wolstenholme primes between K and N is roughly ln ln N − ln ln K. The Wolstenholme condition has been checked up to $10^9$, and the heuristic says that there should be roughly one Wolstenholme prime between $10^9$ and $10^{24}$. A similar heuristic predicts that there are no "doubly Wolstenholme" primes, for which the congruence would hold modulo $p^5$. - -There is more than one way to prove Wolstenholme's theorem. Here is a proof that directly establishes Glaisher's version using both combinatorics and algebra. - -For the moment let p be any prime, and let a and b be any non-negative integers. Then a set A with ap elements can be divided into a rings of length p, and the rings can be rotated separately.
Thus, the a-fold direct sum of the cyclic group of order p acts on the set A, and by extension it acts on the set of subsets of size bp. Every orbit of this group action has $p^k$ elements, where k is the number of incomplete rings, i.e., the number of rings that only partly intersect a subset B in the orbit. There are $\textstyle {a \choose b}$ orbits of size 1 and there are no orbits of size p. Thus we first obtain Babbage's theorem -$$ -{ap \choose bp} \equiv {a \choose b} \pmod{p^2}. -$$ - -Examining the orbits of size $p^2$, we also obtain -$$ -{ap \choose bp} \equiv {a \choose b} + {a \choose 2}\left({2p \choose p} - 2\right){a -2 \choose b-1} \pmod{p^3}. -$$ - -Among other consequences, this equation tells us that the case a=2 and b=1 implies the general case of the second form of Wolstenholme's theorem. - -Switching from combinatorics to algebra, both sides of this congruence are polynomials in a for each fixed value of b. The congruence therefore holds when a is any integer, positive or negative, provided that b is a fixed positive integer. In particular, if a=-1 and b=1, the congruence becomes -$$ -{-p \choose p} \equiv {-1 \choose 1} + {-1 \choose 2}\left({2p \choose p} - 2\right) \pmod{p^3}. -$$ - -This congruence becomes an equation for $\textstyle {2p \choose p}$ using the relation -$$ -{-p \choose p} = \frac{(-1)^p}2{2p \choose p}. -$$ - -When p is odd, the relation is -$$ -3{2p \choose p} \equiv 6 \pmod{p^3}. -$$ - -When p≠3, we can divide both sides by 3 to complete the argument. - -A similar derivation modulo $p^4$ establishes that -$$ -{ap \choose bp} \equiv {a \choose b} \pmod{p^4} -$$ - -for all positive a and b if and only if it holds when a=2 and b=1, i.e., if and only if p is a Wolstenholme prime. - -It is conjectured that if - -$$ -{2n-1 \choose n-1} \equiv 1 \pmod{n^k} \qquad (1) -$$ - -holds when k=3, then n is prime. The conjecture can be understood by considering k = 1 and 2 as well as 3. When k = 1, Babbage's theorem implies that it holds for $n = p^2$ for p an odd prime, while Wolstenholme's theorem implies that it holds for $n = p^3$ for p > 3, and it holds for $n = p^4$ if p is a Wolstenholme prime. When k = 2, it holds for $n = p^2$ if p is a Wolstenholme prime. The three numbers $4 = 2^2$, $8 = 2^3$, and $27 = 3^3$ do not satisfy (1) with k = 1, but all other prime squares and prime cubes do satisfy (1) with k = 1. Only 5 other composite values (neither prime square nor prime cube) of n are known to satisfy (1) with k = 1; they are called Wolstenholme pseudoprimes, and they are - -27173, 2001341, 16024189487, 80478114820849201, 20378551049298456998947681, ... - -The first three are not prime powers; the last two are $16843^4$ and $2124679^4$, where 16843 and 2124679 are Wolstenholme primes. Besides, with the exception of $16843^2$ and $2124679^2$, no composites are known to satisfy (1) with k = 2, much less k = 3. Thus the conjecture is considered likely because Wolstenholme's congruence seems over-constrained and artificial for composite numbers. Moreover, if the congruence does hold for any particular n other than a prime or prime power, and any particular k, it does not imply that -$$ -{an \choose bn} \equiv {a \choose b} \pmod{n^k}. -$$ - -Leudesdorf has proved that for a positive integer n coprime to 6, the following congruence holds: -$$ - \sum_{i=1\atop (i,n)=1}^{n-1} \frac{1}{i} \equiv 0\pmod{n^2}.
-$$ diff --git a/wiki/wikipedia/3851.txt b/wiki/wikipedia/3851.txt deleted file mode 100644 index b3d25a41b0a3ecf429966dafd20d7bb4c115f9e8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3851.txt +++ /dev/null @@ -1,19 +0,0 @@ -Kronecker's Jugendtraum or Hilbert's twelfth problem, of the 23 mathematical Hilbert problems, is the extension of the Kronecker–Weber theorem on abelian extensions of the rational numbers, to any base number field. That is, it asks for analogues of the roots of unity, as complex numbers that are particular values of the exponential function; the requirement is that such numbers should generate a whole family of further number fields that are analogues of the cyclotomic fields and their subfields. - -The classical theory of complex multiplication, now often known as the Kronecker Jugendtraum, does this for the case of any imaginary quadratic field, by using modular functions and elliptic functions chosen with a particular period lattice related to the field in question. Goro Shimura extended this to CM fields. The general case is still open. Leopold Kronecker described the complex multiplication issue as his liebster Jugendtraum, or “dearest dream of his youth”. - -The fundamental problem of algebraic number theory is to describe the fields of algebraic numbers. The work of Galois made it clear that field extensions are controlled by certain groups, the Galois groups. The simplest situation, which is already at the boundary of what is well-understood, is when the group in question is abelian. All quadratic extensions, obtained by adjoining the roots of a quadratic polynomial, are abelian, and their study was commenced by Gauss. Another type of abelian extension of the field Q of rational numbers is given by adjoining the nth roots of unity, resulting in the cyclotomic fields. Already Gauss had shown that, in fact, every quadratic field is contained in a larger cyclotomic field. The Kronecker–Weber theorem shows that any finite abelian extension of Q is contained in a cyclotomic field. Kronecker's (and Hilbert's) question addresses the situation of a more general algebraic number field K: what are the algebraic numbers necessary to construct all abelian extensions of K? The answer to this question has been completely worked out only when K is an imaginary quadratic field or its generalization, a CM-field. - -Hilbert's original statement of his 12th problem is rather misleading: he seems to imply that the abelian extensions of imaginary quadratic fields are generated by special values of elliptic modular functions, which is not correct. (It is hard to tell exactly what Hilbert was saying; one problem is that he may have been using the term "elliptic function" to mean both the elliptic function ℘ and the elliptic modular function j.) - -First it is also necessary to use roots of unity, though Hilbert may have implicitly meant to include these. More seriously, while values of elliptic modular functions generate the Hilbert class field, for more general abelian extensions one also needs to use values of elliptic functions. For example, the abelian extension $\mathbf{Q}(i,\sqrt[4]{1+2i})/\mathbf{Q}(i)$ is not generated by singular moduli and roots of unity. - -One particularly appealing way to state the Kronecker–Weber theorem is by saying that the maximal abelian extension of Q can be obtained by adjoining the special values exp(2πi/n) of the exponential function.
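This can even be checked numerically in small cases; for instance, the quadratic irrationality $\sqrt{5}$ already lies in the cyclotomic field generated by exp(2πi/5), as the classical Gauss sum shows (a Python sketch, illustrative only):

import cmath

# Gauss sum g = sum of exp(2*pi*i*k^2/5): a sum of 5th roots of unity
# that equals sqrt(5), exhibiting sqrt(5) inside the cyclotomic field.
g = sum(cmath.exp(2j * cmath.pi * k * k / 5) for k in range(5))
print(abs(g - 5 ** 0.5) < 1e-12)  # True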
Similarly, the theory of complex multiplication shows that the maximal abelian extension of Q(τ), where τ is an imaginary quadratic irrationality, can be obtained by adjoining the special values ℘(τ,z) of elliptic functions and j(τ) of modular functions, together with roots of unity, where τ is in the imaginary quadratic field and z represents a torsion point on the corresponding elliptic curve. One interpretation of Hilbert's twelfth problem asks to provide a suitable analogue of exponential, elliptic, or modular functions, whose special values would generate the maximal abelian extension $K^{ab}$ of a general number field K. In this form, it remains unsolved. A description of the field $K^{ab}$ was obtained in class field theory, developed by Hilbert himself, Emil Artin, and others in the first half of the 20th century. However, the construction of $K^{ab}$ in class field theory involves first constructing larger non-abelian extensions using Kummer theory, and then cutting down to the abelian extensions, so it does not really solve Hilbert's problem, which asks for a more direct construction of the abelian extensions. - -Developments since around 1960 have certainly contributed. Before that, Hecke in his dissertation used Hilbert modular forms to study abelian extensions of real quadratic fields. Complex multiplication of abelian varieties was an area opened up by the work of Shimura and Taniyama. This gives rise to abelian extensions of CM-fields in general. The question of which extensions can be found is that of the Tate modules of such varieties, as Galois representations. Since this is the most accessible case of l-adic cohomology, these representations have been studied in depth. - -Robert Langlands argued in 1973 that the modern version of the Jugendtraum should deal with Hasse–Weil zeta functions of Shimura varieties. While he envisaged a grandiose program that would take the subject much further, more than thirty years later serious doubts remain concerning its import for the question that Hilbert asked. - -A separate development was Stark's conjecture (Harold Stark), which in contrast dealt directly with the question of finding interesting, particular units in number fields. This has seen a large conjectural development for L-functions, and is also capable of producing concrete, numerical results. A p-adic solution for totally real fields was announced by Dasgupta and Kakde, and for the special case of real quadratic fields by Darmon, Pozzi and Vonk, in March 2021. diff --git a/wiki/wikipedia/3852.txt b/wiki/wikipedia/3852.txt deleted file mode 100644 index 04d414292d441daf374d8808983fd975109d4ed4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3852.txt +++ /dev/null @@ -1,11 +0,0 @@ -Picross 3D, known in Japan as Rittai Picross, is a puzzle video game developed by HAL Laboratory and published by Nintendo for the Nintendo DS. It was released in Japan in March 2009, in Europe in March 2010, and in North America in May 2010. It uses nonogram mechanics similar to those of Picross DS, but in three dimensions. Outside Japan, the game is part of Nintendo's Touch! Generations brand. A sequel, Picross 3D: Round 2, was released for the Nintendo 3DS in Japan in October 2015, in North America in September 2016, and in Europe and Australia in December 2016. - -While Picross presents a rectangular grid of squares, which must be filled in to create a picture, Picross 3D uses a rectangular prism made of a number of smaller cubes, which must be chipped away in order to construct an image in three dimensions.
Each row or column has at most one number corresponding to it, and many do not have any numbers at all; the number indicates how many cubes the row/column should contain when the picture is complete. Generally each row of cubes will be in one group. If a number has a circle around it, it means that, while that number of cubes is the total amount in the row/column, they will be split up into two groups (for example, a 4 with a circle around it will be in two groups of either 1 and 3 or 2 and 2). If a number has a square around it, the cubes will be split up into three or more groups. If there are no numbers on a row or column, there are no rules regarding the cubes in that row. - -A paintbrush may be used to mark cubes that definitely will remain in the image (which also prevents them from being broken accidentally), or a hammer to chip away at the unnecessary cubes. If an attempt is made to break a cube that is part of the image, the game will count this as a strike. If the player receives five strikes, the player is out and must start the puzzle over again. Players will also need to complete the stage within a certain amount of time. Early on, the game will show players "technique" tutorials to help them refine their puzzle-solving skills. - -Once a 3D image is completed, the player will be rewarded with a short animation regarding the image, and it will be placed into a themed collection, encouraging players to complete each collection. During standard puzzles, up to three stars are awarded based on the player's performance (completing the puzzle within a certain time limit and with no strikes earns maximum stars) which unlock additional bonus puzzles in each level. In between certain levels are "One-Chance Challenges", where players make just one mistake and they're out, "Time Challenges", which require players to quickly chip away cubes to extend their time limit, and "Construction Challenges" in which multiple puzzles are completed to form one giant image. Bonus animations can also be unlocked under certain conditions, such as completing each difficulty setting. - -Outside of the main mode, players could also download additional puzzles via the Nintendo Wi-Fi Connection, as well as create their own puzzles, which can be shared with friends or entered into themed challenges. While it is no longer possible to use the Nintendo WFC for downloading or sharing puzzles since it was shut down in 2014, it is still possible to send and receive puzzles locally using wireless transfer. - -The game was the third best-selling game during the week of its release in Japan at 38,000 copies. It sold an additional 29,000 copies and 16,000 copies in its second and third weeks, respectively. diff --git a/wiki/wikipedia/3853.txt b/wiki/wikipedia/3853.txt deleted file mode 100644 index 07f19ec6c85ef6958648e7b2fcca0df6345e3f46..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3853.txt +++ /dev/null @@ -1,21 +0,0 @@ -In game theory, Zermelo's theorem is a theorem about finite two-person games of perfect information in which the players move alternately and in which chance does not affect the decision making process. It says that if the game cannot end in a draw, then one of the two players must have a winning strategy (i.e. force a win). An alternate statement is that for a game meeting all of these conditions except the condition that a draw is not possible, either the first player can force a win, the second player can force a win, or both players can force a draw.
- -The theorem is named after Ernst Zermelo, a German mathematician and logician. - -Zermelo's work shows that in two-person zero-sum games with perfect information, if a player is in a winning position, then they can always force a win no matter what strategy the other player may employ. Furthermore, and as a consequence, if a player is in a winning position, it will never require more moves than there are positions in the game (with a position defined as the position of the pieces as well as the player next to move). - -Zermelo considers the class of two-person games without chance, where players have strictly opposing interests and where only a finite number of positions are possible. Although in the game only finitely many positions are possible, Zermelo allows infinite sequences of moves since he does not consider stopping rules. Thus, he allows for the possibility of infinite games. Then he addresses two problems: - -# What does it mean for a player to be in a 'winning' position and is it possible to define this in an objective mathematical manner? - -# If the player is in a winning position, can the number of moves needed to force the win be determined? - -To answer the first question, Zermelo states that a necessary and sufficient condition is the nonemptiness of a certain set, containing all possible sequences of moves such that a player wins independently of how the other player plays. But should this set be empty, the best a player could achieve would be a draw. So he defines another set containing all possible sequences of moves such that a player can postpone her loss for an infinite number of moves, which implies a draw. This set may also be empty, i.e., the player can avoid her loss for only finitely many moves if her opponent plays correctly. But this is equivalent to the opponent being able to force a win. This is the basis for all modern versions of Zermelo's theorem. - -Regarding the second question, Zermelo claimed that it will never take more moves than there are positions in the game. His proof is a proof by contradiction: Assume that a player can win in a number of moves larger than the number of positions. Of course, at least one winning position must have appeared twice. So the player could have played at the first occurrence in the same way as he does at the second and thus could have won in fewer moves than there are positions. - -In 1927, the Hungarian mathematician Dénes Kőnig revised Zermelo's paper and pointed out some gaps in the original work. First of all, Kőnig argues that Zermelo did not prove that a player, for example White, who is in a winning position is always able to force a win in a number of moves smaller than the number of positions in the game. Zermelo argued that White can change its behaviour at the first possibility of any related winning position and win without repetition. However, Kőnig maintains that this argument is not correct as it is not enough to reduce the number of moves in a single game below the number of possible positions. Thus, Zermelo claimed, but did not show, that a winning player can always win without repetition. The second objection by Kőnig is that the strategy 'do the same at the first occurrence of a position as at the second and thus win in fewer moves' cannot be applied if it is Black's turn to move in this position. However, this argument is not correct because Zermelo considered two positions as different depending on whether Black or White is to move.
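On finite game trees the trichotomy can be computed directly by backward induction (which, as discussed below, was not Zermelo's own method of proof); a minimal Python sketch over a hypothetical toy tree:

def value(node, first_player_moves=True):
    """Backward induction: a node is either a terminal payoff (+1 first-player
    win, -1 second-player win, 0 draw) or a list of child nodes; each player
    picks the child optimal for them, so the root value identifies who can
    force what."""
    if isinstance(node, int):
        return node
    children = [value(child, not first_player_moves) for child in node]
    return max(children) if first_player_moves else min(children)

tree = [[1, 1], [-1, 0]]
print(value(tree))  # 1: the first player can force a win via the left move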
- -When applied to chess, Zermelo's Theorem states "either White can force a win, or Black can force a win, or both sides can force at least a draw". - -It has been believed that Zermelo used backward induction as his method of proof. However, recent research on Zermelo's theorem demonstrates that backward induction was not used to explain the strategy behind chess. Contrary to popular belief, chess is not a finite game; strictly speaking, chess is an infinite game, and therefore backward induction does not provide the minimax theorem for this game. diff --git a/wiki/wikipedia/3854.txt b/wiki/wikipedia/3854.txt deleted file mode 100644 index 12b99b9779050341712625b2e3917e0dddb4479a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3854.txt +++ /dev/null @@ -1,45 +0,0 @@ -Seidel's algorithm is an algorithm designed by Raimund Seidel in 1992 for the all-pairs-shortest-path problem for undirected, unweighted, connected graphs. It solves the problem in $O(V^\omega \log V)$ expected time for a graph with $V$ vertices, where $\omega < 2.373$ is the exponent in the complexity $O(n^\omega)$ of $n \times n$ matrix multiplication. If only the distances between each pair of vertices are sought, the same time bound can be achieved in the worst case. Even though the algorithm is designed for connected graphs, it can be applied individually to each connected component of a graph with the same running time overall. There is an exception to the expected running time given above for computing the paths: if $\omega = 2$ the expected running time becomes $O(V^2 \log^2 V)$. - -The core of the algorithm is a procedure that computes the lengths of the shortest paths between any pair of vertices. - -This can be done in $O(V^\omega \log V)$ time in the worst case. Once the lengths are computed, the paths can be reconstructed using a Las Vegas algorithm whose expected running time is $O(V^\omega \log V)$ for $\omega > 2$ and $O(V^2 \log^2 V)$ for $\omega = 2$. - -The Python code below assumes the input graph is given as an $n\times n$ $0$-$1$ adjacency matrix $A$ with zeros on the diagonal. It defines the function APD which returns a matrix with entries $D_{i,j}$ such that $D_{i,j}$ is the length of the shortest path between the vertices $i$ and $j$. The matrix class used can be any matrix class implementation supporting the multiplication, exponentiation, and indexing operators (for example ). - - - -def apd(A, n: int): - -    """Compute the shortest-paths lengths.""" - -    if all(A[i][j] for i in range(n) for j in range(n) if i != j): - -        return A  # base case: complete graph, every distance is 1 - -    Z = A ** 2  # Z[i][j] counts walks of length two from i to j - -    B = matrix([ - -        [1 if i != j and (A[i][j] == 1 or Z[i][j] > 0) else 0 for j in range(n)] - -        for i in range(n)])  # adjacency matrix of the "square" graph (distance at most 2) - -    T = apd(B, n)  # recurse: T[i][j] is the distance in the square graph, i.e. ceil(D[i][j]/2) - -    X = T*A - -    degree = [sum(A[i][j] for j in range(n)) for i in range(n)] - -    D = matrix([ - -        [2 * T[i][j] if X[i][j] >= T[i][j] * degree[j] else 2 * T[i][j] - 1 for j in range(n)] - -        for i in range(n)])  # parity correction: D[i][j] is 2*T[i][j] or 2*T[i][j] - 1 - -    return D - - - -The base case tests whether the input adjacency matrix describes a complete graph, in which case all shortest paths have length $1$. - -Algorithms for undirected and directed graphs with weights from a finite universe $\{1,\ldots,M,+\infty\}$ also exist. The best known algorithm for the directed case is in time $\tilde{O}(M^{1/(4-\omega)} V^{2+1/(4-\omega)})$ by Zwick in 1998. This algorithm uses rectangular matrix multiplication instead of square matrix multiplication.
Better upper bounds can be obtained if one uses the best rectangular matrix multiplication algorithm available instead of achieving rectangular multiplication via multiple square matrix multiplications. The best known algorithm for the undirected case is in time $\tilde{O}(MV^\omega)$ by Shoshan and Zwick in 1999. The original implementation of this algorithm was erroneous and has been corrected by Eirinakis, Williamson, and Subramani in 2016. diff --git a/wiki/wikipedia/3855.txt b/wiki/wikipedia/3855.txt deleted file mode 100644 index c772eddff0082acfcff8ea9601a70fd818901aea..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3855.txt +++ /dev/null @@ -1,86 +0,0 @@ -In additive combinatorics, Freiman's theorem is a central result which indicates the approximate structure of sets whose sumset is small. It roughly states that if $|A+A|/|A|$ is small, then $A$ can be contained in a small generalized arithmetic progression. - -If $A$ is a finite subset of $\mathbb{Z}$ with $|A+A|\le K|A|$, then $A$ is contained in a generalized arithmetic progression of dimension at most $d(K)$ and size at most $f(K)|A|$, where $d(K)$ and $f(K)$ are constants depending only on $K$. - -For a finite set $A$ of integers, it is always true that -$$ -|A + A| \ge 2|A|-1, -$$ - -with equality precisely when $A$ is an arithmetic progression. - -More generally, suppose $A$ is a subset of a finite proper generalized arithmetic progression $P$ of dimension $d$ such that $|P| \le C|A|$ for some real $C \ge 1$. Then $|P+P| \le 2^d |P|$, so that -$$ -|A+A| \le |P+P| \le 2^d |P| \le C2^d|A|. -$$ - -This result is due to Gregory Freiman (1964, 1966). Much interest in it, and applications, stemmed from a new proof by Imre Z. Ruzsa (1994). Mei-Chu Chang proved new polynomial estimates for the size of arithmetic progressions arising in the theorem in 2002. The current best bounds were provided by Tom Sanders. - -The proof presented here follows the proof in Yufei Zhao's lecture notes. - -The Ruzsa covering lemma states the following: - -Let $A$ and $S$ be finite subsets of an abelian group with $S$ nonempty, and let $K$ be a positive real number. Then if $|A+S| \le K|S|$, there is a subset $T$ of $A$ with at most $K$ elements such that $A \subseteq T+S-S$. - -This lemma provides a bound on how many copies of $S-S$ one needs to cover $A$, hence the name. The proof is essentially a greedy algorithm: - -Proof: Let $T$ be a maximal subset of $A$ such that the sets $t+S$ for $t \in T$ are all disjoint. Then $|T+S|=|T| \cdot |S|$, and also $|T+S| \le |A+S| \le K|S|$, so $|T| \le K$. Furthermore, for any $a \in A$, there is some $t \in T$ such that $t+S$ intersects $a+S$, as otherwise adding $a$ to $T$ contradicts the maximality of $T$. Thus $a \in T+S-S$, so $A \subseteq T+S-S$. - -Let $s \ge 2$ be a positive integer, and $\Gamma$ and $\Gamma'$ be abelian groups. Let $A \subseteq \Gamma$ and $B \subseteq \Gamma'$. A map $\varphi \colon A \to B$ is a Freiman $s$-homomorphism if -$$ -\varphi(a_1) + \cdots + \varphi(a_s) = \varphi(a_1') + \cdots + \varphi(a_s') -$$ - -whenever $a_1 + \cdots + a_s = a_1' + \cdots + a_s'$ for any $a_1, \ldots, a_s, a_1', \ldots, a_s' \in A$. - -If in addition $\varphi$ is a bijection and $\varphi^{-1} \colon B \to A$ is a Freiman $s$-homomorphism, then $\varphi$ is a Freiman $s$-isomorphism. - -If $\varphi$ is a Freiman $s$-homomorphism, then $\varphi$ is a Freiman $t$-homomorphism for any positive integer $t$ such that $2\le t \le s$.
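The greedy construction in the covering-lemma proof above is easy to run on small examples; a toy Python sketch (the sets are illustrative, not from the article):

def ruzsa_cover(A, S):
    """Greedily build a maximal T within A whose translates t + S are
    pairwise disjoint; the covering lemma then gives A inside T + S - S."""
    T, used = set(), set()
    for a in sorted(A):
        translate = {a + s for s in S}
        if translate.isdisjoint(used):
            T.add(a)
            used |= translate
    return T

A, S = {0, 1, 2, 3, 10, 11}, {0, 1}
T = ruzsa_cover(A, S)
cover = {t + s1 - s2 for t in T for s1 in S for s2 in S}
print(sorted(T), A <= cover)  # [0, 2, 10] True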
- -Then the Ruzsa modeling lemma states the following: - -Let $A$ be a finite set of integers, and let $s \ge 2$ be a positive integer. Let $N$ be a positive integer such that $N \ge |sA-sA|$. Then there exists a subset $A'$ of $A$ with cardinality at least $|A|/s$ such that $A'$ is Freiman $s$-isomorphic to a subset of $\mathbb{Z}/N\mathbb{Z}$. - -The last statement means there exists some Freiman $s$-isomorphism between the two subsets. - -Proof sketch: Choose a prime $q$ sufficiently large such that the modulo-$q$ reduction map $\pi_q \colon \mathbb{Z} \to \mathbb{Z}/q\mathbb{Z}$ is a Freiman $s$-isomorphism from $A$ to its image in $\mathbb{Z}/q\mathbb{Z}$. Let $\psi_q \colon \mathbb{Z}/q\mathbb{Z} \to \mathbb{Z}$ be the lifting map that takes each member of $\mathbb{Z}/q\mathbb{Z}$ to its unique representative in $\{1,\ldots,q\} \subseteq \mathbb{Z}$. For nonzero $\lambda \in \mathbb{Z}/q\mathbb{Z}$, let $\cdot\lambda \colon \mathbb{Z}/q\mathbb{Z} \to \mathbb{Z}/q\mathbb{Z}$ be the multiplication by $\lambda$ map, which is a Freiman $s$-isomorphism. Let $B$ be the image $(\cdot\lambda\circ\pi_q)(A)$. Choose a suitable subset $B'$ of $B$ with cardinality at least $|B|/s$ such that the restriction of $\psi_q$ to $B'$ is a Freiman $s$-isomorphism onto its image, and let $A' \subseteq A$ be the preimage of $B'$ under $\cdot\lambda\circ\pi_q$. Then the restriction of $\psi_q \circ \cdot\lambda \circ \pi_q$ to $A'$ is a Freiman $s$-isomorphism onto its image $\psi_q(B')$. Lastly, there exists some choice of nonzero $\lambda$ such that the restriction of the modulo-$N$ reduction $\mathbb{Z} \to \mathbb{Z}/N\mathbb{Z}$ to $\psi_q(B')$ is a Freiman $s$-isomorphism onto its image. The result follows after composing this map with the earlier Freiman $s$-isomorphism. - -Though Freiman's theorem applies to sets of integers, the Ruzsa modeling lemma allows one to model sets of integers as subsets of finite cyclic groups. So it is useful to first work in the setting of a finite field, and then generalize results to the integers. The following lemma was proved by Bogolyubov: - -Let $A \subseteq \mathbb{F}_2^n$ and let $\alpha= |A|/2^n$. Then $4A$ contains a subspace of $\mathbb{F}_2^n$ of dimension at least $n-\alpha^{-2}$. - -Generalizing this lemma to arbitrary cyclic groups requires an analogous notion to “subspace”: that of the Bohr set. Let $R$ be a subset of $\mathbb{Z}/N\mathbb{Z}$ where $N$ is a prime. The Bohr set of dimension $|R|$ and width $\epsilon$ is -$$ -\operatorname{Bohr}(R, \epsilon) = \{ x \in \mathbb{Z}/N\mathbb{Z} : \forall r \in R, |rx/N| \le \epsilon \}, -$$ - -where $|rx/N|$ is the distance from $rx/N$ to the nearest integer. The following lemma generalizes Bogolyubov's lemma: - -Let $A \subseteq \mathbb{Z}/N\mathbb{Z}$ and $\alpha = |A|/N$. Then $2A-2A$ contains a Bohr set of dimension at most $\alpha^{-2}$ and width $1/4$. - -Here the dimension of a Bohr set is analogous to the codimension of a set in $\mathbb{F}_2^n$. The proof of the lemma involves Fourier-analytic methods. The following proposition relates Bohr sets back to generalized arithmetic progressions, eventually leading to the proof of Freiman's theorem. - -Let $X$ be a Bohr set in $\mathbb{Z}/N\mathbb{Z}$ of dimension $d$ and width $\epsilon$. Then $X$ contains a proper generalized arithmetic progression of dimension at most $d$ and size at least $(\epsilon/d)^dN$. - -The proof of this proposition uses Minkowski's theorem, a fundamental result in geometry of numbers.
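For small N a Bohr set can simply be enumerated, which makes the definition concrete; a quick Python sketch (the parameters are chosen here for illustration):

def bohr_set(N, R, eps):
    """Bohr(R, eps) in Z/NZ: all x such that r*x/N is within eps of an
    integer for every r in R."""
    dist = lambda t: abs(t - round(t))  # distance to the nearest integer
    return [x for x in range(N) if all(dist(r * x / N) <= eps for r in R)]

print(bohr_set(101, {1}, 0.1))  # the residues 0..10 and 91..100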
- -By the Plünnecke-Ruzsa inequality, $|8A-8A|\le K^{16}|A|$. By Bertrand's postulate, there exists a prime $N$ such that $|8A-8A|\le N\le 2K^{16}|A|$. By the Ruzsa modeling lemma, there exists a subset $A'$ of $A$ of cardinality at least $|A|/8$ such that $A'$ is Freiman 8-isomorphic to a subset $B\subseteq \mathbb{Z}/N\mathbb{Z}$. - -By the generalization of Bogolyubov's lemma, $2B-2B$ contains a proper generalized arithmetic progression of dimension $d$ at most $(1/(8\cdot 2K^{16}))^{-2}=256K^{32}$ and size at least $(1/(4d))^dN$. Because $A'$ and $B$ are Freiman 8-isomorphic, $2A'-2A'$ and $2B-2B$ are Freiman 2-isomorphic. Then the image under the 2-isomorphism of the proper generalized arithmetic progression in $2B-2B$ is a proper generalized arithmetic progression in $2A'-2A'\subseteq 2A-2A$ called $P$. - -But $P+A\subseteq 3A-2A$, since $P\subseteq 2A-2A$. Thus -$$ -|P+A|\le |3A-2A|\le |8A-8A|\le N\le (4d)^d|P| -$$ - -so by the Ruzsa covering lemma $A\subseteq X+P-P$ for some $X\subseteq A$ of cardinality at most $(4d)^d$. Then $X+P-P$ is contained in a generalized arithmetic progression of dimension $|X|+d$ and size at most $2^{|X|+d}|P|\le 2^{|X|+d}|2A-2A|\le 2^{|X|+d}K^4|A|$, completing the proof. - -A result due to Ben Green and Imre Ruzsa generalized Freiman's theorem to arbitrary abelian groups. They used an analogous notion to generalized arithmetic progressions, which they called coset progressions. A coset progression of an abelian group $G$ is a set $P+H$ for a proper generalized arithmetic progression $P$ and a subgroup $H$ of $G$. The dimension of this coset progression is defined to be the dimension of $P$, and its size is defined to be the cardinality of the whole set. Green and Ruzsa showed the following: - -Let $A$ be a finite set in an abelian group $G$ such that $|A+A| \le K|A|$. Then $A$ is contained in a coset progression of dimension at most $d(K)$ and size at most $f(K)|A|$, where $f(K)$ and $d(K)$ are functions of $K$ that are independent of $G$. - -Green and Ruzsa provided upper bounds of $d(K)=CK^4\log (K+2)$ and $f(K)=e^{CK^4\log^2(K+2)}$ for some absolute constant $C$. - -Terence Tao (2010) also generalized Freiman's theorem to solvable groups of bounded derived length. - -Extending Freiman’s theorem to an arbitrary nonabelian group is still open. Results for $K<2$, when a set has very small doubling, are referred to as Kneser theorems. diff --git a/wiki/wikipedia/3856.txt b/wiki/wikipedia/3856.txt deleted file mode 100644 index abe20e09de97fe0790ae36f6dfe424780b15f0c5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3856.txt +++ /dev/null @@ -1,33 +0,0 @@ -In computability theory and computational complexity theory, a decision problem is a problem that can be posed as a yes–no question of the input values. An example of a decision problem is deciding whether a given natural number is prime. Another is the problem "given two numbers x and y, does x evenly divide y?". The answer is either 'yes' or 'no' depending upon the values of x and y. A method for solving a decision problem, given in the form of an algorithm, is called a decision procedure for that problem. A decision procedure for the decision problem "given two numbers x and y, does x evenly divide y?" would give the steps for determining whether x evenly divides y. One such algorithm is long division. If the remainder is zero the answer is 'yes', otherwise it is 'no'. A decision problem which can be solved by an algorithm is called decidable.
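The divisibility question above makes the notion of a decision procedure concrete; a minimal Python sketch:

def divides(x: int, y: int) -> bool:
    """Decision procedure for "does x evenly divide y?": the answer is
    'yes' exactly when the remainder is zero."""
    return y % x == 0

print(divides(3, 12), divides(5, 12))  # True False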
- -Decision problems typically appear in mathematical questions of decidability, that is, the question of the existence of an effective method to determine the existence of some object or its membership in a set; some of the most important problems in mathematics are undecidable. - -The field of computational complexity categorizes decidable decision problems by how difficult they are to solve. "Difficult", in this sense, is described in terms of the computational resources needed by the most efficient algorithm for a certain problem. The field of recursion theory, meanwhile, categorizes undecidable decision problems by Turing degree, which is a measure of the noncomputability inherent in any solution. - -A decision problem is a yes-or-no question on an infinite set of inputs. It is traditional to define the decision problem as the set of possible inputs together with the set of inputs for which the answer is yes. - -These inputs can be natural numbers, but can also be values of some other kind, like binary strings or strings over some other alphabet. The subset of strings for which the problem returns "yes" is a formal language, and often decision problems are defined as formal languages. - -Using an encoding such as Gödel numbering, any string can be encoded as a natural number, via which a decision problem can be defined as a subset of the natural numbers. Therefore, an algorithm for a decision problem amounts to computing the characteristic function of a subset of the natural numbers. - -A classic example of a decidable decision problem is the set of prime numbers. It is possible to effectively decide whether a given natural number is prime by testing every possible nontrivial factor. Although much more efficient methods of primality testing are known, the existence of any effective method is enough to establish decidability. - -A decision problem is decidable or effectively solvable if the set of inputs (or natural numbers) for which the answer is yes is a recursive set. A problem is partially decidable, semidecidable, solvable, or provable if the set of inputs (or natural numbers) for which the answer is yes is a recursively enumerable set. Problems that are not decidable are undecidable. For those it is not possible to create an algorithm, efficient or otherwise, that solves them. - -The halting problem is an important undecidable decision problem; for more examples, see list of undecidable problems. - -Decision problems can be ordered according to many-one reducibility and related to feasible reductions such as polynomial-time reductions. A decision problem P is said to be complete for a set of decision problems S if P is a member of S and every problem in S can be reduced to P. Complete decision problems are used in computational complexity theory to characterize complexity classes of decision problems. For example, the Boolean satisfiability problem is complete for the class NP of decision problems under polynomial-time reducibility. - -Decision problems are closely related to function problems, which can have answers that are more complex than a simple 'yes' or 'no'. A corresponding function problem is "given two numbers x and y, what is x divided by y?". - -A function problem consists of a partial function f; the informal "problem" is to compute the values of f on the inputs for which it is defined. - -Every function problem can be turned into a decision problem; the decision problem is just the graph of the associated function.
(The graph of a function f is the set of pairs (x,y) such that f(x) = y.) If this decision problem were effectively solvable then the function problem would be as well. This reduction does not respect computational complexity, however. For example, it is possible for the graph of a function to be decidable in polynomial time (in which case running time is computed as a function of the pair (x,y)) when the function is not computable in polynomial time (in which case running time is computed as a function of x alone). The function $f(x) = 2^x$ has this property. - -Every decision problem can be converted into the function problem of computing the characteristic function of the set associated to the decision problem. If this function is computable then the associated decision problem is decidable. However, this reduction is more liberal than the standard reduction used in computational complexity (sometimes called polynomial-time many-one reduction); for example, the complexity of the characteristic functions of an NP-complete problem and its co-NP-complete complement is exactly the same even though the underlying decision problems may not be considered equivalent in some typical models of computation. - -Unlike decision problems, for which there is only one correct answer for each input, optimization problems are concerned with finding the best answer to a particular input. Optimization problems arise naturally in many applications, such as the traveling salesman problem and many questions in linear programming. - -There are standard techniques for transforming function and optimization problems into decision problems. For example, in the traveling salesman problem, the optimization problem is to produce a tour with minimal weight. The associated decision problem is: for each N, to decide whether the graph has any tour with weight less than N. By repeatedly answering the decision problem (for instance, by binary search on N), it is possible to find the minimal weight of a tour. - -Because the theory of decision problems is very well developed, research in complexity theory has typically focused on decision problems. Optimization problems themselves are still of interest in computability theory, as well as in fields such as operations research. diff --git a/wiki/wikipedia/3857.txt b/wiki/wikipedia/3857.txt deleted file mode 100644 index 742b34dee0cd70ef358058bab6be3b623a6208ed..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3857.txt +++ /dev/null @@ -1,46 +0,0 @@ -In algebra, the factor theorem is a theorem linking factors and zeros of a polynomial. It is a special case of the polynomial remainder theorem. - -The factor theorem states that a polynomial $f(x)$ has a factor $(x - k)$ if and only if $f(k)=0$ (i.e. $k$ is a root). - -Two problems where the factor theorem is commonly applied are those of factoring a polynomial and finding the roots of a polynomial equation; it is a direct consequence of the theorem that these problems are essentially equivalent. - -The factor theorem is also used to remove known zeros from a polynomial while leaving all unknown zeros intact, thus producing a lower degree polynomial whose zeros may be easier to find. Abstractly, the method is as follows: - -# "Guess" a zero $a$ of the polynomial $f$. (In general, this can be very hard, but math textbook problems that involve solving a polynomial equation are often designed so that some roots are easy to discover.) - -# Use the factor theorem to conclude that $(x-a)$ is a factor of $f(x)$.
- -# Compute the polynomial $ g(x) = \frac{f(x)}{(x-a)} $, for example using polynomial long division or synthetic division. - -# Conclude that any root $x \neq a$ of $f(x)=0$ is a root of $g(x)=0$. Since the polynomial degree of $g$ is one less than that of $f$, it is "simpler" to find the remaining zeros by studying $g$. - -Find the factors of -$$ -x^3 + 7x^2 + 8x + 2. -$$ - -To do this one would use trial and error (or the rational root theorem) to find the first x value that causes the expression to equal zero. To find out if $(x - 1)$ is a factor, substitute $x = 1$ into the polynomial above: - -\begin{align} - -x^3 + 7x^2 + 8x + 2 &= (1)^3 + 7(1)^2 + 8(1) + 2\\ - -&= 1 + 7 + 8 + 2\\ - -&= 18 - -\end{align} - -As this is equal to 18 and not 0, this means $(x - 1)$ is not a factor of $x^3 + 7x^2 + 8x + 2$. So, we next try $(x + 1)$ (substituting $x = -1$ into the polynomial): -$$ -(-1)^3 + 7(-1)^2 + 8(-1) + 2. -$$ - -This is equal to $0$. Therefore $x-(-1)$, which is to say $x+1$, is a factor, and $-1$ is a root of $x^3 + 7x^2 + 8x + 2.$ - -The next two roots can be found by algebraically dividing $x^3 + 7x^2 + 8x + 2$ by $(x+1)$ to get a quadratic: -$$ -{x^3 + 7x^2 + 8x + 2 \over x + 1} = x^2 + 6x + 2, -$$ - -and therefore $(x+1)$ and $x^2 + 6x + 2$ are factors of $x^3 + 7x^2 + 8x + 2.$ Of these, the quadratic factor can be further factored using the quadratic formula, which gives as roots of the quadratic $-3\pm \sqrt{7}.$ Thus the three irreducible factors of the original polynomial are $x+1, $ $x-(-3+\sqrt{7}),$ and $x-(-3-\sqrt{7}).$ diff --git a/wiki/wikipedia/3858.txt b/wiki/wikipedia/3858.txt deleted file mode 100644 index 7c7e9f2429f75aac5cb77d9c82665f420c095855..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3858.txt +++ /dev/null @@ -1,19 +0,0 @@ -In algebraic number theory, Minkowski's bound gives an upper bound on the norms of ideals to be checked in order to determine the class number of a number field K. It is named for the mathematician Hermann Minkowski. - -Let D be the discriminant of the field, n be the degree of K over $\mathbb{Q}$, and $2 r_2 = n - r_1$ be the number of complex embeddings where $r_1$ is the number of real embeddings. Then every class in the ideal class group of K contains an integral ideal of norm not exceeding Minkowski's bound -$$ - M_K = \sqrt{|D|} \left(\frac{4}{\pi}\right)^{r_2} \frac{n!}{n^n} \ . -$$ - -Minkowski's constant for the field K is this bound $M_K$. - -Since the number of integral ideals of given norm is finite, the finiteness of the class number is an immediate consequence, and further, the ideal class group is generated by the prime ideals of norm at most $M_K$. - -Minkowski's bound may be used to derive a lower bound for the discriminant of a field K given n, $r_1$ and $r_2$. Since an integral ideal has norm at least one, we have $1 \le M_K$, so that -$$ - \sqrt{|D|} \ge \left(\frac{\pi}{4}\right)^{r_2} \frac{n^n}{n!} \ge \left(\frac{\pi}{4}\right)^{n/2} \frac{n^n}{n!} \ . -$$ - -For n at least 2, it is easy to show that the lower bound is greater than 1, so we obtain Minkowski's Theorem, that the discriminant of every number field, other than Q, is non-trivial. This implies that the field of rational numbers has no unramified extension. - -The result is a consequence of Minkowski's theorem.
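The bound is easy to evaluate numerically; a Python sketch (the field Q(sqrt(-5)), with n = 2, r2 = 1 and discriminant -20, is chosen here as an illustration):

from math import factorial, pi, sqrt

def minkowski_bound(abs_disc, n, r2):
    """M_K = sqrt(|D|) * (4/pi)^r2 * n!/n^n."""
    return sqrt(abs_disc) * (4 / pi) ** r2 * factorial(n) / n ** n

# For K = Q(sqrt(-5)): every ideal class contains an ideal of norm at most
# about 2.85, so only ideals of norm 1 and 2 need to be examined.
print(minkowski_bound(20, 2, 1))  # approximately 2.847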
diff --git a/wiki/wikipedia/3859.txt b/wiki/wikipedia/3859.txt deleted file mode 100644 index fc45c882235b407d90cfe9cf67ee1604cd83c92b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3859.txt +++ /dev/null @@ -1,9 +0,0 @@ -In graph theory and graph drawing, a subhamiltonian graph is a subgraph of a planar Hamiltonian graph. - -A graph G is subhamiltonian if G is a subgraph of another graph aug(G) on the same vertex set, such that aug(G) is planar and contains a Hamiltonian cycle. For this to be true, G itself must be planar, and additionally it must be possible to add edges to G, preserving planarity, - -in order to create a cycle in the augmented graph that passes through each vertex exactly once. The graph aug(G) is called a Hamiltonian augmentation of G. and the Halin graphs. - -Every planar graph with maximum degree at most four is subhamiltonian, as is every planar graph with no separating triangles. - -If the edges of an arbitrary planar graph are subdivided into paths of length two, the resulting subdivided graph is always subhamiltonian. diff --git a/wiki/wikipedia/386.txt b/wiki/wikipedia/386.txt deleted file mode 100644 index be2b8f9ba2a71271fb6ce97449f6f2bd15bb797c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/386.txt +++ /dev/null @@ -1,15 +0,0 @@ -In mathematics, the phrase "of the form" indicates that a mathematical object, or (more frequently) a collection of objects, follows a certain pattern of expression. It is frequently used to reduce the formality of mathematical proofs. - -Here is a proof which should be appreciable with limited mathematical background: - -Statement: - -The product of any two even natural numbers is also even. - -Proof: - -Any even natural number is of the form 2n, where n is a natural number. Therefore, let us assume that we have two even numbers which we will denote by 2k and 2l. Their product is (2k)(2l) = 4(kl) = 2(2kl). Since 2kl is also a natural number, the product is even. - -Note: - -In this case, both exhaustivity and exclusivity were needed. That is, it was not only necessary that every even number is of the form 2n (exhaustivity), but also that every expression of the form 2n is an even number (exclusivity). This will not be the case in every proof, but normally, at least exhaustivity is implied by the phrase of the form. diff --git a/wiki/wikipedia/3860.txt b/wiki/wikipedia/3860.txt deleted file mode 100644 index 2df33a4a36349b0ba9b53476ffeca22783c8d2fc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3860.txt +++ /dev/null @@ -1,42 +0,0 @@ -Data synchronization is the process of establishing consistency among data from a source to a target data storage and vice versa and the continuous harmonization of the data over time. It is fundamental to a wide variety of applications, including file synchronization and mobile device synchronization e.g., for PDAs. - -Synchronization can also be useful in encryption for synchronizing Public Key Servers. - -There are tools available for file synchronization, version control (CVS, Subversion, etc.), distributed filesystems (Coda, etc.), and mirroring (rsync, etc.), in that all these attempt to keep sets of files synchronized. However, only version control and file synchronization tools can deal with modifications to more than one copy of the files. - -* File synchronization is commonly used for home backups on external hard drives or updating for transport on USB flash drives. 
The automatic process prevents copying already identical files, which can save considerable time relative to a manual copy and is less error prone. - -* Version control tools are intended to deal with situations where more than one user attempts to simultaneously modify the same file, while file synchronizers are optimized for situations where only one copy of the file will be edited at a time. For this reason, although version control tools can be used for file synchronization, dedicated programs require less overhead. - -* Distributed filesystems may also be seen as ensuring multiple versions of a file are synchronized. This normally requires that the devices storing the files are always connected, but some distributed file systems like Coda allow disconnected operation followed by reconciliation. The merging facilities of a distributed file system are typically more limited than those of a version control system because most file systems do not keep a version graph. - -* Mirror (computing): A mirror is an exact copy of a data set. On the Internet, a mirror site is an exact copy of another Internet site. Mirror sites are most commonly used to provide multiple sources of the same information, and are of particular value as a way of providing reliable access to large downloads. - -Several theoretical models of data synchronization exist in the research literature, and the problem is also related to the problem of Slepian–Wolf coding in information theory. The models are classified based on how they consider the data to be synchronized. - -The problem of synchronizing unordered data (also known as the set reconciliation problem) is modeled as an attempt to compute the symmetric difference -$$ -S_A \oplus S_B = (S_A - S_B) \cup (S_B - S_A) -$$ between two remote sets $S_A$ - -and $S_B$ of b-bit numbers. Some solutions to this problem are typified by: - -;Wholesale transfer: In this case all data is transferred to one host for a local comparison. - -;Timestamp synchronization: In this case all changes to the data are marked with timestamps. Synchronization proceeds by transferring all data with a timestamp later than the previous synchronization. - -;Mathematical synchronization: In this case data are treated as mathematical objects and synchronization corresponds to a mathematical process. - -In this case, two remote strings $\sigma_A$ and $\sigma_B$ need to be reconciled. Typically, it is assumed that these strings differ by up to a fixed number of edits (i.e. character insertions, deletions, or modifications). Then data synchronization is the process of reducing edit distance between $\sigma_A$ and $\sigma_B$, up to the ideal distance of zero. This is applied in all filesystem-based synchronizations (where the data is ordered). Many practical applications of this are discussed or referenced above. - -It is sometimes possible to transform the problem to one of unordered data through a process known as shingling (splitting the strings into shingles). - -In fault-tolerant systems, distributed databases must be able to cope with the loss or corruption of (part of) their data. The first step is usually replication, which involves making multiple copies of the data and keeping them all up to date as changes are made. However, it is then necessary to decide which copy to rely on when loss or corruption of an instance occurs.
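Returning to the unordered-data model above, exchanging the two one-sided differences is enough to reconcile the sets; a toy Python sketch (illustrative only):

def reconcile(local, remote):
    """Set reconciliation: each side receives the elements it lacks, so
    afterwards both hold the union; the traffic needed is exactly the
    symmetric difference."""
    return local | (remote - local), remote | (local - remote)

a, b = reconcile({1, 2, 3}, {2, 3, 4})
print(a == b == {1, 2, 3, 4})  # True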
Changes to it are replicated to other instances, and one of those instances becomes the new master when the old master fails. - -Paxos and Raft are more complex protocols that exist to solve problems with transient effects during failover, such as two instances thinking they are the master at the same time. - -Secret sharing is useful if failures of whole nodes are very common. This moves synchronization from an explicit recovery process to being part of each read, where a read of some data requires retrieving encoded data from several different nodes. If corrupt or out-of-date data may be present on some nodes, this approach may also benefit from the use of an error correction code. - -DHTs and Blockchains try to solve the problem of synchronization between many nodes (hundreds to billions). diff --git a/wiki/wikipedia/3861.txt b/wiki/wikipedia/3861.txt deleted file mode 100644 index a31bb612384c789559a111cff2c6fc3feda8da0c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3861.txt +++ /dev/null @@ -1,446 +0,0 @@ -Maple is a symbolic and numeric computing environment as well as a multi-paradigm programming language. It covers several areas of technical computing, such as symbolic mathematics, numerical analysis, data processing, visualization, and others. A toolbox, MapleSim, adds functionality for multidomain physical modeling and code generation. - -Maple's capabilities for symbolic computing include those of a general-purpose computer algebra system. For instance, it can manipulate mathematical expressions and find symbolic solutions to certain problems, such as those arising from ordinary and partial differential equations. - -Maple is developed commercially by the Canadian software company Maplesoft. The name 'Maple' is a reference to the software's Canadian heritage. - -Users can enter mathematics in traditional mathematical notation. Custom user interfaces can also be created. There is support for numeric computations, to arbitrary precision, as well as symbolic computation and visualization. Examples of symbolic computations are given below. - -Maple incorporates a dynamically typed imperative-style programming language (resembling Pascal), which permits lexically scoped variables. There are also interfaces to other languages (C, C#, Fortran, Java, MATLAB, and Visual Basic), as well as to Microsoft Excel. - -Maple supports MathML 2.0, which is a W3C format for representing and interpreting mathematical expressions, including their display in web pages. There is also functionality for converting expressions from traditional mathematical notation to markup suitable for the typesetting system LaTeX. - -Maple is based on a small kernel, written in C, which provides the Maple language. Most functionality is provided by libraries, which come from a variety of sources. Most of the libraries are written in the Maple language; these have viewable source code. Many numerical computations are performed by the NAG Numerical Libraries, ATLAS libraries, or GMP libraries. - -Different functionality in Maple requires numerical data in different formats. Symbolic expressions are stored in memory as directed acyclic graphs. The standard interface and calculator interface are written in Java. - -The first concept of Maple arose from a meeting in late 1980 at the University of Waterloo. Researchers at the university wished to purchase a computer powerful enough to run the Lisp-based computer algebra system Macsyma.
Instead, they opted to develop their own computer algebra system, named Maple, that would run on lower cost computers. Aiming for portability, they began writing Maple in programming languages from the BCPL family (initially using a subset of B and C, and later on only C). By the end of 1983, over 50 universities had copies of Maple installed on their machines. - -In 1984, the research group arranged with Watcom Products Inc to license and distribute the first commercially available version, Maple 3.3. In 1988 Waterloo Maple Inc. (Maplesoft) was founded. The company’s original goal was to manage the distribution of the software, but eventually it grew to have its own R&D department, where most of Maple's development takes place today (the remainder being done at various university laboratories). - -In 1989, the first graphical user interface for Maple was developed and included with version 4.3 for the Macintosh. X11 and Windows versions of the new interface followed in 1990 with Maple V. In 1992, Maple V Release 2 introduced the Maple "worksheet" that combined text, graphics, and input and typeset output. In 1994 a special issue of a newsletter created by Maple developers called MapleTech was published. - -In 1999, with the release of Maple 6, Maple included some of the NAG Numerical Libraries. In 2003, the current "standard" interface was introduced with Maple 9. This interface is primarily written in Java (although portions, such as the rules for typesetting mathematical formulae, are written in the Maple language). The Java interface was criticized for being slow; improvements have been made in later versions, although the Maple 11 documentation recommends the previous ("classic") interface for users with less than 500 MB of physical memory. - -Between 1995 and 2005 Maple lost significant market share to competitors due to a weaker user interface. With Maple 10 in 2005, Maple introduced a new "document mode" interface, which has since been further developed across several releases. - -In September 2009 Maple and Maplesoft were acquired by the Japanese software retailer Cybernet Systems. 
 - -* Maple 1.0: January, 1982 - -* Maple 1.1: January, 1982 - -* Maple 2.0: May, 1982 - -* Maple 2.1: June, 1982 - -* Maple 2.15: August, 1982 - -* Maple 2.2: December, 1982 - -* Maple 3.0: May, 1983 - -* Maple 3.1: October, 1983 - -* Maple 3.2: April, 1984 - -* Maple 3.3: March, 1985 (first publicly available version) - -* Maple 4.0: April, 1986 - -* Maple 4.1: May, 1987 - -* Maple 4.2: December, 1987 - -* Maple 4.3: March, 1989 - -* Maple V: August, 1990 - -* Maple V R2: November, 1992 - -* Maple V R3: March 15, 1994 - -* Maple V R4: January, 1996 - -* Maple V R5: November 1, 1997 - -* Maple 6: December 6, 1999 - -* Maple 7: July 1, 2001 - -* Maple 8: April 16, 2002 - -* Maple 9: June 30, 2003 - -* Maple 9.5: April 15, 2004 - -* Maple 10: May 10, 2005 - -* Maple 11: February 21, 2007 - -* Maple 11.01: July, 2007 - -* Maple 11.02: November, 2007 - -* Maple 12: May, 2008 - -* Maple 12.01: October, 2008 - -* Maple 12.02: December, 2008 - -* Maple 13: April 28, 2009 - -* Maple 13.01: July, 2009 - -* Maple 13.02: October, 2009 - -* Maple 14: April 29, 2010 - -* Maple 14.01: October 28, 2010 - -* Maple 15: April 13, 2011 - -* Maple 15.01: June 21, 2011 - -* Maple 16: March 28, 2012 - -* Maple 16.01: May 16, 2012 - -* Maple 17: March 13, 2013 - -* Maple 17.01: July, 2013 - -* Maple 18: March 5, 2014 - -* Maple 18.01: May, 2014 - -* Maple 18.01a: July, 2014 - -* Maple 18.02: November, 2014 - -* Maple 2015.0: March 4, 2015 - -* Maple 2015.1: November, 2015 - -* Maple 2016.0: March 2, 2016 - -* Maple 2016.1: April 20, 2016 - -* Maple 2016.1a: April 27, 2016 - -* Maple 2017.0: May 25, 2017 - -* Maple 2017.1: June 28, 2017 - -* Maple 2017.2: August 2, 2017 - -* Maple 2017.3: October 3, 2017 - -* Maple 2018.0: March 21, 2018 - -* Maple 2019.0: March 14, 2019 - -* Maple 2020.0: March 12, 2020 - -Features of Maple include: - -* Support for symbolic and numeric computation with arbitrary precision - -* Elementary and special mathematical function libraries - -* Complex numbers and interval arithmetic - -* Arithmetic, greatest common divisors and factorization for multivariate polynomials over the rationals, finite fields, algebraic number fields, and algebraic function fields - -* Limits, series and asymptotic expansions - -* Gröbner basis - -* Differential Algebra - -* Matrix manipulation tools including support for sparse arrays - -* Mathematical function graphing and animation tools - -* Solvers for systems of equations, diophantine equations, ODEs, PDEs, DAEs, DDEs and recurrence relations - -* Numeric and symbolic tools for discrete and continuous calculus including definite and indefinite integration, definite and indefinite summation, automatic differentiation and continuous and discrete integral transforms - -* Constrained and unconstrained local and global optimization - -* Statistics including model fitting, hypothesis testing, and probability distributions - -* Tools for data manipulation, visualization and analysis - -* Tools for probability and combinatoric problems - -* Support for time-series and unit based data - -* Connection to an online collection of financial and economic data - -* Tools for financial calculations including bonds, annuities, derivatives, options etc.
 - -* Calculations and simulations on random processes - -* Tools for text mining including regular expressions - -* Tools for signal processing and linear and non-linear control systems - -* Discrete math tools including number theory - -* Tools for visualizing and analysing directed and undirected graphs - -* Group theory including permutation and finitely presented groups - -* Symbolic tensor functions - -* Import and export filters for data, image, sound, CAD, and document formats - -* Technical word processing including formula editing - -* Programming language supporting procedural, functional and object-oriented constructs - -* Tools for adding user interfaces to calculations and applications - -* Tools for connecting to SQL, Java, .NET, C++, Fortran and HTTP - -* Tools for generating code for C, C#, Fortran, Java, JavaScript, Julia, MATLAB, Perl, Python, R, and Visual Basic - -* Tools for parallel programming - -The following code, which computes the factorial of a nonnegative integer, is an example of an imperative programming construct within Maple: - - - -myfac := proc(n::nonnegint) - -local out, i; - -out := 1; - -for i from 2 to n do - -out := out * i - -end do; - -out - -end proc; - - - -Simple functions can also be defined using the "maps to" arrow notation: - - - -myfac := n -> product(i, i = 1..n); - - - -Find -$$ -\int\cos\left(\frac{x}{a}\right)dx -$$. - - - -int(cos(x/a), x); - - - -Output: -$$ -a \sin\left(\frac{x}{a}\right) -$$ - -Compute the determinant of a matrix. - - - -M := Matrix([[1,2,3], [a,b,c], [x,y,z]]); # example Matrix - - - - - -\begin{bmatrix} - -1 & 2 & 3 \\ - -a & b & c \\ - -x & y & z - -\end{bmatrix} - - - -LinearAlgebra:-Determinant(M); -$$ -bz-cy+3ay-2az+2xc-3xb -$$ - - - -series(tanh(x), x = 0, 15) - - -$$ -x-\frac{1}{3}x^3+\frac{2}{15}x^5-\frac{17}{315}x^7 -$$ -$$ -{}+\frac{62}{2835}x^9-\frac{1382}{155925}x^{11}+\frac{21844}{6081075}x^{13}+\mathcal{O}\left(x^{15}\right) -$$ - -The following code numerically calculates the roots of a high-order polynomial: - - - -f := x^53-88*x^5-3*x-5 = 0: - -fsolve(f); - --1.097486315, -.5226535640, 1.099074017 - - - -The same command can also solve systems of equations: - - - -f := (cos(x+y))^2 + exp(x)*y+cot(x-y)+cosh(z+x) = 0: - -g := x^5 - 8*y = 2: - -h := x+3*y-77*z = 55: - -fsolve( {f,g,h} ); - -{x = -1.543352313, y = -1.344549481, z = -.7867142955} - - - -Plot $x \sin(x)$ with $x$ ranging from -10 to 10: - - - -plot(x*sin(x), x = -10..10); - - - -Plot $x^2+y^2$ with $x$ and $y$ ranging from -1 to 1: - - - -plot3d(x^2+y^2, x = -1..1, y = -1..1); - - - -* Animation of a function of two variables -$$ -f := \frac{2k^2}{\cosh^2\left(x k - 4 k^3 t\right)} -$$ - - - -plots:-animate(subs(k = 0.5, f), x=-30..30, t=-10..10, numpoints=200, frames=50, color=red, thickness=3); - - - -* Animation of functions of three variables - - - -plots:-animate3d(cos(t*x)*sin(3*t*y), x=-Pi..Pi, y=-Pi..Pi, t=1..2); - - - -* Fly-through animation of 3-D plots.
 - - - -M := Matrix([[400,400,200], [100,100,-400], [1,1,1]], datatype=float[8]): - -plot3d(1, x=0..2*Pi, y=0..Pi, axes=none, coords=spherical, viewpoint=[path=M]); - - - -* Laplace transform - - - -f := (1+A*t+B*t^2)*exp(c*t); - - -$$ - \left(1 + A t + B t^2\right) e^{c t} -$$ - - - -inttrans:-laplace(f, t, s); - - -$$ -\frac{1}{s-c}+\frac{A}{(s-c)^2}+\frac{2B}{(s-c)^3} -$$ - -* inverse Laplace transform - - - -inttrans:-invlaplace(1/(s-a), s, x); - - -$$ -e^{ax} -$$ - -* Fourier transform - - - -inttrans:-fourier(sin(x), x, w); - - -$$ -\mathrm{I}\pi(\mathrm{Dirac}(w+1)-\mathrm{Dirac}(w-1)) -$$ - -Find functions $f$ that satisfy the integral equation -$$ -f(x)-3\int_{-1}^1(xy+x^2y^2)f(y)dy = h(x) -$$. - - - -eqn:= f(x)-3*Int((x*y+x^2*y^2)*f(y), y=-1..1) = h(x): - -intsolve(eqn,f(x)); - - - -f \left( x \right) =\int _{-1}^{1}\! \left( -15{x}^{2}{y}^{2}-3xy \right) h \left( y \right) {dy}+h \left( x \right) - - - -The Maple engine is used within several other products from Maplesoft: - -* Moebius, DigitalEd’s online testing suite, uses Maple to algorithmically generate questions and grade student responses. - -* MapleNet allows users to create JSP pages and Java Applets. MapleNet 12 and above also allow users to upload and work with Maple worksheets containing interactive components. - -* MapleSim, an engineering simulation tool. - -* Maple Quantum Chemistry Package from RDMChem computes and visualizes the electronic energies and properties of molecules. - -Listed below are third-party commercial products that no longer use the Maple engine: - -* Versions of Mathcad released between 1994 and 2006 included a Maple-derived algebra engine (MKM, aka Mathsoft Kernel Maple), though subsequent versions use MuPAD. - -* Symbolic Math Toolbox in MATLAB contained a portion of the Maple 10 engine, but now uses MuPAD (starting with the MATLAB R2007b+ release). - -* Older versions of the mathematical editor Scientific Workplace included Maple as a computational engine, though current versions include MuPAD. diff --git a/wiki/wikipedia/3862.txt b/wiki/wikipedia/3862.txt deleted file mode 100644 index 2f63aaf1050ef9b74ccdff42750d4abd5c95e0f1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3862.txt +++ /dev/null @@ -1,56 +0,0 @@ -In mathematics, Grunsky's theorem, due to the German mathematician Helmut Grunsky, is a result in complex analysis concerning holomorphic univalent functions defined on the unit disk in the complex numbers. The theorem states that a univalent function defined on the unit disc, fixing the point 0, maps every disk |z| < r onto a starlike domain for r ≤ tanh π/4. The largest r for which this is true is called the radius of starlikeness of the function. - -Let f be a univalent holomorphic function on the unit disc D such that f(0) = 0. Then for all r ≤ tanh π/4, the image of the disc |z| < r is starlike with respect to 0, i.e. it is invariant under multiplication by real numbers in (0,1). - -If f(z) is univalent on D with f(0) = 0, then -$$ -\left|\log {zf^\prime(z)\over f(z)}\right|\le \log {1+|z|\over 1-|z|}. -$$ - -Taking the real and imaginary parts of the logarithm, this implies the two inequalities -$$ -\left|{zf^\prime(z)\over f(z)}\right|\le {1+|z|\over 1-|z|} -$$ - -and -$$ -\left|\arg {zf^\prime(z)\over f(z)}\right| \le \log {1+|z|\over 1-|z|}. -$$ - -For fixed z, equality is attained in both of these inequalities by suitable Koebe functions -$$ - g_w(\zeta)={\zeta\over (1-\overline{w}\zeta)^2}, -$$ - -where |w| = 1.
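As a worked check of sharpness (added here for illustration; it is not part of the original statement): for the Koebe function $f(z)=z/(1-z)^2$, the case w = 1 of the functions above, a direct computation gives
$$
\frac{zf^\prime(z)}{f(z)} = \frac{z\cdot\frac{1+z}{(1-z)^3}}{\frac{z}{(1-z)^2}} = \frac{1+z}{1-z},
$$
so at real points z = r in (0,1) the first inequality holds with equality, showing that the bound (1+|z|)/(1−|z|) cannot be improved.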
 - -Grunsky originally proved these inequalities based on extremal techniques of Ludwig Bieberbach. Subsequent proofs, outlined in Goluzin, relied on the Loewner equation. More elementary proofs were subsequently given based on Goluzin's inequalities, an equivalent form of Grunsky's inequalities (1939) for the Grunsky matrix. - -For a univalent function g in |z| > 1 with an expansion -$$ - g(z) = z + b_1 z^{-1} + b_2 z^{-2} + \cdots, -$$ - -Goluzin's inequalities state that -$$ - \left|\sum_{i=1}^n\sum_{j=1}^n\lambda_i\lambda_j \log {g(z_i)-g(z_j)\over z_i-z_j}\right| \le \sum_{i=1}^n\sum_{j=1}^n \lambda_i\overline{\lambda_j}\log {z_i\overline{z_j}\over z_i\overline{z_j}-1}, -$$ - -where the zi are distinct points with |zi| > 1 and λi are arbitrary complex numbers. - -Taking n = 2, with λ1 = – λ2 = λ, the inequality implies -$$ - \left| \log {g^\prime(\zeta)g^\prime(\eta) (\zeta-\eta)^2\over (g(\zeta)-g(\eta))^2}\right|\le \log {|1-\zeta\overline{\eta}|^2\over (|\zeta|^2-1)(|\eta|^2-1)}. -$$ - -Thus if -$$ - \log {1+|z|\over 1-|z|} \le {\pi\over 2}, -$$ - -then $|\arg (zf^\prime(z)/f(z))| \le \pi/2$, so the starlikeness condition holds at z. This condition is equivalent to -$$ -|z|\le \tanh {\pi\over 4} -$$ - -and hence f is starlike on any disk |z| < r with r ≤ tanh π/4. diff --git a/wiki/wikipedia/3863.txt b/wiki/wikipedia/3863.txt deleted file mode 100644 index 16392126f0fb650790cfa0f9f9ba3690b5efb462..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3863.txt +++ /dev/null @@ -1,127 +0,0 @@ -In trigonometry, the law of tangents is a statement about the relationship between the tangents of two angles of a triangle and the lengths of the opposing sides. - -In Figure 1, a, b, and c are the lengths of the three sides of the triangle, and α, β, and γ are the angles opposite those three respective sides. The law of tangents states that - - - -\frac{a-b}{a+b} - -= \frac{\tan\tfrac12(\alpha-\beta)} - -{\tan\tfrac12(\alpha+\beta)}. - - - -The law of tangents, although not as commonly known as the law of sines or the law of cosines, is equivalent to the law of sines, and can be used in any case where two sides and the included angle, or two angles and a side, are known. - -To prove the law of tangents one can start with the law of sines: -$$ -\frac{a}{\sin\alpha} = \frac{b}{\sin\beta}. -$$ - -Let -$$ -d = \frac{a}{\sin\alpha} = \frac{b}{\sin\beta} -$$ - -so that - - - -a = d \sin\alpha \quad\text{and} - -\quad b = d \sin\beta. - - - -It follows that - - - -\frac{a-b}{a+b} - -= \frac{d \sin \alpha - d\sin\beta} - -{d \sin\alpha + d\sin\beta} - -= \frac{\sin \alpha - \sin\beta} - -{\sin\alpha + \sin\beta}. - - - -Using the trigonometric identity known as the factor (sum-to-product) formula for sines, namely - - - -\sin\alpha \pm \sin\beta - -= 2 \sin\tfrac12(\alpha \pm \beta) \cos\tfrac12( \alpha \mp \beta), - - - -we get - -\frac{a-b}{a+b} - -= \frac{2\sin\tfrac12(\alpha-\beta) \cos\tfrac12(\alpha+\beta)} - -{2\sin\tfrac12(\alpha+\beta) \cos\tfrac12(\alpha-\beta)} - -= \frac{\sin\tfrac12(\alpha-\beta)} - -{\cos\tfrac12(\alpha-\beta)} \Bigg/ - -\frac{\sin\tfrac12(\alpha+\beta)} - -{\cos\tfrac12(\alpha+\beta)} - -= \frac{\tan\tfrac12(\alpha-\beta)} - -{\tan\tfrac12(\alpha+\beta)}. - - - -As an alternative to using the identity for the sum or difference of two sines, one may cite the trigonometric identity - - - -\tan \tfrac12 (\alpha \pm \beta) - -= \frac{\sin\alpha \pm \sin\beta} - -{\cos\alpha + \cos\beta} - - - -(see tangent half-angle formula). - -The law of tangents can be used to compute the missing side and angles of a triangle in which two sides a and b and the enclosed angle γ are given.
From - - - -\tan\tfrac12(\alpha-\beta) - -= \frac{a-b}{a+b} \tan\tfrac12(\alpha+\beta) - -= \frac{a-b}{a+b} \cot\tfrac12\gamma - - - -one can compute α − β; together with α + β = 180° − γ this yields α and β; the remaining side c can then be computed using the law of sines. In the time before electronic calculators were available, this method was preferable to an application of the law of cosines $c = \sqrt{a^2 + b^2 - 2ab\cos\gamma}$, as this latter law necessitated an additional lookup in a logarithm table, in order to compute the square root. In modern times the law of tangents may have better numerical properties than the law of cosines: If γ is small, and a and b are almost equal, then an application of the law of cosines leads to a subtraction of almost equal values, incurring catastrophic cancellation. - -On a sphere of unit radius, the sides of the triangle are arcs of great circles. Accordingly, their lengths can be expressed in radians or any other units of angular measure. Let A, B, C be the angles at the three vertices of the triangle and let a, b, c be the respective lengths of the opposite sides. The spherical law of tangents says - - - -\frac{\tan\tfrac12(A-B)}{\tan\tfrac12(A+B)} - -= \frac{\tan\tfrac12(a-b)} - -{\tan\tfrac12(a+b)}. - - - -The law of tangents for spherical triangles was described in the 13th century by Persian mathematician Nasir al-Din al-Tusi (1201–1274), who also presented the law of sines for plane triangles in his five-volume work Treatise on the Quadrilateral. diff --git a/wiki/wikipedia/3864.txt b/wiki/wikipedia/3864.txt deleted file mode 100644 index 3903750ebb2ceeff5156850b49f750f40d4c785c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3864.txt +++ /dev/null @@ -1,72 +0,0 @@ -In the mathematical field known as complex analysis, Jensen's formula, introduced by Johan Jensen, relates the average magnitude of an analytic function on a circle with the number of its zeros inside the circle. It forms an important statement in the study of entire functions. - -Suppose that ƒ is an analytic function in a region in the complex plane which contains the closed disk D of radius r about the origin, a1, a2, ..., an are the zeros of ƒ in the interior of D repeated according to multiplicity, and ƒ(0) ≠ 0. Jensen's formula states that -$$ -\log |f(0)| = \sum_{k=1}^n \log \left( \frac{|a_k|}{r}\right) + \frac{1}{2\pi} \int_0^{2\pi} \log|f(re^{i\theta})| d\theta. -$$ - -This formula establishes a connection between the moduli of the zeros of the function ƒ inside the disk D and the average of log |f(z)| on the boundary circle |z| = r, and can be seen as a generalisation of the mean value property of harmonic functions. Namely, if f has no zeros in D, then Jensen's formula reduces to -$$ -\log |f(0)| = \frac{1}{2\pi} \int_0^{2\pi} \log|f(re^{i\theta})| d\theta, -$$ - -which is the mean-value property of the harmonic function $\log |f(z)|$. - -An equivalent statement of Jensen's formula that is frequently used is - -\frac{1}{2\pi} \int_0^{2\pi} \log |f(re^{i\theta})| d\theta - -- \log |f(0)| = \int_0^r \frac{n(t)}{t} dt - - - -where $n(t)$ denotes the number of zeros of $f$ in the disc of radius $t$ centered at the origin. - -Jensen's formula may be generalized for functions which are merely meromorphic on D.
Namely, assume that -$$ -f(z)=z^l \frac{g(z)}{h(z)}, -$$ - -where g and h are analytic functions in D having zeros at $a_1,\ldots,a_n \in \mathbb D\setminus\{0\}$ - -and -$$ -b_1,\ldots,b_m \in \mathbb D\setminus\{0\} -$$ - -respectively, then Jensen's formula for meromorphic functions states that -$$ -\log \left|\frac{g(0)}{h(0)}\right| = \log \left |r^{m-n-l} \frac{a_1\ldots a_n}{b_1\ldots b_m}\right| + \frac{1}{2\pi} \int_0^{2\pi} \log|f(re^{i\theta})| d\theta, -$$ - -where the exponent $-l$ accounts for the zero or pole of order $l$ of $f$ at the origin. - -Jensen's formula can be used to estimate the number of zeros of an analytic function in a circle. Namely, if $f$ is a function analytic in a disk of radius R centered at z0 and if $| \ f \ |$ is bounded by M on the boundary of that disk, then the number of zeros of $f$ in a circle of radius r < R centered at the same point z0 does not exceed - - - -\frac{1}{\log (R/r)} \log \frac{M}{|f(z_0)|}. - - - -Jensen's formula is an important statement in the study of value distribution of entire and meromorphic functions. In particular, it is the starting point of Nevanlinna theory. - -Jensen's formula is a consequence of the more general Poisson-Jensen formula, which in turn follows from Jensen's formula by applying a Möbius transformation to z. It was introduced and named by Rolf Nevanlinna. If f is a function which is analytic in the unit disk, with zeros a1, a2, ..., an located in the interior of the unit disk, then for every $z_0=r_0e^{i\varphi_0}$ in the unit disk the Poisson-Jensen formula states that -$$ -\log |f(z_0)| = \sum_{k=1}^n \log \left|\frac{z_0-a_k}{1-\bar {a}_k z_0} \right| + \frac{1}{2\pi} \int_0^{2\pi} P_{r_0}(\varphi_0-\theta) \log |f(e^{i\theta})| d\theta. -$$ - -Here, - - - -P_{r}(\omega)= \sum_{n\in \mathbb Z} r^{|n|} e^{i n\omega} - - - -is the Poisson kernel on the unit disk. - -If the function f has no zeros in the unit disk, the Poisson-Jensen formula reduces to -$$ -\log |f(z_0)| = \frac{1}{2\pi} \int_0^{2\pi} P_{r_0}(\varphi_0-\theta) \log |f(e^{i\theta})| d\theta, -$$ - -which is the Poisson formula for the harmonic function $\log |f(z)|$. diff --git a/wiki/wikipedia/3865.txt b/wiki/wikipedia/3865.txt deleted file mode 100644 index bd67b4d35c69e90e1a50e8db2040c31f113c4e9c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3865.txt +++ /dev/null @@ -1,38 +0,0 @@ -In real analysis and approximation theory, the Arnold-Kolmogorov representation theorem (or superposition theorem) states that every multivariate continuous function can be represented as a superposition of the 2-argument addition and continuous functions of one variable. It solved a more constrained, yet more general form of Hilbert's thirteenth problem. - -The works of Vladimir Arnold and Andrey Kolmogorov established that if f is a multivariate continuous function, then f can be written as a finite composition of continuous functions of a single variable and the binary operation of addition. More specifically, -$$ - f(\mathbf x) = f(x_1,\ldots ,x_n) = \sum_{q=0}^{2n} \Phi_{q}\left(\sum_{p=1}^{n} \phi_{q,p}(x_{p})\right) -$$. - -There are proofs with specific constructions. - -In a sense, they showed that the only true multivariate function is the sum, since every other function can be written using univariate functions and summing. - -The Kolmogorov–Arnold representation theorem is closely related to Hilbert's 13th problem. In his Paris lecture at the International Congress of Mathematicians in 1900, David Hilbert formulated 23 problems which in his opinion were important for the further development of mathematics.
The 13th of these problems dealt with the solution of general equations of higher degrees. It is known that for algebraic equations of degree at most 4 the solution can be computed by formulae that only contain radicals and arithmetic operations. For higher degrees, Galois theory shows us that the solutions of algebraic equations cannot, in general, be expressed in terms of basic algebraic operations and radicals. It follows from the so-called Tschirnhaus transformation that the general algebraic equation -$$ -x^{n}+a_{n-1}x^{n-1}+\cdots +a_{0}=0 -$$ - -can be translated to the form $ y^{n}+b_{n-4}y^{n-4}+\cdots +b_{1}y+1=0$. The Tschirnhaus transformation is given by a formula containing only radicals and arithmetic operations, and transforms the former equation into the latter. Therefore, the solution of an algebraic equation of degree $n$ can be represented as a superposition of functions of two variables if $n<7$ and as a superposition of functions of $n-4$ variables if $n\geq 7$. For $n=7$ the solution is a superposition of arithmetic operations, radicals, and the solution of the equation $y^{7}+b_{3}y^{3}+b_{2}y^{2}+b_{1}y+1=0$. - -A further simplification with algebraic transformations seems to be impossible, which led to Hilbert's conjecture that "A solution of the general equation of degree 7 cannot be represented as a superposition of continuous functions of two variables". This explains the relation of Hilbert's thirteenth problem to the representation of a higher-dimensional function as a superposition of lower-dimensional functions. In this context, it has stimulated many studies in the theory of functions and other related problems by different authors. - -A variant of Kolmogorov's theorem that reduces the number of outer functions $\Phi_{q}$ is due to George Lorentz. He showed in 1962 that the outer functions $\Phi_{q}$ can be replaced by a single function $\Phi$. More precisely, Lorentz proved the existence of functions $\phi _{q,p}$, $q=0,1,\ldots, 2n$, $p=1,\ldots,n,$ such that -$$ - f(\mathbf x) = \sum_{q=0}^{2n} \Phi\left(\sum_{p=1}^{n} \phi_{q,p}(x_{p})\right) -$$. - -David Sprecher replaced the inner functions $\phi_{q,p}$ by one single inner function with an appropriate shift in its argument. He proved that there exist real values $\eta, \lambda_1,\ldots,\lambda_n$, a continuous function $\Phi\colon \mathbb{R} \rightarrow \R$, and a real increasing continuous function $\phi\colon [0,1] \rightarrow [0,1]$ with $\phi \in \operatorname{Lip}(\ln 2/\ln (2N+2))$, for $N \geq n \geq 2$, such that -$$ - f(\mathbf x) = \sum_{q=0}^{2n} \Phi\left(\sum_{p=1}^{n} \lambda_p \phi(x_{p}+\eta q)+q \right) -$$. - -Phillip A. Ostrand generalized the Kolmogorov superposition theorem to compact metric spaces. For $p=1,\ldots,m$ let $X_p$ be compact metric spaces of finite dimension $n_p$ and let $n = \sum_{p=1}^{m} n_p$. Then there exist continuous functions $\phi_{q,p}\colon X_p \rightarrow [0,1], q=0,\ldots,2n, p=1,\ldots,m$ and continuous functions $G_q\colon [0,1] \rightarrow \R, q=0,\ldots,2n$ such that any continuous function $f\colon X_1 \times \dots \times X_m \rightarrow \mathbb{R}$ is representable in the form -$$ - f(x_1,\ldots,x_m) = \sum_{q=0}^{2n} G_{q}\left(\sum_{p=1}^{m} \phi_{q,p}(x_{p})\right) -$$. - -The theorem does not hold in general for complex-valued multivariate functions. Furthermore, the non-smoothness of the inner functions and their "wild behavior" has limited the practical use of the representation, although there is some debate on this.
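The nested shape of the representation is easy to mirror in code. The following minimal sketch is illustrative only: the inner and outer functions below are arbitrary smooth stand-ins chosen so the snippet runs, not the highly non-smooth functions produced by the actual constructions, and the names are invented. It shows how a continuous function of n variables is assembled from 2n+1 univariate outer functions, univariate inner functions, and addition alone:

```python
import math

# f(x) = sum_{q=0}^{2n} Phi_q( sum_{p=1}^{n} phi_{q,p}(x_p) )
# Phi and phi here are arbitrary smooth placeholders, not Kolmogorov's functions.

def phi(q, p, x):
    # univariate "inner" function, one per pair (q, p)
    return math.sin((q + 1) * x + p)

def Phi(q, y):
    # univariate "outer" function, one per q
    return math.tanh(y + q)

def superposition(xs):
    # assemble a function of len(xs) variables using only
    # univariate functions and addition
    n = len(xs)
    return sum(
        Phi(q, sum(phi(q, p, x) for p, x in enumerate(xs)))
        for q in range(2 * n + 1)
    )

print(superposition([0.3, 0.7]))  # a continuous function of two variables
```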
diff --git a/wiki/wikipedia/3866.txt b/wiki/wikipedia/3866.txt deleted file mode 100644 index 9dab919b22d2b183218890fa765bc9386133afc6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3866.txt +++ /dev/null @@ -1,106 +0,0 @@ -In mathematical logic, the Löwenheim–Skolem theorem is a theorem on the existence and cardinality of models, named after Leopold Löwenheim and Thoralf Skolem. - -The precise formulation is given below. It implies that if a countable first-order theory has an infinite model, then for every infinite cardinal number κ it has a model of size κ, and that no first-order theory with an infinite model can have a unique model up to isomorphism. - -As a consequence, first-order theories are unable to control the cardinality of their infinite models. - -The (downward) Löwenheim–Skolem theorem is one of the two key properties, along with the compactness theorem, that are used in Lindström's theorem to characterize first-order logic. - -In general, the Löwenheim–Skolem theorem does not hold in stronger logics such as second-order logic. - -In its general form, the Löwenheim–Skolem Theorem states that for every signature σ, every infinite σ-structure M and every infinite cardinal number κ ≥ |σ|, there is a σ-structure N of cardinality κ such that - -* if κ < |M| then N is an elementary substructure of M; - -* if κ > |M| then N is an elementary extension of M. - -The theorem is often divided into two parts corresponding to the two cases above. The part of the theorem asserting that a structure has elementary substructures of all smaller infinite cardinalities is known as the downward Löwenheim–Skolem Theorem. The part of the theorem asserting that a structure has elementary extensions of all larger cardinalities is known as the upward Löwenheim–Skolem Theorem. - -Below we elaborate on the general concept of signatures and structures. - -A signature consists of a set of function symbols Sfunc, a set of relation symbols Srel, and a function $\operatorname{ar}: S_{\operatorname{func}}\cup S_{\operatorname{rel}} \rightarrow \mathbb{N}_0$ representing the arity of function and relation symbols. (A nullary function symbol is called a constant symbol.) In the context of first-order logic, a signature is sometimes called a language. It is called countable if the set of function and relation symbols in it is countable, and in general the cardinality of a signature is the cardinality of the set of all the symbols it contains. - -A first-order theory consists of a fixed signature and a fixed set of sentences (formulas with no free variables) in that signature. Theories are often specified by giving a list of axioms that generate the theory, or by giving a structure and taking the theory to consist of the sentences satisfied by the structure. - -Given a signature σ, a σ-structure M is a concrete interpretation of the symbols in σ. It consists of an underlying set (often also denoted by "M") together with an interpretation of the function and relation symbols of σ. An interpretation of a constant symbol of σ in M is simply an element of M. More generally, an interpretation of an n-ary function symbol f is a function from M^n to M. Similarly, an interpretation of a relation symbol R is an n-ary relation on M, i.e. a subset of M^n.
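To make these definitions concrete, here is a small illustrative sketch (written for this text against the definitions above; the names are invented, and the signature chosen is that of groups):

```python
# Signature: symbol -> arity. "e" is a nullary function symbol (a constant).
signature = {"*": 2, "inv": 1, "e": 0}

# A structure for this signature: an underlying set together with an
# interpretation of each function symbol. Here: the integers mod 5.
universe = set(range(5))
interpretation = {
    "*": lambda a, b: (a + b) % 5,   # binary function symbol
    "inv": lambda a: (-a) % 5,       # unary function symbol
    "e": lambda: 0,                  # constant symbol: picks out an element
}

# Sanity check: every declared symbol is interpreted with the right arity.
for symbol, arity in signature.items():
    assert interpretation[symbol].__code__.co_argcount == arity
print(interpretation["*"](3, 4), interpretation["inv"](2), interpretation["e"]())
```

The same signature admits many different structures (for instance, the integers under ordinary addition), which is exactly the freedom the theorem exploits.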
 - -A substructure of a σ-structure M is obtained by taking a subset N of M which is closed under the interpretations of all the function symbols in σ (hence includes the interpretations of all constant symbols in σ), and then restricting the interpretations of the relation symbols to N. An elementary substructure is a very special case of this; in particular an elementary substructure satisfies exactly the same first-order sentences as the original structure (its elementary extension). - -The statement given in the introduction follows immediately by taking M to be an infinite model of the theory. The proof of the upward part of the theorem also shows that a theory with arbitrarily large finite models must have an infinite model; sometimes this is considered to be part of the theorem. - -A theory is called categorical if it has only one model, up to isomorphism. This term was introduced by Veblen, and for some time thereafter mathematicians hoped they could put mathematics on a solid foundation by describing a categorical first-order theory of some version of set theory. The Löwenheim–Skolem theorem dealt a first blow to this hope, as it implies that a first-order theory which has an infinite model cannot be categorical. Later, in 1931, the hope was shattered completely by Gödel's incompleteness theorem. - -Many consequences of the Löwenheim–Skolem theorem seemed counterintuitive to logicians in the early 20th century, as the distinction between first-order and non-first-order properties was not yet understood. One such consequence is the existence of uncountable models of true arithmetic, which satisfy every first-order induction axiom but have non-inductive subsets. - -Let N denote the natural numbers and R the reals. It follows from the theorem that the theory of (N, +, ×, 0, 1) (the theory of true first-order arithmetic) has uncountable models, and that the theory of (R, +, ×, 0, 1) (the theory of real closed fields) has a countable model. There are, of course, axiomatizations characterizing (N, +, ×, 0, 1) and (R, +, ×, 0, 1) up to isomorphism. - -The Löwenheim–Skolem theorem shows that these axiomatizations cannot be first-order. - -For example, in the theory of the real numbers, the completeness of the linear order, which is used to characterize R as a complete ordered field, is a non-first-order property. - -Another consequence that was considered particularly troubling is the existence of a countable model of set theory, which nevertheless must satisfy the sentence saying the real numbers are uncountable. Cantor's theorem states that some sets are uncountable. This counterintuitive situation came to be known as Skolem's paradox; it shows that the notion of countability is not absolute. - -For each first-order $\sigma$-formula $\varphi(y,x_{1}, \ldots, x_{n})$, the axiom of choice implies the existence of a function -$$ -f_{\varphi}: M^n\to M -$$ - -such that, for all $a_{1}, \ldots, a_{n} \in M$, either -$$ -M\models\varphi(f_{\varphi} (a_1, \dots, a_n), a_1, \dots, a_n) -$$ - -or -$$ -M\models\neg\exists y \varphi(y, a_1, \dots, a_n).
-$$ - -Applying the axiom of choice again we get a function from the first-order formulas $\varphi$ to such functions $f_{\varphi} .$ - -The family of functions $f_{\varphi}$ gives rise to a preclosure operator $F $ on the power set of $M $ -$$ -F(A) = \{f_{\varphi}(a_1, \dots, a_n) \in M \mid \varphi \in \sigma ; a_1, \dots, a_n \in A \} -$$ - -for $A \subseteq M .$ - -Iterating $F $ countably many times results in a closure operator $F^{\omega} .$ Taking an arbitrary subset $A \subseteq M$ such that $\left\vert A \right\vert = \kappa$, and having defined $N = F^{\omega}(A) ,$ one can see that also $\left\vert N \right\vert = \kappa .$ Then $N $ is an elementary substructure of $M $ by the Tarski–Vaught test. - -The trick used in this proof is essentially due to Skolem, who introduced function symbols for the Skolem functions $f_{\varphi}$ into the language. One could also define the $f_{\varphi}$ as partial functions such that $f_{\varphi}$ is defined if and only if $M \models \exists y \varphi(y,a_1,\dots,a_n) .$ The only important point is that $F $ is a preclosure operator such that $F(A) $ contains a solution for every formula with parameters in $A $ which has a solution in $M $ and that -$$ -\left\vert F(A) \right\vert \leq \left\vert A \right\vert + \left\vert \sigma \right\vert + \aleph_0 . -$$ - -First, one extends the signature by adding a new constant symbol for every element of M. The complete theory of M for the extended signature σ' is called the elementary diagram of M. In the next step one adds κ many new constant symbols to the signature and adds to the elementary diagram of M the sentences c ≠ c' for any two distinct new constant symbols c and c'. Using the compactness theorem, the resulting theory is easily seen to be consistent. Since its models must have cardinality at least κ, the downward part of this theorem guarantees the existence of a model N which has cardinality exactly κ. It contains an isomorphic copy of M as an elementary substructure. - -Although the (classical) Löwenheim–Skolem theorem is tied very closely to first-order logic, variants hold for other logics. For example, every consistent theory in second-order logic has a model smaller than the first supercompact cardinal (assuming one exists). The minimum size at which a (downward) Löwenheim–Skolem–type theorem applies in a logic is known as the Löwenheim number, and can be used to characterize that logic's strength. Moreover, if we go beyond first-order logic, we must give up one of three things: countable compactness, the downward Löwenheim–Skolem Theorem, or the properties of an abstract logic. - -This account is based mainly on Dawson. To understand the early history of model theory one must distinguish between syntactical consistency (no contradiction can be derived using the deduction rules for first-order logic) and satisfiability (there is a model). Somewhat surprisingly, even before the completeness theorem made the distinction unnecessary, the term consistent was used sometimes in one sense and sometimes in the other. - -The first significant result in what later became model theory was Löwenheim's theorem in Leopold Löwenheim's publication "Über Möglichkeiten im Relativkalkül" (1915): - -For every countable signature σ, every σ-sentence that is satisfiable is satisfiable in a countable model. - -Löwenheim's paper was actually concerned with the more general Peirce-Schröder calculus of relatives (relation algebra with quantifiers). He also used the now antiquated notations of Ernst Schröder. 
For a summary of the paper in English and using modern notations see Brady. - -According to the received historical view, Löwenheim's proof was faulty because it implicitly used Kőnig's lemma without proving it, although the lemma was not yet a published result at the time. In a revisionist account, Badesa considers that Löwenheim's proof was complete. - -Skolem gave a (correct) proof using formulas in what would later be called Skolem normal form and relying on the axiom of choice: - -Every countable theory which is satisfiable in a model M is satisfiable in a countable substructure of M. - -Skolem also proved the following weaker version without the axiom of choice: - -Every countable theory which is satisfiable in a model is also satisfiable in a countable model. - -Skolem later simplified his own proof. Finally, Anatoly Ivanovich Maltsev (Анато́лий Ива́нович Ма́льцев, 1936) proved the Löwenheim–Skolem theorem in its full generality. He cited a note by Skolem, according to which the theorem had been proved by Alfred Tarski in a seminar in 1928. Therefore, the general theorem is sometimes known as the Löwenheim–Skolem–Tarski theorem. But Tarski did not remember his proof, and it remains a mystery how he could do it without the compactness theorem. - -It is somewhat ironic that Skolem's name is connected with the upward direction of the theorem as well as with the downward direction: - -"I follow custom in calling Corollary 6.1.4 the upward Löwenheim-Skolem theorem. But in fact Skolem didn't even believe it, because he didn't believe in the existence of uncountable sets." – Hodges. - -"Skolem [...] rejected the result as meaningless; Tarski [...] very reasonably responded that Skolem's formalist viewpoint ought to reckon the downward Löwenheim-Skolem theorem meaningless just like the upward." – Hodges. - -"Legend has it that Thoralf Skolem, up until the end of his life, was scandalized by the association of his name to a result of this type, which he considered an absurdity, nondenumerable sets being, for him, fictions without real existence." – Poizat. diff --git a/wiki/wikipedia/3867.txt b/wiki/wikipedia/3867.txt deleted file mode 100644 index 0ca51771f794119739688b6d8512779b0213a6c8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3867.txt +++ /dev/null @@ -1,33 +0,0 @@ -Strong connectivity augmentation is a computational problem in the mathematical study of graph algorithms, in which the input is a directed graph and the goal of the problem is to add a small number of edges, or a set of edges with small total weight, so that the added edges make the graph into a strongly connected graph. - -The strong connectivity augmentation problem was formulated by Eswaran and Tarjan. They showed that a weighted version of the problem is NP-complete, but the unweighted problem can be solved in linear time. Subsequent research has considered the approximation ratio and parameterized complexity of the weighted problem. - -In the unweighted strong connectivity augmentation problem, the input is a directed graph and the goal is to add as few edges as possible to it to make the result into a strongly connected graph. The algorithm for the unweighted case by Eswaran and Tarjan considers the condensation of the given directed graph, a directed acyclic graph that has one vertex per strongly connected component of the given graph.
Letting $s$ denote the number of source vertices in the condensation (strongly connected components with at least one outgoing edge but no incoming edges), $t$ denote the number of sink vertices in the condensation (strongly connected components with incoming but no outgoing edges), and $q$ denote the number of isolated vertices in the condensation (strongly connected components with neither incoming nor outgoing edges), they observe that the number of edges to be added is necessarily at least $\max(s+q,t+q)$. This follows because $s+q$ edges need to be added to provide an incoming edge for each source or isolated vertex, and symmetrically at least $t+q$ edges need to be added to provide an outgoing edge for each sink or isolated vertex. Their algorithm for the problem finds a set of exactly $\max(s+q,t+q)$ edges to add to the graph to make it strongly connected. - -Their algorithm uses a depth-first search on the condensation to find a collection of pairs of sources and sinks, with the following properties: - -*The source of each pair can reach the sink of the pair by a path in the given graph. - -*Every source that is not in one of the pairs can reach a sink in one of the pairs. - -*Every sink that is not in one of the pairs can be reached from a source in one of the pairs. - -A minor error in the part of their algorithm that finds the pairs of sources and sinks was later found and corrected. - -Once these pairs have been found, one can obtain a strong connectivity augmentation by adding three sets of edges: - -*The first set of edges connects the pairs and the isolated vertices of the condensation into a single cycle, consisting of one edge per pair or isolated vertex. - -*The edges of the second set each connect one of the remaining sinks to one of the remaining sources (chosen arbitrarily). This links both the source and the sink to the cycle of pairs and isolated vertices at a cost of one edge per source-sink pair. - -*Once the previous two sets of edges have either exhausted all sources or exhausted all sinks, the third set of edges links each remaining source or sink to this cycle by adding one more edge per source or sink. - -The total number of edges in these three sets is $\max(s+q,t+q)$. - -The weighted version of the problem, in which each edge that might be added has a given weight and the goal is to choose a set of added edges of minimum weight that makes the given graph strongly connected, is NP-complete. An approximation algorithm with approximation ratio 2 was provided by Frederickson. A parameterized and weighted version of the problem, in which one must add at most $k$ edges of minimum total weight to make the given graph strongly connected, is fixed-parameter tractable. - -If a square grid is made of rigid rods (the edges of the grid) connected to each other by flexible joints at the vertices of the grid, then the overall structure can bend in many ways rather than remaining square. The grid bracing problem asks how to stabilize such a structure by adding additional cross bracing within some of its squares. This problem can be modeled using graph theory, by making a bipartite graph with a vertex for each row or column of squares in the grid, and an edge between two of these vertices when a square in a given row and column is cross-braced. If the cross-bracing within each square makes that square completely rigid, then this graph is undirected, and represents a rigid structure if and only if it is a connected graph.
However, if squares are only partially braced (for instance by connecting two opposite corners by a string or wire that prevents expansive motion but does not prevent contractive motion), then the graph is directed, and represents a rigid structure if and only if it is a strongly connected graph. - -An associated strong connectivity augmentation problem asks how to add more partial bracing to a grid that already has partial bracing in some of its squares. The existing partial bracing can be represented as a directed graph, - -and the additional partial bracing to be added should form a strong connectivity augmentation of that graph. In order to be able to translate a solution to the strong connectivity augmentation problem back to a solution of the original bracing problem, an extra restriction is required: each added edge must respect the bipartition of the original graph, and only connect row vertices with column vertices rather than attempting to connect rows to rows or columns to columns. This restricted version of the strong connectivity augmentation problem can again be solved in linear time. diff --git a/wiki/wikipedia/3868.txt b/wiki/wikipedia/3868.txt deleted file mode 100644 index d8fe4442c9b9362a4a74aaf6a5f0110196e2b2ac..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3868.txt +++ /dev/null @@ -1,64 +0,0 @@ -In the mathematics of coding theory, the Griesmer bound, named after James Hugo Griesmer, is a bound on the length of linear binary codes of dimension k and minimum distance d. - -There is also a very similar version for non-binary codes. - -For a binary linear code, the Griesmer bound is: -$$ -n\geqslant \sum_{i=0}^{k-1} \left\lceil\frac{d}{2^i}\right\rceil. -$$ - -== Proof== - -Let $N(k,d)$ denote the minimum length of a binary code of dimension k and distance d. Let C be such a code. We want to show that -$$ - N(k,d)\geqslant \sum_{i=0}^{k-1} \left\lceil\frac{d}{2^i}\right\rceil. -$$ - -Let G be a generator matrix of C. We can always suppose that the first row of G is of the form r = (1, ..., 1, 0, ..., 0) with weight d. - -G= \begin{bmatrix} - -1 & \dots & 1 & 0 & \dots & 0 \\ - -\ast & \ast & \ast & & G' & \\ - -\end{bmatrix} - -The matrix $G'$ generates a code $C'$, which is called the residual code of $C.$ $C'$ obviously has dimension $k'=k-1$ and length $n'=N(k,d)-d.$ $C'$ has a distance $d',$ but we don't know it. Let $u\in C'$ be such that $w(u)=d'$. There exists a vector $v\in \mathbb{F}_2^d$ such that the concatenation $(v|u)\in C.$ Then $w(v)+w(u)=w(v|u)\geqslant d.$ On the other hand, also $(v|u)+r\in C,$ since $r\in C$ and $C$ is linear: $w((v|u)+r)\geqslant d.$ But -$$ -w((v|u)+r)=w(((1,\cdots,1)+v)|u)=d-w(v)+w(u), -$$ - -so this becomes $d-w(v)+w(u)\geqslant d$. By summing this with $w(v)+w(u)\geqslant d,$ we obtain $d+2w(u)\geqslant 2d$. But $w(u)=d',$ so we get $d'\geqslant \tfrac{d}{2}.$ This implies -$$ -n'\geqslant N\left (k-1,\tfrac{d}{2} \right ), -$$ - -therefore due to the integrality of $n'$ -$$ -n'\geqslant \left\lceil N\left (k-1,\tfrac{d}{2} \right ) \right\rceil, -$$ - -so that -$$ -N(k,d)\geqslant \left\lceil N\left (k-1,\tfrac{d}{2} \right )\right\rceil +d. -$$ - -By induction over k we will eventually get -$$ - N(k,d)\geqslant \sum_{i=0}^{k-1} \left\lceil\frac{d}{2^i}\right\rceil. 
-$$ - -Note that at any step the dimension decreases by 1 and the distance is halved, and we use the identity -$$ -\left\lceil\frac{\left\lceil a/2^{k-1}\right\rceil}{2}\right\rceil = \left\lceil\frac{a}{2^k}\right\rceil -$$ - -for any integer a and positive integer k. - -For a linear code over $\mathbb{F}_q$, the Griesmer bound becomes: -$$ -n\geqslant \sum_{i=0}^{k-1} \left\lceil\frac{d}{q^i}\right\rceil. -$$ - -The proof is similar to the binary case and so it is omitted. diff --git a/wiki/wikipedia/3869.txt b/wiki/wikipedia/3869.txt deleted file mode 100644 index 74e5799434f75e9d6d1c76e485bd2110e6a2df0b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3869.txt +++ /dev/null @@ -1,24 +0,0 @@ -In probability theory, Le Cam's theorem, named after Lucien Le Cam (1924 - 2000), states the following. - -Suppose: - -* $X_1, X_2, X_3, \ldots $ are independent random variables, each with a Bernoulli distribution (i.e., equal to either 0 or 1), not necessarily identically distributed. - -* $\Pr(X_i = 1) = p_i, \text{ for } i = 1, 2, 3, \ldots.$ - -* $\lambda_n = p_1 + \cdots + p_n.$ - -* $S_n = X_1 + \cdots + X_n.$ (i.e. $S_n$ follows a Poisson binomial distribution) - -Then -$$ -\sum_{k=0}^\infty \left| \Pr(S_n=k) - {\lambda_n^k e^{-\lambda_n} \over k!} \right| < 2 \left( \sum_{i=1}^n p_i^2 \right). -$$ - -In other words, the sum has approximately a Poisson distribution and the above inequality bounds the approximation error in terms of the total variation distance. - -By setting pi = λn/n, we see that this generalizes the usual Poisson limit theorem. - -When $\lambda_n$ is large a better bound is possible: $\sum_{k=0}^\infty \left| \Pr(S_n=k) - {\lambda_n^k e^{-\lambda_n} \over k!} \right| < 2 \left(1 \wedge \frac{1}{\lambda_n}\right) \left( \sum_{i=1}^n p_i^2 \right). $ - -It is also possible to weaken the independence requirement. diff --git a/wiki/wikipedia/387.txt b/wiki/wikipedia/387.txt deleted file mode 100644 index 98231d8847f7850af6c0405c89580a5e034aa06a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/387.txt +++ /dev/null @@ -1,26 +0,0 @@ -In mathematics, the Gross–Koblitz formula, introduced by Benedict Gross and Neal Koblitz, expresses a Gauss sum using a product of values of the p-adic gamma function. It is an analog of the Chowla–Selberg formula for the usual gamma function. It implies the Hasse–Davenport relation and generalizes the Stickelberger theorem. - -Boyarsky gave another proof of the Gross–Koblitz formula (Boyarsky being a pseudonym of Bernard Dwork), and - -Robert gave an elementary proof. - -The Gross–Koblitz formula states that the Gauss sum τ can be given in terms of the p-adic gamma function Γp by -$$ -\tau_q(r) = -\pi^{s_p(r)}\prod_{0\leq i < f}\Gamma_p\!\left(\frac{r^{(i)}}{q-1}\right) -$$ - -where - -* $q = p^f$ is a power of a prime p - -* r is an integer with 0 ≤ r < q–1 - -* $r^{(i)}$ is the integer whose base p expansion is a cyclic permutation of the f digits of r by i positions - -* $s_p(r)$ is the sum of the digits of r in base p - -* $\tau_q(r) = \sum_{a^{q-1}=1}a^{-r}\zeta_\pi^{\text{Tr}(a)}$, where the sum is over roots of 1 in the extension Qp(π) - -* π satisfies $\pi^{p-1} = -p$ - -* $\zeta_\pi$ is the pth root of 1 congruent to $1+\pi$ mod $\pi^2$ diff --git a/wiki/wikipedia/3870.txt b/wiki/wikipedia/3870.txt deleted file mode 100644 index 6c80a866978b2b1233adc8ee8bee8bf02921db43..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3870.txt +++ /dev/null @@ -1,41 +0,0 @@ -In geometry, the barycentric subdivision is a standard way of dividing an arbitrary convex polygon into triangles, a convex polyhedron into tetrahedra, or, in general, a convex polytope into simplices with the same dimension, by connecting the barycenters of their faces in a specific way. - -The name is also used in topology for a similar operation on cell complexes. The result is topologically equivalent to that of the geometric operation, but the parts have arbitrary shape and size. This is an example of a finite subdivision rule. - -Both operations have a number of applications in mathematics and in geometric modeling, especially whenever some function or shape needs to be approximated piecewise, e.g. by a spline. - -The barycentric subdivision (henceforth BCS) of an $n$-dimensional simplex $S$ consists of (n + 1)! $n$-dimensional simplices. Each piece, with vertices $v_0,v_1,\dots,v_n$, can be associated with a permutation $p_0,p_1,\dots,p_n$ of the vertices of $S$, in such a way that each vertex $v_i$ is the barycenter of the points $p_0,p_1,\dots,p_i$. (A short code sketch of this construction is given below.) - -In particular, the BCS of a single point (a 0-dimensional simplex) consists of that point itself. The BCS of a line segment (1-simplex) $S$ consists of two smaller segments, each connecting one endpoint (0-dimensional face) of $S$ to the midpoint of $S$ itself (1-dimensional face). - -The BCS of a triangle $S$ divides it into six triangles; each part has one vertex $v_2$ at the barycenter of $S$, another one $v_1$ at the midpoint of some side, and the last one $v_0$ at one of the original vertices. - -The BCS of a tetrahedron $S$ divides it into 24 tetrahedra; each part has one vertex at the center of $S$, one on some face, one along some edge, and the last one at some vertex of $S$. - -An important feature of BCS is the fact that the maximal diameter of an $n$-dimensional simplex shrinks at least by the factor $\frac n{n+1}$. - -A generalization of barycentric subdivision can also be defined for a cell complex. Informally, such an object can be thought of as an assemblage of one or more chunks of rubber (cells), each shaped like a convex polytope, which are glued to each other by their facets - possibly with much stretching and twisting. - -The topological version of BCS replaces each cell by an assemblage of rubber simplices, likewise glued together by their facets and possibly deformed. The procedure is (1) select for each cell a deformation map that converts it into a geometric convex polytope, preserving its incidence and topological connections; (2) perform the geometric BCS on this polytope; and then (3) map the resulting subdivision back to the original cells. - -The result of barycentric subdivision, when viewed as an abstract simplicial complex, is an example of a flag complex.
It has one vertex for every cell of the original cell complex and one maximal-dimensional cell for every flag (a collection of cells of different dimensions, all related to each other by inclusion) of the original cell complex. - -The barycentric subdivision is chiefly used to replace an arbitrarily complicated convex polytope or topological cell complex by an assemblage of pieces, all of them of bounded complexity (simplices, in fact). A typical application is modeling the shape of a car body by a spline - a piecewise-defined polynomial function. The algebra of such functions becomes much simpler and easier to program if each "piece" is a "topological triangle", i.e. is attached to exactly three other pieces. However, a human user may find it more natural to design the shape by joining patches with more liberal shapes and topologies. Barycentric subdivision is a convenient way to convert that "user-friendly" model into a "computer-friendly" one. - -When approximating a mathematical function or a surface by a spline, the accuracy of the approximation is usually determined by the piece size - the bigger the pieces, the larger the error. Thus it is often necessary to split large pieces into smaller ones, in order to achieve a prescribed accuracy. - -In theory, BCS could be used for that purpose, since it has the property that the longest edge of any piece is smaller than the longest edge of the original polytope by a factor less than $n/(n + 1)$. Therefore, by applying BCS sufficiently many times, the largest edge can be made as small as desired. - -However, in practice BCS is not well-suited for that purpose. For one thing, each application after the first one multiplies the number of simplices by $(n+1)!$. BCS also multiplies the degree of each original vertex by $n!$, and the degree of each edge by $(n-1)!$. Moreover, the BCS will split all simplices, even those that are already small enough. Finally, each BCS stage also makes the simplices not only smaller but "skinnier", i.e. it tends to increase their aspect ratio (the ratio between the longest and shortest edge). For all these reasons, in practice one rarely applies more than one round of BCS, and other subdivision schemes are used instead. - -For simplicial complexes $L\subset K$ one defines the relative barycentric subdivision $K^*$ of $K$ modulo $L$ that consists of those simplexes with vertices $v_0\dots v_kB(S'_1)\dots B(S'_l)$ associated to a sequence $S_0 < \cdots < S_k$ of proper faces of $L$ and barycenters $B(S'_i)$ of simplexes in $K\setminus L$. - -Clearly, $L$ remains a subcomplex of $K^*$. Only the simplexes away from $L$ shrink. - -Sometimes the term "barycentric subdivision" is improperly used for any subdivision of a polytope $P$ into simplices that have one vertex at the centroid of $P$, and the opposite facet on the boundary of $P$. While this property holds for the true barycentric subdivision, it also holds for other subdivisions which are not the BCS. - -For example, if one makes a straight cut from the barycenter of a triangle to each of its three corners, one obtains a subdivision into three triangles. Generalizing this idea, one obtains a schema for subdividing an $n$-dimensional simplex into $n+1$ simplices. However, this subdivision is not the BCS. - -The barycentric division can also be defined for simplicial sets, in a way that is compatible (with respect to the topological realization functor) with the above division of simplices. 
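The combinatorial recipe described earlier (one piece per permutation, with vertices at partial barycenters) is straightforward to implement. The following is a small illustrative sketch written for this text; the function names are invented:

```python
from itertools import permutations

def barycenter(points):
    # centroid of points given as coordinate tuples
    k = len(points)
    return tuple(sum(c) / k for c in zip(*points))

def barycentric_subdivision(simplex):
    # simplex: list of n+1 vertex coordinate tuples.
    # Returns (n+1)! pieces; the piece for a permutation p_0, ..., p_n
    # has vertices v_i = barycenter of {p_0, ..., p_i}.
    return [
        [barycenter(perm[: i + 1]) for i in range(len(perm))]
        for perm in permutations(simplex)
    ]

triangle = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(len(barycentric_subdivision(triangle)))  # 6 = 3! smaller triangles
```

Applying the subdivision repeatedly to each piece illustrates the growth discussed above: every round multiplies the number of pieces by (n + 1)!.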
-
-The term barycentric division is also used in graph theory (the barycentric subdivision of a graph).
diff --git a/wiki/wikipedia/3871.txt b/wiki/wikipedia/3871.txt
deleted file mode 100644
index 35914db5cf53b13cfd1ef877ee6caed07f4abab2..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3871.txt
+++ /dev/null
@@ -1,406 +0,0 @@
-In geometry, Euler's rotation theorem states that, in three-dimensional space, any displacement of a rigid body such that a point on the rigid body remains fixed, is equivalent to a single rotation about some axis that runs through the fixed point. It follows that the composition of two rotations is also a rotation. Therefore the set of rotations has a group structure, known as a rotation group.
-
-The theorem is named after Leonhard Euler, who proved it in 1775 by means of spherical geometry. The axis of rotation is known as an Euler axis, typically represented by a unit vector ê. Its product with the rotation angle is known as an axis-angle vector. The extension of the theorem to kinematics yields the concept of instant axis of rotation, a line of fixed points.
-
-In linear algebra terms, the theorem states that, in 3D space, any two Cartesian coordinate systems with a common origin are related by a rotation about some fixed axis. This also means that the product of two rotation matrices is again a rotation matrix and that for a non-identity rotation matrix one eigenvalue is 1 and the other two are both complex, or both equal to −1. The eigenvector corresponding to this eigenvalue is the axis of rotation connecting the two systems.
-
-Euler states the theorem as follows:
-
-<blockquote>
    - -Theorema. - -Quomodocunque sphaera circa centrum suum conuertatur, semper assignari potest diameter, - -cuius directio in situ translato conueniat cum situ initiali. - -
    - -or (in English): - -
    When a sphere is moved around its centre it is always possible to find a diameter whose direction in the displaced position is the same as in the initial position.
-
-Euler's original proof was made using spherical geometry and therefore whenever he speaks about triangles they must be understood as spherical triangles.
-
-To arrive at a proof, Euler analyses what the situation would look like if the theorem were true. To that end, suppose the yellow line in Figure 1 goes through the center of the sphere and is the axis of rotation we are looking for, and point O is one of the two intersection points of that axis with the sphere. Then he considers an arbitrary great circle that does not contain O (the blue circle), and its image after rotation (the red circle), which is another great circle not containing O. He labels a point on their intersection as point A. (If the circles coincide, then A can be taken as any point on either; otherwise A is one of the two points of intersection.)
-
-Now A is on the initial circle (the blue circle), so its image will be on the transported circle (red). He labels that image as point a. Since A is also on the transported circle (red), it is the image of another point that was on the initial circle (blue) and he labels that preimage as α (see Figure 2). Then he considers the two arcs joining α and a to A. These arcs have the same length because arc αA is mapped onto arc Aa. Also, since O is a fixed point, triangle αOA is mapped onto triangle AOa, so these triangles are isosceles, and arc AO bisects angle ∠αAa.
-
-Let us construct a point that could be invariant using the previous considerations. We start with the blue great circle and its image under the transformation, which is the red great circle as in Figure 1. Let point A be a point of intersection of those circles. If A's image under the transformation is the same point then A is a fixed point of the transformation, and since the center is also a fixed point, the diameter of the sphere containing A is the axis of rotation and the theorem is proved.
-
-Otherwise we label A's image as a and its preimage as α, and connect these two points to A with arcs αA and Aa. These arcs have the same length. Construct the great circle that bisects ∠αAa and locate point O on that great circle so that arcs AO and aO have the same length, and call the region of the sphere containing O and bounded by the blue and red great circles the interior of ∠αAa. (That is, the yellow region in Figure 3.) Then since αA = Aa and O is on the bisector of ∠αAa, we also have αO = aO.
-
-Now let us suppose that O′ is the image of O. Then we know ∠αAO = ∠AaO′ and orientation is preserved, so O′ must be interior to ∠αAa. Now AO is transformed to aO′, so AO = aO′. Since AO is also the same length as aO, ∠AaO = ∠aAO. But ∠aAO = ∠AaO′, so ∠AaO = ∠AaO′ and therefore O′ is the same point as O. In other words, O is a fixed point of the transformation, and since the center is also a fixed point, the diameter of the sphere containing O is the axis of rotation.
-
-Euler also points out that O can be found by intersecting the perpendicular bisector of Aa with the angle bisector of ∠αAa, a construction that might be easier in practice. He also proposed the intersection of two planes:
-
-*the symmetry plane of the angle ∠αAa (which passes through the center C of the sphere), and
-
-*the symmetry plane of the arc Aa (which also passes through C).
-
-Proposition. These two planes intersect in a diameter. This diameter is the one we are looking for.
-
-Proof. Let us call O either of the endpoints (there are two) of this diameter over the sphere surface.
Since αA is mapped on Aa and the triangles have the same angles, it follows that the triangle OαA is transported onto the triangle OAa. Therefore the point O has to remain fixed under the movement. - -Corollaries. This also shows that the rotation of the sphere can be seen as two consecutive reflections about the two planes described above. Points in a mirror plane are invariant under reflection, and hence the points on their intersection (a line: the axis of rotation) are invariant under both the reflections, and hence under the rotation. - -Another simple way to find the rotation axis is by considering the plane on which the points α, A, a lie. The rotation axis is obviously orthogonal to this plane, and passes through the center C of the sphere. - -Given that for a rigid body any movement that leaves an axis invariant is a rotation, this also proves that any arbitrary composition of rotations is equivalent to a single rotation around a new axis. - -A spatial rotation is a linear map in one-to-one correspondence with a 3 × 3 rotation matrix R that transforms a coordinate vector x into X, that is Rx = X. Therefore, another version of Euler's theorem is that for every rotation R, there is a nonzero vector n for which Rn = n; this is exactly the claim that n is an eigenvector of R associated with the eigenvalue 1. Hence it suffices to prove that 1 is an eigenvalue of R; the rotation axis of R will be the line μn, where n is the eigenvector with eigenvalue 1. - -A rotation matrix has the fundamental property that its inverse is its transpose, that is - - - -\mathbf{R}^\mathsf{T}\mathbf{R} = \mathbf{R}\mathbf{R}^\mathsf{T} = \mathbf{I}, - - - -where I is the 3 × 3 identity matrix and superscript T indicates the transposed matrix. - -Compute the determinant of this relation to find that a rotation matrix has determinant ±1. In particular, - -\begin{align} - -1 = \det(\mathbf{I}) &= \det\left(\mathbf{R}^\mathsf{T}\mathbf{R}\right) = \det\left(\mathbf{R}^\mathsf{T}\right)\det(\mathbf{R}) = \det(\mathbf{R})^2 \\ - -\Longrightarrow\qquad \det(\mathbf{R}) &= \pm 1. - -\end{align} - -A rotation matrix with determinant +1 is a proper rotation, and one with a negative determinant −1 is an improper rotation, that is a reflection combined with a proper rotation. - -It will now be shown that a proper rotation matrix R has at least one invariant vector n, i.e., Rn = n. Because this requires that (R − I)n = 0, we see that the vector n must be an eigenvector of the matrix R with eigenvalue λ = 1. Thus, this is equivalent to showing that det(R − I) = 0. - -Use the two relations -$$ - \det(-\mathbf{A}) = (-1)^{3} \det(\mathbf{A}) = - \det(\mathbf{A}) \quad -$$ - -for any 3 × 3 matrix A and -$$ - \det\left(\mathbf{R}^{-1} \right) = 1 \quad -$$ - -(since det(R) = 1) to compute - -\begin{align} - -&\det(\mathbf{R} - \mathbf{I}) = \det\left((\mathbf{R} - \mathbf{I})^\mathsf{T}\right) \\ - -{}={} &\det\left(\mathbf{R}^\mathsf{T} - \mathbf{I}\right) = \det\left(\mathbf{R}^{-1} - \mathbf{R}^{-1}\mathbf{R}\right) \\ - -{}={} &\det\left(\mathbf{R}^{-1}(\mathbf{I} - \mathbf{R})\right) = \det\left(\mathbf{R}^{-1}\right) \det(-(\mathbf{R} - \mathbf{I})) \\ - -{}={} &-\det(\mathbf{R} - \mathbf{I}) \\[3pt] - -\Longrightarrow\ 0 ={} &\det(\mathbf{R} - \mathbf{I}). - -\end{align} - -This shows that λ = 1 is a root (solution) of the characteristic equation, that is, - - - -\det(\mathbf{R} - \lambda \mathbf{I}) = 0\quad \hbox{for}\quad \lambda=1. 
-
-
-In other words, the matrix R − I is singular and has a non-zero kernel, that is, there is at least one non-zero vector, say n, for which
-
-
-
-(\mathbf{R} - \mathbf{I}) \mathbf{n} = \mathbf{0} \quad \Longleftrightarrow \quad \mathbf{R}\mathbf{n} = \mathbf{n}.
-
-
-
-The line μn for real μ is invariant under R, i.e., μn is a rotation axis. This proves Euler's theorem.
-
-Two matrices (representing linear maps) are said to be equivalent if there is a change of basis that makes one equal to the other. A proper orthogonal matrix is always equivalent (in this sense) to either the following matrix or to its vertical reflection:
-
-
-
-\mathbf{R} \sim
-
-\begin{pmatrix}
-
-\cos\phi & -\sin\phi & 0 \\
-
-\sin\phi & \cos\phi & 0 \\
-
-0 & 0 & 1\\
-
-\end{pmatrix}, \qquad 0\le \phi \le 2\pi.
-
-
-
-Thus, any orthogonal matrix is either a rotation or an improper rotation. A general orthogonal matrix has only one real eigenvalue, either +1 or −1. When it is +1 the matrix is a rotation. When −1, the matrix is an improper rotation.
-
-If R has more than one invariant vector then φ = 0 and R = I. Any vector is an invariant vector of I.
-
-In order to prove the previous equation some facts from matrix theory must be recalled.
-
-An m × m matrix A has m orthogonal eigenvectors if and only if A is normal, that is, if $\mathbf{A}^\dagger\mathbf{A} = \mathbf{A}\mathbf{A}^\dagger$. This result is equivalent to stating that normal matrices can be brought to diagonal form by a unitary similarity transformation:
-
-
-
-\mathbf{A}\mathbf{U} = \mathbf{U} \operatorname{diag}(\alpha_1,\ldots,\alpha_m)\quad \Longleftrightarrow\quad
-
-\mathbf{U}^\dagger \mathbf{A}\mathbf{U} = \operatorname{diag}(\alpha_1,\ldots,\alpha_m),
-
-
-
-and U is unitary, that is,
-
-
-
-\mathbf{U}^\dagger = \mathbf{U}^{-1}.
-
-
-
-The eigenvalues $\alpha_1, \ldots, \alpha_m$ are roots of the characteristic equation. If the matrix A happens to be unitary (and note that unitary matrices are normal), then
-
-
-
-\left(\mathbf{U}^\dagger\mathbf{A} \mathbf{U}\right)^\dagger = \operatorname{diag}\left(\alpha^*_1,\ldots,\alpha^*_m\right) =
-
-\mathbf{U}^\dagger\mathbf{A}^{-1} \mathbf{U} = \operatorname{diag}\left(\frac{1}{\alpha_1},\ldots,\frac{1}{\alpha_m}\right)
-
-
-
-and it follows that the eigenvalues of a unitary matrix are on the unit circle in the complex plane:
-
-
-
-\alpha^*_k = \frac{1}{\alpha_k} \quad\Longleftrightarrow\quad \alpha^*_k\alpha_k = \left|\alpha_k\right|^2 = 1,\qquad k=1,\ldots,m.
-
-
-
-Also an orthogonal (real unitary) matrix has eigenvalues on the unit circle in the complex plane. Moreover, since its characteristic equation (an $m$th-order polynomial in λ) has real coefficients, it follows that its roots appear in complex conjugate pairs, that is, if α is a root then so is its conjugate $\alpha^*$. For a 3 × 3 matrix there are 3 roots, thus at least one of them must be purely real (+1 or −1).
-
-Having recalled these general facts from matrix theory, we return to the rotation matrix R. It follows from its realness and orthogonality that we can find a U such that:
-
-
-
-\mathbf{R} \mathbf{U} = \mathbf{U}
-
-\begin{pmatrix}
-
-e^{i\phi} & 0 & 0 \\
-
-0 & e^{-i\phi} & 0 \\
-
-0 & 0 & \pm 1 \\
-
-\end{pmatrix}
-
-
-
-If a matrix U can be found that gives the above form, and there is only one purely real component and it is −1, then we define R to be an improper rotation. Let us only consider the case, then, of matrices R that are proper rotations (the third eigenvalue is just 1). The third column of the 3 × 3 matrix U will then be equal to the invariant vector n.
Writing $\mathbf{u}_1$ and $\mathbf{u}_2$ for the first two columns of U, this equation gives
-
-
-
-\mathbf{R}\mathbf{u}_1 = e^{i\phi} \mathbf{u}_1 \quad\hbox{and}\quad \mathbf{R}\mathbf{u}_2 = e^{-i\phi} \mathbf{u}_2.
-
-
-
-If $\mathbf{u}_1$ has eigenvalue 1, then φ = 0 and $\mathbf{u}_2$ also has eigenvalue 1, which implies that in that case R = I.
-
-Finally, the matrix equation is transformed by means of a unitary matrix,
-
-
-
-\mathbf{R} \mathbf{U}
-
-\begin{pmatrix}
-
-\frac{1}{\sqrt{2}} & \frac{i}{\sqrt{2}} & 0 \\
-
-\frac{1}{\sqrt{2}} & \frac{-i}{\sqrt{2}} & 0 \\
-
-0 & 0 & 1\\
-
-\end{pmatrix}
-
-= \mathbf{U}
-
-\underbrace{
-
-\begin{pmatrix}
-
-\frac{1}{\sqrt{2}} & \frac{i}{\sqrt{2}} & 0 \\
-
-\frac{1}{\sqrt{2}} & \frac{-i}{\sqrt{2}} & 0 \\
-
-0 & 0 & 1\\
-
-\end{pmatrix}
-
-\begin{pmatrix}
-
-\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \\
-
-\frac{-i}{\sqrt{2}} & \frac{i}{\sqrt{2}} & 0 \\
-
-0 & 0 & 1\\
-
-\end{pmatrix}
-
-}_{=\mathbf{I}}
-
-\begin{pmatrix}
-
-e^{i\phi} & 0 & 0 \\
-
-0 & e^{-i\phi} & 0 \\
-
-0 & 0 & 1 \\
-
-\end{pmatrix}
-
-\begin{pmatrix}
-
-\frac{1}{\sqrt{2}} & \frac{i}{\sqrt{2}} & 0 \\
-
-\frac{1}{\sqrt{2}} & \frac{-i}{\sqrt{2}} & 0 \\
-
-0 & 0 & 1\\
-
-\end{pmatrix}
-
-
-
-which gives
-
-
-
-\mathbf{U'}^\dagger \mathbf{R} \mathbf{U'} = \begin{pmatrix}
-
-\cos\phi & -\sin\phi & 0 \\
-
-\sin\phi & \cos\phi & 0 \\
-
-0 & 0 & 1\\
-
-\end{pmatrix}
-
-\quad\text{ with }\quad \mathbf{U'}
-
-= \mathbf{U}
-
-\begin{pmatrix}
-
-\frac{1}{\sqrt{2}} & \frac{i}{\sqrt{2}} & 0 \\
-
-\frac{1}{\sqrt{2}} & \frac{-i}{\sqrt{2}} & 0 \\
-
-0 & 0 & 1\\
-
-\end{pmatrix} .
-
-
-
-The columns of U′ are orthonormal. The third column is still n, the other two columns are perpendicular to n. We can now see how our definition of improper rotation corresponds with the geometric interpretation: an improper rotation is a rotation around an axis (here, the axis corresponding to the third coordinate) and a reflection on a plane perpendicular to that axis. If we only restrict ourselves to matrices with determinant 1, we can thus see that they must be proper rotations. This result implies that any orthogonal matrix R corresponding to a proper rotation is equivalent to a rotation over an angle φ around an axis n.
-
-The trace (sum of diagonal elements) of the real rotation matrix given above is 1 + 2 cos φ. Since a trace is invariant under an orthogonal matrix similarity transformation,
-
-
-
-\mathrm{Tr}\left[\mathbf{A} \mathbf{R} \mathbf{A}^\mathsf{T}\right] =
-
-\mathrm{Tr}\left[ \mathbf{R} \mathbf{A}^\mathsf{T}\mathbf{A}\right] = \mathrm{Tr}[\mathbf{R}]\quad\text{ with }\quad \mathbf{A}^\mathsf{T} = \mathbf{A}^{-1},
-
-
-
-it follows that all matrices that are equivalent to R by such orthogonal matrix transformations have the same trace: the trace is a class function. This matrix transformation is clearly an equivalence relation, that is, all such equivalent matrices form an equivalence class.
-
-In fact, all proper 3 × 3 rotation matrices form a group, usually denoted by SO(3) (the special orthogonal group in 3 dimensions) and all matrices with the same trace form an equivalence class in this group. All elements of such an equivalence class share their rotation angle, but all rotations are around different axes. If n is an eigenvector of R with eigenvalue 1, then $\mathbf{A}\mathbf{n}$ is also an eigenvector of $\mathbf{A}\mathbf{R}\mathbf{A}^\mathsf{T}$, also with eigenvalue 1. Unless A = I, n and $\mathbf{A}\mathbf{n}$ are different.
-
-Suppose we specify an axis of rotation by a unit vector [x, y, z], and suppose we have an infinitely small rotation of angle Δθ about that vector.
Expanding the rotation matrix as a power series in Δθ and keeping only the first-order term, the rotation matrix ΔR is represented as:
-
-
-
-\Delta R =
-
-\begin{bmatrix}
-
-1 & 0 & 0 \\
-
-0 & 1 & 0 \\
-
-0 & 0 & 1
-
-\end{bmatrix} +
-
-\begin{bmatrix}
-
-0 & z & -y \\
-
--z & 0 & x \\
-
-y & -x & 0
-
-\end{bmatrix}\Delta \theta =
-
-\mathbf{I} + \mathbf{A}\Delta \theta.
-
-
-
-A finite rotation through angle θ about this axis may be seen as a succession of small rotations about the same axis. Approximating Δθ as θ/N where N is a large number, a rotation of θ about the axis may be represented as:
-$$
-R = \left(\mathbf{I}+\frac{\mathbf{A}\theta}{N}\right)^N \approx e^{\mathbf{A}\theta}.
-$$
-
-It can be seen that Euler's theorem essentially states that all rotations may be represented in this form. The product Aθ is the "generator" of the particular rotation, the vector (x,y,z) being associated with the matrix A. This shows that the rotation matrix and the axis–angle format are related by the exponential function.
-
-One can derive a simple expression for the generator G. One starts with an arbitrary plane (in Euclidean space) defined by a pair of perpendicular unit vectors a and b. In this plane one can choose an arbitrary vector x with perpendicular y. Solving for y in terms of x and substituting into an expression for a rotation in a plane yields the rotation matrix R, which involves the generator G = ba^T − ab^T:
-
-\begin{align}
-
-\mathbf{x} &= \mathbf{a}\cos\alpha + \mathbf{b}\sin\alpha \\
-
-\mathbf{y} &= -\mathbf{a}\sin\alpha + \mathbf{b}\cos\alpha \\[8pt]
-
-\cos\alpha &= \mathbf{a}^\mathsf{T}\mathbf{x} \\
-
-\sin\alpha &= \mathbf{b}^\mathsf{T}\mathbf{x} \\[8pt]
-
-\mathbf{y} &= -\mathbf{ab}^\mathsf{T}\mathbf{x} + \mathbf{ba}^\mathsf{T}\mathbf{x}
-
-= \left( \mathbf{ba}^\mathsf{T} - \mathbf{ab}^\mathsf{T} \right)\mathbf{x} \\[8pt]
-
-\mathbf{x}' &= \mathbf{x}\cos\beta + \mathbf{y}\sin\beta \\
-
-&= \left( \mathbf{I}\cos\beta + \left( \mathbf{ba}^\mathsf{T} - \mathbf{ab}^\mathsf{T} \right) \sin\beta \right)\mathbf{x} \\[8pt]
-
-\mathbf{R} &= \mathbf{I}\cos\beta + \left( \mathbf{ba}^\mathsf{T} - \mathbf{ab}^\mathsf{T} \right)\sin\beta \\
-
-&= \mathbf{I}\cos\beta + \mathbf{G}\sin\beta \\[8pt]
-
-\mathbf{G} &= \mathbf{ba}^\mathsf{T} - \mathbf{ab}^\mathsf{T}
-
-\end{align}
-
-To include vectors outside the plane in the rotation one needs to modify the above expression for R by including two projection operators that partition the space. This modified rotation matrix can be rewritten as an exponential function.
-
-\begin{align}
-
-\mathbf{P_{ab}} &= -\mathbf{G}^2 \\
-
-\mathbf{R} &= \mathbf{I} - \mathbf{P_{ab}} + \left( \mathbf{I} \cos \beta + \mathbf{G} \sin \beta \right)\mathbf{P_{ab}} = e^{\mathbf{G}\beta }
-
-\end{align}
-
-Analysis is often easier in terms of these generators, rather than the full rotation matrix. Analysis in terms of the generators is known as the Lie algebra of the rotation group.
-
-It follows from Euler's theorem that the relative orientation of any pair of coordinate systems may be specified by a set of three independent numbers. Sometimes a redundant fourth number is added to simplify operations with quaternion algebra. Three of these numbers are the direction cosines that orient the eigenvector. The fourth is the angle about the eigenvector that separates the two sets of coordinates. Such a set of four numbers is called a quaternion.
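-The exponential relation above is easy to test numerically. The following minimal Python sketch (not part of the original article; numpy assumed) builds $R = e^{\mathbf{A}\theta}$ by summing the power series for a generator with the same sign convention as ΔR above, then recovers the angle from Tr(R) = 1 + 2 cos φ and the axis as the eigenvector with eigenvalue 1, as Euler's theorem guarantees:
-
-```python
-import numpy as np
-
-def generator(axis):
-    # Skew-symmetric generator A for a unit axis (x, y, z), using the
-    # same sign convention as the matrix ΔR above.
-    x, y, z = axis
-    return np.array([[0.0,   z,  -y],
-                     [ -z, 0.0,   x],
-                     [  y,  -x, 0.0]])
-
-def exp_series(M, terms=40):
-    # Truncated power series for the matrix exponential e^M.
-    R, term = np.eye(3), np.eye(3)
-    for k in range(1, terms):
-        term = term @ M / k
-        R = R + term
-    return R
-
-theta = np.pi / 3
-R = exp_series(generator((0.0, 0.0, 1.0)) * theta)
-
-# Euler's theorem in matrix form: R has eigenvalue 1; the corresponding
-# eigenvector is the axis, and Tr(R) = 1 + 2 cos(phi) gives the angle.
-w, v = np.linalg.eig(R)
-axis = np.real(v[:, np.argmin(np.abs(w - 1.0))])
-phi = np.arccos((np.trace(R) - 1.0) / 2.0)
-print(np.round(axis, 6), float(np.degrees(phi)))  # (0, 0, ±1), 60.0
-```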
-
-While the quaternion as described above does not involve complex numbers, if quaternions are used to describe two successive rotations, they must be combined using the non-commutative quaternion algebra derived by William Rowan Hamilton through the use of imaginary numbers.
-
-Rotation calculation via quaternions has come to replace the use of direction cosines in aerospace applications through their reduction of the required calculations, and their ability to minimize round-off errors. Also, in computer graphics the ability to perform spherical interpolation between quaternions with relative ease is of value.
-
-In higher dimensions, any rigid motion that preserves a point in dimension 2n or 2n + 1 is a composition of at most n rotations in orthogonal planes of rotation, though these planes need not be uniquely determined, and a rigid motion may fix multiple axes.
-
-A rigid motion in three dimensions that does not necessarily fix a point is a "screw motion". This is because a composition of a rotation with a translation perpendicular to the axis is a rotation about a parallel axis, while composition with a translation parallel to the axis yields a screw motion; see screw axis. This gives rise to screw theory.
diff --git a/wiki/wikipedia/3872.txt b/wiki/wikipedia/3872.txt
deleted file mode 100644
index 520967a797a5456570a0424b7c8b56fa90ccefa4..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3872.txt
+++ /dev/null
@@ -1,94 +0,0 @@
-In mathematics, the Remez inequality, discovered by the Soviet mathematician Evgeny Yakovlevich Remez, gives a bound on the sup norms of certain polynomials, the bound being attained by the Chebyshev polynomials.
-
-Let σ be an arbitrary fixed positive number. Define the class of polynomials $\pi_n(\sigma)$ to be those polynomials p of the $n$th degree for which
-$$
-|p(x)| \le 1
-$$
-
-on some set of measure ≥ 2 contained in the closed interval [−1, 1+σ]. Then the Remez inequality states that
-$$
-\sup_{p \in \pi_n(\sigma)} \left\|p\right\|_\infty = \left\|T_n\right\|_\infty
-$$
-
-where $T_n(x)$ is the Chebyshev polynomial of degree n, and the supremum norm is taken over the interval [−1, 1+σ].
-
-Observe that $T_n$ is increasing on $[1, +\infty)$, hence
-$$
- \|T_n\|_\infty = T_n(1+\sigma).
-$$
-
-The Remez inequality, combined with an estimate on Chebyshev polynomials, implies the following corollary: If J ⊂ R is a finite interval, and E ⊂ J is an arbitrary measurable set, then
-$$
- \max_{x \in J} |p(x)| \leq \left( \frac{4 \textrm{mes } J}{\textrm{mes } E} \right)^n \sup_{x \in E} |p(x)|
-$$
-
-for any polynomial p of degree n.
-
-Inequalities similar to this corollary have been proved for different classes of functions, and are known as Remez-type inequalities. One important example is Nazarov's inequality for exponential sums:
-
-Nazarov's inequality. Let
-$$
- p(x) = \sum_{k = 1}^n a_k e^{\lambda_k x}
-$$
-
-be an exponential sum (with arbitrary $\lambda_k \in \mathbb{C}$), and let J ⊂ R be a finite interval and E ⊂ J an arbitrary measurable set. Then
-$$
- \max_{x \in J} |p(x)| \leq e^{\max_k |\Re \lambda_k| \mathrm{mes} J} \left( \frac{C \textrm{mes} J}{\textrm{mes} E} \right)^{n-1} \sup_{x \in E} |p(x)|~,
-$$
-
-where C > 0 is a numerical constant.
-
-In the special case when the $\lambda_k$ are pure imaginary and integer, and the subset E is itself an interval, the inequality was proved by Pál Turán and is known as Turán's lemma.
-
-This inequality also extends to $L^p(\mathbb{T}),\ 0\leq p\leq2$ in the following way
-$$
- \|p\|_{L^p(\mathbb{T})} \leq e^{A(n - 1) \textrm{mes }(\mathbb{T}\setminus E)}\|p\|_{L^p(E)}
-$$
-
-for some A>0 independent of p, E, and n. When
-$$
-\mathrm{mes} E <1-\frac{\log n}{n}
-$$
-
-a similar inequality holds for p > 2. For p=∞ there is an extension to multidimensional polynomials.
-
-Proof: Applying Nazarov's lemma to $E=E_\lambda=\{x:\ |p(x)|\leq\lambda\},\ \lambda>0$ leads to
-$$
-\max_{x \in J} |p(x)| \leq e^{\max_k |\Re \lambda_k| \mathrm{mes} J} \left( \frac{C \textrm{mes} J}{\textrm{mes} E_\lambda} \right)^{n-1} \sup_{x \in E_\lambda} |p(x)| \leq e^{\max_k |\Re \lambda_k| \mathrm{mes} J} \left( \frac{C \textrm{mes} J}{\textrm{mes} E_\lambda} \right)^{n-1} \lambda
-$$
-
-thus
-$$
-\textrm{mes} E_\lambda\leq C \textrm{mes} J\left(\frac{\lambda e^{\max_k |\Re \lambda_k| \mathrm{mes} J}}{\max_{x \in J} |p(x)|} \right )^{\frac{1}{n-1}}
-$$
-
-Now fix a set $E$ and choose $\lambda$ such that $\textrm{mes} E_\lambda\leq\tfrac{1}{2}\textrm{mes} E$, that is
-$$
-\lambda =\left(\frac{\textrm{mes} E}{2C \mathrm{mes} J}\right)^{n-1}e^{-\max_k |\Re \lambda_k| \mathrm{mes} J}\max_{x \in J} |p(x)|
-$$
-
-Note that this implies:
-
-# $ \textrm{mes}E\setminus E_{\lambda}\ge \tfrac{1}{2} \textrm{mes}E .$
-
-# $ \forall x \in E \setminus E_{\lambda} : |p(x)| > \lambda .$
-
-Now
-
-\begin{align}
-
-\int_{x\in E}|p(x)|^p\mbox{d}x &\geq \int_{x\in E \setminus E_\lambda}|p(x)|^p\mbox{d}x \\[6pt]
-
-&\geq \lambda^p\frac{1}{2}\textrm{mes} E \\[6pt]
-
-&= \frac{1}{2}\textrm{mes} E \left(\frac{\textrm{mes} E}{2C \mathrm{mes} J}\right)^{p(n-1)}e^{-p\max_k |\Re \lambda_k| \mathrm{mes} J}\max_{x \in J} |p(x)|^p \\[6pt]
-
-&\geq \frac{1}{2} \frac{\textrm{mes} E}{\textrm{mes} J}\left(\frac{\textrm{mes} E}{2C \mathrm{mes} J}\right)^{p(n-1)}e^{-p\max_k |\Re \lambda_k| \mathrm{mes} J}\int_{x \in J} |p(x)|^p\mbox{d}x,
-
-\end{align}
-
-which completes the proof.
-
-One of the corollaries of the Remez inequality is the Pólya inequality, which was proved by George Pólya, and states that the Lebesgue measure of a sub-level set of a polynomial p of degree n is bounded in terms of the leading coefficient LC(p) as follows:
-$$
- \textrm{mes} \left\{ x \in \R : \left|p(x)\right| \leq a \right\} \leq 4 \left(\frac{a}{2 \mathrm{LC}(p)}\right)^{1/n} , \quad a > 0~.
-$$
diff --git a/wiki/wikipedia/3873.txt b/wiki/wikipedia/3873.txt
deleted file mode 100644
index 7ae4a3aca2d004fcdb8e8a0163d42c6333c4a2bb..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3873.txt
+++ /dev/null
@@ -1,42 +0,0 @@
-Hooper's paradox is a falsidical paradox based on an optical illusion. A geometric shape with an area of 32 units is dissected into four parts, which are then reassembled into a rectangle with an area of only 30 units.
-
-Upon close inspection one can notice that the triangles of the dissected shape are not identical to the triangles in the rectangle. The length of the shorter side at the right angle measures 2 units in the original shape but only 1.8 units in the rectangle. This means that the triangles of the original shape overlap in the rectangle. The overlapping area is a parallelogram, the diagonals and sides of which can be computed via the Pythagorean theorem.
-$$
- d_1=\sqrt{2^2+1^2}=\sqrt{5}
-$$
-$$
- d_2=\sqrt{3^2+10^2}=\sqrt{109}
-$$
-$$
- s_1=\sqrt{2^2+6^2}=\sqrt{40}
-$$
-$$
- s_2=\sqrt{1^2+4^2}=\sqrt{17}
-$$
-
-The area of this parallelogram can be determined using Heron's formula for triangles.
This yields
-$$
- s=\frac{d_1+s_1+s_2}{2}=\frac{\sqrt{5}+\sqrt{17}+\sqrt{40}}{2}
-$$
-
-for the semiperimeter of the triangle (half of the parallelogram), and with it the area of the parallelogram
-
-\begin{align}
-
-F&=2\cdot \sqrt{s\cdot (s-s_1) \cdot (s-s_2) \cdot (s-d_1)} \\[5pt]
-
-&=2\cdot\frac{1}{4}\cdot\sqrt{(\sqrt{5}+\sqrt{17}+\sqrt{40})\cdot(-\sqrt{5}+\sqrt{17}+\sqrt{40})\cdot(\sqrt{5}-\sqrt{17}+\sqrt{40})\cdot(\sqrt{5}+\sqrt{17}-\sqrt{40})} \\[5pt]
-
-&=2\cdot\frac{1}{4}\cdot\sqrt{16} \\[5pt]
-
-&=2
-
-\end{align}
-
-So the overlapping area of the two triangles accounts exactly for the vanished area of 2 units.
-
-William Hooper published the paradox in 1774 in his book Rational Recreations, in which he named it The geometric money. The 1774 edition of his book still contained a false drawing, which was corrected in the 1782 edition. However, Hooper was not the first to publish this geometric fallacy: his book was largely an adaptation of Edmé-Gilles Guyot's Nouvelles récréations physiques et mathématiques, which had been published in France in 1769. The description in this book contains the same false drawing, but it was corrected in a later edition as well.
diff --git a/wiki/wikipedia/3874.txt b/wiki/wikipedia/3874.txt
deleted file mode 100644
index 226fdc63f27460ff1f4070dba90e83ebcb6e4689..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3874.txt
+++ /dev/null
@@ -1,85 +0,0 @@
-In mathematics, a covering system (also called a complete residue system) is a collection
-$$
-\{a_1(\mathrm{mod}\ {n_1}),\ \ldots,\ a_k(\mathrm{mod}\ {n_k})\}
-$$
-
-of finitely many residue classes $ a_i(\mathrm{mod}\ {n_i}) = \{ a_i + n_ix:\ x \in \Z \} $
-
-whose union contains every integer.
-
-The notion of covering system was introduced by Paul Erdős in the early 1930s.
-
-The following are examples of covering systems:
-$$
-\{0(\mathrm{mod}\ {3}),\ 1(\mathrm{mod}\ {3}),\ 2(\mathrm{mod}\ {3})\},
-$$
-
-and
-$$
-\{1(\mathrm{mod}\ {2}),\ 2(\mathrm{mod}\ {4}),\ 4(\mathrm{mod}\ {8}),\ 0(\mathrm{mod}\ {8})\},
-$$
-
-and
-
-\{0(\mathrm{mod}\ {2}),\ 0(\mathrm{mod}\ {3}),\ 1(\mathrm{mod}\ {4}),
-
-\ 5(\mathrm{mod}\ {6}),\ 7(\mathrm{mod}\ {12})
-
-\}.
-
-A covering system is called disjoint (or exact) if no two members overlap.
-
-A covering system is called distinct (or incongruent) if all the moduli $n_i$ are different (and bigger than 1).
-
-A covering system is called irredundant (or minimal) if all the residue classes are required to cover the integers.
-
-The first two examples are disjoint.
-
-The third example is distinct.
-
-A system (i.e., an unordered multi-set)
-$$
-\{a_1(\mathrm{mod}\ {n_1}),\ \ldots,\ a_k(\mathrm{mod}\ {n_k})\}
-$$
-
-of finitely many residue classes is called an $m$-cover if it covers every integer at least
-$$
-m
-$$ times, and an exact $m$-cover if it covers each integer exactly $m$ times. It is known that for each
-$$
-m=2,3,\ldots
-$$ there are exact $m$-covers which cannot be written as a union of two covers. For example,
-
-\{1(\mathrm{mod}\ {2});\ 0(\mathrm{mod}\ {3});\ 2(\mathrm{mod}\ {6});\ 0,4,6,8(\mathrm{mod}\ {10});
-
-1,2,4,7,10,13(\mathrm{mod}\ {15});\ 5,11,12,22,23,29(\mathrm{mod}\ {30})
-
-\}
-
-is an exact 2-cover which is not a union of two covers.
-
-The first example above is an exact 1-cover (also called an exact cover). Another exact cover in common use is that of odd and even numbers, or
-$$
-\{0(\mathrm{mod}\ {2}),\ 1(\mathrm{mod}\ {2})\}.
-$$
-
-This is just one case of the following fact: For every positive integer modulus $m$, there is an exact cover:
-$$
-\{0(\mathrm{mod}\ {m}),\ 1(\mathrm{mod}\ {m}),\ \ldots,\ {m-1}(\mathrm{mod}\ {m})\}.
-$$
-
-The Mirsky–Newman theorem, a special case of the Herzog–Schönheim conjecture, states that there is no disjoint distinct covering system. This result was conjectured in 1950 by Paul Erdős and proved soon thereafter by Leon Mirsky and Donald J. Newman. However, Mirsky and Newman never published their proof. The same proof was also found independently by Harold Davenport and Richard Rado.
-
-Covering systems can be used to find primefree sequences, sequences of integers satisfying the same recurrence relation as the Fibonacci numbers, such that consecutive numbers in the sequence are relatively prime but all numbers in the sequence are composite numbers. For instance, a sequence of this type found by Herbert Wilf has initial terms
-
-$a_1$ = 20615674205555510, $a_2$ = 3794765361567513.
-
-In this sequence, the positions at which the numbers in the sequence are divisible by a prime p form an arithmetic progression; for instance, the even numbers in the sequence are the numbers $a_i$ where $i$ is congruent to 1 mod 3. The progressions divisible by different primes form a covering system, showing that every number in the sequence is divisible by at least one prime.
-
-Paul Erdős asked whether for any arbitrarily large N there exists an incongruent covering system the minimum of whose moduli is at least N. It is easy to construct examples where the minimum of the moduli in such a system is 2 or 3 (Erdős gave an example where the moduli are in the set of the divisors of 120; a suitable cover is 0(3), 0(4), 0(5), 1(6), 1(8), 2(10), 11(12), 1(15), 14(20), 5(24), 8(30), 6(40), 58(60), 26(120)). D. Swift gave an example where the minimum of the moduli is 4 (and the moduli are in the set of the divisors of 2880). S. L. G. Choi proved that it is possible to give an example for N = 20, and Pace P. Nielsen demonstrated the existence of an example with N = 40, consisting of more than $10^{50}$ congruences.
-
-Erdős's question was resolved in the negative by Bob Hough. Hough used the Lovász local lemma to show that there is some maximum $N < 10^{16}$ which can be the minimum modulus on a covering system.
-
-There is a famous unsolved conjecture from Erdős and Selfridge: an incongruent covering system (with the minimum modulus greater than 1) whose moduli are odd does not exist. It is known that if such a system exists with square-free moduli, the overall modulus must have at least 22 prime factors.
diff --git a/wiki/wikipedia/3875.txt b/wiki/wikipedia/3875.txt
deleted file mode 100644
index 520967a797a5456570a0424b7c8b56fa90ccefa4..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3875.txt
+++ /dev/null
@@ -1,65 +0,0 @@
-Fermat's Last Tango is a 2000 off-Broadway musical about the proof of Fermat's Last Theorem, written by husband and wife Joshua Rosenblum (music, lyrics) and Joanne Sydney Lessner (book, lyrics). The musical presents a fictionalized version of the real-life story of Andrew Wiles, and has been praised for the accuracy of the mathematical content. A video of the original production has been shown at several mathematical conferences and similar occasions.
-
-The plot is based on the story of the proof of Fermat's Last Theorem by Andrew Wiles, whose name is changed to "Daniel Keane" in the musical. After seven years of isolation in his attic, Keane has found a proof.
He announces it to the press and promises his wife Anna that he will return to normal life. In his study, Keane is surprised by none other than Fermat himself, who introduces him to the "Aftermath", where he meets the famous mathematicians Euclid, Pythagoras, Newton, and Gauss, and is informed that his proof contains a "big fat hole". Keane returns to his attic to try to fix his proof, to the disappointment of his "math widow" wife. He is taunted by Fermat about not succeeding, but the other mathematicians from the Aftermath, after noticing that they can't keep up with the mathematics of the past century, decide to grant admission to Keane anyway. As the latter finally gives up and declares his attempts a failure, Anna suggests that "within your failure lie the seeds of your success", and this quickly leads to Keane realising how to close the gap in the argument, and the musical ends with another press conference.
-
-Rosenblum and Lessner started working on Fermat's Last Tango in December 1996, after Rosenblum had read a review of Amir Aczel's book Fermat's Last Theorem. Originally planned as an opera, it turned into a musical during the writing process, but operatic elements remained. The original working title, Proof, was changed because it was shared by the successful 2000 play Proof. While written in a whimsical tone and using nerdy jokes, the lyrics contain sophisticated mathematical content and mention the Shimura-Taniyama conjecture, and in the words of mathematician Arthur Jaffe, "the characters think about mathematics just the way a real mathematician would".
-
-Almost the entire text is performed in song, with the exception of the prologue. The music contains elements of operetta, blues, pop, and tango.
-
-The original production by the York Theatre ran from November 21 to December 31, 2000 at the Theater at St. Peter's Lutheran Church, directed by Mel Marvin, with sets designed by James Morgan.
-
-Cast:
-
-*Gilles Chiasson as Carl Friedrich Gauss and reporter
-
-*Edwardyne Cowan as Anna Keane (mezzo-soprano)
-
-*Mitchell Kantor as Pythagoras and reporter (bass)
-
-*Jonathan Rabb as Pierre de Fermat (tenor)
-
-*Chris Thompson as Daniel Keane (baritone)
-
-*Christianne Tisdale as Euclid and reporter (soprano)
-
-*Carrie Wilshusen as Sir Isaac Newton and reporter (mezzo-soprano)
-
-The musical was translated into Portuguese by César Viana and was played in Portuguese university towns in 2003 and at the Teatro da Trindade in 2004. Students at Madison East High School performed an abridged version in 2005 and 2006, including at a statewide meeting of the Mathematical Association of America.
-
-Reviews for Fermat's Last Tango during its theatrical run were mixed. Wilburn Hampton's review in the New York Times, while noticing the catchy tunes and lyrics, criticised that Daniel Keane does not "become a real character". Elyse Sommer's review in CurtainUp was more positive, praising both the writing and the performances of Rabb and Thompson.
-
-The mathematical reception has been more generally positive, with audiences at screenings of the film version reacting "mildly amused to enthusiastic." Mathematician Robert Osserman, while acknowledging the musical as unique to the point of making comparisons difficult, found it fun and moving and praised the actors and the music.
He especially pointed out the mathematical accuracy, but mildly complained about stereotyping of mathematicians and the differences between the true story of Andrew Wiles and the fictional story of Daniel Keane: Unlike Keane, Wiles did not withdraw to his attic for seven years and did not solve the complete Shimura-Taniyama conjecture. Richard Taylor's role in the proof is also omitted in the fictionalized version.
-
-Michele Emmer's review in the Mathematical Intelligencer was positive, stating "the gamble of trying to produce an entertaining and mathematically correct musical turned out a success."
-
-In their book Math Goes to the Movies, mathematicians Burkard Polster and Marty Ross were enthusiastic about Fermat's Last Tango, calling it "terrific fun" and a "must-see".
-
-In her book Science on Stage From Doctor Faustus to Copenhagen, literary scholar Kirsten Shepherd-Barr noted the musical's "successful integration of a surprising amount of 'real' mathematics with a charming and witty score."
-
-On the initiative of Clay Mathematics Institute president Arthur Jaffe, a high-quality live performance video was made, directed by David Stern. It was first shown to an audience of four hundred people in July 2001 in Berkeley, and later sold at cost by the Clay Mathematics Institute in both VHS and DVD editions. A pamphlet about the mathematics and the mathematicians as well as the actors in the musical was included. The film was shown at various mathematical conferences.
-
-A CD version was distributed by Original Cast Records.
diff --git a/wiki/wikipedia/3876.txt b/wiki/wikipedia/3876.txt
deleted file mode 100644
index 01a4c39758b3c718fcbf8d39bb3f0b578c60fe4a..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3876.txt
+++ /dev/null
@@ -1,57 +0,0 @@
-Kaby Lake is Intel's codename for its seventh generation Core microprocessor family announced on August 30, 2016. Its CPU core is architecturally identical to Skylake's, resulting in identical IPC (Instructions Per Clock). It adds native HDCP 2.2 support, along with fixed function decode of H.264 (AVC), HEVC Main and Main10/10-bit, and VP9 10-bit and 8-bit video. Due to security concerns, Windows 11 is not officially supported on any of the 7th generation Kaby Lake processors (except Core i7-7820HQ on select devices), although Microsoft has stated that it will test those processors for compatibility. After Microsoft restricted Windows Update on the platform for older operating systems, an enthusiast-created modification was released that disabled the check and allowed Windows 8.1 and earlier to continue to be updated.
-
-Desktop processors:
-
-* High-power (K/X):
-
-** For dual-core: 60 W
-
-** For quad-core: 91 W (LGA 1151), 112 W (LGA 2066)
-
-* Medium-power:
-
-** For dual-core: 51...54 W
-
-** For quad-core: 65 W
-
-* Low-power (T): 35 W
-
-Mobile processors:
-
-* High-power (H): 45 W with configurable TDP-down to 35 W
-
-* Medium-power (U): 15...28 W with configurable TDP-down to 7.5 W
-
-* Low-power (Y): 5...7 W with configurable TDP-down to 3.5 W
-
-Features common to desktop Kaby Lake CPUs:
-
-* LGA 1151 socket (Except the Core i7 7740X and Core i5 7640X, which use the LGA 2066 socket.)
-
-* DMI 3.0 and PCIe 3.0 interfaces
-
-* Dual channel memory support in the following configurations: DDR3L-1600 1.35 V (32 GB maximum) or DDR4-2400 1.2 V (64 GB maximum)
-
-** The Core i7 7740X and Core i5 7640X support DDR4-2666 (64 GB maximum), but do not support DDR3L memory.
-
-* A total of 16 PCIe lanes
-
-* The Core-branded processors support the AVX2 instruction set.
The Celeron and Pentium-branded ones support only SSE4.1/4.2.
-
-* 350 MHz base graphics clock rate
-
-** The Core i7 7740X and Core i5 7640X do not have an integrated GPU.
-
-* No L4 cache (eDRAM)
-
-* A release date of January 3, 2017 (KBL-S) and June 2017 (KBL-X)
-
-In late 2016, it was reported that Intel had been working on a processor family codenamed "Kaby Lake R" ("R" for "Refresh"). On August 21, 2017, the eighth generation mobile CPUs were announced. The first products released were four "Kaby Lake R" processors with a 15 W TDP. This marketing is distinct from previous generational changes of the Core product line, where a new generation coincided with a new microarchitecture. Intel has stated that the 8th generation would be based on multiple microarchitectures, including Kaby Lake R, Coffee Lake, and Cannon Lake.
-
-Maximum number of PCIe lanes: 8. Single-package processors with a discrete graphics chip, which is connected to the main CPU core by a PCI Express link through an embedded multi-die interconnect bridge (EMIB). Release date: Q1 2018.
-
-On August 28, 2018, Intel announced a refreshed lineup of ultra low power mobile Kaby Lake CPUs under the moniker Amber Lake.
-
-On August 21, 2019, Intel announced their 10th generation Amber Lake ultra low power CPUs.
diff --git a/wiki/wikipedia/3877.txt b/wiki/wikipedia/3877.txt
deleted file mode 100644
index 7ae4a3aca2d004fcdb8e8a0163d42c6333c4a2bb..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3877.txt
+++ /dev/null
@@ -1,195 +0,0 @@
-In mathematics, and more specifically in algebraic topology and polyhedral combinatorics, the Euler characteristic (or Euler number, or Euler-Poincaré characteristic) is a topological invariant, a number that describes a topological space's shape or structure regardless of the way it is bent. It is commonly denoted by $ \chi $ (Greek lower-case letter chi).
-
-The Euler characteristic was originally defined for polyhedra and used to prove various theorems about them, including the classification of the Platonic solids. It was stated for Platonic solids in 1537 in an unpublished manuscript by Francesco Maurolico. Leonhard Euler, for whom the concept is named, introduced it for convex polyhedra more generally but failed to rigorously prove that it is an invariant. In modern mathematics, the Euler characteristic arises from homology and, more abstractly, homological algebra.
-
-The Euler characteristic $\chi$ was classically defined for the surfaces of polyhedra, according to the formula
-$$
-\chi=V-E+F
-$$
-
-where V, E, and F are respectively the numbers of vertices (corners), edges and faces in the given polyhedron. Any convex polyhedron's surface has Euler characteristic
-$$
-V - E + F = 2.
-$$
-
-This equation, stated by Leonhard Euler in 1758, is known as Euler's polyhedron formula. It corresponds to the Euler characteristic of the sphere (i.e. χ = 2), and applies identically to spherical polyhedra. An illustration of the formula on all Platonic polyhedra is given in the sketch below.
-
-The surfaces of nonconvex polyhedra, by contrast, can have various Euler characteristics.
-
-For regular polyhedra, Arthur Cayley derived a modified form of Euler's formula using the density $D$, vertex figure density $d_v$, and face density $d_f$:
-$$
-d_v V - E + d_f F = 2 D.
-$$
-
-This version holds both for convex polyhedra (where the densities are all 1) and the non-convex Kepler-Poinsot polyhedra.
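-Both formulas are easy to check by machine. The following minimal Python sketch (the counts and densities are the standard published values; treat them as assumptions if unfamiliar) verifies V − E + F = 2 for the five Platonic solids and Cayley's formula for the four Kepler-Poinsot polyhedra:
-
-```python
-# Classical Euler formula V - E + F = 2 for the five Platonic solids.
-platonic = {
-    "tetrahedron":  (4, 6, 4),
-    "cube":         (8, 12, 6),
-    "octahedron":   (6, 12, 8),
-    "dodecahedron": (20, 30, 12),
-    "icosahedron":  (12, 30, 20),
-}
-for name, (V, E, F) in platonic.items():
-    assert V - E + F == 2, name
-
-# Cayley's formula d_v*V - E + d_f*F = 2*D for the Kepler-Poinsot
-# polyhedra, given as (V, E, F, d_v, d_f, D).
-kepler_poinsot = {
-    "small stellated dodecahedron": (12, 30, 12, 1, 2, 3),
-    "great dodecahedron":           (12, 30, 12, 2, 1, 3),
-    "great stellated dodecahedron": (20, 30, 12, 1, 2, 7),
-    "great icosahedron":            (12, 30, 20, 2, 1, 7),
-}
-for name, (V, E, F, dv, df, D) in kepler_poinsot.items():
-    assert dv * V - E + df * F == 2 * D, name
-
-print("all checks pass")
-```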
- -Projective polyhedra all have Euler characteristic 1, like the real projective plane, while the surfaces of toroidal polyhedra all have Euler characteristic 0, like the torus. - -The Euler characteristic can be defined for connected plane graphs by the same $V - E + F$ formula as for polyhedral surfaces, where F is the number of faces in the graph, including the exterior face. - -The Euler characteristic of any plane connected graph G is 2. This is easily proved by induction on the number of faces determined by G, starting with a tree as the base case. For trees, $ E = V-1 $ and $F = 1$. If G has C components (disconnected graphs), the same argument by induction on F shows that $V - E + F - C = 1$. One of the few graph theory papers of Cauchy also proves this result. - -Via stereographic projection the plane maps to the 2-sphere, such that a connected graph maps to a polygonal decomposition of the sphere, which has Euler characteristic 2. This viewpoint is implicit in Cauchy's proof of Euler's formula given below. - -There are many proofs of Euler's formula. One was given by Cauchy in 1811, as follows. It applies to any convex polyhedron, and more generally to any polyhedron whose boundary is topologically equivalent to a sphere and whose faces are topologically equivalent to disks. - -Remove one face of the polyhedral surface. By pulling the edges of the missing face away from each other, deform all the rest into a planar graph of points and curves, in such a way that the perimeter of the missing face is placed externally, surrounding the graph obtained, as illustrated by the first of the three graphs for the special case of the cube. (The assumption that the polyhedral surface is homeomorphic to the sphere at the beginning is what makes this possible.) After this deformation, the regular faces are generally not regular anymore. The number of vertices and edges has remained the same, but the number of faces has been reduced by 1. Therefore, proving Euler's formula for the polyhedron reduces to proving V - E + F =1 for this deformed, planar object. - -If there is a face with more than three sides, draw a diagonal—that is, a curve through the face connecting two vertices that are not yet connected. This adds one edge and one face and does not change the number of vertices, so it does not change the quantity V - E + F. (The assumption that all faces are disks is needed here, to show via the Jordan curve theorem that this operation increases the number of faces by one.) Continue adding edges in this manner until all of the faces are triangular. - -Apply repeatedly either of the following two transformations, maintaining the invariant that the exterior boundary is always a simple cycle: - -#Remove a triangle with only one edge adjacent to the exterior, as illustrated by the second graph. This decreases the number of edges and faces by one each and does not change the number of vertices, so it preserves V - E + F. - -#Remove a triangle with two edges shared by the exterior of the network, as illustrated by the third graph. Each triangle removal removes a vertex, two edges and one face, so it preserves V - E + F. - -These transformations eventually reduce the planar graph to a single triangle. (Without the simple-cycle invariant, removing a triangle might disconnect the remaining triangles, invalidating the rest of the argument. A valid removal order is an elementary example of a shelling.) - -At this point the lone triangle has V = 3, E = 3, and F = 1, so that V - E + F = 1. 
Since each of the two above transformation steps preserved this quantity, we have shown V - E + F = 1 for the deformed, planar object, thus demonstrating V - E + F = 2 for the polyhedron. This proves the theorem.
-
-For additional proofs, see Twenty Proofs of Euler's Formula by David Eppstein. Multiple proofs, including their flaws and limitations, are used as examples in Proofs and Refutations by Imre Lakatos.
-
-The polyhedral surfaces discussed above are, in modern language, two-dimensional finite CW-complexes. (When only triangular faces are used, they are two-dimensional finite simplicial complexes.) In general, for any finite CW-complex, the Euler characteristic can be defined as the alternating sum
-$$
-\chi = k_0 - k_1 + k_2 - k_3 + \cdots,
-$$
-
-where $k_n$ denotes the number of cells of dimension n in the complex.
-
-Similarly, for a simplicial complex, the Euler characteristic equals the alternating sum
-$$
-\chi = k_0 - k_1 + k_2 - k_3 + \cdots,
-$$
-
-where $k_n$ denotes the number of n-simplexes in the complex.
-
-More generally still, for any topological space, we can define the $n$th Betti number $b_n$ as the rank of the $n$th singular homology group. The Euler characteristic can then be defined as the alternating sum
-$$
-\chi = b_0 - b_1 + b_2 - b_3 + \cdots.
-$$
-
-This quantity is well-defined if the Betti numbers are all finite and if they are zero beyond a certain index $n_0$. For simplicial complexes, this is not the same definition as in the previous paragraph but a homology computation shows that the two definitions will give the same value for $\chi$.
-
-The Euler characteristic behaves well with respect to many basic operations on topological spaces, as follows.
-
-Homology is a topological invariant, and moreover a homotopy invariant: Two topological spaces that are homotopy equivalent have isomorphic homology groups. It follows that the Euler characteristic is also a homotopy invariant.
-
-For example, any contractible space (that is, one homotopy equivalent to a point) has trivial homology, meaning that the 0th Betti number is 1 and the others 0. Therefore, its Euler characteristic is 1. This case includes Euclidean space $\mathbb{R}^n$ of any dimension, as well as the solid unit ball in any Euclidean space - the one-dimensional interval, the two-dimensional disk, the three-dimensional ball, etc.
-
-For another example, any convex polyhedron is homeomorphic to the three-dimensional ball, so its surface is homeomorphic (hence homotopy equivalent) to the two-dimensional sphere, which has Euler characteristic 2. This explains why convex polyhedra have Euler characteristic 2.
-
-If M and N are any two topological spaces, then the Euler characteristic of their disjoint union is the sum of their Euler characteristics, since homology is additive under disjoint union:
-$$
-\chi(M \sqcup N) = \chi(M) + \chi(N).
-$$
-
-More generally, if M and N are subspaces of a larger space X, then so are their union and intersection. In some cases, the Euler characteristic obeys a version of the inclusion–exclusion principle:
-$$
-\chi(M \cup N) = \chi(M) + \chi(N) - \chi(M \cap N).
-$$
-
-This is true in the following cases:
-
-*if M and N are an excisive couple. In particular, if the interiors of M and N inside the union still cover the union.
-
-*if X is a locally compact space, and one uses Euler characteristics with compact supports, no assumptions on M or N are needed.
-
-*if X is a stratified space all of whose strata are even-dimensional, the inclusion–exclusion principle holds if M and N are unions of strata. This applies in particular if M and N are subvarieties of a complex algebraic variety.
-
-In general, the inclusion–exclusion principle is false. A counterexample is given by taking X to be the real line, M a subset consisting of one point and N the complement of M.
-
-For two connected closed n-manifolds $M, N$ one can obtain a new connected manifold $M \# N$
-
-via the connected sum operation.
-
-Their Euler characteristics are related by the formula
-$$
- \chi(M \# N) = \chi(M) + \chi(N) - \chi(S^n).
-$$
-
-Also, the Euler characteristic of any product space M × N is
-$$
-\chi(M \times N) = \chi(M) \cdot \chi(N).
-$$
-
-These addition and multiplication properties are also enjoyed by cardinality of sets. In this way, the Euler characteristic can be viewed as a generalisation of cardinality.
-
-Similarly, for a k-sheeted covering space $\tilde{M} \to M,$ one has
-$$
-\chi(\tilde{M}) = k \cdot \chi(M).
-$$
-
-More generally, for a ramified covering space, the Euler characteristic of the cover can be computed from the above, with a correction factor for the ramification points, which yields the Riemann–Hurwitz formula.
-
-The product property holds much more generally, for fibrations with certain conditions.
-
-If $p\colon E \to B$ is a fibration with fiber F, with the base B path-connected, and the fibration is orientable over a field K, then the Euler characteristic with coefficients in the field K satisfies the product property:
-$$
-\chi(E) = \chi(F)\cdot \chi(B).
-$$
-
-This includes product spaces and covering spaces as special cases,
-
-and can be proven by the Serre spectral sequence on homology of a fibration.
-
-For fiber bundles, this can also be understood in terms of a transfer map $\tau\colon H_*(B) \to H_*(E)$ – note that this is a lifting and goes "the wrong way" – whose composition with the projection map $p_*\colon H_*(E) \to H_*(B)$ is multiplication by the Euler class of the fiber:
-$$
-p_* \circ \tau = \chi(F) \cdot 1.
-$$
-
-The Euler characteristic can be calculated easily for general surfaces by finding a polygonization of the surface (that is, a description as a CW-complex) and using the above definitions.
-
-It is common to construct soccer balls by stitching together pentagonal and hexagonal pieces, with three pieces meeting at each vertex (see for example the Adidas Telstar). If P pentagons and H hexagons are used, then there are F = P + H faces, V = (5 P + 6 H) / 3 vertices, and E = (5 P + 6 H) / 2 edges. The Euler characteristic is thus
-$$
-V - E + F = \frac{5P + 6H}{3} - \frac{5P + 6H}{2} + P + H = \frac{P}{6}.
-$$
-
-Because the sphere has Euler characteristic 2, it follows that P = 12. That is, a soccer ball constructed in this way always has 12 pentagons (see the sketch below). In principle, the number of hexagons is unconstrained. This result is applicable to fullerenes and Goldberg polyhedra.
-
-The n-dimensional sphere has singular homology groups equal to
-$$
-H_k(S^n) = \begin{cases} \mathbb Z & k=0, n \\ \{0\} & \text{otherwise,} \end{cases}
-$$
-
-hence has Betti number 1 in dimensions 0 and n, and all other Betti numbers are 0. Its Euler characteristic is then $1 + (-1)^n$ - that is, either 0 or 2.
-
-The n-dimensional real projective space is the quotient of the n-sphere by the antipodal map. It follows that its Euler characteristic is exactly half that of the corresponding sphere - either 0 or 1.
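-The soccer ball computation above can be verified mechanically. The following short Python sketch (exact arithmetic via fractions; the function name is ours) confirms that χ = P/6 is independent of H, so χ = 2 forces exactly 12 pentagons:
-
-```python
-from fractions import Fraction
-
-def chi_trivalent_patchwork(P, H):
-    # V - E + F for a closed surface tiled by P pentagons and H hexagons
-    # with three faces meeting at every vertex, as in the text.
-    V = Fraction(5 * P + 6 * H, 3)
-    E = Fraction(5 * P + 6 * H, 2)
-    F = P + H
-    return V - E + F
-
-# chi equals P/6 regardless of H, so chi = 2 forces P = 12:
-assert chi_trivalent_patchwork(12, 0) == 2    # the dodecahedron
-assert chi_trivalent_patchwork(12, 20) == 2   # the classical soccer ball
-print(chi_trivalent_patchwork(12, 20))        # 2
-```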
-
-The n-dimensional torus is the product space of n circles. Its Euler characteristic is 0, by the product property. More generally, any compact parallelizable manifold, including any compact Lie group, has Euler characteristic 0.
-
-The Euler characteristic of any closed odd-dimensional manifold is also 0. The case for orientable examples is a corollary of Poincaré duality. This property applies more generally to any compact stratified space all of whose strata have odd dimension. It also applies to closed odd-dimensional non-orientable manifolds, via the two-to-one orientable double cover.
-
-The Euler characteristic of a closed orientable surface can be calculated from its genus g (the number of tori in a connected sum decomposition of the surface; intuitively, the number of "handles") as
-$$
-\chi = 2 - 2g.
-$$
-
-The Euler characteristic of a closed non-orientable surface can be calculated from its non-orientable genus k (the number of real projective planes in a connected sum decomposition of the surface) as
-$$
-\chi = 2 - k.
-$$
-
-For closed smooth manifolds, the Euler characteristic coincides with the Euler number, i.e., the Euler class of its tangent bundle evaluated on the fundamental class of the manifold. The Euler class, in turn, relates to all other characteristic classes of vector bundles.
-
-For closed Riemannian manifolds, the Euler characteristic can also be found by integrating the curvature; see the Gauss-Bonnet theorem for the two-dimensional case and the generalized Gauss-Bonnet theorem for the general case.
-
-A discrete analog of the Gauss-Bonnet theorem is Descartes' theorem that the "total defect" of a polyhedron, measured in full circles, is the Euler characteristic of the polyhedron; see defect (geometry).
-
-Hadwiger's theorem characterizes the Euler characteristic as the unique (up to scalar multiplication) translation-invariant, finitely additive, not-necessarily-nonnegative set function defined on finite unions of compact convex sets in $\R^n$ that is "homogeneous of degree 0".
-
-For every combinatorial cell complex, one defines the Euler characteristic as the number of 0-cells, minus the number of 1-cells, plus the number of 2-cells, etc., if this alternating sum is finite. In particular, the Euler characteristic of a finite set is simply its cardinality, and the Euler characteristic of a graph is the number of vertices minus the number of edges.
-
-More generally, one can define the Euler characteristic of any chain complex to be the alternating sum of the ranks of the homology groups of the chain complex, assuming that all these ranks are finite.
-
-A version of Euler characteristic used in algebraic geometry is as follows. For any coherent sheaf $\mathcal{F}$ on a proper scheme X, one defines its Euler characteristic to be
-$$
-\chi ( \mathcal{F})= \sum_i (-1)^i h^i(X,\mathcal{F}),
-$$
-
-where $h^i(X, \mathcal{F}) $ is the dimension of the i-th sheaf cohomology group of $\mathcal{F}$. In this case, the dimensions are all finite by Grothendieck's finiteness theorem. This is an instance of the Euler characteristic of a chain complex, where the chain complex is a finite resolution of $\mathcal{F}$ by acyclic sheaves.
-
-Another generalization of the concept of Euler characteristic on manifolds comes from orbifolds (see Euler characteristic of an orbifold). While every manifold has an integer Euler characteristic, an orbifold can have a fractional Euler characteristic.
For example, the teardrop orbifold has Euler characteristic $1 + 1/p$, where $p > 1$ is an integer corresponding to the cone angle $2\pi/p$. - -The concept of the Euler characteristic of the reduced homology of a bounded finite poset is another generalization, important in combinatorics. A poset is "bounded" if it has smallest and largest elements; call them 0 and 1. The Euler characteristic of such a poset is defined as the integer μ(0,1), where μ is the Möbius function in that poset's incidence algebra. - -This can be further generalized by defining a Q-valued Euler characteristic for certain finite categories, a notion compatible with the Euler characteristics of graphs, orbifolds and posets mentioned above. In this setting, the Euler characteristic of a finite group or monoid G is 1/|G|, and the Euler characteristic of a finite groupoid is the sum of 1/|G_i|, where one representative group G_i is picked for each connected component of the groupoid. diff --git a/wiki/wikipedia/3878.txt b/wiki/wikipedia/3878.txt deleted file mode 100644 index 6524ba42f4138a85d7cc62e3660e6bfa927b80c4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3878.txt +++ /dev/null @@ -1,157 +0,0 @@ -In computer science, asynchronous I/O (also non-sequential I/O) is a form of input/output processing that permits other processing to continue before the transmission has finished. - -Input and output (I/O) operations on a computer can be extremely slow compared to the processing of data. An I/O device can incorporate mechanical devices that must physically move, such as a hard drive seeking a track to read or write; this is often orders of magnitude slower than the switching of electric current. For example, during a disk operation that takes ten milliseconds to perform, a processor that is clocked at one gigahertz could have performed ten million instruction-processing cycles. - -A simple approach to I/O would be to start the access and then wait for it to complete. But such an approach (called synchronous I/O, or blocking I/O) would block the progress of a program while the communication is in progress, leaving system resources idle. When a program makes many I/O operations (such as a program mainly or largely dependent on user input), this means that the processor can spend almost all of its time idle waiting for I/O operations to complete. - -Alternatively, it is possible to start the communication and then perform processing that does not require that the I/O be completed. This approach is called asynchronous input/output. Any task that depends on the I/O having completed (this includes both using the input values and critical operations that claim to assure that a write operation has been completed) still needs to wait for the I/O operation to complete, and thus is still blocked, but other processing that does not have a dependency on the I/O operation can continue. - -Many operating system functions exist to implement asynchronous I/O at many levels. In fact, one of the main functions of all but the most rudimentary of operating systems is to perform at least some form of basic asynchronous I/O, though this may not be particularly apparent to the user or the programmer. In the simplest software solution, the hardware device status is polled at intervals to detect whether the device is ready for its next operation. (For example, the CP/M operating system was built this way.
Its system call semantics did not require any more elaborate I/O structure than this, though most implementations were more complex, and thereby more efficient.) Direct memory access (DMA) can greatly increase the efficiency of a polling-based system, and hardware interrupts can eliminate the need for polling entirely. Multitasking operating systems can exploit the functionality provided by hardware interrupts, whilst hiding the complexity of interrupt handling from the user. Spooling was one of the first forms of multitasking designed to exploit asynchronous I/O. Finally, multithreading and explicit asynchronous I/O APIs within user processes can exploit asynchronous I/O further, at the cost of extra software complexity. - -Asynchronous I/O is used to improve throughput, latency, and/or responsiveness. - -All forms of asynchronous I/O open applications up to potential resource conflicts and associated failure. Careful programming (often using mutual exclusion, semaphores, etc.) is required to prevent this. - -When exposing asynchronous I/O to applications there are a few broad classes of implementation. The form of the API provided to the application does not necessarily correspond with the mechanism actually provided by the operating system; emulations are possible. Furthermore, more than one method may be used by a single application, depending on its needs and the desires of its programmer(s). Many operating systems provide more than one of these mechanisms, and some may provide all of them. - -Processes are available in early Unix. In a multitasking operating system, processing can be distributed across different processes, which run independently, have their own memory, and process their own I/O flows; these flows are typically connected in pipelines. Processes are fairly expensive to create and maintain, so this solution only works well if the set of processes is small and relatively stable. It also assumes that the individual processes can operate independently, apart from processing each other's I/O; if they need to communicate in other ways, coordinating them can become difficult. - -An extension of this approach is dataflow programming, which allows more complicated networks than just the chains that pipes support. - -Polling comes in two variations: - -* Error if it cannot be done yet (reissue later) - -* Report when it can be done without blocking (then issue it) - -Polling provides a non-blocking synchronous API which may be used to implement some asynchronous API. It is available in traditional Unix and Windows. Its major problem is that it can waste CPU time polling repeatedly when there is nothing else for the issuing process to do, reducing the time available for other processes. Also, because a polling application is essentially single-threaded it may be unable to fully exploit I/O parallelism that the hardware is capable of. - -Select loops are available in BSD Unix, and almost anything else with a TCP/IP protocol stack that either utilizes or is modeled after the BSD implementation. A variation on the theme of polling, a select loop uses the select system call to sleep until a condition occurs on a file descriptor (e.g., when data is available for reading), a timeout occurs, or a signal is received (e.g., when a child process dies). By examining the return parameters of the select call, the loop finds out which file descriptor has changed and executes the appropriate code.
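As a concrete illustration of the select-loop pattern just described, here is a minimal Python sketch (not from the article; the address, port, and buffer size are arbitrary) of a single-threaded echo server that multiplexes a listening socket and its client connections through one central select call:

```python
import select
import socket

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 9000))   # hypothetical address/port
server.listen()
server.setblocking(False)

sockets = [server]
while True:
    # Sleep until some descriptor becomes readable, or 5 seconds elapse.
    readable, _, _ = select.select(sockets, [], [], 5.0)
    for sock in readable:
        if sock is server:                 # a new connection is ready to accept
            conn, _ = sock.accept()
            conn.setblocking(False)
            sockets.append(conn)
        else:                              # data (or EOF) is ready on a client
            data = sock.recv(4096)
            if data:
                sock.sendall(data)         # echo the data back
            else:
                sockets.remove(sock)       # client closed the connection
                sock.close()
```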
Often, for ease of use, the select loop is implemented as an event loop, perhaps using callback functions; the situation lends itself particularly well to event-driven programming. - -While this method is reliable and relatively efficient, it depends heavily on the Unix paradigm that "everything is a file"; any blocking I/O that does not involve a file descriptor will block the process. The select loop also relies on being able to involve all I/O in the central select call; libraries that conduct their own I/O are particularly problematic in this respect. An additional potential problem is that the select and the I/O operations are still sufficiently decoupled that select's result may effectively be a lie: if two processes are reading from a single file descriptor (arguably bad design) the select may indicate the availability of read data that has disappeared by the time that the read is issued, thus resulting in blocking; if two processes are writing to a single file descriptor (not that uncommon) the select may indicate immediate writability yet the write may still block, because a buffer has been filled by the other process in the interim, or due to the write being too large for the available buffer or in other ways unsuitable to the recipient. - -The select loop does not reach the ultimate system efficiency possible with, say, the completion queues method, because the semantics of the select call, allowing as it does for per-call tuning of the acceptable event set, consumes some amount of time per invocation traversing the selection array. This creates little overhead for user applications that might have one file descriptor open for the windowing system and a few for open files, but becomes more of a problem as the number of potential event sources grows, and can hinder development of many-client server applications, as in the C10k problem; other asynchronous methods may be noticeably more efficient in such cases. Some Unixes provide system-specific calls with better scaling; for example, epoll in Linux (which fills the return selection array with only those event sources on which an event has occurred), kqueue in FreeBSD, and event ports (and /dev/poll) in Solaris. - -SVR3 Unix provided the poll system call. Arguably better named than select, for the purposes of this discussion it is essentially the same thing. SVR4 Unixes (and thus POSIX) offer both calls. - -Signals are available in BSD and POSIX Unix. I/O is issued asynchronously, and when it is completed a signal (interrupt) is generated. As in low-level kernel programming, the facilities available for safe use within the signal handler are limited, and the main flow of the process could have been interrupted at nearly any point, resulting in inconsistent data structures as seen by the signal handler. The signal handler is usually not able to issue further asynchronous I/O by itself. - -The signal approach, though relatively simple to implement within the OS, brings to the application program the unwelcome baggage associated with writing an operating system's kernel interrupt system. Its worst characteristic is that every blocking (synchronous) system call is potentially interruptible; the programmer must usually incorporate retry code at each call. - -Completion callbacks are available in the classic Mac OS, VMS and Windows. This mechanism bears many of the characteristics of the signal method, as it is fundamentally the same thing, though rarely recognized as such.
The difference is that each I/O request usually can have its own completion function, whereas the signal system has a single callback. This method is used by some networking frameworks, including Node.js, due to its ease of implementation and because it does not require special language support; nonetheless it can result in nested and messy code, nicknamed "callback hell". - -On the other hand, a potential problem of using callbacks is that stack depth can grow unmanageably, as an extremely common thing to do when one I/O is finished is to schedule another. If this should be satisfied immediately, the first callback is not 'unwound' off the stack before the next one is invoked. Systems to prevent this (like 'mid-ground' scheduling of new work) add complexity and reduce performance. In practice, however, this is generally not a problem, because the call that starts the new I/O usually returns as soon as the new I/O is started, allowing the stack to be 'unwound'. The problem can also be prevented by avoiding any further callbacks, by means of a queue, until the first callback returns. - -Light-weight processes (LWPs) or threads are available in more modern Unixes, originating in Plan 9. Like the process method, but without the data isolation that hampers coordination of the flows. This lack of isolation introduces its own problems, usually requiring kernel-provided synchronization mechanisms and thread-safe libraries. Each LWP or thread itself uses traditional blocking synchronous I/O. The requisite separate per-thread stack may preclude large-scale implementations using very large numbers of threads. The separation of textual (code) and time (event) flows provides fertile ground for errors. - -This approach is also used in the Erlang programming language runtime system. The Erlang virtual machine uses asynchronous I/O using a small pool of only a few threads or sometimes just one process, to handle I/O from up to millions of Erlang processes. I/O handling in each process is written mostly using blocking synchronous I/O. In this way the high performance of asynchronous I/O is merged with the simplicity of normal I/O (cf. the Actor model). Many I/O problems in Erlang are mapped to message passing, which can be easily processed using built-in selective receive. - -Fibers / coroutines can be viewed as a similarly lightweight approach to asynchronous I/O outside of the Erlang runtime system, although they do not provide exactly the same guarantees as Erlang processes. - -Completion queues are available in Microsoft Windows, Solaris, AmigaOS, DNIX and Linux (using io_uring, available on kernel 5.1 and above). I/O requests are issued asynchronously, but notifications of completion are provided via a synchronizing queue mechanism in the order they are completed. Usually associated with a state-machine structuring of the main process (event-driven programming), which can bear little resemblance to a process that does not use asynchronous I/O or that uses one of the other forms, hampering code reuse. Does not require additional special synchronization mechanisms or thread-safe libraries, nor are the textual (code) and time (event) flows separated. - -Event flags are available in VMS and AmigaOS (often used in conjunction with a completion port). This mechanism bears many of the characteristics of the completion queue method, as it is essentially a completion queue of depth one. To simulate the effect of queue 'depth', an additional event flag is required for each potential unprocessed (but completed) event, or event information can be lost.
Waiting for the next available event in such a clump requires synchronizing mechanisms that may not scale well to larger numbers of potentially parallel events. - -Channel I/O is available in mainframes from IBM, Groupe Bull, and Unisys. It is designed to maximize CPU utilization and throughput by offloading most I/O onto a coprocessor. The coprocessor has onboard DMA, handles device interrupts, is controlled by the main CPU, and only interrupts the main CPU when it's truly necessary. This architecture also supports so-called channel programs that run on the channel processor to do heavy lifting for I/O activities and protocols. - -Registered I/O is available in Windows Server 2012 and Windows 8. It is optimized for applications that process large numbers of small messages to achieve higher I/O operations per second with reduced jitter and latency. - -The vast majority of general-purpose computing hardware relies entirely upon two methods of implementing asynchronous I/O: polling and interrupts. Usually both methods are used together; the balance depends heavily upon the design of the hardware and its required performance characteristics. (DMA is not itself another independent method; it is merely a means by which more work can be done per poll or interrupt.) - -Pure polling systems are entirely possible; small microcontrollers (such as systems using the PIC) are often built this way. CP/M systems could also be built this way (though rarely were), with or without DMA. Also, when the utmost performance is necessary for only a few tasks, at the expense of any other potential tasks, polling may also be appropriate, as the overhead of taking interrupts may be unwelcome. (Servicing an interrupt requires time [and space] to save at least part of the processor state, along with the time required to resume the interrupted task.) - -Most general-purpose computing systems rely heavily upon interrupts. A pure interrupt system may be possible, though usually some component of polling is also required, as it is very common for multiple potential sources of interrupts to share a common interrupt signal line, in which case polling is used within the device driver to resolve the actual source. (This resolution time also contributes to an interrupt system's performance penalty. Over the years a great deal of work has been done to try to minimize the overhead associated with servicing an interrupt. Current interrupt systems are rather lackadaisical when compared to some highly tuned earlier ones, but the general increase in hardware performance has greatly mitigated this.) - -Hybrid approaches are also possible, wherein an interrupt can trigger the beginning of some burst of asynchronous I/O, and polling is used within the burst itself. This technique is common in high-speed device drivers, such as network or disk, where the time lost in returning to the pre-interrupt task is greater than the time until the next required servicing. (Common I/O hardware in use these days relies heavily upon DMA and large data buffers to make up for a relatively poorly-performing interrupt system. These characteristically use polling inside the driver loops, and can exhibit tremendous throughput. Ideally the per-datum polls are always successful, or at most repeated a small number of times.) - -At one time this sort of hybrid approach was common in disk and network drivers where there was not DMA or significant buffering available.
-Because the desired transfer speeds were faster even than could tolerate the minimum four-operation per-datum loop (bit-test, conditional-branch-to-self, fetch, and store), the hardware would often be built with automatic wait state generation on the I/O device, pushing the data-ready poll out of software and onto the processor's fetch or store hardware and reducing the programmed loop to two operations. (In effect using the processor itself as a DMA engine.) The 6502 processor offered an unusual means to provide a three-element per-datum loop, as it had a hardware pin that, when asserted, would cause the processor's Overflow bit to be set directly. (Obviously one would have to take great care in the hardware design to avoid overriding the Overflow bit outside of the device driver!)
-
-Using only these two tools (polling and interrupts), all the other forms of asynchronous I/O discussed above may be (and in fact, are) synthesized.
-
-In an environment such as a Java Virtual Machine (JVM), asynchronous I/O can be synthesized even though the environment the JVM is running in may not offer it at all. This is due to the interpreted nature of the JVM. The JVM may poll (or take an interrupt) periodically to institute an internal flow of control change, effecting the appearance of multiple simultaneous processes, at least some of which presumably exist in order to perform asynchronous I/O. (Of course, at the microscopic level the parallelism may be rather coarse and exhibit some non-ideal characteristics, but on the surface it will appear to be as desired.)
-
-That, in fact, is the problem with using polling in any form to synthesize a different form of asynchronous I/O. Every CPU cycle that is a poll is wasted, and lost to overhead rather than accomplishing a desired task. Every CPU cycle that is not a poll represents an increase in latency of reaction to pending I/O. Striking an acceptable balance between these two opposing forces is difficult. (This is why hardware interrupt systems were invented in the first place.)
-
-The trick to maximize efficiency is to minimize the amount of work that has to be done upon reception of an interrupt in order to awaken the appropriate application. Secondary (but perhaps no less important) is the method the application itself uses to determine what it needs to do.
-
-Particularly problematic (for application efficiency) are the exposed polling methods, including the select/poll mechanisms. Though the underlying I/O events they are interested in are in all likelihood interrupt-driven, the interaction with this mechanism is polled and can consume a large amount of time in the poll. This is particularly true of the potentially large-scale polling possible through select (and poll). Interrupts map very well to signals, callback functions, completion queues, and event flags; such systems can be very efficient.
-
-The following examples show the concepts of three I/O approaches on a read operation. Objects and functions are abstract.
-
-1. Blocking, synchronous:
-
-device = IO.open()
-data = device.read()  # the thread will be blocked until there is data in the device
-print(data)
-
-2. Non-blocking, synchronous:
-
-device = IO.open()
-ready = False
-while not ready:
-    print("There is no data to read!")
-    ready = IO.poll(device, IO.INPUT, 5)  # returns control if 5 seconds have elapsed or there is data to read (INPUT)
-data = device.read()
-print(data)
-
-3. Non-blocking, asynchronous:
-
-ios = IO.IOService()
-device = IO.open(ios)
-
-def inputHandler(data, err):
-    "Input data handler"
-    if not err:
-        print(data)
-
-device.readSome(inputHandler)
-ios.loop()  # wait till all operations have been completed and call all appropriate handlers
-
-Here is the same example with the Reactor pattern:
-
-device = IO.open()
-reactor = IO.Reactor()
-
-def inputHandler(data):
-    "Input data handler"
-    print(data)
-    reactor.stop()
-
-reactor.addHandler(inputHandler, device, IO.INPUT)
-reactor.run()  # run the reactor, which handles events and calls the appropriate handlers
diff --git a/wiki/wikipedia/3879.txt b/wiki/wikipedia/3879.txt deleted file mode 100644 index 6aa720a2b0e270bb71645476a95e0a36b2bedbb0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3879.txt +++ /dev/null @@ -1,27 +0,0 @@ -The Orlicz–Pettis theorem is a theorem in functional analysis concerning convergent series (Orlicz) or, equivalently, countable additivity of measures (Pettis) with values in abstract spaces. - -Let $X $ be a Hausdorff locally convex topological vector space with dual $ X^*$. A series $\sum_{n=1}^\infty~x_n $ is subseries convergent (in $ X $) if all its subseries $\sum_{k=1}^\infty~ x_{n_k}$ are convergent. The theorem says that, equivalently, - -*(i) If a series $ \sum_{n=1}^\infty~x_n $ is weakly subseries convergent in $X $ (i.e., is subseries convergent in $X $ with respect to its weak topology $\sigma(X,X^*)$), then it is (subseries) convergent; or - -*(ii) Let $ \mathbf{A} $ be a $\sigma$-algebra of sets and let $\mu:\mathbf{A}\to X $ be an additive set function. If $\mu $ is weakly countably additive, then it is countably additive (in the original topology of the space $X $). - -The history of the origins of the theorem is somewhat complicated. In numerous papers and books there are misquotations and/or misconceptions concerning the result. Assuming that $X $ is a weakly sequentially complete Banach space, W. Orlicz proved the following - -Theorem. If a series $ \sum_{n=1}^\infty~x_n $ is weakly unconditionally Cauchy, i.e., $ \sum_{n=1}^\infty |x^*(x_n)|<\infty $ for each linear functional $x^*\in X^* $, then the series is (norm) convergent in $ X $. - -After the paper was published, Orlicz realized that in the proof of the theorem the weak sequential completeness of $X $ was only used to guarantee the existence of the weak limits of the considered series. Consequently, assuming the existence of those limits, which amounts to assuming the weak subseries convergence of the series, the same proof shows that the series is norm convergent. In other words, version (i) of the Orlicz–Pettis theorem holds. - -The theorem in this form, openly credited to Orlicz, appeared in Banach's monograph in the last chapter Remarques, in which no proofs were provided. Pettis directly referred to Orlicz's theorem in Banach's book. Needing the result in order to show the coincidence of the weak and strong measures, he provided a proof. Dunford also gave a proof (with a remark that it is similar to the original proof of Orlicz). - -A more thorough discussion of the origins of the Orlicz–Pettis theorem, and in particular of the original paper, can be found in the references, as can the comments at the end of Section 2.4 of the 2nd edition of the quoted book by Albiac and Kalton. Though in Polish, there is also an adequate comment on page 284 of the quoted monograph of Alexiewicz, Orlicz's first PhD student, then still in occupied Lwów.
- -Grothendieck later proved a theorem whose special case is the Orlicz–Pettis theorem in locally convex spaces. Subsequently, more direct proofs of form (i) of the theorem in the locally convex case were provided by McArthur and Robertson. - -The theorem of Orlicz and Pettis has been strengthened and generalized in many directions. An early survey of this area of research is Kalton's paper. A natural setting for subseries convergence is that of an Abelian topological group $ X $, and a representative result of this area of research is the following theorem, called by Kalton the Graves–Labuda–Pachl Theorem. - -Theorem. Let $ X $ be an Abelian group and $ \alpha ,\beta $ two Hausdorff group topologies on $ X $ such that $(X,\beta)$ is sequentially complete, $ \alpha \subset \beta $, and the identity $ j:(X,\alpha)\to (X,\beta)$ is universally measurable. Then the subseries convergence for both topologies $\alpha $ and $\beta $ is the same. - -As a consequence, if $(X,\beta)$ is a sequentially complete K-analytic group, then the conclusion of the theorem is true for every Hausdorff group topology $\alpha$ weaker than $\beta$. This is a generalization of an analogous result for a sequentially complete analytic group $ (X,\beta)$ (in the original statement of the Andersen–Christensen theorem the assumption of sequential completeness is missing), which in turn extends the corresponding theorem of Kalton for a Polish group, a theorem that triggered this series of papers. - -The limitations for this kind of result are provided by the weak* topology of the Banach space $\ell^\infty $ and the examples of F-spaces $ X $ with separating dual $ X^*$ such that the weak (i.e., $\sigma(X,X^*) $) subseries convergence does not imply the subseries convergence in the F-norm of the space $ X $. diff --git a/wiki/wikipedia/388.txt b/wiki/wikipedia/388.txt deleted file mode 100644 index 481adcbcd4d6a8a62183dc793b5de82095e44a87..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/388.txt +++ /dev/null @@ -1,41 +0,0 @@ -In applied mathematics, transit node routing can be used to speed up shortest-path routing by pre-computing connections between common access nodes to a sub-network relevant to long-distance travel. - -Transit node routing as a framework was established in 2007; a more sophisticated implementation based on contraction hierarchies followed. - -In a grid-based approach, the bounding square of all nodes is equally subdivided into square cells. - -How are access nodes selected? - -For each cell $C$, a set of access nodes can be found by looking at an inner area $I$ of 5x5 cells and an outer area $O$ of 9x9 cells around $C$. Focusing on crossing nodes (ends of edges that cross the boundary of $C$, $I$ or $O$), the access nodes for $C$ are those nodes of $I$ that are part of a shortest path from some node in $C$ to a node in $O$. As access nodes for an arbitrary node $v \in C$, all access nodes of $C$ are chosen. - -How are transit nodes selected? - -The set of transit nodes is exactly the union of all sets of access nodes. - -Which locality filter should be used? - -The way access nodes are selected implies that if source and target are more than four grid cells apart, a transit node must be passed on the shortest path and the distance can be calculated as described above. If they lie closer together, a fallback algorithm is used to obtain the distance. - -How should local queries be handled?
- -Local queries are only needed if start and target already lie close together; therefore every suitable shortest-path algorithm, such as Dijkstra's algorithm or extensions thereof, can be chosen. - -The pre-computed distances between each node and the corresponding access nodes, as well as the pairwise distances between transit nodes, need to be stored in distance tables. - -In the grid-based implementation outlined above, this requires 16 bytes of storage for each node of the road graph. A full graph of the USA road network has 23,947,347 nodes, so approximately 383 MB of storage would be required to store the distance tables. - -How are transit nodes selected? - -By definition, a contraction hierarchy moves important nodes (i.e. nodes that are part of many shortest paths) to the top of the hierarchy. A set of transit nodes therefore can be selected as the $k$ highest nodes of the contraction hierarchy. - -How are access nodes selected? - -Forward access nodes of a node $v$ can be found by running the upward-search of the contraction hierarchy starting at $v$. During the upward-search, edges leaving previously found transit nodes aren't relaxed. When the search has no more upward nodes left to settle, those transit nodes that have been settled are the access nodes of $v$. Backward access nodes can be found analogously. - -Which locality filter should be used? - -If the highest node of a shortest up-down-path in the hierarchy is not part of the set of transit nodes, then the query was local. This implies that neither the up-part of the path (beginning at the starting node) nor the down-part of the path (ending at the target node) can contain a transit node, and there must be a common node in both paths. During the calculation of the access nodes, the search space (all visited nodes towards the top of the hierarchy) for each node can be stored without including transit nodes. When performing a query, those search spaces for the start and target node are checked for an intersection. If those spaces are disjoint, transit node routing can be used, because the up- and down-paths must meet at a transit node. Otherwise there could be a shortest path without a transit node. - -How should local queries be handled? - -Local queries use the regular query algorithm of the contraction hierarchy. diff --git a/wiki/wikipedia/3880.txt b/wiki/wikipedia/3880.txt deleted file mode 100644 index 9288705e8b3e5bc4926e5a78b684581fbb9cdb7f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3880.txt +++ /dev/null @@ -1,36 +0,0 @@ -In mathematics, the Kantorovich inequality is a particular case of the Cauchy–Schwarz inequality, which is itself a generalization of the triangle inequality. - -The triangle inequality states that the lengths of any two sides of a triangle, added together, will be equal to or greater than the length of the third side. In simplest terms, the Kantorovich inequality translates the basic idea of the triangle inequality into the terms and notational conventions of linear programming. (See vector space, inner product, and normed vector space for other examples of how the basic ideas inherent in the triangle inequality—line segment and distance—can be generalized into a broader context.) - -More formally, the Kantorovich inequality can be expressed this way: - -Let -$$ -p_i \geq 0,\quad 0 < a \leq x_i \leq b\text{ for }i=1, \dots ,n.
-$$ - -Let $A_n=\{1,2,\dots ,n\}.$ - -Then - - - -\begin{align} - -& {} \qquad \left( \sum_{i=1}^n p_ix_i \right ) \left (\sum_{i=1}^n \frac{p_i}{x_i} \right) \\ - -& \leq \frac{(a+b)^2}{4ab} \left (\sum_{i=1}^n p_i \right )^2 - --\frac{(a-b)^2}{4ab} \cdot \min \left\{ \left (\sum_{i \in X}p_i-\sum_{j \in Y}p_j \right )^2: {X \cup Y=A_n},{X \cap Y=\varnothing} \right\}. - -\end{align} - - - -The Kantorovich inequality is used in convergence analysis; it bounds the convergence rate of Cauchy's steepest descent. - -Equivalents of the Kantorovich inequality have arisen in a number of different fields. For instance, the Cauchy–Schwarz–Bunyakovsky inequality and the Wielandt inequality are equivalent to the Kantorovich inequality, and all of these are, in turn, special cases of the Hölder inequality. - -The Kantorovich inequality is named after Soviet economist, mathematician, and Nobel Prize winner Leonid Kantorovich, a pioneer in the field of linear programming. - -There is also a matrix version of the Kantorovich inequality due to Marshall and Olkin. diff --git a/wiki/wikipedia/3881.txt b/wiki/wikipedia/3881.txt deleted file mode 100644 index b04523e0458afa7f4f42b599d42391ecde876989..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3881.txt +++ /dev/null @@ -1,13 +0,0 @@ -ZumoDrive is a defunct cloud-based file hosting service operated by Zecter, Inc. On December 22, 2010, Zecter announced its acquisition by Motorola Mobility. The service enabled users to store and sync files online, and also between computers using their HybridCloud storage solution; the latter functionality stopped working in approximately September 2011, while the former was formally taken down on May 1, 2012. ZumoDrive had a cross-platform client (Windows, macOS, Linux, iOS, Android, and Palm webOS) that enabled users to copy any file or folder into the ZumoDrive virtual disk that was then synced to the web and the users' other computers and hand-held devices. Files in the ZumoDrive virtual disk could be shared with other ZumoDrive users or accessed from the web. Users could also upload files manually through a web browser interface. A free ZumoDrive account offered 2 GB of storage, and users could upgrade to paid plans ranging from 10 GB to 500 GB for a monthly subscription fee. The ZumoDrive service was integrated into Yahoo! Mail, allowing users to send or receive any file on their ZumoDrive (the integration with Yahoo! Mail was stopped as of June 1, 2012), and powered HP's CloudDrive technology, bundled on new HP Mini netbooks. - -ZumoDrive was created by the Silicon Valley based company Zecter, which was founded by David Zhao, Kevin West, and Vijay Mani in 2007. Zhao was formerly an application developer for Amazon, while West and Mani were former Microsoft employees. - -The company received seed funding from Y Combinator, Tandem Entrepreneurs, and other seed and early-stage investors. An additional round of $1.5 million in funding was secured in late October 2009, led by Sherpalo Ventures, with Tandem Entrepreneurs and VeriFone CEO Douglas Bergeron participating. Development for ZumoDrive began in 2008 and the product launched in January 2009. The service was eventually shut down, although users of the HP, CruzSync, and Toshiba branded services were initially unaffected. - -While ZumoDrive functioned as a file synchronization and storage service, it employed an approach that allowed content in the cloud to appear local to the filesystem.
ZumoDrive synchronization used SSL transfers with AES-256 encryption, and it supported revision history—by use of deltas, or delta encoding technology—so files deleted from the ZumoDrive virtual disk could be recovered from any of the synced computers. ZumoDrive's version control also helped users track the history of a file they were working on, enabling more than one person to edit and re-post files without edit conflicts or loss of information. There was no limit on file size for files added via the ZumoDrive client. ZumoDrive used Amazon's S3 simple storage service to store files in the cloud. - -The ZumoDrive service was unusual among file sync and storage services in that content appeared local to the filesystem and could be streamed from the cloud on demand. Users could stream music directly from ZumoDrive to iPhone, iPod Touch, Android and WebOS devices. ZumoDrive allowed users to selectively synchronize individual files, folders, or the entire virtual drive. Users could also link folders in place on their computers to their ZumoDrive, and these folders and all their content would stay in sync across all devices. - -Slow broadband connection speeds could make streaming large files, such as movies, to mobile and other remote devices difficult. - -While ZumoDrive encrypted transport of all content with 256-bit SSL, and stored that content encrypted on Amazon S3 servers, that content was still accessible to ZumoDrive administrators. diff --git a/wiki/wikipedia/3882.txt b/wiki/wikipedia/3882.txt deleted file mode 100644 index c96d9921a37cd8809c0a12177773c9f4e043d384..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3882.txt +++ /dev/null @@ -1,83 +0,0 @@ -The structured program theorem, also called the Böhm–Jacopini theorem, is a result in programming language theory. It states that a class of control-flow graphs (historically called flowcharts in this context) can compute any computable function if it combines subprograms in only three specific ways (control structures). These are - -#Executing one subprogram, and then another subprogram (sequence) - -#Executing one of two subprograms according to the value of a boolean expression (selection) - -#Repeatedly executing a subprogram as long as a boolean expression is true (iteration) - -The structured chart subject to these constraints may however use additional variables in the form of bits (stored in an extra integer variable in the original proof) in order to keep track of information that the original program represents by the program location. The construction was based on Böhm's programming language P′′. - -The theorem forms the basis of structured programming, a programming paradigm which eschews goto commands and exclusively uses subroutines, sequences, selection and iteration. - -The theorem is typically credited to a 1966 paper by Corrado Böhm and Giuseppe Jacopini. - -A simpler "folk" version of the theorem replaces all the original program's control flow with a single global while loop that simulates a program counter going over all possible labels (flowchart boxes) in the original non-structured program. Harel traced the origin of this folk theorem to two papers marking the beginning of computing. One is the 1946 description of the von Neumann architecture, which explains how a program counter operates in terms of a while loop. Harel notes that the single loop used by the folk version of the structured programming theorem basically just provides operational semantics for the execution of a flowchart on a von Neumann computer.
-
-
-p := 1
-while p > 0 do
-    if p = 1 then
-        perform step 1 from the flowchart
-        p := resulting successor step number of step 1 from the flowchart (0 if no successor)
-    end if
-    if p = 2 then
-        perform step 2 from the flowchart
-        p := resulting successor step number of step 2 from the flowchart (0 if no successor)
-    end if
-    ...
-    if p = n then
-        perform step n from the flowchart
-        p := resulting successor step number of step n from the flowchart (0 if no successor)
-    end if
-end while
-
-The proof in Böhm and Jacopini's paper proceeds by induction on the structure of the flow chart.
-
-Edward Yourdon notes that in the 1970s there was even philosophical opposition to transforming unstructured programs into structured ones by automated means, based on the argument that one needed to think in structured-programming fashion from the get-go. The pragmatic counterpoint was that such transformations benefited a large body of existing programs. Among the first proposals for an automated transformation was a 1971 paper by Edward Ashcroft and Zohar Manna.
-
-The direct application of the Böhm–Jacopini theorem may result in additional local variables being introduced in the structured chart, and may also result in some code duplication. The latter issue is called the loop and a half problem in this context. Pascal is affected by both of these problems, and according to empirical studies cited by Eric S. Roberts, student programmers had difficulty formulating correct solutions in Pascal for several simple problems, including writing a function for searching for an element in an array. A 1980 study by Henry Shapiro cited by Roberts found that using only the Pascal-provided control structures, the correct solution was given by only 20% of the subjects, while no subject wrote incorrect code for this problem if allowed to write a return from the middle of a loop.
-
-McCabe's characterization of the forbidden graphs for structured programming can be considered incomplete, at least if Dijkstra's D structures are considered the building blocks.
-
-Up to 1990 there were quite a few proposed methods for eliminating gotos from existing programs, while preserving most of their structure. The various approaches to this problem also proposed several notions of equivalence, which are stricter than simply Turing equivalence, in order to avoid output like the folk theorem discussed above. The strictness of the chosen notion of equivalence dictates the minimal set of control flow structures needed. The 1988 JACM paper by Lyle Ramshaw surveys the field up to that point, as well as proposing its own method. Ramshaw's algorithm was used for example in some Java decompilers, because the Java virtual machine code has branch instructions with targets expressed as offsets, but the high-level Java language only has multi-level break and continue statements. Ammarguellat (1992) proposed a transformation method that goes back to enforcing single-exit.
-
-In the 1980s IBM researcher Harlan Mills oversaw the development of the COBOL Structuring Facility, which applied a structuring algorithm to COBOL code. Mills's transformation involved the following steps for each procedure.
-
-#Identify the basic blocks in the procedure.
-
-#Assign a unique label to each block's entry path, and label each block's exit paths with the labels of the entry paths they connect to. Use 0 for return from the procedure and 1 for the procedure's entry path.
-
-#Break the procedure into its basic blocks.
- -#For each block that is the destination of only one exit path, reconnect that block to that exit path. - -#Declare a new variable in the procedure (called L for reference). - -#On each remaining unconnected exit path, add a statement that sets L to the label value on that path. - -#Combine the resulting programs into a selection statement that executes the program with the entry path label indicated by L - -#Construct a loop that executes this selection statement as long as L is not 0. - -#Construct a sequence that initializes L to 1 and executes the loop. - -Note that this construction can be improved by converting some cases of the selection statement into subprocedures. diff --git a/wiki/wikipedia/3883.txt b/wiki/wikipedia/3883.txt deleted file mode 100644 index fb71a3f7fab14f2aca5a1555915b40ca31d37ff6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3883.txt +++ /dev/null @@ -1,19 +0,0 @@ -In algebraic geometry, the Grothendieck trace formula expresses the number of points of a variety over a finite field in terms of the trace of the Frobenius endomorphism on its cohomology groups. There are several generalizations: the Frobenius endomorphism can be replaced by a more general endomorphism, in which case the points over a finite field are replaced by its fixed points, and there is also a more general version for a sheaf over the variety, where the cohomology groups are replaced by cohomology with coefficients in the sheaf. - -The Grothendieck trace formula is an analogue in algebraic geometry of the Lefschetz fixed-point theorem in algebraic topology. - -One application of the Grothendieck trace formula is to express the zeta function of a variety over a finite field, or more generally the L-series of a sheaf, as a sum over traces of Frobenius on cohomology groups. This is one of the steps used in the proof of the Weil conjectures. - -Behrend's trace formula generalizes the formula to algebraic stacks. - -Let k be a finite field, l a prime number invertible in k, X a smooth k-scheme of dimension n, and $\mathcal{F}$ a constructible $\mathbb{Q}_l$-sheaf on X. Then the following cohomological expression for the L-function of $\mathcal{F}$ holds: -$$ -L(X, \mathcal{F}, t) = \prod_{i=0}^{2n} \det(1 - t\cdot F | H^i_c(X_{\bar{k}}, \mathcal{F}))^{(-1)^{i+1}} = \frac{\det(1 - t \cdot F | H^1_c(X_{\bar{k}}, \mathcal{F})) \ldots \det(1 - t \cdot F | H^{2n-1}_c(X_{\bar{k}}, \mathcal{F}))}{\det(1 - t \cdot F | H^0_c(X_{\bar{k}}, \mathcal{F})) \ldots \det(1 - t \cdot F | H^{2n}_c(X_{\bar{k}}, \mathcal{F}))} -$$ - -where F is everywhere a geometric Frobenius action on l-adic cohomology with compact supports of the sheaf $\mathcal{F}$. Taking logarithmic derivatives of both formal power series produces a statement on sums of traces for each finite field extension E of the base field k: -$$ -\sum_{x \in X(E)} \operatorname{tr}(F_E | \mathcal{F}_x) = \sum_{i=0}^{2n}(-1)^{i}\operatorname{tr}(F_E | H^i_c(X_{\bar{k}}, \mathcal{F})) -$$ - -For a constant sheaf $\mathbb{Q}_l$ (viewed as $(\varprojlim \mathbb{Z}/l^n\mathbb{Z}) \otimes \mathbb{Q}$ to qualify as an l-adic sheaf) the left hand side of this formula is the number of E-points of X. 
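To make the logarithmic-derivative identity above concrete, the following sympy sketch (our own check, not part of the article) verifies it for $X = \mathbb{P}^1$ over $\mathbb{F}_q$, where $H^0_c$ and $H^2_c$ contribute Frobenius eigenvalues 1 and q, so the L-function is $1/((1-t)(1-qt))$ and the point counts should come out to $q^n + 1$:

```python
import sympy as sp

q, t = sp.symbols("q t")
Z = 1 / ((1 - t) * (1 - q * t))   # zeta function of P^1 over F_q

# t * d/dt log Z(t) should have q^n + 1 as the coefficient of t^n,
# matching #P^1(F_{q^n}) = q^n + 1.
log_deriv = sp.series(t * sp.diff(sp.log(Z), t), t, 0, 6).removeO()
for n in range(1, 6):
    assert sp.simplify(log_deriv.coeff(t, n) - (q**n + 1)) == 0
```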
diff --git a/wiki/wikipedia/3884.txt b/wiki/wikipedia/3884.txt deleted file mode 100644 index ff8f3c756927fbb2d941082729ccf6399795a8a2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3884.txt +++ /dev/null @@ -1,18 +0,0 @@ -Lovelock's theorem of general relativity says that, from a local gravitational action which contains only up to second derivatives of the four-dimensional spacetime metric, the only possible equations of motion are the Einstein field equations. - -The only possible second-order Euler–Lagrange expression obtainable in a four-dimensional space from a scalar density of the form $\mathcal{L}=\mathcal{L}(g_{\mu\nu})$ is -$$ -E^{\mu\nu} = \alpha \sqrt{-g} \left[R^{\mu\nu} - \frac{1}{2} g^{\mu\nu} R \right] + \lambda \sqrt{-g} g^{\mu\nu} -$$ - -Lovelock's theorem means that if we want to modify the Einstein field equations, then we have five options. - -* Add other fields besides the metric tensor; - -* Use more or fewer than four spacetime dimensions; - -* Allow more than second-order derivatives of the metric; - -* Non-locality, for example the inverse d'Alembertian; - -* Emergence – the idea that the field equations do not come from an action. diff --git a/wiki/wikipedia/3885.txt b/wiki/wikipedia/3885.txt deleted file mode 100644 index 135a99a51e42840de2d2fdfeaea4067c82911fc1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3885.txt +++ /dev/null @@ -1,7 +0,0 @@ -Pierre Rosenstiehl (5 December 1933 – 28 October 2020) was a French mathematician recognized for his work in graph theory, planar graphs, and graph drawing. - -The Fraysseix–Rosenstiehl planarity criterion is at the origin of the left-right planarity algorithm implemented in software, which is considered the fastest implemented planarity-testing algorithm. - -Rosenstiehl was directeur d'études at the École des Hautes Études en Sciences Sociales in Paris before his retirement. He was a founding co-editor-in-chief of the European Journal of Combinatorics. In 1992, Rosenstiehl, Giuseppe Di Battista, Peter Eades and Roberto Tamassia organized a meeting devoted to graph drawing at Marino, Italy, which initiated a long series of international conferences, the International Symposia on Graph Drawing. - -He joined the French literary group Oulipo in 1992. He married the French author and illustrator Agnès Rosenstiehl. diff --git a/wiki/wikipedia/3886.txt b/wiki/wikipedia/3886.txt deleted file mode 100644 index 476381a519896eb9c3008bed08b602b1e45f1c6d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3886.txt +++ /dev/null @@ -1,260 +0,0 @@ -In the theory of vector spaces, a set of vectors is said to be linearly dependent if there is a nontrivial linear combination of the vectors that equals the zero vector. If no such linear combination exists, then the vectors are said to be linearly independent. These concepts are central to the definition of dimension. - -A vector space can be of finite dimension or infinite dimension depending on the maximum number of linearly independent vectors. The definition of linear dependence and the ability to determine whether a subset of vectors in a vector space is linearly dependent are central to determining the dimension of a vector space.
- -A sequence of vectors $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k$ from a vector space V is said to be linearly dependent if there exist scalars $a_1, a_2, \dots, a_k,$ not all zero, such that -$$ -a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \cdots + a_k\mathbf{v}_k = \mathbf{0}, -$$ - -where $\mathbf{0}$ denotes the zero vector. - -This implies that at least one of the scalars is nonzero, say $a_1\ne 0$, and the above equation can be written as -$$ -\mathbf{v}_1 = \frac{-a_2}{a_1}\mathbf{v}_2 + \cdots + \frac{-a_k}{a_1} \mathbf{v}_k, -$$ - -if $k>1,$ and $\mathbf{v}_1 = \mathbf{0}$ if $k=1.$ - -Thus, a set of vectors is linearly dependent if and only if one of them is zero or a linear combination of the others. - -A sequence of vectors $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n$ is said to be linearly independent if it is not linearly dependent, that is, if the equation -$$ -a_1\mathbf{v}_1 + a_2 \mathbf{v}_2 + \cdots + a_n\mathbf{v}_n = \mathbf{0}, -$$ - -can only be satisfied by $a_i=0$ for $i=1,\dots,n.$ This implies that no vector in the sequence can be represented as a linear combination of the remaining vectors in the sequence. In other words, a sequence of vectors is linearly independent if the only representation of $\mathbf 0$ as a linear combination of its vectors is the trivial representation in which all the scalars $a_i$ are zero. Even more concisely, a sequence of vectors is linearly independent if and only if $\mathbf 0$ can be represented as a linear combination of its vectors in a unique way. - -If a sequence of vectors contains the same vector twice, it is necessarily dependent. The linear dependency of a sequence of vectors does not depend on the order of the terms in the sequence. This allows defining linear independence for a finite set of vectors: a finite set of vectors is linearly independent if the sequence obtained by ordering them is linearly independent. In other words, one has the following result that is often useful. - -A sequence of vectors is linearly independent if and only if it does not contain the same vector twice and the set of its vectors is linearly independent. - -An infinite set of vectors is linearly independent if every nonempty finite subset is linearly independent. Conversely, an infinite set of vectors is linearly dependent if it contains a finite subset that is linearly dependent, or equivalently, if some vector in the set is a linear combination of other vectors in the set. - -An indexed family of vectors is linearly independent if it does not contain the same vector twice, and if the set of its vectors is linearly independent. Otherwise, the family is said to be linearly dependent. - -A set of vectors that is linearly independent and spans some vector space forms a basis for that vector space. For example, the vector space of all polynomials in x over the reals has the (infinite) subset {1, x, x^2, ...} as a basis. - -* $\vec u$ and $\vec v$ are independent and define the plane P. - -* $\vec u$, $\vec v$ and $\vec w$ are dependent because all three are contained in the same plane. - -* $\vec u$ and $\vec j$ are dependent because they are parallel to each other. - -* $\vec u$, $\vec v$ and $\vec k$ are independent because $\vec u$ and $\vec v$ are independent of each other and $\vec k$ is not a linear combination of them or, equivalently, because they do not belong to a common plane. The three vectors define a three-dimensional space. - -* The vectors $\vec o$ (null vector, whose components are equal to zero) and $\vec k$ are dependent since $\vec o = 0 \cdot \vec k$
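The definition translates directly into a finite computation: the vectors are independent exactly when the matrix having them as columns has full column rank. A small numpy sketch (ours, not the article's; it assumes real coordinate vectors):

```python
import numpy as np

def linearly_independent(vectors):
    """True iff the only solution of a1*v1 + ... + ak*vk = 0 is trivial,
    i.e. iff the matrix with the vectors as columns has full column rank."""
    M = np.column_stack([np.asarray(v, dtype=float) for v in vectors])
    return np.linalg.matrix_rank(M) == M.shape[1]

assert linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)])  # standard basis
assert not linearly_independent([(1, 2), (2, 4)])               # second = 2 * first
```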
- -A person describing the location of a certain place might say, "It is 3 miles north and 4 miles east of here." This is sufficient information to describe the location, because the geographic coordinate system may be considered as a 2-dimensional vector space (ignoring altitude and the curvature of the Earth's surface). The person might add, "The place is 5 miles northeast of here." This last statement is true, but it is not necessary to find the location. - -In this example the "3 miles north" vector and the "4 miles east" vector are linearly independent. That is to say, the north vector cannot be described in terms of the east vector, and vice versa. The third "5 miles northeast" vector is a linear combination of the other two vectors, and it makes the set of vectors linearly dependent; that is, one of the three vectors is unnecessary to define a specific location on a plane. - -Also note that if altitude is not ignored, it becomes necessary to add a third vector to the linearly independent set. In general, n linearly independent vectors are required to describe all locations in n-dimensional space. - -If one or more vectors from a given sequence of vectors $\mathbf{v}_1, \dots, \mathbf{v}_k$ is the zero vector $\mathbf{0}$ then the vectors $\mathbf{v}_1, \dots, \mathbf{v}_k$ are necessarily linearly dependent (and consequently, they are not linearly independent). - -To see why, suppose that $i$ is an index (i.e. an element of $\{ 1, \ldots, k \}$) such that $\mathbf{v}_i = \mathbf{0}.$ Then let $a_{i} := 1$ (alternatively, letting $a_{i}$ be equal to any other non-zero scalar will also work) and then let all other scalars be $0$ (explicitly, this means that for any index $j$ other than $i$ (i.e. for $j \neq i$), let $a_{j} := 0$ so that consequently $a_{j} \mathbf{v}_j = 0 \mathbf{v}_j = \mathbf{0}$). - -Simplifying $a_1 \mathbf{v}_1 + \cdots + a_k\mathbf{v}_k$ gives: -$$ -a_1 \mathbf{v}_1 + \cdots + a_k\mathbf{v}_k = \mathbf{0} + \cdots + \mathbf{0} + a_i \mathbf{v}_i + \mathbf{0} + \cdots + \mathbf{0} = a_i \mathbf{v}_i = a_i \mathbf{0} = \mathbf{0}. -$$ - -Because not all scalars are zero (in particular, $a_{i} \neq 0$), this proves that the vectors $\mathbf{v}_1, \dots, \mathbf{v}_k$ are linearly dependent. - -As a consequence, the zero vector cannot possibly belong to any collection of vectors that is linearly independent. - -Now consider the special case where the sequence $\mathbf{v}_1, \dots, \mathbf{v}_k$ has length $1$ (i.e. the case where $k = 1$). - -A collection of vectors that consists of exactly one vector is linearly dependent if and only if that vector is zero. - -Explicitly, if $\mathbf{v}_1$ is any vector then the sequence $\mathbf{v}_1$ (which is a sequence of length $1$) is linearly dependent if and only if $\mathbf{v}_1 = \mathbf{0}$; alternatively, the collection $\mathbf{v}_1$ is linearly independent if and only if $\mathbf{v}_1 \neq \mathbf{0}.$ - -This example considers the special case where there are exactly two vectors $\mathbf{u}$ and $\mathbf{v}$ from some real or complex vector space.
- -The vectors $\mathbf{u}$ and $\mathbf{v}$ are linearly dependent if and only if at least one of the following is true: - -# $\mathbf{u}$ is a scalar multiple of $\mathbf{v}$ (explicitly, this means that there exists a scalar $c$ such that $\mathbf{u} = c \mathbf{v}$) or - -# $\mathbf{v}$ is a scalar multiple of $\mathbf{u}$ (explicitly, this means that there exists a scalar $c$ such that $\mathbf{v} = c \mathbf{u}$). - -If $\mathbf{u} = \mathbf{0}$ then by setting $c := 0$ we have $c \mathbf{v} = 0 \mathbf{v} = \mathbf{0} = \mathbf{u}$ (this equality holds no matter what the value of $\mathbf{v}$ is), which shows that (1) is true in this particular case. Similarly, if $\mathbf{v} = \mathbf{0}$ then (2) is true because $\mathbf{v} = 0 \mathbf{u}.$ - -If $\mathbf{u} = \mathbf{v}$ (for instance, if they are both equal to the zero vector $\mathbf{0}$) then both (1) and (2) are true (by using $c := 1$ for both). - -If $\mathbf{u} = c \mathbf{v}$ then $\mathbf{u} \neq \mathbf{0}$ is only possible if $c \neq 0$ and $\mathbf{v} \neq \mathbf{0}$; in this case, it is possible to multiply both sides by $\frac{1}{c}$ to conclude $\mathbf{v} = \frac{1}{c} \mathbf{u}.$ - -This shows that if $\mathbf{u} \neq \mathbf{0}$ and $\mathbf{v} \neq \mathbf{0}$ then (1) is true if and only if (2) is true; that is, in this particular case either both (1) and (2) are true (and the vectors are linearly dependent) or else both (1) and (2) are false (and the vectors are linearly independent). - -If $\mathbf{u} = c \mathbf{v}$ but instead $\mathbf{u} = \mathbf{0}$ then at least one of $c$ and $\mathbf{v}$ must be zero. - -Moreover, if exactly one of $\mathbf{u}$ and $\mathbf{v}$ is $\mathbf{0}$ (while the other is non-zero) then exactly one of (1) and (2) is true (with the other being false). - -The vectors $\mathbf{u}$ and $\mathbf{v}$ are linearly independent if and only if $\mathbf{u}$ is not a scalar multiple of $\mathbf{v}$ and $\mathbf{v}$ is not a scalar multiple of $\mathbf{u}$. - -Three vectors: Consider the set of vectors $\mathbf{v}_1 = (1, 1),$ $\mathbf{v}_2 = (-3, 2),$ and $\mathbf{v}_3 = (2, 4),$ then the condition for linear dependence seeks a set of non-zero scalars, such that -$$ -a_1 \begin{bmatrix} 1\\1\end{bmatrix} + a_2 \begin{bmatrix} -3\\2\end{bmatrix} + a_3 \begin{bmatrix} 2\\4\end{bmatrix} =\begin{bmatrix} 0\\0\end{bmatrix}, -$$ - -or -$$ -\begin{bmatrix} 1 & -3 & 2 \\ 1 & 2 & 4 \end{bmatrix}\begin{bmatrix} a_1\\ a_2 \\ a_3 \end{bmatrix}= \begin{bmatrix} 0\\0\end{bmatrix}. -$$ - -Row reduce this matrix equation by subtracting the first row from the second to obtain, -$$ -\begin{bmatrix} 1 & -3 & 2 \\ 0 & 5 & 2 \end{bmatrix}\begin{bmatrix} a_1\\ a_2 \\ a_3 \end{bmatrix}= \begin{bmatrix} 0\\0\end{bmatrix}. -$$ - -Continue the row reduction by (i) dividing the second row by 5, and then (ii) multiplying by 3 and adding to the first row, that is -$$ -\begin{bmatrix} 1 & 0 & 16/5 \\ 0 & 1 & 2/5 \end{bmatrix}\begin{bmatrix} a_1\\ a_2 \\ a_3 \end{bmatrix}= \begin{bmatrix} 0\\0\end{bmatrix}. -$$ - -Rearranging this equation allows us to obtain -$$ -\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} a_1\\ a_2 \end{bmatrix}= \begin{bmatrix} a_1\\ a_2 \end{bmatrix}=-a_3\begin{bmatrix} 16/5\\2/5\end{bmatrix}. -$$ - -which shows that non-zero ai exist such that $\mathbf{v}_3 = (2, 4)$ can be defined in terms of $\mathbf{v}_1 = (1, 1)$ and $\mathbf{v}_2 = (-3, 2).$ Thus, the three vectors are linearly dependent. 
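The row reduction above can be double-checked numerically; the least-squares solve below (a sketch assuming numpy) recovers the coefficients $16/5$ and $2/5$ expressing $\mathbf{v}_3$ in terms of $\mathbf{v}_1$ and $\mathbf{v}_2$:

```python
import numpy as np

v1, v2, v3 = np.array([1.0, 1.0]), np.array([-3.0, 2.0]), np.array([2.0, 4.0])

# Solve [v1 v2] c = v3 for c; an exact solution exists iff v3 is a combination.
coeffs, *_ = np.linalg.lstsq(np.column_stack([v1, v2]), v3, rcond=None)
print(coeffs)                                   # [3.2 0.4], i.e. 16/5 and 2/5
assert np.allclose(coeffs[0] * v1 + coeffs[1] * v2, v3)
```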
- -Two vectors: Now consider the linear dependence of the two vectors $\mathbf{v}_1 = (1, 1)$ and $\mathbf{v}_2 = (-3, 2),$ and check, -$$ -a_1 \begin{bmatrix} 1\\1\end{bmatrix} + a_2 \begin{bmatrix} -3\\2\end{bmatrix} =\begin{bmatrix} 0\\0\end{bmatrix}, -$$ - -or -$$ -\begin{bmatrix} 1 & -3 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} a_1\\ a_2 \end{bmatrix}= \begin{bmatrix} 0\\0\end{bmatrix}. -$$ - -The same row reduction presented above yields, -$$ -\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} a_1\\ a_2 \end{bmatrix}= \begin{bmatrix} 0\\0\end{bmatrix}. -$$ - -This shows that $a_i = 0,$ which means that the vectors $\mathbf{v}_1 = (1, 1)$ and $\mathbf{v}_2 = (-3, 2)$ are linearly independent. - -In order to determine if the three vectors in $\mathbb{R}^4,$ -$$ -\mathbf{v}_1= \begin{bmatrix}1\\4\\2\\-3\end{bmatrix}, \mathbf{v}_2=\begin{bmatrix}7\\10\\-4\\-1\end{bmatrix}, \mathbf{v}_3=\begin{bmatrix}-2\\1\\5\\-4\end{bmatrix}, -$$ - -are linearly dependent, form the matrix equation, -$$ -\begin{bmatrix}1&7&-2\\4& 10& 1\\2&-4&5\\-3&-1&-4\end{bmatrix}\begin{bmatrix} a_1\\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix}0\\0\\0\\0\end{bmatrix}. -$$ - -Row reduce this equation to obtain, -$$ -\begin{bmatrix} 1& 7 & -2 \\ 0& -18& 9\\ 0 & 0 & 0\\ 0& 0& 0\end{bmatrix} \begin{bmatrix} a_1\\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix}0\\0\\0\\0\end{bmatrix}. -$$ - -Rearrange to solve for $\mathbf{v}_3$ and obtain, -$$ -\begin{bmatrix} 1& 7 \\ 0& -18 \end{bmatrix} \begin{bmatrix} a_1\\ a_2 \end{bmatrix} = -a_3\begin{bmatrix}-2\\9\end{bmatrix}. -$$ - -This equation is easily solved to define non-zero $a_i$, -$$ -a_1 = -3 a_3 /2, \quad a_2 = a_3/2, -$$ - -where $a_3$ can be chosen arbitrarily. Thus, the vectors $\mathbf{v}_1, \mathbf{v}_2,$ and $\mathbf{v}_3$ are linearly dependent. - -An alternative method relies on the fact that $n$ vectors in $\mathbb{R}^n$ are linearly independent if and only if the determinant of the matrix formed by taking the vectors as its columns is non-zero. - -In this case, the matrix formed by the vectors is -$$ -A = \begin{bmatrix}1&-3\\1&2\end{bmatrix} . -$$ - -We may write a linear combination of the columns as -$$ -A \Lambda = \begin{bmatrix}1&-3\\1&2\end{bmatrix} \begin{bmatrix}\lambda_1 \\ \lambda_2 \end{bmatrix} . -$$ - -We are interested in whether $A\Lambda = \mathbf{0}$ for some nonzero vector $\Lambda$. This depends on the determinant of $A$, which is -$$ -\det A = 1\cdot2 - 1\cdot(-3) = 5 \ne 0. -$$ - -Since the determinant is non-zero, the vectors $(1, 1)$ and $(-3, 2)$ are linearly independent. - -Otherwise, suppose we have $m$ vectors of $n$ coordinates, with $m < n.$ Then $A$ is an $n \times m$ matrix and $\Lambda$ is a column vector with $m$ entries, and we are again interested in $A\Lambda = \mathbf{0}$. As we saw previously, this is equivalent to a list of $n$ equations. Consider the first $m$ rows of $A$, the first $m$ equations; any solution of the full list of equations must also be true of the reduced list. In fact, if $\langle i_1,\dots,i_m \rangle$ is any list of $m$ rows, then the equation must be true for those rows. -$$ -A_{\langle i_1,\dots,i_m \rangle} \Lambda = \mathbf{0} . -$$ - -Furthermore, the reverse is true. That is, we can test whether the $m$ vectors are linearly dependent by testing whether -$$ -\det A_{\langle i_1,\dots,i_m \rangle} = 0 -$$ - -for all possible lists of $m$ rows. (In case $m = n$, this requires only one determinant, as above. If $m > n$, then it is a theorem that the vectors must be linearly dependent.) This fact is valuable for theory; in practical calculations more efficient methods are available.
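- -The determinant criterion just described can also be checked numerically. In this hedged sketch (again assuming Python with NumPy; the variable names are ours), the first block applies the $m = n$ determinant test to $(1, 1)$ and $(-3, 2)$, and the second applies the all-row-minors test to the three vectors in $\mathbb{R}^4$ from the example above:

```python
import numpy as np
from itertools import combinations

# m = n case: determinant test with v1 = (1, 1), v2 = (-3, 2) as columns.
A = np.array([[1.0, -3.0],
              [1.0,  2.0]])
print(np.linalg.det(A))                # 5.0 != 0: linearly independent

# m < n case: the three vectors in R^4 are dependent iff every 3-by-3
# row minor of the 4-by-3 matrix vanishes.
B = np.array([[ 1.0,  7.0, -2.0],
              [ 4.0, 10.0,  1.0],
              [ 2.0, -4.0,  5.0],
              [-3.0, -1.0, -4.0]])
minors = [np.linalg.det(B[list(rows), :]) for rows in combinations(range(4), 3)]
print(np.allclose(minors, 0.0))        # True: v1, v2, v3 are dependent
```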
- -If there are more vectors than dimensions, the vectors are linearly dependent. This is illustrated in the example above of three vectors in $\R^2.$ - -Let $V = \R^n$ and consider the following elements in $V$, known as the natural basis vectors: -$$ -\begin{matrix} -\mathbf{e}_1 & = & (1,0,0,\ldots,0) \\ -\mathbf{e}_2 & = & (0,1,0,\ldots,0) \\ -& \vdots \\ -\mathbf{e}_n & = & (0,0,0,\ldots,1). -\end{matrix} -$$ - -Then $\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n$ are linearly independent. - -Proof: Suppose that $a_1, a_2, \ldots, a_n$ are real numbers such that -$$ -a_1 \mathbf{e}_1 + a_2 \mathbf{e}_2 + \cdots + a_n \mathbf{e}_n = \mathbf{0}. -$$ - -Since -$$ -a_1 \mathbf{e}_1 + a_2 \mathbf{e}_2 + \cdots + a_n \mathbf{e}_n = \left( a_1 ,a_2 ,\ldots, a_n \right), -$$ - -then $a_i = 0$ for all $i = 1, \ldots, n.$ - -Let $V$ be the vector space of all differentiable functions of a real variable $t$. Then the functions $e^t$ and $e^{2t}$ in $V$ are linearly independent. - -Suppose $a$ and $b$ are two real numbers such that -$$ -a e^t + b e^{2t} = 0 -$$ - -for all values of $t.$ We need to show that $a = 0$ and $b = 0.$ Take the first derivative of the above equation: -$$ -a e^t + 2b e^{2t} = 0 -$$ - -for all values of $t.$ Subtracting the first equation from the second gives $be^{2t} = 0$. Since $e^{2t}$ is not zero for some $t$, $b=0.$ It follows that $a = 0$ too. Therefore, according to the definition of linear independence, $e^{t}$ and $e^{2t}$ are linearly independent. - -A linear dependency or linear relation among vectors $\mathbf{v}_1, \ldots, \mathbf{v}_n$ is a tuple $(a_1, \ldots, a_n)$ with $n$ scalar components such that -$$ -a_1 \mathbf{v}_1 + \cdots + a_n \mathbf{v}_n= \mathbf{0}. -$$ - -If such a linear dependence exists with at least a nonzero component, then the $n$ vectors are linearly dependent. Linear dependencies among $\mathbf{v}_1, \ldots, \mathbf{v}_n$ form a vector space. - -If the vectors are expressed by their coordinates, then the linear dependencies are the solutions of a homogeneous system of linear equations, with the coordinates of the vectors as coefficients. A basis of the vector space of linear dependencies can therefore be computed by Gaussian elimination. - -A set of vectors is said to be affinely dependent if at least one of the vectors in the set can be defined as an affine combination of the others. Otherwise, the set is called affinely independent. Any affine combination is a linear combination; therefore every affinely dependent set is linearly dependent. Conversely, every linearly independent set is affinely independent. - -Consider a set of $m$ vectors $\mathbf{v}_1, \ldots, \mathbf{v}_m$ of size $n$ each, and consider the set of $m$ augmented vectors $\left(\left[\begin{smallmatrix} 1 \\ \mathbf{v}_1\end{smallmatrix}\right], \ldots, \left[\begin{smallmatrix}1 \\ \mathbf{v}_m\end{smallmatrix}\right]\right)$ of size $n + 1$ each. The original vectors are affinely independent if and only if the augmented vectors are linearly independent. - -Two vector subspaces $M$ and $N$ of a vector space $X$ are said to be linearly independent if $M \cap N = \{0\}.$ - -More generally, a collection $M_1, \ldots, M_d$ of subspaces of $X$ are said to be linearly independent if $M_i \cap \sum_{k \neq i} M_k = \{0\}$ for every index $i,$ where $\sum_{k \neq i} M_k = \Big\{m_1 + \cdots + m_{i-1} + m_{i+1} + \cdots + m_d : m_k \in M_k \text{ for all } k\Big\} = \operatorname{span} \bigcup_{k \neq i} M_k.$
- -The vector space $X$ is said to be a direct sum of $M_1, \ldots, M_d$ if these subspaces are linearly independent and $M_1 + \cdots + M_d = X.$ diff --git a/wiki/wikipedia/3887.txt b/wiki/wikipedia/3887.txt deleted file mode 100644 index 50b28c4e4acead159faafb6fc96b32669b91813a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3887.txt +++ /dev/null @@ -1,29 +0,0 @@ -In computational complexity theory, a planted clique or hidden clique in an undirected graph is a clique formed from another graph by selecting a subset of vertices and adding edges between each pair of vertices in the subset. The planted clique problem is the algorithmic problem of distinguishing random graphs from graphs that have a planted clique. This is a variation of the clique problem; it may be solved in quasi-polynomial time but is conjectured not to be solvable in polynomial time for intermediate values of the clique size. The conjecture that no polynomial time solution exists is called the planted clique conjecture; it has been used as a computational hardness assumption. - -A clique in a graph is a subset of vertices, all of which are adjacent to each other. A planted clique is a clique created from another graph by adding edges between all pairs of a selected subset of vertices. - -The planted clique problem can be formalized as a decision problem over a random distribution on graphs, parameterized by two numbers, n (the number of vertices), and k (the size of the clique). - -These parameters may be used to generate a graph, by the following random process: - -#Create an Erdős–Rényi random graph on n vertices by choosing independently for each pair of vertices whether to include an edge connecting that pair, with probability 1/2 for each pair. - -#Decide whether or not to add a clique to the graph, with probability 1/2; if not, return the graph formed in step 1. - -#Choose randomly a subset of k of the n vertices and add an edge (if one is not already present) between each pair of the selected vertices. - -The problem is then to determine algorithmically whether one of the graphs resulting from this process contains a clique of at least k vertices. - -With high probability, the size of the largest clique in an n-vertex random graph is close to $2 \log_2 n$. And when k is larger than the square root of n, the vertices of a planted clique can be recognized as having unusually large degrees, making a planted clique easy to find. Therefore, the most interesting range of values for the parameter k is between these two values. - -Because the largest clique in a random graph typically has size near $2 \log_2 n$, a planted clique of size k (if it exists) can be found with high probability by the following method: - -#Loop through all sets S of $\min(k,3\log_2 n)$ vertices. - -#For each choice of S, test whether S is a clique. If it is, and $|S| = k$, return S. Otherwise, find the set T of vertices that are adjacent to all vertices in S. If $|T|\ge k$, return T. - -The running time of this algorithm is quasipolynomial, because there are quasipolynomially many choices of S to loop over. This method is guaranteed to try a set S that is a subset of the planted clique; with high probability, the set T will consist only of other members of the planted clique.
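- -The sampling process and the easy large-k regime can be made concrete with a short sketch (assuming Python; the function name planted_clique_instance is our own):

```python
import random

def planted_clique_instance(n, k, seed=0):
    """Sample a graph by the process described above: G(n, 1/2), then with
    probability 1/2 plant a clique on k randomly chosen vertices."""
    rng = random.Random(seed)
    adj = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            adj[i][j] = adj[j][i] = (rng.random() < 0.5)
    clique = None
    if rng.random() < 0.5:
        clique = rng.sample(range(n), k)
        for i in clique:
            for j in clique:
                if i != j:
                    adj[i][j] = True
    return adj, clique

# For k well above sqrt(n), clique members tend to have unusually high
# degree, which is why that regime is easy:
adj, clique = planted_clique_instance(n=200, k=40, seed=1)
if clique is not None:
    by_degree = sorted(range(200), key=lambda v: -sum(adj[v]))
    print(len(set(by_degree[:40]) & set(clique)))  # typically most of the clique
```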
- -The planted clique conjecture is the conjecture that there is no polynomial time algorithm that takes as input graphs produced by the planted clique process and distinguishes the ones with planted cliques from the ones without planted cliques with probability significantly better than random chance. - -Hazan used the assumption that finding planted cliques is hard as a computational hardness assumption to prove that, if so, it is also hard to approximate the best Nash equilibrium in a two-player game. The planted clique conjecture has also been used as a hardness assumption to show the difficulty of property testing k-independence of random distributions, finding clusters in social networks, and machine learning. diff --git a/wiki/wikipedia/3888.txt b/wiki/wikipedia/3888.txt deleted file mode 100644 index 5d41b4c14c354ec978f20f88287948d9701d73c3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3888.txt +++ /dev/null @@ -1,70 +0,0 @@ -In probability theory, the Girsanov theorem (named after Igor Vladimirovich Girsanov) describes how the dynamics of stochastic processes change when the original measure is changed to an equivalent probability measure. The theorem is especially important in the theory of financial mathematics as it tells how to convert from the physical measure, which describes the probability that an underlying instrument (such as a share price or interest rate) will take a particular value or values, to the risk-neutral measure which is a very useful tool for pricing derivatives on the underlying instrument. - -Results of this type were first proved by Cameron–Martin in the 1940s and by Girsanov in 1960. They have subsequently been extended to more general classes of processes, culminating in the general form of Lenglart (1977). - -Girsanov's theorem is important in the general theory of stochastic processes since it enables the key result that if Q is an absolutely continuous measure with respect to P then every P-semimartingale is a Q-semimartingale. - -We state the theorem first for the special case when the underlying stochastic process is a Wiener process. This special case is sufficient for risk-neutral pricing in the Black–Scholes model and in many other models (for example, all continuous models). - -Let $\{W_t\}$ be a Wiener process on the Wiener probability space $(\Omega,\mathcal{F},P)$. Let $\{X_t\}$ be a measurable process adapted to the natural filtration of the Wiener process $\{\mathcal{F}^W_t\}$ with $X_0 = 0$. - -Define the Doléans-Dade exponential $\mathcal{E}(X)_t$ of X with respect to W -$$ -\mathcal{E}(X)_t=\exp \left ( X_t - \frac{1}{2} [X]_t \right ), -$$ - -where $[X]_t$ is the quadratic variation of $X_t$. If $\mathcal{E}(X)_t$ is a strictly positive martingale, a probability measure Q can be defined on $(\Omega,\mathcal{F})$ such that the Radon–Nikodym derivative is -$$ -\frac{d Q}{d P} \bigg|_{\mathcal{F}_t} = \mathcal{E} (X )_t. -$$ - -Then for each t the measure Q restricted to the unaugmented sigma fields $\mathcal{F}^W_t$ is equivalent to P restricted to $\mathcal{F}^W_t $. Furthermore, if Y is a local martingale under P, then the process -$$ -\tilde Y_t = Y_t - \left[ Y,X \right]_t -$$ - -is a Q local martingale on the filtered probability space $(\Omega,\mathcal{F},\{\mathcal{F}^W_t\},Q)$. - -If X is a continuous process and W is Brownian motion under measure P then -$$ - \tilde W_t =W_t - \left [ W, X \right]_t -$$ - -is Brownian motion under Q.
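- -Before turning to why this holds, here is a hedged Monte-Carlo illustration of the statement (assuming Python with NumPy; the constant-drift choice $X_t = \theta W_t$ is our own simplification, not part of the theorem). With this choice, $[X]_t = \theta^2 t$, so $\mathcal{E}(X)_T = \exp(\theta W_T - \tfrac{1}{2}\theta^2 T)$ and $\tilde W_T = W_T - \theta T$ should be distributed as $N(0, T)$ under Q:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, T, n_paths = 0.7, 1.0, 1_000_000

W_T = rng.normal(0.0, np.sqrt(T), size=n_paths)       # W_T sampled under P
density = np.exp(theta * W_T - 0.5 * theta**2 * T)    # dQ/dP restricted to F_T
W_tilde = W_T - theta * T                             # candidate Q-Brownian value

print(density.mean())                  # ~ 1: the density integrates to one
print((density * W_tilde).mean())      # ~ 0: Q-mean of W~_T
print((density * W_tilde**2).mean())   # ~ T: Q-variance of W~_T
```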
- -The fact that $ \tilde W_t$ is continuous is trivial; by Girsanov's theorem it is a Q local martingale, and computing the quadratic variation gives -$$ -\left[\tilde W \right]_t = \left[ W \right]_t - 2 \left[ W, [W, X] \right]_t + \left[ [W,X], [W,X] \right]_t = \left[ W \right]_t = t, -$$ -since the covariation $[W, X]_t$ is a continuous process of finite variation, so the two terms involving it vanish. It then follows from Lévy's characterization of Brownian motion that this is a Q Brownian motion. - -In many common applications, the process X is defined by -$$ -X_t = \int_0^t Y_s d W_s. -$$ - -If X is of this form, then a sufficient condition for $\mathcal{E}(X)$ to be a martingale is Novikov's condition, which requires that -$$ - E_P\left [\exp\left (\frac{1}{2}\int_0^T Y_s^2 ds\right )\right ] < \infty. -$$ - -The stochastic exponential $\mathcal{E}(X)$ is the process Z, which solves the stochastic differential equation -$$ - Z_t = 1 + \int_0^t Z_s d X_s. -$$ - -The measure Q constructed above is not equivalent to P on $\mathcal{F}_\infty$, as this would only be the case if the Radon–Nikodym derivative were a uniformly integrable martingale, which the exponential martingale described above is not (for $\lambda\ne0$). - -In finance, Girsanov's theorem is used each time one needs to derive an asset's or rate's dynamics under a new probability measure. The best-known case is moving from the historic measure P to the risk-neutral measure Q, which is done—in the Black–Scholes model—via the Radon–Nikodym derivative: -$$ - \frac{d Q}{d P} = \mathcal{E}\left ( \int_0^\cdot \frac{r - \mu }{\sigma} d W_s \right ) -$$ - -where $ r $ denotes the instantaneous risk free rate, $\mu$ the asset's drift and $\sigma$ its volatility. - -Other classical applications of Girsanov's theorem are quanto adjustments and the calculation of forwards' drifts under the LIBOR market model. diff --git a/wiki/wikipedia/3889.txt b/wiki/wikipedia/3889.txt deleted file mode 100644 index 2f7ce655d2536a99a4c63db64ae8bf98bb8bf295..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3889.txt +++ /dev/null @@ -1,71 +0,0 @@ -In computer science, in particular in formal language theory, the pumping lemma for context-free languages, also known as the Bar-Hillel lemma, is a lemma that gives a property shared by all context-free languages and generalizes the pumping lemma for regular languages. - -The pumping lemma can be used to construct a proof by contradiction that a specific language is not context-free. However, the pumping lemma does not suffice to guarantee that a language is context-free; there are other necessary conditions, such as Ogden's lemma or the Interchange lemma. - -If a language $L$ is context-free, then there exists some integer $p \geq 1$ (called a "pumping length") such that every string $s$ in $L$ that has a length of $p$ or more symbols (i.e. with $|s| \geq p$) can be written as -$$ -s = uvwxy -$$ - -with substrings $u, v, w, x$ and $y$, such that - -1. $|vx| \geq 1$, - -2. $|vwx| \leq p$, and - -3. $uv^n wx^n y \in L$ for all $n \geq 0$. - -Below is a formal expression of the Pumping Lemma. -$$ -\begin{array}{l} -(\forall L\subseteq \Sigma^*) \\ -\quad (\mbox{context free}(L) \Rightarrow \\ -\quad ((\exists p\geq 1) ((\forall s\in L) ((|s|\geq p) \Rightarrow \\ -\quad ((\exists u,v,w,x,y \in \Sigma^*) (s=uvwxy \land |vx|\geq 1 \land |vwx|\leq p \land (\forall n \geq 0)(uv^nwx^ny\in L))))))) -\end{array} -$$ - -The pumping lemma for context-free languages (called just "the pumping lemma" for the rest of this article) describes a property that all context-free languages are guaranteed to have.
- -The property is a property of all strings in the language that are of length at least $p$, where $p$ is a constant—called the pumping length—that varies between context-free languages. - -Say $s$ is a string of length at least $p$ that is in the language. - -The pumping lemma states that $s$ can be split into five substrings, $s = uvwxy$, where $vx$ is non-empty and the length of $vwx$ is at most $p$, such that repeating $v$ and $x$ the same number of times ($n$) in $s$ produces a string that is still in the language. It is often useful to repeat zero times, which removes $v$ and $x$ from the string. This process of "pumping up" additional copies of $v$ and $x$ is what gives the pumping lemma its name. - -Finite languages (which are regular and hence context-free) obey the pumping lemma trivially by having $p$ equal to the maximum string length in $L$ plus one. As there are no strings of this length, the pumping lemma is not violated. - -The pumping lemma is often used to prove that a given language $L$ is non-context-free, by showing that arbitrarily long strings $s$ are in $L$ that cannot be "pumped" without producing strings outside $L$. - -For example, the language $L = \{ a^n b^n c^n | n > 0 \}$ can be shown to be non-context-free by using the pumping lemma in a proof by contradiction. First, assume that $L$ is context free. By the pumping lemma, there exists an integer $p$ which is the pumping length of language $L$. Consider the string $s = a^p b^p c^p$ in $L$. The pumping lemma tells us that $s$ can be written in the form $s = uvwxy$, where $u, v, w, x,$ and $y$ are substrings, such that $|vx| \geq 1$, $|vwx| \leq p$, and $uv^i wx^i y \in L$ for every integer $i \geq 0$. By the choice of $s$ and the fact that $|vwx| \leq p$, it is easily seen that the substring $vwx$ can contain no more than two distinct symbols. That is, we have one of five possibilities for $vwx$: - -# $vwx = a^j$ for some $j \leq p$. - -# $vwx = a^j b^k$ for some $j$ and $k$ with $j+k \leq p$. - -# $vwx = b^j$ for some $j \leq p$. - -# $vwx = b^j c^k$ for some $j$ and $k$ with $j+k \leq p$. - -# $vwx = c^j$ for some $j \leq p$. - -For each case, it is easily verified that $uv^i wx^i y$ does not contain equal numbers of each letter for any $i \neq 1$. Thus, $uv^2 wx^2 y$ does not have the form $a^i b^i c^i$. This contradicts the definition of $L$. Therefore, our initial assumption that $L$ is context free must be false. - -While the pumping lemma is often a useful tool to prove that a given language is not context-free, it does not give a complete characterization of the context-free languages. If a language does not satisfy the condition given by the pumping lemma, we have established that it is not context-free. On the other hand, there are languages that are not context-free, but still satisfy the condition given by the pumping lemma, for example -$$ -L = \{ b^j c^k d^l | j, k, l \in \mathbb{N} \} \cup \{ a^i b^j c^j d^j | i, j \in \mathbb{N}, i \ge 1\}. -$$ - -For $s = b^j c^k d^l$ with, e.g., $j \geq 1$, choose $vwx$ to consist only of $b$'s; for $s = a^i b^j c^j d^j$, choose $vwx$ to consist only of $a$'s; in both cases all pumped strings are still in $L$. - -A precursor of the pumping lemma was used in 1960 by Scheinberg to prove that $L = \{ a^n b^n a^n | n > 0 \}$ is not context-free.
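- -The contradiction argument for $L = \{ a^n b^n c^n | n > 0 \}$ above can be exercised mechanically. The following sketch (assuming Python; in_L and pumps_out are hypothetical helper names of ours) enumerates every admissible split $s = uvwxy$ of $s = a^p b^p c^p$ and confirms that each one leaves the language for some $n \neq 1$, exactly as the case analysis predicts:

```python
def in_L(s):
    """Membership test for L = { a^n b^n c^n : n > 0 }."""
    n = len(s) // 3
    return n > 0 and s == "a" * n + "b" * n + "c" * n

def pumps_out(s, p):
    """True if every split s = uvwxy with |vwx| <= p and |vx| >= 1 leaves L
    for some pumping exponent n != 1 (checking n in {0, 2} suffices here)."""
    for i in range(len(s)):                         # start of vwx
        for l in range(1, p + 1):                   # |vwx| <= p
            if i + l > len(s):
                break
            u, vwx, y = s[:i], s[i:i + l], s[i + l:]
            for a in range(l + 1):
                for b in range(a, l + 1):
                    v, w, x = vwx[:a], vwx[a:b], vwx[b:]
                    if not v and not x:
                        continue                    # need |vx| >= 1
                    if all(in_L(u + v * n + w + x * n + y) for n in (0, 2)):
                        return False                # this split would survive pumping
    return True

p = 4                                               # a stand-in pumping length
print(pumps_out("a" * p + "b" * p + "c" * p, p))    # True: no split can be pumped
```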
diff --git a/wiki/wikipedia/389.txt b/wiki/wikipedia/389.txt deleted file mode 100644 index 16ccbd70d3081fe0087d125c4b4e86311a25eaaf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/389.txt +++ /dev/null @@ -1,49 +0,0 @@ -In computer science and formal methods, a SAT solver is a computer program which aims to solve the Boolean satisfiability problem. On input a formula over Boolean variables, such as "(x or y) and (x or not y)", a SAT solver outputs whether the formula is satisfiable, meaning that there are possible values of x and y which make the formula true, or unsatisfiable, meaning that there are no such values of x and y. In this case, the formula is satisfiable when x is true, so the solver should return "satisfiable". Since the introduction of algorithms for SAT in the 1960s, modern SAT solvers have grown into complex software artifacts involving a large number of heuristics and program optimizations to work efficiently. - -By a result known as the Cook–Levin theorem, Boolean satisfiability is an NP-complete problem in general. As a result, only algorithms with exponential worst-case complexity are known. In spite of this, efficient and scalable algorithms for SAT were developed during the 2000s and have contributed to dramatic advances in our ability to automatically solve problem instances involving tens of thousands of variables and millions of constraints. - -SAT solvers often begin by converting a formula to conjunctive normal form. They are often based on core algorithms such as the DPLL algorithm, but incorporate a number of extensions and features. Most SAT solvers include time-outs, so they will terminate in reasonable time even if they cannot find a solution, with an output such as "unknown". Often, SAT solvers do not just provide an answer, but can provide further information including an example assignment (values for x, y, etc.) in case the formula is satisfiable, or a minimal set of unsatisfiable clauses if the formula is unsatisfiable. - -Modern SAT solvers have had a significant impact on fields including software verification, program analysis, constraint solving, artificial intelligence, electronic design automation, and operations research. Powerful solvers are readily available as free and open source software. - -A DPLL SAT solver employs a systematic backtracking search procedure to explore the (exponentially sized) space of variable assignments looking for satisfying assignments. The basic search procedure was proposed in two seminal papers in the early 1960s (see references below) and is now commonly referred to as the Davis–Putnam–Logemann–Loveland algorithm ("DPLL" or "DLL"). Many modern approaches to practical SAT solving are derived from the DPLL algorithm and share the same structure. Often they only improve the efficiency of certain classes of SAT problems such as instances that appear in industrial applications or randomly generated instances. - -Modern SAT solvers (developed in the 2000s) come in two flavors: "conflict-driven" and "look-ahead". Both approaches descend from DPLL. Conflict-driven solvers, such as conflict-driven clause learning (CDCL), augment the basic DPLL search algorithm with efficient conflict analysis, clause learning, non-chronological backtracking (a.k.a. backjumping), as well as "two-watched-literals" unit propagation, adaptive branching, and random restarts.
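- -To make the basic DPLL skeleton concrete, here is a minimal sketch (our own illustration in Python, not the code of any particular solver): it performs only unit propagation and chronological backtracking, with none of the conflict-driven machinery listed above layered on top.

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL: clauses are sets of DIMACS-style integer literals
    (positive = variable true, negative = negated). Returns a (possibly
    partial) satisfying assignment as {variable: bool}, or None."""
    if assignment is None:
        assignment = {}
    # Unit propagation.
    while True:
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        new_clauses = []
        for c in clauses:
            if lit in c:
                continue                    # clause already satisfied
            if -lit in c:
                c = c - {-lit}              # literal falsified: shrink clause
                if not c:
                    return None             # empty clause: conflict
            new_clauses.append(c)
        clauses = new_clauses
    if not clauses:
        return assignment                   # all clauses satisfied
    # Branch on a literal of the first clause (chronological backtracking).
    lit = next(iter(clauses[0]))
    for choice in (lit, -lit):
        result = dpll(clauses + [{choice}], dict(assignment))
        if result is not None:
            return result
    return None

# The formula "(x or y) and (x or not y)" from the introduction, with x = 1, y = 2:
print(dpll([{1, 2}, {1, -2}]))              # a satisfying assignment with x true
```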
These "extras" to the basic systematic search have been empirically shown to be essential for handling the large SAT instances that arise in electronic design automation (EDA). Well known implementations include Chaff and GRASP. Look-ahead solvers have especially strengthened reductions (going beyond unit-clause propagation) and the heuristics, and they are generally stronger than conflict-driven solvers on hard instances (while conflict-driven solvers can be much better on large instances which actually have an easy instance inside). - -The conflict-driven , which was relatively successful at the , only has about 600 lines of code. A modern Parallel SAT solver is ManySAT. It can achieve super-linear speed-ups on important classes of problems. An example for look-ahead solvers is , which won a prize at the . - -Certain types of large random satisfiable instances of SAT can be solved by survey propagation (SP). Particularly in hardware design and verification applications, satisfiability and other logical properties of a given propositional formula are sometimes decided based on a representation of the formula as a binary decision diagram (BDD). - -Different SAT solvers will find different instances easy or hard, and some excel at proving unsatisfiability, and others at finding solutions. - -All of these behaviors can be seen in the SAT solving contests. - -Parallel SAT solvers come in three categories: portfolio, divide-and-conquer and parallel local search algorithms. With parallel portfolios, multiple different SAT solvers run concurrently. Each of them solves a copy of the SAT instance, whereas divide-and-conquer algorithms divide the problem between the processors. Different approaches exist to parallelize local search algorithms. - -The has a parallel track reflecting recent advances in parallel SAT solving. In 2016, 2017 and 2018, the benchmarks were run on a shared-memory system with 24 processing cores, therefore solvers intended for distributed memory or manycore processors might have fallen short. - -In general, there is no SAT solver that performs better than all other solvers on all SAT problems. An algorithm might perform well for problem instances others struggle with, but will do worse with other instances. Furthermore, given a SAT instance, there is no reliable way to predict which algorithm will solve this instance particularly fast. These limitations motivate the parallel portfolio approach. A portfolio is a set of different algorithms or different configurations of the same algorithm. All solvers in a parallel portfolio run on different processors to solve the same problem. If one solver terminates, the portfolio solver reports the problem to be satisfiable or unsatisfiable according to this one solver. All other solvers are terminated. Diversifying portfolios by including a variety of solvers, each performing well on a different set of problems, increases the robustness of the solver. - -Many solvers internally use a random number generator. Diversifying their seeds is a simple way to diversify a portfolio. Other diversification strategies involve enabling, disabling or diversifying certain heuristics in the sequential solver. - -One drawback of parallel portfolios is the amount of duplicate work. If clause learning is used in the sequential solvers, sharing learned clauses between parallel running solvers can reduce duplicate work and increase performance. Yet, even merely running a portfolio of the best solvers in parallel makes a competitive parallel solver.
An example of such a solver is PPfolio. It was designed to find a lower bound for the performance a parallel SAT solver should be able to deliver. Despite the large amount of duplicate work due to lack of optimizations, it performed well on a shared memory machine. HordeSat is a parallel portfolio solver for large clusters of computing nodes. It uses differently configured instances of the same sequential solver at its core. Particularly for hard SAT instances HordeSat can produce linear speedups and therefore reduce runtime significantly. - -In recent years parallel portfolio SAT solvers have dominated the parallel track of the . Notable examples of such solvers include Plingeling and painless-mcomsps. - -In contrast to parallel portfolios, parallel divide-and-conquer tries to split the search space between the processing elements. Divide-and-conquer algorithms, such as the sequential DPLL, already apply the technique of splitting the search space, hence their extension towards a parallel algorithm is straightforward. However, due to techniques like unit propagation, following a division, the partial problems may differ significantly in complexity. Thus the DPLL algorithm typically does not process each part of the search space in the same amount of time, yielding a challenging load balancing problem. The cube-and-conquer approach suggests solving in two phases. In the "cube" phase the problem is divided into many thousands, up to millions, of sections. This is done by a look-ahead solver, which finds a set of partial configurations called "cubes". A cube can also be seen as a conjunction of literals over a subset of the variables of the original formula. In conjunction with the formula, each of the cubes forms a new formula. These formulas can be solved independently and concurrently by conflict-driven solvers. As the disjunction of these formulas is equivalent to the original formula, the problem is reported to be satisfiable, if one of the formulas is satisfiable. The look-ahead solver is favorable for small but hard problems, so it is used to gradually divide the problem into multiple sub-problems. These sub-problems are easier but still large which is the ideal form for a conflict-driven solver. Furthermore, look-ahead solvers consider the entire problem whereas conflict-driven solvers make decisions based on information that is much more local. There are three heuristics involved in the cube phase. The variables in the cubes are chosen by the decision heuristic. The direction heuristic decides which variable assignment (true or false) to explore first. In satisfiable problem instances, choosing a satisfiable branch first is beneficial. The cutoff heuristic decides when to stop expanding a cube and instead forward it to a sequential conflict-driven solver. Preferably, the cubes are similarly complex to solve. - -One strategy towards a parallel local search algorithm for SAT solving is trying multiple variable flips concurrently on different processing units. Another is to apply the aforementioned portfolio approach, however clause sharing is not possible since local search solvers do not produce clauses. Alternatively, it is possible to share the configurations that are produced locally. These configurations can be used to guide the production of a new initial configuration when a local solver decides to restart its search. - -A SAT problem is often described in the DIMACS format: an input file in which each line represents a single disjunction.
For example, a file with the two lines - - - -1 -5 4 0 - --1 5 3 4 0 - - - -represents the formula "(x1 ∨ ¬x5 ∨ x4) ∧ (¬x1 ∨ x5 ∨ x3 ∨ x4)". - -Another common format for this formula is the 7-bit ASCII representation "(x1 | ~x5 | x4) & (~x1 | x5 | x3 | x4)". diff --git a/wiki/wikipedia/3890.txt b/wiki/wikipedia/3890.txt deleted file mode 100644 index 75c68f8c6222725761dff072bf682d4ff6e5522f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3890.txt +++ /dev/null @@ -1,49 +0,0 @@ -In mathematics, comparison theorems are theorems whose statement involves comparisons between various mathematical objects of the same type, and often occur in fields such as calculus, differential equations and Riemannian geometry. - -In the theory of differential equations, comparison theorems assert particular properties of solutions of a differential equation (or of a system thereof), provided that an auxiliary equation/inequality (or a system thereof) possesses a certain property. - -*Chaplygin inequality - -*Grönwall's inequality, and its various generalizations, provides a comparison principle for the solutions of first-order ordinary differential equations. - -*Sturm comparison theorem - -*Aronson and Weinberger used a comparison theorem to characterize solutions to Fisher's equation, a reaction–diffusion equation. - -*Hille–Wintner comparison theorem - -In Riemannian geometry, the term is a traditional name for a number of theorems that compare various metrics and provide various estimates. - -*Rauch comparison theorem relates the sectional curvature of a Riemannian manifold to the rate at which its geodesics spread apart. - -* Toponogov's theorem - -*Myers's theorem - -*Hessian comparison theorem - -*Laplacian comparison theorem - -*Morse–Schoenberg comparison theorem - -*Berger comparison theorem, Rauch–Berger comparison theorem - -*Berger–Kazdan comparison theorem - -*Warner comparison theorem for lengths of N-Jacobi fields (N being a submanifold of a complete Riemannian manifold) - -*Bishop–Gromov inequality, conditional on a lower bound for the Ricci curvatures - -*Lichnerowicz comparison theorem - -*Eigenvalue comparison theorem - -**Cheng's eigenvalue comparison theorem - -* See also: Comparison triangle - -*Limit comparison theorem, about convergence of series - -*Comparison theorem for integrals, about convergence of integrals - -*Zeeman's comparison theorem, a technical tool from the theory of spectral sequences diff --git a/wiki/wikipedia/3891.txt b/wiki/wikipedia/3891.txt deleted file mode 100644 index 09204b6c1b1ae041e0e2552b279c5817316c7191..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3891.txt +++ /dev/null @@ -1,13 +0,0 @@ -The Morita conjectures in general topology are certain problems about normal spaces, now solved in the affirmative. The conjectures, formulated by Kiiti Morita in 1976, asked - -# If $X \times Y$ is normal for every normal space Y, is X a discrete space? - -# If $X \times Y$ is normal for every normal P-space Y, is X metrizable? - -# If $X \times Y$ is normal for every normal countably paracompact space Y, is X metrizable and sigma-locally compact? - -The answers were believed to be affirmative. Here a normal P-space Y is characterised by the property that the product with every metrizable X is normal; thus the conjecture was that the converse holds. - -Keiko Chiba, Teodor C.
Przymusiński, and Mary Ellen Rudin proved conjecture (1) and showed that conjectures (2) and (3) cannot be proven false under the standard ZFC axioms for mathematics (specifically, that the conjectures hold under the axiom of constructibility V=L). - -Fifteen years later, Zoltán Tibor Balogh succeeded in showing that conjectures (2) and (3) are true. diff --git a/wiki/wikipedia/3892.txt b/wiki/wikipedia/3892.txt deleted file mode 100644 index 80cdf90873d18fe1c96f63a21707cb246e941afa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3892.txt +++ /dev/null @@ -1,40 +0,0 @@ -The fixed-point lemma for normal functions is a basic result in axiomatic set theory stating that any normal function has arbitrarily large fixed points (Levy 1979: p. 117). It was first proved by Oswald Veblen in 1908. - -A normal function is a class function $f$ from the class Ord of ordinal numbers to itself such that: - -* $f$ is strictly increasing: $f(\alpha) < f(\beta)$ whenever $\alpha < \beta$; - -* $f$ is continuous: for every limit ordinal $\lambda$, $f(\lambda) = \sup \{ f(\alpha) : \alpha < \lambda \}$. - -Given any ordinal $\alpha$, the supremum of the sequence $\alpha, f(\alpha), f(f(\alpha)), \ldots$ is, by continuity, a fixed point of $f$ that is at least $\alpha$; this is the construction behind the lemma. For example, the function mapping each ordinal $\alpha$ to $\omega_\alpha$ is normal (see initial ordinal). Thus, there exists an ordinal $\theta$ such that $\theta = \omega_\theta$. In fact, the lemma shows that there is a closed, unbounded class of such $\theta$. diff --git a/wiki/wikipedia/3893.txt b/wiki/wikipedia/3893.txt deleted file mode 100644 index 7bcb5c2ffe6ac33fa1f1abcdc2f0f0fa4494811a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3893.txt +++ /dev/null @@ -1,41 +0,0 @@ -In mathematics, the Lions–Lax–Milgram theorem (or simply Lions's theorem) is a result in functional analysis with applications in the study of partial differential equations. It is a generalization of the famous Lax–Milgram theorem, which gives conditions under which a bilinear function can be "inverted" to show the existence and uniqueness of a weak solution to a given boundary value problem. The result is named after the mathematicians Jacques-Louis Lions, Peter Lax and Arthur Milgram. - -Let H be a Hilbert space and V a normed space. Let B : H × V → R be a continuous bilinear function. Then the following are equivalent: - -* (coercivity) for some constant c > 0, -$$ -\inf_{\| v \|_{V} = 1} \sup_{\| h \|_{H} \leq 1} | B(h, v) | \geq c; -$$ - -* (existence of a "weak inverse") for each continuous linear functional f ∈ V*, there is an element h ∈ H such that -$$ -B(h, v) = \langle f, v \rangle \mbox{ for all } v \in V. -$$ - -The Lions–Lax–Milgram theorem can be applied by using the following result, the hypotheses of which are quite common and easy to verify in practical applications: - -Suppose that V is continuously embedded in H and that B is V-elliptic, i.e. - -* for some c > 0 and all v ∈ V, -$$ -\| v \|_{H} \leq c \| v \|_{V}; -$$ - -* for some α > 0 and all v ∈ V, -$$ -B(v, v) \geq \alpha \| v \|_{V}^{2}. -$$ - -Then the above coercivity condition (and hence the existence result) holds. - -Lions's generalization is an important one since it allows one to tackle boundary value problems beyond the Hilbert space setting of the original Lax–Milgram theory. To illustrate the power of Lions's theorem, consider the heat equation in n spatial dimensions (x) and one time dimension (t): -$$ -\partial_{t} u (t, x) = \Delta u (t, x), -$$ - -where Δ denotes the Laplace operator. Two questions arise immediately: on what domain in spacetime is the heat equation to be solved, and what boundary conditions are to be imposed? The first question - the shape of the domain - is the one in which the power of the Lions–Lax–Milgram theorem can be seen.
In simple settings, it suffices to consider cylindrical domains: i.e., one fixes a spatial region of interest, Ω, and a maximal time, T ∈ (0, +∞], and proceeds to solve the heat equation on the "cylinder" -$$ -[0, T) \times \Omega \subseteq [0, + \infty) \times \mathbf{R}^{n}. -$$ - -One can then proceed to solve the heat equation using classical Lax–Milgram theory (and/or Galerkin approximations) on each "time slice" {t} × Ω. This is all very well if one only wishes to solve the heat equation on a domain that does not change its shape as a function of time. However, there are many applications for which this is not true: for example, if one wishes to solve the heat equation on the polar ice cap, one must take account of the changing shape of the volume of ice as it evaporates and/or icebergs break away. In other words, one must at least be able to handle domains G in spacetime that do not look the same along each "time slice". (There is also the added complication of domains whose shape changes according to the solution u of the problem itself.) Such domains and boundary conditions are beyond the reach of classical Lax–Milgram theory, but can be attacked using Lions's theorem. diff --git a/wiki/wikipedia/3894.txt b/wiki/wikipedia/3894.txt deleted file mode 100644 index fc13eb5a77d1018f827da0697980c6eabefd5be6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3894.txt +++ /dev/null @@ -1,47 +0,0 @@ -In graph theory, an isomorphism of graphs G and H is a bijection between the vertex sets of G and H -$$ - f \colon V(G) \to V(H) -$$ - -such that any two vertices u and v of G are adjacent in G if and only if $f(u)$ and $f(v)$ are adjacent in H. This kind of bijection is commonly described as "edge-preserving bijection", in accordance with the general notion of isomorphism being a structure-preserving bijection. - -If an isomorphism exists between two graphs, then the graphs are called isomorphic and denoted as $G\simeq H$. In the case when the bijection is a mapping of a graph onto itself, i.e., when G and H are one and the same graph, the bijection is called an automorphism of G. - -Graph isomorphism is an equivalence relation on graphs and as such it partitions the class of all graphs into equivalence classes. A set of graphs isomorphic to each other is called an isomorphism class of graphs. - -The two graphs shown below are isomorphic, despite their different looking drawings. - -In the above definition, graphs are understood to be undirected, non-labeled, non-weighted graphs. However, the notion of isomorphism may be applied to all other variants of the notion of graph, by adding the requirements to preserve the corresponding additional elements of structure: arc directions, edge weights, etc., with the following exception. - -For labeled graphs, two definitions of isomorphism are in use. - -Under one definition, an isomorphism is a vertex bijection which is both edge-preserving and label-preserving. - -Under another definition, an isomorphism is an edge-preserving vertex bijection which preserves equivalence classes of labels, i.e., vertices with equivalent (e.g., the same) labels are mapped onto the vertices with equivalent labels and vice versa; same with edge labels. - -For example, the $K_2$ graph with the two vertices labelled with 1 and 2 has a single automorphism under the first definition, but under the second definition there are two automorphisms.
The second definition is assumed in certain situations when graphs are endowed with unique labels commonly taken from the integer range 1, ..., n, where n is the number of the vertices of the graph, used only to uniquely identify the vertices. In such cases two labeled graphs are sometimes said to be isomorphic if the corresponding underlying unlabeled graphs are isomorphic (otherwise the definition of isomorphism would be trivial). - -The formal notion of "isomorphism", e.g., of "graph isomorphism", captures the informal notion that some objects have "the same structure" if one ignores individual distinctions of "atomic" components of objects in question. Whenever individuality of "atomic" components (vertices and edges, for graphs) is important for correct representation of whatever is modeled by graphs, the model is refined by imposing additional restrictions on the structure, and other mathematical objects are used: digraphs, labeled graphs, colored graphs, rooted trees and so on. The isomorphism relation may also be defined for all these generalizations of graphs: the isomorphism bijection must preserve the elements of structure which define the object type in question: arcs, labels, vertex/edge colors, the root of the rooted tree, etc. - -The notion of "graph isomorphism" allows us to distinguish graph properties inherent to the structures of graphs themselves from properties associated with graph representations: graph drawings, data structures for graphs, graph labelings, etc. For example, if a graph has exactly one cycle, then all graphs in its isomorphism class also have exactly one cycle. On the other hand, in the common case when the vertices of a graph are (represented by) the integers 1, 2, ..., N, then the expression -$$ -\sum_{v \in V(G)} v\cdot\text{deg }v -$$ - -may be different for two isomorphic graphs. - -The Whitney graph isomorphism theorem, shown by Hassler Whitney, states that two connected graphs are isomorphic if and only if their line graphs are isomorphic, with a single exception: $K_3$, the complete graph on three vertices, and the complete bipartite graph $K_{1,3}$, which are not isomorphic but both have $K_3$ as their line graph. The Whitney graph theorem can be extended to hypergraphs. - -While graph isomorphism may be studied in a classical mathematical way, as exemplified by the Whitney theorem, it is recognized that it is a problem to be tackled with an algorithmic approach. - -Its practical applications include primarily cheminformatics, mathematical chemistry (identification of chemical compounds), and electronic design automation (verification of equivalence of various representations of the design of an electronic circuit). - -The graph isomorphism problem is one of a few standard problems in computational complexity theory belonging to NP, but not known to belong to either of its well-known (and, if P ≠ NP, disjoint) subsets: P and NP-complete. It is one of only two, out of 12 total, problems listed in Garey whose complexity remains unresolved, the other being integer factorization. It is, however, known that if the problem is NP-complete then the polynomial hierarchy collapses to a finite level. - -In November 2015, László Babai, a mathematician and computer scientist at the University of Chicago, claimed to have proven that the graph isomorphism problem is solvable in quasi-polynomial time.
He published preliminary versions of these results in the proceedings of the 2016 Symposium on Theory of Computing, and of the 2018 International Congress of Mathematicians. In January 2017, Babai briefly retracted the quasi-polynomiality claim and stated a sub-exponential time complexity bound instead. He restored the original claim five days later. The full journal version of Babai's paper has not yet been published. - -Its generalization, the subgraph isomorphism problem, is known to be NP-complete. - -The main areas of research for the problem are design of fast algorithms and theoretical investigations of its computational complexity, both for the general problem and for special classes of graphs. diff --git a/wiki/wikipedia/3895.txt b/wiki/wikipedia/3895.txt deleted file mode 100644 index 57cc6dc5746c1bad4b4218f631ca0e7daabbeb61..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3895.txt +++ /dev/null @@ -1,3 +0,0 @@ -Thomsen's theorem, named after Gerhard Thomsen, is a theorem in elementary geometry. It shows that a certain path constructed by line segments being parallel to the edges of a triangle always ends up at its starting point. - -Consider an arbitrary triangle ABC with a point P1 on its edge BC. A sequence of points and parallel lines is constructed as follows. The parallel line to AC through P1 intersects AB in P2 and the parallel line to BC through P2 intersects AC in P3. Continuing in this fashion the parallel line to AB through P3 intersects BC in P4 and the parallel line to AC through P4 intersects AB in P5. Finally the parallel line to BC through P5 intersects AC in P6 and the parallel line to AB through P6 intersects BC in P7. Thomsen's theorem now states that P7 is identical to P1 and hence the construction always leads to a closed path P1P2P3P4P5P6P1. diff --git a/wiki/wikipedia/3896.txt b/wiki/wikipedia/3896.txt deleted file mode 100644 index 18b2007aa3b692bacbcaef3ca7399fb628df5161..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3896.txt +++ /dev/null @@ -1,245 +0,0 @@ -In functional analysis, the concept of a compact operator on Hilbert space is an extension of the concept of a matrix acting on a finite-dimensional vector space; in Hilbert space, compact operators are precisely the closure of finite-rank operators (representable by finite-dimensional matrices) in the topology induced by the operator norm. As such, results from matrix theory can sometimes be extended to compact operators using similar arguments. By contrast, the study of general operators on infinite-dimensional spaces often requires a genuinely different approach. - -For example, the spectral theory of compact operators on Banach spaces takes a form that is very similar to the Jordan canonical form of matrices. In the context of Hilbert spaces, a square matrix is unitarily diagonalizable if and only if it is normal. A corresponding result holds for normal compact operators on Hilbert spaces. More generally, the compactness assumption can be dropped. As stated above, the techniques used to prove, e.g., the spectral theorem, are different, involving operator-valued measures on the spectrum. - -Some results for compact operators on Hilbert space will be discussed, starting with general properties before considering subclasses of compact operators. - -Let $H$ be a Hilbert space and $L(H)$ be the set of bounded operators on $H$. Then, an operator $T\in L(H)$ is said to be a compact operator if the image of each bounded set under $T$ is relatively compact.
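- -A hedged finite-dimensional illustration of this definition (assuming Python with NumPy; the particular operator is our own choice, anticipating an example discussed below): truncating a diagonal operator whose eigenvalues tend to zero shows the sense in which compact operators are norm-limits of finite-rank ones.

```python
import numpy as np

# The diagonal operator T e_n = n^{-2} e_n, truncated to N coordinates.
# The rank-m truncations P_m T converge to T in operator norm.
N = 1000
T = np.diag([1.0 / n**2 for n in range(1, N + 1)])

for m in (5, 10, 100):
    P_m = np.diag([1.0] * m + [0.0] * (N - m))
    gap = np.linalg.norm(P_m @ T - T, ord=2)   # spectral (operator) norm
    print(m, gap, 1.0 / (m + 1) ** 2)          # gap equals 1/(m+1)^2
```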
We list in this section some general properties of compact operators. - -If X and Y are Hilbert spaces (in fact, X Banach and Y normed will suffice), then T : X → Y is compact if and only if it is continuous when viewed as a map from X with the weak topology to Y (with the norm topology). (See , and note in this reference that the uniform boundedness will apply in the situation where F ⊆ X satisfies (∀φ ∈ Hom(X, K)) sup{x**(φ) = φ(x) : x} < ∞, where K is the underlying field. The uniform boundedness principle applies since Hom(X, K) with the norm topology will be a Banach space, and the maps x** : Hom(X,K) → K are continuous homomorphisms with respect to this topology.) - -The family of compact operators is a norm-closed, two-sided, *-ideal in L(H). Consequently, a compact operator T cannot have a bounded inverse if H is infinite-dimensional. If ST = TS = I, then the identity operator would be compact, a contradiction. - -If $B_n \to B$ and $C_n \to C$ are sequences of bounded operators converging in the strong operator topology and T is compact, then $B_nTC_n^*$ converges to $BTC^*$ in norm. For example, consider the Hilbert space $l^2(\mathbf{N}),$ with standard basis $\{e_n\}$. Let $P_m$ be the orthogonal projection onto the linear span of $\{e_1, \ldots, e_m\}$. The sequence $\{P_m\}$ converges to the identity operator I strongly but not uniformly. Define T by $Te_n = \tfrac{1}{n^2} e_n.$ T is compact, and, as claimed above, $P_m T \to IT = T$ in the uniform operator topology: for all x, -$$ -\left\| P_m T x - T x \right \| \leq \left( \frac{1}{m+1}\right)^2 \| x \|. -$$ - -Notice each $P_m$ is a finite-rank operator. Similar reasoning shows that if T is compact, then T is the uniform limit of some sequence of finite-rank operators. - -By the norm-closedness of the ideal of compact operators, the converse is also true. - -The quotient C*-algebra of L(H) modulo the compact operators is called the Calkin algebra, in which one can consider properties of an operator up to compact perturbation. - -A bounded operator T on a Hilbert space H is said to be self-adjoint if T = T*, or equivalently, -$$ -\langle T x, y \rangle = \langle x, T y \rangle, \quad x, y \in H. -$$ - -It follows that $\langle Tx, x \rangle$ is real for every x ∈ H, thus eigenvalues of T, when they exist, are real. When a closed linear subspace L of H is invariant under T, then the restriction of T to L is a self-adjoint operator on L, and furthermore, the orthogonal complement $L^\perp$ of L is also invariant under T. For example, the space H can be decomposed as the orthogonal direct sum of two T-invariant closed linear subspaces: the kernel of T, and the orthogonal complement $(\ker T)^\perp$ of the kernel (which is equal to the closure of the range of T, for any bounded self-adjoint operator). These basic facts play an important role in the proof of the spectral theorem below. - -The classification result for Hermitian n × n matrices is the spectral theorem: If M = M*, then M is unitarily diagonalizable, and the diagonalization of M has real entries. Let T be a compact self-adjoint operator on a Hilbert space H. We will prove the same statement for T: the operator T can be diagonalized by an orthonormal set of eigenvectors, each of which corresponds to a real eigenvalue. - -Theorem For every compact self-adjoint operator T on a real or complex Hilbert space H, there exists an orthonormal basis of H consisting of eigenvectors of T.
More specifically, the orthogonal complement of the kernel of T admits either a finite orthonormal basis of eigenvectors of T, or a countably infinite orthonormal basis $\{e_n\}$ of eigenvectors of T, with corresponding eigenvalues $\{\lambda_n\} \subset \mathbf{R}$, such that $\lambda_n \to 0$. - -In other words, a compact self-adjoint operator can be unitarily diagonalized. This is the spectral theorem. - -When H is separable, one can mix the basis $\{e_n\}$ with a countable orthonormal basis for the kernel of T, and obtain an orthonormal basis $\{f_n\}$ for H, consisting of eigenvectors of T with real eigenvalues $\{\mu_n\}$ such that $\mu_n \to 0$. - -Corollary For every compact self-adjoint operator T on a real or complex separable infinite-dimensional Hilbert space H, there exists a countably infinite orthonormal basis $\{f_n\}$ of H consisting of eigenvectors of T, with corresponding eigenvalues $\{\mu_n\} \subset \mathbf{R}$, such that $\mu_n \to 0$. - -Let us first discuss the finite-dimensional proof. The proof of the spectral theorem for a Hermitian $n \times n$ matrix T hinges on showing the existence of one eigenvector x. Once this is done, Hermiticity implies that both the linear span and orthogonal complement of x (of dimension $n-1$) are invariant subspaces of T. The desired result is then obtained by induction for $T_{x^\perp}$. - -The existence of an eigenvector can be shown in (at least) two alternative ways: - -#One can argue algebraically: The characteristic polynomial of T has a complex root, therefore T has an eigenvalue with a corresponding eigenvector. - -#The eigenvalues can be characterized variationally: The largest eigenvalue is the maximum on the closed unit sphere of the function $f : \mathbf{R}^{2n} \to \mathbf{R}$ defined by $f(x) = x^*Tx = \langle Tx, x \rangle$. - -Note. In the finite-dimensional case, part of the first approach works in much greater generality; any square matrix, not necessarily Hermitian, has an eigenvector. This is simply not true for general operators on Hilbert spaces. In infinite dimensions, it is also not immediate how to generalize the concept of the characteristic polynomial. - -The spectral theorem for the compact self-adjoint case can be obtained analogously: one finds an eigenvector by extending the second finite-dimensional argument above, then applying induction. We first sketch the argument for matrices. - -Since the closed unit sphere S in $\mathbf{R}^{2n}$ is compact, and f is continuous, f(S) is compact on the real line, therefore f attains a maximum on S, at some unit vector y. By Lagrange's multiplier theorem, y satisfies -$$ -\nabla f = \nabla y^* T y = \lambda \cdot \nabla y^* y -$$ - -for some $\lambda$. By Hermiticity, $Ty = \lambda y$. - -Alternatively, let $z \in \mathbf{C}^n$ be any vector. Notice that if a unit vector y maximizes $\langle Tx, x \rangle$ on the unit sphere (or on the unit ball), it also maximizes the Rayleigh quotient: -$$ -g(x) = \frac{\langle Tx, x \rangle}{\|x\|^2}, \qquad 0 \ne x \in \mathbf{C}^n.
-$$ - -Consider the function: -$$ -\begin{cases} h : \mathbf{R} \to \mathbf{R} \\ h(t) = g(y+tz) \end{cases} -$$ - -By calculus, h′(0) = 0, i.e., -$$ -\begin{align} -h'(0) &= \lim_{t \to 0} \frac{h(t)-h(0)}{t - 0} \\ -&= \lim_{t \to 0} \frac{g(y+tz)-g(y)}{t} \\ -&= \lim_{t \to 0} \frac{1}{t} \left (\frac{\langle T(y+tz), y+tz \rangle}{\|y+tz\|^2}-\frac{\langle Ty, y \rangle}{\|y\|^2} \right ) \\ -&= \lim_{t \to 0} \frac{1}{t} \left (\frac{\langle T(y+tz), y+tz \rangle - \langle Ty, y \rangle}{\|y\|^2} \right ) \\ -&= \frac{1}{\|y\|^2} \lim_{t \to 0} \frac{\langle T(y+tz), y+tz \rangle - \langle Ty, y \rangle}{t} \\ -&= \frac{1}{\|y\|^2} \left (\frac{d}{dt} \frac{\langle T (y + t z), y + tz \rangle}{\langle y + tz, y + tz \rangle} \right)(0) \\ -&= 0. -\end{align} -$$ - -Define: -$$ -m=\frac{\langle Ty, y \rangle}{\langle y, y \rangle} -$$ - -After some algebra the above expression becomes (Re denotes the real part of a complex number) -$$ -\mathrm{Re}(\langle T y - m y, z \rangle) = 0. -$$ - -But z is arbitrary, therefore Ty − my = 0. This is the crux of the proof of the spectral theorem in the matrix case. - -Note that while the Lagrange multipliers generalize to the infinite-dimensional case, the compactness of the unit sphere is lost. This is where the assumption that the operator T be compact is useful. - -Claim If T is a compact self-adjoint operator on a non-zero Hilbert space H and -$$ -m(T) := \sup \bigl\{ |\langle T x, x \rangle| : x \in H, \|x\| \le 1 \bigr\}, -$$ - -then m(T) or −m(T) is an eigenvalue of T. - -If m(T) = 0, then T = 0 by the polarization identity, and this case is clear. Consider the function -$$ -\begin{cases} f : H \to \mathbf{R} \\ f(x) = \langle T x, x \rangle \end{cases} -$$ - -Replacing T by −T if necessary, one may assume that the supremum of f on the closed unit ball B ⊂ H is equal to m(T) > 0. If f attains its maximum m(T) on B at some unit vector y, then, by the same argument used for matrices, y is an eigenvector of T, with corresponding eigenvalue $\lambda = \langle \lambda y, y \rangle = \langle Ty, y \rangle = f(y) = m(T).$ - -By the Banach–Alaoglu theorem and the reflexivity of H, the closed unit ball B is weakly compact. Also, the compactness of T means (see above) that T : X with the weak topology → X with the norm topology is continuous. These two facts imply that f is continuous on B equipped with the weak topology, and f therefore attains its maximum m(T) on B at some y ∈ B. By maximality, $\|y\|=1,$ which in turn implies that y also maximizes the Rayleigh quotient g(x) (see above). This shows that y is an eigenvector of T, and ends the proof of the claim. - -Note. The compactness of T is crucial. In general, f need not be continuous for the weak topology on the unit ball B. For example, let T be the identity operator, which is not compact when H is infinite-dimensional. Take any orthonormal sequence $\{y_n\}$. Then $y_n$ converges to 0 weakly, but $\lim f(y_n) = 1 \neq 0 = f(0)$. - -Let T be a compact operator on a Hilbert space H. A finite (possibly empty) or countably infinite orthonormal sequence $\{e_n\}$ of eigenvectors of T, with corresponding non-zero eigenvalues, is constructed by induction as follows. Let $H_0 = H$ and $T_0 = T$. If $m(T_0) = 0$, then T = 0 and the construction stops without producing any eigenvector $e_n$. Suppose that orthonormal eigenvectors $e_0, \ldots, e_{n-1}$ of T have been found. Then $E_n := \operatorname{span}(e_0, \ldots, e_{n-1})$ is invariant under T, and by self-adjointness, the orthogonal complement $H_n$ of $E_n$ is an invariant subspace of T. Let $T_n$ denote the restriction of T to $H_n$.
If m(Tn) = 0, then Tn = 0, and the construction stops. Otherwise, by the claim applied to Tn, there is a norm one eigenvector en of T in Hn, with corresponding non-zero eigenvalue λn = ± m(Tn). - -Let F = (span{en})⊥, where {en} is the finite or infinite sequence constructed by the inductive process; by self-adjointness, F is invariant under T. Let S denote the restriction of T to F. If the process was stopped after finitely many steps, with the last vector em−1, then F = Hm and S = Tm = 0 by construction. In the infinite case, compactness of T and the weak convergence of en to 0 imply that Ten = λnen → 0, therefore λn → 0. Since F is contained in Hn for every n, it follows that m(S) ≤ m(Tn) = |λn| for every n, hence m(S) = 0. This implies again that S = 0. - -The fact that S = 0 means that F is contained in the kernel of T. Conversely, if x ∈ ker(T) then by self-adjointness, x is orthogonal to every eigenvector en with non-zero eigenvalue. It follows that F = ker(T), and that {en} is an orthonormal basis for the orthogonal complement of the kernel of T. One can complete the diagonalization of T by selecting an orthonormal basis of the kernel. This proves the spectral theorem. - -A shorter but more abstract proof goes as follows: by Zorn's lemma, select U to be a maximal subset of H with the following three properties: all elements of U are eigenvectors of T, they have norm one, and any two distinct elements of U are orthogonal. Let F be the orthogonal complement of the linear span of U. If F ≠ {0}, it is a non-trivial invariant subspace of T, and by the initial claim, there must exist a norm one eigenvector y of T in F. But then U ∪ {y} contradicts the maximality of U. It follows that F = {0}, hence span(U) is dense in H. This shows that U is an orthonormal basis of H consisting of eigenvectors of T. - -If T is compact on an infinite-dimensional Hilbert space H, then T is not invertible, hence σ(T), the spectrum of T, always contains 0. The spectral theorem shows that σ(T) consists of the eigenvalues {λn} of T and of 0 (if 0 is not already an eigenvalue). The set σ(T) is a compact subset of the complex numbers, and the eigenvalues are dense in σ(T). - -Any spectral theorem can be reformulated in terms of a functional calculus. In the present context, we have: - -Theorem. Let C(σ(T)) denote the C*-algebra of continuous functions on σ(T). There exists a unique isometric homomorphism Φ : C(σ(T)) → L(H) such that Φ(1) = I and, if f is the identity function f(λ) = λ, then Φ(f) = T. Moreover, σ(f(T)) = f(σ(T)). - -The functional calculus map Φ is defined in a natural way: let {en} be an orthonormal basis of eigenvectors for H, with corresponding eigenvalues {λn}; for f ∈ C(σ(T)), the operator Φ(f), diagonal with respect to the orthonormal basis {en}, is defined by setting -$$ -\Phi(f)(e_n) = f(\lambda_n) e_n -$$ - -for every n. Since Φ(f) is diagonal with respect to an orthonormal basis, its norm is equal to the supremum of the modulus of the diagonal coefficients, -$$ -\|\Phi(f)\| = \sup_{\lambda_n \in \sigma(T)} |f(\lambda_n)| = \|f\|_{C(\sigma(T))}. -$$ - -The other properties of Φ can be readily verified. Conversely, any homomorphism Ψ satisfying the requirements of the theorem must coincide with Φ when f is a polynomial. By the Weierstrass approximation theorem, polynomial functions are dense in C(σ(T)), and it follows that Ψ = Φ. This shows that Φ is unique.
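As a concrete illustration of the functional calculus (a toy example of ours, not part of the original argument), take T to be the diagonal compact operator with eigenvalues λn = 1/n in the orthonormal basis {en}, and take f(λ) = λ^2. Then -$$ -T e_n = \tfrac{1}{n} e_n, \qquad \sigma(T) = \{0\} \cup \left\{ \tfrac{1}{n} : n \ge 1 \right\}, \qquad \Phi(f)(e_n) = \tfrac{1}{n^2} e_n, \qquad \|\Phi(f)\| = \sup_{n \ge 1} \tfrac{1}{n^2} = 1 = \|f\|_{C(\sigma(T))}. -$$ - -Here Φ(f) = T^2, as the homomorphism property requires.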
- -The more general continuous functional calculus can be defined for any self-adjoint (or even normal, in the complex case) bounded linear operator on a Hilbert space. The compact case described here is a particularly simple instance of this functional calculus. - -Consider a Hilbert space H (e.g. the finite-dimensional $\mathbf{C}^n$), and a commuting set $\mathcal{F}\subseteq\operatorname{Hom}(H,H)$ of self-adjoint operators. Then, under suitable conditions, the family can be simultaneously (unitarily) diagonalized. Viz., there exists an orthonormal basis Q consisting of common eigenvectors for the operators — i.e. -$$ -(\forall{q\in Q,T\in\mathcal{F}})(\exists{\sigma\in\mathbf{C}})(T-\sigma)q=0 -$$ - -Lemma. Suppose all the operators in $\mathcal{F}$ are compact. Then every closed non-zero $\mathcal{F}$-invariant sub-space $S\subseteq H$ has a common eigenvector for $\mathcal{F}$. - -Proof. Case I: each of the operators has exactly one eigenvalue on $S$. - -Take any $s\in S$ of unit length. It is a common eigenvector. - -Case II: there is some operator $T\in\mathcal{F}$ with at least 2 eigenvalues on $S$; let $0 \neq \alpha \in \sigma(T\upharpoonright S)$. Since T is compact and α is non-zero, $S':=\ker(T \upharpoonright S - \alpha)$ is a finite-dimensional (and therefore closed) non-zero $\mathcal{F}$-invariant sub-space (because the operators all commute with T, we have for $T'\in\mathcal{F}$ and $x\in\ker(T\upharpoonright S - \alpha)$, that $(T-\alpha)(T'x)=(T'(T~x)-\alpha T'x)=0$). In particular, since α is just one of the eigenvalues of $T$ on $S$, we definitely have $\dim S' < \dim S$. Thus we can argue by induction over dimension, yielding that $S'\subseteq S$ has a common eigenvector for $\mathcal{F}$. - -Theorem 1. If all the operators in $\mathcal{F}$ are compact then the operators can be simultaneously (unitarily) diagonalized. - -Proof. The following set -$$ -\mathbf{P}=\{ A \subseteq H : A \text{ is an orthonormal set of common eigenvectors for } \mathcal{F}\}, -$$ - -is partially ordered by inclusion. This clearly has the Zorn property. So taking Q a maximal member, if Q is a basis for the whole Hilbert space H, we are done. If this were not the case, then letting $S=\langle Q\rangle^{\bot}$, it is easy to see that this would be an $\mathcal{F}$-invariant non-trivial closed subspace; and thus by the lemma above, therein would lie a common eigenvector for the operators (necessarily orthogonal to Q). But then there would be a proper extension of Q within P; a contradiction to its maximality. - -Theorem 2. If there is an injective compact operator in $\mathcal{F}$, then the operators can be simultaneously (unitarily) diagonalized. - -Proof. Fix $T_0\in\mathcal{F}$ compact injective. Then we have, by the spectral theory of compact symmetric operators on Hilbert spaces: -$$ -H=\overline{\bigoplus_{\sigma\in\sigma(T_0)} \ker(T_0-\sigma)}, -$$ - -where $\sigma(T_0)$ is a discrete, countable subset of non-zero real numbers, and all the eigenspaces are finite-dimensional. Since $\mathcal{F}$ is a commuting set, all the eigenspaces are $\mathcal{F}$-invariant. Since the operators restricted to the eigenspaces (which are finite-dimensional) are automatically all compact, we can apply Theorem 1 to each of these, and find orthonormal bases Qσ for the $\ker(T_0-\sigma)$. Since T0 is symmetric, we have that -$$ -Q:=\bigcup_{\sigma\in\sigma(T_0)} Q_{\sigma} -$$ - -is a (countable) orthonormal set. It is also, by the decomposition we first stated, a basis for H. - -Theorem 3.
If H is a finite-dimensional Hilbert space, and $\mathcal{F}\subseteq\operatorname{Hom}(H,H)$ a commutative set of operators, each of which is diagonalisable, then the operators can be simultaneously diagonalized. - -Proof. Case I: all operators have exactly one eigenvalue. Then any basis for H will do. - -Case II: Fix $T_0\in\mathcal{F}$ an operator with at least two eigenvalues, and let $P\in\operatorname{Hom}(H,H)^{\times}$ so that $P^{-1}T_0P$ is a symmetric operator. Now let α be an eigenvalue of $P^{-1}T_0P$. Then it is easy to see that both: -$$ -\ker\left(P^{-1}T_0P-\alpha\right), \quad \ker\left(P^{-1}T_0P-\alpha\right)^{\bot} -$$ - -are non-trivial $P^{-1}\mathcal{F}P$-invariant subspaces. By induction over dimension we have that there are linearly independent bases Q1, Q2 for the subspaces, which demonstrate that the operators in $P^{-1}\mathcal{F}P$ can be simultaneously diagonalised on the subspaces. Clearly then $P(Q_1\cup Q_2)$ demonstrates that the operators in $\mathcal{F}$ can be simultaneously diagonalised. - -Notice we did not have to directly use the machinery of matrices at all in this proof. There are other versions which do. - -We can strengthen the above to the case where all the operators merely commute with their adjoint; in this case we remove the term "orthogonal" from the diagonalisation. There are weaker results for operators arising from representations due to Peter–Weyl. Let G be a fixed locally compact Hausdorff group, and $H=L^2(G)$ (the space of square integrable measurable functions with respect to the unique-up-to-scale Haar measure on G). Consider the continuous shift action: -$$ -\begin{cases} G\times H\to H \\ (gf)(x)=f(g^{-1}x) \end{cases} -$$ - -If G is compact, then there is a unique decomposition of H into a countable direct sum of finite-dimensional, irreducible, invariant subspaces (this is essentially diagonalisation of the family of operators $G\subseteq U(H)$). If G is not compact but is abelian, then diagonalisation is not achieved, but we get a unique continuous decomposition of H into 1-dimensional invariant subspaces. - -The family of Hermitian matrices is a proper subset of matrices that are unitarily diagonalizable. A matrix M is unitarily diagonalizable if and only if it is normal, i.e., M*M = MM*. Similar statements hold for compact normal operators. - -Let T be compact and T*T = TT*. Apply the Cartesian decomposition to T: define -$$ -R = \frac{T + T^*}{2}, \quad J = \frac{T - T^*}{2i}. -$$ - -The self-adjoint compact operators R and J are called the real and imaginary parts of T, respectively. Since T is compact, so is T*, and consequently R and J are compact. Furthermore, the normality of T implies that R and J commute. Therefore they can be simultaneously diagonalized, from which the claim follows. - -A hyponormal compact operator (in particular, a subnormal operator) is normal. - -The spectrum of a unitary operator U lies on the unit circle in the complex plane; it could be the entire unit circle. However, if U is the identity plus a compact perturbation, then U has only a countable spectrum, containing 1 and, possibly, a finite set or a sequence on the unit circle tending to 1. More precisely, suppose U = I + C where C is compact. The equations UU* = U*U = I and C = U − I show that C is normal. The spectrum of C contains 0, and, possibly, a finite set or a sequence tending to 0. Since U = I + C, the spectrum of U is obtained by shifting the spectrum of C by 1. - -* Let H = L^2([0, 1]).
The multiplication operator M defined by -$$ -(M f)(x) = x f(x), \quad f \in H, x \in [0, 1] -$$ - -is a bounded self-adjoint operator on H that has no eigenvector and hence, by the spectral theorem, cannot be compact. - -* Let K(x, y) be square-integrable on [0, 1]^2 and define T_K on H by -$$ -(T_K f)(x) = \int_0^1 K(x, y) f(y) \mathrm{d} y. -$$ - -Then T_K is compact on H; it is a Hilbert–Schmidt operator. - -* Suppose that the kernel K(x, y) satisfies the Hermiticity condition: -$$ -K(y, x) = \overline{K(x, y)}, \quad x, y \in [0, 1]. -$$ - -Then T_K is compact and self-adjoint on H; if {φn} is an orthonormal basis of eigenvectors, with eigenvalues {λn}, it can be proved that -$$ -\sum \lambda_n^2 < \infty, \ \ K(x, y) \sim \sum \lambda_n \varphi_n(x) \overline{\varphi_n(y)}, -$$ - -where the sum of the series of functions is understood as L^2 convergence for the Lebesgue measure on [0, 1]^2. Mercer's theorem gives conditions under which the series converges to K(x, y) pointwise, and uniformly on [0, 1]^2. diff --git a/wiki/wikipedia/3897.txt b/wiki/wikipedia/3897.txt deleted file mode 100644 index 5ad9b3af09daefea679f4978cb7015d2aa7c2d57..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3897.txt +++ /dev/null @@ -1,33 +0,0 @@ -Of the many and varied argument forms that can possibly be constructed, only very few are valid argument forms. In order to evaluate these forms, statements are put into logical form. Logical form replaces sentences or ideas with letters, removing any bias due to content and allowing one to evaluate the argument purely on its structure. - -Being a valid argument does not necessarily mean the conclusion will be true. Rather, an argument is valid because if the premises are true, then the conclusion has to be true. This can be proven for any valid argument form using a truth table which shows that there is no situation in which there are all true premises and a false conclusion. - -In syllogistic logic, there are 256 possible ways to construct categorical syllogisms using the A, E, I, and O statement forms in the square of opposition. Of the 256, only 24 are valid forms. Of the 24 valid forms, 15 are unconditionally valid, and 9 are conditionally valid. - -The following is a list of some common valid argument forms in propositional logic. It is nowhere near exhaustive, and gives only a few examples of the better known valid argument forms. - -One valid argument form is known as modus ponens, not to be confused with modus tollens, which is another valid argument form that has a like-sounding name and structure. Modus ponens (sometimes abbreviated as MP) says that if one thing is true, then another will be. It then states that the first is true. The conclusion is that the second thing is true. It is shown below in logical form. - -If A, then B - -A - -Therefore B - -Before being put into logical form the above statement could have been something like below. - -If Kelly does not finish his homework, he will not go to class - -Kelly did not finish his homework - -Therefore, Kelly will not go to class - -The first two statements are the premises while the third is the conclusion derived from them. - -Another form of argument is known as modus tollens (commonly abbreviated MT). In this form, you start with the same first premise as with modus ponens. However, the second part of the premise is denied, leading to the conclusion that the first part of the premise should be denied as well. It is shown below in logical form.
- -If A, then B - -Not B - -Therefore not A. diff --git a/wiki/wikipedia/3898.txt b/wiki/wikipedia/3898.txt deleted file mode 100644 index 49866aae88649481542a862bb56d7d1032c1ba1d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3898.txt +++ /dev/null @@ -1,13 +0,0 @@ -In the mathematical discipline of graph theory, the Erdős–Pósa theorem, named after Paul Erdős and Lajos Pósa, states that there is a function f(k) such that for each positive integer k, every graph either contains at least k vertex-disjoint cycles or it has a feedback vertex set of at most f(k) vertices. Furthermore, f(k) = Θ(k log k) in the sense of Big O notation. Because of this theorem, cycles are said to have the Erdős–Pósa property. - -The theorem claims that for any finite number k there is an appropriate (least) value f(k), with the property that in every graph without a set of k vertex-disjoint cycles, all cycles can be covered by no more than f(k) vertices. This generalized an unpublished result of Béla Bollobás, which states that f(2) = 3. Erdős obtained the bounds $c_1 k \log k < f(k) < c_2 k \log k$ for the general case. For the case k = 2, Lovász gave a complete characterization. Voss proved f(3) = 6 and 9 ≤ f(4) ≤ 12. - -A family F of graphs or hypergraphs is defined to have the Erdős–Pósa property if there exists a function $f : \mathbb{N} \to \mathbb{N}$ such that for every (hyper-)graph G and every integer k one of the following is true: - -*G contains k vertex-disjoint subgraphs each isomorphic to a graph in F; or - -*G contains a vertex set C of size at most f(k) such that G - C has no subgraph isomorphic to a graph in F. - -The definition is often phrased as follows. If one denotes by ν(G) the maximum number of vertex-disjoint subgraphs of G isomorphic to a graph in F and by τ(G) the minimum number of vertices whose deletion from G leaves a graph without a subgraph isomorphic to a graph in F, then τ(G) ≤ f(ν(G)), for some function $f : \mathbb{N} \to \mathbb{N}$ not depending on G. - -Rephrased in this terminology, the Erdős–Pósa theorem states that the family F consisting of all cycles has the Erdős–Pósa property, with bounding function f(k) = Θ(k log k). Robertson and Seymour (1986) generalized this considerably. Given a graph H, let F(H) denote the family of all graphs that contain H as a minor. As a corollary of their grid minor theorem, Robertson and Seymour proved that F(H) has the Erdős–Pósa property if and only if H is a planar graph. Moreover, it is now known that the corresponding bounding function is f(k) = Θ(k) if H is a forest, while f(k) = Θ(k log k) for every other planar graph H. The special case where H is a triangle is equivalent to the Erdős–Pósa theorem. diff --git a/wiki/wikipedia/3899.txt b/wiki/wikipedia/3899.txt deleted file mode 100644 index e9023e8d9f238e3f972a1fda82014e7909b3eb2e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3899.txt +++ /dev/null @@ -1,3 +0,0 @@ -In algebraic geometry, Chasles' theorem says that if two pencils of curves have no curves in common, then the intersections of those curves form another pencil of curves the degree of which can be calculated from the degrees of the initial two pencils. - -The result is attributed to Michel Chasles (1793–1880).
diff --git a/wiki/wikipedia/39.txt b/wiki/wikipedia/39.txt deleted file mode 100644 index 0d3b8d3bd10421ecceae04253aeb7f627f7e55db..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/39.txt +++ /dev/null @@ -1,10 +0,0 @@ -In approximation theory, Bernstein's theorem is a converse to Jackson's theorem. The first results of this type were proved by Sergei Bernstein in 1912. - -For approximation by trigonometric polynomials, the result is as follows: - -Let f: [0, 2π] → C be a 2π-periodic function, and assume r is a natural number, and 0 < α < 1. If there exists a number C(f) > 0 and a sequence of trigonometric polynomials $\{P_n\}_{n \ge n_0}$ such that -$$ - \deg P_n = n~, \quad \sup_{0 \leq x \leq 2\pi} |f(x) - P_n(x)| \leq \frac{C(f)}{n^{r + \alpha}}~, -$$ - -then $f = P_{n_0} + \varphi$, where φ has a bounded r-th derivative which is α-Hölder continuous. diff --git a/wiki/wikipedia/390.txt b/wiki/wikipedia/390.txt deleted file mode 100644 index 600c39e293d49c10a4e2b9dbc1207c5dd283d3af..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/390.txt +++ /dev/null @@ -1,9 +0,0 @@ -The Nielsen realization problem is a question asked by Jakob Nielsen about whether finite subgroups of mapping class groups can act on surfaces, that was answered positively by Steven Kerckhoff. - -Given an oriented surface, we can divide the group Diff(S), the group of diffeomorphisms of the surface to itself, into isotopy classes to get the mapping class group π0(Diff(S)). The conjecture asks whether a finite subgroup of the mapping class group of a surface can be realized as the isometry group of a hyperbolic metric on the surface. - -The mapping class group acts on Teichmüller space. An equivalent way of stating the question asks whether every finite subgroup of the mapping class group fixes some point of Teichmüller space. - -Nielsen asked whether finite subgroups of mapping class groups can act on surfaces. - -Kravetz claimed to solve the Nielsen realization problem, but his proof depended on trying to show that Teichmüller space (with the Teichmüller metric) is negatively curved. Linch pointed out a gap in the argument, and Masur showed that Teichmüller space is not negatively curved. Kerckhoff gave a correct proof that finite subgroups of mapping class groups can act on surfaces using left earthquakes. diff --git a/wiki/wikipedia/3900.txt b/wiki/wikipedia/3900.txt deleted file mode 100644 index ff178bc9fcc6f5de48aaa79f8e6d9aa387a632d3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3900.txt +++ /dev/null @@ -1,33 +0,0 @@ -Finsler's lemma is a mathematical result named after Paul Finsler. It states equivalent ways to express the positive definiteness of a quadratic form Q constrained by a linear form L. - -Since it is equivalent to other lemmas used in optimization and control theory, such as Yakubovich's S-lemma, Finsler's lemma has been given many proofs and has been widely used, particularly in results related to robust optimization and linear matrix inequalities. - -Let $x \in \mathbb{R}^n$ and $Q, L \in \mathbb{R}^{n \times n}$. The following statements are equivalent: - -* $\displaystyle x^{T}Lx=0 \text{ and } x \ne 0 \text{ implies } x^T Q x < 0.$ - -* $\exists \mu \in \mathbb{R} : Q - \mu L \prec 0. $ - -In the particular case that L is positive semi-definite, it is possible to decompose it as $L = B^T B$.
The following statements, which are also referred to as Finsler's lemma in the literature, are equivalent: - -* $ x^T Q x < 0 \text{ for all } x \in \ker(B)\smallsetminus\{0\} $ - -* $ B^{\perp^T} Q B^\perp \prec 0 $ - -* $ \exists \mu \in \mathbb{R} : Q - \mu B^T B \prec 0 $ - -* $ \exists X \in \mathbb{R}^{n \times m} : Q + XB + B^T X^T \prec 0 $ - -The following statement, known as the Projection Lemma (or also as the Elimination Lemma), is common in the literature on linear matrix inequalities; it states that the following two conditions are equivalent: - -* $ B^{\perp^T} Q B^\perp \prec 0 \text{ and } C^{T \perp T} Q C^{T \perp} \prec 0 $ - -* $ \exists X \in \mathbb{R}^{n \times m} : Q + CXB + B^T X^T C^T \prec 0 $ - -This can be seen as a generalization of one of the variants of Finsler's lemma above, with the inclusion of an extra matrix and an extra constraint. - -Finsler's lemma also generalizes to matrices Q and B depending on a parameter s within a set S. In this case, it is natural to ask if the same variable μ (respectively X) can satisfy $Q(s)-\mu B^{T}(s)B(s) \prec 0$ for all $s\in S$ (respectively, $Q(s) + X(s)B(s)+B^T(s)X^T(s) \prec 0$). If Q and B depend continuously on the parameter s, and S is compact, then this is true. If S is not compact, but Q and B are still continuous matrix-valued functions, then μ and X can at least be guaranteed to be continuous functions of s. - -Finsler's lemma can be used to give novel linear matrix inequality (LMI) characterizations of stability and control problems. The set of LMIs stemming from this procedure yields less conservative results when applied to control problems where the system matrices depend on a parameter, such as robust control problems and control of linear parameter-varying systems. This approach has recently been called the S-variable approach, and the LMIs stemming from it are known as SV-LMIs (also known as dilated LMIs). - -A nonlinear system has the universal stabilizability property if every forward-complete solution of the system can be globally stabilized. By the use of Finsler's lemma, it is possible to derive a sufficient condition for universal stabilizability in terms of a differential linear matrix inequality. diff --git a/wiki/wikipedia/3901.txt b/wiki/wikipedia/3901.txt deleted file mode 100644 index f34ba7f900ada88e136a4cc6fea7b61be30db48d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3901.txt +++ /dev/null @@ -1,9 +0,0 @@ -In number theory, a branch of mathematics, Dickson's conjecture is the conjecture, stated by L. E. Dickson in 1904, that for a finite set of linear forms $a_1 + b_1 n$, $a_2 + b_2 n$, ..., $a_k + b_k n$ with $b_i \ge 1$, there are infinitely many positive integers n for which they are all prime, unless there is a congruence condition preventing this. The case k = 1 is Dirichlet's theorem. - -Two other special cases are well-known conjectures: there are infinitely many twin primes (n and n + 2 are primes), and there are infinitely many Sophie Germain primes (n and 2n + 1 are primes). - -Dickson's conjecture is further extended by Schinzel's hypothesis H. - -Suppose we are given n polynomials with positive degrees and integer coefficients (n can be any natural number) that each satisfy all three conditions in the Bunyakovsky conjecture, and suppose that for any prime p there is an integer x such that the values of all n polynomials at x are not divisible by p. Then there are infinitely many positive integers x such that all values of these n polynomials at x are prime.
For example, if the conjecture is true then there are infinitely many positive integers x such that $x^2 + 1$, $3x - 1$, and $x^2 + x + 41$ are all prime. When all the polynomials have degree 1, this is Dickson's original conjecture. - -This more general conjecture is the same as the Generalized Bunyakovsky conjecture. diff --git a/wiki/wikipedia/3902.txt b/wiki/wikipedia/3902.txt deleted file mode 100644 index 277beee2807ec5018a135df494839425808e42b4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3902.txt +++ /dev/null @@ -1,25 +0,0 @@ -In mathematics, the Barban–Davenport–Halberstam theorem is a statement about the distribution of prime numbers in an arithmetic progression. It is known that in the long run primes are distributed equally across possible progressions with the same difference. Theorems of the Barban–Davenport–Halberstam type give estimates for the error term, determining how close to uniform the distributions are. - -Let a be coprime to q and -$$ -\vartheta(x;q,a) = \sum_{p\leq x ; p \equiv a \bmod q} \log p \ -$$ - -be a weighted count of primes in the arithmetic progression a mod q. We have -$$ -\vartheta(x;q,a) = \frac{x}{\varphi(q)} + E(x;q,a) \ -$$ - -where φ is Euler's totient function and the error term E is small compared to x. We take a sum of squares of error terms -$$ -V(x,Q) = \sum_{q \leq Q} \sum_{a \bmod{q}} |E(x;q,a)|^2 \ . -$$ - -Then we have -$$ -V(x,Q) = O(Q x \log x) + O(x^2 (\log x)^{-A}) \ -$$ - -for $1 \leq Q \leq x $ and every positive A, where O is Landau's Big O notation. - -This form of the theorem is due to Gallagher. The result of Barban is valid only for $Q \leq x (\log x)^{-B}$ for some B depending on A, and the result of Davenport–Halberstam has B = A + 5. diff --git a/wiki/wikipedia/3903.txt b/wiki/wikipedia/3903.txt deleted file mode 100644 index e5836d11dd2174639bf7a89a4ad28415e5094d33..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3903.txt +++ /dev/null @@ -1,21 +0,0 @@ -In mathematical logic, the Mostowski collapse lemma, also known as the Shepherdson–Mostowski collapse, is a theorem of set theory introduced by Andrzej Mostowski and John Shepherdson. - -Suppose that R is a binary relation on a class X such that - -*R is set-like: R−1[x] = {y : y R x} is a set for every x, - -*R is well-founded: every nonempty subset S of X contains an R-minimal element (i.e. an element x ∈ S such that R−1[x] ∩ S is empty), - -*R is extensional: R−1[x] ≠ R−1[y] for all distinct elements x and y of X. - -The Mostowski collapse lemma states that for every such R there exists a unique transitive class (possibly proper) whose structure under the membership relation is isomorphic to (X, R), and the isomorphism is unique. The isomorphism maps each element x of X to the set of images of elements y of X such that y R x (Jech 2003:69). - -Every well-founded set-like relation can be embedded into a well-founded set-like extensional relation. This implies the following variant of the Mostowski collapse lemma: every well-founded set-like relation is isomorphic to set-membership on a (non-unique, and not necessarily transitive) class. - -A mapping F such that F(x) = {F(y) : y R x} for all x in X can be defined for any well-founded set-like relation R on X by well-founded recursion. It provides a homomorphism of R onto a (non-unique, in general) transitive class. The homomorphism F is an isomorphism if and only if R is extensional. - -The well-foundedness assumption of the Mostowski lemma can be alleviated or dropped in non-well-founded set theories.
In Boffa's set theory, every set-like extensional relation is isomorphic to set-membership on a (non-unique) transitive class. In set theory with Aczel's anti-foundation axiom, every set-like relation is bisimilar to set-membership on a unique transitive class, hence every bisimulation-minimal set-like relation is isomorphic to a unique transitive class. - -Every set model of ZF is set-like and extensional. If the model is well-founded, then by the Mostowski collapse lemma it is isomorphic to a transitive model of ZF and such a transitive model is unique. - -Saying that the membership relation of some model of ZF is well-founded is stronger than saying that the axiom of regularity is true in the model. There exists a model M (assuming the consistency of ZF) whose domain has a subset A with no R-minimal element, but this set A is not a "set in the model" (A is not in the domain of the model, even though all of its members are). More precisely, there is no x in M such that A = R−1[x]. So M satisfies the axiom of regularity (it is "internally" well-founded) but it is not well-founded and the collapse lemma does not apply to it. diff --git a/wiki/wikipedia/3904.txt b/wiki/wikipedia/3904.txt deleted file mode 100644 index 096754cbe9b2b03659235d2d61ad735983958453..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3904.txt +++ /dev/null @@ -1,83 +0,0 @@ -A prime number (or a prime) is a natural number greater than 1 that is not a product of two smaller natural numbers. A natural number greater than 1 that is not prime is called a composite number. For example, 5 is prime because the only ways of writing it as a product, 1 × 5 or 5 × 1, involve 5 itself. - -However, 4 is composite because it is a product (2 × 2) in which both numbers are smaller than 4. Primes are central in number theory because of the fundamental theorem of arithmetic: every natural number greater than 1 is either a prime itself or can be factorized as a product of primes that is unique up to their order. - -The property of being prime is called primality. A simple but slow method of checking the primality of a given number $n$, called trial division, tests whether $n$ is a multiple of any integer between 2 and $\sqrt{n}$. Faster algorithms include the Miller–Rabin primality test, which is fast but has a small chance of error, and the AKS primality test, which always produces the correct answer in polynomial time but is too slow to be practical. Particularly fast methods are available for numbers of special forms, such as Mersenne numbers. As of December 2018, the largest known prime number is a Mersenne prime with 24,862,048 decimal digits. - -The first 25 prime numbers (all the prime numbers less than 100) are: - -2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97. - -No even number $n$ greater than 2 is prime because any such number can be expressed as the product $2\times n/2$. Therefore, every prime number other than 2 is an odd number, and is called an odd prime. Similarly, when written in the usual decimal system, all prime numbers larger than 5 end in 1, 3, 7, or 9. The numbers that end with other digits are all composite: - -decimal numbers that end in 0, 2, 4, 6, or 8 are even, and decimal numbers that end in 0 or 5 are divisible by 5. - -The set of all primes is sometimes denoted by $\mathbf{P}$ (a boldface capital P) or by $\mathbb{P}$ (a blackboard bold capital P). - -The Rhind Mathematical Papyrus, from around 1550 BC, has Egyptian fraction expansions of different forms for prime and composite numbers.
However, the earliest surviving records of the explicit study of prime numbers come from ancient Greek mathematics. Euclid's Elements (c. 300 BC) proves the infinitude of primes and the fundamental theorem of arithmetic, and shows how to construct a perfect number from a Mersenne prime. Another Greek invention, the Sieve of Eratosthenes, is still used to construct lists of primes. - -Around 1000 AD, the Islamic mathematician Ibn al-Haytham (Alhazen) found Wilson's theorem, characterizing the prime numbers as the numbers $n$ that evenly divide $(n-1)!+1$. He also conjectured that all even perfect numbers come from Euclid's construction using Mersenne primes, but was unable to prove it. Another Islamic mathematician, Ibn al-Banna' al-Marrakushi, observed that the sieve of Eratosthenes can be sped up by testing only the divisors up to the square root of the largest number to be tested. Fibonacci brought the innovations from Islamic mathematics back to Europe. His book Liber Abaci (1202) was the first to describe trial division for testing primality, again using divisors only up to the square root. - -The increased practical importance of computerized primality testing and factorization led to the development of improved methods capable of handling large numbers of unrestricted form. - -Most early Greeks did not even consider 1 to be a number, so they could not consider its primality. A few mathematicians from this time also considered the prime numbers to be a subdivision of the odd numbers, so they also did not consider 2 to be prime. However, Euclid and a majority of the other Greek mathematicians considered 2 as prime. The medieval Islamic mathematicians largely followed the Greeks in viewing 1 as not being a number. - -If the definition of a prime number were changed to call 1 a prime, many statements involving prime numbers would need to be reworded in a more awkward way. For example, the fundamental theorem of arithmetic would need to be rephrased in terms of factorizations into primes greater than 1, because every number would have multiple factorizations with different numbers of copies of 1. Similarly, the sieve of Eratosthenes would not work correctly if it handled 1 as a prime, because it would eliminate all multiples of 1 (that is, all other numbers) and output only the single number 1. - -Other examples of prime-generating formulas come from Mills' theorem and a theorem of Wright. These assert that there are real constants $A>1$ and $\mu$ such that -$$ -\left \lfloor A^{3^n}\right \rfloor \text{ and } \left \lfloor 2^{\cdots^{2^{2^\mu}}} \right \rfloor -$$ - -are prime for any natural number $n$ in the first formula, and any number of exponents in the second formula. Here $\lfloor {}\cdot{} \rfloor$ represents the floor function, the largest integer less than or equal to the number in question. However, these are not useful for generating primes, as the primes must be generated first in order to compute the values of $A$ or $\mu.$ - -Andrica's conjecture, Legendre's conjecture, and Oppermann's conjecture all concern the sizes of gaps between consecutive primes. Brun's theorem states that the sum of the reciprocals of twin primes, -$$ - \left( \frac{1}{3} + \frac{1}{5} \right) + \left( \frac{1}{5} + \frac{1}{7} \right) + \left( \frac{1}{11} + \frac{1}{13} \right) + \cdots, -$$ - -is finite. Because of Brun's theorem, it is not possible to use Euler's method to solve the twin prime conjecture, that there exist infinitely many twin primes.
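As a small illustrative computation (ours, not from the original text; the function names are made up), the partial sums of Brun's series can be evaluated directly. They grow extremely slowly toward Brun's constant, approximately 1.9022:

```python
def is_prime(n: int) -> bool:
    """Deterministic trial division; adequate for the small bounds used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def brun_partial_sum(limit: int) -> float:
    """Sum 1/p + 1/(p+2) over twin-prime pairs (p, p+2) with p + 2 <= limit."""
    total = 0.0
    for p in range(3, limit - 1, 2):
        if is_prime(p) and is_prime(p + 2):
            total += 1.0 / p + 1.0 / (p + 2)
    return total

# Convergence is extremely slow; even large limits stay well below 1.9022.
print(brun_partial_sum(100_000))
```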
The prime number theorem implies that the likelihood that a randomly chosen number less than $n$ is prime is (approximately) inversely proportional to the number of digits in $n$. - -It also implies that the $n$th prime number is proportional to $n\log n$ - -and therefore that the average size of a prime gap is proportional to $\log n$. - -A more accurate estimate for $\pi(n)$ is given by the offset logarithmic integral $\operatorname{Li}(n) = \int_2^n \frac{\mathrm{d}t}{\log t}$. - -The Ulam spiral arranges the natural numbers in a two-dimensional grid, spiraling in concentric squares surrounding the origin with the prime numbers highlighted. Visually, the primes appear to cluster on certain diagonals and not others, suggesting that some quadratic polynomials take prime values more often than others. - -The prime-counting function can be expressed by Riemann's explicit formula as a sum in which each term comes from one of the zeros of the zeta function; the main term of this sum is the logarithmic integral, and the remaining terms cause the sum to fluctuate above and below the main term. - -In this sense, the zeros control how regularly the prime numbers are distributed. If the Riemann hypothesis is true, these fluctuations will be small, and the asymptotic distribution of primes given by the prime number theorem will also hold over much shorter intervals (of length about the square root of $x$ for intervals near a number $x$). - -The concept of a prime number also generalizes, via valuations, to absolute values and places (extensions to complete fields in which the given field is a dense set, also called completions). The extension from the rational numbers to the real numbers, for instance, is a place in which the distance between numbers is the usual absolute value of their difference. The corresponding mapping to an additive group would be the logarithm of the absolute value, although this does not meet all the requirements of a valuation. According to Ostrowski's theorem, up to a natural notion of equivalence, the real numbers and $p$-adic numbers, with their orders and absolute values, are the only valuations, absolute values, and places on the rational numbers. - -Before computers, mathematical tables listing all of the primes or prime factorizations up to a given limit were commonly printed. The oldest method for generating a list of primes is called the sieve of Eratosthenes. - -Another more asymptotically efficient sieving method for the same problem is the sieve of Atkin. In advanced mathematics, sieve theory applies similar methods to other problems. - -Some of the fastest modern tests for whether an arbitrary given number $n$ is prime are probabilistic (or Monte Carlo) algorithms, meaning that they have a small random chance of producing an incorrect answer. - -For instance, the Solovay–Strassen primality test on a given number $p$ chooses a number $a$ randomly from $2$ through $p-2$ and uses modular exponentiation to check - -whether $a^{(p-1)/2}\pm 1$ is divisible by $p$. If so, it answers yes and otherwise it answers no. If $p$ really is prime, it will always answer yes, but if $p$ is composite then it answers yes with probability at most 1/2 and no with probability at least 1/2. - -In contrast, some other algorithms guarantee that their answer will always be correct: primes will always be determined to be prime and composites will always be determined to be composite. - -For instance, this is true of trial division.
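To make these methods concrete, here is a short, self-contained Python sketch (our own illustration, not from the original article; all names are ours) of the sieve of Eratosthenes, deterministic trial division, and the simplified randomized Euler-type check described above. The full Solovay–Strassen test additionally compares the result against the Jacobi symbol, which is omitted here:

```python
import random

def sieve_of_eratosthenes(limit: int) -> list:
    """Return all primes <= limit by crossing out multiples of each prime."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(limit**0.5) + 1):  # divisors up to sqrt(limit) suffice
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, flag in enumerate(is_prime) if flag]

def trial_division(n: int) -> bool:
    """Deterministic primality test: check divisors from 2 up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def euler_probable_prime(p: int, rounds: int = 20) -> bool:
    """Randomized check as described in the text: for random a in [2, p-2],
    test whether a^((p-1)/2) is congruent to +1 or -1 mod p.  A prime always
    passes; per the text, a composite passes a round with probability <= 1/2."""
    if p < 5:
        return p in (2, 3)
    if p % 2 == 0:
        return False
    for _ in range(rounds):
        a = random.randint(2, p - 2)
        if pow(a, (p - 1) // 2, p) not in (1, p - 1):
            return False
    return True

print(sieve_of_eratosthenes(30))                     # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(trial_division(97), euler_probable_prime(97))  # True True
```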
- -The algorithms with guaranteed-correct output include both deterministic (non-random) algorithms, such as the AKS primality test, - -and randomized Las Vegas algorithms where the random choices made by the algorithm do not affect its final answer, such as some variations of elliptic curve primality proving. - -The Fermat numbers are the numbers of the form $F_k = 2^{2^k} + 1$. The first five Fermat numbers, $F_0, \dots, F_4$, are prime, but $F_5$ is composite and so are all other Fermat numbers that have been verified as of 2017. A regular $n$-gon is constructible using straightedge and compass if and only if the odd prime factors of $n$ (if any) are distinct Fermat primes. Likewise, a regular $n$-gon may be constructed using straightedge, compass, and an angle trisector if and only if the prime factors of $n$ are any number of copies of 2 or 3 together with a (possibly empty) set of distinct Pierpont primes, primes of the form $2^a3^b+1$. - -It is possible to partition any convex polygon into $n$ smaller convex polygons of equal area and equal perimeter, when $n$ is a power of a prime number, but this is not known for other values of $n$. - -Beginning with the work of Hugh Montgomery and Freeman Dyson in the 1970s, mathematicians and physicists have speculated that the zeros of the Riemann zeta function are connected to the energy levels of quantum systems. Prime numbers are also significant in quantum information science, thanks to mathematical structures such as mutually unbiased bases and symmetric informationally complete positive-operator-valued measures. - -The evolutionary strategy used by cicadas of the genus Magicicada makes use of prime numbers. These insects spend most of their lives as grubs underground. They only pupate and then emerge from their burrows after 7, 13 or 17 years, at which point they fly about, breed, and then die after a few weeks at most. Biologists theorize that these prime-numbered breeding cycle lengths have evolved in order to prevent predators from synchronizing with these cycles. - -In contrast, the multi-year periods between flowering in bamboo plants are hypothesized to be smooth numbers, having only small prime numbers in their factorizations. - -Prime numbers have influenced many artists and writers. - -The French composer Olivier Messiaen used prime numbers to create ametrical music through "natural phenomena". In works such as La Nativité du Seigneur (1935) and Quatre études de rythme (1949–50), he simultaneously employs motifs with lengths given by different prime numbers to create unpredictable rhythms: the primes 41, 43, 47 and 53 appear in the third étude, "Neumes rythmiques". According to Messiaen this way of composing was "inspired by the movements of nature, movements of free and unequal durations". - -In his science fiction novel Contact, scientist Carl Sagan suggested that prime factorization could be used as a means of establishing two-dimensional image planes in communications with aliens, an idea that he had first developed informally with American astronomer Frank Drake in 1975. In the novel The Curious Incident of the Dog in the Night-Time by Mark Haddon, the narrator arranges the sections of the story by consecutive prime numbers as a way to convey the mental state of its main character, a mathematically gifted teen with Asperger syndrome. Prime numbers are used as a metaphor for loneliness and isolation in the Paolo Giordano novel The Solitude of Prime Numbers, in which they are portrayed as "outsiders" among integers.
diff --git a/wiki/wikipedia/3905.txt b/wiki/wikipedia/3905.txt deleted file mode 100644 index 35699b08f6a3db559127465e545aa8a60cf6ab6a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3905.txt +++ /dev/null @@ -1,11 +0,0 @@ -In analytic number theory, the Petersson trace formula is a kind of orthogonality relation between coefficients of a holomorphic modular form. It is a specialization of the more general Kuznetsov trace formula. - -In its simplest form the Petersson trace formula is as follows. Let $\mathcal{F}$ be an orthonormal basis of $S_k(\Gamma(1))$, the space of cusp forms of weight $k>2$ on $SL_2(\mathbb{Z})$. Then for any positive integers $m,n$ we have - - - -\frac{\Gamma(k-1)}{(4\pi \sqrt{mn})^{k-1}} \sum_{f \in \mathcal{F}} \bar{\hat{f}}(m) \hat{f}(n) = \delta_{mn} + 2\pi i^{-k} \sum_{c > 0}\frac{S(m,n;c)}{c} J_{k-1}\left(\frac{4\pi \sqrt{mn}}{c}\right), - - - -where $\delta$ is the Kronecker delta function, $S$ is the Kloosterman sum and $J$ is the Bessel function of the first kind. diff --git a/wiki/wikipedia/3906.txt b/wiki/wikipedia/3906.txt deleted file mode 100644 index 95dbaa2ff58b7b044a105b84e62dc7c5cd4be530..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3906.txt +++ /dev/null @@ -1,523 +0,0 @@ -In mathematical physics and mathematics, the Pauli matrices are a set of three 2 × 2 complex matrices which are Hermitian and unitary. Usually indicated by the Greek letter sigma (σ), they are occasionally denoted by tau (τ) when used in connection with isospin symmetries. - -\begin{align} - -\sigma_1 = \sigma_\mathrm{x} &= - -\begin{pmatrix} - -0&1\\ - -1&0 - -\end{pmatrix} \\ - -\sigma_2 = \sigma_\mathrm{y} &= - -\begin{pmatrix} - -0& -i \\ - -i&0 - -\end{pmatrix} \\ - -\sigma_3 = \sigma_\mathrm{z} &= - -\begin{pmatrix} - -1&0\\ - -0&-1 - -\end{pmatrix} \\ - -\end{align} - -These matrices are named after the physicist Wolfgang Pauli. In quantum mechanics, they occur in the Pauli equation which takes into account the interaction of the spin of a particle with an external electromagnetic field. They also represent the interaction states of two polarization filters for horizontal/vertical polarization, 45 degree polarization (right/left), and circular polarization (right/left). - -Each Pauli matrix is Hermitian, and together with the identity matrix I (sometimes considered as the zeroth Pauli matrix σ_0), the Pauli matrices form a basis for the real vector space of 2 × 2 Hermitian matrices. - -This means that any 2 × 2 Hermitian matrix can be written in a unique way as a linear combination of Pauli matrices, with all coefficients being real numbers. - -Hermitian operators represent observables in quantum mechanics, so the Pauli matrices span the space of observables of the complex 2-dimensional Hilbert space. In the context of Pauli's work, σ_k represents the observable corresponding to spin along the kth coordinate axis in three-dimensional Euclidean space $\mathbb{R}^3.$ - -The Pauli matrices (after multiplication by i to make them anti-Hermitian) also generate transformations in the sense of Lie algebras: the matrices iσ_1, iσ_2, iσ_3 form a basis for the real Lie algebra $\mathfrak{su}(2)$, which exponentiates to the special unitary group SU(2). The algebra generated by the three matrices σ_1, σ_2, σ_3 is isomorphic to the Clifford algebra of $\mathbb{R}^3$, and the (unital associative) algebra generated by iσ_1, iσ_2, iσ_3 is effectively identical (isomorphic) to that of quaternions ($\mathbb{H}$). 
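These algebraic facts are easy to sanity-check numerically. The following numpy sketch (our own illustration; the variable names are ours, and the half-trace coefficient formula is the standard orthogonality fact stated later in the article) verifies that each Pauli matrix is Hermitian and unitary, and recovers the real coefficients of a random Hermitian matrix in the basis {σ_0, σ_1, σ_2, σ_3}:

```python
import numpy as np

# The Pauli matrices together with the identity (sigma_0).
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [s0, s1, s2, s3]

# Each Pauli matrix is Hermitian and unitary.
for s in (s1, s2, s3):
    assert np.allclose(s, s.conj().T)       # Hermitian
    assert np.allclose(s @ s.conj().T, s0)  # unitary

# Any Hermitian 2x2 matrix is a real linear combination of sigma_0..sigma_3;
# the coefficients can be read off as c_k = (1/2) tr(sigma_k H).
rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (A + A.conj().T) / 2                         # a random Hermitian matrix
coeffs = [0.5 * np.trace(sk @ H) for sk in basis]
assert all(abs(c.imag) < 1e-12 for c in coeffs)  # coefficients are real
assert np.allclose(sum(c.real * sk for c, sk in zip(coeffs, basis)), H)
```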
- -All three of the Pauli matrices can be compacted into a single expression: - - -\sigma_j = -\begin{pmatrix} -\delta_{j3} & \delta_{j1} - i\delta_{j2}\\ -\delta_{j1} + i\delta_{j2} & -\delta_{j3} -\end{pmatrix} - - -where $i = \sqrt{-1}$ is the imaginary unit, and δ_jk is the Kronecker delta, which equals +1 if j = k and 0 otherwise. This expression is useful for "selecting" any one of the matrices numerically by substituting values of j = 1, 2, 3, in turn useful when any of the matrices (but no particular one) is to be used in algebraic manipulations. - -The matrices are involutory: -$$ -\sigma_1^2 = \sigma_2^2 = \sigma_3^2 = -i\sigma_1 \sigma_2 \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I -$$ - -where I is the identity matrix. - -The determinants and traces of the Pauli matrices are: - -\begin{align} - -\det \sigma_j &~= -1, \\ - -\operatorname{tr} \sigma_j &~=~~~ 0 ~. - -\end{align} - -From this, we can deduce that each matrix σ_j has eigenvalues +1 and −1. - -With the inclusion of the identity matrix, I (sometimes denoted σ_0), the Pauli matrices form an orthogonal basis (in the sense of Hilbert–Schmidt) of the Hilbert space of real 2 × 2 Hermitian matrices, $\mathcal{H}_2(\mathbb{C})$, and of the Hilbert space of all complex 2 × 2 matrices, $\mathcal{M}_{2,2}(\mathbb{C})$. - -Each of the (Hermitian) Pauli matrices has two eigenvalues, +1 and −1. The corresponding normalized eigenvectors are: - -\begin{align} - -\psi_{x+} &= \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \end{bmatrix} , & -\psi_{x-} &= \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ -1 \end{bmatrix} , \\ - -\psi_{y+} &= \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ i \end{bmatrix} , & -\psi_{y-} &= \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ -i \end{bmatrix} , \\ - -\psi_{z+} &= \begin{bmatrix} 1 \\ 0 \end{bmatrix} , & -\psi_{z-} &= \begin{bmatrix} 0 \\ 1 \end{bmatrix} ~. - -\end{align} - -The Pauli vector is defined by -$$ -\vec{\sigma} = \sigma_1 \hat{x}_1 + \sigma_2 \hat{x}_2 + \sigma_3 \hat{x}_3 ~, -$$ - -where $ \hat{x}_1, \hat{x}_2, \text{ and } \hat{x}_3 $ are an equivalent notation for the more familiar $ \hat{x}, \hat{y}, \text{ and } \hat{z} $; the subscripted notation $ \hat{x}_1, \hat{x}_2, \hat{x}_3 $ is more compact than the old $ \hat{x}, \hat{y}, \hat{z} $ form. (The Pauli vector is a formal device. It may be thought of as an element of ℳ_2($\mathbb{C}$) ⊗ $\mathbb{R}^3$, where the tensor product space is endowed with a mapping ⋅ : $\mathbb{R}^3$ × (ℳ_2($\mathbb{C}$) ⊗ $\mathbb{R}^3$) → ℳ_2($\mathbb{C}$) induced by the dot product on $\mathbb{R}^3$.) - -The Pauli vector provides a mapping mechanism from a vector basis to a Pauli matrix basis as follows, - -\begin{align} - -\vec{a} \cdot \vec{\sigma} -&= \left(a_k \hat{x}_k\right) \cdot \left(\sigma_\ell \hat{x}_\ell \right) -= a_k \sigma_\ell \hat{x}_k \cdot \hat{x}_\ell \\ \\ - -&= a_k \sigma_\ell \delta_{k\ell} -= a_k \sigma_k \\ \\ - -&= ~ a_1 \begin{pmatrix} -0 & 1 \\ -1 & 0 -\end{pmatrix} ~ + ~ a_2 i\begin{pmatrix} -0 & -1 \\ -1 & 0 -\end{pmatrix} ~ + ~ a_3 \begin{pmatrix} -1 & 0 \\ -0 & -1 -\end{pmatrix} -~ = ~ \begin{pmatrix} -a_3 & a_1 - i a_2 \\ -a_1 + i a_2 & -a_3 -\end{pmatrix} - -\end{align}
Further, -$$ -\det \bigl( \vec{a} \cdot \vec{\sigma} \bigr) = -\vec{a} \cdot \vec{a} = -\left|\vec{a}\right|^2, -$$ - -its eigenvalues being $\pm |\vec{a}| $, and moreover (see § completeness relation, below) -$$ -\frac{1}{2} \operatorname{tr} \Bigl( \bigl( \vec{a} \cdot \vec{\sigma} \bigr) \vec{\sigma} \Bigr) = \vec{a} ~. -$$ - -Its normalized eigenvectors are - - - -\psi_+ = \frac{1}{\sqrt{2|\vec{a}|(a_3+|\vec{a}|)}}\begin{bmatrix} a_3 + |\vec{a}| \\ a_1 + ia_2 \end{bmatrix}; \qquad - -\psi_- = \frac{1}{\sqrt{2|\vec{a}|(a_3+|\vec{a}|)}}\begin{bmatrix} ia_2 - a_1 \\ a_3 + |\vec{a}| \end{bmatrix} ~ . - - - -The Pauli matrices obey the following commutation relations: -$$ -[\sigma_j, \sigma_k] = 2 i \varepsilon_{j k \ell}\sigma_\ell ~ , -$$ - -and anticommutation relations: -$$ -\{\sigma_j, \sigma_k\} = 2 \delta_{j k}I ~ . -$$ - -where the structure constant ε_abc is the Levi-Civita symbol, Einstein summation notation is used, δ_jk is the Kronecker delta, and I is the 2 × 2 identity matrix. - -For example, - -Pauli vectors elegantly map these commutation and anticommutation relations to corresponding vector products. Adding the commutator to the anticommutator gives - - \begin{align} - -\left[\sigma_j, \sigma_k\right] + \{\sigma_j, \sigma_k\} &= (\sigma_j \sigma_k - \sigma_k \sigma_j ) + (\sigma_j \sigma_k + \sigma_k \sigma_j) \\ - -2i\varepsilon_{j k \ell}\sigma_\ell + 2 \delta_{j k}I &= 2\sigma_j \sigma_k - -\end{align} - -so that, -$$ -~~ \sigma_j \sigma_k = \delta_{j k}I + i\varepsilon_{j k \ell}\sigma_\ell ~ .~ -$$ - -Contracting each side of the equation with components of two 3-vectors a_p and b_q (which commute with the Pauli matrices, i.e., a_pσ_q = σ_qa_p) for each matrix σ_q and vector component a_p (and likewise with b_q) yields - -~~ \begin{align} - -a_j b_k \sigma_\ell \sigma_k & = a_j b_k \left(i\varepsilon_{jk\ell}\sigma_\ell + \delta_{jk}I\right) \\ - -a_j \sigma_j b_k \sigma_k & = i\varepsilon_{jk\ell}a_j b_k \sigma_\ell + a_j b_k \delta_{jk}I - -\end{align} ~.~ - -Finally, translating the index notation for the dot product and cross product results in - -{{NumBlk||$~~\left(\vec{a} \cdot \vec{\sigma}\right)\left(\vec{b} \cdot \vec{\sigma}\right) = \left(\vec{a} \cdot \vec{b}\right) I + i \left(\vec{a} \times \vec{b}\right) \cdot \vec{\sigma}~~$ - -| - -}} - -If i is identified with the pseudoscalar σ_xσ_yσ_z then the right hand side becomes $ a \cdot b + a \wedge b $ which is also the definition for the product of two vectors in geometric algebra. - -The following traces can be derived using the commutation and anticommutation relations. 
- - -\begin{align} - -\operatorname{tr}\left(\sigma_j \right) &= 0 \\ - -\operatorname{tr}\left(\sigma_j \sigma_k \right) &= 2\delta_{jk} \\ - -\operatorname{tr}\left(\sigma_j \sigma_k \sigma_\ell \right) &= 2i\varepsilon_{jk\ell} \\ - -\operatorname{tr}\left(\sigma_j \sigma_k \sigma_\ell \sigma_m \right) &= 2\left(\delta_{jk}\delta_{\ell m} - \delta_{j\ell}\delta_{km} + \delta_{jm}\delta_{k\ell}\right) - -\end{align} - -If the matrix σ_0 = I is also considered, these relationships become - -\begin{align} - -\operatorname{tr}\left(\sigma_\alpha \right) &= 2\delta_{0 \alpha} \\ - -\operatorname{tr}\left(\sigma_\alpha \sigma_\beta \right) &= 2\delta_{\alpha \beta} \\ - -\operatorname{tr}\left(\sigma_\alpha \sigma_\beta \sigma_\gamma \right) &= 2 \sum_{(\alpha \beta \gamma)} \delta_{\alpha \beta} \delta_{0 \gamma} - 4 \delta_{0 \alpha} \delta_{0 \beta} \delta_{0 \gamma} + 2i\varepsilon_{0 \alpha \beta \gamma} \\ - -\operatorname{tr}\left(\sigma_\alpha \sigma_\beta \sigma_\gamma \sigma_\mu \right) &= 2\left(\delta_{\alpha \beta}\delta_{\gamma \mu} - \delta_{\alpha \gamma}\delta_{\beta \mu} + \delta_{\alpha \mu}\delta_{\beta \gamma}\right) + 4\left(\delta_{\alpha \gamma} \delta_{0 \beta} \delta_{0 \mu} + \delta_{\beta \mu} \delta_{0 \alpha} \delta_{0 \gamma}\right) - 8 \delta_{0 \alpha} \delta_{0 \beta} \delta_{0 \gamma} \delta_{0 \mu} + 2 i \sum_{(\alpha \beta \gamma \mu)} \varepsilon_{0 \alpha \beta \gamma} \delta_{0 \mu} - -\end{align} - -where Greek indices α, β, γ and μ assume values from {0, x, y, z} and the notation $\sum_{(\alpha \ldots)}$ is used to denote the sum over the cyclic permutation of the included indices. - -For -$$ -\vec{a} = a\hat{n}, \quad |\hat{n}| = 1, -$$ - -one has, for even powers, 2p, p = 0, 1, 2, 3, ... -$$ -(\hat{n} \cdot \vec{\sigma})^{2p} = I -$$ - -which can be shown first for the p = 1 case using the anticommutation relations. For convenience, the case p = 0 is taken to be I by convention. - -For odd powers, 2q + 1, q = 0, 1, 2, 3, ... -$$ -\left(\hat{n} \cdot \vec{\sigma}\right)^{2q+1} = \hat{n} \cdot \vec{\sigma} . -$$ - -Matrix exponentiating, and using the Taylor series for sine and cosine, - -\begin{align} - -e^{i a\left(\hat{n} \cdot \vec{\sigma}\right)} - -&= \sum_{k=0}^\infty{\frac{i^k \left[a \left(\hat{n} \cdot \vec{\sigma}\right)\right]^k}{k!}} \\ - -&= \sum_{p=0}^\infty{\frac{(-1)^p (a\hat{n}\cdot \vec{\sigma})^{2p}}{(2p)!}} + i\sum_{q=0}^\infty{\frac{(-1)^q (a\hat{n}\cdot \vec{\sigma})^{2q + 1}}{(2q + 1)!}} \\ - -&= I\sum_{p=0}^\infty{\frac{(-1)^p a^{2p}}{(2p)!}} + i (\hat{n}\cdot \vec{\sigma}) \sum_{q=0}^\infty{\frac{(-1)^q a^{2q+1}}{(2q + 1)!}}\\ - -\end{align}. - -In the last line, the first sum is the cosine, while the second sum is the sine; so, finally, -$$ -e^{i a\left(\hat{n} \cdot \vec{\sigma}\right)} = I\cos{a} + i (\hat{n} \cdot \vec{\sigma}) \sin{a} ~ -$$ - -which is analogous to Euler's formula, extended to quaternions. - -Note that -$$ -\det[i a(\hat{n} \cdot \vec{\sigma})] = a^2 -$$, - -while the determinant of the exponential itself is just 1, which makes it the generic group element of SU(2). - -A more abstract version of this formula for a general 2 × 2 matrix can be found in the article on matrix exponentials. A general version of it for a function analytic at a and −a is provided by application of Sylvester's formula, -$$ -f(a(\hat{n} \cdot \vec{\sigma})) = I\frac{f(a) + f(-a)}{2} + \hat{n} \cdot \vec{\sigma} \frac{f(a) - f(-a)}{2} ~.
-$$ - -A straightforward application of this formula provides a parameterization of the composition law of the group SU(2). One may directly solve for c in - -\begin{align} - -e^{ia\left(\hat{n} \cdot \vec{\sigma}\right)} e^{ib\left(\hat{m} \cdot \vec{\sigma}\right)} - -&= I\left(\cos a \cos b - \hat{n} \cdot \hat{m} \sin a \sin b\right) + i\left(\hat{n} \sin a \cos b + \hat{m} \sin b \cos a - \hat{n} \times \hat{m} ~ \sin a \sin b \right) \cdot \vec{\sigma} \\ - -&= I\cos{c} + i \left(\hat{k} \cdot \vec{\sigma}\right) \sin c \\ - -&= e^{ic \left(\hat{k} \cdot \vec{\sigma}\right)}, - -\end{align} - -which specifies the generic group multiplication, where, manifestly, -$$ -\cos c = \cos a \cos b - \hat{n} \cdot \hat{m} \sin a \sin b~, -$$ - -the spherical law of cosines. Given c, then, -$$ -\hat{k} = \frac{1}{\sin c}\left(\hat{n} \sin a \cos b + \hat{m} \sin b \cos a - \hat{n}\times\hat{m} \sin a \sin b\right) ~. -$$ - -Consequently, the composite rotation parameters in this group element (a closed form of the respective BCH expansion in this case) simply amount to -$$ -e^{ic \hat{k} \cdot \vec{\sigma}} = \exp \left( i\frac{c}{\sin c} \left(\hat{n} \sin a \cos b + \hat{m} \sin b \cos a - \hat{n}\times\hat{m} ~ \sin a \sin b\right) \cdot \vec{\sigma}\right) ~. -$$ - -(Of course, when $\hat{n}$ is parallel to $\hat{m}$, so is $\hat{k}$, and c = a + b.) - -It is also straightforward to likewise work out the adjoint action on the Pauli vector, namely rotation effectively by double the angle a, - - -e^{i a\left(\hat{n} \cdot \vec{\sigma}\right)} ~ \vec{\sigma}~ e^{-i a\left(\hat{n} \cdot \vec{\sigma}\right)} = \vec{\sigma} \cos (2a) + \hat{n} \times \vec{\sigma} ~\sin (2a)+ \hat{n} ~ \hat{n} \cdot \vec{\sigma} ~ (1 - \cos (2a))~ . - - -An alternative notation that is commonly used for the Pauli matrices is to write the vector index k in the superscript, and the matrix indices as subscripts, so that the element in row α and column β of the k-th Pauli matrix is $\sigma^k_{\alpha\beta}$. - -In this notation, the completeness relation for the Pauli matrices can be written -$$ -\vec{\sigma}_{\alpha\beta}\cdot\vec{\sigma}_{\gamma\delta}\equiv \sum_{k=1}^3 \sigma^k_{\alpha\beta}\sigma^k_{\gamma\delta} = 2\delta_{\alpha\delta} \delta_{\beta\gamma} - \delta_{\alpha\beta}\delta_{\gamma\delta}~. -$$ - -Proof: The fact that the Pauli matrices, along with the identity matrix I, form an orthogonal basis for the Hilbert space of all 2 × 2 complex matrices means that we can express any matrix M as -$$ -M = cI + \sum_k a_k \sigma^k ~ -$$ - -where c is a complex number, and a is a 3-component, complex vector. It is straightforward to show, using the properties listed above, that -$$ -\operatorname{tr}\left( \sigma^j\sigma^k \right) = 2\delta_{jk} -$$ - -where "tr" denotes the trace, and hence that - -\begin{align} - -c &= \tfrac{1}{2}\operatorname{tr} M,\quad \ a_k = \tfrac{1}{2}\operatorname{tr}\sigma^kM ~. \\[3pt] - -\therefore ~~ 2M &= I\operatorname{tr} M + \sum_k \sigma^k\operatorname{tr} \sigma^k M ~, - -\end{align} - -which can be rewritten in terms of matrix indices as -$$ -2 M_{\alpha\beta} = \delta_{\alpha\beta}M_{\gamma\gamma} + \sum_k \sigma^k_{\alpha\beta}\sigma^k_{\gamma\delta}M_{\delta\gamma}~, -$$ - -where summation over the repeated indices γ and δ is implied. Since this is true for any choice of the matrix M, the completeness relation follows as stated above.◻ - -As noted above, it is common to denote the 2 × 2 unit matrix by σ_0, so σ^0_αβ = δ_αβ.
The completeness relation can alternatively be expressed as -$$ -\sum_{k=0}^3 \sigma^k_{\alpha\beta}\sigma^k_{\gamma\delta} = 2\delta_{\alpha\delta}\delta_{\beta\gamma} ~ . -$$ - -The fact that any Hermitian complex 2 × 2 matrix can be expressed in terms of the identity matrix and the Pauli matrices also leads to the Bloch sphere representation of the density matrix of a 2 × 2 mixed state (a positive semidefinite 2 × 2 matrix with unit trace). This can be seen by first expressing an arbitrary Hermitian matrix as a real linear combination of {σ_0, σ_1, σ_2, σ_3} as above, and then imposing the positive-semidefinite and trace 1 conditions. - -For a pure state, in polar coordinates, -$$ -\vec{a} = \begin{pmatrix}\sin\theta \cos\phi & \sin\theta \sin\phi & \cos\theta\end{pmatrix}, -$$ the idempotent density matrix -$$ -\tfrac{1}{2}\left(\mathbf{1} + \vec{a} \cdot \vec{\sigma}\right) = \begin{pmatrix} \cos^2\left(\frac{\theta}{2}\right) & e^{-i\phi}\sin\left(\frac{\theta}{2}\right)\cos\left(\frac{\theta}{2}\right) \\ e^{+i\phi}\sin\left(\frac{\theta}{2}\right)\cos\left(\frac{\theta}{2}\right) & \sin^2\left(\frac{\theta}{2}\right) \end{pmatrix} -$$ - -acts on the state eigenvector $\begin{pmatrix}\cos\left(\frac{\theta}{2}\right) & e^{+i\phi}\sin\left(\frac{\theta}{2}\right) \end{pmatrix}$ with eigenvalue +1, hence it acts like a projection operator. - -Let P_jk be the transposition (also known as a permutation) between two spins σ_j and σ_k living in the tensor product space $\mathbb{C}^2 \otimes \mathbb{C}^2$, -$$ -P_{jk} \left| \sigma_j \sigma_k \right\rangle = \left| \sigma_k \sigma_j \right\rangle ~. -$$ - -This operator can also be written more explicitly as Dirac's spin exchange operator, -$$ -P_{jk} = \frac{1}{2}\left(\vec{\sigma}_j \cdot \vec{\sigma}_k + 1\right) ~ . -$$ - -Its eigenvalues are therefore 1 or −1. (Explicitly, in the convention of "right-space matrices into elements of left-space matrices", it is $\left(\begin{smallmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{smallmatrix}\right)$.) It may thus be utilized as an interaction term in a Hamiltonian, splitting the energy eigenvalues of its symmetric versus antisymmetric eigenstates. - -The group SU(2) is the Lie group of unitary 2 × 2 matrices with unit determinant; its Lie algebra is the set of all 2 × 2 anti-Hermitian matrices with trace 0. Direct calculation, as above, shows that the Lie algebra $\mathfrak{su}_2$ is the 3-dimensional real algebra spanned by the set {iσ_k}. In compact notation, -$$ - \mathfrak{su}(2) = \operatorname{span} \{ i\sigma_1 , i\sigma_2 , i\sigma_3 \}~. -$$ - -As a result, each iσ_j can be seen as an infinitesimal generator of SU(2). The elements of SU(2) are exponentials of linear combinations of these three generators, and multiply as indicated above in discussing the Pauli vector. Although this suffices to generate SU(2), it is not a proper representation of su(2), as the Pauli eigenvalues are scaled unconventionally. The conventional normalization is λ = 1/2, so that -$$ - \mathfrak{su}(2) = \operatorname{span} \left\{\frac{i\sigma_1}{2}, \frac{i\sigma_2}{2}, \frac{i\sigma_3}{2} \right\}~. -$$ - -As SU(2) is a compact group, its Cartan decomposition is trivial. - -The Lie algebra su(2) is isomorphic to the Lie algebra so(3), which corresponds to the Lie group SO(3), the group of rotations in three-dimensional space.
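The completeness relation and the expansion M = cI + Σ_k a_k σ^k proved above can likewise be verified numerically. A minimal sketch (the einsum index letters play the roles of α, β, γ, δ; all names are ours):

```
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)
d = np.eye(2)

# completeness: sum_k s^k_{ab} s^k_{cd} = 2 d_{ad} d_{bc} - d_{ab} d_{cd}
lhs = sum(np.einsum('ab,cd->abcd', sk, sk) for sk in s)
rhs = 2 * np.einsum('ad,bc->abcd', d, d) - np.einsum('ab,cd->abcd', d, d)
assert np.allclose(lhs, rhs)

# expansion of an arbitrary M, with c = tr(M)/2 and a_k = tr(s^k M)/2
rng = np.random.default_rng(1)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
c = np.trace(M) / 2
a = [np.trace(sk @ M) / 2 for sk in s]
assert np.allclose(M, c * I2 + sum(ak * sk for ak, sk in zip(a, s)))
print("completeness relation and decomposition verified")
```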
In other words, one can say that the iσ_j are a realization (and, in fact, the lowest-dimensional realization) of infinitesimal rotations in three-dimensional space. However, even though su(2) and so(3) are isomorphic as Lie algebras, SU(2) and SO(3) are not isomorphic as Lie groups. SU(2) is actually a double cover of SO(3), meaning that there is a two-to-one group homomorphism from SU(2) to SO(3); see the article on the relationship between SO(3) and SU(2). - -The real linear span of {I, iσ_1, iσ_2, iσ_3} is isomorphic to the real algebra of quaternions $\mathbb{H}$, represented by the span of the basis vectors $~ \left\{ \mathbf{1}, \mathbf{i}, \mathbf{j}, \mathbf{k} \right\} ~.$ The isomorphism from $\mathbb{H}$ to this set is given by the following map (notice the reversed signs for the Pauli matrices): -$$ -\mathbf{1} \mapsto I, \quad \mathbf{i} \mapsto - \sigma_2\sigma_3 = - i\sigma_1, \quad \mathbf{j} \mapsto - \sigma_3\sigma_1 = - i\sigma_2, \quad \mathbf{k} \mapsto - \sigma_1\sigma_2 = - i\sigma_3. -$$ - -Alternatively, the isomorphism can be achieved by a map using the Pauli matrices in reversed order, -$$ -\mathbf{1} \mapsto I, \quad \mathbf{i} \mapsto i\sigma_3 , \quad \mathbf{j} \mapsto i\sigma_2 , \quad \mathbf{k} \mapsto i\sigma_1 ~ . -$$ - -As the set of versors U ⊂ $\mathbb{H}$ forms a group isomorphic to SU(2), U gives yet another way of describing SU(2). The two-to-one homomorphism from SU(2) to SO(3) may be given in terms of the Pauli matrices in this formulation. - -In classical mechanics, Pauli matrices are useful in the context of the Cayley–Klein parameters. The matrix P corresponding to the position $\vec{x}$ of a point in space is defined in terms of the above Pauli vector matrix, -$$ -P = \vec{x} \cdot \vec{\sigma} = x\sigma_x + y\sigma_y + z\sigma_z ~. -$$ - -Consequently, the transformation matrix Q_θ for rotations about the x-axis through an angle θ may be written in terms of Pauli matrices and the unit matrix as -$$ -Q_\theta = \boldsymbol{1}\cos\frac{\theta}{2} + i\sigma_x \sin\frac{\theta}{2} ~. -$$ - -Similar expressions follow for general Pauli vector rotations as detailed above. - -In quantum mechanics, each Pauli matrix is related to an angular momentum operator that corresponds to an observable describing the spin of a spin-1/2 particle, in each of the three spatial directions. As an immediate consequence of the Cartan decomposition mentioned above, iσ_j are the generators of a projective representation (spin representation) of the rotation group SO(3) acting on non-relativistic particles with spin 1/2. The states of the particles are represented as two-component spinors. In the same way, the Pauli matrices are related to the isospin operator. - -An interesting property of spin-1/2 particles is that they must be rotated by an angle of 4π in order to return to their original configuration. This is due to the two-to-one correspondence between SU(2) and SO(3) mentioned above, and the fact that, although one visualizes spin up/down as the north/south pole on the 2-sphere S^2, they are actually represented by orthogonal vectors in the two-dimensional complex Hilbert space. - -For a spin-1/2 particle, the spin operator is given by J = ħ/2 σ, the fundamental representation of SU(2). By taking Kronecker products of this representation with itself repeatedly, one may construct all higher irreducible representations.
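The Kronecker-product construction in the last sentence can be made concrete for the smallest case: two spin-1/2 particles, whose product representation splits into spin 0 and spin 1. A minimal sketch (units with ħ = 1; the eigenvalues of the Casimir operator J·J are j(j + 1); all names are ours):

```
import numpy as np

# spin-1/2 operators J_k = sigma_k / 2 (in units with hbar = 1)
s1 = np.array([[0, 1], [1, 0]], dtype=complex) / 2
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
s3 = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

# total spin of two spin-1/2 particles: J_k = J_k (x) I + I (x) J_k
J = [np.kron(s, I2) + np.kron(I2, s) for s in (s1, s2, s3)]

# Casimir J.J has eigenvalue j(j+1): expect 0 once (singlet, j = 0)
# and 2 three times (triplet, j = 1)
casimir = sum(Jk @ Jk for Jk in J)
print(np.round(np.linalg.eigvalsh(casimir), 6))  # [0. 2. 2. 2.]
```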
That is, the resulting spin operators for higher-spin systems in three spatial dimensions, for arbitrarily large j, can be calculated using this spin operator and ladder operators. They can be found in the article on the rotation group SO(3), in the note on its Lie algebra. The analog formula to the above generalization of Euler's formula for Pauli matrices, the group element in terms of spin matrices, is tractable, but less simple. - -Also useful in the quantum mechanics of multiparticle systems, the general Pauli group G_n is defined to consist of all n-fold tensor products of Pauli matrices. - -In relativistic quantum mechanics, the spinors in four dimensions are 4 × 1 (or 1 × 4) matrices. Hence the Pauli matrices or the Sigma matrices operating on these spinors have to be 4 × 4 matrices. They are defined in terms of 2 × 2 Pauli matrices as -$$ -\mathsf{\Sigma}_k = \begin{pmatrix} \mathsf{\sigma}_k & 0 \\ 0 & \mathsf{\sigma}_k \end{pmatrix}. -$$ - -It follows from this definition that the $ \mathsf{ \Sigma }_k $ matrices have the same algebraic properties as the σ_k matrices. - -However, relativistic angular momentum is not a three-vector, but a second-order four-tensor. Hence $\mathsf{\Sigma}_k$ needs to be replaced by Σ_μν, the generator of Lorentz transformations on spinors. By the antisymmetry of angular momentum, the Σ_μν are also antisymmetric. Hence there are only six independent matrices. - -The first three are $ \Sigma_{jk}\equiv \epsilon_{jk\ell}\mathsf{\Sigma}_\ell ~.$ The remaining three are $-i\Sigma_{0k} \equiv \mathsf{\alpha}_k,$ where the Dirac α_k matrices are defined as -$$ -\mathsf{\alpha}_k = \begin{pmatrix} 0 & \mathsf{\sigma}_k\\ \mathsf{\sigma}_k & 0\end{pmatrix}~. -$$ - -The relativistic spin matrices Σ_μν are written in compact form in terms of the commutator of gamma matrices as -$$ -\Sigma_{\mu\nu} = \frac{i}{2}\left[\gamma_\mu, \gamma_\nu\right]~. -$$ - -In quantum information, single-qubit quantum gates are 2 × 2 unitary matrices. The Pauli matrices are some of the most important single-qubit operations. In that context, the Cartan decomposition given above is called the "Z–Y decomposition of a single-qubit gate". Choosing a different Cartan pair gives a similar "X–Y decomposition of a single-qubit gate". diff --git a/wiki/wikipedia/3907.txt b/wiki/wikipedia/3907.txt deleted file mode 100644 index bcd0e435d9f4a01492bf246f18fb20af1879367d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3907.txt +++ /dev/null @@ -1,14 +0,0 @@ -In mathematics, the Property P conjecture is a statement about 3-manifolds obtained by Dehn surgery on a knot in the 3-sphere. A knot in the 3-sphere is said to have Property P if every 3-manifold obtained by performing (non-trivial) Dehn surgery on the knot is not simply-connected. The conjecture states that all knots, except the unknot, have Property P. - -Research on Property P was started by R. H. Bing, who popularized the name and conjecture. - -This conjecture can be thought of as a first step to resolving the Poincaré conjecture, since the Lickorish–Wallace theorem says any closed, orientable 3-manifold results from Dehn surgery on a link. - -If a knot $K \subset \mathbb{S}^{3}$ has Property P, then one cannot construct a counterexample to the Poincaré conjecture by surgery along $K$. - -A proof was announced in 2004, as the combined result of efforts of mathematicians working in several different fields.
- -Let $[l], [m] \in \pi_{1}(\mathbb{S}^{3} \setminus K)$ denote elements corresponding to a preferred longitude and meridian of a tubular neighborhood of $K$. Then $K$ has Property P if and only if its knot group is never trivialised by adjoining a relation of the form $ m = l^{a} $ for some $ 0 \ne a \in \mathbb{Z}$. diff --git a/wiki/wikipedia/3908.txt b/wiki/wikipedia/3908.txt deleted file mode 100644 index b3775d1bc07a8dc5c620e36926062a7efbe4766a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3908.txt +++ /dev/null @@ -1,13 +0,0 @@ -i-drive was a file hosting service that operated from 1998 to 2002. - -The name derived from the words "Internet drive". - -Based in San Francisco, the company was founded in 1998 with seed investors and launched its first product, an online file storage service, in August 1999. The idea originated from an early company Jeff Bonforte co-founded in 1996 called ShellServer.net, which provided 10 MB of space for IRC users. Bonforte compiled the founding team, which included Chris Lindland, Patrick Fenton, Tim Craycroft, Rich MacAlmon, John Reddig and Lou Perrelli (the last three were also the company's first angel investors). Originally presented as i-drive.com, the company acquired the domain idrive.com around October 1999. The initial product offered a limited amount of free file storage space, and later enhanced the offering with 'sideloading' – storing files such as MP3 files collected on the World Wide Web without the need for the user to download them to their individual computer. - -In January 2000, the company began offering unlimited storage space and an application called Filo. - -In 2001 the company transitioned from offering the free storage service and transformed the underlying software architecture into a middleware storage mechanism and product, seeking to sell into various markets including the 3G marketplace, targeting companies such as DoCoMo and Earthlink. - -In January 2002 the company name was changed to Anuvio Technologies. - -i-drive's assets were acquired by the EMC Corporation in 2002. Certain assets (including the idrive.com domain name) were acquired by Pro Softnet Corp, which also offered online storage services. At its height, i-drive hosted over 10 million registered users, employed 110 people, and held partnerships with MP3.com, ZDnet.com, and 40 major universities. The service was rated as a "Top 5 Web Application" by CNET in 2000 and one of the "3 Top Technologies to Watch" by Fortune Magazine in 2000. The company raised over US$30 million from venture capitalists such as Draper Fisher Jurvetson, Information Technology Ventures, Global Retail Partners, Hikari (Japan), Philips (Netherlands), EMC, and Partners Group (Switzerland). diff --git a/wiki/wikipedia/3909.txt b/wiki/wikipedia/3909.txt deleted file mode 100644 index 0ca4e20d3170f09340dc604dd8fd36ab9b014677..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3909.txt +++ /dev/null @@ -1,17 +0,0 @@ -Optimal job scheduling is a class of optimization problems related to scheduling. The inputs to such problems are a list of jobs (also called processes or tasks) and a list of machines (also called processors or workers). The required output is a schedule – an assignment of jobs to machines. The schedule should optimize a certain objective function. In the literature, problems of optimal job scheduling are often called machine scheduling, processor scheduling, multiprocessor scheduling, or just scheduling.
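As a concrete illustration of a schedule optimizing an objective function, here is a hedged sketch of the classic longest-processing-time-first heuristic for two identical machines, minimizing the maximum machine load; the job data are invented, and the notation P2||C_max for this (NP-hard) problem is introduced just below:

```
# Greedy LPT heuristic: take the jobs longest first and assign each to
# the currently least-loaded of two identical machines.  This is only
# a heuristic, not an exact algorithm, for makespan minimization.
def lpt_two_machines(processing_times):
    loads = [0, 0]
    schedule = {0: [], 1: []}
    for p in sorted(processing_times, reverse=True):
        m = 0 if loads[0] <= loads[1] else 1
        loads[m] += p
        schedule[m].append(p)
    return schedule, max(loads)   # max load = makespan of this schedule

jobs = [7, 5, 4, 4, 3, 2]         # invented processing times
schedule, makespan = lpt_two_machines(jobs)
print(schedule, makespan)         # {0: [7, 4, 2], 1: [5, 4, 3]} 13
```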
- -There are many different problems of optimal job scheduling, differing in the nature of the jobs, the nature of the machines, the restrictions on the schedule, and the objective function. A convenient notation for optimal scheduling problems was introduced by Ronald Graham, Eugene Lawler, Jan Karel Lenstra and Alexander Rinnooy Kan. - -Here are some examples for problems defined using the above notation. - -* $P2||C_{\max}$ – assigning each of $n$ given jobs to one of the 2 identical machines so as to minimize the maximum total processing time over the machines. This is an optimization version of the partition problem. - -* 1|prec|$L_\max$ – assigning to a single machine, processes with general precedence constraints, minimizing maximum lateness. - -* R|pmtn|$\sum C_i$ – assigning tasks to a variable number of unrelated parallel machines, allowing preemption, minimizing total completion time. - -* J3|$p_{ij}$|$C_\max$ – a 3-machine job shop problem with unit processing times, where the goal is to minimize the maximum completion time. - -* P|$size_{j}$|$C_\max$ – assigning jobs to $m$ parallel identical machines, where each job comes with a number of machines on which it must be scheduled at the same time, minimizing maximum completion time. See parallel task scheduling. - -All variants surveyed above are deterministic in that all data is known to the planner. There are also stochastic variants, in which the data is not known in advance, or may change randomly. diff --git a/wiki/wikipedia/391.txt b/wiki/wikipedia/391.txt deleted file mode 100644 index 8eb4129b5bd1758857229593ed90ebc1c0750c11..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/391.txt +++ /dev/null @@ -1,87 +0,0 @@ -The Mason–Stothers theorem, or simply Mason's theorem, is a mathematical theorem about polynomials, analogous to the abc conjecture for integers. It is named after Walter Wilson Stothers, who published it in 1981, and R. C. Mason, who rediscovered it shortly thereafter. - -The theorem states: - -Let a(t), b(t), and c(t) be relatively prime polynomials over a field such that a + b = c and such that not all of them have vanishing derivative. Then -$$ -\max\{\deg(a),\deg(b),\deg(c)\} \le \deg(\operatorname{rad}(abc))-1. -$$ - -Here rad(f) is the product of the distinct irreducible factors of f. For algebraically closed fields it is the polynomial of minimum degree that has the same roots as f; in this case deg(rad(f)) gives the number of distinct roots of f. - -*Over fields of characteristic 0 the condition that a, b, and c do not all have vanishing derivative is equivalent to the condition that they are not all constant. Over fields of characteristic p > 0 it is not enough to assume that they are not all constant. For example, the identity $t^p + 1 = (t + 1)^p$ gives an example where the maximum degree of the three polynomials (a and b as the summands on the left hand side, and c as the right hand side) is p, but the degree of the radical is only 2. - -*Taking $a(t) = t^n$ and $c(t) = (t+1)^n$ gives an example where equality holds in the Mason–Stothers theorem, showing that the inequality is in some sense the best possible. - -*A corollary of the Mason–Stothers theorem is the analog of Fermat's Last Theorem for function fields: if $a(t)^n + b(t)^n = c(t)^n$ for a, b, c relatively prime polynomials over a field of characteristic not dividing n and n > 2 then either at least one of a, b, or c is 0 or they are all constant. - -Snyder gave the following elementary proof of the Mason–Stothers theorem. - -Step 1.
The condition a + b + c = 0 (obtained from a + b = c by replacing c with −c, which changes neither degrees nor radicals) implies that the Wronskians W(a, b) = ab′ − a′b, W(b, c), and W(c, a) are all equal. Write W for their common value. - -Step 2. The condition that at least one of the derivatives a′, b′, or c′ is nonzero and that a, b, and c are coprime is used to show that W is nonzero. - -For example, if W = 0 then ab′ = a′b so a divides a′ (as a and b are coprime) so a′ = 0 (as deg a > deg a′ unless a is constant). - -Step 3. W is divisible by each of the greatest common divisors (a, a′), (b, b′), and (c, c′). Since these are coprime it is divisible by their product, and since W is nonzero we get - -deg (a, a′) + deg (b, b′) + deg (c, c′) ≤ deg W. - -Step 4. Substituting in the inequalities - -deg (a, a′) ≥ deg a − (number of distinct roots of a) - -deg (b, b′) ≥ deg b − (number of distinct roots of b) - -deg (c, c′) ≥ deg c − (number of distinct roots of c) - -(where the roots are taken in some algebraic closure) and - -deg W ≤ deg a + deg b − 1 - -we find that - -deg c ≤ (number of distinct roots of abc) − 1 - -which is what we needed to prove (the same bound for deg a and deg b follows by symmetry). - -There is a natural generalization in which the ring of polynomials is replaced by a one-dimensional function field. - -Let k be an algebraically closed field of characteristic 0, let C/k be a smooth projective curve of genus g, let $ a,b\in k(C) $ be rational functions on C satisfying $a+b=1$, and let S be a set of points in C(k) containing all of the zeros and poles of a and b. - -Then -$$ - \max\bigl\{ \deg(a),\deg(b) \bigr\} \le \max\bigl\{|S| + 2g - 2,0\bigr\}. -$$ - -Here the degree of a function in k(C) is the degree of the map it induces from C to $\mathbb{P}^1$. - -This was proved by Mason, with an alternative short proof published the same year by J. H. Silverman. - -There is a further generalization, due independently to J. F. Voloch and to W. D. Brownawell and D. W. Masser, that gives an upper bound for n-variable S-unit equations a1 + a2 + ... + an = 1 provided that no subset of the ai is k-linearly dependent. Under this assumption, they prove that -$$ - \max\bigl\{ \deg(a_1),\ldots,\deg(a_n) \bigr\} \le \frac{1}{2}n(n-1)\max\bigl\{|S| + 2g - 2,0\bigr\}. -$$ diff --git a/wiki/wikipedia/3910.txt b/wiki/wikipedia/3910.txt deleted file mode 100644 index 8d018ca72affc41d5f2afb6fb4b1640cc90c1534..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3910.txt +++ /dev/null @@ -1,7 +0,0 @@ -Grsync is a graphical user interface for rsync. rsync is a differential backup and file synchronization tool widely used in Unix-like operating systems. - -Grsync is developed with the GTK widget toolkit. Like rsync, Grsync is free and open-source software licensed under the GNU General Public License. - -Rsync is a tool for creating backups in Linux systems. It supports backing up local folders, SSH tunneling, delta-only synchronization, and so on. - -Grsync makes these capabilities available through a graphical user interface, without the need to learn rsync's complex set of command-line arguments. In some cases, it is easier to back up files with grsync than with rsync. Since version 1.3.0, Grsync has GTK-3 compatibility.
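The Mason–Stothers equality example above, a = t^n and c = (t + 1)^n with b = c − a, can be checked symbolically. A minimal sketch using SymPy; the helper `rad`, computing the product of distinct irreducible factors, is ours:

```
from sympy import symbols, factor_list, expand, degree, gcd

t = symbols('t')

def rad(f):
    """Product of the distinct irreducible factors of f (its radical)."""
    _const, factors = factor_list(f)
    r = 1
    for base, _exp in factors:
        r *= base
    return expand(r)

n = 5
a = t**n
c = expand((t + 1)**n)
b = expand(c - a)
assert gcd(gcd(a, b), c) == 1            # relatively prime, a + b = c

lhs = max(degree(a, t), degree(b, t), degree(c, t))
rhs = degree(rad(expand(a * b * c)), t) - 1
print(lhs, rhs)                          # 5 5: equality, as claimed above
assert lhs <= rhs                        # the Mason-Stothers inequality
```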
diff --git a/wiki/wikipedia/3911.txt b/wiki/wikipedia/3911.txt deleted file mode 100644 index ed30858f310c9ff22f11d4fd1f306b33b15f6368..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3911.txt +++ /dev/null @@ -1,18 +0,0 @@ -In mathematical logic, the Paris–Harrington theorem states that a certain combinatorial principle in Ramsey theory, namely the strengthened finite Ramsey theorem, is true, but not provable in Peano arithmetic. This has been described by some (such as the editor of the Handbook of Mathematical Logic) as the first "natural" example of a true statement about the integers that could be stated in the language of arithmetic, but not proved in Peano arithmetic; such statements were already known to exist by Gödel's first incompleteness theorem. - -The strengthened finite Ramsey theorem is a statement about colorings and natural numbers and states that: - -For any positive integers n, k, m, such that m ≥ n, one can find N with the following property: if we color each of the n-element subsets of S = {1, 2, 3,..., N} with one of k colors, then we can find a subset Y of S with at least m elements, such that all n-element subsets of Y have the same color, and the number of elements of Y is at least the smallest element of Y. - -Without the condition that the number of elements of Y is at least the smallest element of Y, this is a corollary of the finite Ramsey theorem in $K_{\mathcal{P}_n(S)}$, with N given by: -$$ -\binom{N}{n} = |\mathcal{P}_n(S)| \ge R(\underbrace{ m, m, \ldots , m }_k). -$$ - -Moreover, the strengthened finite Ramsey theorem can be deduced from the infinite Ramsey theorem in almost exactly the same way that the finite Ramsey theorem can be deduced from it, using a compactness argument (see the article on Ramsey's theorem for details). This proof can be carried out in second-order arithmetic. - -The Paris–Harrington theorem states that the strengthened finite Ramsey theorem is not provable in Peano arithmetic. - -Roughly speaking, Jeff Paris and Leo Harrington (1977) showed that the strengthened finite Ramsey theorem is unprovable in Peano arithmetic by showing that in Peano arithmetic it implies the consistency of Peano arithmetic itself. Since Peano arithmetic cannot prove its own consistency by Gödel's second incompleteness theorem, this shows that Peano arithmetic cannot prove the strengthened finite Ramsey theorem. - -The smallest number N that satisfies the strengthened finite Ramsey theorem is a computable function of n, m, k, but grows extremely fast. In particular, it is not primitive recursive, but it is also far larger than standard examples of non-primitive recursive functions such as the Ackermann function. Its growth is so large that Peano arithmetic cannot prove it is defined everywhere, although Peano arithmetic easily proves that the Ackermann function is well defined. diff --git a/wiki/wikipedia/3912.txt b/wiki/wikipedia/3912.txt deleted file mode 100644 index d0515887d12839693af075fe8c618db77485a89f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3912.txt +++ /dev/null @@ -1,17 +0,0 @@ -In probability theory, Kolmogorov's two-series theorem is a result about the convergence of random series. It follows from Kolmogorov's inequality and is used in one proof of the strong law of large numbers.
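The convergence phenomenon, stated precisely below, is easy to see numerically. A minimal Monte Carlo sketch in which X_n takes the values ±1/n with equal probability (our choice: then E[X_n] = 0 and Var(X_n) = 1/n², so both series in the theorem converge):

```
import numpy as np

# X_n = +/- 1/n with equal probability: both series converge, so the
# partial sums of sum X_n should settle down almost surely.
rng = np.random.default_rng(42)
N = 100_000
signs = rng.choice([-1.0, 1.0], size=N)
partial_sums = np.cumsum(signs / np.arange(1, N + 1))

print(partial_sums[999], partial_sums[99_999])   # nearly identical tails
print("fluctuation over the last 90%:",
      partial_sums[10_000:].max() - partial_sums[10_000:].min())
```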
- -Let $\left( X_n \right)_{n=1}^{\infty}$ be independent random variables with expected values $\mathbf{E} \left[ X_n \right] = \mu_n$ and variances $\mathbf{Var} \left( X_n \right) = \sigma_n^2$, such that $\sum_{n=1}^{\infty} \mu_n$ converges in ℝ and $\sum_{n=1}^{\infty} \sigma_n^2$ converges in ℝ. Then $\sum_{n=1}^{\infty} X_n$ converges in ℝ almost surely. - -Assume WLOG $\mu_n = 0$. Set $S_N = \sum_{n=1}^N X_n$; we will see that $\limsup_N S_N - \liminf_NS_N = 0$ with probability 1. - -For every $m \in \mathbb{N}$, - -\limsup_{N \to \infty} S_N - \liminf_{N \to \infty} S_N = \limsup_{N \to \infty} \left( S_N - S_m \right) - \liminf_{N \to \infty} \left( S_N - S_m \right) \leq 2 \max_{k \in \mathbb{N} } \left| \sum_{i=1}^{k} X_{m+i} \right| - -Thus, for every $m \in \mathbb{N}$ and $\epsilon > 0$, - -\begin{align} \mathbb{P} \left( \limsup_{N \to \infty} \left( S_N - S_m \right) - \liminf_{N \to \infty} \left( S_N - S_m \right) \geq \epsilon \right) &\leq \mathbb{P} \left( 2 \max_{k \in \mathbb{N} } \left| \sum_{i=1}^{k} X_{m+i} \right| \geq \epsilon \ \right) \\ &= \mathbb{P} \left( \max_{k \in \mathbb{N} } \left| \sum_{i=1}^{k} X_{m+i} \right| \geq \frac{\epsilon}{2} \ \right) \\ &\leq \limsup_{N \to \infty} 4\epsilon^{-2} \sum_{i=m+1}^{m+N} \sigma_i^2 \\ &= 4\epsilon^{-2} \lim_{N \to \infty} \sum_{i=m+1}^{m+N} \sigma_i^2 \end{align} - -where the second inequality is due to Kolmogorov's inequality. - -By the assumption that $\sum_{n=1}^{\infty} \sigma_n^2$ converges, the last term tends to 0 as $m \to \infty$, for every $\epsilon > 0$. Hence $\mathbb{P}(\limsup_N S_N - \liminf_N S_N \geq \epsilon) = 0$ for every $\epsilon > 0$, so $\limsup_N S_N - \liminf_N S_N = 0$ with probability 1 and the series converges almost surely. diff --git a/wiki/wikipedia/3913.txt b/wiki/wikipedia/3913.txt deleted file mode 100644 index b5317ca7547344cf2f16d8cb54cb8ff12ef62f2a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3913.txt +++ /dev/null @@ -1,5 +0,0 @@ -The Dittert conjecture, or Dittert–Hajek conjecture, is a mathematical hypothesis (in combinatorics) concerning the maximum achieved by a particular function $\phi$ of matrices with real, nonnegative entries satisfying a summation condition. The conjecture is due to Eric Dittert and (independently) Bruce Hajek. - -Let $A = [a_{ij}]$ be a square matrix of order $n$ with nonnegative entries and with $\sum_{i=1}^n \left ( \sum_{j=1}^n a_{ij} \right ) = n$. Its permanent is defined as $ \operatorname{per}(A)=\sum_{\sigma\in S_n}\prod_{i=1}^n a_{i,\sigma(i)}$, where the sum extends over all elements $\sigma$ of the symmetric group. - -The Dittert conjecture asserts that the function $\operatorname{\phi}(A)$ defined by $\prod_{i=1}^n \left ( \sum_{j=1}^n a_{ij} \right ) + \prod_{j=1}^n \left ( \sum_{i=1}^n a_{ij} \right ) - \operatorname{per}(A)$ is (uniquely) maximized when $A = (1/n) J_n$, where $J_n$ is defined to be the square matrix of order $n$ with all entries equal to 1. diff --git a/wiki/wikipedia/3914.txt b/wiki/wikipedia/3914.txt deleted file mode 100644 index a7a8652e8ad85a3ae317a03866f9edbd5e7025ec..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3914.txt +++ /dev/null @@ -1,30 +0,0 @@ -In propositional logic, material implication is a valid rule of replacement that allows for a conditional statement to be replaced by a disjunction in which the antecedent is negated. The rule states that P implies Q is logically equivalent to not-$P$ or $Q$ (i.e. either $Q$ must be true, or $P$ must not be true) and that either form can replace the other in logical proofs.
-$$ -P \to Q \Leftrightarrow \neg P \lor Q -$$ - -where "$\Leftrightarrow$" is a metalogical symbol representing "can be replaced in a proof with," and P and Q are any given logical statements. - -Suppose we are given that $P \to Q$. Then, we have $\neg P \lor P$ by the law of excluded middle (i.e. either $P$ must be true, or $P$ must not be true). - -Subsequently, since $P \to Q$, $P$ can be replaced by $Q$ in the statement, and thus it follows that $\neg P \lor Q$ (i.e. either $Q$ must be true, or $P$ must not be true). - -Suppose, conversely, we are given $\neg P \lor Q$. Then if $P$ is true, that rules out the first disjunct, so we have $Q$. In short, $P \to Q$. However, if $P$ is false, then this entailment fails, because the first disjunct $\neg P$ is true, which puts no constraint on the second disjunct $Q$. Hence, nothing can be said about $P \to Q$. In sum, the equivalence in the case of false $P$ is only conventional, and hence the formal proof of equivalence is only partial. - -This can also be expressed with a truth table: - -P | Q | P → Q | ¬P ∨ Q - -T | T | T | T - -T | F | F | F - -F | T | T | T - -F | F | T | T - -An example is: - -We are given the conditional fact that if it is a bear, then it can swim. Then all 4 possibilities in the truth table are compared to that fact. - -1st: If it is a bear, then it can swim — T - -2nd: If it is a bear, then it can not swim — F - -3rd: If it is not a bear, then it can swim — T, because it doesn't contradict our initial fact. - -4th: If it is not a bear, then it can not swim — T (as above) - -Thus, the conditional fact can be converted to $\neg P \vee Q$, which is "it is not a bear" or "it can swim", - -where $P$ is the statement "it is a bear" and $Q$ is the statement "it can swim". diff --git a/wiki/wikipedia/3915.txt b/wiki/wikipedia/3915.txt deleted file mode 100644 index 5dbe46242ab64389dd4d034a15b55c9093d45c75..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3915.txt +++ /dev/null @@ -1,21 +0,0 @@ -In computational geometry, Klee's measure problem is the problem of determining how efficiently the measure of a union of (multidimensional) rectangular ranges can be computed. Here, a d-dimensional rectangular range is defined to be a Cartesian product of d intervals of real numbers, which is a subset of $\mathbb{R}^d$. - -The problem is named after Victor Klee, who gave an algorithm for computing the length of a union of intervals (the case d = 1) which was later shown to be optimally efficient in the sense of computational complexity theory. The computational complexity of computing the area of a union of 2-dimensional rectangular ranges is now also known, but the case d ≥ 3 remains an open problem. - -In 1977, Victor Klee considered the following problem: given a collection of n intervals in the real line, compute the length of their union. He then presented an algorithm to solve this problem with computational complexity (or "running time") $O(n \log n)$ – see Big O notation for the meaning of this statement. This algorithm, based on sorting the intervals, was later shown by Michael Fredman and Bruce Weide (1978) to be optimal. - -Later in 1977, Jon Bentley considered a 2-dimensional analogue of this problem: given a collection of n rectangles, find the area of their union. He also obtained an $O(n \log n)$ algorithm, now known as Bentley's algorithm, based on reducing the problem to n 1-dimensional problems: this is done by sweeping a vertical line across the area. Using this method, the area of the union can be computed without explicitly constructing the union itself.
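Klee's O(n log n) interval algorithm is short enough to sketch in full: sort the intervals, then sweep once, merging overlapping intervals and accumulating the merged lengths. A minimal version, with intervals given as (left, right) pairs (names are ours):

```
def union_length(intervals):
    """Total length of a union of 1-D intervals in O(n log n) time."""
    total = 0.0
    cur_left = cur_right = None
    for left, right in sorted(intervals):
        if cur_right is None or left > cur_right:   # disjoint: close block
            if cur_right is not None:
                total += cur_right - cur_left
            cur_left, cur_right = left, right
        else:                                       # overlapping: extend
            cur_right = max(cur_right, right)
    if cur_right is not None:
        total += cur_right - cur_left
    return total

print(union_length([(0, 2), (1, 3), (5, 6)]))  # 4.0 = len([0,3]) + len([5,6])
```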
Bentley's algorithm is now also known to be optimal (in the 2-dimensional case), and is used in computer graphics, among other areas. - -These two problems are the 1- and 2-dimensional cases of a more general question: given a collection of n d-dimensional rectangular ranges, compute the measure of their union. This general problem is Klee's measure problem. - -When generalized to the d-dimensional case, Bentley's algorithm has a running time of $O(n^{d-1} \log n)$. This turns out not to be optimal, because it only decomposes the d-dimensional problem into n (d-1)-dimensional problems, and does not further decompose those subproblems. In 1981, Jan van Leeuwen and Derek Wood improved the running time of this algorithm to $O(n^{d-1})$ for d ≥ 3 by using dynamic quadtrees. - -In 1988, Mark Overmars and Chee Yap proposed an $O(n^{d/2} \log n)$ algorithm for d ≥ 3. Their algorithm uses a particular data structure similar to a kd-tree to decompose the problem into 2-dimensional components and aggregate those components efficiently; the 2-dimensional problems themselves are solved efficiently using a trellis structure. Although asymptotically faster than Bentley's algorithm, its data structures use significantly more space, so it is only used in problems where either n or d is large. In 1998, Bogdan Chlebus proposed a simpler algorithm with the same asymptotic running time for the common special cases where d is 3 or 4. - -In 2013, Timothy M. Chan developed a simpler algorithm that avoids the need for dynamic data structures and eliminates the logarithmic factor, lowering the best known running time for d ≥ 3 to $O(n^{d/2})$. - -The only known lower bound for any d is $\Omega(n \log n)$, and optimal algorithms with this running time are known for d=1 and d=2. The Chan algorithm provides an upper bound of $O(n^{d/2})$ for d ≥ 3, so for d ≥ 3, it remains an open question whether faster algorithms are possible, or alternatively whether tighter lower bounds can be proven. In particular, it remains open whether the algorithm's running time must depend on d. In addition, the question of whether there are faster algorithms that can deal with special cases (for example, when the input coordinates are integers within a bounded range) remains open. - -The 1D Klee's measure problem (union of intervals) can be solved in $O(n \log p)$ where p denotes the number of piercing points required to stab all intervals (the union of intervals pierced by a common point can be calculated in linear time by computing the extrema). - -Parameter p is an adaptive parameter that depends on the input configuration, and the piercing algorithm yields an adaptive algorithm for Klee's measure problem. diff --git a/wiki/wikipedia/3916.txt b/wiki/wikipedia/3916.txt deleted file mode 100644 index bb58b254381cff10247d3ff77e00bb3406fa918a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3916.txt +++ /dev/null @@ -1,105 +0,0 @@ -In mathematics, the Paley–Zygmund inequality bounds the - -probability that a positive random variable is small, in terms of - -its first two moments. The inequality was - -proved by Raymond Paley and Antoni Zygmund. - -Theorem: If Z ≥ 0 is a random variable with - -finite variance, and if $0 \le \theta \le 1$, then - - - -\operatorname{P}( Z > \theta\operatorname{E}[Z] ) - -\ge (1-\theta)^2 \frac{\operatorname{E}[Z]^2}{\operatorname{E}[Z^2]}. 
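Before the proof that follows, the bound can be sanity-checked by simulation; a minimal Monte Carlo sketch (the exponential distribution is our arbitrary choice of a nonnegative Z with finite variance):

```
import numpy as np

rng = np.random.default_rng(7)
Z = rng.exponential(scale=1.0, size=1_000_000)   # Z >= 0, finite variance
theta = 0.5

lhs = np.mean(Z > theta * Z.mean())              # P(Z > theta E[Z])
rhs = (1 - theta) ** 2 * Z.mean() ** 2 / np.mean(Z ** 2)
print(lhs, rhs)                                  # ~0.61 vs 0.125
assert lhs >= rhs                                # the Paley-Zygmund bound
```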
- - - -Proof: First, - - - -\operatorname{E}[Z] = \operatorname{E}[ Z \mathbf{1}_{\{ Z \le \theta \operatorname{E}[Z] \}}] + \operatorname{E}[ Z \mathbf{1}_{\{ Z > \theta \operatorname{E}[Z] \}} ]. - - - -The first addend is at most $\theta \operatorname{E}[Z]$, while the second is at most $ \operatorname{E}[Z^2]^{1/2} \operatorname{P}( Z > \theta\operatorname{E}[Z])^{1/2} $ by the Cauchy–Schwarz inequality. The desired inequality then follows. ∎ - -The Paley–Zygmund inequality can be written as - - - -\operatorname{P}( Z > \theta \operatorname{E}[Z] ) - -\ge \frac{(1-\theta)^2 \operatorname{E}[Z]^2}{\operatorname{Var} Z + \operatorname{E}[Z]^2}. - - - -This can be improved. By the Cauchy–Schwarz inequality, - - - -\operatorname{E}[Z - \theta \operatorname{E}[Z]] - -\le \operatorname{E}[ (Z - \theta \operatorname{E}[Z]) \mathbf{1}_{\{ Z > \theta \operatorname{E}[Z] \}} ] - -\le \operatorname{E}[ (Z - \theta \operatorname{E}[Z])^2 ]^{1/2} \operatorname{P}( Z > \theta \operatorname{E}[Z] )^{1/2} - - - -which, after rearranging, implies that - - - -\operatorname{P}(Z > \theta \operatorname{E}[Z]) - -\ge \frac{(1-\theta)^2 \operatorname{E}[Z]^2}{\operatorname{E}[( Z - \theta \operatorname{E}[Z] )^2]} - -= \frac{(1-\theta)^2 \operatorname{E}[Z]^2}{\operatorname{Var} Z + (1-\theta)^2 \operatorname{E}[Z]^2}. - - - -This inequality is sharp; equality is achieved if Z almost surely equals a positive constant. - -In turn, this implies another convenient form (known as Cantelli's inequality) which is - - - -\operatorname{P}(Z > \mu - \theta \sigma) - -\ge \frac{\theta^2}{1+\theta^2}, - - - -where $\mu=\operatorname{E}[Z]$ and $\sigma^2 = \operatorname{Var}[Z]$. - -This follows from the substitution $\theta = 1-\theta'\sigma/\mu$ valid when $0\le \mu - \theta \sigma\le\mu$. - -A strengthened form of the Paley-Zygmund inequality states that if Z is a non-negative random variable then - - - -\operatorname{P}( Z > \theta \operatorname{E}[Z \mid Z > 0] ) - -\ge \frac{(1-\theta)^2 \operatorname{E}[Z]^2}{\operatorname{E}[Z^2]} - - - -for every $ 0 \leq \theta \leq 1 $. - -This inequality follows by applying the usual Paley-Zygmund inequality to the conditional distribution of Z given that it is positive and noting that the various factors of $\operatorname{P}(Z>0)$ cancel. - -Both this inequality and the usual Paley-Zygmund inequality also admit $ L^p $ versions: If Z is a non-negative random variable and $ p > 1 $ then - - - -\operatorname{P}( Z > \theta \operatorname{E}[Z \mid Z > 0] ) - -\ge \frac{(1-\theta)^{p/(p-1)} \operatorname{E}[Z]^{p/(p-1)}}{\operatorname{E}[Z^p]^{1/(p-1)}}. - - - -for every $ 0 \leq \theta \leq 1 $. This follows by the same proof as above but using Hölder's inequality in place of the Cauchy-Schwarz inequality. diff --git a/wiki/wikipedia/3917.txt b/wiki/wikipedia/3917.txt deleted file mode 100644 index 16abfd3a5cdd36bd579b14e08257970827a79cf1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3917.txt +++ /dev/null @@ -1,72 +0,0 @@ -In mathematics, a binary relation R is called well-founded (or wellfounded) on a class X if every non-empty subset S ⊆ X has a minimal element with respect to R, that is, an element m not related by sRm (for instance, "s is not smaller than m") for any s ∈ S. In other words, a relation is well founded if -$$ -(\forall S \subseteq X) [S \neq \emptyset \implies (\exists m \in S) (\forall s \in S) \lnot(sRm)]. -$$ - -Some authors include an extra condition that R is set-like, i.e., that the elements less than any given element form a set. 
- -Equivalently, assuming the axiom of dependent choice, a relation is well-founded if it contains no countable infinite descending chains: that is, there is no infinite sequence x0, x1, x2, ... of elements of X such that xn+1 R xn for every natural number n. - -In order theory, a partial order is called well-founded if the corresponding strict order is a well-founded relation. If the order is a total order then it is called a well-order. - -In set theory, a set x is called a well-founded set if the set membership relation is well-founded on the transitive closure of x. The axiom of regularity, which is one of the axioms of Zermelo–Fraenkel set theory, asserts that all sets are well-founded. - -A relation R is converse well-founded, upwards well-founded or Noetherian on X, if the converse relation R−1 is well-founded on X. In this case R is also said to satisfy the ascending chain condition. In the context of rewriting systems, a Noetherian relation is also called terminating. - -An important reason that well-founded relations are interesting is because a version of transfinite induction can be used on them: if (X, R) is a well-founded relation, P(x) is some property of elements of X, and we want to show that - -P(x) holds for all elements x of X, - -it suffices to show that: - -If x is an element of X and P(y) is true for all y such that y R x, then P(x) must also be true. - -That is, -$$ -(\forall x \in X)[(\forall y \in X)[yRx \implies P(y)] \implies P(x)]\quad\text{implies}\quad(\forall x \in X)P(x). -$$ - -Well-founded induction is sometimes called Noetherian induction, after Emmy Noether. - -On par with induction, well-founded relations also support construction of objects by transfinite recursion. Let (X, R) be a set-like well-founded relation and F a function that assigns an object F(x, g) to each pair of an element x ∈ X and a function g on the initial segment {y: y R x} of X. Then there is a unique function G such that for every x ∈ X, -$$ -G(x) = F\left(x, G\vert_{\left\{y: yRx\right\}}\right). -$$ - -That is, if we want to construct a function G on X, we may define G(x) using the values of G(y) for y R x. - -As an example, consider the well-founded relation (N, S), where N is the set of all natural numbers, and S is the graph of the successor function x ↦ x+1. Then induction on S is the usual mathematical induction, and recursion on S gives primitive recursion. If we consider the order relation (N, <), we obtain complete induction, and course-of-values recursion. The statement that (N, <) is well-founded is also known as the well-ordering principle. - -There are other interesting special cases of well-founded induction. When the well-founded relation is the usual ordering on the class of all ordinal numbers, the technique is called transfinite induction. When the well-founded set is a set of recursively-defined data structures, the technique is called structural induction. When the well-founded relation is set membership on the universal class, the technique is known as ∈-induction. See those articles for more details. - -Well-founded relations which are not totally ordered include: - -* The positive integers {1, 2, 3, ...}, with the order defined by a < b if and only if a divides b and a ≠ b. - -* The set of all finite strings over a fixed alphabet, with the order defined by s < t if and only if s is a proper substring of t. - -* The set N × N of pairs of natural numbers, ordered by (n1, n2) < (m1, m2) if and only if n1 < m1 and n2 < m2. 
- -* Every class whose elements are sets, with the relation $\in$ ("is an element of"). This is the axiom of regularity. - -* The nodes of any finite directed acyclic graph, with the relation R defined such that a R b if and only if there is an edge from a to b. - -Examples of relations that are not well-founded include: - -* The negative integers {−1, −2, −3, …}, with the usual order, since any unbounded subset has no least element. - -* The set of strings over a finite alphabet with more than one element, under the usual (lexicographic) order, since the sequence "B" > "AB" > "AAB" > "AAAB" > … is an infinite descending chain. This relation fails to be well-founded even though the entire set has a minimum element, namely the empty string. - -* The set of non-negative rational numbers (or reals) under the standard ordering, since, for example, the subset of positive rationals (or reals) lacks a minimum. - -If (X, <) is a well-founded relation and x is an element of X, then the descending chains starting at x are all finite, but this does not mean that their lengths are necessarily bounded. Consider the following example: - -Let X be the union of the positive integers and a new element ω, which is bigger than any integer. Then X is a well-founded set, but - -there are descending chains starting at ω of arbitrarily great (finite) length; - -the chain ω, n − 1, n − 2, ..., 2, 1 has length n for any n. - -The Mostowski collapse lemma implies that set membership is universal among the extensional well-founded relations: for any set-like well-founded relation R on a class X which is extensional, there exists a class C such that (X, R) is isomorphic to (C, ∈). - -A relation R is said to be reflexive if aRa holds for every a in the domain of the relation. Every reflexive relation on a nonempty domain has infinite descending chains, because any constant sequence is a descending chain. For example, in the natural numbers with their usual order ≤, we have $1 \geq 1 \geq 1 \geq \cdots.$ To avoid these trivial descending sequences, when working with a partial order ≤, it is common to apply the definition of well foundedness (perhaps implicitly) to the alternate relation < defined such that a < b if and only if a ≤ b and a ≠ b. More generally, when working with a preorder ≤, it is common to use the relation < defined such that a < b if and only if a ≤ b and b ≰ a. In the context of the natural numbers, this means that the relation <, which is well-founded, is used instead of the relation ≤, which is not. In some texts, the definition of a well-founded relation is changed from the definition above to include these conventions. diff --git a/wiki/wikipedia/3918.txt b/wiki/wikipedia/3918.txt deleted file mode 100644 index 7e9349847fdf616de1e62dc4221b0acd0688120a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3918.txt +++ /dev/null @@ -1,3 +0,0 @@ -Mined-Out (also known as Minesweeper in some countries) is a video game released for the ZX Spectrum in 1983 by Quicksilva, in which a player must cross a minefield successfully using logic. Although Mined-Out was not the first game in the style of Minesweeper, it was the first to be released on a home computer, and to display how many mines are adjacent to the player. - -The game was written by Ian Andrew, an early adopter of the ZX81 and Spectrum. He learned to program BASIC in his spare time, and sent a copy of Mined-Out to Quicksilva after they published an advert wanting programs to publish.
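The adjacency display that Mined-Out pioneered on home computers is simple to compute. A hedged sketch (the grid and all names are invented for the example; '*' marks a mine):

```
grid = ["..*",
        ".*.",
        "..."]

def adjacent_mines(grid, r, c):
    """Number of mines in the (up to 8) cells surrounding (r, c)."""
    rows, cols = len(grid), len(grid[0])
    return sum(grid[rr][cc] == '*'
               for rr in range(max(0, r - 1), min(rows, r + 2))
               for cc in range(max(0, c - 1), min(cols, c + 2))
               if (rr, cc) != (r, c))

for r, row in enumerate(grid):
    print(''.join(row[c] if row[c] == '*' else str(adjacent_mines(grid, r, c))
                  for c in range(len(row))))
# prints: 12*
#         1*2
#         111
```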
diff --git a/wiki/wikipedia/3919.txt b/wiki/wikipedia/3919.txt deleted file mode 100644 index 17d60efe092630b56e627028e4eb0d0249d081ec..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3919.txt +++ /dev/null @@ -1,31 +0,0 @@ -In mathematics, connectedness is used to refer to various properties meaning, in some sense, "all one piece". When a mathematical object has such a property, we say it is connected; otherwise it is disconnected. When a disconnected object can be split naturally into connected pieces, each piece is usually called a component (or connected component). - -A topological space is said to be connected if it is not the union of two disjoint nonempty open sets. A set is open if it contains no point lying on its boundary; thus, in an informal, intuitive sense, the fact that a space can be partitioned into disjoint open sets suggests that the boundary between the two sets is not part of the space, and thus splits it into two separate pieces. - -Fields of mathematics are typically concerned with special kinds of objects. Often such an object is said to be connected if, when it is considered as a topological space, it is a connected space. Thus, manifolds, Lie groups, and graphs are all called connected if they are connected as topological spaces, and their components are the topological components. Sometimes it is convenient to restate the definition of connectedness in such fields. For example, a graph is said to be connected if each pair of vertices in the graph is joined by a path. This definition is equivalent to the topological one, as applied to graphs, but it is easier to deal with in the context of graph theory. Graph theory also offers a context-free measure of connectedness, called the clustering coefficient. - -Other fields of mathematics are concerned with objects that are rarely considered as topological spaces. Nonetheless, definitions of connectedness often reflect the topological meaning in some way. For example, in category theory, a category is said to be connected if each pair of objects in it is joined by a sequence of morphisms. Thus, a category is connected if it is, intuitively, all one piece. - -There may be different notions of connectedness that are intuitively similar, but different as formally defined concepts. We might wish to call a topological space connected if each pair of points in it is joined by a path. However this condition turns out to be stronger than standard topological connectedness; in particular, there are connected topological spaces for which this property does not hold. Because of this, different terminology is used; spaces with this property are said to be path connected. While not all connected spaces are path connected, all path connected spaces are connected. - -Terms involving connected are also used for properties that are related to, but clearly different from, connectedness. For example, a path-connected topological space is simply connected if each loop (path from a point to itself) in it is contractible; that is, intuitively, if there is essentially only one way to get from any point to any other point. Thus, a sphere and a disk are each simply connected, while a torus is not. As another example, a directed graph is strongly connected if each ordered pair of vertices is joined by a directed path (that is, one that "follows the arrows"). - -Other concepts express the way in which an object is not connected. For example, a topological space is totally disconnected if each of its components is a single point. 
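The graph-theoretic definition above, each pair of vertices joined by a path, is easy to test directly. A minimal breadth-first search sketch (function and variable names are ours):

```
from collections import deque

def is_connected(adjacency):
    """BFS from an arbitrary start vertex; the graph is connected
    iff every vertex is reached."""
    if not adjacency:
        return True
    start = next(iter(adjacency))
    seen = {start}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in adjacency[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(adjacency)

path = {1: [2], 2: [1, 3], 3: [2]}
two_pieces = {1: [2], 2: [1], 3: [4], 4: [3]}
print(is_connected(path), is_connected(two_pieces))  # True False
```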
Properties and parameters based on the idea of connectedness often involve the word connectivity. For example, in graph theory, a connected graph is one from which we must remove at least one vertex to create a disconnected graph. In recognition of this, such graphs are also said to be 1-connected. Similarly, a graph is 2-connected if we must remove at least two vertices from it, to create a disconnected graph. A 3-connected graph requires the removal of at least three vertices, and so on. The connectivity of a graph is the minimum number of vertices that must be removed to disconnect it. Equivalently, the connectivity of a graph is the greatest integer k for which the graph is k-connected. - -While terminology varies, noun forms of connectedness-related properties often include the term connectivity. Thus, when discussing simply connected topological spaces, it is far more common to speak of simple connectivity than simple connectedness. On the other hand, in fields without a formally defined notion of connectivity, the word may be used as a synonym for connectedness. - -Another example of connectivity can be found in regular tilings. Here, the connectivity describes the number of neighbors accessible from a single tile: - -[Figures: 3-connectivity in a triangular tiling; 4-connectivity in a square tiling; 6-connectivity in a hexagonal tiling; 8-connectivity in a square tiling (note that distance equality is not kept).] diff --git a/wiki/wikipedia/392.txt b/wiki/wikipedia/392.txt deleted file mode 100644 index 8eb4129b5bd1758857229593ed90ebc1c0750c11..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/392.txt +++ /dev/null @@ -1,32 +0,0 @@ -In geometry, the equal incircles theorem derives from a Japanese Sangaku, and pertains to the following construction: a series of rays are drawn from a given point to a given line such that the inscribed circles of the triangles formed by adjacent rays and the base line are equal. In the illustration the equal blue circles define the spacing between the rays, as described. - -The theorem states that the incircles of the triangles formed (starting from any given ray) by every other ray, every third ray, etc. and the base line are also equal. The case of every other ray is illustrated above by the green circles, which are all equal. - -From the fact that the theorem doesn't depend on the angle of the initial ray, it can be seen that the theorem properly belongs to analysis, rather than geometry, and must relate to a continuous scaling function which defines the spacing of the rays. In fact, this function is the hyperbolic sine. - -The theorem is a direct corollary of the following lemma: - -Suppose that the nth ray makes an angle $\gamma_n$ with the normal to the baseline. If $\gamma_n$ is parameterized according to the equation $\tan \gamma_n = \sinh\theta_n$, then values of $\theta_n = a + nb$, where $a$ and $b$ are real constants, define a sequence of rays that satisfy the condition of equal incircles, and furthermore any sequence of rays satisfying the condition can be produced by suitable choice of the constants $a$ and $b$. - -In the diagram, lines PS and PT are adjacent rays making angles $\gamma_n$ and $\gamma_{n+1}$ with line PR, which is perpendicular to the baseline, RST.
- -Line QXOY is parallel to the baseline and passes through O, the center of the incircle of $\triangle$ PST, which is tangent to the rays at W and Z. Also, line PQ has length $h-r$, and line QR has length $r$, the radius of the incircle. - -Then $\triangle$ OWX is similar to $\triangle$ PQX and $\triangle$ OZY is similar to $\triangle$ PQY, and from XY = XO + OY we get -$$ -(h-r) ( \tan \gamma_{n+1} - \tan \gamma_n ) = r ( \sec \gamma_n + \sec \gamma_{n+1} ). -$$ - -This relation on a set of angles, $\{ \gamma_m \}$, expresses the condition of equal incircles. - -To prove the lemma, we set $ \tan \gamma_n = \sinh (a+nb)$, which gives $ \sec \gamma_n = \cosh(a+nb)$. - -Using $a+(n+1)b = (a+nb)+b$, we apply the addition rules for $\sinh$ and $\cosh$, and verify that the equal incircles relation is satisfied by setting -$$ -\frac {r}{h-r} = \tanh\frac{b}{2}. -$$ - -This gives an expression for the parameter $b$ in terms of the geometric measures, $h$ and $r$. With this definition of $b$ we then obtain an expression for the radii, $r_N$, of the incircles formed by taking every Nth ray as the sides of the triangles -$$ -\frac {r_N}{h-r_N} = \tanh\frac{Nb}{2}. -$$ diff --git a/wiki/wikipedia/3920.txt b/wiki/wikipedia/3920.txt deleted file mode 100644 index e8fc30df68d9b8930339c6cfc0ec703fe9e4a024..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3920.txt +++ /dev/null @@ -1,17 +0,0 @@ -In topology, Lebesgue's number lemma, named after Henri Lebesgue, is a useful tool in the study of compact metric spaces. It states: - -If the metric space $(X, d)$ is compact and an open cover of $X$ is given, then there exists a number $\delta > 0$ such that every subset of $X$ having diameter less than $\delta$ is contained in some member of the cover. - -Such a number $\delta$ is called a Lebesgue number of this cover. The notion of a Lebesgue number itself is useful in other applications as well. - -Let $\mathcal U$ be an open cover of $X$. Since $X$ is compact we can extract a finite subcover $\{A_1, \dots, A_n\} \subseteq \mathcal U$. - -If any one of the $A_i$'s equals $X$ then any $ \delta > 0 $ will serve as a Lebesgue number. - -Otherwise for each $i \in \{1, \dots, n\}$, let $C_i := X \setminus A_i$, note that $C_i$ is not empty, and define a function $f : X \rightarrow \mathbb R$ by $f(x) := \frac{1}{n} \sum_{i=1}^n d(x,C_i)$. - -Since $f$ is continuous on a compact set, it attains a minimum $\delta$. - -The key observation is that, since every $x$ is contained in some $A_i$, the extreme value theorem shows $\delta > 0$. Now we can verify that this $\delta$ is the desired Lebesgue number. - -If $Y$ is a subset of $X$ of diameter less than $\delta$, then there exists $x_0\in X$ such that $Y\subseteq B_\delta(x_0)$, where $B_\delta(x_0)$ denotes the ball of radius $\delta$ centered at $x_0$ (namely, one can choose $x_0$ as any point in $Y$). Since $f(x_0)\geq \delta$ there must exist at least one $i$ such that $d(x_0,C_i)\geq \delta$. But this means that $B_\delta(x_0)\subseteq A_i$ and so, in particular, $Y\subseteq A_i$. diff --git a/wiki/wikipedia/3921.txt b/wiki/wikipedia/3921.txt deleted file mode 100644 index dafce28cff8986bbef6ff90747db3ca154cfca2a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3921.txt +++ /dev/null @@ -1,73 +0,0 @@ -In mathematics, more specifically in the study of dynamical systems and differential equations, a Liénard equation is a second order differential equation, named after the French physicist Alfred-Marie Liénard. 
- -During the development of radio and vacuum tube technology, Liénard equations were intensely studied as they can be used to model oscillating circuits. Under certain additional assumptions Liénard's theorem guarantees the existence and uniqueness of a limit cycle for such a system. - -Let f and g be two continuously differentiable functions on R, with g an odd function and f an even function. Then the second-order ordinary differential equation of the form -$$ -{d^2x \over dt^2}+f(x){dx \over dt}+g(x)=0 -$$ - -is called the Liénard equation. - -The equation can be transformed into an equivalent two-dimensional system of ordinary differential equations. We define -$$ -F(x) := \int_0^x f(\xi) d\xi -$$ -$$ -x_1:= x -$$ -$$ -x_2:={dx \over dt} + F(x) -$$ - -then - -\begin{bmatrix} - -\dot{x}_1 \\ - -\dot{x}_2 - -\end{bmatrix} - -= - -\mathbf{h}(x_1, x_2) - -= - -\begin{bmatrix} - -x_2 - F(x_1) \\ - --g(x_1) - -\end{bmatrix} - -is called a Liénard system. - -Alternatively, since the Liénard equation itself is also an autonomous differential equation, the substitution $v = {dx \over dt}$ leads the Liénard equation to become a first-order differential equation: -$$ -v{dv \over dx}+f(x)v+g(x)=0 -$$ - -which belongs to the class of Abel equations of the second kind. - -The Van der Pol oscillator -$$ -{d^2x \over dt^2}-\mu(1-x^2){dx \over dt} +x= 0 -$$ - -is a Liénard equation, and its solution has a limit cycle. Such a limit cycle arises for a Liénard equation with f(x) negative at small |x| and positive otherwise. The Van der Pol equation has no exact analytic solution; such a solution for the limit cycle does exist if f(x) is a piecewise constant function. - -A Liénard system has a unique and stable limit cycle surrounding the origin if it satisfies the following additional properties: - -* g(x) > 0 for all x > 0; - -* $\lim_{x \to \infty} F(x) := \lim_{x \to \infty} \int_0^x f(\xi) d\xi\ = \infty;$ - -* F(x) has exactly one positive root at some value p, where F(x) < 0 for 0 < x < p and F(x) > 0 and monotonic for x > p. diff --git a/wiki/wikipedia/3922.txt b/wiki/wikipedia/3922.txt deleted file mode 100644 index 450660414d63808cf504c98ff4c84faf2e59f85d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3922.txt +++ /dev/null @@ -1,13 +0,0 @@ -In geometry, Brianchon's theorem is a theorem stating that when a hexagon is circumscribed around a conic section, its principal diagonals (those connecting opposite vertices) meet in a single point. It is named after Charles Julien Brianchon (1783–1864). - -Let $P_1P_2P_3P_4P_5P_6$ be a hexagon formed by six tangent lines of a conic section. Then lines $\overline{P_1P_4}, \overline{P_2P_5}, \overline{P_3P_6}$ (extended diagonals each connecting opposite vertices) intersect at a single point $B$, the Brianchon point. - -The polar reciprocal and projective dual of this theorem give Pascal's theorem. - -As with Pascal's theorem, there exist degenerate cases of Brianchon's theorem: let two neighboring tangents coincide. Their point of intersection becomes a point of the conic. In the diagram, three pairs of neighboring tangents coincide. This procedure results in a statement on inellipses of triangles. From a projective point of view, the two triangles $P_1P_3P_5$ and $P_2P_4P_6$ are in perspective with center $B$. That means there exists a central collineation mapping the one triangle onto the other. Only in special cases is this collineation an affine scaling.
For example, for a Steiner inellipse the Brianchon point is the centroid.
-
-Brianchon's theorem is true in both the affine plane and the real projective plane. However, its statement in the affine plane is in a sense less informative and more complicated than that in the projective plane. Consider, for example, five tangent lines to a parabola. These may be considered sides of a hexagon whose sixth side is the line at infinity, but there is no line at infinity in the affine plane. In two instances, a line from a (non-existent) vertex to the opposite vertex would be a line parallel to one of the five tangent lines. Brianchon's theorem stated only for the affine plane would therefore have to be stated differently in such a situation.
-
-The projective dual of Brianchon's theorem has exceptions in the affine plane but not in the projective plane.
-
-Brianchon's theorem can be proved by the idea of the radical axis or by reciprocation.
diff --git a/wiki/wikipedia/3923.txt b/wiki/wikipedia/3923.txt
deleted file mode 100644
index b5317ca7547344cf2f16d8cb54cb8ff12ef62f2a..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3923.txt
+++ /dev/null
@@ -1,59 +0,0 @@
-In mathematical measure theory, for every positive integer n the ham sandwich theorem states that given n measurable "objects" in n-dimensional Euclidean space, it is possible to divide all of them in half (with respect to their measure, e.g. volume) with a single (n − 1)-dimensional hyperplane.
-
-It was proposed by Hugo Steinhaus and proved by Stefan Banach (explicitly in dimension 3, without stating the theorem in the n-dimensional case), and was also called, years later, the Stone–Tukey theorem after Arthur H. Stone and John Tukey.
-
-The ham sandwich theorem takes its name from the case when n = 3 and the three objects to be bisected are the ingredients of a ham sandwich. Sources differ on whether these three ingredients are two slices of bread and a piece of ham, bread and cheese and ham, or bread and butter and ham. In two dimensions, the theorem is known as the pancake theorem, referring to the flat nature of the two objects to be bisected by a line.
-
-According to Beyer, the earliest known paper about the ham sandwich theorem, specifically the n = 3 case of bisecting three solids with a plane, is by Steinhaus. Beyer and Zardecki's paper includes a translation of the 1938 paper. It attributes the posing of the problem to Hugo Steinhaus, and credits Stefan Banach as the first to solve the problem, by a reduction to the Borsuk–Ulam theorem. The paper poses the problem in two ways: first, formally, as "Is it always possible to bisect three solids, arbitrarily located, with the aid of an appropriate plane?" and second, informally, as "Can we place a piece of ham under a meat cutter so that meat, bone, and fat are cut in halves?" Later, the paper offers a proof of the theorem.
-
-A more modern reference is Stone, which is the basis of the name "Stone–Tukey theorem". This paper proves the n-dimensional version of the theorem in a more general setting involving measures. The paper attributes the n = 3 case to Stanislaw Ulam, based on information from a referee; but Beyer and Zardecki claim that this is incorrect, given Steinhaus's paper, although "Ulam did make a fundamental contribution in proposing" the Borsuk–Ulam theorem.
    - -The two-dimensional variant of the theorem (also known as the pancake theorem) can be proved by an argument which appears in the fair cake-cutting literature (see e.g. Robertson–Webb rotating-knife procedure). - -For each angle $\alpha\in[0,180^\circ]$, we can bisect pancake #1 using a straight line in angle $\alpha$ (to see this, translate [move] a straight line in angle $\alpha$ from $-\infty$ to $\infty$; the fraction of pancake #1 covered by the line changes continuously from 0 to 1, so by the intermediate value theorem it must be equal to 1/2 somewhere along the way). - -This means that we can take a straight knife, rotate it at every angle $\alpha\in[0,180^\circ]$ and translate it appropriately for that particular angle, such that pancake #1 is bisected at each angle and corresponding translation. - -When the knife is at angle 0, it also cuts pancake #2, but the pieces are probably unequal (if we are lucky and the pieces are equal, we are done). Define the 'positive' side of the knife as the side in which the fraction of pancake #2 is larger. Define $p(\alpha)$ as the fraction of pancake #2 at the positive side of the knife. Initially $p(0)\geq 1/2$. - -When the knife is at angle 180, the knife is upside-down, so $p(180)\leq 1/2$. By the intermediate value theorem, there must be an angle in which $p(\alpha)=1/2$. Cutting at that angle bisects both pancakes simultaneously. - -The ham sandwich theorem can be proved as follows using the Borsuk–Ulam theorem. This proof follows the one described by Steinhaus and others (1938), attributed there to Stefan Banach, for the n = 3 case. In the field of Equivariant topology, this proof would fall under the configuration-space/tests-map paradigm. - -Let A1, A2, …, An denote the n objects that we wish to simultaneously bisect. Let S be the unit (n − 1)-sphere embedded in n-dimensional Euclidean space $\mathbb{R}^n$, centered at the origin. For each point p on the surface of the sphere S, we can define a continuum of oriented affine hyperplanes (not necessarily centred at 0) perpendicular to the (normal) vector from the origin to p, with the "positive side" of each hyperplane defined as the side pointed to by that vector (i.e. it is a choice of orientation). By the intermediate value theorem, every family of such hyperplanes contains at least one hyperplane that bisects the bounded object An: at one extreme translation, no volume of An is on the positive side, and at the other extreme translation, all of An's volume is on the positive side, so in between there must be a translation that has half of An's volume on the positive side. If there is more than one such hyperplane in the family, we can pick one canonically by choosing the midpoint of the interval of translations for which An is bisected. Thus we obtain, for each point p on the sphere S, a hyperplane π(p) that is perpendicular to the vector from the origin to p and that bisects An. - -Now we define a function f from the (n − 1)-sphere S to (n − 1)-dimensional Euclidean space $\mathbb{R}^{n-1}$ as follows: - -f(p) = (vol of A1 on the positive side of π(p), vol of A2 on the positive side of π(p), …, vol of An−1 on the positive side of π(p)). - -This function f is continuous. By the Borsuk–Ulam theorem, there are antipodal points p and q on the sphere S such that f(p) = f(q). Antipodal points p and q correspond to hyperplanes π(p) and π(q) that are equal except that they have opposite positive sides. 
Thus, f(p) = f(q) means that the volume of Ai is the same on the positive and negative side of π(p) (or π(q)), for i = 1, 2, …, n−1. Thus, π(p) (or π(q)) is the desired ham sandwich cut that simultaneously bisects the volumes of A1, A2, …, An. - -In measure theory, Stone proved two more general forms of the ham sandwich theorem. Both versions concern the bisection of n subsets X1, X2, …, Xn of a common set X, where X has a Carathéodory outer measure and each Xi has finite outer measure. - -Their first general formulation is as follows: for any continuous real function $f \colon S^n \times X \to \mathbb{R}$, there is a point p of the n-sphere Sn and a real number s0 such that the surface f(p,x) = s0 divides X into f(p,x) < s0 and f(p,x) > s0 of equal measure and simultaneously bisects the outer measure of X1, X2, …, Xn. The proof is again a reduction to the Borsuk-Ulam theorem. This theorem generalizes the standard ham sandwich theorem by letting f(s,x) = s1x1 + … + snxn. - -Their second formulation is as follows: for any n + 1 measurable functions f0, f1, …, fn over X that are linearly independent over any subset of X of positive measure, there is a linear combination f = a0f0 + a1f1 + … + anfn such that the surface f(x) = 0, dividing X into f(x) < 0 and f(x) > 0, simultaneously bisects the outer measure of X1, X2, …, Xn. This theorem generalizes the standard ham sandwich theorem by letting f0(x) = 1 and letting fi(x), for i > 0, be the i-th coordinate of x. - -In discrete geometry and computational geometry, the ham sandwich theorem usually refers to the special case in which each of the sets being divided is a finite set of points. Here the relevant measure is the counting measure, which simply counts the number of points on either side of the hyperplane. In two dimensions, the theorem can be stated as follows: - -For a finite set of points in the plane, each colored "red" or "blue", there is a line that simultaneously bisects the red points and bisects the blue points, that is, the number of red points on either side of the line is equal and the number of blue points on either side of the line is equal. - -There is an exceptional case when points lie on the line. In this situation, we count each of these points as either being on one side, on the other, or on neither side of the line (possibly depending on the point), i.e. "bisecting" in fact means that each side contains less than half of the total number of points. This exceptional case is actually required for the theorem to hold, of course when the number of red points or the number of blue is odd, but also in specific configurations with even numbers of points, for instance when all the points lie on the same line and the two colors are separated from each other (i.e. colors don't alternate along the line). A situation where the numbers of points on each side cannot match each other is provided by adding an extra point out of the line in the previous configuration. - -In computational geometry, this ham sandwich theorem leads to a computational problem, the ham sandwich problem. In two dimensions, the problem is this: given a finite set of n points in the plane, each colored "red" or "blue", find a ham sandwich cut for them. First, Megiddo described an algorithm for the special, separated case. Here all red points are on one side of some line and all blue points are on the other side, a situation where there is a unique ham sandwich cut, which Megiddo could find in linear time. 
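To make the computational problem concrete, here is a brute-force sketch in Python. It is an illustration only, not Megiddo's linear-time method or the later algorithms discussed next; it relies on the observation that, for points in general position, a ham sandwich cut can be chosen to pass through two of the input points, so trying every candidate line through a pair of points suffices for small inputs. The point sets in the example are made up.

```python
from itertools import combinations

def side(a, b, p):
    """Sign of the cross product: which side of the line through a and b the point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def bisects(a, b, pts):
    """True if each open halfplane bounded by line ab contains at most half of pts."""
    left = sum(1 for p in pts if side(a, b, p) > 0)
    right = sum(1 for p in pts if side(a, b, p) < 0)
    return left <= len(pts) // 2 and right <= len(pts) // 2

def ham_sandwich_cut(red, blue):
    """O(n^3) search over all lines through two of the input points."""
    for a, b in combinations(red + blue, 2):
        if bisects(a, b, red) and bisects(a, b, blue):
            return a, b  # the cut is the line through a and b
    return None

red = [(0, 0), (2, 1), (4, 0)]
blue = [(1, 3), (3, 3), (2, 5)]
print(ham_sandwich_cut(red, blue))  # e.g. ((2, 1), (2, 5)): the vertical line x = 2
```

Points lying on the returned line are counted on neither side, matching the convention for the exceptional case described above.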
Later, Edelsbrunner gave an algorithm for the general two-dimensional case; the running time of their algorithm is O(n log n), where the symbol O indicates the use of Big O notation. Finally, Lo found an optimal O(n)-time algorithm. This algorithm was extended to higher dimensions by Lo, where the running time is $o(n^{d-1})$. Given d sets of points in general position in d-dimensional space, the algorithm computes a (d−1)-dimensional hyperplane that has an equal number of points of each of the sets in both of its half-spaces, i.e., a ham-sandwich cut for the given points. If d is a part of the input, then no polynomial time algorithm is expected to exist, since if the points are on a moment curve, the problem becomes equivalent to necklace splitting, which is PPA-complete.
-
-A linear-time algorithm that area-bisects two disjoint convex polygons is described by Stojmenović.
-
-The original theorem works for at most n collections, where n is the number of dimensions. If we want to bisect a larger number of collections without going to higher dimensions, we can use, instead of a hyperplane, an algebraic surface of degree k, i.e., an (n−1)–dimensional surface defined by a polynomial function of degree k:
-
-Given $\binom{k+n}{n}-1$ measures in an n–dimensional space, there exists an algebraic surface of degree k which bisects them all (Smith).
-
-This generalization is proved by mapping the n–dimensional plane into a $\binom{k+n}{n}-1$ dimensional plane, and then applying the original theorem. For example, for n = 2 and k = 2, the 2–dimensional plane is mapped to a 5–dimensional plane via:
-
-(x, y) → (x, y, x², y², xy).
diff --git a/wiki/wikipedia/3924.txt b/wiki/wikipedia/3924.txt
deleted file mode 100644
index 05785a8e77b22ac6c5909397d02ae0c8148a9e0a..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3924.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-In mathematics, the Ehrenpreis conjecture of Leon Ehrenpreis states that for any K greater than 1, any two closed Riemann surfaces of genus at least 2 have finite-degree covers which are K-quasiconformal: that is, the covers are arbitrarily close in the Teichmüller metric.
-
-A proof was announced by Jeremy Kahn and Vladimir Markovic in January 2011, using their proof of the surface subgroup conjecture and a newly developed "good pants homology" theory. In June 2012, Kahn and Markovic were given the Clay Research Awards for their work on these two problems by the Clay Mathematics Institute at a ceremony at Oxford University.
diff --git a/wiki/wikipedia/3925.txt b/wiki/wikipedia/3925.txt
deleted file mode 100644
index 2671450333aa836cec79d2dc11ba078d8fd455c7..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3925.txt
+++ /dev/null
@@ -1,25 +0,0 @@
-In algebraic geometry, a branch of mathematics, the Lefschetz theorem on (1,1)-classes, named after Solomon Lefschetz, is a classical statement relating holomorphic line bundles on a compact Kähler manifold to classes in its integral cohomology. It is the only case of the Hodge conjecture which has been proved for all Kähler manifolds.
-
-Let X be a compact Kähler manifold. The first Chern class c1 gives a map from holomorphic line bundles to H2(X, Z). By Hodge theory, the de Rham cohomology group H2(X, C) decomposes as a direct sum H0,2(X) ⊕ H1,1(X) ⊕ H2,0(X), and it can be proven that the image of c1 lies in H1,1(X). The theorem says that the map to H2(X, Z) ∩ H1,1(X) is surjective.
-
-In the special case where X is a projective variety, holomorphic line bundles are in bijection with linear equivalence classes of divisors, and given a divisor D on X with associated line bundle O(D), the class c1(O(D)) is Poincaré dual to the homology class given by D. Thus, this establishes the usual formulation of the Hodge conjecture for divisors in projective varieties.
-
-Lefschetz's original proof worked on projective surfaces and used normal functions, which were introduced by Poincaré. Suppose that Ct is a pencil of curves on X. Each of these curves has a Jacobian variety JCt (if a curve is singular, there is an appropriate generalized Jacobian variety). These can be assembled into a family $\mathcal{J}$, the Jacobian of the pencil, which comes with a projection map π to the base T of the pencil. A normal function is a (holomorphic) section of π.
-
-Fix an embedding of X in PN, and choose a pencil of curves Ct on X. For a fixed curve Γ on X, the intersection of Γ and Ct is a divisor p1(t) + ... + pd(t) on Ct, where d is the degree of X. Fix a base point p0 of the pencil. Then the divisor p1(t) + ... + pd(t) - dp0 is a divisor of degree zero, and consequently it determines a class νΓ(t) in the Jacobian JCt for all t. The map from t to νΓ(t) is a normal function.
-
-Henri Poincaré proved that for a general pencil of curves, all normal functions arose as νΓ(t) for some choice of Γ. Lefschetz proved that any normal function determined a class in H2(X, Z) and that the class of νΓ is the fundamental class of Γ. Furthermore, he proved that a class in H2(X, Z) is the class of a normal function if and only if it lies in H1,1. Together with Poincaré's existence theorem, this proves the theorem on (1,1)-classes.
-
-Because X is a complex manifold, it admits an exponential sheaf sequence
-$$
-0 \to \underline{\mathbf{Z}} \stackrel{2\pi i}{\longrightarrow} \mathcal{O}_X \stackrel{\operatorname{exp}}{\longrightarrow} \mathcal{O}_X^\times \to 0.
-$$
-
-Taking sheaf cohomology of this exact sequence gives maps
-$$
-H^1(X, \mathcal{O}_X^\times) \stackrel{c_1}{\to} H^2(X, \mathbf{Z}) \stackrel{i_*}{\to} H^2(X, \mathcal{O}_X).
-$$
-
-The group Pic X of line bundles on X is isomorphic to $H^1(X, \mathcal{O}_X^\times)$. The first Chern class map is c1 by definition, so it suffices to show that i* is zero.
-
-Because X is Kähler, Hodge theory implies that $H^2(X, \mathcal{O}_X) \cong H^{0,2}(X)$. However, i* factors through the map from H2(X, Z) to H2(X, C), and on H2(X, C), i* is the restriction of the projection onto H0,2(X). It follows that it is zero on H2(X, Z) ∩ H1,1(X), and consequently that the cycle class map is surjective.
diff --git a/wiki/wikipedia/3926.txt b/wiki/wikipedia/3926.txt deleted file mode 100644 index f3a59e9348c7308364f8f769274c4975f92071ec..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3926.txt +++ /dev/null @@ -1,189 +0,0 @@ -In mathematics, the Wallis product for pi, published in 1656 by John Wallis, states that - -\begin{align} - -\frac{\pi}{2} & = \prod_{n=1}^{\infty} \frac{ 4n^2 }{ 4n^2 - 1 } = \prod_{n=1}^{\infty} \left(\frac{2n}{2n-1} \cdot \frac{2n}{2n+1}\right) \\[6pt] - -& = \Big(\frac{2}{1} \cdot \frac{2}{3}\Big) \cdot \Big(\frac{4}{3} \cdot \frac{4}{5}\Big) \cdot \Big(\frac{6}{5} \cdot \frac{6}{7}\Big) \cdot \Big(\frac{8}{7} \cdot \frac{8}{9}\Big) \cdot \cdots \\ - -\end{align} - -Wallis derived this infinite product as it is done in calculus books today, by examining $\int_0^\pi \sin^n xdx$ for even and odd values of $n$, and noting that for large $n$, increasing $n$ by 1 results in a change that becomes ever smaller as $n$ increases. Let -$$ -I(n) = \int_0^\pi \sin^n xdx. -$$ - -(This is a form of Wallis' integrals.) Integrate by parts: - -\begin{align} - -u &= \sin^{n-1}x \\ - -\Rightarrow du &= (n-1) \sin^{n-2}x \cos xdx \\ - -dv &= \sin xdx \\ - -\Rightarrow v &= -\cos x - -\end{align} - -\begin{align} - -\Rightarrow I(n) &= \int_0^\pi \sin^n xdx \\[6pt] - -{} &= -\sin^{n-1}x\cos x \Biggl|_0^\pi - \int_0^\pi (-\cos x)(n-1) \sin^{n-2}x \cos xdx \\[6pt] - -{} &= 0 + (n-1) \int_0^\pi \cos^2x \sin^{n-2}xdx, \qquad n > 1 \\[6pt] - -{} &= (n - 1) \int_0^\pi (1-\sin^2 x) \sin^{n-2}xdx \\[6pt] - -{} &= (n - 1) \int_0^\pi \sin^{n-2}xdx - (n - 1) \int_0^\pi \sin^{n}xdx \\[6pt] - -{} &= (n - 1) I(n-2)-(n-1) I(n) \\[6pt] - -{} &= \frac{n-1}{n} I(n-2) \\[6pt] - -\Rightarrow \frac{I(n)}{I(n-2)} - -&= \frac{n-1}{n} \\[6pt] - -\end{align} - -Now, we make two variable substitutions for convenience to obtain: -$$ -I(2n) = \frac{2n-1}{2n}I(2n-2) -$$ -$$ -I(2n+1) = \frac{2n}{2n+1}I(2n-1) -$$ - -We obtain values for $I(0)$ and $I(1)$ for later use. - -\begin{align} - -I(0) &= \int_0^\pi dx = x\Biggl|_0^\pi = \pi \\[6pt] - -I(1) &= \int_0^\pi \sin xdx = -\cos x \Biggl|_0^\pi = (-\cos \pi)-(-\cos 0) = -(-1)-(-1) = 2 \\[6pt] - -\end{align} - -Now, we calculate for even values $I(2n)$ by repeatedly applying the recurrence relation result from the integration by parts. Eventually, we end get down to $I(0)$, which we have calculated. -$$ -I(2n)=\int_0^\pi \sin^{2n}xdx = \frac{2n-1}{2n}I(2n-2) = \frac{2n-1}{2n} \cdot \frac{2n-3}{2n-2}I(2n-4) -$$ -$$ -=\frac{2n-1}{2n} \cdot \frac{2n-3}{2n-2} \cdot \frac{2n-5}{2n-4} \cdot \cdots \cdot \frac{5}{6} \cdot \frac{3}{4} \cdot \frac{1}{2} I(0)=\pi \prod_{k=1}^n \frac{2k-1}{2k} -$$ - -Repeating the process for odd values $I(2n+1)$, -$$ -I(2n+1)=\int_0^\pi \sin^{2n+1}xdx=\frac{2n}{2n+1}I(2n-1)=\frac{2n}{2n+1} \cdot \frac{2n-2}{2n-1}I(2n-3) -$$ -$$ -=\frac{2n}{2n+1} \cdot \frac{2n-2}{2n-1} \cdot \frac{2n-4}{2n-3} \cdot \cdots \cdot \frac{6}{7} \cdot \frac{4}{5} \cdot \frac{2}{3} I(1)=2 \prod_{k=1}^n \frac{2k}{2k+1} -$$ - -We make the following observation, based on the fact that $\sin{x} \leq 1$ -$$ -\sin^{2n+1}x \le \sin^{2n}x \le \sin^{2n-1}x, 0 \le x \le \pi -$$ -$$ -\Rightarrow I(2n+1) \le I(2n) \le I(2n-1) -$$ - -Dividing by $I(2n+1)$: -$$ -\Rightarrow 1 \le \frac{I(2n)}{I(2n+1)} \le \frac{I(2n-1)}{I(2n+1)}=\frac{2n+1}{2n} -$$, where the equality comes from our recurrence relation. 
- -By the squeeze theorem, -$$ -\Rightarrow \lim_{n\rightarrow\infty} \frac{I(2n)}{I(2n+1)}=1 -$$ -$$ -\lim_{n\rightarrow\infty} \frac{I(2n)}{I(2n+1)}=\frac{\pi}{2} \lim_{n\rightarrow\infty} \prod_{k=1}^n \left(\frac{2k-1}{2k} \cdot \frac{2k+1}{2k}\right)=1 -$$ -$$ -\Rightarrow \frac{\pi}{2}=\prod_{k=1}^\infty \left(\frac{2k}{2k-1} \cdot \frac{2k}{2k+1}\right)=\frac{2}{1} \cdot \frac{2}{3} \cdot \frac{4}{3} \cdot \frac{4}{5} \cdot \frac{6}{5} \cdot \frac{6}{7} \cdot \cdots -$$ - -While the proof above is typically featured in modern calculus textbooks, the Wallis product is, in retrospect, an easy corollary of the later Euler infinite product for the sine function. -$$ -\frac{\sin x}{x} = \prod_{n=1}^\infty\left(1 - \frac{x^2}{n^2\pi^2}\right) -$$ - -Let $x = \frac{\pi}{2}$: - -\begin{align} - -\Rightarrow\frac{2}{\pi} &= \prod_{n=1}^\infty \left(1 - \frac{1}{4n^2}\right) \\[6pt] - -\Rightarrow\frac{\pi}{2} &= \prod_{n=1}^\infty \left(\frac{4n^2}{4n^2 - 1}\right) \\[6pt] - -&= \prod_{n=1}^\infty \left(\frac{2n}{2n-1}\cdot\frac{2n}{2n+1}\right) = \frac{2}{1} \cdot \frac{2}{3} \cdot \frac{4}{3} \cdot \frac{4}{5} \cdot \frac{6}{5} \cdot \frac{6}{7} \cdots - -\end{align} - - - -Stirling's approximation for the factorial function $n!$ asserts that -$$ -n! = \sqrt {2\pi n} {\left(\frac{n}{e}\right)}^n \left[1 + O\left(\frac{1}{n}\right) \right]. -$$ - -Consider now the finite approximations to the Wallis product, obtained by taking the first $k$ terms in the product -$$ -p_k = \prod_{n=1}^{k} \frac{2n}{2n - 1}\frac{2n}{2n + 1}, -$$ - -where $p_k$ can be written as - -\begin{align} - -p_k &= {1 \over {2k + 1}} \prod_{n=1}^{k} \frac{(2n)^4}{[(2n)(2n - 1)]^2} \\[6pt] - -&= {1 \over {2k + 1}} \cdot {{2^{4k}(k!)^4} \over {[(2k)!]^2}}. - -\end{align} - -Substituting Stirling's approximation in this expression (both for $k!$ and $(2k)!$) one can deduce (after a short calculation) that $p_k$ converges to $\frac{\pi}{2}$ as $k \rightarrow \infty$. 
- -The Riemann zeta function and the Dirichlet eta function can be defined: - -\begin{align} - -\zeta(s) &= \sum_{n=1}^\infty \frac{1}{n^s}, \Re(s)>1 \\[6pt] - -\eta(s) &= (1-2^{1-s})\zeta(s) \\[6pt] - -&= \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n^s}, \Re(s)>0 - -\end{align} - -Applying an Euler transform to the latter series, the following is obtained: - -\begin{align} - -\eta(s) &= \frac{1}{2}+\frac{1}{2} \sum_{n=1}^\infty (-1)^{n-1}\left[\frac{1}{n^s}-\frac{1}{(n+1)^s}\right], \Re(s)>-1 \\[6pt] - -\Rightarrow \eta'(s) &= (1-2^{1-s})\zeta'(s)+2^{1-s} (\ln 2) \zeta(s) \\[6pt] - -&= -\frac{1}{2} \sum_{n=1}^\infty (-1)^{n-1}\left[\frac{\ln n}{n^s}-\frac{\ln (n+1)}{(n+1)^s}\right], \Re(s)>-1 - -\end{align} - -\begin{align} - -\Rightarrow \eta'(0) &= -\zeta'(0) - \ln 2 = -\frac{1}{2} \sum_{n=1}^\infty (-1)^{n-1}\left[\ln n-\ln (n+1)\right] \\[6pt] - -&= -\frac{1}{2} \sum_{n=1}^\infty (-1)^{n-1}\ln \frac{n}{n+1} \\[6pt] - -&= -\frac{1}{2} \left(\ln \frac{1}{2} - \ln \frac{2}{3} + \ln \frac{3}{4} - \ln \frac{4}{5} + \ln \frac{5}{6} - \cdots\right) \\[6pt] - -&= \frac{1}{2} \left(\ln \frac{2}{1} + \ln \frac{2}{3} + \ln \frac{4}{3} + \ln \frac{4}{5} + \ln \frac{6}{5} + \cdots\right) \\[6pt] - -&= \frac{1}{2} \ln\left(\frac{2}{1}\cdot\frac{2}{3}\cdot\frac{4}{3}\cdot\frac{4}{5}\cdot\cdots\right) = \frac{1}{2} \ln\frac{\pi}{2} \\ - -\Rightarrow \zeta'(0) &= -\frac{1}{2} \ln\left(2 \pi\right) - -\end{align} diff --git a/wiki/wikipedia/3927.txt b/wiki/wikipedia/3927.txt deleted file mode 100644 index 67c5b591e30653d5547f02abec3a19937f683031..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3927.txt +++ /dev/null @@ -1,5 +0,0 @@ -For Gromov's compactness theorem in Riemannian geometry, see that article. - -In the mathematical field of symplectic topology, Gromov's compactness theorem states that a sequence of pseudoholomorphic curves in an almost complex manifold with a uniform energy bound must have a subsequence which limits to a pseudoholomorphic curve which may have nodes or (a finite tree of) "bubbles". A bubble is a holomorphic sphere which has a transverse intersection with the rest of the curve. This theorem, and its generalizations to punctured pseudoholomorphic curves, underlies the compactness results for flow lines in Floer homology and symplectic field theory. - -If the complex structures on the curves in the sequence do not vary, only bubbles can occur; nodes can occur only if the complex structures on the domain are allowed to vary. Usually, the energy bound is achieved by considering a symplectic manifold with compatible almost-complex structure as the target, and assuming that curves to lie in a fixed homology class in the target. This is because the energy of such a pseudoholomorphic curve is given by the integral of the target symplectic form over the curve, and thus by evaluating the cohomology class of that symplectic form on the homology class of the curve. The finiteness of the bubble tree follows from (positive) lower bounds on the energy contributed by a holomorphic sphere. diff --git a/wiki/wikipedia/3928.txt b/wiki/wikipedia/3928.txt deleted file mode 100644 index ee0cce5ffc8b2d762827c084e8fdc7bac2c225b7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3928.txt +++ /dev/null @@ -1,15 +0,0 @@ -An appeal to probability (or appeal to possibility, also known as possibiliter ergo probabiliter, "possibly, therefore probably") is the logical fallacy of taking something for granted because it would probably be the case (or might possibly be the case). 
Inductive arguments lack deductive validity, and their conclusions must therefore be asserted or denied in the premises. A mere possibility does not correlate with a probability, and a mere probability does not correlate with a certainty, nor is just any probability that something happened or will happen sufficient to qualify as knowing that it did or will happen.
-
-The fallacy can be understood as confusing likelihood (the probability of something is non-zero, and usually large) with certainty (the probability of something is 1): reasoning as if, for some event X, Pr(X) > 0 implied Pr(X) = 1. Using probabilistic arguments is not in and of itself fallacious, but concluding that the conclusion follows logically rather than probabilistically is. When a probabilistic argument is made, one must generally make it well understood that the argument itself is probabilistic and hence that the conclusion is only probable. Probabilistic arguments are generally transitive in nature, and one must be careful when mixing logical and probabilistic arguments not to conclude that something is logically true on the basis of a probabilistic argument.
-
-A fallacious appeal to possibility:
-
-Something can go wrong.
-
-Therefore, something will go wrong.
-
-If I do not bring my umbrella, it will rain.
-
-Murphy's law is a (typically deliberate, tongue-in-cheek) invocation of the fallacy.
diff --git a/wiki/wikipedia/3929.txt b/wiki/wikipedia/3929.txt
deleted file mode 100644
index 75dd4d6111ad411b661d1c164e22013a830cdf1a..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/3929.txt
+++ /dev/null
@@ -1,13 +0,0 @@
-Tetris is a puzzle video game developed by EA Mobile and published by Electronic Arts for iOS, Android, BlackBerry OS, PlayStation 3, PlayStation Portable and Windows Phone. The game featured gameplay like other Tetris titles, with a new soundtrack.
-
-The game reached 100 million paid downloads by 2010, making it the best-selling paid mobile game of all time, and the third best-selling game of all time altogether.
-
-Gameplay was nearly identical to that of other Tetris titles, but with a new soundtrack. Players also had the ability to create their own soundtrack for the game using the music library of the iPhone or iPod Touch device on which the game was being played. The game offered two modes of play, dubbed "Marathon" mode and "Magic" mode.
-
-Marathon mode played as a more classic version of Tetris, where a point system along with the number of lines cleared were kept as indicators of progress. The level of speed was chosen prior to starting the mode of gameplay. There were 15 levels total, and like Magic mode, this mode ends after all 15 levels have been completed. Unlike the original version of Tetris, Marathon mode ends after clearing 150 lines. Once Marathon mode ends, the Endless feature becomes unlocked.
-
-Magic mode was an enhanced version of gameplay with fifteen levels of difficulty. Each level of difficulty is incremented by speed and the number of lines required to clear the level. Once the number of lines required to clear the level is met, the next level is presented. Upon failure of a level, the game allows players to retry an unlimited number of times. The game allows for pausing of gameplay, which is automatic when a player receives a phone call on an iPhone device. Another element of gameplay in Magic mode is the addition of helper objects that are retrieved throughout levels, which allow players to make minor edits to the puzzle.
The special objects become available in the first five levels, and afterwards continue to be generated as lines are completed and tetriminos are placed. There are five special objects, ranging from a magic crayon to blocks that convert to a bubble-popping state.
-
-On December 1, 2011, the game was relaunched (only on iOS) and replaced with a paid new version, which comes with an additional "Marathon One Touch Mode" as well as a paid subscription offering discounts and premium content.
-
-In January 2020, EA announced that they would retire their mobile version of the game on April 21, 2020, and the game became unplayable after this date, even for players who had paid for the full version. The Tetris Company, however, licensed the rights to N3TWORK to create a new mobile version. The new version features online compatibility, as well as knockout and battle royale modes.
diff --git a/wiki/wikipedia/393.txt b/wiki/wikipedia/393.txt
deleted file mode 100644
index 933798a1cb2341dc1329e973107b1fce8a136186..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/393.txt
+++ /dev/null
@@ -1,11 +0,0 @@
-In mathematics, the Weinstein conjecture refers to a general existence problem for periodic orbits of Hamiltonian or Reeb vector flows. More specifically, the conjecture claims that on a compact contact manifold, its Reeb vector field should carry at least one periodic orbit.
-
-By definition, a level set of contact type admits a contact form obtained by contracting the Hamiltonian vector field into the symplectic form. In this case, the Hamiltonian flow is a Reeb vector field on that level set. It is a fact that any contact manifold (M,α) can be embedded into a canonical symplectic manifold, called the symplectization of M, such that M is a contact type level set (of a canonically defined Hamiltonian) and the Reeb vector field is a Hamiltonian flow. That is, any contact manifold can be made to satisfy the requirements of the Weinstein conjecture. Since, as is trivial to show, any orbit of a Hamiltonian flow is contained in a level set, the Weinstein conjecture is a statement about contact manifolds.
-
-It has been known that any contact form is isotopic to a form that admits a closed Reeb orbit; for example, for any contact manifold there is a compatible open book decomposition, whose binding is a closed Reeb orbit. This is not enough to prove the Weinstein conjecture, though, because the Weinstein conjecture states that every contact form admits a closed Reeb orbit, while an open book determines a closed Reeb orbit for a form which is only isotopic to the given form.
-
-The conjecture was formulated in 1978 by Alan Weinstein. In several cases, the existence of a periodic orbit was known. For instance, Rabinowitz showed that on star-shaped level sets of a Hamiltonian function on a symplectic manifold, there were always periodic orbits (Weinstein independently proved the special case of convex level sets). Weinstein observed that the hypotheses of several such existence theorems could be subsumed in the condition that the level set be of contact type. (Weinstein's original conjecture included the condition that the first de Rham cohomology group of the level set is trivial; this hypothesis turned out to be unnecessary.)
-
-The Weinstein conjecture was first proved for contact hypersurfaces in $\mathbb R^{2n}$ in 1986 by Viterbo, then extended to cotangent bundles by Hofer–Viterbo and to wider classes of aspherical manifolds by Floer–Hofer–Viterbo. The presence of holomorphic spheres was used by Hofer–Viterbo.
All these cases dealt with the situation where the contact manifold is a contact submanifold of a symplectic manifold. A new approach without this assumption was discovered in dimension 3 by Hofer and is at the origin of contact homology. - -The Weinstein conjecture has now been proven for all closed 3-dimensional manifolds by Clifford Taubes. The proof uses a variant of Seiberg–Witten Floer homology and pursues a strategy analogous to Taubes' proof that the Seiberg-Witten and Gromov invariants are equivalent on a symplectic four-manifold. In particular, the proof provides a shortcut to the closely related program of proving the Weinstein conjecture by showing that the embedded contact homology of any contact three-manifold is nontrivial. diff --git a/wiki/wikipedia/3930.txt b/wiki/wikipedia/3930.txt deleted file mode 100644 index fb4b5321422805e55f89cd63cd3048c7ecf1c21d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3930.txt +++ /dev/null @@ -1,7 +0,0 @@ -In mathematics the Markov theorem gives necessary and sufficient conditions for two braids to have closures that are equivalent links. In algebraic topology, Alexander's theorem states that every knot or link in three-dimensional Euclidean space is the closure of a braid. The Markov theorem, proved by Russian mathematician Andrei Andreevich Markov Jr. states that three conditions are necessary and sufficient for two braids to have equivalent closures: - -# They are equivalent braids - -# They are conjugate braids - -# Appending or removing on the right of the braid a strand that crosses the strand to its left exactly once. diff --git a/wiki/wikipedia/3931.txt b/wiki/wikipedia/3931.txt deleted file mode 100644 index de29899557280bf4b3bb9eaccdccac15d9bad135..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3931.txt +++ /dev/null @@ -1,5 +0,0 @@ -Rishi Puri is an Indian Sudoku solver. He is a two time Indian National Sudoku Champion. He was part of the top three from India along with Rohan Rao and Prasanna Seshadri at the national championships and the World Sudoku Championships. Rishi won the Indian Sudoku Championships in 2014 and 2015. He also won the Times Sudoku Championship in 2013. - -Rishi has also won various international Sudoku competitions. He was twice runners up at the Brand's international Sudoku Competition held at Bangkok, Thailand in 2011 and 2012. Rishi also won the Times UK National Sudoku Championship in 2012. - -Rishi was the first Indian to qualify for the World Puzzle Federation's Sudoku Grand Prix by finishing in the top 10 across the world where he finished 9th, in 2014. In 2015 at Sofia, Bulgaria, Rishi improved his standing at the Grand Prix by finishing at a career best rank of 7th. 2015 World Sudoku Championship was also the last one attended by Rishi, post which he retired. diff --git a/wiki/wikipedia/3932.txt b/wiki/wikipedia/3932.txt deleted file mode 100644 index 121566b1dfcc1344b16bce59e2effd772dcfbeda..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3932.txt +++ /dev/null @@ -1,271 +0,0 @@ -In statistics, Bessel's correction is the use of n − 1 instead of n in the formula for the sample variance and sample standard deviation, where n is the number of observations in a sample. This method corrects the bias in the estimation of the population variance. It also partially corrects the bias in the estimation of the population standard deviation. However, the correction often increases the mean squared error in these estimations. 
This technique is named after Friedrich Bessel. - -In estimating the population variance from a sample when the population mean is unknown, the uncorrected sample variance is the mean of the squares of deviations of sample values from the sample mean (i.e. using a multiplicative factor 1/n). In this case, the sample variance is a biased estimator of the population variance. - -Multiplying the uncorrected sample variance by the factor -$$ -\frac n {n-1} -$$ - -gives an unbiased estimator of the population variance. In some literature, the above factor is called Bessel's correction. - -One can understand Bessel's correction as the degrees of freedom in the residuals vector (residuals, not errors, because the population mean is unknown): -$$ -(x_1-\overline{x},\dots,x_n-\overline{x}), -$$ - -where $\overline{x}$ is the sample mean. While there are n independent observations in the sample, there are only n − 1 independent residuals, as they sum to 0. For a more intuitive explanation of the need for Bessel's correction, see . - -Generally Bessel's correction is an approach to reduce the bias due to finite sample size. Such finite-sample bias correction is also needed for other estimates like skew and kurtosis, but in these the inaccuracies are often significantly larger. To fully remove such bias it is necessary to do a more complex multi-parameter estimation. For instance a correct correction for the standard deviation depends on the kurtosis (normalized central 4th moment), but this again has a finite sample bias and it depends on the standard deviation, i.e. both estimations have to be merged. - -There are three caveats to consider regarding Bessel's correction: - -# It does not yield an unbiased estimator of standard deviation. - -# The corrected estimator often has a higher mean squared error (MSE) than the uncorrected estimator. Furthermore, there is no population distribution for which it has the minimum MSE because a different scale factor can always be chosen to minimize MSE. - -# It is only necessary when the population mean is unknown (and estimated as the sample mean). In practice, this generally happens. - -Firstly, while the sample variance (using Bessel's correction) is an unbiased estimator of the population variance, its square root, the sample standard deviation, is a biased estimate of the population standard deviation; because the square root is a concave function, the bias is downward, by Jensen's inequality. There is no general formula for an unbiased estimator of the population standard deviation, though there are correction factors for particular distributions, such as the normal; see unbiased estimation of standard deviation for details. An approximation for the exact correction factor for the normal distribution is given by using n − 1.5 in the formula: the bias decays quadratically (rather than linearly, as in the uncorrected form and Bessel's corrected form). - -Secondly, the unbiased estimator does not minimize mean squared error (MSE), and generally has worse MSE than the uncorrected estimator (this varies with excess kurtosis). MSE can be minimized by using a different factor. The optimal value depends on excess kurtosis, as discussed in mean squared error: variance; for the normal distribution this is optimized by dividing by n + 1 (instead of n − 1 or n). 
- -Thirdly, Bessel's correction is only necessary when the population mean is unknown, and one is estimating both population mean and population variance from a given sample, using the sample mean to estimate the population mean. In that case there are n degrees of freedom in a sample of n points, and simultaneous estimation of mean and variance means one degree of freedom goes to the sample mean and the remaining n − 1 degrees of freedom (the residuals) go to the sample variance. However, if the population mean is known, then the deviations of the observations from the population mean have n degrees of freedom (because the mean is not being estimated – the deviations are not residuals but errors) and Bessel's correction is not applicable. - -Most simply, to understand the bias that needs correcting, think of an extreme case. Suppose the population is (0,0,0,1,2,9), which has a population mean of 2 and a population variance of 10 1/3. A sample of n = 1 is drawn, and it turns out to be $x_1=0.$ The best estimate of the population mean is $\bar{x} = x_1/n = 0/1 = 0.$ But what if we use the formula $ (x_1-\bar{x})^2/n = (0-0)/1 = 0$ to estimate the variance? The estimate of the variance would be zero--- and the estimate would be zero for any population and any sample of n = 1. The problem is that in estimating the sample mean, the process has already made our estimate of the mean close to the value we sampled--identical, for n = 1. In the case of n = 1, the variance just can't be estimated, because there's no variability in the sample. - -But consider n = 2. Suppose the sample were (0, 2). Then $\bar{x}=1$ and $ \left[(x_1-\bar{x})^2 + (x_2-\bar{x})^2\right] /n = (1+1)/2 = 1$, but with Bessel's correction, $\left[(x_1-\bar{x})^2 + (x_2-\bar{x})^2\right] /(n-1) = (1+1)/1 = 2$, which is an unbiased estimate (if all possible samples of n = 2 are taken and this method is used, the average estimate will be 12.4, same as the sample variance with Bessel's correction.) - -To see this in more detail, consider the following example. Suppose the mean of the whole population is 2050, but the statistician does not know that, and must estimate it based on this small sample chosen randomly from the population: -$$ - 2051,\quad 2053,\quad 2055,\quad 2050,\quad 2051 -$$ - -One may compute the sample average: -$$ - \frac{1}{5}\left(2051 + 2053 + 2055 + 2050 + 2051\right) = 2052 -$$ - -This may serve as an observable estimate of the unobservable population average, which is 2050. Now we face the problem of estimating the population variance. That is the average of the squares of the deviations from 2050. If we knew that the population average is 2050, we could proceed as follows: - -\begin{align} - -{} & \frac{1}{5}\left[(2051 - 2050)^2 + (2053 - 2050)^2 + (2055 - 2050)^2 + (2050 - 2050)^2 + (2051 - 2050)^2\right] \\[6pt] - -= {} & \frac{36}{5} = 7.2 - -\end{align} - -But our estimate of the population average is the sample average, 2052. The actual average, 2050, is unknown. So the sample average, 2052, must be used: - -\begin{align} - -{} & \frac{1}{5}\left[(2051 - 2052)^2 + (2053 - 2052)^2 + (2055 - 2052)^2 + (2050 - 2052)^2 + (2051 - 2052)^2\right] \\[6pt] - -= {} & \frac{16}{5} = 3.2 - -\end{align} - -The variance is now a lot smaller. As proven below, the variance will almost always be smaller when calculated using the sum of squared distances to the sample mean, compared to using the sum of squared distances to the population mean. 
The one exception to this is when the sample mean happens to be equal to the population mean, in which case the variance is also equal. - -To see why this happens, we use a simple identity in algebra: -$$ -(a+b)^2 = a^2 + 2ab + b^2 -$$ - -With $a$ representing the deviation of an individual sample from the sample mean, and $b$ representing the deviation of the sample mean from the population mean. Note that we've simply decomposed the actual deviation of an individual sample from the (unknown) population mean into two components: the deviation of the single sample from the sample mean, which we can compute, and the additional deviation of the sample mean from the population mean, which we can not. Now, we apply this identity to the squares of deviations from the population mean: - -\begin{align} - -{[}\underbrace{2053 - 2050}_{\begin{smallmatrix} \text{Deviation from} \\ \text{the population} \\ \text{mean} \end{smallmatrix}}]^2 & = [\overbrace{(\underbrace{2053 - 2052}_{\begin{smallmatrix} \text{Deviation from} \\ \text{the sample mean} \end{smallmatrix}})}^{\text{This is }a.} + \overbrace{(2052 - 2050)}^{\text{This is }b.}]^2 \\ - -& = \overbrace{(2053 - 2052)^2}^{\text{This is }a^2.} + \overbrace{2(2053 - 2052)(2052 - 2050)}^{\text{This is }2ab.} + \overbrace{(2052 - 2050)^2}^{\text{This is }b^2.} - -\end{align} - -Now apply this to all five observations and observe certain patterns: - -\begin{alignat}{2} - -\overbrace{(2051 - 2052)^2}^{\text{This is }a^2.}\ &+\ \overbrace{2(2051 - 2052)(2052 - 2050)}^{\text{This is }2ab.}\ &&+\ \overbrace{(2052 - 2050)^2}^{\text{This is }b^2.} \\ - -(2053 - 2052)^2\ &+\ 2(2053 - 2052)(2052 - 2050)\ &&+\ (2052 - 2050)^2 \\ - -(2055 - 2052)^2\ &+\ 2(2055 - 2052)(2052 - 2050)\ &&+\ (2052 - 2050)^2 \\ - -(2050 - 2052)^2\ &+\ 2(2050 - 2052)(2052 - 2050)\ &&+\ (2052 - 2050)^2 \\ - -(2051 - 2052)^2\ &+\ \underbrace{2(2051 - 2052)(2052 - 2050)}_{\begin{smallmatrix} \text{The sum of the entries in this} \\ \text{middle column must be 0.} \end{smallmatrix}}\ &&+\ (2052 - 2050)^2 - -\end{alignat} - -The sum of the entries in the middle column must be zero because the term a will be added across all 5 rows, which itself must equal zero. That is because a contains the 5 individual samples (left side within parentheses) which – when added – naturally have the same sum as adding 5 times the sample mean of those 5 numbers (2052). This means that a subtraction of these two sums must equal zero. The factor 2 and the term b in the middle column are equal for all rows, meaning that the relative difference across all rows in the middle column stays the same and can therefore be disregarded. The following statements explain the meaning of the remaining columns: - -* The sum of the entries in the first column (a2) is the sum of the squares of the distance from sample to sample mean; - -*The sum of the entries in the last column (b2) is the sum of squared distances between the measured sample mean and the correct population mean - -* Every single row now consists of pairs of a2 (biased, because the sample mean is used) and b2 (correction of bias, because it takes the difference between the "real" population mean and the inaccurate sample mean into account). 
Therefore the sum of all entries of the first and last column now represents the correct variance, meaning that now the sum of squared distance between samples and population mean is used - -* The sum of the a2-column and the b2-column must be bigger than the sum within entries of the a2-column, since all the entries within the b2-column are positive (except when the population mean is the same as the sample mean, in which case all of the numbers in the last column will be 0). - -Therefore: - -* The sum of squares of the distance from samples to the population mean will always be bigger than the sum of squares of the distance to the sample mean, except when the sample mean happens to be the same as the population mean, in which case the two are equal. - -That is why the sum of squares of the deviations from the sample mean is too small to give an unbiased estimate of the population variance when the average of those squares is found. The smaller the sample size, the larger is the difference between the sample variance and the population variance. - -This correction is so common that the term "sample variance" and "sample standard deviation" are frequently used to mean the corrected estimators (unbiased sample variation, less biased sample standard deviation), using n − 1. However caution is needed: some calculators and software packages may provide for both or only the more unusual formulation. This article uses the following symbols and definitions: - -*μ is the population mean - -*$\overline{x}$ is the sample mean - -*σ2 is the population variance - -*sn2 is the biased sample variance (i.e. without Bessel's correction) - -*s2 is the unbiased sample variance (i.e. with Bessel's correction) - -The standard deviations will then be the square roots of the respective variances. Since the square root introduces bias, the terminology "uncorrected" and "corrected" is preferred for the standard deviation estimators: - -*sn is the uncorrected sample standard deviation (i.e. without Bessel's correction) - -*s is the corrected sample standard deviation (i.e. with Bessel's correction), which is less biased, but still biased - -The sample mean is given by - -\overline{x}=\frac{1}{n}\sum_{i=1}^n x_i. - -The biased sample variance is then written: - -s_n^2 = \frac {1}{n} \sum_{i=1}^n \left(x_i - \overline{x} \right)^ 2 = \frac{\sum_{i=1}^n x_i^2}{n} - \frac{\left(\sum_{i=1}^n x_i\right)^2}{n^2} - -and the unbiased sample variance is written: - -s^2 = \frac {1}{n-1} \sum_{i=1}^n \left(x_i - \overline{x} \right)^ 2 = \frac{\sum_{i=1}^n x_i^2}{n-1} - \frac{\left(\sum_{i=1}^n x_i\right)^2}{(n-1)n} = \left(\frac{n}{n-1}\right)s_n^2. - -As a background fact, we use the identity $E[x^2] = \mu^2 + \sigma^2$ which follows from the definition of the standard deviation and linearity of expectation. - -A very helpful observation is that for any distribution, the variance equals half the expected value of $(x_1 - x_2)^2$ when $x_1, x_2$ are an independent sample from that distribution. 
To prove this observation we will use that $E[x_1x_2] = E[x_1]E[x_2]$ (which follows from the fact that they are independent) as well as linearity of expectation: -$$ -E[(x_1 - x_2)^2] = E[x_1^2] - E[2x_1x_2] + E[x_2^2] = (\sigma^2 + \mu^2) - 2\mu^2 + (\sigma^2 + \mu^2) = 2\sigma^2 -$$ - -Now that the observation is proven, it suffices to show that the expected squared difference of two observations from the sample population $x_1, \ldots, x_n$ equals $(n-1)/n$ times the expected squared difference of two observations from the original distribution. To see this, note that when we pick $x_u$ and $x_v$ via u, v being integers selected independently and uniformly from 1 to n, a fraction $n/n^2 = 1/n$ of the time we will have u = v and therefore the sampled squared difference is zero independent of the original distribution. The remaining $1-1/n$ of the time, the value of $E[(x_u-x_v)^2]$ is the expected squared difference between two independent observations from the original distribution. Therefore, dividing the sample expected squared difference by $(1-1/n)$, or equivalently multiplying by $1/(1-1/n) = n/(n-1),$ gives an unbiased estimate of the original expected squared difference. - -Recycling an identity for variance, - - - -\begin{align} - -\sum_{i=1}^n \left(x_i - \overline{x} \right)^2 - -&= \sum_{i=1}^n \left( x_i^2 - 2 x_i \overline{x} + \overline{x}^2 \right) \\ - -&= \sum_{i=1}^n x_i^2 - 2 \overline{x} \sum_{i=1}^n x_i + \sum_{i=1}^n \overline{x}^2 \\ - -&= \sum_{i=1}^n x_i^2 - 2n \overline{x}^2 + n \overline{x}^2 \\ - -&= \sum_{i=1}^n x_i^2 - n \overline{x}^2 - -\end{align} - - - -so - - - -\begin{align} - -\operatorname{E}\left(\sum_{i=1}^n \left[(x_i - \mu) - \left(\overline{x} - \mu \right) \right]^2 \right) - -&= \operatorname{E}\left( \left( \sum_{i=1}^n (x_i-\mu)^2\right) - n (\overline{x}-\mu)^2 \right) \\ - -&= \left( \sum_{i=1}^n \operatorname{E}\left((x_i-\mu)^2 \right)\right) - n \operatorname{E}\left((\overline{x}-\mu)^2\right) \\ - -&= \left(\sum_{i=1}^n \operatorname{Var}(x_i)\right) - n \operatorname{Var}\left(\overline{x} \right) - -\end{align} - - - -and by definition, - - - -\begin{align} - -\operatorname{E}(s^2) - -& = \operatorname{E}\left(\sum_{i=1}^n \frac{(x_i-\overline{x})^2}{n-1} \right)\\ - -& = \frac{1}{n-1} \operatorname{E}\left(\sum_{i=1}^n \left[(x_i - \mu) - \left(\overline{x} - \mu\right)\right]^2 \right)\\ - -&= \frac{1}{n-1} \left[ \left( \sum_{i=1}^n \operatorname{Var}(x_i) \right) - n \operatorname{Var}(\overline{x})\right] - -\end{align} - - - -Note that, since x1, x2, …, xn are a random sample from a distribution with variance σ2, it follows that for each i = 1, 2, …, n: -$$ - \operatorname{Var}(x_i) = \sigma^2 -$$ - -and also -$$ -\operatorname{Var}(\overline{x}) = \frac{\sigma^2} n -$$ - -This is a property of the variance of uncorrelated variables, arising from the Bienaymé formula. The required result is then obtained by substituting these two formulae: - - - -\operatorname{E}(s^2) = \frac{1}{n-1}\left[\sum_{i=1}^n \sigma^2 - n\sigma^2/n\right] = \frac{1}{n-1}(n\sigma^2-\sigma^2) = \sigma^2. 
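The unbiasedness just established is easy to confirm numerically. Below is a minimal Monte Carlo sketch in Python (standard library only; the standard normal population, n = 5, and the number of trials are arbitrary choices):

```python
import random

random.seed(0)
n, trials = 5, 200_000   # sample size and number of simulated samples
sigma2 = 1.0             # true variance of the standard normal population

biased = unbiased = 0.0
for _ in range(trials):
    x = [random.gauss(0, 1) for _ in range(n)]
    m = sum(x) / n                        # sample mean
    ss = sum((xi - m) ** 2 for xi in x)   # sum of squared deviations
    biased += ss / n                      # s_n^2: divide by n
    unbiased += ss / (n - 1)              # s^2: Bessel's correction

print(biased / trials)    # ≈ (n-1)/n * sigma2 = 0.8
print(unbiased / trials)  # ≈ sigma2 = 1.0
```

The uncorrected average settles near (n − 1)/n · σ² = 0.8, while the Bessel-corrected average settles near σ² = 1, matching E(s²) = σ² above.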
- - - -The expected discrepancy between the biased estimator and the true variance is - - - -\begin{align} - -\operatorname{E} \left[ \sigma^2 - s_n^2 \right] &= \operatorname{E}\left[ \frac{1}{n} \sum_{i=1}^n(x_i - \mu)^2 - \frac{1}{n}\sum_{i=1}^n (x_i - \overline{x})^2 \right] \\ - -&= \operatorname{E}\left[ \frac{1}{n} \sum_{i=1}^n\left((x_i^2 - 2 x_i \mu + \mu^2) - (x_i^2 - 2 x_i \overline{x} + \overline{x}^2)\right) \right] \\ - -&= \operatorname{E}\left[ \frac{1}{n} \sum_{i=1}^n\left(\mu^2 - \overline{x}^2 + 2 x_i (\overline{x}-\mu) \right) \right] \\ - -&= \operatorname{E}\left[ \mu^2 - \overline{x}^2 + \frac{1}{n} \sum_{i=1}^n 2 x_i (\overline{x} - \mu) \right] \\ - -&= \operatorname{E}\left[ \mu^2 - \overline{x}^2 + 2(\overline{x} - \mu) \overline{x} \right] \\ - -&= \operatorname{E}\left[ \mu^2 - 2 \overline{x} \mu + \overline{x}^2 \right] \\ - -&= \operatorname{E}\left[ (\overline{x} - \mu)^2 \right] \\ - -&= \operatorname{Var} (\overline{x}) \\ - -&= \frac{\sigma^2}{n} - -\end{align} - - - -So, the expected value of the biased estimator will be -$$ - \operatorname{E} \left[ s^2_n \right] = \sigma^2 - \frac{\sigma^2}{n} = \frac{n-1}{n} \sigma^2 -$$ - -So, an unbiased estimator should be given by -$$ - s^2 = \frac{n}{n-1} s_n^2 -$$ - -In the biased estimator, by using the sample mean instead of the true mean, you are underestimating each xi − µ by x − µ. We know that the variance of a sum is the sum of the variances (for uncorrelated variables). So, to find the discrepancy between the biased estimator and the true variance, we just need to find the expected value of (x − µ)2. - -This is just the variance of the sample mean, which is σ2/n. So, we expect that the biased estimator underestimates σ2 by σ2/n, and so the biased estimator = (1 − 1/n) × the unbiased estimator = (n − 1)/n × the unbiased estimator. diff --git a/wiki/wikipedia/3933.txt b/wiki/wikipedia/3933.txt deleted file mode 100644 index 036c132929264acc1328d0d5073456b5bbb42ee1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3933.txt +++ /dev/null @@ -1,20 +0,0 @@ -In mathematics, Vitale's random Brunn–Minkowski inequality is a theorem due to Richard Vitale that generalizes the classical Brunn–Minkowski inequality for compact subsets of n-dimensional Euclidean space Rn to random compact sets. - -Let X be a random compact set in Rn; that is, a Borel-measurable function from some probability space (Ω, Σ, Pr) to the space of non-empty, compact subsets of Rn equipped with the Hausdorff metric. A random vector V : Ω → Rn is called a selection of X if Pr(V ∈ X) = 1. If K is a non-empty, compact subset of Rn, let -$$ -\| K \| = \max \left\{ \left. \| v \|_{\mathbb{R}^{n}} \right| v \in K \right\} -$$ - -and define the set-valued expectation E[X] of X to be -$$ -\mathrm{E} [X] = \{ \mathrm{E} [V] | V \mbox{ is a selection of } X \mbox{ and } \mathrm{E} \| V \| < + \infty \}. -$$ - -Note that E[X] is a subset of Rn. In this notation, Vitale's random Brunn–Minkowski inequality is that, for any random compact set X with $E[\|X\|]<+\infty$, -$$ -\left( \mathrm{vol}_n \left( \mathrm{E} [X] \right) \right)^{1/n} \geq \mathrm{E} \left[ \mathrm{vol}_n (X)^{1/n} \right], -$$ - -where "$vol_n$" denotes n-dimensional Lebesgue measure. - -If X takes the values (non-empty, compact sets) K and L with probabilities 1 - λ and λ respectively, then Vitale's random Brunn–Minkowski inequality is simply the original Brunn–Minkowski inequality for compact sets. 
diff --git a/wiki/wikipedia/3934.txt b/wiki/wikipedia/3934.txt deleted file mode 100644 index ce316acad9ae559db215b89600edc75f99dfdd40..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3934.txt +++ /dev/null @@ -1,7 +0,0 @@ -Spectral layout is a class of algorithm for drawing graphs. The layout uses the eigenvectors of a matrix, such as the Laplace matrix of the graph, as Cartesian coordinates of the graph's vertices. - -The idea of the layout is to compute the two largest (or smallest) eigenvalues and corresponding eigenvectors of the Laplacian matrix of the graph and then use those for actually placing the nodes. - -Usually nodes are placed in the 2 dimensional plane. An embedding into more dimensions can be found by using more eigenvectors. - -In the 2-dimensional case, for a given node which corresponds to the row/column $i$ in the (symmetric) Laplacian matrix $L$ of the graph, the $x$ and $y$-coordinates are the $i$-th entries of the first and second eigenvectors of $L$, respectively. diff --git a/wiki/wikipedia/3935.txt b/wiki/wikipedia/3935.txt deleted file mode 100644 index b31777891d91c6ee010b3e8a41995f889fcf3468..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3935.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, Maharam's theorem is a deep result about the decomposability of measure spaces, which plays an important role in the theory of Banach spaces. In brief, it states that every complete measure space is decomposable into "non-atomic parts" (copies of products of the unit interval [0,1] on the reals), and "purely atomic parts", using the counting measure on some discrete space. The theorem is due to Dorothy Maharam. - -It was extended to localizable measure spaces by Irving Segal. - -The result is important to classical Banach space theory, in that, when considering the Banach space given as an $L_p$ space of measurable functions over a general measurable space, it is sufficient to understand it in terms of its decomposition into non-atomic and atomic parts. - -Maharam's theorem can also be translated into the language of abelian von Neumann algebras. Every abelian von Neumann algebra is isomorphic to a product of σ-finite abelian von Neumann algebras, and every σ-finite abelian von Neumann algebra is isomorphic to a spatial tensor product of discrete abelian von Neumann algebras, i.e., algebras of bounded functions on a discrete set. - -A similar theorem was given by Kazimierz Kuratowski for Polish spaces, stating that they are isomorphic, as Borel spaces, to either the reals, the integers, or a finite set. diff --git a/wiki/wikipedia/3936.txt b/wiki/wikipedia/3936.txt deleted file mode 100644 index 968d4fa57300fd735260c5441fb2bdac83437780..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3936.txt +++ /dev/null @@ -1,23 +0,0 @@ -In the geometry of circle packings in the Euclidean plane, the ring lemma gives a lower bound on the sizes of adjacent circles in a circle packing. - -The lemma states: Let $n$ be any integer greater than or equal to three. Suppose that the unit circle is surrounded by a ring of $n$ interior-disjoint circles, all tangent to it, with consecutive circles in the ring tangent to each other. Then the minimum radius of any circle in the ring is at least the unit fraction -$$ -\frac{1}{F_{2n-3}-1} -$$ - -where $F_i$ is the $i$th Fibonacci number. 
- -The sequence of minimum radii, from $n=3$, begins -$$ -1, \frac{1}{4}, \frac{1}{12}, \frac{1}{33}, \frac{1}{88}, \frac{1}{232}, \dots -$$ - -and the denominators in this sequence are given as . - -Generalizations to three-dimensional space are also known. - -An infinite sequence of circles can be constructed, containing rings for each $n$ that exactly meet the bound of the ring lemma, showing that it is tight. The construction allows halfplanes to be considered as degenerate circles with infinite radius, and includes additional tangencies between the circles beyond those required in the statement of the lemma. It begins by sandwiching the unit circle between two parallel halfplanes; in the geometry of circles, these are considered to be tangent to each other at the point at infinity. Each successive circle after these first two is tangent to the central unit circle and to the two most recently added circles; see the illustration for the first six circles (including the two halfplanes) constructed in this way. The first $n$ circles of this construction form a ring, whose minimum radius can be calculated by Descartes' theorem to be the same as the radius specified in the ring lemma. This construction can be perturbed to a ring of $n$ finite circles, without additional tangencies, whose minimum radius is arbitrarily close to this bound. - -A version of the ring lemma with a weaker bound was first proven by Burton Rodin and Dennis Sullivan as part of their proof of William Thurston's conjecture that circle packings can be used to approximate conformal maps. Lowell Hansen gave a recurrence relation for the tightest possible lower bound, and Dov Aharonov found a closed-form expression for the same bound. - -Beyond its original application to conformal mapping, the circle packing theorem and the ring lemma play key roles in a proof by Keszegh, Pach, and Pálvölgyi that planar graphs of bounded degree can be drawn with bounded slope number. diff --git a/wiki/wikipedia/3937.txt b/wiki/wikipedia/3937.txt deleted file mode 100644 index 16f2c33915b3528dd3df2694a33a23cc9a359152..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3937.txt +++ /dev/null @@ -1,103 +0,0 @@ -In recreational mathematics, a square array of numbers, usually positive integers, is called a magic square if the sums of the numbers in each row, each column, and both main diagonals are the same. The order of the magic square is the number of integers along one side (n), and the constant sum is called the magic constant. If the array includes just the positive integers $1,2,...,n^2$, the magic square is said to be normal. Some authors take magic square to mean normal magic square. - -Magic squares that include repeated entries do not fall under this definition and are referred to as trivial. Some well-known examples, including the Sagrada Família magic square and the Parker square are trivial in this sense. When all the rows and columns but not both diagonals sum to the magic constant we have semimagic squares (sometimes called orthomagic squares). - -The mathematical study of magic squares typically deals with their construction, classification, and enumeration. Although completely general methods for producing all the magic squares of all orders do not exist, historically three general techniques have been discovered: the bordering method, making composite magic squares, and adding two preliminary squares.
There are also more specific strategies like the continuous enumeration method that reproduces specific patterns. Magic squares are generally classified according to their order n as: odd if n is odd, evenly even (also referred to as "doubly even") if n is a multiple of 4, oddly even (also known as "singly even") if n is any other even number. This classification is based on the different techniques required to construct odd, evenly even, and oddly even squares. Beside this, depending on further properties, magic squares are also classified as associative magic squares, pandiagonal magic squares, most-perfect magic squares, and so on. More challengingly, attempts have also been made to classify all the magic squares of a given order as transformations of a smaller set of squares. Except for n ≤ 5, the enumeration of higher order magic squares is still an open challenge. The enumeration of most-perfect magic squares of any order was only accomplished in the late 20th century. - -Magic squares have a long history, dating back to at least 190 BCE in China. At various times they have acquired occult or mythical significance, and have appeared as symbols in works of art. In modern times they have been generalized in a number of ways, including using extra or different constraints, multiplying instead of adding cells, using alternate shapes or more than two dimensions, and replacing numbers with shapes and addition with geometric operations. - -The third order magic square was known to Chinese mathematicians as early as 190 BCE, and explicitly given by the first century of the common era. The first dateable instance of the fourth-order magic square occurs in 587 CE in India. Specimens of magic squares of order 3 to 9 appear in an encyclopedia from Baghdad, the Encyclopedia of the Brethren of Purity (Rasa'il Ikhwan al-Safa). By the end of the 12th century, the general methods for constructing magic squares were well established. Around this time, some of these squares were increasingly used in conjunction with magic letters, as in Shams Al-ma'arif, for occult purposes. In India, all the fourth-order pandiagonal magic squares were enumerated by Narayana in 1356. Magic squares were made known to Europe through translation of Arabic sources as occult objects during the Renaissance, and the general theory had to be re-discovered independently of prior developments in China, India, and the Middle East. Also notable are the ancient cultures with a tradition of mathematics and numerology that did not discover the magic squares: Greeks, Babylonians, Egyptians, and Pre-Columbian Americans. - -While ancient references to the pattern of even and odd numbers in the 3×3 magic square appear in the I Ching, the first unequivocal instance of this magic square appears in the chapter called Mingtang (Bright Hall) of a 1st-century book Da Dai Liji (Record of Rites by the Elder Dai), which purported to describe ancient Chinese rites of the Zhou dynasty. - -The above magic squares of orders 3 to 9 are taken from Yang Hui's treatise, in which the Luo Shu principle is clearly evident. In Jinko-ki (1665) by Muramatsu Kudayu Mosei, both magic squares and magic circles are displayed. The largest square Mosei constructs is of order 19. Various magic squares and magic circles were also published by Nozawa Teicho in Dokai-sho (1666), Sato Seiko in Kongenki (1666), and Hosino Sanenobu in Ko-ko-gen Sho (1673). One of Seki Takakazu's Seven Books (Hojin Yensan) (1683) is devoted completely to magic squares and circles.
This is the first Japanese book to give a general treatment of magic squares in which the algorithms for constructing odd, singly even and doubly even bordered magic squares are clearly described. In 1694 and 1695, Yueki Ando gave different methods to create the magic squares and displayed squares of order 3 to 30. A fourth-order magic cube was constructed by Yoshizane Tanaka (1651–1719) in Rakusho-kikan (1683). The study of magic squares was continued by Seki's pupils, notably by Katahiro Takebe, whose squares were displayed in the fourth volume of Ichigen Kappo by Shukei Irie, Yoshisuke Matsunaga in Hojin-Shin-jutsu, Yoshihiro Kurushima in Kyushi Iko who rediscovered a method to produce the odd squares given by Agrippa, and Naonobu Ajima. Thus by the beginning of the 18th century, the Japanese mathematicians were in possession of methods to construct magic squares of arbitrary order. After this, attempts at enumerating the magic squares were initiated by Nushizumi Yamaji. - -The square of Varahamihira as given above has a sum of 18. Here the numbers 1 to 8 appear twice in the square. It is a pan-diagonal magic square. Four different magic squares can be obtained by adding 8 to one of the two sets of the 1 to 8 sequence. The sequence is selected such that the number 8 is added exactly twice in each row, each column and each of the main diagonals. One of the possible magic squares is shown on the right side. This magic square is remarkable in that it is a 90 degree rotation of a magic square that appears in the 13th century Islamic world as one of the most popular magic squares. - -The construction of the 4th-order magic square is detailed in a work titled Kaksaputa, composed by the alchemist Nagarjuna around the 10th century CE. All of the squares given by Nagarjuna are 4×4 magic squares, and one of them is called Nagarjuniya after him. Nagarjuna gave a method of constructing a 4×4 magic square using a primary skeleton square, given an odd or even magic sum. The Nagarjuniya square is given below, and has the sum total of 100. - -The Nagarjuniya square is a pan-diagonal magic square. The Nagarjuniya square is made up of two arithmetic progressions starting from 6 and 16 with eight terms each, with a common difference between successive terms as 4. When these two progressions are reduced to the normal progression of 1 to 8, we obtain the adjacent square. - -Around the 12th century, a 4×4 magic square was inscribed on the wall of Parshvanath temple in Khajuraho, India. Several Jain hymns teach how to make magic squares, although they are undateable. The first dateable appearance of a magic square of order 3 occurs in Jābir ibn Hayyān's (fl. c. 721 – c. 815) Kitab al-mawazin al-Saghir (The Small Book of Balances) where the magic square and its related numerology is associated with alchemy. These early treatises were purely mathematical, and the Arabic designation for magic squares used is wafq al-a'dad, which translates as harmonious disposition of the numbers. The squares of order 3 to 7 from Rasa'il are given below: - -The magic square of order three was described as a child-bearing charm since its first literary appearances in the alchemical works of Jābir ibn Hayyān (fl. c. 721 – c. 815) - -Unlike in Persia and Arabia, we have better documentation of how the magic squares were transmitted to Europe.
Around 1315, influenced by Arab sources, the Greek Byzantine scholar Manuel Moschopoulos wrote a mathematical treatise on the subject of magic squares, leaving out the mysticism of his Middle Eastern predecessors, where he gave two methods for odd squares and two methods for evenly even squares. Moschopoulos was essentially unknown in Latin Europe until the late 17th century, when Philippe de la Hire rediscovered his treatise in the Royal Library of Paris. However, he was not the first European to have written on magic squares; and the magic squares were disseminated to the rest of Europe through Spain and Italy as occult objects. The early occult treatises that displayed the squares did not describe how they were constructed. Thus the entire theory had to be rediscovered. - -Magic squares had first appeared in Europe in Kitāb tadbīrāt al-kawākib (Book on the Influences of the Planets) written by Ibn Zarkali of Toledo, Al-Andalus, as planetary squares by the 11th century. Ibn Zarkali's work was translated as Libro de Astromagia in the 1280s, due to Alfonso X of Castile. In the Alfonsine text, magic squares of different orders are assigned to the respective planets, as in the Islamic literature; unfortunately, of all the squares discussed, the Mars magic square of order five is the only square exhibited in the manuscript. An early account of the construction of bordered squares was given by Antoine Arnauld in his Nouveaux éléments de géométrie (1667). In the two treatises Des quarrez ou tables magiques and Table générale des quarrez magiques de quatre de côté, published posthumously in 1693, twenty years after his death, Bernard Frenicle de Bessy demonstrated that there were exactly 880 distinct magic squares of order four. Frenicle gave methods to construct magic squares of any odd or even order, where the even ordered squares were constructed using borders. He also showed that interchanging rows and columns of a magic square produced new magic squares. - -In the 19th century, Bernard Violle gave a comprehensive treatment of magic squares in his three-volume Traité complet des carrés magiques (1837–1838), which also described magic cubes, parallelograms, parallelopipeds, and circles. Pandiagonal squares were extensively studied by Andrew Hollingworth Frost, who learned of them while in the town of Nasik, India (thus calling them Nasik squares), in a series of articles: On the knight's path (1877), On the General Properties of Nasik Squares (1878), On the General Properties of Nasik Cubes (1878), On the construction of Nasik Squares of any order (1896). He showed that it is impossible to have a normal singly-even pandiagonal magic square. Frederick A.P. Barnard constructed inlaid magic squares and other three-dimensional magic figures like magic spheres and magic cylinders in Theory of magic squares and of magic cubes (1888). According to the legend, there was at one time in ancient China a huge flood. While the great king Yu was trying to channel the water out to sea, a turtle emerged from it with a curious pattern on its shell: a 3×3 grid in which circular dots of numbers were arranged, such that the sum of the numbers in each row, column and diagonal was the same: 15. According to the legend, thereafter people were able to use this pattern in a certain way to control the river and protect themselves from floods. The Lo Shu Square, as the magic square on the turtle shell is called, is the unique normal magic square of order three in which 1 is at the bottom and 2 is in the upper right corner.
Every normal magic square of order three is obtained from the Lo Shu by rotation or reflection. - -There is a well-known 12th-century 4×4 normal magic square inscribed on the wall of the Parshvanath temple in Khajuraho, India. - -This is known as the Chautisa Yantra (Chautisa, 34; Yantra, lit. "device"), since its magic sum is 34. It is one of the three 4×4 pandiagonal magic squares and is also an instance of the most-perfect magic square. The study of this square led to the appreciation of pandiagonal squares by European mathematicians in the late 19th century. Pandiagonal squares were referred to as Nasik squares or Jain squares in older English literature. - -The order four normal magic square Albrecht Dürer immortalized in his 1514 engraving Melencolia I, referred to above, is believed to be the first seen in European art. The square associated with Jupiter appears as a talisman used to drive away melancholy. It is very similar to Yang Hui's square, which was created in China about 250 years before Dürer's time. As with every order 4 normal magic square, the magic sum is 34. But in the Dürer square this sum is also found - -in each of the quadrants, in the center four squares, and in the corner squares (of the 4×4 as well as the four contained 3×3 grids). This sum can also be found in the four outer numbers clockwise from the corners (3+8+14+9) and likewise the four counter-clockwise (the locations of four queens in the two solutions of the 4 queens puzzle), the two sets of four symmetrical numbers (2+8+9+15 and 3+5+12+14), the sum of the middle two entries of the two outer columns and rows (5+9+8+12 and 3+2+15+14), and in four kite or cross shaped quartets (3+5+11+15, 2+10+8+14, 3+9+7+15, and 2+6+12+14). The two numbers in the middle of the bottom row give the date of the engraving: 1514. The numbers 1 and 4 at either side of the date correspond respectively to the letters "A" and "D," which are the initials of the artist. - -Dürer's magic square can also be extended to a magic cube. - -The Passion façade of the Sagrada Família church in Barcelona, conceptualized by Antoni Gaudí and designed by sculptor Josep Subirachs, features a trivial order 4 magic square: The magic constant of the square is 33, the age of Jesus at the time of the Passion. Structurally, it is very similar to the Melancholia magic square, but it has had the numbers in four of the cells reduced by 1. - -Trivial squares such as this one are not generally mathematically interesting and only have historical significance. Lee Sallows has pointed out that, due to Subirachs's ignorance of magic square theory, the renowned sculptor made a needless blunder, and supports this assertion by giving several examples of non-trivial 4×4 magic squares showing the desired magic constant of 33. - -Similarly to Dürer's magic square, the Sagrada Família's magic square can also be extended to a magic cube. - -The Parker Square, named after recreational mathematician Matt Parker, is an attempt to create a 3×3 magic square of squares, a prized unsolved problem since Euler. The Parker Square is a trivial semimagic square since it uses some numbers more than once, and the diagonal 23^2 + 37^2 + 47^2 sums to 4107, not 3051 as for all the rows, columns, and the other diagonal. The Parker Square became a "mascot for people who give it a go, but ultimately fall short". It is also a metaphor for something that is almost right, but is a little off.
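These sums are easy to verify mechanically. The following Python sketch (our illustration; the grid is the commonly reproduced form of the Parker Square) recomputes the row, column, and diagonal sums, confirming that one diagonal breaks the pattern exactly as described above:

```python
# The commonly reproduced Parker Square: entries are the squares of these numbers
# (note the repeated entries 1, 29, and 41, which make the square trivial).
roots = [[29,  1, 47],
         [41, 37,  1],
         [23, 41, 29]]
grid = [[r * r for r in row] for row in roots]

rows  = [sum(row) for row in grid]
cols  = [sum(col) for col in zip(*grid)]
diag1 = sum(grid[i][i] for i in range(3))      # 29^2 + 37^2 + 29^2
diag2 = sum(grid[i][2 - i] for i in range(3))  # 47^2 + 37^2 + 23^2

print(rows, cols, diag1)  # all of these sums equal 3051
print(diag2)              # 4107 -- the diagonal that falls short
```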
- -; Magic tori - -Cross-referenced to the above sequence, a new classification enumerates the magic tori that display these magic squares. The number of magic tori of order n, for n from 1 to 5, is: - -1, 0, 1, 255, 251449712 . - -; Higher order squares and tori - -The number of distinct normal magic squares rapidly increases for higher orders. - -The 880 magic squares of order 4 are displayed on 255 magic tori of order 4 and the 275,305,224 squares of order 5 are displayed on 251,449,712 magic tori of order 5. The number of magic tori and distinct normal squares is not yet known for any higher order. - -Algorithms tend to only generate magic squares of a certain type or classification, making counting all possible magic squares quite difficult. Traditional counting methods have proven unsuccessful, so statistical analysis using the Monte Carlo method has been applied. The basic principle applied to magic squares is to randomly generate $n \times n$ matrices of elements 1 to $n^2$ and check if the result is a magic square. The probability that a randomly generated matrix of numbers is a magic square is then used to approximate the number of magic squares. - -More intricate versions of the Monte Carlo method, such as the exchange Monte Carlo, and Monte Carlo backtracking have produced even more accurate estimations. Using these methods it has been shown that the probability of magic squares decreases rapidly as n increases. Using fitting functions gives the curves seen to the right. - -* A magic square remains magic when its numbers are multiplied by any constant. In discussing magic squares, equivalent squares are usually not considered distinct. The 8 equivalent squares are given for the 3×3 magic square below: - -* Given any magic square, another magic square of the same order can be formed by interchanging the row and the column which intersect in a cell on a diagonal with the row and the column which intersect in the complementary cell (i.e. cell symmetrically opposite from the center) of the same diagonal. For an even square, there are n/2 pairs of rows or columns that can be interchanged; thus we can obtain $2^{n/2} \times 2^{n/2} = 2^n$ equivalent magic squares by combining such interchanges. For an odd square, there are (n - 1)/2 pairs of rows or columns that can be interchanged; and $2^{n-1}$ equivalent magic squares obtained by combining such interchanges. Interchanging all the rows flips the square vertically (i.e. reflected along the horizontal axis), while interchanging all the columns flips the square horizontally (i.e. reflected along the vertical axis). In the example below, a 4×4 associative magic square on the left is transformed into a square on the right by interchanging the second and third row, yielding the famous Dürer magic square. - -* An associative magic square remains associative when two same-sided rows (or columns) are interchanged along with the corresponding other-sided rows (or columns). The "product" of two magic squares creates a magic square of higher order than the two multiplicands. Let the two magic squares be of orders m and n. The final square will be of order m × n. Divide the square of order m × n into m × m sub-squares, such that there are a total of $n^2$ such sub-squares. In the square of order n, reduce by 1 the value of all the numbers. Multiply these reduced values by $m^2$, and place the results in the corresponding sub-squares of the m × n whole square. The squares of order m are added $n^2$ times to the sub-squares of the final square.
The peculiarity of this construction method is that each magic subsquare will have a different magic sum. The square made of such magic sums from each magic subsquare will again be a magic square. The smallest composite magic square of order 9, composed of two order-3 squares, is given below. - -Since each of the 3×3 sub-squares can be independently rotated and reflected into 8 different squares, from this single 9×9 composite square we can derive $8^9$ = 134,217,728 essentially different 9×9 composite squares. Plenty more composite magic squares can also be derived if we select non-consecutive numbers in the magic sub-squares, like in Yang Hui's version of the 9×9 composite magic square. The next smallest composite magic squares of order 12, composed of magic squares of order 3 and 4 are given below. - -For the base squares, there is only one essentially different 3rd order square, while there are 880 essentially different 4th-order squares that we can choose from. Each pairing can produce two different composite squares. Since each magic sub-square in each composite square can be expressed in 8 different forms due to rotations and reflections, there can be $1 \times 880 \times 8^9 + 880 \times 1 \times 8^{16} \approx 2.476 \times 10^{17}$ essentially different 12×12 composite magic squares created this way, with consecutive numbers in each sub-square. In general, if there are $c_m$ and $c_n$ essentially different magic squares of order m and n, then we can form $c_m \times c_n \times (8^{m^2} + 8^{n^2})$ composite squares of order mn, provided m ≠ n. If m = n, then we can form $(c_m)^2 \times 8^{m^2}$ composite squares of order $m^2$. - -When the squares are of doubly even order, we can construct a composite magic square in a manner more elegant than the above process, in the sense that every magic subsquare will have the same magic constant. Let n be the order of the main square and m the order of the equal subsquares. The subsquares are filled one by one, in any order, with a continuous sequence of $m^2/2$ smaller numbers (i.e. numbers less than or equal to $n^2/2$) together with their complements to $n^2 + 1$. Each subsquare as a whole will yield the same magic sum. The advantage of this type of composite square is that each subsquare is filled in the same way and their arrangement is arbitrary. Thus, the knowledge of a single construction of even order will suffice to fill the whole square. Furthermore, if the subsquares are filled in the natural sequence, then the resulting square will be pandiagonal. The magic sum of the subsquares is related to the magic sum of the whole square by $M_m = \frac{M_n}{k}$ where n = km. - -[Image: The first linear area magic square] - -In 2017, following initial ideas of and , the first linear area magic square (L-AMS) was constructed by Walter Trump. - -Two-dimensional shapes other than squares can also be considered. The general case is to consider a design with N parts to be magic if the N parts are labeled with the numbers 1 through N and a number of identical sub-designs give the same sum. Examples include magic circles, magic rectangles, magic triangles, magic stars, magic hexagons, magic diamonds. Going up in dimension results in magic spheres, magic cylinders, magic cubes, magic parallelepipeds, magic solids, and other magic hypercubes. - -Possible magic shapes are constrained by the number of equal-sized, equal-sum subsets of the chosen set of labels.
For example, if one proposes to form a magic shape labeling the parts with {1, 2, 3, 4}, the sub-designs will have to be labeled with {1,4} and {2,3}. - -The most common use for these planetary magic squares, or kameas, is to provide a pattern upon which to construct the sigils of spirits, angels or demons; the letters of the entity's name are converted into numbers, and lines are traced through the pattern that these successive numbers make on the kamea. - -In a magical context, the term magic square is also applied to a variety of word squares or number squares found in magical grimoires, including some that do not follow any obvious pattern, and even those with differing numbers of rows and columns. They are generally intended for use as talismans. For instance, the following squares are: The Sator square, one of the most famous magic squares found in a number of grimoires including the Key of Solomon; a square "to overcome envy", from The Book of Power; and two squares from The Book of the Sacred Magic of Abramelin the Mage, the first to cause the illusion of a superb palace to appear, and the second to be worn on the head of a child during an angelic invocation: - -* In Goethe's Faust, the witch's spell used to make a youth elixir for Faust has been interpreted as a construction of a magic square. - -* The English composer Peter Maxwell Davies has used magic squares to structure many of his compositions. For example, his 1975 Ave Maris Stella uses the 9×9 magic square of the Moon while his 1977 A Mirror of Whitening Light uses the 8×8 magic square of Mercury to create the entire set of notes and durations for the piece. His other works that employ magic squares include The Lighthouse (1979), Resurrection (1987), Strathclyde Concerto No. 3 for Horn and Trumpet (1989), as well as many of his symphonies. According to Davies' own account: -
- -A magic square in a musical composition is not a block of numbers – it is a generating principle, to be learned and known intimately, perceived inwardly as a multi-dimensional projection into that vast (chaotic!) area of the internal ear – the space/time crucible – where music is conceived. ... Projected onto the page, a magic square is a dead, black conglomeration of digits; tune in, and one hears a powerful, orbiting dynamo of musical images, glowing with numen and lumen. - -* Mathematician Matt Parker attempted to create a 3×3 magic square using square numbers in a YouTube video on the Numberphile channel. His failed attempt is known as the Parker Square. - -* The first season Stargate Atlantis episode "Brotherhood" involves completing a magic square as part of a puzzle guarding a powerful Ancient artefact. - -* Magic squares are also featured in the 2019 Spanish film Vivir dos veces. diff --git a/wiki/wikipedia/3938.txt b/wiki/wikipedia/3938.txt deleted file mode 100644 index f2156ca89b6caeed83b6756434c2d5c92bfb81fd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3938.txt +++ /dev/null @@ -1,31 +0,0 @@ -Tentai Show (Japanese: 天体ショー tentai shō), also known by the names Tentaisho, Galaxies, Spiral Galaxies, or Sym-a-Pix, is a binary-determination logic puzzle published by Nikoli. - -Tentai Show is played on a rectangular grid of squares. On the grid are dots representing stars, which can be found either at the center of a cell, on an edge, or on a corner. - -The objective of the puzzle is to draw lines along the dashed lines to divide the grid into regions representing galaxies. - -In the resulting grid, all galaxies must have 180° rotational symmetry, and each must contain exactly one dot, located at its center. - -The colors of the dots do not affect the logic of the puzzle and can be ignored when solving. In puzzles with multiple colored dots, the regions of the finished grid may be colored with the corresponding dot colors to reveal a picture. - -Tentai Show puzzles can be solved using the following steps. - -# Draw walls between adjacent cells that contain a dot or a fraction of a dot. These cells must belong to different galaxies. - -# Draw walls around the dot according to rotational symmetry. Borders of the grid also count as walls. - -# Find cells in areas that are 'captured' by a dot. These are cells which cannot be reached by any other dot. These cells can only belong to the galaxy for that dot. - -The above steps can be repeated until the puzzle is solved. - -In more advanced puzzles, it may be necessary to consider the image of rotational symmetry. Find cells which only have one valid dot when considering their rotationally symmetric cells. A cell can belong to a galaxy if its symmetric cell can also belong to that galaxy. - -The name of the puzzle, "Tentai Show", has a double meaning when interpreted in Japanese. "Ten" (点) stands for dot, while "tai shō" (対称) stands for symmetry. The Japanese word "Tentai" (天体) is used to refer to astronomical objects. Combined together, "Tentai Show" can both mean rotational symmetry and astronomical show. - -Solving Tentai Show puzzles is known to be NP-complete. Demaine, Löffler, and Schmidt (2021) further strengthened this by proving NP-completeness even if all galaxies are restricted to be rectangles of sizes 1×1, 1×3, or 3×1. - -They also showed that finding a minimal set of galaxies that exactly cover a given shape is NP-complete.
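The symmetry constraint on galaxies is straightforward to express in code. Below is a small Python sketch (our illustration; the function name and the doubled-coordinate convention are ours) that tests whether a candidate galaxy is fixed by a 180° rotation about its dot; doubling coordinates makes dots on cell centers, edges, and corners all integer points:

```python
def is_symmetric_galaxy(cells, dot):
    """cells: set of (row, col) unit cells; dot: the galaxy's dot in doubled
    coordinates, where the center of cell (r, c) is (2*r + 1, 2*c + 1).
    Returns True when the region maps to itself under 180-degree rotation
    about the dot, which is the defining constraint of Tentai Show."""
    dr, dc = dot
    for r, c in cells:
        # rotate the cell's center (2r+1, 2c+1) by 180 degrees about the dot
        rr, rc = 2 * dr - (2 * r + 1), 2 * dc - (2 * c + 1)
        if ((rr - 1) // 2, (rc - 1) // 2) not in cells:
            return False
    return True

# A 1x2 domino with its dot on the shared edge between the two cells:
print(is_symmetric_galaxy({(0, 0), (0, 1)}, (1, 2)))  # True
# The same domino with the dot at the center of one cell is not symmetric:
print(is_symmetric_galaxy({(0, 0), (0, 1)}, (1, 1)))  # False
```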
- -Tentai Show puzzles can be solved in exponential time by going through all possible dissections of the grid and checking whether each one is a valid solution. - -Fertin, Jamshidi, and Komusiewicz (2015) gave a polynomial-time algorithm that solves the puzzle in various special cases, such as: (a) when all galaxies have size at most two, (b) when all galaxies are squares, and (c) when all galaxies are trivially connected. diff --git a/wiki/wikipedia/3939.txt b/wiki/wikipedia/3939.txt deleted file mode 100644 index 30b3f01395274f69c555e93946e516a287b2ed85..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3939.txt +++ /dev/null @@ -1,162 +0,0 @@ -In mathematics, the multiplication theorem is a certain type of identity obeyed by many special functions related to the gamma function. For the explicit case of the gamma function, the identity is a product of values; thus the name. The various relations all stem from the same underlying principle; that is, the relation for one special function can be derived from that for the others, and is simply a manifestation of the same identity in different guises. - -The multiplication theorem takes two common forms. In the first case, a finite number of terms are added or multiplied to give the relation. In the second case, an infinite number of terms are added or multiplied. The finite form typically occurs only for the gamma and related functions, for which the identity follows from a p-adic relation over a finite field. For example, the multiplication theorem for the gamma function follows from the Chowla–Selberg formula, which follows from the theory of complex multiplication. The infinite sums are much more common, and follow from characteristic zero relations on the hypergeometric series. - -The following tabulates the various appearances of the multiplication theorem for finite characteristic; the characteristic zero relations are given further down. In all cases, n and k are non-negative integers. For the special case of n = 2, the theorem is commonly referred to as the duplication formula. - -The duplication formula and the multiplication theorem for the gamma function are the prototypical examples. The duplication formula for the gamma function is - - - -\Gamma(z) \Gamma\left(z + \frac{1}{2}\right) = 2^{1-2z} \sqrt{\pi} \Gamma(2z). - - - -It is also called the Legendre duplication formula or Legendre relation, in honor of Adrien-Marie Legendre. The multiplication theorem is - - - -\Gamma(z) \Gamma\left(z + \frac{1}{k}\right) \Gamma\left(z + \frac{2}{k}\right) \cdots - -\Gamma\left(z + \frac{k-1}{k}\right) = - -(2 \pi)^{ \frac{k-1}{2}} k^{\frac{1-2kz}{2} } \Gamma(kz) - - - -for integer k ≥ 1, and is sometimes called Gauss's multiplication formula, in honour of Carl Friedrich Gauss. The multiplication theorem for the gamma functions can be understood to be a special case, for the trivial Dirichlet character, of the Chowla–Selberg formula. - -The polygamma function is the logarithmic derivative of the gamma function, and thus, the multiplication theorem becomes additive, instead of multiplicative: - -k^{m} \psi^{(m-1)}(kz) = \sum_{n=0}^{k-1} - -\psi^{(m-1)}\left(z+\frac{n}{k}\right) - -for $m>1$, and, for $m=1$, one has the digamma function: - -k\left[\psi(kz)-\log(k)\right] = \sum_{n=0}^{k-1} - -\psi\left(z+\frac{n}{k}\right). - -The polygamma identities can be used to obtain a multiplication theorem for harmonic numbers.
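The gamma-function identities above are easy to spot-check numerically. A small Python sketch (ours, not part of the article) verifies Gauss's multiplication formula in logarithmic form, using math.lgamma to avoid overflow:

```python
import math

def gauss_multiplication_residual(z, k):
    """log of prod_{n=0}^{k-1} Gamma(z + n/k) minus log of the right-hand
    side (2*pi)^((k-1)/2) * k^((1-2kz)/2) * Gamma(kz); should be ~0."""
    lhs = sum(math.lgamma(z + n / k) for n in range(k))
    rhs = ((k - 1) / 2) * math.log(2 * math.pi) \
        + (0.5 - k * z) * math.log(k) + math.lgamma(k * z)
    return lhs - rhs

for z in (0.3, 1.7, 4.2):
    for k in (2, 3, 5):
        assert abs(gauss_multiplication_residual(z, k)) < 1e-9
print("Gauss's multiplication formula verified numerically")
```

The case k = 2 is exactly the Legendre duplication formula.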
- -The Hurwitz zeta function generalizes the polygamma function to non-integer orders, and thus obeys a very similar multiplication theorem: -$$ -k^s\zeta(s)=\sum_{n=1}^k \zeta\left(s,\frac{n}{k}\right), -$$ - -where $\zeta(s)$ is the Riemann zeta function. This is a special case of -$$ -k^s\zeta(s,kz)= \sum_{n=0}^{k-1}\zeta\left(s,z+\frac{n}{k}\right) -$$ - -and -$$ -\zeta(s,kz)=\sum^{\infty}_{n=0} {s+n-1 \choose n} (1-k)^n z^n \zeta(s+n,z). -$$ - -Multiplication formulas for the non-principal characters may be given in the form of Dirichlet L-functions. - -The periodic zeta function is sometimes defined as - -F(s;q) = \sum_{m=1}^\infty \frac {e^{2\pi imq}}{m^s} - -=\operatorname{Li}_s\left(e^{2\pi i q} \right) - -where $\operatorname{Li}_s(z)$ is the polylogarithm. It obeys the duplication formula - -2^{-s} F(s;q) = F\left(s,\frac{q}{2}\right) - -+ F\left(s,\frac{q+1}{2}\right). - -As such, it is an eigenvector of the Bernoulli operator with eigenvalue $2^{-s}$. The multiplication theorem is -$$ -k^{-s} F(s;kq) = \sum_{n=0}^{k-1} F\left(s,q+\frac{n}{k}\right). -$$ - -The periodic zeta function occurs in the reflection formula for the Hurwitz zeta function, which is why the relation that it obeys, and the Hurwitz zeta relation, differ by the interchange of s → -s. - -The Bernoulli polynomials may be obtained as a limiting case of the periodic zeta function, taking s to be an integer, and thus the multiplication theorem there can be derived from the above. Similarly, substituting q = log z leads to the multiplication theorem for the polylogarithm. - -The duplication formula takes the form -$$ -2^{1-s}\operatorname{Li}_s(z^2) = \operatorname{Li}_s(z)+\operatorname{Li}_s(-z). -$$ - -The general multiplication formula is in the form of a Gauss sum or discrete Fourier transform: - -k^{1-s} \operatorname{Li}_s(z^k) = - -\sum_{n=0}^{k-1}\operatorname{Li}_s\left(ze^{i2\pi n/k}\right). - -These identities follow from that on the periodic zeta function, taking z = log q. - -The duplication formula for Kummer's function is -$$ -2^{1-n}\Lambda_n(-z^2) = \Lambda_n(z)+\Lambda_n(-z) -$$ - -and thus resembles that for the polylogarithm, but twisted by i. - -For the Bernoulli polynomials, the multiplication theorems were given by Joseph Ludwig Raabe in 1851: -$$ -k^{1-m} B_m(kx)=\sum_{n=0}^{k-1} B_m \left(x+\frac{n}{k}\right) -$$ - -and for the Euler polynomials, - -k^{-m} E_m(kx)= \sum_{n=0}^{k-1} - -(-1)^n E_m \left(x+\frac{n}{k}\right) - -\quad \mbox{ for } k=1,3,\dots - -and - -k^{-m} E_m(kx)= \frac{-2}{m+1} \sum_{n=0}^{k-1} - -(-1)^n B_{m+1} \left(x+\frac{n}{k}\right) - -\quad \mbox{ for } k=2,4,\dots. - -The Bernoulli polynomials may be obtained as a special case of the Hurwitz zeta function, and thus the identities follow from there. - -The Bernoulli map is a certain simple model of a dissipative dynamical system, describing the effect of a shift operator on an infinite string of coin-flips (the Cantor set). The Bernoulli map is a one-sided version of the closely related Baker's map. The Bernoulli map generalizes to a k-adic version, which acts on infinite strings of k symbols: this is the Bernoulli scheme. The transfer operator $\mathcal{L}_k$ corresponding to the shift operator on the Bernoulli scheme is given by -$$ -[\mathcal{L}_k f](x) = \frac{1}{k}\sum_{n=0}^{k-1}f\left(\frac{x+n}{k}\right) -$$ - -Perhaps not surprisingly, the eigenvectors of this operator are given by the Bernoulli polynomials.
That is, one has that -$$ -\mathcal{L}_k B_m = \frac{1}{k^m}B_m -$$ - -It is the fact that the eigenvalues satisfy $k^{-m}<1$ that marks this as a dissipative system: for a non-dissipative measure-preserving dynamical system, the eigenvalues of the transfer operator lie on the unit circle. - -One may construct a function obeying the multiplication theorem from any totally multiplicative function. Let $f(n)$ be totally multiplicative; that is, $f(mn)=f(m)f(n)$ for any integers m, n. Define its Fourier series as -$$ -g(x)=\sum_{n=1}^\infty f(n) \exp(2\pi inx) -$$ - -Assuming that the sum converges, so that g(x) exists, one then has that it obeys the multiplication theorem; that is, that -$$ -\frac{1}{k}\sum_{n=0}^{k-1}g\left(\frac{x+n}{k}\right)=f(k)g(x) -$$ - -That is, g(x) is an eigenfunction of the Bernoulli transfer operator, with eigenvalue f(k). The multiplication theorem for the Bernoulli polynomials then follows as a special case of the multiplicative function $f(n)=n^{-s}$. The Dirichlet characters are totally multiplicative, and thus can be readily used to obtain additional identities of this form. - -The multiplication theorem over a field of characteristic zero does not close after a finite number of terms, but requires an infinite series to be expressed. Examples include that for the Bessel function $J_\nu(z)$: - - - -\lambda^{-\nu} J_\nu (\lambda z) = - -\sum_{n=0}^\infty \frac{1}{n!} - -\left(\frac{(1-\lambda^2)z}{2}\right)^n - -J_{\nu+n}(z), - - - -where $\lambda$ and $\nu$ may be taken as arbitrary complex numbers. Such characteristic-zero identities follow generally from one of many possible identities on the hypergeometric series. diff --git a/wiki/wikipedia/394.txt b/wiki/wikipedia/394.txt deleted file mode 100644 index c425c6e3ec626b0690b462cff2d14b7b7a1956cb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/394.txt +++ /dev/null @@ -1,43 +0,0 @@ -In computer science and engineering, transactional memory attempts to simplify concurrent programming by allowing a group of load and store instructions to execute in an atomic way. It is a concurrency control mechanism analogous to database transactions for controlling access to shared memory in concurrent computing. - -Transactional memory systems provide a high-level abstraction as an alternative to low-level thread synchronization. This abstraction allows for coordination between concurrent reads and writes of shared data in parallel systems. - -In concurrent programming, synchronization is required when parallel threads attempt to access a shared resource. Low-level thread synchronization constructs such as locks are pessimistic and prohibit threads that are outside a critical section from making any changes. The process of applying and releasing locks often functions as additional overhead in workloads with little conflict among threads. Transactional memory provides optimistic concurrency control by allowing threads to run in parallel with minimal interference. The goal of transactional memory systems is to transparently support regions of code marked as transactions by enforcing atomicity, consistency and isolation. - -A transaction is a collection of operations that can execute and commit changes as long as a conflict is not present. When a conflict is detected, a transaction will revert to its initial state (prior to any changes) and will rerun until all conflicts are removed. Before a successful commit, the outcome of any operation is purely speculative inside a transaction.
In contrast to lock-based synchronization where operations are serialized to prevent data corruption, transactions allow for additional parallelism as long as few operations attempt to modify a shared resource. Since the programmer is not responsible for explicitly identifying locks or the order in which they are acquired, programs that utilize transactional memory cannot produce a deadlock. Hardware transactional memory systems may comprise modifications in processors, cache and bus protocol to support transactions. Speculative values in a transaction must be buffered and remain unseen by other threads until commit time. Large buffers are used to store speculative values while avoiding write propagation through the underlying cache coherence protocol. Traditionally, buffers have been implemented using different structures within the memory hierarchy such as store queues or caches. Buffers further away from the processor, such as the L2 cache, can hold more speculative values (up to a few megabytes). The optimal size of a buffer is still under debate due to the limited use of transactions in commercial programs. - -Sun Microsystems implemented hardware transactional memory and a limited form of speculative multithreading in its high-end Rock processor. This implementation proved that it could be used for lock elision and more complex hybrid transactional memory systems, where transactions are handled with a combination of hardware and software. The Rock processor was canceled in 2009, just before the acquisition by Oracle; while the actual products were never released, a number of prototype systems were available to researchers. - -As of GCC 4.7, an experimental library for transactional memory is available which utilizes a hybrid implementation. The PyPy variant of Python also introduces transactional memory to the language. - -* Hardware: - -** Arm Transactional Memory Extension (TME) - -** Rock processor (canceled by Oracle) - -** Blue Gene/Q processor from IBM (Sequoia supercomputer) - -** IBM zEnterprise EC12, the first commercial server to include transactional memory processor instructions - -** Intel's Transactional Synchronization Extensions (TSX), available in select Haswell-based processors and newer until being removed in Comet Lake - -** IBM POWER8 and 9, removed in Power10 (Power ISA v.3.1) - -* Software: - -** Vega 2 from Azul Systems - -** STM Monad in the Glasgow Haskell Compiler - -** STMX in Common Lisp - -** Refs in Clojure - -** gcc 4.7+ for C/C++ - -** PyPy - -** Part of the picotm Transaction Framework for C - -** The TVar in concurrent-ruby, a concurrency library for Ruby diff --git a/wiki/wikipedia/3940.txt b/wiki/wikipedia/3940.txt deleted file mode 100644 index bd619f5488ad491f3cf4977a9e7ddd4328d554d6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3940.txt +++ /dev/null @@ -1,49 +0,0 @@ -Exportation is a valid rule of replacement in propositional logic. The rule allows conditional statements having conjunctive antecedents to be replaced by statements having conditional consequents and vice versa in logical proofs. It is the rule that: -$$ -((P \land Q) \to R) \Leftrightarrow (P \to (Q \to R)) -$$ - -where "$\Leftrightarrow$" is a metalogical symbol representing "can be replaced in a proof with."
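Because the rule is truth-functional, it can be verified by brute force over all valuations. A minimal Python sketch (our illustration) checks the equivalence on all eight assignments to P, Q, R:

```python
from itertools import product

def implies(a, b):
    # material implication: a -> b is false only when a is true and b is false
    return (not a) or b

# Exportation: ((P and Q) -> R) is equivalent to (P -> (Q -> R))
for P, Q, R in product((False, True), repeat=3):
    lhs = implies(P and Q, R)
    rhs = implies(P, implies(Q, R))
    assert lhs == rhs
print("exportation holds in all 8 valuations")
```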
- -The exportation rule may be written in sequent notation: -$$ -((P \land Q) \to R) \dashv\vdash (P \to (Q \to R)) -$$ - -where $\dashv\vdash$ is a metalogical symbol meaning that $(P \to (Q \to R))$ is a syntactic equivalent of $((P \land Q) \to R)$ in some logical system; - -or in rule form: -$$ -\frac{(P \land Q) \to R}{P \to (Q \to R)} -$$, $\frac{P \to (Q \to R)}{(P \land Q) \to R}.$ - -where the rule is that wherever an instance of "$(P \land Q) \to R$" appears on a line of a proof, it can be replaced with "$P \to (Q \to R)$" and vice versa; - -or as the statement of a truth-functional tautology or theorem of propositional logic: -$$ -((P \land Q) \to R) \leftrightarrow (P \to (Q \to R)) -$$ - -where $P$, $Q$, and $R$ are propositions expressed in some logical system. - -At any time, if P→Q is true, it can be replaced by P→(P∧Q).
    - -One possible case for P→Q is for P to be true and Q to be true; thus P∧Q is also true, and P→(P∧Q) is true.
- Another possible case sets P as false and Q as true. Thus, P∧Q is false and P→(P∧Q) is true, since false→false is true.
- The last case occurs when both P and Q are false. Thus, P∧Q is false and P→(P∧Q) is true. - "It rains and the sun shines" implies that there is a rainbow.
- Thus, if it rains, then "the sun shines implies that there is a rainbow". - -If my car is on, when I switch the gear to D the car starts going. - -If my car is on and I have switched the gear to D, then the car must start going. - -== Proof == - -The following proof uses Material Implication, double negation, De Morgan's Laws, the negation of the conditional statement, the Associative Property of conjunction, the negation of another conditional statement, and double negation again, in that order, to derive the result. - -Exportation is associated with Currying via the Curry–Howard correspondence. diff --git a/wiki/wikipedia/3941.txt b/wiki/wikipedia/3941.txt deleted file mode 100644 index 4b99d644642d819a5e75b85035f75ecf30d2c433..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3941.txt +++ /dev/null @@ -1,9 +0,0 @@ -In computability theory, the mortality problem is a decision problem which can be stated as follows: - -Given a Turing machine, decide whether it halts when run on any configuration (not necessarily a starting one) - -In the statement above, the configuration is a pair (q, w), where q is one of the machine's states (not necessarily its initial state) and w is an infinite sequence of symbols representing the initial content of the tape. Note that while we usually assume that in the starting configuration all but finitely many cells on the tape are blanks, in the mortality problem the tape can have arbitrary content, including infinitely many non-blank symbols written on it. - -Philip K. Hooper proved in 1966 that the mortality problem is undecidable. However, it can be shown that the set of Turing machines which are mortal (i.e. halt on every starting configuration) is recursively enumerable. - -Category:Theory of computation diff --git a/wiki/wikipedia/3942.txt b/wiki/wikipedia/3942.txt deleted file mode 100644 index 3db055e137ddae30d0c83b12390ae822b5d59749..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3942.txt +++ /dev/null @@ -1,11 +0,0 @@ -In linear algebra and matroid theory, Rota's basis conjecture is an unproven conjecture concerning rearrangements of bases, named after Gian-Carlo Rota. It states that, if X is either a vector space of dimension n or more generally a matroid of rank n, with n disjoint bases Bi, then it is possible to arrange the elements of these bases into an n × n matrix in such a way that the rows of the matrix are exactly the given bases and the columns of the matrix are also bases. That is, it should be possible to find a second set of n disjoint bases Ci, each of which consists of one element from each of the bases Bi. - -Rota's basis conjecture has a simple formulation for points in the Euclidean plane: it states that, given three triangles with distinct vertices, with each triangle colored with one of three colors, it must be possible to regroup the nine triangle vertices into three "rainbow" triangles having one vertex of each color. The triangles are all required to be non-degenerate, meaning that they do not have all three vertices on a line. - -To see this as an instance of the basis conjecture, one may use either linear independence of the vectors ($x_{i},y_{i},1$) in a three-dimensional real vector space (where ($x_{i},y_{i}$) are the Cartesian coordinates of the triangle vertices) or equivalently one may use a matroid of rank three in which a set S of points is independent if either |S| ≤ 2 or S forms the three vertices of a non-degenerate triangle.
For this linear algebra and this matroid, the bases are exactly the non-degenerate triangles. Given the three input triangles and the three rainbow triangles, it is possible to arrange the nine vertices into a 3 × 3 matrix in which each row contains the vertices of one of the single-color triangles and each column contains the vertices of one of the rainbow triangles. - -Analogously, for points in three-dimensional Euclidean space, the conjecture states that the sixteen vertices of four non-degenerate tetrahedra of four different colors may be regrouped into four rainbow tetrahedra. - -The statement of Rota's basis conjecture was first published by Huang, crediting it (without citation) to Rota in 1989. The basis conjecture has been proven for paving matroids (for all n) and for the case n ≤ 3 (for all types of matroid). For arbitrary matroids, it is possible to arrange the basis elements into a matrix the first Ω(n) columns of which are bases. The basis conjecture for linear algebras over fields of characteristic zero and for even values of n would follow from another conjecture on Latin squares by Alon and Tarsi. Based on this implication, the conjecture is known to be true for linear algebras over the real numbers for infinitely many values of n. - -In connection with Tverberg's theorem, Bárány conjectured that, for every set of r (d + 1) points in d-dimensional Euclidean space, colored with d + 1 colors in such a way that there are r points of each color, there is a way to partition the points into rainbow simplices (sets of d + 1 points with one point of each color) in such a way that the convex hulls of these sets have a nonempty intersection. For instance, the two-dimensional case (proven by Bárány and Larman) with r = 3 states that, for every set of nine points in the plane, colored with three colors and three points of each color, it is possible to partition the points into three intersecting rainbow triangles, a statement similar to Rota's basis conjecture which states that it is possible to partition the points into three non-degenerate rainbow triangles. The conjecture of Bárány and Larman allows a collinear triple of points to be considered as a rainbow triangle, whereas Rota's basis conjecture disallows this; on the other hand, Rota's basis conjecture does not require the triangles to have a common intersection. Substantial progress on the conjecture of Bárány and Larman was made by Blagojević. diff --git a/wiki/wikipedia/3943.txt b/wiki/wikipedia/3943.txt deleted file mode 100644 index b33a3ba3907141b578fb11d61ba3fc645960b582..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3943.txt +++ /dev/null @@ -1,41 +0,0 @@ -A combinatorial map is a combinatorial object modelling topological structures with subdivided objects. Historically, the concept was introduced informally by J. Edmonds for polyhedral surfaces which are planar graphs. It was given its first definite formal expression under the name "Constellations" by A. Jacques but the concept was already extensively used under the name "rotation" by Gerhard Ringel and J.W.T. Youngs in their famous solution of the Heawood map-coloring problem. The term "constellation" was not retained and instead "combinatorial map" was favored. The concept was later extended to represent higher-dimensional orientable subdivided objects. Combinatorial maps are used as efficient data structures in image representation and processing, in geometrical modeling. This model is related to simplicial complexes and to combinatorial topology. 
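For the planar case described above, the regrouping can be found by brute force. The following Python sketch (our illustration; the three example triangles are arbitrary choices of ours) searches over pairings of the vertices of three colored triangles for three non-degenerate rainbow triangles:

```python
from itertools import permutations

def nondegenerate(p, q, r):
    """True if the three points are not collinear (twice the signed area != 0)."""
    return (q[0] - p[0]) * (r[1] - p[1]) != (q[1] - p[1]) * (r[0] - p[0])

def rainbow_regrouping(t1, t2, t3):
    """Each ti is a list of the 3 vertices of one colored triangle.  Try to
    regroup the nine vertices into three rainbow triangles, one vertex of
    each color per triangle, all non-degenerate."""
    for perm2 in permutations(t2):
        for perm3 in permutations(t3):
            triangles = list(zip(t1, perm2, perm3))
            if all(nondegenerate(*t) for t in triangles):
                return triangles
    return None

# three example triangles (coordinates chosen arbitrarily)
red   = [(0, 0), (4, 0), (0, 4)]
green = [(1, 1), (5, 1), (1, 5)]
blue  = [(2, 0), (6, 2), (2, 6)]
print(rainbow_regrouping(red, green, blue))
```

Since the n = 3 case of the conjecture is proven, the search always succeeds when the three input triangles are non-degenerate.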
Note that combinatorial maps were extended to generalized maps, which also allow representing non-orientable objects like the Möbius strip and the Klein bottle. A combinatorial map is a boundary representation model; it represents an object by its boundaries. - -A combinatorial map is equivalent to a rotation system, as both represent orientably embedded graphs. - -Several applications require a data structure to represent the subdivision of an object. For example, a 2D object can be decomposed into vertices (0-cells), edges (1-cells), and faces (2-cells). More generally, an n-dimensional object is composed of cells of dimension 0 to n. Moreover, it is also often necessary to represent neighboring relations between these cells. - -Thus, we want to describe all the cells of the subdivision, plus all the incidence and adjacency relations between these cells. When all the represented cells are simplexes, a simplicial complex may be used, but when we want to represent any type of cells, we need to use cellular topological models like combinatorial maps or generalized maps. - -The first works about combinatorial maps develop combinatorial representations of graphs on surfaces, which include planar graphs. - -A 2-dimensional combinatorial map (or 2-map) is a triplet M = (D, σ, α) such that: - -* D is a finite set of darts; - -* σ is a permutation on D; - -* α is an involution on D with no fixed point. - -Intuitively, a 2-map corresponds to a planar graph where each edge is subdivided into two darts (sometimes also called half-edges). The permutation σ gives, for each dart, the next dart by turning around the vertex in the positive orientation; the other permutation α gives, for each dart, the other dart of the same edge. - -α allows one to retrieve edges (alpha for arête in French), and σ allows one to retrieve vertices (sigma for sommet in French). We define φ = σ o α which gives, for each dart, the next dart of the same face (phi for face also in French). - -So, there are two ways to represent a combinatorial map depending on whether the permutation is σ or φ (see example below). These two representations are dual to each other: vertices and faces are exchanged.
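A 2-map is convenient to prototype with permutations stored as dictionaries. The sketch below (our illustration, encoding a triangle with darts 1-6; the naming conventions are ours) builds σ and α, derives φ = σ ∘ α, and reads off vertices, edges, and faces as orbits:

```python
def orbits(perm):
    """Decompose a permutation (dict dart -> dart) into its cycles."""
    seen, cycles = set(), []
    for d in perm:
        if d not in seen:
            cycle, x = [], d
            while x not in seen:
                seen.add(x)
                cycle.append(x)
                x = perm[x]
            cycles.append(cycle)
    return cycles

# A triangle as a 2-map: 3 edges, each split into 2 darts.
sigma = {1: 6, 6: 1, 2: 3, 3: 2, 4: 5, 5: 4}  # next dart around each vertex
alpha = {1: 2, 2: 1, 3: 4, 4: 3, 5: 6, 6: 5}  # opposite dart of each edge
phi   = {d: sigma[alpha[d]] for d in alpha}   # phi = sigma o alpha

vertices, edges, faces = orbits(sigma), orbits(alpha), orbits(phi)
print(vertices)  # [[1, 6], [2, 3], [4, 5]]  -- the 3 vertices
print(edges)     # [[1, 2], [3, 4], [5, 6]]  -- the 3 edges
print(faces)     # [[1, 3, 5], [2, 6, 4]]    -- inner and outer face
print(len(vertices) - len(edges) + len(faces))  # Euler characteristic: 2
```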
    - -
- The definition of combinatorial map in any dimension is as follows. - -An n-dimensional combinatorial map (or n-map) is an (n + 1)-tuple M = (D, β1, ..., βn) such that: - -* D is a finite set of darts; - -* β1 is a permutation on D; - -* β2, ..., βn are involutions on D; - -* βi o βj is an involution if i + 2 ≤ j (i, j ∈ { 1, ..., n }). - -An n-dimensional combinatorial map represents the subdivision of a closed orientable n-dimensional space. A dart is an abstract element which is only required to define one-to-one mappings. The last line of this definition fixes constraints which guarantee the topological validity of the represented object: a combinatorial map represents a quasi-manifold subdivision. The initial definition of 2-dimensional combinatorial maps can be retrieved by fixing n = 2 and renaming σ by β1 and α by β2. diff --git a/wiki/wikipedia/3944.txt b/wiki/wikipedia/3944.txt deleted file mode 100644 index f0bd13c6914b4cca8696e198d74ff8bf8bd0d121..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3944.txt +++ /dev/null @@ -1,18 +0,0 @@ -In mathematics, the Cramér–Wold theorem in measure theory states that a Borel probability measure on $\mathbb{R}^k$ is uniquely determined by the totality of its one-dimensional projections. It is used as a method for proving joint convergence results. The theorem is named after Harald Cramér and Herman Ole Andreas Wold. - -Let -$$ - \overline{X}_n = (X_{n1},\dots,X_{nk}) -$$ - -and -$$ - \overline{X} = (X_1,\dots,X_k) -$$ - -be random vectors of dimension k. Then $ \overline{X}_n $ converges in distribution to $ \overline{X} $ if and only if: -$$ - \sum_{i=1}^k t_iX_{ni} \overset{D}{\underset{n\rightarrow\infty}{\rightarrow}} \sum_{i=1}^k t_iX_i. -$$ - -for each $ (t_1,\dots,t_k)\in \mathbb{R}^k $, that is, if every fixed linear combination of the coordinates of $ \overline{X}_n$ converges in distribution to the corresponding linear combination of coordinates of $ \overline{X} $. diff --git a/wiki/wikipedia/3945.txt b/wiki/wikipedia/3945.txt deleted file mode 100644 index 78535c2bbf863f5799128f0584bc6dc1c648a659..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3945.txt +++ /dev/null @@ -1,612 +0,0 @@ -[[File:AM GM inequality animation.gif|thumb|Visual proof that (x + y)^2 ≥ 4xy. Taking square roots and dividing by two gives the AM–GM inequality.]] - -In mathematics, the inequality of arithmetic and geometric means, or more briefly the AM–GM inequality, states that the arithmetic mean of a list of non-negative real numbers is greater than or equal to the geometric mean of the same list; and further, that the two means are equal if and only if every number in the list is the same (in which case they are both that number). - -The simplest non-trivial case – i.e., with more than one variable – for two non-negative numbers x and y, is the statement that -$$ -\frac{x+y}2 \ge \sqrt{xy} -$$ - -with equality if and only if x = y. - -This case can be seen from the fact that the square of a real number is always non-negative (greater than or equal to zero) and from the elementary case $(a \pm b)^2 = a^2 \pm 2ab + b^2$ of the binomial formula: - -\begin{align} - -0 & \le (x-y)^2 \\ - -& = x^2-2xy+y^2 \\ - -& = x^2+2xy+y^2 - 4xy \\ - -& = (x+y)^2 - 4xy. - -\end{align} - -Hence $(x + y)^2 \ge 4xy$, with equality precisely when $(x - y)^2 = 0$, i.e. x = y. The AM–GM inequality then follows from taking the positive square root of both sides and then dividing both sides by 2.
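Before the geometric interpretation, the inequality itself can be spot-checked numerically. A small Python sketch (ours, not part of the article) compares the two means on random positive lists, computing the geometric mean through logarithms for numerical stability:

```python
import math
import random

def am(xs):
    return sum(xs) / len(xs)

def gm(xs):
    # exp of the mean log avoids overflow in the product
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

for _ in range(10_000):
    xs = [random.uniform(0.01, 100) for _ in range(random.randint(2, 8))]
    assert am(xs) >= gm(xs) - 1e-9

xs = [7.3] * 5
assert math.isclose(am(xs), gm(xs))  # equality when all entries coincide
print("AM >= GM held in all trials")
```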
- -For a geometrical interpretation, consider a rectangle with sides of length x and y; hence it has perimeter 2x + 2y and area xy. Similarly, a square with all sides of length √(xy) has the perimeter 4√(xy) and the same area as the rectangle. The simplest non-trivial case of the AM–GM inequality implies for the perimeters that 2x + 2y ≥ 4√(xy), and that only the square has the smallest perimeter amongst all rectangles of equal area. - -Extensions of the AM–GM inequality are available to include weights or generalized means. - -The arithmetic mean, or less precisely the average, of a list of n numbers x1, x2, . . . , xn is the sum of the numbers divided by n: -$$ -\frac{x_1 + x_2 + \cdots + x_n}{n}. -$$ - -The geometric mean is similar, except that it is only defined for a list of nonnegative real numbers, and uses multiplication and a root in place of addition and division: -$$ -\sqrt[n]{x_1 \cdot x_2 \cdots x_n}. -$$ - -If x1, x2, . . . , xn > 0, this is equal to the exponential of the arithmetic mean of the natural logarithms of the numbers: -$$ -\exp \left( \frac{\ln {x_1} + \ln {x_2} + \cdots + \ln {x_n}}{n} \right). -$$ - -Restating the inequality using mathematical notation, we have that for any list of n nonnegative real numbers x1, x2, . . . , xn, -$$ -\frac{x_1 + x_2 + \cdots + x_n}{n} \ge \sqrt[n]{x_1 \cdot x_2 \cdots x_n}, -$$ - -and that equality holds if and only if x1 = x2 = · · · = xn. - -In two dimensions, 2x1 + 2x2 is the perimeter of a rectangle with sides of length x1 and x2. Similarly, 4√(x1x2) is the perimeter of a square with the same area, x1x2, as that rectangle. Thus for n = 2 the AM–GM inequality states that, among all rectangles of a given area, only the square has the smallest perimeter. - -The full inequality is an extension of this idea to n dimensions. Every vertex of an n-dimensional box is connected to n edges. If these edges' lengths are x1, x2, . . . , xn, then x1 + x2 + · · · + xn is the total length of edges incident to the vertex. There are 2^n vertices, so we multiply this by 2^n; since each edge, however, meets two vertices, every edge is counted twice. Therefore, we divide by 2 and conclude that there are 2^(n−1) n edges. There are equally many edges of each length and n lengths; hence there are 2^(n−1) edges of each length and the total of all edge lengths is 2^(n−1)(x1 + x2 + · · · + xn). On the other hand, -$$ -2^{n-1} n \sqrt[n]{x_1 x_2 \cdots x_n} -$$ - -is the total length of edges connected to a vertex on an n-dimensional cube of equal volume, since such a cube has all edges of length $\sqrt[n]{x_1 x_2 \cdots x_n}$. Since the inequality says -$$ -{x_1 + x_2 +\cdots + x_n \over n} \ge \sqrt[n]{x_1 x_2\cdots x_n}, -$$ - -it can be restated by multiplying through by n·2^(n−1) to obtain -$$ -2^{n-1}(x_1 + x_2 + \cdots + x_n) \ge 2^{n-1} n \sqrt[n]{x_1 x_2\cdots x_n} -$$ - -with equality if and only if - -x1 = x2 = · · · = xn. - -Thus the AM–GM inequality states that only the n-cube has the smallest sum of lengths of edges connected to each vertex amongst all n-dimensional boxes with the same volume. - -Consider the function -$$ -f(x,y,z) = \frac{x}{y} + \sqrt{\frac{y}{z}} + \sqrt[3]{\frac{z}{x}} -$$ - -for all positive real numbers x, y and z. Suppose we wish to find the minimal value of this function.
It can be rewritten as: - - - -\begin{align} - -f(x,y,z) - -&= 6 \cdot \frac{ \frac{x}{y} + \frac{1}{2} \sqrt{\frac{y}{z}} + \frac{1}{2} \sqrt{\frac{y}{z}} + \frac{1}{3} \sqrt[3]{\frac{z}{x}} + \frac{1}{3} \sqrt[3]{\frac{z}{x}} + \frac{1}{3} \sqrt[3]{\frac{z}{x}} }{6}\\ - -&=6\cdot\frac{x_1+x_2+x_3+x_4+x_5+x_6}{6} - -\end{align} - -with -$$ - x_1=\frac{x}{y},\qquad x_2=x_3=\frac{1}{2} \sqrt{\frac{y}{z}},\qquad x_4=x_5=x_6=\frac{1}{3} \sqrt[3]{\frac{z}{x}}. -$$ - -Applying the AM–GM inequality for n = 6, we get - - - -\begin{align} - -f(x,y,z) - -&\ge 6 \cdot \sqrt[6]{ \frac{x}{y} \cdot \frac{1}{2} \sqrt{\frac{y}{z}} \cdot \frac{1}{2} \sqrt{\frac{y}{z}} \cdot \frac{1}{3} \sqrt[3]{\frac{z}{x}} \cdot \frac{1}{3} \sqrt[3]{\frac{z}{x}} \cdot \frac{1}{3} \sqrt[3]{\frac{z}{x}} }\\ - -&= 6 \cdot \sqrt[6]{ \frac{1}{2 \cdot 2 \cdot 3 \cdot 3 \cdot 3} \frac{x}{y} \frac{y}{z} \frac{z}{x} }\\ - -&= 2^{2/3} \cdot 3^{1/2}. - -\end{align} - -Further, we know that the two sides are equal exactly when all the terms of the mean are equal: -$$ -f(x,y,z) = 2^{2/3} \cdot 3^{1/2} \quad \mbox{when} \quad \frac{x}{y} = \frac{1}{2} \sqrt{\frac{y}{z}} = \frac{1}{3} \sqrt[3]{\frac{z}{x}}. -$$ - -All the points (x, y, z) satisfying these conditions lie on a half-line starting at the origin and are given by -$$ -(x,y,z)=\biggl(t,\sqrt[3]{2}\sqrt{3}t,\frac{3\sqrt{3}}{2}t\biggr)\quad\mbox{with}\quad t>0. -$$ - -An important practical application in financial mathematics is in computing the rate of return: the annualized return, computed via the geometric mean, is less than the average annual return, computed by the arithmetic mean (or equal if all returns are equal). This is important in analyzing investments, as the average return overstates the cumulative effect. - -Jensen's inequality states that the value of a concave function of an arithmetic mean is greater than or equal to the arithmetic mean of the function's values. Since the logarithm function is concave, we have -$$ -\log \left(\frac { \sum_i x_i}{n} \right) \geq \sum \frac{1}{n} \log x_i = \sum \left( \log x_i^{1/n}\right) = \log \left( \prod x_i^{1/n}\right). -$$ - -Taking antilogs of the far left and far right sides, we have the AM–GM inequality. - -We have to show that -$$ -\frac{x_1+x_2+\cdots+x_n}{n} \ge \sqrt[n]{x_1x_2 \cdots x_n} -$$ - -with equality only when all numbers are equal. If xi ≠ xj, then replacing both xi and xj by - -(xi + xj)/2 will leave the arithmetic mean on the left-hand side unchanged, but will increase the geometric mean on the right-hand side because -$$ -\Bigl(\frac{x_i+x_j}{2}\Bigr)^2-x_ix_j=\Bigl(\frac{x_i-x_j}{2}\Bigr)^2>0 . -$$ - -Thus the right-hand side will be largest when all the xi are equal to the arithmetic mean -$$ -\alpha=\frac{x_1+x_2+\cdots+x_n}{n}, -$$ - -and since this is then the largest value of the right-hand side of the expression, we have -$$ -\frac{x_1+x_2+\cdots+x_n}{n}=\alpha=\sqrt[n]{\alpha\alpha \cdots \alpha}\ge\sqrt[n]{x_1x_2 \cdots x_n}. -$$ - -This is a valid proof for the case n = 2, but the procedure of taking iteratively pairwise averages may fail to produce n equal numbers in the case n ≥ 3. An example of this case is x1 = x2 ≠ x3: Averaging two different numbers produces two equal numbers, but the third one is still different. Therefore, we never actually get an inequality involving the geometric mean of three equal numbers. - -In the general case, the averaging process above does tend towards equal numbers, and so proves the AM–GM inequality.
- -We can see this by noting that, when $x_i>\alpha>x_j$, one of the deviations $x_i-\alpha,x_j-\alpha$ is positive and the other negative, and that if $\alpha$ is the average of all $n$ numbers, we can measure the spread by considering $(x_i-\alpha)^2+(x_j-\alpha)^2$. This term is always positive, and tends to zero under the transform $x_i,x_j\to \frac{x_i+x_j}{2}$: - -Let, wlog, $d_i=x_i-\alpha>0$ and $d_j=x_j-\alpha<0$. - -Then -$$ -2\left(\frac{x_i+x_j}{2}-\alpha\right)^2 -$$ -$$ -=2\left(\frac{x_i-\alpha}{2}+\frac{x_j-\alpha}{2}\right)^2 -$$ -$$ -=2\left(\frac{d_i}{2}+\frac{d_j}{2}\right)^2 -$$ -$$ -=\frac{d_i^2}{2}+\frac{d_j^2}{2}-d_i|d_j| -$$ -$$ -\le\frac{d_i^2+d_j^2}{2} -$$ - -For non-negative real numbers x1, . . . , xn with arithmetic mean α, the AM–GM statement is equivalent to -$$ -\alpha^n\ge x_1 x_2 \cdots x_n -$$ - -with equality if and only if α = xi for all i ∈ {1, . . . , n}. - -For the following proof we apply mathematical induction and only well-known rules of arithmetic. - -Induction basis: For n = 1 the statement is true with equality. - -Induction hypothesis: Suppose that the AM–GM statement holds for all choices of n non-negative real numbers. - -Induction step: Consider n + 1 non-negative real numbers x1, . . . , xn+1. Their arithmetic mean α satisfies -$$ - (n+1)\alpha=\ x_1 + \cdots + x_n + x_{n+1}. -$$ - -If all the xi are equal to α, then we have equality in the AM–GM statement and we are done. In the case where some are not equal to α, there must exist one number that is greater than the arithmetic mean α, and one that is smaller than α. Without loss of generality, we can reorder our xi in order to place these two particular elements at the end: xn > α and xn+1 < α. Then -$$ -x_n - \alpha > 0\qquad \alpha-x_{n+1}>0 -$$ -$$ -\implies (x_n-\alpha)(\alpha-x_{n+1})>0.\qquad(*) -$$ - -Now define y with -$$ -y:=x_n+x_{n+1}-\alpha\ge x_n-\alpha>0, -$$ - -and consider the n numbers x1, . . . , xn–1, y which are all non-negative. Since -$$ -(n+1)\alpha=x_1 + \cdots + x_{n-1} + x_n + x_{n+1}, -$$ - -we have -$$ -n\alpha=x_1 + \cdots + x_{n-1} + \underbrace{x_n+x_{n+1}-\alpha}_{=y}. -$$ - -Thus, α is also the arithmetic mean of the n numbers x1, . . . , xn–1, y and the induction hypothesis implies -$$ -\alpha^{n+1}=\alpha^n\cdot\alpha\ge x_1x_2 \cdots x_{n-1} y\cdot\alpha.\qquad(**) -$$ - -Due to (*) we know that -$$ -(\underbrace{x_n+x_{n+1}-\alpha}_{=y})\alpha-x_nx_{n+1}=(x_n-\alpha)(\alpha-x_{n+1})>0, -$$ - -hence -$$ -y\alpha>x_nx_{n+1},\qquad({*}{*}{*}) -$$ - -in particular α > 0. Therefore, if at least one of the numbers x1, . . . , xn–1 is zero, then we already have strict inequality in (**). Otherwise the right-hand side of (**) is positive and strict inequality is obtained by using the estimate (***) to get a lower bound of the right-hand side of (**). Thus, in both cases we can substitute (***) into (**) to get -$$ -\alpha^{n+1}>x_1x_2 \cdots x_{n-1} x_nx_{n+1}, -$$ - -which completes the proof. - -First of all we shall prove that for real numbers x1 < 1 and x2 > 1 it follows that -$$ - x_1 + x_2 > x_1x_2+1. -$$ - -Indeed, multiplying both sides of the inequality x2 > 1 by 1 – x1 gives -$$ - x_2 - x_1x_2 > 1 - x_1, -$$ - -whence the required inequality is obtained immediately. - -Now, we are going to prove that for positive real numbers x1, . . . , xn satisfying - -x1 . . . xn = 1, there holds -$$ -x_1 + \cdots + x_n \ge n. -$$ - -The equality holds only if x1 = ... = xn = 1. - -Induction basis: For n = 2 the statement is true because of the above property. - -Induction hypothesis: Suppose that the statement is true for all natural numbers up to n – 1.
- -Induction step: Consider positive real numbers x1, . . . , xn satisfying x1 . . . xn = 1. If all of them equal 1, both the inequality and the equality condition hold. Otherwise, since their product is 1, there exists at least one xk < 1, and then there must also be at least one xj > 1. Without loss of generality, we let k = n – 1 and j = n. - -Further, we write the equality x1 . . . xn = 1 in the form (x1 . . . xn–2) (xn–1 xn) = 1. Then, the induction hypothesis, applied to the n – 1 positive numbers x1, . . . , xn–2, xn–1xn, implies -$$ -(x_1 + \cdots + x_{n-2}) + (x_{n-1} x_n ) \ge n - 1. -$$ - -However, taking into account the induction basis, we have - -\begin{align} - -x_1 + \cdots + x_{n-2} + x_{n-1} + x_n & = (x_1 + \cdots + x_{n-2}) + (x_{n-1} + x_n ) - -\\ &> (x_1 + \cdots + x_{n-2}) + x_{n-1} x_n + 1 - -\\ & \ge n, - -\end{align} - -which completes the proof. - -For positive real numbers a1, . . . , an, let us denote -$$ -x_1 = \frac{a_1}{\sqrt[n]{a_1\cdots a_n}}, . . ., x_n = \frac{a_n}{\sqrt[n]{a_1\cdots a_n}}. -$$ - -The numbers x1, . . . , xn satisfy the condition x1 . . . xn = 1. So we have -$$ -\frac{a_1}{\sqrt[n]{a_1\cdots a_n}} + \cdots + \frac{a_n}{\sqrt[n]{a_1\cdots a_n}} \ge n, -$$ - -whence we obtain -$$ -\frac{a_1 + \cdots + a_n}n \ge \sqrt[n]{a_1\cdots a_n}, -$$ - -with the equality holding only for a1 = ... = an. - -The following proof by cases relies directly on well-known rules of arithmetic but employs the rarely used technique of forward-backward-induction. It is essentially from Augustin Louis Cauchy and can be found in his Cours d'analyse. - -If all the terms are equal: -$$ -x_1 = x_2 = \cdots = x_n, -$$ - -then their sum is nx1, so their arithmetic mean is x1; and their product is x1^n, so their geometric mean is x1; therefore, the arithmetic mean and geometric mean are equal, as desired. - -It remains to show that if not all the terms are equal, then the arithmetic mean is greater than the geometric mean. Clearly, this is only possible when n > 1. - -This case is significantly more complex, and we divide it into subcases. - -If n = 2, then we have two terms, x1 and x2, and since (by our assumption) not all terms are equal, we have: - -\begin{align} - -\Bigl(\frac{x_1+x_2}{2}\Bigr)^2-x_1x_2 - -&=\frac14(x_1^2+2x_1x_2+x_2^2)-x_1x_2\\ - -&=\frac14(x_1^2-2x_1x_2+x_2^2)\\ - -&=\Bigl(\frac{x_1-x_2}{2}\Bigr)^2>0, - -\end{align} - -hence - - - -\frac{x_1 + x_2}{2} > \sqrt{x_1 x_2} - -as desired. - -Consider the case where n = 2^k, where k is a positive integer. We proceed by mathematical induction. - -In the base case, k = 1, so n = 2. We have already shown that the inequality holds when n = 2, so we are done. - -Now, suppose that for a given k > 1, we have already shown that the inequality holds for n = 2^(k−1), and we wish to show that it holds for n = 2^k.
To do so, we apply the inequality twice for 2^(k−1) numbers and once for 2 numbers to obtain: - - - -\begin{align} - -\frac{x_1 + x_2 + \cdots + x_{2^k}}{2^k} & {} =\frac{\frac{x_1 + x_2 + \cdots + x_{2^{k-1}}}{2^{k-1}} + \frac{x_{2^{k-1} + 1} + x_{2^{k-1} + 2} + \cdots + x_{2^k}}{2^{k-1}}}{2} \\[7pt] - -& \ge \frac{\sqrt[2^{k-1}]{x_1 x_2 \cdots x_{2^{k-1}}} + \sqrt[2^{k-1}]{x_{2^{k-1} + 1} x_{2^{k-1} + 2} \cdots x_{2^k}}}{2} \\[7pt] - -& \ge \sqrt{\sqrt[2^{k-1}]{x_1 x_2 \cdots x_{2^{k-1}}} \sqrt[2^{k-1}]{x_{2^{k-1} + 1} x_{2^{k-1} + 2} \cdots x_{2^k}}} \\[7pt] - -& = \sqrt[2^k]{x_1 x_2 \cdots x_{2^k}} - -\end{align} - - - -where in the first inequality, the two sides are equal only if -$$ -x_1 = x_2 = \cdots = x_{2^{k-1}} -$$ - -and -$$ -x_{2^{k-1}+1} = x_{2^{k-1}+2} = \cdots = x_{2^k} -$$ - -(in which case the first arithmetic mean and first geometric mean are both equal to x1, and similarly with the second arithmetic mean and second geometric mean); and in the second inequality, the two sides are only equal if the two geometric means are equal. Since not all 2^k numbers are equal, it is not possible for both inequalities to be equalities, so we know that: -$$ -\frac{x_1 + x_2 + \cdots + x_{2^k}}{2^k} > \sqrt[2^k]{x_1 x_2 \cdots x_{2^k}} -$$ - -as desired. - -If n is not a natural power of 2, then it is certainly less than some natural power of 2, since the sequence 2, 4, 8, . . . , 2^k, . . . is unbounded above. Therefore, without loss of generality, let m be some natural power of 2 that is greater than n. - -So, if we have n terms, then let us denote their arithmetic mean by α, and expand our list of terms thus: -$$ -x_{n+1} = x_{n+2} = \cdots = x_m = \alpha. -$$ - -We then have: - - - -\begin{align} - -\alpha & = \frac{x_1 + x_2 + \cdots + x_n}{n} \\[6pt] - -& = \frac{\frac{m}{n} \left( x_1 + x_2 + \cdots + x_n \right)}{m} \\[6pt] - -& = \frac{x_1 + x_2 + \cdots + x_n + \frac{(m-n)}{n} \left( x_1 + x_2 + \cdots + x_n \right)}{m} \\[6pt] - -& = \frac{x_1 + x_2 + \cdots + x_n + \left( m-n \right) \alpha}{m} \\[6pt] - -& = \frac{x_1 + x_2 + \cdots + x_n + x_{n+1} + \cdots + x_m}{m} \\[6pt] - -& \ge \sqrt[m]{x_1 x_2 \cdots x_n x_{n+1} \cdots x_m} \\[6pt] - -& = \sqrt[m]{x_1 x_2 \cdots x_n \alpha^{m-n}}, - -\end{align} - - - -so -$$ -\alpha^m \ge x_1 x_2 \cdots x_n \alpha^{m-n} -$$ - -and -$$ -\alpha \ge \sqrt[n]{x_1 x_2 \cdots x_n} -$$ - -as desired. - -The following proof uses mathematical induction and some basic differential calculus. - -Induction basis: For n = 1 the statement is true with equality. - -Induction hypothesis: Suppose that the AM–GM statement holds for all choices of n non-negative real numbers. - -Induction step: In order to prove the statement for n + 1 non-negative real numbers x1, . . . , xn, xn+1, we need to prove that -$$ -\frac{x_1 + \cdots + x_n + x_{n+1}}{n+1} - ({x_1 \cdots x_n x_{n+1}})^{\frac{1}{n+1}}\ge0 -$$ - -with equality only if all the n + 1 numbers are equal. - -If all numbers are zero, the inequality holds with equality. If some but not all numbers are zero, we have strict inequality. Therefore, we may assume in the following that all n + 1 numbers are positive. - -We consider the last number xn+1 as a variable and define the function -$$ - f(t)=\frac{x_1 + \cdots + x_n + t}{n+1} - ({x_1 \cdots x_n t})^{\frac{1}{n+1}},\qquad t>0. -$$ - -Proving the induction step is equivalent to showing that f(t) ≥ 0 for all t > 0, with f(t) = 0 only if x1, . . . , xn and t are all equal. This can be done by analyzing the critical points of f using some basic calculus.
- -The first derivative of f is given by -$$ -f'(t)=\frac{1}{n+1}-\frac{1}{n+1}({x_1 \cdots x_n})^{\frac{1}{n+1}}t^{-\frac{n}{n+1}},\qquad t>0. -$$ - -A critical point t0 has to satisfy f′(t0) = 0, which means -$$ -({x_1 \cdots x_n})^{\frac{1}{n+1}}t_0^{-\frac{n}{n+1}}=1. -$$ - -After a small rearrangement we get -$$ -t_0^{\frac{n}{n+1}}=({x_1 \cdots x_n})^{\frac{1}{n+1}}, -$$ - -and finally -$$ -t_0=({x_1 \cdots x_n})^{\frac{1}n}, -$$ - -which is the geometric mean of x1, . . . , xn. This is the only critical point of f. Since f′′(t) > 0 for all t > 0, the function f is strictly convex and has a strict global minimum at t0. Next we compute the value of the function at this global minimum: - - - -\begin{align} - -f(t_0) &= \frac{x_1 + \cdots + x_n + ({x_1 \cdots x_n})^{1/n}}{n+1} - ({x_1 \cdots x_n})^{\frac{1}{n+1}}({x_1 \cdots x_n})^{\frac{1}{n(n+1)}}\\ - -&= \frac{x_1 + \cdots + x_n}{n+1} + \frac{1}{n+1}({x_1 \cdots x_n})^{\frac{1}n} - ({x_1 \cdots x_n})^{\frac{1}n}\\ - -&= \frac{x_1 + \cdots + x_n}{n+1} - \frac{n}{n+1}({x_1 \cdots x_n})^{\frac{1}n}\\ - -&= \frac{n}{n+1}\Bigl(\frac{x_1 + \cdots + x_n}n - ({x_1 \cdots x_n})^{\frac{1}n}\Bigr) - -\\ &\ge0, - -\end{align} - -where the final inequality holds due to the induction hypothesis. The hypothesis also says that we can have equality only when x1, . . . , xn are all equal. In this case, their geometric mean t0 has the same value. Hence, unless x1, . . . , xn, xn+1 are all equal, we have f(xn+1) > 0. This completes the proof. - -This technique can be used in the same manner to prove the generalized AM–GM inequality and Cauchy–Schwarz inequality in Euclidean space R^n. - -George Pólya provided a proof similar to what follows. Let f(x) = e^(x–1) – x for all real x, with first derivative f′(x) = e^(x–1) – 1 and second derivative f′′(x) = e^(x–1). Observe that f(1) = 0, f′(1) = 0 and f′′(x) > 0 for all real x, hence f is strictly convex with the absolute minimum at x = 1. Hence x ≤ e^(x–1) for all real x with equality only for x = 1. - -Consider a list of non-negative real numbers x1, x2, . . . , xn. If they are all zero, then the AM–GM inequality holds with equality. Hence we may assume in the following that their arithmetic mean satisfies α > 0. By n-fold application of the above inequality, we obtain that - -\begin{align}{ \frac{x_1}{\alpha} \frac{x_2}{\alpha} \cdots \frac{x_n}{\alpha} } &\le { e^{\frac{x_1}{\alpha} - 1} e^{\frac{x_2}{\alpha} - 1} \cdots e^{\frac{x_n}{\alpha} - 1} }\\ - -& = \exp \Bigl( \frac{x_1}{\alpha} - 1 + \frac{x_2}{\alpha} - 1 + \cdots + \frac{x_n}{\alpha} - 1 \Bigr), \qquad (*) - -\end{align} - -with equality if and only if xi = α for every i ∈ {1, . . . , n}. The argument of the exponential function can be simplified: - -\begin{align} - -\frac{x_1}{\alpha} - 1 + \frac{x_2}{\alpha} - 1 + \cdots + \frac{x_n}{\alpha} - 1 & = \frac{x_1 + x_2 + \cdots + x_n}{\alpha} - n \\ - -& = \frac{n \alpha}{\alpha} - n \\ - -& = 0. - -\end{align} - -Returning to (*), -$$ -\frac{x_1 x_2 \cdots x_n}{\alpha^n} \le e^0 = 1, -$$ - -which produces x1 x2 · · · xn ≤ α^n, hence the result -$$ -\sqrt[n]{x_1 x_2 \cdots x_n} \le \alpha. -$$ - -If any of the $x_i$ are $0$, then there is nothing to prove. So we may assume all the $x_i$ are strictly positive. - -Because the arithmetic and geometric means are homogeneous of degree 1, without loss of generality assume that $\prod_{i=1}^n x_i = 1$. Set $G(x_1,x_2,\ldots,x_n)=\prod_{i=1}^n x_i$, and $F(x_1,x_2,\ldots,x_n) = \frac{1}{n}\sum_{i=1}^n x_i$.
The inequality will be proved (together with the equality case) if we can show that the minimum of $F(x_1,x_2,...,x_n),$ subject to the constraint $G(x_1,x_2,\ldots,x_n) = 1,$ is equal to $1$, and the minimum is only achieved when $x_1 = x_2 = \cdots = x_n = 1$. Let us first show that the constrained minimization problem has a global minimum. - -Set $K = \{(x_1,x_2,\ldots,x_n) \colon 0 \leq x_1,x_2,\ldots,x_n \leq n\}$. Since the intersection $K \cap \{G = 1\}$ is compact, the extreme value theorem guarantees that the minimum of $F(x_1,x_2,...,x_n)$ subject to the constraints $G(x_1,x_2,\ldots,x_n) = 1$ and $ (x_1,x_2,\ldots,x_n) \in K $ is attained at some point inside $K$. On the other hand, observe that if any of the $x_i > n$, then $F(x_1,x_2,\ldots,x_n) > 1 $, while $F(1,1,\ldots,1) = 1$, and $(1,1,\ldots,1) \in K \cap \{G = 1\} $. This means that the minimum inside $K \cap \{G = 1\}$ is in fact a global minimum, since the value of $F$ at any point inside $K \cap \{G = 1\}$ is certainly no smaller than the minimum, and the value of $F$ at any point $(y_1,y_2,\ldots, y_n)$ not inside $K$ is strictly bigger than the value at $(1,1,\ldots,1)$, which is no smaller than the minimum. - -The method of Lagrange multipliers says that the global minimum is attained at a point $(x_1,x_2,\ldots,x_n)$ where the gradient of $F(x_1,x_2,\ldots,x_n)$ is $\lambda$ times the gradient of $G(x_1,x_2,\ldots,x_n)$, for some $\lambda$. We will show that the only point at which this happens is when $x_1 = x_2 = \cdots = x_n = 1$ and $F(x_1,x_2,...,x_n) = 1.$ - -Compute -$$ -\frac{\partial F}{\partial x_i} = \frac{1}{n} -$$ - -and -$$ -\frac{\partial G}{\partial x_i} = \prod_{j \neq i}x_j = \frac{G(x_1,x_2,\ldots,x_n)}{x_i} = \frac{1}{x_i} -$$ - -along the constraint. Setting the gradients proportional to one another therefore gives for each $i$ that $\frac{1}{n} = \frac{\lambda}{x_i},$ and so $n\lambda= x_i.$ Since the left-hand side does not depend on $i$, it follows that $x_1 = x_2 = \cdots = x_n$, and since $G(x_1,x_2,\ldots, x_n) = 1$, it follows that $ x_1 = x_2 = \cdots = x_n = 1$ and $F(x_1,x_2,\ldots,x_n) = 1$, as desired. - -There is a similar inequality for the weighted arithmetic mean and weighted geometric mean. Specifically, let the nonnegative numbers x1, x2, . . . , xn and the nonnegative weights w1, w2, . . . , wn be given. Set w = w1 + w2 + · · · + wn. If w > 0, then the inequality -$$ -\frac{w_1 x_1 + w_2 x_2 + \cdots + w_n x_n}{w} \ge \sqrt[w]{x_1^{w_1} x_2^{w_2} \cdots x_n^{w_n}} -$$ - -holds with equality if and only if all the xk with wk > 0 are equal. Here the convention 00 = 1 is used. - -If all wk = 1, this reduces to the above inequality of arithmetic and geometric means. - -Using the finite form of Jensen's inequality for the natural logarithm, we can prove the inequality between the weighted arithmetic mean and the weighted geometric mean stated above. - -Since an xk with weight wk = 0 has no influence on the inequality, we may assume in the following that all weights are positive. If all xk are equal, then equality holds. Therefore, it remains to prove strict inequality if they are not all equal, which we will assume in the following, too. If at least one xk is zero (but not all), then the weighted geometric mean is zero, while the weighted arithmetic mean is positive, hence strict inequality holds. Therefore, we may assume also that all xk are positive. 
- -Since the natural logarithm is strictly concave, the finite form of Jensen's inequality and the functional equations of the natural logarithm imply - -\begin{align} - -\ln\Bigl(\frac{w_1x_1+\cdots+w_nx_n}w\Bigr) & >\frac{w_1}w\ln x_1+\cdots+\frac{w_n}w\ln x_n \\ - -& =\ln \sqrt[w]{x_1^{w_1} x_2^{w_2} \cdots x_n^{w_n}}. - -\end{align} - -Since the natural logarithm is strictly increasing, - - - -\frac{w_1x_1+\cdots+w_nx_n}w - ->\sqrt[w]{x_1^{w_1} x_2^{w_2} \cdots x_n^{w_n}}. - - - -Most matrix generalizations of the arithmetic–geometric mean inequality apply on the level of unitarily invariant norms, owing to the fact that even if the matrices $A$ and $B$ are positive semi-definite, the matrix $A B$ may not be positive semi-definite and hence may not have a canonical square root. Bhatia and Kittaneh proved that for any unitarily invariant norm $|||\cdot|||$ and positive semi-definite matrices $A$ and $B$ it is the case that - - - -|||AB|||\leq \frac{1}{2}|||A^2 + B^2||| - - - -Later, the same authors proved the stronger inequality that - - - -|||AB||| \leq \frac{1}{4}|||(A+B)^2||| - - - -Finally, it is known for dimension $n=2$ that the following strongest possible matrix generalization of the arithmetic-geometric mean inequality holds, and it was conjectured to hold for all $n$: - - - -|||(AB)^{\frac{1}{2}}|||\leq \frac{1}{2}|||A+B||| - - - -This conjecture was proved by Stephen Drury in 2012. Indeed, he proved -$$ -\sqrt{\sigma_j(AB)}\leq \frac{1}{2}\lambda_j(A+B), \ j=1, \ldots, n. -$$ - -S.W. Drury, On a question of Bhatia and Kittaneh, Linear Algebra Appl. 437 (2012) 1955–1960. - -Other generalizations of the inequality of arithmetic and geometric means include: - -* Muirhead's inequality, - -* Maclaurin's inequality, - -* Generalized mean inequality, - -* Means of complex numbers. diff --git a/wiki/wikipedia/3946.txt b/wiki/wikipedia/3946.txt deleted file mode 100644 index 46050b937cb44492ea13ad3fd6d26f6831c4dcd5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3946.txt +++ /dev/null @@ -1,153 +0,0 @@ -In mathematics, the multinomial theorem describes how to expand a power of a sum in terms of powers of the terms in that sum. It is the generalization of the binomial theorem from binomials to multinomials. - -For any positive integer m and any non-negative integer n, the multinomial formula describes how a sum with m terms expands when raised to an arbitrary power n: - -(x_1 + x_2 + \cdots + x_m)^n - -= \sum_{k_1+k_2+\cdots+k_m=n} {n \choose k_1, k_2, \ldots, k_m} - -\prod_{t=1}^m x_t^{k_t}, - -where - - {n \choose k_1, k_2, \ldots, k_m} - -= \frac{n!}{k_1! k_2! \cdots k_m!} - -is a multinomial coefficient. The sum is taken over all combinations of nonnegative integer indices k1 through km such that the sum of all ki is n. That is, for each term in the expansion, the exponents of the xi must add up to n. Also, as with the binomial theorem, quantities of the form x^0 that appear are taken to equal 1 (even when x equals zero). - -In the case m = 2, this statement reduces to that of the binomial theorem. - -The third power of the trinomial a + b + c is given by -$$ -(a+b+c)^3 = a^3 + b^3 + c^3 + 3 a^2 b + 3 a^2 c + 3 b^2 a + 3 b^2 c + 3 c^2 a + 3 c^2 b + 6 a b c. -$$ - -This can be computed by hand using the distributive property of multiplication over addition, but it can also be done (perhaps more easily) with the multinomial theorem. It is possible to "read off" the multinomial coefficients from the terms by using the multinomial coefficient formula.
For example: -$$ -a^2 b^0 c^1 -$$ has the coefficient ${3 \choose 2, 0, 1} = \frac{3!}{2!\cdot 0!\cdot 1!} = \frac{6}{2 \cdot 1 \cdot 1} = 3.$ -$$ -a^1 b^1 c^1 -$$ has the coefficient ${3 \choose 1, 1, 1} = \frac{3!}{1!\cdot 1!\cdot 1!} = \frac{6}{1 \cdot 1 \cdot 1} = 6.$ - -The statement of the theorem can be written concisely using multiindices: -$$ -(x_1+\cdots+x_m)^n = \sum_{|\alpha|=n}{n \choose \alpha}x^\alpha -$$ - -where - - - -\alpha=(\alpha_1,\alpha_2,\dots,\alpha_m) - - - -and - - - -x^\alpha=x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_m^{\alpha_m} - - - -This proof of the multinomial theorem uses the binomial theorem and induction on m. - -First, for m = 1, both sides equal x1n since there is only one term k1 = n in the sum. For the induction step, suppose the multinomial theorem holds for m. Then - - - -\begin{align} - -& (x_1+x_2+\cdots+x_m+x_{m+1})^n = (x_1+x_2+\cdots+(x_m+x_{m+1}))^n \\[6pt] - -= {} & \sum_{k_1+k_2+\cdots+k_{m-1}+K=n}{n\choose k_1,k_2,\ldots,k_{m-1},K} x_1^{k_1} x_2^{k_2}\cdots x_{m-1}^{k_{m-1}}(x_m+x_{m+1})^K - -\end{align} - - - -by the induction hypothesis. Applying the binomial theorem to the last factor, -$$ - = \sum_{k_1+k_2+\cdots+k_{m-1}+K=n}{n\choose k_1,k_2,\ldots,k_{m-1},K} x_1^{k_1}x_2^{k_2}\cdots x_{m-1}^{k_{m-1}}\sum_{k_m+k_{m+1}=K}{K\choose k_m,k_{m+1}}x_m^{k_m}x_{m+1}^{k_{m+1}} -$$ - - = \sum_{k_1+k_2+\cdots+k_{m-1}+k_m+k_{m+1}=n}{n\choose k_1,k_2,\ldots,k_{m-1},k_m,k_{m+1}} x_1^{k_1}x_2^{k_2}\cdots x_{m-1}^{k_{m-1}}x_m^{k_m}x_{m+1}^{k_{m+1}} - - - -which completes the induction. The last step follows because -$$ -{n\choose k_1,k_2,\ldots,k_{m-1},K}{K\choose k_m,k_{m+1}} = {n\choose k_1,k_2,\ldots,k_{m-1},k_m,k_{m+1}}, -$$ - -as can easily be seen by writing the three coefficients using factorials as follows: -$$ - \frac{n!}{k_1! k_2! \cdots k_{m-1}!K!} \frac{K!}{k_m! k_{m+1}!}=\frac{n!}{k_1! k_2! \cdots k_{m+1}!}. -$$ - -The numbers -$$ - {n \choose k_1, k_2, \ldots, k_m} -$$ - -appearing in the theorem are the multinomial coefficients. They can be expressed in numerous ways, including as a product of binomial coefficients or of factorials: - - - -{n \choose k_1, k_2, \ldots, k_m} = \frac{n!}{k_1! k_2! \cdots k_m!} = {k_1\choose k_1}{k_1+k_2\choose k_2}\cdots{k_1+k_2+\cdots+k_m\choose k_m} - - - -The substitution of xi = 1 for all i into the multinomial theorem - -\sum_{k_1+k_2+\cdots+k_m=n} {n \choose k_1, k_2, \ldots, k_m} x_1^{k_1} x_2^{k_2} \cdots x_m^{k_m} - -= (x_1 + x_2 + \cdots + x_m)^n - -gives immediately that - - - -\sum_{k_1+k_2+\cdots+k_m=n} {n \choose k_1, k_2, \ldots, k_m} = m^n. - - - -The number of terms in a multinomial sum, #n,m, is equal to the number of monomials of degree n on the variables x1, …, xm: - - - -\#_{n,m} = {n+m-1 \choose m-1}. - - - -The count can be performed easily using the method of stars and bars. - -The largest power of a prime $p$ that divides a multinomial coefficient may be computed using a generalization of Kummer's theorem. - -The multinomial coefficients have a direct combinatorial interpretation, as the number of ways of depositing n distinct objects into m distinct bins, with k1 objects in the first bin, k2 objects in the second bin, and so on. - -In statistical mechanics and combinatorics if one has a number distribution of labels then the multinomial coefficients naturally arise from the binomial coefficients. Given a number distribution {ni} on a set of N total items, ni represents the number of items to be given the label i. (In statistical mechanics i is the label of the energy state.) 
- -The number of arrangements is found by - -*Choosing n1 of the total N to be labeled 1. This can be done in $N\choose n_1$ ways. - -*From the remaining N − n1 items choose n2 to label 2. This can be done in $N-n_1 \choose n_2$ ways. - -*From the remaining N − n1 − n2 items choose n3 to label 3. Again, this can be done in $N-n_1-n_2 \choose n_3$ ways. - -Multiplying the number of choices at each step results in: -$$ -{N \choose n_1}{N-n_1\choose n_2}{N-n_1-n_2\choose n_3}\cdots=\frac{N!}{(N-n_1)!n_1!} \cdot \frac{(N-n_1)!}{(N-n_1-n_2)!n_2!} \cdot \frac{(N-n_1-n_2)!}{(N-n_1-n_2-n_3)!n_3!}\cdots. -$$ - -Cancellation results in the formula given above. - -The multinomial coefficient $\binom{n}{k_1, \ldots, k_m}$ is also the number of distinct ways to permute a multiset of n elements, where ki is the multiplicity of the ith element. For example, the number of distinct permutations of the letters of the word MISSISSIPPI, which has 1 M, 4 Is, 4 Ss, and 2 Ps, is -$$ -{11 \choose 1, 4, 4, 2} = \frac{11!}{1! 4! 4! 2!} = 34650. -$$ - -One can use the multinomial theorem to generalize Pascal's triangle or Pascal's pyramid to Pascal's simplex. This provides a quick way to generate a lookup table for multinomial coefficients. diff --git a/wiki/wikipedia/3947.txt b/wiki/wikipedia/3947.txt deleted file mode 100644 index bd95098232b6f83ed2c836ed26a0430013a6a345..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3947.txt +++ /dev/null @@ -1,53 +0,0 @@ -In mathematics, the Weierstrass preparation theorem is a tool for dealing with analytic functions of several complex variables, at a given point P. It states that such a function is, up to multiplication by a function not zero at P, a polynomial in one fixed variable z, which is monic, and whose coefficients of lower degree terms are analytic functions in the remaining variables and zero at P. - -There are also a number of variants of the theorem that extend the idea of factorization in some ring R as u·w, where u is a unit and w is some sort of distinguished Weierstrass polynomial. Carl Siegel has disputed the attribution of the theorem to Weierstrass, saying that it occurred under the current name in some late nineteenth-century Traités d'analyse without justification. - -For one variable, the local form of an analytic function f(z) near 0 is z^k h(z) where h(0) is not 0, and k is the order of the zero of f at 0. This is the result that the preparation theorem generalises. - -We pick out one variable z, which we may assume is first, and write our complex variables as (z, z2, ..., zn). A Weierstrass polynomial W(z) is - -z^k + g_{k-1}z^{k-1} + ... + g_0 - -where gi(z2, ..., zn) is analytic and gi(0, ..., 0) = 0. - -Then the theorem states that for analytic functions f, if - -f(0, ...,0) = 0, - -and - -f(z, z2, ..., zn) - -as a power series has some term only involving z, we can write (locally near (0, ..., 0)) - -f(z, z2, ..., zn) = W(z)h(z, z2, ..., zn) - -with h analytic and h(0, ..., 0) not 0, and W a Weierstrass polynomial. - -This has the immediate consequence that the set of zeros of f, near (0, ..., 0), can be found by fixing any small values of z2, ..., zn and then solving the equation W(z)=0. The corresponding values of z form finitely many continuously-varying branches, whose number equals the degree of W in z. In particular f cannot have an isolated zero.
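As a concrete illustration of the preparation theorem (our own example, not from the original article), take n = 2 and consider, near the origin,
$$
f(z, z_2) = e^{z}(z^2 - z_2) = h(z, z_2)\, W(z), \qquad h(z, z_2) = e^{z}, \quad W(z) = z^2 - z_2.
$$
Here f(0, 0) = 0, the power series of f contains the term z^2 involving only z, h(0, 0) = 1 is not 0, and W is a Weierstrass polynomial of degree 2, since its lower coefficients g_1 = 0 and g_0 = −z_2 vanish at the origin. Fixing a small value of z_2 and solving W(z) = 0 yields the two branches z = ±√z_2, matching the degree of W.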
- -A related result is the Weierstrass division theorem, which states that if f and g are analytic functions, and g is a Weierstrass polynomial of degree N, then there exists a unique pair h and j such that f = gh + j, where j is a polynomial of degree less than N. In fact, many authors prove the Weierstrass preparation theorem as a corollary of the division theorem. It is also possible to prove the division theorem from the preparation theorem, so that the two theorems are actually equivalent. - -The Weierstrass preparation theorem can be used to show that the ring of germs of analytic functions in n variables is a Noetherian ring, which is also referred to as the Rückert basis theorem. - -There is a deeper preparation theorem for smooth functions, due to Bernard Malgrange, called the Malgrange preparation theorem. It also has an associated division theorem, named after John Mather. - -There is an analogous result, also referred to as the Weierstrass preparation theorem, for the ring of formal power series over complete local rings A: for any power series $f = \sum_{n=0}^\infty a_n t^n \in A[[t]]$ such that not all $a_n$ are in the maximal ideal $\mathfrak m$ of A, there is a unique unit u in $A[[t]]$ and a polynomial F of the form $F=t^s + b_{s-1} t^{s-1} + \dots + b_0$ with $b_i \in \mathfrak m$ (a so-called distinguished polynomial) such that -$$ -f = uF. -$$ - -Since $A[[t]]$ is again a complete local ring, the result can be iterated and therefore gives similar factorization results for formal power series in several variables. - -For example, this applies to the ring of integers in a p-adic field. In this case the theorem says that a power series f(z) can always be uniquely factored as π^n·u(z)·p(z), where u(z) is a unit in the ring of power series, p(z) is a distinguished polynomial (monic, with the coefficients of the non-leading terms each in the maximal ideal), and π is a fixed uniformizer. - -An application of the Weierstrass preparation and division theorem for the ring $\mathbf Z_p[[t]]$ (also called the Iwasawa algebra) occurs in Iwasawa theory in the description of finitely generated modules over this ring. - -There is also a Weierstrass preparation theorem for Tate algebras -$$ -T_n(k) = \left \{ \sum_{\nu_1, \dots, \nu_n \ge 0} a_{\nu_1, \dots, \nu_n} X_1^{\nu_1} \cdots X_n^{\nu_n}, |a_{\nu_1, \dots, \nu_n}| \to 0 \text{ for } \nu_1 + \dots +\nu_n \to \infty \right \} -$$ - -over a complete non-archimedean field k. - -These algebras are the basic building blocks of rigid geometry. One application of this form of the Weierstrass preparation theorem is the fact that the rings $T_n(k)$ are Noetherian. diff --git a/wiki/wikipedia/3948.txt b/wiki/wikipedia/3948.txt deleted file mode 100644 index a4f6830d2fdd98cecd46742d278b21869a67d9c9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3948.txt +++ /dev/null @@ -1,29 +0,0 @@ -In programming language theory, the POPLmark challenge (from "Principles of Programming Languages benchmark", formerly Mechanized Metatheory for the Masses!) (Aydemir, 2005) is a set of benchmarks designed to evaluate the state of automated reasoning (or mechanization) in the metatheory of programming languages, and to stimulate discussion and collaboration among a diverse cross section of the formal methods community. Very loosely speaking, the challenge is about measuring how well programs may be proven to match a specification of how they are intended to behave (and the many complex issues that this involves).
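To give a flavour of what mechanized metatheory looks like in practice, here is a toy sketch of our own in Lean 4 (far smaller than the challenge's System F<: problems, with all names invented for this illustration): a miniature type language with a subtyping relation, and a machine-checked proof of its reflexivity.

```lean
-- A miniature type language: Top and arrow types (a toy fragment only).
inductive Ty where
  | top : Ty
  | arrow : Ty → Ty → Ty

-- A declarative subtyping relation: every type is a subtype of Top, and
-- arrows are contravariant in the domain and covariant in the codomain.
inductive Sub : Ty → Ty → Prop where
  | top : ∀ t, Sub t Ty.top
  | arrow : ∀ s₁ s₂ t₁ t₂, Sub t₁ s₁ → Sub s₂ t₂ →
      Sub (Ty.arrow s₁ s₂) (Ty.arrow t₁ t₂)

-- Reflexivity of subtyping, proved by structural induction on the type.
theorem sub_refl : ∀ t : Ty, Sub t t := by
  intro t
  induction t with
  | top => exact Sub.top Ty.top
  | arrow a b iha ihb => exact Sub.arrow a b a b iha ihb
```

The challenge's actual problems, such as transitivity of subtyping in full System F<:, are substantially harder because of bounded quantification and variable binding.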
The challenge was initially proposed by the members of the PL club at the University of Pennsylvania, in association with collaborators around the world. The Workshop on Mechanized Metatheory is the main meeting of researchers participating in the challenge. - -The design of the POPLmark benchmark is guided by features common to reasoning about programming languages. The challenge problems do not require the formalisation of large programming languages, but they do require sophistication in reasoning about: - -; Binding : Most programming languages have some form of binding, ranging in complexity from the simple binders of simply typed lambda calculus to complex, potentially infinite binders needed in the treatment of record patterns. - -; Induction : Properties such as subject reduction and strong normalisation often require complex induction arguments. - -; Reuse : As furthering collaboration is a key aim of the challenge, the solutions are expected to contain reusable components that would allow researchers to share language features and designs without requiring them to start from scratch every time. - -The POPLmark challenge is composed of three parts. Part 1 concerns solely the types of System F<: (System F with subtyping), and has problems such as: - -# Checking that the type system admits transitivity of subtyping. - -# Checking the transitivity of subtyping in the presence of records. - -Part 2 concerns the syntax and semantics of System F<:. It asks for proofs of: - -# Type safety for the pure fragment - -# Type safety in the presence of pattern matching - -Part 3 concerns the usability of the formalisation of System F<:. In particular, the challenge asks for: - -# Simulating and animating the operational semantics - -# Extracting useful algorithms from the formalisations - -Several solutions have been proposed for parts of the POPLmark challenge, using the following tools: Isabelle/HOL, Twelf, Coq, ATS, and Matita. diff --git a/wiki/wikipedia/3949.txt b/wiki/wikipedia/3949.txt deleted file mode 100644 index 4853f5e2d923fea6b68e8bd1af879672d9de2a4d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3949.txt +++ /dev/null @@ -1,105 +0,0 @@ -In propositional logic, double negation is the theorem that states that "If a statement is true, then it is not the case that the statement is not true." This is expressed by saying that a proposition A is logically equivalent to not (not-A), or by the formula A ≡ ~(~A) where the sign ≡ expresses logical equivalence and the sign ~ expresses negation. - -Like the law of the excluded middle, this principle is considered to be a law of thought in classical logic, but it is disallowed by intuitionistic logic. The principle was stated as a theorem of propositional logic by Russell and Whitehead in Principia Mathematica as: -$$ -\mathbf{*4\cdot13}. \ \ \vdash.\ p \ \equiv \ \thicksim(\thicksim p) -$$ - -"This is the principle of double negation, i.e. a proposition is equivalent of the falsehood of its negation." - -Double negation elimination and double negation introduction are two valid rules of replacement. They are the inferences that if A is true, then not not-A is true and its converse, that, if not not-A is true, then A is true. The rule allows one to introduce or eliminate a negation from a formal proof. The rule is based on the equivalence of, for example, It is false that it is not raining. and It is raining.
- -The double negation introduction rule is: - -P $\Rightarrow$ notnotP - -and the double negation elimination rule is: - -notnotP $\Rightarrow$ P - -Here "$\Rightarrow$" is a metalogical symbol representing "can be replaced in a proof with." - -In logics that have both rules, negation is an involution. - -The double negation introduction rule may be written in sequent notation: -$$ -P \vdash \neg \neg P -$$ - -The double negation elimination rule may be written as: -$$ -\neg \neg P \vdash P -$$ - -In rule form: -$$ -\frac{P}{\neg \neg P} -$$ - -and -$$ -\frac{\neg \neg P}{P} -$$ - -or as a tautology (plain propositional calculus sentence): -$$ -P \to \neg \neg P -$$ - -and -$$ -\neg \neg P \to P -$$ - -These can be combined into a single biconditional formula: -$$ - \neg \neg P \leftrightarrow P -$$. - -Since biconditionality is an equivalence relation, any instance of ¬¬A in a well-formed formula can be replaced by A, leaving unchanged the truth-value of the well-formed formula. - -Double negation elimination is a theorem of classical logic, but not of weaker logics such as intuitionistic logic and minimal logic. Double negation introduction is a theorem of both intuitionistic logic and minimal logic, as is $ \neg \neg \neg A \vdash \neg A $. - -Because of their constructive character, a statement such as It's not the case that it's not raining is weaker than It's raining. The latter requires a proof of rain, whereas the former merely requires a proof that rain would not be contradictory. This distinction also arises in natural language in the form of litotes. - -In Hilbert-style deductive systems for propositional logic, double negation is not always taken as an axiom (see list of Hilbert systems), and is rather a theorem. We describe a proof of this theorem in the system of three axioms proposed by Jan Łukasiewicz: - -A1. $\phi \to \left( \psi \to \phi \right) $ - -A2. $\left( \phi \to \left( \psi \rightarrow \xi \right) \right) \to \left( \left( \phi \to \psi \right) \to \left( \phi \to \xi \right) \right)$ - -A3. $\left ( \lnot \phi \to \lnot \psi \right) \to \left( \psi \to \phi \right) $ - -We use the lemma $ p \to p$, which we refer to as (L1), together with the following additional lemma: - -(L2) $p \to ((p \to q) \to q) $ - -We first prove $ \neg \neg p \to p$. For brevity, we denote $ q \to ( r \to q ) $ by φ0. We also repeatedly use the hypothetical syllogism metatheorem as a shorthand for several proof steps. - -(1) $\varphi_0$ (instance of (A1)) - -(2) $(\neg \neg \varphi_0 \to \neg \neg p ) \to (\neg p \to \neg \varphi_0)$ (instance of (A3)) - -(3) $ (\neg p \to \neg \varphi_0) \to (\varphi_0 \to p )$ (instance of (A3)) - -(4) $ (\neg \neg \varphi_0 \to \neg \neg p ) \to (\varphi_0 \to p )$ (from (2) and (3) by the hypothetical syllogism metatheorem) - -(5) $\neg \neg p \to ( \neg \neg \varphi_0 \to \neg \neg p ) $ (instance of (A1)) - -(6) $\neg \neg p \to (\varphi_0 \to p )$ (from (4) and (5) by the hypothetical syllogism metatheorem) - -(7) $\varphi_0 \to ((\varphi_0 \to p) \to p) $ (instance of (L2)) - -(8) $(\varphi_0 \to p) \to p $ (from (1) and (7) by modus ponens) - -(9) $\neg \neg p \to p $ (from (6) and (8) by the hypothetical syllogism metatheorem) - -We now prove $ p \to \neg \neg p $.
- -(1) $\neg\neg\neg p \to \neg p $ (instance of the first part of the theorem we have just proven) - -(2) $ (\neg\neg\neg p \to \neg p) \to (p \to \neg\neg p) $ (instance of (A3)) - -(3) $ p \to \neg \neg p $ (from (1) and (2) by modus ponens) - -And the proof is complete. diff --git a/wiki/wikipedia/395.txt b/wiki/wikipedia/395.txt deleted file mode 100644 index 36e8cd0ea7d49fdf013b3a0b055863f2c634fa9a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/395.txt +++ /dev/null @@ -1,41 +0,0 @@ -In computational complexity theory, the Cook–Levin theorem, also known as Cook's theorem, states that the Boolean satisfiability problem is NP-complete. That is, it is in NP, and any problem in NP can be reduced in polynomial time by a deterministic Turing machine to the Boolean satisfiability problem. - -The theorem is named after Stephen Cook and Leonid Levin. - -An important consequence of this theorem is that if there exists a deterministic polynomial time algorithm for solving Boolean satisfiability, then every NP problem can be solved by a deterministic polynomial time algorithm. The question of whether such an algorithm for Boolean satisfiability exists is thus equivalent to the P versus NP problem, which is widely considered the most important unsolved problem in theoretical computer science. - -The concept of NP-completeness was developed in the late 1960s and early 1970s in parallel by researchers in North America and the USSR. - -In 1971, Stephen Cook published his paper "The complexity of theorem proving procedures" in conference proceedings of the newly founded ACM Symposium on Theory of Computing. Richard Karp's subsequent paper, "Reducibility among combinatorial problems", showed that many further natural problems are also NP-complete and generated renewed interest in the concept. - -There are two parts to proving that the Boolean satisfiability problem (SAT) is NP-complete. One is to show that SAT is an NP problem. The other is to show that every NP problem can be reduced to an instance of a SAT problem by a polynomial-time many-one reduction. - -SAT is in NP because any assignment of Boolean values to Boolean variables that is claimed to satisfy the given expression can be verified in polynomial time by a deterministic Turing machine. (The statements verifiable in polynomial time by a deterministic Turing machine and solvable in polynomial time by a non-deterministic Turing machine are totally equivalent, and the proof can be found in many textbooks, for example Sipser's Introduction to the Theory of Computation, section 7.3., as well as in the Wikipedia article on NP). - -Now suppose that a given problem in NP can be solved by the nondeterministic Turing machine M = (Q, Σ, s, F, δ), where Q is the set of states, Σ is the alphabet of tape symbols, s ∈ Q is the initial state, F ⊆ Q is the set of accepting states, and δ ⊆ ((Q \ F) × Σ) × (Q × Σ × {−1, +1}) is the transition relation. Suppose further that M accepts or rejects an instance of the problem in time p(n) where n is the size of the instance and p is a polynomial function. - -For each input, I, we specify a Boolean expression which is satisfiable if and only if the machine M accepts I. - -The Boolean expression uses the variables Ti,j,k (true when tape cell i contains symbol j at step k of the computation), Hi,k (true when the machine's head is at tape cell i at step k), and Qq,k (true when the machine is in state q at step k). Here, q ∈ Q, −p(n) ≤ i ≤ p(n), j ∈ Σ, and 0 ≤ k ≤ p(n). - -Define the Boolean expression B to be the conjunction, over all −p(n) ≤ i ≤ p(n) and 0 ≤ k ≤ p(n), of sub-expressions stating that the tape initially contains the input I, that the machine starts in state s with its head at cell 0, that each cell holds exactly one symbol and the machine is in exactly one state at each step, that successive configurations are related by the transition relation δ, and that an accepting state is eventually reached. - -If there is an accepting computation for M on input I, then B is satisfiable by assigning Ti,j,k, Hi,k and Qq,k their intended interpretations.
On the other hand, if B is satisfiable, then there is an accepting computation for M on input I that follows the steps indicated by the assignments to the variables. - -There are O(p(n)^2) Boolean variables, each encodable in space O(log p(n)). The number of clauses is O(p(n)^3), so the size of B is O(log(p(n)) p(n)^3). Thus the transformation is certainly a polynomial-time many-one reduction, as required. - -While the above method encodes a non-deterministic Turing machine in complexity $O(\log(p(n))p(n)^3)$, the literature describes more sophisticated approaches in complexity $O(p(n)\log(p(n)))$. The quasilinear result first appeared seven years after Cook's original publication. - -Generalized versions of boolean satisfiability have encodings with stronger bounds still: quantified boolean formulas (QBFs) encode non-deterministic Turing machines in complexity polynomial in the machine's space bound (as opposed to time bound), and dependency quantified boolean formulas (DQBFs) encode non-deterministic Turing machines in complexity logarithmic in the machine's space bound. - -The proof shows that any problem in NP can be reduced in polynomial time (in fact, logarithmic space suffices) to an instance of the Boolean satisfiability problem. This means that if the Boolean satisfiability problem could be solved in polynomial time by a deterministic Turing machine, then all problems in NP could be solved in polynomial time, and so the complexity class NP would be equal to the complexity class P. - -The significance of NP-completeness was made clear by the publication in 1972 of Richard Karp's landmark paper, "Reducibility among combinatorial problems", in which he showed that 21 diverse combinatorial and graph theoretical problems, each infamous for its intractability, are NP-complete. - -Karp showed each of his problems to be NP-complete by reducing another problem (already shown to be NP-complete) to that problem. For example, he showed the problem 3SAT (the Boolean satisfiability problem for expressions in conjunctive normal form with exactly three variables or negations of variables per clause) to be NP-complete by showing how to reduce (in polynomial time) any instance of SAT to an equivalent instance of 3SAT. (First you modify the proof of the Cook–Levin theorem, so that the resulting formula is in conjunctive normal form, then you introduce new variables to split clauses with more than 3 atoms. For example, the clause (A ∨ B ∨ C ∨ D) can be replaced by the conjunction of clauses (A ∨ B ∨ Z) ∧ (¬Z ∨ C ∨ D), where Z is a new variable which will not be used anywhere else in the expression. Clauses with fewer than 3 atoms can be padded; for example, A can be replaced by (A ∨ A ∨ A), and (A ∨ B) can be replaced by (A ∨ B ∨ B).) - -Garey and Johnson presented more than 300 NP-complete problems in their book Computers and Intractability: A Guide to the Theory of NP-Completeness, and new problems are still being discovered to be within that complexity class. - -Although many practical instances of SAT can be solved by heuristic methods, the question of whether there is a deterministic polynomial-time algorithm for SAT (and consequently all other NP-complete problems) is still a famous unsolved problem, despite decades of intense effort by complexity theorists, mathematical logicians, and others. For more details, see the article P versus NP problem.
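The clause-splitting step described in the parenthetical above is mechanical; the following short Python sketch (a hypothetical helper of our own, using the DIMACS convention that a literal is a nonzero integer and −v denotes the negation of variable v) carries it out.

```python
# Convert one CNF clause into equisatisfiable clauses of exactly 3 literals,
# splitting long clauses with fresh variables and padding short ones.

def clause_to_3cnf(clause, next_var):
    """Return (list of 3-literal clauses, next free variable index)."""
    if len(clause) <= 3:
        padded = list(clause) + [clause[-1]] * (3 - len(clause))  # A -> (A v A v A)
        return [padded], next_var
    z = next_var  # fresh variable Z, used nowhere else in the expression
    first = [clause[0], clause[1], z]  # (l1 v l2 v Z)
    rest, next_var = clause_to_3cnf([-z] + list(clause[2:]), next_var + 1)
    return [first] + rest, next_var

# (A v B v C v D) with A=1, B=2, C=3, D=4 becomes (A v B v Z) ^ (~Z v C v D):
clauses, _ = clause_to_3cnf([1, 2, 3, 4], next_var=5)
print(clauses)  # [[1, 2, 5], [-5, 3, 4]]
```

Each fresh variable appears in only two of the resulting clauses, so the transformation preserves satisfiability while growing the formula only linearly.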
diff --git a/wiki/wikipedia/3950.txt b/wiki/wikipedia/3950.txt deleted file mode 100644 index a47192767c25fbe0902ff1ba9861169424b7e8ed..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3950.txt +++ /dev/null @@ -1,19 +0,0 @@ -Anna Lubiw is a computer scientist known for her work in computational geometry and graph theory. She is currently a professor at the University of Waterloo. - -Lubiw received her Ph.D. from the University of Toronto in 1986 under the joint supervision of Rudolf Mathon and Stephen Cook. - -At Waterloo, Lubiw's students have included both Erik Demaine and his father Martin Demaine, with whom she published the first proof of the fold-and-cut theorem in mathematical origami. In graph drawing, Hutton and Lubiw found a polynomial time algorithm for upward planar drawing of graphs with a single source vertex. Other contributions of Lubiw include proving the NP-completeness of finding permutation patterns, and of finding derangements in permutation groups. - -Lubiw was named an ACM Distinguished Member in 2009. - -As well as her academic work, Lubiw is an amateur violinist, and chairs the volunteer council in charge of the University of Waterloo orchestra. She is married to Jeffrey Shallit, also a computer scientist. diff --git a/wiki/wikipedia/3951.txt b/wiki/wikipedia/3951.txt deleted file mode 100644 index 37fc6b5c267e4f8855ebe12924a08fdcef9a64c6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3951.txt +++ /dev/null @@ -1,55 +0,0 @@ -Unrelated-machines scheduling is an optimization problem in computer science and operations research. It is a variant of optimal job scheduling. We need to schedule n jobs J1, J2, ..., Jn on m different machines, such that a certain objective function is optimized (usually, the makespan should be minimized). The time that machine i needs in order to process job j is denoted by pi,j. The term unrelated emphasizes that there is no relation between values of pi,j for different i and j. This is in contrast to two special cases of this problem: uniform-machines scheduling - in which pi,j = pj / si (where si is the speed of machine i), and identical-machines scheduling - in which pi,j = pj (the same run-time on all machines). - -In the standard three-field notation for optimal job scheduling problems, the unrelated-machines variant is denoted by R in the first field. For example, the problem denoted by "R||$C_\max$" is an unrelated-machines scheduling problem with no constraints, where the goal is to minimize the maximum completion time. - -In some variants of the problem, instead of minimizing the maximum completion time, it is desired to minimize the average completion time (averaged over all n jobs); it is denoted by R||$\sum C_i$. More generally, when some jobs are more important than others, it may be desired to minimize a weighted average of the completion time, where each job has a different weight. This is denoted by R||$\sum w_i C_i$. - -In a third variant, the goal is to maximize the minimum completion time, "R||$C_\min$". This variant corresponds to the problem of egalitarian item allocation. - -Minimizing the maximum completion time is NP-hard even for identical machines, by reduction from the partition problem. - -Horowitz and Sahni presented: - -* Exact dynamic programming algorithms for minimizing the maximum completion time on both uniform and unrelated machines.
These algorithms run in exponential time (recall that these problems are all NP-hard). - -* Polynomial-time approximation schemes, which, for any ε>0, attain makespan at most (1+ε)OPT. For minimizing the maximum completion time on two uniform machines, their algorithm runs in time $O(10^{2l} n)$, where $l$ is the smallest integer for which $\epsilon \geq 2\cdot 10^{-l}$. Therefore, the run-time is in $O( n / \epsilon^2)$, so it is an FPTAS. For minimizing the maximum completion time on two unrelated machines, the run-time is $O(10^{l} n^2)$ = $O( n^2 / \epsilon)$. They claim that their algorithms can be easily extended for any number of uniform machines, but do not analyze the run-time in this case. - -Lenstra, Shmoys and Tardos presented a polytime 2-factor approximation algorithm, and proved that no polytime algorithm with approximation factor smaller than 3/2 is possible unless P=NP. Closing the gap between the 2 and the 3/2 is a long-standing open problem. - -Verschae and Wiese presented a different 2-factor approximation algorithm. - -Glass, Potts and Shade compare various local search techniques for minimizing the makespan on unrelated machines. Using computerized simulations, they find that tabu search and simulated annealing perform much better than genetic algorithms. - -Bruno, Coffman and Sethi present an algorithm, running in time $O(\max(m n^2,n^3))$, for minimizing the average completion time on unrelated machines, R||$\sum C_i$. - -Minimizing the weighted average completion time is NP-hard even on identical machines, by reduction from the knapsack problem. - -Bar-Noy, Bar-Yehuda, Freund, Naor and Schieber consider a setting in which, for each job and machine, there is a profit for running this job on that machine. They present a 1/2-approximation for discrete input and a (1-ε)/2-approximation for continuous input. - -Suppose that, instead of "jobs", we have valuable items, and instead of "machines", we have people. Person i values item j at pi,j. We would like to allocate the items to the people, such that the least-happy person is as happy as possible. This problem is equivalent to unrelated-machines scheduling in which the goal is to maximize the minimum completion time. It is better known by the name egalitarian or max-min item allocation. - -A natural way to formulate the problem as a linear program is called the Lenstra–Shmoys–Tardos linear program (LST LP). For each machine i and job j, define a variable $z_{i,j}$, which equals 1 iff machine i processes job j, and 0 otherwise. Then, for a target makespan T, the LP constraints are: - -* $\sum_{i=1}^m z_{i,j} = 1$ for every job j in 1,...,n; - -* $\sum_{j=1}^n z_{i,j}\cdot p_{i,j} \leq T$ for every machine i in 1,...,m; - -* $z_{i,j} \in \{0,1\}$ for every i, j. - -Relaxing the integer constraints gives a linear program with size polynomial in the input. The solution of the relaxed problem can be rounded to obtain a 2-approximation to the problem. - -Another LP formulation is the configuration linear program. For each machine i, there are finitely many subsets of jobs that can be processed by machine i in time at most T. Each such subset is called a configuration for machine i. Denote by Ci(T) the set of all configurations for machine i. For each machine i and configuration c in Ci(T), define a variable $x_{i,c}$ which equals 1 iff the actual configuration used in machine i is c, and 0 otherwise.
Then, the LP constraints are: - -* $\sum_{c\in C_i(T)}x_{i,c} = 1$ for every machine i in 1,...,m; - -* $\sum_{i=1}^m \sum_{c\ni j, c\in C_i(T)}x_{i,c} = 1$ for every job j in 1,...,n; - -* $x_{i,c} \in \{0,1\}$ for every machine i and every configuration c in $C_i(T)$. - -Note that the number of configurations is usually exponential in the size of the problem, so the size of the configuration LP is exponential. However, in some cases it is possible to bound the number of possible configurations, and therefore find an approximate solution in polynomial time. - -There is a special case in which pi,j is either 1 or infinity. In other words, each job can be processed on a subset of allowed machines, and its run-time on each of these machines is 1. This variant is sometimes denoted by "P|pj=1,Mj|$C_\max$". It can be solved in polynomial time. - -Kim, Kim, Jang and Chen extend the problem by allowing each job to have a setup time, which depends on the job but not on the machine. They present a solution using simulated annealing. Vallada and Ruiz present a solution using a genetic algorithm. - -Caragiannis extends the problem in a different way, by assuming that the jobs are owned by selfish agents (see Truthful job scheduling). diff --git a/wiki/wikipedia/3952.txt deleted file mode 100644 index 747284dbe3ab7b41fcdd394ee991bf0e785e918a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3952.txt +++ /dev/null @@ -1,59 +0,0 @@ -In mathematics, the Weierstrass M-test is a test for determining whether an infinite series of functions converges uniformly and absolutely. It applies to series whose terms are bounded functions with real or complex values, and is analogous to the comparison test for determining the convergence of series of real or complex numbers. It is named after the German mathematician Karl Weierstrass (1815-1897). - -Weierstrass M-test. - -Suppose that (fn) is a sequence of real- or complex-valued functions defined on a set A, and that there is a sequence of non-negative numbers (Mn) satisfying the conditions - -* $|f_n(x)|\leq M_n$ for all $n \geq 1$ and all $x \in A$, and - -* $\sum_{n=1}^{\infty} M_n $ converges. - -Then the series -$$ -\sum_{n=1}^{\infty} f_n (x) -$$ - -converges absolutely and uniformly on A. - -The result is often used in combination with the uniform limit theorem. Together they say that if, in addition to the above conditions, the set A is a topological space and the functions fn are continuous on A, then the series converges to a continuous function. - -To prove the theorem, consider the sequence of partial sums -$$ -S_{n}(x) = \sum_{k=1}^{n}f_{k}(x). -$$ - -Since the series $\sum_{n=1}^{\infty}M_{n}$ converges and Mn ≥ 0 for every n, the Cauchy criterion gives -$$ -\forall \varepsilon>0 : \exists N : \forall m>n>N : \sum_{k=n+1}^{m}M_{k}<\varepsilon. -$$ - -For the chosen N, -$$ -\forall x \in A : \forall m> n> N -$$ -$$ -\left|S_{m}(x)-S_{n}(x)\right|=\left|\sum_{k=n+1}^{m}f_{k}(x)\right|\overset{(1)}{\leq} \sum_{k=n+1}^{m}|f_{k}(x)|\leq \sum_{k=n+1}^{m}M_{k}<\varepsilon . -$$ - -(Inequality (1) follows from the triangle inequality.) - -The sequence Sn(x) is thus a Cauchy sequence in R or C, and by completeness, it converges to some number S(x) that depends on x. For n > N we can write -$$ -\left|S(x) - S_{n}(x)\right|=\left|\lim_{m\to\infty} S_{m}(x) - S_{n}(x)\right|=\lim_{m\to\infty} \left|S_{m}(x) - S_{n}(x)\right|\leq\varepsilon . -$$ - -Since N does not depend on x, this means that the sequence Sn of partial sums converges uniformly to the function S.
Hence, by definition, the series $\sum_{k=1}^{\infty}f_{k}(x)$ converges uniformly. - -Analogously, one can prove that $\sum_{k=1}^{\infty}|f_{k}(x)|$ converges uniformly. - -A more general version of the Weierstrass M-test holds if the common codomain of the functions (fn) is a Banach space, in which case the premise -$$ -|f_n(x)|\leq M_n -$$ - -is to be replaced by -$$ -\|f_n(x)\|\leq M_n -$$, - -where $\|\cdot\|$ is the norm on the Banach space. For an example of the use of this test on a Banach space, see the article Fréchet derivative. diff --git a/wiki/wikipedia/3953.txt b/wiki/wikipedia/3953.txt deleted file mode 100644 index 5adf1be60a56c8c59e5164cd45e62c1a03e711d1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3953.txt +++ /dev/null @@ -1,3 +0,0 @@ -The multi-fragment (MF) algorithm is a heuristic or approximation algorithm for the travelling salesman problem (TSP) (and related problems). This algorithm is also sometimes called the "greedy algorithm" for the TSP. - -The algorithm builds a tour for the traveling salesman one edge at a time and thus maintains multiple tour fragments, each of which is a simple path in the complete graph of cities. At each stage, the algorithm selects the edge of minimal cost that either creates a new fragment, extends one of the existing paths or creates a cycle of length equal to the number of cities. diff --git a/wiki/wikipedia/3954.txt b/wiki/wikipedia/3954.txt deleted file mode 100644 index 041f3eac83f9ca0fb4e26f69ca792e8d0ced05d9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3954.txt +++ /dev/null @@ -1,3 +0,0 @@ -In symplectic topology and dynamical systems, Poincaré–Birkhoff theorem (also known as Poincaré–Birkhoff fixed point theorem and Poincaré's last geometric theorem) states that every area-preserving, orientation-preserving homeomorphism of an annulus that rotates the two boundaries in opposite directions has at least two fixed points. - -The Poincaré–Birkhoff theorem was discovered by Henri Poincaré, who published it in a 1912 paper titled "Sur un théorème de géométrie", and proved it for some special cases. The general case was proved by George D. Birkhoff in his 1913 paper titled "Proof of Poincaré's geometric theorem". diff --git a/wiki/wikipedia/3955.txt b/wiki/wikipedia/3955.txt deleted file mode 100644 index c4cccc31c5c83a2ab6aed774b744f37b37c3b86a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3955.txt +++ /dev/null @@ -1,57 +0,0 @@ -Microsoft Transaction Server (MTS) was software that provided services to Component Object Model (COM) software components, to make it easier to create large distributed applications. The major services provided by MTS were automated transaction management, instance management (or just-in-time activation) and role-based security. MTS is considered to be the first major software to implement aspect-oriented programming. - -MTS was first offered in the Windows NT 4.0 Option Pack. In Windows 2000, MTS was enhanced and better integrated with the operating system and COM, and was renamed COM+. COM+ added object pooling, loosely-coupled events and user-defined simple transactions (compensating resource managers) to the features of MTS. - -COM+ is still provided with Windows Server 2003 and Windows Server 2008, and the Microsoft .NET Framework provides a wrapper for COM+ in the EnterpriseServices namespace. The Windows Communication Foundation (WCF) provides a way of calling COM+ applications with web services. 
However, COM+ is based on COM, and Microsoft's strategic software architecture is now web services and .NET, not COM. There are pure .NET-based alternatives for many of the features provided by COM+, and in the long term it is likely COM+ will be phased out. - -A basic MTS architecture comprises: - -* the MTS Executive (mtxex.dll) - -* the Factory Wrappers and Context Wrappers for each component - -* the MTS Server Component - -* MTS clients - -* auxiliary systems like: - -**COM runtime services - -**the Service Control Manager (SCM) - -**the Microsoft Distributed Transaction Coordinator (MS-DTC) - -**the Microsoft Message Queue (MSMQ) - -**the COM-Transaction Integrator (COM-TI) - -**etc. - -COM components that run under the control of the MTS Executive are called MTS components. In COM+, they are referred to as COM+ Applications. MTS components are in-process DLLs. MTS components are deployed and run in the MTS Executive which manages them. As with other COM components, an object implementing the IClassFactory interface serves as a Factory Object to create new instances of these components. - -MTS inserts a Factory Wrapper Object and an Object Wrapper between the actual MTS object and its client. This interposing of wrappers is called interception. Whenever the client makes a call to the MTS component, the wrappers (Factory and Object) intercept the call and inject their own instance-management algorithm called the Just-In-Time Activation (JITA) into the call. The wrapper then makes this call on the actual MTS component. Interception was considered difficult at the time due to a lack of extensible metadata. - -In addition, based on the information from the component's deployment properties, transaction logic and security checks also take place in these wrapper objects. - -For every MTS-hosted object, there also exists a Context Object, which implements the IObjectContext interface. The Context Object maintains specific information about that object, such as its transactional information, security information and deployment information. Methods in the MTS component call into the Context Object through its IObjectContext interface. - -MTS does not create the actual middle-tier MTS object until the call from a client reaches the container. Since the object is not running all the time, it does not use up a lot of system resources (even though an object wrapper and skeleton for the object do persist). - -As soon as the call comes in from the client, the MTS wrapper process activates its Instance Management algorithm called JITA. The actual MTS object is created "just in time" to service the request from the wrapper. And when the request is serviced and the reply is sent back to the client, the component either calls SetComplete()/SetAbort(), or its transaction ends, or the client calls Release() on the reference to the object, and the actual MTS object is destroyed. In short, MTS uses a stateless component model. - -Generally, when a client requests services from a typical MTS component, the following sequence occurs on the server : - -# acquire a database connection - -# read the component's state from either the Shared Property Manager or from an already existing object or from the client - -# perform the business logic - -# write the component's changed state, if any, back to the database - -# close and release the database connection - -# vote on the result of the transaction. MTS components do not directly commit transactions, rather they communicate their success or failure to MTS. 
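The lifecycle above is easiest to see in code. The following is a toy Python model of the interception/JITA idea only; the real MTS API is COM-based (IObjectContext with SetComplete/SetAbort), and every name below (Context, Wrapper, Account) is invented for illustration:

```python
# Toy model of MTS-style interception, just-in-time activation, and
# transaction voting. Illustration only: the real MTS API is COM-based
# (IObjectContext, SetComplete/SetAbort); every name below is invented.

class Context:
    """Per-object context object; records the component's transaction vote."""
    def __init__(self):
        self.vote = None                      # True = complete, False = abort

    def set_complete(self):
        self.vote = True

    def set_abort(self):
        self.vote = False

class Wrapper:
    """Plays the role of the factory/object wrappers that intercept calls."""
    def __init__(self, component_cls):
        self.component_cls = component_cls

    def call(self, method, *args):
        ctx = Context()
        component = self.component_cls(ctx)   # created "just in time"
        try:
            return getattr(component, method)(*args)
        finally:
            del component                     # stateless: destroyed per call
            print("commit" if ctx.vote else "abort")

class Account:
    """A hypothetical business component that votes on its own outcome."""
    def __init__(self, ctx):
        self.ctx = ctx

    def debit(self, amount):
        (self.ctx.set_complete if amount <= 100 else self.ctx.set_abort)()
        return amount if amount <= 100 else 0

Wrapper(Account).call("debit", 42)            # prints "commit", returns 42
```

Note how the wrapper, not the component, decides the transaction outcome, which is the interception point the article describes.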
- -It is thus possible to implement high-latency resources as asynchronous resource pools, which should take advantage of the stateless JIT activation afforded by the middleware server. diff --git a/wiki/wikipedia/3956.txt deleted file mode 100644 index 041f3eac83f9ca0fb4e26f69ca792e8d0ced05d9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3956.txt +++ /dev/null @@ -1,139 +0,0 @@ -In geometry, the Minkowski sum (also known as dilation) of two sets of position vectors A and B in Euclidean space is formed by adding each vector in A to each vector in B, i.e., the set -$$ -A + B = \{\mathbf{a}+\mathbf{b}|\mathbf{a}\in A,\ \mathbf{b}\in B\}. -$$ - -Analogously, the Minkowski difference (or geometric difference) is defined using the complement operation as -$$ -A - B = \left(A^c + (-B)\right)^c. -$$ - -In general $A - B \ne A + (-B)$. For instance, in a one-dimensional case $A = [-2, 2]$ and $B = [-1, 1]$ the Minkowski difference $A - B = [-1, 1]$, whereas $A + (-B) = A + B = [-3, 3].$ - -In a two-dimensional case, Minkowski difference is closely related to erosion (morphology) in image processing. - -The concept is named for Hermann Minkowski. - -(Figure: Minkowski addition of sets. The sum of the squares $Q_1 = [0, 1]^2$ and $Q_2 = [1, 2]^2$ is the square $Q_1 + Q_2 = [1, 3]^2.$) - -For example, if we have two sets A and B, each consisting of three position vectors (informally, three points), representing the vertices of two triangles in $\mathbb{R}^2$, with coordinates -$$ -A = \{(1,0), (0,1), (0,-1)\} -$$ - -and -$$ -B = \{(0,0), (1,1), (1,-1)\} -$$ - -then their Minkowski sum is -$$ -A + B = \{(1,0), (2,1), (2,-1), (0,1), (1,2), (1,0), (0,-1), (1,0), (1,-2)\} -$$ - -which, as a set of seven distinct points, comprises the six vertices of a hexagon together with the point (1,0) in its interior. - -For Minkowski addition, the zero set, $\{ 0 \},$ containing only the zero vector, 0, is an identity element: for every subset S of a vector space, -$$ -S + \{0\} = S. -$$ - -The empty set is important in Minkowski addition, because the empty set annihilates every other subset: for every subset S of a vector space, its sum with the empty set is empty: -$$ -S + \emptyset = \emptyset. -$$ - -For another example, consider the Minkowski sums of open or closed balls in the field $\mathbb{K},$ which is either the real numbers $\R$ or complex numbers $\C.$ If $B_r := \{ s \in \mathbb{K} : |s| \leq r \}$ is the closed ball of radius $r \in [0, \infty]$ centered at $0$ in $\mathbb{K}$ then for any $r, s \in [0, \infty],$ $B_r + B_s = B_{r+s}$ and also $c B_r = B_{|c|r}$ will hold for any scalar $c \in \mathbb{K}$ such that the product $|c|r$ is defined (which happens when $c \neq 0$ or $r \neq \infty$). If $r, s,$ and $c$ are all non-zero then the same equalities would still hold had $B_r$ been defined to be the open ball, rather than the closed ball, centered at $0$ (the non-zero assumption is needed because the open ball of radius $0$ is the empty set). The Minkowski sum of a closed ball and an open ball is an open ball. More generally, the Minkowski sum of an open subset with any other set will be an open subset.
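For finite point sets the definition is directly computable. A minimal Python sketch reproducing the triangle example above (nothing assumed beyond the definition itself):

```python
# Minkowski sum of two finite point sets, using the triangle example above.

def minkowski_sum(A, B):
    """All pairwise vector sums; duplicates collapse because we build a set."""
    return {(a1 + b1, a2 + b2) for (a1, a2) in A for (b1, b2) in B}

A = {(1, 0), (0, 1), (0, -1)}
B = {(0, 0), (1, 1), (1, -1)}
print(sorted(minkowski_sum(A, B)))
# [(0, -1), (0, 1), (1, -2), (1, 0), (1, 2), (2, -1), (2, 1)]
# Six of these are the hexagon's vertices; (1, 0) arises three times as a
# pairwise sum but appears only once in the resulting set.
```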
- -If $G = \left\{ \left(x, 1/x\right) : 0 \neq x \in \R \right\}$ is the graph of $f(x) = \frac{1}{x}$ and if and $Y = \{ 0 \} \times \R$ is the $y$-axis in $X = \R^2$ then the Minkowski sum of these two closed subsets of the plane is the open set $G + Y = \{ (x, y) \in \R^2 : x \neq 0 \} = \R^2 \setminus Y$ consisting of everything other than the $y$-axis. This shows that the Minkowski sum of two closed sets is not necessarily a closed set. However, the Minkowski sum of two closed subsets will be a closed subset if at least one of these sets is also a compact subset. - -Minkowski addition behaves well with respect to the operation of taking convex hulls, as shown by the following proposition: - -For all non-empty subsets $S_1$ and $S_2$ of a real vector space, the convex hull of their Minkowski sum is the Minkowski sum of their convex hulls: -$$ -\operatorname{Conv}(S_1 + S_2) = \operatorname{Conv}(S_1) + \operatorname{Conv}(S_2). -$$ - -This result holds more generally for any finite collection of non-empty sets: -$$ -\operatorname{Conv}\left(\sum{S_n}\right) = \sum\operatorname{Conv}(S_n). -$$ - -In mathematical terminology, the operations of Minkowski summation and of forming convex hulls are commuting operations. - -If $S$ is a convex set then $\mu S + \lambda S$ is also a convex set; furthermore -$$ -\mu S + \lambda S = (\mu + \lambda)S -$$ - -for every $\mu,\lambda \geq 0$. Conversely, if this "distributive property" holds for all non-negative real numbers, $\mu, \lambda$, then the set is convex. - -The figure to the right shows an example of a non-convex set for which $A + A \supsetneq 2 A.$ - -An example in $1$ dimension is: $B = [1, 2] \cup [4, 5].$ It can be easily calculated that $2 B = [2, 4] \cup [8, 10]$ but $B + B = [2, 4] \cup [5, 7] \cup [8, 10],$ hence again $B + B \supsetneq 2 B.$ - -Minkowski sums act linearly on the perimeter of two-dimensional convex bodies: the perimeter of the sum equals the sum of perimeters. Additionally, if $K$ is (the interior of) a curve of constant width, then the Minkowski sum of $K$ and of its $180^{\circ}$ rotation is a disk. These two facts can be combined to give a short proof of Barbier's theorem on the perimeter of curves of constant width. - -Minkowski addition plays a central role in mathematical morphology. It arises in the brush-and-stroke paradigm of 2D computer graphics (with various uses, notably by Donald E. Knuth in Metafont), and as the solid sweep operation of 3D computer graphics. It has also been shown to be closely connected to the Earth mover's distance, and by extension, optimal transport. - -Minkowski sums are used in motion planning of an object among obstacles. They are used for the computation of the configuration space, which is the set of all admissible positions of the object. In the simple model of translational motion of an object in the plane, where the position of an object may be uniquely specified by the position of a fixed point of this object, the configuration space are the Minkowski sum of the set of obstacles and the movable object placed at the origin and rotated 180 degrees. - -In numerical control machining, the programming of the NC tool exploits the fact that the Minkowski sum of the cutting piece with its trajectory gives the shape of the cut in the material. - -In OpenSCAD Minkowski sums are used to outline a shape with another shape creating a composite of both shapes. 
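The motion-planning use just described is also easy to sketch for finite point clouds: the set of forbidden positions of a translating robot's reference point is the Minkowski sum of the obstacle with the robot reflected through the origin. A small illustrative sketch (the shapes and helper names are chosen here, and real planners work with polygons rather than point clouds):

```python
# Configuration-space obstacle for a translating robot, as described above:
# forbidden reference positions = O ⊕ (-R). Finite point clouds only, for
# illustration; names and shapes are invented.

def minkowski_sum(A, B):
    return {(ax + bx, ay + by) for (ax, ay) in A for (bx, by) in B}

def reflect(R):
    """The robot 'rotated 180 degrees', i.e. reflected through the origin."""
    return {(-x, -y) for (x, y) in R}

obstacle = {(3, 3), (4, 3), (4, 4), (3, 4)}   # corners of a unit square
robot = {(0, 0), (1, 0), (0, 1)}              # reference point at (0, 0)

c_obstacle = minkowski_sum(obstacle, reflect(robot))
print(sorted(c_obstacle))
# If the reference point enters c_obstacle, the robot overlaps the obstacle.
```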
- -Minkowski sums are also frequently used in aggregation theory when individual objects to be aggregated are characterized via sets. - -Minkowski sums, specifically Minkowski differences, are often used alongside GJK algorithms to compute collision detection for convex hulls in physics engines. - -(Figure: Minkowski addition and convex hulls. The sixteen dark-red points (on the right) form the Minkowski sum of the four non-convex sets (on the left), each of which consists of a pair of red points. Their convex hulls (shaded pink) contain plus-signs: the right plus-sign is the sum of the left plus-signs.) - -For two convex polygons P and Q in the plane with m and n vertices, their Minkowski sum is a convex polygon with at most m + n vertices and may be computed in time O(m + n) by a very simple procedure, which may be informally described as follows. Assume that the edges of each polygon are given in the direction, say, counterclockwise, along the polygon boundary. Then it is easily seen that these edges of the convex polygon are ordered by polar angle. Let us merge the ordered sequences of the directed edges from P and Q into a single ordered sequence S. Imagine that these edges are solid arrows which can be moved freely while keeping them parallel to their original direction. Assemble these arrows in the order of the sequence S by attaching the tail of the next arrow to the head of the previous arrow. It turns out that the resulting polygonal chain will in fact be a convex polygon which is the Minkowski sum of P and Q. - -If one polygon is convex and another one is not, the complexity of their Minkowski sum is O(nm). If both of them are nonconvex, their Minkowski sum complexity is O((mn)^2). - -There is also a notion of the essential Minkowski sum $+_{\mathrm{e}}$ of two subsets of Euclidean space. The usual Minkowski sum can be written as -$$ -A + B = \left\{ z \in \mathbb{R}^{n} | A \cap (z - B) \neq \emptyset \right\}.
-$$ - -Thus, the essential Minkowski sum is defined by -$$ -A +_{\mathrm{e}} B = \left\{ z \in \mathbb{R}^{n} | \mu \left[A \cap (z - B)\right] > 0 \right\}, -$$ - -where μ denotes the n-dimensional Lebesgue measure. The reason for the term "essential" is the following property of indicator functions: while -$$ -1_{A + B} (z) = \sup_{x \in \mathbb{R}^{n}} 1_{A} (x) 1_{B} (z - x), -$$ - -it can be seen that -$$ -1_{A +_{\mathrm{e}} B} (z) = \mathop{\mathrm{esssup}}_{x \in \mathbb{R}^{n}} 1_{A} (x) 1_{B} (z - x), -$$ - -where "ess sup" denotes the essential supremum. - -For K and L compact convex subsets in $\mathbb{R}^n$, the Minkowski sum can be described by the support function of the convex sets: -$$ -h_{K+L} = h_K + h_L. -$$ - -For p ≥ 1, Firey defined the Lp Minkowski sum $K +_p L$ of compact convex sets K and L in $\mathbb{R}^n$ containing the origin as -$$ -h_{K +_p L}^p = h_K^p + h_L^p. -$$ - -By the Minkowski inequality, the function $h_{K +_p L} = (h_K^p + h_L^p)^{1/p}$ is again positive homogeneous and convex, and hence the support function of a compact convex set. This definition is fundamental in the Lp Brunn-Minkowski theory. diff --git a/wiki/wikipedia/3957.txt deleted file mode 100644 index e11292fbe8e5c2a47fd3241fd5fc2a47ee939d39..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3957.txt +++ /dev/null @@ -1 +0,0 @@ -In algebraic number theory, the Gras conjecture relates the p-parts of the Galois eigenspaces of an ideal class group to the group of global units modulo cyclotomic units. It was proved by Mazur and Wiles as a corollary of their work on the main conjecture of Iwasawa theory. Kolyvagin later gave a simpler proof using Euler systems. diff --git a/wiki/wikipedia/3958.txt deleted file mode 100644 index 88832c5a73f410d89841728179384f60ee3d6d4b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3958.txt +++ /dev/null @@ -1,5 +0,0 @@ -Jump-and-Walk is an algorithm for point location in triangulations (though most of the theoretical analysis was performed in 2D and 3D random Delaunay triangulations). Surprisingly, the algorithm needs no preprocessing or complex data structures except some simple representation of the triangulation itself. Its predecessors, due to Lawson (1977) and Green and Sibson (1978), pick a random starting point S and then walk from S toward the query point Q one triangle at a time; no theoretical analysis was known for these predecessors until after the mid-1990s. - -Jump-and-Walk picks a small group of sample points and starts the walk from the sample point closest to Q, continuing until the simplex containing Q is found. The algorithm was folklore in practice for some time, and the formal presentation of the algorithm and the analysis of its performance on 2D random Delaunay triangulations were given by Devroye, Mucke and Zhu in the mid-1990s (the paper appeared in Algorithmica, 1998). The analysis on 3D random Delaunay triangulations was done by Mucke, Saias and Zhu (ACM Symposium on Computational Geometry, 1996). In both cases, a boundary condition was assumed, namely, Q must be slightly away from the boundary of the convex domain where the vertices of the random Delaunay triangulation are drawn. In 2004, Devroye, Lemaire and Moreau showed that in 2D the boundary condition can be withdrawn (the paper appeared in Computational Geometry: Theory and Applications, 2004). - -Jump-and-Walk has been used in many famous software packages, e.g., QHULL, Triangle and CGAL.
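A sketch of the jump-and-walk idea on a toy 2D triangulation follows. This is illustrative code, not the implementation used in QHULL, Triangle or CGAL: the mesh representation (index triples, a directed-edge-to-triangle map) and all names are choices made here, and degenerate cases such as a query point exactly on an edge are not handled.

```python
import random

def orient(a, b, c):
    """Twice the signed area of triangle abc (> 0 if counterclockwise)."""
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def build_adjacency(tris):
    """Map each directed edge (u, v) to the (CCW) triangle on its left."""
    owner = {}
    for t, (i, j, k) in enumerate(tris):
        for u, v in ((i, j), (j, k), (k, i)):
            owner[(u, v)] = t
    return owner

def jump_and_walk(pts, tris, q, n_samples=5, rng=random):
    owner = build_adjacency(tris)
    # "Jump": start from a triangle incident to the sample vertex nearest q.
    samples = rng.sample(range(len(pts)), min(n_samples, len(pts)))
    s = min(samples, key=lambda i: (pts[i][0]-q[0])**2 + (pts[i][1]-q[1])**2)
    t = next(tr for tr, tri in enumerate(tris) if s in tri)
    # "Walk": cross an edge whenever q lies strictly on its far side.
    while True:
        i, j, k = tris[t]
        for u, v in ((i, j), (j, k), (k, i)):
            if orient(pts[u], pts[v], q) < 0:    # q strictly right of u->v
                nxt = owner.get((v, u))          # neighbor across the edge
                if nxt is None:
                    return None                  # q is outside the mesh
                t = nxt
                break
        else:
            return t                             # no edge rejects q

pts = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2)]
tris = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]   # a fan, all CCW
print(jump_and_walk(pts, tris, (3, 2)))               # -> 1
```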
diff --git a/wiki/wikipedia/3959.txt b/wiki/wikipedia/3959.txt deleted file mode 100644 index 68f2d5e6d3b9116f7a4582d7bd24a6d804da00e3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3959.txt +++ /dev/null @@ -1,19 +0,0 @@ -Microsoft Minesweeper (formerly just Minesweeper, and also known as Flower Field) is a minesweeper-type video game created by Curt Johnson, originally for IBM's OS/2, that was ported to Microsoft Windows by Robert Donner, both Microsoft employees at the time. First officially released as part of the Microsoft Entertainment Pack 1 in 1990, it was first included in the standard install of Windows 3.1 in 1992, replacing Reversi from Windows 3.0. Microsoft Minesweeper was included without major changes in all subsequent Windows releases until Windows Vista, at which time an updated version by Oberon Media replaced it. In Windows 8 and later the game is not included with a fresh Windows install, but Microsoft Studios has published an updated version of it, developed by Arkadium, on Microsoft Store. - -The goal of Minesweeper is to uncover all the squares on a grid that do not contain mines without being "blown up" by clicking on a square with a mine underneath. The location of most mines is discovered through a logical process, but some require guessing, usually with a 50-50 chance of being correct. Clicking on the game board will reveal what is hidden underneath the chosen square or squares (a large number of blank squares [bordering 0 mines] may be revealed in one go if they are adjacent to each other). Some squares are blank while others contain numbers (from 1 to 8), with each number being the number of mines adjacent to the uncovered square. - -To help the player avoid hitting a mine, the location of a suspected mine can be marked by flagging it with the right mouse button. The game is won once all blank or numbered squares have been uncovered by the player without hitting a mine; any remaining mines not identified by flags are automatically flagged by the computer. However, in the event that a game is lost and the player had mistakenly flagged a safe square, that square will either appear with a red X, or else a red X covering the mine (both denoting the square as safe). The game board comes in three set sizes with a predetermined number of mines: "beginner", "intermediate", and "expert", although a "custom" option is available as well. - -In early versions of the game, a cheat code let players peek beneath the tiles. - -By the year 2000, the game had been given the name of Flower Field instead of Minesweeper in some translations of Windows 2000 (like the Italian version), featuring flowers instead of mines. Flower Fields gameplay was otherwise unchanged, as was the executable file name. - -In 2003, Microsoft created a variation called Minesweeper Flags in MSN Messenger, which is played against an opponent with the objective to find the mines rather than the surrounding squares. - -The game's color scheme changed with the release of Vista (from gray to either blue or green). The icons were updated to match the Aero look. It also came with a more peaceful "flower" motif (called "Flower Garden") to replace the landmines (a game style called "Minesweeper"). This iteration of Minesweeper was created by Oberon Media. The controversy over the land mine theme of the game was settled by defaulting the appearance based on region so that "sensitive" areas used the flower theme, but some still wanted the game removed from Windows altogether. 
Multiple news outlets criticized the change as greedy. This version updates both motifs (themes called "Modern" and "Garden" as of Windows 10). Daily challenges and an adventure mode were also added. - -As of Windows 10, the non-premium version has six modes of play: Easy (9x9), Medium (16x16), Expert (30x16), Custom, Adventure, and Daily Challenges. The two themes are "Modern theme" and "Garden theme". On the main menu, there are sections for Awards, Leaderboards, Statistics, and Tutorials. - -Some of the game options are only relevant for a touchscreen, like the flag mode and swiping. - -Business Insider called the game an "iconic part" of the Windows operating system. diff --git a/wiki/wikipedia/396.txt b/wiki/wikipedia/396.txt deleted file mode 100644 index 25b09b63f0ee438b697d37debc94daef2ec6a18b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/396.txt +++ /dev/null @@ -1,11 +0,0 @@ -In distributed computing, a shared snapshot object is a type of data structure, which is shared between several threads or processes. For many tasks, it is important to have a data structure, that can provide a consistent view of the state of the memory. In practice, it turns out that it is not possible to get such a consistent state of the memory by just accessing one shared register after another, since the values stored in individual registers can be changed at any time during this process. To solve this problem, snapshot objects store a vector of n components and provide the following two atomic operations: update(i,v) changes the value in the ith component to v, and scan() returns the values stored in all n components. - -Snapshot objects can be constructed using atomic single-writer multi-reader shared registers. - -In general, one distinguishes between single-writer multi-reader (swmr) snapshot objects and multi-writer multi-reader (mwmr) snapshot objects. In a swmr snapshot object, the number of components matches the number of processes and only one process Pi is allowed to write to the memory position i and all the other processes are allowed to read the memory. In contrast, in a mwmr snapshot object all processes are allowed to write to all positions of the memory and are allowed to read the memory as well. - -A shared memory is partitioned into multiple parts. Each of these parts holds a single data value. In the single-writer multi-reader case each process Pi has a memory position i assigned and only this process is allowed to write to the memory position. However, every process is allowed to read any position in the memory. In the multi-writer multi-reader case, the restriction changes and any process is allowed to change any position of the memory. Any process Pi $\in$ {1,...,n} in an n-process system is able to perform two operations on the snapshot object: scan() and update(i,v). The scan operation has no arguments and returns a consistent view of the memory. The update(i,v) operation updates the memory at the position i with the value v. - -Both types of operations are considered to occur atomically between the call by the process and the return by the memory. More generally speaking, in the data vector $\overline{d}$ each entry dk corresponds to the argument of the last linearized update operation, which updates part k of the memory. There are also randomized implementations of snapshot objects based on swmr registers using $O(n \log^2 n)$ operations. 
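Before the refinements surveyed here, the basic pattern behind scan/update is worth making concrete. Below is the classic "double collect" sketch in Python: a scan re-reads all components until two consecutive collects agree. Caveat: this naive version is linearizable when it returns but is not wait-free (a scan can be starved by concurrent updates); the algorithms discussed in this article add handshaking or helping to bound the number of operations. The class layout is invented for illustration.

```python
# Naive "double collect" snapshot sketch (illustrative, not wait-free).

class SnapshotObject:
    def __init__(self, n):
        self.reg = [(0, None)] * n     # component i holds (seq number, value)

    def update(self, i, v):            # in the swmr case, only process i
        seq, _ = self.reg[i]           # may call update(i, ...)
        self.reg[i] = (seq + 1, v)     # a single atomic register write

    def _collect(self):
        return [self.reg[i] for i in range(len(self.reg))]

    def scan(self):
        old = self._collect()
        while True:
            new = self._collect()
            if new == old:             # no component moved between collects
                return [v for _, v in new]
            old = new

snap = SnapshotObject(3)
snap.update(0, "a")
snap.update(2, "c")
print(snap.scan())                     # ['a', None, 'c']
```

The sequence numbers are what make the agreement test sound: a component that was changed and changed back would otherwise fool two collects into agreeing.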
Another implementation by Israeli and Shirazi, using unbounded memory, requires $O(n^{3/2}\log^2 n)$ operations on the memory. Applying a general method by Israeli, Shaham, and Shirazi, this can be improved to an unbounded snapshot algorithm which only needs $O(n \log n)$ operations per scan and $O(n)$ operations per update. There are further improvements introduced by Inoue et al., using only a linear number of read and write operations. In contrast to the other presented methods, this approach uses mwmr registers and not swmr registers. - -There are several algorithms in distributed computing which can be simplified in design and/or verification using shared snapshot objects. Examples of this are exclusion problems, concurrent time-stamp systems, approximate agreement, randomized consensus and wait-free implementations of other data structures. With mwmr snapshot objects it is also possible to create atomic multi-writer multi-reader registers. diff --git a/wiki/wikipedia/3960.txt deleted file mode 100644 index c50140470fdefb07df3239d97f85411b2cf80766..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3960.txt +++ /dev/null @@ -1,52 +0,0 @@ -In set theory, König's theorem states that if the axiom of choice holds, I is a set, $\kappa_i$ and $\lambda_i$ are cardinal numbers for every i in I, and $\kappa_i < \lambda_i$ for every i in I, then -$$ -\sum_{i \in I}\kappa_i < \prod_{i \in I}\lambda_i. -$$ - -The sum here is the cardinality of the disjoint union of sets of cardinalities $\kappa_i$, and the product is the cardinality of the Cartesian product of sets of cardinalities $\lambda_i$. However, without the use of the axiom of choice, the sum and the product cannot be defined as cardinal numbers, and the meaning of the inequality sign would need to be clarified. - -König's theorem was introduced by Julius König in 1904, in the slightly weaker form that the sum of a strictly increasing sequence of nonzero cardinal numbers is less than their product. - -The precise statement of the result: if I is a set, Ai and Bi are sets for every i in I, and $A_i < B_i$ for every i in I (meaning that there is an injection from Ai to Bi, but not one going the other way), then the union of the Ai is strictly smaller in cardinality than the product of the Bi. The union involved need not be disjoint (a non-disjoint union can't be any bigger than the disjoint version, also assuming the axiom of choice). In this formulation, König's theorem is equivalent to the axiom of choice. - -(Of course, König's theorem is trivial if the cardinal numbers $\kappa_i$ and $\lambda_i$ are finite and the index set I is finite. If I is empty, then the left sum is the empty sum and therefore 0, while the right product is the empty product and therefore 1). - -König's theorem is remarkable because of the strict inequality in the conclusion. There are many easy rules for the arithmetic of infinite sums and products of cardinals in which one can only conclude a weak inequality ≤; for example: if $\kappa_i < \lambda_i$ for all i in I, then one can only conclude -$$ -\sum_{i \in I} \kappa_i \le \sum_{i \in I} \lambda_i, -$$ - -since, for example, setting $\kappa_i = 1$ and $\lambda_i = 2$, where the index set I is the natural numbers, yields the sum $\aleph_0$ for both sides, and we have an equality. - -* If $\kappa$ is a cardinal, then $\kappa < 2^\kappa$. - -If we take $\kappa_i = 1$ and $\lambda_i = 2$ for each i in κ, then the left side of the above inequality is just κ, while the right side is $2^\kappa$, the cardinality of functions from κ to {0, 1}, that is, the cardinality of the power set of κ. Thus, König's theorem gives us an alternate proof of Cantor's theorem. (Historically of course Cantor's theorem was proved much earlier.)
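For readers coming from the formal side of this dataset, König's theorem is also available in Lean's mathlib. The following is a sketch assuming the mathlib-3-era names `cardinal.sum_lt_prod` and `cardinal.cantor`; exact names and signatures may differ across mathlib versions:

```lean
import set_theory.cardinal

open cardinal

-- König's theorem: a pointwise strict inequality of cardinals yields a
-- strict inequality between the sum and the product (assumed mathlib name).
example {ι : Type*} (f g : ι → cardinal) (h : ∀ i, f i < g i) :
  sum f < prod g :=
sum_lt_prod f g h

-- The corollary κ < 2^κ discussed above (Cantor's theorem), which mathlib
-- provides directly:
example (κ : cardinal) : κ < 2 ^ κ :=
cantor κ
```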
- -One way of stating the axiom of choice is "an arbitrary Cartesian product of non-empty sets is non-empty". Let Bi be a non-empty set for each i in I, and let Ai = {} for each i in I. Then by König's theorem, we have: - -* If $\forall i \in I(\{\} < B_i)$, then $\{\} < \prod_{i \in I}B_i$. - -That is, the Cartesian product of the given non-empty sets Bi has a larger cardinality than the sum of empty sets, namely 0. Thus it is non-empty, which is just what the axiom of choice states. Since the axiom of choice follows from König's theorem, we will use the axiom of choice freely and implicitly when discussing consequences of the theorem. - -König's theorem also has important consequences for the cofinality of cardinal numbers. - -* If $\kappa \ge \aleph_0$, then $\kappa < \kappa^{\operatorname{cf}(\kappa)}$. - -Choose a strictly increasing cf(κ)-sequence of ordinals approaching κ. Each of them is less than κ, so their sum, which is κ, is less than the product of cf(κ) copies of κ. - -According to Easton's theorem, the next consequence of König's theorem is the only nontrivial constraint on the continuum function for regular cardinals. - -* If $\kappa \geq \aleph_0$ and $\lambda \geq 2$, then $\kappa < \operatorname{cf}(\lambda^\kappa)$. - -Let $\mu = \lambda^\kappa$. Suppose that, contrary to this corollary, $\kappa \ge \operatorname{cf}(\mu)$. Then using the previous corollary, $\mu < \mu^{\operatorname{cf}(\mu)} \le \mu^\kappa = (\lambda^\kappa)^\kappa = \lambda^{\kappa \cdot \kappa} = \lambda^\kappa = \mu$, a contradiction. - -Assuming Zermelo–Fraenkel set theory, including especially the axiom of choice, we can prove the theorem. Remember that we are given $\forall i\in I : A_i < B_i$ (so that, in particular, every $B_i$ is non-empty), and we have to show that any function f from the disjoint union of the As to the product of the Bs is not surjective and that the product is nonempty. That the product is nonempty follows immediately from the axiom of choice and the fact that the factors are nonempty. For each i choose a bi in Bi not in the image of Ai under the composition of f with the projection to Bi; such a bi exists because $A_i < B_i$. Then the product of the elements bi is not in the image of f, so f does not map the disjoint union of the As onto the product of the Bs. diff --git a/wiki/wikipedia/3961.txt deleted file mode 100644 index 6e84a6c6b788d28583d9f5f93d5a96fac409b3c3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3961.txt +++ /dev/null @@ -1,65 +0,0 @@ -This article contains a discussion of paradoxes of set theory. As with most mathematical paradoxes, they generally reveal surprising and counter-intuitive mathematical results, rather than actual logical contradictions within modern axiomatic set theory. - -Set theory as conceived by Georg Cantor assumes the existence of infinite sets. As this assumption cannot be proved from first principles it has been introduced into axiomatic set theory by the axiom of infinity, which asserts the existence of the set N of natural numbers. Every infinite set which can be enumerated by natural numbers is the same size (cardinality) as N, and is said to be countable. Examples of countably infinite sets are the natural numbers, the even numbers, the prime numbers, and also all the rational numbers, i.e., the fractions. These sets have in common the cardinal number |N| = $\aleph_0$ (aleph-nought), a number greater than every natural number. - -Cardinal numbers can be defined as follows.
Define two sets to have the same size by: there exists a bijection between the two sets (a one-to-one correspondence between the elements). Then a cardinal number is, by definition, a class consisting of all sets of the same size. To have the same size is an equivalence relation, and the cardinal numbers are the equivalence classes. - -Besides the cardinality, which describes the size of a set, ordered sets also form a subject of set theory. The axiom of choice guarantees that every set can be well-ordered, which means that a total order can be imposed on its elements such that every nonempty subset has a first element with respect to that order. The order of a well-ordered set is described by an ordinal number. For instance, 3 is the ordinal number of the set {0, 1, 2} with the usual order 0 < 1 < 2; and ω is the ordinal number of the set of all natural numbers ordered the usual way. Neglecting the order, we are left with the cardinal number |N| = |ω| = $ \aleph_0$. - -Ordinal numbers can be defined with the same method used for cardinal numbers. Define two well-ordered sets to have the same order type by: there exists a bijection between the two sets respecting the order: smaller elements are mapped to smaller elements. Then an ordinal number is, by definition, a class consisting of all well-ordered sets of the same order type. To have the same order type is an equivalence relation on the class of well-ordered sets, and the ordinal numbers are the equivalence classes. - -Two sets of the same order type have the same cardinality. The converse is not true in general for infinite sets: it is possible to impose different well-orderings on the set of natural numbers that give rise to different ordinal numbers. - -There is a natural ordering on the ordinals, which is itself a well-ordering. Given any ordinal α, one can consider the set of all ordinals less than α. This set turns out to have ordinal number α. This observation is used for a different way of introducing the ordinals, in which an ordinal is equated with the set of all smaller ordinals. This form of ordinal number is thus a canonical representative of the earlier form of equivalence class. - -By forming all subsets of a set S (all possible choices of its elements), we obtain the power set P(S). Georg Cantor proved that the power set is always larger than the set, i.e., |P(S)| > |S|. A special case of Cantor's theorem proves that the set of all real numbers R cannot be enumerated by natural numbers. R is uncountable: |R| > |N|. - -Instead of relying on ambiguous descriptions such as "that which cannot be enlarged" or "increasing without bound", set theory provides definitions for the term infinite set to give an unambiguous meaning to phrases such as "the set of all natural numbers is infinite". Just as for finite sets, the theory makes further definitions which allow us to consistently compare two infinite sets as regards whether one set is "larger than", "smaller than", or "the same size as" the other. But not every intuition regarding the size of finite sets applies to the size of infinite sets, leading to various apparently paradoxical results regarding enumeration, size, measure and order. - -Before set theory was introduced, the notion of the size of a set had been problematic. It had been discussed by Galileo Galilei and Bernard Bolzano, among others. Are there as many natural numbers as squares of natural numbers when measured by the method of enumeration? 
- -* The answer is yes, because for every natural number n there is a square number n2, and likewise the other way around. - -* The answer is no, because the squares are a proper subset of the naturals: every square is a natural number but there are natural numbers, like 2, which are not squares of natural numbers. - -By defining the notion of the size of a set in terms of its cardinality, the issue can be settled. Since there is a bijection between the two sets involved, this follows in fact directly from the definition of the cardinality of a set. - -See Hilbert's paradox of the Grand Hotel for more on paradoxes of enumeration. - -"I see it but I don't believe," Cantor wrote to Richard Dedekind after proving that the set of points of a square has the same cardinality as that of the points on just an edge of the square: the cardinality of the continuum. - -This demonstrates that the "size" of sets as defined by cardinality alone is not the only useful way of comparing sets. Measure theory provides a more nuanced theory of size that conforms to our intuition that length and area are incompatible measures of size. - -The evidence strongly suggests that Cantor was quite confident in the result itself and that his comment to Dedekind refers instead to his then-still-lingering concerns about the validity of his proof of it. Nevertheless, Cantor's remark would also serve nicely to express the surprise that so many mathematicians after him have experienced on first encountering a result that is so counter-intuitive. - -In 1904 Ernst Zermelo proved by means of the axiom of choice (which was introduced for this reason) that every set can be well-ordered. In 1963 Paul J. Cohen showed that in Zermelo–Fraenkel set theory without the axiom of choice it is not possible to prove the existence of a well-ordering of the real numbers. - -However, the ability to well order any set allows certain constructions to be performed that have been called paradoxical. One example is the Banach–Tarski paradox, a theorem widely considered to be nonintuitive. It states that it is possible to decompose a ball of a fixed radius into a finite number of pieces and then move and reassemble those pieces by ordinary translations and rotations (with no scaling) to obtain two copies from the one original copy. The construction of these pieces requires the axiom of choice; the pieces are not simple regions of the ball, but complicated subsets. - -In set theory, an infinite set is not considered to be created by some mathematical process such as "adding one element" that is then carried out "an infinite number of times". Instead, a particular infinite set (such as the set of all natural numbers) is said to already exist, "by fiat", as an assumption or an axiom. Given this infinite set, other infinite sets are then proven to exist as well, as a logical consequence. But it is still a natural philosophical question to contemplate some physical action that actually completes after an infinite number of discrete steps; and the interpretation of this question using set theory gives rise to the paradoxes of the supertask. - -Tristram Shandy, the hero of a novel by Laurence Sterne, writes his autobiography so conscientiously that it takes him one year to lay down the events of one day. If he is mortal he can never terminate; but if he lived forever then no part of his diary would remain unwritten, for to each day of his life a year devoted to that day's description would correspond. 
- -An increased version of this type of paradox shifts the infinitely remote finish to a finite time. Fill a huge reservoir with balls enumerated by numbers 1 to 10 and take off ball number 1. Then add the balls enumerated by numbers 11 to 20 and take off number 2. Continue to add balls enumerated by numbers 10n - 9 to 10n and to remove ball number n for all natural numbers n = 3, 4, 5, .... Let the first transaction last half an hour, let the second transaction last quarter an hour, and so on, so that all transactions are finished after one hour. Obviously the set of balls in the reservoir increases without bound. Nevertheless, after one hour the reservoir is empty because for every ball the time of removal is known. - -The paradox is further increased by the significance of the removal sequence. If the balls are not removed in the sequence 1, 2, 3, ... but in the sequence 1, 11, 21, ... after one hour infinitely many balls populate the reservoir, although the same amount of material as before has been moved. - -For all its usefulness in resolving questions regarding infinite sets, naive set theory has some fatal flaws. In particular, it is prey to logical paradoxes such as those exposed by Russell's paradox. The discovery of these paradoxes revealed that not all sets which can be described in the language of naive set theory can actually be said to exist without creating a contradiction. The 20th century saw a resolution to these paradoxes in the development of the various axiomatizations of set theories such as ZFC and NBG in common use today. However, the gap between the very formalized and symbolic language of these theories and our typical informal use of mathematical language results in various paradoxical situations, as well as the philosophical question of exactly what it is that such formal systems actually propose to be talking about. - -In 1897 the Italian mathematician Cesare Burali-Forti discovered that there is no set containing all ordinal numbers. As every ordinal number is defined by a set of smaller ordinal numbers, the well-ordered set Ω of all ordinal numbers (if it exists) fits the definition and is itself an ordinal. On the other hand, no ordinal number can contain itself, so Ω cannot be an ordinal. Therefore, the set of all ordinal numbers cannot exist. - -By the end of the 19th century Cantor was aware of the non-existence of the set of all cardinal numbers and the set of all ordinal numbers. In letters to David Hilbert and Richard Dedekind he wrote about inconsistent sets, the elements of which cannot be thought of as being all together, and he used this result to prove that every consistent set has a cardinal number. - -After all this, the version of the "set of all sets" paradox conceived by Bertrand Russell in 1903 led to a serious crisis in set theory. Russell recognized that the statement x = x is true for every set, and thus the set of all sets is defined by {x | x = x}. In 1906 he constructed several paradox sets, the most famous of which is the set of all sets which do not contain themselves. Russell himself explained this abstract idea by means of some very concrete pictures. One example, known as the Barber paradox, states: The male barber who shaves all and only men who don't shave themselves has to shave himself only if he does not shave himself. - -There are close similarities between Russell's paradox in set theory and the Grelling–Nelson paradox, which demonstrates a paradox in natural language. 
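The barber form of Russell's paradox described above is small enough to check exhaustively: the rule "the barber shaves exactly those men who do not shave themselves" admits no consistent answer for the barber himself. A brute-force sketch:

```python
# Brute-force check of the barber paradox: under the rule "the barber
# shaves exactly those who do not shave themselves", neither truth value
# for "the barber shaves himself" is consistent.

for shaves_himself in (True, False):
    required_by_rule = not shaves_himself     # the rule applied to the barber
    print(shaves_himself, "consistent:", shaves_himself == required_by_rule)
# Prints "True consistent: False" and "False consistent: False".
```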
- -In 1905, the Hungarian mathematician Julius König published a paradox based on the fact that there are only countably many finite definitions. If we imagine the real numbers as a well-ordered set, those real numbers which can be finitely defined form a subset. Hence in this well-order there should be a first real number that is not finitely definable. This is paradoxical, because this real number has just been finitely defined by the last sentence. This leads to a contradiction in naive set theory. - -This paradox is avoided in axiomatic set theory. Although it is possible to represent a proposition about a set as a set, by a system of codes known as Gödel numbers, there is no formula $\varphi(a,x)$ in the language of set theory which holds exactly when $a$ is a code for a finite proposition about a set, $x$ is a set, and $a$ holds for $x$. This result is known as Tarski's indefinability theorem; it applies to a wide class of formal systems including all commonly studied axiomatizations of set theory. - -In the same year the French mathematician Jules Richard used a variant of Cantor's diagonal method to obtain another contradiction in naive set theory. Consider the set A of all finite agglomerations of words. The set E of all finite definitions of real numbers is a subset of A. As A is countable, so is E. Let p be the nth decimal of the nth real number defined by the set E; we form a number N having zero for the integral part and p + 1 for the nth decimal if p is not equal either to 8 or 9, and unity if p is equal to 8 or 9. This number N is not defined by the set E because it differs from any finitely defined real number, namely from the nth number by the nth digit. But N has been defined by a finite number of words in this paragraph. It should therefore be in the set E. That is a contradiction. - -As with König's paradox, this paradox cannot be formalized in axiomatic set theory because it requires the ability to tell whether a description applies to a particular set (or, equivalently, to tell whether a formula is actually the definition of a single set). - -Based upon work of the German mathematician Leopold Löwenheim (1915) the Norwegian logician Thoralf Skolem showed in 1922 that every consistent theory of first-order predicate calculus, such as set theory, has an at most countable model. However, Cantor's theorem proves that there are uncountable sets. The root of this seeming paradox is that the countability or noncountability of a set is not always absolute, but can depend on the model in which the cardinality is measured. It is possible for a set to be uncountable in one model of set theory but countable in a larger model (because the bijections that establish countability are in the larger model but not the smaller one). diff --git a/wiki/wikipedia/3962.txt b/wiki/wikipedia/3962.txt deleted file mode 100644 index f92638917cff58ef7f36ebeb90158fef5a5e1863..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3962.txt +++ /dev/null @@ -1 +0,0 @@ -In algebraic geometry, the Ramanujam vanishing theorem is an extension of the Kodaira vanishing theorem due to , that in particular gives conditions for the vanishing of first cohomology groups of coherent sheaves on a surface. The Kawamata–Viehweg vanishing theorem generalizes it. 
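The diagonal construction behind Richard's paradox above is mechanical enough to write down. A small sketch using exactly the digit-tweaking rule quoted in the text (the function name and input encoding are invented for illustration):

```python
# Richard's diagonal construction: from a list of decimal expansions,
# produce a number whose n-th decimal differs from the n-th number's n-th
# decimal (p + 1 if p is not 8 or 9, and 1 otherwise, as in the text).

def richard_diagonal(expansions):
    """expansions: strings of digits after the decimal point of the
    1st, 2nd, ... 'finitely defined' reals (each at least as long as the
    list itself)."""
    out = []
    for n, digits in enumerate(expansions):
        p = int(digits[n])
        out.append("1" if p in (8, 9) else str(p + 1))
    return "0." + "".join(out)

print(richard_diagonal(["3333", "1415", "7182", "9999"]))   # 0.4511
```

By construction the result differs from the n-th listed number in the n-th digit, which is the contradiction the paragraph above describes.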
diff --git a/wiki/wikipedia/3963.txt deleted file mode 100644 index 4f420c97609431b7279eb73ed0827667b29e677c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3963.txt +++ /dev/null @@ -1,30 +0,0 @@ -In mathematics, the Gelfond–Schneider theorem establishes the transcendence of a large class of numbers. - -It was originally proved independently in 1934 by Aleksandr Gelfond and Theodor Schneider. - -If a and b are algebraic numbers with a ≠ 0, 1, and b irrational, then any value of $a^b$ is a transcendental number. - -* The values of a and b are not restricted to real numbers; complex numbers are allowed (here complex numbers are not regarded as rational when they have an imaginary part not equal to 0, even if both the real and imaginary parts are rational). - -* In general, $a^b = \exp(b \ln a)$ is multivalued, where ln stands for the natural logarithm. This accounts for the phrase "any value of" in the theorem's statement. - -* An equivalent formulation of the theorem is the following: if α and γ are nonzero algebraic numbers, and we take any non-zero logarithm of α, then (log γ)/(log α) is either rational or transcendental. This may be expressed as saying that if log α, log γ are linearly independent over the rationals, then they are linearly independent over the algebraic numbers. The generalisation of this statement to more general linear forms in logarithms of several algebraic numbers is in the domain of transcendental number theory. - -* If the restriction that a and b be algebraic is removed, the statement does not remain true in general. For example, -$$ -{\left(\sqrt{2}^{\sqrt{2}}\right)}^{\sqrt{2}} = \sqrt{2}^{\sqrt{2} \cdot \sqrt{2}} = \sqrt{2}^2 = 2. -$$ - -Here, a is $\sqrt{2}^{\sqrt{2}}$, which (as proven by the theorem itself) is transcendental rather than algebraic. Similarly, if a = 3 and b = (log 2)/(log 3), which is transcendental, then $a^b = 2$ is algebraic. A characterization of the values for a and b which yield a transcendental $a^b$ is not known. - -* Kurt Mahler proved the p-adic analogue of the theorem: if a and b are in Cp, the completion of the algebraic closure of Qp, and they are algebraic over Q, and if $|a-1|_p<1$ and $|b-1|_p<1,$ then $(\log_p a)/(\log_p b)$ is either rational or transcendental, where logp is the p-adic logarithm function. - -The transcendence of the following numbers follows immediately from the theorem: - -* The Gelfond–Schneider constant $2^{\sqrt{2}}$ and its square root $\sqrt{2}^{\sqrt{2}}.$ - -* Gelfond's constant $e^{\pi} = \left( e^{i \pi} \right)^{-i} = (-1)^{-i} = 23.14069263 \ldots$ - -* $ i^i = \left( e^{\frac{i \pi}{2}} \right)^i = e^{-\frac{\pi}{2}} = 0.207879576 \ldots$ - -The Gelfond–Schneider theorem affirmatively answers Hilbert's seventh problem. diff --git a/wiki/wikipedia/3964.txt deleted file mode 100644 index 5d34053c77ac8b79b23a076d4a0d69d5920a2b80..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3964.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, the Arthur conjectures are some conjectures about automorphic representations of reductive groups over the adeles and unitary representations of reductive groups over local fields made by James Arthur, motivated by the Arthur–Selberg trace formula. - -Arthur's conjectures imply the generalized Ramanujan conjectures for cusp forms on general linear groups.
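The identities in the Gelfond–Schneider discussion above admit a quick floating-point sanity check (numerics can only illustrate the equalities, of course, never transcendence itself):

```python
import math

# (sqrt(2)^sqrt(2))^sqrt(2) = sqrt(2)^2 = 2, up to rounding:
s = math.sqrt(2) ** math.sqrt(2)
print(s ** math.sqrt(2))                   # ~2.0

# a = 3, b = (log 2)/(log 3): algebraic a, transcendental b, algebraic a^b:
print(3 ** (math.log(2) / math.log(3)))    # ~2.0

# Gelfond's constant e^pi, and i^i = e^(-pi/2):
print(math.e ** math.pi)                   # 23.140692632779...
print(math.exp(-math.pi / 2))              # 0.207879576350...
```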
diff --git a/wiki/wikipedia/3965.txt b/wiki/wikipedia/3965.txt deleted file mode 100644 index 132f7bfcb102b773265bb44ca8fe6f5eb0ffae49..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3965.txt +++ /dev/null @@ -1,74 +0,0 @@ -In topology, a Jordan curve, sometimes called a plane simple closed curve, is a non-self-intersecting continuous loop in the plane. The Jordan curve theorem asserts that every Jordan curve divides the plane into an "interior" region bounded by the curve and an "exterior" region containing all of the nearby and far away exterior points, so that every continuous path connecting a point of one region to a point of the other intersects with the curve somewhere. While the theorem seems intuitively obvious, it takes some ingenuity to prove it by elementary means. "Although the JCT is one of the best known topological theorems, there are many, even among professional mathematicians, who have never read a proof of it." (Tverberg). More transparent proofs rely on the mathematical machinery of algebraic topology, and these lead to generalizations to higher-dimensional spaces. - -The Jordan curve theorem is named after the mathematician Camille Jordan (1838–1922), who found its first proof. For decades, mathematicians generally thought that this proof was flawed and that the first rigorous proof was carried out by Oswald Veblen. However, this notion has been overturned by Thomas C. Hales and others. - -A Jordan curve or a simple closed curve in the plane R2 is the image C of an injective continuous map of a circle into the plane, φ: S1 → R2. A Jordan arc in the plane is the image of an injective continuous map of a closed and bounded interval [a, b] into the plane. It is a plane curve that is not necessarily smooth nor algebraic. - -Alternatively, a Jordan curve is the image of a continuous map φ: [0,1] → R2 such that φ(0) = φ(1) and the restriction of φ to [0,1) is injective. The first two conditions say that C is a continuous loop, whereas the last condition stipulates that C has no self-intersection points. - -With these definitions, the Jordan curve theorem can be stated as follows: - -
- -Let C be a Jordan curve in the plane R2. Then its complement, R2 \ C, consists of exactly two connected components. One of these components is bounded (the interior) and the other is unbounded (the exterior), and the curve C is the boundary of each component. - -In contrast, the complement of a Jordan arc in the plane is connected. - -The Jordan curve theorem was independently generalized to higher dimensions by H. Lebesgue and L.E.J. Brouwer in 1911, resulting in the Jordan–Brouwer separation theorem. - -Let X be an n-dimensional topological sphere in the (n+1)-dimensional Euclidean space Rn+1 (n > 0), i.e. the image of an injective continuous mapping of the n-sphere Sn into Rn+1. Then the complement Y of X in Rn+1 consists of exactly two connected components. One of these components is bounded (the interior) and the other is unbounded (the exterior). The set X is their common boundary.
- -The proof uses homology theory. It is first established that, more generally, if X is homeomorphic to the k-sphere, then the reduced integral homology groups of Y = Rn+1 \ X are as follows: -$$ -\tilde{H}_{q}(Y)= \begin{cases}\mathbb{Z}, & q=n-k\text{ or }q=n, \\ \{0\}, & \text{otherwise}.\end{cases} -$$ - -This is proved by induction on k using the Mayer–Vietoris sequence. When n = k, the zeroth reduced homology of Y has rank 1, which means that Y has 2 connected components (which are, moreover, path connected), and with a bit of extra work, one shows that their common boundary is X. A further generalization was found by J. W. Alexander, who established the Alexander duality between the reduced homology of a compact subset X of Rn+1 and the reduced cohomology of its complement. If X is an n-dimensional compact connected submanifold of Rn+1 (or Sn+1) without boundary, its complement has 2 connected components. - -There is a strengthening of the Jordan curve theorem, called the Jordan–Schönflies theorem, which states that the interior and the exterior planar regions determined by a Jordan curve in R2 are homeomorphic to the interior and exterior of the unit disk. In particular, for any point P in the interior region and a point A on the Jordan curve, there exists a Jordan arc connecting P with A and, with the exception of the endpoint A, completely lying in the interior region. An alternative and equivalent formulation of the Jordan–Schönflies theorem asserts that any Jordan curve φ: S1 → R2, where S1 is viewed as the unit circle in the plane, can be extended to a homeomorphism ψ: R2 → R2 of the plane. Unlike Lebesgue's and Brouwer's generalization of the Jordan curve theorem, this statement becomes false in higher dimensions: while the exterior of the unit ball in R3 is simply connected, because it retracts onto the unit sphere, the Alexander horned sphere is a subset of R3 homeomorphic to a sphere, but so twisted in space that the unbounded component of its complement in R3 is not simply connected, and hence not homeomorphic to the exterior of the unit ball. - -The statement of the Jordan curve theorem may seem obvious at first, but it is a rather difficult theorem to prove. - -It is easy to establish this result for polygons, but the difficulty lay in generalizing it to all kinds of badly behaved curves, which include nowhere differentiable curves, such as the Koch snowflake and other fractal curves, or even a Jordan curve of positive area constructed by Osgood. - -The first proof of this theorem was given by Camille Jordan in his lectures on real analysis, and was published in his book Cours d'analyse de l'École Polytechnique. There is some controversy about whether Jordan's proof was complete: the majority of commenters on it have claimed that the first complete proof was given later by Oswald Veblen, who said the following about Jordan's proof: - -His proof, however, is unsatisfactory to many mathematicians. It assumes the theorem without proof in the important special case of a simple polygon, and of the argument from that point on, one must admit at least that all details are not given. - -However, Thomas C. Hales wrote: - -Nearly every modern citation that I have found agrees that the first correct proof is due to Veblen... In view of the heavy criticism of Jordan's proof, I was surprised when I sat down to read his proof to find nothing objectionable about it.
Since then, I have contacted a number of the authors who have criticized Jordan, and in each case the author has admitted to having no direct knowledge of an error in Jordan's proof. - -Hales also pointed out that the special case of simple polygons is not only an easy exercise, but was not really used by Jordan anyway, and quoted Michael Reeken as saying: - -Jordan's proof is essentially correct... Jordan's proof does not present the details in a satisfactory way. But the idea is right, and with some polishing the proof would be impeccable. - -Earlier, Jordan's proof and another early proof by Charles Jean de la Vallée Poussin had already been critically analyzed and completed by Schoenflies (1924). - -Due to the importance of the Jordan curve theorem in low-dimensional topology and complex analysis, it received much attention from prominent mathematicians of the first half of the 20th century. Various proofs of the theorem and its generalizations were constructed by J. W. Alexander, Louis Antoine, Ludwig Bieberbach, Luitzen Brouwer, Arnaud Denjoy, Friedrich Hartogs, Béla Kerékjártó, Alfred Pringsheim, and Arthur Moritz Schoenflies. - -New elementary proofs of the Jordan curve theorem, as well as simplifications of the earlier proofs, continue to be carried out. - -* Elementary proofs were presented by Filippov and Tverberg. - -* A proof using non-standard analysis was given by Narens. - -* A proof using constructive mathematics has also been given. - -* A proof using the Brouwer fixed point theorem was given by Maehara. - -* A proof using non-planarity of the complete bipartite graph K3,3 was given by Thomassen. - -The root of the difficulty is explained in Tverberg as follows. It is relatively simple to prove that the Jordan curve theorem holds for every Jordan polygon (Lemma 1), and every Jordan curve can be approximated arbitrarily well by a Jordan polygon (Lemma 2). A Jordan polygon is a polygonal chain, the boundary of a bounded connected open set, call it the open polygon, and its closure, the closed polygon. Consider the diameter $\delta$ of the largest disk contained in the closed polygon. Evidently, $\delta$ is positive. Using a sequence of Jordan polygons (that converge to the given Jordan curve) we have a sequence $\delta_1, \delta_2, \dots$ presumably converging to a positive number, the diameter $\delta$ of the largest disk contained in the closed region bounded by the Jordan curve. However, we have to prove that the sequence $\delta_1, \delta_2, \dots$ does not converge to zero, using only the given Jordan curve, not the region presumably bounded by the curve. This is the point of Tverberg's Lemma 3. Roughly, the closed polygons should not thin to zero everywhere. Moreover, they should not thin to zero somewhere, which is the point of Tverberg's Lemma 4. - -The first formal proof of the Jordan curve theorem was created by Hales in the HOL Light system, in January 2005, and contained about 60,000 lines. Another rigorous 6,500-line formal proof was produced in 2005 by an international team of mathematicians using the Mizar system. Both the Mizar and the HOL Light proof rely on libraries of previously proved theorems, so these two sizes are not comparable. In reverse mathematics, the Jordan curve theorem is equivalent to weak König's lemma over the system $\mathsf{RCA}_0$. - -In computational geometry, the Jordan curve theorem can be used for testing whether a point lies inside or outside a simple polygon.
- -From a given point, trace a ray that does not pass through any vertex of the polygon (all rays but a finite number are convenient). Then compute the number n of intersections of the ray with the edges of the polygon. The Jordan curve theorem implies that the point is inside the polygon if and only if n is odd (see the sketch below). diff --git a/wiki/wikipedia/3966.txt b/wiki/wikipedia/3966.txt deleted file mode 100644 index 164fbdc4ab0ab31f47a13dd666af2651e046defa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3966.txt +++ /dev/null @@ -1,21 +0,0 @@ -The covering problem of Rado is an unsolved problem in geometry concerning covering planar sets by squares. It was formulated in 1928 by Tibor Radó and has been generalized to more general shapes and higher dimensions by Richard Rado. - -In a letter to Wacław Sierpiński, motivated by some results of Giuseppe Vitali, Tibor Radó observed that for every covering of a unit interval, one can select a subcovering consisting of pairwise disjoint intervals with total length at least 1/2 and that this number cannot be improved. He then asked for an analogous statement in the plane. - -If the area of the union of a finite set of squares in the plane with parallel sides is one, what is the guaranteed maximum total area of a pairwise disjoint subset? - -Radó proved that this number is at least 1/9 and conjectured that it is at least 1/4, a constant which cannot be further improved. This assertion was proved for the case of equal squares independently by A. Sokolin, R. Rado, and V. A. Zalgaller. However, in 1973, Miklós Ajtai disproved Radó's conjecture, by constructing a system of squares of two different sizes for which any subsystem consisting of disjoint squares covers at most 1/4 - 1/1728 of the total area covered by the system. - -Problems analogous to Tibor Radó's conjecture but involving other shapes were considered by Richard Rado starting in the late 1940s. A typical setting is a finite family of convex figures in the Euclidean space Rd that are homothetic to a given X, for example, a square as in the original question, a disk, or a d-dimensional cube. Let -$$ - F(X)=\inf_{S}\sup_{I}\frac{|I|}{|S|}, -$$ - -where S ranges over finite families just described, and for a given family S, I ranges over all subfamilies that are independent, i.e. consist of disjoint sets, and bars denote the total volume (or area, in the plane case). Although the exact value of F(X) is not known for any two-dimensional convex X, much work was devoted to establishing upper and lower bounds in various classes of shapes. By considering only families consisting of sets that are parallel and congruent to X, one similarly defines f(X), which turned out to be much easier to study. Thus, R. Rado proved that if X is a triangle, f(X) is exactly 1/6 and if X is a centrally symmetric hexagon, f(X) is equal to 1/4. - -In 2008, Sergey Bereg, Adrian Dumitrescu, and Minghui Jiang established new bounds for various F(X) and f(X) that improve upon earlier results of R. Rado and V. A. Zalgaller. In particular, they proved that -$$ - \frac{1}{8.4797}\leq F(\textrm{square})\leq\frac{1}{4}-\frac{1}{384}, -$$ - -and that $f(X)\geq\frac{1}{6}$ for any convex planar X.
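Referring back to the point-in-polygon test described at the end of the Jordan curve theorem article above, here is a minimal illustrative sketch in Python (not from either article). It uses the common horizontal-ray convention, in which the straddle test `(y1 > y) != (y2 > y)` handles rays through vertices consistently:

```python
def point_in_polygon(pt, poly):
    """Ray-casting point-in-polygon test.

    Casts a horizontal ray from pt to the right and counts crossings
    with the polygon's edges; by the Jordan curve theorem, an odd
    count means pt lies inside. poly is an ordered list of (x, y)
    vertices of a simple polygon.
    """
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        # Consider only edges that straddle the horizontal line through pt.
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses that horizontal line.
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(point_in_polygon((0.5, 0.5), square))  # True: centre is inside
print(point_in_polygon((2.0, 0.5), square))  # False: outside point
```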
diff --git a/wiki/wikipedia/3967.txt b/wiki/wikipedia/3967.txt deleted file mode 100644 index 63b4162c1986024a2b4a37cedcfa640f3197045a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3967.txt +++ /dev/null @@ -1,275 +0,0 @@ -The Routh array is a tabular method permitting one to establish the stability of a system using only the coefficients of the characteristic polynomial. Central to the field of control systems design, the Routh–Hurwitz theorem and Routh array emerge by using the Euclidean algorithm and Sturm's theorem in evaluating Cauchy indices. - -Given the system: - -\begin{align} - -f(x) & {} = a_0x^n+a_1x^{n-1}+\cdots+a_n & {} \quad (1) \\ - -& {} = (x-r_1)(x-r_2)\cdots(x-r_n) & {} \quad (2) \\ - -\end{align} - -Assuming no roots of $f(x) = 0$ lie on the imaginary axis, and letting $N$ = the number of roots of $f(x) = 0$ with negative real parts, and $P$ = the number of roots of $f(x) = 0$ with positive real parts, - -then we have -$$ -N+P=n \quad (3) -$$ - -Expressing $f(x)$ in polar form, we have -$$ -f(x) = \rho(x)e^{j\theta(x)} \quad (4) -$$ - -where -$$ -\rho(x) = \sqrt{\mathfrak{Re}^2[f(x)]+\mathfrak{Im}^2[f(x)]} \quad (5) -$$ - -and -$$ -\theta(x) = \tan^{-1}\big(\mathfrak{Im}[f(x)]/\mathfrak{Re}[f(x)]\big) \quad (6) -$$ - -from (2) note that -$$ -\theta(x) = \theta_{r_1}(x)+\theta_{r_2}(x)+\cdots+\theta_{r_n}(x) \quad (7) -$$ - -where -$$ -\theta_{r_i}(x) = \angle(x-r_i) \quad (8) -$$ - -Now if the ith root of $f(x) = 0$ has a positive real part, then (using the notation y = (Re[y], Im[y])) - -\begin{align} - -\theta_{r_i}(x)\big|_{x=-j\infty} & = \angle(x-r_i)\big|_{x=-j\infty} \\ - -& = \angle(0-\mathfrak{Re}[r_i],-\infty-\mathfrak{Im}[r_i]) \\ - -& = \angle(-|\mathfrak{Re}[r_i]|,-\infty) \\ - -& = \pi + \lim_{\phi \to \infty}\tan^{-1}\phi=\frac{3\pi}{2} \quad (9)\\ - -\end{align} - -and -$$ -\theta_{r_i}(x)\big|_{x=j0} = \angle(-|\mathfrak{Re}[r_i]|,0) = \pi - \tan^{-1}0=\pi \quad (10) -$$ - -and -$$ -\theta_{r_i}(x)\big|_{x=j\infty} = \angle(-|\mathfrak{Re}[r_i]|,\infty) = \pi - \lim_{\phi \to \infty}\tan^{-1}\phi=\frac{\pi}{2} \quad (11) -$$ - -Similarly, if the ith root of $f(x)=0$ has a negative real part, - -\begin{align} - -\theta_{r_i}(x)\big|_{x=-j\infty} & = \angle(x-r_i)\big|_{x=-j\infty} \\ - -& = \angle(0- \mathfrak{Re}[r_i],-\infty-\mathfrak{Im}[r_i]) \\ - -& = \angle(|\mathfrak{Re}[r_i]|,-\infty) \\ - -& = 0 - \lim_{\phi \to \infty}\tan^{-1}\phi=-\frac{\pi}{2} \quad (12)\\ - -\end{align} - -and -$$ -\theta_{r_i}(x)\big|_{x=j0} = \angle(|\mathfrak{Re}[r_i]|,0) = \tan^{-1}0=0 \quad (13) -$$ - -and -$$ -\theta_{r_i}(x)\big|_{x=j\infty} = \angle(|\mathfrak{Re}[r_i]|,\infty) = \lim_{\phi \to \infty}\tan^{-1}\phi=\frac{\pi}{2} \quad (14) -$$ - -From (9) to (11) we find that $\theta_{r_i}(x)\Big|_{x=-j\infty}^{x=j\infty} = -\pi$ when the ith root of $f(x)$ has a positive real part, and from (12) to (14) we find that $\theta_{r_i}(x)\Big|_{x=-j\infty}^{x=j\infty} = \pi$ when the ith root of $f(x)$ has a negative real part.
Thus, -$$ -\theta(x)\Big|_{x=-j\infty}^{x=j\infty} = \angle(x-r_1)\Big|_{x=-j\infty}^{x=j\infty}+\angle(x-r_2)\Big|_{x=-j\infty}^{x=j\infty}+\cdots+\angle(x-r_n)\Big|_{x=-j\infty}^{x=j\infty} = \pi N - \pi P \quad (15) -$$ - -So, if we define -$$ -\Delta=\frac{1}{\pi}\theta(x)\Big|_{-j\infty}^{j\infty} \quad (16) -$$ - -then we have the relationship -$$ -N - P = \Delta \quad (17) -$$ - -and combining (3) and (17) gives us -$$ -N = \frac{n+\Delta}{2} -$$ and $P = \frac{n-\Delta}{2} \quad (18)$ - -Therefore, given an equation of $f(x)$ of degree $n$ we need only evaluate this function $\Delta$ to determine $N$, the number of roots with negative real parts, and $P$, the number of roots with positive real parts. - -In accordance with (6) and Figure 1, consider the graph of $\tan(\theta)$ versus $\theta$, and vary $x$ over an interval (a,b) where $\theta_a=\theta(x)|_{x=ja}$ and $\theta_b=\theta(x)|_{x=jb}$ are integer multiples of $\pi$. If this variation causes the function $\theta(x)$ to increase by $\pi$, then in the course of travelling from point a to point b, $\tan\theta$ has "jumped" from $+\infty$ to $-\infty$ one more time than it has jumped from $-\infty$ to $+\infty$. Similarly, if we vary $x$ over an interval (a,b), where again $\theta$ is a multiple of $\pi$ at both $x = ja$ and $x = jb$, and this variation causes $\theta(x)$ to decrease by $\pi$, then $\tan \theta (x) = \mathfrak{Im}[f(x)]/\mathfrak{Re}[f(x)]$ has jumped from $-\infty$ to $+\infty$ one more time than it has jumped from $+\infty$ to $-\infty$ as $x$ was varied over the said interval. - -Thus, $\theta(x)\Big|_{-j\infty}^{j\infty}$ is $\pi$ times the difference between the number of points at which $\mathfrak{Im}[f(x)]/\mathfrak{Re}[f(x)]$ jumps from $-\infty$ to $+\infty$ and the number of points at which $\mathfrak{Im}[f(x)]/\mathfrak{Re}[f(x)]$ jumps from $+\infty$ to $-\infty$ as $x$ ranges over the interval $(-j\infty,+j\infty)$ provided that at $x=\pm j\infty$, $\tan[\theta(x)]$ is defined. - -In the case where the starting point is on an incongruity (i.e. $\theta_a=\pi/2 \pm i\pi$, i = 0, 1, 2, ...) the ending point will be on an incongruity as well, by equation (17) (since $N$ is an integer and $P$ is an integer, $\Delta$ will be an integer). In this case, we can achieve this same index (difference in positive and negative jumps) by shifting the axes of the tangent function by $\pi/2$, through adding $\pi/2$ to $\theta$. Thus, our index is now fully defined for any combination of coefficients in $f(x)$ by evaluating $\tan[\theta]=\mathfrak{Im}[f(x)]/\mathfrak{Re}[f(x)]$ over the interval $(-j\infty, +j\infty)$ when our starting (and thus ending) point is not an incongruity, and by evaluating -$$ -\tan[\theta'(x)]=\tan[\theta + \pi/2] = -\cot[\theta(x)] = -\mathfrak{Re}[f(x)]/\mathfrak{Im}[f(x)] \quad (19) -$$ - -over said interval when our starting point is at an incongruity. - -This difference, $\Delta$, of negative and positive jumping incongruities encountered while traversing $x$ from $-j\infty$ to $+j\infty$ is called the Cauchy index of the tangent of the phase angle, the phase angle being $\theta(x)$ or $\theta'(x)$, depending on whether or not $\theta_a$ is an integer multiple of $\pi$.
- -To derive Routh's criterion, first we'll use a different notation to differentiate between the even and odd terms of $f(x)$: -$$ -f(x) = a_0x^n + b_0x^{n-1} + a_1x^{n-2} + b_1x^{n-3} + \cdots \quad (20) -$$ - -Now we have: - -\begin{align} - -f(j\omega) & = a_0(j\omega)^n+b_0(j\omega)^{n-1}+a_1(j\omega)^{n-2}+b_1(j\omega)^{n-3}+\cdots & {} \quad (21)\\ - -& = a_0(j\omega)^n+a_1(j\omega)^{n-2}+a_2(j\omega)^{n-4}+\cdots & {} \quad (22)\\ - -& + b_0(j\omega)^{n-1}+b_1(j\omega)^{n-3}+b_2(j\omega)^{n-5}+\cdots \\ - -\end{align} - -Therefore, if $n$ is even, - -\begin{align} - -f(j\omega) & = (-1)^{n/2}\big[a_0\omega^n-a_1\omega^{n-2}+a_2\omega^{n-4}-\cdots \big] & {} \quad (23)\\ - -& + j(-1)^{(n/2)-1}\big[b_0\omega^{n-1}-b_1\omega^{n-3}+b_2\omega^{n-5}-\cdots \big] & {} \\ - -\end{align} - -and if $n$ is odd: - -\begin{align} - -f(j\omega) & = j(-1)^{(n-1)/2}\big[a_0\omega^n-a_1\omega^{n-2}+a_2\omega^{n-4}-\cdots \big] & {} \quad (24)\\ - -& + (-1)^{(n-1)/2}\big[b_0\omega^{n-1}-b_1\omega^{n-3}+b_2\omega^{n-5}-\cdots \big] & {}\\ - -\end{align} - -Now observe that if $n$ is an odd integer, then by (3) $N+P$ is odd. If $N+P$ is an odd integer, then $N-P$ is odd as well. Similarly, this same argument shows that when $n$ is even, $N-P$ will be even. Equation (15) shows that if $N-P$ is even, $\theta$ is an integer multiple of $\pi$. Therefore, $\tan(\theta)$ is defined for $n$ even, and is thus the proper index to use when n is even, and similarly $\tan(\theta') = \tan(\theta+\pi/2) = -\cot(\theta)$ is defined for $n$ odd, making it the proper index in this latter case. - -Thus, from (6) and (23), for $n$ even: -$$ -\Delta = I_{-\infty}^{+\infty}\frac{-\mathfrak{Im}[f(x)]}{\mathfrak{Re}[f(x)]}= I_{-\infty}^{+\infty}\frac{b_0\omega^{n-1}-b_1\omega^{n-3}+\cdots}{a_0\omega^n-a_1\omega^{n-2}+\ldots} \quad (25) -$$ - -and from (19) and (24), for $n$ odd: -$$ -\Delta = I_{-\infty}^{+\infty}\frac{\mathfrak{Re}[f(x)]}{\mathfrak{Im}[f(x)]}= I_{-\infty}^{+\infty}\frac{b_0\omega^{n-1}-b_1\omega^{n-3}+\ldots}{a_0\omega^n-a_1\omega^{n-2}+\ldots} \quad (26) -$$ - -Thus we are evaluating the same Cauchy index for both: -$$ -\Delta = I_{-\infty}^{+\infty}\frac{b_0\omega^{n-1}-b_1\omega^{n-3}+\ldots}{a_0\omega^n-a_1\omega^{n-2}+\ldots} \quad (27) -$$ - -Sturm gives us a method for evaluating $\Delta = I_{-\infty}^{+\infty}\frac{f_2(x)}{f_1(x)}$.
His theorem states as follows: - -Given a sequence of polynomials $f_1(x),f_2(x), \dots, f_m(x)$ where: - -1) If $f_k(x) = 0$ then $f_{k-1}(x) \neq 0$, $f_{k+1}(x) \neq 0$, and $ \operatorname{sign}[f_{k-1}(x)] = - \operatorname{sign}[f_{k+1}(x)]$ - -2) $f_m(x) \neq 0 $ for $-\infty < x < \infty$ - -and we define $V(x)$ as the number of changes of sign in the sequence $f_1(x),f_2(x), \dots, f_m(x)$ for a fixed value of $x$, then: -$$ -\Delta = I_{-\infty}^{+\infty}\frac{f_2(x)}{f_1(x)}= V(-\infty) - V(+\infty) \quad (28) -$$ - -A sequence satisfying these requirements is obtained using the Euclidean algorithm, which is as follows: - -Starting with $f_1(x)$ and $f_2(x)$, and denoting the remainder of $f_1(x)/f_2(x)$ by $f_3(x)$ and similarly denoting the remainder of $f_2(x)/f_3(x)$ by $f_4(x)$, and so on, we obtain the relationships: - -\begin{align} - -&f_1(x)= q_1(x)f_2(x) - f_3(x) \quad (29)\\ - -&f_2(x)= q_2(x)f_3(x) - f_4(x) \\ - -& \ldots \\ - -&f_{m-1}(x)= q_{m-1}(x)f_m(x) \\ - -\end{align} - -or in general -$$ -f_{k-1}(x)= q_{k-1}(x)f_k(x) - f_{k+1}(x) -$$ - -where the last non-zero remainder, $f_m(x)$, will therefore be the highest common factor of $f_1(x),f_2(x), \dots, f_{m-1}(x)$. It can be observed that the sequence so constructed will satisfy the conditions of Sturm's theorem, and thus an algorithm for determining the stated index has been developed. - -It is in applying Sturm's theorem (28) to (29), through the use of the Euclidean algorithm above, that the Routh matrix is formed. - -We get -$$ -f_3(\omega) = \frac {a_0}{b_0}\omega f_2(\omega) - f_1(\omega) \quad (30) -$$ - -and identifying the coefficients of this remainder by $c_0$, $-c_1$, $c_2$, $-c_3$, and so forth, makes our formed remainder -$$ -f_3(\omega) = c_0\omega^{n-2} - c_1\omega^{n-4} + c_2\omega^{n-6} - \cdots \quad (31) -$$ - -where -$$ -c_0 = a_1 - \frac{a_0}{b_0}b_1 = \frac{b_0a_1 - a_0b_1}{b_0}; c_1 = a_2 - \frac{a_0}{b_0}b_2 = \frac{b_0a_2 - a_0b_2}{b_0};\ldots \quad (32) -$$ - -Continuing with the Euclidean algorithm on these new coefficients gives us -$$ -f_4(\omega) = \frac {b_0}{c_0}\omega f_3(\omega) - f_2(\omega) \quad (33) -$$ - -where we again denote the coefficients of the remainder $f_4(\omega)$ by $d_0$, $-d_1$, $d_2$, $-d_3$, and so on, - -making our formed remainder -$$ -f_4(\omega) = d_0\omega^{n-3} - d_1\omega^{n-5} + d_2\omega^{n-7} - \cdots \quad (34) -$$ - -and giving us -$$ -d_0 = b_1 - \frac{b_0}{c_0}c_1 = \frac{c_0b_1 - b_0c_1}{c_0}; d_1 = b_2 - \frac{b_0}{c_0}c_2 = \frac{c_0b_2 - b_0c_2}{c_0};\ldots \quad (35) -$$ - -The rows of the Routh array are determined exactly by this algorithm when applied to the coefficients of (20). An observation worthy of note is that in the regular case the polynomials $f_1(\omega)$ and $f_2(\omega)$ have as the highest common factor $f_{n+1}(\omega)$ and thus there will be $n$ polynomials in the chain $f_1(x),f_2(x), \dots, f_m(x)$. - -Note now that, in determining the signs of the members of the sequence of polynomials $f_1(x),f_2(x), \dots,f_m(x)$ at $\omega = \pm \infty$, the dominating power of $\omega$ will be the first term of each of these polynomials, and thus only these coefficients corresponding to the highest powers of $\omega$ in $f_1(x),f_2(x), \dots$, and $f_m(x)$, which are $a_0$, $b_0$, $c_0$, $d_0$, ... determine the signs of $f_1(x)$, $f_2(x)$, ..., $f_m(x)$ at $\omega = \pm\infty$.
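The row recurrence (30)-(35) is exactly how the rows of the Routh array are computed in practice. As a minimal illustrative sketch (not part of the article), the following Python function builds the array from the coefficients of f and returns its first column, whose sign changes are counted in the argument that follows; it assumes the regular case, where no leading row entry vanishes:

```python
from fractions import Fraction

def routh_first_column(coeffs):
    """First column of the Routh array for
    f(x) = coeffs[0]*x^n + coeffs[1]*x^(n-1) + ... + coeffs[n].
    Regular case only: no leading row entry may vanish."""
    c = [Fraction(v) for v in coeffs]
    n = len(c) - 1
    rows = [c[0::2], c[1::2]]        # the a-row and the b-row of (20)
    for _ in range(n - 1):           # the full array has n + 1 rows
        p, q = rows[-2], rows[-1]
        get = lambda r, k: r[k] if k < len(r) else Fraction(0)
        # e.g. c_0 = a_1 - (a_0/b_0) b_1, c_1 = a_2 - (a_0/b_0) b_2, ...
        rows.append([get(p, k + 1) - p[0] / q[0] * get(q, k + 1)
                     for k in range(len(p) - 1)])
    return [r[0] for r in rows]

# x^3 + x^2 + 3x + 2: first column 1, 1, 1, 2 has no sign changes,
# so P = 0 and all roots have negative real parts (a stable polynomial).
col = routh_first_column([1, 1, 3, 2])
print(col, sum(a * b < 0 for a, b in zip(col, col[1:])))
```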
- -So we get $V(+\infty)=V(a_0, b_0, c_0, d_0, \dots)$; that is, $V(+\infty)$ is the number of changes of sign in the sequence $a_0\infty^n$, $b_0\infty^{n-1}$, $c_0\infty^{n-2}$, ... which is the number of sign changes in the sequence $a_0$, $b_0$, $c_0$, $d_0$, ... and $V(-\infty)=V(a_0, -b_0, c_0, -d_0, ...)$; that is, $V(-\infty)$ is the number of changes of sign in the sequence $a_0(-\infty)^n$, $b_0(-\infty)^{n-1}$, $c_0(-\infty)^{n-2}$, ... which is the number of sign changes in the sequence $a_0$, $-b_0$, $c_0$, $-d_0$, ... - -Since our chain $a_0$, $b_0$, $c_0$, $d_0$, ... will have $n$ members, it is clear that $V(+\infty) + V(-\infty) = n$: within $V(a_0, b_0, c_0, d_0, \dots)$, if going from $a_0$ to $b_0$ a sign change has not occurred, then within $V(a_0, -b_0, c_0, -d_0, \dots)$, going from $a_0$ to $-b_0$, one has, and likewise for all $n$ transitions (there will be no terms equal to zero), giving us $n$ total sign changes. - -As $\Delta = V(-\infty) - V(+\infty)$ and $n = V(+\infty) + V(-\infty)$, and from (18) $P = (n - \Delta)/2$, we have that $P = V(+\infty) = V(a_0, b_0, c_0, d_0, \dots)$ and have derived Routh's theorem: - -The number of roots of a real polynomial $f(z)$ which lie in the right half plane $\mathfrak{Re}(r_i) > 0$ is equal to the number of changes of sign in the first column of the Routh scheme. - -And for the stable case where $P = 0$ then $V(a_0, b_0, c_0, d_0, \dots) = 0$ by which we have Routh's famous criterion: - -In order for all the roots of the polynomial $f(z)$ to have negative real parts, it is necessary and sufficient that all of the elements in the first column of the Routh scheme be different from zero and of the same sign. diff --git a/wiki/wikipedia/3968.txt b/wiki/wikipedia/3968.txt deleted file mode 100644 index cdaac8f538b62d85e7e6a4d5aa3f22779f35a551..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3968.txt +++ /dev/null @@ -1,17 +0,0 @@ -A conditional proof is a proof that takes the form of asserting a conditional, and proving that the antecedent of the conditional necessarily leads to the consequent. - -The assumed antecedent of a conditional proof is called the conditional proof assumption (CPA). Thus, the goal of a conditional proof is to demonstrate that if the CPA were true, then the desired conclusion necessarily follows. The validity of a conditional proof does not require the CPA to be true, only that if it were true it would lead to the consequent. - -Conditional proofs are of great importance in mathematics. Conditional proofs exist linking several otherwise unproven conjectures, so that a proof of one conjecture may immediately imply the validity of several others. It can be much easier to show a proposition's truth to follow from another proposition than to prove it independently. - -A famous network of conditional proofs is the NP-complete class of complexity theory. There is a large number of interesting tasks (see List of NP-complete problems), and while it is not known if a polynomial-time solution exists for any of them, it is known that if such a solution exists for any one of them, one exists for all of them. Similarly, the Riemann hypothesis has many consequences already proven.
- -As an example of a conditional proof in symbolic logic, suppose we want to prove A → C (if A, then C) from the two premises A → B and B → C: - -1. A → B (premise) - -2. B → C (premise) - -3. A (conditional proof assumption) - -4. B (from 1 and 3 by modus ponens) - -5. C (from 2 and 4 by modus ponens) - -6. A → C (from 3-5 by conditional proof) - -==See also== - -* Deduction theorem - -* Logical consequence - -* Propositional calculus diff --git a/wiki/wikipedia/3969.txt b/wiki/wikipedia/3969.txt deleted file mode 100644 index 2964127b7e963a5cb6131362b177bfafffaec917..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3969.txt +++ /dev/null @@ -1,75 +0,0 @@ -In transaction processing, databases, and computer networking, the two-phase commit protocol (2PC) is a type of atomic commitment protocol (ACP). It is a distributed algorithm that coordinates all the processes that participate in a distributed atomic transaction on whether to commit or abort (roll back) the transaction. This protocol (a specialized type of consensus protocol) achieves its goal even in many cases of temporary system failure (involving process, network node, or communication failures), and is thus widely used. - -However, it is not resilient to all possible failure configurations, and in rare cases, manual intervention is needed to remedy an outcome. To accommodate recovery from failure (automatic in most cases), the protocol's participants use logging of the protocol's states. Log records, which are typically slow to generate but survive failures, are used by the protocol's recovery procedures. Many protocol variants exist that primarily differ in logging strategies and recovery mechanisms. Though usually intended to be used infrequently, recovery procedures compose a substantial portion of the protocol, due to the many possible failure scenarios to be considered and supported by the protocol. - -In a "normal execution" of any single distributed transaction (i.e., when no failure occurs, which is typically the most frequent situation), the protocol consists of two phases: - -#The commit-request phase (or voting phase), in which a coordinator process attempts to prepare all the transaction's participating processes (named participants, cohorts, or workers) to take the necessary steps for either committing or aborting the transaction and to vote, either "Yes": commit (if the transaction participant's local portion execution has ended properly), or "No": abort (if a problem has been detected with the local portion), and - -#The commit phase, in which, based on the voting of the participants, the coordinator decides whether to commit (only if all have voted "Yes") or abort the transaction (otherwise), and notifies the result to all the participants. The participants then follow with the needed actions (commit or abort) with their local transactional resources (also called recoverable resources; e.g., database data) and their respective portions in the transaction's other output (if applicable). - -The two-phase commit (2PC) protocol should not be confused with the two-phase locking (2PL) protocol, a concurrency control protocol. - -The protocol works in the following manner: one node is a designated coordinator, which is the master site, and the rest of the nodes in the network are designated the participants. The protocol assumes that there is stable storage at each node with a write-ahead log, that no node crashes forever, that the data in the write-ahead log is never lost or corrupted in a crash, and that any two nodes can communicate with each other. The last assumption is not too restrictive, as network communication can typically be rerouted.
The first two assumptions are much stronger; if a node is totally destroyed then data can be lost. - -The protocol is initiated by the coordinator after the last step of the transaction has been reached. The participants then respond with an agreement message or an abort message depending on whether the transaction has been processed successfully at the participant. - -#The coordinator sends a query to commit message to all participants and waits until it has received a reply from all participants. - -#The participants execute the transaction up to the point where they will be asked to commit. They each write an entry to their undo log and an entry to their redo log. - -#Each participant replies with an agreement message (participant votes Yes to commit), if the participant's actions succeeded, or an abort message (participant votes No, not to commit), if the participant experiences a failure that will make it impossible to commit. - -If the coordinator received an agreement message from all participants during the commit-request phase: - -#The coordinator sends a commit message to all the participants. - -#Each participant completes the operation, and releases all the locks and resources held during the transaction. - -#Each participant sends an acknowledgement to the coordinator. - -#The coordinator completes the transaction when all acknowledgments have been received. - -If any participant votes No during the commit-request phase (or the coordinator's timeout expires): - -#The coordinator sends a rollback message to all the participants. - -#Each participant undoes the transaction using the undo log, and releases the resources and locks held during the transaction. - -#Each participant sends an acknowledgement to the coordinator. - -#The coordinator undoes the transaction when all acknowledgements have been received. - -
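Put together, the two phases amount to a vote followed by a broadcast decision. Below is a minimal, illustrative in-memory sketch in Python (not from the article; it omits the timeouts, stable-storage logging, and recovery machinery discussed above), followed by the message chart:

```python
from enum import Enum

class Vote(Enum):
    YES = "yes"
    NO = "no"

class Participant:
    """Toy participant; the list stands in for its write-ahead log."""
    def __init__(self, can_commit=True):
        self.can_commit = can_commit
        self.log = []

    def prepare(self):
        # Commit-request (voting) phase: record prepare*/abort*, then vote.
        self.log.append("prepare*" if self.can_commit else "abort*")
        return Vote.YES if self.can_commit else Vote.NO

    def finish(self, commit):
        # Commit phase: apply the coordinator's decision.
        self.log.append("commit*" if commit else "abort*")

def two_phase_commit(participants):
    """Coordinator: commit only if every participant votes Yes."""
    votes = [p.prepare() for p in participants]      # phase 1
    decision = all(v is Vote.YES for v in votes)
    for p in participants:                           # phase 2
        p.finish(decision)
    return decision

print(two_phase_commit([Participant(), Participant()]))                  # True
print(two_phase_commit([Participant(), Participant(can_commit=False)])) # False
```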
-Coordinator                                        Participant
-                             QUERY TO COMMIT
-               -------------------------------->
-                             VOTE YES/NO          prepare*/abort*
-               <-------------------------------
-commit*/abort*               COMMIT/ROLLBACK
-               -------------------------------->
-                             ACKNOWLEDGMENT       commit*/abort*
-               <--------------------------------
-end
- -An * next to the record type means that the record is forced to stable storage. - -The greatest disadvantage of the two-phase commit protocol is that it is a blocking protocol. If the coordinator fails permanently, some participants will never resolve their transactions: after a participant has sent an agreement message to the coordinator, it will block until a commit or rollback is received. - -In many cases the 2PC protocol is distributed in a computer network. It is easily distributed by implementing multiple dedicated 2PC components similar to each other, typically named Transaction managers (TMs; also referred to as 2PC agents or Transaction Processing Monitors), that carry out the protocol's execution for each transaction (e.g., The Open Group's X/Open XA). The databases involved with a distributed transaction, the participants, both the coordinator and participants, register with close (nearby) TMs, typically residing on the same network nodes as the participants, for terminating that transaction using 2PC. Each distributed transaction has an ad hoc set of TMs, the TMs to which the transaction participants register. A leader, the coordinator TM, exists for each transaction to coordinate 2PC for it, typically the TM of the coordinator database. However, the coordinator role can be transferred to another TM for performance or reliability reasons. Rather than exchanging 2PC messages among themselves, the participants exchange the messages with their respective TMs. The relevant TMs communicate among themselves to execute the 2PC protocol schema above, "representing" the respective participants, for terminating that transaction. With this architecture the protocol is fully distributed (it needs no central processing component or data structure), and scales up effectively with the number of network nodes (network size). - -This common architecture is also effective for the distribution of other atomic commitment protocols besides 2PC, since all such protocols use the same voting mechanism and outcome propagation to protocol participants. An assumption about the outcome of transactions, either commit or abort, can save both messages and logging operations by the participants during the 2PC protocol's execution. For example, under the presumed-abort assumption, if during system recovery from failure no logged evidence for commit of some transaction is found by the recovery procedure, then it assumes that the transaction has been aborted, and acts accordingly. This means that it does not matter if aborts are logged at all, and such logging can be saved under this assumption. Typically a penalty of additional operations is paid during recovery from failure, depending on optimization type. Thus the best variant of optimization, if any, is chosen according to failure and transaction outcome statistics. - -The dynamic two-phase commit (D2PC) protocol is a variant of Tree 2PC with no predetermined coordinator. It subsumes several optimizations that have been proposed earlier. Agreement messages (Yes votes) start to propagate from all the leaves, each leaf sending its message when it completes its tasks on behalf of the transaction (becomes ready). An intermediate (non-leaf) node sends an agreement message to the last (single) neighboring node from which an agreement message has not yet been received, once it has received agreement messages from all its other neighbors. The coordinator is determined dynamically by racing agreement messages over the transaction tree, at the place where they collide. They collide either at a transaction tree node, which becomes the coordinator, or on a tree edge.
In the latter case one of the edge's two nodes is elected as a coordinator (any node). D2PC is time optimal (among all the instances of a specific transaction tree, and any specific Tree 2PC protocol implementation; all instances have the same tree; each instance has a different node as coordinator): by choosing an optimal coordinator D2PC commits both the coordinator and each participant in the minimum possible time, allowing the earliest possible release of locked resources in each transaction participant (tree node). diff --git a/wiki/wikipedia/397.txt b/wiki/wikipedia/397.txt deleted file mode 100644 index f72a4e55652e88b70cf4d81b76e86ef05e1e59c5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/397.txt +++ /dev/null @@ -1,21 +0,0 @@ -Thomson's lamp is a philosophical puzzle based on infinites. It was devised in 1954 by British philosopher James F. Thomson, who used it to analyze the possibility of a supertask, which is the completion of an infinite number of tasks. - -Consider a lamp with a toggle switch. Flicking the switch once turns the lamp on. Another flick will turn the lamp off. Now suppose that there is a being who is able to perform the following task: starting a timer, he turns the lamp on. At the end of one minute, he turns it off. At the end of another half minute, he turns it on again. At the end of another quarter of a minute, he turns it off. At the next eighth of a minute, he turns it on again, and he continues thus, flicking the switch each time after waiting exactly one-half the time he waited before flicking it previously. The sum of this infinite series of time intervals is exactly two minutes. - -The following question is then considered: Is the lamp on or off at two minutes? Thomson reasoned that this supertask creates a contradiction: - -The question is related to the behavior of Grandi's series, i.e. the divergent infinite series - -* S = 1 − 1 + 1 − 1 + 1 − 1 + · · · - -For even values of n, the above finite series sums to 1; for odd values, it sums to 0. In other words, as n takes the values of each of the non-negative integers 0, 1, 2, 3, ... in turn, the series generates the sequence {1, 0, 1, 0, ...}, representing the changing state of the lamp. The sequence does not converge as n tends to infinity, so neither does the infinite series. - -Another way of illustrating this problem is to rearrange the series: - -* S = 1 − (1 − 1 + 1 − 1 + 1 − 1 + · · ·) - -The unending series in the brackets is exactly the same as the original series S. This means S = 1 − S, which implies S = 1/2. In fact, this manipulation can be rigorously justified: there are generalized definitions for the sums of series that do assign Grandi's series the value 1/2. - -One of Thomson's objectives in his original 1954 paper is to differentiate supertasks from their series analogies. He writes of the lamp and Grandi's series, - -Later, he claims that even the divergence of a series does not provide information about its supertask: "The impossibility of a super-task does not depend at all on whether some vaguely-felt-to-be-associated arithmetical sequence is convergent or divergent."
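The correspondence between the lamp's states and the partial sums of Grandi's series is easy to see numerically. A small illustrative sketch (not from the article): the partial sums oscillate between 1 and 0, while their running average converges to 1/2, the generalized (Cesàro) sum mentioned above:

```python
partial_sums, s = [], 0
for n in range(10):
    s += (-1) ** n          # 1 - 1 + 1 - 1 + ...
    partial_sums.append(s)

print(partial_sums)                              # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
cesaro = sum(partial_sums) / len(partial_sums)   # mean of the partial sums
print(cesaro)                                    # 0.5, the Cesaro value
```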
diff --git a/wiki/wikipedia/3970.txt b/wiki/wikipedia/3970.txt deleted file mode 100644 index 44c34742ef123c22ed54b811f001063c0f200358..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3970.txt +++ /dev/null @@ -1,65 +0,0 @@ -In mathematics, the problem of differentiation of integrals is that of determining under what circumstances the mean value integral of a suitable function on a small neighbourhood of a point approximates the value of the function at that point. More formally, given a space X with a measure μ and a metric d, one asks for what functions f : X → R does -$$ -\lim_{r \to 0} \frac1{\mu \big( B_{r} (x) \big)} \int_{B_{r} (x)} f(y) \mathrm{d} \mu(y) = f(x) -$$ - -for all (or at least μ-almost all) x ∈ X? (Here, as in the rest of the article, Br(x) denotes the open ball in X with d-radius r and centre x.) This is a natural question to ask, especially in view of the heuristic construction of the Riemann integral, in which it is almost implicit that f(x) is a "good representative" for the values of f near x. - -One result on the differentiation of integrals is the Lebesgue differentiation theorem, as proved by Henri Lebesgue in 1910. Consider n-dimensional Lebesgue measure λn on n-dimensional Euclidean space Rn. Then, for any locally integrable function f : Rn → R, one has -$$ -\lim_{r \to 0} \frac1{\lambda^{n} \big( B_{r} (x) \big)} \int_{B_{r} (x)} f(y) \mathrm{d} \lambda^{n} (y) = f(x) -$$ - -for λn-almost all points x ∈ Rn. It is important to note, however, that the measure zero set of "bad" points depends on the function f. - -The result for Lebesgue measure turns out to be a special case of the following result, which is based on the Besicovitch covering theorem: if μ is any locally finite Borel measure on Rn and f : Rn → R is locally integrable with respect to μ, then -$$ -\lim_{r \to 0} \frac1{\mu \big( B_{r} (x) \big)} \int_{B_{r} (x)} f(y) \mathrm{d} \mu (y) = f(x) -$$ - -for μ-almost all points x ∈ Rn. - -The problem of the differentiation of integrals is much harder in an infinite-dimensional setting. Consider a separable Hilbert space (H, ⟨ , ⟩) equipped with a Gaussian measure γ. As stated in the article on the Vitali covering theorem, the Vitali covering theorem fails for Gaussian measures on infinite-dimensional Hilbert spaces. Two results of David Preiss (1981 and 1983) show the kind of difficulties that one can expect to encounter in this setting: - -* There is a Gaussian measure γ on a separable Hilbert space H and a Borel set M ⊆ H so that, for γ-almost all x ∈ H, \lim_{r \to 0} \frac{\gamma \big( M \cap B_{r} (x) \big)}{\gamma \big( B_{r} (x) \big)} = 1. - -* There is a Gaussian measure γ on a separable Hilbert space H and a function f ∈ L1(H, γ; R) such that \lim_{r \to 0} \inf \left\{ \left. \frac1{\gamma \big( B_{s} (x) \big)} \int_{B_{s} (x)} f(y) \mathrm{d} \gamma(y) \right| x \in H, 0 < s < r \right\} = + \infty. - -However, there is some hope if one has good control over the covariance of γ. Let the covariance operator of γ be S : H → H given by -$$ -\langle Sx, y \rangle = \int_{H} \langle x, z \rangle \langle y, z \rangle \mathrm{d} \gamma(z), -$$ - -or, for some countable orthonormal basis (ei)i∈N of H, -$$ -Sx = \sum_{i \in \mathbf{N}} \sigma_{i}^{2} \langle x, e_{i} \rangle e_{i}. 
-$$ - -In 1981, Preiss and Jaroslav Tišer showed that if there exists a constant 0 < q < 1 such that -$$ -\sigma_{i + 1}^{2} \leq q \sigma_{i}^{2}, -$$ - -then, for all f ∈ L1(H, γ; R), -$$ -\frac1{\gamma \big( B_{r} (x) \big)} \int_{B_{r} (x)} f(y) \mathrm{d} \gamma(y) \xrightarrow[r \to 0]{\gamma} f(x), -$$ - -where the convergence is convergence in measure with respect to γ. In 1988, Tišer showed that if -$$ -\sigma_{i + 1}^{2} \leq \frac{\sigma_{i}^{2}}{i^{\alpha}} -$$ - -for some α > 5 ⁄ 2, then -$$ -\frac1{\gamma \big( B_{r} (x) \big)} \int_{B_{r} (x)} f(y) \mathrm{d} \gamma(y) \xrightarrow[r \to 0]{} f(x), -$$ - -for γ-almost all x and all f ∈ Lp(H, γ; R), p > 1. - -As of 2007, it is still an open question whether there exists an infinite-dimensional Gaussian measure γ on a separable Hilbert space H so that, for all f ∈ L1(H, γ; R), -$$ -\lim_{r \to 0} \frac1{\gamma \big( B_{r} (x) \big)} \int_{B_{r} (x)} f(y) \mathrm{d} \gamma(y) = f(x) -$$ - -for γ-almost all x ∈ H. However, it is conjectured that no such measure exists, since the σi would have to decay very rapidly. diff --git a/wiki/wikipedia/3971.txt b/wiki/wikipedia/3971.txt deleted file mode 100644 index ef49d1d2b34ced1013dbf2c2aa0976a77ab5b060..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3971.txt +++ /dev/null @@ -1,39 +0,0 @@ -The Hilbert–Bernays paradox is a distinctive paradox belonging to the family of the paradoxes of reference (like Berry's paradox). It is named after David Hilbert and Paul Bernays. - -The paradox appears in Hilbert and Bernays' Grundlagen der Mathematik and is used by them to show that a sufficiently strong consistent theory cannot contain its own reference functor. Although it has gone largely unnoticed in the course of the 20th century, it has recently been rediscovered and appreciated for the distinctive difficulties it presents. - -Just as the semantic property of truth seems to be governed by the naive schema: - -(T) The sentence ′P′ is true if and only if P - -(where single quotes refer to the linguistic expression inside the quotes), the semantic property of reference seems to be governed by the naive schema: - -(R) If a exists, the referent of the name ′a′ is identical with a - -Consider however a name h for (natural) numbers satisfying: - -(H) h is identical with ′(the referent of h) +1′ - -Suppose that, for some number n: - -(1) The referent of h is identical with n - -Then, surely, the referent of h exists, and so does (the referent of h)+1. By (R), it then follows that: - -(2) The referent of ′(the referent of h)+1′ is identical with (the referent of h)+1 - -and so, by (H) and the principle of indiscernibility of identicals, it is the case that: - -(3) The referent of h is identical with (the referent of h)+1 - -But, again by indiscernibility of identicals, (1) and (3) yield: - -(4) The referent of h is identical with n +1 - -and, by transitivity of identity, (1) together with (4) yields: - -(5) n is identical with n+1 - -But (5) is absurd, since no number is identical with its successor. - -Since every sufficiently strong theory will have to accept something like (H), absurdity can only be avoided either by rejecting the principle of naive reference (R) or by rejecting classical logic (which validates the reasoning from (R) and (H) to absurdity). On the first approach, typically whatever one says about the Liar paradox carries over smoothly to the Hilbert–Bernays paradox.
The paradox presents instead distinctive difficulties for many solutions pursuing the second approach: for example, solutions to the Liar paradox that reject the law of excluded middle (which is not used by the Hilbert–Bernays paradox) have denied that there is such a thing as the referent of h; solutions to the Liar paradox that reject the law of noncontradiction (which is likewise not used by the Hilbert–Bernays paradox) have claimed that h refers to more than one object. diff --git a/wiki/wikipedia/3972.txt b/wiki/wikipedia/3972.txt deleted file mode 100644 index dec4b7babf52207752d9851571c3f9a1cdf202e4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3972.txt +++ /dev/null @@ -1,185 +0,0 @@ -In mathematics, the arithmetic–geometric mean of two positive real numbers x and y is defined as follows: - -Call x and y a0 and g0: - -\begin{align} - -a_0 &= x,\\ - -g_0 &= y. - -\end{align} - -Then define the two interdependent sequences (an) and (gn) as - -\begin{align} - -a_{n+1} &= \tfrac12(a_n + g_n),\\ - -g_{n+1} &= \sqrt{a_n g_n} . - -\end{align} - -These two sequences converge to the same number, the arithmetic–geometric mean of x and y; it is denoted by M(x, y), or sometimes by agm(x, y) or AGM(x, y). - -The arithmetic–geometric mean is used in fast algorithms for exponential and trigonometric functions, as well as some mathematical constants, in particular, computing π. - -To find the arithmetic–geometric mean of a0 = 24 and g0 = 6, iterate as follows: - -\begin{array}{rcccl} - -a_1 & = & \tfrac12(24 + 6) & = & 15\\ - -g_1 & = & \sqrt{24 \cdot 6} & = & 12\\ - -a_2 & = & \tfrac12(15 + 12) & = & 13.5\\ - -g_2 & = & \sqrt{15 \cdot 12} & = & 13.416\ 407\ 8649\dots\\ - -& & \vdots & & - -\end{array} - -The first five iterations give the following values: - -0: a_0 = 24, g_0 = 6 - -1: a_1 = 15, g_1 = 12 - -2: a_2 = 13.5, g_2 = 13.416 407 864 998… - -3: a_3 = 13.458 203 932 499…, g_3 = 13.458 139 030 990… - -4: a_4 = 13.458 171 481 745…, g_4 = 13.458 171 481 706… - -The number of digits in which an and gn agree approximately doubles with each iteration. The arithmetic–geometric mean of 24 and 6 is the common limit of these two sequences, which is approximately 13.4581714817256154207668131569743992430538388544. - -The first algorithm based on this sequence pair appeared in the works of Lagrange. Its properties were further analyzed by Gauss. - -The geometric mean of two positive numbers is never bigger than the arithmetic mean (see inequality of arithmetic and geometric means). As a consequence, for n > 0, (gn) is an increasing sequence, (an) is a decreasing sequence, and gn ≤ M(x, y) ≤ an. These are strict inequalities if x ≠ y. - -M(x, y) is thus a number between the geometric and arithmetic mean of x and y; it is also between x and y. - -If r ≥ 0, then M(rx,ry) = r M(x,y). - -There is an integral-form expression for M(x,y): - -\begin{align} - -M(x,y) &= \frac{\pi}{2} \left( \int_0^\frac{\pi}{2}\frac{d\theta}{\sqrt{x^2\cos^2\theta+y^2\sin^2\theta}} \right)^{-1}\\ - -&=\pi\left(\int_0^\infty \frac{dt}{\sqrt{t(t+x^2)(t+y^2)}}\right)^{-1}\\ - -&= \frac{\pi}{4} \cdot \frac{x + y}{K\left( \frac{x - y}{x + y} \right)} - -\end{align} - -where K(k) is the complete elliptic integral of the first kind: - -K(k) = \int_0^\frac{\pi}{2}\frac{d\theta}{\sqrt{1 - k^2\sin^2(\theta)}} - -Indeed, since the arithmetic–geometric process converges so quickly, it provides an efficient way to compute elliptic integrals via this formula. In engineering, it is used for instance in elliptic filter design.
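The iteration above converges quadratically, which makes the table trivial to reproduce. A minimal illustrative sketch in Python (not from the article); the second half also evaluates the Gauss–Legendre formula for π that is quoted later in the article:

```python
import math

def agm(x, y, tol=1e-15):
    """Arithmetic-geometric mean M(x, y) by the iteration defined above."""
    a, g = x, y
    while abs(a - g) > tol * a:
        a, g = (a + g) / 2, math.sqrt(a * g)
    return a

print(agm(24, 6))                  # 13.458171481725615, as in the table
print(1 / agm(1, math.sqrt(2)))    # ~0.8346268, Gauss's constant (see below)

# pi = 4 M(1, 1/sqrt(2))^2 / (1 - sum_{j>=1} 2^(j+1) c_j^2),
# with c_j = (a_{j-1} - g_{j-1}) / 2
a, g, s, j = 1.0, 2 ** -0.5, 0.0, 1
for _ in range(5):                 # five iterations are ample in doubles
    c = (a - g) / 2
    s += 2 ** (j + 1) * c * c
    a, g, j = (a + g) / 2, math.sqrt(a * g), j + 1
print(4 * a * a / (1 - s))         # 3.141592653589793
```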
- -The arithmetic–geometric mean is connected to the Jacobi theta function $\theta_3$ by - -M(1,x)=\theta_3^{-2}\left(\exp \left(-\pi \frac{M(1,x)}{M\left(1,\sqrt{1-x^2}\right)}\right)\right)=\left(\sum_{n\in\mathbb{Z}}\exp \left(-n^2 \pi \frac{M(1,x)}{M\left(1,\sqrt{1-x^2}\right)}\right)\right)^{-2}, - -which upon setting $x=1/\sqrt{2}$ gives - -M(1,1/\sqrt{2})=\left(\sum_{n\in\mathbb{Z}}e^{-n^2\pi}\right)^{-2}. - -The reciprocal of the arithmetic–geometric mean of 1 and the square root of 2 is called Gauss's constant, after Carl Friedrich Gauss. - -\frac{1}{M(1, \sqrt{2})} = G = 0.8346268\dots - -In 1941, $M(1,\sqrt{2})$ (and hence $G$) was proven transcendental by Theodor Schneider. - -The geometric–harmonic mean can be calculated by an analogous method, using sequences of geometric and harmonic means. One finds that GH(x,y) = 1/M(1/x, 1/y) = xy/M(x,y). - -The arithmetic–harmonic mean can be similarly defined, but takes the same value as the geometric mean (see section "Calculation" there). - -The arithmetic–geometric mean can be used to compute – among others – logarithms, complete and incomplete elliptic integrals of the first and second kind, and Jacobi elliptic functions. - -From the inequality of arithmetic and geometric means we can conclude that: - -g_n \leq a_n - -and thus - -g_{n + 1} = \sqrt{g_n \cdot a_n} \geq \sqrt{g_n \cdot g_n} = g_n - -that is, the sequence gn is nondecreasing. - -Furthermore, it is easy to see that it is also bounded above by the larger of x and y (which follows from the fact that both the arithmetic and geometric means of two numbers lie between them). Thus, by the monotone convergence theorem, the sequence is convergent, so there exists a g such that: - -\lim_{n\to \infty}g_n = g - -However, we can also see that: - -a_n = \frac{g_{n + 1}^2}{g_n} - -and so: - -\lim_{n\to \infty}a_n = \lim_{n\to \infty}\frac{g_{n + 1}^2}{g_{n}} = \frac{g^2}{g} = g - -Q.E.D. - -This proof is given by Gauss. - -Let - -I(x,y) = \int_0^{\pi/2}\frac{d\theta}{\sqrt{x^2\cos^2\theta+y^2\sin^2\theta}} , - -Changing the variable of integration to $\theta'$, where - - \sin\theta = \frac{2x\sin\theta'}{(x+y)+(x-y)\sin^2\theta'} , - -gives - - - -\begin{align} - -I(x,y) &= \int_0^{\pi/2}\frac{d\theta'}{\sqrt{\bigl(\frac12(x+y)\bigr)^2\cos^2\theta'+\bigl(\sqrt{xy}\bigr)^2\sin^2\theta'}}\\ - -&= I\bigl(\tfrac12(x+y),\sqrt{xy}\bigr) . - -\end{align} - - - -Thus, we have - - - -\begin{align} - -I(x,y) &= I(a_1, g_1) = I(a_2, g_2) = \cdots\\ - -&= I\bigl(M(x,y),M(x,y)\bigr) = \pi/\bigr(2M(x,y)\bigl) . - -\end{align} - - - -The last equality comes from observing that $I(z,z) = \pi/(2z)$. - -Finally, we obtain the desired result - -M(x,y) = \pi/\bigl(2 I(x,y) \bigr) . - -For example, according to the Gauss–Legendre algorithm: - -\pi = \frac{4M(1,1/\sqrt{2})^2} {1 - \sum_{j=1}^\infty 2^{j+1} c_j^2} , - -where - -c_j = \frac{1}{2}\left(a_{j-1}-g_{j-1}\right) , - -with $a_0=1$ and $g_0=1/\sqrt{2}$, which can be computed without loss of precision using - -c_j = \frac{c_{j-1}^2}{4a_j} . - -Taking $a_0 = 1$ and $g_0 = \cos\alpha$ yields the AGM - -M(1,\cos\alpha) = \frac{\pi}{2K(\sin \alpha)} , - -where K(k) is a complete elliptic integral of the first kind: - -K(k) = \int_0^{\pi/2}(1 - k^2 \sin^2\theta)^{-1/2} d\theta. - -That is to say that this quarter period may be efficiently computed through the AGM, - -K(k) = \frac{\pi}{2M(1,\sqrt{1-k^2})} . - -Using this property of the AGM along with the ascending transformations of John Landen, Richard P. 
Brent suggested the first AGM algorithms for the fast evaluation of elementary transcendental functions (ex, cos x, sin x). Subsequently, many authors went on to study the use of the AGM algorithms. diff --git a/wiki/wikipedia/3973.txt b/wiki/wikipedia/3973.txt deleted file mode 100644 index 6bf94cff4d0b5b044f024968d7c201f9f3b098f0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3973.txt +++ /dev/null @@ -1,77 +0,0 @@ -In graph theory, Vizing's theorem states that every simple undirected graph may be edge colored using a number of colors that is at most one larger than the maximum degree Δ of the graph. - -At least Δ colors are always necessary, so the undirected graphs may be partitioned into two classes: "class one" graphs for which Δ colors suffice, and "class two" graphs for which Δ + 1 colors are necessary. - -A more general version of Vizing's theorem states that every undirected multigraph without loops can be colored with at most Δ+µ colors, where µ is the multiplicity of the multigraph. - -The theorem is named for Vadim G. Vizing who published it in 1964. - -When Δ = 1, the graph G must itself be a matching, with no two edges adjacent, and its edge chromatic number is one. That is, all graphs with Δ(G) = 1 are of class one. - -When Δ = 2, the graph G must be a disjoint union of paths and cycles. If all cycles are even, they can be 2-edge-colored by alternating the two colors around each cycle. However, if there exists at least one odd cycle, then no 2-edge-coloring is possible. That is, a graph with Δ = 2 is of class one if and only if it is bipartite. - -This proof is inspired by Diestel. - -Let G = (V, E) be a simple undirected graph. We proceed by induction on m, the number of edges. If the graph is empty, the theorem trivially holds. Let m > 0 and suppose a proper (Δ+1)-edge-coloring exists for all G - xy where xy ∈ E. - -We say that color α ∈ {1,...,Δ+1} is missing in x ∈ V with respect to proper (Δ+1)-edge-coloring c if c(xy) ≠ α for all y ∈ N(x). Also, let α/β-path from x denote the unique maximal path starting in x with α-colored edge and alternating the colors of edges (the second edge has color β, the third edge has color α and so on), its length can be 0. Note that if c is a proper (Δ+1)-edge-coloring of G then every vertex has a missing color with respect to c. - -Suppose that no proper (Δ+1)-edge-coloring of G exists. This is equivalent to this statement: - -(1) Let xy ∈ E and c be arbitrary proper (Δ+1)-edge-coloring of G - xy and α be missing from x and β be missing from y with respect to c. Then the α/β-path from y ends in x. - -This is equivalent, because if (1) doesn't hold, then we can interchange the colors α and β on the α/β-path and set the color of xy to be α, thus creating a proper (Δ+1)-edge-coloring of G from c. The other way around, if a proper (Δ+1)-edge-coloring exists, then we can delete xy, restrict the coloring and (1) won't hold either. - -Now, let xy0 ∈ E and c0 be a proper (Δ+1)-edge-coloring of G - xy0 and α be missing in x with respect to c0. We define y0,...,yk to be a maximal sequence of neighbours of x such that c0(xyi) is missing in yi-1 with respect to c0 for all 0 < i ≤ k. - -We define colorings c1,...,ck as - -ci(xyj)=c0(xyj+1) for all 0 ≤ j < i, - -ci(xyi) not defined, - -ci(e)=c0(e) otherwise. - -Then ci is a proper (Δ+1)-edge-coloring of G - xyi due to definition of y0,...,yk. Also, note that the missing colors in x are the same with respect to ci for all 0 ≤ i ≤ k. 
- -Let β be the color missing in yk with respect to c0; then β is also missing in yk with respect to ci for all 0 ≤ i ≤ k. Note that β cannot be missing in x, otherwise we could easily extend ck; therefore an edge with color β is incident to x for all cj. From the maximality of k, there exists 1 ≤ i < k such that c0(xyi) = β. From the definition of c1,...,ck this holds: - -c0(xyi) = ci-1(xyi) = ck(xyi-1) = β - -Let P be the α/β-path from yk with respect to ck. From (1), P has to end in x. But α is missing in x, so it has to end with an edge of color β. Therefore, the last edge of P is yi-1x. Now, let P' be the α/β-path from yi-1 with respect to ci-1. Since P' is uniquely determined and the inner edges of P are not changed in c0,...,ck, the path P' uses the same edges as P in reverse order and visits yk. The edge leading to yk clearly has color α. But β is missing in yk, so P' ends in yk. This contradicts (1) above. - -Several authors have provided additional conditions that classify some graphs as being of class one or class two, but do not provide a complete classification. For instance, if the vertices of the maximum degree Δ in a graph G form an independent set, or more generally if the induced subgraph for this set of vertices is a forest, then G must be of class one. - -Erdős showed that almost all graphs are of class one. That is, in the Erdős–Rényi model of random graphs, in which all n-vertex graphs are equally likely, let p(n) be the probability that an n-vertex graph drawn from this distribution is of class one; then p(n) approaches one in the limit as n goes to infinity. For more precise bounds on the rate at which p(n) converges to one, see Frieze. - -Vizing showed that a planar graph is of class one if its maximum degree is at least eight. In contrast, he observed that for any maximum degree in the range from two to five, there exist planar graphs of class two. For degree two, any odd cycle is such a graph, and for degree three, four, and five, these graphs can be constructed from platonic solids by replacing a single edge by a path of two adjacent edges. - -In Vizing's planar graph conjecture, Vizing states that all simple, planar graphs with maximum degree six or seven are of class one, closing the remaining possible cases. - -Independently, Zhang and Sanders partially proved Vizing's planar graph conjecture by showing that all planar graphs with maximum degree seven are of class one. - -Thus, the only case of the conjecture that remains unsolved is that of maximum degree six. This conjecture has implications for the total coloring conjecture. - -The planar graphs of class two constructed by subdivision of the platonic solids are not regular: they have vertices of degree two as well as vertices of higher degree. - -The four color theorem (proved by Appel) on vertex coloring of planar graphs is equivalent to the statement that every bridgeless 3-regular planar graph is of class one. - -In 1969, Branko Grünbaum conjectured that every 3-regular graph with a polyhedral embedding on any two-dimensional oriented manifold such as a torus must be of class one. In this context, a polyhedral embedding is a graph embedding such that every face of the embedding is topologically a disk and such that the dual graph of the embedding is simple, with no self-loops or multiple adjacencies.
If true, this would be a generalization of the four color theorem, which was shown by Tait to be equivalent to the statement that 3-regular graphs with a polyhedral embedding on a sphere are of class one. However, Kochol showed the conjecture to be false by finding snarks that have polyhedral embeddings on high-genus orientable surfaces. Based on this construction, he also showed that it is NP-complete to tell whether a polyhedrally embedded graph is of class one. - -Misra and Gries describe a polynomial time algorithm for coloring the edges of any graph with Δ + 1 colors, where Δ is the maximum degree of the graph. That is, the algorithm uses the optimal number of colors for graphs of class two, and uses at most one more color than necessary for all graphs. Their algorithm follows the same strategy as Vizing's original proof of his theorem: it starts with an uncolored graph, and then repeatedly finds a way of recoloring the graph in order to increase the number of colored edges by one. - -More specifically, suppose that uv is an uncolored edge in a partially colored graph. The algorithm of Misra and Gries may be interpreted as constructing a directed pseudoforest P (a graph in which each vertex has at most one outgoing edge) on the neighbors of u: for each neighbor p of u, the algorithm finds a color c that is not used by any of the edges incident to p, finds the vertex q (if it exists) for which edge uq has color c, and adds pq as an edge to P. There are two cases: - -*If the pseudoforest P constructed in this way contains a path from v to a vertex w that has no outgoing edges in P, then there is a color c that is available both at u and w. Recoloring edge uw with color c allows the remaining edge colors to be shifted one step along this path: for each vertex p in the path, edge up takes the color that was previously used by the successor of p in the path. This leads to a new coloring that includes edge uv. - -*If, on the other hand, the path starting from v in the pseudoforest P leads to a cycle, let w be the neighbor of u at which the path joins the cycle, let c be the color of edge uw, and let d be a color that is not used by any of the edges at vertex u. Then swapping colors c and d on a Kempe chain either breaks the cycle or the edge on which the path joins the cycle, leading to the previous case. - -With some simple data structures to keep track of the colors that are used and available at each vertex, the construction of P and the recoloring steps of the algorithm can all be implemented in time O(n), where n is the number of vertices in the input graph. Since these steps need to be repeated m times, with each repetition increasing the number of colored edges by one, the total time is O(mn). - -In an unpublished technical report, Gabow claimed a faster $O(m\sqrt n\log n)$ time bound for the same problem of coloring with Δ + 1 colors. - -In both Gutin and Soifer, Vizing mentions that his work was motivated by a theorem of Shannon showing that multigraphs could be colored with at most (3/2)Δ colors. Although Vizing's theorem is now standard material in many graph theory textbooks, Vizing had trouble publishing the result initially, and his paper on it appears in an obscure journal, Diskret. Analiz.
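To make the class one/class two distinction above concrete, here is a minimal brute-force sketch (our own illustration, not Vizing's proof or the Misra–Gries algorithm; the helper name is hypothetical) that computes the edge chromatic number of tiny graphs and exhibits both classes:

```
from itertools import product

def edge_chromatic_number(edges):
    """Smallest k admitting a proper k-edge-coloring, by exhaustive search."""
    edges = list(edges)
    k = 1
    while True:
        for coloring in product(range(k), repeat=len(edges)):
            ok = True
            for i in range(len(edges)):
                for j in range(i + 1, len(edges)):
                    # edges sharing a vertex must receive distinct colors
                    if set(edges[i]) & set(edges[j]) and coloring[i] == coloring[j]:
                        ok = False
                        break
                if not ok:
                    break
            if ok:
                return k
        k += 1

triangle = [(0, 1), (1, 2), (0, 2)]          # Δ = 2, odd cycle
square = [(0, 1), (1, 2), (2, 3), (3, 0)]    # Δ = 2, even cycle
print(edge_chromatic_number(triangle))  # 3 = Δ + 1: class two
print(edge_chromatic_number(square))    # 2 = Δ: class one
```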
diff --git a/wiki/wikipedia/3974.txt b/wiki/wikipedia/3974.txt deleted file mode 100644 index 521dcbb8dc55b51d5b6462e7027031bc85b85eb7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3974.txt +++ /dev/null @@ -1,119 +0,0 @@ -In complex analysis, Jordan's lemma is a result frequently used in conjunction with the residue theorem to evaluate contour integrals and improper integrals. It is named after the French mathematician Camille Jordan. - -Consider a complex-valued, continuous function f, defined on a semicircular contour -$$ -C_R = \{R e^{i \theta} \mid \theta \in [0, \pi]\} -$$ - -of positive radius R lying in the upper half-plane, centered at the origin. If the function f is of the form -$$ -f(z) = e^{i a z} g(z) , \quad z \in C_R , -$$ - -with a positive parameter a, then Jordan's lemma states the following upper bound for the contour integral: -$$ -\left| \int_{C_R} f(z) dz \right| \le \frac{\pi}{a} M_R \quad \text{where} \quad M_R := \max_{\theta \in [0,\pi]} \left| g \left(R e^{i \theta}\right) \right| . -$$ - -with equality when g vanishes everywhere, in which case both sides are identically zero. An analogous statement for a semicircular contour in the lower half-plane holds when a < 0. - -* If f is continuous on the semicircular contour CR for all large R and - -(*) $\lim_{R \to \infty} M_R = 0$, - -then by Jordan's lemma \lim_{R \to \infty} \int_{C_R} f(z) dz = 0. - -* For the case a = 0, see the estimation lemma. - -* Compared to the estimation lemma, the upper bound in Jordan's lemma does not explicitly depend on the length of the contour CR. - -Jordan's lemma yields a simple way to calculate the integral along the real axis of functions f(z) = e^{iaz} g(z) holomorphic on the upper half-plane and continuous on the closed upper half-plane, except possibly at a finite number of non-real points z1, z2, …, zn. Consider the closed contour C, which is the concatenation of the semicircular arc C1 and the segment C2 of the real axis. By definition, -$$ -\oint_C f(z) dz = \int_{C_1}f(z)dz + \int_{C_2} f(z)dz. -$$ - -Since on C2 the variable z is real, the second integral is real: -$$ -\int_{C_2} f(z) dz = \int_{-R}^{R} f(x)dx. -$$ - -The left-hand side may be computed using the residue theorem to get, for all R larger than the maximum of |z1|, |z2|, …, |zn|, -$$ -\oint_{C} f(z) dz = 2\pi i \sum_{k=1}^n \operatorname{Res}(f, z_k), -$$ - -where Res(f, zk) denotes the residue of f at the singularity zk. Hence, if f satisfies condition (*), then taking the limit as R tends to infinity, the contour integral over C1 vanishes by Jordan's lemma and we get the value of the improper integral -$$ -\int_{-\infty}^{\infty} f(x)dx = 2\pi i \sum_{k=1}^n \operatorname{Res}(f, z_k). -$$ - -The function -$$ -f(z)=\frac{e^{iz}}{1+z^2},\qquad z\in{\mathbb C}\setminus\{i,-i\}, -$$ - -satisfies the condition of Jordan's lemma with a = 1 for all R > 0 with R ≠ 1. Note that, for R > 1, -$$ -M_R=\max_{\theta\in[0,\pi]}\frac{1}{\left|1+R^2e^{2i\theta}\right|}=\frac{1}{R^2-1}, -$$ - -hence (*) holds. Since the only singularity of f in the upper half plane is at z = i, the above application yields -$$ -\int_{-\infty}^\infty \frac{e^{ix}}{1+x^2}dx=2\pi i\operatorname{Res}(f,i). -$$ - -Since z = i is a simple pole of f and 1 + z^2 = (z + i)(z − i), we obtain - -\operatorname{Res}(f,i)=\lim_{z\to i}(z-i)f(z) - -=\lim_{z\to i}\frac{e^{iz}}{z+i}=\frac{e^{-1}}{2i} - -so that -$$ -\int_{-\infty}^\infty \frac{\cos x}{1+x^2}dx=\operatorname{Re}\int_{-\infty}^\infty \frac{e^{ix}}{1+x^2}dx=\frac{\pi}{e}. -$$
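As a quick numerical sanity check of the value just computed (our own sketch, assuming scipy and numpy are available):

```
from scipy.integrate import quad
import numpy as np

# the improper integral above should equal pi/e
value, err = quad(lambda x: np.cos(x) / (1 + x**2), -np.inf, np.inf)
print(value, np.pi / np.e)  # both approximately 1.15573
```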
- -This result exemplifies the way some integrals difficult to compute with classical methods are easily evaluated with the help of complex analysis. - -By definition of the complex line integral, - - \int_{C_R} f(z) dz - -=\int_0^\pi g(Re^{i\theta})e^{iaR(\cos\theta+i \sin\theta)}i Re^{i\theta}d\theta - -=R\int_0^\pi g(Re^{i\theta})e^{aR(i\cos\theta-\sin\theta)}ie^{i\theta}d\theta. - - - -Now the inequality -$$ - \biggl|\int_a^b f(x)dx\biggr|\le\int_a^b \left|f(x)\right|dx -$$ - -yields - - I_R:=\biggl|\int_{C_R} f(z) dz\biggr| - -\le R\int_0^\pi\bigl|g(Re^{i\theta})e^{aR(i\cos\theta-\sin\theta)}ie^{i\theta} \bigr|d\theta - -=R\int_0^\pi \bigl|g(Re^{i\theta})\bigr|e^{-aR\sin\theta}d\theta. - - - -Using $M_R$ as defined in the statement of the lemma and the symmetry sin θ = sin(π − θ), we obtain -$$ - I_R \le RM_R\int_0^\pi e^{-aR\sin\theta}d\theta = 2RM_R\int_0^{\pi/2} e^{-aR\sin\theta}d\theta. -$$ - -Since the graph of sin θ is concave on the interval θ ∈ [0, π ⁄ 2], the graph of sin θ lies above the straight line connecting its endpoints, hence -$$ -\sin\theta\ge \frac{2\theta}{\pi}\quad -$$ - -for all θ ∈ [0, π ⁄ 2], which further implies - -I_R - -\le 2RM_R \int_0^{\pi/2} e^{-2aR\theta/\pi}d\theta - -=\frac{\pi}{a} (1-e^{-a R}) M_R\le\frac\pi{a}M_R. diff --git a/wiki/wikipedia/3975.txt b/wiki/wikipedia/3975.txt deleted file mode 100644 index 637ef8e3e8760e052df5e9335761bcb3be5d494a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3975.txt +++ /dev/null @@ -1,73 +0,0 @@ -In computer supported brainstorming, team members contribute their ideas through electronic means either synchronously or asynchronously. The brainstorming software selected by the team mediates the individual interactions and helps to organize and shape the products of the brainstorming session. Computer supported brainstorming can be implemented using a wide variety of electronic technologies. - -In traditional group brainstorming all members of a team are present in the same physical location and their interaction is defined by a selected protocol. Proponents such as Gallupe et al. argue that electronic brainstorming eliminates many of the problems of standard brainstorming, including production blocking (i.e. group members must take turns to express their ideas) and evaluation apprehension (i.e. fear of being judged by others). - -Brainstorming exists in many forms, but first began to be formalized in a graphical representation known as "concept mapping" by Joseph D. Novak of Cornell University in the 1970s. Concept mapping involved collecting and organizing information in a hierarchical fashion. - -Seth Hollander, then a student at the Thayer School of Engineering of Dartmouth College in Hanover, New Hampshire, is said to be the first individual to formally propose the use of computers to assist with brainstorming and concept mapping. In his Master of Science thesis "Computer-Assisted Creativity and the Policy Process", Hollander suggested an "interactive computer program designed to enhance creative thinking". One year later, in 1985, The Idea Generator, the first software for computer supported brainstorming, became publicly available. - -In 1991 both GroupSystems at the University of Arizona and the Software Aided Meeting Management (SAMM) system at the University of Minnesota took advantage of emerging computer networking technology installed in rooms dedicated to computer supported meetings.
When using these electronic meeting systems (EMS, as they came to be called), group members simultaneously and independently entered ideas into a computer terminal. The software collected (or "pooled") the ideas into a list, which could then be displayed on a central screen (anonymously if desired). Researchers found that the use of such computer supported systems helped groups categorize ideas, eliminate duplicates, and promote assessment and discussion of prioritized or controversial issues. - -Numerous software platforms have been designed for computer supported brainstorming, each of which has advantages and disadvantages over traditional brainstorming depending on the specific circumstances. The features of these software titles are similar in that they: - -* Allow real-time updates - -* Allow groups to download or print final versions - -* Allow color coding of information - -* Identify information with the user who submitted it - -* Allow maps to be reorganized and restructured by the group - -* Offer templates for different types of interaction - -Collaborative brainstorming software can be used in a number of ways. It could be used in place of the traditional note card method of outlining an essay, to make a big concept more understandable, to visualize the scope of a marketing campaign, or to organize interview notes. - -Following are several examples of business use cited by Social Signal, a social media blog: - -* Plan and outline writing projects - -* Wireframe the navigation structure for a website - -* Outline a community engagement plan - -* Diagram an organization chart and decision tree - -* Map out deliverables for a complex project - -* Figure out the relationship among multiple overlapping technical terms - -* Map out responsibilities on a complex project - -As technology has advanced, so have computer supported brainstorming systems. Now some web-based brainstorming systems allow contributors to post their comments anonymously through the use of avatars. This technique also allows users to log on over an extended time period, typically one or two weeks, to allow participants some "soak time" before posting their ideas and feedback. This technique has been used particularly in the field of new product development, but can be applied in any number of areas requiring collection and evaluation of ideas. - -Globalization and rapid technological advances have spurred multi-national companies to use virtual worlds and avatars to connect with each other and with consumers. Avatars and virtual worlds are a unique web-based combination of verbal, non-verbal and written communication without physical limitations such as space and geographic location. Virtual environments provide a context for collaboration that is "media-rich... allowing direct and real-time interaction between companies and users". Research shows that team idea generation and individual cognition in virtual environments increase in creative visual work spaces. - -International companies such as IBM and Coca-Cola have used virtual worlds such as Second Life to collaborate with avatars for new product development. In May 2007, Coca-Cola sponsored a contest for residents of Second Life to design a virtual vending machine that would not dispense Coca-Cola but provide a refreshing and invigorating experience. Although Coca-Cola gave residents a prototype, participants were given complete creative freedom.
In addition to business and market collaboration, over 200 universities use Second Life for educational conferences and collaborative work. Avatars and the virtual world allow brainstorming that is visual, synchronous or asynchronous, anonymous and in different locations. - -The advantage of computer supported brainstorming over traditional brainstorming has been shown to be greatest with larger groups. Computer supported brainstorming was not beneficial for small groups, likely because small groups suffer little of the evaluation apprehension and production blocking that the electronic system is designed to eliminate. - -The major benefits of computer supported brainstorming software arise from the anonymity of participants, the archiving of data, elimination of wait time for turn taking and the ability to incorporate additional feedback tools to reduce social loafing. - -Another advantage of computer supported brainstorming software is that all ideas can be archived electronically in their original form, and then retrieved later for further thought and discussion. The archiving of data for later review can also stimulate creativity as ideas are revisited and refined over time. - -The ability to review and revise the ideas of others is also an advantage of the elimination of wait time in computer supported brainstorming software. Some software programs show all ideas as they are generated (via chat room or e-mail). The display of ideas may cognitively stimulate brainstorming participants, as their attention is kept on the flow of ideas being generated without the potential distraction of social cues such as facial expressions and verbal language. - -Early researchers into computer supported brainstorming expressed concern that the simultaneous contribution of multiple ideas would cause information overload and reduce productivity. Studies show that computer supported brainstorming can actually help increase focus, thus increasing effectiveness of virtual sessions over in-person brainstorming. - -Color coding features of some computer supported brainstorming software can help mitigate the potential for information overload and differentiate between individual contributions. The use of color coding has been shown to reduce confusion arising from simultaneous contribution of ideas as well as to increase motivation for contribution, as the ideas of each individual team member can be easily identified. - -Computer supported brainstorming techniques have been shown to produce more ideas and help individuals focus their attention on the ideas of others better than brainwriting (participants write individual notes in silence and subsequently communicate them to the group). The production of more ideas has been linked to the fact that paying attention to others' ideas leads to non-redundancy, as brainstorming participants try to avoid replicating or repeating another participant's comment or idea. - -In a study by Cooper et al., the authors found some evidence that more controversial ideas were produced by members of anonymous computer supported groups than by members of the other groups. The authors also found clear evidence that anonymous brainstorming groups produced more non-redundant ideas than did non-anonymous brainstorming groups. - -Some computer supported brainstorming software now includes a social comparison tracking component to help reduce social loafing. Social loafing occurs when people exert less effort working collectively than working individually.
Shepherd et al. found that including a social comparison tracker in brainstorming systems increased the output of a group using computer supported brainstorming by 23% as compared to a control group using computer supported brainstorming with no social comparison. - -The perceived effectiveness of computer brainstorming software is mediated by the ease of use of the technology. In comparing the results of several studies, researchers found that when software was perceived to be difficult to use, students preferred to collaborate face-to-face using a whiteboard. When software was perceived as being easy to use, students preferred the online environment. - -Electronic brainstorming can cause a loss of productivity when group members become highly focused on their own work, or the work of others, instead of finding a productivity balance. The ideas listed on group members' screens can lead other members to spend too much time reading others' ideas instead of entering their own ideas. This occurs most often during synchronous idea generation, which can prevent an individual from paying attention to others' contributions when he or she is formulating his or her own ideas. When members are trying to create original ideas, they can become so focused on not duplicating ideas that they are unable to come up with their own. - -Electronic brainstorming has the ability to help group members spur new ideas when exposed to the ideas generated by others. However, when compared with non-electronic brainstorming, electronic brainstorming actually forces group members to spend additional time and cognitive resources reading, understanding, and interpreting ideas instead of coming up with new ideas of their own, creating a greater cognitive load that can increase time needed for brainstorming. - -Even when technology is in place to help facilitators guide electronic brainstorming, there is still a need for leadership. While the use of such technology does advance the effective use of groups, it does not replace the need for group leadership. With regard to group size, however, electronic brainstorming is superior to traditional verbal brainstorming for large groups. diff --git a/wiki/wikipedia/3976.txt b/wiki/wikipedia/3976.txt deleted file mode 100644 index 30e915f29e1e5bc3e6d4c2bb131a3b6decbc30ad..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3976.txt +++ /dev/null @@ -1,25 +0,0 @@ -In complex analysis, the open mapping theorem states that if U is a domain of the complex plane C and f : U → C is a non-constant holomorphic function, then f is an open map (i.e. it sends open subsets of U to open subsets of C; in particular, this gives invariance of domain). - -The open mapping theorem points to the sharp difference between holomorphy and real-differentiability. On the real line, for example, the differentiable function f(x) = x^2 is not an open map, as the image of the open interval (−1, 1) is the half-open interval [0, 1). - -The theorem for example implies that a non-constant holomorphic function cannot map an open disk onto a portion of any line embedded in the complex plane. Images of holomorphic functions can be of real dimension zero (if constant) or two (if non-constant) but never of dimension 1. - -Assume f : U → C is a non-constant holomorphic function and U is a domain of the complex plane. We have to show that every point in f(U) is an interior point of f(U), i.e. that every point in f(U) has a neighborhood (open disk) which is also in f(U). - -Consider an arbitrary w0 in f(U).
Then there exists a point z0 in U such that w0 = f(z0). Since U is open, we can find d > 0 such that the closed disk B around z0 with radius d is fully contained in U. Consider the function g(z) = f(z)−w0. Note that z0 is a root of the function. - -We know that g(z) is holomorphic and not constant. The roots of g are isolated by the identity theorem, and by further decreasing the radius d, we can assure that g(z) has only a single root in B (although this single root may have multiplicity greater than 1). - -The boundary of B is a circle and hence a compact set, on which |g(z)| is a positive continuous function, so the extreme value theorem guarantees the existence of a positive minimum e, that is, e is the minimum of |g(z)| for z on the boundary of B and e > 0. - -Denote by D the open disk around w0 with radius e. By Rouché's theorem, the function g(z) = f(z)−w0 will have the same number of roots (counted with multiplicity) in B as h(z):=f(z)−w1 for any w1 in D. This is because - -h(z) = g(z) + (w0 - w1), and for z on the boundary of B, |g(z)| ≥ e > |w0 - w1|. Thus, for every w1 in D, there exists at least one z1 in B such that f(z1) = w1. This means that the disk D is contained in f(B). - -The image of the ball B, f(B), is a subset of the image of U, f(U). Thus w0 is an interior point of f(U). Since w0 was arbitrary in f(U) we know that f(U) is open. Since U was arbitrary, the function f is open. - -*Maximum modulus principle - -*Rouché's theorem - -*Schwarz lemma diff --git a/wiki/wikipedia/3977.txt b/wiki/wikipedia/3977.txt deleted file mode 100644 index 887c343177601ccc038df361c4d0e4f3bbe364e7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3977.txt +++ /dev/null @@ -1,37 +0,0 @@ -OfflineIMAP is an IMAP synchronization utility, capable of synchronizing mail on an IMAP server with a local Maildir folder or another server. - -The synchronization is performed bidirectionally between two endpoints ("Remote" and "Local" repositories). - -OfflineIMAP accesses mail servers only via the Internet Message Access Protocol (Post Office Protocol – another popular way to get mail from a server – is not supported); it works faster (though it is sensitive to connection latency) and supports more advanced features than most mail clients. A special mode for better handling of the non-standard implementation of IMAP in Gmail may optionally be enabled in the configuration file. - -When configured to store mail locally, OfflineIMAP uses the Maildir format. Support for Unix mailboxes may be added in the future, though it is currently not implemented. - -Several synchronization accounts, each consisting of a Remote and a Local repository, may be defined in the configuration file. Each repository is then configured separately, allowing one to specify credentials and access method.
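A minimal sketch of what such a configuration file might look like (our own illustration; the host, user, and paths are hypothetical placeholders, and only a few common options are shown):

```
[general]
accounts = Personal

[Account Personal]
localrepository = Local
remoterepository = Remote

[Repository Local]
type = Maildir
localfolders = ~/Mail

[Repository Remote]
type = IMAP
remotehost = imap.example.com
remoteuser = alice
```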
- -OfflineIMAP is capable of filtering the folders of the Remote repository, so that only a partial synchronization occurs if needed. To use this capability one has to define a mask that is matched against the list of folders on each synchronization. This is achieved by using Python's lambda capability; for example, to synchronize only "INBOX", "Sent Mail" and "Received" folders one should specify the following rule: - - - -folderfilter = lambda foldername: foldername in [ - -'INBOX', 'Sent Mail', 'Received'] - - - -Remaining folders' names may be altered (translated) using a similar construct: - - - -nametrans = lambda foldername: re.sub( - -"^Sent$", "root/Sent", re.sub("^(\[G.*ail\]|INBOX)", "root", foldername) - -) - - - -This technique may also be used to synchronize the content of an IMAP server to a folder of another server. - -Each account has to use a separate directory; otherwise the synchronization process may suffer from unexpected behavior or even data loss. - -OfflineIMAP provides several command-line interfaces, including an interactive color curses-based interface, non-interactive console logging, and several even less verbose modes. A Tk-based graphical user interface is also available. diff --git a/wiki/wikipedia/3978.txt b/wiki/wikipedia/3978.txt deleted file mode 100644 index 30c49b6c000666cffa6b4ebb43d37b486d7a81b0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3978.txt +++ /dev/null @@ -1,17 +0,0 @@ -The mutilated chessboard problem is a tiling puzzle proposed by philosopher Max Black in his book Critical Thinking (1946). It was later discussed by Solomon W. Golomb (1954), by George Gamow and Marvin Stern, and by Martin Gardner in his Scientific American column "Mathematical Games". The problem is as follows: - -
    Suppose a standard 8×8 chessboard has two diagonally opposite corners removed, leaving 62 squares. Is it possible to place 31 dominoes of size 2×1 so as to cover all of these squares?
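The impossibility argument given below can also be checked programmatically. The following sketch (our own illustration; the helper names are hypothetical) verifies the color imbalance on the mutilated 8×8 board, and exhaustively confirms that a 4×4 board with opposite corners removed has no domino tiling either:

```
def color_counts(n, removed):
    squares = {(r, c) for r in range(n) for c in range(n)} - set(removed)
    black = sum((r + c) % 2 for (r, c) in squares)  # squares of one color
    return black, len(squares) - black

def can_tile(squares):
    """Exhaustive search for a domino tiling of a set of squares."""
    if not squares:
        return True
    r, c = min(squares)  # the first uncovered square must be covered
    for nbr in [(r, c + 1), (r + 1, c)]:  # by a domino going right or down
        if nbr in squares and can_tile(squares - {(r, c), nbr}):
            return True
    return False

print(color_counts(8, [(0, 0), (7, 7)]))  # (32, 30): unequal, so no tiling
board = {(r, c) for r in range(4) for c in range(4)} - {(0, 0), (3, 3)}
print(can_tile(frozenset(board)))  # False
```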
- -Most considerations of this problem in the literature provide solutions "in the conceptual sense" without proofs. John McCarthy proposed it as a hard problem for automated proof systems. In fact, its solution using the resolution system of inference is exponentially hard. - -The puzzle is impossible to complete. A domino placed on the chessboard will always cover one white square and one black square. Therefore, a collection of dominoes placed on the board will cover equal numbers of squares of each color. If the two white corners are removed from the board then 30 white squares and 32 black squares remain to be covered by dominoes, so this is impossible. If the two black corners are removed instead, then 32 white squares and 30 black squares remain, so it is again impossible. - -The same impossibility proof shows that no domino tiling exists whenever any two squares of the same color (not just the corners) are removed from the chessboard. - -However, if two squares of opposite colors are removed, then it is always possible to tile the remaining board with dominoes; this result is called Gomory's theorem, and is named after mathematician Ralph E. Gomory, whose proof was published in 1973. - -Gomory's theorem can be proven using a Hamiltonian cycle of the grid graph formed by the chessboard squares; the removal of two oppositely colored squares splits this cycle into two paths with an even number of squares each, both of which are easy to partition into dominoes. - -A similar problem asks if an ant starting at a corner square of an unmutilated chessboard can visit every square exactly once, and finish at the opposite corner square. Diagonal moves are disallowed; each move must be along a rank or file. - -Using the same reasoning, this task is impossible. For example, if the initial square is white, as each move alternates between black and white squares, the final square of any complete tour is black. However, the opposite corner square is white. diff --git a/wiki/wikipedia/3979.txt b/wiki/wikipedia/3979.txt deleted file mode 100644 index a16ca577d2aefc9a149a5349823721d8e6581e28..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3979.txt +++ /dev/null @@ -1,9 +0,0 @@ -In abstract algebra, Ado's theorem is a theorem characterizing finite-dimensional Lie algebras. - -Ado's theorem states that every finite-dimensional Lie algebra L over a field K of characteristic zero can be viewed as a Lie algebra of square matrices under the commutator bracket. More precisely, the theorem states that L has a linear representation ρ over K, on a finite-dimensional vector space V, that is a faithful representation, making L isomorphic to a subalgebra of the endomorphisms of V. - -The theorem was proved in 1935 by Igor Dmitrievich Ado of Kazan State University, a student of Nikolai Chebotaryov. - -The restriction on the characteristic was later removed by Kenkichi Iwasawa (a proof was also given by Gerhard Hochschild). - -While for the Lie algebras associated to classical groups there is nothing new in this, the general case is a deeper result. Applied to the real Lie algebra of a Lie group G, it does not imply that G has a faithful linear representation (which is not true in general), but rather that G always has a linear representation that is a local isomorphism with a linear group.
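As a concrete instance of the theorem (our illustration, not from the original article), consider the two-dimensional non-abelian Lie algebra with basis x, y and bracket [x, y] = y. It is faithfully represented by the 2×2 matrices
$$
x \mapsto \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad
y \mapsto \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},
$$
since a direct computation with these matrices gives xy − yx = y, so the commutator bracket realizes the abstract bracket.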
diff --git a/wiki/wikipedia/398.txt b/wiki/wikipedia/398.txt deleted file mode 100644 index c8febb3644a3cd4dff4c25e99aa197fe796c3664..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/398.txt +++ /dev/null @@ -1,19 +0,0 @@ -In mathematics, Fáry's theorem states that any simple planar graph can be drawn without crossings so that its edges are straight line segments. That is, the ability to draw graph edges as curves instead of as straight line segments does not allow a larger class of graphs to be drawn. The theorem is named after István Fáry, although it was proved independently by Klaus Wagner (1936), Fáry (1948), and Stein (1951). - -One way of proving Fáry's theorem is to use mathematical induction. Let G be a simple plane graph with n vertices; we may add edges if necessary so that G is a maximally plane graph. If n < 3, the result is trivial. If n ≥ 3, then all faces of G must be triangles, as we could add an edge into any face with more sides while preserving planarity, contradicting the assumption of maximal planarity. Choose some three vertices a, b, c forming a triangular face of G. We prove by induction on n that there exists a straight-line combinatorially isomorphic re-embedding of G in which triangle abc is the outer face of the embedding. (Combinatorially isomorphic means that the vertices, edges, and faces in the new drawing can be made to correspond to those in the old drawing, such that all incidences between edges, vertices, and faces—not just between vertices and edges—are preserved.) As a base case, the result is trivial when n = 3 and a, b and c are the only vertices in G. Thus, we may assume that n ≥ 4. - -By Euler's formula for planar graphs, G has 3n - 6 edges; equivalently, if one defines the deficiency of a vertex v in G to be 6 - deg(v), the sum of the deficiencies is 12. Since G has at least four vertices and all faces of G are triangles, it follows that every vertex in G has degree at least three. Therefore each vertex in G has deficiency at most three, so there are at least four vertices with positive deficiency. In particular we can choose a vertex v with at most five neighbors that is different from a, b and c. Let G' be formed by removing v from G and retriangulating the face f formed by removing v. By induction, G' has a combinatorially isomorphic straight line re-embedding in which abc is the outer face. Because the re-embedding of G' was combinatorially isomorphic to G', removing from it the edges which were added to create G' leaves the face f, which is now a polygon P with at most five sides. To complete the drawing to a straight-line combinatorially isomorphic re-embedding of G, v should be placed in the polygon and joined by straight lines to the vertices of the polygon. By the art gallery theorem, there exists a point interior to P at which v can be placed so that the edges from v to the vertices of P do not cross any other edges, completing the proof. - -The induction step of this proof is illustrated in the figure accompanying the original article. - -De Fraysseix, Pach and Pollack showed how to find in linear time a straight-line drawing in a grid with dimensions linear in the size of the graph, giving a universal point set with quadratic size. A similar method has been followed by Schnyder to prove enhanced bounds and a characterization of planarity based on the incidence partial order. His work stressed the existence of a particular partition of the edges of a maximal planar graph into three trees known as a Schnyder wood.
- -Tutte's spring theorem states that every 3-connected planar graph can be drawn on a plane without crossings so that its edges are straight line segments and an outside face is a convex polygon (Tutte 1963). It is so called because such an embedding can be found as the equilibrium position for a system of springs representing the edges of the graph. - -Steinitz's theorem states that every 3-connected planar graph can be represented as the edges of a convex polyhedron in three-dimensional space. A straight-line embedding of $G,$ of the type described by Tutte's theorem, may be formed by projecting such a polyhedral representation onto the plane. - -The circle packing theorem states that every planar graph may be represented as the intersection graph of a collection of non-crossing circles in the plane. Placing each vertex of the graph at the center of the corresponding circle leads to a straight line representation. - -Heiko Harborth raised the question of whether every planar graph has a straight line representation in which all edge lengths are integers. The truth of Harborth's conjecture remains unknown. However, integer-distance straight line embeddings are known to exist for cubic graphs. - -Sachs raised the question of whether every graph with a linkless embedding in three-dimensional Euclidean space has a linkless embedding in which all edges are represented by straight line segments, analogously to Fáry's theorem for two-dimensional embeddings. diff --git a/wiki/wikipedia/3980.txt b/wiki/wikipedia/3980.txt deleted file mode 100644 index 10562e6b74e12c90437219737137851558bd9273..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3980.txt +++ /dev/null @@ -1,150 +0,0 @@ -Lagrange's four-square theorem, also known as Bachet's conjecture, states that every natural number can be represented as the sum of four integer squares. That is, the squares form an additive basis of order four: every natural number p can be written -$$ -p = a_0^2 + a_1^2 + a_2^2 + a_3^2 -$$ - -where the four numbers $a_0, a_1, a_2, a_3$ are integers. For illustration, 3, 31, and 310 can be represented as the sum of four squares as follows (310 in several ways): - - - -\begin{align} - -3 & = 1^2+1^2+1^2+0^2 \\[3pt] - -31 & = 5^2+2^2+1^2+1^2 \\[3pt] - -310 & = 17^2+4^2+2^2+1^2 \\[3pt] - -& = 16^2 + 7^2 + 2^2 +1^2 \\[3pt] - -& = 15^2 + 9^2 + 2^2 +0^2 \\[3pt] - -& = 12^2 + 11^2 + 6^2 + 3^2. - -\end{align} - - - -This theorem was proven by Joseph Louis Lagrange in 1770. It is a special case of the Fermat polygonal number theorem. - -From examples given in the Arithmetica, it is clear that Diophantus was aware of the theorem. This book was translated in 1621 into Latin by Bachet (Claude Gaspard Bachet de Méziriac), who stated the theorem in the notes of his translation. But the theorem was not proved until 1770 by Lagrange. - -Adrien-Marie Legendre extended the theorem in 1797–8 with his three-square theorem, by proving that a positive integer can be expressed as the sum of three squares if and only if it is not of the form $4^k(8m+7)$ for integers k and m. Later, in 1834, Carl Gustav Jakob Jacobi discovered a simple formula for the number of representations of an integer as the sum of four squares with his own four-square theorem. - -The formula is also linked to Descartes' theorem of four "kissing circles", which involves the sum of the squares of the curvatures of four circles. This is also linked to Apollonian gaskets, which were more recently related to the Ramanujan–Petersson conjecture.
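The representations above are easily recovered by brute force; the following sketch (our own illustration, not Lagrange's method) enumerates all four-square representations of a small number up to order:

```
from itertools import combinations_with_replacement
from math import isqrt

def four_squares(n):
    """All representations of n as a sum of four squares, up to order."""
    reps = []
    for a, b, c, d in combinations_with_replacement(range(isqrt(n) + 1), 4):
        if a*a + b*b + c*c + d*d == n:
            reps.append((d, c, b, a))  # largest term first, as in the text
    return reps

print(four_squares(31))
# [(5, 2, 1, 1), (3, 3, 3, 2)] -- the representation from the text, plus one more
```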
- -Several very similar modern versions of Lagrange's proof exist. The proof below is a slightly simplified version, in which the cases for which m is even or odd do not require separate arguments. - -It is sufficient to prove the theorem for every odd prime number p. This immediately follows from Euler's four-square identity (and from the fact that the theorem is true for the numbers 1 and 2). - -The residues of $a^2$ modulo p are distinct for every a between 0 and (p − 1)/2 (inclusive). - -To see this, take some a and define - -c as $a^2$ mod p. - -a is a root of the polynomial - -$x^2 - c$ over the field - -Z/pZ. - -So is p − a (which is different from a). - -In a field K, any polynomial of degree n has at most n distinct roots (Lagrange's theorem (number theory)), - -so there are no other a with this property, in particular not among 0 to (p − 1)/2. - -Similarly, for b taking integral values between 0 and (p − 1)/2 (inclusive), the $-b^2 - 1$ are distinct. - -By the pigeonhole principle, there are a and b in this range, for which $a^2$ and $-b^2 - 1$ are congruent modulo p, that is for which -$$ -a^2 + b^2 + 1^2 + 0^2 = np. -$$ - -Now let m be the smallest positive integer such that mp is the sum of four squares, $x_1^2 + x_2^2 + x_3^2 + x_4^2$ (we have just shown that there is some m (namely n) with this property, so there is a least such m, and it is smaller than p). We show by contradiction that m equals 1: supposing it is not the case, we prove the existence of a positive integer r less than m, for which rp is also the sum of four squares (this is in the spirit of the infinite descent method of Fermat). - -For this purpose, we consider for each $x_i$ the $y_i$ which is in the same residue class modulo m and between (–m + 1)/2 and m/2 (possibly included). It follows that $y_1^2 + y_2^2 + y_3^2 + y_4^2 = mr$, for some strictly positive integer r less than m. - -Finally, another appeal to Euler's four-square identity shows that $(mp)(mr) = z_1^2 + z_2^2 + z_3^2 + z_4^2$. But the fact that each $x_i$ is congruent to its corresponding $y_i$ implies that all of the $z_i$ are divisible by m. Indeed, - -\begin{cases} - -z_1 &= x_1 y_1 + x_2 y_2 + x_3 y_3 + x_4 y_4 &\equiv x_1^2 + x_2^2 + x_3^2 + x_4^2 &= mp \equiv 0 &\pmod{m}, \\ - -z_2 &= x_1 y_2 - x_2 y_1 + x_3 y_4 - x_4 y_3 &\equiv x_1 x_2 - x_2 x_1 + x_3 x_4 - x_4 x_3 &= 0 &\pmod{m}, \\ - -z_3 &= x_1 y_3 - x_2 y_4 - x_3 y_1 + x_4 y_2 &\equiv x_1 x_3 - x_2 x_4 - x_3 x_1 + x_4 x_2 &= 0 &\pmod{m}, \\ - -z_4 &= x_1 y_4 + x_2 y_3 - x_3 y_2 - x_4 y_1 &\equiv x_1 x_4 + x_2 x_3 - x_3 x_2 - x_4 x_1 &= 0 &\pmod{m}. - -\end{cases} - -It follows that, for $w_i = z_i/m$, $w_1^2 + w_2^2 + w_3^2 + w_4^2 = rp$, and this is in contradiction with the minimality of m. - -In the descent above, we must rule out both the case $y_1 = y_2 = y_3 = y_4 = m/2$ (which would give r = m and no descent), and also the case $y_1 = y_2 = y_3 = y_4 = 0$ (which would give r = 0 rather than strictly positive). For both of those cases, one can check that $mp = x_1^2 + x_2^2 + x_3^2 + x_4^2$ would be a multiple of $m^2$, contradicting the fact that p is a prime greater than m. - -Another way to prove the theorem relies on Hurwitz quaternions, which are the analog of integers for quaternions. - -The Hurwitz quaternions consist of all quaternions with integer components and all quaternions with half-integer components.
These two sets can be combined into a single formula -$$ -\alpha = \frac{1}{2} E_0 (1 + \mathbf{i} + \mathbf{j} + \mathbf{k}) +E_1\mathbf{i} +E_2\mathbf{j} +E_3\mathbf{k} = a_0 +a_1\mathbf{i} +a_2\mathbf{j} +a_3\mathbf{k} -$$ - -where $E_0, E_1, E_2, E_3$ are integers. Thus, the quaternion components $a_0, a_1, a_2, a_3$ are either all integers or all half-integers, depending on whether $E_0$ is even or odd, respectively. The set of Hurwitz quaternions forms a ring; that is to say, the sum or product of any two Hurwitz quaternions is likewise a Hurwitz quaternion. - -The (arithmetic, or field) norm $\mathrm N(\alpha)$ of a rational quaternion $\alpha$ is the nonnegative rational number -$$ -\mathrm{N}(\alpha) = \alpha\bar\alpha = a_0^2+a_1^2+a_2^2+a_3^2 -$$ - -where $\bar\alpha=a_0 -a_1\mathbf{i} -a_2\mathbf{j} -a_3\mathbf{k}$ is the conjugate of $\alpha$. Note that the norm of a Hurwitz quaternion is always an integer. (If the coefficients are half-integers, then their squares are of the form $\tfrac{1}{4} + n : n \in \mathbb{Z}$, and the sum of four such numbers is an integer.) - -Since quaternion multiplication is associative, and real numbers commute with other quaternions, the norm of a product of quaternions equals the product of the norms: -$$ - \mathrm{N}(\alpha\beta)=\alpha\beta(\overline{\alpha\beta})=\alpha\beta\bar{\beta}\bar{\alpha}=\alpha \mathrm{N}(\beta)\bar\alpha=\alpha\bar\alpha \mathrm{N}(\beta)= \mathrm{N}(\alpha) \mathrm{N}(\beta). -$$ - -For any $\alpha\ne0$, $\alpha^{-1}=\bar\alpha\mathrm N(\alpha)^{-1}$. It follows easily that $\alpha$ is a unit in the ring of Hurwitz quaternions if and only if $\mathrm N(\alpha)=1$. - -The proof of the main theorem begins by reduction to the case of prime numbers. Euler's four-square identity implies that if Lagrange's four-square theorem holds for two numbers, it holds for the product of the two numbers. Since any natural number can be factored into powers of primes, it suffices to prove the theorem for prime numbers. It is true for $2=1^2+1^2+0^2+0^2$. To show this for an odd prime integer p, represent it as a quaternion $(p,0,0,0)$ and assume for now (as we shall show later) that it is not Hurwitz irreducible; that is, it can be factored into two non-unit Hurwitz quaternions -$$ -p=\alpha\beta. -$$ - -The norms of $p,\alpha,\beta$ are integers such that -$$ -\mathrm N(p)=p^2=\mathrm N(\alpha\beta)=\mathrm N(\alpha)\mathrm N(\beta) -$$ - -and $\mathrm N(\alpha),\mathrm N(\beta) > 1$. This shows that both $\mathrm N(\alpha)$ and $\mathrm N(\beta)$ are equal to p (since they are integers), and p is the sum of four squares -$$ -p=\mathrm N(\alpha)=a_0^2+a_1^2+a_2^2+a_3^2. -$$ - -If it happens that the $\alpha$ chosen has half-integer coefficients, it can be replaced by another Hurwitz quaternion. Choose $\omega = (\pm 1\pm\mathbf{i}\pm\mathbf{j} \pm\mathbf{k})/2$ in such a way that $\gamma \equiv \omega + \alpha$ has even integer coefficients. Then -$$ -p=(\bar\gamma-\bar\omega)\omega\bar\omega(\gamma-\omega)=(\bar\gamma\omega-1)(\bar\omega\gamma-1). -$$ - -Since $\gamma$ has even integer coefficients, $(\bar\omega\gamma-1)$ will have integer coefficients and can be used instead of the original $\alpha$ to give a representation of p as the sum of four squares. - -As for showing that p is not Hurwitz irreducible, Lagrange proved that any odd prime p divides at least one number of the form $u=1+l^2+m^2$, where l and m are integers. - -Some values of $r_4(n)$, the number of representations of n as a sum of four squares, occur infinitely often, as $r_4(n) = r_4(2^m n)$ whenever n is even.
The values of $r_4(n)/n$ can be arbitrarily large: indeed, $r_4(n)/n$ is infinitely often larger than $8\log n$. - -The sequence of positive integers which have only one representation as a sum of four squares (up to order) is: - -1, 2, 3, 5, 6, 7, 8, 11, 14, 15, 23, 24, 32, 56, 96, 128, 224, 384, 512, 896 ... . - -These integers consist of the seven odd numbers 1, 3, 5, 7, 11, 15, 23 and all numbers of the form $2(4^k),6(4^k)$ or $14(4^k)$. - -The sequence of positive integers which cannot be represented as a sum of four non-zero squares is: - -1, 2, 3, 5, 6, 8, 9, 11, 14, 17, 24, 29, 32, 41, 56, 96, 128, 224, 384, 512, 896 ... . - -These integers consist of the eight odd numbers 1, 3, 5, 9, 11, 17, 29, 41 and all numbers of the form $2(4^k),6(4^k)$ or $14(4^k)$. - -Lagrange's four-square theorem can be refined in various ways. For example, Zhi-Wei Sun proved that each natural number can be written as a sum of four squares with some requirements on the choice of these four numbers. - -One may also wonder whether it is necessary to use the entire set of square integers to write each natural as the sum of four squares. Wirsing proved that there exists a set of squares S with $|S| = O(n^{1/4}\log^{1/4} n)$ such that every positive integer smaller than or equal to n can be written as a sum of at most 4 elements of S. diff --git a/wiki/wikipedia/3981.txt b/wiki/wikipedia/3981.txt deleted file mode 100644 index c824ed88495edc897909e38fe28fa5ebe6352c6c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3981.txt +++ /dev/null @@ -1,26 +0,0 @@ -In number theory, a balanced prime is a prime number with equal-sized prime gaps above and below it, so that it is equal to the arithmetic mean of the nearest primes above and below. Or to put it algebraically, given a prime number $p_n$, where n is its index in the ordered set of prime numbers, -$$ -p_n = {{p_{n - 1} + p_{n + 1}} \over 2}. -$$ - -For example, 53 is the sixteenth prime; the fifteenth and seventeenth primes, 47 and 59, add up to 106, and half of that is 53; thus 53 is a balanced prime. - -The first few balanced primes are - -5, 53, 157, 173, 211, 257, 263, 373, 563, 593, 607, 653, 733, 947, 977, 1103 . - -It is conjectured that there are infinitely many balanced primes. - -Three consecutive primes in arithmetic progression are sometimes called a CPAP-3. A balanced prime is by definition the second prime in a CPAP-3. The largest known CPAP-3 has 10546 digits and was found by David Broadhurst. It is: -$$ -p_n = 1213266377 \times 2^{35000} + 2429,\quad p_{n-1} = p_n-2430,\quad p_{n+1} = p_n+2430. -$$ - -The value of n (its rank in the sequence of all primes) is not known. - -The balanced primes may be generalized to the balanced primes of order n. A balanced prime of order n is a prime number that is equal to the arithmetic mean of the nearest n primes above and below. Algebraically, given a prime number $p_k$, where k is its index in the ordered set of prime numbers, -$$ -p_k = \frac{\sum_{i=1}^n (p_{k - i} + p_{k + i})}{2n}. -$$ - -Thus, an ordinary balanced prime is a balanced prime of order 1. The sequences of balanced primes of orders 2, 3, and 4 are given as sequence A082077 in the OEIS, sequence A082078 in the OEIS, and sequence A082079 in the OEIS respectively.
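A short sketch (our own, assuming sympy is available; the helper name is hypothetical) that computes balanced primes of a given order directly from this definition:

```
from sympy import prime  # prime(i) returns the i-th prime, 1-indexed

def balanced_primes(order, count):
    """First `count` primes equal to the mean of the `order` nearest primes on each side."""
    found, k = [], order + 1  # need `order` primes below, so start at index order+1
    while len(found) < count:
        neighbors = [prime(i) for i in range(k - order, k + order + 1) if i != k]
        if prime(k) * 2 * order == sum(neighbors):
            found.append(prime(k))
        k += 1
    return found

print(balanced_primes(1, 6))  # [5, 53, 157, 173, 211, 257], matching the list above
```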
diff --git a/wiki/wikipedia/3982.txt b/wiki/wikipedia/3982.txt deleted file mode 100644 index 488ee88a4106533b7eda60e8d189df113bde1f1f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3982.txt +++ /dev/null @@ -1,67 +0,0 @@ -In mathematics, the Gauss circle problem is the problem of determining how many integer lattice points there are in a circle centered at the origin and with radius $r$. This number is approximated by the area of the circle, so the real problem is to accurately bound the error term describing how the number of points differs from the area. - -The first progress on a solution was made by Carl Friedrich Gauss, hence its name. - -Consider a circle in $\mathbb{R}^2$ with center at the origin and radius $r\ge 0$. Gauss' circle problem asks how many points there are inside this circle of the form $(m,n)$ where $m$ and $n$ are both integers. Since the equation of this circle is given in Cartesian coordinates by $x^2+y^2= r^2$, the question is equivalently asking how many pairs of integers m and n there are such that -$$ -m^2+n^2\leq r^2. -$$ - -If the answer for a given $r$ is denoted by $N(r)$ then the following list shows the first few values of $N(r)$ for $r$ an integer between 0 and 12 followed by the list of values $ \pi r^2 $ rounded to the nearest integer: - -1, 5, 13, 29, 49, 81, 113, 149, 197, 253, 317, 377, 441 - -0, 3, 13, 28, 50, 79, 113, 154, 201, 254, 314, 380, 452 - -$N(r)$ is roughly $\pi r^2$, the area inside a circle of radius $r$. This is because on average, each unit square contains one lattice point. Thus, the actual number of lattice points in the circle is approximately equal to its area, $\pi r^2$. So it should be expected that -$$ -N(r)=\pi r^2 +E(r) -$$ - -for some error term $E(r)$ of relatively small absolute value. Finding a correct upper bound for $|E(r)|$ is thus the form the problem has taken. Note that $r$ does not have to be an integer. After $N(4)=49 $ one has $N(\sqrt{17})=57 ,N(\sqrt{18})=61, N(\sqrt{20})=69, N(5)=81 .$ At these places $ E(r)$ increases by $8,4,8,12$ after which it decreases (at a rate of $ 2 \pi r $) until the next time it increases. - -Gauss managed to prove that -$$ -| E(r) |\leq 2\sqrt{2}\pi r. -$$ - -Hardy and, independently, Landau found a lower bound by showing that -$$ -| E(r) |\neq o\left(r^{1/2}(\log r)^{1/4}\right), -$$ - -using the little o-notation. It is conjectured that the correct bound is -$$ -| E(r) |=O\left(r^{1/2+\varepsilon}\right). -$$ - -Writing $|E(r)|\le Cr^t$, the current bounds on $t$ are -$$ -\frac{1}{2}< t\leq\frac{131}{208}=0.6298\ldots, -$$ - -with the lower bound from Hardy and Landau in 1915, and the upper bound proved by Martin Huxley in 2000. - -The value of $N(r)$ can be given by several series. In terms of a sum involving the floor function it can be expressed as: -$$ -N(r)=1+4\sum_{i=0}^\infty \left(\left\lfloor\frac{r^2}{4i+1}\right\rfloor-\left\lfloor\frac{r^2}{4i+3}\right\rfloor\right). -$$ - -This is a consequence of Jacobi's two-square theorem, which follows almost immediately from the Jacobi triple product. - -A much simpler sum appears if the sum of squares function $r_2(n)$ is defined as the number of ways of writing the number $n$ as the sum of two squares. Then -$$ -N(r)=\sum_{0\le n\le r^2} r_2(n). -$$
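Both expressions for N(r) can be checked numerically; the following sketch (our own illustration; the helper names are hypothetical) compares the direct lattice count, the floor-function series, and the area approximation πr² for small integer radii:

```
import math

def N_direct(r):
    return sum(1 for m in range(-r, r + 1) for n in range(-r, r + 1)
               if m*m + n*n <= r*r)

def N_series(r):
    # terms with 4i+1 > r^2 contribute nothing, so finitely many terms suffice
    return 1 + 4 * sum(r*r // (4*i + 1) - r*r // (4*i + 3) for i in range(r*r + 1))

for r in range(6):
    print(r, N_direct(r), N_series(r), round(math.pi * r * r))
# the first two columns reproduce 1, 5, 13, 29, 49, 81 from the list above
```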
The related primitive circle problem asks for the number of coprime integer solutions $(m,n)$ of the inequality $m^2+n^2\le r^2$. It can be intuitively understood as the question of how many trees within a distance of r are visible in Euclid's orchard, standing at the origin. If the number of such solutions is denoted $V(r)$ then the values of $V(r)$ for $r$ taking small integer values are - -0, 4, 8, 16, 32, 48, 72, 88, 120, 152, 192 … . - -Using the same ideas as the usual Gauss circle problem and the fact that the probability that two integers are coprime is $6/\pi^2$, it is relatively straightforward to show that -$$ -V(r)=\frac{6}{\pi}r^2+O(r^{1+\varepsilon}). -$$ - -As with the usual circle problem, the problematic part of the primitive circle problem is reducing the exponent in the error term. At present the best known exponent is $221/304+\varepsilon$ if one assumes the Riemann hypothesis. Without assuming the Riemann hypothesis, the best known upper bound is -$$ -V(r)=\frac{6}{\pi}r^2+O(r\exp(-c(\log r)^{3/5}(\log\log r^2)^{-1/5})) -$$ - -for a positive constant $c$. In particular, no bound on the error term of the form $O(r^{1-\varepsilon})$ for any $\varepsilon>0$ is currently known that does not assume the Riemann Hypothesis. diff --git a/wiki/wikipedia/3983.txt b/wiki/wikipedia/3983.txt deleted file mode 100644 index 8b811e90cdee768d18172d9c4c5e34041c8fb656..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3983.txt +++ /dev/null @@ -1,7 +0,0 @@ -Buku Sudoku (titled Buku Números in Mexico and Buku数字パズル in Japan) is a downloadable puzzle game based on Sudoku, developed by Ukrainian studio Absolutist Ltd and published by Merscom for Windows PCs. The game was also released for the Xbox 360 via Xbox Live Arcade, on May 28, 2008. - -Buku Sudoku offers variations on the traditional Sudoku puzzle including 4x4, 6x6, 8x8, 9x9, 12x12 and 16x16 grids. There are also three difficulty levels featured, as well as a "Create Your Own Sudoku" feature. The game gives the player customization options by letting them choose different background and tile themes to modify their Sudoku board. The Windows version features a Puzzle Mode and timed Arcade Mode, while the Xbox 360 version promises co-op and competitive online play. The game also features a display hints option for gamers that like to know when they have mis-entered a number. If a player answers a set correctly (rows, columns, boxes and numbers in boxes), then the choices become permanent. The game has 1200 Sudoku puzzles to play. - -The Xbox Live Arcade version of Buku Sudoku will feature downloadable content packs. - -Buku Sudoku received mixed reviews upon the game's release on the Xbox 360. On Metacritic, the game holds a score of 60/100 for the Xbox 360 version based on 19 reviews. On GameRankings, the game holds a score of 58.53% for the Xbox 360 version based on 17 reviews. diff --git a/wiki/wikipedia/3984.txt b/wiki/wikipedia/3984.txt deleted file mode 100644 index 7f8977386f27fec8773b56ca98e00c6a84a4aaaf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3984.txt +++ /dev/null @@ -1,3 +0,0 @@ -In geometry, the Tammes problem is a problem in packing a given number of circles on the surface of a sphere such that the minimum distance between circles is maximized. It is named after the Dutch botanist Pieter Merkus Lambertus Tammes (the nephew of pioneering botanist Jantina Tammes) who posed the problem in his 1930 doctoral dissertation on the distribution of pores on pollen grains. Mathematicians independent of Tammes began studying circle packing on the sphere in the early 1940s; it was not until twenty years later that the problem became associated with his name.
- -It can be viewed as a particular special case of the generalized Thomson problem of minimizing the total Coulomb force of electrons in a spherical arrangement. Thus far, solutions have been proven only for small numbers of circles: 3 through 14, and 24. There are conjectured solutions for many other cases, including those in higher dimensions. diff --git a/wiki/wikipedia/3985.txt b/wiki/wikipedia/3985.txt deleted file mode 100644 index d2144ba4bf0d2b0a067280d8ef7cd1ff1d276835..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3985.txt +++ /dev/null @@ -1,67 +0,0 @@ -KHOPCA is an adaptive clustering algorithm originally developed for dynamic networks. KHOPCA ($k$-hop clustering algorithm) provides a fully distributed and localized approach to group elements such as nodes in a network according to their distance from each other. KHOPCA operates proactively through a simple set of rules that defines clusters, which are optimal with respect to the applied distance function. - -KHOPCA's clustering process explicitly supports joining and leaving of nodes, which makes KHOPCA suitable for highly dynamic networks. However, it has been demonstrated that KHOPCA also performs well in static networks. - -Besides applications in ad hoc and wireless sensor networks, KHOPCA can be used in localization and navigation problems, networked swarming, and real-time data clustering and analysis. - -KHOPCA ($k$-hop clustering algorithm) operates proactively through a simple set of rules that defines clusters with variable $k$-hops. A set of local rules describes the state transition between nodes. A node's weight is determined solely by the current state of its neighbors in communication range. Each node of the network is continuously involved in this process. As a result, $k$-hop clusters are formed and maintained in static as well as dynamic networks. - -KHOPCA does not require any predetermined initial configuration. Therefore, a node can potentially choose any weight (between $MIN$ and $MAX$). However, the choice of the initial configuration does influence the convergence time. - -The prerequisites in the start configuration for the application of the rules are the following. - -* $\Nu$ is the network with nodes and links, whereby each node has a weight $w$. - -* Each node $n$ in $\Nu$ stores the same positive values $MIN$ and $MAX$, with $MIN < MAX$. - -* A node $n$ with weight $w_n=MAX$ is called cluster center. - -* $k$ is $MAX$ - $MIN$ and represents the maximum size a cluster can have from the outermost node to the cluster center. The cluster diameter is therefore $2k-1$. - -* $\Nu(n)$ returns the direct neighbors of node $n$. - -* $W(\Nu)$ is the set of weights of all nodes of $\Nu$. - -The following rules describe the state transition for a node $n$ with weight $w_n$. These rules have to be executed on each node in the order described here. - -The first rule has the function of constructing an order within the cluster. This happens when a node $n$ detects the direct neighbor with the highest weight $w$, which is higher than the node's own weight $w_n$. If such a direct neighbor is detected, the node $n$ changes its own weight to be the highest weight within the neighborhood minus 1. Applied iteratively, this process creates a top-down hierarchical cluster structure. - -if max(W(N(n))) > w_n - -w_n = max(W(N(n))) - 1
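A minimal sketch of this first rule as a local update step (our own illustration, not from the original article; the graph representation and helper names are hypothetical):

```
def rule1_step(node, neighbors, w):
    """Apply rule 1 at `node`; return True if its weight changed."""
    top = max(w[v] for v in neighbors[node])
    if top > w[node] and w[node] != top - 1:
        w[node] = top - 1  # attach one level below the strongest neighbor
        return True
    return False

# tiny example: node 2 is a cluster center (MAX = 3); the chain reorganizes
w = {0: 0, 1: 0, 2: 3}
neighbors = {0: [1], 1: [0, 2], 2: [1]}
while any(rule1_step(n, neighbors, w) for n in w):
    pass  # iterate until no node changes
print(w)  # {0: 1, 1: 2, 2: 3}
```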
The second rule deals with the situation where nodes in a neighborhood are on the minimum weight level. This situation can happen if, for instance, the initial configuration assigns the minimum weight to all nodes. If there is a neighborhood with all nodes having the minimum weight level, the node $n$ declares itself as cluster center. Even if coincidentally all nodes declare themselves as cluster centers, the conflict situation will be resolved by one of the other rules. - -if max(W(N(n))) == MIN && w_n == MIN - -w_n = MAX; - - - -The third rule describes situations where nodes with leveraged weight values, which are not cluster centers, attract surrounding nodes with lower weights. This behavior can lead to fragmented clusters without a cluster center. In order to avoid fragmented clusters, the node with the higher weight value is supposed to successively decrease its own weight with the objective to correct the fragmentation by allowing the other nodes to reconfigure according to the rules. - -if max(W(N(n))) <= w_n && w_n != MAX - -w_n = w_n - 1; - - - -The fourth rule resolves the situation where two cluster centers connect in 1-hop neighborhood and need to decide which cluster center should continue its role as cluster center. Given any specific criterion (e.g., device ID, battery power), one cluster center remains while the other cluster center is hierarchized into the 1-hop neighborhood of that new cluster center. The choice of the specific criterion to resolve the decision-making depends on the used application scenario and on the available information. - -if max(W(N(n))) == MAX && w_n == MAX - -w_n = apply criterion to select a node from the set (max(W(N(n))), w_n); - -w_n = w_n - 1; - - - -An exemplary sequence of state transitions applying the described four rules is illustrated in the figures of the original article: KHOPCA acting in a dynamic 2-D simulation, where the geometry is based on a geometric random graph and all existing links are drawn, and KHOPCA working in a dynamic 3-D environment, where the cluster connections are illustrated with bold lines. - -It has been demonstrated that KHOPCA terminates after a finite number of state transitions in static networks. diff --git a/wiki/wikipedia/3986.txt b/wiki/wikipedia/3986.txt deleted file mode 100644 index 217fe949032afaf987b053946bdf796851e927f0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3986.txt +++ /dev/null @@ -1,107 +0,0 @@ -In mathematics, Sperner's lemma is a combinatorial result on colorings of triangulations, analogous to the Brouwer fixed point theorem, which is equivalent to it. It states that every Sperner coloring (described below) of a triangulation of an $n$-dimensional simplex contains a cell whose vertices all have different colors. - -The initial result of this kind was proved by Emanuel Sperner, in relation to proofs of invariance of domain. Sperner colorings have been used for effective computation of fixed points and in root-finding algorithms, and are applied in fair division (cake cutting) algorithms. Finding a Sperner coloring or equivalently a Brouwer fixed point is now believed to be an intractable computational problem, even in the plane, in the general case. The problem is PPAD-complete, a complexity class invented by Christos Papadimitriou. - -According to the Soviet Mathematical Encyclopaedia (ed. I.M. Vinogradov), a related 1929 theorem (of Knaster, Borsuk and Mazurkiewicz) had also become known as the Sperner lemma – this point is discussed in the English translation (ed. M. Hazewinkel). It is now commonly known as the Knaster–Kuratowski–Mazurkiewicz lemma.
- -In one dimension, Sperner's lemma can be regarded as a discrete version of the intermediate value theorem. In this case, it essentially says that if a discrete function takes only the values 0 and 1, begins at the value 0 and ends at the value 1, then it must switch values an odd number of times. - -The two-dimensional case is the one referred to most frequently. It is stated as follows: - -Subdivide a triangle ABC arbitrarily into a triangulation consisting of smaller triangles meeting edge to edge. - -Then a Sperner coloring of the triangulation is defined as an assignment of three colors to the vertices of the triangulation such that - -# Each of the three vertices A, B, and C of the initial triangle has a distinct color - -# The vertices that lie along any edge of triangle ABC have only two colors, the two colors at the endpoints of the edge. For example, each vertex on AC must have the same color as A or C. - -Then every Sperner coloring of every triangulation has at least one "rainbow triangle", a smaller triangle in the triangulation that has its vertices colored with all three different colors. More precisely, there must be an odd number of rainbow triangles. - -In the general case the lemma refers to an $n$-dimensional simplex $\mathcal{A}=A_1 A_2 \ldots A_{n+1}$. - -Consider any triangulation $T$, a disjoint division of $\mathcal{A}$ into smaller $n$-dimensional simplices, again meeting face-to-face. Denote the coloring function as $f:S\to\{1,2,3,\dots,n,n+1\}$, where $S$ is the set of vertices of $T$. A coloring function defines a Sperner coloring when: - -# The vertices of the large simplex are colored with different colors, i. e., $f(A_i) = i$ for $1 \le i \le n+1$. - -# Vertices of $T$ located on any $k$-dimensional subface of the large simplex $A_{i_1}A_{i_2}\ldots A_{i_{k+1}}$ are colored only with the colors $i_1,i_2,\ldots,i_{k+1}$. - -Then every Sperner coloring has an odd number of simplices in its triangulation whose vertices are colored with all $n+1$ colors. In particular, there must be at least one rainbow simplex. - -We shall first address the two-dimensional case. Consider a graph G built from the triangulation T as follows: - -The vertices of G are the members of T plus the area outside the triangle. Two vertices are connected with an edge if their corresponding areas share a common border with one endpoint colored 1 and the other colored 2. - -Note that on the interval AB there is an odd number of borders colored 1-2 (simply because A is colored 1, B is colored 2; and as we move along AB, there must be an odd number of color changes in order to get different colors at the beginning and at the end). Therefore, the vertex of G corresponding to the outer area has an odd degree. But it is known (the handshaking lemma) that in a finite graph there is an even number of vertices with odd degree. Therefore, the remaining graph, excluding the outer area, has an odd number of vertices with odd degree corresponding to members of T. - -It can be easily seen that the only possible degree of a triangle from T is 0, 1, or 2, and that degree 1 corresponds to a triangle colored with the three colors 1, 2, and 3. - -Thus we have obtained a slightly stronger conclusion, which says that in a triangulation T there is an odd number (and at least one) of full-colored triangles.
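-The odd-count conclusion is easy to confirm by brute force in the two-dimensional case. The following Python sketch (not part of the original text) subdivides a triangle into k^2 cells on a triangular lattice, draws many random Sperner colorings, and checks that the number of rainbow cells is always odd:
-
-import random
-
-def sperner_check(k, trials=200):
-    # vertices are lattice points (i, j) with i + j <= k; corners get colors 1, 2, 3
-    for _ in range(trials):
-        color = {}
-        for i in range(k + 1):
-            for j in range(k + 1 - i):
-                if (i, j) == (k, 0): c = 1
-                elif (i, j) == (0, k): c = 2
-                elif (i, j) == (0, 0): c = 3
-                elif j == 0: c = random.choice([1, 3])      # edge between colors 1 and 3
-                elif i == 0: c = random.choice([2, 3])      # edge between colors 2 and 3
-                elif i + j == k: c = random.choice([1, 2])  # edge between colors 1 and 2
-                else: c = random.choice([1, 2, 3])          # interior: unrestricted
-                color[(i, j)] = c
-        rainbow = 0
-        for i in range(k):
-            for j in range(k - i):
-                if {color[(i, j)], color[(i + 1, j)], color[(i, j + 1)]} == {1, 2, 3}:
-                    rainbow += 1                            # upward-pointing cell
-                if i + j < k - 1 and {color[(i + 1, j)], color[(i, j + 1)], color[(i + 1, j + 1)]} == {1, 2, 3}:
-                    rainbow += 1                            # downward-pointing cell
-        assert rainbow % 2 == 1                             # Sperner's lemma: always odd
-
-sperner_check(5)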
- -A multidimensional case can be proved by induction on the dimension of a simplex. We apply the same reasoning as in the two-dimensional case to conclude that in an $n$-dimensional triangulation there is an odd number of full-colored simplices. - -Here is an elaboration of the proof given previously, for a reader new to graph theory. - -This diagram numbers the colors of the vertices of the example given previously. The small triangles whose vertices all have different numbers are shaded in the graph. Each small triangle becomes a node in the new graph derived from the triangulation. The small letters identify the areas, eight inside the figure, and area i designates the space outside of it. - -As described previously, those nodes that share an edge whose endpoints are numbered 1 and 2 are joined in the derived graph. For example, node d shares an edge with the outer area i, and its vertices all have different numbers, so it is also shaded. Node b is not shaded because two vertices have the same number, but it is joined to the outer area. - -One could add a new full-numbered triangle, say by inserting a node numbered 3 into the edge between 1 and 1 of node a, and joining that node to the other vertex of a. Doing so would have to create a pair of new nodes, like the situation with nodes f and g. - -Suppose that each vertex of the triangulation may be labeled with multiple colors, so that the coloring function is $f : S \to 2^{[n+1]}$. - -For every sub-simplex, the set of labelings on its vertices is a set-family over the set of colors $[n+1]$. This set-family can be seen as a hypergraph. - -If, for every vertex $v$ on a face of the simplex, the colors in $f(v)$ are a subset of the set of colors on the face endpoints, then there exists a sub-simplex with a balanced labeling – a labeling in which the corresponding hypergraph admits a perfect fractional matching. To illustrate, here are some balanced labeling examples for $n=2$: - -* ({1}, {2}, {3}) - balanced by the weights (1, 1, 1). - -* ({1,2}, {2,3}, {3,1}) - balanced by the weights (1/2, 1/2, 1/2). - -* ({1,2}, {2,3}, {1}) - balanced by the weights (0, 1, 1). - -This was proved by Shapley in 1973. It is a combinatorial analogue of the KKMS lemma. - -Suppose that, instead of an $(n-1)$-dimensional simplex, we have a $d$-dimensional polytope with $n$ vertices. - -Then there are at least $n-d$ fully labeled simplices, where "fully labeled" indicates that every vertex of the simplex has a different label. For example, if a (two-dimensional) polygon with n vertices is triangulated and colored according to the Sperner criterion, then there are at least $n-2$ fully labeled triangles. - -The general statement was conjectured by Atanassov in 1996, who proved it for the case $d=2$. The proof of the general case was first given by de Loera, Peterson, and Su in 2002. - -Suppose that, instead of a single labeling, we have $n$ different Sperner labelings. - -We consider pairs (simplex, permutation) such that the label of each vertex of the simplex is chosen from a different labeling (so for each simplex, there are $n!$ different pairs). - -Then there are at least $n!$ fully labeled pairs. This was proved by Ravindra Bapat. - -Another way to state this lemma is as follows. Suppose there are $n$ people, each of whom produces a different Sperner labeling of the same triangulation. Then, there exists a simplex, and a matching of the people to its vertices, such that each vertex is labeled by its owner differently (one person labels their vertex by 1, another person labels their vertex by 2, etc.). Moreover, there are at least $n!$ such matchings.
This can be used to find an envy-free cake-cutting with connected pieces. - -Suppose that, instead of a single labeling, we have $m$ different Sperner labelings. Then: - -# For any positive integers $k_1,\ldots,k_m$ whose sum is $m+n-1$, there exists a baby-simplex on which, for every $i\in\{1,\ldots,m\}$, labeling number $i$ uses at least $k_i$ (out of $n$) distinct labels. Moreover, each label is used by at least one (out of $m$) labeling. - -# For any positive integers $l_1,\ldots,l_n$ whose sum is $m+n-1$, there exists a baby-simplex on which, for every $j\in\{1,\ldots,n\}$, the label $j$ is used by at least $l_j$ (out of $m$) different labelings. - -Both versions reduce to Sperner's lemma when $m=1$, or when all $m$ labelings are identical. Similar generalizations appear in the literature. - -Suppose a triangle is triangulated and labeled with {1,2,3}. Consider the cyclic sequence of labels on the boundary of the triangle. Define the degree of the labeling as the difference between the number of switches from 1 to 2, and the number of switches from 2 to 1. Note that the degree is the same if we count switches from 2 to 3 minus 3 to 2, or from 3 to 1 minus 1 to 3. - -Musin proved that the number of fully labeled triangles is at least the degree of the labeling. In particular, if the degree is nonzero, then there exists at least one fully labeled triangle. - -If a labeling satisfies the Sperner condition, then its degree is exactly 1: there are 1-2 and 2-1 switches only in the side between vertices 1 and 2, and the number of 1-2 switches must be one more than the number of 2-1 switches (when walking from vertex 1 to vertex 2). Therefore, the original Sperner lemma follows from Musin's theorem. - -There is a similar lemma about finite and infinite trees and cycles. - -A variant of Sperner's lemma on a cube (instead of a simplex) was proved by Harold W. Kuhn. It is related to the Poincaré–Miranda theorem. - -Sperner colorings have been used for effective computation of fixed points. A Sperner coloring can be constructed such that fully labeled simplices correspond to fixed points of a given function. By making a triangulation smaller and smaller, one can show that the limit of the fully labeled simplices is exactly the fixed point. Hence, the technique provides a way to approximate fixed points. - -For this reason, Sperner's lemma can also be used in root-finding algorithms and fair division algorithms; see Simmons–Su protocols. - -Sperner's lemma is one of the key ingredients of the proof of Monsky's theorem, that a square cannot be cut into an odd number of equal-area triangles. - -Sperner's lemma can be used to find a competitive equilibrium in an exchange economy, although there are more efficient ways to find it. - -Fifty years after first publishing it, Sperner presented a survey on the development, influence and applications of his combinatorial lemma. diff --git a/wiki/wikipedia/3987.txt b/wiki/wikipedia/3987.txt deleted file mode 100644 index f098648296dd6c77e730cef387a5b16b42a8fca0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3987.txt +++ /dev/null @@ -1,66 +0,0 @@ -In combinatorics, the Erdős–Ko–Rado theorem of Paul Erdős, Chao Ko, and Richard Rado is a theorem on intersecting set families. - -The theorem is as follows. Suppose that A is a family of distinct subsets of $\{1,2,...,n\}$ such that each subset is of size r and each pair of subsets has a nonempty intersection, and suppose that n ≥ 2r.
Then the number of sets in A is less than or equal to the binomial coefficient -$$ -\binom{n-1}{r-1}. -$$ - -The result is part of the theory of hypergraphs. A family of sets may also be called a hypergraph, and when all the sets (which are called "hyperedges" in this context) are the same size r, it is called an r-uniform hypergraph. The theorem thus gives an upper bound for the number of pairwise non-disjoint hyperedges in an r-uniform hypergraph with n vertices and n ≥ 2r. - -The theorem may also be formulated in terms of graph theory: the independence number of the Kneser graph KGn,r for n ≥ 2r is -$$ -\alpha(KG_{n,r})=\binom{n-1}{r-1}. -$$ - -According to Erdős the theorem was proved in 1938, but was not published until 1961 in an apparently more general form. The subsets in question were only required to be of size at most r, with the additional requirement that no subset be contained in any other. - -A version of the theorem also holds for signed sets. - -The original proof of 1961 used induction on n. In 1972, Gyula O. H. Katona gave the following short proof using double counting. - -Suppose we have some such family of subsets A. Arrange the elements of {1, 2, ..., n} in any cyclic order, and consider the sets from A that form intervals of length r within this cyclic order. For example, if n = 8 and r = 3, we could arrange the numbers {1, 2, ..., 8} into the cyclic order (3,1,5,4,2,7,6,8), which has eight intervals: - -(3,1,5), (1,5,4), (5,4,2), (4,2,7), (2,7,6), (7,6,8), (6,8,3), and (8,3,1). - -However, it is not possible for all of the intervals of the cyclic order to belong to A, because some pairs of them are disjoint. Katona's key observation is that at most r of the intervals for a single cyclic order may belong to A. To see this, note that if (a1, a2, ..., ar) is one of these intervals in A, then every other interval of the same cyclic order that belongs to A separates ai and ai+1 for some i (that is, it contains precisely one of these two elements). The two intervals that separate these elements are disjoint, so at most one of them can belong to A. Thus, the number of intervals in A is at most one plus the number of separated pairs; since there are at most r − 1 separated pairs, it is at most r. - -Based on this idea, we may count the number of pairs (S,C), where S is a set in A and C is a cyclic order for which S is an interval, in two ways. First, for each set S one may generate C by choosing one of r! permutations of S and (n − r)! permutations of the remaining elements, showing that the number of pairs is |A|r!(n − r)!. Second, there are (n − 1)! cyclic orders, each of which has at most r intervals of A, so the number of pairs is at most r(n − 1)!. Combining these two counts gives the inequality -$$ -|A|r!(n-r)!\le r(n-1)! -$$ - -and dividing both sides by r!(n − r)! gives the result -$$ -|A|\le \frac{r(n-1)!}{r!(n-r)!}={n-1\choose r-1}. -$$ - -There are two different and straightforward constructions for an intersecting family of r-element sets achieving the Erdős–Ko–Rado bound on cardinality. - -First, choose any fixed element x, and let A consist of all r-subsets of $\{1,2,...,n\}$ that include x. For instance, if n = 4, r = 2, and x = 1, this produces the family of three 2-sets - -{1,2}, {1,3}, {1,4}. - -Any two sets in this family intersect, because they both include x. - -Second, when n = 2r and with x as above, let A consist of all r-subsets of $\{1,2,...,n\}$ that do not include x. - -For the same parameters as above, this produces the family - -{2,3}, {2,4}, {3,4}. - -Any two sets in this family have a total of 2r = n elements among them, chosen from the n − 1 elements that are unequal to x, so by the pigeonhole principle they must have an element in common.
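-For small parameters the bound is easy to confirm by exhaustive search. Below is a short Python check (not from the original text) for n = 4 and r = 2, where the maximum intersecting family indeed has C(3,1) = 3 sets:
-
-from itertools import combinations
-from math import comb
-
-n, r = 4, 2
-sets = [frozenset(c) for c in combinations(range(1, n + 1), r)]
-
-# search family sizes from largest to smallest for a pairwise-intersecting family
-best = 0
-for size in range(len(sets), 0, -1):
-    if any(all(a & b for a, b in combinations(fam, 2))
-           for fam in combinations(sets, size)):
-        best = size
-        break
-
-print(best, comb(n - 1, r - 1))   # prints: 3 3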
- -When n > 2r, families of the first type (variously known as sunflowers, stars, dictatorships, centred families, principal families) are the unique maximum families. Friedgut proved that in this case, a family which is almost of maximum size has an element which is common to almost all of its sets. This property is known as stability. - -An intersecting family of r-element sets may be maximal, in that no further set can be added without destroying the intersection property, but not of maximum size. An example with n = 7 and r = 3 is the set of 7 lines of the Fano plane, much less than the Erdős–Ko–Rado bound of 15. - -There is a q-analog of the Erdős–Ko–Rado theorem for intersecting families of subspaces over finite fields, due to Frankl: if $S$ is an intersecting family of $k$-dimensional subspaces of an $n$-dimensional vector space over a finite field of order $q$, and $n \geq 2k$, then -$$ -\vert S \vert \leq \binom{n-1}{k-1}_q. -$$ - -The Erdős–Ko–Rado theorem gives a bound on the maximum size of an independent set in Kneser graphs contained in Johnson schemes. - -Similarly, the analog of the Erdős–Ko–Rado theorem for intersecting families of subspaces over finite fields gives a bound on the maximum size of an independent set in q-Kneser graphs contained in Grassmann schemes. diff --git a/wiki/wikipedia/3988.txt b/wiki/wikipedia/3988.txt deleted file mode 100644 index 41e1e9589f0b874fd1f955de23f903fc38e6c9b8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3988.txt +++ /dev/null @@ -1,77 +0,0 @@ -The blossom algorithm is an algorithm in graph theory for constructing maximum matchings on graphs. The algorithm was developed by Jack Edmonds in 1961, and published in 1965. Given a general graph G = (V, E), the algorithm finds a matching M such that each vertex in V is incident with at most one edge in M and |M| is maximized. The matching is constructed by iteratively improving an initial empty matching along augmenting paths in the graph. Unlike bipartite matching, the key new idea is that an odd-length cycle in the graph (blossom) is contracted to a single vertex, with the search continuing iteratively in the contracted graph. - -The algorithm runs in time $O( |E||V|^2)$, where $|E|$ is the number of edges of the graph and $|V|$ is its number of vertices. A better running time of $O( |E| |V|^{1 / 2} )$ for the same task can be achieved with the much more complex algorithm of Micali and Vazirani. - -A major reason that the blossom algorithm is important is that it gave the first proof that a maximum-size matching could be found using a polynomial amount of computation time. Another reason is that it led to a linear programming polyhedral description of the matching polytope, yielding an algorithm for min-weight matching. - -As elaborated by Alexander Schrijver, further significance of the result comes from the fact that this was the first polytope whose proof of integrality "does not simply follow just from total unimodularity, and its description was a breakthrough in polyhedral combinatorics." - -Given G = (V, E) and a matching M of G, a vertex v is exposed if no edge of M is incident with v. A path in G is an alternating path if its edges are alternately not in M and in M (or in M and not in M).
An augmenting path P is an alternating path that starts and ends at two distinct exposed vertices. Note that the number of unmatched edges in an augmenting path is greater by one than the number of matched edges, and hence the total number of edges in an augmenting path is odd. A matching augmentation along an augmenting path P is the operation of replacing M with a new matching $M_1 = M \oplus P = ( M \setminus P ) \cup ( P \setminus M )$. - -By Berge's lemma, matching M is maximum if and only if there is no M-augmenting path in G. Hence, either a matching is maximum, or it can be augmented. Thus, starting from an initial matching, we can compute a maximum matching by augmenting the current matching with augmenting paths as long as we can find them, and return whenever no augmenting paths are left. We can formalize the algorithm as follows: - -INPUT: Graph G, initial matching M on G - -OUTPUT: maximum matching M* on G - -function find_maximum_matching(G, M) : M* - -P ← find_augmenting_path(G, M) - -if P is non-empty then - -return find_maximum_matching(G, augment M along P) - -else - -return M - -end if - -end function - -We still have to describe how augmenting paths can be found efficiently. The subroutine to find them uses blossoms and contractions. - -Given G = (V, E) and a matching M of G, a blossom B is a cycle in G consisting of 2k + 1 edges of which exactly k belong to M, and where one of the vertices v of the cycle (the base) is such that there exists an alternating path of even length (the stem) from v to an exposed vertex w. - -Finding Blossoms: - -* Traverse the graph starting from an exposed vertex. - -* Starting from that vertex, label it as an outer vertex "o". - -* Alternate the labeling between vertices being inner "i" and outer "o" such that no two adjacent vertices have the same label. - -* If we end up with two adjacent vertices labeled as outer "o" then we have an odd-length cycle and hence a blossom. - -Define the contracted graph G’ as the graph obtained from G by contracting every edge of B, and define the contracted matching M’ as the matching of G’ corresponding to M. - -G’ has an M’-augmenting path if and only if G has an M-augmenting path, and any M’-augmenting path P’ in G’ can be lifted to an M-augmenting path in G by undoing the contraction by B so that the segment of P’ (if any) traversing through vB is replaced by an appropriate segment traversing through B. In more detail: - -* if P’ traverses through a segment u → vB → w in G’, then this segment is replaced with the segment u → ( u’ → ... → w’ ) → w in G, where blossom vertices u’ and w’ and the side of B, ( u’ → ... → w’ ), going from u’ to w’ are chosen to ensure that the new path is still alternating (u’ is exposed with respect to $M \cap B$, $\{ w', w \} \in E \setminus M$). - -* if P’ has an endpoint vB, then the path segment u → vB in G’ is replaced with the segment u → ( u’ → ... → v’ ) in G, where blossom vertices u’ and v’ and the side of B, ( u’ → ... → v’ ), going from u’ to v’ are chosen to ensure that the path is alternating (v’ is exposed, $\{ u', u \} \in E \setminus M$). - -Thus blossoms can be contracted and the search performed in the contracted graphs. This reduction is at the heart of Edmonds' algorithm.
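-Writing out the full search is lengthy, but implementations of Edmonds' blossom approach are available off the shelf; for example, the Python library networkx exposes one through its matching routines. A usage sketch (the graph is an arbitrary example containing an odd cycle):
-
-import networkx as nx
-
-G = nx.Graph()
-G.add_edges_from([(1, 2), (2, 3), (3, 1), (3, 4), (4, 5)])  # the triangle 1-2-3 is a blossom candidate
-
-# with maxcardinality=True, the weighted routine returns a maximum-cardinality matching
-M = nx.max_weight_matching(G, maxcardinality=True)
-print(M)   # a maximum matching with two edges, e.g. {(1, 2), (4, 3)}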
- -The search for an augmenting path uses an auxiliary data structure consisting of a forest F whose individual trees correspond to specific portions of the graph G. In fact, the forest F is the same as would be used to find maximum matchings in bipartite graphs (without need for shrinking blossoms). - -In each iteration the algorithm either (1) finds an augmenting path, (2) finds a blossom and recurses onto the corresponding contracted graph, or (3) concludes there are no augmenting paths. The auxiliary structure is built by an incremental procedure discussed next. - -* a tree T in G is an alternating tree with respect to M, if - -** T contains exactly one exposed vertex r called the tree root - -** every vertex at an odd distance from the root has exactly two incident edges in T, and - -** all paths from r to leaves in T have even lengths, their odd edges are not in M and their even edges are in M. - -* a forest F in G is an alternating forest with respect to M, if - -** its connected components are alternating trees, and - -** every exposed vertex in G is a root of an alternating tree in F. - -Each iteration of the search loop either adds a vertex to a tree T in F, finds an augmenting path, or finds a blossom. It is easy to see that the running time is $O( |E||V|^2)$. - -When G is bipartite, there are no odd cycles in G. In that case, blossoms will never be found, and the blossom-handling steps of the algorithm can simply be removed. The algorithm thus reduces to the standard algorithm to construct maximum cardinality matchings in bipartite graphs. diff --git a/wiki/wikipedia/3989.txt b/wiki/wikipedia/3989.txt deleted file mode 100644 index ae77e41a87767ed3ed19de06dca7da4589c5495a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3989.txt +++ /dev/null @@ -1,281 +0,0 @@ -The following is a list of lemmas (or, "lemmata", i.e. minor theorems, or sometimes intermediate technical results factored out of proofs). See also list of axioms, list of theorems and list of conjectures.
*Abel's lemma (mathematical series) - -*Abhyankar's lemma (algebraic geometry) - -*Archimedes's lemmas (Euclidean geometry) - -*Artin–Rees lemma (commutative algebra) - -*Aubin–Lions lemma - -*Barbalat's lemma (dynamical systems) - -*Berge's lemma (graph theory) - -*Bézout's lemma (number theory) - -*Bhaskara's lemma (Diophantine equations) - -*Borel's lemma (partial differential equations) - -*Borel–Cantelli lemma (probability theory) - -*Bramble–Hilbert lemma (numerical analysis) - -*Burnside's lemma also known as the Cauchy–Frobenius lemma (group theory) - -*Céa's lemma (numerical analysis) - -*Closed map lemma (topology) - -*Cotlar–Stein lemma (functional analysis) - -*Cousin's lemma (integrals) - -*Covering lemma (set theory) - -*Craig interpolation lemma (mathematical logic) - -*Crossing lemma (knot theory, graph theory) - -*Danielson–Lanczos lemma (Fourier transforms) - -*Dehn's lemma (geometric topology) - -*Delta lemma (set theory) - -*Diagonal lemma (mathematical logic) - -*Dickson's lemma (combinatorics) - -*Doob–Dynkin lemma (probability theory) - -*Dwork's lemma (number theory) - -*Dynkin lemma (set theory) - -*Ehrling's lemma (functional analysis) - -*Ellis–Numakura lemma (topological semigroups) - -*Estimation lemma (contour integrals) - -*Euclid's lemma (number theory) - -*Expander mixing lemma (graph theory) - -*Factorization lemma (measure theory) - -*Farkas's lemma (linear programming) - -*Fatou's lemma (measure theory) - -*Fekete's lemma (mathematical analysis) - -*Feld–Tai lemma (electromagnetism) - -*Finsler's lemma (control theory) - -*Fitting lemma (abstract algebra) - -*Five lemma (homological algebra) - -*Fixed-point lemma for normal functions (axiomatic set theory) - -*Fodor's lemma (set theory) - -*Forking lemma (cryptography) - -*Frattini's lemma (finite groups) - -*Frostman's lemma (geometric measure theory) - -*Fundamental lemma (Langlands program) - -*Fundamental lemma of calculus of variations - -*Fundamental lemma of sieve theory (sieve theory) - -*Gauss's lemmas (polynomials | number theory | Riemannian geometry) - -*Glivenko–Cantelli lemma (statistics) - -*Gödel's diagonal lemma (mathematical logic) - -*Goursat's lemma (algebra) - -*Grönwall's inequality, also known as Grönwall's lemma (inequalities) - -*Handshaking lemma (graph theory) - -*Hartogs's lemma (several complex variables) - -*Hensel's lemma (commutative rings) - -*Higman's lemma (order theory) - -*Hopf lemma - -*Horseshoe lemma (homological algebra) - -*Hotelling's lemma (microeconomics) - -*Hua's lemma (analytic number theory) - -*Injective test lemma (homological algebra) - -*Itô's lemma (stochastic calculus) - -*Johnson–Lindenstrauss lemma (Euclidean geometry) - -*Jónsson's lemma - -*Jordan's lemma (complex analysis) - -*Kalman–Yakubovich–Popov lemma (system analysis, control theory) - -*Kelly's lemma (graph theory) - -*Knaster–Kuratowski–Mazurkiewicz lemma (fixed-point theory) - -*Kőnig's lemma (graph theory) - -*Kronecker's lemma (infinite sums) - -*Krull's separation lemma - -*Lax–Milgram lemma (differential equations) - -*Lebesgue's number lemma (dimension theory) - -*Leftover hash lemma (cryptography) - -*Lindelöf's lemma (topology) - -*Lindenbaum's lemma (mathematical logic) - -*Little's lemma (queuing theory) - -*Littlewood–Offord lemma (combinatorics) - -*Lovász local lemma (probability theory) - -*Malliavin's absolute continuity lemma (measure theory) - -*Margulis lemma (hyperbolic geometry) - -*Matrix determinant lemma (matrix theory) - -*Matrix inversion lemma - -*Mautner's lemma (representation theory)
- -*Morse lemma (differential topology) - -*Moschovakis coding lemma (set theory) - -*Mostowski collapse lemma (mathematical logic) - -*Nakayama lemma (commutative algebra) - -*Newman's lemma (term rewriting) - -*Neyman–Pearson lemma (statistics) - -*Nine lemma (homological algebra) - -*Noether's normalization lemma (commutative algebra) - -*Ogden's lemma (formal languages) - -*Ping-pong lemma (geometric group theory) - -*Piling-up lemma (linear cryptanalysis) - -*Poincaré lemma of closed and exact differential forms (differential forms) - -*Pólya–Burnside lemma - -*Prime avoidance lemma - -*Pugh's closing lemma - -*Pumping lemma (formal languages) sometimes called the Bar-Hillel lemma - -*Rasiowa–Sikorski lemma (set theory) - -*Riemann–Lebesgue lemma (harmonic analysis) - -*Riesz's lemma (functional analysis) - -*Robbins lemma (statistics) - -*Sard's lemma (mathematical analysis, singularity theory) - -*Schanuel's lemma (projective modules) - -*Schreier's subgroup lemma (group theory) - -*Schur's lemma (representation theory) - -*Schwarz lemma (complex analysis) - -*Schwartz–Zippel lemma (polynomials) - -*Shadowing lemma (geometry) - -*Shephard's lemma (microeconomics) - -*Short five lemma (homological algebra) - -*Siegel's lemma (Diophantine approximation) - -*Snake lemma (homological algebra) - -*Sperner's lemma (combinatorics) - -*Splitting lemma (homological algebra) - -*Stechkin's lemma (functional and numerical analysis) - -*Stein's lemma (probability theory) - -*Stewart–Walker lemma (tensors) - -*Switching lemma (computational complexity theory) - -*Szemerédi regularity lemma (graph theory) - -*Tube lemma (topology) - -*Tukey's lemma (metamathematics) also known as the Teichmüller–Tukey lemma - -*Ultrafilter lemma (order theory) - -*Urysohn's lemma (general topology) - -*Vaughan's lemma (analytic number theory) - -*Vitali covering lemma (real analysis) - -*Wald's lemma (probability theory) - -*Watson's lemma - -*Weyl's lemma (Laplace equation) (partial differential equations) - -*Whitehead's lemma (Lie algebras) - -*Yao's XOR lemma (cryptography) - -*Yoneda lemma (category theory) - -*Zassenhaus lemma (group theory) - -*Zolotarev's lemma (number theory) - -*Zorn's lemma also known as the Kuratowski–Zorn lemma (set theory) diff --git a/wiki/wikipedia/399.txt b/wiki/wikipedia/399.txt deleted file mode 100644 index a203b8e37f40081ea73511cbe6986d2a8fae4502..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/399.txt +++ /dev/null @@ -1,41 +0,0 @@ -Gomoku, also called Five in a Row, is an abstract strategy board game. It is traditionally played with Go pieces (black and white stones) on a Go board. It is played using a 15×15 board while in the past a 19×19 board was standard. Because pieces are typically not moved or removed from the board, gomoku may also be played as a paper-and-pencil game. The game is known in several countries under different names. - -Players alternate turns placing a stone of their color on an empty intersection. Black plays first. The winner is the first player to form an unbroken chain of five stones horizontally, vertically, or diagonally. Placing so that a line of more than five stones of the same color is created does not result in a win; these are called overlines.
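-The win condition is easy to express in code. A minimal Python sketch (illustrative only, for the standard rule where an overline does not win) that tests whether the stone just placed at (row, col) completes a line of exactly five:
-
-def is_win(board, row, col):
-    """board[r][c] holds 'B', 'W' or None; (row, col) was just played."""
-    stone = board[row][col]
-    n = len(board)
-    for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):   # the four line directions
-        count = 1
-        for sign in (1, -1):                            # extend in both directions
-            r, c = row + sign * dr, col + sign * dc
-            while 0 <= r < n and 0 <= c < n and board[r][c] == stone:
-                count += 1
-                r, c = r + sign * dr, c + sign * dc
-        if count == 5:                                  # exactly five; an overline (>5) does not win
-            return True
-    return False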
- -Gomoku has existed in Japan since the Meiji Restoration (1868). The name "gomoku" is from the Japanese language, in which it is referred to as gomoku narabe (五目並べ). Go means five, moku is a counter word for pieces and narabe means line-up. The game is popular in China, where it is called Wuziqi (五子棋). Wu (五 wǔ) means five, zi (子 zǐ) means piece, and qi (棋 qí) refers to a board game category in Chinese. The game is also popular in Korea, where it is called omok (오목 [五目]) which has the same structure and origin as the Japanese name. - -In the nineteenth century, the game was introduced to Britain where it was known as Go Bang, said to be a corruption of the Japanese word goban, which was itself adapted from the Chinese k'i pan (qí pán) "go-board." - -Unrestricted gomoku gives a strong advantage to the first player. - -Championships in Gomoku previously used the "Pro" opening rule, which mandated that the first player place the first stone in the center of the board. The second player's stone placement was unrestricted. The first player's second stone had to be placed at least three intersections away from the first player's first stone. This rule was used in the 1989 and 1991 world championships. - -Freestyle Gomoku has no restrictions on either player and allows a player to win by creating a line of five or more stones, with each player alternating turns placing one stone at a time. - -Black (the player who makes the first move) has long been known to have an advantage, even before L. Victor Allis proved that black can force a win (see below). Renju attempts to mitigate this imbalance with extra rules that aim to reduce black's first player advantage. - -It is played on a 15×15 board, with the rules of three and three, four and four, and overlines applied to Black only. - -* The rule of three and three bans a move that simultaneously forms two open rows of three stones (rows not blocked by an opponent's stone at either end). - -* The rule of four and four bans a move that simultaneously forms two rows of four stones (open or not). - -* Overlines prevent a player from winning if they form a line of 6 or more stones. - -Also called Wu, Ninuki Renju is a variant which adds capturing to the game; a pair of stones of the same color may be captured by the opponent by means of custodial capture (sandwiching a line of two stones lengthwise). The winner is the player either to make a perfect five in a row, or to capture five pairs of the opponent's stones. It uses a 15x15 board and the rules of three and three and overlines, as in Renju. It also allows the game to continue after a player has formed a row of five stones if their opponent can capture a pair across the line. - -Pente is related to Ninuki-Renju, and has the same custodial capture method, but is most often played on a 19x19 board and does not use the rules of three and three, four and four, or overlines. - -Tournament rules are used in professional play to balance the game and mitigate the first player advantage. The tournament rule used for the Gomoku world championships since 2009 is the Swap2 opening rule. - -The first player's first stone must be placed in the center of the board. The second player's first stone may be placed anywhere on the board. The first player's second stone must be placed at least four intersections away from the first stone (three empty intersections in between the two stones). - -Since 2009 tournament play has resumed, with the opening rule changed to Swap2. Allis proved in 1993 that black can force a win in freestyle gomoku on the 15×15 board; this applies to both free-style gomoku and standard gomoku without any opening rules. It seems very likely that black wins on larger boards too. On a board of any size, freestyle gomoku is an m,n,k-game, hence it is known that the first player can force a win or a draw.
In 2001, Allis' winning strategy was also approved for renju, a variation of gomoku, when there was no limitation on the opening stage. - -However, neither the theoretical values of all legal positions, nor the opening rules such as Swap2 used by the professional gomoku players have been solved yet, so the topic of gomoku artificial intelligence is still a challenge for computer scientists, such as the problem of how to improve gomoku algorithms to make them more strategic and competitive. Nowadays, most of the state-of-the-art gomoku algorithms are based on the alpha-beta pruning framework. - -Reisch proved that generalized gomoku is PSPACE-complete. He also observed that the reduction can be adapted to the rules of k-in-a-Row for fixed k. Although he did not specify exactly which values of k are allowed, the reduction would appear to generalize to any k ≥ 5. - -There have been several well-known tournaments for gomoku programs since 1989. The Computer Olympiad started with the gomoku game in 1989, but gomoku has not been in the list since 1993. The Renju World Computer Championship was started in 1991, and held 4 times until 2004. The Gomocup tournament has been played since 2000, taking place every year, and is still active, with more than 30 participants from about 10 countries. The Hungarian Computer Go-Moku Tournament was also played twice in 2005. There were also two Computer vs. Human tournaments played in the Czech Republic, in 2006 and 2011. Not until 2017 did computer programs demonstrably outperform the world human champion in public competitions. In the Gomoku World Championship 2017, there was a match between the world champion program Yixin and the world champion human player Rudolf Dupszki. Yixin won the match with a score of 2–0. - -Gomoku was featured in a 2018 Korean drama by Baek Seung-Hwa starring Park Se-wan. The film follows Baduk Lee (Park Se-wan), a former go prodigy, who retired after a humiliating loss on time. Baduk Lee works part-time at a go club years later, where she meets Ahn Kyung Kim, who introduces her to an Omok (Korean Gomoku) tournament. Lee is initially uninterested and considers Omok a children's game, but after her roommate loses money on an impulse purchase, she enters the tournament for the prize money and loses badly, being humiliated once again. Afterwards she begins training to redeem herself and becomes a serious omok player. diff --git a/wiki/wikipedia/3990.txt b/wiki/wikipedia/3990.txt deleted file mode 100644 index 1abdc3e1a9028228f0492b6b33856668ba456df3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3990.txt +++ /dev/null @@ -1,262 +0,0 @@ -The Schwarzschild solution describes spacetime under the influence of a massive, non-rotating, spherically symmetric object. It is considered by some to be one of the simplest and most useful solutions to the Einstein field equations. - -Working in a coordinate chart with coordinates $ \left(r, \theta, \phi, t \right)$ labelled 1 to 4 respectively, we begin with the metric in its most general form (10 independent components, each of which is a smooth function of 4 variables). The solution is assumed to be spherically symmetric, static and vacuum. For the purposes of this article, these assumptions may be stated as follows (see the relevant links for precise definitions): - -# A spherically symmetric spacetime is one that is invariant under rotations and taking the mirror image.
- -# A static spacetime is one in which all metric components are independent of the time coordinate $t$ (so that $\tfrac\partial{\partial t}g_{\mu \nu}=0$) and the geometry of the spacetime is unchanged under a time-reversal $t \rightarrow -t$. - -# A vacuum solution is one that satisfies the equation $T_{ab}=0$. From the Einstein field equations (with zero cosmological constant), this implies that $R_{ab}=0$ since contracting $ R_{ab}-\tfrac{R}{2} g_{ab}=0$ yields $R = 0$. - -# Metric signature used here is (+,+,+,−). - -The first simplification to be made is to diagonalise the metric. Under the coordinate transformation, $(r, \theta, \phi, t) \rightarrow (r, \theta, \phi, -t)$, all metric components should remain the same. The metric components $g_{\mu 4}$ ($\mu \ne 4$) change under this transformation as: -$$ -g_{\mu 4}'=\frac{\partial x^{\alpha}}{\partial x^{'\mu}} \frac{\partial x^{\beta}}{\partial x^{'4}} g_{\alpha \beta}= -g_{\mu 4} -$$ ($\mu \ne 4$) - -But, as we expect $g'_{\mu 4}= g_{\mu 4}$ (metric components remain the same), this means that: -$$ -g_{\mu 4}= 0 -$$ ($\mu \ne 4$) - -Similarly, the coordinate transformations $(r, \theta, \phi, t) \rightarrow (r, \theta, -\phi, t)$ and $(r, \theta, \phi, t) \rightarrow (r, -\theta, \phi, t)$ respectively give: -$$ -g_{\mu 3}= 0 -$$ ($\mu \ne 3$) -$$ -g_{\mu 2}= 0 -$$ ($\mu \ne 2$) - -Putting all these together gives: -$$ -g_{\mu \nu }= 0 -$$ ($ \mu \ne \nu $) - -and hence the metric must be of the form: -$$ -ds^2= g_{11}d r^2 + g_{22} d \theta ^2 + g_{33} d \phi ^2 + g_{44} dt ^2 -$$ - -where the four metric components are independent of the time coordinate $t$ (by the static assumption). - -On each hypersurface of constant $t$, constant $\theta$ and constant $\phi$ (i.e., on each radial line), $g_{11}$ should only depend on $r$ (by spherical symmetry). Hence $g_{11}$ is a function of a single variable: -$$ -g_{11}=A\left(r\right) -$$ - -A similar argument applied to $g_{44}$ shows that: -$$ -g_{44}=B\left(r\right) -$$ - -On the hypersurfaces of constant $t$ and constant $r$, it is required that the metric be that of a 2-sphere: -$$ -dl^2=r_{0}^2 (d \theta^2 + \sin^2 \theta d \phi^2) -$$ - -Choosing one of these hypersurfaces (the one with radius $r_{0}$, say), the metric components restricted to this hypersurface (which we denote by $\tilde{g}_{22}$ and $\tilde{g}_{33}$) should be unchanged under rotations through $\theta$ and $\phi$ (again, by spherical symmetry). Comparing the forms of the metric on this hypersurface gives: -$$ -\tilde{g}_{22}\left(d \theta^2 + \frac{\tilde{g}_{33}}{\tilde{g}_{22}} d \phi^2 \right) = r_{0}^2 (d \theta^2 + \sin^2 \theta d \phi^2) -$$ - -which immediately yields: -$$ -\tilde{g}_{22}=r_{0}^2 -$$ and $\tilde{g}_{33}=r_{0}^2 \sin ^2 \theta$ - -But this is required to hold on each hypersurface; hence, -$$ -g_{22}= r^2 -$$ and $g_{33}= r^2 \sin^2 \theta$ - -An alternative intuitive way to see that $g_{22}$ and $g_{33}$ must be the same as for a flat spacetime is that stretching or compressing an elastic material in a spherically symmetric manner (radially) will not change the angular distance between two points. - -Thus, the metric can be put in the form: -$$ -ds^2=A\left(r\right)dr^2+r^2d \theta^2+r^2 \sin^2 \theta d \phi^2 + B\left(r\right) dt^2 -$$ - -with $A$ and $B$ as yet undetermined functions of $r$. Note that if $A$ or $B$ is equal to zero at some point, the metric would be singular at that point. 
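-Before grinding through the algebra that follows, it may help to see how the computation can be cross-checked with a computer algebra system. The following sympy sketch (not part of the original derivation) computes Christoffel symbols for the diagonal ansatz just obtained:
-
-import sympy as sp
-
-r, th, ph, t = sp.symbols('r theta phi t')
-A = sp.Function('A')(r)
-B = sp.Function('B')(r)
-x = [r, th, ph, t]
-
-g = sp.diag(A, r**2, r**2 * sp.sin(th)**2, B)   # metric ansatz; signature (+,+,+,-) means B < 0
-ginv = g.inv()
-
-def christoffel(lam, mu, nu):
-    """Gamma^lam_{mu nu} = (1/2) g^{lam s} (g_{s nu, mu} + g_{s mu, nu} - g_{mu nu, s})"""
-    return sp.simplify(sum(ginv[lam, s] * (sp.diff(g[s, nu], x[mu])
-                                           + sp.diff(g[s, mu], x[nu])
-                                           - sp.diff(g[mu, nu], x[s])) / 2
-                           for s in range(4)))
-
-print(christoffel(0, 3, 3))   # -B'(r)/(2*A(r)), matching the Gamma^1_{44} entry in the table below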
- -Using the metric above, we find the Christoffel symbols, where the indices are $(1,2,3,4)=(r,\theta,\phi,t)$. The sign $'$ denotes a total derivative of a function. - -\Gamma^1_{ik} = \begin{bmatrix} A'/\left( 2A \right) & 0 & 0 & 0\\ 0 & -r/A & 0 & 0\\ 0 & 0 & -r \sin^2 \theta /A & 0\\ 0 & 0 & 0 & -B'/\left( 2A \right) \end{bmatrix} - -\Gamma^2_{ik} = \begin{bmatrix} 0 & 1/r & 0 & 0\\ 1/r & 0 & 0 & 0\\ 0 & 0 & -\sin\theta\cos\theta & 0\\ 0 & 0 & 0 & 0 \end{bmatrix} - -\Gamma^3_{ik} = \begin{bmatrix} 0 & 0 & 1/r & 0\\ 0 & 0 & \cot\theta & 0\\ 1/r & \cot\theta & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} - -\Gamma^4_{ik} = \begin{bmatrix} 0 & 0 & 0 & B'/\left( 2B \right)\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \\ B'/\left( 2B \right) & 0 & 0 & 0\end{bmatrix} - -To determine $A$ and $B$, the vacuum field equations are employed: -$$ -R_{\alpha\beta}= 0 -$$ - -Hence: - -{\Gamma^\rho_{\beta\alpha,\rho}} - \Gamma^\rho_{\rho\alpha,\beta} + \Gamma^\rho_{\rho\lambda} \Gamma^\lambda_{\beta\alpha} - \Gamma^\rho_{\beta\lambda}\Gamma^\lambda_{\rho\alpha}=0, - -where a comma is used to set off the index that is being used for the derivative. Only three of these equations are nontrivial and upon simplification become: -$$ -4 A' B^2 - 2 r B'' AB + r A' B'B + r B'^2 A=0, -$$ -$$ -r A'B + 2 A^2 B - 2AB - r B' A=0, -$$ -$$ -- 2 r B'' AB + r A' B'B + r B'^2 A - 4B' AB=0 -$$ - -(the fourth equation is just $\sin^2 \theta$ times the second equation), where the prime denotes the r derivative of the functions and the double prime the second derivative. Subtracting the first and third equations produces: -$$ -A'B +A B'=0 \Rightarrow A(r)B(r) =K -$$ - -where $K$ is a non-zero real constant. Substituting $A(r)B(r) =K$ into the second equation and tidying up gives: -$$ -r A' =A(1-A) -$$ - -which has general solution: -$$ -A(r)=\left(1+\frac{1}{Sr}\right)^{-1} -$$ - -for some non-zero real constant $S$. Hence, the metric for a static, spherically symmetric vacuum solution is now of the form: -$$ -ds^2=\left(1+\frac{1}{S r}\right)^{-1}dr^2+r^2(d \theta^2 + \sin^2 \theta d \phi^2)+K \left(1+\frac{1}{S r}\right)dt^2 -$$ - -Note that the spacetime represented by the above metric is asymptotically flat, i.e. as $r \rightarrow \infty$, the metric approaches that of the Minkowski metric and the spacetime manifold resembles that of Minkowski space. - -The geodesics of the metric (obtained where $ds$ is extremised) must, in some limit (e.g., toward infinite speed of light), agree with the solutions of Newtonian motion (e.g., obtained by Lagrange equations). (The metric must also limit to Minkowski space when the mass it represents vanishes.) -$$ -0=\delta\int\frac{ds}{dt}dt=\delta\int(KE+PE_g)dt -$$ - -(where $KE$ is the kinetic energy and $PE_g$ is the potential energy due to gravity) The constants $K$ and $S$ are fully determined by some variant of this approach; from the weak-field approximation one arrives at the result: -$$ -g_{44}=K\left(1 +\frac{1}{Sr}\right) \approx -c^2+\frac{2Gm}{r} = -c^2 \left(1-\frac{2Gm}{c^2 r} \right) -$$ - -where $G$ is the gravitational constant, $m$ is the mass of the gravitational source and $c$ is the speed of light.
It is found that: -$$ -K= -c^2 -$$ and $\frac{1}{S}=-\frac{2Gm}{c^2}$ - -Hence: -$$ -A(r)=\left(1-\frac{2Gm}{c^2 r}\right)^{-1} -$$ and $B(r)=-c^2 \left(1-\frac{2Gm}{c^2 r}\right)$ - -So, the Schwarzschild metric may finally be written in the form: -$$ -ds^2=\left(1-\frac{2Gm}{c^2 r}\right)^{-1}dr^2+r^2(d \theta^2 +\sin^2 \theta d \phi^2)-c^2 \left(1-\frac{2Gm}{c^2 r}\right)dt^2 -$$ - -Note that: -$$ -\frac{2Gm}{c^2}=r_s -$$ - -is the definition of the Schwarzschild radius for an object of mass $m$, so the Schwarzschild metric may be rewritten in the alternative form: -$$ -ds^2=\left(1-\frac{r_s}{r}\right)^{-1}dr^2+r^2(d\theta^2 +\sin^2\theta d\phi^2)-c^2\left(1-\frac{r_s}{r}\right)dt^2 -$$ - -which shows that the metric becomes singular approaching the event horizon (that is, $r \rightarrow r_s$). The metric singularity is not a physical one (although there is a real physical singularity at $r=0$), as can be shown by using a suitable coordinate transformation (e.g. the Kruskal–Szekeres coordinate system). - -The Schwarzschild metric can also be derived using the known physics for a circular orbit and a temporarily stationary point mass. Start with the metric whose coefficients are unknown functions of $r$: -$$ --c^2 = \left ( {ds \over d\tau} \right )^2 = A(r)\left ( {dr \over d\tau} \right )^2 + r^2\left ( {d\phi \over d\tau} \right )^2 + B(r)\left( {dt \over d\tau} \right)^2. -$$ - -Now apply the Euler-Lagrange equation to the arc length integral $J=\int_{\tau_1}^{\tau_2} \sqrt{-\left(\text{d}s/\text{d}\tau\right)^2 } \text{d}\tau.$ Since $ds/d\tau$ is constant, the integrand can be replaced with $(\text{d}s/\text{d}\tau)^2,$ because the E-L equation is exactly the same if the integrand is multiplied by any constant. Applying the E-L equation to $J$ with the modified integrand yields: -$$ -\begin{array}{lcl} A'(r)\dot{r}^2 + 2r\dot{\phi}^2 + B'(r)\dot{t}^2 & = & 2A'(r)\dot{r}^2 + 2A(r)\ddot{r} \\ 0 & = & 2r\dot{r}\dot{\phi} + r^2\ddot{\phi} \\ 0 & = & B'(r)\dot{r}\dot{t} + B(r)\ddot{t} \end{array} -$$ - -where dot denotes differentiation with respect to $\tau.$ - -In a circular orbit $\dot{r}=\ddot{r}=0,$ so the first E-L equation above is equivalent to -$$ -2r\dot{\phi}^2 + B'(r)\dot{t}^2 = 0 \Leftrightarrow B'(r) = -2r\dot{\phi}^2/\dot{t}^2 = -2r(d\phi/dt)^2. -$$ - -Kepler's third law of planetary motion is -$$ -\frac{T^2}{r^3} = \frac{4\pi^2}{G(M+m)}. -$$ - -In a circular orbit, the period $T$ equals $2\pi / (d\phi/dt),$ implying -$$ -\left( {d\phi \over dt} \right)^2 = GM/r^3 -$$ - -since the point mass $m$ is negligible compared to the mass of the central body $M.$ So $B'(r) = -2GM/r^2 $ and integrating this yields $B(r) = 2GM/r + C, $ where $C $ is an unknown constant of integration. $C $ can be determined by setting $M=0, $ in which case the space-time is flat and $B(r)=-c^2. $ So $C = -c^2 $ and -$$ -B(r) = 2GM/r - c^2 = c^2(2GM/c^2r - 1) = c^2(r_s/r - 1). -$$ - -When the point mass is temporarily stationary, $\dot{r}=0 $ and $\dot{\phi}=0. $ The original metric equation becomes $\dot{t}^2 = -c^2/B(r)$ and the first E-L equation above becomes $A(r) = B'(r)\dot{t}^2 / (2\ddot{r}).$ When the point mass is temporarily stationary, $\ddot{r}$ is the acceleration of gravity, $-MG/r^2.$ So -$$ -A(r) = \left(\frac{-2MG}{r^2}\right) \left(\frac{-c^2}{2MG/r - c^2}\right) \left(-\frac{r^2}{2MG}\right) = \frac{1}{1 - 2MG/(rc^2)} = \frac{1}{1 - r_s/r}.
-$$ - -The original formulation of the metric uses anisotropic coordinates, in which the velocity of light is not the same in the radial and transverse directions. Arthur Eddington gave alternative forms in isotropic coordinates. For isotropic spherical coordinates $r_1$, $\theta$, $\phi$, coordinates $\theta$ and $\phi$ are unchanged, and then (provided $r \geq \frac{2 Gm}{c^2}$) - -$$ -r = r_1 \left(1+\frac{Gm}{2c^2 r_1}\right)^{2} -$$ , $dr = dr_1 \left(1-\frac{(Gm)^2}{4c^4 r_1^2}\right)$ , and - -$$ -\left(1-\frac{2Gm}{c^2 r}\right) = \left(1-\frac{Gm}{2c^2 r_1}\right)^{2}/\left(1+\frac{Gm}{2c^2 r_1}\right)^{2} -$$ - -Then for isotropic rectangular coordinates $x$, $y$, $z$, -$$ -x = r_1 \sin(\theta) \cos(\phi) \quad, -$$ $y = r_1 \sin(\theta) \sin(\phi) \quad,$ $z = r_1 \cos(\theta)$ - -The metric then becomes, in isotropic rectangular coordinates: -$$ -ds^2= \left(1+\frac{Gm}{2c^2 r_1}\right)^{4}(dx^2+dy^2+dz^2) -c^2 dt^2 \left(1-\frac{Gm}{2c^2 r_1}\right)^{2}/\left(1+\frac{Gm}{2c^2 r_1}\right)^{2} -$$ - -In deriving the Schwarzschild metric, it was assumed that the metric was vacuum, spherically symmetric and static. In fact, the static assumption is stronger than required, as Birkhoff's theorem states that any spherically symmetric vacuum solution of Einstein's field equations is stationary; then one obtains the Schwarzschild solution. Birkhoff's theorem has the consequence that any pulsating star which remains spherically symmetric cannot generate gravitational waves (as the region exterior to the star must remain static). diff --git a/wiki/wikipedia/3991.txt b/wiki/wikipedia/3991.txt deleted file mode 100644 index 757eafb556722b7419071b8f59dc16ed9f663909..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3991.txt +++ /dev/null @@ -1,23 +0,0 @@ -In mathematics, the subspace theorem says that points of small height in projective space lie in a finite number of hyperplanes. It is a result obtained by Wolfgang M. Schmidt. - -The subspace theorem states that if L1,...,Ln are linearly independent linear forms in n variables with algebraic coefficients and if ε>0 is any given real number, then the non-zero integer points x with -$$ -|L_1(x)\cdots L_n(x)|<|x|^{-\epsilon} -$$ - -lie in a finite number of proper subspaces of Qn. - -A quantitative form of the theorem, bounding the number of subspaces containing all solutions, was also obtained by Schmidt, and the theorem was generalised by Schlickewei to allow more general absolute values on number fields. - -The theorem may be used to obtain results on Diophantine equations such as Siegel's theorem on integral points and solution of the S-unit equation. - -The following corollary to the subspace theorem is often itself referred to as the subspace theorem. If a1,...,an are algebraic such that 1,a1,...,an are linearly independent over Q and ε>0 is any given real number, then there are only finitely many rational n-tuples (x1/y,...,xn/y) with -$$ -|a_i-x_i/y|<y^{-(1+1/n+\epsilon)}. -$$ - -If $P(x) = ax^2 + bx + c$ is a second degree polynomial, the zero of P′(x) = 2ax + b is the average of the roots of P. In that case, the convex hull is the line segment with the two roots as endpoints and it is clear that the average of the roots is the middle point of the segment. - -For a third degree complex polynomial P (cubic function) with three distinct zeros, Marden's theorem states that the zeros of P′ are the foci of the Steiner inellipse which is the unique ellipse tangent to the midpoints of the triangle formed by the zeros of P.
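-These statements are easy to test numerically. A small numpy check (illustrative, with an arbitrarily chosen triangle of roots): the critical points of a monic cubic average to the centroid of its roots, as they must if they are the foci of the Steiner inellipse, whose center is the centroid.
-
-import numpy as np
-
-roots = np.array([0 + 0j, 4 + 0j, 1 + 3j])     # zeros of P: a triangle in the plane
-P = np.poly(roots)                             # coefficients of the monic cubic with these zeros
-crit = np.roots(np.polyder(P))                 # zeros of P' (by Marden, the inellipse foci)
-
-# the two foci are symmetric about the ellipse's center, which sits at the centroid
-print(np.isclose(crit.mean(), roots.mean()))   # True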
- -For a fourth degree complex polynomial P (quartic function) with four distinct zeros forming a concave quadrilateral, one of the zeros of P lies within the convex hull of the other three; all three zeros of P′ lie in two of the three triangles formed by the interior zero of P and two other zeros of P. - -In addition, if a polynomial of degree n with real coefficients has n distinct real zeros $x_1<x_2<\cdots<x_n$, then, by Rolle's theorem, the zeros of its derivative lie in the interval $[x_1,x_n]$. - -*Ep(G) is the intersection of all index p normal subgroups; G/Ep(G) is an elementary abelian group, and is the largest elementary abelian p-group onto which G surjects. - -* Ap(G) is the intersection of all normal subgroups K such that G/K is an abelian p-group (i.e., K is an index $p^k$ normal subgroup that contains the derived group $[G,G]$): G/Ap(G) is the largest abelian p-group (not necessarily elementary) onto which G surjects. - -* Op(G) is the intersection of all normal subgroups K of G such that G/K is a (possibly non-abelian) p-group (i.e., K is an index $p^k$ normal subgroup): G/Op(G) is the largest p-group (not necessarily abelian) onto which G surjects. Op(G) is also known as the p-residual subgroup. - -Firstly, as these are weaker conditions on the groups K, one obtains the containments $\mathbf{E}^p(G) \supseteq \mathbf{A}^p(G) \supseteq \mathbf{O}^p(G).$ These are further related as: - -Ap(G) = Op(G)[G,G]. - -Op(G) has an alternative characterization as the subgroup generated by all Sylow q-subgroups of G, where q ≠ p ranges over the prime divisors of the order of G. - -Op(G) is used to define the lower p-series of G, similarly to the upper p-series described in p-core. - -The transfer homomorphism is a homomorphism that can be defined from any group G to the abelianization H/[H,H] of a subgroup H ≤ G of finite index, that is, [G:H] < ∞. The transfer map from a finite group G into its Sylow p-subgroup has a kernel that is easy to describe: the transfer homomorphism from a finite group G into its Sylow p-subgroup P has Ap(G) as its kernel. - -In other words, the "obvious" homomorphism onto an abelian p-group is in fact the most general such homomorphism. - -The fusion pattern of a subgroup H in G is the equivalence relation on the elements of H where two elements h, k of H are fused if they are G-conjugate, that is, if there is some g in G such that h = k^g. The normal structure of G has an effect on the fusion pattern of its Sylow p-subgroups, and conversely the fusion pattern of its Sylow p-subgroups has an effect on the normal structure of G. - -One can define the focal subgroup of H with respect to G as: - -FocG(H) = ⟨ x^{-1}y : x, y in H and x is G-conjugate to y ⟩. - -This focal subgroup measures the extent to which elements of H fuse in G, while the previous definition measured certain abelian p-group homomorphic images of the group G. The content of the focal subgroup theorem is that these two definitions of focal subgroup are compatible. - -It can be shown that the focal subgroup of P in G is the intersection P∩[G,G] of the Sylow p-subgroup P of the finite group G with the derived subgroup [G,G] of G. The focal subgroup is important as it is a Sylow p-subgroup of the derived subgroup. One also gets the following result: - -There exists a normal subgroup K of G with G/K an abelian p-group isomorphic to P/P∩[G,G] (here K denotes Ap(G)), and - -if K is a normal subgroup of G with G/K an abelian p-group, then P∩[G,G] ≤ K, and G/K is a homomorphic image of P/P∩[G,G]. - -The focal subgroup of a finite group G with Sylow p-subgroup P is given by: - -P∩[G,G] = P∩Ap(G) = P∩ker(v) = FocG(P) = ⟨ x^{-1}y : x, y in P and x is G-conjugate to y ⟩ - -where v is the transfer homomorphism from G to P/[P,P].
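-For small groups this chain of equalities can be checked by machine. The following brute-force sympy sketch (illustrative only; G = S4 and p = 2) computes FocG(P) from conjugate pairs and compares it with P∩[G,G]:
-
-from sympy.combinatorics import PermutationGroup
-from sympy.combinatorics.named_groups import SymmetricGroup
-
-G = SymmetricGroup(4)
-P = G.sylow_subgroup(2)          # a Sylow 2-subgroup of S4 (dihedral of order 8)
-D = G.derived_subgroup()         # [G,G] = A4
-
-# generators x^{-1} y over pairs x, y in P that are conjugate in G (brute-force scan)
-gens = [~x * y for x in P.elements for y in P.elements
-        if any(~g * x * g == y for g in G.elements)]
-foc = PermutationGroup(gens)
-
-inter = PermutationGroup([p for p in P.elements if D.contains(p)])
-print(foc.order(), inter.order())                         # both 4: the Klein four-group
-print(foc.is_subgroup(inter) and inter.is_subgroup(foc))  # True, as the theorem predicts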
- -This connection between transfer and fusion is credited to work of Higman, where, in different language, the focal subgroup theorem was proved along with various generalizations. The requirement that G/K be abelian was dropped, so that Higman also studied Op(G) and the nilpotent residual γ(G), as so called hyperfocal subgroups. Higman also did not restrict to a single prime p, but rather allowed π-groups for sets of primes π and used Philip Hall's theorem on Hall subgroups in order to prove similar results about the transfer into Hall π-subgroups; taking π = {p}, a Hall π-subgroup is a Sylow p-subgroup, and the results of Higman are as presented above. - -Interest in the hyperfocal subgroups was renewed by work of Puig in understanding the modular representation theory of certain well behaved blocks. The hyperfocal subgroup of P in G can be defined as P∩γ(G), that is, as a Sylow p-subgroup of the nilpotent residual of G. If P is a Sylow p-subgroup of the finite group G, then one gets the hyperfocal analogue of the focal subgroup theorem: - -P∩γ(G) = P∩Op(G) = ⟨ x^{-1}y : x, y in P and y = x^g for some g in G of order coprime to p ⟩ - -and the local characterization: - -P∩Op(G) = ⟨ x^{-1}y : x, y in Q ≤ P and y = x^g for some g in NG(Q) of order coprime to p ⟩. - -This compares to the local characterization of the focal subgroup as: - -P∩Ap(G) = ⟨ x^{-1}y : x, y in Q ≤ P and y = x^g for some g in NG(Q) ⟩. - -Puig is interested in the generalization of this situation to fusion systems, a categorical model of the fusion pattern of a Sylow p-subgroup with respect to a finite group that also models the fusion pattern of a defect group of a p-block in modular representation theory. In fact fusion systems have found a number of surprising applications and inspirations in the area of algebraic topology known as equivariant homotopy theory. Some of the major algebraic theorems in this area only have topological proofs at the moment. - -Various mathematicians have presented methods to calculate the focal subgroup from smaller groups. For instance, one influential work develops the idea of a local control of fusion, and as an example application shows that: - -P ∩ Ap(G) is generated by the commutator subgroups [Q, NG(Q)] where Q varies over a family C of subgroups of P. - -The choice of the family C can be made in many ways (C is what is called a "weak conjugation family"), and several examples are given: one can take C to be all non-identity subgroups of P, or the smaller choice of just the intersections Q = P ∩ P^g for g in G in which NP(Q) and NP^g(Q) are both Sylow p-subgroups of NG(Q). The work of Grün studied aspects of the transfer and fusion as well, resulting in Grün's first theorem: - -P ∩ Ap(G) is generated by P ∩ [N, N] and P ∩ [Q, Q] where N = NG(P) and Q ranges over the set of Sylow p-subgroups Q = P^g of G. - -Standard textbook presentations contain various applications of the focal subgroup theorem relating fusion, transfer, and a certain kind of splitting called p-nilpotence.
- -During the course of the Alperin–Brauer–Gorenstein theorem classifying finite simple groups with quasi-dihedral Sylow 2-subgroups, it becomes necessary to distinguish four types of groups with quasi-dihedral Sylow 2-subgroups: the 2-nilpotent groups, the Q-type groups whose focal subgroup is a generalized quaternion group of index 2, the D-type groups whose focal subgroup is a dihedral group of index 2, and the QD-type groups whose focal subgroup is the entire quasi-dihedral group. In terms of fusion, the 2-nilpotent groups have 2 classes of involutions and 2 classes of cyclic subgroups of order 4; the Q-type have 2 classes of involutions and one class of cyclic subgroups of order 4; the QD-type have one class each of involutions and cyclic subgroups of order 4. In other words, finite groups with quasi-dihedral Sylow 2-subgroups can be classified according to their focal subgroup, or equivalently, according to their fusion patterns. The explicit lists of groups with each fusion pattern are contained in the original Alperin–Brauer–Gorenstein paper. diff --git a/wiki/wikipedia/3994.txt b/wiki/wikipedia/3994.txt deleted file mode 100644 index 48e168e4883e36d4569f256992c6afd4293909c6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3994.txt +++ /dev/null @@ -1,103 +0,0 @@ -The Archimedean spiral (also known as the arithmetic spiral) is a spiral named after the 3rd-century BC Greek mathematician Archimedes. It is the locus corresponding to the locations over time of a point moving away from a fixed point with a constant speed along a line that rotates with constant angular velocity. Equivalently, in polar coordinates (r, θ) it can be described by the equation -$$ -r = a + b\cdot\theta -$$ - -with real numbers a and b. Changing the parameter a moves the centerpoint of the spiral outward from the origin (positive a toward θ = 0 and negative a toward θ = π), essentially through a rotation of the spiral, while b controls the distance between loops. - -From the above equation it follows that the particle's distance from its starting point is proportional to the angle θ as time elapses. - -Archimedes described such a spiral in his book On Spirals. Conon of Samos was a friend of his, and Pappus states that this spiral was discovered by Conon. - -A physical approach is used below to understand the notion of Archimedean spirals. - -Suppose a point object moves in the Cartesian system with a constant velocity v directed parallel to the x-axis, with respect to the xy-plane. Suppose that at time t = 0 the object is at an arbitrary point (c, 0, 0). If the xy plane rotates with a constant angular velocity ω about the z-axis, then the velocity of the point with respect to the z-axis may be written as: - -\begin{align} - -|v_0|&=\sqrt{v^2+\omega^2(vt+c)^2} \\ - -v_x&=v \cos \omega t - \omega (vt+c) \sin \omega t \\ - -v_y&=v \sin \omega t + \omega (vt+c) \cos \omega t - -\end{align} - -Here vt + c is the modulus of the position vector of the particle at any time t, vx is the velocity component along the x-axis, and vy is the component along the y-axis.
- -\begin{align} - -\int v_x dt &=x \\ - -\int v_y dt &=y - -\end{align} - -The above equations can be integrated by applying integration by parts, leading to the following parametric equations: - -\begin{align} - -x&=(vt + c) \cos \omega t \\ - -y&=(vt+c) \sin \omega t - -\end{align} - -Squaring the two equations and then adding (and simplifying) results in the Cartesian equation -$$ -\sqrt{x^2+y^2}=\frac{v}{\omega}\cdot \arctan \frac{y}{x} +c -$$ - -(using the fact that ωt = θ and θ = arctan y/x) or -$$ -\tan \left(\left(\sqrt{x^2+y^2}-c\right)\cdot\frac{\omega}{v}\right) = \frac{y}{x} -$$ - -Its polar form is -$$ -r= \frac{v}{\omega}\cdot \theta +c -$$ - -Given the parametrization in Cartesian coordinates -$$ -f\colon\theta\mapsto (r\cos \theta, r\sin \theta) = (b \theta\cos \theta,b \theta\sin\theta) -$$ - -the arc length from $\theta_1$ to $\theta_2$ is -$$ -\frac{b}{2}\left[\theta\sqrt{1+\theta^2}+\ln\left(\theta+\sqrt{1+\theta^2}\right)\right]_{\theta_1}^{\theta_2} -$$ - -or equivalently -$$ -\frac{b}{2}\left[\theta\sqrt{1+\theta^2}+\operatorname{arsinh}\theta\right]_{\theta_1}^{\theta_2}. -$$ - -The total length from $\theta_1=0$ to $\theta_2=\theta$ is therefore -$$ -\frac{b}{2}\left[\theta\sqrt{1+\theta^2}+\ln \left(\theta+\sqrt{1+\theta^2} \right)\right]. -$$ - -The curvature is given by -$$ -\kappa=\frac{\theta^2+2}{b(\theta^2+1)^\frac{3}{2}} -$$ - -The Archimedean spiral has the property that any ray from the origin intersects successive turnings of the spiral in points with a constant separation distance (equal to 2πb if θ is measured in radians), hence the name "arithmetic spiral". In contrast to this, in a logarithmic spiral these distances, as well as the distances of the intersection points measured from the origin, form a geometric progression. - -The Archimedean spiral has two arms, one for θ > 0 and one for θ < 0. The two arms are smoothly connected at the origin; taking the mirror image of one arm across the y-axis yields the other arm. - -For large θ a point moves with well-approximated uniform acceleration along the Archimedean spiral, while the spiral corresponds to the locations over time of a point moving away from a fixed point with a constant speed along a line which rotates with constant angular velocity (see contribution from Mikhail Gaichenkov). - -As the Archimedean spiral grows, its evolute asymptotically approaches a circle with radius |v|/ω. - -Sometimes the term Archimedean spiral is used for the more general group of spirals -$$ -r = a + b\cdot\theta^\frac{1}{c}. -$$ - -The normal Archimedean spiral occurs when c = 1. Other spirals falling into this group include the hyperbolic spiral (c = −1), Fermat's spiral (c = 2), and the lituus (c = −2). Virtually all static spirals appearing in nature are logarithmic spirals, not Archimedean ones. Many dynamic spirals (such as the Parker spiral of the solar wind, or the pattern made by a Catherine's wheel) are Archimedean. - -One method of squaring the circle, due to Archimedes, makes use of an Archimedean spiral. Archimedes also showed how the spiral can be used to trisect an angle. Both approaches relax the traditional limitations on the use of straightedge and compass in ancient Greek geometric proofs.
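As a quick numerical sanity check of the arc-length formula above, the following Python sketch (illustrative only; the function names are ours) compares the closed form against a direct midpoint-rule integration of the arc-length element $\sqrt{r^2 + (dr/d\theta)^2} = b\sqrt{1+\theta^2}$ for $r = b\theta$:

import math

def arc_length_closed(b, theta):
    # b/2 * [theta*sqrt(1+theta^2) + arsinh(theta)], from the formula above
    return (b / 2) * (theta * math.sqrt(1 + theta**2) + math.asinh(theta))

def arc_length_numeric(b, theta, steps=200000):
    # Midpoint-rule integral of b*sqrt(1 + t^2) dt from 0 to theta
    dt = theta / steps
    return sum(b * math.sqrt(1 + ((i + 0.5) * dt) ** 2) * dt for i in range(steps))

b, theta = 2.0, 6 * math.pi          # three full turns of r = 2*theta
print(arc_length_closed(b, theta))   # ≈ 359.4
print(arc_length_numeric(b, theta))  # agrees to several decimal places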
The Archimedean spiral has a variety of real-world applications. Scroll compressors, used for compressing gases, have rotors that can be made from two interleaved Archimedean spirals, involutes of a circle of the same size that almost resemble Archimedean spirals, or hybrid curves. Archimedean spirals can be found in spiral antennas, which can be operated over a wide range of frequencies. The coils of watch balance springs and the grooves of very early gramophone records form Archimedean spirals, making the grooves evenly spaced (although variable track spacing was later introduced to maximize the amount of music that could be cut onto a record). Asking a patient to draw an Archimedean spiral is a way of quantifying human tremor; this information helps in diagnosing neurological diseases. Archimedean spirals are also used in digital light processing (DLP) projection systems to minimize the "rainbow effect", making it look as if multiple colors are displayed at the same time, when in reality red, green, and blue are being cycled extremely quickly. Additionally, Archimedean spirals are used in food microbiology to quantify bacterial concentration through a spiral plater. They are also used to model the pattern that occurs in a roll of paper or tape of constant thickness wrapped around a cylinder. diff --git a/wiki/wikipedia/3995.txt b/wiki/wikipedia/3995.txt deleted file mode 100644 index 4af14ea9188871094a444fe473d2c47068d370f0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3995.txt +++ /dev/null @@ -1,23 +0,0 @@ -A computer-assisted proof is a mathematical proof that has been at least partially generated by computer. - -Most computer-aided proofs to date have been implementations of large proofs-by-exhaustion of a mathematical theorem. The idea is to use a computer program to perform lengthy computations, and to provide a proof that the result of these computations implies the given theorem. In 1976, the four color theorem was the first major theorem to be verified using a computer program. - -Attempts have also been made in the area of artificial intelligence research to create smaller, explicit, new proofs of mathematical theorems from the bottom up using machine reasoning techniques such as heuristic search. Such automated theorem provers have proved a number of new results and found new proofs for known theorems. Additionally, interactive proof assistants allow mathematicians to develop human-readable proofs which are nonetheless formally verified for correctness. Since these proofs are generally human-surveyable (albeit with difficulty, as with the proof of the Robbins conjecture) they do not share the controversial implications of computer-aided proofs-by-exhaustion. - -One method for using computers in mathematical proofs is by means of so-called validated numerics or rigorous numerics. This means computing numerically yet with mathematical rigour. One uses set-valued arithmetic in order to ensure that the set-valued output of a numerical program encloses the solution of the original mathematical problem. This is done by controlling, enclosing and propagating round-off and truncation errors using, for example, interval arithmetic. More precisely, one reduces the computation to a sequence of elementary operations, say $(+,-,*,/)$. In a computer, the result of each elementary operation is rounded off by the computer precision. However, one can construct an interval given by upper and lower bounds on the result of an elementary operation.
Then one proceeds by replacing numbers with intervals and performing elementary operations between such intervals of representable numbers. - -Computer-assisted proofs are the subject of some controversy in the mathematical world, with Thomas Tymoczko the first to articulate objections. Those who adhere to Tymoczko's arguments believe that lengthy computer-assisted proofs are not, in some sense, 'real' mathematical proofs because they involve so many logical steps that they are not practically verifiable by human beings, and that mathematicians are effectively being asked to replace logical deduction from assumed axioms with trust in an empirical computational process, which is potentially affected by errors in the computer program, as well as defects in the runtime environment and hardware. - -Other mathematicians believe that lengthy computer-assisted proofs should be regarded as calculations, rather than proofs: the proof algorithm itself should be proved valid, so that its use can then be regarded as a mere "verification". Arguments that computer-assisted proofs are subject to errors in their source programs, compilers, and hardware can be resolved by providing a formal proof of correctness for the computer program (an approach which was successfully applied to the four-color theorem in 2005) as well as by replicating the result using different programming languages, different compilers, and different computer hardware. - -Another possible way of verifying computer-aided proofs is to generate their reasoning steps in a machine-readable form, and then use a proof checker program to demonstrate their correctness. Since validating a given proof is much easier than finding a proof, the checker program is simpler than the original assistant program, and it is correspondingly easier to gain confidence in its correctness. However, this approach of using a computer program to prove the output of another program correct does not appeal to computer proof skeptics, who see it as adding another layer of complexity without addressing the perceived need for human understanding. - -Another argument against computer-aided proofs is that they lack mathematical elegance—that they provide no insights or new and useful concepts. In fact, this is an argument that could be advanced against any lengthy proof by exhaustion. - -An additional philosophical issue raised by computer-aided proofs is whether they make mathematics into a quasi-empirical science, where the scientific method becomes more important than the application of pure reason in the area of abstract mathematical concepts. This directly relates to the argument within mathematics as to whether mathematics is based on ideas, or "merely" an exercise in formal symbol manipulation. It also raises the question of whether, if (according to the Platonist view) all possible mathematical objects in some sense "already exist", computer-aided mathematics is an observational science like astronomy, rather than an experimental one like physics or chemistry. This controversy within mathematics is occurring at the same time as questions are being asked in the physics community about whether twenty-first century theoretical physics is becoming too mathematical, and leaving behind its experimental roots. - -The emerging field of experimental mathematics is confronting this debate head-on by focusing on numerical experiments as its main tool for mathematical exploration.
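To make the interval-arithmetic idea described above concrete, here is a minimal Python sketch of enclosure propagation. It is a toy model only: real validated-numerics packages use directed floating-point rounding, whereas exact rationals are used here just to keep the sketch short.

from fractions import Fraction

class Interval:
    # A real number x is represented by bounds lo <= x <= hi.
    def __init__(self, lo, hi=None):
        self.lo = Fraction(lo)
        self.hi = Fraction(hi if hi is not None else lo)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

# Enclose sqrt(2) by a crude interval, then evaluate x*x - 2.
x = Interval(Fraction(14142, 10000), Fraction(14143, 10000))
r = x * x - Interval(2)
assert r.lo <= 0 <= r.hi   # the enclosure provably contains x*x - 2 = 0
print(float(r.lo), float(r.hi))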
- -Inclusion in the list of theorems proved with the help of computer programs does not imply that a formal computer-checked proof exists, but rather that a computer program has been involved in some way. See the main articles for details. - -In 2010, academics at The University of Edinburgh offered people the chance to "buy their own theorem" created through a computer-assisted proof. This new theorem would be named after the purchaser. diff --git a/wiki/wikipedia/3996.txt b/wiki/wikipedia/3996.txt deleted file mode 100644 index 1713ac98ce8f4f72b958b9b0f8b3c1acf2944fd0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3996.txt +++ /dev/null @@ -1,51 +0,0 @@ -The less-than sign is a mathematical symbol that denotes an inequality between two values. The widely adopted form of two equal-length strokes connecting in an acute angle at the left, <, has been found in documents dated as far back as the 1560s. In mathematical writing, the less-than sign is typically placed between two values being compared and signifies that the first number is less than the second number. Examples of typical usage include 1/2 < 1 and −2 < 0. - -Since the development of computer programming languages, the less-than sign and the greater-than sign have been repurposed for a range of uses and operations. - -The less-than sign, <, is an original ASCII character (hex 3C, decimal 60). - -The less-than sign may be used for an approximation of the opening angle bracket, ⟨. ASCII does not have angle brackets, but they are standard in Unicode. The latter is expected in formal texts. - -In BASIC, Lisp-family languages, and C-family languages (including Java and C++), comparison operator < means "less than". - -In ColdFusion, operator .lt. means "less than". - -In Fortran, operator .LT. means "less than"; later versions allow <. - -In Bourne shell, operator -lt means "less than". - -The double less-than sign, <<, may be used for an approximation of the much-less-than sign (≪) or of the opening guillemet («). ASCII does not encode either of these signs, though they are both included in Unicode. - -In Bash, Perl, and Ruby, operator <<EOF (where "EOF" is an arbitrary string, but commonly "EOF", denoting "end of file") is used to denote the beginning of a here document. - -In C and C++, operator << represents a binary left shift. - -In the C++ Standard Library, operator <<, when applied to an output stream, acts as the insertion operator and performs an output operation on the stream. - -In Ruby, operator << acts as the append operator when used between an array and the value to be appended. - -In XPath the << operator returns true if the left operand precedes the right operand in document order; otherwise it returns false. - -In PHP, operator <<<OUTPUT is used to denote the beginning of a heredoc statement (where OUTPUT is an arbitrary identifier). - -In Bash, <<<word is used as a "here string", where word is expanded and supplied to the command on its standard input, similar to a heredoc. - -The less-than sign plus the equals sign, <=, may be used for an approximation of the less-than-or-equal-to sign, ≤. ASCII does not have a less-than-or-equal-to sign, but Unicode defines it at code point U+2264. - -In BASIC, Lisp-family languages, and C-family languages (including Java and C++), operator <= means "less than or equal to". In Sinclair BASIC it is encoded as a single-byte code point token. - -In Prolog, =< means "less than or equal to" (as distinct from the arrow <=). - -In Fortran, operator .LE. means "less than or equal to"; later versions allow <=. - -In Bourne shell and Windows PowerShell, the operator -le means "less than or equal to".
- -In the R programming language, the less-than sign is used in conjunction with a hyphen-minus to create an arrow (<-), which can be used as the left assignment operator. - -In Bourne shell (and many other shells), the less-than sign (<) is used to redirect input from a file. Less-than plus ampersand (<&) is used to redirect from a file descriptor. - -The less-than sign is used in the spaceship operator (<=>). - -In HTML (and SGML and XML), the less-than sign is used at the beginning of tags. The less-than sign may be included with &lt;. The less-than-or-equal-to sign, ≤, may be included with &le;. - -In an inequality, the less-than sign and greater-than sign always "point" to the smaller number. Put another way, the "jaws" (the wider section of the symbol) always direct to the larger number. diff --git a/wiki/wikipedia/3997.txt b/wiki/wikipedia/3997.txt deleted file mode 100644 index a7452b4a22168cb9bbaae4fd1e1e9291584ab226..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3997.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, the Andreotti–Vesentini separation theorem, introduced by Aldo Andreotti and Edoardo Vesentini, states that certain cohomology groups of coherent sheaves are separated. diff --git a/wiki/wikipedia/3998.txt b/wiki/wikipedia/3998.txt deleted file mode 100644 index 3955045b6972e00261b331db446ad31d7621c938..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3998.txt +++ /dev/null @@ -1,113 +0,0 @@ -In Riemannian geometry, Schur's lemma is a result that says, heuristically, whenever certain curvatures are pointwise constant then they are forced to be globally constant. The proof is essentially a one-step calculation, which has only one input: the second Bianchi identity. - -Suppose $(M, g)$ is a smooth Riemannian manifold with dimension $n.$ Recall that this defines for each element $p$ of $M$: - -* the sectional curvature, which assigns to every 2-dimensional linear subspace $V$ of $T_p M,$ a real number $\operatorname{sec}_p(V)$ - -* the Riemann curvature tensor, which is a multilinear map $\operatorname{Rm}_p : T_p M \times T_p M \times T_p M \times T_p M \to \R$ - -* the Ricci curvature, which is a symmetric bilinear map $\operatorname{Ric}_p : T_p M \times T_p M \to \R$ - -* the scalar curvature, which is a real number $\operatorname{R}_p$ - -The Schur lemma states the following: - -Suppose that $n$ is not equal to two. If there is a function $\kappa$ on $M$ such that $\operatorname{Ric}_p = \kappa(p) g_p$ for all $p \in M,$ then $d \kappa(p) = 0.$ Equivalently, $\kappa$ is constant on each connected component of $M$; this could also be phrased as asserting that each connected component of $M$ is an Einstein manifold. - -The Schur lemma is a simple consequence of the "twice-contracted second Bianchi identity," which states that - -\operatorname{div}_g\operatorname{Ric} = \frac{1}{2}dR, - -understood as an equality of smooth 1-forms on $M.$ Substituting in the given condition $\operatorname{Ric}_p = \kappa(p) g_p,$ one finds that $\textstyle d\kappa = \frac{n}{2} d\kappa.$ - -Let $B$ be a symmetric bilinear form on an $n$-dimensional inner product space $(V, g).$ Then - -|B|_g^2 = \left|B-\frac{1}{n}\left(\operatorname{tr}^gB\right)g\right|_g^2 + \frac{1}{n}\left(\operatorname{tr}^gB\right)^2.
- -Additionally, note that if $B = \kappa g$ for some number $\kappa,$ then one automatically has $\kappa = \frac{1}{n} \operatorname{tr}^g B.$ With these observations in mind, one can restate the Schur lemma in the following form: - -Let $(M, g)$ be a connected smooth Riemannian manifold whose dimension is not equal to two. Then the following are equivalent: - -* There is a function $\kappa$ on $M$ such that $\operatorname{Ric}_p = \kappa(p) g_p$ for all $p \in M$ - -* There is a number $\kappa$ such that $\operatorname{Ric}_p = \kappa g_p$ for all $p \in M,$ that is, $(M, g)$ is Einstein - -* One has $\operatorname{Ric}_p = \frac{1}{n} R_p g_p$ for all $p \in M,$ that is, the traceless Ricci tensor is zero - -* $|\operatorname{Ric}|_g^2=\textstyle\frac{1}{n}R^2$ - -* $|\operatorname{Ric}|_g^2\leq\textstyle\frac{1}{n}R^2.$ - -If $(M, g)$ is a connected smooth pseudo-Riemannian manifold, then the first three conditions are equivalent, and they imply the fourth condition. - -Note that the dimensional restriction is important, since every two-dimensional Riemannian manifold which does not have constant curvature would be a counterexample. - -The following is an immediate corollary of the Schur lemma for the Ricci tensor. - -Let $(M, g)$ be a connected smooth Riemannian manifold whose dimension $n$ is not equal to two. Then the following are equivalent: - -* There is a function $\kappa$ on $M$ such that $\operatorname{sec}_p(V) = \kappa(p)$ for all $p \in M$ and all two-dimensional linear subspaces $V$ of $T_p M,$ - -* There is a number $\kappa$ such that $\operatorname{sec}_p(V) = \kappa$ for all $p \in M$ and all two-dimensional linear subspaces $V$ of $T_p M,$ that is, $(M, g)$ has constant curvature - -* $\operatorname{sec}_p(V) = \frac{1}{n(n-1)} R_p$ for all $p \in M$ and all two-dimensional linear subspaces $V$ of $T_p M,$ - -* $|\operatorname{Rm}_p|^2_g = \textstyle\frac{2}{n(n-1)} R_p^2$ for all $p \in M$ - -* the sum of the Weyl curvature and semi-traceless part of the Riemann tensor is zero - -* both the Weyl curvature and the semi-traceless part of the Riemann tensor are zero - -Let $(M, g)$ be a smooth Riemannian or pseudo-Riemannian manifold of dimension $n.$ Let $h$ be a smooth symmetric (0,2)-tensor field whose covariant derivative, with respect to the Levi-Civita connection, is completely symmetric. The symmetry condition is an analogue of the Bianchi identity; continuing the analogy, one takes a trace to find that - -\operatorname{div}^gh = d\big(\operatorname{tr}^gh\big). - -If there is a function $\kappa$ on $M$ such that $h_p = \kappa(p)g_p$ for all $p$ in $M,$ then upon substitution one finds - -d\kappa=n\cdot d\kappa. - -Hence $n > 1$ implies that $\kappa$ is constant on each connected component of $M.$ As above, one can then state the Schur lemma in this context: - -Let $(M, g)$ be a connected smooth Riemannian manifold whose dimension is not equal to one. Let $h$ be a smooth symmetric (0,2)-tensor field whose covariant derivative is totally symmetric as a (0,3)-tensor field.
Then the following are equivalent: - -* there is a function $\kappa$ on $M$ such that $h_p = \kappa(p) g_p$ for all $p \in M,$ - -* there is a number $\kappa$ such that $h_p = \kappa g_p$ for all $p \in M,$ - -* $h_p = \frac{1}{n}\left(\operatorname{tr}^g h_p\right) g_p$ for all $p \in M,$ that is, the traceless form of $h$ is zero - -* $|h_p|_g^2=\textstyle\frac{1}{n}(\operatorname{tr}^gh_p)^2$ for all $p \in M$ - -* $|h_p|_g^2\leq\textstyle\frac{1}{n}(\operatorname{tr}^gh_p)^2$ for all $p \in M$ - -If $(M, g)$ is a connected and smooth pseudo-Riemannian manifold, then the first three are equivalent, and imply the fourth and fifth. - -The Schur lemmas are frequently employed to prove roundness of geometric objects. A noteworthy example is to characterize the limits of convergent geometric flows. - -For example, a key part of Richard Hamilton's 1982 breakthrough on the Ricci flow was his "pinching estimate" which, informally stated, says that for a Riemannian metric which appears in a 3-manifold Ricci flow with positive Ricci curvature, the eigenvalues of the Ricci tensor are close to one another relative to the size of their sum. If one normalizes the sum, then the eigenvalues are close to one another in an absolute sense. In this sense, each of the metrics appearing in a 3-manifold Ricci flow of positive Ricci curvature "approximately" satisfies the conditions of the Schur lemma. The Schur lemma itself is not explicitly applied, but its proof is effectively carried out through Hamilton's calculations. - -In the same way, the Schur lemma for the Riemann tensor is employed to study convergence of Ricci flow in higher dimensions. This goes back to Gerhard Huisken's extension of Hamilton's work to higher dimensions, where the main part of the work is showing that the Weyl tensor and the semi-traceless Riemann tensor become zero in the long-time limit. This extends to the more general Ricci flow convergence theorems, some expositions of which directly use the Schur lemma. This includes the proof of the differentiable sphere theorem. - -The Schur lemma for Codazzi tensors is employed directly in Huisken's foundational paper on convergence of mean curvature flow, which was modeled on Hamilton's work. In the final two sentences of Huisken's paper, it is concluded that one has a smooth embedding $S^n \to \R^{n+1}$ with - -|h|^2=\frac{1}{n}H^2, - -where $h$ is the second fundamental form and $H$ is the mean curvature. The Schur lemma implies that the mean curvature is constant, and the image of this embedding then must be a standard round sphere. - -Another application relates full isotropy and curvature. Suppose that $(M,g)$ is a connected thrice-differentiable Riemannian manifold, and that for each $p\in M$ the group of isometries $\operatorname{Isom}(M,g)$ acts transitively on $T_pM.$ This means that for all $p\in M$ and all $v,w\in T_pM$ there is an isometry $\varphi:(M,g)\to(M,g)$ such that $\varphi(p)=p$ and $d\varphi_p(v)=w.$ This implies that $\operatorname{Isom}(M,g)$ also acts transitively on $\text{Gr}(2,T_pM),$ that is, for every $P,Q\in\text{Gr}(2,T_pM)$ there is an isometry $\varphi:(M,g)\to(M,g)$ such that $\varphi(p)=p$ and $d\varphi_p(P)=Q.$ Since isometries preserve sectional curvature, this implies that $\operatorname{sec}_p$ is constant for each $p\in M.$ The Schur lemma implies that $(M,g)$ has constant curvature.
A particularly notable application of this is that any spacetime which models the cosmological principle must be the warped product of an interval and a constant-curvature Riemannian manifold. See O'Neill (1983, page 341). - -Recent research has investigated the case that the conditions of the Schur lemma are only approximately satisfied. - -Consider the Schur lemma in the form "If the traceless Ricci tensor is zero then the scalar curvature is constant." Camillo De Lellis and Peter Topping have shown that if the traceless Ricci tensor is approximately zero then the scalar curvature is approximately constant. Precisely: - -* Suppose $(M,g)$ is a closed Riemannian manifold with nonnegative Ricci curvature and dimension $n\geq 3.$ Then, where $\overline{R}$ denotes the average value of the scalar curvature, one has \int_M (R-\overline{R})^2d\mu_g\leq\frac{4n(n-1)}{(n-2)^2}\int_M \Big|\operatorname{Ric}-\frac{1}{n}Rg\Big|_g^2d\mu_g. - -Next, consider the Schur lemma in the special form "If $\Sigma$ is a connected embedded surface in $\R^3$ whose traceless second fundamental form is zero, then its mean curvature is constant." Camillo De Lellis and Stefan Müller have shown that if the traceless second fundamental form of a compact surface is approximately zero then the mean curvature is approximately constant. Precisely - -* there is a number $C$ such that, for any smooth compact connected embedded surface $\Sigma \subseteq \R^3,$ one has \int_\Sigma (H-\overline{H})^2d\mu_g\leq C\int_\Sigma \Big|h-\frac{1}{2}Hg\Big|_g^2d\mu_g, where $h$ is the second fundamental form, $g$ is the induced metric, and $H$ is the mean curvature $\operatorname{tr}_gh.$ - -As an application, one can conclude that $\Sigma$ itself is 'close' to a round sphere. diff --git a/wiki/wikipedia/3999.txt b/wiki/wikipedia/3999.txt deleted file mode 100644 index 14018ea50f0b04353dae27388fbc97cd54186aeb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/3999.txt +++ /dev/null @@ -1,51 +0,0 @@ -Proof-number search (short: PN search) is a game tree search algorithm invented by Victor Allis, with applications mostly in endgame solvers, but also for sub-goals during games. - -Using a binary goal (e.g. first player wins the game), game trees of two-person perfect-information games can be mapped to an and–or tree. Maximizing nodes become OR-nodes, minimizing nodes are mapped to AND-nodes. For all nodes proof and disproof numbers are stored, and updated during the search. - -To each node of the partially expanded game tree the proof number and - -disproof number are associated. A proof number represents the minimum number of leaf - -nodes which have to be proved in order to prove the node. Analogously, a disproof - -number represents the minimum number of leaves which have to be disproved - -in order to disprove the node. Because the goal of the tree is to prove a forced - -win, winning nodes are regarded as proved. Therefore, they have proof number - -0 and disproof number ∞. Lost or drawn nodes are regarded as - -disproved. They have proof number ∞ and disproof number - -0. Unknown leaf nodes have a proof and disproof number of unity. - -The proof number of an internal AND node is equal to the sum of - -its children's proof numbers, since to prove an AND node all the children have - -to be proved. The disproof number of an AND node is equal to the minimum of - -its children's disproof numbers. 
The disproof number of an internal OR node is equal to the sum of its children's disproof numbers, since to disprove an OR node all the children have to be disproved. Its proof number is equal to the minimum of its children's proof numbers. - -The procedure of selecting the most-proving node to expand is the following. We start at the root. Then, at each OR node the child with the lowest proof number is selected as successor, and at each AND node the child with the lowest disproof number is selected as successor. Finally, when a leaf node is reached, it is expanded and its children are evaluated. - -The proof and disproof numbers represent lower bounds on the number of nodes to be evaluated to prove (or disprove) certain nodes. By always selecting the most proving (disproving) node to expand, an efficient search is generated. - -Some variants of proof number search, like dfPN, PN2, and PDS-PN, have been developed to address the algorithm's substantial memory requirements (a minimal sketch of the basic update and selection rules appears below). diff --git a/wiki/wikipedia/4.txt b/wiki/wikipedia/4.txt deleted file mode 100644 index a249ecd3d674ca7b31106e36fb783dab2e49e5bd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4.txt +++ /dev/null @@ -1,11 +0,0 @@ -In mathematics, a vertex cycle cover (commonly called simply cycle cover) of a graph G is a set of cycles which are subgraphs of G and contain all vertices of G. - -If the cycles of the cover have no vertices in common, the cover is called vertex-disjoint or sometimes simply disjoint cycle cover. This is sometimes known as exact vertex cycle cover. In this case the set of the cycles constitutes a spanning subgraph of G. A disjoint cycle cover of an undirected graph (if it exists) can be found in polynomial time by transforming the problem into a problem of finding a perfect matching in a larger graph. - -If the cycles of the cover have no edges in common, the cover is called edge-disjoint or simply disjoint cycle cover. - -Similar definitions exist for digraphs, in terms of directed cycles. Finding a vertex-disjoint cycle cover of a directed graph can also be performed in polynomial time by a similar reduction to perfect matching. However, adding the condition that each cycle should have length at least 3 makes the problem NP-hard. - -The permanent of a (0,1)-matrix is equal to the number of vertex-disjoint cycle covers of a directed graph with this adjacency matrix. This fact is used in a simplified proof showing that computing the permanent is #P-complete. - -The problems of finding vertex-disjoint and edge-disjoint cycle covers with a minimal number of cycles are NP-complete. The problems are not in complexity class APX. The variants for digraphs are not in APX either. diff --git a/wiki/wikipedia/40.txt b/wiki/wikipedia/40.txt deleted file mode 100644 index ae46afd8869029b7c98a1dac737f26ac18c7318c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/40.txt +++ /dev/null @@ -1,379 +0,0 @@ -IBM CICS (Customer Information Control System) is a family of mixed-language application servers that provide online transaction management and connectivity for applications on IBM mainframe systems under z/OS and z/VSE. - -CICS family products are designed as middleware and support rapid, high-volume online transaction processing. A CICS transaction is a unit of processing initiated by a single request that may affect one or more objects. This processing is usually interactive (screen-oriented), but background transactions are possible.
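Returning to proof-number search above: the following is a minimal Python sketch of the proof- and disproof-number bookkeeping (the code and names are ours, not from the article's sources). For clarity it recomputes numbers over the whole tree; a practical solver expands one leaf at a time and updates numbers only along the path back to the root.

INF = float("inf")

class Node:
    def __init__(self, node_type, children=None, value=None):
        self.type = node_type            # "OR" (max player) or "AND" (min player)
        self.children = children or []
        if value == "win":               # proved: proof 0, disproof infinity
            self.proof, self.disproof = 0, INF
        elif value == "loss":            # disproved (loss or draw): proof infinity, disproof 0
            self.proof, self.disproof = INF, 0
        else:                            # unknown leaf: proof = disproof = 1
            self.proof, self.disproof = 1, 1

def update(node):
    # Recompute proof/disproof numbers bottom-up, per the rules above.
    for child in node.children:
        update(child)
    if node.children:
        proofs = [c.proof for c in node.children]
        disproofs = [c.disproof for c in node.children]
        if node.type == "OR":            # prove one child, disprove all children
            node.proof, node.disproof = min(proofs), sum(disproofs)
        else:                            # AND: prove all children, disprove one
            node.proof, node.disproof = sum(proofs), min(disproofs)

def most_proving(node):
    # At OR nodes follow the minimal proof number; at AND nodes the minimal disproof number.
    while node.children:
        if node.type == "OR":
            node = min(node.children, key=lambda c: c.proof)
        else:
            node = min(node.children, key=lambda c: c.disproof)
    return node

root = Node("OR", [Node("AND", [Node("OR", value="win"), Node("OR")]), Node("AND")])
update(root)
print(root.proof, root.disproof)   # 1 2 for this tiny tree
leaf = most_proving(root)          # the unknown leaf to expand next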
- -CICS Transaction Server (CICS TS) sits at the head of the CICS family and provides services that extend or replace the functions of the operating system. These services can be more efficient than the generalized operating system services and also simpler for programmers to use, particularly with respect to communication with diverse terminal devices. - -Applications developed for CICS may be written in a variety of programming languages and use CICS-supplied language extensions to interact with resources such as files, database connections, terminals, or to invoke functions such as web services. CICS manages the entire transaction such that if for any reason a part of the transaction fails, all recoverable changes can be backed out. - -While CICS TS has its highest profile among large financial institutions, such as banks and insurance companies, many Fortune 500 companies and government entities are reported to run CICS. Other, smaller enterprises can also run CICS TS and other CICS family products. CICS can regularly be found behind the scenes in, for example, bank-teller applications, ATM systems, industrial production control systems, insurance applications, and many other types of interactive applications. - -Recent CICS TS enhancements include new capabilities to improve the developer experience, including the choice of APIs, frameworks, editors, and build tools, while at the same time providing updates in the key areas of security, resilience, and management. Earlier CICS TS releases provided support for Web services and Java, event processing, Atom feeds, and RESTful interfaces. - -CICS was preceded by an earlier, single-threaded transaction processing system, IBM MTCS. An 'MTCS-CICS bridge' was later developed to allow these transactions to execute under CICS with no change to the original application programs. - -CICS was originally developed in the United States at an IBM Development Center in Des Plaines, Illinois, beginning in 1966 to address requirements from the public utility industry. The first CICS product was announced in 1968, named Public Utility Customer Information Control System, or PU-CICS. It became clear immediately that it had applicability to many other industries, so the Public Utility prefix was dropped with the introduction of the first release of the CICS Program Product on July 8, 1969, not long after the IMS database management system. - -For the next few years, CICS was developed in Palo Alto and was considered a less important "smaller" product than IMS, which IBM then considered more strategic. Customer pressure kept it alive, however. When IBM decided to end development of CICS in 1974 to concentrate on IMS, the CICS development responsibility was picked up by the IBM Hursley site in the United Kingdom, which had just ceased work on the PL/I compiler and so knew many of the same customers as CICS. The core of the development work continues in Hursley today alongside contributions from labs in India, China, Russia, Australia, and the United States. - -CICS originally only supported a few IBM-brand devices like the 1965 IBM 2741 Selectric (golf ball) typewriter-based terminal. The 1964 IBM 2260 and 1972 IBM 3270 video display terminals were widely used later. - -In the early days of IBM mainframes, computer software was bundled at no extra charge with computer hardware. The OS/360 operating system and application support software like CICS were "open" to IBM customers long before the open-source software initiative.
Corporations like Standard Oil of Indiana (Amoco) made major contributions to CICS. - -The IBM Des Plaines team tried to add support for popular non-IBM terminals like the ASCII Teletype Model 33 ASR, but the small low-budget software development team could not afford the $100-per-month hardware to test it. IBM executives incorrectly felt that the future would be like the past, with batch processing using traditional punch cards. - -IBM reluctantly provided only minimal funding when public utility companies, banks and credit-card companies demanded a cost-effective interactive system (similar to the 1965 IBM Airline Control Program used by the American Airlines Sabre computer reservation system) for high-speed data access-and-update to customer information for their telephone operators (without waiting for overnight batch processing punch card systems). - -When CICS was delivered to Amoco with Teletype Model 33 ASR support, it caused the entire OS/360 operating system to crash (including non-CICS application programs). The majority of the CICS Terminal Control Program (TCP, the heart of CICS) and part of OS/360 had to be laboriously redesigned and rewritten by Amoco Production Company in Tulsa, Oklahoma. It was then given back to IBM for free distribution to others. - -In a few years, CICS generated over $60 billion in new hardware revenue for IBM, and became their most-successful mainframe software product. - -In 1972, CICS was available in three versions: DOS-ENTRY (program number 5736-XX6) for DOS/360 machines with very limited memory, DOS-STANDARD (program number 5736-XX7) for DOS/360 machines with more memory, and OS-STANDARD V2 (program number 5734-XX7) for the larger machines which ran OS/360. - -In early 1970, a number of the original developers, including Ben Riggins (the principal architect of the early releases), relocated to California and continued CICS development at IBM's Palo Alto Development Center. IBM executives did not recognize value in software as a revenue-generating product until after federal law required software unbundling. In 1980, IBM executives failed to heed Ben Riggins' strong suggestions that IBM should provide their own EBCDIC-based operating system and integrated-circuit microprocessor chip for use in the IBM Personal Computer as a CICS intelligent terminal (instead of the incompatible Intel chip, and immature ASCII-based Microsoft 1980 DOS). - -Because of the limited capacity of even large processors of that era, every CICS installation was required to assemble the source code for all of the CICS system modules after completing a process similar to system generation (sysgen), called CICSGEN, to establish values for conditional assembly-language statements. This process allowed each customer to exclude support from CICS itself for any feature they did not intend to use, such as device support for terminal types not in use. - -CICS owes its early popularity to its relatively efficient implementation when hardware was very expensive, its multi-threaded processing architecture, its relative simplicity for developing terminal-based real-time transaction applications, and many open-source customer contributions, including both debugging and feature enhancement. - -Part of CICS was formalized using the Z notation in the 1980s and 1990s in collaboration with the Oxford University Computing Laboratory, under the leadership of Tony Hoare. This work won a Queen's Award for Technological Achievement.
- -In 1986, IBM announced CICS support for the record-oriented file services defined by Distributed Data Management Architecture (DDM). This enabled programs on remote, network-connected computers to create, manage, and access files that had previously been available only within the CICS/MVS and CICS/VSE transaction processing environments. - -In newer versions of CICS, support for DDM has been removed. Support for the DDM component of CICS for z/OS was discontinued at the end of 2003, and was removed from CICS for z/OS in version 5.2 onward. In CICS TS for z/VSE, support for DDM was stabilised at V1.1.1 level, with an announced intention to discontinue it in a future release. In CICS for z/VSE 2.1 onward, CICS/DDM is not supported. - -CICS Transaction Server first introduced a native HTTP interface in version 1.2, together with a Web Bridge technology for wrapping green-screen terminal-based programs with an HTML facade. CICS Web and Document APIs were enhanced in CICS TS V1.3 to enable web-aware applications to be written to interact more effectively with web browsers. - -CICS TS versions 2.1 through 2.3 focused on introducing CORBA and EJB technologies to CICS, offering new ways to integrate CICS assets into distributed application component models. These technologies relied on hosting Java applications in CICS. The Java hosting environment saw numerous improvements over many releases, ultimately resulting in the embedding of the WebSphere Liberty Profile into CICS Transaction Server V5.1. Numerous web-facing technologies could be hosted in CICS using Java; this ultimately resulted in the removal of the native CORBA and EJB technologies. - -CICS TS V3.1 added a native implementation of the SOAP and WSDL technologies for CICS, together with client-side HTTP APIs for outbound communication. These twin technologies enabled easier integration of CICS components with other Enterprise applications, and saw widespread adoption. Tools were included for taking traditional CICS programs written in languages such as COBOL, and converting them into WSDL-defined Web Services, with little or no program changes. This technology saw regular enhancements over successive releases of CICS. - -CICS TS V4.1 and V4.2 saw further enhancements to web connectivity, including a native implementation of the Atom publishing protocol. - -Many of the newer web-facing technologies were made available for earlier releases of CICS using delivery models other than a traditional product release. This allowed early adopters to provide constructive feedback that could influence the final design of the integrated technology. Examples include the Soap for CICS technology preview SupportPac for TS V2.2, and the ATOM SupportPac for TS V3.1. This approach was used to introduce JSON support for CICS TS V4.2, a technology that went on to be integrated into CICS TS V5.2. - -The JSON technology in CICS is similar to earlier SOAP technology, both of which allowed programs hosted in CICS to be wrapped with a modern facade. The JSON technology was in turn enhanced in z/OS Connect Enterprise Edition, an IBM product for composing JSON APIs that can leverage assets from several mainframe subsystems. - -Many partner products have also been used to interact with CICS. Popular examples include using the CICS Transaction Gateway for connecting to CICS from JCA-compliant Java application servers, and IBM DataPower appliances for filtering web traffic before it reaches CICS.
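To make the JSON wrapping described above concrete, a client might call a CICS-hosted program over HTTP as in the following Python sketch. This is hypothetical: the host, port, path, and field names are invented for illustration and do not correspond to a real CICS or z/OS Connect API.

import json
import urllib.request

# Hypothetical example: a COBOL inquiry program wrapped as a JSON service.
url = "http://mainframe.example.com:9080/accounts/inquiry"
payload = json.dumps({"accountNumber": "1234567890"}).encode("utf-8")

request = urllib.request.Request(
    url,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    reply = json.load(response)

# The JSON fields would map onto the wrapped program's COBOL copybook fields.
print(reply.get("accountBalance"), reply.get("accountStatus"))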
- -Modern versions of CICS provide many ways for both existing and new software assets to be integrated into distributed application flows. CICS assets can be accessed from remote systems, and can access remote systems; user identity and transactional context can be propagated; RESTful APIs can be composed and managed; devices, users and servers can interact with CICS using standards-based technologies; and the IBM WebSphere Liberty environment in CICS promotes the rapid adoption of new technologies. - -By January 1985, a consulting company founded in 1969, having done "massive on-line systems" for Hilton Hotels, FTD Florists, Amtrak, and Budget Rent-a-Car, announced what became MicroCICS. The initial focus was the IBM XT/370 and IBM AT/370. - -Although when CICS is mentioned, people usually mean CICS Transaction Server, the CICS Family refers to a portfolio of transaction servers, connectors (called CICS Transaction Gateway) and CICS Tools. - -CICS on distributed platforms—not mainframes—is called IBM TXSeries. TXSeries is distributed transaction processing middleware. It supports C, C++, COBOL, Java™ and PL/I applications in cloud environments and traditional data centers. TXSeries is available on AIX, Linux x86, Windows, Solaris, and HP-UX platforms. CICS is also available on other operating systems, notably IBM i and OS/2. The z/OS implementation (i.e., CICS Transaction Server for z/OS) is by far the most popular and significant. - -Two versions of CICS were previously available for VM/CMS, but both have since been discontinued. In 1986, IBM released CICS/CMS. - -Provisioning, management and analysis of CICS systems and applications is provided by CICS Tools. This includes performance management as well as deployment and management of CICS resources. In 2015, the four core foundational CICS tools (and the CICS Optimization Solution Pack for z/OS) were updated with the release of CICS Transaction Server for z/OS 5.3. The four core CICS Tools are: CICS Interdependency Analyzer for z/OS, CICS Deployment Assistant for z/OS, CICS Performance Analyzer for z/OS and CICS Configuration Manager for z/OS. - -Multiple-user interactive-transaction application programs were required to be quasi-reentrant in order to support multiple concurrent transaction threads. A software coding error in one application could block all users from the system. The modular design of CICS reentrant / reusable control programs meant that, with judicious "pruning," multiple users with multiple applications could be executed on a computer with just 32K of expensive magnetic core physical memory (including the operating system). - -Considerable effort was required by CICS application programmers to make their transactions as efficient as possible. A common technique was to limit the size of individual programs to no more than 4,096 bytes, or 4K, so that CICS could easily reuse the memory occupied by any program not currently in use for another program or other application storage needs. When virtual memory was added to OS/360 in 1972, the 4K strategy became even more important to reduce paging and thrashing, both unproductive resource-contention overheads. - -The efficiency of compiled high-level COBOL and PL/I language programs left much to be desired. Many CICS application programs continued to be written in assembler language, even after COBOL and PL/I support became available. - -With 1960s-and-1970s hardware resources expensive and scarce, a competitive "game" developed among system optimization analysts.
When critical path code was identified, a code snippet was passed around from one analyst to another. Each person had to either (a) reduce the number of bytes of code required, or (b) reduce the number of CPU cycles required. Younger analysts learned from what more-experienced mentors did. Eventually, when no one could do (a) or (b), the code was considered optimized, and they moved on to other snippets. Small shops with only one analyst learned CICS optimization very slowly (or not at all). - -Because application programs could be shared by many concurrent threads, the use of static variables embedded within a program (or use of operating system memory) was restricted (by convention only). - -Unfortunately, many of the "rules" were frequently broken, especially by COBOL programmers who might not understand the internals of their programs or might fail to use the necessary restrictive compile-time options. This resulted in "non-re-entrant" code that was often unreliable, leading to spurious storage violations and entire CICS system crashes. - -Originally, the entire partition, or Multiple Virtual Storage (MVS) region, operated with the same memory protection key, including the CICS kernel code. Program corruption and CICS control block corruption were a frequent cause of system downtime. A software error in one application program could overwrite the memory (code or data) of one or all currently running application transactions. Locating the offending application code for complex transient timing errors could be a very difficult operating-system analyst problem. - -These shortcomings persisted for multiple new releases of CICS over a period of more than 20 years, in spite of their severity and the fact that top-quality CICS skills were in high demand and short supply. They were addressed in TS V3.3, V4.1 and V5.2 with the Storage Protection, Transaction Isolation and Subspace features respectively, which utilize operating system hardware features to protect the application code and the data within the same address space even though the applications were not written to be separated. CICS application transactions remain mission-critical for many public utility companies, large banks and other multibillion-dollar financial institutions. - -Additionally, it is possible to provide a measure of advance application protection by performing tests under the control of a monitoring program that also serves to provide Test and Debug features. - -When CICS was first released, it only supported application transaction programs written in IBM 360 Assembler. COBOL and PL/I support were added years later. Because of the initial assembler orientation, requests for CICS services were made using assembler-language macros. For example, the request to read a record from a file was made by a macro call to the "File Control Program" of CICS and might look like this: - -DFHFC TYPE=READ,DATASET=myfile,TYPOPER=UPDATE,....etc. - -This gave rise to the later terminology "Macro-level CICS." - -When high-level language support was added, the macros were retained and the code was converted by a pre-compiler that expanded the macros to their COBOL or PL/I CALL statement equivalents. Thus preparing a HLL application was effectively a "two-stage" compile: output from the preprocessor was fed into the HLL compiler as input. - -COBOL considerations: unlike PL/I, IBM COBOL does not normally provide for the manipulation of pointers (addresses).
In order to allow COBOL programmers to access CICS control blocks and dynamic storage, the designers resorted to what was essentially a hack. The COBOL Linkage Section was normally used for inter-program communication, such as parameter passing. The compiler generates a list of addresses, each called a Base Locator for Linkage (BLL), which were set on entry to the called program. The first BLL corresponds to the first item in the Linkage Section, and so on. CICS allows the programmer to access and manipulate these by passing the address of the list as the first argument to the program. The BLLs can then be dynamically set, either by CICS or by the application, to allow access to the corresponding structure in the Linkage Section. - -During the 1980s, IBM at Hursley Park produced a version of CICS that supported what became known as "Command-level CICS", which still supported the older programs but introduced a new API style to application programs. - -A typical Command-level call might look like the following: - - - -EXEC CICS - -SEND MAPSET('LOSMATT') MAP('LOSATT') - -END-EXEC - - - -The values given in the SEND MAPSET command correspond to the names used on the first DFHMSD macro in the map definition given below for the MAPSET argument, and the DFHMDI macro for the MAP argument. This is pre-processed by a pre-compile batch translation stage, which converts the embedded commands (EXECs) into call statements to a stub subroutine. So, preparing application programs for later execution still required two stages. It was possible to write "Mixed mode" applications using both Macro-level and Command-level statements. - -Initially, at execution time, the command-level commands were converted using a run-time translator, "The EXEC Interface Program", to the old Macro-level call, which was then executed by the mostly unchanged CICS nucleus programs. But when the CICS Kernel was re-written for TS V3, EXEC CICS became the only way to program CICS applications, as many of the underlying interfaces had changed. - -The Command-level-only CICS introduced in the early 1990s offered some advantages over earlier versions of CICS. However, IBM also dropped support for Macro-level application programs written for earlier versions. This meant that many application programs had to be converted or completely rewritten to use Command-level EXEC commands only. - -By this time, there were perhaps millions of programs worldwide that had, in many cases, been in production for decades. Rewriting them often introduced new bugs without necessarily adding new features. There were a significant number of users who ran CICS V2 application-owning regions (AORs) to continue to run macro code for many years after the change to V3. - -It was also possible to execute old Macro-level programs using conversion software such as APT International's Command CICS. - -Recent CICS Transaction Server enhancements include support for a number of modern programming styles. - -CICS Transaction Server Version 5.6 introduced enhanced support for Java to deliver a cloud-native experience for Java developers. For example, the new CICS Java API allows easier unit testing using mocking and stubbing approaches, and can be run remotely on the developer's local workstation. A set of supporting artifacts enables developers to resolve Java dependencies using popular dependency management tools such as Apache Maven and Gradle.
Plug-ins for Maven and Gradle are also provided to simplify automated building of CICS bundles, using familiar IDEs like Eclipse, IntelliJ IDEA, and Visual Studio Code. In addition, Node.js z/OS support is enhanced for version 12, providing faster startup, better default heap limits, updates to the V8 JavaScript engine, etc. Support for Jakarta EE 8 is also included. - -CICS TS 5.5 introduced support for IBM SDK for Node.js, providing a full JavaScript runtime, server-side APIs, and libraries to efficiently build high-performance, highly scalable network applications for IBM Z. - -CICS Transaction Server Version 2.1 introduced support for Java. CICS Transaction Server Version 2.2 supported the Software Developers Toolkit. CICS provides the same run-time container as IBM's WebSphere product family, so Java EE applications are portable between CICS and WebSphere, and there is common tooling for the development and deployment of Java EE applications. - -In addition, CICS placed an emphasis on "wrapping" existing application programs inside modern interfaces so that long-established business functions can be incorporated into more modern services. These include WSDL, SOAP and JSON interfaces that wrap legacy code so that a web or mobile application can obtain and update the core business objects without requiring a major rewrite of the back-end functions. - -A CICS transaction is a set of operations that perform a task together. Usually, the majority of transactions are relatively simple tasks such as requesting an inventory list or entering a debit or credit to an account. A primary characteristic of a transaction is that it should be atomic. On IBM Z servers, CICS easily supports thousands of transactions per second, making it a mainstay of enterprise computing. - -CICS applications comprise transactions, which can be written in numerous programming languages, including COBOL, PL/I, C, C++, IBM Basic Assembly Language, REXX, and Java. - -Each CICS program is initiated using a transaction identifier. CICS screens are usually sent as a construct called a map, a module created with Basic Mapping Support (BMS) assembler macros or third-party tools. CICS screens may contain text that is highlighted, has different colors, and/or blinks depending on the terminal type used. An example of how a map can be sent through COBOL is given below. The end user inputs data, which is made accessible to the program by receiving a map from CICS. - - - -EXEC CICS - -RECEIVE MAPSET('LOSMATT') MAP('LOSATT') INTO(OUR-MAP) - -END-EXEC. - - - -For technical reasons, the arguments to some command parameters must be quoted and some must not be quoted, depending on what is being referenced. Most programmers will code out of a reference book until they get the "hang" of which arguments are quoted, or they will use a "canned template" of example code that they copy, paste, and then edit to change the values. - -Basic Mapping Support defines the screen format through assembler macros such as the following. This was assembled to generate both the physical map set (a load module in a CICS load library) and a symbolic map set (a structure definition, or DSECT, in PL/I, COBOL, assembler, etc.), which was copied into the source program.
- - - -LOSMATT DFHMSD TYPE=MAP, X - -MODE=INOUT, X - -TIOAPFX=YES, X - -TERM=3270-2, X - -LANG=COBOL, X - -MAPATTS=(COLOR,HILIGHT), X - -DSATTS=(COLOR,HILIGHT), X - -STORAGE=AUTO, X - -CTRL=(FREEKB,FRSET) - -* - -LOSATT DFHMDI SIZE=(24,80), X - -LINE=1, X - -COLUMN=1 - -* - -LSSTDII DFHMDF POS=(1,01), X - -LENGTH=04, X - -COLOR=BLUE, X - -INITIAL='MQCM', X - -ATTRB=PROT - -* - -DFHMDF POS=(24,01), X - -LENGTH=79, X - -COLOR=BLUE, X - -ATTRB=ASKIP, X - -INITIAL='PF7- 8- 9- 10- X - -11- 12-CANCEL' - -* - -DFHMSD TYPE=FINAL - -END - - - -In the z/OS environment, a CICS installation comprises one or more "regions" (generally referred to as a "CICS Region"), spread across one or more z/OS system images. Although it processes interactive transactions, each CICS region is usually started as a batch address space with standard JCL statements: it's a job that runs indefinitely until shutdown. Alternatively, each CICS region may be started as a started task. Whether a batch job or a started task, CICS regions may run for days, weeks, or even months before shutting down for maintenance (MVS or CICS). Upon restart, a parameter determines if the start should be "Cold" (no recovery) or "Warm"/"Emergency" (using a warm shutdown or restarting from the log after a crash). Cold starts of large CICS regions with many resources can take a long time, as all the definitions are re-processed. - -Installations are divided into multiple address spaces for a wide variety of reasons, such as: - -* application separation, - -* function separation, - -* avoiding the workload capacity limitations of a single region, or address space, or mainframe instance in the case of a z/OS Sysplex. - -A typical installation consists of a number of distinct applications that make up a service. Each service usually has a number of "Terminal-Owning Regions" (TORs) that route transactions to multiple "Application-Owning Regions" (AORs), though other topologies are possible. For example, the AORs might not perform File I/O. Instead there would be a "File-Owning Region" (FOR) that performed the File I/O on behalf of transactions in the AOR, given that, at the time, a VSAM file could only support recoverable write access from one address space at a time. - -But not all CICS applications use VSAM as the primary data source (or, historically, other one-address-space-at-a-time datastores such as CA Datacom); many use either IMS/DB or Db2 as the database, and/or MQ as a queue manager. For all these cases, TORs can load-balance transactions to sets of AORs which then directly use the shared databases/queues. CICS supports XA two-phase commit between data stores, and so transactions that span MQ, VSAM/RLS and Db2, for example, are possible with ACID properties. - -CICS supports distributed transactions using the SNA LU6.2 protocol between address spaces which can be running on the same or different clusters. This allows ACID updates of multiple datastores by cooperating distributed applications. In practice there are issues with this if a system or communications failure occurs, because the transaction disposition (backout or commit) may be in doubt if one of the communicating nodes has not recovered. Thus the use of these facilities has never been very widespread. - -At the time of CICS ESA V3.2, in the early 1990s, IBM faced the challenge of how to get CICS to exploit the new z/OS Sysplex mainframe line.
- -The Sysplex was to be based on CMOS (Complementary Metal Oxide Silicon) rather than the existing ECL (Emitter Coupled Logic) hardware. The cost of scaling the mainframe-unique ECL was much higher than that of CMOS, which was being developed by a keiretsu with high-volume use cases such as the Sony PlayStation to reduce the unit cost of each generation's CPUs. The ECL was also expensive for users to run because the gate drain current produced so much heat that the CPU had to be packaged into a special module called a Thermal Conduction Module (TCM), which had inert gas pistons and needed to be plumbed into high-volume chilled water to be cooled. But the air-cooled CMOS technology's CPU speed was initially much slower than the ECL (notably the boxes available from the mainframe-clone makers Amdahl and Hitachi). This was especially concerning to IBM in the CICS context, as almost all the largest mainframe customers were running CICS, and for many of them it was the primary mainframe workload. - -To achieve the same total transaction throughput on a Sysplex, multiple boxes would need to be used in parallel for each workload, but a CICS address space, due to its semi-reentrant application programming model, could not exploit more than about 1.5 processors on one box at the time, even with the use of MVS sub-tasks. Without this, these customers would tend to move to the competitors rather than Sysplex as they scaled up the CICS workloads. There was considerable debate inside IBM as to whether the right approach would be to break upward compatibility for applications and move to a model like IMS/DC, which was fully reentrant, or to extend the approach customers had adopted to more fully exploit a single mainframe's power using multi-region operation (MRO). - -Eventually the second path was adopted after the CICS user community was consulted; they vehemently opposed breaking upward compatibility, given that they had the prospect of Y2K to contend with at that time and did not see the value in re-writing and testing millions of lines of mainly COBOL, PL/I, or assembler code. - -The IBM-recommended structure for CICS on Sysplex was that at least one CICS Terminal-Owning Region was placed on each Sysplex node, which dispatched transactions to many Application-Owning Regions (AORs) spread across the entire Sysplex. If these applications needed to access shared resources, they either used a Sysplex-exploiting datastore (such as Db2 or IMS/DB) or concentrated, by function-shipping, the resource requests into singular-per-resource Resource-Owning Regions (RORs), including File-Owning Regions (FORs) for VSAM and CICS Data Tables, and Queue-Owning Regions (QORs) for MQ, CICS Transient Data (TD) and CICS Temporary Storage (TS). This preserved compatibility for legacy applications at the expense of the operational complexity of configuring and managing many CICS regions. - -In subsequent releases and versions, CICS was able to exploit new Sysplex-exploiting facilities in VSAM/RLS and MQ for z/OS, and placed its own Data Tables, TD, and TS resources into the architected shared resource manager for the Sysplex, the Coupling Facility (CF), dispensing with the need for most RORs. The CF provides a mapped view of resources including a shared timebase, buffer pools, locks and counters, with hardware messaging assists that made sharing resources across the Sysplex both more efficient than polling and reliable (utilizing a semi-synchronized backup CF for use in case of failure).
- -By this time, the CMOS line had individual boxes that exceeded the power available from the fastest ECL box, with more processors per box, and when these were coupled together, 32 or more nodes could scale to two orders of magnitude more total power for a single workload. For example, by 2002, Charles Schwab was running a "MetroPlex" consisting of a redundant pair of its mainframe Sysplexes in two locations in Phoenix, AZ, each with 32 nodes driven by one shared CICS/DB2 workload to support the vast volume of pre-dotcom-bubble web client inquiry requests. - -This cheaper, much more scalable CMOS technology base, and the huge investment costs of having to both get to 64-bit addressing and independently produce cloned CF functionality, drove the IBM-mainframe clone makers out of the business one by one. - -The objective of recovery/restart in CICS is to minimize, and if possible eliminate, the damage done to the online system when a failure occurs, so that system and data integrity are maintained. If the CICS region was shut down instead of failing, it will perform a "Warm" start, exploiting the checkpoint written at shutdown. The CICS region can also be forced to "Cold" start, which reloads all definitions and wipes out the log, leaving the resources in whatever state they are in. - -Under CICS, the following are some of the resources which are considered recoverable. If one wishes these resources to be recoverable, special options must be specified in the relevant CICS definitions: - -* VSAM files - -* CICS-maintained data tables (CMT) - -* Intrapartition TDQ - -* Temporary Storage Queue in auxiliary storage - -* I/O messages from/to transactions in a VTAM network - -* Other database/queuing resources connected to CICS that support the XA two-phase commit protocol (like IMS/DB, Db2, VSAM/RLS) - -CICS also offers extensive recovery/restart facilities for users to establish their own recovery/restart capability in their CICS system. Commonly used recovery/restart facilities include: - -* Dynamic Transaction Backout (DTB) - -* Automatic Transaction Restart - -* Resource Recovery using System Log - -* Resource Recovery using Journal - -* System Restart - -* Extended Recovery Facility - -Each CICS region comprises one major task on which every transaction runs, although certain services such as access to Db2 data use other tasks (TCBs). Within a region, transactions are cooperatively multitasked: they are expected to be well-behaved and yield the CPU rather than wait. CICS services handle this automatically. - -Each unique CICS "Task" or transaction is allocated its own dynamic memory at start-up, and subsequent requests for additional memory are handled by a call to the "Storage Control program" (part of the CICS nucleus or "kernel"), which is analogous to an operating system. - -A CICS system consists of the online nucleus, batch support programs, and applications services. - -The original CICS nucleus consisted of a number of functional modules written in 370 assembler until V3: - -* Task Control Program (KCP) - -* Storage Control Program (SCP) - -* Program Control Program (PCP) - -* Program Interrupt Control Program (PIP) - -* Interval Control Program (ICP) - -* Dump Control Program (DCP) - -* Terminal Control Program (TCP) - -* File Control Program (FCP) - -* Transient Data Control Program (TDP) - -* Temporary Storage Control Program (TSP) - -Starting in V3, the CICS nucleus was rewritten into a kernel-and-domain structure using IBM's PL/AS language, which is compiled into assembler.
- -The prior structure did not enforce separation of concerns and so had many inter-program dependencies, which led to bugs unless exhaustive code analysis was done. The new structure was more modular, and therefore more resilient, because it was easier to change without impact. The first domains were often built with the name of the prior program but without the trailing "P". For example, Program Control Domain (DFHPC) or Transient Data Domain (DFHTD). The kernel operated as a switcher for inter-domain requests; initially this proved expensive for frequently called domains (such as Trace), but by utilizing PL/AS macros these calls were in-lined without compromising the separate-domain design. - -In later versions, completely redesigned domains were added, like the Logging Domain (DFHLG) and Transaction Domain (DFHTM), which replaced the Journal Control Program (JCP). - -In addition to the online functions, CICS has several support programs that run as batch jobs: - -* High-level language (macro) preprocessor - -* Command language translator - -* Dump utility - prints formatted dumps generated by CICS Dump Management - -* Trace utility - formats and prints CICS trace output - -* Journal formatting utility - prints a formatted dump of the CICS region in case of error - -The following components of CICS support application development: - -* Basic Mapping Support (BMS) - provides device-independent terminal input and output - -* APPC Support - provides LU6.1 and LU6.2 API support for collaborating distributed applications that support two-phase commit - -* Data Interchange Program (DIP) - provides support for IBM 3770 and IBM 3790 programmable devices - -* 2260 Compatibility - allows programs written for IBM 2260 display devices to run on 3270 displays - -* EXEC Interface Program - the stub program that converts calls generated by EXEC CICS commands to calls to CICS functions - -* Built-in Functions - table search, phonetic conversion, field verify, field edit, bit checking, input formatting, weighted retrieval - -Different countries have differing pronunciations of "CICS": - -* Within IBM (specifically Tivoli) it is referred to as kicks. - -* In the US, it is more usually pronounced by reciting each letter, C-I-C-S. - -* In Australia, Belgium, Canada, Hong Kong, the UK and some other countries, it is pronounced kicks. - -* In Denmark, it is pronounced kicks. - -* In Greece, it is pronounced kiks. - -* In India, it is pronounced kicks. - -* In Iran, it is pronounced kicks. - -* In Israel, it is pronounced C-I-C-S. - -* In Russia, it is pronounced kiks. - -* In Slovenia, it is pronounced kiks. - -* In Sweden, it is pronounced kicks. - -* In Uganda, it is pronounced kicks. - -* In Turkey, it is pronounced kiks. diff --git a/wiki/wikipedia/400.txt b/wiki/wikipedia/400.txt deleted file mode 100644 index 459d12ca8c87cfc613f11ed9ca6fd3b8ee200d05..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/400.txt +++ /dev/null @@ -1,11 +0,0 @@ -In numerical analysis, the Lax equivalence theorem is a fundamental theorem in the analysis of finite difference methods for the numerical solution of partial differential equations.
It states that for a consistent finite difference method for a well-posed linear initial value problem, the method is convergent if and only if it is stable. - -The importance of the theorem is that while the convergence of the solution of the finite difference method to the solution of the partial differential equation is what is desired, it is ordinarily difficult to establish, because the numerical method is defined by a recurrence relation while the differential equation involves a differentiable function. However, consistency—the requirement that the finite difference method approximates the correct partial differential equation—is straightforward to verify, and stability is typically much easier to show than convergence (and would be needed in any event to show that round-off error will not destroy the computation). Hence convergence is usually shown via the Lax equivalence theorem. - -Stability in this context means that a matrix norm of the matrix used in the iteration is at most unity, called (practical) Lax–Richtmyer stability. Often a von Neumann stability analysis is substituted for convenience, although von Neumann stability only implies Lax–Richtmyer stability in certain cases. - -This theorem is due to Peter Lax. It is sometimes called the Lax–Richtmyer theorem, after Peter Lax and Robert D. Richtmyer. - -Category:Numerical differential equations - -Category:Theorems in analysis diff --git a/wiki/wikipedia/4000.txt b/wiki/wikipedia/4000.txt deleted file mode 100644 index d578782176a28c53a592ccaec91257a93c1168f8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4000.txt +++ /dev/null @@ -1 +0,0 @@ -In anabelian geometry, a branch of algebraic geometry, the section conjecture gives a conjectural description of the splittings of the group homomorphism $\pi_1(X)\to \operatorname{Gal}(k)$, where $X$ is a complete smooth curve of genus at least 2 over a field $k$ that is finitely generated over $\mathbb{Q}$, in terms of decomposition groups of rational points of $X$. The conjecture was introduced by Alexander Grothendieck in a 1983 letter to Gerd Faltings. diff --git a/wiki/wikipedia/4001.txt b/wiki/wikipedia/4001.txt deleted file mode 100644 index fc4c32bbf64c68fc670ab1a8ba3f31c67e6d76ef..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4001.txt +++ /dev/null @@ -1,106 +0,0 @@ -White potatoes are actually around 79% water; agar is 99% water. - -The potato paradox is a mathematical calculation that has a counter-intuitive result. The Universal Book of Mathematics states the problem as follows: "Fred brings home 100 kg of potatoes, which (being purely mathematical potatoes) consist of 99% water. He then leaves them outside overnight so that they consist of 98% water. What is their new weight?" - -It then reveals the answer: "The surprising answer is 50 kg." - -In Quine's classification of paradoxes, the potato paradox is a veridical paradox. - -Initially, if the potatoes are 99% water, the non-water mass is 1%. The potatoes' mass is 100 kg; 1% of 100 kg is 1 kg. This mass is static; it will not change, as only the water evaporates. - -If, after leaving them overnight, the water content shrinks to 98% of the total mass, then the potatoes' non-water mass is 2% of the total mass - but that's still just 1 kg. For the non-water percentage to be twice as big, the total mass must be half as big. In other words, 1 kg is 2% of 50 kg. - -Another way to word it, starting from the beginning: - -1% of 100 kg is 1 kg of solid potato. This amount does not change. So we have 1 kg of solid potato comprising 1% of the total mass. - -Water then evaporates until it makes up 98% of the total mass, meaning that 2% of the total mass is the unchanged 1 kg of solid. - -Simple algebra: 1 kg is 2% of what?
The answer is 50 kg. - -In the beginning, there is 1 part non-water and 99 parts water. This is 99% water, or a non-water to water ratio of 1:99. To double the ratio of non-water to water to 1:49, while keeping the one part of non-water, the amount of water must be reduced to 49 parts. This is equivalent to 2 parts non-water to 98 parts water (98% water). - -In 100 kg of potatoes, 99% water (by weight) means that there is 99 kg of water and 1 kg of non-water. This is a 1:99 ratio. - -If the percentage decreases to 98%, then the non-water part must now account for 2% of the weight: a ratio of 2:98, or 1:49. Since the non-water part still weighs 1 kg, the water must weigh 49 kg to produce a total of 50 kg. - -After the water evaporates, the remaining total quantity, $x$, contains 1 kg of pure potato and $\frac{98}{100}x$ water. The equation becomes: - -\begin{align} - -1 + \frac{98}{100}x &= x \\ - -\Longrightarrow 1 &= \frac{1}{50}x - -\end{align} - -resulting in $x$ = 50 kg. - -The weight of water in the fresh potatoes is $0.99 \cdot 100$. - -If $x$ is the weight of water lost from the potatoes when they dehydrate, then $0.98(100 - x)$ is the weight of water in the dehydrated potatoes. Therefore: -$$ -0.99 \cdot 100 - 0.98(100 - x) = x -$$ - -Expanding the brackets and simplifying, - -\begin{align} - -99 - (98 - 0.98x) &= x \\ - -99 - 98 + 0.98x &= x \\ - -1 + 0.98x &= x - -\end{align} - -Subtracting the smaller $x$ term from each side, - -\begin{align} - -1 + 0.98x - 0.98x &= x - 0.98x \\ - -1 &= 0.02x - -\end{align} - -which gives the lost water as: -$$ -x = 50 -$$ - -and the dehydrated weight of the potatoes as: -$$ -100 - x = 100 - 50 = 50 -$$ - -After the potatoes are dehydrated, the potatoes are 98% water. - -This implies that the proportion of non-water weight of the potatoes is $(1 - 0.98)$. - -If $x$ is the weight of the potatoes after dehydration, then: - -\begin{align} - -(1-0.98)x &= 1 \\ - -0.02x &= 1 \\ - -x &= \frac{1}{0.02} \\ - -x &= 50 - -\end{align} - -The answer is the same as long as the concentration of the non-water part is doubled. For example, if the potatoes were originally 99.999% water, reducing the percentage to 99.998% still requires halving the weight. - -After the first reading, one might wrongly assume that reducing the water percentage by one point reduces the weight by 1 kg. But when the water percentage is reduced by one point, what this actually means is that the non-water percentage is doubled while its weight stays constant, meaning that 50 kg of water evaporated. - -Another way to interpret the initial query is that the 99% water refers to the volume and not the weight of the potatoes. Though the volume of the potatoes would still be halved, the answer would be unknowable, as we do not know the weight of the potato solids. For example, the potato solids might weigh 75 kg on their own, in which case the answer can never be 50 kg, no matter how much the water is reduced. - -But since logic dictates the paradox must have a valid answer, we must assume the water makes up 99% of the weight. - -The paradox is then not mathematical, but rather about our understanding of the language and logic used to define the question. Careful wording must be used to ensure that the "paradox" is correct.
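The arithmetic above is easy to sanity-check numerically. The following minimal Python sketch mirrors the calculation; the variable names are our own:

```python
# Sanity check of the potato paradox: 100 kg of potatoes at 99% water.
solids = 0.01 * 100              # 1 kg of non-water mass; this never changes
water_before = 0.99 * 100        # 99 kg of water initially

# After drying, the solids must make up 2% of the new total mass m:
m = solids / (1 - 0.98)
print(m)                            # 50.0 -> the potatoes now weigh 50 kg
print(water_before - (m - solids))  # 50.0 -> 50 kg of water evaporated
```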
diff --git a/wiki/wikipedia/4002.txt b/wiki/wikipedia/4002.txt deleted file mode 100644 index 3cc791ffd4db2cd47812b4c1182faf7a76125a1d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4002.txt +++ /dev/null @@ -1,30 +0,0 @@ -In mathematics, the Cheeger bound is a bound on the second largest eigenvalue of the transition matrix of a finite-state, discrete-time, reversible stationary Markov chain. It can be seen as a special case of Cheeger inequalities in expander graphs. - -Let $X$ be a finite set and let $K(x,y)$ be the transition probability for a reversible Markov chain on $X$. Assume this chain has stationary distribution $\pi$. - -Define -$$ -Q(x,y) = \pi(x) K(x,y) -$$ - -and for $A,B \subset X $ define -$$ -Q(A \times B) = \sum_{x \in A, y \in B} Q(x,y). -$$ - -Define the constant $\Phi$ as -$$ - \Phi = \min_{S \subset X, \pi(S) \leq \frac{1}{2}} \frac{Q (S \times S^c)}{\pi(S)}. -$$ - -The operator $K$, acting on the space of real-valued functions on $X$, defined by -$$ - (K \phi)(x) = \sum_y K(x,y) \phi(y) -$$ - -has eigenvalues $ \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n $. It is known that $\lambda_1 = 1$. The Cheeger bound is a bound on the second largest eigenvalue $\lambda_2$. - -Theorem (Cheeger bound): -$$ - 1 - 2 \Phi \leq \lambda_2 \leq 1 - \frac{\Phi^2}{2}. -$$ diff --git a/wiki/wikipedia/4003.txt b/wiki/wikipedia/4003.txt deleted file mode 100644 index 007b46fd0370dfd3339bcfb1e45b5ba59ca6424b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4003.txt +++ /dev/null @@ -1,15 +0,0 @@ -A fan triangulation is a simple way to triangulate a polygon by choosing a vertex and drawing diagonals to all of the other vertices of the polygon. Not every polygon can be triangulated this way, so this method is usually only used for convex polygons. - -Aside from the properties of all triangulations, fan triangulations have the following properties: - -* All convex polygons, but not all polygons, can be fan triangulated. - -* Polygons with only one concave vertex can always be fan triangulated, as long as the diagonals are drawn from the concave vertex. - -* Whether a polygon can be fan triangulated can be determined by solving the art gallery problem, in order to determine whether there is at least one vertex that is visible from every point in the polygon. - -* The triangulation of a polygon with $n$ vertices uses $n - 3$ diagonals, and generates $n - 2$ triangles. - -* Generating the list of triangles is trivial if an ordered list of vertices is available, and can be computed in linear time (see the sketch below). As such, it is unnecessary to explicitly store the list of triangles, and therefore, many graphical libraries implement primitives to represent polygons based on this triangulation. - -* Although this triangulation is fit for solving certain problems, such as rasterisation or collision detection, it may be unfit for other tasks because the origin vertex accumulates a high number of neighbors, and the internal angles of the triangulation are unevenly distributed. diff --git a/wiki/wikipedia/4004.txt b/wiki/wikipedia/4004.txt deleted file mode 100644 index f274b0fdc623e54b87e0e24ef8a83e1e6c7734c0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4004.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, Waldspurger's theorem, introduced by Jean-Loup Waldspurger, is a result that identifies Fourier coefficients of modular forms of half-integral weight k+1/2 with the value of an L-series at s=k/2.
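As an illustration of the fan triangulation described above, here is a minimal Python sketch (the function name is our own). For a convex polygon given as an ordered vertex list, it emits the n - 2 triangles in linear time:

```python
def fan_triangulation(vertices):
    """Fan-triangulate a convex polygon given as an ordered list of vertices.

    Draws diagonals from vertices[0] to every other vertex, yielding the
    n - 2 triangles of the fan in linear time.
    """
    return [(vertices[0], vertices[i], vertices[i + 1])
            for i in range(1, len(vertices) - 1)]

# A unit square yields 4 - 2 = 2 triangles:
print(fan_triangulation([(0, 0), (1, 0), (1, 1), (0, 1)]))
# [((0, 0), (1, 0), (1, 1)), ((0, 0), (1, 1), (0, 1))]
```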
diff --git a/wiki/wikipedia/4005.txt b/wiki/wikipedia/4005.txt deleted file mode 100644 index 12db6c5c40c76f1e56cd994f356f0423217ed65a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4005.txt +++ /dev/null @@ -1,20 +0,0 @@ -In geometry, Coxeter's loxodromic sequence of tangent circles is an infinite sequence of circles arranged so that any four consecutive circles in the sequence are pairwise mutually tangent. This means that each circle in the sequence is tangent to the three circles that precede it and also to the three circles that follow it. - -The radii of the circles in the sequence form a geometric progression with ratio -$$ -k=\varphi + \sqrt{\varphi} \approx 2.89005 \ , -$$ - -where φ is the golden ratio. k and its reciprocal satisfy the equation -$$ -(1+x+x^2+x^3)^2=2(1+x^2+x^4+x^6)\ , -$$ - -and so any four consecutive circles in the sequence meet the conditions of Descartes' theorem. - -The centres of the circles in the sequence lie on a logarithmic spiral. Viewed from the centre of the spiral, the angle between the centres of successive circles is -$$ - \cos^{-1} \left( \frac {-1} {\varphi} \right) \approx 128.173 ^ \circ \ . -$$ - -The construction is named after geometer Donald Coxeter, who generalised the two-dimensional case to sequences of spheres and hyperspheres in higher dimensions. It can be interpreted as a degenerate special case of the Doyle spiral. diff --git a/wiki/wikipedia/4006.txt b/wiki/wikipedia/4006.txt deleted file mode 100644 index b9cdcfa85173a8cf95240b76d68ccb1fbcd76ef0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4006.txt +++ /dev/null @@ -1,89 +0,0 @@ -Brain Age: Train Your Brain in Minutes a Day!, known as Dr. Kawashima's Brain Training: How Old Is Your Brain? in PAL regions, is an edutainment puzzle video game. It was developed and published by Nintendo for the Nintendo DS. Nintendo has stated that it is an entertainment product inspired by Tohoku University professor Ryuta Kawashima's work in the neurosciences. - -It was first released in Japan, and later released in North America, Europe, Australia, and South Korea. It was followed by a sequel titled Brain Age 2: More Training in Minutes a Day!, and was later followed by two redesigns and Brain Age Express for the Nintendo DSi's DSiWare service, which uses popular puzzles from these titles as well as several new puzzles, and Brain Age: Concentration Training for Nintendo 3DS. A new installment in the series, Dr Kawashima's Brain Training for Nintendo Switch, was released in Japan on December 27, 2019. - -Brain Age features a variety of puzzles, including Stroop tests, mathematical questions, and Sudoku puzzles, all designed to help keep certain parts of the brain active. It was included in the Touch! Generations series of video games, a series which features games for a more casual gaming audience. Brain Age uses the touch screen and microphone for many puzzles. It has achieved both commercial and critical success, selling 19.01 million copies worldwide as of September 30, 2015, and has received multiple awards for its quality and innovation. There has been controversy over the game's scientific effectiveness. The game was later released on the Nintendo eShop for the Wii U in Japan in mid-2014. - -Brain Age is designed to be played a little each day, similar to the Nintendogs titles and Animal Crossing: Wild World.
The Nintendo DS is held on its side, with the touch screen on the right for right-handed people and the left for left-handed people. The game is entirely touch and voice-controlled – the player either writes the answer to the puzzle on the touch screen or speaks it into the microphone. Before the player can begin a Brain Age session, they must input some information. First, players must confirm the date and select which hand they write with. The player then inputs their name and date of birth. - -At the end of all Brain Age Check puzzles, Training puzzles, Quick Play puzzles, and Sudoku puzzles, the player is shown how quickly they completed the puzzle, the player's speed (according to metaphors such as "bicycle speed" and "jet speed", the highest being "rocket speed"), and either a tip for improving the player's brain or a game-related tip. If the player's time or score in Brain Age Check or Training is high enough, it will appear on one or both of the Top Three. The Top Three shown is the player's own top three attempts at a puzzle; the player can also compare these with the top three of other saved players. The player's score is only counted on their first attempt at a puzzle each day. - -The player is also awarded a stamp for each day they complete puzzles. When enough are accumulated, the game unlocks certain features such as more puzzles in Training mode, Hard versions of these puzzles, and the ability to customize his or her own stamps. - -While the player is navigating the menus outside of the puzzles, Professor Kawashima appears to prompt and encourage the user. Brain Age allows up to four players to save profiles on one DS game card, and these players can interact with each other in several different ways. There are five modes of play – Brain Age Check, Training, Quick Play, Download, and Sudoku. - -When starting a session, Kawashima may ask the player to participate in a Picture-Drawing Quiz, which requires the player to draw a person, place, or thing from memory using the touch screen. After the player has drawn all three, the game will compare his or her drawings to examples created by the game developers, along with advice on what to emphasize shown below each image. If more than one player profile is saved on the game card, images for the day can be compared to those of other players. - -Kawashima may also ask the player to participate in a Memory Quiz, which requires the player to recall a recent event, such as what the player ate or the most interesting thing seen on television the day before. Several days later, it will ask for the answer originally provided, and will then compare the answer given several days ago and the answer given on the current day to test the player's recollection skills. The player is not scored on his or her ability to remember. The purpose of these tasks is to help the player improve his or her recollection. - -Aside from Download, the game includes four modes: Brain Age Check, Training, Quick Play, and Sudoku. The Brain Age Check gives the player three puzzle minigames to complete. The first is usually a Stroop test, although the player can choose to skip the Stroop test if they are not in a quiet environment or are otherwise unable to speak into the microphone. At the end of the Brain Age Check, the game reports the player's "brain age", a theoretical assessment of the age of the player's brain. The higher the brain age, the worse the player performed. The best possible score is 20 and the worst 80, in accordance with Kawashima's theory that the brain stops developing at 20.
The player may replay the Brain Age Check, but the brain age is registered only once per day. - -Once the player confirms whether or not they can speak into the microphone, Professor Kawashima will describe the first puzzle. If the player answered that they can speak, the game begins with a Stroop test; if the player cannot use the microphone, the game picks a random puzzle from the following: Calculations × 20, Word Memory, Connect Maze, and Number Cruncher. - -During the Stroop test, the game uses four words and colors: blue, black, yellow, and red. One of these words will appear on screen in a random color, which may not match the color denoted by the word. The player is instructed to say the color of the word, rather than the word itself (e.g., if the word Yellow appears in blue letters, the correct answer is "blue"; see the Stroop effect for details). - -In Speed Counting, which requires speaking but does not use the microphone, the player counts up from one to 120 as fast as they can without slurring the names of numbers. - -Word Memory gives the player a list of 30 four-lettered words. The player is given two minutes to study the list and memorize as many words as possible. After the time is up, the player must write down as many words as they can in three minutes. Spelling words that were not on the list does not cost the player marks, but the system will not recognize them. Conversely, spelling an on-list word counts as memorized, and the test will even notify the player if they write a word they have already written. - -Connect Maze gives players a randomly created group of circles, with letters and numbers in them. The player must follow the pattern A-1-B-2 until reaching M-13 as quickly as possible. - -Calculations × 20 presents the player with 20 simple calculation problems that include addition, subtraction, and multiplication. The problems appear on the top screen and scroll up as they are answered, whether right or wrong, while the touch screen is used to write out the answer. This test is also available in the Training section. - -Number Cruncher is a mental agility game that displays several numbers, which vary in their appearance and on-screen behavior; above them is a question, such as "how many numbers are blue?" or "how many numbers are moving?", which the player must answer as quickly as possible. - -The Training mode allows the player to play a variety of training programs, with all but one of them being exclusive to the Training mode. Once the player completes at least one program, Kawashima awards him or her a stamp, which he places on the current date. If the player completes at least three programs, the stamp will expand in size. After accumulating a certain number of stamps, Kawashima will award him or her a new program, difficulty mode, or additional feature under the Options menu. Each program can be played as many times as the player likes, although only the first play-through of the day will count in the graph for that puzzle. - -There are nine training programs in Training mode: - -# Calculations × 20, which is the same as the one found in the Brain Age Check: a mental agility exercise with a total of 20 simple calculation questions that include addition, subtraction, and multiplication. - -# Calculations × 100, which is the same as Calculations × 20, although with 100 questions instead of 20. There is a hard mode that includes simple division.
- -# Reading Aloud, which gives the player an excerpt from a classic story such as The Legend of Sleepy Hollow or Little Women, and tasks the player with reading the story aloud or silently to see how quickly they can do it. The player progresses through the excerpt by pushing Next until they reach the end of the excerpt. If the player pushes Next too quickly, this is treated as cheating, and the program will count the activity as not done. - -# Low to High: the game instructs the player to remember several numbers in boxes; the boxes first appear empty, then on a count of three the numbers appear for only one second. Once they are gone, the player has to recall the numbers from lowest to highest and tap the corresponding boxes in that order; failing to order the numbers correctly counts as a mistake. Depending on whether the player answers correctly, the number of boxes increases or decreases, giving the game an adaptive difficulty ranging from a minimum of 4 boxes to a maximum of 16. - -# Syllable Count shows several phrases, one after the other, on the top screen, and the player must write the number of syllables in each phrase on the touch screen. - -# Head Count features a random number of people on the top screen (e.g. 4). After a few seconds to allow the player to count the number of people, a house falls over them. The player must watch the screen carefully, as the people inside will leave the house and more people will enter the house. This will eventually cease, and the game asks the player to write down how many people are currently in the house. The puzzle gets more difficult as the player progresses in it. There is also a hard mode in which people also come in and out of the chimney. - -# Triangle Math: a kind of calculation (usually addition or subtraction, but sometimes involving sign rules such as -(-3) = 3) arranged in tiers just like Pascal's triangle, which the player must solve. This exercise also features a hard mode with an additional tier. - -# Time Lapse displays two analogue clocks (e.g. one at 2:45 and one at 7:30) and requires the player to calculate the difference in time between those clocks; in that case the answer would be 4 h 45 min. - -# Voice Calculation, which is similar to the Calculations puzzles. However, this puzzle requires the player to speak the correct answer aloud, just like the Stroop test. - -Quick Play can be played by anyone, whether they have a saved file or not. Quick Play allows the player to play three modes – Quick Brain Age Check, Quick Training, and Quick Sudoku – each providing only one of the easy puzzles to try. Quick Brain Age Check only allows the player to play the Stroop test. In Quick Training, the game only allows the player to play Calculations × 20. In Quick Sudoku, which is only available for North America, Europe and South Korea, the player may only play the easiest Sudoku puzzle available. At the end of each session, the player's brain age or time will be assessed, and Kawashima will give a preview of the full game. - -A player with a copy of Brain Age can send certain game data to other Nintendo DS consoles using the DS Download Play feature. They may either download Quick Play mode to this player's Nintendo DS, or Calculations × 30, a variation of the other Calculation puzzles which can be played by up to sixteen people. This mode is not supported in the Wii U Virtual Console version.
Included in the North America, Europe, Australia and Korea versions of this game is a Sudoku mode, which features more than 100 puzzles across three different modes – Beginner, Intermediate, and Advanced. Sudoku involves a 9×9 grid with numbers in every square. Some of these numbers are visible, while others are not. The objective is to fill in the hidden numbers using the visible numbers as hints. Each row, column, and 3×3 box has nine squares, and each must contain every number from 1 to 9. - -Many neurologists recommend the game for prevention of dementia/Alzheimer's. Nintendo of America has refused to support any scientific claims about the benefits of the game, stressing that it is in the entertainment business. - -One study involved 600 Scottish students, with one group playing twenty minutes of Brain Age before class daily for nine weeks and a control group studying regularly. The students were tested at the beginning and end of the study. In the end, the group that played Brain Age improved test scores by 50%. The time to complete the tests in the Brain Age group dropped by five minutes, an improvement double that of the control group. - -Another study, involving 67 ten-year-olds, found no evidence to support claims that Brain Age improves cognitive function better than other means of training one's brain. (The game itself, however, states that brain age is best measured when the user is at least twenty years of age.) Professor of cognitive psychology Alain Lieury at the University of Rennes, France, said: "The Nintendo DS is a technological jewel. As a game it's fine. But it is charlatanism to claim that it is a scientific test". Helping children with homework, reading, playing Scrabble or Sudoku, or watching documentaries matched or beat the benefits of Brain Age. The children were split into four groups. The first two completed a seven-week memory course on a Nintendo DS, the third did puzzles with pencils and paper, and the fourth went to school as normal. Researchers found that children playing Brain Age failed to show any significant improvement in memory tests. They did do 19% better in mathematics, but so did the pencil-and-paper group, and the fourth group did 18% better. In memorization, the pencil-and-paper group had a 33% improvement, while the Brain Age group performed 17% worse. In logic tests, the Brain Age group had a 10% improvement, as did the pencil-and-paper group. The children who had no training improved 20%. - -It has also been stated that the same effects of "keeping the brain sharp" can be achieved by playing Sudoku or Tetris, or by talking with friends. - -A study conducted between March and August 2010 in Sendai city, Miyagi prefecture, Japan, assessed the impact of the brain training game on the elderly, using a double-blinded intervention. Results showed that playing Brain Age for 4 weeks could lead to improved cognitive functions (executive functions and processing speed) in the elderly. - -Nintendo was looking for something new to develop that would appeal to both traditional and casual gamers. In one of the meetings, the Chief Financial Officer of Nintendo's Japanese division suggested reviewing a published book titled Train Your Brain: 60 Days to a Better Brain, which was enjoying a great deal of success in Japan. Satoru Iwata, the president of Nintendo, arranged for a meeting with Professor Ryuta Kawashima, the author of the book.
- -As both Iwata and Professor Kawashima were too busy to meet under normal circumstances, they agreed to meet for an hour during the Nintendo DS launch. The original meeting became a brainstorming session that lasted for three hours, in which Professor Kawashima explained the basics of his studies. Iwata assigned a team of nine developers to develop the game and to have it ready in 90 days for demonstration. - -Brain Age's sound director was Masami Yone, while the music was composed by Minako Hamano and Akito Nakatsuka, who had composed music for Nintendo games since as early as 1993 and 1987, respectively. The main menu song from this game was later used in Super Smash Bros. Brawl. - -The initial reaction from retailers was one of concern about the new title's ability to sell. The most important retailers in Japan were given the game for 15 minutes to test it out. In the end, Nintendo secured nearly 70,000 orders for the first shipment, an amount above most expectations. In comparison, the sequel had over 850,000 orders placed before its launch. It was the 94th best-selling game in Japan in 2008, selling 140,992 copies, with total lifetime sales of 3,750,890. During its first three weeks on sale in North America, Brain Age sold 120,000 copies, becoming the fifteenth best-selling title in the U.S. video game charts for the month of May. In Europe, Brain Age received critical acclaim, becoming number 1 in the Nintendo DS sales chart and number 4 in the all-platforms chart on debut, and selling more than 500,000 units in just over two months. As of January 22, 2007, Brain Age had sold over 2 million copies in Europe. As of October 30, 2007, Brain Age had sold over one million copies in the United Kingdom. It was the 10th best-selling Nintendo DS game of December 2008 in the United States. As of September 30, 2015, Brain Age has worldwide sales of 19.01 million. - -The game was received with generally positive reviews in the Western market, and was featured on television in the United States (on the Discovery Channel) and Australia (on Seven News). Wired included the game in its list of "The 15 Most Influential Games of the Decade" at No. 5, due to how it "bucked the dominant trends" and "ushered in the era of games that are (supposedly) good for you, like Wii Fit". - -The positive sales figures and overall reception in Japan led to Brain Age being released in various countries around the world. In the North American market, the game is known as Brain Age: Train Your Brain in Minutes a Day; it was released on April 17, 2006, and included 108 Sudoku puzzles of different levels of difficulty. Nintendo gave out copies of the North American version of Brain Age at the 2006 Game Developers Conference. They also shipped free retail versions to special members of the Nintendo NSider Forums. Both groups received their copies before the official release date. It has also been given away at certain retailers with the purchase of a Nintendo DS Lite. - -The game was released as Dr Kawashima's Brain Training: How Old Is Your Brain? in the UK and Ireland. Like the American version, this version also features Sudoku. All 3 parts are saved on one cartridge. In the UK, its initial television commercials featured Chris Tarrant. Nicole Kidman, Ronan Keating, and Patrick Stewart feature in current European commercials for the Brain Age series. - -Nintendo Australia ran an ad campaign that coincided with the game's Australian release.
During the period of June 15 to July 17, 2006, Nintendo Australia stated that for every copy of Brain Age purchased, Nintendo would donate $1.00 to Alzheimer's Australia. - -The game was one of the launch titles for the DS Lite in South Korea, along with English Training: Have Fun Improving Your Skills!. It was released on January 18, 2007. diff --git a/wiki/wikipedia/4007.txt b/wiki/wikipedia/4007.txt deleted file mode 100644 index 4c578785ced68ab5bc0f204f697c76270e210830..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4007.txt +++ /dev/null @@ -1,17 +0,0 @@ -In computational geometry, a Pitteway triangulation is a point set triangulation in which the nearest neighbor of any point p within the triangulation is one of the vertices of the triangle containing p. - -Alternatively, it is a Delaunay triangulation in which each internal edge crosses its dual Voronoi diagram edge. Pitteway triangulations are named after Michael Pitteway, who studied them in 1973. Not every point set supports a Pitteway triangulation. When such a triangulation exists, it is a special case of the Delaunay triangulation, and consists of the union of the Gabriel graph and convex hull. - -The concept of a Pitteway triangulation was introduced by Pitteway. See also McLain, who writes "An optimal partition is one in which, for any point within any triangle, that point lies at least as close to one of the vertices of that triangle as to any other data point." The name "Pitteway triangulation" was given by Okabe. - -Gold points out that not every point set supports a Pitteway triangulation. For instance, any triangulation of a regular pentagon includes a central isosceles triangle such that a point p near the midpoint of one of the triangle sides has its nearest neighbor outside the triangle. - -When a Pitteway triangulation exists, the midpoint of each edge interior to the triangulation must have the two edge endpoints as its nearest neighbors, for any other neighbor would violate the Pitteway property for nearby points in one of the two adjacent triangles. Thus, a circle having that edge as diameter must be empty of vertices, so the Pitteway triangulation consists of the Gabriel graph together with the convex hull of the point set. Conversely, when the Gabriel graph and convex hull together form a triangulation, it is a Pitteway triangulation. - -Since all Gabriel graph and convex hull edges are part of the Delaunay triangulation, a Pitteway triangulation, when it exists, is unique for points in general position and coincides with the Delaunay triangulation. However, point sets with no Pitteway triangulation will still have a Delaunay triangulation. - -In the Pitteway triangulation, each edge pq either belongs to the convex hull or crosses the edge of the Voronoi diagram that separates the cells containing p and q. In some references this property is used to define a Pitteway triangulation, as a Delaunay triangulation in which all internal Delaunay edges cross their dual Voronoi edges. However, a Pitteway triangulation may include convex hull edges that do not cross their duals.
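The characterization above suggests a direct computational test: a point set in general position has a Pitteway triangulation exactly when every internal Delaunay edge is a Gabriel edge, i.e. when the circle with the edge as diameter contains no other input point. The following is a rough Python sketch of that test, our own construction using SciPy's Delaunay triangulation; note that the regular pentagon used as a test case is degenerate (its vertices are cocircular), so it serves only as an informal check:

```python
import numpy as np
from collections import Counter
from scipy.spatial import Delaunay

def has_pitteway_triangulation(points):
    """Check the Gabriel condition on every internal Delaunay edge."""
    pts = np.asarray(points, dtype=float)
    tri = Delaunay(pts)

    # Count how many triangles each Delaunay edge belongs to:
    # internal edges appear in two triangles, hull edges in one.
    edge_count = Counter()
    for simplex in tri.simplices:
        for i in range(3):
            edge = tuple(sorted((simplex[i], simplex[(i + 1) % 3])))
            edge_count[edge] += 1

    for (a, b), count in edge_count.items():
        if count < 2:
            continue  # convex hull edge: exempt from the Gabriel condition
        center = (pts[a] + pts[b]) / 2.0
        r2 = np.sum((pts[a] - center) ** 2)
        for c in range(len(pts)):
            if c not in (a, b) and np.sum((pts[c] - center) ** 2) < r2:
                return False  # internal non-Gabriel edge: no Pitteway triangulation
    return True

# Gold's counterexample: the regular pentagon has no Pitteway triangulation.
angles = 2 * np.pi * np.arange(5) / 5
pentagon = np.column_stack([np.cos(angles), np.sin(angles)])
print(has_pitteway_triangulation(pentagon))  # expected: False
```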
diff --git a/wiki/wikipedia/4008.txt b/wiki/wikipedia/4008.txt deleted file mode 100644 index e9230e6855315fc26309f0c4611bc9b024936af1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4008.txt +++ /dev/null @@ -1,49 +0,0 @@ -In graph theory, the Robertson–Seymour theorem (also called the graph minor theorem) states that the undirected graphs, partially ordered by the graph minor relationship, form a well-quasi-ordering. Equivalently, every family of graphs that is closed under minors can be defined by a finite set of forbidden minors, in the same way that Wagner's theorem characterizes the planar graphs as being the graphs that do not have the complete graph K5 or the complete bipartite graph K3,3 as minors. - -The Robertson–Seymour theorem is named after mathematicians Neil Robertson and Paul D. Seymour, who proved it in a series of twenty papers spanning over 500 pages from 1983 to 2004. Before its proof, the statement of the theorem was known as Wagner's conjecture after the German mathematician Klaus Wagner, although Wagner said he never conjectured it. - -A weaker result for trees is implied by Kruskal's tree theorem, which was conjectured in 1937 by Andrew Vázsonyi and proved in 1960 independently by Joseph Kruskal and S. Tarkowski. - -A minor of an undirected graph G is any graph that may be obtained from G by a sequence of zero or more contractions of edges of G and deletions of edges and vertices of G. The minor relationship forms a partial order on the set of all distinct finite undirected graphs, as it obeys the three axioms of partial orders: it is reflexive (every graph is a minor of itself), transitive (a minor of a minor of G is itself a minor of G), and antisymmetric (if two graphs G and H are minors of each other, then they must be isomorphic). However, if isomorphic graphs may nonetheless be considered as distinct objects, then the minor ordering on graphs forms a preorder, a relation that is reflexive and transitive but not necessarily antisymmetric. - -A preorder is said to form a well-quasi-ordering if it contains neither an infinite descending chain nor an infinite antichain. For instance, the usual ordering on the non-negative integers is a well-quasi-ordering, but the same ordering on the set of all integers is not, because it contains the infinite descending chain 0, -1, -2, -3... - -The Robertson–Seymour theorem states that finite undirected graphs and graph minors form a well-quasi-ordering. The graph minor relationship does not contain any infinite descending chain, because each contraction or deletion reduces the number of edges and vertices of the graph (a non-negative integer). The nontrivial part of the theorem is therefore that there are no infinite antichains; an equivalent form of the theorem states that, in any infinite set of graphs, there must be a pair of graphs one of which is a minor of the other. The statement that every infinite set has finitely many minimal elements implies this form of the theorem, for if there are only finitely many minimal elements, then each of the remaining graphs must belong to a pair of this type with one of the minimal elements. And in the other direction, this form of the theorem implies the statement that there can be no infinite antichains, because an infinite antichain is a set that does not contain any pair related by the minor relation. - -A family F of graphs is said to be closed under the operation of taking minors if every minor of a graph in F also belongs to F. If F is a minor-closed family, then let S be the set of graphs that are not in F (the complement of F). According to the Robertson–Seymour theorem, there exists a finite set H of minimal elements in S.
These minimal elements form a forbidden graph characterization of F: the graphs in F are exactly the graphs that do not have any graph in H as a minor. The members of H are called the excluded minors (or forbidden minors, or minor-minimal obstructions) for the family F. - -For example, the planar graphs are closed under taking minors: contracting an edge in a planar graph, or removing edges or vertices from the graph, cannot destroy its planarity. Therefore, the planar graphs have a forbidden minor characterization, which in this case is given by Wagner's theorem: the set H of minor-minimal nonplanar graphs contains exactly two graphs, the complete graph K5 and the complete bipartite graph K3,3, and the planar graphs are exactly the graphs that do not have a minor in the set {K5, K3,3}. - -The existence of forbidden minor characterizations for all minor-closed graph families is an equivalent way of stating the Robertson–Seymour theorem. For, suppose that every minor-closed family F has a finite set H of minimal forbidden minors, and let S be any infinite set of graphs. Define F from S as the family of graphs that do not have a minor in S. Then F is minor-closed and has a finite set H of minimal forbidden minors. Let C be the complement of F. S is a subset of C since S and F are disjoint, and H is the set of minimal graphs in C. Consider a graph G in H. G cannot have a proper minor in S since G is minimal in C. At the same time, G must have a minor in S, since otherwise G would be an element in F. Therefore, G is an element in S, i.e., H is a subset of S, and all other graphs in S have a minor among the graphs in H, so H is the finite set of minimal elements of S. - -For the other implication, assume that every set of graphs has a finite subset of minimal graphs, and let a minor-closed set F be given. We want to find a set H of graphs such that a graph is in F if and only if it does not have a minor in H. Let E be the graphs which are not minors of any graph in F, and let H be the finite set of minimal graphs in E. Now, let an arbitrary graph G be given. Assume first that G is in F. G cannot have a minor in H since G is in F and H is a subset of E. Now assume that G is not in F. Then G is not a minor of any graph in F, since F is minor-closed. Therefore, G is in E, so G has a minor in H. - -The following sets of finite graphs are minor-closed, and therefore (by the Robertson–Seymour theorem) have forbidden minor characterizations: - -*forests, linear forests (disjoint unions of path graphs), pseudoforests, and cactus graphs; - -*planar graphs, outerplanar graphs, apex graphs (formed by adding a single vertex to a planar graph), toroidal graphs, and the graphs that can be embedded on any fixed two-dimensional manifold; - -*graphs that are linklessly embeddable in Euclidean 3-space, and graphs that are knotlessly embeddable in Euclidean 3-space; - -*graphs with a feedback vertex set of size bounded by some fixed constant; graphs with Colin de Verdière graph invariant bounded by some fixed constant; graphs with treewidth, pathwidth, or branchwidth bounded by some fixed constant. - -Some examples of finite obstruction sets were already known for specific classes of graphs before the Robertson–Seymour theorem was proved. For example, the obstruction for the set of all forests is the loop graph (or, if one restricts to simple graphs, the cycle with three vertices). This means that a graph is a forest if and only if none of its minors is the loop (or, the cycle with three vertices, respectively).
The sole obstruction for the set of paths is the tree with four vertices, one of which has degree 3. In these cases, the obstruction set contains a single element, but in general this is not the case. Wagner's theorem states that a graph is planar if and only if it has neither K5 nor K3,3 as a minor. In other words, the set {K5, K3,3} is an obstruction set for the set of all planar graphs, and in fact the unique minimal obstruction set. A similar theorem states that K4 and K2,3 are the forbidden minors for the set of outerplanar graphs. - -Although the Robertson–Seymour theorem extends these results to arbitrary minor-closed graph families, it is not a complete substitute for these results, because it does not provide an explicit description of the obstruction set for any family. For example, it tells us that the set of toroidal graphs has a finite obstruction set, but it does not provide any such set. The complete set of forbidden minors for toroidal graphs remains unknown, but it contains at least 17,535 graphs. - -The Robertson–Seymour theorem has an important consequence in computational complexity, due to the proof by Robertson and Seymour that, for each fixed graph G, there is a polynomial time algorithm for testing whether larger graphs have G as a minor. The running time of this algorithm can be expressed as a cubic polynomial in the size of the larger graph (although there is a constant factor in this polynomial that depends superpolynomially on the size of G), which has been improved to quadratic time by Kawarabayashi, Kobayashi, and Reed. As a result, for every minor-closed family F, there is a polynomial time algorithm for testing whether a graph belongs to F: simply check, for each of the forbidden minors for F, whether the given graph contains that forbidden minor. - -However, this method requires a specific finite obstruction set to work, and the theorem does not provide one. The theorem proves that such a finite obstruction set exists, and therefore the problem is polynomial because of the above algorithm. However, the algorithm can be used in practice only if such a finite obstruction set is provided. As a result, the theorem proves that the problem can be solved in polynomial time, but does not provide a concrete polynomial-time algorithm for solving it. Such proofs of polynomiality are non-constructive: they prove polynomiality of problems without providing an explicit polynomial-time algorithm. In many specific cases, checking whether a graph is in a given minor-closed family can be done more efficiently: for example, checking whether a graph is planar can be done in linear time (see the sketch below). - -For graph invariants with the property that, for each k, the graphs with invariant at most k are minor-closed, the same method applies. For instance, by this result, treewidth, branchwidth, pathwidth, vertex cover, and the minimum genus of an embedding are all amenable to this approach, and for any fixed k there is a polynomial time algorithm for testing whether these invariants are at most k, in which the exponent in the running time of the algorithm does not depend on k. A problem with this property, that it can be solved in polynomial time for any fixed k with an exponent that does not depend on k, is known as fixed-parameter tractable. - -However, this method does not directly provide a single fixed-parameter-tractable algorithm for computing the parameter value for a given graph with unknown k, because of the difficulty of determining the set of forbidden minors.
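For the concrete family of planar graphs, the obstruction-based membership test described above can be sketched as follows. This is our own illustration using NetworkX: rather than running a general (and expensive) minor test against K5 and K3,3, it relies on the linear-time planarity check, which on failure returns a Kuratowski subgraph, a subdivision of K5 or K3,3 witnessing one of the two forbidden minors:

```python
import networkx as nx

# Membership test for the minor-closed family of planar graphs, whose
# obstruction set is {K5, K3,3} by Wagner's theorem.
def in_planar_family(G):
    is_planar, certificate = nx.check_planarity(G, counterexample=True)
    # certificate is a planar embedding if planar, else a Kuratowski subgraph
    return is_planar, certificate

planar, witness = in_planar_family(nx.petersen_graph())
print(planar)                                  # False: Petersen has K5 and K3,3 minors
print(in_planar_family(nx.cycle_graph(5))[0])  # True: cycles are planar
```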
Additionally, the large constant factors involved in these results make them highly impractical. Therefore, the development of explicit fixed-parameter algorithms for these problems, with improved dependence on k, has continued to be an important line of research. - -Friedman showed that the following theorem exhibits the independence phenomenon by being unprovable in various formal systems that are much stronger than Peano arithmetic, yet being provable in systems much weaker than ZFC: - -Theorem: For every positive integer n, there is an integer m so large that if G1, ..., Gm is a sequence of finite undirected graphs, - -where each Gi has size at most n+i, then Gj ≤ Gk for some j < k. - -(Here, the size of a graph is the total number of its vertices and edges, and ≤ denotes the minor ordering.) diff --git a/wiki/wikipedia/4009.txt deleted file mode 100644 index c38f5a595b58625e60a1a6cf37a497ccbc54bc59..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4009.txt +++ /dev/null @@ -1,11 +0,0 @@ -HOL Light is a member of the HOL theorem prover family. Like the other members, it is a proof assistant for classical higher order logic. Compared with other HOL systems, HOL Light is intended to have relatively simple foundations. HOL Light is authored and maintained by the mathematician and computer scientist John Harrison. HOL Light is released under the simplified BSD license. - -HOL Light is based on a formulation of type theory with equality - -as the only primitive notion. The primitive rules of inference - -are the following ten: REFL, TRANS, MK_COMB, ABS, BETA, ASSUME, EQ_MP, DEDUCT_ANTISYM_RULE, INST, and INST_TYPE. - -This formulation of type theory is very close to the one described in - -section II.2 of Lambek. diff --git a/wiki/wikipedia/401.txt deleted file mode 100644 index 365a859fd2034c3d924409d5bec641714bef5a0d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/401.txt +++ /dev/null @@ -1,13 +0,0 @@ -Chording means pushing several keys or buttons simultaneously to achieve a result. - -In music, more than one key is pressed at a time to achieve more complex sounds, or chords. - -Chording, with a chorded keyboard or keyer, allows one to produce as many characters as a QWERTY keyboard but with fewer keys and less motion per finger. - -Mouse chording allows a user to use a two-button mouse, trackball, or touchpad as if it were a three-button device. For example, in the Unix graphical user interface (known as X11), the middle button is used to paste text. Since Microsoft-type mice traditionally only had two buttons, users of Unix-type systems such as Linux and BSD chord the right and left buttons to paste text. - -TipTapSpeech, an application for the iPhone and iPad, is a chord-based text entry solution for touch screen computing. - -A GKOS chording keyboard application development for iPhone was started on the GKOS Google Group on May 25, 2009. The application for iPhone became available on May 8, 2010, and a similar application for Android on October 3, 2010. Thumbs are used to press the keys that are located towards the sides of the screen, either a single key or two keys simultaneously. - The further development of GKOS has led to the ComboKey keyboard that works better on smartphones, which also allows one-hand typing with the hand holding the device, generating combinations by occasional swipes to other keys. - -Douglas Engelbart, Cherif Algreatly, Valerie Landau, Robert Stephenson, Evan Schaffer, and Eric Matsuno filed a patent in 2010 for a chorded solution for multitouch screens. 
diff --git a/wiki/wikipedia/4010.txt deleted file mode 100644 index 116ea9306280648d5610bfe2253ba0edaa1e4259..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4010.txt +++ /dev/null @@ -1,45 +0,0 @@ -In the field of computer science, an atomic commit is an operation that applies a set of distinct changes as a single operation. If the changes are applied, then the atomic commit is said to have succeeded. If there is a failure before the atomic commit can be completed, then all of the changes completed in the atomic commit are reversed. This ensures that the system is always left in a consistent state. The other key property, isolation, comes from their nature as atomic operations: isolation ensures that only one atomic commit is processed at a time. The most common uses of atomic commits are in database systems and version control systems. - -The problem with atomic commits is that they require coordination between multiple systems. As computer networks are unreliable services, no algorithm can guarantee coordination among all systems, as proven in the Two Generals' Problem. As databases become more and more distributed, this coordination will increase the difficulty of making truly atomic commits. - -Atomic commits are essential for multi-step updates to data. This can be clearly shown in a simple example of a money transfer between two checking accounts. - -Consider a transaction that transfers 100 dollars from account X to account Y, complicated by a concurrent transaction that checks the balance of Y. To start, first 100 dollars is removed from account X. Second, 100 dollars is added to account Y. If the entire operation is not completed as one atomic commit, then several problems could occur. If the system fails in the middle of the operation, after removing the money from X and before adding into Y, then 100 dollars has just disappeared. Another issue is that if the balance of Y is checked before the 100 dollars is added, the wrong balance for Y will be reported. - -With atomic commits neither of these cases can happen: in the first case (the system failure), the atomic commit would be rolled back and the money returned to X. In the second case, the request for the balance of Y cannot occur until the atomic commit is fully completed. - -Atomic commits in database systems fulfil two of the key properties of ACID, atomicity and consistency. Consistency is only achieved if each change in the atomic commit is consistent. - -As shown in the example, atomic commits are critical to multistep operations in databases. Due to the hardware design of the physical disk on which the database resides, true atomic commits cannot exist. The smallest area that can be written to on disk is known as a sector. A single database entry may span several different sectors. Only one sector can be written at a time. This writing limit is why true atomic commits are not possible. After the database entries in memory have been modified they are queued up to be written to disk. This means the same problems identified in the example recur. Any algorithmic solution to this problem will still encounter the Two Generals' Problem. The two-phase commit protocol and three-phase commit protocol attempt to solve this and some of the other problems associated with atomic commits. - -The two-phase commit protocol requires a coordinator to maintain all the information needed to recover the original state of the database if something goes wrong. 
As the name indicates, there are two phases: voting and commit. - -During the voting phase each node writes the changes in the atomic commit to its own disk. The nodes then report their status to the coordinator. If any node does not report to the coordinator, or its status message is lost, the coordinator assumes the node's write failed. Once all of the nodes have reported to the coordinator, the second phase begins. - -During the commit phase the coordinator sends a commit message to each of the nodes to record in their individual logs. Until this message is added to a node's log, any changes made will be recorded as incomplete. If any of the nodes reported a failure, the coordinator will instead send a rollback message. This will remove any changes the nodes have written to disk (a minimal simulation of this message flow is sketched below). - -The three-phase commit protocol seeks to remove the main problem with the two-phase commit protocol: if a coordinator and another node fail at the same time during the commit phase, neither can tell what action should occur. To solve this problem, a third phase is added to the protocol. The prepare to commit phase occurs after the voting phase and before the commit phase. - -In the voting phase, similar to the two-phase commit, the coordinator asks each node whether it is ready to commit. If any node fails, the coordinator will time out while waiting for the failed node. If this happens, the coordinator sends an abort message to every node. The same action will be undertaken if any of the nodes returns a failure message. - -Upon receiving success messages from each node in the voting phase, the prepare to commit phase begins. During this phase the coordinator sends a prepare message to each node. Each node must acknowledge the prepare message and reply. If any reply is missed, or any node returns that it is not prepared, then the coordinator sends an abort message. Any node that does not receive a prepare message before the timeout expires aborts the commit. - -After all nodes have replied to the prepare message, the commit phase begins. In this phase the coordinator sends a commit message to each node. When each node receives this message, it performs the actual commit. If the commit message does not reach a node, because the message was lost or the coordinator failed, the node will perform the commit when its timeout expires. If the coordinator fails, upon recovery it will send a commit message to each node. - -Atomic commits are a common feature of version control software, and crucial for maintaining a consistent state in the repository. Most version control software will not apply any part of a commit that fails. Notable exceptions are CVS, VSS and IBM Rational ClearCase (when in UCM mode). - -For instance, if version control software encounters a merge conflict that cannot be automatically resolved, then no part of the changeset is merged. Instead, the developer is given an opportunity to either revert their changes or manually resolve the conflict. - -This prevents the entire project from entering a broken state due to a partially applied change set, where one file from a commit is successfully committed, but another file with dependent changes fails. - -Atomic commits may also refer to the ability to simultaneously make changes across multiple projects using version control software in a single operation, using a version control software development strategy known as a monorepo. 
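Returning to the database protocols above: the two-phase commit message flow is short enough to simulate in a few lines. The sketch below is a single-process toy model with hypothetical class and method names (real implementations add durable logs, timeouts, and recovery, as described earlier):

```
import random

class Node:
    """A participant that stages a change, then commits or rolls it back."""
    def __init__(self, name):
        self.name, self.staged, self.log = name, None, []

    def vote(self, change):
        self.staged = change
        return random.random() > 0.2   # simulate an occasional failed write

    def commit(self):
        self.log.append(self.staged)   # the change becomes durable

    def rollback(self):
        self.staged = None             # discard the staged change

def two_phase_commit(nodes, change):
    votes = [node.vote(change) for node in nodes]   # phase 1: voting
    if all(votes):                                  # phase 2: commit everywhere ...
        for node in nodes:
            node.commit()
        return "committed"
    for node in nodes:                              # ... or roll back everywhere
        node.rollback()
    return "rolled back"

nodes = [Node(f"n{i}") for i in range(3)]
print(two_phase_commit(nodes, "transfer 100 dollars from X to Y"))
```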
- -When using a revision control system, a common convention is to use small commits. These are sometimes referred to as atomic commits as they (ideally) only affect a single aspect of the system. These atomic commits allow for greater understandability, less effort to roll back changes, and easier bug identification. - -The greater understandability comes from the small size and focused nature of the commit. It is much easier to understand what has changed and the reasoning behind the changes if one is only looking for one kind of change. This becomes especially important when making format changes to the source code. If format and functional changes are combined, it becomes very difficult to identify useful changes. For example, if the spacing in a file is changed from tabs to three spaces, every tab in the file will show as having been changed. This becomes critical if some functional changes are also made, as a reviewer may simply not see the functional changes. - -If only atomic commits are made, then commits that introduce errors become much simpler to identify. One need not look through every commit to see if it was the cause of the error; only the commits dealing with that functionality need to be examined. If the error is to be rolled back, atomic commits again make the job much simpler. Instead of having to revert to the offending revision and remove the changes manually before integrating any later changes, the developer can simply revert any changes in the identified commit. This also reduces the risk of a developer accidentally removing unrelated changes that happened to be in the same commit. - -Atomic commits also allow bug fixes to be easily reviewed if only a single bug fix is committed at a time. Instead of having to check multiple potentially unrelated files, the reviewer must only check files and changes that directly impact the bug being fixed. This also means that bug fixes can be easily packaged for testing as only the changes that fix the bug are in the commit. diff --git a/wiki/wikipedia/4011.txt deleted file mode 100644 index b7880f9eff93e82488dbe8470a2d99762f80ef0c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4011.txt +++ /dev/null @@ -1,87 +0,0 @@ -In mathematics and analytic number theory, Vaughan's identity is an identity found by R. C. Vaughan that can be used to simplify Vinogradov's work on trigonometric sums. It can be used to estimate summatory functions of the form -$$ -\sum_{n \leq N} f(n)\Lambda(n) -$$ - -where f is some arithmetic function of the natural integers n, whose values in applications are often roots of unity, and Λ is the von Mangoldt function. - -The motivation for Vaughan's construction of his identity is briefly discussed at the beginning of Chapter 24 in Davenport. For now, we will skip over most of the technical details motivating the identity and its usage in applications, and instead focus on the setup of its construction by parts. Following from the reference, we construct four distinct sums based on the expansion of the logarithmic derivative of the Riemann zeta function in terms of functions which are partial Dirichlet series truncated at the upper bounds $U$ and $V$, respectively. More precisely, we define $F(s) = \sum_{m \leq U} \Lambda(m) m^{-s}$ and $G(s) = \sum_{d \leq V} \mu(d) d^{-s}$, which leads us to the exact identity that -$$ --\frac{\zeta^{\prime}(s)}{\zeta(s)} = F(s) - \zeta(s) F(s) G(s) - \zeta^{\prime}(s) G(s) + \left(-\frac{\zeta^{\prime}(s)}{\zeta(s)} - F(s)\right) (1-\zeta(s) G(s)). 
-$$ - -This last expansion implies that we can write -$$ -\Lambda(n) = a_1(n) + a_2(n) + a_3(n) + a_4(n), -$$ - -where the component functions are defined to be - -\begin{align} - -a_1(n) & := \Biggl\{\begin{matrix} \Lambda(n), & \text{ if } n \leq U; \\ 0, & \text{ if } n > U\end{matrix} \\ - -a_2(n) & := - \sum_{\stackrel{mdr = n}{\stackrel{m \leq U}{d \leq V}}} \Lambda(m) \mu(d) \\ - -a_3(n) & := \sum_{\stackrel{hd=n}{d \leq V}} \mu(d) \log(h) \\ - -a_4(n) & := -\sum_{\stackrel{mk=n}{\stackrel{m > U}{k > 1}}} \Lambda(m) \left(\sum_{\stackrel{d|k}{d \leq V}} \mu(d)\right). - -\end{align} - - - -We then define the corresponding summatory functions for $1 \leq i \leq 4$ to be -$$ -S_i(N) := \sum_{n \leq N} f(n) a_i(n), -$$ - -so that we can write -$$ -\sum_{n \leq N} f(n) \Lambda(n) = S_1(N) + S_2(N) + S_3(N) + S_4(N). -$$ - -Finally, at the conclusion of a multi-page argument of technical and at times delicate estimations of these sums, we obtain the following form of Vaughan's identity when we assume that $|f(n)| \leq 1,\ \forall n$, $U,V \geq 2$, and $UV \leq N$: - -\sum_{n \leq N} f(n) \Lambda(n) \ll U + (\log N) \times \sum_{t\leq UV}\left(\max_{w} \left|\sum_{w \leq r \leq \frac{N}{t}} f(rt)\right|\right) + - -\sqrt{N} (\log N)^3 \times \max_{U \leq M \leq N/V} \max_{V \leq j \leq N/M}\left(\sum_{V < k \leq N/M} \left| - -\sum_{\stackrel{M < m \leq 2M}{\stackrel{m \leq N/k}{m \leq N/j}}} f(mj) \bar{f(mk)}\right|\right)^{1/2} \mathbf{(V1)}. - -It is remarked that in some instances sharper estimates can be obtained from Vaughan's identity by treating the component sum $S_2$ more carefully by expanding it in the form of -$$ -S_2 = \sum_{t \leq UV} \longmapsto \sum_{t \leq U} + \sum_{U < t \leq UV} =: S_2^{\prime} + S_2^{\prime\prime}. -$$ - -The optimality of the upper bound obtained by applying Vaughan's identity appears to be application-dependent with respect to the best functions $U = f_U(N)$ and $V = f_V(N)$ we can choose to input into equation (V1). See the applications cited in the next section for specific examples that arise in the different contexts respectively considered by multiple authors. - -* Vaughan's identity has been used to simplify the proof of the Bombieri–Vinogradov theorem and to study Kummer sums (see the references and external links below). - -* In Chapter 25 of Davenport, one application of Vaughan's identity is to estimate an important prime-related exponential sum of Vinogradov defined by -$$ -S(\alpha) := \sum_{n \leq N} \Lambda(n) e\left(n\alpha\right). -$$ - -In particular, we obtain an asymptotic upper bound for these sums (typically evaluated at irrational $\alpha \in \mathbb{R} \setminus \mathbb{Q}$) whose rational approximations satisfy -$$ -\left| \alpha - \frac{a}{q} \right| \leq \frac{1}{q^2}, (a, q) = 1, -$$ - -of the form -$$ -S(\alpha) \ll \left(\frac{N}{\sqrt{q}} + N^{4/5} + \sqrt{Nq}\right) (\log N)^4. -$$ - -The estimate follows from Vaughan's identity by proving, via a somewhat intricate argument, that -$$ -S(\alpha) \ll \left(UV + q + \frac{N}{\sqrt{U}} + \frac{N}{\sqrt{V}} + \frac{N}{\sqrt{q}} + \sqrt{Nq}\right) (\log(qN))^4, -$$ - -and then deducing the first formula above in the non-trivial cases when $q \leq N$ and with $U = V = N^{2/5}$. - -* Another application of Vaughan's identity is found in Chapter 26 of Davenport where the method is employed to derive estimates for sums (exponential sums) of three primes. - -* Further worked examples of Vaughan's identity in practice are given in the references. 
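Returning to the construction: the decomposition $\Lambda(n) = a_1(n) + a_2(n) + a_3(n) + a_4(n)$ can be verified numerically, which is a useful sanity check when implementing the identity. A small self-contained sketch (the parameter choices U = 5, V = 4, N = 200 are arbitrary):

```
from math import log, isclose

def factor(n):
    """Prime factorization of n by trial division."""
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def Lambda(n):  # von Mangoldt function: log p if n is a prime power p^k, else 0
    f = factor(n)
    return log(min(f)) if len(f) == 1 else 0.0

def mu(n):      # Moebius function
    f = factor(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

U, V, N = 5, 4, 200
for n in range(2, N + 1):
    a1 = Lambda(n) if n <= U else 0.0
    a2 = -sum(Lambda(m) * mu(d)
              for m in range(1, U + 1) if n % m == 0
              for d in range(1, V + 1) if (n // m) % d == 0)
    a3 = sum(mu(d) * log(n // d) for d in range(1, V + 1) if n % d == 0)
    a4 = -sum(Lambda(m) * sum(mu(d) for d in range(1, V + 1) if (n // m) % d == 0)
              for m in range(U + 1, n) if n % m == 0)
    assert isclose(Lambda(n), a1 + a2 + a3 + a4, abs_tol=1e-9)
print("decomposition verified for all n <=", N)
```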
Vaughan's identity was generalized by Heath-Brown. diff --git a/wiki/wikipedia/4012.txt deleted file mode 100644 index 856197c924223832f135e279f5e653df4d216f50..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4012.txt +++ /dev/null @@ -1,112 +0,0 @@ -In real analysis and complex analysis, branches of mathematics, the identity theorem for analytic functions states: given functions f and g analytic on a domain D (open and connected subset of $\mathbb{R}$ or $\mathbb{C}$), if f = g on some $S \subseteq D$, where $S$ has an accumulation point in D, then f = g on D. - -Thus an analytic function is completely determined by its values on a single open neighborhood in D, or even a countable subset of D (provided this contains a sequence converging to a point of D). This is not true in general for real-differentiable functions, even infinitely real-differentiable functions. In comparison, analytic functions are a much more rigid notion. Informally, one sometimes summarizes the theorem by saying analytic functions are "hard" (as opposed to, say, continuous functions, which are "soft"). - -The underpinning fact from which the theorem is established is the expandability of a holomorphic function into its Taylor series. - -The connectedness assumption on the domain D is necessary. For example, if D consists of two disjoint open sets, $f$ can be $0$ on one open set, and $1$ on another, while $g$ is $0$ on one, and $2$ on another. - -If two holomorphic functions $ f $ and $ g $ on a domain D agree on a set S which has an accumulation point $c$ in $D$, then $f = g $ on a disk in $D$ centered at $c$. - -To prove this, it is enough to show that $f^{(n)}(c)= g^{(n)}(c)$ for all $n\geq 0$. - -If this is not the case, let $ m $ be the smallest nonnegative integer with $f^{(m)}(c)\ne g^{(m)}(c)$. By holomorphy, we have the following Taylor series representation in some open neighborhood U of $ c $: - - - -\begin{align} - -(f - g)(z) &{}=(z - c)^m \cdot \left[\frac{(f - g)^{(m)}(c)}{m!} + \frac{(z - c) \cdot (f - g)^{(m+1)}(c)}{(m+1)!} + \cdots \right] \\[6pt] - -&{}=(z - c)^m \cdot h(z). - -\end{align} - - - -By continuity, $ h $ is non-zero in some small open disk $ B$ around $c$. But then $f-g\neq 0$ on the punctured set $B-\{c\}$. This contradicts the assumption that $c$ is an accumulation point of $\{f = g\}$. - -This lemma shows that for a complex number $ a \in \mathbb{C}$, the fiber $f^{-1}(a)$ is a discrete (and therefore countable) set, unless $f \equiv a$. - -Define the set on which $f$ and $g$ have the same Taylor expansion: -$$ -S = \{ z \in D \mid f^{(k)}(z) = g^{(k)}(z) \text{ for all } k \geq 0\}=\bigcap_{k=0}^\infty \{ z \in D \mid (f^{(k)}- g^{(k)})(z)=0\}. -$$ - -We'll show $S$ is nonempty, open, and closed. Then by connectedness of $D$, $S$ must be all of $D$, which implies $f=g$ on $S=D$. - -By the lemma, $f = g$ in a disk centered at $c$ in $D$, so they have the same Taylor series at $c$; hence $c\in S$ and $S$ is nonempty. - -As $f$ and $g$ are holomorphic on $D$, $\forall w\in S$, the Taylor series of $f$ and $g$ at $w$ have non-zero radius of convergence. Therefore, on some open disk $B_r(w)$ both functions equal their common Taylor series, so all of their derivatives agree there and $B_r(w)$ also lies in $S$. So $S$ is open. - -By holomorphy of $f$ and $g$, they have holomorphic derivatives, so all $f^{(n)}, g^{(n)}$ are continuous. This means that $ \{z \in D \mid (f^{(k)} - g^{(k)})(z) = 0\}$ is closed for all $k$. $S$ is an intersection of closed sets, so it's closed. 
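As a concrete illustration of the statement (independent of the proof above): sin(2z) and 2 sin(z) cos(z) agree on the real axis, a set with accumulation points inside the domain $\mathbb{C}$, so the identity theorem forces them to agree everywhere. A quick sketch using sympy:

```
import sympy as sp

z = sp.symbols('z')
f = sp.sin(2 * z)
g = 2 * sp.sin(z) * sp.cos(z)

# Agreement on R (a set with accumulation points in C) propagates to all of C:
print(sp.simplify(f - g))                    # 0, identically
print(sp.N((f - g).subs(z, 3 + 4 * sp.I)))   # ~0 at a non-real point
```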
- -Since the Identity Theorem is concerned with the equality of two holomorphic functions, we can consider the difference (which remains holomorphic) and simply characterise when a holomorphic function is identically $0$. The following result can be found in the literature. - -Let $G\subseteq\mathbb{C}$ denote a non-empty, connected open subset of the complex plane. - -For $h:G\to\mathbb{C}$ the following are equivalent. - -# $h\equiv 0$ on $G$; - -# the set $G_{0}=\{z\in G\mid h(z)=0\}$ contains an accumulation point, $z_{0}$; - -# the set $G_{\ast}=\bigcap_{n\in\mathbb{N}_0} G_{n}$ is non-empty, where $G_{n} := \{z\in G\mid h^{(n)}(z)=0\}$. - -The directions (1 $\Rightarrow$ 2) and (1 $\Rightarrow$ 3) hold trivially. - -For (3 $\Rightarrow$ 1), by connectedness of $G$ it suffices to prove that the non-empty subset, $G_{\ast}\subseteq G$, is clopen. - -Since holomorphic functions are infinitely differentiable, i.e. $h\in C^{\infty}(G)$, it is clear that $G_{\ast}$ is closed. To show openness, consider some $u \in G_{\ast}$. - -Consider an open ball $U\subseteq G$ containing $u$, in which $h$ has a convergent Taylor-series expansion centered on $u$. - -By virtue of $u\in G_{\ast}$, all coefficients of this series are $0$, whence $h\equiv 0$ on $U$. - -It follows that all $n^{\text{th}}$-derivatives of $h$ are $0$ on $U$, whence $U\subseteq G_{\ast}$. - -So each $u\in G_{\ast}$ lies in the interior of $G_{\ast}$. - -Towards (2 $\Rightarrow$ 3), fix an accumulation point $z_{0}\in G_{0}$. - -We now prove directly by induction that $z_{0}\in G_{n}$ for each $n \in \mathbb{N}_0$. - -To this end let $r\in(0,\infty)$ be strictly smaller than the convergence radius of the power series expansion of $h$ around $z_{0}$, given by $\sum_{k\in\mathbb{N}_{0}}\frac{h^{(k)}(z_{0})}{k!}(z-z_{0})^{k}$. Fix now some $n\geq 0$ and assume that $z_{0}\in G_{k}$ for all $k < n$. Then for $z \in \bar{B}_{r}(z_{0}) \setminus \{z_{0}\}$ manipulation of the power series expansion yields - -$$ -h^{(n)}(z_{0}) = n!\frac{h(z)}{(z-z_{0})^{n}} - (z-z_{0})\underbrace{n!\sum_{k=n+1}^{\infty}\frac{h^{(k)}(z_{0})}{k!}(z-z_{0})^{k-(n+1)}}_{=:R(z)}. \qquad (\ast) -$$ - -Note that, since $r$ is smaller than the radius of convergence of the power series, one can readily derive that the power series $R(\cdot)$ is continuous and thus bounded on $\bar{B}_{r}(z_{0})$. 
    - - - -Now, since $z_{0}$ is an accumulation point in $G_{0}$, there is a sequence of points $(z^{(i)})_{i}\subseteq G_{0}\cap B_{r}(z_{0})\setminus\{z_{0}\}$ convergent to $z_{0}$. - -Since $h\equiv 0$ on $G_{0}$ and since each $z^{(i)}\in G_{0}\cap B_{r}(z_{0})\setminus\{z_{0}\}$, the expression in () yields - -{{NumBlk|| - -h^{(n)}(z_{0}) - -=n!\frac{h(z^{(i)})}{(z^{(i)}-z_{0})^{n}} - --(z^{(i)}-z_{0})R(z^{(i)}) - -= 0 - \underbrace{(z^{(i)}-z_{0})}_{\longrightarrow_{i}0}R(z^{(i)}). - -|}} - -By the boundedness of $R(\cdot)$ on $\bar{B}_{r}(z_{0})$, - -it follows that $h^{(n)}(z_{0})=0$, whence $z_{0}\in G_{n}$. - -Via induction the claim holds. - -Q.E.D. diff --git a/wiki/wikipedia/4013.txt b/wiki/wikipedia/4013.txt deleted file mode 100644 index b27c58a3cecbbe9f8e4b0af5cd96d91356cf19e3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4013.txt +++ /dev/null @@ -1,67 +0,0 @@ -In mathematical order theory, an ideal is a special subset of a partially ordered set (poset). Although this term historically was derived from the notion of a ring ideal of abstract algebra, it has subsequently been generalized to a different notion. Ideals are of great importance for many constructions in order and lattice theory. - -A subset I of a partially ordered set (P, ≤) is an ideal, if the following conditions hold: - -# I is non-empty, - -# for every x in I and y in P, y ≤ x implies that y is in I (I is a lower set), - -# for every x, y in I, there is some element z in I, such that x ≤ z and y ≤ z (I is a directed set). - -While this is the most general way to define an ideal for arbitrary posets, it was originally defined for lattices only. In this case, the following equivalent definition can be given: - -a subset I of a lattice (P, ≤) is an ideal if and only if it is a lower set that is closed under finite joins (suprema), i.e., it is nonempty and for all x, y in I, the element $x\vee y$ of P is also in I. - -The dual notion of an ideal, i.e., the concept obtained by reversing all ≤ and exchanging $\vee$ with $\wedge$, is a filter. - -Some authors use the term ideal to mean a lower set, i.e., they include only condition 2 above, while others use the term order ideal for this weaker notion. With the weaker definition, an ideal of a lattice seen as a poset is not closed under joins, so it is not necessarily an ideal of the lattice. Wikipedia uses only "ideal/filter (of order theory)" and "lower/upper set" to avoid confusion. - -Frink ideals, pseudoideals and Doyle pseudoideals are different generalizations of the notion of a lattice ideal. - -An ideal or filter is said to be proper if it is not equal to the whole set P. - -The smallest ideal that contains a given element p is a principal ideal and p is said to be a principal element of the ideal in this situation. The principal ideal $\downarrow p$ for a principal p is thus given by ↓ p = {x ∈ P . - -An important special case of an ideal is constituted by those ideals whose set-theoretic complements are filters, i.e. ideals in the inverse order. Such ideals are called prime ideals. Also note that, since we require ideals and filters to be non-empty, every prime ideal is necessarily proper. For lattices, prime ideals can be characterized as follows: - -A subset I of a lattice (P, ≤) is a prime ideal, if and only if - -# I is a proper ideal of P, and - -# for all elements x and y of P, $x \wedge y$ in I implies that x ∈ I or y ∈ I. 
- -It is easily checked that this characterization is indeed equivalent to stating that P \ I is a filter (which is then also prime, in the dual sense). - -For a complete lattice the further notion of a completely prime ideal is meaningful. - -It is defined to be a proper ideal I with the additional property that, whenever the meet (infimum) of some arbitrary set A is in I, some element of A is also in I. - -So this is just a specific prime ideal that extends the above conditions to infinite meets. - -The existence of prime ideals is in general not obvious, and often a satisfactory supply of prime ideals cannot be derived within ZF (Zermelo–Fraenkel set theory without the axiom of choice). - -This issue is discussed in various prime ideal theorems, which are necessary for many applications that require prime ideals. - -An ideal I is maximal if it is proper and there is no proper ideal J that is a strict superset of I. Likewise, a filter F is maximal if it is proper and there is no proper filter that is a strict superset. - -When a poset is a distributive lattice, maximal ideals and filters are necessarily prime, while the converse of this statement is false in general. - -Maximal filters are sometimes called ultrafilters, but this terminology is often reserved for Boolean algebras, where a maximal filter (ideal) is a filter (ideal) that contains exactly one of the elements {a, ¬a}, for each element a of the Boolean algebra. In Boolean algebras, the terms prime ideal and maximal ideal coincide, as do the terms prime filter and maximal filter. - -There is another interesting notion of maximality of ideals: Consider an ideal I and a filter F such that I is disjoint from F. We are interested in an ideal M that is maximal among all ideals that contain I and are disjoint from F. In the case of distributive lattices such an M is always a prime ideal. A proof of this statement follows. - -Proof: Assume the ideal M is maximal with respect to disjointness from the filter F. Suppose for a contradiction that M is not prime, i.e. there exists a pair of elements a and b such that a ∧ b in M but neither a nor b is in M. Consider the case that for all m in M, m ∨ a is not in F. One can construct an ideal N by taking the downward closure of the set of all binary joins of this form, i.e. N = { x | x ≤ m ∨ a for some m ∈ M }. It is readily checked that N is indeed an ideal disjoint from F which is strictly greater than M. But this contradicts the maximality of M and thus the assumption that M is not prime. - -For the other case, assume that there is some m in M with m ∨ a in F. Now if any element n in M is such that n ∨ b is in F, one finds that (m ∨ n) ∨ b and (m ∨ n) ∨ a are both in F. But then their meet is in F and, by distributivity, (m ∨ n) ∨ (a ∧ b) is in F too. On the other hand, this finite join of elements of M is clearly in M, so that the assumed existence of n contradicts the disjointness of the two sets. Hence all elements n of M have a join with b that is not in F. Consequently one can apply the above construction with b in place of a to obtain an ideal that is strictly greater than M while being disjoint from F. This finishes the proof. - -However, in general it is not clear whether there exists any ideal M that is maximal in this sense. Yet, if we assume the axiom of choice in our set theory, then the existence of M for every disjoint filter–ideal pair can be shown. In the special case that the considered order is a Boolean algebra, this theorem is called the Boolean prime ideal theorem. 
It is strictly weaker than the axiom of choice and it turns out that nothing more is needed for many order-theoretic applications of ideals. - -The construction of ideals and filters is an important tool in many applications of order theory. - -* In Stone's representation theorem for Boolean algebras, the maximal ideals (or, equivalently via the negation map, ultrafilters) are used to obtain the set of points of a topological space, whose clopen sets are isomorphic to the original Boolean algebra. - -* Order theory includes many completion procedures that turn posets into posets with additional completeness properties. For example, the ideal completion of a given partial order P is the set of all ideals of P ordered by subset inclusion. This construction yields the free dcpo generated by P. An ideal is principal if and only if it is compact in the ideal completion, so the original poset can be recovered as the sub-poset consisting of compact elements. Furthermore, every algebraic dcpo can be reconstructed as the ideal completion of its set of compact elements. - -Ideals were introduced first by Marshall H. Stone, who derived their name from the ring ideals of abstract algebra. He adopted this terminology because, using the isomorphism of the categories of Boolean algebras and of Boolean rings, the two notions do indeed coincide. - -Ideals and filters are among the most basic concepts of order theory. See the introductory books given for order theory and lattice theory, and the literature on the Boolean prime ideal theorem. diff --git a/wiki/wikipedia/4014.txt deleted file mode 100644 index aab5ccbe3ff336b35d7ebb4ad1f7107be01cd976..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4014.txt +++ /dev/null @@ -1,35 +0,0 @@ -The snake-in-the-box problem in graph theory and computer science deals with finding a certain kind of path along the edges of a hypercube. This path starts at one corner and travels along the edges to as many corners as it can reach. After it gets to a new corner, the previous corner and all of its neighbors must be marked as unusable. The path should never travel to a corner which has been marked unusable. - -In other words, a snake is a connected open path in the hypercube in which each node on the path, with the exception of the head (start) and the tail (finish), has exactly two neighbors that are also in the snake. The head and the tail each have only one neighbor in the snake. The rule for generating a snake is that a node in the hypercube may be visited if it is connected to the current node and it is not a neighbor of any previously visited node in the snake, other than the current node. - -In graph theory terminology, this is called finding the longest possible induced path in a hypercube; it can be viewed as a special case of the induced subgraph isomorphism problem. There is a similar problem of finding long induced cycles in hypercubes, called the coil-in-the-box problem. - -The snake-in-the-box problem was first described by Kautz, motivated by the theory of error-correcting codes. The vertices of a solution to the snake or coil in the box problems can be used as a Gray code that can detect single-bit errors. Such codes have applications in electrical engineering, coding theory, and computer network topologies. In these applications, it is important to devise as long a code as is possible for a given dimension of hypercube. The longer the code, the more effective its capabilities. 
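For small dimensions, the known values can be reproduced by exhaustive search: grow an induced path vertex by vertex, allowing a new vertex only if its sole neighbor already on the snake is the current head. A minimal sketch in Python (pure brute force, feasible up to roughly dimension 5):

```
def longest_snake(n):
    """Exhaustive DFS for the longest snake (induced path) in the n-cube."""
    nbrs = [[v ^ (1 << i) for i in range(n)] for v in range(1 << n)]
    best = []

    def extend(path, on_snake):
        nonlocal best
        if len(path) > len(best):
            best = path[:]
        for w in nbrs[path[-1]]:
            # w may be appended only if its sole neighbor on the snake is the head
            if w not in on_snake and sum(u in on_snake for u in nbrs[w]) == 1:
                on_snake.add(w)
                extend(path + [w], on_snake)
                on_snake.remove(w)

    extend([0, 1], {0, 1})   # fixing the first edge is harmless by symmetry
    return len(best) - 1     # length counted in edges

for n in range(1, 6):
    print(n, longest_snake(n))   # prints 1, 2, 4, 7, 13 for n = 1..5
```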
- -Finding the longest snake or coil becomes notoriously difficult as the dimension number increases and the search space suffers a serious combinatorial explosion. Some techniques for determining the upper and lower bounds for the snake-in-the-box problem include proofs using discrete mathematics and graph theory, exhaustive search of the search space, and heuristic search utilizing evolutionary techniques. - -The maximum length for the snake-in-the-box problem is known for dimensions one through eight; it is - -1, 2, 4, 7, 13, 26, 50, 98. - -Beyond that length, the exact length of the longest snake is not known; the best lengths found so far for dimensions nine through thirteen are - -190, 370, 712, 1373, 2687. - -For cycles (the coil-in-the-box problem), a cycle cannot exist in a hypercube of dimension less than two. The maximum lengths of the longest possible cycles are - -0, 4, 6, 8, 14, 26, 48, 96. - -Beyond that length, the exact length of the longest cycle is not known; the best lengths found so far for dimensions nine through thirteen are - -188, 366, 692, 1344, 2594. - -Doubled coils are a special case: cycles whose second half repeats the structure of their first half, also known as symmetric coils. For dimensions two through seven the lengths of the longest possible doubled coils are - -4, 6, 8, 14, 26, 46. - -Beyond that, the best lengths found so far for dimensions eight through thirteen are - -94, 186, 362, 662, 1222, 2354. - -For both the snake and the coil in the box problems, it is known that the maximum length is proportional to $2^n$ for an n-dimensional box, asymptotically as n grows large, and bounded above by $2^{n-1}$. However, the constant of proportionality is not known; it is observed to be in the range 0.3–0.4 for the current best known values. diff --git a/wiki/wikipedia/4015.txt deleted file mode 100644 index 593d4b074980a2e2b72b3302471cc6fe00e8f683..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4015.txt +++ /dev/null @@ -1 +0,0 @@ -SatZ is a well-known SAT instance solver. It was developed by Prof. Chu Min Li, a computer science researcher. The Z stands for the last version of SAT solvers. diff --git a/wiki/wikipedia/4016.txt deleted file mode 100644 index e726342e8cb8cbe4fe73075c12c04502fc3219d8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4016.txt +++ /dev/null @@ -1,23 +0,0 @@ -Tetris Splash is a puzzle video game in the Tetris series, published by Microsoft Game Studios for the Xbox 360 via Xbox Live Arcade. It is the first game produced by Tetris Online, Inc. - -The game gets its name from the aquarium background and water-themed music. Tetris Splash was released on the Xbox Live Arcade on October 3, 2007. - -The game features typical Tetris gameplay, with support for up to six players in online multiplayer, and up to four players in same-machine multiplayer. Multiplayer includes team play as well (i.e., 3 teams of 2, designated the "Red", "Green", and "Blue" teams). Gameplay follows the trend of rewarding T-Spins, but also adds a new element of combos, where multiple consecutive pieces each clear lines with no "non-line clearing" pieces in between. - -There are several game modes available. - -*Marathon: Similar to Classic Tetris gameplay. Goes up to level 15. - -*40 Lines: Clear 40 lines as fast as possible. - -*Same machine: Up to four players. Last survivor wins. - -*Online ranked: Six-player free-for-all, playing for TrueSkill ranking. 
- -*Online unranked: Allows customization, including Team Mode, in which teams play against each other, instead of free-for-all. - -The game also includes an aquarium screensaver, which the user can personalize. Initially, the game comes with the option to make the tank a freshwater or saltwater tank. Only two types of fish come with the main game: goldfish (for the freshwater tank) and clownfish (for the saltwater tank). - -On the first day of release, nine downloadable fish bundles were offered, which included Marine Angelfish, Guppy, Triggerfish, Tetra, Tang, Discus, Butterflyfish, Arowana, and Angelfish to add to the tank. Four decor packs were also offered, which included a Scuba theme, a Pirate theme, a Graveyard theme, and an Atlantis theme to add objects to the tank. - -The game received "mixed" reviews according to the review aggregation website Metacritic. diff --git a/wiki/wikipedia/4017.txt deleted file mode 100644 index 95525967a22a82ab38200cb0bbb4afd3b584a4a8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4017.txt +++ /dev/null @@ -1,11 +0,0 @@ -The theorem of the gnomon states that certain parallelograms occurring in a gnomon have areas of equal size. - -In a parallelogram $ABCD$ with a point $P$ on the diagonal $AC$, the parallel to $AD$ through $P$ intersects the side $CD$ in $G$ and the side $AB$ in $H$. Similarly the parallel to the side $AB$ through $P$ intersects the side $AD$ in $I$ and the side $BC$ in $F$. Then the theorem of the gnomon states that the parallelograms $HBFP$ and $IPGD$ have equal areas. - -Gnomon is the name for the L-shaped figure consisting of the two overlapping parallelograms $ABFI$ and $AHGD$. The parallelograms of equal area $HBFP$ and $IPGD$ are called complements (of the parallelograms on diagonal $PFCG$ and $AHPI$). - -The proof of the theorem is straightforward if one considers the areas of the main parallelogram and the two inner parallelograms around its diagonal: - -* first, the difference between the main parallelogram and the two inner parallelograms is exactly equal to the combined area of the two complements; - -* second, all three of them are bisected by the diagonal. This yields: $|HBFP| = |ABC| - |AHP| - |PFC| = |ACD| - |AIP| - |PGC| = |IPGD|$. diff --git a/wiki/wikipedia/4018.txt deleted file mode 100644 index 50b4957e05ac61dca0f6cfc2b5189cd51ce28804..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4018.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, the Oka coherence theorem, proved by Kiyoshi Oka, states that the sheaf $\mathcal{O} := \mathcal{O}_{\mathbb{C}^n}$ of germs of holomorphic functions on $\mathbb{C}^n$ (or, more generally, on a complex manifold) is coherent. diff --git a/wiki/wikipedia/4019.txt deleted file mode 100644 index 956a4a723db338f38b6e7576f8ba4f4c9e2c4426..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4019.txt +++ /dev/null @@ -1 +0,0 @@ -#redirectM-theory diff --git a/wiki/wikipedia/402.txt deleted file mode 100644 index 6d7da7e5af40a938e723ff3ec3f173d7f8b5e0e7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/402.txt +++ /dev/null @@ -1,15 +0,0 @@ -In algebraic geometry, a lemniscate is any of several figure-eight or ∞-shaped curves. The word comes from the Latin lēmniscātus, meaning "decorated with ribbons", from the Greek λημνίσκος (lēmnískos), "ribbon", which alternatively may refer to the wool from which the ribbons were made. 
- -The lemniscate may be defined as an algebraic curve, the zero set of the quartic polynomial $(x^2 + y^2)^2 - cx^2 - dy^2$ when the parameter d is negative (or zero for the special case where the lemniscate becomes a pair of externally tangent circles). For positive values of d one instead obtains the oval of Booth. - -In 1680, Cassini studied a family of curves, now called the Cassini oval, defined as follows: the locus of all points, the product of whose distances from two fixed points, the curves' foci, is a constant. Under very particular circumstances (when the half-distance between the points is equal to the square root of the constant) this gives rise to a lemniscate. - -In 1694, Johann Bernoulli studied the lemniscate case of the Cassini oval, now known as the lemniscate of Bernoulli, in connection with a problem of "isochrones" that had been posed earlier by Leibniz. Like the hippopede, it is an algebraic curve, the zero set of the polynomial $(x^2 + y^2)^2 - 2a^2 (x^2 - y^2)$. Bernoulli's brother Jacob Bernoulli also studied the same curve in the same year, and gave it its name, the lemniscate. It may also be defined geometrically as the locus of points whose product of distances from two foci equals the square of half the interfocal distance. It is a special case of the hippopede (lemniscate of Booth), with $d=-c$, and may be formed as a cross-section of a torus whose inner hole and circular cross-sections have the same diameter as each other. The lemniscatic elliptic functions are analogues of trigonometric functions for the lemniscate of Bernoulli, and the lemniscate constants arise in evaluating the arc length of this lemniscate. - -Another lemniscate, the lemniscate of Gerono or lemniscate of Huygens, is the zero set of the quartic polynomial $y^2-x^2(a^2-x^2)$. Viviani's curve, a three-dimensional curve formed by intersecting a sphere with a cylinder, also has a figure-eight shape, and has the lemniscate of Gerono as its planar projection. - -Other figure-eight shaped algebraic curves include - -* The Devil's curve, a curve defined by the quartic equation $y^2 (y^2 - a^2) = x^2 (x^2 - b^2)$ in which one connected component has a figure-eight shape, - -* Watt's curve, a figure-eight shaped curve formed by a mechanical linkage. Watt's curve is the zero set of the degree-six polynomial equation $(x^2+y^2)(x^2+y^2-d^2)^2+4a^2y^2(x^2+y^2-b^2)=0$ and has the lemniscate of Bernoulli as a special case. diff --git a/wiki/wikipedia/4020.txt deleted file mode 100644 index a215d30795814f82c3918e955009f705bb8d12f3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4020.txt +++ /dev/null @@ -1,35 +0,0 @@ -Spreading activation is a method for searching associative networks, biological and artificial neural networks, or semantic networks. The search process is initiated by labeling a set of source nodes (e.g. concepts in a semantic network) with weights or "activation" and then iteratively propagating or "spreading" that activation out to other nodes linked to the source nodes. Most often these "weights" are real values that decay as activation propagates through the network. When the weights are discrete this process is often referred to as marker passing. Activation may originate from alternate paths, identified by distinct markers, and terminate when two alternate paths reach the same node. However, brain studies show that several different brain areas play an important role in semantic processing. 
- -Spreading activation in semantic networks was invented as a model in cognitive psychology to explain the fan-out effect. - -Spreading activation can also be applied in information retrieval, by means of a network of nodes representing documents and terms contained in those documents. - -As it relates to cognitive psychology, spreading activation is the theory of how the brain iterates through a network of associated ideas to retrieve specific information. The spreading activation theory presents the array of concepts within our memory as cognitive units, each consisting of a node and its associated elements or characteristics, all connected together by edges. - -For example, if the original concept is "red" and the concept "vehicles" is primed, subjects are much more likely to say "fire engine" instead of something unrelated to vehicles, such as "cherries". If instead "fruits" was primed, they would likely name "cherries" and continue on from there. The activation of pathways in the network has everything to do with how closely linked two concepts are by meaning, as well as how a subject is primed. - -A directed graph is populated by Nodes[1...N], each having an associated activation value A[i], which is a real number in the range [0.0 ... 1.0]. A Link[i, j] connects source node[i] with target node[j]. Each edge has an associated weight W[i, j], usually a real number in the range [0.0 ... 1.0]. - -Parameters: - -* Firing threshold F, a real number in the range [0.0 ... 1.0] - -* Decay factor D, a real number in the range [0.0 ... 1.0] - -Steps: - -# Initialize the graph, setting all activation values A[i] to zero. Set one or more origin nodes to an initial activation value greater than the firing threshold F. A typical initial value is 1.0. - -# For each unfired node[i] in the graph having an activation value A[i] greater than the node firing threshold F: - -# For each Link[i, j] connecting the source node[i] with target node[j], adjust A[j] = A[j] + (A[i] * W[i, j] * D), where D is the decay factor. - -# If a target node receives an adjustment to its activation value so that it would exceed 1.0, then set its new activation value to 1.0. Likewise maintain 0.0 as a lower bound on the target node's activation value should it receive an adjustment to below 0.0. - -# Once a node has fired, it may not fire again, although variations of the basic algorithm permit repeated firings and loops through the graph. - -# Nodes receiving a new activation value that exceeds the firing threshold F are marked for firing on the next spreading activation cycle. - -# If activation originates from more than one node, a variation of the algorithm permits marker passing to distinguish the paths by which activation is spread over the graph. - -# The procedure terminates either when there are no more nodes to fire or, in the case of marker passing from multiple origins, when a node is reached from more than one path. Variations of the algorithm that permit repeated node firings and activation loops in the graph terminate once a steady activation state, with respect to some delta, is reached, or when a maximum number of iterations is exceeded. 
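The procedure above translates almost line for line into code. A minimal sketch (hypothetical function name and toy graph; real decay values, thresholds, and termination rules vary by application), reusing the "red"/"vehicles" priming example:

```
def spreading_activation(adjacency, origins, F=0.2, D=0.85, max_cycles=10):
    """adjacency[i] is a list of (j, weight) pairs; origins are seeded with 1.0."""
    A = {i: 0.0 for i in adjacency}        # step 1: all activations zero ...
    for o in origins:
        A[o] = 1.0                         # ... except the origin nodes
    fired, frontier = set(), set(origins)
    for _ in range(max_cycles):
        to_fire = {i for i in frontier if A[i] > F and i not in fired}
        if not to_fire:
            break                          # no more nodes to fire: terminate
        frontier = set()
        for i in to_fire:
            for j, w in adjacency[i]:
                # adjust A[j], clamped into [0.0, 1.0]
                A[j] = min(1.0, max(0.0, A[j] + A[i] * w * D))
                frontier.add(j)
            fired.add(i)                   # a fired node may not fire again
    return A

graph = {
    "red":         [("fire engine", 0.8), ("cherries", 0.6)],
    "vehicles":    [("fire engine", 0.9)],
    "fire engine": [],
    "cherries":    [],
}
print(spreading_activation(graph, origins=["red", "vehicles"]))
```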
diff --git a/wiki/wikipedia/4021.txt deleted file mode 100644 index ca731d56bd5247fbbc4e08bb4364d5e675b504ce..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4021.txt +++ /dev/null @@ -1,15 +0,0 @@ -In computational geometry, a planar straight-line graph (PSLG for short; also straight-line plane graph or plane straight-line graph) is a term used for an embedding of a planar graph in the plane such that its edges are mapped into straight-line segments. Fáry's theorem (1948) states that every planar graph has this kind of embedding. - -In computational geometry, PSLGs have often been called planar subdivisions, with an assumption or assertion that subdivisions are polygonal rather than having curved boundaries. - -PSLGs may serve as representations of various maps, e.g., geographical maps in geographical information systems. - -Special cases of PSLGs are triangulations (polygon triangulation, point-set triangulation). Point-set triangulations are maximal PSLGs in the sense that it is impossible to add straight edges to them while keeping the graph planar. Triangulations have numerous applications in various areas. - -PSLGs may be seen as a special kind of Euclidean graph. However, in discussions involving Euclidean graphs, the primary interest is their metric properties, i.e., distances between vertices, while for PSLGs the primary interest is the topological properties. For some graphs, such as Delaunay triangulations, both metric and topological properties are of importance. - -There exist three well-known data structures for representing PSLGs: the winged-edge data structure, the halfedge data structure, and the quadedge data structure. The winged-edge data structure is the oldest of the three, but manipulating it often requires complicated case distinctions. This is because edge references do not store the edge direction, and the directions of edges around a face need not be consistent. The halfedge data structure stores both orientations of an edge and links them properly, simplifying operations and the storage scheme. The quadedge data structure stores both the planar subdivision and its dual simultaneously. Its records consist explicitly only of edge records, four for each edge, and in a simplified form it is suitable for storing PSLGs. Typical problems defined on PSLGs include: - -*Point location. For a query point, find which face of the PSLG it belongs to. - -*Map overlay. Find the overlay of two PSLGs (maps), which is the subdivision of the plane by the two simultaneously embedded PSLGs. In GIS this problem is known as "thematic map overlay". diff --git a/wiki/wikipedia/4022.txt deleted file mode 100644 index 5b1f5608ec2c6fb792ca99df91ba87d7dd2019c6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4022.txt +++ /dev/null @@ -1,139 +0,0 @@ -In quantum mechanics, information theory, and Fourier analysis, the entropic uncertainty or Hirschman uncertainty is defined as the sum of the temporal and spectral Shannon entropies. It turns out that Heisenberg's uncertainty principle can be expressed as a lower bound on the sum of these entropies. This is stronger than the usual statement of the uncertainty principle in terms of the product of standard deviations. 
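The Gaussian case that saturates the bound is easy to check numerically before any formal setup. Assuming the transform convention with $e^{-2\pi ixy}$ in the kernel (used below), a hand computation shows that if $|f|^2$ is a normal density with variance $\sigma^2$, then $|g|^2$ is normal with variance $1/(16\pi^2\sigma^2)$; the entropy sum then equals $\log(e/2)$ for every $\sigma$. A quick sketch:

```
import numpy as np

def h(v):
    """Differential entropy of a normal density with variance v."""
    return 0.5 * np.log(2 * np.pi * np.e * v)

for s2 in [0.1, 1.0, 7.3]:                       # arbitrary variances for |f|^2
    total = h(s2) + h(1.0 / (16 * np.pi**2 * s2))
    print(total, np.log(np.e / 2))               # equal for every choice of s2
```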
- -In 1957, Hirschman considered a function f and its Fourier transform g such that -$$ -g(y) \approx \int_{-\infty}^\infty \exp (-2\pi ixy) f(x) dx,\qquad f(x) \approx \int_{-\infty}^\infty \exp (2\pi ixy) g(y) dy ~, -$$ - -where the "≈" indicates convergence in L2, and normalized so that (by Plancherel's theorem), -$$ - \int_{-\infty}^\infty |f(x)|^2 dx = \int_{-\infty}^\infty |g(y)|^2 dy = 1~. -$$ - -He showed that for any such functions the sum of the Shannon entropies is non-negative, -$$ - H(|f|^2) + H(|g|^2) \equiv - \int_{-\infty}^\infty |f(x)|^2 \log |f(x)|^2 dx - \int_{-\infty}^\infty |g(y)|^2 \log |g(y)|^2 dy \ge 0. -$$ - -A tighter bound, -$$ - H(|f|^2) + H(|g|^2) \ge \log \frac e 2 ~, -$$ - -was conjectured by Hirschman and in the same year interpreted as a generalized quantum mechanical uncertainty principle by Białynicki-Birula and Mycielski. - -The equality holds in the case of Gaussian distributions. - -The Hirschman–Everett entropy appears in the logarithmic Schrödinger equation. - -Note, however, that the above entropic uncertainty function is distinctly different from the quantum Von Neumann entropy represented in phase space. - -The proof of this tight inequality depends on the so-called (q, p)-norm of the Fourier transformation. (Establishing this norm is the most difficult part of the proof.) - -From this norm, one is able to establish a lower bound on the sum of the (differential) Rényi entropies, $H_\alpha(|f|^2) + H_\beta(|g|^2)$, where $1/\alpha + 1/\beta = 2$, which generalize the Shannon entropies. For simplicity, we consider this inequality only in one dimension; the extension to multiple dimensions is straightforward and can be found in the literature cited. - -The (q, p)-norm of the Fourier transform is defined to be -$$ -\|\mathcal F\|_{q,p} = \sup_{f\in L^p(\mathbb R)} \frac{\|\mathcal Ff\|_q}{\|f\|_p}, -$$ where $1 < p \le 2~,$ and $\frac 1 p + \frac 1 q = 1.$ - -In 1961, Babenko found this norm for even integer values of q. Finally, in 1975, - -using Hermite functions as eigenfunctions of the Fourier transform, Beckner proved that the value of this norm (in one dimension) for all q ≥ 2 is -$$ -\|\mathcal F\|_{q,p} = \sqrt{p^{1/p}/q^{1/q}}. -$$ - -Thus we have the Babenko–Beckner inequality that -$$ -\|\mathcal Ff\|_q \le \left(p^{1/p}/q^{1/q}\right)^{1/2} \|f\|_p. -$$ - -From this inequality, an expression of the uncertainty principle in terms of the Rényi entropy can be derived. - -Letting $g=\mathcal Ff$, 2α=p, and 2β=q, so that 1/α + 1/β = 2 and 1/2<α<1<β, we have - -\left(\int_{\mathbb R} |g(y)|^{2\beta}dy\right)^{1/2\beta} - -\le \frac{(2\alpha)^{1/4\alpha}}{(2\beta)^{1/4\beta}} - -\left(\int_{\mathbb R} |f(x)|^{2\alpha}dx\right)^{1/2\alpha}. - - - -Squaring both sides and taking the logarithm, we get - -\frac 1\beta \log\left(\int_{\mathbb R} |g(y)|^{2\beta}dy\right) - -\le \frac 1 2 \log\frac{(2\alpha)^{1/\alpha}}{(2\beta)^{1/\beta}} - -+ \frac 1\alpha \log \left(\int_{\mathbb R} |f(x)|^{2\alpha}dx\right). - - - -Multiplying both sides by -$$ -\frac{\beta}{1-\beta}=-\frac{\alpha}{1-\alpha} -$$ - -reverses the sense of the inequality, - -\frac {1}{1-\beta} \log\left(\int_{\mathbb R} |g(y)|^{2\beta}dy\right) - -\ge \frac\alpha{2(\alpha-1)}\log\frac{(2\alpha)^{1/\alpha}}{(2\beta)^{1/\beta}} - -- \frac{1}{1-\alpha} \log \left(\int_{\mathbb R} |f(x)|^{2\alpha}dx\right) ~. 
- - - -Rearranging terms finally yields an inequality in terms of the sum of the Rényi entropies, - -\frac{1}{1-\alpha} \log \left(\int_{\mathbb R} |f(x)|^{2\alpha}dx\right) - -+ \frac {1}{1-\beta} \log\left(\int_{\mathbb R} |g(y)|^{2\beta}dy\right) - -\ge \frac\alpha{2(\alpha-1)}\log\frac{(2\alpha)^{1/\alpha}}{(2\beta)^{1/\beta}}; - - -$$ - H_\alpha(|f|^2) + H_\beta(|g|^2) \ge \frac 1 2 \left(\frac{\log\alpha}{\alpha-1}+\frac{\log\beta}{\beta-1}\right) - \log 2 ~. -$$ - -Note that this inequality is symmetric with respect to α and β: One no longer need assume that α<β; only that they are positive and not both one, and that 1/α + 1/β = 2. To see this symmetry, simply exchange the rôles of i and −i in the Fourier transform. - -Taking the limit of this last inequality as α, β → 1 yields the less general Shannon entropy inequality, -$$ -H(|f|^2) + H(|g|^2) \ge \log\frac e 2,\quad\textrm{where}\quad g(y) \approx \int_{\mathbb R} e^{-2\pi ixy}f(x)dx~, -$$ - -valid for any base of logarithm, as long as we choose an appropriate unit of information, bit, nat, etc. - -The constant will be different, though, for a different normalization of the Fourier transform (such as is usually used in physics, with normalizations chosen so that ħ=1), i.e., -$$ -H(|f|^2) + H(|g|^2) \ge \log(\pi e)\quad\textrm{for}\quad g(y) \approx \frac 1{\sqrt{2\pi}}\int_{\mathbb R} e^{-ixy}f(x)dx~. -$$ - -In this case, the dilation of the Fourier transform absolute squared by a factor of 2π simply adds log(2π) to its entropy. - -The Gaussian or normal probability distribution plays an important role in the relationship between variance and entropy: it is a problem of the calculus of variations to show that this distribution maximizes entropy for a given variance, and at the same time minimizes the variance for a given entropy. In fact, for any probability density function $\phi$ on the real line, Shannon's entropy inequality specifies: -$$ -H(\phi) \le \log \sqrt {2\pi eV(\phi)}, -$$ - -where H is the Shannon entropy and V is the variance, an inequality that is saturated only in the case of a normal distribution. - -Moreover, the Fourier transform of a Gaussian probability amplitude function is also Gaussian—and the absolute squares of both of these are Gaussian, too. This can then be used to derive the usual Robertson variance uncertainty inequality from the above entropic inequality, showing that the latter is tighter than the former. That is (for ħ=1), exponentiating the Hirschman inequality and using Shannon's expression above, -$$ -1/2 \le \exp (H(|f|^2)+H(|g|^2)) /(2e\pi) \le \sqrt {V(|f|^2)V(|g|^2)}~. -$$ - -Hirschman explained that entropy—his version of entropy was the negative of Shannon's—is a "measure of the concentration of [a probability distribution] in a set of small measure." Thus a low or large negative Shannon entropy means that a considerable mass of the probability distribution is confined to a set of small measure. - -Note that this set of small measure need not be contiguous; a probability distribution can have several concentrations of mass in intervals of small measure, and the entropy may still be low no matter how widely scattered those intervals are. This is not the case with the variance: variance measures the concentration of mass about the mean of the distribution, and a low variance means that a considerable mass of the probability distribution is concentrated in a contiguous interval of small measure. 
- -To formalize this distinction, we say that two probability density functions $\phi_1$ and $\phi_2$ are equimeasurable if -$$ -\forall \delta > 0,\mu\{x\in\mathbb R|\phi_1(x)\ge\delta\} = \mu\{x\in\mathbb R|\phi_2(x)\ge\delta\}, -$$ - -where μ is the Lebesgue measure. Any two equimeasurable probability density functions have the same Shannon entropy, and in fact the same Rényi entropy, of any order. The same is not true of variance, however. Any probability density function has a radially decreasing equimeasurable "rearrangement" whose variance is less (up to translation) than any other rearrangement of the function; and there exist rearrangements of arbitrarily high variance, (all having the same entropy.) diff --git a/wiki/wikipedia/4023.txt b/wiki/wikipedia/4023.txt deleted file mode 100644 index 31cad763dcf8c2ae248b181ca52edc5a818c146a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4023.txt +++ /dev/null @@ -1,42 +0,0 @@ -In mathematics, the Artin-Rees lemma is a basic result about modules over a Noetherian ring, along with results such as the Hilbert basis theorem. It was proved in the 1950s in independent works by the mathematicians Emil Artin and David Rees; a special case was known to Oscar Zariski prior to their work. - -One consequence of the lemma is the Krull intersection theorem. The result is also used to prove the exactness property of completion. The lemma also plays a key role in the study of ℓ-adic sheaves. - -Let I be an ideal in a Noetherian ring R; let M be a finitely generated R-module and let N a submodule of M. Then there exists an integer k ≥ 1 so that, for n ≥ k, -$$ -I^{n} M \cap N = I^{n - k} ((I^{k} M) \cap N). -$$ - -The lemma immediately follows from the fact that R is Noetherian once necessary notions and notations are set up. - -For any ring R and an ideal I in R, we set $B_I R = \textstyle\bigoplus_{n=0}^\infty I^n$ (B for blow-up.) We say a decreasing sequence of submodules $M = M_0 \supset M_1 \supset M_2 \supset \cdots$ is an I-filtration if $I M_n \subset M_{n+1}$; moreover, it is stable if $I M_n = M_{n+1}$ for sufficiently large n. If M is given an I-filtration, we set $B_I M = \textstyle\bigoplus_{n=0}^\infty M_n$; it is a graded module over $B_I R$. - -Now, let M be a R-module with the I-filtration $M_i$ by finitely generated R-modules. We make an observation -$$ -B_I M -$$ is a finitely generated module over $B_I R$ if and only if the filtration is I-stable. - -Indeed, if the filtration is I-stable, then $B_I M$ is generated by the first $k+1$ terms $M_0, \dots, M_k$ and those terms are finitely generated; thus, $B_I M$ is finitely generated. Conversely, if it is finitely generated, say, by some homogeneous elements in $\textstyle\bigoplus_{j=0}^k M_j$, then, for $n \ge k$, each f in $M_n$ can be written as -$$ -f = \sum a_{j} g_{j}, \quad a_{j} \in I^{n-j} -$$ - -with the generators $g_{j}$ in $M_j, j \le k$. That is, $f \in I^{n-k} M_k$. - -We can now prove the lemma, assuming R is Noetherian. Let $M_n = I^n M$. Then $M_n$ are an I-stable filtration. Thus, by the observation, $B_I M$ is finitely generated over $B_I R$. But $B_I R \simeq R[It]$ is a Noetherian ring since R is. (The ring $R[It]$ is called the Rees algebra.) Thus, $B_I M$ is a Noetherian module and any submodule is finitely generated over $B_I R$; in particular, $B_I N$ is finitely generated when N is given the induced filtration; i.e., $N_n = M_n \cap N$. Then the induced filtration is I-stable again by the observation. 
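For concreteness, here is a minimal worked instance of the lemma (a toy example of our own, not taken from the cited sources): let $R=\mathbb{Z}$, $I=(p)$ for a prime $p$, $M=\mathbb{Z}$, and $N=m\mathbb{Z}$ with $m=p^v u$ and $p \nmid u$. For $n \ge v$,
$$
I^n M \cap N = p^n\mathbb{Z} \cap m\mathbb{Z} = p^n u\mathbb{Z} = p^{n-v}\left(p^v u\mathbb{Z}\right) = I^{n-v}\left(I^{v} M \cap N\right),
$$
so the conclusion of the lemma holds with $k=v$.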
- -Besides the use in completion of a ring, a typical application of the lemma is the proof of the Krull's intersection theorem, which says: $\textstyle\bigcap_{n=1}^\infty I^n = 0$ for a proper ideal I in a commutative Noetherian ring that is either a local ring or an integral domain. By the lemma applied to the intersection $N$, we find k such that for $n \ge k$, -$$ -I^{n} \cap N = I^{n - k} (I^{k} \cap N). -$$ - -But then $N = IN$. Thus, if A is local, $N = 0$ by Nakayama's lemma. If A is an integral domain, then one uses the determinant trick (that is a variant of the Cayley–Hamilton theorem and yields Nakayama's lemma): - -Theorem Let u be an endomorphism of an A-module N generated by n elements and I an ideal of A such that $u(N) \subset IN$. Then there is a relation: -$$ -u^n + a_1 u^{n-1} + \cdots + a_{n-1} u + a_n = 0, a_i \in I^i. -$$ - -In the setup here, take u to be the identity operator on N; that will yield a nonzero element x in A such that $x N = 0$, which implies $N = 0$. - -For both a local ring and an integral domain, the "Noetherian" cannot be dropped from the assumption: for the local ring case, see local ring#Commutative case. For the integral domain case, take $A$ to be the ring of algebraic integers (i.e., the integral closure of $\mathbb{Z}$ in $\mathbb{C}$). If $\mathfrak p$ is a prime ideal of A, then we have: $\mathfrak{p}^n = \mathfrak{p}$ for every integer $n > 0$. Indeed, if $y \in \mathfrak p$, then $y = \alpha^n$ for some complex number $\alpha$. Now, $\alpha$ is integral over $\mathbb{Z}$; thus in $A$ and then in $\mathfrak{p}$, proving the claim. diff --git a/wiki/wikipedia/4024.txt b/wiki/wikipedia/4024.txt deleted file mode 100644 index c70908445df15d815e00ccb9622514f0e80eb8e0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4024.txt +++ /dev/null @@ -1,178 +0,0 @@ -In geometry, a parallelepiped is a three-dimensional figure formed by six parallelograms (the term rhomboid is also sometimes used with this meaning). By analogy, it relates to a parallelogram just as a cube relates to a square. In Euclidean geometry, the four concepts-parallelepiped and cube in three dimensions, parallelogram and square in two dimensions-are defined, but in the context of a more general affine geometry, in which angles are not differentiated, only parallelograms and parallelepipeds exist. Three equivalent definitions of parallelepiped are - -*a polyhedron with six faces (hexahedron), each of which is a parallelogram, - -*a hexahedron with three pairs of parallel faces, and - -*a prism of which the base is a parallelogram. - -The rectangular cuboid (six rectangular faces), cube (six square faces), and the rhombohedron (six rhombus faces) are all specific cases of parallelepiped. - -"Parallelepiped" is now usually pronounced , , or ; traditionally it was in accordance with its etymology in Greek παραλληλεπίπεδον parallelepipedon, a body "having parallel planes". - -Parallelepipeds are a subclass of the prismatoids. - -Any of the three pairs of parallel faces can be viewed as the base planes of the prism. A parallelepiped has three sets of four parallel edges; the edges within each set are of equal length. - -Parallelepipeds result from linear transformations of a cube (for the non-degenerate cases: the bijective linear transformations). - -Since each face has point symmetry, a parallelepiped is a zonohedron. Also the whole parallelepiped has point symmetry Ci (see also triclinic). Each face is, seen from the outside, the mirror image of the opposite face. 
The faces are in general chiral, but the parallelepiped is not. - -A space-filling tessellation is possible with congruent copies of any parallelepiped. - -A parallelepiped can be considered as an oblique prism with a parallelogram as base. - -Hence the volume $V$ of a parallelepiped is the product of the base area $B$ and the height $h$ (see diagram). With -$$ -B = |\vec a| \cdot |\vec b| \cdot \sin \gamma = |\vec a \times \vec b|~ -$$ (where $\gamma$ is the angle between vectors $\vec a$ and $\vec b$), and -$$ -h = |\vec c| \cdot |\cos \theta~|~ -$$ (where $\theta$ is the angle between vector $\vec c$ and the normal to the base), one gets: -$$ -V = B\cdot h = (|\vec a| |\vec b| \sin \gamma) \cdot |\vec c||\cos \theta~| = |\vec a \times \vec b|~|\vec c|~|\cos \theta~| = |(\vec{a} \times \vec{b}) \cdot \vec{c}|~. -$$ - -The mixed product of three vectors is called the triple product. It can be described by a determinant. Hence for $\vec a=(a_1,a_2,a_3)^T, ~\vec b=(b_1,b_2,b_3)^T, ~\vec c=(c_1,c_2,c_3)^T,$ the volume is: -$$ -(V1) \quad V = \left| \det \left(\begin{matrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{matrix}\right) \right|~. -$$ - -Another way to prove (V1) is to use the scalar component in the direction of $\vec a\times\vec b$ of vector $\vec c$: -$$ -V=|\vec a\times\vec b|~|\mathrm{scal}_{\vec a\times\vec b} \vec c|=|\vec a\times\vec b|~ \dfrac{|(\vec a\times \vec b)\cdot \vec c|}{|\vec a\times\vec b|}=|(\vec a\times \vec b)\cdot \vec c|~. -$$ - -The result follows. - -An alternative representation of the volume uses geometric properties (angles and edge lengths) only: -$$ -(V2) \quad V = abc\sqrt{1+2\cos(\alpha)\cos(\beta)\cos(\gamma)-\cos^2(\alpha)-\cos^2(\beta)-\cos^2(\gamma)}~, -$$ - -where $\ \alpha=\angle(\vec b, \vec c) , \beta=\angle(\vec a,\vec c) , \gamma=\angle(\vec a,\vec b) ,\ $ and $a,b,c $ are the edge lengths. - -Proof of (V2) - -The proof of (V2) uses properties of a determinant and the geometric interpretation of the dot product: - -Let $M$ be the 3×3 matrix whose columns are the vectors $\vec a, \vec b,\vec c$ (see above). Then the following is true: -$$ - V^2=(\det M)^2=\det M \det M= \det M^T\det M=\det (M^TM) = \det\left( \begin{matrix} \vec a\cdot \vec a & \vec a\cdot \vec b & \vec a\cdot \vec c \\ \vec b\cdot \vec a & \vec b\cdot \vec b & \vec b\cdot \vec c \\ \vec c\cdot \vec a & \vec c\cdot \vec b & \vec c\cdot \vec c \end{matrix}\right). -$$ - -Expanding the determinant above across the first row: -$$ -\begin{align} V^2 &= a^2\left(b^2c^2 - b^2c^2\cos^2(\alpha)\right) -ab\cos(\gamma)\left(ab\cos(\gamma)c^2-ac\cos(\beta)bc\cos(\alpha)\right) +ac\cos(\beta)\left(ab\cos(\gamma)bc\cos(\alpha)-ac\cos(\beta)b^2\right) \\ &= a^2b^2c^2-a^2b^2c^2\cos^2(\alpha) -a^2b^2c^2\cos^2(\gamma)+a^2b^2c^2\cos(\alpha)\cos(\beta)\cos(\gamma) +a^2b^2c^2\cos(\alpha)\cos(\beta)\cos(\gamma)-a^2b^2c^2\cos^2(\beta) \\ &= a^2b^2c^2\left(1+2\cos(\alpha)\cos(\beta)\cos(\gamma)-\cos^2(\alpha)-\cos^2(\beta)-\cos^2(\gamma)\right). \end{align} -$$ - -(The last steps use $\ \vec a\cdot \vec a=a^2, ..., \vec a\cdot \vec b=ab\cos\gamma, \vec a\cdot \vec c=ac\cos\beta, \vec b\cdot \vec c=bc\cos\alpha, ... $) - -Corresponding tetrahedron - -The volume of any tetrahedron that shares three converging edges of a parallelepiped is equal to one sixth of the volume of that parallelepiped (see proof).
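The two volume formulas are easy to cross-check numerically. The sketch below (our own illustration, assuming numpy) compares (V1) and (V2) on a randomly generated parallelepiped:

```
import numpy as np

# Compare the determinant formula (V1) with the angle/edge-length formula (V2)
# on a random (generic, hence non-degenerate) parallelepiped spanned by a, b, c.
rng = np.random.default_rng(0)
a, b, c = rng.normal(size=(3, 3))

V1 = abs(np.linalg.det(np.column_stack([a, b, c])))

la, lb, lc = np.linalg.norm(a), np.linalg.norm(b), np.linalg.norm(c)
cos_alpha = b @ c / (lb * lc)   # alpha = angle(b, c)
cos_beta  = a @ c / (la * lc)   # beta  = angle(a, c)
cos_gamma = a @ b / (la * lb)   # gamma = angle(a, b)

V2 = la * lb * lc * np.sqrt(
    1 + 2 * cos_alpha * cos_beta * cos_gamma
    - cos_alpha**2 - cos_beta**2 - cos_gamma**2
)

print(V1, V2)  # the two values agree to floating-point accuracy
```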
- -The surface area of a parallelepiped is the sum of the areas of the bounding parallelograms: -$$ - A = 2 \cdot \left(|\vec a \times \vec b| + |\vec a \times \vec c| + |\vec b \times \vec c|\right) -$$ -$$ -= 2(ab\sin\gamma+ bc\sin\alpha+ca\sin\beta)\ -$$. - -(For labeling: see previous section.) - -*The parallelepiped with Oh symmetry is known as a cube, which has six congruent square faces. - -*The parallelepiped with D4h symmetry is known as a square cuboid, which has two square faces and four congruent rectangular faces. - -*The parallelepiped with D3d symmetry is known as a trigonal trapezohedron, which has six congruent rhombic faces (also called an isohedral rhombohedron). - -*For parallelepipeds with D2h symmetry, there are two cases: - -**Rectangular cuboid: it has six rectangular faces (also called a rectangular parallelepiped, or sometimes simply a cuboid). - -**Right rhombic prism: it has two rhombic faces and four congruent rectangular faces. - -Note: the fully rhombic special case, with two rhombic faces and four congruent square faces $(a=b=c)$, has the same name, and the same symmetry group (D2h , order 8). - -*For parallelepipeds with C2h symmetry, there are two cases: - -**Right parallelogrammic prism: it has four rectangular faces and two parallelogrammic faces. - -**Oblique rhombic prism: it has two rhombic faces, while of the other faces, two adjacent ones are equal and the other two also (the two pairs are each other's mirror image). - -A perfect parallelepiped is a parallelepiped with integer-length edges, face diagonals, and space diagonals. In 2009, dozens of perfect parallelepipeds were shown to exist, answering an open question of Richard Guy. One example has edges 271, 106, and 103, minor face diagonals 101, 266, and 255, major face diagonals 183, 312, and 323, and space diagonals 374, 300, 278, and 272. - -Some perfect parallelepipeds having two rectangular faces are known. But it is not known whether there exist any with all faces rectangular; such a case would be called a perfect cuboid. - -Coxeter called the generalization of a parallelepiped in higher dimensions a parallelotope. In modern literature expression parallelepiped is often used in higher (or arbitrary finite) dimensions as well. - -Specifically in n-dimensional space it is called n-dimensional parallelotope, or simply n-parallelotope (or n-parallelepiped). Thus a parallelogram is a 2-parallelotope and a parallelepiped is a 3-parallelotope. - -More generally a parallelotope, or voronoi parallelotope, has parallel and congruent opposite facets. So a 2-parallelotope is a parallelogon which can also include certain hexagons, and a 3-parallelotope is a parallelohedron, including 5 types of polyhedra. - -The diagonals of an n-parallelotope intersect at one point and are bisected by this point. Inversion in this point leaves the n-parallelotope unchanged. See also fixed points of isometry groups in Euclidean space. - -The edges radiating from one vertex of a k-parallelotope form a k-frame $(v_1,\ldots, v_n)$ of the vector space, and the parallelotope can be recovered from these vectors, by taking linear combinations of the vectors, with weights between 0 and 1. - -The n-volume of an n-parallelotope embedded in $\mathbb{R}^m$ where $m \ge n$ can be computed by means of the Gram determinant. Alternatively, the volume is the norm of the exterior product of the vectors: -$$ - V = \left\| v_1 \wedge \cdots \wedge v_n \right\| . 
-$$ - -If m = n, this amounts to the absolute value of the determinant of the n vectors. - -Another formula to compute the volume of an n-parallelotope P in $\mathbb{R}^n$, whose n + 1 vertices are $V_0,V_1, \ldots, V_n$, is -$$ - {\rm Vol}(P) = |{\rm det}\ ([V_0\ 1]^{\rm T}, [V_1\ 1]^{\rm T}, \ldots, [V_n\ 1]^{\rm T})|, -$$ - -where $[V_i\ 1]$ is the row vector formed by the concatenation of $V_i$ and 1. Indeed, the determinant is unchanged if $[V_0\ 1]$ is subtracted from $[V_i\ 1]$ (i > 0), and placing $[V_0\ 1]$ in the last position only changes its sign. - -Similarly, the volume of any n-simplex that shares n converging edges of a parallelotope has a volume equal to one 1/n! of the volume of that parallelotope. - -The word appears as parallelipipedon in Sir Henry Billingsley's translation of Euclid's Elements, dated 1570. In the 1644 edition of his Cursus mathematicus, Pierre Hérigone used the spelling parallelepipedum. The Oxford English Dictionary cites the present-day parallelepiped as first appearing in Walter Charleton's Chorea gigantum (1663). - -Charles Hutton's Dictionary (1795) shows parallelopiped and parallelopipedon, showing the influence of the combining form parallelo-, as if the second element were pipedon rather than epipedon. Noah Webster (1806) includes the spelling parallelopiped. The 1989 edition of the Oxford English Dictionary describes parallelopiped (and parallelipiped) explicitly as incorrect forms, but these are listed without comment in the 2004 edition, and only pronunciations with the emphasis on the fifth syllable pi () are given. - -A change away from the traditional pronunciation has hidden the different partition suggested by the Greek roots, with epi- ("on") and pedon ("ground") combining to give epiped, a flat "plane". Thus the faces of a parallelepiped are planar, with opposite faces being parallel. diff --git a/wiki/wikipedia/4025.txt b/wiki/wikipedia/4025.txt deleted file mode 100644 index 9fe5abec964be5975969c3891943e71eef4a5762..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4025.txt +++ /dev/null @@ -1,30 +0,0 @@ -In mathematics, particularly in number theory, Hillel Furstenberg's proof of the infinitude of primes is a topological proof that the integers contain infinitely many prime numbers. When examined closely, the proof is less a statement about topology than a statement about certain properties of arithmetic sequences. Unlike Euclid's classical proof, Furstenberg's proof is a proof by contradiction. The proof was published in 1955 in the American Mathematical Monthly while Furstenberg was still an undergraduate student at Yeshiva University. - -Define a topology on the integers Z, called the evenly spaced integer topology, by declaring a subset U ⊆ Z to be an open set if and only if it is a union of arithmetic sequences S(a, b) for a ≠ 0, or is empty (which can be seen as a nullary union (empty union) of arithmetic sequences), where -$$ -S(a, b) = \{ a n + b \mid n \in \mathbb{Z} \} = a \mathbb{Z} + b. -$$ - -Equivalently, U is open if and only if for every x in U there is some non-zero integer a such that S(a, x) ⊆ U. The axioms for a topology are easily verified: - -* ∅ is open by definition, and Z is just the sequence S(1, 0), and so is open as well. - -* Any union of open sets is open: for any collection of open sets Ui and x in their union U, any of the numbers ai for which S(ai, x) ⊆ Ui also shows that S(ai, x) ⊆ U. 
- -* The intersection of two (and hence finitely many) open sets is open: let U1 and U2 be open sets and let x ∈ U1 ∩ U2 (with numbers a1 and a2 establishing membership). Set a to be the least common multiple of a1 and a2. Then S(a, x) ⊆ S(ai, x) ⊆ Ui. - -This topology has two notable properties: - -# Since any non-empty open set contains an infinite sequence, a finite set cannot be open; put another way, the complement of a finite set cannot be a closed set. - -# The basis sets S(a, b) are both open and closed: they are open by definition, and we can write S(a, b) as the complement of an open set as follows: -$$ -S(a, b) = \mathbb{Z} \setminus \bigcup_{j = 1}^{a - 1} S(a, b + j). -$$ - -The only integers that are not integer multiples of prime numbers are -1 and +1, i.e. -$$ -\mathbb{Z} \setminus \{ -1, + 1 \} = \bigcup_{p \mathrm{ prime}} S(p, 0). -$$ - -By the first property, the set on the left-hand side cannot be closed. On the other hand, by the second property, the sets S(p, 0) are closed. So, if there were only finitely many prime numbers, then the set on the right-hand side would be a finite union of closed sets, and hence closed. This would be a contradiction, so there must be infinitely many prime numbers. diff --git a/wiki/wikipedia/4026.txt b/wiki/wikipedia/4026.txt deleted file mode 100644 index 85b59a7ae45728eb43a2779e38b8e0ebc3691e16..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4026.txt +++ /dev/null @@ -1,13 +0,0 @@ -IBM TXSeries for Multiplatforms is a distributed CICS (Customer Information Control System) online transaction processing (OLTP) environment for mixed language applications. - -TXSeries was introduced by IBM's Transarc subsidiary in 1997 and bundled CICS version 2.1.2 with Encina, MQSeries middleware, Lotus Domino Go web server, and other software. - -TXSeries is a transaction server available on AIX, Linux x86, Windows Server. It shares similar design principles and some functions with CICS on mainframe. End of 2006 saw a major release of TXSeries V6.1, with DCE and Encina components removed. This brought huge simplification to the product. There is also a new graphical web-based administration console. - -IBM TXSeries V9.1 allows clients to: - -# Create RESTful APIs to extend existing applications for mobile and cloud. - -# Extend traditional applications in Java Enterprise Edition (EE) and deploy them on IBM WebSphere Application Server. - -TXSeries for Multiplatforms V9.1 extends the capabilities for TXSeries for Multiplatforms V8.2 and 8.1 and folds in the benefit from earlier releases along with the new capabilities listed above. diff --git a/wiki/wikipedia/4027.txt b/wiki/wikipedia/4027.txt deleted file mode 100644 index 8a1e6509cded8f26651ec24dbc9756d587d3a8fa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4027.txt +++ /dev/null @@ -1,14 +0,0 @@ -In operator theory, von Neumann's inequality, due to John von Neumann, states that, for a fixed contraction T, the polynomial functional calculus map is itself a contraction. - -For a contraction T acting on a Hilbert space and a polynomial p, then the norm of p(T) is bounded by the supremum of |p(z)| for z in the unit disk." - -The inequality can be proved by considering the unitary dilation of T, for which the inequality is obvious. - -This inequality is a specific case of Matsaev's conjecture. That is that for any polynomial P and contraction T on $L^p$ -$$ -||P(T)||_{L^p\to L^p} \le ||P(S)||_{\ell^p\to\ell^p} -$$ - -where S is the right-shift operator. 
The von Neumann inequality proves it true for $p=2$ and for $p=1$ and $p=\infty$ it is true by straightforward calculation. - -S.W. Drury has shown in 2011 that the conjecture fails in the general case. diff --git a/wiki/wikipedia/4028.txt b/wiki/wikipedia/4028.txt deleted file mode 100644 index 96d5add87d225e0f312a071c07349332ceba4a8f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4028.txt +++ /dev/null @@ -1,29 +0,0 @@ -The junction tree algorithm (also known as 'Clique Tree') is a method used in machine learning to extract marginalization in general graphs. In essence, it entails performing belief propagation on a modified graph called a junction tree. The graph is called a tree because it branches into different sections of data; nodes of variables are the branches. The basic premise is to eliminate cycles by clustering them into single nodes. Multiple extensive classes of queries can be compiled at the same time into larger structures of data. The Hugin algorithm takes fewer computations to find a solution compared to Shafer-Shenoy. - -* Computed recursively - -* Found by the message passing equation Joint distributions are needed to make local computations happen. - -Theorem: For an undirected graph, G, the following properties are equivalent: - -* Graph G is triangulated. - -* The clique graph of G has a junction tree. - -* There is an elimination ordering for G that does not lead to any added edges. - -Thus, by triangulating a graph, we make sure that the corresponding junction tree exists. A usual way to do this, is to decide an elimination order for its nodes, and then run the Variable elimination algorithm. The variable elimination algorithm states that the algorithm must be run each time there is a different query. This will result to adding more edges to the initial graph, in such a way that the output will be a chordal graph. - -All chordal graphs have a junction tree. The next step is to construct the junction tree. To do so, we use the graph from the previous step, and form its corresponding clique graph. Now the next theorem gives us a way to find a junction tree: - -Theorem: Given a triangulated graph, weight the edges of the clique graph by their cardinality, |A∩B|, of the intersection of the adjacent cliques A and B. Then any maximum-weight spanning tree of the clique graph is a junction tree. - -So, to construct a junction tree we just have to extract a maximum weight spanning tree out of the clique graph. This can be efficiently done by, for example, modifying Kruskal's algorithm. - -The last step is to apply belief propagation to the obtained junction tree. - -Usage: A junction tree graph is used to visualize the probabilities of the problem. The tree can become a binary tree to form the actual building of the tree. A specific use could be found in auto encoders, which combine the graph and a passing network on a large scale automatically. - -Loopy belief propagation: A different method of interpreting complex graphs. The loopy belief propagation is used when an approximate solution is needed instead of the exact solution. It is an approximate inference. - -Cutset conditioning: Used with smaller sets of variables. Cutset conditioning allows for simpler graphs that are easier to read but are not exact. 
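To make the construction above concrete, here is a small sketch (assuming the networkx library; the example graph is our own) that builds a junction tree for a chordal graph by taking a maximum-weight spanning tree of its clique graph, with edge weights |A∩B| as in the theorem:

```
import itertools
import networkx as nx

# Two triangles sharing vertex C, plus a pendant edge: already chordal,
# so the triangulation step is a no-op for this example.
G = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"),
              ("C", "D"), ("D", "E"), ("C", "E"), ("E", "F")])
assert nx.is_chordal(G)

cliques = list(nx.chordal_graph_cliques(G))  # maximal cliques, as frozensets

# Clique graph: connect overlapping cliques, weighted by separator size |A ∩ B|.
clique_graph = nx.Graph()
clique_graph.add_nodes_from(cliques)
for A, B in itertools.combinations(cliques, 2):
    if A & B:
        clique_graph.add_edge(A, B, weight=len(A & B))

# Any maximum-weight spanning tree of the clique graph is a junction tree.
junction_tree = nx.maximum_spanning_tree(clique_graph)
for A, B in junction_tree.edges():
    print(sorted(A), "-", sorted(A & B), "-", sorted(B))
```

Running this prints the cliques joined through their separators ({C} and {E} here), i.e. a clique tree satisfying the running-intersection property.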
diff --git a/wiki/wikipedia/4029.txt b/wiki/wikipedia/4029.txt deleted file mode 100644 index 072bfa2a2e24ed18e55e57d2ee3848435208537d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4029.txt +++ /dev/null @@ -1,154 +0,0 @@ -The proof of Gödel's completeness theorem given by Kurt Gödel in his doctoral dissertation of 1929 (and a shorter version of the proof, published as an article in 1930, titled "The completeness of the axioms of the functional calculus of logic" (in German)) is not easy to read today; it uses concepts and formalisms that are no longer used and terminology that is often obscure. The version given below attempts to represent all the steps in the proof and all the important ideas faithfully, while restating the proof in the modern language of mathematical logic. This outline should not be considered a rigorous proof of the theorem. - -We work with first-order predicate calculus. Our languages allow constant, function and relation symbols. Structures consist of (non-empty) domains and interpretations of the relevant symbols as constant members, functions or relations over that domain. - -We assume classical logic (as opposed to intuitionistic logic for example). - -We fix some axiomatization (i.e. a syntax-based, machine-manageable proof system) of the predicate calculus: logical axioms and rules of inference. Any of the several well-known equivalent axiomatizations will do. Gödel's original proof assumed the Hilbert-Ackermann proof system. - -We assume without proof all the basic well-known results about our formalism that we need, such as the normal form theorem or the soundness theorem. - -We axiomatize predicate calculus without equality (sometimes confusingly called without identity), i.e. there are no special axioms expressing the properties of (object) equality as a special relation symbol. After the basic form of the theorem has been proved, it will be easy to extend it to the case of predicate calculus with equality. - -In the following, we state two equivalent forms of the theorem, and show their equivalence. - -Later, we prove the theorem. This is done in the following steps: - -# Reducing the theorem to sentences (formulas with no free variables) in prenex form, i.e. with all quantifiers (∀ and ∃) at the beginning. Furthermore, we reduce it to formulas whose first quantifier is ∀. This is possible because for every sentence, there is an equivalent one in prenex form whose first quantifier is ∀. - -# Reducing the theorem to sentences of the form ∀x1x2...∀xky1y2...∃ym φ(x1...xk, y1...ym). While we cannot do this by simply rearranging the quantifiers, we show that it is yet enough to prove the theorem for sentences of that form. - -# Finally we prove the theorem for sentences of that form. - -#* This is done by first noting that a sentence such as B = ∀x1x2...∀xky1y2...∃ym φ(x1...xk, y1...ym) is either refutable (its negation is always true) or satisfiable, i.e. there is some model in which it holds (it might even be always true, i.e. a tautology); this model is simply assigning truth values to the subpropositions from which B is built. The reason for that is the completeness of propositional logic, with the existential quantifiers playing no role. - -#* We extend this result to more and more complex and lengthy sentences, Dn (n=1,2...), built out from B, so that either any of them is refutable and therefore so is φ, or all of them are not refutable and therefore each holds in some model. 
- -#* We finally use the models in which the Dn hold (in case all are not refutable) in order to build a model in which φ holds. - -This is the most basic form of the completeness theorem. We immediately restate it in a form more convenient for our purposes: - -When we say "all structures", it is important to specify that the structures involved are classical (Tarskian) interpretations I, where I= (U is a non-empty (possibly infinite) set of objects, whereas F is a set of functions from expressions of the interpreted symbolism into U). [By contrast, so-called "free logics" countenance possibly empty sets for U. For more regarding free logics, see the work of Karel Lambert.] - -"φ is refutable" means by definition "¬φ is provable". - -If Theorem 1 holds, and φ is not satisfiable in any structure, then ¬φ is valid in all structures and therefore provable, thus φ is refutable and Theorem 2 holds. If on the other hand Theorem 2 holds and φ is valid in all structures, then ¬φ is not satisfiable in any structure and therefore refutable; then ¬¬φ is provable and then so is φ, thus Theorem 1 holds. - -We approach the proof of Theorem 2 by successively restricting the class of all formulas φ for which we need to prove "φ is either refutable or satisfiable". At the beginning we need to prove this for all possible formulas φ in our language. However, suppose that for every formula φ there is some formula ψ taken from a more restricted class of formulas C, such that "ψ is either refutable or satisfiable" → "φ is either refutable or satisfiable". Then, once this claim (expressed in the previous sentence) is proved, it will suffice to prove "φ is either refutable or satisfiable" only for φ's belonging to the class C. If φ is provably equivalent to ψ (i.e., (φ≡ψ) is provable), then it is indeed the case that "ψ is either refutable or satisfiable" → "φ is either refutable or satisfiable" (the soundness theorem is needed to show this). - -There are standard techniques for rewriting an arbitrary formula into one that does not use function or constant symbols, at the cost of introducing additional quantifiers; we will therefore assume that all formulas are free of such symbols. Gödel's paper uses a version of first-order predicate calculus that has no function or constant symbols to begin with. - -Next we consider a generic formula φ (which no longer uses function or constant symbols) and apply the prenex form theorem to find a formula ψ in normal form such that φ≡ψ (ψ being in normal form means that all the quantifiers in ψ, if there are any, are found at the very beginning of ψ). It follows now that we need only prove Theorem 2 for formulas φ in normal form. - -Next, we eliminate all free variables from φ by quantifying them existentially: if, say, x1...xn are free in φ, we form $\psi=\exists x_1 ... \exists x_n \phi$. If ψ is satisfiable in a structure M, then certainly so is φ and if ψ is refutable, then $\neg \psi = \forall x_1 ... \forall x_n \neg \phi$ is provable, and then so is ¬φ, thus φ is refutable. We see that we can restrict φ to be a sentence, that is, a formula with no free variables. - -Finally, we would like, for reasons of technical convenience, that the prefix of φ (that is, the string of quantifiers at the beginning of φ, which is in normal form) begin with a universal quantifier and end with an existential quantifier. To achieve this for a generic φ (subject to restrictions we have already proved), we take some one-place relation symbol F unused in φ, and two new variables y and z.. 
If φ = (P)Φ, where (P) stands for the prefix of φ and Φ for the matrix (the remaining, quantifier-free part of φ) we form $\psi = \forall y (P) \exists z ( \Phi \wedge [ F(y) \vee \neg F(z) ] )$. Since $\forall y \exists z ( F(y) \vee \neg F(z) )$ is clearly provable, it is easy to see that $\phi=\psi$ is provable. - -Our generic formula φ now is a sentence, in normal form, and its prefix starts with a universal quantifier and ends with an existential quantifier. Let us call the class of all such formulas R. We are faced with proving that every formula in R is either refutable or satisfiable. Given our formula φ, we group strings of quantifiers of one kind together in blocks: -$$ -\phi = (\forall x_1 ... \forall x_{k_1})(\exists x_{k_1+1} ... \exists x_{k_2}).......(\forall x_{k_{n-2}+1} ... \forall x_{k_{n-1}})(\exists x_{k_{n-1}+1} ... \exists x_{k_n}) (\Phi) -$$ - -We define the degree of $\phi$ to be the number of universal quantifier blocks, separated by existential quantifier blocks as shown above, in the prefix of $\phi$. The following lemma, which Gödel adapted from Skolem's proof of the Löwenheim–Skolem theorem, lets us sharply reduce the complexity of the generic formula $\phi$ we need to prove the theorem for: - -Lemma. Let k>=1. If every formula in R of degree k is either refutable or satisfiable, then so is every formula in R of degree k+1. - -Comment: Take a formula φ of degree k+1 of the form $\phi = (\forall x)(\exists y)(\forall u)(\exist v) (P) \psi$, where $(P)\psi$ is the remainder of $\phi$ (it is thus of degree k-1). φ states that for every x there is a y such that... (something). It would have been nice to have a predicate Q' so that for every x, Q'(x,y) would be true if and only if y is the required one to make (something) true. Then we could have written a formula of degree k, which is equivalent to φ, namely $(\forall x')(\forall x)(\forall y)(\forall u)(\exist v)(\exist y') (P) Q'(x',y') \wedge (Q'(x,y) \rightarrow \psi)$. This formula is indeed equivalent to φ because it states that for every x, if there is a y that satisfies Q'(x,y), then (something) holds, and furthermore, we know that there is such a y, because for every x', there is a y' that satisfies Q'(x',y'). Therefore φ follows from this formula. It is also easy to show that if the formula is false, then so is φ. Unfortunately, in general there is no such predicate Q'. However, this idea can be understood as a basis for the following proof of the Lemma. - -Proof. Let φ be a formula of degree k+1; then we can write it as -$$ -\phi = (\forall x)(\exists y)(\forall u)(\exist v) (P) \psi -$$ - -where (P) is the remainder of the prefix of $\phi$ (it is thus of degree k-1) and $\psi$ is the quantifier-free matrix of $\phi$. x, y, u and v denote here tuples of variables rather than single variables; e.g. $(\forall x)$ really stands for $\forall x_1 \forall x_2 ... \forall x_n$ where $x_1 ... x_n$ are some distinct variables. - -Let now x' and y' be tuples of previously unused variables of the same length as x and y respectively, and let Q be a previously unused relation symbol that takes as many arguments as the sum of lengths of x and y; we consider the formula -$$ -\Phi = (\forall x')(\exists y') Q(x',y') \wedge (\forall x)(\forall y)( Q(x,y) \rightarrow (\forall u)(\exist v)(P)\psi ) -$$ - -Clearly, $\Phi \rightarrow \phi$ is provable. 
- -Now since the string of quantifiers $(\forall u)(\exists v)(P)$ does not contain variables from x or y, the following equivalence is easily provable with the help of whatever formalism we're using: -$$ -( Q(x,y) \rightarrow (\forall u )(\exists v)(P) \psi) \equiv (\forall u)(\exists v)(P) ( Q(x,y) \rightarrow \psi ) -$$ - -And since these two formulas are equivalent, if we replace the first with the second inside Φ, we obtain the formula Φ' such that Φ≡Φ': -$$ -\Phi' = (\forall x')(\exist y') Q(x',y') \wedge (\forall x)(\forall y) (\forall u)(\exists v)(P) ( Q(x,y) \rightarrow \psi ) -$$ - -Now Φ' has the form $(S)\rho \wedge (S')\rho'$, where (S) and (S') are some quantifier strings, ρ and ρ' are quantifier-free, and, furthermore, no variable of (S) occurs in ρ' and no variable of (S') occurs in ρ. Under such conditions every formula of the form $(T)(\rho \wedge \rho')$, where (T) is a string of quantifiers containing all quantifiers in (S) and (S') interleaved among themselves in any fashion, but maintaining the relative order inside (S) and (S'), will be equivalent to the original formula Φ'(this is yet another basic result in first-order predicate calculus that we rely on). To wit, we form Ψ as follows: -$$ -\Psi = (\forall x')(\forall x)(\forall y) (\forall u)(\exists y')(\exists v)(P)Q(x',y') \wedge (Q(x,y) \rightarrow \psi ) -$$ - -and we have $\Phi' \equiv \Psi$. - -Now $\Psi$ is a formula of degree k and therefore by assumption either refutable or satisfiable. - -If $\Psi$ is satisfiable in a structure M, then, considering $\Psi \equiv \Phi' \equiv \Phi \wedge \Phi \rightarrow \phi$, we see that $\phi$ is satisfiable as well. - -If $\Psi$ is refutable, then so is $\Phi$, which is equivalent to it; thus $\neg \Phi$ is provable. - -Now we can replace all occurrences of Q inside the provable formula $\neg \Phi$ by some other formula dependent on the same variables, and we will still get a provable formula. - -(This is yet another basic result of first-order predicate calculus. Depending on the particular formalism adopted for the calculus, it may be seen as a simple application of a "functional substitution" rule of inference, as in Gödel's paper, or it may be proved by considering the formal proof of $\neg \Phi$, replacing in it all occurrences of Q by some other formula with the same free variables, and noting that all logical axioms in the formal proof remain logical axioms after the substitution, and all rules of inference still apply in the same way.) - -In this particular case, we replace Q(x',y') in $\neg \Phi$ with the formula $(\forall u)(\exists v)(P)\psi(x,y|x',y')$. Here (x,y|x',y') means that instead of ψ we are writing a different formula, in which x and y are replaced with x' and y'. Q(x,y) is simply replaced by $(\forall u)(\exists v)(P)\psi$. -$$ -\neg \Phi -$$ then becomes -$$ -\neg ( (\forall x')(\exists y') (\forall u)(\exists v)(P)\psi(x,y|x',y') \wedge (\forall x)(\forall y) ( (\forall u)(\exists v)(P)\psi \rightarrow (\forall u)(\exists v)(P) \psi ) ) -$$ - -and this formula is provable; since the part under negation and after the $\wedge$ sign is obviously provable, and the part under negation and before the $\wedge$ sign is obviously φ, just with x and y replaced by x' and y', we see that $\neg \phi$ is provable, and φ is refutable. We have proved that φ is either satisfiable or refutable, and this concludes the proof of the Lemma. 
- -Notice that we could not have used $(\forall u)(\exists v)(P)\psi(x,y|x',y')$ instead of Q(x',y') from the beginning, because $\Psi$ would not have been a well-formed formula in that case. This is why we cannot naively use the argument appearing at the comment that precedes the proof. - -As shown by the Lemma above, we only need to prove our theorem for formulas φ in R of degree 1. φ cannot be of degree 0, since formulas in R have no free variables and don't use constant symbols. So the formula φ has the general form: -$$ - (\forall x_1...x_k)(\exists y_1...y_m) \phi(x_1...x_k, y_1...y_m). -$$ - -Now we define an ordering of the k-tuples of natural numbers as follows: $ (x_1...x_k) < (y_1...y_k) $ should hold if either $ \Sigma_k (x_1...x_k) < \Sigma_k (y_1...y_k) $, or $ \Sigma_k (x_1...x_k) = \Sigma_k (y_1...y_k) $, and $ (x_1...x_k) $ precedes $ (y_1...y_k) $ in lexicographic order. [Here $ \Sigma_k (x_1...x_k) $ denotes the sum of the terms of the tuple.] Denote the nth tuple in this order by $ (a^n_1...a^n_k) $. - -Set the formula $ B_n $ as $ \phi(z_{a^n_1}...z_{a^n_k}, z_{(n-1)m+2}, z_{(n-1)m+3}...z_{nm+1}) $. Then put $D_n$ as -$$ - (\exists z_1...z_{nm+1}) (B_1 \wedge B_2 ... \wedge B_n). -$$ - -Lemma: For every n, φ$ \rightarrow D_n$. - -Proof: By induction on n; we have $ D_k \Leftarrow D_{k-1} \wedge (\forall z_1...z_{(n-1)m+1})(\exists z_{(n-1)m+2}...z_{nm+1}) B_n \Leftarrow D_{k-1} \wedge (\forall z_{a^n_1}...z_{a^n_k})(\exists y_1...y_m) \phi(z_{a^n_1}...z_{a^n_k}, y_1...y_m) $, where the latter implication holds by variable substitution, since the ordering of the tuples is such that $(\forall k)({a^n_1}...{a^n_k}) < (n-1)m + 2$. But the last formula is equivalent to $ D_{k-1} \wedge $φ. - -For the base case, $ D_1 \equiv (\exists z_1...z_{m+1}) \phi(z_{a^1_1}...z_{a^1_k}, z_2, z_3...z_{m+1}) \equiv (\exists z_1...z_{m+1}) \phi(z_1...z_1, z_2, z_3...z_{m+1}) $ is obviously a corollary of φ as well. So the Lemma is proven. - -Now if $ D_n $ is refutable for some n, it follows that φ is refutable. On the other hand, suppose that $ D_n $ is not refutable for any n. Then for each n there is some way of assigning truth values to the distinct subpropositions $E_h$ (ordered by their first appearance in $D_n$; "distinct" here means either distinct predicates, or distinct bound variables) in $ B_k $, such that $ D_n $ will be true when each proposition is evaluated in this fashion. This follows from the completeness of the underlying propositional logic. - -We will now show that there is such an assignment of truth values to $E_h$, so that all $D_n$ will be true: The $E_h$ appear in the same order in every $ D_n $; we will inductively define a general assignment to them by a sort of "majority vote": Since there are infinitely many assignments (one for each $ D_n $) affecting $E_1$, either infinitely many make $E_1$ true, or infinitely many make it false and only finitely many make it true. In the former case, we choose $E_1$ to be true in general; in the latter we take it to be false in general. Then from the infinitely many n for which $E_1$ through $E_{h-1}$ are assigned the same truth value as in the general assignment, we pick a general assignment to $E_h$ in the same fashion. - -This general assignment must lead to every one of the $B_k$ and $D_k$ being true, since if one of the $B_k$ were false under the general assignment, $D_n$ would also be false for every n > k. 
But this contradicts the fact that for the finite collection of general $E_h$ assignments appearing in $D_k$, there are infinitely many n where the assignment making $D_n$ true matches the general assignment. - -From this general assignment, which makes all of the $D_k$ true, we construct an interpretation of the language's predicates that makes φ true. The universe of the model will be the natural numbers. Each i-ary predicate $\Psi$ should be true of the naturals $(u_1...u_i)$ precisely when the proposition $\Psi(z_{u_1}...z_{u_i})$ is either true in the general assignment, or not assigned by it (because it never appears in any of the $D_k$). - -In this model, each of the formulas $ (\exists y_1...y_m) \phi(a^n_1...a^n_k, y_1...y_m) $ is true by construction. But this implies that φ itself is true in the model, since the $a^n$ range over all possible k-tuples of natural numbers. So φ is satisfiable, and we are done. - -We may write each Bi as Φ(x1...xk,y1...ym) for some x-s, which we may call "first arguments" and y-s that we may call "last arguments". - -Take B1 for example. Its "last arguments" are z2,z3...zm+1, and for every possible combination of k of these variables there is some j so that they appear as "first arguments" in Bj. Thus for large enough n1, Dn1 has the property that the "last arguments" of B1 appear, in every possible combinations of k of them, as "first arguments" in other Bj-s within Dn. For every Bi there is a Dni with the corresponding property. - -Therefore in a model that satisfies all the Dn-s, there are objects corresponding to z1, z2... and each combination of k of these appear as "first arguments" in some Bj, meaning that for every k of these objects zp1...zpk there are zq1...zqm, which makes Φ(zp1...zpk,zq1...zqm) satisfied. By taking a submodel with only these z1, z2... objects, we have a model satisfying φ. - -Gödel reduced a formula containing instances of the equality predicate to ones without it in an extended language. His method involves replacing a formula φ containing some instances of equality with the formula -$$ - (\forall x) Eq(x, x) \wedge (\forall x,y,z) [Eq(x, y) \rightarrow (Eq(x, z) \rightarrow Eq(y, z))] -$$ $ \wedge (\forall x,y,z) [Eq(x, y) \rightarrow (Eq(z, x) \rightarrow Eq(z, y))] $
    $ \wedge $
    $ (\forall x_1...x_k, y_1...y_k) [(Eq(x_1, y_1) \wedge ... \wedge Eq(x_k, y_k)) \rightarrow (A(x_1...x_k) \equiv A(y_1...y_k))] $
    $ \wedge ... \wedge $
    $ (\forall x_1...x_m, y_1...y_m) [(Eq(x_1, y_1) \wedge ... \wedge Eq(x_m, y_m)) \rightarrow (Z(x_1...x_m) \equiv Z(y_1...y_m))] $
    $ \wedge $
    $ \varphi'.$ - -Here $A ... Z$ denote the predicates appearing in φ (with $k ... m$ their respective arities), and φ' is the formula φ with all occurrences of equality replaced with the new predicate Eq. If this new formula is refutable, the original φ was as well; the same is true of satisfiability, since we may take a quotient of satisfying model of the new formula by the equivalence relation representing Eq. This quotient is well-defined with respect to the other predicates, and therefore will satisfy the original formula φ. - -Gödel also considered the case where there are a countably infinite collection of formulas. Using the same reductions as above, he was able to consider only those cases where each formula is of degree 1 and contains no uses of equality. For a countable collection of formulas $ \phi^i $ of degree 1, we may define $ B^i_k $ as above; then define $ D_k $ to be the closure of $ B^1_1...B^1_k, ..., B^k_1...B^k_k $. The remainder of the proof then went through as before. - -When there is an uncountably infinite collection of formulas, the Axiom of Choice (or at least some weak form of it) is needed. Using the full AC, one can well-order the formulas, and prove the uncountable case with the same argument as the countable one, except with transfinite induction. Other approaches can be used to prove that the completeness theorem in this case is equivalent to the Boolean prime ideal theorem, a weak form of AC. diff --git a/wiki/wikipedia/403.txt b/wiki/wikipedia/403.txt deleted file mode 100644 index f7556d9d0ee12b3eaa71c95201a778b16b10d235..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/403.txt +++ /dev/null @@ -1,417 +0,0 @@ -In mathematics, Fatou's lemma establishes an inequality relating the Lebesgue integral of the limit inferior of a sequence of functions to the limit inferior of integrals of these functions. The lemma is named after Pierre Fatou. - -Fatou's lemma can be used to prove the Fatou–Lebesgue theorem and Lebesgue's dominated convergence theorem. - -In what follows, $\operatorname{\mathcal B}_{\R_{\geq 0}}$ denotes the $\sigma$-algebra of Borel sets on $[0,+\infty]$. - -{{math theorem|Fatou's lemma. Given a measure space $(\Omega,\mathcal{F},\mu)$ and a set $X \in \mathcal{F},$ let $\{f_n\}$ be a sequence of $(\mathcal{F}, \operatorname{\mathcal B}_{\R_{\geq 0}})$-measurable non-negative functions $f_n: X\to [0,+\infty]$. Define the function $f: X\to [0,+\infty]$ by setting $f(x) =\liminf_{n\to\infty} f_n(x),$ for every $x\in X$. - -Then $f$ is $(\mathcal{F}, \operatorname{\mathcal B}_{\R_{\geq 0}})$-measurable, and also $\int_X fd\mu \le \liminf_{n\to\infty} \int_X f_nd\mu$, where the integrals may be infinite.}} - -Fatou's lemma remains true if its assumptions hold $\mu$-almost everywhere. In other words, it is enough that there is a null set $N$ such that the sequence $\{f_n(x)\}$ non-decreases for every ${x\in X\setminus N}.$ To see this, note that the integrals appearing in Fatou's lemma are unchanged if we change each function on $N$. - -Fatou's lemma does not require the monotone convergence theorem, but the latter can be used to provide a quick proof. A proof directly from the definitions of integrals is given further below. - -In each case, the proof begins by analyzing the properties of $\textstyle g_n(x)=\inf_{k\geq n}f_k(x)$. These satisfy: - -# the sequence $\{g_n(x)\}_n$ is pointwise non-decreasing at any x and - -# $g_n\leq f_n$, $\forall n \in \N$. 
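Before turning to the proofs, a quick numeric illustration of the statement (a sketch, assuming numpy; the grid sum is a crude stand-in for Lebesgue integration), using the sequence $f_n = n\cdot\mathbf 1_{(0,1/n)}$ from the examples discussed later: every $\int f_n\,d\mu = 1$, while $\liminf_n f_n = 0$ pointwise, so the inequality can be strict:

```
import numpy as np

# Fatou's lemma can be strict: with f_n = n on (0, 1/n) and 0 elsewhere,
# every integral equals 1, but the pointwise liminf is the zero function.
x = np.linspace(0.0, 1.0, 100001)[1:]   # grid on (0, 1]
dx = x[1] - x[0]

def f(n):
    return np.where(x < 1.0 / n, float(n), 0.0)

print([round(np.sum(f(n)) * dx, 3) for n in (1, 10, 100)])  # each ≈ 1.0

inf_f = f(1)
for n in range(2, 1001):                # inf over n <= 1000 approximates liminf
    inf_f = np.minimum(inf_f, f(n))
print(np.sum(inf_f) * dx)               # ≈ 1/1000, tending to 0 as more terms are taken
```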
- -Since $f(x) =\liminf_{n\to\infty} f_n(x) = \lim_{n\to \infty} \inf_{k\geq n} f_k(x)=\lim_{n\to\infty}{g_n(x)}$, we immediately see that f is measurable. - -Moreover, -$$ -\int_X fd\mu=\int_X\lim_n g_nd\mu. -$$ - -By the Monotone Convergence Theorem and property (1), the limit and integral may be interchanged: -$$ -\begin{align} \int_X fd\mu&=\lim_n\int_X g_nd\mu\\ &=\liminf_n\int_X g_nd\mu\\ &\leq \liminf_n\int_X f_nd\mu, \end{align} -$$ - -where the last step used property (2). - -To demonstrate that the monotone convergence theorem is not "hidden", the proof below does not use any properties of the Lebesgue integral except those established here. - -Denote by $\operatorname{SF}(f)$ the set of simple $(\mathcal{F}, \operatorname{\mathcal B}_{\R_{\geq 0}})$-measurable functions $s:X\to [0,\infty)$ such that $0\leq s\leq f$ on $X$. We will need the following three basic properties of the Lebesgue integral: - -*If $f \leq g$ everywhere on $X,$ then -$$ -\int_X fd\mu \leq \int_X gd\mu. -$$ - -*If $ X_1,X_2 \in \mathcal{F} $ and $X_1 \subseteq X_2, $ then -$$ -\int_{X_1} fd\mu \leq \int_{X_2} fd\mu. -$$ - -* If f is nonnegative and $S=\bigcup^\infty_{i=1}S_i$, where $S_1\subseteq\ldots\subseteq S_i\subseteq\ldots\subseteq S$ is a non-decreasing chain of $\mu$-measurable sets, then -$$ -\int_S fd\mu=\lim_{n\to\infty}\int_{S_n} fd\mu. -$$ - -Proof: 1. Since $f \leq g,$ we have -$$ - \operatorname{SF}(f) \subseteq \operatorname{SF}(g). -$$ - -By definition of the Lebesgue integral and the properties of supremum, -$$ -\int_X fd\mu = \sup_{s\in {\rm SF}(f)}\int_X sd\mu \leq \sup_{s\in {\rm SF}(g)}\int_X sd\mu = \int_X gd\mu. -$$ - -2. Let ${\mathbf 1}_{X_1}$ be the indicator function of the set $X_1.$ It can be deduced from the definition of the Lebesgue integral that -$$ - \int_{X_2} f\cdot {\mathbf 1}_{X_1} d\mu = \int_{X_1} f d\mu -$$ - -if we notice that, for every $s \in {\rm SF}(f\cdot {\mathbf 1}_{X_1}),$ $s=0$ outside of $X_1.$ Combined with the previous property, the inequality $ f\cdot {\mathbf 1}_{X_1} \leq f$ implies -$$ - \int_{X_1} f d\mu = \int_{X_2} f\cdot {\mathbf 1}_{X_1} d\mu \leq \int_{X_2} f d\mu. -$$ - -3. First note that the claim holds if f is the indicator function of a set, by monotonicity of measures. By linearity, this also immediately implies the claim for simple functions. - -Since any simple function supported on Sn is simple and supported on X, we must have -$$ -\int_X fd\mu\geq\lim_{n\to\infty}\int_{S_n} fd\mu. -$$ - -For the reverse, suppose g ∈ SF(f) with $\textstyle \int_X fd\mu-\epsilon\leq\int_X gd\mu$. By the above, -$$ -\int_X fd\mu-\epsilon\leq\int_X gd\mu=\lim_{n\to\infty}\int_{S_n} gd\mu\leq\lim_{n\to\infty}\int_{S_n} fd\mu. -$$ - -Now we turn to the main theorem. - -Step 1. $g_n=g_n(x)$ is $(\mathcal{F}, \operatorname{\mathcal B}_{\R_{\geq 0}})$-measurable, for every $n\geq 1$, as is $f$. - -Proof: Recall that the closed intervals generate the Borel σ-algebra. Thus it suffices to show, for every $t\in [-\infty,+\infty]$, that $g^{-1}_n([t,+\infty])\in\mathcal{F}$. Now observe that -$$ -\begin{align} g^{-1}_n([t,+\infty])&=\left\{x\in X\mid g_n(x)\geq t\right\}\\[3pt] &=\bigcap_{k}\left\{x\in X\mid f_k(x)\geq t\right\}\\[3pt] &=\bigcap_{k} f^{-1}_k([t,+\infty]) \end{align} -$$ - -Every set on the right-hand side is from $\mathcal{F}$, which is closed under countable intersections. Thus the left-hand side is also a member of $\mathcal{F}$. - -Similarly, it is enough to verify that $f^{-1}([0,t])\in\mathcal{F}$, for every $t\in [-\infty,+\infty]$.
Since the sequence $\{g_n(x)\}$ pointwise non-decreases, -$$ -f^{-1}([0,t])=\bigcap_{n}g^{-1}_n([0,t])\in\mathcal{F}. -$$ - -Step 2. Given a simple function $s\in\operatorname{SF}(f)$ and a real number $t\in (0,1)$, define -$$ -B^{s,t}_k=\{x\in X\mid t\cdot s(x)\leq g_k(x)\}\subseteq X. -$$ - -Then $B^{s,t}_k\in\mathcal{F}$, $B^{s,t}_k\subseteq B^{s,t}_{k+1}$, and $\textstyle X=\bigcup_k B^{s,t}_k$. - -Proof: Step 2a. To prove the first claim, write s as a weighted sum of indicator functions of disjoint sets: -$$ -s=\sum^m_{i=1}c_i\cdot\mathbf{1}_{A_i}. -$$ - -Then -$$ -B^{s,t}_k=\bigcup^m_{i=1}\Bigl(g^{-1}_k\Bigl([t\cdot c_i,+\infty]\Bigr)\cap A_i\Bigr). -$$ - -Since the pre-image $g^{-1}_k\Bigl([t\cdot c_i,+\infty]\Bigr)$ of the Borel set $[t\cdot c_i,+\infty]$ under the measurable function $g_k$ is measurable, and $\sigma$-algebras are closed under finite intersections and unions, the first claim follows. - -Step 2b. To prove the second claim, note that, for each $k$ and every $x\in X$, $g_k(x) \leq g_{k+1}(x).$ - -Step 2c. To prove the third claim, suppose for contradiction there exists -$$ -x_0\in X\setminus\bigcup_k B^{s,t}_k=\bigcap_k(X\setminus B^{s,t}_k). -$$ - -Then $g_k(x_0) < t\cdot s(x_0)$ for every $k$; since $g_k(x_0)\ge 0$, this forces $s(x_0)>0$, and letting $k\to\infty$ yields $f(x_0) \le t\cdot s(x_0) < s(x_0) \le f(x_0)$, a contradiction. - -Step 3. By Step 2 and property (3) of the integral, for every $s\in\operatorname{SF}(f)$ and $t\in(0,1)$, -$$ -t\int_X sd\mu=\lim_k\int_{B^{s,t}_k} t\cdot sd\mu\leq\lim_k\int_X g_kd\mu, -$$ -and letting $t\to 1$ and then taking the supremum over all $s\in\operatorname{SF}(f)$ gives -$$ -\begin{align} \int_X f d\mu&=\sup_{s\in\operatorname{SF}(f)}\int_X sd\mu\\ &\leq\lim_k\int_X g_kd\mu\\ &=\liminf_k\int_X g_kd\mu\\ &\leq\liminf_k\int_X f_kd\mu \end{align} -$$ - -The proof is complete. - -Equip the space $S$ with the Borel σ-algebra and the Lebesgue measure. - -* Example for a probability space: Let $S=[0,1]$ denote the unit interval. For every natural number $n$ define -$$ -f_n(x)=\begin{cases}n&\text{for }x\in (0,1/n),\\ 0&\text{otherwise.} \end{cases} -$$ - -* Example with uniform convergence: Let $S$ denote the set of all real numbers. Define -$$ -f_n(x)=\begin{cases}\frac1n&\text{for }x\in [0,n],\\ 0&\text{otherwise.} \end{cases} -$$ - -These sequences $(f_n)_{n\in\N}$ converge on $S$ pointwise (respectively uniformly) to the zero function (with zero integral), but every $f_n$ has integral one. - -A suitable assumption concerning the negative parts of the sequence f1, f2, . . . of functions is necessary for Fatou's lemma, as the following example shows. Let S denote the half line [0,∞) with the Borel σ-algebra and the Lebesgue measure. For every natural number n define -$$ -f_n(x)=\begin{cases}-\frac1n&\text{for }x\in [n,2n],\\ 0&\text{otherwise.} \end{cases} -$$ - -This sequence converges uniformly on S to the zero function and the limit, 0, is reached in a finite number of steps: for every x ≥ 0, if n > x, then fn(x) = 0. However, every function fn has integral -1. Contrary to Fatou's lemma, this value is strictly less than the integral of the limit (0). - -As discussed below, the problem is that there is no uniform integrable bound on the sequence from below, while 0 is the uniform bound from above. - -(Reverse Fatou lemma.) Let f1, f2, . . . be a sequence of extended real-valued measurable functions defined on a measure space (S,Σ,μ). If there exists a non-negative integrable function g on S such that fn ≤ g for all n, then -$$ -\limsup_{n\to\infty}\int_S f_nd\mu\leq\int_S\limsup_{n\to\infty}f_nd\mu. -$$ - -Note: Here g integrable means that g is measurable and that $\textstyle\int_S gd\mu<\infty$. - -We apply linearity of the Lebesgue integral and Fatou's lemma to the sequence $g - f_n.$ Since $\textstyle\int_Sgd\mu < +\infty,$ this sequence is defined $\mu$-almost everywhere and non-negative. - -Let f1, f2, . . .
be a sequence of extended real-valued measurable functions defined on a measure space (S,Σ,μ). If there exists an integrable function g on S such that fn ≥ -g for all n, then -$$ -\int_S \liminf_{n\to\infty} f_nd\mu \le \liminf_{n\to\infty} \int_S f_nd\mu. -$$ - -Apply Fatou's lemma to the non-negative sequence given by fn + g. - -If in the previous setting the sequence f1, f2, . . . converges pointwise to a function f μ-almost everywhere on S, then -$$ -\int_S fd\mu \le \liminf_{n\to\infty} \int_S f_nd\mu. -$$ - -Note that f has to agree with the limit inferior of the functions fn almost everywhere, and that the values of the integrand on a set of measure zero have no influence on the value of the integral. - -The last assertion also holds if the sequence f1, f2, . . . converges in measure to a function f. - -There exists a subsequence such that -$$ -\lim_{k\to\infty} \int_S f_{n_k}d\mu=\liminf_{n\to\infty} \int_S f_nd\mu. -$$ - -Since this subsequence also converges in measure to f, there exists a further subsequence, which converges pointwise to f almost everywhere, hence the previous variation of Fatou's lemma is applicable to this subsubsequence. - -In all of the above statements of Fatou's Lemma, the integration was carried out with respect to a single fixed measure μ. Suppose that μn is a sequence of measures on the measurable space (S,Σ) such that (see Convergence of measures) -$$ -\mu_n(E)\to \mu(E),~\forall E\in \Sigma. -$$ - -Then, with fn non-negative integrable functions and f being their pointwise limit inferior, we have -$$ - \int_S fd\mu \leq \liminf_{n\to \infty} \int_S f_n d\mu_n. -$$ - -In probability theory, by a change of notation, the above versions of Fatou's lemma are applicable to sequences of random variables X1, X2, . . . defined on a probability space $\scriptstyle(\Omega,\mathcal F,\mathbb P)$; the integrals turn into expectations. In addition, there is also a version for conditional expectations. - -Let X1, X2, . . . be a sequence of non-negative random variables on a probability space $\scriptstyle(\Omega,\mathcal F,\mathbb P)$ and let $\scriptstyle \mathcal G\subset\mathcal F$ be a sub-σ-algebra. Then -$$ -\mathbb{E}\Bigl[\liminf_{n\to\infty}X_n\Big|\mathcal G\Bigr]\le\liminf_{n\to\infty}\mathbb{E}[X_n|\mathcal G] -$$ almost surely. - -Note: Conditional expectation for non-negative random variables is always well defined; finite expectation is not needed. - -Besides a change of notation, the proof is very similar to the one for the standard version of Fatou's lemma above; however, the monotone convergence theorem for conditional expectations has to be applied. - -Let X denote the limit inferior of the Xn. For every natural number k define pointwise the random variable -$$ -Y_k=\inf_{n\ge k}X_n. -$$ - -Then the sequence Y1, Y2, . . . is increasing and converges pointwise to X. - -For k ≤ n, we have Yk ≤ Xn, so that -$$ -\mathbb{E}[Y_k|\mathcal G]\le\mathbb{E}[X_n|\mathcal G] -$$ almost surely - -by the monotonicity of conditional expectation, hence -$$ -\mathbb{E}[Y_k|\mathcal G]\le\inf_{n\ge k}\mathbb{E}[X_n|\mathcal G] -$$ almost surely, - -because the countable union of the exceptional sets of probability zero is again a null set.
- -Using the definition of X, its representation as pointwise limit of the Yk, the monotone convergence theorem for conditional expectations, the last inequality, and the definition of the limit inferior, it follows that almost surely - - - -\begin{align} - -\mathbb{E}\Bigl[\liminf_{n\to\infty}X_n\Big|\mathcal G\Bigr] - -&=\mathbb{E}[X|\mathcal G] - -=\mathbb{E}\Bigl[\lim_{k\to\infty}Y_k\Big|\mathcal G\Bigr] - -=\lim_{k\to\infty}\mathbb{E}[Y_k|\mathcal G]\\ - -&\le\lim_{k\to\infty} \inf_{n\ge k}\mathbb{E}[X_n|\mathcal G] - -=\liminf_{n\to\infty}\mathbb{E}[X_n|\mathcal G]. - -\end{align} - - - -Let X1, X2, . . . be a sequence of random variables on a probability space $\scriptstyle(\Omega,\mathcal F,\mathbb P)$ and let -$$ -\scriptstyle \mathcal G\subset\mathcal F -$$ be a sub-σ-algebra. If the negative parts -$$ -X_n^-:=\max\{-X_n,0\},\qquad n\in{\mathbb N}, -$$ - -are uniformly integrable with respect to the conditional expectation, in the sense that, for ε > 0 there exists a c > 0 such that - -\mathbb{E}\bigl[X_n^-1_{\{X_n^->c\}}|\mathcal G\bigr]<\varepsilon, - -\qquad\text{for all }n\in\mathbb{N},\text{almost surely}, - -then -$$ -\mathbb{E}\Bigl[\liminf_{n\to\infty}X_n\Big|\mathcal G\Bigr]\le\liminf_{n\to\infty}\mathbb{E}[X_n|\mathcal G] -$$ almost surely. - -Note: On the set where -$$ -X:=\liminf_{n\to\infty}X_n -$$ - -satisfies -$$ -\mathbb{E}[\max\{X,0\}|\mathcal G]=\infty, -$$ - -the left-hand side of the inequality is considered to be plus infinity. The conditional expectation of the limit inferior might not be well defined on this set, because the conditional expectation of the negative part might also be plus infinity. - -Let ε > 0. Due to uniform integrability with respect to the conditional expectation, there exists a c > 0 such that - -\mathbb{E}\bigl[X_n^-1_{\{X_n^->c\}}|\mathcal G\bigr]<\varepsilon - -\qquad\text{for all }n\in\mathbb{N},\text{almost surely}. - -Since -$$ -X+c\le\liminf_{n\to\infty}(X_n+c)^+, -$$ - -where x+ := max{x,0} denotes the positive part of a real x, monotonicity of conditional expectation (or the above convention) and the standard version of Fatou's lemma for conditional expectations imply - -\mathbb{E}[X|\mathcal G]+c - -\le\mathbb{E}\Bigl[\liminf_{n\to\infty}(X_n+c)^+\Big|\mathcal G\Bigr] - -\le\liminf_{n\to\infty}\mathbb{E}[(X_n+c)^+|\mathcal G] almost surely. - -Since -$$ -(X_n+c)^+=(X_n+c)+(X_n+c)^-\le X_n+c+X_n^-1_{\{X_n^->c\}}, -$$ - -we have - -\mathbb{E}[(X_n+c)^+|\mathcal G] - -\le\mathbb{E}[X_n|\mathcal G]+c+\varepsilon almost surely, - -hence - -\mathbb{E}[X|\mathcal G]\le - -\liminf_{n\to\infty}\mathbb{E}[X_n|\mathcal G]+\varepsilon almost surely. - -This implies the assertion. diff --git a/wiki/wikipedia/4030.txt b/wiki/wikipedia/4030.txt deleted file mode 100644 index 3d6b179dbe4dd7c4fe842e1473843a8604f422ad..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4030.txt +++ /dev/null @@ -1,39 +0,0 @@ -In mathematics, a transitive reduction of a directed graph D is another directed graph with the same vertices and as few edges as possible, such that for all pairs of vertices v, w a (directed) path from v to w in D exists if and only if such a path exists in the reduction. Transitive reductions were introduced by Aho, who provided tight bounds on the computational complexity of constructing them. - -More technically, the reduction is a directed graph that has the same reachability relation as D. 
Equivalently, D and its transitive reduction should have the same transitive closure as each other, and the transitive reduction of D should have as few edges as possible among all graphs with that property. - -The transitive reduction of a finite directed acyclic graph (a directed graph without directed cycles) is unique and is a subgraph of the given graph. However, uniqueness fails for graphs with (directed) cycles, and for infinite graphs not even existence is guaranteed. - -The closely related concept of a minimum equivalent graph is a subgraph of D that has the same reachability relation and as few edges as possible. The difference is that a transitive reduction does not have to be a subgraph of D. For finite directed acyclic graphs, the minimum equivalent graph is the same as the transitive reduction. However, for graphs that may contain cycles, minimum equivalent graphs are NP-hard to construct, while transitive reductions can be constructed in polynomial time. - -Transitive reduction can be defined for an abstract binary relation on a set, by interpreting the pairs of the relation as arcs in a directed graph. - -The transitive reduction of a finite directed graph G is a graph with the fewest possible edges that has the same reachability relation as the original graph. That is, if there is a path from a vertex x to a vertex y in graph G, there must also be a path from x to y in the transitive reduction of G, and vice versa. The following image displays drawings of graphs corresponding to a non-transitive binary relation (on the left) and its transitive reduction (on the right). - -
[Image: a directed graph realizing a non-transitive binary relation (left) and its transitive reduction (right).]
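The definition can also be checked computationally. The following is a minimal, unoptimized sketch (our own illustration, not the algorithm of Aho et al. discussed later in this article): for a DAG stored as adjacency sets, an edge (u, v) is kept exactly when v cannot be reached from u without using that edge. All function and variable names are invented for the example.

```python
def reachable(adj, src, dst, skip_direct=False):
    """Iterative DFS: is dst reachable from src?
    With skip_direct, the direct edge src -> dst is ignored."""
    stack, seen = [src], set()
    while stack:
        u = stack.pop()
        for v in adj.get(u, set()):
            if skip_direct and u == src and v == dst:
                continue  # ignore the direct edge itself
            if v == dst:
                return True
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return False

def transitive_reduction(adj):
    """Transitive reduction of a DAG given as {vertex: set of successors}:
    keep edge (u, v) only if no other path leads from u to v."""
    return {u: {v for v in vs if not reachable(adj, u, v, skip_direct=True)}
            for u, vs in adj.items()}

# A diamond with a redundant shortcut edge a -> d:
dag = {"a": {"b", "c", "d"}, "b": {"d"}, "c": {"d"}, "d": set()}
print(transitive_reduction(dag))
# {'a': {'b', 'c'}, 'b': {'d'}, 'c': {'d'}, 'd': set()}  (set order may vary)
```

The per-edge searches make this sketch slow on large inputs; the matrix formulation of Aho et al., described below, is the route to better bounds on dense graphs.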
    - -The transitive reduction of a finite directed acyclic graph G is unique, and consists of the edges of G that form the only path between their endpoints. In particular, it is always a subgraph of the given graph. For this reason, the transitive reduction coincides with the minimum equivalent graph in this case. - -In the mathematical theory of binary relations, any relation R on a set X may be thought of as a directed graph that has the set X as its vertex set and that has an arc xy for every ordered pair of elements that are related in R. In particular, this method lets partially ordered sets be reinterpreted as directed acyclic graphs, in which there is an arc xy in the graph whenever there is an order relation x < y between the given pair of elements of the partial order. When the transitive reduction operation is applied to a directed acyclic graph that has been constructed in this way, it generates the covering relation of the partial order, which is frequently given visual expression by means of a Hasse diagram. - -Transitive reduction has been used on networks which can be represented as directed acyclic graphs (e.g. citation graphs or citation networks) to reveal structural differences between networks. - -In a finite graph that has cycles, the transitive reduction may not be unique: there may be more than one graph on the same vertex set that has a minimum number of edges and has the same reachability relation as the given graph. Additionally, it may be the case that none of these minimum graphs is a subgraph of the given graph. Nevertheless, it is straightforward to characterize the minimum graphs with the same reachability relation as the given graph G. If G is an arbitrary directed graph, and H is a graph with the minimum possible number of edges having the same reachability relation as G, then H consists of - -*A directed cycle for each strongly connected component of G, connecting together the vertices in this component - -*An edge xy for each edge XY of the transitive reduction of the condensation of G, where X and Y are two strongly connected components of G that are connected by an edge in the condensation, x is any vertex in component X, and y is any vertex in component Y. The condensation of G is a directed acyclic graph that has a vertex for every strongly connected component of G and an edge for every two components that are connected by an edge in G. In particular, because it is acyclic, its transitive reduction can be defined as in the previous section. - -The total number of edges in this type of transitive reduction is then equal to the number of edges in the transitive reduction of the condensation, plus the number of vertices in nontrivial strongly connected components (components with more than one vertex). - -The edges of the transitive reduction that correspond to condensation edges can always be chosen to be a subgraph of the given graph G. However, the cycle within each strongly connected component can only be chosen to be a subgraph of G if that component has a Hamiltonian cycle, something that is not always true and is difficult to check. Because of this difficulty, it is NP-hard to find the smallest subgraph of a given graph G with the same reachability (its minimum equivalent graph). - -As Aho et al. 
show, when the time complexity of graph algorithms is measured only as a function of the number n of vertices in the graph, and not as a function of the number of edges, transitive closure and transitive reduction of directed acyclic graphs have the same complexity. It had already been shown that transitive closure and multiplication of Boolean matrices of size n × n had the same complexity as each other, so this result put transitive reduction into the same class. The best exact algorithms for matrix multiplication, as of 2015, take time O(n^2.3729), and this gives the fastest known worst-case time bound for transitive reduction in dense graphs. - -To prove that transitive reduction is as easy as transitive closure, Aho et al. rely on the already-known equivalence with Boolean matrix multiplication. They let A be the adjacency matrix of the given directed acyclic graph, and B be the adjacency matrix of its transitive closure (computed using any standard transitive closure algorithm). Then an edge uv belongs to the transitive reduction if and only if there is a nonzero entry in row u and column v of matrix A, and there is a zero entry in the same position of the matrix product AB. In this construction, the nonzero elements of the matrix AB represent pairs of vertices connected by paths of length two or more. - -To prove that transitive reduction is as hard as transitive closure, Aho et al. construct from a given directed acyclic graph G another graph H, in which each vertex of G is replaced by a path of three vertices, and each edge of G corresponds to an edge in H connecting the corresponding middle vertices of these paths. In addition, in the graph H, Aho et al. add an edge from every path start to every path end. In the transitive reduction of H, there is an edge from the path start for u to the path end for v, if and only if edge uv does not belong to the transitive closure of G. Therefore, if the transitive reduction of H can be computed efficiently, the transitive closure of G can be read off directly from it. - -When measured both in terms of the number n of vertices and the number m of edges in a directed acyclic graph, transitive reductions can also be found in time O(nm), a bound that may be faster than the matrix multiplication methods for sparse graphs. To do so, apply a linear time longest path algorithm in the given directed acyclic graph, for each possible choice of starting vertex. From the computed longest paths, keep only those of length one (single edge); in other words, keep those edges (u,v) for which there exists no other path from u to v. This O(nm) time bound matches the complexity of constructing transitive closures by using depth first search or breadth first search to find the vertices reachable from every choice of starting vertex, so again with these assumptions transitive closures and transitive reductions can be found in the same amount of time. diff --git a/wiki/wikipedia/4031.txt b/wiki/wikipedia/4031.txt deleted file mode 100644 index 498eea0c8e5457482d091b265be00fd963b775f5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4031.txt +++ /dev/null @@ -1,9 +0,0 @@ -A sociogram is a graphic representation of social links that a person has. It is a graph drawing that plots the structure of interpersonal relations in a group situation. - -Sociograms were developed by Jacob L. Moreno to analyze choices or preferences within a group. They can diagram the structure and patterns of group interactions.
A sociogram can be drawn on the basis of many different criteria: social relations, channels of influence, lines of communication etc. - -Those points on a sociogram that have many choices are called stars. Those with few or no choices are called isolates. Individuals who choose each other are known to have made a mutual choice. One-way choice refers to individuals who choose someone but the choice is not reciprocated. Cliques are groups of three or more people within a larger group who all choose each other (mutual choice). - -Sociograms are the charts or tools used to find the sociometry of a social space. - -Under the social discipline model, sociograms are sometimes used to reduce misbehavior in a classroom environment. A sociogram is constructed after students answer a series of questions probing for affiliations with other classmates. The diagram can then be used to identify pathways for social acceptance for misbehaving students. In this context, the resulting sociograms are known as friendship charts. Often, the most important person or thing is in a bigger bubble in relation to everyone else. The size of the bubble represents importance, with the biggest bubble meaning most important and the smallest representing the least important. diff --git a/wiki/wikipedia/4032.txt b/wiki/wikipedia/4032.txt deleted file mode 100644 index d94b954ab3c9c6c0357787ed80fa9d11306c38ca..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4032.txt +++ /dev/null @@ -1 +0,0 @@ -In structural proof theory, the nested sequent calculus is a reformulation of the sequent calculus to allow deep inference. diff --git a/wiki/wikipedia/4033.txt b/wiki/wikipedia/4033.txt deleted file mode 100644 index 32af0b633b7656f7728e42f60a0f215852a2ee4f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4033.txt +++ /dev/null @@ -1,19 +0,0 @@ -Provability logic is a modal logic, in which the box (or "necessity") operator is interpreted as 'it is provable that'. The point is to capture the notion of a proof predicate of a reasonably rich formal theory, such as Peano arithmetic. - -There are a number of provability logics, some of which are covered in the literature. The basic system is generally referred to as GL (for Gödel–Löb) or L or K4W. It can be obtained by adding the modal version of Löb's theorem to the logic K (or K4). - -Namely, the axioms of GL are all tautologies of classical propositional logic plus all formulas of one of the following forms: - -* Distribution axiom: □(p → q) → (□p → □q); - -* Löb's axiom: □(□p → p) → □p. - -And the rules of inference are: - -* Modus ponens: From p → q and p conclude q; - -* Necessitation: From $\vdash$ p conclude $\vdash$ □p. - -The arithmetical semantics of GL was pioneered by Robert M. Solovay in 1976. Since then, until his death in 1996, the prime inspirer of the field was George Boolos. Significant contributions to the field have been made by Sergei N. Artemov, Lev Beklemishev, Giorgi Japaridze, Dick de Jongh, Franco Montagna, Giovanni Sambin, Vladimir Shavrukov, Albert Visser and others. - -Interpretability logics and Japaridze's polymodal logic present natural extensions of provability logic.
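Because GL is generated by axiom schemes, recognizing whether a formula instantiates a scheme is a purely syntactic matter. The sketch below is our own toy illustration (with an ad-hoc tuple encoding of modal formulas, not any standard library); it tests whether a formula has the shape □(□A → A) → □A of Löb's axiom.

```python
# Formulas: propositional letters as strings, ("box", f), or ("imp", f, g).
def is_lob_instance(formula):
    """Check the shape box(box(A) -> A) -> box(A) of Löb's axiom."""
    if not (isinstance(formula, tuple) and formula[0] == "imp"):
        return False
    _, left, right = formula
    if left[0] != "box" or right[0] != "box":
        return False
    inner, a = left[1], right[1]
    return inner == ("imp", ("box", a), a)

lob = ("imp", ("box", ("imp", ("box", "p"), "p")), ("box", "p"))
print(is_lob_instance(lob))                         # True
print(is_lob_instance(("imp", ("box", "p"), "p")))  # False
```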
diff --git a/wiki/wikipedia/4034.txt b/wiki/wikipedia/4034.txt deleted file mode 100644 index 96b796924380d1cf41c1a50aff5dcee127e36a03..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4034.txt +++ /dev/null @@ -1,10 +0,0 @@ -In mathematics, the Caristi fixed-point theorem (also known as the Caristi–Kirk fixed-point theorem) generalizes the Banach fixed-point theorem for maps of a complete metric space into itself. Caristi's fixed-point theorem modifies the ε-variational principle of Ekeland (1974, 1979). The conclusion of Caristi's theorem is equivalent to metric completeness, as proved by Weston (1977). The original result is due to the mathematicians James Caristi and William Arthur Kirk. - -Caristi fixed-point theorem can be applied to derive other classical fixed-point results, and also to prove the existence of bounded solutions of a functional equation. - -Let (X, d) be a complete metric space. Let T : X → X and f : X → [0, +∞) be a lower semicontinuous function from X into the non-negative real numbers. Suppose that, for all points x in X, -$$ -d \big( x, T(x) \big) \leq f(x) - f \big( T(x) \big). -$$ - -Then T has a fixed point in X, i.e. a point x0 such that T(x0) = x0. The proof of this result utilizes Zorn's lemma to guarantee the existence of a minimal element which turns out to be a desired fixed point. diff --git a/wiki/wikipedia/4035.txt b/wiki/wikipedia/4035.txt deleted file mode 100644 index bc72f7a369d3882c8163e18d0371b00bbd6d1dca..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4035.txt +++ /dev/null @@ -1,46 +0,0 @@ -The Euclidean minimum spanning tree or EMST is a minimum spanning tree of a set of $n$ points in the plane or higher-dimensional Euclidean space. It connects the points by a system of line segments, so that any two points can reach each other along a path through the line segments, and it selects line segments that minimize the sum of the Euclidean distances between directly-connected pairs of points. - -In the plane, the Euclidean minimum spanning tree is a subgraph of the Delaunay triangulation. Using this fact, the Euclidean minimum spanning tree for a given set of planar points may be found in time $O(n\log n)$ (expressed in Big O notation), using algorithms based on comparisons of simple combinations of input coordinates. Faster randomized algorithms are known in models of computation allowing more powerful operations such as integer rounding. - -In higher dimensions ($d\ge 3$), finding an optimal algorithm remains an open problem. - -An asymptotic lower bound of $\Omega(n\log n)$ of the EMST problem can be established in restricted models of computation, such as the algebraic decision tree and algebraic computation tree models, in which the algorithm has access to the input points only through certain restricted primitives that perform simple algebraic computations on their coordinates: in these models, the closest pair of points problem requires $\Omega(n\log n)$ time, but the closest pair is necessarily an edge of the EMST, so the EMST also requires this much time. However, if the input points have integer coordinates and bitwise operations and table indexing operations are permitted using those coordinates, then faster algorithms are possible. 
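As a concrete illustration (our own sketch, with invented names), the following computes an EMST by exactly the quadratic-time baseline discussed in the next paragraph: Prim's algorithm run on the implicit complete graph of pairwise Euclidean distances.

```python
import math

def emst(points):
    """Naive O(n^2) Euclidean MST via Prim's algorithm on the
    implicit complete graph. Returns a list of edges (i, j)."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    # best[v] = (distance from v to the tree, nearest tree vertex)
    best = {v: (dist(0, v), 0) for v in range(1, n)}
    edges = []
    while best:
        v = min(best, key=lambda u: best[u][0])
        _, parent = best.pop(v)
        edges.append((parent, v))
        for u in best:  # relax distances against the new tree vertex v
            if dist(v, u) < best[u][0]:
                best[u] = (dist(v, u), v)
    return edges

pts = [(0, 0), (1, 0), (2, 0), (0, 1)]
print(emst(pts))  # [(0, 1), (1, 2), (0, 3)]
```

For large inputs one would instead restrict the candidate edges to a Delaunay triangulation, as described below.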
- -The simplest algorithm to find an EMST in two dimensions, given $n$ points, is to actually construct the complete graph on $n$ vertices, which has $n(n-1)/2$ edges, compute each edge weight by finding the distance between each pair of points, and then run a standard minimum spanning tree algorithm (such as Prim's algorithm or Kruskal's algorithm) on it. This solution uses time and space that are both $O(n^2)$. - -A faster approach to finding the EMST in a plane is to note that it is a subgraph of every Delaunay triangulation of the $n$ points, a much-reduced set of edges: - -# Compute the Delaunay triangulation in $O(n\log n)$ time and $O(n)$ space. Because the Delaunay triangulation is a planar graph, and there are no more than three times as many edges as vertices in any planar graph, this generates only $O(n)$ edges. - -# Label each edge with its length. - -# Run a graph minimum spanning tree algorithm on this graph to find a minimum spanning tree. Since there are $O(n)$ edges, this requires $O(n\log n)$ time using any of the standard minimum spanning tree algorithms such as Borůvka's algorithm, Prim's algorithm, or Kruskal's algorithm. - -The final result is an algorithm taking $O(n\log n)$ time and $O(n)$ space. - -If the input coordinates are integers and can be used as array indices, faster algorithms are possible: the Delaunay triangulation can be constructed by a randomized algorithm in $O(n\log\log n)$ expected time. Additionally, since the Delaunay triangulation is a planar graph, its minimum spanning tree can be found in linear time by a variant of Borůvka's algorithm that removes all but the cheapest edge between each pair of components after each stage of the algorithm. Therefore, the total expected time for this algorithm is $O(n\log\log n)$. - -The problem can also be generalized to $n$ points in the $d$-dimensional space $\R^d$. In higher dimensions, the connectivity determined by the Delaunay triangulation (which, likewise, partitions the convex hull into $d$-dimensional simplices) contains the minimum spanning tree; however, the triangulation might contain the complete graph. Therefore, finding the Euclidean minimum spanning tree as a spanning tree of the complete graph or as a spanning tree of the Delaunay triangulation both take $O(dn^2)$ time. For three dimensions it is possible to find the minimum spanning tree in time $O\bigl((n\log n)^{4/3}\bigr)$, and in any dimension greater than three it is possible to solve it in a time that is faster than the quadratic time bound for the complete graph and Delaunay triangulation algorithms. For uniformly random point sets it is possible to compute minimum spanning trees as quickly as sorting. Using a well-separated pair decomposition, it is possible to produce a $(1+\varepsilon)$-approximation in $O(n\log n)$ time. - -All edges of an EMST are edges of a relative neighborhood graph, which in turn are edges of a Gabriel graph, which are edges in a Delaunay triangulation of the points, as can be proven via the equivalent contrapositive statement: every edge not in a Delaunay triangulation is also not in any EMST. The proof is based on two properties of minimum spanning trees and Delaunay triangulations: - -#(the cycle property of minimum spanning trees): For any cycle $C$ in the graph, if the weight of an edge $e$ of $C$ is larger than the weights of other edges of $C$, then this edge cannot belong to an MST. 
- -#(a property of Delaunay triangulations): If there is a circle with two of the input points on its boundary which contains no other input points, the line between those two points is an edge of every Delaunay triangulation. - -Consider an edge $e$ between two input points $p$ and $q$ which is not an edge of a Delaunay triangulation. Property 2 implies that the circle $C$ with $e$ as its diameter must contain some other point $r$ inside. But then $r$ is closer to both $p$ and $q$ than they are to each other, and so the edge from $p$ to $q$ is the longest edge in the cycle of points $p\to q\to r\to p$, and by property 1 $e$ is not in any EMST. - -The expected size of the EMST for large numbers of points has been determined by J. Michael Steele. If $f$ is the density of the probability function for picking points, then for large $n$ and $d \neq 1$ the size of the EMST is approximately -$$ -c(d) n^{\frac{d-1}{d}} \int_{\mathbb{R}^d} f(x)^{\frac{d-1}{d}} dx -$$ - -where $c(d)$ is a constant depending only on the dimension $d$. The exact values of the constants are unknown but can be estimated from empirical evidence. - -An obvious application of Euclidean minimum spanning trees is to find the cheapest network of wires or pipes to connect a set of places, assuming the links cost a fixed amount per unit length. However, while these give an absolute lower bound on the amount of connection needed, most such networks prefer a k-connected graph to a tree, so that failure of any individual link will not split the network into parts. - -Another application of EMSTs is a constant-factor approximation algorithm for approximately solving the Euclidean traveling salesman problem, the version of the traveling salesman problem on a set of points in the plane with edges labelled by their length. This realistic variation of the problem can be solved within a factor of 2 by computing the EMST, doing a walk along its boundary which outlines the entire tree, and then removing all but one occurrence of each vertex from this walk. - -The realization problem for Euclidean minimum spanning trees is stated as follows: Given a tree $T=(V,E)$, find a location $D(u)$ for each vertex $u\in V$ so that $T$ is a minimum spanning tree of $\{D(u): u\in V\}$, or determine that no such locations exist. - -Testing of the existence of a realization in the plane is NP-hard. diff --git a/wiki/wikipedia/4036.txt b/wiki/wikipedia/4036.txt deleted file mode 100644 index aaf80321ec04d438d813ca4cf269594fc08885cd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4036.txt +++ /dev/null @@ -1,5 +0,0 @@ -In algebraic geometry, Zariski's connectedness theorem (due to Oscar Zariski) says that under certain conditions the fibers of a morphism of varieties are connected. It is an extension of Zariski's main theorem to the case when the morphism of varieties need not be birational. - -Zariski's connectedness theorem gives a rigorous version of the "principle of degeneration" introduced by Federigo Enriques, which says roughly that a limit of absolutely irreducible cycles is absolutely connected. - -Suppose that f is a proper surjective morphism of varieties from X to Y such that the function field of Y is separably closed in that of X. Then Zariski's connectedness theorem says that the inverse image of any normal point of Y is connected. An alternative version says that if f is proper and f* OX = OY, then f is surjective and the inverse image of any point of Y is connected.
diff --git a/wiki/wikipedia/4037.txt b/wiki/wikipedia/4037.txt deleted file mode 100644 index 14531c60055a42d85f9188c088f74cf36a1864b5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4037.txt +++ /dev/null @@ -1,235 +0,0 @@ -The breadth-first-search algorithm is a way to explore the vertexes of a graph layer by layer. It is a basic algorithm in graph theory which can be used as a part of other graph algorithms. For instance, BFS is used by Dinic's algorithm to find maximum flow in a graph. Moreover, BFS is also one of kernel algorithms in Graph500 benchmark, which is a benchmark for data-intensive supercomputing problems. This article discusses the possibility of speeding up BFS through the use of parallel computing. - -In the conventional sequential BFS algorithm, two data structures are created to store the frontier and the next frontier. The frontier contains the vertexes that have same distance(it is also called "level") from the source vertex, these vertexes need to be explored in BFS. Every neighbor of these vertexes will be checked, some of these neighbors which are not explored yet will be discovered and put into the next frontier. At the beginning of BFS algorithm, a given source vertex s is the only vertex in the frontier. All direct neighbors of s are visited in the first step, which form the next frontier. After each layer-traversal, the "next frontier" is switched to the frontier and new vertexes will be stored in the new next frontier. The following pseudo-code outlines the idea of it, in which the data structures for the frontier and next frontier are called FS and NS respectively. - -1 define bfs_sequential(graph(V,E), source s): - -2 for all v in V do - -3 d[v] = -1; - -4 d[s] = 0; level = 1; FS = {}; NS = {}; - -5 push(s, FS); - -6 while FS !empty do - -7 for u in FS do - -8 for each neighbour v of u do - -9 if d[v] = -1 then - -10 push(v, NS); - -11 d[v] = level; - -12 FS = NS, NS = {}, level = level + 1; - -As a simple and intuitive solution, the classic Parallel Random Access Machine (PRAM) approach is just an extension of the sequential algorithm that is shown above. The two for-loops (line 7 and line 8) can be executed in parallel. The update of the next frontier (line 10) and the increase of distance (line 11) need to be atomic. Atomic operations are program operations that can only run entirely without interruption and pause. - -However, there are two problems in this simple parallelization. Firstly, the distance-checking (line 9) and distance-updating operations (line 11) introduce two benign races. The reason of race is that a neighbor of one vertex can also be the neighbor of another vertex in the frontier. As a result, the distance of this neighbor may be examined and updated more than one time. Although these races waste resource and lead to unnecessary overhead, with the help of synchronization, they don't influence the correctness of BFS, so these races are benign. Secondly, in spite of the speedup of each layer-traversal due to parallel processing, a barrier synchronization is needed after every layer in order to completely discover all neighbor vertexes in the frontier. This layer-by-layer synchronization indicates that the steps of needed communication equals the longest distance between two vertexes, O(d), where O is the big O notation and d is the graph diameter. 
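For reference, the sequential pseudo-code above translates almost line by line into the following runnable sketch (our own Python rendering; FS, NS and d mirror the pseudo-code).

```python
from collections import defaultdict

def bfs_sequential(graph, source):
    """Level-synchronous BFS. graph: {vertex: iterable of neighbours}.
    Returns d, the level (distance) of every reachable vertex."""
    d = defaultdict(lambda: -1)  # -1 marks "not yet discovered"
    d[source] = 0
    level, fs = 1, [source]      # fs is the current frontier
    while fs:
        ns = []                  # ns collects the next frontier
        for u in fs:
            for v in graph.get(u, ()):
                if d[v] == -1:
                    d[v] = level
                    ns.append(v)
        fs, level = ns, level + 1
    return dict(d)

g = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs_sequential(g, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```

A PRAM-style parallelization of this sketch would distribute the two inner loops across processing entities, guarding the test-and-set of d[v] with atomic operations as described above.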
- -This simple parallelization's asymptotic complexity is same as sequential algorithm in the worst case, but some optimizations can be made to achieve better BFS parallelization, for example: - -#Mitigating barrier synchronization. Barrier synchronization is necessary after each layer-traversal to ensure the correctness of parallel BFS. As a result, reducing the cost of barrier synchronization is an effective way to speed up parallel BFS. - -#Load-balancing for neighbor discovering. Because there is a barrier synchronization after each layer-traversal, every processing entity must wait until the last of them finish its work. Therefore, the parallel entity which has the most neighbors decides the time consumption of this layer. With the optimization of load-balancing, the time of layer-traversal can be reduced. - -#Improving the locality of memory references. In parallel system with distributed memory, remote memory reference are getting data from other processing entities, which has usually extra communication cost compared to local memory reference. Thus, local memory reference is faster than remote memory reference. By designing a better data structure or improving the organization of data can lead to more local memory references and reduce the communications needed for remote memory references. - -Compared to parallel BFS with distributed memory, shared memory provides higher memory-bandwidth and lower latency. Because all processors share the memory together, all of them have the direct access to it. Thus, the developers don't need to program message passing process, which is necessary for distributed memory to get data from remote local memory. Therefore, the overhead of messages is avoided. - -However, the number of vertexes in each layer and the number of neighbors of each vertex are shown to be highly irregular, which leads to highly irregular memory accesses and work distribution of BFS. In parallel BFS, this feature reduces the benefits from parallelization due to unbalanced load. As a result, especially for parallel BFS, it is very important to make the parallel BFS on shared memory load-balanced. Moreover, exploring the data-locality can also speed up parallel process. - -Many parallel BFS algorithms on shared memory can be divided into two types: container centric approaches and vertex centric approaches. In the container centric approach, two data structures are created to store the current frontier and the next vertex frontier. The next vertex frontier is switched to the current frontier at the last of each step. There is a trade-off between the cost for synchronization and data locality according to the place where the data is stored. These two data structures can be held in every processing entity(such as thread) which supports data locality but needs extra load balancing mechanisms. Alternatively, they can be global to provide implicit load balancing, where special data structures are used for concurrent access from processing entities. But then those processing entities will work concurrently and more effort are required for synchronization. - -Besides, data organization of containers can be optimized. The typical data structure in serial BFS and some parallel BFS is FIFO Queue, as it is simple and fast where insertion and delete operation costs only constant time. - -Another alternative is the bag-structure. The insertion operation in a bag takes O(logn) time in the worst-case, whereas it takes only constant amortized time which is as fast as FIFO. 
Furthermore, union of two bags takes Θ(lgn) time where n is the number of elements in the smaller bag. The bag-split operation also takes Θ(lgn) time. With the help of bag-structure, a certain number of vertexes(according to granularity parameter) are stored in one bag and the bag-structure becomes the basic parallel entity. Moreover, the reducer can be combined with the bag-structure to write vertexes in parallel and traverse them efficiently. - -The vertex centric approach treats vertex as parallel entity,which enables parallel iteration. Each vertex is assigned to a parallel entity. This vertex centric approach might only work well if the graph depth is very low. Graph depth in BFS is defined as the maximum distance of any vertex in the graph to the source vertex. Therefore, the vertex centric approach is well-suited for GPUs if every thread is mapped to exactly one vertex. This algorithm is originally designed for IBM BlueGene/L systems, which has a 3D torus network architecture. Because the synchronization is the main extra cost for parallelized BFS, the authors of this paper also developed a scalable all-to-all communication based on point-to-point communications. After that, they also reduced the number of point-to-point communication, taking advantage of its high-bandwidth torus network. - -The main steps of BFS traversal in the following algorithm are: - -# processor view (line 8): construct the frontier FS with vertexes from local storage - -# global view (line 10–11): terminate the traversal if FS from all processors are empty - -# processor view (line 13): construct the next frontier based on the neighbors vertex of its FS, although some of their neighbors may be stored in other processors - -# global view (line 15–18): run an all-to-all communication to let each processor know, which local vertexes should be put into its local next frontier NS - -# processor view (line 20–22): receive messages from all other processors, update the distance value of their local vertexes in the current frontier, change its NS to FS - -1 define 1_D_distributed_memory_BFS( graph(V,E), source s): - -2 //normal initialization - -3 for all v in V do - -4 d[v] = -1; - -5 d[s] = 0; level = 0; FS = {}; NS = {}; - -6 //begin BFS traversal - -7 while True do: - -8 FS = {the set of local vertexes with level} - -9 //all vertexes traversed - -10 if FS = {} for all processors then: - -11 terminate the while loop - -12 //construct the NS based on local vertexes in current frontier - -13 NS = {neighbors of vertexes in FS, both local and not local vertexes} - -14 //synchronization: all-to-all communication - -15 for 0 <= j < p do: - -16 N_j = {vertexes in NS owned by processor j} - -17 send N_j to processor j - -18 receive N_j_rcv from processor j - -19 //combine the received message to form local next vertex frontier then update the level for them - -20 NS_rcv = Union(N_j_rcv) - -21 for v in NS_rcv and d[v] == -1 do - -22 d[v] = level + 1 - -Combined with multi-threading, the following pseudo code of 1D distributed memory BFS also specifies thread stack and thread barrier, which comes from the paper. - -With multi-threading, local vertexes in the frontier FS can be divided and assigned to different threads inside of one processor, which further parallel the BFS traversal. However, different from the methods above, we need more data structure for each individual threads. For instance, the thread stack, which is prepared for saving the neighbor vertexes from the vertexes of this thread. 
Each thread has p-1 local stacks, where p is the number of processors, because each thread must separate the messages destined for the other processors. For example, a thread will put neighbor vertexes into its j-th stack to form the message sent to processor j, if processor j is the owner of those vertexes. Moreover, thread barriers are also necessary for synchronization. As a result, although distributed memory with multi-threading might benefit from refinement of parallelization, it also introduces extra synchronization cost for threads. - -The main steps of BFS traversal in the following algorithm are: - -# thread view (line 19–22): based on vertexes assigned to itself, find the owner processor of neighbor vertexes, put them into thread stacks based on their owners. - -# processor view (line 23): run a thread barrier, wait until all threads (of the same processor) finish their job. - -# processor view (line 25–26): merge all thread stacks of all threads that have the same owner (those have the same destination for the next step). - -# global view (line 28–30): run an all-to-all communication with the master thread to let each processor know which local vertexes should be put into the next frontier. - -# processor view (line 31): run a thread barrier, wait until the communication (of the master thread) has finished. - -# processor view (line 33): assign vertexes from the next frontier to each thread. - -# thread view (line 34–36): if a vertex is not visited, update the distance value of that vertex and put it in the thread stack for the next frontier NS. - -# processor view (line 37): run a thread barrier, wait until all threads (of the same processor) finish their job. - -# processor view (line 39): aggregate thread stacks for the next frontier from every thread. - -# processor view (line 40): run a thread barrier, wait until all threads have sent all the vertexes in their stacks. - -1 define 1_D_distributed_memory_BFS_with_threads(graph(V,E), source s): - -2 // normal initialization - -3 for all v in V do - -4 d[v] = -1; - -5 level = 1; FS = {}; NS = {}; - -6 // find the index of the owner processor of source vertex s - -7 pu_s = find_owner(s); - -8 if pu_s = index_pu then - -9 push(s,FS); - -10 d[s] = 0; - -11 // message initialization - -12 for 0 <= j < p do - -13 sendBuffer_j = {} // p shared message buffers - -14 recvBuffer_j = {} // for MPI communication - -15 thrdBuffer_i_j = {} //thread-local stack for thread i - -16 // begin BFS traversal - -17 while FS != {} do - -18 //traverse vertexes and find owners of neighbor vertexes - -19 for each u in FS in parallel do - -20 for each neighbor v of u do - -21 pu_v = find_owner(v) - -22 push(v, thrdBuffer_i_(pu_v)) - -23 Thread Barrier - -24 // combine thread stack to form sendBuffer - -25 for 0 <= j < p do - -26 merge thrdBuffer_i_j in parallel - -27 // all-to-all communication - -28 All-to-all collective step with master thread: - -29 1. send data in sendBuffer - -30 2. receive and aggregate newly-visited vertexes into recvBuffer - -31 Thread Barrier - -32 // update level for newly-visited vertexes - -33 for each u in recvBuffer in parallel do - -34 if d[u] == -1 then - -35 d[u] = level - -36 push(u, NS_i) - -37 Thread Barrier - -38 // aggregate NS and form new FS - -39 FS = Union(NS_i) - -40 Thread Barrier - -41 level = level + 1 - -Because the BFS algorithm uses the adjacency matrix as the representation of the graph, the natural 2D decomposition of the matrix can also be an option to consider. In 2D partitioning, each processor has a 2D index (i,j).
The edges and vertexes are assigned to all processors with 2D block decomposition, in which the sub-adjacency matrix is stored. - -If there are in total P=R·C processors, then the adjacency matrix will be divided like below: -$$ -A=\begin{pmatrix} A^{(1)}_{1,1} & \dots & A^{(1)}_{1,C}\\ \vdots & \ddots & \vdots\\ A^{(1)}_{R,1} & \dots & A^{(1)}_{R,C}\\ \vdots & & \vdots\\ A^{(C)}_{1,1} & \dots & A^{(C)}_{1,C}\\ \vdots & \ddots & \vdots\\ A^{(C)}_{R,1} & \dots & A^{(C)}_{R,C} \end{pmatrix} -$$ - -There are C block columns and R·C block rows after this division. Each processor is in charge of C blocks; namely, processor (i,j) stores the blocks $A^{(1)}_{i,j}$ to $A^{(C)}_{i,j}$. The conventional 1D partitioning is equivalent to the 2D partitioning with R=1 or C=1. - -In general, the parallel edge processing based on 2D partitioning can be organized in two communication phases, an "expand" phase and a "fold" phase. - -In many real-world graphs, some vertexes have much higher degrees than average, as in a small-world graph. As mentioned before, one benign race in parallel BFS is that, if more than one vertex in the frontier has common neighbor vertexes, the distance of those neighbor vertexes will be checked many times. Although the distance update is still correct with the help of synchronization, resources are wasted. In fact, to find the vertexes for the next frontier, each unvisited vertex only needs to check whether any of its neighbor vertexes is in the frontier. This is also the core idea for direction optimization. Even better, each vertex can quickly find a parent by checking its incoming edges if a significant number of its neighbors are in the frontier. - -For 2D partitioning, DCSC (doubly compressed sparse columns), a storage format for hyper-sparse matrices, is more suitable; more information about DCSC can be found in the paper. Compact auxiliary structures can likewise be used just to check whether vertexes have already been visited in the top-down BFS. - -Graph500 is the first benchmark for data-intensive supercomputing problems. This benchmark first generates an edge tuple with two endpoints. Kernel 1 then constructs an undirected graph, in which edge weights are not assigned if only kernel 2 runs afterwards. Users can choose to run BFS in kernel 2 and/or Single-Source Shortest Path in kernel 3 on the constructed graph. The results of those kernels are verified and the running time is measured. - -Graph500 also provides two reference implementations for kernels 2 and 3. In the reference BFS, the exploration of vertexes simply sends messages to the target processors to inform them of visited neighbors. There is no extra load-balancing method. For synchronization, an AML (Active Messages Library, an SPMD communication library built on top of MPI3 and intended for fine-grained applications like Graph500) barrier ensures consistent traversal after each layer. The reference BFS is only used for correctness verification of results; thus, users should implement their own BFS algorithm based on their hardware. The choice of BFS is not constrained, as long as the output BFS tree is correct. - -The correctness of a result is based on comparison with the result of the reference BFS. Because only 64 search keys are sampled to run kernel 2 and/or kernel 3, a result is also considered correct when it differs from the reference result only because a search key is not in the samples. The kernel is run once for each of the 64 search keys, and the mean and variance of these runs measure the performance of a single search. - -Different from TOP500, the performance metric in Graph500 is Traversed Edges Per Second (TEPS).
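As a concrete reading of the metric (our own sketch, not Graph500 reference code): a single run's TEPS value divides the number of edges traversed by the BFS time, and rates of this kind are conventionally aggregated with a harmonic mean over the sampled search keys. The measurements below are invented.

```python
from statistics import harmonic_mean

def teps(edges_traversed, seconds):
    """Traversed edges per second for one BFS run."""
    return edges_traversed / seconds

# Hypothetical (edges, seconds) measurements for a few sampled search keys:
runs = [(1_200_000, 0.8), (1_150_000, 0.9), (1_300_000, 1.1)]
print(harmonic_mean([teps(m, t) for m, t in runs]))  # aggregate TEPS
```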
diff --git a/wiki/wikipedia/4038.txt b/wiki/wikipedia/4038.txt deleted file mode 100644 index 168234deedff213ee5584fc20697a70155bc50b5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4038.txt +++ /dev/null @@ -1,26 +0,0 @@ -Cramér's theorem is a fundamental result in the theory of large deviations, a subdiscipline of probability theory. It determines the rate function of a series of iid random variables. - -A weak version of this result was first shown by Harald Cramér in 1938. - -The logarithmic moment generating function (which is the cumulant-generating function) of a random variable is defined as: -$$ - \Lambda(t)=\log \operatorname E [\exp(tX_1)]. -$$ - -Let $ X_1, X_2, \dots $ be a sequence of iid real random variables with finite logarithmic moment generating function, e.g. $ \Lambda(t) < \infty $ for all $ t \in \mathbb R $. - -Then the Legendre transform of $ \Lambda $: -$$ - \Lambda^*(x):= \sup_{t \in \mathbb R} \left(tx-\Lambda(t) \right) -$$ - -satisfies, -$$ - \lim_{n \to \infty} \frac 1n \log \left(P\left(\sum_{i=1}^n X_i \geq nx \right)\right) = -\Lambda^*(x) -$$ - -for all $ x > \operatorname E[X_1]. $ - -In the terminology of the theory of large deviations the result can be reformulated as follows: - -If $ X_1, X_2, \dots $ is a series of iid random variables, then the distributions $ \left(\mathcal L ( \tfrac 1n \sum_{i=1}^n X_i) \right)_{n \in \N}$ satisfy a large deviation principle with rate function $ \Lambda^* $. diff --git a/wiki/wikipedia/4039.txt b/wiki/wikipedia/4039.txt deleted file mode 100644 index 5a815a2f633433dd0debecbe477ad20b454f8e65..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4039.txt +++ /dev/null @@ -1,13 +0,0 @@ -Consistency (or Correctness) in database systems refers to the requirement that any given database transaction must change affected data only in allowed ways. Any data written to the database must be valid according to all defined rules, including constraints, cascades, triggers, and any combination thereof. This does not guarantee correctness of the transaction in all ways the application programmer might have wanted (that is the responsibility of application-level code) but merely that any programming errors cannot result in the violation of any defined database constraints. - -Consistency is one of the four guarantees that define ACID transactions; however, significant ambiguity exists about the nature of this guarantee. It is defined variously as: - -* The guarantee that any transactions started in the future necessarily see the effects of other transactions committed in the past - -* The guarantee that database constraints are not violated, particularly once a transaction commits - -* The guarantee that operations in transactions are performed accurately, correctly, and with validity, with respect to application semantics - -As these various definitions are not mutually exclusive, it is possible to design a system that guarantees "consistency" in every sense of the word, as most relational database management systems in common use today arguably do. - -The CAP theorem is based on three trade-offs, one of which is "atomic consistency" (shortened to "consistency" for the acronym), about which the authors note, "Discussing atomic consistency is somewhat different than talking about an ACID database, as database consistency refers to transactions, while atomic consistency refers only to a property of a single request/response operation sequence. 
And it has a different meaning than the Atomic in ACID, as it subsumes the database notions of both Atomic and Consistent." According to the CAP theorem, a system can provide only two of the following three properties: consistency, availability, and partition tolerance. Therefore, consistency may have to be traded off in some database systems. diff --git a/wiki/wikipedia/404.txt b/wiki/wikipedia/404.txt deleted file mode 100644 index 9cb264eebc8a47d2824203b7ce9ba9ab976917c1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/404.txt +++ /dev/null @@ -1,13 +0,0 @@ -In-database processing, sometimes referred to as in-database analytics, refers to the integration of data analytics into data warehousing functionality. Today, many large databases, such as those used for credit card fraud detection and investment bank risk management, use this technology because it provides significant performance improvements over traditional methods. - -Traditional approaches to data analysis require data to be moved out of the database into a separate analytics environment for processing, and then back to the database. (SPSS from IBM is an example of a tool that still does this today.) Doing the analysis in the database, where the data resides, eliminates the costs, time and security issues associated with the old approach by doing the processing in the data warehouse itself. - -Though in-database capabilities were first commercially offered in the mid-1990s, as object-related database systems from vendors including IBM, Illustra/Informix (now IBM) and Oracle, the technology did not begin to catch on until the mid-2000s. The concept of migrating analytics from the analytical workstation and into the Enterprise Data Warehouse was first introduced by Thomas Tileston in his presentation entitled, "Have Your Cake & Eat It Too! Accelerate Data Mining Combining SAS & Teradata" at the Teradata Partners 2005 "Experience the Possibilities" conference in Orlando, FL, September 18–22, 2005. Mr. Tileston later presented this technique globally in 2006, 2007 and 2008. - -At that point, the need for in-database processing had become more pressing as the amount of data available to collect and analyze continues to grow exponentially (due largely to the rise of the Internet), from megabytes to gigabytes, terabytes and petabytes. This "big data" is one of the primary reasons it has become important to collect, process and analyze data efficiently and accurately. - -Also, the speed of business has accelerated to the point where a performance gain of nanoseconds can make a difference in some industries. - -In-database processing is performed and promoted as a feature by many of the major data warehousing vendors, including Teradata (and Aster Data Systems, which it acquired), IBM (with its Netezza, PureData Systems, and related products), EMC Greenplum, Sybase, ParAccel, SAS, and EXASOL. Some of the products offered by these vendors, such as CWI's MonetDB or IBM's Db2 Warehouse, offer users the means to write their own functions (UDFs) or extensions (UDXs) to enhance the products' capabilities. Fuzzy Logix offers libraries of in-database models used for mathematical, statistical, data mining, simulation, and classification modelling, as well as financial models for equity, fixed income, interest rate, and portfolio optimization. One such vendor also collaborates with marketing and IT teams to institutionalize data mining and analytic processes inside the data warehouse for fast, reliable, and customizable consumer-behavior and predictive analytics.
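To make the idea concrete, here is a minimal sketch using SQLite from Python's standard library: a user-defined function (UDF) is registered inside the database engine, so the transformation runs where the data lives instead of in a separate analytics environment. The schema and data are invented for illustration; production systems would use the vendor mechanisms described above rather than SQLite.

```python
import math
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE returns (asset TEXT, r REAL)")
conn.executemany("INSERT INTO returns VALUES (?, ?)",
                 [("A", 0.02), ("A", -0.01), ("B", 0.005)])

# Register a Python function as an in-database UDF.
conn.create_function("log_return", 1, lambda r: math.log(1.0 + r))

# The computation happens inside the engine's scan of the table,
# with no round trip of raw rows to an external tool.
for row in conn.execute("SELECT asset, AVG(log_return(r)) "
                        "FROM returns GROUP BY asset"):
    print(row)
```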
- -In-database processing is one of several technologies focused on improving data warehousing performance. Others include parallel computing, shared everything architectures, shared nothing architectures and massive parallel processing. It is an important step towards improving predictive analytics capabilities. diff --git a/wiki/wikipedia/4040.txt b/wiki/wikipedia/4040.txt deleted file mode 100644 index c17a2b89c1cef5765037b31d46b047e8b3dbbe6e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4040.txt +++ /dev/null @@ -1,78 +0,0 @@ -In mathematics, in the area of complex analysis, Nachbin's theorem (named after Leopoldo Nachbin) is commonly used to establish a bound on the growth rates for an analytic function. This article provides a brief review of growth rates, including the idea of a function of exponential type. Classification of growth rates based on type help provide a finer tool than big O or Landau notation, since a number of theorems about the analytic structure of the bounded function and its integral transforms can be stated. In particular, Nachbin's theorem may be used to give the domain of convergence of the generalized Borel transform, given below. - -A function f(z) defined on the complex plane is said to be of exponential type if there exist constants M and α such that -$$ -|f(re^{i\theta})|\le Me^{\alpha r} -$$ - -in the limit of $r\to\infty$. Here, the complex variable z was written as $z=re^{i\theta}$ to emphasize that the limit must hold in all directions θ. Letting α stand for the infimum of all such α, one then says that the function f is of exponential type α. - -For example, let $f(z)=\sin(\pi z)$. Then one says that $\sin(\pi z)$ is of exponential type π, since π is the smallest number that bounds the growth of $\sin(\pi z)$ along the imaginary axis. So, for this example, Carlson's theorem cannot apply, as it requires functions of exponential type less than π. - -Bounding may be defined for other functions besides the exponential function. In general, a function $\Psi(t)$ is a comparison function if it has a series -$$ -\Psi(t)=\sum_{n=0}^\infty \Psi_n t^n -$$ - -with $\Psi_n>0$ for all n, and -$$ -\lim_{n\to\infty} \frac{\Psi_{n+1}}{\Psi_n} = 0. -$$ - -Comparison functions are necessarily entire, which follows from the ratio test. If $\Psi(t)$ is such a comparison function, one then says that f is of Ψ-type if there exist constants M and τ such that -$$ -\left|f\left(re^{i\theta}\right)\right| \le M\Psi(\tau r) -$$ - -as $r\to \infty$. If τ is the infimum of all such τ one says that f is of Ψ-type τ. - -Nachbin's theorem states that a function f(z) with the series -$$ -f(z)=\sum_{n=0}^\infty f_n z^n -$$ - -is of Ψ-type τ if and only if -$$ -\limsup_{n\to\infty} \left| \frac{f_n}{\Psi_n} \right|^{1/n} = \tau. -$$ - -Nachbin's theorem has immediate applications in Cauchy theorem-like situations, and for integral transforms. For example, the generalized Borel transform is given by -$$ -F(w)=\sum_{n=0}^\infty \frac{f_n}{\Psi_n w^{n+1}}. -$$ - -If f is of Ψ-type τ, then the exterior of the domain of convergence of $F(w)$, and all of its singular points, are contained within the disk -$$ -|w| \le \tau. -$$ - -Furthermore, one has -$$ -f(z)=\frac{1}{2\pi i} \oint_\gamma \Psi (zw) F(w) dw -$$ - -where the contour of integration γ encircles the disk $|w| \le \tau$. This generalizes the usual Borel transform for exponential type, where $\Psi(t)=e^t$. The integral form for the generalized Borel transform follows as well. 
Let $\alpha(t)$ be a function whose first derivative is bounded on the interval $[0,\infty)$, so that -$$ -\frac{1}{\Psi_n} = \int_0^\infty t^n d\alpha(t) -$$ - -where $d\alpha(t)=\alpha^{\prime}(t)dt$. Then the integral form of the generalized Borel transform is -$$ -F(w)=\frac{1}{w} \int_0^\infty f \left(\frac{t}{w}\right) d\alpha(t). -$$ - -The ordinary Borel transform is regained by setting $\alpha(t)=e^{-t}$. Note that the integral form of the Borel transform is just the Laplace transform. - -Nachbin resummation (generalized Borel transform) can be used to sum divergent series that escape to the usual Borel summation or even to solve (asymptotically) integral equations of the form: -$$ - g(s)=s\int_0^\infty K(st) f(t)dt -$$ - -where f(t) may or may not be of exponential growth and the kernel K(u) has a Mellin transform. The solution can be obtained as $ f(x)= \sum_{n=0}^\infty \frac{a_n}{M(n+1)}x^n $ with $ g(s)= \sum_{n=0}^\infty a_n s^{-n} $ and M(n) is the Mellin transform of K(u). An example of this is the Gram series $ \pi (x) \approx 1+\sum_{n=1}^{\infty} \frac{\log^{n}(x)}{n\cdot n!\zeta (n+1)}.$ - -in some cases as an extra condition we require $ \int_0^\infty K(t)t^{n}dt $ to be finite for $n=0,1,2,3,...$ and different from 0. - -Collections of functions of exponential type $\tau$ can form a complete uniform space, namely a Fréchet space, by the topology induced by the countable family of norms -$$ - \|f\|_{n} = \sup_{z \in \mathbb{C}} \exp \left[-\left(\tau + \frac{1}{n}\right)|z|\right]|f(z)|. -$$ diff --git a/wiki/wikipedia/4041.txt b/wiki/wikipedia/4041.txt deleted file mode 100644 index 36e4a6d121f127f91152ee48ea0b852fd5429987..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4041.txt +++ /dev/null @@ -1,5 +0,0 @@ -#redirect Two envelopes problem - -Category:Probability theory paradoxes - -Category:Decision-making paradoxes diff --git a/wiki/wikipedia/4042.txt b/wiki/wikipedia/4042.txt deleted file mode 100644 index 04a6093c8c2fff154ace3a3e0f22a39c2afb9381..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4042.txt +++ /dev/null @@ -1,15 +0,0 @@ -In mathematics, the Skolem problem is the problem of determining whether the values of a constant-recursive sequence include the number zero. The problem can be formulated for recurrences over different types of numbers, including integers, rational numbers, and algebraic numbers. It is not known whether there exists an algorithm that can solve this problem. - -A linear recurrence relation expresses the values of a sequence of numbers as a linear combination of earlier values; for instance, the Fibonacci numbers may be defined from the recurrence relation - -F(n) = F(n - 1) + F(n - 2) - -together with the initial values F(0) = 0 and F(1) = 1. - -The Skolem problem is named after Thoralf Skolem, because of his 1933 paper proving the Skolem–Mahler–Lech theorem on the zeros of a sequence satisfying a linear recurrence with constant coefficients. This theorem states that, if such a sequence has zeros, then with finitely many exceptions the positions of the zeros repeat regularly. Skolem proved this for recurrences over the rational numbers, and Mahler and Lech extended it to other systems of numbers. However, the proofs of the theorem do not show how to test whether there exist any zeros. 
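While no general decision procedure is known, zeros can of course be searched for among the initial terms. The sketch below (our own illustration, and a semi-decision aid only, since failing to find a zero proves nothing) enumerates a linear recurrence and reports the zero positions; the example coefficients produce a zero at every third index, illustrating the periodic pattern predicted by the Skolem–Mahler–Lech theorem.

```python
def recurrence_zeros(coeffs, init, limit):
    """Positions n < limit with s(n) == 0 for the linear recurrence
    s(n) = coeffs[0]*s(n-1) + ... + coeffs[k-1]*s(n-k)."""
    seq = list(init)
    zeros = [n for n, x in enumerate(seq) if x == 0]
    for n in range(len(init), limit):
        nxt = sum(c * s for c, s in zip(coeffs, reversed(seq[-len(coeffs):])))
        seq.append(nxt)
        if nxt == 0:
            zeros.append(n)
    return zeros

# s(n) = 2*s(n-1) - 4*s(n-2) with s(0)=0, s(1)=1:
print(recurrence_zeros([2, -4], [0, 1], 20))  # [0, 3, 6, 9, 12, 15, 18]
```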
- -There does exist an algorithm to test whether a constant-recursive sequence has infinitely many zeros, and if so to construct a decomposition of the positions of those zeros into periodic subsequences, based on the algebraic properties of the roots of the characteristic polynomial of the given recurrence. The remaining difficult part of the Skolem problem is determining whether the finite set of non-repeating zeros is empty or not. - -Partial solutions to the Skolem problem are known, covering the special case of the problem for recurrences of degree at most four. However, these solutions do not apply to recurrences of degree five or more. - -For integer recurrences, the Skolem problem is known to be NP-hard. diff --git a/wiki/wikipedia/4043.txt b/wiki/wikipedia/4043.txt deleted file mode 100644 index 15cfd6e87520a8bc012730226b434b7b7fd63920..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4043.txt +++ /dev/null @@ -1,71 +0,0 @@ -Sudoku (originally called Number Place) is a logic-based, combinatorial number-placement puzzle. In classic sudoku, the objective is to fill a 9 × 9 grid with digits so that each column, each row, and each of the nine 3 × 3 subgrids that compose the grid (also called "boxes", "blocks", or "regions") contain all of the digits from 1 to 9. The puzzle setter provides a partially completed grid, which for a well-posed puzzle has a single solution. - -French newspapers featured variations of the Sudoku puzzles in the 19th century, and the puzzle has appeared since 1979 in puzzle books under the name Number Place. The 19th-century French puzzle was not a Sudoku because it contained double-digit numbers and required arithmetic rather than logic to solve, but it shared key characteristics: each row, column and subsquare added up to the same number. - -On July 6, 1895, Le Siècle's rival, La France, refined the puzzle so that it was almost a modern Sudoku and named it carré magique diabolique ('diabolical magic square'). It simplified the 9×9 magic square puzzle so that each row, column, and broken diagonals contained only the numbers 1–9, but did not mark the subsquares. Although they were unmarked, each 3×3 subsquare did indeed comprise the numbers 1–9, and the additional constraint on the broken diagonals led to only one solution. - -These weekly puzzles were a feature of French newspapers such as L'Écho de Paris for about a decade, but disappeared about the time of World War I. - -The modern Sudoku was most likely designed anonymously by Howard Garns, a 74-year-old retired architect and freelance puzzle constructor from Connersville, Indiana, and first published in 1979 by Dell Magazines as Number Place (the earliest known examples of modern Sudoku). He died in 1989 before getting a chance to see his creation become a worldwide phenomenon. In Japan, the puzzle is generally referred to as Sūji wa dokushin ni kagiru ("the digits must be single") or, more informally, sudoku, a shortening of the two words. In 1986, Nikoli introduced two innovations: the number of givens was restricted to no more than 32, and puzzles became "symmetrical" (meaning the givens were distributed in rotationally symmetric cells). It is now published in mainstream Japanese periodicals, such as the Asahi Shimbun. - -Cognitive scientist Jeremy Grabbe found that Sudoku involved an area of cognition called working memory. A subsequent experiment by Grabbe showed that routine Sudoku playing could improve working memory in older people. - -In 1997, Hong Kong judge Wayne Gould saw a partly completed puzzle in a Japanese bookshop.
Over six years, he developed a computer program to produce unique puzzles rapidly. - -The rapid rise of Sudoku in Britain from relative obscurity to a front-page feature in national newspapers attracted commentary in the media and parody (such as when The Guardian G2 section advertised itself as the first newspaper supplement with a Sudoku grid on every page). Recognizing the different psychological appeals of easy and difficult puzzles, The Times introduced both, side by side, on June 20, 2005. From July 2005, Channel 4 included a daily Sudoku game in their teletext service. On August 2, the BBC's program guide Radio Times featured a weekly Super Sudoku with a 16×16 grid. - -In the United States, the first newspaper to publish a Sudoku puzzle by Wayne Gould was The Conway Daily Sun (New Hampshire), in 2004. - -The world's first live TV Sudoku show, Sudoku Live, was a puzzle contest first broadcast on July 1, 2005, on Sky One. It was presented by Carol Vorderman. Nine teams of nine players (with one celebrity in each team) representing geographical regions competed to solve a puzzle. Each player had a hand-held device for entering numbers corresponding to answers for four cells. Phil Kollin of Winchelsea, England, was the series grand prize winner, taking home over £23,000 over a series of games. The audience at home was in a separate interactive competition, which was won by Hannah Withey of Cheshire. - -Later in 2005, the BBC launched SUDO-Q, a game show that combined Sudoku with general knowledge. However, it used only 4×4 and 6×6 puzzles. Four seasons were produced before the show ended in 2007. - -In 2006, a Sudoku website published songwriter Peter Levy's Sudoku tribute song, but quickly had to take down the MP3 file due to heavy traffic. British and Australian radio picked up the song, which is to feature in a British-made Sudoku documentary. The Japanese Embassy also nominated the song for an award, with Levy doing talks with Sony in Japan to release the song as a single. - -Sudoku software is very popular on PCs, websites, and mobile phones. It comes with many distributions of Linux. Software has also been released on video game consoles, such as the Nintendo DS, PlayStation Portable, the Game Boy Advance, Xbox Live Arcade, the Nook e-book reader, Kindle Fire tablet, several iPod models, and the iPhone. Many Nokia phones also had Sudoku. In fact, just two weeks after Apple Inc. debuted the online App Store within its iTunes Store on July 11, 2008, nearly 30 different Sudoku games were already in it, created by various software developers, specifically for the iPhone and iPod Touch. One of the most popular video games featuring Sudoku is Brain Age: Train Your Brain in Minutes a Day!. Critically and commercially well-received, it generated particular praise for its Sudoku implementation and sold more than 8 million copies worldwide. Due to its popularity, Nintendo made a second Brain Age game titled Brain Age2, which has over 100 new Sudoku puzzles and other activities. - -In June 2008, an Australian drugs-related jury trial costing over A$ 1 million was aborted when it was discovered that five of the twelve jurors had been playing Sudoku instead of listening to evidence. - -Although the 9×9 grid with 3×3 regions is by far the most common, many other variations exist. 
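As an illustrative aside: the classic rules stated at the top of this article (each row, column, and 3×3 box must contain the digits 1 through 9) translate directly into a short backtracking solver. This sketch is not Gould's program, just a minimal Python rendering of the constraints:

```python
# Minimal backtracking solver for classic 9x9 Sudoku; grid is a 9x9 list of
# lists with 0 marking an empty cell. Solves in place, returns True on success.
def allowed(grid, r, c, d):
    if any(grid[r][j] == d for j in range(9)):   # row constraint
        return False
    if any(grid[i][c] == d for i in range(9)):   # column constraint
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)          # top-left corner of the 3x3 box
    return all(grid[br + i][bc + j] != d for i in range(3) for j in range(3))

def solve(grid):
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in range(1, 10):
                    if allowed(grid, r, c, d):
                        grid[r][c] = d
                        if solve(grid):
                            return True
                        grid[r][c] = 0           # undo and backtrack
                return False                     # dead end: no digit fits
    return True                                  # no empty cells remain
```

For a well-posed puzzle (one with a single solution, as defined above), this search terminates with the unique completed grid.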
Sample puzzles can be 4×4 grids with 2×2 regions; 5×5 grids with pentomino regions have been published under the name Logi-5; the World Puzzle Championship has featured a 6×6 grid with 2×3 regions and a 7×7 grid with six heptomino regions and a disjoint region. Larger grids are also possible, or different irregular shapes (under various names such as Suguru, Tectonic, Jigsaw Sudoku etc.). The Times offers a 12×12-grid "Dodeka Sudoku" with 12 regions of 4×3 squares. Dell Magazines regularly publishes 16×16 "Number Place Challenger" puzzles (using the numbers 1–16 or the letters A-P). Nikoli offers 25×25 "Sudoku the Giant" behemoths. A 100×100-grid puzzle dubbed Sudoku-zilla was published in 2010. - -Under the name "Mini Sudoku", a 6×6 variant with 3×2 regions appears in the American newspaper USA Today and elsewhere. The object is the same as that of standard Sudoku, but the puzzle only uses the numbers 1 through 6. A similar form, for younger solvers of puzzles, called "The Junior Sudoku", has appeared in some newspapers, such as some editions of The Daily Mail. - -Another common variant is to add limits on the placement of numbers beyond the usual row, column, and box requirements. Often, the limit takes the form of an extra "dimension"; the most common is to require the numbers in the main diagonals of the grid to also be unique. The aforementioned "Number Place Challenger" puzzles are all of this variant, as are the Sudoku X puzzles in The Daily Mail, which use 6×6 grids. - -The Killer Sudoku variant combines elements of Sudoku and Kakuro. - -Alphabetical variations have emerged, sometimes called Wordoku; no functional difference exists in the puzzle unless the letters spell something. Some variants, such as in the TV Guide, include a word reading along a main diagonal, row, or column once solved; determining the word in advance can be viewed as a solving aid. A Wordoku might contain words other than the main word. - -"Quadratum latinum" is a Sudoku variation with Roman numerals (I, II, III, IV, ..., IX) proposed by Hebdomada aenigmatum, a monthly magazine of Latin puzzles and crosswords. Like the Wordoku, it presents no functional difference from a normal Sudoku, but adds the visual difficulty of using Roman numerals. - -Hyper Sudoku or Windoku uses the classic 9×9 grid with 3×3 regions, but defines four additional interior 3×3 regions in which the numbers 1–9 must appear exactly once. It was invented by Peter Ritmeester and first published by him in Dutch Newspaper NRC Handelsblad in October 2005, and since April 2007 on a daily basis in The International New York Times (International Herald Tribune). The first time it was called Hyper Sudoku was in Will Shortz's Favorite Sudoku Variations (February 2006). It is also known as Windoku because with the grid's four interior regions shaded, it resembles a window with glazing bars. - -In Twin Sudoku two regular grids share a 3×3 box. This is one of many possible types of overlapping grids. The rules for each individual grid are the same as in normal Sudoku, but the digits in the overlapping section are shared by each half. In some compositions neither individual grid can be solved alone – the complete solution is only possible after each individual grid has at least been partially solved. - -Puzzles constructed from more than two grids are also common. Five 9×9 grids that overlap at the corner regions in the shape of a quincunx is known in Japan as Gattai 5 (five merged) Sudoku. 
In The Times, The Age, and The Sydney Morning Herald, this form of puzzle is known as Samurai Sudoku. The Baltimore Sun and the Toronto Star publish a puzzle of this variant (titled High Five) in their Sunday edition. Often, no givens are placed in the overlapping regions. Sequential grids, as opposed to overlapping, are also published, with values in specific locations in grids needing to be transferred to others. - -A tabletop version of Sudoku can be played with a standard 81-card Set deck (see Set game). A three-dimensional Sudoku puzzle was published in The Daily Telegraph in May 2005. The Times also publishes a three-dimensional version under the name Tredoku. Also, a Sudoku version of the Rubik's Cube is named Sudoku Cube. - -Many other variants have been developed. Some are different shapes in the arrangement of overlapping 9×9 grids, such as butterfly, windmill, or flower. Others vary the logic for solving the grid. One of these is "Greater Than Sudoku". In this, a 3×3 grid of the Sudoku is given with 12 symbols of Greater Than (>) or Less Than (<) on the common line of the two adjacent numbers. A Sudoku puzzle can also be expressed as a graph coloring problem: the aim is to construct a 9-coloring of a particular graph, given a partial 9-coloring. - -The fewest clues possible for a proper Sudoku is 17 (proven January 2012, and confirmed September 2013). Over 49,000 Sudokus with 17 clues have been found, many by Japanese enthusiasts. Sudokus with 18 clues and rotational symmetry have been found, and there is at least one Sudoku that has 18 clues, exhibits two-way diagonal symmetry and is automorphic. The maximum number of clues that can be provided while still not rendering a unique solution is four short of a full grid (77); if two instances of two numbers each are missing from cells that occupy the corners of an orthogonal rectangle, and exactly two of these cells are within one region, the numbers can be assigned two ways. Since this applies to Latin squares in general, most variants of Sudoku have the same maximum. - -The number of classic 9×9 Sudoku solution grids is 6,670,903,752,021,072,936,960, or around 6.67 × 10^21. This is roughly 1.2 × 10^−6 times the number of 9×9 Latin squares. Various other grid sizes have also been enumerated—see the main article for details. The number of essentially different solutions, when symmetries such as rotation, reflection, permutation, and relabelling are taken into account, was shown to be just 5,472,730,538. - -Unlike the number of complete Sudoku grids, the number of minimal 9×9 Sudoku puzzles is not precisely known. (A minimal puzzle is one in which no clue can be deleted without losing uniqueness of the solution.) However, statistical techniques combined with a puzzle generator show that about 3.10 × 10^37 minimal puzzles and 2.55 × 10^25 nonessentially equivalent minimal puzzles exist (with 0.065% relative error). - -* The first World Sudoku Championship was held in Lucca, Italy, from March 10 to 12, 2006. The winner was Jana Tylová of the Czech Republic. The competition included numerous variants. - -* The second World Sudoku Championship was held in Prague, Czech Republic, from March 28 to April 1, 2007. The individual champion was Thomas Snyder of the US. The team champion was Japan. - -* The third World Sudoku Championship was held in Goa, India, from April 14 to 16, 2008. Thomas Snyder repeated as the individual overall champion, and also won the first ever Classic Trophy (a subset of the competition counting only classic Sudoku). The Czech Republic won the team competition.
- -* The fourth World Sudoku Championship was held in Žilina, Slovakia, from April 24 to 27, 2009. After past champion Thomas Snyder of the USA won the general qualification, Jan Mrozowski of Poland emerged from a 36-competitor playoff to become the new World Sudoku Champion. Host nation Slovakia emerged as the top team in a separate competition of three-membered squads. - -* The fifth World Sudoku Championship was held in Philadelphia, Pennsylvania, from April 29 to May 2, 2010. Jan Mrozowski of Poland successfully defended his world title in the individual competition, while Germany won a separate team event. The puzzles were written by Thomas Snyder and Wei-Hwa Huang, both past U.S. Sudoku champions. - -* The 12th World Sudoku Championship (WSC) was held in Bangalore, India, from October 15 to 22, 2017. Kota Morinishi of Japan won the Individual WSC and China won the team event. - -* The 13th World Sudoku Championship took place in the Czech Republic. - -* In the United States, The Philadelphia Inquirer Sudoku National Championship has been held three times, each time offering a $10,000 prize to the advanced division winner and a spot on the U.S. National Sudoku Team traveling to the world championships. The winners of the event were Thomas Snyder (2007), Wei-Hwa Huang (2008), and Tammy McLeod (2009). In the 2009 event, the third-place finalist in the advanced division, Eugene Varshavsky, performed quite poorly onstage after setting a very fast qualifying time on paper, which caught the attention of organizers and competitors including past champion Thomas Snyder, who requested that organizers reconsider his results due to a suspicion of cheating. Following an investigation and a retest of Varshavsky, the organizers disqualified him and awarded third place to Chris Narrikkattu. diff --git a/wiki/wikipedia/4044.txt b/wiki/wikipedia/4044.txt deleted file mode 100644 index cd05bff49f154371a85a72b2a18d11552155d949..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4044.txt +++ /dev/null @@ -1,57 +0,0 @@ -A Time/Utility Function (TUF), née Time/Value Function, specifies the application-specific utility that an action (e.g., task, mechanical movement) yields depending on its completion time. TUFs and their utility interpretations (semantics), scales, and values are derived from application domain-specific subject matter knowledge. An example interpretation of utility is an action's relative importance, which otherwise is independent of its timeliness. The traditional deadline represented as a TUF is a special case—a downward step of utility from 1 to 0 at the deadline time—i.e., timeliness without importance. A TUF is more general—it has a critical time, with application-specific shapes and utility values on each side, after which it does not increase. The various researcher and practitioner definitions of firm and soft real-time can also be represented as special cases of the TUF model. - -The optimality criterion for scheduling multiple TUF-constrained actions has historically in the literature been only maximal utility accrual (UA)—e.g., a (perhaps expected) weighted sum of the individual actions' completion utilities. This thus takes into account timeliness with respect to critical times. Additional criteria (e.g., energy, predictability), constraints (e.g., dependencies), system models, scheduling algorithms, and assurances have been added as the TUF/UA paradigm and its use cases have evolved.
More expressively, TUF/UA allows accrued utility, timeliness, predictability, and other scheduling criteria to be traded off against one another for the schedule to yield situational application QoS—as opposed to only timeliness per se. - -Various published examples of civilian TUF/UA applications are included in the References. Most instantiations of the paradigm have been in classified military systems. - -The TUF/UA paradigm was originally created to address certain timeliness- and application QoS-based scheduling needs of various military applications for which traditional real-time concepts and practices are not sufficiently expressive (e.g., for timeliness-critical systems not having deadlines) and resilient (e.g., for systems subject to routine overloads). An example class of such applications is ballistic missile defense (notionally). - -Subsequently, numerous variations on the original TUF model, the TUF/UA paradigm's system model, and thus scheduling techniques, have been studied in the academic literature and applied in civilian contexts. - -Some examples of the latter include: cyber-physical systems, AI, multi-robot systems, drone scheduling, autonomous robots, intelligent vehicle-to-cloud data transfers, industrial process control, transaction systems, high performance computing, cloud systems, heterogeneous clusters, service-oriented computing, networking, and memory management for real and virtual machines. A steel mill example is briefly described in the Introduction of Clark's Ph.D. thesis. Information about any commercial or military instances of the paradigm may be publicly inaccessible (proprietary or classified, respectively). - -TUFs and their utility interpretations (semantics), scales, and values are derived from domain-specific subject matter knowledge. A historically frequent interpretation of utility is actions' relative importance. A framework for a priori assigning static utility values subject to strong constraints on system models has been devised, but subsequent (like prior) TUF/UA research and development have preferred to depend on exploiting application-specificity rather than attempting to create more general frameworks. However, such frameworks and tools remain an important research topic. - -By traditional convention, a TUF is a concave function, including linear ones. See the depiction of some example TUFs. - -TUF/UA papers in the research literature are, with few exceptions, for only either linear or piecewise linear (including conventional deadline-based) TUFs, because these are easier to specify and schedule. - -A constant function represents an action's utility that is not related to the action's completion time—for example, the action's constant relative importance. This allows both time-dependent and time-independent actions to be scheduled coherently. - -A TUF has a global critical time, after which its utility does not increase. If a TUF never decreases, its global critical time is the first time when its maximum utility is reached. A constant TUF has an arbitrary critical time for the purpose of scheduling—such as the action's release time, or the TUF's termination time. The global critical time may be followed by local critical times—for example, consider a TUF having a sequence of downward steps, perhaps to approximate a smooth downward curve. - -TUF utility values are usually either integers or rational numbers. - -TUF utility may include negative values.
(A TUF that has negative values in its range is not necessarily dropped from scheduling consideration or aborted during its operation—that decision depends on the scheduling algorithm.) - -A conventional deadline time (d) represented as a TUF is a special case—a downward step TUF having a unit penalty (i.e., having utility values 1 before and 0 after its critical time). - -More generally, a TUF allows downward (and upward) step functions to have any pre- and post-critical time utilities. - -Tardiness represented as a TUF is a special case whose non-zero utility is the linear function C - d, where C is the action's completion time—either current, expected, or believed. More generally, a TUF allows non-zero earliness and tardiness to be non-linear—e.g., increasing tardiness may result in non-linearly decreasing utility, such as when detecting a threat. - -Thus, TUFs provide a rich generalization of traditional action completion time constraints in real-time computing. - -Alternatively, the TUF/UA paradigm can be employed to use timeliness with respect to the global critical time as a means to a utility accrual end—i.e., application-level Quality of Service (QoS)—instead of timeliness per se being an end in itself. - -A TUF (its shape and values) may be dynamically adapted by an application or its operational environment, independently for any actions currently either waiting or operating. - -These adaptations ordinarily occur at discrete events—e.g., at an application mode change such as for ballistic missile flight phases. - -Alternatively, these adaptations may occur continuously, such as for actions whose operational durations and TUFs are application-specific functions of when those actions are either released or begin operation. The operation durations may increase or decrease or both, and may be non-monotonic. This continuous case is called time-dependent scheduling. Time-dependent scheduling was introduced for (but is not limited to) certain real-time military applications, such as radar tracking systems. - -Multiple actions in a system may contend for access to sequentially exclusively shared resources—physical ones such as processors, networks, exogenous application devices (sensors, actuators, etc.)—and logical ones such as synchronizers and data. - -The TUF/UA paradigm resolves each instance of this contention using an application-specific algorithmic technique that creates (or updates) a schedule at scheduling events—e.g., times (such as action arrival or completion) or states. The instance's contending actions are dispatched for resource access sequentially in order from the front of the schedule. Thus, action UA sequencing is not greedy. - -The algorithmic technique creates a schedule based on one or more application-specific objectives (i.e., optimality criteria). - -The primary objective for scheduling actions having TUFs is maximal utility accrual (UA). The accrued utility is an application-specific polynomial sum of the schedule's completed actions' utilities. When actions have one or more stochastic parameters (e.g., operation duration), the accrued utility is also stochastic (i.e., an expected polynomial sum). - -Utility and accrued utility are generic; their interpretations (semantics) and scales are application-specific. - -An action's operation duration may be fixed and known at system configuration time. More generally, it may be either fixed or stochastic but not known (either with certainty or in expectation) until it either arrives or is released.
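To make the utility accrual objective concrete, here is a minimal illustrative sketch (our own, not from the TUF/UA literature): TUFs are modeled as plain functions of completion time, and a schedule is scored by summing the utilities at the actions' sequential completion times.

```python
# Illustrative sketch of utility accrual: each action has a duration and a TUF
# (a function from completion time to utility); a schedule is scored by the sum
# of utilities at the actions' completion times when run back-to-back.
def deadline_tuf(d):
    """Classic deadline as a downward-step TUF: utility 1 up to d, 0 after."""
    return lambda t: 1.0 if t <= d else 0.0

def linear_decay_tuf(critical, max_u):
    """Maximum utility up to the critical time, then a linear decay floored at 0."""
    return lambda t: max_u if t <= critical else max(0.0, max_u - (t - critical))

def accrued_utility(actions, order):
    clock, total = 0.0, 0.0
    for name in order:
        duration, tuf = actions[name]
        clock += duration
        total += tuf(clock)
    return total

actions = {"a": (2.0, deadline_tuf(3.0)), "b": (1.0, linear_decay_tuf(1.0, 5.0))}
print(accrued_utility(actions, ["b", "a"]))  # 5.0 + 1.0 = 6.0
print(accrued_utility(actions, ["a", "b"]))  # 1.0 + 3.0 = 4.0
```

Choosing the dispatch order that maximizes this sum is the decision a UA scheduler makes at each scheduling event.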
- -An operation duration may be an application-specific function of the action's operation starting time—it may increase or decrease or both, and may be non-monotonic. This case is called time-dependent scheduling. diff --git a/wiki/wikipedia/4045.txt b/wiki/wikipedia/4045.txt deleted file mode 100644 index 5e1a8286edc0fa5abe717a1d786f07adb903b46d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4045.txt +++ /dev/null @@ -1,23 +0,0 @@ -In graph theory, the road coloring theorem, known previously as the road coloring conjecture, deals with synchronized instructions. The issue involves whether by using such instructions, one can reach or locate an object or destination from any other point within a network (which might be a representation of city streets or a maze). In the real world, this phenomenon would be as if you called a friend to ask for directions to his house, and he gave you a set of directions that worked no matter where you started from. This theorem also has implications in symbolic dynamics. - -The theorem was first conjectured by Roy Adler and Benjamin Weiss. It was proved by Avraham Trahtman in 2007. - -The image to the right shows a directed graph on eight vertices in which each vertex has out-degree 2. (Each vertex in this case also has in-degree 2, but that is not necessary for a synchronizing coloring to exist.) The edges of this graph have been colored red and blue to create a synchronizing coloring. - -For example, consider the vertex marked in yellow. No matter where in the graph you start, if you traverse all nine edges in the walk "blue-red-red—blue-red-red—blue-red-red", you will end up at the yellow vertex. Similarly, if you traverse all nine edges in the walk "blue-blue-red—blue-blue-red—blue-blue-red", you will always end up at the vertex marked in green, no matter where you started. - -The road coloring theorem states that for a certain category of directed graphs, it is always possible to create such a coloring. - -Let G be a finite, strongly connected, directed graph where all the vertices have the same out-degree k. Let A be the alphabet containing the letters 1, ..., k. A synchronizing coloring (also known as a collapsible coloring) in G is a labeling of the edges in G with letters from A such that (1) each vertex has exactly one outgoing edge with a given label and (2) for every vertex v in the graph, there exists a word w over A such that all paths in G corresponding to w terminate at v. - -The terminology synchronizing coloring is due to the relation between this notion and that of a synchronizing word in finite automata theory. - -For such a coloring to exist at all, it is necessary that G be aperiodic. The road coloring theorem states that aperiodicity is also sufficient for such a coloring to exist. Therefore, the road coloring problem can be stated briefly as: - -Every finite strongly connected directed aperiodic graph of uniform out-degree has a synchronizing coloring. - -Previous partial or special-case results include the following: - -*If G is a finite strongly connected aperiodic directed graph with no multiple edges, and G contains a simple cycle of prime length which is a proper subset of G, then G has a synchronizing coloring. (O'Brien 1981) - -*If G is a finite strongly connected aperiodic directed graph (multiple edges allowed) and every vertex has the same in-degree and out-degree k, then G has a synchronizing coloring.
(Kari 2003) diff --git a/wiki/wikipedia/4046.txt b/wiki/wikipedia/4046.txt deleted file mode 100644 index f728ca2a17cfc22033d55a33858380390b0b5443..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4046.txt +++ /dev/null @@ -1,39 +0,0 @@ -A Customer integrated system (CIS) is an extension or hybrid of the transaction processing system (TPS) that places technology in the hands of the customer and allows them to process their own transactions. CIS represents a way of doing business at substantial savings; customers save time and organizations can lower their human resource costs. The clients became increasingly interested in this system and in some cases even requested to borrow it from the sales representatives for their own use. As a result, the head of Research and Development at Bergen Brunswig, Jim McLaughlin, came up with the idea of modifying the system so that it included order-entry software and to provide this new system to the pharmacist free of charge. - -Characteristics include: - -* Streamline organization's business processes - -* Are at the very heart of every organization - -* Are the new primary interface to customers - -* Further decentralization of computing power in an organization by placing that power in the hands of the customers - -* Empower the customers to process their own transactions anywhere at any time - -* Can reduce waiting line time and therefore improve overall customer satisfaction - -* Allow for an organization to cut costs by reducing human resources expenditures - -Functions include: - -* Capturing information - -* Creating information - -* Cradling or storing information - -* Communicating information - -* Conveying information (secondary) - -A range of applications exist: - -* Online shopping – browse or purchase a broad array of products anywhere any time - -* ATMs and online banking – permits banking anywhere any time - -* University and college online services – register for classes, make tuition payments and purchase books anywhere any time - -* Vending machines – purchase anything from assorted snack foods to iPods diff --git a/wiki/wikipedia/4047.txt b/wiki/wikipedia/4047.txt deleted file mode 100644 index ccf23ea067b3cdad89efd5ddf278d96e88fc64c4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4047.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematics, Sullivan conjecture or Sullivan's conjecture on maps from classifying spaces can refer to any of several results and conjectures prompted by homotopy theory work of Dennis Sullivan. A basic theme and motivation concerns the fixed point set in group actions of a finite group $G$. The most elementary formulation, however, is in terms of the classifying space $BG$ of such a group. Roughly speaking, it is difficult to map such a space $BG$ continuously into a finite CW complex $X$ in a non-trivial manner. Such a version of the Sullivan conjecture was first proved by Haynes Miller. Specifically, in 1984, Miller proved that the function space, carrying the compact-open topology, of base point-preserving mappings from $BG$ to $X$ is weakly contractible. - -This is equivalent to the statement that the map $X$ → $F(BG, X)$ from X to the function space of maps $BG$ → $X$, not necessarily preserving the base point, given by sending a point $x$ of $X$ to the constant map whose image is $x$ is a weak equivalence. The mapping space $F(BG, X)$ is an example of a homotopy fixed point set. 
Specifically, $F(BG, X)$ is the homotopy fixed point set of the group $G$ acting by the trivial action on $X$. In general, for a group $G$ acting on a space $X$, the homotopy fixed points are the fixed points $F(EG, X)^G$ of the mapping space $F(EG, X)$ of maps from the universal cover $EG$ of $BG$ to $X$, under the $G$-action on $F(EG, X)$ in which $g$ in $G$ acts on a map $f$ in $F(EG, X)$ by sending it to $gfg^{-1}$. The $G$-equivariant map from $EG$ to a single point $*$ induces a natural map η: $X^G = F(*,X)^G \to F(EG, X)^G$ from the fixed points to the homotopy fixed points of $G$ acting on $X$. Miller's theorem is that η is a weak equivalence for trivial $G$-actions on finite-dimensional CW complexes. An important ingredient and motivation (see [1]) for his proof is a result of Gunnar Carlsson on the homology of $BZ/2$ as an unstable module over the Steenrod algebra. - -Miller's theorem generalizes to a version of Sullivan's conjecture in which the action on $X$ is allowed to be non-trivial. Sullivan conjectured that η is a weak equivalence after a certain p-completion procedure due to A. Bousfield and D. Kan for the group $G=Z/2$. This conjecture was incorrect as stated, but a correct version was given by Miller, and proven independently by Dwyer-Miller-Neisendorfer, Carlsson, and Jean Lannes, showing that the natural map $(X^G)_p$ → $F(EG, (X)_p)^G$ is a weak equivalence when the order of $G$ is a power of a prime p, and where $(X)_p$ denotes the Bousfield-Kan p-completion of $X$. Miller's proof involves an unstable Adams spectral sequence, Carlsson's proof uses his affirmative solution of the Segal conjecture and also provides information about the homotopy fixed points $F(EG,X)^G$ before completion, and Lannes's proof involves his T-functor. diff --git a/wiki/wikipedia/4048.txt b/wiki/wikipedia/4048.txt deleted file mode 100644 index 12326e03817a09a24997560b0f31cbd0e4031418..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4048.txt +++ /dev/null @@ -1,86 +0,0 @@ -In the context of combinatorial mathematics, stars and bars (also called "sticks and stones", "balls and bars", and "dots and dividers") is a graphical aid for deriving certain combinatorial theorems. It was popularized by William Feller in his classic book on probability. It can be used to solve many simple counting problems, such as how many ways there are to put n indistinguishable balls into k distinguishable bins. - -The stars and bars method is often introduced specifically to prove the following two theorems of elementary combinatorics concerning the number of solutions to an equation. - -For any pair of positive integers n and k, the number of k-tuples of positive integers whose sum is n is equal to the number of (k − 1)-element subsets of a set with n − 1 elements. - -For example, if $n=10$ and $k=4$, the theorem gives the number of solutions to: -$$ -x_1 + x_2 + x_3 + x_4 = 10 -$$ - -with $x_1,x_2,x_3,x_4>0$ - -The answer is given by the binomial coefficient ${n - 1 \choose k - 1}$. For the above example, there are $ \binom{10 - 1}{4 - 1} = \binom{9}{3} = 84$ of them. - -These consist of the permutations of the tuples $(1,1,1,7),(1,1,2,6),(1,1,3,5),(1,1,4,4),(1,2,2,5),(1,2,3,4),(1,3,3,3),(2,2,2,4),(2,2,3,3)$. - -For any pair of positive integers n and k, the number of k-tuples of non-negative integers whose sum is n is equal to the number of multisets of cardinality k − 1 taken from a set of size n + 1.
For example, if $n=10$ and $k=4$, the theorem gives the number of solutions to: -$$ -x_1 + x_2 + x_3 + x_4 = 10 -$$ - -with $x_1,x_2,x_3,x_4\ge0$ - -The answer is given by the binomial coefficient $\binom{n + k - 1}{k - 1}$. For the above example, there are $ \binom{10+4-1}{4 - 1} = \binom{13}{3} = 286$ of them. - -Suppose there are n objects (represented here by stars) to be placed into k bins, such that all bins contain at least one object. The bins are distinguishable (say they are numbered 1 to k) but the n stars are not (so configurations are only distinguished by the number of stars present in each bin). A configuration is thus represented by a k-tuple of positive integers, as in the statement of the theorem. - -For example, with n = 7 and k = 3, start by placing the stars in a line: - -★ ★ ★ ★ ★ ★ ★ - -The configuration will be determined once it is known which is the first star going to the second bin, and the first star going to the third bin, etc. This is indicated by placing k − 1 bars between the stars. Because no bin is allowed to be empty (all the variables are positive), there is at most one bar between any pair of stars. - -For example, the 3-tuple (4, 2, 1) corresponds to: - -★ ★ ★ ★ | ★ ★ | ★ - -There are n − 1 gaps between stars. A configuration is obtained by choosing k − 1 of these gaps to contain a bar; therefore there are $\tbinom{n - 1}{k - 1}$ possible combinations. - -In this case, the weakened restriction of non-negativity instead of positivity means that we can place multiple bars between stars, before the first star and after the last star. - -For example, when n = 7 and k = 5, the tuple (4, 0, 1, 2, 0) may be represented by the following diagram: - -★ ★ ★ ★ | | ★ | ★ ★ | - -To see that these objects are counted by the binomial coefficient $\tbinom{n + k - 1}{k-1}$, observe that the desired arrangements consist of n + k − 1 objects (n stars and k − 1 bars). This can be obtained by imagining there are $ n + k - 1 $ positions in total for placing stars or bars, then selecting $ k - 1 $ positions for bars (or $ n $ for stars). - -Theorem 1 can now be restated in terms of Theorem 2, because the requirement that all the variables are positive is equivalent to pre-assigning each variable a 1, and asking for the number of solutions when each variable is non-negative. - -For example: -$$ -x_1+x_2+x_3+x_4=10 -$$ - -with $x_1,x_2,x_3,x_4>0$ - -is equivalent to: -$$ -x_1+x_2+x_3+x_4=6 -$$ - -with $x_1,x_2,x_3,x_4\ge0$ - -Many elementary word problems in combinatorics are resolved by the theorems above. For example, if one wishes to count the number of ways to distribute seven indistinguishable one dollar coins among Amber, Ben, and Curtis so that each of them receives at least one dollar, one may observe that distributions are essentially equivalent to tuples of three positive integers whose sum is 7. (Here the first entry in the tuple is the number of coins given to Amber, and so on.) Thus stars and bars theorem 1 applies, with n = 7 and k = 3, and there are ${7-1\choose 3-1} = 15$ ways to distribute the coins. - -If n = 5, k = 4, and a set of size k is {a, b, c, d}, - -then ★|★★★||★ could represent either the multiset {a, b, b, b, d} or the 4-tuple (1, 3, 0, 1). The representation of any multiset for this example should use SAB2 with n = 5 stars and k − 1 = 3 bars to give ${5+4-1\choose 4-1} = {8\choose 3} = 56$. - -SAB2 allows for more bars than stars, which isn't permitted in SAB1.
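Both counting theorems are easy to sanity-check by exhaustive enumeration; a small Python sketch (illustrative only, using only the standard library):

```python
# Brute-force check of both stars-and-bars theorems.
from itertools import product
from math import comb

def count_tuples(n, k, minimum):
    """Number of k-tuples of integers >= minimum that sum to n, by enumeration."""
    return sum(1 for t in product(range(minimum, n + 1), repeat=k) if sum(t) == n)

n, k = 10, 4
print(count_tuples(n, k, 1), comb(n - 1, k - 1))      # Theorem 1: 84 84
print(count_tuples(n, k, 0), comb(n + k - 1, k - 1))  # Theorem 2: 286 286
```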
So, for example, $10$ balls into $7$ bins is $\binom{16}{6}$, while $7$ balls into $10$ bins is $\binom{16}{9}$, with $6$ balls into $11$ bins as $\binom{16}{10}=\binom{16}{6}$. - -If we have the infinite Taylor series $\left[\sum_{k=1}^{\infty}x^k\right]^{m}$, then we can use this method to expand the sum. For the nth term of the expansion, we are picking n powers of x from m separate locations. Hence there are ${{n-1} \choose {m-1}}$ ways to form our nth power: -$$ -\left[\sum_{k=1}^{\infty}x^k\right]^{m} = \sum_{n=m}^{\infty}{{n-1} \choose {m-1}}x^{n} -$$ - -The graphical method was used by Paul Ehrenfest and Heike Kamerlingh Onnes – with symbol ε (quantum energy element) in place of a star – as a simple derivation of Max Planck's expression of "complexions". - -Planck called "complexions" the number R of possible distributions of P energy elements ε over N resonators: -$$ -R=\frac {(N+P-1)!}{P!(N-1)!} -$$ - -The graphical representation would contain P times the symbol ε and (N - 1) times the sign | for each possible distribution. In their demonstration, Ehrenfest and Kamerlingh Onnes took N = 4 and P = 7 (i.e., R = 120 combinations). They chose the 4-tuple (4, 2, 0, 1) as the illustrative example for this symbolic representation: - -εεεε|εε||ε diff --git a/wiki/wikipedia/4049.txt b/wiki/wikipedia/4049.txt deleted file mode 100644 index eb2bf3a61c9ba5aa142127d2596a62ed109cffce..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4049.txt +++ /dev/null @@ -1,12 +0,0 @@ -In mathematics, the Nagata conjecture on curves, named after Masayoshi Nagata, governs the minimal degree required for a plane algebraic curve to pass through a collection of very general points with prescribed multiplicities. - -Nagata arrived at the conjecture via work on the 14th problem of Hilbert, which asks whether the invariant ring of a linear group action on the polynomial ring k[x_1, ..., x_n] over some field k is finitely generated. Nagata published the conjecture in a 1959 paper in the American Journal of Mathematics, in which he presented a counterexample to Hilbert's 14th problem. - -Nagata Conjecture. Suppose p_1, ..., p_r are very general points in P^2 and that m_1, ..., m_r are given positive integers. Then for r > 9 any curve C in P^2 that passes through each of the points p_i with multiplicity m_i must satisfy -$$ -\deg C > \frac{1}{\sqrt{r}}\sum_{i=1}^r m_i. -$$ - -The condition r > 9 is necessary: The cases r > 9 and r ≤ 9 are distinguished by whether or not the anti-canonical bundle on the blowup of P^2 at a collection of r points is nef. In the case where r ≤ 9, the cone theorem essentially gives a complete description of the cone of curves of the blow-up of the plane. - -The only case when this is known to hold is when r is a perfect square, which was proved by Nagata. Despite much interest, the other cases remain open. A more modern formulation of this conjecture is often given in terms of Seshadri constants and has been generalised to other surfaces under the name of the Nagata–Biran conjecture. diff --git a/wiki/wikipedia/405.txt b/wiki/wikipedia/405.txt deleted file mode 100644 index 3a34672ea8a76dfed4717854478ac857bf37d65a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/405.txt +++ /dev/null @@ -1,27 +0,0 @@ -In analytic number theory, the Siegel–Walfisz theorem was obtained by Arnold Walfisz as an application of a theorem by Carl Ludwig Siegel to primes in arithmetic progressions.
It is a refinement both of the prime number theorem and of Dirichlet's theorem on primes in arithmetic progressions. - -Define -$$ -\psi(x;q,a) = \sum_{n\leq x \atop n\equiv a\pmod{q}}\Lambda(n), -$$ - -where $\Lambda$ denotes the von Mangoldt function, and let φ denote Euler's totient function. - -Then the theorem states that given any real number N there exists a positive constant C_N depending only on N such that -$$ -\psi(x;q,a)=\frac{x}{\varphi(q)}+O\left(x\exp\left(-C_N(\log x)^\frac{1}{2}\right)\right), -$$ - -whenever (a, q) = 1 and -$$ -q\le(\log x)^N. -$$ - -The constant C_N is not effectively computable because Siegel's theorem is ineffective. - -From the theorem we can deduce the following bound regarding the prime number theorem for arithmetic progressions: If, for (a, q) = 1, by $\pi(x;q,a)$ we denote the number of primes less than or equal to x which are congruent to a mod q, then -$$ -\pi(x;q,a) = \frac{{\rm Li}(x)}{\varphi(q)}+O\left(x\exp\left(-\frac{C_N}{2}(\log x)^\frac{1}{2}\right)\right), -$$ - -where N, a, q, C_N and φ are as in the theorem, and Li denotes the logarithmic integral. diff --git a/wiki/wikipedia/4050.txt b/wiki/wikipedia/4050.txt deleted file mode 100644 index 004c20434d9aae8e6c5c9e002bfb6def4fd74671..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4050.txt +++ /dev/null @@ -1,15 +0,0 @@ -In geometry, the midsphere or intersphere of a polyhedron is a sphere which is tangent to every edge of the polyhedron. That is to say, it touches any given edge at exactly one point. Not every polyhedron has a midsphere, but for every polyhedron there is a combinatorially equivalent polyhedron, the canonical polyhedron, that does have a midsphere. - -The midsphere is so called because, for polyhedra that have a midsphere, an inscribed sphere (which is tangent to every face of a polyhedron), and a circumscribed sphere (which touches every vertex), the midsphere is in the middle, between the other two spheres. The radius of the midsphere is called the midradius. - -The uniform polyhedra, including the regular, quasiregular and semiregular polyhedra and their duals all have midspheres. In the regular polyhedra, the inscribed sphere, midsphere, and circumscribed sphere all exist and are concentric. - -If O is the midsphere of a polyhedron P, then the intersection of O with any face of P is a circle. The circles formed in this way on all of the faces of P form a system of circles on O that are tangent exactly when the faces they lie in share an edge. - -Dually, if v is a vertex of P, then there is a cone that has its apex at v and that is tangent to O in a circle; this circle forms the boundary of a spherical cap within which the sphere's surface is visible from the vertex. That is, the circle is the horizon of the midsphere, as viewed from the vertex. The circles formed in this way are tangent to each other exactly when the vertices they correspond to are connected by an edge. - -If a polyhedron P has a midsphere O, then the polar polyhedron with respect to O also has O as its midsphere. The face planes of the polar polyhedron pass through the circles on O that are tangent to cones having the vertices of P as their apexes. - -One stronger form of the circle packing theorem, on representing planar graphs by systems of tangent circles, states that every polyhedral graph can be represented by a polyhedron with a midsphere.
The horizon circles of a canonical polyhedron can be transformed, by stereographic projection, into a collection of circles in the Euclidean plane that do not cross each other and are tangent to each other exactly when the vertices they correspond to are adjacent. In contrast, there exist polyhedra that do not have an equivalent form with an inscribed sphere or circumscribed sphere. - -Any two polyhedra with the same face lattice and the same midsphere can be transformed into each other by a projective transformation of three-dimensional space that leaves the midsphere in the same position. The restriction of this projective transformation to the midsphere is a Möbius transformation. There is a unique way of performing this transformation so that the midsphere is the unit sphere and so that the centroid of the points of tangency is at the center of the sphere; this gives a representation of the given polyhedron that is unique up to congruence, the canonical polyhedron. Alternatively, a transformed polyhedron that maximizes the minimum distance of a vertex from the midsphere can be found in linear time; the canonical polyhedron chosen in this way has maximal symmetry among all choices of the canonical polyhedron. diff --git a/wiki/wikipedia/4051.txt b/wiki/wikipedia/4051.txt deleted file mode 100644 index f4d0dc2cdcd88be006d392c99641cc22e2a2d2a5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4051.txt +++ /dev/null @@ -1,168 +0,0 @@ -In mathematics, an implicit equation is a relation of the form R(x_1, …, x_n) = 0, where R is a function of several variables (often a polynomial). For example, the implicit equation of the unit circle is x^2 + y^2 − 1 = 0. - -An implicit function is a function that is defined by an implicit equation that relates one of the variables, considered as the value of the function, with the others considered as the arguments. For example, the equation x^2 + y^2 − 1 = 0 of the unit circle defines y as an implicit function of x if −1 ≤ x ≤ 1, and one restricts y to nonnegative values. - -The implicit function theorem provides conditions under which some kinds of relations define an implicit function, namely relations defined as the indicator function of the zero set of some continuously differentiable multivariate function. - -A common type of implicit function is an inverse function. Not all functions have a unique inverse function. If g is a function of x that has a unique inverse, then the inverse function of g, called $g^{-1}$, is the unique function giving a solution of the equation -$$ - y=g(x) -$$ - -for x in terms of y. This solution can then be written as -$$ - x = g^{-1}(y) . -$$ - -Defining $g^{-1}$ as the inverse of g is an implicit definition. For some functions g, $g^{-1}(y)$ can be written out explicitly as a closed-form expression — for instance, if g(x) = 2x − 1, then $g^{-1}(y) = (y + 1)/2$. However, this is often not possible, or only by introducing a new notation (as in the product log example below). - -Intuitively, an inverse function is obtained from g by interchanging the roles of the dependent and independent variables. - -Example: The product log is an implicit function giving the solution for x of the equation y − xe^x = 0. - -An algebraic function is a function that satisfies a polynomial equation whose coefficients are themselves polynomials. For example, an algebraic function in one variable x gives a solution for y of an equation -$$ -a_n(x)y^n+a_{n-1}(x)y^{n-1}+\cdots+a_0(x)=0 , -$$ - -where the coefficients a_i(x) are polynomial functions of x.
This algebraic function can be written as the right side of the solution equation y = f(x). Written like this, f is a multi-valued implicit function. - -Algebraic functions play an important role in mathematical analysis and algebraic geometry. A simple example of an algebraic function is given by the left side of the unit circle equation: -$$ -x^2+y^2-1=0 . -$$ - -Solving for y gives an explicit solution: -$$ -y=\pm\sqrt{1-x^2} . -$$ - -But even without specifying this explicit solution, it is possible to refer to the implicit solution of the unit circle equation as y = f(x), where f is the multi-valued implicit function. - -While explicit solutions can be found for equations that are quadratic, cubic, and quartic in y, the same is not in general true for quintic and higher degree equations, such as -$$ - y^5 + 2y^4 -7y^3 + 3y^2 -6y - x = 0 . -$$ - -Nevertheless, one can still refer to the implicit solution y = f(x) involving the multi-valued implicit function f. - -Not every equation R(x, y) = 0 implies a graph of a single-valued function, the circle equation being one prominent example. Another example is an implicit function given by x − C(y) = 0 where C is a cubic polynomial having a "hump" in its graph. Thus, for an implicit function to be a true (single-valued) function it might be necessary to use just part of the graph. An implicit function can sometimes be successfully defined as a true function only after "zooming in" on some part of the x-axis and "cutting away" some unwanted function branches. Then an equation expressing y as an implicit function of the other variables can be written. - -The defining equation R(x, y) = 0 can also have other pathologies. For example, the equation x = 0 does not imply a function f(x) giving solutions for y at all; it is a vertical line. In order to avoid a problem like this, various constraints are frequently imposed on the allowable sorts of equations or on the domain. The implicit function theorem provides a uniform way of handling these sorts of pathologies. - -In calculus, a method called implicit differentiation makes use of the chain rule to differentiate implicitly defined functions. - -To differentiate an implicit function y(x), defined by an equation R(x, y) = 0, it is not generally possible to solve it explicitly for y and then differentiate. Instead, one can totally differentiate R(x, y) = 0 with respect to x and y and then solve the resulting linear equation for dy/dx to explicitly get the derivative in terms of x and y. Even when it is possible to explicitly solve the original equation, the formula resulting from total differentiation is, in general, much simpler and easier to use. - -Consider -$$ -y + x + 5 = 0 . -$$ - -This equation is easy to solve for y, giving -$$ -y = -x - 5 , -$$ - -where the right side is the explicit form of the function y(x). Differentiation then gives dy/dx = −1. - -Alternatively, one can totally differentiate the original equation: - -\begin{align} - -\frac{dy}{dx} + \frac{dx}{dx} + \frac{d}{dx}(5) &= 0 ; \\[6px] - -\frac{dy}{dx} + 1 + 0 &= 0 . - -\end{align} - -Solving for dy/dx gives -$$ -\frac{dy}{dx} = -1 , -$$ - -the same answer as obtained previously. - -An example of an implicit function for which implicit differentiation is easier than using explicit differentiation is the function y(x) defined by the equation -$$ - x^4 + 2y^2 = 8 . -$$ - -To differentiate this explicitly with respect to x, one has first to get -$$ -y(x) = \pm\sqrt{\frac{8 - x^4}{2}} , -$$ - -and then differentiate this function. 
This creates two derivatives: one for y ≥ 0 and another for y < 0. - -It is substantially easier to implicitly differentiate the original equation: -$$ -4x^3 + 4y\frac{dy}{dx} = 0 , -$$ - -giving -$$ -\frac{dy}{dx} = \frac{-4x^3}{4y} = -\frac{x^3}{y} . -$$ - -Often, it is difficult or impossible to solve explicitly for y, and implicit differentiation is the only feasible method of differentiation. An example is the equation -$$ -y^5-y=x . -$$ - -It is impossible to algebraically express y explicitly as a function of x, and therefore one cannot find dy/dx by explicit differentiation. Using the implicit method, dy/dx can be obtained by differentiating the equation to obtain -$$ -5y^4\frac{dy}{dx} - \frac{dy}{dx} = \frac{dx}{dx} , -$$ - -where dx/dx = 1. Factoring out dy/dx shows that -$$ -\left(5y^4 - 1\right)\frac{dy}{dx} = 1 , -$$ - -which yields the result -$$ -\frac{dy}{dx}=\frac{1}{5y^4-1} , -$$ - -which is defined for -$$ -y \ne \pm\frac{1}{\sqrt[4]{5}} \quad \text{and} \quad y \ne \pm \frac{i}{\sqrt[4]{5}} . -$$ - -If R(x, y) = 0, the derivative of the implicit function y(x) is given by -$$ -\frac{dy}{dx} = -\frac{\frac{\partial R}{\partial x}}{\frac{\partial R}{\partial y}} = -\frac {R_x}{R_y} , -$$ - -where Rx and Ry indicate the partial derivatives of R with respect to x and y. - -The above formula comes from using the generalized chain rule to obtain the total derivative — with respect to x — of both sides of R(x, y) = 0: -$$ -\frac{\partial R}{\partial x} \frac{dx}{dx} + \frac{\partial R}{\partial y} \frac{dy}{dx} = 0 , -$$ - -hence -$$ -\frac{\partial R}{\partial x} + \frac{\partial R}{\partial y} \frac{dy}{dx} =0 , -$$ - -which, when solved for dy/dx, gives the expression above. - -Let R(x, y) be a differentiable function of two variables, and (a, b) be a pair of real numbers such that R(a, b) = 0. If ∂R/∂y ≠ 0, then R(x, y) = 0 defines an implicit function that is differentiable in some small enough neighbourhood of ; in other words, there is a differentiable function f that is defined and differentiable in some neighbourhood of a, such that R(x, f(x)) = 0 for x in this neighbourhood. - -The condition ∂R/∂y ≠ 0 means that (a, b) is a regular point of the implicit curve of implicit equation R(x, y) = 0 where the tangent is not vertical. - -In a less technical language, implicit functions exist and can be differentiated, if the curve has a non-vertical tangent. - -Consider a relation of the form R(x1, …, xn) = 0, where R is a multivariable polynomial. The set of the values of the variables that satisfy this relation is called an implicit curve if n = 2 and an implicit surface if n = 3. The implicit equations are the basis of algebraic geometry, whose basic subjects of study are the simultaneous solutions of several implicit equations whose left-hand sides are polynomials. These sets of simultaneous solutions are called affine algebraic sets. - -The solutions of differential equations generally appear expressed by an implicit function. - -In economics, when the level set R(x, y) = 0 is an indifference curve for the quantities x and y consumed of two goods, the absolute value of the implicit derivative dy/dx is interpreted as the marginal rate of substitution of the two goods: how much more of y one must receive in order to be indifferent to a loss of one unit of x. 
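The general formula dy/dx = −Rx/Ry above is also convenient computationally; a short sketch using the sympy library (assumed installed, not part of the original article) checks the x^4 + 2y^2 = 8 example from earlier:

```python
# Symbolic check of dy/dx = -R_x/R_y for the curve x^4 + 2y^2 = 8 treated above.
import sympy as sp

x, y = sp.symbols('x y')
R = x**4 + 2*y**2 - 8

dydx = -sp.diff(R, x) / sp.diff(R, y)  # ratio of the two partial derivatives
print(sp.simplify(dydx))               # -x**3/y, matching the implicit result
```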
- -Similarly, sometimes the level set R(L, K) is an isoquant showing various combinations of utilized quantities L of labor and K of physical capital each of which would result in the production of the same given quantity of output of some good. In this case the absolute value of the implicit derivative dK/dL is interpreted as the marginal rate of technical substitution between the two factors of production: how much more capital the firm must use to produce the same amount of output with one less unit of labor. - -Often in economic theory, some function such as a utility function or a profit function is to be maximized with respect to a choice vector x even though the objective function has not been restricted to any specific functional form. The implicit function theorem guarantees that the first-order conditions of the optimization define an implicit function for each element of the optimal vector x* of the choice vector x. When profit is being maximized, typically the resulting implicit functions are the labor demand function and the supply functions of various goods. When utility is being maximized, typically the resulting implicit functions are the labor supply function and the demand functions for various goods. - -Moreover, the influence of the problem's parameters on x* — the partial derivatives of the implicit function — can be expressed as total derivatives of the system of first-order conditions found using total differentiation. diff --git a/wiki/wikipedia/4052.txt b/wiki/wikipedia/4052.txt deleted file mode 100644 index 7342f886c909658ac5366491eb6b1dbee50eb3b5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4052.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, Scholz's reciprocity law is a reciprocity law for quadratic residue symbols of real quadratic number fields first discovered by Theodor Schönemann and rediscovered by Arnold Scholz. - -Suppose that p and q are rational primes congruent to 1 mod 4 such that the Legendre symbol (p/q) is 1. Then the ideal (p) factorizes in the ring of integers of Q(√q) as (p)=𝖕𝖕' and similarly (q)=𝖖𝖖' in the ring of integers of Q(√p). - -Write ε_p and ε_q for the fundamental units in these quadratic fields. Then Scholz's reciprocity law says that - -[ε_p/𝖖] = [ε_q/𝖕] - -where [] is the quadratic residue symbol in a quadratic number field. diff --git a/wiki/wikipedia/4053.txt b/wiki/wikipedia/4053.txt deleted file mode 100644 index 65c7c540417fce6e502de1219d2a9bff09a287dd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4053.txt +++ /dev/null @@ -1,13 +0,0 @@ -Myers's theorem, also known as the Bonnet–Myers theorem, is a celebrated, fundamental theorem in the mathematical field of Riemannian geometry. It was discovered by Sumner Byron Myers in 1941. It asserts the following: - -Let $(M, g)$ be a complete Riemannian manifold of dimension $n$ whose Ricci curvature satisfies $Ric^g\geq (n-1)k$ for some positive real number $k.$ Then any two points of M can be joined by a geodesic segment of length $\leq \pi /\sqrt{k}$. - -In the special case of surfaces, this result was proved by Ossian Bonnet in 1855. For a surface, the Gauss, sectional, and Ricci curvatures are all the same, but Bonnet's proof easily generalizes to higher dimensions if one assumes a positive lower bound on the sectional curvature. Myers' key contribution was therefore to show that a Ricci lower bound is all that is needed to reach the same conclusion.
The Hopf-Rinow theorem therefore implies that $M$ must be compact, as a closed (and hence compact) ball of radius $\pi/\sqrt{k}$ in any tangent space is carried onto all of $M$ by the exponential map. - -As a very particular case, this shows that any complete and noncompact smooth Riemannian manifold which is Einstein must have nonpositive Einstein constant. - -Consider the smooth universal covering map $\pi : N \to M.$ One may consider the Riemannian metric π*g on $N.$ Since $\pi$ is a local diffeomorphism, Myers' theorem applies to the Riemannian manifold (N,π*g) and hence $N$ is compact. This implies that the fundamental group of $M$is finite. - -The conclusion of Myers' theorem says that for any $p, q \in M,$ one has dg(p,q) ≤ π/k. In 1975, Shiu-Yuen Cheng proved: diff --git a/wiki/wikipedia/4054.txt b/wiki/wikipedia/4054.txt deleted file mode 100644 index 429f398221c081206b4983f65ef5130985655916..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4054.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematics the Watson quintuple product identity is an infinite product identity introduced by and rediscovered by Bailey and Gordon. It is analogous to the Jacobi triple product identity, and is the Macdonald identity for a certain non-reduced affine root system. It is related to Euler's pentagonal number theorem. - - \prod_{n\ge 1} (1-s^n)(1-s^nt)(1-s^{n-1}t^{-1})(1-s^{2n-1}t^2)(1-s^{2n-1}t^{-2}) - -= \sum_{n\in \mathbf{Z}}s^{(3n^2+n)/2}(t^{3n}-t^{-3n-1}) diff --git a/wiki/wikipedia/4055.txt b/wiki/wikipedia/4055.txt deleted file mode 100644 index 54650c4205dd7f3897e5bd42c6c09174397b15fc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4055.txt +++ /dev/null @@ -1,121 +0,0 @@ -In computer science, multiway number partitioning is the problem of partitioning a multiset of numbers into a fixed number of subsets, such that the sums of the subsets are as similar as possible. It was first presented by Ronald Graham in 1969 in the context of the Identical-machines scheduling problem. The problem is parametrized by a positive integer k, and called k-way number partitioning. The input to the problem is a multiset S of numbers (usually integers), whose sum is k*T. - -The associated decision problem is to decide whether S can be partitioned into k subsets such that the sum of each subset is exactly T. There is also an optimization problem: find a partition of S into k subsets, such that the k sums are "as near as possible". The exact optimization objective can be defined in several ways: - -* Minimize the difference between the largest sum and the smallest sum. This objective is common in papers about multiway number partitioning, as well as papers originating from physics applications. and in sequencing of maintenance actions for modular gas turbine aircraft engines. Suppose there are some k engines, which must be kept working for as long as possible. An engine needs a certain critical part in order to operate. There is a set S of parts, each of which has a different lifetime. The goal is to assign the parts to the engines, such that the shortest engine lifetime is as large as possible. - -These three objective functions are equivalent when k=2, but they are all different when k≥3. - -All these problems are NP-hard, but there are various algorithms that solve it efficiently in many cases. - -Some closely-related problems are: - -* The partition problem - a special case of multiway number partitioning in which the number of subsets is 2. 
- -* The 3-partition problem - a different and harder problem, in which the number of subsets is not considered a fixed parameter, but is determined by the input (the number of sets is the number of integers divided by 3). - -*The bin packing problem - a dual problem in which the total sum in each subset is bounded, but k is flexible; the goal is to find a partition with the smallest possible k. The optimization objectives are closely related: the optimal number of d-sized bins is at most k, iff the optimal size of a largest subset in a k-partition is at most d. - -*Sahni presented a PTAS that attains (1+ε)OPT in time $O(n\cdot (n^2 / \epsilon)^{k-1})$. It is an FPTAS if k is fixed. For k=2, the run-time improves to $O(n^2 / \epsilon)$. The algorithm uses a technique called interval partitioning. - -*Hochbaum and Shmoys presented the following algorithms, which work even when k is part of the input. - -**For any r >0, an algorithm with approximation ratio at most $(6/5+2^{-r})$ in time $O(n(r+\log{n}))$. - -**For any r >0, an algorithm with approximation ratio at most $(7/6+2^{-r})$ in time $O(n(r k^4+\log{n}))$. - -**For any ε>0, an algorithm with approximation ratio at most (1+ε) in time $O((n/\varepsilon)^{(1/\varepsilon^2)})$. This is a PTAS. - -For the objective of maximizing the smallest sum, the approximation ratio is the smallest sum in the solution returned by the algorithm, divided by the smallest sum in the optimal solution (the ratio is less than 1). - -* For greedy number partitioning, if the numbers are not sorted then the worst-case approximation ratio is 1/k. Sorting the numbers increases the approximation ratio to 5/6 when k=2, and $\frac{3k-1}{4k-2}$ in general, and it is tight. - -*Alon, Azar, Woeginger and Yadid presented general PTASs (generalizing the PTASs of Sahni, Hochbaum and Shmoys, and Woeginger) for objectives defined by a function f of the subset sums, such as minimizing sum(f(Ci)). Their algorithm works for any f which satisfies the following two conditions: - -# A strong continuity condition called Condition F*: for every ε>0 there exists δ>0 such that, if |y-x|<δx, then |f(y)-f(x)|<εf(x). - -# Convexity (for the minimization problems) or concavity (for the maximization problems). - -The runtime of their PTASs is linear in n (the number of inputs), but exponential in the approximation precision. The PTAS for minimizing sum(f(Ci)) is based on some combinatorial observations: - -# Let L := the average sum in a single subset (1/k the sum of all inputs). If some input x is at least L, then there is an optimal partition in which one part contains only x. This follows from the convexity of f. Therefore, the input can be pre-processed by assigning each such input to a unique subset. After this preprocessing, one can assume that all inputs are smaller than L. - -#There is an optimal partition in which all subset sums are strictly between L/2 and 2L (L/2 < Ci < 2L for all i in 1,...,k). In particular, the partition minimizing the sum of squares $C_i^2$, among all optimal partitions, satisfies these inequalities. - -The PTAS uses an input rounding technique. Given the input sequence S = $(v_1,\ldots,v_n)$ and a positive integer d, the rounded sequence S#(d) is defined as follows: - -* For any $v_j > L/d$, the sequence S#(d) contains an input $v_j^{\#}$ which is $v_j$ rounded up to the next integer multiple of $L/d^2$. Note that $v_j \leq v_j^{\#} < v_j + L/d^2$, and $L/d^2 < v_j/d$, so $v_j^{\#} < (d+1)v_j/d$. - -*In addition, the sequence S#(d) contains some inputs equal to L/d.
The number of these inputs is determined such that the sum of all these new inputs equals the sum of all inputs in S#(d) that are at most L/d, rounded up to the next integer multiple of L/d (for example, if the sum of all "short" inputs in S is 51.3L/d, then 52 new L/d inputs are added to S#(d)). - -In S#(d), all inputs are integer multiples of $L/d^2$. Moreover, the above two observations hold for S#(d) too: - -# Let L# be the average sum in a single subset (1/k the sum of all inputs in S#(d)). By construction, L# is at least L. Since L itself is an integer multiple of $L/d^2$, the rounding-up of inputs smaller than L cannot make them larger than L. Therefore, all inputs in S#(d) are smaller than L, and hence smaller than L#. - -#There is an optimal partition of S#(d) in which all subset sums are strictly between L#/2 and 2L#. Therefore, all subsets contain at most 2d elements (since all inputs in S#(d) are at least L/d). - -Based on these observations, all inputs in S#(d) are of the form $hL/d^2$, where h is an integer in the range $\{d,d+1,\ldots,d^2\}$. Therefore, the input can be represented as an integer vector $\mathbf{n} = (n_d, n_{d+1}, \ldots, n_{d^2})$, where $n_h$ is the number of $hL/d^2$ inputs in S#(d). Moreover, each subset can be represented as an integer vector $\mathbf{t} = (t_d, t_{d+1}, \ldots, t_{d^2})$, where $t_h$ is the number of $hL/d^2$ inputs in the subset. The subset sum is then $C(\mathbf{t}) = \sum_{h=d}^{d^2} t_h\cdot (h L/d^2)$. Denote by $T$ the set of vectors $\mathbf{t}$ with $L^{\#}/2 < C(\mathbf{t}) < 2 L^{\#}$. Since the sum of coordinates of such a vector is at most 2d, the total number of these vectors is smaller than ${(d^2)}^{2d} = d^{4 d}$, so $|T|\leq d^{4d}$. - -There are two different ways to find an optimal solution to S#(d). One way uses dynamic programming: its run-time is a polynomial whose exponent depends on d. The other way uses Lenstra's algorithm for integer linear programming. - -Define $VAL(k, \mathbf{n})$ as the optimal (minimum) value of the objective function sum(f(Ci)), when the input vector is $\mathbf{n} = (n_d, n_{d+1}, \ldots, n_{d^2})$ and it has to be partitioned into k subsets, among all partitions in which all subset sums are strictly between L#/2 and 2L#. - -It can be solved by the following recurrence relation: - -* $VAL(0, \mathbf{0}) = 0$ - since the objective sum is empty. - -* $VAL(1, \mathbf{n}) = f(C(\mathbf{n}))$ if $L^{\#}/2 < C(\mathbf{n}) < 2 L^{\#}$ - since all inputs must be assigned to a single subset, so its sum is $C(\mathbf{n})$. - -* $VAL(1, \mathbf{n}) = \infty$ otherwise - since we do not consider optimal solutions outside this range. - -* $VAL(k, \mathbf{n}) = \min_{\mathbf{t}\leq \mathbf{n}, \mathbf{t}\in T} [f(C(\mathbf{t})) + VAL(k-1, \mathbf{n}-\mathbf{t})]$ for all $k\geq 2$: we check all options for the k-th subset, and combine it with an optimal partition of the remainder into k-1 subsets. - -For each k and n, the recurrence relation requires checking at most $|T|$ vectors. The total number of vectors n to check is at most $n^{d^2}$, where n is the original number of inputs. Therefore, the run-time of the dynamic programming algorithm is $O(k\cdot n^{d^2}\cdot d^{4d})$. It is linear in n for any fixed d. - -For each vector t in T, introduce a variable $x_{\mathbf{t}}$ denoting the number of subsets with this configuration.
Minimizing sum(f(Ci)) can be attained by solving the following ILP: - -* Minimize $\sum_{\mathbf{t}\in T} x_{\mathbf{t}}\cdot f(C(\mathbf{t}))$ - -* subject to $\sum_{\mathbf{t}\in T} x_{\mathbf{t}} = k$ (the total number of subsets) - -* and $\sum_{\mathbf{t}\in T} x_{\mathbf{t}} \cdot \mathbf{t}= \mathbf{n}$ (the total vector of inputs) - -* and $x_{\mathbf{t}} \geq 0$. - -The number of variables is at most $d^{4d}$, and the number of equations is $d^{4d}+d^2-d+2$ - both are constants independent of n, k. Therefore, Lenstra's algorithm can be used. Its run-time is exponential in the dimension ($d^{4d}$), but polynomial in the binary representation of the coefficients, which is in O(log(n)). Constructing the ILP itself takes time O(n). - -The following lemmas relate the partitions of the rounded instance S#(d) and the original instance S. - -* For every partition of S with sums Ci, there is a partition of S#(d) with sums Ci#, where $C_i - \frac{L}{d} \leq C_i^{\#} \leq \frac{d+1}{d}C_i + \frac{L}{d} $. - -* For every partition of S#(d) with sums Ci#, there is a partition of S with sums Ci, where $\frac{d}{d+1}C^{\#}_i - 2 \frac{L}{d} \leq C_i \leq C^{\#}_i + \frac{L}{d} $, and it can be found in time O(n). - -Given a desired approximation precision ε>0, let δ>0 be the constant corresponding to ε/3, whose existence is guaranteed by Condition F*. Let $d := \lceil 5/\delta \rceil $. It is possible to show that the converted partition of S has a total cost of at most $(1+\frac{\epsilon}{3})\cdot OPT \leq(1+ \epsilon)\cdot OPT $, so the approximation ratio is 1+ε. - -In contrast to the above result, if we take $f(x) = 2^x$ or $f(x)=(x-1)^2$, then no PTAS for minimizing sum(f(Ci)) exists unless P=NP. - -* The Complete Greedy Algorithm (CGA) considers all partitions by constructing a k-ary tree. Each level in the tree corresponds to an input number, where the root corresponds to the largest number, the level below to the next-largest number, etc. Each of the k branches corresponds to a different set in which the current number can be put. Traversing the tree in depth-first order requires only O(n) space, but might take $O(k^n)$ time. The runtime can be improved by using a greedy heuristic: in each level, develop first the branch in which the current number is put in the set with the smallest sum. This algorithm finds first the solution found by greedy number partitioning, but then proceeds to look for better solutions. - -* The Complete Karmarkar-Karp algorithm (CKK) considers all partitions by constructing a tree of degree $k!$. Each level corresponds to a pair of k-tuples, and each of the $k!$ branches corresponds to a different way of combining these k-tuples. This algorithm finds first the solution found by the largest differencing method, but then proceeds to find better solutions. For k=2 and k=3, CKK runs substantially faster than CGA on random instances. The advantage of CKK over CGA is much larger when an equal partition exists, and can be of several orders of magnitude. In practice, with k=2, problems of arbitrary size can be solved by CKK if the numbers have at most 12 significant digits; with k=3, at most 6 significant digits. CKK can also run as an anytime algorithm: it finds the KK solution first, and then finds progressively better solutions as time allows (possibly requiring exponential time to reach optimality, for the worst instances). For k ≥ 4, CKK becomes much slower, and CGA performs better.
- -*Recursive Number Partitioning (RNP) uses CKK for k=2, but for k>2 it recursively splits S into subsets and splits k into halves. It performs much better than CKK. - -*Korf, Schreiber and Moffitt presented hybrid algorithms, combining CKK, CGA and other methods from the subset sum problem and the bin packing problem to achieve an even better performance. - -The bin packing problem has many fast solvers. A BP solver can be used to find an optimal number partitioning. The idea is to use binary search to find the optimal makespan. To initialize the binary search, we need a lower bound and an upper bound: - -* Some lower bounds on the makespan are: (sum S)/k - the average value per subset, $s_1$ - the largest number in S, and $s_k + s_{k+1}$ - the size of a bin in the optimal partition of only the largest k+1 numbers. - -* Some upper bounds can be attained by running heuristic algorithms, such as the greedy algorithm or KK. - -Given a lower and an upper bound, run the BP solver with bin size middle := (lower+upper)/2. - -* If the result contains more than k bins, then the optimal makespan must be larger: set lower to middle and repeat. - -* If the result contains at most k bins, then the optimal makespan may be smaller: set upper to middle and repeat. - -In the balanced number partitioning problem, there are constraints on the number of items that can be allocated to each subset (these are called cardinality constraints). - -Another variant is the multidimensional number partitioning. - -One application of the partition problem is for manipulation of elections. Suppose there are three candidates (A, B and C). A single candidate should be elected using the veto voting rule, i.e., each voter vetoes a single candidate and the candidate with the fewest vetoes wins. If a coalition wants to ensure that C is elected, they should partition their vetoes among A and B so as to maximize the smallest number of vetoes each of them gets. If the votes are weighted, then the problem can be reduced to the partition problem, and thus it can be solved efficiently using CKK. For k=2, the same is true for any other voting rule that is based on scoring. However, for k>2 and other voting rules, some other techniques are required. diff --git a/wiki/wikipedia/4056.txt b/wiki/wikipedia/4056.txt deleted file mode 100644 index 8ab8e631c98ede572a3135c6aaff6b1dc96216c1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4056.txt +++ /dev/null @@ -1,25 +0,0 @@ -In graph theory, a branch of mathematics, the Erdős–Hajnal conjecture states that families of graphs defined by forbidden induced subgraphs have either large cliques or large independent sets. It is named for Paul Erdős and András Hajnal. - -More precisely, for an arbitrary undirected graph $H$, let $\mathcal{F}_H$ be the family of graphs that do not have $H$ as an induced subgraph. Then, according to the conjecture, there exists a constant $\delta_H > 0$ such that the $n$-vertex graphs in $\mathcal{F}_H$ have either a clique or an independent set of size $\Omega(n^{\delta_H})$. - -An equivalent statement to the original conjecture is that, for every graph $H$, the $H$-free graphs all contain polynomially large perfect induced subgraphs. - -In contrast, for random graphs in the Erdős–Rényi model with edge probability 1/2, both the maximum clique and the maximum independent set are much smaller: their size is proportional to the logarithm of $n$, rather than growing polynomially; a small simulation illustrating this is sketched below.
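The logarithmic size of the largest clique and independent set in such random graphs is easy to observe empirically. The following sketch samples one graph from the Erdős–Rényi model and measures both quantities; it assumes the third-party networkx library, and the size n = 60 and the seed are arbitrary illustrative choices, not anything from the source.

```
import math
import networkx as nx

# Sample G(n, 1/2) and compare the maximum clique and maximum independent
# set with 2*log2(n). Exact maximum-clique search is exponential in the
# worst case, but it is fast here because all cliques are small.
n = 60
G = nx.gnp_random_graph(n, 0.5, seed=0)

clique_size = max(len(c) for c in nx.find_cliques(G))
# an independent set of G is a clique of the complement graph
indep_size = max(len(c) for c in nx.find_cliques(nx.complement(G)))

print(clique_size, indep_size, 2 * math.log2(n))
```

Both measured sizes typically land near 2·log2(n), far below the polynomial growth that the conjecture demands of H-free graphs.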
Ramsey's theorem proves that no graph has both its maximum clique size and maximum independent set size smaller than logarithmic. The conjecture has been proven for some special choices of $H$, including all five-vertex graphs except the five-vertex path and its complement. - -As of 2021, however, the full conjecture has not been proven, and remains an open problem. - -An earlier formulation of the conjecture, also by Erdős and Hajnal, concerns the special case when $H$ is a 5-vertex cycle graph. The $C_5$-free graphs include the perfect graphs, which necessarily have either a clique or independent set of size proportional to the square root of their number of vertices. Conversely, every clique or independent set is itself perfect. For this reason, a convenient and symmetric reformulation of the Erdős–Hajnal conjecture is that for every graph $H$, the $H$-free graphs necessarily contain an induced perfect subgraph of polynomial size. - -== Relation to the chromatic number of tournaments == - -Alon et al. showed that the following statement concerning tournaments is equivalent to the Erdős-Hajnal conjecture. - -Conjecture. For every tournament $T$, there exists $\varepsilon(T) > 0$ and $c >0$ such that for every $T$-free tournament $G$ with $n$ vertices $\chi(G) \leq c \cdot n^{1-\varepsilon}$. - -Here $\chi(G)$ denotes the chromatic number of $G$, which is the smallest $k \in \mathbb{N}$ such that there is a $k$-coloring for $G$. A coloring of a tournament $G$ is a mapping $\phi:V(G) \rightarrow \{1, \ldots ,k \}$ such that the color classes $\{v \in V(G): \phi(v) = i \}$ are transitive for all $i = 1, \ldots , k$. - -The class of tournaments $H$ with the property that every $H$-free tournament $G$ has $\chi(G) \leq c(H)$ for some constant $c(H)$ satisfies this equivalent Erdős-Hajnal conjecture (with $\varepsilon = 1$). Such tournaments $H$, called heroes, were considered by Berger et al. There it is proven that a hero has a special structure which is as follows: - -Theorem. A tournament is a hero if and only if all its strong components are heroes. A strong tournament with more than one vertex is a hero if and only if it equals $\Delta(H,k,1)$ or $\Delta(H,1,k)$ for some hero $H$ and some integer $k \geq 0$. - -Here $\Delta(H,k,1)$ denotes the tournament with the three components $H$, the transitive tournament of size $k$ and a single node $q$. The arcs between the three components are defined as follows: $A = \{(v,w) : (v \in H \wedge w \in T_k)\vee (v \in T_k \wedge w = q) \vee (v = q \wedge w \in H) \}$. The tournament $\Delta(H,1,k)$ is defined analogously. diff --git a/wiki/wikipedia/4057.txt b/wiki/wikipedia/4057.txt deleted file mode 100644 index 64111a216c4726e11036dd024e7d0949680a0235..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4057.txt +++ /dev/null @@ -1,67 +0,0 @@ -In geometry, a median of a triangle is a line segment joining a vertex to the midpoint of the opposite side, thus bisecting that side. Every triangle has exactly three medians, one from each vertex, and they all intersect each other at the triangle's centroid. In the case of isosceles and equilateral triangles, a median bisects any angle at a vertex whose two adjacent sides are equal in length. - -The concept of a median extends to tetrahedra. - -Each median of a triangle passes through the triangle's centroid, which is the center of mass of an infinitely thin object of uniform density coinciding with the triangle. Thus the object would balance on the intersection point of the medians.
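The concurrence of the medians at the vertex average is easy to confirm numerically. The following sketch, which assumes numpy and uses a randomly generated triangle, checks that the point two-thirds of the way along each median (from the vertex to the opposite midpoint) is the same for all three medians, anticipating the 2:1 ratio discussed next.

```
import numpy as np

# The three medians of a triangle meet at the centroid (A + B + C) / 3.
rng = np.random.default_rng(0)
A, B, C = rng.random((3, 2))
centroid = (A + B + C) / 3

for vertex, midpoint in [(A, (B + C) / 2), (B, (A + C) / 2), (C, (A + B) / 2)]:
    # Parametrize the median as vertex + t * (midpoint - vertex); the
    # centroid corresponds to t = 2/3, i.e. the 2:1 ratio.
    point = vertex + (2 / 3) * (midpoint - vertex)
    assert np.allclose(point, centroid)

print("all three medians pass through", centroid)
```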
The centroid is twice as close along any median to the side that the median intersects as it is to the vertex it emanates from. - -Each median divides the area of the triangle in half; hence the name, and hence a triangular object of uniform density would balance on any median. (Any other lines which divide the area of the triangle into two equal parts do not pass through the centroid.) The three medians divide the triangle into six smaller triangles of equal area. - -Consider a triangle ABC. Let D be the midpoint of $\overline{AB}$, E be the midpoint of $\overline{BC}$, F be the midpoint of $\overline{AC}$, and O be the centroid (most commonly denoted G). - -By definition, $AD=DB, AF=FC, BE=EC $. Thus $[ADO]=[BDO], [AFO]=[CFO], [BEO]=[CEO],$ and $[ABE]=[ACE] $, where $[ABC]$ represents the area of triangle $\triangle ABC$ ; these hold because in each case the two triangles have bases of equal length and share a common altitude from the (extended) base, and a triangle's area equals one-half its base times its height. - -We have: -$$ -[ABO]=[ABE]-[BEO] -$$ -$$ -[ACO]=[ACE]-[CEO] -$$ - -Thus, $[ABO]=[ACO] $ and $[ADO]=[DBO], [ADO]=\frac{1}{2}[ABO]$ - -Since $[AFO]=[FCO], [AFO]= \frac{1}{2}[ACO]=\frac{1}{2}[ABO]=[ADO]$, therefore, $[AFO]=[FCO]=[DBO]=[ADO]$. - -Using the same method, one can show that $[AFO]=[FCO]=[DBO]=[ADO]=[BEO]=[CEO] $. - -In 2014 Lee Sallows discovered the following theorem: - -The medians of any triangle dissect it into six equal area smaller triangles as in the figure above where three adjacent pairs of triangles meet at the midpoints D, E and F. If the two triangles in each such pair are rotated about their common midpoint until they meet so as to share a common side, then the three new triangles formed by the union of each pair are congruent. - -The lengths of the medians can be obtained from Apollonius' theorem as: - -m_a = \sqrt{\frac{2 b^2 + 2 c^2 - a^2}{4}} - -m_b = \sqrt{\frac{2 a^2 + 2 c^2 - b^2}{4}} - -m_c = \sqrt{\frac{2 a^2 + 2 b^2 - c^2}{4}} - -where $a, b,$ and $c$ are the sides of the triangle with respective medians $m_a, m_b,$ and $m_c$ from their midpoints. - -These formulas imply the relationships: - -a = \frac{2}{3} \sqrt{-m_a^2 + 2m_b^2 + 2m_c^2} = \sqrt{2(b^2+c^2)-4m_a^2} = \sqrt{\frac{b^2}{2} - c^2 + 2m_b^2} = \sqrt{\frac{c^2}{2} - b^2 + 2m_c^2} - -b = \frac{2}{3} \sqrt{-m_b^2 + 2m_a^2 + 2m_c^2} = \sqrt{2(a^2+c^2)-4m_b^2} = \sqrt{\frac{a^2}{2} - c^2 + 2m_a^2} = \sqrt{\frac{c^2}{2} - a^2 + 2m_c^2} - -c = \frac{2}{3} \sqrt{-m_c^2 + 2m_b^2 + 2m_a^2} = \sqrt{2(b^2+a^2)-4m_c^2} = \sqrt{\frac{b^2}{2} - a^2 + 2m_b^2} = \sqrt{\frac{a^2}{2} - b^2 + 2m_a^2}. - -Let ABC be a triangle, let G be its centroid, and let D, E, and F be the midpoints of BC, CA, and AB, respectively. For any point P in the plane of ABC, - -PA+PB+PC \leq 2(PD+PE+PF) + 3PG. - -The centroid divides each median into parts in the ratio 2:1, with the centroid being twice as close to the midpoint of a side as it is to the opposite vertex. - -For any triangle with sides $a, b, c$ and medians $m_a, m_b, m_c,$ - -\tfrac{3}{4}(a+b+c) < m_a + m_b + m_c < a+b+c \quad \text{ and } \quad \tfrac{3}{4}\left(a^2+b^2+c^2\right) = m_a^2 + m_b^2 + m_c^2. - -The medians from sides of lengths $a$ and $b$ are perpendicular if and only if $a^2 + b^2 = 5c^2.$ - -The medians of a right triangle with hypotenuse $c$ satisfy $m_a^2 + m_b^2 = 5m_c^2.$ - -Any triangle's area T can be expressed in terms of its medians $m_a, m_b$, and $m_c$ as follows.
If their semi-sum $\left(m_a + m_b + m_c\right)/2$ is denoted by $\sigma$ then - -T = \frac{4}{3} \sqrt{\sigma \left(\sigma - m_a\right)\left(\sigma - m_b\right)\left(\sigma - m_c\right)}. - -A tetrahedron is a three-dimensional object having four triangular faces. A line segment joining a vertex of a tetrahedron with the centroid of the opposite face is called a median of the tetrahedron. There are four medians, and they are all concurrent at the centroid of the tetrahedron. As in the two-dimensional case, the centroid of the tetrahedron is the center of mass. However contrary to the two-dimensional case the centroid divides the medians not in a 2:1 ratio but in a 3:1 ratio (Commandino's theorem). diff --git a/wiki/wikipedia/4058.txt b/wiki/wikipedia/4058.txt deleted file mode 100644 index adcd6c4d074e771dc792874c6728763300b229a8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4058.txt +++ /dev/null @@ -1,35 +0,0 @@ -In mathematics, more specifically in group theory, a group is said to be perfect if it equals its own commutator subgroup, or equivalently, if the group has no non-trivial abelian quotients (equivalently, its abelianization, which is the universal abelian quotient, is trivial). In symbols, a perfect group is one such that $G^{(1)} = G$ (the commutator subgroup equals the group), or equivalently one such that $G^{\mathrm{ab}} = \{1\}$ (its abelianization is trivial). - -The smallest (non-trivial) perfect group is the alternating group A5. More generally, any non-abelian simple group is perfect since the commutator subgroup is a normal subgroup with abelian quotient. Conversely, a perfect group need not be simple; for example, the special linear group over the field with 5 elements, SL(2,5) (or the binary icosahedral group, which is isomorphic to it) is perfect but not simple (it has a non-trivial center containing $\left(\begin{smallmatrix}-1 & 0 \\ 0 & -1\end{smallmatrix}\right) = \left(\begin{smallmatrix}4 & 0 \\ 0 & 4\end{smallmatrix}\right)$). - -The direct product of any two non-abelian simple groups is perfect but not simple; the commutator of two elements is [(a,b),(c,d)] = ([a,c],[b,d]). Since commutators in each simple group form a generating set, pairs of commutators form a generating set of the direct product. - -More generally, a quasisimple group (a perfect central extension of a simple group) that is a non-trivial extension (and therefore not a simple group itself) is perfect but not simple; this includes all the insoluble non-simple finite special linear groups SL(n,q) as extensions of the projective special linear group PSL(n,q) (SL(2,5) is an extension of PSL(2,5), which is isomorphic to A5). Similarly, the special linear group over the real and complex numbers is perfect, but the general linear group GL is never perfect (except when trivial or over $\mathbb{F}_2$, where it equals the special linear group), as the determinant gives a non-trivial abelianization and indeed the commutator subgroup is SL. - -A non-trivial perfect group, however, is necessarily not solvable, and 4 divides its order (if finite); moreover, if 8 does not divide the order, then 3 does. - -Every acyclic group is perfect, but the converse is not true: A5 is perfect but not acyclic (in fact, not even superperfect). In fact, for $n\ge 5$ the alternating group $A_n$ is perfect but not superperfect, with $H_2(A_n,\Z) = \Z/2$ for $n \ge 8$.
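For the smallest example, perfection can be verified by brute force. The following sketch, in plain Python with permutations encoded as tuples (all helper names are illustrative, not standard API), checks that the subgroup of A5 generated by commutators is all of A5.

```
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(5))

def inverse(p):
    inv = [0] * 5
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def is_even(p):
    # a permutation is even iff its number of inversions is even
    return sum(p[i] > p[j] for i in range(5) for j in range(i + 1, 5)) % 2 == 0

A5 = {p for p in permutations(range(5)) if is_even(p)}
commutators = {compose(compose(a, b), compose(inverse(a), inverse(b)))
               for a in A5 for b in A5}

# close the commutator set under composition to get the commutator subgroup
subgroup, frontier = set(commutators), set(commutators)
while frontier:
    new = {compose(x, y) for x in frontier for y in subgroup} - subgroup
    subgroup |= new
    frontier = new

print(len(A5), len(subgroup), subgroup == A5)  # 60 60 True
```

In this run the set of commutators already equals A5 before taking the closure, in line with Ore's result quoted below.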
- -Any quotient of a perfect group is perfect. A non-trivial finite perfect group that is not simple must then be an extension of at least one smaller simple non-abelian group. But it can be the extension of more than one simple group. In fact, the direct product of perfect groups is also perfect. - -Every perfect group G determines another perfect group E (its universal central extension) together with a surjection f: E → G whose kernel is in the center of E, such that f is universal with this property. The kernel of f is called the Schur multiplier of G because it was first studied by Issai Schur in 1904; it is isomorphic to the homology group $H_2(G)$. - -In the plus construction of algebraic K-theory, if we consider the group $\operatorname{GL}(A) = \text{colim} \operatorname{GL}_n(A)$ for a commutative ring $A$, then the subgroup of elementary matrices $E(A)$ forms a perfect subgroup. - -As the commutator subgroup is generated by commutators, a perfect group may contain elements that are products of commutators but not themselves commutators. Øystein Ore proved in 1951 that the alternating groups on five or more elements contained only commutators, and conjectured that this was so for all the finite non-abelian simple groups. Ore's conjecture was finally proven in 2008. The proof relies on the classification theorem. - -A basic fact about perfect groups is Grün's lemma: the quotient of a perfect group by its center is centerless (has trivial center).
Proof: If G is a perfect group, let Z1 and Z2 denote the first two terms of the upper central series of G (i.e., Z1 is the center of G, and Z2/Z1 is the center of G/Z1). If H and K are subgroups of G, denote the commutator of H and K by [H, K] and note that [Z1, G] = 1 and [Z2, G] ⊆ Z1, and consequently (the convention that [X, Y, Z] = [[X, Y], Z] is followed): - -$[Z_2,G,G]=[[Z_2,G],G]\subseteq [Z_1,G]=1$ and $[G,Z_2,G]=[[G,Z_2],G]=[[Z_2,G],G]\subseteq [Z_1,G]=1$. By the three subgroups lemma (or equivalently, by the Hall-Witt identity), it follows that [G, Z2] = [[G, G], Z2] = [G, G, Z2] = {1}. Therefore, Z2 ⊆ Z1 = Z(G), and the center of the quotient group G / Z(G) is trivial. As a consequence, all higher centers (that is, higher terms in the upper central series) of a perfect group equal the center. - -In terms of group homology, a perfect group is precisely one whose first homology group vanishes: $H_1(G, \Z) = 0$, as the first homology group of a group is exactly the abelianization of the group, and perfect means trivial abelianization. An advantage of this definition is that it admits strengthening: - -* A superperfect group is one whose first two homology groups vanish: $H_1(G,\Z)=H_2(G,\Z)=0$. - -* An acyclic group is one all of whose (reduced) homology groups vanish $\tilde H_i(G;\Z) = 0.$ (This is equivalent to all homology groups other than $H_0$ vanishing.) - -Especially in the field of algebraic K-theory, a group is said to be quasi-perfect if its commutator subgroup is perfect; in symbols, a quasi-perfect group is one such that $G^{(1)} = G^{(2)}$ (the commutator of the commutator subgroup is the commutator subgroup), while a perfect group is one such that $G^{(1)} = G$ (the commutator subgroup is the whole group). diff --git a/wiki/wikipedia/4059.txt b/wiki/wikipedia/4059.txt deleted file mode 100644 index 0293d1602a6163b0400798c02b1f8e2f9d234174..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4059.txt +++ /dev/null @@ -1,24 +0,0 @@ -Brocard's problem is a problem in mathematics that asks to find integer values of n and m for which -$$ -n!+1 = m^2, -$$ - -where n! is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. - -Pairs of the numbers (n, m) that solve Brocard's problem are called Brown numbers. As of May 2021, there are only three known pairs of Brown numbers: - -(4,5), (5,11), and (7,71). - -Paul Erdős conjectured that no other solutions exist. Overholt showed that there are only finitely many solutions provided that the abc conjecture is true. Berndt performed calculations for n up to $10^9$ and found no further solutions. Matson has extended this by three orders of magnitude to one trillion. Epstein has recently extended this by three more orders of magnitude to one quadrillion. - -Dabrowski generalized Overholt's result by showing that it would follow from the abc conjecture that -$$ -n!+A = k^2 -$$ - -has only finitely many solutions, for any given integer A. This result was further generalized by Luca, who showed (again assuming the abc conjecture) that the equation -$$ -n! = P(x) -$$ - -has only finitely many integer solutions for a given polynomial P(x) of degree at least 2 with integer coefficients.
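The three known pairs are easy to reproduce with a direct search; here is a minimal sketch in Python (the bound 1000 is an arbitrary illustration, vastly smaller than the search limits quoted above).

```
import math

# Search for Brown numbers: pairs (n, m) with n! + 1 = m^2.
def brown_numbers(n_max):
    pairs, factorial = [], 1
    for n in range(1, n_max + 1):
        factorial *= n
        m = math.isqrt(factorial + 1)
        if m * m == factorial + 1:
            pairs.append((n, m))
    return pairs

print(brown_numbers(1000))  # [(4, 5), (5, 11), (7, 71)]
```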
diff --git a/wiki/wikipedia/406.txt b/wiki/wikipedia/406.txt deleted file mode 100644 index 9f0a05613b8857194c05d8f76ae6571c501ececd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/406.txt +++ /dev/null @@ -1,48 +0,0 @@ -In computational geometry, polygon triangulation is the decomposition of a polygonal area (simple polygon) P into a set of triangles, i.e., finding a set of triangles with pairwise non-intersecting interiors whose union is P. - -Triangulations may be viewed as special cases of planar straight-line graphs. When there are no holes or added points, triangulations form maximal outerplanar graphs. - -Over time, a number of algorithms have been proposed to triangulate a polygon. - -It is trivial to triangulate any convex polygon in linear time into a fan triangulation, by adding diagonals from one vertex to all other vertices. - -The total number of ways to triangulate a convex n-gon by non-intersecting diagonals is the (n−2)nd Catalan number, which equals -$$ -\frac{n(n+1)\cdots(2n-4)}{(n-2)!} -$$, - -a formula found by Leonhard Euler. - -A monotone polygon can be triangulated in linear time with either the algorithm of A. Fournier and D.Y. Montuno, or the algorithm of Godfried Toussaint. - -One way to triangulate a simple polygon is based on the two ears theorem: the fact that any simple polygon with at least 4 vertices and without holes has at least two 'ears', which are triangles with two sides being the edges of the polygon and the third one completely inside it. The algorithm then consists of finding such an ear, removing it from the polygon (which results in a new polygon that still meets the conditions) and repeating until there is only one triangle left. - -This algorithm is easy to implement, but slower than some other algorithms, and it only works on polygons without holes. An implementation that keeps separate lists of convex and concave vertices will run in $O(n^2)$ time. This method is known as ear clipping and sometimes ear trimming. An efficient algorithm for cutting off ears was discovered by Hossam ElGindy, Hazel Everett, and Godfried Toussaint. - -A simple polygon is monotone with respect to a line L, if any line orthogonal to L intersects the polygon at most twice. A monotone polygon can be split into two monotone chains. A polygon that is monotone with respect to the y-axis is called y-monotone. A monotone polygon with n vertices can be triangulated in O(n) time. Assuming a given polygon is y-monotone, the greedy algorithm begins by walking on one chain of the polygon from top to bottom while adding diagonals whenever it is possible. It is easy to see that the algorithm can be applied to any monotone polygon. - -If a polygon is not monotone, it can be partitioned into monotone subpolygons in O(n log n) time using a sweep-line approach. The algorithm does not require the polygon to be simple, thus it can be applied to polygons with holes. - -Generally, this algorithm can triangulate a planar subdivision with n vertices in O(n log n) time using O(n) space. - -A useful graph that is often associated with a triangulation of a polygon P is the dual graph. Given a triangulation TP of P, one defines the graph G(TP) as the graph whose vertex set are the triangles of TP, two vertices (triangles) being adjacent if and only if they share a diagonal. It is easy to observe that G(TP) is a tree with maximum degree 3. - -Until 1988, whether a simple polygon can be triangulated faster than O(n log n) time was an open problem in computational geometry.
Then, Tarjan discovered an O(n log log n)-time algorithm for triangulation, later simplified by Kirkpatrick. Several improved methods with complexity O(n log* n) (in practice, indistinguishable from linear time) followed. - -Bernard Chazelle showed in 1991 that any simple polygon can be triangulated in linear time, though the proposed algorithm is very complex. A simpler randomized algorithm with linear expected time is also known. - -Seidel's decomposition algorithm and Chazelle's triangulation method are discussed in detail in Li. - -The time complexity of triangulation of an n-vertex polygon with holes has an Ω(n log n) lower bound, in algebraic computation tree models of computation. It is possible to compute the number of distinct triangulations of a simple polygon in polynomial time using dynamic programming, and (based on this counting algorithm) to generate uniformly random triangulations in polynomial time. However, counting the triangulations of a polygon with holes is #P-complete, making it unlikely that it can be done in polynomial time. - -* Both triangulation problems are a special case of triangulation (geometry) and a special case of polygon partition. - -* Minimum-weight triangulation is a triangulation in which the goal is to minimize the total edge length. - -* A point-set triangulation is a polygon triangulation of the convex hull of a set of points. A Delaunay triangulation is another way to create a triangulation based on a set of points. - -* The associahedron is a polytope whose vertices correspond to the triangulations of a convex polygon. - -* Polygon triangle covering, in which the triangles may overlap. - -* Tiling by polygons, where the goal is to cover the entire plane with polygons of pre-specified shapes. diff --git a/wiki/wikipedia/4060.txt b/wiki/wikipedia/4060.txt deleted file mode 100644 index 0564e5fdc88bac1178c42f50d1ebbc1fe6aa0a59..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4060.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematics, the Kleene-Rosser paradox is a paradox that shows that certain systems of formal logic are inconsistent, in particular the version of Haskell Curry's combinatory logic introduced in 1930, and Alonzo Church's original lambda calculus, introduced in 1932-1933, both originally intended as systems of formal logic. The paradox was exhibited by Stephen Kleene and J. B. Rosser in 1935. - -Kleene and Rosser were able to show that both systems are able to characterize and enumerate their provably total, definable number-theoretic functions, which enabled them to construct a term that essentially replicates the Richard paradox in formal language. - -Curry later managed to identify the crucial ingredients of the calculi that allowed the construction of this paradox, and used this to construct a much simpler paradox, now known as Curry's paradox. diff --git a/wiki/wikipedia/4061.txt b/wiki/wikipedia/4061.txt deleted file mode 100644 index 1ff00a02ef734fb1a7df522df3761b32e51808e5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4061.txt +++ /dev/null @@ -1,146 +0,0 @@ -In projective geometry, Segre's theorem, named after the Italian mathematician Beniamino Segre, is the statement: - -*Any oval in a finite pappian projective plane of odd order is a nondegenerate projective conic section. - -This statement was conjectured in 1949 by the two Finnish mathematicians G. Järnefelt and P. Kustaanheimo, and its proof was published in 1955 by B. Segre.
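The easy direction of the statement, that a nondegenerate conic actually is an oval, can be sanity-checked computationally in a small plane. The following sketch (plain Python; the choice q = 7 is an arbitrary odd prime) lists the q+1 points of the projective conic $yz = x^2$ over GF(q) in homogeneous coordinates and verifies that no three of them are collinear.

```
from itertools import combinations

q = 7  # an odd prime, so arithmetic mod q is the field GF(q)

# affine points (t : t^2 : 1) of y = x^2, plus the point at infinity (0 : 1 : 0)
points = [(t, (t * t) % q, 1) for t in range(q)] + [(0, 1, 0)]

def collinear(p1, p2, p3):
    # three points are collinear iff the 3x3 determinant vanishes mod q
    det = (p1[0] * (p2[1] * p3[2] - p2[2] * p3[1])
           - p1[1] * (p2[0] * p3[2] - p2[2] * p3[0])
           + p1[2] * (p2[0] * p3[1] - p2[1] * p3[0]))
    return det % q == 0

assert len(points) == q + 1
assert not any(collinear(*t) for t in combinations(points, 3))
print("q+1 points, no three collinear: the conic is an oval")
```

Segre's theorem is the converse: in odd order, these are the only ovals.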
A finite pappian projective plane can be imagined as the projective closure of the real plane (by a line at infinity), where the real numbers are replaced by a finite field K. Odd order means that $|K|$ is odd. An oval is a curve similar to a circle (see definition below): any line meets it in at most 2 points and through any point of it there is exactly one tangent. The standard examples are the nondegenerate projective conic sections. - -In pappian projective planes of even order greater than four there are ovals which are not conics. In an infinite plane there exist ovals which are not conics. In the real plane one just glues a half of a circle and a suitable ellipse smoothly. - -The proof of Segre's theorem, shown below, uses the 3-point version of Pascal's theorem and a property of a finite field of odd order, namely, that the product of all the nonzero elements equals -1. - -*In a projective plane a set $\mathfrak o$ of points is called an oval, if: - -(1) Any line $g$ meets $\mathfrak o$ in at most two points. - -If $|g\cap\mathfrak o|=0$ the line $g$ is an exterior (or passing) line; in case $|g\cap\mathfrak o|=1$ a tangent line and if $|g\cap\mathfrak o|=2$ the line is a secant line. - -(2) For any point $P \in \mathfrak o$ there exists exactly one tangent $t$ at P, i.e., $ t\cap\mathfrak o=\{P\}$. - -For finite planes (i.e. the set of points is finite) we have a more convenient characterization: - -* For a finite projective plane of order n (i.e. any line contains n + 1 points) a set $\mathfrak o$ of points is an oval if and only if $|\mathfrak o|=n+1$ and no three points are collinear (on a common line). - -;Theorem: - -Let $\mathfrak o$ be an oval in a pappian projective plane of characteristic $\ne 2$.
$\mathfrak o$ is a nondegenerate conic if and only if statement (P3) holds: - -(P3): Let $P_1,P_2,P_3$ be any triangle on $\mathfrak o$ and $\overline {P_iP_i}$ the tangent at point $P_i$ to $\mathfrak o$, then the points -$$ -P_4:= \overline {P_1P_1} \cap \overline {P_2P_3},\ P_5:= \overline {P_2P_2} \cap \overline {P_1P_3}, \ P_6:= \overline {P_3P_3} \cap \overline {P_1P_2} -$$ - -are collinear. - -;Proof: - -Let the projective plane be coordinatized inhomogeneously over a field $K$ such that $P_3=(\infty)$, $g_\infty$ is the tangent at $P_3$, $(0,0) \in \mathfrak o$, the x-axis is the tangent at the point $(0,0)$ and $\mathfrak o$ contains the point $(1,1)$. Furthermore, we set $P_1=(x_1,y_1), P_2=(x_2,y_2)\ .$
- -The oval $\mathfrak o$ can be described by a function $f: K \to K$ such that: -$$ -\mathfrak o=\{(x,y)\in K^2 | y=f(x)\} \ \cup \{(\infty)\} . -$$ - -The tangent at point $(x_0,f(x_0))$ will be described using a function $f' $ such that its equation is -$$ -y=f'(x_0)(x-x_0) +f(x_0) -$$ - -Hence -$$ -P_5=(x_1,f'(x_2)(x_1-x_2)+f(x_2)) -$$ and $P_4=(x_2,f'(x_1)(x_2-x_1)+f(x_1)) .$ - -I: If $\mathfrak o$ is a nondegenerate conic we have $f(x)=x^2$ and $f'(x)=2x$ and one calculates easily that $P_4,P_5,P_6$ are collinear. - -II: If $\mathfrak o$ is an oval with property (P3), the slope of the line $\overline{P_4P_5}$ is equal to the slope of the line $\overline{P_1P_2}$, that means: -$$ -f'(x_2)+f'(x_1) - \frac{f(x_2)-f(x_1)}{x_2-x_1}=\frac{f(x_2)-f(x_1)}{x_2-x_1} -$$ and hence - -(i): $ (f'(x_2)+f'(x_1))(x_2-x_1)=2(f(x_2)-f(x_1))$ for all $x_1,x_2 \in K$. - -With $f(0)=f'(0)=0$ one gets - -(ii): $f'(x_2)x_2=2f(x_2)$ and from $f(1)=1$ we get - -(iii): $f'(1)=2 .$ - -(i) and (ii) yield - -(iv): $f'(x_2)x_1=f'(x_1)x_2$ and with (iii) we finally get - -(v): $f'(x_2)=2x_2$ for all $x_2 \in K$. - -A consequence of (ii) and (v) is -$$ -f(x_2)=x_2^2, x_2 \in K -$$. - -Hence $\mathfrak o$ is a nondegenerate conic. - -Remark: - -Property (P3) is fulfilled for any oval in a pappian projective plane of characteristic 2 with a nucleus (all tangents meet at the nucleus). Hence in this case (P3) is also true for non-conic ovals. - -;Theorem: - -Any oval $\mathfrak o$ in a finite pappian projective plane of odd order is a nondegenerate conic section. - -;Proof: - -For the proof we show that the oval has property (P3) of the 3-point version of Pascal's theorem. - -Let $P_1,P_2,P_3$ be any triangle on $\mathfrak o$ and $P_4,P_5,P_6$ defined as described in (P3). - -The pappian plane will be coordinatized inhomogeneously over a finite field $K$, such that $P_3=(\infty), P_2=(0), P_1=(1,1)$ and $ (0,0)$ is the common point of the tangents at $P_2$ and $P_3$. The oval $\mathfrak o$ can be described using a bijective function $f: K^*:=K \setminus \{0\} \to K^*$: -$$ -\mathfrak o=\{(x,y)\in K^2 | y=f(x), x\ne 0\} \cup \{(0),(\infty)\} . -$$ - -For a point $P=(x,y), x\in K\setminus\{0,1\}$, the expression $m(x)=\tfrac{f(x)-1}{x-1}$ is the slope of the secant $\overline{PP_1} .$ Because both the functions $x\mapsto f(x)-1$ and $x\mapsto x-1$ are bijections from $K\setminus\{0,1\}$ to $K\setminus\{0,-1\}$, and $x\mapsto m(x)$ is a bijection from $K\setminus\{0,1\}$ onto $K\setminus\{0,m_1\}$, where $m_1$ is the slope of the tangent at $P_1$, for $K^{**}:=K\setminus\{0,1\}$ we get - -\prod_{x\in K^{**}}(f(x)-1)=\prod_{x\in K^{**}}(x-1)=1 \quad \text{and}\quad - -m_1\cdot\prod_{x\in K^{**}}\frac{f(x)-1}{x-1}=-1 . - -(Remark: For $K^*:= K\setminus\{0\}$ we have: -$$ -\displaystyle \prod_{k\in K^*}k=-1 . -$$)
- -Hence - --1=m_1\cdot\prod_{x\in K^{**}}\frac{f(x)-1}{x-1}= - -m_1\cdot\frac{ - -\displaystyle \prod_{x\in K^{**}}(f(x)-1)}{ - -\displaystyle \prod_{x\in K^{**}}(x-1)}=m_1 . - -Because the slopes of line $\overline{P_5P_6}$ and tangent -$$ -\overline{P_1P_1} -$$ both are $-1$, it follows that -$$ -\overline{P_1P_1}\cap \overline{P_2P_3}=P_4 \in\overline{P_5P_6} -$$. - -This is true for any triangle $P_1,P_2,P_3 \in \mathfrak o$. - -So: (P3) of the 3-point Pascal theorem holds and the oval is a nondegenerate conic. diff --git a/wiki/wikipedia/4062.txt b/wiki/wikipedia/4062.txt deleted file mode 100644 index a3fb6552bc2d85550eabe1cc996291235c526fea..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4062.txt +++ /dev/null @@ -1,3 +0,0 @@ -In logic, the scope of a quantifier or a quantification is the range in the formula where the quantifier "engages in". It is put right after the quantifier, often in parentheses. Some authors describe this as including the variable put right after the forall or exists symbol. In the formula ∀xP, for example, P (or xP) is the scope of the quantifier ∀x (or ∀). - -A variable in the formula is free, if and only if it does not occur in the scope of any quantifier for that variable. A term is free for a variable in the formula (i.e. free to substitute that variable that occurs free), if and only if that variable does not occur free in the scope of any quantifier for any variable in the term. diff --git a/wiki/wikipedia/4063.txt b/wiki/wikipedia/4063.txt deleted file mode 100644 index 53c30e7ec1e1a238c32bf2afc1d5f4e47562c853..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4063.txt +++ /dev/null @@ -1,75 +0,0 @@ -In the mathematics of graph drawing, the crossing number inequality or crossing lemma gives a lower bound on the minimum number of crossings of a given graph, as a function of the number of edges and vertices of the graph. It states that, for graphs where the number e of edges is sufficiently larger than the number n of vertices, the crossing number is at least proportional to $e^3/n^2$. - -It has applications in VLSI design and combinatorial geometry, - -and was discovered independently by Ajtai, Chvátal, Newborn, and Szemerédi - -and by Leighton. - -The crossing number inequality states that, for an undirected simple graph G with n vertices and e edges such that e > 7n, the crossing number cr(G) obeys the inequality -$$ -\operatorname{cr}(G) \geq \frac{e^3}{29 n^2}. -$$ - -The constant 29 is the best known to date, and is due to Ackerman. - -For earlier results with weaker constants see Pach and Pach. - -The constant 7 can be lowered to 4, but at the expense of replacing 29 with the worse constant of 64. - -It is important to distinguish between the crossing number cr(G) and the pairwise crossing number pair-cr(G). As noted by Pach, the pairwise crossing number $\text{pair-cr}(G) \leq \text{cr}(G)$ refers to the minimum number of pairs of edges that each determine one crossing, whereas the crossing number simply refers to the minimum number of crossings. (This distinction is necessary since some authors assume that in a proper drawing no two edges cross more than once.) - -The motivation of Leighton in studying crossing numbers was for applications to VLSI design in theoretical computer science. - -Later, Székely realized that this inequality yielded very simple proofs of some important theorems in incidence geometry.
For instance, the Szemerédi–Trotter theorem, an upper bound on the number of incidences that are possible between given numbers of points and lines in the plane, - -follows by constructing a graph whose vertices are the points and whose edges are the segments of lines between incident points. If there were more incidences than the Szemerédi–Trotter bound, this graph would necessarily have more crossings than the total number of pairs of lines, an impossibility. - -The inequality can also be used to prove Beck's theorem, that if a finite point set does not have a linear number of collinear points, then it determines a quadratic number of distinct lines. - -Similarly, Tamal Dey used it to prove upper bounds on geometric k-sets. - -We first give a preliminary estimate: for any graph G with n vertices and e edges, we have -$$ -\operatorname{cr}(G) \geq e - 3n. -$$ - -To prove this, consider a diagram of G which has exactly cr(G) crossings. Each of these crossings can be removed by removing an edge from G. Thus we can find a graph with at least e − cr(G) edges and n vertices with no crossings, and is thus a planar graph. But from Euler's formula we must then have e − cr(G) ≤ 3n, and the claim follows. (In fact we have e − cr(G) ≤ 3n − 6 for n ≥ 3). - -To obtain the actual crossing number inequality, we now use a probabilistic argument. We let 0 < p < 1 be a probability parameter to be chosen later, and construct a random subgraph H of G by allowing each vertex of G to lie in H independently with probability p, and allowing an edge of G to lie in H if and only if its two vertices were chosen to lie in H. Let eH, nH and crH denote the number of edges, vertices and crossings of H, respectively. Since H is a subgraph of G, the diagram of G contains a diagram of H. By the preliminary crossing number inequality, we have -$$ -\operatorname{cr}_H \geq e_H - 3n_H. -$$ - -Taking expectations we obtain -$$ -\mathbf{E}[\operatorname{cr}_H] \geq \mathbf{E}[e_H] - 3 \mathbf{E}[n_H]. -$$ - -Since each of the n vertices in G had a probability p of being in H, we have $\mathbf{E}[n_H] = pn$. Similarly, each of the edges in G has a probability $p^2$ of remaining in H since both endpoints need to stay in H, therefore $\mathbf{E}[e_H] = p^2 e$. Finally, every crossing in the diagram of G has a probability $p^4$ of remaining in H, since every crossing involves four vertices. To see this consider a diagram of G with cr(G) crossings. We may assume that any two edges in this diagram with a common vertex do not cross, otherwise we could interchange the intersecting parts of the two edges and reduce the crossing number by one. Thus every crossing in this diagram involves four distinct vertices of G. Therefore, $\mathbf{E}[\operatorname{cr}_H] \leq p^4 \operatorname{cr}(G)$ and we have -$$ - p^4 \operatorname{cr}(G) \geq p^2 e - 3 p n. -$$ - -Now if we set p = 4n/e < 1 (since we assumed that e > 4n), we obtain after some algebra -$$ - \operatorname{cr}(G) \geq \frac{e^3}{64 n^2}. -$$ - -A slight refinement of this argument allows one to replace 64 by 33.75 for e > 7.5n. - -For graphs with girth larger than 2r and e ≥ 4n, Pach demonstrated an improvement of this inequality to -$$ -\operatorname{cr}(G) \geq c_r\frac{e^{r+2}}{n^{r+1}}.
-$$ - -Adiprasito showed a generalization to higher dimensions: If $\Delta$ is a simplicial complex that is mapped piecewise-linearly to $\mathbf{R}^{2d}$, and it has $f_d(\Delta)$ faces of dimension $d$ and $f_{d-1}(\Delta)$ faces of dimension $d-1$ such that $f_d(\Delta)> (d+3)f_{d-1}(\Delta)$, then the number of pairwise intersecting $d$-dimensional faces is at least -$$ -\frac{f_d^{d+2}(\Delta)}{(d+3)^{d+2}f_{d-1}^{d+1}(\Delta)}. -$$ diff --git a/wiki/wikipedia/4064.txt b/wiki/wikipedia/4064.txt deleted file mode 100644 index e8fab4e3c0cb6fc0395d0242bc3739310fb2e5be..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4064.txt +++ /dev/null @@ -1,214 +0,0 @@ -In statistical mechanics, the Griffiths inequality, sometimes also called Griffiths-Kelly-Sherman inequality or GKS inequality, named after Robert B. Griffiths, is a correlation inequality for ferromagnetic spin systems. Informally, it says that in ferromagnetic spin systems, if the 'a-priori distribution' of the spin is invariant under spin flipping, the correlation of any monomial of the spins is non-negative; and the two point correlation of two monomials of the spins is non-negative. - -The inequality was proved by Griffiths for Ising ferromagnets with two-body interactions, then generalised by Kelly and Sherman to interactions involving an arbitrary number of spins, and then by Griffiths to systems with arbitrary spins. A more general formulation was given by Ginibre, and is now called the Ginibre inequality. - -Let $ \textstyle \sigma=\{\sigma_j\}_{j \in \Lambda}$ be a configuration of (continuous or discrete) spins on a lattice Λ. If A ⊂ Λ is a list of lattice sites, possibly with duplicates, let $ \textstyle \sigma_A = \prod_{j \in A} \sigma_j $ be the product of the spins in A. - -Assign an a-priori measure dμ(σ) on the spins; - -let H be an energy functional of the form -$$ -H(\sigma)=-\sum_{A} J_A \sigma_A ~, -$$ - -where the sum is over lists of sites A, and let -$$ - Z=\int d\mu(\sigma) e^{-H(\sigma)} -$$ - -be the partition function. As usual, -$$ - \langle f \rangle = \frac{1}{Z} \int d\mu(\sigma) f(\sigma) e^{-H(\sigma)} -$$ - -stands for the ensemble average. - -The system is called ferromagnetic if, for any list of sites A, $J_A \geq 0$. The system is called invariant under spin flipping if, for any j in Λ, the measure μ is preserved under the sign flipping map σ → τ, where - - \tau_k = \begin{cases} - -\sigma_k, &k\neq j, \\ - -- \sigma_k, &k = j. - -\end{cases} - - - -In a ferromagnetic spin system which is invariant under spin flipping, -$$ - \langle \sigma_A\rangle \geq 0 -$$ - -for any list of spins A. - -In a ferromagnetic spin system which is invariant under spin flipping, - - \langle \sigma_A\sigma_B\rangle \geq - -\langle \sigma_A\rangle \langle \sigma_B\rangle - -for any lists of spins A and B. - -The first inequality is a special case of the second one, corresponding to B = ∅. - -Observe that the partition function is non-negative by definition.
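Before the proofs, both inequalities can be sanity-checked by exact enumeration on a tiny system. The following sketch, in plain Python, does this for three Ising spins; the couplings, the field, and the lists A and B are arbitrary non-negative choices for illustration.

```
from itertools import product
from math import exp

def prodspin(sigma, A):
    out = 1
    for j in A:
        out *= sigma[j]
    return out

# ferromagnetic couplings J_A >= 0, including a single-site field term
J = {(0, 1): 0.5, (1, 2): 0.3, (0, 2): 0.2, (0,): 0.1}

def weight(sigma):
    H = -sum(J_A * prodspin(sigma, A) for A, J_A in J.items())
    return exp(-H)

configs = list(product([-1, 1], repeat=3))
Z = sum(weight(s) for s in configs)

def avg(A):
    return sum(prodspin(s, A) * weight(s) for s in configs) / Z

A, B = (0, 1), (1, 2)
print(avg(A) >= 0)                             # first inequality
print(avg(A + B) - avg(A) * avg(B) >= -1e-12)  # second inequality
```

Concatenating the tuples A and B reproduces the multiset convention for $\sigma_A\sigma_B$, since repeated sites contribute $\sigma_j^2 = 1$.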
- -Proof of first inequality: Expand -$$ - e^{-H(\sigma)} = \prod_{B} \sum_{k \geq 0} \frac{J_B^k \sigma_B^k}{k!} = \sum_{\{k_C\}_C} \prod_B \frac{J_B^{k_B} \sigma_B^{k_B}}{k_B!}~, -$$ - -then - -\begin{align}Z \langle \sigma_A \rangle - -&= \int d\mu(\sigma) \sigma_A e^{-H(\sigma)} - -= \sum_{\{k_C\}_C} \prod_B \frac{J_B^{k_B}}{k_B!} \int d\mu(\sigma) \sigma_A \sigma_B^{k_B} \\ - -&= \sum_{\{k_C\}_C} \prod_B \frac{J_B^{k_B}}{k_B!} \int d\mu(\sigma) \prod_{j \in \Lambda} \sigma_j^{n_A(j) + k_B n_B(j)}~,\end{align} - -where $n_A(j)$ stands for the number of times that j appears in A. Now, by invariance under spin flipping, -$$ -\int d\mu(\sigma) \prod_j \sigma_j^{n(j)} = 0 -$$ - -if at least one n(j) is odd, and the same expression is obviously non-negative for even values of n. Therefore, $Z\langle \sigma_A\rangle \geq 0$, hence also $\langle \sigma_A\rangle \geq 0$. - -Proof of second inequality. For the second Griffiths inequality, double the random variable, i.e. consider a second copy of the spin, $\sigma'$, with the same distribution as $\sigma$. Then - - \langle \sigma_A\sigma_B\rangle- - -\langle \sigma_A\rangle \langle \sigma_B\rangle= - -\langle\langle\sigma_A(\sigma_B-\sigma'_B)\rangle\rangle~. - -Introduce the new variables - - - -\sigma_j=\tau_j+\tau_j'~, - -\qquad - -\sigma'_j=\tau_j-\tau_j'~. - - - -The doubled system $\langle\langle\cdot\rangle\rangle$ is ferromagnetic in $\tau, \tau'$ because $-H(\sigma)-H(\sigma')$ is a polynomial in $\tau, \tau'$ with positive coefficients - -\begin{align} - -\sum_A J_A (\sigma_A+\sigma'_A) &= \sum_A J_A\sum_{X\subset A} - -\left[1+(-1)^{|X|}\right] \tau_{A \setminus X} \tau'_X - -\end{align} - -Besides, the measure on $\tau,\tau'$ is invariant under spin flipping because $d\mu(\sigma)d\mu(\sigma')$ is. - -Finally the monomials $\sigma_A$, $\sigma_B-\sigma'_B$ are polynomials in $\tau,\tau'$ with positive coefficients - -\begin{align} - -\sigma_A &= \sum_{X \subset A} \tau_{A \setminus X} \tau'_{X}~, \\ - -\sigma_B-\sigma'_B &= \sum_{X\subset B} - -\left[1-(-1)^{|X|}\right] \tau_{B \setminus X} \tau'_X~. - -\end{align} - -The first Griffiths inequality applied to $\langle\langle\sigma_A(\sigma_B-\sigma'_B)\rangle\rangle$ gives the result. - -More details can be found in the references. - -The Ginibre inequality is an extension, found by Jean Ginibre, of the Griffiths inequality. - -Let (Γ, μ) be a probability space. For functions f, h on Γ, denote -$$ - \langle f \rangle_h = \int f(x) e^{-h(x)} d\mu(x) \Big/ \int e^{-h(x)} d\mu(x). -$$ - -Let A be a set of real functions on Γ such that, for every $f_1,f_2,\ldots,f_n$ in A, and for any choice of signs ±, -$$ - \iint d\mu(x) d\mu(y) \prod_{j=1}^n (f_j(x) \pm f_j(y)) \geq 0. -$$ - -Then, for any f,g,−h in the convex cone generated by A, -$$ - \langle fg\rangle_h - \langle f \rangle_h \langle g \rangle_h \geq 0. -$$ - -Let -$$ - Z_h = \int e^{-h(x)} d\mu(x). -$$ - -Then - -\begin{align} - -&Z_h^2 \left( \langle fg\rangle_h - \langle f \rangle_h \langle g \rangle_h \right)\\ - -&\qquad= \iint d\mu(x) d\mu(y) f(x) (g(x) - g(y)) e^{-h(x)-h(y)} \\ - -&\qquad= \sum_{k=0}^\infty - -\iint d\mu(x) d\mu(y) f(x) (g(x) - g(y)) \frac{(-h(x)-h(y))^k}{k!}. - -\end{align} - -Now the inequality follows from the assumption and from the identity -$$ - f(x) = \frac{1}{2} (f(x)+f(y)) + \frac{1}{2} (f(x)-f(y)). -$$ - -* To recover the (second) Griffiths inequality, take Γ = {-1, +1}^Λ, where Λ is a lattice, and let μ be a measure on Γ that is invariant under sign flipping. The cone A of polynomials with positive coefficients satisfies the assumptions of the Ginibre inequality.
* (Γ, μ) is a commutative compact group with the Haar measure, A is the cone of real positive definite functions on Γ. - -* Γ is a totally ordered set, A is the cone of real positive non-decreasing functions on Γ. This yields Chebyshev's sum inequality. For extension to partially ordered sets, see FKG inequality. - -* The thermodynamic limit of the correlations of the ferromagnetic Ising model (with non-negative external field h and free boundary conditions) exists. - -This is because increasing the volume is the same as switching on new couplings $J_B$ for a certain subset B. By the second Griffiths inequality - -\frac{\partial}{\partial J_B}\langle \sigma_A\rangle= - -\langle \sigma_A\sigma_B\rangle- - -\langle \sigma_A\rangle \langle \sigma_B\rangle\geq 0 - - - -Hence $\langle \sigma_A\rangle$ is monotonically increasing with the volume; then it converges since it is bounded by 1. - -* The one-dimensional, ferromagnetic Ising model with interactions $ J_{x,y}\sim |x-y|^{-\alpha} $ displays a phase transition if $ 1<\alpha <2 $. - -This property can be shown in a hierarchical approximation, that differs from the full model by the absence of some interactions: arguing as above with the second Griffiths inequality, the result carries over to the full model. - -*The Ginibre inequality provides the existence of the thermodynamic limit for the free energy and spin correlations for the two-dimensional classical XY model. Besides, through Ginibre inequality, Kunz and Pfister proved the presence of a phase transition for the ferromagnetic XY model with interaction $ J_{x,y}\sim |x-y|^{-\alpha} $ if $ 2<\alpha < 4 $. - -* Aizenman and Simon used the Ginibre inequality to prove that the two point spin correlation of the ferromagnetic classical XY model in dimension $D$, coupling $J>0$ and inverse temperature $\beta$ is dominated by (i.e. has upper bound given by) the two point correlation of the ferromagnetic Ising model in dimension $D$, coupling $J>0$, and inverse temperature $\beta/2$ - -\langle \mathbf{s}_i\cdot \mathbf{s}_j\rangle_{J,2\beta} - -\le \langle \sigma_i\sigma_j\rangle_{J,\beta} - -Hence the critical $\beta$ of the XY model cannot be smaller than twice the critical inverse temperature of the Ising model -$$ - \beta_c^{XY}\ge 2\beta_c^{\rm Is}~; -$$ - -in dimension D = 2 and coupling J = 1, this gives -$$ - \beta_c^{XY} \ge \ln(1 + \sqrt{2}) \approx 0.88~. -$$ - -* There exists a version of the Ginibre inequality for the Coulomb gas that implies the existence of thermodynamic limit of correlations. - -* Other applications (phase transitions in spin systems, XY model, XYZ quantum chain) are reviewed in the references. diff --git a/wiki/wikipedia/4065.txt b/wiki/wikipedia/4065.txt deleted file mode 100644 index 2e38c6880a2a7422c7eeff9b4c08fe272e28f017..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4065.txt +++ /dev/null @@ -1,21 +0,0 @@ -In mathematics, the Hartogs–Rosenthal theorem is a classical result in complex analysis on the uniform approximation of continuous functions on compact subsets of the complex plane by rational functions. The theorem was proved in 1931 by the German mathematicians Friedrich Hartogs and Arthur Rosenthal and has been widely applied, particularly in operator theory. - -The Hartogs–Rosenthal theorem states that if K is a compact subset of the complex plane with Lebesgue measure zero, then any continuous complex-valued function on K can be uniformly approximated by rational functions.
By the Stone–Weierstrass theorem any complex-valued continuous function on K can be uniformly approximated by a polynomial in $z$ and $\overline{z}$.

So it suffices to show that $\overline{z}$ can be uniformly approximated by a rational function on K.

Let g(z) be a smooth function of compact support on C equal to 1 on K and set
$$
 f(z)=g(z)\cdot \overline{z}.
$$

By the generalized Cauchy integral formula,
$$
f(z) = \frac{1}{2\pi i}\iint_{\mathbb{C}\setminus K} \frac{\partial f}{\partial \bar{w}}\frac{dw\wedge d\bar{w}}{w-z},
$$

since K has measure zero.

Restricting z to K and taking Riemann approximating sums for the integral on the right hand side yields the required uniform approximation of $\bar{z}$ by a rational function.

diff --git a/wiki/wikipedia/4066.txt b/wiki/wikipedia/4066.txt
deleted file mode 100644
index 06ecd24ad1ca4e0c8fc66e8f46be3e6d2dd2d019..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4066.txt
+++ /dev/null
@@ -1,45 +0,0 @@

Stein's lemma, named in honor of Charles Stein, is a theorem of probability theory that is of interest primarily because of its applications to statistical inference - in particular, to James–Stein estimation and empirical Bayes methods - and its applications to portfolio choice theory. The theorem gives a formula for the covariance of one random variable with the value of a function of another, when the two random variables are jointly normally distributed.

Suppose X is a normally distributed random variable with expectation μ and variance $\sigma^2$. Further suppose g is a function for which the two expectations E(g(X)(X − μ)) and E(g′(X)) both exist. (The existence of the expectation of any random variable is equivalent to the finiteness of the expectation of its absolute value.)

Then
$$
E\bigl(g(X)(X-\mu)\bigr)=\sigma^2 E\bigl(g'(X)\bigr).
$$

In general, suppose X and Y are jointly normally distributed. Then
$$
\operatorname{Cov}(g(X),Y)= \operatorname{Cov}(X,Y)E(g'(X)).
$$

The univariate probability density function for the univariate normal distribution with expectation 0 and variance 1 is
$$
\varphi(x)={1 \over \sqrt{2\pi}}e^{-x^2/2}
$$

and the density for a normal distribution with expectation μ and variance $\sigma^2$ is
$$
{1\over\sigma} \ \varphi\left({x-\mu \over \sigma}\right).
$$

Then use integration by parts, noting that the standard normal density satisfies $\varphi'(x) = -x\varphi(x)$.

Suppose X is in an exponential family, that is, X has the density
$$
f_\eta(x)=\exp(\eta'T(x) - \Psi(\eta))h(x).
$$

Suppose this density has support $(a,b)$ where $a,b$ could be $-\infty, \infty$, and, as $x\rightarrow a\text{ or }b$, $\exp(\eta'T(x))h(x) g(x) \rightarrow 0$, where $g$ is any differentiable function such that $E|g'(X)|<\infty$, or $\exp(\eta'T(x))h(x) \rightarrow 0$ if $a,b$ are finite. Then
$$
E\bigl(\bigl(h'(X)/h(X) + \textstyle\sum_i \eta_i T_i'(X)\bigr)g(X)\bigr) = -E g'(X).
$$

The derivation is the same as in the special case, namely, integration by parts.

If we only know that $X$ has support $\mathbb{R}$, then it could be the case that $E|g(X)| <\infty \text{ and } E|g'(X)| <\infty$ but $\lim_{x\rightarrow \infty} f_\eta(x) g(x) \ne 0$. To see this, simply put $g(x)=1$ and let $f_\eta(x)$ have infinitely many spikes towards infinity while remaining integrable. One such example could be adapted from
$$
f(x) = \begin{cases} 1 & x \in [n, n + 2^{-n}) \text{ for some } n \\ 0 & \text{otherwise} \end{cases}
$$
by smoothing it so that $f$ is smooth.

Extensions to elliptically-contoured distributions also exist.
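As a quick numerical sanity check of the identity $E\bigl(g(X)(X-\mu)\bigr)=\sigma^2 E\bigl(g'(X)\bigr)$, one can compare Monte Carlo estimates of the two sides; in the sketch below, the choice $g(x) = x^3$, the parameter values, and the sample size are arbitrary illustrative assumptions.

```python
import numpy as np

# Monte Carlo sanity check of Stein's lemma:
#   E[g(X)(X - mu)] = sigma^2 * E[g'(X)]   for X ~ N(mu, sigma^2).
# The choice g(x) = x**3 (so g'(x) = 3*x**2) is an arbitrary example.
rng = np.random.default_rng(0)
mu, sigma, n = 1.5, 2.0, 10**7

x = rng.normal(mu, sigma, size=n)
lhs = np.mean(x**3 * (x - mu))        # estimate of E[g(X)(X - mu)]
rhs = sigma**2 * np.mean(3 * x**2)    # estimate of sigma^2 * E[g'(X)]

print(lhs, rhs)  # the two values should agree up to Monte Carlo error
```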
diff --git a/wiki/wikipedia/4067.txt b/wiki/wikipedia/4067.txt
deleted file mode 100644
index ebc462408a53840b53545e82a2c29385464d78bb..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4067.txt
+++ /dev/null
@@ -1,65 +0,0 @@

In modular arithmetic, Thue's lemma roughly states that every modular integer may be represented by a "modular fraction" such that the numerator and the denominator have absolute values not greater than the square root of the modulus.

More precisely, for every pair of integers (a, m) with m > 1, given two positive integers X and Y such that X ≤ m < XY, there are two integers x and y such that
$$
ay \equiv x \pmod{m}
$$

and
$$
|x| < X,\quad 0 < y < Y.
$$

Usually, one takes X and Y equal to the smallest integer greater than the square root of m, but the general form is sometimes useful, and makes the uniqueness theorem (below) easier to state.

The first known proof is attributed to Axel Thue, who used a pigeonhole argument. The lemma can be used to prove Fermat's theorem on sums of two squares by taking m to be a prime p that is congruent to 1 modulo 4 and taking a to satisfy $a^2 + 1 \equiv 0 \pmod p$. (Such an a is guaranteed for p by Wilson's theorem.)

In general, the solution whose existence is asserted by Thue's lemma is not unique. For example, when a = 1 there are usually several solutions (x, y) = (1, 1), (2, 2), (3, 3), ..., provided that X and Y are not too small. Therefore, one may only hope for uniqueness for the rational number x/y, to which a is congruent modulo m if y and m are coprime. Nevertheless, this rational number need not be unique; for example, if m = 5, a = 2 and X = Y = 3, one has the two solutions
$$
2a+1 \equiv -a+2 \equiv 0 \pmod 5~.
$$

However, for X and Y small enough, if a solution exists, it is unique. More precisely, with the above notation, if
$$
2XY < m,
$$

and
$$
ay_1-x_1\equiv ay_2-x_2 \equiv 0 \pmod m,
$$

with
$$
\left| x_1\right |< X, \quad\left| y_1\right| < Y,
$$

and
$$
\left| x_2\right| < X, \quad\left| y_2\right| < Y,
$$

then
$$
\frac{x_1}{y_1}=\frac{x_2}{y_2}.
$$

This result is the basis for rational reconstruction, which allows using modular arithmetic for computing rational numbers for which one knows bounds for numerators and denominators.

The proof is rather easy: by multiplying each congruence by the other $y_i$ and subtracting, one gets
$$
y_2x_1-y_1x_2 \equiv 0 \pmod m.
$$

The hypotheses imply that each term has an absolute value lower than XY < m/2, and thus that the absolute value of their difference is lower than m. This implies that $y_2x_1-y_1x_2 =0$, hence the result.

The original proof of Thue's lemma is not efficient, in the sense that it does not provide any fast method for computing the solution. The extended Euclidean algorithm allows us to give a proof that leads to an efficient algorithm with the same computational complexity as the Euclidean algorithm.

More precisely, given the two integers m and a appearing in Thue's lemma, the extended Euclidean algorithm computes three sequences of integers $(t_i)$, $(x_i)$ and $(y_i)$ such that
$$
t_im+y_ia =x_i \quad \text{for } i=0, 1, \ldots,
$$

where the $x_i$ are non-negative and strictly decreasing. The desired solution is, up to the sign, the first pair $(x_i, y_i)$ such that $x_i < X$.
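The computation just described is short to implement; the sketch below (the function name and the choice X = Y = smallest integer exceeding √m are illustrative assumptions) runs the extended Euclidean algorithm on the remainder sequence and stops at the first remainder below X.

```python
import math

def thue_pair(a, m):
    """Return (x, y) with a*y ≡ x (mod m) and |x| < X, taking
    X = Y = smallest integer greater than sqrt(m).  The pair is the
    first one produced by the extended Euclidean algorithm, up to sign."""
    X = math.isqrt(m) + 1
    x_prev, x_curr = m, a % m   # remainder sequence x_i (strictly decreasing)
    y_prev, y_curr = 0, 1       # coefficients maintaining y_i * a ≡ x_i (mod m)
    while x_curr >= X:
        q = x_prev // x_curr
        x_prev, x_curr = x_curr, x_prev - q * x_curr
        y_prev, y_curr = y_curr, y_prev - q * y_curr
    return x_curr, y_curr

# Example: m = 13 is a prime ≡ 1 (mod 4), and a = 5 satisfies a² ≡ -1 (mod 13).
x, y = thue_pair(5, 13)
print(x, y, x * x + y * y)  # 3 -2 13
```

In the example, $x^2 + y^2 \equiv y^2(a^2+1) \equiv 0 \pmod p$ while $0 < x^2 + y^2 < 2p$, so $x^2 + y^2 = p$; this is exactly the route to Fermat's two-squares theorem mentioned above.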
diff --git a/wiki/wikipedia/4068.txt b/wiki/wikipedia/4068.txt
deleted file mode 100644
index 758fe566454b64f74cf80d0f0c4f4267df18dc8a..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4068.txt
+++ /dev/null
@@ -1,33 +0,0 @@

In mathematics, the unknotting problem is the problem of algorithmically recognizing the unknot, given some representation of a knot, e.g., a knot diagram. There are several types of unknotting algorithms. A major unresolved challenge is to determine if the problem admits a polynomial time algorithm; that is, whether the problem lies in the complexity class P.

First steps toward determining the computational complexity were undertaken by proving that the problem lies in larger complexity classes which contain the class P. By using normal surfaces to describe the Seifert surfaces of a given knot, Hass, Lagarias, and Pippenger showed that the unknotting problem is in the complexity class NP. Hara and coauthors claimed the weaker result that unknotting is in AM ∩ co-AM; however, this claim was later retracted. In 2011, Greg Kuperberg proved that (assuming the generalized Riemann hypothesis) the unknotting problem is in co-NP, and in 2016, Marc Lackenby provided an unconditional proof of co-NP membership.

The unknotting problem has the same computational complexity as testing whether an embedding of an undirected graph in Euclidean space is linkless.

One of Ochiai's unknots featuring 139 vertices, for example, was originally unknotted by computer in 108 hours, but this time has been reduced in more recent research to 10 minutes.

Several algorithms solving the unknotting problem are based on Haken's theory of normal surfaces:

* Haken's algorithm uses the theory of normal surfaces to find a disk whose boundary is the knot. Haken originally used this algorithm to show that unknotting is decidable, but did not analyze its complexity in more detail.

* Hass, Lagarias, and Pippenger showed that the set of all normal surfaces may be represented by the integer points in a polyhedral cone and that a surface witnessing the unknottedness of a curve (if it exists) can always be found on one of the extreme rays of this cone. Therefore, vertex enumeration methods can be used to list all of the extreme rays and test whether any of them corresponds to a bounding disk of the knot. Hass, Lagarias, and Pippenger used this method to show that unknottedness is in NP; later researchers such as Burton refined their analysis, showing that this algorithm can be useful (though not polynomial time), with its complexity being a low-order singly-exponential function of the number of crossings.

* The algorithm of Birman and Hirsch uses braid foliations, a somewhat different type of structure than a normal surface. However, to analyze its behavior, the authors return to normal surface theory.

Other approaches include:

* The number of Reidemeister moves needed to change an unknot diagram to the standard unknot diagram is at most polynomial in the number of crossings. Therefore, a brute force search over all sequences of Reidemeister moves can detect unknottedness in exponential time.

* Similarly, any two triangulations of the same knot complement may be connected by a sequence of Pachner moves of length at most doubly exponential in the number of crossings.
Therefore, it is possible to determine whether a knot is the unknot by testing all sequences of Pachner moves of this length, starting from the complement of the given knot, and determining whether any of them transforms the complement into a standard triangulation of a solid torus. The time for this method would be triply exponential; however, experimental evidence suggests that this bound is very pessimistic and that many fewer Pachner moves are needed.

* Any arc-presentation of an unknot can be monotonically simplified to a minimal one using elementary moves. So a brute force search among all arc-presentations of no greater complexity gives a singly-exponential algorithm for the unknotting problem.

* Residual finiteness of the knot group (which follows from geometrization of Haken manifolds) gives an algorithm: check whether the group has a non-cyclic finite group quotient. This idea is used in Kuperberg's result that the unknotting problem is in co-NP.

* Knot Floer homology detects the genus of the knot, which is 0 if and only if the knot is an unknot. A combinatorial version of knot Floer homology allows it to be computed.

* Khovanov homology detects the unknot according to a result of Kronheimer and Mrowka. The complexity of Khovanov homology is at least as high as the #P-hard problem of computing the Jones polynomial, but it may be calculated in practice using an algorithm and program of Bar-Natan. Bar-Natan provides no rigorous analysis of his algorithm, but heuristically estimates it to be exponential in the pathwidth of a crossing diagram, which in turn is at most proportional to the square root of the number of crossings.

Understanding the complexity of these algorithms is an active field of study.

diff --git a/wiki/wikipedia/4069.txt b/wiki/wikipedia/4069.txt
deleted file mode 100644
index 2c1405e6652bbd0cef0bfbfcf8753bb83acc0dd7..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4069.txt
+++ /dev/null
@@ -1,27 +0,0 @@

Kopano is an open-source groupware application suite originally based on Zarafa. The initial version of Kopano Core (KC) was forked from the then-current release of the Zarafa Collaboration Platform, and superseded ZCP in terms of lineage as ZCP switched to maintenance mode with patches flowing from KC. Kopano WebApp similarly descended from Zarafa WebApp. Since October 2017, Kopano Core is also known more specifically as Kopano Groupware Core, since Kopano B.V. developed more products that did not directly require groupware components.

The original goal of ZCP was to be a replacement for Microsoft Exchange, so that users could retain Outlook as a client application. While Kopano's business strategy has shifted towards providing a comprehensive office collaboration suite in its own right, Kopano Core still supports connections from Outlook clients either via Z-push/ActiveSync or via the (by now unsupported) Zarafa Windows MAPI plugin.

The Kopano Outlook Extension add-in for Outlook provides the Outlook functionality that ActiveSync alone doesn't support. This includes, for example, support for Out of Office or Public Folders. ActiveSync and the Kopano Outlook Extension together are therefore able to fully integrate the Kopano backend with Outlook in a corporate environment.

WebApp plugins exist to perform advanced group tasks such as accessing cloud-based storage solutions (e.g. ownCloud/Nextcloud), holding integrated video conferences (WebMeetings), or handling S/MIME email within WebApp.
A desktop application, DeskApp, is also available. It has the same look and feel as WebApp but integrates directly with the user's desktop, and it is available for Windows, Linux and Mac.

All server-side components (Kopano Core) and WebApp are published under the Affero General Public License (AGPL).

Microsoft Outlook, as well as Kopano/Zarafa clients, uses MAPI at the source code level. So-called MAPI providers (essentially plugins) abstract and take care of the underlying transport mechanism. Kopano-server exposes its functionality over stream sockets and uses the HTTP protocol, with data being serialized using SOAP/XML. The commands sent in the XML data are specific to Kopano/Zarafa. Conversely, the Kopano MAPI provider implements this protocol on the client side. These HTTP connections can be secured with TLS/SSL and be proxied if desired.

Because Exchange instead uses MAPI/RPC on the wire, the stock Outlook connector for Exchange could not be used, and connecting traditionally required the Windows version of the Zarafa MAPI provider (a product that is proprietary and has been unsupported since April 2016). Outlook versions 2013 and 2016 support ActiveSync, a protocol also used by many mobile clients, and by using the Z-push software on the server side, ActiveSync requests can be translated and such clients can effectively talk to a Kopano server as well.

Kopano Core generally stores its data in a MySQL-compatible database. Attachments can be saved on the filesystem or in Amazon S3, or the database may be used to store them as chunked blobs. The server can get its user information from LDAP/Active Directory, Unix user accounts or the MySQL database. Additional gateways for the IMAP, POP3 and iCalendar/CalDAV protocols are provided.

Kopano WebApp (and DeskApp, which is the equivalent stand-alone application) are full-featured applications which include support for mail, calendars, group calendars, public folders and many more functionalities. WebApp can be extended with many plugins which can be added to the installation. Kopano provides several plugins such as Files (cloud and storage access within WebApp), WebMeetings (video conferencing) and S/MIME (which allows reading and sending encrypted email).

Any developer can, however, write additional plugins using the WebApp plugin API.

Kopano is available as a freely downloadable community edition. The community edition gives users access to the main-branch builds, which include the very latest code as overnight builds. The Kopano community edition includes all the advanced and premium features such as WebMeetings (video conferencing), Kopano Files (cloud storage access) and the S/MIME plugin (which allows sending or receiving encrypted email).

Kopano is also available as a paid-for product where official, Kopano-QA-tested releases are provided and supported directly by Kopano.

Finally, Kopano is available in the official repositories of some Linux distributions such as openSUSE.

diff --git a/wiki/wikipedia/407.txt b/wiki/wikipedia/407.txt
deleted file mode 100644
index 4ee5d908d4009bbaa7fee87a15cc8f7bb60c452f..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/407.txt
+++ /dev/null
@@ -1,56 +0,0 @@

In computer science, the Kosaraju–Sharir algorithm (also known as Kosaraju's algorithm) is a linear-time algorithm to find the strongly connected components of a directed graph. Aho, Hopcroft and Ullman credit it to S. Rao Kosaraju and Micha Sharir.
Kosaraju suggested it in 1978 but did not publish it, while Sharir independently discovered it and published it in 1981. It makes use of the fact that the transpose graph (the same graph with the direction of every edge reversed) has exactly the same strongly connected components as the original graph.

The primitive graph operations that the algorithm uses are to enumerate the vertices of the graph, to store data per vertex (if not in the graph data structure itself, then in some table that can use vertices as indices), to enumerate the out-neighbours of a vertex (traverse edges in the forward direction), and to enumerate the in-neighbours of a vertex (traverse edges in the backward direction); however, the last can be dispensed with, at the price of constructing a representation of the transpose graph during the forward traversal phase. The only additional data structure needed by the algorithm is an ordered list L of graph vertices, that will grow to contain each vertex once.

If strong components are to be represented by appointing a separate root vertex for each component, and assigning to each vertex the root vertex of its component, then Kosaraju's algorithm can be stated as follows.

# For each vertex u of the graph, mark u as unvisited. Let L be empty.
# For each vertex u of the graph do Visit(u), where Visit(u) is the recursive subroutine:
#: If u is unvisited then:
#:# Mark u as visited.
#:# For each out-neighbour v of u, do Visit(v).
#:# Prepend u to L.
#: Otherwise do nothing.
# For each element u of L in order, do Assign(u,u) where Assign(u,root) is the recursive subroutine:
#: If u has not been assigned to a component then:
#:# Assign u as belonging to the component whose root is root.
#:# For each in-neighbour v of u, do Assign(v,root).
#: Otherwise do nothing.

Trivial variations are to instead assign a component number to each vertex, or to construct per-component lists of the vertices that belong to it. The unvisited/visited indication may share a storage location with the final assignment of root for a vertex.

The key point of the algorithm is that during the first (forward) traversal of the graph edges, vertices are prepended to the list L in post-order relative to the search tree being explored. This means it does not matter whether a vertex v was first Visited because it appeared in the enumeration of all vertices or because it was the out-neighbour of another vertex u that got Visited; either way v will be prepended to L before u is, so if there is a forward path from u to v then u will appear before v on the final list L (unless u and v both belong to the same strong component, in which case their relative order in L is arbitrary).
- -Step 3 of the algorithm, starts from $L[0]$, assigns all vertices which point to it, the same component as $L[0]$. Note that these vertices can only lie in the block beginning at $L[0]$ as higher blocks can't have links pointing to vertices in the block of $L[0]$. Let the set of all vertices that point to $L[0]$ be $In(L[0])$. Subsequently, all the vertices pointing to these vertices, $In(In(L[0]))$ are added too, and so on till no more vertices can be added. - -There is a path to $L[0]$, from all the vertices added to the component containing $L[0]$. And there is a path to all the vertices added from $L[0]$, as all those lie in the block beginning at $L[0]$(which contains all the vertices reachable from $L[0]$ following outward edges at each step of path). Hence all these form a single strongly connected component. Moreover, no vertex remains, because, to be in this strongly connected component a vertex must be reachable from $L[0]$ and must be able to reach $L[0]$. All vertices that are able to reach ${\displaystyle L[0]}$, if any, lie in the first block only, and all the vertices in first block are reachable from $L[0]$. So the algorithm chooses all the vertices in the connected component of $L[0]$. - -When we reach vertex $v = L[i]$, in the loop of step 3, and $v$ hasn't been assigned to any component, we can be sure that all the vertices to the left have made their connected components; that v - - doesn't belong to any of those components; that $v$ doesn't point to any of the vertices to the left of it. Also, since, no edge from higher blocks to $v$'s block exists, the proof remains same. - -As given above, the algorithm for simplicity employs depth-first search, but it could just as well use breadth-first search as long as the post-order property is preserved. - -The algorithm can be understood as identifying the strong component of a vertex u as the set of vertices which are reachable from u both by backwards and forwards traversal. Writing $F(u)$ for the set of vertices reachable from $u$ by forward traversal, $B(u)$ for the set of vertices reachable from $u$ by backwards traversal, and $P(u)$ for the set of vertices which appear strictly before $u$ on the list L after phase 2 of the algorithm, the strong component containing a vertex $u$ appointed as root is -$$ - B(u) \cap F(u) = B(u) \setminus (B(u) \setminus F(u)) = B(u) \setminus P(u) -$$. - -Set intersection is computationally costly, but it is logically equivalent to a double set difference, and since $ B(u) \setminus F(u) \subseteq P(u) $ it becomes sufficient to test whether a newly encountered element of $B(u)$ has already been assigned to a component or not. - -Provided the graph is described using an adjacency list, Kosaraju's algorithm performs two complete traversals of the graph and so runs in Θ(V+E) (linear) time, which is asymptotically optimal because there is a matching lower bound (any algorithm must examine all vertices and edges). It is the conceptually simplest efficient algorithm, but is not as efficient in practice as Tarjan's strongly connected components algorithm and the path-based strong component algorithm, which perform only one traversal of the graph. - -If the graph is represented as an adjacency matrix, the algorithm requires Ο(V2) time. 
diff --git a/wiki/wikipedia/4070.txt b/wiki/wikipedia/4070.txt
deleted file mode 100644
index d41588410ea5b68b273f86ead27f2522de275bae..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4070.txt
+++ /dev/null
@@ -1,5 +0,0 @@

In operator theory, the Gelfand–Mazur theorem is a theorem named after Israel Gelfand and Stanisław Mazur which states that a Banach algebra with unit over the complex numbers in which every nonzero element is invertible is isometrically isomorphic to the complex numbers, i.e., the only complex Banach algebra that is a division algebra is the complex numbers C.

The theorem follows from the fact that the spectrum of any element of a complex Banach algebra is nonempty: for every element a of a complex Banach algebra A there is some complex number λ such that λ1 − a is not invertible. (This is a consequence of the complex-analyticity of the resolvent function.) Since by assumption every nonzero element of A is invertible, λ1 − a = 0. So a = λ · 1. This gives an isomorphism from A to C.

The theorem can be strengthened to the claim that there are (up to isomorphism) exactly three real Banach division algebras: the field of reals R, the field of complex numbers C, and the division algebra of quaternions H. This result was proved first by Stanisław Mazur alone, but it was published in France without a proof, when the author refused the editor's request to shorten his proof. Gelfand (independently) published a proof of the complex case a few years later.

diff --git a/wiki/wikipedia/4071.txt b/wiki/wikipedia/4071.txt
deleted file mode 100644
index 9ab0e8244789dd0940b6d25ccdd71f2c2c03f61c..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4071.txt
+++ /dev/null
@@ -1,106 +0,0 @@

Morrie's law is a special trigonometric identity. Its name is due to the physicist Richard Feynman, who used to refer to the identity under that name. Feynman picked that name because he learned it during his childhood from a boy with the name Morrie Jacobs and afterwards remembered it for all of his life.
$$
 \cos(20^\circ) \cdot \cos(40^\circ) \cdot \cos(80^\circ) = \frac{1}{8}.
$$

It is a special case of the more general identity
$$
 2^n \cdot \prod_{k=0}^{n-1} \cos(2^k \alpha) = \frac{\sin(2^n \alpha)}{\sin(\alpha)}
$$

with n = 3 and α = 20° and the fact that
$$
 \frac{\sin(160^\circ)}{\sin(20^\circ)} = \frac{\sin(180^\circ-20^\circ)}{\sin(20^\circ)} = 1,
$$

since
$$
 \sin(180^\circ-x) = \sin(x).
$$

A similar identity for the sine function also holds:
$$
 \sin(20^\circ) \cdot \sin(40^\circ) \cdot \sin(80^\circ) = \frac{\sqrt 3}{8}.
$$

Moreover, dividing the second identity by the first, the following identity is evident:
$$
 \tan(20^\circ) \cdot \tan(40^\circ) \cdot \tan(80^\circ) = \sqrt 3 = \tan(60^\circ).
$$

Consider a regular nonagon $ABCDEFGHI$ with side length $1$ and let $M$ be the midpoint of $AB$, $L$ the midpoint of $BF$, and $J$ the midpoint of $BD$. The inner angles of the nonagon equal $140^\circ$ and furthermore $\gamma=\angle FBM=80^\circ$, $\beta=\angle DBF=40^\circ$ and $\alpha=\angle CBD=20^\circ$ (see graphic).
Applying the definition of the cosine in the right triangles $\triangle BFM$, $\triangle BDL$ and $\triangle BCJ$ then yields the proof for Morrie's law:

\begin{align}
1&=|AB|\\
&=2\cdot|MB|\\
&=2\cdot|BF|\cdot\cos(\gamma)\\
&=2^2\cdot|BL|\cdot\cos(\gamma)\\
&=2^2\cdot|BD|\cdot\cos(\gamma)\cdot\cos(\beta)\\
&=2^3\cdot|BJ|\cdot\cos(\gamma)\cdot\cos(\beta) \\
&=2^3\cdot|BC|\cdot\cos(\gamma)\cdot\cos(\beta)\cdot\cos(\alpha) \\
&=2^3\cdot 1 \cdot\cos(\gamma)\cdot\cos(\beta)\cdot\cos(\alpha) \\
&=8\cdot\cos(80^\circ)\cdot\cos(40^\circ)\cdot\cos(20^\circ)
\end{align}

Recall the double angle formula for the sine function
$$
 \sin(2 \alpha) = 2 \sin(\alpha) \cos(\alpha).
$$

Solve for $\cos(\alpha)$:
$$
 \cos(\alpha)=\frac{\sin(2 \alpha)}{2 \sin(\alpha)}.
$$

It follows that:

\begin{align}
\cos(2 \alpha) & = \frac{\sin(4 \alpha)}{2 \sin(2 \alpha)} \\[6pt]
\cos(4 \alpha) & = \frac{\sin(8 \alpha)}{2 \sin(4 \alpha)} \\
& \vdots \\
\cos\left(2^{n-1} \alpha\right)
& = \frac{\sin\left(2^n \alpha\right)}{2 \sin\left(2^{n-1} \alpha\right)}.
\end{align}

Multiplying all of these expressions together yields:

$$
\cos(\alpha) \cos(2 \alpha) \cos(4 \alpha) \cdots \cos\left(2^{n-1} \alpha\right) =
\frac{\sin(2 \alpha)}{2 \sin(\alpha)} \cdot
\frac{\sin(4 \alpha)}{2 \sin(2 \alpha)} \cdot
\frac{\sin(8 \alpha)}{2 \sin(4 \alpha)} \cdots
\frac{\sin\left(2^n \alpha\right)}{2 \sin\left(2^{n-1} \alpha\right)}.
$$

The intermediate numerators and denominators cancel, leaving only the first denominator, a power of 2 and the final numerator. Note that there are n terms on both sides of the expression. Thus,
$$
 \prod_{k=0}^{n-1} \cos\left(2^k \alpha\right) = \frac{\sin\left(2^n \alpha\right)}{2^n \sin(\alpha)},
$$

which is equivalent to the generalization of Morrie's law.

diff --git a/wiki/wikipedia/4072.txt b/wiki/wikipedia/4072.txt
deleted file mode 100644
index 970f3afff1baa051f559b4f50c8adc50df9774c9..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4072.txt
+++ /dev/null
@@ -1,35 +0,0 @@

[Figure: Karnaugh map of $xy \vee \bar{x}z \vee yz$. Omitting the red rectangle does not change the covered area.]

In Boolean algebra, the consensus theorem or rule of consensus is the identity:
$$
xy \vee \bar{x}z \vee yz = xy \vee \bar{x}z
$$

The consensus or resolvent of the terms $xy$ and $\bar{x}z$ is $yz$. It is the conjunction of all the unique literals of the terms, excluding the literal that appears unnegated in one term and negated in the other. If $y$ includes a term which is negated in $z$ (or vice versa), the consensus term $yz$ is false; in other words, there is no consensus term.

The conjunctive dual of this equation is:
$$
(x \vee y)(\bar{x} \vee z)(y \vee z) = (x \vee y)(\bar{x} \vee z)
$$

\begin{align}
xy \vee \bar{x}z \vee yz &= xy \vee \bar{x}z \vee (x \vee \bar{x})yz \\
&= xy \vee \bar{x}z \vee xyz \vee \bar{x}yz \\
&= (xy \vee xyz) \vee (\bar{x}z \vee \bar{x}yz) \\
&= xy(1\vee z)\vee\bar{x}z(1\vee y) \\
&= xy \vee \bar{x}z
\end{align}

The consensus or consensus term of two conjunctive terms of a disjunction is defined when one term contains the literal $a$ and the other the literal $\bar{a}$, an opposition. The consensus is the conjunction of the two terms, omitting both $a$ and $\bar{a}$, and repeated literals. For example, the consensus of $\bar{x}yz$ and $w\bar{y}z$ is $w\bar{x}z$. The consensus is undefined if there is more than one opposition.
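Because the identity involves only three Boolean variables, it can also be verified exhaustively; the following sketch (an illustrative check, not part of the original article) confirms both the theorem and its conjunctive dual on all eight assignments.

```python
from itertools import product

# Exhaustively verify the consensus theorem and its conjunctive dual
# over all assignments of x, y, z in {False, True}.
for x, y, z in product([False, True], repeat=3):
    nx = not x
    assert ((x and y) or (nx and z) or (y and z)) == ((x and y) or (nx and z))
    assert ((x or y) and (nx or z) and (y or z)) == ((x or y) and (nx or z))
print("consensus theorem verified on all 8 assignments")
```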
For the conjunctive dual of the rule, the consensus $y \vee z$ can be derived from $(x\vee y)$ and $(\bar{x} \vee z)$ through the resolution inference rule. This shows that the LHS is derivable from the RHS (if A → B then A → AB; replacing A with the RHS and B with (y ∨ z)). The RHS can be derived from the LHS simply through the conjunction elimination inference rule. Since RHS → LHS and LHS → RHS (in propositional calculus), LHS = RHS (in Boolean algebra).

In Boolean algebra, repeated consensus is the core of one algorithm for calculating the Blake canonical form of a formula. It was rediscovered by Samson and Mills in 1954 and by Quine in 1955. Quine coined the term 'consensus'. Robinson used it for clauses in 1965 as the basis of his "resolution principle".

diff --git a/wiki/wikipedia/4073.txt b/wiki/wikipedia/4073.txt
deleted file mode 100644
index 1f9e1e12baa125e40888e5d6b1348d4876d01af0..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4073.txt
+++ /dev/null
@@ -1,7 +0,0 @@

Circle packing in an equilateral triangle is a packing problem in discrete mathematics where the objective is to pack n unit circles into the smallest possible equilateral triangle. Optimal solutions are known for n < 13 and for any triangular number of circles, and conjectures are available for n < 28.

A conjecture of Paul Erdős and Norman Oler states that, if n is a triangular number, then the optimal packings of n − 1 and of n circles have the same side length: that is, according to the conjecture, an optimal packing for n − 1 circles can be found by removing any single circle from the optimal hexagonal packing of n circles. This conjecture is now known to be true for n ≤ 15.

[Table of minimum solutions for the side length of the triangle omitted.]

A closely related problem is to cover the equilateral triangle with a fixed number of equal circles, having as small a radius as possible.

diff --git a/wiki/wikipedia/4074.txt b/wiki/wikipedia/4074.txt
deleted file mode 100644
index 636e33c12a24678c7594229042c3c0a66b7d369a..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4074.txt
+++ /dev/null
@@ -1,50 +0,0 @@

In mathematical logic, structural proof theory is the subdiscipline of proof theory that studies proof calculi that support a notion of analytic proof, a kind of proof whose semantic properties are exposed. When all the theorems of a logic formalised in a structural proof theory have analytic proofs, then the proof theory can be used to demonstrate such things as consistency, provide decision procedures, and allow mathematical or computational witnesses to be extracted as counterparts to theorems, the kind of task that is more often given to model theory.

The notion of analytic proof was introduced into proof theory by Gerhard Gentzen for the sequent calculus; the analytic proofs are those that are cut-free. His natural deduction calculus also supports a notion of analytic proof, as was shown by Dag Prawitz; the definition is slightly more complex: the analytic proofs are the normal forms, which are related to the notion of normal form in term rewriting.
The term structure in structural proof theory comes from a technical notion introduced in the sequent calculus: the sequent calculus represents the judgement made at any stage of an inference using special, extra-logical operators called structural operators: in $A_1, \dots, A_m \vdash B_1, \dots, B_n$, the commas to the left of the turnstile are operators normally interpreted as conjunctions, those to the right as disjunctions, whilst the turnstile symbol itself is interpreted as an implication. However, it is important to note that there is a fundamental difference in behaviour between these operators and the logical connectives they are interpreted by in the sequent calculus: the structural operators are used in every rule of the calculus, and are not considered when asking whether the subformula property applies. Furthermore, the logical rules go one way only: logical structure is introduced by logical rules, and cannot be eliminated once created, while structural operators can be introduced and eliminated in the course of a derivation.

The idea of looking at the syntactic features of sequents as special, non-logical operators is not old, and was forced by innovations in proof theory: when the structural operators are as simple as in Gentzen's original sequent calculus there is little need to analyse them, but proof calculi of deep inference such as display logic (introduced by Nuel Belnap in 1982) support structural operators as complex as the logical connectives, and demand sophisticated treatment.

The hypersequent framework extends the ordinary sequent structure to a multiset of sequents, using an additional structural connective | (called the hypersequent bar) to separate different sequents. It has been used to provide analytic calculi for, e.g., modal, intermediate and substructural logics. A hypersequent is a structure
$$
\Gamma_1 \vdash \Delta_1 \mid \dots \mid \Gamma_n \vdash \Delta_n
$$

where each $\Gamma_i \vdash\Delta_i$ is an ordinary sequent, called a component of the hypersequent. As for sequents, hypersequents can be based on sets, multisets, or sequences, and the components can be single-conclusion or multi-conclusion sequents. The formula interpretation of the hypersequents depends on the logic under consideration, but is nearly always some form of disjunction. The most common interpretations are as a simple disjunction
$$
(\bigwedge\Gamma_1 \rightarrow \bigvee \Delta_1) \lor \dots \lor (\bigwedge\Gamma_n \rightarrow \bigvee \Delta_n)
$$

for intermediate logics, or as a disjunction of boxes
$$
\Box(\bigwedge\Gamma_1 \rightarrow\bigvee \Delta_1) \lor \dots \lor \Box(\bigwedge\Gamma_n \rightarrow \bigvee\Delta_n)
$$

for modal logics.

In line with the disjunctive interpretation of the hypersequent bar, essentially all hypersequent calculi include the external structural rules, in particular the external weakening rule
$$
\frac{\Gamma_1 \vdash \Delta_1 \mid \dots \mid \Gamma_n \vdash \Delta_n}{\Gamma_1 \vdash \Delta_1 \mid \dots \mid \Gamma_n \vdash \Delta_n \mid \Sigma \vdash \Pi}
$$

and the external contraction rule
$$
\frac{\Gamma_1\vdash \Delta_1 \mid \dots \mid\Gamma_n \vdash \Delta_n \mid \Gamma_n \vdash \Delta_n}{\Gamma_1\vdash \Delta_1 \mid \dots \mid\Gamma_n \vdash \Delta_n}
$$

The additional expressivity of the hypersequent framework is provided by rules manipulating the hypersequent structure.
An important example is provided by the modalised splitting rule
$$
\frac{\Gamma_1 \vdash \Delta_1 \mid \dots \mid \Gamma_n \vdash \Delta_n \mid \Box \Sigma, \Omega \vdash \Box \Pi, \Theta}{\Gamma_1 \vdash \Delta_1 \mid \dots \mid \Gamma_n \vdash \Delta_n \mid \Box \Sigma \vdash \Box \Pi \mid \Omega \vdash \Theta}
$$

for modal logic S5, where $\Box \Sigma$ means that every formula in $\Box\Sigma$ is of the form $\Box A$.

Another example is given by the communication rule for the intermediate logic LC
$$
\frac{\Gamma_1 \vdash \Delta_1 \mid \dots \mid \Gamma_n \vdash \Delta_n \mid \Omega \vdash A \qquad \Sigma_1 \vdash \Pi_1 \mid \dots \mid \Sigma_m \vdash \Pi_m \mid \Theta \vdash B }{\Gamma_1\vdash \Delta_1 \mid \dots \mid \Gamma_n \vdash \Delta_n \mid \Sigma_1 \vdash \Pi_1 \mid \dots \mid \Sigma_m \vdash \Pi_m \mid \Omega \vdash B \mid \Theta \vdash A}
$$

Note that in the communication rule the components are single-conclusion sequents.

The nested sequent calculus is a formalisation that resembles a 2-sided calculus of structures.

diff --git a/wiki/wikipedia/4075.txt b/wiki/wikipedia/4075.txt
deleted file mode 100644
index a091143e1a4bb828209c9c88034b7ae3ecbc08da..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4075.txt
+++ /dev/null
@@ -1,45 +0,0 @@

In geometry, Keller's conjecture is the conjecture that in any tiling of n-dimensional Euclidean space by identical hypercubes, there are two hypercubes that share an entire (n − 1)-dimensional face with each other. For instance, in any tiling of the plane by identical squares, some two squares must share an entire edge, as they do in the illustration.

This conjecture was introduced by Ott-Heinrich Keller, after whom it is named. A breakthrough by Lagarias and Shor showed that it is false in ten or more dimensions, and after subsequent refinements, it is now known to be true in spaces of dimension at most seven and false in all higher dimensions. The proofs of these results use a reformulation of the problem in terms of the clique number of certain graphs now known as Keller graphs.

The related Minkowski lattice cube-tiling conjecture states that whenever a tiling of space by identical cubes has the additional property that the cubes' centers form a lattice, some cubes must meet face-to-face. It was proved by György Hajós in 1942.

Szabó, Shor, and Zong give surveys of work on Keller's conjecture and related problems.

A tessellation or tiling of a Euclidean space is, intuitively, a family of subsets that cover the whole space without overlapping. More formally, a family of closed sets, called tiles, forms a tiling if their union is the whole space and every two distinct sets in the family have disjoint interiors. A tiling is said to be monohedral if all of the tiles have the same shape (they are congruent to each other). Keller's conjecture concerns monohedral tilings in which all of the tiles are hypercubes of the same dimension as the space. As Szabó formulates the problem, a cube tiling is a tiling by congruent hypercubes in which the tiles are additionally required to all be translations of each other without any rotation, or equivalently, to have all of their sides parallel to the coordinate axes of the space. Not every tiling by congruent cubes has this property; for instance, three-dimensional space may be tiled by two-dimensional sheets of cubes that are twisted at arbitrary angles with respect to each other.
In formulating the same problem, Shor instead considers all tilings of space by congruent hypercubes and states, without proof, that the assumption that cubes are axis-parallel can be added without loss of generality.

An n-dimensional hypercube has 2n faces of dimension n − 1 that are, themselves, hypercubes; for instance, a square has four edges, and a three-dimensional cube has six square faces. Two tiles in a cube tiling (defined in either of the above ways) meet face-to-face if there is an (n − 1)-dimensional hypercube that is a face of both of them. Keller's conjecture is the statement that every cube tiling has at least one pair of tiles that meet face-to-face in this way.

The original version of the conjecture stated by Keller was a stronger statement: every cube tiling has a column of cubes all meeting face-to-face. This version of the problem is true or false for the same dimensions as its more commonly studied formulation.

It is a necessary part of the conjecture that the cubes in the tiling all be congruent to each other, for if cubes of unequal sizes are allowed, then the Pythagorean tiling would form a counterexample in two dimensions.

The conjecture as stated does not require all of the cubes in a tiling to meet face-to-face with other cubes. Although tilings by congruent squares in the plane have the stronger property that every square meets edge-to-edge with another square, some of the tiles in higher-dimensional hypercube tilings may not meet face-to-face with any other tile. For instance, in three dimensions, the tetrastix structure formed by three perpendicular sets of square prisms can be used to construct a cube tiling, combinatorially equivalent to the Weaire–Phelan structure, in which one fourth of the cubes (the ones not part of any prism) are surrounded by twelve other cubes without meeting any of them face-to-face.

Keller's conjecture was shown to be true in dimensions at most six by Oskar Perron. The disproof of Keller's conjecture, for sufficiently high dimensions, has progressed through a sequence of reductions that transform it from a problem in the geometry of tilings into a problem in group theory and, from there, into a problem in graph theory.

Hajós first reformulated Keller's conjecture in terms of factorizations of abelian groups. He shows that if there is a counterexample to the conjecture, then it can be assumed to be a periodic tiling of cubes with an integer side length and integer vertex positions; thus, in studying the conjecture, it is sufficient to consider tilings of this special form. In this case, the group of integer translations, modulo the translations that preserve the tiling, forms an abelian group, and certain elements of this group correspond to the positions of the tiles. Hajós defines a family of subsets $A_i$ of an abelian group to be a factorization if each element of the group has a unique expression as a sum $a_0 + a_1 + \cdots$, where each $a_i$ belongs to $A_i$. With this definition, Hajós' reformulated conjecture is that whenever an abelian group has a factorization in which the first set $A_0$ may be arbitrary but each subsequent set $A_i$ takes the special form $\{0, g_i, 2g_i, 3g_i, \ldots, (q_i - 1)g_i\}$ for some element $g_i$ of $A_i$, then at least one element must belong to $A_0 - A_0$ (the difference set of $A_0$ with itself).
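The factorization property just defined is easy to test computationally; the following sketch (a hypothetical toy example, not taken from the sources above) checks that $A_0 = \{0,1,2,3\}$ and $A_1 = \{0, 4\}$, which has the special form $\{0, g\}$ with $g = 4$, factor the cyclic group $\mathbb{Z}/8$.

```python
from itertools import product

# Check that A0 = {0,1,2,3} and A1 = {0,4} factor the cyclic group Z/8:
# every group element must arise exactly once as a0 + a1 (mod 8).
A0, A1, n = [0, 1, 2, 3], [0, 4], 8
sums = sorted((a0 + a1) % n for a0, a1 in product(A0, A1))
assert sums == list(range(n)), "not a factorization"
print("Z/8 = A0 + A1 is a factorization")
```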
Szabó showed that any tiling that forms a counterexample to the conjecture can be assumed to have an even more special form: the cubes have side length a power of two and integer vertex coordinates, and the tiling is periodic with period twice the side length of the cubes in each coordinate direction. Based on this geometric simplification, he also simplified Hajós' group-theoretic formulation, showing that it is sufficient to consider abelian groups that are the direct sums of cyclic groups of order four, with each $q_i = 2$.

Corrádi and Szabó reformulated Szabó's result as a condition about the existence of a large clique in a certain family of graphs, which subsequently became known as the Keller graphs. More precisely, the vertices of the Keller graph of dimension n are the $4^n$ elements $(m_1,\ldots,m_n)$ where each $m_i$ is 0, 1, 2, or 3. Two vertices are joined by an edge if they differ in at least two coordinates and differ by exactly two in at least one coordinate. Corrádi and Szabó showed that the maximum clique in this graph has size at most $2^n$, and that if there is a clique of this size, then Keller's conjecture is false. Given such a clique, one can form a covering of space by cubes of side two whose centers have coordinates that, when taken modulo four, are vertices of the clique. The condition that any two vertices of the clique have a coordinate that differs by two implies that cubes corresponding to these vertices do not overlap. The condition that vertices differ in two coordinates implies that these cubes cannot meet face-to-face. The condition that the clique has size $2^n$ implies that the cubes within any period of the tiling have the same total volume as the period itself. Together with the fact that they do not overlap, this implies that the cubes placed in this way tile space without meeting face-to-face.

Lagarias and Shor disproved Keller's conjecture by finding a clique of size $2^{10}$ in the Keller graph of dimension 10. This clique leads to a non-face-to-face tiling in dimension 10, and copies of it can be stacked (offset by half a unit in each coordinate direction) to produce non-face-to-face tilings in any higher dimension. Similarly, Mackey found a clique of size $2^8$ in the Keller graph of dimension eight, leading in the same way to a non-face-to-face tiling in dimension 8 and (by stacking) in dimension 9.

Subsequently, Debroni showed that the Keller graph of dimension seven has a maximum clique of size 124. Because this is less than $2^7 = 128$, the graph-theoretic version of Keller's conjecture is true in seven dimensions. However, the translation from cube tilings to graph theory can change the dimension of the problem, so this result does not settle the geometric version of the conjecture in seven dimensions. Finally, a 200-gigabyte computer-assisted proof in 2019 used Keller graphs to establish that the conjecture holds true in seven dimensions. Therefore, the question Keller posed can be considered solved: the conjecture is true in seven dimensions or fewer but is false when there are more than seven dimensions.

The sizes of the maximum cliques in the Keller graphs of dimensions 2, 3, 4, 5, and 6 are, respectively, 2, 5, 12, 28, and 60. The Keller graphs of dimensions 4, 5, and 6 have been included in the set of "DIMACS challenge graphs" frequently used as a benchmark for clique-finding algorithms.

As Szabó describes, Hermann Minkowski was led to a special case of the cube-tiling conjecture from a problem in diophantine approximation.
One consequence of Minkowski's theorem is that any lattice (normalized to have determinant one) must contain a nonzero point whose Chebyshev distance to the origin is at most one. The lattices that do not contain a nonzero point with Chebyshev distance strictly less than one are called critical, and the points of a critical lattice form the centers of the cubes in a cube tiling. Minkowski conjectured in 1900 that whenever a cube tiling has its cubes centered at lattice points in this way, it must contain two cubes that meet face-to-face. If this is true, then (because of the symmetries of the lattice) each cube in the tiling must be part of a column of cubes, and the cross-sections of these columns form a cube tiling of one smaller dimension. Reasoning in this way, Minkowski showed that (assuming the truth of his conjecture) every critical lattice has a basis that can be expressed as a triangular matrix, with ones on its main diagonal and numbers less than one away from the diagonal. György Hajós proved Minkowski's conjecture in 1942 using Hajós's theorem on factorizations of abelian groups, a similar group-theoretic method to the one that he would later apply to Keller's more general conjecture.

Keller's conjecture is a variant of Minkowski's conjecture in which the condition that the cube centers form a lattice is relaxed. A second related conjecture, made by Furtwängler in 1936, instead relaxes the condition that the cubes form a tiling. Furtwängler asked whether a system of cubes centered on lattice points forming a k-fold covering of space (that is, all but a measure-zero subset of the points in the space must be interior to exactly k cubes) must necessarily have two cubes meeting face-to-face. Furtwängler's conjecture is true for two- and three-dimensional space, but Hajós found a four-dimensional counterexample in 1938. Robinson characterized the combinations of k and the dimension n that permit a counterexample. Additionally, combining both Furtwängler's and Keller's conjectures, Robinson showed that k-fold square coverings of the Euclidean plane must include two squares that meet edge-to-edge. However, for every k > 1 and every n > 2, there is a k-fold tiling of n-dimensional space by cubes with no shared faces.

Once counterexamples to Keller's conjecture became known, it became of interest to ask for the maximum dimension of a shared face that can be guaranteed to exist in a cube tiling. When the dimension n is at most seven, this maximum dimension is just n − 1, by the proofs of Keller's conjecture for those small dimensions, and when n is at least eight, then this maximum dimension is at most n − 2. Lagarias showed that it is at most $n - \sqrt{n}/3$ for ten or more dimensions.

Iosevich and Lagarias found close connections between cube tilings and the spectral theory of square-integrable functions on the cube.

Dutour Sikirić use cliques in the Keller graphs that are maximal but not maximum to study packings of cubes into space that cannot be extended by adding any additional cubes.

In 1975, Ludwig Danzer and, independently, Branko Grünbaum and G. C. Shephard found a tiling of three-dimensional space by parallelepipeds with 60° and 120° face angles in which no two parallelepipeds share a face.
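To make the graph-theoretic reformulation concrete, the following sketch (an illustrative helper using the networkx library; the function name is made up) constructs the Keller graph of dimension n as defined above and brute-forces the maximum clique for the small dimensions whose values are quoted above.

```python
from itertools import product, combinations
import networkx as nx

def keller_graph(n):
    """Keller graph of dimension n: vertices are n-tuples over {0,1,2,3};
    two tuples are adjacent if they differ in at least two coordinates
    and differ by exactly two in at least one coordinate."""
    G = nx.Graph()
    vertices = list(product(range(4), repeat=n))
    G.add_nodes_from(vertices)
    for u, v in combinations(vertices, 2):
        diffs = [abs(a - b) for a, b in zip(u, v)]
        if sum(d != 0 for d in diffs) >= 2 and any(d == 2 for d in diffs):
            G.add_edge(u, v)
    return G

for n in (2, 3):
    G = keller_graph(n)
    omega = max(len(c) for c in nx.find_cliques(G))  # maximum clique size
    print(n, omega)  # expect 2 and 5, matching the sizes quoted above
```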
diff --git a/wiki/wikipedia/4076.txt b/wiki/wikipedia/4076.txt
deleted file mode 100644
index 99c9a62cf29ede25aa69ee622e9131a5c29dcd34..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4076.txt
+++ /dev/null
@@ -1,39 +0,0 @@

Wiles's proof of Fermat's Last Theorem is a proof by British mathematician Andrew Wiles of a special case of the modularity theorem for elliptic curves. Together with Ribet's theorem, it provides a proof for Fermat's Last Theorem. Both Fermat's Last Theorem and the modularity theorem were almost universally considered inaccessible to proof by contemporaneous mathematicians, meaning that they were believed to be impossible to prove using current knowledge. However, in September 1993 the proof was found to contain an error. One year later, on 19 September 1994, in what he would call "the most important moment of [his] working life", Wiles stumbled upon a revelation that allowed him to correct the proof to the satisfaction of the mathematical community. The corrected proof was published in 1995.

Wiles's proof uses many techniques from algebraic geometry and number theory, and has many ramifications in these branches of mathematics. It also uses standard constructions of modern algebraic geometry, such as the category of schemes and Iwasawa theory, and other 20th-century techniques which were not available to Fermat.

Together, the two papers which contain the proof are 129 pages long. Wiles's path to proving Fermat's Last Theorem, by way of proving the modularity theorem for the special case of semistable elliptic curves, established powerful modularity lifting techniques and opened up entire new approaches to numerous other problems. For proving Fermat's Last Theorem, he was knighted, and received other honours such as the 2016 Abel Prize. When announcing that Wiles had won the Abel Prize, the Norwegian Academy of Science and Letters described his achievement as a "stunning proof".

Following the developments related to the Frey curve, and its link to both Fermat and Taniyama, a proof of Fermat's Last Theorem would follow from a proof of the Taniyama–Shimura–Weil conjecture, or at least a proof of the conjecture for the kinds of elliptic curves that included Frey's equation (known as semistable elliptic curves).

* From Ribet's theorem and the Frey curve, any four numbers that could be used to disprove Fermat's Last Theorem could also be used to make a semistable elliptic curve ("Frey's curve") that could never be modular;

* But if the Taniyama–Shimura–Weil conjecture were also true for semistable elliptic curves, then by definition every Frey curve that existed must be modular.

* The contradiction could have only one answer: if Ribet's theorem and the Taniyama–Shimura–Weil conjecture for semistable curves were both true, then it would mean there could not be any solutions to Fermat's equation, because then there would be no Frey curves at all, meaning no contradictions would exist. This would finally prove Fermat's Last Theorem.

However, despite the progress made by Serre and Ribet, this approach to Fermat was widely considered unusable as well, since almost all mathematicians saw the Taniyama–Shimura–Weil conjecture itself as completely inaccessible to proof with current knowledge. Nevertheless, the enticing goal of proving such a long-standing problem drew Wiles to attempt it.

Ribet later commented that "Andrew Wiles was probably one of the few people on earth who had the audacity to dream that you can actually go and prove [it]."
After the announcement, Nick Katz was appointed as one of the referees to review Wiles's manuscript. In the course of his review, he asked Wiles a series of clarifying questions that led Wiles to recognise that the proof contained a gap. There was an error in one critical portion of the proof which gave a bound for the order of a particular group: the Euler system used to extend Kolyvagin and Flach's method was incomplete. The error would not have rendered his work worthless: each part of Wiles's work was highly significant and innovative by itself, as were the many developments and techniques he had created in the course of his work, and only one part was affected. Without this part proved, however, there was no actual proof of Fermat's Last Theorem.

Wiles spent almost a year trying to repair his proof, initially by himself and then in collaboration with his former student Richard Taylor, without success. By the end of 1993, rumours had spread that under scrutiny, Wiles's proof had failed, but how seriously was not known. Mathematicians were beginning to pressure Wiles to disclose his work whether or not complete, so that the wider community could explore and use whatever he had managed to accomplish. Instead of being fixed, the problem, which had originally seemed minor, now seemed very significant, far more serious, and less easy to resolve.

Wiles states that on the morning of 19 September 1994, he was on the verge of giving up and was almost resigned to accepting that he had failed, and to publishing his work so that others could build on it and find the error. He states that he was having a final look to try and understand the fundamental reasons why his approach could not be made to work, when he had a sudden insight: the specific reason why the Kolyvagin–Flach approach would not work directly also meant that his original attempt using Iwasawa theory could be made to work if he strengthened it using experience gained from the Kolyvagin–Flach approach. Each was inadequate by itself, but fixing one approach with tools from the other would resolve the issue and produce a class number formula (CNF) valid for all cases that were not already proven by his refereed paper. The proof appeared in two papers, "Modular elliptic curves and Fermat's Last Theorem" and "Ring theoretic properties of certain Hecke algebras", the second of which Wiles had written with Taylor; it proved that certain conditions were met which were needed to justify the corrected step in the main paper.

The two papers were vetted and finally published as the entirety of the May 1995 issue of the Annals of Mathematics. The new proof was widely analysed, and became accepted as likely correct in its major components.

Wiles used proof by contradiction, in which one assumes the opposite of what is to be proved, and shows that if that were true, it would create a contradiction. The contradiction shows that the assumption must have been incorrect.

The proof falls roughly into two parts. In the first part, Wiles proves a general result about "lifts", known as the "modularity lifting theorem". This first part allows him to prove results about elliptic curves by converting them to problems about Galois representations of elliptic curves. He then uses this result to prove that all semistable curves are modular, by proving that the Galois representations of these curves are modular.

Wiles opted to attempt to match elliptic curves to a countable set of modular forms.
He found that this direct approach was not working, so he transformed the problem by instead matching the Galois representations of the elliptic curves to modular forms. Wiles denotes this matching (or mapping), which, more specifically, is a ring homomorphism: -$$ - R_n \rightarrow \mathbf{T}_n, -$$ - -where $R$ is a deformation ring and $\mathbf{T}$ is a Hecke ring. - -Wiles had the insight that in many cases this ring homomorphism could be a ring isomorphism (Conjecture 2.16 in Chapter 2, §3 of the 1995 paper). The book of the Cornell conference also contained simplifications to the original proof, and is claimed to be accessible to "a graduate student in number theory"; the Cornell book does not, however, cover the entirety of the Wiles proof. diff --git a/wiki/wikipedia/4077.txt b/wiki/wikipedia/4077.txt deleted file mode 100644 index 5dad185206a27d3cd05fb2a3a232724948b77730..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4077.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematics, Arakelyan's theorem is a generalization of Mergelyan's theorem from compact subsets of an open subset of the complex plane to relatively closed subsets of an open subset. - -Let Ω be an open subset of $\mathbb{C}$ and E a relatively closed subset of Ω. By Ω* is denoted the Alexandroff compactification of Ω. - -Arakelyan's theorem states that for every f continuous in E and holomorphic in the interior of E and for every ε > 0 there exists g holomorphic in Ω such that |g - f| < ε on E, if and only if Ω* \ E is connected and locally connected. diff --git a/wiki/wikipedia/4078.txt b/wiki/wikipedia/4078.txt deleted file mode 100644 index ab730743e264a31c9641ca1b499070f61a64f4df..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4078.txt +++ /dev/null @@ -1,35 +0,0 @@ -In computer science, priority inversion is a scenario in scheduling in which a high priority task is indirectly superseded by a lower priority task, effectively inverting the assigned priorities of the tasks. This violates the priority model that high priority tasks can only be prevented from running by higher priority tasks. Inversion occurs when there is resource contention with a low priority task that is then preempted by a medium priority task. - -Consider two tasks $H$ and $L$, of high and low priority respectively, either of which can acquire exclusive use of a shared resource $R$. If $H$ attempts to acquire $R$ after $L$ has acquired it, then $H$ becomes blocked until $L$ relinquishes the resource. Sharing an exclusive-use resource ($R$ in this case) in a well-designed system typically involves $L$ relinquishing $R$ promptly so that $H$ (a higher priority task) does not stay blocked for excessive periods of time. Despite good design, however, it is possible that a third task $M$ of medium priority ($p(L) < p(M) < p(H)$, where $p(x)$ represents the priority for task $x$) becomes runnable during $L$'s use of $R$. At this point, $M$, being higher in priority than $L$, preempts $L$ (since $M$ does not depend on $R$), causing $L$ to be unable to relinquish $R$ promptly, in turn causing $H$—the highest priority process—to be unable to run (that is, $H$ suffers unexpected blockage indirectly caused by lower priority tasks like $M$). - -In some cases, priority inversion can occur without causing immediate harm—the delayed execution of the high priority task goes unnoticed, and eventually the low priority task releases the shared resource. However, there are also many situations in which priority inversion can cause serious problems.
If the high priority task is left starved of the resource, it might lead to a system malfunction or the triggering of pre-defined corrective measures, such as a watchdog timer resetting the entire system. The trouble experienced by the Mars Pathfinder lander in 1997 is a classic example of problems caused by priority inversion in realtime systems. - -Priority inversion can also reduce the perceived performance of the system. Low priority tasks usually have a low priority because it is not important for them to finish promptly (for example, they might be a batch job or another non-interactive activity). Similarly, a high priority task has a high priority because it is more likely to be subject to strict time constraints—it may be providing data to an interactive user, or acting subject to realtime response guarantees. Because priority inversion results in the execution of a lower priority task blocking the high priority task, it can lead to reduced system responsiveness, or even the violation of response time guarantees. - -A similar problem called deadline interchange can occur within earliest deadline first scheduling (EDF). - -The existence of this problem has been known since the 1970s. Lampson and Redell published one of the first papers to point out the priority inversion problem. Systems such as the UNIX kernel were already addressing the problem with the splx() primitive. There is no foolproof method to predict when priority inversion will occur. There are, however, many existing solutions, of which the most common ones are: - -;Disabling all interrupts to protect critical sections - -When disabling interrupts is used to prevent priority inversion, there are only two priorities: preemptible, and interrupts disabled. With no third priority, inversion is impossible. Since there's only one piece of lock data (the interrupt-enable bit), misordering locking is impossible, and so deadlocks cannot occur. Since the critical regions always run to completion, hangs do not occur. Note that this only works if all interrupts are disabled. If only a particular hardware device's interrupt is disabled, priority inversion is reintroduced by the hardware's prioritization of interrupts. In early versions of UNIX, a series of primitives named splx(0) ... splx(7) disabled all interrupts up through the given priority. By properly choosing the highest priority of any interrupt that ever entered the critical section, the priority inversion problem could be solved without locking out all of the interrupts. Ceilings were assigned in rate-monotonic order, i.e. the slower devices had lower priorities. - -In multiple CPU systems, a simple variation, "single shared-flag locking", is used. This scheme provides a single flag in shared memory that is used by all CPUs to lock all inter-processor critical sections with a busy-wait. Interprocessor communications are expensive and slow on most multiple CPU systems. Therefore, most such systems are designed to minimize shared resources. As a result, this scheme actually works well on many practical systems. These methods are widely used in simple embedded systems, where they are prized for their reliability, simplicity and low resource use. These schemes also require clever programming to keep the critical sections very brief. Many software engineers consider them impractical in general-purpose computers.
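The L/M/H scenario above is easy to reproduce in a toy scheduler. The following sketch is our own illustration (task names, arrival times, and work units are invented; no real RTOS works this way): it steps a strict-priority scheduler through the timeline, with and without the priority-inheritance remedy described below.

```python
def simulate(inheritance=False):
    base = {"L": 1, "M": 2, "H": 3}        # static priorities (invented)
    arrive = {"L": 0, "H": 1, "M": 2}      # release times (invented)
    work = {"L": 3, "M": 5, "H": 2}        # time units each task needs
    lock_owner = "L"                       # L grabs R at t = 0
    timeline, t = [], 0
    while any(work.values()):
        ready = [n for n in work if work[n] and arrive[n] <= t]
        # H cannot run while another task holds R
        runnable = [n for n in ready if n != "H" or lock_owner in (None, "H")]
        def prio(name):
            boosted = (inheritance and name == lock_owner
                       and "H" in ready and lock_owner != "H")
            return base["H"] if boosted else base[name]
        task = max(runnable, key=prio)     # strict priority scheduling
        work[task] -= 1
        timeline.append(task)
        if task == "L" and work["L"] == 0:
            lock_owner = None              # L leaves its critical section
        if lock_owner is None and "H" in ready and work["H"]:
            lock_owner = "H"               # H finally acquires R
        t += 1
    return "".join(timeline)

print(simulate(False))  # LLMMMMMLHH : M's whole burst runs while H is blocked
print(simulate(True))   # LLLHHMMMMM : boosted L releases R, H runs promptly
```

Without inheritance, the medium-priority task runs to completion while the high-priority task sits blocked on $R$; with inheritance, the lock holder is boosted, releases $R$ quickly, and $H$ unblocks almost immediately.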
- -;Priority ceiling protocol - -With priority ceiling protocol, the shared mutex process (that runs the operating system code) has a characteristic (high) priority of its own, which is assigned to the task locking the mutex. This works well, provided that no other high priority task that tries to access the mutex has a priority higher than the ceiling priority. - -;Priority inheritance - -Under the policy of priority inheritance, whenever a high priority task has to wait for some resource shared with an executing low priority task, the low priority task is temporarily assigned the priority of the highest-priority waiting task for the duration of its own use of the shared resource, thus keeping medium priority tasks from pre-empting the (originally) low priority task, and thereby affecting the waiting high priority task as well. Once the resource is released, the low priority task continues at its original priority level. - -;Random boosting - -Ready tasks holding locks are randomly boosted in priority until they exit the critical section. This solution is used in Microsoft Windows. - -;Avoid blocking - -Because priority inversion involves a low-priority task blocking a high-priority task, one way to avoid priority inversion is to avoid blocking, for example by using non-blocking algorithms such as read-copy-update. diff --git a/wiki/wikipedia/4079.txt b/wiki/wikipedia/4079.txt deleted file mode 100644 index d002f375635b826ea5135a61c4ede8b536484221..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4079.txt +++ /dev/null @@ -1,19 +0,0 @@ -The Bruck–Ryser–Chowla theorem is a result on the combinatorics of block designs that implies nonexistence of certain kinds of design. It states that if a (v, b, r, k, λ)-design exists with v = b (a symmetric block design), then: - -* if v is even, then k - λ is a square; - -* if v is odd, then the following Diophantine equation has a nontrivial solution: - -*: $x^2 - (k - \lambda)y^2 - (-1)^{(v-1)/2} \lambda z^2 = 0$. - -The theorem was proved in the case of projective planes by Bruck and Ryser. It was extended to symmetric designs by Chowla and Ryser. - -In the special case of a symmetric design with λ = 1, that is, a projective plane, the theorem (which in this case is referred to as the Bruck–Ryser theorem) can be stated as follows: If a finite projective plane of order q exists and q is congruent to 1 or 2 (mod 4), then q must be the sum of two squares. Note that for a projective plane, the design parameters are $v = b = q^2 + q + 1$, $r = k = q + 1$, $\lambda = 1$. Thus, v is always odd in this case. - -The theorem, for example, rules out the existence of projective planes of orders 6 and 14 but allows the existence of planes of orders 10 and 12. Since a projective plane of order 10 has been shown not to exist using a combination of coding theory and large-scale computer search, the condition of the theorem is evidently not sufficient for the existence of a design. However, no stronger general non-existence criterion is known. - -The existence of a symmetric (v, b, r, k, λ)-design is equivalent to the existence of a v × v incidence matrix R with elements 0 and 1 satisfying - -$R R^T = (k - \lambda)I + \lambda J$ - -where I is the v × v identity matrix and J is the v × v all-1 matrix. In essence, the Bruck–Ryser–Chowla theorem is a statement of the necessary conditions for the existence of a rational v × v matrix R satisfying this equation. In fact, the conditions stated in the Bruck–Ryser–Chowla theorem are not merely necessary, but also sufficient for the existence of such a rational matrix R.
They can be derived from the Hasse–Minkowski theorem on the rational equivalence of quadratic forms. diff --git a/wiki/wikipedia/408.txt b/wiki/wikipedia/408.txt deleted file mode 100644 index 0c3b30f376e58a0698929ac939cc312b6d72ad00..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/408.txt +++ /dev/null @@ -1,161 +0,0 @@ -In statistics, the delta method is a result concerning the approximate probability distribution for a function of an asymptotically normal statistical estimator from knowledge of the limiting variance of that estimator. - -The delta method was derived from propagation of error, and the idea behind it was already known in the early 19th century. Its statistical application can be traced back as far as 1928, to T. L. Kelley. A formal description of the method was presented by J. L. Doob in 1935. Robert Dorfman also described a version of it in 1938. - -While the delta method generalizes easily to a multivariate setting, careful motivation of the technique is more easily demonstrated in univariate terms. Roughly, if there is a sequence of random variables Xn satisfying -$$ -{\sqrt{n}[X_n-\theta]\xrightarrow{D}\mathcal{N}(0,\sigma^2)}, -$$ - -where θ and $\sigma^2$ are finite valued constants and $\xrightarrow{D}$ denotes convergence in distribution, then -$$ -{\sqrt{n}[g(X_n)-g(\theta)]\xrightarrow{D}\mathcal{N}(0,\sigma^2\cdot[g'(\theta)]^2)} -$$ - -for any function g satisfying the property that g′(θ) exists and is non-zero valued. - -Demonstration of this result is fairly straightforward under the assumption that g′(θ) is continuous. To begin, we use the mean value theorem (i.e., the first-order approximation of a Taylor series using Taylor's theorem): -$$ -g(X_n)=g(\theta)+g'(\tilde{\theta})(X_n-\theta), -$$ - -where $\tilde{\theta}$ lies between Xn and θ. - -Note that since $X_n\xrightarrow{P}\theta$ and $|\tilde{\theta}-\theta|<|X_n-\theta|$, it must be that $\tilde{\theta} \xrightarrow{P}\theta$, and since g′(θ) is continuous, applying the continuous mapping theorem yields -$$ -g'(\tilde{\theta})\xrightarrow{P}g'(\theta), -$$ - -where $\xrightarrow{P}$ denotes convergence in probability. - -Rearranging the terms and multiplying by $\sqrt{n}$ gives -$$ -\sqrt{n}[g(X_n)-g(\theta)]=g' \left (\tilde{\theta} \right )\sqrt{n}[X_n-\theta]. -$$ - -Since -$$ -{\sqrt{n}[X_n-\theta] \xrightarrow{D} \mathcal{N}(0,\sigma^2)} -$$ - -by assumption, it follows immediately from appeal to Slutsky's theorem that -$$ -{\sqrt{n}[g(X_n)-g(\theta)] \xrightarrow{D} \mathcal{N}(0,\sigma^2[g'(\theta)]^2)}. -$$ - -This concludes the proof. - -Alternatively, one can add one more step at the end, to obtain the order of approximation: - - - -\begin{align} - -\sqrt{n}[g(X_n)-g(\theta)]&=g' \left (\tilde{\theta} \right )\sqrt{n}[X_n-\theta]=\sqrt{n}[X_n-\theta]\left[ g'(\tilde{\theta} )+g'(\theta)-g'(\theta)\right]\\ - -&=\sqrt{n}[X_n-\theta]\left[g'(\theta)\right]+\sqrt{n}[X_n-\theta]\left[ g'(\tilde{\theta} )-g'(\theta)\right]\\ - -&=\sqrt{n}[X_n-\theta]\left[g'(\theta)\right]+O_p(1)\cdot o_p(1)\\ - -&=\sqrt{n}[X_n-\theta]\left[g'(\theta)\right]+o_p(1) - -\end{align} - - - -This suggests that the error in the approximation converges to 0 in probability. - -By definition, a consistent estimator B converges in probability to its true value β, and often a central limit theorem can be applied to obtain asymptotic normality: -$$ -\sqrt{n}\left(B-\beta\right)\xrightarrow{D}N\left(0, \Sigma \right), -$$ - -where n is the number of observations and Σ is a (symmetric positive semi-definite) covariance matrix.
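Before continuing to the multivariate case, the univariate statement is easy to sanity-check by simulation. This is a sketch of our own: the Gaussian data, the choice g(x) = x², and all constants are arbitrary and not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 10_000
theta, sigma = 2.0, 1.5                 # mean and sd of the raw observations
g = np.square                           # g(x) = x**2, so g'(theta) = 2*theta

# X_n is the sample mean of n iid draws; simulate it `reps` times
xbar = rng.normal(theta, sigma, size=(reps, n)).mean(axis=1)
lhs = np.sqrt(n) * (g(xbar) - g(theta))

print(lhs.std())                        # empirical sd, close to 6.0
print(sigma * abs(2 * theta))           # delta-method sd: 1.5 * |2*2| = 6.0
```

The two printed values should agree to roughly two decimal places, in line with the limiting variance $\sigma^2 [g'(\theta)]^2$.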
Suppose we want to estimate the variance of a scalar-valued function h of the estimator B. Keeping only the first two terms of the Taylor series, and using vector notation for the gradient, we can estimate h(B) as -$$ -h(B) \approx h(\beta) + \nabla h(\beta)^T \cdot (B-\beta) -$$ - -which implies the variance of h(B) is approximately - -\begin{align} - -\operatorname{Var}\left(h(B)\right) & \approx \operatorname{Var}\left(h(\beta) + \nabla h(\beta)^T \cdot (B-\beta)\right) \\ - -& = \operatorname{Var}\left(h(\beta) + \nabla h(\beta)^T \cdot B - \nabla h(\beta)^T \cdot \beta\right) \\ - -& = \operatorname{Var}\left(\nabla h(\beta)^T \cdot B\right) \\ - -& = \nabla h(\beta)^T \cdot \operatorname{Cov}(B) \cdot \nabla h(\beta) \\ - -& = \nabla h(\beta)^T \cdot \Sigma \cdot \nabla h(\beta) - -\end{align} - -One can use the mean value theorem (for real-valued functions of many variables) to see that this does not rely on taking a first-order approximation. - -The delta method therefore implies that -$$ -\sqrt{n}\left(h(B)-h(\beta)\right)\xrightarrow{D}N\left(0, \nabla h(\beta)^T \cdot \Sigma \cdot \nabla h(\beta)\right) -$$ - -or in univariate terms, -$$ -\sqrt{n}\left(h(B)-h(\beta)\right)\xrightarrow{D}N\left(0, \sigma^2 \cdot \left(h^\prime(\beta)\right)^2 \right). -$$ - -Suppose Xn is binomial with parameters $ p \in (0,1] $ and n. Since -$$ -{\sqrt{n} \left[ \frac{X_n}{n}-p \right]\xrightarrow{D}N(0,p (1-p))}, -$$ - -we can apply the delta method with g(θ) = log(θ) to see -$$ -{\sqrt{n} \left[ \log\left( \frac{X_n}{n}\right)-\log(p)\right] \xrightarrow{D}N(0,p (1-p) [1/p]^2)} -$$ - -Hence, even though for any finite n, the variance of $\log\left(\frac{X_n}{n}\right)$ does not actually exist (since Xn can be zero), the asymptotic variance of $ \log \left( \frac{X_n}{n} \right) $ does exist and is equal to -$$ - \frac{1-p}{np}. -$$ - -Note that since p>0, $ \Pr \left( \frac{X_n}{n} > 0 \right) \rightarrow 1 $ as $ n \rightarrow \infty $, so with probability converging to one, $ \log\left(\frac{X_n}{n}\right) $ is finite for large n. - -Moreover, if $\hat p $ and $\hat q$ are estimates of different group rates from independent samples of sizes n and m respectively, then the logarithm of the estimated relative risk $\frac{\hat p}{\hat q} $ has asymptotic variance equal to -$$ - \frac{1-p}{p n}+\frac{1-q}{q m}. -$$ - -This is useful for constructing a hypothesis test or a confidence interval for the relative risk. - -The delta method is often used in a form that is essentially identical to that above, but without the assumption that Xn or B is asymptotically normal. Often the only context is that the variance is "small". The results then just give approximations to the means and covariances of the transformed quantities. For example, the formulae presented in Klein (1953, p.
258) are: - -\begin{align} - -\operatorname{Var} \left(h_r \right) = & \sum_i \left( \frac{\partial h_r}{\partial B_i} \right)^2 \operatorname{Var}\left( B_i \right) + \sum_i \sum_{j \neq i} \left( \frac{ \partial h_r }{ \partial B_i } \right) \left( \frac{ \partial h_r }{ \partial B_j } \right) \operatorname{Cov}\left( B_i, B_j \right) \\ - -\operatorname{Cov}\left( h_r, h_s \right) = & \sum_i \left( \frac{ \partial h_r }{ \partial B_i } \right) \left( \frac{\partial h_s }{ \partial B_i } \right) \operatorname{Var}\left( B_i \right) + \sum_i \sum_{j \neq i} \left( \frac{\partial h_r}{\partial B_i} \right) \left(\frac{\partial h_s}{\partial B_j} \right) \operatorname{Cov}\left( B_i, B_j \right) - -\end{align} - -where hr is the rth element of h(B) and Bi is the ith element of B. - -When g′(θ) = 0 the delta method cannot be applied. However, if g′′(θ) exists and is not zero, the second-order delta method can be applied. By the Taylor expansion, $n[g(X_n)-g(\theta)]=\frac{1}{2}n[X_n-\theta]^2\left[g''(\theta)\right]+o_p(1)$, so that the variance of $g\left(X_n\right)$ relies on up to the 4th moment of $X_n$. - -The second-order delta method is also useful in conducting a more accurate approximation of $g\left(X_n\right)$'s distribution when sample size is small. -$$ -\sqrt{n}[g(X_n)-g(\theta)]=\sqrt{n}[X_n-\theta] g'(\theta)+\frac{1}{2}\sqrt{n}[X_n-\theta]^2 g''(\theta) +o_p(1) -$$. - -For example, when $X_n$ follows the standard normal distribution, $g\left(X_n\right)$ can be approximated as the weighted sum of a standard normal and a chi-square with one degree of freedom. - -A version of the delta method exists in nonparametric statistics. Let $X_i \sim F$ be an iid sample of size $n$ with ecdf $\hat{F}_n$, and let $T$ be a functional. If $T$ is Hadamard differentiable with respect to the Chebyshev metric, then -$$ -\frac{ T(\hat{F}_n) - T(F) }{ \widehat{\mathit{se}} } \xrightarrow{D} N(0, 1) -$$ - -where $\widehat{\mathit{se}} = \frac{\hat{\tau}}{\sqrt{n}}$ and $\hat{\tau}^2 = \frac{1}{n}\sum_{i=1}^n \hat{L}^2(X_i)$, with $\hat{L}(x) = L_{\hat{F}_n}(\delta_x)$ denoting the empirical influence function (Gâteaux derivative of $T$ at the ecdf in the "direction" of the point mass at $x$, $\delta_x$). A nonparametric $(1-\alpha)$ pointwise asymptotic confidence interval for $T(F)$ is therefore given by -$$ -T(\hat{F}_n) \pm z_{\alpha/2} \widehat{\mathit{se}} -$$ - -where $z_q$ denotes the $q$-quantile of the standard normal. See Wasserman (2006) p. 19f. for details and examples. diff --git a/wiki/wikipedia/4080.txt b/wiki/wikipedia/4080.txt deleted file mode 100644 index 372b1dfda2a2f7ba45f5cfbcd0dcdd191b66118e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4080.txt +++ /dev/null @@ -1,73 +0,0 @@ -Optimal solutions for Rubik's Cube refer to solutions that are the shortest. There are two common ways to measure the length of a solution. The first is to count the number of quarter turns. The second is to count the number of outer-layer twists, called "face turns". A move to turn an outer layer two quarter (90°) turns in the same direction would be counted as two moves in the quarter turn metric (QTM), but as one turn in the face metric (FTM, or HTM "Half Turn Metric", or OBTM "Outer Block Turn Metric"). - -The minimal number of face turns needed to solve any instance of the Rubik's Cube is 20. In 1992, a solution for the superflip with 20 face turns was found by Dik T.
Winter, whose minimality was shown in 1995 by Michael Reid, providing a new lower bound for the diameter of the cube group. Also in 1995, a solution for the superflip in 24 quarter turns was found by Michael Reid, with its minimality proven by Jerry Bryan. In 1998, a new position requiring more than 24 quarter turns to solve was found. The position, which was called a 'superflip composed with four spot', needs 26 quarter turns. - -The first upper bounds were based on the 'human' algorithms. By combining the worst-case scenarios for each part of these algorithms, the typical upper bound was found to be around 100. - -Perhaps the first concrete value for an upper bound was the 277 moves mentioned by David Singmaster in early 1979. He simply counted the maximum number of moves required by his cube-solving algorithm. Later, Singmaster reported that Elwyn Berlekamp, John Conway, and Richard K. Guy had come up with a different algorithm that took at most 160 moves. Soon after, Conway's Cambridge Cubists reported that the cube could be restored in at most 94 moves. - -The breakthrough, known as "descent through nested sub-groups", was found by Morwen Thistlethwaite; details of Thistlethwaite's algorithm were published in Scientific American in 1981 by Douglas Hofstadter. The approaches to the cube that led to algorithms with very few moves are based on group theory and on extensive computer searches. Thistlethwaite's idea was to divide the problem into subproblems. Where algorithms up to that point divided the problem by looking at the parts of the cube that should remain fixed, he divided it by restricting the type of moves that could be executed. In particular he divided the cube group into the following chain of subgroups: - -*$G_0=\langle L,R,F,B,U,D\rangle$ - -*$G_1=\langle L,R,F,B,U^2,D^2\rangle$ - -*$G_2=\langle L,R,F^2,B^2,U^2,D^2\rangle$ - -*$G_3=\langle L^2,R^2,F^2,B^2,U^2,D^2\rangle$ - -*$G_4=\{1\}$ - -Next he prepared tables for each of the right coset spaces $G_{i+1}\setminus G_i$. For each element he found a sequence of moves that took it to the next smaller group. After these preparations he worked as follows. A random cube is in the general cube group $G_0$. Next he found this element in the right coset space $G_1\setminus G_0$. He applied the corresponding process to the cube. This took it to a cube in $G_1$. Next he looked up a process that takes the cube to $G_2$, next to $G_3$ and finally to $G_4$. - -Although the whole cube group $G_0$ is very large (~$4.3\times10^{19}$), the right coset spaces $G_1\setminus G_0, G_2\setminus G_1, G_3\setminus G_2$ and $G_3$ are much smaller. - -The coset space $G_2\setminus G_1$ is the largest and contains only 1,082,565 elements. The number of moves required by this algorithm is the sum of the largest process in each step. - -Initially, Thistlethwaite showed that any configuration could be solved in at most 85 moves. In January 1980 he improved his strategy to yield a maximum of 80 moves. Later that same year, he reduced the number to 63, and then again to 52. By exhaustively searching the coset spaces it was later found that the worst possible number of moves for each stage was 7, 10, 13, and 15, giving a total of 45 moves at most. There have been implementations of Thistlethwaite's algorithm in various computer languages. Thistlethwaite's algorithm was improved by Herbert Kociemba in 1992.
He reduced the number of intermediate groups to only two: - -*$G_0=\langle U,D,L,R,F,B\rangle$ - -*$G_1=\langle U,D,L^2,R^2,F^2,B^2\rangle$ - -*$G_2=\{1\}$ - -As with Thistlethwaite's algorithm, he would search through the right coset space $G_1\setminus G_0$ to take the cube to group $G_1$. Next he searched for the optimal solution within group $G_1$. The searches in $G_1\setminus G_0$ and $G_1$ were both done with a method equivalent to IDA*. The search in $G_1\setminus G_0$ needs at most 12 moves and the search in $G_1$ at most 18 moves, as Michael Reid showed in 1995. By also generating suboptimal solutions that take the cube to group $G_1$ and looking for short solutions in $G_1$, much shorter overall solutions are usually obtained. Using this algorithm, solutions of fewer than 21 moves are typically found, though there is no proof that it will always do so. - -In 1995 Michael Reid proved that using these two groups every position can be solved in at most 29 face turns, or in 42 quarter turns. This result was improved by Silviu Radu in 2005 to 40 quarter turns. - -At first glance, this algorithm appears to be practically inefficient: if $G_0$ contains 18 possible moves (each move, its prime, and its 180-degree rotation), that leaves $18^{12}$ (over 1 quadrillion) cube states to be searched. Even with a heuristic-based computer algorithm like IDA*, which may narrow it down considerably, searching through that many states is likely not practical. To solve this problem, Kociemba devised a lookup table that provides an exact heuristic for $G_0$. When the exact number of moves needed to reach $G_1$ is available, the search becomes virtually instantaneous: one need only generate 18 cube states for each of the 12 moves and choose the one with the lowest heuristic each time. This allows the second heuristic, that for $G_1$, to be less precise and still allow for a solution to be computed in reasonable time on a modern computer. - -Using these group solutions combined with computer searches will generally quickly give very short solutions. But these solutions do not always come with a guarantee of their minimality. To search specifically for minimal solutions a new approach was needed. - -In 1997 Richard Korf announced an algorithm with which he had optimally solved random instances of the cube. Of the ten random cubes he solved, none required more than 18 face turns. The method he used is called IDA* and is described in his paper "Finding Optimal Solutions to Rubik's Cube Using Pattern Databases". Korf describes this method as follows: - -IDA* is a depth-first search that looks for increasingly longer solutions in a series of iterations, using a lower-bound heuristic to prune branches once a lower bound on their length exceeds the current iteration's bound. - -It works roughly as follows. First he identified a number of subproblems that are small enough to be solved optimally. He used: - -#The cube restricted to only the corners, not looking at the edges. - -#The cube restricted to only 6 edges, not looking at the corners or at the other edges. - -#The cube restricted to the other 6 edges. - -Clearly the number of moves required to solve any of these subproblems is a lower bound for the number of moves needed to solve the entire cube. - -Given a random cube C, it is solved by iterative deepening. First all cubes are generated that are the result of applying 1 move to them. That is C * F, C * U, … Next, from this list, all cubes are generated that are the result of applying two moves. Then three moves, and so on; a sketch of this pruned iterative deepening is given below.
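The loop just described, with lower-bound pruning folded in, is the essence of IDA*. The following is a compact generic sketch of our own: `moves` and `h` stand in for cube move generation and Korf's pattern-database heuristic, and the toy usage at the end is a four-node path, not a cube.

```python
def ida_star(start, is_goal, moves, h):
    """Iterative-deepening A*: repeat depth-first searches with a growing bound."""
    bound = h(start)
    while True:
        t = _search(start, 0, bound, is_goal, moves, h, [start])
        if isinstance(t, list):
            return t                 # a solution path; optimal if h is admissible
        if t == float("inf"):
            return None              # no solution at all
        bound = t                    # smallest f-value that exceeded the bound

def _search(node, g, bound, is_goal, moves, h, path):
    f = g + h(node)
    if f > bound:
        return f                     # prune: lower bound exceeds this iteration's bound
    if is_goal(node):
        return path
    best = float("inf")
    for nxt in moves(node):
        if nxt in path:              # avoid trivially revisiting states on the path
            continue
        t = _search(nxt, g + 1, bound, is_goal, moves, h, path + [nxt])
        if isinstance(t, list):
            return t
        best = min(best, t)
    return best

# Toy usage on the line graph 0-1-2-3 with the admissible heuristic h(n) = 3 - n:
print(ida_star(0, lambda n: n == 3,
               lambda n: [m for m in (n - 1, n + 1) if 0 <= m <= 3],
               lambda n: 3 - n))     # -> [0, 1, 2, 3]
```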
If at any point a cube is found that, based on the upper bounds, needs too many moves to still be optimal, it can be eliminated from the list. - -Although this algorithm will always find optimal solutions, there is no worst case analysis. It is not known how many moves this algorithm might need. - -In 2006, Silviu Radu further improved his methods to prove that every position can be solved in at most 27 face turns or 35 quarter turns. Daniel Kunkle and Gene Cooperman in 2007 used a supercomputer to show that all unsolved cubes can be solved in no more than 26 moves (in face-turn metric). Instead of attempting to solve each of the billions of variations explicitly, the computer was programmed to bring the cube to one of 15,752 states, each of which could be solved within a few extra moves. All were proved solvable in 29 moves, with most solvable in 26. Those that could not initially be solved in 26 moves were then solved explicitly, and it was shown that they too could be solved in 26 moves. - -Tomas Rokicki reported in a 2008 computational proof that all unsolved cubes could be solved in 25 moves or fewer. This was later reduced to 23 moves. In August 2008 Rokicki announced that he had a proof for 22 moves. - -Finally, in 2010, Tomas Rokicki, Herbert Kociemba, Morley Davidson, and John Dethridge gave the final computer-assisted proof that all cube positions could be solved with a maximum of 20 face turns. - -In 2009, Tomas Rokicki proved that 29 moves in the quarter-turn metric are enough to solve any scrambled cube. In 2014, Tomas Rokicki and Morley Davidson proved that the maximum number of quarter-turns needed to solve the cube is 26. - -The face-turn and quarter-turn metrics differ in the nature of their antipodes. - -An antipode is a scrambled cube that is maximally far from solved, one that requires the maximum number of moves to solve. In the half-turn metric with a maximum number of 20, there are hundreds of millions of such positions. In the quarter-turn metric, only a single position (and its two rotations) is known that requires the maximum of 26 moves. Despite significant effort, no additional quarter-turn distance-26 positions have been found. Even at distance 25, only two positions (and their rotations) are known to exist. At distance 24, perhaps 150,000 positions exist. diff --git a/wiki/wikipedia/4081.txt b/wiki/wikipedia/4081.txt deleted file mode 100644 index 80c79a4f6e7e4f210254e5ebd837334205573e0d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4081.txt +++ /dev/null @@ -1,62 +0,0 @@ -The star height problem in formal language theory is the question of whether all regular languages can be expressed using regular expressions of limited star height, i.e. with a limited nesting depth of Kleene stars. Specifically, is a nesting depth of one always sufficient? If not, is there an algorithm to determine how many are required? The problem was raised by Eggan. - -The first question was answered in the negative when, in 1963, Eggan gave examples of regular languages of star height n for every n. Here, the star height h(L) of a regular language L is defined as the minimum star height among all regular expressions representing L.
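The star height of a given expression is immediate to compute on its syntax tree; the following is a minimal sketch, with an AST encoding of our own rather than anything from a standard library.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Sym:                 # a single alphabet symbol
    ch: str

@dataclass
class Cat:                 # concatenation
    left: "Re"
    right: "Re"

@dataclass
class Alt:                 # union (alternation)
    left: "Re"
    right: "Re"

@dataclass
class Star:                # Kleene star
    inner: "Re"

Re = Union[Sym, Cat, Alt, Star]

def star_height(e: Re) -> int:
    """Nesting depth of Kleene stars in the expression e."""
    if isinstance(e, Sym):
        return 0
    if isinstance(e, (Cat, Alt)):
        return max(star_height(e.left), star_height(e.right))
    return 1 + star_height(e.inner)   # a star adds one nesting level

# Eggan's e_2 = (a1* a2* a3)* has star height 2:
e2 = Star(Cat(Cat(Star(Sym("a1")), Star(Sym("a2"))), Sym("a3")))
print(star_height(e2))  # -> 2
```

Note that this computes the star height of one expression; the star height of a language is the minimum over all expressions denoting it, which is exactly what makes the problem below hard.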
The first few languages found by Eggan are described below, by giving a regular expression for each language: - -\begin{alignat}{2} - -e_1 &= a_1^* \\ - -e_2 &= \left(a_1^*a_2^*a_3\right)^*\\ - -e_3 &= \left(\left(a_1^*a_2^*a_3\right)^*\left(a_4^*a_5^*a_6\right)^*a_7\right)^*\\ - -e_4 &= \left( - -\left(\left(a_1^*a_2^*a_3\right)^*\left(a_4^*a_5^*a_6\right)^*a_7\right)^* - -\left(\left(a_8^*a_9^*a_{10}\right)^*\left(a_{11}^*a_{12}^*a_{13}\right)^*a_{14}\right)^* - -a_{15}\right)^* - -\end{alignat} - - - -The construction principle for these expressions is that expression $e_{n+1}$ is obtained by concatenating two copies of $e_n$, appropriately renaming the letters of the second copy using fresh alphabet symbols, concatenating the result with another fresh alphabet symbol, and then surrounding the resulting expression with a Kleene star. The remaining, more difficult, part is to prove that for $e_n$ there is no equivalent regular expression of star height less than n; a proof is given in Eggan's paper. - -However, Eggan's examples use a large alphabet, of size $2^n-1$ for the language with star height n. He thus asked whether we can also find examples over binary alphabets. This was proved to be true shortly afterwards by Dejean and Schützenberger. - -Their examples can be described by an inductively defined family of regular expressions over the binary alphabet $\{a,b\}$ as follows (cf. Salomaa): - -\begin{alignat}{2} - -e_1 & = (ab)^* \\ - -e_2 & = \left(aa(ab)^*bb(ab)^*\right)^* \\ - -e_3 & = \left(aaaa \left(aa(ab)^*bb(ab)^*\right)^* bbbb \left(aa(ab)^*bb(ab)^*\right)^*\right)^* \\ - - & \cdots \\ - -e_{n+1} & = (\underbrace{a\cdots a}_{2^n} \cdot e_n \cdot \underbrace{b\cdots b}_{2^n} \cdot e_n )^* - -\end{alignat} - - - -Again, a rigorous proof is needed for the fact that $e_n$ does not admit an equivalent regular expression of lower star height; proofs were given by Dejean and Schützenberger and by Salomaa. - -In contrast, the second question turned out to be much more difficult, and the question became a famous open problem in formal language theory for over two decades. For years, there was little progress. The pure-group languages were the first interesting family of regular languages for which the star height problem was proved to be decidable. But the general problem remained open for more than 25 years until it was settled by Hashiguchi, who in 1988 published an algorithm to determine the star height of any regular language. The algorithm was not at all practical, being of non-elementary complexity. To illustrate the immense resource consumption of that algorithm, Lombardy and Sakarovitch (2002) give some actual numbers: - -{{Quotation|[The procedure described by Hashiguchi] leads to computations that are by far impossible, even for very small examples. For instance, if L is accepted by a 4 state automaton of loop complexity 3 (and with a small 10 element transition monoid), then a very low minorant of the number of languages to be tested with L for equality is: -$$ -\left(10^{10^{10}}\right)^{\left(10^{10^{10}}\right)^{\left(10^{10^{10}}\right)}}. -$$|S. Lombardy and J. Sakarovitch|Star Height of Reversible Languages and Universal Automata|LATIN 2002}} - -Notice that the number $10^{10^{10}}$ alone has 10 billion zeros when written down in decimal notation, and is already by far larger than the number of atoms in the observable universe. - -A much more efficient algorithm than Hashiguchi's procedure was devised by Kirsten in 2005.
This algorithm runs, for a given nondeterministic finite automaton as input, within double-exponential space. Yet the resource requirements of this algorithm still greatly exceed the margins of what is considered practically feasible. - -This algorithm was optimized and generalized to trees by Colcombet and Löding in 2008, as part of the theory of regular cost functions. - -It was implemented in 2017 in the tool suite Stamina. diff --git a/wiki/wikipedia/4082.txt b/wiki/wikipedia/4082.txt deleted file mode 100644 index aca361348d1f6ec507ee470e07025f33351a8853..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4082.txt +++ /dev/null @@ -1,27 +0,0 @@ -The Spanning Tree Protocol (STP) is a network protocol that builds a loop-free logical topology for Ethernet networks. The basic function of STP is to prevent bridge loops and the broadcast radiation that results from them. Spanning tree also allows a network design to include backup links providing fault tolerance if an active link fails. - -As the name suggests, STP creates a spanning tree that characterizes the relationship of nodes within a network of connected layer-2 bridges, and disables those links that are not part of the spanning tree, leaving a single active path between any two network nodes. STP is based on an algorithm that was invented by Radia Perlman while she was working for Digital Equipment Corporation. - -In 2001, the IEEE introduced Rapid Spanning Tree Protocol (RSTP) as 802.1w. RSTP provides significantly faster recovery in response to network changes or failures, introducing new convergence behaviors and bridge port roles to do this. RSTP was designed to be backwards-compatible with standard STP. - -STP was originally standardized as IEEE 802.1D but the functionality of spanning tree (802.1D), rapid spanning tree (802.1w), and multiple spanning tree (802.1s) has since been incorporated into IEEE 802.1Q-2014. - -The need for the Spanning Tree Protocol (STP) arose because switches in local area networks (LANs) are often interconnected using redundant links to improve resilience should one connection fail. - -After STP-enabled switches in a LAN have elected the root bridge, all non-root bridges assign one of their ports as root port. This is either the port that connects the switch to the root bridge, or, if there are several paths, the port with the preferred path as calculated by the root bridge. Because not all switches are directly connected to the root bridge, they communicate amongst each other using STP Bridge Protocol Data Units (BPDUs). Each switch adds the cost of its own path to the cost received from the neighboring switches to determine the total cost of a given path to the root bridge. Once the cost of all possible paths to the root bridge has been added up, each switch assigns a port as root port which connects to the path with the lowest cost, or highest bandwidth, that will eventually lead to the root bridge. - -All switch ports in the LAN where STP is enabled are categorized. Vendors have produced many implementations of the protocol, incorporating various extensions. The original Perlman-inspired Spanning Tree Protocol, called DEC STP, is not a standard and differs from the IEEE version in message format as well as timer settings. Some bridges implement both the IEEE and the DEC versions of the Spanning Tree Protocol, but their interworking can create issues for the network administrator. - -Different implementations of a standard are not guaranteed to interoperate, due for example to differences in default timer settings. A toy version of the root-bridge and root-port computation described above is sketched below.
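The election and port selection amount to a shortest-path computation over link costs. This sketch uses an invented topology and invented costs; a real network computes the result in a distributed fashion from BPDUs and breaks ties on bridge IDs, rather than running Dijkstra centrally as we do here.

```python
import heapq

links = {                    # bridge ID -> {neighbor ID: cost of that link}
    1: {2: 4, 3: 4},
    2: {1: 4, 3: 4, 4: 19},
    3: {1: 4, 2: 4, 4: 19},
    4: {2: 19, 3: 19},
}

root = min(links)            # lowest bridge ID (priority + MAC) wins the election
dist, root_port = {root: 0}, {}
heap = [(0, root)]
while heap:                  # Dijkstra over advertised path costs
    d, node = heapq.heappop(heap)
    if d > dist[node]:
        continue
    for nbr, cost in links[node].items():
        if d + cost < dist.get(nbr, float("inf")):
            dist[nbr] = d + cost
            root_port[nbr] = node    # the root port faces the cheapest path
            heapq.heappush(heap, (d + cost, nbr))

print(root)        # -> 1
print(root_port)   # -> {2: 1, 3: 1, 4: 2}: bridge 4 reaches the root through 2
```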
The IEEE encourages vendors to provide a Protocol Implementation Conformance Statement, declaring which capabilities and options have been implemented. However, in Ethernet switched environments where multiple Virtual LANs (VLANs) exist, it is often desirable to create multiple spanning trees so that traffic on different VLANs uses different links. - -Before the IEEE published a Spanning Tree Protocol standard for VLANs, a number of vendors who sold VLAN capable switches developed their own Spanning Tree Protocol versions that were VLAN capable. Cisco developed, implemented and published the Per-VLAN Spanning Tree (PVST) proprietary protocol using its own proprietary Inter-Switch Link (ISL) for VLAN encapsulation, and PVST+ which uses 802.1Q VLAN encapsulation. Both standards implement a separate spanning tree for every VLAN. Cisco switches now commonly implement PVST+ and can only implement Spanning Trees for VLANs if the other switches in the LAN implement the same VLAN STP protocol. HP provides PVST and PVST+ compatibility in some of its network switches. - -The bridge ID (BID) is a field inside a BPDU packet. It is eight bytes in length. The first two bytes are the bridge priority, an unsigned integer in the range 0–65,535. The last six bytes are a MAC address supplied by the bridge. Prior to IEEE 802.1D-2004, the first two bytes gave a 16-bit bridge priority. Since IEEE 802.1D-2004, the first four bits are a configurable priority, and the last twelve bits carry the bridge system ID extension. In the case of MST, the bridge system ID extension carries the MSTP instance number. Some vendors set the bridge system ID extension to carry a VLAN ID, allowing a different spanning tree per VLAN, such as Cisco's PVST. - -Spanning tree is an older protocol with a longer default hold-down time that governs convergence of the protocol state. Improper use or implementation can contribute to network disruptions. Deliberately blocking links is no longer widely accepted as a proper high-availability design. Modern networks can make use of all connected links by use of protocols that inhibit, control or suppress the natural behavior of logical or physical topology loops. - -Newer, more robust protocols include the TRILL (Transparent Interconnection of Lots of Links) protocol, also created by Dr. Perlman. - -Switch virtualization techniques like HPE IRF, Aruba VSF and Cisco VSS combine multiple switches into a single logical entity. A multi-chassis link aggregation group works like a normal LACP trunk, only distributed through multiple switches. Conversely, partitioning technologies compartmentalize a single physical chassis into multiple logical entities. - -On the edge of the network, loop-detection is configured to prevent accidental loops by users. diff --git a/wiki/wikipedia/4083.txt b/wiki/wikipedia/4083.txt deleted file mode 100644 index 7fcb0df77e3fee84cd06d6015bf2ae8b50c20371..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4083.txt +++ /dev/null @@ -1,57 +0,0 @@ -The exterior angle theorem is Proposition 1.16 in Euclid's Elements, which states that the measure of an exterior angle of a triangle is greater than either of the measures of the remote interior angles. This is a fundamental result in absolute geometry because its proof does not depend upon the parallel postulate.
- -In several high school treatments of geometry, the term "exterior angle theorem" has been applied to a different result, namely the portion of Proposition 1.32 which states that the measure of an exterior angle of a triangle is equal to the sum of the measures of the remote interior angles. This result, which depends upon Euclid's parallel postulate, will be referred to as the "High school exterior angle theorem" (HSEAT) to distinguish it from Euclid's exterior angle theorem. - -Some authors refer to the "High school exterior angle theorem" as the strong form of the exterior angle theorem and "Euclid's exterior angle theorem" as the weak form. - -A triangle has three corners, called vertices. The sides of a triangle (line segments) that come together at a vertex form two angles (four angles if you consider the sides of the triangle to be lines instead of line segments). Only one of these angles contains the third side of the triangle in its interior, and this angle is called an interior angle of the triangle. In the picture below, the angles ∠ABC, ∠BCA and ∠CAB are the three interior angles of the triangle. An exterior angle is formed by extending one of the sides of the triangle; the angle between the extended side and the other side is the exterior angle. In the picture, angle ∠ACD is an exterior angle.
[Figure: triangle ABC with side BC extended to D, forming the exterior angle ∠ACD at C]
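Both the Euclidean inequality and the HSEAT equality are easy to check numerically on a concrete triangle. The coordinates below are our own invented example; D lies on ray BC beyond C, as in the figure.

```python
import math

def angle(p, q, r):
    """Angle at vertex q (in degrees) between rays q->p and q->r."""
    v1 = (p[0] - q[0], p[1] - q[1])
    v2 = (r[0] - q[0], r[1] - q[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(cos))

A, B, C = (0.0, 3.0), (-2.0, 0.0), (4.0, 0.0)
D = (6.0, 0.0)                                   # on ray BC, beyond C

ext = angle(A, C, D)                             # exterior angle ∠ACD
print(ext > angle(B, A, C), ext > angle(A, B, C))          # True True (Euclid)
print(math.isclose(ext, angle(B, A, C) + angle(A, B, C)))  # True (HSEAT)
```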
- -The proof of Proposition 1.16 given by Euclid is often cited as one place where Euclid gives a flawed proof. - -Euclid proves the exterior angle theorem by: - -* constructing the midpoint E of segment AC, - -* drawing the ray BE, - -* constructing the point F on ray BE so that E is (also) the midpoint of segment BF, - -* drawing the segment FC. - -By congruent triangles we can conclude that ∠ BAC = ∠ ECF. Since ∠ ECF is smaller than ∠ ECD and ∠ ECD = ∠ ACD, it follows that ∠ BAC is smaller than ∠ ACD; the same can be done for the angle ∠ CBA by bisecting BC. - -The flaw lies in the assumption that a point (F, above) lies "inside" the angle ∠ ACD. No reason is given for this assertion, but the accompanying diagram makes it look like a true statement. When a complete set of axioms for Euclidean geometry is used (see Foundations of geometry) this assertion of Euclid can be proved. - -The exterior angle theorem is valid neither in spherical geometry nor in the related elliptic geometry. Consider a spherical triangle one of whose vertices is the North Pole and the other two lie on the equator. The sides of the triangle emanating from the North Pole (great circles of the sphere) both meet the equator at right angles, so this triangle has an exterior angle that is equal to a remote interior angle. The other interior angle (at the North Pole) can be made larger than 90°, further emphasizing the failure of this statement. However, since Euclid's exterior angle theorem is a theorem in absolute geometry it is automatically valid in hyperbolic geometry. - -The high school exterior angle theorem (HSEAT) says that the size of an exterior angle at a vertex of a triangle equals the sum of the sizes of the interior angles at the other two vertices of the triangle (remote interior angles). So, in the picture, the size of angle ACD equals the size of angle ABC plus the size of angle CAB. - -The HSEAT is logically equivalent to the Euclidean statement that the sum of angles of a triangle is 180°. If it is known that the sum of the measures of the angles in a triangle is 180°, then the HSEAT is proved as follows: -$$ -b + d = 180^\circ -$$ -$$ -b + d = b + a + c -$$ -$$ -\therefore d = a + c. -$$ - -On the other hand, if the HSEAT is taken as a true statement then: -$$ - d = a + c -$$ -$$ - b + d = 180^\circ -$$ -$$ - \therefore b + a + c = 180^\circ, -$$ - -proving that the sum of the measures of the angles of a triangle is 180°. - -The Euclidean proof of the HSEAT (and simultaneously the result on the sum of the angles of a triangle) starts by constructing the line parallel to side AB passing through point C and then using the properties of corresponding angles and alternate interior angles of parallel lines to get the conclusion as in the illustration. - -The HSEAT can be extremely useful when trying to calculate the measures of unknown angles in a triangle. diff --git a/wiki/wikipedia/4084.txt b/wiki/wikipedia/4084.txt deleted file mode 100644 index 1460094bd4e48d18e855d337d9f37f197589bf33..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4084.txt +++ /dev/null @@ -1,35 +0,0 @@ -In mathematics, the Malgrange preparation theorem is an analogue of the Weierstrass preparation theorem for smooth functions. It was conjectured by René Thom and proved by Bernard Malgrange.
- -Suppose that f(t,x) is a smooth complex function of $t\in\mathbb{R}$ and $x\in\mathbb{R}^n$ near the origin, and let k be the smallest integer such that -$$ -f(0,0)=0, {\partial f\over \partial t}(0,0)=0, \dots, {\partial^{k-1} f\over \partial t^{k-1}}(0,0)=0, {\partial^{k} f\over \partial t^{k}}(0,0)\ne0. -$$ - -Then one form of the preparation theorem states that near the origin f can be written as the product of a smooth function c that is nonzero at the origin and a smooth function that as a function of t is a polynomial of degree k. In other words, -$$ -f(t,x) = c(t,x)\left(t^k+a_{k-1}(x)t^{k-1}+\cdots+a_0(x) \right) -$$ - -where the functions c and $a_j$ are smooth and c is nonzero at the origin. - -A second form of the theorem, occasionally called the Mather division theorem, is a sort of "division with remainder" theorem: it says that if f and k satisfy the conditions above and g is a smooth function near the origin, then we can write -$$ -g=qf+r -$$ - -where q and r are smooth, and as a function of t, r is a polynomial of degree less than k. This means that -$$ -r(t,x)=\sum_{0\le j<k} t^j r_j(x), -$$ - -where the functions $r_j$ are smooth. - -The two forms of the theorem easily imply each other: the first form is the special case of the "division with remainder" form where g is $t^k$, and the division with remainder form follows from the first form of the theorem as we may assume that f as a function of t is a polynomial of degree k. - -If the functions f and g are real, then the functions c, a, q, and r can also be taken to be real. In the case of the Weierstrass preparation theorem these functions are uniquely determined by f and g, but uniqueness no longer holds for the Malgrange preparation theorem. - -The Malgrange preparation theorem can be deduced from the Weierstrass preparation theorem. The obvious way of doing this does not work: although smooth functions have a formal power series expansion at the origin, and the Weierstrass preparation theorem applies to formal power series, the formal power series will not usually converge to smooth functions near the origin. Instead one can use the idea of decomposing a smooth function as a sum of analytic functions by applying a partition of unity to its Fourier transform. For a proof along these lines, see the standard references. - -The Malgrange preparation theorem can be restated as a theorem about modules over rings of smooth, real-valued germs. If X is a manifold, with p∈X, let $C_p(X)$ denote the ring of real-valued germs of smooth functions at p on X. Let $M_p(X)$ denote the unique maximal ideal of $C_p(X)$, consisting of germs which vanish at p. Let A be a $C_p(X)$-module, and let $f\colon X \to Y$ be a smooth function between manifolds. Let q = f(p). f induces a ring homomorphism $f^*\colon C_q(Y) \to C_p(X)$ by composition on the right with f. Thus we can view A as a $C_q(Y)$-module. Then the Malgrange preparation theorem says that if A is a finitely-generated $C_p(X)$-module, then A is a finitely-generated $C_q(Y)$-module if and only if $A/M_q(Y)A$ is a finite-dimensional real vector space. diff --git a/wiki/wikipedia/4085.txt b/wiki/wikipedia/4085.txt deleted file mode 100644 index 3c8f7b212f45f5caf77e63b434097d7d6f415fe7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4085.txt +++ /dev/null @@ -1,16 +0,0 @@ -In mathematics - specifically, in large deviations theory - the tilted large deviation principle is a result that allows one to generate a new large deviation principle from an old one by "tilting", i.e. integration against an exponential functional. It can be seen as an alternative formulation of Varadhan's lemma.
- -Let X be a Polish space (i.e., a separable, completely metrizable topological space), and let (με)ε>0 be a family of probability measures on X that satisfies the large deviation principle with rate function I : X → [0, +∞]. Let F : X → R be a continuous function that is bounded from above. For each Borel set S ⊆ X, let -$$ -J_{\varepsilon} (S) = \int_{S} e^{F(x) / \varepsilon} \mathrm{d} \mu_{\varepsilon} (x) -$$ - -and define a new family of probability measures (νε)ε>0 on X by -$$ -\nu_{\varepsilon} (S) = \frac{J_{\varepsilon} (S)}{J_{\varepsilon} (X)}. -$$ - -Then (νε)ε>0 satisfies the large deviation principle on X with rate function IF : X → [0, +∞] given by -$$ -I^{F} (x) = \sup_{y \in X} \big[ F(y) - I(y) \big] - \big[ F(x) - I(x) \big]. -$$ diff --git a/wiki/wikipedia/4086.txt b/wiki/wikipedia/4086.txt deleted file mode 100644 index aca361348d1f6ec507ee470e07025f33351a8853..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4086.txt +++ /dev/null @@ -1 +0,0 @@ -In algebraic topology, the Hattori–Stong theorem, proved by Hattori and by Stong, gives an isomorphism between the stable homotopy of a Thom spectrum and the primitive elements of its K-homology. diff --git a/wiki/wikipedia/4087.txt b/wiki/wikipedia/4087.txt deleted file mode 100644 index a3e7dbe16242f02a7ad6db05b61fbaf2a6a57d52..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4087.txt +++ /dev/null @@ -1,7 +0,0 @@ -In geometry, ellipsoid packing is the problem of arranging identical ellipsoids throughout three-dimensional space to fill the maximum possible fraction of space. - -The currently densest known packing structure for ellipsoids has two candidates, - -a simple monoclinic crystal with two ellipsoids of different orientations and - -a square-triangle crystal containing 24 ellipsoids in the fundamental cell. The former monoclinic structure can reach a maximum packing fraction around $0.77073$ for ellipsoids with maximal aspect ratios larger than $\sqrt{3}$. The packing fraction of the square-triangle crystal exceeds that of the monoclinic crystal for specific biaxial ellipsoids, like ellipsoids with ratios of the axes $\alpha:\sqrt{\alpha}:1$ and $\alpha \in (1.365,1.5625)$. Any ellipsoids with aspect ratios larger than one can be packed more densely than spheres. diff --git a/wiki/wikipedia/4088.txt b/wiki/wikipedia/4088.txt deleted file mode 100644 index 8ad5a46c8c8e3adedfcb22878444b7c9202f279a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4088.txt +++ /dev/null @@ -1,12 +0,0 @@ -In algebraic graph theory, Babai's problem is a question proposed in 1979 by László Babai. - -Let $G$ be a finite group, let $\operatorname{Irr}(G)$ be the set of all irreducible characters of $G$, let $\Gamma=\operatorname{Cay}(G,S)$ be the Cayley graph (or directed Cayley graph) corresponding to a generating subset $S$ of $G\setminus \{1\}$, and let $\nu$ be a positive integer. Is the set -$$ -M_\nu^S=\left\{\sum_{s\in S} \chi(s) \mathrel{\Big|} \chi\in \operatorname{Irr}(G), \chi(1)=\nu \right\} -$$ - -an invariant of the graph $\Gamma$? In other words, does $\operatorname{Cay}(G,S)\cong \operatorname{Cay}(G,S')$ imply that $M_\nu^S=M_\nu^{S'}$? - -A finite group $G$ is called a BI-group (Babai Invariant group) if, whenever $\operatorname{Cay}(G,S)\cong \operatorname{Cay}(G,T)$ for inverse-closed subsets $S$ and $T$ of $G\setminus \{1\}$, it follows that $M_\nu^S=M_\nu^T$ for all positive integers $\nu$. - -Which finite groups are BI-groups?
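For intuition about the objects $M_\nu^S$, here is a toy computation of our own for the cyclic group $\mathbb{Z}_n$ (a choice not made in the text). All irreducible characters of $\mathbb{Z}_n$ are 1-dimensional, $\chi_k(s) = e^{2\pi i k s/n}$, so only $\nu = 1$ occurs.

```python
import cmath

def M1(n, S):
    """The set { sum_{s in S} chi_k(s) } over the n characters chi_k of Z_n."""
    sums = set()
    for k in range(n):
        z = sum(cmath.exp(2j * cmath.pi * k * s / n) for s in S)
        sums.add((round(z.real, 9), round(z.imag, 9)))  # round away float noise
    return sums

n = 8
S = {1, n - 1}      # inverse-closed; Cay(Z_8, S) is the 8-cycle
T = {3, n - 3}      # inverse-closed; 3 generates Z_8, so another 8-cycle
print(M1(n, S) == M1(n, T))   # -> True: equal M_1 for isomorphic Cayley graphs
```

For abelian groups these character sums are exactly the eigenvalues of the Cayley graph, which is why the two isomorphic 8-cycles above necessarily agree; the interest of the problem lies in non-abelian groups and higher-dimensional characters.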
diff --git a/wiki/wikipedia/4089.txt b/wiki/wikipedia/4089.txt deleted file mode 100644 index 07eeb43f8e6caed884f632dc6e7deeba6fba840f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4089.txt +++ /dev/null @@ -1,48 +0,0 @@ -In propositional logic, modus ponens (; MP), also known as modus ponendo ponens (Latin for "method of putting by placing") or implication elimination or affirming the antecedent, is a deductive argument form and rule of inference. It can be summarized as "P implies Q. P is true. Therefore Q must also be true." - -Modus ponens is closely related to another valid form of argument, modus tollens. Both have apparently similar but invalid forms such as affirming the consequent, denying the antecedent, and evidence of absence. Constructive dilemma is the disjunctive version of modus ponens. Hypothetical syllogism is closely related to modus ponens and sometimes thought of as "double modus ponens." - -The history of modus ponens goes back to antiquity. The first to explicitly describe the argument form modus ponens was Theophrastus. It, along with modus tollens, is one of the standard patterns of inference that can be applied to derive chains of conclusions that lead to the desired goal. - -The form of a modus ponens argument resembles a syllogism, with two premises and a conclusion: - -If P, then Q. - -P. - -Therefore, Q. - -The first premise is a conditional ("if–then") claim, namely that P implies Q. The second premise is an assertion that P, the antecedent of the conditional claim, is the case. From these two premises it can be logically concluded that Q, the consequent of the conditional claim, must be the case as well. - -An example of an argument that fits the form modus ponens: - -If today is Tuesday, then John will go to work. - -Today is Tuesday. - -Therefore, John will go to work. - -This argument is valid, but this has no bearing on whether any of the statements in the argument are actually true; for modus ponens to be a sound argument, the premises must be true for any true instances of the conclusion. An argument can be valid but nonetheless unsound if one or more premises are false; if an argument is valid and all the premises are true, then the argument is sound. For example, John might be going to work on Wednesday. In this case, the reasoning for John's going to work (because it is Wednesday) is unsound. The argument is only sound on Tuesdays (when John goes to work), but valid on every day of the week. A propositional argument using modus ponens is said to be deductive. - -In single-conclusion sequent calculi, modus ponens is the Cut rule. The cut-elimination theorem for a calculus says that every proof involving Cut can be transformed (generally, by a constructive method) into a proof without Cut, and hence that Cut is admissible. - -The Curry–Howard correspondence between proofs and programs relates modus ponens to function application: if f is a function of type P → Q and x is of type P, then f x is of type Q. - -In artificial intelligence, modus ponens is often called forward chaining. - -The modus ponens rule may be written in sequent notation as -$$ -P \to Q, P \vdash Q -$$ - -where P, Q and P → Q are statements (or propositions) in a formal language and ⊢ is a metalogical symbol meaning that Q is a syntactic consequence of P and P → Q in some logical system. - -The validity of modus ponens in classical two-valued logic can be clearly demonstrated by use of a truth table. 
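The check is mechanical; the following sketch of ours enumerates the truth table (with the row p = q = true first, matching the reference to "the first" line below) and verifies that the conclusion holds whenever both premises do.

```python
from itertools import product

implies = lambda p, q: (not p) or q      # the material conditional

print("p      q      p -> q")
for p, q in product([True, False], repeat=2):
    print(f"{p!s:6} {q!s:6} {implies(p, q)!s}")

# Keep only the rows where both premises (p -> q and p) hold:
rows = [(p, q) for p, q in product([True, False], repeat=2) if implies(p, q) and p]
print(rows)                    # -> [(True, True)]: only the first row survives
assert all(q for _, q in rows)  # and q is true there, so modus ponens is valid
```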
- -In instances of modus ponens we assume as premises that p → q is true and p is true. Only one line of the truth table—the first—satisfies these two conditions (p and p → q). On this line, q is also true. Therefore, whenever p → q is true and p is true, q must also be true. - -While modus ponens is one of the most commonly used argument forms in logic, it must not be mistaken for a logical law; rather, it is one of the accepted mechanisms for the construction of deductive proofs that includes the "rule of definition" and the "rule of substitution". Modus ponens allows one to eliminate a conditional statement from a logical proof or argument (the antecedents) and thereby not carry these antecedents forward in an ever-lengthening string of symbols; for this reason modus ponens is sometimes called the rule of detachment or the law of detachment. Enderton, for example, observes that "modus ponens can produce shorter formulas from longer ones", and Russell observes that "the process of the inference cannot be reduced to symbols. Its sole record is the occurrence of ⊦q [the consequent] ... an inference is the dropping of a true premise; it is the dissolution of an implication". - -A justification for the trust in inference is the belief that "if the two former assertions [the antecedents] are not in error, the final assertion [the consequent] is not in error". Yet modus ponens can seem to misfire for conditionals with normative content: from the premise "if Doe murders his mother, he ought to do so gently", it would appear to follow that if Doe is in fact gently murdering his mother, then by modus ponens he is doing exactly what he should, unconditionally, be doing. Here again, modus ponens failure is not a popular diagnosis but is sometimes argued for. - -The fallacy of affirming the consequent is a common misinterpretation of the modus ponens. diff --git a/wiki/wikipedia/409.txt b/wiki/wikipedia/409.txt deleted file mode 100644 index e9fd0f4783611ea31aca806c194b930e4605b098..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/409.txt +++ /dev/null @@ -1,72 +0,0 @@ -In the mathematics of coding theory, the Plotkin bound, named after Morris Plotkin, is a limit (or bound) on the maximum possible number of codewords in binary codes of given length n and given minimum distance d. - -A code is considered "binary" if the codewords use symbols from the binary alphabet $\{0,1\}$. In particular, if all codewords have a fixed length n, - -then the binary code has length n. Equivalently, in this case the codewords can be considered elements of vector space $\mathbb{F}_2^n$ over the finite field $\mathbb{F}_2$. Let $d$ be the minimum - -distance of $C$, i.e. -$$ -d = \min_{x,y \in C, x \neq y} d(x,y) -$$ - -where $d(x,y)$ is the Hamming distance between $x$ and $y$. The expression $A_{2}(n,d)$ represents the maximum number of possible codewords in a binary code of length $n$ and minimum distance $d$. The Plotkin bound places a limit on this expression. - -Theorem (Plotkin bound): - -i) If $d$ is even and $ 2d > n $, then -$$ - A_{2}(n,d) \leq 2 \left\lfloor\frac{d}{2d-n}\right\rfloor. -$$ - -ii) If $d$ is odd and $ 2d+1 > n $, then -$$ - A_{2}(n,d) \leq 2 \left\lfloor\frac{d+1}{2d+1-n}\right\rfloor. -$$ - -iii) If $d$ is even, then -$$ - A_{2}(2d,d) \leq 4d. -$$ - -iv) If $d$ is odd, then -$$ - A_{2}(2d+1,d) \leq 4d+4 -$$ - -where $ \left\lfloor ~ \right\rfloor$ denotes the floor function. - -== Proof of case i)== - -Let $d(x,y)$ be the Hamming distance of $x$ and $y$, and $M$ be the number of elements in $C$ (thus, $M$ is equal to $A_{2}(n,d)$).
The bound is proved by bounding the quantity $\sum_{(x,y) \in C^2, x\neq y} d(x,y)$ in two different ways. - -On the one hand, there are $M$ choices for $x$ and for each such choice, there are $M-1$ choices for $y$. Since by definition $d(x,y) \geq d$ for all $x$ and $y$ ($ x\neq y $), it follows that -$$ - \sum_{(x,y) \in C^2, x\neq y} d(x,y) \geq M(M-1) d. -$$ - -On the other hand, let $A$ be an $M \times n$ matrix whose rows are the elements of $C$. Let $s_i$ be the number of zeros contained in the $i$'th column of $A$. This means that the $i$'th column contains $M-s_i$ ones. Each choice of a zero and a one in the same column contributes exactly $2$ (because $d(x,y)=d(y,x)$) to the sum $\sum_{(x,y) \in C^2, x \neq y} d(x,y)$ and therefore -$$ - \sum_{(x,y) \in C^2, x \neq y} d(x,y) = \sum_{i=1}^n 2s_i (M-s_i). -$$ - -The quantity on the right is maximized if and only if $s_i = M/2$ holds for all $i$ (at this point of the proof we ignore the fact that the $s_i$ are integers), and then -$$ - \sum_{(x,y) \in C^2, x \neq y} d(x,y) \leq \frac{1}{2} n M^2. -$$ - -Combining the upper and lower bounds for $ \sum_{(x,y) \in C^2, x \neq y} d(x,y) $ that we have just derived, -$$ - M(M-1) d \leq \frac{1}{2} n M^2 -$$ - -which given that $2d>n$ is equivalent to -$$ - M \leq \frac{2d}{2d-n}. -$$ - -If $M$ is even, write $M = 2k$; then $k \leq \frac{d}{2d-n}$ and, since $k$ is an integer, -$$ - M = 2k \leq 2 \left\lfloor \frac{d}{2d-n} \right\rfloor. -$$ - -If $M$ is odd, the sharper estimate $s_i(M-s_i) \leq (M^2-1)/4$ in the argument above gives $M \leq \frac{2d}{2d-n}-1$, which again implies the displayed bound. This completes the proof of the bound. diff --git a/wiki/wikipedia/4090.txt b/wiki/wikipedia/4090.txt deleted file mode 100644 index 3e86def62d848719a374bb670e5c8c6eb805d375..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4090.txt +++ /dev/null @@ -1,50 +0,0 @@ -In mathematics, the Sato–Tate conjecture is a statistical statement about the family of elliptic curves Ep over the finite field with p elements, with p a prime number, obtained from an elliptic curve E over the rational number field, by the process of reduction modulo a prime for almost all p. If Np denotes the number of points on Ep defined over the field with p elements, the conjecture concerns the distribution of the second-order term for Np. That is, by Hasse's theorem on elliptic curves we have -$$ -N_p/p = 1 + \mathrm{O}(1/\!\sqrt{p})\ -$$ - -as p → ∞, and the point of the conjecture is to predict how the O-term varies. - -The original conjecture and its generalization to all totally real fields was proved by Laurent Clozel, Michael Harris, Nicholas Shepherd-Barron, and Richard Taylor under mild assumptions in 2008, and completed by Thomas Barnet-Lamb, David Geraghty, Harris, and Taylor in 2011. Several generalizations to other algebraic varieties and fields are open. - -Let E be an elliptic curve defined over the rational numbers without complex multiplication. Define θp as the solution to the equation -$$ - p+1-N_p=2\sqrt{p}\cos\theta_p ~~ (0\leq \theta_p \leq \pi). -$$ - -Then, for every two real numbers $ \alpha $ and $ \beta $ for which $ 0\leq \alpha < \beta \leq \pi, $ -$$ -\lim_{N\to\infty}\frac{\#\{p\leq N:\alpha\leq \theta_p \leq \beta\}}{\#\{p\leq N\}}=\frac{2}{\pi} \int_\alpha^\beta \sin^2 \theta d\theta. -$$ - -By Hasse's theorem on elliptic curves, the ratio -$$ -\frac{((p + 1)-N_p)}{2\sqrt{p}}=:\frac{a_p}{2\sqrt{p}} -$$ - -is between -1 and 1. Thus it can be expressed as cos θ for an angle θ; in geometric terms there are two eigenvalues accounting for the remainder and with the denominator as given they are complex conjugate and of absolute value 1.
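The angles θp defined above are easy to compute experimentally. The following sketch brute-force counts points on the illustrative curve $y^2 = x^3 + x + 1$ (the curve and the small primes of good reduction used are assumptions chosen for the demonstration, not data from the article):

from math import acos, sqrt

def theta_p(p, a=1, b=1):
    # N_p = 1 (point at infinity) + number of affine solutions of y^2 = x^3 + a*x + b over F_p
    sq = {}
    for y in range(p):
        sq[y * y % p] = sq.get(y * y % p, 0) + 1
    Np = 1 + sum(sq.get((x ** 3 + a * x + b) % p, 0) for x in range(p))
    ap = p + 1 - Np                  # the second-order term
    return acos(ap / (2 * sqrt(p)))  # well-defined since |a_p| <= 2*sqrt(p) by Hasse

for p in [5, 7, 11, 13, 17, 19, 23, 29]:
    print(p, round(theta_p(p), 3))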
The Sato–Tate conjecture, when E doesn't have complex multiplication, states that the probability measure of θ is proportional to -$$ -\sin^2 \theta d\theta. -$$ - -This is due to Mikio Sato and John Tate (independently, and around 1960, published somewhat later). - -In 2008, Clozel, Harris, Shepherd-Barron, and Taylor published a proof of the Sato–Tate conjecture for elliptic curves over totally real fields satisfying a certain condition: of having multiplicative reduction at some prime, in a series of three joint papers. - -Further results are conditional on improved forms of the Arthur–Selberg trace formula. Harris has a conditional proof of a result for the product of two elliptic curves (not isogenous) following from such a hypothetical trace formula. In 2011, Barnet-Lamb, Geraghty, Harris, and Taylor proved a generalized version of the Sato–Tate conjecture for an arbitrary non-CM holomorphic modular form of weight greater than or equal to two, by improving the potential modularity results of previous papers. The prior issues involved with the trace formula were solved by Michael Harris, and Sug Woo Shin. - -In 2015, Richard Taylor was awarded the Breakthrough Prize in Mathematics "for numerous breakthrough results in (...) the Sato–Tate conjecture." - -There are generalisations, involving the distribution of Frobenius elements in Galois groups involved in the Galois representations on étale cohomology. In particular there is a conjectural theory for curves of genus n > 1. - -Under the random matrix model developed by Nick Katz and Peter Sarnak, there is a conjectural correspondence between (unitarized) characteristic polynomials of Frobenius elements and conjugacy classes in the compact Lie group USp(2n) = Sp(n). The Haar measure on USp(2n) then gives the conjectured distribution, and the classical case is USp(2) = SU(2). - -There are also more refined statements. The Lang–Trotter conjecture (1976) of Serge Lang and Hale Trotter states the asymptotic number of primes p with a given value of ap, the trace of Frobenius that appears in the formula. For the typical case (no complex multiplication, trace ≠ 0) their formula states that the number of p up to X is asymptotically -$$ -c \sqrt{X}/ \log X\ -$$ - -with a specified constant c. Neal Koblitz (1988) provided detailed conjectures for the case of a prime number q of points on Ep, motivated by elliptic curve cryptography. - -In 1999, Chantal David and Francesco Pappalardi proved an averaged version of the Lang–Trotter conjecture. diff --git a/wiki/wikipedia/4091.txt b/wiki/wikipedia/4091.txt deleted file mode 100644 index bac21e91d4e19e4d19e609bb34a9b9d81ab92ea2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4091.txt +++ /dev/null @@ -1,54 +0,0 @@ -In mathematics, the nth-term test for divergence is a simple test for the divergence of an infinite series: - -* If $\lim_{n \to \infty} a_n \neq 0$ or if the limit does not exist, then $\sum_{n=1}^\infty a_n$ diverges. - -Many authors do not name this test or give it a shorter name. - -When testing if a series converges or diverges, this test is often checked first due to its ease of use. - -In the case of p-adic analysis the term test is a necessary and sufficient condition for convergence due to the non-archimedean triangle inequality. - -Unlike stronger convergence tests, the term test cannot prove by itself that a series converges. 
In particular, the converse to the test is not true; instead all one can say is: - -* If $\lim_{n \to \infty} a_n = 0,$ then $\sum_{n=1}^\infty a_n$ may or may not converge. In other words, if $\lim_{n \to \infty} a_n = 0,$ the test is inconclusive. - -The harmonic series is a classic example of a divergent series whose terms limit to zero. The more general class of p-series, -$$ -\sum_{n=1}^\infty \frac{1}{n^p}, -$$ - -exemplifies the possible results of the test: - -* If p ≤ 0, then the term test identifies the series as divergent. - -* If 0 < p ≤ 1, then the term test is inconclusive, but the series is divergent by the integral test for convergence. - -* If 1 < p, then the term test is inconclusive, but the series is convergent, again by the integral test for convergence. - -The test is typically proven in contrapositive form: - -* If $\sum_{n=1}^\infty a_n$ converges, then $\lim_{n \to \infty} a_n = 0.$ - -If sn are the partial sums of the series, then the assumption that the series - -converges means that -$$ -\lim_{n\to\infty} s_n = L -$$ - -for some number s. Then -$$ -\lim_{n\to\infty} a_n = \lim_{n\to\infty}(s_n-s_{n-1}) = \lim_{n\to\infty} s_n - \lim_{n\to\infty} s_{n-1} = L-L = 0. -$$ - -The assumption that the series converges means that it passes Cauchy's convergence test: for every $\varepsilon>0$ there is a number N such that -$$ -\left|a_{n+1}+a_{n+2}+\cdots+a_{n+p}\right|<\varepsilon -$$ - -holds for all n > N and p ≥ 1. Setting p = 1 recovers the definition of the statement -$$ -\lim_{n\to\infty} a_n = 0. -$$ - -The simplest version of the term test applies to infinite series of real numbers. The above two proofs, by invoking the Cauchy criterion or the linearity of the limit, also work in any other normed vector space (or any (additively written) abelian group). diff --git a/wiki/wikipedia/4092.txt b/wiki/wikipedia/4092.txt deleted file mode 100644 index 956c24eee8254c53789a710a864840407ecc6a32..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4092.txt +++ /dev/null @@ -1,144 +0,0 @@ -In mathematics, the Yoneda lemma is arguably the most important result in category theory. It is an abstract result on functors of the type morphisms into a fixed object. It is a vast generalisation of Cayley's theorem from group theory (viewing a group as a miniature category with just one object and only isomorphisms). It allows the embedding of any locally small category into a category of functors (contravariant set-valued functors) defined on that category. It also clarifies how the embedded category, of representable functors and their natural transformations, relates to the other objects in the larger functor category. It is an important tool that underlies several modern developments in algebraic geometry and representation theory. It is named after Nobuo Yoneda. - -The Yoneda lemma suggests that instead of studying the locally small category $ \mathcal{C} $, one should study the category of all functors of $ \mathcal{C} $ into $ \mathbf{Set} $ (the category of sets with functions as morphisms). $ \mathbf{Set} $ is a category we think we understand well, and a functor of $ \mathcal{C} $ into $ \mathbf{Set} $ can be seen as a "representation" of $ \mathcal{C} $ in terms of known structures. The original category $ \mathcal{C} $ is contained in this functor category, but new objects appear in the functor category, which were absent and "hidden" in $ \mathcal{C} $. Treating these new objects just like the old ones often unifies and simplifies the theory. 
- -This approach is akin to (and in fact generalizes) the common method of studying a ring by investigating the modules over that ring. The ring takes the place of the category $ \mathcal{C} $, and the category of modules over the ring is a category of functors defined on $ \mathcal{C} $. - -Yoneda's lemma concerns functors from a fixed category $ \mathcal{C} $ to the category of sets, $ \mathbf{Set} $. If $ \mathcal{C} $ is a locally small category (i.e. the hom-sets are actual sets and not proper classes), then each object $ A $ of $ \mathcal{C} $ gives rise to a natural functor to $ \mathbf{Set} $ called a hom-functor. This functor is denoted: -$$ -h_A = \mathrm{Hom}(A,-) -$$. - -The (covariant) hom-functor $ h_A $ sends $X$ to the set of morphisms $ \mathrm{Hom}(A,X) $ and sends a morphism $ f \colon X \to Y $ (where $X$ and $Y$ are objects in $ \mathcal{C} $) to the morphism $f \circ -$ (composition with $f$ on the left) that sends a morphism $g$ in $ \mathrm{Hom}(A,X) $ to the morphism $f \circ g$ in $ \mathrm{Hom}(A,Y)$. That is, -$$ -h_A(f) = \mathrm{Hom}(A,f) \mbox{, or} -$$ -$$ -h_A(f)(g) = f \circ g -$$. - -Let $ F $ be an arbitrary functor from $ \mathcal{C} $ to $ \mathbf{Set} $. Then Yoneda's lemma says the following: for each object $ A $ of $ \mathcal{C} $, the natural transformations $\mathrm{Nat}(h_A,F)\equiv \mathrm{Hom}(\mathrm{Hom}(A,-),F)$ from $ h_A $ to $ F $ are in one-to-one correspondence with the elements of $ F(A) $. That is, -$$ -\mathrm{Hom}(\mathrm{Hom}(A,-),F) \cong F(A). -$$ - -Moreover, this isomorphism is natural in $A$ and $F$ when both sides are regarded as functors from $ \mathcal{C} \times \mathbf{Set}^\mathcal{C} $ to $ \mathbf{Set} $. - -Here the notation $ \mathbf{Set}^\mathcal{C} $ denotes the category of functors from $ \mathcal{C} $ to $ \mathbf{Set} $. - -Given a natural transformation $\Phi$ from $ h_A $ to $ F $, the corresponding element of $ F(A) $ is $ u = \Phi_A(\mathrm{id}_A)$ (recall that $ \Phi_A : \mathrm{Hom}(A,A) \to F(A) $, so this expression is well-defined and sends a morphism from $ A $ to $ A $ to an element in $ F(A) $); and given an element $ u $ of $ F(A) $, the corresponding natural transformation is given by $\Phi_X(f) = F(f)(u)$ for each morphism $ f \colon A \to X $. - -There is a contravariant version of Yoneda's lemma, which concerns contravariant functors from $ \mathcal{C} $ to $ \mathbf{Set} $. This version involves the contravariant hom-functor -$$ -h^A = \mathrm{Hom}(-, A), -$$ - -which sends $ X $ to the hom-set $ \mathrm{Hom}(X,A) $. Given an arbitrary contravariant functor $ G $ from $ \mathcal{C} $ to $ \mathbf{Set} $, Yoneda's lemma asserts that -$$ -\mathrm{Nat}(h^A,G) \cong G(A). -$$ - -The use of $ h_A $ for the covariant hom-functor and $h^A$ for the contravariant hom-functor is not completely standard. Many texts and articles either use the opposite convention or completely unrelated symbols for these two functors. However, most modern algebraic geometry texts starting with Alexander Grothendieck's foundational EGA use the convention in this article. - -The mnemonic "falling into something" can be helpful in remembering that $ h_A $ is the covariant hom-functor. When the letter $ A $ is falling (i.e. a subscript), $ h_A $ assigns to an object $ X $ the morphisms from $ A $ into $ X $.
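For finite sets the correspondence can be checked by exhaustive computation. The following sketch (an illustration in Python; the particular finite sets are arbitrary choices) takes $F$ to be another hom-functor $h_B$, as in the special case discussed below, and verifies that every $u \in \mathrm{Hom}(B,A)$ induces a natural family $\Phi_X(g) = g \circ u$ with $\Phi_A(\mathrm{id}_A) = u$:

from itertools import product

def functions(dom, cod):
    # all set maps dom -> cod, encoded as dicts
    return [dict(zip(dom, imgs)) for imgs in product(cod, repeat=len(dom))]

def compose(g, f):
    # the composite "g after f"
    return {x: g[f[x]] for x in f}

A, B, X, Y = (0, 1), (0, 1, 2), (0, 1, 2), (0, 1)
id_A = {a: a for a in A}

for u in functions(B, A):                # u ranges over F(A) = Hom(B, A)
    Phi = lambda g, u=u: compose(g, u)   # component Phi_X(g) = g . u of the induced transformation
    assert Phi(id_A) == u                # u is recovered as Phi_A(id_A)
    for g in functions(A, X):
        for f in functions(X, Y):        # naturality square for every f : X -> Y
            assert Phi(compose(f, g)) == compose(f, Phi(g))
print("Yoneda correspondence checked on a toy fragment of Set")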
- -Proof: Since $\Phi $ is a natural transformation, for every morphism $ f \colon A \to X $ the naturality square commutes: $\Phi_X \circ h_A(f) = F(f) \circ \Phi_A$. Evaluating both sides at $\mathrm{id}_A$ shows that the natural transformation $ \Phi $ is completely determined by $\Phi_A(\mathrm{id}_A)=u$ since for each morphism $ f \colon A \to X $ one has -$$ -\Phi_X(f) = (Ff)u -$$. - -Moreover, any element $ u \in F(A) $ defines a natural transformation in this way. The proof in the contravariant case is completely analogous. - -An important special case of Yoneda's lemma is when the functor $F$ from $ \mathcal{C} $ to $ \mathbf{Set} $ is another hom-functor $ h_B $. In this case, the covariant version of Yoneda's lemma states that -$$ -\mathrm{Nat}(h_A,h_B) \cong \mathrm{Hom}(B,A). -$$ - -That is, natural transformations between hom-functors are in one-to-one correspondence with morphisms (in the reverse direction) between the associated objects. Given a morphism $ f \colon B \to A $ the associated natural transformation is denoted $ \mathrm{Hom}(f,-)$. - -Mapping each object $ A $ in $ \mathcal{C} $ to its associated hom-functor $ h_A = \mathrm{Hom}(A,-) $ and each morphism $ f \colon B \to A $ to the corresponding natural transformation $ \mathrm{Hom}(f,-) $ determines a contravariant functor $h_{\bullet}$ from $ \mathcal{C} $ to $ \mathbf{Set}^\mathcal{C} $, the functor category of all (covariant) functors from $ \mathcal{C} $ to $ \mathbf{Set} $. One can interpret $h_{\bullet}$ as a covariant functor: -$$ -h_{\bullet}\colon \mathcal{C}^{\text{op}} \to \mathbf{Set}^\mathcal{C}. -$$ - -The meaning of Yoneda's lemma in this setting is that the functor $ h_{\bullet} $ is fully faithful, and therefore gives an embedding of $ \mathcal{C}^{\mathrm{op}} $ in the category of functors to $ \mathbf{Set} $. The collection of all functors $ \{h_A \mid A \in \mathcal{C}\}$ is a full subcategory of $ \mathbf{Set}^{\mathcal{C}} $. Therefore, the Yoneda embedding implies that the category $ \mathcal{C}^{\mathrm{op}} $ is isomorphic to the category $\{h_A \mid A \in \mathcal{C} \}$. - -The contravariant version of Yoneda's lemma states that -$$ -\mathrm{Nat}(h^A,h^B) \cong \mathrm{Hom}(A,B). -$$ - -Therefore, $ h^{\bullet} $ gives rise to a covariant functor from $ \mathcal{C} $ to the category of contravariant functors to $ \mathbf{Set} $: -$$ -h^{\bullet}\colon \mathcal{C} \to \mathbf{Set}^{\mathcal{C}^{\mathrm{op}}}. -$$ - -Yoneda's lemma then states that any locally small category $ \mathcal{C} $ can be embedded in the category of contravariant functors from $ \mathcal{C} $ to $ \mathbf{Set} $ via $h^{\bullet}$. This is called the Yoneda embedding. - -The Yoneda embedding is sometimes denoted by よ, the Hiragana kana Yo. - -The Yoneda embedding essentially states that for every (locally small) category, objects in that category can be represented by presheaves, in a full and faithful manner. That is, -$$ -\mathrm{Nat}(h^A,P) \cong P(A) -$$ - -for a presheaf P. Many common categories are in fact categories of presheaves and, on closer inspection, categories of sheaves; since such examples are commonly topological in nature, they can be seen to be topoi in general. The Yoneda lemma provides a point of leverage by which the topological structure of a category can be studied and understood. - -Given two categories $\mathbf{C}$ and $\mathbf{D}$ with two functors $F, G : \mathbf{C} \to \mathbf{D}$, natural transformations between them can be written as the following end:
-$$ -\mathrm{Nat}(F, G) = \int_{c \in \mathbf{C}} \mathrm{Hom}_\mathbf{D}(Fc, Gc) -$$ - -For any functors $K \colon \mathbf{C}^{op} \to \mathbf{Sets}$ and $H \colon \mathbf{C} \to \mathbf{Sets}$ the following formulas are all formulations of the Yoneda lemma. - - - -K \cong \int^{c \in \mathbf{C}} Kc \times \mathrm{Hom}_\mathbf{C}(-,c), - -\qquad - -K \cong \int_{c \in \mathbf{C}} (Kc)^{\mathrm{Hom}_\mathbf{C}(c,-)}, - - - - - -H \cong \int^{c \in \mathbf{C}} Hc \times \mathrm{Hom}_\mathbf{C}(c,-), - -\qquad - -H \cong \int_{c \in \mathbf{C}} (Hc)^{\mathrm{Hom}_\mathbf{C}(-,c)}. - - - -A preadditive category is a category where the morphism sets form abelian groups and the composition of morphisms is bilinear; examples are categories of abelian groups or modules. In a preadditive category, there is both a "multiplication" and an "addition" of morphisms, which is why preadditive categories are viewed as generalizations of rings. Rings are preadditive categories with one object. - -The Yoneda lemma remains true for preadditive categories if we choose as our extension the category of additive contravariant functors from the original category into the category of abelian groups; these are functors which are compatible with the addition of morphisms and should be thought of as forming a module category over the original category. The Yoneda lemma then yields the natural procedure to enlarge a preadditive category so that the enlarged version remains preadditive - in fact, the enlarged version is an abelian category, a much more powerful condition. In the case of a ring $R$, the extended category is the category of all right modules over $R$, and the statement of the Yoneda lemma reduces to the well-known isomorphism -$$ - M \cong \mathrm{Hom}_R(R,M) -$$ for all right modules $M$ over $R$. - -As stated above, the Yoneda lemma may be considered as a vast generalization of Cayley's theorem from group theory. To see this, let $ \mathcal{C} $ be a category with a single object $*$ such that every morphism is an isomorphism (i.e. a groupoid with one object). Then $G=\mathrm{Hom}_{\mathcal{C}}(*,*)$ forms a group under the operation of composition, and any group can be realized as a category in this way. - -In this context, a covariant functor $ \mathcal{C} \to \mathbf{Set} $ consists of a set $X$ and a group homomorphism $G\to\mathrm{Perm}(X)$, where $\mathrm{Perm}(X)$ is the group of permutations of $X$; in other words, $X$ is a G-set. A natural transformation between such functors is the same thing as an equivariant map between $G$-sets: a set function $\alpha \colon X \to Y$ with the property that $\alpha(g\cdot x)=g\cdot\alpha(x)$ for all $g$ in $G$ and $x$ in $X$. (On the left side of this equation, the $ \cdot $ denotes the action of $G$ on $X$, and on the right side the action on $Y$.) - -Now the covariant hom-functor $\mathrm{Hom}_{\mathcal{C}}(*,-)$ corresponds to the action of $G$ on itself by left-multiplication (the contravariant version corresponds to right-multiplication). The Yoneda lemma with $F=\mathrm{Hom}_{\mathcal{C}}(*,-)$ states that -$$ -\mathrm{Nat}(\mathrm{Hom}_{\mathcal{C}}(*,-),\mathrm{Hom}_{\mathcal{C}}(*,-)) \cong \mathrm{Hom}_{\mathcal{C}}(*,*) -$$, - -that is, the equivariant maps from this $G$-set to itself are in bijection with $G$. But it is easy to see that (1) these maps form a group under composition, which is a subgroup of $\mathrm{Perm}(G)$, and (2) the function which gives the bijection is a group homomorphism. 
(Going in the reverse direction, it associates to every $g$ in $G$ the equivariant map of right-multiplication by $g$.) Thus $G$ is isomorphic to a subgroup of $\mathrm{Perm}(G)$, which is the statement of Cayley's theorem. - -Yoshiki Kinoshita stated in 1996 that the term "Yoneda lemma" was coined by Saunders Mac Lane following an interview he had with Yoneda. diff --git a/wiki/wikipedia/4093.txt b/wiki/wikipedia/4093.txt deleted file mode 100644 index dbc7617b70589d9be8543f3ae05fd2d189906285..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4093.txt +++ /dev/null @@ -1,69 +0,0 @@ -In probability theory, Proebsting's paradox is an argument that appears to show that the Kelly criterion can lead to ruin. Although it can be resolved mathematically, it raises some interesting issues about the practical application of Kelly, especially in investing. It was named and first discussed by Edward O. Thorp in 2008. The paradox was named for Todd Proebsting, its creator. - -If a bet is equally likely to win or lose, and pays b times the stake for a win, the Kelly bet is: -$$ - f^{*} = \frac{b - 1}{2b} \! -$$ - -times wealth. For example, if a 50/50 bet pays 2 to 1, Kelly says to bet 25% of wealth. If a 50/50 bet pays 5 to 1, Kelly says to bet 40% of wealth. - -Now suppose a gambler is offered 2 to 1 payout and bets 25%. What should he do if the payout on new bets changes to 5 to 1? He should choose f* to maximize: -$$ - 0.5 \ln(1.5 + 5f^{*}) + 0.5 \ln(0.75 - f^{*}) \! -$$ - -because if he wins he will have 1.5 (the 0.5 from winning the 25% bet at 2 to 1 odds) plus 5f*; and if he loses he must pay 0.25 from the first bet, and f* from the second. Taking the derivative with respect to f* and setting it to zero gives: -$$ - 5(0.75 - f^{*}) = 1.5 + 5f^{*} \! -$$ - -which can be rewritten: -$$ - 2.25 = 10f^{*} \! -$$ - -So f* = 0.225. - -The paradox is that the total bet, 0.25 + 0.225 = 0.475, is larger than the 0.4 Kelly bet if the 5 to 1 odds are offered from the beginning. It is counterintuitive that you bet more when some of the bet is at unfavorable odds. Todd Proebsting emailed Ed Thorp asking about this. - -Ed Thorp realized the idea could be extended to give the Kelly bettor a nonzero probability of being ruined. He showed that if a gambler is offered 2 to 1 odds, then 4 to 1, then 8 to 1 and so on (2n to 1 for n = 1 to infinity) Kelly says to bet: -$$ - \frac{3^{n - 1}}{4^n} \! -$$ - -each time. The sum of all these bets is 1. So a Kelly gambler has a 50% chance of losing his entire wealth. - -In general, if a bettor makes the Kelly bet on a 50/50 proposition with a payout of b1, and then is offered b2, he will bet a total of: -$$ - f^{*} = \frac{b_2 - 1}{2b_2} + \frac{b_1 - 1}{4}\left(\frac{1}{f_1} - \frac{1}{f_2}\right). \! -$$ - -The first term is what the bettor would bet if offered b2 initially. The second term is positive if f2 > f1, meaning that if the payout improves, the Kelly bettor will bet more than he would if just offered the second payout, while if the payout gets worse he will bet less than he would if offered only the second payout. - -Many bets have the feature that payoffs and probabilities can change before the outcome is determined. In sports betting for example, the line may change several times before the event is held, and news may come out (such as an injury or weather forecast) that changes the probability of an outcome. In investing, a stock originally bought at $20 per share might be available now at $10 or $30 or any other price. 
Some sports bettors try to make income from anticipating line changes rather than predicting event outcomes. Some traders concentrate on possible short-term price movements of a security rather than its long-term fundamental prospects. - -A classic investing example is a trader who has exposure limits, say he is not allowed to have more than $1 million at risk in any one stock. That doesn't mean he cannot lose more than $1 million. If he buys $1 million of the stock at $20 and it goes to $10, he can buy another $500,000. If it then goes to $5, he can buy another $500,000. If it goes to zero , he can lose an infinite amount of money, despite never having more than $1 million at risk. - -There is no paradox. Kelly's criterion is to maximise expected rate of growth; only under restricted conditions does it correspond to maximising the log. One easy way to dismiss the paradox is to note that Kelly assumes that probabilities do not change. - -A Kelly bettor who knows odds might change could factor this into a more complex Kelly bet. For example suppose a Kelly bettor is given a one-time opportunity to bet a 50/50 proposition at odds of 2 to 1. He knows there is a 50% chance that a second one-time opportunity will be offered at 5 to 1. Now he should maximize: -$$ - 0.25 \ln(1 + 2f_1) + 0.25 \ln(1 - f_1) + 0.25 \ln(1 + 2f_1 + 5f_2) + 0.25 \ln(1 - f_1 - f_2) \! -$$ - -with respect to both f1 and f2. The answer turns out to be bet zero at 2 to 1, and wait for the chance of betting at 5 to 1, in which case you bet 40% of wealth. If the probability of being offered 5 to 1 odds is less than 50%, some amount between zero and 25% will be bet at 2 to 1. If the probability of being offered 5 to 1 odds is more than 50%, the Kelly bettor will actually make a negative bet at 2 to 1 odds (that is, bet on the 50/50 outcome with payout of 1/2 if he wins and paying 1 if he loses). In either case, his bet at 5 to 1 odds, if the opportunity is offered, is 40% minus 0.7 times his 2 to 1 bet. - -What the paradox says, essentially, is that if a Kelly bettor has incorrect beliefs about what future bets may be offered, he can make suboptimal choices, and even go broke. The Kelly criterion is supposed to do better than any essentially different strategy in the long run and have zero chance of ruin, as long as the bettor knows the probabilities and payouts. - -More light on the issues was shed by an independent consideration of the problem by Aaron Brown, also communicated to Ed Thorp by email. In this formulation, the assumption is the bettor first sells back the initial bet, then makes a new bet at the second payout. In this case his total bet is: -$$ - f^{*} = \frac{b_2 - 1}{2b_2} - \frac{b_1 - 1}{4}\left(\frac{1}{f_1} - \frac{1}{f_2}\right)\frac{f_2-1}{f_2+1} \! -$$ - -which looks very similar to the formula above for the Proebsting formulation, except that the sign is reversed on the second term and it is multiplied by an additional term. - -For example, given the original example of a 2 to 1 payout followed by a 5 to 1 payout, in this formulation the bettor first bets 25% of wealth at 2 to 1. When the 5 to 1 payout is offered, the bettor can sell back the original bet for a loss of 0.125. His 2 to 1 bet pays 0.5 if he wins and costs 0.25 if he loses. At the new 5 to 1 payout, he could get a bet that pays 0.625 if he wins and costs 0.125 if he loses, this is 0.125 better than his original bet in both states. Therefore his original bet now has a value of -0.125. 
Given his new wealth level of 0.875, his 40% bet (the Kelly amount for the 5 to 1 payout) is 0.35. - -The two formulations are equivalent. In the original formulation, the bettor has 0.25 bet at 2 to 1 and 0.225 bet at 5 to 1. If he wins, he gets 2.625 and if he loses he has 0.525. In the second formulation, the bettor has 0.875 and 0.35 bet at 5 to 1. If he wins, he gets 2.625 and if he loses he has 0.525. - -The second formulation makes clear that the change in behavior results from the mark-to-market loss the investor experiences when the new payout is offered. This is a natural way to think in finance, less natural to a gambler. In this interpretation, the infinite series of doubling payouts does not ruin the Kelly bettor by enticing him to overbet, it extracts all his wealth through changes beyond his control. diff --git a/wiki/wikipedia/4094.txt b/wiki/wikipedia/4094.txt deleted file mode 100644 index e4c6ea3e29c26f48ef6af48761d12da8f0bcb8d7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4094.txt +++ /dev/null @@ -1,73 +0,0 @@ -In computer science, the longest common substring problem is to find the longest string that is a substring of two or more strings. The problem may have multiple solutions. - -Applications include data deduplication and plagiarism detection. - -The longest common substring of the strings "ABABC", "BABCA" and "ABCBA" is string "ABC" of length 3. Other common substrings are "A", "AB", "B", "BA", "BC" and "C". - -ABABC - -||| - -BABCA - -||| - -ABCBA - -Given two strings, $S$ of length $m$ and $T$ of length $n$, find the longest string which is substring of both $S$ and $T$. - -A generalization is the k-common substring problem. Given the set of strings $S = \{S_1, ..., S_K\}$, where $|S_i|=n_i$ and $\Sigma n_i = N$. Find for each $2 \leq k \leq K$, the longest strings which occur as substrings of at least $k$ strings. - -One can find the lengths and starting positions of the longest common substrings of $S$ and $T$ in $\Theta$$(n+m)$ time with the help of a generalized suffix tree. A faster algorithm can be achieved in the word RAM model of computation if the size $\sigma$ of the input alphabet is in $2^{o(\sqrt{\log(n+m)})}$. In particular, this algorithm runs in $O((n+m)\log\sigma/\sqrt{\log (n+m)})$ time using $O((n+m)\log\sigma/\log (n+m))$ space. Solving the problem by dynamic programming costs $\Theta(nm)$. The solutions to the generalized problem take $\Theta(n_1 + ... + n_K)$ space and $\Theta(n_1$·...·$n_K)$ time with dynamic programming and take $\Theta(N * K)$ time with generalized suffix tree. - -The longest common substrings of a set of strings can be found by building a generalized suffix tree for the strings, and then finding the deepest internal nodes which have leaf nodes from all the strings in the subtree below it. The figure on the right is the suffix tree for the strings "ABAB", "BABA" and "ABBA", padded with unique string terminators, to become "ABAB$0", "BABA$1" and "ABBA$2". The nodes representing "A", "B", "AB" and "BA" all have descendant leaves from all of the strings, numbered 0, 1 and 2. - -Building the suffix tree takes $\Theta(N)$ time (if the size of the alphabet is constant). If the tree is traversed from the bottom up with a bit vector telling which strings are seen below each node, the k-common substring problem can be solved in $\Theta(NK)$ time. If the suffix tree is prepared for constant time lowest common ancestor retrieval, it can be solved in $\Theta(N)$ time. 
- -The following routine (the article's pseudocode, rendered here as runnable Python) finds the set of longest common substrings between two strings with dynamic programming:

def LCSubstr(S, T):
    r, n = len(S), len(T)
    # L[i][j] holds the length of the longest common suffix of S[:i] and T[:j]
    L = [[0] * (n + 1) for _ in range(r + 1)]
    z = 0          # length of the longest common substring found so far
    ret = set()    # the longest common substrings, all of length z
    for i in range(1, r + 1):
        for j in range(1, n + 1):
            if S[i - 1] == T[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
                if L[i][j] > z:
                    z = L[i][j]
                    ret = {S[i - z:i]}
                elif L[i][j] == z:
                    ret.add(S[i - z:i])
    return ret

This algorithm runs in $O(n r)$ time. The array L stores the length of the longest common suffix of the prefixes S[1..i] and T[1..j] ending at positions S[i] and T[j], respectively. The variable z is used to hold the length of the longest common substring found so far. The set ret is used to hold the set of strings which are of length z. The set ret can be saved efficiently by just storing the index i, which marks the last character of the longest common substring (of size z), instead of S[i-z+1..i]. Thus all the longest common substrings would be, for each i in ret, S[(ret[i]-z)..(ret[i])]. - -The following tricks can be used to reduce the memory usage of an implementation: - -* Keep only the last and current row of the DP table to save memory ($O(\min(r, n))$ instead of $O(n r)$) - -* Store only non-zero values in the rows. This can be done using hash-tables instead of arrays. This is useful for large alphabets. diff --git a/wiki/wikipedia/4095.txt b/wiki/wikipedia/4095.txt deleted file mode 100644 index 58c6db5b9bcbc70e4866a2d2847fe12ebb260c10..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4095.txt +++ /dev/null @@ -1,132 +0,0 @@ -In propositional logic, modus tollens (MT), also known as modus tollendo tollens (Latin for "method of removing by taking away") and denying the consequent, is a deductive argument form and a rule of inference. Modus tollens takes the form of "If P, then Q. Not Q. Therefore, not P." It is an application of the general truth that if a statement is true, then so is its contrapositive. The form shows that inference from "P implies Q" to "the negation of Q implies the negation of P" is a valid argument. - -The history of the inference rule modus tollens goes back to antiquity. The first to explicitly describe the argument form modus tollens was Theophrastus. - -Modus tollens is closely related to modus ponens. There are two similar, but invalid, forms of argument: affirming the consequent and denying the antecedent. See also contraposition and proof by contrapositive. - -The form of a modus tollens argument resembles a syllogism, with two premises and a conclusion: - -If P, then Q. - -Not Q. - -Therefore, not P. - -The first premise is a conditional ("if-then") claim, such as P implies Q. The second premise is an assertion that Q, the consequent of the conditional claim, is not the case. From these two premises it can be logically concluded that P, the antecedent of the conditional claim, is also not the case. - -For example: - -If the dog detects an intruder, the dog will bark. - -The dog did not bark. - -Therefore, no intruder was detected by the dog. - -Supposing that the premises are both true (the dog will bark if it detects an intruder, and does indeed not bark), it follows that no intruder has been detected. This is a valid argument since it is not possible for the conclusion to be false if the premises are true.
(It is conceivable that there may have been an intruder that the dog did not detect, but that does not invalidate the argument; the first premise is "if the dog detects an intruder". The thing of importance is that the dog detects or does not detect an intruder, not whether there is one.) - -Another example: - -If I am the axe murderer, then I can use an axe. - -I cannot use an axe. - -Therefore, I am not the axe murderer. - -Another example: - -If Rex is a chicken, then he is a bird. - -Rex is not a bird. - -Therefore, Rex is not a chicken. - -Every use of modus tollens can be converted to a use of modus ponens and one use of transposition to the premise which is a material implication. For example: - -If P, then Q. (premise – material implication) - -If not Q, then not P. (derived by transposition) - -Not Q . (premise) - -Therefore, not P. (derived by modus ponens) - -Likewise, every use of modus ponens can be converted to a use of modus tollens and transposition. - -The modus tollens rule can be stated formally as: -$$ -\frac{P \to Q, \neg Q}{\therefore \neg P} -$$ - -where $P \to Q$ stands for the statement "P implies Q". $\neg Q$ stands for "it is not the case that Q" (or in brief "not Q"). Then, whenever "$P \to Q$" and "$\neg Q$" each appear by themselves as a line of a proof, then "$\neg P$" can validly be placed on a subsequent line. - -The modus tollens rule may be written in sequent notation: -$$ -P\to Q, \neg Q \vdash \neg P -$$ - -where $\vdash$ is a metalogical symbol meaning that $\neg P$ is a syntactic consequence of $P \to Q$ and $\neg Q$ in some logical system; - -or as the statement of a functional tautology or theorem of propositional logic: -$$ -((P \to Q) \land \neg Q) \to \neg P -$$ - -where $P$ and $Q$ are propositions expressed in some formal system; - -or including assumptions: -$$ -\frac{\Gamma \vdash P\to Q ~~~ \Gamma \vdash \neg Q}{\Gamma \vdash \neg P} -$$ - -though since the rule does not change the set of assumptions, this is not strictly necessary. - -More complex rewritings involving modus tollens are often seen, for instance in set theory: -$$ -P\subseteq Q -$$ -$$ -x\notin Q -$$ -$$ -\therefore x\notin P -$$ - -("P is a subset of Q. x is not in Q. Therefore, x is not in P.") - -Also in first-order predicate logic: -$$ -\forall x:~P(x) \to Q(x) -$$ -$$ -\neg Q(y) -$$ -$$ -\therefore ~\neg P(y) -$$ - -("For all x if x is P then x is Q. y is not Q. Therefore, y is not P.") - -Strictly speaking these are not instances of modus tollens, but they may be derived from modus tollens using a few extra steps. - -The validity of modus tollens can be clearly demonstrated through a truth table. - -In instances of modus tollens we assume as premises that p → q is true and q is false. There is only one line of the truth table—the fourth line—which satisfies these two conditions. In this line, p is false. Therefore, in every instance in which p → q is true and q is false, p must also be false. - -Modus tollens represents an instance of the law of total probability combined with Bayes' theorem expressed as: -$$ -\Pr(P)=\Pr(P\mid Q)\Pr(Q)+\Pr(P\mid \lnot Q)\Pr(\lnot Q) -$$, - -where the conditionals $\Pr(P\mid Q)$ and $\Pr(P\mid \lnot Q)$ are obtained with (the extended form of) Bayes' theorem expressed as: -$$ -\Pr(P\mid Q) = \frac{\Pr(Q \mid P)a(P)}{\Pr(Q\mid P)a(P)+\Pr(Q\mid \lnot P)a(\lnot P)} -$$ and $\Pr(P\mid \lnot Q) = \frac{\Pr(\lnot Q \mid P)a(P)}{\Pr(\lnot Q\mid P)a(P)+\Pr(\lnot Q\mid \lnot P)a(\lnot P)}$. 
- -In the equations above $\Pr(Q)$ denotes the probability of $Q$, and $a(P)$ denotes the base rate (aka. prior probability) of $P$. The conditional probability $\Pr(Q\mid P)$ generalizes the logical statement $P \to Q$, i.e. in addition to assigning TRUE or FALSE we can also assign any probability to the statement. Assume that $\Pr(Q) = 1$ is equivalent to $Q$ being TRUE, and that $\Pr(Q) = 0$ is equivalent to $Q$ being FALSE. It is then easy to see that $\Pr(P) = 0$ when $\Pr(Q\mid P) = 1$ and $\Pr(Q) = 0$. This is because $\Pr(\lnot Q\mid P) = 1 - \Pr(Q\mid P) = 0$ so that $\Pr(P\mid \lnot Q) = 0$ in the last equation. Therefore, the product terms in the first equation always have a zero factor so that $\Pr(P) = 0$ which is equivalent to $P$ being FALSE. Hence, the law of total probability combined with Bayes' theorem represents a generalization of modus tollens. - -Modus tollens represents an instance of the abduction operator in subjective logic expressed as: -$$ -\omega^{A}_{P\tilde{\|}Q}= (\omega^{A}_{Q|P},\omega^{A}_{Q|\lnot P})\widetilde{\circledcirc} (a_{P},\omega^{A}_{Q}) -$$, - -where $\omega^{A}_{Q}$ denotes the subjective opinion about $Q$, and $(\omega^{A}_{Q|P},\omega^{A}_{Q|\lnot P})$ denotes a pair of binomial conditional opinions, as expressed by source $A$. The parameter $a_{P}$ denotes the base rate (aka. the prior probability) of $P$. The abduced marginal opinion on $P$ is denoted $\omega^{A}_{P\tilde{\|}Q}$. The conditional opinion $\omega^{A}_{Q|P}$ generalizes the logical statement $P \to Q$, i.e. in addition to assigning TRUE or FALSE the source $A$ can assign any subjective opinion to the statement. The case where $\omega^{A}_{Q}$ is an absolute TRUE opinion is equivalent to source $A$ saying that $Q$ is TRUE, and the case where $\omega^{A}_{Q}$ is an absolute FALSE opinion is equivalent to source $A$ saying that $Q$ is FALSE. The abduction operator $\widetilde{\circledcirc}$ of subjective logic produces an absolute FALSE abduced opinion $\omega^{A}_{P\widetilde{\|}Q}$ when the conditional opinion $\omega^{A}_{Q|P}$ is absolute TRUE and the consequent opinion $\omega^{A}_{Q}$ is absolute FALSE. Hence, subjective logic abduction represents a generalization of both modus tollens and of the Law of total probability combined with Bayes' theorem. diff --git a/wiki/wikipedia/4096.txt b/wiki/wikipedia/4096.txt deleted file mode 100644 index 698bbe68b6334fda139d19179b2d9d9ed54e1853..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4096.txt +++ /dev/null @@ -1,23 +0,0 @@ -In Diophantine approximation, the Oppenheim conjecture concerns representations of numbers by real quadratic forms in several variables. It was formulated in 1929 by Alexander Oppenheim and later the conjectured property was further strengthened by Harold Davenport and Oppenheim. Initial research on this problem took the number n of variables to be large, and applied a version of the Hardy-Littlewood circle method. The definitive work of Margulis, settling the conjecture in the affirmative, used methods arising from ergodic theory and the study of discrete subgroups of semisimple Lie groups. - -Meyer's theorem states that an indefinite integral quadratic form Q in n variables, n ≥ 5, nontrivially represents zero, i.e. there exists a non-zero vector x with integer components such that Q(x) = 0. The Oppenheim conjecture can be viewed as an analogue of this statement for forms Q that are not multiples of a rational form. 
It states that in this case, the set of values of Q on integer vectors is a dense subset of the real line. - -Several versions of the conjecture were formulated by Oppenheim and Harold Davenport. - -* Let Q be a real nondegenerate indefinite quadratic form in n variables. Suppose that n ≥ 3 and Q is not a multiple of a form with rational coefficients. Then for any ε > 0 there exists a non-zero vector x with integer components such that |Q(x)| < ε. - -For n ≥ 5 this was conjectured by Oppenheim in 1929; the stronger version is due to Davenport in 1946. - -* Let Q and n have the same meaning as before. Then for any ε > 0 there exists a non-zero vector x with integer components such that 0 < |Q(x)| < ε. - -This was conjectured by Oppenheim in 1953 and proved by Birch, Davenport, and Ridout for n at least 21, and by Davenport and Heilbronn for diagonal forms in five variables. Other partial results are due to Oppenheim (for forms in four variables, but under the strong restriction that the form represents zero over Z), Watson, Iwaniec, and Baker–Schlickewey. Early work used analytic number theory and the reduction theory of quadratic forms. - -The conjecture was proved in 1987 by Margulis in complete generality using methods of ergodic theory. The geometry of actions of certain unipotent subgroups of the orthogonal group on the homogeneous space of lattices in R3 plays a decisive role in this approach. It is sufficient to establish the case n = 3. The idea to derive the Oppenheim conjecture from a statement about homogeneous group actions is usually attributed to M. S. Raghunathan, who observed in the 1970s that the conjecture for n = 3 is equivalent to the following property of the space of lattices: - -* Any relatively compact orbit of SO(2, 1) in SL(3, R)/SL(3, Z) is compact. - -However, Margulis later remarked that in an implicit form this equivalence occurred already in a 1955 paper of Cassels and H. P. F. Swinnerton-Dyer, albeit in a different language. - -Shortly after Margulis's breakthrough, the proof was simplified and generalized by Dani and Margulis. Quantitative versions of the Oppenheim conjecture were later proved by Eskin–Margulis–Mozes. Borel and Prasad established some S-arithmetic analogues. The study of the properties of unipotent and quasiunipotent flows on homogeneous spaces remains an active area of research, with applications to further questions in the theory of Diophantine approximation. diff --git a/wiki/wikipedia/4097.txt b/wiki/wikipedia/4097.txt deleted file mode 100644 index 7b92e3e46420373e7e2b6b2d82dae73200a3135a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4097.txt +++ /dev/null @@ -1,3 +0,0 @@ -Chaitin's algorithm is a bottom-up, graph coloring register allocation algorithm that uses cost/degree as its spill metric. It is named after its designer, Gregory Chaitin. Chaitin's algorithm was the first register allocation algorithm that made use of coloring of the interference graph for both register allocation and spilling. - -Chaitin's algorithm was presented at the 1982 SIGPLAN Symposium on Compiler Construction, and published in the symposium proceedings. It was an extension of an earlier 1981 paper on the use of graph coloring for register allocation. Chaitin's algorithm formed the basis of a large section of research into register allocators.
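The mechanics can be sketched compactly. The following is an illustrative Python rendering of Chaitin-style colouring under the stated cost/degree spill metric; it is a schematic reconstruction, not the published algorithm (which also performs coalescing and spill-code rewriting), and it assumes a symmetric interference relation:

def chaitin_allocate(interference, cost, k):
    # interference: virtual register -> set of registers live at the same time
    # cost: assumed spill cost per register; k: number of machine registers
    graph = {v: set(ns) for v, ns in interference.items()}
    stack, spilled = [], []
    while graph:
        trivial = [v for v in graph if len(graph[v]) < k]
        if trivial:
            v = trivial[0]
            stack.append(v)        # simplify: a node of degree < k is always colourable
        else:
            v = min(graph, key=lambda w: cost[w] / len(graph[w]))
            spilled.append(v)      # spill metric: lowest cost/degree
        for n in graph[v]:
            graph[n].discard(v)
        del graph[v]
    colours = {}
    for v in reversed(stack):      # select: pop and colour greedily
        used = {colours[n] for n in interference[v] if n in colours}
        colours[v] = next(c for c in range(k) if c not in used)
    return colours, spilled

# toy example: a 5-cycle of live ranges is 3-colourable with no spills
g = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(chaitin_allocate(g, cost={v: 1.0 for v in g}, k=3))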
diff --git a/wiki/wikipedia/4098.txt b/wiki/wikipedia/4098.txt deleted file mode 100644 index bf2d0be2549863ea049bb79e91678a19a66f6c97..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4098.txt +++ /dev/null @@ -1,382 +0,0 @@ -In the 1760s, Johann Heinrich Lambert proved that the number pi (pi) is irrational. Thus, it cannot be expressed as a fraction a/b, where a is an integer and b is a non-zero integer. In the 19th century, Charles Hermite found a proof that requires no prerequisite knowledge beyond basic calculus. Three simplifications of Hermite's proof are due to Mary Cartwright, Ivan Niven, and Nicolas Bourbaki. Another proof, which is a simplification of Lambert's proof, is due to Miklós Laczkovich. - -In 1882, Ferdinand von Lindemann proved that pi is not just irrational, but transcendental as well. - -In 1761, Lambert proved that pi is irrational by first showing that this continued fraction expansion holds: -$$ -\tan(x) = \cfrac{x}{1 - \cfrac{x^2}{3 - \cfrac{x^2}{5 - \cfrac{x^2}{7 - {}\ddots}}}}. -$$ - -Then Lambert proved that if x is non-zero and rational then this expression must be irrational. Since tan(pi/4) = 1, it follows that pi/4 is irrational and thus pi is also irrational. A simplification of Lambert's proof is given below. - -Written in 1873, this proof uses the characterization of pi as the smallest positive number whose half is a zero of the cosine function and it actually proves that pi2 is irrational. As in many proofs of irrationality, it is a proof by contradiction. - -Consider the sequences of functions An and Un from $\R$ into $\R$ for $n \in \N_0$ defined by: - -\begin{align} - -A_0(x) &= \sin(x), && A_{n+1}(x) =\int_0^xyA_n(y)dy \\[4pt] - -U_0(x) &= \frac{\sin(x)}x, && U_{n+1}(x) =-\frac{U_n'(x)}x - -\end{align} - -Using induction we can prove that - -\begin{align} - -A_n(x) &=\frac{x^{2n+1}}{(2n+1)!!}-\frac{x^{2n+3}}{2\times(2n+3)!!}+\frac{x^{2n+5}}{2\times4\times(2n+5)!!}\mp\cdots \\[4pt] - -U_n(x) &=\frac1{(2n+1)!!}-\frac{x^2}{2\times(2n+3)!!}+\frac{x^4}{2\times4\times(2n+5)!!}\mp\cdots - -\end{align} - -and therefore we have: -$$ -U_n(x)=\frac{A_n(x)}{x^{2n+1}}. -$$ - -So - - - -\begin{align} - -\frac{A_{n+1}(x)}{x^{2n+3}} & =U_{n+1}(x)=-\frac{U_n'(x)}x=-\frac1x\frac {\mathrm{d}}{\mathrm{d}x}\left(\frac{A_n(x)}{x^{2n+1}}\right) \\[6pt] - -& = -\frac{1}{x} \left( \frac{A_n'(x) \cdot x^{2n+1} - (2n+1) x^{2n} A_n(x)}{x^{2(2n+1)}} \right ) = \frac{(2n+1)A_n(x)-xA_n'(x)}{x^{2n+3}} - -\end{align} - - - -which is equivalent to -$$ -A_{n+1}(x)=(2n+1)A_n(x)-x^2A_{n-1}(x). -$$ - -Using the definition of the sequence and employing induction we can show that -$$ -A_n(x) = P_n(x^2) \sin(x) + x Q_n(x^2) \cos(x), -$$ - -where Pn and Qn are polynomial functions with integer coefficients and the degree of Pn is smaller than or equal to ⌊n/2⌋. In particular, An(pi/2) = Pn(pi2/4). - -Hermite also gave a closed expression for the function An, namely -$$ -A_n(x)=\frac{x^{2n+1}}{2^n n!}\int_0^1(1-z^2)^n\cos(xz)\mathrm{d}z. -$$ - -He did not justify this assertion, but it can be proved easily. First of all, this assertion is equivalent to -$$ -\frac{1}{2^n n!}\int_0^1(1-z^2)^n\cos(x z)\mathrm{d}z=\frac{A_n(x)}{x^{2n+1}}=U_n(x). -$$ - -Proceeding by induction, take n = 0. -$$ -\int_0^1\cos(xz)\mathrm{d}z=\frac{\sin(x)}x=U_0(x) -$$ - -and, for the inductive step, consider any $n \in \N$. 
If -$$ -\frac{1}{2^nn!}\int_0^1(1-z^2)^n\cos(xz)\mathrm{d}z=U_n(x), -$$ - -then, using integration by parts and Leibniz's rule, one gets - -\begin{align} - -\frac{1}{2^{n+1}(n+1)!} &\int_0^1(1-z^2)^{n+1}\cos(xz)\mathrm{d}z \\ - -&=\frac{1}{2^{n+1}(n+1)!}\left (\overbrace{\left.(1-z^2)^{n+1}\frac{\sin(xz)}x\right|_{z=0}^{z=1}}^{=0} + \int_0^12(n+1)(1-z^2)^nz \frac{\sin(xz)}x\mathrm{d}z\right )\\[8pt] - -&= \frac1x\cdot\frac1{2^n n!}\int_0^1(1-z^2)^nz\sin(xz)\mathrm{d}z\\[8pt] - -&= -\frac1x\cdot\frac{\mathrm{d}}{\mathrm{d}x}\left(\frac1{2^nn!}\int_0^1(1-z^2)^n\cos(xz)\mathrm{d}z\right) \\[8pt] - -&= -\frac{U_n'(x)}x \\[4pt] - -&= U_{n+1}(x). - -\end{align} - -If pi2/4 = p/q, with p and q in $\N$, then, since the coefficients of Pn are integers and its degree is smaller than or equal to ⌊n/2⌋, q⌊n/2⌋Pn(pi2/4) is some integer N. In other words, -$$ -N=q^{\left\lfloor\frac n2\right\rfloor}A_n\left(\frac\pi2\right) =q^{\left\lfloor\frac n2\right\rfloor}\frac{\left(\frac pq \right)^{n+\frac 12}}{2^nn!}\int_0^1(1-z^2)^n \cos \left(\frac\pi2z \right)\mathrm{d}z. -$$ - -But this number is clearly greater than 0. On the other hand, the limit of this quantity as n goes to infinity is zero, and so, if n is large enough, N < 1. Thereby, a contradiction is reached. - -Hermite did not present his proof as an end in itself but as an afterthought within his search for a proof of the transcendence of pi. He discussed the recurrence relations to motivate and to obtain a convenient integral representation. Once this integral representation is obtained, there are various ways to present a succinct and self-contained proof starting from the integral (as in Cartwright's, Bourbaki's or Niven's presentations), which Hermite could easily see (as he did in his proof of the transcendence of e). - -Moreover, Hermite's proof is closer to Lambert's proof than it seems. In fact, An(x) is the "residue" (or "remainder") of Lambert's continued fraction for tan(x). - -Harold Jeffreys wrote that this proof was set as an example in an exam at Cambridge University in 1945 by Mary Cartwright, but that she had not traced its origin. - -Consider the integrals -$$ -I_n(x)=\int_{-1}^1(1 - z^2)^n\cos(xz)dz, -$$ - -where n is a non-negative integer. - -Two integrations by parts give the recurrence relation -$$ -x^2I_n(x)=2n(2n-1)I_{n-1}(x)-4n(n-1)I_{n-2}(x). \qquad (n \geq 2) -$$ - -If -$$ -J_n(x)=x^{2n+1}I_n(x), -$$ - -then this becomes -$$ -J_n(x)=2n(2n-1)J_{n-1}(x)-4n(n-1)x^2J_{n-2}(x). -$$ - -Furthermore, J0(x) = 2sin(x) and J1(x) = −4x cos(x) + 4sin(x). Hence for all n ∈ Z+, -$$ -J_n(x)=x^{2n+1}I_n(x)=n!\bigl(P_n(x)\sin(x)+Q_n(x)\cos(x)\bigr), -$$ - -where Pn(x) and Qn(x) are polynomials of degree ≤ n, and with integer coefficients (depending on n). - -Take x = pi/2, and suppose if possible that pi/2 = a/b, where a and b are natural numbers (i.e., assume that pi is rational). Then -$$ - \frac{a^{2n+1}}{n!}I_n\left(\frac\pi2\right) = P_n\left(\frac\pi2\right)b^{2n+1}. -$$ - -The right side is an integer. But 0 < In(pi/2) < 2 since the interval [-1, 1] has length 2 and the function that is being integrated takes only values between 0 and 1. On the other hand, -$$ - \frac{a^{2n+1}}{n!} \to 0 \quad \text{ as }n \to \infty. -$$ - -Hence, for sufficiently large n -$$ - 0 < \frac{a^{2n+1}I_n\left(\frac\pi2\right)}{n!} < 1, -$$ - -that is, we could find an integer between 0 and 1. That is the contradiction that follows from the assumption that pi is rational. - -This proof is similar to Hermite's proof. 
Indeed, - -\begin{align} - -J_n(x)&=x^{2n+1}\int_{-1}^1 (1 - z^2)^n \cos(xz)dz\\[5pt] - -&=2x^{2n+1}\int_0^1 (1 - z^2)^n \cos(xz)dz\\[5pt] - -&=2^{n+1}n!A_n(x). - -\end{align} - -However, it is clearly simpler. This is achieved by omitting the inductive definition of the functions An and taking as a starting point their expression as an integral. - -This proof uses the characterization of pi as the smallest positive zero of the sine function. - -Suppose that pi is rational, i.e. pi = a/b for some integers a and b ≠ 0, which may be taken without loss of generality to be positive. Given any positive integer n, we define the polynomial function: -$$ - f(x) = \frac{x^n(a - bx)^n}{n!} -$$ - -and, for each x ∈ ℝ let -$$ -F(x) = f(x)-f''(x)+f^{(4)}(x)-\cdots+(-1)^n f^{(2n)}(x). -$$ - -Claim 1: F(0) + F(pi) is an integer. - -Proof: - -Expanding f as a sum of monomials, the coefficient of $x^k$ is a number of the form $c_k/n!$ where $c_k$ is an integer, which is 0 if $k < n$. Therefore, $f^{(k)}(0)$ is 0 when $k < n$ and it is equal to $(k!/n!)c_k$ if $n \leq k \leq 2n$; in each case, $f^{(k)}(0)$ is an integer and therefore F(0) is an integer. - -On the other hand, $f(\pi - x) = f(x)$ and so $(-1)^k f^{(k)}(\pi - x) = f^{(k)}(x)$ for each non-negative integer $k$. In particular, $(-1)^k f^{(k)}(\pi) = f^{(k)}(0)$. Therefore, $f^{(k)}(\pi)$ is also an integer and so F(pi) is an integer (in fact, it is easy to see that F(pi) = F(0), but that is not relevant to the proof). Since F(0) and F(pi) are integers, so is their sum. - -Claim 2: -$$ - \int_0^\pi f(x)\sin(x)dx=F(0)+F(\pi) -$$ - -Proof: Since $f^{(2n+2)}$ is the zero polynomial, we have -$$ - F'' + F = f. -$$ - -The derivatives of the sine and cosine function are given by sin' = cos and cos' = -sin. Hence the product rule implies -$$ - (F'\cdot\sin - F\cdot\cos)' = f\cdot\sin -$$ - -By the fundamental theorem of calculus -$$ - \left. \int_0^\pi f(x)\sin(x)dx= \bigl(F'(x)\sin x - F(x)\cos x\bigr) \right|_0^\pi. -$$ - -Since sin 0 = sin pi = 0 and cos 0 = – cos pi = 1 (here we use the above-mentioned characterization of pi as a zero of the sine function), Claim 2 follows. - -Conclusion: Since f(x) > 0 and sin x > 0 for 0 < x < pi (because pi is the smallest positive zero of the sine function), Claims 1 and 2 show that F(0) + F(pi) is a positive integer. Since $0 \leq x(a - bx) \leq \pi a$ and $0 \leq \sin x \leq 1$ for $0 \leq x \leq \pi$, we have, by the original definition of f, -$$ -\int_0^\pi f(x)\sin(x)dx\le\pi\frac{(\pi a)^n}{n!} -$$ - -which is smaller than 1 for large n, hence F(0) + F(pi) < 1 for these n, by Claim 2. This is impossible for the positive integer F(0) + F(pi). - -The above proof is a polished version, which is kept as simple as possible concerning the prerequisites, of an analysis of the formula -$$ -\int_0^\pi f(x)\sin(x)dx = \sum_{j=0}^n (-1)^j \left (f^{(2j)}(\pi)+f^{(2j)}(0)\right )+(-1)^{n+1}\int_0^\pi f^{(2n+2)}(x)\sin(x)dx, -$$ - -which is obtained by 2n + 2 integrations by parts. Claim 2 essentially establishes this formula, where the use of F hides the iterated integration by parts. The last integral vanishes because $f^{(2n+2)}$ is the zero polynomial. Claim 1 shows that the remaining sum is an integer. - -Niven's proof is closer to Cartwright's (and therefore Hermite's) proof than it appears at first sight. In fact, - -\begin{align} - -J_n(x)&=x^{2n+1}\int_{-1}^1(1-z^2)^n\cos(xz)dz\\ - -&=\int_{-1}^1\left (x^2-(xz)^2\right )^nx\cos(xz)dz. - -\end{align} - -Therefore, the substitution xz = y turns this integral into -$$ -\int_{-x}^x(x^2-y^2)^n\cos(y)dy.
-$$ - -In particular, - -\begin{align} - -J_n\left(\frac\pi2\right)&=\int_{-\pi/2}^{\pi/2}\left(\frac{\pi^2}4-y^2\right)^n\cos(y)dy\\[5pt] - -&=\int_0^\pi\left(\frac{\pi^2}4-\left(y-\frac\pi2\right)^2\right)^n\cos\left(y-\frac\pi2\right)dy\\[5pt] - -&=\int_0^\pi y^n(\pi-y)^n\sin(y)dy\\[5pt] - -&=\frac{n!}{b^n}\int_0^\pi f(x)\sin(x)dx. - -\end{align} - -Another connection between the proofs lies in the fact that Hermite already mentions that if f is a polynomial function and -$$ -F=f-f^{(2)}+f^{(4)}\mp\cdots, -$$ - -then -$$ -\int f(x)\sin(x)dx=F'(x)\sin(x)-F(x)\cos(x)+C, -$$ - -from which it follows that -$$ -\int_0^\pi f(x)\sin(x)dx=F(\pi)+F(0). -$$ - -Bourbaki's proof is outlined as an exercise in his calculus treatise. For each natural number b and each non-negative integer n, define -$$ -A_n(b)=b^n\int_0^\pi\frac{x^n(\pi-x)^n}{n!}\sin(x)dx. -$$ - -Since An(b) is the integral of a function defined on [0,pi] that takes the value 0 on 0 and on pi and which is greater than 0 otherwise, An(b) > 0. Besides, for each natural number b, An(b) < 1 if n is large enough, because -$$ - x(\pi-x) \le \left(\frac\pi2\right)^2 -$$ - -and therefore -$$ -A_n(b)\le\pi b^n \frac{1}{n!} \left(\frac\pi2\right)^{2n} = \pi \frac{(b\pi^2/4)^n}{n!}. -$$ - -On the other hand, recursive integration by parts allows us to deduce that, if a and b are natural numbers such that pi = a/b and f is the polynomial function from [0,pi] into R defined by -$$ -f(x)=\frac{x^n(a-bx)^n}{n!}, -$$ - -then: - -\begin{align} - -A_n(b) &= \int_0^\pi f(x)\sin(x)dx \\[5pt] - -&= \Big[-f(x)\cos(x)\Big]_{x=0}^{x=\pi}-\Big[-f'(x) \sin(x) \Big]_{x=0}^{x=\pi} + \cdots \pm \Big[ f^{(2n)}(x) \cos(x) \Big]_{x=0}^{x=\pi} \pm \int_0^\pi f^{(2n+1)}(x)\cos(x)dx. - -\end{align} - -This last integral is 0, since f(2n + 1) is the null function (because f is a polynomial function of degree 2n). Since each function f(k) (with 0 ≤ k ≤ 2n) takes integer values on 0 and on pi and since the same thing happens with the sine and the cosine functions, this proves that An(b) is an integer. Since it is also greater than 0, it must be a natural number. But it was also proved that An(b) < 1 if n is large enough, thereby reaching a contradiction. - -This proof is quite close to Niven's proof, the main difference between them being the way of proving that the numbers An(b) are integers. - -Miklós Laczkovich's proof is a simplification of Lambert's original proof. He considers the functions -$$ -f_k(x) = 1 - \frac{x^2}k+\frac{x^4}{2! k(k+1)}-\frac{x^6}{3! k(k+1)(k+2)} + \cdots \quad (k\notin\{0,-1,-2,\ldots\}). -$$ - -These functions are clearly defined for all x ∈ R. Besides -$$ -f_{\frac{1}{2}}(x) = \cos(2x), -$$ -$$ -f_{\frac{3}{2}}(x) = \frac{\sin(2x)}{2x}. -$$ - -Claim 1: The following recurrence relation holds: -$$ -\forall x\in\R: \qquad \frac{x^2}{k(k+1)}f_{k+2}(x)=f_{k+1}(x)-f_k(x). -$$ - -Proof: This can be proved by comparing the coefficients of the powers of x. - -Claim 2: For each x ∈ R, $\lim_{k\to+\infty}f_k(x)=1.$ - -Proof: In fact, the sequence x2n/n! is bounded (since it converges to 0) and if C is an upper bound and if k > 1, then -$$ -\left|f_k(x)-1\right|\leqslant\sum_{n=1}^\infty\frac C{k^n}=C\frac{1/k}{1-1/k}=\frac C{k-1}. -$$ - -Claim 3: If x ≠ 0 and if x2 is rational, then -$$ -\forall k\in\Q\smallsetminus\{0,-1,-2,\ldots\}: \qquad f_k(x)\neq0 \quad \text{ and } \quad \frac{f_{k+1}(x)}{f_k(x)}\notin\Q. -$$ - -Proof: Otherwise, there would be a number y ≠ 0 and integers a and b such that fk(x) = ay and fk + 1(x) = by. 
In order to see why, take y = fk + 1(x), a = 0 and b = 1 if fk(x) = 0; otherwise, choose integers a and b such that fk + 1(x)/fk(x) = b/a and define y = fk(x)/a = fk + 1(x)/b. In each case, y cannot be 0, because otherwise it would follow from claim 1 that each fk + n(x) (n ∈ N) would be 0, which would contradict claim 2. Now, take a natural number c such that all three numbers bc/k, ck/x2 and c/x2 are integers and consider the sequence -$$ -g_n=\begin{cases}f_k(x) & n=0\\ \dfrac{c^n}{k(k+1)\cdots(k+n-1)}f_{k+n}(x) & n \neq 0 \end{cases} -$$ - -Then -$$ -g_0=f_k(x)=ay\in\Z y \quad \text{ and } \quad g_1=\frac ckf_{k+1}(x)=\frac{bc}ky\in\Z y. -$$ - -On the other hand, it follows from claim 1 that - -\begin{align} - -g_{n+2}&=\frac{c^{n+2}}{x^2k(k+1)\cdots(k+n-1)}\cdot\frac{x^2}{(k+n)(k+n+1)}f_{k+n+2}(x)\\[5pt] - -& =\frac{c^{n+2}}{x^2k(k+1)\cdots(k+n-1)}f_{k+n+1}(x)-\frac{c^{n+2}}{x^2k(k+1)\cdots(k+n-1)}f_{k+n}(x)\\[5pt] - -&=\frac{c(k+n)}{x^2}g_{n+1}-\frac{c^2}{x^2}g_n\\[5pt] - -&=\left(\frac{ck}{x^2}+\frac c{x^2}n\right)g_{n+1}-\frac{c^2}{x^2}g_n, - -\end{align} - -which is a linear combination of gn + 1 and gn with integer coefficients. Therefore, each gn is an integer multiple of y. Besides, it follows from claim 2 that each gn is greater than 0 (and therefore that gn ≥ |y|) if n is large enough and that the sequence of all gn converges to 0. But a sequence of numbers greater than or equal to |y| cannot converge to 0. - -Since f1/2(pi/4) = cos(pi/2) = 0, it follows from claim 3 that pi2/16 is irrational and therefore that pi is irrational. - -On the other hand, since -$$ -\tan x=\frac{\sin x}{\cos x}=x\frac{f_{3/2}(x/2)}{f_{1/2}(x/2)}, -$$ - -another consequence of Claim 3 is that, if x ∈ Q \ {0}, then tan x is irrational. - -Laczkovich's proof is really about the hypergeometric function. In fact, fk(x) = 0F1(k; -x2) and Gauss found a continued fraction expansion of the hypergeometric function using its functional equation. This allowed Laczkovich to find a new and simpler proof of the fact that the tangent function has the continued fraction expansion that Lambert had discovered. - -Laczkovich's result can also be expressed in terms of Bessel functions of the first kind Jν(x). In fact, Γ(k)Jk − 1(2x) = xk − 1fk(x). So Laczkovich's result is equivalent to: If x ≠ 0 and if x2 is rational, then -$$ -\forall k\in\Q\smallsetminus\{0,-1,-2,\ldots\}: \qquad \frac{x J_k(x)}{J_{k-1}(x)}\notin\Q. -$$
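Claims 1 and 2 and the two special values of f_k above are easy to probe numerically. The sketch below is our own illustration (the truncation at 60 terms and the sample point x = 1.3, k = 0.75 are arbitrary choices):

```python
import math

def f(k, x, terms=60):
    """Truncated series f_k(x) = sum_n (-x^2)^n / (n! * k(k+1)...(k+n-1))."""
    total, term = 1.0, 1.0
    for n in range(1, terms):
        term *= -x * x / (n * (k + n - 1))   # ratio of consecutive terms
        total += term
    return total

x, k = 1.3, 0.75
# Claim 1: x^2/(k(k+1)) * f_{k+2}(x) = f_{k+1}(x) - f_k(x)
assert abs(x*x / (k*(k+1)) * f(k+2, x) - (f(k+1, x) - f(k, x))) < 1e-12
# Special values
assert abs(f(0.5, x) - math.cos(2*x)) < 1e-12
assert abs(f(1.5, x) - math.sin(2*x) / (2*x)) < 1e-12
```

The Bessel-function identity Γ(k)Jk−1(2x) = xk−1fk(x) can be checked the same way using scipy.special.gamma and scipy.special.jv, if scipy is available.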
diff --git a/wiki/wikipedia/4099.txt b/wiki/wikipedia/4099.txt deleted file mode 100644 index 49b82f5f1a0e42d3b149c6dbebd9ef5cc5020214..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4099.txt +++ /dev/null @@ -1,29 +0,0 @@ -In mathematics, Malmquist's theorem is the name of any of the three theorems proved by Johannes Malmquist. These theorems restrict the forms of first order algebraic differential equations which have transcendental meromorphic or algebroid solutions. - -Theorem (1913). If the differential equation -$$ -\frac{dw}{dz}=R(z,w) -$$ - -where R(z,w) is a rational function, has a transcendental meromorphic solution, then R is a polynomial of degree at most 2 with respect to w; in other words the differential equation is a Riccati equation, or linear. - -Theorem (1920). If an irreducible differential equation -$$ -F\left(\frac{dw}{dz},w,z\right)=0 -$$ - -where F is a polynomial, has a transcendental meromorphic solution, then the equation has no movable singularities. Moreover, it can be algebraically reduced either to a Riccati equation or to -$$ -\left(\frac{dw}{dz}\right)^2=a(z)P(z,w), -$$ - -where P is a polynomial of degree 3 with respect to w. - -Theorem (1941). If an irreducible differential equation -$$ -F\left(\frac{dw}{dz},w,z\right)=0, -$$ - -where F is a polynomial, has a transcendental algebroid solution, then it can be algebraically reduced to an equation that has no movable singularities. - -A modern account of theorems 1913, 1920 is given in the paper of A. Eremenko (1982). diff --git a/wiki/wikipedia/41.txt b/wiki/wikipedia/41.txt deleted file mode 100644 index 2d4a71244fd53af00d29dd82259ae2b3fdd86139..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/41.txt +++ /dev/null @@ -1,13 +0,0 @@ -Fitting's theorem is a mathematical theorem proved by Hans Fitting. It can be stated as follows: - -If M and N are nilpotent normal subgroups of a group G, then their product MN is also a nilpotent normal subgroup of G; if, moreover, M is nilpotent of class m and N is nilpotent of class n, then MN is nilpotent of class at most m + n. - -By induction it follows also that the subgroup generated by a finite collection of nilpotent normal subgroups is nilpotent. This can be used to show that the Fitting subgroup of certain types of groups (including all finite groups) is nilpotent. However, a subgroup generated by an infinite collection of nilpotent normal subgroups need not be nilpotent. - -In terms of order theory, (part of) Fitting's theorem can be stated as: - -The set of nilpotent normal subgroups forms a lattice of subgroups. - -Thus the nilpotent normal subgroups of a finite group also form a bounded lattice, and have a top element, the Fitting subgroup. - -However, nilpotent normal subgroups do not in general form a complete lattice, as a subgroup generated by an infinite collection of nilpotent normal subgroups need not be nilpotent, though it will be normal. The join of all nilpotent normal subgroups is still defined as the Fitting subgroup, but it need not be nilpotent. diff --git a/wiki/wikipedia/410.txt b/wiki/wikipedia/410.txt deleted file mode 100644 index 8241aeaddcf77d337a07d4592f2f0c21a85010f7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/410.txt +++ /dev/null @@ -1,103 +0,0 @@ -PageRank (PR) is an algorithm used by Google Search to rank web pages in their search engine results. It is named after both the term "web page" and co-founder Larry Page. PageRank is a way of measuring the importance of website pages. According to Google: "PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites." Currently, PageRank is not the only algorithm used by Google to order search results, but it is the first algorithm that was used by the company, and it is the best known. As of September 24, 2019, PageRank and all associated patents have expired. - -PageRank is a link analysis algorithm and it assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of "measuring" its relative importance within the set. The algorithm may be applied to any collection of entities with reciprocal quotations and references. The numerical weight that it assigns to any given element E is referred to as the PageRank of E and denoted by $PR(E).$ - -A PageRank results from a mathematical algorithm based on the webgraph, created by all World Wide Web pages as nodes and hyperlinks as edges, taking into consideration authority hubs such as cnn.com or mayoclinic.org. The rank value indicates the importance of a particular page.
A hyperlink to a page counts as a vote of support. The PageRank of a page is defined recursively and depends on the number and PageRank metric of all pages that link to it ("incoming links"). A page that is linked to by many pages with high PageRank receives a high rank itself. - -Numerous academic papers concerning PageRank have been published since Page and Brin's original paper. In practice, the PageRank concept may be vulnerable to manipulation. Research has been conducted into identifying falsely influenced PageRank rankings. The goal is to find an effective means of ignoring links from documents with falsely influenced PageRank. - -Other link-based ranking algorithms for Web pages include the HITS algorithm invented by Jon Kleinberg (used by Teoma and now Ask.com), the IBM CLEVER project, the TrustRank algorithm and the Hummingbird algorithm. - -The eigenvalue problem was suggested in 1976 by Gabriel Pinski and Francis Narin, who worked on scientometrics ranking scientific journals, in 1977 by Thomas Saaty in his concept of Analytic Hierarchy Process which weighted alternative choices, and in 1995 by Bradley Love and Steven Sloman as a cognitive model for concepts, the centrality algorithm. - -A search engine called "RankDex" from IDD Information Services, designed by Robin Li in 1996, developed a strategy for site-scoring and page-ranking. Li referred to his search mechanism as "link analysis," which involved ranking the popularity of a web site based on how many other sites had linked to it. RankDex, the first search engine with page-ranking and site-scoring algorithms, was launched in 1996. Li patented the technology in RankDex, with his patent filed in 1997 and granted in 1999. He later used it when he founded Baidu in China in 2000. Google founder Larry Page referenced Li's work as a citation in some of his U.S. patents for PageRank. The system was developed with the help of Scott Hassan and Alan Steremberg, both of whom were cited by Page and Brin as being critical to the development of Google. - -The name "PageRank" plays on the name of developer Larry Page, as well as of the concept of a web page. The word is a trademark of Google, and the PageRank process has been patented (). However, the patent is assigned to Stanford University and not to Google. Google has exclusive license rights on the patent from Stanford University. The university received 1.8 million shares of Google in exchange for use of the patent; it sold the shares in 2005 for $336 million. - -PageRank was influenced by citation analysis, early developed by Eugene Garfield in the 1950s at the University of Pennsylvania, and by Hyper Search, developed by Massimo Marchiori at the University of Padua. In the same year PageRank was introduced (1998), Jon Kleinberg published his work on HITS. Google's founders cite Garfield, Marchiori, and Kleinberg in their original papers. - -The PageRank algorithm outputs a probability distribution used to represent the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for collections of documents of any size. It is assumed in several research papers that the distribution is evenly divided among all documents in the collection at the beginning of the computational process. The PageRank computations require several passes, called "iterations", through the collection to adjust approximate PageRank values to more closely reflect the theoretical true value. 
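The iterative computation described above can be sketched in a few lines of code. The toy implementation below is our own illustration, not Google's production algorithm: it uses an assumed damping factor of 0.85, omits the standard correction for dangling pages (page A below has no outbound links, so some rank leaks each pass), and runs on the four-page example worked out in the following paragraphs:

```python
# Toy PageRank by power iteration (illustrative sketch only).
# Link structure of the four-page example below: B -> A,C ; C -> A ; D -> A,B,C.
links = {'A': [], 'B': ['A', 'C'], 'C': ['A'], 'D': ['A', 'B', 'C']}
pages = list(links)
d = 0.85                                   # damping factor
pr = {p: 1.0 / len(pages) for p in pages}  # uniform initial distribution

for _ in range(100):                       # the "iterations" / passes
    pr = {p: (1 - d) / len(pages)
             + d * sum(pr[q] / len(links[q]) for q in pages if p in links[q])
          for p in pages}

print(pr)  # page A accumulates the largest PageRank
```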
- -A probability is expressed as a numeric value between 0 and 1. A 0.5 probability is commonly expressed as a "50% chance" of something happening. Hence, a document with a PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be directed to said document. - -Assume a small universe of four web pages: A, B, C, and D. Links from a page to itself are ignored. Multiple outbound links from one page to another page are treated as a single link. PageRank is initialized to the same value for all pages. In the original form of PageRank, the sum of PageRank over all pages was the total number of pages on the web at that time, so each page in this example would have an initial value of 1. However, later versions of PageRank, and the remainder of this section, assume a probability distribution between 0 and 1. Hence the initial value for each page in this example is 0.25. - -The PageRank transferred from a given page to the targets of its outbound links upon the next iteration is divided equally among all outbound links. - -If the only links in the system were from pages B, C, and D to A, each link would transfer 0.25 PageRank to A upon the next iteration, for a total of 0.75. -$$ -PR(A)= PR(B) + PR(C) + PR(D). -$$ - -Suppose instead that page B had a link to pages C and A, page C had a link to page A, and page D had links to all three pages. Thus, upon the first iteration, page B would transfer half of its existing value, or 0.125, to page A and the other half, or 0.125, to page C. Page C would transfer all of its existing value, 0.25, to the only page it links to, A. Since D had three outbound links, it would transfer one third of its existing value, or approximately 0.083, to A. At the completion of this iteration, page A will have a PageRank of approximately 0.458. -$$ -PR(A)= \frac{PR(B)}{2}+ \frac{PR(C)}{1}+ \frac{PR(D)}{3}. -$$ - -In other words, the PageRank conferred by an outbound link is equal to the document's own PageRank score divided by the number of outbound links L( ). -$$ -PR(A)= \frac{PR(B)}{L(B)}+ \frac{PR(C)}{L(C)}+ \frac{PR(D)}{L(D)}. -$$ - -In the general case, the PageRank value for any page u can be expressed as: -$$ -PR(u) = \sum_{v \in B_u} \frac{PR(v)}{L(v)} -$$, - -i.e. the PageRank value for a page u is dependent on the PageRank values for each page v contained in the set Bu (the set containing all pages linking to page u), divided by the number L(v) of links from page v. - -The PageRank theory holds that an imaginary surfer who is randomly clicking on links will eventually stop clicking. The probability, at any step, that the person will continue is a damping factor d. Various studies have tested different damping factors, but it is generally assumed that the damping factor will be set around 0.85. - -The "Toolbar Pagerank" was updated very infrequently. It was last updated in November 2013. In October 2014 Matt Cutts announced that another visible pagerank update would not be coming. In March 2016 Google announced it would no longer support this feature, and the underlying API would soon cease to operate. On April 15, 2016 Google has officially turned off display of PageRank Data in Google Toolbar. Google will still be using PageRank score when determining how to rank content in search results. - -The search engine results page (SERP) is the actual result returned by a search engine in response to a keyword query. The SERP consists of a list of links to web pages with associated text snippets. 
The SERP rank of a web page refers to the placement of the corresponding link on the SERP, where higher placement means higher SERP rank. The SERP rank of a web page is a function not only of its PageRank, but of a relatively large and continuously adjusted set of factors (over 200). Search engine optimization (SEO) is aimed at influencing the SERP rank for a website or a set of web pages. - -Positioning of a webpage on Google SERPs for a keyword depends on relevance and reputation, also known as authority and popularity. PageRank is Google's indication of its assessment of the reputation of a webpage: It is non-keyword specific. Google uses a combination of webpage and website authority to determine the overall authority of a webpage competing for a keyword. The PageRank of the HomePage of a website is the best indication Google offers for website authority. - -After the introduction of Google Places into the mainstream organic SERP, numerous other factors in addition to PageRank affect ranking a business in Local Business Results. When Google elaborated on the reasons for PageRank deprecation at Q&A #March 2016, they announced Links and Content as the Top Ranking Factors. RankBrain had earlier in October 2015 been announced as the #3 Ranking Factor, so the Top 3 Factors have been confirmed officially by Google. - -The Google Directory PageRank was an 8-unit measurement. Unlike the Google Toolbar, which shows a numeric PageRank value upon mouseover of the green bar, the Google Directory only displayed the bar, never the numeric values. Google Directory was closed on July 20, 2011. - -In the past, the PageRank shown in the Toolbar was easily manipulated. Redirection from one page to another, either via an HTTP 302 response or a "Refresh" meta tag, caused the source page to acquire the PageRank of the destination page. Hence, a new page with PR 0 and no incoming links could have acquired PR 10 by redirecting to the Google home page. This spoofing technique was a known vulnerability. Spoofing can generally be detected by performing a Google search for a source URL; if the URL of an entirely different site is displayed in the results, the latter URL may represent the destination of a redirection. - -For search engine optimization purposes, some companies offer to sell high PageRank links to webmasters. Even though PageRank has become less important for SEO purposes, the existence of back-links from more popular websites continues to push a webpage higher up in search rankings. - -One variant assumes a more intelligent surfer that probabilistically hops from page to page depending on the content of the pages and the query terms the surfer is looking for. This model is based on a query-dependent PageRank score of a page which, as the name suggests, is also a function of the query. When given a multiple-term query, $Q=\{q1,q2,\cdots\}$, the surfer selects a $q$ according to some probability distribution, $P(q)$, and uses that term to guide its behavior for a large number of steps. It then selects another term according to the distribution to determine its behavior, and so on. The resulting distribution over visited web pages is QD-PageRank. - -Katja Mayer views PageRank as a social network as it connects differing viewpoints and thoughts in a single place. People go to PageRank for information and are flooded with citations of other authors who also have an opinion on the topic. This creates a social aspect where everything can be discussed and collected to provoke thinking.
There is a social relationship that exists between PageRank and the people who use it as it is constantly adapting and changing to the shifts in modern society. Viewing the relationship between PageRank and the individual through sociometry allows for an in-depth look at the connection that results. - -Matteo Pasquinelli reckons the basis for the belief that PageRank has a social component lies in the idea of attention economy. With attention economy, value is placed on products that receive a greater amount of human attention, and the results at the top of the PageRank garner a larger amount of focus than those on subsequent pages. The outcomes with the higher PageRank will therefore enter the human consciousness to a larger extent. These ideas can influence decision-making, and the actions of the viewer have a direct relation to the PageRank. They possess a higher potential to attract a user's attention as their location increases the attention economy attached to the site. With this location they can receive more traffic and their online marketplace will have more purchases. The PageRank of these sites allows them to be trusted and they are able to parlay this trust into increased business. - -The mathematics of PageRank are entirely general and apply to any graph or network in any domain. Thus, PageRank is now regularly used in bibliometrics, social and information network analysis, and for link prediction and recommendation. It's even used for systems analysis of road networks, as well as biology, chemistry, neuroscience, and physics. - -PageRank has recently been used to quantify the scientific impact of researchers. The underlying citation and collaboration networks are used in conjunction with the PageRank algorithm in order to come up with a ranking system for individual publications which propagates to individual authors. The new index known as pagerank-index (Pi) is demonstrated to be fairer than the h-index, given the many drawbacks exhibited by the h-index. - -For the analysis of protein networks in biology PageRank is also a useful tool. - -In any ecosystem, a modified version of PageRank may be used to determine species that are essential to the continuing health of the environment. - -A similar new use of PageRank is to rank academic doctoral programs based on their records of placing their graduates in faculty positions. In PageRank terms, academic departments link to each other by hiring their faculty from each other (and from themselves). - -A version of PageRank has recently been proposed as a replacement for the traditional Institute for Scientific Information (ISI) impact factor, and implemented at Eigenfactor as well as at SCImago. Instead of merely counting total citations to a journal, the "importance" of each citation is determined in a PageRank fashion. - -In neuroscience, the PageRank of a neuron in a neural network has been found to correlate with its relative firing rate. - -Personalized PageRank is used by Twitter to present users with other accounts they may wish to follow. - -Swiftype's site search product builds a "PageRank that’s specific to individual websites" by looking at each website's signals of importance and prioritizing content based on factors such as number of links from the home page. - -A Web crawler may use PageRank as one of a number of importance metrics it uses to determine which URL to visit during a crawl of the web.
One of the early working papers that were used in the creation of Google is Efficient crawling through URL ordering, which discusses the use of a number of different importance metrics to determine how deeply, and how much of a site Google will crawl. PageRank is presented as one of a number of these importance metrics, though there are others listed such as the number of inbound and outbound links for a URL, and the distance from the root directory on a site to the URL. - -The PageRank may also be used as a methodology to measure the apparent impact of a community like the Blogosphere on the overall Web itself. This approach uses therefore the PageRank to measure the distribution of attention in reflection of the Scale-free network paradigm. - -In 2005, in a pilot study in Pakistan, Structural Deep Democracy, SD2 was used for leadership selection in a sustainable agriculture group called Contact Youth. SD2 uses PageRank for the processing of the transitive proxy votes, with the additional constraints of mandating at least two initial proxies per voter, and all voters are proxy candidates. More complex variants can be built on top of SD2, such as adding specialist proxies and direct votes for specific issues, but SD2 as the underlying umbrella system, mandates that generalist proxies should always be used. - -In sport the PageRank algorithm has been used to rank the performance of: teams in the National Football League (NFL) in the USA; individual soccer players; and athletes in the Diamond League. - -PageRank has been used to rank spaces or streets to predict how many people (pedestrians or vehicles) come to the individual spaces or streets. In lexical semantics it has been used to perform Word Sense Disambiguation, Semantic similarity, and also to automatically rank WordNet synsets according to how strongly they possess a given semantic property, such as positivity or negativity. - -In early 2005, Google implemented a new value, "nofollow", for the rel attribute of HTML link and anchor elements, so that website developers and bloggers can make links that Google will not consider for the purposes of PageRank—they are links that no longer constitute a "vote" in the PageRank system. The nofollow relationship was added in an attempt to help combat spamdexing. - -As an example, people could previously create many message-board posts with links to their website to artificially inflate their PageRank. With the nofollow value, message-board administrators can modify their code to automatically insert "rel='nofollow'" to all hyperlinks in posts, thus preventing PageRank from being affected by those particular posts. This method of avoidance, however, also has various drawbacks, such as reducing the link value of legitimate comments. (See: Spam in blogs#nofollow) - -In an effort to manually control the flow of PageRank among pages within a website, many webmasters practice what is known as PageRank Sculpting—which is the act of strategically placing the nofollow attribute on certain internal links of a website in order to funnel PageRank towards those pages the webmaster deemed most important. This tactic has been used since the inception of the nofollow attribute, but may no longer be effective since Google announced that blocking PageRank transfer with nofollow does not redirect that PageRank to other links. 
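As a rough illustration of the rel="nofollow" mechanism described above, a message board could rewrite anchor tags in user posts before rendering them. The regex-based sketch below is our own simplification, not any particular forum's code; production software would use a real HTML parser:

```python
import re

def add_nofollow(html):
    # Insert rel="nofollow" into <a> tags that do not already carry a rel attribute.
    return re.sub(r'<a (?![^>]*\brel=)', '<a rel="nofollow" ', html)

post = 'Visit <a href="http://example.com/">my site</a> for more!'
print(add_nofollow(post))
# Visit <a rel="nofollow" href="http://example.com/">my site</a> for more!
```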
diff --git a/wiki/wikipedia/4100.txt b/wiki/wikipedia/4100.txt deleted file mode 100644 index c8daabf991f3471eb15b6b2fd67fe431e7d8771d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4100.txt +++ /dev/null @@ -1,96 +0,0 @@ -MAX-3SAT is a problem in the computational complexity subfield of computer science. It generalises the Boolean satisfiability problem (SAT) which is a decision problem considered in complexity theory. It is defined as: - -Given a 3-CNF formula Φ (i.e. with at most 3 literals per clause), find an assignment that satisfies the largest number of clauses. - -MAX-3SAT is a canonical complete problem for the complexity class MAXSNP (shown complete in Papadimitriou pg. 314). - -The decision version of MAX-3SAT is NP-complete. Therefore, a polynomial-time solution can only be achieved if P = NP. However, an approximation within a factor of 2 can be achieved with the following simple algorithm: - -* Output the solution in which most clauses are satisfied, when either all variables = TRUE or all variables = FALSE. - -* Every clause is satisfied by one of the two solutions, therefore one solution satisfies at least half of the clauses. - -The Karloff-Zwick algorithm runs in polynomial-time and satisfies ≥ 7/8 of the clauses. - -The PCP theorem implies that there exists an ε > 0 such that (1-ε)-approximation of MAX-3SAT is NP-hard. - -Proof: - -Any NP-complete problem L \in \mathsf{PCP}(O(\log (n)), O(1)) by the PCP theorem. For x ∈ L, a 3-CNF formula Ψx is constructed so that - -* x ∈ L ⇒ Ψx is satisfiable - -* x ∉ L ⇒ no more than (1-ε)m clauses of Ψx are satisfiable. - -The Verifier V reads all required bits at once i.e. makes non-adaptive queries. This is valid because the number of queries remains constant. - -* Let q be the number of queries. - -* Enumerating all random strings Ri ∈ V, we obtain poly(|x|) strings since the length of each string $r(x) = O(\log |x|)$. - -* For each Ri - -** V chooses q positions i1,...,iq and a Boolean function fR: {0,1}q->{0,1} and accepts if and only if fR(π(i1,...,iq)). Here π refers to the proof obtained from the Oracle. - -Next we try to find a Boolean formula to simulate this. We introduce Boolean variables x1,...,xl, where l is the length of the proof. To demonstrate that the Verifier runs in probabilistic polynomial-time, we need a correspondence between the number of satisfiable clauses and the probability the Verifier accepts. - -* For every R, add clauses representing fR(xi1,...,xiq) using 2q SAT clauses. Clauses of length q are converted to length 3 by adding new (auxiliary) variables e.g. x2 ∨ x10 ∨ x11 ∨ x12 = ( x2 ∨ x10 ∨ yR) ∧ (¬yR ∨ x11 ∨ x12). This requires a maximum of q2q 3-SAT clauses. - -* If z ∈ L then - -** there is a proof π such that Vπ (z) accepts for every Ri. - -** All clauses are satisfied if xi = π(i) and the auxiliary variables are added correctly. - -* If input z ∉ L then - -** For every assignment to x1,...,xl and yR's, the corresponding proof π(i) = xi causes the Verifier to reject for half of all R ∈ {0,1}r(|z|). - -*** For each R, one clause representing fR fails. - -*** Therefore, a fraction $\frac{1}{2} \frac{1}{q2^{q}}$ of clauses fails. - -It can be concluded that if this holds for every NP-complete problem then the PCP theorem must be true. - -Håstad demonstrates a tighter result than Theorem 1, i.e., the best known value for ε. - -He constructs a PCP Verifier for 3-SAT that reads only 3 bits from the Proof.
- -For every ε > 0, there is a PCP-verifier M for 3-SAT that reads a random string r of length O(\log(n)) and computes query positions ir, jr, kr in the proof π and a bit br. It accepts if and only if π(ir) ⊕ π(jr) ⊕ π(kr) = br. - -The Verifier has completeness (1-ε) and soundness 1/2 + ε (refer to PCP (complexity)). The Verifier satisfies -$$ -z \in L \implies \exists \pi Pr[V^{\pi} (x) = 1] \ge 1 - \epsilon -$$ -$$ -z \not \in L \implies \forall \pi Pr[V^{\pi} (x) = 1] \le \frac{1}{2} + \epsilon -$$ - -If the first of these two equations were equated to "=1" as usual, one could find a proof π by solving a system of linear equations (see MAX-3LIN-EQN) implying P = NP. - -* If z ∈ L, a fraction ≥ (1 - ε) of clauses are satisfied. - -* If z ∉ L, then for a (1/2 - ε) fraction of R, 1/4 of the corresponding clauses are contradicted. - -This is enough to prove the hardness of approximation ratio -$$ -\frac{1-\frac{1}{4}(\frac{1}{2}-\epsilon)}{1-\epsilon} = \frac{7}{8} + \epsilon' -$$ - -== Related problems == - -MAX-3SAT(B) is the restricted special case of MAX-3SAT where every variable occurs in at most B clauses. Before the PCP theorem was proven, Papadimitriou and Yannakakis showed that for some fixed constant B, this problem is MAX SNP-hard. Consequently, with the PCP theorem, it is also APX-hard. This is useful because MAX-3SAT(B) can often be used to obtain a PTAS-preserving reduction in a way that MAX-3SAT cannot. Proofs for explicit values of B include: all B ≥ 13, and all B ≥ 3 (which is best possible). - -Moreover, although the decision problem 2SAT is solvable in polynomial time, MAX-2SAT(3) is also APX-hard. - -The best possible approximation ratio for MAX-3SAT(B), as a function of B, is at least $7/8+\Omega(1/B)$ and at most $7/8+O(1/\sqrt{B})$, unless NP=RP. Some explicit bounds on the approximability constants for certain values of B are known. - -Berman, Karpinski and Scott proved that for the "critical" instances of MAX-3SAT in which each literal occurs exactly twice, and each clause is exactly of size 3, the problem is hard to approximate within some constant factor. - -MAX-EkSAT is a parameterized version of MAX-3SAT where every clause has exactly k literals, for k ≥ 3. It can be efficiently approximated with approximation ratio $1- (1/2)^{k}$ using ideas from coding theory. - -It has been proved that random instances of MAX-3SAT can be approximated to within factor 8/9.
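As a concrete illustration of the simple factor-2 approximation described at the beginning of this article, the following sketch (our own code; the signed-integer clause encoding is an assumption, not part of the article) compares the all-TRUE and all-FALSE assignments:

```python
def max3sat_half(clauses):
    """Return the better of the all-TRUE / all-FALSE assignments.
    A literal v > 0 means x_v; v < 0 means NOT x_v."""
    def satisfied(value):  # every variable set to `value`
        return sum(any((lit > 0) == value for lit in c) for c in clauses)
    t, f = satisfied(True), satisfied(False)
    # Each clause contains a positive literal (satisfied by all-TRUE) or only
    # negative literals (satisfied by all-FALSE), so max(t, f) >= len(clauses)/2.
    return (True, t) if t >= f else (False, f)

print(max3sat_half([[1, 2, -3], [-1, -2, -3], [2, 3, 4]]))  # (True, 2)
```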
diff --git a/wiki/wikipedia/4101.txt b/wiki/wikipedia/4101.txt deleted file mode 100644 index 171ccf9a2f09ab3c674731c7e4d17d0171b4c91b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4101.txt +++ /dev/null @@ -1,15 +0,0 @@ -In mathematics, the Castelnuovo–de Franchis theorem is a classical result on complex algebraic surfaces. Let X be such a surface, projective and non-singular, and let - -ω1 and ω2 - -be two differentials of the first kind on X which are linearly independent but with wedge product 0. Then this data can be represented as a pullback of an algebraic curve: there is a non-singular algebraic curve C, a morphism - -φ: X → C, - -and differentials of the first kind ω′1 and ω′2 on C such that - -φ*(ω′1) = ω1 and φ*(ω′2) = ω2. - -This result is due to Guido Castelnuovo and Michele de Franchis (1875–1946). - -The converse, that two such pullbacks would have wedge 0, is immediate. diff --git a/wiki/wikipedia/4102.txt b/wiki/wikipedia/4102.txt deleted file mode 100644 index e1a34560a4e67ee5a53c8c67df74602e37e1e2f4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4102.txt +++ /dev/null @@ -1,5 +0,0 @@ -In computational complexity theory, a promise problem is a generalization of a decision problem where the input is promised to belong to a particular subset of all possible inputs.
Unlike decision problems, the yes instances (the inputs for which an algorithm must return yes) and no instances do not exhaust the set of all inputs. Intuitively, the algorithm has been promised that the input does indeed belong to set of yes instances or no instances. There may be inputs which are neither yes nor no. If such an input is given to an algorithm for solving a promise problem, the algorithm is allowed to output anything, and may even not halt. - -A decision problem can be associated with a language $L \subseteq \{0,1\}^*$, where the problem is to accept all inputs in $L$ and reject all inputs not in $L$. For a promise problem, there are two languages, $L_{\text{YES}}$ and $L_{\text{NO}}$, which must be disjoint, which means $L_{\text{YES}} \cap L_{\text{NO}} = \varnothing$, such that all the inputs in $L_{\text{YES}}$ are to be accepted and all inputs in $L_{\text{NO}}$ are to be rejected. The set $L_{\text{YES}} \cup L_{\text{NO}}$ is called the promise. There are no requirements on the output if the input does not belong to the promise. If the promise equals $\{0,1\}^*$, then this is also a decision problem, and the promise is said to be trivial. - -Many natural problems are actually promise problems. For instance, consider the following problem: Given a directed acyclic graph, determine if the graph has a path of length 10. The yes instances are directed acyclic graphs with a path of length 10, whereas the no instances are directed acyclic graphs with no path of length 10. The promise is the set of directed acyclic graphs. In this example, the promise is easy to check. In particular, it is very easy to check if a given graph is cyclic. However, the promised property could be difficult to evaluate. For instance, consider the problem "Given a Hamiltonian graph, determine if the graph has a cycle of size 4." Now the promise is NP-hard to evaluate, yet the promise problem is easy to solve since checking for cycles of size 4 can be done in polynomial time. diff --git a/wiki/wikipedia/4103.txt b/wiki/wikipedia/4103.txt deleted file mode 100644 index d795a8838b4a562a2099a6f5b8696c69fe18374b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4103.txt +++ /dev/null @@ -1,23 +0,0 @@ -Cauchy's theorem is a theorem in geometry, named after Augustin Cauchy. It states that - -convex polytopes in three dimensions with congruent corresponding faces must be congruent to each other. That is, any polyhedral net formed by unfolding the faces of the polyhedron onto a flat surface, together with gluing instructions describing which faces should be connected to each other, uniquely determines the shape of the original polyhedron. For instance, if six squares are connected in the pattern of a cube, then they must form a cube: there is no convex polyhedron with six square faces connected in the same way that does not have the same shape. - -This is a fundamental result in rigidity theory: one consequence of the theorem is that, if one makes a physical model of a convex polyhedron by connecting together rigid plates for each of the polyhedron faces with flexible hinges along the polyhedron edges, then this ensemble of plates and hinges will necessarily form a rigid structure. - -Let P and Q be combinatorially equivalent 3-dimensional convex polytopes; that is, they are convex polytopes with isomorphic face lattices. Suppose further that each pair of corresponding faces from P and Q are congruent to each other, i.e. equal up to a rigid motion. Then P and Q are themselves congruent. 
- -To see that convexity is necessary, consider a regular icosahedron. One can "push in" a vertex to create a nonconvex polyhedron that is still combinatorially equivalent to the regular icosahedron. Another way to see it is to take the pentagonal pyramid around a vertex and reflect it with respect to its base. - -The result originated in Euclid's Elements, where solids are called equal if the same holds for their faces. This version of the result was proved by Cauchy in 1813 based on earlier work by Lagrange. An error in Cauchy's proof of the main lemma was corrected by Ernst Steinitz, Isaac Jacob Schoenberg, and Aleksandr Danilovich Aleksandrov. The corrected proof of Cauchy is so short and elegant that it is considered to be one of the Proofs from THE BOOK. - -* The result does not hold on a plane or for non-convex polyhedra in $\mathbb R^3$: there exist non-convex flexible polyhedra that have one or more degrees of freedom of movement that preserve the shapes of their faces. In particular, the Bricard octahedra are self-intersecting flexible surfaces discovered by the French mathematician Raoul Bricard in 1897. The Connelly sphere, a flexible non-convex polyhedron homeomorphic to a 2-sphere, was discovered by Robert Connelly in 1977. - -* Although originally proven by Cauchy in three dimensions, the theorem was extended to dimensions higher than 3 by Alexandrov (1950). - -* Cauchy's rigidity theorem is a corollary of Cauchy's theorem stating that a convex polytope cannot be deformed so that its faces remain rigid. - -* In 1974 Herman Gluck showed that in a certain precise sense almost all simply connected closed surfaces are rigid. - -* Dehn's rigidity theorem is an extension of the Cauchy rigidity theorem to infinitesimal rigidity. This result was obtained by Dehn in 1916. - -* Alexandrov's uniqueness theorem is a result by Alexandrov (1950), generalizing Cauchy's theorem by showing that convex polyhedra are uniquely described by the metric spaces of geodesics on their surface. The analogous uniqueness theorem for smooth surfaces was proved by Cohn-Vossen in 1927. Pogorelov's uniqueness theorem is a result by Pogorelov generalizing both of these results and applying to general convex surfaces. diff --git a/wiki/wikipedia/4104.txt b/wiki/wikipedia/4104.txt deleted file mode 100644 index 10eba215d7e179ae321353d534d30144eff6afa0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4104.txt +++ /dev/null @@ -1,8 +0,0 @@ -In mathematics, the Poincaré separation theorem gives the upper and lower bounds of eigenvalues of a real symmetric matrix B'AB that can be considered as the orthogonal projection of a larger real symmetric matrix A onto a linear subspace spanned by the columns of B. The theorem is named after Henri Poincaré. - -More specifically, let A be an n × n real symmetric matrix and B an n × r semi-orthogonal matrix such that B'B = Ir. Denote by $\lambda_i$, i = 1, 2, ..., n and $\mu_i$, i = 1, 2, ..., r the eigenvalues of A and B'AB, respectively (in descending order). We have -$$ -\lambda_i \geq \mu_i \geq \lambda_{n-r+i}, \qquad i = 1, 2, \ldots, r. -$$ - -An algebraic proof, based on the variational interpretation of eigenvalues, has been published in Magnus' Matrix Differential Calculus with Applications in Statistics and Econometrics. From the geometric point of view, B'AB can be considered as the orthogonal projection of A onto the linear subspace spanned by B, so the above results follow immediately.
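The separation inequalities are easy to confirm numerically. The following sketch is our own illustration (the random seed and the sizes n = 6, r = 3 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 6, 3
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                                  # real symmetric
B = np.linalg.qr(rng.standard_normal((n, r)))[0]   # semi-orthogonal: B.T @ B = I_r

lam = np.sort(np.linalg.eigvalsh(A))[::-1]         # eigenvalues of A, descending
mu = np.sort(np.linalg.eigvalsh(B.T @ A @ B))[::-1]

for i in range(r):                                 # 0-based version of the theorem
    assert lam[i] >= mu[i] - 1e-10 and mu[i] >= lam[n - r + i] - 1e-10
```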
diff --git a/wiki/wikipedia/4105.txt b/wiki/wikipedia/4105.txt deleted file mode 100644 index 9e3a62d1115255d708a8eb4c614dab9626e193b3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4105.txt +++ /dev/null @@ -1,31 +0,0 @@ -In mathematics, Maclaurin's inequality, named after Colin Maclaurin, is a refinement of the inequality of arithmetic and geometric means. - -Let a1, a2, ..., an be positive real numbers, and for k = 1, 2, ..., n define the averages Sk as follows: -$$ -S_k = \frac{\displaystyle \sum_{ 1\leq i_1 < \cdots < i_k \leq n}a_{i_1} a_{i_2} \cdots a_{i_k}}{\displaystyle {n \choose k}}. -$$ - -The numerator of this fraction is the elementary symmetric polynomial of degree k in the n variables a1, a2, ..., an, that is, the sum of all products of k of the numbers a1, a2, ..., an with the indices in increasing order. The denominator is the number of terms in the numerator, the binomial coefficient $\scriptstyle{n \choose k}$. - -Maclaurin's inequality is the following chain of inequalities: -$$ -S_1 \geq \sqrt{S_2} \geq \sqrt[3]{S_3} \geq \cdots \geq \sqrt[n]{S_n} -$$ - -with equality if and only if all the ai are equal. - -For n = 2, this gives the usual inequality of arithmetic and geometric means of two numbers. Maclaurin's inequality is well illustrated by the case n = 4: - -\begin{align} - -& {} \quad \frac{a_1+a_2+a_3+a_4}{4} \\[8pt] - -& {} \ge \sqrt{\frac{a_1a_2+a_1a_3+a_1a_4+a_2a_3+a_2a_4+a_3a_4}{6}} \\[8pt] - -& {} \ge \sqrt[3]{\frac{a_1a_2a_3+a_1a_2a_4+a_1a_3a_4+a_2a_3a_4}{4}} \\[8pt] - -& {} \ge \sqrt[4]{a_1a_2a_3a_4}. - -\end{align} - -Maclaurin's inequality can be proved using Newton's inequalities.
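A quick numerical check of the chain of inequalities (our own sketch; the list of positive reals is an arbitrary choice):

```python
from itertools import combinations
from math import comb, prod

a = [1.0, 2.0, 3.0, 5.0, 8.0]
n = len(a)

# S_k = (elementary symmetric polynomial of degree k) / C(n, k)
S = [sum(prod(c) for c in combinations(a, k)) / comb(n, k) for k in range(1, n + 1)]
roots = [S[k - 1] ** (1.0 / k) for k in range(1, n + 1)]   # S_1, sqrt(S_2), ...

assert all(roots[i] >= roots[i + 1] - 1e-12 for i in range(n - 1))
print(roots)  # weakly decreasing, from the arithmetic mean down to the geometric mean
```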
diff --git a/wiki/wikipedia/4106.txt b/wiki/wikipedia/4106.txt deleted file mode 100644 index 1964eac885c528dfeb2250c3514a14e511b906bf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4106.txt +++ /dev/null @@ -1,11 +0,0 @@ -In mathematics, the Cartan–Kähler theorem is a major result on the integrability conditions for differential systems, in the case of analytic functions, for differential ideals $I$. It is named for Élie Cartan and Erich Kähler. - -It is not true that merely having $dI$ contained in $I$ is sufficient for integrability. There is a problem caused by singular solutions. The theorem computes certain constants that must satisfy an inequality in order that there be a solution. - -Let $(M,I)$ be a real analytic EDS. Assume that $P \subseteq M$ is a connected, $k$-dimensional, real analytic, regular integral manifold of $I$ with $r(P) \geq 0$ (i.e., the tangent spaces $T_p P$ are "extendable" to higher dimensional integral elements). - -Moreover, assume there is a real analytic submanifold $R \subseteq M$ of codimension $r(P)$ containing $P$ and such that $T_pR \cap H(T_pP)$ has dimension $k+1$ for all $p \in P$. - -Then there exists a (locally) unique connected, $(k+1)$-dimensional, real analytic integral manifold $X \subseteq M$ of $I$ that satisfies $P \subseteq X \subseteq R$. - -The Cauchy-Kovalevskaya theorem is used in the proof, so the analyticity is necessary. diff --git a/wiki/wikipedia/4107.txt b/wiki/wikipedia/4107.txt deleted file mode 100644 index a06d15957046edd3ee29613733b101545a91981f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4107.txt +++ /dev/null @@ -1,41 +0,0 @@ -In mathematics, the term adjoint applies in several situations. Several of these share a similar formalism: if A is adjoint to B, then there is typically some formula of the type - -(Ax, y) = (x, By). - -Specifically, adjoint or adjunction may mean: - -* Adjoint of a linear map, also called its transpose - -* Hermitian adjoint (adjoint of a linear operator) in functional analysis - -* Adjoint endomorphism of a Lie algebra - -* Adjoint representation of a Lie group - -* Adjoint functors in category theory - -* Adjunction (field theory) - -* Adjunction formula (algebraic geometry) - -* Adjunction space in topology - -* Conjugate transpose of a matrix in linear algebra - -* Adjugate matrix, related to its inverse - -* Adjoint equation - -* The upper and lower adjoints of a Galois connection in order theory - -* The adjoint of a differential operator with general polynomial coefficients - -* Kleisli adjunction - -* Monoidal adjunction - -* Quillen adjunction - -* Axiom of adjunction in set theory - -* Adjunction (rule of inference) diff --git a/wiki/wikipedia/4108.txt b/wiki/wikipedia/4108.txt deleted file mode 100644 index 931660e136ff94c274c57d150d1368ccf25e29a5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4108.txt +++ /dev/null @@ -1,11 +0,0 @@ -In logic, a metatheorem is a statement about a formal system proven in a metalanguage. Unlike theorems proved within a given formal system, a metatheorem is proved within a metatheory, and may reference concepts that are present in the metatheory but not the object theory. - -A formal system is determined by a formal language and a deductive system (axioms and rules of inference). The formal system can be used to prove particular sentences of the formal language with that system. Metatheorems, however, are proved externally to the system in question, in its metatheory. Common metatheories used in logic are set theory (especially in model theory) and primitive recursive arithmetic (especially in proof theory). Rather than demonstrating particular sentences to be provable, metatheorems may show that each of a broad class of sentences can be proved, or show that certain sentences cannot be proved. - -Examples of metatheorems include: - -* The deduction theorem for first-order logic says that a sentence of the form φ→ψ is provable from a set of axioms A if and only if the sentence ψ is provable from the system whose axioms consist of φ and all the axioms of A. - -* The class existence theorem of von Neumann–Bernays–Gödel set theory states that for every formula whose quantifiers range only over sets, there is a class consisting of the sets satisfying the formula. - -* Consistency proofs of systems such as Peano arithmetic. diff --git a/wiki/wikipedia/4109.txt b/wiki/wikipedia/4109.txt deleted file mode 100644 index b9df6195eb5319f9e71825ee1c8ed871741b99c6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4109.txt +++ /dev/null @@ -1,11 +0,0 @@ -In mathematics, the T(1) theorem, first proved by David and Journé, describes when an operator T given by a kernel can be extended to a bounded linear operator on the Hilbert space L2(Rn). The name T(1) theorem refers to a condition on the distribution T(1), given by the operator T applied to the function 1. - -Suppose that T is a continuous operator from Schwartz functions on Rn to tempered distributions, so that T is given by a kernel K which is a distribution. Assume that the kernel is standard, which means that off the diagonal it is given by a function satisfying certain conditions.
- -Then the T(1) theorem states that T can be extended to a bounded operator on the Hilbert space L2(Rn) if and only if the following conditions are satisfied: - -*T(1) is of bounded mean oscillation (where T is extended to an operator on bounded smooth functions, such as 1). - -*T*(1) is of bounded mean oscillation, where T* is the adjoint of T. - -*T is weakly bounded, a weak condition that is easy to verify in practice. diff --git a/wiki/wikipedia/411.txt b/wiki/wikipedia/411.txt deleted file mode 100644 index ded518f8d1adc9bb276e5ad1f4c3be04cdf93f6b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/411.txt +++ /dev/null @@ -1,13 +0,0 @@ -In the branch of abstract algebra called ring theory, the Akizuki–Hopkins–Levitzki theorem connects the descending chain condition and ascending chain condition in modules over semiprimary rings. A ring R (with 1) is called semiprimary if R/J(R) is semisimple and J(R) is a nilpotent ideal, where J(R) denotes the Jacobson radical. The theorem states that if R is a semiprimary ring and M is an R module, the three module conditions Noetherian, Artinian and "has a composition series" are equivalent. Without the semiprimary condition, the only true implication is that if M has a composition series, then M is both Noetherian and Artinian. - -The theorem takes its current form from a paper by Charles Hopkins and a paper by Jacob Levitzki, both in 1939. For this reason it is often cited as the Hopkins–Levitzki theorem. However Yasuo Akizuki is sometimes included since he proved the result for commutative rings a few years earlier, in 1935. - -Since it is known that right Artinian rings are semiprimary, a direct corollary of the theorem is: a right Artinian ring is also right Noetherian. The analogous statement for left Artinian rings holds as well. This is not true in general for Artinian modules, because there are examples of Artinian modules which are not Noetherian. - -Another direct corollary is that if R is right Artinian, then R is left Artinian if and only if it is left Noetherian. - -Here is the proof of the following: Let R be a semiprimary ring and M a left R-module. If M is either Artinian or Noetherian, then M has a composition series. (The converse of this is true over any ring.) - -Let J be the radical of R. Set $F_i = J^{i-1}M/J^iM$. The R module $F_i$ may then be viewed as an $R/J$-module because J is contained in the annihilator of $F_i$. Each $F_i$ is a semisimple $R/J$-module, because $R/J$ is a semisimple ring. Furthermore, since J is nilpotent, only finitely many of the $F_i$ are nonzero. If M is Artinian (or Noetherian), then $F_i$ has a finite composition series. Stacking the composition series from the $F_i$ end to end, we obtain a composition series for M. - -Several generalizations and extensions of the theorem exist. One concerns Grothendieck categories: if G is a Grothendieck category with an Artinian generator, then every Artinian object in G is Noetherian. diff --git a/wiki/wikipedia/4110.txt b/wiki/wikipedia/4110.txt deleted file mode 100644 index 3b2c528d85a8f4c02cc0160ede18710b5154195c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4110.txt +++ /dev/null @@ -1,3 +0,0 @@ -In abstract algebra, the group isomorphism problem is the decision problem of determining whether two given finite group presentations present isomorphic groups. 
- -The isomorphism problem was identified by Max Dehn in 1911 as one of three fundamental decision problems in group theory; the other two being the word problem and the conjugacy problem. All three problems are undecidable: there does not exist a computer algorithm that correctly solves every instance of the isomorphism problem, or of the other two problems, regardless of how much time is allowed for the algorithm to run. In fact the problem of deciding whether a group is trivial is undecidable, a consequence of the Adian–Rabin theorem due to Sergei Adian and Michael O. Rabin. diff --git a/wiki/wikipedia/4111.txt b/wiki/wikipedia/4111.txt deleted file mode 100644 index 92fad4e6599f6587800e1b77d089fb0a7c7af9a5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4111.txt +++ /dev/null @@ -1,59 +0,0 @@ -High-multiplicity bin packing is a special case of the bin packing problem, in which the number of different item-sizes is small, while the number of items with each size is large. While the general bin-packing problem is NP-hard, the high-multiplicity setting can be solved in polynomial time, assuming that the number of different sizes is a fixed constant. - -The inputs to the problem are positive integers: - -* d - the number of different sizes (also called the dimension of the problem); - -*B - the bin capacity. - -* s1, ..., sd - the sizes. The vector of sizes is denoted by s. - -* n1, ..., nd - the multiplicities; ni is the number of items with size si. The vector of multiplicities is denoted by n. - -**n denotes the total number of items, that is, n = n1+...+nd. - -**V denotes the largest integer appearing in the description of the problem, that is, V = max(s1, ..., sd, n1, ..., nd, B) - -The output is a packing - an assignment of the items to bins, such that the total size of items in each bin is at most B, and subject to this, the number of bins is as small as possible. - -Example: suppose d=2, s1=30, s2=40, n1=n2=5, B=120. So there are n=10 items with sizes: 30,30,30,30,30,40,40,40,40,40. Then, a possible packing is: {30,30,30,30}, {40,40,40}, {30,40,40}, which uses 3 bins. - -A configuration is a set of items that can fit into a single bin. It can be represented by a vector of d integers, denoting the multiplicities of the different sizes in the configuration. Formally, for each configuration c we define an integer vector ac=ac,1, ..., ac,d such that ac ≤ n and ac·s ≤ B. - -In the above example, one of the configurations is c={30,40,40}, since 1*30+2*40 ≤ 120. Its corresponding vector is ac=(1,2). Other configuration vectors are (4,0), (3,0), (2,0), (2,1), (1,0), (1,1), (1,2), (0,1), (0,2), (0,3). If we had only three items of size 3, then we could not use the (4,0) configuration. - -It is possible to present the problem using the configuration linear program: for each configuration c, there is a variable xc, denoting the number of bins in which c is used. The total number of bins used is simply the sum of xc over all configurations, denoted by 1·x. The total number of items used from each size is the sum of the vectors ac · xc over all configurations c. Then, the problem is to minimize 1·x such that the sum of ac · xc, over all configurations c, is at least n, so that all items are packed. - -Suppose first that all items are large, that is, every si is at least e·B for some fraction e>0. Then, the total number of items in each bin is at most 1/e, so the total number of configuration is at most d1/e. Each configuration appears at most n times. 
Therefore, there are at most $ n^{d^{1/e}}$ combinations to check. For each combination, we have to check d constraints (one for each size), so the run-time is $d\cdot n^{d^{1/e}}$, which is polynomial in n when d, e are constant. - -The main problem with this algorithm (besides the fact that it works only when the items are large) is that its runtime is polynomial in n, but the length of the input (in binary representation) is linear in log(V), which is of the order of magnitude of log(n). - -Filippi and Agnetis presented an algorithm that finds a solution with at most OPT+d-2 bins in time O(poly(log V)). In particular, for d=2 different sizes, their algorithm finds an optimal solution in time O(log V). - -Goemans and Rothvoss presented an algorithm for any fixed d, that finds the optimal solution when all numbers are given in binary encoding. Their algorithm solves the following problem: given two d-dimensional polytopes P and Q, find the minimum number of integer points in P whose sum lies in Q. Their algorithm runs in time $(\log V)^{2^{O(d)}}$. Their algorithm can be adapted to other problems, such as Identical-machines scheduling and unrelated-machines scheduling with various constraints. - -Several approximation algorithms for the general bin-packing problem use the following scheme: - -* Separate the items into "small" (smaller than eB, for some fraction e in (0,1)) and "large" (at least eB). - -* Handle the large items first: - -** Round the item sizes in some way, such that the number of different sizes is at most some constant d. - -** Solve the resulting high-multiplicity problem. - -* Allocate the small items greedily, e.g. with next-fit bin packing. If no new bins are created, then we are done. If new bins are created, this means that all bins (except maybe the last one) are full up to at least (1-e)B. Therefore, the number of bins is at most OPT/(1-e)+1 ≤ (1+2e)OPT+1. - -The algorithms differ in how they round the instance. - -De-la-Vega and Lueker invented the idea of adaptive input rounding. Order the items by their size, and group them into 1/e2 groups of cardinality ne2. In each group, round the sizes upwards to the maximum size in the group. Now, there are only d=1/e2 different sizes. The solution of the rounded instance is feasible for the original instance too, but the number of bins may be larger than necessary. To quantify the loss, consider the instance rounded down to the maximum size in the previous group (the first group is rounded down to 0). The rounded-down instance D is almost equal to the rounded-up instance U, except that in D there are some ne2 zeros while in U there are some ne2 large items instead; but their size is at most B. Therefore, U requires at most ne2 more bins than D. Since D requires fewer bins than the optimum, we get that Bins(U) ≤ OPT + ne2, that is, we have an additive error that can be made as small as we like by choosing e. - -If all items are large (of size at least eB), then each bin in OPT contains at most 1/e items (of size at least eB), so OPT must be at least en. Therefore, Bins(U) ≤ (1+e)OPT. After handling the small items, we get at most $(1+2e)\mathrm{OPT}+1$. - -Karmarkar and Karp present a more efficient rounding method which they call geometric rounding (in contrast to the linear rounding of de-la-Vega and Lueker). Based on these innovations, they present an algorithm with run-time polynomial in $n$ and $1/\varepsilon$. Their algorithm finds a solution with size at most $\mathrm{OPT} + \mathcal{O}(\log^2(\mathrm{OPT}))$. - -This technique was later improved by several authors: - -* Rothvoss presented an algorithm that generates a solution with size at most $\mathrm{OPT} + O(\log(\mathrm{OPT})\cdot \log\log(\mathrm{OPT}))$. - -* Hoberg and Rothvoss improved this algorithm to generate a solution with size at most $\mathrm{OPT} + O(\log(\mathrm{OPT}))$. The algorithm is randomized, and its running-time is polynomial in the total number of items.
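The configuration vectors defined earlier in this article can be enumerated by brute force for small instances. The following sketch (our own code, not from the cited papers) reproduces the ten configuration vectors of the running example with sizes 30 and 40, multiplicities 5 and 5, and bin capacity B = 120:

```python
from itertools import product

B, sizes, mult = 120, [30, 40], [5, 5]

# All nonzero vectors a = (a_1, ..., a_d) with a <= n and a . s <= B.
ranges = [range(min(m, B // s) + 1) for s, m in zip(sizes, mult)]
configs = [c for c in product(*ranges)
           if any(c) and sum(a * s for a, s in zip(c, sizes)) <= B]
print(sorted(configs))
# [(0, 1), (0, 2), (0, 3), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (3, 0), (4, 0)]
```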
diff --git a/wiki/wikipedia/4112.txt b/wiki/wikipedia/4112.txt deleted file mode 100644 index 2b007ac75644bb55b43501f494a19beb2285e79e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4112.txt +++ /dev/null @@ -1,32 +0,0 @@ -In number theory, Selberg's identity is an approximate identity involving logarithms of primes found by Atle Selberg. Selberg and Erdős both used this identity to give elementary proofs of the prime number theorem. - -There are several different but equivalent forms of Selberg's identity. One form is -$$ -\sum_{pα, and U is a unitary operator (possibly vacuous). The family {Hα} consists of isomorphic Hilbert spaces. - -A proof can be sketched as follows. Successive applications of V give a descending sequence of copies of H isomorphically embedded in itself: -$$ -H = H \supset V(H) \supset V^2 (H) \supset \cdots = H_0 \supset H_1 \supset H_2 \supset \cdots, -$$ - -where V(H) denotes the range of V. The above defined Hi = Vi(H). If one defines -$$ -M_i = H_i \ominus H_{i+1} = V^i (H \ominus V(H)) \quad \text{for} \quad i \geq 0 , -$$ - -then -$$ -H = (\oplus_{i \geq 0} M_i) \oplus (\cap_{i \geq 0} H_i) = K_1 \oplus K_2. -$$ - -It is clear that K1 and K2 are invariant subspaces of V. - -So V(K2) = K2; indeed, since V is injective and the Hi are decreasing, V(∩i≥0 Hi) = ∩i≥0 V(Hi) = ∩i≥1 Hi = K2. In other words, V restricted to K2 is a surjective isometry, i.e., a unitary operator U. - -Furthermore, each Mi is isomorphic to another, with V being an isomorphism between Mi and Mi+1: V "shifts" Mi to Mi+1. Suppose the dimension of each Mi is some cardinal number α. We see that K1 can be written as a direct sum of Hilbert spaces -$$ -K_1 = \oplus H_{\alpha} -$$ - -where each Hα is an invariant subspace of V and V restricted to each Hα is the unilateral shift S. Therefore -$$ -V = V \vert_{K_1} \oplus V\vert_{K_2} = (\oplus_{\alpha \in A} S) \oplus U, -$$ - -which is a Wold decomposition of V. - -It is immediate from the Wold decomposition that the spectrum of any proper, i.e. non-unitary, isometry is the unit disk in the complex plane. - -An isometry V is said to be pure if, in the notation of the above proof, ∩i≥0 Hi = {0}. The multiplicity of a pure isometry V is the dimension of the kernel of V*, i.e. the cardinality of the index set A in the Wold decomposition of V. In other words, a pure isometry of multiplicity N takes the form -$$ -V = \oplus_{1 \le \alpha \le N} S . -$$ - -In this terminology, the Wold decomposition expresses an isometry as a direct sum of a pure isometry and a unitary operator. - -A subspace M is called a wandering subspace of V if Vn(M) ⊥ Vm(M) for all n ≠ m. In particular, each Mi defined above is a wandering subspace of V. - -The decomposition above can be generalized slightly to a sequence of isometries, indexed by the integers. - -Consider an isometry V ∈ L(H). Denote by C*(V) the C*-algebra generated by V, i.e. C*(V) is the norm closure of polynomials in V and V*. The Wold decomposition can be applied to characterize C*(V). - -Let C(T) be the continuous functions on the unit circle T.
We recall that the C*-algebra C*(S) generated by the unilateral shift S takes the following form - -C*(S) = {Tf + K | Tf is a Toeplitz operator with continuous symbol f ∈ C(T) and K is a compact operator}. - -In this identification, S = Tz where z is the identity function in C(T). The algebra C*(S) is called the Toeplitz algebra. - -Theorem (Coburn) C*(V) is isomorphic to the Toeplitz algebra and V is the isomorphic image of Tz. - -The proof hinges on the connection with C(T) in the description of the Toeplitz algebra, and on the fact that the spectrum of a unitary operator is contained in the circle T. - -The following properties of the Toeplitz algebra will be needed: - -#$T_f + T_g = T_{f+g}.$ - -#$ T_f ^* = T_{\bar{f}} .$ - -#The semicommutator $T_fT_g - T_{fg} $ is compact. - -The Wold decomposition says that V is the direct sum of copies of Tz and then some unitary U: -$$ -V = (\oplus_{\alpha \in A} T_z) \oplus U. -$$ - -So we invoke the continuous functional calculus f → f(U), and define -$$ -\Phi : C^*(S) \rightarrow C^*(V) \quad \text{by} \quad \Phi(T_f + K) = \oplus_{\alpha \in A} (T_f + K) \oplus f(U). -$$ - -One can now verify Φ is an isomorphism that maps the unilateral shift to V: - -By property 1 above, Φ is linear. The map Φ is injective because Tf is not compact for any non-zero f ∈ C(T) and thus Tf + K = 0 implies f = 0. Since the range of Φ is a C*-algebra, Φ is surjective by the minimality of C*(V). Property 2 and the continuous functional calculus ensure that Φ preserves the *-operation. Finally, the semicommutator property shows that Φ is multiplicative. Therefore the theorem holds. diff --git a/wiki/wikipedia/4117.txt b/wiki/wikipedia/4117.txt deleted file mode 100644 index 566b365f1e18a9c5669b35859aac787f0f77e456..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4117.txt +++ /dev/null @@ -1,37 +0,0 @@ -In mathematical folklore, the "no free lunch" (NFL) theorem (sometimes pluralized) of David Wolpert and William Macready appears in their 1997 paper "No Free Lunch Theorems for Optimization". Wolpert had previously derived no free lunch theorems for machine learning (statistical inference). - -In 2005, Wolpert and Macready themselves indicated that the first theorem in their paper "state[s] that any two optimization algorithms are equivalent when their performance is averaged across all possible problems". - -The "no free lunch" (NFL) theorem is an easily stated and easily understood consequence of theorems Wolpert and Macready actually prove. It is weaker than the proven theorems, and thus does not encapsulate them. Various investigators have extended the work of Wolpert and Macready substantively. See No free lunch in search and optimization for treatment of the research area. - -While some scholars argue that NFL conveys important insight, others argue that NFL is of little relevance to machine learning research. - -Posit a toy universe that exists for exactly two days and on each day contains exactly one object, a square or a triangle. The universe has exactly four possible histories: - -# (square, triangle): the universe contains a square on day 1, and a triangle on day 2 - -# (square, square) - -# (triangle, triangle) - -# (triangle, square) - -Any prediction strategy that succeeds for history #2, by predicting a square on day 2 if there is a square on day 1, will fail on history #1, and vice versa. If all histories are equally likely, then any prediction strategy will score the same, with the same accuracy rate of 0.5.
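The toy universe can be checked exhaustively. This Python sketch (purely illustrative, not code from Wolpert and Macready) enumerates all four equally likely histories and all deterministic day-2 prediction rules, confirming that every rule attains the same average accuracy of 0.5:

```
from itertools import product

shapes = ["square", "triangle"]

# The four possible histories (day-1 object, day-2 object).
histories = list(product(shapes, shapes))

# A deterministic prediction rule maps the day-1 observation to a guess
# for day 2; there are 2^2 = 4 such rules.
rules = [dict(zip(shapes, guesses)) for guesses in product(shapes, shapes)]

for rule in rules:
    # Average accuracy over all histories, each equally likely.
    accuracy = sum(rule[day1] == day2 for day1, day2 in histories) / len(histories)
    print(rule, "->", accuracy)  # every rule scores exactly 0.5
```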
- -Wolpert and Macready give two NFL theorems that are closely related to the folkloric theorem. In their paper, they state: - -The first theorem hypothesizes objective functions that do not change while optimization is in progress, and the second hypothesizes objective functions that may change. - -To illustrate one of the counter-intuitive implications of NFL, suppose we fix two supervised learning algorithms, C and D. We then sample a target function f to produce a set of input-output pairs, d. How should we choose whether to train C or D on d, in order to make predictions for what output would be associated with a point lying outside of d? - -It is common in almost all of science and statistics to answer this question – to choose between C and D – by running cross-validation on d with those two algorithms. In other words, to decide whether to generalize from d with either C or D, we see which of them has better out-of-sample performance when tested within d. - -Since C and D are fixed, this use of cross-validation to choose between them is itself an algorithm, i.e., a way of generalizing from an arbitrary dataset. Call this algorithm A. (Arguably, A is a simplified model of the scientific method itself.) - -We could also use anti-cross-validation to make our choice. In other words, we could choose between C and D based on which has worse out-of-sample performance within d. Again, since C and D are fixed, this use of anti-cross-validation is itself an algorithm. Call that algorithm B. - -NFL tells us (loosely speaking) that B must beat A on just as many target functions (and associated datasets d) as A beats B. In this very specific sense, the scientific method will lose to the "anti" scientific method just as readily as it wins. - -NFL only applies if the target function is chosen from a uniform distribution of all possible functions. If this is not the case, and certain target functions are more likely to be chosen than others, then A may perform better than B overall. The contribution of NFL is that it tells us choosing an appropriate algorithm requires making assumptions about the kinds of target functions the algorithm is being used for. With no assumptions, no "meta-algorithm", such as the scientific method, performs better than random choice. - -If Occam's razor is correct, for example if sequences of lower Kolmogorov complexity are more probable than sequences of higher complexity, then (as is observed in real life) some algorithms, such as cross-validation, perform better on average on practical problems (when compared with random choice or with anti-cross-validation). diff --git a/wiki/wikipedia/4118.txt b/wiki/wikipedia/4118.txt deleted file mode 100644 index 447d82915efd9d3ef9e3f1ec6b1ba1f214e3b442..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4118.txt +++ /dev/null @@ -1,37 +0,0 @@ -In mathematics, a representation theorem is a theorem that states that every abstract structure with certain properties is isomorphic to another (abstract or concrete) structure. - -* Cayley's theorem states that every group is isomorphic to a permutation group. - -* Representation theory studies properties of abstract groups via their representations as linear transformations of vector spaces. - -*Stone's representation theorem for Boolean algebras states that every Boolean algebra is isomorphic to a field of sets.
- -*: A variant, Stone's representation theorem for lattices, states that every distributive lattice is isomorphic to a sublattice of the power set lattice of some set. - -*: Another variant states that there exists a duality (in the sense of an arrow-reversing equivalence) between the categories of Boolean algebras and that of Stone spaces. - -* The Poincaré–Birkhoff–Witt theorem states that every Lie algebra embeds into the commutator Lie algebra of its universal enveloping algebra. - -*Ado's theorem states that every finite-dimensional Lie algebra over a field of characteristic zero embeds into the Lie algebra of endomorphisms of some finite-dimensional vector space. - -*Birkhoff's HSP theorem states that every model of an algebra A is the homomorphic image of a subalgebra of a direct product of copies of A. - -* In the study of semigroups, the Wagner–Preston theorem provides a representation of an inverse semigroup S as a homomorphic image of the semigroup of partial bijections on S, with the semigroup operation given by composition. - -* The Yoneda lemma provides a full and faithful limit-preserving embedding of any category into a category of presheaves. - -*Mitchell's embedding theorem for abelian categories realises every small abelian category as a full (and exactly embedded) subcategory of a category of modules over some ring. - -*Mostowski's collapsing theorem states that every well-founded extensional structure is isomorphic to a transitive set with the ∈-relation. - -* One of the fundamental theorems in sheaf theory states that every sheaf over a topological space can be thought of as a sheaf of sections of some (étalé) bundle over that space: the categories of sheaves on a topological space and that of étalé spaces over it are equivalent, where the equivalence is given by the functor that sends a bundle to its sheaf of (local) sections. - -* The Gelfand–Naimark–Segal construction embeds any C*-algebra in an algebra of bounded operators on some Hilbert space. - -* The Gelfand representation (also known as the commutative Gelfand–Naimark theorem) states that any commutative C*-algebra is isomorphic to an algebra of continuous functions on its Gelfand spectrum. It can also be seen as giving a duality between the category of commutative C*-algebras and that of compact Hausdorff spaces. - -* The Riesz–Markov–Kakutani representation theorem is actually a list of several theorems; one of them identifies the dual space of C0(X) with the set of regular measures on X. - -* The Whitney embedding theorems embed any abstract manifold in some Euclidean space. - -* The Nash embedding theorem embeds an abstract Riemannian manifold isometrically in a Euclidean space. diff --git a/wiki/wikipedia/4119.txt b/wiki/wikipedia/4119.txt deleted file mode 100644 index fc8d80f87504047919a1cd2f55c9fd50db83313b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4119.txt +++ /dev/null @@ -1,13 +0,0 @@ -Image segmentation strives to partition a digital image into regions of pixels with similar properties, e.g. homogeneity. The higher-level region representation simplifies image analysis tasks such as counting objects or detecting changes, because region attributes (e.g. average intensity or shape) can be compared more readily than raw pixels. - -To speed up segmentation of large images, the work could be divided among several CPUs. One means of accomplishing this involves splitting images into tiles that are processed independently.
However, regions that straddle a tile border might be split up or lost if the fragments do not meet the segmentation algorithm's minimum size requirements. A trivial workaround involves overlapping tiles, i.e. allowing each processor to consider additional pixels around its tile's border. Unfortunately this increases the computational load, since the processors on both sides of a tile boundary are performing redundant work. Also, only objects smaller than the tile overlap are guaranteed to be preserved, which means that long objects such as rivers in aerial imagery are still likely to be split. In some cases, the independent tiles' results can be fused to approximate the true results. - -An alternative exists in the form of graph-based segmentation methods. The connectivity information inherent to graphs allows performing independent work on parts of the original image, and reconnecting them to yield an exact result as if processing had occurred globally. - -The possibility of stitching together independent sub-images motivates adding connectivity information to the pixels. This can be viewed as a graph, the nodes of which are pixels, and edges represent connections between pixels. A simple and comparatively space-efficient variant of this is a grid graph, whereby each pixel is connected to its neighbors in the four cardinal directions. Since the pixel neighborhood relation is symmetric, the resulting graph is undirected and only half its edges (e.g. each pixel's eastern and southern neighbor) need be stored. The final step calls for encoding pixel similarity information in edge weights, so that the original image is no longer needed. In the simplest case, edge weights are computed as the difference of pixel intensities. - -A minimum spanning tree (MST) is a minimum-weight, cycle-free subset of a graph's edges such that all nodes are connected. In 2004, Felzenszwalb introduced a segmentation method based on Kruskal's MST algorithm. Edges are considered in increasing order of weight; their endpoint pixels are merged into a region if this doesn't cause a cycle in the graph, and if the pixels are 'similar' to the existing regions' pixels. Detecting cycles is possible in near-constant time with the aid of a disjoint-set data structure. Pixel similarity is judged by a heuristic, which compares the weight to a per-segment threshold. The algorithm outputs multiple disjoint MSTs, i.e. a forest; each tree corresponds to a segment. The complexity of the algorithm is quasi-linear because sorting edges is possible in linear time via counting sort. - -In 2009, Wassenberg et al. developed an algorithm that computes multiple independent minimum spanning forests and then stitches them together. This enables parallel processing without splitting objects on tile borders. Instead of a fixed weight threshold, an initial connected-component labeling is used to estimate a lower bound on the threshold, which can reduce both over- and undersegmentation. Measurements show that the implementation outperforms Felzenszwalb's sequential algorithm by an order of magnitude. - -In 2017, Saglam and Baykan used Prim's sequential minimum-spanning-tree algorithm, constructing the MST with the Fibonacci heap data structure, and proposed a new cutting criterion for image segmentation. The method performs well on the test images, with fast execution times.
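The Kruskal-style merging scheme described above is compact enough to sketch. The following Python fragment is a simplified illustration, not Felzenszwalb's reference implementation; the per-segment threshold k/size is the adaptive criterion from his method, with k a free parameter chosen here arbitrarily:

```
def segment(image, k=300.0):
    """Felzenszwalb-style graph segmentation on a 2D grayscale array."""
    h, w = len(image), len(image[0])
    n = h * w
    parent = list(range(n))
    size = [1] * n
    internal = [0.0] * n  # largest edge weight inside each component

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Build the 4-connected grid graph; store only east/south edges.
    edges = []
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                edges.append((abs(image[y][x] - image[y][x + 1]),
                              y * w + x, y * w + x + 1))
            if y + 1 < h:
                edges.append((abs(image[y][x] - image[y + 1][x]),
                              y * w + x, (y + 1) * w + x))

    # Kruskal order: consider edges by increasing weight.
    for wgt, a, b in sorted(edges):
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        # Merge only if the edge is 'similar' to both components:
        # weight <= internal difference + k/size for each side.
        if wgt <= min(internal[ra] + k / size[ra], internal[rb] + k / size[rb]):
            parent[rb] = ra
            size[ra] += size[rb]
            internal[ra] = max(internal[ra], internal[rb], wgt)

    return [find(i) for i in range(n)]  # component label per pixel

# Two flat regions separated by a strong intensity edge stay separate.
img = [[10, 10, 10, 200, 200],
       [10, 10, 10, 200, 200],
       [10, 10, 10, 200, 200]]
print(len(set(segment(img))), "segments")  # -> 2
```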
diff --git a/wiki/wikipedia/412.txt b/wiki/wikipedia/412.txt deleted file mode 100644 index 3338899526cdd820b0fcd405d2138b73167e8bba..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/412.txt +++ /dev/null @@ -1,88 +0,0 @@ -The infinite monkey theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type any given text, such as the complete works of William Shakespeare. In fact, the monkey would almost surely type every possible finite text an infinite number of times. However, the probability that monkeys filling the entire observable universe would type a single complete work, such as Shakespeare's Hamlet, is so tiny that the chance of it occurring during a period of time hundreds of thousands of orders of magnitude longer than the age of the universe is extremely low (but technically not zero). The theorem can be generalized to state that any sequence of events which has a non-zero probability of happening will, as long as it has not yet occurred, almost certainly eventually occur. - -In this context, "almost surely" is a mathematical term meaning the event happens with probability 1, and the "monkey" is not an actual monkey, but a metaphor for an abstract device that produces an endless random sequence of letters and symbols. One of the earliest instances of the use of the "monkey metaphor" is that of French mathematician Émile Borel in 1913. Suppose the typewriter has 50 keys and the monkey's output is read as a sequence of six-character blocks: each block is the word banana with probability $(1/50)^6$, so as the number of independently typed blocks grows, the chance of typing banana approaches 100%. Thus, the probability of the word banana appearing at some point in an infinite sequence of keystrokes is equal to one. - -The same argument applies if we replace one monkey typing n consecutive blocks of text with n monkeys each typing one block (simultaneously and independently). In this case, $X_n = (1 - (1/50)^6)^n$ is the probability that none of the first n monkeys types banana correctly on their first try. Therefore, at least one of infinitely many monkeys will (with probability equal to one) produce a text as quickly as it would be produced by a perfectly accurate human typist copying it from the original. - -This can be stated more generally and compactly in terms of strings, which are sequences of characters chosen from some finite alphabet: - -* Given an infinite string where each character is chosen uniformly at random, any given finite string almost surely occurs as a substring at some position. - -* Given an infinite sequence of infinite strings, where each character of each string is chosen uniformly at random, any given finite string almost surely occurs as a prefix of one of these strings. - -Both follow easily from the second Borel–Cantelli lemma. For the second theorem, let Ek be the event that the kth string begins with the given text. Because this has some fixed nonzero probability p of occurring, the Ek are independent, and the below sum diverges, -$$ -\sum_{k=1}^\infty P(E_k) = \sum_{k=1}^\infty p = \infty, -$$ - -so the probability that infinitely many of the Ek occur is 1. The first theorem is shown similarly; one can divide the random string into nonoverlapping blocks matching the size of the desired text, and make Ek the event where the kth block equals the desired string. - -However, for physically meaningful numbers of monkeys typing for physically meaningful lengths of time, the results are reversed.
If there were as many monkeys as there are atoms in the observable universe typing extremely fast for trillions of times the life of the universe, the probability of the monkeys replicating even a single page of Shakespeare is unfathomably small. - -Ignoring punctuation, spacing, and capitalization, a monkey typing letters uniformly at random has a chance of one in 26 of correctly typing the first letter of Hamlet. It has a chance of one in 676 (26 × 26) of typing the first two letters. Because the probability shrinks exponentially, at 20 letters it already has only a chance of one in 26^20 = 19,928,148,895,209,409,152,340,197,376 (almost 2 × 10^28). In the case of the entire text of Hamlet, the probabilities are so vanishingly small as to be inconceivable. The text of Hamlet contains approximately 130,000 letters. Thus there is a probability of one in 3.4 × 10^183,946 to get the text right at the first trial. The average number of letters that needs to be typed until the text appears is also 3.4 × 10^183,946, or including punctuation, 4.4 × 10^360,783. - -Even if every proton in the observable universe were a monkey with a typewriter, typing from the Big Bang until the end of the universe (when protons might no longer exist), they would still need a far greater amount of time – more than three hundred and sixty thousand orders of magnitude longer – to have even a 1 in 10^500 chance of success. To put it another way, for a one in a trillion chance of success, there would need to be 10^360,641 observable universes made of protonic monkeys. As Kittel and Kroemer put it in their textbook on thermodynamics, the field whose statistical foundations motivated the first known expositions of typing monkeys, "The probability of Hamlet is therefore zero in any operational sense of an event ...", and the statement that the monkeys must eventually succeed "gives a misleading conclusion about very, very large numbers." - -In fact there is less than a one in a trillion chance of success that such a universe made of monkeys could type any particular document a mere 79 characters long.
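These figures are easy to reproduce. A short Python check (illustrative; it uses the approximate 130,000-letter count quoted above) confirms the orders of magnitude:

```
import math

# One in 26^20 chance of the first 20 letters of Hamlet (26-letter alphabet).
print(f"26^20 = {26**20:,}")  # 19,928,148,895,209,409,152,340,197,376 ~ 2 x 10^28

# One trial of all ~130,000 letters: 26^-130000, one in about 3.4 x 10^183,946.
exponent = 130_000 * math.log10(26)
print(f"26^130000 is about {10 ** (exponent % 1):.1f} x 10^{int(exponent):,}")
```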
- -The probability that an infinite randomly generated string of text will contain a particular finite substring is 1. However, this does not mean the substring's absence is "impossible", despite the absence having a prior probability of 0. For example, the immortal monkey could randomly type G as its first letter, G as its second, and G as every single letter thereafter, producing an infinite string of Gs; at no point must the monkey be "compelled" to type anything else. (To assume otherwise implies the gambler's fallacy.) However long a randomly generated finite string is, there is a small but nonzero chance that it will turn out to consist of the same character repeated throughout; this chance approaches zero as the string's length approaches infinity. There is nothing special about such a monotonous sequence except that it is easy to describe; the same fact applies to any nameable specific sequence, such as "RGRGRG" repeated forever, or "a-b-aa-bb-aaa-bbb-...", or "Three, Six, Nine, Twelve…". - -If the hypothetical monkey has a typewriter with 90 equally likely keys that include numerals and punctuation, then the first typed keys might be "3.14" (the first three digits of pi) with a probability of (1/90)^4, which is 1/65,610,000. Equally probable is any other string of four characters allowed by the typewriter, such as "GGGG", "mATh", or "q%8e". The probability that 100 randomly typed keys will consist of the first 99 digits of pi (including the separator key), or any other particular sequence of that length, is much lower: (1/90)^100. If the monkey's allotted length of text is infinite, the chance of typing only the digits of pi is 0, which is just as possible (mathematically probable) as typing nothing but Gs (also probability 0). - -The same applies to the event of typing a particular version of Hamlet followed by endless copies of itself; or Hamlet immediately followed by all the digits of pi; these specific strings are equally infinite in length, they are not prohibited by the terms of the thought problem, and they each have a prior probability of 0. In fact, any particular infinite sequence the immortal monkey types will have had a prior probability of 0, even though the monkey must type something. - -This is an extension of the principle that a finite string of random text has a lower and lower probability of being a particular string the longer it is (though all specific strings are equally unlikely). This probability approaches 0 as the string approaches infinity. Thus, the probability of the monkey typing an endlessly long string, such as all of the digits of pi in order, on a 90-key keyboard is (1/90)^∞, which equals 1/∞, which is essentially 0. At the same time, the probability that the sequence contains a particular subsequence (such as the word MONKEY, or the 12th through 999th digits of pi, or a version of the King James Bible) increases as the total string increases. This probability approaches 1 as the total string approaches infinity, and thus the original theorem is correct. - -In a simplification of the thought experiment, the monkey could have a typewriter with just two keys: 1 and 0. The infinitely long string thus produced would correspond to the binary digits of a particular real number between 0 and 1. Countably infinitely many of the possible strings end in infinite repetition, which means the corresponding real number is rational. Examples include the strings corresponding to one-third (010101...), five-sixths (11010101...) and five-eighths (1010000...). Only a subset of such real number strings (albeit a countably infinite subset) contains the entirety of Hamlet (assuming that the text is subjected to a numerical encoding, such as ASCII). - -Meanwhile, there is an uncountably infinite set of strings which do not end in such repetition; these correspond to the irrational numbers. These can be sorted into two uncountably infinite subsets: those which contain Hamlet and those which do not. However, the "largest" subset of all the real numbers consists of those which not only contain Hamlet, but which contain every other possible string of any length, and with equal distribution of such strings. These irrational numbers are called normal. Because almost all numbers are normal, almost all possible strings contain all possible finite substrings. Hence, the probability of the monkey typing a normal number is 1. The same principles apply regardless of the number of keys from which the monkey can choose; a 90-key keyboard can be seen as a generator of numbers written in base 90. - -One of the forms in which probabilists now know this theorem, with its "dactylographic" [i.e., typewriting] monkeys (the French word singe covers both monkeys and apes), appeared in Émile Borel's 1913 article "Mécanique Statistique et Irréversibilité" (Statistical mechanics and irreversibility), and in his book "Le Hasard" in 1914.
His "monkeys" are not actual monkeys; rather, they are a metaphor for an imaginary way to produce a large, random sequence of letters. Borel said that if a million monkeys typed ten hours a day, it was extremely unlikely that their output would exactly equal all the books of the richest libraries of the world; and yet, in comparison, it was even more unlikely that the laws of statistical mechanics would ever be violated, even briefly. - -The physicist Arthur Eddington drew on Borel's image further in The Nature of the Physical World (1928), writing: - -These images invite the reader to consider the incredible improbability of a large but finite number of monkeys working for a large but finite amount of time producing a significant work, and compare this with the even greater improbability of certain physical events. Any physical process that is even less likely than such monkeys' success is effectively impossible, and it may safely be said that such a process will never happen. - -Not only did the monkeys produce nothing but five total pages largely consisting of the letter "S", - -In his 1931 book The Mysterious Universe, Eddington's rival James Jeans attributed the monkey parable to a "Huxley", presumably meaning Thomas Henry Huxley. This attribution is incorrect. Today, it is sometimes further reported that Huxley applied the example in a now-legendary debate over Charles Darwin's On the Origin of Species with the Anglican Bishop of Oxford, Samuel Wilberforce, held at a meeting of the British Association for the Advancement of Science at Oxford on 30 June 1860. This story suffers not only from a lack of evidence, but the fact that in 1860 the typewriter itself had yet to emerge. - -Despite the original mix-up, monkey-and-typewriter arguments are now common in arguments over evolution. As an example of Christian apologetics Doug Powell argued that even if a monkey accidentally types the letters of Hamlet, it has failed to produce Hamlet because it lacked the intention to communicate. His parallel implication is that natural laws could not produce the information content in DNA. A more common argument is represented by Reverend John F. MacArthur, who claimed that the genetic mutations necessary to produce a tapeworm from an amoeba are as unlikely as a monkey typing Hamlet's soliloquy, and hence the odds against the evolution of all life are impossible to overcome. - -Evolutionary biologist Richard Dawkins employs the typing monkey concept in his book The Blind Watchmaker to demonstrate the ability of natural selection to produce biological complexity out of random mutations. In a simulation experiment Dawkins has his weasel program produce the Hamlet phrase METHINKS IT IS LIKE A WEASEL, starting from a randomly typed parent, by "breeding" subsequent generations and always choosing the closest match from progeny that are copies of the parent, with random mutations. The chance of the target phrase appearing in a single step is extremely small, yet Dawkins showed that it could be produced rapidly (in about 40 generations) using cumulative selection of phrases. The random choices furnish raw material, while cumulative selection imparts information. As Dawkins acknowledges, however, the weasel program is an imperfect analogy for evolution, as "offspring" phrases were selected "according to the criterion of resemblance to a distant ideal target." In contrast, Dawkins affirms, evolution has no long-term plans and does not progress toward some distant goal (such as humans). 
The weasel program is instead meant to illustrate the difference between non-random cumulative selection, and random single-step selection. In terms of the typing monkey analogy, this means that Romeo and Juliet could be produced relatively quickly if placed under the constraints of a nonrandom, Darwinian-type selection because the fitness function will tend to preserve in place any letters that happen to match the target text, improving each successive generation of typing monkeys. - -A different avenue for exploring the analogy between evolution and an unconstrained monkey lies in the problem that the monkey types only one letter at a time, independently of the other letters. Hugh Petrie argues that a more sophisticated setup is required, in his case not for biological evolution but the evolution of ideas. - -James W. Valentine, while admitting that the classic monkey's task is impossible, finds that there is a worthwhile analogy between written English and the metazoan genome in this other sense: both have "combinatorial, hierarchical structures" that greatly constrain the immense number of combinations at the alphabet level. - -R. G. Collingwood argued in 1938 that art cannot be produced by accident, adding a sarcastic aside aimed at his critics. - -Nelson Goodman took the contrary position, illustrating his point along with Catherine Elgin by the example of Borges' "Pierre Menard, Author of the Quixote". - -In another writing, Goodman elaborates, "That the monkey may be supposed to have produced his copy randomly makes no difference. It is the same text, and it is open to all the same interpretations. ..." Gérard Genette dismisses Goodman's argument as begging the question. - -For Jorge J. E. Gracia, the question of the identity of texts leads to a different question, that of author. If a monkey is capable of typing Hamlet, despite having no intention of meaning and therefore disqualifying itself as an author, then it appears that texts do not require authors. Possible solutions include saying that whoever finds the text and identifies it as Hamlet is the author; or that Shakespeare is the author, the monkey his agent, and the finder merely a user of the text. These solutions have their own difficulties, in that the text appears to have a meaning separate from the other agents: What if the monkey operates before Shakespeare is born, or if Shakespeare is never born, or if no one ever finds the monkey's typescript? - -The theorem concerns a thought experiment which cannot be fully carried out in practice, since it is predicted to require prohibitive amounts of time and resources. Nonetheless, it has inspired efforts in finite random text generation. - -One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on 4 August 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the "monkeys" typed, "VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The first 19 letters of this sequence can be found in "The Two Gentlemen of Verona". Other teams have reproduced 18 characters from "Timon of Athens", 17 from "Troilus and Cressida", and 16 from "Richard II". - -A website entitled The Monkey Shakespeare Simulator, launched on 1 July 2003, contained a Java applet that simulated a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end.
For example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters: - -RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d - -Due to processing power limitations, the program used a probabilistic model (by using a random number generator or RNG) instead of actually generating random text and comparing it to Shakespeare. When the simulator "detected a match" (that is, the RNG generated a certain value or a value within a certain range), the simulator simulated the match by generating matched text. - -More sophisticated methods are used in practice for natural language generation. If instead of simply generating random characters one restricts the generator to a meaningful vocabulary and conservatively follows grammar rules, for example by using a context-free grammar, then a random document generated this way can even fool some humans (at least on a cursory reading), as shown in the experiments with SCIgen, snarXiv, and the Postmodernism Generator. - -In February 2019, the OpenAI group published the Generative Pre-trained Transformer 2 (GPT-2) artificial intelligence on GitHub, which is able to produce a fully plausible news article given a two-sentence input from a human hand. The AI was so effective that instead of publishing the full code, the group chose to publish a scaled-back version and released a statement regarding "concerns about large language models being used to generate deceptive, biased, or abusive language at scale." - -Questions about the statistics describing how often an ideal monkey is expected to type certain strings translate into practical tests for random-number generators; these range from the simple to the "quite sophisticated". Computer-science professors George Marsaglia and Arif Zaman report that they used to call one such category of tests "overlapping m-tuple tests" in lectures, since they concern overlapping m-tuples of successive elements in a random sequence. But they found that calling them "monkey tests" helped to motivate the idea with students. They published a report on the class of tests and their results for various RNGs in 1993. - -The infinite monkey theorem and its associated imagery is considered a popular and proverbial illustration of the mathematics of probability, widely known to the general public because of its transmission through popular culture rather than through formal education. This is helped by the innate humor stemming from the image of literal monkeys rattling away on a set of typewriters, and is a popular visual gag. - -A quotation attributed to a 1996 speech by Robert Wilensky stated, "We've heard that a million monkeys at a million keyboards could produce the complete works of Shakespeare; now, thanks to the Internet, we know that is not true." - -The enduring, widespread popularity of the theorem was noted in the introduction to a 2001 paper, "Monkeys, Typewriters and Networks: The Internet in the Light of the Theory of Accidental Excellence". In 2002, an article in The Washington Post said, "Plenty of people have had fun with the famous notion that an infinite number of monkeys with an infinite number of typewriters and an infinite amount of time could eventually write the works of Shakespeare". In 2003, the previously mentioned Arts Council funded experiment involving real monkeys and a computer keyboard received widespread press coverage.
In 2007, the theorem was listed by Wired magazine in a list of eight classic thought experiments. - -American playwright David Ives' short one-act play Words, Words, Words, from the collection All in the Timing, pokes fun at the concept of the infinite monkey theorem. diff --git a/wiki/wikipedia/4120.txt b/wiki/wikipedia/4120.txt deleted file mode 100644 index f1438bb9edfa21861e32e507716731af5ac3cb99..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4120.txt +++ /dev/null @@ -1,18 +0,0 @@ -In mathematics, the Herbrand–Ribet theorem is a result on the class group of certain number fields. It is a strengthening of Ernst Kummer's theorem to the effect that the prime p divides the class number of the cyclotomic field of p-th roots of unity if and only if p divides the numerator of the n-th Bernoulli number Bn for some n, 0 < n < p - 1. The Herbrand–Ribet theorem specifies what, in particular, it means when p divides such a Bn. - -The Galois group Δ of the cyclotomic field of pth roots of unity for an odd prime p, Q(ζ) with ζp = 1, consists of the p - 1 group elements σa, where $\sigma_a(\zeta) = \zeta^a$. As a consequence of Fermat's little theorem, in the ring of p-adic integers $\mathbb{Z}_p$ we have p - 1 roots of unity, each of which is congruent mod p to some number in the range 1 to p - 1; we can therefore define a Dirichlet character ω (the Teichmüller character) with values in $\mathbb{Z}_p$ by requiring that for n relatively prime to p, ω(n) be congruent to n modulo p. The p part of the class group is a $\mathbb{Z}_p$-module (since it is p-primary), hence a module over the group ring $\mathbb{Z}_p[\Delta]$. We now define idempotent elements of the group ring for each n from 1 to p - 1, as -$$ -\epsilon_n = \frac{1}{p-1}\sum_{a=1}^{p-1} \omega(a)^n \sigma_a^{-1}. -$$ - -It is easy to see that $\sum\epsilon_n = 1 $ and $ \epsilon_i\epsilon_j = \delta_{ij}\epsilon_i $ where $ \delta_{ij} $ is the Kronecker delta. This allows us to break up the p part of the ideal class group G of Q(ζ) by means of the idempotents; if G is the p-primary part of the ideal class group, then, letting Gn = εn(G), we have $ G = \oplus G_n $. - -The Herbrand–Ribet theorem states that for odd n, Gn is nontrivial if and only if p divides the Bernoulli number Bp-n. - -The theorem makes no assertion about even values of n, but there is no known p for which Gn is nontrivial for any even n: triviality for all p would be a consequence of Vandiver's conjecture. - -The part saying p divides Bp-n if Gn is not trivial is due to Jacques Herbrand. The converse, that if p divides Bp-n then Gn is not trivial, is due to Kenneth Ribet, and is considerably more difficult. By class field theory, this can only be true if there is an unramified extension of the field of pth roots of unity by a cyclic extension of degree p which behaves in the specified way under the action of Δ; Ribet proves this by actually constructing such an extension using methods in the theory of modular forms. A more elementary proof of Ribet's converse to Herbrand's theorem, a consequence of the theory of Euler systems, can be found in Washington's book. - -Ribet's methods were pushed further by Barry Mazur and Andrew Wiles in order to prove the main conjecture of Iwasawa theory, a corollary of which is a strengthening of the Herbrand–Ribet theorem: the power of p dividing Bp-n is exactly the power of p dividing the order of Gn.
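Kummer's criterion from the first paragraph can be tested numerically. The following Python sketch is illustrative only: it computes Bernoulli numbers exactly with the standard recurrence $\sum_{j=0}^{m}\binom{m+1}{j}B_j = 0$ (so $B_1 = -1/2$) and lists the even n with p dividing the numerator of Bn:

```
from fractions import Fraction
from math import comb

def bernoulli(m_max):
    """B_0..B_m_max exactly, via sum_{j=0}^{m} C(m+1, j) B_j = 0."""
    B = [Fraction(1)]
    for m in range(1, m_max + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

def kummer_indices(p):
    """Even n with 0 < n < p - 1 such that p divides the numerator of B_n."""
    B = bernoulli(p - 3)
    return [n for n in range(2, p - 2, 2) if B[n].numerator % p == 0]

print(kummer_indices(37))  # [32] -> 37 divides the numerator of B_32
print(kummer_indices(13))  # []   -> 13 is a regular prime
```

For p = 37 this reports n = 32, the classical irregular pair; by the Herbrand–Ribet theorem, the component Gn for odd n = 37 − 32 = 5 is then nontrivial.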
diff --git a/wiki/wikipedia/4121.txt b/wiki/wikipedia/4121.txt deleted file mode 100644 index 30ec220ea6b4b075775f15e161e3cb313f94c9a4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4121.txt +++ /dev/null @@ -1,27 +0,0 @@ -In mathematics, and especially differential topology and gauge theory, Donaldson's theorem states that a definite intersection form of a compact, oriented, simply connected, smooth manifold of dimension 4 is diagonalisable. If the intersection form is positive (negative) definite, it can be diagonalized to the identity matrix (negative identity matrix) over the integers. - -The theorem was proved by Simon Donaldson. This was a contribution cited for his Fields medal in 1986. - -Donaldson's proof utilizes the moduli space $\mathcal{M}_P$ of solutions to the anti-self-duality equations on a principal $\operatorname{SU}(2)$-bundle $P$ over the four-manifold $X$. By the Atiyah–Singer index theorem, the dimension of the moduli space is given by -$$ -\dim \mathcal{M} = 8k - 3(1-b_1(X) + b_+(X)), -$$ - -where $c_2(P)=k$, $b_1(X)$ is the first Betti number of $X$ and $b_+(X)$ is the dimension of the positive-definite subspace of $H_2(X,\mathbb{R})$ with respect to the intersection form. When $X$ is simply-connected with definite intersection form, possibly after changing orientation, one always has $b_1(X) = 0$ and $b_+(X)=0$. Thus taking any principal $\operatorname{SU}(2)$-bundle with $k=1$, one obtains a moduli space $\mathcal{M}$ of dimension five. - -This moduli space is non-compact and generically smooth, with singularities occurring only at the points corresponding to reducible connections, of which there are exactly $b_2(X)$ many. Results of Clifford Taubes and Karen Uhlenbeck show that whilst $\mathcal{M}$ is non-compact, its structure at infinity can be readily described. Namely, there is an open subset of $\mathcal{M}$, say $\mathcal{M}_{\varepsilon}$, such that for sufficiently small choices of parameter $\varepsilon$, there is a diffeomorphism -$$ -\mathcal{M}_{\varepsilon} \xrightarrow{\quad \cong\quad} X\times (0,\varepsilon) -$$. - -The work of Taubes and Uhlenbeck essentially concerns constructing sequences of ASD connections on the four-manifold $X$ with curvature becoming infinitely concentrated at any given single point $x\in X$. For each such point, in the limit one obtains a unique singular ASD connection, which becomes a well-defined smooth ASD connection at that point using Uhlenbeck's removable singularity theorem. - -Donaldson observed that the singular points in the interior of $\mathcal{M}$ corresponding to reducible connections could also be described: they looked like cones over the complex projective plane $\mathbb{CP}^2$, with its orientation reversed. - -It is thus possible to compactify the moduli space as follows: First, cut off each cone at a reducible singularity and glue in a copy of $\mathbb{CP}^2$. Secondly, glue in a copy of $X$ itself at infinity. The resulting space is a cobordism between $X$ and a disjoint union of $b_2(X)$ copies of $\mathbb{CP}^2$ with its orientation reversed. The intersection form of a four-manifold is a cobordism invariant up to isomorphism of quadratic forms, from which one concludes the intersection form of $X$ is diagonalisable. - -Michael Freedman had previously shown that any unimodular symmetric bilinear form is realized as the intersection form of some closed, oriented four-manifold. 
Combining this result with the Serre classification theorem and Donaldson's theorem, several interesting results can be seen: - -1) Any non-diagonalizable intersection form gives rise to a four-dimensional topological manifold with no differentiable structure (so it cannot be smoothed). - -2) Two smooth simply-connected 4-manifolds are homeomorphic if and only if their intersection forms have the same rank, signature, and parity. diff --git a/wiki/wikipedia/4122.txt b/wiki/wikipedia/4122.txt deleted file mode 100644 index 9dfc55890af5008b63c3ae22bb77a2226f3dc42c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4122.txt +++ /dev/null @@ -1,7 +0,0 @@ -The Finsler–Hadwiger theorem is a statement in Euclidean plane geometry that describes a third square derived from any two squares that share a vertex. The theorem is named after Paul Finsler and Hugo Hadwiger, who published it in 1937 as part of the same paper in which they published the Hadwiger–Finsler inequality relating the side lengths and area of a triangle. - -To state the theorem, suppose that ABCD and AB'C'D' are two squares with common vertex A. Let E and G be the midpoints of B'D and D'B respectively, and let F and H be the centers of the two squares. Then the theorem states that the quadrilateral EFGH is a square as well. - -The square EFGH is called the Finsler–Hadwiger square of the two given squares. - -Repeated application of the Finsler–Hadwiger theorem can be used to prove Van Aubel's theorem, on the congruence and perpendicularity of segments through centers of four squares constructed on the sides of an arbitrary quadrilateral. Each pair of consecutive squares forms an instance of the theorem, and the two pairs of opposite Finsler–Hadwiger squares of those instances form another two instances of the theorem, having the same derived square. diff --git a/wiki/wikipedia/4123.txt b/wiki/wikipedia/4123.txt deleted file mode 100644 index bedd12c2d7b279b692a4a868c2eb54c7b72d4150..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4123.txt +++ /dev/null @@ -1,59 +0,0 @@ -In proof theory, a discipline within mathematical logic, double-negation translation, sometimes called negative translation, is a general approach for embedding classical logic into intuitionistic logic, typically by translating formulas to formulas which are classically equivalent but intuitionistically inequivalent. Particular instances of double-negation translation include Glivenko's translation for propositional logic, and the Gödel–Gentzen translation and Kuroda's translation for first-order logic. - -The easiest double-negation translation to describe comes from Glivenko's theorem, proved by Valery Glivenko in 1929. It maps each classical formula φ to its double negation ¬¬φ. - -Glivenko's theorem states: - -If φ is a propositional formula, then φ is a classical tautology if and only if ¬¬φ is an intuitionistic tautology. - -Glivenko's theorem implies the more general statement: - -If T is a set of propositional formulas and φ a propositional formula, then T ⊢ φ in classical logic if and only if T ⊢ ¬¬φ in intuitionistic logic. - -In particular, a set of propositional formulas is intuitionistically consistent if and only if it is classically satisfiable.
- -The Gödel–Gentzen translation (named after Kurt Gödel and Gerhard Gentzen) associates with each formula φ in a first-order language another formula φN, which is defined inductively: - -* If φ is atomic, then φN is ¬¬φ - -* (φ ∧ θ)N is φN ∧ θN - -* (φ ∨ θ)N is ¬(¬φN ∧ ¬θN) - -* (φ → θ)N is φN → θN - -* (¬φ)N is ¬φN - -* (∀x φ)N is ∀x φN - -* (∃x φ)N is ¬∀x ¬φN - -This translation has the property that φN is classically equivalent to φ. The fundamental soundness theorem (Avigad and Feferman 1998, p. 342; Buss 1998 p. 66) states: - -If T is a set of axioms and φ is a formula, then T proves φ using classical logic if and only if TN proves φN using intuitionistic logic. - -Here TN consists of the double-negation translations of the formulas in T. - -A sentence φ may not imply its negative translation φN in intuitionistic first-order logic. Troelstra and van Dalen (1988, Ch. 2, Sec. 3) give a description (due to Leivant) of formulas that do imply their Gödel–Gentzen translation. - -There are several alternative definitions of the negative translation. They are all provably equivalent in intuitionistic logic, but may be easier to apply in particular contexts. - -One possibility is to change the clauses for disjunction and existential quantifier to - -* (φ ∨ θ)N is ¬¬(φN ∨ θN) - -* (∃x φ)N is ¬¬∃x φN - -Then the translation can be succinctly described as: prefix ¬¬ to every atomic formula, disjunction, and existential quantifier. - -Another possibility (known as Kuroda's translation) is to construct φN from φ by putting ¬¬ before the whole formula and after every universal quantifier. Notice that this reduces to the simple ¬¬φ translation if φ is propositional. - -It is also possible to define φN by prefixing ¬¬ before every subformula of φ, as done by Kolmogorov. Such a translation is the logical counterpart to the call-by-name continuation-passing style translation of functional programming languages along the lines of the Curry–Howard correspondence between proofs and programs. - -The double-negation translation was used by Gödel (1933) to study the relationship between classical and intuitionistic theories of the natural numbers ("arithmetic"). He obtains the following result: - -If a formula φ is provable from the axioms of Peano arithmetic then φN is provable from the axioms of Heyting arithmetic. - -This result shows that if Heyting arithmetic is consistent then so is Peano arithmetic. This is because a contradictory formula θ ∧ ¬θ is interpreted as θN ∧ ¬θN, which is still contradictory. Moreover, the proof of the relationship is entirely constructive, giving a way to transform a proof of θ ∧ ¬θ in Peano arithmetic into a proof of θN ∧ ¬θN in Heyting arithmetic. (By combining the double-negation translation with the Friedman translation, it is in fact possible to prove that Peano arithmetic is $\Pi^0_2$-conservative over Heyting arithmetic.) - -The propositional mapping of φ to ¬¬φ does not extend to a sound translation of first-order logic, because ∀x ¬¬φ(x) → ¬¬∀x φ(x) is not a theorem of intuitionistic predicate logic. This explains why φN has to be defined in a more complicated way in the first-order case.
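The inductive clauses above translate directly into a recursive function. The following Python sketch (illustrative; the tuple-based formula encoding is an assumption of the example) computes the Gödel–Gentzen translation φN; applied to the law of excluded middle P ∨ ¬P it yields ¬(¬¬¬P ∧ ¬¬¬¬P), which, unlike excluded middle itself, is intuitionistically provable:

```
def gg(phi):
    """Goedel-Gentzen negative translation, following the clauses above."""
    op = phi[0]
    if op == "atom":
        return ("not", ("not", phi))
    if op == "and":
        return ("and", gg(phi[1]), gg(phi[2]))
    if op == "or":
        return ("not", ("and", ("not", gg(phi[1])), ("not", gg(phi[2]))))
    if op == "imp":
        return ("imp", gg(phi[1]), gg(phi[2]))
    if op == "not":
        return ("not", gg(phi[1]))
    if op == "forall":
        return ("forall", phi[1], gg(phi[2]))
    if op == "exists":
        return ("not", ("forall", phi[1], ("not", gg(phi[2]))))
    raise ValueError(op)

def show(phi):
    """Render a tuple-encoded formula as a string."""
    op = phi[0]
    if op == "atom":
        return phi[1]
    if op == "not":
        return "~" + show(phi[1])
    if op in ("and", "or", "imp"):
        sym = {"and": " & ", "or": " | ", "imp": " -> "}[op]
        return "(" + show(phi[1]) + sym + show(phi[2]) + ")"
    return op + " " + phi[1] + ". " + show(phi[2])

lem = ("or", ("atom", "P"), ("not", ("atom", "P")))  # P | ~P
print(show(gg(lem)))  # ~(~~~P & ~~~~P)
```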
diff --git a/wiki/wikipedia/4124.txt b/wiki/wikipedia/4124.txt deleted file mode 100644 index 4d3f9a1ddb6118f293c45316a91264bd9e68fe36..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4124.txt +++ /dev/null @@ -1,15 +0,0 @@ -Arc routing is the process of selecting the best route through a network when the route is required to traverse certain edges. Unlike ordinary routing problems, which map a route between nodes, arc routing focuses on the route itself. The goal of many arc routing problems is to produce a route with the minimum amount of dead mileage while covering all of the required edges. Examples of arc routing applications include garbage collection, road gritting, mail delivery, network maintenance, and snowploughing. - -Arc routing problems (ARPs) differ in their goals and heuristics. However, all of them are known to be NP-hard. - -This problem is named after the postman and his challenge to deliver mail in any order he may choose, while minimizing his costs such as time or travel distance; it is sometimes called the undirected Chinese postman problem. The undirected rural postman problem (URPP) aims to minimize the total cost of a route that maps the entire network, or, in more specific cases, a route that maps every edge that requires a service. If the whole network must be mapped, the route that maps the entire network is called a covering tour. In the case where only certain edges need to be mapped, the problem aims to find the route that meets the demands while crossing over into non-required routes a minimal number of times. - -The undirected capacitated arc routing problem (UCARP) adds demands placed on the edges, and each edge must meet the demand. An example is garbage collection, where each route might require both a garbage collection and a recyclable collection. Problems in real-life applications might arise if there are timing issues, such as the case in which certain routes cannot be serviced due to timing or scheduling conflicts, or constraints, such as a limited period of time. The heuristics described in this article ignore any such problems that arise due to application constraints. - -The URPP was first introduced in 1974 and was proven to be an NP-hard problem by Lenstra and Rinnooy Kan. The UCARP can be derived from the URPP, and thus is NP-hard as well. In 1981, another pair of computer scientists, Golden and Wong, managed to prove that even deriving a 1.5-approximation to the URPP was NP-hard. In 2000, Dror published a book describing different arc routing problems. - -Most algorithms require a pre-processing of the graph, which simplifies the initial graph by removing all edges that are not in the shortest path between two required edges. Another simplification that the pre-processing adds is that it transforms the shortest path between two required edges into a single, non-required edge, regardless of the number of edges in the path, provided that there were no required edges in the path. - -Once the pre-processing is done, the problem can be generalized into a convex hull problem, with the edges being the points of the hull. The convex hull problem can be solved through linear programming or through convex hull algorithms, but the process of finding the convex hull is an exponential problem. - -Methods of solving the URPP after the pre-processing is done consist of the cutting plane algorithm and the branch & cut methodology.
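For the undirected Chinese postman problem mentioned above, an exact algorithm is short enough to sketch: make every vertex degree even by duplicating shortest paths between odd-degree vertices (using the cheapest pairing), then walk an Euler circuit. The Python sketch below is illustrative only and brute-forces the pairing, so it is suitable only for small graphs:

```
import heapq
from collections import defaultdict

def shortest_paths(adj, src):
    """Dijkstra: returns (dist, prev) maps from src."""
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def chinese_postman(adj, start):
    """Exact undirected CPP tour; adj[u][v] = edge weight."""
    multi = defaultdict(list)  # multigraph edge multiset we will walk
    for u in adj:
        for v in adj[u]:
            multi[u].append(v)
    odd = [u for u in multi if len(multi[u]) % 2 == 1]

    # Cheapest perfect matching of odd-degree vertices (brute force).
    dists = {u: shortest_paths(adj, u) for u in odd}
    def best_pairing(rest):
        if not rest:
            return 0, []
        u, rest = rest[0], rest[1:]
        best = (float("inf"), None)
        for i, v in enumerate(rest):
            cost, pairs = best_pairing(rest[:i] + rest[i + 1:])
            cost += dists[u][0][v]
            if cost < best[0]:
                best = (cost, pairs + [(u, v)])
        return best
    _, pairs = best_pairing(odd)

    # Duplicate the edges along each matched shortest path.
    for u, v in pairs:
        prev = dists[u][1]
        while v != u:
            p = prev[v]
            multi[p].append(v)
            multi[v].append(p)
            v = p

    # Hierholzer's algorithm for an Euler circuit on the multigraph.
    stack, tour = [start], []
    while stack:
        u = stack[-1]
        if multi[u]:
            v = multi[u].pop()
            multi[v].remove(u)
            stack.append(v)
        else:
            tour.append(stack.pop())
    return tour[::-1]

# Toy street network (edge weights = lengths).
adj = {"A": {"B": 1, "C": 2}, "B": {"A": 1, "C": 1, "D": 2},
       "C": {"A": 2, "B": 1, "D": 1}, "D": {"B": 2, "C": 1}}
print(chinese_postman(adj, "A"))  # closed walk covering every edge
```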
diff --git a/wiki/wikipedia/4125.txt b/wiki/wikipedia/4125.txt deleted file mode 100644 index 959b45f8ef58b3df1a7b5539572573fe02d27dd6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4125.txt +++ /dev/null @@ -1,27 +0,0 @@ -The Schur–Zassenhaus theorem is a theorem in group theory which states that if $G$ is a finite group, and $N$ is a normal subgroup whose order is coprime to the order of the quotient group $G/N$, then $G$ is a semidirect product (or split extension) of $N$ and $G/N$. An alternative statement of the theorem is that any normal Hall subgroup $N$ of a finite group $G$ has a complement in $G$. Moreover, if either $N$ or $G/N$ is solvable, then the Schur–Zassenhaus theorem also states that all complements of $N$ in $G$ are conjugate. The assumption that either $N$ or $G/N$ is solvable can be dropped as it is always satisfied, but all known proofs of this require the use of the much harder Feit–Thompson theorem. - -The Schur–Zassenhaus theorem at least partially answers the question: "In a composition series, how can we classify groups with a certain set of composition factors?" The other part, which is where the composition factors do not have coprime orders, is tackled in extension theory. - -==History== - -The Schur–Zassenhaus theorem was introduced by Hans Zassenhaus in his 1937 textbook on group theory. Theorem 25, which he credits to Issai Schur, proves the existence of a complement, and theorem 27 proves that all complements are conjugate under the assumption that $N$ or $G/N$ is solvable. It is not easy to find an explicit statement of the existence of a complement in Schur's published works, though Schur's results on the Schur multiplier imply the existence of a complement in the special case when the normal subgroup is in the center. Zassenhaus pointed out that the Schur–Zassenhaus theorem for non-solvable groups would follow if all groups of odd order are solvable, which was later proved by Feit and Thompson. Ernst Witt showed (in an unpublished 1937 note) that it would also follow from the Schreier conjecture, but the Schreier conjecture has only been proved using the classification of finite simple groups, which is far harder than the Feit–Thompson theorem. - -If we do not impose the coprime condition, the theorem is not true: consider for example the cyclic group $C_4$ and its normal subgroup $C_2$. Then if $C_4$ were a semidirect product of $C_2$ and $C_4 / C_2 \cong C_2$ then $C_4$ would have to contain two elements of order 2, but it only contains one. Another way to explain this impossibility of splitting $C_4$ (i.e. expressing it as a semidirect product) is to observe that the automorphism group of $C_2$ is trivial, so the only possible [semi]direct product of $C_2$ with itself is a direct product (which gives rise to the Klein four-group, a group that is non-isomorphic with $C_4$). - -An example where the Schur–Zassenhaus theorem does apply is the symmetric group on 3 symbols, $S_3$, which has a normal subgroup of order 3 (isomorphic with $C_3$) which in turn has index 2 in $S_3$ (in agreement with the theorem of Lagrange), so $S_3 / C_3 \cong C_2$. Since 2 and 3 are relatively prime, the Schur–Zassenhaus theorem applies and $S_3 \cong C_3 \rtimes C_2$. Note that the automorphism group of $C_3$ is $C_2$ and the automorphism of $C_3$ used in the semidirect product that gives rise to $S_3$ is the non-trivial automorphism that permutes the two non-identity elements of $C_3$.
Furthermore, the three subgroups of order 2 in $S_3$ (any of which can serve as a complement to $C_3$ in $S_3$) are conjugate to each other. - -The non-triviality of the (additional) conjugacy conclusion can be illustrated with the Klein four-group $V$ as the non-example. Any of the three proper subgroups of $V$ (all of which have order 2) is normal in $V$; fixing one of these subgroups, any of the other two remaining (proper) subgroups complements it in $V$, but none of these three subgroups of $V$ is a conjugate of any other one, because $V$ is abelian. - -The quaternion group has normal subgroups of order 4 and 2 but is not a [semi]direct product. Schur's papers at the beginning of the 20th century introduced the notion of central extension to address examples such as $C_4$ and the quaternions. - -The existence of a complement to a normal Hall subgroup H of a finite group G can be proved in the following steps: - -#By induction on the order of G, we can assume that it is true for any smaller group. - -#If H is abelian, then the existence of a complement follows from the fact that the cohomology group H2(G/H,H) vanishes (as H and G/H have coprime orders) and the fact that all complements are conjugate follows from the vanishing of H1(G/H,H). - -#If H is solvable, it has a nontrivial abelian subgroup A that is characteristic in H and therefore normal in G. Applying the Schur–Zassenhaus theorem to G/A reduces the proof to the case when H=A is abelian, which has been done in the previous step. - -#If the normalizer N=NG(P) of every p-Sylow subgroup P of H is equal to G, then H is nilpotent, and in particular solvable, so the theorem follows by the previous step. - -#If the normalizer N=NG(P) of some p-Sylow subgroup P of H is smaller than G, then by induction the Schur–Zassenhaus theorem holds for N, and a complement of N∩H in N is a complement for H in G because G=NH. diff --git a/wiki/wikipedia/4126.txt b/wiki/wikipedia/4126.txt deleted file mode 100644 index 0466a27c7f1469997771ba380e456a1a1a5eb80a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4126.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematics, Strassmann's theorem is a result in field theory. It states that, for suitable fields, suitable formal power series with coefficients in the valuation ring of the field have only finitely many zeroes. - -It was introduced by Reinhold Straßmann. - -Let K be a field with a non-Archimedean absolute value | · | and let R be the valuation ring of K. Let f(x) be a formal power series with coefficients in R other than the zero series, with coefficients an converging to zero with respect to | · |. Then f(x) has only finitely many zeroes in R. More precisely, the number of zeros is at most N, where N is the largest index with |aN| = max |an|. diff --git a/wiki/wikipedia/4127.txt b/wiki/wikipedia/4127.txt deleted file mode 100644 index 1e141c8134a67da327b1222f16928c936197b697..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4127.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematics, Noether identities characterize the degeneracy of a Lagrangian system. Given a Lagrangian system and its Lagrangian L, Noether identities can be defined as a differential operator whose kernel contains the range of the Euler–Lagrange operator of L. Any Euler–Lagrange operator obeys Noether identities, which therefore are separated into the trivial and non-trivial ones. A Lagrangian L is called degenerate if the Euler–Lagrange operator of L satisfies non-trivial Noether identities.
In this case the Euler-Lagrange equations are not independent. - -Noether identities need not be independent, but satisfy first-stage Noether identities, which are subject to the second-stage Noether identities and so on. Higher-stage Noether identities also are separated into the trivial and non-trivial ones. A degenerate Lagrangian is called reducible if there exist non-trivial higher-stage Noether identities. Yang-Mills gauge theory and gauge gravitation theory exemplify irreducible Lagrangian field theories. - -Different variants of second Noether’s theorem state a one-to-one correspondence between the non-trivial reducible Noether identities and the non-trivial reducible gauge symmetries. Formulated in a very general setting, second Noether’s theorem associates to the Koszul-Tate complex of reducible Noether identities, parameterized by antifields, the BRST complex of reducible gauge symmetries parameterized by ghosts. This is the case of covariant classical field theory and Lagrangian BRST theory. diff --git a/wiki/wikipedia/4128.txt b/wiki/wikipedia/4128.txt deleted file mode 100644 index eefb20b52a6b4a916507c5a4c42361b45e27f424..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4128.txt +++ /dev/null @@ -1,26 +0,0 @@ -In mathematics, the Littlewood conjecture is an open problem in Diophantine approximation, proposed by John Edensor Littlewood around 1930. It states that for any two real numbers α and β, -$$ -\liminf_{n\to\infty} \ n\Vert n\alpha\Vert \Vert n\beta\Vert = 0, -$$ - -where $\Vert \cdot \Vert$ denotes the distance to the nearest integer. - -This means the following: take a point (α,β) in the plane, and then consider the sequence of points - -(2α,2β), (3α,3β), ... . - -For each of these, multiply the distance to the closest line with integer x-coordinate by the distance to the closest line with integer y-coordinate. This product will certainly be at most 1/4. The conjecture makes no statement about whether this sequence of values will converge; it typically does not, in fact. The conjecture states something about the limit inferior, and says that there is a subsequence for which the distances decay faster than the reciprocal, i.e. - -o(1/n) - -in the little-o notation. - -It is known that this would follow from a result in the geometry of numbers, about the minimum on a non-zero lattice point of a product of three linear forms in three real variables: the implication was shown in 1955 by J. W. S. Cassels and Swinnerton-Dyer. This can be formulated another way, in group-theoretic terms. There is now another conjecture, expected to hold for n ≥ 3: it is stated in terms of $G = \operatorname{SL}_n(\mathbf{R})$, $\Gamma = \operatorname{SL}_n(\mathbf{Z})$, and the subgroup D of diagonal matrices in G. - -Conjecture: for any g in G/Γ such that Dg is relatively compact (in G/Γ), Dg is closed. - -This in turn is a special case of a general conjecture of Margulis on Lie groups. - -Borel showed in 1909 that the exceptional set of real pairs (α,β) violating the statement of the conjecture is of Lebesgue measure zero. Manfred Einsiedler, Anatole Katok and Elon Lindenstrauss have shown that it must have Hausdorff dimension zero; and in fact it is a union of countably many compact sets of box-counting dimension zero. The result was proved using a measure classification theorem for diagonalizable actions of higher-rank groups, and an isolation theorem proved by Lindenstrauss and Barak Weiss.
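As an informal numerical illustration of the conjectured decay (our sketch, not from the article, and evidence rather than proof since the conjecture concerns the liminf over all n), the snippet below tracks the running minimum of $n\Vert n\alpha\Vert \Vert n\beta\Vert$ for the sample pair $\alpha=\sqrt2$, $\beta=\sqrt3$.

```python
import math

def dist_to_int(x):
    """Distance from x to the nearest integer."""
    return abs(x - round(x))

alpha, beta = math.sqrt(2), math.sqrt(3)
best = float("inf")
for n in range(1, 10**6 + 1):
    value = n * dist_to_int(n * alpha) * dist_to_int(n * beta)
    if value < best:
        best = value
        # Print each new record low; the conjecture asserts these record
        # values approach 0 for every pair (alpha, beta).
        print(f"n = {n:>7}   n*||n a||*||n b|| = {value:.6f}")
```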
- -These results imply that non-trivial pairs satisfying the conjecture exist: indeed, given a real number α such that $\inf_{n \ge 1} n \cdot || n \alpha || > 0 $, it is possible to construct an explicit β such that (α,β) satisfies the conjecture. diff --git a/wiki/wikipedia/4129.txt b/wiki/wikipedia/4129.txt deleted file mode 100644 index 5d6ae0086f3cf7588ab7ab30c785bd58cb34c6fa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4129.txt +++ /dev/null @@ -1,62 +0,0 @@ -In abstract algebra, Kaplansky's theorem on projective modules, first proven by Irving Kaplansky, states that a projective module over a local ring is free; here a not-necessarily-commutative ring is called local if for each element x, either x or 1 - x is a unit element. The theorem can also be formulated so as to characterize a local ring (see the characterization of a local ring below). - -For a finite projective module over a commutative local ring, the theorem is an easy consequence of Nakayama's lemma. For the general case, the proof (both the original and later ones) consists of the following two steps: - -*Observe that a projective module over an arbitrary ring is a direct sum of countably generated projective modules. - -*Show that a countably generated projective module over a local ring is free (by a "[reminiscence] of the proof of Nakayama's lemma"). - -The idea of the proof of the theorem was also later used by Hyman Bass to show big projective modules (under some mild conditions) are free. Kaplansky's theorem "is very likely the inspiration for a major portion of the results" in the theory of semiperfect rings. - -The proof of the theorem is based on two lemmas, both of which concern decompositions of modules and are of independent general interest. - - Let $\mathfrak{F}$ denote the family of modules that are direct sums of countably generated submodules (here modules can be those over a ring, a group or even a set of endomorphisms). If $M$ is in $\mathfrak{F}$, then each direct summand of $M$ is also in $\mathfrak{F}$. - -Proof: Let N be a direct summand; i.e., $M = N \oplus L$. Using the assumption, we write $M = \bigoplus_{i \in I} M_i$ where each $M_i$ is a countably generated submodule. For each subset $A \subset I$, we write $M_A = \bigoplus_{i \in A} M_i, N_A =$ the image of $M_A$ under the projection $M \to N \hookrightarrow M$ and $L_A$ the same way. Now, consider the set of all triples ($J$, $B$, $C$) consisting of a subset $J \subset I$ and subsets $B, C \subset \mathfrak{F}$ such that $M_J = N_J \oplus L_J$ and $N_J, L_J$ are the direct sums of the modules in $B, C$. We give this set a partial ordering such that $(J, B, C) \le (J', B', C')$ if and only if $J \subset J'$, $B \subset B', C \subset C'$. By Zorn's lemma, the set contains a maximal element $(J, B, C)$. We shall show that $J = I$; i.e., $N = N_J = \bigoplus_{N' \in B} N' \in \mathfrak{F}$. Suppose otherwise. Then we can inductively construct a sequence of at most countable subsets $I_1 \subset I_2 \subset \cdots \subset I$ such that $I_1 \not\subset J$ and for each integer $n \ge 1$, -$$ -M_{I_n} \subset N_{I_n} + L_{I_n} \subset M_{I_{n+1}} -$$. - -Let $I' = \bigcup_0^\infty I_n$ and $J' = J \cup I'$. We claim: -$$ -M_{J'} = N_{J'} \oplus L_{J'}. -$$ - -The inclusion $\subset$ is trivial. Conversely, $N_{J'}$ is the image of $N_J + L_J + M_{I'} \subset N_J + M_{I'}$ and so $N_{J'} \subset M_{J'}$. The same is also true for $L_{J'}$. Hence, the claim is valid.
- -Now, $N_J$ is a direct summand of $M$ (since it is a summand of $M_J$, which is a summand of $M$); i.e., $N_J \oplus M' = M$ for some $M'$. Then, by the modular law, $N_{J'} = N_J \oplus (M' \cap N_{J'})$. Set $\widetilde{N_J} = M' \cap N_{J'}$. Define $\widetilde{L_J}$ in the same way. Then, using the earlier claim, we have: -$$ -M_{J'} = M_J \oplus \widetilde{N_J} \oplus \widetilde{L_J}, -$$ - -which implies that -$$ -\widetilde{N_J} \oplus \widetilde{L_J} \simeq M_{J'} / M_J \simeq M_{J' - J} -$$ - -is countably generated as $J' - J \subset I'$. This contradicts the maximality of $(J, B, C)$. $\square$ - -If $M_i, i \in I$ are countably generated modules with local endomorphism rings and if $N$ is a countably generated module that is a direct summand of $\bigoplus_{i \in I} M_i$, then $N$ is isomorphic to $\bigoplus_{i \in I'} M_i$ for some at most countable subset $I' \subset I$. - -Proof: Let $\mathcal{G}$ denote the family of modules that are isomorphic to modules of the form $\bigoplus_{i \in F} M_i$ for some finite subset $F \subset I$. The assertion is then implied by the following claim: - -*Given an element $x \in N$, there exists an $H \in \mathcal{G}$ that contains x and is a direct summand of N. - -Indeed, assume the claim is valid. Then choose a sequence $x_1, x_2, \dots$ in N that is a generating set. Then using the claim, write $N = H_1 \oplus N_1$ where $x_1 \in H_1 \in \mathcal{G}$. Then we write $x_2 = y + z$ where $y \in H_1, z \in N_1$. We then decompose $N_1 = H_2 \oplus N_2$ with $z \in H_2 \in \mathcal{G}$. Note $\{ x_1, x_2 \} \subset H_1 \oplus H_2$. Repeating this argument, in the end, we have: $ \{ x_1, x_2, \dots \} \subset \bigoplus_0^\infty H_n$; i.e., $N = \bigoplus_0^\infty H_n$. Hence, the proof reduces to proving the claim and the claim is a straightforward consequence of Azumaya's theorem (see the linked article for the argument). $\square$ - -Proof of the theorem: Let $N$ be a projective module over a local ring. Then, by definition, it is a direct summand of some free module $F$. This $F$ is in the family $\mathfrak{F}$ in Lemma 1; thus, $N$ is a direct sum of countably generated submodules, each a direct summand of F and thus projective. Hence, without loss of generality, we can assume $N$ is countably generated. Then Lemma 2 gives the theorem. $\square$ - -Kaplansky's theorem can be stated in such a way as to give a characterization of a local ring. A direct summand is said to be maximal if it has an indecomposable complement. - - Let R be a ring. Then the following are equivalent. - -# R is a local ring. - -# Every projective module over R is free and has an indecomposable decomposition $M = \bigoplus_{i \in I} M_i$ such that for each maximal direct summand L of M, there is a decomposition $M = \Big(\bigoplus_{j \in J} M_j\Big) \bigoplus L$ for some subset $J \subset I$. - -The implication $1. \Rightarrow 2.$ is exactly (usual) Kaplansky's theorem and Azumaya's theorem. The converse $2. \Rightarrow 1.$ follows from the following general fact, which is of interest in itself: - -*A ring R is local $\Leftrightarrow$ for each nonzero proper direct summand M of $R^2 = R \times R$, either $R^2 = (0 \times R) \oplus M$ or $R^2 = (R \times 0) \oplus M$. -$$ -(\Rightarrow) -$$ is by Azumaya's theorem as in the proof of $1. \Rightarrow 2.$. Conversely, suppose $R^2$ has the above property and that an element x in R is given. Consider the linear map $\sigma:R^2 \to R, \sigma(a, b) = a - b$. Set $y = x - 1$.
Then $\sigma(x, y) = 1$, which is to say $\eta: R \to R^2, a \mapsto (ax, ay)$ splits and the image $M$ is a direct summand of $R^2$. It follows easily from the assumption that either x or -y is a unit element. $\square$ diff --git a/wiki/wikipedia/413.txt b/wiki/wikipedia/413.txt deleted file mode 100644 index c148a43119892b5204b91eb2e252a46b203b538d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/413.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, Gray's conjecture is a conjecture made by Brayton Gray in 1984 about maps between loop spaces of spheres. It was later proved by John Harper. diff --git a/wiki/wikipedia/4130.txt b/wiki/wikipedia/4130.txt deleted file mode 100644 index 1171953e439bc6d83a23173624579c8b105ae41d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4130.txt +++ /dev/null @@ -1,3 +0,0 @@ -In the mathematical theory of probability, the Heyde theorem is a characterization theorem concerning the normal distribution (the Gaussian distribution) by the symmetry of one linear form given another. This theorem was proved by C. C. Heyde. - -Let $\xi_j, j = 1, 2, \ldots, n, n \ge 2$ be independent random variables. Let $\alpha_j, \beta_j$ be nonzero constants such that $\frac{\beta_i}{\alpha_i} + \frac{\beta_j}{\alpha_j} \ne 0$ for all $i \ne j$. If the conditional distribution of the linear form $L_2 = \beta_1\xi_1 + \cdots + \beta_n\xi_n$ given $L_1 = \alpha_1\xi_1 + \cdots + \alpha_n\xi_n$ is symmetric then all random variables $\xi_j$ have normal distributions (Gaussian distributions). diff --git a/wiki/wikipedia/4131.txt b/wiki/wikipedia/4131.txt deleted file mode 100644 index 94ce13576e60324394c43c17adfe8251695ec12c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4131.txt +++ /dev/null @@ -1,353 +0,0 @@ -In mathematics, the triangle inequality states that for any triangle, the sum of the lengths of any two sides must be greater than or equal to the length of the remaining side. This statement permits the inclusion of degenerate triangles, but some authors, especially those writing about elementary geometry, will exclude this possibility, thus leaving out the possibility of equality. If x, y, and z are the lengths of the sides of the triangle, with no side being greater than z, then the triangle inequality states that -$$ -z \leq x + y , -$$ - -with equality only in the degenerate case of a triangle with zero area. - -In Euclidean geometry and some other geometries, the triangle inequality is a theorem about distances, and it is written using vectors and vector lengths (norms): -$$ -\|\mathbf x + \mathbf y\| \leq \|\mathbf x\| + \|\mathbf y\| , -$$ - -where the length z of the third side has been replaced by the vector sum x + y. When x and y are real numbers, they can be viewed as vectors in $\mathbf{R}^1$, and the triangle inequality expresses a relationship between absolute values. - -In Euclidean geometry, for right triangles the triangle inequality is a consequence of the Pythagorean theorem, and for general triangles, a consequence of the law of cosines, although it may be proven without these theorems. The inequality can be viewed intuitively in either $\mathbf{R}^2$ or $\mathbf{R}^3$. The figure at the right shows three examples beginning with clear inequality (top) and approaching equality (bottom). In the Euclidean case, equality occurs only if the triangle has a 180° angle and two 0° angles, making the three vertices collinear, as shown in the bottom example. Thus, in Euclidean geometry, the shortest distance between two points is a straight line.
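To make the side-length form of the inequality concrete, here is a small Python sketch (our illustration; the helper name is ours) that classifies a triple of lengths as a non-degenerate triangle, a degenerate one, or impossible, exactly per the statement above.

```python
def classify_sides(x, y, z):
    """Classify side lengths via the triangle inequality.

    Sorts so that z is the largest side, then compares z with x + y.
    """
    x, y, z = sorted((x, y, z))
    if z < x + y:
        return "non-degenerate triangle"   # strict inequality: positive area
    if z == x + y:
        return "degenerate triangle"       # equality: zero area, collinear
    return "not a triangle"                # inequality violated

print(classify_sides(3, 4, 5))   # non-degenerate triangle
print(classify_sides(1, 2, 3))   # degenerate triangle
print(classify_sides(1, 1, 3))   # not a triangle
```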
- -In spherical geometry, the shortest distance between two points is an arc of a great circle, but the triangle inequality holds provided the restriction is made that the distance between two points on a sphere is the length of a minor spherical line segment (that is, one with central angle in $[0, \pi]$) with those endpoints. - -The triangle inequality is a defining property of norms and measures of distance. This property must be established as a theorem for any function proposed for such purposes for each particular space: for example, spaces such as the real numbers, Euclidean spaces, the $L^p$ spaces (p ≥ 1), and inner product spaces. - -Euclid proved the triangle inequality for distances in plane geometry using the construction in the figure. Beginning with triangle ABC, an isosceles triangle is constructed with one side taken as BC and the other equal leg BD along the extension of side AB. It then is argued that angle β has larger measure than angle α, so side AD is longer than side AC. But AD = AB + BD = AB + BC, so the sum of the lengths of sides AB and BC is larger than the length of AC. This proof appears in Euclid's Elements, Book 1, Proposition 20. - -For a proper triangle, the triangle inequality, as stated in words, literally translates into three inequalities (given that a proper triangle has side lengths a, b, c that are all positive and excludes the degenerate case of zero area): -$$ -a + b > c ,\quad b + c > a ,\quad c + a > b . -$$ - -A more succinct form of this inequality system can be shown to be -$$ -|a - b| < c < a + b . -$$ - -Another way to state it is -$$ -\max(a, b, c) < a + b + c - \max(a, b, c) -$$ - -implying -$$ -2 \max(a, b, c) < a + b + c -$$ - -and thus that the longest side length is less than the semiperimeter. - -A mathematically equivalent formulation is that the area of a triangle with sides a, b, c must be a real number greater than zero. Heron's formula for the area is - -\begin{align} -4\cdot \text{area} & =\sqrt{(a+b+c)(-a+b+c)(a-b+c)(a+b-c)} \\ -& = \sqrt{-a^4-b^4-c^4+2a^2b^2+2a^2c^2+2b^2c^2}. -\end{align} - -In terms of either area expression, the triangle inequality imposed on all sides is equivalent to the condition that the expression under the square root sign be real and greater than zero (so the area expression is real and greater than zero). - -The triangle inequality provides two more interesting constraints for triangles whose sides are a, b, c, where a ≥ b ≥ c and $\phi$ is the golden ratio, as -$$ -1<\frac{a+c}{b}<3 -$$ -$$ -1\le\min\left(\frac{a}{b}, \frac{b}{c}\right)<\phi. -$$ - -In the case of right triangles, the triangle inequality specializes to the statement that the hypotenuse is greater than either of the other two sides and less than their sum. - -The second part of this theorem is already established above for any side of any triangle. The first part is established using the lower figure. In the figure, consider the right triangle ADC. An isosceles triangle ABC is constructed with equal sides AB = AC. From the triangle postulate, the angles in the right triangle ADC satisfy: -$$ - \alpha + \gamma = \pi /2 \ . -$$ - -Likewise, in the isosceles triangle ABC, the angles satisfy: -$$ -2\beta + \gamma = \pi \ . -$$ - -Therefore, -$$ - \alpha = \pi/2 - \gamma ,\ \mathrm{while} \ \beta= \pi/2 - \gamma /2 \ , -$$ - -and so, in particular, -$$ -\alpha < \beta \ . -$$ - -That means side AD opposite angle α is shorter than side AB opposite the larger angle β. But AB = AC. Hence: -$$ -\overline{\mathrm{AC}} > \overline{\mathrm{AD}} \ .
-$$ - -A similar construction shows AC > DC, establishing the theorem. - -An alternative proof (also based upon the triangle postulate) proceeds by considering three positions for point B: (i) as depicted (which is to be proven), or (ii) B coincident with D (which would mean the isosceles triangle had two right angles as base angles plus the vertex angle γ, which would violate the triangle postulate), or lastly, (iii) B interior to the right triangle between points A and D (in which case angle ABC is an exterior angle of a right triangle BDC and therefore larger than π/2, meaning the other base angle of the isosceles triangle also is greater than π/2 and their sum exceeds π in violation of the triangle postulate). - -This theorem establishing inequalities is sharpened by Pythagoras' theorem to the equality that the square of the length of the hypotenuse equals the sum of the squares of the other two sides. - -Consider a triangle whose sides are in an arithmetic progression and let the sides be a, a + d, a + 2d. Then the triangle inequality requires that -$$ -0<a ,\quad 0<a+d ,\quad 0<a+2d ,\quad a+(a+d)>a+2d ,\quad (a+d)+(a+2d)>a . -$$ - -To satisfy all these inequalities requires -$$ -a>0 \text{ and } -\frac{a}{3}<d<a . -$$ - -When d is chosen such that d = a/3, it generates a right triangle that is always similar to the Pythagorean triple 3, 4, 5. - -Now consider a triangle whose sides are in a geometric progression and let the sides be a, ar, ar^2. Then the triangle inequality requires that -$$ -0<a ,\quad a+ar>ar^2 ,\quad a+ar^2>ar ,\quad ar+ar^2>a . -$$ - -The first inequality requires a > 0; consequently it can be divided through and eliminated. With a > 0, the middle inequality only requires r > 0. This now leaves the first and third inequalities needing to satisfy - -\begin{align} -r^2+r-1 & {} >0 \\ -r^2-r-1 & {} <0. -\end{align} - -The first of these quadratic inequalities requires r to range in the region beyond the value of the positive root of the quadratic equation r^2 + r − 1 = 0, i.e. r > φ − 1 where φ is the golden ratio. The second quadratic inequality requires r to range between 0 and the positive root of the quadratic equation r^2 − r − 1 = 0, i.e. 0 < r < φ. The combined requirements result in r being confined to the range -$$ -\varphi - 1 < r <\varphi \text{ and } a >0. -$$ - -When the common ratio r is chosen such that $r = \sqrt{\varphi}$, it generates a right triangle that is always similar to the Kepler triangle. - -The triangle inequality can be extended by mathematical induction to arbitrary polygonal paths, showing that the total length of such a path is no less than the length of the straight line between its endpoints. Consequently, the length of any polygon side is always less than the sum of the other polygon side lengths. - -Consider a quadrilateral whose sides are in a geometric progression and let the sides be a, ar, ar^2, ar^3. Then the generalized polygon inequality requires that -$$ -0<a ,\quad 0<ar ,\quad 0<ar^2 ,\quad 0<ar^3 ,\quad a+ar+ar^2>ar^3 ,\quad ar+ar^2+ar^3>a . -$$ - -These inequalities for a > 0 reduce to the following -$$ - r^3+r^2+r-1>0 -$$ -$$ - r^3-r^2-r-1<0. -$$ - -The left-hand side polynomials of these two inequalities have roots that are the tribonacci constant and its reciprocal. Consequently, r is limited to the range 1/t < r < t where t is the tribonacci constant. - -This generalization can be used to prove that the shortest curve between two points in Euclidean geometry is a straight line. - -No polygonal path between two points is shorter than the line between them. This implies that no curve can have an arc length less than the distance between its endpoints. By definition, the arc length of a curve is the least upper bound of the lengths of all polygonal approximations of the curve. The result for polygonal paths shows that the straight line between the endpoints is the shortest of all the polygonal approximations. Because the arc length of the curve is greater than or equal to the length of every polygonal approximation, the curve itself cannot be shorter than the straight line path.
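Returning to the two progression examples above, the following hedged numerical check (our sketch, not from the article) scans ratios r and confirms that a geometric triple a, ar, ar² is a triangle exactly when φ − 1 < r < φ, and a geometric quadruple satisfies the polygon inequality exactly when 1/t < r < t for the tribonacci constant t.

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
# Tribonacci constant: the real root of r^3 - r^2 - r - 1 = 0.
t = max(np.roots([1, -1, -1, -1]).real)

def polygon_ok(sides):
    """Strict polygon inequality: the longest side is less than the sum of the rest."""
    return 2 * max(sides) < sum(sides)

for r in np.linspace(0.01, 2.5, 100_000):
    tri = polygon_ok([1.0, r, r**2])
    quad = polygon_ok([1.0, r, r**2, r**3])
    # Skip sample points too close to a boundary, where floating-point
    # comparison against the exact endpoints is not meaningful.
    if min(abs(r - b) for b in (phi - 1, phi, 1 / t, t)) > 1e-9:
        assert tri == (phi - 1 < r < phi)
        assert quad == (1 / t < r < t)
print("golden-ratio and tribonacci bounds confirmed numerically")
```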
- -The converse of the triangle inequality theorem is also true: if three real numbers are such that each is less than the sum of the others, then there exists a triangle with these numbers as its side lengths and with positive area; and if one number equals the sum of the other two, there exists a degenerate triangle (that is, with zero area) with these numbers as its side lengths. - -In either case, if the side lengths are a, b, c we can attempt to place a triangle in the Euclidean plane as shown in the diagram. We need to prove that there exists a real number h consistent with the values a, b, and c, in which case this triangle exists. - -By the Pythagorean theorem we have b^2 = h^2 + d^2 and a^2 = h^2 + (c − d)^2 according to the figure at the right. Subtracting these yields a^2 − b^2 = c^2 − 2cd. This equation allows us to express d in terms of the sides of the triangle: -$$ -d=\frac{-a^2+b^2+c^2}{2c}. -$$ - -For the height of the triangle we have that h^2 = b^2 − d^2. By replacing d with the formula given above, we have -$$ -h^2 = b^2-\left(\frac{-a^2+b^2+c^2}{2c}\right)^2. -$$ - -For a real number h to satisfy this, $h^2$ must be non-negative: -$$ -b^2-\left (\frac{-a^2+b^2+c^2}{2c}\right) ^2 \ge 0, -$$ -$$ -\left( b- \frac{-a^2+b^2+c^2}{2c}\right) \left( b+ \frac{-a^2+b^2+c^2}{2c}\right) \ge 0, -$$ -$$ -\left(a^2-(b-c)^2\right)\left((b+c)^2-a^2 \right) \ge 0, -$$ -$$ -(a+b-c)(a-b+c)(b+c+a)(b+c-a) \ge 0, -$$ -$$ -(a+b-c)(a+c-b)(b+c-a) \ge 0 -$$ - -(dividing out the positive factor $b+c+a$), which holds if the triangle inequality is satisfied for all sides. Therefore there does exist a real number h consistent with the sides a, b, c, and the triangle exists. If each triangle inequality holds strictly, h > 0 and the triangle is non-degenerate (has positive area); but if one of the inequalities holds with equality, so h = 0, the triangle is degenerate. - -The area of a triangular face of a tetrahedron is less than or equal to the sum of the areas of the other three triangular faces. More generally, in Euclidean space the hypervolume of an (n − 1)-facet of an n-simplex is less than or equal to the sum of the hypervolumes of the other n facets. - -Much as the triangle inequality generalizes to a polygon inequality, the inequality for a simplex of any dimension generalizes to a polytope of any dimension: the hypervolume of any facet of a polytope is less than or equal to the sum of the hypervolumes of the remaining facets. - -In some cases the tetrahedral inequality is stronger than several applications of the triangle inequality. For example, the triangle inequality appears to allow the possibility of four points A, B, C, and Z in Euclidean space such that distances - -AB = BC = CA = 7 - -and - -AZ = BZ = CZ = 4. - -However, points with such distances cannot exist: the area of the 7-7-7 equilateral triangle ABC would be approximately 21.22, which is larger than three times the area of a 7-4-4 isosceles triangle (approximately 6.78 each, by Heron's formula), and so the arrangement is forbidden by the tetrahedral inequality.
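The 7-7-7 versus 7-4-4 computation above is easy to reproduce; the following sketch (ours, not from the article) uses Heron's formula to confirm the stated areas and the failure of the tetrahedral inequality for these distances.

```python
import math

def heron(a, b, c):
    """Area of a triangle with sides a, b, c by Heron's formula."""
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

face = heron(7, 7, 7)   # ~21.22, the equilateral triangle ABC
side = heron(7, 4, 4)   # ~6.78, each of the triangles ABZ, BCZ, CAZ
# 21.217... > 20.333..., so no point Z with these distances can exist.
print(face, 3 * side)
```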
- -In a normed vector space V, one of the defining properties of the norm is the triangle inequality: -$$ - \|x + y\| \leq \|x\| + \|y\| \quad \forall x, y \in V -$$ - -that is, the norm of the sum of two vectors is at most as large as the sum of the norms of the two vectors. This is also referred to as subadditivity. For any proposed function to behave as a norm, it must satisfy this requirement. - -If the normed space is Euclidean, or, more generally, strictly convex, then $\|x+y\|=\|x\|+\|y\|$ if and only if the triangle formed by x, y, and x + y, is degenerate, that is, x and y are on the same ray, i.e., x = 0 or y = 0, or x = α y for some α > 0. This property characterizes strictly convex normed spaces such as the ℓp spaces with 1 < p < ∞. However, there are normed spaces in which this is not true. For instance, consider the plane with the ℓ1 norm (the Manhattan distance) and denote x = (1, 0) and y = (0, 1). Then the triangle formed by x, y, and x + y, is non-degenerate but -$$ -\|x+y\|=\|(1,1)\|=|1|+|1|=2=\|x\|+\|y\|. -$$ - -*Absolute value as norm for the real line. To be a norm, the triangle inequality requires that the absolute value satisfy for any real numbers x and y: |x + y| \leq |x|+|y|, which it does. - -Proof: -$$ --\left\vert x \right\vert \leq x \leq \left\vert x \right\vert -$$ -$$ --\left\vert y \right\vert \leq y \leq \left\vert y \right\vert -$$ - -After adding, -$$ --( \left\vert x \right\vert + \left\vert y \right\vert ) \leq x+y \leq \left\vert x \right\vert + \left\vert y \right\vert -$$ - -Using the fact that $\left\vert b \right\vert \leq a \Leftrightarrow -a \leq b \leq a$ (with b replaced by x+y and a by $\left\vert x \right\vert + \left\vert y \right\vert$), we have -$$ -|x + y| \leq |x|+|y| -$$ - -The triangle inequality is useful in mathematical analysis for determining the best upper estimate on the size of the sum of two numbers, in terms of the sizes of the individual numbers. - -There is also a lower estimate, which can be found using the reverse triangle inequality which states that for any real numbers x and y: -$$ -|x-y| \geq \biggl||x|-|y|\biggr|. -$$ - -*Inner product as norm in an inner product space. If the norm arises from an inner product (as is the case for Euclidean spaces), then the triangle inequality follows from the Cauchy–Schwarz inequality as follows: Given vectors $x$ and $y$, and denoting the inner product as $\langle x , y\rangle $: -$$ -\|x+y\|^2 = \langle x+y, x+y \rangle = \|x\|^2 + \langle x, y \rangle + \langle y, x \rangle + \|y\|^2 \le \|x\|^2 + 2|\langle x, y \rangle| + \|y\|^2 \le \|x\|^2 + 2\|x\|\|y\| + \|y\|^2 = \left(\|x\| + \|y\|\right)^2. -$$ - -The Cauchy–Schwarz inequality turns into an equality if and only if x and y are linearly dependent. The inequality -$$ -\langle x, y \rangle + \langle y, x \rangle \le 2\left|\left\langle x, y \right\rangle\right| -$$ - -turns into an equality for linearly dependent $x$ and $y$ if and only if one of the vectors x or y is a nonnegative scalar multiple of the other. - -Taking the square root of the final result gives the triangle inequality. - -*p-norm: a commonly used norm is the p-norm: \|x\|_p = \left( \sum_{i=1}^n |x_i|^p \right) ^{1/p} \ , where the xi are the components of vector x. For p = 2 the p-norm becomes the Euclidean norm: \|x\|_2 = \left( \sum_{i=1}^n |x_i|^2 \right) ^{1/2} = \left( \sum_{i=1}^n x_{i}^2 \right) ^{1/2} \ , which is Pythagoras' theorem in n-dimensions, a very special case corresponding to an inner product norm. Except for the case p = 2, the p-norm is not an inner product norm, because it does not satisfy the parallelogram law. The triangle inequality for general values of p is called Minkowski's inequality. It takes the form:\|x+y\|_p \le \|x\|_p + \|y\|_p \ .
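A brief numerical illustration (ours) of the strict-convexity point above: in the ℓ1 norm the non-degenerate pair x = (1, 0), y = (0, 1) attains equality, while in the Euclidean ℓ2 norm the inequality is strict for the same pair; Minkowski's inequality is also spot-checked on random vectors for several p.

```python
import numpy as np

def p_norm(v, p):
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# l1: equality without degeneracy (this norm is not strictly convex)
print(p_norm(x + y, 1), p_norm(x, 1) + p_norm(y, 1))   # 2.0 and 2.0

# l2: strict inequality for the same non-degenerate pair
print(p_norm(x + y, 2), p_norm(x, 2) + p_norm(y, 2))   # 1.414... < 2.0

# Minkowski's inequality on random samples, with a small float tolerance
rng = np.random.default_rng(0)
for p in (1, 1.5, 2, 3, 10):
    for _ in range(1000):
        u, v = rng.normal(size=5), rng.normal(size=5)
        assert p_norm(u + v, p) <= p_norm(u, p) + p_norm(v, p) + 1e-12
print("Minkowski inequality verified on random samples")
```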
- -The triangle inequality is responsible for most of the interesting structure on a metric space, namely, convergence. This is because the remaining requirements for a metric are rather simple in comparison. For example, the fact that any convergent sequence in a metric space is a Cauchy sequence is a direct consequence of the triangle inequality, because if we choose any $x_n$ and $x_m$ such that $d(x_n, x) < \varepsilon/2$ and $d(x_m, x) < \varepsilon/2$, where ε > 0 is given and arbitrary (as in the definition of a limit in a metric space), then by the triangle inequality, $d(x_n, x_m) \leq d(x_n, x) + d(x_m, x) < \varepsilon/2 + \varepsilon/2 = \varepsilon$, so that the sequence $\{x_n\}$ is a Cauchy sequence, by definition. - -This version of the triangle inequality reduces to the one stated above in case of normed vector spaces where a metric is induced via d(x, y) ≔ ‖x − y‖, with x − y being the vector pointing from point y to x. - -The reverse triangle inequality is an elementary consequence of the triangle inequality that gives lower bounds instead of upper bounds. For plane geometry, the statement is: - -Any side of a triangle is greater than or equal to the difference between the other two sides. - -In the case of a normed vector space, the statement is: -$$ -\bigg|\|x\|-\|y\|\bigg| \leq \|x-y\|, -$$ - -or for metric spaces, $|d(x, y) - d(x, z)| \leq d(y, z)$. - -This implies that the norm $\|\cdot\|$ as well as the distance function $d(x,\cdot)$ are Lipschitz continuous with Lipschitz constant 1, and therefore are in particular uniformly continuous. - -The proof for the reverse triangle inequality uses the regular triangle inequality, and $ \|y-x\| = \|{-}1(x-y)\| = |{-}1|\cdot\|x-y\| = \|x-y\| $: -$$ - \|x\| = \|(x-y) + y\| \leq \|x-y\| + \|y\| \Rightarrow \|x\| - \|y\| \leq \|x-y\|, -$$ -$$ - \|y\| = \|(y-x) + x\| \leq \|y-x\| + \|x\| \Rightarrow \|x\| - \|y\| \geq -\|x-y\|, -$$ - -Combining these two statements gives: -$$ - -\|x-y\| \leq \|x\|-\|y\| \leq \|x-y\| \Rightarrow \bigg|\|x\|-\|y\|\bigg| \leq \|x-y\|. -$$ - -Here $\operatorname{sim}(x,y)$ denotes the cosine similarity, the cosine of the angle between the vectors x and y. By applying the cosine function to the triangle inequality and reverse triangle inequality for arc lengths and employing the angle addition and subtraction formulas for cosines, it follows immediately that - -\operatorname{sim}(x,z) \geq \operatorname{sim}(x,y) \cdot \operatorname{sim}(y,z) - \sqrt{\left(1-\operatorname{sim}(x,y)^2\right)\cdot\left(1-\operatorname{sim}(y,z)^2\right)} - -and - -\operatorname{sim}(x,z) \leq \operatorname{sim}(x,y) \cdot \operatorname{sim}(y,z) + \sqrt{\left(1-\operatorname{sim}(x,y)^2\right)\cdot\left(1-\operatorname{sim}(y,z)^2\right)}. - -With these formulas, one needs to compute a square root for each triple of vectors {x, y, z} that is examined rather than arccos(sim(x,y)) for each pair of vectors {x, y} examined, and could be a performance improvement when the number of triples examined is less than the number of pairs examined. - -The Minkowski space metric $ \eta_{\mu \nu} $ is not positive-definite, which means that $ \|x\|^2 = \eta_{\mu \nu} x^\mu x^\nu$ can have either sign or vanish, even if the vector x is non-zero. Moreover, if x and y are both timelike vectors lying in the future light cone, the triangle inequality is reversed: -$$ - \|x+y\| \geq \|x\| + \|y\|. -$$ - -A physical example of this inequality is the twin paradox in special relativity. The same reversed form of the inequality holds if both vectors lie in the past light cone, and if one or both are null vectors. The result holds in n + 1 dimensions for any n ≥ 1. If the plane defined by x and y is spacelike (and therefore a Euclidean subspace) then the usual triangle inequality holds.
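The reversed inequality is easy to see numerically; the sketch below (ours, and it assumes the signature convention (+, −, −, −)) checks it for random future-directed timelike vectors, with the twin-paradox reading that the straight worldline x + y has the longest proper time.

```python
import numpy as np

rng = np.random.default_rng(1)

def mink_norm(v):
    """Proper-time 'norm' sqrt(t^2 - |x|^2) for signature (+, -, -, -)."""
    return np.sqrt(v[0] ** 2 - np.sum(v[1:] ** 2))

def random_future_timelike():
    x = rng.normal(size=3)
    # Choose t strictly larger than |x| so the vector lies inside the
    # future light cone.
    t = np.linalg.norm(x) * (1.01 + rng.random())
    return np.concatenate(([t], x))

for _ in range(10_000):
    x, y = random_future_timelike(), random_future_timelike()
    assert mink_norm(x + y) >= mink_norm(x) + mink_norm(y) - 1e-9
print("reversed triangle inequality holds for all sampled timelike pairs")
```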
diff --git a/wiki/wikipedia/4132.txt b/wiki/wikipedia/4132.txt deleted file mode 100644 index faf7768035168e15d78b9e909bd7201459d26434..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4132.txt +++ /dev/null @@ -1,15 +0,0 @@ -CloudMe is a file storage service operated by CloudMe AB that offers cloud storage, file synchronization and client software. It features a blue folder that appears on all devices with the same content; all files are synchronized between devices. The CloudMe service is offered with a freemium business model and provides encrypted SSL connection with SSL Extended Validation Certificate. CloudMe provides client software for Microsoft Windows, macOS, Linux, Android, iOS, Google TV, Samsung Smart TV, WD TV, Windows Storage Server for NAS and web browsers. - -As a cloud sync storage provider, CloudMe has a strong focus on the European market and differentiates itself from other storage providers with mobility and media features like Samsung SmartTV support. - -Recently Novell announced support for the CloudMe service in their Dynamic File Services Suite. Novosoft Handy Backup version 7.3 also announced support for CloudMe. WinZip is also integrated with CloudMe. There are many third party mobile apps and software available for CloudMe, many using the WebDAV support of CloudMe. - -CloudMe was founded by Daniel Arthursson in 2012 and is mainly owned by Xcerion. The company runs its own servers and operates from Sweden. In 2012 CloudMe received the Red Herring Top 100 Global company, AlwaysON Global 250 award, White Bull 2012 Yearling Award and the White Bull 2014 Longhorn Award. - -Previously CloudMe.com was called iCloud.com, but the service changed name after Apple acquired the domain and trademark for a rumoured 4.5 million dollars. For a while visitors to icloud.com were directed to cloudme.com. After the name change, the former iCloud.com service was split into two companies and services, CloudMe for file sync and storage, and CloudTop as the virtual cloud desktop that previously was the main attraction of the iCloud.com service and included file storage. Xcerion, the major owner of CloudMe and CloudTop, initially gained an investment of $12 million to build the iCloud service. - -Using a SaaS model, the CloudMe service is provided in a free version (3 GB storage, up to 19 GB with referral program), a model often called freemium, and premium versions with either 10, 25, 100, 200, 500 GB storage for consumers, 2 TB and 5 TB for business. The closest competitor to CloudMe is Dropbox. - -CloudMe features a cloud storage and sync solution that allows the users to store, access and share their content, both with each other and with people outside the service. Sharing can be done by email, text messaging, Facebook and Google. Files can be stored in a blue folder, which is synchronized to all connected computers and devices. A web desktop and cloud OS service called CloudTop.com is available that uses CloudMe as its internet file system. - -CloudMe AB is located on Drottninggatan 23 in Linköping, Sweden.
diff --git a/wiki/wikipedia/4133.txt b/wiki/wikipedia/4133.txt deleted file mode 100644 index 9b95692ef4b3a45a06f801c2c8b458e4f0c584c7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4133.txt +++ /dev/null @@ -1,21 +0,0 @@ -In commutative algebra, Krull's principal ideal theorem, named after Wolfgang Krull (1899–1971), gives a bound on the height of a principal ideal in a commutative Noetherian ring. The theorem is sometimes referred to by its German name, Krulls Hauptidealsatz (Satz meaning "proposition" or "theorem"). - -Precisely, if R is a Noetherian ring and I is a principal, proper ideal of R, then each minimal prime ideal over I has height at most one. - -This theorem can be generalized to ideals that are not principal, and the result is often called Krull's height theorem. This says that if R is a Noetherian ring and I is a proper ideal generated by n elements of R, then each minimal prime over I has height at most n. The converse is also true: if a prime ideal has height n, then it is a minimal prime ideal over an ideal generated by n elements. - -The principal ideal theorem and the generalization, the height theorem, both follow from the fundamental theorem of dimension theory in commutative algebra (see also below for the direct proofs). Bourbaki's Commutative Algebra gives a direct proof. Kaplansky's Commutative Rings includes a proof due to David Rees. - -Let $A$ be a Noetherian ring, x an element of it and $\mathfrak{p}$ a minimal prime over x. Replacing A by the localization $A_\mathfrak{p}$, we can assume $A$ is local with the maximal ideal $\mathfrak{p}$. Let $\mathfrak{q} \subsetneq \mathfrak{p}$ be a strictly smaller prime ideal and let $\mathfrak{q}^{(n)} = \mathfrak{q}^n A_{\mathfrak{q}} \cap A$, which is a $\mathfrak{q}$-primary ideal called the n-th symbolic power of $\mathfrak{q}$. It forms a descending chain of ideals $A \supset \mathfrak{q} \supset \mathfrak{q}^{(2)} \supset \mathfrak{q}^{(3)} \supset \cdots$. Thus, there is the descending chain of ideals $\mathfrak{q}^{(n)} + (x)/(x)$ in the ring $\overline{A} = A/(x)$. Now, the radical $\sqrt{(x)}$ is the intersection of all minimal prime ideals containing $x$; $\mathfrak{p}$ is among them. But $\mathfrak{p}$ is a unique maximal ideal and thus $\sqrt{(x)} = \mathfrak{p}$. Since $(x)$ contains some power of its radical, it follows that $\overline{A}$ is an Artinian ring and thus the chain $\mathfrak{q}^{(n)} + (x)/(x)$ stabilizes and so there is some n such that $\mathfrak{q}^{(n)} + (x) = \mathfrak{q}^{(n+1)} + (x)$. It implies: -$$ -\mathfrak{q}^{(n)} = \mathfrak{q}^{(n+1)} + x \mathfrak{q}^{(n)} -$$, - -from the fact $\mathfrak{q}^{(n)}$ is $\mathfrak{q}$-primary (if $y$ is in $\mathfrak{q}^{(n)}$, then $y = z + ax$ with $z \in \mathfrak{q}^{(n+1)}$ and $a \in A$. Since $\mathfrak{p}$ is minimal over $x$, $x \not\in \mathfrak{q}$ and so $ax \in \mathfrak{q}^{(n)}$ implies $a$ is in $\mathfrak{q}^{(n)}$.) Now, quotienting out both sides by $\mathfrak{q}^{(n+1)}$ yields $\mathfrak{q}^{(n)}/\mathfrak{q}^{(n+1)} = (x)\mathfrak{q}^{(n)}/\mathfrak{q}^{(n+1)}$. Then, by Nakayama's lemma (which says a finitely generated module M is zero if $M = IM$ for some ideal I contained in the radical), we get $M = \mathfrak{q}^{(n)}/\mathfrak{q}^{(n+1)} = 0$; i.e., $\mathfrak{q}^{(n)} = \mathfrak{q}^{(n+1)}$ and thus $\mathfrak{q}^{n} A_{\mathfrak{q}} = \mathfrak{q}^{n+1} A_{\mathfrak{q}}$. 
Using Nakayama's lemma again, $\mathfrak{q}^{n} A_{\mathfrak{q}} = 0$ and $A_{\mathfrak{q}}$ is an Artinian ring; thus, the height of $\mathfrak{q}$ is zero. $\square$ - -Krull’s height theorem can be proved as a consequence of the principal ideal theorem by induction on the number of elements. Let $x_1, \dots, x_n$ be elements in $A$, $\mathfrak{p}$ a minimal prime over $(x_1, \dots, x_n)$ and $\mathfrak{q} \subsetneq \mathfrak{p}$ a prime ideal such that there is no prime strictly between them. Replacing $A$ by the localization $A_{\mathfrak{p}}$ we can assume $(A, \mathfrak{p})$ is a local ring; note we then have $\mathfrak{p} = \sqrt{(x_1, \dots, x_n)}$. By minimality, $\mathfrak{q}$ cannot contain all the $x_i$; relabeling the subscripts, say, $x_1 \not\in \mathfrak{q}$. Since every prime ideal containing $\mathfrak{q} + (x_1)$ is between $\mathfrak{q}$ and $\mathfrak{p}$, $\sqrt{\mathfrak{q} + (x_1)} = \mathfrak{p}$ and thus we can write for each $i \ge 2$, -$$ -x_i^{r_i} = y_i + a_i x_1 -$$ - -with $y_i \in \mathfrak{q}$ and $a_i \in A$. Now we consider the ring $\overline{A} = A/(y_2, \dots, y_n)$ and the corresponding chain $\overline{\mathfrak{q}} \subset \overline{\mathfrak{p}}$ in it. If $\overline{\mathfrak{r}}$ is a minimal prime over $\overline{x_1}$, then $\mathfrak{r}$ contains $x_1, x_2^{r_2}, \dots, x_n^{r_n}$ and thus $\mathfrak{r} = \mathfrak{p}$; that is to say, $\overline{\mathfrak{p}}$ is a minimal prime over $\overline{x_1}$ and so, by Krull’s principal ideal theorem, $\overline{\mathfrak{q}}$ is a minimal prime (over zero); $\mathfrak{q}$ is a minimal prime over $(y_2, \dots, y_n)$. By inductive hypothesis, $\operatorname{ht}(\mathfrak{q}) \le n-1$ and thus $\operatorname{ht}(\mathfrak{p}) \le n$. $\square$ diff --git a/wiki/wikipedia/4134.txt b/wiki/wikipedia/4134.txt deleted file mode 100644 index 826e5a617307d3d4cc9a1ea53eae2c53e5282c00..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4134.txt +++ /dev/null @@ -1,26 +0,0 @@ -Absorption is a valid argument form and rule of inference of propositional logic. The rule states that if $P$ implies $Q$, then $P$ implies $P$ and $Q$. The rule makes it possible to introduce conjunctions to proofs. It is called the law of absorption because the term $Q$ is "absorbed" by the term $P$ in the consequent. The rule can be stated: -$$ -\frac{P \to Q}{\therefore P \to (P \land Q)} -$$ - -where the rule is that wherever an instance of "$P \to Q$" appears on a line of a proof, "$P \to (P \land Q)$" can be placed on a subsequent line. - -The absorption rule may be expressed as a sequent: -$$ -P \to Q \vdash P \to (P \land Q) -$$ - -where $\vdash$ is a metalogical symbol meaning that $P \to (P \land Q)$ is a syntactic consequence of $(P \rightarrow Q)$ in some logical system; - -and expressed as a truth-functional tautology or theorem of propositional logic. The principle was stated as a theorem of propositional logic by Russell and Whitehead in Principia Mathematica as: -$$ -(P \to Q) \leftrightarrow (P \to (P \land Q)) -$$ - -where $P$ and $Q$ are propositions expressed in some formal system. - -If it will rain, then I will wear my coat.
- -Therefore, if it will rain then it will rain and I will wear my coat.
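As a quick mechanical check (our sketch, not part of the article), the following enumerates all truth assignments and confirms that $P \to Q$ and $P \to (P \land Q)$ have identical truth tables, which is the biconditional form stated above.

```python
from itertools import product

def implies(p, q):
    return (not p) or q

for P, Q in product([False, True], repeat=2):
    lhs = implies(P, Q)            # P -> Q
    rhs = implies(P, P and Q)      # P -> (P & Q)
    assert lhs == rhs
    print(f"P={P!s:5} Q={Q!s:5}  P->Q = {lhs!s:5}  P->(P&Q) = {rhs!s:5}")
print("absorption verified: the two forms are logically equivalent")
```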
diff --git a/wiki/wikipedia/4135.txt b/wiki/wikipedia/4135.txt deleted file mode 100644 index de4fa97db02e02e31ae6299604349299c4c558ad..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4135.txt +++ /dev/null @@ -1,68 +0,0 @@ -The Kepler conjecture, named after the 17th-century mathematician and astronomer Johannes Kepler, is a mathematical theorem about sphere packing in three-dimensional Euclidean space. It states that no arrangement of equally sized spheres filling space has a greater average density than that of the cubic close packing (face-centered cubic) and hexagonal close packing arrangements. The density of these arrangements is around 74.05%. - -In 1998 Thomas Hales, following an approach suggested by Fejes Tóth, announced that he had a proof of the Kepler conjecture. Hales' proof is a proof by exhaustion involving the checking of many individual cases using complex computer calculations. Referees said that they were "99% certain" of the correctness of Hales' proof, and the Kepler conjecture was accepted as a theorem. In 2014, the Flyspeck project team, headed by Hales, announced the completion of a formal proof of the Kepler conjecture using a combination of the Isabelle and HOL Light proof assistants. In 2017, the formal proof was accepted by the journal Forum of Mathematics, Pi. - -Imagine filling a large container with small equal-sized spheres: say a porcelain gallon jug with identical marbles. The "density" of the arrangement is equal to the total volume of all the marbles, divided by the volume of the jug. To maximize the number of marbles in the jug means to create an arrangement of marbles stacked between the sides and bottom of the jug, that has the highest possible density, so that the marbles are packed together as closely as possible. - -Experiment shows that dropping the marbles in randomly, with no effort to arrange them tightly, will achieve a density of around 65%. However, a higher density can be achieved by carefully arranging the marbles as follows: - -# For the first layer of marbles, arrange them in a hexagonal lattice (the honeycomb pattern) - -# Put the next layer of marbles in the lowest lying gaps you can find above and between the marbles in the first layer, regardless of pattern - -# Continue with the same procedure of filling in the lowest gaps in the prior layer, for the third and remaining layers, until the marbles reach the top edge of the jug. - -At each step there are at least two choices of how to place the next layer, so this otherwise unplanned method of stacking the spheres creates an uncountably infinite number of equally dense packings. The best known of these are called cubic close packing and hexagonal close packing. Each of these arrangements has an average density of -$$ -\frac{\pi}{3\sqrt{2}} = 0.740480489\ldots -$$ - -The Kepler conjecture says that this is the best that can be done – no other arrangement of marbles has a higher average density: despite there being astoundingly many different arrangements possible that follow the same procedure as steps 1–3, no packing (according to the procedure or not) can possibly fit more marbles into the same jug.
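The packing density quoted above can be recomputed from the face-centered cubic unit cell; the following sketch (ours, using the standard unit-cell arithmetic) does the calculation: an FCC cell of edge length 2√2·r contains the equivalent of 4 spheres of radius r.

```python
import math

r = 1.0                          # sphere radius
edge = 2 * math.sqrt(2) * r      # FCC cell edge: spheres touch along a face diagonal
spheres_per_cell = 4             # 8 corners * 1/8 + 6 faces * 1/2
density = spheres_per_cell * (4 / 3) * math.pi * r**3 / edge**3
print(density, math.pi / (3 * math.sqrt(2)))   # both print 0.74048...
```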
The conjecture was first stated by Kepler in his 1611 paper 'On the six-cornered snowflake'. He had started to study arrangements of spheres as a result of his correspondence with the English mathematician and astronomer Thomas Harriot in 1606. Harriot was a friend and assistant of Sir Walter Raleigh, who had asked Harriot to find formulas for counting stacked cannonballs; an assignment which in turn led Harriot to wonder what the best way to stack cannonballs was. Harriot published a study of various stacking patterns in 1591, and went on to develop an early version of atomic theory. - -Kepler did not have a proof of the conjecture, and the next step was taken by Gauss, who proved that the Kepler conjecture is true if the spheres have to be arranged in a regular lattice. - -This meant that any packing arrangement that disproved the Kepler conjecture would have to be an irregular one. But eliminating all possible irregular arrangements is very difficult, and this is what made the Kepler conjecture so hard to prove. In fact, there are irregular arrangements that are denser than the cubic close packing arrangement over a small enough volume, but any attempt to extend these arrangements to fill a larger volume is now known to always reduce their density. - -After Gauss, no further progress was made towards proving the Kepler conjecture in the nineteenth century. In 1900 David Hilbert included it in his list of twenty-three unsolved problems of mathematics; it forms part of Hilbert's eighteenth problem. - -The next step toward a solution was taken by László Fejes Tóth. Fejes Tóth showed that the problem of determining the maximum density of all arrangements (regular and irregular) could be reduced to a finite (but very large) number of calculations. This meant that a proof by exhaustion was, in principle, possible. As Fejes Tóth realised, a fast enough computer could turn this theoretical result into a practical approach to the problem. - -Meanwhile, attempts were made to find an upper bound for the maximum density of any possible arrangement of spheres. English mathematician Claude Ambrose Rogers (see Rogers) established an upper bound value of about 78%, and subsequent efforts by other mathematicians reduced this value slightly, but this was still much larger than the cubic close packing density of about 74%. - -In 1990, Wu-Yi Hsiang claimed to have proven the Kepler conjecture. The proof, which used geometric methods, was praised by Encyclopædia Britannica and Science, and Hsiang was also honored at joint meetings of the AMS and MAA. However Gábor Fejes Tóth (the son of László Fejes Tóth) stated in his review of the paper "As far as details are concerned, my opinion is that many of the key statements have no acceptable proofs." - -Hales gave a detailed criticism of Hsiang's work, to which Hsiang responded. The current consensus is that Hsiang's proof is incomplete. - -Following the approach suggested by Fejes Tóth, Thomas Hales, then at the University of Michigan, determined that the maximum density of all arrangements could be found by minimizing a function with 150 variables. In 1992, assisted by his graduate student Samuel Ferguson, he embarked on a research program to systematically apply linear programming methods to find a lower bound on the value of this function for each one of a set of over 5,000 different configurations of spheres. If a lower bound (for the function value) could be found for every one of these configurations that was greater than the value of the function for the cubic close packing arrangement, then the Kepler conjecture would be proved. To find lower bounds for all cases involved solving about 100,000 linear programming problems.
- -When presenting the progress of his project in 1996, Hales said that the end was in sight, but it might take "a year or two" to complete. In August 1998 Hales announced that the proof was complete. At that stage, it consisted of 250 pages of notes and 3 gigabytes of computer programs, data and results. - -Despite the unusual nature of the proof, the editors of the Annals of Mathematics agreed to publish it, provided it was accepted by a panel of twelve referees. In 2003, after four years of work, the head of the referees' panel, Gábor Fejes Tóth, reported that the panel were "99% certain" of the correctness of the proof, but they could not certify the correctness of all of the computer calculations. - -Hales published a 100-page paper describing the non-computer part of his proof in detail. - -Hales and his collaborators described the computational portions in several subsequent papers. Hales and Ferguson received the Fulkerson Prize for outstanding papers in the area of discrete mathematics for 2009. - -In January 2003, Hales announced the start of a collaborative project to produce a complete formal proof of the Kepler conjecture. The aim was to remove any remaining uncertainty about the validity of the proof by creating a formal proof that can be verified by automated proof checking software such as HOL Light and Isabelle. This project is called Flyspeck – the F, P and K standing for Formal Proof of Kepler. Hales estimated that producing a complete formal proof would take around 20 years of work. Hales first published a "blueprint" for the formal proof in 2012; the completion of the project was announced on August 10, 2014. In January 2015 Hales and 21 collaborators submitted a paper titled "A formal proof of the Kepler conjecture" to arXiv, claiming to have proved the conjecture. In 2017, the formal proof was accepted into the Forum of Mathematics journal. - -;Thue's theorem: The regular hexagonal packing is the densest circle packing in the plane (1890). The density is $\pi/\sqrt{12}$. - -The 2-dimensional analog of the Kepler conjecture; the proof is elementary. Henk and Ziegler attribute this result to Lagrange, in 1773 (see references, p. 770). - -A simple proof by Chau and Chung from 2010 uses the Delaunay triangulation for the set of points that are centers of circles in a saturated circle packing. - -;The hexagonal honeycomb conjecture: The most efficient partition of the plane into equal areas is the regular hexagonal tiling. - -Related to Thue's theorem. - -;Dodecahedral conjecture: The volume of the Voronoi polyhedron of a sphere in a packing of equal spheres is at least the volume of a regular dodecahedron with inradius 1. It was proved by McLaughlin, who received the 1999 Morgan Prize for the proof. - -A related problem, whose proof uses similar techniques to Hales' proof of the Kepler conjecture. Conjecture by L. Fejes Tóth in the 1950s. - -;The Kelvin problem: What is the most efficient foam in 3 dimensions? This was conjectured to be solved by the Kelvin structure, and this was widely believed for over 100 years, until disproved in 1993 by the discovery of the Weaire–Phelan structure. The surprising discovery of the Weaire–Phelan structure and disproof of the Kelvin conjecture is one reason for the caution in accepting Hales' proof of the Kepler conjecture. - -;Sphere packing in higher dimensions: In 2016, Maryna Viazovska announced proofs of the optimal sphere packings in dimensions 8 and 24. However, the optimal sphere packing question in dimensions other than 1, 2, 3, 8, and 24 is still open.
- -;Ulam's packing conjecture: It is unknown whether there is a convex solid whose optimal packing density is lower than that of the sphere. diff --git a/wiki/wikipedia/4136.txt b/wiki/wikipedia/4136.txt deleted file mode 100644 index c7c4c16a2e5157745ab3629883c3d0a06cb2b444..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4136.txt +++ /dev/null @@ -1,28 +0,0 @@ -In mathematics, Laplace's principle is a basic theorem in large deviations theory which is similar to Varadhan's lemma. It gives an asymptotic expression for the Lebesgue integral of exp(-θφ(x)) over a fixed set A as θ becomes large. Such expressions can be used, for example, in statistical mechanics to determine the limiting behaviour of a system as the temperature tends to absolute zero. - -Let A be a Lebesgue-measurable subset of d-dimensional Euclidean space $\mathbf{R}^d$ and let $\varphi : \mathbf{R}^d \to \mathbf{R}$ be a measurable function with -$$ -\int_A e^{-\varphi(x)} dx < \infty. -$$ - -Then -$$ -\lim_{\theta \to \infty} \frac1{\theta} \log \int_A e^{-\theta \varphi(x)} dx = - \mathop{\mathrm{ess inf}}_{x \in A} \varphi(x), -$$ - -where ess inf denotes the essential infimum. Heuristically, this may be read as saying that for large θ, -$$ -\int_A e^{-\theta \varphi(x)} dx \approx \exp \left(-\theta \mathop{\mathrm{ess inf}}_{x \in A} \varphi(x) \right). -$$ - -The Laplace principle can be applied to the family of probability measures $\mathbf{P}_\theta$ given by -$$ -\mathbf{P}_\theta (A) = \left( \int_A e^{-\theta \varphi(x)} dx \right) \bigg/ \left( \int_{\mathbf{R}^{d}} e^{-\theta \varphi(y)} dy \right) -$$ - -to give an asymptotic expression for the probability of some event A as θ becomes large. For example, if X is a standard normally distributed random variable on R, then -$$ -\lim_{\varepsilon \downarrow 0} \varepsilon \log \mathbf{P} \big[ \sqrt{\varepsilon} X \in A \big] = - \mathop{\mathrm{ess inf}}_{x \in A} \frac{x^2}{2} -$$ - -for every measurable set A. diff --git a/wiki/wikipedia/4137.txt b/wiki/wikipedia/4137.txt deleted file mode 100644 index 7576a3dde5d30a70a0a168f306f99118a04c96b5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4137.txt +++ /dev/null @@ -1,26 +0,0 @@ -Eilenberg's inequality, also known as the coarea inequality, is a mathematical inequality for Lipschitz-continuous functions between metric spaces. Informally, it gives an upper bound on the average size of the fibers of a Lipschitz map in terms of the Lipschitz constant of the function and the measure of the domain. - -The Eilenberg's inequality has applications in geometric measure theory and manifold theory. It is also a key ingredient in the proof of the coarea formula. - -Let ƒ : X → Y be a Lipschitz-continuous function between metric spaces whose Lipschitz constant is denoted by Lip ƒ. Let s and t be nonnegative real numbers. Then, Eilenberg's inequality states that -$$ -\int_Y^* H^{s}(f^{-1}(y) \cap A) dH^t(y) \leq \frac{v_{s}v_t}{v_{s+t}}(\text{Lip }f)^t H^{s+t}(A), -$$ - -for any A ⊂ X. - -* the asterisk denotes the upper integral, - -* $v_t$ are universal constants; when $t = n$ is a nonnegative integer, $v_n$ equals the volume of the unit ball in $\mathbf{R}^n$, - -* $H^t$ is the t-dimensional Hausdorff measure. - -The use of upper integral is necessary because in general the function $\ y \mapsto H^{s}(A\cap f^{-1}(y))$ - -may fail to be $H^t$-measurable. - -The inequality was first proved by Eilenberg in 1938 for the case when the function was the distance to a fixed point in the metric space.
Then it was generalized in 1943 by Eilenberg and Harrold to the case of any real-valued Lipschitz function on a metric space. - -The inequality in the form above was proved by Federer in 1954, except that he could prove it only under additional assumptions that he conjectured were unnecessary. Years later, Davies proved some deep results about Hausdorff contents and this conjecture was proved as a consequence. But recently a new proof, independent of Davies's result, has been found as well. - -In many texts the inequality is proved for the case where the target space is a Euclidean space or a manifold. This is because the isodiametric inequality is available (locally in the case of manifolds), which allows for a straightforward proof. The isodiametric inequality is not available in general metric spaces. The proof of Eilenberg's inequality in the general case is quite involved and requires the notion of the so-called weighted integrals. diff --git a/wiki/wikipedia/4138.txt b/wiki/wikipedia/4138.txt deleted file mode 100644 index 9b95692ef4b3a45a06f801c2c8b458e4f0c584c7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4138.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, the Agranovich–Dynin formula is a formula for the index of an elliptic system of differential operators, introduced by Agranovich and Dynin. diff --git a/wiki/wikipedia/4139.txt b/wiki/wikipedia/4139.txt deleted file mode 100644 index 5247d6a4c1df874cf4b594c42d059940d3be957e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4139.txt +++ /dev/null @@ -1,79 +0,0 @@ -In mathematics, in particular in differential geometry, mathematical physics, and representation theory a Weitzenböck identity, named after Roland Weitzenböck, expresses a relationship between two second-order elliptic operators on a manifold with the same principal symbol. Usually Weitzenböck formulae are implemented for G-invariant self-adjoint operators between vector bundles associated to some principal G-bundle, although the precise conditions under which such a formula exists are difficult to formulate. This article focuses on three examples of Weitzenböck identities: from Riemannian geometry, spin geometry, and complex analysis. - -In Riemannian geometry there are two notions of the Laplacian on differential forms over an oriented compact Riemannian manifold M. The first definition uses the divergence operator δ defined as the formal adjoint of the de Rham operator d: -$$ -\int_M \langle \alpha,\delta\beta\rangle := \int_M\langle d\alpha,\beta\rangle -$$ - -where α is any p-form and β is any (p + 1)-form, and $\langle -,-\rangle$ is the metric induced on the bundle of (p + 1)-forms. The usual form Laplacian is then given by -$$ -\Delta = d\delta +\delta d. -$$ - -On the other hand, the Levi-Civita connection supplies a differential operator -$$ -\nabla:\Omega^pM\rightarrow \Omega^1M \otimes\Omega^pM , -$$ - -where ΩpM is the bundle of p-forms. The Bochner Laplacian is given by -$$ -\Delta'=\nabla^*\nabla -$$ - -where $\nabla^*$ is the adjoint of $\nabla$. - -The Weitzenböck formula then asserts that -$$ -\Delta' - \Delta = A -$$ - -where A is a linear operator of order zero involving only the curvature.
-
-The precise form of A is given, up to an overall sign depending on curvature conventions, by
-$$
-A=\frac{1}{2}\langle R(\theta,\theta)\#,\#\rangle + \operatorname{Ric}(\theta,\#) ,
-$$
-
-where
-
-*R is the Riemann curvature tensor,
-
-* Ric is the Ricci tensor,
-
-* $\theta:T^*M\otimes\Omega^pM\rightarrow\Omega^{p+1}M$ is the map that takes the wedge product of a 1-form and p-form and gives a (p+1)-form,
-
-* $\#:\Omega^{p+1}M\rightarrow T^*M\otimes\Omega^pM$ is the universal derivation inverse to θ on 1-forms.
-
-If M is an oriented spin manifold with Dirac operator ð, then one may form the spin Laplacian Δ = ð^2 on the spin bundle. On the other hand, the Levi-Civita connection extends to the spin bundle to yield a differential operator
-$$
-\nabla:SM\rightarrow T^*M\otimes SM.
-$$
-
-As in the case of Riemannian manifolds, let $\Delta'=\nabla^*\nabla$. This is another self-adjoint operator and, moreover, has the same leading symbol as the spin Laplacian. The Weitzenböck formula yields:
-$$
-\Delta'-\Delta=-\frac{1}{4}Sc
-$$
-
-where Sc is the scalar curvature. This result is also known as the Lichnerowicz formula.
-
-If M is a compact Kähler manifold, there is a Weitzenböck formula relating the $\bar{\partial}$-Laplacian (see Dolbeault complex) and the Euclidean Laplacian on (p,q)-forms. Specifically, let
-$$
-\Delta=\bar{\partial}^*\bar{\partial}+\bar{\partial}\bar{\partial}^*
-$$, and
-$$
-\Delta'=-\sum_k\nabla_k\nabla_{\bar{k}}
-$$ in a unitary frame at each point.
-
-According to the Weitzenböck formula, if $\alpha\in\Omega^{(p,q)}M$, then
-$$
-\Delta^\prime\alpha-\Delta\alpha=A(\alpha)
-$$
-
-where $A$ is an operator of order zero involving the curvature. Specifically,
-
-if $\alpha=\alpha_{i_1i_2\dots i_p\bar{j}_1\bar{j}_2\dots\bar{j}_q}$ in a unitary frame, then
-$$
-A(\alpha)=-\sum_{k,j_s} \operatorname{Ric}_{\bar{j}_s}^{\bar{k}}\alpha_{i_1i_2\dots i_p\bar{j}_1\bar{j}_2\dots\bar{k}\dots\bar{j}_q}
-$$ with k in the s-th place.
-
-*In conformal geometry there is a Weitzenböck formula relating a particular pair of differential operators defined on the tractor bundle. See Branson, T. and Gover, A.R., "Conformally Invariant Operators, Differential Forms, Cohomology and a Generalisation of Q-Curvature", Communications in Partial Differential Equations, 30 (2005) 1611–1669.
diff --git a/wiki/wikipedia/414.txt b/wiki/wikipedia/414.txt
deleted file mode 100644
index a8f898bcce1bcacacceda8658e64b54e5e7a2f13..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/414.txt
+++ /dev/null
@@ -1,50 +0,0 @@
-In geometry, the napkin-ring problem involves finding the volume of a "band" of specified height around a sphere, i.e. the part that remains after a hole in the shape of a circular cylinder is drilled through the center of the sphere. It is a counterintuitive fact that this volume does not depend on the original sphere's radius but only on the resulting band's height.
-
-The problem is so called because after removing a cylinder from the sphere, the remaining band resembles the shape of a napkin ring.
-
-Suppose that the axis of a right circular cylinder passes through the center of a sphere of radius R and that h represents the height (defined as the distance in a direction parallel to the axis) of the part of the cylinder that is inside the sphere. The "band" is the part of the sphere that is outside the cylinder. The volume of the band depends on h but not on R:
-$$
- V=\frac{\pi h^3}{6}.
-$$
-
-As the radius R of the sphere shrinks, the diameter of the cylinder must also shrink in order that h can remain the same. The band gets thicker, and this would increase its volume. But it also gets shorter in circumference, and this would decrease its volume. The two effects exactly cancel each other out. In the extreme case of the smallest possible sphere, the cylinder vanishes (its radius becomes zero) and the height h equals the diameter of the sphere. In this case the volume of the band is the volume of the whole sphere, which matches the formula given above.
-
-An early study of this problem was written by 17th-century Japanese mathematician Seki Kōwa. According to Smith, Seki called this solid an arc-ring, or in Japanese kokan or kokwan.
-
-Suppose the radius of the sphere is $R$ and the length of the cylinder (or the tunnel) is $h$.
-
-By the Pythagorean theorem, the radius of the cylinder is
-$$
- \sqrt{R^2 - \left(\frac{h}{2}\right)^2},\qquad\qquad(1)
-$$
-
-and the radius of the horizontal cross-section of the sphere at height y above the "equator" is
-$$
- \sqrt{R^2 - y^2}.\qquad\qquad(2)
-$$
-
-The cross-section of the band with the plane at height y is the region inside the larger circle of radius given by (2) and outside the smaller circle of radius given by (1). The cross-section's area is therefore the area of the larger circle minus the area of the smaller circle:
-$$
-\begin{align}
-& {}\quad \pi(\text{larger radius})^2 - \pi(\text{smaller radius})^2 \\
-& = \pi\left(\sqrt{R^2 - y^2}\right)^2 - \pi\left(\sqrt{R^2 - \left(\frac{h}{2}\right)^2{}}\right)^2 = \pi\left(\left(\frac{h}{2}\right)^2 - y^2\right).
-\end{align}
-$$
-
-The radius R does not appear in the last quantity. Therefore, the area of the horizontal cross-section at height y does not depend on R, as long as |y| ≤ h/2 ≤ R. The volume of the band is
-$$
- \int_{-h/2}^{h/2} (\text{area of cross-section at height }y) dy,
-$$
-
-and that does not depend on R.
-
-This is an application of Cavalieri's principle: volumes with equal-sized corresponding cross-sections are equal. Indeed, the area of the cross-section is the same as that of the corresponding cross-section of a sphere of radius h/2, which has volume
-$$
-\frac{4}{3}\pi\left(\frac{h}{2}\right)^3 = \frac{\pi h^3}{6}.
-$$
diff --git a/wiki/wikipedia/4140.txt b/wiki/wikipedia/4140.txt
deleted file mode 100644
index a73fef54ceef7a6988995adc82f7450c12c9e587..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4140.txt
+++ /dev/null
@@ -1,23 +0,0 @@
-In mathematics, the Burr–Erdős conjecture was a problem concerning the Ramsey number of sparse graphs. The conjecture is named after Stefan Burr and Paul Erdős, and is one of many conjectures named after Erdős; it states that the Ramsey number of graphs in any sparse family of graphs should grow linearly in the number of vertices of the graph.
-
-The conjecture was proven by Choongbum Lee (thus it is now a theorem).
-
-If G is an undirected graph, then the degeneracy of G is the minimum number p such that every subgraph of G contains a vertex of degree p or smaller. A graph with degeneracy p is called p-degenerate. Equivalently, a p-degenerate graph is a graph that can be reduced to the empty graph by repeatedly removing a vertex of degree p or smaller.
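-
-This peeling characterization is directly computable. As an added illustration (a minimal Python sketch; the adjacency-set representation and the function name are assumptions of the sketch), the degeneracy is the largest minimum degree encountered while repeatedly deleting a vertex of minimum degree:
-
-def degeneracy(adj):
-    # adj: dict mapping each vertex to the set of its neighbors.
-    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
-    p = 0
-    while adj:
-        v = min(adj, key=lambda u: len(adj[u]))       # minimum-degree vertex
-        p = max(p, len(adj[v]))
-        for u in adj[v]:                              # delete v from the graph
-            adj[u].discard(v)
-        del adj[v]
-    return p
-
-For example, degeneracy({0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}) returns 2: a triangle with a pendant vertex is 2-degenerate.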
-
-It follows from Ramsey's theorem that for any graph G there exists a least integer
-$$
-r(G)
-$$, the Ramsey number of G, such that any complete graph on at least $r(G)$ vertices whose edges are coloured red or blue contains a monochromatic copy of G. For instance, the Ramsey number of a triangle is 6: no matter how the edges of a complete graph on six vertices are colored red or blue, there is always either a red triangle or a blue triangle.
-
-In 1973, Stefan Burr and Paul Erdős made the following conjecture:
-
-For every integer p there exists a constant cp so that any p-degenerate graph G on n vertices has Ramsey number at most cp n.
-
-That is, if an n-vertex graph G is p-degenerate, then a monochromatic copy of G should exist in every two-edge-colored complete graph on cp n vertices.
-
-Before the full conjecture was proved, it was first settled in some special cases. It was proven for bounded-degree graphs by Chvátal, Rödl, Szemerédi, and Trotter; their proof led to a very high value of cp, and improvements to this constant were made by Eaton and Graham. More generally, the conjecture is known to be true for p-arrangeable graphs, which includes graphs with bounded maximum degree, planar graphs and graphs that do not contain a subdivision of Kp. It is also known for subdivided graphs, graphs in which no two adjacent vertices have degree greater than two.
-
-For arbitrary graphs, the Ramsey number is known to be bounded by a function that grows only slightly superlinearly. Specifically, Fox showed that there exists a constant cp such that, for any p-degenerate n-vertex graph G,
-$$
-r(G) \leq 2^{c_p \sqrt{\log n}} n.
-$$
diff --git a/wiki/wikipedia/4141.txt b/wiki/wikipedia/4141.txt
deleted file mode 100644
index 394e736c84a38e4df69513610dd93f6eecaed1c1..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4141.txt
+++ /dev/null
@@ -1,77 +0,0 @@
-In Riemannian geometry, the fundamental theorem of Riemannian geometry states that on any Riemannian manifold (or pseudo-Riemannian manifold) there is a unique torsion-free metric connection, called the Levi-Civita connection of the given metric. Here a metric (or Riemannian) connection is a connection which preserves the metric tensor. More precisely:
-
-
    Fundamental Theorem of Riemannian Geometry. Let $(M, g)$ be a Riemannian manifold (or pseudo-Riemannian manifold). Then there is a unique connection ∇ which satisfies the following conditions:
-
-* for any vector fields X, Y, Z we have $\partial_X \langle Y,Z \rangle = \langle \nabla_X Y,Z \rangle + \langle Y,\nabla_X Z \rangle$, where $\partial_X \langle Y,Z \rangle$ denotes the derivative of the function $\langle Y,Z \rangle$ along vector field X.
-
-* for any vector fields X, Y, $\nabla_XY-\nabla_YX=[X,Y]$, where [X, Y] denotes the Lie bracket for vector fields X, Y.
    -
-The first condition means that the metric tensor is preserved by parallel transport, while the second condition expresses the fact that the torsion of ∇ is zero.
-
-An extension of the fundamental theorem states that given a pseudo-Riemannian manifold there is a unique connection preserving the metric tensor with any given vector-valued 2-form as its torsion. The difference between an arbitrary connection (with torsion) and the corresponding Levi-Civita connection is the contorsion tensor.
-
-The following technical proof presents a formula for Christoffel symbols of the connection in a local coordinate system. For a given metric this set of equations can become rather complicated. There are quicker and simpler methods to obtain the Christoffel symbols for a given metric, for example, using the action integral and the associated Euler-Lagrange equations.
-
-A metric defines the curves which are geodesics; but a connection also defines the geodesics (see also parallel transport). A connection $\bar{\nabla}$ can be said to be equal to another connection $\nabla$ in two different ways:
-
-* if $\bar{\nabla}_X Y = \nabla_X Y$ for every pair of vector fields $X, Y$
-
-* if $\nabla$ and $\bar{\nabla}$ define the same geodesics and have the same torsion
-
-This means that two different connections can lead to the same geodesics while giving different results for some vector fields.
-
-Because a metric also defines the geodesics of a differential manifold, for some metrics there is more than one connection defining the same geodesics (examples can be found of a connection on $\R^3$ whose geodesics are the straight lines but which has nonzero torsion, in contrast to the trivial connection on $\R^3$, that is, the usual directional derivative). Given a metric, the only connection which defines the same geodesics (which leaves the metric unchanged by parallel transport) and which is torsion-free is the Levi-Civita connection (which is obtained from the metric by differentiation).
-
-Let m be the dimension of M and, in some local chart, consider the standard coordinate vector fields
-$$
-{\partial}_i = \frac{\partial}{\partial x^i}, \qquad i=1,\dots,m.
-$$
-
-Locally, the entry gij of the metric tensor is then given by
-$$
-g_{i j} = \left \langle {\partial}_i, {\partial}_j \right \rangle.
-$$
-
-To specify the connection it is enough to specify, for all i, j, and k,
-$$
-\left \langle \nabla_{\partial_i}\partial_j, \partial_k \right \rangle.
-$$
-
-We also recall that, locally, a connection is given by $m^3$ smooth functions
-$$
-\left \{ \Gamma^l {}_{ij} \right \},
-$$
-
-where
-$$
-\nabla_{\partial_i} \partial_j = \sum_l \Gamma^l_{ij} \partial _l.
-$$
-
-The torsion-free property means
-$$
-\nabla_{ \partial _i} \partial _j = \nabla_{\partial_j} \partial_i.
-$$
-
-On the other hand, compatibility with the Riemannian metric implies that
-$$
-\partial_k g_{ij} = \left \langle \nabla_{\partial_k}\partial_i, \partial_j \right \rangle + \left \langle \partial_i, \nabla_{\partial_k} \partial_j \right \rangle.
-$$
-
-For fixed i, j, and k, permutation gives 3 equations with 6 unknowns. The torsion-free assumption reduces the number of variables to 3. Solving the resulting system of 3 linear equations gives unique solutions
-$$
-\left \langle \nabla_{ \partial_i }\partial_j, \partial_k \right \rangle = \tfrac{1}{2} \left ( \partial_i g_{jk}- \partial_k g_{ij} + \partial_j g_{ik} \right ).
-$$
-
-This is the first Christoffel identity.
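-
-Explicitly, cyclic permutation of i, j, and k in the compatibility equation gives
-$$
-\partial_k g_{ij} = \left \langle \nabla_{\partial_k}\partial_i, \partial_j \right \rangle + \left \langle \partial_i, \nabla_{\partial_k} \partial_j \right \rangle,
-$$
-$$
-\partial_i g_{jk} = \left \langle \nabla_{\partial_i}\partial_j, \partial_k \right \rangle + \left \langle \partial_j, \nabla_{\partial_i} \partial_k \right \rangle,
-$$
-$$
-\partial_j g_{ki} = \left \langle \nabla_{\partial_j}\partial_k, \partial_i \right \rangle + \left \langle \partial_k, \nabla_{\partial_j} \partial_i \right \rangle.
-$$
-
-Using the torsion-free property to identify $\nabla_{\partial_a}\partial_b$ with $\nabla_{\partial_b}\partial_a$, the combination (second) − (first) + (third) cancels four of the six terms and leaves $2 \left \langle \nabla_{\partial_i}\partial_j, \partial_k \right \rangle$ on the right-hand side, which is exactly the identity above (recall $g_{ki} = g_{ik}$).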
-
-Since
-$$
-\left \langle \nabla_{ \partial_i }\partial_j, \partial_k \right \rangle = \Gamma^l _{ij} g_{lk},
-$$
-
-where we use the Einstein summation convention (an index repeated as both a subscript and a superscript is summed over all values), inverting the metric tensor gives the second Christoffel identity:
-$$
-\Gamma^l_{ij} = \tfrac{1}{2} \left ( \partial_i g_{jk}- \partial_k g_{ij} + \partial_j g_{ik} \right ) g^{kl},
-$$
-
-once again using the Einstein summation convention. The resulting unique connection is called the Levi-Civita connection.
-
-An alternative proof of the Fundamental theorem of Riemannian geometry proceeds by showing that a torsion-free metric connection on a Riemannian manifold $M$ is necessarily given by the Koszul formula:
-$$
-\begin{array}{ll}2 g(\nabla_XY, Z) &= X (g(Y,Z)) + Y (g(X,Z)) - Z (g(X,Y)) \\&\quad+ g([X,Y],Z) - g([X,Z],Y) - g([Y,Z],X),\end{array}
-$$
-
-where the vector field $X$ acts naturally on smooth functions on the Riemannian manifold (so that $Xf = \partial_X f$).
-
-Assuming the existence of a connection that is symmetric, $\nabla_X Y - \nabla_Y X = [X,Y],$ and compatible with the metric, $ Xg(Y,Z) =g(\nabla_X Y,Z) + g(Y,\nabla_X Z),$ the sum $Xg(Y,Z) + Yg(X,Z) -Zg(Y,X)$ can be simplified using the symmetry property. This results in the Koszul formula.
-
-The expression for $g(\nabla_X Y,Z)$ therefore uniquely determines $\nabla_X Y.$ Conversely, the Koszul formula can be used to define $\nabla_X Y,$ and it is routine to verify that $\nabla$ is an affine connection, which is symmetric and compatible with the metric $g.$ (The right hand side defines a vector field because it is $C^{\infty}(M)$-linear in the variable $Z$.)
diff --git a/wiki/wikipedia/4142.txt b/wiki/wikipedia/4142.txt
deleted file mode 100644
index f00cdff08ce35f685c3fcd30a9eeb59df4e8057e..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4142.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-In Riemannian geometry, a branch of mathematics, the prescribed scalar curvature problem is as follows: given a closed, smooth manifold M and a smooth, real-valued function ƒ on M, construct a Riemannian metric on M whose scalar curvature equals ƒ. Due primarily to the work of J. Kazdan and F. Warner in the 1970s, this problem is well understood.
-
-If the dimension of M is three or greater, then any smooth function ƒ which takes on a negative value somewhere is the scalar curvature of some Riemannian metric. The assumption that ƒ be negative somewhere is needed in general, since not all manifolds admit metrics which have strictly positive scalar curvature. (For example, the three-dimensional torus is such a manifold.) However, Kazdan and Warner proved that if M does admit some metric with strictly positive scalar curvature, then any smooth function ƒ is the scalar curvature of some Riemannian metric.
diff --git a/wiki/wikipedia/4143.txt b/wiki/wikipedia/4143.txt
deleted file mode 100644
index 915cce9f771986dd26f4a4cedde162386d4d24d3..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4143.txt
+++ /dev/null
@@ -1,11 +0,0 @@
-The zEC12 microprocessor (zEnterprise EC12 or just z12) is a chip made by IBM for their zEnterprise EC12 and zEnterprise BC12 mainframe computers, announced on August 28, 2012. It is manufactured at the East Fishkill, New York fabrication plant (previously owned by IBM; production was to continue there for ten years under new owner GlobalFoundries). The processor began shipping in the fall of 2012.
IBM stated that it was the world's fastest microprocessor and that it is about 25% faster than its predecessor, the z196.
-
-The chip measures 597.24 mm2 and consists of 2.75 billion transistors fabricated in IBM's 32 nm CMOS silicon on insulator fabrication process, supporting clock speeds of 5.5 GHz, the highest of any CPU ever produced for commercial sale.
-
-The processor implements the CISC z/Architecture with a superscalar, out-of-order pipeline and some new instructions mainly related to transactional execution. The cores have numerous other enhancements such as better branch prediction, out-of-order execution and one dedicated co-processor for compression and cryptography. The instruction pipeline has 15 to 17 stages; the instruction queue can hold 40 instructions; and up to 90 instructions can be "in flight". It has six cores, each with a private 64 KB L1 instruction cache, a private 96 KB L1 data cache, a private 1 MB L2 instruction cache, and a private 1 MB L2 data cache. In addition, there is a 48 MB shared L3 cache implemented in eDRAM and controlled by two on-chip L3 cache controllers. There is also an additional shared L1 cache used for compression and cryptography operations.
-
-Each core has six RISC-like execution units, including two integer units, two load-store units, one binary floating point unit and one decimal floating point unit. The zEC12 chip can decode three instructions and execute seven operations in a single clock cycle.
-
-The zEnterprise System EC12 uses multi-chip modules (MCMs), each of which allows six zEC12 chips to be on a single module. Each MCM has two shared cache chips allowing processors on the MCM to be connected with 40 GB/s links. One zEC12 chip draws in the region of 300 W, and the MCM is cooled by a liquid cooling mechanism capable of 1800 W.
-
-The different models of the zEnterprise System have different numbers of active cores. To accomplish this, some processors in each MCM may have their fifth and/or sixth cores disabled.
diff --git a/wiki/wikipedia/4144.txt b/wiki/wikipedia/4144.txt
deleted file mode 100644
index f1b84649e1e83880b8c7fee3e70e79371c06ad04..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4144.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-In mathematical analysis, Glaeser's continuity theorem is a characterization of the continuity of the derivative of the square roots of functions of class $C^2$. It was introduced in 1963 by Georges Glaeser, and was later simplified by Jean Dieudonné.
-
-The theorem states: if $f\ :\ U \rightarrow \R^{+}$ is a function of class $C^{2}$ in an open set U contained in $\R^n$, then $\sqrt{f}$ is of class $C^{1}$ in U if and only if its partial derivatives of first and second order vanish in the zeros of f.
diff --git a/wiki/wikipedia/4145.txt b/wiki/wikipedia/4145.txt
deleted file mode 100644
index 40a54132b585d21ab48922affc9c65786858add7..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4145.txt
+++ /dev/null
@@ -1,76 +0,0 @@
-In probability theory, the de Moivre–Laplace theorem, which is a special case of the central limit theorem, states that the normal distribution may be used as an approximation to the binomial distribution under certain conditions.
In particular, the theorem shows that the probability mass function of the random number of "successes" observed in a series of $n$ independent Bernoulli trials, each having probability $p$ of success (a binomial distribution with $n$ trials), converges to the probability density function of the normal distribution with mean $np$ and standard deviation $\sqrt{np(1-p)}$, as $n$ grows large, assuming $p$ is not $0$ or $1$.
-
-The theorem appeared in the second edition of The Doctrine of Chances by Abraham de Moivre, published in 1738. Although de Moivre did not use the term "Bernoulli trials", he wrote about the probability distribution of the number of times "heads" appears when a coin is tossed 3600 times.
-
-This is one derivation of the particular Gaussian function used in the normal distribution.
-
-As n grows large, for k in the neighborhood of np we can approximate
-$$
-{n \choose k} p^k q^{n-k} \simeq \frac{1}{\sqrt{2 \pi npq}}e^{-\frac{(k-np)^2}{2npq}}, \qquad p+q=1,\ p, q > 0
-$$
-
-in the sense that the ratio of the left-hand side to the right-hand side converges to 1 as n → ∞.
-
-The theorem can be more rigorously stated as follows: $\left(X\!-\! np\right)\!/\!\sqrt{npq}$, with $\textstyle X$ a binomially distributed random variable, approaches the standard normal as $n\!\to\!\infty$, with the ratio of the probability mass of $X$ to the limiting normal density being 1. This can be shown for an arbitrary nonzero and finite point $c$. On the unscaled curve for $X$, this would be a point $k$ given by
-$$
-k=np+c\sqrt{npq}
-$$
-
-For example, with $c$ at 3, $k$ stays 3 standard deviations from the mean in the unscaled curve.
-
-The normal distribution with mean $\mu$ and standard deviation $\sigma$ is defined by the differential equation (DE)
-$$
-f'\!(x)\!=\!-\!\frac{x-\mu}{\sigma^2}f(x)
-$$ with initial condition set by the probability axiom $\int_{-\infty}^{\infty}\!f(x)dx\!=\!1$.
-
-The binomial distribution limit approaches the normal if the binomial satisfies this DE. As the binomial is discrete, the equation starts as a difference equation whose limit morphs to a DE. Difference equations use the discrete derivative, $\textstyle p(k\!+\!1)\!-\!p(k)$, the change for step size 1. As $\textstyle n\!\to\!\infty$, the discrete derivative becomes the continuous derivative. Hence the proof need only show that, for the unscaled binomial distribution,
-$$
-\frac{f'\!(x)}{f\!(x)}\!\cdot\! \left(-\frac{\sigma^2}{x-\mu}\right) \!\to\! 1
-$$ as $ n\!\to\!\infty$.
-
-The required result can be shown directly:
-$$
-\begin{align}
-\frac{f'\!(x)}{f\!(x)}\frac{npq}{np\!-\!k}\!&=\frac{p\left(n, k + 1\right) - p\left(n, k\right)}{p\left(n, k\right)}\frac{\sqrt{npq}}{-c} \\ &= \frac{np - k -q}{kq+q}\frac{\sqrt{npq}}{-c} \\ &= \frac{-c\sqrt{npq} -q}{npq + cq\sqrt{npq}+q}\frac{\sqrt{npq}}{-c} \\ & \to 1
-\end{align}
-$$
-
-The last holds because the term $-cnpq$ dominates both the denominator and the numerator as $n\!\to\!\infty$.
-
-As $\textstyle k$ takes just integral values, the constant $\textstyle c$ is subject to a rounding error. However, the maximum of this error, $\textstyle {0.5}/\!\sqrt{npq}$, is a vanishing value.
-
-The proof consists of transforming the left-hand side (in the statement of the theorem) to the right-hand side by three approximations.
-
-First, according to Stirling's formula, the factorial of a large number n can be replaced with the approximation
-$$
-n! \simeq n^n e^{-n}\sqrt{2 \pi n}\qquad \text{as } n \to \infty.
-$$ - -Thus -$$ -\begin{align}{n \choose k} p^k q^{n-k} & = \frac{n!}{k!(n-k)!} p^k q^{n-k} \\& \simeq \frac{n^n e^{-n}\sqrt{2\pi n} }{k^ke^{-k}\sqrt{2\pi k} (n-k)^{n-k}e^{-(n-k)}\sqrt{2\pi (n-k)}} p^k q^{n-k}\\&=\sqrt{\frac{n}{2\pi k\left(n-k\right)}}\frac{n^n}{k^k\left(n-k\right)^{n-k}}p^kq^{n-k}\\&=\sqrt{\frac{n}{2\pi k\left(n-k\right)}}\left(\frac{np}{k}\right)^k\left(\frac{nq}{n-k}\right)^{n-k}\end{align} -$$ - -Next, the approximation $\tfrac{k}{n} \to p$ is used to match the root above to the desired root on the right-hand side. -$$ -\begin{align}{n \choose k} p^k q^{n-k} & \simeq \sqrt{\frac{1}{2\pi n\frac{k}{n}\left(1-\frac{k}{n}\right)}}\left(\frac{np}{k}\right)^{k} \left(\frac{nq}{n-k}\right)^{n-k}\\&\simeq\frac{1}{\sqrt {2\pi npq}}\left(\frac{np}{k}\right)^{k} \left(\frac{nq}{n-k}\right)^{n-k} \qquad p+q=1\\ \end{align} -$$ - -Finally, the expression is rewritten as an exponential and the Taylor Series approximation for ln(1+x) is used: -$$ - \ln\left(1+x\right)\simeq x-\frac{x^2}{2}+\frac{x^3}{3}-\cdots -$$ - -Then -$$ -\begin{align}{n \choose k} p^k q^{n-k} &\simeq \frac{1}{\sqrt {2\pi npq}}\exp\left\{\ln \left( \left(\frac{np}{k}\right)^{k} \right)+\ln \left( \left(\frac{nq}{n-k}\right)^{n-k}\right)\right\}\\ &=\frac{1}{\sqrt {2\pi npq}}\exp\left\{-k\ln\left(\frac{k}{np}\right)+(k-n)\ln \left(\frac{n-k}{nq}\right)\right\}\\&=\frac{1}{\sqrt {2\pi npq}}\exp\left\{-k\ln\left(\frac{np+x\sqrt{npq}}{np}\right)+(k-n)\ln \left(\frac{n-np-x\sqrt{npq}}{nq}\right)\right\}\\&=\frac{1}{\sqrt {2\pi npq}}\exp\left\{-k\ln\left({1+x\sqrt\frac{q}{np}}\right)+(k-n)\ln \left({1-x\sqrt\frac{p}{nq}}\right)\right\}\qquad p+q=1\\&=\frac{1}{\sqrt {2\pi npq}}\exp\left\{-k\left({x\sqrt\frac{q}{np}}-\frac{x^2q}{2np}+\cdots\right)+(k-n) \left({-x\sqrt\frac{p}{nq}-\frac{x^2p}{2nq}}-\cdots\right)\right\}\\&=\frac{1}{\sqrt {2\pi npq}}\exp\left\{\left(-np-x\sqrt{npq}\right)\left({x\sqrt\frac{q}{np}}-\frac{x^2q}{2np}+\cdots\right)+\left(np+x\sqrt{npq}-n\right) \left(-x\sqrt\frac{p}{nq}-\frac{x^2p}{2nq}-\cdots\right)\right\}\\&=\frac{1}{\sqrt {2\pi npq}}\exp\left\{\left(-np-x\sqrt{npq}\right)\left(x\sqrt\frac{q}{np}-\frac{x^2q}{2np}+\cdots\right)-\left(nq-x\sqrt{npq}\right) \left(-x\sqrt\frac{p}{nq}-\frac{x^2p}{2nq}-\cdots\right)\right\}\\&=\frac{1}{\sqrt {2\pi npq}}\exp\left\{\left(-x\sqrt{npq}+\frac{1}{2}x^2q-x^2q+\cdots\right)+\left(x\sqrt{npq}+\frac{1}{2}x^2p-x^2p-\cdots\right) \right\}\\&=\frac{1}{\sqrt {2\pi npq}}\exp\left\{-\frac{1}{2}x^2q-\frac{1}{2}x^2p-\cdots\right\}\\&=\frac{1}{\sqrt {2\pi npq}}\exp\left\{-\frac{1}{2}x^2(p+q)-\cdots\right\}\\&\simeq\frac{1}{\sqrt {2\pi npq}}\exp\left\{-\frac{1}{2}x^2\right\}\\&=\frac{1}{\sqrt {2\pi npq}}e^\frac{-(k-np)^2}{2npq}\\ \end{align} -$$ - -Each "$\simeq$" in the above argument is a statement that two quantities are asymptotically equivalent as n increases, in the same sense as in the original statement of the theorem—i.e., that the ratio of each pair of quantities approaches 1 as n → ∞. - -* The Wall is an example of a television game show that uses the De Moivre–Laplace theorem. diff --git a/wiki/wikipedia/4146.txt b/wiki/wikipedia/4146.txt deleted file mode 100644 index 176319555ac654023977b1153e34991ba31e638c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4146.txt +++ /dev/null @@ -1,222 +0,0 @@ -In network theory, link prediction is the problem of predicting the existence of a link between two entities in a network. 
Examples of link prediction include predicting friendship links among users in a social network, predicting co-authorship links in a citation network, and predicting interactions between genes and proteins in a biological network. Link prediction can also have a temporal aspect, where, given a snapshot of the set of links at time $t$, the goal is to predict the links at time $t + 1$.
-
-Link prediction is widely applicable. In e-commerce, link prediction is often a subtask for recommending items to users. In the curation of citation databases, it can be used for record deduplication. In bioinformatics, it has been used to predict protein-protein interactions (PPI). It is also used to identify hidden groups of terrorists and criminals in security-related applications.
-
-Consider a network $G = (V, E)$, where $V$ represents the entity nodes in the network and $E \subseteq V \times V$ represents the set of "true" links across entities in the network.
-
-We are given the set of entities $V$ and a subset of true links which are referred to as observed links.
-
-The goal of link prediction is to identify the unobserved true links.
-
-In the temporal formulation of link prediction the observed links correspond to true links at a time $t$, and the goal is to infer the set of true links at time $t+1$.
-
-Usually, we are also given a subset of unobserved links called potential links $E'$, and we need to identify true links among these potential links.
-
-In the binary classification formulation of the link prediction task the potential links are classified as either true links or false links.
-
-Link prediction approaches for this setting learn a classifier $M_b$ that maps links in $E'$ to positive and negative labels, i.e. $M_b:E' \to \{0,1\}$.
-
-In the probability estimation formulation, potential links are associated with existence probabilities.
-
-Link prediction approaches for this setting learn a model $M_p$ that maps links in $E'$ to a probability, i.e. $M_p:E' \to [0,1]$.
-
-Single link approaches learn a model that classifies each link independently.
-
-Structured prediction approaches capture the correlation between potential links by formulating the task as a collective link prediction task.
-
-Collective link prediction approaches learn a model that jointly identifies all the true links among the set of potential links.
-
-The link prediction task can also be formulated as an instance of a missing value estimation task.
-
-Here, the graph is represented as an adjacency matrix with missing values.
-
-The task is to complete the matrix by identifying the missing values.
-
-Matrix factorization based methods commonly use this formulation.
-
-The task of link prediction has attracted attention from several research communities ranging from statistics and network science to machine learning and data mining.
-
-In statistics, generative random graph models such as stochastic block models propose an approach to generate links between nodes in a random graph.
-
-For social networks, Liben-Nowell and Kleinberg proposed link prediction models based on different graph proximity measures.
-
-Several statistical models have been proposed for link prediction by the machine learning and data mining community.
-
-For example, Popescul et al. proposed a structured logistic regression model that can make use of relational features.
-
-Local conditional probability models based on attribute and structural features were proposed by O’Madadhain et al.
-
-Several models based on directed graphical models for collective link prediction have been proposed by Getoor.
-
-Other approaches based on random walks and matrix factorization have also been proposed.
-
-With the advent of deep learning, several graph embedding based approaches for link prediction have also been proposed.
-
-For more information on link prediction refer to the survey by Getoor et al. and Yu et al.
-
-Several link prediction approaches have been proposed, including unsupervised approaches such as similarity measures computed on the entity attributes, random walk and matrix factorization based approaches, and supervised approaches based on graphical models and deep learning.
-
-Link prediction approaches can be divided into two broad categories based on the type of the underlying network: (1) link prediction approaches for homogeneous networks (2) link prediction approaches for heterogeneous networks.
-
-Based on the type of information used to predict links, approaches can be categorized as topology-based approaches, content-based approaches, and mixed methods.
-
-Topology-based methods broadly make the assumption that nodes with similar network structure are more likely to form a link.
-
-This is a common approach to link prediction that computes the number of common neighbors. Entities with more neighbors in common are more likely to have a link. It is computed as follows:
-$$
- CN(A,B) = |N(A) \cap N(B)|,
-$$
-
-where $N(x)$ denotes the set of nodes adjacent to $x$.
-
-A weakness of this approach is that it does not take into account the relative number of common neighbors.
-
-The Jaccard Measure addresses the problem of Common Neighbors by computing the relative number of neighbors in common:
-$$
- J(A,B) = {|N(A) \cap N(B)| \over |N(A) \cup N(B)|}
-$$
-
-The Adamic–Adar measure is the sum of the inverse logarithm of the degrees of the common neighbors of the two nodes. This captures a two-hop similarity, which can yield better results than simple one-hop methods. It is computed as follows:
-$$
-A(x,y) = \sum_{u \in N(x) \cap N(y)} \frac{1}{\log |N(u)| },
-$$
-
-where $N(u)$ is the set of nodes adjacent to $u$.
-
-Neighbor based methods can be effective when the number of neighbors is large, but this is not the case in sparse graphs. In these situations it is appropriate to use methods that account for longer walks. The Katz Measure is one metric that captures this. It is computed by searching the graph for paths of length $t$ in the graph and adding the counts of each path length, weighted by user-specified weights.
-
-Let A be the adjacency matrix of a network under consideration. Elements $(a_{ij})$ of A are variables that take a value 1 if a node i is connected to node j and 0 otherwise. The powers of A indicate the presence (or absence) of links between two nodes through intermediaries. For instance, in matrix $A^3$, if element $(a_{2,12}) = 1$, it indicates that node 2 and node 12 are connected through some walk of length 3. If $C_{\mathrm{Katz}}(i)$ denotes Katz centrality of a node i, then mathematically:
-$$
-C_{\mathrm{Katz}}(i) = \sum_{k=1}^\infty \sum_{j=1}^n \alpha^k (A^k)_{ji}
-$$
-
-Note that the above definition uses the fact that the element at location $(i,j)$ of $A^k$ reflects the total number of walks of length $k$ between nodes $i$ and $j$.
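-
-To make the neighborhood scores above concrete, the following is a minimal Python sketch (an illustrative implementation, not from any particular library; the adjacency-set representation is an assumption of the sketch):
-
-import math
-
-def common_neighbors(adj, a, b):
-    # CN(A, B) = |N(A) ∩ N(B)|
-    return len(adj[a] & adj[b])
-
-def jaccard(adj, a, b):
-    # J(A, B) = |N(A) ∩ N(B)| / |N(A) ∪ N(B)|
-    union = adj[a] | adj[b]
-    return len(adj[a] & adj[b]) / len(union) if union else 0.0
-
-def adamic_adar(adj, a, b):
-    # Sum of 1 / log(deg(u)) over common neighbors u (degree > 1 so the
-    # logarithm is positive).
-    return sum(1.0 / math.log(len(adj[u]))
-               for u in adj[a] & adj[b] if len(adj[u]) > 1)
-
-# Example: nodes 1 and 4 are not yet linked but share two neighbors.
-adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
-print(common_neighbors(adj, 1, 4))       # 2
-print(jaccard(adj, 1, 4))                # 1.0
-print(round(adamic_adar(adj, 1, 4), 3))  # 2 / log(3) ≈ 1.82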
-
-Node-similarity methods predict the existence of a link based on the similarity of the node attributes. The attribute values are represented as normalized vectors, and the distance between the vectors is used to measure similarity. Small distances indicate higher similarity.
-
-After normalizing the attribute values, computing the cosine similarity between the two vectors is a good measure of similarity, with high values indicating higher similarity.
-
-Mixed methods combine attribute and topology based methods.
-
-Graph embeddings also offer a convenient way to predict links. Graph embedding algorithms, such as Node2vec, learn an embedding space in which neighboring nodes are represented by vectors so that vector similarity measures, such as dot product similarity, or Euclidean distance, hold in the embedding space. These similarities are functions of both topological features and attribute-based similarity. One can then use other machine learning techniques to predict edges on the basis of vector similarity (a small scoring sketch along these lines is given below).
-
-A probabilistic relational model (PRM) specifies a template for a probability distribution over a database. The template describes the relational schema for the domain, and the probabilistic dependencies between attributes in the domain. A PRM, together with a particular database of entities and unobserved links, defines a probability distribution over the unobserved links.
-
-Probabilistic soft logic (PSL) is a probabilistic graphical model over a hinge-loss Markov random field (HL-MRF). HL-MRFs are created by a set of templated first-order logic-like rules, which are then grounded over the data. PSL can combine attribute, or local, information with topological, or relational, information. While PSL can incorporate local predictors, such as cosine similarity, it also supports relational rules, such as triangle completion in a network.
-
-Markov logic networks (MLNs) are probabilistic graphical models defined over Markov networks. These networks are defined by templated first-order logic-like rules, which are then grounded over the training data. MLNs are able to incorporate both local and relational rules for the purpose of link prediction.
-
-R-Models (RMLs) are neural network models created to provide a deep learning approach to the link weight prediction problem. These models use a node embedding technique that extracts node embeddings (knowledge of nodes) from the known links’ weights (relations between nodes) and use this knowledge to predict the unknown links’ weights.
-
-Link prediction has found varied uses, but any domain in which entities interact in a structured way can benefit from link prediction. A common application of link prediction is improving similarity measures for collaborative filtering approaches to recommendation. Link prediction is also frequently used in social networks to suggest friends to users. It has also been used to predict criminal associations.
-
-In biology, link prediction has been used to predict interactions between proteins in protein-protein interaction networks. It has also been used to infer interactions between drugs and targets. Another application is found in collaboration prediction in scientific co-authorship networks.
-
-Entity resolution, also known as deduplication, commonly uses link prediction to predict whether two entities in a network are references to the same physical entity. Some authors have used context information in network-structured domains to improve entity resolution.
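-
-As referenced above, a minimal sketch of embedding-based scoring (the embedding vectors here are hypothetical placeholders; in practice they would come from an algorithm such as Node2vec):
-
-import numpy as np
-
-# Hypothetical low-dimensional node embeddings (illustrative values only).
-emb = {
-    "a": np.array([0.9, 0.1, 0.0, 0.2]),
-    "b": np.array([0.8, 0.2, 0.1, 0.3]),
-    "c": np.array([-0.1, 0.9, 0.7, 0.0]),
-}
-
-def score(u, v):
-    # Dot-product similarity in the embedding space; a higher score means
-    # a link between u and v is considered more likely.
-    return float(emb[u] @ emb[v])
-
-# Rank candidate (unobserved) pairs by score.
-candidates = [("a", "b"), ("a", "c"), ("b", "c")]
-for u, v in sorted(candidates, key=lambda p: score(*p), reverse=True):
-    print(u, v, round(score(u, v), 3))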
- -Link prediction in the context of network effects has been used to analyze tendencies to spread across networks and can be used to improve marketing strategies, in particular viral marketing. - -* Caffe - -* CNTK - -* Deeplearning4j - -* DeepSpeed - -* ELKI - -* Infer.NET - -* Keras - -* Mahout - -* Mallet - -* ML.NET - -* mlpack - -* MXNet - -* Neural Lab - -* Octave - -* OpenNN - -* Orange - -* Perl Data Language - -* Probabilistic Soft Logic - -* R - -* ROOT (TMVA with ROOT) - -* scikit-learn - -* Shogun - -* Spark MLlib - -* SystemML - -* TensorFlow - -* Torch / PyTorch - -* Weka / MOA - -* Yooreeka - -* KNIME - -* RapidMiner - -* Amazon Machine Learning - -* Angoss KnowledgeSTUDIO - -* Azure Machine Learning - -* Ayasdi - -* IBM Watson Studio - -* Google Prediction API - -* IBM SPSS Modeler - -* KXEN Modeler - -* LIONsolver - -* Mathematica - -* MATLAB - -* Microsoft Azure - -* Neural Designer - -* NeuroSolutions - -* Oracle Data Mining - -* Oracle AI Platform Cloud Service - -* RCASE - -* SAS Enterprise Miner - -* SequenceL - -* Splunk - -* STATISTICA Data Miner diff --git a/wiki/wikipedia/4147.txt b/wiki/wikipedia/4147.txt deleted file mode 100644 index 30b879f2601101efb8e1af2c98d39f57fd54331e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4147.txt +++ /dev/null @@ -1,34 +0,0 @@ -In mathematical logic and metalogic, a formal system is called complete with respect to a particular property if every formula having the property can be derived using that system, i.e. is one of its theorems; otherwise the system is said to be incomplete. - -The term "complete" is also used without qualification, with differing meanings depending on the context, mostly referring to the property of semantical validity. Intuitively, a system is called complete in this particular sense, if it can derive every formula that is true. - -The property converse to completeness is called soundness: a system is sound with respect to a property (mostly semantical validity) if each of its theorems has that property. - -A formal language is expressively complete if it can express the subject matter for which it is intended. - -A set of logical connectives associated with a formal system is functionally complete if it can express all propositional functions. - -Semantic completeness is the converse of soundness for formal systems. A formal system is complete with respect to tautologousness or "semantically complete" when all its tautologies are theorems, whereas a formal system is "sound" when all theorems are tautologies (that is, they are semantically valid formulas: formulas that are true under every interpretation of the language of the system that is consistent with the rules of the system). That is, -$$ - \models_{\mathcal S} \varphi\ \to\ \vdash_{\mathcal S} \varphi. -$$ - -For example, Gödel's completeness theorem establishes semantic completeness for first-order logic. - -A formal system S is strongly complete or complete in the strong sense if for every set of premises Γ, any formula that semantically follows from Γ is derivable from Γ. That is: -$$ - \Gamma\models_{\mathcal S} \varphi \ \to\ \Gamma \vdash_{\mathcal S} \varphi. -$$ - -A formal system S is refutation-complete if it is able to derive false from every unsatisfiable set of formulas. That is, -$$ - \Gamma \models_{\mathcal S} \bot \to\ \Gamma \vdash_{\mathcal S} \bot. -$$ - -Every strongly complete system is also refutation-complete. 
Intuitively, strong completeness means that, given a formula set $\Gamma$, it is possible to compute every semantical consequence $\varphi$ of $\Gamma$, while refutation-completeness means that, given a formula set $\Gamma$ and a formula $\varphi$, it is possible to check whether $\varphi$ is a semantical consequence of $\Gamma$.
-
-Examples of refutation-complete systems include: SLD resolution on Horn clauses, superposition on equational clausal first-order logic, Robinson's resolution on clause sets. The latter is not strongly complete: e.g. $ \{ a \} \models a \lor b$ holds even in the propositional subset of first-order logic, but $a \lor b$ cannot be derived from $\{ a \}$ by resolution. However, $\{ a, \lnot (a \lor b) \} \vdash \bot$ can be derived.
-
-A formal system S is syntactically complete or deductively complete or maximally complete if for each sentence (closed formula) φ of the language of the system either φ or ¬φ is a theorem of S. This is also called negation completeness, and is stronger than semantic completeness. In another sense, a formal system is syntactically complete if and only if no unprovable sentence can be added to it without introducing an inconsistency. Truth-functional propositional logic and first-order predicate logic are semantically complete, but not syntactically complete (for example, the propositional logic statement consisting of a single propositional variable A is not a theorem, and neither is its negation). Gödel's incompleteness theorem shows that any recursive system that is sufficiently powerful, such as Peano arithmetic, cannot be both consistent and syntactically complete.
-
-In superintuitionistic and modal logics, a logic is structurally complete if every admissible rule is derivable.
diff --git a/wiki/wikipedia/4148.txt b/wiki/wikipedia/4148.txt
deleted file mode 100644
index e23860a5918ded91c16aa7be206aa86c8d31d98f..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4148.txt
+++ /dev/null
@@ -1,113 +0,0 @@
-In mathematics, the Hamburger moment problem, named after Hans Ludwig Hamburger, is formulated as follows: given a sequence (m0, m1, m2, ...), does there exist a positive Borel measure μ (for instance, the measure determined by the cumulative distribution function of a random variable) on the real line such that
-$$
-m_n = \int_{-\infty}^\infty x^nd \mu(x) \text{ ?}
-$$
-
-In other words, an affirmative answer to the problem means that (m0, m1, m2, ...) is the sequence of moments of some positive Borel measure μ.
-
-The Stieltjes moment problem, Vorobyev moment problem, and the Hausdorff moment problem are similar but replace the real line by $[0,+\infty)$ (Stieltjes and Vorobyev; but Vorobyev formulates the problem in terms of matrix theory), or a bounded interval (Hausdorff).
-
-The Hamburger moment problem is solvable (that is, (mn) is a sequence of moments) if and only if the corresponding Hankel kernel on the nonnegative integers
-$$
-A =
-\left(\begin{matrix}
-m_0 & m_1 & m_2 & \cdots \\
-m_1 & m_2 & m_3 & \cdots \\
-m_2 & m_3 & m_4 & \cdots \\
-\vdots & \vdots & \vdots & \ddots
-\end{matrix}\right)
-$$
-
-is positive definite, i.e.,
-$$
- \sum_{j,k\ge0}m_{j+k}c_j\overline{c_k}\ge0
-$$
-
-for every sequence (cj)j ≥ 0 of complex numbers with finite support (i.e. cj = 0 except for finitely many values of j).
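-
-As a quick numerical illustration of this criterion (an added sketch using the moments of the standard normal distribution; any finite check only tests truncations of the full infinite condition):
-
-import numpy as np
-
-# Moments of the standard normal: m_n = 0 for odd n, (n - 1)!! for even n.
-m = [1, 0, 1, 0, 3, 0, 15, 0, 105]
-
-# Build the truncated Hankel matrix A[j][k] = m[j + k] and check that it
-# is positive semidefinite via its eigenvalues.
-order = (len(m) + 1) // 2
-A = np.array([[m[j + k] for k in range(order)] for j in range(order)])
-print(np.linalg.eigvalsh(A))                   # all eigenvalues nonnegative
-print(np.all(np.linalg.eigvalsh(A) >= -1e-9))  # True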
- -For the "only if" part of the claims simply note that -$$ - \sum_{j,k\ge0}m_{j+k}c_j \overline{c_k} = \int_{-\infty}^\infty \left|\sum_{j\geq 0} c_j x^j\right|^2d \mu(x) -$$ - -which is non-negative if $ \mu $ is non-negative. - -We sketch an argument for the converse. Let Z+ be the nonnegative integers and F0(Z+) denote the family of complex valued sequences with finite support. The positive Hankel kernel A induces a (possibly degenerate) sesquilinear product on the family of complex-valued sequences with finite support. This in turn gives a Hilbert space -$$ -(\mathcal{H}, \langle , \rangle) -$$ - -whose typical element is an equivalence class denoted by [f]. - -Let en be the element in F0(Z+) defined by en(m) = δnm. One notices that -$$ -\langle [e_{n+1}], [e_m] \rangle = A_{m,n+1} = m_{m+n+1} = \langle [e_n], [e_{m+1}]\rangle. -$$ - -Therefore, the "shift" operator T on $\mathcal{H}$, with T[en] = [en + 1], is symmetric. - -On the other hand, the desired expression -$$ -m_n = \int_{-\infty}^\infty x^nd \mu(x) -$$ - -suggests that μ is the spectral measure of a self-adjoint operator. (More precisely stated, μ is the spectral measure for an operator $\overline{T}$ defined below and the vector [1], ). If we can find a "function model" such that the symmetric operator T is multiplication by x, then the spectral resolution of a self-adjoint extension of T proves the claim. - -A function model is given by the natural isomorphism from F0(Z+) to the family of polynomials, in one single real variable and complex coefficients: for n ≥ 0, identify en with xn. In the model, the operator T is multiplication by x and a densely defined symmetric operator. It can be shown that T always has self-adjoint extensions. Let $\overline{T}$ be one of them and μ be its spectral measure. So -$$ -\langle \overline{T}^n [1], [1] \rangle = \int x^n d \mu(x). -$$ - -On the other hand, -$$ - \langle \overline{T}^n [1], [1] \rangle = \langle T^n [e_0], [e_0] \rangle = m_n. -$$ - -For an alternative proof of the existence that only uses Stieltjes integrals, see also, in particular theorem 3.2. - -The solutions form a convex set, so the problem has either infinitely many solutions or a unique solution. - -Consider the (n + 1) × (n + 1) Hankel matrix - -\Delta_n = \left[\begin{matrix} - -m_0 & m_1 & m_2 & \cdots & m_{n} \\ - -m_1 & m_2 & m_3 & \cdots & m_{n+1} \\ - -m_2 & m_3 & m_4 & \cdots & m_{n+2} \\ - -\vdots & \vdots & \vdots & \ddots & \vdots \\ - -m_{n} & m_{n+1} & m_{n+2} & \cdots & m_{2n} - -\end{matrix}\right]. - -Positivity of A means that for each n, det(Δn) ≥ 0. If det(Δn) = 0, for some n, then -$$ -(\mathcal{H}, \langle , \rangle) -$$ - -is finite-dimensional and T is self-adjoint. So in this case the solution to the Hamburger moment problem is unique and μ, being the spectral measure of T, has finite support. - -More generally, the solution is unique if there are constants C and D such that for all n, |mn| ≤ CDnn! . This follows from the more general Carleman's condition. - -There are examples where the solution is not unique; see e.g. - -One can see that the Hamburger moment problem is intimately related to orthogonal polynomials on the real line. The Gram–Schmidt procedure gives a basis of orthogonal polynomials in which the operator: $\overline{T}$ has a tridiagonal Jacobi matrix representation. This in turn leads to a tridiagonal model of positive Hankel kernels. 
-
-An explicit calculation of the Cayley transform of T shows the connection with what is called the Nevanlinna class of analytic functions on the left half plane. Passing to the non-commutative setting, this motivates Krein's formula which parametrizes the extensions of partial isometries.
-
-The cumulative distribution function and the probability density function can often be found by applying the inverse Laplace transform to the moment generating function
-$$
-m(t) = \sum_{n=0}^{\infty} m_n\frac{t^n}{n!},
-$$
-
-provided that this function converges.
diff --git a/wiki/wikipedia/4149.txt b/wiki/wikipedia/4149.txt
deleted file mode 100644
index 248d4488c351eb138f8e69b9592964b7ad906ac4..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4149.txt
+++ /dev/null
@@ -1,8 +0,0 @@
-In mathematics, the Hilbert–Burch theorem describes the structure of some free resolutions of a quotient of a local or graded ring in the case that the quotient has projective dimension 2. Hilbert proved a version of this theorem for polynomial rings, and Burch proved a more general version. Several other authors later rediscovered and published variations of this theorem. Eisenbud gives a statement and proof.
-
-If R is a local ring with an ideal I and
-$$
- 0 \rightarrow R^m\stackrel{f}{\rightarrow} R^n \rightarrow R \rightarrow R/I\rightarrow 0
-$$
-
-is a free resolution of the R-module R/I, then m = n – 1 and the ideal I is aJ where a is a regular element of R and J, a depth-2 ideal, is the first Fitting ideal $\operatorname{Fitt}_1 I$ of I, i.e., the ideal generated by the determinants of the minors of size m of the matrix of f.
diff --git a/wiki/wikipedia/415.txt b/wiki/wikipedia/415.txt
deleted file mode 100644
index b8e5ae0ae5a30f554a4861e1755e10f7ea53c0fc..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/415.txt
+++ /dev/null
@@ -1,499 +0,0 @@
-In mathematical optimization, the push–relabel algorithm (alternatively, preflow–push algorithm) is an algorithm for computing maximum flows in a flow network. The name "push–relabel" comes from the two basic operations used in the algorithm. Throughout its execution, the algorithm maintains a "preflow" and gradually converts it into a maximum flow by moving flow locally between neighboring nodes using push operations under the guidance of an admissible network maintained by relabel operations. In comparison, the Ford–Fulkerson algorithm performs global augmentations that send flow following paths from the source all the way to the sink.
-
-The push–relabel algorithm is considered one of the most efficient maximum flow algorithms. The generic algorithm has a strongly polynomial O(V^2 E) time complexity, which is asymptotically more efficient than the O(VE^2) Edmonds–Karp algorithm. Specific variants of the algorithm achieve even lower time complexities. The variant based on the highest label node selection rule has O(V^2 sqrt(E)) time complexity and is generally regarded as the benchmark for maximum flow algorithms. Subcubic O(VE log(V^2/E)) time complexity can be achieved using dynamic trees, although in practice it is less efficient.
-
-The push–relabel algorithm has been extended to compute minimum cost flows. The idea of distance labels has led to a more efficient augmenting path algorithm, which in turn can be incorporated back into the push–relabel algorithm to create a variant with even higher empirical performance.
-
-The concept of a preflow was originally designed by Alexander V. Karzanov and was published in 1974 in Soviet Mathematics Doklady 15.
This pre-flow algorithm also used a push operation; however, it used distances in the auxiliary network to determine where to push the flow instead of a labeling system.
-
-The push-relabel algorithm was designed by Andrew V. Goldberg and Robert Tarjan. The algorithm was initially presented in November 1986 in STOC '86: Proceedings of the eighteenth annual ACM symposium on Theory of computing, and then officially in October 1988 as an article in the Journal of the ACM. Both papers detail a generic form of the algorithm terminating in O(V^2 E) along with an O(V^3) sequential implementation, an O(VE log(V^2/E)) implementation using dynamic trees, and a parallel/distributed implementation. Goldberg and Tarjan introduced distance labels by incorporating them into the parallel maximum flow algorithm of Yossi Shiloach and Uzi Vishkin.
-
-Let:
-
-*G = (V, E) be a network with capacity function c: V × V → $\mathbb{R}$,
-
-*F = (G, c, s, t) a flow network, where s ∈ V and t ∈ V are chosen source and sink vertices respectively,
-
-*f : V × V → $\mathbb{R}$ denote a pre-flow in F,
-
-*xf : V → $\mathbb{R}$ denote the excess function with respect to the flow f, defined by xf (u) = Σv ∈ V f (v, u) − Σv ∈ V f (u, v),
-
-*cf : V × V → $\mathbb{R}$ denote the residual capacity function with respect to the flow f, defined by cf (e) = c(e) − f (e),
-
-*Ef ⊂ E being the edges where f < c,
-
-and
-
-*Gf (V, Ef ) denote the residual network of G with respect to the flow f.
-
-The push–relabel algorithm uses a nonnegative integer valid labeling function which makes use of distance labels, or heights, on nodes to determine which arcs should be selected for the push operation. This labeling function is denoted by 𝓁 : V → $\mathbb{N}$. This function must satisfy the following conditions in order to be considered valid:
-
-Valid labeling:
-
-𝓁(u) ≤ 𝓁(v) + 1 for all (u, v) ∈ Ef
-
-Source condition:
-
-𝓁(s) = |V|
-
-Sink conservation:
-
-𝓁(t) = 0
-
-In the algorithm, the label values of s and t are fixed. 𝓁(u) is a lower bound of the unweighted distance from u to t in Gf if t is reachable from u. If u has been disconnected from t, then 𝓁(u) − |V| is a lower bound of the unweighted distance from u to s. As a result, if a valid labeling function exists, there are no s-t paths in Gf because no such paths can be longer than |V| − 1.
-
-An arc (u, v) ∈ Ef is called admissible if 𝓁(u) = 𝓁(v) + 1. The admissible network G̃f (V, Ẽf ) is composed of the set of arcs e ∈ Ef that are admissible. The admissible network is acyclic.
-
-The algorithm starts by creating a residual graph, initializing the preflow values to zero and performing a set of saturating push operations on residual arcs exiting the source, (s, v) where v ∈ V \ {s}. Similarly, the labels are initialized such that the label at the source is the number of nodes in the graph, 𝓁(s) = |V|, and all other nodes are given a label of zero. Once the initialization is complete the algorithm repeatedly performs either the push or relabel operations against active nodes until no applicable operation can be performed.
-
-The push operation applies on an admissible out-arc (u, v) of an active node u in Gf. It moves min{xf (u), cf (u, v)} units of flow from u to v.
-
-push(u, v):
-
-assert xf[u] > 0 and 𝓁[u] == 𝓁[v] + 1
-
-Δ = min(xf[u], c[u][v] - f[u][v])
-
-f[u][v] += Δ
-
-f[v][u] -= Δ
-
-xf[u] -= Δ
-
-xf[v] += Δ
-
-A push operation that causes f (u, v) to reach c(u, v) is called a saturating push since it uses up all the available capacity of the residual arc.
Otherwise, all of the excess at the node is pushed across the residual arc. This is called an unsaturating or non-saturating push.
-
-The relabel operation applies on an active node u without any admissible out-arcs in Gf. It modifies 𝓁(u) to be the minimum value such that an admissible out-arc is created. Note that this always increases 𝓁(u) and never creates a steep arc, which is an arc (u, v) such that cf (u, v) > 0, and 𝓁(u) > 𝓁(v) + 1.
-
-relabel(u):
-
-assert xf[u] > 0 and 𝓁[u] <= 𝓁[v] for all v such that cf[u][v] > 0
-
-𝓁[u] = 1 + min(𝓁[v] for all v such that cf[u][v] > 0)
-
-After a push or relabel operation, 𝓁 remains a valid labeling function with respect to f.
-
-For a push operation on an admissible arc (u, v), it may add an arc (v, u) to Ef, where 𝓁(v) = 𝓁(u) − 1 ≤ 𝓁(u) + 1; it may also remove the arc (u, v) from Ef, where it effectively removes the constraint 𝓁(u) ≤ 𝓁(v) + 1.
-
-To see that a relabel operation on node u preserves the validity of 𝓁(u), notice that this is trivially guaranteed by definition for the out-arcs of u in Gf. For the in-arcs of u in Gf, the increased 𝓁(u) can only satisfy the constraints less tightly, not violate them.
-
-The generic push–relabel algorithm is used as a proof of concept only and does not contain implementation details on how to select an active node for the push and relabel operations. This generic version of the algorithm will terminate in O(V^2 E).
-
-Since 𝓁(s) = |V|, 𝓁(t) = 0, and there are no paths longer than |V| − 1 in Gf, in order for 𝓁(s) to satisfy the valid labeling condition s must be disconnected from t. At initialisation, the algorithm fulfills this requirement by creating a pre-flow f that saturates all out-arcs of s, after which 𝓁(v) = 0 is trivially valid for all v ∈ V \ {s}. After initialisation, the algorithm repeatedly executes an applicable push or relabel operation until no such operations apply, at which point the pre-flow has been converted into a maximum flow.
-
-generic-push-relabel(G, c, s, t):
-
-create a pre-flow f that saturates all out-arcs of s
-
-let 𝓁[s] = |V|
-
-let 𝓁[v] = 0 for all v ∈ V \ {s}
-
-while there is an applicable push or relabel operation do
-
-execute the operation
-
-The algorithm maintains the condition that 𝓁 is a valid labeling during its execution. This can be proven true by examining the effects of the push and relabel operations on the label function 𝓁. The relabel operation increases the label value by the associated minimum plus one which will always satisfy the 𝓁(u) ≤ 𝓁(v) + 1 constraint. The push operation can send flow from u to v if 𝓁(u) = 𝓁(v) + 1. This may add (v, u) to Gf and may delete (u, v) from Gf . The addition of (v, u) to Gf will not affect the valid labeling since 𝓁(v) = 𝓁(u) − 1. The deletion of (u, v) from Gf removes the corresponding constraint since the valid labeling property 𝓁(u) ≤ 𝓁(v) + 1 only applies to residual arcs in Gf .
-
-If a preflow f and a valid labeling 𝓁 for f exist, then there is no augmenting path from s to t in the residual graph Gf . This can be proven by contradiction based on inequalities which arise in the labeling function when supposing that an augmenting path does exist. If the algorithm terminates, then all nodes in V \ {s, t} are not active. This means all v ∈ V \ {s, t} have no excess flow, and with no excess the preflow f obeys the flow conservation constraint and can be considered a normal flow. This flow is the maximum flow according to the max-flow min-cut theorem since there is no augmenting path from s to t.
In order to bound the time complexity of the algorithm, we must analyze the number of push and relabel operations which occur within the main loop. The numbers of relabel, saturating push and nonsaturating push operations are analyzed separately.

In the algorithm, the relabel operation can be performed at most (2|V| − 1)(|V| − 2) < 2|V|² times. This is because the labeling value 𝓁(u) for any node u can never decrease, and the maximum label value is at most 2|V| − 1 for all nodes. This means the relabel operation could potentially be performed 2|V| − 1 times for each of the |V| − 2 nodes in V \ {s, t}. This results in a bound of O(V²) for the relabel operations.

Each saturating push on an admissible arc (u, v) removes the arc from Gf. For the arc to be reinserted into Gf for another saturating push, v must first be relabeled, followed by a push on the arc (v, u), then u must be relabeled. In the process, 𝓁(u) increases by at least two. Therefore, there are O(V) saturating pushes on (u, v), and the total number of saturating pushes is at most 2|V||E|. This results in a time bound of O(VE) for the saturating push operations.

Bounding the number of nonsaturating pushes can be achieved via a potential argument. We use the potential function Φ = Σ[u ∈ V ∧ xf (u) > 0] 𝓁(u) (i.e. Φ is the sum of the labels of all active nodes). It is obvious that Φ is 0 initially and stays nonnegative throughout the execution of the algorithm. Both relabels and saturating pushes can increase Φ. However, the value of Φ must be equal to 0 at termination since there cannot be any remaining active nodes at the end of the algorithm's execution. This means that over the execution of the algorithm, the nonsaturating pushes must make up the difference of the relabel and saturating push operations in order for Φ to terminate with a value of 0.

The relabel operations can increase Φ by at most (2|V| − 1)(|V| − 2) in total. A saturating push on (u, v) activates v if it was inactive before the push, increasing Φ by at most 2|V| − 1; hence, the total contribution of all saturating push operations to Φ is at most (2|V| − 1) · 2|V||E|. A nonsaturating push on (u, v) always deactivates u, but it can also activate v as in a saturating push. As a result, it decreases Φ by at least 𝓁(u) − 𝓁(v) = 1. Since relabels and saturating pushes increase Φ, the total number of nonsaturating pushes must make up the difference of (2|V| − 1)(|V| − 2) + (2|V| − 1) · 2|V||E| ≤ 4|V|²|E|. This results in a time bound of O(V²E) for the nonsaturating push operations.

In sum, the algorithm executes O(V²) relabels, O(VE) saturating pushes and O(V²E) nonsaturating pushes. Data structures can be designed to pick and execute an applicable operation in O(1) time. Therefore, the time complexity of the algorithm is O(V²E).
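For reference, the potential accounting just described can be collected into a single inequality (a restatement of the bounds above, not an additional result):

$$
\underbrace{(2|V|-1)(|V|-2)}_{\text{total increase of } \Phi \text{ by relabels}} \;+\; \underbrace{(2|V|-1)\cdot 2|V||E|}_{\text{total increase of } \Phi \text{ by saturating pushes}} \;\le\; 4|V|^2|E|.
$$

Since Φ starts and ends at 0 and each nonsaturating push decreases Φ by at least 1, the number of nonsaturating pushes is bounded by this quantity, which gives the stated O(V²E) bound.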
(The original article includes a worked sample execution of the generic algorithm on a small flow network, where h and e denote each node's label 𝓁 and excess xf; the accompanying diagrams, which can be stepped through interactively in the original, are not reproduced here.)

While the generic push–relabel algorithm has O(V²E) time complexity, efficient implementations achieve O(V³) or lower time complexity by enforcing appropriate rules in selecting applicable push and relabel operations. The empirical performance can be further improved by heuristics.

The "current-arc" data structure is a mechanism for visiting the in- and out-neighbors of a node in the flow network in a static circular order. If a singly linked list of neighbors is created for a node, the data structure can be as simple as a pointer into the list that steps through the list and rewinds to the head when it runs off the end.

Based on the "current-arc" data structure, the discharge operation can be defined. A discharge operation applies on an active node and repeatedly pushes flow from the node until it becomes inactive, relabeling it as necessary to create admissible arcs in the process.

discharge(u):
    while xf[u] > 0 do
        if current-arc[u] has run off the end of neighbors[u] then
            relabel(u)
            rewind current-arc[u]
        else
            let (u, v) = current-arc[u]
            if (u, v) is admissible then
                push(u, v)
            else
                let current-arc[u] point to the next neighbor of u

The definition of the discharge operation reduces the push–relabel algorithm to repeatedly selecting an active node to discharge. Depending on the selection rule, the algorithm exhibits different time complexities. For the sake of brevity, we ignore s and t when referring to the nodes in the following discussion.

The FIFO push–relabel algorithm organizes the active nodes into a queue. The initial active nodes can be inserted in arbitrary order. The algorithm always removes the node at the front of the queue for discharging. Whenever an inactive node becomes active, it is appended to the back of the queue. The algorithm has O(V³) time complexity.

The relabel-to-front push–relabel algorithm organizes all nodes into a linked list and maintains the invariant that the list is topologically sorted with respect to the admissible network. The algorithm scans the list from front to back and performs a discharge operation on the current node if it is active. If the node is relabeled, it is moved to the front of the list, and the scan is restarted from the front. The algorithm also has O(V³) time complexity.

The highest-label push–relabel algorithm organizes all nodes into buckets indexed by their labels. The algorithm always selects an active node with the largest label to discharge. The algorithm has O(V²√E) time complexity. If the lowest-label selection rule is used instead, the time complexity becomes O(V²E).

Although in the description of the generic push–relabel algorithm above, 𝓁(u) is set to zero for each node u other than s and t at the beginning, it is preferable to perform a backward breadth-first search from t to compute exact labels.

The algorithm is typically separated into two phases. Phase one computes a maximum pre-flow by discharging only active nodes whose labels are below n. Phase two converts the maximum pre-flow into a maximum flow by returning excess flow that cannot reach t to s. It can be shown that phase two has O(VE) time complexity regardless of the order of push and relabel operations and is therefore dominated by phase one. Alternatively, it can be implemented using flow decomposition.

Heuristics are crucial to improving the empirical performance of the algorithm. Two commonly used heuristics are the gap heuristic and the global relabeling heuristic. The gap heuristic detects gaps in the labeling function: if there is a label 0 < 𝓁′ < |V| for which there is no node u such that 𝓁(u) = 𝓁′, then any node u with 𝓁′ < 𝓁(u) < |V| has been disconnected from t and can be relabeled to |V| + 1 immediately, as shown in the sketch below.
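A minimal Python sketch of the gap heuristic, assuming the height arrays used by the implementations later in this section (the function name and parameters are ours):

def gap_heuristic(height, n, s, t):
    """If no node carries label g (0 < g < n), lift every node whose label
    lies strictly between g and n past n: those nodes cannot reach t."""
    counts = [0] * n
    for u in range(n):
        if height[u] < n:
            counts[height[u]] += 1
    for g in range(1, n):
        if counts[g] == 0:            # found a gap at label g
            for u in range(n):
                if u != s and u != t and g < height[u] < n:
                    height[u] = n + 1
            break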
The global relabeling heuristic periodically performs backward breadth-first search from t in Gf to compute the exact labels of the nodes. Both heuristics skip unhelpful relabel operations, which are a bottleneck of the algorithm and contribute to the ineffectiveness of dynamic trees.

The following sample C implementation uses the relabel-to-front selection rule:

#include <stdio.h>
#include <stdlib.h>

#define NODES 6
#define MIN(X,Y) ((X) < (Y) ? (X) : (Y))
#define INFINITE 10000000

void push(const int * const * C, int ** F, int *excess, int u, int v) {
    int send = MIN(excess[u], C[u][v] - F[u][v]);
    F[u][v] += send;
    F[v][u] -= send;
    excess[u] -= send;
    excess[v] += send;
}

void relabel(const int * const * C, const int * const * F, int *height, int u) {
    int v;
    int min_height = INFINITE;
    for (v = 0; v < NODES; v++) {
        if (C[u][v] - F[u][v] > 0) {
            min_height = MIN(min_height, height[v]);
            height[u] = min_height + 1;
        }
    }
}

void discharge(const int * const * C, int ** F, int *excess, int *height, int *seen, int u) {
    while (excess[u] > 0) {
        if (seen[u] < NODES) {
            int v = seen[u];
            if ((C[u][v] - F[u][v] > 0) && (height[u] > height[v])) {
                push(C, F, excess, u, v);
            } else {
                seen[u] += 1;
            }
        } else {
            relabel(C, F, height, u);
            seen[u] = 0;
        }
    }
}

void moveToFront(int i, int *A) {
    int temp = A[i];
    int n;
    for (n = i; n > 0; n--) {
        A[n] = A[n-1];
    }
    A[0] = temp;
}

int pushRelabel(const int * const * C, int ** F, int source, int sink) {
    int *excess, *height, *list, *seen, i, p;

    excess = (int *) calloc(NODES, sizeof(int));
    height = (int *) calloc(NODES, sizeof(int));
    seen = (int *) calloc(NODES, sizeof(int));
    list = (int *) calloc((NODES-2), sizeof(int));

    for (i = 0, p = 0; i < NODES; i++){
        if ((i != source) && (i != sink)) {
            list[p] = i;
            p++;
        }
    }

    height[source] = NODES;
    excess[source] = INFINITE;
    for (i = 0; i < NODES; i++)
        push(C, F, excess, source, i);

    p = 0;
    while (p < NODES - 2) {
        int u = list[p];
        int old_height = height[u];
        discharge(C, F, excess, height, seen, u);
        if (height[u] > old_height) {
            moveToFront(p, list);
            p = 0;
        } else {
            p += 1;
        }
    }

    int maxflow = 0;
    for (i = 0; i < NODES; i++)
        maxflow += F[source][i];

    free(list);
    free(seen);
    free(height);
    free(excess);

    return maxflow;
}

void printMatrix(const int * const * M) {
    int i, j;
    for (i = 0; i < NODES; i++) {
        for (j = 0; j < NODES; j++)
            printf("%d\t", M[i][j]);
        printf("\n");
    }
}

int main(void) {
    int **flow, **capacities, i;

    flow = (int **) calloc(NODES, sizeof(int*));
    capacities = (int **) calloc(NODES, sizeof(int*));
    for (i = 0; i < NODES; i++) {
        flow[i] = (int *) calloc(NODES, sizeof(int));
        capacities[i] = (int *) calloc(NODES, sizeof(int));
    }

    // Sample graph
    capacities[0][1] = 2;
    capacities[0][2] = 9;
    capacities[1][2] = 1;
    capacities[1][3] = 0;
    capacities[1][4] = 0;
    capacities[2][4] = 7;
    capacities[3][5] = 7;
    capacities[4][5] = 4;

    printf("Capacity:\n");
    printMatrix(capacities);

    printf("Max Flow:\n%d\n", pushRelabel(capacities, flow, 0, 5));

    printf("Flows:\n");
    printMatrix(flow);

    return 0;
}

The same relabel-to-front algorithm in Python:

def relabel_to_front(C, source: int, sink: int) -> int:
    n = len(C)  # C is the capacity matrix
    F = [[0] * n for _ in range(n)]
    # residual capacity from u to v is C[u][v] - F[u][v]

    height = [0] * n  # height of node
    excess = [0] * n  # flow into node minus flow from node
    seen = [0] * n    # neighbours seen since last relabel
    # node "queue"
    nodelist = [i for i in range(n) if i != source and i != sink]

    def push(u, v):
        send = min(excess[u], C[u][v] - F[u][v])
        F[u][v] += send
        F[v][u] -= send
        excess[u] -= send
        excess[v] += send

    def relabel(u):
        # Find smallest new height making a push possible,
        # if such a push is possible at all.
        min_height = float('inf')
        for v in range(n):
            if C[u][v] - F[u][v] > 0:
                min_height = min(min_height, height[v])
                height[u] = min_height + 1

    def discharge(u):
        while excess[u] > 0:
            if seen[u] < n:  # check next neighbour
                v = seen[u]
                if C[u][v] - F[u][v] > 0 and height[u] > height[v]:
                    push(u, v)
                else:
                    seen[u] += 1
            else:  # we have checked all neighbours. must relabel
                relabel(u)
                seen[u] = 0

    height[source] = n  # longest path from source to sink is less than n long
    excess[source] = float('inf')  # send as much flow as possible to neighbours of source
    for v in range(n):
        push(source, v)

    p = 0
    while p < len(nodelist):
        u = nodelist[p]
        old_height = height[u]
        discharge(u)
        if height[u] > old_height:
            nodelist.insert(0, nodelist.pop(p))  # move to front of list
            p = 0  # start from front of list
        else:
            p += 1

    return sum(F[source])

diff --git a/wiki/wikipedia/4150.txt b/wiki/wikipedia/4150.txt
deleted file mode 100644
index 37612808cb5d6974b98e4270a1ab075f0fc94d90..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4150.txt
+++ /dev/null
@@ -1,80 +0,0 @@

In probability theory and statistics, Campbell's theorem or the Campbell–Hardy theorem is either a particular equation or set of results relating the expectation of a function summed over a point process to an integral involving the mean measure of the point process, which allows for the calculation of expected value and variance of the random sum. One version of the theorem, also known as Campbell's formula, entails an integral equation for the aforementioned sum over a general point process, and not necessarily a Poisson point process. These results are employed in probability and statistics, with particular importance in the theory of point processes and queueing theory as well as the related fields stochastic geometry and spatial statistics.

Another result by the name of Campbell's theorem is specifically for the Poisson point process and gives a method for calculating moments as well as the Laplace functional of a Poisson point process.

The name of both theorems stems from the work by Norman R. Campbell on thermionic noise, also known as shot noise, in vacuum tubes, which was partly inspired by the work of Ernest Rutherford and Hans Geiger on alpha particle detection, where the Poisson point process arose as a solution to a family of differential equations by Harry Bateman.

For a general point process, Campbell's theorem gives a method for calculating expectations of sums of measurable functions $ f$ with ranges on the real line. More specifically, for a point process $N$ and a measurable function $ f: \textbf{R}^d\rightarrow \textbf{R}$, the sum of $ f$ over the point process is given by the equation:
$$
 E\left[\sum_{x\in N}f(x)\right]=\int_{\textbf{R}^d} f(x)\Lambda (dx),
$$

where if one side of the equation is finite, then so is the other side, and $\Lambda$ denotes the intensity (mean) measure of $N$. This equation is essentially an application of Fubini's theorem and it holds for a wide class of point processes, simple or not. A Monte Carlo sketch of this formula in the homogeneous Poisson case is given below.
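As an illustration, the following Python sketch (using numpy; the variable names and the choice of test function are ours) numerically checks Campbell's formula for a homogeneous Poisson point process of rate λ on the unit square, where the formula reduces to E[Σ f] = λ ∫ f. With f(x, y) = x + y, the integral over the unit square is 1, so the estimate should be close to λ.

import numpy as np

rng = np.random.default_rng(0)
lam = 50.0                       # intensity of the process on [0,1]^2
f = lambda p: p[:, 0] + p[:, 1]  # test function; its integral over [0,1]^2 is 1

def sample_sum():
    n = rng.poisson(lam)         # number of points in the window
    pts = rng.random((n, 2))     # point locations, uniform given the count
    return f(pts).sum()

est = np.mean([sample_sum() for _ in range(20000)])
print(est, "should be close to", lam * 1.0)   # Campbell: E[sum f] = lam * integral of f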
Depending on the integral notation, the integral above may also be written as:
$$
 \operatorname E\left[\sum_{x\in N}f(x)\right]=\int_{\textbf{R}^d} f d\Lambda ,
$$

If the intensity measure $ \Lambda$ of a point process $N$ has a density $ \lambda(x) $, then Campbell's formula becomes:
$$
 \operatorname E\left[\sum_{x\in N}f(x)\right]= \int_{\textbf{R}^d} f(x)\lambda(x) dx
$$

For a stationary point process $N$ with constant density $ \lambda>0$, Campbell's theorem or formula reduces to a volume integral:
$$
 \operatorname E\left[\sum_{x\in N}f(x)\right]=\lambda \int_{\textbf{R}^d} f(x) dx
$$

This equation naturally holds for the homogeneous Poisson point process, which is an example of a stationary stochastic process.

Campbell's theorem for general point processes gives a method for calculating the expectation of a function of a point (of a point process) summed over all the points in the point process. These random sums over point processes have applications in many areas where they are used as mathematical models.

Campbell originally studied a problem of random sums motivated by understanding thermionic noise in valves, which is also known as shot noise. Consequently, the study of random sums of functions over point processes is known as shot noise in probability and, particularly, point process theory.

In wireless network communication, when a transmitter is trying to send a signal to a receiver, all the other transmitters in the network can be considered as interference, which poses a similar problem as noise does in traditional wired telecommunication networks in terms of the ability to send data based on information theory. If the positioning of the interfering transmitters is assumed to form some point process, then shot noise can be used to model the sum of their interfering signals, which has led to stochastic geometry models of wireless networks.

For general point processes, other more general versions of Campbell's theorem exist depending on the nature of the random sum and in particular the function being summed over the point process.

If the function is a function of more than one point of the point process, the moment measures or factorial moment measures of the point process are needed, which can be compared to moments and factorials of random variables. The type of measure needed depends on whether the points of the point process in the random sum need to be distinct or may repeat. Moment measures are used when points are allowed to repeat; factorial moment measures are used when points are not allowed to repeat, hence points are distinct.

For general point processes, Campbell's theorem is only for sums of functions of a single point of the point process. To calculate the sum of a function of a single point as well as the entire point process, generalized Campbell's theorems are required using the Palm distribution of the point process, which is based on the branch of probability known as Palm theory or Palm calculus.

Another version of Campbell's theorem says that for a Poisson point process $N$ with intensity measure $ \Lambda$ and a measurable function $ f:\textbf{R}^d\rightarrow \textbf{R}$, the random sum
$$
 S =\sum_{x\in N}f(x)
$$

is absolutely convergent with probability one if and only if the integral
$$
 \int_{\textbf{R}^d} \min(|f(x)|,1)\Lambda (dx) < \infty.
$$

Provided that this integral is finite, then the theorem further asserts that for any complex value $\theta$ the equation
$$
 E(e^{\theta S})=\exp \left(\int_{\textbf{R}^d} [e^{\theta f(x)}-1]\Lambda (dx) \right),
$$

holds if the integral on the right-hand side converges, which is the case for purely imaginary $\theta$. Moreover,
$$
 E(S)=\int_{\textbf{R}^d} f(x)\Lambda (dx),
$$

and if this integral converges, then
$$
 \operatorname{Var}(S)=\int_{\textbf{R}^d} f(x)^2\Lambda (dx),
$$

where $\operatorname{Var}(S)$ denotes the variance of the random sum $S$.

From this theorem some expectation results for the Poisson point process follow, including its Laplace functional.

For a Poisson point process $ N$ with intensity measure $ \Lambda$, the Laplace functional is a consequence of the above version of Campbell's theorem and is given by:
$$
 \mathcal{L}_N(sf) := E\bigl[ e^{-s \sum_{x \in N} f(x) } \bigr] =\exp \Bigl[-\int_{\textbf{R}^d} (1-e^{-sf(x)})\Lambda(dx) \Bigr],
$$

which for the homogeneous case is:
$$
 \mathcal{L}_N(sf)=\exp\Bigl[-\lambda\int_{\textbf{R}^d}(1-e^{-sf(x)}) dx \Bigr].
$$

diff --git a/wiki/wikipedia/4151.txt b/wiki/wikipedia/4151.txt
deleted file mode 100644
index e2edec8706f501e41672d55ea5729ba2a528411e..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4151.txt
+++ /dev/null
@@ -1,91 +0,0 @@

Zorn's lemma, also known as the Kuratowski–Zorn lemma, after mathematicians Max Zorn and Kazimierz Kuratowski, is a proposition of set theory. It states that a partially ordered set containing upper bounds for every chain (that is, every totally ordered subset) necessarily contains at least one maximal element.

Proved by Kuratowski in 1922 and independently by Zorn in 1935, the proof technique outlined by this lemma occurs in the proofs of several theorems of crucial importance, for instance the Hahn–Banach theorem in functional analysis, the theorem that every vector space has a basis, Tychonoff's theorem in topology stating that every product of compact spaces is compact, and the theorems in abstract algebra that in a ring with identity every proper ideal is contained in a maximal ideal and that every field has an algebraic closure.

Zorn's lemma is equivalent to the well-ordering theorem and also to the axiom of choice, in the sense that any one of the three, together with the Zermelo–Fraenkel axioms of set theory, is sufficient to prove the other two. An earlier formulation of Zorn's lemma is Hausdorff's maximum principle which states that every totally ordered subset of a given partially ordered set is contained in a maximal totally ordered subset of that partially ordered set.

To prove the existence of a mathematical object that can be viewed as a maximal element in some partially ordered set in some way, one can try proving the existence of such an object by assuming there is no maximal element and using transfinite induction and the assumptions of the situation to get a contradiction. Zorn's lemma tidies up the conditions a situation needs to satisfy in order for such an argument to work and enables mathematicians to not have to repeat the transfinite induction argument by hand each time, but just check the conditions of Zorn's lemma.

Preliminary notions:

* A set P equipped with a reflexive, antisymmetric and transitive binary relation ≤ is said to be (partially) ordered by ≤. Given two elements x and y of P with x ≤ y, y is said to be greater than or equal to x.
The word "partial" is meant to indicate that not every pair of elements of a partially ordered set is required to be comparable under the order relation, that is, in a partially ordered set P with order relation ≤ there may be elements x and y with neither x ≤ y nor y ≤ x. An ordered set in which every pair of elements is comparable is called totally ordered. - -* Every subset S of a partially ordered set P can itself be seen as partially ordered by restricting the order relation inherited from P to S. A subset S of a partially ordered set P is called a chain (in P) if it is totally ordered in the inherited order. - -* An element m of a partially ordered set P with order relation ≤ is maximal (with respect to ≤) if there is no other element of P greater than m, that is, if there is no s in P with s ≠ m and m ≤ s. Depending on the order relation, a partially ordered set may have any number of maximal elements. However, a totally ordered set can have at most one maximal element. - -* Given a subset S of a partially ordered set P, an element u of P is an upper bound of S if it is greater than or equal to every element of S. Here, S is not required to be a chain, and u is required to be comparable to every element of S but need not itself be an element of S. - -Zorn's lemma can then be stated as: - -Variants of this formulation are sometimes used, such as requiring that the set P and the chains be non-empty. - -Although this formulation appears to be formally weaker (since it places on P the additional condition of being non-empty, but obtains the same conclusion about P), in fact the two formulations are equivalent. To verify this, suppose first that P satisfies the condition that every chain in P has an upper bound in P. Then the empty subset of P is a chain, as it satisfies the definition vacuously; so the hypothesis implies that this subset must have an upper bound in P, and this upper bound shows that P is in fact non-empty. Conversely, if P is assumed to be non-empty and satisfies the hypothesis that every non-empty chain has an upper bound in P, then P also satisfies the condition that every chain has an upper bound, as an arbitrary element of P serves as an upper bound for the empty chain (that is, the empty subset viewed as a chain). - -The difference may seem subtle, but in many proofs that invoke Zorn's lemma one takes unions of some sort to produce an upper bound, and so the case of the empty chain may be overlooked; that is, the verification that all chains have upper bounds may have to deal with empty and non-empty chains separately. So many authors prefer to verify the non-emptiness of the set P rather than deal with the empty chain in the general argument. - -Zorn's lemma can be used to show that every nontrivial ring R with unity contains a maximal ideal. - -Let P be the set consisting of all (two-sided) ideals in R except R itself. The ideal R was excluded because maximal ideals by definition are not equal to R. Since R is non-trivial, the set P contains the trivial ideal {0}, and therefore P is non-empty. Furthermore, P is partially ordered by set inclusion (see inclusion order). Finding a maximal ideal in R is the same as finding a maximal element in P. - -To apply Zorn's lemma, take a chain T in P (that is, T is a subset of P that is totally ordered). If T is the empty set, then the trivial ideal {0} is an upper bound for T in P. Assume then that T is non-empty. 
It is necessary to show that T has an upper bound, that is, there exists an ideal I ⊆ R which is bigger than all members of T but still smaller than R (otherwise it would not be in P). Take I to be the union of all the ideals in T. We wish to show that I is an upper bound for T in P. We will first show that I is an ideal of R, and then that it is a proper ideal of R and so is an element of P. Since every element of T is contained in I, this will show that I is an upper bound for T in P, as required.

Because T contains at least one element, and that element contains at least 0, the union I contains at least 0 and is not empty. To prove that I is an ideal, note that if a and b are elements of I, then there exist two ideals J, K ∈ T such that a is an element of J and b is an element of K. Since T is totally ordered, we know that J ⊆ K or K ⊆ J. In the first case, both a and b are members of the ideal K, therefore their sum a + b is a member of K, which shows that a + b is a member of I. In the second case, both a and b are members of the ideal J, and thus a + b ∈ I. Furthermore, if r ∈ R, then ar and ra are elements of J and hence elements of I. Thus, I is an ideal in R.

Now, an ideal is equal to R if and only if it contains 1. (It is clear that if it is equal to R, then it must contain 1; on the other hand, if it contains 1 and r is an arbitrary element of R, then r1 = r is an element of the ideal, and so the ideal is equal to R.) So, if I were equal to R, then it would contain 1, and that means one of the members of T would contain 1 and would thus be equal to R – but R is explicitly excluded from P.

The hypothesis of Zorn's lemma has been checked, and thus there is a maximal element in P, in other words a maximal ideal in R.

The proof depends on the fact that the ring R has a multiplicative unit 1. Without this, the proof would not work and indeed the statement would be false. For example, the ring with $\Q$ as additive group and trivial multiplication (i.e. $a b=0$ for all $a,b$) has no maximal ideal (and of course no 1): its ideals are precisely the additive subgroups. The factor group $\Q/A$ by a proper subgroup $A$ is a divisible group, hence certainly not finitely generated, hence has a proper non-trivial subgroup, which gives rise to a subgroup and ideal containing $A$.

A sketch of the proof of Zorn's lemma follows, assuming the axiom of choice. Suppose the lemma is false. Then there exists a partially ordered set, or poset, P such that every totally ordered subset has an upper bound, and that for every element in P there is another element bigger than it. For every totally ordered subset T we may then define a bigger element b(T), because T has an upper bound, and that upper bound has a bigger element. To actually define the function b, we need to employ the axiom of choice.

Using the function b, we are going to define elements a0 < a1 < a2 < a3 < ... in P. This sequence is really long: the indices are not just the natural numbers, but all ordinals. In fact, the sequence is too long for the set P; there are too many ordinals (a proper class), more than there are elements in any set, and the set P will be exhausted before long and then we will run into the desired contradiction.

The ai are defined by transfinite recursion: we pick a0 in P arbitrarily (this is possible, since P contains an upper bound for the empty set and is thus not empty) and for any other ordinal w we set aw = b({av : v < w}).
Because the av are totally ordered, this is a well-founded definition.

This proof shows that actually a slightly stronger version of Zorn's lemma is true:

Lemma. If P is a poset in which every well-ordered chain has an upper bound, and if x is any element of P, then P has a maximal element greater than or equal to x. That is, there is a maximal element which is comparable to x.

The Hausdorff maximal principle is an early statement similar to Zorn's lemma.

Kazimierz Kuratowski proved in 1922 a version of the lemma close to its modern formulation (it applies to sets ordered by inclusion and closed under unions of well-ordered chains). Essentially the same formulation (weakened by using arbitrary chains, not just well-ordered) was independently given by Max Zorn in 1935, who proposed it as a new axiom of set theory replacing the well-ordering theorem, exhibited some of its applications in algebra, and promised to show its equivalence with the axiom of choice in another paper, which never appeared.

The name "Zorn's lemma" appears to be due to John Tukey, who used it in his book Convergence and Uniformity in Topology in 1940. Bourbaki's Théorie des Ensembles of 1939 refers to a similar maximal principle as "le théorème de Zorn". The name "Kuratowski–Zorn lemma" prevails in Poland and Russia.

Zorn's lemma is equivalent (in ZF) to three main results:

# Hausdorff maximal principle

# Axiom of choice

# Well-ordering theorem.

A well-known joke alluding to this equivalence (which may defy human intuition) is attributed to Jerry Bona:

"The Axiom of Choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?"

Zorn's lemma is also equivalent to the strong completeness theorem of first-order logic.

Moreover, Zorn's lemma (or one of its equivalent forms) implies some major results in other mathematical areas. For example,

# Banach's extension theorem which is used to prove one of the most fundamental results in functional analysis, the Hahn–Banach theorem

# Every vector space has a basis, a result from linear algebra (to which it is equivalent). In particular, the real numbers, as a vector space over the rational numbers, possess a Hamel basis.

# Every commutative unital ring has a maximal ideal, a result from ring theory known as Krull's theorem, to which Zorn's lemma is equivalent

# Tychonoff's theorem in topology (to which it is also equivalent)

# Every proper filter is contained in an ultrafilter, a result that yields the completeness theorem of first-order logic

In this sense, we see how Zorn's lemma can be seen as a powerful tool, especially in the sense of unified mathematics.

A weakened form of Zorn's lemma can be proven from ZF + DC (Zermelo–Fraenkel set theory with the axiom of choice replaced by the axiom of dependent choice). Zorn's lemma can be expressed straightforwardly by observing that the set having no maximal element would be equivalent to stating that the set's ordering relation would be entire, which would allow us to apply the axiom of dependent choice to construct a countable chain. As a result, any partially ordered set with exclusively finite chains must have a maximal element (see the sketch at the end of this article).

More generally, strengthening the axiom of dependent choice to higher ordinals allows us to generalize the statement in the previous paragraph to higher cardinalities. In the limit where we allow arbitrarily large ordinals, we recover the proof of the full Zorn's lemma using the axiom of choice in the preceding section.

The 1970 film Zorns Lemma is named after the lemma.

The lemma was referenced on The Simpsons in the episode "Bart's New Friend".
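As an illustration of the finite-chain observation above, here is a minimal Python sketch (the function name and the divisibility example are ours): repeatedly passing to a strictly greater element, in the style of the dependent-choice argument, must terminate at a maximal element whenever all chains are finite.

def maximal_element(elements, le):
    """Climb a chain until no strictly greater element exists.
    Terminates whenever every chain in the poset is finite."""
    x = elements[0]
    while True:
        bigger = next((y for y in elements if y != x and le(x, y)), None)
        if bigger is None:
            return x   # no element is strictly greater: x is maximal
        x = bigger

# Divisibility order on {1, ..., 12}; starting from 1 the climb
# visits 1, 2, 4, 8 and stops at 8, a maximal element (16 > 12).
print(maximal_element(list(range(1, 13)), lambda a, b: b % a == 0))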
diff --git a/wiki/wikipedia/4152.txt b/wiki/wikipedia/4152.txt
deleted file mode 100644
index 5d4faa2dfb2ddb175bf64843545e55e2cbed5d75..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4152.txt
+++ /dev/null
@@ -1,87 +0,0 @@

PlantUML is an open-source tool allowing users to create diagrams from a plain text language. Besides various UML diagrams, PlantUML has support for various other software development related formats (such as Archimate, Block diagram, BPMN, C4, Computer network diagram, ERD, Gantt chart, Mind map, and WBD), as well as visualisation of JSON and YAML files.

The language of PlantUML is an example of a domain-specific language. Besides its own DSL, PlantUML also understands AsciiMath, Creole, DOT, and LaTeX. It uses Graphviz software to layout its diagrams and Tikz for LaTeX support. Images can be output as PNG, SVG, LaTeX and even ASCII art. PlantUML has also been used to allow blind people to design and read UML diagrams.

There are various extensions or add-ons that incorporate PlantUML.

* Atom has a community maintained PlantUML syntax highlighter and viewer.

* Confluence wiki has a PlantUML plug-in for Confluence Server, which renders diagrams on-the-fly during a page reload. There is an additional PlantUML plug-in for Confluence Cloud.

* Doxygen integrates diagrams for which sources are provided after the command.

* Eclipse has a PlantUML plug-in.

* Google Docs has an add-on called PlantUML Gizmo that works with the PlantUML.com server.

* IntelliJ IDEA can create and display diagrams embedded into Markdown (built-in) or in standalone files (using a plugin).

* LaTeX using the Tikz package has limited support for PlantUML.

* LibreOffice has the Libo_PlantUML extension to use PlantUML diagrams.

* MediaWiki has a PlantUML plug-in which renders diagrams in pages as SVG or PNG.

* Microsoft Word can use PlantUML diagrams via a Word Template Add-in. There is an additional Visual Studio Tools for Office add-in called PlantUML Gizmo that works in a similar fashion.

* NetBeans has a PlantUML plug-in.

* Org-mode has PlantUML org-babel support.

* Rider has a PlantUML plug-in.

* Visual Studio Code has various PlantUML extensions on its , most popular being .

* Vnote, an open-source note-taking markdown application, has built-in PlantUML support.

* Xcode has a community maintained Source Editor Extension to generate and view PlantUML class diagrams from Swift source code.

PlantUML uses well-formed and human-readable code to render the diagrams.

There are other text formats for UML modelling, but PlantUML supports many diagram types and does not need explicit layout, though it is possible to tweak the diagrams if necessary.

The source code for the class diagram shown on the right is as follows:

@startuml
skinparam style strictuml
class Façade {
doSomething()
}
Façade .> package1.Class1
Façade .> package2.Class2
Façade .> package3.Class3
Client1 .> Façade : doSomething()
Client2 .> Façade : doSomething()
note as N2
doSomething() {
Class1 c1 = new Class1();
Class2 c2 = new Class2();
Class3 c3 = new Class3();
c1.doStuff(c2)
c3.setX(c1.getX());
return c3.getY();
}
end note
Façade ..
N2
@enduml

diff --git a/wiki/wikipedia/4153.txt b/wiki/wikipedia/4153.txt
deleted file mode 100644
index 1b633ad8927936cf1ce03b273b483034a7de1cda..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4153.txt
+++ /dev/null
@@ -1,39 +0,0 @@

YDS is a scheduling algorithm for dynamic speed scaling processors which minimizes the total energy consumption. It was named after and developed by Yao et al. There is both an online and an offline version of the algorithm.

Definitions:

* There is a set of n jobs $J := \{J_1, ..., J_n\}$, where each job $J_i$ has a release time $r_i$, deadline $d_i$ and a processing volume $w_i$.

* $I$ is a certain time interval.

* Also we have $\Delta_I = \frac{1}{|I|} \sum_{J_i \in S_I} w_i$, the work density in $I$.

* And finally $S_I \subseteq J$ is the set of jobs that must be processed in $I$, that means jobs with $[r_i, d_i] \subseteq I$.

The algorithm then works as follows:

While $J \neq \emptyset$
    Determine the time interval $I$ of maximum density $\Delta_I$.
    In $I$, process the jobs of $S_I$ at speed $\Delta_I$ according to EDF.
    Set $J := J \setminus S_I$.
    Remove $I$ from the time horizon and update the release times and deadlines of unscheduled jobs accordingly.
end While

In other terms, it is a recursive algorithm that will follow these steps until all jobs are scheduled:

# Calculate all intensities for all possible combinations of intervals. This means that for every start time and end time combination the intensity of work is calculated. For this, the processing volumes of all jobs whose arrival time and deadline lie inside the interval are added and divided by the interval length. To speed up the process, only combinations of arrival times and later deadlines need to be considered, as times without an arrival or deadline can be shrunk to a smaller interval with the same processes, thus increasing intensity, and negative intervals are invalid. Then the maximum intensity interval is selected. In case of multiple equally intense intervals, one can be chosen at will, as intensities of non-overlapping intervals do not influence each other, and removing a part of an interval will not change the intensity of the rest, as processes are removed proportionally.

# The processes inside this interval are scheduled using Earliest Deadline First, meaning that the job inside this interval whose deadline will arrive soonest is scheduled first, and so on. The jobs are executed at the above calculated intensity to fit all jobs inside the interval.

# The interval is removed from the timeline, as no more calculations can be scheduled here. To simplify further calculations, all arrival times and deadlines of remaining jobs are recalculated to omit already occupied times. For example, assume a job $A$ with arrival time $a_A = 0$, deadline $d_A = 10$ and a workload $w_A = 5$, and a job $B$ with $a_B = 5$, $d_B = 10$ and $w_B = 4$. Assume the previous interval was from time $3$ to $8$. To omit this interval, the times of $A$ and $B$ need to be adjusted; workloads are unaffected, as no work has been done for either $A$ or $B$. $a_A$ stays the same, as it is unaffected by later omissions. $d_A$, however, needs to be changed to $5$, as $10 - (8 - 3) = 5$. This is the time job $A$ has left before its deadline. The arrival time $a_B$ becomes $3$, as it would have been inside the removed interval. $d_B$ also becomes $5$, as the time left after the removed interval is $2$.
It is important, however, to remember the actual arrival and deadline times for later assembly of the schedule.

# Repeat steps 1-3 until all jobs have been scheduled.

# Assemble the jobs into the final schedule according to their allotted time intervals. Remember, though, that an interval may be split in two by another interval calculated earlier.

For any job instance, the algorithm computes an optimal schedule minimizing the total energy consumption.

diff --git a/wiki/wikipedia/4154.txt b/wiki/wikipedia/4154.txt
deleted file mode 100644
index 1d1084d7982b870a0e1e6cf632cd0c63fa6ac1cf..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4154.txt
+++ /dev/null
@@ -1,35 +0,0 @@

Tetris Friends was an online Tetris game developed by Tetris Online, Inc. Registered users were able to compare their scores with their friends and with the entire community. It was the only official Flash implementation of Tetris made by the Tetris company itself. At the time, it was also the only official Tetris platform that had advertisements play before a match. Tetris Friends had over a million registered users.

Tetris Friends featured six single-player modes and five multiplayer modes. Playing any game mode gave a user Tokens. They could be used to unlock new skins and Tetrimino styles, although a premium "ruby" currency also existed. The version available on Facebook only had four of the modes available on the official website and had no coin system, but featured a special mode.

All game modes except Tetris 1989 and N-Blox used the SRS rotational system, featured the hold function which allowed players to save a block for future use, and had a quad-Tetrimino preview that allows players to see the next few upcoming blocks.

Tetris Online, Inc ceased all operations on May 30, 2019. As such, Tetris Friends permanently shut down on that date.

In Marathon Mode, players have to clear a specified number of lines while trying to score as many points as possible. Clearing multiple lines with one piece is worth more than the actual number cleared. For instance, clearing a single line is worth only one, while clearing four lines with a single piece is worth eight lines. Once the goal has been reached, the level and speed increases and a new, higher goal is set. There are fifteen levels in total. The game ends once level fifteen is completed, and the score is recorded and added to the high score list if the user has a registered account.

Sprint Mode requires players to clear forty lines as quickly as possible. The time required to clear all the lines is recorded as the score. The speed is automatically set to the lowest setting.

Ultra Mode challenges players to score as many points as possible in two minutes. Clearing multiple lines with a single piece scores more points. Special moves called t-spins are also worth extra points. If the player loses by reaching the top, the score does not count and is not recorded in the high score list.

Survival Mode is similar to Marathon Mode. Players have to clear ten lines per level while scoring as many points as possible. Every line cleared is worth only one, unlike in Marathon Mode. There are a total of twenty levels, each one at a higher speed than the last. Once all twenty levels are beaten, the player enters the bonus level. The bonus level is a semi-invisible level. Blocks alternate between being visible and invisible. The blocks are invisible for increasingly longer time periods as the level progresses.
There is no limit on how many lines the player can clear, and the player only loses when the blocks reach the top of the field. This game mode is not available on the Facebook version.

This mode attempts to simulate the classic Game Boy version of Tetris. There are two modes within this game. Mode A challenges players to clear as many lines as possible. Once certain goals are reached, the speed increases. Mode B requires players to clear a specified number of lines. These are similar to other game modes, but the rotational system of the blocks is slightly different, and there is no hold feature, making this mode more difficult. This game mode is not available on the Facebook version.

N-Blox mode was developed by British web designer Paul Neave. It does not use many of the new features such as the hold function and the four block preview. Players are also not allowed to change the keyboard controls, which they can in every other mode. The goal is to score as many points as possible. The speed increases after a certain number of lines are cleared. This is similar to other standard Tetris modes available in other games. This mode is not available on the Facebook version.

This mode is similar to the N-Blox and Tetris 1989 modes. There are no hold pieces, ghost pieces, or four piece preview. The game is endless and challenges players to achieve the high score. This game is a remake of the first Tetris game that was released on Facebook. It is only available to play on Facebook, and is not found on the Tetris Friends website.

In Battle 2P, the player plays against a recorded game previously played by the player or any other player. The goal is to knock out the other player three times by forcing them to reach the top of the screen. Players can send garbage, blank lines of tiles that cannot be cleared normally, to the other player by clearing multiple lines with a single piece, by clearing lines with consecutive pieces, or by performing the t-spin maneuver. There is a two-minute time limit. If no one has won by the end of two minutes, the player that has been knocked out the most times loses. If both have the same number of knock-outs, then the player that sent the most lines of garbage to the other player is the winner. Players earn stars when they win. Once enough stars have been earned, the player is promoted to the next rank and matched up with better players. There are a total of twenty ranks attainable.

This mode is very similar to Battle 2P. The player is pitted against five other opponents. The opposing players are also recordings, as in Battle 2P. There is a target that constantly alternates between the other five players. When garbage is sent by the player, the person that the target is currently pointing at is the one that receives the garbage. Knock-outs are awarded to the person that sent the most garbage to the knocked-out player, so timing of attacks is important in earning knock-outs. The player with the highest knock-out-to-knocked-out ratio after two minutes is the winner. If that is tied, then the number of lines of garbage sent is the tie-breaker, as in Battle 2P. By doing well, the players can earn stars, which promotes them to new ranks. Doing poorly will cause players to lose stars. As in Battle 2P, there are twenty ranks in the mode. This mode is not available on the Facebook version.

In Sprint 5P, the player plays against four other players in the basic Sprint Mode. The players are previously recorded, as in Battle 2P.
The goal is to clear forty lines before the other players. Players earn stars by placing first or second, and they lose stars by placing fourth or fifth. Once enough stars are earned, they advance to a new rank just as in Battle 2P.

Arena is the only live multiplayer mode. It pits up to six players against each other in real time. It works similarly to Battle 6P, except that when a player is knocked out, they are out of that round. There are also items available in the game. When special flashing blocks are cleared, the player is awarded a randomly selected item that harms the opponent in some way. Items can cause more garbage, speed up the game for the opponent, or even make the blocks invisible, among other things. The player is awarded points based on how well they do, how many lines of garbage they sent, and how many people they knocked out. Once a player earns 1,000 points, they are promoted to the next rank. If their points reach 0 because they lost points in a match, they are demoted a rank. This mode is not available on the Facebook version.

In Rally 8P, the player has to race to the bottom of the field by clearing all the blocks. The player has to race against seven other previously recorded players. The mode features cascade rules, which allow blocks to fall all the way down if the blocks below them are removed. Items are also available in this mode, as in Arena. Wins earn stars, which allow promotion like most of the other multiplayer modes. This mode is not available on the Facebook version.

Tetris Friends met with positive reviews overall. PC Review gave it a 4.0/5, stating that "Gameplay is smooth and the Web site is set up cleanly and very easy to navigate. It's simple, but effective." It received an average score of 4.7/5 with over 100 reviews at Jay Is Games.

On April 25, 2019, a banner appeared on the homepage of Tetris Friends stating that "Tetris Friends will no longer be available after May 31st, 2019." This coincided with the shutdown of Tetris Online, Inc (the platform's parent company). Around this time, the platform made all customization options free for all players. On May 31, 2019, Tetris Friends permanently shut down alongside Tetris Online, Inc.

diff --git a/wiki/wikipedia/4155.txt b/wiki/wikipedia/4155.txt
deleted file mode 100644
index 82511ea1052849bf0d4f4851f4929372b0d8195d..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4155.txt
+++ /dev/null
@@ -1,17 +0,0 @@

In mathematics, the Tonelli–Hobson test gives sufficient criteria for a function ƒ on R2 to be an integrable function. It is often used to establish that Fubini's theorem may be applied to ƒ. It is named for Leonida Tonelli and E. W. Hobson.

More precisely, the Tonelli–Hobson test states that if ƒ is a real-valued measurable function on R2, and either of the two iterated integrals
$$
\int_\mathbb{R}\left(\int_\mathbb{R}|f(x,y)|dx\right) dy
$$

or
$$
\int_\mathbb{R}\left(\int_\mathbb{R}|f(x,y)|dy\right) dx
$$

is finite, then ƒ is Lebesgue-integrable on R2.

diff --git a/wiki/wikipedia/4156.txt b/wiki/wikipedia/4156.txt
deleted file mode 100644
index c4a6506d1cdf4425610462b7982cd07d44205236..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4156.txt
+++ /dev/null
@@ -1,85 +0,0 @@

A state diagram is a type of diagram used in computer science and related fields to describe the behavior of systems.
State diagrams require that the system described is composed of a finite number of states; sometimes, this is indeed the case, while at other times this is a reasonable abstraction. Many forms of state diagrams exist, which differ slightly and have different semantics.

State diagrams are used to give an abstract description of the behavior of a system. This behavior is analyzed and represented by a series of events that can occur in one or more possible states. Hereby "each diagram usually represents objects of a single class and track the different states of its objects through the system".

State diagrams can be used to graphically represent finite-state machines (also called finite automata). This was introduced by Claude Shannon and Warren Weaver in their 1949 book The Mathematical Theory of Communication. Another source is Taylor Booth in his 1967 book Sequential Machines and Automata Theory. Another possible representation is the state-transition table.

A classic form of state diagram for a finite automaton (FA) is a directed graph with the following elements (Q, Σ, Z, δ, q0, F):

* Vertices Q: a finite set of states, normally represented by circles and labeled with unique designator symbols or words written inside them

* Input symbols Σ: a finite collection of input symbols or designators

* Output symbols Z: a finite collection of output symbols or designators

The output function ω represents the mapping of ordered pairs of input symbols and states onto output symbols, denoted mathematically as ω : Σ × Q → Z.

* Edges δ: represent transitions from one state to another as caused by the input (identified by their symbols drawn on the edges). An edge is usually drawn as an arrow directed from the present state to the next state. This mapping describes the state transition that is to occur on input of a particular symbol. This is written mathematically as δ : Q × Σ → Q, so δ (the transition function) in the definition of the FA is given by both the pair of vertices connected by an edge and the symbol on an edge in a diagram representing this FA. Item δ(q, a) = p in the definition of the FA means that from the state named q under input symbol a, the transition to the state p occurs in this machine. In the diagram representing this FA, this is represented by an edge labeled by a pointing from the vertex labeled by q to the vertex labeled by p.

* Start state q0: (not shown in the examples below). The start state q0 ∈ Q is usually represented by an arrow with no origin pointing to the state. In older texts, the start state is not shown and must be inferred from the text.

* Accepting state(s) F: If used, for example for accepting automata, F ∈ Q is the accepting state. It is usually drawn as a double circle. Sometimes the accept state(s) function as "Final" (halt, trapped) states.

Harel statecharts, invented by computer scientist David Harel, are gaining widespread usage since a variant has become part of the Unified Modeling Language (UML). The diagram type allows the modeling of superstates, orthogonal regions, and activities as part of a state.

Classic state diagrams require the creation of distinct nodes for every valid combination of parameters that define the state. This can lead to a very large number of nodes and transitions between nodes for all but the simplest of systems (state and transition explosion). This complexity reduces the readability of the state diagram. With Harel statecharts it is possible to model multiple cross-functional state diagrams within the statechart.
Each of these cross-functional state machines can transition internally without affecting the other state machines in the statechart. The current state of each cross-functional state machine in the statechart defines the state of the system. The Harel statechart is equivalent to a state diagram but it improves the readability of the resulting diagram.

There are other sets of semantics available to represent state diagrams. For example, there are tools for modeling and designing logic for embedded controllers. These diagrams, like Harel's original state machines, support hierarchically nested states, orthogonal regions, state actions, and transition actions.

Newcomers to the state machine formalism often confuse state diagrams with flowcharts. The figure below shows a comparison of a state diagram with a flowchart. A state machine (panel (a)) performs actions in response to explicit events. In contrast, the flowchart (panel (b)) does not need explicit events but rather transitions from node to node in its graph automatically upon completion of activities.

Nodes of flowcharts are edges in the induced graph of states. The reason is that each node in a flowchart represents a program command. A program command is an action to be executed. So it is not a state, but when applied to the program's state, it results in a transition to another state.

In more detail, the source code listing represents a program graph. Executing the program graph (parsing and interpreting) results in a state graph. So each program graph induces a state graph. Conversion of the program graph to its associated state graph is called "unfolding" of the program graph.

The program graph is a sequence of commands. If no variables exist, then the state consists only of the program counter, which keeps track of where in the program we are during execution (what is the next command to be applied). In this case, before executing a command, the program counter is at some position (state before the command is executed). Executing the command moves the program counter to the next command. Since the program counter is the whole state, it follows that executing the command changed the state. So the command itself corresponds to a transition between the two states.

Now consider the full case, when variables exist and are affected by the program commands being executed. Then between different program counter locations, not only does the program counter change, but variables might also change values, due to the commands executed. Consequently, even if we revisit some program command (e.g. in a loop), this doesn't imply the program is in the same state. In the previous case, the program would be in the same state, because the whole state is just the program counter, so if the program counter points to the same position (next command) it suffices to specify that we are in the same state. However, if the state includes variables, then if those change value, we can be at the same program location with different variable values, meaning in a different state in the program's state space. The term "unfolding" originates from this multiplication of locations when producing the state graph from the program graph.

A self transition is a transition where the initial and the final state are the same.

A representative example is a do loop incrementing some counter until it overflows and becomes 0 again.
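A small Python sketch of this situation (emulating an 8-bit counter; the modular wrap-around stands in for hardware overflow): the loop body is a single program location, yet each iteration visits a new state, until the counter wraps back to 0 and the cycle in state space closes.

counter = 0
states_visited = []
while True:
    states_visited.append(counter)
    counter = (counter + 1) % 256   # increment with 8-bit wrap-around
    if counter == 0:                # overflow: the initial state is revisited
        break
print(len(states_visited))          # 256 distinct states before the cycle closes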
Although the do loop executes the same increment command iteratively, so the program graph executes a cycle, its path in the state space is not a cycle, but a line. This results from the state being the program location (here cycling) combined with the counter value, which is strictly increasing (until the overflow), so different states are visited in sequence, until the overflow. After the overflow the counter becomes 0 again, so the initial state is revisited in the state space, closing a cycle in the state space (assuming the counter was initialized to 0).

The figure above attempts to show that reversal of roles by aligning the arcs of the state diagrams with the processing stages of the flowchart.

You can compare a flowchart to an assembly line in manufacturing because the flowchart describes the progression of some task from beginning to end (e.g., transforming source code input into object code output by a compiler). A state machine generally has no notion of such a progression. The door state machine shown at the top of this article, for example, is not in a more advanced stage when it is in the "closed" state, compared to being in the "opened" state; it simply reacts differently to the open/close events. A state in a state machine is an efficient way of specifying a particular behavior, rather than a stage of processing.

An interesting extension is to allow arcs to flow from any number of states to any number of states. This only makes sense if the system is allowed to be in multiple states at once, which implies that an individual state only describes a condition or other partial aspect of the overall, global state. The resulting formalism is known as a Petri net.

Another extension allows the integration of flowcharts within Harel statecharts. This extension supports the development of software that is both event driven and workflow driven.

diff --git a/wiki/wikipedia/4157.txt b/wiki/wikipedia/4157.txt
deleted file mode 100644
index ba71e700aaf7b8909ce77aa1116be72dc4af0e8c..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4157.txt
+++ /dev/null
@@ -1,53 +0,0 @@

A database transaction symbolizes a unit of work performed within a database management system (or similar system) against a database, and treated in a coherent and reliable way independent of other transactions. A transaction generally represents any change in a database. Transactions in a database environment have two main purposes:

# To provide reliable units of work that allow correct recovery from failures and keep a database consistent even in cases of system failure, when execution prematurely and unexpectedly stops (completely or partially) and many operations upon a database remain uncompleted, with unclear status.

# To provide isolation between programs accessing a database concurrently. If this isolation is not provided, the programs' outcomes are possibly erroneous.

In a database management system, a transaction is a single unit of logic or work, sometimes made up of multiple operations. Any logical calculation done in a consistent mode in a database is known as a transaction. One example is a transfer from one bank account to another: the complete transaction requires subtracting the amount to be transferred from one account and adding that same amount to the other.
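A minimal sketch of such a transfer in Python, using the standard library's sqlite3 module (the table and account names are invented for the example): the debit and the credit are grouped into one atomic transaction, so either both updates commit or both roll back.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("checking", 500), ("savings", 100)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move `amount` from src to dst atomically."""
    try:
        with conn:  # sqlite3 commits on success and rolls back on exception
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
    except sqlite3.Error:
        pass  # rolled back; no partial transfer is ever visible

transfer(conn, "checking", "savings", 100)
print(conn.execute("SELECT * FROM accounts ORDER BY name").fetchall())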
- -A database transaction, by definition, must be atomic (it must either be complete in its entirety or have no effect whatsoever), consistent (it must conform to existing constraints in the database), isolated (it must not affect other transactions) and durable (it must get written to persistent storage). Database practitioners often refer to these properties of database transactions using the acronym ACID. - -Databases and other data stores which treat the integrity of data as paramount often include the ability to handle transactions to maintain the integrity of data. A single transaction consists of one or more independent units of work, each reading and/or writing information to a database or other data store. When this happens it is often important to ensure that all such processing leaves the database or data store in a consistent state. - -Examples from double-entry accounting systems often illustrate the concept of transactions. In double-entry accounting every debit requires the recording of an associated credit. If one writes a check for $100 to buy groceries, a transactional double-entry accounting system must record the following two entries to cover the single transaction: - -# Debit $100 to Groceries Expense Account - -# Credit $100 to Checking Account - -A transactional system would make both entries pass or both entries would fail. By treating the recording of multiple entries as an atomic transactional unit of work the system maintains the integrity of the data recorded. In other words, nobody ends up with a situation in which a debit is recorded but no associated credit is recorded, or vice versa. - -A transactional database is a DBMS that provides the ACID properties for a bracketed set of database operations (begin-commit). All the write operations within a transaction have an all-or-nothing effect, that is, either the transaction succeeds and all writes take effect, or otherwise, the database is brought to a state that does not include any of the writes of the transaction. Transactions also ensure that the effect of concurrent transactions satisfies certain guarantees, known as isolation level. The highest isolation level is serializability, which guarantees that the effect of concurrent transactions is equivalent to their serial (i.e. sequential) execution. - -Most relational database management systems fall into the category of databases that support transactions. NoSQL data stores prioritize scalability along with supporting transactions in order to guarantee data consistency in the event of concurrent updates and accesses. - -In a database system, a transaction might consist of one or more data-manipulation statements and queries, each reading and/or writing information in the database. Users of database systems consider consistency and integrity of data as highly important. A simple transaction is usually issued to the database system in a language like SQL wrapped in a transaction, using a pattern similar to the following: - -# Begin the transaction. - -# Execute a set of data manipulations and/or queries. - -# If no error occurs, then commit the transaction. - -# If an error occurs, then roll back the transaction. - -A transaction commit operation persists all the results of data manipulations within the scope of the transaction to the database. A transaction rollback operation does not persist the partial results of data manipulations within the scope of the transaction to the database. 
In no case can a partial transaction be committed to the database since that would leave the database in an inconsistent state. - -Internally, multi-user databases store and process transactions, often by using a transaction ID or XID. - -Transactions can be implemented in several ways beyond the simple pattern documented above. Nested transactions, for example, are transactions which contain statements within them that start new transactions (i.e. sub-transactions). Multi-level transactions are a variant of nested transactions where the sub-transactions take place at different levels of a layered system architecture (e.g., with one operation at the database-engine level, one operation at the operating-system level). Another type of transaction is the compensating transaction. - -Transactions are available in most SQL database implementations, though with varying levels of robustness. For example, MySQL began supporting transactions from early version 3.23, but the InnoDB storage engine was not the default before version 5.5. The earlier default storage engine, MyISAM, does not support transactions. - -A transaction is typically started using the command BEGIN (although the SQL standard specifies START TRANSACTION). When the system processes a COMMIT statement, the transaction ends with successful completion. A ROLLBACK statement can also end the transaction, undoing any work performed since BEGIN. If autocommit was disabled when the transaction started, autocommit will also be re-enabled when the transaction ends. - -One can set the isolation level for individual transactional operations as well as globally. At higher isolation levels (READ COMMITTED and above), the result of any operation performed after a transaction has started will remain invisible to other database users until the transaction has ended. At the lowest level (READ UNCOMMITTED), which may occasionally be used to ensure high concurrency, such changes will be immediately visible. - -Relational databases are traditionally composed of tables with fixed-size fields and records. Object databases comprise variable-sized blobs, possibly serializable or incorporating a mime-type. The fundamental similarities between relational and object databases are the start of a transaction and the commit or rollback. - -After starting a transaction, database records or objects are locked, either read-only or read-write. Reads and writes can then occur. Once the transaction is fully defined, changes are committed or rolled back atomically, such that at the end of the transaction there is no inconsistency. - -Database systems implement distributed transactions as transactions accessing data over multiple nodes. A distributed transaction enforces the ACID properties over multiple nodes, and might include systems such as databases, storage managers, file systems, messaging systems, and other data managers. In a distributed transaction there is typically an entity coordinating the whole process to ensure that all parts of the transaction are applied to all relevant systems. - -The Namesys Reiser4 filesystem for Linux supports transactions, and as of Microsoft Windows Vista, the Microsoft NTFS filesystem supports distributed transactions across networks. Research is ongoing into more data-coherent filesystems, such as the Warp Transactional Filesystem (WTF).
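The begin/commit/rollback pattern described above can be sketched with Python's built-in sqlite3 module. This is an illustrative example, not part of the original article: the account table, amounts, and the transfer helper are assumptions made for the sketch.

```python
import sqlite3

# isolation_level=None puts sqlite3 in autocommit mode, so the
# BEGIN/COMMIT/ROLLBACK statements below are issued explicitly.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('checking', 100), ('savings', 0)")

def transfer(src, dst, amount):
    conn.execute("BEGIN")                        # 1. begin the transaction
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))              # 2. data manipulations
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        (bal,) = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                              (src,)).fetchone()
        if bal < 0:
            raise ValueError("insufficient funds")
        conn.execute("COMMIT")                   # 3. commit if no error occurred
    except Exception:
        conn.execute("ROLLBACK")                 # 4. roll back on any error
        raise

transfer("checking", "savings", 40)        # succeeds: both updates persist
try:
    transfer("checking", "savings", 1000)  # fails: neither update persists
except ValueError:
    pass
print(conn.execute("SELECT * FROM accounts ORDER BY name").fetchall())
# [('checking', 60), ('savings', 40)]
```

The failed transfer leaves both balances untouched, which is exactly the all-or-nothing (atomicity) guarantee the article describes.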
diff --git a/wiki/wikipedia/4158.txt b/wiki/wikipedia/4158.txt deleted file mode 100644 index baa59ca2e405ceeb296435703104929741d03977..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4158.txt +++ /dev/null @@ -1,618 +0,0 @@ - - -In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its population mean or sample mean. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers is spread out from their average value. Variance has a central role in statistics, where some ideas that use it include descriptive statistics, statistical inference, hypothesis testing, goodness of fit, and Monte Carlo sampling. Variance is an important tool in the sciences, where statistical analysis of data is common. The variance is the square of the standard deviation, the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by $\sigma^2$, $s^2$, $\operatorname{Var}(X)$, $V(X)$, or $\mathbb{V}(X)$. - -An advantage of variance as a measure of dispersion is that it is more amenable to algebraic manipulation than other measures of dispersion such as the expected absolute deviation; for example, the variance of a sum of uncorrelated random variables is equal to the sum of their variances. A disadvantage of the variance for practical applications is that, unlike the standard deviation, its units differ from the random variable, which is why the standard deviation is more commonly reported as a measure of dispersion once the calculation is finished. - -There are two distinct concepts that are both called "variance". One, as discussed above, is part of a theoretical probability distribution and is defined by an equation. The other variance is a characteristic of a set of observations. When variance is calculated from observations, those observations are typically measured from a real world system. If all possible observations of the system are present then the calculated variance is called the population variance. Normally, however, only a subset is available, and the variance calculated from this is called the sample variance. The variance calculated from a sample is considered an estimate of the full population variance. There are multiple ways to calculate an estimate of the population variance, as discussed in the section below. - -The two kinds of variance are closely related. To see how, consider that a theoretical probability distribution can be used as a generator of hypothetical observations. If an infinite number of observations are generated using a distribution, then the sample variance calculated from that infinite set will match the value calculated using the distribution's equation for variance. - -The variance of a random variable $X$ is the expected value of the squared deviation from the mean of $X$, $\mu = \operatorname{E}[X]$: -$$ - \operatorname{Var}(X) = \operatorname{E}\left[(X - \mu)^2 \right]. -$$ - -This definition encompasses random variables that are generated by processes that are discrete, continuous, neither, or mixed. The variance can also be thought of as the covariance of a random variable with itself: -$$ -\operatorname{Var}(X) = \operatorname{Cov}(X, X). -$$ - -The variance is also equivalent to the second cumulant of a probability distribution that generates $X$. 
The variance is typically designated as $\operatorname{Var}(X)$, or sometimes as $V(X)$ or $\mathbb{V}(X)$, or symbolically as $\sigma^2_X$ or simply $\sigma^2$ (pronounced "sigma squared"). The expression for the variance can be expanded as follows: - -\begin{align} - -\operatorname{Var}(X) &= \operatorname{E}\left[(X - \operatorname{E}[X])^2\right] \\[4pt] - -&= \operatorname{E}\left[X^2 - 2X\operatorname{E}[X] + \operatorname{E}[X]^2\right] \\[4pt] - -&= \operatorname{E}\left[X^2\right] - 2\operatorname{E}[X]\operatorname{E}[X] + \operatorname{E}[X]^2 \\[4pt] - -&= \operatorname{E}\left[X^2 \right] - \operatorname{E}[X]^2 - -\end{align} - -In other words, the variance of X is equal to the mean of the square of X minus the square of the mean of X. This equation should not be used for computations using floating point arithmetic, because it suffers from catastrophic cancellation if the two components of the equation are similar in magnitude. For other numerically stable alternatives, see Algorithms for calculating variance. - -If the generator of random variable $X$ is discrete with probability mass function $x_1 \mapsto p_1, x_2 \mapsto p_2, \ldots, x_n \mapsto p_n$, then -$$ -\operatorname{Var}(X) = \sum_{i=1}^n p_i\cdot(x_i - \mu)^2, -$$ - -where $\mu$ is the expected value. That is, -$$ -\mu = \sum_{i=1}^n p_i x_i . -$$ - -(When such a discrete weighted variance is specified by weights whose sum is not 1, then one divides by the sum of the weights.) - -The variance of a collection of $n$ equally likely values can be written as -$$ - \operatorname{Var}(X) = \frac{1}{n} \sum_{i=1}^n (x_i - \mu)^2 -$$ - -where $\mu$ is the average value. That is, -$$ -\mu = \frac{1}{n}\sum_{i=1}^n x_i . -$$ - -The variance of a set of $n$ equally likely values can be equivalently expressed, without directly referring to the mean, in terms of squared deviations of all points from each other: -$$ - \operatorname{Var}(X) = \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n \frac{1}{2}(x_i - x_j)^2 = \frac{1}{n^2}\sum_i \sum_{j>i} (x_i-x_j)^2. -$$ - -If the random variable $X$ has a probability density function $f(x)$, and $F(x)$ is the corresponding cumulative distribution function, then - -\begin{align} - -\operatorname{Var}(X) = \sigma^2 &= \int_{\R} (x-\mu)^2 f(x) dx \\[4pt] - -&= \int_{\R} x^2f(x)dx -2\mu\int_{\R} xf(x)dx + \mu^2\int_{\R} f(x)dx \\[4pt] - -&= \int_{\R} x^2 dF(x) - 2 \mu \int_{\R} x dF(x) + \mu^2 \int_{\R} dF(x) \\[4pt] - -&= \int_{\R} x^2 dF(x) - 2 \mu \cdot \mu + \mu^2 \cdot 1 \\[4pt] - -&= \int_{\R} x^2 dF(x) - \mu^2, - -\end{align} - -or equivalently, -$$ -\operatorname{Var}(X) = \int_{\R} x^2 f(x) dx - \mu^2 , -$$ - -where $\mu$ is the expected value of $X$ given by -$$ -\mu = \int_{\R} x f(x) dx = \int_{\R} x d F(x). -$$ - -In these formulas, the integrals with respect to $dx$ and $dF(x)$ - -are Lebesgue and Lebesgue–Stieltjes integrals, respectively. - -If the function $x^2f(x)$ is Riemann-integrable on every finite interval $[a,b]\subset\R,$ then -$$ -\operatorname{Var}(X) = \int^{+\infty}_{-\infty} x^2 f(x) dx - \mu^2, -$$ - -where the integral is an improper Riemann integral. - -The exponential distribution with parameter λ is a continuous distribution whose probability density function is given by -$$ -f(x) = \lambda e^{-\lambda x} -$$ - -on the interval [0, ∞). Its mean can be shown to be -$$ -\operatorname{E}[X] = \int_0^\infty \lambda xe^{-\lambda x} dx = \frac{1}{\lambda}. 
-$$ - -Using integration by parts and making use of the expected value already calculated, we have: - -\begin{align} - -\operatorname{E}\left[X^2\right] &= \int_0^\infty \lambda x^2 e^{-\lambda x} dx \\ - -&= \left[ -x^2 e^{-\lambda x} \right]_0^\infty + \int_0^\infty 2xe^{-\lambda x} dx \\ - -&= 0 + \frac{2}{\lambda}\operatorname{E}[X] \\ - -&= \frac{2}{\lambda^2}. - -\end{align} - -Thus, the variance of X is given by -$$ -\operatorname{Var}(X) = \operatorname{E}\left[X^2\right] - \operatorname{E}[X]^2 = \frac{2}{\lambda^2} - \left(\frac{1}{\lambda}\right)^2 = \frac{1}{\lambda^2}. -$$ - -A fair six-sided die can be modeled as a discrete random variable, X, with outcomes 1 through 6, each with equal probability 1/6. The expected value of X is $(1 + 2 + 3 + 4 + 5 + 6)/6 = 7/2.$ Therefore, the variance of X is - -\begin{align} - -\operatorname{Var}(X) &= \sum_{i=1}^6 \frac{1}{6}\left(i - \frac{7}{2}\right)^2 \\[5pt] - -&= \frac{1}{6}\left((-5/2)^2 + (-3/2)^2 + (-1/2)^2 + (1/2)^2 + (3/2)^2 + (5/2)^2\right) \\[5pt] - -&= \frac{35}{12} \approx 2.92. - -\end{align} - -The general formula for the variance of the outcome, X, of an n-sided die is - -\begin{align} - -\operatorname{Var}(X) &= \operatorname{E}\left(X^2\right) - (\operatorname{E}(X))^2 \\[5pt] - -&= \frac{1}{n}\sum_{i=1}^n i^2 - \left(\frac{1}{n}\sum_{i=1}^n i\right)^2 \\[5pt] - -&= \frac{(n + 1)(2n + 1)}{6} - \left(\frac{n + 1}{2}\right)^2 \\[4pt] - -&= \frac{n^2 - 1}{12}. - -\end{align} - -Both die formulas are easy to confirm numerically, as in the sketch below.
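As a quick check of the worked die example (the helper function is an illustration, not from the article), the discrete variance formula applied to a fair n-sided die indeed reproduces $(n^2-1)/12$, in exact arithmetic:

```python
from fractions import Fraction

# Var(X) = E[(X - mu)^2] for a fair n-sided die, computed exactly.
def die_variance(n):
    outcomes = range(1, n + 1)
    mean = Fraction(sum(outcomes), n)
    return sum((x - mean) ** 2 for x in outcomes) / n

print(die_variance(6))  # 35/12, matching the worked example above
print(all(die_variance(n) == Fraction(n * n - 1, 12) for n in range(2, 50)))
# True: agrees with the closed form (n^2 - 1)/12
```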
Variance is non-negative because the squares are positive or zero: -$$ -\operatorname{Var}(X)\ge 0. -$$ - -The variance of a constant is zero: -$$ -\operatorname{Var}(a) = 0. -$$ - -Conversely, if the variance of a random variable is 0, then it is almost surely a constant. That is, it always has the same value: -$$ -\operatorname{Var}(X)= 0 \iff \exists a : P(X=a) = 1. -$$ - -Variance is invariant with respect to changes in a location parameter. That is, if a constant is added to all values of the variable, the variance is unchanged: -$$ -\operatorname{Var}(X+a)=\operatorname{Var}(X). -$$ - -If all values are scaled by a constant, the variance is scaled by the square of that constant: -$$ -\operatorname{Var}(aX)=a^2\operatorname{Var}(X). -$$ - -The variance of a sum of two random variables is given by -$$ -\operatorname{Var}(aX+bY)=a^2\operatorname{Var}(X)+b^2\operatorname{Var}(Y)+2ab \operatorname{Cov}(X,Y), -$$ -$$ -\operatorname{Var}(aX-bY)=a^2\operatorname{Var}(X)+b^2\operatorname{Var}(Y)-2ab \operatorname{Cov}(X,Y), -$$ - -where $\operatorname{Cov}(X,Y)$ is the covariance. - -In general, for the sum of $N$ random variables $\{X_1,\dots,X_N\}$, the variance becomes: -$$ -\operatorname{Var}\left(\sum_{i=1}^N X_i\right)=\sum_{i,j=1}^N\operatorname{Cov}(X_i,X_j)=\sum_{i=1}^N\operatorname{Var}(X_i)+\sum_{i\ne j}\operatorname{Cov}(X_i,X_j); -$$ - -see also the general Bienaymé identity. - -These results lead to the variance of a linear combination as: - -\begin{align} - -\operatorname{Var}\left( \sum_{i=1}^N a_iX_i\right) &=\sum_{i,j=1}^{N} a_ia_j\operatorname{Cov}(X_i,X_j) \\ - -&=\sum_{i=1}^N a_i^2\operatorname{Var}(X_i)+\sum_{i\not=j}a_ia_j\operatorname{Cov}(X_i,X_j)\\ - -&=\sum_{i=1}^N a_i^2\operatorname{Var}(X_i)+2\sum_{1\le i<j\le N}a_ia_j\operatorname{Cov}(X_i,X_j). - -\end{align} - -If the random variables $X_1,\dots,X_N$ are such that -$$ -\operatorname{Cov}(X_i,X_j)=0\ ,\ \forall\ (i\ne j) , -$$ - -then they are said to be uncorrelated. It follows immediately from the expression given earlier that if the random variables $X_1,\dots,X_N$ are uncorrelated, then the variance of their sum is equal to the sum of their variances, or, expressed symbolically: -$$ -\operatorname{Var}\left(\sum_{i=1}^N X_i\right)=\sum_{i=1}^N\operatorname{Var}(X_i). -$$ - -Since independent random variables are always uncorrelated, the equation above holds in particular when the random variables $X_1,\dots,X_n$ are independent. Thus, independence is sufficient but not necessary for the variance of the sum to equal the sum of the variances. - -If a distribution does not have a finite expected value, as is the case for the Cauchy distribution, then the variance cannot be finite either. However, some distributions may not have a finite variance, despite their expected value being finite. An example is a Pareto distribution whose index $k$ satisfies $1 < k \leq 2.$ - -One reason for the use of the variance in preference to other measures of dispersion is that the variance of the sum (or the difference) of uncorrelated random variables is the sum of their variances: -$$ -\operatorname{Var}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \operatorname{Var}(X_i). -$$ - -This statement is called the Bienaymé formula and was discovered in 1853. It is often made with the stronger condition that the variables are independent, but being uncorrelated suffices. So if all the variables have the same variance σ2, then, since division by n is a linear transformation, this formula immediately implies that the variance of their mean is - -\operatorname{Var}\left(\overline{X}\right) = \operatorname{Var}\left(\frac{1}{n} \sum_{i=1}^n X_i\right) = \frac{1}{n^2}\sum_{i=1}^n \operatorname{Var}\left(X_i\right) = \frac{1}{n^2}n\sigma^2 = \frac{\sigma^2}{n}. - -That is, the variance of the mean decreases when n increases. This formula for the variance of the mean is used in the definition of the standard error of the sample mean, which is used in the central limit theorem. - -To prove the initial statement, it suffices to show that -$$ -\operatorname{Var}(X + Y) = \operatorname{Var}(X) + \operatorname{Var}(Y). -$$ - -The general result then follows by induction. Starting with the definition, - -\begin{align} - -\operatorname{Var}(X + Y) &= \operatorname{E}\left[(X + Y)^2\right] - (\operatorname{E}[X + Y])^2 \\[5pt] - -&= \operatorname{E}\left[X^2 + 2XY + Y^2\right] - (\operatorname{E}[X] + \operatorname{E}[Y])^2. - -\end{align} - -Using the linearity of the expectation operator and the assumption of independence (or uncorrelatedness) of X and Y, this further simplifies as follows: - -\begin{align} - -\operatorname{Var}(X + Y) &= \operatorname{E}\left[X^2\right] + 2\operatorname{E}[XY] + \operatorname{E}\left[Y^2\right] - \left(\operatorname{E}[X]^2 + 2\operatorname{E}[X]\operatorname{E}[Y] + \operatorname{E}[Y]^2\right) \\[5pt] - -&= \operatorname{E}\left[X^2\right] + \operatorname{E}\left[Y^2\right] - \operatorname{E}[X]^2 - \operatorname{E}[Y]^2 \\[5pt] - -&= \operatorname{Var}(X) + \operatorname{Var}(Y). - -\end{align} - -In general, the variance of the sum of n variables is the sum of their covariances: -$$ -\operatorname{Var}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \sum_{j=1}^n \operatorname{Cov}\left(X_i, X_j\right) = \sum_{i=1}^n \operatorname{Var}\left(X_i\right) + 2\sum_{1\le i<j\le n} \operatorname{Cov}\left(X_i, X_j\right). -$$ - -(Note: the second equality comes from the fact that $\operatorname{Cov}(X_i,X_i) = \operatorname{Var}(X_i)$.) - -Here, $\operatorname{Cov}(\cdot,\cdot)$ is the covariance, which is zero for independent random variables (if it exists). The formula states that the variance of a sum is equal to the sum of all elements in the covariance matrix of the components; a numerical check is sketched below. The next expression states equivalently that the variance of the sum is the sum of the diagonal of the covariance matrix plus two times the sum of its upper triangular elements (or its lower triangular elements); this emphasizes that the covariance matrix is symmetric. This formula is used in the theory of Cronbach's alpha in classical test theory.
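The identity just stated can be verified numerically. In the sketch below, the covariance matrix, sample size, and seed are arbitrary choices made for illustration:

```python
import numpy as np

# Check: Var(X1 + X2 + X3) equals the sum of all covariance-matrix entries.
rng = np.random.default_rng(0)
x = rng.multivariate_normal(mean=[0.0, 0.0, 0.0],
                            cov=[[2.0, 0.5, 0.0],
                                 [0.5, 1.0, 0.3],
                                 [0.0, 0.3, 1.5]],
                            size=200_000)
total = x.sum(axis=1)
print(total.var())                     # sample variance of the sum
print(np.cov(x, rowvar=False).sum())   # sum of all covariance-matrix entries
# The two agree up to sampling error; for uncorrelated components the
# off-diagonal terms vanish and the Bienaymé formula Var(sum) = sum(Var) remains.
```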
So if the variables have equal variance σ2 and the average correlation of distinct variables is ρ, then the variance of their mean is -$$ -\operatorname{Var}\left(\overline{X}\right) = \frac{\sigma^2}{n} + \frac{n - 1}{n}\rho\sigma^2. -$$ - -This implies that the variance of the mean increases with the average of the correlations. In other words, additional correlated observations are not as effective as additional independent observations at reducing the uncertainty of the mean. Moreover, if the variables have unit variance, for example if they are standardized, then this simplifies to -$$ -\operatorname{Var}\left(\overline{X}\right) = \frac{1}{n} + \frac{n - 1}{n}\rho. -$$ - -This formula is used in the Spearman–Brown prediction formula of classical test theory. This converges to ρ if n goes to infinity, provided that the average correlation remains constant or converges too. So for the variance of the mean of standardized variables with equal correlations or converging average correlation we have -$$ -\lim_{n \to \infty} \operatorname{Var}\left(\overline{X}\right) = \rho. -$$ - -Therefore, the variance of the mean of a large number of standardized variables is approximately equal to their average correlation. This makes clear that the sample mean of correlated variables does not generally converge to the population mean, even though the law of large numbers states that the sample mean will converge for independent variables. - -There are cases when a sample is taken without knowing, in advance, how many observations will be acceptable according to some criterion. In such cases, the sample size N is a random variable whose variation adds to the variation of X, such that -$$ -\operatorname{Var}(\Sigma X) = \operatorname{E}(N)\operatorname{Var}(X) + \operatorname{Var}(N)\operatorname{E}(X)^2, -$$ - -which follows from the law of total variance. - -If N has a Poisson distribution, then $\operatorname{E}(N) = \operatorname{Var}(N)$ with estimator $N = n$. So, the estimator of $\operatorname{Var}(\Sigma X)$ becomes $n S_X^2 + n \bar{X}^2$, giving -$$ -\operatorname{standard~error}(\bar{X}) = \sqrt{\frac{S_X^2 + \bar{X}^2}{n}}. -$$ - -Define $X$ as a column vector of $n$ random variables $X_1, \ldots,X_n$, and $c$ as a column vector of $n$ scalars $c_1, \ldots,c_n$. Therefore, $c^\mathsf{T} X$ is a linear combination of these random variables, where $c^\mathsf{T}$ denotes the transpose of $c$. Also let $\Sigma$ be the covariance matrix of $X$. The variance of $c^\mathsf{T}X$ is then given by: -$$ -\operatorname{Var}\left(c^\mathsf{T} X\right) = c^\mathsf{T} \Sigma c . -$$ - -This implies that the variance of the mean can be written as (with $1$ a column vector of ones) -$$ -\operatorname{Var}\left(\bar{x}\right) = \operatorname{Var}\left(\frac{1}{n} 1'X\right) = \frac{1}{n^2} 1'\Sigma 1. -$$ - -The scaling property and the Bienaymé formula, along with the property of the covariance $\operatorname{Cov}(aX, bY) = ab \operatorname{Cov}(X, Y)$, jointly imply that -$$ -\operatorname{Var}(aX \pm bY) =a^2 \operatorname{Var}(X) + b^2 \operatorname{Var}(Y) \pm 2ab \operatorname{Cov}(X, Y). -$$ - -This implies that in a weighted sum of variables, the variable with the largest weight will have a disproportionately large weight in the variance of the total. For example, if X and Y are uncorrelated and the weight of X is two times the weight of Y, then the weight of the variance of X will be four times the weight of the variance of Y.
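The matrix form $\operatorname{Var}(c^\mathsf{T}X) = c^\mathsf{T}\Sigma c$ can also be checked by simulation; the covariance matrix, weights, and seed below are illustrative assumptions:

```python
import numpy as np

# Check Var(c^T X) = c^T Sigma c for a linear combination 2*X1 - X2.
rng = np.random.default_rng(1)
Sigma = np.array([[4.0, 1.0],
                  [1.0, 9.0]])
c = np.array([2.0, -1.0])
x = rng.multivariate_normal([0.0, 0.0], Sigma, size=500_000)
print((x @ c).var())     # empirical variance of the combination
print(c @ Sigma @ c)     # theoretical value: 4*4 + 1*9 - 2*2*1*1 = 21.0
```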
The expression above can be extended to a weighted sum of multiple variables: -$$ -\operatorname{Var}\left(\sum_{i=1}^n a_iX_i\right) = \sum_{i=1}^n a_i^2 \operatorname{Var}(X_i) + 2\sum_{1\le i<j\le n} a_i a_j \operatorname{Cov}(X_i, X_j). -$$ - -In general, for two (possibly dependent) random variables $X$ and $Y$, the variance of their product can be expanded as - -\begin{align} - -\operatorname{Var}(XY) - -={} &\operatorname{E}\left[X^2 Y^2\right] - [\operatorname{E}(XY)]^2 \\[5pt] - -={} &\operatorname{Cov}\left(X^2, Y^2\right) + \operatorname{E}(X^2)\operatorname{E}\left(Y^2\right) - [\operatorname{E}(XY)]^2 \\[5pt] - -={} &\operatorname{Cov}\left(X^2, Y^2\right) + \left(\operatorname{Var}(X) + [\operatorname{E}(X)]^2\right)\left(\operatorname{Var}(Y) + [\operatorname{E}(Y)]^2\right) \\[5pt] - -&- [\operatorname{Cov}(X, Y) + \operatorname{E}(X)\operatorname{E}(Y)]^2 - -\end{align} - -The general formula for variance decomposition or the law of total variance is: If $X$ and $Y$ are two random variables, and the variance of $X$ exists, then -$$ -\operatorname{Var}[X]=\operatorname{E}(\operatorname{Var}[X\mid Y])+\operatorname{Var}(\operatorname{E}[X\mid Y]). -$$ - -The conditional expectation $\operatorname E(X\mid Y)$ of $X$ given $Y$, and the conditional variance $\operatorname{Var}(X\mid Y)$ may be understood as follows. Given any particular value y of the random variable Y, there is a conditional expectation $\operatorname E(X\mid Y=y)$ given the event Y = y. This quantity depends on the particular value y; it is a function $ g(y) = \operatorname E(X\mid Y=y)$. That same function evaluated at the random variable Y is the conditional expectation $\operatorname E(X\mid Y) = g(Y).$ - -In particular, if $Y$ is a discrete random variable assuming possible values $y_1, y_2, y_3 \ldots$ with corresponding probabilities $p_1, p_2, p_3 \ldots,$ then in the formula for total variance, the first term on the right-hand side becomes -$$ -\operatorname{E}(\operatorname{Var}[X \mid Y]) = \sum_i p_i \sigma^2_i, -$$ - -where $\sigma^2_i = \operatorname{Var}[X \mid Y = y_i]$. Similarly, the second term on the right-hand side becomes -$$ -\operatorname{Var}(\operatorname{E}[X \mid Y]) = \sum_i p_i \mu_i^2 - \left(\sum_i p_i \mu_i\right)^2 = \sum_i p_i \mu_i^2 - \mu^2, -$$ - -where $\mu_i = \operatorname{E}[X \mid Y = y_i]$ and $\mu = \sum_i p_i \mu_i$. Thus the total variance is given by -$$ -\operatorname{Var}[X] = \sum_i p_i \sigma^2_i + \left( \sum_i p_i \mu_i^2 - \mu^2 \right). -$$ - -A similar formula is applied in analysis of variance, where the corresponding formula is -$$ -\mathit{MS}_\text{total} = \mathit{MS}_\text{between} + \mathit{MS}_\text{within}; -$$ - -here $\mathit{MS}$ refers to the Mean of the Squares. In linear regression analysis the corresponding formula is -$$ -\mathit{MS}_\text{total} = \mathit{MS}_\text{regression} + \mathit{MS}_\text{residual}. -$$ - -This can also be derived from the additivity of variances, since the total (observed) score is the sum of the predicted score and the error score, where the latter two are uncorrelated. - -Similar decompositions are possible for the sum of squared deviations (sum of squares, $\mathit{SS}$): -$$ -\mathit{SS}_\text{total} = \mathit{SS}_\text{between} + \mathit{SS}_\text{within}, -$$ -$$ -\mathit{SS}_\text{total} = \mathit{SS}_\text{regression} + \mathit{SS}_\text{residual}.
-$$ - -The population variance for a non-negative random variable can be expressed in terms of the cumulative distribution function F using -$$ -2\int_0^\infty u(1 - F(u))du - \left(\int_0^\infty (1 - F(u))du\right)^2. -$$ - -This expression can be used to calculate the variance in situations where the CDF, but not the density, can be conveniently expressed. - -The second moment of a random variable attains the minimum value when taken around the first moment (i.e., mean) of the random variable, i.e. $\mathrm{argmin}_m\mathrm{E}\left(\left(X - m\right)^2\right) = \mathrm{E}(X)$. Conversely, if a continuous function $\varphi$ satisfies $\mathrm{argmin}_m\mathrm{E}(\varphi(X - m)) = \mathrm{E}(X)$ for all random variables X, then it is necessarily of the form $\varphi(x) = a x^2 + b$, where a > 0. This also holds in the multidimensional case. - -Unlike the expected absolute deviation, the variance of a variable has units that are the square of the units of the variable itself. For example, a variable measured in meters will have a variance measured in meters squared. For this reason, describing data sets via their standard deviation or root mean square deviation is often preferred over using the variance. In the dice example the standard deviation is sqrt 2.9 ≈ 1.7, slightly larger than the expected absolute deviation of 1.5. - -The standard deviation and the expected absolute deviation can both be used as an indicator of the "spread" of a distribution. The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation, and, together with variance and its generalization covariance, is used frequently in theoretical statistics; however the expected absolute deviation tends to be more robust as it is less sensitive to outliers arising from measurement anomalies or an unduly heavy-tailed distribution. - -The delta method uses second-order Taylor expansions to approximate the variance of a function of one or more random variables: see Taylor expansions for the moments of functions of random variables. For example, the approximate variance of a function of one variable is given by -$$ -\operatorname{Var}\left[f(X)\right] \approx \left(f'(\operatorname{E}\left[X\right])\right)^2\operatorname{Var}\left[X\right] -$$ - -provided that f is twice differentiable and that the mean and variance of X are finite. - -Real-world observations such as the measurements of yesterday's rain throughout the day typically cannot be complete sets of all possible observations that could be made. As such, the variance calculated from the finite set will in general not match the variance that would have been calculated from the full population of possible observations. This means that one estimates the mean and variance from a limited set of observations by using an estimator equation. The estimator is a function of the sample of n observations drawn without observational bias from the whole population of potential observations. In this example that sample would be the set of actual measurements of yesterday's rainfall from available rain gauges within the geography of interest. - -The simplest estimators for population mean and population variance are simply the mean and variance of the sample, the sample mean and (uncorrected) sample variance – these are consistent estimators (they converge to the correct value as the number of samples increases), but can be improved. 
Estimating the population variance by taking the sample's variance is close to optimal in general, but can be improved in two ways. Most simply, the sample variance is computed as an average of squared deviations about the (sample) mean, by dividing by n. However, using values other than n improves the estimator in various ways. Four common values for the denominator are n, n − 1, n + 1, and n − 1.5: n is the simplest (the population variance of the sample), n − 1 eliminates bias, n + 1 minimizes mean squared error for the normal distribution, and n − 1.5 mostly eliminates bias in the estimation of the standard deviation for the normal distribution. - -Firstly, if the true population mean is unknown, then the sample variance (which uses the sample mean in place of the true mean) is a biased estimator: it underestimates the variance by a factor of (n − 1) / n; correcting by this factor (dividing by n − 1 instead of n) is called Bessel's correction. The resulting estimator is unbiased, and is called the (corrected) sample variance or unbiased sample variance. For example, when n = 1 the variance of a single observation about the sample mean (itself) is obviously zero regardless of the population variance. If the mean is determined in some other way than from the same samples used to estimate the variance, then this bias does not arise and the variance can safely be estimated as that of the samples about the (independently known) mean. - -Secondly, the sample variance does not generally minimize mean squared error between sample variance and population variance. Correcting for bias often makes this worse: one can always choose a scale factor that performs better than the corrected sample variance, though the optimal scale factor depends on the excess kurtosis of the population (see mean squared error: variance), and introduces bias. This always consists of scaling down the unbiased estimator (dividing by a number larger than n − 1), and is a simple example of a shrinkage estimator: one "shrinks" the unbiased estimator towards zero. For the normal distribution, dividing by n + 1 (instead of n − 1 or n) minimizes mean squared error. The resulting estimator is biased, however, and is known as the biased sample variance. - -In general, the population variance of a finite population of size N with values xi is given by - -\begin{align} - -\sigma^2 &= \frac{1}{N} \sum_{i=1}^N \left(x_i - \mu\right)^2 = \frac{1}{N} \sum_{i=1}^N \left(x_i^2 - 2\mu x_i + \mu^2 \right) \\[5pt] - -&= \left(\frac 1N \sum_{i=1}^N x_i^2\right) - 2\mu \left(\frac{1}{N} \sum_{i=1}^N x_i\right) + \mu^2 \\[5pt] - -&= \left(\frac{1}{N} \sum_{i=1}^N x_i^2\right) - \mu^2 - -\end{align} - -where the population mean is -$$ -\mu = \frac 1N \sum_{i=1}^N x_i. -$$ - -The population variance can also be computed, without direct reference to the mean, using -$$ -\sigma^2 = \frac{1}{N^2}\sum_{i<j} \left(x_i - x_j\right)^2. -$$ - -This is because - -\begin{align} - -&\frac{1}{2N^2} \sum_{i, j=1}^N\left( x_i - x_j \right)^2 \\[5pt] - -={} &\frac{1}{2N^2} \sum_{i, j=1}^N\left( x_i^2 - 2x_i x_j + x_j^2 \right) \\[5pt] - -={} &\frac{1}{2N} \sum_{j=1}^N\left(\frac{1}{N} \sum_{i=1}^N x_i^2\right) - \left(\frac{1}{N} \sum_{i=1}^N x_i\right)\left(\frac{1}{N} \sum_{j=1}^N x_j\right) + \frac{1}{2N} \sum_{i=1}^N\left(\frac{1}{N} \sum_{j=1}^N x_j^2\right) \\[5pt] - -={} &\frac{1}{2} \left( \sigma^2 + \mu^2 \right) - \mu^2 + \frac{1}{2} \left( \sigma^2 + \mu^2 \right) \\[5pt] - -={} &\sigma^2 - -\end{align} - -The population variance matches the variance of the generating probability distribution. In this sense, the concept of population can be extended to continuous random variables with infinite populations.
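The pairwise form of the population variance just derived is easy to verify numerically; the data set below (borrowed from the figure caption later in this article) is used purely for illustration:

```python
import numpy as np

# Check: sigma^2 = (1/N^2) * sum_{i<j} (x_i - x_j)^2.
x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
N = len(x)
pairwise = sum((x[i] - x[j]) ** 2
               for i in range(N) for j in range(i + 1, N)) / N**2
print(pairwise)   # 4.0
print(x.var())    # 4.0  (np.var divides by N, i.e. the population variance)
```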
In many practical situations, the true variance of a population is not known a priori and must be computed somehow. When dealing with extremely large populations, it is not possible to count every object in the population, so the computation must be performed on a sample of the population. Sample variance can also be applied to the estimation of the variance of a continuous distribution from a sample of that distribution. - -We take a sample with replacement of n values Y1, ..., Yn from the population, where n < N, and estimate the variance on the basis of this sample. Directly taking the variance of the sample data gives the average of the squared deviations: -$$ -\sigma_Y^2 = \frac{1}{n} \sum_{i=1}^n \left(Y_i - \overline{Y}\right)^2 = \left(\frac 1n \sum_{i=1}^n Y_i^2\right) - \overline{Y}^2 = \frac{1}{n^2} \sum_{i<j} \left(Y_i - Y_j\right)^2. -$$ - -Here, $\overline{Y}$ denotes the sample mean: -$$ -\overline{Y} = \frac{1}{n} \sum_{i=1}^n Y_i . -$$ - -Since the Yi are selected randomly, both $\overline{Y}$ and $\sigma_Y^2$ are random variables. Their expected values can be evaluated by averaging over the ensemble of all possible samples {Yi} of size n from the population. For $\sigma_Y^2$ this gives: - -\begin{align} - -\operatorname{E}[\sigma_Y^2] - -&= \operatorname{E}\left[ \frac{1}{n} \sum_{i=1}^n \left(Y_i - \frac{1}{n} \sum_{j=1}^n Y_j \right)^2 \right] \\[5pt] - -&= \frac 1n \sum_{i=1}^n \operatorname{E}\left[ Y_i^2 - \frac{2}{n} Y_i \sum_{j=1}^n Y_j + \frac{1}{n^2} \sum_{j=1}^n Y_j \sum_{k=1}^n Y_k \right] \\[5pt] - -&= \frac 1n \sum_{i=1}^n \left( \frac{n - 2}{n} \operatorname{E}\left[Y_i^2\right] - \frac{2}{n} \sum_{j \neq i} \operatorname{E}\left[Y_i Y_j\right] + \frac{1}{n^2} \sum_{j=1}^n \sum_{k \neq j}^n \operatorname{E}\left[Y_j Y_k\right] +\frac{1}{n^2} \sum_{j=1}^n \operatorname{E}\left[Y_j^2\right] \right) \\[5pt] - -&= \frac 1n \sum_{i=1}^n \left[ \frac{n - 2}{n} \left(\sigma^2 + \mu^2\right) - \frac{2}{n} (n - 1)\mu^2 + \frac{1}{n^2} n(n - 1)\mu^2 + \frac{1}{n} \left(\sigma^2 + \mu^2\right) \right] \\[5pt] - -&= \frac{n - 1}{n} \sigma^2. - -\end{align} - -Hence $\sigma_Y^2$ gives an estimate of the population variance that is biased by a factor of $\frac{n - 1}{n}$. For this reason, $\sigma_Y^2$ is referred to as the biased sample variance. - -Correcting for this bias yields the unbiased sample variance, denoted $s^2$: -$$ -s^2 = \frac{n}{n - 1} \sigma_Y^2 = \frac{n}{n - 1} \left[ \frac{1}{n} \sum_{i=1}^n \left(Y_i - \overline{Y}\right)^2 \right] = \frac{1}{n - 1} \sum_{i=1}^n \left(Y_i - \overline{Y} \right)^2 -$$ - -Either estimator may be simply referred to as the sample variance when the version can be determined by context. The same proof is also applicable for samples taken from a continuous probability distribution. - -The use of the term n − 1 is called Bessel's correction, and it is also used in sample covariance and the sample standard deviation (the square root of variance). The square root is a concave function and thus introduces negative bias (by Jensen's inequality), which depends on the distribution, and thus the corrected sample standard deviation (using Bessel's correction) is biased. The unbiased estimation of standard deviation is a technically involved problem, though for the normal distribution using the term n − 1.5 yields an almost unbiased estimator.
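The bias factor $(n-1)/n$ and its removal by Bessel's correction can be seen directly in a simulation; the sample size, variance, and seed below are assumptions made for the sketch:

```python
import numpy as np

# E[biased sample variance] = (n-1)/n * sigma^2; ddof=1 applies Bessel's correction.
rng = np.random.default_rng(2)
n, sigma2, trials = 5, 9.0, 200_000
samples = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))
print(samples.var(axis=1).mean())          # ~7.2 = (n-1)/n * sigma2
print(samples.var(axis=1, ddof=1).mean())  # ~9.0 = sigma2 (unbiased)
```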
- -The unbiased sample variance is a U-statistic for the function ƒ(y1, y2) = (y1 − y2)2/2, meaning that it is obtained by averaging a 2-sample statistic over 2-element subsets of the population. - -Being a function of random variables, the sample variance is itself a random variable, and it is natural to study its distribution. In the case that Yi are independent observations from a normal distribution, Cochran's theorem shows that s2 follows a scaled chi-squared distribution: - - - -(n - 1)\frac{s^2}{\sigma^2}\sim\chi^2_{n-1}. - - - -As a direct consequence, it follows that - - - -\operatorname{E}\left(s^2\right) = \operatorname{E}\left(\frac{\sigma^2}{n - 1} \chi^2_{n-1}\right) = \sigma^2 , - - - -and - - - -\operatorname{Var}\left[s^2\right] = \operatorname{Var}\left(\frac{\sigma^2}{n - 1} \chi^2_{n-1}\right) = \frac{\sigma^4}{(n - 1)^2}\operatorname{Var}\left(\chi^2_{n-1}\right) = \frac{2\sigma^4}{n - 1}. - - - -If the Yi are independent and identically distributed, but not necessarily normally distributed, then - - - -\operatorname{E}\left[s^2\right] = \sigma^2, \quad - -\operatorname{Var}\left[s^2\right] = \frac{\sigma^4}{n} \left(\kappa - 1 + \frac{2}{n - 1} \right) = \frac{1}{n} \left(\mu_4 - \frac{n - 3}{n - 1}\sigma^4\right), - - - -where κ is the kurtosis of the distribution and μ4 is the fourth central moment. - -If the conditions of the law of large numbers hold for the squared observations, s2 is a consistent estimator of σ2. One can see indeed that the variance of the estimator tends asymptotically to zero. An asymptotically equivalent formula was given in Kenney and Keeping (1951:164), Rose and Smith (2002:264), and Weisstein (n.d.). - -Samuelson's inequality is a result that states bounds on the values that individual observations in a sample can take, given that the sample mean and (biased) variance have been calculated. Values must lie within the limits $\bar y \pm \sigma_Y (n-1)^{1/2}.$ - -It has been shown that for a sample {yi} of positive real numbers, -$$ - \sigma_y^2 \le 2y_{\max} (A - H), -$$ - -where ymax is the maximum of the sample, A is the arithmetic mean, H is the harmonic mean of the sample and $\sigma_y^2$ is the (biased) variance of the sample. - -This bound has been improved, and it is known that variance is bounded by -$$ - \sigma_y^2 \le \frac{y_{\max} (A - H)(y_\max - A)}{y_\max - H}, -$$ -$$ - \sigma_y^2 \ge \frac{y_{\min} (A - H)(A - y_\min)}{H - y_\min}, -$$ - -where ymin is the minimum of the sample. - -Testing for the equality of two or more variances is difficult. The F test and chi square tests are both adversely affected by non-normality and are not recommended for this purpose. - -Several non parametric tests have been proposed: these include the Barton–David–Ansari–Freund–Siegel–Tukey test, the Capon test, Mood test, the Klotz test and the Sukhatme test. The Sukhatme test applies to two variances and requires that both medians be known and equal to zero. The Mood, Klotz, Capon and Barton–David–Ansari–Freund–Siegel–Tukey tests also apply to two variances. They allow the median to be unknown but do require that the two medians are equal. - -The Lehmann test is a parametric test of two variances. Of this test there are several variants known. Other tests of the equality of variances include the Box test, the Box–Anderson test and the Moses test. - -Resampling methods, which include the bootstrap and the jackknife, may be used to test the equality of variances. 
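As a rough illustration of the resampling approach mentioned above, the following sketch uses a permutation test (a close cousin of the bootstrap) on the log ratio of sample variances; the data, seed, test statistic, and replication count are all assumptions made for the example, not a prescription from the article:

```python
import numpy as np

# Permutation test of H0: equal variances, using |log(s1^2 / s2^2)|.
rng = np.random.default_rng(3)
a = rng.normal(0.0, 1.0, 40)
b = rng.normal(0.0, 1.5, 40)

def stat(x, y):
    return abs(np.log(x.var(ddof=1) / y.var(ddof=1)))

observed = stat(a, b)
pooled = np.concatenate([a, b])
count = 0
for _ in range(10_000):
    rng.shuffle(pooled)                    # reassign group labels at random
    if stat(pooled[:40], pooled[40:]) >= observed:
        count += 1
print(count / 10_000)   # approximate p-value under the null of equal variances
```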
- -The term variance was first introduced by Ronald Fisher in his 1918 paper The Correlation Between Relatives on the Supposition of Mendelian Inheritance: - -
    The great body of available statistics show us that the deviations of a human measurement from its mean follow very closely the Normal Law of Errors, and, therefore, that the variability may be uniformly measured by the standard deviation corresponding to the square root of the mean square error. When there are two independent causes of variability capable of producing in an otherwise uniform population distributions with standard deviations $\sigma_1$ and $\sigma_2$, it is found that the distribution, when both causes act together, has a standard deviation $\sqrt{\sigma_1^2 + \sigma_2^2}$. It is therefore desirable in analysing the causes of variability to deal with the square of the standard deviation as the measure of variability. We shall term this quantity the Variance...
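Fisher's observation that independent causes of variability combine in quadrature is simple to confirm numerically; the two standard deviations and the seed below are arbitrary choices for the illustration:

```python
import numpy as np

# Two independent causes with sds s1 and s2 acting together give sd
# sqrt(s1^2 + s2^2): variances add, standard deviations do not.
rng = np.random.default_rng(4)
s1, s2 = 3.0, 4.0
combined = rng.normal(0.0, s1, 1_000_000) + rng.normal(0.0, s2, 1_000_000)
print(combined.std())           # ~5.0
print(np.sqrt(s1**2 + s2**2))   # 5.0 exactly
```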
- -[[File:variance_visualisation.svg|thumb|Geometric visualisation of the variance of an arbitrary distribution (2, 4, 4, 4, 5, 5, 7, 9).]] - -The variance of a probability distribution is analogous to the moment of inertia in classical mechanics of a corresponding mass distribution along a line, with respect to rotation about its center of mass. It is because of this analogy that such things as the variance are called moments of probability distributions. The covariance matrix is related to the moment of inertia tensor for multivariate distributions. The moment of inertia of a cloud of n points with a covariance matrix of $\Sigma$ is given by -$$ -I = n\left(\mathbf{1}_{3\times 3} \operatorname{tr}(\Sigma) - \Sigma\right). -$$ - -This difference between moment of inertia in physics and in statistics is clear for points that are gathered along a line. Suppose many points are close to the x axis and distributed along it. The covariance matrix might look like -$$ -\Sigma = \begin{bmatrix}10 & 0 & 0 \\ 0 & 0.1 & 0 \\ 0 & 0 & 0.1\end{bmatrix}. -$$ - -That is, there is the most variance in the x direction. Physicists would consider this to have a low moment about the x axis, so the moment-of-inertia tensor is -$$ -I = n\begin{bmatrix}0.2 & 0 & 0 \\ 0 & 10.1 & 0 \\ 0 & 0 & 10.1\end{bmatrix}. -$$ - -The semivariance is calculated in the same manner as the variance, but only those observations that fall below the mean are included in the calculation: -$$ -\text{Semivariance} = \frac{1}{n}\sum_{i:x_{i} < \mu}(x_{i}-\mu)^{2}. -$$ - -It is also used as a specific measure in different fields of application. For skewed distributions, the semivariance can provide additional information that a variance does not. - -For inequalities associated with the semivariance, see . - -If $x$ is a scalar complex-valued random variable, with values in $\mathbb{C},$ then its variance is $\operatorname{E}\left[(x - \mu)(x - \mu)^*\right],$ where $x^*$ is the complex conjugate of $x.$ This variance is a real scalar. - -If $X$ is a vector-valued random variable, with values in $\mathbb{R}^n,$ and thought of as a column vector, then a natural generalization of variance is $\operatorname{E}\left[(X - \mu)(X - \mu)^{\operatorname{T}}\right],$ where $\mu = \operatorname{E}(X)$ and $X^{\operatorname{T}}$ is the transpose of $X,$ and so is a row vector. The result is a positive semi-definite square matrix, commonly referred to as the variance-covariance matrix (or simply as the covariance matrix). - -If $X$ is a vector- and complex-valued random variable, with values in $\mathbb{C}^n,$ then the covariance matrix is $\operatorname{E}\left[(X - \mu)(X - \mu)^\dagger\right],$ where $X^\dagger$ is the conjugate transpose of $X.$ This matrix is also positive semi-definite and square. - -Another generalization of variance for vector-valued random variables $X$, which results in a scalar value rather than in a matrix, is the generalized variance $\det(C)$, the determinant of the covariance matrix. The generalized variance can be shown to be related to the multidimensional scatter of points around their mean. - -A different generalization is obtained by considering the Euclidean distance between the random variable and its mean. This results in $\operatorname{E}\left[(X - \mu)^{\operatorname{T}}(X - \mu)\right] = \operatorname{tr}(C),$ which is the trace of the covariance matrix.
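The moment-of-inertia tensor and the trace generalization above can both be checked with the article's own example covariance matrix (the sample size and seed in the simulation are assumptions):

```python
import numpy as np

# Moment-of-inertia tensor I = n(tr(Sigma)*Id - Sigma) for the example Sigma,
# and the trace generalization E[(X-mu)^T (X-mu)] = tr(Sigma).
Sigma = np.diag([10.0, 0.1, 0.1])
n = 1
I = n * (np.trace(Sigma) * np.eye(3) - Sigma)
print(np.diag(I))   # [ 0.2 10.1 10.1], matching the tensor shown above

rng = np.random.default_rng(5)
x = rng.multivariate_normal(np.zeros(3), Sigma, size=200_000)
print((x ** 2).sum(axis=1).mean(), np.trace(Sigma))   # both ~10.2
```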
diff --git a/wiki/wikipedia/4159.txt b/wiki/wikipedia/4159.txt deleted file mode 100644 index 523c30c3595a2e1a7cf0ff6a58a018b441bf5935..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4159.txt +++ /dev/null @@ -1,25 +0,0 @@ -In graph theory, a minimum cut or min-cut of a graph is a cut (a partition of the vertices of a graph into two disjoint subsets) that is minimal in some metric. - -Variations of the minimum cut problem consider weighted graphs, directed graphs, terminals, and partitioning the vertices into more than two sets. - -The weighted min-cut problem allowing both positive and negative weights can be trivially transformed into a weighted maximum cut problem by flipping the sign in all weights. - -The minimum cut problem in undirected, weighted graphs limited to non-negative weights can be solved in polynomial time by the Stoer–Wagner algorithm. In the special case when the graph is unweighted, Karger's algorithm provides an efficient randomized method for finding the cut (a minimal implementation is sketched below). In this case, the minimum cut equals the edge connectivity of the graph. - -A generalization of the minimum cut problem without terminals is the minimum k-cut, in which the goal is to partition the graph into at least k connected components by removing as few edges as possible. For a fixed value of k, this problem can be solved in polynomial time, though the algorithm is not practical for large k. - -When two terminal nodes are given, they are typically referred to as the source and the sink. In a directed, weighted flow network, the minimum cut separates the source and sink vertices and minimizes the total weight on the edges that are directed from the source side of the cut to the sink side of the cut. As shown in the max-flow min-cut theorem, the weight of this cut equals the maximum amount of flow that can be sent from the source to the sink in the given network. - -In a weighted, undirected network, it is possible to calculate the cut that separates a particular pair of vertices from each other and has minimum possible weight. A system of cuts that solves this problem for every possible vertex pair can be collected into a structure known as the Gomory–Hu tree of the graph. - -A generalization of the minimum cut problem with terminals is the k-terminal cut, or multi-terminal cut. This problem is NP-hard, even for $k=3$. - -Graph partition problems are a family of combinatorial optimization problems in which a graph is to be partitioned into two or more parts with additional constraints such as balancing the sizes of the two sides of the cut. Segmentation-based object categorization can be viewed as a specific case of normalized min-cut spectral clustering applied to image segmentation. - -By the max-flow min-cut theorem, the value of a minimum cut separating two given nodes equals the value of a maximum flow between them, so algorithms for the maximum-flow problem can also be used to compute such a cut. - -A graph with $n$ vertices can have at most $ \binom{n}{2} = \frac{n (n-1)}{2} $ distinct minimum cuts. - -This bound is tight in the sense that a (simple) cycle on $n$ vertices has exactly $\frac{n (n-1)}{2}$ minimum cuts.
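A compact sketch of Karger's randomized contraction algorithm for unweighted graphs follows; the edge-list representation, trial count, and the union-find shortcut are implementation choices made for this illustration, not a canonical implementation:

```python
import random

# Karger's algorithm: repeatedly contract random edges until two
# super-vertices remain; the crossing edges form a candidate cut.
def karger_min_cut(edges, n_vertices, trials=200, seed=0):
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(trials):
        parent = list(range(n_vertices))        # union-find over super-vertices
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]   # path halving
                v = parent[v]
            return v
        remaining = n_vertices
        while remaining > 2:
            u, v = edges[rng.randrange(len(edges))]  # pick a random edge
            ru, rv = find(u), find(v)
            if ru != rv:                        # contract it (skip self-loops)
                parent[ru] = rv
                remaining -= 1
        cut = sum(1 for u, v in edges if find(u) != find(v))
        best = min(best, cut)
    return best

# A 4-cycle has minimum cut 2 (and n(n-1)/2 = 6 distinct minimum cuts).
print(karger_min_cut([(0, 1), (1, 2), (2, 3), (3, 0)], 4))  # 2
```

Each trial finds the true minimum cut only with some probability, which is why the algorithm is repeated over many independent trials and the smallest cut found is kept.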
diff --git a/wiki/wikipedia/416.txt b/wiki/wikipedia/416.txt deleted file mode 100644 index bc994f7c0ee0c40d0f239239c0945d8d8e7c71ec..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/416.txt +++ /dev/null @@ -1,58 +0,0 @@ -In number theory, the Fermat–Catalan conjecture is a generalization of Fermat's Last Theorem and of Catalan's conjecture, hence the name. The conjecture states that the equation - -{{NumBlk|::|$a^m + b^n = c^k$|(1)}} - -has only finitely many solutions (a,b,c,m,n,k) with distinct triplets of values $(a^m, b^n, c^k)$, where a, b, c are positive coprime integers and m, n, k are positive integers satisfying - -{{NumBlk|::|$\frac{1}{m}+\frac{1}{n}+\frac{1}{k}<1.$|(2)}} - -The inequality on m, n, and k is a necessary part of the conjecture. Without the inequality there would be infinitely many solutions, for instance with k = 1 (for any a, b, m, and n and with $c = a^m + b^n$) or with m, n, and k all equal to two (for the infinitely many known Pythagorean triples). - -As of 2015, the following ten solutions to equation (1) which meet the criteria of equation (2) are known: -$$ -1^m+2^3=3^2 -$$ (for $m>6$ to satisfy Eq. 2) -$$ -2^5+7^2=3^4 -$$ -$$ -7^3+13^2=2^9 -$$ -$$ -2^7+17^3=71^2 -$$ -$$ -3^5+11^4=122^2 -$$ -$$ -33^8+1549034^2=15613^3 -$$ -$$ -1414^3+2213459^2=65^7 -$$ -$$ -9262^3+15312283^2=113^7 -$$ -$$ -17^7+76271^3=21063928^2 -$$ -$$ -43^8+96222^3=30042907^2 -$$ - -The first of these ($1^m + 2^3 = 3^2$) is the only solution where one of a, b or c is 1, according to the Catalan conjecture, proven in 2002 by Preda Mihăilescu. While this case leads to infinitely many solutions of (1) (since one can pick any m for m > 6), these solutions only give a single triplet of values $(a^m, b^n, c^k)$. - -It is known by the Darmon–Granville theorem, which uses Faltings's theorem, that for any fixed choice of positive integers m, n and k satisfying (2), only finitely many coprime triples (a, b, c) solving (1) exist. However, the full Fermat–Catalan conjecture is stronger as it allows for the exponents m, n and k to vary. - -The abc conjecture implies the Fermat–Catalan conjecture. - -For a list of results for impossible combinations of exponents, see Beal conjecture#Partial results. Beal's conjecture is true if and only if all Fermat–Catalan solutions have m = 2, n = 2, or k = 2. - -The Fermat–Catalan–LPS conjecture is a generalization of the Fermat–Catalan conjecture and of the Lander, Parkin, and Selfridge conjecture. It states that the equation -$$ -\sum_{i=1}^m {a_i}^{b_i} = \sum_{j=1}^n {c_j}^{d_j} -$$ - -has only finitely many solutions with m, n, ai, bi, cj, dj all positive integers such that $\gcd(a_1,a_2,\dots,a_m,c_1,c_2,\dots,c_n) = 1$, $\sum A \neq \sum C$ for every nonempty proper subset $A$ of $\{a_i^{b_i}, i = 1, 2, \ldots, m\}$ and every nonempty proper subset $C$ of $\{c_j^{d_j}, j = 1, 2, \ldots, n\}$, and $\sum_{i=1}^m \frac{1}{b_i} + \sum_{j=1}^n \frac{1}{d_j} < 1.$ - -The Fermat–Catalan conjecture is the statement that this equation has only finitely many solutions with $m+n=3$. - -The Lander, Parkin, and Selfridge conjecture is that this equation has no solution with $b_1=b_2=\dots=b_m=d_1=d_2=\dots=d_n$. diff --git a/wiki/wikipedia/4160.txt b/wiki/wikipedia/4160.txt deleted file mode 100644 index 1f67cd4330b4719978f7334ba9eeaf7b7087710d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4160.txt +++ /dev/null @@ -1,273 +0,0 @@ -The Cauchy–Schwarz inequality (also called Cauchy–Bunyakovsky–Schwarz inequality) is considered one of the most important and widely used inequalities in mathematics. - -The inequality for sums was published by Augustin-Louis Cauchy in 1821. The corresponding inequality for integrals was first published by Viktor Bunyakovsky in 1859 and rediscovered by Hermann Schwarz in 1888. For vectors $\mathbf{u}$ and $\mathbf{v}$ of an inner product space, the inequality states that - -{{NumBlk|:|$|\langle \mathbf{u}, \mathbf{v} \rangle| \leq \|\mathbf{u}\| \|\mathbf{v}\|.$|}} - -Moreover, the two sides are equal if and only if $\mathbf{u}$ and $\mathbf{v}$ are linearly dependent.
- -Sedrakyan's inequality, also called Engel's form, the T2 lemma, or Titu's lemma, states that for positive reals: - -\frac{\left(\sum_{i=1}^n u_i\right)^2 }{\sum_{i=1}^n v_i} \leq \sum_{i=1}^n \frac{u_i^2}{v_i} \quad \text{ or equivalently, } \quad \frac{u^2_1}{v_1} + \frac{u^2_2}{v_2} + \cdots + \frac{u^2_n}{v_n} \geq \frac{\left(u_1 + u_2 + \cdots + u_n\right)^2}{v_1 + v_2 + \cdots + v_n}. - -It is a direct consequence of the Cauchy–Schwarz inequality, obtained by using the dot product on $\R^n$ upon substituting $u_i' = \frac{u_i}{\sqrt{v_i}} \text{ and } v_i' = \sqrt{v_i}.$ This form is especially helpful when the inequality involves fractions where the numerator is a perfect square. - -The real vector space $\R^2$ denotes the 2-dimensional plane. It is also the 2-dimensional Euclidean space where the inner product is the dot product. - -If $\mathbf{v} = \left(v_1, v_2\right)$ and $\mathbf{u} = \left(u_1, u_2\right)$ then the Cauchy–Schwarz inequality becomes: - -\langle \mathbf{u}, \mathbf{v} \rangle^2 = (\|\mathbf{u}\| \|\mathbf{v}\| \cos \theta)^2 \leq \|\mathbf{u}\|^2 \|\mathbf{v}\|^2, - -where $\theta$ is the angle between $u$ and $v.$ - -The form above is perhaps the easiest in which to understand the inequality, since the square of the cosine can be at most 1, which occurs when the vectors are in the same or opposite directions. It can also be restated in terms of the vector coordinates $v_1, v_2, u_1, \text{ and } u_2$ as - -\left(u_1 v_1 + u_2 v_2\right)^2 \leq \left(u_1^2 + u_2^2\right) \left(v_1^2 + v_2^2\right), - -where equality holds if and only if the vector $\left(u_1, u_2\right)$ is in the same or opposite direction as the vector $\left(v_1, v_2\right),$ or if one of them is the zero vector. - -In Euclidean space $\R^n$ with the standard inner product, which is the dot product, the Cauchy–Schwarz inequality becomes: - -\left(\sum_{i=1}^n u_i v_i\right)^2 \leq \left(\sum_{i=1}^n u_i^2\right) \left(\sum_{i=1}^n v_i^2\right). - -The Cauchy–Schwarz inequality can be proved using only ideas from elementary algebra in this case. - -Consider the following quadratic polynomial in $x$ - -0 \leq \left(u_1 x + v_1\right)^2 + \cdots + \left(u_n x + v_n\right)^2 = \left(\sum_i u_i^2\right) x^2 + 2 \left(\sum_i u_i v_i\right) x + \sum_i v_i^2. - -Since it is nonnegative, it has at most one real root for $x,$ hence its discriminant is less than or equal to zero. That is, - -\left(\sum_i u_i v_i\right)^2 - \left(\sum_i {u_i^2}\right) \left(\sum_i {v_i^2}\right) \leq 0, - -which yields the Cauchy–Schwarz inequality. - -If $\mathbf{u}, \mathbf{v} \in \Complex^n$ with $\mathbf{u} = \left(u_1, \ldots, u_n\right)$ and $\mathbf{v} = \left(v_1, \ldots, v_n\right)$ (where $u_1, \ldots, u_n \in \Complex$ and $v_1, \ldots, v_n \in \Complex$) and if the inner product on the vector space $\Complex^n$ is the canonical complex inner product (defined by $\langle \mathbf{u}, \mathbf{v} \rangle := u_1 \overline{v_1} + \cdots + u_{n} \overline{v_n},$ where the bar notation is used for complex conjugation), then the inequality may be restated more explicitly as follows: - -|\langle \mathbf{u}, \mathbf{v} \rangle|^2 - -= \left|\sum_{k=1}^n u_k\bar{v}_k\right|^2 - -\leq \langle \mathbf{u}, \mathbf{u} \rangle \langle \mathbf{v}, \mathbf{v} \rangle - -= \left(\sum_{k=1}^n u_k \bar{u}_k\right) \left(\sum_{k=1}^n v_k \bar{v}_k\right) - -= \sum_{j=1}^n \left|u_j\right|^2 \sum_{k=1}^n \left|v_k\right|^2. 
- -That is, - -\left|u_1 \bar{v}_1 + \cdots + u_n \bar{v}_n\right|^2 \leq \left(\left|u_1\right|^2 + \cdots + \left|u_n\right|^2\right) \left(\left|v_1\right|^2 + \cdots + \left|v_n\right|^2\right). - -For the inner product space of square-integrable complex-valued functions, the following inequality holds: - -\left|\int_{\R^n} f(x) \overline{g(x)}dx\right|^2 \leq \int_{\R^n} |f(x)|^2dx \int_{\R^n} |g(x)|^2 dx. - -The Hölder inequality is a generalization of this. - -In any inner product space, the triangle inequality is a consequence of the Cauchy–Schwarz inequality, as is now shown: - -\begin{alignat}{4} - -\|\mathbf{u} + \mathbf{v}\|^2 - -&= \langle \mathbf{u} + \mathbf{v}, \mathbf{u} + \mathbf{v} \rangle && \\ - -&= \|\mathbf{u}\|^2 + \langle \mathbf{u}, \mathbf{v} \rangle + \langle \mathbf{v}, \mathbf{u} \rangle + \|\mathbf{v}\|^2 ~ && ~ \text{ where } \langle \mathbf{v}, \mathbf{u} \rangle = \overline{\langle \mathbf{u}, \mathbf{v} \rangle} \\ - -&= \|\mathbf{u}\|^2 + 2 \operatorname{Re} \langle \mathbf{u}, \mathbf{v} \rangle + \|\mathbf{v}\|^2 && \\ - -&\leq \|\mathbf{u}\|^2 + 2|\langle \mathbf{u}, \mathbf{v} \rangle| + \|\mathbf{v}\|^2 && \\ - -&\leq \|\mathbf{u}\|^2 + 2\|\mathbf{u}\|\|\mathbf{v}\| + \|\mathbf{v}\|^2 && \\ - -&= (\|\mathbf{u}\| + \|\mathbf{v}\|)^2. && - -\end{alignat} - -Taking square roots gives the triangle inequality: - -\|\mathbf{u} + \mathbf{v}\| \leq \|\mathbf{u}\| + \|\mathbf{v}\|. - -The Cauchy–Schwarz inequality is used to prove that the inner product is a continuous function with respect to the topology induced by the inner product itself. - -The Cauchy–Schwarz inequality allows one to extend the notion of "angle between two vectors" to any real inner-product space by defining: - -\cos\theta_{\mathbf{u} \mathbf{v}} = \frac{\langle \mathbf{u}, \mathbf{v} \rangle}{\|\mathbf{u}\| \|\mathbf{v}\|}. - -The Cauchy–Schwarz inequality proves that this definition is sensible, by showing that the right-hand side lies in the interval [-1, 1] and justifies the notion that (real) Hilbert spaces are simply generalizations of the Euclidean space. It can also be used to define an angle in complex inner-product spaces, by taking the absolute value or the real part of the right-hand side, as is done when extracting a metric from quantum fidelity. - -Let $X$ and $Y$ be random variables. Then the covariance inequality is given by - -\operatorname{Var}(Y) \geq \frac{\operatorname{Cov}(Y, X)^2}{\operatorname{Var}(X)}. - -After defining an inner product on the set of random variables using the expectation of their product, - -\langle X, Y \rangle := \operatorname{E}(X Y), - -the Cauchy–Schwarz inequality becomes - -|\operatorname{E}(XY)|^2 \leq \operatorname{E}(X^2) \operatorname{E}(Y^2). - -To prove the covariance inequality using the Cauchy–Schwarz inequality, let $\mu = \operatorname{E}(X)$ and $\nu = \operatorname{E}(Y),$ then - -\begin{align} - -|\operatorname{Cov}(X, Y)|^2 - -&= |\operatorname{E}((X - \mu)(Y - \nu))|^2 \\ - -&= |\langle X - \mu, Y - \nu \rangle |^2\\ - -&\leq \langle X - \mu, X - \mu \rangle \langle Y - \nu, Y - \nu \rangle \\ - -& = \operatorname{E}\left((X - \mu)^2\right) \operatorname{E}\left((Y - \nu)^2\right) \\ - -& = \operatorname{Var}(X) \operatorname{Var}(Y), - -\end{align} - -where $\operatorname{Var}$ denotes variance and $\operatorname{Cov}$ denotes covariance. - -There are many different proofs of the Cauchy–Schwarz inequality other than those given below. For the equality case: if $|\langle \mathbf{u}, \mathbf{v} \rangle| = \|\mathbf{u}\| \|\mathbf{v}\|$ with $\mathbf{v} \neq \mathbf{0},$ then the expansion computed below shows that $\left\|\|\mathbf{v}\|^2 \mathbf{u} - \langle \mathbf{u}, \mathbf{v} \rangle \mathbf{v}\right\|^2 = \|\mathbf{v}\|^2 \left[\|\mathbf{u}\|^2\|\mathbf{v}\|^2 - |\langle \mathbf{u}, \mathbf{v} \rangle|^2\right] = 0,$ so that $\|\mathbf{v}\|^2 \mathbf{u} - \langle \mathbf{u}, \mathbf{v} \rangle \mathbf{v} = \mathbf{0};$
- -thus $\mathbf{u} = \frac{\langle \mathbf{u}, \mathbf{v} \rangle}{\|\mathbf{v}\|^2} \mathbf{v},$ which shows that $\mathbf{u}$ and $\mathbf{v}$ are linearly dependent. $\blacksquare$ - -Let $V = \|\mathbf{v}\|^2$ and $c = \langle \mathbf{u}, \mathbf{v} \rangle$ so that $\overline{c} c = |c|^2 = |\langle \mathbf{u}, \mathbf{v} \rangle|^2$ and $\overline{c} = \overline{\langle \mathbf{u}, \mathbf{v} \rangle} = \langle \mathbf{v}, \mathbf{u} \rangle.$ - -Then - -\begin{alignat}{4} - -\left\|\|\mathbf{v}\|^2 \mathbf{u} - \langle \mathbf{u}, \mathbf{v} \rangle \mathbf{v}\right\|^2 - -&= \|V \mathbf{u} - c \mathbf{v}\|^2 - -= \langle V \mathbf{u} - c \mathbf{v}, V \mathbf{u} - c \mathbf{v} \rangle && ~\text{ By definition of the norm } \\ - -&= \langle V \mathbf{u}, V \mathbf{u} \rangle - -- \langle V \mathbf{u}, c \mathbf{v} \rangle - -- \langle c \mathbf{v}, V \mathbf{u} \rangle - -+ \langle c \mathbf{v}, c \mathbf{v} \rangle && ~\text{ Expand } \\ - -&= V^2 \langle \mathbf{u}, \mathbf{u} \rangle - -- V \overline{c} \langle \mathbf{u}, \mathbf{v} \rangle - -- c V \langle \mathbf{v}, \mathbf{u} \rangle - -+ c \overline{c} \langle \mathbf{v}, \mathbf{v} \rangle && ~\text{ Pull out scalars (note that } V := \|\mathbf{v}\|^2 \text{ is real) } \\ - -&= V^2\|\mathbf{u}\|^2 - -~~- V \overline{c} c - -~~~~~~~~~- c V \overline{c} - -~~~~~~~~~+ c \overline{c} V && ~\text{ Use definitions of } c := \langle \mathbf{u}, \mathbf{v} \rangle \text{ and } V \\ - -&= V^2\|\mathbf{u}\|^2 ~~- V \overline{c} c - -~=~ V \left[V\|\mathbf{u}\|^2 - \overline{c} c\right] && ~\text{ Simplify } \\ - -&= \|\mathbf{v}\|^2 \left[\|\mathbf{u}\|^2\|\mathbf{v}\|^2 - \left|\langle \mathbf{u}, \mathbf{v} \rangle\right|^2\right] && ~\text{ Rewrite in terms of } \mathbf{u} \text{ and } \mathbf{v}. \blacksquare \\ - -\end{alignat} - - - -This expansion does not require $\mathbf{v}$ to be non-zero; however, $\mathbf{v}$ must be non-zero in order to divide both sides by $\|\mathbf{v}\|^2$ and to deduce the Cauchy-Schwarz inequality from it. - -Swapping $\mathbf{u}$ and $\mathbf{v}$ gives rise to: - -\left\|\|\mathbf{u}\|^2 \mathbf{v} - \overline{\langle \mathbf{u}, \mathbf{v} \rangle} \mathbf{u}\right\|^2 = \|\mathbf{u}\|^2 \left[\|\mathbf{u}\|^2\|\mathbf{v}\|^2 - |\langle \mathbf{u}, \mathbf{v} \rangle|^2\right] - -and thus - -\begin{alignat}{4} - -\|\mathbf{u}\|^2\|\mathbf{v}\|^2 \left[\|\mathbf{u}\|^2 \|\mathbf{v}\|^2 - \left|\langle \mathbf{u}, \mathbf{v} \rangle\right|^2\right] - -&= \|\mathbf{u}\|^2 \left\|\|\mathbf{v}\|^2 \mathbf{u} - \langle \mathbf{u}, \mathbf{v} \rangle \mathbf{v}\right\|^2 \\ - -&= \|\mathbf{v}\|^2 \left\|\|\mathbf{u}\|^2 \mathbf{v} - \overline{\langle \mathbf{u}, \mathbf{v} \rangle} \mathbf{u}\right\|^2. \\ - -\end{alignat} - - - -The special case of $\mathbf{v} = \mathbf{0}$ was proven above so it is henceforth assumed that $\mathbf{v} \neq \mathbf{0}.$ - -Let - -\mathbf{z} := \mathbf{u} - \frac {\langle \mathbf{u}, \mathbf{v} \rangle} {\langle \mathbf{v}, \mathbf{v} \rangle} \mathbf{v}. - -It follows from the linearity of the inner product in its first argument that: - -\langle \mathbf{z}, \mathbf{v} \rangle - -= \left\langle \mathbf{u} - \frac{\langle \mathbf{u}, \mathbf{v} \rangle} {\langle \mathbf{v}, \mathbf{v} \rangle} \mathbf{v}, \mathbf{v} \right\rangle - -= \langle \mathbf{u}, \mathbf{v} \rangle - \frac{\langle \mathbf{u}, \mathbf{v} \rangle} {\langle \mathbf{v}, \mathbf{v} \rangle} \langle \mathbf{v}, \mathbf{v} \rangle - -= 0. 
- -Therefore, $\mathbf{z}$ is a vector orthogonal to the vector $\mathbf{v}$ (indeed, $\mathbf{z}$ is the projection of $\mathbf{u}$ onto the plane orthogonal to $\mathbf{v}$). We can thus apply the Pythagorean theorem to - -\mathbf{u}= \frac{\langle \mathbf{u}, \mathbf{v} \rangle} {\langle \mathbf{v}, \mathbf{v} \rangle} \mathbf{v} + \mathbf{z} - -which gives - -\|\mathbf{u}\|^2 - -= \left|\frac{\langle \mathbf{u}, \mathbf{v} \rangle}{\langle \mathbf{v}, \mathbf{v} \rangle}\right|^2 \|\mathbf{v}\|^2 + \|\mathbf{z}\|^2 - -= \frac{|\langle \mathbf{u}, \mathbf{v} \rangle|^2}{(\|\mathbf{v}\|^2 )^2} \|\mathbf{v}\|^2 + \|\mathbf{z}\|^2 - -= \frac{|\langle \mathbf{u}, \mathbf{v} \rangle|^2}{\|\mathbf{v}\|^2} + \|\mathbf{z}\|^2 \geq \frac{|\langle \mathbf{u}, \mathbf{v} \rangle|^2}{\|\mathbf{v}\|^2}. - -The Cauchy–Schwarz inequality follows by multiplying by $\|\mathbf{v}\|^2$ and then taking the square root. - -Moreover, if the relation $\geq$ in the above expression is actually an equality, then $\|\mathbf{z}\|^2 = 0$ and hence $\mathbf{z} = \mathbf{0};$ the definition of $\mathbf{z}$ then establishes a relation of linear dependence between $\mathbf{u}$ and $\mathbf{v}.$ The converse was proved at the beginning of this section, so the proof is complete. $\blacksquare$ - -Various generalizations of the Cauchy–Schwarz inequality exist. Hölder's inequality generalizes it to $L^p$ norms. More generally, it can be interpreted as a special case of the definition of the norm of a linear operator on a Banach space (namely, when the space is a Hilbert space). Further generalizations are in the context of operator theory, e.g. for operator-convex functions and operator algebras, where the domain and/or range are replaced by a C*-algebra or W*-algebra. - -An inner product can be used to define a positive linear functional. For example, given a Hilbert space $L^2(m), m$ being a finite measure, the standard inner product gives rise to a positive functional $\varphi$ by $\varphi (g) = \langle g, 1 \rangle.$ Conversely, every positive linear functional $\varphi$ on $L^2(m)$ can be used to define an inner product $\langle f, g \rangle _\varphi := \varphi\left(g^* f\right),$ where $g^*$ is the pointwise complex conjugate of $g.$ In this language, the Cauchy–Schwarz inequality becomes - -\left|\varphi\left(g^* f\right)\right|^2 \leq \varphi\left(f^* f\right) \varphi\left(g^* g\right), - -which extends verbatim to positive functionals on C*-algebras. - -The next two theorems are further examples in operator algebra. - -Kadison–Schwarz inequality: for a unital positive map $\varphi$ between C*-algebras and every normal element $a$ in its domain, - -\varphi\left(a^*a\right) \geq \varphi(a)^* \varphi(a) \text{ and } \varphi\left(a^*a\right) \geq \varphi(a) \varphi(a)^*. - -This extends the fact $\varphi\left(a^*a\right) \cdot 1 \geq \varphi(a)^* \varphi(a) = |\varphi(a)|^2,$ when $\varphi$ is a linear functional. The case when $a$ is self-adjoint, that is, $a = a^*,$ is sometimes known as Kadison's inequality. - -Modified Schwarz inequality for 2-positive maps: for a 2-positive map $\varphi$ between C*-algebras, and for all $a, b$ in its domain, - -\varphi(a)^*\varphi(a) \leq \Vert\varphi(1)\Vert\varphi\left(a^*a\right), \text{ and } - -\Vert\varphi\left(a^* b\right)\Vert^2 \leq \Vert\varphi\left(a^*a\right)\Vert \cdot \Vert\varphi\left(b^*b\right)\Vert.
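The positive-functional form above can be spot-checked numerically. A minimal sketch (assuming Python with numpy, and taking for $\varphi$ the normalized trace on the matrix algebra of $n \times n$ complex matrices, which is a positive linear functional):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Random elements f, g of the C*-algebra of n x n complex matrices.
f = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

star = lambda a: a.conj().T        # the *-operation (conjugate transpose)
phi = lambda a: np.trace(a) / n    # normalized trace, a positive functional

lhs = abs(phi(star(g) @ f)) ** 2
rhs = phi(star(f) @ f).real * phi(star(g) @ g).real
assert lhs <= rhs + 1e-12          # |phi(g* f)|^2 <= phi(f* f) phi(g* g)
print(lhs, "<=", rhs)
```

Here the inequality is just Cauchy–Schwarz for the Hilbert–Schmidt inner product, so the assertion holds for any pair of matrices.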
- -Another generalization is a refinement obtained by interpolating between both sides of the Cauchy–Schwarz inequality (Callebaut's inequality): for reals $0 \leqslant s \leqslant t \leqslant 1,$ - -\left(\sum_{i=1}^n a_i b_i\right)^2 - -~\leqslant~ \left(\sum_{i=1}^n a_i^{1+s} b_i^{1-s}\right) \left(\sum_{i=1}^n a_i^{1-s} b_i^{1+s}\right) - -~\leqslant~ \left(\sum_{i=1}^n a_i^{1+t} b_i^{1-t}\right) \left(\sum_{i=1}^n a_i^{1-t} b_i^{1+t}\right) - -~\leqslant~ \left(\sum_{i=1}^n a_i^2\right) \left(\sum_{i=1}^n b_i^2\right). - -This theorem can be deduced from Hölder's inequality. There are also noncommutative versions for operators and tensor products of matrices. diff --git a/wiki/wikipedia/4161.txt b/wiki/wikipedia/4161.txt deleted file mode 100644 index 5ded682129523aa29fb319f78955deb095d07477..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4161.txt +++ /dev/null @@ -1,23 +0,0 @@ -Hilbert's fifteenth problem is one of the 23 Hilbert problems set out in a celebrated list compiled in 1900 by David Hilbert. The problem is to put Schubert's enumerative calculus on a rigorous foundation. - -Schubert calculus is the intersection theory of the 19th century, together with applications to enumerative geometry. Justifying this calculus was the content of Hilbert's 15th problem, and was also a major topic of 20th-century algebraic geometry. In the course of securing the foundations of intersection theory, Van der Waerden and André Weil related the problem to the determination of the cohomology ring H*(G/P) of a flag manifold G/P, where G is a Lie group and P a parabolic subgroup of G. - -The additive structure of the ring H*(G/P) is given by the basis theorem of Schubert calculus due to Ehresmann, Chevalley, and Bernstein-Gel'fand-Gel'fand, stating that the classical Schubert classes on G/P form a free basis of the cohomology ring H*(G/P). The remaining problem of expanding products of Schubert classes as linear combinations of basis elements was called the characteristic problem. - -While enumerative geometry made no connection with physics during the first century of its development, it has since emerged as a central element of string theory. - -The entirety of the original problem statement is as follows:
- -The problem consists in this: To establish rigorously and with an exact determination of the limits of their validity those geometrical numbers which Schubert especially has determined on the basis of the so-called principle of special position, or conservation of number, by means of the enumerative calculus developed by him. - -Although the algebra of today guarantees, in principle, the possibility of carrying out the processes of elimination, yet for the proof of the theorems of enumerative geometry decidedly more is requisite, namely, the actual carrying out of the process of elimination in the case of equations of special form in such a way that the degree of the final equations and the multiplicity of their solutions may be foreseen. - -Special presentations of the Chow rings of flag manifolds have been worked out by Borel, Marlin, Billey-Haiman and Duan-Zhao, among others. diff --git a/wiki/wikipedia/4162.txt b/wiki/wikipedia/4162.txt deleted file mode 100644 index 7181af80aa929478b23979690321661256c47b70..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4162.txt +++ /dev/null @@ -1,123 +0,0 @@ -An approach to the foundations of mathematics that is of relatively recent origin, Scott–Potter set theory is a collection of nested axiomatic set theories set out by the philosopher Michael Potter, building on earlier work by the mathematician Dana Scott and the philosopher George Boolos. - -Potter (1990, 2004) clarified and simplified the approach of Scott (1974), and showed how the resulting axiomatic set theory can do what is expected of such a theory, namely grounding the cardinal and ordinal numbers, Peano arithmetic and the other usual number systems, and the theory of relations. - -This section and the next follow Part I of Potter (2004) closely. The background logic is first-order logic with identity. The ontology includes urelements as well as sets, which makes it clear that there can be sets of entities defined by first-order theories not based on sets. The urelements are not essential in that other mathematical structures can be defined as sets, and it is permissible for the set of urelements to be empty. - -Some terminology peculiar to Potter's set theory: - -* ι is a definite description operator and binds a variable. (In Potter's notation the iota symbol is inverted.) - -* The predicate U holds for all urelements (non-collections). - -* ιxΦ(x) exists iff (∃!x)Φ(x). (Potter uses Φ and other upper-case Greek letters to represent formulas.) - -* {x : Φ(x)} is an abbreviation for ιy(not U(y) and (∀x)(x ∈ y ⇔ Φ(x))). - -* a is a collection if {x : x∈a} exists. (All sets are collections, but not all collections are sets.) - -* The accumulation of a, acc(a), is the set {x : x is an urelement or ∃b∈a (x∈b or x⊂b)}. - -* If ∀v∈V(v = acc(V∩v)) then V is a history. - -* A level is the accumulation of a history. - -* An initial level has no other levels as members. - -* A limit level is a level that is neither the initial level nor the level above any other level. - -* A set is a subcollection of some level. - -* The birthday of set a, denoted V(a), is the lowest level V such that a⊂V. - -The following three axioms define the theory ZU. - -Creation: ∀V∃V' (V∈V' ). - -Remark: There is no highest level, hence there are infinitely many levels. This axiom establishes the ontology of levels. - -Separation: An axiom schema. For any first-order formula Φ(x) with (bound) variables ranging over the level V, the collection {x∈V : Φ(x)} is also a set. (See Axiom schema of separation.)
- -Remark: Given the levels established by Creation, this schema establishes the existence of sets and how to form them. It tells us that a level is a set, and all subsets, definable via first-order logic, of levels are also sets. This schema can be seen as an extension of the background logic. - -Infinity: There exists at least one limit level. (See Axiom of infinity.) - -Remark: Among the sets Separation allows, at least one is infinite. This axiom is primarily mathematical, as there is no need for the actual infinite in other human contexts, the human sensory order being necessarily finite. For mathematical purposes, the axiom "There exists an inductive set" would suffice. - -The following statements, while in the nature of axioms, are not axioms of ZU. Instead, they assert the existence of sets satisfying a stated condition. As such, they are "existence premises," meaning the following. Let X denote any statement below. Any theorem whose proof requires X is then formulated conditionally as "If X holds, then..." Potter defines several systems using existence premises, including the following two: - -* ZfU =df ZU + Ordinals; - -* ZFU =df Separation + Reflection. - -Ordinals: For each (infinite) ordinal α, there exists a corresponding level Vα. - -Remark: In words, "There exists a level corresponding to each infinite ordinal." Ordinals makes possible the conventional Von Neumann definition of ordinal numbers. - -Let τ(x) be a first-order term. - -Replacement: An axiom schema. For any collection a, ∀x∈a[τ(x) is a set] → {τ(x) : x∈a} is a set. - -Remark: If the term τ(x) is a function (call it f(x)), and if the domain of f is a set, then the range of f is also a set. - -Reflection: Let Φ denote a first-order formula in which any number of free variables are present. Let Φ(V) denote Φ with these free variables all quantified, with the quantified variables restricted to the level V. - -Then ∃V[Φ→Φ(V)] is an axiom. - -Remark: This schema asserts the existence of a "partial" universe, namely the level V, in which all properties Φ holding when the quantified variables range over all levels, also hold when these variables range over V only. Reflection turns Creation, Infinity, Ordinals, and Replacement into theorems (Potter 2004: §13.3). - -Let A and a denote sequences of nonempty sets, each indexed by n. - -Countable Choice: Given any sequence A, there exists a sequence a such that: - -∀n∈ω[a_n∈A_n]. - -Remark. Countable Choice enables proving that any set must be either finite or infinite. - -Let B and C denote sets, and let n index the members of B, each denoted B_n. - -Choice: Let the members of B be disjoint nonempty sets. Then: - -∃C∀n[C∩B_n is a singleton]. - -The von Neumann universe implements the "iterative conception of set" by stratifying the universe of sets into a series of "levels," with the sets at a given level being the members of the sets making up the next higher level. Hence the levels form a nested and well-ordered sequence, and would form a hierarchy if set membership were transitive. The resulting iterative conception steers clear, in a well-motivated way, of the well-known paradoxes of Russell, Burali-Forti, and Cantor. These paradoxes all result from the unrestricted use of the principle of comprehension that naive set theory allows. Collections such as "the class of all sets" or "the class of all ordinals" include sets from all levels of the hierarchy.
Given the iterative conception, such collections cannot form sets at any given level of the hierarchy and thus cannot be sets at all. The iterative conception has gradually become more accepted over time, despite an imperfect understanding of its historical origins. - -Boolos's (1989) axiomatic treatment of the iterative conception is his set theory S, a two-sorted first-order theory involving sets and levels. - -Scott (1974) did not mention the "iterative conception of set," instead proposing his theory as a natural outgrowth of the simple theory of types. Nevertheless, Scott's theory can be seen as an axiomatization of the iterative conception and the associated iterative hierarchy. - -Scott began with an axiom he declined to name: the atomic formula x∈y implies that y is a set. In symbols: - -∀x,y∃a[x∈y→y=a]. - -His axiom of Extensionality and axiom schema of Comprehension (Separation) are strictly analogous to their ZF counterparts and so do not mention levels. He then invoked two axioms that do mention levels: - -* Accumulation. A given level "accumulates" all members and subsets of all earlier levels. See the above definition of accumulation. - -* Restriction. All collections belong to some level. - -Restriction also implies the existence of at least one level and assures that all sets are well-founded. - -Scott's final axiom, the Reflection schema, is identical to the above existence premise bearing the same name, and likewise does duty for ZF's Infinity and Replacement. Scott's system has the same strength as ZF. - -Potter (1990, 2004) introduced the idiosyncratic terminology described earlier in this entry, and discarded or replaced all of Scott's axioms except Reflection; the result is ZU. ZU, like ZF, cannot be finitely axiomatized. ZU differs from ZFC in that it: - -* Includes no axiom of extensionality because the usual extensionality principle follows from the definition of collection and an easy lemma. - -* Admits nonwellfounded collections. However, Potter (2004) never invokes such collections, and all sets (collections which are contained in a level) are wellfounded. No theorem in Potter would be overturned if an axiom stating that all collections are sets were added to ZU. - -* Includes no equivalents of Choice or the axiom schema of Replacement. - -Hence ZU is closer to the Zermelo set theory of 1908, namely ZFC minus Choice, Replacement, and Foundation. It is stronger than this theory, however, since cardinals and ordinals can be defined, despite the absence of Choice, using Scott's trick and the existence of levels, and no such definition is possible in Zermelo set theory. Thus in ZU, an equivalence class of: - -* Equinumerous sets from a common level is a cardinal number; - -* Isomorphic well-orderings, also from a common level, is an ordinal number. - -Similarly the natural numbers are not defined as a particular set within the iterative hierarchy, but as models of a "pure" Dedekind algebra. "Dedekind algebra" is Potter's name for a set closed under a unary injective operation, successor, whose domain contains a unique element, zero, absent from its range. Because the theory of Dedekind algebras is categorical (all models are isomorphic), any such algebra can proxy for the natural numbers. - -Although Potter (2004) devotes an entire appendix to proper classes, the strength and merits of Scott–Potter set theory relative to the well-known rivals to ZFC that admit proper classes, namely NBG and Morse–Kelley set theory, have yet to be explored.
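The first few levels of the hierarchy can be computed mechanically from the definition of accumulation given earlier. A toy sketch (in Python, with no urelements and frozensets standing in for sets; this is only an illustration, not Potter's formalism):

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of a frozenset, returned as frozensets."""
    items = list(s)
    return {frozenset(c) for c in
            chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))}

def acc(a):
    """acc(a) = {x : x is a member of some b in a, or a subset of some b in a}."""
    out = set()
    for b in a:
        out |= set(b)        # members of b
        out |= powerset(b)   # subsets of b
    return frozenset(out)

# A finite history is the set of all earlier levels; each level is its accumulation.
levels = [frozenset()]                     # the initial level: acc of the empty history
for _ in range(3):
    levels.append(acc(frozenset(levels)))  # next level = acc({V_0, ..., V_n})

print([len(v) for v in levels])            # [0, 1, 2, 4], matching the von Neumann stages
```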
- -Scott–Potter set theory resembles NFU in that the latter is an axiomatic set theory devised relatively recently (Jensen 1967) that admits both urelements and sets that are not well-founded. But the urelements of NFU, unlike those of ZU, play an essential role; they and the resulting restrictions on Extensionality make possible a proof of NFU's consistency relative to Peano arithmetic. But nothing is known about the strength of NFU relative to Creation+Separation, of NFU+Infinity relative to ZU, or of NFU+Infinity+Countable Choice relative to ZU + Countable Choice. - -Unlike nearly all writing on set theory in recent decades, Potter (2004) mentions mereological fusions. His collections are also synonymous with the "virtual sets" of Willard Quine and Richard Milton Martin: entities arising from the free use of the principle of comprehension that can never be admitted to the universe of discourse. diff --git a/wiki/wikipedia/4163.txt b/wiki/wikipedia/4163.txt deleted file mode 100644 index 82d58ad282cc346f56a59bfc641beb2959c26c1f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4163.txt +++ /dev/null @@ -1,35 +0,0 @@ -Matroid-constrained number partitioning is a variant of the multiway number partitioning problem, in which the subsets in the partition should be independent sets of a matroid. The input to this problem is a set S of items, a positive integer m, and m matroids over the same set S. The goal is to partition S into m subsets, such that each subset i is an independent set in matroid i. Subject to this constraint, some objective function should be minimized, for example, minimizing the largest sum of item sizes in a subset. In a more general variant, each of the m matroids has a weight function, which assigns a weight to each element of the ground-set. Various objective functions have been considered. For each of the three operators max, min, sum, one can use this operator on the weights of items in each subset, and on the subsets themselves. All in all, there are 9 possible objective functions, each of which can be maximized or minimized. - -Some important special cases of matroid-constrained partitioning problems are: - -* The (max,sum) objective is the maximum over all subsets, of the total weight in the subset. When the items represent jobs and the weights represent their length, this objective is simply the makespan of the schedule. Therefore, minimizing this objective is equivalent to minimizing the makespan under matroid constraints. The dual goal of maximizing (min,sum) has also been studied in this context. - -**The special case in which the matroids are free matroids (no constraints) and the m weight-functions are identical corresponds to identical-machines scheduling, also known as multiway number partitioning. The case of free matroids and different weight-functions corresponds to unrelated-machines scheduling. - -**The special case of uniform matroids corresponds to cardinality constraints on the subsets. The more general case of partition matroids corresponds to categorized cardinality constraints. These problems are described in the page on balanced number partitioning. - -*The (sum,sum) objective is the sum of weights of all items in all subsets, where the weights in each subset i are computed by the weight-function of matroid i. Minimizing this objective can be reduced to the weighted matroid intersection problem - finding a maximum-weight subset that is simultaneously independent in two given matroids. This problem is solvable in polynomial time.
- -*The (max,max) objective is the maximum weight in all subsets, where the weights in each subset i are computed by the weight-function of matroid i. Minimizing this objective with graphic matroids can be used to solve the minimum bottleneck spanning tree problem. - -*The (sum,min) objective is the sum of minimum weights in all subsets. Maximizing this objective, with k identical graphical matroids, can be used to solve the maximum total capacity spanning tree partition problem. - -*The (sum,max) objective is the sum of maximum weights in all subsets. This objective can represent the total memory needed for scheduling, when each matroid i represents the feasible allocations in machine i. - -General matroid constraints were first considered by Burkard and Yao. They showed that minimizing (sum,max) can be done in polynomial time by a greedy algorithm for a subclass of matroids, which includes partition matroids. Hwang and Rothblum presented an alternative sufficient condition. - -Wu and Yao presented an approximation algorithm for minimizing (max,sum) with general matroid constraints. - -Abbassi, Mirrokni and Thakur present an approximation algorithm for a problem of diversity maximization under matroid constraints. - -Kawase, Kimura, Makino and Sumita show that the maximization problems can be reduced to minimization problems. Then, they analyze seven minimization problems: - -* Minimizing (sum,max): the problem is strongly NP-hard even when the matroids and weights are identical. There is a PTAS for identical matroids and weights. For general matroids and weights, there is an εm-approximation algorithm for any ε > 0. It is NP-hard to approximate with factor O(log m). - -* Minimizing (min,min), (max,max), (min,max) and (min,sum): there are polynomial-time algorithms. They reduce the problems to the feasibility problem of the matroid partitioning problem. - -* Minimizing (max,min) and (sum,min): there are polynomial-time algorithms for identical matroids and weights. In the general case, it is strongly NP-hard even to approximate. - -*The other two problems were analyzed in previous works: minimizing (max,sum) is known to be strongly NP-hard (3-partition is a special case), and minimizing (sum,sum) can be reduced to weighted matroid intersection, which is polynomial. - -Matroid partitioning is a different problem, in which the number of parts m is not fixed. There is a single matroid, and the goal is to partition its elements into a smallest number of independent sets. diff --git a/wiki/wikipedia/4164.txt b/wiki/wikipedia/4164.txt deleted file mode 100644 index da6206fb47b245e54ea99e500403587b2b83445a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4164.txt +++ /dev/null @@ -1,34 +0,0 @@ -In mathematics, more precisely in measure theory, Lebesgue's decomposition theorem states that for every two σ-finite signed measures $\mu$ and $\nu$ on a measurable space $(\Omega,\Sigma),$ there exist two σ-finite signed measures $\nu_0$ and $\nu_1$ such that: - -* $\nu=\nu_0+\nu_1 $ - -* $\nu_0\ll\mu$ (that is, $\nu_0$ is absolutely continuous with respect to $\mu$) - -* $\nu_1\perp\mu$ (that is, $\nu_1$ and $\mu$ are singular). - -These two measures are uniquely determined by $\mu$ and $\nu.$ - -Lebesgue's decomposition theorem can be refined in a number of ways.
- -First, the decomposition of a regular Borel measure on the real line can be refined: -$$ - \nu = \nu_{\mathrm{cont}} + \nu_{\mathrm{sing}} + \nu_{\mathrm{pp}} -$$ - -where - -* νcont is the absolutely continuous part - -* νsing is the singular continuous part - -* νpp is the pure point part (a discrete measure). - -Second, absolutely continuous measures are classified by the Radon–Nikodym theorem, and discrete measures are easily understood. Hence (singular continuous measures aside), Lebesgue decomposition gives a very explicit description of measures. The Cantor measure (the probability measure on the real line whose cumulative distribution function is the Cantor function) is an example of a singular continuous measure. - -The analogous decomposition for a stochastic process is the Lévy–Itô decomposition: given a Lévy process X, it can be decomposed as a sum of three independent Lévy processes $X=X^{(1)}+X^{(2)}+X^{(3)}$ where: - -* $X^{(1)}$ is a Brownian motion with drift, corresponding to the absolutely continuous part; - -* $X^{(2)}$ is a compound Poisson process, corresponding to the pure point part; - -* $X^{(3)}$ is a square-integrable pure-jump martingale that almost surely has a countable number of jumps on a finite interval, corresponding to the singular continuous part. diff --git a/wiki/wikipedia/4165.txt b/wiki/wikipedia/4165.txt deleted file mode 100644 index 2592d03a2de2f35f42bd717316027fe28efc812b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4165.txt +++ /dev/null @@ -1,39 +0,0 @@ -In mathematics, specifically in local class field theory, the Hasse–Arf theorem is a result concerning jumps of the upper numbering filtration of the Galois group of a finite Galois extension. A special case of it when the residue fields are finite was originally proved by Helmut Hasse, and the general result was proved by Cahit Arf. - -Suppose $G$ is cyclic of order $p^n$, with $p$ the residue characteristic, and let $G(i)$ be the subgroup of $G$ of order $p^{n-i}$. The theorem says that there exist positive integers $i_0, i_1, ..., i_{n-1}$ such that -$$ -G_0 = \cdots = G_{i_0} = G = G^0 = \cdots = G^{i_0} -$$ -$$ -G_{i_0 + 1} = \cdots = G_{i_0 + p i_1} = G(1) = G^{i_0 + 1} = \cdots = G^{i_0 + i_1} -$$ -$$ -G_{i_0 + p i_1 + 1} = \cdots = G_{i_0 + p i_1 + p^2 i_2} = G(2) = G^{i_0 + i_1 + 1} -$$ - -... -$$ -G_{i_0 + p i_1 + \cdots + p^{n-1}i_{n-1} + 1} = 1 = G^{i_0 + \cdots + i_{n-1} + 1}. -$$ - -For non-abelian extensions the jumps in the upper filtration need not be at integers. Serre gave an example of a totally ramified extension with Galois group the quaternion group Q8 of order 8 with - -*G0 = Q8 - -*G1 = Q8 - -*G2 = Z/2Z - -*G3 = Z/2Z - -*G4 = 1 - -The upper numbering then satisfies - -*Gn = Q8 for n ≤ 1 - -*Gn = Z/2Z for 1 < n ≤ 3/2 - -*Gn = 1 for n > 3/2 - -Suppose $0 < p_n \leq 1$ is a sequence with $n p_n = \lambda$ held fixed, so that $p_n = \lambda/n$. Then - - - -\begin{align} - -\lim\limits_{n\rightarrow\infty}{n \choose k} p_n^k (1-p_n)^{n-k} - -&\simeq \lim_{n\to\infty}\frac{n(n-1)(n-2)\dots(n-k+1)}{k!} \left(\frac{\lambda}{n}\right)^k \left(1- \frac{\lambda}{n}\right)^{n-k} \\ - -&= \lim_{n\to\infty}\frac{n^k+O\left(n^{k-1}\right)}{k!}\frac{\lambda^k}{n^k} \left(1- \frac{\lambda}{n}\right)^{n-k} \\ - -&= \lim_{n\to\infty}\frac{\lambda^k}{k!} \left(1-\frac{\lambda}{n}\right)^{n-k} - -\end{align}.
- -Since -$$ - \lim_{n\to\infty} \left(1-\frac{\lambda}{n}\right)^{n} = e^{-\lambda} -$$ - -and -$$ - \lim_{n\to\infty} \left(1- \frac{\lambda}{n}\right)^{-k}=1, -$$ - -this leaves -$$ -{n \choose k}p^k (1-p)^{n-k} \simeq \frac{\lambda^k e^{-\lambda}}{k!}. -$$ - -Using Stirling's approximation, we can write: - - - -\begin{align} - -{n \choose k}p^k (1-p)^{n-k} - -&= \frac{n!}{(n-k)!k!} p^k (1-p)^{n-k} - -\\ &\simeq \frac{ \sqrt{2\pi n}\left(\frac{n}{e}\right)^n}{ \sqrt{2\pi \left(n-k\right)}\left(\frac{n-k}{e}\right)^{n-k}k!} p^k (1-p)^{n-k} - -\\ &= \sqrt{\frac{n}{n-k}}\frac{n^n e^{-k}}{\left(n-k\right)^{n-k}k!}p^k (1-p)^{n-k}. - -\end{align} - - - -Letting $n \to \infty$ and $np = \lambda$: - - - -\begin{align} - -{n \choose k}p^k (1-p)^{n-k} - -&\simeq \frac{n^np^k (1-p)^{n-k}e^{-k}}{\left(n-k\right)^{n-k}k!} - -\\&= \frac{n^n\left(\frac{\lambda}{n}\right)^k \left(1-\frac{\lambda}{n}\right)^{n-k}e^{-k}}{n^{n-k}\left(1-\frac{k}{n}\right)^{n-k}k!} \\&= \frac{\lambda^k \left(1-\frac{\lambda}{n}\right)^{n-k}e^{-k}}{\left(1-\frac{k}{n}\right)^{n-k}k!} - -\\ &\simeq \frac{\lambda^k \left(1-\frac{\lambda}{n}\right)^{n}e^{-k}}{\left(1-\frac{k}{n}\right)^{n}k!} . - -\end{align} - - - -As $n \to \infty$, $\left(1-\frac{x}{n}\right)^n \to e^{-x}$, so: - -\begin{align} - -{n \choose k}p^k (1-p)^{n-k} - -&\simeq \frac{\lambda^k e^{-\lambda}e^{-k}}{e^{-k}k!} - -\\&= \frac{\lambda^k e^{-\lambda}}{k!} - -\end{align} - -It is also possible to demonstrate the theorem through the use of ordinary generating functions of the binomial distribution: - - - -G_\operatorname{bin}(x;p,N) - -\equiv \sum_{k=0}^N \left[ \binom{N}{k} p^k (1-p)^{N-k} \right] x^k - -= \Big[ 1 + (x-1)p \Big]^N - - - -by virtue of the binomial theorem. Taking the limit $N \rightarrow \infty$ while keeping the product $pN\equiv\lambda$ constant, we find - - - -\lim_{N\rightarrow\infty} G_\operatorname{bin}(x;p,N) - -= \lim_{N\rightarrow\infty} \left[ 1 + \frac{\lambda(x-1)}{N} \right]^N - -= \mathrm{e}^{\lambda(x-1)} - -= \sum_{k=0}^{\infty} \left[ \frac{\mathrm{e}^{-\lambda}\lambda^k}{k!} \right] x^k - - - -which is the OGF for the Poisson distribution. (The second equality holds due to the definition of the exponential function.) A numerical illustration of this convergence appears below. diff --git a/wiki/wikipedia/417.txt b/wiki/wikipedia/417.txt deleted file mode 100644 index b24342911be2dfc386dd53eea1c36c14e6a22502..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/417.txt +++ /dev/null @@ -1 +0,0 @@ -In three-dimensional topology, a branch of mathematics, the cyclic surgery theorem states that, for a compact, connected, orientable, irreducible three-manifold M whose boundary is a torus T, if M is not a Seifert-fibered space and r,s are slopes on T such that their Dehn fillings have cyclic fundamental group, then the distance between r and s (the minimal number of times that two simple closed curves in T representing r and s must intersect) is at most 1. Consequently, there are at most three Dehn fillings of M with cyclic fundamental group. The theorem appeared in a 1987 paper written by Marc Culler, Cameron Gordon, John Luecke and Peter Shalen.
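Returning to the Poisson limit theorem above: the binomial probabilities with $np = \lambda$ held fixed can be compared against the limiting Poisson mass function directly. A short sketch (assuming Python with scipy installed):

```python
from scipy.stats import binom, poisson

lam, k = 4.0, 3
print("poisson :", poisson.pmf(k, lam))   # lambda^k e^(-lambda) / k!
for n in (10, 100, 1000, 10000):
    p = lam / n                           # keep n*p = lambda fixed
    print(f"n={n:<6d} binomial:", binom.pmf(k, n, p))
```

As n grows the printed binomial values approach the Poisson value, in line with the limit derived above.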
diff --git a/wiki/wikipedia/4170.txt b/wiki/wikipedia/4170.txt deleted file mode 100644 index 44507e0d70411f76e783a8f78ae976c3540312d3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4170.txt +++ /dev/null @@ -1,11 +0,0 @@ -In mathematics, Dehn's lemma asserts that a piecewise-linear map of a disk into a 3-manifold, with the map's singularity set in the disk's interior, implies the existence of another piecewise-linear map of the disk which is an embedding and is identical to the original on the boundary of the disk. - -This theorem was thought to be proven by Max Dehn (1910), but Hellmuth Kneser (1929) found a gap in the proof. The status of Dehn's lemma remained in doubt until Christos Papakyriakopoulos (1957), using work by Johansson (1938), proved it using his "tower construction". He also generalized the theorem to the loop theorem and sphere theorem. - -Papakyriakopoulos proved Dehn's lemma using a tower of covering spaces. Soon afterwards Arnold Shapiro and J. H. C. Whitehead gave a substantially simpler proof, proving a more powerful result. Their proof used Papakyriakopoulos' tower construction, but with double covers, as follows: - -*Step 1. Repeatedly take a connected double cover of a regular neighborhood of the image of the disk to produce a tower of spaces, each a connected double cover of the one below it. The map from the disk can be lifted to all stages of this tower. Each double cover simplifies the singularities of the embedding of the disk, so it is only possible to take a finite number of such double covers, and the top level of this tower has no connected double covers. - -*Step 2. If the 3-manifold has no connected double covers then all its boundary components are 2-spheres. In particular the top level of the tower has this property, and in this case it is easy to modify the map from the disk so that it is an embedding. - -*Step 3. The embedding of the disk can now be pushed down the tower of double covers one step at a time, by cutting and pasting the 2-disk. diff --git a/wiki/wikipedia/4171.txt b/wiki/wikipedia/4171.txt deleted file mode 100644 index 0ca072679f81076329f96b4ddb59172fef7bd18f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4171.txt +++ /dev/null @@ -1,34 +0,0 @@ -In mathematics, Hall's conjecture is an open question on the differences between perfect squares and perfect cubes. It asserts that a perfect square $y^2$ and a perfect cube $x^3$ that are not equal must lie a substantial distance apart. This question arose from consideration of the Mordell equation in the theory of integer points on elliptic curves. - -The original version of Hall's conjecture, formulated by Marshall Hall, Jr. in 1970, says that there is a positive constant C such that for any integers x and y for which $y^2 \neq x^3$, -$$ - |y^2 - x^3| > C\sqrt{|x|}. -$$ - -Hall suggested that perhaps C could be taken as 1/5, which was consistent with all the data known at the time the conjecture was proposed. Danilov showed in 1982 that the exponent 1/2 on the right side (that is, the use of $|x|^{1/2}$) can't be replaced by any higher power: for no δ > 0 is there a constant C such that $|y^2 - x^3| > C|x|^{1/2 + \delta}$ whenever $y^2 \neq x^3$. - -In 1965, Davenport proved an analogue of the above conjecture in the case of polynomials: - -if f(t) and g(t) are nonzero polynomials over C such that - -g(t)^2 ≠ f(t)^3 in C[t], then -$$ - \deg(g(t)^2 - f(t)^3) \geq \frac{1}{2}\deg f(t) + 1.
-$$ - -The weak form of Hall's conjecture, stated by Stark and Trotter around 1980, replaces the square root on the right side of the inequality by any exponent less than 1/2: for any ε > 0, there is some constant c(ε) depending on ε such that for any integers x and y for which $y^2 \neq x^3$, -$$ - |y^2 - x^3| > c(\varepsilon) x^{1/2-\varepsilon}. -$$ - -The original, strong, form of the conjecture with exponent 1/2 has never been disproved, although it is no longer believed to be true and the term Hall's conjecture now generally means the version with the ε in it. For example, in 1998, Noam Elkies found the example - -447884928428402042307918^2 - 5853886516781223^3 = -1641843, - -for which compatibility with Hall's conjecture would require C to be less than .0214 ≈ 1/50 (here $\sqrt{x}/|y^2 - x^3| = \sqrt{5853886516781223}/1641843 \approx 46.6$, and $1/46.6 \approx 0.0214$), so roughly 10 times smaller than the original choice of 1/5 that Hall suggested. - -The weak form of Hall's conjecture would follow from the ABC conjecture. A generalization to other perfect powers is Pillai's conjecture. - -The known cases have $ r = \sqrt{x}/|y^2-x^3| > 1$; note that y can be computed as the nearest integer to $x^{3/2}$. diff --git a/wiki/wikipedia/4172.txt b/wiki/wikipedia/4172.txt deleted file mode 100644 index 812e7cfe8c6ed67b747f6a63bb22c23a6b75fe24..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4172.txt +++ /dev/null @@ -1,56 +0,0 @@ -In mathematics, the Hirzebruch–Riemann–Roch theorem, named after Friedrich Hirzebruch, Bernhard Riemann, and Gustav Roch, is Hirzebruch's 1954 result generalizing the classical Riemann–Roch theorem on Riemann surfaces to all complex algebraic varieties of higher dimensions. The result paved the way for the Grothendieck–Hirzebruch–Riemann–Roch theorem proved about three years later. - -The Hirzebruch–Riemann–Roch theorem applies to any holomorphic vector bundle E on a compact complex manifold X, to calculate the holomorphic Euler characteristic of E in sheaf cohomology, namely the alternating sum -$$ - \chi(X,E) = \sum_{i=0}^{n} (-1)^{i} \dim_{\Complex} H^{i}(X,E) -$$ - -of the dimensions as complex vector spaces, where n is the complex dimension of X. - -Hirzebruch's theorem states that χ(X, E) is computable in terms of the Chern classes ck(E) of E, and the Todd classes $\operatorname{td}_{j}(X)$ of the holomorphic tangent bundle of X. These all lie in the cohomology ring of X; by use of the fundamental class (or, in other words, integration over X) we can obtain numbers from classes in $H^{2n}(X).$ The Hirzebruch formula asserts that -$$ - \chi(X,E) = \sum \operatorname{ch}_{n-j}(E) \operatorname{td}_{j}(X), -$$ - -where the sum is taken over all relevant j (so 0 ≤ j ≤ n), using the Chern character ch(E) in cohomology. In other words, the products are formed in the cohomology ring of all the 'matching' degrees that add up to 2n. Formulated differently, it gives the equality -$$ - \chi(X,E) = \int_X \operatorname{ch}(E) \operatorname{td}(X) -$$ - -where $\operatorname{td}(X)$ is the Todd class of the tangent bundle of X. - -Significant special cases are when E is a complex line bundle, and when X is an algebraic surface (Noether's formula). Weil's Riemann–Roch theorem for vector bundles on curves, and the Riemann–Roch theorem for algebraic surfaces (see below), are included in its scope. The formula also expresses in a precise way the vague notion that the Todd classes are in some sense reciprocals of characteristic classes. - -For curves, the Hirzebruch–Riemann–Roch theorem is essentially the classical Riemann–Roch theorem.
To see this, recall that for each divisor D on a curve there is an invertible sheaf O(D) (which corresponds to a line bundle) such that the linear system of D is more or less the space of sections of O(D). For curves the Todd class is $1+c_1(T(X))/2,$ and the Chern character of a sheaf O(D) is just 1+c1(O(D)), so the Hirzebruch–Riemann–Roch theorem states that -$$ -h^0(\mathcal{O}(D)) - h^1(\mathcal{O}(D)) = c_1(\mathcal{O}(D)) + c_1(T(X))/2 -$$ (integrated over X). - -But h0(O(D)) is just l(D), the dimension of the linear system of D, and by Serre duality h1(O(D)) = h0(O(K − D)) = l(K − D) where K is the canonical divisor. Moreover, c1(O(D)) integrated over X is the degree of D, and c1(T(X)) integrated over X is the Euler class 2 − 2g of the curve X, where g is the genus. So we get the classical Riemann–Roch theorem -$$ -\ell(D)-\ell(K-D) = \deg(D)+1-g. -$$ - -(For example, on the projective line, where $g = 0$ and $\deg K = -2$, this gives $\ell(D) = \deg(D) + 1$ for $\deg(D) \geq 0$: the global sections are the binary forms of degree $\deg(D)$.) - -For vector bundles V, the Chern character is rank(V) + c1(V), so we get Weil's Riemann–Roch theorem for vector bundles over curves: -$$ -h^0(V) - h^1(V) = c_1(V) + \operatorname{rank}(V)(1-g). -$$ - -For surfaces, the Hirzebruch–Riemann–Roch theorem is essentially the Riemann–Roch theorem for surfaces -$$ -\chi(D) = \chi(\mathcal{O}) + ((D.D)-(D.K))/2. -$$ - -combined with the Noether formula. - -If we want, we can use Serre duality to express h2(O(D)) as h0(O(K − D)), but unlike the case of curves there is in general no easy way to write the h1(O(D)) term in a form not involving sheaf cohomology (although in practice it often vanishes). - -Let D be an ample Cartier divisor on an irreducible projective variety X of dimension n. Then -$$ -h^0 \left (X,\mathcal O_X(mD) \right )=\frac{(D^n)}{n!}\cdot m^n+O(m^{n-1}). -$$ - -More generally, if $\mathcal F$ is any coherent sheaf on X then -$$ -h^0 \left (X,\mathcal F\otimes \mathcal O_X(mD) \right )=\operatorname{rank}(\mathcal F)\frac{(D^n)}{n!}\cdot m^n+O(m^{n-1}). -$$ diff --git a/wiki/wikipedia/4173.txt b/wiki/wikipedia/4173.txt deleted file mode 100644 index d553d68cde04203a0513e5fc5d9b540bdc23c6de..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4173.txt +++ /dev/null @@ -1,85 +0,0 @@ -Tetris is a puzzle video game created by Soviet software engineer Alexey Pajitnov in 1984. It has been published by several companies for multiple platforms, most prominently during a dispute over the appropriation of the rights in the late 1980s. After a significant period of publication by Nintendo, the rights reverted to Pajitnov in 1996, who co-founded The Tetris Company with Henk Rogers to manage licensing. - -In Tetris, players complete lines by moving differently shaped pieces (tetrominoes), which descend onto the playing field. The completed lines disappear and grant the player points, and the player can proceed to fill the vacated spaces. The game ends when the playing field is filled. The longer the player can delay this outcome, the higher their score will be. In multiplayer games, players must last longer than their opponents; in certain versions, players can inflict penalties on opponents by completing a significant number of lines. Some adaptations have provided variations to the game's theme, such as three-dimensional displays or a system for reserving pieces. - -Built on simple rules and requiring intelligence and skill, Tetris established itself as one of the great early video games.
By December 2011, it had sold 202 million copies – approximately 70 million physical units and 132 million paid mobile game downloads – making it one of the best-selling video game franchises of all time. The Game Boy version is one of the best-selling games of all time, with more than 35 million copies sold. Tetris is available on over 65 platforms, setting a Guinness world record for the most ported video game. Tetris is rooted within popular culture and its popularity extends beyond the sphere of video games; imagery from the game has influenced architecture, music and cosplay. The game has also been the subject of various research studies that have analyzed its theoretical complexity and have shown its effect on the human brain following a session, in particular the Tetris effect. - -Tetris is primarily composed of a field of play in which pieces of different geometric forms, called "tetrominoes", descend from the top of the field. During this descent, the player can move the pieces laterally and rotate them until they touch the bottom of the field or land on a piece that had been placed before it. The player can neither slow down the falling pieces nor stop them, but can accelerate them in most versions. The objective of the game is to use the pieces to create as many horizontal lines of blocks as possible. When a line is completed, it disappears, and the blocks placed above fall one rank. - -In most versions, the speed of the falling pieces increases with each level, leaving the player with less time to think about the placement. - -The pieces on which Tetris is based are called "tetrominoes". Pajitnov's original version for the Electronika 60 computer used green brackets to represent the blocks that make up tetrominoes. In addition, certain games award combos, which reward multiple line clears in quick succession. The exact conditions for triggering combos, and the amount of importance assigned to them, vary from game to game. - -Nearly all Tetris games allow the player to press a button to increase the speed of the current piece's descent or cause the piece to drop and lock into place immediately, known as a "soft drop" and a "hard drop", respectively. While performing a soft drop, the player can also stop the piece's increased speed by releasing the button before the piece settles into place. Some games only allow either soft drop or hard drop; others have separate buttons for both. Many games award a number of points based on the height that the piece fell before locking, so using the hard drop generally awards more points. - -The question "Would it be possible to play forever?" was first considered in a thesis by John Brzustowski in 1992. The conclusion reached was that the game is statistically doomed to end. If a player receives a sufficiently large sequence of alternating S and Z tetrominoes, the naïve gravity used by the standard game eventually forces the player to leave holes on the board. The holes will necessarily stack to the top and, ultimately, end the game. If the pieces are distributed randomly, this sequence will eventually occur. Thus, if a game with, for example, an ideal, uniform, uncorrelated random number generator is played long enough, any player will top out.
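The bag-style "Random Generator" described in the next paragraph, a seven-piece bag shuffle, can be sketched in a few lines (an illustrative sketch, not the licensed implementation):

```python
import random

PIECES = "IJLOSTZ"  # the seven one-sided tetrominoes

def bag_randomizer():
    """Deal pieces by shuffling a fresh 7-piece bag each time one runs out.

    Worst case for S/Z runs: a bag ending ...S, Z followed by a bag starting
    Z, S... yields four S or Z pieces in a row, and never more than four.
    """
    while True:
        bag = list(PIECES)
        random.shuffle(bag)
        yield from bag

pieces = bag_randomizer()
print([next(pieces) for _ in range(14)])  # two full bags
```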
This is one of the "Indispensable Rules" enforced by the Tetris Guideline that all officially licensed Tetris games must follow. "Easy spin", also called "infinite spin", is a feature in some Tetris games where a tetromino stops falling for a moment after left or right movement or rotation, effectively allowing the player to suspend the piece while deciding where to place it. The mechanic was introduced in 1999's The Next Tetris, and drew criticism in reviews of 2001's Tetris Worlds. One review went so far as to say that this mechanism broke the game. The goal in Tetris Worlds, however, is to complete a certain number of lines as fast as possible, so the ability to hold off a piece's placement will not make achieving that goal any faster. Later, GameSpot received "easy spin" more openly, saying that "the infinite spin issue honestly really affects only a few of the single-player gameplay modes in Tetris DS, because any competitive mode requires you to lay down pieces as quickly as humanly possible." - -Henk Rogers stated in an interview that infinite spin was an intentional part of the game design, allowing novice players to expend some of their available scoring time to decide on the best placement of a piece. Rogers observed that "gratuitous spinning" does not occur in competitive play, as expert players do not require much time to think about where a piece should be placed. A limitation has been placed on infinite lock delay in later games of the franchise, where after a certain amount of rotations and movements, the piece will instantly lock itself. This limit defaults to 15 such actions. - -Pajitnov imagined a game consisting of a descent of random pieces that the player would turn to fill rows. Pajitnov felt that the game would be needlessly complicated with twelve different shape variations, so he scaled the concept down to tetrominoes, of which there are seven variants. - -Pajitnov had completed the first playable version of Tetris by June 6, 1984. Pajitnov presented Tetris to his colleagues, who quickly became addicted to it. It permeated the offices within the Academy of Sciences, and within a few weeks it reached every Moscow institute with a computer. A friend of Pajitnov, Vladimir Pokhilko, who requested the game for the Moscow Medical Institute, saw people stop working to play Tetris. Pokhilko eventually banned the game from the Medical Institute to restore productivity. - -Pajitnov sought to adapt Tetris to the IBM Personal Computer, which had a higher quality display than the Electronika 60. Pajitnov recruited Vadim Gerasimov, a 16-year-old high school student who was known for his computer skills. Pajitnov had met Gerasimov before through a mutual acquaintance, and they had worked together on previous games. Gerasimov adapted Tetris to the IBM PC over the course of a few weeks, incorporating color and a scoreboard. - -Pajitnov wanted to export Tetris, but he had no knowledge of the business world. His superiors in the Academy were not necessarily happy with the success of the game, since they had not intended such a creation from the research team. Furthermore, intellectual property did not exist in the Soviet Union, and Soviet researchers were not allowed to sell their creations. Pajitnov asked his supervisor Victor Brjabrin, who had knowledge of the world outside the Soviet Union, to help him publish Tetris. Pajitnov offered to transfer the rights of the game to the Academy, and was delighted to receive a non-compulsory remuneration from Brjabrin through this deal.
- -In 1986, Brjabrin sent a copy of Tetris to Hungarian game publisher Novotrade. From there, copies of the game began circulating via floppy disks throughout Hungary and as far as Poland. Robert Stein, an international software salesman for the London-based firm Andromeda Software, saw the game's commercial potential during a visit to Hungary in June 1986. After an indifferent response from the Academy, Stein contacted Pajitnov and Brjabrin by fax to obtain the license rights. The researchers expressed interest in forming an agreement with Stein via fax, but they were unaware that this fax communication could be considered a legal contract in the Western world; Stein began to approach other companies to produce the game. - -Stein approached publishers at the 1987 Consumer Electronics Show in Las Vegas. Gary Carlston, co-founder of Broderbund, retrieved a copy and brought it to California. Despite enthusiasm amongst its employees, Broderbund remained skeptical because of the game's Soviet origins. Likewise, Mastertronic co-founder Martin Alper declared that "no Soviet product will ever work in the Western world". Stein ultimately signed two agreements: he sold the European rights to the publisher Mirrorsoft, and the American rights to Spectrum HoloByte. The latter obtained the rights after a visit to Mirrorsoft by Spectrum HoloByte president Phil Adam in which he played Tetris for two hours. At that time, Stein had not yet signed a contract with the Soviet Union. Nevertheless, he sold the rights to the two companies for £3,000 and a royalty of 7.5 to 15% on sales. - -Before releasing Tetris in the United States, Spectrum HoloByte CEO Gilman Louie asked for an overhaul of the game's graphics and music. The Soviet spirit was preserved, with fields illustrating Russian parks and buildings as well as melodies anchored in Russian folklore of the time. The company's goal was to make people want to buy a Russian product; the game came complete with a red package and Cyrillic text, an unusual approach on the other side of the Berlin Wall. The Mirrorsoft version was released for the IBM PC in November 1987, while the Spectrum HoloByte version was released for the same platform in January 1988. - -Tetris was ported to platforms including the Amiga, Atari ST, ZX Spectrum, Commodore 64 and Amstrad CPC. At the time, it made no mention of Pajitnov and came with the announcement of "Made in the United States of America, designed abroad". Tetris was a commercial success in Europe and the United States: Mirrorsoft sold tens of thousands of copies in two months, and Spectrum HoloByte sold over 100,000 units in the space of a year. According to Spectrum HoloByte, the average Tetris player was between 25 and 45 years old and was a manager or engineer. At the Software Publishers Association's Excellence in Software Awards ceremony in March 1988, Tetris won Best Entertainment Software, Best Original Game, Best Strategy Program, and Best Consumer Software. - -Stein, however, was faced with a problem: the only document certifying a license fee was the fax from Pajitnov and Brjabrin, meaning that Stein sold the license for a game he did not yet own. Stein contacted Pajitnov and asked him for a contract for the rights. Stein began negotiations via fax, offering 75% of the revenue generated by Stein from the license. Elektronorgtechnica ("Elorg"), the Soviet Union's central organization for the import and export of computer software, was unconvinced and requested 80% of the revenue. 
Stein made several trips to Moscow and held long discussions with Elorg representatives. Stein came to an agreement with Elorg on February 24, 1988, and on May 10 he signed a contract for a ten-year worldwide Tetris license for all current and future computer systems. Pajitnov and Brjabrin were unaware that the game was already on sale and that Stein had claimed to own the rights prior to the agreement. Although Pajitnov would not receive any percentage from these sales, he said that "the fact that so many people enjoy my game is enough for me". - -In 1988, Spectrum HoloByte sold the Japanese rights to its computer games and arcade machines to Bullet-Proof Software's Henk Rogers, who was searching for games for the Japanese market. Mirrorsoft sold its Japanese rights to Atari Games subsidiary Tengen, which then sold the Japanese arcade rights to Sega and the console rights to BPS, which published versions for Japanese systems, including the MSX2 and the Nintendo Family Computer (Famicom), known outside Japan as the Nintendo Entertainment System. At this point, almost a dozen companies believed they held the Tetris rights, with Stein retaining rights for home computer versions. Devices like the Chinese Brick Game, popular in the early 1990s, often had many variations of Tetris. The Soviet Union's Elorg was still unaware of the deals Stein had negotiated, which had brought it no money. Nevertheless, Tetris was a commercial success in North America, Europe and Asia. - -The same year, Nintendo was preparing to launch its first portable console, the Game Boy. Nintendo was attracted to Tetris by its simplicity and established success on the Famicom. Rogers, who was close to then-Nintendo president Hiroshi Yamauchi, sought to obtain the handheld rights. After a failed negotiation with Atari, Rogers contacted Stein in November 1988. Stein agreed to sign a contract, but explained that he had to consult Elorg before returning to negotiations with Rogers. After contacting Stein several times, Rogers began to suspect a breach of contract on Stein's part, and decided in February 1989 to go to the Soviet Union and negotiate the rights with Elorg. - -Rogers arrived at the Elorg offices uninvited, while Stein and Mirrorsoft manager Kevin Maxwell made an appointment the same day without consulting each other. During the discussions, Rogers explained that he wanted to obtain the rights to Tetris for the Game Boy. After quickly obtaining an agreement with Elorg president Nikolai Belikov, Rogers showed Belikov a Tetris cartridge. Belikov was surprised, as he believed at the time that the rights to Tetris were only signed for computer systems. The parties present accused Nintendo of illegal publication, but Rogers defended himself by explaining that he had obtained the rights via Atari Games, which had itself signed an agreement with Stein. Belikov then realized the complex path that the license had followed within four years because of Stein's contracts, and he constructed a strategy to regain possession of the rights and obtain better commercial agreements. At that point, Elorg was faced with three different companies seeking to buy the rights. - -During this time, Rogers befriended Pajitnov over a game of Go. Pajitnov would support Rogers throughout the discussions, to the detriment of Maxwell, who came to secure the Tetris rights for Mirrorsoft. Belikov proposed to Rogers that Stein's rights would be cancelled and Nintendo would be granted the game rights for both home and handheld consoles.
Rogers flew to the United States to convince Nintendo's American branch to sign up for the rights. The contract with Elorg was signed by Nintendo of America president Minoru Arakawa for $500,000, plus 50 cents per cartridge sold. Elorg then sent an updated contract to Stein. One of the clauses defined a computer as a machine with a screen and keyboard, and thus Stein's rights to console versions were withdrawn. Stein signed the contract without paying attention to this clause, and later realized that all the contract's other clauses, notably on payments, were only a "smokescreen" to deceive him. - -In March 1989, Nintendo sent a cease-and-desist letter to Atari Games concerning production of the NES version of Tetris. Atari Games contacted Mirrorsoft, and was assured that it still retained the rights. Nintendo, however, maintained its position. In response, Mirrorsoft owner Robert Maxwell pressured Soviet leader Mikhail Gorbachev to cancel the contract between Elorg and Nintendo. Despite the threats to Belikov, Elorg refused to give in and highlighted the financial advantages of its contract compared to those signed with Stein and Mirrorsoft. - -On June 15, 1989, Nintendo and Atari Games began a legal battle in the courts of San Francisco. Atari Games sought to prove that the NES was a computer, as indicated by its Japanese name "Famicom", an abbreviation of "Family Computer". In this case, the initial license would authorize Atari Games to release the game. The central argument of Atari Games was that the Famicom was designed to be convertible into a computer via its extension port. This argument was not accepted, and Pajitnov stressed that the initial contract only concerned computers and no other machine. Nintendo brought Belikov to testify on its behalf. Judge Fern M. Smith declared that Mirrorsoft and Spectrum HoloByte never received explicit authorization for marketing on consoles, and on June 21, 1989, ruled in Nintendo's favor, granting it a preliminary injunction against Atari Games in the process. The next day, Atari Games withdrew its NES version from sale, and thousands of cartridges remained unsold in the company's warehouses. - -Sega had planned to release a Genesis version of Tetris on April 15, 1989, but cancelled its release during Nintendo and Atari's legal battle; fewer than ten copies were manufactured. A new port of the arcade version by M2 was included in the Sega Genesis Mini microconsole, released in September 2019. - -Through the legal history of the license, Pajitnov gained a reputation in the West. He was regularly invited by journalists and publishers, through whom he discovered that his game had sold millions of copies, from which he had not made any money. However, he remained humble and proud of the game, which he considered "an electronic ambassador of benevolence". - -In January 1990, Pajitnov was invited by Spectrum HoloByte to the Consumer Electronics Show, and was immersed in American life for the first time. After a period of adaptation, he explored American culture in several cities, including Las Vegas, San Francisco, New York City and Boston, and engaged in interviews with several hosts, including the directors of Nintendo of America. He marveled at the freedom and the advantages of Western society, and spoke often of his travels to his colleagues upon returning to the Soviet Union. He realized that there was no market in Russia for their programs.
At the same time, sales of the Game Boy – bundled with a handheld version of Tetris – exploded, exceeding sales forecasts threefold. - -In 1991, Pajitnov and Pokhilko emigrated to the United States. Pajitnov moved to Seattle, where he produced games for Spectrum HoloByte. In April 1996, as agreed with the Academy ten years earlier and following an agreement with Rogers, the rights to Tetris reverted to Pajitnov. Pajitnov and Rogers founded The Tetris Company in June 1996 to manage the rights on all platforms, the previous agreements having expired. Pajitnov now receives a royalty for each Tetris game and derivative sold worldwide. In 2002, Pajitnov and Rogers founded Tetris Holding after the purchase of the game's remaining rights from Elorg, now a private entity following the dissolution of the Soviet Union. The Tetris Company now owns all rights to the Tetris brand, and is mainly responsible for removing unlicensed clones from the market; in one notable 2012 case, Tetris Holding, LLC v. Xio Interactive, Inc., Tetris Holding and the Tetris Company defended their copyright against an iOS clone, establishing a new standard for evaluating video game clone infringement based on look and feel. - -In December 2005, Electronic Arts acquired Jamdat, a company specializing in mobile games. Jamdat had previously bought a company founded by Rogers in 2001, which managed the Tetris license on mobile platforms. As a result, Electronic Arts held a 15-year license on all mobile phone releases of Tetris. Tetris Online had also developed versions for console-based digital download services. Because of its popularity and simplicity of development, Tetris is often used as a hello world project for programmers coding for a new system or programming language. This has resulted in the availability of a large number of ports for different platforms. For instance, μTorrent and GNU Emacs contain similar shape-stacking games as easter eggs. - -Within official franchise installments, each version has made improvements to accommodate advancing technology and the goal of providing a more complete game. Developers are given freedom to add new modes of play and revisit the concept from different angles. Some concepts developed in official versions have been integrated into the Tetris guidelines in order to standardize future versions and allow players to migrate between different versions with little effort. Orson Scott Card joked that the game "proves that Russia still wants to bury us. I shudder to think of the blow to our economy as computer productivity drops to 0". Noting that Tetris was not copy-protected, he wrote: "Obviously, the game is meant to find its way onto every American machine". The IBM version of the game was reviewed in 1988 in Dragon No. 135 by Hartley, Patricia, and Kirk Lesser in "The Role of Computers" column. The reviewers gave the game 4.5 out of 5 stars. The Lessers later reviewed Spectrum HoloByte's Macintosh version of Tetris in 1989 in Dragon No. 141, giving that version 5 out of 5 stars. Macworld reviewed the Macintosh version of Tetris in 1988, praising its strategic gameplay, stating that "Tetris offers the rare combination of being simple to learn but extremely challenging to play", and also praising the inclusion of the Desk Accessory version, which uses less RAM.
Macworld summarized its review by listing Tetris' pros and cons, stating that Tetris is "elegant; easy to play; challenging and addicting; requires quick thinking, long-term strategy, and lightning reflexes" and listing Tetris' cons as "None". - -Roy Wagner reviewed the game for Computer Gaming World, and said that "Tetris is simple in concept, simple to play, and a unique design". - -A hoax circulated in February 2019 claiming that the original NES instruction manual for Tetris had named the seven tetrominoes with names like "Orange Ricky", "Hero" and "Smashboy". Despite being disproven by video game historians, a question on the October 7, 2019 airing of Jeopardy! alluded to these names. - -Spectrum HoloByte's versions for personal computers sold 150,000 copies in two years, between 1988 and 1990. Tetris gained greater success with the release of Nintendo's NES version and Game Boy version in 1989. Within six months of release, by 1990, the NES version had sold 1.5 million copies, while Game Boy bundles with Tetris sold 2 million units. It topped the Japanese sales charts during August–September 1989 and from December 1989 to January 1990. Tetris became Nintendo's top-seller for the first few months of 1990. Nintendo's versions of Tetris went on to sell 7.5 million copies in the United States by 1992, and more than 20 million worldwide by 1996. Nintendo eventually sold a total of 35 million copies for the Game Boy, and 8 million for the NES. - -Sega's arcade version of Tetris was also successful in Japan, where it became the highest-grossing arcade game of 1989. Spectrum HoloByte's PC versions of Tetris eventually sold more than 1 million copies, with women accounting for nearly half of Tetris players, in contrast to most other PC games. - -By January 2010, the Tetris franchise had sold more than 170 million copies, including approximately 70 million physical copies and over 100 million copies for cell phones. The game won three Software Publishers Association Excellence in Software Awards in 1989, including Best Entertainment Program and the Critic's Choice Award for consumers. Computer Gaming World in 1996 ranked it 14th on the magazine's list of the most innovative computer games. That same year, Next Generation listed it as number 2 on their "Top 100 Games of All Time", commenting that "there is something so perfect, so Zen about the falling blocks of Tetris that the game has captured the interest of everyone who has ever played it". In 1999, Next Generation listed Tetris as number 2 on their "Top 50 Games of All Time", commenting that "Tetris is the essence of gameplay at its most basic. You have a simple goal, simple controls, and simple objects to manipulate". On March 12, 2007, The New York Times reported that Tetris was named to a list of the ten most important video games of all time, the so-called game canon. After the canon was announced at the 2007 Game Developers Conference, the Library of Congress took up the video game preservation proposal and began with these ten games, including Tetris. - -In 2007, video game website GameFAQs hosted its sixth annual "Character Battle", in which users nominate their favorite video game characters for a popularity contest. The L-shaped Tetris piece (or "L-Block" as it was called) entered the contest as a joke character, but on November 4, it won the contest.
On June 6, 2009, Google honored the 25th anniversary of Tetris by changing its logo to a version drawn with Tetris blocks, with the "l" drawn as the long Tetris block lowering into place. In 2009, Game Informer put Tetris third on their list of "The Top 200 Games of All Time", saying that "if a game could be considered ageless, it's Tetris". The Game Informer staff also placed it third on their 2001 list of the 100 best games ever. - -Electronic Gaming Monthly's 100th issue had Tetris in first place in the "100 Best Games of All Time", commenting that "Tetris is as pure as a video game can get. ... When the right blocks come your way - and if you can manage to avoid mistakes - the game can be relaxing. One mislaid block, however, and your duties switch to damage control, a mad, panicky dash to clean up your mess or die". Tetris was also the only game for which the list did not specify one or two versions; the editors explained that after deadlocking over which version was best, they concluded that there was no wrong version of Tetris to play. In 2007, Tetris came in second place in IGN's "100 Greatest Video Games of All Time". - -In 1991, PC Format named Tetris one of the 50 best computer games ever. The editors called it "incredibly addictive" and "one of the best games of all time". - -Tetris has been the subject of academic research. Vladimir Pokhilko was the first clinical psychologist to conduct experiments using Tetris. Subsequently, it has been used for research in several fields, including the theory of computation, algorithmic theory, and cognitive psychology. - -During the game of Tetris, blocks appear to adsorb onto the lower surface of the window. This has led scientists to use tetrominoes "as a proxy for molecules with a complex shape" to model their "adsorption on a flat surface" to study the thermodynamics of nanoparticles. - -Tetris appeared in the 2010 short animated film Pixels, and in the 2015 movie of the same name, which was inspired by the short. - -In 2014, it was announced that Threshold Entertainment had teamed up with The Tetris Company to develop Tetris - The Movie, a film adaptation of the game. Threshold's CEO described the film as an epic sci-fi adventure that would be the first part of a trilogy. In 2016, sources reported on a press release claiming the film would be shot in China in 2017 with an $80 million budget. However, no 2017 or later sources confirm that the film ever actually went into production. - -A movie titled Tetris, about the legal battle surrounding the game in the late 1980s, was announced in 2020, with Taron Egerton starring as Henk Rogers. diff --git a/wiki/wikipedia/4174.txt b/wiki/wikipedia/4174.txt deleted file mode 100644 index 3d30a84ae81a1de54b3151afbbfb8900fdea38c9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4174.txt +++ /dev/null @@ -1,19 +0,0 @@ -In mathematics, Dickson's lemma states that every set of $n$-tuples of natural numbers has finitely many minimal elements. This simple fact from combinatorics is attributed to the American algebraist L. E. Dickson, who used it to prove a result in number theory about perfect numbers. - -Let $K$ be a fixed number, and let $S = \{(x,y)\mid xy\ge K\}$ be the set of pairs of numbers whose product is at least $K$. When defined over the positive real numbers, $S$ has infinitely many minimal elements of the form $(x,K/x)$, one for each positive number $x$; this set of points forms one of the branches of a hyperbola.
The pairs on this hyperbola are minimal, because it is not possible for a different pair that belongs to $S$ to be less than or equal to $(x,K/x)$ in both of its coordinates. However, Dickson's lemma concerns only tuples of natural numbers, and over the natural numbers there are only finitely many minimal pairs. Every minimal pair $(x,y)$ of natural numbers has $x\le K$ and $y\le K$, for if $x$ were greater than $K$ then $(x-1,y)$ would also belong to $S$, contradicting the minimality of $(x,y)$, and symmetrically if $y$ were greater than $K$ then $(x,y-1)$ would also belong to $S$. Therefore, over the natural numbers, $S$ has at most $K^2$ minimal elements, a finite number. - -Let $\mathbb{N}$ be the set of non-negative integers (natural numbers), let $n$ be any fixed constant, and let $\mathbb{N}^n$ be the set of $n$-tuples of natural numbers. These tuples may be given a pointwise partial order, the product order, in which $(a_1,a_2,\dots,a_n)\le (b_1,b_2,\dots,b_n)$ if and only if, for every $i$, $a_i\le b_i$. - -The set of tuples that are greater than or equal to some particular tuple $(a_1,a_2,\dots,a_n)$ forms a positive orthant with its apex at the given tuple. - -With this notation, Dickson's lemma may be stated in several equivalent forms: - -*In every subset $S\neq\emptyset$ of $\mathbb{N}^n$, there is at least one but no more than a finite number of elements that are minimal elements of $S$ for the pointwise partial order. - -*For every infinite sequence $(x_i)_{i\in\mathbb{N}}$ of $n$-tuples of natural numbers, there exist two indices $i<j$ such that $x_i\le x_j$ holds with respect to the pointwise partial order. - -Given a set $P$ of $n$ points in $\mathbb{R}^d$, the $k$-center problem asks for a point set $S \subset \mathbb{R}^d$ with $|S| = k$ such that $\max_{p \in P}\left(\min_{q \in S} d(p, q)\right)$ is minimized. - -In the case of the Euclidean metric with $k = 1$, it is known as the smallest enclosing sphere problem or 1-center problem. Its study can be traced back at least to 1860; see smallest enclosing circle and bounding sphere for more details. - -It has been proven that exactly solving the k-center problem is NP-hard. - -Approximating the problem was also found to be NP-hard when the error is small. The error level of an approximation algorithm is measured as an approximation factor, defined as the ratio between the cost of the approximation and the optimum. Approximating the k-center problem is known to be NP-hard when the approximation factor is less than 1.822 (dimension = 2) or 2 (dimension > 2). - -Facility location problems are often solved as integer programs. In this context, facility location problems are often posed as follows: suppose there are $n$ facilities and $m$ customers. We wish to choose (1) which of the $n$ facilities to open, and (2) which (open) facilities to use to supply the $m$ customers, in order to satisfy some fixed demand at minimum cost. We introduce the following notation: let $f_i$ denote the (fixed) cost of opening facility $i$, for $i=1,\dots,n$. Let $c_{ij}$ denote the cost to ship a product from facility $i$ to customer $j$ for $i=1,\dots,n$ and $j=1,\dots,m$. Let $d_j$ denote the demand of customer $j$ for $j=1,\dots,m$. Further suppose that each facility has a maximum output. Let $u_i$ denote the maximum amount of product that can be produced by facility $i$, that is, let $u_i$ denote the capacity of facility $i$. The remainder of this section follows - -In our initial formulation, introduce a binary variable $x_i$ for $i=1,\dots,n$, where $x_i=1$ if facility $i$ is open, and $x_i=0$ otherwise. Further introduce the variable $y_{ij}$ for $i=1,\dots,n$ and $j=1,\dots,m$ which represents the fraction of the demand $d_j$ filled by facility $i$.
The so-called capacitated facility location problem is then given by

\begin{array}{rl}
\min & \displaystyle\sum_{i=1}^n\sum_{j=1}^m c_{ij} d_j y_{ij}+\sum_{i=1}^n f_i x_i \\
\text{s.t.} & \displaystyle\sum_{i=1}^n y_{ij}=1 \text{ for all } j=1,\dots,m \\
& \displaystyle \sum_{j=1}^m d_j y_{ij}\leqslant u_i x_i \text{ for all } i=1,\dots,n \\
& y_{ij}\geqslant 0 \text{ for all } i=1,\dots,n \text{ and } j=1,\dots,m \\
& x_i\in\{0,1\} \text{ for all } i=1,\dots,n
\end{array}

Note that the second set of constraints ensures that if $x_i=0$, that is, facility $i$ isn't open, then $y_{ij}=0$ for all $j$, that is, no demand for any customer can be filled from facility $i$. - -A common case of the capacitated facility location problem above is the case when $u_i=+\infty$ for all $i=1,\dots,n$. In this case, it is always optimal to satisfy all of the demand from customer $j$ from the nearest open facility. Because of this, we may replace the continuous variables $y_{ij}$ from above with the binary variables $z_{ij}$, where $z_{ij}=1$ if customer $j$ is supplied by facility $i$, and $z_{ij}=0$ otherwise. The uncapacitated facility location problem is then given by

\begin{array}{rl}
\min & \displaystyle\sum_{i=1}^n\sum_{j=1}^m c_{ij} d_j z_{ij}+\sum_{i=1}^n f_i x_i \\
\text{s.t.} & \displaystyle\sum_{i=1}^n z_{ij}=1 \text{ for all } j=1,\dots,m \\
& \displaystyle \sum_{j=1}^m z_{ij}\leqslant M x_i \text{ for all } i=1,\dots,n \\
& z_{ij}\in\{0,1\} \text{ for all } i=1,\dots,n \text{ and } j=1,\dots,m \\
& x_i\in\{0,1\} \text{ for all } i=1,\dots,n
\end{array}

where $M$ is a constant chosen to be suitably large. The choice of $M$ can affect computation results; the best choice in this instance is obvious: take $M=m$. Then, if $x_i=1$, any choice of the $z_{ij}$ will satisfy the second set of constraints. - -Another formulation possibility for the uncapacitated facility location problem is to disaggregate the capacity constraints (the big-$M$ constraints). That is, replace the constraints

\sum_{j=1}^{m} z_{ij}\leqslant M x_i \text{ for all } i=1,\dots,n

with the constraints

z_{ij}\leqslant x_i \text{ for all } i=1,\dots,n \text{ and } j=1,\dots,m

In practice, this new formulation performs significantly better, in the sense that it has a tighter linear programming relaxation than the first formulation. - -Municipal solid waste management still remains a challenge for developing countries because of increasing waste production and high costs associated with waste management. Through the formulation and exact resolution of a facility location problem it is possible to optimize the location of landfills for waste disposal. - -A particular subset of cluster analysis problems can be viewed as facility location problems. In a centroid-based clustering problem, the objective is to partition $ n $ data points (elements of a common metric space) into equivalence classes, often called colors, such that points of the same color are close to one another (equivalently, such that points of different colors are far from one another). - -To see how one might view (read "transform" or "reduce") a centroid-based clustering problem as a (metric) facility location problem, view each data point in the former as a demand point in the latter. Suppose that the data to be clustered are elements of a metric space $ M $ (e.g. let $ M $ be $ p $-dimensional Euclidean space for some fixed $ p $).
In the facility location problem that we are constructing, we permit facilities to be placed at any point within this metric space $ M $; this defines the set of allowed facility locations $ L $. We define the costs $ c_{\ell, d} $ to be the pairwise distances between location-demand point pairs (e.g., see metric k-center). In a centroid-based clustering problem, one partitions the data into $ k $ equivalence classes (i.e. colors), each of which has a centroid. Let us see how a solution to our constructed facility location problem also achieves such a partition. A feasible solution is a non-empty subset $ L' \subseteq L $ of $ k $ locations. These locations in our facility location problem comprise a set of $ k $ centroids in our centroid-based clustering problem. Now, assign each demand point $ d $ to the location $ \ell^* $ that minimizes its servicing cost; that is, assign the data point $ d $ to the centroid $ \ell^* := \mathrm{argmin}_{\ell \in L} \{c_{\ell, d}\} $ (break ties arbitrarily). This achieves the partitioning provided that the facility location problem's costs $ c_{\ell, d} $ are defined such that they are the images of the centroid-based clustering problem's distance function. - -The popular algorithms textbook Algorithm Design provides a related problem description and an approximation algorithm. The authors refer to the metric facility location problem (i.e. the centroid-based clustering problem or the metric $ k $-center problem) as the center selection problem, thereby growing the list of synonyms. - -Furthermore, note that in our above definition of the facility location problem the objective function $ f $ is general. Specific choices of $ f $ yield different variants of the facility location problem, and hence different variants of the centroid-based clustering problem. For example, one might choose to minimize the sum of distances from each location to each of its assigned demand points (à la the Weber problem), or one might elect to minimize the maximum of all such distances (à la the 1-center problem). diff --git a/wiki/wikipedia/4176.txt b/wiki/wikipedia/4176.txt deleted file mode 100644 index ad14fc4354893461c7c391e95659c384359b03d0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4176.txt +++ /dev/null @@ -1,213 +0,0 @@ -The Monty Hall problem is a brain teaser, in the form of a probability puzzle, loosely based on the American television game show Let's Make a Deal and named after its original host, Monty Hall. The problem was originally posed (and solved) in a letter by Steve Selvin to the American Statistician in 1975. It became famous as a question from reader Craig F. Whitaker's letter quoted in Marilyn vos Savant's "Ask Marilyn" column in Parade magazine in 1990: "Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, 'Do you want to pick door No. 2?' Is it to your advantage to switch your choice?" - -Vos Savant's response was that the contestant should switch to the other door. Under the standard assumptions, the switching strategy has a 2/3 probability of winning the car, while the strategy that remains with the initial choice has only a 1/3 probability. - -When the player first makes their choice, there is a 2/3 chance that the car is behind one of the doors not chosen. This probability does not change after the host opens one of the unchosen doors. When the host provides information about the 2 unchosen doors (revealing that one of them does not have the car behind it), the 2/3 chance of the car being behind one of the unchosen doors rests on the unchosen and unrevealed door, as opposed to the 1/3 chance of the car being behind the door the contestant chose initially.
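The 1/3 versus 2/3 split can be checked by exact enumeration rather than argument. The following sketch (in Python; the door labels and function name are ours, purely for illustration) sums the exact probability of every car placement and host choice under the standard host rules made precise in the next paragraphs:

```
from fractions import Fraction

def win_probability(switch):
    """Exact win probability when the player initially picks door 0.

    By symmetry, fixing the initial pick loses no generality."""
    pick, total = 0, Fraction(0)
    for car in range(3):  # the car is behind each door with probability 1/3
        goat_doors = [d for d in range(3) if d != pick and d != car]
        for host in goat_doors:  # the host opens each legal goat door equally often
            p = Fraction(1, 3) / len(goat_doors)
            final = next(d for d in range(3) if d not in (pick, host)) if switch else pick
            total += p * (final == car)
    return total

print(win_probability(switch=False))  # 1/3
print(win_probability(switch=True))   # 2/3
```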
- -The given probabilities depend on specific assumptions about how the host and contestant choose their doors. A key insight is that, under these standard conditions, there is more information about doors 2 and 3 than was available at the beginning of the game when door 1 was chosen by the player: the host's deliberate action adds value to the door he did not choose to eliminate, but not to the one chosen by the contestant originally. Another insight is that switching doors is a different action from choosing between the two remaining doors at random, as the former uses the previous information and the latter does not. Other possible behaviors of the host than the one described can reveal different additional information, or none at all, and yield different probabilities. - -Many readers of vos Savant's column refused to believe switching is beneficial and rejected her explanation. After the problem appeared in Parade, approximately 10,000 readers, including nearly 1,000 with PhDs, wrote to the magazine, most of them calling vos Savant wrong. Even when given explanations, simulations, and formal mathematical proofs, many people still did not accept that switching is the best strategy. Paul Erdős, one of the most prolific mathematicians in history, remained unconvinced until he was shown a computer simulation demonstrating vos Savant's predicted result. - -The problem is a paradox of the veridical type, because vos Savant's solution is so counterintuitive it can seem absurd, but is nevertheless demonstrably true. The Monty Hall problem is mathematically closely related to the earlier Three Prisoners problem and to the much older Bertrand's box paradox. - -Steve Selvin wrote a letter to the American Statistician in 1975 describing a problem based on the game show Let's Make a Deal, dubbing it the "Monty Hall problem" in a subsequent letter. The problem is mathematically equivalent to the Three Prisoners problem described in Martin Gardner's "Mathematical Games" column in Scientific American in 1959 and the Three Shells Problem described in Gardner's book Aha! Gotcha. - -Under the standard assumptions, the probability of winning the car after switching is 2/3. - -The key to this solution is the behavior of the host. The Parade version leaves the host's protocol ambiguous. However, Marilyn vos Savant's solution printed alongside Whitaker's question implies, and both Selvin and vos Savant explicitly define, the role of the host as follows: - -# The host must always open a door that was not picked by the contestant. - -# The host must always open a door to reveal a goat and never the car. - -# The host must always offer the chance to switch between the originally chosen door and the remaining closed door. - -When any of these assumptions is varied, it can change the probability of winning by switching doors, as detailed in the section below. It is also typically presumed that the car is initially hidden randomly behind the doors and that, if the player initially picks the car, then the host's choice of which goat-hiding door to open is random. Some authors, independently or inclusively, assume that the player's initial choice is random as well.
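As an empirical counterpart to the exact enumeration above, here is a minimal Monte Carlo sketch (again Python, again only an illustration and not drawn from any of the cited sources) that encodes the three host rules just listed:

```
import random

def play(switch, trials=100_000):
    """Estimate the win rate for the standard three-door game."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # the car is hidden uniformly at random
        pick = random.randrange(3)  # the player's initial choice
        # Rules 1 and 2: open a door that is neither the pick nor the car,
        # choosing uniformly when both goat doors are available.
        opened = random.choice([d for d in range(3) if d not in (pick, car)])
        if switch:
            # Rule 3: the player may switch to the remaining closed door.
            pick = next(d for d in range(3) if d not in (pick, opened))
        wins += (pick == car)
    return wins / trials

print("stay  :", play(switch=False))  # ≈ 0.333
print("switch:", play(switch=True))   # ≈ 0.667
```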
- -The solution presented by vos Savant in Parade shows the three possible arrangements of one car and two goats behind three doors and the result of staying or switching after initially picking door 1 in each case: - -A player who stays with the initial choice wins in only one out of three of these equally likely possibilities, while a player who switches wins in two out of three. - -An intuitive explanation is that, if the contestant initially picks a goat (2 of 3 doors), the contestant will win the car by switching because the other goat can no longer be picked, whereas if the contestant initially picks the car (1 of 3 doors), the contestant will not win the car by switching. The fact that the host subsequently reveals a goat in one of the unchosen doors changes nothing about the initial probability. - -Most people come to the conclusion that switching does not matter because there are two unopened doors and one car and that it is a 50/50 choice. This would be true if the host opens a door randomly, but that is not the case; the door opened depends on the player's initial choice, so the assumption of independence does not hold. Before the host opens a door there is a 1/3 probability that the car is behind each door. If the car is behind door 1 the host can open either door 2 or door 3, so the probability that the car is behind door 1 and the host opens door 3 is 1/3 × 1/2 = 1/6. If the car is behind door 2 (and the player has picked door 1) the host must open door 3, so the probability that the car is behind door 2 and the host opens door 3 is 1/3 × 1 = 1/3. These are the only cases where the host opens door 3, so if the player has picked door 1 and the host opens door 3, the car is twice as likely to be behind door 2 as door 1. The key is that if the car is behind door 2 the host must open door 3, but if the car is behind door 1 the host can open either door. - -Another way to understand the solution is to consider the two original unchosen doors together. As Cecil Adams puts it, "Monty is saying in effect: you can keep your one door or you can have the other two doors." The 2/3 chance of finding the car has not been changed by the opening of one of these doors because Monty, knowing the location of the car, is certain to reveal a goat. So the player's choice after the host opens a door is no different than if the host offered the player the option to switch from the original chosen door to the set of both remaining doors. The switch in this case clearly gives the player a 2/3 probability of choosing the car. - -As Keith Devlin says, "By opening his door, Monty is saying to the contestant 'There are two doors you did not choose, and the probability that the prize is behind one of them is 2/3. I'll help you by using my knowledge of where the prize is to open one of those two doors to show you that it does not hide the prize. You can now take advantage of this additional information. Your choice of door A has a chance of 1 in 3 of being the winner. I have not changed that. But by eliminating door C, I have shown you that the probability that door B hides the prize is 2 in 3.'" - -Vos Savant suggests that the solution will be more intuitive with 1,000,000 doors rather than 3. In this case, there are 999,999 doors with goats behind them and one door with a prize. After the player picks a door, the host opens 999,998 of the remaining doors. On average, in 999,999 times out of 1,000,000, the remaining door will contain the prize.
Intuitively, the player should ask how likely it is that, given a million doors, they managed to pick the right one initially. Stibel et al. proposed that working memory demand is taxed during the Monty Hall problem and that this forces people to "collapse" their choices into two equally probable options. They report that when the number of options is increased to more than 7 choices (7 doors), people tend to switch more often; however, most contestants still incorrectly judge the probability of success at 50:50. - -Vos Savant wrote in her first column on the Monty Hall problem that the player should switch. She received thousands of letters from her readers – the vast majority of which, including many from readers with PhDs, disagreed with her answer. During 1990–1991, three more of her columns in Parade were devoted to the paradox. Numerous examples of letters from readers of vos Savant's columns are presented and discussed in The Monty Hall Dilemma: A Cognitive Illusion Par Excellence. - -The discussion was replayed in other venues (e.g., in Cecil Adams' "The Straight Dope" newspaper column) and reported in major newspapers such as The New York Times. - -In an attempt to clarify her answer, she proposed a shell game to illustrate: "You look away, and I put a pea under one of three shells. Then I ask you to put your finger on a shell. The odds that your choice contains a pea are 1/3, agreed? Then I simply lift up an empty shell from the remaining other two. As I can (and will) do this regardless of what you've chosen, we've learned nothing to allow us to revise the odds on the shell under your finger." She also proposed a similar simulation with three playing cards. - -Vos Savant commented that, though some confusion was caused by some readers' not realizing they were supposed to assume that the host must always reveal a goat, almost all her numerous correspondents had correctly understood the problem assumptions, and were still initially convinced that vos Savant's answer ("switch") was wrong. - -When first presented with the Monty Hall problem, an overwhelming majority of people assume that each door has an equal probability and conclude that switching does not matter. Out of 228 subjects in one study, only 13% chose to switch. In vos Savant's book The Power of Logical Thinking, cognitive psychologist Massimo Piattelli-Palmarini is quoted as writing: "No other statistical puzzle comes so close to fooling all the people all the time [and] even Nobel physicists systematically give the wrong answer, and that they insist on it, and they are ready to berate in print those who propose the right answer." Pigeons repeatedly exposed to the problem show that they rapidly learn to always switch, unlike humans. - -Most statements of the problem, notably the one in Parade, do not match the rules of the actual game show and do not fully specify the host's behavior or that the car's location is randomly selected. Krauss and Wang conjecture that people make the standard assumptions even if they are not explicitly stated. - -Although these issues are mathematically significant, even when controlling for these factors, nearly all people still think each of the two unopened doors has an equal probability and conclude that switching does not matter. This "equal probability" assumption is a deeply rooted intuition. People strongly tend to think probability is evenly distributed across as many unknowns as are present, whether it is or not. - -The problem continues to attract the attention of cognitive psychologists.
The typical behavior of the majority, i.e., not switching, may be explained by phenomena known in the psychological literature as: - -# The endowment effect, in which people tend to overvalue the winning probability of the already chosen – already "owned" – door. - -# The status quo bias, in which people prefer to stick with the choice of door they have already made. - -# The errors of omission vs. errors of commission effect, in which, all other things being equal, people prefer to make errors through inaction (Stay) as opposed to action (Switch). - -Experimental evidence confirms that these are plausible explanations that do not depend on probability intuition. Another possibility is that people's intuition simply does not deal with the textbook version of the problem, but with a real game show setting. There, the possibility exists that the show master plays deceitfully by opening other doors only if a door with the car was initially chosen. A show master who plays deceitfully half of the time modifies the winning chances so that, when the offer to switch is made, the two remaining doors are equally likely to win. - -As already remarked, most sources in the field of probability, including many introductory probability textbooks, solve the problem by showing the conditional probabilities that the car is behind door 1 and door 2 are 1/3 and 2/3 (not 1/2 and 1/2) given that the contestant initially picks door 1 and the host opens door 3; various ways to derive and understand this result were given in the previous subsections. - -Among these sources are several that explicitly criticize the popularly presented "simple" solutions, saying these solutions are "correct but ... shaky", or do not "address the problem posed", or are "incomplete", or are "unconvincing and misleading", or are (most bluntly) "false". - -Sasha Volokh (2015) wrote that "any explanation that says something like 'the probability of door 1 was 1/3, and nothing can change that...' is automatically fishy: probabilities are expressions of our ignorance about the world, and new information can change the extent of our ignorance." - -Some say that these solutions answer a slightly different question – one phrasing is "you have to announce before a door has been opened whether you plan to switch". - -The simple solutions show in various ways that a contestant who is determined to switch will win the car with probability 2/3, and hence that switching is the winning strategy, if the player has to choose in advance between "always switching" and "always staying". However, the probability of winning by always switching is a logically distinct concept from the probability of winning by switching given that the player has picked door 1 and the host has opened door 3. As one source says, "the distinction between [these questions] seems to confound many". The fact that these are different can be shown by varying the problem so that these two probabilities have different numeric values. For example, assume the contestant knows that Monty does not pick the second door randomly among all legal alternatives but instead, when given an opportunity to pick between two losing doors, Monty will open the one on the right. In this situation, the following two questions have different answers: - -# What is the probability of winning the car by always switching? - -# What is the probability of winning the car given the player has picked door 1 and the host has opened door 3? - -The answer to the first question is 2/3, as is correctly shown by the "simple" solutions.
But the answer to the second question is now different: the conditional probability the car is behind door 1 or door 2 given the host has opened door 3 (the door on the right) is 1/2. This is because Monty's preference for rightmost doors means that he opens door 3 if the car is behind door 1 (which it is originally with probability 1/3) or if the car is behind door 2 (also originally with probability 1/3). For this variation, the two questions yield different answers. However, as long as the initial probability the car is behind each door is 1/3, it is never to the contestant's disadvantage to switch, as the conditional probability of winning by switching is always at least 1/2. - -In Morgan et al., four university professors published an article in The American Statistician claiming that vos Savant gave the correct advice but the wrong argument. They believed the question asked for the chance of the car being behind door 2 given the player's initial pick of door 1 and the opened door 3, and they showed this chance was anything between 1/2 and 1 depending on the host's decision process given the choice. Only when the decision is completely randomized is the chance 2/3. - -In an invited comment and in subsequent letters to the editor, Morgan et al. were supported by some writers and criticized by others; in each case a response by Morgan et al. is published alongside the letter or comment in The American Statistician. In particular, vos Savant defended herself vigorously. Morgan et al. complained in their response to vos Savant that vos Savant still had not actually responded to their own main point. Later in their response to Hogbin and Nijdam, they did agree that it was natural to suppose that the host chooses a door to open completely at random, when he does have a choice, and hence that the conditional probability of winning by switching (i.e., conditional given the situation the player is in when he has to make his choice) has the same value, 2/3, as the unconditional probability of winning by switching (i.e., averaged over all possible situations). This equality was already emphasized by Bell (1992), who suggested that Morgan et al.'s mathematically involved solution would appeal only to statisticians, whereas the equivalence of the conditional and unconditional solutions in the case of symmetry was intuitively obvious. - -There is disagreement in the literature regarding whether vos Savant's formulation of the problem, as presented in Parade, is asking the first or second question, and whether this difference is significant. Behrends concludes that "One must consider the matter with care to see that both analyses are correct", which is not to say that they are the same. Several critics of the paper by Morgan et al., whose contributions were published alongside the original paper, criticized the authors for altering vos Savant's wording and misinterpreting her intention. One discussant (William Bell) considered it a matter of taste whether one explicitly mentions that, under the standard conditions, which door is opened by the host is independent of whether one should want to switch. - -Among the simple solutions, the "combined doors solution" comes closest to a conditional solution, as we saw in the discussion of approaches using the concept of odds and Bayes' theorem. It is based on the deeply rooted intuition that revealing information that is already known does not affect probabilities.
But knowing that the host can open one of the two unchosen doors to show a goat does not mean that opening a specific door would not affect the probability that the car is behind the initially chosen door. The point is, though we know in advance that the host will open a door and reveal a goat, we do not know which door he will open. If the host chooses uniformly at random between doors hiding a goat (as is the case in the standard interpretation), this probability indeed remains unchanged, but if the host can choose non-randomly between such doors, then the specific door that the host opens reveals additional information. The host can always open a door revealing a goat and (in the standard interpretation of the problem) the probability that the car is behind the initially chosen door does not change, but it is not because of the former that the latter is true. Solutions based on the assertion that the host's actions cannot affect the probability that the car is behind the initially chosen door appear persuasive, but the assertion is simply untrue unless each of the host's two choices is equally likely, if he has a choice. The assertion therefore needs to be justified; without justification being given, the solution is at best incomplete. The answer can be correct but the reasoning used to justify it is defective. - -The simple solutions above show that a player with a strategy of switching wins the car with overall probability 2/3, i.e., without taking account of which door was opened by the host. In contrast, most sources in the field of probability calculate the conditional probabilities that the car is behind door 1 and door 2, which are 1/3 and 2/3 given the contestant initially picks door 1 and the host opens door 3. The solutions in this section consider just those cases in which the player picked door 1 and the host opened door 3. - -If we assume that the host opens a door at random, when given a choice, then which door the host opens gives us no information at all as to whether or not the car is behind door 1. In the simple solutions, we have already observed that the probability that the car is behind door 1, the door initially chosen by the player, is initially 1/3. Moreover, the host is certainly going to open a (different) door, so opening a door (which door unspecified) does not change this. 1/3 must be the average of the probability that the car is behind door 1 given that the host picked door 2 and the probability given that the host picked door 3, because these are the only two possibilities. But these two probabilities are the same. Therefore, they are both equal to 1/3. This shows that the chance that the car is behind door 1, given that the player initially chose this door and given that the host opened door 3, is 1/3, and it follows that the chance that the car is behind door 2, given that the player initially chose door 1 and the host opened door 3, is 2/3. The analysis also shows that the overall success rate of 2/3, achieved by always switching, cannot be improved, and underlines what already may well have been intuitively obvious: the choice facing the player is between the door initially chosen and the other door left closed by the host; the specific numbers on these doors are irrelevant. - -By definition, the conditional probability of winning by switching given the contestant initially picks door 1 and the host opens door 3 is the probability for the event "car is behind door 2 and host opens door 3" divided by the probability for "host opens door 3".
These probabilities can be determined by referring to the conditional probability table below, or to an equivalent decision tree. The conditional probability of winning by switching is $\frac{1/3}{1/3 + 1/6}$, which is $\frac{2}{3}$. - -The conditional probability table below shows how 300 cases, in all of which the player initially chooses door 1, would be split up, on average, according to the location of the car and the choice of door to open by the host. - -Many probability textbooks and articles in the field of probability theory derive the conditional probability solution through a formal application of Bayes' theorem; among them books by Gill and Henze. Use of the odds form of Bayes' theorem, often called Bayes' rule, makes such a derivation more transparent. - -Initially, the car is equally likely to be behind any of the three doors: the odds on door 1, door 2, and door 3 are 1 : 1 : 1. This remains the case after the player has chosen door 1, by independence. According to Bayes' rule, the posterior odds on the location of the car, given that the host opens door 3, are equal to the prior odds multiplied by the Bayes factor or likelihood, which is, by definition, the probability of the new piece of information (host opens door 3) under each of the hypotheses considered (location of the car). Now, since the player initially chose door 1, the chance that the host opens door 3 is 50% if the car is behind door 1, 100% if the car is behind door 2, and 0% if the car is behind door 3. Thus the Bayes factor consists of the ratios 1/2 : 1 : 0, or equivalently 1 : 2 : 0, while the prior odds were 1 : 1 : 1. Hence, the posterior odds become equal to the Bayes factor 1 : 2 : 0. Given that the host opened door 3, the probability that the car is behind door 3 is zero, and it is twice as likely to be behind door 2 as door 1. - -Richard Gill analyzes the likelihood for the host to open door 3 as follows. Given that the car is not behind door 1, it is equally likely that it is behind door 2 or 3. Therefore, the chance that the host opens door 3 is 50%. Given that the car is behind door 1, the chance that the host opens door 3 is also 50%, because, when the host has a choice, either choice is equally likely. Therefore, whether or not the car is behind door 1, the chance that the host opens door 3 is 50%. The information "host opens door 3" contributes a Bayes factor or likelihood ratio of 1 : 1 on whether or not the car is behind door 1. Initially, the odds against door 1 hiding the car were 2 : 1. Therefore, the posterior odds against door 1 hiding the car remain the same as the prior odds, 2 : 1. - -In words, the information which door is opened by the host (door 2 or door 3?) reveals no information at all about whether or not the car is behind door 1, and this is precisely what is alleged to be intuitively obvious by supporters of simple solutions, or, using the idioms of mathematical proofs, "obviously true, by symmetry". - -Consider the events $C_i$, indicating that the car is behind door number $i$; $X_i$, indicating that the player initially chooses door $i$; and $H_i$, indicating that the host opens door $i$. The player initially chooses door $i = 1$ (event $X_1$) and the host opens door $i = 3$ (event $H_3$). - -In this case, we have:

\begin{align}
P(H_3|C_1,X_1)&=\tfrac12 \\
P(H_3|C_2,X_1)&=1 \\
P(H_3|C_3,X_1)&=0 \\
P(C_i) &= \tfrac13 \\
P(C_i,X_i) &= P(C_i)P(X_i) \\
P(H_3|X_1) &= \tfrac12
\end{align}

$P(H_3|X_1) = \tfrac12$ because this expression only depends on $X_1$, not on any $C_i$.
So, in this particular expression, the host's choice does not depend on where the car is, and there are only two remaining doors once $X_1$ is chosen (for instance, $P(H_1|X_1) = 0$); and $P(C_i,X_i) = P(C_i)P(X_i)$ because $C_i$ and $X_i$ are independent events (the player does not know where the car is when making a choice). - -Then, if the player initially selects door 1, and the host opens door 3, we prove that the conditional probability of winning by switching is:

\begin{align}
P(C_2|H_3,X_1) &= \tfrac23
\end{align}

From Bayes' rule, we know that $P(A,B) = P(A|B)P(B) = P(B|A)P(A)$. Extending this logic to multiple events, for example $A$, $B$ and $C$, we get that we can play with the different subsets of $\{A, B, C\}$ to calculate the probability of the intersection, as a tool to simplify the calculation of our conditional probability:

\begin{align}
P(A,B,C) &= P(A|B,C)P(B,C) \\
&= P(B|A,C)P(A,C) \\
&= P(C|A,B)P(A,B) \\
&= P(A,B|C)P(C) \\
&= P(A,C|B)P(B) \\
&= P(B,C|A)P(A)
\end{align}

In our case, since we know that $P(H_3|C_2,X_1) = 1$, we are in luck:

\begin{align}
P(C_2|H_3,X_1) &= \frac{P(C_2,H_3,X_1)}{P(H_3,X_1)} = \frac{P(H_3|C_2,X_1)P(C_2,X_1)}{P(H_3,X_1)} = \frac{P(C_2)P(X_1)}{P(H_3|X_1)P(X_1)} = \frac{1/3}{1/2} = \frac23
\end{align}

Going back to Nalebuff, the Monty Hall problem is also much studied in the literature on game theory and decision theory, and some popular solutions correspond to this point of view. Vos Savant asks for a decision, not a chance. And the chance aspects of how the car is hidden and how an unchosen door is opened are unknown. From this point of view, one has to remember that the player has two opportunities to make choices: first of all, which door to choose initially; and secondly, whether or not to switch. Since he does not know how the car is hidden nor how the host makes choices, he may be able to make use of his first choice opportunity, as it were, to neutralize the actions of the team running the quiz show, including the host. - -Following Gill, a contestant's strategy involves two actions: the initial choice of a door and the decision to switch (or to stick), which may depend on both the door initially chosen and the door to which the host offers switching. For instance, one contestant's strategy is "choose door 1, then switch to door 2 when offered, and do not switch to door 3 when offered". Twelve such deterministic strategies of the contestant exist. - -Elementary comparison of the contestant's strategies shows that, for every strategy A, there is another strategy B "pick a door then switch no matter what happens" that dominates it. No matter how the car is hidden and no matter which rule the host uses when he has a choice between two goats, if A wins the car then B also does. For example, strategy A "pick door 1 then always stick with it" is dominated by the strategy B "pick door 1 then always switch after the host reveals a door": A wins when door 1 conceals the car, while B wins when one of the doors 2 and 3 conceals the car. - -Similarly, strategy A "pick door 1 then switch to door 2 (if offered), but do not switch to door 3 (if offered)" is dominated by strategy B "pick door 3 then always switch". - -Dominance is a strong reason to seek a solution among always-switching strategies, under fairly general assumptions on the environment in which the contestant is making decisions.
In particular, if the car is hidden by means of some randomization device – like tossing a symmetric or asymmetric three-sided die – the dominance implies that a strategy maximizing the probability of winning the car will be among the three always-switching strategies, namely the strategy that initially picks the least likely door and then switches no matter which door the host offers for switching. - -Strategic dominance links the Monty Hall problem to game theory. In the zero-sum game setting of Gill, discarding the non-switching strategies reduces the game to the following simple variant: the host (or the TV team) decides on the door to hide the car, and the contestant chooses two doors (i.e., the two doors remaining after the player's first, nominal, choice). The contestant wins (and her opponent loses) if the car is behind one of the two doors she chose. - -A simple way to demonstrate that a switching strategy really does win two out of three times with the standard assumptions is to simulate the game with playing cards. Three cards from an ordinary deck are used to represent the three doors; one 'special' card represents the door with the car and two other cards represent the goat doors. - -The simulation can be repeated several times to simulate multiple rounds of the game. The player picks one of the three cards, then, looking at the remaining two cards, the 'host' discards a goat card. If the card remaining in the host's hand is the car card, this is recorded as a switching win; if the host is holding a goat card, the round is recorded as a staying win. As this experiment is repeated over several rounds, the observed win rate for each strategy is likely to approximate its theoretical win probability, in line with the law of large numbers. - -Repeated plays also make it clearer why switching is the better strategy. After the player picks his card, it is already determined whether switching will win the round for the player. If this is not convincing, the simulation can be done with the entire deck. In this variant, the car card goes to the host 51 times out of 52, and stays with the host no matter how many non-car cards are discarded. - -A common variant of the problem, assumed by several academic authors as the canonical problem, does not make the simplifying assumption that the host must uniformly choose the door to open, but instead that he uses some other strategy. The confusion as to which formalization is authoritative has led to considerable acrimony, particularly because this variant makes proofs more involved without altering the optimality of the always-switch strategy for the player. In this variant, the player can have different probabilities of winning depending on the observed choice of the host, but in any case the probability of winning by switching is at least 1/2 (and can be as high as 1), while the overall probability of winning by switching is still exactly 2/3. The variants are sometimes presented in succession in textbooks and articles intended to teach the basics of probability theory and game theory. A considerable number of other generalizations have also been studied. - -The version of the Monty Hall problem published in Parade in 1990 did not specifically state that the host would always open another door, or always offer a choice to switch, or even never open the door revealing the car.
However, vos Savant made it clear in her second follow-up column that the intended host's behavior could only be what led to the 2/3 probability she gave as her original answer. "Anything else is a different question." "Virtually all of my critics understood the intended scenario. I personally read nearly three thousand letters (out of the many additional thousands that arrived) and found nearly every one insisting simply that because two options remained (or an equivalent error), the chances were even. Very few raised questions about ambiguity, and the letters actually published in the column were not among those few." The answer follows if the car is placed randomly behind any door, the host must open a door revealing a goat regardless of the player's initial choice, and, if two doors are available, chooses which one to open randomly. The table below shows a variety of other possible host behaviors and the impact on the success of switching. - -Determining the player's best strategy within a given set of other rules the host must follow is the type of problem studied in game theory. For example, if the host is not required to make the offer to switch, the player may suspect the host is malicious and makes the offers more often if the player has initially selected the car. In general, the answer to this sort of question depends on the specific assumptions made about the host's behavior, and might range from "ignore the host completely" to "toss a coin and switch if it comes up heads"; see the last row of the table below. - -Morgan et al. and Gillman both show a more general solution where the car is (uniformly) randomly placed but the host is not constrained to pick uniformly randomly if the player has initially selected the car, which is how they both interpret the statement of the problem in Parade despite the author's disclaimers. Both changed the wording of the Parade version to emphasize that point when they restated the problem. They consider a scenario where the host chooses between revealing two goats with a preference expressed as a probability $q$, having a value between 0 and 1. If the host picks randomly, $q$ would be 1/2 and switching wins with probability 2/3 regardless of which door the host opens. If the player picks door 1 and the host's preference for door 3 is $q$, then the probability the host opens door 3 and the car is behind door 2 is 1/3, while the probability the host opens door 3 and the car is behind door 1 is $q/3$. These are the only cases where the host opens door 3, so the conditional probability of winning by switching given the host opens door 3 is $\frac{1/3}{1/3 + q/3}$, which simplifies to $\frac{1}{1+q}$. Since $q$ can vary between 0 and 1, this conditional probability can vary between 1/2 and 1. This means that even without constraining the host to pick randomly if the player initially selects the car, the player is never worse off switching. However, neither source suggests the player knows what the value of $q$ is, so the player cannot attribute a probability other than the 2/3 that vos Savant assumed was implicit.
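As a quick sanity check on the $\frac{1}{1+q}$ formula, the following short sketch (Python; the function and variable names are ours, not from Morgan et al. or Gillman) computes the conditional probability exactly for a few values of the host's preference $q$:

```
from fractions import Fraction

def p_switch_wins_given_door3(q):
    """P(car behind door 2 | player picked door 1, host opened door 3),
    where q is the host's preference for opening door 3 when he has a choice."""
    p_car2_and_open3 = Fraction(1, 3)      # host is forced to open door 3
    p_car1_and_open3 = Fraction(1, 3) * q  # host opens door 3 with probability q
    return p_car2_and_open3 / (p_car2_and_open3 + p_car1_and_open3)

for q in (Fraction(0), Fraction(1, 2), Fraction(1)):
    assert p_switch_wins_given_door3(q) == 1 / (1 + q)
    print(f"q = {q}: switching wins with probability {p_switch_wins_given_door3(q)}")
```

For $q = 1/2$ this recovers the standard 2/3, and at the extremes $q = 0$ and $q = 1$ it gives 1 and 1/2 respectively, matching the range stated above.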
D. L. Ferguson (in a 1975 letter to Selvin) suggests an N-door generalization of the original problem in which the host opens p losing doors and then offers the player the opportunity to switch; in this variant switching wins with probability $\frac{1}{N} \cdot \frac{N-1}{N-p-1}$. This probability is always greater than $\frac{1}{N}$, therefore switching always brings an advantage.

Even if the host opens only a single door ($p=1$), the player is better off switching in every case. As N grows larger, the advantage decreases and approaches zero.

At the other extreme, if the host opens all losing doors but one ($p = N - 2$) the advantage increases as N grows large (the probability of winning by switching is $\frac{N-1}{N}$, which approaches 1 as N grows very large).

A quantum version of the paradox illustrates some points about the relation between classical or non-quantum information and quantum information, as encoded in the states of quantum mechanical systems. The formulation is loosely based on quantum game theory. The three doors are replaced by a quantum system allowing three alternatives; opening a door and looking behind it is translated as making a particular measurement. The rules can be stated in this language, and once again the choice for the player is to stick with the initial choice, or change to another "orthogonal" option. The latter strategy turns out to double the chances, just as in the classical case. However, if the show host has not randomized the position of the prize in a fully quantum mechanical way, the player can do even better, and can sometimes even win the prize with certainty.

The earliest of several probability puzzles related to the Monty Hall problem is Bertrand's box paradox, posed by Joseph Bertrand in 1889 in his Calcul des probabilités. In this puzzle, there are three boxes: a box containing two gold coins, a box with two silver coins, and a box with one of each. After choosing a box at random and withdrawing one coin at random that happens to be a gold coin, the question is what is the probability that the other coin is gold. As in the Monty Hall problem, the intuitive answer is 1/2, but the probability is actually 2/3.

The Three Prisoners problem, published in Martin Gardner's Mathematical Games column in Scientific American in 1959, is equivalent to the Monty Hall problem. This problem involves three condemned prisoners, a random one of whom has been secretly chosen to be pardoned. One of the prisoners begs the warden to tell him the name of one of the others to be executed, arguing that this reveals no information about his own fate but increases his chances of being pardoned from 1/3 to 1/2. The warden obliges, (secretly) flipping a coin to decide which name to provide if the prisoner who is asking is the one being pardoned. The question is whether knowing the warden's answer changes the prisoner's chances of being pardoned. As in the Monty Hall problem, the prisoner asking the question still has a 1/3 chance of being pardoned, but his unnamed colleague has a 2/3 chance.

Steve Selvin posed the Monty Hall problem in a pair of letters to The American Statistician in 1975. The first letter presented the problem in a version close to its presentation in Parade 15 years later. The second appears to be the first use of the term "Monty Hall problem". The problem is actually an extrapolation from the game show.
Monty Hall did open a wrong door to build excitement, but offered a known lesser prize – such as $100 cash – rather than a choice to switch doors. As Monty Hall wrote to Selvin:

A version of the problem very similar to the one that appeared three years later in Parade was published in 1987 in the Puzzles section of The Journal of Economic Perspectives. Nalebuff, like later writers in mathematical economics, sees the problem as a simple and amusing exercise in game theory.

"The Monty Hall Trap", Phillip Martin's 1989 article in Bridge Today, presented Selvin's problem as an example of what Martin calls the probability trap of treating non-random information as if it were random, and relates this to concepts in the game of bridge.

A restated version of Selvin's problem appeared in Marilyn vos Savant's Ask Marilyn question-and-answer column of Parade in September 1990. Though vos Savant gave the correct answer that switching would win two-thirds of the time, she estimated that the magazine received 10,000 letters, including close to 1,000 signed by PhDs, many on letterheads of mathematics and science departments, declaring that her solution was wrong. Due to the overwhelming response, Parade published an unprecedented four columns on the problem. As a result of the publicity, the problem earned the alternative name "Marilyn and the Goats".

In November 1990, an equally contentious discussion of vos Savant's article took place in Cecil Adams's column "The Straight Dope". Adams initially answered, incorrectly, that the chances for the two remaining doors must each be one in two. After a reader wrote in to correct the mathematics of Adams's analysis, Adams agreed that mathematically he had been wrong. "You pick door #1. Now you're offered this choice: open door #1, or open door #2 and door #3. In the latter case you keep the prize if it's behind either door. You'd rather have a two-in-three shot at the prize than one-in-three, wouldn't you? If you think about it, the original problem offers you basically the same choice. Monty is saying in effect: you can keep your one door or you can have the other two doors, one of which (a non-prize door) I'll open for you." Adams did say the Parade version left critical constraints unstated, and without those constraints, the chances of winning by switching were not necessarily two out of three (e.g., it was not reasonable to assume the host always opens a door). Numerous readers, however, wrote in to claim that Adams had been "right the first time" and that the correct chances were one in two.

The Parade column and its response received considerable attention in the press, including a front-page story in the New York Times in which Monty Hall himself was interviewed. Hall understood the problem, giving the reporter a demonstration with car keys and explaining how actual game play on Let's Make a Deal differed from the rules of the puzzle. In the article, Hall pointed out that because he had control over the way the game progressed, playing on the psychology of the contestant, the theoretical solution did not apply to the show's actual gameplay. He said he was not surprised at the experts' insistence that the probability was 1 out of 2. "That's the same assumption contestants would make on the show after I showed them there was nothing behind one door," he said. "They'd think the odds on their door had now gone up to 1 in 2, so they hated to give up the door no matter how much money I offered. By opening that door we were applying pressure.
We called it the Henry James treatment. It was 'The Turn of the Screw'." Hall clarified that as a game show host he did not have to follow the rules of the puzzle in the vos Savant column and did not always have to allow a person the opportunity to switch (e.g., he might open their door immediately if it was a losing door, might offer them money to not switch from a losing door to a winning door, or might allow them the opportunity to switch only if they had a winning door). "If the host is required to open a door all the time and offer you a switch, then you should take the switch," he said. "But if he has the choice whether to allow a switch or not, beware. Caveat emptor. It all depends on his mood." diff --git a/wiki/wikipedia/4177.txt b/wiki/wikipedia/4177.txt deleted file mode 100644 index ef884ba183e0b8cf4d99172076b525c2e5658c10..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4177.txt +++ /dev/null @@ -1,6 +0,0 @@ -In algebraic geometry, the Atiyah–Bott formula says the cohomology ring -$$ -\operatorname{H}^*(\operatorname{Bun}_G(X), \mathbb{Q}_l) -$$ - -of the moduli stack of principal bundles is a free graded-commutative algebra on certain homogeneous generators. The original work of Michael Atiyah and Raoul Bott concerned the integral cohomology ring of $\operatorname{Bun}_G(X)$. diff --git a/wiki/wikipedia/4178.txt b/wiki/wikipedia/4178.txt deleted file mode 100644 index 63f177c3c54cdc331c19309b4174bbd7592f91cc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4178.txt +++ /dev/null @@ -1,228 +0,0 @@ -In probability theory, the Borel–Kolmogorov paradox (sometimes known as Borel's paradox) is a paradox relating to conditional probability with respect to an event of probability zero (also known as a null set). It is named after Émile Borel and Andrey Kolmogorov. - -Suppose that a random variable has a uniform distribution on a unit sphere. What is its conditional distribution on a great circle? Because of the symmetry of the sphere, one might expect that the distribution is uniform and independent of the choice of coordinates. However, two analyses give contradictory results. First, note that choosing a point uniformly on the sphere is equivalent to choosing the longitude $\lambda$ uniformly from $[-\pi,\pi]$ and choosing the latitude $\varphi$ from $[-\frac{\pi}{2},\frac{\pi}{2}]$ with density $\frac{1}{2} \cos \varphi$. Then we can look at two different great circles: - -# If the coordinates are chosen so that the great circle is an equator (latitude $\varphi = 0$), the conditional density for a longitude $\lambda$ defined on the interval $[-\pi,\pi]$ is - -#: $ f(\lambda\mid\varphi=0) = \frac{1}{2\pi}.$ - -# If the great circle is a line of longitude with $\lambda = 0$, the conditional density for $\varphi$ on the interval $[-\frac{\pi}{2},\frac{\pi}{2}]$ is - -#: $f(\varphi\mid\lambda=0) = \frac{1}{2} \cos \varphi.$ - -One distribution is uniform on the circle, the other is not. Yet both seem to be referring to the same great circle in different coordinate systems. - -In case (1) above, the conditional probability that the longitude λ lies in a set E given that φ = 0 can be written P(λ ∈ E | φ = 0). Elementary probability theory suggests this can be computed as P(λ ∈ E and φ = 0)/P(φ = 0), but that expression is not well-defined since P(φ = 0) = 0. Measure theory provides a way to define a conditional probability, using the family of events Rab = {φ : a < φ < b} which are horizontal rings consisting of all points with latitude between a and b. 
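The conflict between the two analyses can also be seen numerically before working through the resolution. The following Monte Carlo sketch is my own illustration (not part of the original article); it conditions a uniform sample of the sphere on a thin ring around the equator and on a thin lune around a meridian:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
v = rng.normal(size=(n, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)     # uniform points on the sphere
lam = np.arctan2(v[:, 1], v[:, 0])                # longitude in [-pi, pi]
phi = np.arcsin(v[:, 2])                          # latitude in [-pi/2, pi/2]
eps = 0.01

# Thin ring around the equator: longitude looks uniform on [-pi, pi],
# so E|lambda| matches pi/2.
ring = np.abs(phi) < eps
print(np.abs(lam[ring]).mean(), np.pi / 2)        # ~1.571 vs 1.571

# Thin lune around a meridian: latitude follows the (1/2)cos(phi) density,
# for which E|phi| = pi/2 - 1, not the pi/4 a uniform density would give.
lune = np.abs(lam) < eps
print(np.abs(phi[lune]).mean(), np.pi / 2 - 1)    # ~0.571 vs 0.571
```

The ring reproduces the uniform answer of case (1), while the lune reproduces the $\frac{1}{2}\cos\varphi$ density of case (2).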
The resolution of the paradox is to notice that in case (2), P(φ ∈ F | λ = 0) is defined using the events Lab = {λ : a < λ < b}, which are lunes (vertical wedges), consisting of all points whose longitude varies between a and b. So although P(λ ∈ E | φ = 0) and P(φ ∈ F | λ = 0) each provide a probability distribution on a great circle, one of them is defined using rings, and the other using lunes. Thus it is not surprising after all that P(λ ∈ E | φ = 0) and P(φ ∈ F | λ = 0) have different distributions.

To understand the problem we need to recognize that a distribution on a continuous random variable is described by a density f only with respect to some measure μ. Both are important for the full description of the probability distribution. Or, equivalently, we need to fully define the space on which we want to define f.

Let Φ and Λ denote two random variables taking values in Ω1 = $\left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$ and Ω2 = $[-\pi, \pi]$, respectively. An event {Φ = φ, Λ = λ} gives a point on the sphere S(r) with radius r. We define the coordinate transform

\begin{align}
x &= r \cos \varphi \cos \lambda \\
y &= r \cos \varphi \sin \lambda \\
z &= r \sin \varphi
\end{align}

for which we obtain the volume element
$$
\omega_r(\varphi,\lambda) = \left\| {\partial (x,y,z) \over \partial \varphi} \times {\partial (x,y,z) \over \partial \lambda} \right\| = r^2 \cos \varphi \ .
$$

Furthermore, if either φ or λ is fixed, we get the volume elements

\begin{align}
\omega_r(\lambda) &= \left\| {\partial (x,y,z) \over \partial \varphi } \right\| = r \ , \quad\text{respectively} \\[3pt]
\omega_r(\varphi) &= \left\| {\partial (x,y,z) \over \partial \lambda } \right\| = r \cos \varphi\ .
\end{align}

Let
$$
\mu_{\Phi,\Lambda}(d\varphi, d\lambda) = f_{\Phi,\Lambda}(\varphi,\lambda) \omega_r(\varphi,\lambda) d\varphi d\lambda
$$
denote the joint measure on $\mathcal{B}(\Omega_1 \times \Omega_2)$, which has a density $f_{\Phi,\Lambda}$ with respect to $\omega_r(\varphi,\lambda) d\varphi d\lambda$ and let

\begin{align}
\mu_\Phi(d\varphi) &= \int_{\lambda \in \Omega_2} \mu_{\Phi,\Lambda}(d\varphi, d\lambda)\ ,\\
\mu_\Lambda (d\lambda) &= \int_{\varphi \in \Omega_1} \mu_{\Phi,\Lambda}(d\varphi, d\lambda)\ .
\end{align}

If we assume that the density $f_{\Phi,\Lambda}$ is uniform, then

\begin{align}
\mu_{\Phi \mid \Lambda}(d\varphi \mid \lambda) &= {\mu_{\Phi,\Lambda}(d\varphi, d\lambda) \over \mu_\Lambda(d\lambda)} = \frac{1}{2r} \omega_r(\varphi) d\varphi \ , \quad\text{and} \\[3pt]
\mu_{\Lambda \mid \Phi}(d\lambda \mid \varphi) &= {\mu_{\Phi,\Lambda}(d\varphi, d\lambda) \over \mu_\Phi(d\varphi)} = \frac{1}{2r\pi} \omega_r(\lambda) d\lambda \ .
\end{align}

Hence, $\mu_{\Phi \mid \Lambda}$ has a uniform density with respect to $\omega_r(\varphi) d\varphi$ but not with respect to the Lebesgue measure. On the other hand, $\mu_{\Lambda \mid \Phi}$ has a uniform density with respect to $\omega_r(\lambda) d\lambda$ and the Lebesgue measure.

Consider a random vector $(X,Y,Z)$ that is uniformly distributed on the unit sphere $S^2$. We begin by parametrizing the sphere with the usual spherical polar coordinates:

\begin{aligned}
x &= \cos(\varphi) \cos (\theta) \\
y &= \cos(\varphi) \sin (\theta) \\
z &= \sin(\varphi)
\end{aligned}

where $-\frac{\pi}{2} \le \varphi \le \frac{\pi}{2}$ and $-\pi \le \theta \le \pi$.
We can define random variables $\Phi$, $\Theta$ as the values of $(X, Y, Z)$ under the inverse of this parametrization, or more formally using the arctan2 function:

\begin{align}
\Phi &= \arcsin(Z) \\
\Theta &= \arctan_2\left(\frac{Y}{\sqrt{1 - Z^2}}, \frac{X}{\sqrt{1 - Z^2}}\right)
\end{align}

Using the formulas for the surface area of a spherical cap and of a spherical wedge, the area of a spherical cap cut by a wedge is given by

$$
\operatorname{Area}(\Theta \le \theta, \Phi \le \varphi) = (1 + \sin(\varphi)) (\theta + \pi)
$$

Since $(X,Y,Z)$ is uniformly distributed, the probability is proportional to the surface area, giving the joint cumulative distribution function

$$
F_{\Phi, \Theta}(\varphi, \theta) = P(\Theta \le \theta, \Phi \le \varphi) = \frac{1}{4\pi}(1 + \sin(\varphi)) (\theta + \pi)
$$

The joint probability density function is then given by

$$
f_{\Phi, \Theta}(\varphi, \theta) = \frac{\partial^2}{\partial \varphi \partial \theta} F_{\Phi, \Theta}(\varphi, \theta) = \frac{1}{4\pi} \cos(\varphi)
$$

Note that $\Phi$ and $\Theta$ are independent random variables.

For simplicity, we will not calculate the full conditional distribution on a great circle, only the probability that the random vector lies in the first octant. That is to say, we will attempt to calculate the conditional probability $\mathbb{P}(A|B)$ with

\begin{aligned}
A &= \left\{ 0 < \Theta < \frac{\pi}{4} \right\} &&= \{ 0 < X < 1, 0 < Y < X \}\\
B &= \{ \Phi = 0 \} &&= \{ Z = 0 \}
\end{aligned}

We attempt to evaluate the conditional probability as a limit of conditioning on the events
$$
B_\varepsilon = \{ | \Phi | < \varepsilon \}
$$

As $\Phi$ and $\Theta$ are independent, so are the events $A$ and $B_\varepsilon$, therefore

$$
P(A \mid B) \mathrel{\stackrel{?}{=}} \lim_{\varepsilon \to 0} \frac{P(A \cap B_\varepsilon)}{P(B_\varepsilon)} = \lim_{\varepsilon \to 0} P(A) = P \left(0 < \Theta < \frac{\pi}{4}\right) = \frac{1}{8}.
$$

Now we repeat the process with a different parametrization of the sphere:

\begin{align}
x &= \sin(\varphi) \\
y &= \cos(\varphi) \sin(\theta) \\
z &= -\cos(\varphi) \cos(\theta)
\end{align}

This is equivalent to the previous parametrization rotated by 90 degrees around the y axis.

Define new random variables

\begin{align}
\Phi' &= \arcsin(X) \\
\Theta' &= \arctan_2\left(\frac{Y}{\sqrt{1 - X^2}}, \frac{-Z}{\sqrt{1 - X^2}}\right).
\end{align}

Rotation is measure-preserving, so the density of $\Phi'$ and $\Theta'$ is the same:
$$
f_{\Phi', \Theta'}(\varphi, \theta) = \frac{1}{4\pi} \cos(\varphi).
$$

The expressions for A and B are:

\begin{align}
A &= \left\{ 0 < \Theta < \frac{\pi}{4} \right\} &&= \{ 0 < X < 1,\ 0 < Y < X \} &&= \left\{ 0 < \Theta' < \pi,\ 0 < \Phi' < \frac{\pi}{2},\ \sin(\Theta') < \tan(\Phi') \right\} \\
B &= \{ \Phi = 0 \} &&= \{ Z = 0 \} &&= \left\{ \Theta' = -\frac{\pi}{2} \right\} \cup \left\{ \Theta' = \frac{\pi}{2} \right\}.
\end{align}

We again attempt to evaluate the conditional probability as a limit of conditioning on the events
$$
B^\prime_\varepsilon = \left\{ \left|\Theta' + \frac{\pi}{2}\right| < \varepsilon \right\} \cup \left\{ \left|\Theta'-\frac{\pi}{2}\right| < \varepsilon \right\}.
$$
Using L'Hôpital's rule and differentiation under the integral sign:

\begin{align}
P(A \mid B) &\mathrel{\stackrel{?}{=}} \lim_{\varepsilon \to 0} \frac{P(A \cap B^\prime_\varepsilon )}{P(B^\prime_\varepsilon )}\\
&= \lim_{\varepsilon \to 0} \frac{1}{\frac{4\varepsilon}{2\pi}}P\left( \frac{\pi}{2} - \varepsilon < \Theta' < \frac{\pi}{2} + \varepsilon,\ 0 < \Phi' < \frac{\pi}{2},\ \sin(\Theta') < \tan(\Phi') \right)\\
&= \frac{\pi}{2} \lim_{\varepsilon \to 0} \frac{\partial}{\partial \varepsilon} \int_{{\pi}/{2}-\varepsilon}^{{\pi}/{2}+\varepsilon} \int_0^{{\pi}/{2}} 1_{\sin(\theta) < \tan(\varphi)} f_{\Phi', \Theta'}(\varphi, \theta) \mathrm{d}\varphi \mathrm{d}\theta \\
&= \pi \int_0^{{\pi}/{2}} 1_{1 < \tan(\varphi)} f_{\Phi', \Theta'}\left(\varphi, \frac{\pi}{2}\right) \mathrm{d}\varphi \\
&= \pi \int_{\pi/4}^{\pi/2} \frac{1}{4 \pi} \cos(\varphi) \mathrm{d}\varphi \\
&= \frac{1}{4} \left( 1 - \frac{1}{\sqrt{2}} \right) \neq \frac{1}{8}
\end{align}

This shows that the conditional density cannot be treated as conditioning on an event of probability zero, as explained at Conditional probability § Conditioning on an event of probability zero.

diff --git a/wiki/wikipedia/4179.txt b/wiki/wikipedia/4179.txt deleted file mode 100644 index fc02db33dee1b31f04bfd1650748d9960a897495..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4179.txt +++ /dev/null @@ -1,53 +0,0 @@

The smoothed octagon is a region in the plane found by Karl Reinhardt in 1934 and conjectured by him to have the lowest maximum packing density of the plane of all centrally symmetric convex shapes. It was also independently discovered by Kurt Mahler in 1947. It is constructed by replacing the corners of a regular octagon with a section of a hyperbola that is tangent to the two sides adjacent to the corner and asymptotic to the sides adjacent to these.

The shape of the smoothed octagon can be derived from its packings, which place octagons at the points of a triangular lattice. The requirement that these packings have the same density no matter how the lattice and smoothed octagon are rotated relative to each other, with shapes that remain in contact with each neighboring shape, can be used to determine the shape of the corners. One of the figures shows three octagons that rotate while the area of the triangle formed by their centres remains constant, keeping them packed together as closely as possible. For regular octagons, the red and blue shapes would overlap, so to enable the rotation to proceed the corners are clipped to a point that lies halfway between their centres, generating the required curve, which turns out to be a hyperbola.

The hyperbola is constructed tangent to two sides of the octagon, and asymptotic to the two adjacent to these. The following details apply to a regular octagon of circumradius $\sqrt{2}$ with its centre at the point $(2+\sqrt{2},0)$ and one vertex at the point $(2,0)$. For two constants $\ell=\sqrt{2} - 1$ and $m=(1/2)^{1/4}$, the hyperbola is given by the equation

$$
\ell^2x^2-y^2=m^2
$$

or the equivalent parameterization (for the right-hand branch only)

\begin{align}
x&=\frac{m}{\ell} \cosh{t}\\
y&= m \sinh{t}\\
\end{align}

for the portion of the hyperbola that forms the corner, given by the range of parameter values

$$
-\frac{\ln{2}}{4} \le t \le \frac{\ln{2}}{4}.
$$

The lines of the octagon tangent to the hyperbola are $y= \pm \left(\sqrt{2} + 1 \right) \left( x-2 \right)$, and the lines asymptotic to the hyperbola are simply $y = \pm \ell x$.
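The tangency claim is easy to verify symbolically. The following sketch is my own check (not from the article): it substitutes the candidate tangent line into the hyperbola's equation and confirms a vanishing discriminant, i.e. a doubled intersection point:

```python
import sympy as sp

x = sp.symbols('x')
l = sp.sqrt(2) - 1                       # the constant ell from the text
m2 = 1 / sp.sqrt(2)                      # m^2 with m = (1/2)**(1/4)
tangent = (sp.sqrt(2) + 1) * (x - 2)     # candidate tangent line y(x)

# Substituting the line into l**2*x**2 - y**2 = m**2 yields a quadratic in x;
# a zero discriminant means the line meets the hyperbola tangentially.
quad = sp.expand(l**2 * x**2 - tangent**2 - m2)
print(sp.simplify(sp.discriminant(quad, x)))   # 0 -> tangency confirmed
print(sp.solve(quad, x))                       # the single repeated root
```

The repeated root is $x = \frac{3\sqrt{2}+4}{4} \approx 2.06$, just beyond the octagon vertex at $x = 2$, consistent with the parameter range $|t| \le \frac{\ln 2}{4}$ above.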
The smoothed octagon has a maximum packing density given by

$$
\frac{ 8-4\sqrt{2}-\ln{2} }{2\sqrt{2}-1} \approx 0.902414 .
$$

This is lower than the maximum packing density of circles, which is

$$
\frac{\pi}{\sqrt{12}} \approx 0.906899.
$$

The maximum known packing density of the ordinary regular octagon is

$$
\frac{4 + 4 \sqrt{2}}{5 + 4 \sqrt{2}} \approx 0.906163,
$$

also slightly less than the maximum packing density of circles, but higher than that of the smoothed octagon.

The smoothed octagon achieves its maximum packing density, not just for a single packing, but for a 1-parameter family. All of these are lattice packings. Reinhardt's conjecture that the smoothed octagon has the lowest maximum packing density of all centrally symmetric convex shapes in the plane remains unsolved. If central symmetry is not required, the regular heptagon has even lower packing density, but its optimality is also unproven.

In three dimensions, Ulam's packing conjecture states that no convex shape has a lower maximum packing density than the ball.

diff --git a/wiki/wikipedia/418.txt b/wiki/wikipedia/418.txt deleted file mode 100644 index 28ac15ecd69e815d903c5ca036ed150f995c521f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/418.txt +++ /dev/null @@ -1,41 +0,0 @@

In the mathematical area of graph theory, Kőnig's theorem, proved by Dénes Kőnig in 1931, describes an equivalence between the maximum matching problem and the minimum vertex cover problem in bipartite graphs. It was discovered independently, also in 1931, by Jenő Egerváry in the more general case of weighted graphs.

A vertex cover in a graph is a set of vertices that includes at least one endpoint of every edge, and a vertex cover is minimum if no other vertex cover has fewer vertices. A matching in a graph is a set of edges no two of which share an endpoint, and a matching is maximum if no other matching has more edges.

It is obvious from the definition that any vertex-cover set must be at least as large as any matching set (since for every edge in the matching, at least one vertex is needed in the cover). In particular, the minimum vertex cover set is at least as large as the maximum matching set. Kőnig's theorem states that, in any bipartite graph, the minimum vertex cover set and the maximum matching set have in fact the same size, as the small computational check below illustrates.
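The following sketch is a hypothetical example of mine (not part of the original article) using the networkx library; it computes a maximum matching and the vertex cover derived from it on a small bipartite graph and confirms that they have equal size:

```python
import networkx as nx
from networkx.algorithms import bipartite

# Small bipartite graph: left vertices are letters, right vertices numbers.
G = nx.Graph([('a', 1), ('a', 2), ('b', 1), ('c', 2), ('c', 3), ('d', 3)])
left = {'a', 'b', 'c', 'd'}

matching = bipartite.maximum_matching(G, top_nodes=left)
cover = bipartite.to_vertex_cover(G, matching, top_nodes=left)

# The matching dict lists every matched edge twice (u -> v and v -> u).
print(len(matching) // 2, len(cover))   # 3 3 -- equal, as the theorem states
```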
In any bipartite graph, the number of edges in a maximum matching equals the number of vertices in a minimum vertex cover.

Write the bipartition as $G = (L \cup R, E)$, let $M$ be a maximum matching, let $U$ be the set of vertices of $L$ left unmatched by $M$, let $Z$ be the set of vertices reachable from $U$ by alternating paths, and let $K = (L \setminus Z) \cup (R \cap Z)$. Additionally, every vertex in $K$ is an endpoint of a matched edge. For, every vertex in $L\setminus Z$ is matched because $Z$ is a superset of $U$, the set of unmatched left vertices. And every vertex in $R\cap Z$ must also be matched, for if there existed an alternating path to an unmatched vertex then changing the matching by removing the matched edges from this path and adding the unmatched edges in their place would increase the size of the matching. However, no matched edge can have both of its endpoints in $K$. Thus, $K$ is a vertex cover of cardinality equal to $|M|$, and must be a minimum vertex cover.

Kőnig's theorem is named after the Hungarian mathematician Dénes Kőnig. Kőnig had announced in 1914 and published in 1916 the results that every regular bipartite graph has a perfect matching, and more generally that the chromatic index of any bipartite graph (that is, the minimum number of matchings into which it can be partitioned) equals its maximum degree – the latter statement is known as Kőnig's line coloring theorem. However, Bondy attributes Kőnig's theorem itself to a later paper of Kőnig (1931).

According to Biggs, Kőnig attributed the idea of studying matchings in bipartite graphs to his father, mathematician Gyula Kőnig. In Hungarian, Kőnig's name has a double acute accent, but his theorem is usually spelled in German characters, with an umlaut.

Kőnig's theorem is equivalent to numerous other min-max theorems in graph theory and combinatorics, such as Hall's marriage theorem and Dilworth's theorem. Since bipartite matching is a special case of maximum flow, the theorem also results from the max-flow min-cut theorem.

A graph is said to be perfect if, in every induced subgraph, the chromatic number equals the size of the largest clique. Any bipartite graph is perfect, because each of its subgraphs is either bipartite or independent; in a bipartite graph that is not independent the chromatic number and the size of the largest clique are both two, while in an independent set the chromatic number and clique number are both one.

A graph is perfect if and only if its complement is perfect, and Kőnig's theorem can be seen as equivalent to the statement that the complement of a bipartite graph is perfect. For, each color class in a coloring of the complement of a bipartite graph is of size at most 2 and the classes of size 2 form a matching, a clique in the complement of a graph G is an independent set in G, and as we have already described an independent set in a bipartite graph G is a complement of a vertex cover in G. Thus, any matching M in a bipartite graph G with n vertices corresponds to a coloring of the complement of G with n−|M| colors, which by the perfection of complements of bipartite graphs corresponds to an independent set in G with n−|M| vertices, which corresponds to a vertex cover of G with |M| vertices. Conversely, Kőnig's theorem proves the perfection of the complements of bipartite graphs, a result proven in a more explicit form by Gallai.

One can also connect Kőnig's Line Coloring Theorem to a different class of perfect graphs, the line graphs of bipartite graphs. If G is a graph, the line graph L(G) has a vertex for each edge of G, and an edge for each pair of adjacent edges in G. Thus, the chromatic number of L(G) equals the chromatic index of G.
If G is bipartite, the cliques in L(G) are exactly the sets of edges in G sharing a common endpoint. Now Kőnig's Line Coloring Theorem, stating that the chromatic index equals the maximum vertex degree in any bipartite graph, can be interpreted as stating that the line graph of a bipartite graph is perfect.

Since line graphs of bipartite graphs are perfect, the complements of line graphs of bipartite graphs are also perfect. A clique in the complement of the line graph of G is just a matching in G. And a coloring in the complement of the line graph of G, when G is bipartite, is a partition of the edges of G into subsets of edges sharing a common endpoint; the endpoints shared by each of these subsets form a vertex cover for G. Therefore, Kőnig's theorem itself can also be interpreted as stating that the complements of line graphs of bipartite graphs are perfect.

Kőnig's theorem can be extended to weighted graphs.

Jenő Egerváry (1931) considered graphs in which each edge e has a non-negative integer weight $w_e$. The weight vector is denoted by w. The w-weight of a matching is the sum of weights of the edges participating in the matching. A w-vertex-cover is a multiset of vertices ("multiset" means that each vertex may appear several times), in which each edge e is adjacent to at least $w_e$ vertices. Egerváry's theorem says:
    In any edge-weighted bipartite graph, the maximum w-weight of a matching equals the smallest number of vertices in a w-vertex-cover.
    The maximum w-weight of a fractional matching is given by the LP:
Maximize $w \cdot x$ subject to $x \geq 0_E$ and $A_G \cdot x \leq 1_V$, where $A_G$ is the vertex-edge incidence matrix of $G$, $0_E$ denotes the all-zeros vector indexed by the edges, and $1_V$ the all-ones vector indexed by the vertices.
And the minimum total value of a fractional w-vertex-cover is given by the dual LP:
Minimize $1_V \cdot y$ subject to $y \geq 0_V$ and $A_G^T \cdot y \geq w$.
As in the proof of Kőnig's theorem, the LP duality theorem implies that the optimal values are equal (for any graph), and the fact that the graph is bipartite implies that these programs have optimal solutions in which all values are integers.

One can consider a graph in which each vertex v has a non-negative integer weight $b_v$. The weight vector is denoted by b. The b-weight of a vertex-cover is the sum of $b_v$ for all v in the cover. A b-matching is an assignment of a non-negative integral weight to each edge, such that the sum of weights of edges adjacent to any vertex v is at most $b_v$. Egerváry's theorem can be extended, using a similar argument, to graphs that have both edge-weights w and vertex-weights b:
    In any edge-weighted vertex-weighted bipartite graph, the maximum w-weight of a b-matching equals the minimum b-weight of vertices in a w-vertex-cover.
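As an illustration of the fractional matching LP and its dual given above, the following sketch (a hypothetical example of mine using scipy, not part of the article) solves both programs for a small weighted bipartite graph and checks that the optimal values coincide:

```python
import numpy as np
from scipy.optimize import linprog

# Bipartite graph: left vertices {0, 1}, right vertices {2, 3}; one column
# of the incidence matrix per edge (names follow the LPs above).
edges = [(0, 2), (0, 3), (1, 2)]
w = np.array([3.0, 1.0, 2.0])            # edge weights w_e
n = 4                                    # number of vertices

A = np.zeros((n, len(edges)))            # vertex-edge incidence matrix A_G
for j, (u, v) in enumerate(edges):
    A[u, j] = A[v, j] = 1.0

# Primal: maximize w.x subject to A x <= 1, x >= 0 (linprog minimizes).
primal = linprog(-w, A_ub=A, b_ub=np.ones(n), bounds=(0, None))
# Dual: minimize 1.y subject to A^T y >= w, y >= 0.
dual = linprog(np.ones(n), A_ub=-A.T, b_ub=-w, bounds=(0, None))

print(-primal.fun, dual.fun)             # both 3.0: equal optima by LP duality
```

Because the graph is bipartite, the LP optimum is attained at an integral point, so the value also equals the maximum w-weight of an integral matching.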
diff --git a/wiki/wikipedia/4180.txt b/wiki/wikipedia/4180.txt deleted file mode 100644 index 97186fd7dfccc7e671776e8ab2d76f846bdcc1e2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4180.txt +++ /dev/null @@ -1,13 +0,0 @@

Tom is a programming language particularly well-suited for programming various transformations on tree structures and XML-based documents. Tom is a language extension which adds new matching primitives to C and Java as well as support for rewrite rule systems. The rules can be controlled using a strategy language.

Tom is good for:

* programming by pattern matching
* developing compilers and domain-specific languages (DSL)
* transforming XML documents
* implementing rule-based systems
* describing algebraic transformations

diff --git a/wiki/wikipedia/4181.txt b/wiki/wikipedia/4181.txt deleted file mode 100644 index 54c5129ff6ec6fc49f648f8742414c879e1fdf47..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4181.txt +++ /dev/null @@ -1,127 +0,0 @@

Theta* is an any-angle path planning algorithm that is based on the A* search algorithm. It can find near-optimal paths with run times comparable to those of A*.

For the simplest version of Theta*, the main loop is much the same as that of A*. The only difference is the $\text{update\_vertex}()$ function. Compared to A*, the parent of a node in Theta* does not have to be a neighbour of the node as long as there is a line-of-sight between the two nodes. The following pseudocode is adapted from the original description of the algorithm.

function theta*(start, goal)
    // This main loop is the same as A*
    gScore(start) := 0
    parent(start) := start
    // Initializing open and closed sets. The open set is initialized
    // with the start node and an initial cost
    open := {}
    open.insert(start, gScore(start) + heuristic(start))
    // gScore(node) is the current shortest distance from the start node to node
    // heuristic(node) is the estimated distance of node from the goal node
    // there are many options for the heuristic such as Euclidean or Manhattan
    closed := {}
    while open is not empty
        s := open.pop()
        if s = goal
            return reconstruct_path(s)
        closed.push(s)
        for each neighbor of s
            // Loop through each immediate neighbor of s
            if neighbor not in closed
                if neighbor not in open
                    // Initialize values for neighbor if it is
                    // not already in the open list
                    gScore(neighbor) := infinity
                    parent(neighbor) := Null
                update_vertex(s, neighbor)
    return Null

function update_vertex(s, neighbor)
    // This part of the algorithm is the main difference between A* and Theta*
    if line_of_sight(parent(s), neighbor)
        // If there is line-of-sight between parent(s) and neighbor
        // then ignore s and use the path from parent(s) to neighbor
        if gScore(parent(s)) + c(parent(s), neighbor) < gScore(neighbor)
            // c(s, neighbor) is the Euclidean distance from s to neighbor
            gScore(neighbor) := gScore(parent(s)) + c(parent(s), neighbor)
            parent(neighbor) := parent(s)
            if neighbor in open
                open.remove(neighbor)
            open.insert(neighbor, gScore(neighbor) + heuristic(neighbor))
    else
        // If the length of the path from start to s and from s to
        // neighbor is shorter than the shortest currently known distance
        // from start to neighbor, then update node with the new distance
        if gScore(s) + c(s, neighbor) < gScore(neighbor)
            gScore(neighbor) := gScore(s) + c(s, neighbor)
            parent(neighbor) := s
            if neighbor in open
                open.remove(neighbor)
            open.insert(neighbor, gScore(neighbor) + heuristic(neighbor))
function reconstruct_path(s)
    total_path = {s}
    // This will recursively reconstruct the path from the goal node
    // until the start node is reached
    if parent(s) != s
        total_path.push(reconstruct_path(parent(s)))
    return total_path

The following variants of the algorithm exist:

* Lazy Theta* – Node expansions are delayed, resulting in fewer line-of-sight checks
* Incremental Phi* – A modification of Theta* that allows for dynamic path planning similar to D*

diff --git a/wiki/wikipedia/4182.txt b/wiki/wikipedia/4182.txt deleted file mode 100644 index 4701b667c5a7004311376801c492fca08eecdfa4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4182.txt +++ /dev/null @@ -1,57 +0,0 @@

In mathematics, the limit comparison test (LCT) (in contrast with the related direct comparison test) is a method of testing for the convergence of an infinite series.

Suppose that we have two series $ \Sigma_n a_n $ and $\Sigma_n b_n$ with $ a_n\geq 0, b_n > 0 $ for all $ n$. If $ \lim_{n \to \infty} \frac{a_n}{b_n} = c$ with $ 0 < c < \infty $, then either both series converge or both series diverge.

Because $ \lim_{n \to \infty} \frac{a_n}{b_n} = c$ we know that for every $ \varepsilon > 0 $ there is a positive integer $n_0$ such that for all $n \geq n_0 $ we have that $ \left| \frac{a_n}{b_n} - c \right| < \varepsilon $, or equivalently

$$
- \varepsilon < \frac{a_n}{b_n} - c < \varepsilon
$$
$$
c - \varepsilon < \frac{a_n}{b_n} < c + \varepsilon
$$
$$
(c - \varepsilon)b_n < a_n < (c + \varepsilon)b_n
$$

As $ c > 0 $ we can choose $ \varepsilon $ to be sufficiently small such that $ c-\varepsilon $ is positive. So $ b_n < \frac{1}{c-\varepsilon} a_n $ and by the direct comparison test, if $\sum_n a_n$ converges then so does $\sum_n b_n $. Similarly $ a_n < (c + \varepsilon)b_n $, so if $ \sum_n a_n $ diverges, again by the direct comparison test, so does $\sum_n b_n $. That is, both series converge or both series diverge.

We want to determine if the series $ \sum_{n=1}^{\infty} \frac{1}{n^2 + 2n} $ converges. For this we compare it with the convergent series $ \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6} $. As $ \lim_{n \to \infty} \frac{1}{n^2 + 2n} \cdot \frac{n^2}{1} = 1 > 0 $, the original series also converges.

One can state a one-sided comparison test by using limit superior. Let $ a_n, b_n \geq 0 $ for all $ n$. Then if $ \limsup_{n \to \infty} \frac{a_n}{b_n} = c$ with $ 0 \leq c < \infty $ and $\Sigma_n b_n$ converges, necessarily $ \Sigma_n a_n $ converges.

Let $ a_n = \frac{1-(-1)^n}{n^2} $ and $ b_n = \frac{1}{n^2} $ for all natural numbers $ n $. Now $ \lim_{n\to\infty} \frac{a_n}{b_n} = \lim_{n\to\infty}(1-(-1)^n) $ does not exist, so we cannot apply the standard comparison test. However, $ \limsup_{n\to\infty} \frac{a_n}{b_n} = \limsup_{n\to\infty}(1-(-1)^n) =2\in [0,\infty) $ and since $\sum_{n=1}^{\infty} \frac{1}{n^2}$ converges, the one-sided comparison test implies that $\sum_{n=1}^{\infty}\frac{1-(-1)^n}{n^2}$ converges.

Let $ a_n, b_n \geq 0 $ for all $ n$. If $\Sigma_n a_n $ diverges and $\Sigma_n b_n $ converges, then necessarily $ \limsup_{n\to\infty} \frac{a_n}{b_n}=\infty $, that is, $ \liminf_{n\to\infty} \frac{b_n}{a_n}= 0 $. The essential content here is that in some sense the numbers $ a_n $ are larger than the numbers $ b_n $.
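Returning to the worked example above, a quick numerical check (an illustrative sketch of mine, not from the article) shows the ratio tending to 1 and the partial sums of $\sum 1/(n^2+2n)$ settling at the telescoping value 3/4:

```python
# Ratio a_n / b_n with a_n = 1/(n^2 + 2n) and b_n = 1/n^2 tends to 1,
# so the limit comparison test applies with c = 1.
for n in (10, 100, 1000):
    print(n, n * n / (n * n + 2 * n))     # -> 0.833..., 0.980..., 0.998...

# The series itself telescopes: 1/(n^2 + 2n) = (1/n - 1/(n + 2)) / 2,
# so its sum is (1 + 1/2) / 2 = 3/4.
partial = sum(1.0 / (n * n + 2 * n) for n in range(1, 100_001))
print(partial)                            # ~ 0.74999
```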
Let $ f(z)=\sum_{n=0}^{\infty}a_nz^n $ be analytic in the unit disc $D = \{ z\in\mathbb{C} : |z|<1\}$ and have image of finite area. By Parseval's formula the area of the image of $ f $ is $ \sum_{n=1}^{\infty} n|a_n|^2$. Moreover, $ \sum_{n=1}^{\infty} 1/n $ diverges. Therefore, by the converse of the comparison test, we have $ \liminf_{n\to\infty} \frac{n|a_n|^2}{1/n}= \liminf_{n\to\infty} (n|a_n|)^2 = 0 $, that is, $ \liminf_{n\to\infty} n|a_n| = 0 $.

diff --git a/wiki/wikipedia/4183.txt b/wiki/wikipedia/4183.txt deleted file mode 100644 index 6af0bee58c3f752e831a9a7007c0e630aa0e03e5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4183.txt +++ /dev/null @@ -1,218 +0,0 @@

In graph theory a minimum spanning tree (MST) $T$ of a graph $G = (V, E)$ with $|V| = n$ and $|E| = m$ is a tree subgraph of $G$ that contains all of its vertices and is of minimum weight.

MSTs are useful and versatile tools utilised in a wide variety of practical and theoretical fields. For example, a company looking to supply multiple stores with a certain product from a single warehouse might use an MST of the road network to minimise the total length of road needed to connect the warehouse to all of its stores. In this case the stores and the warehouse are represented as vertices and the road connections between them as edges. Each edge is labelled with the length of the corresponding road connection.

If $G$ is edge-unweighted every spanning tree possesses the same number of edges and thus the same weight. In the edge-weighted case, a spanning tree whose sum of edge weights is lowest among all spanning trees of $G$ is called a minimum spanning tree (MST). It is not necessarily unique. More generally, graphs that are not necessarily connected have minimum spanning forests, which consist of a union of MSTs for each connected component.

As finding MSTs is a widespread problem in graph theory, there exist many sequential algorithms for solving it. Among them are Prim's, Kruskal's and Borůvka's algorithms, each utilising different properties of MSTs. They all operate in a similar fashion - a subset of $E$ is iteratively grown until a valid MST has been discovered. However, as practical problems are often quite large (road networks sometimes have billions of edges), performance is a key factor. One option for improving it is by parallelising known MST algorithms.

Prim's algorithm utilises the cut-property of MSTs. A simple high-level pseudocode implementation is provided below:

$T \gets \emptyset$
$S \gets \{s\}$ where $s$ is a random vertex in $V$
repeat $|V| - 1$ times
    find lightest edge $(u,v)$ s.t. $u \in S$ but $v \in (V \setminus S)$
    $S \gets S \cup \{v\}$
    $T \gets T \cup \{(u,v)\}$
return T

Each edge is observed exactly twice - namely when examining each of its endpoints. Each vertex is examined exactly once for a total of $O(n + m)$ operations aside from the selection of the lightest edge at each loop iteration. This selection is often performed using a priority queue (PQ). For each edge at most one decreaseKey operation (amortised in $O(1)$) is performed and each loop iteration performs one deleteMin operation ($O(\log n)$). Thus using Fibonacci heaps the total runtime of Prim's algorithm is asymptotically in $O(m + n \log n)$.

It is important to note that the loop is inherently sequential and cannot be properly parallelised.
This is the case, since the lightest edge with one endpoint in $S$ and one in $V \setminus S$ might change with the addition of edges to $T$. Thus no two selections of a lightest edge can be performed at the same time. However, there do exist some attempts at parallelisation.

One possible idea is to use $O(n)$ processors to support PQ access in $O(1)$ on an EREW-PRAM machine, thus lowering the total runtime to $O(n + m)$.

Kruskal's MST algorithm utilises the cycle property of MSTs. A high-level pseudocode representation is provided below.

$T \gets$ forest with every vertex in its own subtree
foreach $(u, v) \in E$ in ascending order of weight
    if $u$ and $v$ in different subtrees of $T$
        $T \gets T \cup \{(u,v)\}$
return T

The subtrees of $T$ are stored in union-find data structures, which is why checking whether or not two vertices are in the same subtree is possible in amortised $O(\alpha(m, n))$ where $\alpha(m, n)$ is the inverse Ackermann function. Thus the total runtime of the algorithm is in $O(sort(n) + \alpha(n))$. Here $\alpha(n)$ denotes the single-valued inverse Ackermann function, for which any realistic input yields an integer less than five.

Similarly to Prim's algorithm there are components in Kruskal's approach that cannot be parallelised in its classical variant. For example, determining whether or not two vertices are in the same subtree is difficult to parallelise, as two union operations might attempt to join the same subtrees at the same time. In fact, the only opportunity for parallelisation lies in the sorting step. As sorting is linear in the optimal case on $O(\log n)$ processors, the total runtime can be reduced to $O(m \alpha(n))$.

Another approach would be to modify the original algorithm by growing $T$ more aggressively. This idea was presented by Osipov et al. The basic idea behind Filter-Kruskal is to partition the edges in a similar way to quicksort and filter out edges that connect vertices that belong to the same tree in order to reduce the cost of sorting. A high-level pseudocode representation is provided below.

filterKruskal($G$):
    if $m <$ KruskalThreshold:
        return kruskal($G$)
    pivot = chooseRandom($E$)
    $(E_{\leq}, E_{>}) \gets$ partition($E$, pivot)
    $A \gets$ filterKruskal($E_{\leq}$)
    $E_{>} \gets$ filter($E_{>}$)
    $A \gets A \cup$ filterKruskal($E_{>}$)
    return $A$

partition($E$, pivot):
    $E_{\leq} \gets \emptyset$
    $E_{>} \gets \emptyset$
    foreach $(u, v) \in E$:
        if weight($u, v$) $\leq$ pivot:
            $E_{\leq} \gets E_{\leq} \cup \{(u, v)\}$
        else
            $E_{>} \gets E_{>} \cup \{(u, v)\}$
    return ($E_{\leq}$, $E_{>}$)

filter($E$):
    $E_{filtered} \gets \emptyset$
    foreach $(u, v) \in E$:
        if find-set($u$) $\neq$ find-set($v$):
            $E_{filtered} \gets E_{filtered} \cup \{(u, v)\}$
    return $E_{filtered}$

Filter-Kruskal is better suited for parallelisation, since sorting, partitioning and filtering have intuitively easy parallelisations where the edges are simply divided between the cores.

The main idea behind Borůvka's algorithm is edge contraction. An edge $\{u, v\}$ is contracted by first removing $v$ from the graph and then redirecting every edge $\{w, v\} \in E$ to $\{w, u\}$. These new edges retain their old edge weights. If the goal is not just to determine the weight of an MST but also which edges it comprises, it must be noted between which pairs of vertices an edge was contracted.
A high-level pseudocode representation is presented below.

$T \gets \emptyset$
while $|V| > 1$
    $S \gets \emptyset$
    for $v \in V$
        $S \gets S \cup$ lightest $\{u, v\} \in E$
    for $\{u, v\} \in S$
        contract $\{u, v\}$
    $T \gets T \cup S$
return T

It is possible that contractions lead to multiple edges between a pair of vertices. The intuitive way of choosing the lightest of them is not possible in $O(m)$. However, if all contractions that share a vertex are performed in parallel this is doable. The recursion stops when there is only a single vertex remaining, which means the algorithm needs at most $\log n$ iterations, leading to a total runtime in $O(m \log n)$.

One possible parallelisation of this algorithm yields a polylogarithmic time complexity, i.e. $T(m, n, p) \cdot p \in O(m \log n)$ and there exists a constant $c$ so that $T(m, n, p) \in O(\log^c m)$. Here $T(m, n, p)$ denotes the runtime for a graph with $m$ edges, $n$ vertices on a machine with $p$ processors. The basic idea is the following:

while $|V| > 1$
    find lightest incident edges // $O(\frac{m}{p} + \log n + \log p)$
    assign the corresponding subgraph to each vertex // $O(\frac{n}{p} + \log n)$
    contract each subgraph // $O(\frac{m}{p} + \log n)$

The MST then consists of all the found lightest edges.

This parallelisation utilises the adjacency array graph representation for $G = (V, E)$. This consists of three arrays - $\Gamma$ of length $n + 1$ for the vertices, $\gamma$ of length $m$ for the endpoints of each of the $m$ edges and $c$ of length $m$ for the edges' weights. Now for vertex $i$ the other end of each edge incident to $i$ can be found in the entries $\gamma[\Gamma[i]]$ through $\gamma[\Gamma[i+1] - 1]$. The weight of the $i$-th edge in $\gamma$ can be found in $c[i]$. Then the $i$-th edge in $\gamma$ is between vertices $u$ and $v$ if and only if $\Gamma[u] \leq i < \Gamma[u + 1]$ and $\gamma[i] = v$.

First the edges are distributed between each of the $p$ processors. The $i$-th processor receives the edges stored between $\gamma[\frac{i m}{p}]$ and $\gamma[\frac{(i + 1)m}{p} - 1]$. Furthermore, each processor needs to know to which vertex these edges belong (since $\gamma$ only stores one of the edge's endpoints) and stores this in the array $pred$. Obtaining this information is possible in $O(\log n)$ using $p$ binary searches or in $O(\frac{n}{p} + p)$ using a linear search. In practice the latter approach is sometimes quicker, even though it is asymptotically worse.

Now each processor determines the lightest edge incident to each of its vertices.

$v \gets$ find($\frac{i m}{p}$, $\Gamma$)
for $e \gets \frac{i m}{p}; e < \frac{(i+1) m}{p} - 1; e$++
    if $\Gamma[v+1] = e$
        $v$++
    if $c[e] < c[pred[v]]$
        $pred[v] \gets e$

Here the issue arises that some vertices are handled by more than one processor. A possible solution to this is that every processor has its own $pred$ array, which is later combined with those of the others using a reduction. Each processor has at most two vertices that are also handled by other processors and each reduction is in $O(\log p)$. Thus the total runtime of this step is in $O(\frac{m}{p} + \log n + \log p)$.

Observe the graph that consists solely of edges collected in the previous step. These edges are directed away from the vertex to which they are the lightest incident edge. The resulting graph decomposes into multiple weakly connected components.
The goal of this step is to assign to each vertex the component of which it is a part. Note that every vertex has exactly one outgoing edge and therefore each component is a pseudotree - a tree with a single extra edge that runs in parallel to the lightest edge in the component but in the opposite direction. The following code mutates this extra edge into a loop:

parallel forAll $v \in V$
    $w \gets pred[v]$
    if $pred[w] = v \land v < w$
        $pred[v] \gets v$

Now every weakly connected component is a directed tree where the root has a loop. This root is chosen as the representative of each component. The following code uses doubling to assign each vertex its representative:

while $\exists v \in V: pred[v] \neq pred[pred[v]]$
    forAll $v \in V$
        $pred[v] \gets pred[pred[v]]$

Now every subgraph is a star. With some advanced techniques this step needs $O(\frac{n}{p} + \log n)$ time.

In this step each subgraph is contracted to a single vertex.

$k \gets$ number of subgraphs
$V' \gets \{0, \dots , k-1\}$
find a bijective function $f:$ star root $\rightarrow \{0, \dots, k-1\}$
$E' \gets \{(f(pred[v]), f(pred[w]), c, e_{old}): (v, w) \in E \land pred[v] \neq pred[w]\}$

Finding the bijective function is possible in $O(\frac{n}{p} + \log p)$ using a prefix sum. As we now have a new set of vertices and edges the adjacency array must be rebuilt, which can be done using Integersort on $E'$ in $O(\frac{m}{p} + \log p)$ time.

Each iteration now needs $O(\frac{m}{p} + \log n)$ time and just like in the sequential case there are $\log n$ iterations, resulting in a total runtime of $O(\log n(\frac{m}{p} + \log n))$. If $m \in \Omega(p \log^2 p)$ the efficiency of the algorithm is in $\Theta(1)$ and it is relatively efficient. If $m \in O(n)$ then it is absolutely efficient.

There are multiple other parallel algorithms that deal with the issue of finding an MST. With a linear number of processors it is possible to achieve this in $O(\log n)$. Bader and Cong presented an MST algorithm that was five times quicker on eight cores than an optimal sequential algorithm.

Another challenge is the external memory model; there is a proposed algorithm due to Dementiev et al. that is claimed to be only two to five times slower than an algorithm that makes use only of internal memory.

diff --git a/wiki/wikipedia/4184.txt b/wiki/wikipedia/4184.txt deleted file mode 100644 index c82ced2bdf78b95bdc66d9e8d148330430ef0fd4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4184.txt +++ /dev/null @@ -1,97 +0,0 @@

In concurrency control of databases, transaction processing (transaction management), and various transactional applications (e.g., transactional memory and software transactional memory), both centralized and distributed, a transaction schedule is serializable if its outcome (e.g., the resulting database state) is equal to the outcome of its transactions executed serially, i.e. without overlapping in time. Transactions are normally executed concurrently (they overlap), since this is the most efficient way. Serializability is the major correctness criterion for concurrent transactions' executions. It is considered the highest level of isolation between transactions, and plays an essential role in concurrency control. As such it is supported in all general purpose database systems.
Strong strict two-phase locking (SS2PL) is a popular serializability mechanism utilized in most of the database systems (in various variants) since their early days in the 1970s.

Serializability theory provides the formal framework to reason about and analyze serializability and its techniques. Though it is mathematical in nature, its fundamentals are informally (without mathematics notation) introduced below.

Serializability is used to keep the data in the database in a consistent state. Serializability is a property of a transaction schedule (history). It relates to the isolation property of a database transaction.

Serializability of a schedule means equivalence (in the outcome, the database state, data values) to a serial schedule (i.e., sequential with no transaction overlap in time) with the same transactions. It is the major criterion for the correctness of concurrent transactions' schedules, and thus supported in all general purpose database systems.

The rationale behind serializability is the following: if each transaction is correct by itself, i.e., meets certain integrity conditions, then a schedule that comprises any serial execution of these transactions is correct (its transactions still meet their conditions). "Serial" means that transactions do not overlap in time and cannot interfere with each other, i.e., complete isolation between each other exists. Any order of the transactions is legitimate, if no dependencies among them exist, which is assumed (see comment below). As a result, a schedule that comprises any execution (not necessarily serial) that is equivalent (in its outcome) to any serial execution of these transactions, is correct.

Schedules that are not serializable are likely to generate erroneous outcomes. Well known examples are with transactions that debit and credit accounts with money: if the related schedules are not serializable, then the total sum of money may not be preserved. Money could disappear, or be generated from nowhere. This, and violations of other invariants that may need to be preserved, are caused by one transaction writing, and "stepping on" and erasing what has been written by another transaction before it has become permanent in the database. It does not happen if serializability is maintained.

If any specific order between some transactions is requested by an application, then it is enforced independently of the underlying serializability mechanisms. These mechanisms are typically indifferent to any specific order, and generate some unpredictable partial order that is typically compatible with multiple serial orders of these transactions. This partial order results from the scheduling orders of concurrent transactions' data access operations, which depend on many factors.

A major characteristic of a database transaction is atomicity, which means that it either commits, i.e., all its operations' results take effect in the database, or aborts (is rolled back), and all its operations' results do not have any effect on the database ("all or nothing" semantics of a transaction). In all real systems transactions can abort for many reasons, and serializability by itself is not sufficient for correctness. Schedules also need to possess the recoverability (from abort) property. Recoverability means that committed transactions have not read data written by aborted transactions (whose effects do not exist in the resulting database states).
While serializability is currently compromised on purpose in many applications for better performance (only in cases when the application's correctness is not harmed), compromising recoverability would quickly violate the database's integrity, as well as that of transactions' results external to the database. A schedule with the recoverability property (a recoverable schedule) "recovers" from aborts by itself, i.e., aborts do not harm the integrity of its committed transactions and resulting database. This is false without recoverability, where the likely integrity violations (resulting in incorrect database data) need special, typically manual, corrective actions in the database.

Implementing recoverability in its general form may result in cascading aborts: aborting one transaction may result in a need to abort a second transaction, and then a third, and so on. This results in a waste of already partially executed transactions, and may result also in a performance penalty. Avoiding cascading aborts (ACA, or cascadelessness) is a special case of recoverability that exactly prevents such phenomena. Often in practice a special case of ACA is utilized: strictness. Strictness allows efficient database recovery from failure.

Note that the recoverability property is needed even if no database failure occurs and no database recovery from failure is needed. It is, rather, needed to correctly and automatically handle aborts, which may be unrelated to database failure and recovery from failure.

In many applications, unlike with finances, absolute correctness is not needed. For example, when retrieving a list of products according to specification, in most cases it does not matter much if a product, whose data was updated a short time ago, does not appear in the list, even if it meets the specification. It will typically appear in such a list when tried again a short time later. Commercial databases provide concurrency control with a whole range of isolation levels which are in fact (controlled) serializability violations in order to achieve higher performance. Higher performance means a better transaction execution rate and shorter average transaction response time (transaction duration). Snapshot isolation is an example of a popular, widely utilized, efficient, relaxed serializability method with many characteristics of full serializability, but still short of some, and unfit in many situations.

Another common reason nowadays for distributed serializability relaxation (see below) is the requirement of availability of internet products and services. This requirement is typically answered by large-scale data replication. The straightforward solution for synchronizing replicas' updates of the same database object is to include all these updates in a single atomic distributed transaction. However, with many replicas such a transaction is very large, and may span so many computers and networks that some of them are likely to be unavailable. Thus such a transaction is likely to end with an abort and miss its purpose.

Consequently, optimistic replication (lazy replication) is often utilized (e.g., in many products and services by Google, Amazon, Yahoo, and the like), while serializability is relaxed and compromised for eventual consistency. Again, in this case, relaxation is done only for applications that are not expected to be harmed by this technique.
- -Classes of schedules defined by relaxed serializability properties either contain the serializability class, or are incomparable with it. - -Mechanisms that enforce serializability need to execute in real time, or almost in real time, while transactions are running at high rates. In order to meet this requirement, special cases of serializability, sufficient conditions for serializability which can be enforced effectively, are utilized. - -Two major types of serializability exist: view-serializability, and conflict-serializability. View-serializability matches the general definition of serializability given above. Conflict-serializability is a broad special case, i.e., any schedule that is conflict-serializable is also view-serializable, but not necessarily the opposite. Conflict-serializability is widely utilized because it is easier to determine and covers a substantial portion of the view-serializable schedules. Determining view-serializability of a schedule is an NP-complete problem (a class of problems with only difficult-to-compute, excessively time-consuming known solutions). - -View-serializability of a schedule is defined by equivalence to a serial schedule (no overlapping transactions) with the same transactions, such that respective transactions in the two schedules read and write the same data values ("view" the same data values). - -Conflict-serializability of a schedule is defined by equivalence to a serial schedule (no overlapping transactions) with the same transactions, such that both schedules have the same sets of respective chronologically ordered pairs of conflicting operations (same precedence relations of respective conflicting operations). - -Operations upon data are read or write (a write is either an insert, a modify, or a delete). Two operations are conflicting if they are of different transactions, upon the same datum (data item), and at least one of them is a write. Each such pair of conflicting operations has a conflict type: it is either a read–write, or write–read, or a write–write conflict. The transaction of the second operation in the pair is said to be in conflict with the transaction of the first operation. A more general definition of conflicting operations (also for complex operations, which may each consist of several "simple" read/write operations) requires that they are noncommutative (changing their order also changes their combined result). Each such operation needs to be atomic by itself (using proper system support) in order to be considered an operation for a commutativity check. For example, read–read operations are commutative (unlike read–write and the other possibilities) and thus read–read is not a conflict. Another more complex example: the operations increment and decrement of a counter are both write operations (both modify the counter), but do not need to be considered conflicting (write–write conflict type) since they are commutative (thus increment–decrement is not a conflict; e.g., this has long been supported in IBM's old IMS "fast path").
Only precedence (time order) in pairs of conflicting (non-commutative) operations is important when checking equivalence to a serial schedule, since different schedules consisting of the same transactions can be transformed from one to another by changing orders between different transactions' operations (different transactions' interleaving), and since changing orders of commutative operations (non-conflicting) does not change an overall operation sequence result, i.e., a schedule outcome (the outcome is preserved through order change between non-conflicting operations, but typically not when conflicting operations change order). This means that if a schedule can be transformed to any serial schedule without changing orders of conflicting operations (but changing orders of non-conflicting ones, while preserving operation order inside each transaction), then the outcome of both schedules is the same, and the schedule is conflict-serializable by definition. - -Conflicts are the reason for blocking transactions and delays (non-materialized conflicts), or for aborting transactions due to serializability violation prevention. Both possibilities reduce performance. Thus reducing the number of conflicts, e.g., by commutativity (when possible), is a way to increase performance. - -A transaction can issue/request a conflicting operation and be in conflict with another transaction while its conflicting operation is delayed and not executed (e.g., blocked by a lock). Only executed (materialized) conflicting operations are relevant to conflict serializability (see more below). - -Schedule compliance with conflict serializability can be tested with the precedence graph (serializability graph, serialization graph, conflict graph) for committed transactions of the schedule. It is the directed graph representing precedence of transactions in the schedule, as reflected by precedence of conflicting operations in the transactions. - -In the precedence graph transactions are nodes and precedence relations are directed edges. There exists an edge from a first transaction to a second transaction, if the second transaction is in conflict with the first (see Conflict serializability above), and the conflict is materialized (i.e., if the requested conflicting operation is actually executed: in many cases a requested/issued conflicting operation by a transaction is delayed and even never executed, typically by a lock on the operation's object, held by another transaction, or when writing to a transaction's temporary private workspace and materializing, copying to the database itself, upon commit; as long as a requested/issued conflicting operation is not executed upon the database itself, the conflict is non-materialized; non-materialized conflicts are not represented by an edge in the precedence graph). - -Comment: In many textbooks only committed transactions are included in the precedence graph. Here all transactions are included for convenience in later discussions. - -The following observation is a key characterization of conflict serializability: - -A schedule is conflict-serializable if and only if its precedence graph of committed transactions (when only committed transactions are considered) is acyclic. This means that a cycle consisting of committed transactions only is generated in the (general) precedence graph, if and only if conflict-serializability is violated.
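The acyclicity characterization above lends itself to a direct implementation. The following is a minimal Python sketch (not part of the original text; the schedule format and the example schedule are invented for illustration) that builds a precedence graph from a sequence of read/write operations and tests it for cycles with a depth-first search:

```python
# Minimal sketch: conflict-serializability test via precedence-graph acyclicity.
# Schedule entries are (transaction, operation, item); illustrative only.

def precedence_graph(schedule):
    """Add an edge t1 -> t2 for each materialized conflict in which an
    operation of t1 precedes a conflicting operation of t2."""
    edges = set()
    for i, (t1, op1, x1) in enumerate(schedule):
        for t2, op2, x2 in schedule[i + 1:]:
            if t1 != t2 and x1 == x2 and "w" in (op1, op2):
                edges.add((t1, t2))
    return edges

def has_cycle(edges):
    """Detect a cycle with a recursive DFS (white/gray/black coloring)."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, [])
    color = {v: "white" for v in graph}
    def visit(v):
        color[v] = "gray"
        for w in graph[v]:
            if color[w] == "gray" or (color[w] == "white" and visit(w)):
                return True
        color[v] = "black"
        return False
    return any(color[v] == "white" and visit(v) for v in graph)

# r1(x) w2(x) r2(y) w1(y): T1 -> T2 on x and T2 -> T1 on y form a cycle,
# so this schedule is not conflict-serializable.
s = [("T1", "r", "x"), ("T2", "w", "x"), ("T2", "r", "y"), ("T1", "w", "y")]
assert has_cycle(precedence_graph(s))
```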
- -Cycles of committed transactions can be prevented by aborting an undecided (neither committed, nor aborted) transaction on each cycle in the precedence graph of all the transactions, which can otherwise turn into a cycle of committed transactions (and a committed transaction cannot be aborted). One transaction aborted per cycle is both required and sufficient in number to break and eliminate the cycle (more aborts are possible, and can happen under some mechanisms, but are unnecessary for serializability). The probability of cycle generation is typically low, but, nevertheless, such a situation is carefully handled, typically with a considerable amount of overhead, since correctness is involved. Transactions aborted due to serializability violation prevention are restarted and executed again immediately. - -Serializability-enforcing mechanisms typically do not maintain a precedence graph as a data structure, but rather prevent or break cycles implicitly (e.g., SS2PL below). - -Strong strict two-phase locking (SS2PL) is a common mechanism utilized in database systems since their early days in the 1970s (the "SS" in the name SS2PL is newer, though) to enforce both conflict serializability and strictness (a special case of recoverability which allows effective database recovery from failure) of a schedule. Under this mechanism, each datum is locked by a transaction before accessing it (in any read or write operation): the item is marked by and associated with a lock of a certain type depending on the operation being performed (and the specific transaction implementation; various models with different lock types exist; in some models, locks may change type during the transaction's life). As a result, access by another transaction may be blocked, typically upon a conflict (the lock delays or completely prevents the conflict from being materialized and from being reflected in the precedence graph, by blocking the conflicting operation), depending on the lock type and the other transaction's access operation type. Employing an SS2PL mechanism means that all locks on data on behalf of a transaction are released only after the transaction has ended (either committed or aborted). - -SS2PL is the name of the resulting schedule property as well, which is also called rigorousness. SS2PL is a special case (proper subset) of two-phase locking (2PL). - -Mutual blocking between transactions results in a deadlock, where execution of these transactions is stalled and no completion can be reached. Thus deadlocks need to be resolved to complete these transactions' execution and release related computing resources. A deadlock is a reflection of a potential cycle in the precedence graph that would occur without the blocking when conflicts are materialized. A deadlock is resolved by aborting a transaction involved with such a potential cycle, thus breaking the cycle. It is often detected using a wait-for graph (a graph of conflicts blocked by locks from being materialized; it can also be defined as the graph of non-materialized conflicts; conflicts not materialized are not reflected in the precedence graph and do not affect serializability), which indicates which transaction is "waiting for" the release of one or more locks by which other transaction or transactions, and a cycle in this graph means a deadlock. Aborting one transaction per cycle is sufficient to break the cycle. Transactions aborted due to deadlock resolution are restarted and executed again immediately.
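To make the locking discipline concrete, here is a minimal single-threaded sketch (illustrative only; a real lock manager would also queue blocked requests, support lock upgrades, and detect deadlocks) in which every lock acquired on behalf of a transaction is released only when the transaction ends:

```python
# Minimal SS2PL sketch: shared ("S") / exclusive ("X") locks held until
# end of transaction. Illustrative only.

class LockManager:
    def __init__(self):
        self.locks = {}  # item -> (mode, set of holding transactions)

    def acquire(self, tx, item, mode):
        """Return True if tx obtains the lock, False if it would block."""
        held = self.locks.get(item)
        if held is None:
            self.locks[item] = (mode, {tx})
            return True
        held_mode, holders = held
        if held_mode == "S" and mode == "S":
            holders.add(tx)  # shared locks are compatible
            return True
        return holders == {tx}  # compatible only if tx already holds it

    def release_all(self, tx):
        """Called once, at commit or abort: the defining property of SS2PL."""
        for item in list(self.locks):
            mode, holders = self.locks[item]
            holders.discard(tx)
            if not holders:
                del self.locks[item]

lm = LockManager()
assert lm.acquire("T1", "x", "X")      # T1 write-locks x
assert not lm.acquire("T2", "x", "S")  # T2 blocks: a non-materialized conflict
lm.release_all("T1")                   # T1 ends; only now is x unlocked
assert lm.acquire("T2", "x", "S")
```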
- -Other known mechanisms include: - -* Precedence graph (or Serializability graph, Conflict graph) cycle elimination - -* Two-phase locking (2PL) - -* Timestamp ordering (TO) - -* Serializable snapshot isolation (SerializableSI) - -The above (conflict) serializability techniques in their general form do not provide recoverability. Special enhancements are needed for adding recoverability. - -Concurrency control techniques are of three major types: - -# Pessimistic: In pessimistic concurrency control, a transaction blocks the data access operations of other transactions upon conflicts, and conflicts are non-materialized until blocking is removed. This is done to ensure that operations that may violate serializability (and in practice also recoverability) do not occur. - -# Optimistic: In optimistic concurrency control, the data access operations of other transactions are not blocked upon conflicts, and conflicts are immediately materialized. When the transaction reaches the ready state, i.e., its running state has been completed, possible serializability (and in practice also recoverability) violation by the transaction's operations (relative to other running transactions) is checked: if a violation has occurred, the transaction is typically aborted (sometimes aborting another transaction to handle the serializability violation is preferred). Otherwise, it is committed. - -# Semi-optimistic: Mechanisms that mix blocking in certain situations with not blocking in other situations, and that employ both materialized and non-materialized conflicts. - -The main difference between the technique types is the conflict types that they generate. A pessimistic method blocks a transaction operation upon conflict and generates a non-materialized conflict, while an optimistic method does not block and generates a materialized conflict. A semi-optimistic method generates both conflict types. Both conflict types are generated by the chronological orders in which transaction operations are invoked, independently of the type of conflict. A cycle of committed transactions (with materialized conflicts) in the precedence graph (conflict graph) represents a serializability violation, and should be avoided for maintaining serializability. A cycle of (non-materialized) conflicts in the wait-for graph represents a deadlock situation, which should be resolved by breaking the cycle. Both cycle types result from conflicts and should be broken. Under any technique type, conflicts should be detected and considered, with similar overhead for both materialized and non-materialized conflicts (typically by using mechanisms like locking, while either blocking for locks or not blocking but recording the conflict for materialized conflicts). In a blocking method, typically a context switch occurs upon conflict, with (additional) incurred overhead. Otherwise, blocked transactions' related computing resources remain idle, unutilized, which may be a worse alternative. When conflicts do not occur frequently, optimistic methods typically have an advantage. With different transaction loads (mixes of transaction types) one technique type (i.e., either optimistic or pessimistic) may provide better performance than the other. - -Unless schedule classes are inherently blocking (i.e., they cannot be implemented without data-access operations blocking; e.g., 2PL, SS2PL and SCO above; see chart), they can also be implemented using optimistic techniques (e.g., Serializability, Recoverability).
- -See also Multiversion concurrency control (partial coverage) and Serializable Snapshot Isolation in Snapshot isolation - -Multi-version concurrency control (MVCC) is a common way today to increase concurrency and performance by generating a new version of a database object each time the object is written, and allowing transactions to read one of several last relevant versions (of each object), depending on the scheduling method. MVCC can be combined with all the serializability techniques listed above (except SerializableSI, which is originally MVCC-based). It is utilized in most general-purpose DBMS products. - -MVCC is especially popular nowadays through the relaxed serializability (see above) method snapshot isolation (SI), which provides better performance than most known serializability mechanisms (at the cost of possible serializability violation in certain cases). SerializableSI, which is an efficient enhancement of SI to make it serializable, is intended to provide an efficient serializable solution. SerializableSI has been analyzed via a general theory of MVCC. - -Distributed serializability is the serializability of a schedule of a transactional distributed system (e.g., a distributed database system). Such a system is characterized by distributed transactions (also called global transactions), i.e., transactions that span computer processes (a process abstraction in a general sense, depending on computing environment; e.g., an operating system's thread) and possibly network nodes. A distributed transaction comprises two or more local sub-transactions, each of which has states as described above for a database transaction. A local sub-transaction comprises a single process, or more processes that typically fail together (e.g., in a single processor core). Distributed transactions imply a need for an atomic commit protocol to reach consensus among a transaction's local sub-transactions on whether to commit or abort. Such protocols can vary from a simple (one-phase) handshake among processes that fail together to more sophisticated protocols, like two-phase commit, to handle more complicated cases of failure (e.g., process, node, communication, etc. failure). Distributed serializability is a major goal of distributed concurrency control for correctness. With the proliferation of the Internet, cloud computing, grid computing, and small, portable, powerful computing devices (e.g., smartphones), the need for effective distributed serializability techniques to ensure correctness in and among distributed applications seems to increase. - -Distributed serializability is achieved by implementing distributed versions of the known centralized techniques. Typically, all such distributed versions require utilizing conflict information (of either materialized or non-materialized conflicts, or, equivalently, transaction precedence or blocking information; conflict serializability is usually utilized) that is not generated locally, but rather in different processes, and remote locations. Thus information distribution is needed (e.g., precedence relations, lock information, timestamps, or tickets). When the distributed system is of a relatively small scale and message delays across the system are small, the centralized concurrency control methods can be used unchanged, while certain processes or nodes in the system manage the related algorithms.
However, in a large-scale system (e.g., grid and cloud), due to the distribution of such information, a substantial performance penalty is typically incurred, even when distributed versions of the methods (vs. the centralized ones) are used, primarily due to computer and communication latency. Also, when such information is distributed, the related techniques typically do not scale well. A well-known example with respect to scalability problems is a distributed lock manager, which distributes lock (non-materialized conflict) information across the distributed system to implement locking techniques. diff --git a/wiki/wikipedia/4185.txt b/wiki/wikipedia/4185.txt deleted file mode 100644 index 24ec53ab0bda011a5790957337b8e28943c4ad65..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4185.txt +++ /dev/null @@ -1,234 +0,0 @@ -A Mersenne prime is a prime number that is one less than a power of two. That is, it is a prime number of the form M_n = 2^n − 1 for some integer n. They are named after Marin Mersenne, a French Minim friar, who studied them in the early 17th century. If n is a composite number then so is 2^n − 1. Therefore, an equivalent definition of the Mersenne primes is that they are the prime numbers of the form M_p = 2^p − 1 for some prime p. - -The exponents n which give Mersenne primes are 2, 3, 5, 7, 13, 17, 19, 31, ... and the resulting Mersenne primes are 3, 7, 31, 127, 8191, 131071, 524287, 2147483647, ... . - -Numbers of the form M_n = 2^n − 1 without the primality requirement may be called Mersenne numbers. Sometimes, however, Mersenne numbers are defined to have the additional requirement that n be prime. - -The smallest composite Mersenne number with prime exponent n is 2^11 − 1 = 2047 = 23 × 89. - -Mersenne primes were studied in antiquity because of their close connection to perfect numbers: the Euclid–Euler theorem asserts a one-to-one correspondence between even perfect numbers and Mersenne primes. Many of the largest known primes are Mersenne primes because Mersenne numbers are easier to check for primality. - -Currently, 51 Mersenne primes are known. The largest known prime number, 2^82,589,933 − 1, is a Mersenne prime. Since 1997, all newly found Mersenne primes have been discovered by the Great Internet Mersenne Prime Search, a distributed computing project. In December 2020, a major milestone in the project was passed after all exponents below 100 million were checked at least once. - -Many fundamental questions about Mersenne primes remain unresolved. It is not even known whether the set of Mersenne primes is finite or infinite. The Lenstra–Pomerance–Wagstaff conjecture asserts that there are infinitely many Mersenne primes and predicts their order of growth. It is also not known whether infinitely many Mersenne numbers with prime exponents are composite, although this would follow from widely believed conjectures about prime numbers, for example, the infinitude of Sophie Germain primes congruent to 3 (mod 4). For these primes p, 2p + 1 (which is also prime) will divide M_p; for example, 23, 47, 167, 263, 359, 383, 479, and 503 are such divisors. For these primes p, 2p + 1 is congruent to 7 mod 8, so 2 is a quadratic residue mod 2p + 1, and the multiplicative order of 2 mod 2p + 1 must divide $\frac{(2p+1)-1}{2}$ = p. Since p is a prime, the order must be p or 1. However, it cannot be 1 since $\Phi_1(2)=1$ and 1 has no prime factors, so it must be p. Hence, 2p + 1 divides $\Phi_p(2)=2^p-1$ and $2^p-1=M_p$ cannot be prime.
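The divisibility fact just argued is cheap to confirm numerically with modular exponentiation. The sketch below is illustrative; the exponent list simply pairs each p = (q − 1)/2 with the divisors q named above:

```python
# For a Sophie Germain prime p ≡ 3 (mod 4), the prime q = 2p + 1 divides
# M_p = 2^p − 1, i.e. 2^p ≡ 1 (mod q). Illustrative check.
for p in [11, 23, 83, 131, 179, 191, 239, 251]:
    q = 2 * p + 1  # 23, 47, 167, 263, 359, 383, 479, 503
    assert p % 4 == 3 and pow(2, p, q) == 1
    print(f"{q} divides 2^{p} - 1")
```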
- -The first four Mersenne primes are M_2 = 3, M_3 = 7, M_5 = 31 and M_7 = 127, and because the first Mersenne prime starts at M_2, all Mersenne primes are congruent to 3 (mod 4). Other than M_0 = 0 and M_1 = 1, all other Mersenne numbers are also congruent to 3 (mod 4). Consequently, in the prime factorization of a Mersenne number ( ≥ M_2 ) there must be at least one prime factor congruent to 3 (mod 4). - -A basic theorem about Mersenne numbers states that if M_p is prime, then the exponent p must also be prime. This follows from the identity -$$ -\begin{align}2^{ab}-1&=(2^a-1)\cdot \left(1+2^a+2^{2a}+2^{3a}+\cdots+2^{(b-1)a}\right)\\&=(2^b-1)\cdot \left(1+2^b+2^{2b}+2^{3b}+\cdots+2^{(a-1)b}\right). \end{align} -$$ - -This rules out primality for Mersenne numbers with a composite exponent, such as M_4 = 2^4 − 1 = 15 = 3 × 5 = (2^2 − 1) × (1 + 2^2). - -Though the above examples might suggest that M_p is prime for all primes p, this is not the case, and the smallest counterexample is the Mersenne number - -M_11 = 2^11 − 1 = 2047 = 23 × 89. - -The evidence at hand suggests that a randomly selected Mersenne number is much more likely to be prime than an arbitrary randomly selected odd integer of similar size. In January 2016, the 49th known Mersenne prime, 2^74,207,281 − 1, was announced, found by a GIMPS computer of Curtis Cooper at the University of Central Missouri. This was the fourth Mersenne prime discovered by Cooper and his team in the past ten years. - -On September 2, 2016, the Great Internet Mersenne Prime Search finished verifying all tests below M_37,156,667, thus officially confirming its position as the 45th Mersenne prime. - -On January 3, 2018, it was announced that Jonathan Pace, a 51-year-old electrical engineer living in Germantown, Tennessee, had found the 50th known Mersenne prime, 2^77,232,917 − 1 (a number with 23,249,425 digits), as a result of a search executed by a GIMPS server network. The discovery was made by a computer in the offices of a church in the same town. - -On December 21, 2018, it was announced that The Great Internet Mersenne Prime Search (GIMPS) discovered the largest known prime number, 2^82,589,933 − 1, having 24,862,048 digits. A computer volunteered by Patrick Laroche from Ocala, Florida made the find on December 7, 2018. - -In late 2020, GIMPS began using a new technique to rule out potential Mersenne primes called the Probable prime (PRP) test, based on development from Robert Gerbicz in 2017, and a simple way to verify tests developed by Krzysztof Pietrzak in 2018. Due to the low error rate and ease of proof, this nearly halved the computing time to rule out potential primes over the Lucas–Lehmer test (as two users would no longer have to perform the same test to confirm the other's result), although exponents passing the PRP test still require one to confirm their primality. - -# If a and p are natural numbers such that a^p − 1 is prime, then a = 2 or p = 1. - -#* Proof: a ≡ 1 (mod a − 1). Then a^p ≡ 1 (mod a − 1), so a^p − 1 ≡ 0 (mod a − 1). Thus (a − 1) | (a^p − 1). However, a^p − 1 is prime, so a − 1 = a^p − 1 or a − 1 = ±1. In the former case, a = a^p, hence a = 0, 1 (and consequently a^p − 1 = −1 or 0, neither of which is prime) or p = 1. In the latter case, a = 2 or a = 0. If a = 0, however, 0^p − 1 = 0 − 1 = −1, which is not prime. Therefore, a = 2. - -# If 2^p − 1 is prime, then p is prime. - -#* Proof: Suppose that p is composite, hence it can be written p = ab with a and b > 1. Then 2^p − 1 = 2^(ab) − 1 = (2^a)^b − 1 = (2^a − 1)((2^a)^(b−1) + (2^a)^(b−2) + … + 2^a + 1), so 2^p − 1 is composite. By contrapositive, if 2^p − 1 is prime then p is prime. - -# If p is an odd prime, then every prime q that divides 2^p − 1 must be 1 plus a multiple of 2p.
This holds even when 2^p − 1 is prime. - -#* For example, 2^5 − 1 = 31 is prime, and 31 = 1 + 3 × (2 × 5). A composite example is 2^11 − 1 = 23 × 89, where 23 = 1 + (2 × 11) and 89 = 1 + 4 × (2 × 11). - -#* Proof: By Fermat's little theorem, q is a factor of 2^(q−1) − 1. Since q is a factor of 2^p − 1, for all positive integers c, q is also a factor of 2^(pc) − 1. Since p is prime and q is not a factor of 2^1 − 1, p is also the smallest positive integer x such that q is a factor of 2^x − 1. As a result, for all positive integers x, q is a factor of 2^x − 1 if and only if p is a factor of x. Therefore, since q is a factor of 2^(q−1) − 1, p is a factor of q − 1, so q ≡ 1 (mod p). Furthermore, since q is a factor of 2^p − 1, which is odd, q is odd. Therefore, q ≡ 1 (mod 2p). - -#* This fact leads to a proof of Euclid's theorem, which asserts the infinitude of primes, distinct from the proof written by Euclid: for every odd prime p, all primes dividing 2^p − 1 are larger than p; thus there are always larger primes than any particular prime. - -#* It follows from this fact that for every prime p > 2, there is at least one prime of the form 2kp + 1 less than or equal to M_p, for some integer k. - -# If p is an odd prime, then every prime q that divides 2^p − 1 is congruent to ±1 (mod 8). - -#* Proof: 2^(p+1) ≡ 2 (mod q), so 2^((p+1)/2) is a square root of 2 mod q. By quadratic reciprocity, every prime modulus in which the number 2 has a square root is congruent to ±1 (mod 8). - -# A Mersenne prime cannot be a Wieferich prime. - -#* Proof: We show that if p = 2^m − 1 is a Mersenne prime, then the congruence 2^(p−1) ≡ 1 (mod p^2) does not hold. By Fermat's little theorem, m | p − 1. Therefore, one can write p − 1 = mλ. If the given congruence is satisfied, then p^2 | 2^(mλ) − 1, therefore 0 ≡ (2^(mλ) − 1)/(2^m − 1) = 1 + 2^m + 2^(2m) + ... + 2^((λ − 1)m) ≡ −λ (mod 2^m − 1). Hence (2^m − 1) | λ, and therefore λ ≥ 2^m − 1. This leads to p − 1 ≥ m(2^m − 1), which is impossible since m ≥ 2. - -#If m and n are natural numbers then m and n are coprime if and only if 2^m − 1 and 2^n − 1 are coprime. Consequently, a prime number divides at most one prime-exponent Mersenne number. That is, the set of pernicious Mersenne numbers is pairwise coprime. - -# If p and 2p + 1 are both prime (meaning that p is a Sophie Germain prime), and p is congruent to 3 (mod 4), then 2p + 1 divides 2^p − 1. - -#* Example: 11 and 23 are both prime, and 11 = 2 × 4 + 3, so 23 divides 2^11 − 1. - -#* Proof: Let q be 2p + 1. By Fermat's little theorem, 2^(2p) ≡ 1 (mod q), so either 2^p ≡ 1 (mod q) or 2^p ≡ −1 (mod q). Supposing the latter is true, then 2^(p+1) = (2^((p+1)/2))^2 ≡ −2 (mod q), so −2 would be a quadratic residue mod q. However, since p is congruent to 3 (mod 4), q is congruent to 7 (mod 8) and therefore 2 is a quadratic residue mod q. Also, since q is congruent to 3 (mod 4), −1 is a quadratic nonresidue mod q, so −2 is the product of a residue and a nonresidue and hence it is a nonresidue, which is a contradiction. Hence, the former congruence must be true and 2p + 1 divides M_p. - -# All composite divisors of prime-exponent Mersenne numbers are strong pseudoprimes to the base 2. - -# With the exception of 1, a Mersenne number cannot be a perfect power. That is, in accordance with Mihăilescu's theorem, the equation 2^m − 1 = n^k has no solutions where m, n, and k are integers with m > 1 and k > 1.
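The Lucas–Lehmer test mentioned earlier, the classical deterministic primality test for Mersenne numbers, is short enough to sketch in full. This is a plain, unoptimized implementation (GIMPS uses FFT-based multiplication), shown here to recover the smallest exponents in the list that follows:

```python
# Lucas-Lehmer test: for an odd prime p, M_p = 2^p - 1 is prime
# iff s_{p-2} == 0 (mod M_p), where s_0 = 4 and s_{k+1} = s_k^2 - 2.
def lucas_lehmer(p):
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

def is_prime(n):  # trial division; fine for small exponents
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# p = 2 is a special case (M_2 = 3 is prime; the test starts at p = 3).
exponents = [2] + [p for p in range(3, 1300) if is_prime(p) and lucas_lehmer(p)]
print(exponents)  # [2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607, 1279]
```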
- -Currently, the 51 known Mersenne primes are 2^p − 1 for the following p: - -2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607, 1279, 2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213, 19937, 21701, 23209, 44497, 86243, 110503, 132049, 216091, 756839, 859433, 1257787, 1398269, 2976221, 3021377, 6972593, 13466917, 20996011, 24036583, 25964951, 30402457, 32582657, 37156667, 42643801, 43112609, 57885161, 74207281, 77232917, 82589933. - -Since they are prime numbers, Mersenne primes are divisible only by 1 and themselves. However, not all Mersenne numbers are Mersenne primes. Mersenne numbers are very good test cases for the special number field sieve algorithm, so often the largest number factorized with this algorithm has been a Mersenne number. Currently, 2^1,193 − 1 is the record-holder, having been factored with a variant of the special number field sieve that allows the factorization of several numbers at once. See integer factorization records for links to more information. The special number field sieve can factorize numbers with more than one large factor. If a number has only one very large factor then other algorithms can factorize larger numbers by first finding small factors and then running a primality test on the cofactor. Currently, the largest factorization with probable prime factors allowed is 2^10,443,557 − 1 = 37,289,325,994,807 × q, where q is a 3,143,811-digit probable prime. It was discovered by a GIMPS participant with the nickname "fre_games". Currently, the Mersenne number M_1277 is the smallest composite Mersenne number with no known factors; it has no prime factors below 2^68. - -The table below shows factorizations for the first 20 composite Mersenne numbers. - -The number of factors for the first 500 Mersenne numbers can be found at . - -In the mathematical problem Tower of Hanoi, solving a puzzle with an n-disc tower requires M_n steps, assuming no mistakes are made. The number of rice grains on the whole chessboard in the wheat and chessboard problem is M_64. - -The asteroid with minor planet number 8191 is named 8191 Mersenne after Marin Mersenne, because 8191 is a Mersenne prime (3 Juno, 7 Iris, 31 Euphrosyne and 127 Johanna having been discovered and named during the 19th century). - -In geometry, an integer right triangle that is primitive and has its even leg a power of 2 ( ≥ 4 ) generates a unique right triangle such that its inradius is always a Mersenne number. For example, if the even leg is 2^(n+1) then, because the triangle is primitive, the odd leg is 4^n − 1, the hypotenuse is 4^n + 1, and the inradius is 2^n − 1. - -The Mersenne numbers were studied with respect to the total number of accepting paths of non-deterministic polynomial time Turing machines in 2018 and intriguing inclusions were discovered. - -A Mersenne–Fermat number is defined as (2^(p^r) − 1)/(2^(p^(r−1)) − 1), with p prime and r a natural number, and can be written as MF(p, r). When r = 1, it is a Mersenne number. When p = 2, it is a Fermat number. The only known Mersenne–Fermat primes with r > 1 are - -MF(2, 2), MF(2, 3), MF(2, 4), MF(2, 5), MF(3, 2), MF(3, 3), MF(7, 2), and MF(59, 2). - -In fact, MF(p, r) = Φ_{p^r}(2), where Φ is the cyclotomic polynomial. - -The simplest generalized Mersenne primes are prime numbers of the form f(2^n), where f(x) is a low-degree polynomial with small integer coefficients. An example is 2^64 − 2^32 + 1, in this case, n = 32, and f(x) = x^2 − x + 1; another example is 2^192 − 2^64 − 1, in this case, n = 64, and f(x) = x^3 − x − 1.
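Both generalized Mersenne examples just given can be confirmed directly. The sketch below assumes the sympy library is available; it simply evaluates f(2^n) and tests the result for primality:

```python
# The two examples above: f(x) = x^2 - x + 1 at x = 2^32,
# and f(x) = x^3 - x - 1 at x = 2^64.
from sympy import isprime

assert isprime(2**64 - 2**32 + 1)    # f(x) = x^2 - x + 1, n = 32
assert isprime(2**192 - 2**64 - 1)   # f(x) = x^3 - x - 1, n = 64
print("both generalized Mersenne examples are prime")
```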
- -It is also natural to try to generalize primes of the form 2^n − 1 to primes of the form b^n − 1 (for b ≠ 2 and n > 1). However (see also the theorems above), b^n − 1 is always divisible by b − 1, so unless the latter is a unit, the former is not a prime. This can be remedied by allowing b to be an algebraic integer instead of an integer: - -In the ring of integers (on real numbers), if b − 1 is a unit, then b is either 2 or 0. But 2^n − 1 gives the usual Mersenne primes, and the formula 0^n − 1 does not lead to anything interesting (since it is always −1 for all n > 0). Thus, we can regard a ring of "integers" on complex numbers instead of real numbers, like the Gaussian integers and the Eisenstein integers. - -If we regard the ring of Gaussian integers, we get the cases b = 1 + i and b = 1 − i, and can ask (WLOG) for which n the number (1 + i)^n − 1 is a Gaussian prime, which will then be called a Gaussian Mersenne prime. - -(1 + i)^n − 1 is a Gaussian prime for the following n: - -2, 3, 5, 7, 11, 19, 29, 47, 73, 79, 113, 151, 157, 163, 167, 239, 241, 283, 353, 367, 379, 457, 997, 1367, 3041, 10141, 14699, 27529, 49207, 77291, 85237, 106693, 160423, 203789, 364289, 991961, 1203793, 1667321, 3704053, 4792057, ... - -Like the sequence of exponents for usual Mersenne primes, this sequence contains only (rational) prime numbers. - -As with all Gaussian primes, the norms (that is, squares of absolute values) of these numbers are rational primes: - -5, 13, 41, 113, 2113, 525313, 536903681, 140737471578113, ... . - -Similarly, in the ring of Eisenstein integers one may take b = 1 + ω and b = 1 − ω, and ask for which n the number (1 + ω)^n − 1 is an Eisenstein prime. Such numbers are called Eisenstein Mersenne primes. - -(1 + ω)^n − 1 is an Eisenstein prime for the following n: - -2, 5, 7, 11, 17, 19, 79, 163, 193, 239, 317, 353, 659, 709, 1049, 1103, 1759, 2029, 5153, 7541, 9049, 10453, 23743, 255361, 534827, 2237561, ... - -The norms (that is, squares of absolute values) of these Eisenstein primes are rational primes: - -7, 271, 2269, 176419, 129159847, 1162320517, ... - -The other way to deal with the fact that b^n − 1 is always divisible by b − 1 is to simply take out this factor and ask which values of n make -$$ -\frac{b^n-1}{b-1} -$$ - -be prime. (The integer b can be either positive or negative.) If, for example, we take b = 10, we get n values of: - -2, 19, 23, 317, 1031, 49081, 86453, 109297, 270343, ... ,
corresponding to primes 11, 1111111111111111111, 11111111111111111111111, ... . - -These primes are called repunit primes. Another example is b = −12, for which we get n values of: - -2, 5, 11, 109, 193, 1483, 11353, 21419, 21911, 24071, 106859, 139739, ... ,
corresponding to primes −11, 19141, 57154490053, .... - -It is a conjecture that for every integer b which is not a perfect power, there are infinitely many values of n such that (b^n − 1)/(b − 1) is prime. (When b is a perfect power, it can be shown that there is at most one n value such that (b^n − 1)/(b − 1) is prime.) - -Least n such that (b^n − 1)/(b − 1) is prime are (starting with b = 2, 0 if no such n exists) - -2, 3, 2, 3, 2, 5, 3, 0, 2, 17, 2, 5, 3, 3, 2, 3, 2, 19, 3, 3, 2, 5, 3, 0, 7, 3, 2, 5, 2, 7, 0, 3, 13, 313, 2, 13, 3, 349, 2, 3, 2, 5, 5, 19, 2, 127, 19, 0, 3, 4229, 2, 11, 3, 17, 7, 3, 2, 3, 2, 7, 3, 5, 0, 19, 2, 19, 5, 3, 2, 3, 2, ... - -For negative bases b, they are (starting with b = −2, 0 if no such n exists) - -3, 2, 2, 5, 2, 3, 2, 3, 5, 5, 2, 3, 2, 3, 3, 7, 2, 17, 2, 3, 3, 11, 2, 3, 11, 0, 3, 7, 2, 109, 2, 5, 3, 11, 31, 5, 2, 3, 53, 17, 2, 5, 2, 103, 7, 5, 2, 7, 1153, 3, 7, 21943, 2, 3, 37, 53, 3, 17, 2, 7, 2, 3, 0, 19, 7, 3, 2, 11, 3, 5, 2, ... (notice this OEIS sequence does not allow n = 2) - -Least base b such that (b^prime(n) − 1)/(b − 1) is prime are - -2, 2, 2, 2, 5, 2, 2, 2, 10, 6, 2, 61, 14, 15, 5, 24, 19, 2, 46, 3, 11, 22, 41, 2, 12, 22, 3, 2, 12, 86, 2, 7, 13, 11, 5, 29, 56, 30, 44, 60, 304, 5, 74, 118, 33, 156, 46, 183, 72, 606, 602, 223, 115, 37, 52, 104, 41, 6, 338, 217, ... - -For negative bases b, they are - -3, 2, 2, 2, 2, 2, 2, 2, 2, 7, 2, 16, 61, 2, 6, 10, 6, 2, 5, 46, 18, 2, 49, 16, 70, 2, 5, 6, 12, 92, 2, 48, 89, 30, 16, 147, 19, 19, 2, 16, 11, 289, 2, 12, 52, 2, 66, 9, 22, 5, 489, 69, 137, 16, 36, 96, 76, 117, 26, 3, ... - -Another generalized Mersenne number is -$$ -\frac{a^n-b^n}{a-b} -$$ - -with a, b any coprime integers, a > 1 and −a < b < a. (Since a^n − b^n is always divisible by a − b, the division is necessary for there to be any chance of finding prime numbers. In fact, this number is the same as the Lucas number U_n(a + b, ab), since a and b are the roots of the quadratic equation x^2 − (a + b)x + ab = 0, and this number equals 1 when n = 1.) We can ask which n makes this number prime. It can be shown that such n must be primes themselves or equal to 4, and n can be 4 if and only if a + b = 1 and a^2 + b^2 is prime. (Since (a^4 − b^4)/(a − b) = (a + b)(a^2 + b^2). Thus, in this case the pair (a, b) must be (x + 1, −x) and x^2 + (x + 1)^2 must be prime. That is, x must be in .) It is a conjecture that for any pair (a, b) such that for every natural number r > 1, a and b are not both perfect rth powers, and −4ab is not a perfect fourth power, there are infinitely many values of n such that (a^n − b^n)/(a − b) is prime. (When a and b are both perfect rth powers for an r > 1 or when −4ab is a perfect fourth power, it can be shown that there are at most two n values with this property, since if so, then (a^n − b^n)/(a − b) can be factored algebraically.) However, this has not been proved for any single value of (a, b). - -*Note: if b < 0 and n is even, then the numbers n are not included in the corresponding OEIS sequence. - -A conjecture related to the generalized Mersenne primes (the conjecture predicts where the next generalized Mersenne prime is; if the conjecture is true, then there are infinitely many primes for all such (a, b) pairs): - -For any integers a and b which satisfy the conditions: - -# a > 1, −a < b < a. - -# a and b are coprime. (Thus, b cannot be 0.) - -# For every natural number r > 1, a and b are not both perfect rth powers.
(since when a and b are both perfect rth powers, it can be shown that there are at most two n values such that (a^n − b^n)/(a − b) is prime, and these n values are r itself or a root of r, or 2) - -# −4ab is not a perfect fourth power (if so, then the number has an aurifeuillean factorization). - -has prime numbers of the form -$$ -R_p(a,b)=\frac{a^p-b^p}{a-b} -$$ - -for prime p, the prime numbers will be distributed near the best fit line -$$ -Y=G \cdot \log_a(\log_a(R_{(a,b)}(n)))+C -$$ - -where -$$ - \lim_{n\rightarrow\infty} G=\frac{1}{e^\gamma}=0.561459483566\ldots -$$ - -and there are about -$$ -\left(\log_e(N)+m \cdot \log_e(2) \cdot \log_e(\log_e(N))+\frac{1}{\sqrt N}-\delta\right) \cdot \frac{e^\gamma}{\log_e(a)} -$$ - -prime numbers of this form less than N. - -*e is the base of the natural logarithm. - -*γ is the Euler–Mascheroni constant. - -*log_a is the logarithm in base a. - -*R_(a,b)(n) is the nth prime number of the form (a^p − b^p)/(a − b) for prime p. - -*C is a data fit constant which varies with a and b. - -*δ is a data fit constant which varies with a and b. - -*m is the largest natural number such that a and −b are both perfect 2^m − 1th powers. - -We also have the following three properties: - -# The number of prime numbers of the form (a^p − b^p)/(a − b) (with prime p) less than or equal to n is about e^γ log_a(log_a(n)). - -# The expected number of prime numbers of the form (a^p − b^p)/(a − b) with prime p between n and a·n is about e^γ. - -# The probability that a number of the form (a^p − b^p)/(a − b) is prime (for prime p) is about e^γ/(p log_e(a)). - -If this conjecture is true, then for all such (a, b) pairs, let q be the nth prime of the form (a^p − b^p)/(a − b); the graph of log_a(log_a(q)) versus n is almost linear. (See ) - -When a = b + 1, the number is (b + 1)^n − b^n, a difference of two consecutive perfect nth powers, and if a^n − b^n is prime, then a must be b + 1, because the number is divisible by a − b. - -Least n such that (b + 1)^n − b^n is prime are - -2, 2, 2, 3, 2, 2, 7, 2, 2, 3, 2, 17, 3, 2, 2, 5, 3, 2, 5, 2, 2, 229, 2, 3, 3, 2, 3, 3, 2, 2, 5, 3, 2, 3, 2, 2, 3, 3, 2, 7, 2, 3, 37, 2, 3, 5, 58543, 2, 3, 2, 2, 3, 2, 2, 3, 2, 5, 3, 4663, 54517, 17, 3, 2, 5, 2, 3, 3, 2, 2, 47, 61, 19, ... - -Least b such that (b + 1)^prime(n) − b^prime(n) is prime are - -1, 1, 1, 1, 5, 1, 1, 1, 5, 2, 1, 39, 6, 4, 12, 2, 2, 1, 6, 17, 46, 7, 5, 1, 25, 2, 41, 1, 12, 7, 1, 7, 327, 7, 8, 44, 26, 12, 75, 14, 51, 110, 4, 14, 49, 286, 15, 4, 39, 22, 109, 367, 22, 67, 27, 95, 80, 149, 2, 142, 3, 11, ... diff --git a/wiki/wikipedia/4186.txt b/wiki/wikipedia/4186.txt deleted file mode 100644 index 127fed05db13b0b2b865baaf5e8ff22960cbf603..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4186.txt +++ /dev/null @@ -1,85 +0,0 @@ -[Figure: A Delaunay triangulation in the plane with circumcircles shown] - -In mathematics and computational geometry, a Delaunay triangulation (also known as a Delone triangulation) for a given set P of discrete points in general position is a triangulation DT(P) such that no point in P is inside the circumcircle of any triangle in DT(P). Delaunay triangulations maximize the minimum angle of all the angles of the triangles in the triangulation; they tend to avoid sliver triangles. The triangulation is named after Boris Delaunay for his work on this topic from 1934. - -For a set of points on the same line there is no Delaunay triangulation (the notion of triangulation is degenerate for this case).
For four or more points on the same circle (e.g., the vertices of a rectangle) the Delaunay triangulation is not unique: each of the two possible triangulations that split the quadrangle into two triangles satisfies the "Delaunay condition", i.e., the requirement that the circumcircles of all triangles have empty interiors. - -By considering circumscribed spheres, the notion of Delaunay triangulation extends to three and higher dimensions. Generalizations are possible to metrics other than Euclidean distance. However, in these cases a Delaunay triangulation is not guaranteed to exist or be unique. - -The Delaunay triangulation of a discrete point set P in general position corresponds to the dual graph of the Voronoi diagram for P. - -The circumcenters of Delaunay triangles are the vertices of the Voronoi diagram. - -In the 2D case, the Voronoi vertices are connected via edges that can be derived from adjacency relationships of the Delaunay triangles: if two triangles share an edge in the Delaunay triangulation, their circumcenters are to be connected with an edge in the Voronoi tessellation. - -Special cases where this relationship does not hold, or is ambiguous, include cases like: - -* Three or more collinear points, where the circumcircles are of infinite radii. - -* Four or more points on a perfect circle, where the triangulation is ambiguous and all circumcenters are trivially identical. - -*Edges of the Voronoi diagram going to infinity are not defined by this relation in the case of a finite set P. If the Delaunay triangulation is calculated using the Bowyer–Watson algorithm then the circumcenters of triangles having a common vertex with the "super" triangle should be ignored. Edges going to infinity start from a circumcenter and they are perpendicular to the common edge between the kept and ignored triangle. - -For a set P of points in the (d-dimensional) Euclidean space, a Delaunay triangulation is a triangulation DT(P) such that no point in P is inside the circum-hypersphere of any d-simplex in DT(P). It is known that there exists a unique Delaunay triangulation for P if P is a set of points in general position. - -Many algorithms for computing Delaunay triangulations rely on fast operations for detecting when a point is within a triangle's circumcircle and an efficient data structure for storing triangles and edges. In two dimensions, one way to detect if point D lies in the circumcircle of A, B, C is to evaluate the determinant: - - - -\begin{align} - -& \begin{vmatrix} - -A_x & A_y & A_x^2 + A_y^2 & 1\\ - -B_x & B_y & B_x^2 + B_y^2 & 1\\ - -C_x & C_y & C_x^2 + C_y^2 & 1\\ - -D_x & D_y & D_x^2 + D_y^2 & 1 - -\end{vmatrix} \\[8pt] - -= {} & \begin{vmatrix} - -A_x - D_x & A_y - D_y & (A_x^2 - D_x^2) + (A_y^2 - D_y^2) \\ - -B_x - D_x & B_y - D_y & (B_x^2 - D_x^2) + (B_y^2 - D_y^2) \\ - -C_x - D_x & C_y - D_y & (C_x^2 - D_x^2) + (C_y^2 - D_y^2) - -\end{vmatrix} > 0 - -\end{align} - - - -When A, B and C are sorted in a counterclockwise order, this determinant is positive if and only if D lies inside the circumcircle. - -As mentioned above, if a triangle is non-Delaunay, we can flip one of its edges. This leads to a straightforward algorithm: construct any triangulation of the points, and then flip edges until no triangle is non-Delaunay. Unfortunately, this can take Ω(n^2) edge flips. While this algorithm can be generalised to three and higher dimensions, its convergence is not guaranteed in these cases, as it is conditioned on the connectedness of the underlying flip graph: this graph is connected for two-dimensional sets of points, but may be disconnected in higher dimensions.
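The in-circle predicate above translates directly into code. The following is a small sketch of the reduced 3×3 determinant (points are taken as (x, y) tuples; the test coordinates are invented for illustration):

```python
# In-circle test: for A, B, C in counterclockwise order, the determinant is
# positive iff D lies inside the circumcircle of triangle ABC.
def in_circle(a, b, c, d):
    rows = [(p[0] - d[0],
             p[1] - d[1],
             (p[0] ** 2 - d[0] ** 2) + (p[1] ** 2 - d[1] ** 2))
            for p in (a, b, c)]
    (ax, ay, aw), (bx, by, bw), (cx, cy, cw) = rows
    det = (ax * (by * cw - bw * cy)
           - ay * (bx * cw - bw * cx)
           + aw * (bx * cy - by * cx))
    return det > 0

A, B, C = (0, 0), (2, 0), (0, 2)       # counterclockwise triangle
assert in_circle(A, B, C, (1, 1))       # circumcenter (1, 1) is inside
assert not in_circle(A, B, C, (3, 3))   # far point is outside
```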
While the technique extends to higher dimension (as proved by Edelsbrunner and Shah), the runtime can be exponential in the dimension even if the final Delaunay triangulation is small. - -The Bowyer–Watson algorithm provides another approach for incremental construction. It gives an alternative to edge flipping for computing the Delaunay triangles containing a newly inserted vertex. - -Unfortunately the flipping-based algorithms are generally hard to parallelize, since adding certain points (e.g., the center point of a wagon wheel) can lead to up to O(n) consecutive flips. Blelloch et al. proposed another version of the incremental algorithm based on rip-and-tent, which is practical and highly parallelized with polylogarithmic span. - -A divide and conquer algorithm for triangulations in two dimensions was developed by Lee and Schachter and improved by Guibas and Stolfi and later by Dwyer. In this algorithm, one recursively draws a line to split the vertices into two sets. The Delaunay triangulation is computed for each set, and then the two sets are merged along the splitting line. Using some clever tricks, the merge operation can be done in time O(n), so the total running time is O(n log n). - -For certain types of point sets, such as a uniform random distribution, by intelligently picking the splitting lines the expected time can be reduced to O(n log log n) while still maintaining worst-case performance. - -A divide and conquer paradigm for performing a triangulation in d dimensions is presented in "DeWall: A fast divide and conquer Delaunay triangulation algorithm in E^d" by P. Cignoni, C. Montani, R. Scopigno. - -The divide and conquer algorithm has been shown to be the fastest DT generation technique. - -Sweephull is a hybrid technique for 2D Delaunay triangulation that uses a radially propagating sweep-hull, and a flipping algorithm. The sweep-hull is created sequentially by iterating a radially-sorted set of 2D points, and connecting triangles to the visible part of the convex hull, which gives a non-overlapping triangulation. One can build a convex hull in this manner so long as the order of points guarantees no point would fall within the triangle. But radial sorting should minimize flipping by being highly Delaunay to start. This is then paired with a final iterative triangle flipping step. - -The Euclidean minimum spanning tree of a set of points is a subset of the Delaunay triangulation of the same points, and this can be exploited to compute it efficiently. - -For modelling terrain or other objects given a point cloud, the Delaunay triangulation gives a nice set of triangles to use as polygons in the model. In particular, the Delaunay triangulation avoids narrow triangles (as they have large circumcircles compared to their area). See triangulated irregular network. - -Delaunay triangulations can be used to determine the density or intensity of point samplings by means of the Delaunay tessellation field estimator (DTFE). - -Delaunay triangulations are often used to generate meshes for space-discretised solvers such as the finite element method and the finite volume method of physics simulation, because of the angle guarantee and because fast triangulation algorithms have been developed. Typically, the domain to be meshed is specified as a coarse simplicial complex; for the mesh to be numerically stable, it must be refined, for instance by using Ruppert's algorithm.
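In practice these algorithms are rarely implemented from scratch. The sketch below, which assumes numpy and scipy are installed, computes a two-dimensional Delaunay triangulation with scipy.spatial.Delaunay (a wrapper around the Qhull library):

```python
# Delaunay triangulation of a small 2D point set with scipy (Qhull).
import numpy as np
from scipy.spatial import Delaunay

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                   [1.0, 1.0], [0.3, 0.6]])
tri = Delaunay(points)
print(tri.simplices)                         # vertex-index triples, one triangle per row
print(tri.find_simplex(np.array([[0.5, 0.5]])))  # triangle containing the query point
```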
- -The increasing popularity of finite element method and boundary element method techniques increases the incentive to improve automatic meshing algorithms. However, all of these algorithms can create distorted and even unusable grid elements. Fortunately, several techniques exist which can take an existing mesh and improve its quality. For example, smoothing (also referred to as mesh refinement) is one such method, which repositions nodes to minimize element distortion. The stretched grid method allows the generation of pseudo-regular meshes that meet the Delaunay criteria easily and quickly in a one-step solution. - -Constrained Delaunay triangulation has found applications in path planning in automated driving and topographic surveying. diff --git a/wiki/wikipedia/4187.txt b/wiki/wikipedia/4187.txt deleted file mode 100644 index 3d12e419a568a5faa366171ffbb65c7800e07f1c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4187.txt +++ /dev/null @@ -1,13 +0,0 @@ -In mathematics, Lehmer's totient problem asks whether there is any composite number n such that Euler's totient function φ(n) divides n − 1. This is an unsolved problem. - -It is known that φ(n) = n − 1 if and only if n is prime. So for every prime number n, we have φ(n) = n − 1 and thus in particular φ(n) divides n − 1. D. H. Lehmer conjectured in 1932 that there are no composite numbers with this property. - -
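The conjecture is easy to probe by brute force in small ranges. The following sketch assumes the sympy library and searches for a composite n with φ(n) dividing n − 1; none exists below the bounds quoted in the results that follow, so the search comes back empty:

```python
# Search for composite n with totient(n) | n - 1 (Lehmer's totient problem).
from sympy import isprime, totient

hits = [n for n in range(2, 100_000)
        if not isprime(n) and (n - 1) % totient(n) == 0]
print(hits)  # [] -- consistent with Lehmer's conjecture
```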
- -* Lehmer showed that if any composite solution n exists, it must be odd, square-free, and divisible by at least seven distinct primes (i.e. ω(n) ≥ 7). Such a number must also be a Carmichael number. - -* In 1980 Cohen and Hagis proved that, for any solution n to the problem, n > 10^20 and ω(n) ≥ 14. - -* In 1988 Hagis showed that if 3 divides any solution n then n > 10^1937042 and ω(n) ≥ 298848. - -* The number of solutions to the problem less than $X$ is at most ${X^{1/2}/(\log X)^{1/2+o(1)}}$. diff --git a/wiki/wikipedia/4188.txt b/wiki/wikipedia/4188.txt deleted file mode 100644 index 11cf3f794631e622962f1c8b8331432cd494e16e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4188.txt +++ /dev/null @@ -1,11 +0,0 @@ -Kenneth Alan Ribet (born June 28, 1948) is an American mathematician working in algebraic number theory and algebraic geometry. He is known for the Herbrand–Ribet theorem and Ribet's theorem, which were key ingredients in the proof of Fermat's Last Theorem, as well as for his service as President of the American Mathematical Society from 2017 to 2019. He is currently a professor of mathematics at the University of California, Berkeley. - -Kenneth Ribet was born in Brooklyn, New York to parents David Ribet and Pearl Ribet, both Jewish, on June 28, 1948. As a student at Far Rockaway High School, Ribet was on a competitive mathematics team, but his first field of study was chemistry. - -Ribet earned his bachelor's degree and master's degree from Brown University in 1969. An earlier theorem of Ribet's, the Herbrand–Ribet theorem, is the converse to Herbrand's theorem on the divisibility properties of Bernoulli numbers and is also related to Fermat's Last Theorem. - -Ribet received the Fermat Prize in 1989 jointly with Abbas Bahri. In 2017, Ribet received the Brouwer Medal. - -In 1988, Ribet was inducted as a vigneron d'honneur by the Jurade de Saint-Émilion. - -Ribet is married to statistician Lisa Goldberg. diff --git a/wiki/wikipedia/4189.txt b/wiki/wikipedia/4189.txt deleted file mode 100644 index ca87e6eb9fdc2bc9886f78a68aaa9d20bcd3d1ea..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4189.txt +++ /dev/null @@ -1,37 +0,0 @@ -In measure theory and probability, the monotone class theorem connects monotone classes and sigma-algebras. The theorem says that the smallest monotone class containing an algebra of sets $G$ is precisely the smallest $\sigma$-algebra containing $G.$ It is used as a type of transfinite induction to prove many other theorems, such as Fubini's theorem. - -A monotone class is a family (i.e., class) $M$ of sets that is closed under countable monotone unions and also under countable monotone intersections.
Explicitly, this means $M$ has the following properties: - -# if $A_1, A_2, \ldots \in M$ and $A_1 \subseteq A_2 \subseteq \cdots$ then $\bigcup_{i = 1}^{\infty} A_i \in M,$ and - -# if $B_1, B_2, \ldots \in M$ and $B_1 \supseteq B_2 \supseteq \cdots$ then $\bigcap_{i = 1}^{\infty} B_i \in M.$ - -Let $G$ be an algebra of sets and define $M(G)$ to be the smallest monotone class containing $G.$ Then $M(G)$ is precisely the $\sigma$-algebra generated by $G,$ that is, $\sigma(G) = M(G).$ - -Let $\mathcal{A}$ be a $\pi$-system that contains $\Omega$ and let $\mathcal{H}$ be a collection of functions from $\Omega$ to $\R$ with the following properties: - -# If $A \in \mathcal{A}$ then $\mathbf{1}_A \in \mathcal{H}.$ - -# If $f, g \in \mathcal{H}$ and $c \in \R$ then $f + g$ and $c f \in \mathcal{H}.$ - -# If $f_n \in \mathcal{H}$ is a sequence of non-negative functions that increase to a bounded function $f$ then $f \in \mathcal{H}.$ - -Then $\mathcal{H}$ contains all bounded functions that are measurable with respect to $\sigma(\mathcal{A}),$ which is the sigma-algebra generated by $\mathcal{A}.$ - -The following argument originates in Rick Durrett's Probability: Theory and Examples. - -Proof: The assumption $\Omega \in \mathcal{A},$ (2), and (3) imply that $\mathcal{G} = \left\{ A : \mathbf{1}_{A} \in \mathcal{H} \right\}$ is a $\lambda$-system. - -By (1) and the $\pi$–$\lambda$ theorem, $\sigma(\mathcal{A}) \subset \mathcal{G}.$ - -Statement (2) implies that $\mathcal{H}$ contains all simple functions, and then (3) implies that $\mathcal{H}$ contains all bounded functions measurable with respect to $\sigma(\mathcal{A}).$ - -As a corollary, if $G$ is a ring of sets, then the smallest monotone class containing it coincides with the $\sigma$-ring generated by $G.$ - -By invoking this theorem, one can use monotone classes to help verify that a certain collection of subsets is a sigma-algebra. - -The monotone class theorem for functions can be a powerful tool that allows statements about particularly simple classes of functions to be generalized to arbitrary bounded and measurable functions. diff --git a/wiki/wikipedia/419.txt b/wiki/wikipedia/419.txt deleted file mode 100644 index 261c4b2f97bd3585fe16d8f80340fce45e370d08..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/419.txt +++ /dev/null @@ -1,3 +0,0 @@ -In computer science, an attributed graph grammar is a class of graph grammar that associates vertices with a set of attributes and rewrites with functions on attributes. In the algebraic approach to graph grammars, they are usually formulated using the double-pushout approach or the single-pushout approach. - -AGG, a rule-based visual language that directly expresses attributed graph grammars using the single-pushout approach, has been developed at TU Berlin for many years. diff --git a/wiki/wikipedia/4190.txt b/wiki/wikipedia/4190.txt deleted file mode 100644 index 55076815c03a42a4baaee5b8d84ce0ad397cf7fb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4190.txt +++ /dev/null @@ -1,18 +0,0 @@ -In mathematics, the Stein–Strömberg theorem or Stein–Strömberg inequality is a result in measure theory concerning the Hardy–Littlewood maximal operator. The result is foundational in the study of the problem of differentiation of integrals. The result is named after the mathematicians Elias M. Stein and Jan-Olov Strömberg.
- -Let λ^n denote n-dimensional Lebesgue measure on n-dimensional Euclidean space R^n and let M denote the Hardy–Littlewood maximal operator: for a function f : R^n → R, Mf : R^n → R is defined by -$$ -Mf(x) = \sup_{r > 0} \frac1{\lambda^{n} \big( B_{r} (x) \big)} \int_{B_{r} (x)} | f(y) | \mathrm{d} \lambda^{n} (y), -$$ - -where B_r(x) denotes the open ball of radius r with center x. Then, for each p > 1, there is a constant C_p > 0 such that, for all natural numbers n and functions f ∈ L^p(R^n; R), -$$ -\| Mf \|_{L^{p}} \leq C_{p} \| f \|_{L^{p}}. -$$ - -In general, a maximal operator M is said to be of strong type (p, p) if -$$ -\| Mf \|_{L^{p}} \leq C_{p, n} \| f \|_{L^{p}} -$$ - -for all f ∈ L^p(R^n; R). Thus, the Stein–Strömberg theorem is the statement that the Hardy–Littlewood maximal operator is of strong type (p, p) uniformly with respect to the dimension n. diff --git a/wiki/wikipedia/4191.txt b/wiki/wikipedia/4191.txt deleted file mode 100644 index a1bf6ee775674186097c228931c44cbe95e3f6a1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4191.txt +++ /dev/null @@ -1,22 +0,0 @@ -In mathematics, Hölder's theorem states that the gamma function does not satisfy any algebraic differential equation whose coefficients are rational functions. This result was first proved by Otto Hölder in 1887; several alternative proofs have subsequently been found. - -The theorem also generalizes to the $ q $-gamma function. - -
Since the exterior derivative of a closed form is zero, β is not unique, but can be modified by the addition of any closed form of degree one less than that of α. - -Because $d^2 = 0$, every exact form is necessarily closed. The question of whether every closed form is exact depends on the topology of the domain of interest. On a contractible domain, every closed form is exact by the Poincaré lemma. More general questions of this kind on an arbitrary differentiable manifold are the subject of de Rham cohomology, which allows one to obtain purely topological information using differential methods. - -A simple example of a form which is closed but not exact is the 1-form $d\theta$ given by the derivative of argument on the punctured plane $\mathbf{R}^2\setminus\{0\}$. Since $\theta$ is not actually a function (see the next paragraph), $d\theta$ is not an exact form. Still, $d\theta$ has vanishing derivative and is therefore closed. - -Note that the argument $\theta$ is only defined up to an integer multiple of $2\pi$ since a single point $p$ can be assigned different arguments $r$, $r+2\pi$, etc. We can assign arguments in a locally consistent manner around $p$, but not in a globally consistent manner. This is because if we trace a loop from $p$ counterclockwise around the origin and back to $p$, the argument increases by $2\pi$. Generally, the argument $\theta$ changes by -$$ -\oint_{S^1} d\theta -$$ - -over a counter-clockwise oriented loop $S^1$. - -Even though the argument $\theta$ is not technically a function, the different local definitions of $\theta$ at a point $p$ differ from one another by constants. Since the derivative at $p$ only uses local data, and since functions that differ by a constant have the same derivative, the argument has a globally well-defined derivative "$d\theta$". - -The upshot is that $d\theta$ is a one-form on $\mathbf{R}^2\setminus\{0\}$ that is not actually the derivative of any well-defined function $\theta$. We say that $d\theta$ is not exact. Explicitly, $d\theta$ is given as: -$$ -d\theta = \frac{-ydx + xdy}{x^2+y^2} , -$$ - -which by inspection has derivative zero. Because $d\theta$ has vanishing derivative, we say that it is closed. - -This form generates the de Rham cohomology group $H^1_{dR}(\mathbf{R}^2\setminus\{0\}) \cong \mathbf{R},$ meaning that any closed form $\omega$ is the sum of an exact form $df$ and a multiple of $d\theta$: $\omega = df + k\ d\theta$, where $k = \frac{1}{2\pi}\oint_{S^1} \omega$ accounts for a non-trivial contour integral around the origin, which is the only obstruction to a closed form on the punctured plane (locally the derivative of a potential function) being the derivative of a globally defined function. - -Differential forms in R2 and R3 were well known in the mathematical physics of the nineteenth century. In the plane, 0-forms are just functions, and 2-forms are functions times the basic area element dx ∧ dy, so that it is the 1-forms -$$ - \alpha = f(x,y) dx + g(x,y) dy -$$ - -that are of real interest. The formula for the exterior derivative d here is -$$ - d \alpha = (g_x-f_y) dx\wedge dy -$$ - -where the subscripts denote partial derivatives. Therefore the condition for $\alpha$ to be closed is -$$ - f_y=g_x. -$$ - -In this case if h(x, y) is a function then -$$ - dh = h_x dx + h_y dy. -$$ - -The implication from 'exact' to 'closed' is then a consequence of the symmetry of second derivatives, with respect to x and y.
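To make the closedness check above concrete, the following sympy sketch (an illustrative addition, not part of the original article) verifies $f_y = g_x$ for the components of $d\theta$ and evaluates the contour integral over the unit circle, which comes out to $2\pi$ rather than 0, exhibiting the failure of exactness.

```python
import sympy as sp

x, y, t = sp.symbols("x y t", real=True)

# Components of d(theta) = f dx + g dy on the punctured plane
f = -y / (x**2 + y**2)
g = x / (x**2 + y**2)

# Closedness of f dx + g dy amounts to f_y = g_x
print(sp.simplify(sp.diff(f, y) - sp.diff(g, x)))  # 0, so the form is closed

# Non-exactness: pull back along the unit circle x = cos(t), y = sin(t)
integrand = (f.subs({x: sp.cos(t), y: sp.sin(t)}) * (-sp.sin(t))
             + g.subs({x: sp.cos(t), y: sp.sin(t)}) * sp.cos(t))
print(sp.integrate(integrand, (t, 0, 2 * sp.pi)))  # 2*pi, not 0
```

The non-zero loop integral is exactly the obstruction $k = \frac{1}{2\pi}\oint_{S^1} \omega$ described above.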
- -The gradient theorem asserts that a 1-form is exact if and only if the line integral of the form depends only on the endpoints of the curve, or equivalently, - -if the integral around any smooth closed curve is zero. - -On a Riemannian manifold, or more generally a pseudo-Riemannian manifold, k-forms correspond to k-vector fields (by duality via the metric), so there is a notion of a vector field corresponding to a closed or exact form. - -In 3 dimensions, an exact vector field (thought of as a 1-form) is called a conservative vector field, meaning that it is the derivative (gradient) of a 0-form (smooth scalar field), called the scalar potential. A closed vector field (thought of as a 1-form) is one whose derivative (curl) vanishes, and is called an irrotational vector field. - -Thinking of a vector field as a 2-form instead, a closed vector field is one whose derivative (divergence) vanishes, and is called an incompressible flow (sometimes solenoidal vector field). The term incompressible is used because a non-zero divergence corresponds to the presence of sources and sinks in analogy with a fluid. - -The concepts of conservative and incompressible vector fields generalize to n dimensions, because gradient and divergence generalize to n dimensions; curl is defined only in three dimensions, thus the concept of irrotational vector field does not generalize in this way. - -The Poincaré lemma states that if B is an open ball in Rn, any smooth closed p-form ω defined on B is exact, for any integer p with 1 ≤ p ≤ n. - -Translating if necessary, it can be assumed that the ball B has centre 0. Let αs be the flow on Rn defined by αs x = e−s x. For s ≥ 0 it carries B into itself and induces an action on functions and differential forms. - -The derivative of the flow is the vector field X defined on functions f by Xf = d(αsf)/ds|s=0: it is the radial vector field −r ∂/∂r = −Σ xi ∂/∂xi. The derivative of the flow on forms defines the Lie derivative with respect to X given by $L_X \omega = \left.\frac{d(\alpha_s \omega)}{ds}\right|_{s = 0}$. In particular -$$ -\frac{d}{ds} \alpha_s \omega = \alpha_s L_X \omega. -$$ - -Now define -$$ -h\omega =-\int_0^\infty \alpha_t\omega dt. -$$ - -By the fundamental theorem of calculus we have that -$$ -L_X h \omega = -\int_0^\infty \alpha_t L_X \omega dt=-\int_0^\infty {d\over dt} (\alpha_t \omega) dt= -[\alpha _t \omega]_0^\infty= \omega. -$$ - -With $\iota_X$ being the interior multiplication or contraction by the vector field X, Cartan's formula states that -$$ -L_X = d\iota_X + \iota_X d. -$$ - -Using the fact that d commutes with LX, $\alpha_s$ and h, we get: -$$ -\omega = L_X h \omega = (d\iota_X + \iota_X d) h \omega = d (\iota_X h \omega) + \iota_X h d \omega. -$$ - -Setting -$$ -g(\omega)= \iota_X h (\omega), -$$ - -leads to the identity -$$ -(dg + gd) \omega = \omega. -$$ - -It now follows that if ω is closed, i.e. dω = 0, then d(g ω) = ω, so that ω is exact and the Poincaré lemma is proved. - -(In the language of homological algebra, g is a "contracting homotopy".) - -The same method applies to any open set in Rn that is star-shaped about 0, i.e. any open set containing 0 and invariant under αt for $0 < t < \infty$. - -Another standard proof of the Poincaré lemma uses the homotopy invariance formula and can be found in Singer, Lee, Tu and Bott. The local form of the homotopy operator is described in Edelen and the connection of the lemma with the Maurer-Cartan form is explained in Sharpe.
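Before the homotopy reformulation below, the contracting homotopy can be made concrete in the simplest case. The following sympy sketch (an illustrative addition; it uses the standard integral formula $(h\omega)(x) = \int_0^1 \omega_{tx}(x)\,dt$ for 1-forms on a domain star-shaped about 0, a finite-interval variant of the flow construction above) recovers a potential of a closed 1-form and verifies $df = \omega$:

```python
import sympy as sp

x, y, t = sp.symbols("x y t", real=True)

# A closed 1-form on R^2: omega = p dx + q dy with p_y = q_x = 2*x
p = 2 * x * y + 1
q = x**2

# Homotopy operator for 1-forms on a domain star-shaped about 0:
# f(x, y) = Integral_0^1 [p(t*x, t*y)*x + q(t*x, t*y)*y] dt
f = sp.simplify(sp.integrate(p.subs({x: t * x, y: t * y}) * x
                             + q.subs({x: t * x, y: t * y}) * y, (t, 0, 1)))
print(f)  # x**2*y + x

# Verify that df = omega, i.e. f is a potential for the closed form
assert sp.simplify(sp.diff(f, x) - p) == 0
assert sp.simplify(sp.diff(f, y) - q) == 0
```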
- -This formulation can be phrased in terms of homotopies between open domains U in Rm and V in Rn. If F(t,x) is a homotopy from [0,1] × U to V, set Ft(x) = F(t,x). For $\omega$ a p-form on V, define -$$ -g(\omega)=\int_0^1 \iota_{\partial_t} (F_t^*(\omega)) dt -$$ - -Then -$$ -(dg+gd)\omega= \int_0^1 (d\iota_{\partial_t} + \iota_{\partial_t} d)F_t^*(\omega) dt =\int_0^1 L_{\partial_t} F_t^*(\omega) dt= \int_0^1 \partial_t F_t^*(\omega) dt = F_1^*(\omega) - F_0^*(\omega). -$$ - -Example: In two dimensions the Poincaré lemma can be proved directly for closed 1-forms and 2-forms as follows. - -If ω = p dx + q dy is a closed 1-form on (a, b) × (c, d), then py = qx. If ω = df then p = fx and q = fy. Set -$$ -g(x,y)=\int_a^x p(t,y) dt, -$$ - -so that gx = p. Then h = f − g must satisfy hx = 0 and hy = q − gy. The right hand side here is independent of x since its partial derivative with respect to x is 0. So -$$ -h(x,y)=\int_c^y q(a,s) ds - g(a,y)=\int_c^y q(a,s) ds, -$$ - -and hence -$$ -f(x,y)=\int_a^x p(t,y) dt + \int_c^y q(a,s) ds. -$$ - -Similarly, if Ω = r dx ∧ dy then Ω = d(a dx + b dy) with bx − ay = r. Thus a solution is given by a = 0 and -$$ -b(x,y)=\int_a^x r(t,y) dt. -$$ - -When the difference of two closed forms is an exact form, they are said to be cohomologous to each other. That is, if ζ and η are closed forms, and one can find some β such that -$$ -\zeta - \eta = d\beta -$$ - -then one says that ζ and η are cohomologous to each other. Exact forms are sometimes said to be cohomologous to zero. The set of all forms cohomologous to a given form (and thus to each other) is called a de Rham cohomology class; the general study of such classes is known as cohomology. It makes no real sense to ask whether a 0-form (smooth function) is exact, since d increases degree by 1; but the clues from topology suggest that only the zero function should be called "exact". The cohomology classes are identified with locally constant functions. - -Using contracting homotopies similar to the one used in the proof of the Poincaré lemma, it can be shown that de Rham cohomology is homotopy-invariant. - -In electrodynamics, the case of the magnetic field $\vec B(\mathbf r)$ produced by a stationary electrical current is important. There one deals with the vector potential $\vec A(\mathbf r )$ of this field. This case corresponds to k = 2, and the defining region is the full $\R^3$. The current-density vector is $\vec j$. It corresponds to the current two-form -$$ -\mathbf I :=j_1(x_1,x_2, x_3) {\rm d}x_2\wedge {\rm d}x_3+j_2(x_1,x_2, x_3) {\rm d}x_3\wedge {\rm d}x_1+j_3(x_1,x_2, x_3) {\rm d}x_1\wedge {\rm d}x_2. -$$ - -For the magnetic field $\vec B$ one has analogous results: it corresponds to the induction two-form $\Phi_B:=B_1{\rm d}x_2\wedge {\rm d}x_3 +\cdots$, and can be derived from the vector potential $\vec A$, or the corresponding one-form $\mathbf A$, - - \vec B =\operatorname{curl}\vec A =\left\{ \frac{\partial A_3}{\partial x_2}-\frac{\partial A_2}{\partial x_3} , \frac{\partial A_1}{\partial x_3}-\frac{\partial A_3}{\partial x_1} ,\frac{\partial A_2}{\partial x_1}-\frac{\partial A_1}{\partial x_2}\right\}, - -\text{ or } - -\Phi_B={\rm d}\mathbf A. - -Thereby the vector potential $\vec A$ corresponds to the potential one-form -$$ -\mathbf A:=A_1 {\rm d}x_1+A_2 {\rm d}x_2+A_3 {\rm d}x_3.
-$$ - -The closedness of the magnetic-induction two-form corresponds to the property of the magnetic field that it is source-free: $\operatorname{div} \vec B \equiv 0$, i.e., that there are no magnetic monopoles. - -In a special gauge, $\operatorname{div}\vec A{~\stackrel{!}{=}~}0$, this implies for i = 1, 2, 3 -$$ -A_i(\vec r) = \int \frac{\mu_0 j_i\left(\vec r^{'}\right) dx_1'dx_2'dx_3'}{4\pi |\vec r -\vec r^{'}|} . -$$ - -(Here $\mu_0$ is a constant, the magnetic vacuum permeability.) - -This equation is remarkable, because it corresponds completely to a well-known formula for the electrical field $\vec E$, namely for the electrostatic Coulomb potential $\phi (x_1,x_2, x_3)$ of a charge density $\rho (x_1,x_2,x_3)$. At this place one can already guess that - -*$\vec E$ and $\vec B ,$ - -*$\rho $ and $\vec j ,$ - -*$\phi$ and $\vec A$ - -can be unified to quantities with six resp. four nontrivial components, which is the basis of the relativistic invariance of the Maxwell equations. - -If the condition of stationarity is dropped, then in the equations for $A_i$ one must add the time t as a fourth variable to the three space coordinates on the left-hand side, whereas on the right-hand side the so-called "retarded time" $t':=t-\frac$ must be used in the argument of the current density $j_i$. Finally, as before, one integrates over the three primed space coordinates. (As usual c is the vacuum velocity of light.) diff --git a/wiki/wikipedia/4193.txt b/wiki/wikipedia/4193.txt deleted file mode 100644 index a353f8d548b25bc16faf7c6ff22aaa1d17ca0930..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4193.txt +++ /dev/null @@ -1,17 +0,0 @@ -Easy2Sync for Files is backup and file synchronization software created for use with the Microsoft Windows environment. It allows backing up and synchronizing files between two folder trees on the same or different drives / computers, including network and USB drives and FTP servers. - -* It has the capability to remember the previous state of directories in a database, and thus also synchronize deletions. It can also detect renamed directories. - -* The program fully supports Unicode characters so that it can copy filenames in all languages. - -* Includes a scheduler. - -* Supports sync over FTP. - -* Versioning, the ability to keep multiple older versions of each file in the backup. - -* Files / folders can be excluded from the sync by name, age or size. - -* Includes modes to (instead of synchronizing) copy, move or delete files or to flatten a directory structure into a single directory. - -* The program can be installed onto portable drives (USB). diff --git a/wiki/wikipedia/4194.txt b/wiki/wikipedia/4194.txt deleted file mode 100644 index 5942668f57fcb985b34248c4da8bb420a976220f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4194.txt +++ /dev/null @@ -1,45 +0,0 @@ -In computer science and mathematical logic, a proof assistant or interactive theorem prover is a software tool to assist with the development of formal proofs by human-machine collaboration. This involves some sort of interactive proof editor, or other interface, with which a human can guide the search for proofs, the details of which are stored in, and some steps provided by, a computer. - -* ACL2 – a programming language, a first-order logical theory, and a theorem prover (with both interactive and automatic modes) in the Boyer–Moore tradition.
- -* Coq – Allows the expression of mathematical assertions, mechanically checks proofs of these assertions, helps to find formal proofs, and extracts a certified program from the constructive proof of its formal specification. - -* HOL theorem provers – A family of tools ultimately derived from the LCF theorem prover. In these systems the logical core is a library of their programming language. Theorems represent new elements of the language and can only be introduced via "strategies" which guarantee logical correctness. Strategy composition gives users the ability to produce significant proofs with relatively few interactions with the system. Members of the family include: - -**HOL4 – The "primary descendant", still under active development. Support for both Moscow ML and Poly/ML. Has a BSD-style license. - -**HOL Light – A thriving "minimalist fork". OCaml based. - -**ProofPower – Went proprietary, then returned to open source. Based on Standard ML. - -* IMPS – An Interactive Mathematical Proof System. - -* Isabelle – An interactive theorem prover, successor of HOL. The main code-base is BSD-licensed, but the Isabelle distribution bundles many add-on tools with different licenses. - -* Jape – Java based. - -* Lean - -* Matita – A light system based on the Calculus of Inductive Constructions. - -* MINLOG – A proof assistant based on first-order minimal logic. - -* Mizar – A proof assistant based on first-order logic, in a natural deduction style, and Tarski–Grothendieck set theory. - -* PhoX – A proof assistant based on higher-order logic which is eXtensible. - -* Prototype Verification System (PVS) – a proof language and system based on higher-order logic. - -* TPS and ETPS – Interactive theorem provers also based on simply-typed lambda calculus, but based on an independent formulation of the logical theory and independent implementation. - -The Theorem Prover Museum is an initiative to conserve the sources of theorem prover systems for future analysis, since they are important cultural/scientific artefacts. It has the sources of many of the systems mentioned above. - -A popular front-end for proof assistants is the Emacs-based Proof General, developed at the University of Edinburgh. - -Coq includes CoqIDE, which is based on OCaml/Gtk. Isabelle includes Isabelle/jEdit, which is based on jEdit and the Isabelle/Scala infrastructure for document-oriented proof processing. More recently, a Visual Studio Code extension for Isabelle has also been developed by Makarius Wenzel. diff --git a/wiki/wikipedia/4195.txt b/wiki/wikipedia/4195.txt deleted file mode 100644 index 874d7938f4afdb00094792cd3eaa1b464eff3c43..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4195.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, Nirenberg's conjecture, now Osserman's theorem, states that if the Gauss map of a complete minimal surface omits a neighborhood of some point on the sphere, then the surface in question is a plane. It was proved by Robert Osserman in 1959. - -*Osserman, R. (1959). "Proof of a Conjecture of Nirenberg." Comm. Pure Appl. Math. 12, pp. 229–232. diff --git a/wiki/wikipedia/4196.txt b/wiki/wikipedia/4196.txt deleted file mode 100644 index 0c47e3e6bf3487c451abe2d713b32744a15e2c58..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4196.txt +++ /dev/null @@ -1,61 +0,0 @@ -In mathematics, the Beauville–Laszlo theorem is a result in commutative algebra and algebraic geometry that allows one to "glue" two sheaves over an infinitesimal neighborhood of a point on an algebraic curve.
It was proved by Arnaud Beauville and Yves Laszlo. - -Although it has implications in algebraic geometry, the theorem is a local result and is stated in its most primitive form for commutative rings. If A is a ring and f is a nonzero element of A, then we can form two derived rings: the localization at f, Af, and the completion at Af, Â; both are A-algebras. In the following we assume that f is a non-zero divisor. Geometrically, A is viewed as a scheme X = Spec A and f as a divisor (f) on Spec A; then its complement is the principal open set Df = Spec Af determined by f, while Â gives an "infinitesimal neighborhood" D = Spec Â of (f). The intersection of Df and Spec Â is a "punctured infinitesimal neighborhood" D0 about (f), equal to Spec Â ⊗A Af = Spec Âf. - -Suppose now that we have an A-module M; geometrically, M is a sheaf on Spec A, and we can restrict it to both the principal open set Df and the infinitesimal neighborhood Spec Â, yielding an Af-module F and an Â-module G. Algebraically, -$$ -F = M \otimes_A A_f = M_f \qquad G = M \otimes_A \hat{A}. -$$ - -(Despite the notational temptation to write $G = \widehat{M}$, meaning the completion of the A-module M at the ideal Af, unless A is noetherian and M is finitely-generated, the two are not in fact equal. This phenomenon is the main reason that the theorem bears the names of Beauville and Laszlo; in the noetherian, finitely-generated case, it is, as noted by the authors, a special case of Grothendieck's faithfully flat descent.) F and G can both be further restricted to the punctured neighborhood D0, and since both restrictions are ultimately derived from M, they are isomorphic: we have an isomorphism -$$ -\phi \colon G_f \xrightarrow{\sim} F \otimes_{A_f} \hat{A}_f = F \otimes_A \hat{A}. -$$ - -Now consider the converse situation: we have a ring A and an element f, and two modules: an Af-module F and an Â-module G, together with an isomorphism φ as above. Geometrically, we are given a scheme X and both an open set Df and a "small" neighborhood D of its closed complement (f); on Df and D we are given two sheaves which agree on the intersection D0 = Df ∩ D. If D were an open set in the Zariski topology we could glue the sheaves; the content of the Beauville-Laszlo theorem is that, under one technical assumption on f, the same is true for the infinitesimal neighborhood D as well. - -Theorem: Given A, f, F, G, and φ as above, if G has no f-torsion, then there exist an A-module M and isomorphisms -$$ -\alpha \colon M_f \xrightarrow{\sim} F \qquad \beta \colon M \otimes_A \hat{A} \xrightarrow{\sim} G -$$ - -consistent with the isomorphism φ: φ is equal to the composition -$$ -G_f = G \otimes_A A_f \xrightarrow{\beta^{-1} \otimes 1} M \otimes_A \hat{A} \otimes_A A_f = M_f \otimes_A \hat{A} \xrightarrow{\alpha \otimes 1} F \otimes_A \hat{A}. -$$ - -The technical condition that G has no f-torsion is referred to by the authors as "f-regularity". In fact, one can state a stronger version of this theorem. Let M(A) be the category of A-modules (whose morphisms are A-module homomorphisms) and let Mf(A) be the full subcategory of f-regular modules. In this notation, we obtain a commutative diagram of categories (note Mf(Af) = M(Af)): - -\begin{array}{ccc} - -\mathbf{M}_f(A) & \longrightarrow & \mathbf{M}_f(\hat{A}) \\ - -\downarrow & & \downarrow \\ - -\mathbf{M}(A_f) & \longrightarrow & \mathbf{M}(\hat{A}_f) - -\end{array} - -in which the arrows are the base-change maps; for example, the top horizontal arrow acts on objects by M → M ⊗A Â.
- -Theorem: The above diagram is a cartesian diagram of categories. - -In geometric language, the Beauville-Laszlo theorem allows one to glue sheaves on a one-dimensional affine scheme over an infinitesimal neighborhood of a point. Since sheaves have a "local character" and since any scheme is locally affine, the theorem admits a global statement of the same nature. The version of this statement that the authors found noteworthy concerns vector bundles: - -Theorem: Let X be an algebraic curve over a field k, x a k-rational smooth point on X with infinitesimal neighborhood D = Spec k[[t]], R a k-algebra, and r a positive integer. Then the category Vectr(XR) of rank-r vector bundles on the curve XR = X ×Spec k Spec R fits into a cartesian diagram: - -\begin{array}{ccc} - -\mathbf{Vect}_r(X_R) & \longrightarrow & \mathbf{Vect}_r(D_R) \\ - -\downarrow & & \downarrow \\ - -\mathbf{Vect}_r((X \setminus x)_R) & \longrightarrow & \mathbf{Vect}_r(D_R^0) - -\end{array} - -This entails a corollary stated in the paper: - -Corollary: With the same setup, denote by Triv(XR) the set of triples (E, τ, σ), where E is a vector bundle on XR, τ is a trivialization of E over (X \ x)R (i.e., an isomorphism with the trivial bundle O(X \ x)R), and σ a trivialization over DR. Then the maps in the above diagram furnish a bijection between Triv(XR) and GLr(R((t))) (where R((t)) is the formal Laurent series ring). - -The corollary follows from the theorem in that the triple is associated with the unique matrix which, viewed as a "transition function" over D0R between the trivial bundles over (X \ x)R and over DR, allows gluing them to form E, with the natural trivializations of the glued bundle then being identified with σ and τ. The importance of this corollary is that it shows that the affine Grassmannian may be formed either from the data of bundles over an infinitesimal disk, or bundles on an entire algebraic curve. diff --git a/wiki/wikipedia/4197.txt b/wiki/wikipedia/4197.txt deleted file mode 100644 index cb523d9725357d994f8423cde61bc0cad31bb0fe..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4197.txt +++ /dev/null @@ -1,191 +0,0 @@ -In engineering, applied mathematics, and physics, the Buckingham pi theorem is a key theorem in dimensional analysis. It is a formalization of Rayleigh's method of dimensional analysis. Loosely, the theorem states that if there is a physically meaningful equation involving a certain number n of physical variables, then the original equation can be rewritten in terms of a set of p = n - k dimensionless parameters π1, π2, ..., πp constructed from the original variables. (Here k is the number of physical dimensions involved; it is obtained as the rank of a particular matrix.) - -The theorem provides a method for computing sets of dimensionless parameters from the given variables, or nondimensionalization, even if the form of the equation is still unknown. - -The Buckingham pi theorem indicates that validity of the laws of physics does not depend on a specific unit system. A statement of this theorem is that any physical law can be expressed as an identity involving only dimensionless combinations (ratios or products) of the variables linked by the law (for example, pressure and volume are linked by Boyle's law – they are inversely proportional). If the dimensionless combinations' values changed with the systems of units, then the equation would not be an identity, and Buckingham's theorem would not hold.
- -Although named for Edgar Buckingham, the pi theorem was first proved by French mathematician Joseph Bertrand in 1878. Bertrand considered only special cases of problems from electrodynamics and heat conduction, but his article contains, in distinct terms, all the basic ideas of the modern proof of the theorem and clearly indicates the theorem's utility for modelling physical phenomena. The technique of using the theorem ("the method of dimensions") became widely known due to the works of Rayleigh. The first application of the pi theorem in the general case to the dependence of pressure drop in a pipe upon governing parameters probably dates back to 1892, a heuristic proof with the use of series expansions, to 1894. - -Formal generalization of the pi theorem for the case of arbitrarily many quantities was given first by A. Vaschy in 1892, then in 1911—apparently independently—by both A. Federman and D. Riabouchinsky, and again in 1914 by Buckingham. It was Buckingham's article that introduced the use of the symbol "$\pi_i$" for the dimensionless variables (or parameters), and this is the source of the theorem's name. - -More formally, the number $p$ of dimensionless terms that can be formed is equal to the nullity of the dimensional matrix, and $k$ is the rank. For experimental purposes, different systems that share the same description in terms of these dimensionless numbers are equivalent. - -In mathematical terms, if we have a physically meaningful equation such as - -f(q_1,q_2,\ldots,q_n)=0, - -where $q_1, \ldots, q_n$ are the $n$ independent physical variables, and they are expressed in terms of $k$ independent physical units, then the above equation can be restated as - -F(\pi_1,\pi_2,\ldots,\pi_p)=0, - -where $\pi_1, \ldots, \pi_p$ are dimensionless parameters constructed from the $q_i$ by $p = n - k$ dimensionless equations — the so-called Pi groups — of the form - -\pi_i=q_1^{a_1}q_2^{a_2} \cdots q_n^{a_n}, - -where the exponents $a_i$ are rational numbers (they can always be taken to be integers by redefining $\pi_i$ as being raised to a power that clears all denominators). - -The Buckingham pi theorem provides a method for computing sets of dimensionless parameters from given variables, even if the form of the equation remains unknown. However, the choice of dimensionless parameters is not unique; Buckingham's theorem only provides a way of generating sets of dimensionless parameters and does not indicate the most "physically meaningful". - -Two systems for which these parameters coincide are called similar (as with similar triangles, they differ only in scale); they are equivalent for the purposes of the equation, and the experimentalist who wants to determine the form of the equation can choose the most convenient one. Most importantly, Buckingham's theorem describes the relation between the number of variables and fundamental dimensions. - -It will be assumed that the space of fundamental and derived physical units forms a vector space over the rational numbers, with the fundamental units as basis vectors, and with multiplication of physical units as the "vector addition" operation, and raising to powers as the "scalar multiplication" operation: - -represent a dimensional variable as the set of exponents needed for the fundamental units (with a power of zero if the particular fundamental unit is not present). 
For instance, the standard gravity $g$ has units of $D/T^2 = D^1 T^{-2}$ (distance over time squared), so it is represented as the vector $(1, -2)$ with respect to the basis of fundamental units (distance, time). - -Making the physical units match across sets of physical equations can then be regarded as imposing linear constraints in the physical-units vector space. - -Given a system of $n$ dimensional variables $q_1, \ldots, q_n$ (with physical dimensions) in $k$ fundamental (basis) dimensions, the dimensional matrix is the $k \times n$ matrix $M$ whose $k$ rows are the fundamental dimensions and whose $n$ columns are the dimensions of the variables: the $(i, j)$th entry (where $1 \leq i \leq k$ and $1 \leq j \leq n$) is the power of the $i$th fundamental dimension in the $j$th variable. - -The matrix can be interpreted as taking in a combination of the dimensions of the variable quantities and giving out the dimensions of this product in fundamental dimensions. So the $k \times 1$ (column) vector that results from the multiplication - -M\begin{bmatrix}a_1\\ \vdots \\ a_n\end{bmatrix} - -consists of the units of - -q_1^{a_1}q_2^{a_2}\cdots q_n^{a_n} - -in terms of the $k$ fundamental independent (basis) units. - -This example is elementary but serves to demonstrate the procedure. - -Suppose a car is driving at 100 km/h; how long does it take to go 200 km? - -This question considers $n = 3$ dimensioned variables: distance $d,$ time $t,$ and speed $v,$ and we are seeking some law of the form $t = \operatorname{Duration}(v, d).$ These variables admit a basis of $k = 2$ dimensions: time dimension $T$ and distance dimension $D.$ Thus there is $p = n - k = 3 - 2 = 1$ dimensionless quantity. - -The dimensional matrix is - -M = \begin{bmatrix} - -1 & 0 & 1\\ - -0 & 1 & -1 - -\end{bmatrix} - -in which the rows correspond to the basis dimensions $D$ and $T,$ and the columns to the considered dimensions $D, T, \text{ and } V,$ where the latter stands for the speed dimension. The elements of the matrix correspond to the powers to which the respective dimensions are to be raised. For instance, the third column, $(1, -1),$ states that $V = D^0 T^0 V^1,$ represented by the column vector $\mathbf{v}=[0,0,1],$ is expressible in terms of the basis dimensions as $V = D^1 T^{-1} = D/T,$ since $M\mathbf{v} = [1,-1].$ - -For a dimensionless constant $\pi=D^{a_1}T^{a_2}V^{a_3},$ we are looking for vectors $\mathbf{a}=[a_1,a_2,a_3]$ such that the matrix-vector product $M \mathbf{a}$ equals the zero vector $[0, 0].$ In linear algebra, the set of vectors with this property is known as the kernel (or nullspace) of (the linear map represented by) the dimensional matrix. In this particular case its kernel is one-dimensional. The dimensional matrix as written above is in reduced row echelon form, so one can read off a non-zero kernel vector to within a multiplicative constant: - -\mathbf{a} = \begin{bmatrix} -1\\ 1\\ 1\\ \end{bmatrix}. - -If the dimensional matrix were not already reduced, one could perform Gauss–Jordan elimination on the dimensional matrix to more easily determine the kernel. It follows that the dimensionless constant, replacing the dimensions by the corresponding dimensioned variables, may be written: - -\pi = d^{-1}t^1v^1 = tv/d. - -Since the kernel is only defined to within a multiplicative constant, the above dimensionless constant raised to any arbitrary power yields another (equivalent) dimensionless constant.
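The kernel computation just carried out is easy to mechanize. The following sympy sketch (an illustrative addition, not part of the original text) reads the pi group of the car example off the nullspace of the dimensional matrix; the same two-line computation applies to the dimensional matrices of the later examples.

```python
import sympy as sp

# Dimensional matrix of the car example: rows are the basis dimensions (D, T),
# columns the variables (d, t, v)
M = sp.Matrix([[1, 0, 1],
               [0, 1, -1]])

# Exponent vectors of dimensionless products form the kernel of M
kernel = M.nullspace()
print(kernel)  # [Matrix([[-1], [1], [1]])], i.e. pi = d**(-1) * t * v = t*v/d

# Sanity check: each kernel vector produces zero net dimension
assert all((M * v).is_zero_matrix for v in kernel)
```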
- -Dimensional analysis has thus provided a general equation relating the three physical variables: - -F(\pi)=0, - -or, letting $C$ denote a zero of function $F,$ - -\pi=C, - -which can be written in the desired form (which recall was $t = \operatorname{Duration}(v, d)$) as - -t = C\frac{d}{v}. - -The actual relationship between the three variables is simply $d = vt.$ In other words, in this case $F$ has one physically relevant root, and it is unity. The fact that only a single value of $C$ will do and that it is equal to 1 is not revealed by the technique of dimensional analysis. - -We wish to determine the period $T$ of small oscillations in a simple pendulum. It will be assumed that it is a function of the length $L,$ the mass $M,$ and the acceleration due to gravity on the surface of the Earth $g,$ which has dimensions of length divided by time squared. The model is of the form - -f(T,M,L,g) = 0. - -(Note that it is written as a relation, not as a function: $T$ is not written here as a function of $M, L, \text{ and } g.$) - -There are $k = 3$ fundamental physical dimensions in this equation: time $t,$ mass $m,$ and length $\ell,$ and $n = 4$ dimensional variables, $T, M, L, \text{ and } g.$ Thus we need only $p = n - k = 4 - 3 = 1$ dimensionless parameter, denoted by $\pi,$ and the model can be re-expressed as - -F(\pi) = 0, - -where $\pi$ is given by - -\pi = T^{a_1}M^{a_2}L^{a_3}g^{a_4} - -for some values of $a_1, a_2, a_3, a_4.$ - -The dimensions of the dimensional quantities are: - -T = t, M = m, L = \ell, g = \ell/t^2. - -The dimensional matrix is: - -\mathbf{M} = \begin{bmatrix} - -1 & 0 & 0 & -2\\ - -0 & 1 & 0 & 0\\ - -0 & 0 & 1 & 1 - -\end{bmatrix}. - -(The rows correspond to the dimensions $t, m,$ and $\ell,$ and the columns to the dimensional variables $T, M, L, \text{ and } g.$ For instance, the 4th column, $(-2, 0, 1),$ states that the $g$ variable has dimensions of $t^{-2}m^0 \ell^1.$) - -We are looking for a kernel vector $a = \left[a_1, a_2, a_3, a_4\right]$ such that the matrix product of $\mathbf{M}$ on $a$ yields the zero vector $[0,0,0].$ The dimensional matrix as written above is in reduced row echelon form, so one can read off a kernel vector within a multiplicative constant: - -a = \begin{bmatrix}2\\ 0 \\ -1 \\ 1\end{bmatrix}. - -Were it not already reduced, one could perform Gauss–Jordan elimination on the dimensional matrix to more easily determine the kernel. It follows that the dimensionless constant may be written: - -\begin{align} - -\pi &= T^2M^0L^{-1}g^1\\ - -&= gT^2/L - -\end{align}. - -In fundamental terms: - -\pi = (t)^2 (m)^0 (\ell)^{-1} \left(\ell/t^2\right)^1 = 1, - -which is dimensionless. Since the kernel is only defined to within a multiplicative constant, if the above dimensionless constant is raised to any arbitrary power, it will yield another equivalent dimensionless constant. - -In this example, three of the four dimensional quantities are fundamental units, so the last (which is $g$) must be a combination of the previous. - -Note that if $a_2$ (the coefficient of $M$) had been non-zero then there would be no way to cancel the $M$ value; therefore $a_2$ must be zero. Dimensional analysis has allowed us to conclude that the period of the pendulum is not a function of its mass $M.$ (In the 3D space of powers of mass, time, and distance, we can say that the vector for mass is linearly independent from the vectors for the three other variables. 
Up to a scaling factor, $\vec g + 2 \vec T - \vec L$ is the only nontrivial way to construct a vector of a dimensionless parameter.) - -The model can now be expressed as: - -F\left(gT^2/L\right) = 0. - -Assuming the zeroes of $F$ are discrete and that they are labelled $C_1, C_2, \ldots,$ then this implies that $gT^2/L = C_i$ for some zero $C_i$ of the function $F.$ If there is only one zero, call it $C,$ then $gT^2/L = C.$ It requires more physical insight or an experiment to show that there is indeed only one zero and that the constant is in fact given by $C = 4\pi^2.$ - -For large oscillations of a pendulum, the analysis is complicated by an additional dimensionless parameter, the maximum swing angle. The above analysis is a good approximation as the angle approaches zero. - -Drinks cooled with small ice cubes cool faster than drinks cooled with the same mass of larger ice cubes. The common explanation for this phenomenon is that smaller cubes have greater surface area, and this greater area causes greater heat conduction and therefore faster cooling. For a given volume of ice, the total surface area of the ice is proportional to $L^2$ (the surface area of a single cube) times $V/L^3$ (the number of cubes), where $L$ is the length of the cube edges, and $V$ is the volume of ice. If the common explanation were correct, then it would imply that for a fixed volume of ice the rate of cooling should be proportional to $1/L,$ and thus the time for the drink to cool should be proportional to $L.$ In fact, dimensional analysis shows this common explanation to be incorrect, and gives the surprising result that the time to cool the drink is proportional to $L^2.$ - -The important dimensional quantities are the length scale of the cubes $L$ (dimension $\ell$), the time $T$ (dimension $t$), the temperature $\Theta$ (dimension $\theta$), the thermal conductivity $\kappa$ (dimensions $\ell^{1}t^{-3}\theta^{-1}m^{1}$), and the volumetric heat capacity $s$ (dimensions \ell^{-1}t^{-2}\theta^{-1}m^1 - -). The dimensional matrix is: - -M = \left[\begin{array}{rrrrr} - -1 & 0 & 0 & 1 & -1\\ - -0 & 1 & 0 & -3 & -2\\ - -0 & 0 & 1 & -1 & -1\\ - -0 & 0 & 0 & 1 & 1 - -\end{array}\right]. - -The nullspace of $M$ is 1-dimensional, and the kernel is spanned by the vector - -a = \left[\begin{array}{r} - -2\\ -1 \\ 0 \\ -1 \\ 1 - -\end{array}\right] - -and therefore $\pi = L^2 T^{-1} \kappa^{-1} s^1.$ (Note that the temperature $\Theta$ does not appear in the dimensionless group.) Therefore, the cooling time of the drink is solved by an implicit function - -F(L^2 T^{-1} \kappa^{-1} s^1) = 0 - -that is, when the argument of the function $L^2 T^{-1} \kappa^{-1} s^1$ is some constant $c.$ Therefore, the drink cooling time is $T=c L^2 \kappa^{-1} s^1,$ so that the cooling time is proportional to the length scale of the ice cube squared, not just the length scale. - -An example of dimensional analysis can be found for the case of the mechanics of a thin, solid and parallel-sided rotating disc. There are five variables involved which reduce to two non-dimensional groups. The relationship between these can be determined by numerical experiment using, for example, the finite element method. - -The theorem has also been used in fields other than physics, for instance in sport sciences. 
diff --git a/wiki/wikipedia/4198.txt b/wiki/wikipedia/4198.txt deleted file mode 100644 index bd27f3b295e860b586425d95f8888cad835420a5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4198.txt +++ /dev/null @@ -1,5 +0,0 @@ -Campbell's theorem, also known as Campbell’s embedding theorem and the Campbell–Magaard theorem, is a mathematical theorem about embeddings of Riemannian manifolds. The theorem guarantees that any n-dimensional Riemannian manifold can be locally embedded in an (n + 1)-dimensional Ricci-flat Riemannian manifold. - -Campbell's theorem states that any n-dimensional Riemannian manifold can be embedded locally in an (n + 1)-manifold with a Ricci curvature of Ra b = 0. The theorem also states, in similar form, that an n-dimensional pseudo-Riemannian manifold can be both locally and isometrically embedded in an n(n + 1)/2-pseudo-Euclidean space. - -Campbell’s theorem can be used to produce the embedding of numerous 4-dimensional spacetimes in 5-dimensional Ricci-flat spaces. It is also used to embed a class of n-dimensional Einstein spaces. diff --git a/wiki/wikipedia/4199.txt b/wiki/wikipedia/4199.txt deleted file mode 100644 index dd902c9a50b4029b4a9fbc54afab33f74e1be13c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4199.txt +++ /dev/null @@ -1,21 +0,0 @@ -The Lyapunov–Malkin theorem (named for Aleksandr Lyapunov and Ioel Malkin) is a mathematical theorem detailing nonlinear stability of systems. - -In the system of differential equations, -$$ -\dot x = Ax + X(x,y),\quad\dot y = Y(x,y) -$$ - -where $x \in \mathbb{R}^m$, $y \in \mathbb{R}^n$, $A$ is an m × m matrix, and X(x, y), Y(x, y) represent higher order nonlinear terms. If all eigenvalues of the matrix $A$ have negative real parts, and X(x, y), Y(x, y) vanish when x = 0, then the solution x = 0, y = 0 of this system is stable with respect to (x, y) and asymptotically stable with respect to x. If a solution (x(t), y(t)) is close enough to the solution x = 0, y = 0, then -$$ -\lim_{t \to \infty}x(t) = 0,\quad \lim_{t \to \infty}y(t) = c. -$$ - -Consider the vector field given by - -\dot x = -x + x^2y, - -\quad\dot y = xy^2 - -In this case, A = -1 and X(0, y) = Y(0, y) = 0 for all y, so this system satisfies the hypothesis of the Lyapunov–Malkin theorem. - -The figure below shows a plot of this vector field along with some trajectories that pass near (0,0). As expected by the theorem, it can be seen that trajectories in a neighborhood of (0,0) converge to a point of the form (0, c). diff --git a/wiki/wikipedia/42.txt b/wiki/wikipedia/42.txt deleted file mode 100644 index 1242c8b6b46f9222365b6d5880221d14d83f8d77..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/42.txt +++ /dev/null @@ -1,116 +0,0 @@ -In measure theory, Carathéodory's extension theorem (named after the mathematician Constantin Carathéodory) states that any pre-measure defined on a given ring R of subsets of a given set Ω can be extended to a measure on the σ-algebra generated by R, and this extension is unique if the pre-measure is σ-finite. Consequently, any pre-measure on a ring containing all intervals of real numbers can be extended to the Borel algebra of the set of real numbers. This is an extremely powerful result of measure theory, and leads, for example, to the Lebesgue measure.
- -The theorem is also sometimes known as the Carathéodory-Fréchet extension theorem, the Carathéodory–Hopf extension theorem, the Hopf extension theorem and the Hahn–Kolmogorov extension theorem. - -Several very similar statements of the theorem can be given. A slightly more involved one, based on semi-rings of sets, is given further down below. A shorter, simpler statement is as follows. In this form, it is often called the Hahn–Kolmogorov theorem. - -Let $\Sigma_0$ be an algebra of subsets of a set $X.$ Consider a function -$$ -\mu_0\colon \Sigma_0 \to[0,\infty] -$$ - -which is finitely additive, meaning that -$$ -\mu_0\left(\bigcup_{n=1}^N A_n\right)=\sum_{n=1}^N \mu_0(A_n) -$$ - -for any positive integer N and $A_1, A_2, \dots, A_N$ disjoint sets in $\Sigma_0$. - -Assume that this function satisfies the stronger sigma additivity assumption -$$ - \mu_0\left(\bigcup_{n=1}^\infty A_n\right) = \sum_{n=1}^\infty \mu_0(A_n) -$$ - -for any disjoint family $\{A_n:n\in \mathbb{N}\}$ of elements of $\Sigma_0$ such that $\cup_{n=1}^\infty A_n\in \Sigma_0$. (Functions $\mu_0$ obeying these two properties are known as pre-measures.) Then, -$$ -\mu_0 -$$ extends to a measure defined on the sigma-algebra $\Sigma$ generated by $\Sigma_0$; i.e., there exists a measure -$$ -\mu \colon \Sigma \to[0,\infty] -$$ - -such that its restriction to $\Sigma_0$ coincides with $\mu_0.$ - -If $\mu_0$ is $\sigma$-finite, then the extension is unique. - -This theorem is remarkable for it allows one to construct a measure by first defining it on a small algebra of sets, where its sigma additivity could be easy to verify, and then this theorem guarantees its extension to a sigma-algebra. The proof of this theorem is not trivial, since it requires extending $\mu_0$ from an algebra of sets to a potentially much bigger sigma-algebra, guaranteeing that the extension is unique (if $\mu_0$ is $\sigma$-finite), and moreover that it does not fail to satisfy the sigma-additivity of the original function. - -For a given set $\Omega$, we may define a semi-ring as a subset $ \mathcal{S} $ of $\mathcal{P}(\Omega)$, the power set of $\Omega$, which has the following properties: - -* $\emptyset \in \mathcal{S}$ - -* For all $ A, B \in \mathcal{S} $, we have $ A \cap B \in \mathcal{S} $ (closed under pairwise intersections) - -* For all $ A, B \in \mathcal{S} $, there exist disjoint sets $ K_i \in \mathcal{S}, i=1,2,\dots,n $, such that $ A \setminus B = \bigcup_{i = 1}^n K_i$ (relative complements can be written as finite disjoint unions). - -The first property can be replaced with $ \mathcal{S} \neq \emptyset $ since $ A \in \mathcal{S} \implies A \setminus A = \emptyset \in \mathcal{S} $. - -With the same notation, we define a ring $ \mathcal{R} $ as a subset of the power set of $ \Omega $ which has the following properties: - -* $ \emptyset \in \mathcal{R} $ - -* For all $A, B \in \mathcal{R} $, we have $ A \cup B \in \mathcal{R} $ (closed under pairwise unions) - -* For all $A, B \in \mathcal{R} $, we have $ A \setminus B \in \mathcal{R} $ (closed under relative complements). - -Thus, any ring on $ \Omega $ is also a semi-ring. - -Sometimes, the following constraint is added in the measure theory context: - -* $ \Omega $ is the disjoint union of a countable family of sets in $ \mathcal{S} $. - -A field of sets (respectively, a semi-field) is a ring (respectively, a semi-ring) that also contains $ \Omega $ as one of its elements. - -* Arbitrary (possibly uncountable) intersections of rings on Ω are still rings on Ω. 
- -* If A is a non-empty subset of $\mathcal{P}(\Omega)$, then we define the ring generated by A (noted R(A)) as the intersection of all rings containing A. It is straightforward to see that the ring generated by A is the smallest ring containing A. - -* For a semi-ring S, the set of all finite unions of sets in S is the ring generated by S: -$$ -R(S) = \left\{ A: A = \bigcup_{i=1}^{n}{A_i}, A_i \in S \right\} -$$ - -(One can show that R(S) is equal to the set of all finite disjoint unions of sets in S). - -* A content μ defined on a semi-ring S can be extended to the ring generated by S. Such an extension is unique. The extended content can be written: -$$ -\mu(A) = \sum_{i=1}^{n}{\mu(A_i)} -$$ for $A = \bigcup_{i=1}^{n}{A_i}$, with the $A_i \in S$ disjoint. - -In addition, it can be proved that μ is a pre-measure if and only if the extended content is also a pre-measure, and that any pre-measure on R(S) that extends the pre-measure on S is necessarily of this form. - -In measure theory, we are not interested in semi-rings and rings themselves, but rather in σ-algebras generated by them. The idea is that it is possible to build a pre-measure on a semi-ring S (for example Stieltjes measures), which can then be extended to a pre-measure on R(S), which can finally be extended to a measure on a σ-algebra through Carathéodory's extension theorem. As σ-algebras generated by semi-rings and rings are the same, the difference does not really matter (in the measure theory context at least). Actually, Carathéodory's extension theorem can be slightly generalized by replacing ring by semi-field. - -The definition of semi-ring may seem a bit convoluted, but the following example shows why it is useful (moreover it allows us to give an explicit representation of the smallest ring containing some semi-ring). - -Think about the subset of $\mathcal{P}(\mathbb{R})$ defined by the set of all half-open intervals [a, b) for a and b reals. This is a semi-ring, but not a ring. Stieltjes measures are defined on intervals; the countable additivity on the semi-ring is not too difficult to prove because we only consider countable unions of intervals which are intervals themselves. Proving it for arbitrary countable unions of intervals is accomplished using Carathéodory's theorem. - -Let $R$ be a ring on $\Omega$ and let μ: R → [0, + ∞] be a pre-measure on R, i.e. for all sets $A \in R$ for which there exists a countable decomposition $A = \bigcup_{i = 1}^\infty A_i$ into disjoint sets $A_i \in R, \forall i=1, 2, \ldots$, we have $\mu(A) = \sum_{i = 1}^\infty \mu(A_i)$. - -Let σ(R) be the σ-algebra generated by R. The pre-measure condition is a necessary condition for $\mu$ to be the restriction to R of a measure on $\sigma(R)$. Carathéodory's extension theorem states that it is also sufficient, i.e. there exists a measure μ′: σ(R) → [0, + ∞] such that μ′ is an extension of μ. (That is, μ′|R = μ.) Moreover, if μ is σ-finite then the extension μ′ is unique (and also σ-finite). - -There can be more than one extension of a pre-measure to the generated σ-algebra, if the pre-measure is not sigma-finite. - -Take the algebra generated by all half-open intervals [a,b) on the real line, and give such intervals measure infinity if they are non-empty. The Carathéodory extension gives all non-empty sets measure infinity. Another extension is given by the counting measure. - -This example is a more detailed variation of the above.
The rational closed-open interval is any subset of $\mathbb{Q}$ of the form $[a,b)$, where $a, b \in \mathbb{Q}$. - -Let $X$ be $\mathbb{Q}\cap[0,1)$ and let $\Sigma_0$ be the algebra of all finite unions of rational closed-open intervals contained in $\mathbb{Q}\cap[0,1)$. It is easy to prove that $\Sigma_0$ is, in fact, an algebra. It is also easy to see that the cardinal of every non-empty set in $\Sigma_0$ is $\aleph_0$. - -Let $\mu_0$ be the counting set function ($\#$) defined in $\Sigma_0$. - -It is clear that $\mu_0$ is finitely additive and $\sigma$-additive in $\Sigma_0$. Since every non-empty set in $\Sigma_0$ is infinite, then, for every non-empty set $A\in\Sigma_0$, $\mu_0(A)=+\infty$. - -Now, let $\Sigma$ be the $\sigma$-algebra generated by $\Sigma_0$. It is easy to see that $\Sigma$ is the Borel $\sigma$-algebra of subsets of $X$, and both $\#$ and $2\#$ are measures defined on $\Sigma$ and both are extensions of $\mu_0$. - -Another example is closely related to the failure of some forms of Fubini's theorem for spaces that are not σ-finite. Suppose that X is the unit interval with Lebesgue measure and Y is the unit interval with the discrete counting measure. Let the ring R be generated by products A×B where A is Lebesgue measurable and B is any subset, and give this set the measure μ(A)card(B). This has a very large number of different extensions to a measure; for example: - -*The measure of a subset is the sum of the measures of its horizontal sections. This is the smallest possible extension. Here the diagonal has measure 0. - -*The measure of a subset is $\int_0^1n(x)dx$ where n(x) is the number of points of the subset with given x-coordinate. The diagonal has measure 1. - -*The Carathéodory extension, which is the largest possible extension. Any subset of finite measure is contained in some union of a countable number of horizontal lines. In particular the diagonal has measure infinity. diff --git a/wiki/wikipedia/420.txt b/wiki/wikipedia/420.txt deleted file mode 100644 index d13eb12b599e5cf3deefbf369704e23b8856d8e9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/420.txt +++ /dev/null @@ -1,55 +0,0 @@ -The Sommerfeld identity is a mathematical identity, due to Arnold Sommerfeld, used in the theory of propagation of waves, - - - -\frac{{e^{ik R} }}{R} = \int\limits_0^\infty I_0(\lambda r) e^{ - \mu \left| z \right| } \frac{\lambda d\lambda}{\mu} - - - -where - - - -\mu = \sqrt {\lambda ^2 - k^2 } - - - -is to be taken with positive real part, to ensure the convergence of the integral and its vanishing in the limit $ z \rightarrow \pm \infty $, and - - - -R^2=r^2+z^2. - -Here, $R$ is the distance from the origin while $r$ is the distance from the central axis of a cylinder as in the $(r,\phi,z)$ cylindrical coordinate system. Here the notation for Bessel functions follows the German convention, to be consistent with the original notation used by Sommerfeld. The function $I_0(z)$ is the zeroth-order Bessel function of the first kind, better known by the notation $I_0(z)=J_0(iz)$ in English literature. - -This identity is known as the Sommerfeld Identity.
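As a quick numerical plausibility check (an added sketch, not part of the original article), one can specialize to the static case k = 0, where μ = λ, the integrand decays monotonically, and the identity reduces to 1/R = ∫₀^∞ J₀(λr) e^{−λ|z|} dλ; the snippet below assumes scipy is available.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

# Static case k = 0 of the identity: mu = lambda, so
# 1/R = Integral_0^inf J0(lam*r) * exp(-lam*|z|) dlam  with  R = sqrt(r**2 + z**2)
r, z = 1.3, 0.7
R = np.hypot(r, z)

integral, _ = quad(lambda lam: j0(lam * r) * np.exp(-lam * abs(z)), 0, np.inf)
print(integral, 1.0 / R)  # both approximately 0.677
```

For k ≠ 0 the integrand oscillates and has a branch point at λ = k, so naive quadrature along the real axis is far less reliable.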
- -In alternative notation, the Sommerfeld identity can be more easily seen as an expansion of a spherical wave in terms of cylindrically-symmetric waves: - - - -\frac{{e^{ik_0 r} }}{r} = i\int\limits_0^\infty dk_\rho \frac{k_\rho}{k_z} J_0 (k_\rho \rho ) e^{ik_z \left| z \right|} - - - -where - - - -k_z=(k_0^2-k_\rho^2)^{1/2}. - - - -The notation used here is different from that above: $r$ is now the distance from the origin and $\rho$ is the radial distance in a cylindrical coordinate system defined as $(\rho,\phi,z)$. The physical interpretation is that a spherical wave can be expanded into a summation of cylindrical waves in $\rho$ direction, multiplied by a two-sided plane wave in the $z$ direction; see the Jacobi-Anger expansion. The summation has to be taken over all the wavenumbers $k_\rho$. - -The Sommerfeld identity is closely related to the two-dimensional Fourier transform with cylindrical symmetry, i.e., the Hankel transform. It is found by transforming the spherical wave along the in-plane coordinates ($x$,$y$, or $\rho$, $\phi$) but not transforming along the height coordinate $z$. diff --git a/wiki/wikipedia/4200.txt b/wiki/wikipedia/4200.txt deleted file mode 100644 index d4075cffe0d184b0871995d12b3ea1e21d0783e6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4200.txt +++ /dev/null @@ -1,82 +0,0 @@ -In mathematics, in particular in mathematical analysis, the Whitney extension theorem is a partial converse to Taylor's theorem. Roughly speaking, the theorem asserts that if A is a closed subset of a Euclidean space, then it is possible to extend a given function on A in such a way as to have prescribed derivatives at the points of A. It is a result of Hassler Whitney. - -A precise statement of the theorem requires careful consideration of what it means to prescribe the derivative of a function on a closed set. One difficulty, for instance, is that closed subsets of Euclidean space in general lack a differentiable structure. The starting point, then, is an examination of the statement of Taylor's theorem. - -Given a real-valued Cm function f(x) on Rn, Taylor's theorem asserts that for each a, x, y ∈ Rn, there is a function Rα(x,y) approaching 0 uniformly as x,y → a such that -$$ -f({\mathbf x}) = \sum_{|\alpha| \le m} \frac{D^\alpha f({\mathbf y})}{\alpha!}({\mathbf x}-{\mathbf y})^{\alpha} + \sum_{|\alpha| = m} R_\alpha({\mathbf x},{\mathbf y})\frac{({\mathbf x}-{\mathbf y})^{\alpha}}{\alpha!} \qquad (1) -$$ - -where the sum is over multi-indices α. - -Let fα = Dαf for each multi-index α. Differentiating (1) with respect to x, and possibly replacing R as needed, yields -$$ -f_\alpha({\mathbf x})=\sum_{|\beta| \le m - |\alpha|}\frac{f_{\alpha+\beta}({\mathbf y})}{\beta!}({\mathbf x}-{\mathbf y})^{\beta}+R_\alpha({\mathbf x},{\mathbf y}) \qquad (2) -$$ - -where Rα is o(|x - y|m-|α|) uniformly as x,y → a. - -Note that (2) may be regarded as purely a compatibility condition between the functions fα which must be satisfied in order for these functions to be the coefficients of the Taylor series of the function f. It is this insight which facilitates the following statement: - -Theorem. Suppose that fα are a collection of functions on a closed subset A of Rn for all multi-indices α with $|\alpha|\le m$ satisfying the compatibility condition (2) at all points x, y, and a of A. Then there exists a function F(x) of class Cm such that: - -# F = f0 on A. - -# DαF = fα on A. - -# F is real-analytic at every point of Rn - A. - -Proofs are given in the original paper of Whitney, and in Malgrange, Bierstone and Hörmander. - -Seeley proved a sharpening of the Whitney extension theorem in the special case of a half space.
A smooth function on a half space Rn,+ of points where xn ≥ 0 is a smooth function f on the interior xn > 0 for which the derivatives ∂α f extend to continuous functions on the half space. On the boundary xn = 0, f restricts to a smooth function. By Borel's lemma, f can be extended to a smooth function on the whole of Rn. Since Borel's lemma is local in nature, the same argument shows that if $\Omega$ is a (bounded or unbounded) domain in Rn with smooth boundary, then any smooth function on the closure of $\Omega$ can be extended to a smooth function on Rn. - -Seeley's result for a half line gives a uniform extension map -$$ -\displaystyle{E:C^\infty(\mathbf{R}^+)\rightarrow C^\infty(\mathbf{R}),} -$$ - -which is linear, continuous (for the topology of uniform convergence of functions and their derivatives on compacta) and takes functions supported in [0,R] into functions supported in [−R,R]. - -To define $E,$ set -$$ -\displaystyle{E(f)(x)=\sum_{m=1}^\infty a_m f(-b_mx)\varphi(-b_mx) (x < 0),} -$$ - -where φ is a smooth function of compact support on R equal to 1 near 0 and the sequences (am), (bm) satisfy: - -* $b_m > 0$ tends to $\infty$; - -* $\sum a_m b_m^j = (-1)^j$ for $j \geq 0$ with the sum absolutely convergent. - -A solution to this system of equations can be obtained by taking $b_m = 2^m$ and seeking an entire function -$$ -g(z)=\sum_{m=1}^\infty a_m z^m -$$ - -such that $g\left(2^j\right) = (-1)^j.$ That such a function can be constructed follows from the Weierstrass theorem and Mittag-Leffler theorem. - -It can be seen directly by setting -$$ -W(z)=\prod_{j \ge 1} (1-z/2^j), -$$ - -an entire function with simple zeros at $2^j.$ The derivatives $W'(2^j)$ are bounded above and below. Similarly the function -$$ -M(z)=\sum_{j \ge 1} {(-1)^j\over W^\prime(2^j) (z-2^j)} -$$ - -is meromorphic with simple poles and prescribed residues at $2^j.$ - -By construction -$$ -\displaystyle{g(z)=W(z)M(z)} -$$ - -is an entire function with the required properties. - -The extension map for a half space in Rn is obtained by applying the operator E in the last variable xn. Similarly, using a smooth partition of unity and a local change of variables, the result for a half space implies the existence of an analogous extending map -$$ -\displaystyle{C^\infty(\overline{\Omega}) \rightarrow C^\infty(\mathbf{R}^n)} -$$ - -for any domain $\Omega$ in Rn with smooth boundary. diff --git a/wiki/wikipedia/4201.txt b/wiki/wikipedia/4201.txt deleted file mode 100644 index 8c1dc4b3cbe1b45f67f808b661200b8793195c23..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4201.txt +++ /dev/null @@ -1,195 +0,0 @@ -Gauss's lemma in number theory gives a condition for an integer to be a quadratic residue. Although it is not useful computationally, it has theoretical significance, being involved in some proofs of quadratic reciprocity. - -It made its first appearance in Carl Friedrich Gauss's third proof (1808) of quadratic reciprocity and he proved it again in his fifth proof (1818). Gauss used a fourth-power lemma to derive the formula for the biquadratic character of 1 + i in Z[i], the ring of Gaussian integers. Subsequently, Eisenstein used third- and fourth-power versions to prove cubic and quartic reciprocity.
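Before the general nth-power setting developed below, the classical quadratic case is easy to state and test: Gauss's lemma gives $\left(\frac{a}{p}\right) = (-1)^s$, where s counts the integers k among 1, …, (p − 1)/2 whose least positive residue of ak mod p exceeds p/2. The following Python sketch (an illustrative addition, cross-checked against Euler's criterion) implements this count.

```python
def legendre_by_gauss_lemma(a: int, p: int) -> int:
    """Legendre symbol (a/p) for an odd prime p with gcd(a, p) = 1, computed via
    Gauss's lemma: count k in 1..(p-1)/2 with residue a*k mod p exceeding p/2."""
    s = sum(1 for k in range(1, (p - 1) // 2 + 1) if (a * k) % p > p / 2)
    return (-1) ** s

# Cross-check against Euler's criterion: (a/p) = a^((p-1)/2) mod p
for p in (5, 7, 11, 13, 17):
    for a in range(1, p):
        euler = pow(a, (p - 1) // 2, p)
        assert legendre_by_gauss_lemma(a, p) == (1 if euler == 1 else -1)
print("Gauss's lemma agrees with Euler's criterion")
```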
diff --git a/wiki/wikipedia/4201.txt b/wiki/wikipedia/4201.txt deleted file mode 100644 index 8c1dc4b3cbe1b45f67f808b661200b8793195c23..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4201.txt +++ /dev/null @@ -1,195 +0,0 @@ -Gauss's lemma in number theory gives a condition for an integer to be a quadratic residue. Although it is not useful computationally, it has theoretical significance, being involved in some proofs of quadratic reciprocity. - -It made its first appearance in Carl Friedrich Gauss's third proof (1808) of quadratic reciprocity and he proved it again in his fifth proof (1818). Gauss used a fourth-power lemma to derive the formula for the biquadratic character of 1 + i in Z[i], the ring of Gaussian integers. Subsequently, Eisenstein used third- and fourth-power versions to prove cubic and quartic reciprocity. - -Let k be an algebraic number field with ring of integers $\mathcal{O}_k,$ and let $\mathfrak{p} \subset \mathcal{O}_k $ be a prime ideal. The ideal norm $ \mathrm{N} \mathfrak{p} $ of $\mathfrak{p}$ is defined as the cardinality of the residue class ring. Since $\mathfrak{p} $ is prime this is a finite field $\mathcal{O}_k / \mathfrak{p}$, so the ideal norm is $ \mathrm{N} \mathfrak{p} = |\mathcal{O}_k / \mathfrak{p}|$. - -Assume that $\mathcal{O}_k$ contains a primitive nth root of unity $\zeta_n$, and that n and $\mathfrak{p} $ are coprime (i.e. $n\not\in \mathfrak{p}$). Then no two distinct nth roots of unity can be congruent modulo $\mathfrak{p}$. - -This can be proved by contradiction, beginning by assuming that $\zeta_n^r\equiv\zeta_n^s$ mod $\mathfrak{p}$, 0 < r < s ≤ n. Let t = s - r, so that $\zeta_n^t\equiv 1$ mod $\mathfrak{p}$, and 0 < t < n. From the definition of roots of unity, -$$ -x^n-1=(x-1)(x-\zeta_n)(x-\zeta_n^2)\dots(x-\zeta_n^{n-1}), -$$ - -and dividing by x - 1 gives -$$ -x^{n-1}+x^{n-2}+\dots +x + 1 =(x-\zeta_n)(x-\zeta_n^2)\dots(x-\zeta_n^{n-1}). -$$ - -Letting x = 1 and taking residues mod $\mathfrak{p}$, -$$ -n\equiv(1-\zeta_n)(1-\zeta_n^2)\dots(1-\zeta_n^{n-1})\pmod{\mathfrak{p}}. -$$ - -Since n and $ \mathfrak{p}$ are coprime, $ n\not\equiv 0$ mod $\mathfrak{p},$ but under the assumption, one of the factors on the right must be zero. Therefore, the assumption that two distinct roots are congruent is false. - -Thus the residue classes of $ \mathcal{O}_k / \mathfrak{p}$ containing the powers of ζn are a subgroup of order n of its (multiplicative) group of units, $(\mathcal{O}_k/\mathfrak{p}) ^\times = \mathcal{O}_k /\mathfrak{p}- \{0\}.$ Therefore, the order of $(\mathcal{O}_k/\mathfrak{p})^ \times$ is a multiple of n, and - - - -\begin{align} - -\mathrm{N} \mathfrak{p} &= |\mathcal{O}_k / \mathfrak{p}| \\ - -&= \left |(\mathcal{O}_k / \mathfrak{p} )^\times \right| + 1 \\ - -&\equiv 1 \pmod{n}. - -\end{align} - -There is an analogue of Fermat's theorem in $\mathcal{O}_k$. If $\alpha \in \mathcal{O}_k$ and $\alpha\not\in \mathfrak{p}$, then -$$ -\alpha^{\mathrm{N} \mathfrak{p} -1}\equiv 1 \pmod{\mathfrak{p} }, -$$ - -and since $\mathrm{N} \mathfrak{p} \equiv 1 $ mod n, -$$ -\alpha^{\frac{\mathrm{N} \mathfrak{p} -1}{n}}\equiv \zeta_n^s\pmod{\mathfrak{p} } -$$ - -is well-defined and congruent to a unique nth root of unity ζns. - -This root of unity is called the nth-power residue symbol for $\mathcal{O}_k,$ and is denoted by - - - -\begin{align} - -\left(\frac{\alpha}{\mathfrak{p} }\right)_n &= \zeta_n^s \\ - -&\equiv \alpha^{\frac{\mathrm{N} \mathfrak{p} -1}{n}}\pmod{\mathfrak{p}}. - -\end{align} - - - -It can be proven that -$$ -\left(\frac{\alpha}{\mathfrak{p} }\right)_n= 1 -$$ - -if and only if there is an $\eta \in\mathcal{O}_k$ such that α ≡ ηn mod $\mathfrak{p}$. - -Let $\mu_n = \{1,\zeta_n,\zeta_n^2,\dots,\zeta_n^{n-1}\} $ be the multiplicative group of the nth roots of unity, and let $A=\{a_1, a_2,\dots,a_m\}$ be representatives of the cosets of $(\mathcal{O}_k / \mathfrak{p})^\times/\mu_n.$ Then A is called a 1/n system mod $\mathfrak{p}.$ - -In other words, there are $mn=\mathrm{N} \mathfrak{p} -1 $ numbers in the set $A\mu=\{ a_i \zeta_n^j: 1 \le i \le m, 0 \le j \le n-1\},$ and this set constitutes a representative set for $(\mathcal{O}_k / \mathfrak{p})^\times.$ - -The numbers 1, 2, …, (p - 1)/2, used in the original version of the lemma, are a 1/2 system (mod p). - -Constructing a 1/n system is straightforward: let M be a representative set for $(\mathcal{O}_k / \mathfrak{p})^\times.$ Pick any $a_1\in M $ and remove the numbers congruent to $a_1, a_1\zeta_n, a_1\zeta_n^2, \dots, a_1\zeta_n^{n-1}$ from M.
Pick $a_2$ from M and remove the numbers congruent to $a_2, a_2\zeta_n, a_2\zeta_n^2, \dots, a_2\zeta_n^{n-1}$. Repeat until M is exhausted. Then {a1, a2, … am} is a 1/n system mod $\mathfrak{p}.$ - -Gauss's lemma may be extended to the nth power residue symbol as follows. Let $\zeta_n\in \mathcal{O}_k $ be a primitive nth root of unity, $\mathfrak{p} \subset \mathcal{O}_k $ a prime ideal, $\gamma \in \mathcal{O}_k, n\gamma\not\in\mathfrak{p}$ (i.e. $\mathfrak{p}$ is coprime to both γ and n) and let A = {a1, a2, …, am} be a 1/n system mod $\mathfrak{p}.$ - -Then for each i, 1 ≤ i ≤ m, there are integers π(i), unique (mod m), and b(i), unique (mod n), such that -$$ -\gamma a_i \equiv \zeta_n^{b(i)}a_{\pi(i)} \pmod{\mathfrak{p}}, -$$ - -and the nth-power residue symbol is given by the formula -$$ -\left(\frac{\gamma}{\mathfrak{p} }\right)_n = \zeta_n^{b(1)+b(2)+\dots+b(m)}. -$$ - -The classical lemma for the quadratic Legendre symbol is the special case n = 2, ζ2 = -1, A = {1, 2, …, (p - 1)/2}, b(k) = 1 if ak > p/2, b(k) = 0 if ak < p/2. - -The proof of the nth-power lemma uses the same ideas that were used in the proof of the quadratic lemma. - -The existence of the integers π(i) and b(i), and their uniqueness (mod m) and (mod n), respectively, come from the fact that Aμ is a representative set. - -Assume that π(i) = π(j) = p, i.e. -$$ -\gamma a_i \equiv \zeta_n^r a_p \pmod{\mathfrak{p}} -$$ - -and -$$ -\gamma a_j \equiv \zeta_n^s a_p \pmod{\mathfrak{p}}. -$$ - -Then -$$ -\zeta_n^{s-r}\gamma a_i \equiv \zeta_n^s a_p \equiv \gamma a_j\pmod{\mathfrak{p}} -$$ - -Because γ and $\mathfrak{p}$ are coprime, both sides can be divided by γ, giving -$$ -\zeta_n^{s-r} a_i \equiv a_j\pmod{\mathfrak{p}}, -$$ - -which, since A is a 1/n system, implies s = r and i = j, showing that π is a permutation of the set {1, 2, …, m}. - -Then on the one hand, by the definition of the power residue symbol, - - - -\begin{align} - -(\gamma a_1)(\gamma a_2)\dots(\gamma a_m) &= \gamma^{\frac{\mathrm{N} \mathfrak{p} -1}{n}} a_1 a_2\dots a_m \\ - -&\equiv \left(\frac{\gamma}{\mathfrak{p} }\right)_n a_1 a_2\dots a_m \pmod{\mathfrak{p}}, - -\end{align} - - - -and on the other hand, since π is a permutation, - - - -\begin{align} - -(\gamma a_1)(\gamma a_2)\dots(\gamma a_m) - -&\equiv - -{\zeta_n^{b(1)}a_{\pi(1)}} {\zeta_n^{b(2)}a_{\pi(2)}}\dots{\zeta_n^{b(m)}a_{\pi(m)}} - -&\pmod{\mathfrak{p}}\\ - -&\equiv - -\zeta_n^{b(1)+b(2)+\dots+b(m)}a_{\pi(1)} a_{\pi(2)}\dots a_{\pi(m)} - -&\pmod{\mathfrak{p}}\\ - -&\equiv - -\zeta_n^{b(1)+b(2)+\dots+b(m)} a_1 a_2\dots a_m - -&\pmod{\mathfrak{p}}, - -\end{align} - - - -so - - - -\left(\frac{\gamma}{\mathfrak{p} }\right)_n a_1 a_2\dots a_m \equiv \zeta_n^{b(1)+b(2)+\dots+b(m)} a_1 a_2\dots a_m - -\pmod{\mathfrak{p}}, - - - -and since for all 1 ≤ i ≤ m, ai and $\mathfrak{p}$ are coprime, a1a2…am can be cancelled from both sides of the congruence, - -\left(\frac{\gamma}{\mathfrak{p} }\right)_n \equiv \zeta_n^{b(1)+b(2)+\dots+b(m)} - -\pmod{\mathfrak{p}}, - - - -and the theorem follows from the fact that no two distinct nth roots of unity can be congruent (mod $\mathfrak{p}$).
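As an editorial aside, the classical case n = 2 described above is directly computable. The following Python sketch (our illustration, not from the article) evaluates the Legendre symbol by Gauss's lemma, counting how many of a·1, a·2, …, a·(p−1)/2 have least residue exceeding p/2, and cross-checks against Euler's criterion:

```python
def legendre_gauss(a: int, p: int) -> int:
    """Legendre symbol (a/p) via Gauss's lemma, for odd prime p, gcd(a, p) = 1.

    With A = {1, ..., (p-1)/2} as the 1/2 system, b(k) = 1 exactly when the
    least residue of a*k exceeds p/2, and (a/p) = (-1)^(b(1)+...+b(m)).
    """
    m = (p - 1) // 2
    n = sum(1 for k in range(1, m + 1) if (a * k) % p > p / 2)
    return (-1) ** n

def legendre_euler(a: int, p: int) -> int:
    """Euler's criterion: (a/p) = a^((p-1)/2) mod p, normalized to +/-1."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

for p in (3, 5, 7, 11, 13, 101):
    for a in range(1, p):
        assert legendre_gauss(a, p) == legendre_euler(a, p)
print("Gauss's lemma agrees with Euler's criterion on all primes tested.")
```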
Let G be the multiplicative group of nonzero residue classes in Z/pZ, and let H be the subgroup {+1, -1}. Consider the following coset representatives of H in G, -$$ -1, 2, 3, \dots, \frac{p-1}{2}. -$$ - -Applying the machinery of the transfer to this collection of coset representatives, we obtain the transfer homomorphism -$$ -\phi : G \to H, -$$ - -which turns out to be the map that sends a to $(-1)^n$, where a and n are as in the statement of the lemma. Gauss's lemma may then be viewed as a computation that explicitly identifies this homomorphism as being the quadratic residue character. diff --git a/wiki/wikipedia/4202.txt b/wiki/wikipedia/4202.txt deleted file mode 100644 index 7ef047869da0e8d4663169ded6bf78e7d9fd3d22..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4202.txt +++ /dev/null @@ -1,85 +0,0 @@ -Structural induction is a proof method that is used in mathematical logic (e.g., in the proof of Łoś' theorem), computer science, graph theory, and some other mathematical fields. It is a generalization of mathematical induction over natural numbers and can be further generalized to arbitrary Noetherian induction. Structural recursion is a recursion method bearing the same relationship to structural induction as ordinary recursion bears to ordinary mathematical induction. - -Structural induction is used to prove that some proposition P(x) holds for all x of some sort of recursively defined structure, such as formulas, lists, or trees. A well-founded partial order is defined on the structures ("subformula" for formulas, "sublist" for lists, and "subtree" for trees). The structural induction proof is a proof that the proposition holds for all the minimal structures and that if it holds for the immediate substructures of a certain structure S, then it must hold for S also. (Formally speaking, this then satisfies the premises of an axiom of well-founded induction, which asserts that these two conditions are sufficient for the proposition to hold for all x.) - -A structurally recursive function uses the same idea to define a recursive function: "base cases" handle each minimal structure and a rule for recursion handles the composite structures. Structural recursion is usually proved correct by structural induction; in particularly easy cases, the inductive step is often left out. The length and ++ functions in the example below are structurally recursive. - -For example, if the structures are lists, one usually introduces the partial order "<", in which L < M whenever list L is the tail of list M. Under this ordering, the empty list [] is the unique minimal element. A structural induction proof of some proposition P(L) then consists of two parts: A proof that P([]) is true and a proof that if P(L) is true for some list L, and if L is the tail of list M, then P(M) must also be true. - -In general, there may exist more than one base case and/or more than one inductive case, depending on how the function or structure was constructed. In those cases, a structural induction proof of some proposition P(l) then consists of: a proof that P holds for each base case, and a proof, for each inductive case, that if P holds for the immediate substructures used by that case, then it also holds for the structure that the case builds. - -An ancestor tree is a commonly known data structure, showing the parents, grandparents, etc. of a person as far as known. It is recursively defined: - -* in the simplest case, an ancestor tree shows just one person (if nothing is known about their parents); - -* alternatively, an ancestor tree shows one person and, connected by branches, the two ancestor subtrees of their parents (using for brevity of proof the simplifying assumption that if one of them is known, both are).
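As an editorial illustration (all names here are ours), the recursive definition above translates directly into a structurally recursive datatype and two structurally recursive functions; the bound spot-checked at the end is the property proved just below:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AncestorTree:
    """One person, plus the parents' subtrees when they are known.

    Mirrors the two clauses of the definition above: either just one person
    (father and mother both None), or one person with two parent subtrees.
    """
    father: Optional["AncestorTree"] = None
    mother: Optional["AncestorTree"] = None

def generations(t: AncestorTree) -> int:
    if t.father is None:                       # simplest case: one person
        return 1
    return 1 + max(generations(t.father), generations(t.mother))

def persons(t: AncestorTree) -> int:
    if t.father is None:
        return 1
    return 1 + persons(t.father) + persons(t.mother)

# Spot-check of the property proved next: persons(t) <= 2**generations(t) - 1.
leaf = AncestorTree()
t = AncestorTree(father=AncestorTree(father=leaf, mother=leaf), mother=leaf)
assert persons(t) <= 2 ** generations(t) - 1
```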
- -As an example, the property "An ancestor tree extending over g generations shows at most $2^g - 1$ persons" can be proven by structural induction as follows: - -* In the simplest case, the tree shows just one person and hence one generation; the property is true for such a tree, since $1 \le 2^1 - 1$. - -* Alternatively, the tree shows one person and their parents' trees. Since each of the latter is a substructure of the whole tree, it can be assumed to satisfy the property to be proven (a.k.a. the induction hypothesis). That is, $p \le 2^g - 1$ and $q \le 2^h - 1$ can be assumed, where g and h denote the number of generations the father's and the mother's subtree extends over, respectively, and p and q denote the numbers of persons they show. - -** In case g ≤ h, the whole tree extends over 1 + h generations and shows p + q + 1 persons, and $p + q + 1 \le (2^g - 1) + (2^h - 1) + 1 \le 2^h + 2^h - 1 = 2^{1+h} - 1$, i.e. the whole tree satisfies the property. - -** In case h ≤ g, the whole tree extends over 1 + g generations and shows $p + q + 1 \le 2^{1+g} - 1$ persons by similar reasoning, i.e. the whole tree satisfies the property in this case also. - -Hence, by structural induction, each ancestor tree satisfies the property. - -As another, more formal example, consider the following property of lists: - -length (L ++ M) = length L + length M [EQ] - -Here ++ denotes the list concatenation operation, and L and M are lists. - -In order to prove this, we need definitions for length and for the concatenation operation. Let (h:t) denote a list whose head (first element) is h and whose tail (list of remaining elements) is t, and let [] denote the empty list. The definitions for length and the concatenation operation are: - -length [] = 0 [LEN1] - -length (h:t) = 1 + length t [LEN2] - -[] ++ list = list [APP1] - -(h:t) ++ list = h : (t ++ list) [APP2] - -Our proposition P(l) is that EQ is true for all lists M when L is l. We want to show that P(l) is true for all lists l. We will prove this by structural induction on lists. - -First we will prove that P([]) is true; that is, EQ is true for all lists M when L happens to be the empty list []. Consider EQ: - -length (L ++ M) = length ([] ++ M) - -= length M (by APP1) - -= 0 + length M - -= length [] + length M (by LEN1) - -= length L + length M - -So this part of the theorem is proved; EQ is true for all M, when L is [], because the left-hand side and the right-hand side are equal. - -Next, consider any nonempty list I. Since I is nonempty, it has a head item, x, and a tail list, xs, so we can express it as (x:xs). The induction hypothesis is that EQ is true for all values of M when L is xs: - -length (xs ++ M) = length xs + length M (hypothesis) - -We would like to show that if this is the case, then EQ is also true for all values of M when L = I = (x:xs). We proceed as before: - -length L + length M = length (x:xs) + length M - -= 1 + length xs + length M (by LEN2) - -= 1 + length (xs ++ M) (by hypothesis) - -= length (x: (xs ++ M)) (by LEN2) - -= length ((x:xs) ++ M) (by APP2) - -= length (L ++ M) - -Thus, from structural induction, we obtain that P(L) is true for all lists L.
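As an executable companion to this proof (our sketch, encoding [] as the empty tuple () and h:t as the nested pair (h, t)), the defining equations LEN1, LEN2, APP1, APP2 transcribe directly into Python, and EQ can be spot-checked:

```python
def length(l):
    if l == ():                 # LEN1: length [] = 0
        return 0
    h, t = l
    return 1 + length(t)        # LEN2: length (h:t) = 1 + length t

def concat(l, m):
    if l == ():                 # APP1: [] ++ list = list
        return m
    h, t = l
    return (h, concat(t, m))    # APP2: (h:t) ++ list = h : (t ++ list)

def from_py(xs):
    """Build the nested-pair encoding from a Python list."""
    out = ()
    for x in reversed(xs):
        out = (x, out)
    return out

L, M = from_py([1, 2, 3]), from_py([4, 5])
assert length(concat(L, M)) == length(L) + length(M)   # EQ on this instance
```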
Just as standard mathematical induction is equivalent to the well-ordering principle, structural induction is also equivalent to a well-ordering principle. If the set of all structures of a certain kind admits a well-founded partial order, then every nonempty subset must have a minimal element. (This is the definition of "well-founded".) The significance of the lemma in this context is that it allows us to deduce that if there are any counterexamples to the theorem we want to prove, then there must be a minimal counterexample. If we can show that the existence of a minimal counterexample implies the existence of an even smaller counterexample, we have a contradiction (since the minimal counterexample isn't minimal) and so the set of counterexamples must be empty. - -As an example of this type of argument, consider the set of all binary trees. We will show that the number of leaves in a full binary tree is one more than the number of interior nodes. Suppose there is a counterexample; then there must exist one with the minimal possible number of interior nodes. This counterexample, C, has n interior nodes and l leaves, where n + 1 ≠ l. Moreover, C must be nontrivial, because the trivial tree has n = 0 and l = 1 and is therefore not a counterexample. C therefore has at least one leaf whose parent node is an interior node. Delete this leaf and its parent from the tree, promoting the leaf's sibling node to the position formerly occupied by its parent. This reduces both n and l by 1, so the new tree also has n + 1 ≠ l and is therefore a smaller counterexample. But by hypothesis, C was already the smallest counterexample; therefore, the supposition that there were any counterexamples to begin with must have been false. The partial ordering implied by 'smaller' here is the one that says that S < T whenever S has fewer nodes than T. diff --git a/wiki/wikipedia/4203.txt b/wiki/wikipedia/4203.txt deleted file mode 100644 index 12e8715369f5cfc9da268cb2ce1b22ab6821efd7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4203.txt +++ /dev/null @@ -1,639 +0,0 @@ -In calculus, the Leibniz integral rule for differentiation under the integral sign, named after Gottfried Leibniz, states that for an integral of the form -$$ -\int_{a(x)}^{b(x)} f(x,t)dt, -$$ - -where $-\infty < a(x), b(x) < \infty$, the derivative of this integral is expressible as -$$ -\frac{d}{dx} \left (\int_{a(x)}^{b(x)} f(x,t)dt \right )= f\big(x,b(x)\big)\cdot \frac{d}{dx} b(x) - f\big(x,a(x)\big)\cdot \frac{d}{dx} a(x) + \int_{a(x)}^{b(x)}\frac{\partial}{\partial x} f(x,t) dt, -$$ - -where the partial derivative indicates that inside the integral, only the variation of $f(x, t)$ with $x$ is considered in taking the derivative. Notice that if $a(x)$ and $b(x)$ are constants rather than functions of $x$, we have the special case: -$$ -\frac{d}{dx} \left(\int_a^b f(x,t)dt \right)= \int_a^b \frac{\partial}{\partial x} f(x,t) dt. -$$ - -Moreover, if $a(x)=a$ and $b(x)=x$, which is a common situation as well (for example, in the proof of Cauchy's repeated integration formula), we have: -$$ -\frac{d}{dx} \left (\int_a^x f(x,t) dt \right )= f\big(x,x\big) + \int_a^x \frac{\partial}{\partial x} f(x,t) dt. -$$ - -Thus, under certain conditions, one may interchange the integral and partial differential operators. This important result is particularly useful in the differentiation of integral transforms. One example is the moment generating function in probability theory, a variation of the Laplace transform, which can be differentiated to generate the moments of a random variable. Whether Leibniz's integral rule applies is essentially a question about the interchange of limits. - -The Leibniz integral rule can be extended to multidimensional integrals.
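Before turning to those extensions, the one-dimensional rule with variable limits is easy to check numerically. In this sketch (ours; the particular f, a, b, the scipy quadrature, and the step size are all illustrative choices), a central difference of the integral is compared against the right-hand side of the rule:

```python
import numpy as np
from scipy.integrate import quad

# d/dx of I(x) = integral of f(x,t) dt from a(x) to b(x), versus the Leibniz rule.
f  = lambda x, t: np.exp(-x * t**2)           # integrand
fx = lambda x, t: -t**2 * np.exp(-x * t**2)   # partial derivative of f in x
a, da = lambda x: np.sin(x), lambda x: np.cos(x)
b, db = lambda x: x**2, lambda x: 2 * x

I = lambda x: quad(lambda t: f(x, t), a(x), b(x))[0]

x, h = 1.3, 1e-5
lhs = (I(x + h) - I(x - h)) / (2 * h)                  # numerical derivative
rhs = (f(x, b(x)) * db(x) - f(x, a(x)) * da(x)
       + quad(lambda t: fx(x, t), a(x), b(x))[0])      # Leibniz rule
print(lhs, rhs)    # the two values should agree to many digits
```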
In two and three dimensions, this rule is better known from the field of fluid dynamics as the Reynolds transport theorem: - -\frac{d}{dt} \int_{D(t)} F(\mathbf x, t) dV - -= \int_{D(t)} \frac{\partial}{\partial t} F(\mathbf x, t)dV - -+ \int_{\partial D(t)} F(\mathbf x, t) \mathbf v_b \cdot d\mathbf{\Sigma}, - -where $F(\mathbf x, t)$ is a scalar function, D(t) and ∂D(t) denote a time-varying connected region of R3 and its boundary, respectively, $\mathbf v_b$ is the Eulerian velocity of the boundary (see Lagrangian and Eulerian coordinates) and dΣ = n dS is the unit normal component of the surface element. - -The general statement of the Leibniz integral rule requires concepts from differential geometry, specifically differential forms, exterior derivatives, wedge products and interior products. With those tools, the Leibniz integral rule in n dimensions is -$$ -\frac{d}{dt}\int_{\Omega(t)}\omega=\int_{\Omega(t)} i_{\mathbf v}(d_x\omega)+\int_{\partial \Omega(t)} i_{\mathbf v} \omega + \int_{\Omega(t)} \dot{\omega}, -$$ - -where Ω(t) is a time-varying domain of integration, ω is a p-form, $\mathbf v=\frac{\partial \mathbf x}{\partial t}$ is the vector field of the velocity, $i_{\mathbf v}$ denotes the interior product with $\mathbf v$, dxω is the exterior derivative of ω with respect to the space variables only and $\dot{\omega}$ is the time derivative of ω. - -However, all of these identities can be derived from a most general statement about Lie derivatives: -$$ -\left.\frac{d}{dt}\right|_{t=0}\int_{\operatorname{im}_{\psi_t}(\Omega)} \omega = \int_{\Omega} \mathcal{L}_\Psi \omega, -$$ - -Here, the ambient manifold on which the differential form $\omega$ lives includes both space and time. - -*$\Omega$ is the region of integration (a submanifold) at a given instant (it does not depend on $t$, since its parametrization as a submanifold defines its position in time), - -*$\mathcal{L}$ is the Lie derivative, - -*$\Psi$ is the spacetime vector field obtained from adding the unitary vector field in the direction of time to the purely spatial vector field $\mathbf v$ from the previous formulas (i.e., $\Psi$ is the spacetime velocity of $\Omega$), - -*$\psi_t$ is a diffeomorphism from the one-parameter group generated by the flow of $\Psi$, and - -*$\text{im}_{\psi_t}(\Omega)$ is the image of $\Omega$ under such diffeomorphism. - -Something remarkable about this form is that it can account for the case when $\Omega$ changes its shape and size over time, since such deformations are fully determined by $\Psi$. - -Let $X$ be an open subset of $\mathbf{R}$, and $\Omega$ be a measure space. Suppose $f\colon X \times \Omega \to \mathbf{R} $ satisfies the following conditions: - -#$f(x,\omega)$ is a Lebesgue-integrable function of $\omega$ for each $x \in X$. - -#For almost all $\omega \in \Omega$ , the derivative $f_x$ exists and is continuous for all $x \in X$. - -#There is an integrable function $\theta \colon \Omega \to \mathbf{R}$ such that $|f_x(x,\omega)| \leq \theta ( \omega)$ for all $x \in X$ and almost every $\omega \in \Omega$. - -Then, for all $x \in X$, -$$ -\frac{d}{dx} \int_\Omega f(x, \omega) d\omega = \int_{\Omega} f_x (x, \omega) d\omega. -$$ - -The proof relies on the dominated convergence theorem and the mean value theorem (details below). - -We first prove the case of constant limits of integration a and b. - -We use Fubini's theorem to change the order of integration.
For every x and h such that h>0 and both x and x+h are within [x0,x1], we have: -$$ - \int_x^{x+h} \int_a^b f_x(x,t) dt dx = \int_a^b \int_x^{x+h} f_x(x,t) dx dt = \int_a^b \left(f(x+h,t)-f(x,t)\right) dt = \int_a^b f(x+h,t) dt - \int_a^b f(x,t) dt -$$ - -Note that the integrals at hand are well defined since $ f_x(x,t) $ is continuous on the closed rectangle $ [x_0,x_1] \times [a,b] $ and thus also uniformly continuous there; thus its integrals by either dt or dx are continuous in the other variable and also integrable by it (essentially this is because for uniformly continuous functions, one may pass the limit through the integration sign, as elaborated below). - -Therefore: -$$ -\frac{\int_a^b f(x+h,t) dt - \int_a^b f(x,t) dt }{h} = \frac{1}{h}\int_x^{x+h} \int_a^b f_x(x,t) dt dx = \frac{F(x+h)-F(x)}{h} -$$ - -where we have defined: -$$ -F(u) \equiv \int_{x_0}^{u} \int_a^b f_x(x,t) dt dx -$$ - -(we may replace x0 here by any other point between x0 and x). - -F is differentiable with derivative $\int_a^b f_x(x,t) dt $, so we can take the limit where h approaches zero. For the left hand side this limit is: -$$ -\frac{d}{dx}\int_a^b f(x,t) dt -$$ - -For the right hand side, we get: -$$ - F'(x) = \int_a^b f_x(x,t) dt -$$ - -And we thus prove the desired result: -$$ -\frac{d}{dx}\int_a^b f(x,t) dt = \int_a^b f_x(x,t) dt -$$ - -If the integrals at hand are Lebesgue integrals, we may use the bounded convergence theorem (valid for these integrals, but not for Riemann integrals) in order to show that the limit can be passed through the integral sign. - -Note that this proof is weaker in the sense that it only shows that fx(x,t) is Lebesgue integrable, but not that it is Riemann integrable. In the former (stronger) proof, if f(x,t) is Riemann integrable, then so is fx(x,t) (and thus is obviously also Lebesgue integrable). - -Let -$$ -u(x) = \int_a^b f(x,t) dt. \qquad (1) -$$ - -By the definition of the derivative, -$$ -u'(x) = \lim_{h \to 0} \frac{u(x + h) - u(x)}{h}. \qquad (2) -$$ - -Substitute equation (1) into equation (2). The difference of two integrals equals the integral of the difference, and 1/h is a constant, so - -\begin{align} - -u'(x) &= \lim_{h \to 0} \frac{\int_a^bf(x + h, t)dt - \int_a^b f(x, t)dt}{h} \\ - -&= \lim_{h \to 0} \frac{\int_a^b\left( f(x + h, t) - f(x,t) \right)dt}{h} \\ - -&= \lim_{h \to 0} \int_a^b \frac{f(x + h, t) - f(x, t)}{h} dt. - -\end{align} - -We now show that the limit can be passed through the integral sign. - -We claim that the passage of the limit under the integral sign is valid by the bounded convergence theorem (a corollary of the dominated convergence theorem). For each δ > 0, consider the difference quotient -$$ -f_\delta(x, t) = \frac{f(x + \delta, t) - f(x, t)}{\delta}. -$$ - -For t fixed, the mean value theorem implies there exists z in the interval [x, x + δ] such that -$$ -f_\delta(x, t) = f_x(z, t). -$$ - -Continuity of fx(x, t) and compactness of the domain together imply that fx(x, t) is bounded. The above application of the mean value theorem therefore gives a uniform (independent of $t$) bound on $f_\delta(x, t)$. The difference quotients converge pointwise to the partial derivative fx by the assumption that the partial derivative exists. - -The above argument shows that for every sequence {δn} → 0, the sequence $\{f_{\delta_n}(x, t)\}$ is uniformly bounded and converges pointwise to fx. The bounded convergence theorem states that if a sequence of functions on a set of finite measure is uniformly bounded and converges pointwise, then passage of the limit under the integral is valid.
In particular, the limit and integral may be exchanged for every sequence {δn} → 0. Therefore, the limit as δ → 0 may be passed through the integral sign. - -For a continuous real valued function g of one real variable, and real valued differentiable functions $ f_1 $ and $ f_2 $ of one real variable, -$$ -\frac{d}{dx} \left( \int_{f_1(x)}^{f_2(x)} g(t) dt \right )= g\left(f_2(x)\right) {f_2'(x)} - g\left(f_1(x)\right) {f_1'(x)}. -$$ - -This follows from the chain rule and the First Fundamental Theorem of Calculus. Define -$$ - G(x) = \int_{f_1(x)}^{f_2(x)} g(t) dt, -$$ - -and -$$ - \Gamma(x) = \int_{0}^{x} g(t) dt. -$$ (The lower limit just has to be some number in the domain of $ g $.) - -Then, $ G(x) $ can be written as a composition: $ G(x) = (\Gamma \circ f_2)(x) - (\Gamma \circ f_1)(x) $. The Chain Rule then implies that -$$ - G'(x) = \Gamma'\left(f_2(x)\right) f_2'(x) - \Gamma'\left(f_1(x)\right) f_1'(x). -$$ - -By the First Fundamental Theorem of Calculus, $ \Gamma'(x) = g(x) $. Therefore, substituting this result above, we get the desired equation: -$$ - G'(x) = g\left(f_2(x)\right) {f_2'(x)} - g\left(f_1(x)\right) {f_1'(x)}. -$$ - -Note: This form can be particularly useful if the expression to be differentiated is of the form: -$$ -\int_{f_1(x)}^{f_2(x)} h(x)g(t) dt -$$ - -Because $h(x)$ does not depend on the limits of integration, it may be moved out from under the integral sign, and the above form may be used with the Product rule, i.e., - -\frac{d}{dx} \left( \int_{f_1(x)}^{f_2(x)} h(x)g(t) dt \right ) = - -\frac{d}{dx} \left(h(x) \int_{f_1(x)}^{f_2(x)} g(t) dt \right ) = - -h'(x)\int_{f_1(x)}^{f_2(x)} g(t) dt + h(x) \frac{d}{dx} \left(\int_{f_1(x)}^{f_2(x)} g(t) dt \right ) - - - -Set -$$ -\varphi(\alpha) = \int_a^b f(x,\alpha)dx, -$$ - -where a and b are functions of α that exhibit increments Δa and Δb, respectively, when α is increased by Δα. Then, - -\begin{align} - -\Delta\varphi &= \varphi(\alpha + \Delta\alpha) - \varphi(\alpha) \\[4pt] - -&= \int_{a + \Delta a}^{b + \Delta b}f(x, \alpha + \Delta\alpha)dx - \int_a^b f(x, \alpha)dx \\[4pt] - -&= \int_{a + \Delta a}^af(x, \alpha + \Delta\alpha)dx + \int_a^bf(x, \alpha + \Delta\alpha)dx + \int_b^{b + \Delta b} f(x, \alpha+\Delta\alpha)dx - \int_a^b f(x, \alpha)dx \\[4pt] - -&= -\int_a^{a + \Delta a} f(x, \alpha + \Delta\alpha)dx + \int_a^b [f(x, \alpha + \Delta\alpha) - f(x,\alpha)]dx + \int_b^{b + \Delta b} f(x, \alpha + \Delta\alpha)dx. - -\end{align} - -A form of the mean value theorem, $\int_a^b f(x)dx = (b - a)f(\xi)$, where a < ξ < b, may be applied to the first and last integrals of the formula for Δφ above, resulting in -$$ -\Delta\varphi = -\Delta a f(\xi_1, \alpha + \Delta\alpha) + \int_a^b [f(x, \alpha + \Delta\alpha) - f(x,\alpha)]dx + \Delta b f(\xi_2, \alpha + \Delta\alpha). -$$ - -Divide by Δα and let Δα → 0. Notice ξ1 → a and ξ2 → b. We may pass the limit through the integral sign: -$$ -\lim_{\Delta\alpha\to 0}\int_a^b \frac{f(x,\alpha + \Delta\alpha) - f(x,\alpha)}{\Delta\alpha}dx = \int_a^b \frac{\partial}{\partial\alpha}f(x, \alpha)dx, -$$ - -again by the bounded convergence theorem. This yields the general form of the Leibniz integral rule, -$$ -\frac{d\varphi}{d\alpha} = \int_a^b \frac{\partial}{\partial\alpha}f(x, \alpha)dx + f(b, \alpha) \frac{db}{d\alpha} - f(a, \alpha)\frac{da}{d\alpha}.
-$$ - -The general form of Leibniz's Integral Rule with variable limits can be derived as a consequence of the basic form of Leibniz's Integral Rule, the multivariable chain rule, and the First Fundamental Theorem of Calculus. Suppose $ f $ is defined in a rectangle in the $ x-t $ plane, for $ x \in [x_1, x_2] $ and $ t \in [t_1, t_2] $. Also, assume $ f $ and the partial derivative $ \frac{\partial f}{\partial x} $ are both continuous functions on this rectangle. Suppose $ a, b$ are differentiable real valued functions defined on $ [x_1, x_2]$, with values in $ [t_1, t_2] $ (i.e. for every $ x \in [x_1, x_2], a(x) , b(x) \in [t_1, t_2] $). Now, set -$$ - F(x,y) = \int_{t_1}^{y} f(x,t)dt , -$$ for $ x \in [x_1, x_2] $ and $ y \in [t_1, t_2] $ - -and -$$ - G(x) = \int_{a(x)}^{b(x)} f(x,t)dt , -$$ for $ x \in [x_1, x_2] $ - -Then, by properties of definite integrals, we can write - - \begin{align} - -G(x) &= \int_{t_1}^{b(x)} f(x,t)dt - \int_{t_1}^{a(x)} f(x,t)dt \\[4pt] - -&= F(x, b(x)) - F(x, a(x)) - -\end{align} - -Since the functions $ F, a, b $ are all differentiable (see the remark at the end of the proof), by the Multivariable Chain Rule, it follows that $ G $ is differentiable, and its derivative is given by the formula: - - G'(x) = \left(\frac{\partial F}{\partial x} (x, b(x)) + \frac{\partial F}{\partial y} (x, b(x) ) b'(x) \right) - - -\left(\frac{\partial F}{\partial x} (x, a(x)) + \frac{\partial F}{\partial y} (x, a(x)) a'(x) \right) - -Now, note that for every $ x \in [x_1, x_2] $, and for every $ y \in [t_1, t_2] $, we have that $ \frac{\partial F}{\partial x}(x, y) = \int_{t_1}^y \frac{\partial f}{\partial x}(x,t) dt $, because when taking the partial derivative with respect to $ x $ of $ F $, we are keeping $ y $ fixed in the expression $ \int_{t_1}^{y} f(x,t)dt $; thus the basic form of Leibniz's Integral Rule with constant limits of integration applies. Next, by the First Fundamental Theorem of Calculus, we have that $ \dfrac{\partial F}{\partial y}(x, y) = f(x,y) $; because when taking the partial derivative with respect to $ y $ of $ F $, the first variable $ x $ is fixed, so the fundamental theorem can indeed be applied. - -Substituting these results into the equation for $ G'(x) $ above gives: - - \begin{align} - -G'(x) &= \left(\int_{t_1}^{b(x)} \frac{\partial f}{\partial x}(x,t) dt + f(x, b(x)) b'(x) \right) - - -\left(\int_{t_1}^{a(x)} \dfrac{\partial f}{\partial x}(x,t) dt + f(x, a(x)) a'(x) \right) \\[2pt] - -&= f(x,b(x)) b'(x) - f(x,a(x)) a'(x) + \int_{a(x)}^{b(x)} \frac{\partial f}{\partial x}(x,t) dt, - -\end{align} - -as desired. - -There is a technical point in the proof above which is worth noting: applying the Chain Rule to $ G $ requires that $ F $ already be differentiable. This is where we use our assumptions about $ f $. As mentioned above, the partial derivatives of $ F $ are given by the formulas $ \frac{\partial F}{\partial x}(x, y) = \int_{t_1}^y \frac{\partial f}{\partial x}(x,t) dt $ and $ \frac{\partial F}{\partial y}(x, y) = f(x,y) $. Since $ \dfrac{\partial f}{\partial x}$ is continuous, its integral is also a continuous function, and since $ f $ is also continuous, these two results show that both the partial derivatives of $ F $ are continuous. Since continuity of partial derivatives implies differentiability of the function, $ F $ is indeed differentiable. - -At time t the surface Σ contains a set of points arranged about a centroid $\mathbf{C}(t)$.
The function $\mathbf{F}(\mathbf{r}, t)$ can be written as -$$ -\mathbf{F}(\mathbf{C}(t) + \mathbf{r} - \mathbf{C}(t), t) = \mathbf{F}(\mathbf{C}(t) +\mathbf{I}, t), -$$ - -with $\mathbf{I}$ independent of time. Variables are shifted to a new frame of reference attached to the moving surface, with origin at $\mathbf{C}(t)$. For a rigidly translating surface, the limits of integration are then independent of time, so: -$$ -\frac {d}{dt} \left (\iint_{\Sigma (t)} d \mathbf{A}_{\mathbf{r}}\cdot \mathbf{F}(\mathbf{r}, t) \right) = \iint_\Sigma d \mathbf{A}_{\mathbf{I}} \cdot \frac {d}{dt}\mathbf{F}(\mathbf{C}(t) + \mathbf{I}, t), -$$ - -where the limits of integration confining the integral to the region Σ no longer are time dependent so differentiation passes through the integration to act on the integrand only: -$$ -\frac {d}{dt}\mathbf{F}( \mathbf{C}(t) + \mathbf{I}, t) = \mathbf{F}_t(\mathbf{C}(t) + \mathbf{I}, t) + \mathbf{v \cdot \nabla F}(\mathbf{C}(t) + \mathbf{I}, t) = \mathbf{F}_t(\mathbf{r}, t) + \mathbf{v} \cdot \nabla \mathbf{F}(\mathbf{r}, t), -$$ - -with the velocity of motion of the surface defined by -$$ -\mathbf{v} = \frac {d}{dt} \mathbf{C} (t). -$$ - -This equation expresses the material derivative of the field, that is, the derivative with respect to a coordinate system attached to the moving surface. Having found the derivative, variables can be switched back to the original frame of reference. We notice that (see article on curl) -$$ -\nabla \times \left(\mathbf{v} \times \mathbf{F}\right) = (\nabla \cdot \mathbf{F} + \mathbf{F} \cdot \nabla) \mathbf{v}- (\nabla \cdot \mathbf{v} + \mathbf{v} \cdot \nabla) \mathbf{F}, -$$ - -and that Stokes theorem equates the surface integral of the curl over Σ with a line integral over ∂Σ: -$$ -\frac{d}{dt} \left(\iint_{\Sigma (t)} \mathbf{F} (\mathbf{r}, t) \cdot d \mathbf{A}\right) = \iint_{\Sigma (t)} \big(\mathbf{F}_t (\mathbf{r}, t) + \left(\mathbf{F \cdot \nabla} \right)\mathbf{v} + \left(\nabla \cdot \mathbf{F} \right) \mathbf{v} - (\nabla \cdot \mathbf{v})\mathbf{F}\big)\cdot d\mathbf{A} - \oint_{\partial \Sigma (t)}\left(\mathbf{v} \times \mathbf{F}\right)\cdot d\mathbf{s}. -$$ - -The sign of the line integral is based on the right-hand rule for the choice of direction of line element ds. To establish this sign, for example, suppose the field F points in the positive z-direction, and the surface Σ is a portion of the xy-plane with perimeter ∂Σ. We adopt the normal to Σ to be in the positive z-direction. Positive traversal of ∂Σ is then counterclockwise (right-hand rule with thumb along z-axis). Then the integral on the left-hand side determines a positive flux of F through Σ. Suppose Σ translates in the positive x-direction at velocity v. An element of the boundary of Σ parallel to the y-axis, say ds, sweeps out an area vt × ds in time t. If we integrate around the boundary ∂Σ in a counterclockwise sense, vt × ds points in the negative z-direction on the left side of ∂Σ (where ds points downward), and in the positive z-direction on the right side of ∂Σ (where ds points upward), which makes sense because Σ is moving to the right, adding area on the right and losing it on the left. On that basis, the flux of F is increasing on the right of ∂Σ and decreasing on the left. However, the dot product v × F ⋅ ds = −F × v ⋅ ds = −F ⋅ v × ds. Consequently, the sign of the line integral is taken as negative. 
- -If v is a constant, -$$ -\frac {d}{dt} \iint_{\Sigma (t)} \mathbf{F} (\mathbf{r}, t) \cdot d \mathbf{A} = \iint_{\Sigma (t)} \big(\mathbf{F}_t (\mathbf{r}, t) + \left(\nabla \cdot \mathbf{F} \right) \mathbf{v}\big) \cdot d \mathbf{A} - \oint_{\partial \Sigma (t)}\left(\mathbf{v} \times \mathbf{F}\right) \cdot d\mathbf{s}, -$$ - -which is the quoted result. This proof does not consider the possibility of the surface deforming as it moves. - -Lemma. One has: -$$ -\frac{\partial}{\partial b} \left (\int_a^b f(x) dx \right ) = f(b), \qquad \frac{\partial}{\partial a} \left (\int_a^b f(x) dx \right )= -f(a). -$$ - -Proof. From the proof of the fundamental theorem of calculus, - -\begin{align} - -\frac{\partial}{\partial b} \left (\int_a^b f(x) dx \right ) &= \lim_{\Delta b \to 0} \frac{1}{\Delta b} \left[ \int_a^{b+\Delta b} f(x)dx - \int_a^b f(x)dx \right] \\[6pt] - -&= \lim_{\Delta b \to 0} \frac{1}{\Delta b} \int_b^{b+\Delta b} f(x)dx \\[6pt] - -&= \lim_{\Delta b \to 0} \frac{1}{\Delta b} \left[ f(b) \Delta b + O\left(\Delta b^2\right) \right] \\[6pt] - -&= f(b), - -\end{align} - -and - -\begin{align} - -\frac{\partial}{\partial a} \left (\int_a^b f(x) dx \right )&= \lim_{\Delta a \to 0} \frac{1}{\Delta a} \left[ \int_{a+\Delta a}^b f(x)dx - \int_a^b f(x)dx \right] \\[6pt] - -&= \lim_{\Delta a \to 0} \frac{1}{\Delta a} \int_{a+\Delta a}^a f(x)dx \\[6pt] - -&= \lim_{\Delta a \to 0} \frac{1}{\Delta a} \left[ -f(a) \Delta a + O\left(\Delta a^2\right) \right]\\[6pt] - -&= -f(a). - -\end{align} - -Suppose a and b are constant, and that f(x) involves a parameter α which is constant in the integration but may vary to form different integrals. Assume that f(x, α) is a continuous function of x and α in the compact set {(x, α) : α0 ≤ α ≤ α1 and a ≤ x ≤ b}, and that the partial derivative fα(x, α) exists and is continuous. If one defines: -$$ -\varphi(\alpha) = \int_a^b f(x,\alpha)dx, -$$ - -then $\varphi$ may be differentiated with respect to α by differentiating under the integral sign, i.e., -$$ -\frac{d\varphi}{d\alpha}=\int_a^b\frac{\partial}{\partial\alpha}f(x,\alpha)dx. -$$ - -By the Heine–Cantor theorem, f(x, α) is uniformly continuous on that set. In other words, for any ε > 0 there exists Δα such that for all values of x in [a, b], -$$ -|f(x,\alpha+\Delta \alpha)-f(x,\alpha)|<\varepsilon. -$$ - -On the other hand, - -\begin{align} - -\Delta\varphi &=\varphi(\alpha+\Delta \alpha)-\varphi(\alpha) \\[6pt] - -&=\int_a^b f(x,\alpha+\Delta\alpha)dx - \int_a^b f(x,\alpha) dx \\[6pt] - -&=\int_a^b \left (f(x,\alpha+\Delta\alpha)-f(x,\alpha) \right )dx \\[6pt] - -&\leq \varepsilon (b-a). - -\end{align} - -Hence φ(α) is a continuous function. - -Similarly if $\frac{\partial}{\partial\alpha} f(x,\alpha)$ exists and is continuous, then for all ε > 0 there exists Δα such that: -$$ -\forall x \in [a, b], \quad \left|\frac{f(x,\alpha+\Delta \alpha)-f(x,\alpha)}{\Delta \alpha} - \frac{\partial f}{\partial\alpha}\right|<\varepsilon. -$$ - -Therefore, -$$ -\frac{\Delta \varphi}{\Delta \alpha}=\int_a^b\frac{f(x,\alpha+\Delta\alpha)-f(x,\alpha)}{\Delta \alpha}dx = \int_a^b \frac{\partial f(x,\alpha)}{\partial \alpha}dx + R, -$$ - -where -$$ -|R| < \int_a^b \varepsilon dx = \varepsilon(b-a). -$$ - -Now, ε → 0 as Δα → 0, so -$$ -\lim_{{\Delta \alpha} \to 0}\frac{\Delta\varphi}{\Delta \alpha}= \frac{d\varphi}{d\alpha} = \int_a^b \frac{\partial}{\partial \alpha} f(x,\alpha)dx. -$$ - -This is the formula we set out to prove.
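As an editorial aside, the constant-limits statement just proved is also easy to confirm symbolically. In this sympy sketch (ours rather than the article's), φ(α) = ∫₀¹ x^α dx is differentiated both ways:

```python
import sympy as sp

x, alpha = sp.symbols('x alpha', positive=True)
f = x**alpha

# Differentiate after integrating ...
phi = sp.integrate(f, (x, 0, 1))                   # 1/(alpha + 1)
lhs = sp.diff(phi, alpha)

# ... and integrate after differentiating under the integral sign.
rhs = sp.integrate(sp.diff(f, alpha), (x, 0, 1))   # integral of x^alpha * ln x

print(sp.simplify(lhs - rhs))   # 0
```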
- -Now, suppose -$$ -\int_a^b f(x,\alpha)dx=\varphi(\alpha), -$$ - -where a and b are functions of α which take increments Δa and Δb, respectively, when α is increased by Δα. Then, - -\begin{align} - -\Delta\varphi &=\varphi(\alpha+\Delta\alpha)-\varphi(\alpha) \\[6pt] - -&=\int_{a+\Delta a}^{b+\Delta b}f(x,\alpha+\Delta\alpha)dx -\int_a^b f(x,\alpha)dx \\[6pt] - -&=\int_{a+\Delta a}^af(x,\alpha+\Delta\alpha)dx+\int_a^bf(x,\alpha+\Delta\alpha)dx+\int_b^{b+\Delta b}f(x,\alpha+\Delta\alpha)dx -\int_a^b f(x,\alpha)dx \\[6pt] - -&=-\int_a^{a+\Delta a} f(x,\alpha+\Delta\alpha)dx+\int_a^b[f(x,\alpha+\Delta\alpha)-f(x,\alpha)]dx+\int_b^{b+\Delta b} f(x,\alpha+\Delta\alpha)dx. - -\end{align} - -A form of the mean value theorem, $\int_a^b f(x)dx=(b-a)f(\xi),$ where a < ξ < b, can be applied to the first and last integrals of the formula for Δφ above, resulting in -$$ -\Delta\varphi=-\Delta af(\xi_1,\alpha+\Delta\alpha)+\int_a^b[f(x,\alpha+\Delta\alpha)-f(x,\alpha)]dx+\Delta bf(\xi_2,\alpha+\Delta\alpha). -$$ - -Dividing by Δα, letting Δα → 0, noticing ξ1 → a and ξ2 → b and using the above derivation for -$$ -\frac{d\varphi}{d\alpha} = \int_a^b\frac{\partial}{\partial \alpha} f(x,\alpha)dx -$$ - -yields -$$ -\frac{d\varphi}{d\alpha} = \int_a^b\frac{\partial}{\partial \alpha} f(x,\alpha)dx+f(b,\alpha)\frac{\partial b}{\partial \alpha}-f(a,\alpha)\frac{\partial a}{\partial \alpha}. -$$ - -This is the general form of the Leibniz integral rule. - -Consider the function -$$ -\varphi(\alpha)=\int_0^1\frac{\alpha}{x^2+\alpha^2}dx. -$$ - -The function under the integral sign is not continuous at the point (x, α) = (0, 0), and the function φ(α) has a discontinuity at α = 0 because φ(α) approaches ±π/2 as α → 0±. - -If we differentiate φ(α) with respect to α under the integral sign, we get -$$ -\frac{d}{d\alpha} \varphi(\alpha)=\int_0^1\frac{\partial}{\partial\alpha}\left(\frac{\alpha}{x^2+\alpha^2}\right)dx=\int_0^1\frac{x^2-\alpha^2}{(x^2+\alpha^2)^2} dx=\left.-\frac{x}{x^2+\alpha^2}\right|_0^1=-\frac{1}{1+\alpha^2}, -$$ - -which is, of course, true for all values of α except α = 0. This may be integrated (with respect to α) to find - -\varphi(\alpha) = \begin{cases} - --\arctan(\alpha)-\frac{\pi}{2}, & \alpha < 0, \\ - -0, & \alpha = 0, \\ - --\arctan(\alpha)+\frac{\pi}{2}, & \alpha > 0. - -\end{cases} - -An example with variable limits: - -\begin{align} - -\frac{d}{dx} \int_{\sin x}^{\cos x} \cosh t^2dt &= \cosh\left(\cos^2 x\right) \frac{d}{dx}(\cos x) - \cosh\left(\sin^2 x\right) \frac{d}{dx} (\sin x) + \int_{\sin x}^{\cos x} \frac{\partial}{\partial x} (\cosh t^2) dt \\[6pt] - -&= \cosh(\cos^2 x) (-\sin x) - \cosh(\sin^2 x) (\cos x) + 0 \\[6pt] - -&= - \cosh(\cos^2 x) \sin x - \cosh(\sin^2 x) \cos x. - -\end{align} - -The formula -$$ -\frac{d}{dx} \left (\int_{a(x)}^{b(x)}f(x,t) dt \right) = f\big(x,b(x)\big)\cdot \frac{d}{dx} b(x) - f\big(x,a(x)\big)\cdot \frac{d}{dx} a(x) + \int_{a(x)}^{b(x)}\frac{\partial}{\partial x} f(x,t) dt -$$ - -can be of use when evaluating certain definite integrals. When used in this context, the Leibniz integral rule for differentiating under the integral sign is also known as Feynman's trick for integration. For example, consider -$$ -\mathbf I = \int_0^{\pi/2} \frac{1}{(a\cos^2 x +b\sin^2 x)^2}dx,\qquad a,b > 0.
-$$ - -First we calculate: - -\begin{align} - -\mathbf{J} &= \int_0^{\pi/2} \frac{1}{a\cos^2 x + b \sin^2 x} dx \\[6pt] - -&= \int_0^{\pi/2} \frac{\frac{1}{\cos^2 x}}{a + b \frac{\sin^2 x}{\cos^2 x}} dx \\[6pt] - -&= \int_0^{\pi/2} \frac{\sec^2 x}{a +b \tan^2 x} dx \\[6pt] - -&= \frac{1}{b} \int_0^{\pi/2} \frac{1}{\left(\sqrt{\frac{a}{b}}\right)^2+\tan^2 x}d(\tan x) \\[6pt] - -&= \frac{1}{\sqrt{ab}}\arctan \left(\sqrt{\frac{b}{a}}\tan x\right) \Bigg|_0^{\pi/2} \\[6pt] - -&=\frac{\pi}{2\sqrt{ab}}. - -\end{align} - -The limits of integration being independent of $a$, we have: -$$ -\frac{\partial \mathbf J}{\partial a}=-\int_0^{\pi/2} \frac{\cos^2 x}{\left(a\cos^2 x+b \sin^2 x\right)^2}dx -$$ - -On the other hand: -$$ -\frac{\partial \mathbf J}{\partial a}= \frac{\partial}{\partial a} \left(\frac{\pi}{2\sqrt{ab}}\right) =-\frac{\pi}{4\sqrt{a^3b}}. -$$ - -Equating these two relations then yields -$$ -\int_0^{\pi/2} \frac{\cos^2 x}{\left(a \cos^2 x+b \sin^2 x\right)^2}dx=\frac{\pi}{4\sqrt{a^3b}}. -$$ - -In a similar fashion, pursuing $\frac{\partial \mathbf J}{\partial b}$ yields -$$ -\int_0^{\pi/2}\frac{\sin^2 x}{\left(a\cos^2 x+b\sin^2 x\right)^2}dx = \frac{\pi}{4\sqrt{ab^3}}. -$$ - -Adding the two results then produces -$$ -\mathbf I = \int_0^{\pi/2}\frac{1}{\left(a\cos^2x+b\sin^2x\right)^2}dx=\frac{\pi}{4\sqrt{ab}}\left(\frac{1}{a}+\frac{1}{b}\right), -$$ - -which computes $\mathbf I$ as desired. - -This derivation may be generalized. Note that if we define -$$ -\mathbf I_n = \int_0^{\pi/2} \frac{1}{\left(a\cos^2 x+b\sin^2 x\right)^n}dx, -$$ - -it can easily be shown that -$$ -(1-n)\mathbf I_n = \frac{\partial\mathbf I_{n-1}}{\partial a} + \frac{\partial \mathbf I_{n-1}}{\partial b} -$$ - -Given $\mathbf{I}_1$, this integral reduction formula can be used to compute all of the values of $\mathbf{I}_n$ for $n > 1$. Integrals like $\mathbf{I}$ and $\mathbf{J}$ may also be handled using the Weierstrass substitution. - -Here, we consider the integral -$$ -\mathbf I(\alpha)=\int_0^{\pi/2} \frac{\ln (1+\cos\alpha \cos x)}{\cos x}dx, \qquad 0 < \alpha < \pi. -$$ - -Differentiating under the integral with respect to $\alpha$, we have - -\begin{align} - -\frac{d}{d\alpha} \mathbf{I}(\alpha) &= \int_0^{\pi/2} \frac{\partial}{\partial\alpha} \left(\frac{\ln(1 + \cos\alpha \cos x)}{\cos x}\right) dx \\[6pt] - -&=-\int_0^{\pi/2}\frac{\sin \alpha}{1+\cos \alpha \cos x}dx \\ - -&=-\int_0^{\pi/2}\frac{\sin \alpha}{\left(\cos^2 \frac{x}{2}+\sin^2 \frac{x}{2}\right)+\cos \alpha \left(\cos^2 \frac{x}{2}-\sin^2 \frac{x}{2}\right)} dx \\[6pt] - -&=-\frac{\sin\alpha}{1-\cos\alpha} \int_0^{\pi/2} \frac{1}{\cos^2\frac{x}{2}} \frac{1}{\frac{1+\cos \alpha}{1-\cos \alpha} +\tan^2 \frac{x}{2}} dx \\[6pt] - -&=-\frac{2\sin\alpha}{1-\cos\alpha} \int_0^{\pi/2} \frac{\frac{1}{2} \sec^2 \frac{x}{2}}{\frac{2 \cos^2 \frac{\alpha}{2}}{2 \sin^2\frac{\alpha}{2}} + \tan^2 \frac{x}{2}} dx \\[6pt] - -&=-\frac{2\left(2 \sin \frac{\alpha}{2} \cos \frac{\alpha}{2}\right)}{2 \sin^2 \frac{\alpha}{2}} \int_0^{\pi/2} \frac{1}{\cot^2\frac{\alpha}{2} + \tan^2 \frac{x}{2}} d\left(\tan \frac{x}{2}\right)\\[6pt] - -&=-2\cot \frac{\alpha}{2}\int_0^{\pi/2} \frac{1}{\cot^2\frac{\alpha}{2} + \tan^2\frac{x}{2}}d\left(\tan \frac{x}{2}\right)\\[6pt] - -&=-2\arctan \left(\tan \frac{\alpha}{2} \tan \frac{x}{2} \right) \bigg|_0^{\pi/2}\\[6pt] - -&=-\alpha. - -\end{align} - -Therefore: -$$ -\mathbf{I}(\alpha) = C - \frac{\alpha^2}{2}. 
-$$ - -But $\mathbf{I} \left(\frac{\pi}{2} \right) = 0$ by definition, so $C = \frac{\pi^2}{8}$ and -$$ -\mathbf I(\alpha) = \frac{\pi^2}{8}-\frac{\alpha^2}{2}. -$$ - -Here, we consider the integral -$$ -\int_0^{2\pi}e^{\cos\theta} \cos(\sin\theta)d\theta. -$$ - -We introduce a new variable φ and rewrite the integral as -$$ -f(\varphi) = \int_0^{2\pi} e^{\varphi\cos\theta} \cos(\varphi\sin\theta)d\theta. -$$ - -When φ = 1 this equals the original integral. However, this more general integral may be differentiated with respect to $\varphi$: - -\begin{align} - -\frac{df}{d\varphi} &= \int_0^{2\pi} \frac{\partial}{\partial\varphi}\left(e^{\varphi\cos\theta} \cos(\varphi\sin\theta)\right)d\theta \\[6pt] - -&= \int_0^{2\pi} e^{\varphi\cos\theta} \left( \cos\theta\cos(\varphi\sin\theta)-\sin\theta\sin(\varphi\sin\theta) \right)d\theta. - -\end{align} - -Now, fix φ, and consider the vector field on $ \mathbf{R}^2 $ defined by $ \mathbf{F}(x,y) = (F_1(x,y), F_2(x,y)) := (e^{\varphi x} \sin (\varphi y), e^{\varphi x} \cos (\varphi y)) $. Further, choose the positively oriented parametrization of the unit circle $ S^1 $ given by $ \mathbf{r} \colon [0, 2\pi) \to \mathbf{R}^2 $, $\mathbf{r}(\theta) := (\cos \theta, \sin \theta) $, so that $ \mathbf{r}'(\theta) = (-\sin \theta, \cos \theta) $. Then the final integral above is precisely - -\begin{align} - -& \int_0^{2\pi} e^{\varphi\cos\theta} \left( \cos\theta\cos(\varphi\sin\theta)-\sin\theta\sin(\varphi\sin\theta) \right)d\theta \\[6pt] - -= {} & \int_0^{2\pi} (e^{\varphi \cos \theta} \sin (\varphi \sin \theta), e^{\varphi \cos \theta} \cos (\varphi \sin \theta)) \cdot (-\sin \theta, \cos \theta) d\theta\\[6pt] - -= {} & \int_0^{2\pi} \mathbf{F}(\mathbf{r}(\theta)) \cdot \mathbf{r}'(\theta) d\theta\\[6pt] - -= {} & \oint\limits_{S^1} \mathbf{F}(\mathbf{r}) \cdot d\mathbf{r} - -= \oint\limits_{S^1} F_1 dx + F_2 dy, - -\end{align} - -the line integral of $\mathbf{F}$ over $S^1$. By Green's Theorem, this equals the double integral -$$ -\iint_D \frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y} dA, -$$ - -where $D$ is the closed unit disc. Its integrand is identically 0, so $df/d\varphi$ is likewise identically zero. This implies that f(φ) is constant. The constant may be determined by evaluating $f$ at $\varphi = 0$: -$$ -f(0) = \int_0^{2\pi} 1d\theta = 2\pi. -$$ - -Therefore, the original integral also equals $2\pi$. - -There are innumerable other integrals that can be solved using the technique of differentiation under the integral sign. For example, in each of the following cases, the original integral may be replaced by a similar integral having a new parameter $\alpha$: - -\begin{align} - -\int_0^\infty \frac{\sin x}{x}dx &\to \int_0^\infty e^{-\alpha x} \frac{\sin x}{x} dx,\\[6pt] - -\int_0^{\pi/2} \frac{x}{\tan x}dx &\to\int_0^{\pi/2} \frac{\tan^{-1}(\alpha \tan x)}{\tan x} dx,\\[6pt] - -\int_0^\infty \frac{\ln (1+x^2)}{1+x^2}dx &\to\int_0^\infty \frac{\ln (1+\alpha^2 x^2)}{1+x^2} dx \\[6pt] - -\int_0^1 \frac{x-1}{\ln x}dx &\to \int_0^1 \frac{x^\alpha-1}{\ln x} dx. - -\end{align} - -The first integral, the Dirichlet integral, is absolutely convergent for positive α but only conditionally convergent when $\alpha = 0$. Therefore, differentiation under the integral sign is easy to justify when $\alpha > 0$, but proving that the resulting formula remains valid when $\alpha = 0$ requires some careful work.
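Both closed forms above are easy to sanity-check numerically. This sketch (ours, using scipy) verifies I(α) = π²/8 − α²/2 at a sample point and the value 2π for the last worked integral:

```python
import numpy as np
from scipy.integrate import quad

# Check I(alpha) = pi^2/8 - alpha^2/2 for the logarithmic integral example.
alpha = 1.0
val, _ = quad(lambda x: np.log(1 + np.cos(alpha) * np.cos(x)) / np.cos(x),
              0, np.pi / 2)
print(val, np.pi**2 / 8 - alpha**2 / 2)        # both approximately 0.7337

# Check that the integral of e^{cos t} cos(sin t) over [0, 2*pi] equals 2*pi.
val2, _ = quad(lambda t: np.exp(np.cos(t)) * np.cos(np.sin(t)), 0, 2 * np.pi)
print(val2, 2 * np.pi)                         # both approximately 6.2832
```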
- -The measure-theoretic version of differentiation under the integral sign also applies to summation (finite or infinite) by interpreting summation as counting measure. An example of an application is the fact that power series are differentiable inside their radius of convergence. - -Differentiation under the integral sign is mentioned in the late physicist Richard Feynman's best-selling memoir Surely You're Joking, Mr. Feynman! in the chapter "A Different Box of Tools". He describes learning it, while in high school, from an old text, Advanced Calculus (1926), by Frederick S. Woods (who was a professor of mathematics at the Massachusetts Institute of Technology). The technique was not often taught when Feynman later received his formal education in calculus, but using this technique, Feynman was able to solve otherwise difficult integration problems upon his arrival at graduate school at Princeton University:
- -One thing I never did learn was contour integration. I had learned to do integrals by various methods shown in a book that my high school physics teacher Mr. Bader had given me. One day he told me to stay after class. "Feynman," he said, "you talk too much and you make too much noise. I know why. You're bored. So I'm going to give you a book. You go up there in the back, in the corner, and study this book, and when you know everything that's in this book, you can talk again." So every physics class, I paid no attention to what was going on with Pascal's Law, or whatever they were doing. I was up in the back with this book: Advanced Calculus, by Woods. Bader knew I had studied a little bit, so he gave me the real works—it was for a junior or senior course in college. It had Fourier series, Bessel functions, determinants, elliptic functions—all kinds of wonderful stuff that I didn't know anything about. That book also showed how to differentiate parameters under the integral sign—it's a certain operation. It turns out that's not taught very much in the universities; they don't emphasize it. But I caught on how to use that method, and I used that one damn tool again and again. So because I was self-taught using that book, I had peculiar methods of doing integrals. The result was, when guys at MIT or Princeton had trouble doing a certain integral, it was because they couldn't do it with the standard methods they had learned in school. If it was contour integration, they would have found it; if it was a simple series expansion, they would have found it. Then I come along and try differentiating under the integral sign, and often it worked. So I got a great reputation for doing integrals, only because my box of tools was different from everybody else's, and they had tried all their tools on it before giving the problem to me.
diff --git a/wiki/wikipedia/4204.txt b/wiki/wikipedia/4204.txt deleted file mode 100644 index 56e3816a9432c6e35ed5c271215b6fc8d844f5fd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4204.txt +++ /dev/null @@ -1,168 +0,0 @@ -In computer science, the Hopcroft–Karp algorithm (sometimes more accurately called the Hopcroft–Karp–Karzanov algorithm) is an algorithm that takes as input a bipartite graph and produces as output a maximum cardinality matching – a set of as many edges as possible with the property that no two edges share an endpoint. It runs in $O(|E|\sqrt{|V|})$ time in the worst case, where $E$ is the set of edges in the graph, $V$ is the set of vertices of the graph, and it is assumed that $|E|=\Omega(|V|)$. In the case of dense graphs the time bound becomes $O(|V|^{2.5})$, and for sparse random graphs it runs in time $O(|E|\log |V|)$ with high probability. - -The algorithm was found by Hopcroft and Karp and independently by Karzanov. As in previous methods for matching such as the Hungarian algorithm and the work of Edmonds, the Hopcroft–Karp algorithm repeatedly increases the size of a partial matching by finding augmenting paths. These paths are sequences of edges of the graph, which alternate between edges in the matching and edges out of the partial matching, and where the initial and final edge are not in the partial matching. Finding an augmenting path allows us to increment the size of the partial matching, by simply toggling the edges of the augmenting path (putting in the partial matching those that were not, and vice versa). Simpler algorithms for bipartite matching, such as the Ford–Fulkerson algorithm, find one augmenting path per iteration: the Hopcroft–Karp algorithm instead finds a maximal set of shortest augmenting paths, so as to ensure that only $O(\sqrt{|V|})$ iterations are needed instead of $O(|V|)$ iterations. The same performance of $O(|E|\sqrt{|V|})$ can be achieved to find maximum cardinality matchings in arbitrary graphs, with the more complicated algorithm of Micali and Vazirani. - -The Hopcroft–Karp algorithm can be seen as a special case of Dinic's algorithm for the maximum flow problem. - -A vertex that is not the endpoint of an edge in some partial matching $M$ is called a free vertex. The basic concept that the algorithm relies on is that of an augmenting path, a path that starts at a free vertex, ends at a free vertex, and alternates between unmatched and matched edges within the path. It follows from this definition that, except for the endpoints, all other vertices (if any) in an augmenting path must be non-free vertices. An augmenting path could consist of only two vertices (both free) and a single unmatched edge between them. - -If $M$ is a matching, and $P$ is an augmenting path relative to $M$, then the symmetric difference of the two sets of edges, $M \oplus P$, would form a matching with size $|M| + 1$. Thus, by finding augmenting paths, an algorithm may increase the size of the matching. - -Conversely, suppose that a matching $M$ is not optimal, and let $P$ be the symmetric difference $M \oplus M^*$ where $M^*$ is an optimal matching. Because $M$ and $M^*$ are both matchings, every vertex has degree at most 2 in $P$. So $P$ must form a collection of disjoint cycles, of paths with an equal number of matched and unmatched edges in $M$, of augmenting paths for $M$, and of augmenting paths for $M^*$; but the latter is impossible because $M^*$ is optimal.
Now, the cycles and the paths with equal numbers of matched and unmatched edges do not contribute to the difference in size between $M$ and $M^*$, so this difference is equal to the number of augmenting paths for $M$ in $P$. Thus, whenever there exists a matching $M^*$ larger than the current matching $M$, there must also exist an augmenting path. If no augmenting path can be found, an algorithm may safely terminate, since in this case $M$ must be optimal. - -An augmenting path in a matching problem is closely related to the augmenting paths arising in maximum flow problems, paths along which one may increase the amount of flow between the terminals of the flow. It is possible to transform the bipartite matching problem into a maximum flow instance, such that the alternating paths of the matching problem become augmenting paths of the flow problem. It suffices to insert two vertices, source and sink, and insert edges of unit capacity from the source to each vertex in $U$, and from each vertex in $V$ to the sink; and let edges from $U$ to $V$ have unit capacity. A generalization of the technique used in the Hopcroft–Karp algorithm to find maximum flow in an arbitrary network is known as Dinic's algorithm. - -The algorithm may be expressed in the following pseudocode. - -Input: Bipartite graph $G(U \cup V, E)$ - -Output: Matching $M \subseteq E$ -$$ -M \leftarrow \emptyset -$$ - -repeat -$$ -\mathcal P \leftarrow \{P_1, P_2, \dots, P_k\} -$$ maximal set of vertex-disjoint shortest augmenting paths -$$ -M \leftarrow M \oplus (P_1 \cup P_2 \cup \dots \cup P_k) -$$ - -until $\mathcal P = \emptyset$ - -In more detail, let $U$ and $V$ be the two sets in the bipartition of $G$, and let the matching from $U$ to $V$ at any time be represented as the set $M$. - -The algorithm is run in phases. Each phase consists of the following steps. - -* A breadth-first search partitions the vertices of the graph into layers. The free vertices in $U$ are used as the starting vertices of this search and form the first layer of the partitioning. At the first level of the search, there are only unmatched edges, since the free vertices in $U$ are by definition not adjacent to any matched edges. At subsequent levels of the search, the traversed edges are required to alternate between matched and unmatched. That is, when searching for successors from a vertex in $U$, only unmatched edges may be traversed, while from a vertex in $V$ only matched edges may be traversed. The search terminates at the first layer $k$ where one or more free vertices in $V$ are reached. - -* All free vertices in $V$ at layer $k$ are collected into a set $F$. That is, a vertex $v$ is put into $F$ if and only if it ends a shortest augmenting path. - -* The algorithm finds a maximal set of vertex-disjoint augmenting paths of length $k$. (Maximal means that no more such paths can be added. This is different from finding the maximum number of such paths, which would be harder to do. Fortunately, it is sufficient here to find a maximal set of paths.) This set may be computed by depth-first search (DFS) from $F$ to the free vertices in $U$, using the breadth-first layering to guide the search: the DFS is only allowed to follow edges that lead to an unused vertex in the previous layer, and paths in the DFS tree must alternate between matched and unmatched edges. Once an augmenting path is found that involves one of the vertices in $F$, the DFS is continued from the next starting vertex.
Any vertex encountered during the DFS can immediately be marked as used, since if there is no path from it to $U$ at the current point in the DFS, then that vertex cannot be used to reach $U$ at any other point in the DFS. This ensures $O(|E|)$ running time for the DFS. It is also possible to work in the other direction, from free vertices in $U$ to those in $V$, which is the variant used in the pseudocode. - -* Every one of the paths found in this way is used to enlarge $M$. - -The algorithm terminates when no more augmenting paths are found in the breadth-first search part of one of the phases. - -Each phase consists of a single breadth-first search and a single depth-first search. Thus, a single phase may be implemented in $O(|E|)$ time. - -Therefore, the first $\sqrt{|V|}$ phases, in a graph with $|V|$ vertices and $|E|$ edges, take time $O(|E|\sqrt{|V|})$. - -Each phase increases the length of the shortest augmenting path by at least one: the phase finds a maximal set of augmenting paths of the given length, so any remaining augmenting path must be longer. Therefore, once the initial $\sqrt{|V|}$ phases of the algorithm are complete, the shortest remaining augmenting path has at least $\sqrt{|V|}$ edges in it. However, the symmetric difference of the eventual optimal matching and of the partial matching $M$ found by the initial phases forms a collection of vertex-disjoint augmenting paths and alternating cycles. If each of the paths in this collection has length at least $\sqrt{|V|}$, there can be at most $\sqrt{|V|}$ paths in the collection, and the size of the optimal matching can differ from the size of $M$ by at most $\sqrt{|V|}$ edges. Since each phase of the algorithm increases the size of the matching by at least one, there can be at most $\sqrt{|V|}$ additional phases before the algorithm terminates. - -Since the algorithm performs a total of at most $2\sqrt{|V|}$ phases, it takes a total time of $O(|E|\sqrt{|V|})$ in the worst case. - -In many instances, however, the time taken by the algorithm may be even faster than this worst case analysis indicates. For instance, in the average case for sparse bipartite random graphs, Bast et al. (improving a previous result) showed that with high probability all non-optimal matchings have augmenting paths of logarithmic length. As a consequence, for these graphs, the Hopcroft–Karp algorithm takes $O(\log |V|)$ phases and $O(|E| \log |V|)$ total time. - -For sparse graphs, the Hopcroft–Karp algorithm continues to have the best known worst-case performance, but for dense graphs ($|E|=\Omega(|V|^2)$) a more recent algorithm by Alt et al. achieves a slightly better time bound, $O\left(|V|^{1.5}\sqrt{\frac{|E|}{\log |V|}}\right)$. Their algorithm is based on using a push-relabel maximum flow algorithm and then, when the matching created by this algorithm becomes close to optimum, switching to the Hopcroft–Karp method. - -Several authors have performed experimental comparisons of bipartite matching algorithms. Their results in general tend to show that the Hopcroft–Karp method is not as good in practice as it is in theory: it is outperformed both by simpler breadth-first and depth-first strategies for finding augmenting paths, and by push-relabel techniques. - -The same idea of finding a maximal set of shortest augmenting paths works also for finding maximum cardinality matchings in non-bipartite graphs, and for the same reasons the algorithms based on this idea take $O(\sqrt{|V|})$ phases. However, for non-bipartite graphs, the task of finding the augmenting paths within each phase is more difficult.
Building on the work of several slower predecessors, Micali and Vazirani showed how to implement a phase in linear time, resulting in a non-bipartite matching algorithm with the same time bound as the Hopcroft–Karp algorithm for bipartite graphs. The Micali–Vazirani technique is complex, and its authors did not provide full proofs of their results; subsequently, a "clear exposition" was published by Peterson and alternative methods were described by other authors. In 2012, Vazirani offered a new simplified proof of the Micali–Vazirani algorithm. - -/* - -G = U ∪ V ∪ {NIL} - -where U and V are the left and right sides of the bipartite graph and NIL is a special null vertex - -*/ - -function BFS() is - -for each u in U do - -if Pair_U[u] = NIL then - -Dist[u] := 0 - -Enqueue(Q, u) - -else - -Dist[u] := ∞ - -Dist[NIL] := ∞ - -while Empty(Q) = false do - -u := Dequeue(Q) - -if Dist[u] < Dist[NIL] then - -for each v in Adj[u] do - -if Dist[Pair_V[v]] = ∞ then - -Dist[Pair_V[v]] := Dist[u] + 1 - -Enqueue(Q, Pair_V[v]) - -return Dist[NIL] ≠ ∞ - -function DFS(u) is - -if u ≠ NIL then - -for each v in Adj[u] do - -if Dist[Pair_V[v]] = Dist[u] + 1 then - -if DFS(Pair_V[v]) = true then - -Pair_V[v] := u - -Pair_U[u] := v - -return true - -Dist[u] := ∞ - -return false - -return true - -function Hopcroft–Karp is - -for each u in U do - -Pair_U[u] := NIL - -for each v in V do - -Pair_V[v] := NIL - -matching := 0 - -while BFS() = true do - -for each u in U do - -if Pair_U[u] = NIL then - -if DFS(u) = true then - -matching := matching + 1 - -return matching - -Let the vertices of our graph be partitioned into U and V, and consider a partial matching, as indicated by the Pair_U and Pair_V tables that contain the one vertex to which each vertex of U and of V is matched, or NIL for unmatched vertices. The key idea is to add two dummy vertices, one on each side of the graph: uDummy connected to all unmatched vertices in U and vDummy connected to all unmatched vertices in V. Now, if we run a breadth-first search (BFS) from uDummy to vDummy then we can get the paths of minimal length that connect currently unmatched vertices in U to currently unmatched vertices in V. Note that, as the graph is bipartite, these paths always alternate between vertices in U and vertices in V, and we require in our BFS that when going from V to U, we always select a matched edge. If we reach an unmatched vertex of V, then we end at vDummy and the search for paths in the BFS terminates. To summarize, the BFS starts at unmatched vertices in U and goes to all their neighbors in V; if all are matched, it goes back to the vertices in U to which all these vertices are matched (and which were not visited before), then it goes to all the neighbors of these vertices, and so on, until one of the vertices reached in V is unmatched. - -Observe in particular that BFS marks the unmatched nodes of U with distance 0, then increments the distance every time it comes back to U. This guarantees that the paths considered in the BFS are of minimal length to connect unmatched vertices of U to unmatched vertices of V while always going back from V to U on edges that are currently part of the matching. In particular, the special NIL vertex, which corresponds to vDummy, then gets assigned a finite distance, so the BFS function returns true iff some path has been found. If no path has been found, then there are no augmenting paths left and the matching is maximal.
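The pseudocode above translates almost line-for-line into Python. The following sketch is an illustrative addition (not part of the source article): dictionaries stand in for the Pair_U, Pair_V, and Dist tables, and Python's None plays the role of NIL.

```python
from collections import deque

INF = float("inf")

def hopcroft_karp(U, V, adj):
    """Maximum matching in a bipartite graph; adj maps each u in U to
    its neighbors in V. None plays the role of the NIL vertex."""
    pair_u = {u: None for u in U}
    pair_v = {v: None for v in V}
    dist = {}

    def bfs():
        # Layer the vertices, starting from the free vertices of U.
        queue = deque(u for u in U if pair_u[u] is None)
        for u in U:
            dist[u] = 0 if pair_u[u] is None else INF
        dist[None] = INF  # distance to NIL, i.e. to a free vertex of V
        while queue:
            u = queue.popleft()
            if dist[u] < dist[None]:
                for v in adj[u]:
                    w = pair_v[v]  # step back to U along a matched edge
                    if dist[w] == INF:
                        dist[w] = dist[u] + 1
                        if w is not None:
                            queue.append(w)
        return dist[None] != INF  # true iff an augmenting path exists

    def dfs(u):
        if u is None:
            return True  # reached a free vertex of V
        for v in adj[u]:
            if dist[pair_v[v]] == dist[u] + 1 and dfs(pair_v[v]):
                pair_u[u], pair_v[v] = v, u
                return True
        dist[u] = INF  # dead end: never revisit u in this phase
        return False

    matching = 0
    while bfs():
        for u in U:
            if pair_u[u] is None and dfs(u):
                matching += 1
    return matching, pair_u, pair_v

# Example: three left vertices, two right vertices, maximum matching size 2.
size, _, _ = hopcroft_karp([0, 1, 2], ["a", "b"],
                           {0: ["a", "b"], 1: ["a"], 2: ["b"]})
assert size == 2
```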
- -If BFS returns true, then we can go ahead and update the pairing for vertices on the minimal-length paths found from U to V: we do so using a depth-first search (DFS). Note that each vertex in V on such a path, except for the last one, is currently matched. So we can explore with the DFS, making sure that the paths that we follow correspond to the distances computed in the BFS. We update along every such path by removing from the matching all edges of the path that are currently in the matching, and adding to the matching all edges of the path that are currently not in the matching: as this is an augmenting path (the first and last edges of the path were not part of the matching, and the path alternated between matched and unmatched edges), this increases the number of edges in the matching. This is the same as replacing the current matching by the symmetric difference between the current matching and the entire path. - -Note that the code ensures that all augmenting paths that we consider are vertex-disjoint. Indeed, after doing the symmetric difference for a path, none of its vertices could be considered again in the DFS, just because the Dist[Pair_V[v]] will not be equal to Dist[u] + 1 (it would be exactly Dist[u]). - -Also observe that the DFS does not visit the same vertex multiple times. This is thanks to the following lines: - -Dist[u] := ∞ - -return false - -When we were not able to find any shortest augmenting path from a vertex u, then the DFS marks vertex u by setting Dist[u] to infinity, so that these vertices are not visited again. - -One last observation is that we actually don't need uDummy: its role is simply to put all unmatched vertices of U in the queue when we start the BFS. As for vDummy, it is denoted as NIL in the pseudocode above. diff --git a/wiki/wikipedia/4205.txt b/wiki/wikipedia/4205.txt deleted file mode 100644 index 2b823b7f76442f584903d88ea72d5916f2ed077a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4205.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, Gram's theorem states that an algebraic set in a finite-dimensional vector space invariant under some linear group can be defined by absolute invariants. It is named after J. P. Gram, who published it in 1874. diff --git a/wiki/wikipedia/4206.txt b/wiki/wikipedia/4206.txt deleted file mode 100644 index 8500efe245014218093111bc9c21debed09ae377..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4206.txt +++ /dev/null @@ -1,3 +0,0 @@ -In logic, a rule of replacement is a transformation rule that may be applied to only a particular segment of an expression. A logical system may be constructed so that it uses either axioms, rules of inference, or both as transformation rules for logical expressions in the system. Whereas a rule of inference is always applied to a whole logical expression, a rule of replacement may be applied to only a particular segment. Within the context of a logical proof, logically equivalent expressions may replace each other. Rules of replacement are used in propositional logic to manipulate propositions. - -Common rules of replacement include de Morgan's laws, commutation, association, distribution, double negation, transposition, material implication, logical equivalence, exportation, and tautology.
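Replacement rules assert logical equivalences, so they can be checked mechanically by enumerating truth assignments. The following small Python check is an illustrative addition (not part of the source article); it verifies de Morgan's law both on its own and when applied to a segment of a larger expression:

```python
from itertools import product

# De Morgan's law: not(p and q) is logically equivalent to (not p) or (not q).
for p, q in product([False, True], repeat=2):
    assert (not (p and q)) == ((not p) or (not q))

# Because the two sides are equivalent, one may replace the other even when
# it occurs only as a segment of a larger expression, e.g. inside "r implies ...":
for p, q, r in product([False, True], repeat=3):
    lhs = (not r) or (not (p and q))        # r -> not(p and q)
    rhs = (not r) or ((not p) or (not q))   # r -> (not p) or (not q)
    assert lhs == rhs
```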
diff --git a/wiki/wikipedia/4207.txt b/wiki/wikipedia/4207.txt deleted file mode 100644 index 2df8086d5c54f5e6ac18cc54d64b0c0bd337fd94..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4207.txt +++ /dev/null @@ -1,73 +0,0 @@ -In computational geometry, the Bowyer–Watson algorithm is a method for computing the Delaunay triangulation of a finite set of points in any number of dimensions. The algorithm can also be used to obtain a Voronoi diagram of the points, which is the dual graph of the Delaunay triangulation. - -The Bowyer–Watson algorithm is an incremental algorithm. It works by adding points, one at a time, to a valid Delaunay triangulation of a subset of the desired points. After every insertion, any triangles whose circumcircles contain the new point are deleted, leaving a star-shaped polygonal hole which is then re-triangulated using the new point. By using the connectivity of the triangulation to efficiently locate triangles to remove, the algorithm can take $O(N \log N)$ operations to triangulate $N$ points, although special degenerate cases exist where this goes up to $O(N^2)$. - -[Figure: step-by-step gallery. First step: insert a node in an enclosing "super"-triangle; then insert the second, third, fourth, and fifth (last) nodes; finally, remove edges with extremes in the super-triangle.] - -The algorithm is sometimes known just as the Bowyer Algorithm or the Watson Algorithm. Adrian Bowyer and David Watson devised it independently of each other at the same time, and each published a paper on it in the same issue of The Computer Journal. - -The following pseudocode describes a basic implementation of the Bowyer–Watson algorithm. Its time complexity is $O(n^2)$. Efficiency can be improved in a number of ways. For example, the triangle connectivity can be used to locate the triangles which contain the new point in their circumcircle, without having to check all of the triangles; by doing so we can decrease time complexity to $O(n \log n)$. Pre-computing the circumcircles can save time at the expense of additional memory usage. And if the points are uniformly distributed, sorting them along a space-filling Hilbert curve prior to insertion can also speed point location.
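One detail of the pseudocode below, the test "point is inside circumcircle of triangle", is commonly implemented as a sign-of-determinant predicate. The following minimal Python sketch is an illustrative addition (not part of the source article) and assumes the triangle's vertices are given in counterclockwise order:

```python
def in_circumcircle(a, b, c, d):
    """True if point d lies strictly inside the circumcircle of triangle (a, b, c).
    Each point is an (x, y) pair; a, b, c must be in counterclockwise order."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    # 3x3 determinant of the translated points, each row augmented with
    # its squared distance from d; positive sign means d is inside.
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
         - (bx * bx + by * by) * (ax * cy - cx * ay)
         + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0

# Example: (0.5, 0.5) is the circumcenter of the unit right triangle,
# so it certainly lies inside the circumcircle.
assert in_circumcircle((0, 0), (1, 0), (0, 1), (0.5, 0.5))
```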
- - - -function BowyerWatson (pointList) - -// pointList is a set of coordinates defining the points to be triangulated - -triangulation := empty triangle mesh data structure - -add super-triangle to triangulation // must be large enough to completely contain all the points in pointList - -for each point in pointList do // add all the points one at a time to the triangulation - -badTriangles := empty set - -for each triangle in triangulation do // first find all the triangles that are no longer valid due to the insertion - -if point is inside circumcircle of triangle - -add triangle to badTriangles - -polygon := empty set - -for each triangle in badTriangles do // find the boundary of the polygonal hole - -for each edge in triangle do - -if edge is not shared by any other triangles in badTriangles - -add edge to polygon - -for each triangle in badTriangles do // remove them from the data structure - -remove triangle from triangulation - -for each edge in polygon do // re-triangulate the polygonal hole - -newTri := form a triangle from edge to point - -add newTri to triangulation - -for each triangle in triangulation do // done inserting points, now clean up - -if triangle contains a vertex from original super-triangle - -remove triangle from triangulation - -return triangulation - - diff --git a/wiki/wikipedia/4208.txt b/wiki/wikipedia/4208.txt deleted file mode 100644 index 21b86f4cce7e9ecdd28e8ff6515395ca42a2daf3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4208.txt +++ /dev/null @@ -1,23 +0,0 @@ -The Atiyah–Segal completion theorem is a theorem in mathematics about equivariant K-theory in homotopy theory. Let G be a compact Lie group and let X be a G-CW-complex. The theorem then states that the projection map -$$ -\pi\colon X\times EG\to X -$$ - -induces an isomorphism of prorings -$$ -\pi^*\colon K_G^*(X)_{\widehat{I}} \to K^*((X\times EG)/G). -$$ - -Here, the induced map has as domain the completion of the G-equivariant K-theory of X with respect to I, where I denotes the augmentation ideal of the representation ring of G. - -In the special case of X a point, the theorem specializes to give an isomorphism $K^*(BG)\cong R(G)_{\widehat{I}}$ between the K-theory of the classifying space of G and the completion of the representation ring. - -The theorem can be interpreted as giving a comparison between the geometrical process of taking the homotopy quotient of a G-space, by making the action free before passing to the quotient, and the algebraic process of completing with respect to an ideal. - -The theorem was first proved for finite groups by Michael Atiyah in 1961, and a proof of the general case was published by Atiyah together with Graeme Segal in 1969. - -Different proofs have since appeared generalizing the theorem to completion with respect to families of subgroups. - -The corresponding statement for algebraic K-theory was proven by Alexander Merkurjev, holding in the case that the group is algebraic over the complex numbers. diff --git a/wiki/wikipedia/4209.txt b/wiki/wikipedia/4209.txt deleted file mode 100644 index 5ac29ee938b0a138261d901276c7e51009eeedf1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4209.txt +++ /dev/null @@ -1,229 +0,0 @@ -In vector calculus, the divergence theorem, also known as Gauss's theorem or Ostrogradsky's theorem, is a theorem which relates the flux of a vector field through a closed surface to the divergence of the field in the volume enclosed.
- -More precisely, the divergence theorem states that the surface integral of a vector field over a closed surface, which is called the flux through the surface, is equal to the volume integral of the divergence over the region inside the surface. Intuitively, it states that the sum of all sources of the field in a region (with sinks regarded as negative sources) gives the net flux out of the region. - -The divergence theorem is an important result for the mathematics of physics and engineering, particularly in electrostatics and fluid dynamics. In these fields, it is usually applied in three dimensions. However, it generalizes to any number of dimensions. In one dimension, it is equivalent to integration by parts. In two dimensions, it is equivalent to Green's theorem. - -Vector fields are often illustrated using the example of the velocity field of a fluid, such as a gas or liquid. A moving liquid has a velocity (a speed and a direction) at each point, which can be represented by a vector, so that the velocity of the liquid forms a vector field. Consider an imaginary closed surface S inside a body of liquid, enclosing a volume of liquid. The flux of liquid out of the volume is equal to the volume rate of fluid crossing this surface, i.e., the surface integral of the velocity over the surface. - -Since liquids are incompressible, the amount of liquid inside a closed volume is constant; if there are no sources or sinks inside the volume then the flux of liquid out of S is zero. If the liquid is moving, it may flow into the volume at some points on the surface S and out of the volume at other points, but the amounts flowing in and out at any moment are equal, so the net flux of liquid out of the volume is zero. - -However, if a source of liquid is inside the closed surface, such as a pipe through which liquid is introduced, the additional liquid will exert pressure on the surrounding liquid, causing an outward flow in all directions. This will cause a net outward flow through the surface S. The flux outward through S equals the volume rate of flow of fluid into S from the pipe. Similarly, if there is a sink or drain inside S, such as a pipe which drains the liquid off, the external pressure of the liquid will cause a velocity throughout the liquid directed inward toward the location of the drain. The volume rate of flow of liquid inward through the surface S equals the rate of liquid removed by the sink. - -If there are multiple sources and sinks of liquid inside S, the flux through the surface can be calculated by adding up the volume rate of liquid added by the sources and subtracting the rate of liquid drained off by the sinks. The volume rate of flow of liquid through a source or sink (with the flow through a sink given a negative sign) is equal to the divergence of the velocity field at the pipe mouth, so adding up (integrating) the divergence of the liquid throughout the volume enclosed by S equals the volume rate of flux through S. This is the divergence theorem. - -The divergence theorem is employed in any conservation law which states that the total volume of all sinks and sources, that is the volume integral of the divergence, is equal to the net flow across the volume's boundary. - -Suppose V is a subset of $\mathbb{R}^n$ (in the case of n = 3, V represents a volume in three-dimensional space) which is compact and has a piecewise smooth boundary S (also indicated with $\partial V = S$).
If F is a continuously differentiable vector field defined on a neighborhood of V, then: -$$ -\iiint_V\left(\mathbf{\nabla}\cdot\mathbf{F}\right)\mathrm{d}V=\oiint_S(\mathbf{F}\cdot\mathbf{\hat{n}})\mathrm{d}S . -$$ - -The left side is a volume integral over the volume V, the right side is the surface integral over the boundary of the volume V. The closed manifold $\partial V$ is oriented by outward-pointing normals, and $\mathbf{\hat{n}}$ is the outward pointing unit normal at each point on the boundary $\partial V$. ($\mathrm{d} \mathbf{S}$ may be used as a shorthand for $\mathbf{n} \mathrm{d} S$.) In terms of the intuitive description above, the left-hand side of the equation represents the total of the sources in the volume V, and the right-hand side represents the total flow across the boundary S. - -The divergence theorem follows from the fact that if a volume V is partitioned into separate parts, the flux out of the original volume is equal to the sum of the flux out of each component volume. This is true despite the fact that the new subvolumes have surfaces that were not part of the original volume's surface, because these surfaces are just partitions between two of the subvolumes and the flux through them just passes from one volume to the other and so cancels out when the flux out of the subvolumes is summed. - -[Figure: A volume divided into two subvolumes. At right the two subvolumes are separated to show the flux out of the different surfaces.] - -See the diagram. A closed, bounded volume V is divided into two volumes V1 and V2 by a surface S3 (green). The flux Φ(Vi) out of each component region Vi is equal to the sum of the flux through its two faces, so the sum of the flux out of the two parts is -$$ -\Phi(V_\text{1}) + \Phi(V_\text{2}) = \Phi_\text{1} + \Phi_\text{31} + \Phi_\text{2} + \Phi_\text{32} -$$ - -where Φ1 and Φ2 are the flux out of surfaces S1 and S2, Φ31 is the flux through S3 out of volume 1, and Φ32 is the flux through S3 out of volume 2. The point is that surface S3 is part of the surface of both volumes. The "outward" direction of the normal vector $\mathbf{\hat n}$ is opposite for each volume, so the flux out of one through S3 is equal to the negative of the flux out of the other -$$ -\Phi_\text{31} = \iint_{S_3} \mathbf{F} \cdot \mathbf{\hat n} \mathrm{d}S = -\iint_{S_3} \mathbf{F} \cdot (-\mathbf{\hat n}) \mathrm{d}S = -\Phi_\text{32} -$$ - -so these two fluxes cancel in the sum. Therefore -$$ -\Phi(V_\text{1}) + \Phi(V_\text{2}) = \Phi_\text{1} + \Phi_\text{2} -$$ - -Since the union of surfaces S1 and S2 is S -$$ -\Phi(V_\text{1}) + \Phi(V_\text{2}) = \Phi(V) -$$ -
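This cancellation is easy to check numerically. The following Python sketch is an illustrative addition (not part of the source article): it computes the outward flux of an arbitrary example field through the boundary of a rectangle and through the boundaries of its two halves, and the shared-wall contributions cancel exactly, mirroring $\Phi_{31} = -\Phi_{32}$ above. (The example field is chosen so that the midpoint rule is exact on each edge.)

```python
import math

def F(x, y):
    # An arbitrary example field (each edge integrand below is linear,
    # so the midpoint rule is exact and the comparison is clean).
    return (x * x - y, x + math.sin(y))

def outward_flux(x0, x1, y0, y1, n=1000):
    """Midpoint-rule flux of F outward through the boundary of
    the rectangle [x0, x1] x [y0, y1]."""
    hx, hy = (x1 - x0) / n, (y1 - y0) / n
    total = 0.0
    for i in range(n):
        x = x0 + (i + 0.5) * hx
        total += -F(x, y0)[1] * hx  # bottom edge, outward normal (0, -1)
        total += F(x, y1)[1] * hx   # top edge, outward normal (0, 1)
        y = y0 + (i + 0.5) * hy
        total += -F(x0, y)[0] * hy  # left edge, outward normal (-1, 0)
        total += F(x1, y)[0] * hy   # right edge, outward normal (1, 0)
    return total

# Flux out of the whole box equals the sum of the fluxes out of its halves:
# the two integrals over the shared wall at x = 1 cancel (Phi_31 = -Phi_32).
whole = outward_flux(0, 2, 0, 1)
halves = outward_flux(0, 1, 0, 1) + outward_flux(1, 2, 0, 1)
assert abs(whole - halves) < 1e-9
```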
- -This principle applies to a volume divided into any number of parts, as shown in the diagram. - -$$ -\iiint_V \mathbf{c} \cdot \nabla f \mathrm{d}V =\oiint_S (\mathbf{c} f) \cdot \mathbf{n} \mathrm{d}S - \iiint_V f (\nabla \cdot \mathbf{c}) \mathrm{d}V. -$$ - -The last term on the right vanishes for constant $\mathbf{c}$ or any divergence-free (solenoidal) vector field, e.g., incompressible flows without sources or sinks such as phase change or chemical reactions. In particular, taking $\mathbf{c}$ to be constant: -$$ -\iiint_V \nabla f \mathrm{d}V =\oiint_S f\mathbf{n} \mathrm{d}S. -$$ - -* With $\mathbf{F}\rightarrow \mathbf{c}\times\mathbf{F}$ for vector field F and constant vector c: -$$ -\iiint_V \left(\nabla\times\mathbf{F}\right) \mathrm{d}V =\oiint_S \mathbf{n}\times\mathbf{F} \mathrm{d}S. -$$ - -Any inverse-square law can instead be written in a Gauss's law-type form (with a differential and integral form, as described above). Two examples are Gauss's law (in electrostatics), which follows from the inverse-square Coulomb's law, and Gauss's law for gravity, which follows from the inverse-square Newton's law of universal gravitation. The derivation of the Gauss's law-type equation from the inverse-square formulation or vice versa is exactly the same in both cases; see either of those articles for details. Joseph-Louis Lagrange discovered the divergence theorem in 1762. - -Carl Friedrich Gauss was also using surface integrals while working on the gravitational attraction of an elliptical spheroid in 1813, when he proved special cases of the divergence theorem. But it was Mikhail Ostrogradsky, who gave the first proof of the general theorem, in 1826, as part of his investigation of heat flow. Special cases were proven by George Green in 1828 in An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism, Siméon Denis Poisson in 1824 in a paper on elasticity, and Frédéric Sarrus in 1828 in his work on floating bodies. - -To verify the planar variant of the divergence theorem for a region $R$: -$$ -R = \left \{ (x, y) \in \mathbb{R}^2 \ : \ x^2 + y^2 \leq 1 \right \}, -$$ - -and the vector field: -$$ - \mathbf{F}(x,y)= 2 y\mathbf{i} + 5x \mathbf{j}. -$$ - -The boundary of $R$ is the unit circle, $C$, that can be represented parametrically by: -$$ -x = \cos(s), \quad y = \sin(s) -$$ - -such that $0 \leq s \leq 2\pi$, where $s$ is the arc length from the point $s = 0$ to the point $P$ on $C$. Then a vector equation of $C$ is -$$ -C(s) = \cos(s)\mathbf{i} + \sin(s)\mathbf{j}. -$$ - -At a point $P$ on $C$: -$$ - P = (\cos(s), \sin(s)) \Rightarrow \mathbf{F} = 2\sin(s)\mathbf{i} + 5\cos(s)\mathbf{j}. -$$ - -Therefore, -$$ -\begin{align} -\oint_C \mathbf{F} \cdot \mathbf{n} \mathrm{d}s &= \int_0^{2\pi} (2 \sin(s) \mathbf{i} + 5 \cos(s) \mathbf{j}) \cdot (\cos(s) \mathbf{i} + \sin(s) \mathbf{j}) \mathrm{d}s\\ -&= \int_0^{2\pi} (2 \sin(s) \cos(s) + 5 \sin(s) \cos(s)) \mathrm{d}s\\ -&= 7\int_0^{2\pi} \sin(s) \cos(s) \mathrm{d}s\\ -&= 0. -\end{align} -$$ - -Because $M = 2y$, we can evaluate $\frac{\partial M}{\partial x} = 0$, and because $N = 5x$, $\frac{\partial N}{\partial y} = 0$. Thus -$$ -\iint_R \mathbf{\nabla}\cdot\mathbf{F} \mathrm{d}A = \iint_R \left (\frac{\partial M}{\partial x} + \frac{\partial N}{\partial y} \right) \mathrm{d}A = 0. -$$
- -Suppose we want to evaluate the flux of the vector field $ \mathbf{F}=2x^2 \textbf{i} +2y^2 \textbf{j} +2z^2\textbf{k} $ through the boundary of the region defined by the following inequalities: -$$ -0\le x \le 3, \quad -2\le y \le 2, \quad 0\le z \le 2\pi -$$ - -By the divergence theorem, -$$ -\iiint_V\left(\mathbf{\nabla}\cdot\mathbf{F}\right) \mathrm{d}V=\oiint_S(\mathbf{F}\cdot\mathbf{n}) \mathrm{d}S . -$$ - -We now need to determine the divergence of $\textbf{F}$. If $\mathbf{F}$ is a three-dimensional vector field, then the divergence of $\textbf{F}$ is given by $\nabla \cdot \textbf{F} = \left( \frac{\partial}{\partial x}\textbf{i} + \frac{\partial}{\partial y}\textbf{j} + \frac{\partial}{\partial z}\textbf{k} \right) \cdot \textbf{F}$. - -Thus, we can set up the following flux integral -$$ -I = \oiint_S \mathbf{F} \cdot \mathbf{n} \mathrm{d}S, -$$ - -as follows: -$$ -\begin{align} -I &=\iiint_V \nabla \cdot \mathbf{F} \mathrm{d}V\\[6pt] -&=\iiint_V \left( \frac{\partial F_x}{\partial x}+\frac{\partial F_y}{\partial y}+\frac{\partial F_z}{\partial z} \right) \mathrm{d}V\\[6pt] -&=\iiint_V (4x+4y+4z) \mathrm{d}V\\[6pt] -&=\int_0^{2\pi} \int_{-2}^2 \int_0^3 (4x+4y+4z) \mathrm{d}x \mathrm{d}y \mathrm{d}z -\end{align} -$$ - -Now that we have set up the integral, we can evaluate it. -$$ -\begin{align} -\int_0^{2\pi} \int_{-2}^2 \int_0^3 (4x+4y+4z) \mathrm{d}x \mathrm{d}y \mathrm{d}z &=\int_0^{2\pi} \int_{-2}^2 (12y+12z+18) \mathrm{d}y \mathrm{d}z\\[6pt] -&=\int_0^{2\pi} 24 (2z+3) \mathrm{d}z\\[6pt] -&=48\pi(2\pi+3) -\end{align} -$$ - -One can use the general Stokes' Theorem to equate the n-dimensional volume integral of the divergence of a vector field F over a region U to the (n − 1)-dimensional surface integral of F over the boundary of U: -$$ - \underbrace{ \int \cdots \int_U }_n \nabla \cdot \mathbf{F} \mathrm{d}V = \underbrace{ \oint_{} \cdots \oint_{\partial U} }_{n-1} \mathbf{F} \cdot \mathbf{n} \mathrm{d}S -$$ - -This equation is also known as the divergence theorem. - -When n = 2, this is equivalent to Green's theorem. - -When n = 1, it reduces to integration by parts. - -Writing the theorem in Einstein notation: -$$ -\iiint_V \dfrac{\partial F_i}{\partial x_i} \mathrm{d}V=\oiint_S F_i n_i \mathrm{d}S -$$ - -suggestively, replacing the vector field F with a rank-n tensor field T, this can be generalized to: -$$ -\iiint_V \dfrac{\partial T_{i_1i_2\cdots i_q\cdots i_n}}{\partial x_{i_q}} \mathrm{d}V=\oiint_S T_{i_1i_2\cdots i_q\cdots i_n}n_{i_q} \mathrm{d}S . -$$ - -where on each side, tensor contraction occurs for at least one index. This form of the theorem is still in 3d; each index takes the values 1, 2, and 3. It can be generalized further still to higher (or lower) dimensions (for example to 4d spacetime in general relativity).
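The worked example above can be double-checked symbolically, for instance with SymPy; the following snippet is an illustrative addition (not part of the source article):

```python
from sympy import symbols, integrate, pi, simplify

x, y, z = symbols("x y z")
div_F = 4*x + 4*y + 4*z  # divergence of F = 2x^2 i + 2y^2 j + 2z^2 k

# Integrate over x in [0, 3], y in [-2, 2], z in [0, 2*pi].
flux = integrate(div_F, (x, 0, 3), (y, -2, 2), (z, 0, 2*pi))
assert simplify(flux - 48*pi*(2*pi + 3)) == 0  # matches 48*pi*(2*pi + 3)
```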
diff --git a/wiki/wikipedia/421.txt b/wiki/wikipedia/421.txt deleted file mode 100644 index 7c0ef60aae4d8d19b23f880354028e610e883114..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/421.txt +++ /dev/null @@ -1,26 +0,0 @@ -The stochastic Gronwall inequality is a generalization of Gronwall's inequality and has been used for proving the well-posedness of path-dependent stochastic differential equations with local monotonicity and coercivity assumptions with respect to the supremum norm. - -Let $X(t), t\geq 0$ be a non-negative right-continuous $(\mathcal{F}_t)_{t\ge 0}$-adapted process. Assume that $A:[0,\infty)\to[0,\infty)$ is a deterministic non-decreasing càdlàg function with $A(0)=0$ and let $H(t),t\geq 0$ be a non-decreasing and càdlàg adapted process starting from $H(0)\geq 0$. Further, let $M(t),t\geq 0$ be an $(\mathcal{F}_t)_{t\ge 0}$-local martingale with $M(0)=0$ and càdlàg paths. - -Assume that for all $t\geq 0$, -$$ - X(t)\leq \int_0^t X^*(u^-)d A(u)+M(t)+H(t), -$$ - -where $X^*(u):=\sup_{r\in[0,u]}X(r)$, and define $c_p=\frac{p^{-p}}{1-p}$. Then the following estimates hold for $p\in (0,1)$ and $T>0$: - -*If $\mathbb{E} \big(H(T)^p\big)<\infty$ and $H$ is predictable, then $\mathbb{E}\left[\left(X^*(T)\right)^p\Big\vert\mathcal{F}_0\right]\leq \frac{c_p}{p}\mathbb{E}\left[(H(T))^p\big\vert\mathcal{F}_0\right] \exp \left\lbrace c_p^{1/p}A(T)\right\rbrace$; - -*If $\mathbb{E} \big(H(T)^p\big)<\infty$ and $M$ has no negative jumps, then $\mathbb{E}\left[\left(X^*(T)\right)^p\Big\vert\mathcal{F}_0\right]\leq \frac{c_p+1}{p}\mathbb{E}\left[(H(T))^p\big\vert\mathcal{F}_0\right] \exp \left\lbrace (c_p+1)^{1/p}A(T)\right\rbrace$; - -*If $\mathbb{E} H(T)<\infty,$ then $\displaystyle{\mathbb{E}\left[\left(X^*(T)\right)^p\Big\vert\mathcal{F}_0\right]\leq \frac{c_p}{p}\left(\mathbb{E}\left[ H(T)\big\vert\mathcal{F}_0\right]\right)^p \exp \left\lbrace c_p^{1/p} A(T)\right\rbrace}$; - -It can be proven using Lenglart's inequality. diff --git a/wiki/wikipedia/4210.txt b/wiki/wikipedia/4210.txt deleted file mode 100644 index cb220d9f49c0ed440d6f92ced3984d1978e54b13..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4210.txt +++ /dev/null @@ -1,19 +0,0 @@ -In mathematics, a pathological object is one which possesses deviant, irregular, or counterintuitive properties, in a way that distinguishes it from what is conceived as a typical object in the same category. The opposite of pathological is well-behaved. - -A classic example of a pathological structure is the Weierstrass function, which is continuous everywhere but differentiable nowhere. - -Whether a behavior is pathological is by definition subject to personal intuition. Pathologies depend on context, training, and experience, and what is pathological to one researcher may very well be standard behavior to another. - -Pathological examples can show the importance of the assumptions in a theorem. For example, in statistics, the Cauchy distribution does not satisfy the central limit theorem, even though its symmetric bell-shape appears similar to many distributions which do; it fails the requirement to have a mean and standard deviation which exist and are finite. - -Some of the best-known paradoxes, such as the Banach–Tarski paradox and the Hausdorff paradox, are based on the existence of non-measurable sets. Mathematicians, unless they take the minority position of denying the axiom of choice, are in general resigned to living with such sets.
- -In computer science, pathological has a slightly different sense with regard to the study of algorithms. Here, an input (or set of inputs) is said to be pathological if it causes atypical behavior from the algorithm, such as a violation of its average-case complexity, or even its correctness. For example, hash tables generally have pathological inputs: sets of keys that collide on hash values. Quicksort normally has $O(n \log{n})$ time complexity, but deteriorates to $O(n^2)$ when it is given input that triggers suboptimal behavior. - -The term is often used pejoratively, as a way of dismissing such inputs as being specially designed to break a routine that is otherwise sound in practice (compare with Byzantine). On the other hand, awareness of pathological inputs is important, as they can be exploited to mount a denial-of-service attack on a computer system. Also, the term in this sense is a matter of subjective judgment as with its other senses. Given enough run time, a sufficiently large and diverse user community, or other factors, an input which may be dismissed as pathological could in fact occur (as seen in the first test flight of the Ariane 5). - -A similar but distinct phenomenon is that of exceptional objects (and exceptional isomorphisms), which occurs when there are a "small" number of exceptions to a general pattern (such as a finite set of exceptions to an otherwise infinite rule). By contrast, in cases of pathology, often most or almost all instances of a phenomenon are pathological (e.g., almost all real numbers are irrational). - -Subjectively, exceptional objects (such as the icosahedron or sporadic simple groups) are generally considered "beautiful", unexpected examples of a theory, while pathological phenomena are often considered "ugly", as the name implies. Accordingly, theories are usually expanded to include exceptional objects. For example, the exceptional Lie algebras are included in the theory of semisimple Lie algebras: the axioms are seen as good, the exceptional objects as unexpected but valid. - -By contrast, pathological examples are instead taken to point out a shortcoming in the axioms, requiring stronger axioms to rule them out. For example, one may require tameness of an embedding of a sphere, as in the Schönflies problem. In general, one may study the more general theory, including the pathologies, which may provide its own simplifications (the real numbers have properties very different from the rationals, and likewise continuous maps have very different properties from smooth ones), but also the narrower theory, from which the original examples were drawn. diff --git a/wiki/wikipedia/4211.txt b/wiki/wikipedia/4211.txt deleted file mode 100644 index a8cac429cbb26027a8e69afdbbcc41f793b9f5c7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4211.txt +++ /dev/null @@ -1,11 +0,0 @@ -In mathematics, an elementary proof is a mathematical proof that only uses basic techniques. More specifically, the term is used in number theory to refer to proofs that make no use of complex analysis. It was once thought that certain theorems, like the prime number theorem, could only be proved by invoking "higher" mathematical theorems or techniques. However, many of these results have since been reproven using only elementary techniques. - -While there is generally no consensus as to what counts as elementary, the term is nevertheless a common part of the mathematical jargon.
An elementary proof is not necessarily simple, in the sense of being easy to understand or trivial. In fact, some elementary proofs can be quite complicated, and this is especially true when a statement of notable importance is involved. - -The distinction between elementary and non-elementary proofs has been considered especially important in regard to the prime number theorem. This theorem was first proved in 1896 by Jacques Hadamard and Charles Jean de la Vallée-Poussin using complex analysis. Many mathematicians then attempted to construct elementary proofs of the theorem, without success. G. H. Hardy expressed strong reservations; he considered that the essential "depth" of the result ruled out elementary proofs. - -However, in 1948, Atle Selberg produced new methods which led him and Paul Erdős to find elementary proofs of the prime number theorem. - -A possible formalization of the notion of "elementary" in connection to a proof of a number-theoretical result is the restriction that the proof can be carried out in Peano arithmetic. In that sense, too, these proofs are elementary. - -Harvey Friedman conjectured, "Every theorem published in the Annals of Mathematics whose statement involves only finitary mathematical objects (i.e., what logicians call an arithmetical statement) can be proved in elementary arithmetic." The form of elementary arithmetic referred to in this conjecture can be formalized by a small set of axioms concerning integer arithmetic and mathematical induction. For instance, according to this conjecture, Fermat's Last Theorem should have an elementary proof; Wiles' proof of Fermat's Last Theorem is not elementary. However, there are other simple statements about arithmetic such as the existence of iterated exponential functions that cannot be proven in this theory. diff --git a/wiki/wikipedia/4212.txt b/wiki/wikipedia/4212.txt deleted file mode 100644 index bf1e67c8ea7ecfb2eb5cb0dc3955e522b922dd83..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4212.txt +++ /dev/null @@ -1,71 +0,0 @@ -In mathematics, and particularly in the field of complex analysis, the Weierstrass factorization theorem asserts that every entire function can be represented as a (possibly infinite) product involving its zeroes. The theorem may be viewed as an extension of the fundamental theorem of algebra, which asserts that every polynomial may be factored into linear factors, one for each root. - -The theorem, which is named for Karl Weierstrass, is closely related to a second result that every sequence tending to infinity has an associated entire function with zeroes at precisely the points of that sequence. - -A generalization of the theorem extends it to meromorphic functions and allows one to consider a given meromorphic function as a product of three factors: terms depending on the function's zeros and poles, and an associated non-zero holomorphic function. - -The consequences of the fundamental theorem of algebra are twofold. - -Firstly, any finite sequence $\{c_n\}$ in the complex plane has an associated polynomial $p(z)$ that has zeroes precisely at the points of that sequence, $p(z) = \prod_n (z-c_n).$ - -Secondly, any polynomial function $p(z)$ in the complex plane has a factorization -$$ -p(z)=a\prod_n(z-c_n), -$$ - -where $a$ is a non-zero constant and $c_n$ are the zeroes of $p$. - -The two forms of the Weierstrass factorization theorem can be thought of as extensions of the above to entire functions.
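The finite case is easy to experiment with numerically: expanding $a\prod_n(z-c_n)$ recovers a polynomial from its prescribed zeroes. The following short NumPy sketch is an illustrative addition (not part of the source article); the roots and leading constant are arbitrary choices:

```python
import numpy as np

roots = [1, 2, 3]            # prescribed zeroes c_n
a = 2.0                      # non-zero leading constant
coeffs = a * np.poly(roots)  # expand a * (z-1)(z-2)(z-3)
print(coeffs)                # [ 2. -12. 22. -12.], i.e. 2z^3 - 12z^2 + 22z - 12

# The expanded polynomial vanishes exactly at the prescribed zeroes.
assert all(abs(np.polyval(coeffs, r)) < 1e-9 for r in roots)
```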
The necessity of additional terms in the product is demonstrated when one considers $\prod_n (z-c_n)$ where the sequence $\{c_n\}$ is not finite. It can never define an entire function, because the infinite product does not converge. Thus one cannot, in general, define an entire function from a sequence of prescribed zeroes or represent an entire function by its zeroes using the expressions yielded by the fundamental theorem of algebra. - -A necessary condition for convergence of the infinite product in question is that for each z, the factors $ (z-c_n) $ must approach 1 as $n\to\infty$. So it stands to reason that one should seek a function that could be 0 at a prescribed point, yet remain near 1 when not at that point and furthermore introduce no more zeroes than those prescribed. - -Weierstrass' elementary factors have these properties and serve the same purpose as the factors $ (z-c_n) $ above. - -Consider the functions of the form $\exp(-\tfrac{z^{n+1}}{n+1})$ for $n \in \mathbb{N}$. At $z=0$, they evaluate to $1$, and all of their derivatives up to order $n$ vanish there. Right after $z=1$, they sharply fall to some small positive value. In contrast, consider the function $1-z$, which has no flat slope but, at $z=1$, evaluates to exactly zero. Also note that for |z| < 1, -$$ -(1-z) = \exp(\ln(1-z)) = \exp \left( -\tfrac{z^1}{1}-\tfrac{z^2}{2}-\tfrac{z^3}{3}-\cdots \right). -$$ - -The elementary factors, also referred to as primary factors, are functions that combine the properties of zero slope and zero value: -$$ -E_n(z) = \begin{cases} (1-z) & \text{if }n=0, \\ (1-z)\exp \left( \frac{z^1}{1}+\frac{z^2}{2}+\cdots+\frac{z^n}{n} \right) & \text{otherwise}. \end{cases} -$$ - -For |z| < 1 and $n>0$, one may express it as -$$ -E_n(z)=\exp\left(-\tfrac{z^{n+1}}{n+1}\sum_{k=0}^\infty\tfrac{z^k}{1+k/(n+1)}\right) -$$ and one can read off how those properties are enforced. - -The utility of the elementary factors $E_n(z)$ lies in the following lemma: - -The trigonometric functions sine and cosine have the factorizations -$$ -\sin \pi z = \pi z \prod_{n\neq 0} \left(1-\frac{z}{n}\right)e^{z/n} = \pi z\prod_{n=1}^\infty \left(1-\left(\frac{z}{n}\right)^2\right) -$$ -$$ -\cos \pi z = \prod_{q \in \mathbb{Z},\ q\ \text{odd}} \left(1-\frac{2z}{q}\right)e^{2z/q} = \prod_{n=0}^\infty \left( 1 - \left(\frac{z}{n+\tfrac{1}{2}} \right)^2 \right) -$$ - -while the gamma function $\Gamma$ has factorization -$$ -\frac{1}{\Gamma (z)}=e^{\gamma z}z\prod_{n=1}^{\infty }\left ( 1+\frac{z}{n} \right )e^{-z/n}, -$$ - -where $\gamma$ is the Euler–Mascheroni constant. The cosine identity can be seen as a special case of -$$ -\frac{1}{\Gamma(s-z)\Gamma(s+z)} = \frac{1}{\Gamma(s)^2}\prod_{n=0}^\infty \left( 1 - \left(\frac{z}{n+s} \right)^2 \right) -$$ - -for $s=\tfrac{1}{2}$. - -If $f$ is an entire function of finite order $\rho$ and $m$ is the order of the zero of $f$ at $z=0$, then it admits a factorization -$$ -f(z) = z^m e^{g(z)} \displaystyle\prod_{n=1}^\infty E_{p}\!\!\left(\frac{z}{a_n}\right) -$$ - -where $g(z)$ is a polynomial of degree $q$, $q \leq \rho$, and $p = [\rho]$ is the integer part of $\rho$. diff --git a/wiki/wikipedia/4213.txt b/wiki/wikipedia/4213.txt deleted file mode 100644 index 963e6da3ec823e60d8d6c8ba37add1cf52445467..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4213.txt +++ /dev/null @@ -1,69 +0,0 @@ -In graph theory and graph algorithms, a feedback arc set or feedback edge set in a directed graph is a subset of the edges of the graph that contains at least one edge out of every cycle in the graph.
Removing these edges from the graph breaks all of the cycles, producing a directed acyclic graph, an acyclic subgraph of the given graph. The feedback arc set with the fewest possible edges is the minimum feedback arc set and its removal leaves the maximum acyclic subgraph; weighted versions of these optimization problems are also used. If a feedback arc set is minimal, meaning that removing any edge from it produces a subset that is not a feedback arc set, then it has an additional property: reversing all of its edges, rather than removing them, produces a directed acyclic graph. - -Feedback arc sets have applications in circuit analysis, chemical engineering, deadlock resolution, ranked voting, ranking competitors in sporting events, mathematical psychology, ethology, and graph drawing. Finding minimum feedback arc sets and maximum acyclic subgraphs is NP-hard; they can be solved exactly in exponential time, or in fixed-parameter tractable time. In polynomial time, the minimum feedback arc set can be approximated to within a polylogarithmic approximation ratio, and maximum acyclic subgraphs can be approximated to within a constant factor. Both are hard to approximate closer than some constant factor, an inapproximability result that can be strengthened under the unique games conjecture. For tournament graphs, the minimum feedback arc set can be approximated more accurately, and for planar graphs both problems can be solved exactly in polynomial time. - -A closely related problem, the feedback vertex set, is a set of vertices containing at least one vertex from every cycle in a directed or undirected graph. In undirected graphs, the spanning trees are the largest acyclic subgraphs, and the number of edges removed in forming a spanning tree is the circuit rank. - -Several problems involving finding rankings or orderings can be solved by finding a feedback arc set on a tournament graph, a directed graph with one edge between each pair of vertices. Reversing the edges of the feedback arc set produces a directed acyclic graph whose unique topological order can be used as the desired ranking. Applications of this method include the following: - -*In sporting competitions with round-robin play, the outcomes of each game can be recorded by directing an edge from the loser to the winner of each game. Finding a minimum feedback arc set in the resulting graph, reversing its edges, and topologically ordering the result produces a ranking of all of the competitors. Among all of the different ways of choosing a ranking, it minimizes the total number of upsets, games in which a lower-ranked competitor beat a higher-ranked competitor. Many sports use simpler methods for group tournament ranking systems based on points awarded for each game; these methods can provide a constant approximation to the minimum-upset ranking. - -*In primatology and more generally in ethology, dominance hierarchies are often determined by searching for an ordering with the fewest reversals in observed dominance behavior, another form of the minimum feedback arc set problem. - -*In mathematical psychology, it is of interest to determine subjects' rankings of sets of objects according to a given criterion, such as their preference or their perception of size, based on pairwise comparisons between all pairs of objects. The minimum feedback arc set in a tournament graph provides a ranking that disagrees with as few pairwise outcomes as possible.
Alternatively, if these comparisons result in independent probabilities for each pairwise ordering, then the maximum likelihood estimation of the overall ranking can be obtained by converting these probabilities into log-likelihoods and finding a minimum-weight feedback arc set in the resulting tournament. - -*The same maximum-likelihood ordering can be used for seriation, the problem in statistics and exploratory data analysis of arranging elements into a linear ordering, in cases where data is available that provides pairwise comparisons between the elements. - -*In ranked voting, the Kemeny–Young method can be described as seeking an ordering that minimizes the sum, over pairs of candidates, of the number of voters who prefer the opposite ordering for that pair. This can be formulated and solved as a minimum-weight feedback arc set problem, in which the vertices represent candidates, edges are directed to represent the winner of each head-to-head contest, and the cost of each edge represents the number of voters who would be made unhappy by giving a higher ranking to the head-to-head loser. - -Another early application of feedback arc sets concerned the design of sequential logic circuits, in which signals can propagate in cycles through the circuit instead of always progressing from inputs to outputs. In such circuits, a minimum feedback arc set characterizes the number of points at which amplification is necessary to allow the signals to propagate without loss of information. In synchronous circuits made from asynchronous components, synchronization can be achieved by placing clocked gates on the edges of a feedback arc set. Additionally, cutting a circuit on a feedback arc set reduces the remaining circuit to combinational logic, simplifying its analysis, and the size of the feedback arc set controls how much additional analysis is needed to understand the behavior of the circuit across the cut. Similarly, in process flowsheeting in chemical engineering, breaking edges of a process flow diagram on a feedback arc set, and guessing or trying all possibilities for the values on those edges, allows the rest of the process to be analyzed in a systematic way because of its acyclicity. In this application, the idea of breaking edges in this way is called "tearing". - -In layered graph drawing, the vertices of a given directed graph are partitioned into an ordered sequence of subsets (the layers of the drawing), and each subset is placed along a horizontal line of this drawing, with the edges extending upwards and downwards between these layers. In this type of drawing, it is desirable for most or all of the edges to be oriented consistently downwards, rather than mixing upwards and downwards edges, in order for the reachability relations in the drawing to be more visually apparent. This is achieved by finding a minimum or minimal feedback arc set, reversing the edges in that set, and then choosing the partition into layers in a way that is consistent with a topological order of the resulting acyclic graph. Feedback arc sets have also been used for a different subproblem of layered graph drawing, the ordering of vertices within consecutive pairs of layers. - -In deadlock resolution in operating systems, the problem of removing the smallest number of dependencies to break a deadlock can be modeled as one of finding a minimum feedback arc set.
However, because of the computational difficulty of finding this set, and the need for speed within operating system components, heuristics rather than exact algorithms are often used in this application. - -The minimum feedback arc set and maximum acyclic subgraph are equivalent for the purposes of exact optimization, as one is the complement set of the other. However, for parameterized complexity and approximation, they differ, because the analysis used for those kinds of algorithms depends on the size of the solution and not just on the size of the input graph, and the minimum feedback arc set and maximum acyclic subgraph have different sizes from each other. - -A feedback arc set of a given graph $G$ is the same as a feedback vertex set of a directed line graph of $G$. Here, a feedback vertex set is defined analogously to a feedback arc set, as a subset of the vertices of the graph whose deletion would eliminate all cycles. The line graph of a directed graph $G$ has a vertex for each edge of $G$, and an edge for each two-edge path in $G$. In the other direction, the minimum feedback vertex set of a given graph $G$ can be obtained from the solution to a minimum feedback arc set problem on a graph obtained by splitting every vertex of $G$ into two vertices, one for incoming edges and one for outgoing edges. These transformations allow exact algorithms for feedback arc sets and for feedback vertex sets to be converted into each other, with an appropriate translation of their complexity bounds. However, this transformation does not preserve approximation quality for the maximum acyclic subgraph problem. - -In both exact and approximate solutions to the feedback arc set problem, it is sufficient to solve separately each strongly connected component of the given graph, and to break these strongly connected components down even further to their biconnected components by splitting them at articulation vertices. The choice of solution within any one of these subproblems does not affect the others, and the edges that do not appear in any of these components are useless for inclusion in the feedback arc set. When one of these components can be separated into two disconnected subgraphs by removing two vertices, a more complicated decomposition applies, allowing the problem to be split into subproblems derived from the triconnected components of its strongly connected components. - -One way to find the minimum feedback arc set is to search for an ordering of the vertices such that as few edges as possible are directed from later vertices to earlier vertices in the ordering. Searching all permutations of an $n$-vertex graph would take time $O(n!)$, but a dynamic programming method based on the Held–Karp algorithm can find the optimal permutation in time $O(n 2^n)$, also using an exponential amount of space. A divide-and-conquer algorithm that tests all partitions of the vertices into two equal subsets and recurses within each subset can solve the problem in time $O(4^n/\sqrt{n})$, using polynomial space. - -In parameterized complexity, the time for algorithms is measured not just in terms of the size of the input graph, but also in terms of a separate parameter of the graph. In particular, for the minimum feedback arc set problem, the so-called natural parameter is the size of the minimum feedback arc set.
On graphs with $n$ vertices, with natural parameter $k$, the feedback arc set problem can be solved in time $O(n^4 \cdot 4^k \cdot k^3 \cdot k!)$, by transforming it into an equivalent feedback vertex set problem and applying a parameterized feedback vertex set algorithm. Because the exponent of $n$ in this algorithm is the constant $4$, independent of $k$, this algorithm is said to be fixed-parameter tractable. - -Other parameters than the natural parameter have also been studied. A fixed-parameter tractable algorithm using dynamic programming can find minimum feedback arc sets in time $O(2^r m^4\log m)$, where $r$ is the circuit rank of the underlying undirected graph. The circuit rank is an undirected analogue of the feedback arc set, the minimum number of edges that need to be removed from a graph to reduce it to a spanning tree; it is much easier to compute than the minimum feedback arc set. For graphs of treewidth $t$, dynamic programming on a tree decomposition of the graph can find the minimum feedback arc set in time polynomial in the graph size and exponential in $O(t\log t)$. Under the exponential time hypothesis, no better dependence on $t$ is possible. - -Instead of minimizing the size of the feedback arc set, researchers have also looked at minimizing the maximum number of edges removed from any vertex. This variation of the problem can be solved in linear time. All minimal feedback arc sets can be listed by an algorithm with polynomial delay per set. - -The best known polynomial-time approximation algorithm for the feedback arc set has the non-constant approximation ratio $O(\log n\log\log n)$. This means that the size of the feedback arc set that it finds is at most this factor larger than the optimum. Determining whether the feedback arc set problem has a constant-ratio approximation algorithm, or whether a non-constant ratio is necessary, remains an open problem. - -The maximum acyclic subgraph problem has an easy approximation algorithm that achieves an approximation ratio of $\tfrac12$ (see the code sketch below): - -*Fix an arbitrary ordering of the vertices. - -*Partition the edges into two acyclic subgraphs, one consisting of the edges directed consistently with the ordering, and the other consisting of edges directed oppositely to the ordering. - -*Return the larger of the two subgraphs. - -This can be improved by using a greedy algorithm to choose the ordering. This algorithm finds and deletes a vertex whose numbers of incoming and outgoing edges are as far apart as possible, recursively orders the remaining graph, and then places the deleted vertex at one end of the resulting order. For graphs with $m$ edges and $n$ vertices, this produces an acyclic subgraph with $m/2+n/6$ edges, in linear time, giving an approximation ratio of $\tfrac12+\Omega(n/m)$. Another, more complicated, polynomial time approximation algorithm applies to graphs with maximum degree $\Delta$, and finds an acyclic subgraph with $m/2+\Omega(m/\sqrt{\Delta})$ edges, giving an approximation ratio of the form $\tfrac12+\Omega(1/\sqrt{\Delta})$. When $\Delta=3$, the approximation ratio $8/9$ can be achieved. - -In directed planar graphs, the feedback arc set problem is dual to the problem of contracting a set of edges to make the resulting graph strongly connected. This dual problem is polynomially solvable, and therefore the planar minimum feedback arc set problem is as well. It can be solved in time $O(n^{5/2}\log n)$.
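Returning to the ordering-based $\tfrac12$-approximation for the maximum acyclic subgraph described above, the method is short enough to state directly in code. The following Python sketch is an illustrative addition (not part of the source article); it assumes the graph is given as a vertex list and a list of directed edges:

```python
def half_approx_acyclic_subgraph(vertices, edges):
    """Return an acyclic edge set containing at least half of the edges.
    Both the 'forward' and 'backward' edge sets are acyclic, because every
    edge in each set is consistent with one fixed ordering of the vertices."""
    position = {v: i for i, v in enumerate(vertices)}  # arbitrary fixed order
    forward = [(u, v) for (u, v) in edges if position[u] < position[v]]
    backward = [(u, v) for (u, v) in edges if position[u] > position[v]]
    return forward if len(forward) >= len(backward) else backward

# Example: a directed triangle; any acyclic subgraph must drop one edge.
print(half_approx_acyclic_subgraph([0, 1, 2], [(0, 1), (1, 2), (2, 0)]))
# -> [(0, 1), (1, 2)]
```

The complement of the returned edge set is then a feedback arc set containing at most half of the edges.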
A weighted version of the problem can be solved in time $O(n^3)$, or when the weights are positive integers that are at most a number $N$, in time $O(n^{5/2}\log (nN))$. These planar algorithms can be extended to the graphs that do not have the utility graph $K_{3,3}$ as a graph minor, using the fact that the triconnected components of these graphs are either planar or of bounded size. Planar graphs have also been generalized in a different way to a class of directed graphs called weakly acyclic digraphs, defined by the integrality of a certain polytope associated with their feedback arc sets. Every planar directed graph is weakly acyclic in this sense, and the feedback arc set problem can be solved in polynomial time for all weakly acyclic digraphs. - -The reducible flow graphs are another class of directed graphs on which the feedback arc set problem may be solved in polynomial time. These graphs describe the flow of control in structured programs for many programming languages. Although structured programs often produce planar directed flow graphs, the definition of reducibility does not require the graph to be planar. - -When the minimum feedback arc set problem is restricted to tournaments, it has a polynomial-time approximation scheme, which generalizes to a weighted version of the problem. A subexponential parameterized algorithm for weighted feedback arc sets on tournaments is also known. The maximum acyclic subgraph problem for dense graphs also has a polynomial-time approximation scheme. Its main ideas are to apply randomized rounding to a linear programming relaxation of the problem, and to derandomize the resulting algorithm using walks on expander graphs. - -In order to apply the theory of NP-completeness to the minimum feedback arc set, it is necessary to modify the problem from being an optimization problem (how few edges can be removed to break all cycles) to an equivalent decision version, with a yes or no answer (is it possible to remove $k$ edges). Thus, the decision version of the feedback arc set problem takes as input both a directed graph and a number $k$. It asks whether all cycles can be broken by removing at most $k$ edges, or equivalently whether there is an acyclic subgraph with at least $|E(G)|-k$ edges. This problem is NP-complete, implying that neither it nor the optimization problem are expected to have polynomial time algorithms. It was one of Richard M. Karp's original set of 21 NP-complete problems; its NP-completeness was proved by Karp and Eugene Lawler by showing that inputs for another hard problem, the vertex cover problem, could be transformed ("reduced") into equivalent inputs to the feedback arc set decision problem. - -Some NP-complete problems can become easier when their inputs are restricted to special cases. But for the most important special case of the feedback arc set problem, the case of tournaments, the problem remains NP-complete. - -The complexity class APX is defined as consisting of optimization problems that have a polynomial time approximation algorithm that achieves a constant approximation ratio. Although such approximations are not known for the feedback arc set problem, the problem is known to be APX-hard, meaning that accurate approximations for it could be used to achieve similarly accurate approximations for all other problems in APX. As a consequence of its hardness proof, unless P = NP, it has no polynomial time approximation ratio better than 1.3606.
This is the same threshold for hardness of approximation that is known for vertex cover, and the proof uses the Karp–Lawler reduction from vertex cover to feedback arc set, which preserves the quality of approximations. By a different reduction, the maximum acyclic subgraph problem is also APX-hard, and NP-hard to approximate to within a factor of 65/66 of optimal. - -The hardness of approximation of these problems has also been studied under unproven computational hardness assumptions that are standard in computational complexity theory but stronger than P ≠ NP. If the unique games conjecture is true, then the minimum feedback arc set problem is hard to approximate in polynomial time to within any constant factor, and the maximum acyclic subgraph problem is hard to approximate to within a factor of $\tfrac12+\varepsilon$, for every $\varepsilon>0$. Beyond polynomial time for approximation algorithms, if the exponential time hypothesis is true, then for every $\varepsilon>0$ the minimum feedback arc set does not have an approximation within a factor $\tfrac76-\varepsilon$ that can be computed in the subexponential time bound $O(2^{n^{1-\varepsilon}})$. - -In planar directed graphs, the feedback arc set problem obeys a min-max theorem: the minimum size of a feedback arc set equals the maximum number of edge-disjoint directed cycles that can be found in the graph. This is not true for some other graphs; for instance the first illustration shows a directed version of the non-planar graph $K_{3,3}$ in which the minimum size of a feedback arc set is two, while the maximum number of edge-disjoint directed cycles is only one. - -Every tournament graph has a Hamiltonian path, and the Hamiltonian paths are in one-to-one correspondence with the minimal feedback arc sets, each disjoint from its corresponding path. The Hamiltonian path for a feedback arc set is found by reversing its arcs and finding a topological order of the resulting acyclic tournament. Every consecutive pair of the order must be disjoint from the feedback arc set, because otherwise one could find a smaller feedback arc set by reversing that pair. Therefore, this ordering gives a path through arcs of the original tournament, covering all vertices. Conversely, from any Hamiltonian path, the set of edges that connect later vertices in the path to earlier ones forms a feedback arc set. It is minimal, because each of its edges belongs to a cycle with the Hamiltonian path edges that is disjoint from all other such cycles. In a tournament, it may be the case that the minimum feedback arc set and maximum acyclic subgraph are both close to half the edges. More precisely, every tournament graph has a feedback arc set of size $\tbinom{n}{2}/2-\Omega(n^{3/2})$, and some tournaments require size $\tbinom{n}{2}/2-O(n^{3/2})$. For almost all tournaments, the size is at least $\tbinom{n}{2}/2 - 1.73n^{3/2}$. Every directed acyclic graph $D$ can be embedded as a subgraph of a larger tournament graph, in such a way that $D$ is the unique minimum feedback arc set of the tournament. The size of this tournament has been defined as the "reversing number" of $D$, and among directed acyclic graphs with the same number of vertices it is largest when $D$ is itself an (acyclic) tournament. - -A directed graph has an Euler tour whenever it is strongly connected and each vertex has equal numbers of incoming and outgoing edges. For such a graph, with $m$ edges and $n$ vertices, the size of a minimum feedback arc set is always at least $(m^2+mn)/(2n^2)$.
There are infinitely many Eulerian directed graphs for which this bound is tight. If a directed graph has $n$ vertices, with at most three edges per vertex, then it has a feedback arc set of at most $n/3$ edges, and some graphs require this many. If a directed graph has $m$ edges, with at most four edges per vertex, then it has a feedback arc set of at most $m/3$ edges, and some graphs require this many. diff --git a/wiki/wikipedia/4214.txt b/wiki/wikipedia/4214.txt deleted file mode 100644 index 89ea9ea510c361dcbd2c2aa7c65e1e674bcd196f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4214.txt +++ /dev/null @@ -1,11 +0,0 @@ -Logic for Computable Functions (LCF) is an interactive automated theorem prover developed at Stanford and Edinburgh by Robin Milner and collaborators in the early 1970s, based on the theoretical foundation of the logic of computable functions previously proposed by Dana Scott. Work on the LCF system introduced the general-purpose programming language ML to allow users to write theorem-proving tactics, supporting algebraic data types, parametric polymorphism, abstract data types, and exceptions. - -Theorems in the system are terms of a special "theorem" abstract data type. The general mechanism of abstract data types of ML ensures that theorems are derived using only the inference rules given by the operations of the theorem abstract type. Users can write arbitrarily complex ML programs to compute theorems; the validity of theorems does not depend on the complexity of such programs, but follows from the soundness of the abstract data type implementation and the correctness of the ML compiler. - -The LCF approach provides trustworthiness similar to that of systems that generate explicit proof certificates, but without the need to store proof objects in memory. The theorem data type can easily be implemented to optionally store proof objects, depending on the system's run-time configuration, so it generalizes the basic proof-generation approach. The design decision to use a general-purpose programming language for developing theorems means that, depending on the complexity of programs written, it is possible to use the same language to write step-by-step proofs, decision procedures, or theorem provers. - -The implementation of the underlying ML compiler adds to the trusted computing base. Work on CakeML resulted in a formally verified ML compiler, alleviating some of these concerns. - -Theorem proving often benefits from decision procedures and theorem proving algorithms, whose correctness has been extensively analyzed. A straightforward way of implementing these procedures in an LCF approach requires such procedures to always derive outcomes from the axioms, lemmas, and inference rules of the system, as opposed to directly computing the outcome. A potentially more efficient approach is to use reflection to prove that a function operating on formulas always gives a correct result. - -Among subsequent implementations is Cambridge LCF. Later systems simplified the logic to use total instead of partial functions, leading to HOL, HOL Light, and the Isabelle proof assistant, which supports various logics. As of 2019, the Isabelle proof assistant still contains an implementation of an LCF logic, Isabelle/LCF.
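The "theorem as abstract data type" discipline described above is small enough to sketch. The following toy kernel in Python is purely illustrative (the rule names and the formula encoding are invented here, and Python cannot truly seal a type the way ML's module system can; the sketch only mimics the discipline):

class Theorem:
    _key = object()  # capability held only by the inference rules below

    def __init__(self, key, conclusion):
        if key is not Theorem._key:
            raise ValueError("theorems can be built only by inference rules")
        self.conclusion = conclusion  # a formula, encoded as nested tuples

    @staticmethod
    def refl(t):
        # axiom scheme:  |- t = t
        return Theorem(Theorem._key, ("eq", t, t))

    @staticmethod
    def sym(th):
        # rule: from |- a = b conclude |- b = a
        tag, a, b = th.conclusion
        assert tag == "eq"
        return Theorem(Theorem._key, ("eq", b, a))

print(Theorem.sym(Theorem.refl("x")).conclusion)  # ('eq', 'x', 'x')

Arbitrarily complex user code can manipulate Theorem values, but every value of the type still traces back to refl and sym, which is exactly the soundness argument the article describes.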
diff --git a/wiki/wikipedia/4215.txt b/wiki/wikipedia/4215.txt deleted file mode 100644 index 2d1802f8d0f9a0df28dfcc0380900ec9590496c7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4215.txt +++ /dev/null @@ -1,75 +0,0 @@ -In mathematics, a binary operation is commutative if changing the order of the operands does not change the result. It is a fundamental property of many binary operations, and many mathematical proofs depend on it. Most familiar as the name of the property that says something like "3 + 4 = 4 + 3" or "2 × 5 = 5 × 2", the property can also be used in more advanced settings. The name is needed because there are operations, such as division and subtraction, that do not have it (for example, "3 − 5 ≠ 5 − 3"); such operations are not commutative, and so are referred to as noncommutative operations. The idea that simple operations, such as the multiplication and addition of numbers, are commutative was for many years implicitly assumed. Thus, this property was not named until the 19th century, when mathematics started to become formalized. - -A binary operation $*$ on a set $S$ is called commutative if -$$ -x * y = y * x\qquad\mbox{for all }x,y\in S. -$$ - -An operation that does not satisfy the above property is called non-commutative. - -One says that x commutes with y or that x and y commute under $*$ if -$$ - x * y = y * x. -$$ - -In other words, an operation is commutative if every pair of elements commute. - -A binary function $f \colon A \times A \to B$ is sometimes called commutative if -$$ -f(x, y) = f(y, x)\qquad\mbox{for all }x,y\in A. -$$ Such a function is more commonly called a symmetric function. - -*Putting on socks resembles a commutative operation since which sock is put on first is unimportant. Either way, the result (having both socks on), is the same. In contrast, putting on underwear and trousers is not commutative. - -*The commutativity of addition is observed when paying for an item with cash. Regardless of the order the bills are handed over in, they always give the same total. - -Two well-known examples of commutative binary operations are the addition and multiplication of numbers. The first recorded use of the term was in an 1814 memoir by François Servois, which used the word commutatives when describing functions that have what is now called the commutative property. The word is a combination of the French word commuter meaning "to substitute or switch" and the suffix -ative meaning "tending to", so the word literally means "tending to substitute or switch". The term then appeared in English in 1838, in Duncan Farquharson Gregory's article entitled "On the real nature of symbolical algebra" published in 1840 in the Transactions of the Royal Society of Edinburgh. - -In truth-functional propositional logic, commutation, or commutativity refer to two valid rules of replacement. The rules allow one to transpose propositional variables within logical expressions in logical proofs. The rules are: -$$ -(P \lor Q) \Leftrightarrow (Q \lor P) -$$ - -and -$$ -(P \land Q) \Leftrightarrow (Q \land P) -$$ - -where "$\Leftrightarrow$" is a metalogical symbol representing "can be replaced in a proof with". - -Commutativity is a property of some logical connectives of truth functional propositional logic. The following logical equivalences demonstrate that commutativity is a property of particular connectives. The following are truth-functional tautologies.
- -;Commutativity of conjunction:$(P \land Q) \leftrightarrow (Q \land P)$ - -;Commutativity of disjunction:$(P \lor Q) \leftrightarrow (Q \lor P)$ - -;Commutativity of implication (also called the law of permutation):$(P \to (Q \to R)) \leftrightarrow (Q \to (P \to R))$ - -;Commutativity of equivalence (also called the complete commutative law of equivalence):$(P \leftrightarrow Q) \leftrightarrow (Q \leftrightarrow P)$ - -In group and set theory, many algebraic structures are called commutative when certain operations satisfy the commutative property. In higher branches of mathematics, such as analysis and linear algebra, the commutativity of well-known operations (such as addition and multiplication on real and complex numbers) is often used (or implicitly assumed) in proofs. - -* A commutative semigroup is a set endowed with a total, associative and commutative operation. - -* If the operation additionally has an identity element, we have a commutative monoid. - -* An abelian group, or commutative group, is a group whose group operation is commutative. - -* A commutative ring is a ring whose multiplication is commutative. (Addition in a ring is always commutative.) - -* In a field both addition and multiplication are commutative. - -The associative property is closely related to the commutative property. The associative property of an expression containing two or more occurrences of the same operator states that the order in which the operations are performed does not affect the final result, as long as the order of terms does not change. In contrast, the commutative property states that the order of the terms does not affect the final result. - -Most commutative operations encountered in practice are also associative. However, commutativity does not imply associativity. A counterexample is the function -$$ -f(x, y) = \frac{x + y}{2}, -$$ - -which is clearly commutative (interchanging x and y does not affect the result), but it is not associative (since, for example, $f(-4, f(0, +4)) = -1$ but $f(f(-4, 0), +4) = +1$). More such examples may be found in commutative non-associative magmas. - -Some forms of symmetry can be directly linked to commutativity. When a commutative operation is written as a binary function $z=f(x,y),$ this function is called a symmetric function, and its graph in three-dimensional space is symmetric across the plane $y=x$. For example, if the function f is defined as $f(x,y)=x+y$ then $f$ is a symmetric function. - -For relations, a symmetric relation is analogous to a commutative operation, in that if a relation R is symmetric, then $a R b \Leftrightarrow b R a$. - -In quantum mechanics as formulated by Schrödinger, physical variables are represented by linear operators such as $x$ (meaning multiply by $x$), and $\frac{d}{dx}$. These two operators do not commute as may be seen by considering the effect of their compositions $x \frac{d}{dx}$ and $\frac{d}{dx} x$ (also called products of operators) on a one-dimensional wave function $\psi(x)$: -$$ - x\cdot {\mathrm{d}\over \mathrm{d}x}\psi = x\cdot \psi' \ \neq \ \psi + x\cdot \psi' = {\mathrm{d}\over \mathrm{d}x}\left( x\cdot \psi \right) -$$ - -According to the uncertainty principle of Heisenberg, if the two operators representing a pair of variables do not commute, then that pair of variables are mutually complementary, which means they cannot be simultaneously measured or known precisely.
For example, the position and the linear momentum in the $x$-direction of a particle are represented by the operators $x$ and $-i \hbar \frac{\partial}{\partial x}$, respectively (where $\hbar$ is the reduced Planck constant). This is the same example except for the constant $-i \hbar$, so again the operators do not commute and the physical meaning is that the position and linear momentum in a given direction are complementary. diff --git a/wiki/wikipedia/4216.txt b/wiki/wikipedia/4216.txt deleted file mode 100644 index 61d2e692f2ba041c2802752f9172f494d6b79e11..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4216.txt +++ /dev/null @@ -1,5 +0,0 @@ -In geometry, a square trisection consists of cutting a square into pieces that can be rearranged to form three identical squares. - -The dissection of a square into three congruent partitions is a geometrical problem that dates back to the Islamic Golden Age. Craftsmen who mastered the art of zellige needed innovative techniques to achieve their fabulous mosaics with complex geometric figures. The first solution to this problem was proposed in the 10th century AD by the Persian mathematician Abu'l-Wafa' (940-998) in his treatise "On the geometric constructions necessary for the artisan". Abu'l-Wafa' also used his dissection to demonstrate the Pythagorean theorem. This geometrical proof of Pythagoras' theorem would be rediscovered in the years 1835 - 1840 by Henry Perigal and published in 1875. - -The beauty of a dissection depends on several parameters. However, it is usual to search for solutions with the minimum number of parts. Far from being minimal, the square trisection proposed by Abu'l-Wafa' uses 9 pieces. In the 14th century Abu Bakr al-Khalil gave two solutions, one of which uses 8 pieces. In the late 17th century Jacques Ozanam came back to this issue and in the 19th century, solutions using 8 and 7 pieces were found, including one given by the mathematician Édouard Lucas. In 1891 Henry Perigal published the first known solution with only 6 pieces (see illustration below). Nowadays, new dissections are still found (see illustration above) and the conjecture that 6 is the minimal number of necessary pieces remains unproved. diff --git a/wiki/wikipedia/4217.txt b/wiki/wikipedia/4217.txt deleted file mode 100644 index ee3f3c0f6c792a8e5cc3ccd177a7cc20135af5e1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4217.txt +++ /dev/null @@ -1,5 +0,0 @@ -Fred Irvin Diamond (born November 19, 1964) is a mathematician, known for his role in proving the modularity theorem for elliptic curves. His research interest is in modular forms and Galois representations. - -Diamond received his B.A. from the University of Michigan in 1984, and received his Ph.D. in mathematics from Princeton University in 1988 as a doctoral student of Andrew Wiles. He has held positions at Brandeis University and Rutgers University, and is currently a professor at King's College London. - -Diamond is the author of several research papers, and is also a coauthor along with Jerry Shurman of A First Course in Modular Forms, in the Graduate Texts in Mathematics series published by Springer-Verlag. diff --git a/wiki/wikipedia/4218.txt b/wiki/wikipedia/4218.txt deleted file mode 100644 index 4836c6830855856d291185f772797ba16290cb3f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4218.txt +++ /dev/null @@ -1,36 +0,0 @@ -The wine/water paradox is an apparent paradox in probability theory.
It is stated by Michael Deakin as follows: a quantity of liquid is known to be a mixture of wine and water, and the ratio $x$ of wine to water is known only to lie between $\tfrac{1}{3}$ and $3$; what is the probability $P^*$ that $x \le 2$? - -The core of the paradox is in finding consistent and justifiable simultaneous prior distributions for $x$ and $\frac{1}{x}$. - -We do not know $x$, the wine to water ratio. We only know that it lies in an interval between the minimum of one quarter wine over three quarters water on one end (i.e. 25% wine), to the maximum of three quarters wine over one quarter water on the other (i.e. 75% wine). - -Now, making use of the principle of indifference, we may assume that $x$ is uniformly distributed. Therefore, the chance of finding the ratio below $a$ is - -$\operatorname{Prob}\{x \le a\} = \frac{a-x_\mathrm{min}}{x_\mathrm{max}-x_\mathrm{min}} = \frac{1}{8} (3a - 1).$ - -This is the linearly growing function which is $0$ at the end point $\frac{1/4}{3/4} = \frac{1}{3}$ and $1$ at the end point $\frac{3/4}{1/4} = 3$. - -With $a = 2$, as in the example of the original formulation above, we conclude that - -$\operatorname{Prob}\{x \le 2\} = \frac{1}{8}(3\cdot 2 - 1) = \frac{5}{8}$. - -Now consider $y = \frac{1}{x}$, the inverted ratio of water to wine. It lies between the inverted bounds, which are again $\frac{1}{3}$ and $3$. - -Again using the principle of indifference, we get - -$\operatorname{Prob}\{y \ge b\} = \frac{x_\mathrm{max}(1 - x_\mathrm{min}b)}{x_\mathrm{max} - x_\mathrm{min}} = \frac{3}{8} (3 - b)$. - -This is the function which is $0$ at the end point $3$ and $1$ at the end point $\frac{1}{3}$. - -Now taking $b = \frac{1}{a} = \frac{1}{2}$, we conclude that - -$\operatorname{Prob}\left\{y \ge \tfrac{1}{2}\right\} = \frac{3}{8}\left(3 - \frac{1}{2}\right) = \frac{15}{16} = \frac{3}{2}\cdot\frac{5}{8}$. - -The second probability exceeds the first by a factor of $\frac{x_\mathrm{max}}{a} \ge 1$, which for our example numbers is $\frac{3}{2}$. - -Finally, since $y = \frac{1}{x}$, the events $\{x \le 2\}$ and $\{y \ge \tfrac{1}{2}\}$ are one and the same, yet the two applications of the principle of indifference assign them different probabilities: -$$ -\frac{5}{8} = \operatorname{Prob}\{x \le 2\} = P^* = \operatorname{Prob}\left\{y \ge \frac{1}{2}\right\} = \frac{15}{16} > \frac{5}{8}, -$$ - -a contradiction. diff --git a/wiki/wikipedia/4219.txt b/wiki/wikipedia/4219.txt deleted file mode 100644 index 74860abe3103c39369113728824d62a8e2b3203a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4219.txt +++ /dev/null @@ -1,23 +0,0 @@ -In mathematics, more specifically in topology, an open map is a function between two topological spaces that maps open sets to open sets. - -That is, a function $f : X \to Y$ is open if for any open set $U$ in $X,$ the image $f(U)$ is open in $Y.$ - -Likewise, a closed map is a function that maps closed sets to closed sets. - -A map may be open, closed, both, or neither; in particular, an open map need not be closed and vice versa. - -Open and closed maps are not necessarily continuous. The composition of two relatively open maps need not be relatively open, and similarly, the composition of two relatively closed maps need not be relatively closed. - -If $f : X \to Y$ is strongly open (respectively, strongly closed) and $g : Y \to Z$ is relatively open (respectively, relatively closed) then $g \circ f : X \to Z$ is relatively open (respectively, relatively closed). - -Let $f : X \to Y$ be a map. - -Given any subset $T \subseteq Y,$ if $f : X \to Y$ is a relatively open (respectively, relatively closed, strongly open, strongly closed, continuous, surjective) map then the same is true of its restriction -$$ -f\big\vert_{f^{-1}(T)} ~:~ f^{-1}(T) \to T -$$ - -to the $f$-saturated subset $f^{-1}(T).$ - -The categorical sum of two open maps is open, and that of two closed maps is closed.
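As a concrete illustration of the claim above that an open map need not be closed (a standard example, added here for the reader): the projection $\pi : \mathbb{R}^2 \to \mathbb{R}$ given by $\pi(x, y) = x$ is open, since it maps every open disk onto an open interval, but it is not closed, since the hyperbola $H = \{(x, y) : xy = 1\}$ is closed in $\mathbb{R}^2$ while $\pi(H) = \mathbb{R} \setminus \{0\}$ is not closed in $\mathbb{R}$.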
diff --git a/wiki/wikipedia/422.txt b/wiki/wikipedia/422.txt deleted file mode 100644 index c68da9d308bebe2f116b2e7a8f070abc59ccc7df..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/422.txt +++ /dev/null @@ -1,187 +0,0 @@ -In computer science, software transactional memory (STM) is a concurrency control mechanism analogous to database transactions for controlling access to shared memory in concurrent computing. It is an alternative to lock-based synchronization. STM is a strategy implemented in software, rather than as a hardware component. A transaction in this context occurs when a piece of code executes a series of reads and writes to shared memory. These reads and writes logically occur at a single instant in time; intermediate states are not visible to other (successful) transactions. The idea of providing hardware support for transactions originated in a 1986 paper by Tom Knight. The idea was popularized by Maurice Herlihy and J. Eliot B. Moss. In 1995 Nir Shavit and Dan Touitou extended this idea to software-only transactional memory (STM). Since 2005, STM has been the focus of intense research and support for practical implementations is growing. - -Unlike the locking techniques used in most modern multithreaded applications, STM is often very optimistic: a thread completes modifications to shared memory without regard for what other threads might be doing, recording every read and write that it is performing in a log. Instead of placing the onus on the writer to make sure it does not adversely affect other operations in progress, it is placed on the reader, who after completing an entire transaction verifies that other threads have not concurrently made changes to memory that it accessed in the past. This final operation, in which the changes of a transaction are validated and, if validation is successful, made permanent, is called a commit. A transaction may also abort at any time, causing all of its prior changes to be rolled back or undone. If a transaction cannot be committed due to conflicting changes, it is typically aborted and re-executed from the beginning until it succeeds. - -The benefit of this optimistic approach is increased concurrency: no thread needs to wait for access to a resource, and different threads can safely and simultaneously modify disjoint parts of a data structure that would normally be protected under the same lock. - -However, in practice, STM systems also suffer a performance hit compared to fine-grained lock-based systems on small numbers of processors (1 to 4 depending on the application). This is due primarily to the overhead associated with maintaining the log and the time spent committing transactions. Even in this case performance is typically no worse than twice as slow. Advocates of STM believe this penalty is justified by the conceptual benefits of STM. - -Theoretically, the worst case space and time complexity of n concurrent transactions is O(n). Actual needs depend on implementation details (one can make transactions fail early enough to avoid overhead), but there will also be cases, albeit rare, where lock-based algorithms have better time complexity than software transactional memory. - -In addition to its performance benefits, STM greatly simplifies conceptual understanding of multithreaded programs and helps make programs more maintainable by working in harmony with existing high-level abstractions such as objects and modules.
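The read-log/write-log discipline just described is small enough to sketch concretely. The following toy Python implementation is illustrative only (all names are invented for this sketch, a single global lock serializes commits, and, as discussed further below, a real STM must also cope with transactions that observe inconsistent state before validation catches them):

import threading

_commit_lock = threading.Lock()  # serializes commit-time validation in this toy

class TVar:
    # A transactional variable: a current value plus a version number.
    def __init__(self, value):
        self.value, self.version = value, 0

def atomically(action):
    # Run action(read, write) optimistically: log reads, buffer writes,
    # validate the read log at commit time, and re-execute on conflict.
    while True:
        reads, writes = {}, {}
        def read(tv):
            if tv in writes:                  # read-your-own-writes
                return writes[tv]
            reads.setdefault(tv, tv.version)  # log the version first seen
            return tv.value
        def write(tv, value):
            writes[tv] = value
        result = action(read, write)
        with _commit_lock:
            if all(tv.version == v for tv, v in reads.items()):
                for tv, value in writes.items():  # commit: publish and bump
                    tv.value, tv.version = value, tv.version + 1
                return result
        # validation failed: another transaction committed first, so retry

# Example: atomically move 10 units between two accounts.
a, b = TVar(100), TVar(0)
atomically(lambda read, write: (write(a, read(a) - 10), write(b, read(b) + 10)))
print(a.value, b.value)  # 90 10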
Lock-based programming has a number of well-known problems that frequently arise in practice: - -* Locking requires thinking about overlapping operations and partial operations in distantly separated and seemingly unrelated sections of code, a task which is very difficult and error-prone. - -* Locking requires programmers to adopt a locking policy to prevent deadlock, livelock, and other failures to make progress. Such policies are often informally enforced and fallible, and when these issues arise they are insidiously difficult to reproduce and debug. - -* Locking can lead to priority inversion, a phenomenon where a high-priority thread is forced to wait for a low-priority thread holding exclusive access to a resource that it needs. - -In contrast, the concept of a memory transaction is much simpler, because each transaction can be viewed in isolation as a single-threaded computation. Deadlock and livelock are either prevented entirely or handled by an external transaction manager; the programmer need hardly worry about it. Priority inversion can still be an issue, but high-priority transactions can abort conflicting lower priority transactions that have not already committed. - -On the other hand, the need to abort failed transactions also places limitations on the behavior of transactions: they cannot perform any operation that cannot be undone, including most I/O. Such limitations are typically overcome in practice by creating buffers that queue up the irreversible operations and perform them at a later time outside of any transaction. In Haskell, this limitation is enforced at compile time by the type system. - -In 2005, Tim Harris, Simon Marlow, Simon Peyton Jones, and Maurice Herlihy described an STM system built on Concurrent Haskell that enables arbitrary atomic operations to be composed into larger atomic operations, a useful concept impossible with lock-based programming. To quote the authors: - -
    - -Perhaps the most fundamental objection [...] is that lock-based programs do not compose: correct fragments may fail when combined. For example, consider a hash table with thread-safe insert and delete operations. Now suppose that we want to delete one item A from table t1, and insert it into table t2; but the intermediate state (in which neither table contains the item) must not be visible to other threads. Unless the implementor of the hash table anticipates this need, there is simply no way to satisfy this requirement. [...] In short, operations that are individually correct (insert, delete) cannot be composed into larger correct operations.
    - --Tim Harris et al., "Composable Memory Transactions", Section 2: Background, pg.2 - -
- -With STM, this problem is simple to solve: simply wrapping two operations in a transaction makes the combined operation atomic. The only sticking point is that it is unclear to the caller, who is unaware of the implementation details of the component methods, when it should attempt to re-execute the transaction if it fails. In response, the authors proposed a retry command which uses the transaction log generated by the failed transaction to determine which memory cells it read, and automatically retries the transaction when one of these cells is modified, based on the logic that the transaction will not behave differently until at least one such value is changed. - -The authors also proposed a mechanism for composition of alternatives, the orElse function. It runs one transaction and, if that transaction does a retry, runs a second one. If both retry, it tries them both again as soon as a relevant change is made. This facility, comparable to features such as the POSIX networking select() call, allows the caller to wait on any one of a number of events simultaneously. It also simplifies programming interfaces, for example by providing a simple mechanism to convert between blocking and nonblocking operations. - -This scheme has been implemented in the Glasgow Haskell Compiler. - -The conceptual simplicity of STMs enables them to be exposed to the programmer using relatively simple language syntax. Tim Harris and Keir Fraser's "Language Support for Lightweight Transactions" proposed the idea of using the classical conditional critical region (CCR) to represent transactions. In its simplest form, this is just an "atomic block", a block of code which logically occurs at a single instant: - -// Insert a node into a doubly linked list atomically - -atomic { - -newNode->prev = node; - -newNode->next = node->next; - -node->next->prev = newNode; - -node->next = newNode; - -} - -When the end of the block is reached, the transaction is committed if possible, or else aborted and retried. (This is simply a conceptual example, not correct code. For example, it behaves incorrectly if node is deleted from the list during the transaction.) - -CCRs also permit a guard condition, which enables a transaction to wait until it has work to do: - -atomic (queueSize > 0) { - -remove item from queue and use it - -} - -If the condition is not satisfied, the transaction manager will wait until another transaction has made a commit that affects the condition before retrying. This loose coupling between producers and consumers enhances modularity compared to explicit signaling between threads (see "Composable Memory Transactions" and Zhang et al. 2006). Serializability is the basis for the correctness of concurrent transactions and of transactional memory. Tens of STM articles on "commit order" (CO) have already been published, and the technique is encumbered by a number of patents. - -With CO the desired serializability property is achieved by committing transactions only in a chronological order that is compatible with the precedence order (as determined by the chronological orders of operations in conflicts) of the respective transactions. To enforce CO, some implementation of the generic local CO algorithm needs to be utilized. One patent abstract describes a general implementation of the algorithm with a pre-determined commit order (this falls into the category of "CO generic algorithm with real-time constraints").
- -One problem with implementing software transactional memory with optimistic reading is that it is possible for an incomplete transaction to read inconsistent state (that is, to read a mixture of old and new values written by another transaction). Such a transaction is doomed to abort if it ever tries to commit, so this does not violate the consistency condition enforced by the transactional system, but it is possible for this "temporary" inconsistent state to cause a transaction to trigger a fatal exceptional condition such as a segmentation fault or even enter an endless loop, as in the following contrived example from Figure 4 of "Language Support for Lightweight Transactions", in which transaction A spins while the two variables differ and transaction B increments both: - -Transaction A: atomic { if (x != y) while (true) { } } - -Transaction B: atomic { x++; y++; } - -Provided x=y initially, neither transaction above alters this invariant, but it is possible that transaction A will read x after transaction B updates it but read y before transaction B updates it, causing it to enter an infinite loop. The usual strategy for dealing with this is to intercept any fatal exceptions and abort any transaction that is not valid. - -One way to deal with these issues is to detect transactions that execute illegal operations or fail to terminate and abort them cleanly; another approach is the transactional locking scheme. - -A number of STM implementations (on varying scales of quality and stability) have been released, many under liberal licenses. These include: - -* , a word-based STM. - -* The (LibLTX), a C implementation by Robert Ennals focusing on efficiency and based on his papers "Software Transactional Memory Should Not Be Obstruction-Free" and "Cache Sensitive Software Transactional Memory". - -* , an open-source implementation in C by Duilio Protti based on "Composable Memory Transactions". The implementation also includes a . - -* is a prototype that brings the "atomic" keyword to C/C++ by instrumenting the assembler output of the compiler. - -* implements STM for C/C++ directly in a compiler (the Intel Compiler) for Linux or Windows producing 32- or 64 bit code for Intel or AMD processors. Implements atomic keyword as well as providing ways to decorate (declspec) function definitions to control/authorize use in atomic sections. A substantial implementation in a compiler with the stated purpose to enable large scale experimentation in any size C/C++ program. Intel has made four research releases of this special experimental version of its product compiler. - -* An implementation of STM in C, based on shared memory mapping. It is for sharing memory between threads and/or processes (not just between threads within a process) with transactional semantics. The multi-threaded version of its memory allocator is in C++. - -* An implementation of STM in C, based on TL2 but with many extensions and optimizations. - -* The TL2 lock-based STM from the research group at Sun Microsystems Laboratories, as featured in the DISC 2006 article "Transactional Locking II". - -* , based on ideas from his papers "Language Support for Lightweight Transactions", "Practical Lock Freedom", and an upcoming unpublished work. - -* The University of Rochester STM written by a team of researchers led by Michael L. Scott. - -* now supports STM for C/C++ directly in the compiler. The feature is still listed as "experimental", but may still provide the necessary functionality for testing. - -* STM is part of the picotm transaction framework for C - -* A strict and mostly obstruction-free STM for .NET, written in C#.
Features include: conditional transactions, commutable (low conflict) operations, transactional collection types, and automatic generation of transactional proxy-subclasses for POCO objects. - -* A pure C#, open-source, lightweight software transactional memory API. - -* SXM, an implementation of transactions for C# by Microsoft Research (discontinued). - -* , an open-source implementation in C by Duilio Protti based on "Composable Memory Transactions". The implementation also includes a . - -* , .NET Software Transactional Memory written entirely in C# offering truly nested transactions and even integrating with System.Transactions. - -* A Verification-Oriented Model Implementation of an STM in C#. - -* cf. Java implementations. - -* A pure C# implementation of software transactional memory. - -* has STM support built into the core language - -* A multi-platform STM implementation for Common Lisp. - -* An open-source, actively maintained concurrency library providing software, hardware and hybrid memory transactions for Common Lisp. - -* Mnesia, a distributed, transactional, in-memory DBMS built into Erlang/OTP, that performs the role of STM; perhaps the oldest implementation of STM in the wild. - -* F# has them through FSharpX - sample at F# - -* - The framework contains support for STM leveraging the Java implementation. - -* The library, as featured in , is part of the Haskell Platform. - -* The library, a distributed STM, based on the above library. - -* 's implementation of AtomJava. - -* implements the concept of proposed by João Cachopo and António Rito Silva, members of the . Beginning from version 2.0, JVSTM is completely lock-free. - -* A runtime environment for Java Software Transactional Memory using byte code manipulation. - -* Is a Java 1.6+ based Software Transactional Memory (STM) implementation that uses Multi Version Concurrency Control (MVCC) as its concurrency control mechanism. - -* Sun Lab's Dynamic Software Transactional Memory Library - -* is an open source implementation for Java and .NET. It can be turned into a Distributed STM through an extension mechanism. Other extensions allow logging, change notification, and persistence. - -* - A library-based STM written in Scala that additionally provides a Java-focused API to allow use with Runnable and Callable objects. - -* implements Distributed Software Transactional Memory to web browsers with a single NodeJS server to validate the effects of transactions. - -* , a concurrent programming library of OCaml, offers STM (originally ) as a module. Just like any other components in this library, the STM module can be used uniformly with VM-level threads, system threads and processes. - -* STM for Perl 6 has been implemented in Pugs via the Glasgow Haskell Compiler's STM library. - -* - Armin Rigo explains his patch to CPython in . - -* announcement from Armin Rigo for PyPy. - -* is an implementation of the Ruby interpreter built on top of the GemStone/S virtual machine - -* is a library providing concurrent features for Ruby, including STM - -* is an implementation for Ractor and Thread on Ruby 3.0. - -* – A draft proposal along with reference implementation CCSTM to be included in the Scala standard library - -* Akka STM – The framework contains support for STM in both Scala & Java - -* – Manchester University Transactions for Scala - -* – Implementation in ZIO inspired by STM API in Haskell. - -* – An extension of with a software transactional memory implementation similar to that in Haskell's package.
- -* GemStone/S, a Transactional Memory Object Server for Smalltalk. - -* for the open-source Smalltalk (MIT Licence) - -* Fortress is a language developed by Sun that uses - - diff --git a/wiki/wikipedia/4220.txt b/wiki/wikipedia/4220.txt deleted file mode 100644 index 30d03bb48aaa7d5938bbda0a1f5a92376d03dfa3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4220.txt +++ /dev/null @@ -1,50 +0,0 @@ -In applied mathematics, the Wiener–Khinchin theorem, also known as the Wiener–Khintchine theorem and sometimes as the Wiener–Khinchin–Einstein theorem or the Khinchin–Kolmogorov theorem, states that the autocorrelation function of a wide-sense-stationary random process has a spectral decomposition given by the power spectrum of that process. - -Norbert Wiener proved this theorem for the case of a deterministic function in 1930; Aleksandr Khinchin later formulated an analogous result for stationary stochastic processes and published that probabilistic analogue in 1934. Albert Einstein explained, without proofs, the idea in a brief two-page memo in 1914. - -For continuous time, the Wiener–Khinchin theorem says that if $ x $ is a wide-sense-stationary stochastic process whose autocorrelation function (sometimes called autocovariance), defined in terms of statistical expected value as $r_{xx}(\tau) = \operatorname{E}\big[x(t)^*x(t - \tau)\big]$ (the asterisk denotes complex conjugate, and of course it can be omitted if the random process is real-valued), exists and is finite at every lag $ \tau $, then there exists a monotone function $ F(f) $ in the frequency domain $ -\infty < f < \infty $ such that -$$ - r_{xx} (\tau) = \int_{-\infty}^\infty e^{2\pi i\tau f} dF(f), -$$ - -where the integral is a Riemann–Stieltjes integral. This is a kind of spectral decomposition of the auto-correlation function. F is called the power spectral distribution function and is a statistical distribution function. It is sometimes called the integrated spectrum. - -The Fourier transform of $x(t)$ does not exist in general, because stochastic random functions are not generally either square-integrable or absolutely integrable. Nor is $ r_{xx} $ assumed to be absolutely integrable, so it need not have a Fourier transform either. - -But if $ F(f) $ is absolutely continuous, for example, if the process is purely indeterministic, then $ F $ is differentiable almost everywhere. In this case, one can define $ S(f)$, the power spectral density of $x(t)$, by taking the averaged derivative of $ F $. Because the left and right derivatives of $ F $ exist everywhere, we can put $ S(f) = \frac12 \left(\lim_{\varepsilon \downarrow 0} \frac1\varepsilon \big(F(f + \varepsilon) - F(f)\big) + \lim_{\varepsilon \uparrow 0} \frac1\varepsilon \big(F(f + \varepsilon) - F(f)\big)\right)$ everywhere (obtaining that F is the integral of its averaged derivative), and the theorem simplifies to -$$ - r_{xx} (\tau) = \int_{-\infty}^\infty S(f) e^{2\pi i\tau f} df. -$$ - -If now one assumes that r and S satisfy the necessary conditions for Fourier inversion to be valid, the Wiener–Khinchin theorem takes the simple form of saying that r and S are a Fourier-transform pair, and -$$ - S(f) = \int_{-\infty}^\infty r_{xx} (\tau) e^{-2\pi if\tau} d\tau.
-$$ - -For the discrete-time case, the power spectral density of the function with discrete values $x[n]$ is -$$ - S(f)=\sum_{k=-\infty}^\infty r_{xx}[k]e^{-i(2\pi f) k}, -$$ - -where -$$ -r_{xx}[k] = \operatorname{E}\big[ x[n] ^*x[n-k] \big] -$$ - -is the discrete autocorrelation function of $x[n]$, provided this sequence is absolutely summable. Because $x[n]$ is a sampled, discrete-time sequence, its spectral density is periodic in the frequency domain. This is due to the problem of aliasing: the contribution of any frequency higher than the Nyquist frequency seems to be equal to that of its alias between 0 and 1. For this reason, the domain of the function $ S $ is usually restricted to lie between 0 and 1 or between −0.5 and 0.5. - -The theorem is useful for analyzing linear time-invariant systems (LTI systems) when the inputs and outputs are not square-integrable, so their Fourier transforms do not exist. A corollary is that the Fourier transform of the autocorrelation function of the output of an LTI system is equal to the product of the Fourier transform of the autocorrelation function of the input of the system times the squared magnitude of the Fourier transform of the system impulse response. This works even when the Fourier transforms of the input and output signals do not exist because these signals are not square-integrable, so the system inputs and outputs cannot be directly related by the Fourier transform of the impulse response. - -Since the Fourier transform of the autocorrelation function of a signal is the power spectrum of the signal, this corollary is equivalent to saying that the power spectrum of the output is equal to the power spectrum of the input times the energy transfer function. - -This corollary is used in the parametric method for power spectrum estimation. - -In many textbooks and in much of the technical literature it is tacitly assumed that Fourier inversion of the autocorrelation function and the power spectral density is valid, and the Wiener–Khinchin theorem is stated, very simply, as if it said that the Fourier transform of the autocorrelation function was equal to the power spectral density, ignoring all questions of convergence (Einstein is an example). - -But the theorem (as stated here) was applied by Norbert Wiener and Aleksandr Khinchin to the sample functions (signals) of wide-sense-stationary random processes, signals whose Fourier transforms do not exist. - -The whole point of Wiener's contribution was to make sense of the spectral decomposition of the autocorrelation function of a sample function of a wide-sense-stationary random process even when the integrals for the Fourier transform and Fourier inversion do not make sense. - -Further complicating the issue is that the discrete Fourier transform always exists for digital, finite-length sequences, meaning that the theorem can be blindly applied to calculate auto-correlations of numerical sequences. As mentioned earlier, the relation of this discrete sampled data to a mathematical model is often misleading, and related errors can show up as a divergence when the sequence length is modified. - -Some authors refer to $R$ as the autocovariance function. They then proceed to normalise it, by dividing by $R(0)$, to obtain what they refer to as the autocorrelation function.
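Because the discrete-time statement admits a direct numerical check, here is a small illustrative verification in Python with NumPy (the signal, its length, and the moving-average filter are arbitrary choices for this demonstration; the circular autocorrelation pairs with the periodogram under the DFT):

import numpy as np

rng = np.random.default_rng(0)
n = 4096
# a toy wide-sense-stationary signal: white noise through a moving average
x = np.convolve(rng.standard_normal(n), np.ones(8) / 8, mode="same")

# periodogram: power spectrum estimated directly from the data
S = np.abs(np.fft.fft(x)) ** 2 / n

# circular autocorrelation computed from its definition
r = np.array([np.mean(x * np.roll(x, -k)) for k in range(n)])

# Wiener-Khinchin: the DFT of the autocorrelation equals the periodogram
S_from_r = np.fft.fft(r).real
print(np.allclose(S, S_from_r))  # True, up to floating-point rounding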
diff --git a/wiki/wikipedia/4221.txt b/wiki/wikipedia/4221.txt deleted file mode 100644 index 7aeb1fe8c13775c16fac92a0097ddfcc8c5f2047..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4221.txt +++ /dev/null @@ -1,76 +0,0 @@ -In mathematics, a coefficient is a multiplicative factor in some term of a polynomial, a series, or any expression; it is usually a number, but may be any expression (including variables such as a, b and c). When the coefficients are variables, they are often called parameters. - -For example, -$$ -2x^2-x+3 -$$ has the real coefficients 2, -1, and 3 respectively, and -$$ -ax^2+bx+c -$$ has the coefficient parameters a, b, and c respectively, assuming that x is the variable of the expression. - -The constant coefficient is the coefficient not attached to variables in an expression. For example, the constant coefficients of the expressions above are the real coefficient 3 and the parameter represented by c. - -Similarly, the coefficient attached to the highest power of the variable in a polynomial is referred to as the leading coefficient. For example, in the expressions above, the leading coefficients are 2 and the parameter represented by a. - -For example, in -$$ -7x^2-3xy+1.5+y, -$$ - -the first two terms have the coefficients 7 and −3, respectively. The third term 1.5 is a constant coefficient. The final term does not have any explicitly-written coefficient factor that would not change the term; the coefficient is thus taken to be 1 (since variables without number have a coefficient of 1). - -In many scenarios, coefficients are numbers (as is the case for each term of the above example), although they could be parameters of the problem—or any expression in these parameters. In such a case, one must clearly distinguish between symbols representing variables and symbols representing parameters. Following René Descartes, the variables are often denoted by x, y, ..., and the parameters by a, b, c, ..., but this is not always the case. For example, if y is considered a parameter in the above expression, then the coefficient of x would be −3y, and the constant coefficient (always with respect to x) would be 1.5 + y. - -When one writes -$$ -ax^2+bx+c, -$$ - -it is generally assumed that x is the only variable, and that a, b and c are parameters; thus the constant coefficient is c in this case. - -Similarly, any polynomial in one variable x can be written as -$$ -a_k x^k + \dotsb + a_1 x^1 + a_0 -$$ - -for some positive integer $k$, where $a_k, \dotsc, a_1, a_0$ are coefficients; to allow this kind of expression in all cases, one must allow introducing terms with 0 as coefficient. - -For the largest $i$ with $a_i \ne 0$ (if any), $a_i$ is called the leading coefficient of the polynomial. For example, the leading coefficient of the polynomial -$$ - 4x^5 + x^3 + 2x^2 -$$ - -is 4. - -Some specific coefficients that occur frequently in mathematics have dedicated names. For example, the binomial coefficients occur in the expanded form of $(x+y)^n$, and are tabulated in Pascal's triangle. - -In linear algebra, a system of linear equations is associated with a coefficient matrix, which is used in Cramer's rule to find a solution to the system.
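As a small illustration of the last point, the following Python/NumPy sketch (the system and all names are invented for this example) solves a 2×2 system by Cramer's rule, using determinants of the coefficient matrix and of its column-substituted variants:

import numpy as np

# The system 2x + 3y = 8, x - y = -1 has coefficient matrix A.
A = np.array([[2.0, 3.0],
              [1.0, -1.0]])
b = np.array([8.0, -1.0])

# Cramer's rule: the i-th unknown is det(A_i) / det(A), where A_i is A
# with its i-th column replaced by the right-hand side b.
det_A = np.linalg.det(A)
solution = []
for i in range(2):
    A_i = A.copy()
    A_i[:, i] = b
    solution.append(np.linalg.det(A_i) / det_A)
print(solution)  # approximately [1.0, 2.0], i.e. x = 1, y = 2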
- -The leading entry (sometimes leading coefficient) of a row in a matrix is the first nonzero entry in that row. So, for example, given the matrix described as follows: -$$ -\begin{pmatrix} 1 & 2 & 0 & 6\\ 0 & 2 & 9 & 4\\ 0 & 0 & 0 & 4\\ 0 & 0 & 0 & 0 \end{pmatrix}, -$$ - -the leading coefficient of the first row is 1; that of the second row is 2; that of the third row is 4, while the last row does not have a leading coefficient. - -Though coefficients are frequently viewed as constants in elementary algebra, they can also be viewed as variables as the context broadens. For example, the coordinates $(x_1, x_2, \dotsc, x_n)$ of a vector $v$ in a vector space with basis $\lbrace e_1, e_2, \dotsc, e_n \rbrace $, are the coefficients of the basis vectors in the expression -$$ - v = x_1 e_1 + x_2 e_2 + \dotsb + x_n e_n . -$$ diff --git a/wiki/wikipedia/4222.txt b/wiki/wikipedia/4222.txt deleted file mode 100644 index 1056b5f2cbb520d9bdcf9ced5d0d9c593c95619b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4222.txt +++ /dev/null @@ -1,7 +0,0 @@ -In graph theory, the Golomb graph is a polyhedral graph with 10 vertices and 18 edges. It is named after Solomon W. Golomb, who constructed it (with a non-planar embedding) as a unit distance graph that requires four colors in any graph coloring. Thus, like the simpler Moser spindle, it provides a lower bound for the Hadwiger–Nelson problem: coloring the points of the Euclidean plane so that each unit line segment has differently-colored endpoints requires at least four colors. - -The method of construction of the Golomb graph as a unit distance graph, by drawing an outer regular polygon connected to an inner twisted polygon or star polygon, has also been used for unit distance representations of the Petersen graph and of generalized Petersen graphs. As with the Moser spindle, the coordinates of the unit-distance embedding of the Golomb graph can be represented in the quadratic field $\mathbb{Q}[\sqrt{33}]$. - -The fractional chromatic number of the Golomb graph is 10/3. The fact that this number is at least this large follows from the fact that the graph has 10 vertices, at most three of which can be in any independent set. The fact that the number is at most this large follows from the fact that one can find 10 three-vertex independent sets, such that each vertex is in exactly three of these sets. - -This fractional chromatic number is less than the number 7/2 for the Moser spindle and less than the fractional chromatic number of the unit distance graph of the plane, which is bounded between 3.6190 and 4.3599. diff --git a/wiki/wikipedia/4223.txt b/wiki/wikipedia/4223.txt deleted file mode 100644 index 1c57566341e5bdff653210bbe2687670da73077d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4223.txt +++ /dev/null @@ -1,91 +0,0 @@ -In probability theory, the multidimensional Chebyshev's inequality is a generalization of Chebyshev's inequality, which puts a bound on the probability of the event that a random variable differs from its expected value by more than a specified amount. - -Let X be an N-dimensional random vector with expected value $\mu=\operatorname{E}[X] $ and covariance matrix -$$ -V=\operatorname{E} [(X - \mu) (X - \mu)^T]. -$$ - -If $V$ is a positive-definite matrix, for any real number $t>0$: -$$ -\Pr \left( \sqrt{( X-\mu)^T V^{-1} (X-\mu) } > t\right) \le \frac N {t^2} -$$ - -Since $V$ is positive-definite, so is $V^{-1}$. Define the random variable -$$ -y = (X-\mu)^T V^{-1} (X-\mu). -$$
Since $y$ is nonnegative, Markov's inequality holds: -$$ -\Pr\left( \sqrt{(X-\mu)^T V^{-1} (X-\mu) } > t\right) = \Pr( \sqrt{y} > t) = \Pr(y > t^2) \le \frac{\operatorname{E}[y]}{t^2}. -$$ - -Finally, -$$ -\begin{align} \operatorname{E}[y] &= \operatorname{E}[(X-\mu)^T V^{-1} (X-\mu)]\\[6pt] &=\operatorname{E}[ \operatorname{trace} ( V^{-1} (X-\mu) (X-\mu)^T )]\\[6pt] &= \operatorname{trace} ( V^{-1} V ) = N \end{align}. -$$ - -There is a straightforward extension of the vector version of Chebyshev's inequality to infinite dimensional settings. Let X be a random variable which takes values in a Fréchet space $\mathcal X$ (equipped with a family of seminorms $\|\cdot\|_\alpha$). This includes most common settings of vector-valued random variables, e.g., when $\mathcal X$ is a Banach space (equipped with a single norm), a Hilbert space, or the finite-dimensional setting as described above. - -Suppose that X is of "strong order two", meaning that -$$ - \operatorname{E}\left(\| X\|_\alpha^2 \right) < \infty -$$ - -for every seminorm $\|\cdot\|_\alpha$. This is a generalization of the requirement that X have finite variance, and is necessary for this strong form of Chebyshev's inequality in infinite dimensions. The terminology "strong order two" is due to Vakhania. - -Let $\mu \in \mathcal X$ be the Pettis integral of X (i.e., the vector generalization of the mean), and let -$$ -\sigma_\alpha := \sqrt{\operatorname{E}\|X - \mu\|_\alpha^2} -$$ - -be the standard deviation with respect to the seminorm $\|\cdot\|_\alpha$. In this setting we can state the following: - -General version of Chebyshev's inequality. $\forall k > 0: \quad \Pr\left( \|X - \mu\|_\alpha \ge k \sigma_\alpha \right) \le \frac{1}{ k^2 }.$ - -Proof. The proof is straightforward, and essentially the same as the finitary version. If $\sigma_\alpha = 0$, then X is constant (and equal to $\mu$) almost surely, so the inequality is trivial. - -If -$$ -\|X - \mu\|_\alpha \ge k \sigma_\alpha -$$ - -then $\|X - \mu\|_\alpha > 0$, so we may safely divide by $\|X - \mu\|_\alpha^2$. The crucial trick in Chebyshev's inequality is to recognize that $1 = \tfrac{\|X - \mu\|_\alpha^2}{\|X - \mu\|_\alpha^2}$. - -The following calculations complete the proof: -$$ -\begin{align} \Pr\left( \|X - \mu\|_\alpha \ge k \sigma_\alpha \right) &= \int_\Omega \mathbf{1}_{\|X - \mu\|_\alpha \ge k \sigma_\alpha} \mathrm d\Pr \\ & = \int_\Omega \left ( \frac{\|X - \mu\|_\alpha^2}{\|X - \mu\|_\alpha^2} \right ) \cdot \mathbf{1}_{\|X - \mu\|_\alpha \ge k \sigma_\alpha} \mathrm d\Pr \\[6pt] &\le \int_\Omega \left (\frac{\|X - \mu\|_\alpha^2}{(k\sigma_\alpha)^2} \right ) \cdot \mathbf{1}_{\|X - \mu\|_\alpha \ge k \sigma_\alpha} \mathrm d\Pr \\[6pt] &\le \frac{1}{k^2 \sigma_\alpha^2} \int_\Omega \|X - \mu\|_\alpha^2 \mathrm d\Pr && \mathbf{1}_{\|X - \mu\|_\alpha \ge k \sigma_\alpha} \le 1\\[6pt] &= \frac{1}{k^2 \sigma_\alpha^2} \left (\operatorname{E}\|X - \mu\|_\alpha^2 \right )\\[6pt] &= \frac{1}{k^2 \sigma_\alpha^2} \left (\sigma_\alpha^2 \right )\\[6pt] &= \frac{1}{k^2} \end{align} -$$ diff --git a/wiki/wikipedia/4224.txt b/wiki/wikipedia/4224.txt deleted file mode 100644 index c376d11a25b9f878a095540f5a3c55d00df52f74..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4224.txt +++ /dev/null @@ -1,61 +0,0 @@ -In proof theory, the Dialectica interpretation is a proof interpretation of intuitionistic arithmetic (Heyting arithmetic) into a finite type extension of primitive recursive arithmetic, the so-called System T. It was developed by Kurt Gödel to provide a consistency proof of arithmetic.
The name of the interpretation comes from the journal Dialectica, where Gödel's paper was published in a 1958 special issue dedicated to Paul Bernays on his 70th birthday. - -Via the Gödel–Gentzen negative translation, the consistency of classical Peano arithmetic had already been reduced to the consistency of intuitionistic Heyting arithmetic. Gödel's motivation for developing the dialectica interpretation was to obtain a relative consistency proof for Heyting arithmetic (and hence for Peano arithmetic). - -The interpretation has two components: a formula translation and a proof translation. The formula translation describes how each formula $A$ of Heyting arithmetic is mapped to a quantifier-free formula $A_D(x; y)$ of the system T, where $x$ and $y$ are tuples of fresh variables (not appearing free in $A$). Intuitively, $A$ is interpreted as $\exists x \forall y A_D(x; y)$. The proof translation shows how a proof of $A$ has enough information to witness the interpretation of $A$, i.e. the proof of $A$ can be converted into a closed term $t$ and a proof of $A_D(t; y)$ in the system T. - -The quantifier-free formula $A_D(x; y)$ is defined inductively on the logical structure of $A$ as follows, where $P$ is an atomic formula: -$$ -\begin{array}{lcl} - -(P)_D & \equiv & P \\ - -(A \wedge B)_D(x, v; y, w) & \equiv & A_D(x; y) \wedge B_D(v; w) \\ - -(A \vee B)_D(x, v, z; y, w) & \equiv & (z = 0 \rightarrow A_D(x; y)) \wedge (z \neq 0 \to B_D(v; w)) \\ - -(A \rightarrow B)_D(f, g; x, w) & \equiv & A_D(x; f x w) \rightarrow B_D(g x; w) \\ - -(\exists z A)_D(x, z; y) & \equiv & A_D(x; y) \\ - -(\forall z A)_D(f; y, z) & \equiv & A_D(f z; y) - -\end{array} -$$ - -The formula interpretation is such that whenever $A$ is provable in Heyting arithmetic, there exists a sequence of closed terms $t$ such that $A_D(t; y)$ is provable in the system T. The sequence of terms $t$ and the proof of $A_D(t; y)$ are constructed from the given proof of $A$ in Heyting arithmetic. The construction of $t$ is quite straightforward, except for the contraction axiom $A \rightarrow A \wedge A$, which requires the assumption that quantifier-free formulas are decidable. - -It has also been shown that Heyting arithmetic extended with the following principles - -* Axiom of choice - -* Markov's principle - -* Independence of premise for universal formulas - -is necessary and sufficient for characterising the formulas of HA which are interpretable by the Dialectica interpretation. - -The basic dialectica interpretation of intuitionistic logic has been extended to various stronger systems. Intuitively, the dialectica interpretation can be applied to a stronger system, as long as the dialectica interpretation of the extra principle can be witnessed by terms in the system T (or an extension of system T). - -Given Gödel's incompleteness theorem (which implies that the consistency of PA cannot be proven by finitistic means), it is reasonable to expect that system T must contain non-finitistic constructions. Indeed, this is the case. The non-finitistic constructions show up in the interpretation of mathematical induction. To give a Dialectica interpretation of induction, Gödel makes use of what is nowadays called Gödel's primitive recursive functionals, which are higher-order functions with primitive recursive descriptions. - -Formulas and proofs in classical arithmetic can also be given a Dialectica interpretation via an initial embedding into Heyting arithmetic followed by the Dialectica interpretation of Heyting arithmetic.
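Since the formula translation above is a purely syntactic recursion, it can be sketched directly in code. The following Python fragment is illustrative only (formulas are encoded as nested tuples, the quantifier-free matrix is kept as a string, and all helper names are invented for the sketch; a real implementation would also carry the System T types of the witnesses):

import re
from itertools import count

_ids = count()
def fresh(prefix):
    return f"{prefix}{next(_ids)}"

def subst(matrix, var, term):
    return re.sub(rf"\b{re.escape(var)}\b", term, matrix)  # whole-word replace

def dialectica(A):
    # Return (witnesses, challenges, matrix): A is interpreted as
    # "exists witnesses, forall challenges, matrix".  Formula encodings:
    # ('atom', s), ('and', A, B), ('or', A, B), ('imp', A, B),
    # ('exists', z, A), ('forall', z, A).
    tag = A[0]
    if tag == "atom":
        return [], [], A[1]
    if tag in ("and", "or", "imp"):
        x, y, a = dialectica(A[1])
        v, w, b = dialectica(A[2])
        if tag == "and":
            return x + v, y + w, f"({a} & {b})"
        if tag == "or":
            z = fresh("z")
            return x + v + [z], y + w, f"(({z}=0 -> {a}) & ({z}!=0 -> {b}))"
        # imp: witnesses become functions f (challenging A) and g (witnessing B)
        fs, gs = [fresh("f") for _ in y], [fresh("g") for _ in v]
        for yi, fi in zip(y, fs):
            a = subst(a, yi, f"{fi}({','.join(x + w)})")
        for vi, gi in zip(v, gs):
            b = subst(b, vi, f"{gi}({','.join(x)})")
        return fs + gs, x + w, f"({a} -> {b})"
    z, (x, y, a) = A[1], dialectica(A[2])
    if tag == "exists":
        return x + [z], y, a
    # forall: each witness of the body becomes a function of the bound variable
    fs = [fresh("f") for _ in x]
    for xi, fi in zip(x, fs):
        a = subst(a, xi, f"{fi}({z})")
    return fs, y + [z], a

# forall z exists u P(z,u)  translates to  exists f forall z P(z, f(z)):
print(dialectica(("forall", "z", ("exists", "u", ("atom", "P(z,u)")))))
# (['f0'], ['z'], 'P(z,f0(z))')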
Shoenfield, in his book, combines the negative translation and the Dialectica interpretation into a single interpretation of classical arithmetic. - -In 1962 Spector extended Gödel's Dialectica interpretation of arithmetic to full mathematical analysis, by showing how the schema of countable choice can be given a Dialectica interpretation by extending system T with bar recursion. - -The Dialectica interpretation has been used to build a model of Girard's refinement of intuitionistic logic known as linear logic, via the so-called Dialectica spaces. Since linear logic is a refinement of intuitionistic logic, the Dialectica interpretation of linear logic can also be viewed as a refinement of the Dialectica interpretation of intuitionistic logic. - -Although the linear interpretation in Shirahata's work validates the weakening rule (it is actually an interpretation of affine logic), de Paiva's Dialectica spaces interpretation does not validate weakening for arbitrary formulas. - -Several variants of the Dialectica interpretation have been proposed since, most notably the Diller-Nahm variant (to avoid the contraction problem) and Kohlenbach's monotone and Ferreira-Oliva bounded interpretations (to interpret weak Kőnig's lemma). diff --git a/wiki/wikipedia/4225.txt b/wiki/wikipedia/4225.txt deleted file mode 100644 index 7d6f74897141f0b434fc6a5c229ac82f167fe7d0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4225.txt +++ /dev/null @@ -1,42 +0,0 @@ -In representation theory, a branch of mathematics, Engel's theorem states that a finite-dimensional Lie algebra $\mathfrak g$ is a nilpotent Lie algebra if and only if for each $X \in \mathfrak g$, the adjoint map
$$
\operatorname{ad}(X)\colon \mathfrak{g} \to \mathfrak{g},
$$ - -given by $\operatorname{ad}(X)(Y) = [X, Y]$, is a nilpotent endomorphism on $\mathfrak{g}$; i.e., $\operatorname{ad}(X)^k = 0$ for some k. It is a consequence of the theorem, also called Engel's theorem, which says that if a Lie algebra of matrices consists of nilpotent matrices, then the matrices can all be simultaneously brought to a strictly upper triangular form. Note that if we merely have a Lie algebra of matrices which is nilpotent as a Lie algebra, then this conclusion does not follow (i.e. the naïve replacement in Lie's theorem of "solvable" with "nilpotent", and "upper triangular" with "strictly upper triangular", is false; this already fails for the one-dimensional Lie subalgebra of scalar matrices). - -The theorem is named after the mathematician Friedrich Engel, who sketched a proof of it in a letter to Wilhelm Killing dated 20 July 1890. Engel's student K.A. Umlauf gave a complete proof in his 1891 dissertation. - -Let $\mathfrak{gl}(V)$ be the Lie algebra of the endomorphisms of a finite-dimensional vector space V and $\mathfrak g \subset \mathfrak{gl}(V)$ a subalgebra. Then Engel's theorem states the following are equivalent: - -# Each $X \in \mathfrak{g}$ is a nilpotent endomorphism on V. - -# There exists a flag $V = V_0 \supset V_1 \supset \cdots \supset V_n = 0$ with $\operatorname{codim} V_i = i$ such that $\mathfrak g \cdot V_i \subset V_{i+1}$; i.e., the elements of $\mathfrak g$ are simultaneously strictly upper-triangulizable. - -Note that no assumption on the underlying base field is required. - -We note that Statement 2.
for various $\mathfrak g$ and V is equivalent to the statement - -For each nonzero finite-dimensional vector space V and a subalgebra $\mathfrak g \subset \mathfrak{gl}(V)$, there exists a nonzero vector v in V such that $X(v) = 0$ for every $X \in \mathfrak g.$ - -This is the form of the theorem proven in #Proof. (This statement is trivially equivalent to Statement 2 since it allows one to inductively construct a flag with the required property.) - -In general, a Lie algebra $\mathfrak g$ is said to be nilpotent if the lower central series of it vanishes in a finite step; i.e., for $C^0 \mathfrak g = \mathfrak g, C^i \mathfrak g = [\mathfrak g, C^{i-1} \mathfrak g]$ = (i+1)-th power of $\mathfrak g$, there is some k such that $C^k \mathfrak g = 0$. Then Engel's theorem gives the theorem (also called Engel's theorem): when $\mathfrak g$ has finite dimension, $\mathfrak g$ is nilpotent if and only if $\operatorname{ad}(X)$ is nilpotent for each $X \in \mathfrak g$. Indeed, if $\operatorname{ad}(\mathfrak g)$ consists of nilpotent operators, then by 1. $\Leftrightarrow$ 2. applied to the algebra $\operatorname{ad}(\mathfrak g) \subset \mathfrak{gl}(\mathfrak g)$, there exists a flag $\mathfrak g = \mathfrak{g}_0 \supset \mathfrak{g}_1 \supset \cdots \supset \mathfrak{g}_n = 0$ such that $[\mathfrak g, \mathfrak g_i] \subset \mathfrak g_{i+1}$. Since $C^i \mathfrak g\subset \mathfrak g_i$, this implies $\mathfrak g$ is nilpotent. (The converse follows straightforwardly from the definition.) - -We prove the following form of the theorem: if $\mathfrak{g} \subset \mathfrak{gl}(V)$ is a Lie subalgebra such that every $X \in \mathfrak{g}$ is a nilpotent endomorphism and if V has positive dimension, then there exists a nonzero vector v in V such that $X(v) = 0$ for each X in $\mathfrak{g}$. - -The proof is by induction on the dimension of $\mathfrak{g}$ and consists of a few steps. (Note the structure of the proof is very similar to that for Lie's theorem, which concerns a solvable algebra.) The basic case is trivial and we assume the dimension of $\mathfrak{g}$ is positive. - -Step 1: Find an ideal $\mathfrak{h}$ of codimension one in $\mathfrak{g}$. - -This is the most difficult step. Let $\mathfrak{h}$ be a maximal (proper) subalgebra of $\mathfrak{g}$, which exists by finite-dimensionality. We claim it is an ideal and has codimension one. For each $X \in \mathfrak h$, it is easy to check that (1) $\operatorname{ad}(X)$ induces a linear endomorphism $\mathfrak{g}/\mathfrak{h} \to \mathfrak{g}/\mathfrak{h}$ and (2) this induced map is nilpotent (in fact, $\operatorname{ad}(X)$ is nilpotent). Thus, by inductive hypothesis, there exists a nonzero vector v in $\mathfrak{g}/\mathfrak{h}$ such that $\operatorname{ad}(X)(v) = 0$ for each $X \in \mathfrak{h}$. That is to say, if $v = [Y]$ for some Y in $\mathfrak{g}$ but not in $\mathfrak h$, then $[X, Y] = \operatorname{ad}(X)(Y) \in \mathfrak{h}$ for every $X \in \mathfrak{h}$. But then the subspace $\mathfrak{h}' \subset \mathfrak{g}$ spanned by $\mathfrak{h}$ and Y is a Lie subalgebra in which $\mathfrak{h}$ is an ideal. Hence, by maximality, $\mathfrak{h}' = \mathfrak g$. This proves the claim. - -Step 2: Let $W = \{ v \in V | X(v) = 0, X \in \mathfrak{h} \}$. Then $\mathfrak{g}$ stabilizes W; i.e., $X (v) \in W$ for each $X \in \mathfrak{g}, v \in W$. - -Indeed, for $Y$ in $\mathfrak{g}$ and $X$ in $\mathfrak{h}$, we have: $X(Y(v)) = Y(X(v)) + [X, Y](v) = 0$ since $\mathfrak{h}$ is an ideal and so $[X, Y] \in \mathfrak{h}$. Thus, $Y(v)$ is in W. 
- -Step 3: Finish up the proof by finding a nonzero vector that gets killed by $\mathfrak{g}$. - -Write $\mathfrak{g} = \mathfrak{h} + L$ where L is a one-dimensional vector subspace. Let Y be a nonzero vector in L and v a nonzero vector in W. Now, $Y$ is a nilpotent endomorphism (by hypothesis) and so $Y^k(v) \ne 0, Y^{k+1}(v) = 0$ for some k. Then $Y^k(v)$ is a required vector as the vector lies in W by Step 2. $\square$ diff --git a/wiki/wikipedia/4226.txt b/wiki/wikipedia/4226.txt deleted file mode 100644 index 1bfad0add3a3fd8f3c7301b79cbb08aa33a55409..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4226.txt +++ /dev/null @@ -1,11 +0,0 @@ -In functional analysis, the Dixmier-Ng Theorem is a characterization of when a normed space is in fact a dual Banach space. - -Dixmier-Ng Theorem. Let $X$ be a normed space. The following are equivalent: - -# There exists a Hausdorff locally convex topology $\tau$ on $X$ so that the closed unit ball, $\mathbf{B}_X$, of $X$ is $\tau$-compact. - -# There exists a Banach space $Y$ so that $X$ is isometrically isomorphic to the dual of $Y$. - -That 2. implies 1. is an application of the Banach–Alaoglu theorem, setting $\tau$ to the Weak-* topology. That 1. implies 2. is an application of the Bipolar theorem. - -Let $M$ be a pointed metric space with distinguished point denoted $0_M$. The Dixmier-Ng Theorem is applied to show that the Lipschitz space $\text{Lip}_0(M)$ of all real-valued Lipschitz functions from $M$ to $\mathbb{R}$ that vanish at $0_M$ (endowed with the Lipschitz constant as norm) is a dual Banach space. diff --git a/wiki/wikipedia/4227.txt b/wiki/wikipedia/4227.txt deleted file mode 100644 index 4118a3ee11a57553e76a76f83566e659a631e59c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4227.txt +++ /dev/null @@ -1,39 +0,0 @@ -Clock synchronization is a topic in computer science and engineering that aims to coordinate otherwise independent clocks. Even when initially set accurately, real clocks will differ after some amount of time due to clock drift, caused by clocks counting time at slightly different rates. There are several problems that occur as a result of clock rate differences and several solutions, some being more appropriate than others in certain contexts. - -In serial communication, clock synchronization can refer to clock recovery which achieves frequency synchronization, as opposed to full phase synchronization. Such clock synchronization is used in synchronization in telecommunications and automatic baud rate detection. - -Plesiochronous or isochronous operation refers to a system with frequency synchronization and loose constraints on phase synchronization. Synchronous operation implies a tighter synchronization based on time perhaps in addition to frequency. - -As a result of the difficulties managing time at smaller scales, there are problems associated with clock skew that take on more complexity in distributed computing in which several computers will need to realize the same global time. For instance, in Unix systems the make command is used to compile new or modified code and seeks to avoid recompiling unchanged code. The make command uses the clock of the machine it runs on to determine which source files need to be recompiled. If the sources reside on a separate file server and the two machines have unsynchronized clocks, the make program might not produce the correct results. - -Synchronization is required for accurate reproduction of streaming media. 
Clock synchronization is a significant component of audio over Ethernet systems. - -In a system with a central server, the synchronization solution is trivial; the server will dictate the system time. Cristian's algorithm and the Berkeley algorithm are potential solutions to the clock synchronization problem in this environment. - -In distributed computing, the problem takes on more complexity because a global time is not easily known. The most used clock synchronization solution on the Internet is the Network Time Protocol (NTP) which is a layered client-server architecture based on User Datagram Protocol (UDP) message passing. Lamport timestamps and vector clocks are concepts of the logical clock in distributed computing. - -In a wireless network, the problem becomes even more challenging due to the possibility of collision of the synchronization packets on the wireless medium and the higher drift rate of clocks on low-cost wireless devices. - -The Berkeley algorithm is suitable for systems where a radio clock is not present, this system has no way of making sure of the actual time other than by maintaining a global average time as the global time. A time server will periodically fetch the time from all the time clients, average the results, and then report back to the clients the adjustment that needs be made to their local clocks to achieve the average. This algorithm highlights the fact that internal clocks may vary not only in the time they contain but also in the clock rate. - -Clock-sampling mutual network synchronization (CS-MNS) is suitable for distributed and mobile applications. It has been shown to be scalable over mesh networks that include indirectly-linked non-adjacent nodes, and is compatible with IEEE 802.11 and similar standards. It can be accurate to the order of few microseconds, but requires direct physical wireless connectivity with negligible link delay (less than 1 microsecond) on links between adjacent nodes, limiting the distance between neighboring nodes to a few hundred meters. - -Cristian's algorithm relies on the existence of a time server. The time server maintains its clock by using a radio clock or other accurate time source, then all other computers in the system stay synchronized with it. A time client will maintain its clock by making a procedure call to the time server. Variations of this algorithm make more precise time calculations by factoring in network radio propagation time. - -In addition to its use in navigation, the Global Positioning System (GPS) can also be used for clock synchronization. The accuracy of GPS time signals is ±10 nanoseconds. Using GPS for synchronization requires a GPS receiver connected to an antenna with unobstructed view of the sky. - -IRIG timecodes are standard formats for transferring timing information. Atomic frequency standards and GPS receivers designed for precision timing are often equipped with an IRIG output. The standards were created by the Telecommunications Working Group of the United States military's Inter-Range Instrumentation Group (IRIG), the standards body of the Range Commanders Council. Work on these standards started in October 1956, and the original standards were accepted in 1960. - -Network Time Protocol (NTP) is a highly robust protocol, widely deployed throughout the Internet. Well tested over the years, it is generally regarded as the state of the art in distributed time synchronization protocols for unreliable networks. 
It can reduce synchronization offsets to times of the order of a few milliseconds over the public Internet, and to sub-millisecond levels over local area networks. - -A simplified version of the NTP protocol, Simple Network Time Protocol (SNTP), can also be used as a pure single-shot stateless primary/secondary synchronization protocol, but lacks the sophisticated features of NTP, and thus has much lower performance and reliability levels. - -Precision Time Protocol (PTP) is a master/slave protocol for delivery of highly accurate time over local area networks. - -The Reference Broadcast Time Synchronization (RBS) algorithm is often used in wireless networks and sensor networks. In this scheme, an initiator broadcasts a reference message to urge the receivers to adjust their clocks. - -The Reference Broadcast Infrastructure Synchronization (RBIS) protocol is a master/slave synchronization protocol, like RBS, based on a receiver/receiver synchronization paradigm. It is specifically tailored to be used in IEEE 802.11 wireless networks configured in infrastructure mode (i.e., coordinated by an access point). The protocol does not require any modification to the access point. - -Synchronous Ethernet uses Ethernet in a synchronous manner such that, when combined with synchronization protocols such as PTP in the case of the White Rabbit Project, sub-nanosecond synchronization accuracy is achieved. - -Synchronization is achieved in wireless ad hoc networks by sending synchronization messages in a multi-hop manner, with each node progressively synchronizing with the node that is the immediate sender of a synchronization message. Examples include the Flooding Time Synchronization Protocol (FTSP). diff --git a/wiki/wikipedia/4228.txt b/wiki/wikipedia/4228.txt deleted file mode 100644 index 49f25d2c52eb1cc57cc945ad9e8d3c231d37c5c3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4228.txt +++ /dev/null @@ -1,83 +0,0 @@ -In mathematics, the closed-subgroup theorem (sometimes referred to as Cartan's theorem) is a theorem in the theory of Lie groups. It states that if H is a closed subgroup of a Lie group G, then H is an embedded Lie group with the smooth structure (and hence the group topology) agreeing with the embedding. - -One of several results known as Cartan's theorem, it was first published in 1930 by Élie Cartan, who was inspired by John von Neumann's 1929 proof of a special case for groups of linear transformations. - -Let $G$ be a Lie group with Lie algebra $\mathfrak{g}$. Now let $H$ be an arbitrary closed subgroup of $G$. Our goal is to show that $H$ is a smooth embedded submanifold of $G$. Our first step is to identify something that could be the Lie algebra of $H$, that is, the tangent space of $H$ at the identity. The challenge is that $H$ is not assumed to have any smoothness and therefore it is not clear how one may define its tangent space. To proceed, we define the "Lie algebra" $\mathfrak{h}$ of $H$ by the formula - -\mathfrak{h} = \left\{ X \mid e^{tX}\in H, \forall t\in\R\right\}. - -It is not difficult to show that $\mathfrak{h}$ is a Lie subalgebra of $\mathfrak{g}$. In particular, $\mathfrak{h}$ is a subspace of $\mathfrak{g}$, which we might hope could be the tangent space of $H$ at the identity. For this idea to work, however, we need to know that $\mathfrak{h}$ is big enough to capture some interesting information about $H$.
If, for example, $H$ were some large subgroup of $G$ but $\mathfrak{h}$ turned out to be zero, $\mathfrak{h}$ would not be helpful to us. - -The key step, then, is to show that $\mathfrak{h}$ actually captures all the elements of $H$ that are sufficiently close to the identity. That is to say, we need to show that the following critical lemma holds: - -Take a small neighborhood $U$ of the origin in $\mathfrak{g}$ such that the exponential map sends $U$ diffeomorphically onto some neighborhood $V$ of the identity in $G$, and let $\log: V \to U$ be the inverse of the exponential map. Then there is some smaller neighborhood $W\subset V$ such that if $h$ belongs to $W\cap H$, then $\log(h)$ belongs to $\mathfrak{h}$. - -It is worth noting that Rossmann shows that for any subgroup $H$ of $G$ (not necessarily closed), the Lie algebra $\mathfrak{h}$ of $H$ is a Lie subalgebra of $\mathfrak{g}$. Rossmann then goes on to introduce coordinates on $H$ that make the identity component of $H$ into a Lie group. It is important to note, however, that the topology on $H$ coming from these coordinates is not the subset topology. That it so say, the identity component of $H$ is an immersed submanifold of $G$ but not an embedded submanifold. - -In particular, the lemma stated above does not hold if $H$ is not closed. - -For an example of a subgroup that is not an embedded Lie subgroup, consider the torus and an "irrational winding of the torus". - -G = \mathbb{T}^2 = \left\{\left .\begin{pmatrix}e^{2\pi i\theta} & 0\\0 & e^{2\pi i\phi} \end{pmatrix}\right| \theta, \phi \in \R\right\}, - -and its subgroup - -H = \left\{\left. \begin{pmatrix}e^{2\pi i\theta} & 0\\0 & e^{2\pi ia\theta}\end{pmatrix} \right| \theta \in \R\right\} \text{with Lie algebra } \mathfrak{h} = \left\{\left. \begin{pmatrix} i\theta & 0\\0 & ia\theta\end{pmatrix}\right| \theta \in \R\right\}, - -with a irrational. Then H is dense in G and hence not closed. In the relative topology, a small open subset of H is composed of infinitely many almost parallel line segments on the surface of the torus. This means that H is not locally path connected. In the group topology, the small open sets are single line segments on the surface of the torus and H is locally path connected. - -The example shows that for some groups H one can find points in an arbitrarily small neighborhood U in the relative topology τr of the identity that are exponentials of elements of h, yet they cannot be connected to the identity with a path staying in U. The group (H, τr) is not a Lie group. While the map exp : h → (H, τr) is an analytic bijection, its inverse is not continuous. That is, if U ⊂ h corresponds to a small open interval −ε < θ < ε, there is no open V ⊂ (H, τr) with log(V) ⊂ U due to the appearance of the sets V. However, with the group topology τg, (H, τg) is a Lie group. With this topology the injection ι : (H, τg) → G is an analytic injective immersion, but not a homeomorphism, hence not an embedding. There are also examples of groups H for which one can find points in an arbitrarily small neighborhood (in the relative topology) of the identity that are not exponentials of elements of h. For closed subgroups this is not the case as the proof below of the theorem shows. - -Because of the conclusion of the theorem, some authors chose to define linear Lie groups or matrix Lie groups as closed subgroups of GL(n, R) or GL(n, C). 
In this setting, one proves that every element of the group sufficiently close to the identity is the exponential of an element of the Lie algebra. (The proof is practically identical to the proof of the closed subgroup theorem presented below.) It follows every closed subgroup is an embedded submanifold of GL(n, C) - -If H ⊂ G is a closed Lie subgroup, then G/H, the left coset space, has a unique real-analytic manifold structure such that the quotient map π:G → G/H is an analytic submersion. The left action given by g1 ⋅ (g2H) = (g1g2)H turns G/H into a homogeneous G-space. - -The closed subgroup theorem now simplifies the hypotheses considerably, a priori widening the class of homogeneous spaces. Every closed subgroup yields a homogeneous space. - -In a similar way, the closed subgroup theorem simplifies the hypothesis in the following theorem. - -If X is a set with transitive group action and the isotropy group or stabilizer of a point x ∈ X is a closed Lie subgroup, then X has a unique smooth manifold structure such that the action is smooth. - -A few sufficient conditions for H ⊂ G being closed, hence an embedded Lie group, is given below. - -*All classical groups are closed in GL(F, n), where F is $\R$, $\Complex$, or $\mathbb{H}$, the quaternions. - -*A subgroup that is locally closed is closed. A subgroup is locally closed if every point has a neighborhood in U ⊂G such that H ∩ U is closed in U. - -*If H = AB = {ab | a ∈ A, b ∈ B}, where A is a compact group and B is a closed set, then H is closed. - -*If h ⊂ g is a Lie subalgebra such that for no X ∈ g \ h, [X, h] ∈ h, then Γ(h), the group generated by eh, is closed in G. - -*If X ∈ g, then the one-parameter subgroup generated by X is not closed if and only if X is similar over $\Complex$ to a diagonal matrix with two entries of irrational ratio. - -*Let h ⊂ g be a Lie subalgebra. If there is a simply connected compact group K with k isomorphic to h, then Γ(h) is closed in G. - -*If G is simply connected and h ⊂ g is an ideal, then the connected Lie subgroup with Lie algebra h is closed. - -An embedded Lie subgroup H ⊂ G is closed so a subgroup is an embedded Lie subgroup if and only if it is closed. Equivalently, H is an embedded Lie subgroup if and only if its group topology equals its relative topology. - -The proof is given for matrix groups with G = GL(n, R) for concreteness and relative simplicity, since matrices and their exponential mapping are easier concepts than in the general case. Historically, this case was proven first, by John von Neumann in 1929, and inspired Cartan to prove the full closed subgroup theorem in 1930. The proof for general G is formally identical, except that elements of the Lie algebra are left invariant vector fields on G and the exponential mapping is the time one flow of the vector field. If H ⊂ G with G closed in GL(n, R), then H is closed in GL(n, R), so the specialization to GL(n, R) instead of arbitrary G ⊂ GL(n, R) matters little. - -We begin by establishing the key lemma stated in the "overview" section above. - -Endow g with an inner product (e.g., the Hilbert–Schmidt inner product), and let h be the Lie algebra of H defined as h = {X ∈ Mn(R) = g | etX ∈ H ∀t ∈ R}. Let s = {S ∈ g | (S, T) = 0 ∀T ∈ h}, the orthogonal complement of h. Then g decomposes as the direct sum g = s ⊕ h, so each X ∈ g is uniquely expressed as X = S + T with S ∈ s, T ∈ h. - -Define a map Φ : g → GL(n, R) by (S, T) ↦ eSeT. 
Expand the exponentials, - -\Phi(tS,tT) = e^{tS}e^{tT} = I + tS + tT + O(t^2), - -and the pushforward or differential at 0, dΦ0(S, T) = d/dt Φ(tS, tT)|t=0, is seen to be S + T, i.e. dΦ0 = Id, the identity. The hypothesis of the inverse function theorem is satisfied with Φ analytic, and thus there are open sets U1 ⊂ g, V1 ⊂ GL(n, R) with 0 ∈ U1 and I ∈ V1 such that Φ is a real-analytic bijection from U1 to V1 with analytic inverse. It remains to show that U1 and V1 contain open sets U and V such that the conclusion of the theorem holds. - -Consider a countable neighborhood basis Β at 0 ∈ g, linearly ordered by reverse inclusion with B1 ⊂ U1. Suppose for the purpose of obtaining a contradiction that for all i, Φ(Bi) ∩ H contains an element hi that is not of the form hi = eTi, Ti ∈ h. Then, since Φ is a bijection on the Bi, there is a unique sequence Xi = Si + Ti, with 0 ≠ Si ∈ s and Ti ∈ h such that Xi ∈ Bi converging to 0 because Β is a neighborhood basis, with eSieTi = hi. Since eTi ∈ H and hi ∈ H, eSi ∈ H as well. - -Normalize the sequence in s, Yi = Si/‖Si‖. It takes its values in the unit sphere in s and since it is compact, there is a convergent subsequence converging to Y ∈ s. The index i henceforth refers to this subsequence. It will be shown that etY ∈ H, ∀t ∈ R. Fix t and choose a sequence mi of integers such that mi‖Si‖ → t as i → ∞. For example, mi such that mi‖Si‖ ≤ t < (mi + 1)‖Si‖ will do, as Si → 0. Then - -(e^{S_i})^{m_i} = e^{m_iS_i} = e^{m_i\|S_i\| Y_i} \rightarrow e^{t Y}. - -Since H is a group, the left hand side is in H for all i. Since H is closed, etY ∈ H, ∀t, hence Y ∈ h. This is a contradiction. Hence, for some i the sets U = Βi and V = Φ(Βi) satisfy e(U ∩ h) = H ∩ V and the exponential restricted to the open set (U ∩ h) ⊂ h is in analytic bijection with the open set Φ(U) ∩ H ⊂ H. This proves the lemma. - -For j ≥ i, the images in H of the Bj under Φ form a neighborhood basis at I. This is, by the way it is constructed, a neighborhood basis both in the group topology and the relative topology. Since multiplication in G is analytic, the left and right translates of this neighborhood basis by a group element g ∈ G give a neighborhood basis at g. These bases restricted to H give neighborhood bases at all h ∈ H. The topology generated by these bases is the relative topology. The conclusion is that the relative topology is the same as the group topology. - -Next, construct coordinate charts on H. First define φ1 : e(U) ⊂ G → g, g ↦ log(g). This is an analytic bijection with analytic inverse. Furthermore, if h ∈ H, then φ1(h) ∈ h. By fixing a basis for g = h ⊕ s and identifying g with $\R^n$, then in these coordinates φ1(h) = (x1(h), …, xm(h), 0, …, 0), where m is the dimension of h. This shows that (e(U), φ1) is a slice chart. By translating the charts obtained from the countable neighborhood basis used above one obtains slice charts around every point in H. This shows that H is an embedded submanifold of G. - -Moreover, multiplication m, and inversion i in H are analytic since these operations are analytic in G and restriction to a submanifold (embedded or immersed) with the relative topology again yields analytic operations m : H × H → G and i : H × H → G. But since H is embedded, m : H × H → H and i : H × H → H are analytic as well.
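To make the role of closedness concrete, the irrational winding discussed above can be examined numerically. The following is a minimal sketch, not part of the original article, taking the illustrative choice a = √2: it shows that the subgroup H returns arbitrarily close to the identity of the torus without ever reaching it again, so the identity is a limit point of H that witnesses its failure to be closed.

```
import numpy as np

# Irrational winding H = {(e^{2 pi i t}, e^{2 pi i a t})} in the torus T^2,
# with a = sqrt(2) irrational.  H is dense in T^2, hence not closed.
a = np.sqrt(2.0)
identity = np.array([1.0 + 0j, 1.0 + 0j])

record = np.inf
for t in range(1, 200_000):
    # At integer t the first coordinate is exactly 1, so the distance to the
    # identity measures how close a*t comes to an integer.
    pt = np.array([np.exp(2j * np.pi * t), np.exp(2j * np.pi * a * t)])
    d = np.linalg.norm(pt - identity)
    if d < record:
        record = d
        print(f"t = {t:7d}   |h(t) - e| = {d:.3e}")
```

The record distances shrink toward zero (the record-setting parameters are the denominators of good rational approximations to √2), while no nonzero t gives the identity itself; the identity is approached only along unboundedly large parameters, exactly the behaviour ruled out for closed subgroups by the lemma proven above.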
diff --git a/wiki/wikipedia/4229.txt b/wiki/wikipedia/4229.txt deleted file mode 100644 index d34cf50a8414c126c3b4b67ad6b9b2e9fc7de146..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4229.txt +++ /dev/null @@ -1,64 +0,0 @@ -In mathematics, in the area of additive number theory, the Erdős–Fuchs theorem is a statement about the number of ways that numbers can be represented as a sum of elements of a given additive basis, stating that the average order of this number cannot be too close to being a linear function. - -The theorem is named after Paul Erdős and Wolfgang Heinrich Johannes Fuchs, who published it in 1956. - -Let $\mathcal{A}\subseteq\mathbb{N}$ be an infinite subset of the natural numbers and $r_{\mathcal{A},h}(n)$ its representation function, which denotes the number of ways that a natural number $n$ can be expressed as the sum of $h$ elements of $\mathcal{A}$ (taking order into account). We then consider the accumulated representation function - -s_{\mathcal{A},h}(x) := \sum_{n\leqslant x} r_{\mathcal{A},h}(n), - -which counts (also taking order into account) the number of solutions to $k_1 + \cdots + k_h \leqslant x$, where $k_1,\ldots,k_h \in \mathcal{A}$. The theorem then states that, for any given $c>0$, the relation - -s_{\mathcal{A},2}(n) = cn + o\left(n^{1/4}\log(n)^{-1/2} \right) - -cannot be satisfied; that is, there is no $\mathcal{A}\subseteq\mathbb{N}$ satisfying the above estimate. - -The Erdős–Fuchs theorem has an interesting history of precedents and generalizations. In 1915, it was already known by G. H. Hardy that in the case of the sequence $\mathcal{Q}:=\{0,1,4,9,\ldots\}$ of perfect squares one has - -\limsup_{n\to +\infty} \frac{\left|s_{\mathcal{Q},2}(n)- \pi n\right|}{n^{1/4}\log(n)^{1/4}} > 0. - -This estimate is a little better than that described by Erdős–Fuchs but, at the cost of a slight loss of precision, P. Erdős and W. H. J. Fuchs achieved complete generality in their result (at least for the case $h=2$). Another reason this result is so celebrated may be due to the fact that, in 1941, P. Erdős and P. Turán conjectured that, subject to the same hypotheses as in the theorem stated, the relation - -s_{\mathcal{A},2}(n) = cn + O(1) - -could not hold. This fact remained unproven until 1956, when Erdős and Fuchs obtained their theorem, which is even stronger than the previously conjectured estimate. - -This theorem has been extended in a number of different directions. In 1980, A. Sárközy considered two sequences which are "near" in some sense. He proved the following: - -* Theorem (Sárközy, 1980). If $\mathcal{A} = \{a_1 < a_2 <\ldots\}$ and $\mathcal{B} = \{b_1 < b_2 < \ldots\}$ are two infinite subsets of natural numbers with $a_i - b_i = o\big(a_i ^{1/2}\log(a_i)^{-1} \big)$, then $|\{(i,j):a_i+b_j \leqslant n\}| = cn + o\big(n^{1/4}\log(n)^{-1/2} \big)$ cannot hold for any constant $c > 0$. - -In 1990, H. L. Montgomery and R. C. Vaughan were able to remove the log from the right-hand side of Erdős–Fuchs' original statement, showing that - -s_{\mathcal{A},2}(n) = cn + o(n^{1/4}) - -cannot hold. In 2004, Gábor Horváth extended both these results, proving the following: - -* Theorem (Horváth, 2004). If $\mathcal{A}=\{a_1 < a_2 < \ldots\}$ and $\mathcal{B}=\{b_1 < b_2 < \ldots\}$ are two infinite subsets of natural numbers with $a_i - b_i = o\big(a_i^{1/2}\log(a_i)^{-1}\big)$, then the relation $|\{(i,j):a_i+b_j \leqslant n\}| = cn + o(n^{1/4})$ cannot hold for any constant $c>0$. - -The natural generalization of the Erdős–Fuchs theorem, namely for $h \geqslant 3$, is known to hold with the same strength as Montgomery–Vaughan's version. In fact, M. Tang showed in 2009 that, under the same conditions as in the original statement of Erdős–Fuchs, for every $h \geqslant 2$ the relation - -s_{\mathcal{A},h}(n) = cn + o(n^{1/4}) - -cannot hold. In another direction, in 2002, Gábor Horváth gave a precise generalization of Sárközy's 1980 result, showing that - -* Theorem (Horváth, 2002). If $\mathcal{A}^{(j)}=\{a^{(j)}_1 < a^{(j)}_2 < \ldots\}$ ($j=1,\ldots,k$) are infinite subsets of natural numbers satisfying - -  • $a^{(1)}_i - a^{(2)}_i = o\big((a^{(1)}_i)^{1/2}\log(a_i^{(1)})^{-k/2} \big)$ - -  • $|\mathcal{A}^{(j)}\cap[0,n]| = \Theta\big(|\mathcal{A}^{(1)}\cap[0,n]|\big)$ (for $j=3,\ldots,k$) - -then the relation:
$$
|\{(i_1,\ldots, i_k):a^{(1)}_{i_1}+\ldots+a^{(k)}_{i_k} \leqslant n,~ a^{(j)}_{i_j} \in \mathcal{A}^{(j)} (j =1,\ldots,k)\}| = cn + o\big(n^{1/4}\log(n)^{1-3k/4} \big)
$$
    - -cannot hold for any constant $c >0$. - -Yet another direction in which the Erdős–Fuchs theorem can be improved is by considering approximations to $s_{\mathcal{A},h}(n)$ other than $cn$ for some $c>0$. In 1963, Paul T. Bateman, Eugene E. Kohlbecker and Jack P. Tull proved a slightly stronger version of the following: - -* Theorem (Bateman–Kohlbecker–Tull, 1963). Let $L(x)$ be a slowly varying function which is either convex or concave from some point onward. Then, on the same conditions as in the original Erdős–Fuchs theorem, we cannot have $s_{\mathcal{A},2}(n) = nL(n) + o\big(n^{1/4}\log(n)^{-1/2}L(n)^\alpha \big)$, where $\alpha = 3/4$ if $L(x)$ is bounded, and $1/4$ otherwise. - -At the end of their paper, it is also remarked that it is possible to extend their method to obtain results considering $n^{\gamma}$ with $\gamma \neq 1$, but such results are deemed as not sufficiently definitive. diff --git a/wiki/wikipedia/423.txt b/wiki/wikipedia/423.txt deleted file mode 100644 index 6f2b96efaf3cdd902ecf79e029f00677b8af4bef..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/423.txt +++ /dev/null @@ -1,108 +0,0 @@ -Newton's Minimal Resistance Problem is a problem of finding a solid of revolution which experiences a minimum resistance when it moves through a homogeneous fluid with constant velocity in the direction of the axis of revolution, named after Isaac Newton, who studied the problem in 1685 and published it in 1687 in his Principia Mathematica. This is the first example of a problem solved in what is now called the calculus of variations, appearing a decade before the brachistochrone problem. Newton published the solution in Principia Mathematica without his derivation and David Gregory was the first person who approached Newton and persuaded him to write an analysis for him. Then the derivation was shared with his students and peers by Gregory. - -According to I Bernard Cohen, in his Guide to Newton’s Principia, "The key to Newton’s reasoning was found in the 1880s, when the earl of Portsmouth gave his family’s vast collection of Newton’s scientific and mathematical papers to Cambridge University. Among Newton’s manuscripts they found the draft text of a letter, … in which Newton elaborated his mathematical argument. [This] - -was never fully understood, however, until the publication of the major manuscript documents by D. T. Whiteside [1974], whose analytical and historical commentary has enabled students of Newton not only to follow fully Newton’s path to discovery and proof, but also Newton’s later (1694) recomputation of the surface of least resistance". - -Even though Newton's model for the fluid was wrong as per our current understanding, the fluid he had considered finds its application in Hypersonic flow theory as a limiting case. - -In Proposition 34 of Book 2 of the Principia, Newton wrote, If in a rare medium, consisting of equal particles freely disposed at equal distances from each other, a globe and a cylinder described on equal diameter move with equal velocities in the direction of the axis of the cylinder, the resistance of the globe will be but half as great as that of the cylinder. - -Following this proposition is a scholium containing the famous condition that the curve which, when rotated about its axis, generates the solid that experiences less resistance than any other solid having a fixed length, and width. 
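The factor of one half in Proposition 34 can be checked numerically under Newton's impact model, in which a particle striking the surface contributes axial force proportional to the squared cosine of the angle between the direction of motion and the surface normal (the same model that underlies the resistance integral introduced below). The sketch is illustrative only: the unit radius and the normalization are arbitrary choices, not taken from the Principia.

```
import numpy as np
from scipy.integrate import quad

# Flat front disc of a cylinder of radius 1, facing the flow head-on:
# the normal is parallel to the motion, so the axial force is proportional
# to the frontal area pi * R^2.
disc_force = np.pi

# Sphere of radius 1: a surface band at polar angle theta (measured from the
# flow axis) has area dA = 2*pi*sin(theta) dtheta, pressure ~ cos^2(theta),
# and only the cos(theta) component of that pressure acts along the axis.
sphere_force, _ = quad(
    lambda th: 2.0 * np.pi * np.sin(th) * np.cos(th) ** 3, 0.0, np.pi / 2
)

print(sphere_force / disc_force)  # 0.5: the globe feels half the resistance
```

The integral evaluates to π/2 against the disc's π, reproducing Newton's ratio of one half.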
- -In modern form, Newton's problem is to minimize the following integral: -$$ - I = \int \frac{y \dot{y}^3}{1 + \dot{y}^2} dx -$$ - -where $y(x)$ represents the curve which generates a solid when it is rotated about the x-axis and $\dot{y}=dy/dx$. - -I is the reduction in resistance caused by the particles impinging upon the sloping surface DNG, formed by rotating the curve, instead of perpendicularly upon the horizontal projection of DNG on the rear disc DA from the direction of motion, in Fig. 1. Note that the front of the solid is the disc BG, the triangles GBC and GBR are not part of it, but are used below by Newton to express the minimum condition. - -This integral is related to the total resistance experienced by the body by the following relation: $ \rho = \frac{H^2}{2} + \int \frac{y \dot{y}^3}{1 + \dot{y}^2} dx $ - -The problem is to find the curve that generates the solid that experiences less resistance than any other solid having a fixed axial length = L, and a fixed width, H. - -Since the solid must taper in the direction of motion, H is the radius of the disc forming the rear surface of the curve rotated about the x-axis. The units are chosen so that the constant of proportionality is unity. Also, note that $ \dot{y} < 0 $, and the integral, which is evaluated between x = 0 and x = L is negative. Let y = h when x = L. - -When the curve is the horizontal line, DK, so the solid is a cylinder, $ y = H, \dot{y} = 0 $, the integral is zero and the resistance of the cylinder is: $ \rho = \frac{H^2}{2} $, which explains the constant term. - -The simplest way to apply the Euler–Lagrange equation to this problem is to rewrite the resistance as: -$$ - \rho = \frac{H^2}{2} + \int \frac{y}{1 + \dot{x}^2} dy -$$ where $ \dot{x}=dx/dy $, and the integral, which is evaluated between y = H and y = h < H, is negative. - -Substituting the integrand $ F(y,\dot{x}) = y/(1+\dot{x}^2) $ into the Euler-Lagrange equation -$$ - \frac{d}{dy} \left(\frac{\partial F}{\partial \dot{x}}\right) = \frac{d}{dy} \left(\frac{-2\dot{x}y}{(1+\dot{x}^2)^2}\right) = \frac{\partial F}{\partial x} = 0 -$$, and it follows that $ \frac{\dot{x}y}{(1+\dot{x}^2)^2}$ is constant, and this can be written as -$$ - y = K_1\frac{(1+ p^2)^2}{p^3} -$$ (1) where $ p = -\dot{y} > 0 $, and where $ K_1 > 0 $ is a constant. - -Although the curves that satisfy the minimum condition cannot be described by a simple function, y = f(x), they may be plotted using p as a parameter, to obtain the corresponding coordinates (x,y) of the curves. The equation of x as a function of p is obtained from the minimum condition (1), and an equivalent of it was first found by Newton. - -Differentiating: $ dy = - pdx = K_1(1 - \frac{2}{p^2} - \frac{3}{p^4})dp $, and integrating -$$ -x = K_2 - K_1 \left(\ln p + \frac{1}{p^2} + \frac{3}{4p^4}\right) -$$, where $ K_2 > 0 $ is a constant. - -Since $ y = H $, when $ x = 0 $, and $ y = h $, when $ x = L $, the constants $K_1, \ K_2$ can be determined in terms of H, h and L. Because y from equation (1) can never be zero or negative, the front surface of any solid satisfying the minimum condition must be a disc, GB. - -As this was the first example of this type of problem, Newton had to invent a completely new method of solution. Also, he went much deeper in his analysis of the problem than simply finding the condition (1). - -While a solid of least resistance must satisfy (1), the converse is not true. Fig. 2 shows the family of curves that satisfy it for different values of $ K_1 > 0 $. 
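Such curves are straightforward to tabulate from the parametric equations just derived. The following minimal sketch does so; the values of $K_1$ and $K_2$ are arbitrary illustrative choices (in the problem itself they are fixed by the end conditions in terms of H, h and L), and only the portion of the parameter range matching those end conditions corresponds to the physical arc.

```
import numpy as np

# Parametrization of the minimizing curves derived above:
#   y(p) = K1 * (1 + p^2)^2 / p^3
#   x(p) = K2 - K1 * (ln p + 1/p^2 + 3/(4 p^4))
# where p = -dy/dx > 0 is the slope parameter.
K1, K2 = 1.0, 2.0

def profile(p):
    y = K1 * (1.0 + p ** 2) ** 2 / p ** 3
    x = K2 - K1 * (np.log(p) + 1.0 / p ** 2 + 3.0 / (4.0 * p ** 4))
    return x, y

# Tabulate a few points, starting from p = 1 (the slope at the front edge
# of the overall optimal solid, as explained below).
for p in np.linspace(1.0, 3.0, 5):
    x, y = profile(p)
    print(f"p = {p:.2f}   x = {x:+.4f}   y = {y:.4f}")
```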
As $ K_1 $ increases the radius, Bg = h, of the disc at x = L decreases and the curve becomes steeper. - -Directly before the minimum resistance problem, Newton stated that if on any elliptical or oval figure rotated about its axis, p becomes greater than unity, one with less resistance can be found. This is achieved by replacing the part of the solid that has p > 1 with the frustum of a cone whose vertex angle is a right angle, as shown in Fig. 2 for curve $ D\nu \phi \gamma B $. This has less resistance than $ D\nu \phi \Gamma B $. Newton does not prove this, but adds that it might have applications in shipbuilding. Whiteside supplies a proof and contends that Newton would have used the same reasoning. - -In Fig. 2, since the solid generated from the curve Dng satisfies the minimum condition and has p < 1 at g, it experiences less resistance than that from any other curve with the same end point g. However, for the curve DνΓ, with p > 1 at end point Γ, this is not the case for although the curve satisfies the minimum condition, the resistance experienced by φγ and γΓ together is less than that by φΓ. - -Newton concluded that of all solids that satisfy the minimum resistance condition, the one experiencing the least resistance, DNG in Fig. 2, is the one that has p = 1 at G. This is shown schematically in Fig. 3 where the overall resistance of the solid varies against the radius of the front surface disc, the minimum occurring when h = BG, corresponding to p = 1 at G. - -In the Principia, in Fig. 1 the condition for the minimum resistance solid is translated into a geometric form as follows: draw GR parallel to the tangent at N, so that $ p = \frac{GB}{BR} $, and equation (1) becomes: $ \frac{NM.GB^3}{ BR^3(1 + (GB/BR)^2)^2} = \frac{NM.BR.GB^3}{GR^4} = K_1 $ - -At G, $ NM = BG $, $ BR = BC = BG $, and $ GR^2 = 2BG^2 $, so $ \frac{NM.BR.GB^3}{GR^4} = \frac{GB}{4} = K_1 $ which appears in the Principia in the form: -$$ - \frac{NM}{GR} = \frac{GR^3}{4BR.GB^2} -$$ - -Although this appears fairly simple, it has several subtleties that have caused much confusion. - -In Fig 4, assume DNSG is the curve that when rotated about AB generates the solid whose resistance is less than any other such solid with the same heights, AD = H, BG = h and length, AB = L. - -Fig. 5. shows the infinitesimal region of the curve about N and I in more detail. Although NI, Nj and NJ are really curved, they can be approximated by straight lines provided NH is sufficiently small. - -Let HM = y, AM = x, NH = u, and HI = w = dx. Let the tangent at each point on the curve, $ p = - \frac{dy}{dx} = \frac{u}{w} $. The reduction of the resistance of the sloping ring NI compared to the vertical ring NH rotated about AB is $ r = \frac{yp^3}{1 + p^2}dx = \frac{yu^3}{u^2 + w^2} $ (2) - -Let the minimum resistance solid be replaced by an identical one, except that the arc between points I and K is shifted by a small distance to the right $ IJ = KL = o > 0 $, or to the left $ Ij = Kl = o < 0 $, as shown in more detail in Fig. 5. In either case, HI becomes $ HJ,Hj = w + o $. - -The resistance of the arcs of the curve DN and SG are unchanged. Also, the resistance of the arc IK is not changed by being shifted, since the slope remains the same along its length. The only change to the overall resistance of DNSG is due to the change to the gradient of arcs NI and KS. The 2 displacements have to be equal for the slope of the arc IK to be unaffected, and the new curve to end at G. 
The new resistance due to particles impinging upon NJ or Nj, rather than NI, is:
$$
 r + \delta r = \frac {yu^3}{u^2 + (w + o)^2} = r - \frac {2ywu^3 o}{(u^2 + w^2)^2}
$$ + w.(terms in ascending powers of $ \frac{o}{w} $ starting with the 2nd). - -The result is a change of resistance of: $ - \frac {2NM.HI.HN^3 o}{NI^4} $ + higher order terms, the resistance being reduced if o > 0 (NJ less resisted than NI). - -This is the original 1685 derivation, where Newton obtains the above result using the series expansion in powers of o. In his 1694 revisit he differentiates (2) with respect to w. He sent details of his later approach to David Gregory, and these are included as an appendix in Motte's translation of the Principia. - -Similarly, the change in resistance due to particles impinging upon SL or Sl rather than SK is: $ + \frac {2TS.OK.OS^3 o}{SK^4} $ + higher order terms. - -The overall change in the resistance of the complete solid is $ \delta \rho = ( - \frac {2NM.HI.HN^3}{NI^4} + \frac {2TS.OK.OS^3}{SK^4})o $ + w.(terms in ascending powers of $ \frac{o}{w} $ starting with the 2nd). - -Fig 6 represents the total resistance of DNJLSG, or DNjlSG, as a function of o. Since the original curve DNIKSG has the least resistance, any change o, of whatever sign, must result in an increase in the resistance. This is only possible if the coefficient of o in the expansion of $ \rho (o) $ is zero, so:
$$
 \frac {NM.HI.HN^3}{NI^4} = \frac {TS.OK.OS^3}{SK^4}
$$ (3) - -If this were not the case, it would be possible to choose a value of o with a sign that produced a curve DNJLSG, or DNjlSG, with less resistance than the original curve, contrary to the initial assumption. The approximation of taking straight lines for the finite arcs NI and KS becomes exact in the limit as HN and OS approach zero. Also, NM and HM can be taken as equal, as can OT and ST. - -However, N and S on the original curve are arbitrary points, so for any 2 points anywhere on the curve the above equality must apply. This is only possible if, in the limit of any infinitesimal arc HI anywhere on the curve, the expression
$$
 \frac {NM.HI.HN^3}{NI^4}
$$ is a constant. (4) - -This has to be the case since, if $ \frac {NM.HI.HN^3}{NI^4} $ were to vary along the curve, it would be possible to find 2 infinitesimal arcs NI and KS such that (3) was false, and the coefficient of o in the expansion of $ \delta \rho $ would be non-zero. Then a solid with less resistance could be produced by choosing a suitable value of o. - -This is the reason for the constant term in the minimum condition in (4). As noted above, Newton went further, and claimed that the resistance of the solid is less than that of any other with the same length and width, when the slope at G is equal to unity. Therefore, in this case, the constant in (4) is equal to one quarter of the radius of the front disc of the solid, $ \frac{GB}{4} $. diff --git a/wiki/wikipedia/4230.txt b/wiki/wikipedia/4230.txt deleted file mode 100644 index bb91f54769266da0883f91b0c45f3c15d0c00eef..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4230.txt +++ /dev/null @@ -1,152 +0,0 @@ -In mathematics, specifically differential calculus, the inverse function theorem gives a sufficient condition for a function to be invertible in a neighborhood of a point in its domain: namely, that its derivative is continuous and non-zero at the point. The theorem also gives a formula for the derivative of the inverse function.
- -In multivariable calculus, this theorem can be generalized to any continuously differentiable, vector-valued function whose Jacobian determinant is nonzero at a point in its domain, giving a formula for the Jacobian matrix of the inverse. There are also versions of the inverse function theorem for complex holomorphic functions, for differentiable maps between manifolds, for differentiable functions between Banach spaces, and so forth. - -For functions of a single variable, the theorem states that if $f$ is a continuously differentiable function with nonzero derivative at the point a; then $f$ is invertible in a neighborhood of a, the inverse is continuously differentiable, and the derivative of the inverse function at $b=f(a)$ is the reciprocal of the derivative of $f$ at $a$: -$$ -\bigl(f^{-1}\bigr)'(b) = \frac{1}{f'(a)} = \frac{1}{f'(f^{-1}(b))}. -$$ - -An alternate version, which assumes that $f$ is continuous and injective near a, and differentiable at a with a non-zero derivative, will also result in $f$ being invertible near a, with an inverse that's similarly continuous and injective, and where the above formula would apply as well. - -As a corollary, we see clearly that if $f$ is $k$-th differentiable, with nonzero derivative at the point a, then $f$ is invertible in a neighborhood of a, the inverse is also $k$-th differentiable. Here $k$ is a positive integer or $\infty$. - -For functions of more than one variable, the theorem states that if F is a continuously differentiable function from an open set of $\mathbb{R}^n$ into $\mathbb{R}^n$, and the total derivative is invertible at a point p (i.e., the Jacobian determinant of F at p is non-zero), then F is invertible near p: an inverse function to F is defined on some neighborhood of $q=F(p)$. - -Writing $F=(F_1,\ldots,F_n)$, this means that the system of n equations $y_i = F_i(x_1, \dots, x_n)$ has a unique solution for $x_1, \dots, x_n$ in terms of $y_1, \dots, y_n$, provided that we restrict x and y to small enough neighborhoods of p and q, respectively. - -In the infinite dimensional case, the theorem requires the extra hypothesis that the Fréchet derivative of F at p has a bounded inverse. - -Finally, the theorem says that the inverse function $F^{-1}$ is continuously differentiable, and its Jacobian derivative at $q=F(p)$ is the matrix inverse of the Jacobian of F at p: -$$ - J_{F^{-1}}(q) = [ J_F(p) ]^{-1}. -$$ - -The hard part of the theorem is the existence and differentiability of $F^{-1}$. Assuming this, the inverse derivative formula follows from the chain rule applied to $F^{-1}\circ F = \text{id}$: -$$ -I = J_{F^{-1} \circ F} (p) \ =\ J_{F^{-1}} (F(p)) \cdot J_F (p) \ =\ J_{F^{-1}} (q) \cdot J_F (p). -$$ - -Consider the vector-valued function $F:\mathbb{R}^2\to\mathbb{R}^2\!$ defined by: - - - -F(x,y)= - -\begin{bmatrix} - -{e^x \cos y}\\ - -{e^x \sin y}\\ - -\end{bmatrix}. - - - -The Jacobian matrix is: - - - -J_F(x,y)= - -\begin{bmatrix} - -{e^x \cos y} & {-e^x \sin y}\\ - -{e^x \sin y} & {e^x \cos y}\\ - -\end{bmatrix} - - - -with Jacobian determinant: - - - -\det J_F(x,y)= - -e^{2x} \cos^2 y + e^{2x} \sin^2 y= - -e^{2x}. - -\! - -The determinant $e^{2x}\!$ is nonzero everywhere. Thus the theorem guarantees that, for every point p in $\mathbb{R}^2\!$, there exists a neighborhood about p over which F is invertible. This does not mean F is invertible over its entire domain: in this case F is not even injective since it is periodic: $F(x,y)=F(x,y+2\pi)\!$. 
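The theorem's derivative formula can be checked numerically on this example. In the sketch below, the base point $p=(0.3, 0.4)$ and the finite-difference step are arbitrary illustrative choices; the local inverse is supplied by log-polar coordinates, which invert F wherever $y$ stays inside a single period.

```
import numpy as np

# F(x, y) = (e^x cos y, e^x sin y), as in the example above.
def F(v):
    x, y = v
    return np.array([np.exp(x) * np.cos(y), np.exp(x) * np.sin(y)])

def J_F(v):
    x, y = v
    return np.array([[np.exp(x) * np.cos(y), -np.exp(x) * np.sin(y)],
                     [np.exp(x) * np.sin(y),  np.exp(x) * np.cos(y)]])

# A local inverse near p: log-polar coordinates (valid for y in (-pi, pi)).
def F_inv(w):
    a, b = w
    return np.array([0.5 * np.log(a ** 2 + b ** 2), np.arctan2(b, a)])

p = np.array([0.3, 0.4])
q = F(p)

# Jacobian of the inverse at q = F(p), via central finite differences.
eps = 1e-6
J_inv_numeric = np.column_stack([
    (F_inv(q + eps * e) - F_inv(q - eps * e)) / (2 * eps)
    for e in np.eye(2)
])

# The theorem predicts J_{F^{-1}}(q) = [J_F(p)]^{-1}.
print(np.allclose(J_inv_numeric, np.linalg.inv(J_F(p)), atol=1e-5))  # True
```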
- -If one drops the assumption that the derivative is continuous, the function no longer need be invertible. For example $f(x) = x + 2x^2\sin(\tfrac1x)$ and $f(0)= 0$ has discontinuous derivative -$$ -f'\!(x) = 1 -2\cos(\tfrac1x) + 4x\sin(\tfrac1x) -$$ and $f'\!(0) = 1$, which vanishes arbitrarily close to $x=0$. These critical points are local max/min points of $f$, so $f$ is not one-to-one (and not invertible) on any interval containing $x=0$. Intuitively, the slope $f'\!(0)=1$ does not propagate to nearby points, where the slopes are governed by a weak but rapid oscillation. - -As an important result, the inverse function theorem has been given numerous proofs. The proof most commonly seen in textbooks relies on the contraction mapping principle, also known as the Banach fixed-point theorem (which can also be used as the key step in the proof of existence and uniqueness of solutions to ordinary differential equations). - -Since the fixed point theorem applies in infinite-dimensional (Banach space) settings, this proof generalizes immediately to the infinite-dimensional version of the inverse function theorem (see Generalizations below). - -An alternate proof in finite dimensions hinges on the extreme value theorem for functions on a compact set. - -Yet another proof uses Newton's method, which has the advantage of providing an effective version of the theorem: bounds on the derivative of the function imply an estimate of the size of the neighborhood on which the function is invertible. - -The inverse function theorem states that if $f$ is a C1 vector-valued function on an open set $U$, then $ \det f^\prime(a) \ne 0 $ if and only if there is a C1 vector-valued function $g$ defined near $b=f(a)$ with $g(f(x))=x$ near $ a$ and $f(g(y))=y$ near $b$. This was first established by Picard and Goursat using an iterative scheme: the basic idea is to prove a fixed point theorem using the contraction mapping theorem. Taking derivatives, it follows that $g^\prime(y)=f^\prime(g(y))^{-1}$. - -The chain rule implies that the matrices $f^\prime(a)$ and $ g^\prime(b)$ are each inverses. Continuity of $f$ and $g$ means that they are homeomorphisms that are each inverses locally. To prove existence, it can be assumed after an affine transformation that $f(0)=0$ and $f^\prime(0)=I$, so that $ a=b=0$. - -By the fundamental theorem of calculus if $u$ is a C1 function, $u(1)-u(0)= \int_0^1 u^\prime(t) dt$, so that $\|u(1)-u(0)\|\le \sup_{0\le t\le 1} \|u^\prime(t)\|$. Setting $u(t)=f(x+t(x^\prime -x)) - x-t(x^\prime-x)$, it follows that -$$ -\|f(x) - f(x^\prime) - x + x^\prime\| \le \|x -x^\prime\|\sup_{0\le t \le 1} \|f^\prime(x+t(x^\prime -x))-I\|. -$$ - -Now choose $\delta>0$ so that $\|f'(x) - I\| < {1\over 2}$ for $\|x\|< \delta$. Suppose that $\|y\|<\delta/2$ and define $x_n$ inductively by $x_0=0$ and $ x_{n+1}=x_n + y - f(x_n)$. The assumptions show that if $ \|x\|, \|x^\prime\| < \delta$ then -$$ -\|f(x)-f(x^\prime) - x + x^\prime\| \le \|x-x^\prime\|/2 -$$. - -In particular $f(x)=f(x^\prime)$ implies $x=x^\prime$. In the inductive scheme $\|x_n\| <\delta$ - -and $\|x_{n+1} - x_n\| < \delta/2^n$. Thus $(x_n)$ is a Cauchy sequence tending to $x$. By construction $f(x)=y$ as required. - -To check that $g=f^{-1}$ is C1, write $g(y+k) = x+h$ so that -$$ -f(x+h)=f(x)+k -$$. By the inequalities above, $\|h-k\| <\|h\|/2$ so that $\|h\|/2<\|k\| < 2\|h\|$. - -On the other hand if $A=f^\prime(x)$, then $\|A-I\|<1/2$. Using the geometric series for $B=I-A$, it follows that $\|A^{-1}\| < 2$. 
But then - - {\|g(y+k) -g(y) - f^\prime(g(y))^{-1}k \| \over \|k\|} - -= {\|h -f^\prime(x)^{-1}[f(x+h)-f(x)]\| \over \|k\|} - -\le 4 {\|f(x+h) - f(x) -f^\prime(x)h\|\over \|h\|} - -tends to 0 as $k$ and $h$ tend to 0, proving that $g$ is C1 with $g^\prime(y)=f^\prime(g(y))^{-1}$. - -The proof above is presented for a finite-dimensional space, but applies equally well for Banach spaces. If an invertible function $f$ is Ck with $k>1$, then so too is its inverse. This follows by induction using the fact that the map $F(A)=A^{-1}$ on operators is Ck for any $k$ (in the finite-dimensional case this is an elementary fact because the inverse of a matrix is given as the adjugate matrix divided by its determinant). - - The method of proof here can be found in the books of Henri Cartan, Jean Dieudonné, Serge Lang, Roger Godement and Lars Hörmander. - -The inverse function theorem can be rephrased in terms of differentiable maps between differentiable manifolds. In this context the theorem states that for a differentiable map $F: M \to N$ (of class $C^1$), if the differential of $F$, -$$ -dF_p: T_p M \to T_{F(p)} N -$$ - -is a linear isomorphism at a point $p$ in $M$ then there exists an open neighborhood $U$ of $p$ such that -$$ -F|_U: U \to F(U) -$$ - -is a diffeomorphism. Note that this implies that the connected components of M and N containing p and F(p) have the same dimension, as is already directly implied from the assumption that dFp is an isomorphism. - -If the derivative of F is an isomorphism at all points p in M then the map F is a local diffeomorphism. - -The inverse function theorem can also be generalized to differentiable maps between Banach spaces X and Y. Let U be an open neighbourhood of the origin in X and $F: U \to Y\!$ a continuously differentiable function, and assume that the Fréchet derivative $dF_0: X \to Y\!$ of F at 0 is a bounded linear isomorphism of X onto Y. Then there exists an open neighbourhood V of $F(0)\!$ in Y and a continuously differentiable map $G: V \to X\!$ such that $F(G(y)) = y$ for all y in V. Moreover, $G(y)\!$ is the only sufficiently small solution x of the equation $F(x) = y\!$. - -These two directions of generalization can be combined in the inverse function theorem for Banach manifolds. - -The inverse function theorem (and the implicit function theorem) can be seen as a special case of the constant rank theorem, which states that a smooth map with constant rank near a point can be put in a particular normal form near that point. Specifically, if $F:M\to N$ has constant rank near a point $p\in M\!$, then there are open neighborhoods U of p and V of $F(p)\!$ and there are diffeomorphisms $u:T_pM\to U\!$ and $v:T_{F(p)}N\to V\!$ such that $F(U)\subseteq V\!$ and such that the derivative $dF_p:T_pM\to T_{F(p)}N\!$ is equal to $v^{-1}\circ F\circ u\!$. That is, F "looks like" its derivative near p. The set of points $p\in M$ such that the rank is constant in a neighbourhood of $p$ is an open dense subset of M; this is a consequence of semicontinuity of the rank function. Thus the constant rank theorem applies to a generic point of the domain. - -When the derivative of F is injective (resp. surjective) at a point p, it is also injective (resp. surjective) in a neighborhood of p, and hence the rank of F is constant on that neighborhood, and the constant rank theorem applies. 
If a holomorphic function F is defined from an open set U of $\mathbb{C}^n\!$ into $\mathbb{C}^n\!$, and the Jacobian matrix of complex derivatives is invertible at a point p, then F is an invertible function near p. This follows immediately from the real multivariable version of the theorem. One can also show that the inverse function is again holomorphic. - -If it were true, the Jacobian conjecture would be a variant of the inverse function theorem for polynomials. It states that if a vector-valued polynomial function has a Jacobian determinant that is an invertible polynomial (that is a nonzero constant), then it has an inverse that is also a polynomial function. It is unknown whether this is true or false, even in the case of two variables. This is a major open problem in the theory of polynomials. - -When $f: \mathbb{R}^n \to \mathbb{R}^m$ with $m\leq n$, $f$ is $k$ times continuously differentiable, and the Jacobian $A=\nabla f(\overline{x})$ at a point $\overline{x}$ is of rank $m$, the inverse of $f$ may not be unique. However, there exists a local selection function $s$ such that $f(s(y)) = y$ for all $y$ in a neighborhood of $\overline{y} = f(\overline{x})$, $s(\overline{y}) = \overline{x}$, $s$ is $k$ times continuously differentiable in this neighborhood, and $\nabla s(\overline{y}) = A^T(A A^T)^{-1}$ ($\nabla s(\overline{y})$ is the Moore–Penrose pseudoinverse of $A$). diff --git a/wiki/wikipedia/4231.txt b/wiki/wikipedia/4231.txt deleted file mode 100644 index 78082a393fc3b84c9af120485ae6cd3bd750ffdb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4231.txt +++ /dev/null @@ -1,19 +0,0 @@ -Z-Push (presumably Z is for Zarafa) is a FOSS implementation of the Microsoft Exchange ActiveSync protocol which is used to synchronize email, personal contacts and other items between a central server and a mobile device. Note the difference between this protocol and an earlier (technologically unrelated) protocol named Microsoft ActiveSync. - -Z-Push enables any PHP-based groupware package to become fully syncable with any ActiveSync-compliant device. - -Currently, Z-Push includes four backends: the IMAP and the Maildir backend for e-mail synchronization, the vCard backend for contact synchronization and one for the Zarafa package, which is sold commercially, allowing full synchronization of E-mail, Calendar, Contacts and Tasks. - -There is also a 3rd party project that implements a Zimbra Backend allowing Z-Push to be used with a ZCS server (including the open-source edition). - -Since 2.3.0, released in July 2016, significant performance improvements have been achieved, as well as significantly lower memory usage. Connecting to Outlook 2013 and 2016 via EAS is also officially supported. With the optional Kopano Outlook Extension (available only for paid subscribers of Zarafa/Kopano), additional Outlook features are enabled such as Out of Office replies, Notes synchronisation, opening of shared and public folders and synchronisation of the Global Address Book. - -Z-Push is under active development with new releases approximately every month including bug fixes, improvements and new features. - -The Z-Push protocol is HTTP based, and uses WBXML (WAP Binary XML) as a communication layer, which is used for bi-directional communication between the PDA/cellular phone and the Server. - -Inside the protocol there is everything you expect from a synchronization protocol: the process of sending items from one side to the other, while keeping track of what has already been sent.
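That bookkeeping can be pictured with a toy sketch. The class and method names below are purely illustrative inventions and do not correspond to Z-Push's actual backend API (which is PHP-based); the point is only the shape of the idea: the server remembers, per device, which item revisions have already been sent, and each sync round streams only the changes.

```
# Toy illustration of per-device sync state, not Z-Push's real interface.
class ToySyncBackend:
    def __init__(self):
        self.items = {}        # item_id -> revision counter
        self.sync_state = {}   # device_id -> {item_id: last revision sent}

    def save_item(self, item_id):
        """Create or modify an item on the server, bumping its revision."""
        self.items[item_id] = self.items.get(item_id, 0) + 1

    def changes_for(self, device_id):
        """Yield items the device has not seen yet, one at a time (streamed)."""
        seen = self.sync_state.setdefault(device_id, {})
        for item_id, rev in self.items.items():
            if seen.get(item_id) != rev:
                yield item_id, rev
                seen[item_id] = rev  # mark as sent

backend = ToySyncBackend()
backend.save_item("mail:1")
backend.save_item("contact:7")
print(list(backend.changes_for("phone")))  # both items on the first sync
backend.save_item("mail:1")                # modified again on the server
print(list(backend.changes_for("phone")))  # only the changed item
```

Using a generator here mirrors the streaming point made below: items are handed over one at a time rather than accumulated in memory.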
Z-Push hides the complexity of handling these protocol requests from the backend developer, who only needs to implement various standard functions, like getting a list of items, and getting the data for a specific item. All that is needed is a good understanding of the WBXML object definitions and fields, and a developer can quite easily get the items of any groupware solution onto the PDA/cellular phone. - -Z-Push has various performance and usability-related features; for example, the entire architecture of the project is based on the idea that only one message should ever have to be in memory at one time, even when the server is sending hundreds of messages to a PDA. This may sound easy, but in most XML-based applications, the XML result data is built in-memory before being serialized to the network - exactly the opposite of what Z-Push does, as data is streamed to the client while it is read from the backend. This not only improves memory usage within PHP's restrictive limits, it also makes the progress bar on the client more user-friendly, as data starts arriving as soon as the synchronization request is made. Z-Push has provided a streaming WBXML encoder and decoder to make this happen. - -When a backend supports it, Z-Push can also make use of advanced features which bring server load down even lower, for example reading message changes directly from a 'diff' source, instead of comparing all the messages with whatever was in there last time. So if the groupware backend can provide a list of changes on-the-fly, then Z-Push can use this information almost instantaneously. Zarafa provides an incremental synchronization backend for its own MAPI-based solution here through their PHP-MAPI extension, enabling extremely low-load synchronizations. diff --git a/wiki/wikipedia/4232.txt b/wiki/wikipedia/4232.txt deleted file mode 100644 index 33de34423fae2ecd11130d8da0f946b0c514714f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4232.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, a property is any characteristic that applies to a given set. Rigorously, a property p defined for all elements of a set X is usually defined as a function p: X → {true, false}, that is true whenever the property holds; or equivalently, as the subset of X for which p holds; i.e. the set {x | p(x) = true}; p is its indicator function. However, it may be objected that the rigorous definition defines merely the extension of a property, and says nothing about what causes the property to hold for exactly those values. - -Examples of properties include the commutative property of real and complex numbers and the distributive property. diff --git a/wiki/wikipedia/4233.txt b/wiki/wikipedia/4233.txt deleted file mode 100644 index 261ebe388d606e09b8d1b6e93659d407465cc37a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4233.txt +++ /dev/null @@ -1,5 +0,0 @@ -Tuper Tario Tros. is a Flash platform video game first released on Newgrounds on December 24, 2009 by the French developer Swing Swing Submarine. It is a combination of Super Mario Bros. and Tetris, using mechanics from both games. The title of this game is a play on Super Mario Bros., replacing the first letter of each word with the "T" from Tetris. The game features Super Mario level 1-1. - -The game starts out with normal Super Mario Bros. 2D platformer gameplay. After a short while, however, the player is granted the ability to control Tetrimino blocks across the field of play.
The falling blocks, which are dropped by a Lakitu, can be stacked to become platforms that the player can use to help Mario cross large gaps or reach higher terrain. Pressing the spacebar switches between modes, either controlling Mario or controlling the falling blocks. The screen moves automatically forward to the right, and the player cannot go backwards. - -Chris Donlan of Edge wrote that the game appeared to have been put together quickly and as a result, its gameplay was occasionally inelegant. Jenni Lada of TechnologyTell appreciated how she could build a staircase to the flagpole at the end of the level. diff --git a/wiki/wikipedia/4234.txt b/wiki/wikipedia/4234.txt deleted file mode 100644 index e2417a69803890f722cacfb6b5a00e08ecc2cade..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4234.txt +++ /dev/null @@ -1,55 +0,0 @@ -In geometry, Stewart's theorem yields a relation between the lengths of the sides and the length of a cevian in a triangle. Its name is in honour of the Scottish mathematician Matthew Stewart, who published the theorem in 1746. - -Let $a$, $b$, and $c$ be the lengths of the sides of a triangle. Let $d$ be the length of a cevian to the side of length $a$. If the cevian divides the side of length $a$ into two segments of length $m$ and $n$, with $m$ adjacent to $c$ and $n$ adjacent to $b$, then Stewart's theorem states that $$b^2m + c^2n = a(d^2 + mn).$$ - -The theorem may be written more symmetrically using signed lengths of segments. That is, take the length AB to be positive or negative according to whether A is to the left or right of B in some fixed orientation of the line. In this formulation, the theorem states that if A, B, and C are collinear points, and P is any point, then $$PA^2\cdot BC + PB^2\cdot CA + PC^2\cdot AB + AB\cdot BC\cdot CA =0.$$ - -In the special case that the cevian is the median (that is, it divides the opposite side into two segments of equal length), the result is known as Apollonius' theorem. - -A common mnemonic used by students to memorize the theorem is: A man and his dad put a bomb in the sink ($man + dad = bmb + cnc$). - -The theorem can be proved as an application of the law of cosines. - -Let θ be the angle between m and d and θ′ the angle between n and d. Then θ′ is the supplement of θ, and so cos θ′ = −cos θ. Applying the law of cosines in the two small triangles using angles θ and θ′ produces $$\begin{align} c^2 &= m^2 + d^2 - 2dm\cos\theta \\ b^2 &= n^2 + d^2 - 2dn\cos\theta' \\ &= n^2 + d^2 + 2dn\cos\theta. \end{align}$$ - -Multiplying the first equation by n and the third equation by m and adding them eliminates cos θ. One obtains $$\begin{align} &b^2m + c^2n \\ &= nm^2 + n^2m + (m+n)d^2 \\ &= (m+n)(mn + d^2) \\ &= a(mn + d^2), \end{align}$$ which is the required equation. - -Alternatively, the theorem can be proved by drawing a perpendicular from the vertex of the triangle to the base and using the Pythagorean theorem to write the distances b, c, and d in terms of the altitude. The left and right hand sides of the equation then reduce algebraically to the same expression. - -According to Hutton, Stewart published the result in 1746 when he was a candidate to replace Colin Maclaurin as Professor of Mathematics at the University of Edinburgh. Coxeter and Greitzer state that the result was probably known to Archimedes around 300 B.C.E. They go on to say (mistakenly) that the first known proof was provided by R. Simson in 1751.
Hutton states that the result was used by Simson in 1748 and by Simpson in 1752, and that its first appearance in Europe was given by Lazare Carnot in 1803. diff --git a/wiki/wikipedia/4235.txt b/wiki/wikipedia/4235.txt deleted file mode 100644 index 564bde7fd98e82aea3d6dbfb15b5d97ca2c8c09e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4235.txt +++ /dev/null @@ -1,21 +0,0 @@ -The Girvan–Newman algorithm (named after Michelle Girvan and Mark Newman) is a hierarchical method used to detect communities in complex systems. - -The Girvan–Newman algorithm detects communities by progressively removing edges from the original network. The connected components of the remaining network are the communities. Instead of trying to construct a measure that tells us which edges are the most central to communities, the Girvan–Newman algorithm focuses on edges that are most likely "between" communities. - -Vertex betweenness is an indicator of highly central nodes in networks. For any node $i$, vertex betweenness is defined as the fraction of shortest paths between pairs of nodes that run through it. It is relevant to models where the network modulates transfer of goods between known start and end points, under the assumption that such transfer seeks the shortest available route. - -The Girvan–Newman algorithm extends this definition to the case of edges, defining the "edge betweenness" of an edge as the number of shortest paths between pairs of nodes that run along it. If there is more than one shortest path between a pair of nodes, each path is assigned equal weight such that the total weight of all of the paths is equal to unity. If a network contains communities or groups that are only loosely connected by a few inter-group edges, then all shortest paths between different communities must go along one of these few edges. Thus, the edges connecting communities will have high edge betweenness (at least one of them). By removing these edges, the groups are separated from one another and so the underlying community structure of the network is revealed. - -The algorithm's steps for community detection are summarized below - -# The betweenness of all existing edges in the network is calculated first. - -# The edge(s) with the highest betweenness are removed. - -# The betweenness of all edges affected by the removal is recalculated. - -# Steps 2 and 3 are repeated until no edges remain. - -Recalculating only the betweennesses affected by the removal may lessen the algorithm's running time in practice. However, the betweenness centrality must be recalculated at each step, or severe errors occur. The reason is that the network adapts itself to the new conditions after an edge is removed. For instance, if two communities are connected by more than one edge, there is no guarantee that all of these edges will have high betweenness; the method only guarantees that at least one of them will. By recalculating betweennesses after the removal of each edge, it is ensured that at least one of the remaining edges between two communities will always have a high value. - -The end result of the Girvan–Newman algorithm is a dendrogram. As the Girvan–Newman algorithm runs, the dendrogram is produced from the top down (i.e. the network splits up into different communities with the successive removal of links). The leaves of the dendrogram are individual nodes.
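As a concrete illustration of the steps above, the algorithm is available in standard graph libraries. Here is a minimal sketch in Python, assuming the networkx library and its bundled karate-club example graph (both choices are illustrative, not part of the original algorithm description):

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman

G = nx.karate_club_graph()   # a small social network with known community structure

# girvan_newman returns an iterator over successive levels of the dendrogram,
# produced top-down as the edges of highest betweenness are removed.
levels = girvan_newman(G)
first_split = next(levels)   # the first split into two communities
print([sorted(c) for c in first_split])
```

Each subsequent `next(levels)` yields a finer partition, mirroring the top-down production of the dendrogram described above.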
diff --git a/wiki/wikipedia/4236.txt b/wiki/wikipedia/4236.txt deleted file mode 100644 index dadcce122f43ff8ec330f26d161cdc59de8baca7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4236.txt +++ /dev/null @@ -1,22 +0,0 @@ -In arithmetic combinatorics, the corners theorem states that for every $\varepsilon>0$, for large enough $N$, any set of at least $\varepsilon N^2$ points in the $N\times N$ grid $\{1,\ldots,N\}^2$ contains a corner, i.e., a triple of points of the form $\{(x,y), (x+h,y), (x,y+h)\}$ with $h\ne 0$. It was first proved by Miklós Ajtai and Endre Szemerédi in 1974 using Szemerédi's theorem. In 2003, József Solymosi gave a short proof using the triangle removal lemma. - -Define a corner to be a subset of $\mathbb{Z}^2$ of the form $\{(x,y), (x+h,y), (x,y+h)\}$, where $x,y,h\in \mathbb{Z}$ and $h\ne 0$. For every $\varepsilon>0$, there exists a positive integer $N(\varepsilon)$ such that for any $N\ge N(\varepsilon)$, any subset $A\subseteq\{1,\ldots,N\}^2$ with size at least $\varepsilon N^2$ contains a corner. - -The condition $h\ne 0$ can be relaxed to $h>0$ by showing that if $A$ is dense, then it has some dense subset that is centrally symmetric. - -What follows is a sketch of Solymosi's argument. - -Suppose $A\subset\{1,\ldots,N\}^2$ is corner-free. Construct an auxiliary tripartite graph $G$ with parts $X=\{x_1,\ldots,x_N\}$, $Y=\{y_1,\ldots,y_N\}$, and $Z=\{z_1,\ldots,z_{2N}\}$, where $x_i$ corresponds to the line $x=i$, $y_j$ corresponds to the line $y=j$, and $z_k$ corresponds to the line $x+y=k$. Connect two vertices if the intersection of their corresponding lines lies in $A$. - -Note that a triangle in $G$ corresponds to a corner in $A$, except in the trivial case where the lines corresponding to the vertices of the triangle concur at a point in $A$. It follows that every edge of $G$ is in exactly one triangle, so by the triangle removal lemma, $G$ has $o(|V(G)|^2)$ edges, so $|A|=o(N^2)$, as desired. - -Let $r_{\angle}(N)$ be the size of the largest subset of $[N]^2$ which contains no corner. The best known bounds are $$\frac{N^2}{2^{(c_1+o(1))\sqrt{\log_2 N}}}\le r_{\angle}(N)\le \frac{N^2}{(\log\log N)^{c_2}},$$ where $c_1\approx 1.822$ and $c_2\approx 0.0137$. The lower bound is due to Green, building on the work of Linial and Shraibman. The upper bound is due to Shkredov. - -A corner in $\mathbb{Z}^d$ is a set of points of the form $\{a\}\cup\{a+he_i:1\le i\le d\}$, where $e_1,\ldots,e_d$ is the standard basis of $\mathbb{R}^d$, and $h\ne 0$. The natural extension of the corners theorem to this setting can be shown using the hypergraph removal lemma, in the spirit of Solymosi's proof. The hypergraph removal lemma was shown independently by Gowers and Nagle, Rödl, Schacht and Skokan. - -The multidimensional Szemerédi theorem states that for any fixed finite subset $S\subseteq\mathbb{Z}^d$, and for every $\varepsilon>0$, there exists a positive integer $N(S,\varepsilon)$ such that for any $N\ge N(S,\varepsilon)$, any subset $A\subseteq\{1,\ldots,N\}^d$ with size at least $\varepsilon N^d$ contains a subset of the form $a\cdot S+h$. This theorem follows from the multidimensional corners theorem by a simple projection argument. In particular, Roth's theorem follows directly from the ordinary corners theorem.
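For small point sets the definition itself is easy to test by brute force. A minimal Python sketch (the helper name `has_corner` is ours; this runs in O(|A|^2) time and is purely illustrative, with no bearing on the asymptotic statements above):

```python
def has_corner(A):
    """Return True if A (a collection of integer points (x, y)) contains
    a corner {(x, y), (x+h, y), (x, y+h)} with h != 0."""
    S = set(A)
    for (x, y) in S:
        for (a, b) in S:
            h = a - x
            # (a, b) plays the role of (x+h, y); then check the third point.
            if h != 0 and b == y and (x, y + h) in S:
                return True
    return False

print(has_corner([(1, 1), (2, 1), (1, 2)]))   # True  (a corner with h = 1)
print(has_corner([(1, 1), (2, 2), (3, 3)]))   # False (a diagonal has no corner)
```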
diff --git a/wiki/wikipedia/4237.txt b/wiki/wikipedia/4237.txt deleted file mode 100644 index 863ebbe3b2824df4c18f12d9b57a43978511c262..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4237.txt +++ /dev/null @@ -1,28 +0,0 @@ -Approximate max-flow min-cut theorems are mathematical propositions in network flow theory. They deal with the relationship between maximum flow rate ("max-flow") and minimum cut ("min-cut") in a multi-commodity flow problem. The theorems have enabled the development of approximation algorithms for use in graph partition and related problems. - -A "commodity" in a network flow problem is a pair of source and sink nodes. In a multi-commodity flow problem, there are k≥1 commodities, each with its own source $s_{i}$, sink $t_{i}$, and demand $D_{i}$. The objective is to simultaneously route $D_{i}$ units of commodity i from $s_{i}$ to $t_{i}$ for each i, such that the total amount of all commodities passing through any edge is no greater than its capacity. (In the case of undirected edges, the sum of the flows in both directions cannot exceed the capacity of the edge). - -In particular, a 1-commodity (or single commodity) flow problem is also known as a maximum flow problem. By the max-flow min-cut theorem, which can be proved via the Ford–Fulkerson algorithm, the max-flow and min-cut are always equal in a 1-commodity flow problem. - -In a multicommodity flow problem, max-flow is the maximum value of f, where f is the common fraction of each commodity that is routed, such that $fD_{i}$ units of commodity i can be simultaneously routed for each i without violating any capacity constraints. - -The min-cut is the minimum over all cuts of the ratio $\varphi$ of the capacity of the cut to the demand of the cut. - -Max-flow is always upper bounded by the min-cut for a multicommodity flow problem. - -In a uniform multicommodity flow problem, there is a commodity for every pair of nodes and the demand for every commodity is the same. (Without loss of generality, the demand for every commodity is set to one.) The underlying network and capacities are arbitrary. - -T. C. Hu showed that the max-flow and min-cut are always equal for two commodities. Okamura and Seymour illustrated a 4-commodity flow problem with max-flow equal to 3/4 and min-cut equal to 1. Shahrokhi and Matula also proved that the max-flow and min-cut are equal provided the dual of the flow problem satisfies a certain cut condition in a uniform multicommodity flow problem. Many other researchers have also published concrete results on similar problems. - -For a general network flow problem, the max-flow is within a factor of k of the min-cut since each commodity can be optimized separately using $1/k$ of the capacity of each edge. This is a weak bound, especially when the number of commodities is large. - -and then extended in 1999. - -defined the forwarding index of the embedding to be the maximum number of paths (each corresponding to an edge of $K_n$) that pass through any node of G. The objective is to find an embedding that minimizes the forwarding index. Using embedding approaches - -uses the approximation algorithm for balanced separators to find a set of $$O\left((R\log n + \sqrt{nR})\log\frac{n}{R}\right)$$ edges whose removal from a bounded-degree graph G results in a planar graph, where R is the minimum number of edges that need to be removed from G before it becomes planar. It remains an open question if there is a polylog n times optimal approximation algorithm for R.
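The 1-commodity equality noted above is easy to observe with a standard graph library. A small Python sketch, assuming networkx; the toy network and its capacities are made up:

```python
import networkx as nx

# A toy single-commodity network with edge capacities.
G = nx.DiGraph()
G.add_edge("s", "a", capacity=3)
G.add_edge("s", "b", capacity=2)
G.add_edge("a", "b", capacity=1)
G.add_edge("a", "t", capacity=2)
G.add_edge("b", "t", capacity=3)

# For one commodity, max-flow equals min-cut exactly.
print(nx.maximum_flow_value(G, "s", "t"))   # 5
print(nx.minimum_cut_value(G, "s", "t"))    # 5
```

With k ≥ 2 commodities no such exact equality holds in general, which is precisely the gap the approximate theorems above quantify.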
diff --git a/wiki/wikipedia/4238.txt b/wiki/wikipedia/4238.txt deleted file mode 100644 index d2d5beced19827556dec8c2da61052f8b70a2735..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4238.txt +++ /dev/null @@ -1,105 +0,0 @@ -This article is a list of notable unsolved problems in computer science. A problem in computer science is considered unsolved when no solution is known, or when experts in the field disagree about proposed solutions. - -* P versus NP problem - -* What is the relationship between BQP and NP? - -* NC = P problem - -* NP = co-NP problem - -* P = BPP problem - -* P = PSPACE problem - -* L = NL problem - -* PH = PSPACE problem - -* L = P problem - -* L = RL problem - -* Unique games conjecture - -* Is the exponential time hypothesis true? - -** Is the strong exponential time hypothesis (SETH) true? - -* Do one-way functions exist? - -** Is public-key cryptography possible? - -* Log-rank conjecture - -* Can integer factorization be done in polynomial time on a classical (non-quantum) computer? - -* Can the discrete logarithm be computed in polynomial time on a classical (non-quantum) computer? - -* Can the shortest vector of a lattice be computed in polynomial time on a classical or quantum computer? - -* Can clustered planar drawings be found in polynomial time? - -* Can the graph isomorphism problem be solved in polynomial time? - -* Can leaf powers and k-leaf powers be recognized in polynomial time? - -* Can parity games be solved in polynomial time? - -* Can the rotation distance between two binary trees be computed in polynomial time? - -* Can graphs of bounded clique-width be recognized in polynomial time? - -* Can one find a simple closed quasigeodesic on a convex polyhedron in polynomial time? - -* Can a simultaneous embedding with fixed edges for two given graphs be found in polynomial time? - -* The dynamic optimality conjecture: do splay trees have a bounded competitive ratio? - -* Is there a k-competitive online algorithm for the k-server problem? - -* Can a depth-first search tree be constructed in NC? - -* Can the fast Fourier transform be computed in $o(n \log n)$ time? - -* What is the fastest algorithm for multiplication of two n-digit numbers? - -* What is the lowest possible average-case time complexity of Shellsort with a deterministic, fixed gap sequence? - -* Can 3SUM be solved in strongly sub-quadratic time, that is, in time $O(n^{2-\epsilon})$ for some $\epsilon>0$? - -* Can the edit distance between two strings of length n be computed in strongly sub-quadratic time? (This is only possible if the strong exponential time hypothesis is false.) - -* Can X + Y sorting be done in $o(n^2 \log n)$ time? - -* What is the fastest algorithm for matrix multiplication? - -* Can all-pairs shortest paths be computed in strongly sub-cubic time, that is, in time $O(V^{3-\epsilon})$ for some $\epsilon>0$? - -* Can the Schwartz–Zippel lemma for polynomial identity testing be derandomized? - -* Does linear programming admit a strongly polynomial-time algorithm? (This is problem #9 in Smale's list of problems.) - -* How many queries are required for envy-free cake-cutting? - -* What is the algorithmic complexity of the minimum spanning tree problem? Equivalently, what is the decision tree complexity of the MST problem? The optimal algorithm to compute MSTs is known, but it relies on decision trees, so its complexity is unknown. - -* Is there any perfect syllabification algorithm in the English language? - -* Is there any perfect stemming algorithm in the English language?
- -* Is there any perfect phrase chunking algorithm in the English language? - -* How can computers discern pronoun ambiguity in the English language? (Also known as the Winograd Schema Challenge). - -* POPLmark - -* Barendregt–Geuvers–Klop conjecture - -* Aanderaa–Karp–Rosenberg conjecture - -* Černý Conjecture - -* Generalized star height problem - -* Separating words problem diff --git a/wiki/wikipedia/4239.txt b/wiki/wikipedia/4239.txt deleted file mode 100644 index ec2da5975eaa781b39d0b440eedda01fdda8ac97..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4239.txt +++ /dev/null @@ -1,31 +0,0 @@ -In discrete mathematics, the social golfer problem (SGP) is a combinatorial-design problem derived from a question posted in the usenet newsgroup sci.op-research in May 1998. It is listed as Problem 010 in the online test problem library CSPLib.
- -A group of 32 golfers plays golf once a week in groups of 4. Schedule these golfers to play for as many weeks as possible without any two golfers playing in the same group more than once.
- -More generally, this problem can be defined for any $n = g \times s$ golfers who play in $g$ groups of $s$ golfers for $w$ weeks. The solution involves either affirming or denying the existence of a schedule and, if such a schedule exists, determining the number of unique schedules and constructing them. - -The SGP is a challenging problem to solve for two main reasons: - -First is the large search space resulting from the combinatorial and highly symmetrical nature of the problem. There are a total of $(n!)^w$ schedules in the search space. For each schedule, the weeks $(w!)$, groups within each week $(g!)$, players within each group $(s!)$, and individual players $(n!)$ can all be permuted. This leads to a total of $w! \times g! \times s! \times n!$ isomorphisms, schedules that are identical through any of these symmetry operations. Due to its high symmetry, the SGP is commonly used as a standard benchmark in symmetry breaking in constraint programming (symmetry-breaking constraints). - -Second is the choice of variables. The SGP can be seen as an optimization problem to maximize the number of weeks in the schedule. Hence, incorrectly defined initial points and other variables in the model can lead the process to an area in the search space with no solution. - -The SGP is the Steiner system S(2,4,32) because 32 golfers are divided into groups of 4 and both the group and week assignments of any 2 golfers can be uniquely identified. Soon after the problem was proposed in 1998, a solution for 9 weeks was found and the existence of a solution for 11 weeks was proven to be impossible. In the case of the latter, note that each player must play with 3 unique players each week. For a schedule lasting 11 weeks, a player would be grouped with a total of $3 \times 11 = 33$ other players. Since there are only 31 other players, this is not possible. In 2004, Alejandro Aguado found a solution for 10 weeks. - -There are many approaches to solving the SGP, namely - -design theory techniques, - -SAT formulations (propositional satisfiability problem), - -constraint-based approaches, metaheuristic methods, and the radix approach. - -The radix approach assigns golfers into groups based on the addition of numbers in base $k$. Variables in the general case of the SGP can be redefined as $n = s^k$ golfers who play in $g = s^{k-1}$ groups of $s$ golfers for any number $k$. The maximum number of weeks that these golfers can play without regrouping any two golfers is $(k^n-1)/(k-1)$. - -Working in groups is encouraged in classrooms because it fosters active learning and development of critical-thinking and communication skills. The SGP has been used to assign students into groups in undergraduate chemistry classes and breakout rooms in online meeting software to maximize student interaction and socialization. - -The SGP has also been used as a model to study tournament scheduling. diff --git a/wiki/wikipedia/424.txt b/wiki/wikipedia/424.txt deleted file mode 100644 index 2472b01bd7fd05d90e3d68486466369073ca3dc0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/424.txt +++ /dev/null @@ -1,21 +0,0 @@ -In computer science, the predecessor problem involves maintaining a set of items so that, given an element, one can efficiently query which element precedes or succeeds it in an order. Data structures used to solve the problem include balanced binary search trees, van Emde Boas trees, and fusion trees.
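For intuition, a sorted array already supports both queries of the static version of the problem in $O(\log n)$ time, matching the balanced-BST bound discussed below. A minimal Python sketch using the standard bisect module (the class and method names are ours):

```python
import bisect

class StaticPredecessor:
    """Predecessor/successor queries over a fixed set of integers,
    via binary search on a sorted array: O(log n) per query."""

    def __init__(self, items):
        self.a = sorted(set(items))

    def predecessor(self, x):
        i = bisect.bisect_right(self.a, x)        # first index with a[i] > x
        return self.a[i - 1] if i > 0 else None   # largest element <= x

    def successor(self, x):
        i = bisect.bisect_left(self.a, x)         # first index with a[i] >= x
        return self.a[i] if i < len(self.a) else None

s = StaticPredecessor([2, 3, 5, 7, 11])
print(s.predecessor(6), s.successor(6))   # 5 7
```

The van Emde Boas and fusion-tree structures discussed below beat this bound by exploiting the word size w.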
In the static predecessor problem, the set of elements does not change, but in the dynamic predecessor problem, insertions into and deletions from the set are allowed. - -The predecessor problem is a simple case of the nearest neighbor problem, and data structures that solve it have applications in problems like integer sorting. - -The problem consists of maintaining a set S, whose elements are drawn from a universe of U integers. Each of these integers can be stored with a word size of w, implying that $U \le 2^w$. Data structures that solve the problem support these operations: - -* predecessor(x), which returns the largest element in S less than or equal to x - -* successor(x), which returns the smallest element in S greater than or equal to x - -In addition, data structures which solve the dynamic version of the problem also support these operations: - -* insert(x), which adds x to the set S - -* delete(x), which removes x from the set S - -The problem is typically analyzed in a transdichotomous model of computation such as word RAM. - -One simple solution to this problem is to use a balanced binary search tree, which achieves (in Big O notation) a running time of $O(\log n)$ for predecessor queries. The Van Emde Boas tree achieves a query time of $O(\log \log U)$, but requires $O(U)$ space. Fusion trees, introduced by Michael Fredman and Dan Willard, achieve $O(\log_w n)$ query time and $O(n)$ space for the static problem. The dynamic problem has been solved using exponential trees with $O(\log_w n + \log \log n)$ query time, and with expected time $O(\log_w n)$ using hashing. - -There have been a number of attempts to prove lower bounds on the predecessor problem, or find what the running time of asymptotically optimal solutions would be. For example, Paul Beame and Faith Ellen proved that for all values of w, there exists a value of n with query time (in Big Omega notation) $\Omega\left(\tfrac{\log w}{\log \log w}\right)$, and similarly, for all values of n, there exists a value of w such that the query time is $\Omega\left(\sqrt{\tfrac{\log n}{\log \log n}}\right)$. Other lower-bound proofs rely on notions from communication complexity. diff --git a/wiki/wikipedia/4240.txt b/wiki/wikipedia/4240.txt deleted file mode 100644 index 3745b2ec5672a57e65208a2cec9aae495c4f0960..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4240.txt +++ /dev/null @@ -1,22 +0,0 @@ -In mathematics, in the field of ordinary differential equations, Sturm separation theorem, named after Jacques Charles François Sturm, describes the location of roots of solutions of homogeneous second order linear differential equations. Basically, the theorem states that given two linearly independent solutions of such an equation the zeros of the two solutions are alternating. - -Given a homogeneous second order linear differential equation and two continuous linearly independent solutions u(x) and v(x) with x0 and x1 successive roots of u(x), then v(x) has exactly one root in the open interval (x0, x1). It is a special case of the Sturm-Picone comparison theorem. - -Since $u$ and $v$ are linearly independent it follows that the Wronskian $W[u,v]$ must satisfy $W[u,v](x)\equiv W(x)\neq 0$ for all $x$ where the differential equation is defined, say $I$. Without loss of generality, suppose that $W(x)<0$ for all $x\in I$. Then $$u(x)v'(x)-u'(x)v(x)\neq 0.$$
So at $x=x_0$ $$W(x_0)=-u'\left(x_0\right)v\left(x_0\right)$$ and either $u'\left(x_0\right)$ and $v\left(x_0\right)$ are both positive or both negative. Without loss of generality, suppose that they are both positive. Now, at $x=x_1$ $$W(x_1)=-u'\left(x_1\right)v\left(x_1\right)$$ and since $x=x_0$ and $x=x_1$ are successive zeros of $u(x)$ it follows that $u'\left(x_1\right)<0$. Thus, to keep $W(x)<0$ we must have $v\left(x_1\right)<0$. We see this by observing that if $u'(x)>0$ for all $x\in \left(x_0,x_1\right]$ then $u(x)$ would be increasing (away from the $x$-axis), which would never lead to a zero at $x=x_1$. So for a zero to occur at $x=x_1$ we must have $u'\left(x_1\right)\leq 0$; and since the Wronskian never vanishes, in fact $u'\left(x_1\right)<0$. So somewhere in the interval $\left(x_0,x_1\right)$ the sign of $v(x)$ changed. By the Intermediate Value Theorem there exists $x^*\in\left(x_0,x_1\right)$ such that $v\left(x^*\right)=0$. - -On the other hand, there can be only one zero in $\left(x_0,x_1\right)$, because otherwise v would have two zeros and there would be no zeros of u in between, and it was just proved that this is impossible. diff --git a/wiki/wikipedia/4241.txt b/wiki/wikipedia/4241.txt deleted file mode 100644 index 68bd319c7a736640bf4539d74598157a9deece06..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4241.txt +++ /dev/null @@ -1,93 +0,0 @@ -The M. Riesz extension theorem is a theorem in mathematics, proved by Marcel Riesz during his study of the problem of moments. - -Let $E$ be a real vector space, $F\subset E$ be a vector subspace, and $K\subset E$ be a convex cone. - -A linear functional $\phi: F\to\mathbb{R}$ is called $K$-positive, if it takes only non-negative values on the cone $K$: $$\phi(x) \geq 0 \quad \text{for} \quad x \in F \cap K.$$ - -A linear functional $\psi: E\to\mathbb{R}$ is called a $K$-positive extension of $\phi$, if it is identical to $\phi$ in the domain of $\phi$, and also returns a value of at least 0 for all points in the cone $K$: $$\psi|_F = \phi \quad \text{and} \quad \psi(x) \geq 0\quad \text{for} \quad x \in K.$$ - -In general, a $K$-positive linear functional on $F$ cannot be extended to a $K$-positive linear functional on $E$. Already in two dimensions one obtains a counterexample. Let $E=\mathbb{R}^2,\ K=\{(x,y): y>0\}\cup\{(x,0): x>0\},$ and $F$ be the $x$-axis. The positive functional $\phi(x,0)=x$ cannot be extended to a positive functional on $E$. - -However, the extension exists under the additional assumption that $E\subset K+F,$ namely for every $y\in E,$ there exists an $x\in F$ such that $y-x\in K.$ - -The proof is similar to the proof of the Hahn–Banach theorem (see also below). - -By transfinite induction or Zorn's lemma it is sufficient to consider the case $\dim E/F = 1$. - -Choose any $y \in E \setminus F$. Set $$a = \sup \{ \phi(x) \mid x \in F, \ y-x \in K \},\ b = \inf \{ \phi(x) \mid x \in F, x-y \in K \}.$$ - -We will prove below that $-\infty < a \le b$. For now, choose any $c$ satisfying $a \le c \le b$, and set $\psi(y) = c$, $\psi|_F = \phi$, and then extend $\psi$ to all of $E$ by linearity. We need to show that $\psi$ is $K$-positive. Suppose $z \in K$.
Then either $z = 0$, or $z = p(x + y)$ or $z = p(x - y)$ for some $p > 0$ and $x \in F$. If $z = 0$, then $\psi(z) = 0$. In the first remaining case $x + y = y -(-x) \in K$, and so $$\psi(y) = c \geq a \geq \phi(-x) = \psi(-x)$$ by definition. Thus $$\psi(z) = p\psi(x+y) = p(\psi(x) + \psi(y)) \geq 0.$$ In the second case, $x - y \in K$, and so similarly $$\psi(y) = c \leq b \leq \phi(x) = \psi(x)$$ by definition and so $$\psi(z) = p\psi(x-y) = p(\psi(x)-\psi(y)) \geq 0.$$ In all cases, $\psi(z) \geq 0$, and so $\psi$ is $K$-positive. - -We now prove that $-\infty < a \le b$. Notice by assumption there exists at least one $x \in F$ for which $y - x \in K$, and so $-\infty < a$. However, it may be the case that there are no $x \in F$ for which $x - y \in K$, in which case $b = \infty$ and the inequality is trivial (in this case notice that the third case above cannot happen). Therefore, we may assume that $b < \infty$ and there is at least one $x \in F$ for which $x - y \in K$. To prove the inequality, it suffices to show that whenever $x \in F$ and $y - x \in K$, and $x' \in F$ and $x' - y \in K$, then $\phi(x) \le \phi(x')$. Indeed, $$x' -x = (x' - y) + (y-x) \in K$$ since $K$ is a convex cone, and so $$0 \leq \phi(x'-x) = \phi(x')-\phi(x)$$ since $\phi$ is $K$-positive. - -Let E be a real linear space, and let K ⊂ E be a convex cone. Let x ∈ E\(-K) be such that R x + K = E. Then there exists a K-positive linear functional φ: E → R such that φ(x) > 0. - -The Hahn–Banach theorem can be deduced from the M. Riesz extension theorem. - -Let V be a linear space, and let N be a sublinear function on V. Let φ be a functional on a subspace U ⊂ V that is dominated by N: $$\phi(x) \leq N(x), \quad x \in U.$$ - -The Hahn-Banach theorem asserts that φ can be extended to a linear functional on V that is dominated by N. - -To derive this from the M. Riesz extension theorem, define a convex cone K ⊂ R×V by $$K = \left\{ (a, x) \mid N(x) \leq a \right\}.$$ - -Define a functional φ1 on R×U by $$\phi_1(a, x) = a - \phi(x).$$ - -One can see that φ1 is K-positive, and that K + (R × U) = R × V. Therefore φ1 can be extended to a K-positive functional ψ1 on R×V. Then $$\psi(x) = - \psi_1(0, x)$$ is the desired extension of φ. Indeed, if ψ(x) > N(x), we have: (N(x), x) ∈ K, whereas $$\psi_1(N(x), x) = N(x) - \psi(x) < 0,$$ leading to a contradiction. diff --git a/wiki/wikipedia/4242.txt b/wiki/wikipedia/4242.txt deleted file mode 100644 index 4d3af6ecca5645764cfdfbd3b92c6bfc54291fd5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4242.txt +++ /dev/null @@ -1,10 +0,0 @@ -In number theory, the second Hardy–Littlewood conjecture concerns the number of primes in intervals. Along with the first Hardy–Littlewood conjecture, the second Hardy–Littlewood conjecture was proposed by G. H. Hardy and John Edensor Littlewood in 1923. - -The conjecture states that $$\pi(x+y) \leq \pi(x) + \pi(y)$$ for integers x, y ≥ 2, where π(x) denotes the prime-counting function, giving the number of prime numbers up to and including x. - -The statement of the second Hardy–Littlewood conjecture is equivalent to the statement that the number of primes from x + 1 to x + y is always less than or equal to the number of primes from 1 to y. This was proved to be inconsistent with the first Hardy–Littlewood conjecture on prime k-tuples, and the first violation is expected to occur for very large values of x.
For example, an admissible k-tuple (or prime constellation) of 447 primes can be found in an interval of y = 3159 integers, while π(3159) = 446. If the first Hardy–Littlewood conjecture holds, then the first such k-tuple is expected for x greater than $1.5 \times 10^{174}$ but less than $2.2 \times 10^{1198}$. diff --git a/wiki/wikipedia/4243.txt b/wiki/wikipedia/4243.txt deleted file mode 100644 index 31d1463c1dbd33e79f43c4e6e696fa483e202671..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4243.txt +++ /dev/null @@ -1,23 +0,0 @@ -There is also a Brauer's theorem on induced characters. - -In mathematics, Brauer's theorem, named for Richard Brauer, is a result on the representability of 0 by forms over certain fields in sufficiently many variables. - -Let K be a field such that for every integer r > 0 there exists an integer ψ(r) such that for n ≥ ψ(r) every equation $$(*)\qquad a_1x_1^r+\cdots+a_nx_n^r=0,\quad a_i\in K,\quad i=1,\ldots,n$$ has a non-trivial (i.e. not all xi are equal to 0) solution in K. - -Then, given homogeneous polynomials f1,...,fk of degrees r1,...,rk respectively with coefficients in K, for every set of positive integers r1,...,rk and every non-negative integer l, there exists a number ω(r1,...,rk,l) such that for n ≥ ω(r1,...,rk,l) there exists an l-dimensional affine subspace M of $K^n$ (regarded as a vector space over K) satisfying $$f_1(x_1,\ldots,x_n)=\cdots=f_k(x_1,\ldots,x_n)=0,\quad\forall(x_1,\ldots,x_n)\in M.$$ - -Letting K be the field of p-adic numbers in the theorem, the equation (*) is satisfied, since $\mathbb{Q}_p^*/\left(\mathbb{Q}_p^*\right)^b$, b a natural number, is finite. Choosing k = 1, one obtains the following corollary: - -A homogeneous equation f(x1,...,xn) = 0 of degree r in the field of p-adic numbers has a non-trivial solution if n is sufficiently large. - -One can show that if n is sufficiently large according to the above corollary, then n is greater than $r^2$. Indeed, Emil Artin conjectured that every homogeneous polynomial of degree r over Qp in more than $r^2$ variables represents 0. This is obviously true for r = 1, and it is well known that the conjecture is true for r = 2 (see, for example, J.-P. Serre, A Course in Arithmetic, Chapter IV, Theorem 6). See quasi-algebraic closure for further context. - -In 1950 Demyanov verified the conjecture for r = 3 and p ≠ 3, and in 1952 D. J. Lewis independently proved the case r = 3 for all primes p. But in 1966 Guy Terjanian constructed a homogeneous polynomial of degree 4 over Q2 in 18 variables that has no non-trivial zero. On the other hand, the Ax–Kochen theorem shows that for any fixed degree Artin's conjecture is true for all but finitely many Qp. diff --git a/wiki/wikipedia/4244.txt b/wiki/wikipedia/4244.txt deleted file mode 100644 index 562e8dae24f10faf4bb84bc0418e4060e5286c5d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4244.txt +++ /dev/null @@ -1,74 +0,0 @@ -In mathematics, the transitive closure of a binary relation R on a set X is the smallest relation on X that contains R and is transitive. For finite sets, "smallest" can be taken in its usual sense, of having the fewest related pairs; for infinite sets it is the unique minimal transitive superset of R. - -For example, if X is a set of airports and xRy means "there is a direct flight from airport x to airport y" (for x and y in X), then the transitive closure of R on X is the relation R+ such that x R+ y means "it is possible to fly from x to y in one or more flights".
Informally, the transitive closure gives you the set of all places you can get to from any starting place. - -More formally, the transitive closure of a binary relation R on a set X is the transitive relation R+ on set X such that R+ contains R and R+ is minimal. If the binary relation itself is transitive, then the transitive closure is that same binary relation; otherwise, the transitive closure is a different relation. - -Conversely, transitive reduction adduces a minimal relation S from a given relation R such that they have the same closure, that is, S+ = R+; however, many different S with this property may exist. - -Both transitive closure and transitive reduction are also used in the closely related area of graph theory. - -A relation R on a set X is transitive if, for all x, y, z in X, whenever x R y and y R z then x R z. Examples of transitive relations include the equality relation on any set, the "less than or equal" relation on any linearly ordered set, and the relation "x was born before y" on the set of all people. Symbolically, this can be denoted as: if x < y and y < z then x < z. - -One example of a non-transitive relation is "city x can be reached via a direct flight from city y" on the set of all cities. Simply because there is a direct flight from one city to a second city, and a direct flight from the second city to the third, does not imply there is a direct flight from the first city to the third. The transitive closure of this relation is a different relation, namely "there is a sequence of direct flights that begins at city x and ends at city y". Every relation can be extended in a similar way to a transitive relation. - -An example of a non-transitive relation with a less meaningful transitive closure is "x is the day of the week after y". The transitive closure of this relation is "some day x comes after a day y on the calendar", which is trivially true for all days of the week x and y (and thus equivalent to the Cartesian square, which is "x and y are both days of the week"). - -For any relation R, the transitive closure of R always exists. To see this, note that the intersection of any family of transitive relations is again transitive. Furthermore, there exists at least one transitive relation containing R, namely the trivial one: X × X. The transitive closure of R is then given by the intersection of all transitive relations containing R. - -For finite sets, we can construct the transitive closure step by step, starting from R and adding transitive edges. This gives the intuition for a general construction. For any set X, we can prove that transitive closure is given by the following expression $$R^{+}=\bigcup_{i = 1}^{\infty} R^i,$$ where $R^i$ is the i-th power of R, defined inductively by $$R^1 = R$$ and, for $i>0$, $$R^{i+1} = R \circ R^i$$ where $\circ$ denotes composition of relations. - -To show that the above definition of R+ is the least transitive relation containing R, we show that it contains R, that it is transitive, and that it is the smallest set with both of those characteristics. - -* $R \subseteq R^{+}$: $R^+$ contains all of the $R^i$, so in particular $R^+$ contains $R$. - -* $R^{+}$ is transitive: If $(s_1, s_2), (s_2, s_3)\in R^+$, then $(s_1, s_2)\in R^j$ and $(s_2, s_3)\in R^k$ for some $j,k$ by definition of $R^+$. Since composition is associative, $R^{j+k} = R^j \circ R^k$; hence $(s_1, s_3)\in R^{j+k} \subseteq R^+$ by definition of $\circ$ and $R^+$.
- -* $R^{+}$ is minimal, that is, if $T$ is any transitive relation containing $R$, then $R^{+} \subseteq T$: Given any such $T$, induction on $i$ can be used to show $R^i\subseteq T$ for all $i$ as follows: Base: $R^1 = R \subseteq T$ by assumption. Step: If $R^i\subseteq T$ holds, and $(s_1, s_3)\in R^{i+1} = R \circ R^i$, then $(s_1, s_2) \in R$ and $(s_2, s_3)\in R^i$ for some $s_2$, by definition of $\circ$. Hence, $(s_1, s_2), (s_2, s_3)\in T$ by assumption and by induction hypothesis. Hence $(s_1, s_3)\in T$ by transitivity of $T$; this completes the induction. Finally, $R^i\subseteq T$ for all $i$ implies $R^{+} \subseteq T$ by definition of $R^{+}$. - -The intersection of two transitive relations is transitive. - -The union of two transitive relations need not be transitive. To preserve transitivity, one must take the transitive closure. This occurs, for example, when taking the union of two equivalence relations or two preorders. To obtain a new equivalence relation or preorder one must take the transitive closure (reflexivity and symmetry, in the case of equivalence relations, are automatic). - -In computer science, the concept of transitive closure can be thought of as constructing a data structure that makes it possible to answer reachability questions. That is, can one get from node a to node d in one or more hops? A binary relation tells you only that node a is connected to node b, and that node b is connected to node c, etc. After the transitive closure is constructed, one may determine in an O(1) operation that node d is reachable from node a. The data structure is typically stored as a matrix, so if matrix[1][4] = 1, then it is the case that node 1 can reach node 4 through one or more hops. - -The transitive closure of the adjacency relation of a directed acyclic graph (DAG) is the reachability relation of the DAG and a strict partial order. - -The transitive closure of a binary relation cannot, in general, be expressed in first-order logic (FO). This means that one cannot write a formula using predicate symbols R and T that will be satisfied in any model if and only if T is the transitive closure of R. - -In finite model theory, first-order logic (FO) extended with a transitive closure operator is usually called transitive closure logic, and abbreviated FO(TC) or just TC. TC is a sub-type of fixpoint logics. The fact that FO(TC) is strictly more expressive than FO was discovered by Ronald Fagin in 1974; the result was then rediscovered by Alfred Aho and Jeffrey Ullman in 1979, who proposed to use fixpoint logic as a database query language. With more recent concepts of finite model theory, proof that FO(TC) is strictly more expressive than FO follows immediately from the fact that FO(TC) is not Gaifman-local. - -In computational complexity theory, the complexity class NL corresponds precisely to the set of logical sentences expressible in TC. This is because the transitive closure property has a close relationship with the NL-complete problem STCON for finding directed paths in a graph. Similarly, the class L is first-order logic with the commutative, transitive closure. When transitive closure is added to second-order logic instead, we obtain PSPACE. - -Since the 1980s Oracle Database has implemented a proprietary SQL extension CONNECT BY... START WITH that allows the computation of a transitive closure as part of a declarative query.
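Complementing these declarative database mechanisms, the matrix representation described above admits a classic in-memory construction, Warshall's algorithm (a special case of the Floyd–Warshall algorithm mentioned below). A minimal Python sketch (the function name and example graph are ours):

```python
def transitive_closure(adj):
    """Warshall's algorithm: adj is an n-by-n boolean adjacency matrix;
    returns the reachability matrix in O(n^3) time."""
    n = len(adj)
    reach = [row[:] for row in adj]        # copy, so adj is not modified
    for k in range(n):
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True  # i reaches j via k
    return reach

# Chain 0 -> 1 -> 2 -> 3: the closure makes node 3 reachable from node 0.
adj = [[False, True, False, False],
       [False, False, True, False],
       [False, False, False, True],
       [False, False, False, False]]
print(transitive_closure(adj)[0][3])        # True: one-or-more-hop reachability
```

Once the matrix is built, each reachability query is the O(1) lookup described above.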
The SQL 3 (1999) standard added a more general WITH RECURSIVE construct also allowing transitive closures to be computed inside the query processor; as of 2011 the latter is implemented in IBM DB2, Microsoft SQL Server, Oracle, PostgreSQL, and MySQL (v8.0+). - -Datalog also implements transitive closure computations. - -MariaDB implements Recursive Common Table Expressions, which can be used to compute transitive closures. This feature was introduced in release 10.2.2 of April 2016. - -Efficient algorithms for computing the transitive closure of the adjacency relation of a graph can be found in Nuutila (1995). The fastest worst-case methods, which are not practical, reduce the problem to matrix multiplication. The problem can also be solved by the Floyd–Warshall algorithm, or by repeated breadth-first search or depth-first search starting from each node of the graph. - -More recent research has explored efficient ways of computing transitive closure on distributed systems based on the MapReduce paradigm. diff --git a/wiki/wikipedia/4245.txt b/wiki/wikipedia/4245.txt deleted file mode 100644 index 8c837029a734c603fd4fc6f7bd3ab75e586bb921..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4245.txt +++ /dev/null @@ -1,5 +0,0 @@ -In the mathematical discipline of algebraic geometry, Serre's theorem on affineness (also called Serre's cohomological characterization of affineness or Serre's criterion on affineness) is a theorem due to Jean-Pierre Serre which gives sufficient conditions for a scheme to be affine. The theorem was first published by Serre in 1957. - -* A special case of this theorem arises when X is an algebraic variety, in which case the conditions of the theorem imply that X is an affine variety. - -* A similar result has stricter conditions on X but looser conditions on the cohomology: if X is a quasi-separated, quasi-compact scheme, and if H^1(X, I) = 0 for any quasi-coherent sheaf of ideals I of finite type, then X is affine. diff --git a/wiki/wikipedia/4246.txt b/wiki/wikipedia/4246.txt deleted file mode 100644 index db0e0186a405440dc09e105b0e919c8a7e556c87..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4246.txt +++ /dev/null @@ -1,39 +0,0 @@ -An incremental backup is one in which successive copies of the data contain only the portion that has changed since the preceding backup copy was made. When a full recovery is needed, the restoration process would need the last full backup plus all the incremental backups until the point of restoration. Incremental backups are often desirable as they reduce storage space usage, and are quicker to perform than differential backups. - -The most basic form of incremental backup consists of identifying, recording and thus preserving only those files that have changed since the last backup. Since changes are typically small, incremental backups are much smaller and quicker than full backups. For instance, following a full backup on Friday, a Monday backup will contain only those files that changed since Friday. A Tuesday backup contains only those files that changed since Monday, and so on. A full restoration of data will naturally be slower, since all increments must be restored. Should any one of the copies created fail, including the first (full), restoration will be incomplete. - -A Unix example would be: rsync -e ssh -va --link-dest=$dst/hourly.1 $remoteserver:$remotepath $dst/hourly.0 - -The use of rsync's --link-dest option is what makes this command an example of incremental backup.
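At the file level, the "changed since the last backup" test from the basic scheme above can be approximated by comparing modification times with the time of the previous backup. A minimal Python sketch (the helper is hypothetical; real backup tools typically rely on archive bits, filesystem snapshots, or checksums rather than bare mtimes):

```python
import os

def changed_since(root, last_backup_time):
    """Yield paths under root whose modification time is newer than the
    previous backup -- the file-level incremental selection, approximated."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > last_backup_time:
                yield path

# Example: files to include in tonight's incremental backup.
# to_back_up = list(changed_since("/home", time_of_last_backup))
```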
A more sophisticated incremental backup scheme involves multiple numbered backup levels. A full backup is level 0. A level n backup will back up everything that has changed since the most recent backup at level n-1 or lower. Suppose for instance that a level 0 backup was taken on a Sunday. A level 1 backup taken on Monday would include only changes made since Sunday. A level 2 backup taken on Tuesday would include only changes made since Monday. A level 3 backup taken on Wednesday would include only changes made since Tuesday. If a level 2 backup was taken on Thursday, it would include all changes made since Monday, because Monday's level 1 backup was the most recent backup at a lower level. - -An incremental backup of the changes made between two instances of a mirror can be forward or reverse. - -If the oldest version of the mirror is treated as the base and the newest version as the revised version, the incremental produced is a forward incremental. - -If the newest version of the mirror is treated as the base and the oldest version as the revised / changed version, the incremental produced is a reverse incremental. - -When making backups using reverse incrementals, each time a reverse incremental backup is taken, it is applied (in reverse) to the previous full (synthetic) backup, thus the current full (synthetic) backup is always a backup of the most recent state of the system. This is in contrast to forward incremental backups, where the current full backup is a backup of the oldest version of the system, and to get a backup of the most recent state of the system, all of the forward incremental backups have to be applied to that oldest version successively. - -By applying a reverse incremental to a mirror, the result will be a previous version of the mirror. This gives a means to revert to any previous version of the mirror. - -In other words, after the initial full backup, each successive incremental backup applies the changes to the previous full, creating a new synthetic full backup every time, while maintaining the ability to revert to previous versions. - -The main advantage of this type of backup is a more efficient recovery process, since the most recent version of the data (which is the most frequently restored version) is a (synthetic) full backup, and no incrementals need to be applied to it during its restoration. Reverse incremental backup works for both tapes and disks, but in practice tends to work better with disks. - -Companies using the reverse incremental backup method include Intronis and Zetta.net. - -This style is similar to the synthetic backup concept. After an initial full backup, only the incremental backups are sent to a centralized backup system. This server keeps track of all the increments and sends the proper data back to the client during restores. This can be implemented by sending each incremental directly to tape as it is taken and then refactoring the tapes as necessary. If enough disk space is available, an online mirror can be maintained along with previous incremental changes so that the current or older versions of the systems being backed up can be restored. This is a suitable method in the case of banking systems. - -In modern cloud architectures, or disk to disk backup scenarios, this is much simpler. Data is broken into chunks and placed on a cloud storage system. Metadata about the chunks is stored in a persistent system, which allows the system to assemble a point-in-time backup from these chunks at restore time. There is no need to refactor tape.
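The level arithmetic described above also determines what a restore needs: starting from the target backup, walk backwards and keep each backup whose level is strictly lower than the last one kept, down to the level-0 full. A small Python sketch of this bookkeeping (the function name is ours):

```python
def restore_chain(levels, target=None):
    """levels[i] is the level of the i-th backup in chronological order
    (level 0 = full). Return the indices needed to restore the target."""
    if target is None:
        target = len(levels) - 1
    chain, current = [target], levels[target]
    i = target
    while current > 0:
        i -= 1
        if levels[i] < current:    # the reference backup for this level
            chain.append(i)
            current = levels[i]
    return list(reversed(chain))

# The Sunday..Thursday example from the text: levels 0, 1, 2, 3, then 2.
print(restore_chain([0, 1, 2, 3, 2]))
# [0, 1, 4]: the full, Monday's level 1, and Thursday's level 2
```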
This method backs up only the blocks within the file that changed. This requires a higher level of integration between the sender and receiver. - -These backup technologies are similar to the "block-level incremental" backup method; however, the byte (or binary) incremental backup method is based on the binary difference between the current files and the previous backup: while the block-based technologies work with coarse units of change (blocks of 8K, 4K or 1K), the byte-based technologies work with the minimum unit, saving space when recording a change to a file. Another important difference is that they work independently of the file system. At the moment, these are the technologies that achieve the highest relative compression of the data, which is a great advantage for backups carried out over the Internet. - -A synthetic backup is an alternative method of creating full backups. Instead of reading and backing up data directly from the disk, it will synthesize the data from the previous full backup (either a regular full backup for the first backup, or the previous synthetic full backup) and the periodic incremental backups. As only the incremental backups read data from the disk, these are the only files that need to be transferred during offsite replication. This greatly reduces the bandwidth needed for offsite replication. Synthetic backups are not always equally efficient: the rate at which data is uploaded from the target machine and synchronized on the storage varies depending on disk fragmentation. - -A differential backup is a cumulative backup of all changes made since the last full or normal backup, i.e., the differences since the last full backup. The advantage to this is the quicker recovery time, requiring only a full backup and the last differential backup to restore the system. The disadvantage is that for each day elapsed since the last full backup, more data needs to be backed up, especially if a significant proportion of the data has changed. - -A forward incremental-forever backup allows the synthetic operation to create a new full backup, which is limited to the size of the incremental file, instead of the complete size of a full backup file as would happen in a "forward mode with synthetic fulls". The overall consumed I/O is the same as for the reversed incremental, but during the backup activity only one write I/O is used and the snapshot of the VM is open for less time than with the reversed incremental; the remaining two I/Os are used to update the full backup file. diff --git a/wiki/wikipedia/4247.txt b/wiki/wikipedia/4247.txt deleted file mode 100644 index af7a8a1cdd5827b95ee01a2694c2912af9b0f7e7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4247.txt +++ /dev/null @@ -1,1548 +0,0 @@ -In concurrent programming (also known as parallel programming), a monitor is a synchronization construct that allows threads to have both mutual exclusion and the ability to wait (block) for a certain condition to become true. Monitors also have a mechanism for signaling other threads that their condition has been met. A monitor consists of a mutex (lock) object and condition variables. A condition variable essentially is a container of threads that are waiting for a certain condition. Monitors provide a mechanism for threads to temporarily give up exclusive access in order to wait for some condition to be met, before regaining exclusive access and resuming their task.
- -Another definition of monitor is a thread-safe class, object, or module that wraps around a mutex in order to safely allow access to a method or variable by more than one thread. The defining characteristic of a monitor is that its methods are executed with mutual exclusion: At each point in time, at most one thread may be executing any of its methods. By using one or more condition variables it can also provide the ability for threads to wait on a certain condition (thus using the above definition of a "monitor"). For the rest of this article, this sense of "monitor" will be referred to as a "thread-safe object/class/module". - -Monitors were invented by Per Brinch Hansen and C. A. R. Hoare, and were first implemented in Brinch Hansen's Concurrent Pascal language. - -As a simple example, consider a thread-safe object for performing transactions on a bank account: - -monitor class Account { - -private int balance := 0 - -invariant balance >= 0 - -public method boolean withdraw(int amount) - -precondition amount >= 0 - -{ - -if balance < amount { - -return false - -} else { - -balance := balance - amount - -return true - -} - -} - -public method deposit(int amount) - -precondition amount >= 0 - -{ - -balance := balance + amount - -} - -} - -While a thread is executing a method of a thread-safe object, it is said to occupy the object, by holding its mutex (lock). Thread-safe objects are implemented to enforce that at each point in time, at most one thread may occupy the object. The lock, which is initially unlocked, is locked at the start of each public method, and is unlocked at each return from each public method. - -Upon calling one of the methods, a thread must wait until no other thread is executing any of the thread-safe object's methods before starting execution of its method. Note that without this mutual exclusion, in the present example, two threads could cause money to be lost or gained for no reason. For example, two threads withdrawing 1000 from the account could both return true, while causing the balance to drop by only 1000, as follows: first, both threads fetch the current balance, find it greater than 1000, and subtract 1000 from it; then, both threads store the balance and return. - -The syntactic sugar "monitor class" in the above example is implementing the following basic representation of the code, by wrapping each function's execution in mutexes: - -class Account { - -private lock myLock - -private int balance := 0 - -invariant balance >= 0 - -public method boolean withdraw(int amount) - -precondition amount >= 0 - -{ - -myLock.acquire() - -try { - -if balance < amount { - -return false - -} else { - -balance := balance - amount - -return true - -} - -} finally { - -myLock.release() - -} - -} - -public method deposit(int amount) - -precondition amount >= 0 - -{ - -myLock.acquire() - -try { - -balance := balance + amount - -} finally { - -myLock.release() - -} - -} - -} - -For many applications, mutual exclusion is not enough. Threads attempting an operation may need to wait until some condition P holds true. A busy waiting loop - -while not( P ) do skip - -will not work, as mutual exclusion will prevent any other thread from entering the monitor to make the condition true. Other "solutions" exist such as having a loop that unlocks the monitor, waits a certain amount of time, locks the monitor and checks for the condition P. Theoretically, it works and will not deadlock, but issues arise. 
It is hard to choose an appropriate waiting time: too short and the thread will hog the CPU; too long and it will appear unresponsive. What is needed is a way to signal the thread when the condition P is true (or could be true).
-
-A classic concurrency problem is that of the bounded producer/consumer, in which there is a queue or ring buffer of tasks with a maximum size, with one or more threads being "producer" threads that add tasks to the queue, and one or more other threads being "consumer" threads that take tasks out of the queue. The queue is assumed to be non-thread-safe itself, and it can be empty, full, or between empty and full. Whenever the queue is full of tasks, we need the producer threads to block until consumer threads dequeueing tasks make room. Conversely, whenever the queue is empty, we need the consumer threads to block until producer threads add more tasks.
-
-As the queue is a concurrent object shared between threads, accesses to it must be made atomic, because a queue access can put the queue into an inconsistent state that should never be exposed between threads. Thus, any code that accesses the queue constitutes a critical section that must be synchronized by mutual exclusion. If code and processor instructions in critical sections that access the queue could be interleaved by arbitrary context switches between threads on the same processor, or by simultaneously-running threads on multiple processors, then there is a risk of exposing inconsistent state and causing race conditions.
-
-A naïve approach is to design the code with busy-waiting and no synchronization, making the code subject to race conditions:
-
-
-
-global RingBuffer queue; // A thread-unsafe ring-buffer of tasks.
-
-// Method representing each producer thread's behavior:
-
-public method producer() {
-
-while (true) {
-
-task myTask = ...; // Producer makes some new task to be added.
-
-while (queue.isFull()) {} // Busy-wait until the queue is non-full.
-
-queue.enqueue(myTask); // Add the task to the queue.
-
-}
-
-}
-
-// Method representing each consumer thread's behavior:
-
-public method consumer() {
-
-while (true) {
-
-while (queue.isEmpty()) {} // Busy-wait until the queue is non-empty.
-
-myTask = queue.dequeue(); // Take a task off of the queue.
-
-doStuff(myTask); // Go off and do something with the task.
-
-}
-
-}
-
-
-
-This code has a serious problem: accesses to the queue can be interrupted and interleaved with other threads' accesses to the queue. The queue.enqueue and queue.dequeue methods likely have instructions to update the queue's member variables such as its size, beginning and ending positions, assignment and allocation of queue elements, etc. In addition, the queue.isEmpty() and queue.isFull() methods read this shared state as well. If producer/consumer threads are allowed to be interleaved during the calls to enqueue/dequeue, then inconsistent state of the queue can be exposed, leading to race conditions. In addition, if one consumer makes the queue empty in between another consumer's exiting the busy-wait and calling "dequeue", then the second consumer will attempt to dequeue from an empty queue, leading to an error. Likewise, if a producer makes the queue full in between another producer's exiting the busy-wait and calling "enqueue", then the second producer will attempt to add to a full queue, leading to an error.
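-
-The lost-update hazard described above (and in the earlier bank-account example) is easy to reproduce in real code. The following toy Java program, with invented names and purely for illustration, increments an unsynchronized counter from two threads; because counter++ is a read-modify-write sequence whose steps can interleave, the final total is typically less than 200000:
-
-public class LostUpdateDemo {
-    static int counter = 0;
-
-    public static void main(String[] args) throws InterruptedException {
-        Runnable bump = () -> {
-            for (int i = 0; i < 100_000; i++) counter++; // read, add, write: not atomic
-        };
-        Thread a = new Thread(bump);
-        Thread b = new Thread(bump);
-        a.start(); b.start();
-        a.join(); b.join();
-        System.out.println(counter); // usually < 200000: updates were lost to interleaving
-    }
-}
-
-Wrapping the increment in a mutual-exclusion construct (for example, a synchronized block) removes the race; that is precisely the job the monitor's lock performs in the solutions that follow.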
- -One naive approach to achieve synchronization, as alluded to above, is to use "spin-waiting", in which a mutex is used to protect the critical sections of code and busy-waiting is still used, with the lock being acquired and released in between each busy-wait check. - - - -global RingBuffer queue; // A thread-unsafe ring-buffer of tasks. - -global Lock queueLock; // A mutex for the ring-buffer of tasks. - -// Method representing each producer thread's behavior: - -public method producer() { - -while (true) { - -task myTask = ...; // Producer makes some new task to be added. - -queueLock.acquire(); // Acquire lock for initial busy-wait check. - -while (queue.isFull()) { // Busy-wait until the queue is non-full. - -queueLock.release(); - -// Drop the lock temporarily to allow a chance for other threads - -// needing queueLock to run so that a consumer might take a task. - -queueLock.acquire(); // Re-acquire the lock for the next call to "queue.isFull()". - -} - -queue.enqueue(myTask); // Add the task to the queue. - -queueLock.release(); // Drop the queue lock until we need it again to add the next task. - -} - -} - -// Method representing each consumer thread's behavior: - -public method consumer() { - -while (true) { - -queueLock.acquire(); // Acquire lock for initial busy-wait check. - -while (queue.isEmpty()) { // Busy-wait until the queue is non-empty. - -queueLock.release(); - -// Drop the lock temporarily to allow a chance for other threads - -// needing queueLock to run so that a producer might add a task. - -queueLock.acquire(); // Re-acquire the lock for the next call to "queue.isEmpty()". - -} - -myTask = queue.dequeue(); // Take a task off of the queue. - -queueLock.release(); // Drop the queue lock until we need it again to take off the next task. - -doStuff(myTask); // Go off and do something with the task. - -} - -} - - - -This method assures that an inconsistent state does not occur, but wastes CPU resources due to the unnecessary busy-waiting. Even if the queue is empty and producer threads have nothing to add for a long time, consumer threads are always busy-waiting unnecessarily. Likewise, even if consumers are blocked for a long time on processing their current tasks and the queue is full, producers are always busy-waiting. This is a wasteful mechanism. What is needed is a way to make producer threads block until the queue is non-full, and a way to make consumer threads block until the queue is non-empty. - -(N.B.: Mutexes themselves can also be spin-locks which involve busy-waiting in order to get the lock, but in order to solve this problem of wasted CPU resources, we assume that queueLock is not a spin-lock and properly uses a blocking lock queue itself.) - -The solution is to use condition variables. Conceptually a condition variable is a queue of threads, associated with a mutex, on which a thread may wait for some condition to become true. Thus each condition variable c is associated with an assertion Pc. While a thread is waiting on a condition variable, that thread is not considered to occupy the monitor, and so other threads may enter the monitor to change the monitor's state. In most types of monitors, these other threads may signal the condition variable c to indicate that assertion Pc is true in the current state. - -Thus there are three main operations on condition variables: - -* wait c, m, where c is a condition variable and m is a mutex (lock) associated with the monitor. 
This operation is called by a thread that needs to wait until the assertion Pc is true before proceeding. While the thread is waiting, it does not occupy the monitor. The function, and fundamental contract, of the "wait" operation is to do the following steps:
-
-*# Atomically: (1a) release the mutex m, (1b) move this thread from the "running" state to c's wait-queue (a.k.a. "sleep-queue") of threads, and (1c) sleep this thread. (Context is synchronously yielded to another thread.)
-
-*# Once this thread is subsequently notified/signaled (see below) and resumed, automatically re-acquire the mutex m.
-
-*:Steps 1a and 1b can occur in either order, with 1c usually occurring after them. While the thread is sleeping and in c's wait-queue, the next program counter to be executed is at step 2, in the middle of the "wait" function/subroutine. Thus, the thread sleeps and later wakes up in the middle of the "wait" operation.
-
-*:The atomicity of the operations within step 1 is important to avoid race conditions that would be caused by a preemptive thread switch in between them. One failure mode that could occur if these were not atomic is a missed wakeup, in which the thread could be on c's sleep-queue and have released the mutex, but a preemptive thread switch occurred before the thread went to sleep, and another thread called a signal/notify operation (see below) on c, moving the first thread back out of c's queue. As soon as the first thread in question is switched back to, its program counter will be at step 1c, and it will sleep and be unable to be woken up again, violating the invariant that it should have been on c's sleep-queue when it slept. Other race conditions depend on the ordering of steps 1a and 1b, and on where a context switch occurs.
-
-* signal c, also known as notify c, is called by a thread to indicate that the assertion Pc is true. Depending on the type and implementation of the monitor, this moves one or more threads from c's sleep-queue to the "ready queue" or another queue to be executed. It is usually considered a best practice to perform the "signal"/"notify" operation before releasing the mutex m that is associated with c, but as long as the code is properly designed for concurrency and depending on the threading implementation, it is often also acceptable to release the lock before signalling. Depending on the threading implementation, the ordering of this can have scheduling-priority ramifications. (Some authors instead advocate a preference for releasing the lock before signalling.) A threading implementation should document any special constraints on this ordering.
-
-* broadcast c, also known as notifyAll c, is a similar operation that wakes up all threads in c's wait-queue. This empties the wait-queue. Generally, when more than one predicate condition is associated with the same condition variable, the application will require broadcast instead of signal, because a thread waiting for the wrong condition might be woken up and then immediately go back to sleep without waking up a thread waiting for the correct condition that just became true. Otherwise, if the predicate condition is one-to-one with the condition variable associated with it, then signal may be more efficient than broadcast.
-
-As a design rule, multiple condition variables can be associated with the same mutex, but not vice versa. (This is a one-to-many correspondence.)
This is because the predicate Pc is the same for all threads using the monitor and must be protected with mutual exclusion from all other threads that might cause the condition to be changed or that might read it while the thread in question causes it to be changed, but there may be different threads that want to wait for a different condition on the same variable requiring the same mutex to be used. In the producer-consumer example described above, the queue must be protected by a unique mutex object, m. The "producer" threads will want to wait on a monitor using lock m and a condition variable $c_{full}$ which blocks until the queue is non-full. The "consumer" threads will want to wait on a different monitor using the same mutex m but a different condition variable $c_{empty}$ which blocks until the queue is non-empty. It would (usually) never make sense to have different mutexes for the same condition variable, but this classic example shows why it often certainly makes sense to have multiple condition variables using the same mutex. A mutex used by one or more condition variables (one or more monitors) may also be shared with code that does not use condition variables (and which simply acquires/releases it without any wait/signal operations), if those critical sections do not happen to require waiting for a certain condition on the concurrent data. - -The proper basic usage of a monitor is: - - - -acquire(m); // Acquire this monitor's lock. - -while (!p) { // While the condition/predicate/assertion that we are waiting for is not true... - - wait(m, cv); // Wait on this monitor's lock and condition variable. - -} - -// ... Critical section of code goes here ... - -signal(cv2); -- OR -- notifyAll(cv2); // cv2 might be the same as cv or different. - -release(m); // Release this monitor's lock. - - - -To be more precise, this is the same pseudocode but with more verbose comments to better explain what is going on: - - - -// ... (previous code) - -// About to enter the monitor. - -// Acquire the advisory mutex (lock) associated with the concurrent - -// data that is shared between threads, - -// to ensure that no two threads can be preemptively interleaved or - -// run simultaneously on different cores while executing in critical - -// sections that read or write this same concurrent data. If another - -// thread is holding this mutex, then this thread will be put to sleep - -// (blocked) and placed on m's sleep queue. (Mutex "m" shall not be - -// a spin-lock.) - -acquire(m); - -// Now, we are holding the lock and can check the condition for the - -// first time. - -// The first time we execute the while loop condition after the above - -// "acquire", we are asking, "Does the condition/predicate/assertion - -// we are waiting for happen to already be true?" - -while (!p()) // "p" is any expression (e.g. variable or - - // function-call) that checks the condition and - - // evaluates to boolean. This itself is a critical - - // section, so you *MUST* be holding the lock when - - // executing this "while" loop condition! 
- - - -// If this is not the first time the "while" condition is being checked, - -// then we are asking the question, "Now that another thread using this - -// monitor has notified me and woken me up and I have been context-switched - -// back to, did the condition/predicate/assertion we are waiting on stay - -// true between the time that I was woken up and the time that I re-acquired - -// the lock inside the "wait" call in the last iteration of this loop, or - -// did some other thread cause the condition to become false again in the - -// meantime thus making this a spurious wakeup? - -{ - - // If this is the first iteration of the loop, then the answer is - - // "no" -- the condition is not ready yet. Otherwise, the answer is: - - // the latter. This was a spurious wakeup, some other thread occurred - - // first and caused the condition to become false again, and we must - - // wait again. - - wait(m, cv); - - // Temporarily prevent any other thread on any core from doing - - // operations on m or cv. - - // release(m) // Atomically release lock "m" so other - - // // code using this concurrent data - - // // can operate, move this thread to cv's - - // // wait-queue so that it will be notified - - // // sometime when the condition becomes - - // // true, and sleep this thread. Re-enable - - // // other threads and cores to do - - // // operations on m and cv. - - // - - // Context switch occurs on this core. - - // - - // At some future time, the condition we are waiting for becomes - - // true, and another thread using this monitor (m, cv) does either - - // a signal/notify that happens to wake this thread up, or a - - // notifyAll that wakes us up, meaning that we have been taken out - - // of cv's wait-queue. - - // - - // During this time, other threads may cause the condition to - - // become false again, or the condition may toggle one or more - - // times, or it may happen to stay true. - - // - - // This thread is switched back to on some core. - - // - - // acquire(m) // Lock "m" is re-acquired. - - - - // End this loop iteration and re-check the "while" loop condition to make - - // sure the predicate is still true. - - - -} - -// The condition we are waiting for is true! - -// We are still holding the lock, either from before entering the monitor or from - -// the last execution of "wait". - -// Critical section of code goes here, which has a precondition that our predicate - -// must be true. - -// This code might make cv's condition false, and/or make other condition variables' - -// predicates true. - -// Call signal/notify or notifyAll, depending on which condition variables' - -// predicates (who share mutex m) have been made true or may have been made true, - -// and the monitor semantic type being used. - -for (cv_x in cvs_to_notify) { - - notify(cv_x); -- OR -- notifyAll(cv_x); - -} - -// One or more threads have been woken up but will block as soon as they try - -// to acquire m. - -// Release the mutex so that notified thread(s) and others can enter their critical - -// sections. - -release(m); - - - -Having introduced the usage of condition variables, let's use it to revisit and solve the classic bounded producer/consumer problem. The classic solution is to use two monitors, comprising two condition variables sharing one lock on the queue: - - - -global volatile RingBuffer queue; // A thread-unsafe ring-buffer of tasks. - -global Lock queueLock; // A mutex for the ring-buffer of tasks. (Not a spin-lock.) 
- -global CV queueEmptyCV; // A condition variable for consumer threads waiting for the queue to - - // become non-empty. - - // Its associated lock is "queueLock". - -global CV queueFullCV; // A condition variable for producer threads waiting for the queue - - // to become non-full. Its associated lock is also "queueLock". - -// Method representing each producer thread's behavior: - -public method producer() { - -while (true) { - -task myTask = ...; // Producer makes some new task to be added. - -queueLock.acquire(); // Acquire lock for initial predicate check. - -while (queue.isFull()) { // Check if the queue is non-full. - -// Make the threading system atomically release queueLock, - -// enqueue this thread onto queueFullCV, and sleep this thread. - -wait(queueLock, queueFullCV); - -// Then, "wait" automatically re-acquires "queueLock" for re-checking - -// the predicate condition. - -} - -// Critical section that requires the queue to be non-full. - -// N.B.: We are holding queueLock. - -queue.enqueue(myTask); // Add the task to the queue. - -// Now the queue is guaranteed to be non-empty, so signal a consumer thread - -// or all consumer threads that might be blocked waiting for the queue to be non-empty: - -signal(queueEmptyCV); -- OR -- notifyAll(queueEmptyCV); - -// End of critical sections related to the queue. - -queueLock.release(); // Drop the queue lock until we need it again to add the next task. - -} - -} - -// Method representing each consumer thread's behavior: - -public method consumer() { - -while (true) { - -queueLock.acquire(); // Acquire lock for initial predicate check. - -while (queue.isEmpty()) { // Check if the queue is non-empty. - -// Make the threading system atomically release queueLock, - -// enqueue this thread onto queueEmptyCV, and sleep this thread. - -wait(queueLock, queueEmptyCV); - -// Then, "wait" automatically re-acquires "queueLock" for re-checking - -// the predicate condition. - -} - -// Critical section that requires the queue to be non-empty. - -// N.B.: We are holding queueLock. - -myTask = queue.dequeue(); // Take a task off of the queue. - -// Now the queue is guaranteed to be non-full, so signal a producer thread - -// or all producer threads that might be blocked waiting for the queue to be non-full: - -signal(queueFullCV); -- OR -- notifyAll(queueFullCV); - -// End of critical sections related to the queue. - -queueLock.release(); // Drop the queue lock until we need it again to take off the next task. - -doStuff(myTask); // Go off and do something with the task. - -} - -} - - - -This ensures concurrency between the producer and consumer threads sharing the task queue, and blocks the threads that have nothing to do rather than busy-waiting as shown in the aforementioned approach using spin-locks. - -A variant of this solution could use a single condition variable for both producers and consumers, perhaps named "queueFullOrEmptyCV" or "queueSizeChangedCV". In this case, more than one condition is associated with the condition variable, such that the condition variable represents a weaker condition than the conditions being checked by individual threads. The condition variable represents threads that are waiting for the queue to be non-full and ones waiting for it to be non-empty. However, doing this would require using notifyAll in all the threads using the condition variable and cannot use a regular signal. 
This is because the regular signal might wake up a thread of the wrong type whose condition has not yet been met, and that thread would go back to sleep without a thread of the correct type getting signaled. For example, a producer might make the queue full and wake up another producer instead of a consumer, and the woken producer would go back to sleep. In the complementary case, a consumer might make the queue empty and wake up another consumer instead of a producer, and the woken consumer would go back to sleep. Using notifyAll ensures that some thread of the right type will proceed as expected by the problem statement.
-
-Here is the variant using only one condition variable and notifyAll:
-
-
-
-global volatile RingBuffer queue; // A thread-unsafe ring-buffer of tasks.
-
-global Lock queueLock; // A mutex for the ring-buffer of tasks. (Not a spin-lock.)
-
-global CV queueFullOrEmptyCV; // A single condition variable for when the queue is not ready for any thread
-
-// -- i.e., for producer threads waiting for the queue to become non-full
-
-// and consumer threads waiting for the queue to become non-empty.
-
-// Its associated lock is "queueLock".
-
-// Not safe to use regular "signal" because it is associated with
-
-// multiple predicate conditions (assertions).
-
-// Method representing each producer thread's behavior:
-
-public method producer() {
-
-while (true) {
-
-task myTask = ...; // Producer makes some new task to be added.
-
-queueLock.acquire(); // Acquire lock for initial predicate check.
-
-while (queue.isFull()) { // Check if the queue is non-full.
-
-// Make the threading system atomically release queueLock,
-
-// enqueue this thread onto the CV, and sleep this thread.
-
-wait(queueLock, queueFullOrEmptyCV);
-
-// Then, "wait" automatically re-acquires "queueLock" for re-checking
-
-// the predicate condition.
-
-}
-
-// Critical section that requires the queue to be non-full.
-
-// N.B.: We are holding queueLock.
-
-queue.enqueue(myTask); // Add the task to the queue.
-
-// Now the queue is guaranteed to be non-empty, so signal all blocked threads
-
-// so that a consumer thread will take a task:
-
-notifyAll(queueFullOrEmptyCV); // Do not use "signal" (as it might wake up another producer instead).
-
-// End of critical sections related to the queue.
-
-queueLock.release(); // Drop the queue lock until we need it again to add the next task.
-
-}
-
-}
-
-// Method representing each consumer thread's behavior:
-
-public method consumer() {
-
-while (true) {
-
-queueLock.acquire(); // Acquire lock for initial predicate check.
-
-while (queue.isEmpty()) { // Check if the queue is non-empty.
-
-// Make the threading system atomically release queueLock,
-
-// enqueue this thread onto the CV, and sleep this thread.
-
-wait(queueLock, queueFullOrEmptyCV);
-
-// Then, "wait" automatically re-acquires "queueLock" for re-checking
-
-// the predicate condition.
-
-}
-
-// Critical section that requires the queue to be non-empty.
-
-// N.B.: We are holding queueLock.
-
-myTask = queue.dequeue(); // Take a task off of the queue.
-
-// Now the queue is guaranteed to be non-full, so signal all blocked threads
-
-// so that a producer thread will add a task:
-
-notifyAll(queueFullOrEmptyCV); // Do not use "signal" (as it might wake up another consumer instead).
-
-// End of critical sections related to the queue.
-
-queueLock.release(); // Drop the queue lock until we need it again to take off the next task.
-
-doStuff(myTask); // Go off and do something with the task.
- -} - -} - - - -Implementing mutexes and condition variables requires some kind of synchronization primitive provided by hardware support that provides atomicity. Locks and condition variables are higher-level abstractions over these synchronization primitives. On a uniprocessor, disabling and enabling interrupts is a way to implement monitors by preventing context switches during the critical sections of the locks and condition variables, but this is not enough on a multiprocessor. On a multiprocessor, usually special atomic read-modify-write instructions on the memory such as test-and-set, compare-and-swap, etc. are used, depending on what the ISA provides. These usually require deferring to spin-locking for the internal lock state itself, but this locking is very brief. Depending on the implementation, the atomic read-modify-write instructions may lock the bus from other cores' accesses and/or prevent re-ordering of instructions in the CPU. Here is an example pseudocode implementation of parts of a threading system and mutexes and Mesa-style condition variables, using test-and-set and a first-come, first-served policy. This glosses over most of how a threading system works, but shows the parts relevant to mutexes and condition variables: - - - -// Basic parts of threading system: - -// Assume "ThreadQueue" supports random access. - -public volatile ThreadQueue readyQueue; // Thread-unsafe queue of ready threads. Elements are (Thread*). - -public volatile global Thread* currentThread; // Assume this variable is per-core. (Others are shared.) - -// Implements a spin-lock on just the synchronized state of the threading system itself. - -// This is used with test-and-set as the synchronization primitive. - -public volatile global bool threadingSystemBusy = false; - -// Context-switch interrupt service routine (ISR): - -// On the current CPU core, preemptively switch to another thread. - -public method contextSwitchISR() { - -if (testAndSet(threadingSystemBusy)) { - -return; // Can't switch context right now. - -} - -// Ensure this interrupt can't happen again which would foul up the context switch: - -systemCall_disableInterrupts(); - -// Get all of the registers of the currently-running process. - -// For Program Counter (PC), we will need the instruction location of - -// the "resume" label below. Getting the register values is platform-dependent and may involve - -// reading the current stack frame, JMP/CALL instructions, etc. (The details are beyond this scope.) - -currentThread->registers = getAllRegisters(); // Store the registers in the "currentThread" object in memory. - -currentThread->registers.PC = resume; // Set the next PC to the "resume" label below in this method. - -readyQueue.enqueue(currentThread); // Put this thread back onto the ready queue for later execution. - -Thread* otherThread = readyQueue.dequeue(); // Remove and get the next thread to run from the ready queue. - -currentThread = otherThread; // Replace the global current-thread pointer value so it is ready for the next thread. - -// Restore the registers from currentThread/otherThread, including a jump to the stored PC of the other thread - -// (at "resume" below). Again, the details of how this is done are beyond this scope. - -restoreRegisters(otherThread.registers); - -// *** Now running "otherThread" (which is now "currentThread")! The original thread is now "sleeping". *** - -resume: // This is where another contextSwitch() call needs to set PC to when switching context back here. 
- -// Return to where otherThread left off. - -threadingSystemBusy = false; // Must be an atomic assignment. - -systemCall_enableInterrupts(); // Turn pre-emptive switching back on on this core. - -} - -// Thread sleep method: - -// On current CPU core, a synchronous context switch to another thread without putting - -// the current thread on the ready queue. - -// Must be holding "threadingSystemBusy" and disabled interrupts so that this method - -// doesn't get interrupted by the thread-switching timer which would call contextSwitchISR(). - -// After returning from this method, must clear "threadingSystemBusy". - -public method threadSleep() { - -// Get all of the registers of the currently-running process. - -// For Program Counter (PC), we will need the instruction location of - -// the "resume" label below. Getting the register values is platform-dependent and may involve - -// reading the current stack frame, JMP/CALL instructions, etc. (The details are beyond this scope.) - -currentThread->registers = getAllRegisters(); // Store the registers in the "currentThread" object in memory. - -currentThread->registers.PC = resume; // Set the next PC to the "resume" label below in this method. - -// Unlike contextSwitchISR(), we will not place currentThread back into readyQueue. - -// Instead, it has already been placed onto a mutex's or condition variable's queue. - -Thread* otherThread = readyQueue.dequeue(); // Remove and get the next thread to run from the ready queue. - -currentThread = otherThread; // Replace the global current-thread pointer value so it is ready for the next thread. - -// Restore the registers from currentThread/otherThread, including a jump to the stored PC of the other thread - -// (at "resume" below). Again, the details of how this is done are beyond this scope. - -restoreRegisters(otherThread.registers); - -// *** Now running "otherThread" (which is now "currentThread")! The original thread is now "sleeping". *** - -resume: // This is where another contextSwitch() call needs to set PC to when switching context back here. - -// Return to where otherThread left off. - -} - -public method wait(Mutex m, ConditionVariable c) { - -// Internal spin-lock while other threads on any core are accessing this object's - -// "held" and "threadQueue", or "readyQueue". - -while (testAndSet(threadingSystemBusy)) {} - -// N.B.: "threadingSystemBusy" is now true. - -// System call to disable interrupts on this core so that threadSleep() doesn't get interrupted by - -// the thread-switching timer on this core which would call contextSwitchISR(). - -// Done outside threadSleep() for more efficiency so that this thread will be sleeped - -// right after going on the condition-variable queue. - -systemCall_disableInterrupts(); - -assert m.held; // (Specifically, this thread must be the one holding it.) - -m.release(); - -c.waitingThreads.enqueue(currentThread); - -threadSleep(); - -// Thread sleeps ... Thread gets woken up from a signal/broadcast. - -threadingSystemBusy = false; // Must be an atomic assignment. - -systemCall_enableInterrupts(); // Turn pre-emptive switching back on on this core. - -// Mesa style: - -// Context switches may now occur here, making the client caller's predicate false. - -m.acquire(); - -} - -public method signal(ConditionVariable c) { - -// Internal spin-lock while other threads on any core are accessing this object's - -// "held" and "threadQueue", or "readyQueue". - -while (testAndSet(threadingSystemBusy)) {} - -// N.B.: "threadingSystemBusy" is now true. 
- -// System call to disable interrupts on this core so that threadSleep() doesn't get interrupted by - -// the thread-switching timer on this core which would call contextSwitchISR(). - -// Done outside threadSleep() for more efficiency so that this thread will be sleeped - -// right after going on the condition-variable queue. - -systemCall_disableInterrupts(); - -if (!c.waitingThreads.isEmpty()) { - -wokenThread = c.waitingThreads.dequeue(); - -readyQueue.enqueue(wokenThread); - -} - -threadingSystemBusy = false; // Must be an atomic assignment. - -systemCall_enableInterrupts(); // Turn pre-emptive switching back on on this core. - -// Mesa style: - -// The woken thread is not given any priority. - -} - -public method broadcast(ConditionVariable c) { - -// Internal spin-lock while other threads on any core are accessing this object's - -// "held" and "threadQueue", or "readyQueue". - -while (testAndSet(threadingSystemBusy)) {} - -// N.B.: "threadingSystemBusy" is now true. - -// System call to disable interrupts on this core so that threadSleep() doesn't get interrupted by - -// the thread-switching timer on this core which would call contextSwitchISR(). - -// Done outside threadSleep() for more efficiency so that this thread will be sleeped - -// right after going on the condition-variable queue. - -systemCall_disableInterrupts(); - -while (!c.waitingThreads.isEmpty()) { - -wokenThread = c.waitingThreads.dequeue(); - -readyQueue.enqueue(wokenThread); - -} - -threadingSystemBusy = false; // Must be an atomic assignment. - -systemCall_enableInterrupts(); // Turn pre-emptive switching back on on this core. - -// Mesa style: - -// The woken threads are not given any priority. - -} - -class Mutex { - -protected volatile bool held = false; - -private volatile ThreadQueue blockingThreads; // Thread-unsafe queue of blocked threads. Elements are (Thread*). - -public method acquire() { - -// Internal spin-lock while other threads on any core are accessing this object's - -// "held" and "threadQueue", or "readyQueue". - -while (testAndSet(threadingSystemBusy)) {} - -// N.B.: "threadingSystemBusy" is now true. - -// System call to disable interrupts on this core so that threadSleep() doesn't get interrupted by - -// the thread-switching timer on this core which would call contextSwitchISR(). - -// Done outside threadSleep() for more efficiency so that this thread will be sleeped - -// right after going on the lock queue. - -systemCall_disableInterrupts(); - -assert !blockingThreads.contains(currentThread); - -if (held) { - -// Put "currentThread" on this lock's queue so that it will be - -// considered "sleeping" on this lock. - -// Note that "currentThread" still needs to be handled by threadSleep(). - -readyQueue.remove(currentThread); - -blockingThreads.enqueue(currentThread); - -threadSleep(); - -// Now we are woken up, which must be because "held" became false. - -assert !held; - -assert !blockingThreads.contains(currentThread); - -} - -held = true; - -threadingSystemBusy = false; // Must be an atomic assignment. - -systemCall_enableInterrupts(); // Turn pre-emptive switching back on on this core. - -} - -public method release() { - -// Internal spin-lock while other threads on any core are accessing this object's - -// "held" and "threadQueue", or "readyQueue". - -while (testAndSet(threadingSystemBusy)) {} - -// N.B.: "threadingSystemBusy" is now true. - -// System call to disable interrupts on this core for efficiency. 
-
-systemCall_disableInterrupts();
-
-assert held; // (Release should only be performed while the lock is held.)
-
-held = false;
-
-if (!blockingThreads.isEmpty()) {
-
-Thread* unblockedThread = blockingThreads.dequeue();
-
-readyQueue.enqueue(unblockedThread);
-
-}
-
-threadingSystemBusy = false; // Must be an atomic assignment.
-
-systemCall_enableInterrupts(); // Turn pre-emptive switching back on on this core.
-
-}
-
-}
-
-struct ConditionVariable {
-
-volatile ThreadQueue waitingThreads;
-
-}
-
-
-
-As an example, consider a thread-safe class that implements a semaphore.
-
-There are methods to increment (V) and to decrement (P) a private integer s.
-
-However, the integer must never be decremented below 0; thus a thread that tries to decrement must wait until the integer is positive.
-
-We use a condition variable sIsPositive with an associated assertion of
-$$
-P_\mathtt{sIsPositive} = (s > 0)
-$$.
-
-monitor class Semaphore
-
-{
-
-private int s := 0
-
-invariant s >= 0
-
-private Condition sIsPositive /* associated with s > 0 */
-
-public method P()
-
-{
-
-while s = 0:
-
-wait sIsPositive
-
-assert s > 0
-
-s := s - 1
-
-}
-
-public method V()
-
-{
-
-s := s + 1
-
-assert s > 0
-
-signal sIsPositive
-
-}
-
-}
-
-Implemented showing all synchronization (removing the assumption of a thread-safe class and showing the mutex):
-
-class Semaphore
-
-{
-
-private volatile int s := 0
-
-invariant s >= 0
-
-private ConditionVariable sIsPositive /* associated with s > 0 */
-
-private Mutex myLock /* Lock on "s" */
-
-public method P()
-
-{
-
-myLock.acquire()
-
-while s = 0:
-
-wait(myLock, sIsPositive)
-
-assert s > 0
-
-s := s - 1
-
-myLock.release()
-
-}
-
-public method V()
-
-{
-
-myLock.acquire()
-
-s := s + 1
-
-assert s > 0
-
-signal sIsPositive
-
-myLock.release()
-
-}
-
-}
-
-Conversely, locks and condition variables can also be derived from semaphores, thus making monitors and semaphores reducible to one another:
-
-The implementation given here is incorrect. If a thread calls wait() after broadcast() has been called, the late thread may consume a signal intended for a thread that was already waiting, leaving an originally waiting thread stuck indefinitely, since broadcast() increments the semaphore only enough times for the threads already waiting.
-
-
-
-public method wait(Mutex m, ConditionVariable c) {
-
-assert m.held;
-
-c.internalMutex.acquire();
-
-c.numWaiters++;
-
-m.release(); // Can go before/after the neighboring lines.
-
-c.internalMutex.release();
-
-// Another thread could signal here, but that's OK because of how
-
-// semaphores count. If c.sem's number becomes 1, we'll have no
-
-// waiting time.
-
-c.sem.Proberen(); // Block on the CV.
-
-// Woken
-
-m.acquire(); // Re-acquire the mutex.
-
-}
-
-public method signal(ConditionVariable c) {
-
-c.internalMutex.acquire();
-
-if (c.numWaiters > 0) {
-
-c.numWaiters--;
-
-c.sem.Verhogen(); // (Doesn't need to be protected by c.internalMutex.)
-
-}
-
-c.internalMutex.release();
-
-}
-
-public method broadcast(ConditionVariable c) {
-
-c.internalMutex.acquire();
-
-while (c.numWaiters > 0) {
-
-c.numWaiters--;
-
-c.sem.Verhogen(); // (Doesn't need to be protected by c.internalMutex.)
-
-}
-
-c.internalMutex.release();
-
-}
-
-class Mutex {
-
-protected boolean held = false; // For assertions only, to make sure sem's number never goes > 1.
-
-protected Semaphore sem = Semaphore(1); // The number shall always be at most 1.
-
-// Not held <--> 1; held <--> 0.
- -public method acquire() { - -sem.Proberen(); - -assert !held; - -held = true; - -} - -public method release() { - -assert held; // Make sure we never Verhogen sem above 1. That would be bad. - -held = false; - -sem.Verhogen(); - -} - -} - -class ConditionVariable { - -protected int numWaiters = 0; // Roughly tracks the number of waiters blocked in sem. - -// (The semaphore's internal state is necessarily private.) - -protected Semaphore sem = Semaphore(0); // Provides the wait queue. - -protected Mutex internalMutex; // (Really another Semaphore. Protects "numWaiters".) - -} - - - -When a signal happens on a condition variable that at least one other thread is waiting on, - -there are at least two threads that could then occupy the monitor: - -the thread that signals and any one of the threads that is waiting. In order that at most - -one thread occupies the monitor at each time, a choice must be made. Two schools of thought exist on how best to - -resolve this choice. This leads to two kinds of condition variables which will be examined next: - -* Blocking condition variables give priority to a signaled thread. - -* Nonblocking condition variables give priority to the signaling thread. - -The original proposals by C. A. R. Hoare and Per Brinch Hansen were for blocking condition variables. With a blocking condition variable, the signaling thread must wait outside the monitor (at least) until the signaled thread relinquishes occupancy of the monitor by either returning or by again waiting on a condition variable. Monitors using blocking condition variables are often called Hoare-style monitors or signal-and-urgent-wait monitors. - -We assume there are two queues of threads associated with each monitor object - -* e is the entrance queue - -* s is a queue of threads that have signaled. - -In addition we assume that for each condition variable c, there is a queue - -* c.q, which is a queue for threads waiting on condition variable c - -All queues are typically guaranteed to be fair and, in some implementations, may be guaranteed to be first in first out. - -The implementation of each operation is as follows. (We assume that each operation runs in mutual exclusion to the others; thus restarted threads do not begin executing until the operation is complete.) - -enter the monitor: - -enter the method - -if the monitor is locked - -add this thread to e - -block this thread - -else - -lock the monitor - -leave the monitor: - -schedule - -return from the method - -wait c: - -add this thread to c.q - -schedule - -block this thread - -signal c: - -if there is a thread waiting on c.q - -select and remove one such thread t from c.q - -(t is called "the signaled thread") - -add this thread to s - -restart t - -(so t will occupy the monitor next) - -block this thread - -schedule: - -if there is a thread on s - -select and remove one thread from s and restart it - -(this thread will occupy the monitor next) - -else if there is a thread on e - -select and remove one thread from e and restart it - -(this thread will occupy the monitor next) - -else - -unlock the monitor - -(the monitor will become unoccupied) - -The schedule routine selects the next thread to occupy the monitor - -or, in the absence of any candidate threads, unlocks the monitor. - -The resulting signaling discipline is known as "signal and urgent wait," as the signaler must wait, but is given priority over threads on the entrance queue. 
An alternative is "signal and wait," in which there is no s queue and signaler waits on the e queue instead. - -Some implementations provide a signal and return operation that combines signaling with returning from a procedure. - -signal c and return: - -if there is a thread waiting on c.q - -select and remove one such thread t from c.q - -(t is called "the signaled thread") - -restart t - -(so t will occupy the monitor next) - -else - -schedule - -return from the method - -In either case ("signal and urgent wait" or "signal and wait"), when a condition variable is signaled and there is at least one thread waiting on the condition variable, the signaling thread hands occupancy over to the signaled thread seamlessly, so that no other thread can gain occupancy in between. If Pc is true at the start of each signal c operation, it will be true at the end of each wait c operation. This is summarized by the following contracts. In these contracts, I is the monitor's invariant. - -enter the monitor: - -postcondition I - -leave the monitor: - -precondition I - -wait c: - -precondition I - -modifies the state of the monitor - -postcondition Pc and I - -signal c: - -precondition Pc and I - -modifies the state of the monitor - -postcondition I - -signal c and return: - -precondition Pc and I - -In these contracts, it is assumed that I and Pc do not depend on the - -contents or lengths of any queues. - -(When the condition variable can be queried as to the number of threads waiting on its queue, more sophisticated contracts can be given. For example, a useful pair of contracts, allowing occupancy to be passed without establishing the invariant, is: - -wait c: - -precondition I - -modifies the state of the monitor - -postcondition Pc - -signal c - -precondition (not empty(c) and Pc) or (empty(c) and I) - -modifies the state of the monitor - -postcondition I - -(See Howard and Buhr et al. for more.) - -It is important to note here that the assertion Pc is entirely up to the programmer; he or she simply needs to be consistent about what it is. - -We conclude this section with an example of a thread-safe class using a blocking monitor that implements a bounded, thread-safe stack. - -monitor class SharedStack { - -private const capacity := 10 - -private int[capacity] A - -private int size := 0 - -invariant 0 <= size and size <= capacity - -private BlockingCondition theStackIsNotEmpty /* associated with 0 < size and size <= capacity */ - -private BlockingCondition theStackIsNotFull /* associated with 0 <= size and size < capacity */ - -public method push(int value) - -{ - -if size = capacity then wait theStackIsNotFull - -assert 0 <= size and size < capacity - -A[size] := value ; size := size + 1 - -assert 0 < size and size <= capacity - -signal theStackIsNotEmpty and return - -} - -public method int pop() - -{ - -if size = 0 then wait theStackIsNotEmpty - -assert 0 < size and size <= capacity - -size := size - 1 ; - -assert 0 <= size and size < capacity - -signal theStackIsNotFull and return A[size] - -} - -} - -Note that, in this example, the thread-safe stack is internally providing a mutex, which, as in the earlier producer/consumer example, is shared by both condition variables, which are checking different conditions on the same concurrent data. The only difference is that the producer/consumer example assumed a regular non-thread-safe queue and was using a standalone mutex and condition variables, without these details of the monitor abstracted away as is the case here. 
In this example, when the "wait" operation is called, it must somehow be supplied with the thread-safe stack's mutex, for example, if the "wait" operation is an integrated part of the "monitor class". Aside from this kind of abstracted functionality, when a "raw" monitor is used, it will always have to include a mutex and a condition variable, with a unique mutex for each condition variable.
-
-With nonblocking condition variables (also called "Mesa style" condition variables or "signal and continue" condition variables), signaling does not cause the signaling thread to lose occupancy of the monitor. Instead the signaled threads are moved to the e queue. There is no need for the s queue.
-
-With nonblocking condition variables, the signal operation is often called notify, a terminology we will follow here. It is also common to provide a notify all operation that moves all threads waiting on a condition variable to the e queue.
-
-The meanings of the various operations are given here. (We assume that each operation runs in mutual exclusion to the others; thus restarted threads do not begin executing until the operation is complete.)
-
-enter the monitor:
-
-enter the method
-
-if the monitor is locked
-
-add this thread to e
-
-block this thread
-
-else
-
-lock the monitor
-
-leave the monitor:
-
-schedule
-
-return from the method
-
-wait c:
-
-add this thread to c.q
-
-schedule
-
-block this thread
-
-notify c:
-
-if there is a thread waiting on c.q
-
-select and remove one thread t from c.q
-
-(t is called "the notified thread")
-
-move t to e
-
-notify all c:
-
-move all threads waiting on c.q to e
-
-schedule:
-
-if there is a thread on e
-
-select and remove one thread from e and restart it
-
-else
-
-unlock the monitor
-
-As a variation on this scheme, the notified thread may be moved to a queue called w, which has priority over e. See Howard and Buhr et al. for further discussion.
-
-Brinch Hansen published the first monitor notation, adopting the class concept of Simula 67, and invented a queueing mechanism. Hoare refined the rules of process resumption. Brinch Hansen created the first implementation of monitors, in Concurrent Pascal. Hoare demonstrated their equivalence to semaphores.
-
-Monitors (and Concurrent Pascal) were soon used to structure process synchronization in the Solo operating system.
-
-Programming languages that have supported monitors include:
-
-* Ada since Ada 95 (as protected objects)
-
-* C# (and other languages that use the .NET Framework)
-
-* Concurrent Euclid
-
-* Concurrent Pascal
-
-* D
-
-* Delphi (Delphi 2009 and above, via TObject.Monitor)
-
-* Java (via the wait and notify methods)
-
-* Go
-
-* Mesa
-
-* Modula-3
-
-* Python (via the threading.Condition object)
-
-* Ruby
-
-* Squeak Smalltalk
-
-* Turing, Turing+, and Object-Oriented Turing
-
-* µC++
-
-* Visual Prolog
-
-A number of libraries have been written that allow monitors to be constructed in languages that do not support them natively. When library calls are used, it is up to the programmer to explicitly mark the start and end of code executed with mutual exclusion. Pthreads is one such library.
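-
-Since Java appears on the list above, it makes a convenient concrete illustration: every Java object carries an intrinsic monitor, synchronized provides the mutual exclusion, and wait/notifyAll are nonblocking ("signal and continue") operations, which is why the predicate is rechecked in a while loop exactly as the usage pattern earlier in this article prescribes. A minimal sketch of the bounded producer/consumer buffer follows (class and method names are ours, not from any particular library):
-
-import java.util.ArrayDeque;
-import java.util.Deque;
-
-class BoundedBuffer<T> {
-    private final Deque<T> queue = new ArrayDeque<>();
-    private final int capacity;
-
-    BoundedBuffer(int capacity) { this.capacity = capacity; }
-
-    public synchronized void put(T task) throws InterruptedException {
-        while (queue.size() == capacity) {
-            wait();      // releases the monitor and sleeps; re-acquires it on wakeup
-        }
-        queue.addLast(task);
-        notifyAll();     // one wait-set serves both predicates, so notifyAll, not notify
-    }
-
-    public synchronized T take() throws InterruptedException {
-        while (queue.isEmpty()) {
-            wait();
-        }
-        T task = queue.removeFirst();
-        notifyAll();
-        return task;
-    }
-}
-
-Because the intrinsic monitor offers only a single wait-set per object, this corresponds to the single-condition-variable variant shown earlier; java.util.concurrent's ReentrantLock with two Condition objects would correspond to the two-condition-variable version.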
diff --git a/wiki/wikipedia/4248.txt b/wiki/wikipedia/4248.txt
deleted file mode 100644
index 52bfbf38e1f9c55edc33d14b06073c70d18172db..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4248.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-Carl Louis Ferdinand von Lindemann (April 12, 1852 - March 6, 1939) was a German mathematician, noted for his proof, published in 1882, that π (pi) is a transcendental number, meaning it is not a root of any polynomial with rational coefficients.
-
-Lindemann was born in Hanover, the capital of the Kingdom of Hanover. His father, Ferdinand Lindemann, taught modern languages at a Gymnasium in Hanover. His mother, Emilie Crusius, was the daughter of the Gymnasium's headmaster. The family later moved to Schwerin, where young Ferdinand attended school.
-
-He studied mathematics at Göttingen, Erlangen, and Munich. At Erlangen he received a doctorate, supervised by Felix Klein, on non-Euclidean geometry. Lindemann subsequently taught in Würzburg and at the University of Freiburg. During his time in Freiburg, Lindemann devised his proof that π is a transcendental number (see Lindemann–Weierstrass theorem). After his time in Freiburg, Lindemann transferred to the University of Königsberg. While a professor in Königsberg, Lindemann acted as supervisor for the doctoral theses of the mathematicians David Hilbert, Hermann Minkowski, and Arnold Sommerfeld.
-
-In 1882, Lindemann published the result for which he is best known, the transcendence of π. His methods were similar to those used nine years earlier by Charles Hermite to show that e, the base of natural logarithms, is transcendental. Before the publication of Lindemann's proof, it was known that if π were transcendental, then it would be impossible to square the circle by compass and straightedge.
diff --git a/wiki/wikipedia/4249.txt b/wiki/wikipedia/4249.txt
deleted file mode 100644
index b3ebda66f941d6b26acc53d1b7f2c6df49e0a445..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/4249.txt
+++ /dev/null
@@ -1,25 +0,0 @@
-In geometry, Thurston's geometrization theorem or hyperbolization theorem implies that closed atoroidal Haken manifolds are hyperbolic, and in particular satisfy the Thurston conjecture.
-
-One form of Thurston's geometrization theorem states:
-
-If M is a compact irreducible atoroidal Haken manifold whose boundary has zero Euler characteristic, then the interior of M has a complete hyperbolic structure of finite volume.
-
-The Mostow rigidity theorem implies that if a manifold of dimension at least 3 has a hyperbolic structure of finite volume, then it is essentially unique.
-
-The conditions that the manifold M should be irreducible and atoroidal are necessary, as hyperbolic manifolds have these properties. However, the condition that the manifold be Haken is unnecessarily strong. Thurston's hyperbolization conjecture states that a closed irreducible atoroidal 3-manifold with infinite fundamental group is hyperbolic, and this follows from Perelman's proof of the Thurston geometrization conjecture.
-
-Thurston showed that if a compact 3-manifold is prime, homotopically atoroidal, and has non-empty boundary, then it has a complete hyperbolic structure unless it is homeomorphic to a certain manifold (T2×[0,1])/(Z/2Z) with boundary T2.
-
-A hyperbolic structure on the interior of a compact orientable 3-manifold has finite volume if and only if all boundary components are tori, except for the manifold T2×[0,1], which has a hyperbolic structure but none of finite volume.
-
-Thurston never published a complete proof of his theorem for reasons that he explained in , though parts of his argument are contained in . Wall and Morgan gave summaries of Thurston's proof. Otal gave a proof in the case of manifolds that fiber over the circle, and Otal and Kapovich gave proofs for the generic case of manifolds that do not fiber over the circle. Thurston's geometrization theorem also follows from Perelman's Ricci flow proof of the more general Thurston geometrization conjecture.
-
-Thurston's original argument for this case was summarized by Sullivan.
-
-Otal gave a proof in the case of manifolds that fiber over the circle.
-
-Thurston's geometrization theorem in this special case states that if M is a 3-manifold that fibers over the circle and whose monodromy is a pseudo-Anosov diffeomorphism, then the interior of M has a complete hyperbolic metric of finite volume.
-
-Otal and Kapovich gave proofs of Thurston's theorem for the generic case of manifolds that do not fiber over the circle.
-
-The idea of the proof is to cut a Haken manifold M along an incompressible surface, to obtain a new manifold N. By induction one assumes that the interior of N has a hyperbolic structure, and the problem is to modify it so that it can be extended to the boundary of N and glued together. Thurston showed that this follows from the existence of a fixed point for a map of Teichmüller space called the skinning map. The core of the proof of the geometrization theorem is to prove that if N is not an interval bundle over a surface and M is atoroidal, then the skinning map has a fixed point. (If N is an interval bundle then the skinning map has no fixed point, which is why one needs a separate argument when M fibers over the circle.) McMullen gave a new proof of the existence of a fixed point of the skinning map.
diff --git a/wiki/wikipedia/425.txt b/wiki/wikipedia/425.txt
deleted file mode 100644
index 916faf4bc49df91942dbaebb081b2b7930cee30e..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/425.txt
+++ /dev/null
@@ -1,79 +0,0 @@
-The Sylvester–Gallai theorem in geometry states that every finite set of points in the Euclidean plane has a line that passes through exactly two of the points or a line that passes through all of them. It is named after James Joseph Sylvester, who posed it as a problem in 1893, and Tibor Gallai, who published one of the first proofs of this theorem in 1944.
-
-A line that contains exactly two of a set of points is known as an ordinary line. Another way of stating the theorem is that every finite set of points that is not collinear has an ordinary line. According to a strengthening of the theorem, every finite point set (not all on one line) has at least a linear number of ordinary lines. An algorithm can find an ordinary line in a set of $n$ points in time $O(n\log n)$.
-
-The Sylvester–Gallai theorem was posed as a problem by . suggests that Sylvester may have been motivated by a related phenomenon in algebraic geometry, in which the inflection points of a cubic curve in the complex projective plane form a configuration of nine points and twelve lines (the Hesse configuration) in which each line determined by two of the points contains a third point. The Sylvester–Gallai theorem implies that it is impossible for all nine of these points to have real coordinates.
-
-claimed to have a short proof of the Sylvester–Gallai theorem, but it was already noted to be incomplete at the time of publication.
Melchior proved the theorem (and actually a slightly stronger result) in an equivalent formulation, its projective dual. Unaware of Melchior's proof, Erdős again stated the conjecture, which was subsequently proved by Tibor Gallai, and soon afterwards by other authors. - -In a 1951 review, Erdős called the result "Gallai's theorem", but it was already called the Sylvester–Gallai theorem in a 1954 review by Leonard Blumenthal. It is one of many mathematical topics named after Sylvester. - -The question of the existence of an ordinary line can also be posed for points in the real projective plane RP2 instead of the Euclidean plane. The projective plane can be formed from the Euclidean plane by adding extra points "at infinity" where lines that are parallel in the Euclidean plane intersect each other, and by adding a single line "at infinity" containing all the added points. However, the additional points of the projective plane cannot help create non-Euclidean finite point sets with no ordinary line, as any finite point set in the projective plane can be transformed into a Euclidean point set with the same combinatorial pattern of point-line incidences. Therefore, any pattern of finitely many intersecting points and lines that exists in one of these two types of plane also exists in the other. Nevertheless, the projective viewpoint allows certain configurations to be described more easily. In particular, it allows the use of projective duality, in which the roles of points and lines in statements of projective geometry can be exchanged for each other. Under projective duality, the existence of an ordinary line for a set of non-collinear points in RP2 is equivalent to the existence of an ordinary point in a nontrivial arrangement of finitely many lines. An arrangement is said to be trivial when all its lines pass through a common point, and nontrivial otherwise; an ordinary point is a point that belongs to exactly two lines. - -Arrangements of lines have a combinatorial structure closely connected to zonohedra, polyhedra formed as the Minkowski sum of a finite set of line segments, called generators. In this connection, each pair of opposite faces of a zonohedron corresponds to a crossing point of an arrangement of lines in the projective plane, with one line for each generator. The number of sides of each face is twice the number of lines that cross in the arrangement. For instance, the elongated dodecahedron is a zonohedron with five generators, two pairs of opposite hexagon faces, and four pairs of opposite parallelogram faces. - -In the corresponding five-line arrangement, two triples of lines cross (corresponding to the two pairs of opposite hexagons) and the remaining four pairs of lines cross at ordinary points (corresponding to the four pairs of opposite parallelograms). An equivalent statement of the Sylvester–Gallai theorem, in terms of zonohedra, is that every zonohedron has at least one parallelogram face (counting rectangles, rhombuses, and squares as special cases of parallelograms). More strongly, whenever sets of $n$ points in the plane can be guaranteed to have at least $t_2(n)$ ordinary lines, zonohedra with $n$ generators can be guaranteed to have at least $2t_2(n)$ parallelogram faces. - -The Sylvester–Gallai theorem has been proved in many different ways.
Gallai's 1944 proof switches back and forth between Euclidean and projective geometry, in order to transform the points into an equivalent configuration in which an ordinary line can be found as a line of slope closest to zero; for details, see Borwein. The 1941 proof by Melchior uses projective duality to convert the problem into an equivalent question about arrangements of lines, which can be answered using Euler's polyhedral formula. Another proof by Leroy Milton Kelly shows by contradiction that the connecting line with the smallest nonzero distance to another point must be ordinary. And, following an earlier proof by Steinberg, H. S. M. Coxeter showed that the metric concepts of slope and distance appearing in Gallai's and Kelly's proofs are unnecessarily powerful, instead proving the theorem using only the axioms of ordered geometry. - -This proof is by Leroy Milton Kelly. Aigner and Ziegler call it "simply the best" of the many proofs of this theorem. - -Suppose that a finite set $S$ of points is not all collinear. Define a connecting line to be a line that contains at least two points in the collection. By finiteness, $S$ must have a point $P$ and a connecting line $\ell$ that are a positive distance apart but are closer than all other point-line pairs. Kelly proved that $\ell$ is ordinary, by contradiction. - -Assume that $\ell$ is not ordinary. Then it goes through at least three points of $S$. At least two of these are on the same side of $P'$, the perpendicular projection of $P$ on $\ell$. Call them $B$ and $C$, with $B$ being closest to $P'$ (and possibly coinciding with it). Draw the connecting line $m$ passing through $P$ and $C$, and the perpendicular from $B$ to $B'$ on $m$. Then $BB'$ is shorter than $PP'$. This follows from the fact that $PP'C$ and $BB'C$ are similar triangles, one contained inside the other. - -However, this contradicts the original definition of $P$ and $\ell$ as the point-line pair with the smallest positive distance. So the assumption that $\ell$ is not ordinary cannot be true, QED. (A computational sketch of this minimum-distance search is given below.) - -In 1941 (thus, prior to Erdős publishing the question and Gallai's subsequent proof) Melchior showed that any nontrivial finite arrangement of lines in the projective plane has at least three ordinary points. By duality, this result also says that any finite nontrivial set of points on the plane has at least three ordinary lines. - -Melchior observed that, for any graph embedded in the real projective plane, the formula $V-E+F$ must equal $1$, the Euler characteristic of the projective plane. Here $V$, $E$, and $F$ are the number of vertices, edges, and faces of the graph, respectively. Any nontrivial line arrangement on the projective plane defines a graph in which each face is bounded by at least three edges, and each edge bounds two faces; so, double counting gives the additional inequality $F\le 2E/3$. Using this inequality to eliminate $F$ from the Euler characteristic leads to the inequality $E\le 3V-3$. But if every vertex in the arrangement were the crossing point of three or more lines, then the total number of edges would be at least $3V$, contradicting this inequality. Therefore, some vertices must be the crossing point of only two lines, and as Melchior's more careful analysis shows, at least three ordinary vertices are needed in order to satisfy the inequality $E\le 3V-3$. - -As Aigner and Ziegler note, the same argument for the existence of an ordinary vertex was also given in 1944 by Norman Steenrod, who explicitly applied it to the dual ordinary line problem.
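Kelly's minimum-distance argument is effectively constructive, and the $O(n^3)$ brute-force search mentioned later in this article is short to write down. The following is a minimal Python sketch of ours, not part of the deleted article (the helper name `ordinary_line`, the tolerance `eps`, and the sample points are illustrative assumptions): it scans all point-line pairs for the smallest positive distance and then checks that the minimizing connecting line is ordinary, as Kelly's proof guarantees.

```python
from itertools import combinations

def ordinary_line(points, eps=1e-9):
    """Return two points spanning an ordinary line, or None if all are collinear.

    Assumes distinct points. Brute force, O(n^3), following Kelly's argument:
    the connecting line at minimum positive distance from a point is ordinary.
    """
    best = None  # (distance, i, j): closest (point, connecting line) pair so far
    for (i, p), (j, q) in combinations(enumerate(points), 2):
        ax, ay = q[0] - p[0], q[1] - p[1]
        norm = (ax * ax + ay * ay) ** 0.5
        for r in points:
            # distance from r to the line through p and q, via a cross product
            d = abs(ax * (r[1] - p[1]) - ay * (r[0] - p[0])) / norm
            if d > eps and (best is None or d < best[0]):
                best = (d, i, j)
    if best is None:
        return None  # every point lies on one line
    _, i, j = best
    p, q = points[i], points[j]
    ax, ay = q[0] - p[0], q[1] - p[1]
    norm = (ax * ax + ay * ay) ** 0.5
    on_line = [r for r in points
               if abs(ax * (r[1] - p[1]) - ay * (r[0] - p[0])) / norm <= eps]
    assert len(on_line) == 2  # Kelly's theorem: the minimizing line is ordinary
    return p, q

print(ordinary_line([(0, 0), (1, 0), (2, 0), (0, 1), (1, 2)]))
```

On exact integer inputs the fixed tolerance is unproblematic; for general floating-point inputs a relative tolerance would be the safer choice.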
- -By a similar argument, Melchior was able to prove a more general result. For every $k\ge 2$, let $t_k$ be the number of points to which $k$ lines are incident. Then -$$ -\displaystyle \sum_{k\geq2} (k-3) t_k \leq -3, -$$ - -or equivalently, -$$ -\displaystyle t_2 \geqslant 3 + \sum_{k\geq4} (k-3) t_k. -$$ - -Coxeter writes of Kelly's proof that its use of Euclidean distance is unnecessarily powerful, "like using a sledge hammer to crack an almond". Instead, he gave another proof of the Sylvester–Gallai theorem within ordered geometry, an axiomatization of geometry in terms of betweenness that includes not only Euclidean geometry but several other related geometries. Coxeter's proof is a variation of an earlier proof given by Steinberg in 1944. The problem of finding a minimal set of axioms needed to prove the theorem belongs to reverse mathematics; see Pambuccian for a study of this question. - -The usual statement of the Sylvester–Gallai theorem is not valid in constructive analysis, as it implies the lesser limited principle of omniscience, a weakened form of the law of excluded middle that is rejected as an axiom of constructive mathematics. Nevertheless, it is possible to formulate a version of the Sylvester–Gallai theorem that is valid within the axioms of constructive analysis, and to adapt Kelly's proof of the theorem to be a valid proof under these axioms. - -Kelly's proof of the existence of an ordinary line can be turned into an algorithm that finds an ordinary line by searching for the closest pair of a point and a line through two other points. Mukhopadhyay et al. report the time for this closest-pair search as $O(n^3)$, based on a brute-force search of all triples of points, but an algorithm to find the closest given point to each line through two given points, in time $O(n^2)$, was given earlier by Edelsbrunner, as a subroutine for finding the minimum-area triangle determined by three of a given set of points. The same paper of Edelsbrunner also shows how to construct the dual arrangement of lines to the given points (as used in Melchior and Steenrod's proof) in the same time, $O(n^2)$, from which it is possible to identify all ordinary vertices and all ordinary lines. Mukhopadhyay et al. first showed how to find a single ordinary line (not necessarily the one from Kelly's proof) in time $O(n\log n)$, and a simpler algorithm with the same time bound was described later. - -The algorithm of Mukhopadhyay et al. is based on Coxeter's proof using ordered geometry. It performs the following steps: - -#Choose a point $p_0$ that is a vertex of the convex hull of the given points. - -#Construct a line $\ell_0$ that passes through $p_0$ and otherwise stays outside of the convex hull. - -#Sort the other given points by the angle they make with $p_0$, grouping together points that form the same angle. - -#If any of the points is alone in its group, then return the ordinary line through that point and $p_0$. - -#For each two consecutive groups of points, in the sorted sequence by their angles, form two lines, each of which passes through the closest point to $p_0$ in one group and the farthest point from $p_0$ in the other group. - -#For each line $\ell_i$ in the set of lines formed in this way, find the intersection point of $\ell_i$ with $\ell_0$. - -#Return the line $\ell_i$ whose intersection point with $\ell_0$ is the closest to $p_0$. - -As the authors prove, the line returned by this algorithm must be ordinary.
The proof is either by construction if it is returned by step 4, or by contradiction if it is returned by step 7: if the line returned in step 7 were not ordinary, then the authors prove that there would exist an ordinary line between one of its points and $p_0$, but this line should have already been found and returned in step 4. - -While the Sylvester–Gallai theorem states that an arrangement of points, not all collinear, must determine an ordinary line, it does not say how many must be determined. Let $t_2(n)$ be the minimum number of ordinary lines determined over every set of $n$ non-collinear points. Melchior's proof showed that $t_2(n)\ge 3$. De Bruijn and Erdős raised the question of whether $t_2(n)$ approaches infinity with $n$. Motzkin confirmed that it does by proving that $t_2(n)\ge\sqrt n$. Dirac conjectured that $t_2(n)\ge\lfloor n/2\rfloor$ for all values of $n$, a conjecture that still stands. This is often referred to as the Dirac–Motzkin conjecture; see for example Brass. Kelly and Moser proved that $t_2(n)\ge 3n/7$. - -Dirac's conjectured lower bound is asymptotically the best possible, as the even numbers $n$ greater than four have a matching upper bound $t_2(n)\le n/2$. The construction, due to Károly Böröczky, that achieves this bound consists of the vertices of a regular $m$-gon in the real projective plane and another $m$ points (thus, $n=2m$) on the line at infinity corresponding to each of the directions determined by pairs of vertices. Although there are $m(m-1)/2$ pairs of these points, they determine only $m$ distinct directions. This arrangement has only $m$ ordinary lines, the lines that connect a vertex $v$ with the point at infinity collinear with the two neighbors of $v$. As with any finite configuration in the real projective plane, this construction can be perturbed so that all points are finite, without changing the number of ordinary lines. Another construction, due to McKee, consists of two regular pentagons joined edge-to-edge together with the midpoint of the shared edge and four points on the line at infinity in the projective plane; these 13 points have among them 6 ordinary lines. Modifications of Böröczky's construction lead to sets of odd numbers of points with $3\lfloor n/4\rfloor$ ordinary lines. - -Csima and Sawyer proved that $t_2(n)\ge\lceil 6n/13\rceil$ except when $n$ is seven. Asymptotically, this formula is already $12/13 \approx 92.3\%$ of the proven $n/2$ upper bound. The $n=7$ case is an exception because otherwise the Kelly–Moser construction would be a counterexample; their construction shows that $t_2(7)\le 3$. However, were the Csima–Sawyer bound valid for $n=7$, it would claim that $t_2(7)\ge 4$. - -A closely related result is Beck's theorem, stating a tradeoff between the number of lines with few points and the number of points on a single line. - -Ben Green and Terence Tao showed that for all sufficiently large point sets (that is, $n > n_0$ for some suitable choice of $n_0$), the number of ordinary lines is indeed at least $n/2$. Furthermore, when $n$ is odd, the number of ordinary lines is at least $3n/4-C$, for some constant $C$. Thus, the constructions of Böröczky for even and odd $n$ (discussed above) are best possible. Minimizing the number of ordinary lines is closely related to the orchard-planting problem of maximizing the number of three-point lines, which Green and Tao also solved for all sufficiently large point sets. - -As Paul Erdős observed, the Sylvester–Gallai theorem immediately implies that any set of $n$ points that are not collinear determines at least $n$ different lines.
This result is known as the De Bruijn–Erdős theorem. As a base case, the result is clearly true for $n=3$. For any larger value of $n$, the result can be reduced from $n$ points to $n-1$ points, by deleting an ordinary line and one of the two points on it (taking care not to delete a point for which the remaining subset would lie on a single line). Thus, it follows by mathematical induction. The example of a near-pencil, a set of $n-1$ collinear points together with one additional point that is not on the same line as the other points, shows that this bound is tight. - -Another generalization of the Sylvester–Gallai theorem to arbitrary metric spaces was conjectured by Chvátal and proved by Chen. In this generalization, a triple of points in a metric space is defined to be collinear when the triangle inequality for these points is an equality, and a line is defined from any pair of points by repeatedly including additional points that are collinear with points already added to the line, until no more such points can be added. The generalization of Chvátal and Chen states that every finite metric space has a line that contains either all points or exactly two of the points. diff --git a/wiki/wikipedia/4250.txt b/wiki/wikipedia/4250.txt deleted file mode 100644 index b777844fea8346cf871c27f5f47bea56f5dc8e11..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4250.txt +++ /dev/null @@ -1,139 +0,0 @@ -In number theory, a branch of mathematics, the fundamental theorem of arithmetic, also called the unique factorization theorem, states that every integer greater than 1 can be represented uniquely as a product of prime numbers, up to the order of the factors. For example, - - - -1200 = 2^4 \cdot 3 \cdot 5^2 = (2 \cdot 2 \cdot 2 \cdot 2) \cdot 3 \cdot (5 \cdot 5) = 5 \cdot 2 \cdot 5 \cdot 2 \cdot 3 \cdot 2 \cdot 2 = \ldots - - - -The theorem says two things about this example: first, that 1200 can be represented as a product of primes, and second, that no matter how this is done, there will always be exactly four 2s, one 3, two 5s, and no other primes in the product. - -The requirement that the factors be prime is necessary: factorizations containing composite numbers may not be unique - -(for example, $12 = 2 \cdot 6 = 3 \cdot 4$). - -This theorem is one of the main reasons why 1 is not considered a prime number: if 1 were prime, then factorization into primes would not be unique; for example, $2 = 2 \cdot 1 = 2 \cdot 1 \cdot 1 = \ldots$ - -This theorem generalizes to other algebraic structures, in particular to polynomial rings over a field. These structures are called unique factorization domains. - -Book VII, propositions 30, 31 and 32, and Book IX, proposition 14 of Euclid's Elements are essentially the statement and proof of the fundamental theorem. - -Proposition 30 (in modern terminology: if a prime p divides the product ab, then p divides either a or b or both) is referred to as Euclid's lemma, and it is the key in the proof of the fundamental theorem of arithmetic. - -Proposition 31 (in modern terminology: every integer greater than one is divided evenly by some prime number) is proved directly by infinite descent. - -Proposition 32 is derived from proposition 31, and proves that the decomposition is possible.
Book IX, proposition 14 (in modern terminology: a least common multiple of several prime numbers is not a multiple of any other prime number) is derived from Book VII, proposition 30, and proves partially that the decomposition is unique – a point critically noted by André Weil. Indeed, in this proposition the exponents are all equal to one, so nothing is said for the general case. - -Article 16 of Gauss' Disquisitiones Arithmeticae is an early modern statement and proof employing modular arithmetic. - -===Canonical representation of a positive integer=== - -Every positive integer n > 1 can be represented in exactly one way as a product of prime powers: - - - -n = p_1^{n_1}p_2^{n_2} \cdots p_k^{n_k} - -= \prod_{i=1}^{k} p_i^{n_i} - - - -where $p_1 < p_2 < \cdots < p_k$ are primes and the $n_i$ are positive integers. This representation is commonly extended to all positive integers, including 1, by the convention that the empty product is equal to 1 (the empty product corresponds to k = 0). - -This representation is called the canonical representation of n, or the standard form of n. For example, - -$999 = 3^3\times 37$, - -$1000 = 2^3\times 5^3$, - -$1001 = 7\times 11\times 13$. - -Factors $p^0 = 1$ may be inserted without changing the value of n (for example, $1000 = 2^3\times 3^0\times 5^3$). In fact, any positive integer can be uniquely represented as an infinite product taken over all the positive prime numbers: -$$ -n=2^{n_1}3^{n_2}5^{n_3}7^{n_4}\cdots=\prod_{i=1}^\infty p_i^{n_i}, -$$ - -where a finite number of the $n_i$ are positive integers, and the rest are zero. Allowing negative exponents provides a canonical form for positive rational numbers. - -The canonical representations of the product, greatest common divisor (GCD), and least common multiple (LCM) of two numbers a and b can be expressed simply in terms of the canonical representations of a and b themselves: - -\begin{alignat}{2} - -a\cdot b & = 2^{a_1+b_1}3^{a_2+b_2}5^{a_3+b_3}7^{a_4+b_4}\cdots - -&& = \prod p_i^{a_i+b_i},\\ - -\gcd(a,b) & = 2^{\min(a_1,b_1)}3^{\min(a_2,b_2)}5^{\min(a_3,b_3)}7^{\min(a_4,b_4)}\cdots - -&& = \prod p_i^{\min(a_i,b_i)},\\ - -\operatorname{lcm}(a,b) & = 2^{\max(a_1,b_1)}3^{\max(a_2,b_2)}5^{\max(a_3,b_3)}7^{\max(a_4,b_4)}\cdots - -&& = \prod p_i^{\max(a_i,b_i)}. - -\end{alignat} - -However, integer factorization, especially of large numbers, is much more difficult than computing products, GCDs, or LCMs. So these formulas have limited use in practice. (A short computational sketch of these formulas appears at the end of this article.) - -Many arithmetic functions are defined using the canonical representation. In particular, the values of additive and multiplicative functions are determined by their values on the powers of prime numbers. - -The proof uses Euclid's lemma (Elements VII, 30): If a prime divides the product of two integers, then it must divide at least one of these integers. - -It must be shown that every integer greater than 1 is either prime or a product of primes. First, 2 is prime. Then, by strong induction, assume this is true for all numbers greater than 1 and less than n. If n is prime, there is nothing more to prove. Otherwise, there are integers a and b, where n = a b, and 1 < a ≤ b < n. By the induction hypothesis, a = p1 p2 ⋅⋅⋅ pj and b = q1 q2 ⋅⋅⋅ qk are products of primes. But then n = a b = p1 p2 ⋅⋅⋅ pj q1 q2 ⋅⋅⋅ qk is a product of primes. - -Suppose, to the contrary, there is an integer that has two distinct prime factorizations. Let n be the least such integer and write n = p1 p2 ... pj = q1 q2 ... qk, where each pi and qi is prime. We see that p1 divides q1 q2 ... qk, so p1 divides some qi by Euclid's lemma. Without loss of generality, say p1 divides q1. Since p1 and q1 are both prime, it follows that p1 = q1.
Returning to our factorizations of n, we may cancel these two factors to conclude that p2 ... pj = q2 ... qk. We now have two distinct prime factorizations of some integer strictly smaller than n, which contradicts the minimality of n. - -The fundamental theorem of arithmetic can also be proved without using Euclid's lemma. The proof that follows is inspired by Euclid's original version of the Euclidean algorithm. - -Assume that $s$ is the smallest positive integer which is the product of prime numbers in two different ways. Incidentally, this implies that $s$, if it exists, must be a composite number greater than $1$. Now, say - - - -\begin{align} - -s - -&=p_1 p_2 \cdots p_m \\ - -&=q_1 q_2 \cdots q_n. - -\end{align} - - - -Every $p_i$ must be distinct from every $q_j.$ Otherwise, if say $p_i=q_j,$ then there would exist some positive integer $t=s/p_i=s/q_j$ that is smaller than s and has two distinct prime factorizations. One may also suppose that $p_1 < q_1,$ by exchanging the two factorizations, if needed. - -Setting $P=p_2\cdots p_m$ and $Q=q_2\cdots q_n,$ one has $s=p_1P=q_1Q.$ It follows that -$$ -(q_1-p_1)Q= p_1(P-Q) < s. -$$ - -As the positive integers less than s have been supposed to have a unique prime factorization, $p_1$ must occur in the factorization of either $q_1-p_1$ or Q. The latter case is impossible, as Q, being smaller than s, must have a unique prime factorization, and $p_1$ differs from every $q_j.$ The former case is also impossible, as, if $p_1$ is a divisor of $q_1-p_1,$ it must be also a divisor of $q_1,$ which is impossible as $p_1$ and $q_1$ are distinct primes. - -Therefore, there cannot exist a smallest integer with more than a single distinct prime factorization. Every positive integer must either be a prime number itself, which factors uniquely, or a composite that also factors uniquely into primes, or, in the case of the integer $1$, not factor into any prime. - -The first generalization of the theorem is found in Gauss's second monograph (1832) on biquadratic reciprocity. This paper introduced what is now called the ring of Gaussian integers, the set of all complex numbers a + bi where a and b are integers. It is now denoted by $\mathbb{Z}[i].$ He showed that this ring has the four units ±1 and ±i, that the non-zero, non-unit numbers fall into two classes, primes and composites, and that (except for order), the composites have unique factorization as a product of primes. - -Similarly, in 1844 while working on cubic reciprocity, Eisenstein introduced the ring $\mathbb{Z}[\omega]$, where $\omega=\frac{-1+\sqrt{-3}}{2}$, a cube root of unity satisfying $\omega^3=1$. This is the ring of Eisenstein integers, and he proved it has the six units $\pm 1, \pm\omega, \pm\omega^2$ and that it has unique factorization. - -However, it was also discovered that unique factorization does not always hold. An example is given by $\mathbb{Z}[\sqrt{-5}]$. In this ring one has - - - -6= - -2\cdot 3= - -(1+\sqrt{-5})(1-\sqrt{-5}). - - - -Examples like this caused the notion of "prime" to be modified. In $\mathbb{Z}[\sqrt{-5}]$ it can be proven that if any of the factors above can be represented as a product, for example, 2 = ab, then one of a or b must be a unit. This is the traditional definition of "prime". It can also be proven that none of these factors obeys Euclid's lemma; for example, 2 divides neither $1+\sqrt{-5}$ nor $1-\sqrt{-5}$ even though it divides their product 6.
In algebraic number theory 2 is called irreducible in $\mathbb{Z}[\sqrt{-5}]$ (only divisible by itself or a unit) but not prime in $\mathbb{Z}[\sqrt{-5}]$ (if it divides a product it must divide one of the factors). The mention of $\mathbb{Z}[\sqrt{-5}]$ is required because 2 is prime and irreducible in $\mathbb{Z}.$ Using these definitions it can be proven that in any integral domain a prime must be irreducible. Euclid's classical lemma can be rephrased as "in the ring of integers $\mathbb{Z}$ every irreducible is prime". This is also true in $\mathbb{Z}[i]$ and $\mathbb{Z}[\omega],$ but not in $\mathbb{Z}[\sqrt{-5}].$ - -The rings in which factorization into irreducibles is essentially unique are called unique factorization domains. Important examples are polynomial rings over the integers or over a field, Euclidean domains and principal ideal domains. - -In 1843 Kummer introduced the concept of ideal number, which was developed further by Dedekind (1876) into the modern theory of ideals, special subsets of rings. Multiplication is defined for ideals, and the rings in which they have unique factorization are called Dedekind domains. - -There is a version of unique factorization for ordinals, though it requires some additional conditions to ensure uniqueness.
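To make the canonical-representation formulas above concrete, here is a minimal Python sketch of ours, not part of the deleted article (`factorize` and `gcd_lcm` are hypothetical helper names, and trial division is chosen only for brevity): the gcd and lcm are computed by taking the min and max of the prime exponents.

```python
import math
from collections import Counter

def factorize(n):
    """Trial-division prime factorization; returns {prime: exponent}."""
    factors, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def gcd_lcm(a, b):
    """gcd and lcm from the canonical representations: min/max of exponents."""
    fa, fb = factorize(a), factorize(b)
    gcd = lcm = 1
    for p in set(fa) | set(fb):
        gcd *= p ** min(fa[p], fb[p])  # a Counter defaults missing primes to 0
        lcm *= p ** max(fa[p], fb[p])
    return gcd, lcm

# cross-check against the standard library (math.lcm needs Python 3.9+)
assert gcd_lcm(999, 1000) == (math.gcd(999, 1000), math.lcm(999, 1000))
print(dict(factorize(1200)))  # {2: 4, 3: 1, 5: 2}, i.e. 1200 = 2^4 * 3 * 5^2
```

For the worked numbers above, `factorize(999)` gives `{3: 3, 37: 1}` and `factorize(1000)` gives `{2: 3, 5: 3}`; no prime is shared, so the gcd is 1 and the lcm is 999000.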
diff --git a/wiki/wikipedia/4251.txt b/wiki/wikipedia/4251.txt deleted file mode 100644 index ff5cd9f85ceedb73c8a04b096282549ad0a581ba..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4251.txt +++ /dev/null @@ -1,7 +0,0 @@ -The Hausdorff paradox is a paradox in mathematics named after Felix Hausdorff. It involves the sphere ${ S^2 }$ (a 2-dimensional sphere in ${ \R^3 }$). It states that if a certain countable subset is removed from ${ S^2 }$, then the remainder can be divided into three disjoint subsets ${ A,B }$ and ${ C }$ such that ${ A, B, C }$ and ${ B \cup C }$ are all congruent. In particular, it follows that on $S^2$ there is no finitely additive measure defined on all subsets such that the measure of congruent sets is equal (because this would imply that the measure of ${ B \cup C }$ is simultaneously $1/3$, $1/2$, and $2/3$ of the non-zero measure of the whole sphere). - -The paradox was published in Mathematische Annalen in 1914 and also in Hausdorff's book, Grundzüge der Mengenlehre, the same year. The proof of the much more famous Banach–Tarski paradox uses Hausdorff's ideas. The proof of this paradox relies on the axiom of choice. - -This paradox shows that there is no finitely additive measure on a sphere defined on all subsets which is equal on congruent pieces. (Hausdorff first showed in the same paper the easier result that there is no countably additive measure defined on all subsets.) The structure of the group of rotations on the sphere plays a crucial role here: the statement is not true on the plane or the line. In fact, as was later shown by Banach, it is possible to define an "area" for all bounded subsets in the Euclidean plane (as well as "length" on the real line) in such a way that congruent sets will have equal "area". (This Banach measure, however, is only finitely additive, so it is not a measure in the full sense, but it equals the Lebesgue measure on sets for which the latter exists.) This implies that if two open subsets of the plane (or the real line) are equi-decomposable then they have equal area. diff --git a/wiki/wikipedia/4252.txt b/wiki/wikipedia/4252.txt deleted file mode 100644 index 0da3add0a9dba90ef970eeb35806b7f6ab4dfc3e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/4252.txt +++ /dev/null @@ -1,118 +0,0 @@ -In mathematics, Nesbitt's inequality states that for positive real numbers a, b and c, -$$ -\frac{a}{b+c}+\frac{b}{a+c}+\frac{c}{a+b}\geq\frac{3}{2}. -$$ - -It is an elementary special case (N = 3) of the difficult and much studied Shapiro inequality, and was published at least 50 years earlier. - -There is no corresponding upper bound as any of the 3 fractions in the inequality can be made arbitrarily large. - -By the AM-HM inequality on $(a+b),(b+c),(c+a)$, -$$ -\frac{(a+b)+(a+c)+(b+c)}{3}\geq\frac{3}{\displaystyle\frac{1}{a+b}+\frac{1}{a+c}+ \frac{1}{b+c}}. -$$ - -Clearing denominators yields -$$ -((a+b)+(a+c)+(b+c))\left(\frac{1}{a+b}+\frac{1}{a+c}+\frac{1}{b+c}\right)\geq 9, -$$ - -from which we obtain -$$ -2\frac{a+b+c}{b+c}+2\frac{a+b+c}{a+c}+2\frac{a+b+c}{a+b}\geq9 -$$ - -by expanding the product and collecting like denominators. This then simplifies directly to the final result. - -Suppose $a \ge b \ge c$. Then -$$ -\frac 1 {b+c} \ge \frac 1 {a+c} \ge \frac 1 {a+b}. -$$ - -Define -$$ -\vec x = (a, b, c) -$$ -$$ -\vec y = \left(\frac 1 {b+c} , \frac 1 {a+c} , \frac 1 {a+b}\right) -$$ - -By the rearrangement inequality, the scalar product of the two sequences is maximized when they are arranged the same way. Calling $\vec y_1$ and $\vec y_2$ the vectors obtained from $\vec y$ by cyclic shifts of one and two positions, we have: -$$ -\vec x \cdot \vec y \ge \vec x \cdot \vec y_1 -$$ -$$ -\vec x \cdot \vec y \ge \vec x \cdot \vec y_2 -$$ - -Addition yields the desired Nesbitt's inequality. - -The following identity is true for all $a,b,c:$ - -\frac{a}{b+c}+\frac{b}{a+c}+\frac{c}{a+b} = \frac{3}{2} + \frac{1}{2} \left(\frac{(a-b)^2}{(a+c)(b+c)}+\frac{(a-c)^2}{(a+b)(b+c)}+\frac{(b-c)^2}{(a+b)(a+c)}\right) - - - -This clearly proves that the left side is no less than $\frac{3}{2}$ for positive a, b and c. - -Note: every rational inequality can be demonstrated by transforming it to the appropriate sum-of-squares identity, see Hilbert's seventeenth problem. - -Invoking the Cauchy–Schwarz inequality on the vectors $\displaystyle\left\langle\sqrt{a+b},\sqrt{b+c},\sqrt{c+a}\right\rangle,\left\langle\frac{1}{\sqrt{a+b}},\frac{1}{\sqrt{b+c}},\frac{1}{\sqrt{c+a}}\right\rangle$ yields -$$ -((b+c)+(a+c)+(a+b))\left(\frac{1}{b+c}+\frac{1}{a+c}+\frac{1}{a+b}\right)\geq 9, -$$ - -which can be transformed into the final result as we did in the AM-HM proof. - -Let $x=a+b,y=b+c,z=c+a$. We then apply the AM-GM inequality to obtain the following -$$ -\frac{x+z}{y}+\frac{y+z}{x}+\frac{x+y}{z}\geq6. -$$ - -because $\frac{x}{y}+\frac{z}{y}+\frac{y}{x}+\frac{z}{x}+\frac{x}{z}+\frac{y}{z}\geq6\sqrt[6]{\frac{x}{y}\cdot\frac{z}{y}\cdot\frac{y}{x}\cdot\frac{z}{x}\cdot\frac{x}{z}\cdot \frac{y}{z}}=6.$ - -Substituting out the $x,y,z$ in favor of $a,b,c$ yields -$$ -\frac{2a+b+c}{b+c}+\frac{a+b+2c}{a+b}+\frac{a+2b+c}{c+a}\geq6 -$$ -$$ -\frac{2a}{b+c}+\frac{2c}{a+b}+\frac{2b}{a+c}+3\geq6 -$$ - -which then simplifies to the final result. - -Titu's lemma, a direct consequence of the Cauchy–Schwarz inequality, states that for any sequence of $n$ real numbers $(x_k)$ and any sequence of $n$ positive numbers $(a_k)$, $\displaystyle\sum_{k=1}^n\frac{x_k^2}{a_k}\geq\frac{(\sum_{k=1}^n x_k)^2}{\sum_{k=1}^n a_k}$. - -We use the lemma on $(x_k)=(1,1,1)$ and $(a_k)=(b+c,a+c,a+b)$.
This gives -$$ -\frac{1}{b+c}+\frac{1}{c+a}+\frac{1}{a+b}\geq\frac{3^2}{2(a+b+c)}. -$$ - -Multiplying through by $a+b+c$, this results in -$$ -\frac{a+b+c}{b+c}+\frac{a+b+c}{c+a}+\frac{a+b+c}{a+b}\geq\frac{9}{2}, -$$ i.e., -$$ -\frac{a}{b+c}+\frac{b}{c+a}+\frac{c}{a+b}\geq\frac{9}{2}-3=\frac{3}{2}. -$$ - -As the left side of the inequality is homogeneous, we may assume $a+b+c=1$. Now define $x=a+b$, $y=b+c$, and $z=c+a$. The desired inequality turns into $\frac{1-x}{x}+\frac{1-y}{y}+\frac{1-z}{z}\ge \frac{3}{2}$, or, equivalently, $\frac{1}{x}+\frac{1}{y}+\frac{1}{z}\ge 9/2$. This is clearly true by Titu's Lemma, since $x+y+z=2$. - -Define $S=a+b+c$ and consider the function $f(x)=\frac{x}{S-x}$. This function can be shown to be convex in $[0,S)$ and, invoking Jensen's inequality, we get -$$ -\displaystyle \frac{\frac{a}{S-a}+\frac{b}{S-b}+\frac{c}{S-c}}{3}\geq \frac{S/3}{S-S/3}. -$$ - -A straightforward computation yields -$$ -\frac{a}{b+c}+\frac{b}{c+a}+\frac{c}{a+b}\geq\frac{3}{2}. -$$ - -By clearing denominators, -$$ -\frac{a}{b+c}+\frac{b}{a+c}+\frac{c}{a+b}\geq\frac{3}{2}\iff2(a^3+b^3+c^3)\geq ab^2+a^2b+ac^2+a^2c+bc^2+b^2c. -$$ - -It now suffices to prove that $x^3+y^3\geq xy^2+x^2y$ for $(x, y)\in\mathbb{R}^2_+$, as summing this three times for $(x, y) = (a, b),\ (a, c),$ and $(b, c)$ completes the proof. - -As $x^3+y^3\geq xy^2+x^2y\iff (x-y)(x^2-y^2)\geq 0$, we are done.
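A quick numerical sanity check of the inequality, complementing the proofs above (a sketch of ours, not from the article; the sampling range is an arbitrary choice): the left-hand side stays at or above 3/2 over random positive triples, with equality at a = b = c.

```python
import random

def nesbitt_lhs(a, b, c):
    """Left-hand side of Nesbitt's inequality."""
    return a / (b + c) + b / (a + c) + c / (a + b)

assert abs(nesbitt_lhs(1, 1, 1) - 1.5) < 1e-12  # equality exactly at a = b = c
for _ in range(100_000):
    a, b, c = (random.uniform(1e-3, 1e3) for _ in range(3))
    assert nesbitt_lhs(a, b, c) >= 1.5 - 1e-12   # small slack for float rounding
print("Nesbitt's inequality held on all sampled triples.")
```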
diff --git a/wiki/wikipedia/426.txt b/wiki/wikipedia/426.txt deleted file mode 100644 index 5c4fd7b293e4d4be881936cf3de823da606e509f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/426.txt +++ /dev/null @@ -1,10 +0,0 @@ -In plane geometry, a Jacobi point is a point in the Euclidean plane determined by a triangle ABC and a triple of angles α, β, and γ. This information is sufficient to determine three points X, Y, and Z such that ∠ZAB = ∠YAC = α, ∠XBC = ∠ZBA = β, and ∠YCA = ∠XCB = γ. Then, by a theorem of Jacobi, the lines AX, BY, and CZ are concurrent, at a point N called the Jacobi point. - -The Jacobi point is a generalization of the Fermat point, which is obtained by letting α = β = γ = 60°, for a triangle ABC with no angle greater than or equal to 120°. - -If the three angles above are equal, then N lies on the rectangular hyperbola given in areal coordinates by -$$ -yz(\cot B - \cot C) + zx(\cot C - \cot A) + xy(\cot A - \cot B) = 0, -$$ - -which is Kiepert's hyperbola. Each choice of three equal angles determines a triangle center. diff --git a/wiki/wikipedia/427.txt b/wiki/wikipedia/427.txt deleted file mode 100644 index 12f353a4eff4be5c8f8f6789b32c0d136671c07b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/427.txt +++ /dev/null @@ -1,133 +0,0 @@ -In probability theory and mathematical statistics, the law of total cumulance is a generalization to cumulants of the law of total probability, the law of total expectation, and the law of total variance. It has applications in the analysis of time series. It was introduced by David Brillinger. - -It is most transparent when stated in its most general form, for joint cumulants, rather than for cumulants of a specified order for just one random variable. In general, we have -$$ - \kappa(X_1,\dots,X_n)=\sum_\pi \kappa(\kappa(X_i : i\in B \mid Y) : B \in \pi), -$$ - -where - -* κ(X1, ..., Xn) is the joint cumulant of n random variables X1, ..., Xn, and - -* the sum is over all partitions $\pi$ of the set { 1, ..., n } of indices, and - -* "B ∈ π" means B runs through the whole list of "blocks" of the partition π, and - -* κ(Xi : i ∈ B | Y) is a conditional cumulant given the value of the random variable Y. It is therefore a random variable in its own right, a function of the random variable Y. - -Only in the cases n = 2 and n = 3 is the nth cumulant the same as the nth central moment. The case n = 2 is well-known (see law of total variance). Below is the case n = 3. The notation μ3 means the third central moment. - -\mu_3(X)= \operatorname{E}(\mu_3(X\mid Y))+\mu_3(\operatorname{E}(X\mid Y)) - -+3\operatorname{cov}(\operatorname{E}(X\mid Y),\operatorname{var}(X\mid Y)). - -For general 4th-order cumulants, the rule gives a sum of 15 terms, as follows: - - - -\begin{align} - -& \kappa(X_1,X_2,X_3,X_4) \\[5pt] - -= {} & \kappa(\kappa(X_1,X_2,X_3,X_4\mid Y)) \\[5pt] - -& \left.\begin{matrix} - -& {}+\kappa(\kappa(X_1,X_2,X_3\mid Y),\kappa(X_4\mid Y)) \\[5pt] - -& {}+\kappa(\kappa(X_1,X_2,X_4\mid Y),\kappa(X_3\mid Y)) \\[5pt] - -& {}+\kappa(\kappa(X_1,X_3,X_4\mid Y),\kappa(X_2\mid Y)) \\[5pt] - -& {}+\kappa(\kappa(X_2,X_3,X_4\mid Y),\kappa(X_1\mid Y)) - -\end{matrix}\right\}(\text{partitions of the } 3+1 \text{ form}) \\[5pt] - -& \left.\begin{matrix} - -& {}+\kappa(\kappa(X_1,X_2\mid Y),\kappa(X_3,X_4\mid Y)) \\[5pt] - -& {}+\kappa(\kappa(X_1,X_3\mid Y),\kappa(X_2,X_4\mid Y)) \\[5pt] - -& {}+\kappa(\kappa(X_1,X_4\mid Y),\kappa(X_2,X_3\mid Y))\end{matrix}\right\}(\text{partitions of the } 2+2 \text{ form}) \\[5pt] - -& \left.\begin{matrix} - -& {}+\kappa(\kappa(X_1,X_2\mid Y),\kappa(X_3\mid Y),\kappa(X_4\mid Y)) \\[5pt] - -& {}+\kappa(\kappa(X_1,X_3\mid Y),\kappa(X_2\mid Y),\kappa(X_4\mid Y)) \\[5pt] - -& {}+\kappa(\kappa(X_1,X_4\mid Y),\kappa(X_2\mid Y),\kappa(X_3\mid Y)) \\[5pt] - -& {}+\kappa(\kappa(X_2,X_3\mid Y),\kappa(X_1\mid Y),\kappa(X_4\mid Y)) \\[5pt] - -& {}+\kappa(\kappa(X_2,X_4\mid Y),\kappa(X_1\mid Y),\kappa(X_3\mid Y)) \\[5pt] - -& {}+\kappa(\kappa(X_3,X_4\mid Y),\kappa(X_1\mid Y),\kappa(X_2\mid Y)) - -\end{matrix}\right\}(\text{partitions of the } 2+1+1 \text{ form}) \\[5pt] - -& \begin{matrix} {}+\kappa(\kappa(X_1\mid Y),\kappa(X_2\mid Y),\kappa(X_3\mid Y),\kappa(X_4\mid Y)). \end{matrix} - -\end{align} - - - -Suppose Y has a Poisson distribution with expected value λ, and X is the sum of Y copies of W that are independent of each other and of Y. -$$ -X=\sum_{y=1}^Y W_y. -$$ - -All of the cumulants of the Poisson distribution are equal to each other, and so in this case are equal to λ. Also recall that if random variables W1, ..., Wm are independent, then the nth cumulant is additive: -$$ -\kappa_n(W_1+\cdots+W_m)=\kappa_n(W_1)+\cdots+\kappa_n(W_m). -$$ - -We will find the 4th cumulant of X.
We have: - - - -\begin{align} - -\kappa_4(X) = {} & \kappa(X,X,X,X) \\[8pt] - -= {} &\kappa_1(\kappa_4(X\mid Y))+4\kappa(\kappa_3(X\mid Y),\kappa_1(X\mid Y))+3\kappa_2(\kappa_2(X\mid Y)) \\ - -& {}+6\kappa(\kappa_2(X\mid Y),\kappa_1(X\mid Y),\kappa_1(X\mid Y))+\kappa_4(\kappa_1(X\mid Y)) \\[8pt] - -= {} & \kappa_1(Y\kappa_4(W))+4\kappa(Y\kappa_3(W),Y\kappa_1(W)) - -+3\kappa_2(Y\kappa_2(W)) \\ - -& {}+6\kappa(Y\kappa_2(W),Y\kappa_1(W),Y\kappa_1(W)) - -+\kappa_4(Y\kappa_1(W)) \\[8pt] - -= {} & \kappa_4(W)\kappa_1(Y)+4\kappa_3(W)\kappa_1(W)\kappa_2(Y) - -+3\kappa_2(W)^2 \kappa_2(Y) \\ - -& {}+6\kappa_2(W) \kappa_1(W)^2 \kappa_3(Y)+\kappa_1(W)^4 \kappa_4(Y) \\[8pt] - -= {} & \kappa_4(W)\lambda + 4\kappa_3(W)\kappa_1(W)\lambda + 3\kappa_2(W)^2\lambda+6\kappa_2(W) \kappa_1(W)^2 \lambda + \kappa_1(W)^4\lambda \\[8pt] - -= {} & \lambda \operatorname E(W^4) \qquad\text{(the punch line -- see the explanation below).} - -\end{align} - - - -We recognize the last sum as the sum over all partitions of the set { 1, 2, 3, 4 }, of the product over all blocks of the partition, of cumulants of W of order equal to the size of the block. That is precisely the 4th raw moment of W (see cumulant for a more leisurely discussion of this fact). Hence each cumulant of X is just λ times the corresponding raw moment of W. - -In this way we see that every moment sequence is also a cumulant sequence (the converse cannot be true, since cumulants of even order ≥ 4 are in some cases negative, and also because the cumulant sequence of the normal distribution is not a moment sequence of any probability distribution). - -Suppose Y = 1 with probability p and Y = 0 with probability q = 1 - p. Suppose the conditional probability distribution of X given Y is F if Y = 1 and G if Y = 0. Then we have - -\kappa_n(X)=p\kappa_n(F)+q\kappa_n(G)+\sum_{\pi<\widehat{1}} \kappa_{\left|\pi\right|}(Y)\prod_{B\in\pi} - -(\kappa_{\left|B\right|}(F)-\kappa_{\left|B\right|}(G)) - -where $\pi<\widehat{1}$ means π is a partition of the set { 1, ..., n } that is finer than the coarsest partition; the sum is over all partitions except that one. For example, if n = 3, then we have -$$ -\kappa_3(X)=p\kappa_3(F)+q\kappa_3(G)+3pq(\kappa_2(F)-\kappa_2(G))(\kappa_1(F)-\kappa_1(G))+pq(q-p)(\kappa_1(F)-\kappa_1(G))^3. -$$ diff --git a/wiki/wikipedia/428.txt b/wiki/wikipedia/428.txt deleted file mode 100644 index f8f5318645223fdd96776550e4a0d47d95f4f999..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/428.txt +++ /dev/null @@ -1,107 +0,0 @@ -In the mathematical discipline of graph theory, Menger's theorem says that in a finite graph, the size of a minimum cut set is equal to the maximum number of disjoint paths that can be found between any pair of vertices. - -Proved by Karl Menger in 1927, it characterizes the connectivity of a graph. - -It is generalized by the max-flow min-cut theorem, which is a weighted, edge version, and which in turn is a special case of the strong duality theorem for linear programs. - -The edge-connectivity version of Menger's theorem is as follows: - -Let G be a finite undirected graph and x and y two distinct vertices. Then the size of the minimum edge cut for x and y (the minimum number of edges whose removal disconnects x and y) is equal to the maximum number of pairwise edge-independent paths from x to y. - -Extended to all pairs: a graph is k-edge-connected (it remains connected after removing fewer than k edges) if and only if every pair of vertices has k edge-disjoint paths in between.
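The edge version is easy to check computationally through its max-flow formulation: with unit capacity on every edge, the maximum flow from x to y equals both the maximum number of pairwise edge-disjoint x-y paths and the size of a minimum edge cut. The following self-contained Python sketch is ours for illustration (the function name and the small example graph are assumptions, not from the article); it runs Edmonds-Karp on unit capacities.

```python
from collections import defaultdict, deque

def max_edge_disjoint_paths(edges, x, y):
    """Count edge-disjoint x-y paths in an undirected graph via unit-capacity max flow."""
    cap = defaultdict(int)              # residual capacities
    adj = defaultdict(set)
    for u, v in edges:                  # each undirected edge becomes two unit arcs
        cap[(u, v)] += 1
        cap[(v, u)] += 1
        adj[u].add(v)
        adj[v].add(u)
    flow = 0
    while True:                         # Edmonds-Karp: BFS for a shortest augmenting path
        parent, queue = {x: None}, deque([x])
        while queue and y not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if y not in parent:
            return flow                 # no augmenting path left: max flow reached
        v = y
        while parent[v] is not None:    # push one unit of flow back along the path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

# a 4-cycle plus one diagonal: the minimum edge cut around vertex 0 has size 3
print(max_edge_disjoint_paths([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)], 0, 2))  # 3
```

For the example graph, the three edge-disjoint paths 0-2, 0-1-2, and 0-3-2 meet the degree-3 cut around vertex 0, so the function returns 3, matching the theorem.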
- -The vertex-connectivity statement of Menger's theorem is as follows: - -Let G be a finite undirected graph and x and y two nonadjacent vertices. Then the size of the minimum vertex cut for x and y (the minimum number of vertices, distinct from x and y, whose removal disconnects x and y) is equal to the maximum number of pairwise internally vertex-disjoint paths from x to y. - -Extended to all pairs: a graph is k-vertex-connected (it has more than k vertices and it remains connected after removing fewer than k vertices) if and only if every pair of vertices has at least k internally vertex-disjoint paths in between. - -All these statements (in both edge and vertex versions) remain true in directed graphs (when considering directed paths). - -Most direct proofs consider a more general statement to allow proving it by induction. It is also convenient to use definitions that include some degenerate cases. - -The following proof for undirected graphs works without change for directed graphs or multi-graphs, provided we take path to mean directed path. - -For sets of vertices A,B ⊂ G (not necessarily disjoint), an AB-path is a path in G with a starting vertex in A, a final vertex in B, and no internal vertices in A or B. We allow a path with a single vertex in A ∩ B and zero edges. - -An AB-separator of size k is a set S of k vertices (which may intersect A and B) such that G−S contains no AB-path. - -An AB-connector of size k is a union of k vertex-disjoint AB-paths. - -Theorem: The minimum size of an AB-separator is equal to the maximum size of an AB-connector. - -In other words, if no k−1 vertices disconnect A from B, then there exist k disjoint paths from A to B. - -This variant implies the above vertex-connectivity statement: for x,y ∈ G in the previous section, apply the current theorem to G−{x,y} with A = N(x), B = N(y), the neighboring vertices of x,y. Then a set of vertices disconnecting x and y is the same thing as an - -AB-separator, and removing the end vertices in a set of independent xy-paths gives an AB-connector. - -Proof of the Theorem: - -Induction on the number of edges in G. - -For G with no edges, the minimum AB-separator is A ∩ B, - -which is itself an AB-connector consisting of single-vertex paths. - -For G having an edge e, we may assume by induction that the Theorem holds for G−e. If G−e has a minimal AB-separator of size k, then there is an AB-connector of size k in G−e, and hence in G. - -Otherwise, let S be an AB-separator of G−e of size less than k, - -so that every AB-path in G contains a vertex of S or the edge e. - -The size of S must be k-1, since if it were less, S together with either endpoint of e would be a better AB-separator of G. - -In G−S there is an AB-path through e, since S alone is too small to be an AB-separator of G. - -Let v1 be the earlier and v2 be the later vertex of e on such a path. - -Then v1 is reachable from A but not from B in G−S−e, while v2 is reachable from B but not from A. - -Now, let S1 = S ∪ {v1}, and consider a minimum AS1-separator T in G−e. - -Since v2 is not reachable from A in G−S1, T is also an AS1-separator in G. - -Then T is also an AB-separator in G (because every AB-path intersects S1). - -Hence it has size at least k. - -By induction, G−e contains an AS1-connector C1 of size k. - -Because of its size, the endpoints of the paths in it must be exactly S1. - -Similarly, letting S2 = S ∪ {v2}, a minimum S2B-separator has size k, and there is - -an S2B-connector C2 of size k, with paths whose starting points are exactly S2.
- -Furthermore, since S1 disconnects G, every path in C1 is internally disjoint from - -every path in C2, and we can define an AB-connector of size k in G by concatenating paths (k−1 paths through S and one path going through e=v1v2). Q.E.D. - -The directed edge version of the theorem easily implies the other versions. - -To infer the directed graph vertex version, it suffices to split each vertex v into two vertices v1, v2, with all ingoing edges going to v1, all outgoing edges going from v2, and an additional edge from v1 to v2. - -The directed versions of the theorem immediately imply undirected versions: it suffices to replace each edge of an undirected graph with a pair of directed edges (a digon). - -The directed edge version in turn follows from its weighted variant, the max-flow min-cut theorem. - -Its proofs are often correctness proofs for max flow algorithms. - -It is also a special case of the still more general (strong) duality theorem for linear programs. - -A formulation that for finite digraphs is equivalent to the above formulation is: - -Let A and B be sets of vertices in a finite digraph G. Then there exists a family P of disjoint AB-paths and an AB-separating set that consists of exactly one vertex from each path in P. - -In this version the theorem follows fairly easily from König's theorem: in a bipartite graph, the minimal size of a cover is equal to the maximal size of a matching. - -This is done as follows: replace every vertex v in the original digraph D by two vertices v', v'', and every edge uv by the edge u'v''. This results in a bipartite graph, whose one side consists of the vertices v', and the other of the vertices v''. - -Applying König's theorem we obtain a matching M and a cover C of the same size. In particular, exactly one endpoint of each edge of M is in C. Add to C all vertices a'', for a in A, and all vertices b', for b in B. Let P be the set of all AB-paths composed of edges uv in D such that u'v'' belongs to M. Let Q in the original graph consist of all vertices v such that both v' and v'' belong to C. It is straightforward to check that Q is an AB-separating set, that every path in the family P contains precisely one vertex from Q, and every vertex in Q lies on a path from P, as desired. - -Menger's theorem holds for infinite graphs, and in that context it applies to the minimum cut between any two elements that are either vertices or ends of the graph. The following result of Ron Aharoni and Eli Berger was originally a conjecture proposed by Paul Erdős, and before being proved was known as the Erdős–Menger conjecture. - -It is equivalent to Menger's theorem when the graph is finite. - -Let A and B be sets of vertices in a (possibly infinite) digraph G. Then there exists a family P of disjoint A-B-paths and a separating set which consists of exactly one vertex from each path in P. diff --git a/wiki/wikipedia/429.txt b/wiki/wikipedia/429.txt deleted file mode 100644 index 728e0b4f3ba5308616b06e3eda1e88c7c0379bfd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/429.txt +++ /dev/null @@ -1,13 +0,0 @@ -WikiNodes is an app for the Apple iPad built by IDEA.org. WikiNodes was the first tablet app for browsing Wikipedia using a radial tree approach to visualize how articles and subsections of articles are interrelated. The app displays related items (articles or sections of an article), which spread across the screen as a spiderweb of icons.
- -The app uses the SpicyNodes visualization technique, which was awarded a "best for teaching and learning" award in 2011 from the American Association of School Librarians (AASL), and voted one of #edchat's 35 Best Web 2.0 Classroom Tools in 2010. - -The user interface is based on two display modes: - -* Page view – displays Wikipedia articles in long form, similar to how they appear on the main Wikipedia web site. - -* Node view – divides Wikipedia articles into sections, and links articles to related articles, similar to mind mapping. The user can drag nodes and tap any node to display it in detail, with a scrollable panel for reading the contents of the section. This provides a visual way to see the relationships between articles. - -As of June 2011, the app supports the 36 largest Wikipedia language editions (by number of articles). - -The app was highlighted as a "Staff pick" by Apple's U.S. App Store, Week of May 28, 2011; as "New and Noteworthy" by Apple's U.S. App Store, Week of May 5, 2011; and at other times by Apple's app stores for non-US countries. It has been favorably covered by several bloggers, including those in the references below. diff --git a/wiki/wikipedia/43.txt b/wiki/wikipedia/43.txt deleted file mode 100644 index 2277abf84c3243ef9e792fb8a5f01c29c2666ebe..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/43.txt +++ /dev/null @@ -1,46 +0,0 @@ -In mathematical analysis, the rising sun lemma is a lemma due to Frigyes Riesz, used in the proof of the Hardy–Littlewood maximal theorem. The lemma was a precursor in one dimension of the Calderón–Zygmund lemma. - -The lemma is stated as follows: - -Suppose g is a real-valued continuous function on the interval [a,b] and S is the set of x in [a,b] such that there exists a y∈(x,b] with g(y) > g(x). (Note that b cannot be in S, though a may be.) Define E = S ∩ (a,b). - -Then E is an open set, and it may be written as a countable union of disjoint intervals -$$ -E=\bigcup_k (a_k,b_k) -$$ - -such that g(ak) = g(bk), unless ak = a ∈ S for some k, in which case g(a) < g(bk) for that one k. Furthermore, if x ∈ (ak,bk), then g(x) < g(bk). - -The colorful name of the lemma comes from imagining the graph of the function g as a mountainous landscape, - -with the sun shining horizontally from the right. The set E consists of points that are in the shadow. - -We need a lemma: Suppose [c,d) ⊂ S, but d ∉ S. Then g(c) < g(d). - -To prove this, suppose g(c) ≥ g(d). - -Then g achieves its maximum on [c,d] at some point z < d. - -Since z ∈ S, there is a y in (z,b] with g(z) < g(y). - -If y ≤ d, then g would not reach its maximum on [c,d] at z. - -Thus, y ∈ (d,b], and g(d) ≤ g(z) < g(y). - -This means that d ∈ S, which is a contradiction, thus establishing the lemma. - -The set E is open, so it is composed of a countable union of disjoint intervals (ak,bk). - -It follows immediately from the lemma that g(x) < g(bk) for x in - -(ak,bk). - -Since g is continuous, we must also have g(ak) ≤ g(bk). - -If ak ≠ a or a ∉ S, then ak ∉ S, - -so g(ak) ≥ g(bk), for otherwise ak ∈ S. - -Thus, g(ak) = g(bk) in these cases. - -Finally, if ak = a ∈ S, the lemma tells us that g(a) < g(bk).
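The shadow description of the set E lends itself to a direct numerical illustration. The sketch below is ours, not from the lemma's sources (the grid resolution and the sample function are arbitrary choices): a grid point is shadowed exactly when g there is strictly below the running maximum of g taken from the right, and the shadowed points are grouped into approximate intervals (a_k, b_k); up to discretization error, g(a_k) ≈ g(b_k), as the lemma asserts.

```python
import math

def shadow_intervals(g, a, b, n=100_000):
    """Approximate the open components (a_k, b_k) of the 'shadow' set E on a grid."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    ys = [g(x) for x in xs]
    suffix_max = ys[:]                    # max of g on [x, b], scanned from the right
    for i in range(n - 1, -1, -1):
        suffix_max[i] = max(suffix_max[i], suffix_max[i + 1])
    intervals, start = [], None
    for x, y, m in zip(xs, ys, suffix_max):
        shadowed = y < m                  # some point to the right of x is strictly higher
        if shadowed and start is None:
            start = x
        elif not shadowed and start is not None:
            intervals.append((start, x))
            start = None
    if start is not None:
        intervals.append((start, b))
    return intervals

# a 'mountainous landscape' lit from the right: the shadows fill the dips
g = lambda x: math.sin(x) + 0.3 * x
for ak, bk in shadow_intervals(g, 0.0, 12.0):
    print(f"({ak:.3f}, {bk:.3f})  g(a_k)={g(ak):.3f}  g(b_k)={g(bk):.3f}")
```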
diff --git a/wiki/wikipedia/430.txt b/wiki/wikipedia/430.txt deleted file mode 100644 index 6b41beb63a6ea98bc0f6c2935f2ccaecb1057136..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/430.txt +++ /dev/null @@ -1,59 +0,0 @@ -In computational complexity theory, NP-hardness (non-deterministic polynomial-time hardness) is the defining property of a class of problems that are informally "at least as hard as the hardest problems in NP". A simple example of an NP-hard problem is the subset sum problem. - -A more precise specification is: a problem H is NP-hard when every problem L in NP can be reduced in polynomial time to H; that is, assuming a solution for H takes 1 unit time, H's solution can be used to solve L in polynomial time. As a consequence, finding a polynomial time algorithm to solve any NP-hard problem would give polynomial time algorithms for all the problems in NP. As it is suspected that P $\neq$ NP, it is unlikely that such an algorithm exists. - -A common misconception is that the NP in "NP-hard" stands for "non-polynomial" when in fact it stands for "non-deterministic polynomial acceptable problems". It is suspected that there are no polynomial-time algorithms for NP-hard problems, but that has not been proven. Moreover, the class P, in which all problems can be solved in polynomial time, is contained in the NP class. - -A decision problem H is NP-hard when for every problem L in NP, there is a polynomial-time many-one reduction from L to H. - -An equivalent definition is to require that every problem L in NP can be solved in polynomial time by an oracle machine with an oracle for H. Informally, an algorithm can be thought of that calls such an oracle machine as a subroutine for solving H and solves L in polynomial time if the subroutine call takes only one step to compute. - -Another definition is to require that there be a polynomial-time reduction from an NP-complete problem G to H. As any problem L in NP reduces in polynomial time to G, L reduces in turn to H in polynomial time so this new definition implies the previous one. Awkwardly, it does not restrict the class NP-hard to decision problems, and it also includes search problems or optimization problems. - -If P ≠ NP, then NP-hard problems could not be solved in polynomial time. - -Some NP-hard optimization problems can be polynomial-time approximated up to some constant approximation ratio (in particular, those in APX) or even up to any approximation ratio (those in PTAS or FPTAS). - -An example of an NP-hard problem is the decision subset sum problem: given a set of integers, does any non-empty subset of them add up to zero? (A brute-force search for such a subset is sketched after this article.) That is a decision problem and happens to be NP-complete. Another example of an NP-hard problem is the optimization problem of finding the least-cost cyclic route through all nodes of a weighted graph. This is commonly known as the travelling salesman problem. - -There are decision problems that are NP-hard but not NP-complete such as the halting problem. That is the problem which asks "given a program and its input, will it run forever?" That is a yes/no question and so is a decision problem. It is easy to prove that the halting problem is NP-hard but not NP-complete. For example, the Boolean satisfiability problem can be reduced to the halting problem by transforming it to the description of a Turing machine that tries all truth value assignments and when it finds one that satisfies the formula it halts and otherwise it goes into an infinite loop.
It is also easy to see that the halting problem is not in NP since all problems in NP are decidable in a finite number of operations, but the halting problem, in general, is undecidable. There are also NP-hard problems that are neither NP-complete nor undecidable. For instance, the language of true quantified Boolean formulas is decidable in polynomial space, but not in non-deterministic polynomial time (unless NP = PSPACE). - -NP-hard problems do not have to be elements of the complexity class NP. - -As NP plays a central role in computational complexity, it is used as the basis of several classes: - -;NP: Class of computational decision problems for which any given yes-solution can be verified as a solution in polynomial time by a deterministic Turing machine (or solvable by a non-deterministic Turing machine in polynomial time). - -;NP-hard: Class of problems which are at least as hard as the hardest problems in NP. Problems that are NP-hard do not have to be elements of NP; indeed, they may not even be decidable. - -;NP-complete: Class of decision problems which contains the hardest problems in NP. Each NP-complete problem has to be in NP. - -;NP-easy: At most as hard as NP, but not necessarily in NP. - -;NP-equivalent: Decision problems that are both NP-hard and NP-easy, but not necessarily in NP. - -;NP-intermediate: If P and NP are different, then there exist decision problems in the region of NP that fall between P and the NP-complete problems. (If P and NP are the same class, then NP-intermediate problems do not exist because in this case every NP-complete problem would fall in P, and by definition, every problem in NP can be reduced to an NP-complete problem.) - -NP-hard problems are often tackled with rules-based languages in areas including: - -* Approximate computing - -* Configuration - -* Cryptography - -* Data mining - -* Decision support - -* Phylogenetics - -* Planning - -* Process monitoring and control - -* Rosters or schedules - -* Routing/vehicle routing - -* Scheduling
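To illustrate the decision version of subset sum quoted in this article, here is a brute-force Python sketch of ours (the helper name and sample inputs are illustrative assumptions). It enumerates all 2^n − 1 non-empty subsets, which is exactly the exponential behavior that makes the problem's NP-hardness bite, even though any claimed solution can be verified in linear time.

```python
from itertools import combinations

def has_zero_subset(nums):
    """Decision subset sum: is there a non-empty subset summing to zero?"""
    for r in range(1, len(nums) + 1):          # all 2^n - 1 non-empty subsets
        for subset in combinations(nums, r):
            if sum(subset) == 0:
                return True, subset            # a certificate, checkable in O(n)
    return False, None

print(has_zero_subset([3, -9, 8, 4, 5, -2]))   # (True, (-9, 4, 5))
print(has_zero_subset([1, 2, 3]))              # (False, None)
```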
, a distance between two particles), one can characterize every point along the reaction coordinate path by a parameter $\lambda$, such that $\lambda = 0$ and $\lambda = 1$ correspond to two ensembles of microstates for which the reaction coordinate is constrained to different values. A dynamical process where $\lambda$ is externally driven from zero to one, according to an arbitrary time scheduling, will be referred to as the forward transformation, while the time reversal path will be indicated as the backward transformation. Given these definitions, the CFT sets a relation between the following five quantities:
-
-* $P(A \rightarrow B)$, i.e. the joint probability of taking a microstate $A$ from the canonical ensemble corresponding to $\lambda = 0$ and of performing the forward transformation to the microstate $B$ corresponding to $\lambda = 1$;
-
-* $P(A \leftarrow B)$, i.e. the joint probability of taking the microstate $B$ from the canonical ensemble corresponding to $\lambda = 1$ and of performing the backward transformation to the microstate $A$ corresponding to $\lambda = 0$;
-
-* $\beta = (k_B T)^{-1}$, where $k_B$ is the Boltzmann constant and $T$ the temperature of the reservoir;
-
-* $W_{A \rightarrow B}$, i.e. the work done on the system during the forward transformation (from $A$ to $B$);
-
-* $\Delta F = F(B) - F(A)$, i.e. the Helmholtz free energy difference between the states $A$ and $B$, represented by the canonical distribution of microstates having $\lambda = 0$ and $\lambda = 1$, respectively.
-
-The CFT equation reads as follows:
-$$
-\frac{P(A \rightarrow B)}{P( A \leftarrow B)} = \exp [ \beta ( W_{A \rightarrow B} - \Delta F)].
-$$
-
-In the previous equation the difference $W_{A \rightarrow B} - \Delta F$ corresponds to the work dissipated in the forward transformation, $W_d$. The probabilities $P(A \rightarrow B)$ and $P(A \leftarrow B)$ become identical when the transformation is performed at infinitely slow speed, i.e. for equilibrium transformations. In such cases, $W_{A \rightarrow B} = \Delta F $ and $W_d = 0.$
-
-Using the time reversal relation $W_{A \rightarrow B} = -W_{A \leftarrow B}$, and grouping together all the trajectories yielding the same work (in the forward and backward transformation), i.e. determining the probability distribution (or density) $P_{A\rightarrow B}(W)$ of an amount of work $W$ being exerted by a random system trajectory from $A$ to $B$, we can write the above equation in terms of the work distribution functions as follows
-$$
-P_{A \rightarrow B} (W) = P_{A\leftarrow B}(- W) ~ \exp[\beta (W - \Delta F)].
-$$
-
-Note that for the backward transformation, the work distribution function must be evaluated by taking the work with the opposite sign. The two work distributions for the forward and backward processes cross at $ W=\Delta F $. This phenomenon has been experimentally verified using optical tweezers for the process of unfolding and refolding of a small RNA hairpin and an RNA three-helix junction.
-
-The CFT implies the Jarzynski equality.
diff --git a/wiki/wikipedia/432.txt b/wiki/wikipedia/432.txt
deleted file mode 100644
index 481d3bb033facc1bb8b69ad9d79fc419638790a2..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/432.txt
+++ /dev/null
@@ -1,523 +0,0 @@
-In mathematics, the Pythagorean theorem, or Pythagoras' theorem, is a fundamental relation in Euclidean geometry among the three sides of a right triangle.
It states that the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares on the other two sides. This theorem can be written as an equation relating the lengths of the sides a, b and c, often called the Pythagorean equation:
-$$
-a^2 + b^2 = c^2 ,
-$$
-
-where c represents the length of the hypotenuse and a and b the lengths of the triangle's other two sides. The theorem, whose history is the subject of much debate, is named for the Greek philosopher Pythagoras, born around 570 BC.
-
-The theorem has been proven numerous times by many different methods – possibly the most for any mathematical theorem. The proofs are diverse, including both geometric proofs and algebraic proofs, with some dating back thousands of years.
-
-The theorem can be generalized in various ways: to higher-dimensional spaces, to spaces that are not Euclidean, to objects that are not right triangles, and to objects that are not triangles at all but n-dimensional solids. The Pythagorean theorem has attracted interest outside mathematics as a symbol of mathematical abstruseness, mystique, or intellectual power; popular references in literature, plays, musicals, songs, stamps, and cartoons abound.
-
-[Figure: the rearrangement proof, shown as an animation.] The two large squares shown in the figure each contain four identical triangles, and the only difference between the two large squares is that the triangles are arranged differently. Therefore, the white space within each of the two large squares must have equal area. Equating the area of the white space yields the Pythagorean theorem, Q.E.D.
-
-Heath gives this proof in his commentary on Proposition I.47 in Euclid's Elements, and mentions the proposals of Bretschneider and Hankel that Pythagoras may have known this proof. Heath himself favors a different proposal for a Pythagorean proof, but acknowledges from the outset of his discussion "that the Greek literature which we possess belonging to the first five centuries after Pythagoras contains no statement specifying this or any other particular great geometric discovery to him." Recent scholarship has cast increasing doubt on any sort of role for Pythagoras as a creator of mathematics, although debate about this continues.
-
-If c denotes the length of the hypotenuse and a and b denote the lengths of the other two sides, the Pythagorean theorem can be expressed as the Pythagorean equation:
-$$
-a^2 + b^2 = c^2 .
-$$
-
-If the lengths of both a and b are known, then c can be calculated as
-$$
- c = \sqrt{a^2 + b^2}.
-$$
-
-If the length of the hypotenuse c and of one side (a or b) are known, then the length of the other side can be calculated as
-$$
-a = \sqrt{c^2 - b^2}
-$$
-
-or
-$$
-b = \sqrt{c^2 - a^2}.
-$$
-
-The Pythagorean equation relates the sides of a right triangle in a simple way, so that if the lengths of any two sides are known the length of the third side can be found. Another corollary of the theorem is that in any right triangle, the hypotenuse is greater than any one of the other sides, but less than their sum.
-
-A generalization of this theorem is the law of cosines, which allows the computation of the length of any side of any triangle, given the lengths of the other two sides and the angle between them. If the angle between the other sides is a right angle, the law of cosines reduces to the Pythagorean equation.
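The side-length formulas above translate directly into code. A minimal Python sketch (our illustration, not part of the article):

```python
import math

def hypotenuse(a, b):
    """Hypotenuse from the two legs: c = sqrt(a^2 + b^2)."""
    return math.hypot(a, b)

def missing_leg(c, a):
    """The other leg from the hypotenuse c and one leg a."""
    if c <= a:
        raise ValueError("the hypotenuse must be the longest side")
    return math.sqrt(c**2 - a**2)

print(hypotenuse(3, 4))   # 5.0
print(missing_leg(5, 3))  # 4.0
```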
- -This theorem may have more known proofs than any other (the law of quadratic reciprocity being another contender for that distinction); the book The Pythagorean Proposition contains 370 proofs. - -This proof is based on the proportionality of the sides of two similar triangles, that is, upon the fact that the ratio of any two corresponding sides of similar triangles is the same regardless of the size of the triangles. - -Let ABC represent a right triangle, with the right angle located at C, as shown on the figure. Draw the altitude from point C, and call H its intersection with the side AB. Point H divides the length of the hypotenuse c into parts d and e. The new triangle ACH is similar to triangle ABC, because they both have a right angle (by definition of the altitude), and they share the angle at A, meaning that the third angle will be the same in both triangles as well, marked as θ in the figure. By a similar reasoning, the triangle CBH is also similar to ABC. The proof of similarity of the triangles requires the triangle postulate: The sum of the angles in a triangle is two right angles, and is equivalent to the parallel postulate. Similarity of the triangles leads to the equality of ratios of corresponding sides: -$$ - \frac{BC}{AB}=\frac{BH}{BC} \text{ and } \frac{AC}{AB}=\frac{AH}{AC}. -$$ - -The first result equates the cosines of the angles θ, whereas the second result equates their sines. - -These ratios can be written as -$$ -BC^2 = AB \times BH \text{ and } AC^2=AB \times AH. -$$ - -Summing these two equalities results in -$$ -BC^2+AC^2=AB\times BH+AB\times AH=AB\times(AH+BH)=AB^2 , -$$ - -which, after simplification, expresses the Pythagorean theorem: -$$ -BC^2+AC^2=AB^2 \ . -$$ - -The role of this proof in history is the subject of much speculation. The underlying question is why Euclid did not use this proof, but invented another. One conjecture is that the proof by similar triangles involved a theory of proportions, a topic not discussed until later in the Elements, and that the theory of proportions needed further development at that time. - -In outline, here is how the proof in Euclid's Elements proceeds. The large square is divided into a left and right rectangle. A triangle is constructed that has half the area of the left rectangle. Then another triangle is constructed that has half the area of the square on the left-most side. These two triangles are shown to be congruent, proving this square has the same area as the left rectangle. This argument is followed by a similar version for the right rectangle and the remaining square. Putting the two rectangles together to reform the square on the hypotenuse, its area is the same as the sum of the area of the other two squares. The details follow. - -Let A, B, C be the vertices of a right triangle, with a right angle at A. Drop a perpendicular from A to the side opposite the hypotenuse in the square on the hypotenuse. That line divides the square on the hypotenuse into two rectangles, each having the same area as one of the two squares on the legs. - -For the formal proof, we require four elementary lemmata: - -# If two triangles have two sides of the one equal to two sides of the other, each to each, and the angles included by those sides equal, then the triangles are congruent (side-angle-side). - -# The area of a triangle is half the area of any parallelogram on the same base and having the same altitude. - -# The area of a rectangle is equal to the product of two adjacent sides. 
-
-# The area of a square is equal to the product of two of its sides (follows from 3).
-
-Next, each top square is related to a triangle congruent with another triangle related in turn to one of two rectangles making up the lower square.
-
-The proof is as follows:
-
-#Let ACB be a right-angled triangle with right angle CAB.
-
-#On each of the sides BC, AB, and CA, squares are drawn, CBDE, BAGF, and ACIH, in that order. The construction of squares requires the immediately preceding theorems in Euclid, and depends upon the parallel postulate.
-
-#From A, draw a line parallel to BD and CE. It will perpendicularly intersect BC and DE at K and L, respectively.
-
-#Join CF and AD, to form the triangles BCF and BDA.
-
-#Angles CAB and BAG are both right angles; therefore C, A, and G are collinear.
-
-#Angles CBD and FBA are both right angles; therefore angle ABD equals angle FBC, since both are the sum of a right angle and angle ABC.
-
-#Since AB is equal to FB, BD is equal to BC and angle ABD equals angle FBC, triangle ABD must be congruent to triangle FBC.
-
-#Since A-K-L is a straight line, parallel to BD, then rectangle BDLK has twice the area of triangle ABD because they share the base BD and have the same altitude BK, i.e., a line normal to their common base, connecting the parallel lines BD and AL. (lemma 2)
-
-#Since C is collinear with A and G, and this line is parallel to FB, then square BAGF must be twice in area to triangle FBC.
-
-#Therefore, rectangle BDLK must have the same area as square BAGF = $AB^2$.
-
-#By applying steps 3 to 10 to the other side of the figure, it can be similarly shown that rectangle CKLE must have the same area as square ACIH = $AC^2$.
-
-#Adding these two results, $AB^2 + AC^2 = BD \times BK + KL \times KC$.
-
-#Since BD = KL, $BD \times BK + KL \times KC = BD(BK + KC) = BD \times BC$.
-
-#Therefore, $AB^2 + AC^2 = BC^2$, since CBDE is a square.
-
-This proof, which appears in Euclid's Elements as that of Proposition 47 in Book 1, demonstrates that the area of the square on the hypotenuse is the sum of the areas of the other two squares.
-
-This is quite distinct from the proof by similarity of triangles, which is conjectured to be the proof that Pythagoras used.
-
-We have already discussed the Pythagorean proof, which was a proof by rearrangement. The same idea is conveyed by the leftmost animation below, which consists of a large square, side a + b, containing four identical right triangles. The triangles are shown in two arrangements, the first of which leaves two squares $a^2$ and $b^2$ uncovered, the second of which leaves square $c^2$ uncovered. The area encompassed by the outer square never changes, and the area of the four triangles is the same at the beginning and the end, so the black square areas must be equal, therefore $a^2 + b^2 = c^2$.
-
-A second proof by rearrangement is given by the middle animation. A large square is formed with area $c^2$, from four identical right triangles with sides a, b and c, fitted around a small central square. Then two rectangles are formed with sides a and b by moving the triangles. Combining the smaller square with these rectangles produces two squares of areas $a^2$ and $b^2$, which must have the same area as the initial large square.
-
-The third, rightmost image also gives a proof. The upper two squares are divided as shown by the blue and green shading, into pieces that when rearranged can be made to fit in the lower square on the hypotenuse – or conversely the large square can be divided as shown into pieces that fill the other two.
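The bookkeeping behind these rearrangement arguments can be checked symbolically. In the sketch below (ours, using sympy), the white area left when the four right triangles are removed from the outer square of side a + b equals a^2 + b^2 in the first arrangement; since the second arrangement leaves a single uncovered square of side c, the two expressions must agree:

```python
from sympy import symbols, expand

a, b = symbols('a b', positive=True)

# Outer square of side (a + b) minus four right triangles of area ab/2.
white_area = expand((a + b)**2 - 4 * (a * b / 2))
print(white_area)  # a**2 + b**2

# The second arrangement leaves one uncovered square of side c, so the
# same white area also equals c**2, forcing a**2 + b**2 == c**2.
```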
This way of cutting one figure into pieces and rearranging them to get another figure is called dissection. This shows the area of the large square equals that of the two smaller ones. - -Albert Einstein gave a proof by dissection in which the pieces need not get moved. Instead of using a square on the hypotenuse and two squares on the legs, one can use any other shape that includes the hypotenuse, and two similar shapes that each include one of two legs instead of the hypotenuse (see Similar figures on the three sides). In Einstein's proof, the shape that includes the hypotenuse is the right triangle itself. The dissection consists of dropping a perpendicular from the vertex of the right angle of the triangle to the hypotenuse, thus splitting the whole triangle into two parts. Those two parts have the same shape as the original right triangle, and have the legs of the original triangle as their hypotenuses, and the sum of their areas is that of the original triangle. Because the ratio of the area of a right triangle to the square of its hypotenuse is the same for similar triangles, the relationship between the areas of the three triangles holds for the squares of the sides of the large triangle as well. - -The theorem can be proved algebraically using four copies of a right triangle with sides a, b and c, arranged inside a square with side c as in the top half of the diagram. The triangles are similar with area $\tfrac12ab$, while the small square has side b − a and area (b − a)2. The area of the large square is therefore -$$ -(b-a)^2+4\frac{ab}{2} = (b-a)^2+2ab = b^2-2ab+a^2+2ab = a^2+b^2. -$$ - -But this is a square with side c and area c2, so -$$ -c^2 = a^2 + b^2. -$$ - -A similar proof uses four copies of the same triangle arranged symmetrically around a square with side c, as shown in the lower part of the diagram. This results in a larger square, with side a + b and area (a + b)2. The four triangles and the square side c must have the same area as the larger square, -$$ -(b+a)^2 = c^2 + 4\frac{ab}{2} = c^2+2ab, -$$ - -giving -$$ -c^2 = (b+a)^2 - 2ab = b^2+2ab+a^2-2ab = a^2 + b^2. -$$ - -A related proof was published by future U.S. President James A. Garfield (then a U.S. Representative) (see diagram). Instead of a square it uses a trapezoid, which can be constructed from the square in the second of the above proofs by bisecting along a diagonal of the inner square, to give the trapezoid as shown in the diagram. The area of the trapezoid can be calculated to be half the area of the square, that is -$$ -\frac{1}{2}(b+a)^2. -$$ - -The inner square is similarly halved, and there are only two triangles so the proof proceeds as above except for a factor of $\frac{1}{2}$, which is removed by multiplying by two to give the result. - -One can arrive at the Pythagorean theorem by studying how changes in a side produce a change in the hypotenuse and employing calculus. - -The triangle ABC is a right triangle, as shown in the upper part of the diagram, with BC the hypotenuse. At the same time the triangle lengths are measured as shown, with the hypotenuse of length y, the side AC of length x and the side AB of length a, as seen in the lower diagram part. - -If x is increased by a small amount dx by extending the side AC slightly to D, then y also increases by dy. These form two sides of a triangle, CDE, which (with E chosen so CE is perpendicular to the hypotenuse) is a right triangle approximately similar to ABC. 
Therefore, the ratios of their sides must be the same, that is: -$$ - \frac{dy}{dx}=\frac xy. -$$ - -This can be rewritten as $y dy=x dx$ , which is a differential equation that can be solved by direct integration: -$$ -\int y dy=\int x dx, -$$ - -giving -$$ -y^2=x^2+C. -$$ - -The constant can be deduced from x = 0, y = a to give the equation -$$ -y^2 = x^2 + a^2. -$$ - -This is more of an intuitive proof than a formal one: it can be made more rigorous if proper limits are used in place of dx and dy. - -The converse of the theorem is also true: - -
    For any three positive numbers a, b, and c such that $a^2 + b^2 = c^2$, there exists a triangle with sides a, b and c, and every such triangle has a right angle between the sides of lengths a and b.
    - -An alternative statement is: - -
    For any triangle with sides a, b, c, if $a^2 + b^2 = c^2$, then the angle between a and b measures 90°.
-
-This converse also appears in Euclid's Elements (Book I, Proposition 48):
-
-It can be proven using the law of cosines or as follows:
-
-Let ABC be a triangle with side lengths a, b, and c, with $a^2 + b^2 = c^2$. Construct a second triangle with sides of length a and b containing a right angle. By the Pythagorean theorem, it follows that the hypotenuse of this triangle has length $c = \sqrt{a^2 + b^2}$, the same as the hypotenuse of the first triangle. Since both triangles' sides are the same lengths a, b and c, the triangles are congruent and must have the same angles. Therefore, the angle between the sides of lengths a and b in the original triangle is a right angle.
-
-The above proof of the converse makes use of the Pythagorean theorem itself. The converse can also be proven without assuming the Pythagorean theorem.
-
-A corollary of the Pythagorean theorem's converse is a simple means of determining whether a triangle is right, obtuse, or acute, as follows. Let c be chosen to be the longest of the three sides and a + b > c (otherwise there is no triangle according to the triangle inequality). The following statements apply:
-
-* If $a^2 + b^2 = c^2$, then the triangle is right.
-
-* If $a^2 + b^2 > c^2$, then the triangle is acute.
-
-* If $a^2 + b^2 < c^2$, then the triangle is obtuse.
-
-Edsger W. Dijkstra has stated this proposition about acute, right, and obtuse triangles in this language:
-
-sgn(α + β − γ) = sgn(a^2 + b^2 − c^2),
-
-where α is the angle opposite to side a, β is the angle opposite to side b, γ is the angle opposite to side c, and sgn is the sign function.
-
-A Pythagorean triple has three positive integers a, b, and c, such that $a^2 + b^2 = c^2$. In other words, a Pythagorean triple represents the lengths of the sides of a right triangle where all three sides have integer lengths. Each triangle has a side (labeled "1") that is the chosen unit for measurement. In each right triangle, Pythagoras' theorem establishes the length of the hypotenuse in terms of this unit. If a hypotenuse is related to the unit by the square root of a positive integer that is not a perfect square, it is a realization of a length incommensurable with the unit, such as $\sqrt{2}$, $\sqrt{3}$, $\sqrt{5}$. For more detail, see Quadratic irrational.
-
-Incommensurable lengths conflicted with the Pythagorean school's concept of numbers as only whole numbers. The Pythagorean school dealt with proportions by comparison of integer multiples of a common subunit. According to one legend, Hippasus of Metapontum (ca. 470 B.C.) was drowned at sea for making known the existence of the irrational or incommensurable.
-
-For any complex number
-$$
-z = x + iy,
-$$
-
-the absolute value or modulus is given by
-$$
-r = |z|=\sqrt{x^2 + y^2}.
-$$
-
-So the three quantities, r, x and y are related by the Pythagorean equation,
-$$
-r^2 = x^2 + y^2.
-$$
-
-Note that r is defined to be a positive number or zero but x and y can be negative as well as positive. Geometrically r is the distance of z from zero or the origin O in the complex plane.
-
-This can be generalised to find the distance between two points, z1 and z2 say. The required distance is given by
-$$
-|z_1 - z_2|=\sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2},
-$$
-
-so again they are related by a version of the Pythagorean equation,
-$$
-|z_1 - z_2|^2 = (x_1 - x_2)^2 + (y_1 - y_2)^2.
-$$
-
-The distance formula in Cartesian coordinates is derived from the Pythagorean theorem.
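Dijkstra's sign formulation amounts to a one-line classifier. A small Python sketch of the corollary above (our example; the sides are sorted so that c is the longest):

```python
def classify_triangle(a, b, c):
    """Classify a triangle by comparing a^2 + b^2 with c^2,
    where c is taken to be the longest side."""
    a, b, c = sorted((a, b, c))
    if a + b <= c:
        raise ValueError("violates the triangle inequality")
    d = a**2 + b**2 - c**2
    return "right" if d == 0 else ("acute" if d > 0 else "obtuse")

print(classify_triangle(3, 4, 5))  # right
print(classify_triangle(2, 3, 4))  # obtuse
print(classify_triangle(4, 5, 6))  # acute
```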
If (x1, y1) and (x2, y2) are points in the plane, then the distance between them, also called the Euclidean distance, is given by -$$ - \sqrt{(x_1-x_2)^2 + (y_1-y_2)^2}. -$$ - -More generally, in Euclidean n-space, the Euclidean distance between two points, $A=(a_1,a_2,\dots,a_n)$ and $B=(b_1,b_2,\dots,b_n)$, is defined, by generalization of the Pythagorean theorem, as: -$$ -\sqrt{(a_1-b_1)^2 + (a_2-b_2)^2 + \cdots + (a_n-b_n)^2} = \sqrt{\sum_{i=1}^n (a_i-b_i)^2}. -$$ - -If instead of Euclidean distance, the square of this value (the squared Euclidean distance, or SED) is used, the resulting equation avoids square roots and is simply a sum of the SED of the coordinates: -$$ -(a_1-b_1)^2 + (a_2-b_2)^2 + \cdots + (a_n-b_n)^2 = \sum_{i=1}^n (a_i-b_i)^2. -$$ - -The squared form is a smooth, convex function of both points, and is widely used in optimization theory and statistics, forming the basis of least squares. - -If Cartesian coordinates are not used, for example, if polar coordinates are used in two dimensions or, in more general terms, if curvilinear coordinates are used, the formulas expressing the Euclidean distance are more complicated than the Pythagorean theorem, but can be derived from it. A typical example where the straight-line distance between two points is converted to curvilinear coordinates can be found in the applications of Legendre polynomials in physics. The formulas can be discovered by using Pythagoras' theorem with the equations relating the curvilinear coordinates to Cartesian coordinates. For example, the polar coordinates (r, θ) can be introduced as: -$$ - x = r \cos \theta, \ y = r \sin \theta. -$$ - -Then two points with locations (r1, θ1) and (r2, θ2) are separated by a distance s: -$$ -s^2 = (x_1 - x_2)^2 + (y_1-y_2)^2 = (r_1 \cos \theta_1 -r_2 \cos \theta_2 )^2 + (r_1 \sin \theta_1 -r_2 \sin \theta_2)^2. -$$ - -Performing the squares and combining terms, the Pythagorean formula for distance in Cartesian coordinates produces the separation in polar coordinates as: - -\begin{align}s^2 &= r_1^2 +r_2^2 -2 r_1 r_2 \left( \cos \theta_1 \cos \theta_2 +\sin \theta_1 \sin \theta_2 \right)\\ - -&= r_1^2 +r_2^2 -2 r_1 r_2 \cos \left( \theta_1 - \theta_2\right)\\ - -&=r_1^2 +r_2^2 -2 r_1 r_2 \cos \Delta \theta, \end{align} - -using the trigonometric product-to-sum formulas. This formula is the law of cosines, sometimes called the generalized Pythagorean theorem. From this result, for the case where the radii to the two locations are at right angles, the enclosed angle Δθ = pi/2, and the form corresponding to Pythagoras' theorem is regained: $s^2 = r_1^2 + r_2^2.$ The Pythagorean theorem, valid for right triangles, therefore is a special case of the more general law of cosines, valid for arbitrary triangles. - -In a right triangle with sides a, b and hypotenuse c, trigonometry determines the sine and cosine of the angle θ between side a and the hypotenuse as: -$$ -\sin \theta = \frac{b}{c}, \quad \cos \theta = \frac{a}{c}. -$$ - -From that it follows: -$$ - {\cos}^2 \theta + {\sin}^2 \theta = \frac{a^2 + b^2}{c^2} = 1, -$$ - -where the last step applies Pythagoras' theorem. This relation between sine and cosine is sometimes called the fundamental Pythagorean trigonometric identity. In similar triangles, the ratios of the sides are the same regardless of the size of the triangles, and depend upon the angles. Consequently, in the figure, the triangle with hypotenuse of unit size has opposite side of size sin θ and adjacent side of size cos θ in units of the hypotenuse. 
- -The Pythagorean theorem relates the cross product and dot product in a similar way: -$$ - \|\mathbf{a} \times \mathbf{b}\|^2 + (\mathbf{a} \cdot \mathbf{b})^2 = \|\mathbf{a}\|^2 \|\mathbf{b}\|^2. -$$ - -This can be seen from the definitions of the cross product and dot product, as - -\begin{align} \mathbf{a} \times \mathbf{b} &= ab \mathbf{n} \sin{\theta} \\ - -\mathbf{a} \cdot \mathbf{b} &= ab \cos{\theta}, \end{align} - -with n a unit vector normal to both a and b. The relationship follows from these definitions and the Pythagorean trigonometric identity. - -This can also be used to define the cross product. By rearranging the following equation is obtained -$$ - \|\mathbf{a} \times \mathbf{b}\|^2 = \|\mathbf{a}\|^2 \|\mathbf{b}\|^2 - (\mathbf{a} \cdot \mathbf{b})^2. -$$ - -This can be considered as a condition on the cross product and so part of its definition, for example in seven dimensions. - -A generalization of the Pythagorean theorem extending beyond the areas of squares on the three sides to similar figures was known by Hippocrates of Chios in the 5th century BC, and was included by Euclid in his Elements: - -
    If one erects similar figures (see Euclidean geometry) with corresponding sides on the sides of a right triangle, then the sum of the areas of the ones on the two smaller sides equals the area of the one on the larger side.
-
-This extension assumes that the sides of the original triangle are the corresponding sides of the three congruent figures (so the common ratios of sides between the similar figures are a:b:c). While Euclid's proof only applied to convex polygons, the theorem also applies to concave polygons and even to similar figures that have curved boundaries (but still with part of a figure's boundary being the side of the original triangle).
-
-The Pythagorean theorem is also a special case of the law of cosines, which states that for any triangle,
-$$
-a^2+b^2-2ab\cos{\theta}=c^2,
-$$
-
-where $\theta$ is the angle between sides $a$ and $b$.
-
-When $\theta$ is $\frac{\pi}{2}$ radians or 90°, then $\cos{\theta} = 0$, and the formula reduces to the usual Pythagorean theorem.
-
-[Figure: generalization of Pythagoras' theorem by Thābit ibn Qurra. Lower panel: reflection of triangle CAD (top) to form triangle DAC, similar to triangle ABC (top).]
-
-At any selected angle of a general triangle of sides a, b, c, inscribe an isosceles triangle such that the equal angles at its base θ are the same as the selected angle. Suppose the selected angle θ is opposite the side labeled c. Inscribing the isosceles triangle forms triangle CAD with angle θ opposite side b and with side r along c. A second triangle is formed with angle θ opposite side a and a side with length s along c, as shown in the figure. Thābit ibn Qurra stated that the sides of the three triangles were related as:
-$$
- a^2 +b^2 =c(r+s) \ .
-$$
-
-As the angle θ approaches π/2, the base of the isosceles triangle narrows, and lengths r and s overlap less and less. When θ = π/2, ADB becomes a right triangle, r + s = c, and the original Pythagorean theorem is regained.
-
-One proof observes that triangle ABC has the same angles as triangle CAD, but in opposite order. (The two triangles share the angle at vertex A, both contain the angle θ, and so also have the same third angle by the triangle postulate.) Consequently, ABC is similar to the reflection of CAD, the triangle DAC in the lower panel. Taking the ratio of sides opposite and adjacent to θ,
-$$
-\frac{c}{b} = \frac{b}{r} \ .
-$$
-
-Likewise, for the reflection of the other triangle,
-$$
-\frac{c}{a} = \frac{a}{s} \ .
-$$
-
-Clearing fractions and adding these two relations:
-$$
- cs + cr = a^2 +b^2 \ ,
-$$
-
-the required result.
-
-The theorem remains valid if the angle $ \theta $ is obtuse, in which case the lengths r and s are non-overlapping.
-
-Pappus's area theorem is a further generalization, that applies to triangles that are not right triangles, using parallelograms on the three sides in place of squares (squares are a special case, of course). The upper figure shows that for a scalene triangle, the area of the parallelogram on the longest side is the sum of the areas of the parallelograms on the other two sides, provided the parallelogram on the long side is constructed as indicated (the dimensions labeled with arrows are the same, and determine the sides of the bottom parallelogram). This replacement of squares with parallelograms bears a clear resemblance to the original Pythagoras' theorem, and was considered a generalization by Pappus of Alexandria in the 4th century AD.
-
-The lower figure shows the elements of the proof. Focus on the left side of the figure. The left green parallelogram has the same area as the left, blue portion of the bottom parallelogram because both have the same base b and height h.
However, the left green parallelogram also has the same area as the left green parallelogram of the upper figure, because they have the same base (the upper left side of the triangle) and the same height normal to that side of the triangle. Repeating the argument for the right side of the figure, the bottom parallelogram has the same area as the sum of the two green parallelograms. - -In terms of solid geometry, Pythagoras' theorem can be applied to three dimensions as follows. Consider a rectangular solid as shown in the figure. The length of diagonal BD is found from Pythagoras' theorem as: -$$ - \overline{BD}^{2} = \overline{BC}^{2} + \overline{CD}^{2} \ , -$$ - -where these three sides form a right triangle. Using horizontal diagonal BD and the vertical edge AB, the length of diagonal AD then is found by a second application of Pythagoras' theorem as: -$$ - \overline{AD}^{2} = \overline{AB}^{2} + \overline{BD}^{2} \ , -$$ - -or, doing it all in one step: -$$ - \overline{AD}^{2} = \overline{AB}^{2} + \overline{BC}^{2} + \overline{CD}^{2} \ . -$$ - -This result is the three-dimensional expression for the magnitude of a vector v (the diagonal AD) in terms of its orthogonal components {vk} (the three mutually perpendicular sides): -$$ -\|\mathbf{v}\|^2 = \sum_{k=1}^3 \|\mathbf{v}_k\|^2. -$$ - -This one-step formulation may be viewed as a generalization of Pythagoras' theorem to higher dimensions. However, this result is really just the repeated application of the original Pythagoras' theorem to a succession of right triangles in a sequence of orthogonal planes. - -A substantial generalization of the Pythagorean theorem to three dimensions is de Gua's theorem, named for Jean Paul de Gua de Malves: If a tetrahedron has a right angle corner (like a corner of a cube), then the square of the area of the face opposite the right angle corner is the sum of the squares of the areas of the other three faces. This result can be generalized as in the "n-dimensional Pythagorean theorem": - -This statement is illustrated in three dimensions by the tetrahedron in the figure. The "hypotenuse" is the base of the tetrahedron at the back of the figure, and the "legs" are the three sides emanating from the vertex in the foreground. As the depth of the base from the vertex increases, the area of the "legs" increases, while that of the base is fixed. The theorem suggests that when this depth is at the value creating a right vertex, the generalization of Pythagoras' theorem applies. In a different wording: - -The Pythagorean theorem can be generalized to inner product spaces, which are generalizations of the familiar 2-dimensional and 3-dimensional Euclidean spaces. For example, a function may be considered as a vector with infinitely many components in an inner product space, as in functional analysis. - -In an inner product space, the concept of perpendicularity is replaced by the concept of orthogonality: two vectors v and w are orthogonal if their inner product $ \langle \mathbf{v} , \mathbf{w}\rangle $ is zero. The inner product is a generalization of the dot product of vectors. The dot product is called the standard inner product or the Euclidean inner product. However, other inner products are possible. - -The concept of length is replaced by the concept of the norm ||v|| of a vector v, defined as: -$$ -\lVert \mathbf{v} \rVert \equiv \sqrt{\langle \mathbf{v},\mathbf{v}\rangle} . 
-$$
-
-In an inner-product space, the Pythagorean theorem states that for any two orthogonal vectors v and w we have
-$$
-\left\| \mathbf{v} + \mathbf{w} \right\|^2 = \left\| \mathbf{v} \right\|^2 + \left\| \mathbf{w} \right\|^2 .
-$$
-
-Here the vectors v and w are akin to the sides of a right triangle with hypotenuse given by the vector sum v + w. This form of the Pythagorean theorem is a consequence of the properties of the inner product:
-$$
-\left\| \mathbf{v} + \mathbf{w} \right\|^2 =\langle \mathbf{ v+w},\ \mathbf{ v+w}\rangle = \langle \mathbf{ v},\ \mathbf{ v}\rangle +\langle \mathbf{ w},\ \mathbf{ w}\rangle +\langle\mathbf{ v,\ w }\rangle + \langle\mathbf{ w,\ v }\rangle \ = \left\| \mathbf{v}\right\|^2 + \left\| \mathbf{w}\right\|^2,
-$$
-
-where the inner products of the cross terms are zero, because of orthogonality.
-
-A further generalization of the Pythagorean theorem in an inner product space to non-orthogonal vectors is the parallelogram law, $2\left\|\mathbf{v}\right\|^2 + 2\left\|\mathbf{w}\right\|^2 = \left\|\mathbf{v}+\mathbf{w}\right\|^2 + \left\|\mathbf{v}-\mathbf{w}\right\|^2$. The Pythagorean identity itself extends to sums of more than two vectors: if $\mathbf{v}_1, \dots, \mathbf{v}_n$ are pairwise orthogonal, then
-$$
-\left\|\sum_{k=1}^n\mathbf{v}_k\right\|^2=\sum_{k=1}^n\|\mathbf{v}_k\|^2 .
-$$
-
-Another generalization of the Pythagorean theorem applies to Lebesgue-measurable sets of objects in any number of dimensions. Specifically, the square of the measure of an m-dimensional set of objects in one or more parallel m-dimensional flats in n-dimensional Euclidean space is equal to the sum of the squares of the measures of the orthogonal projections of the object(s) onto all m-dimensional coordinate subspaces.
-
-In mathematical terms:
-$$
-\mu^2_{ms} = \sum_{i=1}^{x} \mu^2_{mp_i}
-$$
-
-where:
-
-* $\mu_m$ is a measure in m-dimensions (a length in one dimension, an area in two dimensions, a volume in three dimensions, etc.).
-
-* $s$ is a set of one or more non-overlapping m-dimensional objects in one or more parallel m-dimensional flats in n-dimensional Euclidean space.
-
-* $\mu_{ms}$ is the total measure (sum) of the set of m-dimensional objects.
-
-* $p$ represents an m-dimensional projection of the original set onto an orthogonal coordinate subspace.
-
-* $\mu_{mp_i}$ is the measure of the m-dimensional set projection onto m-dimensional coordinate subspace $i$. Because object projections can overlap on a coordinate subspace, the measure of each object projection in the set must be calculated individually, then measures of all projections added together to provide the total measure for the set of projections on the given coordinate subspace.
-
-* $x$ is the number of orthogonal, m-dimensional coordinate subspaces in n-dimensional space (Rn) onto which the m-dimensional objects are projected (m ≤ n):
-$$
-x = \binom{n}{m} = \frac{n!}{m!(n-m)!}
-$$
-
-The Pythagorean theorem is derived from the axioms of Euclidean geometry, and in fact, were the Pythagorean theorem to fail for some right triangle, then the plane in which this triangle is contained cannot be Euclidean. More precisely, the Pythagorean theorem implies, and is implied by, Euclid's Parallel (Fifth) Postulate. Thus, right triangles in a non-Euclidean geometry do not satisfy the Pythagorean theorem. For example, in spherical geometry, all three sides of the right triangle (say a, b, and c) bounding an octant of the unit sphere have length equal to π/2, and all its angles are right angles, which violates the Pythagorean theorem because
-$$
- a^2 + b^2 = 2 c^2 > c^2
-$$.
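A quick numeric sanity check of the inner-product identities above (our sketch, using numpy): the Pythagorean identity for a pair of orthogonal vectors, and the parallelogram law for an arbitrary pair:

```python
import numpy as np

v = np.array([3.0, 0.0, 4.0])
w = np.array([0.0, 2.0, 0.0])  # orthogonal to v: <v, w> = 0
assert np.dot(v, w) == 0.0

# Pythagorean identity for orthogonal vectors:
assert np.isclose(np.dot(v + w, v + w), np.dot(v, v) + np.dot(w, w))

# Parallelogram law, valid for arbitrary (non-orthogonal) vectors:
u = np.array([1.0, 2.0, 3.0])
lhs = 2 * np.dot(v, v) + 2 * np.dot(u, u)
rhs = np.dot(v + u, v + u) + np.dot(v - u, v - u)
assert np.isclose(lhs, rhs)
print("identities verified")
```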
- -Here two cases of non-Euclidean geometry are considered—spherical geometry and hyperbolic plane geometry; in each case, as in the Euclidean case for non-right triangles, the result replacing the Pythagorean theorem follows from the appropriate law of cosines. - -However, the Pythagorean theorem remains true in hyperbolic geometry and elliptic geometry if the condition that the triangle be right is replaced with the condition that two of the angles sum to the third, say A+B = C. The sides are then related as follows: the sum of the areas of the circles with diameters a and b equals the area of the circle with diameter c. - -For any right triangle on a sphere of radius R (for example, if γ in the figure is a right angle), with sides a, b, c, the relation between the sides takes the form: -$$ - \cos \left(\frac{c}{R}\right)=\cos \left(\frac{a}{R}\right)\cos \left(\frac{b}{R}\right). -$$ - -This equation can be derived as a special case of the spherical law of cosines that applies to all spherical triangles: -$$ - \cos \left(\frac{c}{R}\right)=\cos \left(\frac{a}{R}\right)\cos \left(\frac{b}{R}\right) +\sin\left(\frac{a}{R}\right) \sin\left(\frac{b}{R}\right) \cos \gamma \ . -$$ - -By expressing the Maclaurin series for the cosine function as an asymptotic expansion with the remainder term in big O notation, -$$ - \cos x = 1 - \frac{x^2}{2} + O(x^4) \text{ as } x \to 0\ , -$$ - -it can be shown that as the radius R approaches infinity and the arguments a/R, b/R, and c/R tend to zero, the spherical relation between the sides of a right triangle approaches the Euclidean form of the Pythagorean theorem. Substituting the asymptotic expansion for each of the cosines into the spherical relation for a right triangle yields -$$ -1-\frac{1}{2}\left(\frac{c}{R}\right)^2 + O\left(\frac{1}{R^4}\right) = \left[1-\frac{1}{2}\left(\frac{a}{R}\right)^2 + O\left(\frac{1}{R^4}\right) \right]\left[1-\frac{1}{2}\left(\frac{b}{R}\right)^2 + O\left(\frac{1}{R^4}\right) \right] \text{ as }R\to\infty\ . -$$ - -The constants a4, b4, and c4 have been absorbed into the big O remainder terms since they are independent of the radius R. This asymptotic relationship can be further simplified by multiplying out the bracketed quantities, cancelling the ones, multiplying through by −2, and collecting all the error terms together: -$$ -\left(\frac{c}{R}\right)^2 = \left(\frac{a}{R}\right)^2 + \left(\frac{b}{R}\right)^2 + O\left(\frac{1}{R^4}\right)\text{ as }R\to\infty\ . -$$ - -After multiplying through by R2, the Euclidean Pythagorean relationship c2 = a2 + b2 is recovered in the limit as the radius R approaches infinity (since the remainder term tends to zero): -$$ -c^2= a^2 + b^2 + O\left(\frac{1}{R^2}\right)\text{ as }R\to\infty\ . -$$ - -For small right triangles (a, b << R), the cosines can be eliminated to avoid loss of significance, giving -$$ - \sin^2 \frac{c}{2R} = \sin^2 \frac{a}{2R} + \sin^2 \frac{b}{2R} - 2 \sin^2 \frac{a}{2R} \sin^2 \frac{b}{2R} . -$$ - -In a hyperbolic space with uniform curvature −1/R2, for a right triangle with legs a, b, and hypotenuse c, the relation between the sides takes the form: -$$ - \cosh \frac{c}{R} = \cosh \frac{a}{R} \cosh \frac{b}{R} -$$ - -where cosh is the hyperbolic cosine. This formula is a special form of the hyperbolic law of cosines that applies to all hyperbolic triangles: -$$ -\cosh \frac{c}{R} = \cosh \frac{a}{R} \ \cosh \frac{b}{R} - \sinh \frac{a}{R} \ \sinh \frac{b}{R} \ \cos \gamma \ , -$$ - -with γ the angle at the vertex opposite the side c. 
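The Euclidean limit of the spherical relation above can also be watched numerically. In this sketch (ours), the hypotenuse recovered from cos(c/R) = cos(a/R) cos(b/R) tends to sqrt(a^2 + b^2) as the radius R grows:

```python
import math

def spherical_hypotenuse(a, b, R):
    """Solve cos(c/R) = cos(a/R) * cos(b/R) for the hypotenuse c of a
    right triangle on a sphere of radius R (legs a and b)."""
    return R * math.acos(math.cos(a / R) * math.cos(b / R))

a, b = 3.0, 4.0
for R in (10.0, 100.0, 10_000.0):
    print(R, spherical_hypotenuse(a, b, R))
# The output approaches 5.0 = sqrt(3**2 + 4**2) as R grows, recovering
# the Euclidean Pythagorean theorem in the infinite-radius limit.
```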
- -By using the Maclaurin series for the hyperbolic cosine, cosh x ≈ 1 + x2/2, it can be shown that as a hyperbolic triangle becomes very small (that is, as a, b, and c all approach zero), the hyperbolic relation for a right triangle approaches the form of Pythagoras' theorem. - -For small right triangles (a, b << R), the hyperbolic cosines can be eliminated to avoid loss of significance, giving -$$ - \sinh^2 \frac{c}{2R} = \sinh^2 \frac{a}{2R} + \sinh^2 \frac{b}{2R} + 2 \sinh^2 \frac{a}{2R} \sinh^2 \frac{b}{2R} . -$$ - -For any uniform curvature K (positive, zero, or negative), in very small right triangles (|K|a2, |K|b2 << 1) with hypotenuse c, it can be shown that -$$ - c^2 = a^2 + b^2 - \frac{K}{3} a^2 b^2 - \frac{K^2}{45} a^2 b^2 (a^2 + b^2) - \frac{2 K^3}{945} a^2 b^2 (a^2 - b^2)^2 + O (K^4 c^{10}) . -$$ - -On an infinitesimal level, in three dimensional space, Pythagoras' theorem describes the distance between two infinitesimally separated points as: -$$ -ds^2 = dx^2 + dy^2 +dz^2, -$$ - -with ds the element of distance and (dx, dy, dz) the components of the vector separating the two points. Such a space is called a Euclidean space. However, in Riemannian geometry, a generalization of this expression useful for general coordinates (not just Cartesian) and general spaces (not just Euclidean) takes the form: -$$ -ds^2 = \sum_{i,j}^n g_{ij} dx_i dx_j -$$ - -which is called the metric tensor. (Sometimes, by abuse of language, the same term is applied to the set of coefficients gij.) It may be a function of position, and often describes curved space. A simple example is Euclidean (flat) space expressed in curvilinear coordinates. For example, in polar coordinates: -$$ -ds^2 = dr^2 + r^2 d\theta^2 \ . -$$ - -There is debate whether the Pythagorean theorem was discovered once, or many times in many places, and the date of first discovery is uncertain, as is the date of the first proof. Historians of Mesopotamian mathematics have concluded that the Pythagorean rule was in widespread use during the Old Babylonian period (20th to 16th centuries BC), over a thousand years before Pythagoras was born. The history of the theorem can be divided into four parts: knowledge of Pythagorean triples, knowledge of the relationship among the sides of a right triangle, knowledge of the relationships among adjacent angles, and proofs of the theorem within some deductive system. - -Written between 2000 and 1786 BC, the Middle Kingdom Egyptian Berlin Papyrus 6619 includes a problem whose solution is the Pythagorean triple 6:8:10, but the problem does not mention a triangle. The Mesopotamian tablet Plimpton 322, written between 1790 and 1750 BC during the reign of Hammurabi the Great, contains many entries closely related to Pythagorean triples. - -In India, the Baudhayana Shulba Sutra, the dates of which are given variously as between the 8th and 5th century BC, contains a list of Pythagorean triples and a statement of the Pythagorean theorem, both in the special case of the isosceles right triangle and in the general case, as does the Apastamba Shulba Sutra (c. 600 BC). Van der Waerden believed that this material "was certainly based on earlier traditions". Carl Boyer states that the Pythagorean theorem in the Śulba-sũtram may have been influenced by ancient Mesopotamian math, but there is no conclusive evidence in favor or opposition of this possibility. 
- -Proclus, writing in the fifth century AD, states two arithmetic rules, "one of them attributed to Plato, the other to Pythagoras", for generating special Pythagorean triples. The rule attributed to Pythagoras () starts from an odd number and produces a triple with leg and hypotenuse differing by one unit; the rule attributed to Plato (428/427 or 424/423 – 348/347 BC) starts from an even number and produces a triple with leg and hypotenuse differing by two units. According to Thomas L. Heath (1861–1940), no specific attribution of the theorem to Pythagoras exists in the surviving Greek literature from the five centuries after Pythagoras lived. However, when authors such as Plutarch and Cicero attributed the theorem to Pythagoras, they did so in a way which suggests that the attribution was widely known and undoubted. Classicist Kurt von Fritz wrote, "Whether this formula is rightly attributed to Pythagoras personally, but one can safely assume that it belongs to the very oldest period of Pythagorean mathematics." - -With contents known much earlier, but in surviving texts dating from roughly the 1st century BC, the Chinese text Zhoubi Suanjing (周髀算经), (The Arithmetical Classic of the Gnomon and the Circular Paths of Heaven) gives a reasoning for the Pythagorean theorem for the (3, 4, 5) triangle—in China it is called the "Gougu theorem" (勾股定理). During the Han Dynasty (202 BC to 220 AD), Pythagorean triples appear in The Nine Chapters on the Mathematical Art, together with a mention of right triangles. Some believe the theorem arose first in China, where it is alternatively known as the "Shang Gao theorem" (商高定理), named after the Duke of Zhou's astronomer and mathematician, whose reasoning composed most of what was in the Zhoubi Suanjing. diff --git a/wiki/wikipedia/433.txt b/wiki/wikipedia/433.txt deleted file mode 100644 index 4f08f6d7cb1fff69a79d4cb31ee037c8c2755407..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/433.txt +++ /dev/null @@ -1,79 +0,0 @@ -In mathematics, more specifically in abstract algebra, the Frobenius theorem, proved by Ferdinand Georg Frobenius in 1877, characterizes the finite-dimensional associative division algebras over the real numbers. According to the theorem, every such algebra is isomorphic to one of the following: - -* R (the real numbers) - -* C (the complex numbers) - -* H (the quaternions). - -These algebras have real dimension 1, 2, and 4, respectively. Of these three algebras, R and C are commutative, but H is not. - -The main ingredients for the following proof are the Cayley–Hamilton theorem and the fundamental theorem of algebra. - -* Let D be the division algebra in question. - -* Let n be the dimension of D. - -* We identify the real multiples of 1 with R. - -* When we write a ≤ 0 for an element a of D, we tacitly assume that a is contained in R. - -* We can consider D as a finite-dimensional R-vector space. Any element d of D defines an endomorphism of D by left-multiplication, we identify d with that endomorphism. Therefore, we can speak about the trace of d, and its characteristic and minimal polynomials. - -* For any z in C define the following real quadratic polynomial: -$$ -Q(z; x) = x^2 - 2\operatorname{Re}(z)x + |z|^2 = (x-z)(x-\overline{z}) \in \mathbf{R}[x]. -$$ - -Note that if z ∈ C ∖ R then Q(z; x) is irreducible over R. - -The key to the argument is the following - -Claim. The set V of all elements a of D such that a2 ≤ 0 is a vector subspace of D of dimension n - 1. 
Moreover D = R ⊕ V as R-vector spaces, which implies that V generates D as an algebra.
-
-Proof of Claim: Recall that n is the dimension of D as an R-vector space, and pick a in D with characteristic polynomial p(x). By the fundamental theorem of algebra, we can write
-$$
- p(x)= (x-t_1)\cdots(x-t_r) (x-z_1)(x - \overline{z_1}) \cdots (x-z_s)(x - \overline{z_s}), \qquad t_i \in \mathbf{R}, \quad z_j \in \mathbf{C} \backslash \mathbf{R}.
-$$
-
-We can rewrite p(x) in terms of the polynomials Q(z; x):
-$$
- p(x)= (x-t_1)\cdots(x-t_r) Q(z_1; x) \cdots Q(z_s; x).
-$$
-
-Since zj ∈ C\R, the polynomials Q(zj; x) are all irreducible over R. By the Cayley–Hamilton theorem, p(a) = 0 and because D is a division algebra, it follows that either a − ti = 0 for some i or that Q(zj; a) = 0 for some j. The first case implies that a is real. In the second case, it follows that Q(zj; x) is the minimal polynomial of a. Because p(x) has the same complex roots as the minimal polynomial and because it is real it follows that
-$$
- p(x)= Q(z_j; x)^k = \left (x^2 - 2\operatorname{Re}(z_j) x + |z_j|^2 \right )^k
-$$
-
-Since p(x) is the characteristic polynomial of a, the coefficient of $x^{2k-1}$ in p(x) is tr(a) up to a sign. Therefore, reading from the above equation, tr(a) = 0 if and only if Re(zj) = 0, in other words tr(a) = 0 if and only if $a^2 = -|z_j|^2 \le 0$.
-
-So V is the subset of all a with tr(a) = 0. In particular, it is a vector subspace. The rank–nullity theorem then implies that V has dimension n - 1 since it is the kernel of $\operatorname{tr} : D \to \mathbf{R}$. Since R and V are disjoint (i.e. they satisfy $\mathbf R \cap V = \{0\}$), and their dimensions sum to n, we have that D = R ⊕ V.
-
-For a, b in V define B(a, b) = (−ab − ba)/2. Because of the identity $(a + b)^2 - a^2 - b^2 = ab + ba$, it follows that B(a, b) is real. Furthermore, since $a^2 \le 0$, we have: B(a, a) > 0 for a ≠ 0. Thus B is a positive definite symmetric bilinear form, in other words, an inner product on V.
-
-Let W be a subspace of V that generates D as an algebra and which is minimal with respect to this property. Let e1, ..., en be an orthonormal basis of W with respect to B. Then orthonormality implies that:
-$$
-e_i^2 =-1, \quad e_i e_j = - e_j e_i.
-$$
-
-If n = 0, then D is isomorphic to R.
-
-If n = 1, then D is generated by 1 and e1 subject to the relation $e_1^2 = -1$. Hence it is isomorphic to C.
-
-If n = 2, it has been shown above that D is generated by 1, e1, e2 subject to the relations
-$$
-e_1^2 = e_2^2 =-1, \quad e_1 e_2 = - e_2 e_1, \quad (e_1 e_2)(e_1 e_2) =-1.
-$$
-
-These are precisely the relations for H.
-
-If n > 2, then D cannot be a division algebra. Assume that n > 2. Let $u = e_1 e_2 e_n$. It is easy to see that $u^2 = 1$ (this only works if n > 2). If D were a division algebra, $0 = u^2 - 1 = (u - 1)(u + 1)$ implies u = ±1, which in turn means $e_n = \mp e_1 e_2$, and so e1, ..., en−1 generate D. This contradicts the minimality of W.
-
-*The fact that D is generated by e1, ..., en subject to the above relations means that D is the Clifford algebra of $\mathbf{R}^n$. The last step shows that the only real Clifford algebras which are division algebras are Cℓ0, Cℓ1 and Cℓ2.
-
-*As a consequence, the only commutative division algebras are R and C. Also note that H is not a C-algebra. If it were, then the center of H would have to contain C, but the center of H is R. Therefore, the only finite-dimensional division algebra over C is C itself.
-
-* This theorem is closely related to Hurwitz's theorem, which states that the only real normed division algebras are R, C, H, and the (non-associative) algebra O.
-
-* Pontryagin variant. If D is a connected, locally compact division ring, then D = R, C, or H.
diff --git a/wiki/wikipedia/434.txt b/wiki/wikipedia/434.txt
deleted file mode 100644
index fc69b094fefd2f9f89ad2456655d9d6ea6867cd1..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/434.txt
+++ /dev/null
@@ -1,67 +0,0 @@
-In computational complexity theory, generalized geography is a well-known PSPACE-complete problem.
-
-Geography is a children's game, which is good for a long car trip, where players take turns naming cities from anywhere in the world. Each city chosen must begin with the same letter that ended the previous city name. Repetition is not allowed. The game begins with an arbitrary starting city and ends when a player loses because he or she is unable to continue.
-
-To visualize the game, a directed graph can be constructed whose nodes are each cities of the world. An arrow is added from node N1 to node N2 if and only if the city labeling N2 starts with the letter that ends the name of the city labeling node N1. In other words, we draw an arrow from one city to another if the first can lead to the second according to the game rules. For a two-player game, the alternate edges along a path of play correspond to the two players. The first player unable to extend the path loses. An illustration of the game (containing some cities in Michigan) is shown in the figure below.
-
-In a generalized geography (GG) game, we replace the graph of city names with an arbitrary directed graph. The following graph is an example of a generalized geography game.
-
-We define P1 as the player moving first and P2 as the player moving second and name the nodes N1 to Nn. In the above figure, P1 has a winning strategy as follows: N1 points only to nodes N2 and N3. Thus P1's first move must be one of these two choices. P1 chooses N2 (if P1 chooses N3, then P2 will choose N9 as that is the only option and P1 will lose). Next P2 chooses N4 because it is the only remaining choice. P1 now chooses N5 and P2 subsequently chooses N3 or N7. Regardless of P2's choice, P1 chooses N9 and P2 has no remaining choices and loses the game.
-
-The problem of determining which player has a winning strategy in a generalized geography game is PSPACE-complete.
-
-Let GG = { ⟨G, b⟩ | P1 has a winning strategy for the generalized geography game played on graph G starting at node b }; to show that GG ∈ PSPACE, we present a polynomial-space recursive algorithm determining which player has a winning strategy. Given an instance ⟨G, nstart⟩ of GG, where G is a directed graph and nstart is the designated start node, the algorithm M proceeds as follows:
-
-On M(⟨G, nstart⟩):
-
-# Measure the out-degree of node nstart. If this degree is 0, then return reject, because there are no moves available for player one.
-
-# Construct a list of all nodes reachable from nstart by one edge: n1, n2, ..., ni.
-
-# Remove nstart and all edges connected to it from G to form G1.
-
-# For each node nj in the list n1, ..., ni, call M(⟨G1, nj⟩).
-
-# If all of these calls return accept, then no matter which decision P1 makes, P2 has a strategy to win, so return reject. Otherwise (if one of the calls returns reject) P1 has a choice that will deny any successful strategies for P2, so return accept.
-
-The algorithm M clearly decides GG.
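The five-step procedure translates directly into a short recursive program. The following Python sketch (ours, not from the article; the graph encoding is illustrative) decides which player wins vertex geography on a small directed graph:

```python
def player_one_wins(graph, start):
    """Decide generalized (vertex) geography by recursion.

    graph: dict mapping each node to the set of its successors.
    start: node the player to move must move from.
    Returns True iff the player to move has a winning strategy.
    Recursion depth is at most the number of nodes, since each
    level deletes the current node -- hence polynomial space.
    """
    successors = graph.get(start, set())
    if not successors:
        return False  # out-degree 0: no move available, so reject
    # Remove `start` and all edges touching it to form G1.
    reduced = {node: {m for m in targets if m != start}
               for node, targets in graph.items() if node != start}
    # Accept iff some move leads to a losing position for the opponent.
    return any(not player_one_wins(reduced, nxt) for nxt in successors)

g = {1: {2, 3}, 2: {4}, 3: {4}, 4: set()}
print(player_one_wins(g, 1))  # False: either move allows a reply to 4,
                              # after which the first player is stuck
print(player_one_wins(g, 2))  # True: move 2 -> 4 leaves no reply
```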
It is in PSPACE because the only non-obvious polynomial workspace consumed is in the recursion stack. The space consumed by the recursion stack is polynomial because each level of recursion adds a single node to the stack, and there are at most n levels, where n is the number of nodes in G. This is essentially equivalent to a depth-first search. - -The following proof is due to David Lichtenstein and Michael Sipser. - -To establish the PSPACE-hardness of GG, we can reduce the FORMULA-GAME problem (which is known to be PSPACE-hard) to GG in polynomial time (P). In brief, an instance of the FORMULA-GAME problem consists of a quantified Boolean formula φ = ∃x1 ∀x2 ∃x3 ...Qxk(ψ) where Q is either ∃ or ∀. The game is played by two players, Pa and Pe, who alternate choosing values for successive xi. Pe wins the game if the formula ψ ends up true, and Pa wins if ψ ends up false. The formula ψ is assumed to be in conjunctive normal form. - -In this proof, we assume that the quantifier list starts and ends with the existential qualifier, ∃, for simplicity. Note that any expression can be converted to this form by adding dummy variables that do not appear in ψ. - -By constructing a graph G like the one shown above, we will show any instance of FORMULA-GAME can be reduced to an instance of Generalized Geography, where the optimal strategy for P1 is equivalent to that of Pe, and the optimal strategy for P2 is equivalent to that of Pa. - -The left vertical chain of nodes is designed to mimic the procedure of choosing values for variables in FORMULA-GAME. Each diamond structure corresponds to a quantified variable. Players take turns deciding paths at each branching node. Because we assumed the first quantifier would be existential, P1 goes first, selecting the left node if x1 is true and the right node if x1 is false. Each player must then take forced turns, and then P2 chooses a value for x2. These alternating assignments continue down the left side. After both players pass through all the diamonds, it is again P1 's turn, because we assumed that the last quantifier is existential. P1 has no choice but to follow the path to the right side of the graph. Then it is P2 's turn to make a move. - -When the play gets to the right side of the graph, it is similar to the end of play in the formula game. Recall that in the formula game, Pe wins if ψ is true, while Pa wins if ψ is false. The right side of the graph guarantees that P1 wins if and only if Pe wins, and that P2 wins if and only if Pa wins. - -First we show that P2 always wins when Pa wins. If Pa wins, ψ is false. If ψ is false, there exists an unsatisfying clause. P2 will choose an unsatisfying clause to win. Then when it is P1's turn he must choose a literal in that clause chosen by P2. Since all the literals in the clause are false, they do not connect to previously visited nodes in the left vertical chain. This allows P2 to follow the connection to the corresponding node in a diamond of the left chain and select it. However, P1 is now unable to select any adjacent nodes and loses. - -Now we show that P1 always wins when Pe wins. If Pe wins, ψ is true. If ψ is true, every clause in the right side of the graph contains a true literal. P2 can choose any clause. Then P1 chooses the literal that is true. And because it is true, its adjacent node in the left vertical node has already been selected, so P2 has no moves to make and loses. - -Generalized geography is PSPACE-complete, even when played on planar graphs. 
The following proof is from Theorem 3 of the Lichtenstein–Sipser paper. - -Since planar GG is a special case of GG, and GG is in PSPACE, planar GG is in PSPACE. It remains to show that planar GG is PSPACE-hard. This can be proved by showing how to convert an arbitrary graph into a planar graph, such that a game of GG played on this graph will have the same outcome as on the original graph. - -In order to do that, it is only necessary to eliminate all the edge crossings of the original graph. We draw the graph such that no three edges intersect at a point, and no pair of crossing edges can both be used in the same game. This is not possible in general, but is always possible for the graph constructed from a FORMULA-GAME instance; for example, we can arrange that only the edges from clause vertices are involved in crossings. Now we replace each crossing with this construction: - -The result is a planar graph, and the same player can force a win as in the original graph: if a player chooses to move "up" from V in the transformed game, then both players must continue moving "up" to W or lose immediately. So moving "up" from V in the transformed game simulates the move V→W in the original game. If V→W is a winning move, then moving "up" from V in the transformed game is also a winning move, and vice versa. - -Thus, the game of GG played on the transformed graph will have the same outcome as on the original graph. This transformation takes time that is a constant multiple of the number of edge intersections in the original graph, thus it takes polynomial time. - -Thus planar GG is PSPACE-complete. - -GG played on planar bipartite graphs with maximum degree 3 is still PSPACE-complete, by replacing the vertices of degree higher than 3 with a chain of vertices with degree at most 3. The proof uses the following construction: - -If one player uses any of the entrances to this construction, the other player chooses which exit will be used. Also the construction can only be traversed once, because the central vertex is always visited. Hence this construction is equivalent to the original vertex. - -A variant of GG is called edge geography, where after each move, the edge that the player went through is erased. This is in contrast to the original GG, where after each move, the vertex that the player used to be on is erased. In this view, the original GG can be called Vertex Geography. - -Edge geography is PSPACE-complete. This can be proved using the same construction that was used for vertex geography. - -One may also consider playing either Geography game on an undirected graph (that is, the edges can be traversed in both directions). Fraenkel, Scheinerman, and Ullman show that undirected vertex geography can be solved in polynomial time, whereas undirected edge geography is PSPACE-complete, even for planar graphs with maximum degree 3. If the graph is bipartite, then Undirected Edge Geography is solvable in polynomial time. - -Given that GG is PSPACE-complete, no polynomial time algorithm exists for optimal play in GG unless P = PSPACE. However, it may not be as easy to prove the complexity of other games because certain games (such as chess) contain a finite number of game positions - making it hard (or impossible) to formulate a mapping to a PSPACE-complete problem. In spite of this, the complexity of certain games can still be analyzed by generalization (e.g., to an n × n board). See the references for a proof for generalized Go, as a corollary of the proof of the completeness of GG.
diff --git a/wiki/wikipedia/435.txt b/wiki/wikipedia/435.txt deleted file mode 100644 index dc6d1d820dd0183c38ea627c55f939eddbb167da..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/435.txt +++ /dev/null @@ -1,73 +0,0 @@ -In mathematics, the Khintchine inequality, named after Aleksandr Khinchin and spelled in multiple ways in the Latin alphabet, is a theorem from probability, and is also frequently used in analysis. Heuristically, it says that if we pick $ N $ complex numbers $ x_1,\dots,x_N \in\mathbb{C}$, and add them together each multiplied by a random sign $\pm 1 $, then the expected value of the sum's modulus, or the modulus it will be closest to on average, will be not too far off from $ \sqrt{|x_1|^{2}+\cdots + |x_N|^{2}}$. - -Let $ \{\varepsilon_n\}_{n=1}^N $ be i.i.d. random variables - -with $P(\varepsilon_n=\pm1)=\frac12$ for $n=1,\ldots, N$, - -i.e., a sequence with Rademacher distribution. Let $0 < p < \infty$ and let $x_1,\dots,x_N \in \mathbb{C}$. Then -$$ -A_p \left( \sum_{n=1}^N |x_n|^{2} \right)^{1/2} \leq \left( \operatorname{E} \left| \sum_{n=1}^N \varepsilon_n x_n \right|^{p} \right)^{1/p} \leq B_p \left( \sum_{n=1}^N |x_n|^{2} \right)^{1/2} -$$ - -for some constants $A_p, B_p > 0 $ depending only on $p$ (see Expected value for notation). The sharp values of the constants $A_p,B_p$ were found by Haagerup (Ref. 2; see Ref. 3 for a simpler proof). It is a simple matter to see that $A_p = 1$ when $p \ge 2$, and $B_p = 1$ when $0 < p \le 2$. - -Haagerup found that - - - -\begin{align} - -A_p &= \begin{cases} - -2^{1/2-1/p} & 0 < p \le p_0, \\ - -2^{1/2} \left( \Gamma((p+1)/2)/\sqrt{\pi} \right)^{1/p} & p_0 < p < 2, \\ - -1 & 2 \le p < \infty, - -\end{cases} \\ - -B_p &= \begin{cases} - -1 & 0 < p \le 2, \\ - -2^{1/2} \left( \Gamma((p+1)/2)/\sqrt{\pi} \right)^{1/p} & 2 < p < \infty, - -\end{cases} - -\end{align} - -where $p_0\approx 1.847$ and $\Gamma$ is the Gamma function. - -One may note in particular that $B_p$ matches exactly the moments of a normal distribution. - -The uses of this inequality are not limited to applications in probability theory. One example of its use in analysis is the following: if we let $T$ be a linear operator between two Lp spaces $ L^p(X,\mu)$ and $ L^p(Y,\nu) $, $1 < p < \infty$, with bounded norm $ \|T\|<\infty $, then one can use Khintchine's inequality to show that -$$ - \left\|\left(\sum_{n=1}^N |Tf_n|^2 \right)^{1/2} \right\|_{L^p(Y,\nu)}\leq C_p \left\|\left(\sum_{n=1}^N |f_n|^2\right)^{1/2} \right\|_{L^p(X,\mu)} -$$ - -for some constant $C_p>0$ depending only on $p$ and $\|T\|$. - -For the case of Rademacher random variables, Pawel Hitczenko showed that the sharpest version is: - - - -A \left(\sqrt{p}\left(\sum_{n=b+1}^N x_n^2\right)^{1/2} + \sum_{n=1}^b x_n\right) - -\leq \left(\operatorname{E} \left|\sum_{n=1}^N \varepsilon_n x_n\right|^p \right)^{1/p} - -\leq B \left(\sqrt{p}\left(\sum_{n=b+1}^N x_n^2\right)^{1/2} + \sum_{n=1}^b x_n\right) - - - -where $b = \lfloor p\rfloor$, and $A$ and $B$ are universal constants independent of $p$. - -Here we assume that the $x_i$ are non-negative and non-increasing. diff --git a/wiki/wikipedia/436.txt b/wiki/wikipedia/436.txt deleted file mode 100644 index 5213ed940438bc6bdb7ddd4ebbfc7e5864394ed0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/436.txt +++ /dev/null @@ -1,26 +0,0 @@ -In mathematics, the Gaussian isoperimetric inequality, proved by Boris Tsirelson and Vladimir Sudakov, and later independently by Christer Borell, states that among all sets of given Gaussian measure in the n-dimensional Euclidean space, half-spaces have the minimal Gaussian boundary measure. - -Let $\scriptstyle A$ be a measurable subset of $\scriptstyle\mathbf{R}^n $ endowed with the standard Gaussian measure $\gamma^n$ with the density $ {\exp(-\|x\|^2/2)}/(2\pi)^{n/2}$. Denote by - -A_\varepsilon = \left\{ x \in \mathbf{R}^n | - -\text{dist}(x, A) \leq \varepsilon \right\} - -the ε-extension of A.
Then the Gaussian isoperimetric inequality states that - -\liminf_{\varepsilon \to +0} - -\varepsilon^{-1} \left\{ \gamma^n (A_\varepsilon) - \gamma^n(A) \right\} - -\geq \varphi(\Phi^{-1}(\gamma^n(A))), - -where -$$ -\varphi(t) = \frac{\exp(-t^2/2)}{\sqrt{2\pi}}\quad{\rm and}\quad\Phi(t) = \int_{-\infty}^t \varphi(s) ds. -$$ - -Half-spaces attain equality: for $A = \{x : x_1 \le a\}$ one has $\gamma^n(A_\varepsilon) = \Phi(a + \varepsilon)$, so the left-hand side equals $\varphi(a) = \varphi(\Phi^{-1}(\gamma^n(A)))$. - -The original proofs by Sudakov, Tsirelson and Borell were based on Paul Lévy's spherical isoperimetric inequality. - -Sergey Bobkov proved a functional generalization of the Gaussian isoperimetric inequality, from a certain "two point analytic inequality". Bakry and Ledoux gave another proof of Bobkov's functional inequality based on semigroup techniques, which works in a much more abstract setting. Later Barthe and Maurey gave yet another proof using Brownian motion. - -The Gaussian isoperimetric inequality also follows from Ehrhard's inequality. diff --git a/wiki/wikipedia/437.txt b/wiki/wikipedia/437.txt deleted file mode 100644 index 5213ed940438bc6bdb7ddd4ebbfc7e5864394ed0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/437.txt +++ /dev/null @@ -1,35 +0,0 @@ -In graph theory and computer science, the graph sandwich problem is a problem of finding a graph that belongs to a particular family of graphs and is "sandwiched" between two other graphs, one of which must be a subgraph and the other of which must be a supergraph of the desired graph. - -Graph sandwich problems generalize the problem of testing whether a given graph belongs to a family of graphs, and have attracted attention because of their applications and as a natural generalization of recognition problems. - -More precisely, given a vertex set V, a mandatory edge set E1, and a larger edge set E2, a graph G = (V, E) is called a sandwich graph for the pair G1 = (V, E1), G2 = (V, E2) if E1 ⊆ E ⊆ E2. - -The graph sandwich problem for property Π is defined as follows: - -Graph Sandwich Problem for Property Π: - -Instance: Vertex set V and edge sets E1 ⊆ E2 ⊆ V × V. - -Question: Is there a graph G = (V, E) such that E1 ⊆ E ⊆ E2 and G satisfies property Π ? - -The recognition problem for a class of graphs (those satisfying a property Π) is equivalent to the particular graph sandwich problem where E1 = E2, that is, the optional edge set is empty. - -The graph sandwich problem is NP-complete when Π is the property of being a chordal graph, comparability graph, permutation graph, chordal bipartite graph, or chain graph. It can be solved in polynomial time for split graphs, threshold graphs, and graphs in which every five vertices contain at most one four-vertex induced path. - -The complexity status has also been settled for the H-free graph sandwich problems for each of the four-vertex graphs H. diff --git a/wiki/wikipedia/438.txt b/wiki/wikipedia/438.txt deleted file mode 100644 index 47b92dff0cf9252be3678fd9941e08b8fa0d6ab2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/438.txt +++ /dev/null @@ -1,41 +0,0 @@ -WinSCP (Windows Secure Copy) is a free and open-source SSH File Transfer Protocol (SFTP), File Transfer Protocol (FTP), WebDAV, Amazon S3, and secure copy protocol (SCP) client for Microsoft Windows. Its main function is secure file transfer between a local computer and a remote server. Beyond this, WinSCP offers basic file manager and file synchronization functionality. For secure transfers, it uses the Secure Shell protocol (SSH) and supports the SCP protocol in addition to SFTP. - -Development of WinSCP started around March 2000 and continues.
Originally it was hosted by the University of Economics in Prague, where its author worked at the time. Since July 16, 2003, it has been licensed under the GNU GPL. It is hosted on SourceForge and GitHub. - -WinSCP is based on the implementation of the SSH protocol from PuTTY and the FTP protocol from FileZilla. It is also available as a plugin for the Altap Salamander file manager, and there exists a third-party plugin for the FAR file manager. - -*Graphical user interface - -*Translated into several languages - -*Integration with Windows (drag and drop, URL, shortcut icons) - -*All common operations with files - -*Support for SFTP and SCP protocols over SSH-1 and SSH-2, FTP protocol, WebDAV protocol and Amazon S3 protocol. - -*Batch file scripting, command-line interface, and .NET wrapper - -*Directory synchronization in several semi- or fully automatic ways - -*Integrated text editor - -*Support for SSH password, keyboard-interactive, public key, and Kerberos (GSS) authentication - -*Integrates with Pageant (PuTTY authentication agent) for full support of public key authentication with SSH - -*Choice of Windows File Explorer-like or Norton Commander-like interfaces - -*Optionally stores session information - -*Optionally imports session information from PuTTY sessions in the registry - -*Able to upload files and retain associated original date/timestamps, unlike FTP clients - -WinSCP can act as a remote editor. When the user clicks on a (text) file in the remote file manager, it transfers the file to the local machine and opens it in the integrated editor, allowing users to edit it locally as they would with any other text file. Alternatively, the user may choose local editors based on file extensions. Whenever the document is saved, the remote version is updated automatically. - -Apart from the standard package, three portable versions are also available: a generic package and two customized versions for LiberKey and PortableApps.com. The portable version runs on Wine on several POSIX-compliant operating systems, such as Linux, macOS, and BSD. - -Some older versions of the WinSCP installer included the OpenCandy advertising module or bundled Google Chrome. Since version 5.5.5 (August 2014) the installer does not contain any advertisement. - -WinSCP itself did not and does not contain any advertisements. diff --git a/wiki/wikipedia/439.txt b/wiki/wikipedia/439.txt deleted file mode 100644 index 4470590433ac60013d25e5370fad4400871434bc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/439.txt +++ /dev/null @@ -1,19 +0,0 @@ -Abstraction in mathematics is the process of extracting the underlying structures, patterns or properties of a mathematical concept, removing any dependence on real world objects with which it might originally have been connected, and generalizing it so that it has wider applications or matches other abstract descriptions of equivalent phenomena. Two of the most highly abstract areas of modern mathematics are category theory and model theory. - -Many areas of mathematics began with the study of real world problems, before the underlying rules and concepts were identified and defined as abstract structures. For example, geometry has its origins in the calculation of distances and areas in the real world, and algebra started with methods of solving problems in arithmetic. - -Abstraction is an ongoing process in mathematics and the historical development of many mathematical topics exhibits a progression from the concrete to the abstract.
For example, the first steps in the abstraction of geometry were historically made by the ancient Greeks, with Euclid's Elements being the earliest extant documentation of the axioms of plane geometry—though Proclus tells of an earlier axiomatisation by Hippocrates of Chios. In the 17th century, Descartes introduced Cartesian co-ordinates which allowed the development of analytic geometry. Further steps in abstraction were taken by Lobachevsky, Bolyai, Riemann and Gauss, who generalised the concepts of geometry to develop non-Euclidean geometries. Later in the 19th century, mathematicians generalised geometry even further, developing such areas as geometry in n dimensions, projective geometry, affine geometry and finite geometry. Finally Felix Klein's "Erlangen program" identified the underlying theme of all of these geometries, defining each of them as the study of properties invariant under a given group of symmetries. This level of abstraction revealed connections between geometry and abstract algebra. - -In mathematics, abstraction can be advantageous in the following ways: - -* It reveals deep connections between different areas of mathematics. - -* Known results in one area can suggest conjectures in another related area. - -* Techniques and methods from one area can be applied to prove results in other related areas. - -*Patterns from one mathematical object can be generalized to other similar objects in the same class. - -On the other hand, abstraction can also be disadvantageous in that highly abstract concepts can be difficult to learn. A degree of mathematical maturity and experience may be needed for conceptual assimilation of abstractions. - -Bertrand Russell, in The Scientific Outlook (1931), writes that "Ordinary language is totally unsuited for expressing what physics really asserts, since the words of everyday life are not sufficiently abstract. Only mathematics and mathematical logic can say as little as the physicist means to say." diff --git a/wiki/wikipedia/44.txt b/wiki/wikipedia/44.txt deleted file mode 100644 index e394827a661b1f3ccbde9fe0f45e229a1a70f522..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/44.txt +++ /dev/null @@ -1,31 +0,0 @@ -The Brumer–Stark conjecture is a conjecture in algebraic number theory giving a rough generalization of both the analytic class number formula for Dedekind zeta functions, and also of Stickelberger's theorem about the factorization of Gauss sums. It is named after Armand Brumer and Harold Stark. - -It arises as a special case (abelian and first-order) of Stark's conjecture, when the place that splits completely in the extension is finite. There are very few cases where the conjecture is known to be valid. Its importance arises, for instance, from its connection with Hilbert's twelfth problem. - -Let K/k be an abelian extension of global fields, and let S be a set of places of k containing the Archimedean places and the prime ideals that ramify in K/k. The S-imprimitive equivariant Artin L-function θ(s) is obtained from the usual equivariant Artin L-function by removing the Euler factors corresponding to the primes in S from the Artin L-functions from which the equivariant function is built. It is a function on the complex numbers taking values in the complex group ring C[G] where G is the Galois group of K/k. It is analytic on the entire plane, excepting a lone simple pole at s = 1. - -Let μK be the group of roots of unity in K. The group G acts on μK; let A be the annihilator of μK as a Z[G]-module. 
An important theorem, first proved by C. L. Siegel and later independently by Takuro Shintani, states that θ(0) is actually in Q[G]. A deeper theorem, proved independently by Pierre Deligne and Ken Ribet, Daniel Barsky, and Pierrette Cassou-Noguès, states that Aθ(0) is in Z[G]. In particular, Wθ(0) is in Z[G], where W is the cardinality of μK. - -The ideal class group of K is a G-module. From the above discussion, we can let Wθ(0) act on it. The Brumer–Stark conjecture says the following: - -Brumer–Stark Conjecture. For each nonzero fractional ideal $\mathfrak{a}$ of K, there is an "anti-unit" ε such that - -#$ \mathfrak{a}^{W \theta(0)} = (\varepsilon).$ - -#The extension $K \left(\varepsilon^{\frac{1}{W}} \right)/k$ is abelian. - -The first part of this conjecture is due to Armand Brumer, and Harold Stark originally suggested that the second condition might hold. The conjecture was first stated in published form by John Tate. - -The term "anti-unit" refers to the condition that $|\varepsilon|_\nu$ is required to be 1 for each Archimedean place ν. - -The Brumer–Stark conjecture is known to be true for extensions K/k where - -* K/Q is cyclotomic: this follows from Stickelberger's theorem - -* K is abelian over Q - -* K/k is a quadratic extension - -* K/k is a biquadratic extension - -The analogous statement in the function field case is known to be true, having been proved by John Tate and Pierre Deligne, with a different proof by David Hayes. diff --git a/wiki/wikipedia/440.txt b/wiki/wikipedia/440.txt deleted file mode 100644 index 00d912a4229eb755630ece4aa32500b4a5836c97..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/440.txt +++ /dev/null @@ -1,39 +0,0 @@ -Set packing is a classical NP-complete problem in computational complexity theory and combinatorics, and was one of Karp's 21 NP-complete problems. - -Suppose one has a finite set S and a list of subsets of S. Then, the set packing problem asks if some k subsets in the list are pairwise disjoint (in other words, no two of them share an element). - -More formally, given a universe $\mathcal{U}$ and a family $\mathcal{S}$ of subsets of $\mathcal{U}$, - -a packing is a subfamily $\mathcal{C}\subseteq\mathcal{S}$ of sets such that all sets in $\mathcal{C}$ are pairwise disjoint. The size of the packing is $|\mathcal{C}|$. In the set packing decision problem, the input is a pair $(\mathcal{U},\mathcal{S})$ and an integer $k$; the question is whether - -there is a set packing of size $k$ or more. In the set packing optimization problem, the input is a pair $(\mathcal{U},\mathcal{S})$, and the task is to find a set packing that uses the most sets. - -The problem is clearly in NP since, given k subsets, we can easily verify that they are pairwise disjoint in polynomial time. - -The optimization version of the problem, maximum set packing, asks for the maximum number of pairwise disjoint sets in the list. It is a maximization problem that can be formulated naturally as an integer linear program, belonging to the class of packing problems. - -The maximum set packing problem can be formulated as an integer linear program, as shown below. - -The set packing problem is not only NP-complete, but its optimization version (general maximum set packing problem) has been proven as difficult to approximate as the maximum clique problem; in particular, it cannot be approximated within any constant factor. The best known algorithm approximates it within a factor of $O(\sqrt{|\mathcal{U}|})$. - -The weighted variant can be approximated as well.
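The integer linear program referred to above did not survive extraction; the following is a sketch of the standard formulation, with a 0-1 variable $x_S$ recording whether the set $S$ is chosen (for the weighted variant one maximizes $\sum_{S} w_S x_S$ instead):
$$
\begin{align}
\max \quad & \sum_{S \in \mathcal{S}} x_S \\
\text{subject to} \quad & \sum_{S \in \mathcal{S} :\, e \in S} x_S \leq 1 \quad \text{for all } e \in \mathcal{U}, \\
& x_S \in \{0, 1\} \quad \text{for all } S \in \mathcal{S}.
\end{align}
$$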
- -However, the problem does have a variant which is more tractable: if we assume no subset exceeds k≥3 elements, the answer can be approximated within a factor of k/2 + ε for any ε > 0; in particular, the problem with 3-element sets can be approximated within about 50%. In another more tractable variant, if no element occurs in more than k of the subsets, the answer can be approximated within a factor of k. This is also true for the weighted version. - -There is a one-to-one polynomial-time reduction between the independent set problem and the set packing problem: - -* Given a set packing problem on a collection $\mathcal{S}$, create a graph where for each set $S \in \mathcal{S}$ there is a vertex $v_S$, and there is an edge between $v_S$ and $v_T$ if $S \cap T \neq \varnothing$. Now every independent set of vertices in the generated graph corresponds to a set packing in $\mathcal{S}$. - -* Given an independent vertex set problem on a graph $G(V,E)$, create a collection of sets where for each vertex $v$ there is a set $S_v$ containing all edges adjacent to $v$. Now every set packing in the generated collection corresponds to an independent vertex set in $G(V,E)$. - -This is also a bidirectional PTAS reduction, and it shows that the two problems are equally difficult to approximate. - -Matching and 3-dimensional matching are special cases of set packing. A maximum-size matching can be found in polynomial time, but finding a largest 3-dimensional matching or a largest independent set is NP-hard. - -Set packing is one among a family of problems related to covering or partitioning the elements of a set. One closely related problem is the set cover problem. Here, we are also given a set S and a list of sets, but the goal is to determine whether we can choose k sets that together contain every element of S. These sets may overlap. The optimization version finds the minimum number of such sets. The maximum set packing need not cover every possible element. - -The NP-complete exact cover problem, on the other hand, requires every element to be contained in exactly one of the subsets. Finding such an exact cover at all, regardless of size, is an NP-complete problem. However, if we create a singleton set for each element of S and add these to the list, the resulting problem is about as easy as set packing. - -Karp originally showed set packing NP-complete via a reduction from the clique problem. - -See also: Packing in a hypergraph. diff --git a/wiki/wikipedia/441.txt b/wiki/wikipedia/441.txt deleted file mode 100644 index 92b2e059fe5d5042c0e5beb8cb807801266a404e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/441.txt +++ /dev/null @@ -1,43 +0,0 @@ -Regular semantics is a computing term which describes one type of guarantee provided by a data register shared by several processors in a parallel machine or in a network of computers working together. Regular semantics are defined for a variable with a single writer but multiple readers. These semantics are stronger than safe semantics but weaker than atomic semantics: they guarantee that there is a total order to the write operations which is consistent with real-time and that read operations return either the value of the last write completed before the read begins, or that of one of the writes which are concurrent with the read. - -Regular semantics are weaker than linearizability. 
Consider the example shown below, where the horizontal axis represents time and the arrows represent the interval during which a read or write operation takes place. According to a regular register's definition, the third read should return 3 since the read operation is not concurrent with any write operation. On the other hand, the second read may return 2 or 3, and the first read may return either 5 or 2. The first read could return 3 and the second read could return 2. This behavior would not satisfy atomic semantics. Therefore, regular semantics is a weaker property than atomic semantics. On the other hand, Leslie Lamport proved that a linearizable register may be implemented from registers with safe semantics, which are weaker than regular registers. - -A single-writer multi-reader (SWMR) regular register is an atomic register if each of its execution histories H satisfies the following property: - -for any two read invocations r1 and r2: (r1 →H r2) ⇒ ¬(π(r2) →H π(r1)) - -Before getting into the proof, we should first explain what new/old inversion means. As shown in the picture below, by looking at the execution we can see that the only difference between a regular execution and an atomic execution is when a = 0 and b = 1. In this execution, when we consider the two read invocations R.read() → a followed by R.read() → b, our first value (new value) is a = 0 while the second value (old value) is b = 1. This is actually the main difference between atomicity and regularity. - -The theorem above states that a single-writer multi-reader regular register without new/old inversion is an atomic register. By looking at the picture we can say that, as R.read() → a →H R.read() → b and R.write(1) →H R.write(0), it is not possible to have π(R.read() → b) = R.write(1) and π(R.read() → a) = R.write(0) if the execution is atomic. - -To prove the theorem above, we should first show that the register is safe, next show that the register is regular, and finally show that the register does not allow new/old inversion, which proves atomicity. - -By the definition of the atomic register we know that a single-writer multi-reader atomic register is regular and satisfies the no new/old inversion property. So, we only need to show that a regular register with no new/old inversion is atomic. - -We know that for any two read invocations r1 and r2, when the register is regular and there is no new/old inversion, (r1 →H r2) ⇒ sn(π(r1)) ≤ sn(π(r2)). For any execution M there is a total order S which includes the same operation invocations. S is built as follows: - -we start from the total order on the write operations and insert the read operations as follows. First, each read operation r is inserted after its associated write operation π(r). Second, if two read operations r1 and r2 have the same sequence number (sn(r1) = sn(r2)), the one that starts first in the execution is inserted first. - -S includes all the operation invocations of M, from which it follows that S and M are equivalent. Since all the operations are ordered by their sequence numbers, S is clearly a total order. Furthermore, this total order only adds order between operations that overlap in M, so it is consistent with the order of M. If there is no overlap between read and write operations, there is no difference between regularity and atomicity.
Finally, we can state that S is legal since each read operation gets the last written value that comes before it in the total order. Therefore, the corresponding history is linearizable. Since this reasoning does not rely on a particular history H, it implies that the register is atomic. - -Since atomicity (linearizability) is a local property, we can state that a set of SWMR regular registers behaves atomically as soon as each of them satisfies the no new/old inversion property. - -* Atomic semantics - -* Safe semantics diff --git a/wiki/wikipedia/442.txt b/wiki/wikipedia/442.txt deleted file mode 100644 index 92b2e059fe5d5042c0e5beb8cb807801266a404e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/442.txt +++ /dev/null @@ -1,43 +0,0 @@ -In probability theory, the martingale representation theorem states that a random variable that is measurable with respect to the filtration generated by a Brownian motion can be written in terms of an Itô integral with respect to this Brownian motion. - -The theorem only asserts the existence of the representation and does not help to find it explicitly; it is possible in many cases to determine the form of the representation using Malliavin calculus. - -Similar theorems also exist for martingales on filtrations induced by jump processes, for example, by Markov chains. - -Let $B_t$ be a Brownian motion on a standard filtered probability space $(\Omega, \mathcal{F},\mathcal{F}_t, P )$ and let $\mathcal{G}_t$ be the augmented filtration generated by $B$. If X is a square integrable random variable measurable with respect to $\mathcal{G}_\infty$, then there exists a predictable process C which is adapted with respect to $\mathcal{G}_t$, such that -$$ -X = E(X) + \int_0^\infty C_sdB_s. -$$ - -Consequently, -$$ - E(X| \mathcal{G}_t) = E(X) + \int_0^t C_s d B_s. -$$ - -The martingale representation theorem can be used to establish the existence of a hedging strategy. - -Suppose that $\left ( M_t \right )_{0 \le t < \infty}$ is a Q-martingale process, whose volatility $\sigma_t$ is always non-zero. - -Then, if $\left ( N_t \right )_{0 \le t < \infty}$ is any other Q-martingale, there exists an $\mathcal{F}$-previsible process $\varphi$, unique up to sets of measure 0, such that $\int_0^T \varphi_t^2 \sigma_t^2 dt < \infty$ with probability one, and N can be written as: -$$ -N_t = N_0 + \int_0^t \varphi_s d M_s. -$$ - -The replicating strategy is defined to be: - -* hold $\varphi_t$ units of the stock at the time t, and - -* hold $\psi_t B_t = C_t - \varphi_t Z_t$ units of the bond, - -where $Z_t$ is the stock price discounted by the bond price to time $t$ and $C_t$ is the expected payoff of the option at time $t$. - -At the expiration date T, the value of the portfolio is: -$$ -V_T = \varphi_T S_T + \psi_T B_T = C_T = X -$$ - -and it's easy to check that the strategy is self-financing: the change in the value of the portfolio only depends on the change of the asset prices $\left ( dV_t = \varphi_t dS_t + \psi_t dB_t \right ) $.
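As a concrete illustration of the representation theorem, here is a standard worked example (ours, not from the original article): for the claim $X = B_T^2$, Itô's formula gives $d(B_t^2) = 2 B_t dB_t + dt$, hence
$$
B_T^2 = T + \int_0^T 2 B_s dB_s = E(X) + \int_0^T C_s dB_s \quad \text{with} \quad C_s = 2 B_s,
$$
so in this case the predictable integrand can be written down explicitly; for general claims no closed form is available, which is where the Malliavin calculus mentioned above comes in.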
diff --git a/wiki/wikipedia/443.txt b/wiki/wikipedia/443.txt deleted file mode 100644 index 49231a7f2044df871d61d2b1992a7d2a120b5f06..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/443.txt +++ /dev/null @@ -1,17 +0,0 @@ -In mathematics, Cartan's theorems A and B are two results proved by Henri Cartan around 1951, concerning a coherent sheaf F on a Stein manifold X. They are significant both as applied to several complex variables, and in the general development of sheaf cohomology. - -Theorem A. F is spanned by its global sections. - -Theorem B is stated in cohomological terms (a formulation that Cartan (1953, p. 51) attributes to J.-P. Serre): - -Theorem B. H p(X, F) = 0 for all p > 0. - -Analogous properties were established by Serre (1957) for coherent sheaves in algebraic geometry, when X is an affine scheme. The analogue of Theorem B in this context is as follows: - -Theorem B (Scheme theoretic analogue). Let X be an affine scheme, F a quasi-coherent sheaf of OX-modules for the Zariski topology on X. Then H p(X, F) = 0 for all p > 0. - -These theorems have many important applications. For instance, they imply that a holomorphic function on a closed complex submanifold, Z, of a Stein manifold X can be extended to a holomorphic function on all of X. At a deeper level, these theorems were used by Jean-Pierre Serre to prove the GAGA theorem. - -Theorem B is sharp in the sense that if H 1(X, F) = 0 for all coherent sheaves F on a complex manifold X (resp. quasi-coherent sheaves F on a noetherian scheme X), then X is Stein (resp. affine). - -* See also Cousin problems diff --git a/wiki/wikipedia/444.txt b/wiki/wikipedia/444.txt deleted file mode 100644 index a2fbab8c2b91e1b39cd812de31fc45a2c49d92c8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/444.txt +++ /dev/null @@ -1,7 +0,0 @@ -The digraph realization problem is a decision problem in graph theory. Given pairs of nonnegative integers $((a_1,b_1),\ldots,(a_n,b_n))$, the problem asks whether there is a labeled simple directed graph such that each vertex $v_i$ has indegree $a_i$ and outdegree $b_i$. - -The problem belongs to the complexity class P. Two algorithms are known that prove this. The first is the Kleitman–Wang algorithm, which constructs a special solution recursively. The second is the characterization given by the Fulkerson–Chen–Anstee theorem, i.e., one has to validate $n$ inequalities, as sketched below. - -The problem can also be stated in terms of zero-one matrices. The connection can be seen if one realizes that each directed graph has an adjacency matrix where the column sums and row sums correspond to $(a_1,\ldots,a_n)$ and $(b_1,\ldots,b_n)$. Note that the diagonal of the matrix only contains zeros. The problem is then often denoted by 0-1-matrices for given row and column sums. In the classical literature the problem was sometimes stated in the context of contingency tables with given marginals. - -Similar problems describe the degree sequences of simple graphs, simple directed graphs with loops, and simple bipartite graphs. The first problem is the so-called graph realization problem. The second and third are equivalent and are known as the bipartite realization problem. Chen gives a characterization for directed multigraphs with a bounded number of parallel arcs and loops realizing a given degree sequence. The additional constraint of acyclicity of the directed graph is known as dag realization.
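For illustration, here is a short Python sketch of the Fulkerson–Chen–Anstee test mentioned above; we state the inequalities in one common form, which assumes the pairs are sorted into non-increasing lexicographic order (the function name and the test cases are ours):

```python
def is_digraphic(pairs):
    # pairs[i] = (a_i, b_i): demanded indegree and outdegree of vertex i.
    pairs = sorted(pairs, reverse=True)  # non-increasing lexicographic order
    a = [p[0] for p in pairs]
    b = [p[1] for p in pairs]
    n = len(pairs)
    if sum(a) != sum(b):
        return False  # every arc contributes one indegree and one outdegree
    for k in range(1, n + 1):
        lhs = sum(a[:k])
        rhs = (sum(min(b[i], k - 1) for i in range(k))
               + sum(min(b[i], k) for i in range(k, n)))
        if lhs > rhs:  # one of the n inequalities fails
            return False
    return True

print(is_digraphic([(1, 1), (1, 1)]))  # True: realized by a directed 2-cycle
print(is_digraphic([(1, 1)]))          # False: would require a self-loop
```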
Nichterlein proved the NP-completeness of the dag realization problem. Berger showed that the class of opposed sequences is in P. The problem of uniformly sampling a directed graph with a fixed degree sequence is to construct a solution of the digraph realization problem with the additional constraint that each solution comes up with the same probability. This problem was shown to be in FPTAS for regular sequences; the general problem is still unsolved. diff --git a/wiki/wikipedia/445.txt b/wiki/wikipedia/445.txt deleted file mode 100644 index 19a5c79d4132cfc7e195b758f039588d56a3fd8e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/445.txt +++ /dev/null @@ -1,11 +0,0 @@ -In differential geometry, the Tait–Kneser theorem states that, if a smooth plane curve has monotonic curvature, then the osculating circles of the curve are disjoint and nested within each other. - -The logarithmic spiral and the Archimedean spiral provide examples of curves whose curvature is monotonic for the entire curve. This monotonicity cannot happen for a simple closed curve (by the four-vertex theorem, there are at least four vertices where the curvature reaches an extreme point) but for such curves the theorem can be applied to the arcs of the curves between its vertices. - -The theorem is named after Peter Tait, who published it in 1896, and Adolf Kneser, who rediscovered it and published it in 1912. Tait's proof follows simply from the properties of the evolute, the curve traced out by the centers of osculating circles. - -For curves with monotone curvature, the arc length along the evolute between two centers equals the difference in radii of the corresponding circles. - -This arc length must be greater than the straight-line distance between the same two centers, so the two circles have centers closer together than the difference of their radii, from which the theorem follows. - -Analogous disjointness theorems can be proved for the family of Taylor polynomials of a given smooth function, and for the osculating conics to a given smooth curve. diff --git a/wiki/wikipedia/446.txt b/wiki/wikipedia/446.txt deleted file mode 100644 index 330da26ee9e9a111bca51f3d3052f72068d5f763..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/446.txt +++ /dev/null @@ -1,19 +0,0 @@ -In mathematics, a Thue equation is a Diophantine equation of the form - -ƒ(x,y) = r, - -where ƒ is an irreducible bivariate form of degree at least 3 over the rational numbers, and r is a nonzero rational number. It is named after Axel Thue who in 1909 proved a theorem, now called Thue's theorem, that a Thue equation has finitely many solutions in integers x and y. - -The Thue equation is solvable effectively: there is an explicit bound on the solutions x, y of the form $(C_1 r)^{C_2}$ where constants C1 and C2 depend only on the form ƒ. A stronger result holds, that if K is the field generated by the roots of ƒ then the equation has only finitely many solutions with x and y integers of K and again these may be effectively determined. - -Thue's original proof that the equation named in his honour has finitely many solutions is through the proof of what is now known as Thue's theorem: it asserts that for any algebraic number $\alpha$ having degree $d \geq 3$ and for any $\varepsilon > 0 $ there exist only finitely many coprime integers $p, q$ with $ q > 0 $ such that $ |\alpha - p/q| < q^{-(d+1+\varepsilon)/2}$. Applying this theorem allows one to almost immediately deduce the finiteness of solutions.
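In outline (constants suppressed; this deduction sketch is ours, not from the original article): writing $f(x,y) = a_0 \prod_{i=1}^d (x - \alpha_i y)$, a solution of $f(x,y) = r$ with $|y|$ large forces $|\alpha_i - x/y| \leq C |y|^{-d}$ for some root $\alpha_i$, because the other factors $|x/y - \alpha_j|$ stay bounded away from zero; since $d \geq 3$ implies $d > (d+1+\varepsilon)/2$ for small $\varepsilon$, Thue's theorem leaves only finitely many such fractions $x/y$, and hence finitely many solutions. Small solutions can of course be found by direct search, as in this illustrative Python sketch (the particular equation is our example, not one from the article):

```python
# Brute-force search for small solutions of the Thue equation x^3 - 2*y^3 = 1.
# This cannot certify that no larger solutions exist; that requires the
# effective bounds discussed below.
B = 50
solutions = [(x, y)
             for x in range(-B, B + 1)
             for y in range(-B, B + 1)
             if x**3 - 2 * y**3 == 1]
print(solutions)  # found in this box: [(-1, -1), (1, 0)]
```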
However, Thue's proof, as well as subsequent improvements by Siegel, Dyson, and Roth, were all ineffective. - -Solving a Thue equation can be described as an algorithm ready for implementation in software. In particular, it is implemented in the following computer algebra systems: - -* in PARI/GP as functions thueinit() and thue(). - -* in the Magma computer algebra system as functions ThueObject() and ThueSolve(). - -* in Mathematica through Reduce. - -While there are several effective methods to solve Thue equations (including using Baker's method and Skolem's $p$-adic method), these are not able to give the best theoretical bounds on the number of solutions. One may qualify an effective bound $C(f,r)$ of the Thue equation $f(x,y) = r$ by the parameters it depends on, and how "good" the dependence is. The best results known today, essentially building on pioneering work of Bombieri and Schmidt, give a bound of the shape $C(f,r) = C \cdot (\deg f)^{1 + \omega(r)}$, where $C$ is an absolute constant (that is, independent of both $f$ and $r$) and $\omega(\cdot)$ is the number of distinct prime divisors of $r$. The most significant qualitative improvement to the theorem of Bombieri and Schmidt is due to Stewart, who obtained a bound of the form $C(f,r) = C \cdot (\deg f)^{1 + \omega(g)}$ where $g$ is a divisor of $r$ exceeding $|r|^{3/4}$ in absolute value. It is conjectured that one may take the bound $C(f,r) = C(\deg f)$; that is, depending only on the degree of $f$ but not its coefficients, and completely independent of the integer $r$ on the right hand side of the equation. This is a weaker form of a conjecture of Stewart, and is a special case of the uniform boundedness conjecture for rational points. This conjecture has been proven for "small" integers $r$, where smallness is measured in terms of the discriminant of the form $f$, by various authors, including Evertse, Stewart, and Akhtari. Stewart and Xiao demonstrated that a strong form of this conjecture, asserting that the number of solutions is absolutely bounded, holds on average (as $r$ ranges over the interval $|r| \leq Z $ with $Z \rightarrow \infty$). diff --git a/wiki/wikipedia/447.txt b/wiki/wikipedia/447.txt deleted file mode 100644 index be321cbdee83657c3dba1919c69e07904e447870..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/447.txt +++ /dev/null @@ -1,40 +0,0 @@ -Miquel's theorem is a result in geometry, named after Auguste Miquel, concerning the intersection of three circles, each drawn through one vertex of a triangle and two points on its adjacent sides. It is one of several results concerning circles in Euclidean geometry due to Miquel, whose work was published in Liouville's newly founded journal Journal de mathématiques pures et appliquées. - -Formally, let ABC be a triangle, with arbitrary points A´, B´ and C´ on sides BC, AC, and AB respectively (or their extensions). Draw three circumcircles (Miquel's circles) to triangles AB´C´, A´BC´, and A´B´C. Miquel's theorem states that these circles intersect in a single point M, called the Miquel point. In addition, the three angles MA´B, MB´C and MC´A (green in the diagram) are all equal, as are the three supplementary angles MA´C, MB´A and MC´B. - -The theorem (and its corollary) follow from the properties of cyclic quadrilaterals. Let the circumcircles of A'B'C and AB'C' meet at $ M \ne A. $ Then $ \angle A'MC' = 2\pi - \angle B'MA' - \angle C'MB' = 2\pi - (\pi - C) - (\pi - A) = A + C = \pi - B, $ hence BA'MC' is cyclic as desired.
If in the statement of Miquel's theorem the points A´, B´ and C´ form a triangle (that is, are not collinear), then the theorem is also known as the Pivot theorem (the name used by Forder). (In the diagram these points are labeled P, Q and R.) - -If A´, B´ and C´ are collinear then the Miquel point is on the circumcircle of ∆ABC and conversely, if the Miquel point is on this circumcircle, then A´, B´ and C´ are on a line. - -If the fractional distances of A´, B´ and C´ along sides BC (a), CA (b) and AB (c) are da, db and dc, respectively, the Miquel point, in trilinear coordinates (x : y : z), is given by: -$$ -x=a \left(-a^2 d_a d_a^{'} + b^2 d_a d_b + c^2 d_a^{'} d_c^{'} \right) -$$ -$$ -y=b \left(a^2 d_a^{'} d_b^{'} - b^2 d_b d_b^{'} + c^2 d_b d_c \right) -$$ -$$ -z=c \left(a^2 d_a d_c + b^2 d_b^{'} d_c^{'} - c^2 d_c d_c^{'} \right), -$$ - -where $d_a^{'} = 1 - d_a$, etc. - -In the case da = db = dc = ½ the Miquel point is the circumcentre (cos α : cos β : cos γ). - -The theorem can be reversed to say: for three circles intersecting at M, a line can be drawn from any point A on one circle, through its intersection C´ with another to give B (at the second intersection). B is then similarly connected, via intersection at A´ of the second and third circles, giving point C. Points C, A and the remaining point of intersection, B´, will then be collinear, and the sides of triangle ABC will always pass through the circle intersections A´, B´ and C´. - -If the inscribed triangle XYZ is similar to the reference triangle ABC, then the point M of concurrence of the three circles is fixed for all such XYZ. - -The circumcircles of all four triangles of a complete quadrilateral meet at a point M. In the diagram above these are ∆ABF, ∆CDF, ∆ADE and ∆BCE. - -This result was announced, in two lines, by Jakob Steiner in the 1827/1828 issue of Gergonne's Annales de Mathématiques, but a detailed proof was given by Miquel. - -Let ABCDE be a convex pentagon. Extend all sides until they meet in five points F,G,H,I,K and draw the circumcircles of the five triangles CFD, DGE, EHA, AIB and BKC. Then the second intersection points (other than A,B,C,D,E), namely the new points M,N,P,R and Q are concyclic (lie on a circle). See diagram. - -The converse result is known as the Five circles theorem. - -Given points A, B, C, and D on a circle, and circles passing through each adjacent pair of points, the alternate intersections of these four circles at W, X, Y and Z then lie on a common circle. This is known as the six circles theorem. It is also known as the four circles theorem and, while generally attributed to Jakob Steiner, the only known published proof was given by Miquel. Wells refers to this as Miquel's theorem. - -There is also a three-dimensional analog, in which the four spheres passing through a point of a tetrahedron and points on the edges of the tetrahedron intersect in a common point. diff --git a/wiki/wikipedia/448.txt b/wiki/wikipedia/448.txt deleted file mode 100644 index 04edf628869a2d5abbac6d75d783f70621d1cb1b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/448.txt +++ /dev/null @@ -1,100 +0,0 @@ -A mathematical coincidence is said to occur when two expressions with no direct relationship show a near-equality which has no apparent theoretical explanation. - -For example, there is a near-equality close to the round number 1000 between powers of 2 and powers of 10: -$$ -2^{10} = 1024 \approx 1000 = 10^3. -$$ - -Some mathematical coincidences are used in engineering when one expression is taken as an approximation of another.
A mathematical coincidence often involves an integer, and the surprising feature is the fact that a real number arising in some context is considered by some standard as a "close" approximation to a small integer or to a multiple or power of ten, or more generally, to a rational number with a small denominator. Other kinds of mathematical coincidences, such as integers simultaneously satisfying multiple seemingly unrelated criteria or coincidences regarding units of measurement, may also be considered. In the class of those coincidences that are of a purely mathematical sort, some simply result from sometimes very deep mathematical facts, while others appear to come 'out of the blue'. - -Given the countably infinite number of ways of forming mathematical expressions using a finite number of symbols, the number of symbols used and the precision of approximate equality might be the most obvious way to assess mathematical coincidences; but there is no standard, and the strong law of small numbers is the sort of thing one has to appeal to with no formal opposing mathematical guidance. Beyond this, some sense of mathematical aesthetics could be invoked to adjudicate the value of a mathematical coincidence, and there are in fact exceptional cases of true mathematical significance (see Ramanujan's constant below, which made it into print some years ago as a scientific April Fools' joke). All in all, though, they are generally to be considered for their curiosity value or, perhaps, to encourage new mathematical learners at an elementary level. - -Sometimes simple rational approximations are exceptionally close to interesting irrational values. These are explainable in terms of large terms in the continued fraction representation of the irrational value, but further insight into why such improbably large terms occur is often not available. - -Rational approximants (convergents of continued fractions) to ratios of logs of different numbers are often invoked as well, making coincidences between the powers of those numbers. - -* A famous example is the approximation $\pi \approx 355/113$, correct to six decimal places; this high accuracy comes about because π has an unusually large next term in its continued fraction representation: $\pi = [3; 7, 15, 1, 292, \ldots]$. - -* A coincidence involving π and the golden ratio φ is given by $\pi \approx 4 / \sqrt{\varphi} = 3.1446\dots$. This is related to Kepler triangles. Some believe one or the other of these coincidences is to be found in the Great Pyramid of Giza, but it is highly improbable that this was intentional. - -* There is a sequence of six nines in pi that begins at the 762nd decimal place of the decimal representation of pi. For a randomly chosen normal number, the probability of any chosen sequence of six digits (whether six repetitions of one digit, as here, or any other fixed sequence such as 658020) occurring this early in the decimal representation is only 0.08%. Pi is conjectured, but not known, to be a normal number. - -* The first Feigenbaum constant is approximately equal to $10/(\pi - 1)$, with an error of about 0.005%. - -* The coincidence $2^{10} = 1024 \approx 1000 = 10^3$, correct to 2.4%, relates to the rational approximation $\textstyle\frac{\log10}{\log2} \approx 3.3219 \approx \frac{10}{3}$, or $ 2 \approx 10^{3/10}$ to within 0.3%. This relationship is used in engineering, for example to approximate a factor of two in power as 3 dB (actual is 3.0103 dB – see Half-power point), or to relate a kibibyte to a kilobyte; see binary prefix.
- -* This coincidence can also be expressed as $128 = 2^7 \approx 5^3 = 125$ (eliminating common factor of $2^3$, so also correct to 2.4%), which corresponds to the rational approximation $\textstyle\frac{\log5}{\log2} \approx 2.3219 \approx \frac{7}{3}$, or $ 2 \approx 5^{3/7}$ (also to within 0.3%). This is invoked for instance in shutter speed settings on cameras, as approximations to powers of two (128, 256, 512) in the sequence of speeds 125, 250, 500, etc, - -* The coincidence ${(3/2)}^{4} = (81/16) \approx 5$ is the famous coincidence leading historically to the development of meantone temperament, in which the $3/2$ perfect fifths and the $5/4$ major thirds are "tempered" slightly so that four $3/2$'s is approximately equal to $5/1$, or a $5/4$ major third up two octaves. This coincidence can also be written $80 \approx 81$, or $81/80 \approx 1$, where $81/80$ is the famous syntonic comma, which is "tempered out" in this tuning. - -* The coincidence $\sqrt[12]{2}\sqrt[7]{5} = 1.33333319\ldots \approx \frac43$ leads to the rational version of 12-TET, as noted by Johann Kirnberger. - -* The coincidence $\sqrt[8]{5}\sqrt[3]{35} = 4.00000559\ldots \approx 4$ leads to the rational version of quarter-comma meantone temperament. - -* The coincidence $\sqrt[9]{0.6}\sqrt[28]{4.9} = 0.99999999754\ldots \approx 1$ leads to the very tiny interval of $2^{9}3^{-28}5^{37}7^{-18}$ (about a millicent wide), which is the first 7-limit interval tempered out in 103169-TET. - -* The coincidence of powers of 2, above, leads to the approximation that three major thirds concatenate to an octave, ${(5/4)}^{3} \approx {2/1}$. This and similar approximations in music are called dieses. - -*$\pi^2\approx10;$ correct to about 1.3%. This can be understood in terms of the formula for the zeta function $\zeta(2)=\pi^2/6.$ This coincidence was used in the design of slide rules, where the "folded" scales are folded on $\pi$ rather than $\sqrt{10},$ because it is a more useful number and has the effect of folding the scales in about the same place. - -* $\pi^2\approx 227/23,$ correct to 0.0004%. - -* $ \pi^{3^2}/e^{2^3}\approx 10$, to about 5 decimal places. - -*$1^3+5^3+3^3=153$, $3^3+7^3+0^3=370$, $3^3+7^3+1^3=371$, and $4^3+0^3+7^3=407$ are all narcissistic numbers. - -*$588^2+2353^2=5882353 $, a prime number. The fraction 1/17 also produces 0.05882353 when rounded to 8 digits. - -*$2^1+6^2+4^3+6^4+7^5+9^6+8^7=2646798$. The largest number with this pattern is $1^1+2^2+1^3+5^4+7^5+6^6+9^7+2^8+6^9+2^{10}+2^{11}+0^{12}+3^{13}+9^{14}+6^{15}+2^{16}+3^{17}+5^{18}+3^{19}+9^{20}=12157692622039623539$. - -*$\sin(666^\circ)=\cos(6\cdot6\cdot6^\circ)=-\varphi/2$ (where $\varphi$ is the golden ratio), and $\phi(666)=6\cdot6\cdot6$ (where $\phi$ is Euler's totient function). - -* $6^9 = 10077696$, which is close to $10^7 = 10000000$. Also, $6^9 - 10(6^5) = 9999936$, which is even closer to $10^7 = 10000000$. - -* $7^6 = 117649$, which is very close to $10^7/85 = 117647.0588235$. This is why if you multiply $85$ by $7$ repeatedly, you eventually get to $10000165$. - -* $6^2 * 7^3 * 9^2 = 1000188$, which is very close to $10^6 = 1000000$. - -The speed of light is (by definition) exactly 299,792,458 m/s, extremely close to 3.0 × 108 m/s (300,000,000 m/s). This is a pure coincidence, as the meter was originally defined as 1/10,000,000 of the distance between the Earth's pole and equator along the surface at sea level, and the Earth's circumference just happens to be about 2/15 of a light-second. 
It is also roughly equal to one foot per nanosecond (the actual number is 0.9836 ft/ns). - -As seen from Earth, the angular diameter of the Sun varies between 31′27″ and 32′32″, while that of the Moon is between 29′20″ and 34′6″. The fact that the intervals overlap (the former interval is contained in the latter) is a coincidence, and has implications for the types of solar eclipses that can be observed from Earth. - -While not constant but varying depending on latitude and altitude, the numerical value of the acceleration caused by Earth's gravity on the surface lies between 9.74 and 9.87 m/s2, which is quite close to 10. This means that as a result of Newton's second law, the weight of a kilogram of mass on Earth's surface corresponds roughly to 10 newtons of force exerted on an object. - -This is related to the aforementioned coincidence that the square of pi is close to 10. One of the early definitions of the meter was the length of a pendulum whose half swing had a period equal to one second. Since the period of the full swing of a pendulum is approximated by the equation below, algebra shows that if this definition had been maintained, gravitational acceleration measured in meters per second per second would be exactly equal to π2. -$$ -T \approx 2\pi \sqrt\frac{L}{g} -$$ - -The upper limit of gravity on Earth's surface (9.87 m/s2) is equal to π2 m/s2 to four significant figures. It is approximately 0.6% greater than standard gravity (9.80665 m/s2). - -When it was discovered that the circumference of the Earth was very close to 40,000,000 times this value, the meter was redefined to reflect this, as it was a more objective standard (because the gravitational acceleration varies over the surface of the Earth). This had the effect of increasing the length of the meter by less than 1%, which was within the experimental error of the time. - -Another coincidence related to the gravitational acceleration g is that its value of approximately 9.8 m/s2 is equal to 1.03 light-year/year2, whose numerical value is close to 1. (So, simple-mindedly, if a body fell with acceleration g for a year, it would reach the speed of light.) This is related to the fact that g is close to 10 in SI units (m/s2), as mentioned above, combined with the fact that the number of seconds per year happens to be close to the numerical value of c/10, with c the speed of light in m/s. In fact, it has little to do with SI specifically: c/g is nearly 354 days, and 365/354 = 1.03. - -The Rydberg constant, when multiplied by the speed of light and expressed as a frequency, is close to $\frac{\pi^2}{3}\times 10^{15}\ \text{Hz}$: -$$ -\underline{3.2898}41960364(17) \times 10^{15}\ \text{Hz} = R_\infty c -$$ -$$ -\underline{3.2898}68133696\ldots = \frac{\pi^2}{3} -$$ - -As discovered by Randall Munroe, a cubic mile is close to $\frac{4}{3}\pi$ cubic kilometers (within 0.5%). This means that a sphere with radius n kilometers has almost exactly the same volume as a cube with side length n miles. - -The ratio of a mile to a kilometre is approximately the golden ratio. As a consequence, a Fibonacci number of miles is approximately the next Fibonacci number of kilometres. - -While not strictly a metric conversion coincidence, the aspect ratio of US letter paper is close to $\frac{\pi}{4}$ (within 2%) while the ratio of A4 is $\cos(\frac{\pi}{4})$. - -A density of one ounce per cubic foot is very close to one kilogram per cubic metre: 1 oz/ft³ = 1 oz * 0.028349523125 kg/oz / (1 ft * 0.3048 m/ft)³ ≈ 1.0012 kg/m³.
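These unit coincidences are easy to check numerically; here is a quick Python sketch using the exact legal definitions of the units quoted above:

```python
import math

MILE_IN_KM = 1.609344         # international mile, exact
OUNCE_IN_KG = 0.028349523125  # avoirdupois ounce, exact
FOOT_IN_M = 0.3048            # international foot, exact

# A cubic mile vs (4/3)*pi cubic kilometres (agreement within about 0.5%).
print(MILE_IN_KM ** 3, 4 / 3 * math.pi)

# The mile/kilometre ratio vs the golden ratio.
print(MILE_IN_KM, (1 + math.sqrt(5)) / 2)

# One ounce per cubic foot expressed in kg/m^3 (about 1.0012).
print(OUNCE_IN_KG / FOOT_IN_M ** 3)
```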
The fine-structure constant $\alpha$ is close to, and was once conjectured to be precisely equal to, $\frac1{137}$. -$$ -\alpha = \frac1{137.035999074\dots} -$$ - -Although this coincidence is not as strong as some of the others in this section, it is notable that $\alpha$ is a dimensionless physical constant, so this coincidence is not an artifact of the system of units being used. - -The radius of geostationary orbit is within 0.02% of the variation of the distance of the Moon in a month (the difference between its apogee and perigee), and within about 5% of the length of the equator. - -The normal value of human body temperature (about 37.0 °C, 98.6 °F) is almost precisely equal to $10\pi^2$ °F ($\approx 98.70$ °F) or $10\pi^3$ K ($\approx 310.06$ K). It is also approximately equal to $\pi^4$ °F ($\approx 97.41$ °F). diff --git a/wiki/wikipedia/449.txt b/wiki/wikipedia/449.txt deleted file mode 100644 index 78c9f9871f3f3f850a18c283f73ba0fb4f8a0b42..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/449.txt +++ /dev/null @@ -1,167 +0,0 @@ -In Galois theory, the inverse Galois problem concerns whether or not every finite group appears as the Galois group of some Galois extension of the rational numbers $\mathbb{Q}$. This problem, first posed in the early 19th century, is unsolved. - -There are some permutation groups for which generic polynomials are known, which define all algebraic extensions of $\mathbb{Q}$ having a particular group as Galois group. These groups include all of degree no greater than 5. There also are groups known not to have generic polynomials, such as the cyclic group of order 8. - -More generally, let G be a given finite group, and let K be a field. Then the question is this: is there a Galois extension field L/K such that the Galois group of the extension is isomorphic to G? One says that G is realizable over K if such a field L exists. - -There is a great deal of detailed information in particular cases. It is known that every finite group is realizable over any function field in one variable over the complex numbers $\mathbb{C}$, and more generally over function fields in one variable over any algebraically closed field of characteristic zero. Igor Shafarevich showed that every finite solvable group is realizable over $\mathbb{Q}$. It is also known that every sporadic group, except possibly the Mathieu group M23, is realizable over $\mathbb{Q}$. - -David Hilbert had shown that this question is related to a rationality question for G: - -If K is any extension of $\mathbb{Q}$, on which G acts as an automorphism group and the invariant field $K^G$ is rational over $\mathbb{Q}$, then G is realizable over $\mathbb{Q}$. - -Here rational means that it is a purely transcendental extension of $\mathbb{Q}$, generated by an algebraically independent set. This criterion can for example be used to show that all the symmetric groups are realizable. - -Much detailed work has been carried out on the question, which is in no sense solved in general. Some of this is based on constructing G geometrically as a Galois covering of the projective line: in algebraic terms, starting with an extension of the field $\mathbb{Q}(t)$ of rational functions in an indeterminate t. After that, one applies Hilbert's irreducibility theorem to specialise t, in such a way as to preserve the Galois group. - -All permutation groups of degree 16 or less are known to be realizable over $\mathbb{Q}$; the group PSL(2,16):2 of degree 17 may not be.
- -All 13 non-abelian simple groups smaller than PSL(2,25) (order 7800) are known to be realizable over $\mathbb{Q}$. - -It is possible, using classical results, to construct explicitly a polynomial whose Galois group over $\mathbb{Q}$ is the cyclic group Z/nZ for any positive integer n. To do this, choose a prime p such that p ≡ 1 (mod n); this is possible by Dirichlet's theorem. Let Q(μ) be the cyclotomic extension of $\mathbb{Q}$ generated by μ, where μ is a primitive p-th root of unity; the Galois group of Q(μ)/Q is cyclic of order p − 1. - -Since n divides p − 1, the Galois group has a cyclic subgroup H of order (p − 1)/n. The fundamental theorem of Galois theory implies that the corresponding fixed field, F = Q(μ)^H, has Galois group Z/nZ over $\mathbb{Q}$. By taking appropriate sums of conjugates of μ, following the construction of Gaussian periods, one can find an element α of F that generates F over $\mathbb{Q}$, and compute its minimal polynomial. - -This method can be extended to cover all finite abelian groups, since every such group appears in fact as a quotient of the Galois group of some cyclotomic extension of $\mathbb{Q}$. (This statement should not, though, be confused with the Kronecker–Weber theorem, which lies significantly deeper.) - -For n = 3, we may take p = 7. Then Gal(Q(μ)/Q) is cyclic of order six. Let us take the generator η of this group which sends μ to μ^3. We are interested in the subgroup H = {1, η^3} of order two. Consider the element α = μ + η^3(μ). By construction, α is fixed by H, and only has three conjugates over $\mathbb{Q}$: - -α = η^0(α) = μ + μ^6, - -β = η^1(α) = μ^3 + μ^4, - -γ = η^2(α) = μ^2 + μ^5. - -Using the identity: - -1 + μ + μ^2 + ⋯ + μ^6 = 0, - -one finds that - -α + β + γ = −1, - -αβ + βγ + γα = −2, - -αβγ = 1. - -Therefore α is a root of the polynomial - -(x − α)(x − β)(x − γ) = x^3 + x^2 − 2x − 1, - -which consequently has Galois group Z/3Z over $\mathbb{Q}$. - -Hilbert showed that all symmetric and alternating groups are represented as Galois groups of polynomials with rational coefficients. - -The polynomial x^n + ax + b has discriminant -$$ -(-1)^{\frac{n(n-1)}{2}} \!\left( n^n b^{n-1} + (-1)^{1-n} (n-1)^{n-1} a^n \right)\!. -$$ - -We take the special case - -f(x, s) = x^n − sx − s. - -Substituting a prime integer for s in f(x, s) gives a polynomial (called a specialization of f(x, s)) that by Eisenstein's criterion is irreducible. Then f(x, s) must be irreducible over $\mathbb{Q}(s)$. Furthermore, f(x, s) can be written -$$ -x^n - \tfrac{x}{2} - \tfrac{1}{2} - \left( s - \tfrac{1}{2} \right)\!(x+1) -$$ - -and f(x, 1/2) can be factored to: -$$ -\tfrac{1}{2} (x-1)\!\left( 1+ 2x + 2x^2 + \cdots + 2 x^{n-1} \right) -$$ - -whose second factor is irreducible (but not by Eisenstein's criterion); only the reciprocal polynomial is irreducible by Eisenstein's criterion. Since the specialization at s = 1/2 thus splits into a linear factor and an irreducible factor of degree n − 1, the stabilizer of a root in Gal(f(x, s)/Q(s)) acts transitively on the remaining n − 1 roots; together with the transitivity coming from the irreducibility of f(x, s), this shows that the group Gal(f(x, s)/Q(s)) is doubly transitive. - -We can then find that this Galois group has a transposition. Use the scaling (1 − n)x = ny to get -$$ - y^n - \left \{ s \left ( \frac{1-n}{n} \right )^{n-1} \right \} y - \left \{ s \left ( \frac{1-n}{n} \right )^n \right \} -$$ - -and with -$$ - t = \frac{s (1-n)^{n-1}}{n^n}, -$$ - -we arrive at: - -g(y, t) = y^n − nty + (n − 1)t - -which can be arranged to - -y^n − y − (n − 1)(y − 1) + (t − 1)(−ny + n − 1). - -Then g(y, 1) has 1 as a double zero and its other n − 2 zeros are simple, and a transposition in Gal(f(x, s)/Q(s)) is implied.
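Returning briefly to the cyclic n = 3 example above: the roots of x^3 + x^2 − 2x − 1 are the real numbers μ^k + μ^{−k} = 2cos(2πk/7) for k = 1, 2, 3, which is easy to confirm numerically. A quick check (numpy is my choice of tool here):

```python
import numpy as np

# Roots of the cubic from the worked example above.
roots = sorted(np.roots([1, 1, -2, -1]).real)
# The claimed roots alpha, gamma, beta are 2*cos(2*pi*k/7) for k = 1, 2, 3.
expected = sorted(2 * np.cos(2 * np.pi * k / 7) for k in (1, 2, 3))
print(np.allclose(roots, expected))  # True
```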
Any finite doubly transitive permutation group containing a transposition is a full symmetric group. - -Hilbert's irreducibility theorem then implies that an infinite set of rational numbers give specializations of f(x, t) whose Galois groups are S_n over the rational field $\mathbb{Q}$. In fact this set of rational numbers is dense in $\mathbb{Q}$. - -The discriminant of g(y, t) equals -$$ - (-1)^{\frac{n(n-1)}{2}} n^n (n-1)^{n-1} t^{n-1} (1-t), -$$ - -and this is not in general a perfect square. - -Solutions for alternating groups must be handled differently for odd and even degrees. - -Let -$$ -t = 1 - (-1)^{\tfrac{n(n-1)}{2}} n u^2 -$$ - -Under this substitution the discriminant of g(y, t) equals - -\begin{align} - -(-1)^{\frac{n(n-1)}{2}} n^n (n-1)^{n-1} t^{n-1} (1-t) - -&= (-1)^{\frac{n(n-1)}{2}} n^n (n-1)^{n-1} t^{n-1} \left (1 - \left (1 - (-1)^{\tfrac{n(n-1)}{2}} n u^2 \right ) \right) \\ - -&= (-1)^{\frac{n(n-1)}{2}} n^n (n-1)^{n-1} t^{n-1} \left ((-1)^{\tfrac{n(n-1)}{2}} n u^2 \right ) \\ - -&= n^{n+1} (n-1)^{n-1} t^{n-1} u^2 - -\end{align} - -which is a perfect square when n is odd. - -Let -$$ -t = \frac{1}{1 + (-1)^{\tfrac{n(n-1)}{2}} (n-1) u^2} -$$ - -Under this substitution the discriminant of g(y, t) equals: - -\begin{align} - -(-1)^{\frac{n(n-1)}{2}} n^n (n-1)^{n-1} t^{n-1} (1-t) - -&= (-1)^{\frac{n(n-1)}{2}} n^n (n-1)^{n-1} t^{n-1} \left (1 - \frac{1}{1 + (-1)^{\tfrac{n(n-1)}{2}} (n-1) u^2} \right ) \\ - -&= (-1)^{\frac{n(n-1)}{2}} n^n (n-1)^{n-1} t^{n-1} \left ( \frac{\left ( 1 + (-1)^{\tfrac{n(n-1)}{2}} (n-1) u^2 \right ) - 1}{1 + (-1)^{\tfrac{n(n-1)}{2}} (n-1) u^2} \right ) \\ - -&= (-1)^{\frac{n(n-1)}{2}} n^n (n-1)^{n-1} t^{n-1} \left ( \frac{(-1)^{\tfrac{n(n-1)}{2}} (n-1) u^2}{1 + (-1)^{\tfrac{n(n-1)}{2}} (n-1) u^2} \right ) \\ - -&= (-1)^{\frac{n(n-1)}{2}} n^n (n-1)^{n-1} t^{n-1} \left (t (-1)^{\tfrac{n(n-1)}{2}} (n-1) u^2 \right ) \\ - -&= n^n (n-1)^n t^n u^2 - -\end{align} - -which is a perfect square when n is even. - -Again, Hilbert's irreducibility theorem implies the existence of infinitely many specializations whose Galois groups are alternating groups. - -Suppose that C1, …, Cn are conjugacy classes of a finite group G, and let A be the set of n-tuples (g1, …, gn) of elements of G such that gi is in Ci and the product g1…gn is trivial. Then A is called rigid if it is nonempty, G acts transitively on it by conjugation, and each element of A generates G. - -Thompson showed that if a finite group G has a rigid set then it can often be realized as a Galois group over a cyclotomic extension of the rationals. (More precisely, over the cyclotomic extension of the rationals generated by the values of the irreducible characters of G on the conjugacy classes Ci.) - -This can be used to show that many finite simple groups, including the monster group, are Galois groups of extensions of the rationals. The monster group is generated by a triad of elements of orders 2, 3, and 29. All such triads are conjugate. - -The prototype for rigidity is the symmetric group S_n, which is generated by an n-cycle and a transposition whose product is an (n − 1)-cycle. The construction in the preceding section used these generators to establish a polynomial's Galois group. - -Let n > 1 be any integer. A lattice Λ in the complex plane with period ratio τ has a sublattice Λ′ with period ratio nτ. The latter lattice is one of a finite set of sublattices permuted by the modular group PSL(2, Z), which is based on changes of basis for Λ. Let j denote the elliptic modular function of Felix Klein.
Define the polynomial φn as the product of the differences (X − j(Λi)) over the conjugate sublattices. As a polynomial in X, φn has coefficients that are polynomials over $\mathbb{Q}$ in j(τ). - -On the conjugate lattices, the modular group acts as PGL(2, Z/nZ). It follows that φn has Galois group isomorphic to PGL(2, Z/nZ) over $\mathbb{Q}(j(\tau))$. - -Use of Hilbert's irreducibility theorem gives an infinite (and dense) set of rational numbers specializing φn to polynomials with Galois group PGL(2, Z/nZ) over $\mathbb{Q}$. The groups PGL(2, Z/nZ) include infinitely many non-solvable groups. diff --git a/wiki/wikipedia/45.txt b/wiki/wikipedia/45.txt deleted file mode 100644 index 232e061e75cd1448dc63bd9691c1e260571a6b52..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/45.txt +++ /dev/null @@ -1,13 +0,0 @@ -In mathematics, an addition theorem is a formula such as that for the exponential function: - -e^{x + y} = e^x · e^y, - -that expresses, for a particular function f, f(x + y) in terms of f(x) and f(y). Slightly more generally, as is the case with the trigonometric functions sin and cos, several functions may be involved; this is more apparent than real in that case, since cos is an algebraic function of sin (in other words, we usually take both functions as defined on the unit circle). - -The scope of the idea of an addition theorem was fully explored in the nineteenth century, prompted by the discovery of the addition theorem for elliptic functions. To "classify" addition theorems it is necessary to put some restriction on the type of function G admitted, such that - -F(x + y) = G(F(x), F(y)). - -In this identity one can assume that F and G are vector-valued (have several components). An algebraic addition theorem is one in which G can be taken to be a vector of polynomials, in some set of variables. The conclusion of the mathematicians of the time was that the theory of abelian functions essentially exhausted the interesting possibilities: considered as a functional equation to be solved with polynomials, or indeed rational functions or algebraic functions, there were no further types of solution. - -In more contemporary language this appears as part of the theory of algebraic groups, dealing with commutative groups. The connected, projective variety examples are indeed exhausted by abelian functions, as is shown by a number of results characterising an abelian variety by rather weak conditions on its group law. The so-called quasi-abelian functions are all known to come from extensions of abelian varieties by commutative affine group varieties. Therefore, the old conclusions about the scope of global algebraic addition theorems can be said to hold. A more modern aspect is the theory of formal groups. diff --git a/wiki/wikipedia/450.txt b/wiki/wikipedia/450.txt deleted file mode 100644 index a32ba6196dffd56bc0346dea58984ebcad956ef4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/450.txt +++ /dev/null @@ -1,27 +0,0 @@ -In algebraic number theory, a reflection theorem or Spiegelungssatz (German for reflection theorem – see Spiegel and Satz) is one of a collection of theorems linking the sizes of different ideal class groups (or ray class groups), or the sizes of different isotypic components of a class group.
The original example is due to Ernst Eduard Kummer, who showed that the class number of the cyclotomic field $\mathbb{Q} \left( \zeta_p \right)$, with p a prime number, will be divisible by p if the class number of the maximal real subfield $\mathbb{Q} \left( \zeta_p \right)^{+}$ is. Another example is due to Scholz. A simplified version of his theorem states that if 3 divides the class number of a real quadratic field $\mathbb{Q} \left( \sqrt{d} \right)$, then 3 also divides the class number of the imaginary quadratic field $\mathbb{Q} \left( \sqrt{-3d} \right)$. - -Both of the above results are generalized by Leopoldt's "Spiegelungssatz", which relates the p-ranks of different isotypic components of the class group of a number field considered as a module over the Galois group of a Galois extension. - -Let L/K be a finite Galois extension of number fields, with group G, degree prime to p and L containing the p-th roots of unity. Let A be the p-Sylow subgroup of the class group of L. Let φ run over the irreducible characters of the group ring $\mathbb{Q}_p[G]$ and let A_φ denote the corresponding direct summands of A. For any φ let q = p^{φ(1)} and let the G-rank e_φ be the exponent in the index -$$ - [ A_\phi : A_\phi^p ] = q^{e_\phi} . -$$ - -Let ω be the character of G defined by -$$ - \zeta^g = \zeta^{\omega(g)} \text{ for } \zeta \in \mu_p . -$$ - -The reflection (Spiegelung) φ* is defined by -$$ - \phi^*(g) = \omega(g) \phi(g^{-1}) . -$$ - -Let E be the unit group of K. We say that a unit ε of E is "primary" if $K(\sqrt[p]{\epsilon})/K$ is unramified, and let E_0 denote the group of primary units modulo E^p. Let δ_φ denote the G-rank of the φ component of E_0. - -The Spiegelungssatz states that -$$ - | e_{\phi^*} - e_\phi | \le \delta_\phi . -$$ - -Extensions of this Spiegelungssatz were given by Oriat and Oriat-Satge, where class groups were no longer associated with characters of the Galois group of K/k, but rather by ideals in a group ring over the Galois group of K/k. Leopoldt's Spiegelungssatz was generalized in a different direction by Kuroda, who extended it to a statement about ray class groups. This was further developed into the very general "T-S reflection theorem" of Georges Gras. Kenkichi Iwasawa also provided an Iwasawa-theoretic reflection theorem. diff --git a/wiki/wikipedia/451.txt b/wiki/wikipedia/451.txt deleted file mode 100644 index e14cb978d618aeb879e5340e3fd3cf26f9f089f6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/451.txt +++ /dev/null @@ -1,21 +0,0 @@ -In mathematics or statistics, a quantitative variable may be continuous or discrete; its values are typically obtained by measuring (continuous) or counting (discrete). If it can take on two particular real values such that it can also take on all real values between them (even values that are arbitrarily close together), the variable is continuous in that interval. If it can take on a value such that there is a non-infinitesimal gap on each side of it containing no values that the variable can take on, then it is discrete around that value. In some contexts a variable can be discrete in some ranges of the number line and continuous in others. - -A continuous variable is a variable whose value is obtained by measuring, i.e., one which can take on an uncountable set of values. - -For example, a variable over a non-empty range of the real numbers is continuous if it can take on any value in that range. The reason is that any range of real numbers between $a$ and $b$ with $a, b \in \mathbb{R}; a \neq b$ is uncountable.
- -Methods of calculus are often used in problems in which the variables are continuous, for example in continuous optimization problems. - -In statistical theory, the probability distributions of continuous variables can be expressed in terms of probability density functions. - -In continuous-time dynamics, the variable time is treated as continuous, and the equation describing the evolution of some variable over time is a differential equation. The instantaneous rate of change is a well-defined concept. - -In contrast, a variable is discrete if and only if there exists a one-to-one correspondence between the set of values it can take on and a subset of $\mathbb{N}$, the set of natural numbers. In other words, a discrete variable over a particular range of real values is one for which, for any value in the range that the variable is permitted to take on, there is a positive minimum distance to the nearest other permissible value. The number of permitted values is either finite or countably infinite. Common examples are variables that must be integers, non-negative integers, positive integers, or only the integers 0 and 1. - -Methods of calculus do not readily lend themselves to problems involving discrete variables. Examples of problems involving discrete variables include integer programming. - -In statistics, the probability distributions of discrete variables can be expressed in terms of probability mass functions. - -In discrete-time dynamics, the variable time is treated as discrete, and the equation of evolution of some variable over time is called a difference equation. - -In econometrics and more generally in regression analysis, sometimes some of the variables being empirically related to each other are 0-1 variables, being permitted to take on only those two values. A variable of this type is called a dummy variable. If the dependent variable is a dummy variable, then logistic regression or probit regression is commonly employed. diff --git a/wiki/wikipedia/452.txt b/wiki/wikipedia/452.txt deleted file mode 100644 index a8806f7a24276bf899bddf9e4c648f7b97b977ed..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/452.txt +++ /dev/null @@ -1,63 +0,0 @@ -In geometry, Soddy's hexlet is a chain of six spheres (shown in grey in Figure 1), each of which is tangent to both of its neighbors and also to three mutually tangent given spheres. In Figure 1, the three spheres are the red inner sphere and two spheres (not shown) above and below the plane on which the centers of the hexlet spheres lie. In addition, the hexlet spheres are tangent to a fourth sphere (the blue outer sphere in Figure 1), which is not tangent to the three others. - -According to a theorem published by Frederick Soddy in 1937, it is always possible to find a hexlet for any choice of mutually tangent spheres A, B and C. Indeed, there is an infinite family of hexlets related by rotation and scaling of the hexlet spheres (Figure 1); in this, Soddy's hexlet is the spherical analog of a Steiner chain of six circles. Consistent with Steiner chains, the centers of the hexlet spheres lie in a single plane, on an ellipse. Soddy's hexlet was also discovered independently in Japan, as shown by Sangaku tablets from 1822 in Kanagawa prefecture. - -Soddy's hexlet is a chain of six spheres, labeled S1-S6, each of which is tangent to three given spheres, A, B and C, that are themselves mutually tangent at three distinct points.
(For consistency throughout the article, the hexlet spheres will always be depicted in grey, spheres A and B in green, and sphere C in blue.) The hexlet spheres are also tangent to a fourth fixed sphere D (always shown in red) that is not tangent to the three others, A, B and C. - -Each sphere of Soddy's hexlet is also tangent to its neighbors in the chain; for example, sphere S4 is tangent to S3 and S5. The chain is closed, meaning that every sphere in the chain has two tangent neighbors; in particular, the initial and final spheres, S1 and S6, are tangent to one another. - -The annular Soddy's hexlet is a special case (Figure 2), in which the three mutually tangent spheres consist of a single sphere of radius r (blue) sandwiched between two parallel planes (green) separated by a perpendicular distance 2r. In this case, Soddy's hexlet consists of six spheres of radius r packed like ball bearings around the central sphere and likewise sandwiched. The hexlet spheres are also tangent to a fourth sphere (red), which is not tangent to the other three. - -The chain of six spheres can be rotated about the central sphere without affecting their tangencies, showing that there is an infinite family of solutions for this case. As they are rotated, the spheres of the hexlet trace out a torus (a doughnut-shaped surface); in other words, a torus is the envelope of this family of hexlets. - -The general problem of finding a hexlet for three given mutually tangent spheres A, B and C can be reduced to the annular case using inversion. This geometrical operation always transforms spheres into spheres or into planes, which may be regarded as spheres of infinite radius. A sphere is transformed into a plane if and only if the sphere passes through the center of inversion. An advantage of inversion is that it preserves tangency; if two spheres are tangent before the transformation, they remain so after. Thus, if the inversion transformation is chosen judiciously, the problem can be reduced to a simpler case, such as the annular Soddy's hexlet. Inversion is reversible; repeating an inversion in the same point returns the transformed objects to their original size and position. - -Inversion in the point of tangency between spheres A and B transforms them into parallel planes, which may be denoted as a and b. Since sphere C is tangent to both A and B and does not pass through the center of inversion, C is transformed into another sphere c that is tangent to both planes; hence, c is sandwiched between the two planes a and b. This is the annular Soddy's hexlet (Figure 2). Six spheres s1-s6 may be packed around c and likewise sandwiched between the bounding planes a and b. Re-inversion restores the three original spheres, and transforms s1-s6 into a hexlet for the original problem. In general, these hexlet spheres S1-S6 have different radii. - -An infinite variety of hexlets may be generated by rotating the six balls s1-s6 in their plane by an arbitrary angle before re-inverting them. The envelope produced by such rotations is the torus that surrounds the sphere c and is sandwiched between the two planes a and b; thus, the torus has an inner radius r and outer radius 3r. After the re-inversion, this torus becomes a Dupin cyclide (Figure 3). - -The envelope of Soddy's hexlets is a Dupin cyclide, an inversion of the torus. 
Thus Soddy's construction shows that a cyclide of Dupin is the envelope of a 1-parameter family of spheres in two different ways, and each sphere in either family is tangent to two spheres in the same family and three spheres in the other family. This result was probably known to Charles Dupin, who discovered the cyclides that bear his name in his 1803 dissertation under Gaspard Monge. - -The intersection of the hexlet with the plane of its spherical centers produces a Steiner chain of six circles. - -In the following analysis, it is assumed that spheres A and B are the same size. - -In any elliptic hexlet, such as the one shown at the top of the article, there are two tangent planes to the hexlet. In order for an elliptic hexlet to exist, the radius of C must be less than one quarter that of A. If C's radius is one quarter of A's, each sphere of the chain will momentarily become a plane during its circuit. The inverted image still shows a normal elliptic hexlet, though; in the parabolic hexlet, a sphere turns into a plane precisely when its inverted image passes through the centre of inversion. In such a hexlet there is only one tangent plane to the hexlet. The line of the centres of a parabolic hexlet is a parabola. - -If C is even larger than that, a hyperbolic hexlet is formed, and now there are no tangent planes at all. Label the spheres S1 to S6. S1 thus cannot go very far until it becomes a plane (where its inverted image passes through the centre of inversion) and then reverses its concavity (where its inverted image surrounds the centre of inversion). Now the line of the centres is a hyperbola. - -The limiting case is when A, B and C are all the same size. The hexlet now becomes straight. S1 is small as it passes through the hole between A, B and C, and grows till it becomes a plane tangent to them. The centre of inversion is now also at a point of tangency with the image of S6, so it is also a plane tangent to A, B and C. As S1 proceeds, its concavity is reversed and now it surrounds all the other spheres, tangent to A, B, C, S2 and S6. S2 pushes upwards and grows to become a tangent plane and S6 shrinks. S1 then takes over S6's former position as a tangent plane. It then reverses concavity again and passes through the hole again, beginning another round trip. Now the line of centres is a degenerate hyperbola, where it has collapsed into two straight lines. - -Japanese mathematicians discovered the same hexlet over one hundred years before Soddy did. They analysed the packing problems in which circles and polygons, balls and polyhedrons come into contact and often found the relevant theorems independently before their discovery by Western mathematicians. They often published these as sangaku. The sangaku about the hexlet was made by Irisawa Shintarō Hiroatsu in the school of Uchida Itsumi, and dedicated to the Samukawa Shrine in May 1822. The original sangaku has been lost but was recorded in Uchida's book Kokonsankan in 1832. A replica of the sangaku was made from the record and dedicated to the Hōtoku museum in the Samukawa Shrine in August 2009. - -The sangaku by Irisawa consists of three problems. The third problem relates to Soddy's hexlet: "the diameter of the outer circumscribing sphere is 30 sun. The diameters of the nucleus balls are 10 sun and 6 sun each. The diameter of one of the balls in the chain of balls is 5 sun. Then I asked for the diameters of the remaining balls. The answer is 15 sun, 10 sun, 3.75 sun, 2.5 sun and 2 + 8/11 sun."
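The stated answer can be checked against the formulas from Irisawa's answer, which are reproduced in the next passage; here is a minimal Python sketch of that computation (variable names follow the notation used there):

```python
import math

D = 30.0                            # diameter of the outer sphere (sun)
a1, a2, c1 = D / 10, D / 6, D / 5   # diameter ratios: 3, 5 and 6

K = math.sqrt(3 * (a1*a2 + a2*c1 + c1*a1 - ((a1 + a2 + c1 + 1) / 2) ** 2))
c2 = (a1 + a2 + c1 - 1) / 2 - K
c3 = (3*a1 + 3*a2 - c1 - 3) / 2 - K
c4 = 2*a1 + 2*a2 - c1 - 2
c5 = (3*a1 + 3*a2 - c1 - 3) / 2 + K
c6 = (a1 + a2 + c1 - 1) / 2 + K

print([D / c for c in (c2, c3, c4, c5, c6)])
# [15.0, 10.0, 3.75, 2.5, 2.7272...] -- matching the recorded answer (2 + 8/11 = 2.7272...)
```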
- -In his answer, the method for calculating the diameters of the balls is written down; in modern notation it amounts to the following formulas. Let the ratios of the diameter of the outside ball to the diameters of the nucleus balls be a1, a2, and the ratios of its diameter to those of the chain balls be c1, ..., c6; we want to represent c2, ..., c6 in terms of a1, a2, and c1. If -$$ -K=\sqrt{3\left( a_1 a_2+a_2 c_1+c_1 a_1- \left( \frac{a_1+a_2+c_1+1}{2} \right)^2 \right)} -$$ - -then, - -\begin{align} - -c_2&=(a_1+a_2+c_1-1)/2-K \\ - -c_3&=(3a_1+3a_2-c_1-3)/2-K \\ - -c_4&=2a_1+2a_2-c_1-2 \\ - -c_5&=(3a_1+3a_2-c_1-3)/2+K \\ - -c_6&=(a_1+a_2+c_1-1)/2+K. - -\end{align} - -Then c1 + c4 = c2 + c5 = c3 + c6. - -If r1, ..., r6 are the diameters of the six balls, we get the formula: -$$ -\frac{1}{r_1}+\frac{1}{r_4}=\frac{1}{r_2}+\frac{1}{r_5}=\frac{1}{r_3}+\frac{1}{r_6}. -$$ diff --git a/wiki/wikipedia/453.txt b/wiki/wikipedia/453.txt deleted file mode 100644 index 43a3365bd66f2005666620c35ffd8ade4f7aec51..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/453.txt +++ /dev/null @@ -1,15 +0,0 @@ -In mathematics, Scheinerman's conjecture, now a theorem, states that every planar graph is the intersection graph of a set of line segments in the plane. This conjecture was formulated by E. R. Scheinerman in his Ph.D. thesis (1984), following earlier results that every planar graph could be represented as the intersection graph of a set of simple curves in the plane. It was proven by Jérémie Chalopin and Daniel Gonçalves (2009). - -For instance, the graph G shown below to the left may be represented as the intersection graph of the set of segments shown below to the right. Here, vertices of G are represented by straight line segments and edges of G are represented by intersection points.
[Figure: the planar graph G (left) and its representation as the intersection graph of line segments (right)]
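Building such a segment representation for a given planar graph is the hard direction (that is the content of the theorem), but the reverse direction — computing the intersection graph of a given set of segments — is elementary. A small sketch, with an assumed toy set of three pairwise-crossing segments (which yields the triangle K3); touching and collinear cases are deliberately ignored:

```python
def ccw(a, b, c):
    # twice the signed area of triangle abc
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def cross(s, t):
    # proper (interior) crossing test; endpoint touches and collinearity are ignored
    (p1, p2), (p3, p4) = s, t
    return (ccw(p3, p4, p1) > 0) != (ccw(p3, p4, p2) > 0) and \
           (ccw(p1, p2, p3) > 0) != (ccw(p1, p2, p4) > 0)

segments = {
    'a': ((0, 0), (2, 2)),
    'b': ((0, 2), (2, 0)),
    'c': ((0, 0.5), (2, 0.5)),
}
edges = {(u, v) for u in segments for v in segments
         if u < v and cross(segments[u], segments[v])}
print(sorted(edges))  # [('a', 'b'), ('a', 'c'), ('b', 'c')] -- the triangle K3
```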
- -Scheinerman also conjectured that segments with only three directions would be sufficient to represent 3-colorable graphs, and West conjectured that analogously every planar graph could be represented using four directions. If a graph is represented with segments having only k directions and no two segments belong to the same line, then the graph can be colored using k colors, one color for each direction. Therefore, if every planar graph can be represented in this way with only four directions, then the four color theorem follows. - -Hartman and de Fraysseix proved that every bipartite planar graph can be represented as an intersection graph of horizontal and vertical line segments; for this result see also Czyzowicz. De Castro proved that every triangle-free planar graph can be represented as an intersection graph of line segments having only three directions; this result implies Grötzsch's theorem that triangle-free planar graphs can be colored with three colors. de Fraysseix proved that if a planar graph G can be 4-colored in such a way that no separating cycle uses all four colors, then G has a representation as an intersection graph of segments. - -Chalopin proved that planar graphs are in 1-STRING, the class of intersection graphs of simple curves in the plane that intersect each other in at most one crossing point per pair. This class is intermediate between the intersection graphs of segments appearing in Scheinerman's conjecture and the intersection graphs of unrestricted simple curves from the result of Ehrlich et al. It can also be viewed as a generalization of the circle packing theorem, which shows the same result when curves are allowed to intersect in a tangent. The proof of the conjecture by Chalopin was based on an improvement of this result. diff --git a/wiki/wikipedia/454.txt b/wiki/wikipedia/454.txt deleted file mode 100644 index 4685554eae4353912596f69f87c89bf90659696b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/454.txt +++ /dev/null @@ -1,43 +0,0 @@ -In graph theory, the Heawood conjecture or Ringel–Youngs theorem gives a lower bound for the number of colors that are necessary for graph coloring on a surface of a given genus. For surfaces of genus 0, 1, 2, 3, 4, 5, 6, 7, ..., the required number of colors is 4, 7, 8, 9, 10, 11, 12, 12, ...; this quantity is known as the chromatic number or Heawood number of the surface. - -The conjecture was formulated in 1890 by Percy John Heawood and proven in 1968 by Gerhard Ringel and Ted Youngs. One case, the non-orientable Klein bottle, proved an exception to the general formula. An entirely different approach was needed for the much older problem of finding the number of colors needed for the plane or sphere, solved in 1976 as the four color theorem by Haken and Appel. On the sphere the lower bound is easy, whereas for higher genera the upper bound is easy and was proved in Heawood's original short paper that contained the conjecture. In other words, Ringel, Youngs and others had to construct extreme examples for every genus g = 1, 2, 3, .... If g = 12s + k, the genera fall into 12 cases according as k = 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11. To simplify, say that case k has been established if only a finite number of g's of the form 12s + k are in doubt.
Then the years in which the twelve cases were settled and by whom are the following: - -*1954, Ringel: case 5 - -*1961, Ringel: cases 3,7,10 - -*1963, Terry, Welch, Youngs: cases 0,4 - -*1964, Gustin, Youngs: case 1 - -*1965, Gustin: case 9 - -*1966, Youngs: case 6 - -*1967, Ringel, Youngs: cases 2,8,11 - -The last seven sporadic exceptions were settled as follows: - -*1967, Mayer: cases 18, 20, 23 - -*1968, Ringel, Youngs: cases 30, 35, 47, 59, and the conjecture was proved. - -Percy John Heawood conjectured in 1890 that for a given genus g > 0, the minimum number of colors necessary to color all graphs drawn on an orientable surface of that genus (or equivalently to color the regions of any partition of the surface into simply connected regions) is given by -$$ -\gamma (g) = \left \lfloor \frac{7 + \sqrt{1 + 48g}}{2} \right \rfloor, -$$ - -where $\left \lfloor x \right \rfloor$ is the floor function. - -Replacing the genus by the Euler characteristic, we obtain a formula that covers both the orientable and non-orientable cases, -$$ - \gamma(\chi) = \left \lfloor \frac{7 + \sqrt{49 - 24\chi}}2 \right \rfloor. -$$ - -This relation holds, as Ringel and Youngs showed, for all surfaces except for the Klein bottle. Philip Franklin (1930) proved that the Klein bottle requires at most 6 colors, rather than 7 as predicted by the formula. The Franklin graph can be drawn on the Klein bottle in a way that forms six mutually-adjacent regions, showing that this bound is tight. - -The upper bound, proved in Heawood's original short paper, is based on a greedy coloring algorithm. By manipulating the Euler characteristic, one can show that every graph embedded in the given surface must have at least one vertex of degree less than the given bound. If one removes this vertex, and colors the rest of the graph, the small number of edges incident to the removed vertex ensures that it can be added back to the graph and colored without increasing the needed number of colors beyond the bound. In the other direction, the proof is more difficult, and involves showing that in each case (except the Klein bottle) a complete graph with a number of vertices equal to the given number of colors can be embedded on the surface. - -The torus has g = 1, so χ = 0. Therefore, as the formula states, any subdivision of the torus into regions can be colored using at most seven colors. The illustration shows a subdivision of the torus in which each of seven regions is adjacent to each other region; this subdivision shows that the bound of seven on the number of colors is tight for this case. The boundary of this subdivision forms an embedding of the Heawood graph onto the torus. - -[Figure: Szilassi polyhedron model, each of whose 7 faces is adjacent to every other face.] diff --git a/wiki/wikipedia/455.txt b/wiki/wikipedia/455.txt deleted file mode 100644 index de13d302a99defc297fb1c9fdd8c126b585bcb86..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/455.txt +++ /dev/null @@ -1,24 +0,0 @@ -Mitchell's embedding theorem, also known as the Freyd–Mitchell theorem or the full embedding theorem, is a result about abelian categories; it essentially states that these categories, while rather abstractly defined, are in fact concrete categories of modules. This allows one to use element-wise diagram chasing proofs in these categories. The theorem is named after Barry Mitchell and Peter Freyd.
- -The precise statement is as follows: if A is a small abelian category, then there exists a ring R (with 1, not necessarily commutative) and a full, faithful and exact functor F: A → R-Mod (where the latter denotes the category of all left R-modules). - -The functor F yields an equivalence between A and a full subcategory of R-Mod in such a way that kernels and cokernels computed in A correspond to the ordinary kernels and cokernels computed in R-Mod. Such an equivalence is necessarily additive. - -The theorem thus essentially says that the objects of A can be thought of as R-modules, and the morphisms as R-linear maps, with kernels, cokernels, exact sequences and sums of morphisms being determined as in the case of modules. However, projective and injective objects in A do not necessarily correspond to projective and injective R-modules. - -Let $\mathcal{L} \subset \operatorname{Fun}(\mathcal{A}, Ab)$ be the category of left exact functors from the abelian category $\mathcal{A}$ to the category of abelian groups $Ab$. First we construct a contravariant embedding $H:\mathcal{A}\to\mathcal{L}$ by $H(A) = h^A$ for all $A\in\mathcal{A}$, where $h^A$ is the covariant hom-functor, $h^A(X)=\operatorname{Hom}_\mathcal{A}(A,X)$. The Yoneda Lemma states that $H$ is fully faithful and we also get the left exactness of $H$ very easily because $h^A$ is already left exact. The proof of the right exactness of $H$ is harder and can be read in Swan, Lecture Notes in Mathematics 76. - -After that we prove that $\mathcal{L}$ is an abelian category by using localization theory (also Swan). This is the hard part of the proof. - -It is easy to check that the abelian category $\mathcal{L}$ is an AB5 category with a generator -$$ -\bigoplus_{A\in\mathcal{A}} h^A -$$. - -In other words, it is a Grothendieck category and therefore has an injective cogenerator $I$. - -The endomorphism ring $R := \operatorname{Hom}_{\mathcal{L}} (I,I)$ is the ring we need for the category of R-modules. - -By $G(B) = \operatorname{Hom}_{\mathcal{L}} (B,I)$ we get another contravariant, exact and fully faithful embedding $G:\mathcal{L}\to R\operatorname{-Mod}.$ The composition $GH:\mathcal{A}\to R\operatorname{-Mod}$ is the desired covariant exact and fully faithful embedding. - -Note that the proof of the Gabriel–Quillen embedding theorem for exact categories is almost identical. diff --git a/wiki/wikipedia/456.txt b/wiki/wikipedia/456.txt deleted file mode 100644 index af0f30c698cb57347a3968a1c069ab1b6911edc0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/456.txt +++ /dev/null @@ -1,27 +0,0 @@ -In mathematical logic, Craig's theorem states that any recursively enumerable set of well-formed formulas of a first-order language is (primitively) recursively axiomatizable. This result is not related to the well-known Craig interpolation theorem, although both results are named after the same logician, William Craig. - -Let $A_1,A_2,\dots$ be an enumeration of the axioms of a recursively enumerable set T of first-order formulas. Construct another set T* consisting of -$$ -\underbrace{A_i\land\dots\land A_i}_i -$$ - -for each positive integer i. The deductive closures of T* and T are thus equivalent; the proof will show that T* is a recursive set. A decision procedure for T* can be obtained from the following informal reasoning. Each member of T* is either $A_1$ or of the form -$$ -\underbrace{B_j\land\dots\land B_j}_j.
-$$ - -Since each formula has finite length, it is checkable whether or not it is $A_1$ or of the said form. If it is of the said form and consists of j conjuncts, it is in T* if the (recurring) conjunct is $A_j$; otherwise it is not in T*. Again, it is checkable whether the conjunct is in fact $A_j$ by going through the enumeration of the axioms of T and then checking symbol-for-symbol whether the expressions are identical. - -The proof above shows that for each recursively enumerable set of axioms there is a recursive set of axioms with the same deductive closure. A set of axioms is primitive recursive if there is a primitive recursive function that decides membership in the set. To obtain a primitive recursive axiomatization, instead of replacing a formula $A_i$ with -$$ -\underbrace{A_i\land\dots\land A_i}_i -$$ - -one instead replaces it with -$$ -\underbrace{A_i\land\dots\land A_i}_{f(i)} -$$ (*) - -where f is a function that, given i, returns a computation history showing that $A_i$ is in the original recursively enumerable set of axioms. It is possible for a primitive recursive function to parse an expression of form (*) to obtain $A_i$ and the number of conjuncts j. Then, because Kleene's T predicate is primitive recursive, it is possible for a primitive recursive function to verify that j is indeed a computation history as required. - -If $T$ is a recursively axiomatizable theory and we divide its predicates into two disjoint sets $V_A$ and $V_B$, then those theorems of $T$ that are in the vocabulary $V_A$ are recursively enumerable, and hence, based on Craig's theorem, axiomatizable. Carl G. Hempel argued based on this that since all science's predictions are in the vocabulary of observation terms, the theoretical vocabulary of science is in principle eliminable. He himself raised two objections to this argument: 1) the new axioms of science are practically unmanageable, and 2) science uses inductive reasoning and eliminating theoretical terms may alter the inductive relations between observational sentences. Hilary Putnam argues that this argument is based on a misconception that the sole aim of science is successful prediction. He proposes that the main reason we need theoretical terms is that we wish to talk about theoretical entities (such as viruses, radio stars, and elementary particles). diff --git a/wiki/wikipedia/457.txt b/wiki/wikipedia/457.txt deleted file mode 100644 index a8b3e509d79a26257a62040f715a5e9dd29441fe..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/457.txt +++ /dev/null @@ -1,13 +0,0 @@ -In mathematics, Plateau's problem is to show the existence of a minimal surface with a given boundary, a problem raised by Joseph-Louis Lagrange in 1760. However, it is named after Joseph Plateau, who experimented with soap films. The problem is considered part of the calculus of variations. The existence and regularity problems are part of geometric measure theory. - -Various specialized forms of the problem were solved, but it was only in 1930 that general solutions were found in the context of mappings (immersions) independently by Jesse Douglas and Tibor Radó. Their methods were quite different; Radó's work built on the previous work of René Garnier and held only for rectifiable simple closed curves, whereas Douglas used completely new ideas with his result holding for an arbitrary simple closed curve. Both relied on setting up minimization problems; Douglas minimized the now-named Douglas integral while Radó minimized the "energy".
Douglas went on to be awarded the Fields Medal in 1936 for his efforts. - -The extension of the problem to higher dimensions (that is, for $k$-dimensional surfaces in $n$-dimensional space) turns out to be much more difficult to study. Moreover, while the solutions to the original problem are always regular, it turns out that the solutions to the extended problem may have singularities if $k \leq n - 2$. In the hypersurface case where $k = n - 1$, singularities occur only for $n \geq 8$. An example of such a singular solution of the Plateau problem is the Simons cone, a cone over $ S^3 \times S^3 $ in $\mathbb{R}^8$ that was first described by Jim Simons and was shown to be an area minimizer by Bombieri, De Giorgi and Giusti. To solve the extended problem in certain special cases, the theory of perimeters (De Giorgi) for codimension 1 and the theory of rectifiable currents (Federer and Fleming) for higher codimension have been developed. The theory guarantees existence of codimension 1 solutions that are smooth away from a closed set of Hausdorff dimension $n-8$. In the case of higher codimension Almgren proved existence of solutions with singular set of dimension at most $k-2$ in his regularity theorem. S. X. Chang, a student of Almgren, built upon Almgren's work to show that the singularities of 2-dimensional area-minimizing integral currents (in arbitrary codimension) form a finite discrete set. - -The axiomatic approach of Jenny Harrison and Harrison Pugh treats a wide variety of special cases. In particular, they solve the anisotropic Plateau problem in arbitrary dimension and codimension for any collection of rectifiable sets satisfying a combination of general homological, cohomological or homotopical spanning conditions. A different proof of Harrison-Pugh's results was obtained by Camillo De Lellis, Francesco Ghiraldin and Francesco Maggi. - -Physical soap films are more accurately modeled by the $(M, 0, \Delta)$-minimal sets of Frederick Almgren, but the lack of a compactness theorem makes it difficult to prove the existence of an area minimizer. In this context, a persistent open question has been the existence of a least-area soap film. Ernst Robert Reifenberg solved such a "universal Plateau's problem" for boundaries which are homeomorphic to single embedded spheres. diff --git a/wiki/wikipedia/458.txt b/wiki/wikipedia/458.txt deleted file mode 100644 index 6e0a8293ceb66028a236480fc7c007d916049624..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/458.txt +++ /dev/null @@ -1,37 +0,0 @@ -In theoretical computer science, Baker's technique is a method for designing polynomial-time approximation schemes (PTASs) for problems on planar graphs. It is named after Brenda Baker, who announced it in a 1983 conference and published it in the Journal of the ACM in 1994. - -The idea for Baker's technique is to break the graph into layers, such that the problem can be solved optimally on each layer, then combine the solutions from each layer in a reasonable way that will result in a feasible solution. This technique has given PTASs for the following problems: subgraph isomorphism, maximum independent set, minimum vertex cover, minimum dominating set, minimum edge dominating set, maximum triangle matching, and many others.
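To make the layering idea concrete before the formal pseudocode below, here is a minimal Python sketch for the unweighted maximum independent set case. The graph representation and helper names are my own, and the per-component solver is plain brute force rather than the k-outerplanar dynamic program discussed next, so this illustrates only the structure of the algorithm, not its polynomial running time:

```python
from itertools import combinations

def bfs_levels(graph, root):
    """BFS level of every vertex, assuming a connected graph (adjacency dict)."""
    level, frontier = {root: 0}, [root]
    while frontier:
        nxt = []
        for v in frontier:
            for w in graph[v]:
                if w not in level:
                    level[w] = level[v] + 1
                    nxt.append(w)
        frontier = nxt
    return level

def components(graph, keep):
    """Connected components of the subgraph induced on the vertex set `keep`."""
    seen, comps = set(), []
    for v in keep:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(w for w in graph[u] if w in keep)
        seen |= comp
        comps.append(comp)
    return comps

def brute_force_mis(graph, verts):
    """Exponential-time maximum independent set on a small vertex set."""
    vs = sorted(verts)
    for size in range(len(vs), 0, -1):
        for cand in combinations(vs, size):
            s = set(cand)
            if all(w not in s for v in s for w in graph[v]):
                return s
    return set()

def baker_mis(graph, k):
    level = bfs_levels(graph, next(iter(graph)))
    best = set()
    for ell in range(k):          # try all k ways of deleting every k-th BFS layer
        keep = {v for v in graph if level[v] % k != ell}
        sol = set()
        for comp in components(graph, keep):
            sol |= brute_force_mis(graph, comp)   # pieces are vertex-disjoint
        if len(sol) > len(best):
            best = sol
    return best

# smoke test: a 3x3 grid graph (planar); its true maximum independent set has 5 vertices
grid = {(i, j): [(i+di, j+dj) for di, dj in ((1,0),(-1,0),(0,1),(0,-1))
                 if 0 <= i+di < 3 and 0 <= j+dj < 3]
        for i in range(3) for j in range(3)}
print(len(baker_mis(grid, k=3)))  # prints 4, within the (1 - 1/k) guarantee of the optimum 5
```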
- -The bidimensionality theory of Erik Demaine, Fedor Fomin, MohammadTaghi Hajiaghayi, and Dimitrios Thilikos and its offshoot simplifying decompositions (Demaine, Demaine) generalizes and greatly expands the applicability of Baker's technique for a vast set of problems on planar graphs and more generally graphs excluding a fixed minor, such as bounded genus graphs, as well as to other classes of graphs not closed under taking minors such as the 1-planar graphs. - -The example that we will use to demonstrate Baker's technique is the maximum weight independent set problem. - -INDEPENDENT-SET($G$, $w$, $\epsilon$) - -Choose an arbitrary vertex $r$ -$$ -k = 1/\epsilon -$$ - -find the breadth-first search levels for $G$ rooted at $r$, grouped by level modulo $k$: $\{V_0,V_1, \ldots, V_{k-1} \}$ - -for $\ell = 0, \ldots, k-1$ - -find the components $G^\ell_1, G^\ell_2, \ldots,$ of $G$ after deleting $V_\ell$ - -for $i = 1,2, \ldots $ - -compute $S_i^\ell$, the maximum-weight independent set of $G_i^\ell$ -$$ -S^\ell = \cup_i S_i^\ell -$$ - -let $S^{\ell^*}$ be the solution of maximum weight among $\{S^0,S^1, \ldots, S^{k-1} \}$ - -return $S^{\ell^*}$ - -Notice that the above algorithm is feasible because each $S^\ell$ is the union of independent sets of vertex-disjoint, mutually non-adjacent subgraphs, and hence is itself an independent set. - -Dynamic programming is used when we compute the maximum-weight independent set for each $G_i^\ell$. This dynamic program works because each $G_i^\ell$ is a $k$-outerplanar graph. Many NP-complete problems can be solved with dynamic programming on $k$-outerplanar graphs. Baker's technique can be interpreted as covering the given planar graph with subgraphs of this type, finding the solution to each subgraph using dynamic programming, and gluing the solutions together. diff --git a/wiki/wikipedia/459.txt b/wiki/wikipedia/459.txt deleted file mode 100644 index aee3def6b3b7076c9cdda03624d1be8d14bb3e63..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/459.txt +++ /dev/null @@ -1,79 +0,0 @@ -Slitherlink (also known as Fences, Takegaki, Loop the Loop, Loopy, Ouroboros, Suriza and Dotty Dilemma) is a logic puzzle developed by publisher Nikoli. - -Slitherlink is played on a rectangular lattice of dots. Some of the squares formed by the dots have numbers inside them. The objective is to connect horizontally and vertically adjacent dots so that the lines form a simple loop with no loose ends. In addition, the number inside a square represents how many of its four sides are segments in the loop. - -Other types of planar graphs can be used in lieu of the standard grid, with varying numbers of edges per vertex or vertices per polygon. These patterns include snowflake, Penrose, Laves and Altair tilings. These add complexity by varying the number of possible paths from an intersection, and/or the number of sides to each polygon; but similar rules apply to their solution. - -Whenever the number of lines around a cell matches the number in the cell, the other potential lines must be eliminated. This is usually indicated by marking an X on lines known to be empty. - -Another useful notation when solving Slitherlink is a ninety degree arc between two adjacent lines, to indicate that exactly one of the two must be filled. A related notation is a double arc between adjacent lines, indicating that both or neither of the two must be filled. These notations are not necessary to the solution, but can be helpful in deriving it. - -Many of the methods below can be broken down into two simpler steps by use of arc notation.
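Before turning to the deduction rules, note that the puzzle's global constraints — every clue matched, every dot touched by either zero or two lines, and the chosen lines forming one single loop — can be checked mechanically. A brute-force sketch over all edge subsets of a tiny, made-up 2x2 puzzle, purely to make the rules concrete (the clue grid is an assumption for illustration and may admit zero or several loops):

```python
from collections import defaultdict
from itertools import product

R, C = 2, 2                      # a 2x2-cell board: 12 candidate edges, 2**12 subsets
clues = [[3, 2],
         [None, 3]]              # made-up clue grid; None marks an empty square

edges = [((r, c), (r, c + 1)) for r in range(R + 1) for c in range(C)]   # horizontal
edges += [((r, c), (r + 1, c)) for r in range(R) for c in range(C + 1)]  # vertical

def cell_sides(r, c):
    return [((r, c), (r, c + 1)), ((r + 1, c), (r + 1, c + 1)),
            ((r, c), (r + 1, c)), ((r, c + 1), (r + 1, c + 1))]

def is_single_loop(chosen):
    deg, adj = defaultdict(int), defaultdict(list)
    for a, b in chosen:
        deg[a] += 1; deg[b] += 1
        adj[a].append(b); adj[b].append(a)
    if any(d != 2 for d in deg.values()):    # every used dot lies on exactly 2 lines
        return False
    seen, stack = set(), [next(iter(adj))]
    while stack:                             # the lines must form ONE connected loop
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v])
    return len(seen) == len(adj)

count = 0
for mask in product((0, 1), repeat=len(edges)):
    chosen = {e for e, m in zip(edges, mask) if m}
    if not chosen:
        continue
    if any(clues[r][c] is not None and
           sum(s in chosen for s in cell_sides(r, c)) != clues[r][c]
           for r in range(R) for c in range(C)):
        continue
    if is_single_loop(chosen):
        count += 1
print(count, "loop(s) satisfy the clues")
```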
- -A key to many deductions in Slitherlink is that every point has either exactly two lines connected to it, or no lines. So if a point which is in the centre of the grid, not at an edge or corner, has three incoming lines which are X'd out, the fourth must also be X'd out. This is because the point cannot have just one line; it would have no exit route from that point. Similarly, if a point on the edge of the grid, not at a corner, has two incoming lines which are X'd out, the third must also be X'd out. And if a corner of the grid has one incoming line which is X'd out, the other must also be X'd out. - -Application of this simple rule leads to increasingly complex deductions. Recognition of these simple patterns will help greatly in solving Slitherlink puzzles. - -* If a 1 is in a corner, the actual corner's lines may be X'd out, because a line that entered said corner could not leave it except by passing by the 1 again. This also applies if two lines leading into the 1-box at the same corner are X'd out. - -* If a 3 is in a corner, the two outside edges of that box can be filled in because otherwise the rule above would have to be broken. - -* If a 2 is in a corner, two lines must be going away from the 2 at the border. - -* If a line comes into a corner of a 1, and the continuation that does not run along a side of the 1 is known to be blank, then the line must continue along one of the two near sides of the 1, so the two sides of the 1 opposite that corner can be X'd out. - -* This also applies in reverse. That is, if a line comes into the corner of a 1, and the two opposite edges of the 1 are already X'd out, the line cannot go away from the 1 since that would put Xs around all sides of the 1. - -* If two 1s are diagonally adjacent, then of the eight segments around those two cells, either the "inner" set of four segments sharing a common endpoint (the point shared by the 1s) or the other "outer" set of four segments must all be X'd out. Thus if any two inner or outer segments in one 1 are X'd, the respective inner or outer segments of the other 1 must also be X'd. - -* If two 1s are adjacent along the edge of the grid, the line between them can be X'd out, because there would be no direction for it to continue when it reached the edge. - -If a 2 has any surrounding line X'd, then a line coming into either of the two corners not adjacent to the X'd out line cannot immediately exit at right angles away from the 2, as then two lines around the 2 would be impossible, and can therefore be X'd. This means that the incoming line must continue on one side of the 2 or the other. This in turn means that the second line of the 2 must be on the only remaining free side, adjacent to the originally X'd line, so it can be filled in.
- -Conversely, if a 2 has a line on one side, and an adjacent X'd out line, then the second line must be in one of the two remaining sides, and exit from the opposite corner (in either direction). If either of those two exits is X'd out, then it must take the other route. - -* If a 3 is adjacent to a 0, either horizontally or vertically, then all edges of that 3 can be filled except for the one touching the 0. In addition, the two lines perpendicular to the adjacent boxes can be filled. - -* If two 3s are adjacent to each other horizontally or vertically, their common edge must be filled in, because the only other option is a closed oval that is impossible to connect to any other line. Second, the two outer lines of the group (parallel to the common line) must be filled in. Thirdly, the line through the 3s will always wrap around in an "S" shape. Therefore, the line between the 3s cannot continue in a straight line, and those sides which are in a straight line from the middle line can be X'd out. - -* If a 3 is adjacent to a 0 diagonally, both sides of the 3 that meet the 0's corner must be filled. This is because if either of those sides were open, the line ending in the corner of the 0 would have no place to go. This is similar to the 3-in-a-corner rule. - -* Similarly, if a 3 has a corner with Xs in both directions going away from that corner, then both sides of the 3 that meet that corner must be filled. This is because if one of those two sides of the 3 were open, the other would have to be filled (because the 3 can only have one open side) but would meet 3 Xs at that corner, which is impossible because each point on the grid must have exactly 2 or 0 lines. - -* If a line reaches a corner of a 3, there must be lines on both sides of the 3 that said corner is not adjacent to, because if the 3's sole empty space were not adjacent to it, the corner would have three lines connected to it. Furthermore, the segment leading away from the 3 at the corner reached by the line must be empty; if it were filled, neither of the remaining 2 undetermined sides of the 3 would be able to contain a line. - -* If two 3s are adjacent diagonally, the edges which do not run into the common point must be filled in. - -* Similarly, if two 3s are in the same diagonal, but separated by any number of 2s (and only 2s), the outside edges of the 3s must be filled in, just as if they were adjacent diagonally. - -* If there is a series of 2s in a diagonal line and an angled line meets the corner of the 2 at one end of the series, a matching angled line can be drawn all the way up the series. - -* If a line reaches the starting point (A) of a diagonal that contains one or more 2s and ends with a 3, both sides of the far corner (farthest from A on the diagonal) of the 3 must be filled. If this were not true, it would imply that both sides of the near corner of the 3 must be filled, which would imply that the near corners of all the 2s must be filled, including the 2 at the start of the diagonal, which is impossible because it conflicts with the line that has reached the starting point (A). - -* If a 1 and a 3 are adjacent diagonally and the outer two sides of the 1 are X'd out, then the outer two sides of the 3 must be filled in. - -* The opposite is also true: if the outer two sides of the 3 are filled in, then the outer two sides of the 1 must be X'd out.
- -* If a line reaches a corner of a 2, and the line must continue through one of the two connecting sides of the 2, then exactly one of the other two sides of the 2 must be filled, and that line must continue through one of the two connecting sides of the diagonally adjacent square. - -If a region of the lattice is closed-off (such that no lines can "escape"), and is not empty, there must be a non-zero, even number of lines entering the region that begin outside the region. (An odd number of lines entering implies an odd number of segment ends inside the region, which makes it impossible for all the segment ends to connect. If there are no such lines, the lines inside the region cannot connect with the lines outside, making a solution impossible.) Often, this rule will eliminate one or more otherwise feasible options. - -In the figure below, the line at the top-left will close off the top-right region of the lattice whether it proceeds down or to the right. The line to the right (around two sides of the 3) has entered the closed region. To satisfy the rule, the first line must enter the region, and the second line must not enter the region a second time. (Since the boundary of any closed region also closes off the remainder of the puzzle, the rule can also be applied to the larger, bottom-left region. To apply the rule, it is only necessary to count the lines crossing the boundary.) - -In an exceptionally difficult puzzle, one may use the Jordan curve theorem, which states that any open curve that starts and ends outside of a closed curve must intersect the closed curve an even number of times. In particular, this means that any row of the grid must have an even number of vertical lines and any column must have an even number of horizontal lines. When only one potential line segment in one of these groups is unknown, you can determine whether it is part of the loop or not with this theorem. - -A simple strategy to assist in using this theorem is to "paint" (sometimes called "shade") the outside and the inside areas. When you see two outside cells, or two inside cells next to each other, then you know that there is not a line between them. The converse is also true: if you know there is no line between two cells, then those cells must be the same "color" (both inside or both outside). Similarly, if an outside cell and an inside cell are adjacent, you know there must be a filled line between them; and again the converse is true. - -* If there are exactly two possible paths, A and B, between two points in the solution (two points that have been, or must be, reached by lines); and if a solution containing A must also work with B, and the reverse is not true; then B is the correct path, and the solution must pass through a point contained in A but not B. - -In the figure below, if a solution could pass through the top and right sides of the 2, then there must be another solution which is exactly the same except that it passes through the bottom and left sides of the 2, because the squares to the top and right of the 2 are unconstrained (do not contain numbers). Also, the solution must pass through the top-right corner of the 2, otherwise there must be another solution which is exactly the same except that it passes through the top and right sides of the 2. - -If there is a 2 in a corner, and the two non-diagonally adjacent squares are unconstrained, lines can be drawn as shown below. (In the figure, the question mark represents any number or blank, but the number will only be a 2 or 3. 
A puzzle with only one solution cannot have a 2 in a corner with two non-diagonally adjacent, unconstrained squares, and a diagonally adjacent 0 or 1.) - -* If there are two paths between two points, such that a solution containing one must also work with the other, then both paths can be ruled out. - -In the figure below, the circled points can be connected by a line directly between them, and also by a line that traverses the other three sides of the square that extends to the left of the points. It should be clear (with the red line ignored) that for both paths the remainder of the solution can be the same – since the constraints for the remainder of the solution are the same – so both paths are ruled out. - -Slitherlink is an original puzzle of Nikoli; it first appeared in Puzzle Communication Nikoli #26 (June 1989). The editor combined two original puzzles contributed there. At first, every square contained a number and the edges did not have to form a loop. - -Slitherlink puzzles have been featured in video games on several platforms. A game titled Slither Link was published in Japan by Bandai for the Wonderswan portable console in 2000. Slitherlink puzzles were included alongside Sudoku and Nonogram puzzles in the Loppi Puzzle Magazine: Kangaeru Puzzle series of games from Success for the Game Boy Nintendo Power cartridge in 2001. Slitherlink games were also featured for the Nintendo DS handheld game console, with Hudson Soft releasing Puzzle Series Vol. 5: Slitherlink in Japan on November 16, 2006, and Agetec including Slitherlink in its Nikoli puzzle compilation, Brain Buster Puzzle Pak, released in North America on June 17, 2007. diff --git a/wiki/wikipedia/46.txt b/wiki/wikipedia/46.txt deleted file mode 100644 index 426925196e3917c1865c2220e189271f62cdeffe..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/46.txt +++ /dev/null @@ -1,221 +0,0 @@ -In differential geometry, the Atiyah–Singer index theorem, proved by Michael Atiyah and Isadore Singer (1963), states that for an elliptic differential operator on a compact manifold, the analytical index (related to the dimension of the space of solutions) is equal to the topological index (defined in terms of some topological data). It includes many other theorems, such as the Chern–Gauss–Bonnet theorem and Riemann–Roch theorem, as special cases, and has applications to theoretical physics. - -The index problem for elliptic differential operators was posed by Israel Gel'fand. He noticed the homotopy invariance of the index, and asked for a formula for it by means of topological invariants. Some of the motivating examples included the Riemann–Roch theorem and its generalization the Hirzebruch–Riemann–Roch theorem, and the Hirzebruch signature theorem. Friedrich Hirzebruch and Armand Borel had proved the integrality of the Â genus of a spin manifold, and Atiyah suggested that this integrality could be explained if it were the index of the Dirac operator (which was rediscovered by Atiyah and Singer in 1961). - -The Atiyah–Singer theorem was announced in 1963. The proof sketched in this announcement was never published by them, though it appears in Palais's book. It appears also in the "Séminaire Cartan-Schwartz 1963/64" that was held in Paris simultaneously with the seminar led by Richard Palais at Princeton University. The last talk in Paris was by Atiyah on manifolds with boundary.
Their first published proof replaced the cobordism theory of the first proof with K-theory, and they used this to give proofs of various generalizations in another sequence of papers. - -*1965: Sergey P. Novikov published his results on the topological invariance of the rational Pontryagin classes on smooth manifolds. - -* Robion Kirby and Laurent C. Siebenmann's results, combined with René Thom's paper, proved the existence of rational Pontryagin classes on topological manifolds. The rational Pontryagin classes are essential ingredients of the index theorem on smooth and topological manifolds. - -*1969: Michael Atiyah defines abstract elliptic operators on arbitrary metric spaces. Abstract elliptic operators became protagonists in Kasparov's theory and Connes's noncommutative differential geometry. - -*1971: Isadore Singer proposes a comprehensive program for future extensions of index theory. - -*1972: Gennadi G. Kasparov publishes his work on the realization of K-homology by abstract elliptic operators. - -*1973: Atiyah, Raoul Bott, and Vijay Patodi gave a new proof of the index theorem using the heat equation, described in a paper by Melrose. - -*1977: Dennis Sullivan establishes his theorem on the existence and uniqueness of Lipschitz and quasiconformal structures on topological manifolds of dimension different from 4. - -*1983: Ezra Getzler, motivated by ideas of Edward Witten and Luis Alvarez-Gaumé, gave a short proof of the local index theorem for operators that are locally Dirac operators; this covers many of the useful cases. - -*1983: Nicolae Teleman proves that the analytical indices of signature operators with values in vector bundles are topological invariants. - -*1984: Teleman establishes the index theorem on topological manifolds. - -*1986: Alain Connes publishes his fundamental paper on noncommutative geometry. - -*1989: Simon K. Donaldson and Sullivan study Yang–Mills theory on quasiconformal manifolds of dimension 4. They introduce the signature operator S defined on differential forms of degree two. - -*1990: Connes and Henri Moscovici prove the local index formula in the context of non-commutative geometry. - -*1994: Connes, Sullivan, and Teleman prove the index theorem for signature operators on quasiconformal manifolds. - -The setting of the theorem is as follows: - -*X is a compact smooth manifold (without boundary). - -*E and F are smooth vector bundles over X. - -*D is an elliptic differential operator from E to F. So in local coordinates it acts as a differential operator, taking smooth sections of E to smooth sections of F. - -If D is a differential operator of order n in k variables $x_1, \dots, x_k$ on a Euclidean space, then its symbol is the function of 2k variables $$x_1, \dots, x_k, y_1, \dots, y_k$$, given by dropping all terms of order less than n and replacing $\partial/\partial x_i$ by $y_i$. So the symbol is homogeneous in the variables y, of degree n. The symbol is well defined even though $\partial/\partial x_i$ does not commute with $x_i$ because we keep only the highest order terms and differential operators commute "up to lower-order terms". The operator is called elliptic if the symbol is nonzero whenever at least one y is nonzero. - -Example: The Laplace operator in k variables has symbol $y_1^2 + \cdots + y_k^2$, and so is elliptic as this is nonzero whenever any of the $y_i$'s are nonzero. The wave operator has symbol $-y_1^2 + \cdots + y_k^2$, which is not elliptic if $k\ge 2$, as the symbol vanishes for some non-zero values of the ys.
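For one more worked instance of this definition, consider the Cauchy–Riemann operator $D = \partial/\partial x_1 + i\partial/\partial x_2$ in $k = 2$ variables, acting on complex-valued functions. It has order $n = 1$ and symbol
$$
\sigma(D)(y_1, y_2) = y_1 + i y_2,
$$
which vanishes only when $y_1 = y_2 = 0$, so $D$ is elliptic. By contrast, the heat operator $P = \partial/\partial x_1 - \partial^2/\partial x_2^2$ has order $n = 2$; dropping the lower-order term $\partial/\partial x_1$ leaves the symbol
$$
\sigma(P)(y_1, y_2) = -y_2^2,
$$
which vanishes at the non-zero covectors $(y_1, 0)$, so $P$ is not elliptic.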
- -The symbol of a differential operator of order n on a smooth manifold X is defined in much the same way using local coordinate charts, and is a function on the cotangent bundle of X, homogeneous of degree n on each cotangent space. (In general, differential operators transform in a rather complicated way under coordinate transforms (see jet bundle); however, the highest order terms transform like tensors so we get well defined homogeneous functions on the cotangent spaces that are independent of the choice of local charts.) More generally, the symbol of a differential operator between two vector bundles E and F is a section of the pullback of the bundle Hom(E, F) to the cotangent space of X. The differential operator is called elliptic if the element of $\operatorname{Hom}(E_x, F_x)$ is invertible for all non-zero cotangent vectors at any point x of X. - -A key property of elliptic operators is that they are almost invertible; this is closely related to the fact that their symbols are almost invertible. More precisely, an elliptic operator D on a compact manifold has a (non-unique) parametrix (or pseudoinverse) D′ such that DD′ − 1 and D′D − 1 are both compact operators. An important consequence is that the kernel of D is finite-dimensional, because all eigenspaces of compact operators, other than the kernel, are finite-dimensional. (The pseudoinverse of an elliptic differential operator is almost never a differential operator. However, it is an elliptic pseudodifferential operator.) - -As the elliptic differential operator D has a pseudoinverse, it is a Fredholm operator. Any Fredholm operator has an index, defined as the difference between the (finite) dimension of the kernel of D (solutions of Df = 0), and the (finite) dimension of the cokernel of D (the constraints on the right-hand-side of an inhomogeneous equation like Df = g, or equivalently the kernel of the adjoint operator). In other words, - -Index(D) = dim Ker(D) − dim Coker(D) = dim Ker(D) − dim Ker(D*). - -This is sometimes called the analytical index of D. - -Example: Suppose that the manifold is the circle (thought of as R/Z), and D is the operator d/dx − λ for some complex constant λ. (This is the simplest example of an elliptic operator.) Then the kernel is the space of multiples of exp(λx) if λ is an integral multiple of 2πi and is 0 otherwise, and the kernel of the adjoint is a similar space with λ replaced by its complex conjugate. So D has index 0. This example shows that the kernel and cokernel of elliptic operators can jump discontinuously as the elliptic operator varies, so there is no nice formula for their dimensions in terms of continuous topological data. However the jumps in the dimensions of the kernel and cokernel are the same, so the index, given by the difference of their dimensions, does indeed vary continuously, and can be given in terms of topological data by the index theorem. - -The topological index of an elliptic differential operator $D$ between smooth vector bundles $E$ and $F$ on an $n$-dimensional compact manifold $X$ is given by $$\operatorname{ch}(D)\operatorname{Td}(X)[X] = \int_X \operatorname{ch}(D)\operatorname{Td}(X)$$ in other words, the value of the top dimensional component of the mixed cohomology class $\operatorname{ch}(D) \operatorname{Td}(X)$ on the fundamental homology class of the manifold $X$. Here, - -*$\operatorname{Td}(X)$ is the Todd class of the complexified tangent bundle of $X$.
- -*$\operatorname{ch}(D)$ is equal to $\varphi^{-1}(\operatorname{ch}(d(p^*E,p^*F, \sigma(D)))) $, where - -**$\varphi: H^k(X;\mathbb{Q}) \to H^{n+k}(B(X)/S(X);\mathbb{Q})$ is the Thom isomorphism for the sphere bundle $p:B(X)/S(X) \to X$ - -**$\operatorname{ch}:K(X)\otimes\mathbb{Q} \to H^*(X;\mathbb{Q})$ is the Chern character - -**$d(p^*E,p^*F,\sigma(D))$ is the "difference element" in $K(B(X)/S(X))$ associated to two vector bundles $p^*E$ and $p^*F$ on $B(X)$ and an isomorphism $\sigma(D)$ between them on the subspace $S(X)$. - -**$\sigma(D)$ is the symbol of $D$ - -One can also define the topological index using only K-theory (and this alternative definition is compatible in a certain sense with the Chern-character construction above). If X is a compact submanifold of a manifold Y then there is a pushforward (or "shriek") map from K(TX) to K(TY). The topological index of an element of K(TX) is defined to be the image of this operation with Y some Euclidean space, for which K(TY) can be naturally identified with the integers Z (as a consequence of Bott periodicity). This map is independent of the embedding of X in Euclidean space. Now a differential operator as above naturally defines an element of K(TX), and the image in Z under this map "is" the topological index. - -As usual, D is an elliptic differential operator between vector bundles E and F over a compact manifold X. - -The index problem is the following: compute the (analytical) index of D using only the symbol $\sigma(D)$ and topological data derived from the manifold and the vector bundle. The Atiyah–Singer index theorem solves this problem, and states: - -The analytical index of D is equal to its topological index. - -In spite of its formidable definition, the topological index is usually straightforward to evaluate explicitly. So this makes it possible to evaluate the analytical index. (The cokernel and kernel of an elliptic operator are in general extremely hard to evaluate individually; the index theorem shows that we can usually at least evaluate their difference.) Many important invariants of a manifold (such as the signature) can be given as the index of suitable differential operators, so the index theorem allows us to evaluate these invariants in terms of topological data. - -Although the analytical index is usually hard to evaluate directly, it is at least obviously an integer. The topological index is by definition a rational number, but it is usually not at all obvious from the definition that it is also integral. So the Atiyah–Singer index theorem implies some deep integrality properties, as it implies that the topological index is integral. - -The index of an elliptic differential operator obviously vanishes if the operator is self-adjoint. It also vanishes if the manifold X has odd dimension, though there are pseudodifferential elliptic operators whose index does not vanish in odd dimensions. - -The Grothendieck–Riemann–Roch theorem was one of the main motivations behind the index theorem because the index theorem is the counterpart of this theorem in the setting of real manifolds. Now, if there is a map $f:X\to Y$ of compact stably almost complex manifolds, then there is a commutative diagram
    [commutative diagram]
If $Y = *$ is a point, then we recover the statement above. Here $K(X)$ is the Grothendieck group of complex vector bundles. This commutative diagram is formally very similar to the GRR theorem because the cohomology groups on the right are replaced by the Chow ring of a smooth variety, and the Grothendieck group on the left is given by the Grothendieck group of algebraic vector bundles. - -Due to Teleman (1984): - -For any abstract elliptic operator on a closed, oriented, topological manifold, the analytical index equals the topological index. - -The proof of this result goes through specific considerations, including the extension of Hodge theory to combinatorial and Lipschitz manifolds, the extension of Atiyah–Singer's signature operator to Lipschitz manifolds, Kasparov's K-homology, and topological cobordism. - -This result shows that the index theorem is not merely a differentiability statement, but rather a topological statement. - -Due to Connes, Sullivan, and Teleman (1994): - -For any quasiconformal manifold there exists a local construction of the Hirzebruch–Thom characteristic classes. - -This theory is based on a signature operator S, defined on middle degree differential forms on even-dimensional quasiconformal manifolds. - -Using topological cobordism and K-homology one may provide a full statement of an index theorem on quasiconformal manifolds (see page 678 of the 1994 paper of Connes, Sullivan, and Teleman). That work "provides local constructions for characteristic classes based on higher dimensional relatives of the measurable Riemann mapping in dimension two and the Yang–Mills theory in dimension four." - -These results constitute significant advances along the lines of Singer's program Prospects in Mathematics (1971). At the same time, they also provide an effective construction of the rational Pontryagin classes on topological manifolds. The paper provides a link between Thom's original construction of the rational Pontryagin classes and index theory. - -It is important to mention that the index formula is a topological statement. The obstruction theories due to Milnor, Kervaire, Kirby, Siebenmann, Sullivan, and Donaldson show that only a minority of topological manifolds possess differentiable structures, and these are not necessarily unique. Sullivan's result on Lipschitz and quasiconformal structures shows that any topological manifold in dimension different from 4 possesses such a structure, which is unique (up to isotopy close to the identity). - -The quasiconformal structures, and more generally the $L^p$-structures with $p > n(n+1)/2$, introduced by M. Hilsum, are the weakest analytical structures on topological manifolds of dimension n for which the index theorem is known to hold. - -*The Atiyah–Singer theorem applies to elliptic pseudodifferential operators in much the same way as for elliptic differential operators. In fact, for technical reasons most of the early proofs worked with pseudodifferential rather than differential operators: their extra flexibility made some steps of the proofs easier. - -*Instead of working with an elliptic operator between two vector bundles, it is sometimes more convenient to work with an elliptic complex $$0\rightarrow E_0 \rightarrow E_1 \rightarrow E_2 \rightarrow \cdots \rightarrow E_m \rightarrow 0$$ of vector bundles. The difference is that the symbols now form an exact sequence (off the zero section).
In the case when there are just two non-zero bundles in the complex this implies that the symbol is an isomorphism off the zero section, so an elliptic complex with 2 terms is essentially the same as an elliptic operator between two vector bundles. Conversely, the index theorem for an elliptic complex can easily be reduced to the case of an elliptic operator: the two vector bundles are given by the sums of the even or odd terms of the complex, and the elliptic operator is the sum of the operators of the elliptic complex and their adjoints, restricted to the sum of the even bundles. - -*If the manifold is allowed to have boundary, then some restrictions must be put on the domain of the elliptic operator in order to ensure a finite index. These conditions can be local (like demanding that the sections in the domain vanish at the boundary) or more complicated global conditions (like requiring that the sections in the domain solve some differential equation). The local case was worked out by Atiyah and Bott, but they showed that many interesting operators (e.g., the signature operator) do not admit local boundary conditions. To handle these operators, Atiyah, Patodi and Singer introduced global boundary conditions equivalent to attaching a cylinder to the manifold along the boundary and then restricting the domain to those sections that are square integrable along the cylinder. This point of view is adopted in Melrose's proof of the Atiyah–Patodi–Singer index theorem. - -*Instead of just one elliptic operator, one can consider a family of elliptic operators parameterized by some space Y. In this case the index is an element of the K-theory of Y, rather than an integer. If the operators in the family are real, then the index lies in the real K-theory of Y. This gives a little extra information, as the map from the real K-theory of Y to the complex K-theory is not always injective. - -*If there is a group action of a group G on the compact manifold X, commuting with the elliptic operator, then one replaces ordinary K-theory with equivariant K-theory. Moreover, one gets generalizations of the Lefschetz fixed-point theorem, with terms coming from fixed-point submanifolds of the group G. See also: equivariant index theorem. - -*Atiyah showed how to extend the index theorem to some non-compact manifolds, acted on by a discrete group with compact quotient. The kernel of the elliptic operator is in general infinite dimensional in this case, but it is possible to get a finite index using the dimension of a module over a von Neumann algebra; this index is in general real rather than integer valued. This version is called the $L^2$ index theorem, and was used by Atiyah to rederive properties of the discrete series representations of semisimple Lie groups. - -*The Callias index theorem is an index theorem for a Dirac operator on a noncompact odd-dimensional space. The Atiyah–Singer index is only defined on compact spaces, and vanishes when their dimension is odd. In 1978 Constantine Callias, at the suggestion of his Ph.D. advisor Roman Jackiw, used the axial anomaly to derive this index theorem on spaces equipped with a Hermitian matrix called the Higgs field. The index of the Dirac operator is a topological invariant which measures the winding of the Higgs field on a sphere at infinity. If U is the unit matrix in the direction of the Higgs field, then the index is proportional to the integral of $U(dU)^{n-1}$ over the $(n-1)$-sphere at infinity. If n is even, it is always zero.
- -**The topological interpretation of this invariant and its relation to the Hörmander index proposed by Boris Fedosov, as generalized by Lars Hörmander, was published by Raoul Bott and Robert Thomas Seeley. - -Suppose that M is a compact oriented manifold. If we take E to be the sum of the even exterior powers of the cotangent bundle, and F to be the sum of the odd powers, define D = d + d*, considered as a map from E to F. Then the analytical index of D is the Euler characteristic of the Hodge cohomology of M, and the topological index is the Euler class of the manifold evaluated on its fundamental class. The index formula for this operator yields the Chern–Gauss–Bonnet theorem. - -Take X to be a complex manifold with a holomorphic vector bundle V. We let the vector bundles E and F be the sums of the bundles of differential forms with coefficients in V of type (0,i) with i even or odd, and we let the differential operator D be the sum $$\overline\partial + \overline\partial^*$$ restricted to E. Then the analytical index of D is the holomorphic Euler characteristic of V: $$\textrm{index}(D)=\sum_p (-1)^p \dim H^p(X,V)$$ The topological index of D is given by $$\textrm{index}(D)=\textrm{ch}(V) \textrm{Td}(X)[X]$$, the product of the Chern character of V and the Todd class of X evaluated on the fundamental class of X. - -By equating the topological and analytical indices we get the Hirzebruch–Riemann–Roch theorem. In fact we get a generalization of it to all complex manifolds: Hirzebruch's proof only worked for projective complex manifolds X. - -This derivation of the Hirzebruch–Riemann–Roch theorem is more natural if we use the index theorem for elliptic complexes rather than elliptic operators. - -We can take the complex to be $$0 \rightarrow V \rightarrow V\otimes \Lambda^{0,1}T^*(X) \rightarrow V\otimes \Lambda^{0,2}T^*(X) \rightarrow \cdots$$ with the differential given by $\overline\partial$. Then the ith cohomology group is just the coherent cohomology group $H^i(X, V)$, so the analytical index of this complex is the holomorphic Euler characteristic $\sum_i (-1)^i \dim H^i(X, V)$. As before, the topological index is $\textrm{ch}(V)\textrm{Td}(X)[X]$. - -The Hirzebruch signature theorem states that the signature of a compact oriented manifold X of dimension 4k is given by the L genus of the manifold. This follows from the Atiyah–Singer index theorem applied to the following signature operator. - -The bundles E and F are given by the +1 and −1 eigenspaces of the operator on the bundle of differential forms of X, that acts on k-forms as $$i^{k(k-1)}$$ times the Hodge * operator. The operator D is $$D := \mathbf d + \mathbf{d^*},$$ whose square is the Hodge Laplacian $\Delta = (\mathbf d + \mathbf{d^*})^2$, restricted to E, where d is the Cartan exterior derivative and d* is its adjoint. - -The analytic index of D is the signature of the manifold X, and its topological index is the L genus of X, so these are equal. - -The Â genus is a rational number defined for any manifold, but is in general not an integer. Borel and Hirzebruch showed that it is integral for spin manifolds, and an even integer if in addition the dimension is 4 mod 8. This can be deduced from the index theorem, which implies that the Â genus for spin manifolds is the index of a Dirac operator. The extra factor of 2 in dimensions 4 mod 8 comes from the fact that in this case the kernel and cokernel of the Dirac operator have a quaternionic structure, so as complex vector spaces they have even dimensions, so the index is even.
- -In dimension 4 this result implies Rochlin's theorem that the signature of a 4-dimensional spin manifold is divisible by 16: this follows because in dimension 4 the Â genus is minus one eighth of the signature. - -Pseudodifferential operators can be explained easily in the case of constant coefficient operators on Euclidean space. In this case, constant coefficient differential operators are just the Fourier transforms of multiplication by polynomials, and constant coefficient pseudodifferential operators are just the Fourier transforms of multiplication by more general functions. - -Many proofs of the index theorem use pseudodifferential operators rather than differential operators. The reason for this is that for many purposes there are not enough differential operators. For example, a pseudoinverse of an elliptic differential operator of positive order is not a differential operator, but is a pseudodifferential operator. - -Also, there is a direct correspondence between data representing elements of K(B(X), S(X)) (clutching functions) and symbols of elliptic pseudodifferential operators. - -Pseudodifferential operators have an order, which can be any real number or even −∞, and have symbols (which are no longer polynomials on the cotangent space), and elliptic pseudodifferential operators are those whose symbols are invertible for sufficiently large cotangent vectors. Most versions of the index theorem can be extended from elliptic differential operators to elliptic pseudodifferential operators. - -The initial proof was based on that of the Hirzebruch–Riemann–Roch theorem (1954), and involved cobordism theory and pseudodifferential operators. - -The idea of this first proof is roughly as follows. Consider the ring generated by pairs (X, V) where V is a smooth vector bundle on the compact smooth oriented manifold X, with relations that the sum and product of the ring on these generators are given by disjoint union and product of manifolds (with the obvious operations on the vector bundles), and any boundary of a manifold with vector bundle is 0. This is similar to the cobordism ring of oriented manifolds, except that the manifolds also have a vector bundle. The topological and analytical indices are both reinterpreted as functions from this ring to the integers. Then one checks that these two functions are in fact both ring homomorphisms. In order to prove they are the same, it is then only necessary to check they are the same on a set of generators of this ring. Thom's cobordism theory gives a set of generators; for example, complex projective spaces with the trivial bundle together with certain bundles over even dimensional spheres. So the index theorem can be proved by checking it on these particularly simple cases. - -Atiyah and Singer's first published proof used K-theory rather than cobordism. If i is any inclusion of compact manifolds from X to Y, they defined a 'pushforward' operation i! on elliptic operators of X to elliptic operators of Y that preserves the index. By taking Y to be some sphere that X embeds in, this reduces the index theorem to the case of spheres. If Y is a sphere and X is some point embedded in Y, then any elliptic operator on Y is the image under i! of some elliptic operator on the point. This reduces the index theorem to the case of a point, where it is trivial. - -Atiyah, Bott, and Patodi gave a new proof of the index theorem using the heat equation; see e.g. the book of Berline, Getzler, and Vergne.
- -If D is a differential operator with adjoint D*, then D*D and DD* are self-adjoint operators whose non-zero eigenvalues have the same multiplicities. However their zero eigenspaces may have different multiplicities, as these multiplicities are the dimensions of the kernels of D and D*. Therefore, the index of D is given by $$\textrm{Index}(D) = \dim {\rm Ker}(D) - \dim {\rm Ker}(D^*) = {\rm Tr}(e^{-t D^* D})- {\rm Tr}(e^{-t DD^*})$$ for any positive t. The right hand side is given by the trace of the difference of the kernels of two heat operators. These have an asymptotic expansion for small positive t, which can be used to evaluate the limit as t tends to 0, giving a proof of the Atiyah–Singer index theorem. The asymptotic expansions for small t appear very complicated, but invariant theory shows that there are huge cancellations between the terms, which makes it possible to find the leading terms explicitly. These cancellations were later explained using supersymmetry. diff --git a/wiki/wikipedia/460.txt b/wiki/wikipedia/460.txt deleted file mode 100644 index e5f597b970f2a7b57b76df2e063864a69fea5391..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/460.txt +++ /dev/null @@ -1,23 +0,0 @@ -In abstract algebra, a Robbins algebra is an algebra containing a single binary operation, usually denoted by $\lor$, and a single unary operation usually denoted by $\neg$. These operations satisfy the following axioms: - -For all elements a, b, and c: - -# Associativity: $a \lor \left(b \lor c \right) = \left(a \lor b \right) \lor c$ - -# Commutativity: $a \lor b = b \lor a$ - -# Robbins equation: $\neg \left( \neg \left(a \lor b \right) \lor \neg \left(a \lor \neg b \right) \right) = a$ - -For many years, it was conjectured, but unproven, that all Robbins algebras are Boolean algebras. This was proved in 1996, so the term "Robbins algebra" is now simply a synonym for "Boolean algebra". - -In 1933, Edward Huntington proposed a new set of axioms for Boolean algebras, consisting of (1) and (2) above, plus: - -*Huntington's equation: $\neg(\neg a \lor b) \lor \neg(\neg a \lor \neg b) = a.$ - -From these axioms, Huntington derived the usual axioms of Boolean algebra. - -Very soon thereafter, Herbert Robbins posed the Robbins conjecture, namely that the Huntington equation could be replaced with what came to be called the Robbins equation, and the result would still be Boolean algebra. $\lor$ would interpret Boolean join and $\neg$ Boolean complement. Boolean meet and the constants 0 and 1 are easily defined from the Robbins algebra primitives. Pending verification of the conjecture, the system of Robbins was called "Robbins algebra." - -Verifying the Robbins conjecture required proving Huntington's equation, or some other axiomatization of a Boolean algebra, as theorems of a Robbins algebra. Huntington, Robbins, Alfred Tarski, and others worked on the problem, but failed to find a proof or counterexample. - -William McCune proved the conjecture in 1996, using the automated theorem prover EQP. For a complete proof of the Robbins conjecture in one consistent notation and following McCune closely, see Mann (2003). Dahn (1998) simplified McCune's machine proof.
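The hard content of the theorem is the converse direction; the easy direction, that every Boolean algebra satisfies the Robbins axioms, can be sanity-checked by brute force in the two-element Boolean algebra. A minimal sketch (ours, not part of the original article):

```python
from itertools import product

# The two-element Boolean algebra: "or" as the join, complement as negation.
join = lambda a, b: a | b
neg = lambda a: 1 - a

for a, b, c in product((0, 1), repeat=3):
    # Associativity and commutativity of the join
    assert join(a, join(b, c)) == join(join(a, b), c)
    assert join(a, b) == join(b, a)

for a, b in product((0, 1), repeat=2):
    # Robbins equation: not(not(a or b) or not(a or not b)) = a
    assert neg(join(neg(join(a, b)), neg(join(a, neg(b))))) == a
    # Huntington's equation: not(not a or b) or not(not a or not b) = a
    assert join(neg(join(neg(a), b)), neg(join(neg(a), neg(b)))) == a

print("Robbins and Huntington axioms hold in the two-element Boolean algebra")
```

McCune's proof, of course, required showing that the Robbins equation entails Huntington's equation in every model, which no finite check can establish.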
diff --git a/wiki/wikipedia/461.txt b/wiki/wikipedia/461.txt deleted file mode 100644 index d5c9a3fdd37c2a1ba60ca310ff855b3ce846f808..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/461.txt +++ /dev/null @@ -1,89 +0,0 @@ -The set cover problem is a classical question in combinatorics, computer science, operations research, and complexity theory. It is one of Karp's 21 NP-complete problems shown to be NP-complete in 1972. - -It is a problem "whose study has led to the development of fundamental techniques for the entire field" of approximation algorithms. - -Given a set of elements $\{1,2,...,n\}$ (called the universe) and a collection $S$ of $m$ sets whose union equals the universe, the set cover problem is to identify the smallest sub-collection of $S$ whose union equals the universe. For example, consider the universe $U = \{1, 2, 3, 4, 5\}$ and the collection of sets $S = \{\{1, 2, 3\}, \{2, 4\}, \{3, 4\}, \{4, 5\}\}$. Clearly the union of $S$ is $U$. However, we can cover all of the elements with the following, smaller number of sets: $\{\{1, 2, 3\}, \{4, 5\}\}$. - -More formally, given a universe $\mathcal{U}$ and a family $\mathcal{S}$ of subsets of $\mathcal{U}$, a cover is a subfamily $\mathcal{C}\subseteq\mathcal{S}$ of sets whose union is $\mathcal{U}$. In the set covering decision problem, the input is a pair $(\mathcal{U},\mathcal{S})$ and an integer $k$; the question is whether there is a set covering of size $k$ or less. In the set covering optimization problem, the input is a pair $(\mathcal{U},\mathcal{S})$, and the task is to find a set covering that uses the fewest sets. - -The decision version of set covering is NP-complete, and the optimization/search version of set cover is NP-hard. - -If each set is assigned a cost, it becomes a weighted set cover problem. - -The minimum set cover problem can be formulated as the following integer linear program (ILP): minimize $\sum_{s \in \mathcal S} x_s$ subject to $\sum_{s \colon e \in s} x_s \geqslant 1$ for every $e \in \mathcal U$ (every element is covered), with $x_s \in \{0,1\}$ for every $s \in \mathcal S$ (each set is either in the cover or not). - -This ILP belongs to the more general class of ILPs for covering problems. The integrality gap of this ILP is at most $\scriptstyle \log n$, so its relaxation gives a factor-$\scriptstyle \log n$ approximation algorithm for the minimum set cover problem (where $\scriptstyle n$ is the size of the universe). - -In weighted set cover, the sets are assigned weights. Denote the weight of set $s\in \mathcal{S}$ by $w_{s}$. Then the integer linear program describing weighted set cover is identical to the one given above, except that the objective function to minimize is $\sum_{s \in \mathcal S} w_s x_s$. - -Set covering is equivalent to the hitting set problem. That is seen by observing that an instance of set covering can be viewed as an arbitrary bipartite graph, with sets represented by vertices on the left, the universe represented by vertices on the right, and edges representing the inclusion of elements in sets. The task is then to find a minimum cardinality subset of left-vertices which covers all of the right-vertices. In the hitting set problem, the objective is to cover the left-vertices using a minimum subset of the right vertices. Converting from one problem to the other is therefore achieved by interchanging the two sets of vertices. - -There is a greedy algorithm for polynomial time approximation of set covering that chooses sets according to one rule: at each stage, choose the set that contains the largest number of uncovered elements (see the sketch below). This method can be implemented in time linear in the sum of sizes of the input sets, using a bucket queue to prioritize the sets.
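A minimal sketch of the greedy rule, without the bucket-queue optimization (function names are ours), run on the example instance above:

```python
def greedy_set_cover(universe, sets):
    """Repeatedly pick the set covering the most still-uncovered elements."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("the given sets do not cover the universe")
        cover.append(best)
        uncovered -= best
    return cover

# The example from the text: U = {1,...,5}, S = {{1,2,3},{2,4},{3,4},{4,5}}.
print(greedy_set_cover({1, 2, 3, 4, 5},
                       [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]))
# -> [{1, 2, 3}, {4, 5}], which happens to be optimal on this instance
```

This naive version re-scans every set at each stage; the bucket-queue implementation mentioned above avoids the re-scan and achieves linear total time.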
The greedy algorithm achieves an approximation ratio of $H(s)$, where $s$ is the size of the set to be covered. In other words, it finds a covering that may be $H(n)$ times as large as the minimum one, where $H(n)$ is the $n$-th harmonic number: - - H(n) = \sum_{k=1}^{n} \frac{1}{k} \le \ln{n} +1 - -This greedy algorithm actually achieves an approximation ratio of $H(s^\prime)$ where $s^\prime$ is the cardinality of the largest set in $S$. For $\delta$-dense instances, however, there exists a $c \ln{m}$-approximation algorithm for every $c > 0$. - -There is a standard example on which the greedy algorithm achieves an approximation ratio of $\log_2(n)/2$. The universe consists of $n=2^{(k+1)}-2$ elements. The set system consists of $k$ pairwise disjoint sets $S_1,\ldots,S_k$ with sizes $2,4,8,\ldots,2^k$ respectively, as well as two additional disjoint sets $T_0,T_1$, each of which contains half of the elements from each $S_i$. On this input, the greedy algorithm takes the sets $S_k,\ldots,S_1$, in that order, while the optimal solution consists only of $T_0$ and $T_1$. - -An example of such an input for $k=3$ is pictured on the right. - -Inapproximability results show that the greedy algorithm is essentially the best-possible polynomial time approximation algorithm for set cover up to lower order terms (see Inapproximability results below), under plausible complexity assumptions. A tighter analysis for the greedy algorithm shows that the approximation ratio is exactly $\ln{n} - \ln{\ln{n}} + \Theta(1)$. - -If each element occurs in at most f sets, then a solution can be found in polynomial time that approximates the optimum to within a factor of f using LP relaxation. - -If the constraint $x_S\in\{0,1\}$ is replaced by $x_S \geq 0$ for all S in $\mathcal{S}$ in the integer linear program shown above, then it becomes a (non-integer) linear program L. The algorithm can be described as follows: - -# Find an optimal solution O for the program L using some polynomial-time method of solving linear programs. - -# Pick all sets S for which the corresponding variable $x_S$ has value at least 1/f in the solution O. - -When $n$ refers to the size of the universe, Lund and Yannakakis showed that set covering cannot be approximated in polynomial time to within a factor of $\tfrac{1}{2}\log_2{n} \approx 0.72\ln{n}$, unless NP has quasi-polynomial time algorithms. Feige (1998) improved this lower bound to $\bigl(1-o(1)\bigr)\cdot\ln{n}$ under the same assumptions, which essentially matches the approximation ratio achieved by the greedy algorithm. Raz and Safra established a lower bound of $c\cdot\ln{n}$, where $c$ is a certain constant, under the weaker assumption that P$\not=$NP. A similar result with a higher value of $c$ was later proved by Alon, Moshkovitz, and Safra. Dinur and Steurer showed optimal inapproximability by proving that it cannot be approximated to $\bigl(1 - o(1)\bigr) \cdot \ln{n}$ unless P$=$NP. - -Relaxing the integer linear program for weighted set cover stated above, one may use randomized rounding to get an $O(\log n)$-factor approximation. The corresponding analysis for nonweighted set cover is outlined in Randomized rounding#Randomized-rounding algorithm for set cover and can be adapted to the weighted case. - -* Hitting set is an equivalent reformulation of Set Cover. - -* Vertex cover is a special case of Hitting Set. - -* Edge cover is a special case of Set Cover.
- -* Geometric set cover is a special case of Set Cover when the universe is a set of points in $\mathbb{R}^d$ and the sets are induced by the intersection of the universe and geometric shapes (e.g., disks, rectangles). - -* Set packing - -* Maximum coverage problem is to choose at most k sets to cover as many elements as possible. - -* Dominating set is the problem of selecting a set of vertices (the dominating set) in a graph such that all other vertices are adjacent to at least one vertex in the dominating set. The Dominating set problem was shown to be NP-complete through a reduction from Set cover. - -* Exact cover problem is to choose a set cover with no element included in more than one covering set. - -* Red Blue Set Cover. - -*Set-cover abduction. diff --git a/wiki/wikipedia/462.txt b/wiki/wikipedia/462.txt deleted file mode 100644 index 0fb386119fad19e485208633f678006815c2993c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/462.txt +++ /dev/null @@ -1,59 +0,0 @@ -In mathematics, the Gauss class number problem (for imaginary quadratic fields), as usually understood, is to provide for each n ≥ 1 a complete list of imaginary quadratic fields $\mathbb{Q}(\sqrt{d})$ (for negative integers d) having class number n. It is named after Carl Friedrich Gauss. It can also be stated in terms of discriminants. There are related questions for real quadratic fields and for the behavior as $d \to -\infty$. - -The difficulty is in effective computation of bounds: for a given discriminant, it is easy to compute the class number, and there are several ineffective lower bounds on class number (meaning that they involve a constant that is not computed), but effective bounds (and explicit proofs of completeness of lists) are harder. - -The problems are posed in Gauss's Disquisitiones Arithmeticae of 1801 (Section V, Articles 303 and 304). - -Gauss discusses imaginary quadratic fields in Article 303, stating the first two conjectures, and discusses real quadratic fields in Article 304, stating the third conjecture. - -;Gauss Conjecture (Class number tends to infinity): $h(d) \to \infty\text{ as }d\to -\infty.$ - -;Gauss Class Number Problem (Low class number lists): For given low class number (such as 1, 2, and 3), Gauss gives lists of imaginary quadratic fields with the given class number and believes them to be complete. - -;Infinitely many real quadratic fields with class number one: Gauss conjectures that there are infinitely many real quadratic fields with class number one. - -The original Gauss class number problem for imaginary quadratic fields is significantly different and easier than the modern statement: he restricted to even discriminants, and allowed non-fundamental discriminants. - -;Gauss Conjecture: Solved, Heilbronn, 1934. - -;Low class number lists: Class number 1: solved, Baker (1966), Stark (1967), Heegner (1952). - -Class number 2: solved, Baker (1971), Stark (1971) - -Class number 3: solved, Oesterlé (1985) - -;Infinitely many real quadratic fields with class number one: Open. - -For imaginary quadratic number fields, the (fundamental) discriminants of class number 1 are: $$d=-3,-4,-7,-8,-11,-19,-43,-67,-163.$$ - -The non-fundamental discriminants of class number 1 are: $$d=-12,-16,-27,-28.$$ - -Thus, the even discriminants of class number 1, fundamental and non-fundamental (Gauss's original question) are: $$d=-4,-8,-12,-16,-28.$$ - -In 1934, Hans Heilbronn proved the Gauss Conjecture.
Equivalently, for any given class number, there are only finitely many imaginary quadratic number fields with that class number. - -Also in 1934, Heilbronn and Edward Linfoot showed that there were at most 10 imaginary quadratic number fields with class number 1 (the 9 known ones, and at most one further). - -The result was ineffective (see effective results in number theory): it did not give bounds on the size of the remaining field. - -In later developments, the case n = 1 was first discussed by Kurt Heegner, using modular forms and modular equations to show that no further such field could exist. This work was not initially accepted; only with later work of Harold Stark and Bryan Birch (e.g. on the Stark–Heegner theorem and Heegner number) was the position clarified and Heegner's work understood. Practically simultaneously, Alan Baker proved what we now know as Baker's theorem on linear forms in logarithms of algebraic numbers, which resolved the problem by a completely different method. The case n = 2 was tackled shortly afterwards, at least in principle, as an application of Baker's work. - -The complete list of imaginary quadratic fields with class number one is $\mathbf{Q}(\sqrt{k})$ with k one of $$-1, -2, -3, -7, -11, -19, -43, -67, -163.$$ - -The general case awaited Dorian Goldfeld's 1976 discovery that the class number problem could be connected to the L-functions of elliptic curves. This effectively reduced the question of effective determination to one about establishing the existence of a multiple zero of such an L-function. With the proof of the Gross–Zagier theorem in 1986, a complete list of imaginary quadratic fields with a given class number could be specified by a finite calculation. All cases up to n = 100 were computed by Watkins in 2004, yielding full lists of the fields for each class number up to 100. - -The contrasting case of real quadratic fields is very different, and much less is known. That is because what enters the analytic formula for the class number is not h, the class number, on its own, but h log ε, where ε is a fundamental unit. This extra factor is hard to control. It may well be the case that class number 1 for real quadratic fields occurs infinitely often. - -The Cohen–Lenstra heuristics are a set of more precise conjectures about the structure of class groups of quadratic fields. For real fields they predict that about 75.446% of the fields obtained by adjoining the square root of a prime will have class number 1, a result that agrees with computations. diff --git a/wiki/wikipedia/463.txt b/wiki/wikipedia/463.txt deleted file mode 100644 index 3f041278f0426b392a88395d4fee4a5ed1756ca0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/463.txt +++ /dev/null @@ -1,20 +0,0 @@ -In computational complexity and optimization, the no free lunch theorem is a result that states that for certain types of mathematical problems, the computational cost of finding a solution, averaged over all problems in the class, is the same for any solution method. No solution therefore offers a "short cut". This is under the assumption that the problems are drawn from a probability distribution over which no structure can be exploited (for example, uniformly over all objective functions). It does not apply to the case where the search space has underlying structure (e.g., is a differentiable function) that can be exploited more efficiently (e.g., Newton's method in optimization) than random search or even has closed-form solutions (e.g., the extrema of a quadratic polynomial) that can be determined without search at all.
For such probabilistic assumptions, the outputs of all procedures solving a particular type of problem are statistically identical. A colourful way of describing such a circumstance, introduced by David Wolpert and William G. Macready in connection with the problems of search and optimization, is to say that there is no free lunch. Wolpert had previously derived no free lunch theorems for machine learning (statistical inference). - -Before Wolpert's article was published, Cullen Schaffer independently proved a restricted version of one of Wolpert's theorems and used it to critique the current state of machine learning research on the problem of induction. - -In the "no free lunch" metaphor, each "restaurant" (problem-solving procedure) has a "menu" associating each "lunch plate" (problem) with a "price" (the performance of the procedure in solving the problem). The menus of restaurants are identical except in one regard – the prices are shuffled from one restaurant to the next. For an omnivore who is as likely to order each plate as any other, the average cost of lunch does not depend on the choice of restaurant. But a vegan who goes to lunch regularly with a carnivore who seeks economy might pay a high average cost for lunch. To methodically reduce the average cost, one must use advance knowledge of a) what one will order and b) what the order will cost at various restaurants. That is, improvement of performance in problem-solving hinges on using prior information to match procedures to problems. This condition does not hold precisely in practice. - -Some computational problems are solved by searching for good solutions in a space of candidate solutions. A description of how to repeatedly select candidate solutions for evaluation is called a search algorithm. On a particular problem, different search algorithms may obtain different results, but over all problems, they are indistinguishable. It follows that if an algorithm achieves superior results on some problems, it must pay with inferiority on other problems. In this sense there is no free lunch in search. The "no free lunch" results indicate that matching algorithms to problems gives higher average performance than does applying a fixed algorithm to all (see also the work of Igel and Toussaint). - -Wolpert and Macready stipulate that an algorithm never reevaluates a candidate solution, and that algorithm performance is measured on outputs. For instance, if each candidate solution is encoded as a sequence of 300 0's and 1's, and the goodness values are 0 and 1, then most objective functions have Kolmogorov complexity of at least $2^{300}$ bits, and this is greater than Lloyd's bound of $10^{90} \approx 2^{299}$ bits. It follows that the original "no free lunch" theorem does not apply to what can be stored in a physical computer; instead the so-called "tightened" no free lunch theorems need to be applied. It has also been shown that NFL results apply to incomputable functions. - -$Y^X$ is the set of all objective functions f:X→Y, where $X$ is a finite solution space and $Y$ is a finite poset. The set of all permutations of X is J. A random variable F is distributed on $Y^X$. For all j in J, $F \circ j$ is a random variable distributed on $Y^X$, with $P(F \circ j = f) = P(F = f \circ j^{-1})$ for all f in $Y^X$. - -Let a(f) denote the output of search algorithm a on input f. If a(F) and b(F) are identically distributed for all search algorithms a and b, then F has an NFL distribution. This condition holds if and only if F and $F \circ j$ are identically distributed for all j in J.
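This condition can be made concrete with a small brute-force check (an illustration of ours, restricted for brevity to deterministic, non-adaptive algorithms and the uniform distribution on $Y^X$): over all objective functions on a tiny search space, any two fixed query orders produce identical histograms of observed values.

```python
from itertools import product

X = [0, 1, 2]          # finite solution space
Y = [0, 1]             # finite set of objective values

def trace(f, order, k=2):
    """Values seen after evaluating the first k points of a fixed query order."""
    return tuple(f[x] for x in order[:k])

# All |Y|^|X| = 8 objective functions f : X -> Y (the uniform NFL case).
functions = [dict(zip(X, values)) for values in product(Y, repeat=len(X))]

order_a = [0, 1, 2]    # two deterministic, non-adaptive "search algorithms"
order_b = [2, 0, 1]

hist_a = sorted(trace(f, order_a) for f in functions)
hist_b = sorted(trace(f, order_b) for f in functions)
assert hist_a == hist_b   # identical performance profiles over all problems
```

Any permutation-invariant distribution on $Y^X$ yields the same conclusion, which is exactly the "NFL distribution" condition stated above.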
- -Wolpert and Macready give two principal NFL theorems, the first regarding objective functions that do not change while search is in progress, and the second regarding objective functions that may change. Their analysis "covers 'self-play' problems. In these problems, the set of players work together to produce a champion, who then engages one or more antagonists in a subsequent multiplayer game." That is, the objective is to obtain a good player, but without an objective function. The goodness of each player (candidate solution) is assessed by observing how well it plays against others. An algorithm attempts to use players and their quality of play to obtain better players. The player deemed best of all by the algorithm is the champion. Wolpert and Macready have demonstrated that some coevolutionary algorithms are generally superior to other algorithms in quality of champions obtained. Generating a champion through self-play is of interest in evolutionary computation and game theory. The results are inapplicable to coevolution of biological species, which does not yield champions. diff --git a/wiki/wikipedia/464.txt b/wiki/wikipedia/464.txt deleted file mode 100644 index 59319f58e8904c052ae69f99dffbe83325279317..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/464.txt +++ /dev/null @@ -1,117 +0,0 @@ -Gregory coefficients $G_n$, also known as reciprocal logarithmic numbers, Bernoulli numbers of the second kind, or Cauchy numbers of the first kind, are the rational coefficients in the Maclaurin expansion of the reciprocal logarithm, - -\frac{z}{\ln(1+z)} = 1 + \sum_{n=1}^\infty G_n z^n, \qquad |z|<1, - -so that $G_1 = \tfrac{1}{2}$, $G_2 = -\tfrac{1}{12}$, $G_3 = \tfrac{1}{24}$, $G_4 = -\tfrac{19}{720}, \ldots$ They admit the integral representation - -G_n=(-1)^{n-1} \int_0^\infty \frac{dx}{(1+x)^n(\ln^2 x + \pi^2)}, - -or the finite summation formula - -G_n=\frac 1 {n!} \sum_{\ell=1}^n \frac{s(n,\ell)}{\ell+1} , - -where s(n,ℓ) are the signed Stirling numbers of the first kind. - -The Gregory coefficients satisfy the bounds - -\frac{1}{6n(n-1)}<\big|G_n\big|<\frac{1}{6n},\qquad n>2, - -given by Johan Steffensen. In particular, - -\frac{1}{n\ln^2\! n} - \frac{2}{n\ln^3\! n} \leqslant\big|G_n\big| \leqslant \frac{1}{n\ln^2\! n} - \frac{2\gamma }{n\ln^3\! n} , \qquad\quad n\geqslant5. - -Asymptotically, at large index n, these numbers behave as $\big|G_n\big| \sim \dfrac{1}{n\ln^2 n}$ (see the works of Davis, Coffey, Nemes and Blagouchine). More complicated series with the Gregory coefficients were calculated by various authors; Kowalenko, among others, gives the identities - -\begin{align} & \big|G_1\big| + \big|G_2\big| -\big|G_4\big| - \big|G_5\big| +\big|G_7\big|+\big|G_8\big| - \big|G_{10}\big| - \big|G_{11}\big| + \cdots = \frac{\sqrt{3}}{\pi} \\[2mm] & \big|G_2\big| + \big|G_3\big| -\big|G_5\big| - \big|G_6\big| +\big|G_8\big|+\big|G_9\big| - \big|G_{11}\big| - \big|G_{12}\big| + \cdots = \frac{2\sqrt{3}}{\pi} -1 \\[2mm] & \big|G_1\big| - \big|G_3\big| -\big|G_4\big| + \big|G_6\big| +\big|G_7\big|-\big|G_9\big| - \big|G_{10}\big| + \big|G_{12}\big| + \cdots = 1- \frac{\sqrt{3}}{\pi}. \end{align} - -Similar results appear in the work of Candelperger, Coppo and Young. - -Various generalizations are possible for the Gregory coefficients. Many of them may be obtained by modifying the parent generating equation. For example, Van Veen defines polynomials $\psi_n(s)$ such that - -\frac{z(1+z)^s}{\ln(1+z)}= \sum_{n=0}^\infty z^n \psi_n(s) ,\qquad |z|<1, - -and calls them Bernoulli polynomials of the second kind. From the above, it is clear that $G_n = \psi_n(0)$.
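The coefficients are easy to generate exactly from the expansion above by inverting the series $\ln(1+z)/z = \sum_{k \ge 0} (-z)^k/(k+1)$ with rational arithmetic; a minimal sketch (ours) using Python's fractions module:

```python
from fractions import Fraction

def gregory(N):
    """Return G_1..G_N by series inversion of ln(1+z)/z = sum_k (-z)^k/(k+1)."""
    a = [Fraction((-1) ** k, k + 1) for k in range(N + 1)]  # coefficients of ln(1+z)/z
    b = [Fraction(1)]                                       # reciprocal series, b_0 = 1
    for n in range(1, N + 1):
        b.append(-sum(a[k] * b[n - k] for k in range(1, n + 1)))
    return b[1:]

print(gregory(5))
# [Fraction(1, 2), Fraction(-1, 12), Fraction(1, 24),
#  Fraction(-19, 720), Fraction(3, 160)]
```

The output matches the values listed above, and the same coefficients can be cross-checked against the finite summation formula in terms of the Stirling numbers $s(n,\ell)$.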
- -Carlitz generalized Jordan's polynomials $\psi_n(s)$ by introducing the polynomials $\beta_n^{(s)}(x)$ such that - -\left(\frac{z}{\ln(1+z)}\right)^s \!\!\cdot (1+z)^x= \sum_{n=0}^\infty \frac{z^n}{n!}\beta^{(s)}_n(x) ,\qquad |z|<1, - -and therefore - -G_n=\frac{\beta^{(1)}_n(0)}{n!} - -Blagouchine introduced numbers $G_n(k)$ such that - -G_n(k)=\frac 1 {n!} \sum_{\ell=1}^n \frac{s(n,\ell)}{\ell+k} , - -obtained their generating function, and studied their asymptotics at large n. Clearly, $G_n = G_n(1)$. These numbers are strictly alternating, with $\operatorname{sgn} G_n(k) = (-1)^{n-1}$, and are involved in various expansions for the zeta-functions, Euler's constant and polygamma functions. - -A different generalization of the same kind was also proposed by Komatsu - -c_n^{(k)}=\sum_{\ell=0}^n \frac{s(n,\ell)}{(\ell+1)^k}, - -so that $G_n = c_n^{(1)}/n!$. The numbers $c_n^{(k)}$ are called poly-Cauchy numbers by the author. Coffey defines the polynomials - -P_{n+1}(y)=\frac 1 {n!} \int_0^y x(1-x)(2-x)\cdots(n-1-x) dx - -and therefore $\big|G_n\big| = P_{n+1}(1)$. - -* Stirling polynomials - -* Bernoulli polynomials of the second kind diff --git a/wiki/wikipedia/465.txt b/wiki/wikipedia/465.txt deleted file mode 100644 index 998a968732877779c170c0c02f111a3433308387..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/465.txt +++ /dev/null @@ -1,17 +0,0 @@ -In number theory, Lagrange's theorem is a statement named after Joseph-Louis Lagrange about how frequently a polynomial over the integers may evaluate to a multiple of a fixed prime. More precisely, it states that if p is a prime number and $\textstyle f(x) \in \mathbb{Z}[x]$ is a polynomial with integer coefficients, then either: - -* every coefficient of f(x) is divisible by p, or - -* f(x) ≡ 0 (mod p) has at most deg f(x) incongruent solutions - -where deg f(x) is the degree of f(x). Solutions are "incongruent" if they do not differ by a multiple of p. If the modulus is not prime, then it is possible for there to be more than deg f(x) solutions. - -The two key ideas are the following. Let g(x) ∈ (Z/p)[x] be the polynomial obtained from f(x) by taking the coefficients mod p. Now: - -# f(k) is divisible by p if and only if g(k) = 0; and - -# g(x) has no more than deg g(x) roots. - -More rigorously, start by noting that g(x) = 0 if and only if each coefficient of f(x) is divisible by p. Assume g(x) ≠ 0; its degree is thus well-defined. It is easy to see deg g(x) ≤ deg f(x). To prove (1), first note that we can compute g(k) either directly, i.e. by plugging in (the residue class of) k and performing arithmetic in Z/p, or by reducing f(k) mod p. Hence g(k) = 0 if and only if f(k) ≡ 0 (mod p), i.e. if and only if f(k) is divisible by p. To prove (2), note that Z/p is a field, which is a standard fact (a quick proof is to note that since p is prime, Z/p is a finite integral domain, hence is a field). Another standard fact is that a non-zero polynomial over a field has at most as many roots as its degree; this follows from the division algorithm. - -Finally, note that two solutions f(k_1) ≡ f(k_2) ≡ 0 (mod p) are incongruent if and only if $ k_1 \not\equiv k_2 $ (mod p). Putting everything together, the number of incongruent solutions by (1) is the same as the number of roots of g(x), which by (2) is at most deg g(x), which is at most deg f(x).
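The statement is easy to test empirically. The following minimal sketch (ours, with illustrative names) counts incongruent solutions by brute force, and also shows that the primality hypothesis matters:

```python
def roots_mod_m(coeffs, m):
    """Roots of f(x) = coeffs[0] + coeffs[1]*x + ... modulo m."""
    return [k for k in range(m)
            if sum(c * pow(k, i, m) for i, c in enumerate(coeffs)) % m == 0]

p = 7
f = [1, 1, 1]                    # f(x) = x^2 + x + 1, degree 2
rs = roots_mod_m(f, p)
print(rs)                        # [2, 4]
assert len(rs) <= len(f) - 1     # at most deg f(x) incongruent solutions

# With a composite modulus the bound can fail: x^2 - 1 has four roots mod 8.
print(roots_mod_m([-1, 0, 1], 8))   # [1, 3, 5, 7]
```

The second example illustrates the remark above that a non-prime modulus may admit more than deg f(x) solutions.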
diff --git a/wiki/wikipedia/466.txt b/wiki/wikipedia/466.txt deleted file mode 100644 index 8f4d540f0c11d7379aaaf38e77133eaa6ff2a54f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/466.txt +++ /dev/null @@ -1,95 +0,0 @@ -The maximum coverage problem is a classical question in computer science, computational complexity theory, and operations research. - -It is a problem that is widely taught in approximation algorithms. - -As input you are given several sets and a number $k$. The sets may have some elements in common. You must select at most $k$ of these sets such that the maximum number of elements are covered, i.e. the union of the selected sets has maximal size. - -Formally, (unweighted) Maximum Coverage - -Instance: A number $ k $ and a collection of sets $ S = \{S_1, S_2, \ldots, S_m\} $. - -Objective: Find a subset $ S' \subseteq S$ of sets, such that $ \left| S' \right| \leq k$ and the number of covered elements $ \left| \bigcup_{S_i \in S'}{S_i} \right| $ is maximized. - -The maximum coverage problem is NP-hard, and cannot be approximated within $1 - \frac{1}{e} + o(1) \approx 0.632$ under standard assumptions. This result essentially matches the approximation ratio achieved by the generic greedy algorithm used for maximization of submodular functions with a cardinality constraint. - -The maximum coverage problem can be formulated as the following integer linear program: maximize $\sum_{e_j \in E} y_j$ (the number of covered elements), subject to $\sum{x_i} \leq k$ (no more than $k$ sets are selected), $\sum_{e_j \in S_i} x_i \geq y_j$ (if $y_j = 1$ then at least one set containing $e_j$ is selected), $y_j \in \{0,1\}$, and $x_i \in \{0,1\}$. - -The greedy algorithm for maximum coverage chooses sets according to one rule: at each stage, choose a set which contains the largest number of uncovered elements. It can be shown that this algorithm achieves an approximation ratio of $1 - \frac{1}{e}$. Inapproximability results show that the greedy algorithm is essentially the best-possible polynomial time approximation algorithm for maximum coverage unless $P = NP$. - -The inapproximability results apply to all extensions of the maximum coverage problem since they contain the maximum coverage problem as a special case. - -The Maximum Coverage Problem can be applied to road traffic situations; one such example is selecting which bus routes in a public transportation network should be installed with pothole detectors to maximise coverage, when only a limited number of sensors is available. This problem is a known extension of the Maximum Coverage Problem and was first explored in literature by Junade Ali and Vladimir Dyo. - -In the weighted version every element $ e_j $ has a weight $w(e_j)$. The task is to find a maximum coverage which has maximum weight. The basic version is a special case when all weights are $1$. - -maximize $\sum_{e_j \in E} w(e_j) \cdot y_j $. (maximizing the weighted sum of covered elements). - -subject to $ \sum{x_i} \leq k $; (no more than $k$ sets are selected). - -$ \sum_{e_j \in S_i} x_i \geq y_j $; (if $y_j = 1$ then at least one set containing $e_j$ is selected). - -$y_j \in \{0,1\}$; (if $y_j=1$ then $e_j$ is covered) - -$x_i \in \{0,1\}$ (if $x_i=1$ then $S_i$ is selected for the cover). - -The greedy algorithm for the weighted maximum coverage at each stage chooses a set that contains the maximum weight of uncovered elements. This algorithm achieves an approximation ratio of $1 - \frac{1}{e}$. - -In the budgeted maximum coverage version, not only does every element $ e_j $ have a weight $w(e_j)$, but also every set $S_i$ has a cost $c(S_i)$. Instead of $k$ that limits the number of sets in the cover a budget $B$ is given. This budget $B$ limits the total cost of the cover that can be chosen.
- -maximize $\sum_{e_j \in E} w(e_j) \cdot y_j $. (maximizing the weighted sum of covered elements). - -subject to $ \sum{c(S_i) \cdot x_i} \leq B $; (the cost of the selected sets cannot exceed $B$). - -$ \sum_{e_j \in S_i} x_i \geq y_j $; (if $y_j = 1$ then at least one set containing $e_j$ is selected). - -$y_j \in \{0,1\}$; (if $y_j=1$ then $e_j$ is covered) - -$x_i \in \{0,1\}$ (if $x_i=1$ then $S_i$ is selected for the cover). - -A greedy algorithm will no longer produce solutions with a performance guarantee. Namely, the worst case behavior of this algorithm might be very far from the optimal solution. The approximation algorithm is extended by the following way. First, define a modified greedy algorithm, that selects the set $S_i$ that has the best ratio of weighted uncovered elements to cost. Second, among covers of cardinality $1, 2, ..., k-1$, find the best cover that does not violate the budget. Call this cover $H_1$. Third, find all covers of cardinality $k$ that do not violate the budget. Using these covers of cardinality $k$ as starting points, apply the modified greedy algorithm, maintaining the best cover found so far. Call this cover $H_2$. At the end of the process, the approximate best cover will be either $H_1$ or $H_2$. This algorithm achieves an approximation ratio of $1- {1 \over e}$ for values of $k \geq 3$. This is the best possible approximation ratio unless $NP \subseteq DTIME(n^{O(\log\log n)})$. - -In the generalized maximum coverage version every set $S_i$ has a cost $c(S_i)$, and element $ e_j $ has a different weight and cost depending on which set covers it. Namely, if $ e_j $ is covered by set $S_i$ the weight of $ e_j $ is $w_i(e_j)$ and its cost is $c_i(e_j)$. A budget $ B $ is given for the total cost of the solution. - -maximize $\sum_{e_j \in E, S_i} w_i(e_j) \cdot y_{ij} $. (maximizing the weighted sum of covered elements in the sets in which they are covered). - -subject to $ \sum{c_i(e_j) \cdot y_{ij}} + \sum{c(S_i) \cdot x_i} \leq B $; (the cost of the selected sets cannot exceed $B$). - -$ \sum_{i} y_{ij} \leq 1 $; (element $e_j$ can be covered by at most one set). - -$ x_i \geq y_{ij} $; (if $y_{ij}=1$ then $S_i$ is selected for the cover). - -$y_{ij} \in \{0,1\}$; (if $y_{ij}=1$ then $e_j$ is covered by set $S_i$) - -$x_i \in \{0,1\}$ (if $x_i=1$ then $S_i$ is selected for the cover). - -The algorithm uses the concept of residual cost/weight. The residual cost/weight is measured against a tentative solution and it is the difference of the cost/weight from the cost/weight gained by a tentative solution. - -The algorithm has several stages. First, find a solution using the greedy algorithm. In each iteration of the greedy algorithm, the set that contains the maximum residual weight of elements divided by the residual cost of these elements (along with the residual cost of the set) is added to the tentative solution. Second, compare the solution gained by the first step to the best solution which uses a small number of sets. Third, return the best out of all examined solutions. This algorithm achieves an approximation ratio of $1-1/e - o(1)$. - -* Set cover problem is to cover all elements with as few sets as possible.
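For reference, the plain greedy rule for the unweighted problem described earlier is only a few lines of code; a minimal sketch (ours, with illustrative names and data):

```python
def greedy_max_coverage(sets, k):
    """Pick at most k sets, each time taking the one covering the most new elements."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(sets, key=lambda s: len(s - covered))
        if not best - covered:
            break                      # nothing new can be covered
        chosen.append(best)
        covered |= best
    return chosen, covered

sets = [{1, 2, 3}, {1, 4}, {4, 5}, {3, 5, 6}]
chosen, covered = greedy_max_coverage(sets, k=2)
print(chosen, covered)   # [{1, 2, 3}, {4, 5}] covering {1, 2, 3, 4, 5}
```

This is exactly the rule whose approximation ratio is the $1 - \frac{1}{e}$ guarantee stated above.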
diff --git a/wiki/wikipedia/467.txt b/wiki/wikipedia/467.txt deleted file mode 100644 index 87cb961408695f27df7f85d2dd05e4ac43907f8a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/467.txt +++ /dev/null @@ -1,127 +0,0 @@ -In graph theory, the shortest path problem is the problem of finding a path between two vertices (or nodes) in a graph such that the sum of the weights of its constituent edges is minimized. - -The problem of finding the shortest path between two intersections on a road map may be modeled as a special case of the shortest path problem in graphs, where the vertices correspond to intersections and the edges correspond to road segments, each weighted by the length of the segment. - -The shortest path problem can be defined for graphs whether undirected, directed, or mixed. - -It is defined here for undirected graphs; for directed graphs the definition of path - -requires that consecutive vertices be connected by an appropriate directed edge. - -Two vertices are adjacent when they are both incident to a common edge. - -A path in an undirected graph is a sequence of vertices $P = ( v_1, v_2, \ldots, v_n ) \in V \times V \times \cdots \times V$ - -such that $v_i$ is adjacent to $v_{i+1}$ for $1 \leq i < n$. - -Such a path $P$ is called a path of length $n-1$ - -from $v_1$ to $v_n$. - -(The $v_i$ are variables; their numbering here relates to their position in the sequence and need not relate to any canonical labeling of the vertices.) - -Let $e_{i, j}$ be the edge incident to both $v_i$ and $v_j$. Given a real-valued weight function $f: E \rightarrow \mathbb{R}$, and an undirected (simple) graph $G$, the shortest path from $v$ to $v'$ is the path $P = ( v_1, v_2, \ldots, v_n )$ (where $v_1 = v$ and $v_n = v'$) that over all possible $n$ minimizes the sum $\sum_{i =1}^{n-1} f(e_{i, i+1}).$ When each edge in the graph has unit weight or $f: E \rightarrow \{1\}$, this is equivalent to finding the path with fewest edges. - -The problem is also sometimes called the single-pair shortest path problem, to distinguish it from the following variations: - -* The single-source shortest path problem, in which we have to find shortest paths from a source vertex v to all other vertices in the graph. - -* The single-destination shortest path problem, in which we have to find shortest paths from all vertices in the directed graph to a single destination vertex v. This can be reduced to the single-source shortest path problem by reversing the arcs in the directed graph. - -* The all-pairs shortest path problem, in which we have to find shortest paths between every pair of vertices v, v' in the graph. - -These generalizations have significantly more efficient algorithms than the simplistic approach of running a single-pair shortest path algorithm on all relevant pairs of vertices. - -The most important algorithms for solving this problem are: - -* Dijkstra's algorithm solves the single-source shortest path problem with non-negative edge weights (a minimal implementation sketch follows the list below). - -* Bellman–Ford algorithm solves the single-source problem if edge weights may be negative. - -* A* search algorithm solves for single-pair shortest path using heuristics to try to speed up the search. - -* Floyd–Warshall algorithm solves all pairs shortest paths. - -* Johnson's algorithm solves all pairs shortest paths, and may be faster than Floyd–Warshall on sparse graphs. - -* Viterbi algorithm solves the shortest stochastic path problem with an additional probabilistic weight on each node.
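As a concrete illustration of the first item in the list above, here is a minimal Python sketch of Dijkstra's algorithm using a binary heap; the adjacency-list representation is an assumption made for the example:

```python
import heapq

# Sketch of Dijkstra's algorithm: single-source shortest paths with
# non-negative edge weights.
def dijkstra(graph, source):
    """graph: {u: [(v, weight), ...]}; returns shortest distances from source."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Intersections as vertices, road segments as weighted edges.
roads = {"a": [("b", 7), ("c", 3)], "c": [("b", 2)], "b": []}
print(dijkstra(roads, "a"))  # {'a': 0, 'b': 5, 'c': 3}
```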
- -Additional algorithms and associated evaluations may be found in Cherkassky et al. - -An algorithm using topological sorting can solve the single-source shortest path problem in time Θ(E + V) in arbitrarily-weighted DAGs. - -[The comparison table of running times, taken from Schrijver with some corrections and additions, is not reproduced here; in it, a green background indicates an asymptotically best bound, and L is the maximum length (or weight) among all edges, assuming integer edge weights.] - -The all-pairs shortest path problem finds the shortest paths between every pair of vertices v, v' in the graph. The all-pairs shortest paths problem for unweighted directed graphs was introduced by Shimbel, who observed that it could be solved by a linear number of matrix multiplications that takes a total time of $O(V^4)$. - -Shortest path algorithms are applied to automatically find directions between physical locations, such as driving directions on web mapping websites like MapQuest or Google Maps. For this application fast specialized algorithms are available. - -If one represents a nondeterministic abstract machine as a graph where vertices describe states and edges describe possible transitions, shortest path algorithms can be used to find an optimal sequence of choices to reach a certain goal state, or to establish lower bounds on the time needed to reach a given state. For example, if vertices represent the states of a puzzle like a Rubik's Cube and each directed edge corresponds to a single move or turn, shortest path algorithms can be used to find a solution that uses the minimum possible number of moves. - -In a networking or telecommunications mindset, this shortest path problem is sometimes called the min-delay path problem and usually tied with a widest path problem. For example, the algorithm may seek the shortest (min-delay) widest path, or widest shortest (min-delay) path. - -A more lighthearted application is the games of "six degrees of separation" that try to find the shortest path in graphs like movie stars appearing in the same film. - -Other applications, often studied in operations research, include plant and facility layout, robotics, transportation, and VLSI design. - -A road network can be considered as a graph with positive weights. The nodes represent road junctions and each edge of the graph is associated with a road segment between two junctions. The weight of an edge may correspond to the length of the associated road segment, the time needed to traverse the segment, or the cost of traversing the segment. Using directed edges it is also possible to model one-way streets. Such graphs are special in the sense that some edges are more important than others for long-distance travel (e.g. highways). This property has been formalized using the notion of highway dimension. There are a great number of algorithms that exploit this property and are therefore able to compute the shortest path a lot quicker than would be possible on general graphs. - -All of these algorithms work in two phases. In the first phase, the graph is preprocessed without knowing the source or target node. The second phase is the query phase. In this phase, source and target node are known. The idea is that the road network is static, so the preprocessing phase can be done once and used for a large number of queries on the same road network. - -The algorithm with the fastest known query time is called hub labeling and is able to compute shortest paths on the road networks of Europe or the US in a fraction of a microsecond.
Other techniques that have been used are: - -* ALT (A* search, landmarks, and triangle inequality) - -* Arc flags - -* Contraction hierarchies - -* Transit node routing - -* Reach-based pruning - -* Labeling - -* Hub labels - -For shortest path problems in computational geometry, see Euclidean shortest path. - -The travelling salesman problem is the problem of finding the shortest path that goes through every vertex exactly once, and returns to the start. Unlike the shortest path problem, which can be solved in polynomial time in graphs without negative cycles, the travelling salesman problem is NP-complete and, as such, is believed not to be efficiently solvable for large sets of data (see P = NP problem). The problem of finding the longest path in a graph is also NP-complete. - -The Canadian traveller problem and the stochastic shortest path problem are generalizations where either the graph isn't completely known to the mover, changes over time, or where actions (traversals) are probabilistic. - -The shortest multiple disconnected path is a representation of the primitive path network within the framework of Reptation theory. - -The widest path problem seeks a path so that the minimum label of any edge is as large as possible. - -Sometimes, the edges in a graph have personalities: each edge has its own selfish interest. An example is a communication network, in which each edge is a computer that possibly belongs to a different person. Different computers have different transmission speeds, so every edge in the network has a numeric weight equal to the number of milliseconds it takes to transmit a message. Our goal is to send a message between two points in the network in the shortest time possible. If we know the transmission-time of each computer (the weight of each edge), then we can use a standard shortest-paths algorithm. If we do not know the transmission times, then we have to ask each computer to tell us its transmission-time. But, the computers may be selfish: a computer might tell us that its transmission time is very long, so that we will not bother it with our messages. A possible solution to this problem is to use a variant of the VCG mechanism, which gives the computers an incentive to reveal their true weights. - -There is a natural linear programming formulation for the shortest path problem, given below. It is very simple compared to most other uses of linear programs in discrete optimization, however it illustrates connections to other concepts. - -Given a directed graph (V, A) with source node s, target node t, and cost $w_{ij}$ for each edge $(i, j)$ in A, consider the program with variables $x_{ij}$ - -minimize $\sum_{ij \in A} w_{ij} x_{ij}$ subject to $x \ge 0$ and for all i, $\sum_j x_{ij} - \sum_j x_{ji} = \begin{cases}1, &\text{if }i=s;\\ -1, &\text{if }i=t;\\ 0, &\text{ otherwise.}\end{cases}$ - -The intuition behind this is that $x_{ij}$ is an indicator variable for whether edge (i, j) is part of the shortest path: 1 when it is, and 0 if it is not. We wish to select the set of edges with minimal weight, subject to the constraint that this set forms a path from s to t, represented by the equality constraint: for all vertices except s and t, the number of incoming and outgoing edges that are part of the path must be the same. A small worked instance is sketched below.
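The following sketch solves the LP above on a three-vertex digraph with scipy.optimize.linprog; the particular graph and the variable names are assumptions made for the example:

```python
from scipy.optimize import linprog

# Shortest-path LP on a tiny digraph: s -> a (4), s -> t (9), a -> t (3).
edges = [("s", "a", 4), ("s", "t", 9), ("a", "t", 3)]
nodes = ["s", "a", "t"]
c = [w for _, _, w in edges]                  # objective: sum of w_ij * x_ij

# One equality row per vertex: outgoing minus incoming flow,
# equal to 1 at s, -1 at t, and 0 elsewhere.
A_eq = [[(u == n) - (v == n) for u, v, _ in edges] for n in nodes]
b_eq = [1 if n == "s" else -1 if n == "t" else 0 for n in nodes]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(edges))
print(res.x)  # approx. [1, 0, 1]: the path s -> a -> t, total cost 7
```

Note that the solver returns a 0-1 vector here, which is exactly the integrality property discussed next.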
- -This LP has the special property that it is integral; more specifically, every basic optimal solution (when one exists) has all variables equal to 0 or 1, and the set of edges whose variables equal 1 form an s-t dipath. See Ahuja et al. for one proof, although the origin of this approach dates back to the mid-20th century. - -The dual for this linear program is - -maximize $y_t - y_s$ subject to $y_j - y_i \leq w_{ij}$ for all edges $(i, j)$, - -and feasible duals correspond to the concept of a consistent heuristic for the A* algorithm for shortest paths. For any feasible dual y the reduced costs $w'_{ij} = w_{ij} - y_j + y_i$ are nonnegative and A* essentially runs Dijkstra's algorithm on these reduced costs. - -Many problems can be framed as a form of the shortest path for some suitably substituted notions of addition along a path and taking the minimum. The general approach to these is to consider the two operations to be those of a semiring. Semiring multiplication is done along the path, and the addition is between paths. This general framework is known as the algebraic path problem. - -Most of the classic shortest-path algorithms (and new ones) can be formulated as solving linear systems over such algebraic structures. - -More recently, an even more general framework for solving these (and much less obviously related problems) has been developed under the banner of valuation algebras. - -In real-life situations, the transportation network is usually stochastic and time-dependent. In fact, a traveler traversing a link daily may experience different travel times on that link due not only to the fluctuations in travel demand (origin-destination matrix) but also due to such incidents as work zones, bad weather conditions, accidents and vehicle breakdowns. As a result, a stochastic time-dependent (STD) network is a more realistic representation of an actual road network compared with the deterministic one. - -Despite considerable progress during the course of the past decade, it remains a controversial question how an optimal path should be defined and identified in stochastic road networks. In other words, there is no unique definition of an optimal path under uncertainty. One possible and common answer to this question is to find a path with the minimum expected travel time. The main advantage of using this approach is that efficient shortest path algorithms introduced for the deterministic networks can be readily employed to identify the path with the minimum expected travel time in a stochastic network. However, the resulting optimal path identified by this approach may not be reliable, because this approach fails to address travel time variability. To tackle this issue some researchers use the distribution of travel time instead of its expected value, so they find the probability distribution of the total travelling time using different optimization methods such as dynamic programming and Dijkstra's algorithm. These methods use stochastic optimization, specifically stochastic dynamic programming, to find the shortest path in networks with probabilistic arc length. The concept of travel time reliability is used interchangeably with travel time variability in the transportation research literature, so that, in general, one can say that the higher the variability in travel time, the lower the reliability would be, and vice versa. - -In order to account for travel time reliability more accurately, two common alternative definitions for an optimal path under uncertainty have been suggested.
Some have introduced the concept of the most reliable path, aiming to maximize the probability of arriving on time or earlier than a given travel time budget. Others, alternatively, have put forward the concept of an α-reliable path based on which they intended to minimize the travel time budget required to ensure a pre-specified on-time arrival probability. diff --git a/wiki/wikipedia/468.txt b/wiki/wikipedia/468.txt deleted file mode 100644 index 3d68143441c2f81dae31caa62e35f42fba9962cd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/468.txt +++ /dev/null @@ -1,5 +0,0 @@ -The Microsoft Distributed Transaction Coordinator (MSDTC) service is a component of modern versions of Microsoft Windows that is responsible for coordinating transactions that span multiple resource managers, such as databases, message queues, and file systems. MSDTC is included in Windows 2000 and later operating systems, and is also available for Windows NT 4.0. - -MSDTC performs the transaction coordination role for components, usually with COM and .NET architectures. In MSDTC terminology, the director is called the transaction manager. - -By default, the Microsoft Distributed Transaction Coordinator (MSDTC) service is installed with Windows 2000. It cannot be uninstalled through Add/Remove Programs. diff --git a/wiki/wikipedia/469.txt b/wiki/wikipedia/469.txt deleted file mode 100644 index b4981aed68b128e1e6cce2621881ed6a3024a9b8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/469.txt +++ /dev/null @@ -1,35 +0,0 @@ -In number theory, Proth's theorem is a primality test for Proth numbers. - -It states that if p is a Proth number, of the form $k \cdot 2^n + 1$ with $k$ odd and $k < 2^n$, and if there exists an integer a for which -$$ -a^{\frac{p-1}{2}}\equiv -1 \pmod{p}, -$$ - -then p is prime. In this case p is called a Proth prime. This is a practical test because if p is prime, any chosen a has about a 50 percent chance of working. - -If a is a quadratic nonresidue modulo p then the converse is also true, and the test is conclusive. Such an a may be found by iterating a over small primes and computing the Jacobi symbol until: -$$ -\left(\frac{a}{p}\right)=-1 . -$$ - -Thus, in contrast to many Monte Carlo primality tests (randomized algorithms that can return a false positive), the primality testing algorithm based on Proth's theorem is a Las Vegas algorithm, always returning the correct answer but with a running time that varies randomly. - -Examples of the theorem include: - -* for $p = 3 = 1\cdot 2^1 + 1$, we have that $2^{(3-1)/2} + 1 = 3$ is divisible by 3, so 3 is prime. - -* for $p = 5 = 1\cdot 2^2 + 1$, we have that $3^{(5-1)/2} + 1 = 10$ is divisible by 5, so 5 is prime. - -* for $p = 13 = 3\cdot 2^2 + 1$, we have that $5^{(13-1)/2} + 1 = 15626$ is divisible by 13, so 13 is prime. - -* for $p = 9$, which is not prime, there is no $a$ such that $a^{(9-1)/2} + 1$ is divisible by 9. - -The first Proth primes are: - -3, 5, 13, 17, 41, 97, 113, 193, 241, 257, 353, 449, 577, 641, 673, 769, 929, 1153 …. - -The largest known Proth prime is $10223 \cdot 2^{31172165} + 1$, and is 9,383,761 digits long. It was found by Peter Szabolcs in the PrimeGrid distributed computing project which announced it on 6 November 2016. It is also the largest known non-Mersenne prime and largest Colbert number. The second largest known Proth prime is $19249 \cdot 2^{13018586} + 1$, found by Seventeen or Bust. - -The proof for this theorem uses the Pocklington-Lehmer primality test, and closely resembles the proof of Pépin's test.
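The test itself is straightforward to implement. Below is a hedged Python sketch following the Jacobi-symbol strategy described above; the witness list and function names are assumptions, not taken from any reference implementation:

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, computed via quadratic reciprocity."""
    a %= n
    t = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                t = -t
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            t = -t
        a %= n
    return t if n == 1 else 0

def proth_test(p, witnesses=(3, 5, 7, 11, 13, 17, 19, 23)):
    """Sketch of Proth's test; p is assumed to be a Proth number k*2^n + 1."""
    for a in witnesses:
        if jacobi(a, p) == -1:
            # For prime p, such an a is a quadratic nonresidue, so the
            # Euler-criterion check below is conclusive either way.
            return pow(a, (p - 1) // 2, p) == p - 1
    return None  # no nonresidue witness found among the small primes tried

for p in (3, 5, 13, 17, 41, 97):
    print(p, proth_test(p))  # all True: these are among the first Proth primes
```

For composite Proth numbers such as 9 this sketch may return None (no small witness has Jacobi symbol −1), which matches the caveat above that the conclusive form of the test requires a quadratic nonresidue.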
The proof can be found on page 52 of the book by Ribenboim in the references. - -François Proth (1852–1879) published the theorem in 1878. diff --git a/wiki/wikipedia/47.txt b/wiki/wikipedia/47.txt deleted file mode 100644 index 013f91c14f82afd51617e18ceaae5583e50c4903..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/47.txt +++ /dev/null @@ -1,15 +0,0 @@ -In probability theory, Dudley's theorem is a result relating the expected upper bound and regularity properties of a Gaussian process to its entropy and covariance structure. - -The result was first stated and proved by V. N. Sudakov, as pointed out in a paper by Dudley, "V. N. Sudakov's work on expected suprema of Gaussian processes," in High Dimensional Probability VII, Eds. C. Houdré, D. M. Mason, P. Reynaud-Bouret, and Jan Rosiński, Birkhäuser, Springer, Progress in Probability 71, 2016, pp. 37–43. Dudley had earlier credited Volker Strassen with making the connection between entropy and regularity. - -Let $(X_t)_{t \in T}$ be a Gaussian process and let $d_X$ be the pseudometric on T defined by -$$ -d_{X}(s, t) = \sqrt{\mathbf{E} \big[ | X_{s} - X_{t} |^{2} \big]}. -$$ - -For ε > 0, denote by $N(T, d_X; \varepsilon)$ the entropy number, i.e. the minimal number of (open) $d_X$-balls of radius ε required to cover T. Then -$$ -\mathbf{E} \left[ \sup_{t \in T} X_{t} \right] \leq 24 \int_0^{+\infty} \sqrt{\log N(T, d_{X}; \varepsilon)} \mathrm{d} \varepsilon. -$$ - -Furthermore, if the entropy integral on the right-hand side converges, then X has a version with almost all sample paths bounded and (uniformly) continuous on $(T, d_X)$. diff --git a/wiki/wikipedia/470.txt b/wiki/wikipedia/470.txt deleted file mode 100644 index 801fa8f16e9be1285d918e9fb16060e688cc1956..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/470.txt +++ /dev/null @@ -1,341 +0,0 @@ -In mathematics, the Stolz–Cesàro theorem is a criterion for proving the convergence of a sequence. The theorem is named after mathematicians Otto Stolz and Ernesto Cesàro, who stated and proved it for the first time. - -The Stolz–Cesàro theorem can be viewed as a generalization of the Cesàro mean, but also as a l'Hôpital's rule for sequences. - -Let $(a_n)_{n \geq 1}$ and $(b_n)_{n \geq 1}$ be two sequences of real numbers. Assume that $(b_n)_{n \geq 1}$ is a strictly monotone and divergent sequence (i.e. strictly increasing and approaching $ + \infty $, or strictly decreasing and approaching $ - \infty $) and the following limit exists: -$$ - \lim_{n \to \infty} \frac{a_{n+1}-a_n}{b_{n+1}-b_n}=l.\ -$$ - -Then, the limit -$$ - \lim_{n \to \infty} \frac{a_n}{b_n}=l.\ -$$ - -Let $(a_n)_{n \geq 1}$ and $(b_n)_{n \geq 1}$ be two sequences of real numbers. Assume now that $(a_n)\to 0$ and $(b_n)\to 0$ while $(b_n)_{n \geq 1}$ is strictly decreasing. If -$$ - \lim_{n \to \infty} \frac{a_{n+1}-a_n}{b_{n+1}-b_n}=l,\ -$$ - -then -$$ - \lim_{n \to \infty} \frac{a_n}{b_n}=l.\ -$$ - -Case 1: suppose $(b_n)$ strictly increasing and divergent to $+\infty$, and $-\infty < l < +\infty$. By hypothesis, for all $\epsilon/2 > 0$ there exists $\nu > 0$ such that $\forall n > \nu$ -$$ -\left|\frac{a_{n+1}-a_n}{b_{n+1}-b_n}-l\right| < \frac{\epsilon}{2}, -$$ - -which is to say -$$ -l-\epsilon/2<\frac{a_{n+1}-a_n}{b_{n+1}-b_n}<l+\epsilon/2,\quad\forall n > \nu. -$$ - -Since $(b_n)$ is strictly increasing, $b_{n+1}-b_n>0$, and the following holds -$$ -(l-\epsilon/2)(b_{n+1}-b_n)<a_{n+1}-a_n<(l+\epsilon/2)(b_{n+1}-b_n),\quad\forall n > \nu -$$.
- -Next we notice that -$$ -a_n = [(a_n-a_{n-1})+\dots+(a_{\nu+2}-a_{\nu+1})]+a_{\nu+1} -$$ - -thus, by applying the above inequality to each of the terms in the square brackets, we obtain - -\begin{align} -&(l-\epsilon/2)(b_n-b_{\nu+1})+a_{\nu+1}=(l-\epsilon/2)[(b_n-b_{n-1})+\dots+(b_{\nu+2}-b_{\nu+1})]+a_{\nu+1}<a_n\\ -&a_n<(l+\epsilon/2)[(b_n-b_{n-1})+\dots+(b_{\nu+2}-b_{\nu+1})]+a_{\nu+1}=(l+\epsilon/2)(b_n-b_{\nu+1})+a_{\nu+1}. -\end{align} - -Now, since $b_n\to+\infty$ as $n\to\infty$, there is an $n_0>0$ such that $b_n>0$ for all $n>n_0$, and we can divide the two inequalities by $b_n$ for all $n>\max\{\nu,n_0\}$ -$$ -(l-\epsilon/2)+\frac{a_{\nu+1}-b_{\nu+1}(l-\epsilon/2)}{b_n}<\frac{a_n}{b_n}<(l+\epsilon/2)+\frac{a_{\nu+1}-b_{\nu+1}(l+\epsilon/2)}{b_n}. -$$ - -The two sequences (which are only defined for $n>n_0$ as there could be an $N\leq n_0$ such that $b_N=0$) -$$ -c^{\pm}_n:=\frac{a_{\nu+1}-b_{\nu+1}(l\pm\epsilon/2)}{b_n} -$$ - -are infinitesimal since $b_n\to+\infty$ and the numerator is a constant number, hence for all $\epsilon/2>0$ there exist $n_{\pm}>n_0>0$, such that - -\begin{align} - -&|c^+_n|<\epsilon/2,\quad\forall n > n_+,\\ - -&|c^-_n|<\epsilon/2,\quad\forall n > n_-, - -\end{align} - -therefore -$$ -l-\epsilon < l-\epsilon/2+c^-_n < \frac{a_n}{b_n} < l+\epsilon/2+c^+_n < l+\epsilon,\quad\forall n > \max\lbrace\nu,n_{\pm}\rbrace =: N, -$$ - -which concludes the proof. The case with $(b_n)$ strictly decreasing and divergent to $-\infty$, and $l<\infty$ is similar. - -Case 2: we assume $(b_n)$ strictly increasing and divergent to $+\infty$, and $l=+\infty$. Proceeding as before, for all $2M > 0$ there exists $\nu > 0$ such that for all $n > \nu$ -$$ -\frac{a_{n+1}-a_n}{b_{n+1}-b_n} > 2M. -$$ - -Again, by applying the above inequality to each of the terms inside the square brackets we obtain -$$ -a_n > 2M(b_n-b_{\nu+1}) + a_{\nu+1},\quad\forall n > \nu, -$$ - -and -$$ -\frac{a_n}{b_n} > 2M + \frac{a_{\nu+1}-2M b_{\nu+1}}{b_n},\quad\forall n > \max\{\nu,n_0\}. -$$ - -The sequence $(c_n)_{n>n_0}$ defined by -$$ -c_n := \frac{a_{\nu+1}-2M b_{\nu+1}}{b_n} -$$ - -is infinitesimal, thus -$$ -\forall M > 0\exists \bar{n}>n_0>0 \text{ such that } -M < c_n < M,\forall n > \bar{n}, -$$ - -combining this inequality with the previous one we conclude -$$ -\frac{a_n}{b_n} > 2M + c_n > M,\quad\forall n > \max\{\nu,\bar{n}\} =: N. -$$ - -The proofs of the other cases with $(b_n)$ strictly increasing or decreasing and approaching $+\infty$ or $-\infty$ respectively and $l=\pm\infty$ all proceed in this same way.
This time, for each $\nu > 0$, we can write -$$ -a_n = (a_n-a_{n+1})+\dots+(a_{n+\nu-1}-a_{n+\nu})+a_{n+\nu}, -$$ - -and for any $\epsilon>0,$ there exists $n_0$ such that for all $n>n_0$ we have, - -\begin{align} - -&(l-\epsilon/2)(b_n-b_{n+\nu})+a_{n+\nu} = (l-\epsilon/2)[(b_n-b_{n+1})+\dots+(b_{n+\nu-1}-b_{n+\nu})]+a_{n+\nu} < a_n\\ - -&a_n < (l+\epsilon/2)[(b_n-b_{n+1})+\dots+(b_{n+\nu-1}-b_{n+\nu})]+a_{n+\nu} = (l+\epsilon/2)(b_n-b_{n+\nu})+a_{n+\nu}.\end{align} - -The two sequences -$$ -c^{\pm}_\nu := \frac{a_{n+\nu}-b_{n+\nu}(l\pm\epsilon/2)}{b_n} -$$ - -are infinitesimal since by hypothesis $a_{n+\nu},b_{n+\nu} \to 0$ as $\nu\to\infty$, thus for all $\epsilon/2 > 0$ there are $\nu^{\pm} > 0$ such that - -\begin{align} - -&|c^+_\nu| < \epsilon/2,\quad\forall \nu>\nu_+,\\ - -&|c^-_\nu| < \epsilon/2,\quad\forall \nu>\nu_-, - -\end{align} - -thus, choosing $\nu$ appropriately (which is to say, taking the limit with respect to $\nu$) we obtain -$$ -l-\epsilon < l-\epsilon/2+c^-_\nu < \frac{a_n}{b_n} < l+\epsilon/2+c^+_\nu < l+\epsilon,\quad\forall n > n_0 -$$ - -which concludes the proof. - -Case 2: we assume $l=+\infty$ and $(b_n)$ strictly decreasing. For all $2M > 0$ there exists $n_0 > 0$ such that for all $n > n_0$ -$$ -\frac{a_{n+1}-a_n}{b_{n+1}-b_n} > 2M \implies a_n-a_{n+1} > 2M(b_{n}-b_{n+1}). -$$ - -Therefore, summing these inequalities from $n$ to $n+\nu-1$ and dividing by $b_n > 0$, for each $\nu > 0$ -$$ -\frac{a_n}{b_n} > 2M + \frac{a_{n+\nu}-2M b_{n+\nu}}{b_n},\quad\forall n > n_0 -$$ - -The sequence -$$ -c_{\nu} := \frac{a_{n+\nu}-2M b_{n+\nu}}{b_n} -$$ - -converges to $0$ (keeping $n$ fixed), hence -$$ -\forall M > 0~\exists \bar{\nu} > 0 \text{ such that } -M < c_\nu < M,\forall \nu > \bar{\nu}, -$$ - -and, choosing $\nu$ conveniently, we conclude the proof -$$ -\frac{a_n}{b_n} > 2M + c_\nu > M,\quad\forall n > n_0 -$$ - -The theorem concerning the $\cdot/\infty$ case has a few notable consequences which are useful in the computation of limits. - -Let $(x_n)$ be a sequence of real numbers which converges to $l$, define -$$ -a_n:=\sum_{m=1}^nx_m=x_1+\dots+x_n,\quad b_n:=n -$$ - -then $(b_n)$ is strictly increasing and diverges to $+\infty$. We compute -$$ -\lim_{n\to\infty}\frac{a_{n+1}-a_n}{b_{n+1}-b_n}=\lim_{n\to\infty} x_{n+1}=\lim_{n\to\infty} x_n=l -$$ - -therefore -$$ -\lim_{n\to\infty}\frac{x_1+\dots+ x_n}{n}=\lim_{n\to\infty}x_n. -$$
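A quick numerical illustration (not a proof) of this Cesàro-mean consequence, with the arbitrary choice $x_n = n/(n+1) \to 1$:

```python
# The running averages of a convergent sequence converge to the same limit.
N = 100_000
xs = [n / (n + 1) for n in range(1, N + 1)]
total, means = 0.0, []
for n, x in enumerate(xs, start=1):
    total += x
    means.append(total / n)
print(xs[-1], means[-1])  # both approach the limit 1 (the mean more slowly)
```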
Given any sequence $(x_n)_{n\geq 1}$ of real numbers, suppose that -$$ -\lim_{n\to\infty}x_n -$$ - -exists (finite or infinite), then -$$ -\lim_{n\to\infty}\frac{x_1+\dots+x_n}{n}=\lim_{n\to\infty}x_n. -$$
- -Let $(x_n)$ be a sequence of positive real numbers converging to $l$ and define -$$ -a_n:=\log(x_1\cdots x_n),\quad b_n:=n, -$$ - -again we compute -$$ -\lim_{n\to\infty}\frac{a_{n+1}-a_n}{b_{n+1}-b_n}=\lim_{n\to\infty}\log\Big(\frac{x_1\cdots x_{n+1}}{x_1\cdots x_n}\Big)=\lim_{n\to\infty}\log(x_{n+1})=\lim_{n\to\infty}\log(x_n)=\log(l), -$$ - -where we used the fact that the logarithm is continuous. Thus -$$ -\lim_{n\to\infty}\frac{\log(x_1\cdots x_n)}{n}=\lim_{n\to\infty}\log\Big((x_1\cdots x_n)^{\frac{1}{n}}\Big)=\log(l), -$$ - -since the logarithm is both continuous and injective we can conclude that -$$ -\lim_{n\to\infty}\sqrt[n]{x_1\cdots x_n}=\lim_{n\to\infty}x_n -$$.
Given any sequence $(x_n)_{n\geq 1}$ of (strictly) positive real numbers, suppose that -$$ -\lim_{n\to\infty}x_n -$$ - -exists (finite or infinite), then -$$ -\lim_{n\to\infty}\sqrt[n]{x_1\cdots x_n}=\lim_{n\to\infty}x_n. -$$
- -Suppose we are given a sequence $(y_n)_{n\geq1}$ and we are asked to compute -$$ -\lim_{n\to\infty}\sqrt[n]{y_n}, -$$ - -defining $y_0=1$ and $x_n=y_n/y_{n-1}$ we obtain -$$ -\lim_{n\to\infty}\sqrt[n]{x_1\dots x_n}=\lim_{n\to\infty}\sqrt[n]{\frac{y_1\dots y_{n}}{y_0\cdot y_1\dots y_{n-1}}}=\lim_{n\to\infty}\sqrt[n]{y_n}, -$$ - -if we apply the property above -$$ -\lim_{n\to\infty}\sqrt[n]{y_n}=\lim_{n\to\infty} x_n=\lim_{n\to\infty}\frac{y_n}{y_{n-1}}. -$$ - -This last form is usually the most useful to compute limits.
Given any sequence $(y_n)_{n\geq 1}$ of (strictly) positive real numbers, suppose that -$$ -\lim_{n\to\infty}\frac{y_{n+1}}{y_{n}} -$$ - -exists (finite or infinite), then -$$ -\lim_{n\to\infty}\sqrt[n]{y_n}=\lim_{n\to\infty}\frac{y_{n+1}}{y_{n}}. -$$
-$$ -\lim_{n\to\infty}\sqrt[n]{n}=\lim_{n\to\infty}\frac{n+1}{n}=1. -$$ - -\begin{align} - -\lim_{n\to\infty}\frac{\sqrt[n]{n!}}{n}&=\lim_{n\to\infty}\frac{(n+1)!(n^n)}{n!(n+1)^{n+1}}\\ - -&=\lim_{n\to\infty}\frac{n^n}{(n+1)^n}=\lim_{n\to\infty}\frac{1}{(1+\frac{1}{n})^n}=\frac{1}{e} - -\end{align} - -where we used the representation of $e$ as the limit of a sequence. - -The ∞/∞ case is stated and proved on pages 173–175 of Stolz's 1885 book and also on page 54 of Cesàro's 1888 article. - -It appears as Problem 70 in Pólya and Szegő (1925). - -The general form of the Stolz–Cesàro theorem is the following: If $ (a_n)_{n\geq 1}$ and $ (b_n)_{n\geq 1}$ are two sequences such that $(b_n)_{n \geq 1}$ is strictly monotone and divergent, then: -$$ -\liminf_{n\to\infty} \frac{a_{n+1}-a_n}{b_{n+1}-b_n}\leq \liminf_{n\to\infty}\frac{a_n}{b_n}\leq\limsup_{n\to\infty}\frac{a_n}{b_n}\leq\limsup_{n\to\infty}\frac{a_{n+1}-a_n}{b_{n+1}-b_n}. -$$ - -Instead of proving the previous statement, we shall prove a slightly different one; first we introduce a notation: let $(a_n)_{n\geq1}$ be any sequence, its partial sum will be denoted by $A_n:=\sum_{m=1}^n a_m$. The equivalent statement we shall prove is:
Let $(a_n)_{n\geq1},(b_n)_{n\geq1}$ be any two sequences of real numbers such that - -* $ b_n > 0, \quad \forall n\in {\mathbb{Z}}_{>0}$, - -* $\lim_{n\to\infty}B_n=+\infty$, - -then -$$ -\liminf_{n\to\infty}\frac{a_n}{b_n}\leq\liminf_{n\to\infty}\frac{A_n}{B_n}\leq\limsup_{n\to\infty}\frac{A_n}{B_n}\leq\limsup_{n\to\infty}\frac{a_n}{b_n}. -$$
- -First we notice that: - -*$\liminf_{n\to\infty}\frac{A_n}{B_n}\leq\limsup_{n\to\infty}\frac{A_n}{B_n}$ holds by definition of limit superior and limit inferior; - -*$\liminf_{n\to\infty}\frac{a_n}{b_n}\leq\liminf_{n\to\infty}\frac{A_n}{B_n}$ holds if and only if $\limsup_{n\to\infty}\frac{A_n}{B_n}\leq\limsup_{n\to\infty}\frac{a_n}{b_n}$ because $\liminf_{n\to\infty} x_n=-\limsup_{n\to\infty}(-x_n)$ for any sequence $(x_n)_{n\geq1}$. - -Therefore we need only to show that $\limsup_{n\to\infty}\frac{A_n}{B_n}\leq\limsup_{n\to\infty}\frac{a_n}{b_n}$. If $L:=\limsup_{n\to\infty}\frac{a_n}{b_n}=+\infty$ there is nothing to prove, hence we can assume $L<+\infty$ (it can be either finite or $-\infty$). By definition of $\limsup$, for all $l > L$ there is a natural number $\nu>0$ such that -$$ - \frac{a_n}{b_n}<l,\quad\forall n>\nu. -$$ - -We can use this inequality so as to write -$$ -A_n = A_\nu + a_{\nu + 1} + \dots + a_n < A_\nu + l(B_n - B_\nu), \quad\forall n > \nu. -$$ - -Because $b_n>0$, we also have $B_n>0$ and we can divide by $B_n$ to get -$$ -\frac{A_n}{B_n} < \frac{A_\nu - lB_\nu}{B_n} + l, \quad \forall n > \nu. -$$ - -Since $B_n\to+\infty$ as $n\to+\infty$, the sequence -$$ -\frac{A_{\nu}-lB_{\nu}}{B_n}\to0\text{ as } n\to+\infty \text{ (keeping }\nu\text{ fixed)}, -$$ - -and we obtain -$$ -\limsup_{n\to\infty} \frac{A_n}{B_n} \le l, \quad\forall l > L. -$$ - -By definition of least upper bound, this precisely means that -$$ -\limsup_{n\to\infty}\frac{A_n}{B_n}\leq L=\limsup_{n\to\infty}\frac{a_n}{b_n}, -$$ - -and we are done. - -Now, take $(a_n),(b_n)$ as in the statement of the general form of the Stolz-Cesàro theorem and define -$$ -\alpha_1=a_1,\alpha_k=a_k-a_{k-1},\forall k>1\quad\beta_1=b_1,\beta_k=b_k-b_{k-1}\forall k>1 -$$ - -since $(b_n)$ is strictly monotone (we can assume strictly increasing for example), $\beta_n>0$ for all $n$ and since $b_n\to+\infty$ also $B_n=b_1+(b_2-b_1)+\dots+(b_n-b_{n-1})=b_n\to+\infty$, thus we can apply the theorem we have just proved to $(\alpha_n),(\beta_n)$ (and their partial sums $(A_n),(B_n)$) -$$ -\limsup_{n\to\infty}\frac{a_n}{b_{n}}=\limsup_{n\to\infty}\frac{A_n}{B_n}\leq\limsup_{n\to\infty}\frac{\alpha_n}{\beta_n}=\limsup_{n\to\infty}\frac{a_n-a_{n-1}}{b_n-b_{n-1}}, -$$ - -which is exactly what we wanted to prove. diff --git a/wiki/wikipedia/471.txt b/wiki/wikipedia/471.txt deleted file mode 100644 index 9718a6b2d7406640d4ba29a1645476a80b6d5ca5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/471.txt +++ /dev/null @@ -1,3 +0,0 @@ -In Riemannian geometry, Gromov's (pre)compactness theorem states that the set of compact Riemannian manifolds of a given dimension, with Ricci curvature $\geq c$ and diameter $\leq D$ is relatively compact in the Gromov–Hausdorff metric. It was proved by Mikhail Gromov in 1981. - -This theorem is a generalization of Myers's theorem. diff --git a/wiki/wikipedia/472.txt b/wiki/wikipedia/472.txt deleted file mode 100644 index 5b4394e5fa9eb8a4c6a5ef8a29a01f800ed5b658..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/472.txt +++ /dev/null @@ -1,214 +0,0 @@ -The P versus NP problem is a major unsolved problem in computer science. It asks whether every problem whose solution can be quickly verified can also be solved quickly.
- -The informal term quickly, used above, means the existence of an algorithm solving the task that runs in polynomial time, such that the time to complete the task varies as a polynomial function on the size of the input to the algorithm (as opposed to, say, exponential time). The general class of questions for which some algorithm can provide an answer in polynomial time is "P" or "class P". For some questions, there is no known way to find an answer quickly, but if one is provided with information showing what the answer is, it is possible to verify the answer quickly. The class of questions for which an answer can be verified in polynomial time is NP, which stands for "nondeterministic polynomial time". - -An answer to the P versus NP question would determine whether problems that can be verified in polynomial time can also be solved in polynomial time. If it turns out that P ≠ NP, which is widely believed, it would mean that there are problems in NP that are harder to compute than to verify: they could not be solved in polynomial time, but the answer could be verified in polynomial time. - -The problem is considered by many to be the most important open problem in computer science. Aside from being an important problem in computational theory, a proof either way would have profound implications for mathematics, cryptography, algorithm research, artificial intelligence, game theory, multimedia processing, philosophy, economics and many other fields. - -It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, each of which carries a US$1,000,000 prize for the first correct solution. - -Consider Sudoku, a game where the player is given a partially filled-in grid of numbers and attempts to complete the grid following certain rules. Given an incomplete Sudoku grid, of any size, is there at least one legal solution? Any proposed solution is easily verified, and the time to check a solution grows slowly (polynomially) as the grid gets bigger. However, all known algorithms for finding solutions take, for difficult examples, time that grows exponentially as the grid gets bigger. So, Sudoku is in NP (quickly checkable) but does not seem to be in P (quickly solvable). Thousands of other problems seem similar, in that they are fast to check but slow to solve. Researchers have shown that many of the problems in NP have the extra property that a fast solution to any one of them could be used to build a quick solution to any other problem in NP, a property called NP-completeness. Decades of searching have not yielded a fast solution to any of these problems, so most scientists suspect that none of these problems can be solved quickly. This, however, has never been proven. - -The precise statement of the P versus NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" (and independently by Leonid Levin in 1973). - -Although the P versus NP problem was formally defined in 1971, there were previous inklings of the problems involved, the difficulty of proof, and the potential consequences. In 1955, mathematician John Nash wrote a letter to the NSA, where he speculated that cracking a sufficiently complex code would require time exponential in the length of the key. If proved (and Nash was suitably skeptical) this would imply what is now called P ≠ NP, since a proposed key can easily be verified in polynomial time. 
Another mention of the underlying problem occurred in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether theorem-proving (now known to be co-NP-complete) could be solved in quadratic or linear time, and pointed out one of the most important consequences—that if so, then the discovery of mathematical proofs could be automated. - -The relation between the complexity classes P and NP is studied in computational complexity theory, the part of the theory of computation dealing with the resources required during computation to solve a given problem. The most common resources are time (how many steps it takes to solve a problem) and space (how much memory it takes to solve a problem). - -In such analysis, a model of the computer for which time must be analyzed is required. Typically such models assume that the computer is deterministic (given the computer's present state and any inputs, there is only one possible action that the computer might take) and sequential (it performs actions one after the other). - -In this theory, the class P consists of all those decision problems (defined below) that can be solved on a deterministic sequential machine in an amount of time that is polynomial in the size of the input; the class NP consists of all those decision problems whose positive solutions can be verified in polynomial time given the right information, or equivalently, whose solution can be found in polynomial time on a non-deterministic machine. Clearly, P ⊆ NP. Arguably, the biggest open question in theoretical computer science concerns the relationship between those two classes: - -Is P equal to NP? - -Since 2002, William Gasarch has conducted three polls of researchers concerning this and related questions. Confidence that P ≠ NP has been increasing – in 2019, 88% believed P ≠ NP, as opposed to 83% in 2012 and 61% in 2002. When restricted to experts, the 2019 answers became 99% believe P ≠ NP. - -An example of an NP-complete problem is the following: given a nondeterministic polynomial-time machine M, does there exist an input that M accepts? It is in NP because (given an input) it is simple to check whether M accepts the input by simulating M; it is NP-complete because the verifier for any particular instance of a problem in NP can be encoded as a polynomial-time machine M that takes the solution to be verified as input. Then the question of whether the instance is a yes or no instance is determined by whether a valid input exists.
Using transformations like this, a vast class of seemingly unrelated problems are all reducible to one another, and are in a sense "the same problem". - -Although it is unknown whether P = NP, problems outside of P are known. Just as the class P is defined in terms of polynomial running time, the class EXPTIME is the set of all decision problems that have exponential running time. In other words, any problem in EXPTIME is solvable by a deterministic Turing machine in $O(2^{p(n)})$ time, where p(n) is a polynomial function of n. A decision problem is EXPTIME-complete if it is in EXPTIME, and every problem in EXPTIME has a polynomial-time many-one reduction to it. A number of problems are known to be EXPTIME-complete. Because it can be shown that P ≠ EXPTIME, these problems are outside P, and so require more than polynomial time. In fact, by the time hierarchy theorem, they cannot be solved in significantly less than exponential time. Examples include finding a perfect strategy for chess positions on an N × N board and similar problems for other board games. - -The problem of deciding the truth of a statement in Presburger arithmetic requires even more time. Fischer and Rabin proved in 1974 that every algorithm that decides the truth of Presburger statements of length n has a runtime of at least $2^{2^{cn}}$ for some constant c. Hence, the problem is known to need more than exponential run time. Even more difficult are the undecidable problems, such as the halting problem. They cannot be completely solved by any algorithm, in the sense that for any particular algorithm there is at least one input for which that algorithm will not produce the right answer; it will either produce the wrong answer, finish without giving a conclusive answer, or otherwise run forever without producing any answer at all. - -It is also possible to consider questions other than decision problems. One such class, consisting of counting problems, is called #P: whereas an NP problem asks "Are there any solutions?", the corresponding #P problem asks "How many solutions are there?" Clearly, a #P problem must be at least as hard as the corresponding NP problem, since a count of solutions immediately tells if at least one solution exists, if the count is greater than zero. Surprisingly, some #P problems that are believed to be difficult correspond to easy (for example linear-time) P problems. For these problems, it is very easy to tell whether solutions exist, but thought to be very hard to tell how many. Many of these problems are #P-complete, and hence among the hardest problems in #P, since a polynomial time solution to any of them would allow a polynomial time solution to all other #P problems. - -In 1975, Richard E. Ladner showed that if P ≠ NP then there exist problems in NP that are neither in P nor NP-complete. Such problems are called NP-intermediate; the graph isomorphism problem, which asks whether two finite graphs are structurally identical, is a well-known candidate. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai, runs in quasi-polynomial time.
No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP = co-NP). The best known algorithm for integer factorization is the general number field sieve, which takes expected time -$$ -O\left (\exp \left ( \left (\tfrac{64n}{9} \log(2) \right )^{\frac{1}{3}} \left ( \log(n\log(2)) \right )^{\frac{2}{3}} \right) \right ) -$$ - -to factor an n-bit integer. However, the best known quantum algorithm for this problem, Shor's algorithm, does run in polynomial time, although this does not indicate where the problem lies with respect to non-quantum complexity classes. - -All of the above discussion has assumed that P means "easy" and "not in P" means "difficult", an assumption known as Cobham's thesis. It is a common and reasonably accurate assumption in complexity theory; however, it has some caveats. - -First, it is not always true in practice. A theoretical polynomial algorithm may have extremely large constant factors or exponents, thus rendering it impractical. For example, the problem of deciding whether a graph G contains H as a minor, where H is fixed, can be solved in a running time of $O(n^2)$, where n is the number of vertices in G. However, the big O notation hides a constant that depends superexponentially on H. The constant is greater than $ 2 \uparrow \uparrow (2 \uparrow \uparrow (2 \uparrow \uparrow (h/2) ) ) $ (using Knuth's up-arrow notation), where h is the number of vertices in H. - -On the other hand, even if a problem is shown to be NP-complete, and even if P ≠ NP, there may still be effective approaches to tackling the problem in practice. There are algorithms for many NP-complete problems, such as the knapsack problem, the traveling salesman problem and the Boolean satisfiability problem, that can solve to optimality many real-world instances in reasonable time. The empirical average-case complexity (time vs. problem size) of such algorithms can be surprisingly low. An example is the simplex algorithm in linear programming, which works surprisingly well in practice; despite having exponential worst-case time complexity, it runs on par with the best known polynomial-time algorithms. - -Finally, there are types of computations which do not conform to the Turing machine model on which P and NP are defined, such as quantum computation and randomized algorithms. - -Cook provides a restatement of the problem in The P Versus NP Problem as: Does P = NP? A practical polynomial-time solution to an NP-complete problem such as 3-SAT would break most existing cryptosystems, including: - -* Existing implementations of public-key cryptography, a foundation for many modern security applications such as secure financial transactions over the Internet. - -* Symmetric ciphers such as AES or 3DES, used for the encryption of communications data. - -* Cryptographic hashing, which underlies blockchain cryptocurrencies such as Bitcoin, and is used to authenticate software updates. For these applications, the problem of finding a pre-image that hashes to a given value must be difficult in order to be useful, and ideally should require exponential time. However, if P = NP, then finding a pre-image M can be done in polynomial time, through reduction to SAT.
- -These would need to be modified or replaced by information-theoretically secure solutions not inherently based on P-NP inequivalence. - -On the other hand, there are enormous positive consequences that would follow from rendering tractable many currently mathematically intractable problems. For instance, many problems in operations research are NP-complete, such as some types of integer programming and the travelling salesman problem. Efficient solutions to these problems would have enormous implications for logistics. Many other important problems, such as some problems in protein structure prediction, are also NP-complete; if these problems were efficiently solvable, it could spur considerable advances in life sciences and biotechnology. - -But such changes may pale in significance compared to the revolution an efficient method for solving NP-complete problems would cause in mathematics itself. Gödel, in his early thoughts on computational complexity, noted that a mechanical method that could solve any problem would revolutionize mathematics. Similarly, Stephen Cook (assuming not only a proof, but a practically efficient algorithm) has made the same point. - -Research mathematicians spend their careers trying to prove theorems, and some proofs have taken decades or even centuries to find after problems have been stated—for instance, Fermat's Last Theorem took over three centuries to prove. A method that is guaranteed to find proofs to theorems, should one exist of a "reasonable" size, would essentially end this struggle. - -Donald Knuth has stated that he has come to believe that P = NP, but is reserved about the impact of a possible proof. - -A proof that showed that P ≠ NP would lack the practical computational benefits of a proof that P = NP, but would nevertheless represent a very significant advance in computational complexity theory and provide guidance for future research. It would allow one to show in a formal way that many common problems cannot be solved efficiently, so that the attention of researchers can be focused on partial solutions or solutions to other problems. Due to widespread belief in P ≠ NP, much of this focusing of research has already taken place. - -P ≠ NP also still leaves open the average-case complexity of hard problems in NP. For example, it is possible that SAT requires exponential time in the worst case, but that almost all randomly selected instances of it are efficiently solvable. Russell Impagliazzo has described five hypothetical "worlds" that could result from different possible resolutions to the average-case complexity question. These range from "Algorithmica", where P = NP and problems like SAT can be solved efficiently in all instances, to "Cryptomania", where P ≠ NP and generating hard instances of problems outside P is easy, with three intermediate possibilities reflecting different possible distributions of difficulty over instances of NP-hard problems. The "world" where P ≠ NP but all problems in NP are tractable in the average case is called "Heuristica" in the paper. A Princeton University workshop in 2009 studied the status of the five worlds. - -Although the P = NP problem itself remains open despite a million-dollar prize and a huge amount of dedicated research, efforts to solve the problem have led to several new techniques.
In particular, some of the most fruitful research related to the P = NP problem has been in showing that existing proof techniques are not powerful enough to answer the question, thus suggesting that novel technical approaches are required. - -As additional evidence for the difficulty of the problem, essentially all known proof techniques in computational complexity theory fall into one of the following classifications, each of which is known to be insufficient to prove that P ≠ NP: - -* Relativizing proofs, ruled out by oracle constructions such as that of Baker, Gill, and Solovay - -* Natural proofs, ruled out under standard cryptographic assumptions by Razborov and Rudich - -* Algebrizing proofs, ruled out by Aaronson and Wigderson - -These barriers are another reason why NP-complete problems are useful: if a polynomial-time algorithm can be demonstrated for an NP-complete problem, this would solve the P = NP problem in a way not excluded by the above results. - -These barriers have also led some computer scientists to suggest that the P versus NP problem may be independent of standard axiom systems like ZFC (cannot be proved or disproved within them). The interpretation of an independence result could be that either no polynomial-time algorithm exists for any NP-complete problem, and such a proof cannot be constructed in (e.g.) ZFC, or that polynomial-time algorithms for NP-complete problems may exist, but it is impossible to prove in ZFC that such algorithms are correct. However, if it can be shown, using techniques of the sort that are currently known to be applicable, that the problem cannot be decided even with much weaker assumptions extending the Peano axioms (PA) for integer arithmetic, then there would necessarily exist nearly-polynomial-time algorithms for every problem in NP. Therefore, if one believes (as most complexity theorists do) that not all problems in NP have efficient algorithms, it would follow that proofs of independence using those techniques cannot be possible. Additionally, this result implies that proving independence from PA or ZFC using currently known techniques is no easier than proving the existence of efficient algorithms for all problems in NP. - -While the P versus NP problem is generally considered unsolved, many amateur and some professional researchers have claimed solutions. Gerhard J. Woeginger maintains a list that, as of 2016, contains 62 purported proofs of P = NP, 50 proofs of P ≠ NP, 2 proofs the problem is unprovable, and one proof that it is undecidable. Some attempts at resolving P versus NP have received brief media attention, though these attempts have since been refuted. - -The P = NP problem can be restated in terms of the expressibility of certain classes of logical statements, as a result of work in descriptive complexity. - -Consider all languages of finite structures with a fixed signature including a linear order relation. Then, all such languages in P can be expressed in first-order logic with the addition of a suitable least fixed-point combinator. Effectively, this, in combination with the order, allows the definition of recursive functions. As long as the signature contains at least one predicate or function in addition to the distinguished order relation, so that the amount of space taken to store such finite structures is actually polynomial in the number of elements in the structure, this precisely characterizes P. - -Similarly, NP is the set of languages expressible in existential second-order logic—that is, second-order logic restricted to exclude universal quantification over relations, functions, and subsets. The languages in the polynomial hierarchy, PH, correspond to all of second-order logic.
Thus, the question "is P a proper subset of NP" can be reformulated as "is existential second-order logic able to describe languages (of finite linearly ordered structures with nontrivial signature) that first-order logic with least fixed point cannot?". The word "existential" can even be dropped from the previous characterization, since P = NP if and only if P = PH (as the former would establish that NP = co-NP, which in turn implies that NP = PH). - -No algorithm for any NP-complete problem is known to run in polynomial time. However, there are algorithms known for NP-complete problems with the property that if P = NP, then the algorithm runs in polynomial time on accepting instances (although with enormous constants, making the algorithm impractical). Nevertheless, these algorithms do not qualify as polynomial time because their running time on rejecting instances is not polynomial. The following algorithm, due to Levin (without any citation), is one such example. It correctly accepts the NP-complete language SUBSET-SUM. It runs in polynomial time on inputs that are in SUBSET-SUM if and only if P = NP: - -// Algorithm that accepts the NP-complete language SUBSET-SUM. - -// - -// this is a polynomial-time algorithm if and only if P = NP. - -// - -// "Polynomial-time" means it returns "yes" in polynomial time when - -// the answer should be "yes", and runs forever when it is "no". - -// - -// Input: S = a finite set of integers - -// Output: "yes" if any subset of S adds up to 0. - -// Runs forever with no output otherwise. - -// Note: "Program number M" is the program obtained by - -// writing the integer M in binary, then - -// considering that string of bits to be a - -// program. Every possible program can be - -// generated this way, though most do nothing - -// because of syntax errors. - -FOR K = 1...∞ - -FOR M = 1...K - -Run program number M for K steps with input S - -IF the program outputs a list of distinct integers - -AND the integers are all in S - -AND the integers sum to 0 - -THEN - -OUTPUT "yes" and HALT - -If, and only if, P = NP, then this is a polynomial-time algorithm accepting an NP-complete language. "Accepting" means it gives "yes" answers in polynomial time, but is allowed to run forever when the answer is "no" (also known as a semi-algorithm). - -This algorithm is enormously impractical, even if P = NP. If the shortest program that can solve SUBSET-SUM in polynomial time is b bits long, the above algorithm will try at least $2^b - 1$ other programs first. - -Conceptually speaking, a decision problem is a problem that takes as input some string w over an alphabet Σ, and outputs "yes" or "no". If there is an algorithm (say a Turing machine, or a computer program with unbounded memory) that can produce the correct answer for any input string of length n in at most $cn^k$ steps, where k and c are constants independent of the input string, then we say that the problem can be solved in polynomial time and we place it in the class P. Formally, P is defined as the set of all languages that can be decided by a deterministic polynomial-time Turing machine.
That is, -$$ -\mathbf{P} = \{ L : L=L(M) \text{ for some deterministic polynomial-time Turing machine } M \} -$$ - -where -$$ -L(M) = \{ w\in\Sigma^{*}: M \text{ accepts } w \} -$$ - -and a deterministic polynomial-time Turing machine is a deterministic Turing machine M that satisfies the following two conditions: - -# M halts on all inputs w and - -# there exists $k \in N$ such that $T_M(n)\in O(n^k)$, where O refers to the big O notation and -$$ -T_M(n) = \max\{ t_M(w) : w\in\Sigma^{*}, |w| = n \} -$$ -$$ -t_M(w) = \text{ number of steps }M\text{ takes to halt on input }w. -$$ - -NP can be defined similarly using nondeterministic Turing machines (the traditional way). However, a modern approach to define NP is to use the concept of certificate and verifier. Formally, NP is defined as the set of languages over a finite alphabet that have a verifier that runs in polynomial time, where the notion of "verifier" is defined as follows. - -Let L be a language over a finite alphabet, Σ. - -L ∈ NP if, and only if, there exists a binary relation $R\subset\Sigma^{*}\times\Sigma^{*}$ and a positive integer k such that the following two conditions are satisfied: - -# For all $x\in\Sigma^{*}$, $x\in L \Leftrightarrow\exists y\in\Sigma^{*}$ such that (x, y) ∈ R and $|y|\in O(|x|^k)$; and - -# the language $L_{R} = \{ x\# y:(x,y)\in R\}$ over $\Sigma\cup\{\#\}$ is decidable by a deterministic Turing machine in polynomial time. - -A Turing machine that decides $L_R$ is called a verifier for L and a y such that (x, y) ∈ R is called a certificate of membership of x in L. - -In general, a verifier does not have to be polynomial-time. However, for L to be in NP, there must be a verifier that runs in polynomial time. - -Let -$$ -\mathrm{COMPOSITE} = \left \{x\in\mathbb{N} \mid x=pq \text{ for integers } p, q > 1 \right \} -$$ -$$ -R = \left \{(x,y)\in\mathbb{N} \times\mathbb{N} \mid 1 < y < x \text{ and } y \text{ divides } x \right \}; -$$ - -here a nontrivial divisor y of x serves as the certificate, and it can be checked with a single division, so COMPOSITE ∈ NP. - -* A sine wave can be specified by its amplitude A > 0, angular frequency ω > 0, and phase φ ∈ $S^1$. Thus the parameter space is $R^+ \times R^+ \times S^1 .$ - -* In complex dynamics, the parameter space is the complex plane C = { z = x + y i : x, y ∈ R }, where $i^2 = -1$. - -The famous Mandelbrot set is a subset of this parameter space, consisting of the points in the complex plane which give a bounded set of numbers when a particular iterated function is repeatedly applied from that starting point. The remaining points, which are not in the set, give an unbounded set of numbers (they tend to infinity) when this function is repeatedly applied from that starting point. (A membership-test sketch follows the list below.) - -* In machine learning, an artificial neural network is a model that consists of a directed graph, with weights (real numbers) on the edges of the graph. The parameter space is known as a weight space, and "learning" consists of updating the parameters, most often by gradient descent or some variant.
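Here is the membership-test sketch promised above for the Mandelbrot example; the escape radius 2 is standard, while the iteration cap is an arbitrary assumption:

```python
# A point c of the parameter space (complex plane) is approximately in the
# Mandelbrot set if iterating z -> z^2 + c from z = 0 stays bounded.
def in_mandelbrot(c, max_iter=100):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # the orbit escapes, so c is not in the set
            return False
    return True               # presumed bounded within the iteration cap

print(in_mandelbrot(0j), in_mandelbrot(1 + 0j))  # True False
```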
The number of dimensions of a particular form of geometry could now be any positive number, depending on the number of parameters necessary to define the "element". - -The requirement for higher dimensions is illustrated by Plücker's line geometry. Struik writes - -[Plücker's] geometry of lines in three-space could be considered as a four-dimensional geometry, or, as Klein has stressed, as the geometry of a four-dimensional quadric in a five-dimensional space. - -Thus the Klein quadric describes the parameters of lines in space. diff --git a/wiki/wikipedia/476.txt b/wiki/wikipedia/476.txt deleted file mode 100644 index eaf8189192d78e381a7105c57abcb851f5fe0c34..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/476.txt +++ /dev/null @@ -1,7 +0,0 @@ -In algebraic geometry, a field of mathematics, the AF+BG theorem (also known as Max Noether's fundamental theorem) is a result of Max Noether that asserts that, if the equation of an algebraic curve in the complex projective plane belongs locally (at each intersection point) to the ideal generated by the equations of two other algebraic curves, then it belongs globally to this ideal. - -Let F, G, and H be homogeneous polynomials in three variables, with H having higher degree than F and G; let a = deg H − deg F and b = deg H − deg G (both positive integers) be the differences of the degrees of the polynomials. Suppose that the greatest common divisor of F and G is a constant, which means that the projective curves that they define in the projective plane P^2 have an intersection consisting of a finite number of points. For each point P of this intersection, the polynomials F and G generate an ideal (F, G)_P of the local ring of P^2 at P (this local ring is the ring of the fractions n/d, where n and d are polynomials in three variables and d(P) ≠ 0). The theorem asserts that, if H lies in (F, G)_P for every intersection point P, then H lies in the ideal (F, G); that is, there are homogeneous polynomials A and B of degrees a and b, respectively, such that H = AF + BG. Furthermore, any two choices of A differ by a multiple of G, and similarly any two choices of B differ by a multiple of F. - -This theorem may be viewed as a generalization of Bézout's identity, which provides a condition under which an integer or a univariate polynomial h may be expressed as an element of the ideal generated by two other integers or univariate polynomials f and g: such a representation exists exactly when h is a multiple of the greatest common divisor of f and g. The AF+BG condition expresses, in terms of divisors (sets of points, with multiplicities), a similar condition under which a homogeneous polynomial H in three variables can be written as an element of the ideal generated by two other polynomials F and G. - -This theorem is also a refinement, for this particular case, of Hilbert's Nullstellensatz, which provides a condition expressing that some power of a polynomial h (in any number of variables) belongs to the ideal generated by a finite set of polynomials. diff --git a/wiki/wikipedia/477.txt b/wiki/wikipedia/477.txt deleted file mode 100644 index 8ca2151d0e0ce513043f21d4a7a911c20f99e773..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/477.txt +++ /dev/null @@ -1,130 +0,0 @@ -Sudoku puzzles can be studied mathematically to answer questions such as "How many filled Sudoku grids are there?", "What is the minimal number of clues in a valid puzzle?" and "In what ways can Sudoku grids be symmetric?"
through the use of combinatorics and group theory. - -The main results are that for the classical Sudoku the number of filled grids is 6,670,903,752,021,072,936,960 (6.67×10^21), which reduces to 5,472,730,538 essentially different grids under the validity preserving transformations. There are 26 types of symmetry, but they can only be found in about 0.005% of all filled grids. A puzzle with a unique solution must have at least 17 clues, and there is a solvable puzzle with at most 21 clues for every solved grid. The largest minimal puzzle found so far has 40 clues. - -Similar results are known for variants and smaller grids. No exact results are known for Sudokus larger than the classical 9×9 grid, although there are estimates which are believed to be fairly accurate. - -The analysis of Sudoku falls into two main areas: - -# analyzing the properties of completed grids - -# analyzing the properties of puzzles. - -Initial analysis was largely focused on enumerating solutions, with results first appearing in 2004. - -There are many Sudoku variants, partially characterized by size (N), and the shape of their regions. Unless noted, discussion in this article assumes classic Sudoku, i.e. N=9 (a 9×9 grid and 3×3 regions). A rectangular Sudoku uses rectangular regions of row-column dimension R×C. Other variants include those with irregularly-shaped regions or with additional constraints (hypercube) or different constraint types (Samunamupure). - -Regions are also called blocks or boxes. A band is a part of the grid that encapsulates 3 rows and 3 boxes, and a stack is a part of the grid that encapsulates 3 columns and 3 boxes. A puzzle is a partially completed grid, and the initial values are givens or clues. A proper puzzle has a unique solution. A minimal puzzle is a proper puzzle from which no clue can be removed without introducing additional solutions. See Glossary of Sudoku for other terminology. - -Solving Sudokus from the viewpoint of a player has been explored in Denis Berthier's book "The Hidden Logic of Sudoku" (2007) which considers strategies such as "hidden xy-chains". - -The general problem of solving Sudoku puzzles on n^2×n^2 grids of n×n blocks is known to be NP-complete. For n=3 (classical Sudoku), however, this result is of little practical relevance: algorithms can solve puzzles in a fraction of a second because of the small size of the problem. - -A puzzle can be expressed as a graph coloring problem. The aim is to construct a 9-coloring of a particular graph, given a partial 9-coloring. The Sudoku graph has 81 vertices, one vertex for each cell. The vertices are labeled with ordered pairs (x, y), where x and y are integers between 1 and 9. In this case, two distinct vertices labeled by (x, y) and (x′, y′) are joined by an edge if and only if: - -* x = x′ (same column) or, - -* y = y′ (same row) or, - -* ⌈ x/3 ⌉ = ⌈ x′/3 ⌉ and ⌈ y/3 ⌉ = ⌈ y′/3 ⌉ (same 3×3 cell) - -The puzzle is then completed by assigning an integer between 1 and 9 to each vertex, in such a way that vertices that are joined by an edge do not have the same integer assigned to them (a sketch of this adjacency relation is given below). - -A Sudoku solution grid is also a Latin square. For N ≥ 4, some tilings of the grid into regions are not compatible with any Latin square; i.e. all Sudoku puzzles on such a tiling have no solution. Complete enumeration of the classical grid yields the 6,670,903,752,021,072,936,960 (6.67×10^21) distinct solutions quoted above.
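To make the graph-coloring formulation concrete, here is a minimal Python sketch of the adjacency relation of the Sudoku graph (illustrative only; the function name and layout are assumptions, not from the article):

    from math import ceil

    def adjacent(v, w):
        # Cells are labeled (x, y) with 1 <= x, y <= 9, as in the text.
        (x, y), (x2, y2) = v, w
        if v == w:
            return False  # the graph has no loops
        return (x == x2                                 # same column
                or y == y2                              # same row
                or (ceil(x / 3) == ceil(x2 / 3)
                    and ceil(y / 3) == ceil(y2 / 3)))   # same 3x3 cell

    # Each vertex has 8 neighbors in its column, 8 in its row, and 4 more in
    # its box, so the Sudoku graph is 20-regular:
    assert sum(adjacent((1, 1), (x, y))
               for x in range(1, 10) for y in range(1, 10)) == 20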
- -In a 2005 study, Felgenhauer and Jarvis enumerated the classical 9×9 solution grids; the conjugacy classes of the group of validity preserving transformations represent the different types of symmetry (self-similarity or automorphism) that can be found in completed sudoku grids. Using this technique, Ed Russell and Frazer Jarvis were the first to compute the number of essentially different sudoku solutions as 5,472,730,538. An estimate based on calculated band counts (detailed in the sections below) can be compared to the true grid count: it is an underestimate in all cases evaluated so far. The numbers for square block grids (n^2 × n^2) and for 2 × n blocks (2n × 2n grids) have been tabulated. - -Similar to Latin squares, the number of sudoku grids can be reduced by noting that there is a one-to-one correspondence with a partially standardized form, in which the first block has the canonical labels and both the top row and leftmost column are sorted (for as much as the rules allow, i.e. within blocks and the stacks/bands themselves). For a grid with $R\times C$ blocks, each such reduced grid corresponds to $(RC)! \times (R!)^{C-1}(C-1)! \times (C!)^{R-1} (R-1)! = (RC-1)! \cdot R!^C \cdot C!^R$ total grids, which is $9! \times 72^2$ or 1,881,169,920 for the normal 3×3 Sudoku. This reduction is always one-to-one, unlike the action of the full set of validity preserving transformations ('Sudoku symmetries') discussed below. - -A solved Sudoku will remain valid under the actions of the validity preserving transformations (see below). The band-counting approach splits the band into subbands, which are then grouped into equivalence classes; it is currently the fastest known technique for exact evaluation of these band counts $b_{R,C}$. - -Ordinary Sudokus (proper puzzles) have a unique solution. A minimal Sudoku is a Sudoku from which no clue can be removed leaving it a proper Sudoku. Different minimal Sudokus can have a different number of clues. This section discusses the minimum number of givens for proper puzzles. - -Many Sudokus have been found with 17 clues, although finding them is not a trivial task. A paper by Gary McGuire, Bastian Tugemann, and Gilles Civario, released on 1 January 2012, explains how it was proved through an exhaustive computer search that the minimum number of clues in any proper Sudoku is 17, and this was independently confirmed in September 2013. A few 17-clue puzzles with diagonal symmetry were provided by Ed Russell, after a search through equivalence transformations of Gordon Royle's database of 17-clue puzzles. - -* 8×8(2×4) Sudoku: The fewest clues is 14. It is not known if this is the fewest possible. - -* 12×12(2×6) Sudoku: At least one puzzle with 32 clues has been created. - -"Du-sum-oh" (a.k.a. "geometry number place") puzzles replace the 3×3 (or R×C) regions of Sudoku with irregular shapes of a fixed size. Bob Harris has proved that it is always possible to create (N − 1)-clue du-sum-ohs on an N×N grid, and has constructed several examples. Johan de Ruiter has proved that for any N>3 there exist polyomino tilings that can not be turned into a Sudoku puzzle with N irregular shapes of size N. - -In sum number place (Samunampure), the usual constraints of no repeated value in any row, column or region apply; additionally, extra regions ("cages") of various size and shape which cannot contain repeats are present, with clues providing the sum of digits within the cage (e.g. a 4-cell cage with sum 10 must consist of values 1,2,3,4 in some order).
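The cage constraint just described is easy to enumerate. A short Python sketch (illustrative, not from the article) listing the possible digit sets for a cage of a given size and clue sum:

    from itertools import combinations

    def cage_digit_sets(size, total):
        # Distinct digits 1-9, no repeats within a cage, summing to the clue.
        return [c for c in combinations(range(1, 10), size) if sum(c) == total]

    print(cage_digit_sets(4, 10))   # [(1, 2, 3, 4)] -- the example in the text
    print(cage_digit_sets(2, 10))   # [(1, 9), (2, 8), (3, 7), (4, 6)]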
The minimal cell coverage known is 18 cells, provided by Philip Newman, and this is conjectured to be the fewest possible (a 17-cell example would imply a currently unknown 17-clue classic sudoku). The minimum number of cages known is 7, also provided by Philip Newman; it is not known if this is the fewest possible. - -A variant on Miyuki Misawa's web site replaces sums with relations: the clues are symbols =, < and > showing the relative values of (some but not all) adjacent region sums. She demonstrates an example with only eight relations. It is not known whether this is the best possible. - -The most clues for a minimal Sudoku is believed to be 40, of which only two are known. If any clue is removed from either of these Sudokus, the puzzle would have more than one solution (and thus not be a proper Sudoku). In the work to find these Sudokus, other high-clue puzzles were catalogued, including more than 6,500,000,000 minimal puzzles with 36 clues. About 2600 minimal Sudokus with 39 clues were also found. Statistical sampling shows that there are approximately (with 0.065% relative error): - -* 3.10 × 10^37 minimal puzzles, - -* 2.55 × 10^25 non-essentially-equivalent minimal puzzles. - -Other authors used faster methods and calculated additional precise distribution statistics. - -It has been conjectured that no proper Sudoku can have clues limited to certain restricted patterns of positions. The largest rectangular orthogonal "hole" (region with no clues) in a proper Sudoku is believed to be a rectangle of 30 cells (a 5×6 rectangular area); one example is a Sudoku with 22 clues. The largest total number of empty groups (rows, columns, and boxes) in a Sudoku is believed to be nine; one example is a Sudoku with 3 empty rows, 3 empty columns, and 3 empty boxes. - -A Sudoku grid is automorphic if it can be transformed in a way that leads back to the original grid, when that same transformation would not otherwise lead back to the original grid. One example of a grid which is automorphic would be a grid which can be rotated 180 degrees resulting in a new grid where the new cell values are a permutation of the original grid. Automorphic Sudokus are Sudoku puzzles which solve to an automorphic grid. Two examples of automorphic Sudokus and an automorphic grid are described below. - -In the first two examples, notice that if the Sudoku is rotated 180 degrees, and the clues relabeled with the permutation (123456789) -> (987654321), it returns to the same Sudoku. Expressed another way, these Sudokus have the property that every 180-degree rotational pair of clues (a, b) follows the rule (a) + (b) = 10 (a mechanical check of this property is sketched at the end of this section). - -Since these Sudokus are automorphic, so too are their solution grids. Furthermore, every cell which is solved has a symmetrical partner which is solved with the same technique (and the pair would take the form a + b = 10). Notice that in the second example, the Sudoku also exhibits translational (or repetition) symmetry; clues are clustered in groups, with the clues in each group ordered sequentially (i.e., n, n+1, n+2, and n+3). - -The third example is the Most Canonical solution grid. This grid has 648 automorphisms and contributes to the total of ~6.67×10^21 solution grids with a weight of 1/648 compared to any non-automorphic grid. - -In these examples the automorphisms are easy to identify, but in general automorphism is not always obvious. The number of essentially different Sudoku solution grids for each of the existing automorphisms has been tabulated.
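The 180-degree rotational symmetry described above can be tested mechanically. A minimal Python sketch (the helper name and grid representation are assumptions, not from the article):

    def rotation_automorphic(grid):
        # grid: 9x9 list of lists with values 1..9 (a completed solution).
        # Rotating 180 degrees maps cell (r, c) to (8 - r, 8 - c); the grid is
        # automorphic under this rotation combined with the relabeling
        # n -> 10 - n exactly when every pair of opposite cells sums to 10
        # (which forces the center cell to be 5).
        return all(grid[r][c] + grid[8 - r][8 - c] == 10
                   for r in range(9) for c in range(9))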
- -An enumeration technique based on band generation was developed that is significantly less computationally intensive. The strategy begins by analyzing the permutations of the top band used in valid solutions. Once the Band1 symmetries and equivalence class for the partial solutions are identified, the completions of the lower two bands are constructed and counted for each equivalence class. - -The Band1 algorithm proceeds as follows: - -* Choose a canonical labeling of the digits by assigning values for B1 (see grid), and compute the rest of the Band1 permutations relative to B1. - -* Compute the permutations of B2 by partitioning the B1 cell values over the B2 row triplets. From the triplet combinations compute the B2 permutations. There are k=0..3 ways to choose the: - -B1 r11 values for B2 r22, the rest must go to r16, - -B1 r12 values for B2 r23, the rest must go to r16, - -B1 r13 values for B2 r21, the rest must go to r16, i.e. -$$ - \mbox{N combinations for B2} = \sum_{k=0}^{3}{{3 \choose k}^3} -$$ - -(This expression may be generalized to any R×3 box band variant, as established by Pettersen.) - -The following sequence demonstrates mapping a band configuration to a counting symmetry equivalence class. Begin with a valid band configuration (1). Build column triplets by ordering the column values within each column. This is not a valid Sudoku band, but does place the same constraints on the lower bands as the example (2). Construct an equivalence class ID from the B2, B3 column triplet values. Use column and box swaps to achieve the lowest lexicographical ID. The last figure shows the column and box ordering for the ID: 124 369 578 138 267 459. All Band1 permutations with this counting symmetry ID will have the same number of grid completions as the original example. An extension of this process can be used to build the largest possible band counting symmetry equivalence classes (3). - -Note, while column triplets are used to construct and identify the equivalence classes, the class members themselves are the valid Band1 permutations: class size (Sb.z) reflects column triplet permutations compatible with the One Rule solution requirements. Counting symmetry is a completion property and applies only to a partial grid (band or stack). Solution symmetry for preserving solutions can be applied to either partial grids (bands, stacks) or full grid solutions. Lastly, note that counting symmetry is more restrictive than simple numeric completion count equality: two (distinct) bands belong to the same counting symmetry equivalence class only if they impose equivalent completion constraints. - -Symmetries group similar objects into equivalence classes. Two numbers need to be distinguished for equivalence classes, and for band symmetries as used here, a third: - -* the number of equivalence classes ({Sb}.n). - -* the cardinality, size or number of elements in an equivalence class, which may vary by class (Sb.z) - -* the number of Band2,3 completions compatible with a member of a Band1 equivalence class (Sb.n) - -The Band1 (6^5) symmetries divide the (56×6^6) Band1 valid permutations into (not less than) 336 (56×6) equivalence classes with (up to) 6^5 permutations each. - -The not less than and up to caveats are necessary, since some combinations of the transformations may not produce distinct results, when relabeling is required (see below). Consequently, some equivalence classes may contain fewer than 6^5 distinct permutations and the theoretical minimum number of classes may not be achieved.
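The binomial sum above is small enough to evaluate directly. A quick Python check (a sketch; the quantities are the ones quoted in the text):

    from math import comb, factorial

    n_B2 = sum(comb(3, k) ** 3 for k in range(4))
    print(n_B2)                          # 56 triplet combinations for B2

    # Each of the six row triplets in B2 and B3 can be internally ordered in
    # 3! = 6 ways, giving the Band1 permutation count used below:
    print(n_B2 * 6 ** 6)                 # 56 * 6^6 = 2,612,736
    print(factorial(9) * n_B2 * 6 ** 6)  # 9! * 56 * 6^6 Band1 configurations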
- -Each of the valid Band1 permutations can be expanded (completed) into a specific number of solutions with the Band2,3 permutations. By virtue of their similarity, each member of an equivalence class will have the same number of completions. Consequently, we only need to construct the solutions for one member of each equivalence class and then multiply the number of solutions by the size of the equivalence class. We are still left with the task of identifying and calculating the size of each equivalence class. Further progress requires the dexterous application of computational techniques to catalogue (classify and count) the permutations into equivalence classes. - -Felgenhauer/Jarvis catalogued the Band1 permutations using lexicographically ordered IDs based on the ordered digits from blocks B2,3. Block 1 uses a canonical digit assignment and is not needed for a unique ID. Equivalence class identification and linkage uses the lowest ID within the class. - -Application of the (2×6^2) B2,3 symmetry permutations produces 36288 (28×6^4) equivalence classes, each of size 72. Since the size is fixed, the computation only needs to find the 36288 equivalence class IDs. (Note: in this case, for any Band1 permutation, applying these permutations to achieve the lowest ID provides an index to the associated equivalence class.) - -Application of the rest of the block, column and row symmetries provided further reduction, i.e. allocation of the 36288 IDs into fewer, larger equivalence classes. - -When the B1 canonical labeling is lost through a transformation, the result is relabeled to the canonical B1 usage and then catalogued under this ID. This approach generated 416 equivalence classes, somewhat less effective than the theoretical 336 minimum limit for a full reduction. Application of counting symmetry patterns for duplicate paired digits achieved reduction to 174 and then to 71 equivalence classes. The introduction of equivalence classes based on band counting symmetry (subsequent to Felgenhauer/Jarvis by Russell) reduced the equivalence classes to a minimum generating set of 44. - -The diversity of the ~2.6×10^6 (56×6^6) Band1 permutations can be reduced to a set of 44 Band1 equivalence classes. Each of the 44 equivalence classes can be expanded to millions of distinct full solutions, but the entire solution space has a common origin in these 44. The 44 equivalence classes play a central role in other enumeration approaches as well, and discussion will return to the characteristics of the 44 classes when puzzle properties are explored later. - -Enumerating the Sudoku solutions breaks into an initial setup stage and then into two nested loops. Initially all the valid Band1 permutations are grouped into equivalence classes, which each impose a common constraint on the Band2,3 completions. - -For each of the Band1 equivalence classes, all possible Band2,3 solutions need to be enumerated. An outer Band1 loop iterates over the 44 equivalence classes. In the inner loop, all lower band completions for each Band1 equivalence class are found and counted. - -The computation required for the lower band solution search can be minimised by the same type of symmetry application used for Band1. There are 6! (720) permutations for the 6 values in column 1 of Band2,3. Applying the lower band (2) and row within band (6×6) permutations creates 10 equivalence classes of size 72.
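The inner count, finding all completions of a partial grid, is in essence the recursive backtracking search mentioned in the next paragraph. A minimal Python sketch (illustrative only; the real enumeration is far more heavily optimized):

    def count_completions(grid):
        # grid: 9x9 list of lists, 0 marking an empty cell; counts all valid
        # completions by recursive descent with backtracking.
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    total = 0
                    for d in range(1, 10):
                        ok = (all(grid[r][j] != d for j in range(9)) and
                              all(grid[i][c] != d for i in range(9)) and
                              all(grid[i][j] != d
                                  for i in range(r - r % 3, r - r % 3 + 3)
                                  for j in range(c - c % 3, c - c % 3 + 3)))
                        if ok:
                            grid[r][c] = d
                            total += count_completions(grid)
                            grid[r][c] = 0
                    return total
        return 1  # no empty cell left: exactly one completed solution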
At this point, completing 10 sets of solutions for the remaining 48 cells with a recursive descent, backtracking algorithm is feasible with a 2 GHz class PC, so further simplification is not required to carry out the enumeration. Using this approach, the number of ways of filling in a blank Sudoku grid has been shown to be 6,670,903,752,021,072,936,960 (6.67×10^21). - -The result, as confirmed by Russell, also contains the distribution of solution counts for the 44 equivalence classes. The listed values are before application of the 9! factor for labeling and the two 72 factors (72^2 = 5184) for each of Stack 2,3 and Band2,3 permutations. The number of completions for each class is consistently on the order of 100,000,000, while the number of Band1 permutations covered by each class varies from 4 to 3240. Within this wide size range, there are clearly two clusters. Ranked by size, the lower 33 classes average ~400 permutations/class, while the upper 11 average ~2100. The disparity between the distributions for size and for number of completions, and the separation into two clusters by size, have yet to be examined. diff --git a/wiki/wikipedia/478.txt b/wiki/wikipedia/478.txt deleted file mode 100644 index de854bdd7f1ccb68fb14c292382321e1e391b6c9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/478.txt +++ /dev/null @@ -1,100 +0,0 @@ -In geometry, Descartes' theorem states that for every four kissing, or mutually tangent, circles, the radii of the circles satisfy a certain quadratic equation. By solving this equation, one can construct a fourth circle tangent to three given, mutually tangent circles. The theorem is named after René Descartes, who stated it in 1643. - -Geometrical problems involving tangent circles have been pondered for millennia. In ancient Greece of the third century BC, Apollonius of Perga devoted an entire book to the topic. - -René Descartes discussed the problem briefly in 1643, in a letter to Princess Elisabeth of the Palatinate. He came up with essentially the same solution as given below, and thus attached his name to the theorem. - -The theorem was rediscovered in 1826 by Jakob Steiner, in 1842 by Philip Beecroft, and in 1936 by Frederick Soddy. The kissing circles in this problem are sometimes known as Soddy circles, perhaps because Soddy chose to publish his version of the theorem in the form of a poem titled The Kiss Precise, which was printed in Nature (June 20, 1936). Soddy also extended the theorem to spheres; Thorold Gosset extended the theorem to arbitrary dimensions. - -Descartes' theorem is most easily stated in terms of the circles' curvatures. The curvature (or bend) of a circle is defined as k = ±1/r, where r is its radius. The larger a circle, the smaller is the magnitude of its curvature, and vice versa. - -The plus sign in k = ±1/r applies to a circle that is externally tangent to the other circles, like the three black circles in the image. For an internally tangent circle like the large red circle, that circumscribes the other circles, the negative sign applies. - -If a straight line is considered a degenerate circle with zero curvature (and thus infinite radius), Descartes' theorem also applies to a line and two circles that are all three mutually tangent, giving the radius of a third circle tangent to the other two circles and the line.
- -If four circles are tangent to each other at six distinct points, and the circles have curvatures ki (for i = 1, ..., 4), Descartes' theorem says: -$$ -(k_1+k_2+k_3+k_4)^2 = 2(k_1^2+k_2^2+k_3^2+k_4^2). -$$ - -When trying to find the radius of a fourth circle tangent to three given kissing circles, the equation is best rewritten as: -$$ - k_4 = k_1 + k_2 + k_3 \pm2 \sqrt{k_1 k_2 + k_2 k_3 + k_3 k_1}. -$$ - -The ± sign reflects the fact that there are in general two solutions. Ignoring the degenerate case of a straight line, one solution is positive and the other is either positive or negative; if negative, it represents a circle that circumscribes the first three (as shown in the diagram above). - -Problem-specific criteria may favor one solution over the other in any given problem. - -If one of the three circles is replaced by a straight line, then one ki, say k3, is zero and drops out of the equation, which then becomes much simpler: -$$ -k_4=k_1+k_2\pm2\sqrt{k_1k_2}. -$$ - -If two circles are replaced by lines, the tangency between the two replaced circles becomes a parallelism between their two replacement lines. For all four curves to remain mutually tangent, the other two circles must be congruent. In this case, with k2 = k3 = 0, the equation reduces to the trivial -$$ -\displaystyle k_4=k_1. -$$ - -It is not possible to replace three circles by lines, as it is not possible for three lines and one circle to be mutually tangent. - -Descartes' theorem does not apply when all four circles are tangent to each other at the same point. - -Another special case is when the ki are squares, -$$ -(v^2+x^2+y^2+z^2)^2=2(v^4+x^4+y^4+z^4) -$$ - -Euler showed that this is equivalent to the simultaneous triplet of Pythagorean triples, -$$ -(2vx)^2+(2yz)^2 = (v^2+x^2-y^2-z^2)^2 -$$ -$$ -(2vy)^2+(2xz)^2 = (v^2-x^2+y^2-z^2)^2 -$$ -$$ -(2vz)^2+(2xy)^2 = (v^2-x^2-y^2+z^2)^2 -$$ - -and can be given a parametric solution. When the minus sign of a curvature is chosen, -$$ -(-v^2+x^2+y^2+z^2)^2=2(v^4+x^4+y^4+z^4) -$$ - -this can be solved as -$$ -\begin{align} {[} & v, x, y, z] \\[6pt] = {} \Big[ & 2(ab-cd)(ab+cd),\ (a^2+b^2+c^2+d^2)(a^2-b^2+c^2-d^2), \\ & \qquad 2(ac-bd)(a^2+c^2),\ 2(ac-bd)(b^2+d^2) \Big] \end{align} -$$ - -where -$$ -a^4+b^4 = c^4+d^4 -$$ - -parametric solutions of which are well-known. - -To determine a circle completely, not only its radius (or curvature), but also its center must be known. The relevant equation is expressed most clearly if the coordinates (x, y) are interpreted as a complex number z = x + iy. The equation then looks similar to Descartes' theorem and is therefore called the complex Descartes theorem. - -Given four circles with curvatures ki and centers zi (for i = 1...4), the following equality holds in addition to the curvature equation: -$$ -(k_1 z_1 + k_2 z_2 + k_3 z_3 + k_4 z_4)^2 = 2(k_1^2 z_1^2 + k_2^2 z_2^2 + k_3^2 z_3^2 + k_4^2 z_4^2). -$$ - -Once k4 has been found using Descartes' theorem, one may proceed to calculate z4 by rewriting the complex equation to a form similar to the solved curvature equation: -$$ -z_4 = \frac{z_1 k_1 + z_2 k_2 + z_3 k_3 \pm 2 \sqrt{k_1 k_2 z_1 z_2 + k_2 k_3 z_2 z_3 + k_1 k_3 z_1 z_3} }{k_4}. -$$ - -Again, in general, there are two solutions for z4, corresponding to the two solutions for k4. Note that the plus/minus sign in the above formula for z does not necessarily correspond to the plus/minus sign in the formula for k. - -The generalization to n dimensions is sometimes referred to as the Soddy–Gosset theorem, even though it was shown by R. Lachlan in 1886. In n-dimensional Euclidean space, the maximum number of mutually tangent (n - 1)-spheres is n + 2. For example, in 3-dimensional space, five spheres can be mutually tangent.
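As a computational aside before the n-dimensional version, the two planar formulas above translate directly into code. A small Python sketch (the function names are made up; this is illustrative, not a library API):

    import cmath

    def descartes_k4(k1, k2, k3):
        # The two solutions for the fourth curvature.
        root = 2 * cmath.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
        return k1 + k2 + k3 + root, k1 + k2 + k3 - root

    def descartes_z4(k1, k2, k3, z1, z2, z3, k4):
        # Candidate centers from the complex Descartes theorem. As the text
        # notes, the sign here need not match the sign chosen for k4, so
        # both candidates must be checked against the tangency conditions.
        root = 2 * cmath.sqrt(k1*k2*z1*z2 + k2*k3*z2*z3 + k1*k3*z1*z3)
        s = k1 * z1 + k2 * z2 + k3 * z3
        return (s + root) / k4, (s - root) / k4

    # Three mutually tangent unit circles (curvature 1):
    print(descartes_k4(1, 1, 1))  # about 6.464 (small inner circle) and
                                  # about -0.464 (circumscribing outer circle)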
The curvatures of the hyperspheres satisfy -$$ -\left(\sum_{i=1}^{n+2} k_i\right)^2 = n\sum_{i=1}^{n+2} k_i^2 -$$ - -with the case ki = 0 corresponding to a flat hyperplane, in exact analogy to the 2-dimensional version of the theorem. - -Although there is no 3-dimensional analogue of the complex numbers, the relationship between the positions of the centers can be re-expressed as a matrix equation, which also generalizes to n dimensions. diff --git a/wiki/wikipedia/479.txt b/wiki/wikipedia/479.txt deleted file mode 100644 index 026a43fe60195420e46f47567de4234da0522d36..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/479.txt +++ /dev/null @@ -1,237 +0,0 @@ -In proof theory, ordinal analysis assigns ordinals (often large countable ordinals) to mathematical theories as a measure of their strength. - -If theories have the same proof-theoretic ordinal they are often equiconsistent, and if one theory has a larger proof-theoretic ordinal than another it can often prove the consistency of the second theory. - -The field of ordinal analysis was formed when Gerhard Gentzen in 1934 used cut elimination to prove, in modern terms, that the proof-theoretic ordinal of Peano arithmetic is ε0. See Gentzen's consistency proof. - -Ordinal analysis concerns true, effective (recursive) theories that can interpret a sufficient portion of arithmetic to make statements about ordinal notations. - -The proof-theoretic ordinal of such a theory $T$ is the supremum of the order types of all ordinal notations (necessarily recursive, see next section) that the theory can prove are well founded-the supremum of all ordinals $\alpha$ for which there exists a notation $o$ in Kleene's sense such that $T$ proves that $o$ is an ordinal notation. Equivalently, it is the supremum of all ordinals $\alpha$ such that there exists a recursive relation $R$ on $\omega$ (the set of natural numbers) that well-orders it with ordinal $\alpha$ and such that $T$ proves transfinite induction of arithmetical statements for $R$. - -Some theories, such as subsystems of second-order arithmetic, have no conceptualization of or way to make arguments about transfinite ordinals. For example, to formalize what it means for a subsystem of Z2 $T$ to "prove $\alpha$ well-ordered", we instead construct an ordinal notation $(A,\tilde <)$ with order type $\alpha$. $T$ can now work with various transfinite induction principles along $(A,\tilde <)$, which substitute for reasoning about set-theoretic ordinals. - -However, some pathological notation systems exist that are unexpectedly difficult to work with. For example, Rathjen gives a primitive recursive notation system $(\mathbb N,<_T)$ that is well-founded iff PA is consistent, despite having order type $\omega$ - including such a notation in the ordinal analysis of PA would result in the false equality $\mathsf{PTO(PA)}=\omega$. - -The existence of a recursive ordinal that the theory fails to prove is well-ordered follows from the $\Sigma^1_1$ bounding theorem, as the set of natural numbers that an effective theory proves to be ordinal notations is a $\Sigma^0_1$ set (see Hyperarithmetical theory). Thus the proof-theoretic ordinal of a theory will always be a (countable) recursive ordinal, that is, less than the Church–Kleene ordinal $\omega_1^{\mathrm{CK}}$. - -*Q, Robinson arithmetic (although the definition of the proof-theoretic ordinal for such weak theories has to be tweaked). - -*PA-, the first-order theory of the nonnegative part of a discretely ordered ring. 
- -*RFA, rudimentary function arithmetic. - -*IΔ0, arithmetic with induction on Δ0-predicates without any axiom asserting that exponentiation is total. - -*EFA, elementary function arithmetic. - -*IΔ0 + exp, arithmetic with induction on Δ0-predicates augmented by an axiom asserting that exponentiation is total. - -*$\mathsf{RCA}_0^*$, a second order form of EFA sometimes used in reverse mathematics. - -*$\mathsf{WKL}_0^*$, a second order form of EFA sometimes used in reverse mathematics. - -Friedman's grand conjecture suggests that much "ordinary" mathematics can be proved in weak systems having this as their proof-theoretic ordinal. - -*IΔ0 or EFA augmented by an axiom ensuring that each element of the n-th level $\mathcal{E}^n$ of the Grzegorczyk hierarchy is total. - -*RCA0, recursive comprehension. - -*WKL0, weak König's lemma. - -*PRA, primitive recursive arithmetic. - -*IΣ1, arithmetic with induction on Σ1-predicates. - -*PA, Peano arithmetic (shown by Gentzen using cut elimination). - -*ACA0, arithmetical comprehension. - -*ATR0, arithmetical transfinite recursion. - -*Martin-Löf type theory with arbitrarily many finite level universes. - -This ordinal is sometimes considered to be the upper limit for "predicative" theories. - -* ID1, the first theory of inductive definitions. - -* KP, Kripke–Platek set theory with the axiom of infinity. - -* CZF, Aczel's constructive Zermelo–Fraenkel set theory. - -* EON, a weak variant of Feferman's explicit mathematics system T0. - -The Kripke-Platek or CZF set theories are weak set theories without axioms for the full powerset given as set of all subsets. Instead, they tend to either have axioms of restricted separation and formation of new sets, or they grant existence of certain function spaces (exponentiation) instead of carving them out from bigger relations. - -*$\Pi^1_1\mbox{-}\mathsf{CA}_0$, $\Pi^1_1$ comprehension, has a rather large proof-theoretic ordinal, which was described by Takeuti in terms of "ordinal diagrams", and which is bounded by $\psi_0(\Omega_\omega)$ in Buchholz's notation. It is also the ordinal of $ID_{<\omega}$, the theory of finitely iterated inductive definitions, and the ordinal of MLW, Martin-Löf type theory with indexed W-types (Setzer). - -*IDω, the theory of ω-iterated inductive definitions. Its proof-theoretic ordinal is equal to the Takeuti-Feferman-Buchholz ordinal. - -*T0, Feferman's constructive system of explicit mathematics, has a larger proof-theoretic ordinal, which is also the proof-theoretic ordinal of KPi, Kripke–Platek set theory with iterated admissibles, and of $\Sigma^1_2\mbox{-}\mathsf{AC} + \mathsf{BI}$. - -*KPi, an extension of Kripke–Platek set theory based on an inaccessible cardinal, has a very large proof-theoretic ordinal $\psi(\varepsilon_{I + 1})$, where I is the smallest inaccessible, using an unknown function. Dubbed "Madore's ordinal", after David Madore. This ordinal is also the proof-theoretic ordinal of $\Delta^1_2\mbox{-}\mathsf{CA} + \mathsf{BI}$. - -*KPM, an extension of Kripke–Platek set theory based on a Mahlo cardinal, has a very large proof-theoretic ordinal ϑ, which was described by Rathjen, and dubbed the "Small Rathjen Ordinal". - -*MLM, an extension of Martin-Löf type theory by one Mahlo-universe, has an even larger proof-theoretic ordinal $\psi_{\Omega_1}(\Omega_{M + \omega})$.
- -*$\mathsf{KP} + \Pi_3 - Ref$ has a proof-theoretic ordinal equal to $\Psi(\varepsilon_{K + 1})$, where $K$ refers to the first weakly compact, using Rathjen's Psi function, dubbed the "Large Rathjen Ordinal" - -*$\mathsf{KP} + \Pi_\omega - Ref$ has a proof-theoretic ordinal equal to $\Psi^{\varepsilon_{\Xi + 1}}_X$, where $\Xi$ refers to the first $\Pi^2_0$-indescribable and $\mathbb{X} = (\omega^+; P_0; \epsilon, \epsilon, 0)$, using Stegert's Psi function, dubbed the "Small Stegert Ordinal" - -*$\mathsf{Stability}$ has a proof-theoretic ordinal equal to $\Psi^{\varepsilon_{\Upsilon+1}}_{\mathbb{X}}$ where $\Upsilon$ is a cardinal analogue of the least ordinal $\alpha$ which is $\alpha+\beta$-stable for all $\beta < \alpha$ and $\mathbb{X} = (\omega^+; P_0; \epsilon, \epsilon, 0)$, using Stegert's Psi function, dubbed the "Large Stegert Ordinal" - -Most theories capable of describing the power set of the natural numbers have proof-theoretic ordinals that are so large that no explicit combinatorial description has yet been given. This includes $\Pi^1_2 - CA_0$, full second-order arithmetic ($\Pi^1_\infty - CA_0$) and set theories with powersets including ZF and ZFC. The strength of intuitionistic ZF (IZF) equals that of ZF. - -This is a list of symbols used in this table: - -* ψ represents Buchholz's psi unless stated otherwise. - -* Ψ represents either Rathjen's or Stegert's Psi. - -* φ represents Veblen's function. - -* ω represents the first transfinite ordinal. - -* εα represents the epsilon numbers. - -* Γα represents the gamma numbers (Γ0 is the Feferman–Schütte ordinal) - -* Ωα represent the uncountable ordinals (Ω1, abbreviated Ω, is ω1). - -This is a list of the abbreviations used in this table: - -* First-order arithmetic - -** $\mathsf{Q}$ is Robinson arithmetic - -** $\mathsf{PA}^-$ is the first-order theory of the nonnegative part of a discretely ordered ring. - -** $\mathsf{RFA}$ is rudimentary function arithmetic. - -** $\mathsf{I\Delta}_0$ is arithmetic with induction on Δ0-predicates without any axiom asserting that exponentiation is total. - -** $\mathsf{EFA}$ is elementary function arithmetic. - -** $\mathsf{I\Delta}_0^{\mathsf{+}}$ is arithmetic with induction on Δ0-predicates augmented by an axiom asserting that exponentiation is total. - -** $\mathsf{EFA}^{\mathsf{n}}$ is elementary function arithmetic augmented by an axiom ensuring that each element of the n-th level $\mathcal{E}^n$ of the Grzegorczyk hierarchy is total. - -** $\mathsf{I\Delta}_0^{\mathsf{n+}}$ is $\mathsf{I\Delta}_0^{\mathsf{+}}$ augmented by an axiom ensuring that each element of the n-th level $\mathcal{E}^n$ of the Grzegorczyk hierarchy is total. - -** $\mathsf{PRA}$ is primitive recursive arithmetic. - -** $\mathsf{I\Sigma}_1$ is arithmetic with induction on Σ1-predicates. - -** $\mathsf{PA}$ is Peano arithmetic. - -** $\mathsf{ID}_\nu\#$ is $\widehat{\mathsf{ID}}_\nu$ but with induction only for positive formulas. - -** $\widehat{\mathsf{ID}}_\nu$ extends PA by ν iterated fixed points of monotone operators. - -** $\mathsf{U(PA)}$ is not exactly a first-order arithmetic system, but captures what one can get by predicative reasoning based on the natural numbers. - -** $\mathsf{Aut(\widehat{ID})}$ is an automorphism on $\widehat{\mathsf{ID}}_\nu$. - -** $\mathsf{ID}_\nu$ extends PA by ν iterated least fixed points of monotone operators. 
- -** $\mathsf{U(ID}_\nu\mathsf{)}$ is not exactly a first-order arithmetic system, but captures what one can get by predicative reasoning based on ν-times iterated generalized inductive definitions. - -**$\mathsf{Aut(U(ID))}$ is an automorphism on $\mathsf{U(ID}_\nu\mathsf{)}$. - -** $\mathsf{W-ID}_{\nu}$ is a weakened version of $\mathsf{ID}_{\nu}$ based on W-types. - -* Second-order arithmetic - -** $\mathsf{RCA}_0^*$ is a second order form of $\mathsf{EFA}$ sometimes used in reverse mathematics. - -** $\mathsf{WKL}_0^*$ is a second order form of $\mathsf{EFA}$ sometimes used in reverse mathematics. - -** $\mathsf{RCA}_0$ is recursive comprehension. - -** $\mathsf{WKL}_0$ is weak König's lemma. - -** $\mathsf{ACA}_0$ is arithmetical comprehension. - -** $\mathsf{ACA}$ is $\mathsf{ACA}_0$ plus the full second-order induction scheme. - -** $\mathsf{ATR}_0$ is arithmetical transfinite recursion. - -** $\mathsf{ATR}$ is $\mathsf{ATR}_0$ plus the full second-order induction scheme. - -** $\mathsf{\Delta}_2^1\mathsf{-CA+BI+(M)}$ is $\mathsf{\Delta}_2^1\mathsf{-CA+BI}$ plus the assertion "every true $\mathsf{\Pi}^1_3$-sentence with parameters holds in a $\beta$-model of $\mathsf{\Delta}_2^1\mathsf{-CA}$". - -* Kripke-Platek set theory - -** $\mathsf{KP}$ is Kripke-Platek set theory with the axiom of infinity. - -** $\mathsf{KP\omega}$ is Kripke-Platek set theory, whose universe is an admissible set containing $\omega$. - -** $\mathsf{W-KPI}$ is a weakened version of $\mathsf{KPI}$ based on W-types. - -** $\mathsf{KPI}$ asserts that the universe is a limit of admissible sets. - -** $\mathsf{W-KPi}$ is a weakened version of $\mathsf{KPi}$ based on W-types. - -** $\mathsf{KPi}$ asserts that the universe is an inaccessible set. - -** $\mathsf{KPh}$ asserts that the universe is hyperinaccessible: an inaccessible set and a limit of inaccessible sets. - -** $\mathsf{KPM}$ asserts that the universe is a Mahlo set. - -** $\mathsf{KP + \Pi}_\mathsf{n} - \mathsf{Ref}$ is $\mathsf{KP}$ augmented by a certain first-order reflection scheme. - -** $\mathsf{Stability}$ is KPi augmented by the axiom $\forall \alpha \exists \kappa \geq \alpha (L_\kappa \preceq_1 L_{\kappa + \alpha})$. - -** $\mathsf{KPM}^+$ is KPI augmented by the assertion "at least one recursively Mahlo ordinal exists". - -A superscript zero indicates that $\in$-induction is removed (making the theory significantly weaker). - -* Type theory - -** $\mathsf{CPRC}$ is the Herbelin-Patey Calculus of Primitive Recursive Constructions. - -** $\mathsf{ML}_\mathsf{n}$ is type theory without W-types and with $n$ universes. - -** $\mathsf{ML}_{<\omega}$ is type theory without W-types and with finitely many universes. - -** $\mathsf{MLU}$ is type theory with a next universe operator. - -** $\mathsf{MLS}$ is type theory without W-types and with a superuniverse. - -**$\mathsf{Aut(ML)}$ is an automorphism on type theory without W-types. - -** $\mathsf{ML}_1\mathsf{V}$ is type theory with one universe and Aczel's type of iterative sets. - -** $\mathsf{MLW}$ is type theory with indexed W-Types. - -** $\mathsf{ML}_1\mathsf{W}$ is type theory with W-types and one universe. - -** $\mathsf{ML}_{<\omega}\mathsf{W}$ is type theory with W-types and finitely many universes. - -**$\mathsf{Aut(MLW)}$ is an automorphism on type theory with W-types. - -** $\mathsf{MLM}$ is type theory with a Mahlo universe. - -* Constructive set theory - -** $\mathsf{CZF}$ is Aczel's constructive set theory. - -** $\mathsf{CZF+REA}$ is $\mathsf{CZF}$ plus the regular extension axiom.
- -** $\mathsf{CZF+REA+FZ}_2$ is $\mathsf{CZF+REA}$ plus the full second-order induction scheme. - -** $\mathsf{CZFM}$ is $\mathsf{CZF}$ with a Mahlo universe. - -* Explicit mathematics - -** $\mathsf{EM}_0$ is basic explicit mathematics plus elementary comprehension - -** $\mathsf{EM}_0 \mathsf{+JR}$ is $\mathsf{EM}_0$ plus join rule - -** $\mathsf{EM}_0 \mathsf{+J}$ is $\mathsf{EM}_0$ plus join axioms - -** $\mathsf{EON}$ is a weak variant of Feferman's $\mathsf{T}_0$. - -** $\mathsf{T}_0$ is $\mathsf{EM}_0 \mathsf{+J+IG}$, where $\mathsf{IG}$ is inductive generation. - -** $\mathsf{T}$ is $\mathsf{EM}_0 \mathsf{+J+IG+FZ}_2$, where $\mathsf{FZ}_2$ is the full second-order induction scheme. diff --git a/wiki/wikipedia/48.txt b/wiki/wikipedia/48.txt deleted file mode 100644 index b9241cbbbd4bb9616bd0c0b1a29b26ebe6b3470d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/48.txt +++ /dev/null @@ -1,23 +0,0 @@ -Tetris is a puzzle video game developed and published by Nintendo for the Game Boy in 1989. It is a portable version of Alexey Pajitnov's original Tetris and it was bundled with the North American and European releases of the Game Boy itself. It is the first game to have been compatible with the Game Link Cable, a pack-in accessory that allows two Game Boy consoles to link for multiplayer purposes. A colorized remake of the game was released on the Game Boy Color titled Tetris DX. A Nintendo 3DS Virtual Console version of Tetris was released in December 2011, lacking multiplayer functionality. - -The Game Boy version of Tetris plays identically to versions on other platforms. A pseudorandom sequence of tetromino shapes, composed of four square blocks each, falls down the playing field, which is 10 blocks wide by 18 blocks high. The object of the game is to manipulate the tetrominoes by moving each one sideways and rotating it by 90-degree units with the aim of creating a horizontal line of blocks without gaps. When one or more such lines are created, they disappear, and the blocks above (if any) move down by the number of lines cleared. As in most standard versions of Tetris, blocks do not automatically fall into open gaps when lines are cleared. - -As the game progresses, the tetrominoes fall faster. The game ends when at least part of a tetromino extends beyond the top of the playfield as it sets in place. The player can normally see which block will appear next in a window off to the side of the playing field, but this feature can be toggled during the game. Points are awarded based on the current level and number of lines cleared. The level increases each time the player clears ten lines, as does the speed of falling tetrominoes. The game was soon licensed by Andromeda Software executive Robert Stein, who sublicensed the game to multiple publishers in different territories. In 1988, Henk Rogers of Bullet-Proof Software noticed the US home computer version at the Las Vegas Consumer Electronics Show in a Spectrum HoloByte booth. Finding himself hooked on the game, he pursued the rights to publish Tetris in Japan, and secured licenses from both Spectrum HoloByte, who held the North American computer license, and Atari Games, which had produced the American arcade version under a sublicense from Mirrorsoft, which had the rights for the European computer market. Knowing Nintendo was planning to release the Game Boy, Rogers approached Nintendo of America president Minoru Arakawa to suggest Tetris as the perfect bundled launch game.
Arakawa questioned the idea, having planned to bundle Super Mario Land, but Rogers countered by stating that though a Mario game would promote the Game Boy to young boys, Tetris would promote it to everyone. Rogers was told to pursue the rights, and he approached Stein to seek the rights for it to be distributed with the Game Boy. - -However, after several months passed, Stein had not signed a contract for the rights for the Game Boy, and Rogers learned that another person had approached Nintendo with the idea of a Game Boy Tetris. Requesting more time from Arakawa, he traveled to Moscow to speak with the Ministry of Foreign Economic Relations's bureau for computer hardware and software export, called Elektronorgtechnica or ELORG, and Pajitnov. During this time, Nintendo approached Spectrum HoloByte on the prospect of a Game Boy Tetris, causing Mirrorsoft to send a representative, Kevin Maxwell, to Moscow to secure rights for the Game Boy version. Bullet-Proof Software is mentioned as a copyright holder and the sub-licensor of the Tetris handheld rights to Nintendo on the game's startup screen. - -The main soundtrack for Tetris was created by Nintendo's accomplished composer Hirokazu Tanaka. The player can select one of three types of background music during the game or play with sound effects only. Two of the songs are arrangements of works from other composers: "Type A" is based on the Russian folk song "Korobeiniki" (also known as "Korobushka"), and "Type C" is an arranged version of "French Suite No. 3 in B minor, BWV 814: Menuet" (transposed to F# minor) by Johann Sebastian Bach. In version 1.0, which was only released in Japan with an estimated 25,000 copies sold, the "Type A" song is "Minuet". The compositions "Type A" and "Type B" can be unlocked in the Super Smash Bros. series. - -The victory fanfares played after completing levels are different arrangements of "Trepak", from Pyotr Ilyich Tchaikovsky's famous ballet The Nutcracker. - -Tetris DX is a Game Boy Color game that is backward compatible with the original Game Boy. It was developed by Nintendo and released in Japan on October 21, 1998, in North America on November 18, 1998, and in Europe and Australia in 1999. Tetris DX features battery-saved high scores and three player profiles. It has a new single-player mode against the CPU and also features two new modes of play. In "Ultra Mode", players must accumulate as many points as possible within a three-minute time period. In "40 Lines", players are timed on how quickly they can clear 40 lines of play. New music themes were added. - -The Game Boy version of Tetris was released in North America and Europe as a Nintendo 3DS Virtual Console game on December 22, 2011 and on December 28 in Japan. In contrast to the original version, it is not possible to play multiplayer in the Virtual Console version. The Virtual Console version of Tetris was delisted from the Nintendo eShop in Europe after December 31, 2014 and in North America. - -Tetris has been credited as the Game Boy's killer app. It topped the Japanese sales charts during August–September 1989 and from December 1989 to January 1990. It also topped the US sales charts during August–September 1989 and then December 1989. - -Nintendo sold 2.5 million copies by early 1990, as its top seller. About 7.5 million copies had been sold in the United States by 1992. By 1997, 29.72 million units had been sold worldwide, including bundles. As of June 2009, more than 35 million copies had been sold worldwide.
- -Official Nintendo Magazine ranked Tetris fifth on its list of the "100 Best Nintendo Games". Game Informer's Ben Reeves called it the best Game Boy game and a "legendary puzzle game". - -In August 2008, Nintendo Power listed Tetris DX as the best Game Boy/Game Boy Color video game, stating that it meant more to handheld gaming than any other video game. They also described it as the best version of Tetris until Tetris DS was released. Alexey Pajitnov called the Game Boy version of Tetris his favorite and very close to his original version. In Japan, Famitsu gave it a score of 26 out of 40. diff --git a/wiki/wikipedia/480.txt b/wiki/wikipedia/480.txt deleted file mode 100644 index eee063cceffd145ff5f5c0cbf39b4fe3265eaafe..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/480.txt +++ /dev/null @@ -1,45 +0,0 @@ -The five color theorem is a result from graph theory stating that, given a plane separated into regions, such as a political map of the counties of a state, the regions may be colored using no more than five colors in such a way that no two adjacent regions receive the same color. - -The five color theorem is implied by the stronger four color theorem, but is considerably easier to prove. Its proof was based on a failed attempt at the four color proof by Alfred Kempe in 1879. Percy John Heawood found an error 11 years later, and proved the five color theorem based on Kempe's work. - -First of all, one associates a simple planar graph $G$ to the given map, namely one puts a vertex in each region of the map, then connects two vertices with an edge if and only if the corresponding regions share a common border. The problem is then translated into a graph coloring problem: one has to paint the vertices of the graph so that no edge has endpoints of the same color. - -Because $G$ is a simple planar graph, i.e. it may be embedded in the plane without intersecting edges, and it does not have two vertices sharing more than one edge, and it doesn't have loops, then it can be shown (using the Euler characteristic of the plane) that it must have a vertex shared by at most five edges. (Note: This is the only place where the five-color condition is used in the proof. If this technique is used to prove the four-color theorem, it will fail on this step. In fact, the icosahedral graph is 5-regular and planar, and thus does not have a vertex shared by at most four edges.) Find such a vertex, and call it $v$. - -Now remove $v$ from $G$. The graph $G'$ obtained this way has one fewer vertex than $G$, so we can assume by induction that it can be colored with only five colors. If the coloring did not use all five colors on the five neighboring vertices of $v$, then $v$ can be colored in $G$ with a color not used by the neighbors. So now look at those five vertices $v_1$, $v_2$, $v_3$, $v_4$, $v_5$ that were adjacent to $v$ in cyclic order (which depends on how we write G). So we can assume that $v_1$, $v_2$, $v_3$, $v_4$, $v_5$ are colored with colors 1, 2, 3, 4, 5 respectively. - -Now consider the subgraph $G_{1,3}$ of $G'$ consisting of the vertices that are colored with colors 1 and 3 only and the edges connecting them. To be clear, each edge connects a color 1 vertex to a color 3 vertex (this is called a Kempe chain). If $v_1$ and $v_3$ lie in different connected components of $G_{1,3}$, we can swap the 1 and 3 colors on the component containing $v_1$ without affecting the coloring of the rest of $G'$. This frees color 1 for $v$, completing the task.
If on the contrary $v_1$ and $v_3$ lie in the same connected component of $G_{1,3}$, we can find a path in $G_{1,3}$ joining them that consists of only color 1 and 3 vertices. - -Now turn to the subgraph $G_{2,4}$ of $G'$ consisting of the vertices that are colored with colors 2 and 4 only and the edges connecting them, and apply the same arguments as before. Then either we are able to reverse the 2-4 coloration on the subgraph of $G_{2,4}$ containing $v_2$ and paint $v$ color 2, or we can connect $v_2$ and $v_4$ with a path that consists of only color 2 and 4 vertices. Such a path would intersect the 1-3 colored path we constructed before since $v_1$ through $v_5$ were in cyclic order. This is clearly absurd as it contradicts the planarity of the graph. - -So $G$ can in fact be five-colored, contrary to the initial presumption. - -In 1996, Robertson, Sanders, Seymour, and Thomas described a quadratic four-coloring algorithm in their "Efficiently four-coloring planar graphs". In the same paper they briefly describe a linear-time five-coloring algorithm, which is asymptotically optimal. The algorithm as described here operates on multigraphs and relies on the ability to have multiple copies of edges between a single pair of vertices. It is based on Wernicke's theorem, which states the following: - -Wernicke's theorem: Assume G is planar, nonempty, has no faces bounded by two edges, and has minimum degree 5. Then G has a vertex of degree 5 which is adjacent to a vertex of degree at most 6. - -We will use a representation of the graph in which each vertex maintains a circular linked list of adjacent vertices, in clockwise planar order. - -In concept, the algorithm is recursive, reducing the graph to a smaller graph with one less vertex, five-coloring that graph, and then using that coloring to determine a coloring for the larger graph in constant time. In practice, rather than maintain an explicit graph representation for each reduced graph, we will remove vertices from the graph as we go, adding them to a stack, then color them as we pop them back off the stack at the end. We will maintain three stacks: - -* S4: Contains all remaining vertices with either degree at most four, or degree five and at most four distinct adjacent vertices (due to multiple edges). - -* S5: Contains all remaining vertices that have degree five, five distinct adjacent vertices, and at least one adjacent vertex with degree at most six. - -* Sd: Contains all vertices deleted from the graph so far, in the order that they were deleted. - -The algorithm works as follows: - -# In the first step, we collapse all multiple edges to single edges, so that the graph is simple. Next, we iterate over the vertices of the graph, pushing any vertex matching the conditions for S4 or S5 onto the appropriate stack. - -# Next, as long as S4 is non-empty, we pop v from S4 and delete v from the graph, pushing it onto Sd, along with a list of its neighbors at this point in time. We check each former neighbor of v, pushing it onto S4 or S5 if it now meets the necessary conditions. - -# When S4 becomes empty, we know that our graph has minimum degree five. If the graph is empty, we go to the final step 5 below. Otherwise, Wernicke's Theorem tells us that S5 is nonempty. Pop v off S5, delete it from the graph, and let v1, v2, v3, v4, v5 be the former neighbors of v in clockwise planar order, where v1 is the neighbor of degree at most 6. We check if v1 is adjacent to v3 (which we can do in constant time due to the degree of v1). 
There are two cases: - -## If v1 is not adjacent to v3, we can merge these two vertices into a single vertex. To do this, we remove v from both circular adjacency lists, and then splice the two lists together into one list at the point where v was formerly found. Provided that v maintains a reference to its position in each list, this can be done in constant time. It's possible that this might create faces bounded by two edges at the two points where the lists are spliced together; we delete one edge from any such faces. After doing this, we push v3 onto Sd, along with a note that v1 is the vertex that it was merged with. Any vertices affected by the merge are added or removed from the stacks as appropriate. - -## Otherwise, v2 lies inside the face outlined by v, v1, and v3. Consequently, v2 cannot be adjacent to v4, which lies outside this face. We merge v2 and v4 in the same manner as v1 and v3 above. - -# Go to step 2. - -# At this point S4, S5, and the graph are empty. We pop vertices off Sd. If the vertex was merged with another vertex in step 3, the vertex that it was merged with will already have been colored, and we assign it the same color. This is valid because we only merged vertices that were not adjacent in the original graph. If we had removed it in step 2 because it had at most 4 adjacent vertices, all of its neighbors at the time of its removal will have already been colored, and we can simply assign it a color that none of its neighbors is using. diff --git a/wiki/wikipedia/481.txt b/wiki/wikipedia/481.txt deleted file mode 100644 index 16e819475b114921f30138a14a723210d48d4003..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/481.txt +++ /dev/null @@ -1,28 +0,0 @@ -In computational complexity theory, (SAT, ε-UNSAT) is a language that is used in the proof of the PCP theorem, which relates the language NP to probabilistically checkable proof systems. - -For a given 3-CNF formula, Φ, and a constant, ε < 1, Φ is in (SAT, ε-UNSAT) if it is satisfiable and not in (SAT, ε-UNSAT) if the maximum number of satisfiable clauses (MAX-3SAT) is less than or equal to (1-ε) times the number of clauses in Φ. If neither of these conditions is true, the membership of Φ in (SAT, ε-UNSAT) is undefined (a small sketch of this promise appears below). - -It can be shown that (SAT, ε-UNSAT) characterizes PCP(O(log n), O(1)). If -$$ -L \in \mbox{PCP}(O(\log n),O(1)) -$$, then $L \le ( \mbox{SAT}, \epsilon-\mbox{UNSAT})$. (See PCP theorem for more information.)
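The promise structure just defined is easy to state in code. A tiny Python sketch (purely illustrative; the names are made up) classifying a formula given its MAX-3SAT value, i.e. the largest fraction of clauses any single assignment satisfies:

    def sat_eps_unsat_status(max_sat_fraction, eps):
        # max_sat_fraction: fraction of clauses a best assignment satisfies.
        if max_sat_fraction == 1.0:
            return "in (SAT, eps-UNSAT)"        # the formula is satisfiable
        if max_sat_fraction <= 1.0 - eps:
            return "not in (SAT, eps-UNSAT)"    # far from satisfiable
        return "membership undefined"           # the promise gap

    print(sat_eps_unsat_status(1.0, 0.1))   # in (SAT, eps-UNSAT)
    print(sat_eps_unsat_status(0.85, 0.1))  # not in (SAT, eps-UNSAT)
    print(sat_eps_unsat_status(0.95, 0.1))  # membership undefined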
Let each bit in the proof y be $ y_1, y_2, \ldots, y_m$.

First, it is necessary to encode the verifier's acceptance condition as a 3-CNF formula $\phi=\bigwedge_r \phi_r$: for each random string r, we construct a sub-formula $\phi_r$. For a fixed r, all of the variables queried by the verifier are determined, so enumerate each random string r and add a clause $\phi_r = f_r (y_{i_1}, y_{i_2}, \ldots , y_{i_q})$, where $f_r$ is true if and only if the PCP system accepts on reading the given random bits r. Each $f_r$ can be written with at most $2^q$ SAT clauses; after these clauses are converted into 3-CNF clauses, there are at most $q 2 ^ q$ clauses.

If $x \in L$, then there is a proof y that is accepted for every random string r. Therefore, $\phi(y_1,y_2,\ldots,y_m)$ is satisfiable.
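To make the clause counts concrete, suppose for illustration that the verifier reads $q = 3$ bits of the proof for each random string (the specific value of $q$ is an assumption made here for the example; in general it is some constant fixed by the PCP system):
$$
2^q = 8 \ \text{SAT clauses per random string}, \qquad q\,2^q = 24 \ \text{3-CNF clauses per random string}.
$$
The soundness argument below then yields $\epsilon = \tfrac{1}{2 \cdot 24} = \tfrac{1}{48}$ for this value of $q$.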
If $x \notin L$, then for every assignment to $y_1, y_2, \ldots, y_m$ the corresponding proof causes the verifier to reject for at least half of the random strings r. For each r that is rejected, one of the clauses in $\phi_r$ fails. Therefore, at least an $\epsilon = \frac{1}{2 q 2^q}$ fraction of the clauses fail.

Therefore, $L \le (\mbox{SAT}, \epsilon-\mbox{UNSAT})$.

For the other direction, to show $(SAT, \epsilon-UNSAT) \in PCP(O(\log n), O(1))$, let the proof that the PCP system reads be a satisfying assignment for the input 3-CNF, Φ. The system chooses $O(1/\epsilon)$ clauses of Φ to check whether they are truly satisfied. Note that only $\log n$ random bits are needed to choose one of $n$ clauses, and thus only $O(\log n/\epsilon) = O(\log n)$ total random bits are needed. (Remember that ε is a constant.) For each clause to be checked, only 3 bits need to be read, and thus only $O(3/\epsilon) = O(1)$ (a constant number) of bits from the proof need to be read. The system rejects if any of the clauses are not satisfied. If Φ is satisfiable, then there exists a proof (a truly satisfying assignment) that the system will always accept. If Φ is not in (SAT, ε-UNSAT), this means that an ε fraction of the clauses is not satisfiable. The probability that this system will accept in this case is $(1-\epsilon)^{1/\epsilon} \leq 1/e < 1/2$. Therefore, $(SAT, \epsilon-UNSAT) \in PCP(O(\log n), O(1))$.

(SAT, ε-UNSAT) is an NP-hard language. As part of the proof of the PCP theorem, (SAT, ε-UNSAT) has also been shown to be in PCP(O(log n), O(1)).

diff --git a/wiki/wikipedia/482.txt b/wiki/wikipedia/482.txt deleted file mode 100644 index f36077e81a503150525b5be3d6b39656bb34e346..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/482.txt +++ /dev/null @@ -1,229 +0,0 @@

In mathematics, division by zero is division where the divisor (denominator) is zero. Such a division can be formally expressed as $\tfrac{a}{0}$ where a is the dividend (numerator). In ordinary arithmetic, the expression has no meaning, as there is no number which, when multiplied by 0, gives a (assuming $a \neq 0$), and so division by zero is undefined. Since any number multiplied by zero is zero, the expression $\tfrac{0}{0}$ is also undefined; when it arises as the form of a limit, it is an indeterminate form. Historically, one of the earliest recorded references to the mathematical impossibility of assigning a value to $\tfrac{a}{0}$ is contained in Anglo-Irish philosopher George Berkeley's criticism of infinitesimal calculus in 1734 in The Analyst ("ghosts of departed quantities").

There are mathematical structures in which $\tfrac{a}{0}$ is defined for some a, such as the Riemann sphere (a model of the extended complex plane) and the projectively extended real line; however, such structures do not satisfy every ordinary rule of arithmetic (the field axioms).

In computing, a program error may result from an attempt to divide by zero. Depending on the programming environment and the type of number (e.g. floating point, integer) being divided by zero, it may generate positive or negative infinity by the IEEE 754 floating point standard, generate an exception, generate an error message, cause the program to terminate, result in a special not-a-number value, or crash.

When division is explained at the elementary arithmetic level, it is often considered as splitting a set of objects into equal parts. As an example, consider having ten cookies, and these cookies are to be distributed equally to five people at a table.
Each person would receive $\tfrac{10}{5} = 2$ cookies. Similarly, if there are ten cookies, and only one person at the table, that person would receive $\tfrac{10}{1} = 10$ cookies.

So, for dividing by zero, what is the number of cookies that each person receives when 10 cookies are evenly distributed amongst 0 people at a table? Certain words can be pinpointed in the question to highlight the problem. The problem is with the "when": there is no way to distribute 10 cookies to nobody. So $\tfrac{10}{0}$, at least in elementary arithmetic, is said to be either meaningless, or undefined.

If there are, say, 5 cookies and 2 people, the problem is in "evenly distribute". In any integer partition of 5 things into 2 parts, either one of the parts of the partition will have more elements than the other, or there will be a remainder (written as 5/2 = 2 r1). Or, the problem with 5 cookies and 2 people can be solved by cutting one cookie in half, which introduces the idea of fractions (5/2 = 2½). The problem with 5 cookies and 0 people, on the other hand, cannot be solved in any way that preserves the meaning of "divides".

In elementary algebra, another way of looking at division by zero is that division can always be checked using multiplication. Considering the 10/0 example above, setting x = 10/0, if x equals ten divided by zero, then x times zero equals ten, but there is no x that, when multiplied by zero, gives ten (or any number other than zero). If instead x = 0/0, then every x is an answer to the question "what number x, multiplied by zero, gives zero?"

The Brāhmasphuṭasiddhānta of Brahmagupta (c. 598–668) is the earliest text to treat zero as a number in its own right and to define operations involving zero. The author could not explain division by zero in his texts: his definition can be easily proven to lead to algebraic absurdities. According to Brahmagupta,
    A positive or negative number when divided by zero is a fraction with the zero as denominator. Zero divided by a negative or positive number is either zero or is expressed as a fraction with zero as numerator and the finite quantity as denominator. Zero divided by zero is zero.
In 830, Mahāvīra unsuccessfully tried to correct Brahmagupta's mistake in his book Ganita Sara Samgraha: "A number remains unchanged when divided by zero."

The four basic operations – addition, subtraction, multiplication and division – as applied to whole numbers (positive integers), with some restrictions, in elementary arithmetic are used as a framework to support the extension of the realm of numbers to which they apply. For instance, to make it possible to subtract any whole number from another, the realm of numbers must be expanded to the entire set of integers in order to incorporate the negative integers. Similarly, to support division of any integer by any other, the realm of numbers must expand to the rational numbers. During this gradual expansion of the number system, care is taken to ensure that the "extended operations", when applied to the older numbers, do not produce different results. Loosely speaking, since division by zero has no meaning (is undefined) in the whole number setting, this remains true as the setting expands to the real or even complex numbers.

As the realm of numbers to which these operations can be applied expands, there are also changes in how the operations are viewed. For instance, in the realm of integers, subtraction is no longer considered a basic operation since it can be replaced by addition of signed numbers. Similarly, when the realm of numbers expands to include the rational numbers, division is replaced by multiplication by certain rational numbers. In keeping with this change of viewpoint, the question, "Why can't we divide by zero?", becomes "Why can't a rational number have a zero denominator?". Answering this revised question precisely requires close examination of the definition of rational numbers.

In the modern approach to constructing the field of real numbers, the rational numbers appear as an intermediate step in the development that is founded on set theory. First, the natural numbers (including zero) are established on an axiomatic basis such as Peano's axiom system and then this is expanded to the ring of integers. The next step is to define the rational numbers, keeping in mind that this must be done using only the sets and operations that have already been established, namely, addition, multiplication and the integers. Starting with the set of ordered pairs of integers, {(a, b)} with b ≠ 0, define a binary relation on this set by (a, b) ≃ (c, d) if and only if ad = bc. This relation is shown to be an equivalence relation and its equivalence classes are then defined to be the rational numbers. It is in the formal proof that this relation is an equivalence relation that the requirement that the second coordinate is not zero is needed (for verifying transitivity).

The above explanation may be too abstract and technical for many purposes, but if one assumes the existence and properties of the rational numbers, as is commonly done in elementary mathematics, the "reason" that division by zero is not allowed is hidden from view. Nevertheless, a (non-rigorous) justification can be given in this setting.

It follows from the properties of the number system we are using (that is, integers, rationals, reals, etc.) that, if b ≠ 0, then the equation a/b = c is equivalent to a = b × c. If a/0 were a number c, then it would have to be that a = 0 × c = 0. However, the single number c would then have to be determined by the equation 0 = 0 × c, but every number satisfies this equation, so we cannot assign a numerical value to 0/0.
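This multiplication check is easy to carry out mechanically. A minimal sketch in Python (the finite search range is purely illustrative, since no finite search can stand in for a proof):

# Which candidates c satisfy 0 * c == a?
for a in (10, 0):
    solutions = [c for c in range(-100, 101) if 0 * c == a]
    print(a, len(solutions))
# a = 10: no candidate works, so 10/0 has no value at all
# a = 0: every candidate works, so 0/0 has no single determined value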
The concept that explains division in algebra is that it is the inverse of multiplication. For example,
$$
\frac{6}{3}=2
$$
since 2 is the value for which the unknown quantity in
$$
? \times 3 = 6
$$
is true. But the expression
$$
\frac{6}{0} = x
$$
requires a value to be found for the unknown quantity in
$$
x \times 0 = 6.
$$
But any number multiplied by 0 is 0 and so there is no number that solves the equation.

The expression
$$
\frac{0}{0} = x
$$
requires a value to be found for the unknown quantity in
$$
x \times 0 = 0.
$$
Again, any number multiplied by 0 is 0 and so this time every number solves the equation instead of there being a single number that can be taken as the value of 0/0.

In general, a single value can't be assigned to a fraction where the denominator is 0, so the value remains undefined.

A compelling reason for not allowing division by zero is that, if it were allowed, many absurd results (i.e., fallacies) would arise. When working with numerical quantities it is easy to determine when an illegal attempt to divide by zero is being made. For example, consider the following computation.

With the assumptions
$$
\begin{align}
0\times 1 &= 0, \\
0\times 2 &= 0,
\end{align}
$$
the following is true:
$$
0\times 1 = 0\times 2.
$$
Dividing both sides by zero gives:
$$
\begin{align}
\frac{0 \times 1}{0} &= \frac{0\times 2}{0} \\[6px]
\frac{0}{0}\times 1 &= \frac{0}{0}\times 2.
\end{align}
$$
Simplified, this yields:
$$
1 = 2.
$$
The fallacy here is the assumption that dividing 0 by 0 is a legitimate operation with the same properties as dividing by any other number.

However, it is possible to disguise a division by zero in an algebraic argument, leading to invalid proofs that, for instance, 1 = 2, such as the following:

Let 1 = x. Multiply by x to get
$$
x = x^2.
$$
Subtract 1 from each side to get
$$
x - 1 = x^2 - 1.
$$
Divide both sides by x − 1:
$$
\begin{align}
\frac{x-1}{x-1} &= \frac{x^2 -1}{x - 1} \\[6pt]
&= \frac{(x + 1) (x - 1)}{x - 1},
\end{align}
$$
which simplifies to
$$
1 = x + 1.
$$
But, since x = 1,
$$
1 = 1 + 1 = 2,
$$
and therefore 1 = 2.

The disguised division by zero occurs since x − 1 = 0 when x = 1.

At first glance it seems possible to define a/0 by considering the limit of a/b as b approaches 0.

For any positive a, the limit from the right is
$$
\lim_{b \to 0^+} {a \over b} = +\infty,
$$
however, the limit from the left is
$$
\lim_{b \to 0^-} {a \over b} = -\infty,
$$
and so the two-sided limit $\lim_{b \to 0} {a \over b}$ is undefined (the limit is also undefined for negative a).

Furthermore, there is no obvious definition of 0/0 that can be derived from considering the limit of a ratio. The limit
$$
\lim_{(a,b) \to (0,0)} {a \over b}
$$
does not exist. Limits of the form
$$
\lim_{x \to 0} {f(x) \over g(x)}
$$
in which both f(x) and g(x) approach 0 as x approaches 0, may equal any real or infinite value, or may not exist at all, depending on the particular functions f and g.

For example, consider:
$$
\lim_{x \to 1} {x^2 - 1 \over x - 1}.
$$
This initially appears to be indeterminate. However,
$$
\lim_{x \to 1} {x^2 - 1 \over x - 1} = \lim_{x \to 1} {(x - 1)(x + 1) \over x - 1} = \lim_{x \to 1} {(x + 1)} = 2,
$$
and so the limit exists, and is equal to $2$.

These and other similar facts show that the expression $\frac{0}{0}$ cannot be well-defined as a limit.
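These one-sided and indeterminate limits can also be checked symbolically. A small sketch using Python with SymPy:

from sympy import symbols, limit

x = symbols('x')
print(limit(1/x, x, 0, dir='+'))         # oo
print(limit(1/x, x, 0, dir='-'))         # -oo (the one-sided limits disagree)
print(limit((x**2 - 1)/(x - 1), x, 1))   # 2  (a 0/0 form with a finite limit)
print(limit(x/x**3, x, 0, dir='+'))      # oo (another 0/0 form, this time infinite)

The last two calls show two different limits of the 0/0 form taking entirely different values, which is exactly why no single value can be assigned to the expression itself.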
A formal calculation is one carried out using rules of arithmetic, without consideration of whether the result of the calculation is well-defined. Thus, it is sometimes useful to think of a/0, where a ≠ 0, as being $\infty$. This infinity can be either positive, negative, or unsigned, depending on context. For example, formally:
$$
\lim_{x \to 0} \frac{1}{x} = \frac{\lim\limits_{x \to 0} 1}{\lim\limits_{x \to 0} x} = \infty.
$$
As with any formal calculation, invalid results may be obtained. A logically rigorous (as opposed to formal) computation would assert only that
$$
\lim_{x \to 0^+} \frac{1}{x} = +\infty ~\text{ and }~ \lim_{x \to 0^-} \frac{1}{x} = -\infty.
$$
Since the one-sided limits are different, the two-sided limit does not exist in the standard framework of the real numbers. Also, the fraction 1/0 is left undefined in the extended real line, therefore it and
$$
\frac{\lim\limits_{x \to 0} 1 }{\lim\limits_{x \to 0} x}
$$
are meaningless expressions.

The set $\mathbb{R}\cup\{\infty\}$ is the projectively extended real line, which is a one-point compactification of the real line. Here $\infty$ means an unsigned infinity, an infinite quantity that is neither positive nor negative. This quantity satisfies $-\infty = \infty$, which is necessary in this context. In this structure, $\frac{a}{0} = \infty$ can be defined for nonzero a, and $\frac{a}{\infty} = 0$ when a is not $\infty$. It is the natural way to view the range of the tangent function and cotangent functions of trigonometry: tan(x) approaches the single point at infinity as x approaches either +π/2 or −π/2 from either direction.

This definition leads to many interesting results. However, the resulting algebraic structure is not a field, and should not be expected to behave like one. For example, $\infty+\infty$ is undefined in this extension of the real line.

The set $\mathbb{C}\cup\{\infty\}$ is the Riemann sphere, which is of major importance in complex analysis. Here too $\infty$ is an unsigned infinity – or, as it is often called in this context, the point at infinity. This set is analogous to the projectively extended real line, except that it is based on the field of complex numbers. In the Riemann sphere, $\frac{1}{0}=\infty$ and $\frac{1}{\infty} = 0$, but $\frac{0}{0}$ and $0\times\infty$ are undefined.

Alternatively, the negative real numbers can be discarded, and infinity introduced, leading to the set $[0, \infty]$, where division by zero can be naturally defined as a/0 = ∞ for positive a. While this makes division defined in more cases than usual, subtraction is instead left undefined in many cases, because there are no negative numbers.

Although division by zero cannot be sensibly defined with real numbers and integers, it is possible to consistently define it, or similar operations, in other mathematical structures.

In the hyperreal numbers and the surreal numbers, division by zero is still impossible, but division by non-zero infinitesimals is possible.

In distribution theory one can extend the function $\frac{1}{x}$ to a distribution on the whole space of real numbers (in effect by using Cauchy principal values). It does not, however, make sense to ask for a "value" of this distribution at x = 0; a sophisticated answer refers to the singular support of the distribution.

In matrix algebra (or linear algebra in general), one can define a pseudo-division, by setting $a/b = ab^{+}$, in which $b^{+}$ represents the pseudoinverse of b. It can be proven that if $b^{-1}$ exists, then $b^{+} = b^{-1}$, and if b equals 0, then $b^{+} = 0$.
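A minimal numerical sketch of this pseudo-division, using NumPy's pseudoinverse on 1×1 matrices (the choice of 1×1 matrices is just for illustration):

import numpy as np

b = np.array([[2.0]])
print(np.linalg.pinv(b))      # [[0.5]] -- agrees with the ordinary inverse

zero = np.array([[0.0]])
print(np.linalg.pinv(zero))   # [[0.]]  -- the pseudoinverse of 0 is 0,
                              # so the "pseudo-quotient" a * pinv(0) is 0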
Any number system that forms a commutative ring—for instance, the integers, the real numbers, and the complex numbers—can be extended to a wheel in which division by zero is always possible; however, in such a case, "division" has a slightly different meaning.

The concepts applied to standard arithmetic are similar to those in more general algebraic structures, such as rings and fields. In a field, every nonzero element is invertible under multiplication; as above, division poses problems only when attempting to divide by zero. This is likewise true in a skew field (which for this reason is called a division ring). However, in other rings, division by nonzero elements may also pose problems. Consider, for example, the ring Z/6Z of integers mod 6. The meaning of the expression $\frac{2}{2}$ should be the solution x of the equation $2x = 2$. But in the ring Z/6Z, 2 is a zero divisor, and this equation has two distinct solutions, x = 1 and x = 4, so the expression $\frac{2}{2}$ is undefined.

In field theory, the expression $\frac{a}{b}$ is only shorthand for the formal expression $ab^{-1}$, where $b^{-1}$ is the multiplicative inverse of b. Since the field axioms only guarantee the existence of such inverses for nonzero elements, this expression has no meaning when b is zero. Modern texts, which define fields as a special type of ring, include the axiom 0 ≠ 1 for fields (or its equivalent) so that the zero ring is excluded from being a field. In the zero ring, division by zero is possible, which shows that the other field axioms are not sufficient to exclude division by zero in a field.

The IEEE floating-point standard, supported by almost all modern floating-point units, specifies that every floating point arithmetic operation, including division by zero, has a well-defined result. The standard supports signed zero, as well as infinity and NaN (not a number). There are two zeroes: +0 (positive zero) and −0 (negative zero), and this removes any ambiguity when dividing. In IEEE 754 arithmetic, a ÷ +0 is positive infinity when a is positive, negative infinity when a is negative, and NaN when a = ±0. The infinity signs change when dividing by −0 instead.

The justification for this definition is to preserve the sign of the result in case of arithmetic underflow. For example, in the single-precision computation 1/(x/2), where x = ±2^−149, the computation x/2 underflows and produces ±0 with sign matching x, and the result will be ±∞ with sign matching x. The sign will match that of the exact result ±2^150, but the magnitude of the exact result is too large to represent, so infinity is used to indicate overflow.

Integer division by zero is usually handled differently from floating point, since there is no integer representation for the result. Some processors generate an exception when an attempt is made to divide an integer by zero, although others will simply continue and generate an incorrect result for the division. The result depends on how division is implemented, and can either be zero, or sometimes the largest possible integer.

Because of the improper algebraic results of assigning any value to division by zero, many computer programming languages (including those used by calculators) explicitly forbid the execution of the operation and may prematurely halt a program that attempts it, sometimes reporting a "Divide by zero" error; both conventions are illustrated in the sketch below.
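A minimal sketch in Python, whose plain integers follow the forbid-and-report approach, while IEEE 754 floats (here exercised via NumPy) return signed infinities and NaN:

import numpy as np

try:
    1 / 0                               # plain Python raises an exception
except ZeroDivisionError as e:
    print("error:", e)

with np.errstate(divide="ignore", invalid="ignore"):
    print(np.float64(1.0) / 0.0)        # inf
    print(np.float64(-1.0) / 0.0)       # -inf (sign of the dividend is kept)
    print(np.float64(0.0) / 0.0)        # nan  (the 0/0 case)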
In these cases, if some special behavior is desired for division by zero, the condition must be explicitly tested (for example, using an if statement). Some programs (especially those that use fixed-point arithmetic where no dedicated floating-point hardware is available) will use behavior similar to the IEEE standard, using large positive and negative numbers to approximate infinities. In some programming languages, an attempt to divide by zero results in undefined behavior. The graphical programming language Scratch 2.0 and 3.0 used in many schools returns Infinity or −Infinity depending on the sign of the dividend.

In two's complement arithmetic, attempts to divide the smallest signed integer by −1 are attended by similar problems, and are handled with the same range of solutions, from explicit error conditions to undefined behavior.

Most calculators will either return an error or state that 1/0 is undefined; however, some TI and HP graphing calculators will evaluate (1/0)^2 to ∞.

Microsoft Math and Mathematica return ComplexInfinity for 1/0. Maple and SageMath return an error message for 1/0, and infinity for 1/0.0 (0.0 tells these systems to use floating point arithmetic instead of algebraic arithmetic).

Some modern calculators allow division by zero in special cases, where it will be useful to students and, presumably, understood in context by mathematicians. Some calculators (the online Desmos calculator is one example) allow arctangent(1/0). Students are often taught that the inverse cotangent function, arccotangent, should be calculated by taking the arctangent of the reciprocal, and so a calculator may allow arctangent(1/0), giving the output $\tfrac{\pi}{2}$, which is the correct value of arccotangent 0. The mathematical justification is that the limit as x goes to zero of arctangent 1/x is $\tfrac{\pi}{2}$.

* On September 21, 1997, a division by zero error in the "Remote Data Base Manager" aboard USS Yorktown (CG-48) brought down all the machines on the network, causing the ship's propulsion system to fail.

diff --git a/wiki/wikipedia/483.txt b/wiki/wikipedia/483.txt deleted file mode 100644 index f8f761b97ed448c62c3502e20c7f79adb6253566..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/483.txt +++ /dev/null @@ -1,29 +0,0 @@

Commandino's theorem, named after Federico Commandino (1509–1575), states that the four medians of a tetrahedron are concurrent at a point S, which divides them in a 3:1 ratio. In a tetrahedron, a median is a line segment that connects a vertex with the centroid of the opposite face – that is, the centroid of the opposite triangle. The point S is also the centroid of the tetrahedron.

The theorem is attributed to Commandino, who stated, in his work De Centro Gravitatis Solidorum (The Center of Gravity of Solids, 1565), that the four medians of the tetrahedron are concurrent. However, according to the 19th century scholar Guillaume Libri, Francesco Maurolico (1494–1575) claimed to have found the result earlier. Libri nevertheless thought that it had been known even earlier to Leonardo da Vinci, who seemed to have used it in his work. Julian Coolidge shared that assessment but pointed out that he couldn't find any explicit description or mathematical treatment of the theorem in da Vinci's works. Other scholars have speculated that the result may have already been known to Greek mathematicians during antiquity.
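The 3:1 ratio is easy to check numerically. A quick sketch in Python with NumPy (the randomly generated tetrahedron is illustrative; the identity being checked holds for any four points):

import numpy as np

rng = np.random.default_rng(0)
V = rng.random((4, 3))            # four vertices of a (generic) tetrahedron
S = V.mean(axis=0)                # centroid of the tetrahedron

for i in range(4):
    face = np.delete(V, i, axis=0).mean(axis=0)  # centroid of opposite face
    # S lies on the median from V[i], three quarters of the way to the
    # face centroid, i.e. it divides the median in the ratio 3:1:
    assert np.allclose(S, V[i] + 0.75 * (face - V[i]))
print("Commandino 3:1 ratio verified for all four medians")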
Commandino's theorem has a direct analog for simplexes of any dimension:

Let $ \Delta $ be a $d$-simplex of some dimension $d>1$ in $\R^n (d,n \in \N , n \geq d) $ and let $V_0,V_1,\ldots,V_d$ be its vertices. Furthermore, let $ \ell_0, \ell_1,\ldots,\ell_d$ be the medians of $ \Delta $, the lines joining each vertex $V_i$ with the centroid of the opposite $(d-1)$-dimensional facet $V_0\ldots V_{i-1}V_{i+1}\ldots V_d$. Then these lines intersect each other in a point $S$, which divides each of the medians in a ratio of $d:1$.

The above analog is easy to prove via the following, more general result, which is analogous to the way levers in physics work:

Let $m$ and $k$ be natural numbers, and suppose that in an $\R$-vector space $\mathcal {V}$, $m+k$ pairwise different points $X_1, \dots, X_m, Y_1, \dots, Y_k \in \mathcal {V} $ are given.

Let $S_X$ be the centroid of the points $X_i (i=1, \dots, m)$, let $S_Y$ be the centroid of the points $Y_j (j=1, \dots, k)$, and let $S$ be the centroid of all of these $m+k$ points. Then, one has
$$
S = S_X + \frac{k}{m+k} (S_Y-S_X) = \frac{m}{m+k} S_X + \frac{k}{m+k} S_Y.
$$
In particular, the centroid $S$ lies on the line $\overline{ {S_X} {S_Y}}$ and divides it in a ratio of $k:m$.

The previous theorem has further interesting consequences other than the aforementioned generalization of Commandino's theorem. It can be used to prove the following theorem about the centroid of a tetrahedron, first described in the Mathematische Unterhaltungen by a German physicist:

One may find the centroid of a tetrahedron by taking the midpoints of two pairs of its opposite edges and connecting the corresponding midpoints through their respective midline. The intersection point of both midlines will be the centroid of the tetrahedron.

Since a tetrahedron has six edges in three opposite pairs, one obtains the following corollary:

Let a quadrilateral in $\R^2 $ be given. Then the two midlines connecting opposite edge midpoints intersect in the centroid of the quadrilateral and are divided in half by it.

diff --git a/wiki/wikipedia/484.txt b/wiki/wikipedia/484.txt deleted file mode 100644 index 53542ecec7774df9fdeca3173cc4569bd6edcec3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/484.txt +++ /dev/null @@ -1,51 +0,0 @@

In mathematics, specifically in real analysis, the Bolzano–Weierstrass theorem, named after Bernard Bolzano and Karl Weierstrass, is a fundamental result about convergence in a finite-dimensional Euclidean space Rn. The theorem states that each bounded sequence in Rn has a convergent subsequence. An equivalent formulation is that a subset of Rn is sequentially compact if and only if it is closed and bounded. The theorem is sometimes called the sequential compactness theorem.

The Bolzano–Weierstrass theorem is named after mathematicians Bernard Bolzano and Karl Weierstrass. It was actually first proved by Bolzano in 1817 as a lemma in the proof of the intermediate value theorem. Some fifty years later the result was identified as significant in its own right, and proved again by Weierstrass. It has since become an essential theorem of analysis.

First we prove the theorem for $\mathbb{R}^1$ (the set of all real numbers), in which case the ordering on $\mathbb{R}^1$ can be put to good use. Indeed, we have the following result:

Lemma: Every infinite sequence $(x_n)$ in $\mathbb{R}^1$ has a monotone subsequence.
Proof: Let us call a positive integer-valued index $n$ of a sequence a "peak" of the sequence when $x_m < x_n$ for every $m > n$. Suppose first that the sequence has infinitely many peaks, giving a subsequence with indices $n_1<n_2<n_3<\dots<n_j<\dots$ and terms $x_{n_1}>x_{n_2}>x_{n_3}>\dots>x_{n_j}>\dots$. So the infinite sequence $(x_n)$ in $\mathbb{R}^1$ has a monotone (decreasing) subsequence, namely $(x_{n_j})$. Suppose now that there are only finitely many peaks, let $N$ be the final peak and let the first index of a new subsequence $(x_{n_j})$ be set to $n_1=N+1$. Then $n_1$ is not a peak, since $n_1$ comes after the final peak, which implies the existence of $n_2 > n_1$ with $x_{n_1} \leq x_{n_2}$. Again, $n_2$ comes after the final peak and so is not a peak, which gives $n_3 > n_2$ with $x_{n_2} \leq x_{n_3}$; repeating this process indefinitely produces a non-decreasing subsequence $(x_{n_j})$, proving the lemma.

Now suppose one has a bounded sequence in $\mathbb{R}^1$. By the lemma there exists a monotone subsequence, which is likewise bounded, and a bounded monotone sequence converges by the monotone convergence theorem. This proves the theorem for $\mathbb{R}^1$. (A finite-sample rendering of the peak argument is sketched in code below.)
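The peak construction translates directly into a short program. A minimal sketch in Python (on a finite sample, "peak" is taken to mean "greater than every later entry", so this only illustrates the argument, not the infinite case):

def monotone_subsequence(xs):
    """Extract a monotone subsequence via the 'peaks' argument."""
    if not xs:
        return []
    n = len(xs)
    # index i is a peak when xs[i] exceeds every later term
    peaks = [i for i in range(n) if all(xs[i] > xs[m] for m in range(i + 1, n))]
    if len(peaks) >= 2:
        return [xs[i] for i in peaks]        # peak values strictly decrease
    # otherwise start just after the last peak and climb greedily
    start = peaks[-1] + 1 if peaks else 0
    idx = [start]
    for j in range(start + 1, n):
        if xs[j] >= xs[idx[-1]]:
            idx.append(j)                    # non-decreasing subsequence
    return [xs[i] for i in idx]

print(monotone_subsequence([3, 1, 4, 1, 5, 9, 2, 6]))   # [9, 6]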
There is also an alternative proof of the Bolzano–Weierstrass theorem using nested intervals. The steps (originally presented as a sequence of illustrations) are as follows:

* Because $(x_n)_{n\in\N}$ is bounded, this sequence has a lower bound $s$ and an upper bound $S$.

* We take $I_1=[s,S]$ as the first interval for the sequence of nested intervals.

* Then we split $I_1$ at the midpoint into two equally sized subintervals.

* Because each sequence has infinitely many members, there must be (at least) one of these subintervals that contains infinitely many members of $(x_n)_{n\in\N}$. We take this subinterval as the second interval $I_2$ of the sequence of nested intervals.

* Then we split $I_2$ again at the midpoint into two equally sized subintervals.

* Again, one of these subintervals contains infinitely many members of $(x_n)_{n\in\N}$. We take this subinterval as the third subinterval $I_3$ of the sequence of nested intervals.

* We continue this process infinitely many times. Thus we get a sequence of nested intervals.

Because we halve the length of an interval at each step, the limit of the interval's length is zero. Also, the nested intervals theorem states that if each $I_n = [a_n, b_n]$ is a closed and bounded interval with $a_n \leq b_n$, then under the assumption of nesting the intersection of the $I_n$ is not empty. Thus there is a number $x$ that is in each interval $I_n$. Now we show that $x$ is an accumulation point of $(x_n)$.

Take a neighbourhood $U$ of $x$. Because the length of the intervals converges to zero, there is an interval $I_N$ that is a subset of $U$. Because $I_N$ contains by construction infinitely many members of $(x_n)$ and $I_N\subseteq U$, also $U$ contains infinitely many members of $(x_n)$. This proves that $x$ is an accumulation point of $(x_n)$. Thus, there is a subsequence of $(x_n)$ that converges to $x$.

Suppose A is a subset of Rn with the property that every sequence in A has a subsequence converging to an element of A. Then A must be bounded, since otherwise there exists a sequence xm in A with ||xm|| ≥ m for all m, and then every subsequence is unbounded and therefore not convergent. Moreover, A must be closed, since from a noninterior point x in the complement of A, one can build an A-valued sequence converging to x. Thus the subsets A of Rn for which every sequence in A has a subsequence converging to an element of A – i.e., the subsets that are sequentially compact in the subspace topology – are precisely the closed and bounded subsets.

This form of the theorem makes especially clear the analogy to the Heine–Borel theorem, which asserts that a subset of Rn is compact if and only if it is closed and bounded. In fact, general topology tells us that a metrizable space is compact if and only if it is sequentially compact, so that the Bolzano–Weierstrass and Heine–Borel theorems are essentially the same.

There are several important equilibrium concepts in economics, the proofs of the existence of which often require variations of the Bolzano–Weierstrass theorem. One example is the existence of a Pareto efficient allocation. An allocation is a matrix of consumption bundles for agents in an economy, and an allocation is Pareto efficient if no change can be made to it that makes no agent worse off and at least one agent better off (here rows of the allocation matrix must be rankable by a preference relation). The Bolzano–Weierstrass theorem allows one to prove that if the set of allocations is compact and non-empty, then the system has a Pareto-efficient allocation.

diff --git a/wiki/wikipedia/485.txt b/wiki/wikipedia/485.txt deleted file mode 100644 index f2cb46a0de3ca1b4b031cb242aac555ec55a47a5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/485.txt +++ /dev/null @@ -1,39 +0,0 @@

In mathematics, a projection is a mapping of a set (or other mathematical structure) into a subset (or sub-structure), which is equal to its square under mapping composition, i.e., which is idempotent. The restriction to a subspace of a projection is also called a projection, even if the idempotence property is lost.

An everyday example of a projection is the casting of shadows onto a plane (sheet of paper): the projection of a point is its shadow on the sheet of paper, and the shadow of a point on the sheet of paper is that point itself (idempotency). The shadow of a three-dimensional sphere is a closed disk. Originally, the notion of projection was introduced in Euclidean geometry to denote the projection of the three-dimensional Euclidean space onto a plane in it, like the shadow example. The two main projections of this kind are:

* The projection from a point onto a plane or central projection: If C is a point, called the center of projection, then the projection of a point P different from C onto a plane that does not contain C is the intersection of the line CP with the plane. The points P such that the line CP is parallel to the plane do not have any image by the projection, but one often says that they project to a point at infinity of the plane (see Projective geometry for a formalization of this terminology). The projection of the point C itself is not defined.

* The projection parallel to a direction D, onto a plane or parallel projection: The image of a point P is the intersection with the plane of the line parallel to D passing through P. See for an accurate definition, generalized to any dimension.

The concept of projection in mathematics is a very old one, and most likely has its roots in the phenomenon of the shadows cast by real-world objects on the ground. This rudimentary idea was refined and abstracted, first in a geometric context and later in other branches of mathematics. Over time different versions of the concept developed, but today, in a sufficiently abstract setting, we can unify these variations.

In cartography, a map projection is a map of a part of the surface of the Earth onto a plane, which, in some cases, but not always, is the restriction of a projection in the above meaning. The 3D projections are also at the basis of the theory of perspective.
The need for unifying the two kinds of projections, and of defining the image by a central projection of any point different from the center of projection, is at the origin of projective geometry. However, a projective transformation is a bijection of a projective space, a property not shared with the projections of this article.

In an abstract setting we can generally say that a projection is a mapping of a set (or of a mathematical structure) which is idempotent, which means that a projection is equal to its composition with itself. A projection may also refer to a mapping which has a right inverse. Both notions are strongly related, as follows. Let p be an idempotent mapping from a set A into itself (thus p ∘ p = p) and B = p(A) be the image of p. If we denote by π the map p viewed as a map from A onto B and by i the injection of B into A (so that p = i ∘ π), then we have π ∘ i = IdB (so that π has a right inverse). Conversely, if π has a right inverse, then π ∘ i = IdB implies that i ∘ π is idempotent.

The original notion of projection has been extended or generalized to various mathematical situations, frequently, but not always, related to geometry, for example:

* In set theory:

** An operation typified by the jth projection map, written $proj_j$, that takes an element $x = (x_1, \ldots, x_j, \ldots, x_n)$ of the Cartesian product $X_1 \times \cdots \times X_j \times \cdots \times X_n$ to the value $proj_j(x) = x_j$. This map is always surjective.

** A mapping that takes an element to its equivalence class under a given equivalence relation is known as the canonical projection.

** The evaluation map sends a function f to the value f(x) for a fixed x. The space of functions $Y^X$ can be identified with the Cartesian product $\prod_{i\in X}Y$, and the evaluation map is a projection map from the Cartesian product.

* For relational databases and query languages, the projection is a unary operation written as $\Pi_{a_1, \ldots,a_n}( R )$ where $a_1,\ldots,a_n$ is a set of attribute names. The result of such a projection is defined as the set that is obtained when all tuples in R are restricted to the set $\{a_1,\ldots,a_n\}$; here R is a database relation.

* In spherical geometry, projection of a sphere upon a plane was used by Ptolemy (~150) in his Planisphaerium. The method is called stereographic projection and uses a plane tangent to a sphere and a pole C diametrically opposite the point of tangency. Any point P on the sphere besides C determines a line CP intersecting the plane at the projected point for P. The correspondence makes the sphere a one-point compactification for the plane when a point at infinity is included to correspond to C, which otherwise has no projection on the plane. A common instance is the complex plane where the compactification corresponds to the Riemann sphere. Alternatively, a hemisphere is frequently projected onto a plane using the gnomonic projection.

* In linear algebra, a linear transformation that remains unchanged if applied twice: p(u) = p(p(u)); in other words, an idempotent operator. For example, the mapping that takes a point (x, y, z) in three dimensions to the point (x, y, 0) is a projection. This type of projection naturally generalizes to any number of dimensions n for the domain and k ≤ n for the codomain of the mapping. See Orthogonal projection, Projection (linear algebra). In the case of orthogonal projections, the space admits a decomposition as a product, and the projection operator is a projection in that sense as well.
* In differential topology, any fiber bundle includes a projection map as part of its definition. Locally at least this map looks like a projection map in the sense of the product topology and is therefore open and surjective.

* In topology, a retraction is a continuous map r: X → X which restricts to the identity map on its image. This satisfies a similar idempotency condition $r^2 = r$ and can be considered a generalization of the projection map. The image of a retraction is called a retract of the original space. A retraction which is homotopic to the identity is known as a deformation retraction. This term is also used in category theory to refer to any split epimorphism.

* The scalar projection (or resolute) of one vector onto another.

* In category theory, the above notion of Cartesian product of sets can be generalized to arbitrary categories. The product of some objects has a canonical projection morphism to each factor. This projection will take many forms in different categories: the projection from the Cartesian product of sets, the product topology of topological spaces (which is always surjective and open), the direct product of groups, etc. Although these morphisms are often epimorphisms and even surjective, they do not have to be.

diff --git a/wiki/wikipedia/486.txt b/wiki/wikipedia/486.txt deleted file mode 100644 index ca8c92d9e811d34f385f2ea1b7cd865a8353bd8a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/486.txt +++ /dev/null @@ -1,19 +0,0 @@

In graph theory, a nonblocker is a subset of vertices in an undirected graph, all of which are adjacent to vertices outside of the subset. Equivalently, a nonblocker is the complement of a dominating set.

The computational problem of finding the largest nonblocker in a graph was formulated by Papadimitriou, who observed that it belongs to MaxSNP.

Although computing a dominating set is not fixed-parameter tractable under standard assumptions, the complementary problem of finding a nonblocker of a given size is fixed-parameter tractable.

In graphs with no isolated vertices, every maximal nonblocker (one to which no more vertices can be added) is itself a dominating set.

One way to construct a fixed-parameter tractable algorithm for the nonblocker problem is to use kernelization, an algorithmic design principle in which a polynomial-time algorithm is used to reduce a larger problem instance to an equivalent instance whose size is bounded by a function of the parameter.

For the nonblocker problem, an input to the problem consists of a graph $G$ and a parameter $k$, and the goal is to determine whether $G$ has a nonblocker with $k$ or more vertices.

This problem has an easy kernelization that reduces it to an equivalent problem with at most $2k$ vertices. First, remove all isolated vertices from $G$, as they cannot be part of any nonblocker. Once this has been done, the remaining graph must have a nonblocker that includes at least half of its vertices; for instance, if one 2-colors any spanning tree of the graph, each color class is a nonblocker and one of the two color classes includes at least half the vertices. Therefore, if the graph with isolated vertices removed still has $2k$ or more vertices, the problem can be solved immediately. Otherwise, the remaining graph is a kernel with at most $2k$ vertices.
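This $2k$-vertex kernelization is short enough to transcribe directly. A minimal sketch in Python (the adjacency-set input format and the function name are assumptions of the sketch):

def nonblocker_kernel(adj, k):
    """Reduce an instance (G, k) of the nonblocker problem to a kernel.

    adj maps each vertex to the set of its neighbours (undirected graph).
    Returns True when the answer is already "yes"; otherwise returns a
    kernel with fewer than 2*k vertices, to be solved by brute force.
    """
    # Isolated vertices can never belong to a nonblocker: remove them.
    g = {v: set(ns) for v, ns in adj.items() if ns}
    # A graph without isolated vertices has a nonblocker covering at least
    # half of its vertices (2-colour a spanning forest and keep the larger
    # colour class), so 2*k or more remaining vertices force a "yes".
    if len(g) >= 2 * k:
        return True
    return g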
Dehne et al. improved this to a kernel of size at most $\tfrac{5}{3}k+3$. Their method involves merging pairs of neighbors of degree-one vertices until all such vertices have a single neighbor, and removing all but one of the degree-one vertices, leaving an equivalent instance with only one degree-one vertex. Then, they show that (except for small values of $k$, which can be handled separately) this instance must either be smaller than the kernel size bound or contain a $k$-vertex nonblocker.

Once a small kernel has been obtained, an instance of the nonblocker problem may be solved in fixed-parameter tractable time by applying a brute-force search algorithm to the kernel. Applying faster (but still exponential) time bounds leads to a time bound for the nonblocker problem of the form $O(2.5154^k+n)$. Even faster algorithms are possible for certain special classes of graphs.

diff --git a/wiki/wikipedia/487.txt b/wiki/wikipedia/487.txt deleted file mode 100644 index 2f062d085664265da08a7c2ba8ada2587209527a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/487.txt +++ /dev/null @@ -1,55 +0,0 @@

In logic, a substructural logic is a logic lacking one of the usual structural rules (e.g. of classical and intuitionistic logic), such as weakening, contraction, exchange or associativity. Two of the more significant substructural logics are relevance logic and linear logic.

In a sequent calculus, one writes each line of a proof as
$$
\Gamma\vdash\Sigma.
$$

Here the structural rules are rules for rewriting the LHS of the sequent, denoted Γ, initially conceived of as a string (sequence) of propositions. The standard interpretation of this string is as conjunction: we expect to read
$$
\mathcal A,\mathcal B \vdash\mathcal C
$$
as the sequent notation for

(A and B) implies C.

Here we are taking the RHS Σ to be a single proposition C (which is the intuitionistic style of sequent); but everything applies equally to the general case, since all the manipulations are taking place to the left of the turnstile symbol $\vdash$.

Since conjunction is a commutative and associative operation, the formal setting-up of sequent theory normally includes structural rules for rewriting the sequent Γ accordingly—for example for deducing
$$
\mathcal B,\mathcal A\vdash\mathcal C
$$
from
$$
\mathcal A,\mathcal B\vdash\mathcal C.
$$

There are further structural rules corresponding to the idempotent and monotonic properties of conjunction: from
$$
\Gamma,\mathcal A,\mathcal A,\Delta\vdash\mathcal C
$$
we can deduce
$$
\Gamma,\mathcal A,\Delta\vdash\mathcal C.
$$
Also from
$$
\Gamma,\mathcal A,\Delta\vdash\mathcal C
$$
one can deduce, for any B,
$$
\Gamma,\mathcal A,\mathcal B,\Delta\vdash\mathcal C.
$$

Linear logic, in which duplicated hypotheses 'count' differently from single occurrences, leaves out both of these rules, while relevant (or relevance) logics merely leave out the latter rule, on the ground that B is clearly irrelevant to the conclusion.

The above are basic examples of structural rules. These rules are not contentious when applied in conventional propositional calculus. They occur naturally in proof theory, and were first noticed there (before receiving a name).

There are numerous ways to compose premises (and in the multiple-conclusion case, conclusions as well). One way is to collect them into a set. But since e.g. {a,a} = {a} we have contraction for free if premises are sets. We also have associativity and permutation (or commutativity) for free as well, among other properties.
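The effect of the choice of premise structure can be seen directly with standard Python containers; a small illustrative sketch:

from collections import Counter

# Premises as a set: duplicates collapse, so contraction holds for free.
assert {"A", "A"} == {"A"}

# Premises as a multiset: the two occurrences of A stay distinct, so
# contraction (from A, A |- C deduce A |- C) becomes a genuine rule choice.
assert Counter(["A", "A"]) != Counter(["A"])

# Premises as a sequence: order matters too, so exchange is also a rule.
assert ("A", "B") != ("B", "A")
print("set, multiset and sequence premise structures all behave differently")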
In substructural logics, typically premises are not composed into sets, but rather they are composed into more fine-grained structures, such as trees or multisets (sets that distinguish multiple occurrences of elements) or sequences of formulae. For example, in linear logic, since contraction fails, the premises must be composed in something at least as fine-grained as multisets.

Substructural logic is a relatively young field. The first conference on the topic was held in October 1990 in Tübingen, as "Logics with Restricted Structural Rules". During the conference Kosta Došen proposed the term "substructural logics", which is now in use.

diff --git a/wiki/wikipedia/488.txt b/wiki/wikipedia/488.txt deleted file mode 100644 index 50b59e6e2f0eae58b95565fc2c693a2ae68e4e70..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/488.txt +++ /dev/null @@ -1,49 +0,0 @@

Twelf is an implementation of the logical framework LF developed by Frank Pfenning and Carsten Schürmann at Carnegie Mellon University. It is used for logic programming and for the formalization of programming language theory.

At its simplest, a Twelf program (called a "signature") is a collection of declarations of type families (relations) and constants that inhabit those type families. For example, the following is the standard definition of the natural numbers, with z standing for zero and s for the successor operator.

nat : type.

z : nat.

s : nat -> nat.

Here nat is a type, and z and s are constant terms. As a dependently typed system, types can be indexed by terms, which allows the definition of more interesting type families. Here is a definition of addition:

plus : nat -> nat -> nat -> type.

plus_zero : {M:nat} plus M z M.

plus_succ : {M:nat} {N:nat} {P:nat}

plus M (s N) (s P)

<- plus M N P.

The type family plus is read as a relation between three natural numbers M, N and P, such that M + N = P. We then give the constants that define the relation: the constant plus_zero indicates that M + 0 = M. The quantifier {M:nat} can be read as "for all M of type nat".

The constant plus_succ defines the case for when the second argument is the successor of some other number N (see pattern matching). The result is the successor of P, where P is the sum of M and N. This recursive call is made via the subgoal plus M N P, introduced with <-. The arrow can be understood operationally as Prolog's :-, or as logical implication ("if M + N = P, then M + (s N) = (s P)"), or most faithfully to the type theory, as the type of the constant ("when given a term of type plus M N P, return a term of type plus M (s N) (s P)").

Twelf features type reconstruction and supports implicit parameters, so in practice, one usually does not need to explicitly write {M:nat} (etc.) above.

These simple examples do not display LF's higher-order features, nor any of its theorem checking capabilities. See the Twelf distribution for its included examples.

Twelf is used in several different ways.

Twelf signatures can be executed via a search procedure. Its core is more sophisticated than Prolog, since it is higher-order and dependently typed, but it is restricted to pure operators: there is no cut or other extralogical operators (such as ones for performing I/O) as are often found in Prolog implementations, which may make it less well-suited for practical logic programming applications. Some uses of Prolog's cut rule can be obtained by declaring that certain operators belong to deterministic type families, which avoids recalculation.
Also, like λProlog, Twelf generalizes Horn clauses to hereditary Harrop formulas, which allow for logically well-founded operational notions of fresh-name generation and scoped extension of the clause database.

Twelf is mainly used today as a system for formalizing mathematics, especially the metatheory of programming languages. As such, it is closely related to Coq and Isabelle/HOL/HOL Light. However, unlike those systems, Twelf proofs are typically developed by hand. Despite this, for the problem domains at which it excels, Twelf proofs are often shorter and easier to develop than in the automated, general-purpose systems.

Twelf's built-in notion of binding and substitution facilitates the encoding of programming languages and logics, most of which make use of binding and substitution, which can often be directly encoded through higher-order abstract syntax (HOAS), where the meta-language's binders represent the object-level binders. Thus standard theorems such as type-preserving substitution and alpha conversion come "for free".

Twelf has been used to formalize many different logics and programming languages (examples are included with the distribution). Among the larger projects are a proof of safety for Standard ML, a foundational typed assembly language system from CMU, and a foundational proof carrying code system from Princeton.

Twelf is written in Standard ML, and binaries are available for Linux and Windows. It remains under active development, mostly at Carnegie Mellon University.

diff --git a/wiki/wikipedia/489.txt b/wiki/wikipedia/489.txt deleted file mode 100644 index 74250627bdb71b2335f832b197a3c33d195a25c6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/489.txt +++ /dev/null @@ -1,97 +0,0 @@

In mathematics, Thurston's geometrization conjecture states that each of certain three-dimensional topological spaces has a unique geometric structure that can be associated with it. It is an analogue of the uniformization theorem for two-dimensional surfaces, which states that every simply connected Riemann surface can be given one of three geometries (Euclidean, spherical, or hyperbolic).

In three dimensions, it is not always possible to assign a single geometry to a whole topological space. Instead, the geometrization conjecture states that every closed 3-manifold can be decomposed in a canonical way into pieces that each have one of eight types of geometric structure. The conjecture was proposed by Thurston, and implies several other conjectures, such as the Poincaré conjecture and Thurston's elliptization conjecture.

Thurston's hyperbolization theorem implies that Haken manifolds satisfy the geometrization conjecture. Thurston announced a proof in the 1980s and since then several complete proofs have appeared in print.

Grigori Perelman sketched a proof of the full geometrization conjecture in 2003 using Ricci flow with surgery.

There are now several different manuscripts (see below) with details of the proof. The Poincaré conjecture and the spherical space form conjecture are corollaries of the geometrization conjecture, although there are shorter proofs of the former that do not lead to the geometrization conjecture.

A 3-manifold is called closed if it is compact and has no boundary.

Every closed 3-manifold has a prime decomposition: this means it is the connected sum of prime 3-manifolds (this decomposition is essentially unique except for a small problem in the case of non-orientable manifolds).
This reduces much of the study of 3-manifolds to the case of prime 3-manifolds: those that cannot be written as a non-trivial connected sum. - -Here is a statement of Thurston's conjecture: - -Every oriented prime closed 3-manifold can be cut along tori, so that the interior of each of the resulting manifolds has a geometric structure with finite volume. - -There are 8 possible geometric structures in 3 dimensions, described in the next section. There is a unique minimal way of cutting an irreducible oriented 3-manifold along tori into pieces that are Seifert manifolds or atoroidal called the JSJ decomposition, which is not quite the same as the decomposition in the geometrization conjecture, because some of the pieces in the JSJ decomposition might not have finite volume geometric structures. (For example, the mapping torus of an Anosov map of a torus has a finite volume solv structure, but its JSJ decomposition cuts it open along one torus to produce a product of a torus and a unit interval, and the interior of this has no finite volume geometric structure.) - -For non-oriented manifolds the easiest way to state a geometrization conjecture is to first take the oriented double cover. It is also possible to work directly with non-orientable manifolds, but this gives some extra complications: it may be necessary to cut along projective planes and Klein bottles as well as spheres and tori, and manifolds with a projective plane boundary component usually have no geometric structure. - -In 2 dimensions the analogous statement says that every surface (without boundary) has a geometric structure consisting of a metric with constant curvature; it is not necessary to cut the manifold up first. - -A model geometry is a simply connected smooth manifold X together with a transitive action of a Lie group G on X with compact stabilizers. - -A model geometry is called maximal if G is maximal among groups acting smoothly and transitively on X with compact stabilizers. Sometimes this condition is included in the definition of a model geometry. - -A geometric structure on a manifold M is a diffeomorphism from M to X/Γ for some model geometry X, where Γ is a discrete subgroup of G acting freely on X ; this is a special case of a complete (G,X)-structure. If a given manifold admits a geometric structure, then it admits one whose model is maximal. - -A 3-dimensional model geometry X is relevant to the geometrization conjecture if it is maximal and if there is at least one compact manifold with a geometric structure modelled on X. Thurston classified the 8 model geometries satisfying these conditions; they are listed below and are sometimes called Thurston geometries. (There are also uncountably many model geometries without compact quotients.) - -There is some connection with the Bianchi groups: the 3-dimensional Lie groups. Most Thurston geometries can be realized as a left invariant metric on a Bianchi group. However S2 × R cannot be, Euclidean space corresponds to two different Bianchi groups, and there are an uncountable number of solvable non-unimodular Bianchi groups, most of which give model geometries with no compact representatives. - -The point stabilizer is O(3, R), and the group G is the 6-dimensional Lie group O(4, R), with 2 components. The corresponding manifolds are exactly the closed 3-manifolds with finite fundamental group. Examples include the 3-sphere, the Poincaré homology sphere, Lens spaces. This geometry can be modeled as a left invariant metric on the Bianchi group of type IX. 
Manifolds with this geometry are all compact, orientable, and have the structure of a Seifert fiber space (often in several ways). The complete list of such manifolds is given in the article on spherical 3-manifolds. Under Ricci flow, manifolds with this geometry collapse to a point in finite time. - -The point stabilizer is O(3, R), and the group G is the 6-dimensional Lie group R3 × O(3, R), with 2 components. Examples are the 3-torus, and more generally the mapping torus of a finite-order automorphism of the 2-torus; see torus bundle. There are exactly 10 finite closed 3-manifolds with this geometry, 6 orientable and 4 non-orientable. This geometry can be modeled as a left invariant metric on the Bianchi groups of type I or VII0. Finite volume manifolds with this geometry are all compact, and have the structure of a Seifert fiber space (sometimes in two ways). The complete list of such manifolds is given in the article on Seifert fiber spaces. Under Ricci flow, manifolds with Euclidean geometry remain invariant. - -The point stabilizer is O(3, R), and the group G is the 6-dimensional Lie group O+(1, 3, R), with 2 components. There are enormous numbers of examples of these, and their classification is not completely understood. The example with smallest volume is the Weeks manifold. Other examples are given by the Seifert–Weber space, or "sufficiently complicated" Dehn surgeries on links, or most Haken manifolds. The geometrization conjecture implies that a closed 3-manifold is hyperbolic if and only if it is irreducible, atoroidal, and has infinite fundamental group. This geometry can be modeled as a left invariant metric on the Bianchi group of type V. Under Ricci flow, manifolds with hyperbolic geometry expand. - -The point stabilizer is O(2, R) × Z/2Z, and the group G is O(3, R) × R × Z/2Z, with 4 components. The four finite volume manifolds with this geometry are: S2 × S1, the mapping torus of the antipode map of S2, the connected sum of two copies of 3-dimensional projective space, and the product of S1 with two-dimensional projective space. The first two are mapping tori of the identity map and antipode map of the 2-sphere, and are the only examples of 3-manifolds that are prime but not irreducible. The third is the only example of a non-trivial connected sum with a geometric structure. This is the only model geometry that cannot be realized as a left invariant metric on a 3-dimensional Lie group. Finite volume manifolds with this geometry are all compact and have the structure of a Seifert fiber space (often in several ways). Under normalized Ricci flow manifolds with this geometry converge to a 1-dimensional manifold. - -The point stabilizer is O(2, R) × Z/2Z, and the group G is O+(1, 2, R) × R × Z/2Z, with 4 components. Examples include the product of a hyperbolic surface with a circle, or more generally the mapping torus of an isometry of a hyperbolic surface. Finite volume manifolds with this geometry have the structure of a Seifert fiber space if they are orientable. (If they are not orientable the natural fibration by circles is not necessarily a Seifert fibration: the problem is that some fibers may "reverse orientation"; in other words their neighborhoods look like fibered solid Klein bottles rather than solid tori.) The classification of such (oriented) manifolds is given in the article on Seifert fiber spaces. This geometry can be modeled as a left invariant metric on the Bianchi group of type III. 
Under normalized Ricci flow manifolds with this geometry converge to a 2-dimensional manifold. - -The universal cover of SL(2, R) is denoted ${\widetilde{\rm{SL}}}(2, \mathbf{R})$. It fibers over H2. The group G has 2 components. Its identity component has the structure $(\mathbf{R}\times\widetilde{\rm{SL}}_2 (\mathbf{R}))/\mathbf{Z}$. The point stabilizer is O(2,R). - -Examples of these manifolds include: the manifold of unit vectors of the tangent bundle of a hyperbolic surface, and more generally the Brieskorn homology spheres (excepting the 3-sphere and the Poincaré dodecahedral space). This geometry can be modeled as a left invariant metric on the Bianchi group of type VIII. Finite volume manifolds with this geometry are orientable and have the structure of a Seifert fiber space. The classification of such manifolds is given in the article on Seifert fiber spaces. Under normalized Ricci flow manifolds with this geometry converge to a 2-dimensional manifold. - -This fibers over E2, and is the geometry of the Heisenberg group. The point stabilizer is O(2, R). The group G has 2 components, and is a semidirect product of the 3-dimensional Heisenberg group by the group O(2, R) of isometries of a circle. Compact manifolds with this geometry include the mapping torus of a Dehn twist of a 2-torus, or the quotient of the Heisenberg group by the "integral Heisenberg group". This geometry can be modeled as a left invariant metric on the Bianchi group of type II. Finite volume manifolds with this geometry are compact and orientable and have the structure of a Seifert fiber space. The classification of such manifolds is given in the article on Seifert fiber spaces. Under normalized Ricci flow, compact manifolds with this geometry converge to R2 with the flat metric. - -This geometry (also called Solv geometry) fibers over the line with fiber the plane, and is the geometry of the identity component of the group G. The point stabilizer is the dihedral group of order 8. The group G has 8 components, and is the group of maps from 2-dimensional Minkowski space to itself that are either isometries or multiply the metric by −1. The identity component has a normal subgroup R2 with quotient R, where R acts on R2 with 2 (real) eigenspaces, with distinct real eigenvalues of product 1. This is the Bianchi group of type VI0 and the geometry can be modeled as a left invariant metric on this group. All finite volume manifolds with solv geometry are compact. The compact manifolds with solv geometry are either the mapping torus of an Anosov map of the 2-torus (such a map is an automorphism of the 2-torus given by an invertible 2 by 2 matrix whose eigenvalues are real and distinct, such as $\left( \begin{array}{cc} 2 & 1 \\ 1 & 1 \end{array} \right)$), or quotients of these by groups of order at most 8. The eigenvalues of the automorphism of the torus generate an order of a real quadratic field, and the solv manifolds could in principle be classified in terms of the units and ideal classes of this order, though the details do not seem to be written down anywhere. - -Under normalized Ricci flow compact manifolds with this geometry converge (rather slowly) to R1. - -A closed 3-manifold has a geometric structure of at most one of the 8 types above, but finite volume non-compact 3-manifolds can occasionally have more than one type of geometric structure.
(Nevertheless, a manifold can have many different geometric structures of the same type; for example, a surface of genus at least 2 has a continuum of different hyperbolic metrics.) More precisely, if M is a manifold with a finite volume geometric structure, then the type of geometric structure is almost determined as follows, in terms of the fundamental group π1(M): - -*If π1(M) is finite then the geometric structure on M is spherical, and M is compact. - -*If π1(M) is virtually cyclic but not finite then the geometric structure on M is S2×R, and M is compact. - -*If π1(M) is virtually abelian but not virtually cyclic then the geometric structure on M is Euclidean, and M is compact. - -*If π1(M) is virtually nilpotent but not virtually abelian then the geometric structure on M is nil geometry, and M is compact. - -*If π1(M) is virtually solvable but not virtually nilpotent then the geometric structure on M is solv geometry, and M is compact. - -*If π1(M) has an infinite normal cyclic subgroup but is not virtually solvable then the geometric structure on M is either H2×R or the universal cover of SL(2, R). The manifold M may be either compact or non-compact. If it is compact, then the 2 geometries can be distinguished by whether or not π1(M) has a finite index subgroup that splits as a semidirect product of the normal cyclic subgroup and something else. If the manifold is non-compact, then the fundamental group cannot distinguish the two geometries, and there are examples (such as the complement of a trefoil knot) where a manifold may have a finite volume geometric structure of either type. - -*If π1(M) has no infinite normal cyclic subgroup and is not virtually solvable then the geometric structure on M is hyperbolic, and M may be either compact or non-compact. - -Infinite volume manifolds can have many different types of geometric structure: for example, R3 can have 6 of the different geometric structures listed above, as 6 of the 8 model geometries are homeomorphic to it. Moreover if the volume does not have to be finite there are an infinite number of new geometric structures with no compact models; for example, the geometry of almost any non-unimodular 3-dimensional Lie group. - -There can be more than one way to decompose a closed 3-manifold into pieces with geometric structures. For example: - -*Taking connected sums with several copies of S3 does not change a manifold. - -*The connected sum of two projective 3-spaces has a S2×R geometry, and is also the connected sum of two pieces with S3 geometry. - -*The product of a surface of negative curvature and a circle has a geometric structure, but can also be cut along tori to produce smaller pieces that also have geometric structures. There are many similar examples for Seifert fiber spaces. - -It is possible to choose a "canonical" decomposition into pieces with geometric structure, for example by first cutting the manifold into prime pieces in a minimal way, then cutting these up using the smallest possible number of tori. However this minimal decomposition is not necessarily the one produced by Ricci flow; in fact, the Ricci flow can cut up a manifold into geometric pieces in many inequivalent ways, depending on the choice of initial metric. - -The Fields Medal was awarded to Thurston in 1982 partially for his proof of the geometrization conjecture for Haken manifolds. - -The case of 3-manifolds that should be spherical has been slower, but provided the spark needed for Richard S. Hamilton to develop his Ricci flow. 
In 1982, Hamilton showed that given a closed 3-manifold with a metric of positive Ricci curvature, the Ricci flow would collapse the manifold to a point in finite time, which proves the geometrization conjecture for this case as the metric becomes "almost round" just before the collapse. He later developed a program to prove the geometrization conjecture by Ricci flow with surgery. The idea is that the Ricci flow will in general produce singularities, but one may be able to continue the Ricci flow past the singularity by using surgery to change the topology of the manifold. Roughly speaking, the Ricci flow contracts positive curvature regions and expands negative curvature regions, so it should kill off the pieces of the manifold with the "positive curvature" geometries S3 and S2 × R, while what is left at large times should have a thick–thin decomposition into a "thick" piece with hyperbolic geometry and a "thin" graph manifold. - -In 2003, Grigori Perelman sketched a proof of the geometrization conjecture by showing that the Ricci flow can indeed be continued past the singularities, and has the behavior described above. The main difficulty in verifying Perelman's proof of the geometrization conjecture was a critical use of his Theorem 7.4 in the preprint 'Ricci Flow with surgery on three-manifolds'. This theorem was stated by Perelman without proof. There are now several different proofs of Perelman's Theorem 7.4, or variants of it which are sufficient to prove geometrization. There is the paper of Shioya and Yamaguchi that uses Perelman's stability theorem and a fibration theorem for Alexandrov spaces. This method, with full details leading to the proof of geometrization, can be found in the exposition by Bruce Kleiner and John Lott. - -A second route to the last part of Perelman's proof of geometrization is the method of Bessières et al., which uses Thurston's hyperbolization theorem for Haken manifolds and Gromov's norm for 3-manifolds. A book by the same authors with complete details of their version of the proof has been published by the European Mathematical Society. - -Also containing proofs of Perelman's Theorem 7.4, there is a paper of Morgan and Tian, another paper of Kleiner and Lott, and a paper by Jianguo Cao and Jian Ge. diff --git a/wiki/wikipedia/49.txt b/wiki/wikipedia/49.txt deleted file mode 100644 index d9aa6f00761bcbb2ea639d10a670e2ddf94236bf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/49.txt +++ /dev/null @@ -1,309 +0,0 @@ -The expression problem is a challenge problem in programming languages that concerns the extensibility and modularity of statically typed data abstractions. - -The goal is to define a data abstraction that is extensible both in its representations and its behaviors, where one can add new representations and new behaviors to the data abstraction, without recompiling existing code, and while retaining static type safety (e.g., no casts). - -It exposed deficiencies in programming paradigms and programming languages, and it is still not definitively solved, although there are many proposed solutions. - -Philip Wadler formulated the challenge and named it "The Expression Problem" in response to a discussion with Rice University's Programming Languages Team (PLT). - -He also cited three sources that defined the context for his challenge: - -The problem was first observed by John Reynolds in 1975. 
Reynolds discussed two forms of Data Abstraction: User-defined Types, which are now known as Abstract Data Types (ADTs) (not to be confused with Algebraic Data Types), and Procedural Data Structures, which are now understood as a primitive form of Objects with only one method. He argued that they are complementary, in that User-defined Types could be extended with new behaviors, and Procedural Data Structures could be extended with new representations. He also discussed related work going back to 1967. However, Reynolds's conclusions based on this early analysis turned out to be completely wrong: he wrote that adding a second method to an object "is more a tour de force than a specimen of clear programming," which completely missed the Object-Oriented paradigm and its great success. He also believed the two forms of data abstraction "are inherently distinct and complementary." - -Fifteen years later in 1990, William Cook applied Reynolds's seminal idea in the context of Objects and Abstract Data Types, which had both grown extensively. Cook identified the matrix of representations and behaviors that are implicit in a Data Abstraction, and discussed how ADTs are based on the behavioral axis, while Objects are based on the representation axis. He provided an extensive discussion of work on ADTs and Objects relevant to the problem. He also reviewed implementations in both styles, discussed extensibility in both directions, and identified the importance of static typing. - -Most importantly, he discussed situations in which there was more flexibility than Reynolds considered, including internalization and optimization of methods. - -At ECOOP '98, Shriram Krishnamurthi et al. presented a design pattern solution to the problem of simultaneously extending an expression-oriented programming language and its tool-set. They dubbed it the "expressivity problem" because they thought programming language designers could use the problem to demonstrate the expressive power of their creations. For PLT, the problem had shown up in the construction of DrScheme, now DrRacket, and they solved it via a rediscovery of mixins. To avoid using a programming language problem in a paper about programming languages, Krishnamurthi et al. used an old geometry programming problem to explain their pattern-oriented solution. In conversations with Felleisen and Krishnamurthi after the ECOOP presentation, Wadler understood the PL-centric nature of the problem and he pointed out that Krishnamurthi's solution used a cast to circumvent Java's type system. The discussion continued on the types mailing list, where Corky Cartwright (Rice) and Kim Bruce (Williams) showed how type systems for OO languages might eliminate this cast. In response Wadler formulated his essay and stated the challenge, "whether a language can solve the expression problem is a salient indicator of its capacity for expression." The label "expression problem" puns on expression = "how much can your language express" and expression = "the terms you are trying to represent are language expressions". - -Others co-discovered variants of the expression problem around the same time as Rice University's PLT, in particular Thomas Kühne in his dissertation, and Smaragdakis and Batory in a parallel ECOOP 98 article. - -Some follow-up work used the expression problem to showcase the power of programming language designs.
- -The expression problem is also a fundamental problem in multi-dimensional Software Product Line design and in particular as an application or special case of FOSD Program Cubes. - -There are various solutions to the expression problem. Each solution varies in the amount of code a user must write to implement it, and in the language features it requires. - -* Multiple dispatch - -* Open classes - -* Coproducts of functors - -* Type classes - -* Tagless-final / Object algebras - -* Polymorphic Variants - -We can imagine we do not have the source code for the following library, written in C#, which we wish to extend:
-public interface IEvalExp
-{
-    int Eval();
-}
-public class Lit: IEvalExp
-{
-    public Lit(int n)
-    {
-        N = n;
-    }
-    public int N { get; }
-    public int Eval()
-    {
-        return N;
-    }
-}
-public class Add: IEvalExp
-{
-    public Add(IEvalExp left, IEvalExp right)
-    {
-        Left = left;
-        Right = right;
-    }
-    public IEvalExp Left { get; }
-    public IEvalExp Right { get; }
-    public int Eval()
-    {
-        return Left.Eval() + Right.Eval();
-    }
-}
-public static class ExampleOne
-{
-    public static IEvalExp AddOneAndTwo() => new Add(new Lit(1), new Lit(2));
-    public static int EvaluateTheSumOfOneAndTwo() => AddOneAndTwo().Eval();
-}
Using this library we can express the arithmetic expression 1 + 2 as we did in AddOneAndTwo() and can evaluate the expression by calling EvaluateTheSumOfOneAndTwo(). Now imagine that we wish to extend this library. Adding a new type is easy because we are working with an Object-oriented programming language. For example, we might create the following class:
-public class Mult: IEvalExp
-{
-    public Mult(IEvalExp left, IEvalExp right)
-    {
-        Left = left;
-        Right = right;
-    }
-    public IEvalExp Left { get; }
-    public IEvalExp Right { get; }
-    public int Eval()
-    {
-        return Left.Eval() * Right.Eval();
-    }
-}
However, if we wish to add a new function over the type (a new method in C# terminology) we have to change the IEvalExp interface and then modify all the classes that implement it. Another possibility is to create a new interface that extends IEvalExp and then create sub-types for the Lit, Add and Mult classes, but the expression returned in AddOneAndTwo() has already been compiled, so we will not be able to use the new function over the old type. The problem is reversed in functional programming languages like F#, where it is easy to add a function over a given type, but extending or adding types is difficult. - -Let us redesign the original library with extensibility in mind using the ideas from the paper Extensibility for the Masses.
-public interface ExpAlgebra<T>
-{
-    T Lit(int n);
-    T Add(T left, T right);
-}
-public class ExpFactory: ExpAlgebra<IEvalExp>
-{
-    public IEvalExp Lit(int n)
-    {
-        return new Lit(n);
-    }
-    public IEvalExp Add(IEvalExp left, IEvalExp right)
-    {
-        return new Add(left, right);
-    }
-}
-public static class ExampleTwo
-{
-    public static T AddOneToTwo<T>(ExpAlgebra<T> ae) => ae.Add(ae.Lit(1), ae.Lit(2));
-}
We use the same implementation as in the first code example but now add a new interface containing the functions over the type as well as a factory for the algebra. Notice that we now generate the expression in ExampleTwo using the ExpAlgebra<T> interface instead of directly from the types.
We can now add a function by extending the interface; we will add functionality to print the expression:
-public interface IPrintExp: IEvalExp
-{
-    string Print();
-}
-public class PrintableLit: Lit, IPrintExp
-{
-    public PrintableLit(int n): base(n)
-    {
-        N = n;
-    }
-    public int N { get; }
-    public string Print()
-    {
-        return N.ToString();
-    }
-}
-public class PrintableAdd: Add, IPrintExp
-{
-    public PrintableAdd(IPrintExp left, IPrintExp right): base(left, right)
-    {
-        Left = left;
-        Right = right;
-    }
-    public new IPrintExp Left { get; }
-    public new IPrintExp Right { get; }
-    public string Print()
-    {
-        return Left.Print() + " + " + Right.Print();
-    }
-}
-public class PrintFactory: ExpFactory, ExpAlgebra<IPrintExp>
-{
-    public IPrintExp Add(IPrintExp left, IPrintExp right)
-    {
-        return new PrintableAdd(left, right);
-    }
-    public new IPrintExp Lit(int n)
-    {
-        return new PrintableLit(n);
-    }
-}
-public static class ExampleThree
-{
-    public static int Evaluate() => ExampleTwo.AddOneToTwo(new PrintFactory()).Eval();
-    public static string Print() => ExampleTwo.AddOneToTwo(new PrintFactory()).Print();
-}
Notice that in ExampleThree we are printing an expression that was already compiled in ExampleTwo; we did not need to modify any existing code. Notice also that this is still strongly typed; we do not need reflection or casting. If we were to replace PrintFactory with ExpFactory in ExampleThree we would get a compilation error, since the Print method does not exist in that context. diff --git a/wiki/wikipedia/490.txt deleted file mode 100644 index 66f769c71ace8bb25a5228c5c4e8955b627bdeaa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/490.txt +++ /dev/null @@ -1,79 +0,0 @@ -In mathematics, Pascal's rule (or Pascal's formula) is a combinatorial identity about binomial coefficients. It states that for positive natural numbers n and k, -$$ -{n-1\choose k} + {n-1\choose k-1} = {n\choose k}, -$$ - -where $\tbinom{n}{k}$ is a binomial coefficient, one interpretation of which is the coefficient of the $x^k$ term in the expansion of $(1 + x)^n$. There is no restriction on the relative sizes of n and k, since if n < k the value of the binomial coefficient is zero and the identity remains valid. - -Pascal's rule can also be viewed as a statement that the formula -$$ -\frac{(x+y)!}{x!y!} = {x+y \choose x} = {x+y \choose y} -$$ - -solves the linear two-dimensional difference equation -$$ - N_{x,y} = N_{x-1,y} + N_{x,y-1}, \quad N_{0,y} = N_{x,0} = 1 -$$ - -over the natural numbers. Thus, Pascal's rule is also a statement about a formula for the numbers appearing in Pascal's triangle. - -Pascal's rule can also be generalized to apply to multinomial coefficients. - -Pascal's rule has an intuitive combinatorial meaning that is clearly expressed in this counting proof. - -Proof. Recall that $\tbinom{n}{k}$ equals the number of subsets with k elements from a set with n elements. Suppose one particular element is uniquely labeled X in a set with n elements. - -To construct a subset of k elements containing X, include X and choose k − 1 elements from the remaining n − 1 elements in the set. There are $\tbinom{n-1}{k-1}$ such subsets. - -To construct a subset of k elements not containing X, choose k elements from the remaining n − 1 elements in the set. There are $\tbinom{n-1}{k}$ such subsets. - -Every subset of k elements either contains X or not.
The total number of subsets with k elements in a set of n elements is the sum of the number of subsets containing X and the number of subsets that do not contain X, $\tbinom{n-1}{k-1} + \tbinom{n-1}{k}$. - -This equals $\tbinom{n}{k}$; therefore, $\tbinom{n}{k} = \tbinom{n-1}{k-1} + \tbinom{n-1}{k}$. - -Alternatively, the algebraic derivation of the binomial case follows. - -\begin{align} - -{ n-1 \choose k } + { n-1 \choose k-1} & = \frac{(n-1)!}{k! (n - 1 - k)!} + \frac{(n-1)!}{(k - 1)!(n - k)!} \\ - -& = (n-1)! \left[ \frac{n - k}{k!(n - k)!} + \frac{k}{k!(n - k)!}\right] \\ - -& = (n-1)! \frac{n}{k!(n - k)!} \\ - -& = \frac{n!}{k!(n - k)!} \\ - -& = \binom{n}{k}. - -\end{align} - - - -Pascal's rule can be generalized to multinomial coefficients. For any integer p such that $p \ge 2$, $k_1, k_2, k_3, \dots, k_p \in \mathbb{N}^{+}\!,$ and $n = k_1 + k_2 + k_3 + \cdots + k_p \ge 1$, -$$ -{n-1\choose k_1-1,k_2,k_3, \dots, k_p}+{n-1 \choose k_1,k_2-1,k_3,\dots, k_p}+\cdots+{n-1\choose k_1,k_2,k_3,\dots,k_p-1} = {n\choose k_1, k_2, k_3, \dots, k_p} -$$ - -where ${n\choose k_1, k_2, k_3, \dots , k_p}$ is the coefficient of the $x_1^{k_1}x_2^{k_2}\dots x_p^{k_p}$ term in the expansion of $(x_1 + x_2 + \dots + x_p)^{n}$. - -The algebraic derivation for this general case is as follows. Let p be an integer such that $p \ge 2$, $k_1, k_2, k_3, \dots, k_p \in \mathbb{N}^{+}\!,$ and $n = k_1 + k_2 + k_3 + \cdots + k_p \ge 1$. Then - - - -\begin{align} - -& {} \quad {n-1 \choose k_1-1,k_2,k_3, \dots, k_p}+{n-1 \choose k_1,k_2-1,k_3,\dots, k_p} + \cdots + {n-1 \choose k_1,k_2,k_3,\dots,k_p-1} \\ - -& = \frac{(n-1)!}{(k_1-1)!k_2!k_3! \cdots k_p!} + \frac{(n-1)!}{k_1!(k_2-1)!k_3!\cdots k_p!} + \cdots + \frac{(n-1)!}{k_1!k_2!k_3! \cdots (k_p-1)!} \\ - -& = \frac{k_1(n-1)!}{k_1!k_2!k_3! \cdots k_p!} + \frac{k_2(n-1)!}{k_1!k_2!k_3! \cdots k_p!} + \cdots + \frac{k_p(n-1)!}{k_1!k_2!k_3! \cdots k_p!} = \frac{(k_1+k_2+\cdots+k_p) (n-1)!}{k_1!k_2!k_3!\cdots k_p!} \\ - -& = \frac{n(n-1)!}{k_1!k_2!k_3! \cdots k_p!} = \frac{n!}{k_1!k_2!k_3! \cdots k_p!} - -= {n \choose k_1, k_2, k_3, \dots, k_p}. - -\end{align} - - diff --git a/wiki/wikipedia/491.txt b/wiki/wikipedia/491.txt deleted file mode 100644 index d880f8ae9c7a837921748974c4ef2a3a2eb12701..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/491.txt +++ /dev/null @@ -1,23 +0,0 @@ -In computability theory, computational complexity theory and proof theory, the slow-growing hierarchy is an ordinal-indexed family of slowly increasing functions gα: N → N (where N is the set of natural numbers, {0, 1, ...}). It contrasts with the fast-growing hierarchy. - -Let μ be a large countable ordinal such that a fundamental sequence is assigned to every limit ordinal less than μ. The slow-growing hierarchy of functions gα: N → N, for α < μ, is then defined as follows: - -*$ g_0(n) = 0 $ - -*$ g_{k+1}(n) = g_k(n) + 1 $ - -*$ g_\alpha(n) = g_{\alpha[n]}(n)$ for limit ordinal α. - -Here α[n] denotes the nth element of the fundamental sequence assigned to the limit ordinal α. - -The article on the Fast-growing hierarchy describes a standardized choice for fundamental sequence for all α < ε0. - -The slow-growing hierarchy grows much more slowly than the fast-growing hierarchy. Even gε0 is only equivalent to f3 and gα only attains the growth of fε0 (the first function that Peano arithmetic cannot prove total in the hierarchy) when α is the Bachmann–Howard ordinal. - -However, Girard proved that the slow-growing hierarchy eventually catches up with the fast-growing one. 
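As a concrete illustration of the recursion (this sketch is not part of the original article), the slow-growing hierarchy can be evaluated directly for ordinals below $\omega^\omega$. The encoding of an ordinal as a Python dictionary mapping exponents to coefficients is an assumption of the sketch, and limits use the standard fundamental sequence $(\beta + \omega^{e})[n] = \beta + \omega^{e-1}\cdot n$:

```python
def fund(alpha, n):
    """n-th element of the standard fundamental sequence of a limit ordinal
    alpha = beta + omega^e with e >= 1: alpha[n] = beta + omega^(e-1) * n."""
    e = min(k for k, c in alpha.items() if c > 0)  # least exponent present
    beta = dict(alpha)
    beta[e] -= 1                                   # remove one copy of omega^e
    beta[e - 1] = beta.get(e - 1, 0) + n           # add omega^(e-1) * n
    return {k: c for k, c in beta.items() if c > 0}

def g(alpha, n):
    """Slow-growing hierarchy g_alpha(n) for alpha < omega^omega."""
    if not alpha:                                  # g_0(n) = 0
        return 0
    if alpha.get(0, 0) > 0:                        # successor: g_{b+1}(n) = g_b(n) + 1
        beta = dict(alpha)
        beta[0] -= 1
        return g({k: c for k, c in beta.items() if c > 0}, n) + 1
    return g(fund(alpha, n), n)                    # limit: g_alpha(n) = g_{alpha[n]}(n)

# With this choice of fundamental sequences g_omega(n) = n and
# g_{omega^2}(n) = n^2, illustrating how slowly the hierarchy grows.
assert g({1: 1}, 5) == 5 and g({2: 1}, 5) == 25
```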
However, the level at which the two hierarchies first match up depends on the assignment of fundamental sequences; for Buchholz-style tree ordinals it can be shown that the first match-up already occurs at $\omega^2$. - -The slow-growing hierarchy depends extremely sensitively on the choice of the underlying fundamental sequences. - -Cichon provided an interesting connection between the slow-growing hierarchy and derivation length for term rewriting. diff --git a/wiki/wikipedia/492.txt deleted file mode 100644 index 9c17529808b90bf1e8277d282f29109debc94503..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/492.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, the Trombi–Varadarajan theorem, introduced by Trombi and Varadarajan, gives an isomorphism between a certain space of spherical functions on a semisimple Lie group, and a certain space of holomorphic functions defined on a tubular neighborhood of the dual of a Cartan subalgebra. diff --git a/wiki/wikipedia/493.txt deleted file mode 100644 index c62d59896d75e4c55b24622350cbfc4ba666b234..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/493.txt +++ /dev/null @@ -1,504 +0,0 @@ -Euler's constant (sometimes also called the Euler–Mascheroni constant) is a mathematical constant usually denoted by the lowercase Greek letter gamma (γ). - -It is defined as the limiting difference between the harmonic series and the natural logarithm, denoted here by $\log:$ - -\begin{align} - -\gamma &= \lim_{n\to\infty}\left(-\log n + \sum_{k=1}^n \frac1{k}\right)\\[5px] - -&=\int_1^\infty\left(-\frac1x+\frac1{\lfloor x\rfloor}\right)dx. - -\end{align} - -Here, $\lfloor x\rfloor$ represents the floor function. - -The numerical value of Euler's constant, to 50 decimal places, is: - -0.57721566490153286060651209008240243104215933593992... - -The constant first appeared in a 1734 paper by the Swiss mathematician Leonhard Euler, titled De Progressionibus harmonicis observationes (Eneström Index 43). Euler used the notations C and O for the constant. In 1790, Italian mathematician Lorenzo Mascheroni used the notations A and a for the constant. The notation γ appears nowhere in the writings of either Euler or Mascheroni, and was chosen at a later time perhaps because of the constant's connection to the gamma function. For example, the German mathematician Carl Anton Bretschneider used the notation γ in 1835 and Augustus De Morgan used it in a textbook published in parts from 1836 to 1842. - -Euler's constant appears, among other places, in the following (where '*' means that this entry contains an explicit equation): - -* Expressions involving the exponential integral* - -* The Laplace transform* of the natural logarithm - -* The first term of the Laurent series expansion for the Riemann zeta function*, where it is the first of the Stieltjes constants* - -* Calculations of the digamma function - -* A product formula for the gamma function - -* The asymptotic expansion of the gamma function for small arguments.
- -* An inequality for Euler's totient function - -* The growth rate of the divisor function - -* In dimensional regularization of Feynman diagrams in quantum field theory - -* The calculation of the Meissel–Mertens constant - -* The third of Mertens' theorems* - -* Solution of the second kind to Bessel's equation - -* In the regularization/renormalization of the harmonic series as a finite value - -* The mean of the Gumbel distribution - -* The information entropy of the Weibull and Lévy distributions, and, implicitly, of the chi-squared distribution for one or two degrees of freedom. - -* The answer to the coupon collector's problem* - -* In some formulations of Zipf's law - -* A definition of the cosine integral* - -* Lower bounds to a prime gap - -* An upper bound on Shannon entropy in quantum information theory - -* Fisher–Orr model for genetics of adaptation in evolutionary biology - -The number γ has not been proved algebraic or transcendental. In fact, it is not even known whether γ is irrational. Using a continued fraction analysis, Papanikolaou showed in 1997 that if γ is rational, its denominator must be greater than $10^{244663}$. The ubiquity of γ revealed by the large number of equations below makes the irrationality of γ a major open question in mathematics. - -However, some progress was made. Kurt Mahler showed in 1968 that the number $\frac{\pi}{2}\frac{Y_0(2)}{J_0(2)}-\gamma$ is transcendental (here, $J_\alpha(x)$ and $Y_\alpha(x)$ are Bessel functions). In 2009 Alexander Aptekarev proved that at least one of Euler's constant γ and the Euler–Gompertz constant δ is irrational. This result was improved in 2012 by Tanguy Rivoal, who proved that at least one of them is transcendental. - -In 2010 M. Ram Murty and N. Saradha considered an infinite list of numbers containing γ/4 and showed that all but at most one of them are transcendental. - -In 2013 M. Ram Murty and A. Zaytseva again considered an infinite list of numbers containing γ and showed that all but at most one are transcendental. - -γ is related to the digamma function Ψ, and hence the derivative of the gamma function Γ, when both functions are evaluated at 1. Thus: -$$ --\gamma = \Gamma'(1) = \Psi(1). -$$ - -This is equal to the limits: -$$ -\begin{align}-\gamma &= \lim_{z\to 0}\left(\Gamma(z) - \frac1{z}\right) \\&= \lim_{z\to 0}\left(\Psi(z) + \frac1{z}\right).\end{align} -$$ - -Further limit results are: - -\begin{align} \lim_{z\to 0}\frac1{z}\left(\frac1{\Gamma(1+z)} - \frac1{\Gamma(1-z)}\right) &= 2\gamma \\ - -\lim_{z\to 0}\frac1{z}\left(\frac1{\Psi(1-z)} - \frac1{\Psi(1+z)}\right) &= \frac{\pi^2}{3\gamma^2}. \end{align} - -A limit related to the beta function (expressed in terms of gamma functions) is - -\begin{align} \gamma &= \lim_{n\to\infty}\left(\frac{ \Gamma\left(\frac1{n}\right) \Gamma(n+1) n^{1+\frac1{n}}}{\Gamma\left(2+n+\frac1{n}\right)} - \frac{n^2}{n+1}\right) \\ - -&= \lim\limits_{m\to\infty}\sum_{k=1}^m{m \choose k}\frac{(-1)^k}{k}\log\big(\Gamma(k+1)\big).
\end{align} - -γ can also be expressed as an infinite sum whose terms involve the Riemann zeta function evaluated at positive integers: - -\begin{align}\gamma &= \sum_{m=2}^{\infty} (-1)^m\frac{\zeta(m)}{m} \\ - -&= \log\frac4{\pi} + \sum_{m=2}^{\infty} (-1)^m\frac{\zeta(m)}{2^{m-1}m}.\end{align} - -Other series related to the zeta function include: - -\begin{align} \gamma &= \tfrac3{2}- \log 2 - \sum_{m=2}^\infty (-1)^m\frac{m-1}{m}\big(\zeta(m)-1\big) \\ - -&= \lim_{n\to\infty}\left(\frac{2n-1}{2n} - \log n + \sum_{k=2}^n \left(\frac1{k} - \frac{\zeta(1-k)}{n^k}\right)\right) \\ - -&= \lim_{n\to\infty}\left(\frac{2^n}{e^{2^n}} \sum_{m=0}^\infty \frac{2^{mn}}{(m+1)!} \sum_{t=0}^m \frac1{t+1} - n \log 2+ O \left (\frac1{2^{n} e^{2^n}}\right)\right).\end{align} - -The error term in the last equation is a rapidly decreasing function of n. As a result, the formula is well-suited for efficient computation of the constant to high precision. - -Other interesting limits equaling Euler's constant are the antisymmetric limit: -$$ -\begin{align} \gamma &= \lim_{s\to 1^+}\sum_{n=1}^\infty \left(\frac1{n^s}-\frac1{s^n}\right) \\&= \lim_{s\to 1}\left(\zeta(s) - \frac{1}{s-1}\right) \\&= \lim_{s\to 0}\frac{\zeta(1+s)+\zeta(1-s)}{2} \end{align} -$$ - -and the following formula, established in 1898 by de la Vallée-Poussin: -$$ -\gamma = \lim_{n\to\infty}\frac1{n} \sum_{k=1}^n \left(\left\lceil \frac{n}{k} \right\rceil - \frac{n}{k}\right) -$$ - -where $\lceil \cdot \rceil$ are ceiling brackets. - -This formula indicates that when taking any positive integer n and dividing it by each positive integer k less than n, the average fraction by which the quotient n/k falls short of the next integer tends to $\gamma$ (rather than 0.5) as n tends to infinity. - -Closely related to this is the rational zeta series expression. By taking separately the first few terms of the series above, one obtains an estimate for the classical series limit: -$$ -\gamma = \sum_{k=1}^n \frac1{k} - \log n -\sum_{m=2}^\infty \frac{\zeta(m,n+1)}{m}, -$$ - -where ζ(s,k) is the Hurwitz zeta function. The sum in this equation involves the harmonic numbers, Hn. Expanding some of the terms in the Hurwitz zeta function gives: -$$ -H_n = \log(n) + \gamma + \frac1{2n} - \frac1{12n^2} + \frac1{120n^4} - \varepsilon, -$$ - -where 0 < ε < $1/(252n^6)$. - -γ can also be expressed as follows where A is the Glaisher–Kinkelin constant: -$$ -\gamma =12\log(A)-\log(2\pi)+\frac{6}{\pi^2}\zeta'(2) -$$ - -γ can also be expressed as follows, which can be proven by expressing the zeta function as a Laurent series: -$$ -\gamma=\lim_{n\to\infty}\biggl(-n+\zeta\Bigl(\frac{n+1}{n}\Bigr)\biggr) -$$ - -γ equals the value of a number of definite integrals: - -\begin{align} - -\gamma &= - \int_0^\infty e^{-x} \log x dx \\ - -&= -\int_0^1 \log\left(\log\frac 1 x \right) dx \\ - -&= \int_0^\infty \left(\frac1{e^x-1}-\frac1{x\cdot e^x} \right)dx \\ - -&= \int_0^1\frac{1-e^{-x}}{x} dx -\int_1^\infty \frac{e^{-x}}{x} dx\\ - -&= \int_0^1\left(\frac1{\log x} + \frac1{1-x}\right)dx\\ - -&= \int_0^\infty \left(\frac1{1+x^k}-e^{-x}\right)\frac{dx}{x},\quad k>0\\ - -&= 2\int_0^\infty \frac{e^{-x^2}-e^{-x}}{x} dx ,\\ - -&= \int_0^1 H_x dx, \end{align} - -where Hx is the fractional harmonic number.
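As a quick numerical illustration (not from the original article), the truncated expansion of $H_n$ just given can be solved for γ; in this Python sketch the cutoff n = 1000 is an arbitrary choice:

```python
import math

def euler_gamma(n=1000):
    """Approximate Euler's constant by truncating
    H_n = log n + gamma + 1/(2n) - 1/(12 n^2) + 1/(120 n^4) - eps,
    where 0 < eps < 1/(252 n^6)."""
    H = sum(1.0 / k for k in range(1, n + 1))
    return H - math.log(n) - 1 / (2 * n) + 1 / (12 * n**2) - 1 / (120 * n**4)

print(euler_gamma())  # 0.5772156649..., the truncation error is negligible here
```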
- -The third formula in the integral list can be proved in the following way: - -\begin{align} - -&\int_0^{\infty} \left(\frac{1}{e^x - 1} - \frac{1}{x e^x} \right) dx - -= \int_0^{\infty} \frac{e^{-x} + x - 1}{x[e^x -1]} dx - -= \int_0^{\infty} \frac{1}{x[e^x - 1]} \sum_{m = 1}^{\infty} \frac{(-1)^{m+1}x^{m+1}}{(m+1)!} dx \\[2pt] - -&= \int_0^{\infty} \sum_{m = 1}^{\infty} \frac{(-1)^{m+1}x^m}{(m+1)![e^x -1]} dx - -= \sum_{m = 1}^{\infty} \int_0^{\infty} \frac{(-1)^{m+1}x^m}{(m+1)![e^x -1]} dx - -= \sum_{m = 1}^{\infty} \frac{(-1)^{m+1}}{(m+1)!} \int_0^{\infty} \frac{x^m}{e^x - 1} dx \\[2pt] - -&= \sum_{m = 1}^{\infty} \frac{(-1)^{m+1}}{(m+1)!} m!\zeta(m+1) - -= \sum_{m = 1}^{\infty} \frac{(-1)^{m+1}}{m+1}\zeta(m+1) - -= \sum_{m = 1}^{\infty} \frac{(-1)^{m+1}}{m+1} \sum_{n = 1}^{\infty}\frac{1}{n^{m+1}} - -= \sum_{m = 1}^{\infty} \sum_{n = 1}^{\infty} \frac{(-1)^{m+1}}{m+1}\frac{1}{n^{m+1}} \\[2pt] - -&= \sum_{n = 1}^{\infty} \sum_{m = 1}^{\infty} \frac{(-1)^{m+1}}{m+1}\frac{1}{n^{m+1}} - -= \sum_{n = 1}^{\infty} \left[\frac{1}{n} - \ln\left(1+\frac{1}{n}\right)\right] - -= \gamma - -\end{align} - -The integral on the second line of the equation stands for the Debye function value of +∞, which is m! ζ(m + 1). - -Definite integrals in which γ appears include: - -\begin{align} - -\int_0^\infty e^{-x^2} \log x dx &= -\frac{(\gamma+2\log 2)\sqrt{\pi}}{4} \\ - -\int_0^\infty e^{-x} \log^2 x dx &= \gamma^2 + \frac{\pi^2}{6}\\ - -\int_{-\infty}^\infty \frac{ - -e^{-|x|} \log|x|}{2} dx &= \gamma - -\end{align} - -One can express γ using a special case of Hadjicostas's formula as a double integral with equivalent series: - -\begin{align} - -\gamma &= \int_0^1 \int_0^1 \frac{x-1}{(1-xy)\log xy}dxdy \\ - -&= \sum_{n=1}^\infty \left(\frac 1 n -\log\frac{n+1} n \right). - -\end{align} - -An interesting comparison by Sondow is the double integral and alternating series - -\begin{align} - -\log\frac 4 \pi &= \int_0^1 \int_0^1 \frac{x-1}{(1+xy)\log xy} dxdy \\ - -&= \sum_{n=1}^\infty \left((-1)^{n-1}\left(\frac 1 n -\log\frac{n+1} n \right)\right). - -\end{align} - -It shows that log 4/π may be thought of as an "alternating Euler constant". - -The two constants are also related by the pair of series - -\begin{align} - -\gamma &= \sum_{n=1}^\infty \frac{N_1(n) + N_0(n)}{2n(2n+1)} \\ - -\log\frac4{\pi} &= \sum_{n=1}^\infty \frac{N_1(n) - N_0(n)}{2n(2n+1)} , - -\end{align} - -where N1(n) and N0(n) are the number of 1s and 0s, respectively, in the base 2 expansion of n. - -We have also Catalan's 1875 integral -$$ -\gamma = \int_0^1 \left(\frac1{1+x}\sum_{n=1}^\infty x^{2^n-1}\right)dx. -$$ - -In general, - - - -\gamma = \lim_{n \to \infty}\left(\frac{1}{1}+\frac{1}{2}+\frac{1}{3} + \ldots + \frac{1}{n} - \log(n+\alpha) \right) \equiv \lim_{n \to \infty} \gamma_n(\alpha) - - - -for any $\alpha > -n$. However, the rate of convergence of this expansion depends significantly on $\alpha$. In particular, $\gamma_n(1/2)$ exhibits much more rapid convergence than the conventional expansion $\gamma_n(0)$. This is because - - - -\frac{1}{2(n+1)} < \gamma_n(0) - \gamma < \frac{1}{2n}, - - - -while - - - -\frac{1}{24(n+1)^2} < \gamma_n(1/2) - \gamma < \frac{1}{24n^2}. - - - -Even so, there exist other series expansions which converge more rapidly than this; some of these are discussed below. - -Euler showed that the following infinite series approaches γ: -$$ -\gamma = \sum_{k=1}^\infty \left(\frac 1 k - \log\left(1+\frac 1 k \right)\right). 
-$$ - -The series for γ is equivalent to a series Nielsen found in 1897: -$$ -\gamma = 1 - \sum_{k=2}^\infty (-1)^k\frac{\left\lfloor\log_2 k\right\rfloor}{k+1}. -$$ - -In 1910, Vacca found the closely related series - -\begin{align} - -\gamma & = \sum_{k=2}^\infty (-1)^k\frac{\left\lfloor\log_2 k\right\rfloor} k \\[5pt] - -& = \tfrac12-\tfrac13 + 2\left(\tfrac14 - \tfrac15 + \tfrac16 - \tfrac17\right) + 3\left(\tfrac18 - \tfrac19 + \tfrac1{10} - \tfrac1{11} + \cdots - \tfrac1{15}\right) + \cdots, - -\end{align} - -where log2 is the logarithm to base 2 and ⌊ ⌋ is the floor function. - -In 1926 he found a second series: - -\begin{align} - -\gamma + \zeta(2) & = \sum_{k=2}^\infty \left( \frac1{\left\lfloor\sqrt{k}\right\rfloor^2} - \frac1{k}\right) \\[5pt] - -& = \sum_{k=2}^\infty \frac{k - \left\lfloor\sqrt{k}\right\rfloor^2}{k \left\lfloor \sqrt{k} \right\rfloor^2} \\[5pt] - -&= \frac12 + \frac23 + \frac1{2^2}\sum_{k=1}^{2\cdot 2} \frac{k}{k+2^2} + \frac1{3^2}\sum_{k=1}^{3\cdot 2} \frac{k}{k+3^2} + \cdots - -\end{align} - -From the Malmsten–Kummer expansion for the logarithm of the gamma function we get: -$$ -\gamma = \log\pi - 4\log\left(\Gamma(\tfrac34)\right) + \frac 4 \pi \sum_{k=1}^\infty (-1)^{k+1}\frac{\log(2k+1)}{2k+1}. -$$ - -An important expansion for Euler's constant is due to Fontana and Mascheroni -$$ -\gamma = \sum_{n=1}^\infty \frac{|G_n|}{n} = \frac12 + \frac1{24} + \frac1{72} + \frac{19}{2880} + \frac3{800} + \cdots, -$$ - -where Gn are Gregory coefficients. This series is the special case $k=1$ of the expansions - -\begin{align} - -\gamma &= H_{k-1} - \log k + \sum_{n=1}^{\infty}\frac{(n-1)!|G_n|}{k(k+1) \cdots (k+n-1)} && \\ - -&= H_{k-1} - \log k + \frac1{2k} + \frac1{12k(k+1)} + \frac1{12k(k+1)(k+2)} + \frac{19}{120k(k+1)(k+2)(k+3)} + \cdots && - -\end{align} - -convergent for $k=1,2,\ldots$ - -A similar series with the Cauchy numbers of the second kind Cn is -$$ -\gamma = 1 - \sum_{n=1}^\infty \frac{C_{n}}{n (n+1)!} =1- \frac{1}{4} -\frac{5}{72} - \frac{1}{32} - \frac{251}{14400} - \frac{19}{1728} - \ldots -$$ - -Blagouchine (2018) found an interesting generalisation of the Fontana–Mascheroni series - -\gamma=\sum_{n=1}^\infty\frac{(-1)^{n+1}}{2n}\Big\{\psi_{n}(a)+ \psi_{n}\Big(-\frac{a}{1+a}\Big)\Big\}, - -\quad a>-1 - -where ψn(a) are the Bernoulli polynomials of the second kind, which are defined by the generating function - -\frac{z(1+z)^s}{\log(1+z)}= \sum_{n=0}^\infty z^n \psi_n(s) ,\qquad |z|<1. - -For any rational a this series contains rational terms only. For example, at a = 1, it becomes -$$ -\gamma=\frac{3}{4} - \frac{11}{96} - \frac{1}{72} - \frac{311}{46080} - \frac{5}{1152} - \frac{7291}{2322432} - \frac{243}{100352} - \ldots -$$ - -Other series with the same polynomials include these examples: -$$ -\gamma= -\log(a+1) - \sum_{n=1}^\infty\frac{(-1)^n \psi_{n}(a)}{n},\qquad \Re(a)>-1 -$$ - -and -$$ -\gamma= -\frac{2}{1+2a} \left\{\log\Gamma(a+1) -\frac{1}{2}\log(2\pi) + \frac{1}{2} + \sum_{n=1}^\infty\frac{(-1)^n \psi_{n+1}(a)}{n}\right\},\qquad \Re(a)>-1 -$$ - -where Γ(a) is the gamma function. - -A series related to the Akiyama–Tanigawa algorithm is - -\gamma= \log(2\pi) - 2 - 2 \sum_{n=1}^\infty\frac{(-1)^n G_{n}(2)}{n}= \log(2\pi) - 2 + \frac{2}{3} + \frac{1}{24}+ \frac{7}{540} + \frac{17}{2880}+ \frac{41}{12600} + \ldots - -where Gn(2) are the Gregory coefficients of the second order. - -Series of prime numbers: -$$ -\gamma = \lim_{n\to\infty}\left(\log n - \sum_{p\le n}\frac{\log p}{p-1}\right).
-$$ - -γ admits the following asymptotic expansions (where Hn is the nth harmonic number): -$$ -\gamma \sim H_n - \log n - \frac1{2n} + \frac1{12n^2} - \frac1{120n^4} + \cdots -$$ (Euler) -$$ -\gamma \sim H_n - \log\left({n + \frac1{2} + \frac1{24n} - \frac1{48n^3} + \cdots}\right) -$$ (Negoi) -$$ -\gamma \sim H_n - \frac{\log n + \log(n+1)}{2} - \frac1{6n(n+1)} + \frac1{30n^2(n+1)^2} - \cdots -$$ (Cesàro) - -The third formula is also called the Ramanujan expansion. - -Alabdulmohsin derived closed-form expressions for the sums of errors of these approximations. He showed that (Theorem A.1): - - \sum_{n=1}^\infty \left(\log n +\gamma - H_n + \frac{1}{2n}\right) = \frac{\log (2\pi)-1-\gamma}{2} - - \sum_{n=1}^\infty \left(\log \sqrt{n(n+1)} +\gamma - H_n\right) = \frac{\log (2\pi)-1}{2}-\gamma - - \sum_{n=1}^\infty (-1)^n\Big(\log n +\gamma - H_n\Big) = \frac{\log \pi-\gamma}{2} - -The constant $e^\gamma$ is important in number theory. Some authors denote this quantity simply as γ′. $e^\gamma$ equals the following limit, where pn is the nth prime number: -$$ -e^\gamma = \lim_{n\to\infty}\frac1{\log p_n} \prod_{i=1}^n \frac{p_i}{p_i-1}. -$$ - -This restates the third of Mertens' theorems. The numerical value of $e^\gamma$ is: - -1.78107241799... - -Other infinite products relating to $e^\gamma$ include: - -\begin{align} - -\frac{e^{1+\frac{\gamma}{2}}}{\sqrt{2\pi}} &= \prod_{n=1}^\infty e^{-1+\frac1{2n}}\left(1+\frac1{n}\right)^n \\ - -\frac{e^{3+2\gamma}}{2\pi} &= \prod_{n=1}^\infty e^{-2+\frac2{n}}\left(1+\frac2{n}\right)^n. \end{align} - -These products result from the Barnes G-function. - -In addition, -$$ -e^{\gamma} = \sqrt{\frac2{1}} \cdot \sqrt[3]{\frac{2^2}{1\cdot 3}} \cdot \sqrt[4]{\frac{2^3\cdot 4}{1\cdot 3^3}} \cdot \sqrt[5]{\frac{2^4\cdot 4^4}{1\cdot 3^6\cdot 5}} \cdots -$$ - -where the nth factor is the (n + 1)th root of -$$ -\prod_{k=0}^n (k+1)^{(-1)^{k+1}{n \choose k}}. -$$ - -This infinite product, first discovered by Ser in 1926, was rediscovered by Sondow using hypergeometric functions. - -It also holds that -$$ -\frac{e^\frac{\pi}{2}+e^{-\frac{\pi}{2}}}{\pi e^\gamma}=\prod_{n=1}^\infty\left(e^{-\frac{1}{n}}\left(1+\frac{1}{n}+\frac{1}{2n^2}\right)\right). -$$ - -The continued fraction expansion of γ begins [0; 1, 1, 2, 1, 2, 1, 4, 3, 13, 5, 1, 1, 8, 1, 2, 4, 1, 1, 40, ...], which has no apparent pattern. The continued fraction is known to have at least 475,006 terms, and it has infinitely many terms if and only if γ is irrational. - -Euler's generalized constants are given by -$$ -\gamma_\alpha = \lim_{n\to\infty}\left(\sum_{k=1}^n \frac1{k^\alpha} - \int_1^n \frac1{x^\alpha}dx\right), -$$ - -for 0 < α < 1, with γ as the special case α = 1. This can be further generalized to -$$ -c_f = \lim_{n\to\infty}\left(\sum_{k=1}^n f(k) - \int_1^n f(x)dx\right) -$$ - -for some arbitrary decreasing function f. For example, -$$ -f_n(x) = \frac{(\log x)^n}{x} -$$ - -gives rise to the Stieltjes constants, and -$$ -f_a(x) = x^{-a} -$$ - -gives -$$ -\gamma_{f_a} = \frac{(a-1)\zeta(a)-1}{a-1} -$$ - -where again the limit -$$ -\gamma = \lim_{a\to 1}\left(\zeta(a) - \frac1{a-1}\right) -$$ - -appears. - -A two-dimensional limit generalization is the Masser–Gramain constant.
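The limit restating Mertens' third theorem converges slowly, but it is easy to probe numerically. The following Python sketch (illustrative only; the sieve bound 10**5 is an arbitrary choice) compares the prime product against $e^\gamma$:

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

ps = primes_up_to(10**5)
product = 1.0
for p in ps:
    product *= p / (p - 1)

print(product / math.log(ps[-1]))    # slowly approaches e^gamma
print(math.exp(0.5772156649015329))  # e^gamma = 1.7810724179901979...
```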
- -Euler–Lehmer constants are given by summation of inverses of numbers in a common modulo class: -$$ -\gamma(a,q) = \lim_{x\to \infty}\left(\sum_{0<n\le x \atop n\equiv a \pmod{q}} \frac{1}{n}-\frac{\log x}{q}\right). -$$ - -The basic properties are - -\begin{align} - -\gamma(0,q) &= \frac{\gamma -\log q}{q}, \\ - -\sum_{a=0}^{q-1} \gamma(a,q)&=\gamma, \\ - -q\gamma(a,q) &= \gamma-\sum_{j=1}^{q-1}e^{-\frac{2\pi aij}{q}}\log\left(1-e^{\frac{2\pi ij}{q}}\right), - -\end{align} - -and if gcd(a,q) = d then -$$ -q\gamma(a,q) = \frac{q}{d}\gamma\left(\frac{a}{d},\frac{q}{d}\right)-\log d. -$$ - -Euler initially calculated the constant's value to 6 decimal places. In 1781, he calculated it to 16 decimal places. Mascheroni attempted to calculate the constant to 32 decimal places, but made errors in the 20th–22nd and 31st–32nd decimal places; starting from the 20th digit, he calculated ...1811209008239 when the correct value is ...0651209008240. diff --git a/wiki/wikipedia/494.txt deleted file mode 100644 index 07d08718392f7a9bef1022405b280b5d7d36247b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/494.txt +++ /dev/null @@ -1,40 +0,0 @@ -In the mathematical areas of order and lattice theory, the Kleene fixed-point theorem, named after American mathematician Stephen Cole Kleene, states the following: - -Kleene Fixed-Point Theorem. Suppose $(L, \sqsubseteq)$ is a directed-complete partial order (dcpo) with a least element, and let $f: L \to L$ be a Scott-continuous (and therefore monotone) function. Then $f$ has a least fixed point, which is the supremum of the ascending Kleene chain of $f.$ - -The ascending Kleene chain of f is the chain -$$ -\bot \sqsubseteq f(\bot) \sqsubseteq f(f(\bot)) \sqsubseteq \cdots \sqsubseteq f^n(\bot) \sqsubseteq \cdots -$$ - -obtained by iterating f on the least element ⊥ of L. Expressed in a formula, the theorem states that -$$ -\textrm{lfp}(f) = \sup \left(\left\{f^n(\bot) \mid n\in\mathbb{N}\right\}\right) -$$ - -where $\textrm{lfp}$ denotes the least fixed point. - -Although Tarski's fixed point theorem does not consider how fixed points can be computed by iterating f from some seed (also, it pertains to monotone functions on complete lattices), this result is often attributed to Alfred Tarski, who proved it for additive functions. Moreover, the Kleene fixed-point theorem can be extended to monotone functions using transfinite iterations. - -We first have to show that the ascending Kleene chain of $f$ exists in $L$. To show that, we prove the following: - -Lemma. If $L$ is a dcpo with a least element, and $f: L \to L$ is Scott-continuous, then $f^n(\bot) \sqsubseteq f^{n+1}(\bot), n \in \mathbb{N}_0$ - -Proof. We use induction: - -* Assume n = 0. Then $f^0(\bot) = \bot \sqsubseteq f^1(\bot),$ since $\bot$ is the least element. - -* Assume n > 0. Then we have to show that $f^n(\bot) \sqsubseteq f^{n+1}(\bot)$. By rearranging we get $f(f^{n-1}(\bot)) \sqsubseteq f(f^n(\bot))$. By inductive assumption, we know that $f^{n-1}(\bot) \sqsubseteq f^n(\bot)$ holds, and because f is monotone (property of Scott-continuous functions), the result holds as well. - -As a corollary of the Lemma we have the following directed ω-chain: -$$ -\mathbb{M} = \{ \bot, f(\bot), f(f(\bot)), \ldots\}. -$$ - -From the definition of a dcpo it follows that $\mathbb{M}$ has a supremum, call it $m.$ What remains now is to show that $m$ is the least fixed-point. - -First, we show that $m$ is a fixed point, i.e. that $f(m) = m$. Because $f$ is Scott-continuous, $f(\sup(\mathbb{M})) = \sup(f(\mathbb{M}))$, that is $f(m) = \sup(f(\mathbb{M}))$.
Also, since $\mathbb{M} = f(\mathbb{M})\cup\{\bot\}$ and because $\bot$ has no influence in determining the supremum we have: $\sup(f(\mathbb{M})) = \sup(\mathbb{M})$. It follows that $f(m) = m$, making $m$ a fixed-point of $f$. - -The proof that $m$ is in fact the least fixed point can be done by showing that any element in $\mathbb{M}$ is smaller than any fixed-point of $f$ (because by property of supremum, if all elements of a set $D \subseteq L$ are smaller than an element of $L$ then also $\sup(D)$ is smaller than that same element of $L$). This is done by induction: Assume $k$ is some fixed-point of $f$. We now prove by induction over $i$ that $\forall i \in \mathbb{N}: f^i(\bot) \sqsubseteq k$. The base of the induction $(i = 0)$ obviously holds: $f^0(\bot) = \bot \sqsubseteq k,$ since $\bot$ is the least element of $L$. As the induction hypothesis, we may assume that $f^i(\bot) \sqsubseteq k$. We now do the induction step: From the induction hypothesis and the monotonicity of $f$ (again, implied by the Scott-continuity of $f$), we may conclude the following: $f^i(\bot) \sqsubseteq k ~\implies~ f^{i+1}(\bot) \sqsubseteq f(k).$ Now, by the assumption that $k$ is a fixed-point of $f,$ we know that $f(k) = k,$ and from that we get $f^{i+1}(\bot) \sqsubseteq k.$ diff --git a/wiki/wikipedia/495.txt deleted file mode 100644 index 1417ec3d0bafc6ef60d8b41ff7583a0640cad415..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/495.txt +++ /dev/null @@ -1,54 +0,0 @@ -[Figure: Two views of the utility graph, also known as the Thomsen graph or $K_{3,3}$] - -The classical mathematical puzzle known as the three utilities problem or sometimes water, gas and electricity asks for non-crossing connections to be drawn between three houses and three utility companies in the plane. When posing it in the early 20th century, Henry Dudeney wrote that it was already an old problem. It is an impossible puzzle: it is not possible to connect all nine lines without crossing. Versions of the problem on nonplanar surfaces such as a torus or Möbius strip, or that allow connections to pass through other houses or utilities, can be solved. - -This puzzle can be formalized as a problem in topological graph theory by asking whether the complete bipartite graph $K_{3,3}$, with vertices representing the houses and utilities and edges representing their connections, has a graph embedding in the plane. The impossibility of the puzzle corresponds to the fact that $K_{3,3}$ is not a planar graph. Multiple proofs of this impossibility are known, and form part of the proof of Kuratowski's theorem characterizing planar graphs by two forbidden subgraphs, one of which is $K_{3,3}$. The question of minimizing the number of crossings in drawings of complete bipartite graphs is known as Turán's brick factory problem, and for $K_{3,3}$ the minimum number of crossings is one. -$$ -K_{3,3} -$$ is a graph with six vertices and nine edges, often referred to as the utility graph in reference to the problem. It has also been called the Thomsen graph after 19th-century chemist Julius Thomsen. It is a well-covered graph, the smallest triangle-free cubic graph, and the smallest non-planar minimally rigid graph. - -A review of the history of the three utilities problem is given by Kullman. He states that most published references to the problem characterize it as "very ancient".
The earliest publication found by Kullman names it "water, gas, and electricity". However, Dudeney states that the problem is "as old as the hills...much older than electric lighting, or even gas". Dudeney also published the same puzzle previously, in The Strand Magazine in 1913. A competing claim of priority goes to Sam Loyd, who was quoted by his son in a posthumous biography as having published the problem in 1900. - -Another early version of the problem involves connecting three houses to three wells. It is stated similarly to a different (and solvable) puzzle that also involves three houses and three fountains, with all three fountains and one house touching a rectangular wall; the puzzle again involves making non-crossing connections, but only between three designated pairs of houses and wells or fountains, as in modern numberlink puzzles. Loyd's puzzle "The Quarrelsome Neighbors" similarly involves connecting three houses to three gates by three non-crossing paths (rather than nine as in the utilities problem); one house and the three gates are on the wall of a rectangular yard, which contains the other two houses within it. - -As well as in the three utilities problem, the graph $K_{3,3}$ appears in late 19th-century and early 20th-century publications both in early studies of structural rigidity and in chemical graph theory, where Julius Thomsen proposed it in 1886 for the then-uncertain structure of benzene. In honor of Thomsen's work, $K_{3,3}$ is sometimes called the Thomsen graph. - -The three utilities problem can be stated as follows: given three houses that must each be connected to all three of the water, gas, and electricity suppliers by separate lines, can the nine connections be drawn in the plane without any two lines crossing? - -The problem is an abstract mathematical puzzle which imposes constraints that would not exist in a practical engineering situation. Its mathematical formalization is part of the field of topological graph theory which studies the embedding of graphs on surfaces. An important part of the puzzle, but one that is often not stated explicitly in informal wordings of the puzzle, is that the houses, companies, and lines must all be placed on a two-dimensional surface with the topology of a plane, and that the lines are not allowed to pass through other buildings; sometimes this is enforced by showing a drawing of the houses and companies, and asking for the connections to be drawn as lines on the same drawing. - -In more formal graph-theoretic terms, the problem asks whether the complete bipartite graph $K_{3,3}$ is a planar graph. This graph has six vertices in two subsets of three: one vertex for each house, and one for each utility. It has nine edges, one edge for each of the pairings of a house with a utility, or more abstractly one edge for each pair of a vertex in one subset and a vertex in the other subset. Planar graphs are the graphs that can be drawn without crossings in the plane, and if such a drawing could be found, it would solve the three utilities puzzle. - -As it is usually presented (on a flat two-dimensional plane), the solution to the utility puzzle is "no": there is no way to make all nine connections without any of the lines crossing each other. - -In other words, the graph $K_{3,3}$ is not planar. Kazimierz Kuratowski stated in 1930 that $K_{3,3}$ is nonplanar, from which it follows that the problem has no solution. Kullman, however, states that "Interestingly enough, Kuratowski did not publish a detailed proof that [ $K_{3,3}$ ] is non-planar". - -One proof of the impossibility of finding a planar embedding of $K_{3,3}$ uses a case analysis involving the Jordan curve theorem.
In this solution, one examines different possibilities for the locations of the vertices with respect to the 4-cycles of the graph and shows that they are all inconsistent with a planar embedding. - -Alternatively, it is possible to show that any bridgeless bipartite planar graph with $V$ vertices and $E$ edges has $E\le 2V-4$ by combining the Euler formula $V-E+F=2$ (where $F$ is the number of faces of a planar embedding) with the observation that the number of faces is at most half the number of edges (the vertices around each face must alternate between houses and utilities, so each face has at least four edges, and each edge belongs to exactly two faces). In the utility graph, $E=9$ and $2V-4=8$ so in the utility graph it is untrue that $E\le 2V-4$. Because it does not satisfy this inequality, the utility graph cannot be planar. -$$ -K_{3,3} -$$ is a toroidal graph, which means that it can be embedded without crossings on a torus, a surface of genus one. These embeddings solve versions of the puzzle in which the houses and companies are drawn on a coffee mug or other such surface instead of a flat plane. There is even enough additional freedom on the torus to solve a version of the puzzle with four houses and four utilities. Similarly, if the three utilities puzzle is presented on a sheet of a transparent material, it may be solved after twisting and gluing the sheet to form a Möbius strip. - -Another way of changing the rules of the puzzle that would make it solvable, suggested by Henry Dudeney, is to allow utility lines to pass through other houses or utilities than the ones they connect. - -Beyond the utility puzzle, the same graph $K_{3,3}$ comes up in several other mathematical contexts, including rigidity theory, the classification of cages and well-covered graphs, the study of graph crossing numbers, and the theory of graph minors. - -The utility graph $K_{3,3}$ is a Laman graph, meaning that for almost all placements of its vertices in the plane, there is no way to continuously move its vertices while preserving all edge lengths, other than by a rigid motion of the whole plane, and that none of its spanning subgraphs have the same rigidity property. It is the smallest example of a nonplanar Laman graph. Despite being a minimally rigid graph, it has non-rigid embeddings with special placements for its vertices. For general-position embeddings, a polynomial equation describing all possible placements with the same edge lengths has degree 16, meaning that in general there can be at most 16 placements with the same lengths. It is possible to find systems of edge lengths for which up to eight of the solutions to this equation describe realizable placements. -$$ -K_{3,3} -$$ is a triangle-free graph, in which every vertex has exactly three neighbors (a cubic graph). Among all such graphs, it is the smallest. Therefore, it is the (3,4)-cage, the smallest graph that has three neighbors per vertex and in which the shortest cycle has length four. - -Like all other complete bipartite graphs, it is a well-covered graph, meaning that every maximal independent set has the same size. In this graph, the only two maximal independent sets are the two sides of the bipartition, and are of equal sizes. $K_{3,3}$ is one of only seven 3-regular 3-connected well-covered graphs. 
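The edge-counting argument above is easy to mechanize. This small Python sketch (an illustration, not from the article) packages the inequality $E \le 2V - 4$ and applies it to $K_{3,3}$:

```python
def bipartite_planarity_bound(V, E):
    """Necessary condition for a bipartite planar graph containing a cycle:
    Euler's formula V - E + F = 2, with every face of length at least 4,
    forces E <= 2V - 4."""
    return E <= 2 * V - 4

# K_{3,3} has V = 6 and E = 9; since 9 > 2*6 - 4 = 8 it cannot be planar.
print(bipartite_planarity_bound(6, 9))  # False
# K_{2,3} has V = 5 and E = 6; it passes the test (and is in fact planar).
print(bipartite_planarity_bound(5, 6))  # True
```

Note that the bound is only a necessary condition: passing the test does not by itself establish planarity.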
- -Two important characterizations of planar graphs, Kuratowski's theorem that the planar graphs are exactly the graphs that contain neither $K_{3,3}$ nor the complete graph $K_5$ as a subdivision, and Wagner's theorem that the planar graphs are exactly the graphs that contain neither $K_{3,3}$ nor $K_5$ as a minor, make use of and generalize the non-planarity of $K_{3,3}$. - -Pál Turán's "brick factory problem" asks more generally for a formula for the minimum number of crossings in a drawing of the complete bipartite graph $K_{a,b}$ in terms of the numbers of vertices $a$ and $b$ on the two sides of the bipartition. The utility graph $K_{3,3}$ may be drawn with only one crossing, but not with zero crossings, so its crossing number is one. diff --git a/wiki/wikipedia/496.txt b/wiki/wikipedia/496.txt deleted file mode 100644 index 5eaae79ffdf9afbd00b067c490be2f41a0d1bb4f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/496.txt +++ /dev/null @@ -1 +0,0 @@ -Stride scheduling is a scheduling mechanism that has been introduced as a simple concept to achieve proportional CPU capacity reservation among concurrent processes. Stride scheduling aims to sequentially allocate a resource for the duration of standard time-slices (quantum) in a fashion that performs periodic recurrences of allocations. Thus, a process p1 which has reserved twice the share of a process p2 will be allocated twice as often as p2. In particular, process p1 will even be allocated two times every time p2 is waiting for allocation, assuming that neither of the two processes performs a blocking operation. diff --git a/wiki/wikipedia/497.txt b/wiki/wikipedia/497.txt deleted file mode 100644 index 2309306444e4e1cd782d75fa6a6dbb953b2d607b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/497.txt +++ /dev/null @@ -1,69 +0,0 @@ -In mathematics, the Hales–Jewett theorem is a fundamental combinatorial result of Ramsey theory named after Alfred W. Hales and Robert I. Jewett, concerning the degree to which high-dimensional objects must necessarily exhibit some combinatorial structure; it is impossible for such objects to be "completely random". - -An informal geometric statement of the theorem is that for any positive integers n and c there is a number H such that if the cells of an H-dimensional n×n×n×...×n cube are colored with c colors, there must be one row, column, or certain diagonal (more details below) of length n all of whose cells are the same color. In other words, the higher-dimensional, multi-player, n-in-a-row generalization of a game of tic-tac-toe cannot end in a draw, no matter how large n is, no matter how many people c are playing, and no matter which player plays each turn, provided only that it is played on a board of sufficiently high dimension H. By a standard strategy-stealing argument, one can thus conclude that if two players alternate, then the first player has a winning strategy when H is sufficiently large, though no practical algorithm for obtaining this strategy is known. - -More formally, let $W_n^H$ be the set of words of length H over an alphabet with n letters; that is, the set of sequences over {1, 2, ..., n} of length H. This set forms the hypercube that is the subject of the theorem. - -A variable word w(x) over $W_n^H$ still has length H but includes the special element x in place of at least one of the letters.
The words w(1), w(2), ..., w(n) obtained by replacing all instances of the special element x with 1, 2, ..., n, form a combinatorial line in the space $W_n^H$; combinatorial lines correspond to rows, columns, and (some of the) diagonals of the hypercube. The Hales–Jewett theorem then states that for given positive integers n and c, there exists a positive integer H, depending on n and c, such that for any partition of $W_n^H$ into c parts, there is at least one part that contains an entire combinatorial line. - -For example, take n = 3, H = 2, and c = 2. The hypercube $W_n^H$ in this case - -is just the standard tic-tac-toe board, with nine positions: 11, 12, 13, 21, 22, 23, 31, 32, 33. - -A typical combinatorial - -line would be the word 2x, which corresponds to the line 21, 22, 23; another combinatorial line is xx, which is the line - -11, 22, 33. (Note that the line 13, 22, 31, while a valid line for the game tic-tac-toe, is not considered a combinatorial line.) In this particular case, the Hales–Jewett theorem does not apply; it is possible to divide - -the tic-tac-toe board into two sets, e.g. {11, 22, 23, 31} and {12, 13, 21, 32, 33}, neither of which contains - -a combinatorial line (and would correspond to a draw in the game of tic-tac-toe). On the other hand, if we increase - -H to, say, 8 (so that the board is now eight-dimensional, with $3^8 = 6561$ positions), and partition this board - -into two sets (the "noughts" and "crosses"), then one of the two sets must contain a combinatorial line (i.e. no draw is possible in this variant of tic-tac-toe). For a proof, see below. - -We now prove the Hales–Jewett theorem in the special case n = 3, c = 2, H = 8 discussed above. The idea is to - -reduce this task to that of proving simpler versions of the Hales–Jewett theorem (in this particular case, to the cases n = 2, c = 2, H = 2 and n = 2, c = 6, H = 6). One can prove the general case of the Hales–Jewett theorem by similar methods, using mathematical induction. - -Each element of the hypercube $W_3^8$ is a string of eight numbers from 1 to 3, e.g. 13211321 is an element of the hypercube. We are assuming that this hypercube is completely filled with "noughts" and "crosses". We shall use a proof by contradiction and assume that neither the set of noughts nor the set of crosses contains a combinatorial line. If we fix the first six elements of such a string and let the last two vary, we obtain an ordinary tic-tac-toe board, for instance "132113??" gives such a board. For each such board "abcdef??", we consider the positions - -"abcdef11", "abcdef12", "abcdef22". Each of these must be filled with either a nought or a cross, so by the pigeonhole principle two of them must be filled with the same symbol. Since any two of these positions are part of - -a combinatorial line, the third element of that line must be occupied by the opposite symbol (since we are assuming that no combinatorial line has all three elements filled with the same symbol). In other words, for each choice of "abcdef" (which can be thought of as an element of the six-dimensional hypercube $W_3^6$), there are six (overlapping) possibilities: - -# abcdef11 and abcdef12 are noughts; abcdef13 is a cross. - -# abcdef11 and abcdef22 are noughts; abcdef33 is a cross. - -# abcdef12 and abcdef22 are noughts; abcdef32 is a cross. - -# abcdef11 and abcdef12 are crosses; abcdef13 is a nought. - -# abcdef11 and abcdef22 are crosses; abcdef33 is a nought. - -# abcdef12 and abcdef22 are crosses; abcdef32 is a nought.
- -Thus we can partition the six-dimensional hypercube $W_3^6$ into six classes, corresponding to each of the above six possibilities. (If an element abcdef obeys multiple possibilities, we can choose one arbitrarily, e.g. by choosing the highest one on the above list). - -Now consider the seven elements 111111, 111112, 111122, 111222, 112222, 122222, 222222 in $W_3^6$. By the pigeonhole principle, two of these elements must fall into the same class. Suppose for instance - -111112 and 112222 fall into class (5), thus 11111211, 11111222, 11222211, 11222222 are crosses and 11111233, 11222233 are noughts. But now consider the position 11333233, which must be filled with either a cross or a nought. If it is filled with a cross, then the combinatorial line 11xxx2xx is filled entirely with crosses, contradicting our hypothesis. If instead it is filled with a nought, then the combinatorial line 11xxx233 is filled entirely with noughts, again contradicting our hypothesis. A similar contradiction arises if any other two of the above seven elements of $W_3^6$ fall into the same class. Since we have a contradiction in all cases, the original hypothesis must be false; thus there must exist at least one combinatorial line consisting entirely of noughts or entirely of crosses. - -The above argument was somewhat wasteful; in fact the same theorem holds for H = 4. - -If one extends the above argument to general values of n and c, then H will grow very fast; even when c = 2 (which corresponds to two-player tic-tac-toe) the H given by the above argument grows as fast as the Ackermann function. The first primitive recursive bound is due to Saharon Shelah, and is still the best known bound in general for the Hales–Jewett number H = H(n, c). - -Observe that the above argument also gives the following corollary: if we let A be the set of all - -eight-digit numbers whose digits are all either 1, 2, or 3 (thus A contains numbers such as 11333233), - -and we color A with two colors, then A contains at least one arithmetic progression of length three, all of whose elements are the same color. This is simply because all of the combinatorial lines appearing in the above proof of the Hales–Jewett theorem also form arithmetic progressions in decimal notation. A more general formulation of this argument can be used to show that the Hales–Jewett theorem generalizes van der Waerden's theorem. Indeed, the Hales–Jewett theorem is a substantially stronger theorem. - -Just as van der Waerden's theorem has a stronger density version in Szemerédi's theorem, the Hales–Jewett theorem also has a density version. In this strengthened version of the Hales–Jewett theorem, instead of coloring the entire hypercube $W_n^H$ with c colors, one is given an arbitrary subset A of the hypercube $W_n^H$ with some given density 0 < δ < 1. The theorem states that if H is sufficiently large depending on n and δ, then the set A must necessarily contain an entire combinatorial line. - -The density Hales–Jewett theorem was originally proved by Furstenberg and Katznelson using ergodic theory. In 2009, the Polymath Project developed a new proof of the density Hales–Jewett theorem based on ideas from the proof of the corners theorem. Dodos, Kanellopoulos, and Tyros gave a simplified version of the Polymath proof. - -The Hales–Jewett theorem is generalized by the Graham–Rothschild theorem on higher-dimensional combinatorial cubes.
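The small case n = 3, H = 2 above can be explored by brute force; a minimal plain-Python sketch (the helper names are illustrative, not standard) enumerates all combinatorial lines of the tic-tac-toe board and confirms that the partition into {11, 22, 23, 31} and {12, 13, 21, 32, 33} contains no monochromatic line:

```python
from itertools import product

alphabet = "123"
H = 2

def combinatorial_lines(H):
    # Variable words: strings over {1, 2, 3, x} of length H with at least one x.
    for w in product(alphabet + "x", repeat=H):
        if "x" in w:
            # Substituting x -> 1, 2, 3 gives the three points of the line.
            yield frozenset("".join(a if c == "x" else c for c in w)
                            for a in alphabet)

part1 = {"11", "22", "23", "31"}
part2 = {"".join(w) for w in product(alphabet, repeat=H)} - part1

mono = [line for line in combinatorial_lines(H)
        if line <= part1 or line <= part2]
print(mono)  # [] -- neither part contains an entire combinatorial line
```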
diff --git a/wiki/wikipedia/498.txt b/wiki/wikipedia/498.txt deleted file mode 100644 index b2717e3aa1b696c8f9c80aeb1395997afa472ddf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/498.txt +++ /dev/null @@ -1,23 +0,0 @@ -In mathematics, Delta-convergence, or Δ-convergence, is a mode of convergence in metric spaces, weaker than the usual metric convergence, and similar to (but distinct from) the weak convergence in Banach spaces. In Hilbert space, Delta-convergence and weak convergence coincide. For a general class of spaces, similarly to weak convergence, every bounded sequence has a Delta-convergent subsequence. - -Delta convergence was first introduced by Teck-Cheong Lim, and, soon after, under the name of almost convergence, by Tadeusz Kuczumow. - -A sequence $(x_k)$ in a metric space $(X,d)$ is said to be Δ-convergent to $x\in X$ if for every $y\in X$, $\limsup(d(x_k,x)-d(x_k,y))\le 0$. - -If $X$ is a uniformly convex and uniformly smooth Banach space, with the duality mapping $x\mapsto x^*$ given by $\|x\|=\|x^*\|$, $\langle x^*,x\rangle=\|x\|^2$, then a sequence $(x_k)\subset X$ is Delta-convergent to $x$ if and only if $(x_k-x)^*$ converges to zero weakly in the dual space $X^*$. In particular, Delta-convergence and weak convergence coincide if $X$ is a Hilbert space. - -Coincidence of weak convergence and Delta-convergence is equivalent, for uniformly convex Banach spaces, to the well-known - -Opial property. - -The Delta-compactness theorem of T. C. Lim states that if $(X,d)$ is an asymptotically complete metric space, then every bounded sequence in $X$ has a Delta-convergent subsequence. - -The Delta-compactness theorem is similar to the Banach–Alaoglu theorem for weak convergence but, unlike the Banach–Alaoglu theorem (in the non-separable case), its proof does not depend on the Axiom of Choice. - -An asymptotic center of a sequence $(x_k)_{k\in\mathbb N}$, if it exists, is a limit of the Chebyshev centers $c_n$ for truncated sequences $(x_k)_{k\ge n}$. A metric space is called asymptotically complete if any bounded sequence in it has an asymptotic center. - -The condition of asymptotic completeness in the Delta-compactness theorem is satisfied by uniformly convex Banach spaces, and more generally, by uniformly rotund metric spaces as defined by J. Staples. - -* William Kirk, Naseer Shahzad, Fixed point theory in distance spaces. Springer, Cham, 2014. xii+173 pp. - -* G. Devillanova, S. Solimini, C. Tintarev, On weak convergence in metric spaces, Nonlinear Analysis and Optimization (B. S. Mordukhovich, S. Reich, A. J. Zaslavski, Editors), 43–64, Contemporary Mathematics 659, AMS, Providence, RI, 2016. diff --git a/wiki/wikipedia/499.txt b/wiki/wikipedia/499.txt deleted file mode 100644 index 0957fda1d62dd2007dd296191d84be36df1d7391..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/499.txt +++ /dev/null @@ -1,190 +0,0 @@ -The ratio estimator is a statistical estimator, defined as the ratio of the means of two random variables. Ratio estimates are biased, and corrections must be made when they are used in experimental or survey work. The distribution of ratio estimates is asymmetrical, so symmetrical tests such as the t test should not be used to generate confidence intervals. - -The bias is of the order O(1/n) (see big O notation), so as the sample size (n) increases, the bias will asymptotically approach 0. Therefore, the estimator is approximately unbiased for large sample sizes.
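A minimal Monte Carlo sketch, assuming numpy and using arbitrarily chosen gamma distributions and sample sizes, illustrates this O(1/n) shrinkage of the bias of the naive sample ratio:

```python
import numpy as np

rng = np.random.default_rng(0)
R = 2.0 / 4.0  # true ratio of means: E(y) = 2, E(x) = 4

for n in (5, 20, 80):
    reps = 100_000
    x = rng.gamma(shape=4.0, scale=1.0, size=(reps, n))  # mean 4
    y = rng.gamma(shape=2.0, scale=1.0, size=(reps, n))  # mean 2
    r = y.mean(axis=1) / x.mean(axis=1)  # naive ratio estimate
    print(f"n = {n:3d}: estimated bias = {r.mean() - R:+.5f}")
# Quadrupling n should cut the estimated bias by roughly a factor of four.
```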
- -Assume there are two characteristics – x and y – that can be observed for each sampled element in the data set. The ratio R is -$$ - R = \mu_y / \mu_x -$$ - -The ratio estimate of a value of the y variate ($\theta_y$) is -$$ - \theta_y = R \theta_x -$$ - -where $\theta_x$ is the corresponding value of the x variate. $\theta_y$ is known to be asymptotically normally distributed. - -The sample ratio (r) is estimated from the sample -$$ - r = \frac{ \bar{ y } }{ \bar{ x } } = \frac{ \sum_{ i = 1 }^n y_i }{ \sum_{ i = 1 }^n x_i } -$$ - -That the ratio is biased can be shown with Jensen's inequality as follows (assuming independence between x and y): -$$ - E\left( \frac{ y }{ x } \right) = E\left( y \frac{ 1 }{ x } \right) = E( y )E\left( \frac{ 1 }{ x } \right) \ge E(y)\frac{ 1 }{ E( x ) } = \frac{ E( y )}{ E( x ) } -$$ - -Under simple random sampling the bias is of the order $O(n^{-1})$. An upper bound on the relative bias of the estimate is provided by the coefficient of variation (the ratio of the standard deviation to the mean). Under simple random sampling the relative bias is $O(n^{-1/2})$. - -The correction methods, depending on the distributions of the x and y variates, differ in their efficiency, making it difficult to recommend an overall best method. Because the estimates of r are biased, a corrected version should be used in all subsequent calculations. - -A correction of the bias accurate to the first order is -$$ - r_\mathrm{ corr } = r - \frac{ s_{ x y } }{ m_x } -$$ - -where $m_x$ is the mean of the variate x and $s_{xy}$ is the covariance between x and y. - -To simplify the notation, $s_{xy}$ will be used subsequently to denote the covariance between the variates x and y. - -Another estimator based on the Taylor expansion is -$$ - r_\mathrm{ corr } = r + \frac{ 1 }{ n }( 1 - \frac{ n - 1 }{ N - 1 } ) \frac{ r s_x^2 - \rho s_x s_y }{ m_x^2 } -$$ - -where n is the sample size, N is the population size, $m_x$ is the mean of the variate x, $s_x^2$ and $s_y^2$ are the sample variances of the x and y variates respectively and $\rho$ is the sample correlation between the x and y variates. - -A computationally simpler but slightly less accurate version of this estimator is -$$ - r_\mathrm{ corr } = r - \frac{ N - n }{ N } \frac{ ( r s_x^2 - \rho s_x s_y ) }{ n m_x^2 } -$$ - -where N is the population size, n is the sample size, $m_x$ is the mean of the x variate, $s_x^2$ and $s_y^2$ are the sample variances of the x and y variates respectively and $\rho$ is the sample correlation between the x and y variates. These versions differ only in the factor ( N - 1 ) in the denominator. For a large N the difference is negligible. - -A second-order correction is -$$ - r_\mathrm{ corr } = r \left[ 1 + \frac{ 1 }{ n } \left( \frac{ 1 }{ m_x } - \frac{ s_{ xy } }{ m_x m_y } \right) + \frac{ 1 }{ n^2 } \left( \frac{ 2 }{ m_x^2 } - \frac{ s_{ xy } }{ m_x m_y } \left[ 2 + \frac{ 3 }{ m_x } \right] + \frac{ s_{ x^2 y } }{ m_x^2 m_y } \right) \right] -$$ - -Other methods of bias correction have also been proposed.
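Of the corrections above, the Taylor-expansion estimator is straightforward to implement; a minimal sketch, assuming numpy and transcribing that formula directly (the function name is illustrative):

```python
import numpy as np

def ratio_taylor_corrected(x, y, N):
    # Taylor-expansion correction from the formula above; x and y are paired
    # samples of size n drawn from a population of size N.
    n = len(x)
    m_x = x.mean()
    s_x, s_y = x.std(ddof=1), y.std(ddof=1)
    rho = np.corrcoef(x, y)[0, 1]      # sample correlation
    r = y.sum() / x.sum()              # naive sample ratio
    return r + (1 / n) * (1 - (n - 1) / (N - 1)) \
             * (r * s_x**2 - rho * s_x * s_y) / m_x**2
```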
To simplify the notation, the following variables will be used -$$ - \theta = \frac{ 1 }{ n } - \frac{ 1 }{ N } -$$ -$$ - c_x^2 = \frac{ s_x^2 }{ m_x^2 } -$$ -$$ - c_{ xy } = \frac{ s_{xy} }{ m_x m_y } -$$ - -Pascual's estimator: -$$ - r_\mathrm{ corr } = r + \frac{ N - 1 }{ N } \frac{ m_y - r m_x }{ n - 1 } -$$ - -Beale's estimator: -$$ - r_\mathrm{ corr } = r \frac{ 1 + \theta c_{ xy } }{ 1 + \theta c_x^2 } -$$ - -Tin's estimator: -$$ - r_\mathrm{ corr } = r \left( 1 + \theta \left( c_{ xy } - c_x^2 \right) \right) -$$ - -Sahoo's estimator: -$$ - r_\mathrm{ corr } = \frac{ r }{ 1 + \theta ( c_x^2 - c_{ xy } ) } -$$ - -Sahoo has also proposed a number of additional estimators: -$$ - r_\mathrm{ corr } = r ( 1 + \theta c_{ xy } ) ( 1 - \theta c_x^2 ) -$$ -$$ - r_\mathrm{ corr } = \frac{ r ( 1 - \theta c_x^2 ) }{ 1 - \theta c_{ xy } } -$$ -$$ - r_\mathrm{ corr } = \frac{ r }{ ( 1 + \theta c_{ xy } )( 1 + \theta c_x^2 ) } -$$ - -If $m_x$ and $m_y$ are both greater than 10, then the following approximation is correct to order $O(n^{-3})$. -$$ - r_\mathrm{ corr } = r + c_x^2 \frac{ m_y }{ m_x } - \frac{ s_{ xy } }{ m_x^2 } -$$ - -A jackknife estimate of the ratio is less biased than the naive form. A jackknife estimator of the ratio is -$$ - r_\mathrm{corr} = nr - \frac{ n - 1 }{ n } \sum_{ i = 1 }^n r_i -$$ - -where n is the size of the sample and the $r_i$ are estimated with the omission of one pair of variates at a time. - -An alternative method is to divide the sample into g groups each of size p with n = pg. Let $r_i$ be the estimate of the ith group. Then the estimator -$$ - r_\mathrm{corr} = gr - \frac{ g - 1 }{ g } \sum_{ i = 1 }^g r_i -$$ - -has a bias of at most $O(n^{-2})$. - -Other estimators based on the division of the sample into g groups are: -$$ - r_\mathrm{ corr } = \frac{ g }{ g + 1 } r - \frac{ 1 }{ g ( g - 1 ) } \sum_{ i = 1 }^g r_i -$$ -$$ - r_\mathrm{ corr } = \bar{ r } +\frac{ n }{ n - 1 } \frac{ m_y - \bar{ r } m_x }{ m_x } -$$ -$$ - r_\mathrm{ corr } = \bar{ r_g } + \frac{ g ( m_y - \bar{ r_g } m_x ) }{ m_x } -$$ - -where $ \bar{ r } $ is the mean of the ratios $r_i$ of the g groups and -$$ - \bar{ r_g } = \sum \frac{ r_i' }{ g } -$$ - -where $r_i'$ is the value of the sample ratio with the ith group omitted. - -Other methods of estimating a ratio estimator include maximum likelihood and bootstrapping. The following sampling algorithm (Lahiri's scheme) is based upon the description by Lohr. - -# Choose a number M = max( $x_1$, ..., $x_N$ ) where N is the population size. - -# Choose i at random from a uniform distribution on [1,N]. - -# Choose k at random from a uniform distribution on [1,M]. - -# If k ≤ $x_i$, then $x_i$ is retained in the sample. If not then it is rejected. - -# Repeat this process from step 2 until the desired sample size is obtained. - -The same procedure for the same desired sample size is carried out with the y variate. - -Lahiri's scheme as described by Lohr is biased high and so is interesting only for historical reasons. The Midzuno–Sen technique described below is recommended instead. - -In 1952 Midzuno and Sen independently described a sampling scheme that provides an unbiased estimator of the ratio. - -The first sample is chosen with probability proportional to the size of the x variate. The remaining n - 1 samples are chosen at random without replacement from the remaining N - 1 members in the population. The probability of selection under this scheme is -$$ - P = \frac{ \sum x_i } { { N - 1 \choose n - 1 } X } -$$ - -where X is the sum of the N x variates and the $x_i$ are the n members of the sample.
Then the ratio of the sum of the y variates and the sum of the x variates chosen in this fashion is an unbiased estimate of the ratio. - -In symbols we have -$$ - r = \frac { \sum y_i }{ \sum x_i } -$$ - -where $x_i$ and $y_i$ are chosen according to the scheme described above. - -The ratio estimator given by this scheme is unbiased. - -Särndal, Swensson, and Wretman credit Lahiri, Midzuno and Sen for the insights leading to this method, but Lahiri's technique is biased high. - -Tin (1965) described and compared ratio estimators proposed by Beale (1962) and Quenouille (1956) and proposed a modified approach (now referred to as Tin's method). These ratio estimators are commonly used to calculate pollutant loads from sampling of waterways, particularly where flow is measured more frequently than water quality. For example, see Quilbe et al. (2006). - -If a linear relationship between the x and y variates exists and the regression equation passes through the origin, then the estimated variance of the regression equation is always less than that of the ratio estimator. The precise relationship between the variances depends on the linearity of the relationship between the x and y variates: when the relationship is other than linear, the ratio estimate may have a lower variance than that estimated by regression. - -Although the ratio estimator may be of use in a number of settings, it is of particular use in two cases: - -* When the variates x and y are highly correlated through the origin. - -* In survey methodology, when estimating a weighted average in which the denominator indicates the sum of weights that reflect the total population size, but the total population size is unknown. - -The first known use of the ratio estimator was by John Graunt in England, who in 1662 estimated the ratio y/x, where y represented the total population and x the known total number of registered births in the same areas during the preceding year. - -Later Messance (~1765) and Moheau (1778) published very carefully prepared estimates for France based on enumeration of population in certain districts and on the count of births, deaths and marriages as reported for the whole country. The districts from which the ratio of inhabitants to births was determined constituted only a sample. - -In 1802, Laplace wished to estimate the population of France. No population census had been carried out and Laplace lacked the resources to count every individual. Instead he sampled 30 parishes whose total number of inhabitants was 2,037,615. The parish baptismal registrations were considered to be reliable estimates of the number of live births, so he used the total number of births over a three-year period. The sample estimate was 71,866.333 baptisms per year over this period, giving a ratio of one registered baptism for every 28.35 persons. The total number of baptismal registrations for France was also available to him and he assumed that the ratio of live births to population was constant. He then used the ratio from his sample to estimate the population of France. - -Karl Pearson said in 1897 that the ratio estimates are biased and cautioned against their use. diff --git a/wiki/wikipedia/5.txt b/wiki/wikipedia/5.txt deleted file mode 100644 index 0aa06ba38658f93d683227e7204e9e25fb385e08..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/5.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, the Peterson–Stein formula, introduced by Franklin P. Peterson and Norman Stein, describes the Spanier–Whitehead dual of a secondary cohomology operation.
diff --git a/wiki/wikipedia/50.txt b/wiki/wikipedia/50.txt deleted file mode 100644 index fb8053e3c0d79555df0c685b4f4361955449af40..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/50.txt +++ /dev/null @@ -1 +0,0 @@ -In finite group theory, a mathematical discipline, the Gilman–Griess theorem, proved by Robert Gilman and Robert Griess, classifies the finite simple groups of characteristic 2 type with e(G) ≥ 4 that have a "standard component", which covers one of the three cases of the trichotomy theorem. diff --git a/wiki/wikipedia/500.txt b/wiki/wikipedia/500.txt deleted file mode 100644 index 395380838997a17a72046a8219017377d13ccdfd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/500.txt +++ /dev/null @@ -1,181 +0,0 @@ -In mathematics, Watson's lemma, proved by G. N. Watson (1918, p. 133), has significant application within the theory of the asymptotic behavior of integrals. - -Let $0 < T \leq \infty$ be fixed. Assume $\varphi(t)=t^\lambda g(t)$, where $g(t)$ has an infinite number of derivatives in the neighborhood of $t=0$, with $g(0)\neq 0$, and $\lambda > -1$. - -Suppose, in addition, either that -$$ -|\varphi(t)| < Ke^{bt} \ \forall t>0, -$$ - -where $K,b$ are independent of $t$, or that -$$ -\int_0^T |\varphi(t)| \mathrm dt < \infty. -$$ - -Then it is true for all positive $x$ that -$$ -\left|\int_0^T e^{-x t}\varphi(t) \mathrm dt\right| < \infty -$$ - -and that the following asymptotic equivalence holds: -$$ -\int_0^T e^{-x t}\varphi(t) \mathrm dt \sim\ \sum_{n=0}^\infty \frac{g^{(n)}(0)\ \Gamma(\lambda+n+1)}{n!\ x^{\lambda+n+1}},\ \ (x>0,\ x\rightarrow \infty). -$$ - -See, for instance, Watson for the original proof or Miller for a more recent development. - -We will prove the version of Watson's lemma which assumes that $|\varphi(t)|$ has at most exponential growth as $t \to \infty$. The basic idea behind the proof is that we will approximate $g(t)$ by finitely many terms of its Taylor series. Since the derivatives of $g$ are only assumed to exist in a neighborhood of the origin, we will essentially proceed by removing the tail of the integral, applying Taylor's theorem with remainder in the remaining small interval, then adding the tail back on in the end. At each step we will carefully estimate how much we are throwing away or adding on. This proof is a modification of the one found in Miller. - -Let $0 < T \leq \infty$ and suppose that $\varphi$ is a measurable function of the form $\varphi(t) = t^\lambda g(t)$, where $\lambda > -1$ and $g$ has an infinite number of continuous derivatives in the interval $[0,\delta]$ for some $0 < \delta < T$, and that $|\varphi(t)| \leq Ke^{bt}$ for all $\delta \leq t \leq T$, where the constants $K$ and $b$ are independent of $t$. - -We can show that the integral is finite for $x$ large enough by writing -$$ -(1) \quad \int_0^T e^{-xt}\varphi(t)\mathrm dt = \int_0^\delta e^{-xt} \varphi(t)\mathrm dt + \int_\delta^T e^{-xt}\varphi(t)\mathrm dt -$$ - -and estimating each term. - -For the first term we have -$$ -\left|\int_0^\delta e^{-xt}\varphi(t)\mathrm dt\right| \leq \int_0^\delta e^{-xt}|\varphi(t)|\mathrm dt \leq \int_0^\delta |\varphi(t)|\mathrm dt -$$ - -for $x \geq 0$, where the last integral is finite by the assumptions that $g$ is continuous on the interval $[0,\delta]$ and that $\lambda > -1$.
For the second term we use the assumption that $\varphi$ is exponentially bounded to see that, for $x > b$, - -\begin{align} - -\left|\int_\delta^T e^{-xt}\varphi(t)\mathrm dt\right| &\leq \int_\delta^T e^{-xt} |\varphi(t)|\mathrm dt \\ - -&\leq K \int_\delta^T e^{(b-x)t}\mathrm dt \\ - -&\leq K \int_\delta^\infty e^{(b-x)t}\mathrm dt \\ - -&= K \frac{e^{(b-x)\delta}}{x-b}. - -\end{align} - -The finiteness of the original integral then follows from applying the triangle inequality to $(1)$. - -We can deduce from the above calculation that -$$ -(2) \quad \int_0^T e^{-xt}\varphi(t)\mathrm dt = \int_0^\delta e^{-xt} \varphi(t)\mathrm dt + O\left(x^{-1} e^{-\delta x}\right) -$$ - -as $x \to \infty$. - -By appealing to Taylor's theorem with remainder we know that, for each integer $N \geq 0$, -$$ -g(t) = \sum_{n=0}^{N} \frac{g^{(n)}(0)}{n!}t^n + \frac{g^{(N+1)}(t^*)}{(N+1)!}t^{N+1} -$$ - -for $0 \leq t \leq \delta$, where $0 \leq t^* \leq t$. Plugging this in to the first term in $(2)$ we get - -\begin{align} - -(3) \quad \int_0^\delta e^{-xt} \varphi(t)\mathrm dt &= \int_0^\delta e^{-xt} t^\lambda g(t)\mathrm dt \\ - -&= \sum_{n=0}^{N} \frac{g^{(n)}(0)}{n!} \int_0^\delta t^{\lambda + n} e^{-xt}\mathrm dt + \frac{1}{(N+1)!} \int_0^\delta g^{(N+1)}(t^*) t^{\lambda+N+1} e^{-xt}\mathrm dt. - -\end{align} - -To bound the term involving the remainder we use the assumption that $g^{(N+1)}$ is continuous on the interval $[0,\delta]$, and in particular it is bounded there. As such we see that - -\begin{align} - -\left|\int_0^\delta g^{(N+1)}(t^*) t^{\lambda+N+1} e^{-xt}\mathrm dt\right| &\leq \sup_{t \in [0,\delta]} \left|g^{(N+1)}(t)\right| \int_0^\delta t^{\lambda+N+1} e^{-xt}\mathrm dt \\ - -&< \sup_{t \in [0,\delta]} \left|g^{(N+1)}(t)\right| \int_0^\infty t^{\lambda+N+1} e^{-xt}\mathrm dt \\ - -&= \sup_{t \in [0,\delta]} \left|g^{(N+1)}(t)\right| \frac{\Gamma(\lambda + N + 2)}{x^{\lambda+N+2}}. - -\end{align} - -Here we have used the fact that -$$ -\int_0^\infty t^a e^{-xt}\mathrm dt = \frac{\Gamma(a+1)}{x^{a+1}} -$$ - -if $x > 0$ and $a > -1$, where $\Gamma$ is the gamma function. - -From the above calculation we see from $(3)$ that -$$ -(4) \quad \int_0^\delta e^{-xt} \varphi(t)\mathrm dt = \sum_{n=0}^N \frac{g^{(n)}(0)}{n!} \int_0^\delta t^{\lambda + n} e^{-xt}\mathrm dt + O\left(x^{-\lambda-N-2}\right) -$$ - -as $x \to \infty$. - -We will now add the tails on to each integral in $(4)$. For each $n$ we have - -\begin{align} - -\int_0^\delta t^{\lambda + n} e^{-xt}\mathrm dt &= \int_0^\infty t^{\lambda + n} e^{-xt}\mathrm dt - \int_\delta^\infty t^{\lambda + n} e^{-xt}\mathrm dt \\[5pt] - -&= \frac{\Gamma(\lambda+n+1)}{x^{\lambda+n+1}} - \int_\delta^\infty t^{\lambda + n} e^{-xt}\mathrm dt, - -\end{align} - -and we will show that the remaining integrals are exponentially small. Indeed, if we make the change of variables $t = s + \delta$ we get - -\begin{align} - -\int_\delta^\infty t^{\lambda + n} e^{-xt}\mathrm dt &= \int_0^\infty (s+\delta)^{\lambda + n} e^{-x(s+\delta)}ds \\[5pt] - -&= e^{-\delta x} \int_0^\infty (s+\delta)^{\lambda + n} e^{-xs}ds \\[5pt] - -&\leq e^{-\delta x} \int_0^\infty (s+\delta)^{\lambda + n} e^{-s}ds - -\end{align} - -for $x \geq 1$, so that -$$ -\int_0^\delta t^{\lambda + n} e^{-xt}\mathrm dt = \frac{\Gamma(\lambda+n+1)}{x^{\lambda+n+1}} + O\left(e^{-\delta x}\right) \text{ as } x \to \infty. 
-$$ - -If we substitute this last result into $(4)$ we find that - -\begin{align} - -\int_0^\delta e^{-xt} \varphi(t)\mathrm dt &= \sum_{n=0}^{N} \frac{g^{(n)}(0) \ \Gamma(\lambda+n+1)}{n! \ x^{\lambda+n+1}} + O\left(e^{-\delta x}\right) + O\left(x^{-\lambda-N-2}\right) \\ - -&= \sum_{n=0}^{N} \frac{g^{(n)}(0) \ \Gamma(\lambda+n+1)}{n! \ x^{\lambda+n+1}} + O\left(x^{-\lambda-N-2}\right) - -\end{align} - -as $x \to \infty$. Finally, substituting this into $(2)$ we conclude that - -\begin{align} - -\int_0^T e^{-xt}\varphi(t)\mathrm dt &= \sum_{n=0}^{N} \frac{g^{(n)}(0) \ \Gamma(\lambda+n+1)}{n! \ x^{\lambda+n+1}} + O\left(x^{-\lambda-N-2}\right) + O\left(x^{-1} e^{-\delta x}\right) \\ - -&= \sum_{n=0}^{N} \frac{g^{(n)}(0) \ \Gamma(\lambda+n+1)}{n! \ x^{\lambda+n+1}} + O\left(x^{-\lambda-N-2}\right) - -\end{align} - -as $x \to \infty$. - -Since this last expression is true for each integer $N \geq 0$ we have thus shown that -$$ -\int_0^T e^{-xt}\varphi(t)\mathrm dt \sim \sum_{n=0}^\infty \frac{g^{(n)}(0) \ \Gamma(\lambda+n+1)}{n! \ x^{\lambda+n+1}} -$$ - -as $x \to \infty$, where the infinite series is interpreted as an asymptotic expansion of the integral in question. - -When $0 < a < b$, the confluent hypergeometric function of the first kind has the integral representation -$$ -{}_1F_1(a,b,x) = \frac{\Gamma(b)}{\Gamma(a) \Gamma(b-a)}\int_0^1 e^{xt} t^{a-1} (1-t)^{b-a-1}\mathrm dt, -$$ - -where $\Gamma$ is the gamma function. The change of variables $t = 1-s$ puts this into the form -$$ -{}_1F_1(a,b,x) = \frac{\Gamma(b)}{\Gamma(a) \Gamma(b-a)}e^x\int_0^1 e^{-xs} (1-s)^{a-1} s^{b-a-1}ds, -$$ - -which is now amenable to the use of Watson's lemma. Taking $\lambda = b-a-1$ and $g(s) = (1-s)^{a-1}$, Watson's lemma tells us that -$$ -\int_0^1 e^{-xs} (1-s)^{a-1} s^{b-a-1}ds \sim \Gamma(b-a) x^{a-b} \quad \text{as } x \to \infty \text{ with } x > 0, -$$ - -which allows us to conclude that -$$ - {}_1F_1(a,b,x) \sim \frac{\Gamma(b)}{\Gamma(a)}x^{a-b} e^x \quad \text{as } x \to \infty \text{ with } x > 0. -$$ diff --git a/wiki/wikipedia/501.txt b/wiki/wikipedia/501.txt deleted file mode 100644 index 454fccb5ad15558b3a7525d005915fd6ff4fdf8b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/501.txt +++ /dev/null @@ -1,23 +0,0 @@ -In plane geometry, a triangle ABC contains a triangle having one-seventh of the area of ABC, which is formed as follows: the sides of this triangle lie on cevians p, q, r where - -p connects A to a point on BC that is one-third the distance from B to C, - -q connects B to a point on CA that is one-third the distance from C to A, - -r connects C to a point on AB that is one-third the distance from A to B. - -The proof of the existence of the one-seventh area triangle follows from the construction of six parallel lines: - -two parallel to p, one through C, the other through q.r - -two parallel to q, one through A, the other through r.p - -two parallel to r, one through B, the other through p.q. - -The suggestion of Hugo Steinhaus is that the (central) triangle with sides p,q,r be reflected in its sides and vertices. These six extra triangles partially cover ABC, and leave six overhanging extra triangles lying outside ABC. Focusing on the parallelism of the full construction (offered by Martin Gardner through James Randi's on-line magazine), the pair-wise congruences of overhanging and missing pieces of ABC are evident. As seen in the graphical solution, the six reflected triangles plus the original central triangle together cover an area equal to that of the whole triangle ABC.
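The one-seventh claim is also easy to confirm numerically; a minimal sketch, using arbitrary triangle coordinates and illustrative helper functions, builds the three cevians from the one-third points, intersects them pairwise, and compares the two areas:

```python
def line(p, q):
    # Coefficients (a, b, c) of the line a*x + b*y = c through points p and q.
    a, b = q[1] - p[1], p[0] - q[0]
    return a, b, a * p[0] + b * p[1]

def intersect(l1, l2):
    # Solve the 2x2 linear system by Cramer's rule.
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    d = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / d, (a1 * c2 - a2 * c1) / d)

def area(p, q, r):
    # Shoelace formula for a triangle.
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2

def third(p, q):
    # Point one-third of the way from p to q.
    return (p[0] + (q[0] - p[0]) / 3, p[1] + (q[1] - p[1]) / 3)

A, B, C = (0.0, 0.0), (7.0, 1.0), (2.0, 6.0)
p = line(A, third(B, C))
q = line(B, third(C, A))
r = line(C, third(A, B))

inner = (intersect(p, q), intersect(q, r), intersect(r, p))
print(area(A, B, C) / area(*inner))  # prints 7.0 (up to rounding)
```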
- -An early exhibit of this geometrical construction and area computation was given by Robert Potts in 1859 in his Euclidean geometry textbook. - -According to Cook and Wood (2004), this triangle puzzled Richard Feynman in a dinner conversation; they go on to give four different proofs. - -A more general result is known as Routh's theorem. diff --git a/wiki/wikipedia/502.txt b/wiki/wikipedia/502.txt deleted file mode 100644 index b7ee60bced2b582dd82a40f7a46d50355c30bc93..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/502.txt +++ /dev/null @@ -1,311 +0,0 @@ -In general relativity, a geodesic generalizes the notion of a "straight line" to curved spacetime. Importantly, the world line of a particle free from all external, non-gravitational forces is a particular type of geodesic. In other words, a freely moving or falling particle always moves along a geodesic. - -In general relativity, gravity can be regarded as not a force but a consequence of a curved spacetime geometry where the source of curvature is the stress–energy tensor (representing matter, for instance). Thus, for example, the path of a planet orbiting a star is the projection of a geodesic of the curved four-dimensional (4-D) spacetime geometry around the star onto three-dimensional (3-D) space. - -The full geodesic equation is -$$ - {d^2 x^\mu \over ds^2}+\Gamma^\mu {}_{\alpha \beta}{d x^\alpha \over ds}{d x^\beta \over ds}=0\ -$$ - -where s is a scalar parameter of motion (e.g. the proper time), and $ \Gamma^\mu {}_{\alpha \beta}$ are Christoffel symbols (sometimes called the affine connection coefficients or Levi-Civita connection coefficients) symmetric in the two lower indices. Greek indices may take the values: 0, 1, 2, 3 and the summation convention is used for repeated indices $\alpha$ and $\beta$. The quantity on the left-hand-side of this equation is the acceleration of a particle, so this equation is analogous to Newton's laws of motion, which likewise provide formulae for the acceleration of a particle. This equation of motion employs the Einstein notation, meaning that repeated indices are summed (i.e. from zero to three). The Christoffel symbols are functions of the four space-time coordinates and so are independent of the velocity or acceleration or other characteristics of a test particle whose motion is described by the geodesic equation. - -So far the geodesic equation of motion has been written in terms of a scalar parameter s. It can alternatively be written in terms of the time coordinate, $t \equiv x^0$ (here we have used the triple bar to signify a definition). The geodesic equation of motion then becomes: -$$ - {d^2 x^\mu \over dt^2} =- \Gamma^\mu {}_{\alpha \beta}{d x^\alpha \over dt}{d x^\beta \over dt}+ \Gamma^0 {}_{\alpha \beta}{d x^\alpha \over dt}{d x^\beta \over dt}{d x^\mu \over dt}\ . -$$ - -This formulation of the geodesic equation of motion can be useful for computer calculations and to compare General Relativity with Newtonian Gravity. It is straightforward to derive this form of the geodesic equation of motion from the form which uses proper time as a parameter using the chain rule. Notice that both sides of this last equation vanish when the mu index is set to zero. If the particle's velocity is small enough, then the geodesic equation reduces to this: -$$ - {d^2 x^n \over dt^2} =- \Gamma^n {}_{00}. -$$ - -Here the Latin index n takes the values [1,2,3]. 
This equation simply means that all test particles at a particular place and time will have the same acceleration, which is a well-known feature of Newtonian gravity. For example, everything floating around in the International Space Station will undergo roughly the same acceleration due to gravity. - -Physicist Steven Weinberg has presented a derivation of the geodesic equation of motion directly from the equivalence principle. The first step in such a derivation is to suppose that a free falling particle does not accelerate in the neighborhood of a point-event with respect to a freely falling coordinate system ($X^\mu$). Setting $T \equiv X^0$, we have the following equation that is locally applicable in free fall: -$$ - {d^2 X^\mu \over dT^2} = 0 . -$$ - -The next step is to employ the multi-dimensional chain rule. We have: -$$ - {d X^\mu \over dT}={d x^\nu \over dT} {\partial X^\mu \over \partial x^\nu} -$$ - -Differentiating once more with respect to the time, we have: -$$ - {d^2 X^\mu \over dT^2}={d^2 x^\nu \over dT^2} {\partial X^\mu \over \partial x^\nu} + {d x^\nu \over dT} {d x^\alpha \over dT} {\partial^2 X^\mu \over \partial x^\nu\partial x^\alpha} -$$ - -Therefore: -$$ - {d^2 x^\nu \over dT^2} {\partial X^\mu \over \partial x^\nu} =- {d x^\nu \over dT} {d x^\alpha \over dT} {\partial^2 X^\mu \over \partial x^\nu\partial x^\alpha} -$$ - -Multiply both sides of this last equation by the following quantity: -$$ - {\partial x^\lambda \over \partial X^\mu} -$$ - -Consequently, we have this: -$$ - {d^2 x^\lambda \over dT^2} = - {d x^\nu \over dT} {d x^\alpha \over dT} \left[{\partial^2 X^\mu \over \partial x^\nu\partial x^\alpha} {\partial x^\lambda \over \partial X^\mu}\right] . -$$ - -Using (from transformation law under change of variable and the fact that the Christoffel symbols vanish in an inertial frame of reference) -$$ -\Gamma^\lambda {}_{\nu \alpha} = \left[{\partial^2 X^\mu \over \partial x^\nu\partial x^\alpha} {\partial x^\lambda \over \partial X^\mu}\right] -$$ - -it becomes -$$ - {d^2 x^\lambda \over dT^2} = - \Gamma^{\lambda}_{\nu \alpha} {d x^\nu \over dT} {d x^\alpha \over dT} . -$$ - -Applying the one-dimensional chain rule gives -$$ - {d^2 x^\lambda \over d t^2} \left( \frac{d t}{d T} \right)^2 + {d x^\lambda \over d t} \frac{d^2 t}{d T^2} = - \Gamma^{\lambda}_{\nu \alpha} {d x^\nu \over d t} {d x^\alpha \over d t} \left( \frac{d t}{d T} \right)^2 . -$$ -$$ - {d^2 x^\lambda \over d t^2} + {d x^\lambda \over d t} \frac{d^2 t}{d T^2} \left( \frac{d T}{d t} \right)^2 = - \Gamma^{\lambda}_{\nu \alpha} {d x^\nu \over d t} {d x^\alpha \over d t} . -$$ - -As before, we can set $t \equiv x^0$. Then the first derivative of x0 with respect to t is one and the second derivative is zero. Replacing λ with zero gives: -$$ - \frac{d^2 t}{d T^2} \left( \frac{d T}{d t} \right)^2 = - \Gamma^{0}_{\nu \alpha} {d x^\nu \over d t} {d x^\alpha \over d t} . -$$ - -Subtracting d xλ / d t times this from the previous equation gives: -$$ - {d^2 x^\lambda \over dt^2} = - \Gamma^{\lambda}_{\nu \alpha} {d x^\nu \over dt} {d x^\alpha \over dt} + \Gamma^{0}_{\nu \alpha} {d x^\nu \over dt} {d x^\alpha \over dt}{d x^\lambda \over dt} -$$ - -which is a form of the geodesic equation of motion (using the coordinate time as parameter). - -The geodesic equation of motion can alternatively be derived using the concept of parallel transport. - -We can (and this is the most common technique) derive the geodesic equation via the action principle. 
Consider the case of trying to find a geodesic between two timelike-separated events. - -Let the action be -$$ -S=\int ds -$$ - -where $ds=\sqrt{-g_{\mu\nu}(x) dx^\mu dx^\nu}$ is the line element. There is a negative sign inside the square root because the curve must be timelike. To get the geodesic equation we must vary this action. To do this, let us parameterize this action with respect to a parameter $\lambda$. Doing this we get: -$$ -S=\int\sqrt{-g_{\mu\nu}\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda}} d\lambda -$$ - -We can now go ahead and vary this action with respect to the curve $x^{\mu}$. By the principle of least action we get: -$$ -0=\delta S=\int\delta\left(\sqrt{-g_{\mu\nu}\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda}}\right) d\lambda =\int\frac{\delta\left(-g_{\mu\nu}\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda}\right)}{2\sqrt{-g_{\mu\nu}\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda}}}d\lambda -$$ - -Using the product rule we get: -$$ -0=\int\left(\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\tau}\delta g_{\mu\nu}+g_{\mu\nu}\frac{d\delta x^\mu}{d\lambda}\frac{dx^\nu}{d\tau} + g_{\mu\nu} \frac{dx^\mu}{d\tau} \frac{d\delta x^\nu}{d\lambda}\right) d\lambda = \int\left(\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\tau} \partial_\alpha g_{\mu\nu} \delta x^\alpha +2g_{\mu\nu}\frac{d\delta x^\mu}{d\lambda}\frac{dx^\nu}{d\tau}\right) d\lambda -$$ - -where -$$ - \frac{d\tau}{d\lambda} = \sqrt{-g_{\mu\nu}\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda}} -$$ - -Integrating the last term by parts and dropping the total derivative (which equals zero at the boundaries) we get that: -$$ -0=\int \left(\frac{dx^\mu}{d\tau}\frac{dx^\nu}{d\tau}\partial_\alpha g_{\mu\nu}\delta x^\alpha-2\delta x^\mu\frac{d}{d\tau} \left(g_{\mu\nu} \frac{dx^\nu}{d\tau}\right)\right) d\tau = \int \left(\frac{dx^{\mu}}{d\tau}\frac{dx^\nu}{d\tau} \partial_\alpha g_{\mu\nu}\delta x^\alpha-2\delta x^\mu \partial_\alpha g_{\mu\nu}\frac{dx^\alpha}{d\tau}\frac{dx^\nu}{d\tau}-2\delta x^\mu g_{\mu\nu}\frac{d^2 x^\nu}{d\tau^2}\right) d\tau -$$ - -Simplifying a bit we see that: -$$ -0=\int \left(-2g_{\mu\nu}\frac{d^2 x^\nu}{d\tau^2}+\frac{dx^\alpha}{d\tau}\frac{dx^\nu}{d\tau} \partial_\mu g_{\alpha\nu} - 2\frac{dx^\alpha}{d\tau} \frac{dx^\nu}{d\tau} \partial_\alpha g_{\mu\nu}\right) \delta x^\mu d\tau -$$ - -so, -$$ -0=\int \left(-2g_{\mu\nu}\frac{d^2 x^\nu}{d\tau^2}+\frac{dx^\alpha}{d\tau}\frac{dx^\nu}{d\tau}\partial_\mu g_{\alpha\nu}-\frac{dx^\alpha}{d\tau}\frac{dx^\nu}{d\tau}\partial_\alpha g_{\mu\nu}-\frac{dx^\nu}{d\tau} \frac{dx^\alpha}{d\tau} \partial_\nu g_{\mu\alpha}\right) \delta x^\mu d\tau -$$ - -Multiplying this equation by $-\frac{1}{2}$ we get: -$$ -0=\int \left(g_{\mu\nu}\frac{d^2 x^\nu}{d\tau^2}+\frac{1}{2} \frac{dx^\alpha}{d\tau}\frac{dx^\nu}{d\tau}\left(\partial_\alpha g_{\mu\nu} + \partial_\nu g_{\mu\alpha}-\partial_\mu g_{\alpha\nu}\right)\right) \delta x^\mu d\tau -$$ - -So by Hamilton's principle we find that the Euler–Lagrange equation is -$$ -g_{\mu\nu}\frac{d^{2}x^{\nu}}{d\tau^{2}}+\frac{1}{2}\frac{dx^{\alpha}}{d\tau}\frac{dx^{\nu}}{d\tau}\left(\partial_{\alpha}g_{\mu\nu}+\partial_{\nu}g_{\mu\alpha}-\partial_{\mu}g_{\alpha\nu}\right)=0 -$$ - -Multiplying by the inverse metric tensor $g^{\mu\beta}$ we get that -$$ -\frac{d^{2}x^{\beta}}{d\tau^{2}}+\frac{1}{2}g^{\mu\beta}\left(\partial_{\alpha}g_{\mu\nu}+\partial_{\nu}g_{\mu\alpha}-\partial_{\mu}g_{\alpha\nu}\right)\frac{dx^{\alpha}}{d\tau}\frac{dx^{\nu}}{d\tau}=0 -$$ - -Thus we get the geodesic equation: -$$
-\frac{d^{2}x^{\beta}}{d\tau^{2}}+\Gamma^{\beta}{}_{\alpha\nu}\frac{dx^{\alpha}}{d\tau}\frac{dx^{\nu}}{d\tau}=0 -$$ - -with the Christoffel symbol defined in terms of the metric tensor as -$$ -\Gamma^{\beta}{}_{\alpha\nu}=\frac{1}{2}g^{\mu\beta}\left(\partial_{\alpha}g_{\mu\nu}+\partial_{\nu}g_{\mu\alpha}-\partial_{\mu}g_{\alpha\nu}\right) -$$ - -(Note: Similar derivations, with minor amendments, can be used to produce analogous results for geodesics between light-like or space-like separated pairs of points.) - -Albert Einstein believed that the geodesic equation of motion can be derived from the field equations for empty space, i.e. from the fact that the Ricci curvature vanishes. He wrote: - -
    It has been shown that this law of motion — generalized to the case of arbitrarily large gravitating masses — can be derived from the field equations of empty space alone. According to this derivation the law of motion is implied by the condition that the field be singular nowhere outside its generating mass points.
    - -and - -
    One of the imperfections of the original relativistic theory of gravitation was that as a field theory it was not complete; it introduced the independent postulate that the law of motion of a particle is given by the equation of the geodesic. - -A complete field theory knows only fields and not the concepts of particle and motion. For these must not exist independently from the field but are to be treated as part of it. - -On the basis of the description of a particle without singularity, one has the possibility of a logically more satisfactory treatment of the combined problem: The problem of the field and that of the motion coincide.
    - -Both physicists and philosophers have often repeated the assertion that the geodesic equation can be obtained from the field equations to describe the motion of a gravitational singularity, but this claim remains disputed. Less controversial is the notion that the field equations determine the motion of a fluid or dust, as distinguished from the motion of a point-singularity. - -In deriving the geodesic equation from the equivalence principle, it was assumed that particles in a local inertial coordinate system are not accelerating. However, in real life, the particles may be charged, and therefore may be accelerating locally in accordance with the Lorentz force. That is: -$$ - {d^2 X^\mu \over ds^2} = {q \over m} {F^{\mu \beta}} {d X^\alpha \over ds}{\eta_{\alpha \beta}}. -$$ - -with -$$ - {\eta_{\alpha \beta}}{d X^\alpha \over ds}{d X^\beta \over ds}=-1. -$$ - -The Minkowski tensor $\eta_{\alpha \beta}$ is given by: -$$ -\eta_{\alpha \beta} = \begin{pmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix} -$$ - -These last three equations can be used as the starting point for the derivation of an equation of motion in General Relativity, instead of assuming that acceleration is zero in free fall. Because the Minkowski tensor is involved here, it becomes necessary to introduce something called the metric tensor in General Relativity. The metric tensor g is symmetric, and locally reduces to the Minkowski tensor in free fall. The resulting equation of motion is as follows: -$$ - {d^2 x^\mu \over ds^2} =- \Gamma^\mu {}_{\alpha \beta}{d x^\alpha \over ds}{d x^\beta \over ds}\ +{q \over m} {F^{\mu \beta}} {d x^\alpha \over ds}{g_{\alpha \beta}}. -$$ - -with -$$ - {g_{\alpha \beta}}{d x^\alpha \over ds}{d x^\beta \over ds}=-1. -$$ - -This last equation signifies that the particle is moving along a timelike geodesic; massless particles like the photon instead follow null geodesics (replace −1 with zero on the right-hand side of the last equation). It is important that the last two equations are consistent with each other, when the latter is differentiated with respect to proper time, and the following formula for the Christoffel symbols ensures that consistency: -$$ -\Gamma^{\lambda}{}_{\alpha\beta}=\frac{1}{2}g^{\lambda \tau} \left(\frac{\partial g_{\tau\alpha}}{\partial x^\beta} + \frac{\partial g_{\tau\beta}}{\partial x^{\alpha}} - \frac{\partial g_{\alpha\beta}}{\partial x^{\tau}} \right) -$$ - -This last equation does not involve the electromagnetic fields, and it is applicable even in the limit as the electromagnetic fields vanish. The letter g with superscripts refers to the inverse of the metric tensor. In General Relativity, indices of tensors are lowered and raised by contraction with the metric tensor or its inverse, respectively. - -A geodesic between two events can also be described as the curve joining those two events which has a stationary interval (4-dimensional "length"). Stationary here is used in the sense in which that term is used in the calculus of variations, namely, that the interval along the curve varies minimally among curves that are nearby to the geodesic. - -In Minkowski space there is only one geodesic that connects any given pair of events, and for a time-like geodesic, this is the curve with the longest proper time between the two events. In curved spacetime, it is possible for a pair of widely separated events to have more than one time-like geodesic between them. In such instances, the proper times along several geodesics will not in general be the same. 
For some geodesics in such instances, it is possible for a curve that connects the two events and is nearby to the geodesic to have either a longer or a shorter proper time than the geodesic. - -For a space-like geodesic through two events, there are always nearby curves which go through the two events that have either a longer or a shorter proper length than the geodesic, even in Minkowski space. In Minkowski space, the geodesic will be a straight line. Any curve that differs from the geodesic purely spatially (i.e. does not change the time coordinate) in any inertial frame of reference will have a longer proper length than the geodesic, but a curve that differs from the geodesic purely temporally (i.e. does not change the space coordinates) in such a frame of reference will have a shorter proper length. - -The interval of a curve in spacetime is -$$ - l = \int \sqrt{\left|g_{\mu \nu} \dot x^\mu \dot x^\nu \right|} ds\ . -$$ - -Then, the Euler–Lagrange equation, -$$ - {d \over ds} {\partial \over \partial \dot x^\alpha} \sqrt{\left| g_{\mu \nu} \dot x^\mu \dot x^\nu \right|} = {\partial \over \partial x^\alpha} \sqrt{\left| g_{\mu \nu} \dot x^\mu \dot x^\nu \right|} \ , -$$ - -becomes, after some calculation, -$$ - 2\left(\Gamma^\lambda {}_{\mu \nu} \dot x^\mu \dot x^\nu + \ddot x^\lambda\right) = U^\lambda {d \over ds} \ln |U_\nu U^\nu| \ , -$$ - -where $ U^\mu = \dot x^\mu .$ - -The goal is to find a curve for which the value of -$$ - l = \int d\tau = \int {d\tau \over d\phi} d\phi = \int \sqrt{\left({d\tau \over d\phi}\right)^2} d\phi = \int \sqrt{-g_{\mu \nu} {dx^\mu \over d\phi}{dx^\nu \over d\phi}} d\phi = \int f d\phi -$$ - -is stationary, where -$$ - f = \sqrt{-g_{\mu \nu} \dot x^\mu \dot x^\nu} -$$ - -Such a goal can be accomplished by calculating the Euler–Lagrange equation for f, which is -$$ - {d \over d\tau} {\partial f \over \partial \dot x^\lambda} = {\partial f \over \partial x^\lambda} -$$.
- -Substituting the expression of f into the Euler–Lagrange equation (which makes the value of the integral l stationary), gives -$$ - {d \over d\tau} {\partial \sqrt{-g_{\mu \nu} \dot x^\mu \dot x^\nu} \over \partial \dot x^\lambda} = {\partial \sqrt{-g_{\mu \nu} \dot x^\mu \dot x^\nu} \over \partial x^\lambda} -$$ - -Now calculate the derivatives: -$$ - {d \over d\tau} \left( {-g_{\mu \nu} {\partial \dot x^\mu \over \partial \dot x^\lambda} \dot x^\nu - g_{\mu \nu} \dot x^\mu {\partial \dot x^\nu \over \partial \dot x^\lambda} \over 2 \sqrt{-g_{\mu \nu} \dot x^\mu \dot x^\nu}} \right) = {-g_{\mu \nu, \lambda} \dot x^\mu \dot x^\nu \over 2 \sqrt{-g_{\mu \nu} \dot x^\mu \dot x^\nu}} \qquad \qquad (1) -$$ -$$ - {d \over d\tau} \left( {g_{\mu \nu} \delta^\mu {}_\lambda \dot x^\nu + g_{\mu \nu} \dot x^\mu \delta^\nu {}_\lambda \over 2 \sqrt{-g_{\mu \nu} \dot x^\mu \dot x^\nu}} \right) = {g_{\mu \nu , \lambda} \dot x^\mu \dot x^\nu \over 2 \sqrt{-g_{\mu \nu} \dot x^\mu \dot x^\nu}} \qquad \qquad (2) -$$ -$$ - {d \over d\tau} \left( {g_{\lambda \nu} \dot x^\nu + g_{\mu \lambda} \dot x^\mu \over \sqrt{-g_{\mu \nu} \dot x^\mu \dot x^\nu}} \right) = {g_{\mu \nu , \lambda} \dot x^\mu \dot x^\nu \over \sqrt{-g_{\mu \nu} \dot x^\mu \dot x^\nu}} \qquad \qquad (3) -$$ -$$ - {\sqrt{-g_{\mu \nu} \dot x^\mu \dot x^\nu} {d \over d\tau} (g_{\lambda \nu} \dot x^\nu + g_{\mu \lambda} \dot x^\mu) - (g_{\lambda \nu} \dot x^\nu + g_{\mu \lambda} \dot x^\mu) {d \over d\tau} \sqrt{-g_{\mu \nu} \dot x^\mu \dot x^\nu} \over -g_{\mu \nu} \dot x^\mu \dot x^\nu} = {g_{\mu \nu , \lambda} \dot x^\mu \dot x^\nu \over \sqrt{-g_{\mu \nu} \dot x^\mu \dot x^\nu}} \qquad \qquad (4) -$$ -$$ - {(-g_{\mu \nu} \dot x^\mu \dot x^\nu) {d \over d\tau} (g_{\lambda \nu} \dot x^\nu + g_{\mu \lambda} \dot x^\mu) + {1 \over 2} (g_{\lambda \nu} \dot x^\nu + g_{\mu \lambda} \dot x^\mu) {d \over d\tau} (g_{\mu \nu} \dot x^\mu \dot x^\nu) \over -g_{\mu \nu} \dot x^\mu \dot x^\nu} = g_{\mu \nu ,\lambda} \dot x^\mu \dot x^\nu \qquad \qquad (5) -$$ -$$ - (g_{\mu \nu} \dot x^\mu \dot x^\nu) (g_{\lambda \nu ,\mu} \dot x^\nu \dot x^\mu + g_{\mu \lambda ,\nu} \dot x^\mu \dot x^\nu + g_{\lambda \nu} \ddot x^\nu + g_{\lambda \mu} \ddot x^\mu) -$$ -$$ -= (g_{\mu \nu ,\lambda} \dot x^\mu \dot x^\nu) (g_{\alpha \beta} \dot x^\alpha \dot x^\beta) + {1 \over 2} (g_{\lambda \nu} \dot x^\nu + g_{\lambda \mu} \dot x^\mu) {d \over d\tau} (g_{\mu \nu} \dot x^\mu \dot x^\nu) \qquad \qquad (6) -$$ -$$ - g_{\lambda \nu ,\mu} \dot x^\mu \dot x^\nu + g_{\lambda \mu ,\nu} \dot x^\mu \dot x^\nu - g_{\mu \nu ,\lambda} \dot x^\mu \dot x^\nu + 2 g_{\lambda \mu} \ddot x^\mu = {\dot x_\lambda {d \over d\tau} (g_{\mu \nu} \dot x^\mu \dot x^\nu) \over g_{\alpha \beta} \dot x^\alpha \dot x^\beta} \qquad \qquad (7) -$$ -$$ - 2(\Gamma_{\lambda \mu \nu} \dot x^\mu \dot x^\nu + \ddot x_\lambda) = {\dot x_\lambda {d \over d\tau} (\dot x_\nu \dot x^\nu) \over \dot x_\beta \dot x^\beta} = {U_\lambda {d \over d\tau} (U_\nu U^\nu) \over U_\beta U^\beta} = U_\lambda {d \over d\tau} \ln |U_\nu U^\nu| \qquad \qquad (8) -$$ - -This is just one step away from the geodesic equation. - -If the parameter s is chosen to be affine, then the right side of the above equation vanishes (because $U_\nu U^\nu$ is constant). Finally, we have the geodesic equation -$$ - \Gamma^\lambda {}_{\mu \nu} \dot x^\mu \dot x^\nu + \ddot x^\lambda = 0\ . -$$ - -The geodesic equation can be alternatively derived from the autoparallel transport of curves. The derivation is based on the lectures given by Frederic P. 
Schuller at the We-Heraeus International Winter School on Gravity & Light. - -Let $ (M,O,A,\nabla)$ be a smooth manifold with connection and $\gamma $ be a curve on the manifold. The curve is said to be autoparallely transported if and only if $\nabla_{v_{\gamma}}v_{\gamma}=0 $. - -In order to derive the geodesic equation, we have to choose a chart $(U,x) \in A$: -$$ -\nabla_{\dot \gamma^i \frac{\partial}{\partial x^i}} \left( \dot \gamma^m \frac{\partial}{\partial x^m} \right)=0 -$$ - -Using the $ C^{\infty} $ linearity and the Leibniz rule: -$$ - \dot \gamma^i \left( \nabla_{\frac{\partial}{\partial x^i}} \dot \gamma^m \right) \frac{\partial}{\partial x^m}+\dot \gamma^i \dot \gamma^m \nabla_{\frac{\partial}{\partial x^i}}\left( \frac{\partial}{\partial x^m} \right)=0 -$$ - -Using how the connection acts on functions ($\dot \gamma^m $) and expanding the second term with the help of the connection coefficient functions: -$$ - \dot \gamma^i \frac{ \partial \dot \gamma^m}{\partial x^i} \frac{\partial}{\partial x^m}+\dot \gamma^i \dot \gamma^m \Gamma^{q}_{im} \frac{\partial}{\partial x^q}=0 -$$ - -The first term can be simplified to $\ddot \gamma^m \frac{\partial}{\partial x^m} $. Renaming the dummy indices: -$$ - \ddot \gamma^q \frac{\partial}{\partial x^q}+\dot \gamma^i \dot \gamma^m \Gamma^{q}_{im} \frac{\partial}{\partial x^q}=0 -$$ - -We finally arrive at the geodesic equation: -$$ - \ddot \gamma^q +\dot \gamma^i \dot \gamma^m \Gamma^{q}_{im}=0 -$$ diff --git a/wiki/wikipedia/503.txt b/wiki/wikipedia/503.txt deleted file mode 100644 index 799b31723c894e04f835dced29e54261a7b3f311..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/503.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, cyclical monotonicity is a generalization of the notion of monotonicity to the case of vector-valued functions. - -Let $\langle\cdot,\cdot\rangle$ denote the inner product on an inner product space $X$ and let $U$ be a nonempty subset of $X$. A correspondence $f: U \rightrightarrows X$ is called cyclically monotone if for every set of points $x_1,\dots,x_{m+1} \in U$ with $x_{m+1}=x_1$ it holds that $\sum_{k=1}^m \langle x_{k+1},f(x_{k+1})-f(x_k)\rangle\geq 0.$ - -* For the case of scalar functions of one variable, the definition above is equivalent to usual monotonicity. - -* Gradients of convex functions are cyclically monotone. - -* In fact, the converse is true. Suppose $U$ is convex and $f: U \rightrightarrows \mathbb{R}^n$ is a correspondence with nonempty values. If $f$ is cyclically monotone, then there exists an upper semicontinuous convex function $F:U\to \mathbb{R}$ such that $f(x)\subset \partial F(x)$ for every $x\in U$, where $\partial F(x)$ denotes the subgradient of $F$ at $x$. diff --git a/wiki/wikipedia/504.txt b/wiki/wikipedia/504.txt deleted file mode 100644 index e902bc4bcb06fea1caf7e179831a3d88606f0fcf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/504.txt +++ /dev/null @@ -1,13 +0,0 @@ -Mario's Picross is a 1995 puzzle video game for the Game Boy. Developed by Jupiter and Ape and published by Nintendo, it is a compilation of nonogram logic puzzles. The game stars Mario, who chisels away at puzzle grids to form pictures. The game initially received positive reviews, with reviewers citing its length and addictive nature as positives, but its grid sizes and absence of typical Mario elements as negatives. - -Although the game sold well in Japan, it flopped in English-speaking regions.
As a result, the game was followed by two sequels, Mario's Super Picross and Picross 2, released only in Japan. The next Picross game published by Nintendo to be released in English-speaking regions would be Picross DS in 2007, twelve years later. Due to its limited sales, the game is somewhat of a cult classic. The game is also available on the Nintendo 3DS through its Virtual Console service.

In Mario's Picross, the player is presented with a puzzle grid (either 5 by 5, 10 by 10 or 15 by 15 spaces in size, depending on the difficulty chosen) that they must chisel at in accordance with the numerical hints provided on the upper and left-hand edges of the grid in order to reveal a picture. In addition to the ability to chisel spaces, the player is also able to mark spaces with a cross to signify that the space is not meant to be chiseled. The player must use these numerical hints to fill in the grid both vertically and horizontally. If the player incorrectly chisels a space, some of the remaining time is deducted: upon the player's first error, two minutes are deducted; upon the second, four minutes; upon the third, eight minutes. Mistakes after the third continue to deduct eight minutes each. Additionally, a "With Hint" option is available at the beginning of the puzzle. Choosing this will start a roulette with the numbers labelling the columns and rows. The game did not meet the market trends of English-speaking regions and failed there. Due to the failure of this game in English-speaking regions, future nonogram logic puzzles from Nintendo were not released in those regions until Picross DS in 2007. The success of Picross DS resulted in worldwide Picross releases, as well as a re-release of the game on the Nintendo 3DS online store's Virtual Console.

Mario's Picross received positive reviews upon its release. The four reviewers of Electronic Gaming Monthly criticized the game's focus on logic and reasoning instead of rapid button presses, saying it makes the game boring to play after the first few puzzles. GamePro gave it a more mixed review. They criticized the repetitive music and the fact that Mario does not appear in the main graphics, but acknowledged that the game is "undeniably addicting, especially if you love numbers". Marcel van Duyn of Nintendo Life cited the game's addictive nature, volume of puzzles and soundtrack as high points in his review. Mike Rose of Pocket Gamer stated that although the game has sometimes unintuitive controls and always has the Hint system default to 'yes', the game represents the Mario series well and is a workout for the brain. Andrew Brown of Nintendo World Report criticized the localization of the game and the game's attempt to fit on the small screen of the Game Boy, reasoning that Mario's Super Picross or Picross DS would be a better choice for first-time Picross players. Chris Scullion of Official Nintendo Magazine praised the game's use of characters from the Super Mario series, although he felt that Mario's Picross would feel like a "slight step backwards" to those who had already played other Picross games.

Despite a large advertising campaign by Nintendo, the game failed to sell well in America and Europe, leading the game's sequels to be confined to Japan. As a result, the game is seen as something of a cult classic in English-speaking regions.
The main criticism aimed at the game was the size of its grids; due to the small size of the Game Boy screen, the game's puzzle grids are restricted to being just 15 by 15, when puzzles four times that size were common in other media. Complex ranked the game 24th in their list of the best Game Boy games, citing how it is a rewarding experience for those with inquisitive minds.

Whilst the game's sales were lackluster in English-speaking regions, the game succeeded enough in Japan to spawn two Japan-exclusive sequels, Mario's Super Picross for the Super Famicom and Picross 2, a direct sequel to Mario's Picross, on the Game Boy. The continued success of these games in Japan saw Nintendo create the Picross NP series in 1999, a Japan-exclusive series of games meant to promote other, larger games or series. These games were meant for use with the Japan-only Nintendo Power peripheral for the Super Nintendo Entertainment System. There were eight instalments of this series, each one having a different theme: Pokémon, Yoshi's Story, Kirby, Star Fox 64, The Legend of Zelda: Ocarina of Time, Super Mario 64, Wario Land II and Donkey Kong Country, respectively. The series ran from 1999 to 2000.

The series had a hiatus until 2007, when Picross DS for the Nintendo DS was released worldwide. Picross DS was quite well-received in comparison to Mario's Picross upon its release. This success led to Nintendo Picross games becoming a worldwide series, with games appearing on future iterations of Nintendo handhelds, including Picross 3D, a 3D re-imagining of traditional Picross, the Picross e series, on Nintendo 3DS, and Picross S, on Nintendo Switch. Mario's Picross was re-released on the Nintendo 3DS Virtual Console on July 14, 2011. As the original iteration of the series, Mario's Picross has been referenced in other Mario video games. Picross DS features downloadable puzzles taken from Mario's Picross. The explorer attire Mario wears both in-game and in promotional material makes a cameo appearance in Super Mario Odyssey as an unlockable costume.
diff --git a/wiki/wikipedia/505.txt b/wiki/wikipedia/505.txt deleted file mode 100644 index 810120fc4a88c6e698ded1b78e84d20a3df8a3d0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/505.txt +++ /dev/null @@ -1,292 +0,0 @@

In linear algebra, a branch of mathematics, the polarization identity is any one of a family of formulas that express the inner product of two vectors in terms of the norm of a normed vector space.

If a norm arises from an inner product then the polarization identity can be used to express this inner product entirely in terms of the norm. The polarization identity shows that a norm can arise from at most one inner product; however, there exist norms that do not arise from any inner product.

The norm associated with any inner product space satisfies the parallelogram law: $\|x+y\|^2 + \|x-y\|^2 = 2\|x\|^2 + 2\|y\|^2.$

In fact, as observed by John von Neumann, the parallelogram law characterizes those norms that arise from inner products. Explicitly, if $(H, \|\cdot\|)$ is any normed space then: the parallelogram law holds for $\|\cdot\|$ if and only if there exists an inner product $\langle \cdot, \cdot \rangle$ on $H$ such that $\|x\|^2 = \langle x,\ x\rangle$ for all $x \in H,$ in which case this inner product is uniquely determined by the norm via the polarization identity.
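This characterization is easy to probe numerically. The following is a minimal sketch (assuming NumPy is available; the test vectors are arbitrary) that checks the parallelogram law for the Euclidean norm, which does arise from an inner product, and exhibits its failure for the 1-norm, which does not:

```
import numpy as np

def parallelogram_defect(norm, x, y):
    """Return ||x+y||^2 + ||x-y||^2 - 2||x||^2 - 2||y||^2 (zero iff the law holds)."""
    return norm(x + y)**2 + norm(x - y)**2 - 2 * norm(x)**2 - 2 * norm(y)**2

x = np.array([1.0, 2.0, -3.0])
y = np.array([0.5, -1.0, 4.0])

# Euclidean norm: the defect vanishes (up to rounding), consistent with an inner product.
print(parallelogram_defect(np.linalg.norm, x, y))  # ~0.0

# 1-norm: the defect is -10.0 for these vectors, so no inner product induces this norm.
print(parallelogram_defect(lambda v: np.linalg.norm(v, 1), x, y))
```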
Any inner product on a vector space induces a norm by the equation
$$
\|x\| = \sqrt{\langle x, x \rangle}.
$$

The polarization identities reverse this relationship, recovering the inner product from the norm. Every inner product satisfies:
$$
\|x + y\|^2 = \|x\|^2 + \|y\|^2 + 2\operatorname{Re}\langle x, y \rangle \qquad \text{ for all vectors } x, y.
$$

If the vector space is over the real numbers then the polarization identities are:

\begin{alignat}{4}
\langle x, y \rangle
&= \frac{1}{4} \left(\|x+y\|^2 - \|x-y\|^2\right) \\[3pt]
&= \frac{1}{2} \left(\|x+y\|^2 - \|x\|^2 - \|y\|^2\right) \\[3pt]
&= \frac{1}{2} \left(\|x\|^2 + \|y\|^2 - \|x-y\|^2\right).
\end{alignat}

These various forms are all equivalent by the parallelogram law:
$$
2\|x\|^2 + 2\|y\|^2 = \|x+y\|^2 + \|x-y\|^2.
$$

For vector spaces over the complex numbers, the above formulas are not quite correct because they do not describe the imaginary part of the (complex) inner product. However, an analogous expression does ensure that both real and imaginary parts are retained. The imaginary part of the inner product depends on whether it is antilinear in the first or the second argument. The notation $\langle x | y \rangle,$ which is commonly used in physics, will be assumed to be antilinear in the first argument, while $\langle x, y \rangle,$ which is commonly used in mathematics, will be assumed to be antilinear in the second argument. They are related by the formula:
$$
\langle x, y \rangle = \langle y | x \rangle \quad \text{ for all } x, y \in H.
$$

The real part of any inner product (no matter which argument is antilinear and no matter if it is real or complex) is a symmetric bilinear map that for any $x, y \in H$ is always equal to:

\begin{alignat}{4}
R(x, y)
&= \operatorname{Re} \langle x \mid y \rangle = \operatorname{Re} \langle x, y \rangle \\
&= \frac{1}{4} \left(\|x+y\|^2 - \|x-y\|^2\right) \\
&= \frac{1}{2} \left(\|x+y\|^2 - \|x\|^2 - \|y\|^2\right) \\[3pt]
&= \frac{1}{2} \left(\|x\|^2 + \|y\|^2 - \|x-y\|^2\right).
\end{alignat}

It is always a symmetric map, meaning that
$$
R(x, y) = R(y, x) \quad \text{ for all } x, y \in H,
$$
and it also satisfies:
$$
R(y, ix) = - R(x, iy) \quad \text{ for all } x, y \in H.
$$

Thus $R(ix, y) = - R(x, iy),$ which in plain English says that to move a factor of $i = \sqrt{-1}$ to the other argument, introduce a negative sign.

Unlike its real part, the imaginary part of a complex inner product depends on which argument is antilinear.

Antilinear in first argument: For the inner product $\langle x | y \rangle,$ which is antilinear in the first argument, for any $x, y \in H,$

\begin{alignat}{4}
\langle x | y \rangle
&= \frac{1}{4} \left(\|x+y\|^2 - \|x-y\|^2 - i\|x + iy\|^2 + i\|x - iy\|^2\right) \\
&= R(x, y) - i R(x, iy) \\
&= R(x, y) + i R(ix, y).
\end{alignat}

The second to last equality is similar to the formula expressing a linear functional $\varphi$ in terms of its real part: $\varphi(y) = \operatorname{Re} \varphi(y) - i (\operatorname{Re} \varphi)(i y).$

Antilinear in second argument: The formula for the inner product $\langle x, \ y \rangle,$ which is antilinear in the second argument, follows from that of $\langle x | y \rangle$ by the relationship:
$$
\langle x, \ y \rangle := \langle y | x \rangle = \overline{\langle x | y \rangle}.
$$

So for any $x, y \in H,$

\begin{alignat}{4}
\langle x, y \rangle
&= \frac{1}{4} \left(\|x+y\|^2 - \|x-y\|^2 + i\|x + iy\|^2 - i\|x - iy\|^2\right) \\
&= R(x, y) + i R(x, iy) \\
&= R(x, y) - i R(ix, y).
\end{alignat}

This expression can be phrased symmetrically as:
$$
\langle x, y \rangle = \frac{1}{4} \sum_{k=0}^3 i^k \left\|x + i^k y\right\|^2.
$$

Thus if $R(x, y) + i I(x, y)$ denotes the real and imaginary parts of some inner product's value at the point $(x, y) \in H \times H$ of its domain, then its imaginary part will be
$$
I(x, y) ~=~ \begin{cases}
R(ix, y) & \quad \text{ if antilinear in the 1st argument} \\
R(x, iy) & \quad \text{ if antilinear in the 2nd argument}
\end{cases}
$$
where the scalar $i$ is always located in the same argument that the inner product is antilinear in.

In a normed space $(H, \|\cdot\|),$ if the parallelogram law
$$
\|x+y\|^2 ~+~ \|x-y\|^2 ~=~ 2\|x\|^2+2\|y\|^2
$$
holds, then there exists a unique inner product $\langle \cdot,\ \cdot\rangle$ on $H$ such that $\|x\|^2 = \langle x,\ x\rangle$ for all $x \in H.$

We will only give the real case here; the proof for complex vector spaces is analogous. By the above formulas, if the norm is described by an inner product (as we hope), then it must satisfy
$$
\langle x, \ y \rangle = \frac{1}{4} \left(\|x+y\|^2 - \|x-y\|^2\right) \quad \text{ for all } x, y \in H.
$$

It remains to prove that this formula defines an inner product and that this inner product induces the norm $\|\cdot\|.$ Explicitly, the following will be shown:

(1) $\langle x, x \rangle = \|x\|^2, \quad x \in H$

(2) $\langle x, y \rangle = \langle y, x \rangle, \quad x, y \in H$

(3) $\langle x+z, y\rangle = \langle x, y\rangle + \langle z, y\rangle \quad \text{ for all } x, y, z \in H$

(4) $\langle \alpha x, y \rangle = \alpha\langle x, y \rangle \quad \text{ for all } x, y \in H \text{ and all } \alpha \in \R$

(This axiomatization omits positivity, which is implied by (1) and the fact that $\|\cdot\|$ is a norm.)

For properties (1) and (2), substitute: $\langle x, x \rangle = \frac{1}{4} \left(\|x+x\|^2 - \|x-x\|^2\right) = \|x\|^2,$ and $\|x-y\|^2 = \|y-x\|^2.$

For property (3), it is convenient to work in reverse. It remains to show that
$$
\|x+z+y\|^2 - \|x+z-y\|^2 \overset{?}{=} \|x+y\|^2 - \|x-y\|^2 + \|z+y\|^2 - \|z-y\|^2
$$
or equivalently,
$$
2\left(\|x+z+y\|^2 + \|x-y\|^2\right) - 2\left(\|x+z-y\|^2 + \|x+y\|^2\right) \overset{?}{=} 2\|z+y\|^2 - 2\|z-y\|^2.
$$

Now apply the parallelogram identity:
$$
2\|x+z+y\|^2 + 2\|x-y\|^2 = \|2x+z\|^2 + \|2y+z\|^2
$$
$$
2\|x+z-y\|^2 + 2\|x+y\|^2 = \|2x+z\|^2 + \|z-2y\|^2
$$

Thus it remains to verify:
$$
\cancel{\|2x+z\|^2} + \|2y+z\|^2 - (\cancel{\|2x+z\|^2} + \|z-2y\|^2) \overset{?}{=} 2\|z+y\|^2 - 2\|z-y\|^2
$$
$$
\|2y+z\|^2 - \|z-2y\|^2 \overset{?}{=} 2\|z+y\|^2 - 2\|z-y\|^2
$$

But the latter claim can be verified by subtracting the following two further applications of the parallelogram identity:
$$
\|2y+z\|^2 + \|z\|^2 = 2\|z+y\|^2 + 2\|y\|^2
$$
$$
\|z-2y\|^2 + \|z\|^2 = 2\|z-y\|^2 + 2\|y\|^2
$$

Thus (3) holds. It can be verified by induction that (3) implies (4), as long as $\alpha \in \Z.$ But "(4) when $\alpha \in \Z$" implies "(4) when $\alpha \in \Q$". And any positive-definite, real-valued, $\Q$-bilinear form satisfies the Cauchy–Schwarz inequality, so that $\langle \cdot,\cdot \rangle$ is continuous. Thus $\langle \cdot,\cdot \rangle$ must be $\R$-linear as well.
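As a sanity check of the symmetric four-term identity above, the sketch below (assuming NumPy; it uses the mathematics convention, antilinear in the second argument) recovers a complex inner product purely from norms:

```
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

def inner(u, v):
    # Mathematics convention: linear in the first argument, antilinear in the second.
    return np.sum(u * np.conj(v))

def inner_from_norm(u, v):
    # <u, v> = (1/4) * sum_{k=0}^{3} i^k * ||u + i^k v||^2
    return sum(1j**k * np.linalg.norm(u + 1j**k * v)**2 for k in range(4)) / 4

print(np.allclose(inner(x, y), inner_from_norm(x, y)))  # True
```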
Another necessary and sufficient condition for there to exist an inner product that induces a given norm $\|\cdot\|$ is for the norm to satisfy Ptolemy's inequality, which is:
$$
\|x - y\| \|z\| ~+~ \|y - z\| \|x\| ~\geq~ \|x - z\| \|y\| \qquad \text{ for all vectors } x, y, z.
$$

If $H$ is a complex Hilbert space then $\langle x \mid y \rangle$ is real if and only if its imaginary part is $0 = R(x, iy) = \frac{1}{4} \left(\|x+iy\|^2 - \|x-iy\|^2\right),$ which happens if and only if $\|x+iy\| = \|x-iy\|.$ Similarly, $\langle x \mid y \rangle$ is (purely) imaginary if and only if $\|x+y\| = \|x-y\|.$ For example, from $\|x+ix\| = |1+i| \|x\| = \sqrt{2} \|x\| = |1-i| \|x\| = \|x-ix\|$ it can be concluded that $\langle x | x \rangle$ is real and that $\langle x | ix \rangle$ is purely imaginary.

If $A : H \to Z$ is a linear isometry between two Hilbert spaces (so $\|A h\| = \|h\|$ for all $h \in H$) then
$$
\langle A h, A k \rangle_Z = \langle h, k \rangle_H \quad \text{ for all } h, k \in H;
$$
that is, linear isometries preserve inner products.

If $A : H \to Z$ is instead an antilinear isometry then
$$
\langle A h, A k \rangle_Z = \overline{\langle h, k \rangle_H} = \langle k, h \rangle_H \quad \text{ for all } h, k \in H.
$$

The second form of the polarization identity can be written as
$$
\|\textbf{u}-\textbf{v}\|^2 = \|\textbf{u}\|^2 + \|\textbf{v}\|^2 - 2(\textbf{u} \cdot \textbf{v}).
$$

This is essentially a vector form of the law of cosines for the triangle formed by the vectors $\textbf{u}, \textbf{v},$ and $\textbf{u}-\textbf{v}.$ In particular,
$$
\textbf{u}\cdot\textbf{v} = \|\textbf{u}\|\|\textbf{v}\| \cos\theta,
$$
where $\theta$ is the angle between the vectors $\textbf{u}$ and $\textbf{v}.$

The basic relation between the norm and the dot product is given by the equation
$$
\|\textbf{v}\|^2 = \textbf{v} \cdot \textbf{v}.
$$

Then

\begin{align}
\|\textbf{u} + \textbf{v}\|^2
&= (\textbf{u} + \textbf{v}) \cdot (\textbf{u} + \textbf{v}) \\[3pt]
&= (\textbf{u} \cdot \textbf{u}) + (\textbf{u} \cdot \textbf{v}) + (\textbf{v} \cdot \textbf{u}) + (\textbf{v} \cdot \textbf{v}) \\[3pt]
&= \|\textbf{u}\|^2 + \|\textbf{v}\|^2 + 2(\textbf{u} \cdot \textbf{v}),
\end{align}

and similarly
$$
\|\textbf{u} - \textbf{v}\|^2 = \|\textbf{u}\|^2 + \|\textbf{v}\|^2 - 2(\textbf{u} \cdot \textbf{v}).
$$

Forms (1) and (2) of the polarization identity now follow by solving these equations for $\textbf{u} \cdot \textbf{v},$ while form (3) follows from subtracting these two equations. (Adding these two equations together gives the parallelogram law.)

The polarization identities are not restricted to inner products. If $B$ is any symmetric bilinear form on a vector space, and $Q$ is the quadratic form defined by
$$
Q(v) = B(v, v),
$$
then

\begin{align}
2 B(u, v) &= Q(u + v) - Q(u) - Q(v), \\
2 B(u, v) &= Q(u) + Q(v) - Q(u - v), \\
4 B(u, v) &= Q(u + v) - Q(u - v).
\end{align}

The so-called symmetrization map generalizes the latter formula, replacing $Q$ by a homogeneous polynomial of degree $k$ defined by $Q(v) = B(v, \ldots, v),$ where $B$ is a symmetric $k$-linear map.

The formulas above even apply in the case where the field of scalars has characteristic two, though the left-hand sides are all zero in this case.
Consequently, in characteristic two there is no formula for a symmetric bilinear form in terms of a quadratic form, and they are in fact distinct notions, a fact which has important consequences in L-theory; for brevity, in this context "symmetric bilinear forms" are often referred to as "symmetric forms".

These formulas also apply to bilinear forms on modules over a commutative ring, though again one can only solve for $B(u, v)$ if 2 is invertible in the ring, and otherwise these are distinct notions. For example, over the integers, one distinguishes integral quadratic forms from integral symmetric forms, which are a narrower notion.

More generally, in the presence of a ring involution or where 2 is not invertible, one distinguishes $\varepsilon$-quadratic forms and $\varepsilon$-symmetric forms; a symmetric form defines a quadratic form, and the polarization identity (without a factor of 2) from a quadratic form to a symmetric form is called the "symmetrization map", and is not in general an isomorphism. This has historically been a subtle distinction: over the integers it was not until the 1950s that the relation between "twos out" (integral quadratic form) and "twos in" (integral symmetric form) was understood – see discussion at integral quadratic form; and in the algebraization of surgery theory, Mishchenko originally used symmetric L-groups, rather than the correct quadratic L-groups (as in Wall and Ranicki) – see discussion at L-theory.

Finally, in any of these contexts these identities may be extended to homogeneous polynomials (that is, algebraic forms) of arbitrary degree, where they are known as the polarization formula, and are reviewed in greater detail in the article on the polarization of an algebraic form.
diff --git a/wiki/wikipedia/506.txt b/wiki/wikipedia/506.txt deleted file mode 100644 index 13cb6115af46084c6d7637122226806744d722e2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/506.txt +++ /dev/null @@ -1,29 +0,0 @@

In complex analysis, the monodromy theorem is an important result about analytic continuation of a complex-analytic function to a larger set. The idea is that one can extend a complex-analytic function (from here on called simply an analytic function) along curves starting in the original domain of the function and ending in the larger set. A potential problem of this analytic-continuation-along-a-curve strategy is that there are usually many curves which end up at the same point in the larger set. The monodromy theorem gives sufficient conditions for analytic continuation to give the same value at a given point regardless of the curve used to get there, so that the resulting extended analytic function is well-defined and single-valued.

Before stating this theorem it is necessary to define analytic continuation along a curve and study its properties.

The definition of analytic continuation along a curve is a bit technical, but the basic idea is that one starts with an analytic function defined around a point, and one extends that function along a curve via analytic functions defined on small overlapping disks covering that curve.
Formally, consider a curve (a continuous function) $\gamma:[0, 1]\to \Complex.$ Let $f$ be an analytic function defined on an open disk $U$ centered at $\gamma(0).$ An analytic continuation of the pair $(f, U)$ along $\gamma$ is a collection of pairs $(f_t, U_t)$ for $0\le t\le 1$ such that

* $f_0=f$ and $U_0=U.$

* For each $t\in [0, 1]$, $U_t$ is an open disk centered at $\gamma(t)$ and $f_t:U_t\to\Complex$ is an analytic function.

* For each $t\in [0, 1]$ there exists $\varepsilon >0$ such that for all $t'\in [0, 1]$ with $|t-t'|<\varepsilon$ one has that $\gamma(t')\in U_t$ (which implies that $U_t$ and $U_{t'}$ have a non-empty intersection) and the functions $f_t$ and $f_{t'}$ coincide on the intersection $U_t\cap U_{t'}.$

Analytic continuation along a curve is essentially unique, in the sense that given two analytic continuations $(f_t, U_t)$ and $(g_t, V_t)$ $(0\le t\le 1)$ of $(f, U)$ along $\gamma,$ the functions $f_1$ and $g_1$ coincide on $U_1\cap V_1.$ Informally, this says that any two analytic continuations of $(f, U)$ along $\gamma$ will end up with the same values in a neighborhood of $\gamma(1).$

If the curve $\gamma$ is closed (that is, $\gamma(0)=\gamma(1)$), one need not have $f_0$ equal $f_1$ in a neighborhood of $\gamma(0).$ For example, if one starts at a point $(a, 0)$ with $a>0$ and the complex logarithm defined in a neighborhood of this point, and one lets $\gamma$ be the circle of radius $a$ centered at the origin (traveled counterclockwise from $(a, 0)$), then by doing an analytic continuation along this curve one will end up with a value of the logarithm at $(a, 0)$ which is $2\pi i$ plus the original value.

As noted earlier, two analytic continuations along the same curve yield the same result at the curve's endpoint. However, given two different curves branching out from the same point around which an analytic function is defined, with the curves reconnecting at the end, it is not true in general that the analytic continuations of that function along the two curves will yield the same value at their common endpoint.

Indeed, one can consider, as in the previous section, the complex logarithm defined in a neighborhood of a point $(a, 0)$ and the circle centered at the origin and radius $a.$ Then, it is possible to travel from $(a, 0)$ to $(-a, 0)$ in two ways, counterclockwise, on the upper half-plane arc of this circle, and clockwise, on the lower half-plane arc. The values of the logarithm at $(-a, 0)$ obtained by analytic continuation along these two arcs will differ by $2\pi i.$

If, however, one can continuously deform one of the curves into another while keeping the starting points and ending points fixed, and analytic continuation is possible on each of the intermediate curves, then the analytic continuations along the two curves will yield the same results at their common endpoint. This is called the monodromy theorem and its statement is made precise below.

Let $U$ be an open disk in the complex plane centered at a point $P$ and $f:U\to \Complex$ be a complex-analytic function. Let $Q$ be another point in the complex plane.
If there exists a family of curves $\gamma_s:[0, 1]\to \Complex$ with $s\in [0, 1]$ such that $\gamma_s(0)=P$ and $\gamma_s(1)=Q$ for all $s\in [0, 1],$ the function $(s, t)\in [0, 1]\times[0, 1]\to \gamma_s(t)\in \Complex$ is continuous, and for each $s\in [0, 1]$ it is possible to do an analytic continuation of $f$ along $\gamma_s,$ then the analytic continuations of $f$ along $\gamma_0$ and $\gamma_1$ will yield the same values at $Q.$

The monodromy theorem makes it possible to extend an analytic function to a larger set via curves connecting a point in the original domain of the function to points in the larger set. The theorem below, which makes this precise, is also called the monodromy theorem.

Let $U$ be an open disk in the complex plane centered at a point $P$ and $f:U\to\Complex$ be a complex-analytic function. If $W$ is an open simply-connected set containing $U,$ and it is possible to perform an analytic continuation of $f$ on any curve contained in $W$ which starts at $P,$ then $f$ admits a direct analytic continuation to $W,$ meaning that there exists a complex-analytic function $g:W\to\Complex$ whose restriction to $U$ is $f.$
diff --git a/wiki/wikipedia/507.txt b/wiki/wikipedia/507.txt deleted file mode 100644 index 98075bc42bd571c31fcb76d7dc5978fda8419569..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/507.txt +++ /dev/null @@ -1,68 +0,0 @@

In information theory, Pinsker's inequality, named after its inventor Mark Semenovich Pinsker, is an inequality that bounds the total variation distance (or statistical distance) in terms of the Kullback–Leibler divergence. The inequality is tight up to constant factors.

Pinsker's inequality states that, if $P$ and $Q$ are two probability distributions on a measurable space $(X, \Sigma)$, then
$$
\delta(P,Q) \le \sqrt{\frac{1}{2} D_{\mathrm{KL}}(P\|Q)},
$$
where
$$
\delta(P,Q)=\sup \bigl\{ |P(A) - Q(A)| \big| A \in \Sigma \text{ is a measurable event} \bigr\}
$$
is the total variation distance (or statistical distance) between $P$ and $Q$ and
$$
D_{\mathrm{KL}}(P\|Q) = \operatorname{E}_P \left( \log \frac{\mathrm{d} P}{\mathrm{d} Q} \right) = \int_X \left( \log \frac{\mathrm{d} P}{\mathrm{d} Q} \right) \mathrm{d} P
$$
is the Kullback–Leibler divergence in nats. When the sample space $X$ is a finite set, the Kullback–Leibler divergence is given by
$$
D_{\mathrm{KL}}(P\|Q) = \sum_{i \in X} \left( \log \frac{P(i)}{Q(i)}\right) P(i).
$$

Note that in terms of the total variation norm $\| P - Q \|$ of the signed measure $P - Q$, Pinsker's inequality differs from the one given above by a factor of two:
$$
\| P - Q \| \le \sqrt{2 D_{\mathrm{KL}}(P\|Q)}.
$$

A proof of Pinsker's inequality uses the partition inequality for f-divergences.

There is an alternative statement of Pinsker's inequality in some literature that relates information divergence to variation distance:
$$
D(P\| Q) \ge \frac{1}{2 \ln 2} V^2(p, q),
$$
in which
$$
V(p, q) = \sum_{x \in \mathcal{X}} |p(x) - q(x)|
$$
is the variation distance between two probability density functions $p$ and $q$ on the same alphabet $\mathcal{X}$.

This form of Pinsker's inequality shows that "convergence in divergence" is a stronger notion than "convergence in variation distance".

Pinsker first proved the inequality with a greater constant. The inequality in the above form was proved independently by Kullback, Csiszár, and Kemperman.
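For a concrete feel for the bound, here is a minimal numerical sketch (assuming NumPy; the two distributions on a two-point space are arbitrary) comparing the total variation distance, computed as half the $\ell_1$ distance on a finite space, with the Pinsker bound $\sqrt{D_{\mathrm{KL}}(P\|Q)/2}$:

```
import numpy as np

def kl_nats(p, q):
    """Kullback-Leibler divergence D_KL(P || Q) in nats for finite distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def tv(p, q):
    """Total variation distance = half the l1 distance on a finite space."""
    return 0.5 * float(np.sum(np.abs(np.asarray(p) - np.asarray(q))))

P = [0.3, 0.7]
Q = [0.6, 0.4]
# Prints roughly 0.300 and 0.303: the total variation distance sits just
# below the Pinsker bound for this pair, illustrating the inequality's tightness.
print(tv(P, Q), np.sqrt(0.5 * kl_nats(P, Q)))
```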
A precise inverse of the inequality cannot hold: for every $\varepsilon > 0$, there are distributions $P_\varepsilon, Q$ with $\delta(P_\varepsilon,Q)\le\varepsilon$ but $D_{\mathrm{KL}}(P_\varepsilon\|Q) = \infty$. An easy example is given by the two-point space $\{0,1\}$ with $Q(0) = 0, Q(1) = 1$ and $P_\varepsilon(0) = \varepsilon, P_\varepsilon(1) = 1-\varepsilon$.

However, an inverse inequality holds on finite spaces $X$ with a constant depending on $Q$. More specifically, it can be shown that with the definition $\alpha_Q := \min_{x \in X: Q(x) > 0} Q(x)$ we have for any measure $P$ which is absolutely continuous with respect to $Q$
$$
\frac{1}{2} D_{\mathrm{KL}}(P\|Q) \le \frac{1}{\alpha_Q} \delta(P,Q)^2.
$$

As a consequence, if $Q$ has full support (i.e. $Q(x) > 0$ for all $x \in X$), then
$$
\delta(P,Q)^2 \le \frac{1}{2} D_{\mathrm{KL}}(P\|Q) \le \frac{1}{\alpha_Q} \delta(P,Q)^2.
$$
diff --git a/wiki/wikipedia/508.txt b/wiki/wikipedia/508.txt deleted file mode 100644 index c5c794ad0b3be09e361bce9c100e6973b307a712..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/508.txt +++ /dev/null @@ -1,13 +0,0 @@

In computational complexity, strong NP-completeness is a property of computational problems that is a special case of NP-completeness. A general computational problem may have numerical parameters. For example, the input to the bin packing problem is a list of objects of specific sizes and a size for the bins that must contain the objects—these object sizes and bin size are numerical parameters.

A problem is said to be strongly NP-complete (NP-complete in the strong sense) if it remains NP-complete even when all of its numerical parameters are bounded by a polynomial in the length of the input. A problem is said to be strongly NP-hard if a strongly NP-complete problem has a polynomial reduction to it; in combinatorial optimization, particularly, the phrase "strongly NP-hard" is reserved for problems that are not known to have a polynomial reduction to another strongly NP-complete problem.

Normally numerical parameters to a problem are given in positional notation, so a problem of input size n might contain parameters whose size is exponential in n. If we redefine the problem to have the parameters given in unary notation, then the parameters must be bounded by the input size. Thus strong NP-completeness or NP-hardness may also be defined as the NP-completeness or NP-hardness of this unary version of the problem.

For example, bin packing is strongly NP-complete while the 0-1 Knapsack problem is only weakly NP-complete. Thus the version of bin packing where the object and bin sizes are integers bounded by a polynomial remains NP-complete, while the corresponding version of the Knapsack problem can be solved in pseudo-polynomial time by dynamic programming (a sketch of such a dynamic program is given below).

From a theoretical perspective, any strongly NP-hard optimization problem with a polynomially bounded objective function cannot have a fully polynomial-time approximation scheme (or FPTAS) unless P = NP. However, the converse fails: e.g. if P does not equal NP, knapsack with two constraints is not strongly NP-hard, but has no FPTAS even when the optimal objective is polynomially bounded.

Some strongly NP-complete problems may still be easy to solve on average, but it is more likely that difficult instances will be encountered in practice.
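As referenced above, the following is a minimal sketch (in Python; the item data are arbitrary) of the standard pseudo-polynomial dynamic program for 0-1 knapsack. Its $O(n \cdot W)$ running time is polynomial in the numeric capacity $W$ but exponential in the number of bits needed to write $W$ down, which is exactly the behaviour that makes the problem only weakly NP-complete:

```
def knapsack_max_value(values, weights, capacity):
    """0-1 knapsack by dynamic programming over capacities 0..capacity.

    Runs in O(n * capacity) time: pseudo-polynomial, since `capacity`
    is a numeric value, not the length of its binary encoding.
    """
    best = [0] * (capacity + 1)  # best[c] = max value achievable with capacity c
    for value, weight in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

# Example: items with values [6, 10, 12] and weights [1, 2, 3], capacity 5.
print(knapsack_max_value([6, 10, 12], [1, 2, 3], 5))  # 22 (take the last two items)
```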
diff --git a/wiki/wikipedia/509.txt b/wiki/wikipedia/509.txt deleted file mode 100644 index 87b8cc4d03274d76671529b2001ec08c676d529f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/509.txt +++ /dev/null @@ -1,38 +0,0 @@ -Gentzen's consistency proof is a result of proof theory in mathematical logic, published by Gerhard Gentzen in 1936. It shows that the Peano axioms of first-order arithmetic do not contain a contradiction (i.e. are "consistent"), as long as a certain other system used in the proof does not contain any contradictions either. This other system, today called "primitive recursive arithmetic with the additional principle of quantifier-free transfinite induction up to the ordinal ε0", is neither weaker nor stronger than the system of Peano axioms. Gentzen argued that it avoids the questionable modes of inference contained in Peano arithmetic and that its consistency is therefore less controversial. - -Gentzen's theorem is concerned with first-order arithmetic: the theory of the natural numbers, including their addition and multiplication, axiomatized by the first-order Peano axioms. This is a "first-order" theory: the quantifiers extend over natural numbers, but not over sets or functions of natural numbers. The theory is strong enough to describe recursively defined integer functions such as exponentiation, factorials or the Fibonacci sequence. - -Gentzen showed that the consistency of the first-order Peano axioms is provable over the base theory of primitive recursive arithmetic with the additional principle of quantifier-free transfinite induction up to the ordinal ε0. Primitive recursive arithmetic is a much simplified form of arithmetic that is rather uncontroversial. The additional principle means, informally, that there is a well-ordering on the set of finite rooted trees. Formally, ε0 is the first ordinal $\alpha$ such that $\omega^\alpha = \alpha$, i.e. the limit of the sequence -$$ -\omega,\ \omega^\omega,\ \omega^{\omega^\omega},\ \ldots -$$ - -It is a countable ordinal much smaller than large countable ordinals. To express ordinals in the language of arithmetic, an ordinal notation is needed, i.e. a way to assign natural numbers to ordinals less than ε0. This can be done in various ways, one example provided by Cantor's normal form theorem. Gentzen's proof is based on the following assumption: for any quantifier-free formula A(x), if there is an ordinal a< ε0 for which A(a) is false, then there is a least such ordinal. - -Gentzen defines a notion of "reduction procedure" for proofs in Peano arithmetic. For a given proof, such a procedure produces a tree of proofs, with the given one serving as the root of the tree, and the other proofs being, in a sense, "simpler" than the given one. This increasing simplicity is formalized by attaching an ordinal < ε0 to every proof, and showing that, as one moves down the tree, these ordinals get smaller with every step. He then shows that if there were a proof of a contradiction, the reduction procedure would result in an infinite strictly descending sequence of ordinals smaller than ε0 produced by a primitive recursive operation on proofs corresponding to a quantifier-free formula. - -Gentzen's proof highlights one commonly missed aspect of Gödel's second incompleteness theorem. It is sometimes claimed that the consistency of a theory can only be proved in a stronger theory. 
Gentzen's theory obtained by adding quantifier-free transfinite induction to primitive recursive arithmetic proves the consistency of first-order Peano arithmetic (PA) but does not contain PA. For example, it does not prove ordinary mathematical induction for all formulae, whereas PA does (since all instances of induction are axioms of PA). Gentzen's theory is not contained in PA, either, however, since it can prove a number-theoretical fact—the consistency of PA—that PA cannot. Therefore, the two theories are, in one sense, incomparable. - -That said, there are other, more powerful ways to compare the strength of theories, the most important of which is defined in terms of the notion of interpretability. It can be shown that, if one theory T is interpretable in another B, then T is consistent if B is. (Indeed, this is a large point of the notion of interpretability.) And, assuming that T is not extremely weak, T itself will be able to prove this very conditional: If B is consistent, then so is T. Hence, T cannot prove that B is consistent, by the second incompleteness theorem, whereas B may well be able to prove that T is consistent. This is what motivates the idea of using interpretability to compare theories, i.e., the thought that, if B interprets T, then B is at least as strong (in the sense of 'consistency strength') as T is. - -A strong form of the second incompleteness theorem, proved by Pavel Pudlák, who was building on earlier work by Solomon Feferman, states that no consistent theory T that contains Robinson arithmetic, Q, can interpret Q plus Con(T), the statement that T is consistent. By contrast, Q+Con(T) does interpret T, by a strong form of the arithmetized completeness theorem. So Q+Con(T) is always stronger (in one good sense) than T is. But Gentzen's theory trivially interprets Q+Con(PA), since it contains Q and proves Con(PA), and so Gentzen's theory interprets PA. But, by Pudlák's result, PA cannot interpret Gentzen's theory, since Gentzen's theory (as just said) interprets Q+Con(PA), and interpretability is transitive. That is: If PA did interpret Gentzen's theory, then it would also interpret Q+Con(PA) and so would be inconsistent, by Pudlák's result. So, in the sense of consistency strength, as characterized by interpretability, Gentzen's theory is stronger than Peano arithmetic. - -Hermann Weyl made the following comment in 1946 regarding the significance of Gentzen's consistency result following the devastating impact of Gödel's 1931 incompleteness result on Hilbert's plan to prove the consistency of mathematics. - -It is likely that all mathematicians ultimately would have accepted Hilbert's approach had he been able to carry it out successfully. The first steps were inspiring and promising. But then Gödel dealt it a terrific blow (1931), from which it has not yet recovered. Gödel enumerated the symbols, formulas, and sequences of formulas in Hilbert's formalism in a certain way, and thus transformed the assertion of consistency into an arithmetic proposition. He could show that this proposition can neither be proved nor disproved within the formalism. This can mean only two things: either the reasoning by which a proof of consistency is given must contain some argument that has no formal counterpart within the system, i.e., we have not succeeded in completely formalizing the procedure of mathematical induction; or hope for a strictly "finitistic" proof of consistency must be given up altogether. When G. 
Gentzen finally succeeded in proving the consistency of arithmetic he trespassed those limits indeed by claiming as evident a type of reasoning that penetrates into Cantor's "second class of ordinal numbers."

Kleene made the following comment in 1952 on the significance of Gentzen's result, particularly in the context of the formalist program which was initiated by Hilbert.

The original proposals of the formalists to make classical mathematics secure by a consistency proof did not contemplate that such a method as transfinite induction up to ε0 would have to be used. To what extent the Gentzen proof can be accepted as securing classical number theory in the sense of that problem formulation is in the present state of affairs a matter for individual judgement, depending on how ready one is to accept induction up to ε0 as a finitary method.

Gentzen's first version of his consistency proof was not published during his lifetime because Paul Bernays had objected to a method implicitly used in the proof. The modified proof, described above, was published in 1936 in Mathematische Annalen. Gentzen went on to publish two more consistency proofs, one in 1938 and one in 1943. All of these are contained in his collected papers.

Kurt Gödel reinterpreted Gentzen's 1936 proof in a lecture in 1938 in what came to be known as the no-counterexample interpretation. Both the original proof and the reformulation can be understood in game-theoretic terms.

In 1940 Wilhelm Ackermann published another consistency proof for Peano arithmetic, also using the ordinal ε0.

Gentzen's proof is the first example of what is called proof-theoretic ordinal analysis. In ordinal analysis one gauges the strength of theories by measuring how large the (constructive) ordinals are that can be proven to be well-ordered, or equivalently for how large a (constructive) ordinal transfinite induction can be proven. A constructive ordinal is the order type of a recursive well-ordering of natural numbers.

In this language, Gentzen's work establishes that the proof-theoretic ordinal of first-order Peano arithmetic is ε0.

Laurence Kirby and Jeff Paris proved in 1982 that Goodstein's theorem cannot be proven in Peano arithmetic. Their proof was based on Gentzen's theorem.
diff --git a/wiki/wikipedia/51.txt b/wiki/wikipedia/51.txt deleted file mode 100644 index c6a788b709f290f94f13a41e63527619eb897ea0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/51.txt +++ /dev/null @@ -1,21 +0,0 @@

In geometry, a triangulation is a subdivision of a planar object into triangles, and by extension the subdivision of a higher-dimensional geometric object into simplices. Triangulations of a three-dimensional volume would involve subdividing it into tetrahedra packed together.

In most instances, the triangles of a triangulation are required to meet edge-to-edge and vertex-to-vertex.

Different types of triangulations may be defined, depending both on what geometric object is to be subdivided and on how the subdivision is determined.

* A triangulation $T$ of $\mathbb{R}^d$ is a subdivision of $\mathbb{R}^d$ into $d$-dimensional simplices such that any two simplices in $T$ intersect in a common face (a simplex of any lower dimension) or not at all, and any bounded set in $\mathbb{R}^d$ intersects only finitely many simplices in $T$. That is, it is a locally finite simplicial complex that covers the entire space.
- -* A point-set triangulation, i.e., a triangulation of a discrete set of points $\mathcal{P}\subset\mathbb{R}^d$, is a subdivision of the convex hull of the points into simplices such that any two simplices intersect in a common face of any dimension or not at all and such that the set of vertices of the simplices are contained in $\mathcal{P}$. Frequently used and studied point set triangulations include the Delaunay triangulation (for points in general position, the set of simplices that are circumscribed by an open ball that contains no input points) and the minimum-weight triangulation (the point set triangulation minimizing the sum of the edge lengths). - -* In cartography, a triangulated irregular network is a point set triangulation of a set of two-dimensional points together with elevations for each point. Lifting each point from the plane to its elevated height lifts the triangles of the triangulation into three-dimensional surfaces, which form an approximation of a three-dimensional landform. - -* A polygon triangulation is a subdivision of a given polygon into triangles meeting edge-to-edge, again with the property that the set of triangle vertices coincides with the set of vertices of the polygon. Polygon triangulations may be found in linear time and form the basis of several important geometric algorithms, including a simple approximate solution to the art gallery problem. The constrained Delaunay triangulation is an adaptation of the Delaunay triangulation from point sets to polygons or, more generally, to planar straight-line graphs. - -* A triangulation of a surface consists of a net of triangles with points on a given surface covering the surface partly or totally. - -* In the finite element method, triangulations are often used as the mesh (in this case, a triangle mesh) underlying a computation. In this case, the triangles must form a subdivision of the domain to be simulated, but instead of restricting the vertices to input points, it is allowed to add additional Steiner points as vertices. In order to be suitable as finite element meshes, a triangulation must have well-shaped triangles, according to criteria that depend on the details of the finite element simulation (see mesh quality); for instance, some methods require that all triangles be right or acute, forming nonobtuse meshes. Many meshing techniques are known, including Delaunay refinement algorithms such as Chew's second algorithm and Ruppert's algorithm. - -* In more general topological spaces, triangulations of a space generally refer to simplicial complexes that are homeomorphic to the space. - -The concept of a triangulation may also be generalized somewhat to subdivisions into shapes related to triangles. In particular, a pseudotriangulation of a point set is a partition of the convex hull of the points into pseudotriangles, polygons that like triangles have exactly three convex vertices. As in point set triangulations, pseudotriangulations are required to have their vertices at the given input points. diff --git a/wiki/wikipedia/510.txt b/wiki/wikipedia/510.txt deleted file mode 100644 index 0081257b924656f21188b89926c8bd1eb981c338..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/510.txt +++ /dev/null @@ -1,83 +0,0 @@ -In information theory, the asymptotic equipartition property (AEP) is a general property of the output samples of a stochastic source. It is fundamental to the concept of typical set used in theories of data compression. 
Roughly speaking, the theorem states that although there are many series of results that may be produced by a random process, the one actually produced is most probably from a loosely defined set of outcomes that all have approximately the same chance of being the one actually realized. (This is a consequence of the law of large numbers and ergodic theory.) Although there are individual outcomes which have a higher probability than any outcome in this set, the vast number of outcomes in the set almost guarantees that the outcome will come from the set. One way of intuitively understanding the property is through Cramér's large deviation theorem, which states that the probability of a large deviation from the mean decays exponentially with the number of samples. Such results are studied in large deviations theory; intuitively, it is the large deviations that would violate equipartition, but these are unlikely.

In the field of pseudorandom number generation, a candidate generator of undetermined quality whose output sequence lies too far outside the typical set by some statistical criteria is rejected as insufficiently random. Thus, although the typical set is loosely defined, practical notions arise concerning sufficient typicality.

Given a discrete-time stationary ergodic stochastic process $X$ on the probability space $(\Omega, B, p)$, the asymptotic equipartition property is an assertion that
$$
-\frac{1}{n} \log p(X_1, X_2, \dots, X_n) \to H(X) \quad \text{ as } \quad n\to\infty
$$
where $H(X)$ or simply $H$ denotes the entropy rate of $X$, which must exist for all discrete-time stationary processes including the ergodic ones. The asymptotic equipartition property is proved for finite-valued (i.e. $|\Omega| < \infty$) stationary ergodic stochastic processes in the Shannon–McMillan–Breiman theorem using ergodic theory, and for i.i.d. sources directly using the law of large numbers in both the discrete-valued case (where $H$ is simply the entropy of a symbol) and the continuous-valued case (where $H$ is the differential entropy instead). The definition of the asymptotic equipartition property can also be extended for certain classes of continuous-time stochastic processes for which a typical set exists for long enough observation time. The convergence is proven almost surely in all cases.

Given an i.i.d. source $X$ which may take values in the alphabet $\mathcal{X}$, its time series $X_1,\ldots,X_n$ is i.i.d. with entropy $H(X)$. The weak law of large numbers gives the asymptotic equipartition property with convergence in probability,
$$
\lim_{n\to\infty}\Pr\left[\left|-\frac{1}{n} \log p(X_1, X_2, \ldots, X_n) - H(X)\right|> \epsilon\right]=0 \qquad \forall \epsilon>0,
$$
since the entropy is equal to the expectation of
$$
-\frac{1}{n} \log p(X_1, X_2, \ldots , X_n).
$$

The strong law of large numbers asserts the stronger almost sure convergence,
$$
\Pr\left[\lim_{n\to\infty} - \frac{1}{n} \log p(X_1, X_2,\ldots, X_n) = H(X)\right]=1.
$$

Consider a finite-valued sample space $\Omega$, i.e. $|\Omega| < \infty$, for the discrete-time stationary ergodic process $X:=\{X_n\}$ defined on the probability space $(\Omega, B, p)$. The asymptotic equipartition property for such a stochastic source is known as the Shannon–McMillan–Breiman theorem, due to Claude Shannon, Brockway McMillan, and Leo Breiman.

The assumptions of stationarity/ergodicity/identical distribution of random variables are not essential for the asymptotic equipartition property to hold; a numerical illustration of the i.i.d. case is sketched below.
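The following minimal sketch (assuming NumPy; the Bernoulli parameter and sample size are arbitrary) illustrates the i.i.d. case: the empirical per-symbol negative log-probability of a long sample concentrates near the source entropy:

```
import numpy as np

rng = np.random.default_rng(42)
p = 0.3                      # Bernoulli(p) source
n = 100_000
sample = rng.random(n) < p   # n i.i.d. draws

# Empirical -(1/n) log p(X_1, ..., X_n) for the observed sequence.
log_prob = np.where(sample, np.log(p), np.log(1 - p))
aep_estimate = -log_prob.mean()

entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))  # H(X) in nats
print(aep_estimate, entropy)  # the two agree to a few decimal places
```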
Indeed, as is intuitively clear, the asymptotic equipartition property requires only some form of the law of large numbers to hold, which is fairly general. However, the expression needs to be suitably generalized, and the conditions need to be formulated precisely.

We assume that the source is producing independent symbols, with possibly different output statistics at each instant. We assume that the statistics of the process are known completely, that is, the marginal distribution of the process seen at each time instant is known. The joint distribution is just the product of marginals. Then, under the condition (which can be relaxed) that $\mathrm{Var}[\log p(X_i)] < M$ for all $i$, for some $M > 0$, the following holds (AEP):
$$
\lim_{n\to\infty}\Pr\left[\left|-\frac{1}{n} \log p(X_1, X_2, \ldots, X_n) - \overline{H}_n(X)\right|< \epsilon\right]=1\qquad \forall \epsilon>0
$$
where
$$
\overline{H}_n(X)=\frac{1}{n}H(X_1,X_2,\ldots,X_n).
$$

The asymptotic equipartition property for non-stationary discrete-time independent processes leads us to (among other results) the source coding theorem for non-stationary sources (with independent output symbols) and the noisy-channel coding theorem for non-stationary memoryless channels.

Discrete-time functions can be interpolated to continuous-time functions. If such an interpolation $f$ is measurable, we may define the continuous-time stationary process accordingly as $\tilde{X}:=f\circ X$. If the asymptotic equipartition property holds for the discrete-time process, as in the i.i.d. or finite-valued stationary ergodic cases shown above, it automatically holds for the continuous-time stationary process derived from it by some measurable interpolation, i.e.
$$
-\frac{1}{n} \log p(\tilde{X}_0^\tau) \to H(X)
$$
where $n$ corresponds to the degrees of freedom in time $\tau$. Here $nH(X)/\tau$ and $H(X)$ are the entropy per unit time and per degree of freedom respectively, defined by Shannon.

An important class of such continuous-time stationary processes is the bandlimited stationary ergodic process with the sample space being a subset of the continuous $\mathcal{L}_2$ functions. The asymptotic equipartition property holds if the process is white, in which case the time samples are i.i.d., or if there exists $T > 1/2W$, where $W$ is the nominal bandwidth, such that the $T$-spaced time samples take values in a finite set, in which case we have the discrete-time finite-valued stationary ergodic process.

Any time-invariant operation also preserves the asymptotic equipartition property, stationarity and ergodicity, and we may easily turn a stationary process into a non-stationary one without losing the asymptotic equipartition property by nulling out a finite number of time samples in the process.

A category-theoretic definition for the equipartition property is given by Gromov. Given a sequence of Cartesian powers $P^N=P\times \cdots \times P$ of a measure space $P$, this sequence admits an asymptotically equivalent sequence $H_N$ of homogeneous measure spaces (i.e. all sets have the same measure; all morphisms are invariant under the group of automorphisms, and thus factor as a morphism to the terminal object).

The above requires a definition of asymptotic equivalence. This is given in terms of a distance function, giving how much an injective correspondence differs from an isomorphism. An injective correspondence $\pi: P\to Q$ is a partially defined map that is a bijection; that is, it is a bijection between a subset $P'\subset P$ and $Q'\subset Q$.
Then define
$$
|P-Q|_\pi = |P\smallsetminus P'| + |Q\smallsetminus Q'|
$$
where $|S|$ denotes the measure of a set $S$. In what follows, the measures of $P$ and $Q$ are taken to be 1, so that the measure spaces are probability spaces. This distance $|P-Q|_\pi$ is commonly known as the earth mover's distance or Wasserstein metric.

Similarly, define
$$
|\log P:Q|_\pi = \frac{\sup_{p\in P'}|\log p - \log \pi(p)|}{\log \min \left(|\operatorname{set}(P')|,|\operatorname{set}(Q')|\right)}
$$
with $|\operatorname{set}(P)|$ taken to be the counting measure on $P$. Thus, this definition requires that $P$ be a finite measure space. Finally, let
$$
\text{dist}_\pi(P,Q) = |P-Q|_\pi +|\log P:Q|_\pi.
$$

A sequence of injective correspondences $\pi_N:P_N\to Q_N$ are then asymptotically equivalent when
$$
\text{dist}_{\pi_N}(P_N,Q_N) \to 0 \quad\text{ as }\quad N\to\infty.
$$

Given a homogeneous space sequence $H_N$ that is asymptotically equivalent to $P^N$, the entropy $H(P)$ of $P$ may be taken as
$$
H(P)=\lim_{N\to\infty}\frac{1}{N} \log |\operatorname{set}(H_N)|.
$$
diff --git a/wiki/wikipedia/511.txt b/wiki/wikipedia/511.txt deleted file mode 100644 index 3da7ccf6b59342f529c8fc52498ff041a7ca640f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/511.txt +++ /dev/null @@ -1,15 +0,0 @@

A lattice graph, mesh graph, or grid graph, is a graph whose drawing, embedded in some Euclidean space $\mathbb{R}^n$, forms a regular tiling. This implies that the group of bijective transformations that send the graph to itself is a lattice in the group-theoretical sense.

Typically, no clear distinction is made between such a graph in the more abstract sense of graph theory, and its drawing in space (often the plane or 3D space). This type of graph may more shortly be called just a lattice, mesh, or grid. Moreover, these terms are also commonly used for a finite section of the infinite graph, as in "an 8×8 square grid".

The term lattice graph has also been given in the literature to various other kinds of graphs with some regular structure, such as the Cartesian product of a number of complete graphs.

A common type of a lattice graph (known under different names, such as square grid graph) is the graph whose vertices correspond to the points in the plane with integer coordinates, x-coordinates being in the range $1,\ldots,n$, y-coordinates being in the range $1,\ldots,m$, and two vertices are connected by an edge whenever the corresponding points are at distance 1. In other words, it is a unit distance graph for the described point set.

Every planar graph $H$ is a minor of the $h\times h$-grid, where $h = 2|V(H)| + 4|E(H)|$.

A triangular grid graph is a graph that corresponds to a triangular grid.

A Hanan grid graph for a finite set of points in the plane is produced by the grid obtained by intersections of all vertical and horizontal lines through each point of the set.

The rook's graph (the graph that represents all legal moves of the rook chess piece on a chessboard) is also sometimes called the lattice graph, although this graph is strictly different from the lattice graph described in this article. The valid moves of the fairy chess piece wazir form the square lattice graph.
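As a small illustration of the square grid graph described above, the sketch below (plain Python; the adjacency-list representation is an arbitrary choice, not part of the definition) builds the graph on integer points $(x, y)$ with $1 \le x \le n$, $1 \le y \le m$, joining points at distance 1:

```
def square_grid_graph(n, m):
    """Adjacency list of the n-by-m square grid graph.

    Vertices are integer points (x, y) with 1 <= x <= n, 1 <= y <= m;
    two vertices are adjacent exactly when they are at Euclidean distance 1.
    """
    vertices = [(x, y) for x in range(1, n + 1) for y in range(1, m + 1)]
    adjacency = {v: [] for v in vertices}
    for (x, y) in vertices:
        for (dx, dy) in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            neighbor = (x + dx, y + dy)
            if neighbor in adjacency:
                adjacency[(x, y)].append(neighbor)
    return adjacency

g = square_grid_graph(3, 2)
# Each edge is counted once from each endpoint: a 3-by-2 grid has 7 edges.
print(sum(len(nbrs) for nbrs in g.values()) // 2)  # 7
```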
diff --git a/wiki/wikipedia/512.txt b/wiki/wikipedia/512.txt deleted file mode 100644 index 7cdd13b2f5aa414b3576392156dd23569cafaadb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/512.txt +++ /dev/null @@ -1,363 +0,0 @@

In number theory, an integer q is called a quadratic residue modulo n if it is congruent to a perfect square modulo n; i.e., if there exists an integer x such that:
$$
x^2\equiv q \pmod{n}.
$$

Otherwise, q is called a quadratic nonresidue modulo n.

Originally an abstract mathematical concept from the branch of number theory known as modular arithmetic, quadratic residues are now used in applications ranging from acoustical engineering to cryptography and the factoring of large numbers.

Fermat, Euler, Lagrange, Legendre, and other number theorists of the 17th and 18th centuries established theorems and formed conjectures about quadratic residues, but the first systematic treatment is § IV of Gauss's Disquisitiones Arithmeticae (1801). Article 95 introduces the terminology "quadratic residue" and "quadratic nonresidue", and states that if the context makes it clear, the adjective "quadratic" may be dropped.

For a given n a list of the quadratic residues modulo n may be obtained by simply squaring the numbers 0, 1, ..., n - 1. Because $a^2 \equiv (n - a)^2 \pmod{n}$, the list of squares modulo n is symmetric around n/2, and the list only needs to go that high. This can be seen in the table below.

Thus, the number of quadratic residues modulo n cannot exceed n/2 + 1 (n even) or (n + 1)/2 (n odd).

The product of two residues is always a residue.

Modulo 2, every integer is a quadratic residue.

Modulo an odd prime number p there are (p + 1)/2 residues (including 0) and (p - 1)/2 nonresidues, by Euler's criterion. In this case, it is customary to consider 0 as a special case and work within the multiplicative group of nonzero elements of the field Z/pZ. (In other words, every congruence class except zero modulo p has a multiplicative inverse. This is not true for composite moduli.)

Following this convention, the multiplicative inverse of a residue is a residue, and the inverse of a nonresidue is a nonresidue.

Following this convention, modulo an odd prime number there are an equal number of residues and nonresidues.

If the modulus is $p^n$, then $p^k a$ (with a coprime to p)

is a residue modulo $p^n$ if k ≥ n

is a nonresidue modulo $p^n$ if k < n is odd

is a residue modulo $p^n$ if k < n is even and a is a residue

is a nonresidue modulo $p^n$ if k < n is even and a is a nonresidue.

Notice that the rules are different for powers of two and powers of odd primes.

Modulo an odd prime power $n = p^k$, the products of residues and nonresidues relatively prime to p obey the same rules as they do mod p; p is a nonresidue, and in general all the residues and nonresidues obey the same rules, except that the products will be zero if the power of p in the product ≥ n.

Modulo 8, the product of the nonresidues 3 and 5 is the nonresidue 7, and likewise for permutations of 3, 5 and 7. In fact, the multiplicative group of the non-residues and 1 forms the Klein four-group.

The basic fact in this case is

if a is a residue modulo n, then a is a residue modulo $p^k$ for every prime power dividing n.

if a is a nonresidue modulo n, then a is a nonresidue modulo $p^k$ for at least one prime power dividing n.

Modulo a composite number, the product of two residues is a residue. The product of a residue and a nonresidue may be a residue, a nonresidue, or zero.
- -For example, from the table for modulus 6, the residues are 1, 3, and 4, and the nonresidues are 2 and 5. - -The product of the residue 3 and the nonresidue 5 is the residue 3, whereas the product of the residue 4 and the nonresidue 2 is the nonresidue 2. - -
    - -Also, the product of two nonresidues may be either a residue, a nonresidue, or zero. - -
- -For example, from the table for modulus 15, the residues among 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 are 1, 4, 6, 9, and 10. - -The product of the nonresidues 2 and 8 is the residue 1, whereas the product of the nonresidues 2 and 7 is the nonresidue 14. - -
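These products are easy to check computationally. Here is a minimal Python sketch (the helper name is our own) that reproduces the modulus-15 example:

```python
def quadratic_residues(n):
    """Nonzero quadratic residues modulo n (coprimality not required)."""
    return sorted({x * x % n for x in range(1, n)} - {0})

res = quadratic_residues(15)
print(res)                    # [1, 4, 6, 9, 10]
print(2 * 8 % 15 in res)      # True: a product of two nonresidues can be a residue
print(2 * 7 % 15 in res)      # False: ... or a nonresidue (here, 14)
```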
    - -This phenomenon can best be described using the vocabulary of abstract algebra. The congruence classes relatively prime to the modulus are a group under multiplication, called the group of units of the ring Z/nZ, and the squares are a subgroup of it. Different nonresidues may belong to different cosets, and there is no simple rule that predicts which one their product will be in. Modulo a prime, there is only the subgroup of squares and a single coset. - -The fact that, e.g., modulo 15 the product of the nonresidues 3 and 5, or of the nonresidue 5 and the residue 9, or the two residues 9 and 10 are all zero comes from working in the full ring Z/nZ, which has zero divisors for composite n. - -For this reason some authors add to the definition that a quadratic residue a must not only be a square but must also be relatively prime to the modulus n. (a is coprime to n if and only if a2 is coprime to n.) - -Although it makes things tidier, this article does not insist that residues must be coprime to the modulus. - -Gauss used R and N to denote residuosity and non-residuosity, respectively; - -for example, 2 R 7 and 5 N 7, or 1 R 8 and 3 N 8. - -Although this notation is compact and convenient for some purposes, a more useful notation is the Legendre symbol, also called the quadratic character, which is defined for all integers a and positive odd prime numbers p as - - - -\left(\frac{a}{p}\right) = \begin{cases}0&\text{ if }p \text { divides } a\\+1&\text{ if } a \operatorname{R} p \text{ and }p \text { does not divide } a\\-1&\text{ if }a \operatorname{N} p \text{ and }p \text{ does not divide } a\end{cases} - -There are two reasons why numbers ≡ 0 (mod p) are treated specially. As we have seen, it makes many formulas and theorems easier to state. The other (related) reason is that the quadratic character is a homomorphism from the multiplicative group of nonzero congruence classes modulo p to the complex numbers under multiplication. Setting $(\tfrac{np}{p}) = 0$ allows its domain to be extended to the multiplicative semigroup of all the integers. - -One advantage of this notation over Gauss's is that the Legendre symbol is a function that can be used in formulas. It can also easily be generalized to cubic, quartic and higher power residues. - -There is a generalization of the Legendre symbol for composite values of p, the Jacobi symbol, but its properties are not as simple: if m is composite and the Jacobi symbol $(\tfrac{a}{m}) = -1,$ then a N m, and if a R m then $(\tfrac{a}{m}) = 1,$ but if $(\tfrac{a}{m}) = 1$ we do not know whether a R m or a N m. For example: $(\tfrac{2}{15}) = 1$ and $(\tfrac{4}{15}) = 1$, but 2 N 15 and 4 R 15. If m is prime, the Jacobi and Legendre symbols agree. - -Although quadratic residues appear to occur in a rather random pattern modulo n, and this has been exploited in such applications as acoustics and cryptography, their distribution also exhibits some striking regularities. - -Using Dirichlet's theorem on primes in arithmetic progressions, the law of quadratic reciprocity, and the Chinese remainder theorem (CRT) it is easy to see that for any M > 0 there are primes p such that the numbers 1, 2, ..., M are all residues modulo p. - -
For example, if p ≡ 1 (mod 8), (mod 12), (mod 5) and (mod 28), then by the law of quadratic reciprocity 2, 3, 5, and 7 will all be residues modulo p, and thus all numbers 1-10 will be. The CRT says that this is the same as p ≡ 1 (mod 840), and Dirichlet's theorem says there are an infinite number of primes of this form. 2521 is the smallest, and indeed 1^2 ≡ 1, 1046^2 ≡ 2, 123^2 ≡ 3, 2^2 ≡ 4, 643^2 ≡ 5, 87^2 ≡ 6, 668^2 ≡ 7, 429^2 ≡ 8, 3^2 ≡ 9, and 529^2 ≡ 10 (mod 2521).
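This is easy to verify numerically with Euler's criterion (mentioned above); a minimal Python sketch, with the helper name our own:

```python
p = 2521  # the smallest prime p ≡ 1 (mod 840)

def is_residue(a, p):
    """Euler's criterion: for an odd prime p not dividing a,
    a is a quadratic residue mod p iff a^((p-1)/2) ≡ 1 (mod p)."""
    return pow(a, (p - 1) // 2, p) == 1

print(all(is_residue(a, p) for a in range(1, 11)))  # True: 1..10 are all residues
```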
    - -The first of these regularities stems from Peter Gustav Lejeune Dirichlet's work (in the 1830s) on the analytic formula for the class number of binary quadratic forms. Let q be a prime number, s a complex variable, and define a Dirichlet L-function as -$$ -L(s) = \sum_{n=1}^\infty\left(\frac{n}{q}\right)n^{-s}. -$$ - -Dirichlet showed that if q ≡ 3 (mod 4), then -$$ -L(1) = -\frac{\pi}{\sqrt q}\sum_{n=1}^{q-1} \frac{n}{q} \left(\frac{n}{q}\right) > 0. -$$ - -Therefore, in this case (prime q ≡ 3 (mod 4)), the sum of the quadratic residues minus the sum of the nonresidues in the range 1, 2, ..., q - 1 is a negative number. - -
- -For example, modulo 11 the residues among 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 are 1, 3, 4, 5, and 9. - -The sum of the residues is 1 + 4 + 9 + 5 + 3 = 22, the sum of the nonresidues is 2 + 6 + 7 + 8 + 10 = 33, and the difference is -11.
    - -In fact the difference will always be an odd multiple of q if q > 3. In contrast, for prime q ≡ 1 (mod 4), the sum of the quadratic residues minus the sum of the nonresidues in the range 1, 2, ..., q - 1 is zero, implying that both sums equal $\frac{q(q-1)}{4}$. - -Dirichlet also proved that for prime q ≡ 3 (mod 4), -$$ -L(1) = \frac{\pi}{\left(2-\left(\frac{2}{q}\right)\right)\!\sqrt q}\sum_{n=1}^\frac{q-1}{2}\left(\frac{n}{q}\right) > 0. -$$ - -This implies that there are more quadratic residues than nonresidues among the numbers 1, 2, ..., (q - 1)/2. - -
    For example, modulo 11 there are four residues less than 6 (namely 1, 3, 4, and 5), but only one nonresidue (2).
- -An intriguing fact about these two theorems is that all known proofs rely on analysis; no one has ever published a simple or direct proof of either statement. - -If p and q are odd primes, then: - -((p is a quadratic residue mod q) if and only if (q is a quadratic residue mod p)) if and only if (at least one of p and q is congruent to 1 mod 4). - -That is: -$$ - \left(\frac{p}{q}\right) \left(\frac{q}{p}\right) = (-1)^{\frac{p-1}{2} \cdot \frac{q-1}{2}} -$$ - -where $\left(\frac{p}{q}\right)$ is the Legendre symbol. Thus, for a number a and odd primes p that do not divide a, quadratic reciprocity lets one determine the quadratic character of a modulo p. - -Modulo a prime p, the numbers of pairs n, n + 1 where n R p and n + 1 R p, or n N p and n + 1 R p, etc., are almost equal. More precisely, let p be an odd prime. For i, j = 0, 1 define the sets -$$ -A_{ij}=\left\{k\in\{1,2,\dots,p-2\}: \left(\frac{k}{p}\right)=(-1)^i\land\left(\frac{k+1}{p}\right)=(-1)^j\right\}, -$$ - -and let -$$ -\alpha_{ij} = |A_{ij}|. -$$ - -That is, - -$\alpha_{00}$ is the number of residues that are followed by a residue, - -$\alpha_{01}$ is the number of residues that are followed by a nonresidue, - -$\alpha_{10}$ is the number of nonresidues that are followed by a residue, and - -$\alpha_{11}$ is the number of nonresidues that are followed by a nonresidue. - -Then if p ≡ 1 (mod 4) -$$ -\alpha_{00} = \frac{p-5}{4},\alpha_{01} =\alpha_{10} =\alpha_{11} = \frac{p-1}{4} -$$ - -and if p ≡ 3 (mod 4) -$$ -\alpha_{01} = \frac{p+1}{4},\alpha_{00} =\alpha_{10} =\alpha_{11} = \frac{p-3}{4}. -$$ - -
For example: - -Modulo 17, the residues are 1, 2, 4, 8, 9, 13, 15, and 16, and - -$A_{00}$ = {1,8,15}, - -$A_{01}$ = {2,4,9,13}, - -$A_{10}$ = {3,7,12,14}, - -$A_{11}$ = {5,6,10,11}. - -Modulo 19, the residues are 1, 4, 5, 6, 7, 9, 11, 16, and 17, and - -$A_{00}$ = {4,5,6,16}, - -$A_{01}$ = {1,7,9,11,17}, - -$A_{10}$ = {3,8,10,15}, - -$A_{11}$ = {2,12,13,14}.
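These counts can be reproduced directly; a minimal Python sketch (the function name and dictionary layout are our own) that verifies the tables above against the formulas:

```python
def alpha_counts(p):
    """Count pairs (k, k+1), 1 <= k <= p-2, by the quadratic characters
    of k and k+1 modulo the odd prime p, using Euler's criterion."""
    is_res = lambda a: pow(a, (p - 1) // 2, p) == 1
    counts = {(i, j): 0 for i in (0, 1) for j in (0, 1)}  # 0 = residue, 1 = nonresidue
    for k in range(1, p - 1):
        counts[(0 if is_res(k) else 1, 0 if is_res(k + 1) else 1)] += 1
    return counts

print(alpha_counts(17))  # {(0, 0): 3, (0, 1): 4, (1, 0): 4, (1, 1): 4}
print(alpha_counts(19))  # {(0, 0): 4, (0, 1): 5, (1, 0): 4, (1, 1): 4}
```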
- -Gauss (1828) introduced this sort of counting when he proved that if p ≡ 1 (mod 4) then x^4 ≡ 2 (mod p) can be solved if and only if p = a^2 + 64b^2. - -The values of $(\tfrac{a}{p})$ for consecutive values of a mimic a random variable like a coin flip. Specifically, Pólya and Vinogradov proved (independently) in 1918 that for any nonprincipal Dirichlet character χ(n) modulo q and any integers M and N, -$$ -\left|\sum_{n=M+1}^{M+N}\chi(n)\right| =O\left( \sqrt q \log q\right), -$$ - -in big O notation. Setting -$$ - \chi(n) = \left(\frac{n}{q}\right), -$$ - -this shows that the number of quadratic residues modulo q in any interval of length N is -$$ -\frac{1}{2}N + O(\sqrt q\log q). -$$ - -It is easy to prove that -$$ - \left| \sum_{n=M+1}^{M+N} \left( \frac{n}{q} \right) \right| < \sqrt q \log q. -$$ - -In fact, -$$ - \left| \sum_{n=M+1}^{M+N} \left( \frac{n}{q} \right) \right| < \frac{4}{\pi^2} \sqrt q \log q+0.41\sqrt q +0.61. -$$ - -Montgomery and Vaughan improved this in 1977, showing that, if the generalized Riemann hypothesis is true then -$$ -\left|\sum_{n=M+1}^{M+N}\chi(n)\right|=O\left(\sqrt q \log \log q\right). -$$ - -This result cannot be substantially improved, for Schur had proved in 1918 that -$$ -\max_N \left|\sum_{n=1}^{N}\left(\frac{n}{q}\right)\right|>\frac{1}{2\pi}\sqrt q -$$ - -and Paley had proved in 1932 that -$$ -\max_N \left|\sum_{n=1}^{N}\left(\frac{d}{n}\right)\right|>\frac{1}{7}\sqrt d \log \log d -$$ - -for infinitely many d > 0. - -The least quadratic residue mod p is clearly 1. The question of the magnitude of the least quadratic non-residue n(p) is more subtle, but it is always prime, with 7 appearing for the first time at 71. - -The Pólya–Vinogradov inequality above gives O(√p log p). - -The best unconditional estimate is n(p) ≪ p^θ for any θ > 1/(4√e), obtained by estimates of Burgess on character sums. - -The least quadratic non-residues mod p for odd primes p are: - -2, 2, 3, 2, 2, 3, 2, 5, 2, 3, 2, ... - -Let p be an odd prime. The quadratic excess E(p) is the number of quadratic residues on the range (0,p/2) minus the number in the range (p/2,p). For p congruent to 1 mod 4, the excess is zero, since −1 is a quadratic residue and the residues are symmetric under r ↔ p−r. For p congruent to 3 mod 4, the excess E is always positive. - -A basic computational question is the following: given a number a and a modulus n, how hard is it - -1. to tell whether an x solving x^2 ≡ a (mod n) exists, and - -2. assuming one does exist, to calculate it? - -An important difference between prime and composite moduli shows up here. Modulo a prime p, a quadratic residue a has 1 + (a|p) roots (i.e. zero if a N p, one if a ≡ 0 (mod p), or two if a R p and gcd(a,p) = 1). - -In general if a composite modulus n is written as a product of powers of distinct primes, and there are $n_1$ roots modulo the first one, $n_2$ mod the second, ..., there will be $n_1 n_2 \ldots$ roots modulo n. - -The theoretical way solutions modulo the prime powers are combined to make solutions modulo n is called the Chinese remainder theorem; it can be implemented with an efficient algorithm. - -
For example: - -Solve x^2 ≡ 6 (mod 15). - -x^2 ≡ 6 (mod 3) has one solution, 0; x^2 ≡ 6 (mod 5) has two, 1 and 4; - -combining them gives two solutions modulo 15, namely 6 and 9. - -Solve x^2 ≡ 4 (mod 15). - -x^2 ≡ 4 (mod 3) has two solutions, 1 and 2; x^2 ≡ 4 (mod 5) has two, 2 and 3; - -combining them gives four solutions modulo 15, namely 2, 7, 8, and 13. - -Solve x^2 ≡ 7 (mod 15). - -x^2 ≡ 7 (mod 3) has two solutions, 1 and 2; x^2 ≡ 7 (mod 5) has no solutions; - -hence there are no solutions modulo 15. - -
- -First off, if the modulus n is prime, the Legendre symbol $\left(\frac{a}{n}\right)$ can be quickly computed using a variation of Euclid's algorithm or Euler's criterion. If it is −1 there is no solution. - -Secondly, assuming that $\left(\frac{a}{n}\right)=1$, if n ≡ 3 (mod 4), Lagrange found that the solutions are given by -$$ -x \equiv \pm a^{(n+1)/4} \pmod{n}, -$$ - -and Legendre found a similar solution if n ≡ 5 (mod 8): -$$ -x \equiv \begin{cases} \pm a^{(n+3)/8} \pmod{n}& \text{ if } a \text{ is a quartic residue modulo } n \\ \pm a^{(n+3)/8}2^{(n-1)/4} \pmod{n}& \text{ if } a \text{ is a quartic non-residue modulo } n \end{cases} -$$ - -For prime n ≡ 1 (mod 8), however, there is no known formula. Tonelli (in 1891) and Cipolla found efficient algorithms that work for all prime moduli. Both algorithms require finding a quadratic nonresidue modulo n, and there is no efficient deterministic algorithm known for doing that. But since half the numbers between 1 and n are nonresidues, picking numbers x at random and calculating the Legendre symbol $\left(\frac{x}{n}\right)$ until a nonresidue is found will quickly produce one. A slight variant of this algorithm is the Tonelli–Shanks algorithm. - -If the modulus n is a prime power n = p^e, a solution may be found modulo p and "lifted" to a solution modulo n using Hensel's lemma or an algorithm of Gauss. - -If the modulus n has been factored into prime powers, a solution can be found modulo each prime power as discussed above and the pieces combined via the Chinese remainder theorem. - -If n is not congruent to 2 modulo 4 and the Kronecker symbol $\left(\tfrac{a}{n}\right)=-1$ then there is no solution; if n is congruent to 2 modulo 4 and $\left(\tfrac{a}{n/2}\right)=-1$, then there is also no solution. If n is not congruent to 2 modulo 4 and $\left(\tfrac{a}{n}\right)=1$, or n is congruent to 2 modulo 4 and $\left(\tfrac{a}{n/2}\right)=1$, there may or may not be one. - -
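As a concrete illustration of Lagrange's n ≡ 3 (mod 4) case, here is a minimal Python sketch (the function name is our own). The formula works because x^2 = a^{(n+1)/2} = a·a^{(n-1)/2} ≡ a (mod n) whenever a is a residue:

```python
def sqrt_mod_p_3mod4(a, p):
    """Square roots of a modulo a prime p ≡ 3 (mod 4), assuming a is a
    quadratic residue mod p (checked with Euler's criterion)."""
    assert p % 4 == 3
    assert pow(a, (p - 1) // 2, p) == 1, "a must be a residue mod p"
    x = pow(a, (p + 1) // 4, p)   # Lagrange's formula
    return x, p - x

print(sqrt_mod_p_3mod4(2, 7))    # (4, 3): indeed 4^2 = 16 ≡ 2 (mod 7)
```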
    The above discussion indicates how knowing the factors of n allows us to find the roots efficiently. Say there were an efficient algorithm for finding square roots modulo a composite number. The article congruence of squares discusses how finding two numbers x and y where x2 ≡ y2 (mod n) and x ≠ ±y suffices to factorize n efficiently. Generate a random number, square it modulo n, and have the efficient square root algorithm find a root. Repeat until it returns a number not equal to the one we originally squared (or its negative modulo n), then follow the algorithm described in congruence of squares. The efficiency of the factoring algorithm depends on the exact characteristics of the root-finder (e.g. does it return all roots? just the smallest one? a random one?), but it will be efficient.
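The reduction described here can be sketched in a few lines of Python. The brute-force root finder below merely stands in for the hypothetical efficient oracle (all names are ours), so the demonstration is only feasible for tiny n, and it assumes n has at least two distinct prime factors:

```python
import math, random

def all_sqrts(a, n):
    """Stand-in for the hypothetical efficient root oracle: brute-force
    all square roots of a modulo n (only viable for tiny n)."""
    return [x for x in range(n) if x * x % n == a]

def factor_with_oracle(n):
    """Congruence-of-squares reduction: square a random x, ask the oracle
    for a root y of x^2 mod n; if y != ±x then gcd(x - y, n) is proper."""
    while True:
        x = random.randrange(2, n)
        if math.gcd(x, n) > 1:           # lucky: x already shares a factor
            return math.gcd(x, n)
        for y in all_sqrts(x * x % n, n):
            if y not in (x, n - x):      # x^2 ≡ y^2 with x != ±y
                return math.gcd(abs(x - y), n)

print(factor_with_oracle(15))  # 3 or 5
```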
- -Determining whether a is a quadratic residue or nonresidue modulo n (denoted a R n or a N n) can be done efficiently for prime n by computing the Legendre symbol. However, for composite n, this forms the quadratic residuosity problem, which is not known to be as hard as factorization, but is assumed to be quite hard. - -On the other hand, if we want to know if there is a solution for x less than some given limit c, this problem is NP-complete; however, this is a fixed-parameter tractable problem, where c is the parameter. - -In general, to determine if a is a quadratic residue modulo composite n, one can use the following theorem: - -Let n > 1, and gcd(a,n) = 1. Then x^2 ≡ a (mod n) is solvable if and only if: - -* The Legendre symbol $\left(\tfrac{a}{p}\right)=1$ for all odd prime divisors p of n. - -* a ≡ 1 (mod 4) if n is divisible by 4 but not 8; or a ≡ 1 (mod 8) if n is divisible by 8. - -Note: This theorem essentially requires that the factorization of n is known. Also notice that if gcd(a,n) = m, then the congruence can be reduced to a/m ≡ x^2/m (mod n/m), but then this takes the problem away from quadratic residues (unless m is a square). - -The list of the number of quadratic residues modulo n, for n = 1, 2, 3 ..., looks like: - -1, 2, 2, 2, 3, 4, 4, 3, 4, 6, 6, 4, 7, 8, 6, ... - -A formula to count the number of squares modulo n is given by Stangl. - -Sound diffusers have been based on number-theoretic concepts such as primitive roots and quadratic residues. - -Paley graphs are dense undirected graphs, one for each prime p ≡ 1 (mod 4), that form an infinite family of conference graphs, which yield an infinite family of symmetric conference matrices. - -Paley digraphs are directed analogs of Paley graphs, one for each p ≡ 3 (mod 4), that yield antisymmetric conference matrices. - -The construction of these graphs uses quadratic residues. - -The fact that finding a square root of a number modulo a large composite n is equivalent to factoring (which is widely believed to be a hard problem) has been used for constructing cryptographic schemes such as the Rabin cryptosystem and the oblivious transfer. The quadratic residuosity problem is the basis for the Goldwasser-Micali cryptosystem. - -The discrete logarithm is a similar problem that is also used in cryptography. - -Euler's criterion is a formula for the Legendre symbol (a|p) where p is prime. If p is composite the formula may or may not compute (a|p) correctly. The Solovay–Strassen primality test for whether a given number n is prime or composite picks a random a and computes (a|n) using a modification of Euclid's algorithm, and also using Euler's criterion. If the results disagree, n is composite; if they agree, n may be composite or prime. For a composite n at least 1/2 the values of a in the range 2, 3, ..., n - 1 will return "n is composite"; for prime n none will. If, after using many different values of a, n has not been proved composite it is called a "probable prime". - -The Miller–Rabin primality test is based on the same principles. There is a deterministic version of it, but the proof that it works depends on the generalized Riemann hypothesis; the output from this test is "n is definitely composite" or "either n is prime or the GRH is false". If the second output ever occurs for a composite n, then the GRH would be false, which would have implications through many branches of mathematics.
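A minimal sketch of the Solovay–Strassen test in Python (function names ours; the Jacobi symbol is computed with the standard reciprocity-based algorithm rather than a literal modification of Euclid's algorithm):

```python
import random

def jacobi(a, n):
    """Jacobi symbol (a|n) for odd n > 0, via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a:
        while a % 2 == 0:                 # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                       # reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(n, rounds=20):
    """Probable-prime test: compare (a|n) with a^((n-1)/2) mod n."""
    if n < 3 or n % 2 == 0:
        return n == 2
    for _ in range(rounds):
        a = random.randrange(2, n)
        j = jacobi(a, n) % n              # maps -1 to n - 1
        if j == 0 or pow(a, (n - 1) // 2, n) != j:
            return False                  # definitely composite
    return True                           # probably prime

print(solovay_strassen(2521), solovay_strassen(78557))  # True False
```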
- -In § VI of the Disquisitiones Arithmeticae Gauss discusses two factoring algorithms that use quadratic residues and the law of quadratic reciprocity. - -Several modern factorization algorithms (including Dixon's algorithm, the continued fraction method, the quadratic sieve, and the number field sieve) generate small quadratic residues (modulo the number being factorized) in an attempt to find a congruence of squares which will yield a factorization. The number field sieve is the fastest general-purpose factorization algorithm known. - -The quadratic residues mod 1 to 75, including those not coprime to n, may be tabulated by the squaring procedure described above. diff --git a/wiki/wikipedia/513.txt b/wiki/wikipedia/513.txt deleted file mode 100644 index c2bcc54409f755b864ddf8c35d698ed6c920fa12..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/513.txt +++ /dev/null @@ -1,84 +0,0 @@ -In computability theory, an undecidable problem is a type of computational problem that requires a yes/no answer, but where there cannot possibly be any computer program that always gives the correct answer; that is, any possible program would sometimes give the wrong answer or run forever without giving any answer. More formally, an undecidable problem is a problem whose language is not a recursive set; see the article Decidable language. There are uncountably many undecidable problems, so the list below is necessarily incomplete. Though undecidable languages are not recursive languages, they may be subsets of Turing recognizable languages: i.e., such undecidable languages may be recursively enumerable. - -Many, if not most, undecidable problems in mathematics can be posed as word problems: determining when two distinct strings of symbols (encoding some mathematical concept or object) represent the same object or not. - -For undecidability in axiomatic mathematics, see List of statements undecidable in ZFC. - -* Hilbert's Entscheidungsproblem. - -* Type inference and type checking for the second-order lambda calculus (or equivalent). - -* Determining whether a first-order sentence in the logic of graphs can be realized by a finite undirected graph. - -* Trakhtenbrot's theorem: finite satisfiability is undecidable. - -* Satisfiability of first-order Horn clauses. - -* The halting problem (determining whether a Turing machine halts on a given input) and the mortality problem (determining whether it halts for every starting configuration). - -* Determining whether a Turing machine is a busy beaver champion (i.e., is the longest-running among halting Turing machines with the same number of states and symbols). - -* Rice's theorem states that for all nontrivial properties of partial functions, it is undecidable whether a given machine computes a partial function with that property. - -* The halting problem for a Minsky machine: a finite-state automaton with no inputs and two counters that can be incremented, decremented, and tested for zero. - -* Universality of a nondeterministic pushdown automaton: determining whether all words are accepted. - -* The problem of whether a tag system halts. - -* The mortal matrix problem: determining, given a finite set of n × n matrices with integer entries, whether they can be multiplied in some order, possibly with repetition, to yield the zero matrix. This is known to be undecidable for a set of six or more 3 × 3 matrices, or a set of two 15 × 15 matrices.
- -* Determining whether a finite set of upper triangular 3 × 3 matrices with nonnegative integer entries generates a free semigroup. - -* Determining whether two finitely generated subsemigroups of integer matrices have a common element. - -* The word problem for groups. - -* The conjugacy problem. - -* The group isomorphism problem. - -* Determining whether two finite simplicial complexes are homeomorphic. - -* Determining whether a finite simplicial complex is (homeomorphic to) a manifold. - -* Determining whether the fundamental group of a finite simplicial complex is trivial. - -* Determining whether two non-simply connected 5-manifolds are homeomorphic, or if a 5-manifold is homeomorphic to $S^5$. - -* For functions in certain classes, the problem of determining: whether two functions are equal, known as the zero-equivalence problem (see Richardson's theorem); the zeroes of a function; whether the indefinite integral of a function is also in the class. - -* Determining the domain of a solution to an ordinary differential equation of the form -$$ -\frac{dx}{dt} = p(t, x),~x(t_0)=x_0, -$$ - -where x is a vector in $\mathbb{R}^n$, p(t, x) is a vector of polynomials in t and x, and $(t_0, x_0)$ belongs to $\mathbb{R}^{n+1}$. - -* The Post correspondence problem. - -* Determining if a context-free grammar generates all possible strings, or if it is ambiguous. - -* Given two context-free grammars, determining whether they generate the same set of strings, or whether one generates a subset of the strings generated by the other, or whether there is any string at all that both generate. - -* The problem of determining if a given set of Wang tiles can tile the plane. - -* The problem of determining the Kolmogorov complexity of a string. - -* Hilbert's tenth problem: the problem of deciding whether a Diophantine equation (multivariable polynomial equation) has a solution in integers. - -* Determining whether a given initial point with rational coordinates is periodic, or whether it lies in the basin of attraction of a given open set, in a piecewise-linear iterated map in two dimensions, or in a piecewise-linear flow in three dimensions. - -* Determining whether a λ-calculus formula has a normal form. - -* In Conway's Game of Life, determining whether, given an initial pattern and another pattern, the latter can ever appear starting from the former. - -* For Rule 110, most questions of the form "can property X appear later" are undecidable. - -* The problem of determining whether a quantum mechanical system has a spectral gap. - -* Determining whether a player has a winning strategy in a game of Magic: The Gathering. - -* Planning in a partially observable Markov decision process. - -* The problem of planning air travel from one destination to another, when fares are taken into account. diff --git a/wiki/wikipedia/514.txt b/wiki/wikipedia/514.txt deleted file mode 100644 index 3bf2a9073f36aacf6dc6030c3ca8bd723ba7844b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/514.txt +++ /dev/null @@ -1,90 +0,0 @@ -In the theory of formal languages, the pumping lemma for regular languages is a lemma that describes an essential property of all regular languages. Informally, it says that all sufficiently long strings in a regular language may be pumped—that is, have a middle section of the string repeated an arbitrary number of times—to produce a new string that is also part of the language.
- -Specifically, the pumping lemma says that for any regular language $L$ there exists a constant $p$ such that any string $w$ in $L$ with length at least $p$ can be split into three substrings $x$, $y$ and $z$ ($w = xyz$, with $y$ being non-empty), such that the strings $xz, xyz, xyyz, xyyyz, ...$ constructed by repeating $y$ zero or more times are still in $L$. This process of repetition is known as "pumping". Moreover, the pumping lemma guarantees that the length of $xy$ will be at most $p$, imposing a limit on the ways in which $w$ may be split. Finite languages vacuously satisfy the pumping lemma by having $p$ equal to the maximum string length in $L$ plus one. - -The pumping lemma is useful for disproving the regularity of a specific language in question. It was first proven by Michael Rabin and Dana Scott in 1959, and rediscovered shortly after by Yehoshua Bar-Hillel, Micha A. Perles, and Eli Shamir in 1961, as a simplification of their pumping lemma for context-free languages. - -Let $L$ be a regular language. Then there exists an integer $p \geq 1$ depending only on $L$ such that every string $w$ in $L$ of length at least $p$ ($p$ is called the "pumping length") can be written as $w = xyz$ (i.e., $w$ can be divided into three substrings), satisfying the following conditions: - -* $ |y| \geq 1 $ - -* $ |xy| \leq p $ - -* $ (\forall n \geq 0) ( xy^nz \in L )$ - -Here, $y$ is the substring that can be pumped (removed or repeated any number of times, and the resulting string is always in $L$). (1) means the loop $y$ to be pumped must be of length at least one; (2) means the loop must occur within the first $p$ characters. $|x|$ must be smaller than $p$ (a consequence of (1) and (2)), but apart from that, there is no restriction on $x$ and $z$. - -In simple words, for any regular language $L$, any sufficiently long string $w$ (in $L$) can be split into three parts, i.e. $w = xyz$, such that all the strings $xy^nz$ for $n \geq 0$ are also in $L$. - -Below is a formal expression of the pumping lemma. - - - -\begin{array}{l} - -(\forall L\subseteq \Sigma^*) \\ - -\quad (\mbox{regular}(L) \Rightarrow \\ - -\quad ((\exists p\geq 1) ( (\forall w\in L) ((|w|\geq p) \Rightarrow \\ - -\quad ((\exists x,y,z \in \Sigma^*) (w=xyz \land (|y|\geq 1 \land |xy|\leq p \land - -(\forall n\geq 0)(xy^nz\in L)))))))) - -\end{array} - - - -The pumping lemma is often used to prove that a particular language is non-regular: a proof by contradiction may consist of exhibiting a string (of the required length) in the language that lacks the property outlined in the pumping lemma. - -For example, the language $L = \{a^n b^n : n \geq 0\}$ over the alphabet $\Sigma = \{a, b\}$ can be shown to be non-regular as follows: - -Let $w, x, y, z, p$, and $n$ be as used in the formal statement for the pumping lemma above. Assume that some constant $p$ exists as required by the lemma. Let $w$ in $L$ be given by $w = a^p b^p$, which is a string longer than $p$. By the pumping lemma, there must exist a decomposition $w = xyz$ with $|xy| \leq p$ and $|y| \geq 1$ such that $xy^iz$ is in $L$ for every $i \geq 0$. Since $|xy| \leq p$, the string $y$ only consists of instances of $a$. Moreover, because $|y| \geq 1$, it contains at least one instance of the letter $a$. However, $xy^2z$ has more instances of the letter $a$ than the letter $b$, since some instances of $a$ but none of $b$ were added. Therefore, $xy^2z$ is not in $L$, which contradicts the pumping lemma. Therefore, $L$ cannot be regular.
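The case analysis in this proof can be checked exhaustively for any fixed p. A minimal Python sketch (names ours) that tries every admissible split of w = a^p b^p and confirms that pumping y twice always leaves the language:

```python
def pumps_out_of_anbn(p):
    """For w = a^p b^p, check every split w = xyz with |xy| <= p and
    |y| >= 1: the pumped string xy^2z must leave L = {a^n b^n}."""
    w = "a" * p + "b" * p
    in_L = lambda s: len(s) % 2 == 0 and s == "a" * (len(s) // 2) + "b" * (len(s) // 2)
    for i in range(p + 1):               # |x| = i
        for j in range(i + 1, p + 1):    # |xy| = j <= p, so |y| >= 1
            x, y, z = w[:i], w[i:j], w[j:]
            if in_L(x + y * 2 + z):      # would contradict the proof
                return False
    return True

print(pumps_out_of_anbn(7))  # True: no admissible split survives pumping
```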
- -The proof that the language of balanced (i.e., properly nested) parentheses is not regular follows the same idea. Given $p$, there is a string of balanced parentheses that begins with more than $p$ left parentheses, so that $y$ will consist entirely of left parentheses. By repeating $y$, a string can be produced that does not contain the same number of left and right parentheses, and so they cannot be balanced. - -For every regular language there is a finite state automaton (FSA) that accepts the language. The number of states in such an FSA is counted and that count is used as the pumping length $p$. For a string of length at least $p$, let $q_0$ be the start state and let $q_1, ..., q_p$ be the sequence of the next $p$ states visited as the string is emitted. Because the FSA has only $p$ states, within this sequence of $p+1$ visited states there must be at least one state that is repeated. Write $q_s$ for such a state. The transitions that take the machine from the first encounter of state $q_s$ to the second encounter of state $q_s$ match some string. This string is called $y$ in the lemma, and since the machine will match a string without the $y$ portion, or with the string $y$ repeated any number of times, the conditions of the lemma are satisfied. - -For example, the following image shows an FSA. - -The FSA accepts the string: abcd. Since this string has a length at least as large as the number of states, which is four, the pigeonhole principle indicates that there must be at least one repeated state among the start state and the next four visited states. In this example, only $q_1$ is a repeated state. Since the substring bc takes the machine through transitions that start at state $q_1$ and end at state $q_1$, that portion could be repeated and the FSA would still accept, giving the string abcbcd. Alternatively, the bc portion could be removed and the FSA would still accept, giving the string ad. In terms of the pumping lemma, the string abcd is broken into an $x$ portion a, a $y$ portion bc and a $z$ portion d. - -As a side remark, the problem of checking whether a given string can be accepted by a given nondeterministic finite automaton without visiting any state repeatedly is NP-hard. - -If a language $L$ is regular, then there exists a number $p \geq 1$ (the pumping length) such that every string $uwv$ in $L$ with $|w| \ge p$ can be written in the form -$$ -uwv = uxyzv -$$ - -with strings $x$, $y$ and $z$ such that $|xy| \le p$, $|y| \ge 1$ and -$$ -uxy^izv -$$ is in $L$ for every integer $i \geq 0$. - -From this, the standard version above follows as a special case, with both $u$ and $v$ being the empty string. - -Since the general version imposes stricter requirements on the language, it can be used to prove the non-regularity of many more languages, such as $\{ a^m b^n c^n : m \geq 1 \text{ and } n \geq 1 \}$. - -While the pumping lemma states that all regular languages satisfy the conditions described above, the converse of this statement is not true: a language that satisfies these conditions may still be non-regular. In other words, both the original and the general version of the pumping lemma give a necessary but not sufficient condition for a language to be regular. - -For example, consider the following language: -$$ -\begin{matrix}L & = & \{uvwxy : u,y \in \{0,1,2,3\}^*; v,w,x \in \{0,1,2,3\} \land (v=w \lor v=x \lor x=w)\} \\ & & \cup \ \{w : w \in \{0,1,2,3\}^*\land \text {precisely } \tfrac 1 7 \text{ of the characters in }w \text{ are 3's}\}\end{matrix} -$$.
- -In other words, $L$ contains all strings over the alphabet $\{0,1,2,3\}$ with a substring of length 3 including a duplicate character, as well as all strings over this alphabet where precisely 1/7 of the string's characters are 3's. This language is not regular but can still be "pumped" with $p = 5$. Suppose some string s has length at least 5. Then, since the alphabet has only four characters, at least two of the first five characters in the string must be duplicates. They are separated by at most three characters. - -* If the duplicate characters are separated by 0 characters, or 1, pump one of the other two characters in the string, which will not affect the substring containing the duplicates. - -* If the duplicate characters are separated by 2 or 3 characters, pump 2 of the characters separating them. Pumping either down or up results in the creation of a substring of size 3 that contains 2 duplicate characters. - -* The second condition of $L$ ensures that $L$ is not regular: Consider the string $(013)^{3m}(012)^i$. This string is in $L$ exactly when $i=4m$, and thus $L$ is not regular by the Myhill–Nerode theorem. - -The Myhill–Nerode theorem provides a test that exactly characterizes regular languages. The typical method for proving that a language is regular is to construct either a finite state machine or a regular expression for the language. diff --git a/wiki/wikipedia/515.txt b/wiki/wikipedia/515.txt deleted file mode 100644 index 9166166ed1e2642ec96ea0285039aef48c6b020c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/515.txt +++ /dev/null @@ -1,133 +0,0 @@ -In quantum field theory and statistical mechanics, the Mermin–Wagner theorem (also known as Mermin–Wagner–Hohenberg theorem, Mermin–Wagner–Berezinskii theorem, or Coleman theorem) states that continuous symmetries cannot be spontaneously broken at finite temperature in systems with sufficiently short-range interactions in dimensions d ≤ 2. Intuitively, this means that long-range fluctuations can be created with little energy cost and since they increase the entropy they are favored. - -This is because if such a spontaneous symmetry breaking occurred, then the corresponding Goldstone bosons, being massless, would have an infrared divergent correlation function. - -The absence of spontaneous symmetry breaking in d ≤ 2 dimensional systems was rigorously proved by David Mermin, Herbert Wagner (1966), and Pierre Hohenberg (1967) in statistical mechanics and by Sidney Coleman (1973) in quantum field theory. That the theorem does not apply to discrete symmetries can be seen in the two-dimensional Ising model. - -Consider the free scalar field φ of mass m in two Euclidean dimensions. Its propagator is: -$$ -G(x) = \left\langle \varphi (x)\varphi (0) \right\rangle = \int \frac{d^2 k}{(2\pi)^2} \frac{e^{ik \cdot x}}{k^2 + m^2}. -$$ - -For small m, G is a solution to Laplace's equation with a point source: -$$ -\nabla^2 G = \delta(x). -$$ - -This is because the propagator is the reciprocal of $\nabla^2$ in k space. To use Gauss's law, define the electric field analog to be E = ∇G. The divergence of the electric field is zero away from the point source. In two dimensions, using a large Gaussian ring: -$$ -E = {1\over 2\pi r}. -$$ - -Thus the function G has a logarithmic divergence both at small and large r: -$$ -G(r) = {1\over 2\pi} \log(r) -$$ - -The interpretation of the divergence is that the field fluctuations cannot stay centered around a mean.
If you start at a point where the field has the value 1, the divergence tells you that as you travel far away, the field is arbitrarily far from the starting value. This makes a two-dimensional massless scalar field slightly tricky to define mathematically. If you define the field by a Monte-Carlo simulation, it does not stay put; it slides to infinitely large values with time. - -This happens in one dimension too, when the field is a one-dimensional scalar field, a random walk in time. A random walk also moves arbitrarily far from its starting point, so that a one-dimensional or two-dimensional scalar does not have a well defined average value. - -If the field is an angle, θ, as it is in the Mexican hat model where the complex field $A = Re^{i\theta}$ has an expectation value but is free to slide in the θ direction, the angle θ will be random at large distances. This is the Mermin–Wagner theorem: there is no spontaneous breaking of a continuous symmetry in two dimensions. - -While the Mermin–Wagner theorem prevents any spontaneous symmetry breaking on a global scale, ordering transitions of Kosterlitz–Thouless type may be allowed. This is the case for the XY model, where the continuous (internal) O(2) symmetry on a spatial lattice of dimension d ≤ 2 is unbroken, i.e. the (spin-)field's expectation value remains zero for any finite temperature (quantum phase transitions remain unaffected). However, the theorem does not prevent the existence of a phase transition in the sense of a diverging correlation length ξ. To this end, the model has two phases: a conventional disordered phase at high temperature with dominating exponential decay of the correlation function $G(r)\sim\exp(-r/\xi)$ for $r/\xi\gg1$, and a low-temperature phase with quasi-long-range order where G(r) decays according to some power law for "sufficiently large", but finite distance r (a ≪ r ≪ ξ with a the lattice spacing). - -We will present an intuitive way to understand the mechanism that prevents symmetry breaking in low dimensions, through an application to the Heisenberg model, that is, a system of n-component spins $\mathbf{S}_i$ of unit length, located at the sites of a d-dimensional square lattice, with nearest neighbor coupling J. Its Hamiltonian is -$$ -H = - J\sum_{\left\langle {i,j} \right\rangle } \mathbf{S}_i \cdot \mathbf{S}_j. -$$ - -The name of this model comes from its rotational symmetry. Consider the low temperature behavior of this system and assume that there exists a spontaneously broken phase, that is, a phase where all spins point in the same direction, e.g. along the x-axis. Then the O(n) rotational symmetry of the system is spontaneously broken, or rather reduced to the O(n − 1) symmetry under rotations around this direction. We can parametrize the field in terms of independent fluctuations $\sigma_\alpha$ around this direction as follows: -$$ -\mathbf{S} = \left(\sqrt{1 - \sum_\alpha \sigma_\alpha^2}, \left \{\sigma_\alpha \right\} \right), \qquad \alpha = 1, \cdots, n - 1. -$$ - -with the fluctuations $\sigma_\alpha$ assumed small, and Taylor expand the resulting Hamiltonian.
- -We have - -\begin{align} - -\mathbf{S}_i \cdot \mathbf{S}_j &= \sqrt{\left(1 - \sum_\alpha \sigma^2_{i\alpha} \right)\left(1 - \sum_\alpha \sigma^2_{j\alpha } \right)} + \sum_\alpha \sigma_{i\alpha} \sigma_{j\alpha}\\ - -&= 1 - \tfrac{1}{2} \sum_\alpha \left(\sigma^2_{i\alpha} + \sigma^2_{j\alpha}\right) + \sum_\alpha \sigma _{i\alpha} \sigma _{j\alpha} + \mathcal{O}\left (\sigma ^4 \right )\\ - -&= 1 - \tfrac{1}{2} \sum_\alpha \left (\sigma _{i\alpha} - \sigma _{j\alpha } \right )^2 + \ldots - -\end{align} - -whence -$$ -H = H_0 + \tfrac{1}{2} J\sum_{\left\langle i,j \right\rangle} \sum_\alpha \left (\sigma_{i\alpha}- \sigma_{j\alpha} \right )^2 + \cdots -$$ - -Ignoring the irrelevant constant term $H_0 = -JNd$ and passing to the continuum limit, given that we are interested in the low temperature phase where long-wavelength fluctuations dominate, we get -$$ -H = \tfrac{1}{2}J \int {\mathrm{d}^d x\sum_\alpha {(\nabla \sigma _\alpha )^2 } } + \ldots. -$$ - -The field fluctuations $\sigma_\alpha$ are called spin waves and can be recognized as Goldstone bosons. Indeed, they are n − 1 in number and they have zero mass since there is no mass term in the Hamiltonian. - -To find if this hypothetical phase really exists we have to check if our assumption is self-consistent, that is, if the expectation value of the magnetization, calculated in this framework, is finite as assumed. To this end we need to calculate the first order correction to the magnetization due to the fluctuations. This is the procedure followed in the derivation of the well-known Ginzburg criterion. - -The model is Gaussian to first order and so the momentum space correlation function is proportional to $k^{-2}$. Thus the real space two-point correlation function for each of these modes is -$$ -\left\langle \sigma_\alpha (r)\sigma_\alpha (0) \right\rangle = \frac{1}{\beta J} \int^{\frac{1}{a}} \frac{\mathrm{d}^d k}{(2\pi)^d} \frac{e^{i\mathbf{k} \cdot \mathbf{r}}}{k^2} -$$ - -where a is the lattice spacing. The average magnetization is -$$ -\left\langle S_1 \right\rangle =1-\tfrac{1}{2}\sum_\alpha\left\langle \sigma_\alpha^2 \right\rangle + \ldots -$$ - -and the first order correction can now easily be calculated: -$$ -\sum_\alpha \left\langle \sigma_\alpha ^2 (0) \right\rangle = (n-1)\frac{1}{\beta J} \int^{\frac{1}{a}}\frac{\mathrm{d}^d k}{(2\pi)^d} \frac{1}{k^2}. -$$ - -The integral above is proportional to -$$ -\int^{\frac{1}{a}} k^{d-3} \mathrm{d}k -$$ - -and so it is finite for d > 2, but infrared divergent for d ≤ 2 (logarithmically so for d = 2). However, this divergence is really an artifact of the linear approximation. In a more careful treatment, the average magnetization is zero. - -We thus conclude that for d ≤ 2 our assumption that there exists a phase of spontaneous magnetization is incorrect for all T > 0, because the fluctuations are strong enough to destroy the spontaneous symmetry breaking. This is a general result: - -Mermin–Wagner–Hohenberg Theorem. There is no phase with spontaneous breaking of a continuous symmetry for T > 0, in d ≤ 2 dimensions. - -The result can also be extended to other geometries, such as Heisenberg films with an arbitrary number of layers, as well as to other lattice systems (Hubbard model, s-f model). - -Much stronger results than absence of magnetization can actually be proved, and the setting can be substantially more general. In particular: - -1. The Hamiltonian can be invariant under the action of an arbitrary compact, connected Lie group G.
- -2. Long-range interactions can be allowed (provided that they decay fast enough; necessary and sufficient conditions are known). - -In this general setting, the Mermin–Wagner theorem admits the following strong form (stated here in an informal way): - -All (infinite-volume) Gibbs states associated to this Hamiltonian are invariant under the action of G. - -When the assumption that the Lie group be compact is dropped, a similar result holds, but with the conclusion that infinite-volume Gibbs states do not exist. - -Finally, there are other important applications of these ideas and methods, most notably to the proof that there cannot be non-translation-invariant Gibbs states in 2-dimensional systems. A typical such example would be the absence of crystalline states in a system of hard disks (with possibly additional attractive interactions). - -It has been proved, however, that interactions of hard-core type can lead in general to violations of the Mermin–Wagner theorem. - -As early as 1930, Felix Bloch argued, by diagonalizing the Slater determinant for fermions, that magnetism in 2D should not exist. Some easy arguments, which are summarized below, were given by Rudolf Peierls based on entropic and energetic considerations. Lev Landau also worked on symmetry breaking in two dimensions. - -One reason for the lack of global symmetry breaking is that one can easily excite long-wavelength fluctuations which destroy perfect order. "Easily excited" means that the energy of those fluctuations tends to zero for large enough systems. Consider a magnetic model (e.g. the XY model in one dimension): a chain of magnetic moments of length $L$, in the harmonic approximation, where the forces (torques) between neighbouring moments increase linearly with the angle of twisting $\gamma_i$. This implies that the energy due to twisting increases quadratically, $E_i \propto \gamma_i^2$. The total energy is the sum over all twisted pairs of magnetic moments, $E_{\text{tot}} \propto \sum_i \gamma_i^2$. If one considers the excited mode with the lowest energy in one dimension (see figure), then the moments on the chain of length $L$ are tilted by $2\pi$ along the chain. The relative angle between neighbouring moments is the same for all pairs of moments in this mode and equals $\gamma_i = 2\pi/N$, if the chain consists of $N$ magnetic moments. It follows that the total energy of this lowest mode is $E_{\text{tot}} \propto N \cdot \gamma_i^2 = N \frac{4\pi^2}{N^2}\propto L \frac{4\pi^2}{L^2}$. It decreases with increasing system size $\propto 1/L$ and tends to zero in the thermodynamic limit $L \to \infty$, $N \to \infty$, $L/N = \text{const}$. For arbitrarily large systems it follows that the lowest modes do not cost any energy and will be thermally excited. Simultaneously, the long-range order is destroyed on the chain. In two dimensions (or in a plane) the number of magnetic moments is proportional to the area of the plane, $N \propto L^2$. For the lowest excited mode the twist per bond is $\gamma_i = 2\pi/L$, so the energy is $E_{\text{tot}} \propto N \cdot \gamma_i^2 \propto L^2 \frac{4\pi^2}{L^2}$, which tends to a constant in the thermodynamic limit. Thus the modes will be excited at sufficiently large temperatures. In three dimensions, the number of magnetic moments is proportional to the volume $V = L^3$, and the energy of the lowest mode is $E_{\text{tot}} \propto N \cdot \gamma_i^2 \propto L^3 \frac{4\pi^2}{L^2}$. It diverges with system size and will thus not be excited for large enough systems.
Long range order is not affected by this mode and global symmetry breaking is allowed. - -[Figure: There is only one way between neighbouring particles in one dimension, two ways in two dimensions, and six different ways in three dimensions.] An entropic argument against perfect long range order in crystals with $D < 3$ is as follows (see figure): consider a chain of atoms/particles with an average particle distance of $ \langle a \rangle $. Thermal fluctuations between particle $ 0 $ and particle $ 1 $ will lead to fluctuations of the average particle distance of the order of $ \xi_{0,1} $, thus the distance is given by $ a = \langle a\rangle \pm \xi_{0,1}$. The fluctuations between particle $ -1 $ and $ 0 $ will be of the same size: $ |\xi_{-1,0}| = |\xi_{0,1}|$. We assume that the thermal fluctuations are statistically independent (which is evident if we consider only nearest neighbour interaction), so the fluctuations between particle $ -1 $ and particle $ +1 $ (with double the distance) have to be summed in a statistically independent (or incoherent) way: $ \xi_{-1,1} = \sqrt{2}\cdot \xi_{0,1}$. For particles separated by N times the average distance, the fluctuations increase with the square root, $ \xi_{0,N} = \sqrt{N} \cdot \xi_{0,1}$, if neighbouring fluctuations are summed independently. Although the average distance $ \langle a\rangle $ is well defined, the deviations from a perfect periodic chain increase with the square root of the system size. In three dimensions, one has to walk along three linearly independent directions to cover the whole space; in a cubic crystal, this is effectively along the space diagonal, to get from particle $ 0 $ to particle $ 3 $. As one can easily see in the figure, there are six different possibilities to do this. This implies that the fluctuations on the six different pathways cannot be statistically independent, since they pass the same particles at position $ 0 $ and $ 3 $. Now, the fluctuations of the six different ways have to be summed in a coherent way and will be of the order of $ \xi $ – independent of the size of the cube. The fluctuations stay finite and lattice sites are well defined. For the case of two dimensions, Herbert Wagner and David Mermin proved rigorously that fluctuation distances increase logarithmically with system size, $ \xi \propto \ln (L) $. This is frequently called the logarithmic divergence of displacements. - -[Figure: A 2D crystal with thermal fluctuations of the particle positions. Red lines symbolize the lattice axes, and green arrows symbolize the deviations from the equilibrium positions.] The image shows a (quasi-) two-dimensional crystal of colloidal particles. These are micrometer-sized particles dispersed in water and sedimented on a flat interface, so they can perform Brownian motion only within a plane. The sixfold crystalline order is easy to detect on a local scale, since the logarithmic increase of displacements is rather slow. The deviations from the (red) lattice axes are easy to detect, too, here shown as green arrows. The deviations are basically given by the elastic lattice vibrations (acoustic phonons). A direct experimental proof of Mermin–Wagner–Hohenberg fluctuations would be displacements that increase logarithmically with the distance from a locally fitted coordinate frame (blue). This logarithmic divergence goes along with an algebraic (slow) decay of positional correlations. The spatial order of a 2D crystal is called quasi-long-range (see also the hexatic phase for the phase behaviour of 2D ensembles).
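The incoherent-summation argument is easy to see numerically: summing N independent bond fluctuations produces a spread that grows like √N. A minimal Python sketch (names and the unit-variance Gaussian bond model are our own choices, not from the article):

```python
import random, statistics

def displacement_spread(N, trials=2000):
    """RMS deviation of particle N's position from N*<a> when each of the
    N bond lengths fluctuates independently (incoherent summation)."""
    devs = [sum(random.gauss(0, 1) for _ in range(N)) for _ in range(trials)]
    return statistics.pstdev(devs)

for N in (25, 100, 400):
    print(N, round(displacement_spread(N), 1))  # roughly 5, 10, 20: ~sqrt(N)
```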
- -Interestingly, significant signatures of Mermin–Wagner–Hohenberg fluctuations have been found not in crystals but in disordered amorphous systems. - -This work did not investigate the logarithmic displacements of lattice sites (which are difficult to quantify for a finite system size), but the magnitude of the mean squared displacement of the particles as a function of time. This way, the displacements are not analysed in space but in the time domain. The theoretical background is given by D. Cassi as well as F. Merkl and H. Wagner. This work analyses the recurrence probability of random walks and spontaneous symmetry breaking in various dimensions. The finite recurrence probability of a random walk in one and two dimensions is dual to the lack of perfect long-range order in one and two dimensions, while the vanishing recurrence probability of a random walk in 3D is dual to the existence of perfect long-range order and the possibility of symmetry breaking. - -Real magnets usually do not have a continuous symmetry, since the spin-orbit coupling of the electrons imposes an anisotropy. For atomic systems like graphene, one can show that monolayers of cosmological (or at least continental) size are necessary to measure a significant size of the amplitudes of fluctuations. - -A recent discussion of the Mermin–Wagner–Hohenberg theorem and its limitations is given by Bertrand Halperin. - -The most severe physical limitation comes from finite-size effects in 2D, because the suppression due to infrared fluctuations is only logarithmic in the size. The sample would have to be larger than the observable universe for a 2D superconducting transition to be suppressed below ~100 K. For magnetism, there is a roughly order-of-magnitude suppression of $T_c$, which still allows magnetic order in 2D samples at ~10 K. However, because disorder and interlayer coupling compete with finite-size effects at restoring order, it cannot be said a priori which of them is responsible for the observation of magnetic ordering in a given 2D sample. - -The discrepancy between the Mermin–Wagner–Hohenberg theorem (ruling out long-range order in 2D) and the first computer simulations (Alder & Wainwright), which indicated crystallization in 2D, once motivated Michael Kosterlitz and David Thouless to work on topological phase transitions in 2D. This work was awarded the 2016 Nobel Prize in Physics (together with Duncan Haldane). diff --git a/wiki/wikipedia/516.txt b/wiki/wikipedia/516.txt deleted file mode 100644 index 3599436067adfa0f1cda43b2b0b602ace7a776a9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/516.txt +++ /dev/null @@ -1,27 +0,0 @@ -In graph-theoretic mathematics, a cycle double cover is a collection of cycles in an undirected graph that together include each edge of the graph exactly twice. For instance, for any polyhedral graph, the faces of a convex polyhedron that represents the graph provide a double cover of the graph: each edge belongs to exactly two faces. - -It is an unsolved problem, posed by George Szekeres and Paul Seymour and known as the cycle double cover conjecture, whether every bridgeless graph has a cycle double cover. The conjecture can equivalently be formulated in terms of graph embeddings, and in that context is also known as the circular embedding conjecture. - -The usual formulation of the cycle double cover conjecture asks whether every bridgeless undirected graph has a collection of cycles such that each edge of the graph is contained in exactly two of the cycles.
The requirement that the graph be bridgeless is an obvious necessary condition for such a set of cycles to exist, because a bridge cannot belong to any cycle. A collection of cycles satisfying the condition of the cycle double cover conjecture is called a cycle double cover. Some graphs such as cycle graphs and bridgeless cactus graphs can only be covered by using the same cycle more than once, so this sort of duplication is allowed in a cycle double cover. - -A snark is a special case of a bridgeless graph, having the additional properties that every vertex has exactly three incident edges (that is, the graph is cubic) and that it is not possible to partition the edges of the graph into three perfect matchings (that is, the graph has no 3-edge coloring, and by Vizing's theorem has chromatic index 4). It turns out that snarks form the only difficult case of the cycle double cover conjecture: if the conjecture is true for snarks, it is true for any graph. - -Jaeger observes that, in any potential minimal counterexample to the cycle double cover conjecture, all vertices must have three or more incident edges. For, a vertex with only one edge incident forms a bridge, while if two edges are incident on a vertex, one can contract them to form a smaller graph such that any double cover of the smaller graph extends to one of the original graph. On the other hand, if a vertex v has four or more incident edges, one may “split off” two of those edges by removing them from the graph and replacing them by a single edge connecting their two other endpoints, while preserving the bridgelessness of the resulting graph. Again, a double cover of the resulting graph may be extended in a straightforward way to a double cover of the original graph: every cycle of the split off graph corresponds either to a cycle of the original graph, or to a pair of cycles meeting at v. Thus, every minimal counterexample must be cubic. But if a cubic graph can have its edges 3-colored (say with the colors red, blue, and green), then the subgraph of red and blue edges, the subgraph of blue and green edges, and the subgraph of red and green edges each form a collection of disjoint cycles that together cover all edges of the graph twice. Therefore, every minimal counterexample must be a non-3-edge-colorable bridgeless cubic graph, that is, a snark. - -One possible attack on the cycle double cover problem would be to show that there cannot exist a minimum counterexample, by proving that any graph contains a reducible configuration, a subgraph that can be replaced by a smaller subgraph in a way that would preserve the existence or nonexistence of a cycle double cover. For instance, if a cubic graph contains a triangle, a Δ-Y transform will replace the triangle by a single vertex; any cycle double cover of the smaller graph can be extended back to a cycle double cover of the original cubic graph. Therefore, a minimal counterexample to the cycle double cover conjecture must be a triangle-free graph, ruling out some snarks such as Tietze's graph which contain triangles. Through computer searches, it is known that every cycle of length 11 or less in a cubic graph forms a reducible configuration, and therefore that any minimal counterexample to the cycle double cover conjecture must have girth at least 12. - -Unfortunately, it is not possible to prove the cycle double cover conjecture using a finite set of reducible configurations. 
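Checking a proposed cycle double cover is straightforward. A minimal Python sketch (names ours) verifying that the four triangular faces of the tetrahedron double-cover the edges of K4, as in the polyhedral example above:

```python
from collections import Counter

def is_cycle_double_cover(edges, cycles):
    """Check that each edge appears in exactly two of the given cycles.
    Cycles are vertex sequences; consecutive vertices (wrapping around)
    are treated as edges."""
    covered = Counter()
    for cyc in cycles:
        for u, v in zip(cyc, cyc[1:] + cyc[:1]):
            covered[frozenset((u, v))] += 1
    return (all(covered[frozenset(e)] == 2 for e in edges)
            and len(covered) == len(edges))

# K4 drawn as a tetrahedron: four triangular faces, six edges.
k4_edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
faces = [[1, 2, 3], [1, 2, 4], [1, 3, 4], [2, 3, 4]]
print(is_cycle_double_cover(k4_edges, faces))  # True
```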
Every reducible configuration contains a cycle, so for every finite set S of reducible configurations there is a number γ such that all configurations in the set contain a cycle of length at most γ. However, there exist snarks with arbitrarily high girth, that is, with arbitrarily high bounds on the length of their shortest cycle. A snark G with girth greater than γ cannot contain any of the configurations in the set S, so the reductions in S are not strong enough to rule out the possibility that G might be a minimal counterexample. - -If a graph has a cycle double cover, the cycles of the cover can be used to form the 2-cells of a graph embedding onto a two-dimensional cell complex. In the case of a cubic graph, this complex always forms a manifold. The graph is said to be circularly embedded onto the manifold, in that every face of the embedding is a simple cycle in the graph. However, a cycle double cover of a graph with degree greater than three may not correspond to an embedding on a manifold: the cell complex formed by the cycles of the cover may have non-manifold topology at its vertices. The circular embedding conjecture or strong embedding conjecture states that every biconnected graph has a circular embedding onto a manifold. If so, the graph also has a cycle double cover, formed by the faces of the embedding. - -For cubic graphs, biconnectivity and bridgelessness are equivalent. Therefore, the circular embedding conjecture is clearly at least as strong as the cycle double cover conjecture. However, it turns out to be no stronger. If the vertices of a graph G are expanded to form a cubic graph, which is then circularly embedded, and the expansions are undone by contracting the added edges, the result will be a circular embedding of G itself. Therefore, if the cycle double cover conjecture is true, every biconnected graph has a circular embedding. That is, the cycle double cover conjecture is equivalent to the circular embedding conjecture, even though a cycle double cover and a circular embedding are not always the same thing. - -If a circular embedding exists, it might not be on a surface of minimal genus: Nguyen Huy Xuong described a biconnected toroidal graph none of whose circular embeddings lie on a torus. - -A stronger version of the circular embedding conjecture that has also been considered is the conjecture that every biconnected graph has a circular embedding on an orientable manifold. In terms of the cycle double cover conjecture, this is equivalent to the conjecture that there exists a cycle double cover, and an orientation for each of the cycles in the cover, such that for every edge e the two cycles that cover e are oriented in opposite directions through e. - -Alternatively, strengthenings of the conjecture that involve colorings of the cycles in the cover have also been considered. The strongest of these is a conjecture that every bridgeless graph has a circular embedding on an orientable manifold in which the faces can be 5-colored. If true, this would imply a conjecture of W. T. Tutte that every bridgeless graph has a nowhere-zero 5-flow. - -A stronger type of embedding than a circular embedding is a polyhedral embedding, an embedding of a graph on a surface in such a way that every face is a simple cycle and every two faces that intersect do so in either a single vertex or a single edge. (In the case of a cubic graph, this can be simplified to a requirement that every two faces that intersect do so in a single edge.) 
Thus, in view of the reduction of the cycle double cover conjecture to snarks, it is of interest to investigate polyhedral embeddings of snarks. Unable to find such embeddings, Branko Grünbaum conjectured that they do not exist, but Martin Kochol later disproved Grünbaum's conjecture by finding a snark with a polyhedral embedding. - -See also Petersen coloring conjecture. diff --git a/wiki/wikipedia/517.txt b/wiki/wikipedia/517.txt deleted file mode 100644 index ac5218f8a7162cb755a7f15bc75b050790b0bba8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/517.txt +++ /dev/null @@ -1,3 +0,0 @@ -In hyperbolic geometry, Thurston's double limit theorem gives a condition for a sequence of quasi-Fuchsian groups to have a convergent subsequence. It was introduced by Thurston and is a major step in Thurston's proof of the hyperbolization theorem for the case of manifolds that fiber over the circle. - -By Bers's theorem, quasi-Fuchsian groups (of some fixed genus) are parameterized by points in T×T, where T is Teichmüller space of the same genus. Suppose that there is a sequence of quasi-Fuchsian groups corresponding to points (g_i, h_i) in T×T. Also suppose that the sequences g_i, h_i converge to points μ,μ′ in the Thurston boundary of Teichmüller space of projective measured laminations. If the points μ,μ′ have the property that any nonzero measured lamination has positive intersection number with at least one of them, then the sequence of quasi-Fuchsian groups has a subsequence that converges algebraically. diff --git a/wiki/wikipedia/518.txt b/wiki/wikipedia/518.txt deleted file mode 100644 index 9b77a9582fde54a61e236cbfe2ab66f2daf35cdc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/518.txt +++ /dev/null @@ -1,52 +0,0 @@ -In number theory, a Sierpiński number is an odd natural number k such that $k \times 2^n + 1 $ is composite for all natural numbers n. In 1960, Wacław Sierpiński proved that there are infinitely many odd integers k which have this property. - -In other words, when k is a Sierpiński number, all members of the following set are composite: -$$ -\left\{ k \cdot 2^n + 1 : n \in\mathbb{N}\right\}. -$$ - -If the form is instead $k \times 2^n - 1 $, then k is a Riesel number. - -The sequence of currently known Sierpiński numbers begins with: - -78557, 271129, 271577, 322523, 327739, 482719, 575041, 603713, 903983, 934909, 965431, 1259779, 1290677, 1518781, 1624097, 1639459, 1777613, 2131043, 2131099, 2191531, 2510177, 2541601, 2576089, 2931767, 2931991, ... . - -The number 78557 was proved to be a Sierpiński number by John Selfridge in 1962, who showed that all numbers of the form 78557⋅2^n + 1 have a factor in the covering set {3, 5, 7, 13, 19, 37, 73}. For another known Sierpiński number, 271129, the covering set is {3, 5, 7, 13, 17, 241}. Most currently known Sierpiński numbers possess similar covering sets. - -However, in 1995 A. S. Izotov showed that some fourth powers could be proved to be Sierpiński numbers without establishing a covering set for all values of n. His proof depends on the aurifeuillean factorization t^4⋅2^{4m+2} + 1 = (t^2⋅2^{2m+1} + t⋅2^{m+1} + 1)⋅(t^2⋅2^{2m+1} - t⋅2^{m+1} + 1). This establishes that all n ≡ 2 (mod 4) give rise to a composite, and so it remains to eliminate only n ≡ 0, 1, 3 (mod 4) using a covering set. - -The Sierpiński problem asks for the value of the smallest Sierpiński number. In private correspondence with Paul Erdős, Selfridge conjectured that 78,557 was the smallest Sierpiński number.
No smaller Sierpiński numbers have been discovered, and it is now believed that 78,557 is the smallest number. - -To show that 78,557 really is the smallest Sierpiński number, one must show that all the odd numbers smaller than 78,557 are not Sierpiński numbers. That is, for every odd k below 78,557, there needs to exist a positive integer n such that k⋅2^n + 1 is prime. - -In 1976, Nathan Mendelsohn determined that the second provable Sierpiński number is the prime k = 271129. The prime Sierpiński problem asks for the value of the smallest prime Sierpiński number, and there is an ongoing "Prime Sierpiński search" which tries to prove that 271129 is the first Sierpiński number which is also a prime. As of this writing, the nine prime values of k less than 271129 for which a prime of the form k⋅2^n + 1 is not known are: - -k = 22699, 67607, 79309, 79817, 152267, 156511, 222113, 225931, and 237019. - -So far, no prime has been found for these values of k with $n \le 26736459$. - -The first two, being less than 78557, are also unsolved cases of the (non-prime) Sierpiński problem described above. The most recently eliminated candidate was k = 168451, when the prime number $168451\times2^{19375200} + 1$ was discovered by PrimeGrid in September 2017. The number is 5,832,522 digits long. - -Suppose that both preceding Sierpiński problems had finally been solved, showing that 78557 is the smallest Sierpiński number and that 271129 is the smallest prime Sierpiński number. This still leaves unsolved the question of the second Sierpiński number; there could exist a composite Sierpiński number k such that $78557 < k < 271129$. An ongoing search is trying to prove that 271129 is the second Sierpiński number, by testing all k values between 78557 and 271129, prime or not. - -Solving the extended Sierpiński problem, the most demanding of the three posed problems, requires the elimination of 21 remaining candidates $k < 271129$, of which nine are prime (see above) and twelve are composite. The latter include k = 21181, 24737, 55459 from the original Sierpiński problem. As of this writing, the following eight values of k, unique to the extended Sierpiński problem, remain: - -k = 91549, 131179, 163187, 200749, 209611, 227723, 229673, and 238411. - -So far, no prime has been found for these values of k with $n \le 21458090$. - -In December 2019, $99739\times2^{14019102} + 1$ was found to be prime by PrimeGrid, eliminating k = 99739. The number is 4,220,176 digits long. - -The most recent elimination was in December 2021, when $202705\times2^{21320516} + 1$ was found to be prime by PrimeGrid, eliminating k = 202705. The number is 6,418,121 digits long. - -A number may be simultaneously Sierpiński and Riesel. These are called Brier numbers. The smallest five known examples are 3316923598096294713661, 10439679896374780276373, 11615103277955704975673, 12607110588854501953787, 17855036657007596110949, ... . - -If we take n to be a negative integer, then the number k⋅2^n + 1 becomes $\frac{2^{|n|} + k}{2^{|n|}}$. When k is odd, this is a fraction in reduced form, with numerator 2^{|n|} + k. A dual Sierpiński number is defined as an odd natural number k such that 2^n + k is composite for all natural numbers n. There is a conjecture that the set of these numbers is the same as the set of Sierpiński numbers; for example, 2^n + 78557 is composite for all natural numbers n. - -For odd values of k the least n such that 2^n + k is prime are - -1, 1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 3, 2, 1, 1, 4, 2, 1, 2, 1, 1, 2, 1, 5, 2, ...
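The sequence above is easy to reproduce. A minimal sketch (the function name is illustrative; sympy's isprime is assumed available):

```
# Reproduce the sequence above: for odd k, the least n >= 1 with 2**n + k prime.
from sympy import isprime

def least_exponent(k, limit=1000):
    for n in range(1, limit):
        if isprime(2**n + k):
            return n
    return None  # no prime found below the search limit

print([least_exponent(k) for k in range(1, 51, 2)])
# expected: [1, 1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 3, 2, 1, 1, 4, 2, 1, 2, 1, 1, 2, 1, 5, 2]
```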
- -The odd values of k for which 2^n + k is composite for all n < k are - -773, 2131, 2491, 4471, 5101, 7013, 8543, 10711, 14717, 17659, 19081, 19249, 20273, 21661, 22193, 26213, 28433, ... diff --git a/wiki/wikipedia/519.txt b/wiki/wikipedia/519.txt deleted file mode 100644 index 6d5e69fa331a39a7a0866df21a535f87905e2366..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/519.txt +++ /dev/null @@ -1,42 +0,0 @@ -In statistics, the Lehmann–Scheffé theorem is a prominent statement, tying together the ideas of completeness, sufficiency, uniqueness, and best unbiased estimation. The theorem states that any estimator which is unbiased for a given unknown quantity and that depends on the data only through a complete, sufficient statistic is the unique best unbiased estimator of that quantity. The Lehmann–Scheffé theorem is named after Erich Leo Lehmann and Henry Scheffé, who established it in two early papers. - -If T is a complete sufficient statistic for θ and E(g(T)) = τ(θ) then g(T) is the uniformly minimum-variance unbiased estimator (UMVUE) of τ(θ). - -Let $\vec{X}= X_1, X_2, \dots, X_n$ be a random sample from a distribution that has p.d.f. (or p.m.f. in the discrete case) $f(x:\theta)$ where $\theta \in \Omega$ is a parameter in the parameter space. Suppose $Y = u(\vec{X})$ is a sufficient statistic for θ, and let $\{ f_Y(y:\theta): \theta \in \Omega\}$ be a complete family. If $\varphi$ is a function of $Y$ satisfying $\operatorname{E}[\varphi(Y)] = \theta$, then $\varphi(Y)$ is the unique MVUE of θ. - -By the Rao–Blackwell theorem, if $Z$ is an unbiased estimator of θ then $\varphi(Y):= \operatorname{E}[Z\mid Y]$ defines an unbiased estimator of θ with the property that its variance is not greater than that of $Z$. - -Now we show that this function is unique. Suppose $W$ is another candidate MVUE estimator of θ. Then again $\psi(Y):= \operatorname{E}[W\mid Y]$ defines an unbiased estimator of θ with the property that its variance is not greater than that of $W$. Then - - - -\operatorname{E}[\varphi(Y) - \psi(Y)] = 0 \text{ for all } \theta \in \Omega. - - - -Since $\{ f_Y(y:\theta): \theta \in \Omega\}$ is a complete family - - - -\operatorname{E}[\varphi(Y) - \psi(Y)] = 0 \implies \varphi(y) - \psi(y) = 0 \text{ for all } \theta \in \Omega - - - -and therefore the function $\varphi$ is the unique function of Y with variance not greater than that of any other unbiased estimator. We conclude that $\varphi(Y)$ is the MVUE. - -An example of an improvable Rao–Blackwell improvement, when using a minimal sufficient statistic that is not complete, was provided by Galili and Meilijson in 2016. Let $X_1, \ldots, X_n$ be a random sample from a scale-uniform distribution $X \sim U ( (1-k) \theta, (1+k) \theta),$ with unknown mean $\operatorname{E}[X]=\theta$ and known design parameter $k \in (0,1)$. In the search for "best" possible unbiased estimators for $\theta$, it is natural to consider $X_1$ as an initial (crude) unbiased estimator for $\theta$ and then try to improve it. Since $X_1$ is not a function of $T = \left( X_{(1)}, X_{(n)} \right)$, the minimal sufficient statistic for $\theta$ (where $X_{(1)} = \min_i X_i $ and $X_{(n)} = \max_i X_i $), it may be improved using the Rao–Blackwell theorem as follows: -$$ -\hat{\theta}_{RB} =\operatorname{E}_\theta[X_1\mid X_{(1)}, X_{( n)}] = \frac{X_{(1)}+X_{(n)}} 2. -$$ - -However, the following unbiased estimator can be shown to have lower variance: -$$ -\hat{\theta}_{LV} = \frac 1 {k^2\frac{n-1}{n+1}+1} \cdot \frac{(1-k)X_{(1)} + (1+k) X_{(n)}} 2.
-$$ - -And in fact, it could be even further improved when using the following estimator: -$$ -\hat{\theta}_\text{BAYES}=\frac{n+1} n \left[1- \frac{\frac{X_{(1)} (1+k)}{X_{(n)} (1-k)}-1}{ \left (\frac{X_{(1)} (1+k)}{X_{(n)} (1-k)}\right )^{n+1} -1} \right] \frac{X_{(n)}}{1+k} -$$ - -The model is a scale model. Optimal equivariant estimators can then be derived for loss functions that are invariant. diff --git a/wiki/wikipedia/52.txt b/wiki/wikipedia/52.txt deleted file mode 100644 index 6e41648c4d482797ec6a434a3a818f02677d518c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/52.txt +++ /dev/null @@ -1,49 +0,0 @@ -In applied mathematics, the maximum generalized assignment problem is a problem in combinatorial optimization. This problem is a generalization of the assignment problem in which both tasks and agents have a size. Moreover, the size of each task might vary from one agent to the other. - -This problem in its most general form is as follows: There are a number of agents and a number of tasks. Any agent can be assigned to perform any task, incurring some cost and profit that may vary depending on the agent-task assignment. Moreover, each agent has a budget and the sum of the costs of tasks assigned to it cannot exceed this budget. It is required to find an assignment in which all agents do not exceed their budget and the total profit of the assignment is maximized. - -In the special case in which all the agents' budgets and all tasks' costs are equal to 1, this problem reduces to the assignment problem. When the costs and profits of all tasks do not vary between different agents, this problem reduces to the multiple knapsack problem. If there is a single agent, then this problem reduces to the knapsack problem. - -In the following, we have n kinds of items, $a_1$ through $a_n$ and m kinds of bins $b_1$ through $b_m$. Each bin $b_i$ is associated with a budget $t_i$. For a bin $b_i$, each item $a_j$ has a profit $p_{ij}$ and a weight $w_{ij}$. A solution is an assignment from items to bins. A feasible solution is a solution in which for each bin $b_i$ the total weight of assigned items is at most $t_i$. The solution's profit is the sum of profits for each item-bin assignment. The goal is to find a maximum profit feasible solution. - -Mathematically the generalized assignment problem can be formulated as an integer program: - - - -\begin{align} - -\text{maximize } & \sum_{i=1}^m\sum_{j=1}^n p_{ij} x_{ij} \\ - -\text{subject to } & \sum_{j=1}^n w_{ij} x_{ij} \le t_i & & i=1, \ldots, m; \\ - -& \sum_{i=1}^m x_{ij} = 1 & & j=1, \ldots, n; \\ - -& x_{ij} \in \{0,1\} & & i=1, \ldots, m, \quad j=1, \ldots, n; - -\end{align} - - - -The generalized assignment problem is NP-hard. However, there are linear-programming relaxations which give a $(1 - 1/e)$-approximation. - -For the problem variant in which not every item must be assigned to a bin, there is a family of algorithms for solving the GAP by using a combinatorial translation of any algorithm for the knapsack problem into an approximation algorithm for the GAP. - -Using any $\alpha$-approximation algorithm ALG for the knapsack problem, it is possible to construct a ($\alpha + 1$)-approximation for the generalized assignment problem in a greedy manner using a residual profit concept. - -The algorithm constructs a schedule in iterations, where during iteration $j$ a tentative selection of items to bin $b_j$ is selected. - -The selection for bin $b_j$ might change as items might be reselected in a later iteration for other bins.
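A minimal sketch of this greedy loop follows (the residual profit used here is the one defined formally just below; the names solve_knapsack and gap_greedy are illustrative, and an exact 0/1 knapsack DP plays the role of ALG, so $\alpha = 1$; weights and budgets are assumed to be positive integers):

```
# Sketch of the greedy residual-profit algorithm for GAP, under the stated
# assumptions (exact knapsack DP as ALG, positive integer weights/budgets).

def solve_knapsack(profits, weights, capacity):
    """Indices of items maximizing total profit subject to the capacity."""
    dp = [(0, ())] * (capacity + 1)  # dp[c] = (best profit, chosen indices)
    for i, (p, w) in enumerate(zip(profits, weights)):
        new_dp = list(dp)
        for c in range(w, capacity + 1):
            cand = dp[c - w][0] + p
            if cand > new_dp[c][0]:
                new_dp[c] = (cand, dp[c - w][1] + (i,))
        dp = new_dp
    return max(dp, key=lambda entry: entry[0])[1]

def gap_greedy(p, w, t):
    """p[i][j], w[i][j]: profit and weight of item i in bin j; t[j]: budgets.
    Returns T with T[i] = bin tentatively assigned to item i, or -1 if none."""
    n, m = len(p), len(t)
    T = [-1] * n
    for j in range(m):
        # residual profit of item i for bin j (items may be re-selected later;
        # items with negative residual profit are never picked by the exact DP)
        P = [p[i][j] - (p[i][T[i]] if T[i] != -1 else 0) for i in range(n)]
        for i in solve_knapsack(P, [w[i][j] for i in range(n)], t[j]):
            T[i] = j
    return T
```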
The residual profit of an item $x_i$ for bin $b_j$ is $p_{ij}$ if $x_i$ is not selected for any other bin, or $p_{ij} - p_{ik}$ if $x_i$ is selected for bin $b_k$. - -Formally: We use a vector $T$ to indicate the tentative schedule during the algorithm. Specifically, $T[i]=j$ means the item $x_i$ is scheduled on bin $b_j$ and $T[i]=-1$ means that item $x_i$ is not scheduled. The residual profit in iteration $j$ is denoted by $P_j$, where $P_j[i]=p_{ij}$ if item $x_i$ is not scheduled (i.e. $T[i]=-1$) and $P_j[i]=p_{ij}-p_{ik}$ if item $x_i$ is scheduled on bin $b_k$ (i.e. $T[i]=k$). - -In pseudocode: - -Set $T[i]=-1 \text{ for } i = 1\ldots n$ - -For $j=1,\ldots,m$ do: - -Call ALG to find a solution to bin $b_j$ using the residual profit function $P_j$. Denote the selected items by $S_j$. - -Update $T$ using $S_j$, i.e., $T[i]=j$ for all $i \in S_j$. diff --git a/wiki/wikipedia/520.txt b/wiki/wikipedia/520.txt deleted file mode 100644 index c47473fd68df21dc8d90b37e8d8fcc1466e0f2f3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/520.txt +++ /dev/null @@ -1,17 +0,0 @@ -In mathematics, the Erdős–Ulam problem asks whether the plane contains a dense set of points whose Euclidean distances are all rational numbers. It is named after Paul Erdős and Stanislaw Ulam. - -The Erdős–Anning theorem states that a set of points with integer distances must either be finite or lie on a single line. However, there are other infinite sets of points with rational distances. For instance, on the unit circle, let S be the set of points -$$ -(\cos\theta,\sin\theta) -$$ - -where $\theta$ is restricted to values that cause $\tan\tfrac{\theta}{4}$ to be a rational number. For each such point, both $\sin\tfrac{\theta}{2}$ and $\cos\tfrac\theta 2$ are rational, and if $\theta$ and $\varphi$ define two points in S, then their distance is the rational number -$$ - \left| 2\sin\frac \theta 2 \cos\frac \varphi 2 -2\sin\frac \varphi 2 \cos\frac \theta 2 \right|. -$$ - -More generally, a circle with radius $\rho$ contains a dense set of points at rational distances to each other if and only if $\rho^2$ is rational. However, these sets are only dense on their circle, not dense on the whole plane. - -In 1946, Stanislaw Ulam asked whether there exists a set of points at rational distances from each other that forms a dense subset of the Euclidean plane. It was later shown that a positive answer would contradict the Bombieri–Lang conjecture; using different methods, Hector Pasten proved that the abc conjecture also implies a negative solution to the Erdős–Ulam problem. - -If the Erdős–Ulam problem has a positive solution, it would provide a counterexample to the Bombieri–Lang conjecture and to the abc conjecture. It would also solve Harborth's conjecture, on the existence of drawings of planar graphs in which all distances are integers. If a dense rational-distance set exists, any straight-line drawing of a planar graph could be perturbed by a small amount (without introducing crossings) to use points from this set as its vertices, and then scaled to make the distances integers. However, like the Erdős–Ulam problem, Harborth's conjecture remains unproven. diff --git a/wiki/wikipedia/521.txt b/wiki/wikipedia/521.txt deleted file mode 100644 index c78b8378f0b5b9e04a3a2d30e77c1f04b284eb89..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/521.txt +++ /dev/null @@ -1,13 +0,0 @@ -In mathematical logic, the cut rule is an inference rule of sequent calculus. It is a generalisation of the classical modus ponens inference rule.
Its meaning is that, if a formula A appears as a conclusion in one proof and a hypothesis in another, then another proof in which the formula A does not appear can be deduced. In the particular case of modus ponens, for example, occurrences of man are eliminated from Every man is mortal and Socrates is a man to deduce Socrates is mortal. - -In the formal notation of the sequent calculus: - -cut: - - - -\cfrac{\Gamma \vdash A, \Delta \qquad \Gamma', A \vdash \Delta'} {\Gamma, \Gamma' \vdash \Delta, \Delta'} - -The cut rule is the subject of an important theorem, the cut elimination theorem. It states that any judgement that possesses a proof in the sequent calculus that makes use of the cut rule also possesses a cut-free proof, that is, a proof that does not make use of the cut rule. diff --git a/wiki/wikipedia/522.txt b/wiki/wikipedia/522.txt deleted file mode 100644 index ecb2cd59428c21c820158ac6a08c0385a2e2b99d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/522.txt +++ /dev/null @@ -1,24 +0,0 @@ -In integral geometry (otherwise called geometric probability theory), Hadwiger's theorem characterises the valuations on convex bodies in $\R^n.$ It was proved by Hugo Hadwiger. - -Let $\mathbb{K}^n$ be the collection of all compact convex sets in $\R^n.$ A valuation is a function $v : \mathbb{K}^n \to \R$ such that $v(\varnothing) = 0$ and for every $S, T \in \mathbb{K}^n$ that satisfy $S \cup T \in \mathbb{K}^n,$ - -v(S) + v(T) = v(S \cap T) + v(S \cup T)~. - -A valuation is called continuous if it is continuous with respect to the Hausdorff metric. A valuation is called invariant under rigid motions if $v(\varphi(S)) = v(S)$ whenever $S \in \mathbb{K}^n$ and $\varphi$ is either a translation or a rotation of $\R^n.$ - -The quermassintegrals $W_j : \mathbb{K}^n \to \R$ are defined via Steiner's formula - -\mathrm{Vol}_n(K + t B) = \sum_{j=0}^n \binom{n}{j} W_j(K) t^j~, - -where $B$ is the Euclidean ball. For example, $W_0$ is the volume, $W_1$ is proportional to the surface measure, $W_{n-1}$ is proportional to the mean width, and $W_n$ is the constant $\operatorname{Vol}_n(B).$ -$$ -W_j -$$ is a valuation which is homogeneous of degree $n - j,$ that is, - -W_j(tK) = t^{n-j} W_j(K)~, \quad t \geq 0~. - -Any continuous valuation $v$ on $\mathbb{K}^n$ that is invariant under rigid motions can be represented as - -v(S) = \sum_{j=0}^n c_j W_j(S)~. - -Any continuous valuation $v$ on $\mathbb{K}^n$ that is invariant under rigid motions and homogeneous of degree $j$ is a multiple of $W_{n-j}.$ diff --git a/wiki/wikipedia/523.txt b/wiki/wikipedia/523.txt deleted file mode 100644 index b8024810a0681c0ecf19973855a48b115cfb9ce9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/523.txt +++ /dev/null @@ -1,33 +0,0 @@ -In concurrent programming, an operation (or set of operations) is linearizable if it consists of an ordered list of invocation and response events (callbacks), that may be extended by adding response events such that: - -# The extended list can be re-expressed as a sequential history (is serializable). - -# That sequential history is a subset of the original unextended list. - -Informally, this means that the unmodified list of events is linearizable if and only if its invocations were serializable, but some of the responses of the serial schedule have yet to return.
In other words: - -* its invocations and responses can be reordered to yield a sequential history; - -* that sequential history is correct according to the sequential definition of the object; - -* if a response preceded an invocation in the original history, it must still precede it in the sequential reordering. - -(Note that the first two bullet points here match serializability: the operations appear to happen in some order. It is the last point which is unique to linearizability, and is thus the major contribution of Herlihy and Wing.) Herlihy also proved that compare-and-swap is better for certain other algorithms that can't be implemented at all using only fetch-and-increment. - -So CPU designs with both fetch-and-increment and compare-and-swap (or equivalent instructions) may be a better choice than ones with only one or the other. - -Another approach is to turn the naive algorithm into a critical section, preventing other threads from disrupting it, using a lock. Once again fixing the non-atomic counter algorithm: - -# Acquire a lock, excluding other threads from running the critical section (steps 2-4) at the same time; - -# read the value in the memory location; - -# add one to the value; - -# write the incremented value back to the memory location; - -# release the lock. - -This strategy works as expected; the lock prevents other threads from updating the value until it is released. However, when compared with direct use of atomic operations, it can suffer from significant overhead due to lock contention. To improve program performance, it may therefore be a good idea to replace simple critical sections with atomic operations for non-blocking synchronization (as we have just done for the counter with compare-and-swap and fetch-and-increment), instead of the other way around, but unfortunately a significant improvement is not guaranteed and lock-free algorithms can easily become too complicated to be worth the effort. diff --git a/wiki/wikipedia/524.txt b/wiki/wikipedia/524.txt deleted file mode 100644 index 4f51a66291fac77150268f51700c478089eb2bab..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/524.txt +++ /dev/null @@ -1,7 +0,0 @@ -In mathematics, von Neumann's theorem is a result in the operator theory of linear operators on Hilbert spaces. - -Let $G$ and $H$ be Hilbert spaces, and let $T : \operatorname{dom}(T) \subseteq G \to H$ be an unbounded operator from $G$ into $H.$ Suppose that $T$ is a closed operator and that $T$ is densely defined, that is, $\operatorname{dom}(T)$ is dense in $G.$ Let $T^* : \operatorname{dom}\left(T^*\right) \subseteq H \to G$ denote the adjoint of $T.$ Then $T^* T$ is also densely defined, and it is self-adjoint. That is, - -\left(T^* T\right)^* = T^* T - -and the operators on the right- and left-hand sides have the same dense domain in $G.$ diff --git a/wiki/wikipedia/525.txt b/wiki/wikipedia/525.txt deleted file mode 100644 index 19ec6f3b6bda126f518a1b0875f654f285c0c078..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/525.txt +++ /dev/null @@ -1,83 +0,0 @@ -Sperner's theorem, in discrete mathematics, describes the largest possible families of finite sets none of which contain any other sets in the family. It is one of the central results in extremal set theory. It is named after Emanuel Sperner, who published it in 1928. - -This result is sometimes called Sperner's lemma, but the name "Sperner's lemma" also refers to an unrelated result on coloring triangulations.
To differentiate the two results, the result on the size of a Sperner family is now more commonly known as Sperner's theorem. - -A family of sets in which none of the sets is a strict subset of another is called a Sperner family, or an antichain of sets, or a clutter. For example, the family of k-element subsets of an n-element set is a Sperner family. No set in this family can contain any of the others, because a containing set has to be strictly bigger than the set it contains, and in this family all sets have equal size. The value of k that makes this example have as many sets as possible is n/2 if n is even, or either of the nearest integers to n/2 if n is odd. For this choice, the number of sets in the family is $\tbinom{n}{\lfloor n/2\rfloor}$. - -Sperner's theorem states that these examples are the largest possible Sperner families over an n-element set. - -Formally, the theorem states that, - -#for every Sperner family S whose union has a total of n elements, $|S| \le \binom{n}{\lfloor n/2\rfloor},$ and - -#equality holds if and only if S consists of all subsets of an n-element set that have size $\lfloor n/2\rfloor$ or all that have size $\lceil n/2\rceil$. - -Sperner's theorem can also be stated in terms of partial order width. The family of all subsets of an n-element set (its power set) can be partially ordered by set inclusion; in this partial order, two distinct elements are said to be incomparable when neither of them contains the other. The width of a partial order is the largest number of elements in an antichain, a set of pairwise incomparable elements. Translating this terminology into the language of sets, an antichain is just a Sperner family, and the width of the partial order is the maximum number of sets in a Sperner family. - -Thus, another way of stating Sperner's theorem is that the width of the inclusion order on a power set is $\binom{n}{\lfloor n/2\rfloor}$. - -A graded partially ordered set is said to have the Sperner property when one of its largest antichains is formed by a set of elements that all have the same rank. In this terminology, Sperner's theorem states that the partially ordered set of all subsets of a finite set, partially ordered by set inclusion, has the Sperner property. - -There are many proofs of Sperner's theorem, each leading to different generalizations (see Anderson (1987)). - -The following proof is due to Lubell. Let sk denote the number of k-sets in S. For all 0 ≤ k ≤ n, -$$ -{n \choose \lfloor{n/2}\rfloor} \ge {n \choose k} -$$ - -and, thus, -$$ -{s_k \over {n \choose \lfloor{n/2}\rfloor}} \le {s_k \over {n \choose k}}. -$$ - -Since S is an antichain, we can sum over the above inequality from k = 0 to n and then apply the LYM inequality to obtain -$$ -\sum_{k=0}^n{s_k \over {n \choose \lfloor{n/2}\rfloor}} \le \sum_{k=0}^n{s_k \over {n \choose k}} \le 1, -$$ - -which means -$$ - |S| = \sum_{k=0}^n s_k \le {n \choose \lfloor{n/2}\rfloor}. -$$ - -This completes the proof of part 1. - -To have equality, all the inequalities in the preceding proof must be equalities. Since -$$ -{n \choose \lfloor{n/2}\rfloor} = {n \choose k} -$$ - -if and only if $k = \lfloor{n/2}\rfloor$ or $\lceil{n/2}\rceil,$ we conclude that equality implies that S consists only of sets of sizes $\lfloor{n/2}\rfloor$ or $\lceil{n/2}\rceil.$ For even n that concludes the proof of part 2. - -For odd n there is more work to do, which we omit here because it is complicated. See Anderson (1987), pp. 3–4. 
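As a quick illustration of the statement (not part of the proof above), Sperner's theorem can be verified exhaustively for a small ground set; the following minimal sketch checks n = 4:

```
# Brute-force check of Sperner's theorem for n = 4: the largest antichain
# of subsets of a 4-element set has C(4,2) = 6 members.
from math import comb

n = 4
N = 1 << n  # subsets of {0,...,n-1}, encoded as bitmasks 0..15

def is_antichain(family):
    # no member may be a proper subset of another
    return not any(a != b and (a & b) == a for a in family for b in family)

best = 0
for fam in range(1 << N):  # every family of subsets, as a bitmask of bitmasks
    family = [s for s in range(N) if (fam >> s) & 1]
    if is_antichain(family):
        best = max(best, len(family))

print(best, comb(n, n // 2))  # prints: 6 6
```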
- -There are several generalizations of Sperner's theorem for subsets of $\mathcal P(E),$ the poset of all subsets of E. - -A chain is a subfamily $\{S_0,S_1,\dots,S_r\} \subseteq \mathcal P(E)$ that is totally ordered, i.e., $S_0 \subset S_1\subset \dots\subset S_r$ (possibly after renumbering). The chain has r + 1 members and length r. An r-chain-free family (also called an r-family) is a family of subsets of E that contains no chain of length r. Erdős proved that the largest size of an r-chain-free family is the sum of the r largest binomial coefficients $\binom{n}{i}$. The case r = 1 is Sperner's theorem. - -In the set $\mathcal P(E)^p$ of p-tuples of subsets of E, we say a p-tuple $(S_1,\dots,S_p)$ is ≤ another one, $(T_1,\dots,T_p),$ if $S_i \subseteq T_i$ for each i = 1,2,...,p. We call $(S_1,\dots,S_p)$ a p-composition of E if the sets $S_1,\dots,S_p$ form a partition of E. Meshalkin proved that the maximum size of an antichain of p-compositions is the largest p-multinomial coefficient $\binom{n}{n_1\ n_2\ \dots\ n_p},$ that is, the coefficient in which all $n_i$ are as nearly equal as possible (i.e., they differ by at most 1). Meshalkin proved this by proving a generalized LYM inequality. - -The case p = 2 is Sperner's theorem, because then $S_2 = E \setminus S_1$ and the assumptions reduce to the sets $S_1$ being a Sperner family. - -Beck combined the Erdős and Meshalkin theorems by adapting Meshalkin's proof of his generalized LYM inequality, and showed that the largest size of a family of p-compositions such that the sets in the i-th position of the p-tuples, ignoring duplications, are r-chain-free, for every $i = 1,2,\dots,p-1$ (but not necessarily for i = p), is not greater than the sum of the $r^{p-1}$ largest p-multinomial coefficients. - -In the finite projective geometry PG(d, Fq) of dimension d over a finite field of order q, let $\mathcal L(p,F_q)$ be the family of all subspaces. When partially ordered by set inclusion, this family is a lattice. Rota and Harper proved that the largest size of an antichain in $\mathcal L(p,F_q)$ is the largest Gaussian coefficient $\begin{bmatrix} d+1 \\ k\end{bmatrix};$ this is the projective-geometry analog, or q-analog, of Sperner's theorem. - -They further proved that the largest size of an r-chain-free family in $\mathcal L(p,F_q)$ is the sum of the r largest Gaussian coefficients. Their proof is by a projective analog of the LYM inequality. - -Beck obtained a Meshalkin-like generalization of the Rota–Harper theorem. In PG(d, Fq), a Meshalkin sequence of length p is a sequence $(A_1,\ldots,A_p)$ of projective subspaces such that no proper subspace of PG(d, Fq) contains them all and their dimensions sum to $d-p+1$.
The theorem is that the largest size of a family of Meshalkin sequences of length p in PG(d, Fq), such that the subspaces appearing in position i of the sequences contain no chain of length r for each $i = 1,2,\dots,p-1,$ is not more than the sum of the largest $r^{p-1}$ of the quantities -$$ -\begin{bmatrix} d+1 \\ n_1\ n_2\ \dots\ n_p \end{bmatrix} q^{s_2(n_1,\ldots,n_p)}, -$$ - -where $\begin{bmatrix} d+1 \\ n_1\ n_2\ \dots\ n_p \end{bmatrix}$ (in which we assume that $d+1 = n_1+\cdots+n_p$) denotes the p-Gaussian coefficient -$$ -\begin{bmatrix} d+1 \\ n_1 \end{bmatrix} \begin{bmatrix} d+1-n_1 \\ n_2 \end{bmatrix} \cdots \begin{bmatrix} d+1-(n_1+\cdots+n_{p-1} )\\ n_p \end{bmatrix} -$$ - -and -$$ -s_2(n_1,\ldots,n_p) := n_1n_2 + n_1n_3 + n_2n_3 + n_1n_4 + \cdots + n_{p-1}n_p, -$$ - -the second elementary symmetric function of the numbers $n_1, n_2, \dots, n_p.$ diff --git a/wiki/wikipedia/526.txt b/wiki/wikipedia/526.txt deleted file mode 100644 index 4c0e414fabb5a7a0ec945d8d79024b99f724a49e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/526.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, the Brauer–Suzuki–Wall theorem, proved by Brauer, Suzuki, and Wall, characterizes the one-dimensional unimodular projective groups over finite fields. diff --git a/wiki/wikipedia/527.txt b/wiki/wikipedia/527.txt deleted file mode 100644 index dc154b3275d9ade34f8562a0117c1669b5dddfdb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/527.txt +++ /dev/null @@ -1,116 +0,0 @@ -In mathematical analysis, the Minkowski inequality establishes that the Lp spaces are normed vector spaces. Let S be a measure space, let 1 ≤ p < ∞ and let f and g be elements of Lp(S). Then f + g is in Lp(S), and we have the triangle inequality -$$ -\|f+g\|_p \le \|f\|_p + \|g\|_p -$$ - -with equality for 1 < p < ∞ if and only if f and g are positively linearly dependent, i.e., f = λg for some λ ≥ 0 or g = 0. Here, the norm is given by: -$$ -\|f\|_p = \left( \int |f|^p d\mu \right)^{\frac{1}{p}} -$$ - -if p < ∞, or in the case p = ∞ by the essential supremum -$$ -\|f\|_\infty = \operatorname{ess\ sup}_{x\in S}|f(x)|. -$$ - -The Minkowski inequality is the triangle inequality in Lp(S). In fact, it is a special case of the more general fact -$$ -\|f\|_p = \sup_{\|g\|_q = 1} \int |fg| d\mu, \qquad \tfrac{1}{p} + \tfrac{1}{q} = 1 -$$ - -where it is easy to see that the right-hand side satisfies the triangle inequality. - -Like Hölder's inequality, the Minkowski inequality can be specialized to sequences and vectors by using the counting measure: -$$ -\biggl( \sum_{k=1}^n |x_k + y_k|^p \biggr)^{1/p} \le \biggl( \sum_{k=1}^n |x_k|^p \biggr)^{1/p} + \biggl( \sum_{k=1}^n |y_k|^p \biggr)^{1/p} -$$ - -for all real (or complex) numbers x_1, ..., x_n, y_1, ..., y_n and where n is the cardinality of S (the number of elements in S). - -The inequality is named after the German mathematician Hermann Minkowski. - -First, we prove that f+g has finite p-norm if f and g both do, which follows by -$$ -|f + g|^p \le 2^{p-1}(|f|^p + |g|^p). -$$ - -Indeed, here we use the fact that $h(x)=|x|^p$ is convex over R+ (for p > 1) and so, by the definition of convexity, -$$ -\left|\tfrac{1}{2} f + \tfrac{1}{2} g\right|^p\le\left|\tfrac{1}{2} |f| + \tfrac{1}{2} |g|\right|^p \le \tfrac{1}{2}|f|^p + \tfrac{1}{2} |g|^p. -$$ - -This means that -$$ -|f+g|^p \le \tfrac{1}{2}|2f|^p + \tfrac{1}{2}|2g|^p=2^{p-1}|f|^p + 2^{p-1}|g|^p. -$$ - -Now, we can legitimately talk about $\|f + g\|_p$. If it is zero, then Minkowski's inequality holds.
We now assume that $\|f + g\|_p$ is not zero. Using the triangle inequality and then Hölder's inequality, we find that - -\begin{align} - -\|f + g\|_p^p &= \int |f + g|^p \mathrm{d}\mu \\ - -&= \int |f + g| \cdot |f + g|^{p-1} \mathrm{d}\mu \\ - -&\le \int (|f| + |g|)|f + g|^{p-1} \mathrm{d}\mu \\ - -&=\int |f||f + g|^{p-1} \mathrm{d}\mu+\int |g||f + g|^{p-1} \mathrm{d}\mu \\ - -&\le \left( \left(\int |f|^p \mathrm{d}\mu\right)^{\frac{1}{p}} + \left (\int |g|^p \mathrm{d}\mu\right)^{\frac{1}{p}} \right) \left(\int |f + g|^{(p-1)\left(\frac{p}{p-1}\right)} \mathrm{d}\mu \right)^{1-\frac{1}{p}} && \text{ Hölder's inequality} \\ - -&= \left (\|f\|_p + \|g\|_p \right )\frac{\|f + g\|_p^p}{\|f + g\|_p} - -\end{align} - -We obtain Minkowski's inequality by multiplying both sides by -$$ -\frac{\|f + g\|_p}{\|f + g\|_p^p}. -$$ - -Suppose that (S_1, μ_1) and (S_2, μ_2) are two σ-finite measure spaces and F : S_1 × S_2 → R is measurable. Then Minkowski's integral inequality is: -$$ - \left[\int_{S_2}\left|\int_{S_1}F(x,y) \mu_1(\mathrm{d}x)\right|^p \mu_2(\mathrm{d}y)\right]^{\frac{1}{p}} \le \int_{S_1}\left(\int_{S_2}|F(x,y)|^p\mu_2(\mathrm{d}y)\right)^{\frac{1}{p}}\mu_1(\mathrm{d}x), -$$ - -with obvious modifications in the case p = ∞. If p > 1, and both sides are finite, then equality holds only if |F(x, y)| = φ(x)ψ(y) a.e. for some non-negative measurable functions φ and ψ. - -If μ_1 is the counting measure on a two-point set S_1 = {1,2}, then Minkowski's integral inequality gives the usual Minkowski inequality as a special case: putting f_i(y) = F(i, y) for i = 1, 2, the integral inequality gives -$$ -\|f_1 + f_2\|_p = \left(\int_{S_2}\left|\int_{S_1}F(x,y)\mu_1(\mathrm{d}x)\right|^p\mu_2(\mathrm{d}y)\right)^{\frac{1}{p}} \le\int_{S_1}\left(\int_{S_2}|F(x,y)|^p\mu_2(\mathrm{d}y)\right)^{\frac{1}{p}}\mu_1(\mathrm{d}x)=\|f_1\|_p + \|f_2\|_p. -$$ - -This notation has been generalized to -$$ -\|f\|_{p,q} = \left(\int_{\mathbb{R}^m} \left[\int_{\mathbb{R}^n}|f(x,y)|^q\mathrm{d}y\right]^{\frac{p}{q}} \mathrm{d}x \right)^{\frac{1}{p}} -$$ - -for $f:\mathbb{R}^{m+n}\to E$, with $\mathcal{L}_{p,q}(\mathbb{R}^{m+n},E) = \{f\in E^{\mathbb{R}^{m+n}} : \|f\|_{p,q} < \infty\}$. Using this notation, manipulation of the exponents reveals that, if $p>q$, then $\|f\|_{p,q}\leq\|f\|_{q,p}$. - -When $p < 1$ the reverse inequality holds: -$$ -\|f+g\|_p \ge \|f\|_p + \|g\|_p -$$ - -We further need the restriction that both $f$ and $g$ are non-negative, as we can see from the example $f=-1, g=1$ and $p=1$: $\|f+g\|_1 = 0 < 2 = \|f\|_1 + \|g\|_1$. - -The reverse inequality follows from the same argument as the standard Minkowski, but uses that Hölder's inequality is also reversed in this range. - -Using the reverse Minkowski inequality, we may prove that power means with $p\le 1$, such as the harmonic mean and the geometric mean, are concave. - -The Minkowski inequality can be generalized to other functions $\phi(x)$ beyond the power function $x^{p}$. The generalized inequality has the form -$$ -\phi^{-1}(\sum_{i=1}^{n} \phi(x_{i}+y_{i})) \leq \phi^{-1}(\sum_{i=1}^{n} \phi(x_{i}))+\phi^{-1}(\sum_{i=1}^{n} \phi(y_{i})) -$$ - -Various sufficient conditions on $\phi$ have been found by Mulholland and others. For example, for $x\geq 0$ one set of sufficient conditions from Mulholland is - -# $\phi(x) $ is continuous and strictly increasing with $\phi(0)=0 $. - -#$\phi(x) $ is a convex function of $x $. - -#$\log\phi(x) $ is a convex function of $\log(x) $.
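As a numerical sanity check (a minimal sketch, not from the original text), the counting-measure form of the inequality can be tested on random vectors for several exponents:

```
# Check ||f+g||_p <= ||f||_p + ||g||_p for random vectors and several p >= 1.
import numpy as np

rng = np.random.default_rng(0)
f, g = rng.normal(size=1000), rng.normal(size=1000)

def lp_norm(x, p):
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

for p in (1.0, 1.5, 2.0, 3.0, 10.0):
    lhs, rhs = lp_norm(f + g, p), lp_norm(f, p) + lp_norm(g, p)
    assert lhs <= rhs + 1e-9  # small tolerance for floating-point error
    print(f"p={p}: {lhs:.4f} <= {rhs:.4f}")
```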
diff --git a/wiki/wikipedia/528.txt b/wiki/wikipedia/528.txt deleted file mode 100644 index f61736d7ddc535990830c7144e01b20441cf8d40..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/528.txt +++ /dev/null @@ -1,13 +0,0 @@ -In theoretical computer science and metric geometry, the GNRS conjecture connects the theory of graph minors, the stretch factor of embeddings, and the approximation ratio of multi-commodity flow problems. It is named after Anupam Gupta, Ilan Newman, Yuri Rabinovich, and Alistair Sinclair, who formulated it in 2004. - -One formulation of the conjecture involves embeddings of the shortest path distances of weighted undirected graphs into $\ell_1$ spaces, real vector spaces in which the distance between two vectors is the sum of their coordinate differences. If an embedding maps all pairs of vertices with distance $d$ to pairs of vectors with distance in the range $[cd,Cd]$ then its stretch factor or distortion is the ratio $C/c$; an isometry has stretch factor one, and all other embeddings have greater stretch factor. - -The graphs that have an embedding with at most a given distortion are closed under graph minor operations, operations that delete vertices or edges from a graph or contract some of its edges. The GNRS conjecture states that, conversely, every nontrivial minor-closed family of graphs can be embedded into an $\ell_1$ space with bounded distortion. That is, the distortion of graphs in the family is bounded by a constant that depends on the family but not on the individual graphs. For instance, the planar graphs are closed under minors. Therefore, it would follow from the GNRS conjecture that the planar graphs have bounded distortion. - -An alternative formulation involves analogues of the max-flow min-cut theorem for undirected multi-commodity flow problems. The ratio of the maximum flow to the minimum cut, in such problems, is known as the flow-cut gap. The largest flow-cut gap that a flow problem can have on a given graph equals the distortion of the optimal $\ell_1$ embedding of the graph. Therefore, the GNRS conjecture can be rephrased as stating that the minor-closed families of graphs have bounded flow-cut gap. - -Arbitrary $n$-vertex graphs (indeed, arbitrary $n$-point metric spaces) have $\ell_1$ embeddings with distortion $O(\log n)$. Some graphs have logarithmic flow-cut gap, and in particular this is true for a multicommodity flow with every pair of vertices having equal demand on a bounded-degree expander graph. Therefore, this logarithmic bound on the distortion of arbitrary graphs is tight. Planar graphs can be embedded with smaller distortion, $O(\sqrt{\log n})$. - -Although the GNRS conjecture remains unsolved, it has been proven for some minor-closed graph families that bounded-distortion embeddings exist. These include the series–parallel graphs and the graphs of bounded circuit rank, the graphs of bounded pathwidth, the 2-clique-sums of graphs of bounded size, and the $k$-outerplanar graphs. - -In contrast to the behavior of metric embeddings into $\ell_1$ spaces, every finite metric space has embeddings into $\ell_2$ with stretch arbitrarily close to one by the Johnson–Lindenstrauss lemma, and into $\ell_\infty$ spaces with stretch exactly one by the tight span construction. 
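To illustrate the final remark empirically (a minimal sketch under arbitrary choices of dimensions, not from the original text), a random Gaussian projection gives an $\ell_2$ embedding with stretch close to one, in the spirit of the Johnson–Lindenstrauss lemma:

```
# Empirical stretch factor of a random Gaussian (Johnson-Lindenstrauss) map.
import numpy as np

rng = np.random.default_rng(1)
n_points, d, k = 50, 1000, 400            # points, ambient dim, target dim
X = rng.normal(size=(n_points, d))
A = rng.normal(size=(k, d)) / np.sqrt(k)  # projection, scaled to preserve norms
Y = X @ A.T

ratios = [np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
          for i in range(n_points) for j in range(i + 1, n_points)]
print(min(ratios), max(ratios))  # both close to 1, so the stretch is small
```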
diff --git a/wiki/wikipedia/529.txt b/wiki/wikipedia/529.txt deleted file mode 100644 index 5654b010925c588668da8ad33f330655a9f9aa2f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/529.txt +++ /dev/null @@ -1,10 +0,0 @@ -The Kempner series is a modification of the harmonic series, formed by omitting all terms whose denominator expressed in base 10 contains the digit 9. That is, it is the sum -$$ - {\sideset{}{'}\sum_{n=1}^\infty} \frac{1}{n} -$$ - -where the prime indicates that n takes only values whose decimal expansion has no nines. The series was first studied by A. J. Kempner in 1914. The series is counterintuitive because, unlike the harmonic series, it converges. Kempner showed the sum of this series is less than 80. Baillie found an efficient algorithm for the more general problem of any omitted string of digits. For example, the sum of 1/n where n has no instances of "42" is about . Another example: the sum of 1/n where n has no occurrence of the digit string "314159" is about . (All values are rounded in the last decimal place). - -Kempner's proof of convergence considered the sum of reciprocals of j-th powers simultaneously for all j. He developed a recursion that expresses the j-th power contribution from the (k + 1)-digit block in terms of all higher power contributions of the k-digit block. Therefore, with a small amount of computation, the original series (which is the value for j = 1, summed over all k) can be accurately estimated. - -Most authors do not name this series. The name "Kempner series" is used in MathWorld and in Havil's book Gamma on the Euler–Mascheroni constant. diff --git a/wiki/wikipedia/53.txt b/wiki/wikipedia/53.txt deleted file mode 100644 index 6c0fa014264b86f2d909bf59576036fb1517ac69..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/53.txt +++ /dev/null @@ -1,49 +0,0 @@ -Quadratic pseudo-Boolean optimisation (QPBO) is a combinatorial optimization method for quadratic pseudo-Boolean functions in the form -$$ - f(\mathbf{x}) = w_0 + \sum_{p \in V} w_p(x_p) + \sum_{(p, q) \in E} w_{pq}(x_p, x_q) -$$ - -in the binary variables $x_p \in \{0, 1\} \forall p \in V = \{1, \dots, n\}$, with $E \subseteq V \times V$. If $f$ is submodular then QPBO produces a global optimum equivalently to graph cut optimization, while if $f$ contains non-submodular terms then the algorithm produces a partial solution with specific optimality properties, in both cases in polynomial time. - -QPBO is a useful tool for inference on Markov random fields and conditional random fields, and has applications in computer vision problems such as image segmentation and stereo matching. - -If the coefficients $w_{pq}$ of the quadratic terms satisfy the submodularity condition -$$ - w_{pq}(0, 0) + w_{pq}(1, 1) \le w_{pq}(0, 1) + w_{pq}(1, 0) -$$ - -then the function can be efficiently optimised with graph cut optimization. It is indeed possible to represent it with a non-negative weighted graph, and the global minimum can be found in polynomial time by computing a minimum cut of the graph, which can be computed with algorithms such as Ford–Fulkerson, Edmonds–Karp, and Boykov–Kolmogorov's. - -If the function is not submodular, then the problem is NP-hard in the general case and it is not always possible to solve it exactly in polynomial time. It is possible to replace the target function with a similar but submodular approximation, e.g. 
by removing all non-submodular terms or replacing them with submodular approximations, but such an approach is generally sub-optimal and produces satisfactory results only if the number of non-submodular terms is relatively small. - -QPBO builds an extended graph, introducing a set of auxiliary variables ideally equivalent to the negation of the variables in the problem. If the nodes in the graph associated to a variable (representing the variable itself and its negation) are separated by the minimum cut of the graph in two different connected components, then the optimal value for such variable is well defined, otherwise it is not possible to infer it. This method produces results generally superior to submodular approximations of the target function. - -QPBO produces a solution where each variable assumes one of three possible values: true, false, and undefined, noted in the following as 1, 0, and $\emptyset$ respectively. The solution has the following two properties. - -* Partial optimality: if $f$ is submodular, then QPBO produces a global minimum exactly, equivalent to graph cut, and all variables have a non-undefined value; if submodularity is not satisfied, the result will be a partial solution $\mathbf{x}$ where a subset $\hat{V} \subseteq V$ of the variables have a non-undefined value. A partial solution is always part of a global solution, i.e. there exists a global minimum point $\mathbf{x^*}$ for $f$ such that $x_i = x_i^*$ for each $i \in \hat{V}$. - -* Persistence: given a solution $\mathbf{x}$ generated by QPBO and an arbitrary assignment of values $\mathbf{y}$ to the variables, if a new solution $\hat{\mathbf{y}}$ is constructed by replacing $y_i$ with $x_i$ for each $i \in \hat{V}$, then $f(\hat{\mathbf{y}}) \le f(\mathbf{y})$. - -The algorithm can be divided in three steps: graph construction, max-flow computation, and assignment of values to the variables. - -When constructing the graph, the set of vertices $V$ contains the source and sink nodes $s$ and $t$, and a pair of nodes $p$ and $p'$ for each variable. After re-parametrising the function to normal form, a pair of edges is added to the graph for each term $w$: - -* for each term $w_p(0)$ the edges $p \rightarrow t$ and $s \rightarrow p'$, with weight $\frac{1}{2} w_p(0)$; - -* for each term $w_p(1)$ the edges $s \rightarrow p$ and $p' \rightarrow t$, with weight $\frac{1}{2} w_p(1)$; - -* for each term $w_{pq}(0, 1)$ the edges $p \rightarrow q$ and $q' \rightarrow p'$, with weight $\frac{1}{2} w_{pq}(0, 1)$; - -* for each term $w_{pq}(1, 0)$ the edges $q \rightarrow p$ and $p' \rightarrow q'$, with weight $\frac{1}{2} w_{pq}(1, 0)$; - -* for each term $w_{pq}(0, 0)$ the edges $p \rightarrow q'$ and $q \rightarrow p'$, with weight $\frac{1}{2} w_{pq}(0, 0)$; - -* for each term $w_{pq}(1, 1)$ the edges $q' \rightarrow p$ and $p' \rightarrow q$, with weight $\frac{1}{2} w_{pq}(1, 1)$. - -The minimum cut of the graph can be computed with a max-flow algorithm. In the general case, the minimum cut is not unique, and each minimum cut corresponds to a different partial solution; however, it is possible to build a minimum cut such that the number of undefined variables is minimal. - -Once the minimum cut is known, each variable receives a value depending upon the position of its corresponding nodes $p$ and $p'$: if $p$ belongs to the connected component containing the source and $p'$ belongs to the connected component containing the sink then the variable will have value of 0.
Vice versa, if $p$ belongs to the connected component containing the sink and $p'$ to the one containing the source, then the variable will have value of 1. If both nodes $p$ and $p'$ belong to the same connected component, then the value of the variable will be undefined. - -The way undefined variables can be handled is dependent upon the context of the problem. In the general case, given a partition of the graph in two sub-graphs and two solutions, each one optimal for one of the sub-graphs, then it is possible to combine the two solutions into one solution optimal for the whole graph in polynomial time. However, computing an optimal solution for the subset of undefined variables is still an NP-hard problem. In the context of iterative algorithms such as $\alpha$-expansion, a reasonable approach is to leave the value of undefined variables unchanged, since the persistence property guarantees that the target function will have non-increasing value. Different exact and approximate strategies to minimise the number of undefined variables exist. - -The problem of optimizing higher-order pseudo-boolean functions is generally difficult. It is always possible to reduce a higher-order function to a quadratic function which is equivalent with respect to the optimisation, a problem known as "higher-order clique reduction" (HOCR), and the result of such reduction can be optimized with QPBO. Generic methods for reduction of arbitrary functions rely on specific substitution rules and in the general case they require the introduction of auxiliary variables. In practice most terms can be reduced without introducing additional variables, resulting in a simpler optimization problem, and the remaining terms can be reduced exactly, with addition of auxiliary variables, or approximately, without addition of any new variable. diff --git a/wiki/wikipedia/530.txt b/wiki/wikipedia/530.txt deleted file mode 100644 index c9d72ca7647f9d44bebb0d02cbd82155fde373c7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/530.txt +++ /dev/null @@ -1,150 +0,0 @@ -In combinatorial game theory, the Sprague-Grundy theorem states that every impartial game under the normal play convention is equivalent to a one-heap game of nim, or to an infinite generalization of nim. It can therefore be represented as a natural number, the size of the heap in its equivalent game of nim, as an ordinal number in the infinite generalization, or alternatively as a nimber, the value of that one-heap game in an algebraic system whose addition operation combines multiple heaps to form a single equivalent heap in nim. - -The Grundy value or nim-value of any impartial game is the unique nimber that the game is equivalent to. In the case of a game whose positions are indexed by the natural numbers (like nim itself, which is indexed by its heap sizes), the sequence of nimbers for successive positions of the game is called the nim-sequence of the game. - -The Sprague–Grundy theorem and its proof encapsulate the main results of a theory discovered independently by R. P. Sprague (1936) and P. M. Grundy (1939). - -For the purposes of the Sprague-Grundy theorem, a game is a two-player sequential game of perfect information satisfying the ending condition (all games come to an end: there are no infinite lines of play) and the normal play condition (a player who cannot move loses). - -At any given point in the game, a player's position is the set of moves they are allowed to make.
As an example, we can define the zero game to be the two-player game where neither player has any legal moves. Referring to the two players as $A$ (for Alice) and $B$ (for Bob), we would denote their positions as $(A, B) = (\{\}, \{\})$, since the set of moves each player can make is empty. - -An impartial game is one in which at any given point in the game, each player is allowed exactly the same set of moves. Normal-play nim is an example of an impartial game. In nim, there are one or more heaps of objects, and two players (we'll call them Alice and Bob), take turns choosing a heap and removing 1 or more objects from it. The winner is the player who removes the final object from the final heap. The game is impartial because for any given configuration of pile sizes, the moves Alice can make on her turn are exactly the same moves Bob would be allowed to make if it were his turn. In contrast, a game such as checkers is not impartial because, supposing Alice were playing red and Bob were playing black, for any given arrangement of pieces on the board, if it were Alice's turn, she would only be allowed to move the red pieces, and if it were Bob's turn, he would only be allowed to move the black pieces. - -Note that any configuration of an impartial game can therefore be written as a single position, because the moves will be the same no matter whose turn it is. For example, the position of the zero game can simply be written $\{\}$, because if it's Alice's turn, she has no moves to make, and if it's Bob's turn, he has no moves to make either. - -A move can be associated with the position it leaves the next player in. - -Doing so allows positions to be defined recursively. For example, consider the following game of Nim played by Alice and Bob. - -* At step 6 of the game (when all of the heaps are empty) the position is $\{\}$, because Bob has no valid moves to make. We name this position $*0$. - -* At step 5, Alice had exactly one option: to remove one object from heap C, leaving Bob with no moves. Since her move leaves Bob in position $*0$, her position is written $\{ *0 \}$. We name this position $*1$. - -* At step 4, Bob had two options: remove one from B or remove one from C. Note, however, that it didn't really matter which heap Bob removed the object from: Either way, Alice would be left with exactly one object in exactly one pile. So, using our recursive definition, Bob really only has one move: $*1$. Thus, Bob's position is $\{*1\}$. - -* At step 3, Alice had 3 options: remove two from C, remove one from C, or remove one from B. Removing two from C leaves Bob in position $*1$. Removing one from C leaves Bob with two piles, each of size one, i.e., position $\{*1\}$, as described in step 4. However, removing 1 from B would leave Bob with two objects in a single pile. His moves would then be $*0$ and $*1$, so her move would result in the position $\{*0, *1\}$. We call this position $*2$. Alice's position is then the set of all her moves: $\big\{*1, \{*1\}, *2\big\}$. - -* Following the same recursive logic, at step 2, Bob's position is $\big\{ \{*1, \{*1\}, *2\}, *2\big\}$. - -* Finally, at step 1, Alice's position is - -\Big\{ - -\big\{*1, \{*1\}, *2\big\}, - -\big\{*2, \{*1, \{*1\},*2\} \big\}, - -\big\{\{*1\}, \{\{*1\}\}, \{*1, \{*1\}, *2\}\big\} - -\Big\}. - -The special names $*0$, $*1$, and $*2$ referenced in our example game are called nimbers. In general, the nimber $*n$ corresponds to the position in a game of nim where there are exactly $n$ objects in exactly one heap. 
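This kind of recursive analysis can also be done mechanically. A minimal sketch (names are illustrative, not from the original text) computes the value of a nim position as the smallest non-negative integer not among the values of its options, the "mex" operation used later in the proof:

```
# Compute the nimber of a nim position recursively: its value is the smallest
# non-negative integer not among the values of its options (the "mex").
from functools import lru_cache

def mex(values):
    m = 0
    while m in values:
        m += 1
    return m

@lru_cache(maxsize=None)
def nim_value(heaps):
    options = set()
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):  # remove 1..h objects from heap i
            rest = tuple(sorted(heaps[:i] + (h - take,) + heaps[i + 1:]))
            options.add(nim_value(rest))
    return mex(options)

print(nim_value((1, 1, 2)))  # 2, equal to the XOR of the heap sizes 1 ^ 1 ^ 2
```

A nonzero value marks a position as a win for the next player, consistent with the analysis above.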
- -Formally, nimbers are defined inductively as follows: $*0$ is $\{\}$, $*1 = \{*0\}$, $*2 = \{*0, *1\}$ and for all $n \geq 0$, $*(n+1) = *n \cup \{*n\}$. - -While the word nimber comes from the game nim, nimbers can be used to describe the positions of any finite, impartial game, and in fact, the Sprague-Grundy theorem states that every instance of a finite, impartial game can be associated with a single nimber. - -Two games can be combined by adding their positions together. - -For example, consider another game of nim with heaps $A'$, $B'$, and $C'$. - -We can combine it with our first example to get a combined game with six heaps: $A$, $B$, $C$, $A'$, $B'$, and $C'$: - -To differentiate between the two games, for the first example game, we'll label its starting position $\color{blue}S$, and color it blue: - -\color{blue}S = \Big\{ - -\big\{*1, \{*1\}, *2\big\}, - -\big\{*2, \{*1, \{*1\},*2\} \big\}, - -\big\{\{*1\}, \{\{*1\}\}, \{*1, \{*1\}, *2\}\big\} - -\Big\} - -For the second example game, we'll label the starting position $\color{red}S'$ and color it red: - - - -\color{red}S' = \Big\{\{*1\}\Big\} - -. - -To compute the starting position of the combined game, remember that a player can either make a move in the first game, leaving the second game untouched, or make a move in the second game, leaving the first game untouched. So the combined game's starting position is: - - - -\color{blue}S \color{black} + \color{red}S' \color{black}= - -\Big\{ - -\color{blue}S\color{black} + \color{red} \{*1\} \color{black} - -\Big\} \cup \Big\{ - -\color{red}S'\color{black} + \color{blue}\{*1, \{*1\}, *2\} \color{black}, - -\color{red}S'\color{black} + \color{blue} \{*2, \{*1, \{*1\},*2\} \} \color{black}, - -\color{red}S'\color{black} + \color{blue} \{\{*1\}, \{\{*1\}\}, \{*1, \{*1\}, *2\}\} \color{black} - -\Big\} - - - -The explicit formula for adding positions is: $S+S'=\{S+s'\mid s'\in S'\}\cup\{s+S'\mid s\in S\}$, which means that addition is both commutative and associative. - -===Equivalence=== - -Positions in impartial games fall into two outcome classes: either the next player (the one whose turn it is) wins (an $\boldsymbol{\mathcal{N}}$- position), or the previous player wins (a $\boldsymbol{\mathcal{P}}$- position). So, for example, $*0$ is a $\mathcal{P}$-position, while $*1$ is an $\mathcal{N}$-position. - -Two positions $G$ and $G'$ are equivalent if, no matter what position $H$ is added to them, they are always in the same outcome class. - -Formally, -$$ -G \approx G' -$$ if and only if $\forall H $, $G + H$ is in the same outcome class as $G' + H$. - -To use our running examples, notice that in both the first and second games above, we can show that on every turn, Alice has a move that forces Bob into a $\mathcal{P}$-position. Thus, both $\color{blue}S$ and $\color{red}S'$ are $\mathcal{N}$-positions. (Notice that in the combined game, Bob is the player with the $\mathcal{N}$-positions. In fact, $\color{blue}S \color{black} + \color{red}S' $ is a $\mathcal{P}$-position, which as we will see in Lemma 2, means $ \color{blue} S \color{black} \approx \color{red} S'$.) - -As an intermediate step to proving the main theorem, we show that for every position $G$ and every $\mathcal{P}$-position $A$, the equivalence $G\approx A+G$ holds. By the above definition of equivalence, this amounts to showing that $G+H$ and $A+G+H$ share an outcome class for all $H$. - -Suppose that $G+H$ is a $\mathcal{P}$-position. 
Then the previous player has a winning strategy for $A+G+H$: respond to moves in $A$ according to their winning strategy for $A$ (which exists by virtue of $A$ being a $\mathcal{P}$-position), and respond to moves in $G+H$ according to their winning strategy for $G+H$ (which exists for the analogous reason). So $A+G+H$ must also be a $\mathcal{P}$-position. - -On the other hand, if $G+H$ is an $\mathcal{N}$-position, then $A+G+H$ is also an $\mathcal{N}$-position, because the next player has a winning strategy: choose a $\mathcal{P}$-position from among the $G+H$ options, and we conclude from the previous paragraph that adding $A$ to that position is still a $\mathcal{P}$-position. Thus, in this case, $A+G+H$ must be an $\mathcal{N}$-position, just like $G+H$. - -As these are the only two cases, the lemma holds. - -As a further step, we show that $G\approx G'$ if and only if $G+G'$ is a $\mathcal{P}$-position. - -In the forward direction, suppose that $G\approx G'$. Applying the definition of equivalence with $H=G$, we find that $G'+G$ (which is equal to $G+G'$ by commutativity of addition) is in the same outcome class as $G+G$. But $G+G$ must be a $\mathcal{P}$-position: for every move made in one copy of $G$, the previous player can respond with the same move in the other copy, and so always make the last move. - -In the reverse direction, since $A=G+G'$ is a $\mathcal{P}$-position by hypothesis, it follows from the first lemma, $G\approx G+A$, that $G\approx G+(G+G')$. Similarly, since $B=G+G$ is also a $\mathcal{P}$-position, it follows from the first lemma in the form $G'\approx G'+B$ that $G'\approx G'+(G+G)$. By associativity and commutativity, the right-hand sides of these results are equal. Furthermore, $\approx$ is an equivalence relation because equality is an equivalence relation on outcome classes. Via the transitivity of $\approx$, we can conclude that $G\approx G'$. - -We prove that all positions are equivalent to a nimber by structural induction. The more specific result, that the given game's initial position must be equivalent to a nimber, shows that the game is itself equivalent to a nimber. - -Consider a position $G = \{G_1, G_2, \ldots, G_k\}$. By the induction hypothesis, all of the options are equivalent to nimbers, say $G_i \approx *n_i$. So let $G'=\{*n_1, *n_2, \ldots, *n_k\}$. We will show that $G \approx *m$, where $m$ is the mex (minimum exclusion) of the numbers $n_1, n_2, \ldots, n_k$, that is, the smallest non-negative integer not equal to some $n_i$. - -The first thing we need to note is that $G \approx G'$, by way of the second lemma. If $k$ is zero, the claim is trivially true. Otherwise, consider $G+G'$. If the next player makes a move to $G_i$ in $G$, then the previous player can move to $*n_i$ in $G'$, and conversely if the next player makes a move in $G'$. After this, the position is a $\mathcal{P}$-position by the lemma's forward implication. Therefore, $G+G'$ is a $\mathcal{P}$-position, and, citing the lemma's reverse implication, $G \approx G'$. - -Now let us show that $G'+*m$ is a $\mathcal{P}$-position, which, using the second lemma once again, means that $G'\approx *m$. We do so by giving an explicit strategy for the previous player. - -Suppose that $G'$ and $*m$ are empty. Then $G'+*m$ is the null set, clearly a $\mathcal{P}$-position. - -Or consider the case that the next player moves in the component $*m$ to the option $*m'$ where $m' < m$. Because $m$ was defined as the minimum excluded number, some $n_i$ equals $m'$, so the previous player can move in $G'$ to $*n_i = *m'$; the result is a position plus itself, which as shown above is a $\mathcal{P}$-position. - -Finally, suppose instead that the next player moves in the component $G'$ to the option $*n_i$. If $n_i < m$, the previous player moves in $*m$ to $*n_i$; and if $n_i > m$, the previous player moves in $*n_i$ to $*m$; in either case the result is a position plus itself.
(It is not possible that $n_i = m$ because $m$ was defined to be different from all the $n_i$.) - -In summary, we have $G\approx G'$ and $G'\approx *m$. By transitivity, we conclude that $G \approx *m$, as desired. - -If $G$ is a position of an impartial game, the unique integer $m$ such that $G \approx *m$ is called its Grundy value, or Grundy number, and the function that assigns this value to each such position is called the Sprague–Grundy function. R. P. Sprague and P. M. Grundy independently gave an explicit definition of this function, not based on any concept of equivalence to nim positions, and showed that it had the following properties: - -* The Grundy value of a single nim pile of size $m$ (i.e. of the position $*m$) is $m$; - -* A position is a loss for the next player to move (i.e. a $\mathcal{P}$-position) if and only if its Grundy value is zero; and - -* The Grundy value of the sum of a finite set of positions is just the nim-sum of the Grundy values of its summands. - -It follows straightforwardly from these results that if a position $G$ has a Grundy value of $m$, then $G + H$ has the same Grundy value as $*m + H$, and therefore belongs to the same outcome class, for any position $H$. Thus, although Sprague and Grundy never explicitly stated the theorem described in this article, it follows directly from their results and is credited to them. - -These results have subsequently been developed into the field of combinatorial game theory, notably by Richard Guy, Elwyn Berlekamp, John Horton Conway and others, where they are now encapsulated in the Sprague–Grundy theorem and its proof in the form described here. The field is presented in the books Winning Ways for your Mathematical Plays and On Numbers and Games. diff --git a/wiki/wikipedia/531.txt b/wiki/wikipedia/531.txt deleted file mode 100644 index dcdcb07b3e573abfe717444226932a46236727d9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/531.txt +++ /dev/null @@ -1,53 +0,0 @@ -In mathematics, Serre's multiplicity conjectures, named after Jean-Pierre Serre, are certain purely algebraic problems, in commutative algebra, motivated by the needs of algebraic geometry. Since André Weil's initial definition of intersection numbers, around 1949, there had been a question of how to provide a more flexible and computable theory. - -Let R be a (Noetherian, commutative) regular local ring and P and Q be prime ideals of R. In 1958, Serre realized that classical algebraic-geometric ideas of multiplicity could be generalized using the concepts of homological algebra. Serre defined the intersection multiplicity of R/P and R/Q by means of the Tor functors of homological algebra, as - - - -\chi (R/P,R/Q):=\sum _{i=0}^\infty (-1)^i\ell_R (\operatorname{Tor}^R_i(R/P,R/Q)). - - - -This requires the concept of the length of a module, denoted here by $\ell_R$, and the assumption that - - - -\ell _R((R/P)\otimes(R/Q)) < \infty. - - - -If this idea were to work, however, certain classical relationships would presumably have to continue to hold. Serre singled out four important properties. These then became conjectures, challenging in the general case. (There are more general statements of these conjectures where R/P and R/Q are replaced by finitely generated modules: see Serre's Local Algebra for more details.) -$$ -\dim(R/P) + \dim(R/Q) \le \dim(R) -$$ - -Serre proved this for all regular local rings.
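As a concrete illustration of Serre's definition (a standard worked example, not part of the original text): let $R = k[x,y]_{(x,y)}$, $P = (x)$ and $Q = (y)$. Resolving $R/P$ by the Koszul complex $0 \to R \xrightarrow{x} R \to R/P \to 0$ and tensoring with $R/Q$ gives $\operatorname{Tor}^R_0(R/P,R/Q) = R/(x,y) = k$ and $\operatorname{Tor}^R_i(R/P,R/Q) = 0$ for $i > 0$ (since $x$ is a nonzerodivisor on $R/Q$), so $\chi(R/P,R/Q) = 1$: the two coordinate lines meet transversally with multiplicity one. Here $\dim(R/P) + \dim(R/Q) = 1 + 1 = 2 = \dim(R)$, so the dimension inequality above holds with equality.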
He established the following three properties when R is either of equal characteristic or of mixed characteristic and unramified (which in this case means that characteristic of the residue field is not an element of the square of the maximal ideal of the local ring), and conjectured that they hold in general. -$$ -\chi (R/P,R/Q) \ge 0 -$$ - -This was proven by Ofer Gabber in 1995. - -If -$$ -\dim (R/P) + \dim (R/Q) < \dim (R)\ -$$ - -then -$$ -\chi (R/P,R/Q) = 0.\ -$$ - -This was proven in 1985 by Paul C. Roberts, and independently by Henri Gillet and Christophe Soulé. - -If -$$ -\dim (R/P) + \dim (R/Q) = \dim (R)\ -$$ - -then -$$ -\chi (R/P,R/Q) > 0.\ -$$ - -This remains open. diff --git a/wiki/wikipedia/532.txt b/wiki/wikipedia/532.txt deleted file mode 100644 index bff1781e95e0b15769c42b4ac6ad75a6fee9a2df..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/532.txt +++ /dev/null @@ -1,47 +0,0 @@ -In mathematical logic, especially set theory and model theory, the back-and-forth method is a method for showing isomorphism between countably infinite structures satisfying specified conditions. In particular it can be used to prove that - -* any two countably infinite densely ordered sets (i.e., linearly ordered in such a way that between any two members there is another) without endpoints are isomorphic. An isomorphism between linear orders is simply a strictly increasing bijection. This result implies, for example, that there exists a strictly increasing bijection between the set of all rational numbers and the set of all real algebraic numbers. - -* any two countably infinite atomless Boolean algebras are isomorphic to each other. - -* any two equivalent countable atomic models of a theory are isomorphic. - -* the Erdős–Rényi model of random graphs, when applied to countably infinite graphs, always produces a unique graph, the Rado graph. - -* any two many-complete recursively enumerable sets are recursively isomorphic. - -As an example, the back-and-forth method can be used to prove Cantor's isomorphism theorem, although this was not Georg Cantor's original proof. This theorem states that two unbounded countable dense linear orders are isomorphic. - -Suppose that - -* (A, ≤A) and (B, ≤B) are linearly ordered sets; - -* They are both unbounded, in other words neither A nor B has either a maximum or a minimum; - -* They are densely ordered, i.e. between any two members there is another; - -* They are countably infinite. - -Fix enumerations (without repetition) of the underlying sets: - -A = { a1, a2, a3, ... }, - -B = { b1, b2, b3, ... }. - -Now we construct a one-to-one correspondence between A and B that is strictly increasing. Initially no member of A is paired with any member of B. - -(1) Let i be the smallest index such that ai is not yet paired with any member of B. Let j be some index such that bj is not yet paired with any member of A and ai can be paired with bj consistently with the requirement that the pairing be strictly increasing. Pair ai with bj. - -(2) Let j be the smallest index such that bj is not yet paired with any member of A. Let i be some index such that ai is not yet paired with any member of B and bj can be paired with ai consistently with the requirement that the pairing be strictly increasing. Pair bj with ai. - -(3) Go back to step (1). - -It still has to be checked that the choice required in step (1) and (2) can actually be made in accordance to the requirements. 
Using step (1) as an example: - -If there are already ap and aq in A corresponding to bp and bq in B respectively such that ap < ai < aq and bp < bq, we choose bj in between bp and bq using density. Otherwise, we choose a suitably large or small element of B using the fact that B has neither a maximum nor a minimum. Choices made in step (2) are dually possible. Finally, the construction ends after countably many steps because A and B are countably infinite. Note that we had to use all the prerequisites. - -According to Hodges (1993): - -Back-and-forth methods are often ascribed to Cantor, Bertrand Russell and C. H. Langford [...], but there is no evidence to support any of these attributions. - -While the theorem on countable densely ordered sets is due to Cantor (1895), the back-and-forth method with which it is now proved was developed by Edward Vermilye Huntington (1904) and Felix Hausdorff (1914). Later it was applied in other situations, most notably by Roland Fraïssé in model theory. diff --git a/wiki/wikipedia/533.txt b/wiki/wikipedia/533.txt deleted file mode 100644 index 977bbe05b2e01ac4a42081b15d6f74eca7b3293b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/533.txt +++ /dev/null @@ -1,41 +0,0 @@ -In applied mathematics, Tikhonov's theorem on dynamical systems is a result on stability of solutions of systems of differential equations. It has applications to chemical kinetics. The theorem is named after Andrey Nikolayevich Tikhonov. - -Consider this system of differential equations: - - - -\begin{align} - -\frac{d\mathbf{x}}{dt} & = \mathbf{f}(\mathbf{x},\mathbf{z},t), \\ - -\mu\frac{d\mathbf{z}}{dt} & = \mathbf{g}(\mathbf{x},\mathbf{z},t). - -\end{align} - - - -Taking the limit as $\mu\to 0$, this becomes the "degenerate system": - - - -\begin{align} - -\frac{d\mathbf{x}}{dt} & = \mathbf{f}(\mathbf{x},\mathbf{z},t), \\ - -\mathbf{z} & = \varphi(\mathbf{x},t), - -\end{align} - - - -where the second equation is the solution of the algebraic equation -$$ - \mathbf{g}(\mathbf{x},\mathbf{z},t) = 0. -$$ - -Note that there may be more than one such function $ \varphi $. - -Tikhonov's theorem states that as $\mu\to 0,$ the solution of the system of two differential equations above approaches the solution of the degenerate system if $\mathbf{z} = \varphi(\mathbf{x},t)$ is a stable root of the "adjoined system" -$$ - \frac{d\mathbf{z}}{dt} = \mathbf{g}(\mathbf{x},\mathbf{z},t). -$$ diff --git a/wiki/wikipedia/534.txt b/wiki/wikipedia/534.txt deleted file mode 100644 index 804ad4c013adebda446e522fd82c88e825330b30..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/534.txt +++ /dev/null @@ -1,268 +0,0 @@ -In calculus, the product rule (or Leibniz rule or Leibniz product rule) is a formula used to find the derivatives of products of two or more functions. For two functions, it may be stated in Lagrange's notation as -$$ -(u \cdot v)' = u '\cdot v +u \cdot v' -$$ - -or in Leibniz's notation as -$$ -\dfrac{d}{dx}(u\cdot v) = \dfrac{du}{dx} \cdot v + u\cdot \dfrac{dv}{dx}. -$$ - -The rule may be extended or generalized to products of three or more functions, to a rule for higher-order derivatives of a product, and to other contexts. - -Discovery of this rule is credited to Gottfried Leibniz, who demonstrated it using differentials. (However, J. M. Child, a translator of Leibniz's papers, argues that it is due to Isaac Barrow.) Here is Leibniz's argument: Let u(x) and v(x) be two differentiable functions of x.
Then the differential of uv is - - - -\begin{align} - -d(u\cdot v) & {} = (u + du)\cdot (v + dv) - u\cdot v \\ - -& {} = u\cdot dv + v\cdot du + du\cdot dv. - -\end{align} - - - -Since the term du·dv is "negligible" (compared to du and dv), Leibniz concluded that -$$ -d(u\cdot v) = v\cdot du + u\cdot dv -$$ - -and this is indeed the differential form of the product rule. If we divide through by the differential dx, we obtain -$$ -\frac{d}{dx} (u\cdot v) = v \cdot \frac{du}{dx} + u \cdot \frac{dv}{dx} -$$ - -which can also be written in Lagrange's notation as -$$ -(u\cdot v)' = v\cdot u' + u\cdot v'. -$$ - -*Suppose we want to differentiate $f(x) = x^2 \sin(x)$. By using the product rule, one gets the derivative $f'(x) = 2x \sin(x) + x^2 \cos(x)$ (since the derivative of $x^2$ is $2x$ and the derivative of the sine function is the cosine function). - -*One special case of the product rule is the constant multiple rule, which states: if c is a number and $f(x)$ is a differentiable function, then $cf(x)$ is also differentiable, and its derivative is $(cf)'(x) = c f'(x)$. This follows from the product rule since the derivative of any constant is zero. This, combined with the sum rule for derivatives, shows that differentiation is linear. - -*The rule for integration by parts is derived from the product rule, as is (a weak version of) the quotient rule. (It is a "weak" version in that it does not prove that the quotient is differentiable, but only says what its derivative is if it is differentiable.) - -Let $h(x) = f(x)g(x)$ and suppose that $f$ and $g$ are each differentiable at $x$. We want to prove that $h$ is differentiable at $x$ and that its derivative, $h'(x)$, is given by $f'(x)g(x) + f(x)g'(x)$. To do this, $f(x)g(x+\Delta x)-f(x)g(x+\Delta x)$ (which is zero, and thus does not change the value) is added to the numerator to permit its factoring, and then properties of limits are used. - -\begin{align} - -h'(x) &= \lim_{\Delta x\to 0} \frac{h(x+\Delta x)-h(x)}{\Delta x} \\[5pt] - -&= \lim_{\Delta x\to 0} \frac{f(x+\Delta x)g(x+\Delta x)-f(x)g(x)}{\Delta x} \\[5pt] - -&= \lim_{\Delta x\to 0} \frac{f(x+\Delta x)g(x+\Delta x)-f(x)g(x+\Delta x)+f(x)g(x+\Delta x)-f(x)g(x)}{\Delta x} \\[5pt] - -&= \lim_{\Delta x\to 0} \frac{\big[f(x+\Delta x)-f(x)\big] \cdot g(x+\Delta x) + f(x) \cdot \big[g(x+\Delta x)-g(x)\big]}{\Delta x} \\[5pt] - -&= \lim_{\Delta x\to 0} \frac{f(x+\Delta x)-f(x)}{\Delta x} \cdot \underbrace{\lim_{\Delta x\to 0} g(x+\Delta x)}_\text{See the note below.} + \lim_{\Delta x\to 0} f(x) \cdot \lim_{\Delta x\to 0} \frac{g(x+\Delta x)-g(x)}{\Delta x} \\[5pt] - -&= f'(x)g(x)+f(x)g'(x). - -\end{align} - -The fact that -$$ - \lim_{\Delta x\to0} g(x+\Delta x) = g(x) -$$ - -is deduced from a theorem that states that differentiable functions are continuous. - -By definition, if $ f, g: \mathbb{R} \rightarrow \mathbb{R} $ are differentiable at $ x $ then we can write -$$ - f(x+h) = f(x) + f'(x)h + \psi_1(h) \qquad \qquad g(x+h) = g(x) + g'(x)h + \psi_2(h) -$$ - -such that $ \lim_{h \to 0} \frac{\psi_1(h)}{h} = \lim_{h \to 0} \frac{\psi_2(h)}{h} = 0, $ also written $\psi_1, \psi_2 = o(h)$. Then: - - \begin{align} - -(fg)(x+h) - (fg)(x) &= (f(x) + f'(x)h +\psi_1(h))(g(x) + g'(x)h + \psi_2(h)) - f(x)g(x) \\ - -&= f(x)g(x) + f'(x)g(x)h + f(x)g'(x)h - f(x)g(x) + \text{other terms} \\ - -&= f'(x)g(x)h + f(x)g'(x)h + o(h) \\[12pt] - -\end{align} - -The "other terms" consist of items such as $f(x)\psi_2(h), f'(x)g'(x)h^2$ and $hf'(x)\psi_1(h).$ It is not difficult to show that they are all $o(h).$ Dividing by $ h $ and taking the limit for small $ h $ gives the result.
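As a quick numerical sanity check of the rule just proved (our illustration, not part of the original article; the functions and evaluation point are arbitrary), one can compare a finite-difference approximation of $(fg)'$ with $f'g + fg'$ in Python:

import math

def num_deriv(F, x, h=1e-6):
    """Symmetric finite-difference approximation of F'(x)."""
    return (F(x + h) - F(x - h)) / (2 * h)

x = 1.3
# f(x) = x^2 and g(x) = sin(x), as in the example above
lhs = num_deriv(lambda t: t**2 * math.sin(t), x)  # (fg)'(x), approximated
rhs = 2 * x * math.sin(x) + x**2 * math.cos(x)    # f'(x)g(x) + f(x)g'(x)
print(abs(lhs - rhs) < 1e-6)                      # True: both sides agree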
- -There is a proof using quarter square multiplication which relies on the chain rule and on the properties of the quarter square function (shown here as q, i.e., with $q(x)=\tfrac{x^2}4$). - -Let -$$ -f=q(u+v)-q(u-v), -$$ - -Differentiating both sides: - -\begin{align} - -f' &= q'(u+v)(u'+v') - q'(u-v)(u'-v') \\[4pt] - -&= \left({1 \over 2}(u+v)(u'+v')\right) - \left({1 \over 2}(u-v)(u'-v')\right) \\[4pt] - -&= {1 \over 2}(uu' + vu' + uv' + vv') - {1 \over 2}(uu' - vu' - uv' + vv') \\[4pt] - -&= vu'+uv' \\[4pt] - -&= uv'+u'v - -\end{align} - -The product rule can be considered a special case of the chain rule for several variables. -$$ - {d (ab) \over dx} = \frac{\partial(ab)}{\partial a}\frac{da}{dx}+\frac{\partial (ab)}{\partial b}\frac{db}{dx} = b \frac{da}{dx} + a \frac{db}{dx}. -$$ - -Let u and v be continuous functions in x, and let dx, du and dv be infinitesimals within the framework of non-standard analysis, specifically the hyperreal numbers. Using st to denote the standard part function that associates to a finite hyperreal number the real infinitely close to it, this gives - -\begin{align} - -\frac{d(uv)}{dx} &= \operatorname{st}\left(\frac{(u + du)(v + dv) - uv}{dx}\right) \\[4pt] - -&= \operatorname{st}\left(\frac{uv + u \cdot dv + v \cdot du + dv \cdot du -uv}{dx}\right) \\[4pt] - -&= \operatorname{st}\left(\frac{u \cdot dv + (v + dv) \cdot du}{dx}\right) \\[4pt] - -&= u \frac{dv}{dx} + v \frac{du}{dx}. - -\end{align} - -This was essentially Leibniz's proof exploiting the transcendental law of homogeneity (in place of the standard part above). - -In the context of Lawvere's approach to infinitesimals, let dx be a nilsquare infinitesimal. Then du = u′ dx and dv = v ′ dx, so that - - - -\begin{align} - -d(uv) & = (u + du)(v + dv) -uv \\ - -& = uv + u\cdot dv + v\cdot du + du\cdot dv - uv \\ - -& = u\cdot dv + v\cdot du + du\cdot dv \\ - -& = u\cdot dv + v\cdot du\! - -\end{align} - - - -since -$$ -du dv = u' v' (dx)^2 = 0. -$$ - -The product rule can be generalized to products of more than two factors. For example, for three factors we have -$$ -\frac{d(uvw)}{dx} = \frac{du}{dx}vw + u\frac{dv}{dx}w + uv\frac{dw}{dx}. -$$ - -For a collection of functions $f_1, \dots, f_k$, we have - -\frac{d}{dx} \left [ \prod_{i=1}^k f_i(x) \right ] - -= \sum_{i=1}^k \left(\left(\frac{d}{dx} f_i(x) \right) \prod_{j=1,j\ne i}^k f_j(x) \right) - -= \left( \prod_{i=1}^k f_i(x) \right) \left( \sum_{i=1}^k \frac{f'_i(x)}{f_i(x)} \right). - -The logarithmic derivative provides a simpler expression of the last form, as well as a direct proof that does not involve any recursion. The logarithmic derivative of a function f, denoted here Logder(f), is the derivative of the logarithm of the function. It follows that -$$ -\operatorname{Logder}(f)=\frac {f'}f. -$$ - -Using that the logarithm of a product is the sum of the logarithms of the factors, the sum rule for derivatives gives immediately -$$ -\operatorname{Logder}(f_1\cdots f_k)= \sum_{i=1}^k\operatorname{Logder}(f_i). -$$ - -The last above expression of the derivative of a product is obtained by multiplying both members of this equation by the product of the $f_i.$ - -It can also be generalized to the general Leibniz rule for the nth derivative of a product of two factors, by symbolically expanding according to the binomial theorem: -$$ -d^n(uv) = \sum_{k=0}^n {n \choose k} \cdot d^{(n-k)}(u)\cdot d^{(k)}(v). 
-$$ - -Applied at a specific point x, the above formula gives: -$$ -(uv)^{(n)}(x) = \sum_{k=0}^n {n \choose k} \cdot u^{(n-k)}(x)\cdot v^{(k)}(x). -$$ - -Furthermore, for the nth derivative of an arbitrary number of factors: -$$ -\left(\prod_{i=1}^kf_i\right)^{(n)}=\sum_{j_1+j_2+\cdots+j_k=n}{n\choose j_1,j_2,\ldots,j_k}\prod_{i=1}^kf_i^{(j_i)}. -$$ - -For partial derivatives, we have - -{\partial^n \over \partial x_1\cdots\partial x_n} (uv) = \sum_S {\partial^{|S|} u \over \prod_{i\in S} \partial x_i} \cdot {\partial^{n-|S|} v \over \prod_{i\not\in S} \partial x_i} - -where the index S runs through all $2^n$ subsets of {1, ..., n}, and $|S|$ is the cardinality of S. For example, when n = 3, - -\begin{align} & {\partial^3 \over \partial x_1\partial x_2\partial x_3} (uv) \\[6pt] - -= {} & u \cdot{\partial^3 v \over \partial x_1\partial x_2\partial x_3} + {\partial u \over \partial x_1}\cdot{\partial^2 v \over \partial x_2\partial x_3} + {\partial u \over \partial x_2}\cdot{\partial^2 v \over \partial x_1\partial x_3} + {\partial u \over \partial x_3}\cdot{\partial^2 v \over \partial x_1\partial x_2} \\[6pt] - -& + {\partial^2 u \over \partial x_1\partial x_2}\cdot{\partial v \over \partial x_3} + {\partial^2 u \over \partial x_1\partial x_3}\cdot{\partial v \over \partial x_2} + {\partial^2 u \over \partial x_2\partial x_3}\cdot{\partial v \over \partial x_1} + {\partial^3 u \over \partial x_1\partial x_2\partial x_3}\cdot v. \end{align} - -Suppose X, Y, and Z are Banach spaces (which includes Euclidean space) and B : X × Y → Z is a continuous bilinear operator. Then B is differentiable, and its derivative at the point (x,y) in X × Y is the linear map D(x,y)B : X × Y → Z given by -$$ - (D_\left( x,y \right)B)\left( u,v \right) = B\left( u,y \right) + B\left( x,v \right)\qquad\forall (u,v)\in X \times Y. -$$ - -In abstract algebra, the product rule is used to define what is called a derivation, not vice versa. - -The product rule extends to scalar multiplication, dot products, and cross products of vector functions, as follows. - -For scalar multiplication: -$$ -(f \cdot \mathbf g)' = f'\cdot \mathbf g + f \cdot \mathbf g' -$$ - -For dot products: -$$ -(\mathbf f \cdot \mathbf g)' = \mathbf f' \cdot \mathbf g + \mathbf f \cdot \mathbf g' -$$ - -For cross products: -$$ -(\mathbf f \times \mathbf g)' = \mathbf f' \times \mathbf g + \mathbf f \times \mathbf g' -$$ - -There are also analogues for other derivative-like operators: if f and g are scalar fields then there is a product rule with the gradient: - -\nabla (f \cdot g) = \nabla f \cdot g + f \cdot \nabla g - -Among the applications of the product rule is a proof that -$$ - {d \over dx} x^n = nx^{n-1} -$$ - -when n is a positive integer (this rule is true even if n is not positive or is not an integer, but the proof of that must rely on other methods). The proof is by mathematical induction on the exponent n. If n = 0 then $x^n$ is constant and $nx^{n-1} = 0$. The rule holds in that case because the derivative of a constant function is 0. If the rule holds for any particular exponent n, then for the next value, n + 1, we have - -\begin{align} - -{d \over dx}x^{n+1} &{}= {d \over dx}\left( x^n\cdot x\right) \\[12pt] - -&{}= x{d \over dx} x^n + x^n{d \over dx}x \qquad\mbox{(the product rule is used here)} \\[12pt] - -&{}= x\left(nx^{n-1}\right) + x^n\cdot 1\qquad\mbox{(the induction hypothesis is used here)} \\[12pt] - -&{}= (n + 1)x^n.
- -\end{align} - -Therefore, if the proposition is true for n, it is true also for n + 1, and therefore for all natural n. diff --git a/wiki/wikipedia/535.txt b/wiki/wikipedia/535.txt deleted file mode 100644 index e2ad2a1bc806c8164a309ef31e6859258c8e529b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/535.txt +++ /dev/null @@ -1,51 +0,0 @@ -In mathematics, the bipolar theorem is a theorem in functional analysis that characterizes the bipolar (that is, the polar of the polar) of a set. - -In convex analysis, the bipolar theorem refers to necessary and sufficient conditions for a cone to be equal to its bipolar. The bipolar theorem can be seen as a special case of the Fenchel–Moreau theorem. - -Suppose that $X$ is a topological vector space (TVS) with a continuous dual space $X^{\prime}$ and let $\left\langle x, x^{\prime} \right\rangle := x^{\prime}(x)$ for all $x \in X$ and $x^{\prime} \in X^{\prime}.$ - -The convex hull of a set $A,$ denoted by $\operatorname{co} A,$ is the smallest convex set containing $A.$ - -The convex balanced hull of a set $A$ is the smallest convex balanced set containing $A.$ - -The polar of a subset $A \subseteq X$ is defined to be: - -A^\circ := \left\{ x^{\prime} \in X^{\prime} : \sup_{a \in A} \left| \left\langle a, x^{\prime} \right\rangle \right| \leq 1 \right\}. - -while the prepolar of a subset $B \subseteq X^{\prime}$ is: - -{}^{\circ} B := \left\{ x \in X : \sup_{x^{\prime} \in B} \left| \left\langle x, x^{\prime} \right\rangle \right| \leq 1 \right\}. - -The bipolar of a subset $A \subseteq X,$ often denoted by $A^{\circ\circ}$ is the set - -A^{\circ\circ} := {}^{\circ}\left(A^{\circ}\right) = \left\{ x \in X : \sup_{x^{\prime} \in A^{\circ}} \left|\left\langle x, x^{\prime} \right\rangle\right| \leq 1 \right\}. - -Let $\sigma\left(X, X^{\prime}\right)$ denote the weak topology on $X$ (that is, the weakest TVS topology on $X$ making all linear functionals in $X^{\prime}$ continuous). - -The bipolar theorem: The bipolar of a subset $A \subseteq X$ is equal to the $\sigma\left(X, X^{\prime}\right)$-closure of the convex balanced hull of $A.$ - -The bipolar theorem: For any nonempty cone $A$ in some linear space $X,$ the bipolar set $A^{\circ \circ}$ is given by: - -A^{\circ \circ} = \operatorname{cl} (\operatorname{co} \{ r a : r \geq 0, a \in A \}). - -A subset $C \subseteq X$ is a nonempty closed convex cone if and only if $C^{++} = C^{\circ \circ} = C$ when $C^{++} = \left(C^{+}\right)^{+},$ where $A^{+}$ denotes the positive dual cone of a set $A.$ - -Or more generally, if $C$ is a nonempty convex cone then the bipolar cone is given by - -C^{\circ \circ} = \operatorname{cl} C. - -Let - -f(x) := \delta(x|C) = \begin{cases}0 & x \in C\\ \infty & \text{otherwise}\end{cases} - -be the indicator function for a cone $C.$ - -Then the convex conjugate, - -f^*(x^*) = \delta\left(x^*|C^\circ\right) = \delta^*\left(x^*|C\right) = \sup_{x \in C} \langle x^*,x \rangle - -is the support function for $C,$ and $f^{**}(x) = \delta(x|C^{\circ\circ}).$ - -Therefore, $C = C^{\circ \circ}$ if and only if $f = f^{**}.$ diff --git a/wiki/wikipedia/536.txt b/wiki/wikipedia/536.txt deleted file mode 100644 index 0c273990f98e0ebfaacb73d4e03da000890e4cad..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/536.txt +++ /dev/null @@ -1,10 +0,0 @@ -In mathematics, the Mazur–Ulam theorem states that if $V$ and $W$ are normed spaces over R and the mapping -$$ -f\colon V\to W -$$ - -is a surjective isometry, then $f$ is affine.
- -It is named after Stanisław Mazur and Stanisław Ulam in response to an issue raised by Stefan Banach. - -For strictly convex spaces the result is true, and easy, even for isometries which are not necessarily surjective. In this case, for any $u$ and $v$ in $V$, and for any $t$ in $[0,1]$, denoting $r:=\|u-v\|_V=\|f(u)-f(v)\|_W$, one has that $tu+(1-t)v$ is the unique element of $\bar B(v,tr)\cap \bar B(u,(1-t)r)$, so, being $f$ injective, $f(tu+(1-t)v)$ is the unique element of $f\big(\bar B(v,tr)\cap \bar B(u,(1-t)r)\big)= f\big(\bar B(v,tr)\big)\cap f\big(\bar B(u,(1-t)r)\big)=\bar B\big(f(v),tr\big)\cap\bar B\big(f(u),(1-t)r\big)$, namely $tf(u)+(1-t)f(v)$. Therefore $f$ is an affine map. This argument fails in the general case, because in a normed space which is not strictly convex two tangent balls may meet in some flat convex region of their boundary, not just a single point. diff --git a/wiki/wikipedia/537.txt b/wiki/wikipedia/537.txt deleted file mode 100644 index e1d5395a761b336846061e263dabb54b44a496c9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/537.txt +++ /dev/null @@ -1,7 +0,0 @@ -In transaction processing, a pseudoconversational transaction is a type of transaction that emulates a true conversation in an interactive session. To the end user, it appears as though the program has simply "paused" to request further input, whereas in reality, most resources are released while the input is waiting to be received. - -The controlling program has deliberately saved most of its state during the delay, terminated, and then, on being restarted through new input, restores its previous state. A single control variable is usually retained to hold the current state in terms of the stage of input reached (and therefore what must be recovered at any stage in order to resume processing). The state, including the control variable, is usually preserved in a 'temporary storage record' that maps the variables needing restoration as an aggregate set, usually contained in a single structure (other variables will be re-initialized on restart). - -This method of programming frees up pooled resources (such as memory) for an indeterminate time. This delay is the end-user 'thinking time' (or response time) and depends on human factors including speed of typing. - -For systems supporting many thousands of users on a single processor, it allows the transparent 'look and feel' of a true conversational session without tying up limited resources. diff --git a/wiki/wikipedia/538.txt b/wiki/wikipedia/538.txt deleted file mode 100644 index a3db189503a237f8c6f9956e0f2b6df45bcb247d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/538.txt +++ /dev/null @@ -1,52 +0,0 @@ -The minimum-cost flow problem (MCFP) is an optimization and decision problem to find the cheapest possible way of sending a certain amount of flow through a flow network. A typical application of this problem involves finding the best delivery route from a factory to a warehouse where the road network has some capacity and cost associated. The minimum cost flow problem is one of the most fundamental among all flow and circulation problems, because most other such problems can be cast as a minimum cost flow problem and because it can be solved efficiently using the network simplex algorithm.
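Before the formal definitions that follow, here is a minimal sketch of solving such an instance in Python with the networkx library (our example; the tiny network and its numbers are made up):

import networkx as nx

G = nx.DiGraph()
G.add_node("s", demand=-4)  # negative demand acts as a supply of 4 units
G.add_node("t", demand=4)   # the sink must absorb those 4 units
G.add_edge("s", "a", capacity=3, weight=1)
G.add_edge("s", "b", capacity=2, weight=2)
G.add_edge("a", "t", capacity=3, weight=1)
G.add_edge("b", "t", capacity=2, weight=1)

flow = nx.min_cost_flow(G)       # solved via the network simplex algorithm
print(nx.cost_of_flow(G, flow))  # 9: route 3 units via a, 1 unit via b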
- -A flow network is a directed graph $G=(V,E)$ with a source vertex $s \in V$ and a sink vertex $t \in V$, where each edge $(u,v) \in E$ has capacity $c(u,v) > 0$, flow $f(u,v) \ge 0$ and cost $a(u,v)$, with most minimum-cost flow algorithms supporting edges with negative costs. The cost of sending this flow along an edge $(u,v)$ is $f(u,v)\cdot a(u,v)$. The problem requires an amount of flow $d$ to be sent from source $s$ to sink $t$. - -The definition of the problem is to minimize the total cost of the flow over all edges: -$$ -\sum_{(u,v) \in E} a(u,v) \cdot f(u,v) -$$ - -with the constraints -$$ -f(u,v) \le c(u,v) \quad \text{(capacity constraint)}, -$$ -$$ -\sum_{w : (u,w) \in E} f(u,w) = \sum_{w : (w,u) \in E} f(w,u) \quad \text{for all } u \neq s,t \quad \text{(flow conservation)}, -$$ -$$ -\sum_{w : (s,w) \in E} f(s,w) = d = \sum_{w : (w,t) \in E} f(w,t) \quad \text{(required flow)}. -$$ - -A variation of this problem is to find a flow which is maximum, but has the lowest cost among the maximum flow solutions. This could be called a minimum-cost maximum-flow problem and is useful for finding minimum cost maximum matchings. - -With some solutions, finding the minimum cost maximum flow instead is straightforward. If not, one can find the maximum flow by performing a binary search on $d$. - -A related problem is the minimum cost circulation problem, which can be used for solving minimum cost flow. This is achieved by setting the lower bound on all edges to zero, and then making an extra edge from the sink $t$ to the source $s$, with capacity $c(t,s)=d$ and lower bound $l(t,s)=d$, forcing the total flow from $s$ to $t$ to also be $d$. - -The following problems are special cases of the minimum cost flow problem (we provide brief sketches of each applicable reduction, in turn): - -* Shortest path problem (single-source). Require that a feasible solution to the minimum cost flow problem sends one unit of flow from a designated source $s$ to a designated sink $t$. Give all edges infinite capacity. - -* Maximum flow problem. Let all nodes have zero demand, and let the cost associated with traversing any edge be zero. Now, introduce a new edge $(t,s)$ from the current sink $t$ to the current source $s$. Stipulate that the per-unit cost of sending flow across edge $(t,s)$ equals $-1$, and permit $(t,s)$ infinite capacity. (This reduction is also mentioned in Circulation problem). - -* Assignment problem. Suppose that each partite set in the bipartition has $n$ vertices, and denote the bipartition by $(X,Y)$. Give each $x \in X$ supply $1/n$ and give each $y \in Y$ demand $1/n$. Each edge is to have unit capacity. - -The minimum cost flow problem can be solved by linear programming, since we optimize a linear function, and all constraints are linear. - -Apart from that, many combinatorial algorithms exist; some of them are generalizations of maximum flow algorithms, others use entirely different approaches. - -Well-known fundamental algorithms (they have many variations): - -* Cycle canceling: a general primal method. - -* Cut canceling: a general dual method. - -* Minimum mean cycle canceling: a simple strongly polynomial algorithm. - -* Successive shortest path and capacity scaling: dual methods, which can be viewed as the generalization of the Ford–Fulkerson algorithm. - -* Cost scaling: a primal-dual approach, which can be viewed as the generalization of the push-relabel algorithm. - -* Network simplex algorithm: a specialized version of the linear programming simplex method. - -* Out-of-kilter algorithm by D. R. Fulkerson. - -Given a bipartite graph G = (A ∪ B, E), the goal is to find the maximum cardinality matching in G that has minimum cost. Let w: E → R be a weight function on the edges of E.
The minimum weight bipartite matching problem or assignment problem is to find a - -perfect matching M ⊆ E whose total weight is minimized. The idea is to reduce this problem to a network flow problem. - -Let G′ = (V′ = A ∪ B, E′ = E). Assign the capacity of all the edges in E′ to 1. Add a source vertex s and connect it to all the vertices in A, and add a sink - -vertex t and connect all vertices inside group B to this vertex. The capacity of all the new edges is 1 and their cost is 0. It can be shown that there is a minimum weight perfect bipartite matching in G if and only if there is a minimum cost flow in G′. diff --git a/wiki/wikipedia/539.txt b/wiki/wikipedia/539.txt deleted file mode 100644 index 3f77ab1078ea6dd440bd49df89a98a10e6bc617d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/539.txt +++ /dev/null @@ -1,34 +0,0 @@ -In mathematics, especially in the areas of abstract algebra dealing with group cohomology or relative homological algebra, Shapiro's lemma, also known as the Eckmann–Shapiro lemma, relates extensions of modules over one ring to extensions over another, especially the group ring of a group and of a subgroup. It thus relates the group cohomology with respect to a group to the cohomology with respect to a subgroup. Shapiro's lemma is named after Arnold S. Shapiro, who proved it in 1961; however, Beno Eckmann had discovered it earlier, in 1953. - -Let R → S be a ring homomorphism, so that S becomes a left and right R-module. Let M be a left S-module and N a left R-module. By restriction of scalars, M is also a left R-module. - -* If S is projective as a right R-module, then: -$$ -\operatorname{Ext}^n_R(N, {}_R M) \cong \operatorname{Ext}^n_S(S \otimes_R N, M) -$$ - -* If S is projective as a left R-module, then: -$$ -\operatorname{Ext}^n_R({}_R M,N) \cong \operatorname{Ext}^n_S(M,\operatorname{Hom}_R(S,N)) -$$ - -The projectivity conditions can be weakened into conditions on the vanishing of certain Tor- or Ext-groups. - -When H is a subgroup of finite index in G, then the group ring R[G] is finitely generated projective as a left and right R[H]-module, so the previous theorem applies in a simple way. Let M be a finite-dimensional representation of G and N a finite-dimensional representation of H. In this case, the module $S \otimes_R N$ is called the induced representation of N from H to G, and ${}_R M$ is called the restricted representation of M from G to H. One has that: -$$ -\operatorname{Ext}^n_G( M, N\uparrow_H^G) \cong \operatorname{Ext}^n_H( M\downarrow_H^G, N) -$$ - -When n = 0, this is called Frobenius reciprocity for completely reducible modules, and Nakayama reciprocity in general; higher versions of the Mackey decomposition are established in the same setting in the standard references. - -Specializing M to be the trivial module produces the familiar Shapiro's lemma. Let H be a subgroup of G and N a representation of H. For $N^G$ the induced representation of N from H to G using the tensor product, and for $H_*$ the group homology: - -$H_*(G, N^G) = H_*(H, N)$ - -Similarly, for $N^G$ the co-induced representation of N from H to G using the Hom functor, and for $H^*$ the group cohomology: - -$H^*(G, N^G) = H^*(H, N)$ - -When H is finite index in G, then the induced and coinduced representations coincide and the lemma is valid for both homology and cohomology.
diff --git a/wiki/wikipedia/54.txt b/wiki/wikipedia/54.txt deleted file mode 100644 index 58b670df61a38e548dc77896ea2f797d7b0e403a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/54.txt +++ /dev/null @@ -1,9 +0,0 @@ -The Almgren isomorphism theorem is a result in geometric measure theory and algebraic topology about the topology of the space of flat cycles in a Riemannian manifold. - -The theorem plays a fundamental role in the Almgren–Pitts min-max theory, as it establishes the existence of topologically non-trivial families of cycles, which were used by Frederick J. Almgren Jr., Jon T. Pitts and others to prove the existence of (possibly singular) minimal submanifolds in every Riemannian manifold. In the special case of codimension 1 cycles with mod 2 coefficients, the Almgren isomorphism theorem implies that the space of cycles is weakly homotopy equivalent to - -the infinite real projective space. - -Let M be a Riemannian manifold. The Almgren isomorphism theorem asserts that the m-th homotopy group of the space of flat k-dimensional cycles in M is isomorphic to the (m+k)-th homology group of M. This result is a generalization of the Dold–Thom theorem, which can be thought of as the k=0 case of Almgren's theorem. - -The isomorphism is defined as follows. Let G be an abelian group and $Z_k(M;G)$ denote the space of flat cycles with coefficients in the group G. To each family of cycles $f: S^m \rightarrow Z_k(M;G)$ we associate an (m+k)-cycle C as follows. Fix a fine triangulation T of $S^m$. To each vertex v in the 0-skeleton of T we associate a cycle f(v). To each edge E in the 1-skeleton of T with ∂E=v-w we associate a (k+1)-chain with boundary f(v)-f(w) of minimal mass. We proceed this way by induction over the skeleta of T. The sum of all chains corresponding to m-dimensional faces of T will be the desired (m+k)-cycle C. Even though the choices of triangulation and minimal mass fillings were not unique, they all result in an (m+k)-cycle in the same homology class. diff --git a/wiki/wikipedia/540.txt b/wiki/wikipedia/540.txt deleted file mode 100644 index 8ba03be2818c53c804bad923cb217cebf028c765..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/540.txt +++ /dev/null @@ -1,78 +0,0 @@ -Hardy's inequality is an inequality in mathematics, named after G. H. Hardy. It states that if $a_1, a_2, a_3, \dots $ is a sequence of non-negative real numbers, then for every real number p > 1 one has -$$ -\sum_{n=1}^\infty \left (\frac{a_1+a_2+\cdots +a_n}{n}\right )^p\leq\left (\frac{p}{p-1}\right )^p\sum_{n=1}^\infty a_n^p. -$$ - -If the right-hand side is finite, equality holds if and only if $a_n = 0$ for all n. - -An integral version of Hardy's inequality states the following: if f is a measurable function with non-negative values, then -$$ -\int_0^\infty \left (\frac{1}{x}\int_0^x f(t) dt\right)^p dx\le\left (\frac{p}{p-1}\right )^p\int_0^\infty f(x)^p dx. -$$ - -If the right-hand side is finite, equality holds if and only if f(x) = 0 almost everywhere. - -Hardy's inequality was first published and proved (at least the discrete version with a worse constant) in 1920 in a note by Hardy. The original formulation was in an integral form slightly different from the above.
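The discrete inequality is easy to probe numerically. The following Python sketch (our illustration, not part of the original article) checks the finite truncation of the inequality, which is what the direct proof later in this article establishes, for $p = 2$, where the constant is $(p/(p-1))^p = 4$:

import random

random.seed(0)
p = 2.0
a = [random.random() for _ in range(1000)]  # arbitrary non-negative terms

partial, lhs = 0.0, 0.0
for n, a_n in enumerate(a, start=1):
    partial += a_n             # a_1 + ... + a_n
    lhs += (partial / n) ** p  # ((a_1 + ... + a_n) / n)^p
rhs = (p / (p - 1)) ** p * sum(x ** p for x in a)
print(lhs <= rhs)  # True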
- -In the multidimensional case, Hardy's inequality can be extended to $L^{p}$-spaces, taking the form -$$ -\left\|\frac{f}{|x|}\right\|_{L^{p}(R^{n})}\le \frac{p}{n-p}\|\nabla f\|_{L^{p}(R^{n})}, \qquad 2\le n, \quad 1\le p < n, -$$ - -where $f\in C_{0}^{\infty}(R^{n})$. - -===Proofs=== - -* Integral version: a change of variables gives $\left(\int_0^\infty\left(\frac{1}{x}\int_0^x f(t)dt\right)^p\ dx\right)^{1/p}=\left(\int_0^\infty\left(\int_0^1 f(sx)ds\right)^pdx\right)^{1/p}$, which is less than or equal to $\int_0^1\left(\int_0^\infty f(sx)^pdx\right)^{1/p}ds$ by Minkowski's integral inequality. Finally, by another change of variables, the last expression equals $\int_0^1\left(\int_0^\infty f(x)^pdx\right)^{1/p}s^{-1/p}ds=\frac{p}{p-1}\left(\int_0^\infty f(x)^pdx\right)^{1/p}$. - -* Discrete version: assuming the right-hand side to be finite, we must have $a_n\to 0$ as $n\to\infty$. Hence, for any positive integer j, there are only finitely many terms bigger than $2^{-j}$. This allows us to construct a decreasing sequence $b_1\ge b_2\ge\cdots$ containing the same positive terms as the original sequence (but possibly no zero terms). Since $a_1+a_2+\cdots +a_n\le b_1+b_2+\cdots +b_n$ for every n, it suffices to show the inequality for the new sequence. This follows directly from the integral form, defining $f(x)=b_n$ if $n-1 < x\le n$. Indeed, $\int_0^\infty f(x)^pdx=\sum_{n=1}^\infty b_n^p$ and, for $n-1 < x\le n$, $\frac{1}{x}\int_0^x f(t)dt=\frac{b_1+\dots+b_{n-1}+(x-n+1)b_n}{x} \ge \frac{b_1+\dots+b_n}{n}$ (the last inequality is equivalent to $(n-x)(b_1+\dots+b_{n-1})\ge (n-1)(n-x)b_n$, which is true as the new sequence is decreasing) and thus $\sum_{n=1}^\infty\left(\frac{b_1+\dots+b_n}{n}\right)^p\le\int_0^\infty\left(\frac{1}{x}\int_0^x f(t)dt\right)^pdx$. - -* Discrete version: Direct proof: Let $p > 1$ and let $b_1, \dots, b_N$ be positive real numbers. Set $S_k = \sum_{i=1}^k b_i$. First we prove the inequality
$\sum_{n=1}^N \frac{S_n^p}{n^p} \leq \frac{p}{p-1} \sum_{n=1}^N \frac{b_n S_n^{p-1}}{n^{p-1}} \quad (*)$. - -Let $T_n = \frac{S_n}{n}$ and let $\Delta_n$ be the difference between the $n$-th terms of the LHS and RHS of $(*)$, that is, $\Delta_n := T_n^p - \frac{p}{p-1} b_n T_n^{p-1}$. We have: -$$ -\Delta_n = T_n^p - \frac{p}{p-1} b_n T_n^{p-1} = T_n^p - \frac{p}{p-1} (n T_n - (n-1) T_{n-1}) T_n^{p-1} -$$ - -or -$$ -\Delta_n = T_n^p \left( 1 - \frac{np}{p-1} \right) + \frac{p (n-1)}{p-1} T_{n-1} T_n^{p-1} . -$$ - -According to Young's inequality we have: -$$ -T_{n-1} T_n^{p-1} \leq \frac{T_{n-1}^p}{p} + (p-1) \frac{T_n^p}{p} , -$$ - -from which it follows that: -$$ -\Delta_n \leq \frac{n-1}{p-1} T_{n-1}^p - \frac{n}{p-1} T_n^p . -$$ - -By telescoping we have: - -\begin{align} - -\sum_{n=1}^N \Delta_n &\leq 0 - \frac{1}{p-1} T_1^p \\ - -&+ \frac{1}{p-1} T_1^p - \frac{2}{p-1} T_2^p \\ - -&+ \frac{2}{p-1} T_2^p - \frac{3}{p-1} T_3^p \\ - -& \vdots \\ - -&+ \frac{N-1}{p-1} T_{N-1}^p - \frac{N}{p-1} T_N^p \\ - -&= - \frac{N}{p-1} T_N^p < 0 , - -\end{align} - -proving $(*)$. By applying Hölder's inequality to the RHS of $(*)$ we have: -$$ -\sum_{n=1}^N \frac{S_n^p}{n^p} \leq \frac{p}{p-1} \sum_{n=1}^N \frac{b_n S_n^{p-1}}{n^{p-1}} \leq \frac{p}{p-1} \left( \sum_{n=1}^N b_n^p \right)^{1/p} \left( \sum_{n=1}^N \frac{S_n^p}{n^p} \right)^{(p-1)/p} -$$ - -from which we immediately obtain: -$$ -\sum_{n=1}^N \frac{S_n^p}{n^p} \leq \left( \frac{p}{p-1} \right)^p \sum_{n=1}^N b_n^p . -$$ - -Letting $N \rightarrow \infty$ we obtain Hardy's inequality. diff --git a/wiki/wikipedia/541.txt b/wiki/wikipedia/541.txt deleted file mode 100644 index 53a522a65ea0e3126b04e42f6d91d3bda06a74a9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/541.txt +++ /dev/null @@ -1,84 +0,0 @@ -In mathematics, specifically commutative algebra, Hilbert's basis theorem says that a polynomial ring over a Noetherian ring is Noetherian. - -If $R$ is a ring, let $R[X]$ denote the ring of polynomials in the indeterminate $X$ over $R$. Hilbert proved that if $R$ is "not too large", in the sense that it is Noetherian, then the same must be true for $R[X]$. Formally, - 
    Hilbert's Basis Theorem. If $R$ is a Noetherian ring, then $R[X]$ is a Noetherian ring.
    - -
    Corollary. If $R$ is a Noetherian ring, then $R[X_1,\dotsc,X_n]$ is a Noetherian ring.
- -This can be translated into algebraic geometry as follows: every algebraic set over a field can be described as the set of common roots of finitely many polynomial equations. Hilbert proved the theorem (for the special case of polynomial rings over a field) in the course of his proof of finite generation of rings of invariants. - -Hilbert produced an innovative proof by contradiction using mathematical induction; his method does not give an algorithm to produce the finitely many basis polynomials for a given ideal: it only shows that they must exist. One can determine basis polynomials using the method of Gröbner bases. - -Theorem. If $R$ is a left (resp. right) Noetherian ring, then the polynomial ring $R[X]$ is also a left (resp. right) Noetherian ring. - -Remark. We will give two proofs; in both, only the "left" case is considered; the proof for the right case is similar. - -Suppose $\mathfrak a \subseteq R[X]$ is a non-finitely generated left ideal. Then by recursion (using the axiom of dependent choice) there is a sequence $\{ f_0, f_1, \ldots \}$ of polynomials such that if $\mathfrak b_n$ is the left ideal generated by $f_0, \ldots, f_{n-1}$ then $f_n \in \mathfrak a \setminus \mathfrak b_n$ is of minimal degree. It is clear that $\{\deg(f_0), \deg(f_1), \ldots \}$ is a non-decreasing sequence of naturals. Let $a_n$ be the leading coefficient of $f_n$ and let $\mathfrak{b}$ be the left ideal in $R$ generated by $a_0,a_1,\ldots$. Since $R$ is Noetherian the chain of ideals -$$ -(a_0)\subset(a_0,a_1)\subset(a_0,a_1,a_2) \subset \cdots -$$ - -must terminate. Thus $\mathfrak b = (a_0,\ldots ,a_{N-1})$ for some integer $N$. So in particular, -$$ -a_N=\sum_{i<N} u_i a_i, \qquad u_i \in R. -$$ - -Now consider -$$ -g = \sum_{i<N} u_i X^{\deg(f_N)-\deg(f_i)} f_i, -$$ - -whose leading term is equal to that of $f_N$; moreover, $g \in \mathfrak b_N$. However, $f_N \notin \mathfrak b_N$, which means that $f_N - g \in \mathfrak a \setminus \mathfrak b_N$ has degree less than $f_N$, contradicting the minimality. - -In computer science, the lexicographically minimal string rotation or lexicographically least circular substring is the problem of finding the rotation of a string possessing the lowest lexicographical order of all such rotations. Finding the lexicographically minimal rotation is useful as a way of normalizing strings. The naive algorithm is to iterate through successive rotations while keeping track of the lexicographically least rotation encountered; if the string is of length n, this runs in O(n^2) time in the worst case. - -An efficient algorithm was proposed by Booth (1980). - -The algorithm uses a modified preprocessing function from the Knuth-Morris-Pratt string search algorithm. The failure function for the string is computed as normal, but the string is rotated during the computation so some indices must be computed more than once as they wrap around. Once all indices of the failure function have been successfully computed without the string rotating again, the minimal lexicographical rotation is known to be found and its starting index is returned. The correctness of the algorithm is somewhat difficult to understand, but it is easy to implement. - -
-def least_rotation(S: str) -> int:
-    """Booth's algorithm: return the start index of the least rotation of S."""
-    S += S  # Concatenate string to itself to avoid modular arithmetic
-    f = [-1] * len(S)  # Failure function
-    k = 0  # Least rotation of string found so far
-    for j in range(1, len(S)):
-        sj = S[j]
-        i = f[j - k - 1]
-        while i != -1 and sj != S[k + i + 1]:
-            if sj < S[k + i + 1]:
-                k = j - i - 1
-            i = f[i]
-        if sj != S[k + i + 1]:  # if sj != S[k+i+1], then i == -1
-            if sj < S[k]:  # k+i+1 = k
-                k = j
-            f[j - k] = -1
-        else:
-            f[j - k] = i + 1
-    return k
-
- -Of interest is that removing all lines of code which modify the value of k results in the original Knuth-Morris-Pratt preprocessing function, as k (representing the rotation) will remain zero. Booth's algorithm runs in O(n) time, where n is the length of the string. The algorithm performs at most 3n comparisons in the worst case, and requires auxiliary memory of length n to hold the failure function table. - -Shiloach (1981) - -proposed an algorithm improving on Booth's result in terms of performance.
It was observed that if there are q equivalent lexicographically minimal rotations of a string of length n, then the string must consist of q equal substrings of length d=n/q. The algorithm requires only n + d/2 comparisons and constant space in the worst case. - -The algorithm is divided into two phases. The first phase is a quick sieve which rules out indices that are obviously not starting locations for the lexicographically minimal rotation. The second phase then finds the lexicographically minimal rotation start index from the indices which remain. - -Duval (1983) - -proposed an efficient algorithm involving the factorization of the string into its component Lyndon words, which runs in linear time with a constant memory requirement. - -Shiloach (1979) - -proposed an algorithm to efficiently compare two circular strings for equality without a normalization requirement. An additional application which arises from the algorithm is the fast generation of certain chemical structures without repetitions. diff --git a/wiki/wikipedia/544.txt b/wiki/wikipedia/544.txt deleted file mode 100644 index 48de6cbdd075ea4b72eb055b0fe142560ac3fd49..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/544.txt +++ /dev/null @@ -1,34 +0,0 @@ -In mathematics, the Shapiro inequality is an inequality proposed by Harold S. Shapiro in 1954. - -Suppose $n$ is a natural number and $x_1, x_2, \dots, x_n$ are positive numbers and: - -* $n$ is even and less than or equal to $12$, or - -* $n$ is odd and less than or equal to $23$. - -Then the Shapiro inequality states that -$$ -\sum_{i=1}^{n} \frac{x_i}{x_{i+1}+x_{i+2}} \geq \frac{n}{2} -$$ - -where $x_{n+1}=x_1, x_{n+2}=x_2$. - -For greater values of $n$ the inequality does not hold, and the strict lower bound is $\gamma \frac{n}{2}$ with $\gamma \approx 0.9891\dots$. - -The initial proofs of the inequality in the pivotal cases $n=12$ (Godunova and Levin, 1976) and $n=23$ (Troesch, 1989) rely on numerical computations. In 2002, P.J. Bushell and J.B. McLeod published an analytical proof for $n=12$. - -The value of $\gamma$ was determined in 1971 by Vladimir Drinfeld. Specifically, he proved that the strict lower bound $\gamma$ is given by $\frac{1}{2} \psi(0)$, where the function $\psi$ is the convex hull of $f(x)=e^{-x}$ and $g(x) = \frac{2}{e^x+e^{\frac{x}{2}}}$. (That is, the region above the graph of $\psi$ is the convex hull of the union of the regions above the graphs of $f$ and $g$.) - -Interior local minima of the left-hand side are always $\ge\frac{n}{2}$ (Nowosad, 1968). - -The first counter-example was found by Lighthill in 1956, for $n=20$: -$$ -x_{20} = (1+5\epsilon,\ 6\epsilon,\ 1+4\epsilon,\ 5\epsilon,\ 1+3\epsilon,\ 4\epsilon,\ 1+2\epsilon,\ 3\epsilon,\ 1+\epsilon,\ 2\epsilon,\ 1+2\epsilon,\ \epsilon,\ 1+3\epsilon,\ 2\epsilon,\ 1+4\epsilon,\ 3\epsilon,\ 1+5\epsilon,\ 4\epsilon,\ 1+6\epsilon,\ 5\epsilon) -$$ where $\epsilon$ is close to 0. - -Then the left-hand side is equal to $10 - \epsilon^2 + O(\epsilon^3)$, thus lower than 10 when $\epsilon$ is small enough.
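Lighthill's counter-example can be checked numerically. The Python sketch below (ours, not from the article) evaluates the cyclic sum for a small $\epsilon$ and, in line with the expansion $10 - \epsilon^2 + O(\epsilon^3)$ quoted above, returns a value just below $n/2 = 10$:

def shapiro_sum(x):
    """The cyclic sum on the left-hand side of the Shapiro inequality."""
    n = len(x)
    return sum(x[i] / (x[(i + 1) % n] + x[(i + 2) % n]) for i in range(n))

eps = 0.01
x20 = [1 + 5*eps, 6*eps, 1 + 4*eps, 5*eps, 1 + 3*eps, 4*eps, 1 + 2*eps,
       3*eps, 1 + eps, 2*eps, 1 + 2*eps, eps, 1 + 3*eps, 2*eps, 1 + 4*eps,
       3*eps, 1 + 5*eps, 4*eps, 1 + 6*eps, 5*eps]
print(shapiro_sum(x20))  # approximately 10 - eps**2, i.e. strictly below 10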
- -The following counter-example for $n=14$ is by Troesch (1985): -$$ -x_{14} = (0, 42, 2, 42, 4, 41, 5, 39, 4, 38, 2, 38, 0, 40). -$$ diff --git a/wiki/wikipedia/545.txt b/wiki/wikipedia/545.txt deleted file mode 100644 index 08d4f0aa2a4ecdf49fe573ac2d686d0ae83e67d5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/545.txt +++ /dev/null @@ -1,30 +0,0 @@ -In mathematics, Sard's theorem, also known as Sard's lemma or the Morse-Sard theorem, is a result in mathematical analysis that asserts that the set of critical values (that is, the image of the set of critical points) of a smooth function f from one Euclidean space or manifold to another is a null set, i.e., it has Lebesgue measure 0. This makes the set of critical values "small" in the sense of a generic property. The theorem is named for Anthony Morse and Arthur Sard. - -More explicitly, let -$$ -f\colon \mathbb{R}^n \rightarrow \mathbb{R}^m -$$ - -be $C^k$ (that is, $k$ times continuously differentiable), where $k\geq \max\{n-m+1, 1\}$. Let $X$ denote the critical set of $f,$ which is the set of points $x\in \mathbb{R}^n$ at which the Jacobian matrix of $f$ has rank $< m$. Then the image $f(X)$ has Lebesgue measure $0$ in $\mathbb{R}^m$. - -In measure theory, Lebesgue's dominated convergence theorem provides sufficient conditions under which almost everywhere convergence of a sequence of functions implies convergence in the $L^1$ norm. Its power and utility are two of the primary theoretical advantages of Lebesgue integration over Riemann integration. - -In addition to its frequent appearance in mathematical analysis and partial differential equations, it is widely used in probability theory, since it gives a sufficient condition for the convergence of expected values of random variables. - -Lebesgue's dominated convergence theorem. Let (fn) be a sequence of complex-valued measurable functions on a measure space (S, Σ, μ). Suppose that the sequence converges pointwise to a function f and is dominated by some integrable function g in the sense that -$$ - |f_n(x)| \le g(x) -$$ - -for all numbers n in the index set of the sequence and all points x ∈ S. - -Then f is integrable (in the Lebesgue sense) and -$$ - \lim_{n\to\infty} \int_S |f_n-f|d\mu = 0 -$$ - -which also implies -$$ -\lim_{n\to\infty} \int_S f_nd\mu = \int_S fd\mu -$$ - -Remark 1. The statement "g is integrable" means that the measurable function g is Lebesgue integrable; i.e. -$$ -\int_S|g|d\mu < \infty. -$$ - -Remark 2. The convergence of the sequence and domination by g can be relaxed to hold only μ-almost everywhere provided the measure space (S, Σ, μ) is complete or f is chosen as a measurable function which agrees μ-almost everywhere with the μ-almost everywhere existing pointwise limit. (These precautions are necessary, because otherwise there might exist a non-measurable subset of a μ-null set N ∈ Σ, hence f might not be measurable.) - -Remark 3. If μ(S) < ∞, the condition that there is a dominating integrable function g can be relaxed to uniform integrability of the sequence (fn), see Vitali convergence theorem. - -Remark 4. While f is Lebesgue integrable, it is not in general Riemann integrable. For example, take fn to be defined in [0,1] so that it is zero everywhere except for rational numbers of the form k/m, with k and m coprime and m>n. The series (fn) converges pointwise to 0, so f is identically zero, but |fn-f|=fn is not Riemann integrable, since its image in every finite interval is {0,1} and thus the upper and lower Darboux integrals are 1 and 0, respectively.
- -Without loss of generality, one can assume that f is real, because one can split f into its real and imaginary parts (remember that a sequence of complex numbers converges if and only if both its real and imaginary counterparts converge) and apply the triangle inequality at the end. - -Lebesgue's dominated convergence theorem is a special case of the Fatou–Lebesgue theorem. Below, however, is a direct proof that uses Fatou's lemma as the essential tool. - -Since f is the pointwise limit of the sequence (fn) of measurable functions that are dominated by g, it is also measurable and dominated by g, hence it is integrable. Furthermore (these facts will be needed later), -$$ - |f-f_n| \le |f| + |f_n| \leq 2g -$$ - -for all n and -$$ - \limsup_{n\to\infty} |f-f_n| = 0. -$$ - -The second of these is trivially true (by the very definition of f). Using linearity and monotonicity of the Lebesgue integral, -$$ - \left | \int_S{fd\mu} - \int_S{f_nd\mu} \right|= \left| \int_S{(f-f_n)d\mu} \right|\le \int_S{|f-f_n|d\mu}. -$$ - -By the reverse Fatou lemma (it is here that we use the fact that |f−fn| is bounded above by an integrable function) -$$ -\limsup_{n\to\infty} \int_S |f-f_n|d\mu \le \int_S \limsup_{n\to\infty} |f-f_n|d\mu = 0, -$$ - -which implies that the limit exists and vanishes, i.e. -$$ -\lim_{n\to\infty} \int_S |f-f_n|d\mu= 0. -$$ - -Finally, since -$$ -\lim_{n\to\infty} \left|\int_S fd\mu-\int_S f_nd\mu\right| \leq\lim_{n\to\infty} \int_S |f-f_n|d\mu= 0, -$$ - -we have that -$$ -\lim_{n\to\infty} \int_S f_nd\mu= \int_S fd\mu. -$$ - -The theorem now follows. - -If the assumptions hold only μ-almost everywhere, then there exists a μ-null set N ∈ Σ such that the functions $f_n 1_{S \setminus N}$ satisfy the assumptions everywhere on S. Then the function f(x) defined as the pointwise limit of fn(x) for x ∈ S \ N and by f(x) = 0 for x ∈ N, is measurable and is the pointwise limit of this modified function sequence. The values of these integrals are not influenced by these changes to the integrands on this μ-null set N, so the theorem continues to hold. - -DCT holds even if fn converges to f in measure (finite measure) and the dominating function is non-negative almost everywhere. - -The assumption that the sequence is dominated by some integrable g cannot be dispensed with. This may be seen as follows: define fn(x) = n for x in the interval (0, 1/n] and fn(x) = 0 otherwise. Any g which dominates the sequence must also dominate the pointwise supremum $h = \sup_n f_n$. Observe that -$$ -\int_0^1 h(x)dx \ge \int_{\frac{1}{m}}^1{h(x)dx} = \sum_{n=1}^{m-1} \int_{\left(\frac{1}{n+1},\frac{1}{n}\right]}{h(x)dx} \ge \sum_{n=1}^{m-1} \int_{\left(\frac{1}{n+1},\frac{1}{n}\right]}{ndx}=\sum_{n=1}^{m-1} \frac{1}{n+1} \to \infty \qquad \text{as }m\to\infty -$$ - -by the divergence of the harmonic series. Hence, the monotonicity of the Lebesgue integral tells us that there exists no integrable function which dominates the sequence on [0,1]. A direct calculation shows that integration and pointwise limit do not commute for this sequence: -$$ -\int_0^1 \lim_{n\to\infty} f_n(x)dx = 0 \neq 1 = \lim_{n\to\infty}\int_0^1 f_n(x)dx, -$$ - -because the pointwise limit of the sequence is the zero function. Note that the sequence (fn) is not even uniformly integrable, hence also the Vitali convergence theorem is not applicable.
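The escaping-mass behaviour of this example is easy to see numerically. In the Python sketch below (our illustration, not part of the original article) a midpoint Riemann sum approximates $\int_0^1 f_n$ for growing $n$; the integral stays at $1$ even though $f_n \to 0$ pointwise:

def riemann(F, a, b, steps=1_000_000):
    """Midpoint Riemann sum of F over [a, b]."""
    h = (b - a) / steps
    return sum(F(a + (i + 0.5) * h) for i in range(steps)) * h

for n in (1, 10, 100):
    f_n = lambda x, n=n: n if 0.0 < x <= 1.0 / n else 0.0
    print(n, round(riemann(f_n, 0.0, 1.0), 3))  # prints 1.0 for every n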
-
-One corollary to the dominated convergence theorem is the bounded convergence theorem, which states that if (fn) is a sequence of uniformly bounded complex-valued measurable functions which converges pointwise on a bounded measure space (S, Σ, μ) (i.e. one in which μ(S) is finite) to a function f, then the limit f is an integrable function and
-$$
-\lim_{n\to\infty} \int_S{f_nd\mu} = \int_S{fd\mu}.
-$$
-
-Remark: The pointwise convergence and uniform boundedness of the sequence can be relaxed to hold only μ-almost everywhere, provided the measure space (S, Σ, μ) is complete or f is chosen as a measurable function which agrees μ-almost everywhere with the μ-almost everywhere existing pointwise limit.
-
-Since the sequence is uniformly bounded, there is a real number M such that |fn(x)| ≤ M for all x ∈ S and for all n. Define g(x) = M for all x ∈ S. Then the sequence is dominated by g. Furthermore, g is integrable since it is a constant function on a set of finite measure. Therefore, the result follows from the dominated convergence theorem.
-
-If the assumptions hold only μ-almost everywhere, then there exists a μ-null set N ∈ Σ such that the functions fn1S\N satisfy the assumptions everywhere on S.
-
-Let $(\Omega,\mathcal{A},\mu)$ be a measure space, 1 ≤ p < ∞ a real number and (fn) a sequence of $\mathcal{A}$-measurable functions $f_n:\Omega\to\Complex\cup\{\infty\}$.
-
-Assume the sequence (fn) converges μ-almost everywhere to an $\mathcal{A}$-measurable function f, and is dominated by a $g \in L^p$ (cf. Lp space), i.e., for every natural number n we have: |fn| ≤ g, μ-almost everywhere.
-
-Then all fn as well as f are in $L^p$ and the sequence (fn) converges to f in the sense of $L^p$, i.e.:
-$$
-\lim_{n \to \infty}\|f_n-f\|_p =\lim_{n \to \infty}\left(\int_\Omega |f_n-f|^p d\mu\right)^{\frac{1}{p}} = 0.
-$$
-
-Idea of the proof: Apply the original theorem to the function sequence $h_n = |f_n-f|^p$ with the dominating function $(2g)^p$.
-
-The dominated convergence theorem applies also to measurable functions with values in a Banach space, with the dominating function still being non-negative and integrable as above. The assumption of convergence almost everywhere can be weakened to require only convergence in measure.
-
-The dominated convergence theorem applies also to conditional expectations.
diff --git a/wiki/wikipedia/548.txt b/wiki/wikipedia/548.txt
deleted file mode 100644
index 66d8fbb70b65cc4f93c587e47e82947fa95beb7f..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/548.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-The Eilenberg–Ganea conjecture is a claim in algebraic topology. It was formulated by Samuel Eilenberg and Tudor Ganea in 1957, in a short but influential paper. It states that if a group G has cohomological dimension 2, then it has a 2-dimensional Eilenberg–MacLane space $K(G,1)$. For n different from 2, a group G of cohomological dimension n has an n-dimensional Eilenberg–MacLane space. It is also known that a group of cohomological dimension 2 has a 3-dimensional Eilenberg−MacLane space.
-
-In 1997, Mladen Bestvina and Noel Brady constructed a group G so that either G is a counterexample to the Eilenberg–Ganea conjecture, or there must be a counterexample to the Whitehead conjecture; in other words, not both conjectures can be true.
diff --git a/wiki/wikipedia/549.txt b/wiki/wikipedia/549.txt
deleted file mode 100644
index fd629200f6b6bf6c3314543f87bbc237ce1fd7fc..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/549.txt
+++ /dev/null
@@ -1,52 +0,0 @@
-In the mathematical theory of random processes, the Markov chain central limit theorem has a conclusion somewhat similar in form to that of the classic central limit theorem (CLT) of probability theory, but the quantity in the role taken by the variance in the classic CLT has a more complicated definition. See also the general form of Bienaymé's identity.
-
-Suppose that:
-
-* the sequence $ X_1,X_2,X_3,\ldots $ of random elements of some set is a Markov chain that has a stationary probability distribution; and
-
-* the initial distribution of the process, i.e. the distribution of $ X_1$, is the stationary distribution, so that $ X_1,X_2,X_3,\ldots$ are identically distributed. In the classic central limit theorem these random variables would be assumed to be independent, but here we have only the weaker assumption that the process has the Markov property; and
-
-* $ g$ is some (measurable) real-valued function for which $ \operatorname{var}(g(X_1)) <+\infty.$
-
-Now let
-
-\begin{align}
-\mu & = \operatorname E(g(X_1)), \\
-\sigma^2 & = \operatorname{var}(g(X_1)) + 2\sum_{k=1}^\infty \operatorname{cov}( g(X_1), g(X_{1+k})), \\
-\widehat\mu_n & = \frac 1 n \sum_{k=1}^n g(X_k).
-\end{align}
-
-Then as $ n \to\infty,$ we have
-
- \hat\mu_n \approx \operatorname{Normal}\left( \mu, \frac{\sigma^2} n \right),
-
-or more precisely,
-
-\sqrt{n} (\hat{\mu}_n - \mu) \ \xrightarrow{\mathcal{D}} \ \text{Normal}(0, \sigma^2),
-
-where the decorated arrow indicates convergence in distribution.
-
-The Markov chain central limit theorem can be guaranteed for functionals of general state space Markov chains under certain conditions. In particular, this can be done with a focus on Monte Carlo settings. An example of the application in an MCMC (Markov chain Monte Carlo) setting is the following:
-
-Consider a simple hard spheres model on a grid. Suppose $X = \{1, \ldots, n_1\} \times \{1, \ldots, n_2\} \subseteq \mathbb{Z}^2$. A proper configuration on $X$ consists of coloring each point either black or white in such a way that no two adjacent points are white. Let $\mathcal{X}$ denote the set of all proper configurations on $X$, $N_X(n_1, n_2)$ be the total number of proper configurations and π be the uniform distribution on $\mathcal{X}$ so that each proper configuration is equally likely. Suppose our goal is to calculate the typical number of white points in a proper configuration; that is, if $W(x)$ is the number of white points in $x \in \mathcal{X}$ then we want the value of
-$$
-E_{\pi}W=\sum_{x\in\mathcal{X}}\frac{W(x)}{N_X(n_1,n_2)}
-$$
-
-If $n_1$ and $n_2$ are even moderately large then we will have to resort to an approximation to $E_\pi W$. Consider the following Markov chain on $\mathcal{X}$. Fix $p \in (0, 1)$ and set $X_0 = x_0$ where $x_0 \in \mathcal{X}$ is an arbitrary proper configuration. Randomly choose a point $(x, y) \in X$ and independently draw $U \sim \text{Uniform}(0, 1)$. If $U \le p$ and all of the adjacent points are black, then color $(x, y)$ white, leaving all other points alone. Otherwise, color $(x, y)$ black and leave all other points alone. Call the resulting configuration $X_1$. Continuing in this fashion yields a Harris ergodic Markov chain $\{X_0, X_1, X_2, \ldots\}$ having π as its invariant distribution. It is now a simple matter to estimate $E_\pi W$ with $\overline{w}_n = \frac{1}{n}\sum_{i=1}^{n} W(X_i)$.
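The chain just described is simple to simulate. A hypothetical sketch (the grid size n1 = n2 = 10, p = 0.5, and the number of steps are illustrative choices, not values from the article):

```python
import random

n1, n2, p, steps = 10, 10, 0.5, 100_000   # hypothetical parameters
white = set()                             # white points; all others are black

def neighbors(x, y):
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 1 <= x + dx <= n1 and 1 <= y + dy <= n2]

total = 0
for _ in range(steps):
    x, y = random.randint(1, n1), random.randint(1, n2)
    u = random.random()
    # Color (x, y) white only if u <= p and no adjacent point is white;
    # otherwise color it black. This keeps the configuration proper.
    if u <= p and not any(nb in white for nb in neighbors(x, y)):
        white.add((x, y))
    else:
        white.discard((x, y))
    total += len(white)                   # running sum of W(X_i)

print("estimate of E_pi W:", total / steps)
```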
-Also, since $\mathcal{X}$ is finite (albeit potentially large), it is well known that the chain converges exponentially fast to π, which implies that a CLT holds for $\overline{w}_n$.
-
-Not taking into account the additional terms in the variance which stem from correlations (e.g. serial correlations in Markov chain Monte Carlo simulations) can result in the problem of pseudoreplication when computing, e.g., confidence intervals for the sample mean.
diff --git a/wiki/wikipedia/55.txt b/wiki/wikipedia/55.txt
deleted file mode 100644
index 7bf4f5a933324dd1cb7efcb027cf028cd339d360..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/55.txt
+++ /dev/null
@@ -1,18 +0,0 @@
-In computational complexity theory, there is an open problem of whether some information about a sum of radicals may be computed in polynomial time depending on the input size, i.e., in the number of bits necessary to represent this sum. It is of importance for many problems in computational geometry, since the computation of the Euclidean distance between two points in the general case involves the computation of a square root, and therefore the perimeter of a polygon or the length of a polygonal chain takes the form of a sum of radicals.
-
-The sum of radicals is defined as a finite linear combination of radicals:
-$$
-\sum_{i=1}^n k_i\sqrt[r_i]{x_i},
-$$
-
-where $n, r_i$ are natural numbers and $k_i, x_i$ are real numbers.
-
-Most theoretical research in computational geometry of combinatorial character assumes the computational model of infinite precision real RAM, i.e., an abstract computer in which real numbers and operations on them are performed with infinite precision and the input size of a real number and the cost of an elementary operation are constants. However, there is research in computational complexity, especially in computer algebra, where the input size of a number is the number of bits necessary for its representation.
-
-Of particular interest in computational geometry is the problem of determining the sign of the sum of radicals. For instance, the length of a polygonal path in which all vertices have integer coordinates may be expressed using the Pythagorean theorem as a sum of integer square roots, so in order to determine whether one path is longer or shorter than another in a Euclidean shortest path problem, it is necessary to determine the sign of an expression in which the first path's length is subtracted from the second; this expression is a sum of radicals.
-
-In a similar way, the sum of radicals problem is inherent in the problem of minimum-weight triangulation in the Euclidean metric.
-
-In 1991, Blömer proposed a polynomial time Monte Carlo algorithm for determining whether a sum of radicals is zero, or more generally whether it represents a rational number.
-
-While Blömer's result does not resolve the computational complexity of finding the sign of the sum of radicals, it does imply that if the latter problem is in class NP, then it is also in co-NP.
diff --git a/wiki/wikipedia/550.txt b/wiki/wikipedia/550.txt
deleted file mode 100644
index 101c43d6f0271513285058100ee812786d5eddaf..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/550.txt
+++ /dev/null
@@ -1,35 +0,0 @@
-In set theory, a prewellordering on a set $X$ is a preorder $\le$ on $X$ (a transitive and strongly connected relation on $X$) that is wellfounded in the sense that the relation $x\le y\land y\nleq x$ is wellfounded.
If $\leq$ is a prewellordering on $X$, then the relation $\sim$ defined by -$$ -x\sim y\iff x\leq y \land y\leq x -$$ - -is an equivalence relation on $X$, and $\leq$ induces a wellordering on the quotient $X/\sim$. The order-type of this induced wellordering is an ordinal, referred to as the length of the prewellordering. - -A norm on a set $X$ is a map from $X$ into the ordinals. Every norm induces a prewellordering; if $\phi:X\to Ord$ is a norm, the associated prewellordering is given by -$$ -x\leq y\iff\phi(x)\leq\phi(y) -$$ - -Conversely, every prewellordering is induced by a unique regular norm (a norm $\phi:X\to Ord$ is regular if, for any $x\in X$ and any $\alpha<\phi(x)$, there is $y\in X$ such that $\phi(y)=\alpha$). - -If $\boldsymbol{\Gamma}$ is a pointclass of subsets of some collection $\mathcal{F}$ of Polish spaces, $\mathcal{F}$ closed under Cartesian product, and if $\leq$ is a prewellordering of some subset $P$ of some element $X$ of $\mathcal{F}$, then $\leq$ is said to be a $\boldsymbol{\Gamma}$-prewellordering of $P$ if the relations $<^*$ and $\leq^*$ are elements of $\boldsymbol{\Gamma}$, where for $x,y\in X$, - -# $x<^*y\iff x\in P\land[y\notin P\lor\{x\leq y\land y\not\leq x\}]$ - -# $x\leq^* y\iff x\in P\land[y\notin P\lor x\leq y]$ -$$ -\boldsymbol{\Gamma} -$$ is said to have the prewellordering property if every set in $\boldsymbol{\Gamma}$ admits a $\boldsymbol{\Gamma}$-prewellordering. - -The prewellordering property is related to the stronger scale property; in practice, many pointclasses having the prewellordering property also have the scale property, which allows drawing stronger conclusions. -$$ -\boldsymbol{\Pi}^1_1 -$$ and $\boldsymbol{\Sigma}^1_2$ both have the prewellordering property; this is provable in ZFC alone. Assuming sufficient large cardinals, for every $n\in\omega$, $\boldsymbol{\Pi}^1_{2n+1}$ and $\boldsymbol{\Sigma}^1_{2n+2}$ - -have the prewellordering property. - -If $\boldsymbol{\Gamma}$ is an adequate pointclass with the prewellordering property, then it also has the reduction property: For any space $X\in\mathcal{F}$ and any sets $A,B\subseteq X$, $A$ and $B$ both in $\boldsymbol{\Gamma}$, the union $A\cup B$ may be partitioned into sets $A^*,B^*$, both in $\boldsymbol{\Gamma}$, such that $A^*\subseteq A$ and $B^*\subseteq B$. - -If $\boldsymbol{\Gamma}$ is an adequate pointclass whose dual pointclass has the prewellordering property, then $\boldsymbol{\Gamma}$ has the separation property: For any space $X\in\mathcal{F}$ and any sets $A,B\subseteq X$, $A$ and $B$ disjoint sets both in $\boldsymbol{\Gamma}$, there is a set $C\subseteq X$ such that both $C$ and its complement $X\setminus C$ are in $\boldsymbol{\Gamma}$, with $A\subseteq C$ and $B\cap C=\emptyset$. - -For example, $\boldsymbol{\Pi}^1_1$ has the prewellordering property, so $\boldsymbol{\Sigma}^1_1$ has the separation property. This means that if $A$ and $B$ are disjoint analytic subsets of some Polish space $X$, then there is a Borel subset $C$ of $X$ such that $C$ includes $A$ and is disjoint from $B$. diff --git a/wiki/wikipedia/551.txt b/wiki/wikipedia/551.txt deleted file mode 100644 index 85d9010b974c6625cd72c380d7dfc43188ce7e75..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/551.txt +++ /dev/null @@ -1,87 +0,0 @@ - - -Euler's conjecture is a disproved conjecture in mathematics related to Fermat's Last Theorem. It was proposed by Leonhard Euler in 1769. 
-It states that for all integers n and k greater than 1, if the sum of n kth powers of positive integers is itself a kth power, then n is greater than or equal to k:
-$$
-a_1^k + a_2^k + \cdots + a_n^k = b^k \implies n \ge k
-$$
-
-The conjecture represents an attempt to generalize Fermat's Last Theorem, which is the special case n = 2: if $a_1^k + a_2^k = b^k$, then 2 ≥ k.
-
-Although the conjecture holds for the case k = 3 (which follows from Fermat's Last Theorem for third powers), it was disproved for k = 4 and k = 5. It is unknown whether the conjecture fails or holds for any value k ≥ 6.
-
-Euler was aware of the equality 59^4 + 158^4 = 133^4 + 134^4 involving sums of four fourth powers; this however is not a counterexample because no term is isolated on one side of the equation. He also provided a complete solution to the four cubes problem as in Plato's number 3^3 + 4^3 + 5^3 = 6^3 or the taxicab number 1729. The general solution of the equation
-$$
-x_1^3+x_2^3=x_3^3+x_4^3
-$$
-
-is
-$$
-x_1 = 1-(a-3b)(a^2+3b^2), \quad x_2 = (a+3b)(a^2+3b^2)-1
-$$
-$$
-x_3 = (a+3b)-(a^2+3b^2)^2, \quad x_4 = (a^2+3b^2)^2-(a-3b)
-$$
-
-where a and b are any integers.
-
-Euler's conjecture was disproven by L. J. Lander and T. R. Parkin in 1966 when, through a direct computer search on a CDC 6600, they found a counterexample for k = 5: $27^5 + 84^5 + 110^5 + 133^5 = 144^5$. This was published in a paper comprising just two sentences.
-
-In 1986, Noam Elkies found a method to construct an infinite series of counterexamples for the k = 4 case. His smallest counterexample was
-
-$2682440^4 + 15365639^4 + 18796760^4 = 20615673^4$.
-
-A particular case of Elkies' solutions can be reduced to the identity
-
-$(85v^2 + 484v − 313)^4 + (68v^2 − 586v + 10)^4 + (2u)^4 = (357v^2 − 204v + 363)^4$
-
-where
-
-$u^2 = 22030 + 28849v − 56158v^2 + 36941v^3 − 31790v^4$.
-
-This is an elliptic curve with a rational point at v1 = −31/467. From this initial rational point, one can compute an infinite collection of others. Substituting v1 into the identity and removing common factors gives the numerical example cited above.
-
-In 1988, Roger Frye found the smallest possible counterexample
-
-$95800^4 + 217519^4 + 414560^4 = 422481^4$
-
-for k = 4 by a direct computer search using techniques suggested by Elkies. This solution is the only one with values of the variables below 1,000,000.
-
-In 1967, L. J. Lander, T. R. Parkin, and John Selfridge conjectured that if
-$$
-\sum_{i=1}^{n} a_i^k = \sum_{j=1}^{m} b_j^k
-$$
-
-where ai ≠ bj are positive integers for all 1 ≤ i ≤ n and 1 ≤ j ≤ m, then m + n ≥ k. In the special case m = 1, the conjecture states that if
-$$
-\sum_{i=1}^{n} a_i^k = b^k
-$$
-
-(under the conditions given above) then n ≥ k − 1.
-
-The special case may be described as the problem of giving a partition of a perfect power into few like powers. For k = 4, 5, 7, 8 and n = k or k − 1, there are many known solutions. Some of these are listed below. As of 2002, there are no solutions for $k=6$ whose final term is ≤ 730000.
-
-$3^3 + 4^3 + 5^3 = 6^3$ (Plato's number 216)
-
-This is the case a = 1, b = 0 of Srinivasa Ramanujan's formula
-$$
-(3a^2+5ab-5b^2)^3 + (4a^2-4ab+6b^2)^3 + (5a^2-5ab-3b^2)^3 = (6a^2-4ab+4b^2)^3 .
-$$
-
-A cube as the sum of three cubes can also be parameterized as
-$$
-a^3(a^3+b^3)^3 = b^3(a^3+b^3)^3+a^3(a^3-2b^3)^3+b^3(2a^3-b^3)^3
-$$
-
-or as
-$$
-a^3(a^3+2b^3)^3 = a^3(a^3-b^3)^3+b^3(a^3-b^3)^3+b^3(2a^3+b^3)^3.
-$$
-
-$19^5 + 43^5 + 46^5 + 47^5 + 67^5 = 72^5$ (Lander, Parkin, Selfridge, smallest, 1967)
-
-$21^5 + 23^5 + 37^5 + 79^5 + 84^5 = 94^5$ (Lander, Parkin, Selfridge, second smallest, 1967)
-
-$7^5 + 43^5 + 57^5 + 80^5 + 100^5 = 107^5$ (Sastry, 1934, third smallest)
-
-$127^7 + 258^7 + 266^7 + 413^7 + 430^7 + 439^7 + 525^7 = 568^7$ (M. Dodrill, 1999)
-
-$90^8 + 223^8 + 478^8 + 524^8 + 748^8 + 1088^8 + 1190^8 + 1324^8 = 1409^8$ (S. Chase, 2000)
diff --git a/wiki/wikipedia/552.txt b/wiki/wikipedia/552.txt
deleted file mode 100644
index e4b8be08ee8cacf719c75d9a14bf3a02e68f0a23..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/552.txt
+++ /dev/null
@@ -1,146 +0,0 @@
-In probability theory, the Borel–Cantelli lemma is a theorem about sequences of events. In general, it is a result in measure theory. It is named after Émile Borel and Francesco Paolo Cantelli, who stated the lemma in the first decades of the 20th century. A related result, sometimes called the second Borel–Cantelli lemma, is a partial converse of the first Borel–Cantelli lemma. The lemma states that, under certain conditions, an event will have probability either zero or one. Accordingly, it is the best-known of a class of similar theorems, known as zero-one laws. Other examples include Kolmogorov's zero–one law and the Hewitt–Savage zero–one law.
-
-Let E1,E2,... be a sequence of events in some probability space.
-
-The Borel–Cantelli lemma states:
-
-If the sum of the probabilities of the events {En} is finite
-
-\sum_{n=1}^\infty \Pr(E_n)<\infty,
-
-then the probability that infinitely many of them occur is 0, that is,
-
-\Pr\left(\limsup_{n\to\infty} E_n\right) = 0.
-
-Here, "lim sup" denotes limit supremum of the sequence of events, and each event is a set of outcomes. That is, lim sup En is the set of outcomes that occur infinitely many times within the infinite sequence of events (En). Explicitly,
-
-\limsup_{n\to\infty} E_n = \bigcap_{n=1}^\infty \bigcup_{k = n}^\infty E_k.
-
-The set lim sup En is sometimes denoted {En i.o. }, where "i.o." stands for "infinitely often". The theorem therefore asserts that if the sum of the probabilities of the events En is finite, then the set of all outcomes that are "repeated" infinitely many times must occur with probability zero. Note that no assumption of independence is required.
-
-Suppose (Xn) is a sequence of random variables with Pr(Xn = 0) = 1/n^2 for each n. The probability that Xn = 0 occurs for infinitely many n is the probability of the intersection of infinitely many [Xn = 0] events. The intersection of infinitely many such events is a set of outcomes common to all of them. However, the sum $\sum \Pr(X_n = 0)$ converges to $\pi^2/6 \approx 1.645 < \infty$, and so the Borel–Cantelli lemma states that the set of outcomes that are common to infinitely many such events occurs with probability zero. Hence, the probability of Xn = 0 occurring for infinitely many n is 0. Almost surely (i.e., with probability 1), Xn is nonzero for all but finitely many n.
-
-Let (En) be a sequence of events in some probability space.
-
-The sequence of events $\left\{\bigcup_{n=N}^\infty E_n\right\}^\infty_{N=1}$ is non-increasing:
-
-\bigcup_{n=1}^\infty E_n \supseteq \bigcup_{n=2}^\infty E_n \supseteq \cdots \supseteq \bigcup_{n=N}^\infty E_n \supseteq \bigcup_{n=N+1}^\infty E_n \supseteq \cdots \supseteq \limsup_{n\to\infty} E_n.
-
-By continuity from above,
-
-\Pr(\limsup_{n \to \infty} E_n) = \lim_{N\to\infty}\Pr\left(\bigcup_{n=N}^\infty E_n\right).
-
-By subadditivity,
-
-\Pr\left(\bigcup_{n=N}^\infty E_n\right) \leq \sum^\infty_{n=N} \Pr(E_n).
-
-By the original assumption, $\sum_{n=1}^\infty \Pr(E_n)<\infty.$ As the series $ \sum_{n=1}^\infty \Pr(E_n)$ converges,
-
-\lim_{N\to\infty} \sum^\infty_{n=N} \Pr(E_n)=0,
-
-as required.
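The 1/n² example above can be corroborated by simulation. A hypothetical sketch (the horizon N and the number of repetitions are arbitrary choices):

```python
import random

# Hypothetical simulation of the example: events E_n with Pr(E_n) = 1/n^2.
# Since the probabilities have a finite sum (pi^2/6), Borel-Cantelli says
# that with probability 1 only finitely many E_n occur, so the count of
# occurrences in each run stays small and the last occurrence comes early.
N = 100_000

for trial in range(5):
    occurrences = [n for n in range(1, N + 1) if random.random() < 1.0 / n**2]
    last = occurrences[-1] if occurrences else None
    print(f"trial {trial}: {len(occurrences)} events occurred, last at n = {last}")
```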
-
-For general measure spaces, the Borel–Cantelli lemma takes the following form:
-
-Let μ be a (positive) measure on a set X, with σ-algebra F, and let (An) be a sequence in F. If
-
-\sum_{n=1}^\infty\mu(A_n)<\infty,
-
-then
-
-\mu\left(\limsup_{n\to\infty} A_n\right) = 0.
-
-A related result, sometimes called the second Borel–Cantelli lemma, is a partial converse of the first Borel–Cantelli lemma. The lemma states: If the events En are independent and the sum of the probabilities of the En diverges to infinity, then the probability that infinitely many of them occur is 1. That is:
-
-If $\sum^{\infty}_{n = 1} \Pr(E_n) = \infty$ and the events $(E_n)^{\infty}_{n = 1}$ are independent, then $\Pr(\limsup_{n \to \infty} E_n) = 1.$
-
-The assumption of independence can be weakened to pairwise independence, but in that case the proof is more difficult.
-
-The infinite monkey theorem, that endless typing at random will, with probability 1, eventually produce every finite text (such as the works of Shakespeare), amounts to the statement that a (not necessarily fair) coin tossed infinitely often will eventually come up Heads. This is a special case of the second lemma.
-
-The lemma can be applied to give a covering theorem in $\mathbb{R}^n$. Specifically, if Ej is a collection of Lebesgue measurable subsets of a compact set in $\mathbb{R}^n$ such that
-
-\sum_j \mu(E_j) = \infty,
-
-then there is a sequence Fj of translates
-
-F_j = E_j + x_j
-
-such that
-
-\limsup F_j = \bigcap_{n=1}^\infty \bigcup_{k=n}^\infty F_k = \mathbb{R}^n
-
-apart from a set of measure zero.
-
-Suppose that $\sum_{n = 1}^\infty \Pr(E_n) = \infty$ and the events $(E_n)^\infty_{n = 1}$ are independent. It is sufficient to show that the event that the En's occur for only finitely many values of n has probability 0. This is just to say that it is sufficient to show that
-
- 1-\Pr(\limsup_{n \to \infty} E_n) = 0.
-
-Noting that:
-
-\begin{align}
-1 - \Pr(\limsup_{n \to \infty} E_n) &= 1 - \Pr\left(\{E_n\text{ i.o.}\}\right) = \Pr\left(\{E_n \text{ i.o.}\}^c \right) \\
-& = \Pr\left(\left(\bigcap_{N=1}^\infty \bigcup_{n=N}^\infty E_n\right)^c \right) = \Pr\left(\bigcup_{N=1}^\infty \bigcap_{n=N}^\infty E_n^c \right)\\
-&= \Pr\left(\liminf_{n \to \infty}E_n^{c}\right)= \lim_{N \to \infty}\Pr\left(\bigcap_{n=N}^\infty E_n^c \right)
-\end{align}
-
-it is enough to show: $\Pr\left(\bigcap_{n=N}^{\infty} E_n^{c}\right) = 0$. Since the $(E_n)^{\infty}_{n = 1}$ are independent:
-
-\begin{align}
-\Pr\left(\bigcap_{n=N}^\infty E_n^c\right)
-&= \prod^\infty_{n=N} \Pr(E_n^c) \\
-&= \prod^\infty_{n=N} (1-\Pr(E_n)) \\
-&\leq\prod^\infty_{n=N} \exp(-\Pr(E_n))\\
-&=\exp\left(-\sum^\infty_{n=N} \Pr(E_n)\right)\\
-&= 0.
-\end{align}
-
-This completes the proof. Alternatively, we can see $\Pr\left(\bigcap_{n=N}^\infty E_n^c \right) = 0$ by taking the negative logarithm of both sides to get:
-
-\begin{align}
--\log\left(\Pr\left(\bigcap_{n=N}^{\infty}E_n^{c}\right)\right)
-&= -\log\left(\prod^{\infty}_{n=N} (1-\Pr(E_n))\right) \\
-&= - \sum^\infty_{n=N}\log(1-\Pr(E_n)).
-\end{align}
-
-Since −log(1 − x) ≥ x for all 0 ≤ x < 1, the result similarly follows from our assumption that $\sum^\infty_{n = 1} \Pr(E_n) = \infty.$
-
-Another related result is the so-called counterpart of the Borel–Cantelli lemma.
-It is a counterpart of the lemma in the sense that it gives a necessary and sufficient condition for the limsup to be 1 by replacing the independence assumption by the completely different assumption that $(A_n)$ is monotone increasing for sufficiently large indices. This lemma says:
-
-Let $(A_n)$ be such that $A_k \subseteq A_{k+1}$,
-
-and let $\bar A$ denote the complement of $A$. Then the probability that infinitely many $A_k$ occur (which, since the sequence is increasing, is the same as the probability that at least one $A_k$ occurs) is one if and only if there exists a strictly increasing sequence of positive integers $( t_k)$ such that
-
- \sum_k \Pr( A_{t_{k+1}} \mid \bar A_{t_k}) = \infty.
-
-This simple result can be useful in problems such as, for instance, those involving hitting probabilities for stochastic processes, with the choice of the sequence $(t_k)$ usually being the essence.
-
-Let $A_n$ be a sequence of events with $\sum\Pr(A_n)=\infty$ and
-$$
- \liminf_{k\to\infty} \frac{\sum_{1\le m,n \le k} \Pr(A_m\cap A_n)} {\left(\sum_{n=1}^k\Pr(A_n)\right)^2} < \infty.
-$$ Then there is a positive probability that the $A_n$ occur infinitely often.
diff --git a/wiki/wikipedia/553.txt b/wiki/wikipedia/553.txt
deleted file mode 100644
index 4b3bf4eb10580b3e3afe1fa26da8b3cc8aaa6ac7..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/553.txt
+++ /dev/null
@@ -1,25 +0,0 @@
-Planarity is a puzzle computer game by John Tantalo, based on a concept by Mary Radcliffe at Western Michigan University.
-
-The name comes from the concept of planar graphs in graph theory; these are graphs that can be embedded in the Euclidean plane so that no edges intersect. By Fáry's theorem, if a graph is planar, it can be drawn without crossings so that all of its edges are straight line segments. In the planarity game, the player is presented with a circular layout of a planar graph, with all the vertices placed on a single circle and with many crossings. The goal for the player is to eliminate all of the crossings and construct a straight-line embedding of the graph by moving the vertices one by one into better positions.
-
-The game was written in Flash by John Tantalo at Case Western Reserve University. Online popularity and the local notoriety he gained placed Tantalo as one of Cleveland's most interesting people for 2006. It in turn has inspired the creation of a GTK+ version by Xiph.org's Chris Montgomery, which possesses additional level generation algorithms and the ability to manipulate multiple nodes at once.
-
-The definition of the planarity puzzle does not depend on how the planar graphs in the puzzle are generated, but the original implementation uses the following algorithm (sketched in code below):
-
-# Generate a set of random lines in a plane such that no two lines are parallel and no three lines meet in a single point.
-
-# Calculate the intersections of every line pair.
-
-# Create a graph with a vertex for each intersection and an edge for each line segment connecting two intersections (the arrangement of the lines).
-
-If a graph is generated from $L$ lines, then the graph will have exactly $\tbinom{L}{2} = \tfrac{L(L-1)}{2}$ vertices (each line has $L-1$ vertices, and each vertex is shared with one other line) and $L(L-2)$ edges (each line contains $L-2$ edges). The first level of Planarity is built with $L=4$ lines, so it has $L(L-1)/2=6$ vertices and $L(L-2)=8$ edges. Each level after is generated by one more line than the last. If a level was generated with $L$ lines, then the next level has $L$ more vertices and $2L-1$ more edges.
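The three-step generation procedure can be sketched compactly. The following is a hypothetical implementation (the random-slope scheme and all helper names are ours, not the game's actual code); it also realizes the simple sort-along-each-line edge construction discussed next:

```python
import itertools
import random

# Hypothetical sketch of the level generator: random lines y = m*x + b,
# a vertex at each pairwise intersection, and an edge between consecutive
# intersections along each line.
L = 4
lines = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(L)]

def intersection(l1, l2):
    (m1, b1), (m2, b2) = l1, l2
    x = (b2 - b1) / (m1 - m2)       # slopes are distinct almost surely
    return (x, m1 * x + b1)

vertices = {frozenset(pair): intersection(*pair)
            for pair in itertools.combinations(lines, 2)}

edges = []
for line in lines:
    # Sort this line's L-1 intersection points by x-coordinate; consecutive
    # points along the line are joined by an edge (L-2 edges per line).
    on_line = sorted((v for k, v in vertices.items() if line in k),
                     key=lambda p: p[0])
    edges += [(p, q) for p, q in zip(on_line, on_line[1:])]

print(len(vertices), "vertices,", len(edges), "edges")   # 6 and 8 for L = 4
```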
-
-The best known algorithms from computational geometry for constructing the graphs of line arrangements solve the problem in $O(L^2)$ time, linear in the size of the graph to be constructed, but they are somewhat complex. Alternatively and more simply, it is possible to index each crossing point by the pair of lines that cross at that point, sort the crossings along each line by their $x$-coordinates, and use this sorted ordering to generate the edges of the planar graph, in near-optimal $O(L^2\log L)$ time. Once the vertices and edges of the graph have been generated, they may be placed evenly around a circle using a random permutation.
-
-The problem of determining whether a graph is planar can be solved in linear time, and any such graph is guaranteed to have a straight-line embedding by Fáry's theorem; such an embedding can also be found from the planar embedding in linear time. Therefore, any puzzle could be solved in linear time by a computer. However, these puzzles are not as straightforward for human players to solve.
-
-In the field of computational geometry, the process of moving a subset of the vertices in a graph embedding to eliminate edge crossings has been studied by Pach and Tardos (2002), and others, inspired by the planarity puzzle. The results of these researchers show that (in theory, assuming that the field of play is the infinite plane rather than a bounded rectangle) it is always possible to solve the puzzle while leaving $n^\epsilon$ of the $n$ input vertices fixed in place at their original positions, for a constant $\epsilon$ that has not been determined precisely but lies between 1/4 and slightly less than 1/2. When the planar graph to be untangled is a cycle graph, a larger number of vertices may be fixed in place. However, determining the largest number of vertices that may be left in place for a particular input puzzle (or equivalently, the smallest number of moves needed to solve the puzzle) is NP-complete.
-
-Verbitsky has shown that the randomized circular layout used for the initial state of Planarity is nearly the worst possible in terms of its number of crossings: regardless of what planar graph is to be tangled, the expected value of the number of crossings for this layout is within a factor of three of the largest number of crossings among all layouts.
-
-In 2014, mathematician David Eppstein published a paper providing an effective algorithm for untangling the planar graphs generated by the original Planarity game, based on the specifics of the puzzle generation algorithm.
diff --git a/wiki/wikipedia/554.txt b/wiki/wikipedia/554.txt
deleted file mode 100644
index 7a2f91fada8f972ea138d935ff42e17947692eee..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/554.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-Telehash is a mesh networking protocol that aims to be secure. The protocol is released into the public domain under a Creative Commons public domain dedication.
-
-Telehash implementations were still in development as of 2014. As a security-sensitive application, it had yet to receive a third-party security review. Telehash is similar to Resilio Sync in that it allows users of the software to share data securely without any central server authority. There are implementations in C, Python, Ruby, Erlang, JavaScript, Go, and Objective-C.
-
-Telehash is used in the Locker project, and was planned to be used as a communication and file transfer mechanism in Git-annex.
diff --git a/wiki/wikipedia/555.txt b/wiki/wikipedia/555.txt
deleted file mode 100644
index c4ca7b3ed5993e7b2559b53f68915225d307e4dd..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/555.txt
+++ /dev/null
@@ -1,9 +0,0 @@
-The postage stamp problem is a mathematical riddle that asks for the smallest postage value that cannot be placed on an envelope, if the envelope can hold only a limited number of stamps, and these may only have certain specified face values.
-
-For example, suppose the envelope can hold only three stamps, and the available stamp values are 1 cent, 2 cents, 5 cents, and 20 cents. Then the solution is 13 cents; any smaller value can be obtained with at most three stamps (e.g. 4 = 2 + 2, 8 = 5 + 2 + 1, etc.), but to get 13 cents one must use at least four stamps.
-
-Mathematically, the problem can be formulated as follows:
-
-Given an integer m and a set V of positive integers, find the smallest integer z that cannot be written as the sum v1 + v2 + ··· + vk of some number k ≤ m of (not necessarily distinct) elements of V.
-
-This problem can be solved by brute force search or backtracking with maximum time proportional to $|V|^m$, where $|V|$ is the number of distinct stamp values allowed. Therefore, if the capacity of the envelope m is fixed, it is a polynomial time problem. If the capacity m is arbitrary, the problem is known to be NP-hard.
diff --git a/wiki/wikipedia/556.txt b/wiki/wikipedia/556.txt
deleted file mode 100644
index 7ee6d705d376e7fc03964e0544517e0eac259407..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/556.txt
+++ /dev/null
@@ -1,47 +0,0 @@
-The k shortest path routing problem is a generalization of the shortest path routing problem in a given network. It asks not only for a shortest path but also for the next k−1 shortest paths (which may be longer than the shortest path). A variation of the problem is the loopless k shortest paths.
-
-Finding the k shortest paths is possible by extending the Dijkstra or Bellman–Ford algorithms to find more than one path.
-
-Since 1957, many papers have been published on the k shortest path routing problem. Most of the fundamental work was done between the 1960s and 2001. Since then, most of the research has been on the problem's applications and its variants. In 2010, Michael Günther et al. published a book on symbolic calculation of k-shortest paths and related measures with the stochastic process algebra tool CASPA.
-
-The Dijkstra algorithm can be generalized to find the k shortest paths.
-
-There are two main variations of the k shortest path routing problem. In one variation, paths are allowed to visit the same node more than once, thus creating loops. In another variation, paths are required to be simple and loopless. The loopy version is solvable using Eppstein's algorithm.
-
-In this variant, the problem is simplified by not requiring paths to be loopless. In 2015, Akuba et al. devised an indexing method as a significantly faster alternative for Eppstein's algorithm, in which a data structure called an index is constructed from a graph and then top-k distances between arbitrary pairs of vertices can be rapidly obtained.
-
-In the loopless variant, the paths are forbidden to contain loops, which adds an additional level of complexity.
-It can be solved using Yen's algorithm to find the lengths of all shortest paths from a fixed node to all other nodes in an n-node non-negative-distance network, a technique requiring only $2n^2$ additions and $n^2$ comparisons, fewer than other available shortest path algorithms need. The running time complexity is O(kn(m + n log n)) (where m and n represent the number of edges and vertices, respectively). In 2007, John Hershberger and Subhash Suri proposed a replacement paths algorithm, a more efficient implementation of Lawler's and Yen's algorithm with O(n) improvement in time.
-
-The following example makes use of Yen's model to find k shortest paths between communicating end nodes. That is, it finds a shortest path, second shortest path, etc. up to the kth shortest path.
-
-The code provided in this example attempts to solve the k shortest path routing problem for a 15-node network containing a combination of unidirectional and bidirectional links.
-
-Another example is the use of k shortest paths algorithms to track multiple objects. The technique implements a multiple object tracker based on the k shortest paths routing algorithm. A set of probabilistic occupancy maps, produced by an object detector, is used as input.
-
-The complete details can be found at CVLAB.
-
-Another use of k shortest paths algorithms is to design a transit network that enhances passengers' experience in public transportation systems. Such an example of a transit network can be constructed by putting traveling time under consideration. In addition to traveling time, other conditions may be taken into account depending upon economical and geographical limitations. Despite variations in parameters, the k shortest path algorithms find optimal solutions that satisfy almost all user needs. Such applications of k shortest path algorithms are becoming common; recently Xu, He, Song, and Chaudry (2012) studied the k shortest path problems in transit network systems.
-
-The k shortest path routing is a good alternative for:
-
-* Network routing, especially in optical mesh networks where there are additional constraints that cannot be solved by using ordinary shortest path algorithms.
-
-* Hypothesis generation in computational linguistics
-
-* Multiple object tracking, as described above
-
-* Road networks: road junctions are the nodes (vertices) and each edge (link) of the graph is associated with a road segment between two junctions.
-
-*The breadth-first search algorithm is used when the search is only limited to two operations.
-
-*The Floyd–Warshall algorithm solves all pairs shortest paths.
-
-*Johnson's algorithm solves all pairs' shortest paths, and may be faster than Floyd–Warshall on sparse graphs.
-
-*Perturbation theory finds (at worst) the locally shortest path.
-
-Cherkassky et al. provide more algorithms and associated evaluations.
diff --git a/wiki/wikipedia/557.txt b/wiki/wikipedia/557.txt
deleted file mode 100644
index 400c42e902d7ea7ed2788b08355965804cff38f5..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/557.txt
+++ /dev/null
@@ -1,51 +0,0 @@
-In lambda calculus, the Church–Rosser theorem states that, when applying reduction rules to terms, the ordering in which the reductions are chosen does not make a difference to the eventual result.
- -More precisely, if there are two distinct reductions or sequences of reductions that can be applied to the same term, then there exists a term that is reachable from both results, by applying (possibly empty) sequences of additional reductions. The theorem was proved in 1936 by Alonzo Church and J. Barkley Rosser, after whom it is named. - -The theorem is symbolized by the adjacent diagram: If term a can be reduced to both b and c, then there must be a further term d (possibly equal to either b or c) to which both b and c can be reduced. - -Viewing the lambda calculus as an abstract rewriting system, the Church–Rosser theorem states that the reduction rules of the lambda calculus are confluent. As a consequence of the theorem, a term in the lambda calculus has at most one normal form, justifying reference to "the normal form" of a given normalizable term. - -In 1936, Alonzo Church and J. Barkley Rosser proved that the theorem holds for β-reduction in the λI-calculus (in which every abstracted variable must appear in the term's body). The proof method is known as "finiteness of developments", and it has additional consequences such as the Standardization Theorem, which relates to a method in which reductions can be performed from left to right to reach a normal form (if one exists). The result for the pure untyped lambda calculus was proved by D. E. Shroer in 1965. - -One type of reduction in the pure untyped lambda calculus for which the Church–Rosser theorem applies is β-reduction, in which a subterm of the form $( \lambda x . t) s$ is contracted by the substitution $ t [ x := s]$. If β-reduction is denoted by $ \rightarrow_\beta $ and its reflexive, transitive closure by $ \twoheadrightarrow_\beta $ then the Church–Rosser theorem is that: -$$ -\forall M, N_1, N_2 \in \Lambda: \text{if}\ M\twoheadrightarrow_\beta N_1 \ \text{and}\ M\twoheadrightarrow_\beta N_2 \ \text{then}\ \exists X\in \Lambda: N_1\twoheadrightarrow_\beta X \ \text{and}\ N_2\twoheadrightarrow_\beta X -$$ - -A consequence of this property is that two terms equal in $\lambda\beta$ must reduce to a common term: -$$ -\forall M, N\in \Lambda: \text{if}\ \lambda\beta \vdash M=N \ \text{then}\ \exists X: M \twoheadrightarrow_\beta X \ \text{and}\ N\twoheadrightarrow_\beta X -$$ - -The theorem also applies to η-reduction, in which a subterm $\lambda x.Sx$ is replaced by $S$. It also applies to βη-reduction, the union of the two reduction rules. - -For β-reduction, one proof method originates from William W. Tait and Per Martin-Löf. Say that a binary relation $ \rightarrow $ satisfies the diamond property if: -$$ -\forall M, N_1, N_2 \in \Lambda: \text{if}\ M\rightarrow N_1 \ \text{and}\ M\rightarrow N_2 \ \text{then}\ \exists X\in \Lambda: N_1\rightarrow X \ \text{and}\ N_2\rightarrow X -$$ - -Then the Church–Rosser property is the statement that $ \twoheadrightarrow_\beta $ satisfies the diamond property. We introduce a new reduction $ \rightarrow_{\|} $ whose reflexive transitive closure is $ \twoheadrightarrow_\beta $ and which satisfies the diamond property. By induction on the number of steps in the reduction, it thus follows that $ \twoheadrightarrow_\beta $ satisfies the diamond property. - -The relation $ \rightarrow_{\|} $ has the formation rules: - -*$M \rightarrow_{\|} M$ - -*If $M \rightarrow_{\|} M'$ and $N \rightarrow_{\|} N'$ then $\lambda x.M \rightarrow_{\|} \lambda x.M'$ and $MN \rightarrow_{\|} M'N'$ and $(\lambda x. 
M)N \rightarrow_{\|} M'[x:=N']$
-
-The η-reduction rule can be proved to be Church–Rosser directly. Then, it can be proved that β-reduction and η-reduction commute in the sense that:
-
-If $M \rightarrow_\beta N_1$ and $M \rightarrow_\eta N_2$ then there exists a term $X$ such that $N_1 \rightarrow_\eta X$ and $N_2\rightarrow_\beta X$.
-
-Hence we can conclude that βη-reduction is Church–Rosser.
-
-A reduction rule that satisfies the Church–Rosser property has the property that every term M can have at most one distinct normal form, as follows: if X and Y are normal forms of M then by the Church–Rosser property, they both reduce to a common term Z. Both terms are already normal forms, so $X=Z=Y$.
-
-If a reduction is strongly normalising (there are no infinite reduction paths) then a weak form of the Church–Rosser property implies the full property (see Newman's lemma). The weak property, for a relation $\rightarrow$, is:
-$$
-\forall M, N_1, N_2\in \Lambda:
-$$ if $M\rightarrow N_1$ and $M\rightarrow N_2$ then there exists a term $X$ such that $N_1\twoheadrightarrow X$ and $N_2 \twoheadrightarrow X$.
-
-The Church–Rosser theorem also holds for many variants of the lambda calculus, such as the simply-typed lambda calculus, many calculi with advanced type systems, and Gordon Plotkin's beta-value calculus. Plotkin also used a Church–Rosser theorem to prove that the evaluation of functional programs (for both lazy evaluation and eager evaluation) is a function from programs to values (a subset of the lambda terms).
-
-In older research papers, a rewriting system is said to be Church–Rosser, or to have the Church–Rosser property, when it is confluent.
diff --git a/wiki/wikipedia/558.txt b/wiki/wikipedia/558.txt
deleted file mode 100644
index a7d14c75a8055185295af41316d690831972841c..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/558.txt
+++ /dev/null
@@ -1,27 +0,0 @@
-In incidence geometry, the De Bruijn–Erdős theorem, originally published by Nicolaas Govert de Bruijn and Paul Erdős in 1948, states a lower bound on the number of lines determined by n points in a projective plane. By duality, this is also a bound on the number of intersection points determined by a configuration of lines.
-
-Although the proof given by De Bruijn and Erdős is combinatorial, De Bruijn and Erdős noted in their paper that the analogous (Euclidean) result is a consequence of the Sylvester–Gallai theorem, by an induction on the number of points.
-
-Let P be a configuration of n points in a projective plane, not all on a line. Let t be the number of lines determined by P. Then,
-
-* t ≥ n, and
-
-* if t = n, any two lines have exactly one point of P in common. In this case, P is either a projective plane or P is a near pencil, meaning that exactly n - 1 of the points are collinear.
-
-The theorem is clearly true for three non-collinear points. We proceed by induction.
-
-Assume n > 3 and the theorem is true for n - 1.
-
-Let P be a set of n points not all collinear.
-
-The Sylvester–Gallai theorem states that there is a line containing exactly two points of P. Such two-point lines are called ordinary lines.
-
-Let a and b be the two points of P on an ordinary line.
-
-If the removal of point a produces a set of collinear points, then P generates a near pencil of n lines (the n - 1 ordinary lines through a plus the one line containing the other n - 1 points).
-
-Otherwise, the removal of a produces a set, P' , of n - 1 points that are not all collinear.
-
-By the induction hypothesis, P' determines at least n - 1 lines.
The ordinary line determined by a and b is not among these, so P determines at least n lines.
-
-John Horton Conway has a purely combinatorial proof which consequently also holds for points and lines over the complex numbers, quaternions and octonions.
diff --git a/wiki/wikipedia/559.txt b/wiki/wikipedia/559.txt
deleted file mode 100644
index 12a0450d2f75ad47736b2b0b13f9da12dcbdb536..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/559.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-Transactional NTFS (abbreviated TxF) is a component introduced in Windows Vista and present in later versions of the Microsoft Windows operating system that brings the concept of atomic transactions to the NTFS file system, allowing Windows application developers to write file-output routines that are guaranteed to either succeed completely or fail completely.
-
-Major operating system components, including System Restore, Task Scheduler, and Windows Update, rely on TxF for stability.
-
-Due to its complexity and various nuances which developers need to consider as part of application development, Microsoft has deprecated TxF and stated that it may be removed in a future version of Windows. Microsoft has strongly recommended that developers investigate using the alternatives rather than adopting the Transactional NTFS API platform, which may not be available in future versions of Windows.
diff --git a/wiki/wikipedia/56.txt b/wiki/wikipedia/56.txt
deleted file mode 100644
index 50ef2ef43cc7fa66a30145b50850da786d6566b7..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/56.txt
+++ /dev/null
@@ -1,46 +0,0 @@
-The Dubins–Spanier theorems are several theorems in the theory of fair cake-cutting. They were published by Lester Dubins and Edwin Spanier in 1961. Although the original motivation for these theorems is fair division, they are in fact general theorems in measure theory.
-
-There is a set $U$, and a set $\mathbb{U}$ which is a sigma-algebra of subsets of $U$.
-
-There are $n$ partners. Every partner $i$ has a personal value measure $V_i: \mathbb{U} \to \mathbb{R}$. This function determines how much each subset of $U$ is worth to that partner.
-
-Let $X$ be a partition of $U$ into $k$ measurable sets: $U = X_1 \sqcup \cdots \sqcup X_k$. Define the matrix $M_X$ as the following $n\times k$ matrix:
-$$
-M_X[i,j] = V_i(X_j)
-$$
-
-This matrix contains the valuations of all players to all pieces of the partition.
-
-Let $\mathbb{M}$ be the collection of all such matrices (for the same value measures, the same $k$, and different partitions):
-$$
-\mathbb{M} = \{M_X \mid X\text{ is a }k\text{-partition of }U\}
-$$
-
-The Dubins–Spanier theorems deal with the topological properties of $\mathbb{M}$.
-
-If all value measures $V_i$ are countably-additive and nonatomic, then:
-
-* $\mathbb{M}$ is a compact set;
-
-* $\mathbb{M}$ is a convex set.
-
-This was already proved by Dvoretzky, Wald, and Wolfowitz.
-
-A cake partition $X$ into k pieces is called a consensus partition with weights $w_1, w_2, \ldots, w_k$ (also called exact division) if:
-$$
-\forall i \in \{1,\dots, n\}: \forall j \in \{1,\dots, k\}: V_i(X_j) = w_j
-$$
-
-That is, there is a consensus among all partners that the value of piece j is exactly $w_j$.
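As a concrete illustration of the matrix $M_X$, here is a hypothetical sketch (two partners with made-up piecewise-constant value densities on U = [0,1]; all numbers are illustrative, not from the theorem):

```python
import numpy as np

# Hypothetical example: U = [0, 1], two partners whose value measures have
# piecewise-constant densities integrating to 1, and the 2-piece partition
# X_1 = [0, 0.5), X_2 = [0.5, 1].
grid = np.linspace(0.0, 1.0, 100_001)
density = {
    0: np.where(grid < 0.5, 1.2, 0.8),   # partner 0 prefers the left half
    1: np.where(grid < 0.5, 0.6, 1.4),   # partner 1 prefers the right half
}
cut = 0.5
pieces = [(grid < cut), (grid >= cut)]

# M_X[i, j] = V_i(X_j), approximated by numeric integration of the density.
M = np.array([[np.trapz(density[i] * piece, grid) for piece in pieces]
              for i in range(2)])
print(M)                                              # rows: partners, columns: pieces
print("consensus partition:", np.allclose(M[0], M[1]))   # False for this partition
```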
-
-Suppose, from now on, that $w_1, w_2, \ldots, w_k$ are weights whose sum is 1:
-$$
-\sum_{j=1}^k w_j =1
-$$
-
-and the value measures are normalized such that each partner values the entire cake as exactly 1:
-$$
-\forall i \in \{1,\dots, n\}: V_i(U) = 1
-$$
-
-The convexity part of the DS theorem implies that a consensus partition with weights $w_1, w_2, \ldots, w_k$ always exists.
diff --git a/wiki/wikipedia/560.txt b/wiki/wikipedia/560.txt
deleted file mode 100644
index 4da1c1293f2bbc645f7408d5c1ceeda40f7fa636..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/560.txt
+++ /dev/null
@@ -1,63 +0,0 @@
-In mathematics, in the field of ordinary differential equations, the Kneser theorem, named after Adolf Kneser, provides criteria to decide whether a differential equation is oscillating or not.
-
-Consider an ordinary linear homogeneous differential equation of the form
-$$
-y'' + q(x)y = 0
-$$
-
-with
-$$
-q: [0,+\infty) \to \mathbb{R}
-$$
-
-continuous.
-
-We say this equation is oscillating if it has a solution y with infinitely many zeros, and non-oscillating otherwise.
-
-The theorem states that the equation is non-oscillating if
-$$
-\limsup_{x \to +\infty} x^2 q(x) < \tfrac{1}{4}
-$$
-
-and oscillating if
-$$
-\liminf_{x \to +\infty} x^2 q(x) > \tfrac{1}{4}.
-$$
-
-To illustrate the theorem, consider
-$$
-q(x) = \left(\frac{1}{4} - a\right) x^{-2} \quad\text{for}\quad x > 0
-$$
-
-where $a$ is real and non-zero. According to the theorem, solutions will be oscillating or not depending on whether $a$ is positive (non-oscillating) or negative (oscillating) because
-$$
-\limsup_{x \to +\infty} x^2 q(x) = \liminf_{x \to +\infty} x^2 q(x) = \frac{1}{4} - a
-$$
-
-To find the solutions for this choice of $q(x)$, and verify the theorem for this example, substitute the 'Ansatz'
-$$
-y(x) = x^n
-$$
-
-which gives
-$$
-n(n-1) + \frac{1}{4} - a = \left(n-\frac{1}{2}\right)^2 - a = 0
-$$
-
-This means that (for non-zero $a$) the general solution is
-$$
-y(x) = A x^{\frac{1}{2} + \sqrt{a}} + B x^{\frac{1}{2} - \sqrt{a}}
-$$
-
-where $A$ and $B$ are arbitrary constants.
-
-It is not hard to see that for positive $a$ the solutions do not oscillate while for negative $a = -\omega^2$ the identity
-$$
-x^{\frac{1}{2} \pm i \omega} = \sqrt{x}\ e^{\pm (i\omega) \ln{x}} = \sqrt{x}\ (\cos{(\omega \ln x)} \pm i \sin{(\omega \ln x)})
-$$
-
-shows that they do.
-
-The general result follows from this example by the Sturm–Picone comparison theorem.
-
-There are many extensions to this result.
diff --git a/wiki/wikipedia/561.txt b/wiki/wikipedia/561.txt
deleted file mode 100644
index 5fcdb74ac2ca4b616ad70c4cbf98f1a57e8a6164..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/561.txt
+++ /dev/null
@@ -1,13 +0,0 @@
-The Hasse–Minkowski theorem is a fundamental result in number theory which states that two quadratic forms over a number field are equivalent if and only if they are equivalent locally at all places, i.e. equivalent over every completion of the field (which may be real, complex, or p-adic). A related result is that a quadratic space over a number field is isotropic if and only if it is isotropic locally everywhere, or equivalently, that a quadratic form over a number field nontrivially represents zero if and only if this holds for all completions of the field. The theorem was proved in the case of the field of rational numbers by Hermann Minkowski and generalized to number fields by Helmut Hasse. The same statement holds even more generally for all global fields.
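The local-global statement can be made concrete on a small example. The following hypothetical sketch (the form x² + y² − 3z² and the search bounds are our illustrative choices) exhibits a 3-adic obstruction, so by the theorem the form has no nontrivial rational zero, which the brute-force search corroborates:

```python
from itertools import product

# Hypothetical check for the form x^2 + y^2 - 3 z^2: a 3-adic obstruction.
# Squares mod 3 are 0 or 1, so x^2 + y^2 = 3 z^2 forces 3 | x and 3 | y,
# and infinite descent then rules out any nontrivial rational zero.
print(sorted({x * x % 3 for x in range(3)}))         # [0, 1]

# Brute-force corroboration over a small box: no nontrivial integer zero.
solutions = [(x, y, z) for x, y, z in product(range(-50, 51), repeat=3)
             if x * x + y * y == 3 * z * z and (x, y, z) != (0, 0, 0)]
print("nontrivial solutions found:", solutions)      # expected: []
```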
-
-The importance of the Hasse–Minkowski theorem lies in the novel paradigm it presented for answering arithmetical questions: in order to determine whether an equation of a certain type has a solution in rational numbers, it is sufficient to test whether it has solutions over complete fields of real and p-adic numbers, where analytic considerations, such as Newton's method and its p-adic analogue, Hensel's lemma, apply. This is encapsulated in the idea of a local-global principle, which is one of the most fundamental techniques in arithmetic geometry.
-
-The Hasse–Minkowski theorem reduces the problem of classifying quadratic forms over a number field K up to equivalence to the set of analogous but much simpler questions over local fields. Basic invariants of a nonsingular quadratic form are its dimension, which is a positive integer, and its discriminant modulo the squares in K, which is an element of the multiplicative group K*/K*2. In addition, for every place v of K, there is an invariant coming from the completion Kv. Depending on the choice of v, this completion may be the real numbers R, the complex numbers C, or a p-adic number field, each of which has different kinds of invariants:
-
-* Case of R. By Sylvester's law of inertia, the signature (or, alternatively, the negative index of inertia) is a complete invariant.
-
-* Case of C. All nonsingular quadratic forms of the same dimension are equivalent.
-
-* Case of Qp and its algebraic extensions. Forms of the same dimension are classified up to equivalence by their Hasse invariant.
-
-These invariants must satisfy some compatibility conditions: a parity relation (the sign of the discriminant must match the negative index of inertia) and a product formula (a local–global relation). Conversely, for every set of invariants satisfying these relations, there is a quadratic form over K with these invariants.
diff --git a/wiki/wikipedia/562.txt b/wiki/wikipedia/562.txt
deleted file mode 100644
index 8e061c84c751271e63908fb35c35921307677a13..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/562.txt
+++ /dev/null
@@ -1,6 +0,0 @@
-In algebraic geometry, the Mumford vanishing theorem proved by Mumford in 1967 states that if L is a semi-ample invertible sheaf with Iitaka dimension at least 2 on a complex projective manifold, then
-$$
-H^i(X,L^{-1})=0\text{ for }i = 0,1.\ 
-$$
-
-The Mumford vanishing theorem is related to the Ramanujam vanishing theorem, and is generalized by the Kawamata–Viehweg vanishing theorem.
diff --git a/wiki/wikipedia/563.txt b/wiki/wikipedia/563.txt
deleted file mode 100644
index 7fe4d6206802005076d2616d3f0d17dec582f052..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/563.txt
+++ /dev/null
@@ -1,186 +0,0 @@
-In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value and tends to become closer to the expected value as more trials are performed.
-
-The LLN is important because it guarantees stable long-term results for the averages of some random events. Further study has given rise to two prominent forms of the LLN. One is called the "weak" law and the other the "strong" law, in reference to two different modes of convergence of the cumulative sample means to the expected value; in particular, as explained below, the strong form implies the weak.
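A quick simulation makes the convergence of sample means visible. A hypothetical sketch (fair coin flips; the sample sizes and seed are arbitrary choices):

```python
import random

# Hypothetical illustration of the LLN: sample means of fair coin flips
# (expected value 0.5) settle down as the number of trials grows.
random.seed(1)

for n in [10, 100, 10_000, 1_000_000]:
    mean = sum(random.random() < 0.5 for _ in range(n)) / n
    print(f"n = {n:8d}   sample mean = {mean:.4f}")
```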
-
-There are two different versions of the law of large numbers that are described below. They are called the strong law of large numbers and the weak law of large numbers.
-
-If the summands are independent but not identically distributed, then
-
-\overline{X}_n - \operatorname{E}\big[\overline{X}_n\big]\ \xrightarrow{\text{a.s.}}\ 0,
-
-provided that each Xk has a finite second moment and
-
-\sum_{k=1}^{\infty} \frac{1}{k^2} \operatorname{Var}[X_k] < \infty.
-
-This statement is known as Kolmogorov's strong law; see, e.g., Sen.
-
-The weak law states that for a specified large n, the average \overline{X}_n is likely to be near μ. Thus, it leaves open the possibility that |\overline{X}_n -\mu| > \varepsilon happens an infinite number of times, although at infrequent intervals. (Not necessarily |\overline{X}_n -\mu| \neq 0 for all n).
-
-The strong law shows that this almost surely will not occur. In particular, it implies that with probability 1, we have that for any ε > 0 the inequality |\overline{X}_n -\mu| < \varepsilon holds for all large enough n.
-
-The strong law does not hold in the following cases, but the weak law does.
-
-1. Let X be an exponentially distributed random variable with parameter 1. The random variable $\sin(X)e^X X^{-1}$ has no expected value according to Lebesgue integration, but using conditional convergence and interpreting the integral as a Dirichlet integral, which is an improper Riemann integral, we can say:
-$$
- E\left(\frac{\sin(X)e^X}{X}\right) =\ \int_{0}^{\infty}\frac{\sin(x)e^x}{x}e^{-x}dx = \frac{\pi}{2}
-$$
-
-2. Let X be a geometrically distributed random variable with parameter 0.5, so that $\Pr(X = x) = 2^{-x}$ for $x = 1, 2, \ldots$. The random variable $2^X(-1)^X X^{-1}$ does not have an expected value in the conventional sense because the infinite series is not absolutely convergent, but using conditional convergence, we can say:
-$$
- E\left(\frac{2^X(-1)^X}{X}\right) =\ \sum_{1}^{\infty}\frac{2^x(-1)^x}{x}2^{-x}=-\ln(2)
-$$
-
-3. If the cumulative distribution function of a random variable is
-$$
- 1-F(x)=\frac{e}{2x\ln(x)},x \ge e
-$$
-$$
- F(x)=\frac{e}{-2x\ln(-x)},x \le -e
-$$
-
-then it has no expected value, but the weak law is true.
-
-4. Let Xk be plus or minus $\sqrt{k/\log\log\log k}$ (starting at sufficiently large k so that the denominator is positive) with probability 1/2 for each. The variance of Xk is then $k/\log\log\log k.$ Kolmogorov's strong law does not apply because the partial sum in his criterion up to k=n is asymptotic to $\log n/\log\log\log n$ and this is unbounded. If we replace the random variables with Gaussian variables having the same variances, namely $k/\log\log\log k,$ then the average at any point will also be normally distributed. The width of the distribution of the average will tend toward zero (standard deviation asymptotic to $1/\sqrt{2\log\log\log n}$), but for a given ε, there is probability which does not go to zero with n, while the average sometime after the nth trial will come back up to ε. Since the width of the distribution of the average is not zero, it must have a positive lower bound p(ε), which means there is a probability of at least p(ε) that the average will attain ε after n trials. It will happen with probability p(ε)/2 before some m which depends on n. But even after m, there is still a probability of at least p(ε) that it will happen. (This seems to indicate that p(ε)=1 and the average will attain ε an infinite number of times.)
-
-Suppose f(x,θ) is some function defined for θ ∈ Θ, and continuous in θ.
Then for any fixed θ, the sequence {f(X1,θ), f(X2,θ), ...} will be a sequence of independent and identically distributed random variables, such that the sample mean of this sequence converges in probability to E[f(X,θ)]. This is the pointwise (in θ) convergence. - -The uniform law of large numbers states the conditions under which the convergence happens uniformly in θ. If - -# Θ is compact, - -# f(x,θ) is continuous at each θ ∈ Θ for almost all xs, and measurable function of x at each θ. - -# there exists a dominating function d(x) such that E[d(X)] < ∞, and - -#: $ \left\| f(x,\theta) \right\| \leq d(x) \quad\text{for all}\ \theta\in\Theta.$ - -Then E[f(X,θ)] is continuous in θ, and - - - -\sup_{\theta\in\Theta} \left\| \frac1n\sum_{i=1}^n f(X_i,\theta) - \operatorname{E}[f(X,\theta)] \right\| \xrightarrow{\mathrm{a.s.}} \ 0. - - - -This result is useful to derive consistency of a large class of estimators (see Extremum estimator). - -Borel's law of large numbers, named after Émile Borel, states that if an experiment is repeated a large number of times, independently under identical conditions, then the proportion of times that any specified event occurs approximately equals the probability of the event's occurrence on any particular trial; the larger the number of repetitions, the better the approximation tends to be. More precisely, if E denotes the event in question, p its probability of occurrence, and Nn(E) the number of times E occurs in the first n trials, then with probability one, -$$ - \frac{N_n(E)}{n}\to p\text{ as }n\to\infty. -$$ - -This theorem makes rigorous the intuitive notion of probability as the long-run relative frequency of an event's occurrence. It is a special case of any of several more general laws of large numbers in probability theory. - -Chebyshev's inequality. Let X be a random variable with finite expected value μ and finite non-zero variance σ2. Then for any real number k > 0, - - - -\Pr(|X-\mu|\geq k\sigma) \leq \frac{1}{k^2}. - - - -Given X1, X2, ... an infinite sequence of i.i.d. random variables with finite expected value E(X1) = E(X2) = ... = µ < ∞, we are interested in the convergence of the sample average -$$ -\overline{X}_n=\tfrac1n(X_1+\cdots+X_n). -$$ - -The weak law of large numbers states: - -{{NumBlk||Theorem: \begin{matrix}{}\\ - -\overline{X}_n\ \xrightarrow{P}\ \mu \qquad\textrm{when}\ n \to \infty. - -\\{}\end{matrix}|}} - -This proof uses the assumption of finite variance $ \operatorname{Var} (X_i)=\sigma^2 $ (for all $i$). The independence of the random variables implies no correlation between them, and we have that - - - -\operatorname{Var}(\overline{X}_n) = \operatorname{Var}(\tfrac1n(X_1+\cdots+X_n)) = \frac{1}{n^2} \operatorname{Var}(X_1+\cdots+X_n) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}. - - - -The common mean μ of the sequence is the mean of the sample average: - - - -E(\overline{X}_n) = \mu. - - - -Using Chebyshev's inequality on $\overline{X}_n $ results in - - - -\operatorname{P}( \left| \overline{X}_n-\mu \right| \geq \varepsilon) \leq \frac{\sigma^2}{n\varepsilon^2}. - - - -This may be used to obtain the following: - - - -\operatorname{P}( \left| \overline{X}_n-\mu \right| < \varepsilon) = 1 - \operatorname{P}( \left| \overline{X}_n-\mu \right| \geq \varepsilon) \geq 1 - \frac{\sigma^2}{n \varepsilon^2 }. - - - -As n approaches infinity, the expression approaches 1. 
And by definition of convergence in probability, we have obtained - -{{NumBlk|:|\begin{matrix}{}\\ - -\overline{X}_n\ \xrightarrow{P}\ \mu \qquad\textrm{when}\ n \to \infty. - -\\{}\end{matrix}|}} - -By Taylor's theorem for complex functions, the characteristic function of any random variable, X, with finite mean μ, can be written as -$$ -\varphi_X(t) = 1 + it\mu + o(t), \quad t \rightarrow 0. -$$ - -All X1, X2, ... have the same characteristic function, so we will simply denote this φX. - -Among the basic properties of characteristic functions there are - -\varphi_{\frac 1 n X}(t)= \varphi_X(\tfrac t n) \quad \text{and} \quad - -\varphi_{X+Y}(t)=\varphi_X(t) \varphi_Y(t) \quad if X and Y are independent. - -These rules can be used to calculate the characteristic function of $\scriptstyle\overline{X}_n$ in terms of φX: -$$ -\varphi_{\overline{X}_n}(t)= \left[\varphi_X\left({t \over n}\right)\right]^n = \left[1 + i\mu{t \over n} + o\left({t \over n}\right)\right]^n \rightarrow e^{it\mu}, \quad \text{as} \quad n \rightarrow \infty. -$$ - -The limit eitμ is the characteristic function of the constant random variable μ, and hence by the Lévy continuity theorem, $ \scriptstyle\overline{X}_n$ converges in distribution to μ: -$$ -\overline{X}_n \xrightarrow{\mathcal D} \mu \qquad\text{for}\qquad n \to \infty. -$$ - -μ is a constant, which implies that convergence in distribution to μ and convergence in probability to μ are equivalent (see Convergence of random variables.) Therefore, - -{{NumBlk|:|\begin{matrix}{}\\ - -\overline{X}_n\ \xrightarrow{P}\ \mu \qquad\textrm{when}\ n \to \infty. - -\\{}\end{matrix}|}} - -This shows that the sample mean converges in probability to the derivative of the characteristic function at the origin, as long as the latter exists. - -The law of large numbers provides an expectation of an unknown distribution from a realization of the sequence, but also any feature of the probability distribution. By applying Borel's law of large numbers, one could easily obtain the probability mass function. For each event in the objective probability mass function, one could approximate the probability of the event's occurrence with the proportion of times that any specified event occurs. The larger the number of repetitions, the better the approximation. As for the continuous case: $C=(a-h,a+h]$, for small positive h. Thus, for large n: - - \frac{N_n(C)}{n}\thickapprox - -p=P(X\in C)=\int_{a-h}^{a+h} f(x)dx - -\thickapprox - -2hf(a) - -With this method, one can cover the whole x-axis with a grid (with grid size 2h) and obtain a bar graph which is called a histogram. diff --git a/wiki/wikipedia/564.txt b/wiki/wikipedia/564.txt deleted file mode 100644 index 60698310112fdb0e38599d0c3a1bbadd7386ef33..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/564.txt +++ /dev/null @@ -1,17 +0,0 @@ -In graph theory, an edge dominating set for a graph G = (V, E) is a subset D ⊆ E such that every edge not in D is adjacent to at least one edge in D. An edge dominating set is also known as a line dominating set. Figures (a)–(d) are examples of edge dominating sets (thick red lines). - -A minimum edge dominating set is a smallest edge dominating set. Figures (a) and (b) are examples of minimum edge dominating sets (it can be checked that there is no edge dominating set of size 2 for this graph). - -An edge dominating set for G is a dominating set for its line graph L(G) and vice versa. - -Any maximal matching is always an edge dominating set. 
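Since a maximal matching can be found greedily in a single pass over the edges, this observation already yields the factor-2 approximation algorithm discussed below; a minimal sketch in Python, where the edge-list input format is an assumption for illustration:

```python
def greedy_maximal_matching(edges):
    """Greedily build a maximal matching; the result is an edge dominating
    set whose size is at most twice the minimum (see discussion below)."""
    matched = set()   # vertices already covered by a chosen edge
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Example: on the path a-b-c-d the minimum edge dominating set is {(b, c)},
# while the greedy matching below has size 2 <= 2 * optimum.
print(greedy_maximal_matching([("a", "b"), ("b", "c"), ("c", "d")]))
# -> [('a', 'b'), ('c', 'd')]
```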
Figures (b) and (d) are examples of maximal matchings. - -Furthermore, the size of a minimum edge dominating set equals the size of a minimum maximal matching. A minimum maximal matching is a minimum edge dominating set; Figure (b) is an example of a minimum maximal matching. A minimum edge dominating set is not necessarily a minimum maximal matching, as illustrated in Figure (a); however, given a minimum edge dominating set D, it is easy to find a minimum maximal matching with |D| edges. - -Determining whether there is an edge dominating set of a given size for a given graph is an NP-complete problem (and therefore finding a minimum edge dominating set is an NP-hard problem). Yannakakis and Gavril showed that the problem is NP-complete even in the case of a bipartite graph with maximum degree 3, and also in the case of a planar graph with maximum degree 3. - -There is a simple polynomial-time approximation algorithm with approximation factor 2: find any maximal matching. A maximal matching is an edge dominating set; furthermore, a maximal matching M is at worst 2 times as large as a smallest maximal matching, and a smallest maximal matching has the same size as the smallest edge dominating set. - -The edge-weighted version of the problem can also be approximated within factor 2, but the algorithm is considerably more complicated. - -Chlebík and Chlebíková showed that finding a better than (7/6)-approximation is NP-hard. Schmied and Viehmann proved that the problem is UGC-hard to approximate to within any constant better than 3/2. diff --git a/wiki/wikipedia/565.txt b/wiki/wikipedia/565.txt deleted file mode 100644 index af25c7a4e5ef21546c625cd3a412a0700c8622ee..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/565.txt +++ /dev/null @@ -1,71 +0,0 @@ -The Shortest Path Faster Algorithm (SPFA) is an improvement of the Bellman–Ford algorithm which computes single-source shortest paths in a weighted directed graph. The algorithm is believed to work well on random sparse graphs and is particularly suitable for graphs that contain negative-weight edges. However, the worst-case complexity of SPFA is the same as that of Bellman–Ford, so for graphs with nonnegative edge weights Dijkstra's algorithm is preferred. The SPFA algorithm was first published by Edward F. Moore in 1959, as a generalization of breadth-first search; SPFA is Moore's "Algorithm D." The name "Shortest Path Faster Algorithm (SPFA)" was given by Fanding Duan, a Chinese researcher who rediscovered the algorithm in 1994. - -Given a weighted directed graph $ G=(V,E) $ and a source vertex $ s $, the SPFA algorithm finds the shortest path from $ s $ to each vertex $ v $ in the graph. The length of the shortest path from $ s $ to $ v $ is stored in $ d(v) $ for each vertex $ v $. - -The basic idea of SPFA is the same as that of the Bellman–Ford algorithm, in that each vertex is used as a candidate to relax its adjacent vertices. The improvement over the latter is that instead of trying all vertices blindly, SPFA maintains a queue of candidate vertices and adds a vertex to the queue only if that vertex is relaxed. This process repeats until no more vertices can be relaxed. - -Below is the pseudo-code of the algorithm. Here $ Q $ is a first-in, first-out queue of candidate vertices, and $ w(u,v) $ is the edge weight of $ (u,v) $.
- -procedure Shortest-Path-Faster-Algorithm(G, s) - -1 for each vertex v ≠ s in V(G) - -2 d(v) := ∞ - -3 d(s) := 0 - -4 push s into Q - -5 while Q is not empty do - -6 u := poll Q - -7 for each edge (u, v) in E(G) do - -8 if d(u) + w(u, v) < d(v) then - -9 d(v) := d(u) + w(u, v) - -10 if v is not in Q then - -11 push v into Q - -The algorithm can also be applied to an undirected graph by replacing each undirected edge with two directed edges of opposite directions. - -We will prove that the algorithm never computes incorrect shortest path lengths. - -Lemma: Whenever the queue is checked for emptiness, any vertex currently capable of causing relaxation is in the queue. - -Proof: We want to show that if $dist[w] > dist[u]+w(u,w)$ for any two vertices $u$ and $w$ at the time the condition is checked, $u$ is in the queue. We do so by induction on the number of iterations of the loop that have already occurred. First we note that this certainly holds before the loop is entered: if $u \not= s$, then relaxation is not possible; relaxation is possible from $u=s$, and this vertex is added to the queue immediately before the while loop is entered. Now, consider what happens inside the loop. A vertex $u$ is popped, and is used to relax all its neighbors, if possible. Therefore, immediately after that iteration of the loop, $u$ is not capable of causing any more relaxations (and does not have to be in the queue anymore). However, the relaxation by $u$ might cause some other vertices to become capable of causing relaxation. If there exists some vertex $x$ such that $dist[x] > dist[w]+w(w,x)$ before the current loop iteration, then $w$ is already in the queue. If this condition becomes true during the current loop iteration, then either $dist[x]$ increased, which is impossible, or $dist[w]$ decreased, implying that $w$ was relaxed. But after $w$ is relaxed, it is added to the queue if it is not already present. - -Corollary: The algorithm terminates when and only when no further relaxations are possible. - -Proof: If no further relaxations are possible, the algorithm continues to remove vertices from the queue, but does not add any more into the queue, because vertices are added only upon successful relaxations. Therefore the queue becomes empty and the algorithm terminates. If any further relaxations are possible, the queue is not empty, and the algorithm continues to run. - -The algorithm fails to terminate if negative-weight cycles are reachable from the source, since in that case further relaxations always remain possible. In a graph with no cycles of negative weight, when no more relaxations are possible, the correct shortest paths have been computed. Therefore in graphs containing no cycles of negative weight, the algorithm will never terminate with incorrect shortest path lengths. - -The worst-case running time of the algorithm is $ O(|V|\cdot|E|) $, just like the standard Bellman–Ford algorithm. Experiments suggest that the average running time is $ O(|E|) $, and indeed this is true on random graphs, but it is possible to construct sparse graphs where SPFA runs in time $ \Omega(|V|^2) $ like the usual Bellman–Ford algorithm. - -The performance of the algorithm is strongly determined by the order in which candidate vertices are used to relax other vertices. In fact, if $ Q $ is a priority queue, then the algorithm closely resembles Dijkstra's algorithm.
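The pseudo-code above translates almost line for line into a runnable version; a minimal sketch in Python, where the dict-of-dicts graph representation is an assumption for illustration and every vertex is expected to appear as a key:

```python
from collections import deque

def spfa(graph, s):
    """Single-source shortest paths on a weighted digraph given as
    {u: {v: w(u, v), ...}, ...}. Assumes no negative-weight cycle is
    reachable from s; otherwise the loop does not terminate."""
    d = {v: float("inf") for v in graph}
    d[s] = 0
    q = deque([s])
    in_queue = {v: v == s for v in graph}
    while q:
        u = q.popleft()
        in_queue[u] = False
        for v, w in graph[u].items():
            if d[u] + w < d[v]:          # relaxation step (lines 8-9)
                d[v] = d[u] + w
                if not in_queue[v]:      # enqueue only if absent (lines 10-11)
                    q.append(v)
                    in_queue[v] = True
    return d

# Example with a negative edge:
# spfa({"s": {"a": 2, "b": 5}, "a": {"b": -1}, "b": {}}, "s")
# -> {'s': 0, 'a': 2, 'b': 1}
```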
However, since a priority queue is not used here, two techniques are sometimes employed to improve the quality of the queue, which in turn improves the average-case performance (but not the worst-case performance). Both techniques rearrange the order of elements in $ Q $ so that vertices closer to the source are processed first. Therefore, when implementing these techniques, $ Q $ is no longer a first-in, first-out queue, but rather a normal doubly linked list or double-ended queue. - -Small Label First (SLF) technique. In line 11, instead of always pushing vertex $ v $ to the end of the queue, we compare $ d(v) $ to $ d\big(\text{front}(Q)\big) $, and insert $ v $ to the front of the queue if $ d(v) $ is smaller. The pseudo-code for this technique is (after pushing $ v $ to the end of the queue in line 11): - -procedure Small-Label-First(G, Q) - -if d(back(Q)) < d(front(Q)) then - -u := pop back of Q - -push u into front of Q - -Large Label Last (LLL) technique. After line 11, we update the queue so that the first element is smaller than the average, and any element larger than the average is moved to the end of the queue. The pseudo-code is: - -procedure Large-Label-Last(G, Q) - -x := average of d(v) for all v in Q - -while d(front(Q)) > x - -u := pop front of Q - -push u to back of Q diff --git a/wiki/wikipedia/566.txt b/wiki/wikipedia/566.txt deleted file mode 100644 index d721c2831f4080c5f406e5efbdfdc03211a69fef..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/566.txt +++ /dev/null @@ -1,47 +0,0 @@ -"Love Me or Hate Me" is the sixth single from English grime artist Lady Sovereign and the fourth from her debut album Public Warning. The single was confirmed for release by her official website and was released in October 2006. The song was produced by Lukasz "Dr. Luke" Gottwald. The video features Lady Sovereign appearing in Tetris-style blocks, and acting out certain lyrics of the song. - -The song became the first song by a British rap artist to ever reach number one on MTV's TRL, as well as being heard in a TV spot for Verizon Wireless. In addition, it peaked at number 45 on the Billboard Hot 100 chart, making it Lady Sovereign's first single to chart in the US. In 2007, the song debuted at number 48 on the UK Singles Chart, later peaking at number 26, becoming her highest charting solo hit to date within the UK. Blender ranked it as the 32nd best song of 2006. A remix is available featuring Missy Elliott. - -The song has sold over 750,000 copies in the US. The song has been featured as the theme song for Season 1 of the Oxygen Channel's reality show The Bad Girls Club as well as their spinoff Bad Girls Road Trip. It can also be heard in the video game Need for Speed: Carbon, and in an episode of the ABC TV series Ugly Betty. It was also featured on the season four episode "The Metamorphosis" of the teen drama series The O.C. - -"Love Me or Hate Me" debuted on the Billboard Hot 100 the week of 28 October 2006 at number 81. Three weeks later, it peaked at number 45 the week of 18 November, staying on the chart for nine weeks. - -The track also reached number one during a 15-week run on the Billboard Club Play chart on 7 April 2007 on the strength of the Jason Nevins mixes. - -Directed by Brian Beltric (who previously directed videos for the Black Eyed Peas), the video features Lady Sovereign in London acting out certain parts of the song's lyrics. Intercut are scenes of Lady Sovereign appearing and disappearing in Tetris-style blocks. - -;UK CD single - -*1. 
"Love Me or Hate Me" (album version) – 3:29 - -*2. "Love Me or Hate Me" (Remix featuring Missy Elliott) – 3:40 - -*3. "Love Me or Hate Me" (Jason Nevins Remix) – 3:29 - -;UK 12" single - -*A1. "Love Me or Hate Me" (Original) – 3:29 - -*A2. "Love Me or Hate Me" (Remix featuring Missy Elliott) – 3:40 - -*B1. "Love Me or Hate Me" (Jason Nevins Remix) – 3:29 - -*B2. "Love Me or Hate Me" (Instrumental) – 3:30 - -;US CD single - -*1. "Love Me or Hate Me (F**k You!!!!)" (Clean) – 3:32 - -*2. "Love Me or Hate Me (F**k You!!!!)" (Main) – 3:32 - -*3. "Love Me or Hate Me (F**k You!!!!)" (Instrumental) – 3:33 - -*4. "Love Me or Hate Me (F**k You!!!!)" (A Cappella) – 3:29 - -;Germany CD single - -*1. "Love Me or Hate Me (Fuck You!!!)" (Clean) – 3:30 - -*2. "Love Me or Hate Me (Fuck You!!!)" (Remix featuring Missy Elliott) – 3:40 - -*3. "Love Me or Hate Me" (video) – 2:47 diff --git a/wiki/wikipedia/567.txt b/wiki/wikipedia/567.txt deleted file mode 100644 index a8c7e4948e31b912b4ef0333baf41a83c6f0232b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/567.txt +++ /dev/null @@ -1,34 +0,0 @@ -In Galois cohomology, local Tate duality (or simply local duality) is a duality for Galois modules for the absolute Galois group of a non-archimedean local field. It is named after John Tate who first proved it. It shows that the dual of such a Galois module is the Tate twist of usual linear dual. This new dual is called the (local) Tate dual. - -Local duality combined with Tate's local Euler characteristic formula provide a versatile set of tools for computing the Galois cohomology of local fields. - -Let K be a non-archimedean local field, let Ks denote a separable closure of K, and let GK = Gal(Ks/K) be the absolute Galois group of K. - -Denote by μ the Galois module of all roots of unity in Ks. Given a finite GK-module A of order prime to the characteristic of K, the Tate dual of A is defined as -$$ -A^\prime=\mathrm{Hom}(A,\mu) -$$ - -(i.e. it is the Tate twist of the usual dual A). Let Hi(K, A) denote the group cohomology of GK with coefficients in A. The theorem states that the pairing -$$ -H^i(K,A)\times H^{2-i}(K,A^\prime)\rightarrow H^2(K,\mu)=\mathbf{Q}/\mathbf{Z} -$$ - -given by the cup product sets up a duality between Hi(K, A) and H2-i(K, A) for i = 0, 1, 2. Since GK has cohomological dimension equal to two, the higher cohomology groups vanish. - -Let p be a prime number. Let Qp(1) denote the p-adic cyclotomic character of GK (i.e. the Tate module of μ). A p-adic representation of GK is a continuous representation -$$ -\rho:G_K\rightarrow\mathrm{GL}(V) -$$ - -where V is a finite-dimensional vector space over the p-adic numbers Qp and GL(V) denotes the group of invertible linear maps from V to itself. The Tate dual of V is defined as -$$ -V^\prime=\mathrm{Hom}(V,\mathbf{Q}_p(1)) -$$ - -(i.e. it is the Tate twist of the usual dual V = Hom(V, Qp)). In this case, Hi(K, V) denotes the continuous group cohomology of GK with coefficients in V. Local Tate duality applied to V says that the cup product induces a pairing -$$ -H^i(K,V)\times H^{2-i}(K,V^\prime)\rightarrow H^2(K,\mathbf{Q}_p(1))=\mathbf{Q}_p -$$ - -which is a duality between Hi(K, V) and H2-i(K, V ′) for i = 0, 1, 2. Again, the higher cohomology groups vanish. 
diff --git a/wiki/wikipedia/568.txt b/wiki/wikipedia/568.txt deleted file mode 100644 index 86f3b07330358cea7fa70f2ffe5375ea66a6b417..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/568.txt +++ /dev/null @@ -1,65 +0,0 @@ -In probability theory, the law (or formula) of total probability is a fundamental rule relating marginal probabilities to conditional probabilities. It expresses the total probability of an outcome which can be realized via several distinct events, hence the name. - -The law of total probability is a theorem that, in its discrete case, states if $\left\{{B_n : n = 1, 2, 3, \ldots}\right\}$ is a finite or countably infinite partition of a sample space (in other words, a set of pairwise disjoint events whose union is the entire sample space) and each event $B_n$ is measurable, then for any event $A$ of the same probability space: -$$ -P(A)=\sum_n P(A\cap B_n) -$$ - -or, alternatively, -$$ -P(A)=\sum_n P(A\mid B_n)P(B_n), -$$ - -where the summation is taken over those $n$ with $P(B_n)>0$. The marginal probability $P(A)$ is sometimes called the "average probability"; "overall probability" is sometimes used in less formal writings. - -The law of total probability can also be stated for conditional probabilities. -$$ -P(A \mid C) = \sum_n P(A \mid C \cap B_n) P(B_n \mid C) -$$ - -Taking the $B_n$ as above, and assuming $C$ is an event independent of any of the $B_n$: -$$ -P(A \mid C) = \sum_n P(A \mid C \cap B_n) P(B_n) -$$ - -The above mathematical statement might be interpreted as follows: given an event $A$, with known conditional probabilities given any of the $B_n$ events, each with a known probability itself, what is the total probability that $A$ will happen? The answer to this question is given by $P(A)$. - -The law of total probability extends to the case of conditioning on events generated by continuous random variables. Let $(\Omega, \mathcal{F}, P) $ be a probability space. Suppose $ X $ is a random variable with distribution function $F_X$, and $A$ an event on $(\Omega, \mathcal{F}, P) $. Then the law of total probability states -$$ -P(A) = \int_{-\infty}^\infty P(A \mid X = x) d F_X(x). -$$ - -If $X$ admits a density function $f_X$, then the result is -$$ -P(A) = \int_{-\infty}^\infty P(A \mid X = x) f_X(x) dx. -$$ - -Moreover, for the specific case where $A = \{Y \in B \}$, where $B$ is a Borel set, this yields -$$ -P(Y \in B) = \int_{-\infty}^\infty P(Y \in B \mid X = x) f_X(x) dx. -$$ - -Suppose that two factories supply light bulbs to the market. Factory X's bulbs work for over 5000 hours in 99% of cases, whereas factory Y's bulbs work for over 5000 hours in 95% of cases. It is known that factory X supplies 60% of the total bulbs available and Y supplies 40% of the total bulbs available. What is the chance that a purchased bulb will work for longer than 5000 hours? - -Applying the law of total probability, we have: - - \begin{align} - -P(A) & = P(A\mid B_X) \cdot P(B_X) + P(A\mid B_Y) \cdot P(B_Y) \\[4pt] - -& = {99 \over 100} \cdot {6 \over 10} + {95 \over 100} \cdot {4 \over 10} = {{594 + 380} \over 1000} = {974 \over 1000} - -\end{align} - - where - -* $P(B_X)={6 \over 10}$ is the probability that the purchased bulb was manufactured by factory X; - -* $P(B_Y)={4 \over 10}$ is the probability that the purchased bulb was manufactured by factory Y; - -* $P(A\mid B_X)={99 \over 100}$ is the probability that a bulb manufactured by X will work for over 5000 hours; - -* $P(A\mid B_Y)={95 \over 100}$ is the probability that a bulb manufactured by Y will work for over 5000 hours. - -Thus each purchased light bulb has a 97.4% chance of working for more than 5000 hours.
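The arithmetic of the example is easy to check mechanically; a minimal sketch in Python, where the variable names are illustrative:

```python
# Law of total probability for the light-bulb example:
# P(A) = P(A|B_X) P(B_X) + P(A|B_Y) P(B_Y)
p_factory = {"X": 0.6, "Y": 0.4}          # P(B_X), P(B_Y)
p_work_given = {"X": 0.99, "Y": 0.95}     # P(A | B_X), P(A | B_Y)

p_work = sum(p_work_given[f] * p_factory[f] for f in p_factory)
print(round(p_work, 3))                   # 0.974, the 97.4% computed above
```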
- -The term law of total probability is sometimes taken to mean the law of alternatives, which is a special case of the law of total probability applying to discrete random variables. One author uses the terminology of the "Rule of Average Conditional Probabilities", while another refers to it as the "continuous law of alternatives" in the continuous case. This result is given by Grimmett and Welsh as the partition theorem, a name that they also give to the related law of total expectation. diff --git a/wiki/wikipedia/569.txt b/wiki/wikipedia/569.txt deleted file mode 100644 index c8cdb7b885d71fc856cee1efe9a63ca2ef357a2c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/569.txt +++ /dev/null @@ -1,3 +0,0 @@ -Tulip is an information visualization framework dedicated to the analysis and visualization of relational data. Tulip aims to provide the developer with a complete library, supporting the design of interactive information visualization applications for relational data that can be tailored to the problems being addressed. - -Written in C++, the framework enables the development of algorithms, visual encodings, interaction techniques, data models, and domain-specific visualizations. Tulip allows the reuse of components; this makes the framework efficient for research prototyping as well as the development of end-user applications. diff --git a/wiki/wikipedia/57.txt b/wiki/wikipedia/57.txt deleted file mode 100644 index 33097f00ae5980cc716f72b145172015287ff945..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/57.txt +++ /dev/null @@ -1 +0,0 @@ -In abstract algebra, the Cartan-Brauer-Hua theorem (named after Richard Brauer, Élie Cartan, and Hua Luogeng) is a theorem pertaining to division rings. It says that given two division rings K ⊆ D such that xKx−1 is contained in K for every x not equal to 0 in D, either K is contained in the center of D, or K = D. In other words, if the unit group of K is a normal subgroup of the unit group of D, then either K = D or K is central. diff --git a/wiki/wikipedia/570.txt b/wiki/wikipedia/570.txt deleted file mode 100644 index 2065ffeb16fd0a1de1059f4f36aa1ba3a63c2137..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/570.txt +++ /dev/null @@ -1,73 +0,0 @@ -Sudoku codes are non-linear forward error correcting codes following the rules of sudoku puzzles, designed for an erasure channel. Based on this model, the transmitter sends a sequence of all symbols of a solved sudoku. The receiver either receives a symbol correctly or an erasure symbol to indicate that the symbol was not received. The decoder gets a matrix with missing entries and uses the constraints of sudoku puzzles to reconstruct a limited amount of erased symbols. - -Sudoku codes are not suitable for practical usage but are a subject of research. Questions such as the achievable rate and error performance are still open for general dimensions. - -In a sudoku one can find missing information by using different techniques to reproduce the full puzzle. This method can be seen as decoding a sudoku-coded message that is sent over an erasure channel where some symbols got erased. By using the sudoku rules the decoder can recover the missing information. A sudoku can be modeled as a probabilistic graphical model, and thus methods from decoding low-density parity-check codes, like belief propagation, can be used.
- -In the erasure channel model a symbol is either transmitted correctly, with probability $1-p_e$, or erased, with probability $p_e$. The channel introduces no errors, i.e. no channel input is changed to another symbol. As an example, consider the transmission of a $ 3 \times 3$ sudoku code in which 5 of the 9 symbols are erased by the channel. The decoder is still able to reconstruct the message, i.e. the whole puzzle. - -Note that the symbols sent over the channel are not binary. For a binary channel the symbols (e.g. integers $\{ 1,\ldots ,9 \}$) have to be mapped onto base 2. The binary erasure channel model, however, is not applicable because it erases only individual bits with some probability, not whole sudoku symbols. If the symbols of the sudoku are sent in packets, the channel can be described as a packet erasure channel model. - -A sudoku is an $N \times N$ number-placement puzzle. It is filled in such a way that in each row, column, and sub-grid $N$ distinct symbols occur exactly once. The typical alphabet is the set of integers $\{ 1,\ldots ,N \}$. The sub-grid structure limits the size of sudokus to $N=n^2$ with $n \in \mathbb{N}$. Every solved sudoku is a Latin square, meaning every symbol occurs exactly once in each row and column (and, additionally, exactly once in each sub-grid). At the starting point (in this case after the erasure channel) the puzzle is only partially filled but has a unique solution. - -Other varieties of sudoku are also conceivable as channel codes. Diagonal regions instead of square sub-grids can be used for performance investigations. The diagonal sudoku has the advantage that its size can be chosen more freely: due to the sub-grid structure, normal sudokus can only be of size $n^2$, whereas diagonal sudokus have valid solutions for all odd $N$. - -There are several possible decoding methods for sudoku codes. Some algorithms are very specific developments for sudoku codes; several methods are described in sudoku solving algorithms. Another efficient method is the dancing links technique. - -Decoding methods that are also used for low-density parity-check codes, such as belief propagation, are of special interest. Performance analysis of these methods on sudoku codes can help to better understand decoding problems for low-density parity-check codes. Belief propagation was originally designed for low-density parity-check codes; owing to its generality it works not only with the classical $ 9 \times 9$ sudoku but with a variety of its variants. With slight modifications, belief propagation can also be used to generate (encode) a codeword grid as follows: - -* Start with an empty grid - -* Do the following for all entries sequentially - -* Use belief propagation to determine all valid symbols for the entry - -* If the cardinality of valid symbols is k>1 then convert the source randomness into a k-ary symbol and use it in the cell - -For a $ 4 \times 4$ sudoku the first entry can be filled with a source of cardinality 4; in this example this is a 1. For the rest of this row, column, and $ 2 \times 2$ sub-grid this number is excluded from the possibilities in the belief propagation decoder. For the second cell only the numbers 2, 3, 4 are valid. The source has to be transformed into a uniform distribution over these three possibilities and mapped to the valid numbers, and so on, until the grid is filled. - -The calculation of the rate of sudoku codes is not trivial.
An example rate calculation for a $4 \times 4$ sudoku runs as follows. Filling the grid line by line from the top left corner, only the first entry carries the maximum information of $\log_2 4=2$ bits. Each subsequent entry cannot be any of the numbers used before, so the information reduces to $\log_2 3$, $\log_2 2$ and $0$ for the following entries, as they must be among the remaining numbers. In the second row the information is additionally reduced by the area rule: cell $5$ in row-scan order can only be a $3$ or $4$, as the numbers $1$ and $2$ are already used in its sub-grid. The last row contains no information at all. Adding all information up, one gets $\log_2(4\cdot 3\cdot 2^5) \approx 8.58$ bits. The rate in this example is - - \frac{\log_2(4\cdot 3\cdot 2^5)}{16\cdot \log_2 4} \approx 0.27. - -The exact number of possible Sudoku grids according to Mathematics of Sudoku is $6,670,903,752,021,072,936,960$. With the total information of - - \begin{align} - -I_{9} &= \log_{9}(6.67\cdot 10^{21}) \approx 22.87 \\ - -I_{2} &= \log_{2}(6.67\cdot 10^{21}) \approx 72.50\ \text{bits} - -\end{align} - -the average rate of a standard Sudoku is - - R = I_{9} / 9^2 \approx 0.28. - -The average number of possible entries for a cell is $ \sqrt[81]{6.67\cdot 10^{21}} \approx 1.86$, or $ \log_{2}{1.86} \approx 0.90\ \text{bits}$ of information per Sudoku cell. Note that the rate may vary between codewords. - -The minimum number of given entries that render a unique solution was proven to be 17. In the worst case, as few as four missing entries can lead to ambiguous solutions. For an erasure channel it is very unlikely that 17 successful transmissions are enough to reproduce the puzzle, and there are only about 50,000 known puzzles with 17 given entries. - -Density evolution is a capacity analysis algorithm originally developed for belief-propagation decoding of low-density parity-check codes. Density evolution can also be applied to Sudoku-type constraints. One important simplification used in density evolution on LDPC codes is that it suffices to analyze only the all-one codeword. Under the sudoku constraints, however, this is not a valid codeword. Since the weight-distance equivalence of linear codes does not hold for non-linear codes, it would be necessary to compute density evolution recursions for every possible sudoku puzzle to get a precise performance analysis. - -A proposed simplification is to analyze the probability distribution of the cardinalities of messages instead of the probability distribution of the messages themselves. Density evolution is calculated on the entry nodes and the constraint nodes of the Tanner graph representation. On the entry nodes one analyzes the cardinalities of the constraints. If, for example, the constraints have the cardinalities $(1,1)$ then the entry can only be one symbol. If the constraints have cardinalities $(2,2)$ then both constraints allow two different symbols. Each constraint is guaranteed to contain the correct symbol; let us assume the correct symbol is $1$. The other symbol can be equal or different for the two constraints. If the symbols are different, then the correct symbol is determined. If the second symbol is the same in both, say $2$, the output symbols are of cardinality $2$, i.e. the symbols $\left\{1,2\right\}$. Depending on the alphabet size ($q$), the probability of a unique output for the input cardinalities $(2,2)$ is - - \begin{align} - -p_1^{(2,2)}=1 - \frac{1}{q-1} - -\end{align} - -and for an output of cardinality 2 - - \begin{align} - - p_2^{(2,2)}= \frac{1}{q-1}.
\end{align} - -For a standard $9 \times 9$ Sudoku this results in a probability of $7/8$ of a unique output. An analogous calculation is done for all cardinality combinations, and in the end the distributions of output cardinalities are summed up from the results. Note that the order of the input cardinalities is interchangeable, so it suffices to consider non-decreasing constraint combinations. - -For constraint nodes the procedure is somewhat similar and is described in the following example, based on a $4 \times 4$ standard Sudoku. Inputs to the constraint nodes are the possible symbols of the connected entry nodes. Cardinality 1 means that the symbol of the source node is already determined. Again a non-decreasing analysis is sufficient. Let us assume the true output value is 4 and the inputs have cardinalities $(1,1,2)$ with the true symbols 1, 2, and 3. The messages with cardinality 1 are $\left\{1\right\}$ and $\left\{2\right\}$. The message of cardinality 2 might be $\left\{1,3\right\}$, $\left\{2,3\right\}$ or $\left\{3,4\right\}$, as the true symbol 3 must be contained. In two of the three cases the output is the correct symbol 4 with cardinality 1: $\left\{1\right\}$, $\left\{2\right\}$, $\left\{1,3\right\}$ and $\left\{1\right\}$, $\left\{2\right\}$, $\left\{2,3\right\}$. In one of the three cases the output cardinality is 2: $\left\{1\right\}$, $\left\{2\right\}$, $\left\{3,4\right\}$. The output symbols in this case are $\left\{3,4\right\}$. The final output cardinality distribution can be expressed by summing over all possible input combinations. For a $4 \times 4$ standard Sudoku these are 64 combinations that can be grouped into 20 non-decreasing ones. - -If the cardinality converges to 1, the decoding is error-free. To find the threshold, the erasure probability must be increased until the decoding error remains positive for any number of iterations. With the method of Sayir, density evolution recursions can be used to calculate thresholds for sudoku codes up to an alphabet size of $q=8$. diff --git a/wiki/wikipedia/571.txt b/wiki/wikipedia/571.txt deleted file mode 100644 index c48a06e14c397fc26989aa9a7571805a0993e720..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/571.txt +++ /dev/null @@ -1,3 +0,0 @@ -In number theory, the Chinese hypothesis is a disproven conjecture stating that an integer n is prime if and only if it satisfies the condition that $2^n-2$ is divisible by n, in other words, that an integer n is prime if and only if $2^n \equiv 2 \bmod{n}$. It is true that if n is prime, then $2^n \equiv 2 \bmod{n}$ (this is a special case of Fermat's little theorem). However, the converse (if $2^n \equiv 2 \bmod{n}$ then n is prime) is false, and therefore the hypothesis as a whole is false. The smallest counterexample is n = 341 = 11×31. Composite numbers n for which $2^n-2$ is divisible by n are called Poulet numbers. They are a special class of Fermat pseudoprimes. - -Once, and sometimes still, mistakenly thought to be of ancient Chinese origin, the Chinese hypothesis actually originates in the mid-19th century from the work of Qing dynasty mathematician Li Shanlan (1811–1882). He was later made aware that his statement was incorrect and removed it from his subsequent work, but this was not enough to prevent the false proposition from appearing elsewhere under his name; a later mistranslation in the 1898 work of Jeans dated the conjecture to Confucian times and gave birth to the ancient origin myth.
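The counterexample is easy to verify computationally; a minimal sketch in Python:

```python
# Check the smallest counterexample to the Chinese hypothesis: n = 341 = 11 * 31
# is composite, yet 2^341 ≡ 2 (mod 341), so the converse of Fermat's little
# theorem fails and 341 is a Poulet number (base-2 Fermat pseudoprime).
n = 341
assert n == 11 * 31              # composite
assert pow(2, n, n) == 2         # yet it satisfies 2^n ≡ 2 (mod n)
print(f"{n} is composite yet divides 2^{n} - 2")
```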
diff --git a/wiki/wikipedia/572.txt b/wiki/wikipedia/572.txt deleted file mode 100644 index 8371ce67e300ad399e9ebfb30e262fcb6a88b948..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/572.txt +++ /dev/null @@ -1,5 +0,0 @@ -In algebraic number theory, a supersingular prime for a given elliptic curve is a prime number with a certain relationship to that curve. If the curve E is defined over the rational numbers, then a prime p is supersingular for E if the reduction of E modulo p is a supersingular elliptic curve over the residue field Fp. - -Noam Elkies showed that every elliptic curve over the rational numbers has infinitely many supersingular primes. However, the set of supersingular primes has asymptotic density zero (if E does not have complex multiplication). Lang and Trotter conjectured that the number of supersingular primes less than a bound X is within a constant multiple of $\frac{\sqrt{X}}{\ln X}$, using heuristics involving the distribution of eigenvalues of the Frobenius endomorphism. As of 2019, this conjecture is open. - -More generally, if K is any global field (i.e., a finite extension either of Q or of Fp(t)) and A is an abelian variety defined over K, then a supersingular prime $\mathfrak{p}$ for A is a finite place of K such that the reduction of A modulo $\mathfrak{p}$ is a supersingular abelian variety. diff --git a/wiki/wikipedia/573.txt b/wiki/wikipedia/573.txt deleted file mode 100644 index ec05f384587c9bd799c48fbf259c89adc99e8def..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/573.txt +++ /dev/null @@ -1,3 +0,0 @@ -In the mathematical field of graph theory, an Archimedean graph is a graph that forms the skeleton of one of the Archimedean solids. There are 13 Archimedean graphs, and all of them are regular, polyhedral (and therefore by necessity also 3-vertex-connected planar graphs), and also Hamiltonian graphs. - -Along with these 13, the infinite families of prism graphs and antiprism graphs can also be considered Archimedean graphs. diff --git a/wiki/wikipedia/574.txt b/wiki/wikipedia/574.txt deleted file mode 100644 index 85fe55d1bccbe8d74cf0adb5d2752852b7198cb5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/574.txt +++ /dev/null @@ -1,158 +0,0 @@ -In mathematics, the Eckmann–Hilton argument (or Eckmann–Hilton principle or Eckmann–Hilton theorem) is an argument about two unital magma structures on a set where one is a homomorphism for the other. Given this, the structures can be shown to coincide, and the resulting magma is demonstrated to be a commutative monoid. This can then be used to prove the commutativity of the higher homotopy groups. The principle is named after Beno Eckmann and Peter Hilton, who used it in a 1962 paper. - -Let $X$ be a set equipped with two binary operations, which we will write $\circ$ and $\otimes$, and suppose: - -# $\circ$ and $\otimes$ are both unital, meaning that there are elements $1_\circ $ and $1_\otimes$ of $X$ such that $1_\circ \circ a= a =a \circ 1_\circ$ and $1_\otimes \otimes a= a =a \otimes 1_\otimes$, for all $a\in X$.
    - -# $(a \otimes b) \circ (c \otimes d) = (a \circ c) \otimes (b \circ d)$ for all $a,b,c,d \in X $. - -Then $\circ$ and $\otimes$ are the same and in fact commutative and associative. - -The operations $\otimes$ and $\circ$ are often referred to as monoid structures or multiplications, but this suggests they are assumed to be associative, a property that is not required for the proof. In fact, associativity follows. Likewise, we do not have to require that the two operations have the same neutral element; this is a consequence. - -First, observe that the units of the two operations coincide: - - 1_\circ = 1_\circ \circ 1_\circ - -= (1_{\otimes} \otimes 1_\circ) \circ (1_\circ \otimes 1_\otimes) - -= (1_\otimes \circ 1_\circ) \otimes (1_\circ \circ 1_\otimes) - -= 1_\otimes \otimes 1_\otimes - -= 1_\otimes. - -Now, let $a,b \in X$. - -Then a \circ b = (1 \otimes a) \circ (b \otimes 1) - -= (1 \circ b) \otimes (a \circ 1) - -= b \otimes a - -= (b \circ 1) \otimes (1 \circ a) - -= (b \otimes 1) \circ (1 \otimes a) - -= b \circ a. This establishes that the two operations coincide and are commutative. - -For associativity, $(a \otimes b) \otimes c = (a \otimes b) \otimes (1 \otimes c) = (a \otimes 1) \otimes (b \otimes c) = a \otimes (b \otimes c)$. - -The above proof also has a "two-dimensional" presentation that better illustrates the application to higher homotopy groups. - -For this version of the proof, we write the two operations as vertical and horizontal juxtaposition, i.e., $\begin{matrix}a\\[-60pt]b\end{matrix}$ and $a \ b$. The interchange property can then be expressed as follows: - -For all $a,b,c,d\in X$, $\begin{matrix}(a \ b)\\[-60pt](c \ d)\end{matrix} = \begin{pmatrix}a\\[-60pt]c\end{pmatrix} \begin{pmatrix}b\\[-60pt]d\end{pmatrix}$, so we can write - - - -\begin{matrix} - -a \ b \\[-60pt] - -c \ d - -\end{matrix} - - without ambiguity. - -Let $\bullet$ and $\circ$ be the units for vertical and horizontal composition respectively. Then \bullet = - -\begin{matrix}\bullet \\[-60pt] - -\bullet - -\end{matrix} - -= - -\begin{matrix} - -\bullet \ \ \circ \\[-60pt] - -\circ \ \ \bullet - -\end{matrix} - -= \circ \ \circ = \circ - -, so both units are equal. - -Now, for all $a,b\in X$, a \ b = - -\begin{matrix} - -a \ \ \bullet\\[-60pt] - -\bullet \ \ b - -\end{matrix} - -= \begin{matrix}a \\[-60pt] b\end{matrix} - -= - -\begin{matrix} - -\bullet \ \ a \\[-60pt] - -b \ \ \bullet - -\end{matrix} - -= b \ a = - -\begin{matrix} - -b \ \ \bullet \\[-60pt] - -\bullet \ \ a - -\end{matrix} - -= \begin{matrix}b \\[-60pt] a\end{matrix} - -, so horizontal composition is the same as vertical composition and both operations are commutative. - -Finally, for all $a,b,c\in X$, - -a \ (b \ c) = a\ \begin{pmatrix}b\\[-60pt]c\end{pmatrix} = - -\begin{matrix} - -a \ \ b\\[-60pt] - -\bullet \ \ c - -\end{matrix} - -= - -\begin{matrix} - -(a \ b) \\[-60pt] - -c - -\end{matrix} - -= (a \ b) \ c - -, so composition is associative. - -If the operations are associative, each one defines the structure of a monoid on $X$, and the conditions above are equivalent to the more abstract condition that $\otimes$ is a monoid homomorphism $(X,\circ)\times(X,\circ)\to(X,\circ)$ (or vice versa). An even more abstract way of stating the theorem is: If $X$ is a monoid object in the category of monoids, then $X$ is in fact a commutative monoid. - -It is important that a similar argument does NOT give such a triviality result in the case of monoid objects in the categories of small categories or of groupoids. 
Instead, the notion of a group object in the category of groupoids turns out to be equivalent to the notion of a crossed module. This leads to the idea of using multiple groupoid objects in homotopy theory. - -More generally, the Eckmann–Hilton argument is a special case of the use of the interchange law in the theory of (strict) double and multiple categories. A (strict) double category is a set, or class, equipped with two category structures, each of which is a morphism for the other structure. If the compositions in the two category structures are written $\circ, \otimes$ then the interchange law reads -$$ - (a \circ b) \otimes (c \circ d) = (a \otimes c) \circ (b \otimes d) -$$ - -whenever both sides are defined. For an example of its use, and some discussion, see the paper of Higgins referenced below. The interchange law implies that a double category contains a family of abelian monoids. - -The history in relation to homotopy groups is interesting. The workers in topology of the early 20th century were aware that the nonabelian fundamental group was of use in geometry and analysis; that abelian homology groups could be defined in all dimensions; and that for a connected space, the first homology group was the fundamental group made abelian. So there was a desire to generalise the nonabelian fundamental group to all dimensions. - -In 1932, Eduard Čech submitted a paper on higher homotopy groups to the International Congress of Mathematicians at Zürich. However, Pavel Alexandroff and Heinz Hopf quickly proved these groups were abelian for $n > 1$, and on these grounds persuaded Čech to withdraw his paper, so that only a small paragraph appeared in the Proceedings. It is said that Witold Hurewicz attended this conference, and his first work on higher homotopy groups appeared in 1935. Thus the dreams of the early topologists have long been regarded as a mirage. - -Cubical higher homotopy groupoids are constructed for filtered spaces in the book cited below, which develops basic algebraic topology, including higher analogues to the Seifert–Van Kampen theorem, without using singular homology or simplicial approximation. diff --git a/wiki/wikipedia/575.txt b/wiki/wikipedia/575.txt deleted file mode 100644 index 31d19247d54b0bc41d81be421c8e4442352b9349..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/575.txt +++ /dev/null @@ -1,346 +0,0 @@ -In mathematics, a projective plane is a geometric structure that extends the concept of a plane. In the ordinary Euclidean plane, two lines typically intersect in a single point, but there are some pairs of lines (namely, parallel lines) that do not intersect. A projective plane can be thought of as an ordinary plane equipped with additional "points at infinity" where parallel lines intersect. Thus any two distinct lines in a projective plane intersect in one and only one point. - -Renaissance artists, in developing the techniques of drawing in perspective, laid the groundwork for this mathematical topic. The archetypical example is the real projective plane, also known as the extended Euclidean plane. This example, in slightly different guises, is important in algebraic geometry, topology and projective geometry, where it may be denoted variously by PG(2, R), RP2, or P2(R), among other notations. There are many other projective planes, both infinite, such as the complex projective plane, and finite, such as the Fano plane.
- -A projective plane is a 2-dimensional projective space, but not all projective planes can be embedded in 3-dimensional projective spaces. Such embeddability is a consequence of a property known as Desargues' theorem, not shared by all projective planes. - -A projective plane consists of a set of lines, a set of points, and a relation between points and lines called incidence, having the following properties: - -
    - -#Given any two distinct points, there is exactly one line incident with both of them. - -#Given any two distinct lines, there is exactly one point incident with both of them. - -#There are four points such that no line is incident with more than two of them. - -
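The three axioms just listed can be checked mechanically on a small example; a minimal sketch in Python, using the seven-point Fano plane that appears again later in the article (the labelling 0-6 of its points is an assumption for illustration):

```python
from itertools import combinations

# One standard labelling of the Fano plane: 7 points and 7 three-point lines.
lines = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
         {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]
points = range(7)

# Axiom 1: any two distinct points are incident with exactly one common line.
assert all(sum(p in L and q in L for L in lines) == 1
           for p, q in combinations(points, 2))
# Axiom 2: any two distinct lines are incident with exactly one common point.
assert all(len(L & M) == 1 for L, M in combinations(lines, 2))
# Axiom 3: there exist four points with no line incident with more than two.
assert any(all(len(set(t) & L) <= 2 for L in lines)
           for t in combinations(points, 4))
print("All three projective-plane axioms hold for the Fano plane.")
```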
    - -The second condition means that there are no parallel lines. The last condition excludes the so-called degenerate cases (see below). The term "incidence" is used to emphasize the symmetric nature of the relationship between points and lines. Thus the expression "point P is incident with line ℓ " is used instead of either "P is on ℓ " or "ℓ passes through P ". - -To turn the ordinary Euclidean plane into a projective plane proceed as follows: - -# To each parallel class of lines (a maximum set of mutually parallel lines) associate a single new point. That point is to be considered incident with each line in its class. The new points added are distinct from each other. These new points are called points at infinity. - -# Add a new line, which is considered incident with all the points at infinity (and no other points). This line is called the line at infinity. - -The extended structure is a projective plane and is called the extended Euclidean plane or the real projective plane. The process outlined above, used to obtain it, is called "projective completion" or projectivization. This plane can also be constructed by starting from R3 viewed as a vector space, see below. - -The points of the Moulton plane are the points of the Euclidean plane, with coordinates in the usual way. To create the Moulton plane from the Euclidean plane some of the lines are redefined. That is, some of their point sets will be changed, but other lines will remain unchanged. Redefine all the lines with negative slopes so that they look like "bent" lines, meaning that these lines keep their points with negative x-coordinates, but the rest of their points are replaced with the points of the line with the same y-intercept but twice the slope wherever their x-coordinate is positive. - -The Moulton plane has parallel classes of lines and is an affine plane. It can be projectivized, as in the previous example, to obtain the projective Moulton plane. Desargues' theorem is not a valid theorem in either the Moulton plane or the projective Moulton plane. - -This example has just thirteen points and thirteen lines. We label the points P1,...,P13 and the lines m1,...,m13. The incidence relation (which points are on which lines) can be given by the following incidence matrix. The rows are labelled by the points and the columns are labelled by the lines. A 1 in row i and column j means that the point Pi is on the line mj, while a 0 (which we represent here by a blank cell for ease of reading) means that they are not incident. The matrix is in Paige-Wexler normal form. - -To verify the conditions that make this a projective plane, observe that every two rows have exactly one common column in which 1's appear (every pair of distinct points are on exactly one common line) and that every two columns have exactly one common row in which 1's appear (every pair of distinct lines meet at exactly one point). Among many possibilities, the points P1,P4,P5,and P8, for example, will satisfy the third condition. This example is known as the projective plane of order three. - -Though the line at infinity of the extended real plane may appear to have a different nature than the other lines of that projective plane, this is not the case. Another construction of the same projective plane shows that no line can be distinguished (on geometrical grounds) from any other. 
In this construction, each "point" of the real projective plane is the one-dimensional subspace (a geometric line) through the origin in a 3-dimensional vector space, and a "line" in the projective plane arises from a (geometric) plane through the origin in the 3-space. This idea can be generalized and made more precise as follows. - -Let K be any division ring (skewfield). Let K3 denote the set of all triples x = (x0, x1, x2) of elements of K (a Cartesian product viewed as a vector space). For any nonzero x in K3, the minimal subspace of K3 containing x (which may be visualized as all the vectors in a line through the origin) is the subset -$$ -\{ k x : k \in K \} -$$ - -of K3. Similarly, let x and y be linearly independent elements of K3, meaning that 1=kx + my = 0 implies that 1=k = m = 0. The minimal subspace of K3 containing x and y (which may be visualized as all the vectors in a plane through the origin) is the subset -$$ -\{k x + m y : k, m \in K\} -$$ - -of K3. This 2-dimensional subspace contains various 1-dimensional subspaces through the origin that may be obtained by fixing k and m and taking the multiples of the resulting vector. Different choices of k and m that are in the same ratio will give the same line. - -The projective plane over K, denoted PG(2,K) or KP2, has a set of points consisting of all the 1-dimensional subspaces in K3. A subset L of the points of PG(2,K) is a line in PG(2,K) if there exists a 2-dimensional subspace of K3 whose set of 1-dimensional subspaces is exactly L. - -Verifying that this construction produces a projective plane is usually left as a linear algebra exercise. - -An alternate (algebraic) view of this construction is as follows. The points of this projective plane are the equivalence classes of the set K3 ∖ {(0, 0, 0)} modulo the equivalence relation - -x ~ kx, for all k in K×. - -Lines in the projective plane are defined exactly as above. - -The coordinates (x0, x1, x2) of a point in PG(2,K) are called homogeneous coordinates. Each triple (x0, x1, x2) represents a well-defined point in PG(2,K), except for the triple (0, 0, 0), which represents no point. Each point in PG(2,K), however, is represented by many triples. - -If K is a topological space, then KP2, inherits a topology via the product, subspace, and quotient topologies. - -The real projective plane RP2, arises when K is taken to be the real numbers, R. As a closed, non-orientable real 2-manifold, it serves as a fundamental example in topology. - -In this construction consider the unit sphere centered at the origin in R3. Each of the R3 lines in this construction intersects the sphere at two antipodal points. Since the R3 line represents a point of RP2, we will obtain the same model of RP2 by identifying the antipodal points of the sphere. The lines of RP2 will be the great circles of the sphere after this identification of antipodal points. This description gives the standard model of elliptic geometry. - -The complex projective plane CP2, arises when K is taken to be the complex numbers, C. It is a closed complex 2-manifold, and hence a closed, orientable real 4-manifold. It and projective planes over other fields (known as pappian planes) serve as fundamental examples in algebraic geometry. - -The quaternionic projective plane HP2 is also of independent interest. - -By Wedderburn's Theorem, a finite division ring must be commutative and so be a field. Thus, the finite examples of this construction are known as "field planes". 
Taking K to be the finite field of q = pn elements with prime p produces a projective plane of q2 + q + 1 points. The field planes are usually denoted by PG(2,q) where PG stands for projective geometry, the "2" is the dimension and q is called the order of the plane (it is one less than the number of points on any line). The Fano plane, discussed below, is denoted by PG(2,2). The third example above is the projective plane PG(2,3). - -The Fano plane is the projective plane arising from the field of two elements. It is the smallest projective plane, with only seven points and seven lines. In the figure at right, the seven points are shown as small balls, and the seven lines are shown as six line segments and a circle. However, one could equivalently consider the balls to be the "lines" and the line segments and circle to be the "points" – this is an example of duality in the projective plane: if the lines and points are interchanged, the result is still a projective plane (see below). A permutation of the seven points that carries collinear points (points on the same line) to collinear points is called a collineation or symmetry of the plane. The collineations of a geometry form a group under composition, and for the Fano plane this group (PΓL(3,2) = PGL(3,2)) has 168 elements. - -The theorem of Desargues is universally valid in a projective plane if and only if the plane can be constructed from a three-dimensional vector space over a skewfield as above. These planes are called Desarguesian planes, named after Girard Desargues. The real (or complex) projective plane and the projective plane of order 3 given above are examples of Desarguesian projective planes. The projective planes that can not be constructed in this manner are called non-Desarguesian planes, and the Moulton plane given above is an example of one. The PG(2,K) notation is reserved for the Desarguesian planes. When K is a field, a very common case, they are also known as field planes and if the field is a finite field they can be called Galois planes. - -A subplane of a projective plane is a subset of the points of the plane which themselves form a projective plane with the same incidence relations. - -proves the following theorem. Let Π be a finite projective plane of order N with a proper subplane Π0 of order M. Then either N = M2 or N ≥ M2 + M. - -When N is a square, subplanes of order sqrt N are called Baer subplanes. Every point of the plane lies on a line of a Baer subplane and every line of the plane contains a point of the Baer subplane. - -In the finite Desarguesian planes PG(2,pn), the subplanes have orders which are the orders of the subfields of the finite field GF(pn), that is, pi where i is a divisor of n. In non-Desarguesian planes however, Bruck's theorem gives the only information about subplane orders. The case of equality in the inequality of this theorem is not known to occur. Whether or not there exists a subplane of order M in a plane of order N with M2 + M = N is an open question. If such subplanes existed there would be projective planes of composite (non-prime power) order. - -A Fano subplane is a subplane isomorphic to PG(2,2), the unique projective plane of order 2. - -If you consider a quadrangle (a set of 4 points no three collinear) in this plane, the points determine six of the lines of the plane. The remaining three points (called the diagonal points of the quadrangle) are the points where the lines that do not intersect at a point of the quadrangle meet. 
The seventh line consists of all the diagonal points (usually drawn as a circle or semicircle). - -In finite Desarguesian planes, PG(2,q), Fano subplanes exist if and only if q is even (that is, a power of 2). The situation in non-Desarguesian planes is unsettled. They could exist in any non-Desarguesian plane of order greater than 6, and indeed, they have been found in all non-Desarguesian planes in which they have been sought (in both odd and even orders). - -An open question is: Does every non-Desarguesian plane contain a Fano subplane? - -One theorem concerning Fano subplanes states: - -If every quadrangle in a finite projective plane has collinear diagonal points, then the plane is Desarguesian (of even order). - -Projectivization of the Euclidean plane produced the real projective plane. The inverse operation (starting with a projective plane, remove one line and all the points incident with that line) produces an affine plane. - -More formally, an affine plane consists of a set of lines and a set of points, and a relation between points and lines called incidence, having the following properties: - -
    - -#Given any two distinct points, there is exactly one line incident with both of them. - -#Given any line l and any point P not incident with l, there is exactly one line incident with P that does not meet l. - -#There are four points such that no line is incident with more than two of them. - -
- -The second condition means that there are parallel lines and is known as Playfair's axiom. The expression "does not meet" in this condition is shorthand for "there does not exist a point incident with both lines." - -The Euclidean plane and the Moulton plane are examples of infinite affine planes. A finite projective plane will produce a finite affine plane when one of its lines and the points on it are removed. The order of a finite affine plane is the number of points on any of its lines (this will be the same number as the order of the projective plane from which it comes). The affine planes which arise from the projective planes PG(2,q) are denoted by AG(2,q). - -There is a projective plane of order N if and only if there is an affine plane of order N. When there is only one affine plane of order N there is only one projective plane of order N, but the converse is not true. The affine planes formed by the removal of different lines of the projective plane will be isomorphic if and only if the removed lines are in the same orbit of the collineation group of the projective plane. These statements hold for infinite projective planes as well. - -The affine plane K2 over K embeds into KP2 via the map which sends affine (non-homogeneous) coordinates to homogeneous coordinates, -$$ -(x_1, x_2) \mapsto (1, x_1, x_2). -$$ - -The complement of the image is the set of points of the form (0, x1, x2). From the point of view of the embedding just given, these points are the points at infinity. They constitute a line in KP2, namely the line arising from the plane -$$ -\{k (0, 0, 1) + m (0, 1, 0) : k, m \in K\} -$$ - -in K3, called the line at infinity. The points at infinity are the "extra" points where parallel lines intersect in the construction of the extended real plane; the point (0, x1, x2) is where all lines of slope x2 / x1 intersect. Consider for example the two lines -$$ -u = \{(x, 0) : x \in K\} -$$ -$$ -y = \{(x, 1) : x \in K\} -$$ - -in the affine plane K2. These lines have slope 0 and do not intersect. They can be regarded as subsets of KP2 via the embedding above, but these subsets are not lines in KP2. Add the point (0, 1, 0) to each subset; that is, let -$$ -\bar{u} = \{(1, x, 0) : x \in K\} \cup \{(0, 1, 0)\} -$$ -$$ -\bar{y} = \{(1, x, 1) : x \in K\} \cup \{(0, 1, 0)\} -$$ - -These are lines in KP2; ū arises from the plane -$$ -\{k (1, 0, 0) + m (0, 1, 0) : k, m \in K\} -$$ - -in K3, while ȳ arises from the plane -$$ -\{k (1, 0, 1) + m (0, 1, 0) : k, m \in K\}. -$$ - -The projective lines ū and ȳ intersect at (0, 1, 0). In fact, all lines in K2 of slope 0, when projectivized in this manner, intersect at (0, 1, 0) in KP2. - -The embedding of K2 into KP2 given above is not unique. Each embedding produces its own notion of points at infinity. For example, the embedding -$$ -(x_1, x_2) \mapsto (x_2, 1, x_1), -$$ - -has as its complement those points of the form (x0, 0, x2), which are then regarded as points at infinity.
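The behaviour of parallel lines under this embedding is easy to experiment with. A minimal Python sketch (ours, using K = GF(5) purely for illustration) builds ū and ȳ as finite point sets and confirms they meet exactly at the point at infinity (0, 1, 0).

q = 5  # K = GF(5); the embedding (x1, x2) -> (1, x1, x2), plus the point (0, 1, 0)
u_bar = {(1, x, 0) for x in range(q)} | {(0, 1, 0)}
y_bar = {(1, x, 1) for x in range(q)} | {(0, 1, 0)}

# The two affine lines were disjoint; the projectivized lines meet at infinity.
print(u_bar & y_bar)   # {(0, 1, 0)}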
When an affine plane does not have the form of K2 with K a division ring, it can still be embedded in a projective plane, but the construction used above does not work. A commonly used method for carrying out the embedding in this case involves expanding the set of affine coordinates and working in a more general "algebra". - -One can construct a coordinate "ring", a so-called planar ternary ring (not a genuine ring), corresponding to any projective plane. A planar ternary ring need not be a field or division ring, and there are many projective planes that are not constructed from a division ring. They are called non-Desarguesian projective planes and are an active area of research. The Cayley plane (OP2), a projective plane over the octonions, is one of these because the octonions do not form a division ring. - -Conversely, given a planar ternary ring (R,T), a projective plane can be constructed (see below). The relationship is not one to one. A projective plane may be associated with several non-isomorphic planar ternary rings. The ternary operator T can be used to produce two binary operators on the set R, by: - -a + b = T(a,1,b), and - -a • b = T(a,b,0). - -The ternary operator is linear if T(x,m,k) = x•m + k. When the set of coordinates of a projective plane actually form a ring, a linear ternary operator may be defined in this way, using the ring operations on the right, to produce a planar ternary ring. - -Algebraic properties of this planar ternary coordinate ring turn out to correspond to geometric incidence properties of the plane. For example, Desargues' theorem corresponds to the coordinate ring being obtained from a division ring, while Pappus's theorem corresponds to this ring being obtained from a commutative field. A projective plane satisfying Pappus's theorem universally is called a Pappian plane. Alternative, not necessarily associative, division algebras like the octonions correspond to Moufang planes. - -There is no known purely geometric proof of the purely geometric statement that Desargues' theorem implies Pappus' theorem in a finite projective plane (finite Desarguesian planes are Pappian). (The converse is true in any projective plane and is provable geometrically, but finiteness is essential in this statement as there are infinite Desarguesian planes which are not Pappian.) The most common proof uses coordinates in a division ring and Wedderburn's theorem that finite division rings must be commutative; Bamberg gives a proof that uses only more "elementary" algebraic facts about division rings. - -To describe a finite projective plane of order N (≥ 2) using non-homogeneous coordinates and a planar ternary ring: - -Let one point be labelled (∞). - -Label N points, (r) where r = 0, ..., (N - 1). - -Label N2 points, (r, c) where r, c = 0, ..., (N - 1). - -On these points, construct the following lines: - -One line [] = { (∞), (0), ..., (N - 1)} - -N lines [c] = {(∞), (c,0), ..., (c, N - 1)}, where c = 0, ..., (N - 1) - -N2 lines [r, c] = {(r) and the points (x, T(x,r,c)) }, where x, r, c = 0, ..., (N - 1) and T is the ternary operator of the planar ternary ring. - -For example, for N=2 we can use the symbols {0,1} associated with the finite field of order 2. The ternary operation defined by T(x,m,k) = xm + k, with the operations on the right being the multiplication and addition in the field, yields the following: - -One line [] = { (∞), (0), (1)}, - -2 lines [c] = {(∞), (c,0), (c,1) : c = 0, 1}, - -[0] = {(∞), (0,0), (0,1) } - -[1] = {(∞), (1,0), (1,1) } - -4 lines [r, c]: {(r) and the points (i, ir + c)}, where i = 0, 1, for r, c = 0, 1. - -[0,0]: {(0), (0,0), (1,0) } - -[0,1]: {(0), (0,1), (1,1) } - -[1,0]: {(1), (0,0), (1,1) } - -[1,1]: {(1), (0,1), (1,0) } - -This is the Fano plane; a small computational check of the construction appears below.
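The following Python sketch (our own check; labels such as 'slope' are hypothetical names) builds the seven points and seven lines of PG(2,2) from the ternary operator T(x,m,k) = xm + k over GF(2), and verifies that any two distinct points lie on exactly one line.

from itertools import combinations

N = 2
T = lambda x, m, k: (x * m + k) % N       # linear ternary operator over GF(2)

INF = ('inf',)                            # the point labelled (infinity)
points = [INF] + [('slope', r) for r in range(N)] + \
         [('pt', r, c) for r in range(N) for c in range(N)]

lines  = [[INF] + [('slope', r) for r in range(N)]]                    # line []
lines += [[INF] + [('pt', c, y) for y in range(N)] for c in range(N)]  # lines [c]
lines += [[('slope', r)] + [('pt', x, T(x, r, c)) for x in range(N)]   # lines [r,c]
          for r in range(N) for c in range(N)]

assert len(points) == len(lines) == N * N + N + 1   # 7 points, 7 lines
for P, Q in combinations(points, 2):
    assert sum(P in L and Q in L for L in lines) == 1
print("each pair of distinct points lies on exactly one line")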
Degenerate planes do not fulfill the third condition in the definition of a projective plane. They are not structurally complex enough to be interesting in their own right, but from time to time they arise as special cases in general arguments. There are seven kinds of degenerate plane. They are: - -# the empty set; - -# a single point, no lines; - -# a single line, no points; - -# a single point, a collection of lines, the point is incident with all of the lines; - -# a single line, a collection of points, the points are all incident with the line; - -# a point P incident with a line m, an arbitrary collection of lines all incident with P and an arbitrary collection of points all incident with m; - -# a point P not incident with a line m, an arbitrary (possibly empty) collection of lines all incident with P and all the points of intersection of these lines with m. - -These seven cases are not independent: the fourth and fifth can be considered as special cases of the sixth, while the second and third are special cases of the fourth and fifth respectively. The special case of the seventh plane with no additional lines can be seen as an eighth plane. All the cases can therefore be organized into two families of degenerate planes as follows (this representation is for finite degenerate planes, but may be extended to infinite ones in a natural way): - -1) For any number of points P1, ..., Pn, and lines L1, ..., Lm, - -L1 = { P1, P2, ..., Pn} - -L2 = { P1 } - -L3 = { P1 } - -... - -Lm = { P1 } - -2) For any number of points P1, ..., Pn, and lines L1, ..., Ln, (same number of points as lines) - -L1 = { P2, P3, ..., Pn } - -L2 = { P1, P2 } - -L3 = { P1, P3 } - -... - -Ln = { P1, Pn } - -A collineation of a projective plane is a bijective map of the plane to itself taking points to points and lines to lines and preserving incidence, meaning that if σ is such a map and point P is on line m, then Pσ is on mσ. - -If σ is a collineation of a projective plane, a point P with P = Pσ is called a fixed point of σ, and a line m with m = mσ is called a fixed line of σ. The points on a fixed line need not be fixed points; their images under σ are just constrained to lie on this line. The collection of fixed points and fixed lines of a collineation forms a closed configuration, which is a system of points and lines that satisfies the first two but not necessarily the third condition in the definition of a projective plane. Thus, the fixed point and fixed line structure for any collineation either forms a projective plane by itself, or a degenerate plane. Collineations whose fixed structure forms a plane are called planar collineations. - -A homography (or projective transformation) of PG(2,K) is a collineation of this type of projective plane which is a linear transformation of the underlying vector space. Using homogeneous coordinates, a homography can be represented by an invertible 3 × 3 matrix over K which acts on the points of PG(2,K) by y = Mx, where x and y are points in K3 written as column vectors and M is an invertible 3 × 3 matrix over K. Two matrices represent the same projective transformation if one is a constant multiple of the other. Thus the group of projective transformations is the quotient of the general linear group by the scalar matrices; it is called the projective linear group. - -Another type of collineation of PG(2,K) is induced by any automorphism of K; these are called automorphic collineations. If α is an automorphism of K, then the collineation given by (x0,x1,x2) → (x0α,x1α,x2α) is an automorphic collineation. The fundamental theorem of projective geometry says that all the collineations of PG(2,K) are compositions of homographies and automorphic collineations. Automorphic collineations are planar collineations.
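The matrix description of homographies is easy to demonstrate numerically. In the sketch below (our illustration, over K = R with numpy), an invertible matrix acts on homogeneous coordinates, and two representatives of the same projective point are sent to representatives of the same image point.

import numpy as np

M = np.array([[1., 2., 0.],
              [0., 1., 0.],
              [1., 0., 1.]])          # det(M) = 1, so M is invertible
assert abs(np.linalg.det(M)) > 1e-12

x  = np.array([1., 2., 3.])           # homogeneous coordinates of a point
y1 = M @ x
y2 = M @ (5 * x)                      # another representative of the same point
assert np.allclose(np.cross(y1, y2), 0)   # y1, y2 are parallel: same projective point
print(y1, y2)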
- -A projective plane is defined axiomatically as an incidence structure, in terms of a set P of points, a set L of lines, and an incidence relation I that determines which points lie on which lines. As P and L are only sets, one can interchange their roles and define a plane dual structure. - -By interchanging the role of "points" and "lines" in - -C = (P,L,I) - -we obtain the dual structure - -C* = (L,P,I*), - -where I* is the converse relation of I. - -In a projective plane, a statement involving points, lines and incidence between them that is obtained from another such statement by interchanging the words "point" and "line" and making whatever grammatical adjustments are necessary is called the plane dual statement of the first. The plane dual statement of "Two points are on a unique line." is "Two lines meet at a unique point." Forming the plane dual of a statement is known as dualizing the statement. - -If a statement is true in a projective plane C, then the plane dual of that statement must be true in the dual plane C*. This follows since dualizing each statement in the proof "in C" gives a corresponding statement of the proof "in C*." - -In the projective plane C, it can be shown that there exist four lines, no three of which are concurrent. Dualizing this theorem and the first two axioms in the definition of a projective plane shows that the plane dual structure C* is also a projective plane, called the dual plane of C. - -If C and C* are isomorphic, then C is called self-dual. The projective planes PG(2,K) for any division ring K are self-dual. However, there are non-Desarguesian planes which are not self-dual, such as the Hall planes, and some that are, such as the Hughes planes. - -The Principle of Plane Duality says that dualizing any theorem in a self-dual projective plane C produces another theorem valid in C. - -A duality is a map from a projective plane C = (P, L, I) to its dual plane C* = (L, P, I*) (see above) which preserves incidence. That is, a duality σ will map points to lines and lines to points (Pσ = L and Lσ = P) in such a way that if a point Q is on a line m (denoted by Q I m) then Qσ I* mσ ⇔ mσ I Qσ. A duality which is an isomorphism is called a correlation. If a correlation exists, then the projective plane C is self-dual. - -In the special case that the projective plane is of the PG(2,K) type, with K a division ring, a duality is called a reciprocity. These planes are always self-dual. By the fundamental theorem of projective geometry, a reciprocity is the composition of an automorphism of K and a homography. If the automorphism involved is the identity, then the reciprocity is called a projective correlation. - -A correlation of order two (an involution) is called a polarity. If a correlation φ is not a polarity, then φ2 is a nontrivial collineation. - -It can be shown that a projective plane has the same number of lines as it has points (infinite or finite). Thus, for every finite projective plane there is an integer N ≥ 2 such that the plane has - -N2 + N + 1 points, - -N2 + N + 1 lines, - -N + 1 points on each line, and - -N + 1 lines through each point. - -The number N is called the order of the projective plane. - -The projective plane of order 2 is called the Fano plane. See also the article on finite geometry. - -Using the vector space construction with finite fields there exists a projective plane of order N = pn, for each prime power pn. In fact, for all known finite projective planes, the order N is a prime power. - -The existence of finite projective planes of other orders is an open question. The only general restriction known on the order is the Bruck-Ryser-Chowla theorem: if the order N is congruent to 1 or 2 mod 4, it must be the sum of two squares. This rules out N = 6. The next case, N = 10, has been ruled out by massive computer calculations. Nothing more is known; in particular, the question of whether there exists a finite projective plane of order N = 12 is still open.
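The Bruck-Ryser-Chowla condition is simple to apply mechanically. A small Python sketch (ours; the helper name is hypothetical) flags the orders below 15 that the theorem excludes: it rules out 6 and 14, but is silent on 10 and 12, matching the discussion above.

from math import isqrt

def is_sum_of_two_squares(n):
    return any(isqrt(n - a * a) ** 2 == n - a * a for a in range(isqrt(n) + 1))

for N in range(2, 15):
    if N % 4 in (1, 2) and not is_sum_of_two_squares(N):
        print(f"order {N} is impossible by Bruck-Ryser-Chowla")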
- -Another longstanding open problem is whether there exist finite projective planes of prime order which are not finite field planes (equivalently, whether there exists a non-Desarguesian projective plane of prime order). - -A projective plane of order N is a Steiner S(2, N + 1, N2 + N + 1) system (see Steiner system). Conversely, one can prove that all Steiner systems of this form (λ = 2) are projective planes. - -The number of mutually orthogonal Latin squares of order N is at most N - 1, and N - 1 of them exist if and only if there is a projective plane of order N. - -While the classification of all projective planes is far from complete, results are known for small orders: - -*2 : all isomorphic to PG(2,2) - -*3 : all isomorphic to PG(2,3) - -*4 : all isomorphic to PG(2,4) - -*5 : all isomorphic to PG(2,5) - -*6 : impossible as the order of a projective plane, proved by Tarry, who showed that Euler's thirty-six officers problem has no solution. However, the connection between these problems was not known until Bose proved it in 1938. - -*7 : all isomorphic to PG(2,7) - -*8 : all isomorphic to PG(2,8) - -*9 : PG(2,9), and three more different (non-isomorphic) non-Desarguesian planes: a Hughes plane, a Hall plane, and the dual of this Hall plane. - -*10 : impossible as an order of a projective plane, proved by heavy computer calculation. - -*11 : at least PG(2,11); others are not known but possible. - -*12 : conjectured to be impossible as an order of a projective plane. - -Projective planes may be thought of as projective geometries of "geometric" dimension two. Higher-dimensional projective geometries can be defined in terms of incidence relations in a manner analogous to the definition of a projective plane. These turn out to be "tamer" than the projective planes, since the extra degrees of freedom permit Desargues' theorem to be proved geometrically in the higher-dimensional geometry. This means that the coordinate "ring" associated to the geometry must be a division ring (skewfield) K, and the projective geometry is isomorphic to the one constructed from the vector space Kd+1, i.e. PG(d,K). As in the construction given earlier, the points of the d-dimensional projective space PG(d,K) are the lines through the origin in Kd + 1 and a line in PG(d,K) corresponds to a plane through the origin in Kd + 1. In fact, each i-dimensional object in PG(d,K), with i < d, corresponds to an (i + 1)-dimensional (algebraic) vector subspace of Kd + 1 ("goes through the origin"). The projective spaces in turn generalize to the Grassmannian spaces. - -It can be shown that if Desargues' theorem holds in a projective space of dimension greater than two, then it must also hold in all planes that are contained in that space. Since there are projective planes in which Desargues' theorem fails (non-Desarguesian planes), these planes cannot be embedded in a higher-dimensional projective space. Only the planes from the vector space construction PG(2,K) can appear in projective spaces of higher dimension.
Some disciplines in mathematics restrict the meaning of projective plane to only this type of projective plane since otherwise general statements about projective spaces would always have to mention the exceptions when the geometric dimension is two. diff --git a/wiki/wikipedia/576.txt b/wiki/wikipedia/576.txt deleted file mode 100644 index 27871ce4650ffa64b0ce144d48b17687a2688a98..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/576.txt +++ /dev/null @@ -1,9 +0,0 @@ -Anne's theorem, named after the French mathematician Pierre-Léon Anne (1806–1850), is a statement from Euclidean geometry, which describes an equality of certain areas within a convex quadrilateral. - -Specifically, it states: - -Let ABCD be a convex quadrilateral with diagonals AC and BD, that is not a parallelogram. Furthermore let E and F be the midpoints of the diagonals and L be an arbitrary point in the interior of ABCD. L forms four triangles with the edges of ABCD. If the two sums of areas of opposite triangles are equal ( Area(BCL) + Area(DAL) = Area(LAB) + Area(DLC) ), then the point L is located on the Newton line, that is the line which connects E and F. - -For a parallelogram the Newton line does not exist, since both midpoints of the diagonals coincide with the point of intersection of the diagonals. Moreover, the area identity of the theorem holds in this case for any inner point of the quadrilateral. - -The converse of Anne's theorem is true as well: for any point on the Newton line which is an inner point of the quadrilateral, the area identity holds. diff --git a/wiki/wikipedia/577.txt b/wiki/wikipedia/577.txt deleted file mode 100644 index d4d3b17d168ebaa3f120ad441991b14abe252a0a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/577.txt +++ /dev/null @@ -1,93 +0,0 @@ -Concolic testing (a portmanteau of concrete and symbolic) is a hybrid software verification technique that performs symbolic execution, a classical technique that treats program variables as symbolic variables, along a concrete execution (testing on particular inputs) path. Symbolic execution is used in conjunction with an automated theorem prover or constraint solver based on constraint logic programming to generate new concrete inputs (test cases) with the aim of maximizing code coverage. Its main focus is finding bugs in real-world software, rather than demonstrating program correctness. - -A description and discussion of the concept was introduced in "DART: Directed Automated Random Testing" by Patrice Godefroid, Nils Klarlund, and Koushik Sen. The paper "CUTE: A concolic unit testing engine for C", by Koushik Sen, Darko Marinov, and Gul Agha, further extended the idea to data structures, and first coined the term concolic testing. Another tool, called EGT (renamed to EXE and later improved and renamed to KLEE), based on similar ideas, was independently developed by Cristian Cadar and Dawson Engler, and published in 2005 and 2006. PathCrawler first proposed to perform symbolic execution along a concrete execution path, but unlike concolic testing PathCrawler does not simplify complex symbolic constraints using concrete values. These tools (DART, CUTE, and EXE) applied concolic testing to unit testing of C programs, and concolic testing was originally conceived as a white box improvement upon established random testing methodologies. The technique was later generalized to testing multithreaded Java programs with jCUTE, and unit testing programs from their executable codes (tool OSMOSE).
It was also combined with fuzz testing and extended to detect exploitable security issues in large-scale x86 binaries by Microsoft Research's SAGE. - -The concolic approach is also applicable to model checking. In a concolic model checker, the model checker traverses states of the model representing the software being checked, while storing both a concrete state and a symbolic state. The symbolic state is used for checking properties on the software, while the concrete state is used to avoid reaching unreachable states. One such tool is ExpliSAT by Sharon Barner, Cindy Eisner, Ziv Glazberg, Daniel Kroening and Ishai Rabinovitz. - -Implementation of traditional symbolic execution based testing requires the implementation of a full-fledged symbolic interpreter for a programming language. Concolic testing implementors noticed that implementation of full-fledged symbolic execution can be avoided if symbolic execution can be piggy-backed with the normal execution of a program through instrumentation. This idea of simplifying implementation of symbolic execution gave birth to concolic testing. - -An important reason for the rise of concolic testing (and more generally, symbolic-execution based analysis of programs) in the decade since it was introduced in 2005 is the dramatic improvement in the efficiency and expressive power of SMT solvers. The key technical advances that led to the rapid development of SMT solvers include the combination of theories, lazy solving, DPLL(T) and the huge improvements in the speed of SAT solvers. SMT solvers that are particularly tuned for concolic testing include Z3, STP, Z3str2, and Boolector. - -Consider the following simple example, written in C (assert is from <assert.h>): - - - -void f(int x, int y) { -    int z = 2 * y;            /* line 2 */ -    if (x == 100000) {        /* line 3 */ -        if (x < z) {          /* line 4 */ -            assert(0); /* error */ -        } -    } -} - - - -Simple random testing, trying random values of x and y, would require an impractically large number of tests to reproduce the failure. - -We begin with an arbitrary choice for x and y, for example x = y = 1. In the concrete execution, line 2 sets z to 2, and the test in line 3 fails since 1 ≠ 100000. Concurrently, the symbolic execution follows the same path but treats x and y as symbolic variables. It sets z to the expression 2y and notes that, because the test in line 3 failed, x ≠ 100000. This inequality is called a path condition and must be true for all executions following the same execution path as the current one. - -Since we'd like the program to follow a different execution path on the next run, we take the last path condition encountered, x ≠ 100000, and negate it, giving x = 100000. An automated theorem prover is then invoked to find values for the input variables x and y given the complete set of symbolic variable values and path conditions constructed during symbolic execution. In this case, a valid response from the theorem prover might be x = 100000, y = 0. - -Running the program on this input allows it to reach the inner branch on line 4, which is not taken since 100000 (x) is not less than 0 (z = 2y). The path conditions are x = 100000 and x ≥ z. The latter is negated, giving x < z. The theorem prover then looks for x, y satisfying x = 100000, x < z, and z = 2y; for example, x = 100000, y = 50001. This input reaches the error. A concrete rendering of these solver queries appears below.
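This Python sketch (our illustration, using the z3-solver package; it mirrors the walkthrough rather than DART's actual implementation) issues the two solver queries just described:

from z3 import Int, Solver, sat

x, y = Int('x'), Int('y')
z = 2 * y

# First run (x = y = 1) logged the path condition x != 100000.
# Negate it and ask for a satisfying input:
s = Solver()
s.add(x == 100000)
assert s.check() == sat
print(s.model())                  # e.g. x = 100000 (y is unconstrained)

# Second run logged x == 100000 and x >= z; negate the last condition:
s = Solver()
s.add(x == 100000, x < z)
assert s.check() == sat
print(s.model())                  # e.g. x = 100000, y = 50001: reaches assert(0)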
Essentially, a concolic testing algorithm operates as follows: - -# Classify a particular set of variables as input variables. These variables will be treated as symbolic variables during symbolic execution. All other variables will be treated as concrete values. - -# Instrument the program so that each operation which may affect a symbolic variable value or a path condition is logged to a trace file, as well as any error that occurs. - -# Choose an arbitrary input to begin with. - -# Execute the program. - -# Symbolically re-execute the program on the trace, generating a set of symbolic constraints (including path conditions). - -# Negate the last path condition not already negated in order to visit a new execution path. If there is no such path condition, the algorithm terminates. - -# Invoke an automated satisfiability solver on the new set of path conditions to generate a new input. If there is no input satisfying the constraints, return to step 6 to try the next execution path. - -# Return to step 4. - -There are a few complications to the above procedure: - -* The algorithm performs a depth-first search over an implicit tree of possible execution paths. In practice programs may have very large or infinite path trees – a common example is testing data structures that have an unbounded size or length. To prevent spending too much time on one small area of the program, the search may be depth-limited (bounded). - -* Symbolic execution and automated theorem provers have limitations on the classes of constraints they can represent and solve. For example, a theorem prover based on linear arithmetic will be unable to cope with the nonlinear path condition xy = 6. Any time that such constraints arise, the symbolic execution may substitute the current concrete value of one of the variables to simplify the problem. An important part of the design of a concolic testing system is selecting a symbolic representation precise enough to represent the constraints of interest. - -Symbolic-execution based analysis and testing, in general, has witnessed a significant level of interest from industry. Perhaps the most famous commercial tool that uses dynamic symbolic execution (aka concolic testing) is the SAGE tool from Microsoft. The KLEE and S2E tools (both of which are open-source tools, and use the STP constraint solver) are widely used in many companies including Micro Focus Fortify, NVIDIA, and IBM. Increasingly these technologies are being used by many security companies and hackers alike to find security vulnerabilities. - -Concolic testing has a number of limitations: - -* If the program exhibits nondeterministic behavior, it may follow a different path than the intended one. This can lead to nontermination of the search and poor coverage. - -* Even in a deterministic program, a number of factors may lead to poor coverage, including imprecise symbolic representations, incomplete theorem proving, and failure to search the most fruitful portion of a large or infinite path tree. - -* Programs which thoroughly mix the state of their variables, such as cryptographic primitives, generate very large symbolic representations that cannot be solved in practice. For example, the condition if(sha256_hash(input) == 0x12345678) { ... } requires the theorem prover to invert SHA256, which is an open problem. - -* is a restricted version of the current PathCrawler tool which is publicly available as an online test-case server for evaluation and education purposes. - -* is available as a binary under a research-use-only license from the University of Illinois at Urbana-Champaign, for Java. - -* is an open-source solution for C that replaced CUTE (modified BSD license).
- -* is an open-source solution built on top of the LLVM infrastructure (UIUC license). - -* is an open-source solution for Java (BSD license). - -* Jalangi is an open-source concolic testing and symbolic execution tool for JavaScript. Jalangi supports integers and strings. - -* , developed at Microsoft Research (RiSE), is publicly available as a Microsoft Visual Studio 2010 Power Tool for the .NET Framework. - -* is an open-source concolic execution framework for x86 and x86-64 binaries. - -* is an open-source concolic testing tool for the Erlang functional programming language. - -Many tools, notably DART and SAGE, have not been made available to the public at large. Note, however, that SAGE, for instance, is "used daily" for internal security testing at Microsoft. diff --git a/wiki/wikipedia/578.txt b/wiki/wikipedia/578.txt deleted file mode 100644 index 3b029322f14c59bd849431e2e75ecfbcd7d65bc3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/578.txt +++ /dev/null @@ -1,125 +0,0 @@ -In mathematical analysis, Fubini's theorem is a result that gives conditions under which it is possible to compute a double integral by using an iterated integral, introduced by Guido Fubini in 1907. One may switch the order of integration if the double integral yields a finite answer when the integrand is replaced by its absolute value. - - \iint\limits_{X\times Y} f(x,y)\text{d}(x,y) = \int_X\left(\int_Y f(x,y)\text{d}y\right)\text{d}x=\int_Y\left(\int_X f(x,y) \text{d}x \right) \text{d}y \qquad \text{ if } \qquad \iint\limits_{X\times Y} |f(x,y)|\text{d}(x,y) <+\infty. - -As a consequence, it allows the order of integration to be changed in certain iterated integrals. - -Fubini's theorem implies that the two iterated integrals are equal to the corresponding double integral. Tonelli's theorem, introduced by Leonida Tonelli in 1909, is similar, but applies to a non-negative measurable function rather than an integrable one. - -A related theorem is often called Fubini's theorem for infinite series, which states that if $\{a_{m,n}\}_{m=1,n=1}^{\infty}$ is a doubly-indexed sequence of real numbers, and if $\sum_{(m,n)\in \N\times \N} a_{m,n} $ is absolutely convergent, then - - \sum_{(m,n)\in\N\times \N}a_{m,n} = \sum_{m=1}^\infty\sum_{n=1}^{\infty} a_{m,n} = \sum_{n=1}^\infty \sum_{m=1}^\infty a_{m,n} - -Although Fubini's theorem for infinite series is a special case of the more general Fubini's theorem, it is not appropriate to characterize it as a logical consequence of Fubini's theorem. This is because some properties of measures, in particular sub-additivity, are often proved using Fubini's theorem for infinite series. In this case, Fubini's general theorem is a logical consequence of Fubini's theorem for infinite series. - -The special case of Fubini's theorem for continuous functions on a product of closed bounded subsets of real vector spaces was known to Leonhard Euler in the 18th century. Lebesgue extended this to bounded measurable functions on a product of intervals. Levi conjectured that the theorem could be extended to functions that were integrable rather than bounded, and this was proved by Fubini. Tonelli gave a variation of Fubini's theorem that applies to non-negative functions rather than integrable functions. - -If X and Y are measure spaces with measures, there are several natural ways to define a product measure on their product.
- -The product X × Y of measure spaces (in the sense of category theory) has as its measurable sets the σ-algebra generated by the products A × B of measurable subsets of X and Y. - -A measure μ on X × Y is called a product measure if μ(A × B) = μ1(A)μ2(B) for measurable subsets A ⊂ X and B ⊂ Y and measures µ1 on X and µ2 on Y. In general there may be many different product measures on X × Y. Fubini's theorem and Tonelli's theorem both need technical conditions to avoid this complication; the most common way is to assume all measure spaces are σ-finite, in which case there is a unique product measure on X×Y. There is always a unique maximal product measure on X × Y, where the measure of a measurable set is the inf of the measures of sets containing it that are countable unions of products of measurable sets. The maximal product measure can be constructed by applying Carathéodory's extension theorem to the additive function μ such that μ(A × B) = μ1(A)μ2(B) on the ring of sets generated by products of measurable sets. (Carathéodory's extension theorem gives a measure on a measure space that in general contains more measurable sets than the measure space X × Y, so strictly speaking the measure should be restricted to the σ-algebra generated by the products A × B of measurable subsets of X and Y.) - -The product of two complete measure spaces is not usually complete. For example, the product of the Lebesgue measure on the unit interval I with itself is not the Lebesgue measure on the square I × I. There is a variation of Fubini's theorem for complete measures, which uses the completion of the product of measures rather than the uncompleted product. - -Suppose X and Y are σ-finite measure spaces, and suppose that X × Y is given the product measure (which is unique as X and Y are σ-finite). Fubini's theorem states that if f is X × Y integrable, meaning that f is a measurable function and - -\int_{X\times Y} |f(x,y)|\text{d}(x,y) < \infty, - -then - -\int_X\left(\int_Y f(x,y)\text{d}y\right)\text{d}x = \int_Y\left(\int_X f(x,y)\text{d}x\right)\text{d}y = \int_{X\times Y} f(x,y)\text{d}(x,y). - -The first two integrals are iterated integrals with respect to two measures, respectively, and the third is an integral with respect to the product measure. The partial integrals $\int_Y f(x,y)\text{d}y$ and $\int_X f(x,y)\text{d}x$ need not be defined everywhere, but this does not matter as the points where they are not defined form a set of measure 0. - -If the above integral of the absolute value is not finite, then the two iterated integrals may have different values. See below for an illustration of this possibility. - -The condition that X and Y are σ-finite is usually harmless because in practice almost all measure spaces one wishes to use Fubini's theorem for are σ-finite. - -Fubini's theorem has some rather technical extensions to the case when X and Y are not assumed to be σ-finite . The main extra complication in this case is that there may be more than one product measure on X×Y. Fubini's theorem continues to hold for the maximal product measure, but can fail for other product measures. For example, there is a product measure and a non-negative measurable function f for which the double integral of |f| is zero but the two iterated integrals have different values; see the section on counterexamples below for an example of this. Tonelli's theorem and the Fubini–Tonelli theorem (stated below) can fail on non σ-finite spaces even for the maximal product measure. 
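For integrable functions on σ-finite spaces, the equality of the iterated integrals is easy to check symbolically in examples. A quick sanity check (our example, using sympy) for f(x,y) = xy on the unit square:

from sympy import symbols, integrate, Rational

x, y = symbols('x y')
f = x * y
dy_then_dx = integrate(integrate(f, (y, 0, 1)), (x, 0, 1))
dx_then_dy = integrate(integrate(f, (x, 0, 1)), (y, 0, 1))
assert dy_then_dx == dx_then_dy == Rational(1, 4)   # both orders agree
print(dy_then_dx)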
- -Tonelli's theorem (named after Leonida Tonelli) is a successor of Fubini's theorem. The conclusion of Tonelli's theorem is identical to that of Fubini's theorem, but the assumption that $|f|$ has a finite integral is replaced by the assumption that $f$ is a non-negative measurable function. - -Tonelli's theorem states that if (X, A, μ) and (Y, B, ν) are σ-finite measure spaces, while f from X×Y to [0,∞] is a non-negative measurable function, then - -\int_X\left(\int_Y f(x,y)\text{d}y\right)\text{d}x = \int_Y\left(\int_X f(x,y)\text{d}x\right)\text{d}y = \int_{X\times Y} f(x,y)\text{d}(x,y). - -A special case of Tonelli's theorem is the interchange of summations, as in $\sum_x \sum_y a_{xy} = \sum_y \sum_x a_{xy}$, where $a_{xy}$ are non-negative for all x and y. The crux of the theorem is that the interchange of order of summation holds even if the series diverges. In effect, the only way a change in order of summation can change the sum is when there exist some subsequences that diverge to $+\infty$ and others that diverge to $-\infty$. With all elements non-negative, this does not happen in the stated example. - -Without the condition that the measure spaces are σ-finite, it is possible for all three of these integrals to have different values. - -Some authors give generalizations of Tonelli's theorem to some measure spaces that are not σ-finite, but these generalizations often add conditions that immediately reduce the problem to the σ-finite case. For example, one could take the σ-algebra on A×B to be that generated by the product of subsets of finite measure, rather than that generated by all products of measurable subsets, though this has the undesirable consequence that the projections from the product to its factors A and B are not measurable. Another way is to add the condition that the support of f is contained in a countable union of products of sets of finite measure. Fremlin gives some rather technical extensions of Tonelli's theorem to some non σ-finite spaces. None of these generalizations have found any significant applications outside abstract measure theory, largely because almost all measure spaces of practical interest are σ-finite. - -Combining Fubini's theorem with Tonelli's theorem gives the Fubini–Tonelli theorem (often just called Fubini's theorem), which states that if X and Y are σ-finite measure spaces, and if f is a measurable function, then - -\int_X\left(\int_Y |f(x,y)|\text{d}y\right)\text{d}x=\int_Y\left(\int_X |f(x,y)|\text{d}x\right)\text{d}y=\int_{X\times Y} |f(x,y)|\text{d}(x,y) - -Moreover, if any one of these integrals is finite, then - -\int_X\left(\int_Y f(x,y)\text{d}y\right)\text{d}x=\int_Y\left(\int_X f(x,y)\text{d}x\right)\text{d}y=\int_{X\times Y} f(x,y)\text{d}(x,y). - -The absolute value of f in the conditions above can be replaced by either the positive or the negative part of f; these forms include Tonelli's theorem as a special case, as the negative part of a non-negative function is zero and so has finite integral. Informally, all these conditions say that the double integral of f is well defined, though possibly infinite. - -The advantage of the Fubini–Tonelli theorem over Fubini's theorem is that the iterated integrals of the absolute value |f| may be easier to study than the double integral. As in Fubini's theorem, the single integrals may fail to be defined on a measure 0 set. - -The versions of Fubini's and Tonelli's theorems above do not apply to integration on the product of the real line R with itself with Lebesgue measure.
The problem is that Lebesgue measure on R×R is not the product of Lebesgue measure on R with itself, but rather the completion of this: a product of two complete measure spaces X and Y is not in general complete. For this reason one sometimes uses versions of Fubini's theorem for complete measures: roughly speaking, one just replaces all measures by their completions. The various versions of Fubini's theorem are similar to the versions above, with the following minor differences: - -*Instead of taking a product X×Y of two measure spaces, one takes the completion of some product. - -*If f is measurable on the completion of X×Y then its restrictions to vertical or horizontal lines may be non-measurable for a measure zero subset of lines, so one has to allow for the possibility that the vertical or horizontal integrals are undefined on a set of measure 0 because they involve integrating non-measurable functions. This makes little difference, because they can already be undefined due to the functions not being integrable. - -*One generally also assumes that the measures on X and Y are complete, otherwise the two partial integrals along vertical or horizontal lines may be well-defined but not measurable. For example, if f is the characteristic function of a product of a measurable set and a non-measurable set contained in a measure 0 set, then its single integral is well defined everywhere but non-measurable. - -Proofs of the Fubini and Tonelli theorems are necessarily somewhat technical, as they have to use a hypothesis related to σ-finiteness. Most proofs involve building up to the full theorems by proving them for increasingly complicated functions, with the steps as follows. - -# Use the fact that the measure on the product is a product measure to prove the theorems for the characteristic functions of rectangles. - -# Use the condition that the spaces are σ-finite (or some related condition) to prove the theorem for the characteristic functions of measurable sets. This also covers the case of simple measurable functions (measurable functions taking only a finite number of values). - -# Use the condition that the functions are measurable to prove the theorems for positive measurable functions by approximating them by simple measurable functions. This proves Tonelli's theorem. - -# Use the condition that the functions are integrable to write them as the difference of two positive integrable functions, and apply Tonelli's theorem to each of these. This proves Fubini's theorem. - -For Riemann integrals, Fubini's theorem is proven by refining the partitions along the x-axis and y-axis so as to create a joint partition of the form $[x_i,x_{i+1}] \times [y_j,y_{j+1}]$, which is a partition over $X\times Y$. This is used to show that the double integrals of either order are equal to the integral over $X\times Y$. - -The following examples show how Fubini's theorem and Tonelli's theorem can fail if any of their hypotheses are omitted. - -Suppose that X is the unit interval with the Lebesgue measurable sets and Lebesgue measure, and Y is the unit interval with all subsets measurable and the counting measure, so that Y is not σ-finite. If f is the characteristic function of the diagonal of X×Y, then integrating f along X gives the 0 function on Y, but integrating f along Y gives the function 1 on X. So the two iterated integrals are different. This shows that Tonelli's theorem can fail for spaces that are not σ-finite no matter what product measure is chosen.
The measures are both decomposable, showing that Tonelli's theorem fails for decomposable measures (which are slightly more general than σ-finite measures). - -Fubini's theorem holds for spaces even if they are not assumed to be σ-finite provided one uses the maximal product measure. - -In the example above, for the maximal product measure, the diagonal has infinite measure so the double integral of |f| is infinite, and Fubini's theorem holds vacuously. - -However, if we give X×Y the product measure such that the measure of a set is the sum of the Lebesgue measures of its horizontal sections, then the double integral of |f| is zero, but the two iterated integrals still have different values. This gives an example of a product measure where Fubini's theorem fails. - -This gives an example of two different product measures on the same product of two measure spaces. For products of two σ-finite measure spaces, there is only one product measure. - -Suppose that X is the first uncountable ordinal, with the finite measure where the measurable sets are either countable (with measure 0) or the sets of countable complement (with measure 1). The (non-measurable) subset E of X×X given by pairs (x,y) with x < y is countable on every horizontal line and has countable complement on every vertical line. If f is the characteristic function of E, then the two iterated integrals of f are both defined, but have the different values 0 and 1. This shows that the iterated integrals of a non-measurable function can differ even when the factor measures are finite. - -Fubini's theorem can also fail for functions that are not absolutely integrable, as the following standard example shows. Consider the function -$$ -\frac{x^2-y^2}{(x^2+y^2)^2} = -\frac{\partial^2}{\partial x\partial y} \arctan(y/x). -$$ - -The iterated integrals - -\int_{x=0}^1\left(\int_{y=0}^1\frac{x^2-y^2}{(x^2+y^2)^2}\text{d}y\right)\text{d}x = \frac{\pi}{4} - -and - -\int_{y=0}^1\left(\int_{x=0}^1\frac{x^2-y^2}{(x^2+y^2)^2}\text{d}x\right)\text{d}y=-\frac{\pi}{4} - -have different values. The corresponding double integral does not converge absolutely (in other words the integral of the absolute value is not finite): - -\int_0^1\int_0^1 \left|\frac{x^2-y^2}{\left(x^2 + y^2\right)^2}\right|\text{d}y\text{d}x=\infty. diff --git a/wiki/wikipedia/579.txt b/wiki/wikipedia/579.txt deleted file mode 100644 index b75d83bfa97862d4a98ab83a21be0ffa3b1585d7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/579.txt +++ /dev/null @@ -1,45 +0,0 @@ -Jacobi's four-square theorem gives a formula for the number of ways that a given positive integer n can be represented as the sum of four squares. - -The theorem was proved in 1834 by Carl Gustav Jakob Jacobi. - -Two representations are considered different if their terms are in different order or if the integer being squared (not just the square) is different; to illustrate, these are three of the eight different ways to represent 1: - - - -\begin{align} - -& 1^2 + 0^2 + 0^2 + 0^2 \\ - -& 0^2 + 1^2 + 0^2 + 0^2 \\ - -& (-1)^2 + 0^2 + 0^2 + 0^2. - -\end{align} - - - -The number of ways to represent n as the sum of four squares is eight times the sum of the divisors of n if n is odd and 24 times the sum of the odd divisors of n if n is even (see divisor function), i.e. - -r_4(n)=\begin{cases}8\sum\limits_{m|n}m&\text{if }n\text{ is odd}\\[12pt] - -24\sum\limits_{\begin{smallmatrix} m|n \\ m\text{ odd} \end{smallmatrix}}m&\text{if }n\text{ is even}. - -\end{cases} - -Equivalently, it is eight times the sum of all its divisors which are not divisible by 4, i.e. -$$ -r_4(n)=8\sum_{m\mid n , 4 \nmid m}m. -$$ - -We may also write this as -$$ -r_4(n) = 8 \sigma(n) -32 \sigma(n/4) \ , -$$ - -where the second term is to be taken as zero if n is not divisible by 4. In particular, for a prime number p we have the explicit formula r4(p) = 8(p + 1). - -Some values of r4(n) occur infinitely often as r4(n) = r4(2^m n) whenever n is even.
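Jacobi's divisor formula is straightforward to test computationally. The Python sketch below (ours; function names are hypothetical) compares the formula with brute-force enumeration of representations for small n, recovering r4(1) = 8 and r4(2) = 24.

def r4_brute(n):
    bound = int(n ** 0.5) + 1
    return sum(1 for a in range(-bound, bound + 1)
                 for b in range(-bound, bound + 1)
                 for c in range(-bound, bound + 1)
                 for d in range(-bound, bound + 1)
                 if a * a + b * b + c * c + d * d == n)

def r4_jacobi(n):
    # eight times the sum of the divisors of n not divisible by 4
    return 8 * sum(d for d in range(1, n + 1) if n % d == 0 and d % 4 != 0)

for n in range(1, 30):
    assert r4_brute(n) == r4_jacobi(n)
print(r4_jacobi(1), r4_jacobi(2))   # 8 24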
The values of r4(n) can be arbitrarily large: indeed, r4(n) is infinitely often larger than 8 log n. - -The theorem can be proved by elementary means starting with the Jacobi triple product. - -The proof shows that the Theta series for the lattice Z4 is a modular form of a certain level, and hence equals a linear combination of Eisenstein series. diff --git a/wiki/wikipedia/58.txt b/wiki/wikipedia/58.txt deleted file mode 100644 index 039adc1172654d06f8eab9c346606d2849244011..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/58.txt +++ /dev/null @@ -1,7 +0,0 @@ -VIPER is a 32-bit microprocessor design created by the Royal Signals and Radar Establishment (RSRE) in the 1980s, intended to be used in safety-critical systems such as avionics. It was the first commercial microprocessor design to be formally proven correct, although there was some controversy surrounding this claim and the definition of proof. - -The design was completed in 1987 and implemented initially by RSRE in a gate array. Marconi Electronics subsequently licensed the design, implementing it as the MAS1908 VIPER-1, fabricated using CMOS and silicon-on-sapphire technologies, and packaged as a 120-pin grid array product. - -Architecturally, VIPER is a 32-bit processor supporting 20-bit word-oriented addressing of memory and of "I/O space" (and thus 4 megabytes of each). Although employing a uniform instruction layout suggestive of RISC architectures, instruction execution times vary from 6 to 26 clock cycles, in contrast to the throughput of one instruction per cycle sought by conventional RISC architectures. - -A safety-critical programming language named Newspeak was designed by Ian Currie of RSRE in 1984 for use with VIPER. Its principal characteristic was that all exceptional behaviour in programs must be dealt with at compile time. diff --git a/wiki/wikipedia/580.txt b/wiki/wikipedia/580.txt deleted file mode 100644 index a3edc00551f12e9e32978225ea9fa01f6173220b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/580.txt +++ /dev/null @@ -1,93 +0,0 @@ -In number theory, a Wagstaff prime is a prime number of the form -$$ -\frac{2^p+1}{3} -$$ - -where p is an odd prime. Wagstaff primes are named after the mathematician Samuel S. Wagstaff Jr.; the Prime Pages credit François Morain for naming them in a lecture at the Eurocrypt 1990 conference. Wagstaff primes appear in the New Mersenne conjecture and have applications in cryptography. - -The first three Wagstaff primes are 3, 11, and 43 because - - - -\begin{align} - -3 & = {2^3+1 \over 3}, \\[5pt] - -11 & = {2^5+1 \over 3}, \\[5pt] - -43 & = {2^7+1 \over 3}. - -\end{align} - - - -The first few Wagstaff primes are: - -3, 11, 43, 683, 2731, 43691, 174763, 2796203, 715827883, 2932031007403, 768614336404564651, … - -The known exponents which produce Wagstaff primes or probable primes are: - -3, 5, 7, 11, 13, 17, 19, 23, 31, 43, 61, 79, 101, 127, 167, 191, 199, 313, 347, 701, 1709, 2617, 3539, 5807, 10501, 10691, 11279, 12391, 14479, 42737, 83339, 95369, (all known Wagstaff primes) - -117239, 127031, 138937, 141079, 267017, 269987, 374321, 986191, 4031399, …, 13347311, 13372531, 15135397 (Wagstaff probable primes) - -In February 2010, Tony Reix discovered the Wagstaff probable prime: -$$ -\frac{2^{4031399}+1}3 -$$ - -which has 1,213,572 digits and was the 3rd biggest probable prime ever found at that date.
- -In September 2013, Ryan Propper announced the discovery of two additional Wagstaff probable primes: -$$ -\frac{2^{13347311}+1}3 -$$ - -and -$$ -\frac{2^{13372531}+1}3 -$$ - -Each is a probable prime with slightly more than 4 million decimal digits. It is not currently known whether there are any exponents between 4031399 and 13347311 that produce Wagstaff probable primes. - -In June 2021, Ryan Propper announced the discovery of the Wagstaff probable prime: -$$ -\frac{2^{15135397}+1}3 -$$ - -which is a probable prime with slightly more than 4.5 million decimal digits. - -Primality has been proven or disproven for the values of p up to 95369. Those with p > 95369 are probable primes. The primality proof for p = 42737 was performed by François Morain in 2007 with a distributed ECPP implementation running on several networks of workstations for 743 GHz-days on an Opteron processor. It was the third largest primality proof by ECPP from its discovery until March 2009. - -The Lucas–Lehmer–Riesel test can be used to identify Wagstaff PRPs. In particular, if p is an exponent of a Wagstaff prime, then -$$ -25^{2^{p-1}} \equiv 25 \pmod{2^p + 1} -$$. - -It is natural to consider more generally numbers of the form -$$ -Q(b,n)=\frac{b^n+1}{b+1} -$$ - -where the base $b \ge 2$. Since for $n$ odd we have -$$ -\frac{b^n+1}{b+1}=\frac{(-b)^n-1}{(-b)-1}=R_n(-b) -$$ - -these numbers are called "Wagstaff numbers base $b$", and sometimes considered a case of the repunit numbers with negative base $-b$. - -For some specific values of $b$, all $Q(b,n)$ (with a possible exception for very small $n$) are composite because of an "algebraic" factorization. Specifically, if $b$ has the form of a perfect power with odd exponent (like 8, 27, 32, 64, 125, 128, 216, 243, 343, 512, 729, 1000, etc.), then the fact that $x^m+1$, with $m$ odd, is divisible by $x+1$ shows that $Q(a^m, n)$ is divisible by $a^n+1$ in these special cases. Another case is $b=4k^4$, with $k$ a positive integer (like 4, 64, 324, 1024, 2500, 5184, etc.), where we have the aurifeuillean factorization. - -However, when $b$ does not admit an algebraic factorization, it is conjectured that an infinite number of $n$ values make $Q(b,n)$ prime; note that all such $n$ are odd primes. - -For $b=10$, the primes themselves have the following appearance: 9091, 909091, 909090909090909091, 909090909090909090909090909091, …, and the corresponding values of $n$ are: 5, 7, 19, 31, 53, 67, 293, 641, 2137, 3011, 268207, ... . - -See repunit for the list of the generalized Wagstaff primes base $b$. (Generalized Wagstaff primes base $b$ are generalized repunit primes base $-b$ with odd $n$.) - -The least prime $p$ such that $Q(n, p)$ is prime, for n = 2, 3, 4, ..., is (0 if no such $p$ exists): - -3, 3, 3, 5, 3, 3, 0, 3, 5, 5, 5, 3, 7, 3, 3, 7, 3, 17, 5, 3, 3, 11, 7, 3, 11, 0, 3, 7, 139, 109, 0, 5, 3, 11, 31, 5, 5, 3, 53, 17, 3, 5, 7, 103, 7, 5, 5, 7, 1153, 3, 7, 21943, 7, 3, 37, 53, 3, 17, 3, 7, 11, 3, 0, 19, 7, 3, 757, 11, 3, 5, 3, ... - -The least base $b$ such that $Q(b, prime(n))$ is prime, for n = 2, 3, 4, ..., is: - -2, 2, 2, 2, 2, 2, 2, 2, 7, 2, 16, 61, 2, 6, 10, 6, 2, 5, 46, 18, 2, 49, 16, 70, 2, 5, 6, 12, 92, 2, 48, 89, 30, 16, 147, 19, 19, 2, 16, 11, 289, 2, 12, 52, 2, 66, 9, 22, 5, 489, 69, 137, 16, 36, 96, 76, 117, 26, 3, ...
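The congruence quoted above is cheap to check with repeated squaring. A minimal Python sketch (ours; this is a necessary condition in the spirit of a PRP test, not a primality proof):

def wagstaff_congruence(p):
    m = 2 ** p + 1
    r = 25 % m
    for _ in range(p - 1):       # square p - 1 times: r = 25^(2^(p-1)) mod m
        r = r * r % m
    return r == 25 % m

for p in (3, 5, 7, 11, 13):      # exponents of known Wagstaff primes
    assert wagstaff_congruence(p)
print("congruence holds for p = 3, 5, 7, 11, 13")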
diff --git a/wiki/wikipedia/581.txt b/wiki/wikipedia/581.txt deleted file mode 100644 index e06dccacadfab9d269c8489db67d3c103893321a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/581.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, Weber's theorem, named after Heinrich Martin Weber, is a result on algebraic curves. It states the following. - -Consider two non-singular curves C and C′ having the same genus g > 1. If there is a rational correspondence φ between C and C′, then φ is a birational transformation. diff --git a/wiki/wikipedia/582.txt b/wiki/wikipedia/582.txt deleted file mode 100644 index 9d04c08b4854bf106061f15939179ce2a38835ad..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/582.txt +++ /dev/null @@ -1,30 +0,0 @@ -In mathematics, a sign sequence, or ±1-sequence or bipolar sequence, is a sequence of numbers, each of which is either 1 or −1. One example is the sequence (1, −1, 1, −1 ...). - -Such sequences are commonly studied in discrepancy theory. - -Around 1932, the mathematician Paul Erdős conjectured that for any infinite ±1-sequence $\textstyle\langle x_1, x_2, \ldots \rangle $ and any integer C, there exist integers k and d such that -$$ - \left| \sum_{i=1}^k x_{i\cdot d} \right| > C . -$$ - -The Erdős discrepancy problem asks for a proof or disproof of this conjecture. - -In February 2014, Alexei Lisitsa and Boris Konev of the University of Liverpool showed that every sequence of 1161 or more elements satisfies the conjecture in the special case C = 2, which proves the conjecture for C ≤ 2. This was the best such bound available at the time. Their proof relied on a SAT-solver computer algorithm whose output takes up 13 gigabytes of data, more than the entire text of Wikipedia at that time, so it cannot be independently verified by human mathematicians without further use of a computer. - -In September 2015, Terence Tao announced a proof of the conjecture, building on work done in 2010 during Polymath5 (a form of crowdsourcing applied to mathematics) and a suggestion made by German mathematician Uwe Stroinski on Tao's blog. His proof was published in 2016, as the first paper in the new journal Discrete Analysis. - -The Erdős discrepancy of finite sequences has been proposed as a measure of local randomness in DNA sequences. This is based on the fact that in the case of finite-length sequences discrepancy is bounded, and therefore one can determine the finite sequences with discrepancy less than a certain value. Those sequences will also be those that "avoid" certain periodicities. By comparing the expected versus observed distribution in the DNA or using other correlation measures, one can make conclusions related to the local behavior of DNA sequences. - -A Barker code is a sequence of N values of +1 and -1, -$$ -x_j \text{ for } j=1,\ldots,N -$$ - -such that -$$ -\left|\sum_{j=1}^{N-v} x_j x_{j+v}\right| \le 1 -$$ - -for all $1 \le v < N$. - -Barker codes of lengths 11 and 13 are used in direct-sequence spread spectrum and pulse compression radar systems because of their low autocorrelation properties. diff --git a/wiki/wikipedia/583.txt b/wiki/wikipedia/583.txt deleted file mode 100644 index eda8a0c61b4db1ad5cb17864e0db26271019afe3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/583.txt +++ /dev/null @@ -1,9 +0,0 @@ -Mario's Super Picross is a Super Famicom sequel to Mario's Picross. The game is compatible with the Super Famicom Mouse. - -After the failure of Mario's Picross in North America, Nintendo decided not to release this game in that region.
The game was made available for download on the Wii's Virtual Console service on December 19, 2006 in Japan and later in PAL regions on September 14, 2007, the 12th anniversary of the game's original Japanese release - marking the first Western release of the game, which was left untranslated, with the original Japanese text intact. This game was re-released for download on the Wii U's Virtual Console service in both Japan and the PAL regions on April 27, 2013. It was made available worldwide on Nintendo Switch Online in September 2020. - -Gameplay remains the same as in Mario's Picross, where the player must decipher the picture in each level, progressing to harder and harder puzzles. However, after completing the first level the player may also play "as" Wario, who presents a different challenge due to changes in the gameplay. - -Each game is played against the clock. Opposing the picross tradition of black and white squares, the puzzles are set in stone and are picked out by Mario with a hammer and chisel. When the player solves a puzzle correctly, the black-and-white representation becomes colored and animated, and the game shows the player the title of the puzzle. When the player finishes a level, Mario will congratulate them on their progress and either bow (in the first and last levels) or give a thumbs up (in all other levels). - -The player must work through levels in order to get access to harder levels, with more rows and columns. In Mario's puzzles, if the player marks an incorrect cell, they receive a time penalty. The amount of time lost doubles for every mistake (one minute, two minutes, four, and finally eight). In Wario's puzzles, the time counts up from zero, and the player is not penalized for marking an incorrect cell. However, the player will not be notified if they make a mistake. diff --git a/wiki/wikipedia/584.txt b/wiki/wikipedia/584.txt deleted file mode 100644 index 608dcbe233c5f3336d6f85f1b802bb915e25d517..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/584.txt +++ /dev/null @@ -1,35 +0,0 @@ -The Wiener–Ikehara theorem is a Tauberian theorem introduced by Shikao Ikehara in 1931. It follows from Wiener's Tauberian theorem, and can be used to prove the prime number theorem (PNT) (Chandrasekharan, 1969). - -Let A(x) be a non-negative, monotonic nondecreasing function of x, defined for 0 ≤ x < ∞. Suppose that -$$ -f(s)=\int_0^\infty A(x) e^{-xs}dx -$$ - -converges for ℜ(s) > 1 to the function f(s) and that, for some non-negative number c, -$$ -f(s) - \frac{c}{s-1} -$$ - -has an extension as a continuous function for ℜ(s) ≥ 1. - -Then the limit as x goes to infinity of $e^{-x} A(x)$ is equal to c. - -An important number-theoretic application of the theorem is to Dirichlet series of the form -$$ -\sum_{n=1}^\infty a(n) n^{-s} -$$ - -where a(n) is non-negative. If the series converges to an analytic function in -$$ -\Re(s) \ge b -$$ - -with a simple pole of residue c at s = b, then -$$ -\sum_{n\le X}a(n) \sim \frac{c}{b} X^b. -$$ - -Applying this to the logarithmic derivative of the Riemann zeta function, where the coefficients in the Dirichlet series are values of the von Mangoldt function, it is possible to deduce the PNT from the fact that the zeta function has no zeroes on the line -$$ -\Re(s)=1.
-$$ diff --git a/wiki/wikipedia/585.txt b/wiki/wikipedia/585.txt deleted file mode 100644 index 7ff687b8b15843bbe7b77716ba2eeade6d393028..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/585.txt +++ /dev/null @@ -1,11 +0,0 @@ -In the theory of dynamical systems, Peixoto's theorem, proved by Maurício Peixoto, states that among all smooth flows on surfaces, i.e. compact two-dimensional manifolds, structurally stable systems may be characterized by the following properties: - -* The set of non-wandering points consists only of periodic orbits and fixed points. - -* The set of fixed points is finite and consists only of hyperbolic equilibrium points. - -* The number of attracting or repelling periodic orbits is finite. - -* Absence of saddle-to-saddle connections. - -Moreover, they form an open set in the space of all flows endowed with the $C^1$ topology. diff --git a/wiki/wikipedia/586.txt b/wiki/wikipedia/586.txt deleted file mode 100644 index 8dcec485e1351505ab1249198e5f40282c7b9692..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/586.txt +++ /dev/null @@ -1,13 +0,0 @@ -In mathematics, the Chowla–Mordell theorem is a result in number theory determining cases where a Gauss sum is the square root of a prime number, multiplied by a root of unity. It was proved and published independently by Sarvadaman Chowla and Louis Mordell, around 1951. - -In detail, if $p$ is a prime number, $\chi$ a nontrivial Dirichlet character modulo $p$, and -$$ -G(\chi)=\sum \chi(a) \zeta^a -$$ - -where $\zeta$ is a primitive $p$-th root of unity in the complex numbers, then -$$ -\frac{G(\chi)}{\sqrt{p}} -$$ - -is a root of unity if and only if $\chi$ is the quadratic residue symbol modulo $p$. The 'if' part was known to Gauss: the contribution of Chowla and Mordell was the 'only if' direction. The ratio in the theorem occurs in the functional equation of L-functions. diff --git a/wiki/wikipedia/587.txt b/wiki/wikipedia/587.txt deleted file mode 100644 index f4f874d542ce4e393b93ade305fa9c0cf467cb66..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/587.txt +++ /dev/null @@ -1,5 +0,0 @@ -In the theory of orthogonal functions, Lauricella's theorem provides a condition for checking the closure of a set of orthogonal functions, namely: - -Theorem. A necessary and sufficient condition that a normal orthogonal set $\{u_k\}$ be closed is that the formal series for each function of a known closed normal orthogonal set $\{v_k\}$ in terms of $\{u_k\}$ converge in the mean to that function. - -The theorem was proved by Giuseppe Lauricella in 1912. diff --git a/wiki/wikipedia/588.txt b/wiki/wikipedia/588.txt deleted file mode 100644 index b0e8fc7345f43320a58c96170f611fd03bca7c13..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/588.txt +++ /dev/null @@ -1,9 +0,0 @@ -In computer science, coinduction is a technique for defining and proving properties of systems of concurrent interacting objects. - -Coinduction is the mathematical dual to structural induction. Coinductively defined types are known as codata and are typically infinite data structures, such as streams. - -As a definition or specification, coinduction describes how an object may be "observed", "broken down" or "destructed" into simpler objects. As a proof technique, it may be used to show that an equation is satisfied by all possible implementations of such a specification. - -To generate and manipulate codata, one typically uses corecursive functions, in conjunction with lazy evaluation.
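For example, an infinite stream, a canonical piece of codata, can be produced corecursively and consumed only as far as needed. A minimal sketch using Python generators, which supply the required laziness:

```python
from itertools import islice

def fibs():
    """Corecursively unfold the infinite stream of Fibonacci numbers."""
    a, b = 0, 1
    while True:
        yield a              # observe the head of the stream...
        a, b = b, a + b      # ...then corecursively produce the tail

print(list(islice(fibs(), 10)))  # laziness: only ten elements are ever forced
```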
Informally, rather than defining a function by pattern-matching on each of the inductive constructors, one defines each of the "destructors" or "observers" over the function result. - -In programming, co-logic programming (co-LP for brevity) "is a natural generalization of logic programming and coinductive logic programming, which in turn generalizes other extensions of logic programming, such as infinite trees, lazy predicates, and concurrent communicating predicates. Co-LP has applications to rational trees, verifying infinitary properties, lazy evaluation, concurrent logic programming, model checking, bisimilarity proofs, etc." Experimental implementations of co-LP are available from The University of Texas at Dallas and in Logtalk and SWI-Prolog. diff --git a/wiki/wikipedia/589.txt b/wiki/wikipedia/589.txt deleted file mode 100644 index 74d87fbdc220158a3b075f3bf71a0b51b4b10e09..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/589.txt +++ /dev/null @@ -1,75 +0,0 @@ -A definition is a statement of the meaning of a term (a word, phrase, or other set of symbols). Definitions can be classified into two large categories, intensional definitions (which try to give the sense of a term) and extensional definitions (which try to list the objects that a term describes). Another important category of definitions is the class of ostensive definitions, which convey the meaning of a term by pointing out examples. A term may have many different senses and multiple meanings, and thus require multiple definitions. - -In mathematics, a definition is used to give a precise meaning to a new term, by describing a condition which unambiguously qualifies what a mathematical term is and is not. Definitions and axioms form the basis on which all of modern mathematics is to be constructed. - -In modern usage, a definition is something, typically expressed in words, that attaches a meaning to a word or group of words. The word or group of words that is to be defined is called the definiendum, and the word, group of words, or action that defines it is called the definiens. For example, in the definition "An elephant is a large gray animal native to Asia and Africa", the word "elephant" is the definiendum, and everything after the word "is" is the definiens. - -The definiens is not the meaning of the word defined, but is instead something that conveys the same meaning as that word. Homonyms are simultaneously homographs (words that share the same spelling, regardless of their pronunciation) and homophones (words that share the same pronunciation, regardless of their spelling). The state of being a homonym is called homonymy. Examples of homonyms are the pair stalk (part of a plant) and stalk (follow/harass a person) and the pair left (past tense of leave) and left (opposite of right). A distinction is sometimes made between "true" homonyms, which are unrelated in origin, such as skate (glide on ice) and skate (the fish), and polysemous homonyms, or polysemes, which have a shared origin, such as mouth (of a river) and mouth (of an animal). - -Polysemy is the capacity for a sign (such as a word, phrase, or symbol) to have multiple meanings (that is, multiple semes or sememes and thus multiple senses), usually related by contiguity of meaning within a semantic field. It is thus usually regarded as distinct from homonymy, in which the multiple meanings of a word may be unconnected or unrelated.
- -In mathematics, definitions are generally not used to describe existing terms, but to describe or characterize a concept. For naming the object of a definition mathematicians can use either a neologism (this was mainly the case in the past) or words or phrases of the common language (this is generally the case in modern mathematics). The precise meaning of a term given by a mathematical definition is often different from the English definition of the word used, which can lead to confusion, particularly when the meanings are close. For example, a set is not exactly the same thing in mathematics and in common language. In some cases, the word used can be misleading; for example, a real number is no more (or less) real than an imaginary number. Frequently, a definition uses a phrase built with common English words, which has no meaning outside mathematics, such as primitive group or irreducible variety. - -Authors have used different terms to classify definitions used in formal languages like mathematics. Norman Swartz classifies a definition as "stipulative" if it is intended to guide a specific discussion. A stipulative definition might be considered a temporary, working definition, and can only be disproved by showing a logical contradiction. In contrast, a "descriptive" definition can be shown to be "right" or "wrong" with reference to general usage. - -Swartz defines a precising definition as one that extends the descriptive dictionary definition (lexical definition) for a specific purpose by including additional criteria. A precising definition narrows the set of things that meet the definition. - -C.L. Stevenson has identified persuasive definition as a form of stipulative definition which purports to state the "true" or "commonly accepted" meaning of a term, while in reality stipulating an altered use (perhaps as an argument for some specific belief). Stevenson has also noted that some definitions are "legal" or "coercive" – their object is to create or alter rights, duties, or crimes. - -A recursive definition, sometimes also called an inductive definition, is one that defines a word in terms of itself, so to speak, albeit in a useful way. Normally this consists of three steps: - -# At least one thing is stated to be a member of the set being defined; this is sometimes called a "base set". - -# All things bearing a certain relation to other members of the set are also to count as members of the set. It is this step that makes the definition recursive. - -# All other things are excluded from the set. - -For instance, we could define a natural number as follows (after Peano): - -# "0" is a natural number. - -# Each natural number has a unique successor, such that: - -#* the successor of a natural number is also a natural number; - -#* distinct natural numbers have distinct successors; - -#* no natural number is succeeded by "0". - -# Nothing else is a natural number. - -So "0" will have exactly one successor, which for convenience can be called "1". In turn, "1" will have exactly one successor, which could be called "2", and so on. Notice that the second condition in the definition itself refers to natural numbers, and hence involves self-reference. Although this sort of definition involves a form of circularity, it is not vicious, and the definition has been quite successful. - -In the same way, we can define ancestor as follows: - -#A parent is an ancestor. - -#A parent of an ancestor is an ancestor. - -#Nothing else is an ancestor.
- -Or simply: an ancestor is a parent or a parent of an ancestor. - -In medical dictionaries, guidelines and other consensus statements and classifications, definitions should as far as possible be: - -*simple and easy to understand, preferably even by the general public; - -*useful clinically - -#A definition must set out the essential attributes of the thing defined. - -#Definitions should avoid circularity. To define a horse as "a member of the species equus" would convey no information whatsoever. For this reason, Locke adds that a definition of a term must not consist of terms which are synonymous with it. This would be a circular definition, a circulus in definiendo. Note, however, that it is acceptable to define two relative terms in respect of each other. Clearly, we cannot define "antecedent" without using the term "consequent", nor conversely. - -#The definition must not be too wide or too narrow. It must be applicable to everything to which the defined term applies (i.e. not miss anything out), and to nothing else (i.e. not include any things to which the defined term would not truly apply). - -#The definition must not be obscure. The purpose of a definition is to explain the meaning of a term which may be obscure or difficult, by the use of terms that are commonly understood and whose meaning is clear. The violation of this rule is known by the Latin term obscurum per obscurius. However, sometimes scientific and philosophical terms are difficult to define without obscurity. - -#A definition should not be negative where it can be positive. We should not define "wisdom" as the absence of folly, or a healthy thing as whatever is not sick. Sometimes this is unavoidable, however. For example, it appears difficult to define blindness in positive terms rather than as "the absence of sight in a creature that is normally sighted". - -Given that a natural language such as English contains, at any given time, a finite number of words, any comprehensive list of definitions must either be circular or rely upon primitive notions. If every term of every definiens must itself be defined, "where at last should we stop?" A dictionary, for instance, insofar as it is a comprehensive list of lexical definitions, must resort to circularity. - -Many philosophers have chosen instead to leave some terms undefined. The scholastic philosophers claimed that the highest genera (called the ten generalissima) cannot be defined, since a higher genus cannot be assigned under which they may fall. Thus being, unity and similar concepts cannot be defined. Locke supposes in An Essay Concerning Human Understanding that the names of simple concepts do not admit of any definition. More recently Bertrand Russell sought to develop a formal language based on logical atoms. Other philosophers, notably Wittgenstein, rejected the need for any undefined simples. Wittgenstein pointed out in his Philosophical Investigations that what counts as a "simple" in one circumstance might not do so in another. He rejected the very idea that every explanation of the meaning of a term needed itself to be explained: "As though an explanation hung in the air unless supported by another one", claiming instead that explanation of a term is only needed to avoid misunderstanding. - -Locke and Mill also argued that individuals cannot be defined. Names are learned by connecting an idea with a sound, so that speaker and hearer have the same idea when the same word is used. 
This is not possible when no one else is acquainted with the particular thing that has "fallen under our notice". Russell offered his theory of descriptions in part as a way of defining a proper name, the definition being given by a definite description that "picks out" exactly one individual. Saul Kripke pointed to difficulties with this approach, especially in relation to modality, in his book Naming and Necessity. - -There is a presumption in the classic example of a definition that the definiens can be stated. Wittgenstein argued that for some terms this is not the case. The examples he used include game, number and family. In such cases, he argued, there is no fixed boundary that can be used to provide a definition. Rather, the items are grouped together because of a family resemblance. For terms such as these it is not possible and indeed not necessary to state a definition; rather, one simply comes to understand the use of the term. diff --git a/wiki/wikipedia/59.txt b/wiki/wikipedia/59.txt deleted file mode 100644 index 2af15110fa7babc1190a35eab8badcc594e46473..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/59.txt +++ /dev/null @@ -1,21 +0,0 @@ -Tresorit is an online cloud storage service based in Switzerland and Hungary that emphasizes enhanced security and data encryption for businesses and individuals. The Business version offers up to 1TB of storage space per user (the Solo version offers 2TB for one user) and extra security features such as DRM, granular access levels and other functions, which Tresorit says create a safer collaborative environment. - -Tresorit's service is accessible through client desktop software, a web-based application and mobile apps. Currently, the software is available for Windows, macOS, Android, Windows Phone 8, iOS, and Linux. - -As of 2021, Swiss Post owns a majority stake in the cloud storage service. Tresorit works as an independent entity under Swiss Post. - -Tresorit was founded in 2011 by Hungarian programmers Istvan Lam, who remains CEO, Szilveszter Szebeni, who is currently CIO, and Gyorgy Szilagyi, who is the CPO of the company. - -Tresorit officially launched its client-side encrypted cloud storage service after emerging from its stealth beta in April 2014. - -In August 2015, Wuala (owned by LaCie and Seagate), a pioneer of secure cloud storage, announced it was closing its service after 7 years, and recommended that its users choose Tresorit as their secure cloud alternative. - -By the end of 2016, Tresorit launched a beta of the software development kit (SDK) ZeroKit. In January 2017, Apple's SDK project CareKit announced the option for mobile app developers using CareKit to integrate ZeroKit, enabling zero knowledge user authentication and encryption for medical and health apps. - -Tresorit claims to encrypt files using client-side encryption with AES-256 before uploading them. Files are also secured by HMAC message authentication codes applied on SHA-512 hashes. - -"Tresors" (German for safes) are encrypted counterparts of uploaded directories. Tresors automatically sync with the cloud as files are added or removed from them, similar to Box.com and Dropbox's desktop software. The main difference between Tresorit and its competition is that Tresorit applies AES-256 client-side encryption to files while they are still local and then uploads them to the cloud.
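As a generic illustration of the client-side technique described above (this is a sketch of the general idea, not Tresorit's actual protocol or code; it assumes the third-party `cryptography` package), a file is encrypted with AES-256 and authenticated before any bytes leave the client:

```python
import hashlib
import hmac
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_upload(plaintext: bytes, enc_key: bytes, mac_key: bytes):
    """Encrypt locally, then authenticate the ciphertext with HMAC-SHA512."""
    nonce = os.urandom(12)  # fresh nonce per file
    blob = nonce + AESGCM(enc_key).encrypt(nonce, plaintext, None)
    tag = hmac.new(mac_key, hashlib.sha512(blob).digest(), hashlib.sha512).digest()
    return blob, tag        # only ciphertext and tag are ever uploaded

enc_key = AESGCM.generate_key(bit_length=256)  # AES-256 key stays on the client
mac_key = os.urandom(64)
blob, tag = encrypt_for_upload(b"quarterly report", enc_key, mac_key)
```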
The company claims that due to its end-to-end encryption, users can share protected files and folders with others and work together on them, keeping the documents synced and secure in every step of the process. There are additional layers of security, but the core privacy feature of the service is that the encryption key never leaves the user: Using Zero-Knowledge encryption protocols, Tresorit is not in possession of the users’ authentication data, so the content of files cannot be accessed from their servers nor delivered to authorities upon request. - -In 2013 and 2014, Tresorit hosted a hacking contest offering $10,000 to anyone who hacked their data encryption methods to gain access to their servers. After some months, the reward was increased to $25,000 and later to $50,000, challenging experts from institutions like Harvard, Stanford or MIT. The contest ran for 468 days and according to the company, nobody was able to break the encryption. - -Tresorit has received a number of nominations and awards. Up-Cloud Rewards named it one of the top 5 Cloud security solutions for 2012. In early 2016, Forbes listed Tresorit's cofounder Istvan Lam among the European "30 under 30". In 2017, Tresorit was listed as finalist in the Cybersecurity Excellence Awards, category Encryption. diff --git a/wiki/wikipedia/590.txt b/wiki/wikipedia/590.txt deleted file mode 100644 index daba8a84b2c5dcd7627bc2d2447541be4785bbf3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/590.txt +++ /dev/null @@ -1,31 +0,0 @@ -In mathematics, the Mashreghi–Ransford inequality is a bound on the growth rate of certain sequences. It is named after J. Mashreghi and T. Ransford. - -Let $(a_n)_{n \geq 0}$ be a sequence of complex numbers, and let -$$ - b_n = \sum_{k=0}^n {n\choose k} a_k, \qquad (n \geq 0), -$$ - -and -$$ - c_n = \sum_{k=0}^n (-1)^{k} {n\choose k} a_k, \qquad (n \geq 0). -$$ - -Recall that the binomial coefficients are defined by -$$ - {n\choose k} = \frac{n!}{k! (n-k)!}. -$$ - -Assume that, for some $\beta>1$, we have $b_n = O(\beta^n)$ and $c_n = O(\beta^n)$ as $n \to \infty$. Then Mashreghi and Ransford showed that -$$ -a_n = O(\alpha^n) -$$, as $n \to \infty$, - -where $\alpha=\sqrt{\beta^2-1}.$ Moreover, there is a universal constant $\kappa$ such that -$$ - \left( \limsup_{n \to \infty} \frac{|a_n|}{\alpha^n} \right) \leq \kappa \left( \limsup_{n \to \infty} \frac{|b_n|}{\beta^n} \right)^{\frac{1}{2}} \left( \limsup_{n \to \infty} \frac{|c_n|}{\beta^n} \right)^{\frac{1}{2}}. -$$ - -The precise value of $\kappa$ is still unknown. However, it is known that -$$ - \frac{2}{\sqrt{3}}\leq \kappa \leq 2. -$$ diff --git a/wiki/wikipedia/591.txt b/wiki/wikipedia/591.txt deleted file mode 100644 index d88675956d518fbb35153e26cb717de0b2b0466c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/591.txt +++ /dev/null @@ -1,25 +0,0 @@ -Jeff Xia's 5-body configuration consists of five point masses, with two pairs in eccentric elliptic orbits around each other and one mass moving along the line of symmetry. Xia proved that for certain initial conditions the final mass will be accelerated to infinite velocity in finite time. This proves the Painlevé conjecture for five bodies and upwards. In physics, the Painlevé conjecture is a theorem about singularities among the solutions to the n-body problem: there are noncollision singularities for n ≥ 4. - -The theorem was proven for n ≥ 5 in 1988 by Jeff Xia and for n = 4 in 2014 by Jinxin Xue.
- -Solutions $(\mathbf{q},\mathbf{p})$ of the n-body problem $\dot{\mathbf{q}} = M^{-1}\mathbf{p}, \dot{\mathbf{p}} = \nabla U(\mathbf{q})$ (where M are the masses and U denotes the gravitational potential) are said to have a singularity if there is a sequence of times $t_n$ converging to a finite $t^*$ where $\nabla U\left(\mathbf{q}\left(t_n\right)\right) \rightarrow \infty$. That is, the forces and accelerations become infinite at some finite point in time. - -A collision singularity occurs if $\mathbf{q}(t)$ tends to a definite limit when $t \rightarrow t^*$, $t < t^*$; if no such limit exists, the singularity is called a noncollision singularity. diff --git a/wiki/wikipedia/592.txt b/wiki/wikipedia/592.txt deleted file mode 100644 --- a/wiki/wikipedia/592.txt +++ /dev/null -In computer science and mathematical logic, satisfiability modulo theories (SMT) is the problem of determining whether a formula is satisfiable with respect to some combination of background theories expressed in classical first-order logic, for example formulas containing atoms of the form $x - y < c$ for variables $x$ and $y$ and constant $c$. - -Most SMT solvers support only quantifier-free fragments of their logics. - -An SMT instance is a generalization of a Boolean SAT instance in which various sets of variables are replaced by predicates from a variety of underlying theories. SMT formulas provide a much richer modeling language than is possible with Boolean SAT formulas. For example, an SMT formula allows us to model the datapath operations of a microprocessor at the word rather than the bit level. - -By comparison, answer set programming is also based on predicates (more precisely, on atomic sentences created from atomic formulas). Unlike SMT, answer-set programs do not have quantifiers, and cannot easily express constraints such as linear arithmetic or difference logic—ASP is at best suitable for Boolean problems that reduce to the free theory of uninterpreted functions. Implementing 32-bit integers as bitvectors in ASP suffers from most of the same problems that early SMT solvers faced: "obvious" identities such as x+y=y+x are difficult to deduce. - -Constraint logic programming does provide support for linear arithmetic constraints, but within a completely different theoretical framework. SMT solvers have also been extended to solve formulas in higher-order logic. - -Early attempts for solving SMT instances involved translating them to Boolean SAT instances (e.g., a 32-bit integer variable would be encoded by 32 single-bit variables with appropriate weights and word-level operations such as 'plus' would be replaced by lower-level logic operations on the bits) and passing this formula to a Boolean SAT solver. This approach, which is referred to as the eager approach, has its merits: by pre-processing the SMT formula into an equivalent Boolean SAT formula existing Boolean SAT solvers can be used "as-is" and their performance and capacity improvements leveraged over time. On the other hand, the loss of the high-level semantics of the underlying theories means that the Boolean SAT solver has to work a lot harder than necessary to discover "obvious" facts (such as $x + y = y + x$ for integer addition.) This observation led to the development of a number of SMT solvers that tightly integrate the Boolean reasoning of a DPLL-style search with theory-specific solvers (T-solvers) that handle conjunctions (ANDs) of predicates from a given theory. This approach is referred to as the lazy approach. - -Dubbed DPLL(T), this architecture gives the responsibility of Boolean reasoning to the DPLL-based SAT solver which, in turn, interacts with a solver for theory T through a well-defined interface. The theory solver only needs to worry about checking the feasibility of conjunctions of theory predicates passed on to it from the SAT solver as it explores the Boolean search space of the formula.
For this integration to work well, however, the theory solver must be able to participate in propagation and conflict analysis, i.e., it must be able to infer new facts from already established facts, as well as to supply succinct explanations of infeasibility when theory conflicts arise. In other words, the theory solver must be incremental and backtrackable. - -Most of the common SMT approaches support decidable theories. However, many real-world systems can only be modelled by means of non-linear arithmetic over the real numbers involving transcendental functions, e.g. an aircraft and its behavior. This fact motivates an extension of the SMT problem to non-linear theories, e.g. determine whether - - - -\begin{array}{lr} - -& (\sin(x)^3 = \cos(\log(y)\cdot x) \vee b \vee -x^2 \geq 2.3y) \wedge \left(\neg b \vee y < -34.4 \vee \exp(x) > {y \over x}\right) - -\end{array} - - - -where -$$ -b \in {\mathbb B}, x,y \in {\mathbb R} -$$ - -is satisfiable. Such problems are, in general, undecidable. (The theory of real closed fields, and thus the full first order theory of the real numbers, are however decidable using quantifier elimination. This is due to Alfred Tarski.) The first order theory of the natural numbers with addition (but not multiplication), called Presburger arithmetic, is also decidable. Since multiplication by constants can be implemented as nested additions, the arithmetic in many computer programs can be expressed using Presburger arithmetic, resulting in decidable formulas. - -Examples of SMT solvers addressing Boolean combinations of theory atoms from undecidable arithmetic theories over the reals are ABsolver, which employs a classical DPLL(T) architecture with a non-linear optimization package as (necessarily incomplete) subordinate theory solver, and iSAT, building on a unification of DPLL SAT-solving and interval constraint propagation called the iSAT algorithm. - -The table below summarizes some of the features of the many available SMT solvers. The column "SMT-LIB" indicates compatibility with the SMT-LIB language; many systems marked 'yes' may support only older versions of SMT-LIB, or offer only partial support for the language. The column "CVC" indicates support for the CVC language. The column "DIMACS" indicates support for the DIMACS format. - -Projects differ not only in features and performance, but also in the viability of the surrounding community, its ongoing interest in a project, and its ability to contribute documentation, fixes, tests and enhancements. - -There are multiple attempts to describe a standardized interface to SMT solvers (and automated theorem provers, a term often used synonymously). The most prominent is the SMT-LIB standard, which provides a language based on S-expressions. Other standardized formats commonly supported are the DIMACS format supported by many Boolean SAT solvers, and the CVC format used by the CVC automated theorem prover. - -The SMT-LIB format also comes with a number of standardized benchmarks and has enabled a yearly competition between SMT solvers called SMT-COMP. Initially, the competition took place during the Computer Aided Verification conference (CAV), but as of 2020 the competition is hosted as part of the SMT Workshop, which is affiliated with the International Joint Conference on Automated Reasoning (IJCAR).
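Many solvers expose the lazy architecture through programmatic bindings as well as SMT-LIB. As a small sketch (assuming the `z3-solver` Python bindings), checking a Boolean combination of linear integer-arithmetic atoms takes a few lines:

```python
from z3 import Bool, Ints, Not, Or, Solver, sat

x, y = Ints("x y")
b = Bool("b")

s = Solver()
s.add(Or(x + y > 5, b))      # Boolean structure over theory atoms
s.add(Or(Not(b), x - y < 2))
s.add(x > 0, y > 0)

if s.check() == sat:         # the DPLL(T)-style search happens inside check()
    print(s.model())         # a satisfying assignment for x, y, b
```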
- -SMT solvers are useful for verification (proving the correctness of programs), for software testing based on symbolic execution, and for synthesis (generating program fragments by searching over the space of possible programs). Outside of software verification, SMT solvers have also been used for type inference and for modelling theoretic scenarios, including modelling actor beliefs in nuclear arms control. - -Computer-aided verification of computer programs often uses SMT solvers. A common technique is to translate preconditions, postconditions, loop conditions, and assertions into SMT formulas in order to determine if all properties can hold. - -There are many verifiers built on top of the Z3 SMT solver. Boogie is an intermediate verification language that uses Z3 to automatically check simple imperative programs. The VCC verifier for concurrent C uses Boogie, as well as Dafny for imperative object-based programs, Chalice for concurrent programs, and Spec# for C#. F* is a dependently typed language that uses Z3 to find proofs; the compiler carries these proofs through to produce proof-carrying bytecode. The Viper verification infrastructure encodes verification conditions to Z3. The sbv library provides SMT-based verification of Haskell programs, and lets the user choose among a number of solvers such as Z3, ABC, Boolector, CVC4, MathSAT and Yices. - -There are also many verifiers built on top of the Alt-Ergo SMT solver. Here is a list of mature applications: - -* Why3, a platform for deductive program verification, uses Alt-Ergo as its main prover; - -* CAVEAT, a C-verifier developed by CEA and used by Airbus; Alt-Ergo was included in the qualification DO-178C of one of its recent aircraft; - -* Frama-C, a framework to analyse C-code, uses Alt-Ergo in the Jessie and WP plugins (dedicated to "deductive program verification"); - -* SPARK uses CVC4 and Alt-Ergo (behind GNATprove) to automate the verification of some assertions in SPARK 2014; - -* Atelier-B can use Alt-Ergo instead of its main prover (increasing success from 84% to 98% on the ANR Bware project benchmarks); - -* Rodin, a B-method framework developed by Systerel, can use Alt-Ergo as a back-end; - -* Cubicle, an open source model checker for verifying safety properties of array-based transition systems. - -* EasyCrypt, a toolset for reasoning about relational properties of probabilistic computations with adversarial code. - -Many SMT solvers implement a common interface format called SMTLIB2 (such files usually have the extension ".smt2"). The LiquidHaskell tool implements a refinement type based verifier for Haskell that can use any SMTLIB2 compliant solver, e.g. CVC4, MathSat, or Z3. - -An important application of SMT solvers is symbolic execution for analysis and testing of programs (e.g., concolic testing), aimed particularly at finding security vulnerabilities. Example tools in this category include SAGE from Microsoft Research, KLEE, S2E, and Triton. SMT solvers that have been used for symbolic-execution applications include Z3, STP, the Z3str family of solvers, and Boolector. diff --git a/wiki/wikipedia/593.txt b/wiki/wikipedia/593.txt deleted file mode 100644 index 51fc826128325f5e03fc308f48805a17a3a88382..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/593.txt +++ /dev/null @@ -1,61 +0,0 @@ -Positional notation (or place-value notation, or positional numeral system) usually denotes the extension to any base of the Hindu–Arabic numeral system (or decimal system). More generally, a positional system is a numeral system in which the contribution of a digit to the value of a number is the value of the digit multiplied by a factor determined by the position of the digit.
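This defining rule, each digit weighted by a power of the base, amounts to a one-line recurrence. A short sketch in Python (the function name is ours):

```python
def from_digits(digits, base):
    """Value of a digit sequence, most significant digit first."""
    value = 0
    for d in digits:
        value = value * base + d  # shift previous digits one position left
    return value

print(from_digits([5, 5, 5], 10))    # 555: five hundreds, five tens, five units
print(from_digits([1, 0, 1, 1], 2))  # 11 in binary positional notation
```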
In early numeral systems, such as Roman numerals, a digit has only one value: I means one, X means ten and C a hundred (however, the value may be negated if placed before another digit). In modern positional systems, such as the decimal system, the position of the digit means that its value must be multiplied by some value: in 555, the three identical symbols represent five hundreds, five tens, and five units, respectively, due to their different positions in the digit string. - -The Babylonian numeral system, base 60, was the first positional system to be developed, and its influence is present today in the way time and angles are counted in tallies related to 60, such as 60 minutes in an hour and 360 degrees in a circle. Today, the Hindu–Arabic numeral system (base ten) is the most commonly used system globally. However, the binary numeral system (base two) is used in almost all computers and electronic devices because it is easier to implement efficiently in electronic circuits. - -Systems with negative base, complex base or negative digits have been described. Most of them do not require a minus sign for designating negative numbers. - -The use of a radix point (decimal point in base ten) extends to include fractions and allows representing any real number with arbitrary accuracy. With positional notation, arithmetical computations are much simpler than with any older numeral system; this led to the rapid spread of the notation when it was introduced in western Europe. - -Today, the base-10 (decimal) system, which is presumably motivated by counting with the ten fingers, is ubiquitous. Other bases have been used in the past, and some continue to be used today. For example, the Babylonian numeral system, credited as the first positional numeral system, was base-60. However, it lacked a real zero. Initially inferred only from context, later, by about 700 BC, zero came to be indicated by a "space" or a "punctuation symbol" (such as two slanted wedges) between numerals. It was a placeholder rather than a true zero because it was not used alone. Nor was it used at the end of a number. Numbers like 2 and 120 (2×60) looked the same because the larger number lacked a final placeholder. Only context could differentiate them. - -The polymath Archimedes (ca. 287–212 BC) invented a decimal positional system in his Sand Reckoner which was based on $10^8$ and later led the German mathematician Carl Friedrich Gauss to lament what heights science would have already reached in his days if Archimedes had fully realized the potential of his ingenious discovery. - -Before positional notation became standard, simple additive systems (sign-value notation) such as Roman numerals were used, and accountants in ancient Rome and during the Middle Ages used the abacus or stone counters to do arithmetic. - -Counting rods and most abacuses have been used to represent numbers in a positional numeral system. With counting rods or abacus to perform arithmetic operations, the writing of the starting, intermediate and final values of a calculation could easily be done with a simple additive system in each position or column. This approach required no memorization of tables (as does positional notation) and could produce practical results quickly. - -The oldest extant positional notation system is that of Chinese rod numerals, used from at least the early 8th century. - -It is not clear whether this system was introduced from India or whether it was an autochthonous development.
- -Indian numerals originate with the Brahmi numerals of about the 3rd century BC, whose symbols were, at the time, not used positionally. - -Medieval Indian numerals are positional, as are the derived Arabic numerals, recorded from the 10th century. - -After the French Revolution (1789–1799), the new French government promoted the extension of the decimal system. - -Some of those pro-decimal efforts—such as decimal time and the decimal calendar—were unsuccessful. - -Other French pro-decimal efforts—currency decimalisation and the metrication of weights and measures—spread widely out of France to almost the whole world. - -J. Lennart Berggren notes that positional decimal fractions were used for the first time by Arab mathematician Abu'l-Hasan al-Uqlidisi as early as the 10th century. The Jewish mathematician Immanuel Bonfils used decimal fractions around 1350, but did not develop any notation to represent them. The Persian mathematician Jamshīd al-Kāshī made the same discovery of decimal fractions in the 15th century. This form of fraction with numerator on top and denominator at bottom without a horizontal bar was also used by 10th century Abu'l-Hasan al-Uqlidisi and 15th century Jamshīd al-Kāshī's work "Arithmetic Key". Both Stevin and E. J. Dijksterhuis indicate that Regiomontanus contributed to the European adoption of general decimals: - -European mathematicians, when taking over from the Hindus, via the Arabs, the idea of positional value for integers, neglected to extend this idea to fractions. For some centuries they confined themselves to using common and sexagesimal fractions... This half-heartedness has never been completely overcome, and sexagesimal fractions still form the basis of our trigonometry, astronomy and measurement of time. ¶ ... Mathematicians sought to avoid fractions by taking the radius R equal to a number of units of length of the form $10^n$ and then assuming for n so great an integral value that all occurring quantities could be expressed with sufficient accuracy by integers. ¶ The first to apply this method was the German astronomer Regiomontanus. To the extent that he expressed goniometrical line-segments in a unit $R/10^n$, Regiomontanus may be called an anticipator of the doctrine of decimal positional fractions. - -Danish numerals display a similar base-20 structure. - -The Māori language of New Zealand also has evidence of an underlying base-20 system as seen in the terms Te Hokowhitu a Tu referring to a war party (literally "the seven 20s of Tu") and Tama-hokotahi, referring to a great warrior ("the one man equal to 20"). - -The binary system was used in the Egyptian Old Kingdom, 3000 BC to 2050 BC. It was cursive by rounding off rational numbers smaller than 1 to 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64, with a 1/64 term thrown away (the system was called the Eye of Horus). - -A number of Australian Aboriginal languages employ binary or binary-like counting systems. For example, in Kala Lagaw Ya, the numbers one through six are urapon, ukasar, ukasar-urapon, ukasar-ukasar, ukasar-ukasar-urapon, ukasar-ukasar-ukasar. - -North and Central American natives used base-4 (quaternary) to represent the four cardinal directions. Mesoamericans tended to add a second base-5 system to create a modified base-20 system. - -A base-5 system (quinary) has been used in many cultures for counting. Plainly it is based on the number of digits on a human hand. It may also be regarded as a sub-base of other bases, such as base-10, base-20, and base-60.
- -A base-8 system (octal) was devised by the Yuki tribe of Northern California, who used the spaces between the fingers to count, corresponding to the digits one through eight. There is also linguistic evidence which suggests that the Bronze Age Proto-Indo Europeans (from whom most European and Indic languages descend) might have replaced a base-8 system (or a system which could only count up to 8) with a base-10 system. The evidence is that the word for 9, newm, is suggested by some to derive from the word for "new", newo-, suggesting that the number 9 had been recently invented and called the "new number". - -Many ancient counting systems use five as a primary base, almost surely coming from the number of fingers on a person's hand. Often these systems are supplemented with a secondary base, sometimes ten, sometimes twenty. In some African languages the word for five is the same as "hand" or "fist" (Dyola language of Guinea-Bissau, Banda language of Central Africa). Counting continues by adding 1, 2, 3, or 4 to combinations of 5, until the secondary base is reached. In the case of twenty, this word often means "man complete". This system is referred to as quinquavigesimal. It is found in many languages of the Sudan region. - -The Telefol language, spoken in Papua New Guinea, is notable for possessing a base-27 numeral system. - -Interesting properties exist when the base is not fixed or positive and when the digit symbol sets denote negative values. There are many more variations. These systems are of practical and theoretic value to computer scientists. - -Balanced ternary uses a base of 3 but the digit set is $\{\bar{1},0,1\}$ instead of $\{0,1,2\}$. The "$\bar{1}$" has an equivalent value of −1. The negation of a number is easily formed by exchanging the 1s and $\bar{1}$s. This system can be used to solve the balance problem, which requires finding a minimal set of known counter-weights to determine an unknown weight. Weights of 1, 3, 9, ... $3^n$ known units can be used to determine any unknown weight up to $1 + 3 + \cdots + 3^n$ units. A weight can be used on either side of the balance or not at all. Weights used on the balance pan with the unknown weight are designated with $\bar{1}$, with 1 if used on the empty pan, and with 0 if not used. If an unknown weight W is balanced with 3 ($3^1$) on its pan and 1 and 27 ($3^0$ and $3^3$) on the other, then its weight in decimal is 25, or $10\bar{1}1$ in balanced base-3. - -$10\bar{1}1_3 = 1 \times 3^3 + 0 \times 3^2 - 1 \times 3^1 + 1 \times 3^0 = 25.$ - -The factorial number system uses a varying radix, giving factorials as place values; they are related to Chinese remainder theorem and residue number system enumerations. This system effectively enumerates permutations. A derivative of this uses the Towers of Hanoi puzzle configuration as a counting system. The configuration of the towers can be put into 1-to-1 correspondence with the decimal count of the step at which the configuration occurs and vice versa. - -Each position does not need to be positional itself. Babylonian sexagesimal numerals were positional, but in each position were groups of two kinds of wedges representing ones and tens (a narrow vertical wedge ( | ) and an open left pointing wedge (<))—up to 14 symbols per position (5 tens (<<<<<) and 9 ones ( ||||||||| ) grouped into one or two near squares containing up to three tiers of symbols, or a place holder (\\) for the lack of a position).
Hellenistic astronomers used one or two alphabetic Greek numerals for each position (one chosen from 5 letters representing 10–50 and/or one chosen from 9 letters representing 1–9, or a zero symbol). diff --git a/wiki/wikipedia/594.txt b/wiki/wikipedia/594.txt deleted file mode 100644 index 2dfe662e1f04786a753b9f9b88f6e403ba3787dd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/594.txt +++ /dev/null @@ -1,61 +0,0 @@ -The Riemann hypothesis is one of the most important conjectures in mathematics. It is a statement about the zeros of the Riemann zeta function. Various geometrical and arithmetical objects can be described by so-called global L-functions, which are formally similar to the Riemann zeta-function. One can then ask the same question about the zeros of these L-functions, yielding various generalizations of the Riemann hypothesis. Many mathematicians believe these generalizations of the Riemann hypothesis to be true. The only cases of these conjectures which have been proven occur in the algebraic function field case (not the number field case). - -Global L-functions can be associated to elliptic curves, number fields (in which case they are called Dedekind zeta-functions), Maass forms, and Dirichlet characters (in which case they are called Dirichlet L-functions). When the Riemann hypothesis is formulated for Dedekind zeta-functions, it is known as the extended Riemann hypothesis (ERH) and when it is formulated for Dirichlet L-functions, it is known as the generalized Riemann hypothesis (GRH). These two statements will be discussed in more detail below. (Many mathematicians use the label generalized Riemann hypothesis to cover the extension of the Riemann hypothesis to all global L-functions, - -not just the special case of Dirichlet L-functions.) - -The generalized Riemann hypothesis (for Dirichlet L-functions) was probably formulated for the first time by Adolf Piltz in 1884. Like the original Riemann hypothesis, it has far-reaching consequences about the distribution of prime numbers. - -The formal statement of the hypothesis follows. A Dirichlet character is a completely multiplicative arithmetic function χ such that there exists a positive integer k with χ(n + k) = χ(n) for all n and χ(n) = 0 whenever gcd(n, k) > 1. If such a character is given, we define the corresponding Dirichlet L-function by - - - -L(\chi,s) = \sum_{n=1}^\infty \frac{\chi(n)}{n^s} - - - -for every complex number s such that Re s > 1. By analytic continuation, this function can be extended to a meromorphic function (only when $ \chi $ is primitive) defined on the whole complex plane. The generalized Riemann hypothesis asserts that, for every Dirichlet character χ and every complex number s with L(χ, s) = 0, if s is not a negative real number, then the real part of s is 1/2. - -The case χ(n) = 1 for all n yields the ordinary Riemann hypothesis. - -Dirichlet's theorem states that if a and d are coprime natural numbers, then the arithmetic progression a, a + d, a + 2d, a + 3d, ... contains infinitely many prime numbers. Let π(x, a, d) denote the number of prime numbers in this progression which are less than or equal to x. If the generalized Riemann hypothesis is true, then for every coprime a and d and for every ε > 0, -$$ -\pi(x,a,d) = \frac{1}{\varphi(d)} \int_2^x \frac{1}{\ln t}dt + O(x^{1/2+\varepsilon})\quad\mbox{ as } \ x\to\infty, -$$ - -where φ(d) is Euler's totient function and O is the Big O notation. This is a considerable strengthening of the prime number theorem.
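The equidistribution that this estimate sharpens is easy to observe numerically. A small stdlib-only sketch in Python (the helper names are ours), counting primes up to x in each residue class coprime to d:

```python
from math import gcd

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_p in enumerate(sieve) if is_p]

x, d = 10**5, 4
counts = {}
for p in primes_up_to(x):
    if gcd(p, d) == 1:                  # skip the finitely many p dividing d
        counts[p % d] = counts.get(p % d, 0) + 1
print(counts)  # each class coprime to d receives roughly pi(x)/phi(d) primes
```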
- -If GRH is true, then every proper subgroup of the multiplicative group $(\mathbb Z/n\mathbb Z)^\times$ omits a number less than $2(\ln n)^2$, as well as a number coprime to n less than $3(\ln n)^2$. In other words, $(\mathbb Z/n\mathbb Z)^\times$ is generated by a set of numbers less than $2(\ln n)^2$. This is often used in proofs, and it has many consequences, for example (assuming GRH): - -*The Miller–Rabin primality test is guaranteed to run in polynomial time. (A polynomial-time primality test which does not require GRH, the AKS primality test, was published in 2002.) - -*The Shanks–Tonelli algorithm is guaranteed to run in polynomial time. - -*The Ivanyos–Karpinski–Saxena deterministic algorithm for factoring polynomials over finite fields with prime constant-smooth degrees is guaranteed to run in polynomial time. - -If GRH is true, then for every prime p there exists a primitive root mod p (a generator of the multiplicative group of integers modulo p) that is less than $O((\ln p)^6).$ - -Goldbach's weak conjecture also follows from the generalized Riemann hypothesis. The yet to be verified proof of Harald Helfgott of this conjecture verifies the GRH for several thousand small characters up to a certain imaginary part to obtain sufficient bounds that prove the conjecture for all integers above $10^{29}$, integers below which have already been verified by calculation. - -Assuming the truth of the GRH, the estimate of the character sum in the Pólya–Vinogradov inequality can be improved to $O\left(\sqrt{q}\log\log q\right)$, q being the modulus of the character. - -Suppose K is a number field (a finite-dimensional field extension of the rationals Q) with ring of integers $O_K$ (this ring is the integral closure of the integers Z in K). If a is an ideal of $O_K$, other than the zero ideal, we denote its norm by Na. The Dedekind zeta-function of K is then defined by - - - -\zeta_K(s) = \sum_a \frac{1}{(Na)^s} - - - -for every complex number s with real part > 1. The sum extends over all non-zero ideals a of $O_K$. - -The Dedekind zeta-function satisfies a functional equation and can be extended by analytic continuation to the whole complex plane. The resulting function encodes important information about the number field K. The extended Riemann hypothesis asserts that for every number field K and every complex number s with $\zeta_K(s) = 0$: if the real part of s is between 0 and 1, then it is in fact 1/2. - -The ordinary Riemann hypothesis follows from the extended one if one takes the number field to be Q, with ring of integers Z. - -The ERH implies an effective version of the Chebotarev density theorem: if L/K is a finite Galois extension with Galois group G, and C a union of conjugacy classes of G, the number of unramified primes of K of norm below x with Frobenius conjugacy class in C is -$$ -\frac{|C|}{|G|}\Bigl(\operatorname{li}(x)+O\bigl(\sqrt x(n\log x+\log|\Delta|)\bigr)\Bigr), -$$ - -where the constant implied in the big-O notation is absolute, n is the degree of L over Q, and Δ its discriminant. diff --git a/wiki/wikipedia/595.txt b/wiki/wikipedia/595.txt deleted file mode 100644 index 67fc6c624e6348c3e0e7cf053556ece9a474b19d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/595.txt +++ /dev/null @@ -1,15 +0,0 @@ -In Riemannian geometry, Gromov's optimal stable 2-systolic inequality is the inequality - -\mathrm{stsys}_2{}^n \leq n!
- -\mathrm{vol}_{2n}(\mathbb{CP}^n), - -valid for an arbitrary Riemannian metric on the complex projective space, where the optimal bound is attained - -by the symmetric Fubini–Study metric, providing a natural geometrisation of quantum mechanics. Here $\operatorname{stsys_2}$ is the stable 2-systole, which in this case can be defined as the infimum of the areas of rational 2-cycles representing the class of the complex projective line $\mathbb{CP}^1 \subset \mathbb{CP}^n$ in 2-dimensional homology. - -The inequality first appeared in Gromov as Theorem 4.36. - -The proof of Gromov's inequality relies on the Wirtinger inequality for exterior 2-forms. - -In the special case n=2, Gromov's inequality becomes $\mathrm{stsys}_2{}^2 \leq 2 \mathrm{vol}_4(\mathbb{CP}^2)$. This inequality can be thought of as an analog of Pu's inequality for the real projective plane $\mathbb{RP}^2$. In both cases, the boundary case of equality is attained by the symmetric metric of the projective plane. Meanwhile, in the quaternionic case, the symmetric metric on $\mathbb{HP}^2$ is not its systolically optimal metric. In other words, the manifold $\mathbb{HP}^2$ admits Riemannian metrics with higher systolic ratio $\mathrm{stsys}_4{}^2/\mathrm{vol}_8$ than for its symmetric metric. diff --git a/wiki/wikipedia/596.txt b/wiki/wikipedia/596.txt deleted file mode 100644 index eec1baeaf568281a3837fe47ada37922f6f19e17..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/596.txt +++ /dev/null @@ -1,453 +0,0 @@ -The monkey and the coconuts is a mathematical puzzle in the field of Diophantine analysis that originated in a fictional short story, published in a magazine, involving five sailors and a monkey on a desert island who divide up a pile of coconuts; the problem is to find the number of coconuts in the original pile (fractional coconuts not allowed). The problem is notorious for its confounding difficulty to unsophisticated puzzle solvers, though with the proper mathematical approach, the solution is trivial. The problem has become a staple in recreational mathematics collections. - -The problem can be expressed as: - -There is a pile of coconuts, owned by five men. One man divides the pile into five equal piles, giving the one left over coconut to a passing monkey, and takes away his own share. The second man then repeats the procedure, dividing the remaining pile into five and taking away his share, as do the third, fourth, and fifth, each of them finding one coconut left over when dividing the pile by five, and giving it to a monkey. Finally, the group divide the remaining coconuts into five equal piles: this time no coconuts are left over. - -How many coconuts were there in the original pile? - -The monkey and the coconuts is the best known representative of a class of puzzle problems requiring integer solutions structured as recursive division or fractionating of some discretely divisible quantity, with or without remainders, and a final division into some number of equal parts, possibly with a remainder. The problem is so well known that the entire class is often referred to broadly as "monkey and coconut type problems", though most are not closely related to the problem. - -Another example: "I have a whole number of pounds of cement, I know not how many, but after addition of a ninth and an eleventh, it was partitioned into 3 sacks, each with a whole number of pounds. How many pounds of cement did I have?" - -Problems ask for either the initial or terminal quantity.
Stated or implied is the smallest positive number that could be a solution. There are two unknowns in such problems, the initial number and the terminal number, but only one equation which is an algebraic reduction of an expression for the relation between them. Common to the class is the nature of the resulting equation, which is a linear Diophantine equation in two unknowns. Most members of the class are determinate, but some are not (the monkey and the coconuts is one of the latter). Familiar algebraic methods are unavailing for solving such equations. - -The origin of the class of such problems has been attributed to the Indian mathematician Mahāvīra in chapter VI, §131½, 132½ of his Ganita-sara-sangraha (“Compendium of the Essence of Mathematics”), circa 850 CE, which dealt with serial division of fruit and flowers with specified remainders. That would make progenitor problems over 1000 years old before their resurgence in the modern era. Problems involving division which invoke the Chinese remainder theorem appeared in Chinese literature as early as the first century CE. Sun Tzu asked: Find a number which leaves the remainders 2, 3 and 2 when divided by 3, 5 and 7, respectively. Diophantus of Alexandria first studied problems requiring integer solutions in the 3rd century CE. The Euclidean algorithm for greatest common divisor, which underlies the solution of such problems, was discovered by the Greek geometer Euclid and published in his Elements circa 300 BCE. - -Prof. David Singmaster, an historian of puzzles, traces a series of less plausibly related problems through the middle ages, with a few references as far back as the Babylonian empire circa 1700 BC. They involve the general theme of adding or subtracting fractions of a pile or specific numbers of discrete objects and asking how many there could have been in the beginning. The next reference to a similar problem is in Jacques Ozanam's Récréations mathématiques et physiques, 1725. In the realm of pure mathematics, Lagrange in 1770 expounded his continued fraction theorem and applied it to the solution of Diophantine equations. - -The first description of the problem in close to its modern wording appeared in the diaries of the mathematician and author Lewis Carroll ("Alice in Wonderland") in 1888, involving a pile of nuts on a table serially divided by four brothers, each time with a remainder of one given to a monkey, and the final division comes out even. The problem never appeared in any of the author's published works, though from other references it appears the problem was in circulation in 1888. An almost identical problem soon appeared in W.W. Rouse Ball's Elementary Algebra, 1890. Such propinquity suggests a common source; dissemination of the problem may have occurred via Carroll's exchanges with Bartholomew Price, professor of mathematics and Carroll's friend and tutor. Four renditions of the problem existed: two forms, one with remainders of one and another with remainders of zero but nuts discarded after division, and two endings, one where the final division has a remainder and one where it comes out even (or no nuts are discarded). The problem was mentioned in works of period mathematicians, with solutions, mostly wrong, indicating that the problem was new and unfamiliar at the time. - -The device of marked objects (see Blue coconuts, below) to aid in conceptualizing the division with remainders first appeared in 1912 in the work of Norman H. Anning involving a bin of apples divided by three men.
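Sun Tzu's problem above is solved mechanically by the Chinese remainder theorem. A short sketch in Python (the helper is ours; it assumes pairwise-coprime moduli and Python 3.8+ for the modular inverse via `pow`):

```python
def crt(remainders, moduli):
    """Fold in one congruence x == r (mod n) at a time."""
    x, m = 0, 1
    for r, n in zip(remainders, moduli):
        t = ((r - x) * pow(m, -1, n)) % n  # solve x + m*t == r (mod n)
        x += m * t
        m *= n
    return x

print(crt([2, 3, 2], [3, 5, 7]))  # 23, the classical answer
```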
The problem became notorious when American novelist and short story writer Ben Ames Williams modified an older problem and included it in a story, "Coconuts", in the October 9, 1926 issue of the Saturday Evening Post. Here is how the problem was stated by Williams (condensed and paraphrased):

Five men and a monkey were shipwrecked on an island. They spent the first day gathering coconuts. During the night, one man woke up and decided to take his share of the coconuts. He divided them into five piles. One coconut was left over so he gave it to the monkey, then hid his share, put the rest back together, and went back to sleep.

Soon a second man woke up and did the same thing. After dividing the coconuts into five piles, one coconut was left over which he gave to the monkey. He then hid his share, put the rest back together, and went back to bed. The third, fourth, and fifth man followed exactly the same procedure. The next morning, after they all woke up, they divided the remaining coconuts into five equal shares. This time no coconuts were left over.

How many coconuts were there in the original pile?

Williams had not included an answer in the story. The magazine was inundated with more than 2,000 letters pleading for an answer to the problem. The Post editor, Horace Lorimer, famously fired off a telegram to Williams saying: "FOR THE LOVE OF MIKE, HOW MANY COCONUTS? HELL POPPING AROUND HERE". Williams continued to get letters asking for a solution or proposing new ones for the next twenty years.

Martin Gardner featured the problem in his April 1958 Mathematical Games column in Scientific American. He stated that Williams had modified an older problem to make it more confounding. In the older version there is a coconut for the monkey on the final division; in Williams's version the final division in the morning comes out even. But the available historical evidence doesn't indicate which versions Williams had access to. Gardner once told his son Jim that it was his favorite problem, so much so that he later chose to make it the first chapter of his "best of columns" collection, The Colossal Book of Mathematics. He said that the Monkey and the Coconuts is "probably the most worked on and least often solved" Diophantine puzzle. Since that time the Williams version of the problem has become a staple of recreational mathematics. The original story containing the problem was reprinted in full in Clifton Fadiman's 1962 anthology The Mathematical Magpie, a book that the Mathematical Association of America recommends for acquisition by undergraduate mathematics libraries.

Numerous variants which vary the number of sailors, monkeys, or number of coconuts have appeared in the literature.
A Diophantine problem

Diophantine analysis is the study of equations with rational coefficients requiring integer solutions. In Diophantine problems, there are fewer equations than unknowns. The "extra" information required to solve the equations is the condition that the solutions be integers. Any solution must satisfy all equations. Some Diophantine equations have no solution, some have one or a finite number, and others have infinitely many solutions. The monkey and the coconuts reduces algebraically to a two-variable linear Diophantine equation of the form ax + by = c, or more generally, (a/d)x + (b/d)y = c/d, where d is the greatest common divisor of a and b. By Bézout's identity, the equation is solvable if and only if d divides c. If it does, the equation has infinitely many periodic solutions of the form x = x0 + tb, y = y0 + ta, where (x0, y0) is a solution and t is a parameter that can be any integer. The problem is not intended to be solved by trial and error; there are deterministic methods for solving for (x0, y0) in this case (see text).

Numerous solutions, starting as early as 1928, have been published both for the original problem and Williams's modification. Clever and succinct solutions using modulo congruences, sieves, and alternate number bases have been devised based partly or mostly on the recursive definition of the problem, a structure that won't be applicable in the general case. The smallest positive solutions to both versions are sufficiently large that trial and error is very likely to be fruitless. An ingenious concept of negative coconuts was introduced that fortuitously solves the original problem. Formalistic solutions are based on Euclid's algorithm applied to the Diophantine coefficients. Finally, the calculus of finite differences yields a parameterized general solution for any number of sailors and all multiples of coconuts that could have been on the original pile. In modern times, a computer brute force search over the positive integers quickly yields the solution.

Before entering upon a solution to the problem, a couple of things may be noted. If there were no remainders, given there are 6 divisions by 5, 5^6 = 15,625 coconuts must be in the pile; on the 6th and last division, each sailor receives 1024 coconuts. No smaller positive number will result in all 6 divisions coming out even. That means that in the problem as stated, any multiple of 15,625 may be added to the pile, and it will satisfy the problem conditions. That also means that the number of coconuts in the original pile is smaller than 15,625, else subtracting 15,625 will yield a smaller solution. But the number in the original pile isn't trivially small, like 5 or 10 (that's why this is a hard problem) - it may be in the hundreds or thousands. Unlike trial and error in the case of guessing a polynomial root, trial and error for a Diophantine root will not result in any obvious convergence. There's no simple way of estimating what the solution will be.

A summary analysis of both the original problem and Williams's version was presented by Martin Gardner when he featured the problem in his Mathematical Games column. Gardner begins by solving the original problem because it is less confounding than the Williams variation. Let N be the size of the original pile and F be the number of coconuts received by each sailor after the final division into 5 equal shares in the morning. Then the number of coconuts left before the morning division is 5F + 1. If that quantity is designated n, the number remaining before the last sailor's division is:
$$
n' = n\cdot \tfrac{5}{4} + 1
$$
reversing the procedure of the sailor during the night. But each sailor followed the same procedure; there are thus defined a recursive series of 5 such $n'$s (replacing $n$ with $n'$ and generating a new $n'$), the fifth and last of which is N itself, the number of coconuts before the division by the first sailor. Successively substituting the $n'$s and $n$ yields:
$$
N=(((((F\cdot 5 + 1)\cdot \tfrac{5}{4} + 1)\cdot \tfrac{5}{4} + 1)\cdot \tfrac{5}{4} + 1)\cdot \tfrac{5}{4} + 1)\cdot \tfrac{5}{4} + 1
$$
which reduces to the Diophantine equation:
$$
1024N = 15625F + 11529
$$
By a fundamental theorem, this equation has a solution if and only if 11529 is a multiple of the greatest common divisor of 1024 and 15625. Since 1024 = 4^5 and 15625 = 5^6, their GCD is 1, and 11529 is a multiple of 1, so the equation is solvable.

Gardner points out that this equation is much too difficult to solve by trial and error.
Moreover, it has an infinite number of solutions. But Gardner was mistaken about the difficulty. In fact, if (N, F) is a solution then so is (N + 15625t, F + 1024t) for any integer t. This means that the equation also has solutions in negative integers. Trying out a few small negative numbers, it turns out N = −4 and F = −1 is a solution. This involves the absurd notion of negative coconuts; so we add 15625 to −4 and add 1024 to −1 to get the smallest positive solution (15621, 1023).

The negative coconuts approach doesn't apply to the Williams version, at least not for any reasonably small |N|, so a more systematic approach is needed.

The search space can be reduced by a series of increasingly larger factors by observing the structure of the problem, so that a bit of trial and error finds the solution. The search space is much smaller if one starts with the number of coconuts received by each man in the morning division, because that number is much smaller than the number in the original pile.

If F is the number of coconuts each sailor receives in the final division in the morning, the pile in the morning is 5F, which must also be divisible by 4, since the last sailor in the night combined 4 piles for the morning division. So the morning pile, call the number n, is a multiple of 20. The pile before the last sailor woke up must have been (5/4)n + 1. If only one sailor woke up in the night, then (5/4)·20 + 1 = 26 works for the minimum number of coconuts in the original pile. But if two sailors woke up, 26 is not divisible by 4, so the morning pile must be some multiple of 20 that yields a pile divisible by 4 before the last sailor wakes up. It so happens that 3·20 = 60 works for two sailors: applying the recursion formula for n twice yields 96 as the smallest number of coconuts in the original pile. 96 is divisible by 4 once more, so for 3 sailors awakening, the pile could have been 121 coconuts. But 121 isn't divisible by 4, so for 4 sailors awakening, one needs to make another leap. At this point the analysis becomes laborious: to accommodate 4 sailors awakening, the morning pile must be some multiple of 60; if one is persistent, it may be discovered that 17·60 = 1020 does the trick, and the minimum number in the original pile would be 2496. A last iteration on 2496 for 5 sailors awakening, i.e. (5/4)·2496 + 1, brings the original pile to 3121 coconuts.

Another device predating the problem is to use extra or marked objects, in this case blue coconuts, to clarify the division process. Suppose that the first sailor, before the division, adds four blue coconuts to the pile to guarantee division by 5 (since we know, even if he doesn't, that there's going to be a remainder of 1, so adding 4 makes the pile divisible by 5). He divides the pile, takes the fifth with an extra (non-blue) coconut which he tosses to the monkey, hides his share, then puts the rest back together, putting the 4 blue coconuts to the side. Each sailor does the same. After the fifth sailor (or after the division in the morning if there's a remainder there), the blue coconuts are left on the side, belonging to no one. Since the original pile is divided 5 times (or 6, if there's a remainder in the morning), including the blue coconuts it must have been 5^5 (5^6) coconuts. That makes the pile 3125 − 4 = 3121 (15625 − 4 = 15621) coconuts. The blue coconuts may be considered to be "virtual" coconuts which play no role in the problem, only serving to clarify the divisibility.
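The brute-force search mentioned earlier is indeed quick. Here is a sketch (ours, not part of the original article) that simulates the night divisions for both versions; morning_pile is an invented helper name:

```python
# A sketch (not from the original article) of the brute-force search:
# simulate the five night divisions directly.
def morning_pile(n, sailors=5):
    """Return the pile left in the morning, or None if some night
    division does not leave exactly one coconut for the monkey."""
    for _ in range(sailors):
        if n % sailors != 1:
            return None
        n = (n - 1) * (sailors - 1) // sailors  # monkey's coconut, sailor's fifth
    return n

# Williams version: the morning division comes out even.
for n in range(1, 20000):
    m = morning_pile(n)
    if m is not None and m % 5 == 0:
        print("Williams version:", n)   # -> 3121
        break

# Original version: the morning division also leaves one for the monkey.
for n in range(1, 20000):
    m = morning_pile(n)
    if m is not None and m % 5 == 1:
        print("original version:", n)   # -> 15621
        break
```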
A simple and obvious solution appears when the divisions and subtractions are performed in base 5. Consider the subtraction when the first sailor takes his share (and the monkey's). Let n0, n1, ... represent the digits of N, the number of coconuts in the original pile, and s0, s1, ... represent the digits of the sailor's share S, both base 5. After the monkey's share, the least significant digit of N must now be 0; after the subtraction, the least significant digit of the number N' left by the first sailor must be 1. Hence the following (the actual number of digits in N as well as S is unknown, but they're irrelevant just now):

n5 n4 n3 n2 n1 0   (N, base 5)
   s4 s3 s2 s1 s0  (S, base 5)
                1  (N', base 5)

The digit subtracted from 0 base 5 to yield 1 is 4, so s0 = 4. But since S is (N − 1)/5, and dividing by 5 in base 5 just shifts the number right one position, n1 = s0 = 4. So now the subtraction looks like:

n5 n4 n3 n2 4 0
   s4 s3 s2 s1 4
              1

Since the next sailor is going to do the same thing on N', the least significant digit of N' becomes 0 after tossing one to the monkey, and the LSD of S' must be 4 for the same reason; the next digit of N' must also be 4. So now it looks like:

n5 n4 n3 n2 4 0
   s4 s3 s2 s1 4
            4 1

Borrowing 1 from n1 (which is now 4) leaves 3, so s1 must be 4, and therefore n2 as well. So now it looks like:

n5 n4 n3 4 4 0
   s4 s3 s2 4 4
            4 1

But the same reasoning again applies to N' as applied to N, so the next digit of N' is 4, so s2 and n3 are also 4, etc. There are 5 divisions; the first four must leave a number ending in 1 (base 5) in the pile for the next division, but the last division must leave a number ending in 0 (base 5) so the morning division will come out even (in 5s). So there are four 4s in N following an LSD of 1: N = 44441 (base 5) = 3121 (base 10).

A straightforward numeric analysis goes like this: If N is the initial number, each of 5 sailors transitions the original pile thus: N → 4(N − 1)/5, or equivalently, N → 4(N + 4)/5 − 4. Repeating this transition 5 times gives the number left in the morning:

N → 4(N + 4)/5 − 4
  → 16(N + 4)/25 − 4
  → 64(N + 4)/125 − 4
  → 256(N + 4)/625 − 4
  → 1024(N + 4)/3125 − 4

Since that number must be an integer and 1024 is relatively prime to 3125, N + 4 must be a multiple of 3125. The smallest such multiple is 3125·1, so N = 3125 − 4 = 3121; the number left in the morning comes to 1020, which is evenly divisible by 5 as required.

A simple succinct solution can be obtained by directly utilizing the recursive structure of the problem: There were five divisions of the coconuts into fifths, each time with one left over (putting aside the final division in the morning). The pile remaining after each division must contain an integral number of coconuts. If there were only one such division, then it is readily apparent that 5·1 + 1 = 6 is a solution. In fact any multiple of five plus one is a solution, so a possible general formula is 5k − 4, since a multiple of 5 plus 1 is also a multiple of 5 minus 4. So 11, 16, etc. also work for one division. If two divisions are done, a multiple of 5^2 = 25 rather than 5 must be used, because 25 can be divided by 5 twice. So the number of coconuts that could be in the pile is 25k − 4; k = 1, yielding 21, is the smallest positive number that can be successively divided by 5 twice with remainder 1. If there are 5 divisions, then multiples of 5^5 = 3125 are required; the smallest such number is 3125 − 4 = 3121. After 5 divisions, there are 1020 coconuts left over, a number divisible by 5 as required by the problem. In fact, after n divisions, it can be proven that the remaining pile is divisible by n, a property made convenient use of by the creator of the problem.
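The base-5 pattern above is easy to confirm mechanically. A small sketch (ours, not from the original article):

```python
# A small sketch (not from the original article): 3121 is 44441 in
# base 5, and each intermediate pile ends in digit 1 (ready for the next
# division) until the last, which ends in 0 (an even morning split).
def to_base5(n):
    digits = ""
    while n:
        n, d = divmod(n, 5)
        digits = str(d) + digits
    return digits or "0"

n = 3121
print(to_base5(n))        # -> 44441
for _ in range(5):
    n = (n - 1) * 4 // 5  # one coconut to the monkey, hide a fifth
    print(to_base5(n))    # 34441, 30441, 22341, 20101, 13040
```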
A formal way of stating the above argument is:

The original pile of coconuts will be divided by 5 a total of 5 times with a remainder of 1, not considering the last division in the morning. Let N = number of coconuts in the original pile. Each division must leave the number of nuts in the same congruence class (mod 5). So,
$$
N \equiv \tfrac{4}{5}(N-1) \pmod 5
$$
(the −1 is the nut tossed to the monkey)
$$
5N \equiv 4N - 4 \pmod 5
$$
$$
N \equiv -4 \pmod 5
$$
(−4 is the congruence class)

So if we began with a pile in congruence class −4, the pile remains in congruence class −4 after each division. Since ultimately the pile must survive five successive divisions by 5, the original pile must lie in class −4 modulo 5^5, and the smallest positive such number is 5^5 − 4 = 3121 coconuts. The remaining pile of 1020 coconuts conveniently divides evenly by 5 in the morning. This solution essentially reverses how the problem was (probably) constructed.

The equivalent Diophantine equation for this version is:
$$
1024N = 15625F + 8404 \qquad (1)
$$
where N is the original number of coconuts, and F is the number received by each sailor on the final division in the morning. This is only trivially different from the equation above for the predecessor problem, and solvability is guaranteed by the same reasoning.

Reordering,
$$
1024N - 15625F = 8404 \qquad (2)
$$
This Diophantine equation has a solution which follows directly from the Euclidean algorithm; in fact, it has infinitely many periodic solutions, positive and negative. If (x0, y0) is a solution of 1024x − 15625y = 1, then N0 = 8404·x0, F0 = 8404·y0 is a solution of (2), which means any solution must have the form
$$
\begin{cases}
N = N_0 + 15625\cdot t \\
F = F_0 + 1024\cdot t
\end{cases} \qquad (3)
$$
where $t$ is an arbitrary parameter that can have any integral value.

One can take both sides of (1) above modulo 1024; since 1024N ≡ 0,
$$
0 \equiv 15625F + 8404 \pmod{1024}
$$
Another way of thinking about it is that in order for $N$ to be an integer, the RHS of (1) must be an integral multiple of 1024; that property will be unaltered by factoring out as many multiples of 1024 as possible from the RHS. Reducing both sides by multiples of 1024,
$$
0 \equiv (15625F - 15\cdot 1024F) + (8404 - 8\cdot 1024) \pmod{1024}
$$
subtracting,
$$
0 \equiv 265F + 212 \pmod{1024}
$$
factoring,
$$
0 \equiv 53\cdot(5F + 4) \pmod{1024}
$$
The RHS must still be a multiple of 1024; since 53 is relatively prime to 1024, 5F + 4 must be a multiple of 1024. The smallest such multiple is 1·1024, so 5F + 4 = 1024 and F = 204. Substituting into (1),
$$
1024N = 15625\cdot 204 + 8404 \Rightarrow N = \frac{3195904}{1024} \Rightarrow N = 3121
$$
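The congruence reduction above can be reproduced with modular inverses. A sketch (ours, not part of the original article); pow(a, -1, m) requires Python 3.8 or later:

```python
# A sketch (not from the original article) of the congruence reduction:
# (1) modulo 1024 reduces to 265*F + 212 = 0 (mod 1024).
M = 1024
F = (-212 * pow(265, -1, M)) % M           # modular inverse of 265 mod 1024
print(F)                                   # -> 204
N = (15625 * F + 8404) // 1024
print(N, (15625 * F + 8404) % 1024 == 0)   # -> 3121 True
```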
The Euclidean algorithm is quite tedious, but it provides a general methodology for solving equations ax + by = c requiring integral answers. From (2) above, it is evident that 1024 (2^10) and 15625 (5^6) are relatively prime and therefore their GCD is 1, but we need the reduction equations for back substitution to obtain N and F in terms of these two quantities:

First, obtain successive remainders until the GCD remains:

15625 = 15·1024 + 265 (a)
1024 = 3·265 + 229 (b)
265 = 1·229 + 36 (c)
229 = 6·36 + 13 (d)
36 = 2·13 + 10 (e)
13 = 1·10 + 3 (f)
10 = 3·3 + 1 (g) (the remainder 1 is the GCD of 15625 and 1024)

1 = 10 − 3(13 − 1·10) = 4·10 − 3·13 (reorder (g), substitute for 3 from (f) and combine)
1 = 4·(36 − 2·13) − 3·13 = 4·36 − 11·13 (substitute for 10 from (e) and combine)
1 = 4·36 − 11·(229 − 6·36) = −11·229 + 70·36 (substitute for 13 from (d) and combine)
1 = −11·229 + 70·(265 − 1·229) = −81·229 + 70·265 (substitute for 36 from (c) and combine)
1 = −81·(1024 − 3·265) + 70·265 = −81·1024 + 313·265 (substitute for 229 from (b) and combine)
1 = −81·1024 + 313·(15625 − 15·1024) = 313·15625 − 4776·1024 (substitute for 265 from (a) and combine)

So the pair (N0, F0) = (−4776·8404, −313·8404); the smallest $t$ (see (3) in the previous subsection) that will make both N and F positive is 2569, so:
$$
N = N_0 + 15625\cdot 2569 = 3121
$$
$$
F = F_0 + 1024\cdot 2569 = 204
$$

Alternately, one may use a continued fraction, whose construction is based on the Euclidean algorithm. The continued fraction for 1024/15625 (0.065536 exactly) is [0; 15, 3, 1, 6, 2, 1, 3, 3]; its penultimate convergent is 313/4776, giving us x0 = −4776 and y0 = −313. The least value of t for which both N and F are non-negative is 2569, so
$$
N = -4776\cdot 8404 + 15625\cdot 2569 = 3121.
$$
This is the smallest positive number that satisfies the conditions of the problem.
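The back substitution above is exactly what the extended Euclidean algorithm automates. A sketch (ours, not part of the original article):

```python
# A sketch (not from the original article): extended Euclidean algorithm,
# then shift by the period to reach the smallest positive solution.
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = ext_gcd(1024, 15625)        # 1024*x + 15625*y = 1
print(g, x, y)                        # -> 1 -4776 313
x0, y0 = x, -y                        # rewritten as 1024*x0 - 15625*y0 = 1
N0, F0 = 8404 * x0, 8404 * y0
t = 2569                              # smallest t making both N and F positive
print(N0 + 15625 * t, F0 + 1024 * t)  # -> 3121 204
```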
When the number of sailors is a parameter, let it be $m$, rather than a computational value, careful algebraic reduction of the relation between the number of coconuts in the original pile and the number allotted to each sailor in the morning yields an analogous Diophantine relation whose coefficients are expressions in $m$.

The first step is to obtain an algebraic expansion of the recurrence relation corresponding to each sailor's transformation of the pile, $n_i$ being the number left by the $i$-th sailor:
$$
n_i = \frac{m-1}{m}(n_{i-1}-1)
$$
where $n_0 = N$, the number originally gathered, and $n_m$ is the number left in the morning. Expanding the recurrence by substituting $n_i$ for $n_{i-1}$ $m$ times yields:
$$
n_m=\left(\frac{m-1}{m}\right)^m\cdot n_0 - \left[\left(\frac{m-1}{m}\right)^m+...+\left(\frac{m-1}{m}\right)^2+\frac{m-1}{m}\right]
$$
Factoring the latter term,
$$
n_m=\left(\frac{m-1}{m}\right)^m\cdot n_0 - \left(\frac{m-1}{m}\right)\cdot \left[\left(\frac{m-1}{m}\right)^{m-1}+...+\frac{m-1}{m}+1\right]
$$
The power series polynomial in brackets, of the form $x^{m-1}+...+x+1$, sums to $(1-x^m)/(1-x)$, so
$$
n_m=\left(\frac{m-1}{m}\right)^m\cdot n_0 - \left(\frac{m-1}{m}\right)\cdot \left[\left(1-\left(\frac{m-1}{m}\right)^m\right)\bigg/\left(1-\frac{m-1}{m}\right)\right]
$$
which simplifies to:
$$
n_m=\left(\frac{m-1}{m}\right)^m\cdot n_0 - (m-1)\cdot \frac{m^m - (m-1)^m}{m^m}
$$
But $n_m$ is the number left in the morning, which is a multiple of $m$, namely $m\cdot F$, where $F$ is the number allotted to each sailor in the morning:
$$
m\cdot F = \left(\frac{m-1}{m}\right)^m\cdot n_0 - (m-1)\cdot \frac{m^m - (m-1)^m}{m^m}
$$
Solving for $n_0$ (= $N$),
$$
N=m^m\cdot \frac{m-1+m\cdot F}{(m-1)^m} - (m-1)
$$
The equation is a linear Diophantine equation in two variables, $N$ and $F$. $m$ is a parameter that can be any integer. The nature of the equation and the method of its solution do not depend on $m$.

Number theoretic considerations now apply. For $N$ to be an integer, it is sufficient that $\frac{m-1+m\cdot F}{(m-1)^m}$ be an integer, so let it be $r$:
$$
r=\frac{m-1+m\cdot F}{(m-1)^m}
$$
The equation must be transformed into the form $ax+by=\pm 1$, whose solutions are formulaic. Hence:
$$
(m-1)^m\cdot r - m\cdot s = -1, \quad\text{where } s = 1+F
$$
Because $m$ and $m-1$ are relatively prime, there exist integer solutions $(r,s)$ by Bézout's identity. This equation can be restated as:
$$
(m-1)^m\cdot r\equiv -1 \pmod m
$$
But $(m-1)^m$, expanded by the binomial theorem, is a polynomial $Zm-1$ if $m$ is odd and $Zm+1$ if $m$ is even, where $Z$ is a polynomial in $m$ with integer coefficients; hence $(m-1)^m \equiv -1 \pmod m$ for $m$ odd and $\equiv +1 \pmod m$ for $m$ even. Therefore $r_0=1$ if $m$ is odd and $r_0=-1$ if $m$ is even is a solution.

Bézout's identity gives the periodic solution $r=r_0+k\cdot m$, so substituting for $r$ in the Diophantine equation and rearranging:
$$
N=r_0\cdot m^m - (m-1) + k\cdot m^{m+1}
$$
where $r_0=1$ for $m$ odd and $r_0=-1$ for $m$ even and $k$ is any integer. For a given $m$, the smallest positive $k$ will be chosen such that $N$ satisfies the constraints of the problem statement.

In Williams's version of the problem, $m$ is 5 sailors, so $r_0$ is 1, and $k$ may be taken to be zero to obtain the lowest positive answer, so N = 1·5^5 − 4 = 3121 for the number of coconuts in the original pile. (It may be noted that the next sequential solution of the equation, for k = −1, is −12504, so trial and error around zero will not solve the Williams version of the problem, unlike the original version, whose equation fortuitously had a small-magnitude negative solution.)

Evaluating the formula for the first few $m$, with the smallest $k$ that makes $N$ positive, gives the smallest solutions:

m = 2: N = 3 (r0 = −1, k = 1)
m = 3: N = 25 (r0 = 1, k = 0)
m = 4: N = 765 (r0 = −1, k = 1)
m = 5: N = 3121 (r0 = 1, k = 0)
m = 6: N = 233275 (r0 = −1, k = 1)

Adding any non-negative multiple of $m^{m+1}$ gives the remaining solutions.

Other variants, including the putative predecessor problem, have related general solutions for an arbitrary number of sailors. When the morning division also has a remainder of one, the solution is:
$$
N=-(m-1) + k\cdot m^{m+1}
$$
For $m=5$ and $k=1$ this yields 15,621 as the smallest positive number of coconuts for the pre-Williams version of the problem.
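The closed form can be checked against a direct simulation for several values of m. A sketch (ours, not part of the original article); smallest_pile and simulate are invented helper names:

```python
# A sketch (not from the original article): evaluate the closed form
# N = r0*m^m - (m-1) + k*m^(m+1) and cross-check it by simulating the
# night divisions for m sailors (Williams ending: even morning split).
def smallest_pile(m):
    r0 = 1 if m % 2 else -1
    k = 0 if m % 2 else 1          # smallest k giving a positive N
    return r0 * m**m - (m - 1) + k * m**(m + 1)

def simulate(n, m):
    """True if n survives m divisions with remainder 1 and an even morning split."""
    for _ in range(m):
        if n % m != 1:
            return False
        n = (n - 1) * (m - 1) // m
    return n % m == 0

for m in range(2, 7):
    n = smallest_pile(m)
    print(m, n, simulate(n, m))    # 3, 25, 765, 3121, 233275 -- all True
```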
In some earlier alternate forms of the problem, the divisions came out even, and nuts (or items) were allocated from the remaining pile after division. In these forms, the recursion relation is:
$$
n_i = \frac{m-1}{m}n_{i-1}-1
$$
The alternate form also had two endings: one where the morning division comes out even, and one where there is one nut left over for the monkey.

When the morning division comes out even, the general solution reduces via a similar derivation to:
$$
N=-m + k\cdot m^{m+1}
$$
For example, when $m=4$ and $k=1$, the original pile has 1020 coconuts, and after four successive even divisions in the night, with a coconut allocated to the monkey after each division, 320 coconuts remain in the morning, so the final division gives 80 to each man and comes out even with no coconut left over.

When the morning division results in a nut left over, the general solution is:
$$
N=r_0\cdot m^m - m + k\cdot m^{m+1}
$$
where $r_0=-1$ if $m$ is odd, and $r_0=1$ if $m$ is even. For example, when $m=3$, $r_0=-1$ and $k=1$, the original pile has 51 coconuts, and after three successive divisions in the night, with a coconut allocated to the monkey after each division, there are 13 coconuts left over in the morning, so the final division has a coconut left over for the monkey.

Other post-Williams variants which specify different remainders, including negative ones (i.e. the monkey adds coconuts to the pile), have been treated in the literature. The solution is:
$$
N=r_0\cdot m^m - c\cdot(m-1) + k\cdot m^{m+1}
$$
where $r_0=1$ for $m$ odd and $r_0=-1$ for $m$ even, $c$ is the remainder after each division (or the number of monkeys), and $k$ is any integer ($c$ is negative if the monkeys add coconuts to the pile).

Other variants, in which the number of men or the remainders vary between divisions, are generally outside the class of problems associated with the monkey and the coconuts, though these similarly reduce to linear Diophantine equations in two variables. Their solutions yield to the same techniques and present no new difficulties.

diff --git a/wiki/wikipedia/597.txt b/wiki/wikipedia/597.txt deleted file mode 100644 index d6fcb3a6fc27a72fbd422edd0dd66e30743487fa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/597.txt +++ /dev/null @@ -1,47 +0,0 @@

In geometry, the kissing number of a mathematical space is defined as the greatest number of non-overlapping unit spheres that can be arranged in that space such that they each touch a common unit sphere. For a given sphere packing (arrangement of spheres) in a given space, a kissing number can also be defined for each individual sphere as the number of spheres it touches. For a lattice packing the kissing number is the same for every sphere, but for an arbitrary sphere packing the kissing number may vary from one sphere to another.

Other names for kissing number that have been used are Newton number (after the originator of the problem), and contact number.

In general, the kissing number problem seeks the maximum possible kissing number for n-dimensional spheres in (n + 1)-dimensional Euclidean space. Ordinary spheres correspond to two-dimensional closed surfaces in three-dimensional space.

Finding the kissing number when centers of spheres are confined to a line (the one-dimensional case) or a plane (two-dimensional case) is trivial. Proving a solution to the three-dimensional case, despite being easy to conceptualise and model in the physical world, eluded mathematicians until the mid-20th century.

In one dimension, the kissing number is 2.

In two dimensions, the kissing number is 6.

Proof: Consider a circle with center C that is touched by circles with centers C1, C2, .... Consider the rays C Ci. These rays all emanate from the same center C, so the sum of angles between adjacent rays is 360°.

Assume by contradiction that there are more than six touching circles. Then at least two adjacent rays, say C C1 and C C2, are separated by an angle of less than 60°. The segments C Ci have the same length – 2r – for all i. Therefore, the triangle C C1 C2 is isosceles, and its third side – C1 C2 – has a side length of less than 2r. Therefore, the circles 1 and 2 intersect – a contradiction.

In three dimensions, the kissing number is 12, but the correct value was much more difficult to establish than in dimensions one and two.
It is easy to arrange 12 spheres so that each touches a central sphere, but there is a lot of space left over, and it is not obvious that there is no way to pack in a 13th sphere. (In fact, there is so much extra space that any two of the 12 outer spheres can exchange places through a continuous movement without any of the outer spheres losing contact with the center one.) This was the subject of a famous disagreement between mathematicians Isaac Newton and David Gregory. Newton correctly thought that the limit was 12; Gregory thought that a 13th could fit. Some incomplete proofs that Newton was correct were offered in the nineteenth century, most notably one by Reinhold Hoppe, but the first correct proof (according to Brass, Moser, and Pach) did not appear until 1953.

The twelve neighbors of the central sphere correspond to the maximum bulk coordination number of an atom in a crystal lattice in which all atoms have the same size (as in a chemical element). A coordination number of 12 is found in a cubic close-packed or a hexagonal close-packed structure.

In four dimensions, it was known for some time that the answer was either 24 or 25. It is straightforward to produce a packing of 24 spheres around a central sphere (one can place the spheres at the vertices of a suitably scaled 24-cell centered at the origin). As in the three-dimensional case, there is a lot of space left over—even more, in fact, than for n = 3—so the situation was even less clear. In 2003, Oleg Musin proved the kissing number for n = 4 to be 24.

The kissing number in n dimensions is unknown for n > 4, except for n = 8 (240) and n = 24 (196,560). The results in these dimensions stem from the existence of highly symmetrical lattices: the E8 lattice and the Leech lattice.

If arrangements are restricted to lattice arrangements, in which the centres of the spheres all lie on points in a lattice, then this restricted kissing number is known for n = 1 to 9 and n = 24 dimensions. For 5, 6, and 7 dimensions, the arrangement with the highest known kissing number found so far is the optimal lattice arrangement, but the existence of a non-lattice arrangement with a higher kissing number has not been excluded.

The kissing number problem can be generalized to the problem of finding the maximum number of non-overlapping congruent copies of any convex body that touch a given copy of the body. There are different versions of the problem depending on whether the copies are only required to be congruent to the original body, translates of the original body, or translated by a lattice. For the regular tetrahedron, for example, it is known that both the lattice kissing number and the translative kissing number are equal to 18, whereas the congruent kissing number is at least 56.

There are several approximation algorithms on intersection graphs where the approximation ratio depends on the kissing number. For example, there is a polynomial-time 10-approximation algorithm to find a maximum non-intersecting subset of a set of rotated unit squares.
The kissing number problem can be stated as the existence of a solution to a set of inequalities. Let $x_n$ be a set of N D-dimensional position vectors of the centres of the spheres. The condition that this set of spheres can lie round the centre sphere without overlapping is:
$$
\exists x\ \left\{ \forall_n \{x_n^\textsf{T} x_n = 1\} \land \forall_{m,n: m \neq n} \{ (x_n - x_m)^\textsf{T}(x_n - x_m) \geq 1\} \right\}
$$
Thus the problem for each dimension can be expressed in the existential theory of the reals. However, general methods of solving problems in this form take at least exponential time, which is why this problem has only been solved up to four dimensions. By adding additional variables $y_{nm}$, this can be converted to a single quartic equation in N(N − 1)/2 + DN variables:
$$
\exists xy\ \left\{ \sum_n \left(x_n^\textsf{T} x_n - 1\right)^2 + \sum_{m,n:\, m<n} \left(\left(x_n - x_m\right)^\textsf{T}\left(x_n - x_m\right) - 1 - y_{nm}^2\right)^2 = 0 \right\}
$$
Therefore, to solve the case in D = 5 dimensions and N = 40 + 1 vectors would be equivalent to determining the existence of real solutions to a quartic polynomial in 1025 variables. For the D = 24 dimensions and N = 196560 + 1, the quartic would have 19,322,732,544 variables. An alternative statement in terms of distance geometry is given by the distances squared $R_{mn}$ between the mth and nth sphere:
$$
\exists R\ \left\{ \forall_n \{R_{0n} = 1\} \land \forall_{m,n:\, m<n} \{R_{mn} \geq 1\} \right\}
$$
This must be supplemented with the condition that the Cayley–Menger determinant is zero for any set of points which forms a (D + 1)-simplex in D dimensions, since that volume must be zero. Setting $R_{mn} = 1 + y_{mn}^2$ gives a set of simultaneous polynomial equations in just y which must be solved for real values only. The two methods, being entirely equivalent, have various different uses. For example, in the second case one can randomly alter the values of the y by small amounts to try to minimise the polynomial in terms of the y.
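For small cases the first formulation can be checked directly. The sketch below (ours, not part of the original article) verifies the classical D = 3, N = 12 witness: the twelve normalized vertices of a regular icosahedron satisfy the unit-length and minimum-separation constraints.

```python
# A sketch (not from the original article): verify the D = 3, N = 12
# constraint system using the 12 vertices of a regular icosahedron,
# (0, ±1, ±phi) and cyclic permutations, normalized to unit length.
from itertools import product
from math import sqrt

phi = (1 + sqrt(5)) / 2
raw = []
for s1, s2 in product((1, -1), repeat=2):
    raw += [(0, s1, s2 * phi), (s1, s2 * phi, 0), (s2 * phi, 0, s1)]

norm = sqrt(1 + phi**2)
pts = [tuple(c / norm for c in v) for v in raw]          # x_n^T x_n = 1

min_sq = min(sum((a - b)**2 for a, b in zip(p, q))
             for i, p in enumerate(pts) for q in pts[i+1:])
print(len(pts), round(min_sq, 4), min_sq >= 1)           # -> 12 1.1056 True
```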
diff --git a/wiki/wikipedia/598.txt b/wiki/wikipedia/598.txt deleted file mode 100644 index 544e604f75f2bac74d58c78c2ece7612030191c7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/598.txt +++ /dev/null @@ -1,136 +0,0 @@

In mathematics, Riemann–Hilbert problems, named after Bernhard Riemann and David Hilbert, are a class of problems that arise in the study of differential equations in the complex plane. Several existence theorems for Riemann–Hilbert problems have been produced by Mark Krein, Israel Gohberg and others (see the book by Clancey and Gohberg (1981)).

Suppose that $\Sigma$ is a closed simple contour in the complex plane dividing the plane into two parts denoted by $\Sigma_{+}$ (the inside) and $\Sigma_{-}$ (the outside), determined by the index of the contour with respect to a point. The classical problem, considered in Riemann's PhD dissertation (see Pandey), was that of finding a function
$$
M_+(z) = u(z) + i v(z)
$$
analytic inside $\Sigma_{+}$ such that the boundary values of $M_+$ along $\Sigma$ satisfy the equation
$$
a(z)u(z) - b(z)v(z) = c(z)
$$
for all $z\in \Sigma$, where a, b, and c are given real-valued functions.

By the Riemann mapping theorem, it suffices to consider the case when $\Sigma$ is the unit circle. In this case, one may seek $M_+(z)$ along with its Schwarz reflection:
$$
M_-(z) = \overline{M_+\left(\bar{z}^{-1}\right)}.
$$
On the unit circle $\Sigma$, one has $z = 1/\bar{z}$, and so
$$
M_-(z) = \overline{M_+(z)},\quad z\in\Sigma.
$$
Hence the problem reduces to finding a pair of functions $M_+(z)$ and $M_-(z)$, analytic, respectively, on the inside and the outside of the unit disc, so that on the unit circle
$$
\frac{a(z)+ib(z)}{2}M_+(z) + \frac{a(z)-ib(z)}{2}M_-(z) = c(z),
$$
and, moreover, so that the condition at infinity holds:
$$
\lim_{z\to\infty}M_-(z) = \overline{M_+(0)}.
$$

Hilbert's generalization was to consider the problem of attempting to find $M_+$ and $M_-$, analytic, respectively, on the inside and outside of the curve $\Sigma$, such that on $\Sigma$ one has
$$
\alpha(z) M_+(z) + \beta(z) M_-(z) = c(z)
$$
where α, β, and c are arbitrary given complex-valued functions (no longer just complex conjugates).

In the Riemann problem as well as Hilbert's generalization, the contour $\Sigma$ was simple. A full Riemann–Hilbert problem allows that the contour may be composed of a union of several oriented smooth curves, with no intersections. The + and − sides of the "contour" may then be determined according to the index of a point with respect to $\Sigma$. The Riemann–Hilbert problem is to find a pair of functions, $M_+$ and $M_-$, analytic, respectively, on the + and − side of $\Sigma$, subject to the equation
$$
\alpha(z) M_+(z) + \beta(z) M_-(z) = c(z)
$$
for all z ∈ Σ.

Given an oriented "contour" Σ (technically: an oriented union of smooth curves without points of infinite self-intersection in the complex plane), a Riemann–Hilbert factorization problem is the following. Given a matrix function V defined on the contour Σ, find a holomorphic matrix function M defined on the complement of Σ, such that two conditions are satisfied:

1. If $M_+$ and $M_-$ denote the non-tangential limits of M as we approach Σ, then $M_+ = M_-V$ at all points of non-intersection in Σ.
2. As z tends to infinity along any direction outside Σ, M tends to the identity matrix.

In the simplest case V is smooth and integrable. In more complicated cases it could have singularities. The limits $M_+$ and $M_-$ could be classical and continuous or they could be taken in the L2 sense. At end-points or intersection points of the contour Σ the jump condition is not defined; constraints on the growth of M near those points have to be posed to ensure uniqueness (see the scalar problem below).

Riemann–Hilbert problems have applications to several related classes of problems.

A. Integrable models: The inverse scattering or inverse spectral problem associated to the Cauchy problems for 1+1 dimensional partial differential equations on the line, or to periodic problems, or even to initial-boundary value problems (Fokas), can be stated as a Riemann–Hilbert problem. Likewise the inverse monodromy problem for Painlevé equations can be stated as a Riemann–Hilbert problem.

B. Orthogonal polynomials, random matrices: Given a weight on a contour, the corresponding orthogonal polynomials can be computed via the solution of a Riemann–Hilbert factorization problem (Fokas). Furthermore, the distribution of eigenvalues of random matrices in several classical ensembles is reduced to computations involving orthogonal polynomials (see for example Deift).

C. Combinatorial probability: The most celebrated example is the theorem of Baik on the distribution of the length of the longest increasing subsequence of a random permutation. Together with the study of B above, it is one of the original rigorous investigations of so-called "integrable probability".
But the connection between the theory of integrability and various classical ensembles of random matrices goes back to the work of Dyson (e.g. Dyson).

The numerical analysis of Riemann–Hilbert problems can provide an effective way of numerically solving integrable PDEs; see e.g. Trogdon & Olver (2016).

In particular, Riemann–Hilbert factorization problems are used to extract asymptotic values for the three problems above (say, as time goes to infinity, or as the dispersion coefficient goes to zero, or as the polynomial degree goes to infinity, or as the size of the permutation goes to infinity). There exists a method for extracting the asymptotic behavior of solutions of Riemann–Hilbert problems, analogous to the method of stationary phase and the method of steepest descent applicable to exponential integrals.

By analogy with the classical asymptotic methods, one "deforms" Riemann–Hilbert problems which are not explicitly solvable to problems that are. The so-called "nonlinear" method of stationary phase is due to Deift and Zhou, expanding on a previous idea by Its and Manakov. A crucial ingredient of the Deift–Zhou analysis is the asymptotic analysis of singular integrals on contours. The relevant kernel is the standard Cauchy kernel (see Gakhov; also cf. the scalar example below).

An essential extension of the nonlinear method of stationary phase has been the introduction of the so-called finite gap g-function transformation by Deift, which has been crucial in most applications. This was inspired by work of Lax, Levermore and Venakides, who reduced the analysis of the small dispersion limit of the KdV equation to the analysis of a maximization problem for a logarithmic potential under some external field: a variational problem of "electrostatic" type. The g-function is the logarithmic transform of the maximizing "equilibrium" measure. The analysis of the small dispersion limit of the KdV equation has in fact provided the basis for the analysis of most of the work concerning "real" orthogonal polynomials (i.e. with the orthogonality condition defined on the real line) and Hermitian random matrices.

Perhaps the most sophisticated extension of the theory so far is the one applied to the "non self-adjoint" case, i.e. when the underlying Lax operator (the first component of the Lax pair) is not self-adjoint, by Kamvissis. In that case, actual "steepest descent contours" are defined and computed. The corresponding variational problem is a max-min problem: one looks for a contour that minimizes the "equilibrium" measure. The study of the variational problem and the proof of existence of a regular solution, under some conditions on the external field, was done in Kamvissis; the contour arising is an "S-curve", as defined and studied in the 1980s by Herbert R. Stahl, Andrei A. Gonchar and Evguenii A. Rakhmanov.

An alternative asymptotic analysis of Riemann–Hilbert factorization problems is provided in McLaughlin, especially convenient when jump matrices do not have analytic extensions. Their method is based on the analysis of d-bar problems, rather than the asymptotic analysis of singular integrals on contours. An alternative way of dealing with jump matrices with no analytic extensions was introduced in Varzugin.

Another extension of the theory appears in Kamvissis, where the underlying space of the Riemann–Hilbert problem is a compact hyperelliptic Riemann surface. The correct factorization problem is no longer holomorphic, but rather meromorphic, by reason of the Riemann–Roch theorem.
The related singular kernel is not the usual Cauchy kernel, but rather a more general kernel involving meromorphic differentials defined naturally on the surface (see e.g. the appendix in Kamvissis). The Riemann–Hilbert problem deformation theory is applied to the problem of stability of the infinite periodic Toda lattice under a "short range" perturbation (for example a perturbation of a finite number of particles).

Most Riemann–Hilbert factorization problems studied in the literature are 2-dimensional, i.e., the unknown matrices are of dimension 2. Higher-dimensional problems have been studied by Arno Kuijlaars and collaborators; see e.g. Kuijlaars.

Suppose V = 2 and Σ is a contour from z = −1 to z = 1. Assuming M is bounded, what is the solution M?

To solve this, let's take the logarithm of the jump relation $M_+=M_- V$:
$$
\log M_+(z) = \log M_-(z) + \log 2.
$$
Since M tends to 1, log M → 0 as z → ∞.

A standard fact about the Cauchy transform is that $C_+ - C_- = I$, where $C_+$ and $C_-$ are the limits of the Cauchy transform from above and below Σ; therefore, we get
$$
\frac{1}{2\pi i}\int_{\Sigma_+} \frac{\log 2}{\zeta-z} d\zeta - \frac{1}{2\pi i} \int_{\Sigma_-} \frac{\log 2}{\zeta-z} d\zeta = \log 2 \quad\text{ when } z\in\Sigma.
$$
Because the solution M of a Riemann–Hilbert factorization problem is unique (an easy application of Liouville's theorem), the Sokhotski–Plemelj theorem gives the solution. We get
$$
\log M = \frac{1}{2\pi i}\int_{\Sigma}\frac{\log 2}{\zeta-z}d\zeta = \frac{\log 2}{2\pi i}\int^{1-z}_{-1-z}\frac{1}{\zeta}d\zeta = \frac{\log 2}{2\pi i} \log{\frac{z-1}{z+1}},
$$
i.e.
$$
M(z)=\left( \frac{z-1}{z+1} \right)^{\frac{\log 2}{2\pi i}}
$$
which has a branch cut along the contour $\Sigma$.

Check:
$$
\begin{align}
M_+(0) &=(e^{i\pi})^{\frac{\log 2}{2\pi i}} = e^{\frac{\log 2}{2}} \\
M_-(0) &=(e^{-i\pi})^{\frac{\log 2}{2\pi i}} = e^{-\frac{\log 2}{2}}
\end{align}
$$
therefore,
$$
M_+(0)=M_-(0)e^{\log 2}=2M_-(0).
$$

CAVEAT 1: If the problem is not scalar one cannot easily take logarithms. In general explicit solutions are very rare.

CAVEAT 2: The boundedness (or at least a constraint on the blow-up) of M near the special points 1 and −1 is crucial. Otherwise any function of the form
$$
M(z)=\left( \frac{z-1}{z+1} \right)^{\frac{\log 2}{2\pi i}} + \frac{a}{z-1}+ \frac{b}{z+1}
$$
is also a solution. In general, conditions on growth are necessary at special points (the end-points of the jump contour or intersection points) to ensure that the problem is well-posed.
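The jump and normalization of this explicit solution can be verified numerically. A small sketch (ours, not part of the original article), evaluating M just above and below the cut with the principal branch of the logarithm:

```python
# A sketch (not from the original article): numerically verify the jump
# M_+ = 2*M_- of the scalar solution M(z) = ((z-1)/(z+1))^(log 2 / (2*pi*i))
# across the cut [-1, 1], using the principal branch via cmath.
import cmath

def M(z):
    exponent = cmath.log(2) / (2 * cmath.pi * 1j)
    return cmath.exp(exponent * cmath.log((z - 1) / (z + 1)))

eps = 1e-9
M_plus, M_minus = M(0 + 1j * eps), M(0 - 1j * eps)
print(abs(M_plus / M_minus - 2) < 1e-6)   # -> True
print(abs(M(10**6) - 1) < 1e-5)           # -> True: M tends to 1 at infinity
```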
diff --git a/wiki/wikipedia/599.txt b/wiki/wikipedia/599.txt deleted file mode 100644 index bc3ea090f144ddcea8610856473d1220a4b6035c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/599.txt +++ /dev/null @@ -1,9 +0,0 @@

In complex analysis, a field in mathematics, the Remmert–Stein theorem, introduced by Remmert and Stein (1953), gives conditions for the closure of an analytic set to be analytic.

The theorem states that if F is an analytic set of dimension less than k in some complex manifold D, and M is an analytic subset of D – F with all components of dimension at least k, then the closure of M is either analytic or contains F.

The condition on the dimensions is necessary: for example, the set of points 1/n in the complex plane is analytic in the complex plane minus the origin, but its closure in the complex plane is not.

A consequence of the Remmert–Stein theorem (also treated in their paper) is Chow's theorem, stating that any projective complex analytic space is necessarily a projective algebraic variety.

The Remmert–Stein theorem is implied by a proper mapping theorem due to Bishop; see Aguilar.

diff --git a/wiki/wikipedia/6.txt b/wiki/wikipedia/6.txt deleted file mode 100644 index bc5624fc11e8badefc5b856a3341a1e4aefa46dc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/6.txt +++ /dev/null @@ -1,51 +0,0 @@

In geometry, Radon's theorem on convex sets, published by Johann Radon in 1921, states that any set of d + 2 points in Rd can be partitioned into two sets whose convex hulls intersect. A point in the intersection of these convex hulls is called a Radon point of the set.

For example, in the case d = 2, any set of four points in the Euclidean plane can be partitioned in one of two ways. It may form a triple and a singleton, where the convex hull of the triple (a triangle) contains the singleton; alternatively, it may form two pairs of points that form the endpoints of two intersecting line segments.

Consider any set $X=\{x_1,x_2,\dots,x_{d+2}\}\subset \mathbf{R}^d$ of d + 2 points in d-dimensional space. Then there exists a set of multipliers a1, ..., ad + 2, not all of which are zero, solving the system of linear equations
$$
\sum_{i=1}^{d+2} a_i x_i=0,\quad \sum_{i=1}^{d+2} a_i=0,
$$
because there are d + 2 unknowns (the multipliers) but only d + 1 equations that they must satisfy (one for each coordinate of the points, together with a final equation requiring the sum of the multipliers to be zero). Fix some particular nonzero solution a1, ..., ad + 2. Let $I\subseteq X$ be the set of points with positive multipliers, and let $J=X\setminus I$ be the set of points with multipliers that are negative or zero. Then $I$ and $J$ form the required partition of the points into two subsets with intersecting convex hulls.

The convex hulls of $I$ and $J$ must intersect, because they both contain the point
$$
p= \sum_{x_i\in I}\frac{a_i}{A} x_i=\sum_{x_j\in J}\frac{-a_j}{A}x_j,
$$
where
$$
A=\sum_{x_i\in I} a_i=-\sum_{x_j\in J} a_j.
$$
The left hand side of the formula for $p$ expresses this point as a convex combination of the points in $I$, and the right hand side expresses it as a convex combination of the points in $J$. Therefore, $p$ belongs to both convex hulls, completing the proof.

This proof method allows for the efficient construction of a Radon point, in an amount of time that is polynomial in the dimension, by using Gaussian elimination or other efficient algorithms to solve the system of equations for the multipliers.
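The construction in the proof translates directly into code. A sketch (ours, not part of the original article), assuming numpy is available; the multiplier vector is read off from an SVD nullspace:

```python
# A sketch (not from the original article): construct a Radon partition
# of d + 2 points in R^d by solving for the multipliers with an SVD.
import numpy as np

def radon_point(points):
    pts = np.asarray(points, dtype=float)   # shape (d + 2, d)
    d = pts.shape[1]
    # Rows: one equation per coordinate, plus sum(a) = 0.
    A = np.vstack([pts.T, np.ones(d + 2)])  # shape (d + 1, d + 2)
    a = np.linalg.svd(A)[2][-1]             # nonzero nullspace vector
    pos = a > 0                             # the set I; its complement is J
    s = a[pos].sum()
    p = (a[pos] / s) @ pts[pos]             # convex combination of I
    return pos, p

# Four points in the plane: two crossing segments.
mask, p = radon_point([[0, 0], [2, 2], [0, 2], [2, 0]])
print(mask, p)                              # Radon point (1, 1)
```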
A topological generalization of Radon's theorem states that, if ƒ is any continuous function from a (d + 1)-dimensional simplex to d-dimensional space, then the simplex has two disjoint faces whose images under ƒ are not disjoint. Radon's theorem itself can be interpreted as the special case in which ƒ is the unique affine map that takes the vertices of the simplex to a given set of d + 2 points in d-dimensional space.

More generally, if K is any (d + 1)-dimensional compact convex set, and ƒ is any continuous function from K to d-dimensional space, then there exists a linear function g such that some point where g achieves its maximum value and some other point where g achieves its minimum value are mapped by ƒ to the same point. In the case where K is a simplex, the two simplex faces formed by the maximum and minimum points of g must then be two disjoint faces whose images have a nonempty intersection. This same general statement, when applied to a hypersphere instead of a simplex, gives the Borsuk–Ulam theorem, that ƒ must map two opposite points of the sphere to the same point.

Radon's theorem forms a key step of a standard proof of Helly's theorem on intersections of convex sets; this proof was the motivation for Radon's original discovery of Radon's theorem.

Radon's theorem can also be used to calculate the VC dimension of d-dimensional points with respect to linear separations. There exist sets of d + 1 points (for instance, the points of a regular simplex) such that every two nonempty subsets can be separated from each other by a hyperplane. However, no matter which set of d + 2 points is given, the two subsets of a Radon partition cannot be linearly separated. Therefore, the VC dimension of this system is exactly d + 1.

A randomized algorithm that repeatedly replaces sets of d + 2 points by their Radon point can be used to compute an approximation to a centerpoint of any point set, in an amount of time that is polynomial in both the number of points and the dimension.

The Radon point of three points in a one-dimensional space is just their median. The geometric median of a set of points is the point minimizing the sum of distances to the points in the set; it generalizes the one-dimensional median and has been studied both from the point of view of facility location and robust statistics. For sets of four points in the plane, the geometric median coincides with the Radon point.

Another generalization, for partition into r sets, was given by Helge Tverberg (1966) and is now known as Tverberg's theorem. It states that for any set of
$$
(d + 1)(r - 1) + 1
$$
points in Euclidean d-space, there is a partition into r subsets whose convex hulls intersect in at least one common point.

Carathéodory's theorem states that any point in the convex hull of some set of points is also within the convex hull of a subset of at most d + 1 of the points; that is, that the given point is part of a Radon partition in which it is a singleton. One proof of Carathéodory's theorem uses a technique of examining solutions to systems of linear equations, similar to the proof of Radon's theorem, to eliminate one point at a time until at most d + 1 remain.

Concepts related to Radon's theorem have also been considered for convex geometries, families of finite sets with the properties that the intersection of any two sets in the family remains in the family, and that the empty set and the union of all the sets belongs to the family. In this more general context, the convex hull of a set S is the intersection of the family members that contain S, and the Radon number of a space is the smallest r such that any r points have two subsets whose convex hulls intersect. Similarly, one can define the Helly number h and the Carathéodory number c by analogy to their definitions for convex sets in Euclidean spaces, and it can be shown that these numbers satisfy the inequalities h < r ≤ ch + 1.

In an arbitrary undirected graph, one may define a convex set to be a set of vertices that includes every induced path connecting a pair of vertices in the set.
With this definition, every set of ω + 1 vertices in the graph can be partitioned into two subsets whose convex hulls intersect, and ω + 1 is the minimum number for which this is possible, where ω is the clique number of the given graph. For related results involving shortest paths instead of induced paths, see Chepoi and Bandelt.

diff --git a/wiki/wikipedia/60.txt b/wiki/wikipedia/60.txt deleted file mode 100644 index ea9fa7e7c623134a8bc043453496d163baef0c71..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/60.txt +++ /dev/null @@ -1,120 +0,0 @@

In number theory, Fermat's Last Theorem (sometimes called Fermat's conjecture, especially in older texts) states that no three positive integers a, b, and c satisfy the equation a^n + b^n = c^n for any integer value of n greater than 2. The cases n = 1 and n = 2 have been known since antiquity to have infinitely many solutions.

The proposition was first stated as a theorem by Pierre de Fermat around 1637 in the margin of a copy of Arithmetica; Fermat added that he had a proof that was too large to fit in the margin. Although other statements claimed by Fermat without proof were subsequently proven by others and credited as theorems of Fermat (for instance, Fermat's theorem on sums of two squares), Fermat's Last Theorem resisted proof, leading to doubt that Fermat ever had a correct proof and it becoming known as a conjecture rather than a theorem. After 358 years of effort by mathematicians, the first successful proof was released in 1994 by Andrew Wiles, and formally published in 1995; it was described as a "stunning advance" in the citation for Wiles's Abel Prize award in 2016. It also proved much of the modularity theorem and opened up entire new approaches to numerous other problems and mathematically powerful modularity lifting techniques.

The unsolved problem stimulated the development of algebraic number theory in the 19th century and the proof of the modularity theorem in the 20th century. It is among the most notable theorems in the history of mathematics, and prior to its proof it was in the Guinness Book of World Records as the "most difficult mathematical problem", in part because the theorem has the largest number of unsuccessful proofs.

The Pythagorean equation x^2 + y^2 = z^2 has an infinite number of positive integer solutions for x, y, and z; these solutions are known as Pythagorean triples (with the simplest example 3, 4, 5). Around 1637, Fermat wrote in the margin of a book that the more general equation a^n + b^n = c^n had no solutions in positive integers if n is an integer greater than 2. Although he claimed to have a general proof of his conjecture, Fermat left no details of his proof, and no proof by him has ever been found. His claim was discovered some 30 years later, after his death. This claim, which came to be known as Fermat's Last Theorem, stood unsolved for the next three and a half centuries.

The claim eventually became one of the most notable unsolved problems of mathematics. Attempts to prove it prompted substantial development in number theory, and over time Fermat's Last Theorem gained prominence as an unsolved problem in mathematics.

The special case n = 4, proved by Fermat himself, is sufficient to establish that if the theorem is false for some exponent n that is not a prime number, it must also be false for some smaller n, so only prime values of n need further investigation.
Over the next two centuries (1637–1839), the conjecture was proved for only the primes 3, 5, and 7, although Sophie Germain innovated and proved an approach that was relevant to an entire class of primes. In the mid-19th century, Ernst Kummer extended this and proved the theorem for all regular primes, leaving irregular primes to be analyzed individually. Building on Kummer's work and using sophisticated computer studies, other mathematicians were able to extend the proof to cover all prime exponents up to four million, but a proof for all exponents remained out of reach. For his proof, Wiles was honoured and received numerous awards, including the 2016 Abel Prize.

There are several alternative ways to state Fermat's Last Theorem that are mathematically equivalent to the original statement of the problem.

In order to state them, we use mathematical notation: let N be the set of natural numbers 1, 2, 3, ..., let Z be the set of integers 0, ±1, ±2, ..., and let Q be the set of rational numbers a/b, where a and b are in Z with b ≠ 0. In what follows we will call a solution to x^n + y^n = z^n where one or more of x, y, or z is zero a trivial solution. A solution where all three are non-zero will be called a non-trivial solution.

For comparison's sake we start with the original formulation.

* Original statement. With n, x, y, z ∈ N (meaning that n, x, y, z are all positive whole numbers) and n > 2, the equation x^n + y^n = z^n has no solutions.

Most popular treatments of the subject state it this way. It is also commonly stated over Z:

* Equivalent statement 1: x^n + y^n = z^n, where integer n ≥ 3, has no non-trivial solutions x, y, z ∈ Z.

The equivalence is clear if n is even. If n is odd and all three of x, y, z are negative, then we can replace x, y, z with −x, −y, −z to obtain a solution in N. If two of them are negative, it must be x and z or y and z. If x, z are negative and y is positive, then we can rearrange to get (−z)^n + y^n = (−x)^n, resulting in a solution in N; the other case is dealt with analogously. Now if just one is negative, it must be x or y. If x is negative, and y and z are positive, then it can be rearranged to get (−x)^n + z^n = y^n, again resulting in a solution in N; if y is negative, the result follows symmetrically. Thus in all cases a nontrivial solution in Z would also mean a solution exists in N, the original formulation of the problem.

* Equivalent statement 2: x^n + y^n = z^n, where integer n ≥ 3, has no non-trivial solutions x, y, z ∈ Q.

This is because the exponents of x, y, and z are equal (to n), so if there is a solution in Q, then it can be multiplied through by an appropriate common denominator to get a solution in Z, and hence in N.

* Equivalent statement 3: x^n + y^n = 1, where integer n ≥ 3, has no non-trivial solutions x, y ∈ Q.

A non-trivial solution a, b, c ∈ Z to x^n + y^n = z^n yields the non-trivial solution a/c, b/c ∈ Q for v^n + w^n = 1. Conversely, a solution a/b, c/d ∈ Q to v^n + w^n = 1 yields the non-trivial solution ad, cb, bd for x^n + y^n = z^n.

This last formulation is particularly fruitful, because it reduces the problem from a problem about surfaces in three dimensions to a problem about curves in two dimensions. Furthermore, it allows working over the field Q, rather than over the ring Z; fields exhibit more structure than rings, which allows for deeper analysis of their elements.
- -* Equivalent statement 4 – connection to elliptic curves: If a, b, c is a non-trivial solution to $a^p + b^p = c^p$, p odd prime, then $y^2 = x(x - a^p)(x + b^p)$ (Frey curve) will be an elliptic curve. - -Examining this elliptic curve with Ribet's theorem shows that it does not have a modular form. However, the proof by Andrew Wiles proves that any equation of the form $y^2 = x(x - a^n)(x + b^n)$ does have a modular form. Any non-trivial solution to $x^p + y^p = z^p$ (with p an odd prime) would therefore create a contradiction, which in turn proves that no non-trivial solutions exist. - -In other words, any solution that could contradict Fermat's Last Theorem could also be used to contradict the Modularity Theorem. So if the modularity theorem were found to be true, then it would follow that no contradiction to Fermat's Last Theorem could exist either. As described above, the discovery of this equivalent statement was crucial to the eventual solution of Fermat's Last Theorem, as it provided a means by which it could be "attacked" for all numbers at once. - -In ancient times it was known that a triangle whose sides were in the ratio 3:4:5 would have a right angle as one of its angles. This was used in construction and later in early geometry. It was also known to be one example of a general rule that any triangle where the lengths of two sides, each squared and then added together ($3^2 + 4^2 = 9 + 16 = 25$), equal the square of the length of the third side ($5^2 = 25$), would also be a right angle triangle. - -This is now known as the Pythagorean theorem, and a triple of numbers that meets this condition is called a Pythagorean triple – both are named after the ancient Greek Pythagoras. Examples include (3, 4, 5) and (5, 12, 13). There are infinitely many such triples, and methods for generating such triples have been studied in many cultures, beginning with the Babylonians and later ancient Greek, Chinese, and Indian mathematicians. - -Problem II.8 of the Arithmetica asks how a given square number is split into two other squares; in other words, for a given rational number k, find rational numbers u and v such that $k^2 = u^2 + v^2$. Diophantus shows how to solve this sum-of-squares problem for k = 4 (the solutions being u = 16/5 and v = 12/5). - -Around 1637, Fermat wrote his Last Theorem in the margin of his copy of the Arithmetica next to Diophantus's sum-of-squares problem. - -After Fermat's death in 1665, his son Clément-Samuel Fermat produced a new edition of the book (1670) augmented with his father's comments. Although not actually a theorem at the time (meaning a mathematical statement for which proof exists), the margin note became known over time as Fermat's Last Theorem, as it was the last of Fermat's asserted theorems to remain unproved. - -It is not known whether Fermat had actually found a valid proof for all exponents n, but it appears unlikely. Only one related proof by him has survived, namely for the case n = 4, as described in the section Proofs for specific exponents. - -While Fermat posed the cases of n = 4 and of n = 3 as challenges to his mathematical correspondents, such as Marin Mersenne, Blaise Pascal, and John Wallis, he never posed the general case. Moreover, in the last thirty years of his life, Fermat never again wrote of his "truly marvelous proof" of the general case, and never published it.
Van der Poorten suggests that while the absence of a proof is insignificant, the lack of challenges means Fermat realised he did not have a proof; he quotes Weil as saying Fermat must have briefly deluded himself with an irretrievable idea. - -The techniques Fermat might have used in such a "marvelous proof" are unknown. - -Taylor and Wiles's proof relies on 20th-century techniques. Fermat's proof would have had to be elementary by comparison, given the mathematical knowledge of his time. - -While Harvey Friedman's grand conjecture implies that any provable theorem (including Fermat's last theorem) can be proved using only 'elementary function arithmetic', such a proof need be 'elementary' only in a technical sense and could involve millions of steps, and thus be far too long to have been Fermat's proof. - -Only one relevant proof by Fermat has survived, in which he uses the technique of infinite descent to show that the area of a right triangle with integer sides can never equal the square of an integer. His proof is equivalent to demonstrating that the equation -$$ -x^4 - y^4 = z^2 -$$ - -has no primitive solutions in integers (no pairwise coprime solutions). In turn, this proves Fermat's Last Theorem for the case n = 4, since the equation $a^4 + b^4 = c^4$ can be written as $c^4 - b^4 = (a^2)^2$. - -Alternative proofs of the case n = 4 were developed later by Frénicle de Bessy (1676), Leonhard Euler (1738), Kausler (1802), Legendre (1823, 1830), Calzolari (1855), Gabriel Lamé (1865), Peter Guthrie Tait (1872), Günther (1878), Gambioli (1901), Krey (1909), Rychlík (1910), Stockhaus (1910), Carmichael (1915), Johannes van der Corput (1915), Axel Thue (1917), and Duarte (1944). - -The case p = 5 was proved independently by Legendre and Peter Gustav Lejeune Dirichlet around 1825. Alternative proofs were developed by Carl Friedrich Gauss (1875, posthumous), Lebesgue (1843), Lamé (1847), Gambioli (1901), and others. The case p = 7 was proved by Gabriel Lamé in 1839. His rather complicated proof was simplified in 1840 by Lebesgue, and still simpler proofs were published by Angelo Genocchi in 1864, 1874 and 1876. Alternative proofs were developed by Théophile Pépin (1876) and Edmond Maillet (1897). - -Fermat's Last Theorem was also proved for the exponents n = 6, 10, and 14. Proofs for n = 6 were published by Kausler, Swift, and Breusch. Similarly, Dirichlet and Terjanian each proved the case n = 14, while Kapferer and Breusch each proved the case n = 10. Since the proofs for individual exponents became ever more complicated as p increased, it seemed unlikely that the general case of Fermat's Last Theorem could be proved by building upon the proofs for individual exponents. - -In the early 19th century, Sophie Germain developed several novel approaches aimed at proving Fermat's Last Theorem for an entire class of prime exponents at once. First, she defined a set of auxiliary primes $\theta$ constructed from the prime exponent $p$ by the equation $\theta = 2hp + 1$, where $h$ is any integer not divisible by three. She showed that, if no integers raised to the $p^{\mathrm{th}}$ power were adjacent modulo $\theta$ (the non-consecutivity condition), then $\theta$ must divide the product $xyz$. Her goal was to use mathematical induction to prove that, for any given $p$, infinitely many auxiliary primes $\theta$ satisfied the non-consecutivity condition and thus divided $xyz$; since the product $xyz$ can have at most a finite number of prime factors, such a proof would have established Fermat's Last Theorem. Although she developed many techniques for establishing the non-consecutivity condition, she did not succeed in her strategic goal.
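Germain's non-consecutivity condition is easy to test by machine for small cases. The following sketch is our own illustration (the helper names are hypothetical, not from any source): it checks whether any two nonzero $p$-th power residues are adjacent modulo an auxiliary prime $\theta = 2hp + 1$.

```python
# Sketch: test Sophie Germain's non-consecutivity condition for an
# auxiliary prime theta (assumed prime here; illustration only).

def pth_power_residues(p, theta):
    """Nonzero p-th power residues modulo theta."""
    return {pow(x, p, theta) for x in range(1, theta)}

def non_consecutive(p, theta):
    """True if no two p-th power residues are adjacent modulo theta."""
    res = pth_power_residues(p, theta)
    return not any((r + 1) % theta in res for r in res)

# For p = 5 and h = 1, theta = 11; the 5th-power residues mod 11 are {1, 10},
# which are not adjacent, so the condition holds:
print(non_consecutive(5, 11))  # True
```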
She also worked to set lower limits on the size of solutions to Fermat's equation for a given exponent $p$, a modified version of which was published by Adrien-Marie Legendre. As a byproduct of this latter work, she proved Sophie Germain's theorem, which verified the first case of Fermat's Last Theorem (namely, the case in which $p$ does not divide $xyz$) for every odd prime exponent less than 270. - -However, despite these efforts and their results, no proof existed of Fermat's Last Theorem. Proofs of individual exponents by their nature could never prove the general case: even if all exponents were verified up to an extremely large number X, a higher exponent beyond X might still exist for which the claim was not true. (This had been the case with some other past conjectures, and it could not be ruled out in this conjecture.) - -The strategy that ultimately led to a successful proof of Fermat's Last Theorem arose from the "astounding" Taniyama–Shimura–Weil conjecture, proposed around 1955, which many mathematicians believed would be near to impossible to prove. In the 1980s, Gerhard Frey noticed a link between the conjecture and Fermat's equation: an elliptic curve of the form - -$y^2 = x(x - a^p)(x + b^p)$, - -arising from a non-trivial solution (a, b, c) of Fermat's equation for an odd prime p, would have such unusual properties that it was unlikely to be modular. This would conflict with the modularity theorem, which asserted that all elliptic curves are modular. As such, Frey observed that a proof of the Taniyama–Shimura–Weil conjecture might also simultaneously prove Fermat's Last Theorem. By contraposition, a disproof or refutation of Fermat's Last Theorem would disprove the Taniyama–Shimura–Weil conjecture. - -In plain English, Frey had shown that, if this intuition about his equation was correct, then any set of 4 numbers (a, b, c, n) capable of disproving Fermat's Last Theorem could also be used to disprove the Taniyama–Shimura–Weil conjecture. Therefore, if the latter were true, the former could not be disproven, and would also have to be true. - -Following this strategy, a proof of Fermat's Last Theorem required two steps. First, it was necessary to prove the modularity theorem – or at least to prove it for the types of elliptic curves that included Frey's equation (known as semistable elliptic curves). This was widely believed inaccessible to proof by contemporary mathematicians. Second, it was necessary to show that Frey's intuition was correct: that the curve arising from a solution of Fermat's equation could not be modular. - -Following Frey, Serre and Ribet's work, this was where matters stood: - -* Fermat's Last Theorem needed to be proven for all exponents n that were prime numbers. - -* The modularity theorem – if proved for semi-stable elliptic curves – would mean that all semistable elliptic curves must be modular. - -* Ribet's theorem showed that any solution to Fermat's equation for a prime number could be used to create a semistable elliptic curve that could not be modular; - -* The only way that both of these statements could be true was if no solutions existed to Fermat's equation (because then no such curve could be created), which was what Fermat's Last Theorem said. As Ribet's Theorem was already proved, this meant that a proof of the Modularity Theorem would automatically prove Fermat's Last theorem was true as well. - -Ribet's proof of the epsilon conjecture in 1986 accomplished the second of these two steps. Upon hearing of Ribet's success, Andrew Wiles, an English mathematician with a childhood fascination with Fermat's Last Theorem, and who had worked on elliptic curves, decided to commit himself to accomplishing the remaining half: proving a special case of the modularity theorem (then known as the Taniyama–Shimura conjecture) for semistable elliptic curves.
- -Wiles worked on that task for six years in near-total secrecy, covering up his efforts by releasing prior work in small segments as separate papers and confiding only in his wife. He announced his proof in June 1993; during the subsequent peer review, however, an error was discovered in a critical part of the argument. By the end of 1993, rumours had spread that under scrutiny, Wiles's proof had failed, but how seriously was not known. Mathematicians were beginning to pressure Wiles to disclose his work whether it was complete or not, so that the wider community could explore and use whatever he had managed to accomplish. But instead of being fixed, the problem, which had originally seemed minor, now seemed very significant, far more serious, and less easy to resolve. - -Wiles states that on the morning of 19 September 1994, he was on the verge of giving up and was almost resigned to accepting that he had failed, and to publishing his work so that others could build on it and fix the error. He adds that he was having a final look to try and understand the fundamental reasons for why his approach could not be made to work, when he had a sudden insight – that the specific reason why the Kolyvagin–Flach approach would not work directly also meant that his original attempts using Iwasawa theory could be made to work, if he strengthened it using his experience gained from the Kolyvagin–Flach approach. Fixing one approach with tools from the other approach would resolve the issue for all the cases that were not already proven by his refereed paper. - -Fermat's Last Theorem has also been generalized by allowing the three exponents to differ, replacing Fermat's equation with the generalized Fermat equation $a^m + b^n = c^k$. In particular, the exponents m, n, k need not be equal, whereas Fermat's last theorem considers the case m = n = k. - -The Beal conjecture, also known as the Mauldin conjecture and the Tijdeman–Zagier conjecture, states that there are no solutions to the generalized Fermat equation in positive integers a, b, c, m, n, k with a, b, and c being pairwise coprime and all of m, n, k being greater than 2. - -The Fermat–Catalan conjecture generalizes Fermat's last theorem with the ideas of the Catalan conjecture. The conjecture states that the generalized Fermat equation has only finitely many solutions (a, b, c, m, n, k) with distinct triplets of values $(a^m, b^n, c^k)$, where a, b, c are positive coprime integers and m, n, k are positive integers satisfying -$$ -\frac{1}{m} + \frac{1}{n} + \frac{1}{k} < 1. -$$ - -The statement is about the finiteness of the set of solutions because there are 10 known solutions. - -In 1816, and again in 1850, the French Academy of Sciences offered a prize for a general proof of Fermat's Last Theorem. In 1857, the Academy awarded 3,000 francs and a gold medal to Kummer for his research on ideal numbers, although he had not submitted an entry for the prize. Another prize was offered in 1883 by the Academy of Brussels. - -In 1908, the German industrialist and amateur mathematician Paul Wolfskehl bequeathed 100,000 gold marks—a large sum at the time—to the Göttingen Academy of Sciences to offer as a prize for a complete proof of Fermat's Last Theorem. On 27 June 1908, the Academy published nine rules for awarding the prize. Among other things, these rules required that the proof be published in a peer-reviewed journal; the prize would not be awarded until two years after the publication; and that no prize would be given after 13 September 2007, roughly a century after the competition was begun. Wiles collected the Wolfskehl prize money, then worth $50,000, on 27 June 1997.
In March 2016, Wiles was awarded the Norwegian government's Abel prize worth €600,000 for "his stunning proof of Fermat's Last Theorem by way of the modularity conjecture for semistable elliptic curves, opening a new era in number theory." - -Prior to Wiles's proof, thousands of incorrect proofs were submitted to the Wolfskehl committee, amounting to roughly 10 feet (3 meters) of correspondence. In the first year alone (1907–1908), 621 attempted proofs were submitted, although by the 1970s, the rate of submission had decreased to roughly 3–4 attempted proofs per month. According to some claims, Edmund Landau tended to use a special preprinted form for such proofs, where the location of the first mistake was left blank to be filled by one of his graduate students. According to F. Schlichting, a Wolfskehl reviewer, most of the proofs were based on elementary methods taught in schools, and often submitted by "people with a technical education but a failed career". In the words of mathematical historian Howard Eves, "Fermat's Last Theorem has the peculiar distinction of being the mathematical problem for which the greatest number of incorrect proofs have been published." - -In "The Royale", a 1989 episode of the 24th-century-set TV series Star Trek: The Next Generation, Picard tells Commander Riker about his attempts to solve the theorem, still unsolved after 800 years. He concludes, "In our arrogance, we feel we are so advanced. And yet we cannot unravel a simple knot tied by a part-time French mathematician working alone without a computer." (Andrew Wiles's insight leading to his breakthrough proof happened four months after the series ended. Wiles' proof was referenced in the Star Trek: Deep Space Nine season three episode Facets, where Jadzia Dax says to Tobin Dax that his proof of the theorem was "the most original approach to the proof since Wiles over three hundred years ago".) diff --git a/wiki/wikipedia/600.txt b/wiki/wikipedia/600.txt deleted file mode 100644 index aab72110c8ca79c93d8a0fa5766a9712dd831aa3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/600.txt +++ /dev/null @@ -1,102 +0,0 @@ -In mathematics, the root test is a criterion for the convergence (a convergence test) of an infinite series. It depends on the quantity -$$ -\limsup_{n\rightarrow\infty}\sqrt[n]{|a_n|}, -$$ - -where $a_n$ are the terms of the series, and states that the series converges absolutely if this quantity is less than one, but diverges if it is greater than one. It is particularly useful in connection with power series. - -The root test was developed first by Augustin-Louis Cauchy who published it in his textbook Cours d'analyse (1821). Thus, it is sometimes known as the Cauchy root test or Cauchy's radical test. For a series -$$ -\sum_{n=1}^\infty a_n, -$$ - -the root test uses the number -$$ -C = \limsup_{n\rightarrow\infty}\sqrt[n]{|a_n|}, -$$ - -where "lim sup" denotes the limit superior, possibly +∞. Note that if -$$ -\lim_{n\rightarrow\infty}\sqrt[n]{|a_n|} -$$ - -converges then it equals C and may be used in the root test instead. - -The root test states that: - -* if C < 1 then the series converges absolutely, - -* if C > 1 then the series diverges, - -* if C = 1 and the limit approaches strictly from above then the series diverges, - -* otherwise the test is inconclusive (the series may diverge, converge absolutely or converge conditionally). - -There are some series for which C = 1 and the series converges, e.g.
$\textstyle \sum 1/{n^2}$, and there are others for which C = 1 and the series diverges, e.g. $\textstyle\sum 1/n$. - -This test can be used with a power series -$$ -f(z) = \sum_{n=0}^\infty c_n (z-p)^n -$$ - -where the coefficients $c_n$ and the center p are complex numbers and the argument z is a complex variable. - -The terms of this series would then be given by $a_n = c_n(z - p)^n$. One then applies the root test to the $a_n$ as above. Note that sometimes a series like this is called a power series "around p", because the radius of convergence is the radius R of the largest interval or disc centred at p such that the series will converge for all points z strictly in the interior (convergence on the boundary of the interval or disc generally has to be checked separately). A corollary of the root test applied to such a power series is the Cauchy–Hadamard theorem: the radius of convergence is exactly $1/\limsup_{n \rightarrow \infty}{\sqrt[n]{|c_n|}},$ taking care that we really mean ∞ if the denominator is 0. - -The proof of the convergence of a series $\sum a_n$ is an application of the comparison test. If for all n ≥ N (N some fixed natural number) we have $\sqrt[n]{|a_n|} \le k < 1,$ then $|a_n| \le k^n < 1$. Since the geometric series $\sum_{n=N}^\infty k^n$ converges so does $\sum_{n=N}^\infty |a_n|$ by the comparison test. Hence $\sum a_n$ converges absolutely. - -If $\sqrt[n]{|a_n|} > 1$ for infinitely many n, then $a_n$ fails to converge to 0, hence the series is divergent. - -Proof of corollary: - -For a power series $\sum a_n = \sum c_n(z - p)^n$, we see by the above that the series converges if there exists an N such that for all n ≥ N we have -$$ -\sqrt[n]{|a_n|} = \sqrt[n]{|c_n(z - p)^n|} < 1, -$$ - -equivalent to -$$ -\sqrt[n]{|c_n|}\cdot|z - p| < 1 -$$ - -for all n ≥ N, which implies that in order for the series to converge we must have $|z - p| < 1/\sqrt[n]{|c_n|}$ for all sufficiently large n. This is equivalent to saying -$$ -|z - p| < 1/\limsup_{n \rightarrow \infty}{\sqrt[n]{|c_n|}}, -$$ - -so $R \le 1/\limsup_{n \rightarrow \infty}{\sqrt[n]{|c_n|}}.$ Now the only other place where convergence is possible is when -$$ -\sqrt[n]{|a_n|} = \sqrt[n]{|c_n(z - p)^n|} = 1, -$$ - -(since points > 1 will diverge) and this will not change the radius of convergence since these are just the points lying on the boundary of the interval or disc, so -$$ -R = 1/\limsup_{n \rightarrow \infty}{\sqrt[n]{|c_n|}}. -$$ - -Example 1: -$$ - \sum_{i=1}^\infty \frac{2^i}{i^9} -$$ - -Applying the root test and using the fact that $ \lim_{n \rightarrow \infty} n^{1/n}=1,$ -$$ - C = \limsup_{n\to\infty}\sqrt[n]{\left|\frac{2^n}{n^9}\right|} = \limsup_{n\to\infty}\frac{ \sqrt[n]{2^n} } { \sqrt[n]{n^9} } = \limsup_{n\to\infty}\frac{ 2 } {(n^{1/n})^9 } = 2 -$$ - -Since $ C=2>1,$ the series diverges. - -Example 2: -$$ -1 + 1 + 0.5 + 0.5 + 0.25 + 0.25 + 0.125 + 0.125 + ... -$$ - -The root test shows convergence because, with the terms indexed from zero so that $a_{2k} = a_{2k+1} = 0.5^k$, -$$ -r=\limsup_{n\to\infty}\sqrt[n]{|a_n|} = \lim_{n\to\infty}\sqrt[n]{0.5^{\lfloor n/2\rfloor}} = \sqrt{0.5} \approx 0.707 < 1. -$$ - -This example shows how the root test is stronger than the ratio test. The ratio test is inconclusive for this series: the ratio $|a_{n+1}/a_n|$ equals 1 whenever two consecutive terms are equal and 0.5 otherwise, so the limit of the ratios does not exist and -$$ -r=\limsup_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right| =1. -$$ diff --git a/wiki/wikipedia/601.txt b/wiki/wikipedia/601.txt deleted file mode 100644 index 7191982e6af977b85e2d92f516a2dfde9d1ff6b5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/601.txt +++ /dev/null @@ -1,21 +0,0 @@ -In elementary geometry, the pizza theorem states the equality of two areas that arise when one partitions a disk in a certain way.
- -Let p be an interior point of the disk, and let n be a multiple of 4 and greater than or equal to 8. Form n sectors of the disk with equal angles by choosing an arbitrary line through p, rotating the line n/2 − 1 times by an angle of 2π/n radians, and slicing the disk on each of the resulting n/2 lines. Number the sectors consecutively in a clockwise or anti-clockwise fashion. Then the pizza theorem states that: - -The sum of the areas of the odd-numbered sectors equals the sum of the areas of the even-numbered sectors. - -The pizza theorem is so called because it mimics a traditional pizza slicing technique. It shows that, if two people share a pizza sliced in this way by taking alternating slices, then they each get an equal amount of pizza. - -The pizza theorem was originally proposed as a challenge problem by Upton. The published solution to this problem, by Michael Goldberg, involved direct manipulation of the algebraic expressions for the areas of the sectors. - -Carter and Wagon provide an alternative proof by dissection. They show how to partition the sectors into smaller pieces so that each piece in an odd-numbered sector has a congruent piece in an even-numbered sector, and vice versa. Frederickson gave a family of dissection proofs for all cases (in which the number of sectors is 8, 12, 16, ...). - -The requirement that the number of sectors be a multiple of four is necessary: as Don Coppersmith showed, dividing a disk into four sectors, or a number of sectors that is not divisible by four, does not in general produce equal areas. Mabry and Deiermann answered a problem of Carter and Wagon by providing a more precise version of the theorem that determines which of the two sets of sectors has greater area in the cases that the areas are unequal. Specifically, if the number of sectors is 2 (mod 8) and no slice passes through the center of the disk, then the subset of slices containing the center has smaller area than the other subset, while if the number of sectors is 6 (mod 8) and no slice passes through the center, then the subset of slices containing the center has larger area. An odd number of sectors is not possible with straight-line cuts, and a slice through the center causes the two subsets to be equal regardless of the number of sectors. - -Mabry and Deiermann also observe that, when the pizza is divided evenly, then so is its crust (the crust may be interpreted as either the perimeter of the disk or the area between the boundary of the disk and a smaller circle having the same center, with the cut-point lying in the latter's interior), and since the disks bounded by both circles are partitioned evenly so is their difference. However, when the pizza is divided unevenly, the diner who gets the most pizza area actually gets the least crust. - -As Hirschhorn et al. note, an equal division of the pizza also leads to an equal division of its toppings, as long as each topping is distributed in a disk (not necessarily concentric with the whole pizza) that contains the central point p of the division into sectors. - -Hirschhorn et al. show that a pizza sliced in the same way as the pizza theorem, into a number n of sectors with equal angles where n is divisible by four, can also be shared equally among n/4 people. For instance, a pizza divided into 12 sectors can be shared equally by three people as well as by two; however, to accommodate all five of the Hirschhorns, a pizza would need to be divided into 20 sectors.
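The statement is easy to check numerically. The following Monte Carlo sketch is our own illustration (not a proof, and not from any of the cited sources): it cuts the unit disk through an interior point p into n equal-angle sectors and estimates the odd-numbered share, which should come out close to one half.

```python
# Monte Carlo sanity check of the pizza theorem (sketch only).
import math
import random

def odd_sector_share(px, py, n=8, samples=200_000):
    assert n % 4 == 0 and n >= 8            # hypotheses of the theorem
    odd = 0
    for _ in range(samples):
        while True:                          # uniform point in the unit disk
            x, y = random.uniform(-1, 1), random.uniform(-1, 1)
            if x * x + y * y <= 1:
                break
        # sector index = angle of the point as seen from the cut point p
        theta = math.atan2(y - py, x - px) % (2 * math.pi)
        if int(theta / (2 * math.pi / n)) % 2:
            odd += 1
    return odd / samples

print(odd_sector_share(0.3, 0.2))            # ~0.500 up to sampling error
```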
- -Cibulka and Knauer study the game theory of choosing free slices of pizza in order to guarantee a large share, a problem posed by Dan Brown and Peter Winkler. In the version of the problem they study, a pizza is sliced radially (without the guarantee of equal-angled sectors) and two diners alternately choose pieces of pizza that are adjacent to an already-eaten sector. If the two diners both try to maximize the amount of pizza they eat, the diner who takes the first slice can guarantee a 4/9 share of the total pizza, and there exists a slicing of the pizza such that he cannot take more. The fair division or cake cutting problem considers similar games in which different players have different criteria for how they measure the size of their share; for instance, one diner may prefer to get the most pepperoni while another diner may prefer to get the most cheese. diff --git a/wiki/wikipedia/602.txt b/wiki/wikipedia/602.txt deleted file mode 100644 index 5d163d177e20a077f8ab9d14f7cbc813f9861517..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/602.txt +++ /dev/null @@ -1,9 +0,0 @@ -The International Protein Index (IPI) is a defunct protein database launched in 2001 by the European Bioinformatics Institute (EBI), and closed in 2011. Its purpose was to provide the proteomics community with a resource that enabled - -* accession numbers from a variety of bioinformatics databases to be mapped - -* a complete set of proteins for a species (i.e. a reference set) to be obtained - -In its last version, the IPI contained the complete reference sets for six animal species: Homo sapiens (human), Mus musculus (mouse), Rattus norvegicus (rat), Bos taurus (cattle), Gallus gallus (chicken) and Danio rerio (zebrafish); and one plant species: Arabidopsis thaliana (thale cress). The human, mouse and rat datasets were the first to be developed, combining information taken from the Swiss-Prot, TrEMBL, Ensembl and RefSeq databases. - -In 2001, when the IPI was launched, databases cataloguing human genes varied greatly and had few links between them. Since then, much more data has been produced, giving a more complete picture, and databases have collaborated to synchronize data. Currently many model organisms have a reference set of genes/proteins which are catalogued in Ensembl/UniProt respectively, as well as other species-specific databases. Because of this redundancy, the IPI was retired in 2011. EBI advised users of its services to employ accession numbers as their protein identifiers. diff --git a/wiki/wikipedia/603.txt b/wiki/wikipedia/603.txt deleted file mode 100644 index 2a2ab307dd928de8dd5cf275aeade36c6de47f57..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/603.txt +++ /dev/null @@ -1,325 +0,0 @@ -In mathematics, Jensen's inequality, named after the Danish mathematician Johan Jensen, relates the value of a convex function of an integral to the integral of the convex function. It was proved by Jensen in 1906. Given its generality, the inequality appears in many forms depending on the context, some of which are presented below. In its simplest form the inequality states that the convex transformation of a mean is less than or equal to the mean applied after convex transformation; it is a simple corollary that the opposite is true of concave transformations.
- -Jensen's inequality generalizes the statement that the secant line of a convex function lies above the graph of the function, which is Jensen's inequality for two points: the secant line consists of weighted means of the convex function (for t ∈ [0,1]), -$$ -t f(x_1) + (1-t) f(x_2), -$$ - -while the graph of the function is the convex function of the weighted means, -$$ -f(t x_1 + (1-t) x_2). -$$ - -Thus, Jensen's inequality is -$$ -f(t x_1 + (1-t) x_2) \leq t f(x_1) + (1-t) f(x_2). -$$ - -In the context of probability theory, it is generally stated in the following form: if X is a random variable and φ is a convex function, then -$$ -\varphi(\operatorname{E}[X]) \leq \operatorname{E} \left[\varphi(X)\right]. -$$ - -The difference between the two sides of the inequality, $\operatorname{E} \left[\varphi(X)\right] - \varphi\left(\operatorname{E}[X]\right)$, is called the Jensen gap. - -The classical form of Jensen's inequality involves several numbers and weights. The inequality can be stated quite generally using either the language of measure theory or (equivalently) probability. In the probabilistic setting, the inequality can be further generalized to its full strength. - -For a real convex function $\varphi$, numbers $x_1, x_2, \ldots, x_n$ in its domain, and positive weights $a_i$, Jensen's inequality can be stated as: -$$ -\varphi\left(\frac{\sum a_i x_i}{\sum a_i}\right) \le \frac{\sum a_i \varphi (x_i)}{\sum a_i} -$$ - -and the inequality is reversed if $\varphi$ is concave, which is -$$ -\varphi\left(\frac{\sum a_i x_i}{\sum a_i}\right) \geq \frac{\sum a_i \varphi (x_i)}{\sum a_i}. -$$ - -Equality holds if and only if $x_1=x_2=\cdots =x_n$ or $\varphi$ is linear on a domain containing $x_1,x_2,\cdots ,x_n$. - -As a particular case, if the weights $a_i$ are all equal, then these two inequalities become -$$ -\varphi\left(\frac{\sum x_i}{n}\right) \le \frac{\sum \varphi (x_i)}{n} -$$ - -$$ -\varphi\left(\frac{\sum x_i}{n}\right) \geq \frac{\sum \varphi (x_i)}{n} -$$ - -For instance, the function log(x) is concave, so substituting $\varphi(x) = \log(x)$ in the concave form above establishes the (logarithm of the) familiar arithmetic-mean/geometric-mean inequality: -$$ -\log\!\left( \frac{\sum_{i=1}^n x_i}{n}\right) \geq \frac{\sum_{i=1}^n \log\!\left( x_i \right)}{n} \quad \text{or} \quad \frac{x_1 + x_2 + \cdots + x_n}{n} \geq \sqrt[n]{x_1 \cdot x_2 \cdots x_n} -$$ - -A common application has x as a function of another variable (or set of variables) t, that is, $x_i = g(t_i)$. All of this carries directly over to the general continuous case: the weights $a_i$ are replaced by a non-negative integrable function f(x), such as a probability distribution, and the summations are replaced by integrals. - -Let $(\Omega, A, \mu)$ be a probability space, i.e. $\mu(\Omega)= 1$. If $g$ is a real-valued function that is $\mu$-integrable, and if $\varphi$ is a convex function on the real line, then: -$$ -\varphi\left(\int_\Omega g d\mu\right) \le \int_\Omega \varphi \circ g d\mu. -$$ - -In real analysis, we may require an estimate on -$$ -\varphi\left(\int_a^b f(x) dx\right), -$$ - -where $a, b \in \mathbb{R}$, and $f\colon[a, b] \to \R$ is a non-negative Lebesgue-integrable function. In this case, the Lebesgue measure of $[a, b]$ need not be unity. However, by integration by substitution, the interval can be rescaled so that it has measure unity.
Then Jensen's inequality can be applied to get -$$ -\varphi\left(\frac{1}{b-a}\int_a^b f(x) dx\right) \le \frac{1}{b-a} \int_a^b \varphi(f(x)) dx. -$$ - -The same result can be equivalently stated in a probability theory setting, by a simple change of notation. Let $(\Omega, \mathfrak{F},\operatorname{P})$ be a probability space, X an integrable real-valued random variable and φ a convex function. Then: -$$ -\varphi\left(\operatorname{E}[X]\right) \leq \operatorname{E} \left[ \varphi(X) \right]. -$$ - -In this probability setting, the measure μ is intended as a probability $\operatorname{P}$, the integral with respect to μ as an expected value $\operatorname{E}$, and the function $g$ as a random variable X. - -Note that the equality holds if and only if φ is a linear function on some convex set $A$ such that $\mathrm{P}(X \in A) = 1$ (which follows by inspecting the measure-theoretical proof below). - -More generally, let T be a real topological vector space, and X a T-valued integrable random variable. In this general setting, integrable means that there exists an element $\operatorname{E}[X]$ in T, such that for any element z in the dual space of T: $\operatorname{E}|\langle z, X \rangle|<\infty $, and $\langle z, \operatorname{E}[X]\rangle = \operatorname{E}[\langle z, X \rangle]$. Then, for any measurable convex function φ and any sub-σ-algebra $\mathfrak{G}$ of $\mathfrak{F}$: -$$ -\varphi\left(\operatorname{E}\left[X\mid\mathfrak{G}\right]\right) \leq \operatorname{E}\left[\varphi(X)\mid\mathfrak{G}\right]. -$$ - -Here $\operatorname{E}[\cdot\mid\mathfrak{G}]$ stands for the expectation conditioned to the σ-algebra $\mathfrak{G}$. This general statement reduces to the previous ones when the topological vector space T is the real axis, and $\mathfrak{G}$ is the trivial σ-algebra {∅, Ω} (where ∅ is the empty set, and Ω is the sample space). - -Let X be a one-dimensional random variable with mean $\mu$ and variance $\sigma^2\ge 0$. Let $\varphi(x)$ be a twice differentiable function, and define the function -$$ -h(x)\triangleq\frac{\varphi \left(x\right)-\varphi \left(\mu \right)}{\left(x-\mu\right)^2}-\frac{\varphi '\left(\mu \right)}{x-\mu}. -$$ - -Then -$$ -\sigma^2\inf_x \frac{\varphi''(x)}{2} \le \sigma^2\inf_x h(x) \leq \operatorname{E}\left[\varphi \left(X\right)\right]-\varphi\left(\operatorname{E}[X]\right)\le \sigma^2\sup_x h(x) \le \sigma^2\sup_x \frac{\varphi''(x)}{2}. -$$ - -In particular, when $\varphi(x)$ is convex, then $\varphi''(x)\ge 0$, and the standard form of Jensen's inequality immediately follows for the case where $\varphi(x)$ is additionally assumed to be twice differentiable. - -Jensen's inequality can be proved in several ways, and three different proofs corresponding to the different statements above will be offered. Before embarking on these mathematical derivations, however, it is worth analyzing an intuitive graphical argument based on the probabilistic case where X is a real number (see figure). Assuming a hypothetical distribution of X values, one can immediately identify the position of $\operatorname{E}[X]$ and its image $ \varphi(\operatorname{E}[X])$ in the graph. Noticing that for convex mappings Y = φ(X) the corresponding distribution of Y values is increasingly "stretched out" for increasing values of X, it is easy to see that the distribution of Y is broader in the interval corresponding to X > X_0 and narrower in X < X_0 for any X_0; in particular, this is also true for $ X_0 = \operatorname{E}[X]$.
Consequently, in this picture the expectation of Y will always shift upwards with respect to the position of $ \varphi(\operatorname{E}[X])$. A similar reasoning holds if the distribution of X covers a decreasing portion of the convex function, or both a decreasing and an increasing portion of it. This "proves" the inequality, i.e. -$$ -\varphi(\operatorname{E}[X]) \leq \operatorname{E}[\varphi(X)] = \operatorname{E}[Y], -$$ - -with equality when φ(X) is not strictly convex, e.g. when it is a straight line, or when X follows a degenerate distribution (i.e. is a constant). - -The proofs below formalize this intuitive notion. - -If λ1 and λ2 are two arbitrary nonnegative real numbers such that λ1 + λ2 = 1 then convexity of φ implies -$$ -\forall x_1, x_2: \qquad \varphi \left (\lambda_1 x_1+\lambda_2 x_2 \right )\leq \lambda_1\varphi(x_1)+\lambda_2\varphi(x_2). -$$ - -This can be generalized: if λ1, ..., λn are nonnegative real numbers such that λ1 + ... + λn = 1, then -$$ -\varphi(\lambda_1 x_1+\lambda_2 x_2+\cdots+\lambda_n x_n)\leq \lambda_1\varphi(x_1)+\lambda_2\varphi(x_2)+\cdots+\lambda_n\varphi(x_n), -$$ - -for any x1, ..., xn. - -The finite form of Jensen's inequality can be proved by induction: by the convexity hypothesis, the statement is true for n = 2. Suppose the statement is true for some n, so -$$ -\varphi\left(\sum_{i=1}^{n}\lambda_i x_i\right) \leq \sum_{i=1}^{n}\lambda_i \varphi\left(x_i\right) -$$ - -for any λ1, ..., λn such that λ1 + ... + λn = 1. - -One needs to prove it for n + 1. At least one of the λi is strictly smaller than $1$, say λn+1; therefore by the convexity inequality: -$$ -\begin{align} -\varphi\left(\sum_{i=1}^{n+1}\lambda_i x_i\right) &= \varphi\left((1-\lambda_{n+1})\sum_{i=1}^{n} \frac{\lambda_i}{1-\lambda_{n+1}} x_i + \lambda_{n+1} x_{n+1} \right) \\ -&\leq (1-\lambda_{n+1}) \varphi\left(\sum_{i=1}^{n} \frac{\lambda_i}{1-\lambda_{n+1}} x_i \right)+\lambda_{n+1}\varphi(x_{n+1}). -\end{align} -$$ - -Since λ1 + ... + λn + λn+1 = 1, -$$ -\sum_{i=1}^{n} \frac{\lambda_i}{1-\lambda_{n+1}} = 1, -$$ - -applying the induction hypothesis gives -$$ -\varphi\left(\sum_{i=1}^{n}\frac{\lambda_i}{1-\lambda_{n+1}} x_i\right) \leq \sum_{i=1}^{n}\frac{\lambda_i}{1-\lambda_{n+1}} \varphi(x_i), -$$ - -therefore -$$ -\varphi\left(\sum_{i=1}^{n+1}\lambda_i x_i\right) \leq (1-\lambda_{n+1}) \sum_{i=1}^{n}\frac{\lambda_i}{1-\lambda_{n+1}} \varphi(x_i)+\lambda_{n+1}\varphi(x_{n+1}) =\sum_{i=1}^{n+1}\lambda_i \varphi(x_i). -$$ - -We deduce that the inequality is true for n + 1; by the principle of mathematical induction it follows that the result is true for all integers n greater than or equal to 2. - -In order to obtain the general inequality from this finite form, one needs to use a density argument. The finite form can be rewritten as: -$$ -\varphi\left(\int x d\mu_n(x) \right)\leq \int \varphi(x) d\mu_n(x), -$$ - -where μn is a measure given by an arbitrary convex combination of Dirac deltas: -$$ -\mu_n= \sum_{i=1}^n \lambda_i \delta_{x_i}. -$$ - -Since convex functions are continuous, and since convex combinations of Dirac deltas are weakly dense in the set of probability measures (as can be easily verified), the general statement is obtained simply by a limiting procedure. - -Let g be a real-valued μ-integrable function on a probability space Ω, and let φ be a convex function on the real numbers.
Since φ is convex, at each real number x we have a nonempty set of subderivatives, which may be thought of as lines touching the graph of φ at x, but which are at or below the graph of φ at all points (support lines of the graph). - -Now, if we define -$$ -x_0:=\int_\Omega g d\mu, -$$ - -because of the existence of subderivatives for convex functions, we may choose a and b such that -$$ -ax + b \leq \varphi(x), -$$ - -for all real x and -$$ -ax_0+ b = \varphi(x_0). -$$ - -But then we have that -$$ -\varphi \circ g (x) \geq ag(x)+ b -$$ - -for all x. Since we have a probability measure, the integral is monotone with μ(Ω) = 1 so that -$$ -\int_\Omega \varphi \circ g d\mu \geq \int_\Omega (ag + b) d\mu = a\int_\Omega g d\mu + b\int_\Omega d\mu = ax_0 + b = \varphi (x_0) = \varphi \left (\int_\Omega g d\mu \right ), -$$ - -as desired. - -Let X be an integrable random variable that takes values in a real topological vector space T. Since $\varphi: T \to \R$ is convex, for any $x,y \in T$, the quantity -$$ -\frac{\varphi(x+\theta y)-\varphi(x)}{\theta}, -$$ - -is decreasing as θ approaches 0+. In particular, the subdifferential of $\varphi$ evaluated at x in the direction y is well-defined by -$$ -(D\varphi)(x)\cdot y:=\lim_{\theta \downarrow 0} \frac{\varphi(x+\theta y)-\varphi(x)}{\theta}=\inf_{\theta > 0} \frac{\varphi(x+\theta y)-\varphi(x)}{\theta}. -$$ - -It is easily seen that the subdifferential is linear in y (strictly speaking, this assertion requires the Hahn–Banach theorem to be proved) and, since the infimum taken in the right-hand side of the previous formula is smaller than the value of the same term for θ = 1, one gets -$$ -\varphi(x)\leq \varphi(x+y)-(D\varphi)(x)\cdot y. -$$ - -In particular, for an arbitrary sub-σ-algebra $ \mathfrak{G}$ we can evaluate the last inequality when $ x = \operatorname{E}[X\mid\mathfrak{G}],y=X-\operatorname{E}[X\mid\mathfrak{G}]$ to obtain -$$ -\varphi(\operatorname{E}[X\mid\mathfrak{G}]) \leq \varphi(X)-(D\varphi)(\operatorname{E}[X\mid\mathfrak{G}])\cdot (X-\operatorname{E}[X\mid\mathfrak{G}]). -$$ - -Now, if we take the expectation conditioned to $ \mathfrak{G}$ on both sides of the previous expression, we get the result since: -$$ -\operatorname{E} \left [\left[(D\varphi)(\operatorname{E}[X\mid\mathfrak{G}])\cdot (X-\operatorname{E}[X\mid\mathfrak{G}])\right]\mid\mathfrak{G} \right] = (D\varphi)(\operatorname{E}[X\mid\mathfrak{G}])\cdot \operatorname{E}[\left( X-\operatorname{E}[X\mid\mathfrak{G}] \right) \mid \mathfrak{G}]=0, -$$ - -by the linearity of the subdifferential in the y variable, and the following well-known property of the conditional expectation: -$$ -\operatorname{E} \left [ \left(\operatorname{E}[X\mid\mathfrak{G}] \right) \mid\mathfrak{G} \right ] = \operatorname{E}[ X \mid\mathfrak{G}]. -$$ - -Suppose Ω is a measurable subset of the real line and f(x) is a non-negative function such that -$$ -\int_{-\infty}^\infty f(x)dx = 1. -$$ - -In probabilistic language, f is a probability density function. - -Then Jensen's inequality becomes the following statement about convex integrals: - -If g is any real-valued measurable function and $\varphi$ is convex over the range of g, then -$$ - \varphi\left(\int_{-\infty}^\infty g(x)f(x) dx\right) \le \int_{-\infty}^\infty \varphi(g(x)) f(x) dx. -$$ - -If g(x) = x, then this form of the inequality reduces to a commonly used special case: -$$ -\varphi\left(\int_{-\infty}^\infty x f(x) dx\right) \le \int_{-\infty}^\infty \varphi(x)f(x) dx. -$$ - -This is applied in Variational Bayesian methods.
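Before the applications below, here is a quick numerical illustration of the probabilistic form (a sketch of our own; any integrable random variable and convex φ would do). For φ = exp and X standard normal, E[exp(X)] = exp(1/2) ≈ 1.649 while exp(E[X]) = 1:

```python
# Numerical illustration of Jensen's inequality: phi(E[X]) <= E[phi(X)].
import math
import random

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(200_000)]

mean_x = sum(xs) / len(xs)
mean_phi = sum(math.exp(x) for x in xs) / len(xs)
print(math.exp(mean_x), "<=", mean_phi)   # ~1.0 <= ~1.65
```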
- -If $g(x) = x^{2n}$, and X is a random variable, then g is convex as -$$ - \frac{d^{2}g}{dx^{2}}(x) = 2n(2n - 1)x^{2n - 2} \geq 0\quad \forall\ x \in \R -$$ - -and so -$$ -g(\operatorname{E}[X]) = (\operatorname{E}[X])^{2n} \leq\operatorname{E}[X^{2n}]. -$$ - -In particular, if some even moment 2n of X is finite, X has a finite mean. An extension of this argument shows X has finite moments of every order $l\in\N$ dividing n. - -Let Ω = {x1, ... xn}, and take μ to be the counting measure on Ω, then the general form reduces to a statement about sums: -$$ - \varphi\left(\sum_{i=1}^{n} g(x_i)\lambda_i \right) \le \sum_{i=1}^{n} \varphi(g(x_i)) \lambda_i, -$$ - -provided that λi ≥ 0 and -$$ -\lambda_1 + \cdots + \lambda_n = 1. -$$ - -There is also an infinite discrete form. - -Jensen's inequality is of particular importance in statistical physics when the convex function is an exponential, giving: -$$ - e^{\operatorname{E}[X]} \leq \operatorname{E} \left [ e^X \right ], -$$ - -where the expected values are with respect to some probability distribution in the random variable X. - -The proof in this case is very simple (cf. Chandler, Sec. 5.5). The desired inequality follows directly, by writing -$$ - \operatorname{E} \left [ e^X \right ] = e^{\operatorname{E}[X]} \operatorname{E} \left [ e^{X - \operatorname{E}[X]} \right] -$$ - -and then applying the inequality $e^X \geq 1 + X$ to the final exponential. - -If p(x) is the true probability density for X, and q(x) is another density, then applying Jensen's inequality for the random variable Y(X) = q(X)/p(X) and the convex function φ(y) = −log(y) gives -$$ -\operatorname{E}[\varphi(Y)] \ge \varphi(\operatorname{E}[Y]) -$$ - -Therefore: -$$ --D(p(x)\|q(x))=\int p(x) \log \left (\frac{q(x)}{p(x)} \right ) dx \le \log \left ( \int p(x) \frac{q(x)}{p(x)}dx \right ) = \log \left (\int q(x)dx \right ) =0 -$$ - -a result called Gibbs' inequality. - -It shows that the average message length is minimised when codes are assigned on the basis of the true probabilities p rather than any other distribution q. The quantity that is non-negative is called the Kullback–Leibler divergence of q from p. - -Since −log(x) is a strictly convex function for x > 0, it follows that equality holds when p(x) equals q(x) almost everywhere. - -If L is a convex function and $\mathfrak{G}$ a sub-sigma-algebra, then, from the conditional version of Jensen's inequality, we get -$$ -L(\operatorname{E}[\delta(X) \mid \mathfrak{G}]) \le \operatorname{E}[L(\delta(X)) \mid \mathfrak{G}] \quad \Longrightarrow \quad \operatorname{E}[L(\operatorname{E}[\delta(X) \mid \mathfrak{G}])] \le \operatorname{E}[L(\delta(X))]. -$$ - -So if δ(X) is some estimator of an unobserved parameter θ given a vector of observables X; and if T(X) is a sufficient statistic for θ; then an improved estimator, in the sense of having a smaller expected loss L, can be obtained by calculating -$$ -\delta_1 (X) = \operatorname{E}_{\theta}[\delta(X') \mid T(X')= T(X)], -$$ - -the expected value of δ with respect to θ, taken over all possible vectors of observations X compatible with the same value of T(X) as that observed. Further, because T is a sufficient statistic, $\delta_1 (X)$ does not depend on θ and hence is a statistic. - -This result is known as the Rao–Blackwell theorem.
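The Gibbs' inequality derived above is also easy to check numerically for discrete distributions. This small sketch (our own illustration) computes the Kullback–Leibler divergence of q from p:

```python
# Quick numerical check of Gibbs' inequality: the Kullback-Leibler divergence
# D(p||q) = sum_i p_i * log(p_i / q_i) is non-negative, and zero only when
# the two discrete distributions coincide.
import math

p = [0.5, 0.3, 0.2]
q = [0.25, 0.25, 0.5]
kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
print(kl)  # ~0.218 > 0; with q = p it would be exactly 0
```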
diff --git a/wiki/wikipedia/604.txt b/wiki/wikipedia/604.txt deleted file mode 100644 index b6c0d8c877b300d3e2aa729cbab3a9bf7a5d4a42..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/604.txt +++ /dev/null @@ -1,44 +0,0 @@ -In mathematics, in the area of complex analysis, Carlson's theorem is a uniqueness theorem which was discovered by Fritz David Carlson. Informally, it states that two different analytic functions which do not grow very fast at infinity cannot coincide at the integers. The theorem may be obtained from the Phragmén–Lindelöf theorem, which is itself an extension of the maximum-modulus theorem. - -Carlson's theorem is typically invoked to defend the uniqueness of a Newton series expansion. Carlson's theorem has generalized analogues for other expansions. - -Assume that f satisfies the following three conditions: the first two conditions bound the growth of f at infinity, whereas the third one states that f vanishes on the non-negative integers. - -* f(z) is an entire function of exponential type, meaning that -$$ -|f(z)| \leq C e^{\tau|z|}, \quad z \in \mathbb{C} -$$ - -for some real values C, τ. - -* There exists c < π such that -$$ -|f(iy)| \leq C e^{c|y|}, \quad y \in \mathbb{R} -$$ - -* f(n) = 0 for any non-negative integer n. - -Then f is identically zero. - -The first condition may be relaxed: it is enough to assume that f is analytic in Re z > 0, continuous in Re z ≥ 0, and satisfies -$$ -|f(z)| \leq C e^{\tau|z|}, \quad \operatorname{Re} z > 0 -$$ - -for some real values C, τ. - -To see that the second condition is sharp, consider the function f(z) = sin(πz). It vanishes on the integers; however, it grows exponentially on the imaginary axis with a growth rate of c = π, and indeed it is not identically zero. - -A result, due to Rubel, relaxes the condition that f vanish on the integers. Namely, Rubel showed that the conclusion of the theorem remains valid if f vanishes on a subset A ⊂ {0, 1, 2, …} of upper density 1, meaning that -$$ - \limsup_{n \to \infty} \frac{\left| A \cap \{0,1,\cdots,n-1\} \right|}{n} = 1. -$$ - -This condition is sharp, meaning that the theorem fails for sets A of upper density smaller than 1. - -Suppose f(z) is a function that possesses all finite forward differences $\Delta^n f(0)$. Consider then the Newton series -$$ -g(z)=\sum_{n=0}^\infty {z \choose n} \Delta^n f(0) -$$ - -where ${z \choose n}$ is the binomial coefficient and $\Delta^n f(0)$ is the n-th forward difference. By construction, one then has that f(k) = g(k) for all non-negative integers k, so that the difference h(k) = f(k) − g(k) = 0. This is one of the conditions of Carlson's theorem; if h obeys the others, then h is identically zero, and the finite differences for f uniquely determine its Newton series. That is, if a Newton series for f exists, and the difference satisfies the Carlson conditions, then f is unique. diff --git a/wiki/wikipedia/605.txt b/wiki/wikipedia/605.txt deleted file mode 100644 index 45cd29214a9887da906669a3addf7cbe8d915c25..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/605.txt +++ /dev/null @@ -1,14 +0,0 @@ -In probability theory, the Vysochanskij–Petunin inequality gives a lower bound for the probability that a random variable with finite variance lies within a certain number of standard deviations of the variable's mean, or equivalently an upper bound for the probability that it lies further away. The sole restrictions on the distribution are that it be unimodal and have finite variance.
(This implies that it is a continuous probability distribution except at the mode, which may have a non-zero probability.) - -The theorem applies even to heavily skewed distributions and puts bounds on how much of the data is, or is not, "in the middle." - -Let X be a random variable with unimodal distribution, mean μ and finite, non-zero variance $\sigma^2$. Then, for any $\lambda > \sqrt{\frac{8}{3}} = 1.63299\ldots$, -$$ -P(\left|X-\mu\right|\geq \lambda\sigma)\leq\frac{4}{9\lambda^2}. -$$ - -(A relatively elementary proof of this result is known.) Furthermore, the equality is attained for a random variable having a probability $1 - 4/(3\lambda^2)$ of being exactly equal to the mean, and which, when it is not equal to the mean, is distributed uniformly in an interval centred on the mean. When $\lambda < \sqrt{\frac{8}{3}}$, there exist non-symmetric distributions for which the $\frac{4}{9\lambda^2}$ bound is exceeded. - -The theorem refines Chebyshev's inequality by including the factor of 4/9, made possible by the condition that the distribution be unimodal. - -It is common, in the construction of control charts and other statistical heuristics, to set λ = 3, corresponding to an upper probability bound of 4/81 = 0.04938..., and to construct 3-sigma limits to bound nearly all (i.e. 95%) of the values of a process output. Without unimodality Chebyshev's inequality would give a looser bound of 1/9 = 0.11111.... diff --git a/wiki/wikipedia/606.txt b/wiki/wikipedia/606.txt deleted file mode 100644 index b799cc7f54842a204a663cd130524f0bf493d4aa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/606.txt +++ /dev/null @@ -1,3 +0,0 @@ -Log shipping is the process of automating the backup of transaction log files on a primary (production) database server, and then restoring them onto a standby server. This technique is supported by Microsoft SQL Server, 4D Server, MySQL, and PostgreSQL. Similar to replication, the primary purpose of log shipping is to increase database availability by maintaining a backup server that can replace a production server quickly. Other databases such as Adaptive Server Enterprise and Oracle Database support the technique but require the Database Administrator to write code or scripts to perform the work. - -Although the actual failover mechanism in log shipping is manual, this implementation is often chosen due to its low cost in human and server resources, and ease of implementation. In comparison, SQL server clusters enable automatic failover, but at the expense of much higher storage costs. Compared to database replication, log shipping does not provide as much in terms of reporting capabilities, but backs up system tables along with data tables, and locks the standby server from users' modifications. A replicated server can be modified (e.g. views) and is therefore unsuitable for failover purposes. diff --git a/wiki/wikipedia/607.txt b/wiki/wikipedia/607.txt deleted file mode 100644 index 8327bc92e93b05e9a4750463f318dfedc5bcb5d3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/607.txt +++ /dev/null @@ -1,33 +0,0 @@ -The vehicle routing problem (VRP) is a combinatorial optimization and integer programming problem which asks "What is the optimal set of routes for a fleet of vehicles to traverse in order to deliver to a given set of customers?". It generalises the well-known travelling salesman problem (TSP). It first appeared in a paper by George Dantzig and John Ramser in 1959, in which the first algorithmic approach was written and was applied to petrol deliveries.
Often, the context is that of delivering goods located at a central depot to customers who have placed orders for such goods. The objective of the VRP is to minimize the total route cost. In 1964, Clarke and Wright improved on Dantzig and Ramser's approach using an effective greedy approach called the savings algorithm. - -Determining the optimal solution to VRP is NP-hard, so the size of problems that can be solved, optimally, using mathematical programming or combinatorial optimization may be limited. Therefore, commercial solvers tend to use heuristics due to the size and frequency of real world VRPs they need to solve. - -The VRP has many direct applications in industry. In fact, the use of computer optimization programs can give savings of 5% to a company as transportation is usually a significant component of the cost of a product (10%) - indeed, the transportation sector makes up 10% of the EU's GDP. Consequently, any savings created by the VRP, even less than 5%, are significant. - -There are three main approaches to modelling the VRP: - -#Vehicle flow formulations—this uses integer variables associated with each arc that count the number of times that the edge is traversed by a vehicle. It is generally used for basic VRPs. This is good for cases where the solution cost can be expressed as the sum of any costs associated with the arcs. However it can't be used to handle many practical applications. - -#Commodity flow formulations—additional integer variables are associated with the arcs or edges, representing the flow of commodities along the paths travelled by the vehicles. - -#Set partitioning formulations—these have an exponential number of binary variables, each associated with a different feasible circuit; the VRP is then formulated as a set partitioning problem. - -One family of vehicle flow constraints is the Miller–Tucker–Zemlin family, first proposed for the travelling salesman problem and subsequently extended by Christofides, Mingozzi and Toth: -$$ -u_j-u_i\geq d_j-C(1-x_{ij}) ~~~~~~\forall i,j \in V\backslash\{0\}, i\neq j~~~~\text{s.t. } d_i +d_j \leq C -$$ - -$$ -0 \leq u_i \leq C-d_i ~~~~~~\forall i \in V\backslash \{0\} -$$ - -where $ u_i,~i \in V \backslash \{0\}$ is an additional continuous variable which represents the load left in the vehicle after visiting customer $i$ and $d_i$ is the demand of customer $i$. These impose both the connectivity and the capacity requirements. When $x_{ij}=0$ the constraint is not binding, since $u_i\leq C$ and $u_j\geq d_j$, whereas when $x_{ij} = 1$ it imposes that $u_j \geq u_i +d_j$. - -These have been used extensively to model the basic VRP (CVRP) and the VRPB. However, their power is limited to these simple problems. They can only be used when the cost of the solution can be expressed as the sum of the arc costs. Nor can we know which vehicle traverses each arc. Hence we cannot use this for more complex models where the cost and/or feasibility is dependent on the order of the customers or the vehicles used. - -There are many methods to solve vehicle routing problems manually. For example, optimum routing is a big efficiency issue for forklifts in large warehouses. Some of the manual methods to decide upon the most efficient route are: Largest gap, S-shape, Aisle-by-aisle, Combined and Combined +. While the Combined + method is the most complex, and thus the hardest for lift truck operators to use, it is the most efficient routing method. Still, the percentage difference between the manual optimum routing method and the real optimum route was on average 13%. - -Due to the difficulty of solving to optimality large-scale instances of vehicle routing problems, a significant research effort has been dedicated to metaheuristics such as Genetic algorithms, Tabu search, Simulated annealing and Adaptive Large Neighborhood Search (ALNS).
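The Clarke–Wright savings algorithm mentioned at the start of this article is simple enough to sketch in full. The following is our own illustrative implementation under simplifying assumptions (single depot at index 0, Euclidean distances, merges only at route endpoints; a fuller version would also consider reversing routes), not production code:

```python
# Sketch of the Clarke-Wright savings heuristic for the capacitated VRP.
import math

def savings_routes(coords, demand, capacity):
    """coords[0] is the depot; returns a list of routes (customer index lists)."""
    n = len(coords)
    d = lambda i, j: math.dist(coords[i], coords[j])
    # saving of serving i and j on one route: s(i,j) = d(0,i) + d(0,j) - d(i,j)
    pairs = sorted(((d(0, i) + d(0, j) - d(i, j), i, j)
                    for i in range(1, n) for j in range(i + 1, n)), reverse=True)
    routes = {i: [i] for i in range(1, n)}      # initially one route per customer
    load = {i: demand[i] for i in range(1, n)}
    where = {i: i for i in range(1, n)}         # route id containing customer i
    for s, i, j in pairs:                       # best savings first
        ri, rj = where[i], where[j]
        if ri == rj or load[ri] + load[rj] > capacity:
            continue
        if routes[ri][-1] == i and routes[rj][0] == j:  # merge end-to-start
            routes[ri] += routes[rj]
            load[ri] += load[rj]
            for k in routes[rj]:
                where[k] = ri
            del routes[rj], load[rj]
    return list(routes.values())

# Two nearby unit-demand customers share a vehicle of capacity 2:
print(savings_routes([(0, 0), (1, 0), (2, 0), (0, 2)], [0, 1, 1, 1], 2))
# -> [[1, 2], [3]]
```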
Some of the most recent and efficient metaheuristics for vehicle routing problems reach solutions within 0.5% or 1% of the optimum for problem instances counting hundreds or thousands of delivery points. - -These methods are also more robust in the sense that they can be more easily adapted to deal with a variety of side constraints. As such, the application of metaheuristic techniques is often preferred for large-scale applications with complicating constraints and decision sets. diff --git a/wiki/wikipedia/608.txt b/wiki/wikipedia/608.txt deleted file mode 100644 index 74f86b51d8280156e749b761df5b3af65a6668eb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/608.txt +++ /dev/null @@ -1,70 +0,0 @@ -Unsolved problem in computer science: Is there an algorithm to solve the 3SUM problem in time $O(n^{2-\epsilon})$, for some $\epsilon>0$? - -In computational complexity theory, the 3SUM problem asks if a given set of $n$ real numbers contains three elements that sum to zero. A generalized version, k-SUM, asks the same question on k numbers. 3SUM can be easily solved in $O(n^2)$ time, and matching $\Omega(n^{\lceil k/2 \rceil})$ lower bounds are known in some specialized models of computation. - -It was conjectured that any deterministic algorithm for 3SUM requires $ \Omega(n^2) $ time. - -In 2014, the original 3SUM conjecture was refuted by Allan Grønlund and Seth Pettie who gave a deterministic algorithm that solves 3SUM in $O(n^2 / ({\log n} / {\log \log n})^{2/3})$ time. - -Additionally, Grønlund and Pettie showed that the 4-linear decision tree complexity of 3SUM is $ O(n^{3/2}\sqrt{\log n}) $. - -These bounds were subsequently improved. - -The current best known algorithm for 3SUM runs in $O(n^2 (\log \log n)^{O(1)} / {\log^2 n}) $ time. - -Kane, Lovett, and Moran showed that the 6-linear decision tree complexity of 3SUM is $O(n{\log^2 n})$. The latter bound is tight (up to a logarithmic factor). - -It is still conjectured that 3SUM is unsolvable in $O(n^{2-\Omega(1)})$ expected time. - -In the Conv3SUM problem (a convolution variant of 3SUM), the input is an array $S$ of $n$ numbers, and the goal is to decide whether there exist indices $i, j$ such that -$$ -S[i+j]=S[i]+S[j] -$$ - -Given a solver for 3SUM, the Conv3SUM problem can be solved in the following way. - -The reduction uses a hash function. As a first approximation, assume that we have a linear hash function, i.e. a function h such that: -$$ -h(x+y)=h(x)+h(y) -$$ - -Suppose that all elements are integers in the range 0...N-1, and that the function h maps each element to an element in the smaller range of indices 0...n-1. Create a new array T and send each element of S to its hash value in T, i.e., for every $x \in S$: -$$ -T[h(x)] = x -$$ - -Initially, suppose that the mappings are unique (i.e. each cell in T accepts only a single element from S). Solve Conv3SUM on T. Now: - -* If there is a solution for 3SUM: $z=x+y$, then: $T[h(z)]=T[h(x)]+T[h(y)]$ and $h(z)=h(x)+h(y)$, so this solution will be found by the Conv3SUM solver on T. - -* Conversely, if a Conv3SUM is found on T, then obviously it corresponds to a 3SUM solution on S since T is just a permutation of S. - -This idealized solution doesn't work, because any hash function might map several distinct elements of S to the same cell of T. The trick is to create an array T^* by selecting a single random element from each cell of T, and run Conv3SUM on T^*. If a solution is found, then it is a correct solution for 3SUM on S. If no solution is found, then create a different random T^* and try again. Suppose there are at most R elements in each cell of T.
Then the probability of finding a solution (if a solution exists) is the probability that the random selection will select the correct element from each cell, which is $(1/R)^3$. By running Conv3SUM $R^3$ times, the solution will be found with a high probability. - -Unfortunately, we do not have linear perfect hashing, so we have to use an almost linear hash function, i.e. a function h such that: -$$ -h(x+y)=h(x)+h(y) -$$ or -$$ -h(x+y)=h(x)+h(y)+1 -$$ - -This requires duplicating the elements of S when copying them into T, i.e., putting every element $x\in S$ both in $T[h(x)]$ (as before) and in $T[h(x)-1]$. So each cell will have 2R elements, and we will have to run Conv3SUM $(2R)^3$ times. - -A problem is called 3SUM-hard if solving it in subquadratic time implies a subquadratic-time algorithm for 3SUM. The concept of 3SUM-hardness was introduced by Gajentaan and Overmars. They proved that a large class of problems in computational geometry are 3SUM-hard, including the following ones. (The authors acknowledge that many of these problems are contributed by other researchers.) - -* Given a set of lines in the plane, are there three that meet in a point? - -* Given a set of non-intersecting axis-parallel line segments, is there a line that separates them into two non-empty subsets? - -* Given a set of infinite strips in the plane, do they fully cover a given rectangle? - -* Given a set of triangles in the plane, compute their measure. - -* Given a set of triangles in the plane, does their union have a hole? - -* A number of visibility and motion planning problems, e.g., - -** Given a set of horizontal triangles in space, can a particular triangle be seen from a particular point? - -** Given a set of non-intersecting axis-parallel line segment obstacles in the plane, can a given rod be moved by translations and rotations between start and finish positions without colliding with the obstacles? - -By now there are a multitude of other problems that fall into this category. An example is the decision version of X + Y sorting: given sets of numbers $X$ and $Y$ of $n$ elements each, are there $n^2$ distinct values $x + y$ for $x \in X$, $y \in Y$? diff --git a/wiki/wikipedia/609.txt b/wiki/wikipedia/609.txt deleted file mode 100644 index 8f50c045dc6579c77fc5d0ef292f5621966feb6e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/609.txt +++ /dev/null @@ -1,61 +0,0 @@ -The Erdős–Straus conjecture is an unproven statement in number theory, according to which every fraction $4/n$ can be written as a sum of three positive unit fractions $1/x+1/y+1/z$. It is named after Paul Erdős and Ernst G. Straus, who formulated the conjecture in 1948, but it is connected to much more ancient mathematics: sums of unit fractions, like the one in this problem, are known as Egyptian fractions, because of their use in ancient Egyptian mathematics. The Erdős–Straus conjecture is one of many conjectures by Erdős, and one of many unsolved problems in mathematics concerning Diophantine equations. - -Although a solution is not known for all values of n, solutions are known for infinitely many values, and these known values can be used to speed up searches for counterexamples. For instance, these searches need only consider values of $n$ that are prime numbers, because if $n$ is a composite number, $n=pq$, then an expansion for $4/n$ could be found from an expansion for $4/p$ or $4/q$.
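Spelling out this factorization step: if an expansion is known for $4/p$, multiplying each denominator by $q$ gives one for $4/n$ with $n=pq$,
$$
\frac4p=\frac1x+\frac1y+\frac1z \quad\Longrightarrow\quad \frac{4}{pq}=\frac{1}{qx}+\frac{1}{qy}+\frac{1}{qz},
$$
and symmetrically starting from an expansion of $4/q$.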
Additionally, the values of $n$ that belong to certain infinite arithmetic progressions have solutions given by simple formulas, although these progressions cannot contain any square numbers. Combining the ideas of factorization and progressions shows that, if a counterexample to the Erdős–Straus conjecture exists, the smallest $n$ forming a counterexample must be a prime number congruent to one of six values modulo 840. Computer searches have verified the truth of the conjecture up to $n\le 10^{17}$, but proving it for all $n$ remains an open problem. - -The restriction that the three unit fractions be positive is essential to the difficulty of the problem, for if negative values were allowed the problem could always be solved. - -More formally, the conjecture states that, for every integer $n\ge 2$, there exist positive integers $x$, $y$, and $z$ such that -$$ -\frac4n = \frac1x + \frac1y + \frac1z. -$$ - -For instance, for $n=5$, there are two solutions: -$$ -\frac45=\frac12+\frac14+\frac1{20}=\frac12+\frac15+\frac1{10}. -$$ - -The conjecture was formulated in 1948 by Paul Erdős and Ernst G. Straus, and published by Erdős; another paper on the problem, by Obláth, was also originally submitted in 1948. - -Some researchers additionally require these integers to be distinct from each other, while others allow them to be equal. For $n\ge 3$, it does not matter whether they are required to be distinct: if there exists a solution with any three integers $x$, $y$, and $z$ then there exists a solution with distinct integers. For $n=2$, however, the only solutions are permutations of $\tfrac42=\tfrac12+\tfrac12+\tfrac11$. When $x$, $y$, and $z$ are distinct then these unit fractions form an Egyptian fraction representation of the number $\tfrac4n$. - -The search for expansions of rational numbers as sums of unit fractions dates to the mathematics of ancient Egypt, in which Egyptian fraction expansions of this type were used as a notation for recording fractional quantities. The Egyptians produced tables such as the Rhind Mathematical Papyrus table of expansions of fractions of the form $\tfrac2n$, most of which use either two or three terms. Egyptian fractions typically have an additional constraint, that all of the unit fractions be distinct from each other, but for the purposes of the Erdős–Straus conjecture this makes no difference: if $\tfrac4n$ can be expressed as a sum of three unit fractions, for $n>2$, it can also be expressed as a sum of three distinct unit fractions by repeatedly replacing any duplicated fraction by one of the following two expansions, -$$ -\begin{align} -\frac{1}{2r}+\frac{1}{2r} &\Rightarrow \frac{1}{r+1}+\frac{1}{r(r+1)}\\ -\frac{1}{2r+1}+\frac{1}{2r+1} &\Rightarrow \frac{1}{r+1}+\frac{1}{(r+1)(2r+1)}\\ -\end{align} -$$ -(according to whether the repeated fraction has an even or odd denominator) until no duplicate fractions remain. - -The greedy algorithm for Egyptian fractions, first described in 1202 by Fibonacci in his book Liber Abaci, finds an expansion in which each successive term is the largest unit fraction that is no larger than the remaining number to be represented. For fractions of the form $\tfrac2n$ or $\tfrac3n$, the greedy algorithm uses at most two or three terms respectively. A number of the form $\tfrac3n$ has a two-term expansion if and only if $n$ has a factor congruent to 2 modulo 3, and requires three terms in any expansion otherwise.
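The greedy method is easy to make concrete. The following minimal Python sketch (the function name is ours) expands $a/n$ greedily; consistent with the discussion below, it returns three denominators for $4/5$ but four for $4/17$:

```python
from fractions import Fraction
import math

def greedy_egyptian(a, n):
    """Fibonacci's greedy expansion of a/n as a sum of distinct unit fractions."""
    r = Fraction(a, n)
    denominators = []
    while r > 0:
        d = math.ceil(1 / r)        # smallest d with 1/d <= r
        denominators.append(d)
        r -= Fraction(1, d)
    return denominators

# greedy_egyptian(4, 5) == [2, 4, 20], i.e. 4/5 = 1/2 + 1/4 + 1/20, while
# greedy_egyptian(4, 17) == [5, 29, 1233, 3039345] needs four terms (17 = 1 mod 4).
```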
- -Thus, for the numerators 2 and 3, the question of how many terms are needed in an Egyptian fraction is completely settled, and fractions of the form $\tfrac4n$ are the first case in which the worst-case length of an expansion remains unknown. The greedy algorithm produces expansions of length two, three, or four depending on the value of $n$ modulo 4; when $n$ is congruent to 1 modulo 4, the greedy algorithm produces four-term expansions. Therefore, the worst-case length of an Egyptian fraction of $\tfrac4n$ must be either three or four. The Erdős–Straus conjecture states that, in this case, as in the case for the numerator 3, the maximum number of terms in an expansion is three. - -Multiplying both sides of the equation $\tfrac4n=\tfrac1x+\tfrac1y+\tfrac1z$ by $nxyz$ leads to an equivalent form $4xyz=n(xy+xz+yz)$ for the problem. As a polynomial equation with integer variables, this is an example of a Diophantine equation. The Hasse principle for Diophantine equations asserts that an integer solution of a Diophantine equation should be formed by combining solutions obtained modulo each possible prime number. On the face of it this principle makes little sense for the Erdős–Straus conjecture, as the equation $4xyz=n(xy+xz+yz)$ is easily solvable modulo any prime. Nevertheless, modular identities have proven to be a very important tool in the study of the conjecture. The power of the Hasse principle to solve some problems is limited by the Manin obstruction, but for the Erdős–Straus conjecture this obstruction does not exist. - -For values of $n$ satisfying certain congruence relations, one can find an expansion for $\tfrac4n$ automatically as an instance of a polynomial identity. For instance, whenever $n$ is 2 modulo 3, $\tfrac4n$ has the expansion -$$ -\frac{4}{n} = \frac{1}{n} + \frac{1}{(n+1)/3} + \frac{1}{n(n+1)/3}. -$$ - -Here each of the three denominators $n$, $(n+1)/3$, and $n(n+1)/3$ is a polynomial in $n$, and each is an integer whenever $n$ is 2 modulo 3. The greedy algorithm for Egyptian fractions finds a solution in three or fewer terms whenever $n$ is not 1 or 17 mod 24, and the 17 mod 24 case is covered by the 2 mod 3 relation, so the only values of $n$ for which these two methods do not find expansions in three or fewer terms are those congruent to 1 mod 24. - -If it were possible to find solutions such as the ones above for enough different moduli, forming a complete covering system of congruences, the problem would be solved. However, as Mordell showed, a polynomial identity that provides a solution for values of $n$ congruent to $r$ mod $p$ can exist only when $r$ is not a square modulo $p$ (more formally, if $r$ is not a quadratic residue modulo $p$). For instance, 2 is a non-square mod 3, so Mordell's result allows the existence of an identity for $n$ congruent to 2 mod 3. However, 1 is a square mod 3 (equal to the square of both 1 and 2 mod 3), so there can be no similar identity for all values of $n$ that are congruent to 1 mod 3. More generally, as 1 is a square mod $n$ for all $n>1$, there can be no complete covering system of modular identities for all $n$. - -Polynomial identities listed by Mordell provide three-term Egyptian fractions for $\tfrac4n$ whenever $n$ is one of: - -*2 mod 3 (above), - -*3 mod 4, - -*2 or 3 mod 5, - -*3, 5, or 6 mod 7, or - -*5 mod 8. - -For each modulus 3, 4, 5, 7, or 8, these identities cover all non-squares. However, for larger moduli, some non-squares are also left uncovered by the known identities.
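For small $n$, the existence of a three-term expansion can also be checked by direct search, bounding $x \le y \le z$ through $1/x \le 4/n \le 3/x$ and similarly for $y$. A minimal Python sketch (names ours):

```python
from fractions import Fraction
import math

def three_unit_fractions(n):
    """Return (x, y, z) with 4/n = 1/x + 1/y + 1/z and x <= y <= z, or None."""
    t = Fraction(4, n)
    for x in range(math.ceil(1 / t), math.floor(3 / t) + 1):
        r = t - Fraction(1, x)          # remaining value for 1/y + 1/z
        if r <= 0:
            continue
        for y in range(max(x, math.ceil(1 / r)), math.floor(2 / r) + 1):
            s = r - Fraction(1, y)      # remaining value for 1/z
            if s > 0 and s.numerator == 1:
                return (x, y, s.denominator)
    return None
```

For $n=5$ this returns $(2, 4, 20)$, one of the two solutions shown above.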
Combinations of Mordell's identities can be used to expand $\tfrac4n$ for all $n$, except possibly those that are 1, 121, 169, 289, 361, or 529 mod 840. The smallest prime that these identities do not cover is 1009. By combining larger classes of modular identities, Webb and others showed that the natural density of potential counterexamples to the conjecture is zero: as a parameter $N$ goes to infinity, the fraction of values in the interval $[1,N]$ that could be counterexamples tends to zero in the limit. In particular, if the Erdős–Straus conjecture itself (the case $k=4$ of the generalized conjecture for fractions $\tfrac{k}{n}$) is false, then the number of counterexamples grows only sublinearly. Even more strongly, for any fixed $k$, only a sublinear number of values of $n$ need more than two terms in their Egyptian fraction expansions. The generalized version of the conjecture is equivalent to the statement that the number of unexpandable fractions is not just sublinear but bounded. - -When $n$ is an odd number, by analogy to the problem of odd greedy expansions for Egyptian fractions, one may ask for solutions to $\tfrac{k}{n}=\tfrac1x+\tfrac1y+\tfrac1z$ in which $x$, $y$, and $z$ are distinct positive odd numbers. Solutions to this equation are known to always exist for the case in which k = 3. diff --git a/wiki/wikipedia/61.txt b/wiki/wikipedia/61.txt deleted file mode 100644 index 92c27e69122549bce9d0998921cea3b3026736b5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/61.txt +++ /dev/null @@ -1,9 +0,0 @@ -In graph theory, the graph bandwidth problem is to label the n vertices vi of a graph G with distinct integers f(vi) so that the quantity $\max\{| f(v_i) - f(v_j)| : v_iv_j \in E \}$ is minimized (E is the edge set of G). - -The problem may be visualized as placing the vertices of a graph at distinct integer points along the x-axis so that the length of the longest edge is minimized. Such placement is called linear graph arrangement, linear graph layout or linear graph placement. A heuristic algorithm for obtaining linear graph layouts of low bandwidth is the Cuthill–McKee algorithm. A fast multilevel algorithm for graph bandwidth computation has also been proposed. - -Interest in this problem comes from several application areas. - -One area is sparse matrix/band matrix handling, and general algorithms from this area, such as the Cuthill–McKee algorithm, may be applied to find approximate solutions for the graph bandwidth problem. - -Another application domain is in electronic design automation. In standard cell design methodology, typically standard cells have the same height, and their placement is arranged in a number of rows. In this context, the graph bandwidth problem models the placement of a set of standard cells in a single row with the goal of minimizing the maximal propagation delay (which is assumed to be proportional to wire length). diff --git a/wiki/wikipedia/610.txt b/wiki/wikipedia/610.txt deleted file mode 100644 index f7a77f911bd9fd23adc938e33f24331a0f3f4462..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/610.txt +++ /dev/null @@ -1,13 +0,0 @@ -IsaPlanner is a proof planner for the interactive proof assistant Isabelle. Originally developed by Lucas Dixon as part of his PhD thesis at the University of Edinburgh, it is now maintained by members of the Mathematical Reasoning Group, in the School of Informatics at Edinburgh. - -IsaPlanner is the latest of a series of proof planners written at Edinburgh. Earlier planners include Clam and LambdaClam.
- -IsaPlanner allows the user to encode reasoning techniques, using a combinator language, for conjecturing and proving theorems. IsaPlanner works by manipulating reasoning states (records of open goals, the current proof plan and other important information); its combinators are functions mapping reasoning states to lazy lists of successor reasoning states. - -IsaPlanner's library supplies combinators for branching and iteration, amongst other tasks, and powerful reasoning techniques can be created by combining simpler reasoning techniques with these combinators. - -Several reasoning techniques come ready-implemented within IsaPlanner; notably, it features an implementation of dynamic rippling (a rippling heuristic capable of working in higher-order settings), a best-first rippling heuristic, and a reasoning technique for proofs by induction. - -Additional features include an interactive tracing tool for manually stepping through proof attempts, and a module for viewing and manipulating hierarchical proofs. - -Features currently being implemented or planned for the future include an expanded set of proof critics suitable for use in higher-order domains; dynamic relational rippling, a rippling heuristic for rippling over relational (as opposed to functional) expressions, again suitable for use in higher-order domains; and integration of IsaPlanner with Proof General. diff --git a/wiki/wikipedia/611.txt b/wiki/wikipedia/611.txt deleted file mode 100644 index d417d01cd0e6f2b6de8869aa854d84dde6f39844..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/611.txt +++ /dev/null @@ -1,19 +0,0 @@ -Noncommutative logic is an extension of linear logic which combines the commutative connectives of linear logic with the noncommutative multiplicative connectives of the Lambek calculus. Its sequent calculus relies on the structure of order varieties (a family of cyclic orders which may be viewed as a species of structure), and the correctness criterion for its proof nets is given in terms of partial permutations. It also has a denotational semantics in which formulas are interpreted by modules over some specific Hopf algebras. - -By extension, the term noncommutative logic is also used by a number of authors to refer to a family of substructural logics in which the exchange rule is inadmissible. The remainder of this article is devoted to a presentation of this sense of the term. - -The oldest noncommutative logic is the Lambek calculus, which gave rise to the class of logics known as categorial grammars. Since the publication of Jean-Yves Girard's linear logic there have been several new noncommutative logics proposed, namely the cyclic linear logic of David Yetter, the pomset logic of Christian Retoré, and the noncommutative logics BV and NEL. - -Noncommutative logic is sometimes called ordered logic, since it is possible with most proposed noncommutative logics to impose a total or partial order on the formulae in sequents. However this is not fully general, since some noncommutative logics do not support such an order, such as Yetter's cyclic linear logic. Although most noncommutative logics do not allow weakening or contraction together with noncommutativity, this restriction is not necessary. - -Joachim Lambek proposed the first noncommutative logic in his 1958 paper Mathematics of Sentence Structure to model the combinatory possibilities of the syntax of natural languages.
His calculus has thus become one of the fundamental formalisms of computational linguistics. - -David N. Yetter proposed a weaker structural rule in place of the exchange rule of linear logic, yielding cyclic linear logic. Sequents of cyclic linear logic form a ring, and so are invariant under rotation, where multipremise rules glue their rings together at the formulae described in the rules. The calculus supports three structural modalities: a self-dual modality allowing exchange (but still linear), and the usual exponentials (? and !) of linear logic, allowing nonlinear structural rules to be used together with exchange. - -Pomset logic was proposed by Christian Retoré in a semantic formalism with two dual sequential operators existing together with the usual tensor product and par operators of linear logic, the first logic proposed to have both commutative and noncommutative operators. A sequent calculus for the logic was given, but it lacked a cut-elimination theorem; instead the sense of the calculus was established through a denotational semantics. - -Alessio Guglielmi proposed a variation of Retoré's calculus, BV, in which the two noncommutative operations are collapsed onto a single, self-dual, operator, and proposed a novel proof formalism, the calculus of structures, to accommodate it. The principal novelty of the calculus of structures was its pervasive use of deep inference, which it was argued is necessary for calculi combining commutative and noncommutative operators; this explanation concurs with the difficulty of designing sequent systems for pomset logic that have cut-elimination. - -Lutz Strassburger devised a related system, NEL, also in the calculus of structures, in which linear logic with the mix rule appears as a subsystem. - -Structads are an approach to the semantics of logic that are based upon generalising the notion of sequent along the lines of Joyal's combinatorial species, allowing the treatment of more drastically nonstandard logics than those described above, where, for example, the ',' of the sequent calculus is not associative. diff --git a/wiki/wikipedia/612.txt b/wiki/wikipedia/612.txt deleted file mode 100644 index f88707281c7edb1e2259cf78f9b6ba220154b0d0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/612.txt +++ /dev/null @@ -1,29 +0,0 @@ -In geometry, an Archimedean circle is any circle constructed from an arbelos that has the same radius as each of Archimedes' twin circles. If the arbelos is normed such that the diameter of its outer (largest) half circle has a length of 1 and r denotes the radius of any of the inner half circles, then the radius ρ of such an Archimedean circle is given by -$$ -\rho=\frac{1}{2}r\left(1-r\right). -$$ - -There are over fifty different known ways to construct Archimedean circles. - -An Archimedean circle was first constructed by Archimedes in his Book of Lemmas. In his book, he constructed what is now known as Archimedes' twin circles. - -If $a$ and $b$ are the radii of the small semicircles of the arbelos, the radius of an Archimedean circle is equal to -$$ -R = \frac{ab}{a+b} -$$ - -This radius thus satisfies $\frac 1R = \frac 1a + \frac 1b$. - -The Archimedean circle with center $C$ (as in the figure at right) is tangent to the tangents from the centers of the small semicircles to the other small semicircle. - -Leon Bankoff constructed other Archimedean circles called Bankoff's triplet circle and Bankoff's quadruplet circle.
- -In 1978 Thomas Schoch found a dozen more Archimedean circles (the Schoch circles), which were published in 1998. He also constructed what is known as the Schoch line. - -Peter Y. Woo considered the Schoch line, and with it, he was able to create a family of infinitely many Archimedean circles known as the Woo circles. - -In the summer of 1998, Frank Power introduced four more Archimedean circles known as Archimedes' quadruplets. - -In 1831, Nagata (永田岩三郎遵道) proposed a sangaku problem involving two Archimedean circles, which are denoted by W6 and W7 in [3]. - -In 1853, Ootoba (大鳥羽源吉守敬) proposed a sangaku problem involving an Archimedean circle. diff --git a/wiki/wikipedia/613.txt b/wiki/wikipedia/613.txt deleted file mode 100644 index c4f5b66e77692ecf78888504533ad31a080b2cf5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/613.txt +++ /dev/null @@ -1,57 +0,0 @@ -In the study of dynamical systems, a delay embedding theorem gives the conditions under which a chaotic dynamical system can be reconstructed from a sequence of observations of the state of a dynamical system. The reconstruction preserves the properties of the dynamical system that do not change under smooth coordinate changes (i.e., diffeomorphisms), but it does not preserve the geometric shape of structures in phase space. - -Takens' theorem is the 1981 delay embedding theorem of Floris Takens. It provides the conditions under which a smooth attractor can be reconstructed from the observations made with a generic function. Later results replaced the smooth attractor with a set of arbitrary box counting dimension and the class of generic functions with other classes of functions. - -Delay embedding theorems are simpler to state for - -discrete-time dynamical systems. - -The state space of the dynamical system is a $\nu$-dimensional manifold $M$. The dynamics is given by a smooth map -$$ -f: M \to M. -$$ - -Assume that the dynamics $f$ has a strange attractor $A \subset M$ with box counting dimension $d_A$. Using ideas from Whitney's embedding theorem, $A$ can be embedded in $k$-dimensional Euclidean space with -$$ -k > 2 d_A. -$$ - -That is, there is a diffeomorphism $\phi$ that maps $A$ into $\R^k$ such that the derivative of $\phi$ has full rank. - -A delay embedding theorem uses an observation function to construct the embedding function. An observation function $\alpha : M \to \R$ must be twice-differentiable and associate a real number to any point of the attractor $A$. It must also be typical, so its derivative is of full rank and has no special symmetries in its components. The delay embedding theorem states that the function -$$ -\phi_T(x)=\left(\alpha(x), \alpha\left(f(x)\right), \dots, \alpha\left(f^{k-1}(x)\right)\right) -$$ - -is an embedding of the strange attractor $A$ in $\R^k$. - -Suppose the $d$-dimensional - -state vector $x_t$ evolves according to an unknown but continuous - -and (crucially) deterministic dynamic. Suppose, too, that the - -one-dimensional observable $y$ is a smooth function of $x$, and "coupled" - -to all the components of $x$. Now at any time we can look not just at - -the present measurement $y(t)$, but also at observations made at times - -removed from us by multiples of some lag $\tau$: $y_{t+\tau}, y_{t+2\tau}$, etc. If we use $k$ lags, we have a $k$-dimensional vector.
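As an illustration of the construction, the following minimal Python/NumPy sketch (function name ours) stacks $k$ lagged copies of a scalar series into $k$-dimensional delay vectors, using an observable of the logistic map as a stand-in for a deterministic dynamic:

```python
import numpy as np

def delay_vectors(y, k, tau=1):
    """Stack k lagged copies of a scalar series y into k-dimensional points."""
    m = len(y) - (k - 1) * tau
    return np.column_stack([y[i * tau : i * tau + m] for i in range(k)])

# Scalar observations of a deterministic system, here the logistic map x -> 4x(1-x):
x = [0.3]
for _ in range(999):
    x.append(4 * x[-1] * (1 - x[-1]))

points = delay_vectors(np.array(x), k=2)  # pairs (y_t, y_{t+1})
# For this map, k = 2 already reveals the deterministic structure:
# the delay vectors all lie on the parabola y_{t+1} = 4 y_t (1 - y_t).
```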
One might expect that, as the - -number of lags is increased, the motion in the lagged space will become - -more and more predictable, and perhaps in the limit $ k \to \infty $ would become - -deterministic. In fact, the dynamics of the lagged vectors become - -deterministic at a finite dimension; not only that, but the deterministic - -dynamics are completely equivalent to those of the original state space. (More exactly, they are related by a smooth, invertible change of coordinates, - -or diffeomorphism.) The magic embedding dimension $k$ is - -at most $2d+1$, and often less. diff --git a/wiki/wikipedia/614.txt b/wiki/wikipedia/614.txt deleted file mode 100644 index 0994c11b4d4b3ef18ecb7816c6f23abf4117d573..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/614.txt +++ /dev/null @@ -1,27 +0,0 @@ -In mathematics, Goldie's theorem is a basic structural result in ring theory, proved by Alfred Goldie during the 1950s. What is now termed a right Goldie ring is a ring R that has finite uniform dimension (= "finite rank") as a right module over itself, and satisfies the ascending chain condition on right annihilators of subsets of R. - -Goldie's theorem states that the semiprime right Goldie rings are precisely those that have a semisimple Artinian right classical ring of quotients. The structure of this ring of quotients is then completely determined by the Artin–Wedderburn theorem. - -In particular, Goldie's theorem applies to semiprime right Noetherian rings, since by definition right Noetherian rings have the ascending chain condition on all right ideals. This is sufficient to guarantee that a right-Noetherian ring is right Goldie. The converse does not hold: every right Ore domain is a right Goldie domain, and hence so is every commutative integral domain. - -A consequence of Goldie's theorem, again due to Goldie, is that every semiprime principal right ideal ring is isomorphic to a finite direct sum of prime principal right ideal rings. Every prime principal right ideal ring is isomorphic to a matrix ring over a right Ore domain. - -This is a sketch of the characterization mentioned in the introduction; it may be found in the literature. - -*If R is a semiprime right Goldie ring, then it is a right order in a semisimple ring: - -** Essential right ideals of R are exactly those containing a regular element. - -** There are no non-zero nil ideals in R. - -** R is a right nonsingular ring. - -** From the previous observations, R is a right Ore ring, and so its right classical ring of quotients Qr exists. Also from the previous observations, Qr is a semisimple ring. Thus R is a right order in Qr. - -* If R is a right order in a semisimple ring Q, then it is semiprime right Goldie: - -**Any right order in a Noetherian ring (such as Q) is right Goldie. - -**Any right order in a Noetherian semiprime ring (such as Q) is itself semiprime. - -**Thus, R is semiprime right Goldie. diff --git a/wiki/wikipedia/615.txt b/wiki/wikipedia/615.txt deleted file mode 100644 index d7a075f5d80b20ed3ce38565ae6bf3e7fd6fbfeb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/615.txt +++ /dev/null @@ -1,46 +0,0 @@ -In graph theory, a cactus (sometimes called a cactus tree) is a connected graph in which any two simple cycles have at most one vertex in common. Equivalently, it is a connected graph in which every edge belongs to at most one simple cycle, or (for nontrivial cacti) in which every block (maximal subgraph without a cut-vertex) is an edge or a cycle. - -Cacti are outerplanar graphs.
Every pseudotree is a cactus. A nontrivial graph is a cactus if and only if every block is either a simple cycle or a single edge. - -The family of graphs in which each component is a cactus is downwardly closed under graph minor operations. This graph family may be characterized by a single forbidden minor, the four-vertex diamond graph formed by removing an edge from the complete graph K4. - -A triangular cactus is a special type of cactus graph such that each cycle has length three and each edge belongs to a cycle. For instance, the friendship graphs, graphs formed from a collection of triangles joined together at a single shared vertex, are triangular cacti. As well as being cactus graphs, triangular cacti are also block graphs and locally linear graphs. - -Triangular cacti have the property that they remain connected if any matching is removed from them; for a given number of vertices, they have the fewest possible edges with this property. Every tree with an odd number of vertices may be augmented to a triangular cactus by adding edges to it, - -giving a minimal augmentation with the property of remaining connected after the removal of a matching. - -The largest triangular cactus in any graph may be found in polynomial time using an algorithm for the matroid parity problem. Since triangular cactus graphs are planar graphs, the largest triangular cactus can be used as an approximation to the largest planar subgraph, an important subproblem in planarization. As an approximation algorithm, this method has approximation ratio 4/9, the best known for the maximum planar subgraph problem. - -The algorithm for finding the largest triangular cactus is associated with a theorem of Lovász and Plummer that characterizes the number of triangles in this largest cactus. - -Lovász and Plummer consider pairs of partitions of the vertices and edges of the given graph into subsets, with the property that every triangle of the graph either has two vertices in a single class of the vertex partition or all three edges in a single class of the edge partition; they call a pair of partitions with this property valid. - -Then the number of triangles in the largest triangular cactus equals the maximum, over pairs of valid partitions $\mathcal{P}=\{V_1, V_2, \dots, V_k\}$ and $\mathcal{Q} = \{E_1, E_2, \dots, E_m\}$, of -$$ -\sum_{i=1}^{m}\frac{u_i - 1}{2} + n - k, -$$ - -where $n$ is the number of vertices in the given graph and $u_i$ is the number of vertex classes met by edge class $E_i$. - -Recently, a tight extremal bound was proven showing that, given any plane graph $G$, there always exists a cactus subgraph $C \subseteq G$ containing at least a $1/6$ fraction of the triangular faces of $G$. This result implies a direct analysis of the 4/9-approximation algorithm for the maximum planar subgraph problem without using the above min-max formula. - -An important conjecture related to triangular cacti is Rosa's conjecture, named after Alexander Rosa, which says that all triangular cacti are graceful or nearly graceful. More precisely: - -All triangular cacti with t ≡ 0, 1 mod 4 blocks are graceful, and those with t ≡ 2, 3 mod 4 are near graceful. - -Some facility location problems which are NP-hard for general graphs, as well as some other graph problems, may be solved in polynomial time for cacti. - -Since cacti are special cases of outerplanar graphs, a number of combinatorial optimization problems on graphs may be solved for them in polynomial time.
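The defining condition can also be tested directly. The following minimal Python sketch (names ours) checks the cactus property of a connected graph by depth-first search, using the fact that every simple cycle of an undirected graph corresponds to exactly one back edge of a DFS tree, so a graph is a cactus exactly when no tree edge is covered by two back edges:

```python
import sys
from collections import defaultdict

def is_cactus(n, edges):
    """Check the cactus property for a graph with vertices 0..n-1."""
    sys.setrecursionlimit(max(1000, n + 100))
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))

    depth, parent, into = {}, {}, {}   # into[v] = index of the tree edge into v
    cover = defaultdict(int)           # tree edge index -> # of back edges over it

    def dfs(u, d):
        depth[u] = d
        for v, i in adj[u]:
            if v not in depth:
                parent[v], into[v] = u, i
                dfs(v, d + 1)
            elif depth[v] < depth[u] and i != into.get(u):
                # back edge u -> ancestor v: mark every tree edge on the path u..v
                w = u
                while w != v:
                    cover[into[w]] += 1
                    w = parent[w]

    dfs(0, 0)
    return len(depth) == n and all(c <= 1 for c in cover.values())

# Two triangles sharing one vertex form a cactus:
assert is_cactus(5, [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)])
# The diamond graph (the forbidden minor) is not:
assert not is_cactus(4, [(0, 1), (1, 2), (2, 0), (1, 3), (3, 0)])
```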
Cacti represent electrical circuits that have useful properties. An early application of cacti was associated with the representation of op-amps. - -Cacti have also recently been used in comparative genomics as a way of representing the relationship between different genomes or parts of genomes. - -If a cactus is connected, and each of its vertices belongs to at most two blocks, then it is called a Christmas cactus. Every polyhedral graph has a Christmas cactus subgraph that includes all of its vertices, a fact that plays an essential role in a proof by Leighton that every polyhedral graph has a greedy embedding in the Euclidean plane, an assignment of coordinates to the vertices for which greedy forwarding succeeds in routing messages between all pairs of vertices. - -In topological graph theory, the graphs whose cellular embeddings are all planar are exactly the subfamily of the cactus graphs with the additional property that each vertex belongs to at most one cycle. These graphs have two forbidden minors, the diamond graph and the five-vertex friendship graph. - -Cacti were first studied under the name of Husimi trees, bestowed on them by Frank Harary and George Eugene Uhlenbeck in honor of previous work on these graphs by Kôdi Husimi. The same Harary–Uhlenbeck paper reserves the name "cactus" for graphs of this type in which every cycle is a triangle, but allowing cycles of all lengths is now standard. - -Meanwhile, the name Husimi tree commonly came to refer to graphs in which every block is a complete graph (equivalently, the intersection graphs of the blocks in some other graph). This usage had little to do with the work of Husimi, and the more pertinent term block graph is now used for this family; because of this ambiguity, the phrase Husimi tree has become less frequently used to refer to cactus graphs. diff --git a/wiki/wikipedia/616.txt b/wiki/wikipedia/616.txt deleted file mode 100644 index 7efaf6812d50e377b740ad74239180394e6e830a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/616.txt +++ /dev/null @@ -1,121 +0,0 @@ -Anti-unification is the process of constructing a generalization common to two given symbolic expressions. As in unification, several frameworks are distinguished depending on which expressions (also called terms) are allowed, and which expressions are considered equal. If variables representing functions are allowed in an expression, the process is called "higher-order anti-unification", otherwise "first-order anti-unification". If the generalization is required to have an instance literally equal to each input expression, the process is called "syntactical anti-unification", otherwise "E-anti-unification", or "anti-unification modulo theory". - -An anti-unification algorithm should compute, for given expressions, a complete and minimal generalization set, that is, a set covering all generalizations and containing no redundant members, respectively. Depending on the framework, a complete and minimal generalization set may have one, finitely many, or possibly infinitely many members, or may not exist at all; it cannot be empty, since a trivial generalization exists in any case. For first-order syntactical anti-unification, Gordon Plotkin gave an algorithm that computes a complete and minimal singleton generalization set containing the so-called "least general generalization" (lgg). - -Anti-unification should not be confused with dis-unification.
The latter means the process of solving systems of inequations, that is, of finding values for the variables such that all given inequations are satisfied. This task is quite different from finding generalizations. - -Formally, an anti-unification approach presupposes - -* An infinite set V of variables. For higher-order anti-unification, it is convenient to choose V disjoint from the set of lambda-term bound variables. - -* A set T of terms such that V ⊆ T. For first-order and higher-order anti-unification, T is usually the set of first-order terms (terms built from variable and function symbols) and lambda terms (terms containing some higher-order variables), respectively. - -* An equivalence relation $\equiv$ on $T$, indicating which terms are considered equal. For higher-order anti-unification, usually $t \equiv u$ if $t$ and $u$ are alpha equivalent. For first-order E-anti-unification, $\equiv$ reflects the background knowledge about certain function symbols; for example, if $\oplus$ is considered commutative, $t \equiv u$ if $u$ results from $t$ by swapping the arguments of $\oplus$ at some (possibly all) occurrences. If there is no background knowledge at all, then only literally, or syntactically, identical terms are considered equal. - -Given a set $V$ of variable symbols, a set $C$ of constant symbols and sets $F_n$ of $n$-ary function symbols, also called operator symbols, for each natural number $n \geq 1$, the set of (unsorted first-order) terms $T$ is recursively defined to be the smallest set with the following properties: - -* every variable symbol is a term: V ⊆ T, - -* every constant symbol is a term: C ⊆ T, - -* from every n terms t1,...,tn, and every n-ary function symbol f ∈ Fn, a larger term $f(t_1,\ldots,t_n)$ can be built. - -For example, if x ∈ V is a variable symbol, 1 ∈ C is a constant symbol, and add ∈ F2 is a binary function symbol, then x ∈ T, 1 ∈ T, and (hence) add(x,1) ∈ T by the first, second, and third term building rule, respectively. The latter term is usually written as x+1, using infix notation and the more common operator symbol + for convenience. - -A substitution is a mapping $\sigma: V \longrightarrow T$ from variables to terms; the notation $\{x_1 \mapsto t_1, \ldots, x_k \mapsto t_k \}$ refers to a substitution mapping each variable $x_i$ to the term $t_i$, for $i=1,\ldots,k$, and every other variable to itself. Applying that substitution to a term t is written in postfix notation as $t \{x_1 \mapsto t_1, \ldots, x_k \mapsto t_k \}$; it means to (simultaneously) replace every occurrence of each variable $x_i$ in the term t by $t_i$. The result tσ of applying a substitution σ to a term t is called an instance of that term t. - -As a first-order example, applying the substitution $\{x \mapsto h(a,y), z \mapsto b\}$ to the term $f(x,a,g(z),y)$ yields the instance $f(h(a,y),a,g(b),y)$. - -If a term $t$ has an instance equivalent to a term $u$, that is, if $t \sigma \equiv u$ for some substitution $\sigma$, then $t$ is called more general than $u$, and $u$ is called more special than, or subsumed by, $t$. For example, $x \oplus a$ is more general than $a \oplus b$ if $\oplus$ is commutative, since then $(x \oplus a)\{x \mapsto b\} = b \oplus a \equiv a \oplus b$. - -If $\equiv$ is literal (syntactic) identity of terms, a term may be both more general and more special than another one only if both terms differ just in their variable names, not in their syntactic structure; such terms are called variants, or renamings of each other.
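Before continuing with examples, the definitions above translate directly into code. In the following minimal Python sketch the representation is ours (a variable is a string, an application is a tuple whose first component is the function symbol, a constant is a 0-ary application); the `lgg` function implements the two-rule generalization algorithm discussed below:

```python
from itertools import count

def apply_subst(term, subst):
    """Apply a substitution (a dict from variables to terms) to a term."""
    if isinstance(term, str):                       # variable
        return subst.get(term, term)
    return (term[0],) + tuple(apply_subst(t, subst) for t in term[1:])

def lgg(s, t, pairs=None, fresh=None):
    """Least general generalization: decompose equal function symbols,
    otherwise abstract the pair (s, t) by a variable, reusing the same
    variable for repeated pairs."""
    if pairs is None:
        pairs, fresh = {}, (f"x{i}" for i in count())
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        return (s[0],) + tuple(lgg(a, b, pairs, fresh)
                               for a, b in zip(s[1:], t[1:]))
    if (s, t) not in pairs:
        pairs[(s, t)] = next(fresh)
    return pairs[(s, t)]

# lgg(('*', ('0',), ('0',)), ('*', ('4',), ('4',))) == ('*', 'x0', 'x0'),
# mirroring (0*0) ⊔ (4*4) ⇝ x*x from the example below; and
# apply_subst(('*', 'x0', 'x0'), {'x0': ('0',)}) recovers the first input.
```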
- -For example, $f(x_1,a,g(z_1),y_1)$ is a variant of $f(x_2,a,g(z_2),y_2)$, since $f(x_1,a,g(z_1),y_1) \{ x_1 \mapsto x_2, y_1 \mapsto y_2, z_1 \mapsto z_2\} = f(x_2,a,g(z_2),y_2)$ and $f(x_2,a,g(z_2),y_2) \{x_2 \mapsto x_1, y_2 \mapsto y_1, z_2 \mapsto z_1\} = f(x_1,a,g(z_1),y_1)$. - -However, $f(x_1,a,g(z_1),y_1)$ is not a variant of $f(x_2,a,g(x_2),x_2)$, since no substitution can transform the latter term into the former one, although $\{x_1 \mapsto x_2, z_1 \mapsto x_2, y_1 \mapsto x_2 \}$ achieves the reverse direction. - -The latter term is hence properly more special than the former one. - -A substitution $\sigma$ is more special than, or subsumed by, a substitution $\tau$ if $x \sigma$ is more special than $x \tau$ for each variable $x$. - -For example, $\{ x \mapsto f(u), y \mapsto f(f(u)) \}$ is more special than $\{ x \mapsto z, y \mapsto f(z) \}$, since $f(u)$ and $f(f(u)) $ is more special than $z$ and $f(z)$, respectively. - -An anti-unification problem is a pair $\langle t_1, t_2 \rangle$ of terms. - -A term $t$ is a common generalization, or anti-unifier, of $t_1$ and $t_2$ if $t \sigma_1 \equiv t_1$ and $t \sigma_2 \equiv t_2$ for some substitutions $\sigma_1, \sigma_2$. - -For a given anti-unification problem, a set $S$ of anti-unifiers is called complete if each generalization subsumes some term $t \in S$; the set $S$ is called minimal if none of its members subsumes another one. - -The framework of first-order syntactical anti-unification is based on $T$ being the set of first-order terms (over some given set $V$ of variables, $C$ of constants and $F_n$ of $n$-ary function symbols) and on $\equiv$ being syntactic equality. - -In this framework, each anti-unification problem $\langle t_1, t_2 \rangle$ has a complete, and obviously minimal, singleton solution set $\{t\}$. - -Its member $t$ is called the least general generalization (lgg) of the problem; it has an instance syntactically equal to $t_1$ and another one syntactically equal to $t_2$. - -Any common generalization of $t_1$ and $t_2$ subsumes $t$. - -The lgg is unique up to variants: if $S_1$ and $S_2$ are both complete and minimal solution sets of the same syntactical anti-unification problem, then $S_1 = \{ s_1\}$ and $S_2 = \{ s_2 \}$ for some terms $s_1$ and $s_2$, that are renamings of each other. - -Plotkin gave an algorithm to compute the lgg of two given terms. It consists of two rules: a decomposition rule $f(s_1,\dots,s_n) \sqcup f(t_1,\dots,t_n) \rightsquigarrow f(s_1 \sqcup t_1, \ldots, s_n \sqcup t_n)$, and an abstraction rule $s \sqcup t \rightsquigarrow \phi(s,t)$, applied when the first rule is not applicable; each distinct pair $\phi(s,t)$ is eventually replaced by a variable, using the same variable for repeated occurrences of the same pair. - -For example, $(0*0) \sqcup (4*4) \rightsquigarrow (0 \sqcup 4)*(0 \sqcup 4) \rightsquigarrow \phi(0,4) * \phi(0,4) \rightsquigarrow x*x$; this least general generalization reflects the common property of both inputs of being square numbers. - -Plotkin used his algorithm to compute the "relative least general generalization (rlgg)" of two clause sets in first-order logic, which was the basis of the Golem approach to inductive logic programming. - -E-anti-unification has been studied for a number of equational theories, among them: - -*One associative and commutative operation - -*Commutative theories - -*Free monoids - -*Regular congruence classes - -*A-, C-, AC-, ACU-theories with ordered sorts - -*Purely idempotent theories - -*Taxonomic sorts - -*Feature terms - -*Baumgartner, Alexander; Kutsia, Temur; Levy, Jordi; Villaret, Mateu (Jun 2015). Proc. RTA 2015. Vol. 36 of LIPIcs. Schloss Dagstuhl, 57–73.
- -Anti-unification algorithms have also found applications in areas such as: - -* Program analysis - -* Code factoring - -* Induction proving - -* Information extraction - -* Case-based reasoning - -* Program synthesis: The idea of generalizing terms with respect to an equational theory can be traced back to Manna and Waldinger (1978, 1980), who desired to apply it in program synthesis. In section "Generalization", they suggest (on p. 119 of the 1980 article) generalizing reverse(l) and reverse(tail(l))<>[head(l)] to obtain reverse(l')<>m'. This generalization is only possible if the background equation u<>[]=u is considered. - -* Natural language processing - -Higher-order anti-unification has been investigated in settings including: - -*Calculus of constructions - -* Simply-typed lambda calculus (Input: Terms in the eta-long beta-normal form. Output: higher-order patterns): Baumgartner, Alexander; Kutsia, Temur; Levy, Jordi; Villaret, Mateu (Jun 2013). Proc. RTA 2013. Vol. 21 of LIPIcs. Schloss Dagstuhl, 113–127. - -*Simply-typed lambda calculus (Input: Terms in the eta-long beta-normal form. Output: Various fragments of the simply-typed lambda calculus including patterns) - -* Restricted higher-order substitutions diff --git a/wiki/wikipedia/617.txt b/wiki/wikipedia/617.txt deleted file mode 100644 index 74efb897bbb6c2133b3dbd2dabaf96d59c720e29..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/617.txt +++ /dev/null @@ -1,19 +0,0 @@ -In mathematics, Stone's representation theorem for Boolean algebras states that every Boolean algebra is isomorphic to a certain field of sets. The theorem is fundamental to the deeper understanding of Boolean algebra that emerged in the first half of the 20th century. The theorem was first proved by Marshall H. Stone. Stone was led to it by his study of the spectral theory of operators on a Hilbert space. - -Each Boolean algebra B has an associated topological space, denoted here S(B), called its Stone space. The points in S(B) are the ultrafilters on B, or equivalently the homomorphisms from B to the two-element Boolean algebra. The topology on S(B) is generated by a (closed) basis consisting of all sets of the form -$$ -\{ x \in S(B) \mid b \in x\}, -$$ -where b is an element of B. This is the topology of pointwise convergence of nets of homomorphisms into the two-element Boolean algebra. - -For every Boolean algebra B, S(B) is a compact totally disconnected Hausdorff space; such spaces are called Stone spaces (also profinite spaces). Conversely, given any topological space X, the collection of subsets of X that are clopen (both closed and open) is a Boolean algebra. - -A simple version of Stone's representation theorem states that every Boolean algebra B is isomorphic to the algebra of clopen subsets of its Stone space S(B). The isomorphism sends an element $b \in B$ to the set of all ultrafilters that contain b. This is a clopen set because of the choice of topology on S(B) and because B is a Boolean algebra. - -Restating the theorem using the language of category theory, the theorem states that there is a duality between the category of Boolean algebras and the category of Stone spaces. This duality means that in addition to the correspondence between Boolean algebras and their Stone spaces, each homomorphism from a Boolean algebra A to a Boolean algebra B corresponds in a natural way to a continuous function from S(B) to S(A). In other words, there is a contravariant functor that gives an equivalence between the categories. This was an early example of a nontrivial duality of categories.
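In the finite case the theorem can be verified by brute force: every ultrafilter of a finite Boolean algebra is principal, generated by an atom, so the Stone space is a finite discrete space with one point per atom. A small, purely illustrative Python sketch (names ours, exponential-time):

```python
from itertools import combinations

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def is_ultrafilter(U, B, universe):
    """U is an ultrafilter of B = 2^universe iff it is a proper filter that
    contains exactly one of b and its complement for every b in B."""
    universe = frozenset(universe)
    return (frozenset() not in U
            and all((b in U) != ((universe - b) in U) for b in B)   # ultra
            and all((a & b) in U for a in U for b in U)             # meets
            and all(b in U for a in U for b in B if a <= b))        # upward closed

universe = (0, 1, 2)
B = powerset(universe)                      # the Boolean algebra, as sets
ultras = [U for U in powerset(B) if is_ultrafilter(U, B, universe)]

# Each ultrafilter is generated by an atom {x}, so S(B) has |atoms| points:
assert {min(U, key=len) for U in ultras} == {frozenset([x]) for x in universe}
assert len(ultras) == 3
```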
The theorem is a special case of Stone duality, a more general framework for dualities between topological spaces and partially ordered sets. - -The proof requires either the axiom of choice or a weakened form of it. Specifically, the theorem is equivalent to the Boolean prime ideal theorem, a weakened choice principle that states that every Boolean algebra has a prime ideal. - -An extension of the classical Stone duality to the category of Boolean spaces (= zero-dimensional locally compact Hausdorff spaces) and continuous maps (respectively, perfect maps) was obtained by G. D. Dimov (respectively, by H. P. Doctor). diff --git a/wiki/wikipedia/618.txt b/wiki/wikipedia/618.txt deleted file mode 100644 index 85c747235aa8f9bacf6fc3e6168a547d9190d4ca..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/618.txt +++ /dev/null @@ -1,5 +0,0 @@ -A squatting attack, in computer science, is a kind of DoS attack where a program interferes with another program through the use of shared synchronization objects in an unwanted or unexpected way. - -The attack is known in the Microsoft Windows operating system, which offers named objects as an interprocess synchronization mechanism. With named objects, a process may open a synchronization object as a shared resource by just specifying a name. Subsequent processes may use the same name to open that resource and have a way to synchronize with the first process. The squatting attack is possible because, if the legitimate program does not enforce tight security rules for the resources, processes from arbitrary security contexts may gain access to them and ultimately take control of the system. - -Consider, for example, antivirus software installed on a Microsoft Windows machine. The solution has two pieces: a service, which monitors and scans every file when it is opened, and a manual scanner, which scans the file system when a user requests it. Under normal conditions the service should scan the system occasionally. However, if a user requests a manual scan, the service must stop temporarily to let the manual scanner work, otherwise every file would be scanned twice: by the manual scanner and by the service. To solve this problem the vendor chooses to implement an event-based synchronization mechanism, where the service keeps a named event opened and checks it whenever a file is opened. If the event is unset the file is scanned, otherwise it is ignored. The manual scanner, then, to operate, opens the named event, sets it before scanning (disabling the service), scans the file system and resets the event when finished. This design is prone to a squatting attack because a malicious program can set the named event and disable the service completely. diff --git a/wiki/wikipedia/619.txt b/wiki/wikipedia/619.txt deleted file mode 100644 index 2a6df0ca69faa1c3739d214dda4096acfdba990c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/619.txt +++ /dev/null @@ -1,22 +0,0 @@ -In the mathematical field of combinatorial geometry, the Littlewood–Offord problem is the problem of determining the number of subsums of a set of vectors that fall in a given convex set. More formally, if V is a vector space of dimension d, the problem is to determine, given a finite subset of vectors S and a convex subset A, the number of subsets of S whose summation is in A. - -The first upper bound for this problem was proven (for d = 1 and d = 2) in 1938 by John Edensor Littlewood and A. Cyril Offord.
This Littlewood–Offord lemma states that if S is a set of n real or complex numbers of absolute value at least one and A is any disc of radius one, then not more than $ \Big( c \log n / \sqrt{n} \Big) 2^n $ of the $2^n$ possible subsums of S fall into the disc. - -In 1945 Paul Erdős improved the upper bound for d = 1 to -$$ -{n \choose \lfloor{n/2}\rfloor} \approx 2^n \frac{1}{\sqrt{n}} -$$ - -using Sperner's theorem. This bound is sharp; equality is attained when all vectors in S are equal. In 1966, Kleitman showed that the same bound held for complex numbers. In 1970, he extended this to the setting when V is a normed space. - -Suppose S = {v1, …, vn}. By subtracting -$$ -\frac{1}{2} \sum_{i = 1}^n v_i -$$ - -from each possible subsum (that is, by changing the origin and then scaling by a factor of 2), the Littlewood–Offord problem is equivalent to the problem of determining the number of sums of the form -$$ -\sum_{i = 1}^n \epsilon_i v_i -$$ - -that fall in the target set A, where $\epsilon_i$ takes the value 1 or -1. This makes the problem into a probabilistic one, in which the question is of the distribution of these random vectors, and what can be said knowing nothing more about the vi. diff --git a/wiki/wikipedia/62.txt b/wiki/wikipedia/62.txt deleted file mode 100644 index 8f09c3a4fe09cdb979e3bf60fa311e441b4fdeb6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/62.txt +++ /dev/null @@ -1,42 +0,0 @@ -Bernstein's theorem is an inequality relating the maximum modulus of a complex polynomial function on the unit disk with the maximum modulus of its derivative on the unit disk. It was proven by Sergei Bernstein while he was working on approximation theory. - -Let $\max_{|z| = 1} |f(z)|$ denote the maximum modulus of an arbitrary - -function $f(z)$ on $|z|=1$, and let $f'(z)$ denote its derivative. - -Then for every polynomial $P(z)$ of degree $n$ we have -$$ -\max_{|z| = 1} |P'(z)| \le n \max_{|z| = 1} |P(z)|. -$$ - -The inequality is best possible, with equality holding if and only if -$$ - P(z) = \alpha z^n,\ |\alpha| = \max_{|z| = 1} |P(z)|. -$$ - -Let $P(z)$ be a polynomial of degree $n$, and let $Q(z)$ be another polynomial of the same degree with no zeros in $|z| \ge 1$. We show first that if $|P(z)| < |Q(z)|$ on $|z| = 1$, then $|P'(z)| < |Q'(z)|$ on $|z| \ge 1$. - -By Rouché's theorem, $P(z) + \varepsilon\ Q(z)$ with $|\varepsilon| \geq 1$ has all - -its zeros in $|z| < 1$. By virtue of the Gauss–Lucas theorem, -$$ -P'(z) + \varepsilon\ Q'(z) -$$ has all its zeros in $|z| < 1$ as well. - -It follows that $|P'(z)| < |Q'(z)|$ on $|z| \geq 1$, - -otherwise we could choose an $\varepsilon$ with $|\varepsilon| \geq 1$ such that -$$ -P'(z) + \varepsilon Q'(z) -$$ has a zero in $|z|\geq 1$. - -For an arbitrary polynomial $P(z)$ of degree $n$, we obtain Bernstein's Theorem by applying the above result to the polynomials $Q(z) = C z^n$, where $C$ is an arbitrary constant exceeding $\max_{|z|=1}|P(z)|$. - -In mathematical analysis, Bernstein's inequality states that on the complex plane, within the disk of radius 1, the degree of a polynomial times the maximum modulus of the polynomial is an upper bound for the corresponding maximum of its derivative. Applying the theorem $k$ times yields -$$ -\max_{|z| \le 1}( |P^{(k)}(z)| ) \le \frac{n!}{(n-k)!} \cdot\max_{|z| \le 1}( |P(z)| ). -$$ - -Paul Erdős conjectured that if $P(z)$ has no zeros in $|z|<1$, then $\max_{|z| = 1} |P'(z)| \le \frac{n}{2} \max_{|z| = 1} |P(z)|$. This was proved by Peter Lax. - -M. A.
Malik showed that if $P(z)$ has no zeros in $|z| < k$ for a given $k \geq 1$, then $\max_{|z| = 1} |P'(z)| \le \frac{n}{1+k} \max_{|z| = 1} |P(z)|$. - -* The λI calculus corresponds to relevant logic. - -* The local truth (∇) modality in Grothendieck topology or the equivalent "lax" modality (◯) of Benton, Bierman, and de Paiva (1998) correspond to CL-logic describing "computation types". - -At the time of Curry, and also at the time of Howard, the proofs-as-programs correspondence concerned only intuitionistic logic, i.e. a logic in which, in particular, Peirce's law was not deducible. The extension of the correspondence to Peirce's law and hence to classical logic became clear from the work of Griffin on typing operators that capture the evaluation context of a given program execution so that this evaluation context can be later on reinstalled. The basic Curry–Howard-style correspondence for classical logic is given below. Note the correspondence between the double-negation translation used to map classical proofs to intuitionistic logic and the continuation-passing-style translation used to map lambda terms involving control to pure lambda terms. More particularly, call-by-name continuation-passing-style translations relate to Kolmogorov's double negation translation, and call-by-value continuation-passing-style translations relate to a kind of double-negation translation due to Kuroda. - -A finer Curry–Howard correspondence exists for classical logic if one defines classical logic not by adding an axiom such as Peirce's law, but by allowing several conclusions in sequents. In the case of classical natural deduction, there exists a proofs-as-programs correspondence with the typed programs of Parigot's λμ-calculus. - -A proofs-as-programs correspondence can also be established for the formalism known as Gentzen's sequent calculus, but it is not a correspondence with a well-defined pre-existing model of computation as it was for Hilbert-style and natural deductions. - -Sequent calculus is characterized by the presence of left introduction rules, right introduction rules and a cut rule that can be eliminated. The structure of sequent calculus relates to a calculus whose structure is close to that of some abstract machines. - -N. G. de Bruijn used the lambda notation for representing proofs of the theorem checker Automath, and represented propositions as "categories" of their proofs. It was in the late 1960s, the same period of time in which Howard wrote his manuscript; de Bruijn was likely unaware of Howard's work, and stated the correspondence independently (Sørensen & Urzyczyn [1998] 2006, pp. 98–99). Some researchers tend to use the term Curry–Howard–de Bruijn correspondence in place of Curry–Howard correspondence. - -The BHK interpretation interprets intuitionistic proofs as functions but it does not specify the class of functions relevant for the interpretation. If one takes lambda calculus for this class of functions, then the BHK interpretation says the same as Howard's correspondence between natural deduction and lambda calculus. - -Kleene's recursive realizability splits proofs of intuitionistic arithmetic into the pair of a recursive function and of - -a proof of a formula expressing that the recursive function "realizes", i.e. correctly instantiates the disjunctions and existential quantifiers of the initial formula so that the formula becomes true. - -Kreisel's modified realizability applies to intuitionistic higher-order predicate logic and shows that the simply typed lambda term inductively extracted from the proof realizes the initial formula.
In the case of propositional logic, it coincides with Howard's statement: the extracted lambda term is the proof itself (seen as an untyped lambda term) and the realizability statement is a paraphrase of the fact that the extracted lambda term has the type that the formula means (seen as a type). - -Gödel's dialectica interpretation realizes (an extension of) intuitionistic arithmetic with computable functions. The connection with lambda calculus is unclear, even in the case of natural deduction. - -Joachim Lambek showed in the early 1970s that the proofs of intuitionistic propositional logic and the combinators of typed combinatory logic share a common equational theory, which is that of cartesian closed categories. The expression Curry–Howard–Lambek correspondence is now used by some people to refer to the three-way isomorphism between intuitionistic logic, typed lambda calculus and cartesian closed categories, with objects being interpreted as types or propositions and morphisms as terms or proofs. The correspondence works at the equational level and is not the expression of a syntactic identity of structures as is the case for each of Curry's and Howard's correspondences: i.e. the structure of a well-defined morphism in a cartesian-closed category is not comparable to the structure of a proof of the corresponding judgment in either Hilbert-style logic or natural deduction. To clarify this distinction, the underlying syntactic structure of cartesian closed categories is rephrased below. - -Objects (types) are defined by - -* $\top$ is an object - -* if α and β are objects then $\alpha \times \beta$ and $\alpha \rightarrow \beta$ are objects. - -Morphisms (terms) are defined by - -* $id$, $\star$, $\operatorname{eval}$, $\pi_1$ and $\pi_2$ are morphisms - -* if t is a morphism, λt is a morphism - -* if t and u are morphisms, $(t, u)$ and $u \circ t$ are morphisms. - -Well-defined morphisms (typed terms) are defined by the following typing rules (in which the usual categorical morphism notation $f : \alpha \to \beta$ is replaced with sequent calculus notation $f :\!\!-~~ \alpha ~\vdash~ \beta$).
- -Identity: -$$ -\frac{}{id:\!\!-~~ \alpha ~\vdash~ \alpha} -$$ - -Composition: -$$ -\frac{t:\!\!-~~ \alpha ~\vdash~ \beta\qquad u:\!\!-~~ \beta ~\vdash~ \gamma}{u \circ t:\!\!- ~\alpha ~\vdash~ \gamma} -$$ - -Unit type (terminal object): -$$ -\frac{}{\star:\!\!-~~\alpha ~\vdash~ \top} -$$ - -Cartesian product: -$$ -\frac{t:\!\!-~~\alpha ~\vdash~ \beta\qquad u:\!\!-~~\alpha ~\vdash~ \gamma}{(t,u):\!\!-~~ \alpha ~\vdash~ \beta \times \gamma} -$$ - -Left and right projection: -$$ -\frac{}{\pi_1:\!\!-~~ \alpha \times \beta ~\vdash~ \alpha}\qquad\frac{}{\pi_2:\!\!-~~ \alpha \times \beta ~\vdash~ \beta} -$$ - -Currying: -$$ -\frac{t:\!\!-~~ \alpha \times \beta ~\vdash~ \gamma}{\lambda t:\!\!-~~ \alpha ~\vdash~ \beta \rightarrow \gamma} -$$ - -Application: -$$ -\frac{}{\operatorname{eval}:\!\!-~~ (\alpha \rightarrow \beta) \times \alpha ~\vdash~ \beta} -$$ - -Finally, the equations of the category are - -* $id \circ t = t$ - -* $t \circ id = t$ - -* $(v \circ u) \circ t = v \circ (u \circ t)$ - -* $\star = id$ (if well-typed) - -* $\star \circ u = \star$ - -* $\pi_1 \circ (t, u) = t$ - -* $\pi_2 \circ (t,u) = u$ - -* $(\pi_1, \pi_2) = id$ - -* $(t_1, t_2) \circ u = (t_1 \circ u, t_2 \circ u) $ - -* $\operatorname{eval} \circ (\lambda t \circ \pi_1, \pi_2) = t$ - -* $\lambda \operatorname{eval} = id$ - -* $\lambda t \circ u = \lambda (t \circ (u \circ \pi_1, \pi_2))$ - -These equations imply the following $\eta$-laws: - -* $(\pi_1 \circ t, \pi_2 \circ t) = t$ - -* $\lambda (\operatorname{eval} \circ (t \circ \pi_1, \pi_2)) = t$ - -Now, there exists t such that $t:\!\!-~ \alpha_1 \times \ldots \times \alpha_n \vdash \beta$ iff $\alpha_1, \ldots, \alpha_n \vdash \beta $ is provable in implicational intuitionistic logic. - -Thanks to the Curry–Howard correspondence, a typed expression whose type corresponds to a logical formula is analogous to a proof of that formula. Here are some examples. - -First, consider a proof of the theorem α → α. In lambda calculus, this is the type of the identity function I = λx.x, and in combinatory logic, the identity function is obtained by applying S = λfgx.fx(gx) twice to K = λxy.x. That is, I = ((S K) K). As a description of a proof, this says that the following steps can be used to prove α → α: - -* instantiate the second axiom scheme with the formulas α, β → α and α to obtain a proof of (α → ((β → α) → α)) → ((α → (β → α)) → (α → α)), - -* instantiate the first axiom scheme once with α and β → α to obtain a proof of α → ((β → α) → α), - -* instantiate the first axiom scheme a second time with α and β to obtain a proof of α → (β → α), - -* apply modus ponens twice to obtain a proof of α → α - -In general, the procedure is that whenever the program contains an application of the form (P Q), these steps should be followed: - -# First prove theorems corresponding to the types of P and Q. - -# Since P is being applied to Q, the type of P must have the form α → β and the type of Q must have the form α for some α and β. Therefore, it is possible to detach the conclusion, β, via the modus ponens rule. - -As a more complicated example, let's look at the theorem that corresponds to the B function. The type of B is (β → α) → (γ → β) → γ → α. B is equivalent to (S (K S) K). This is our roadmap for the proof of the theorem (β → α) → (γ → β) → γ → α. - -The first step is to construct (K S).
To make the antecedent of the K axiom look like the S axiom, set α equal to (α → β → γ) → (α → β) → α → γ, and β equal to δ (to avoid variable collisions): - -K : α → β → α - -K[α = (α → β → γ) → (α → β) → α → γ, β = δ] : ((α → β → γ) → (α → β) → α → γ) → δ → (α → β → γ) → (α → β) → α → γ - -Since the antecedent here is just S, the consequent can be detached using modus ponens: - -K S : δ → (α → β → γ) → (α → β) → α → γ - -This is the theorem that corresponds to the type of (K S). Now apply S to this expression. Taking S as follows - -S : (α → β → γ) → (α → β) → α → γ, - -put α = δ, β = α → β → γ, and γ = (α → β) → α → γ, yielding - -S[α = δ, β = α → β → γ, γ = (α → β) → α → γ] : (δ → (α → β → γ) → (α → β) → α → γ) → (δ → (α → β → γ)) → δ → (α → β) → α → γ - -and then detach the consequent: - -S (K S) : (δ → α → β → γ) → δ → (α → β) → α → γ - -This is the formula for the type of (S (K S)). A special case of this theorem has δ = (β → γ): - -S (K S)[δ = β → γ] : ((β → γ) → α → β → γ) → (β → γ) → (α → β) → α → γ - -This last formula must be applied to K. Specialize K again, this time by replacing α with (β → γ) and β with α: - -K : α → β → α - -K[α = β → γ, β = α] : (β → γ) → α → (β → γ) - -This is the same as the antecedent of the prior formula, so, detaching the consequent: - -S (K S) K : (β → γ) → (α → β) → α → γ - -Switching the names of the variables α and γ gives us - -(β → α) → (γ → β) → γ → α - -which was what remained to prove. - -The following natural-deduction derivation proves (β → α) → (γ → β) → γ → α and shows how it can be interpreted as the λ-expression λa.λb.λg.(a (b g)) of type (β → α) → (γ → β) → γ → α. Working in the context a:β → α, b:γ → β, g:γ, and reading the derivation from the leaves down: - -* from b : γ → β and g : γ, application gives b g : β; - -* from a : β → α and b g : β, application gives a (b g) : α; - -* discharging g gives λg. a (b g) : γ → α; - -* discharging b gives λb. λg. a (b g) : (γ → β) → γ → α; - -* discharging a gives λa. λb. λg. a (b g) : (β → α) → (γ → β) → γ → α. - -Recently, the isomorphism has been proposed as a way to define search-space partitioning in genetic programming. The method indexes sets of genotypes (the program trees evolved by the GP system) by their Curry–Howard isomorphic proof (referred to as a species). - -As noted by INRIA research director Bernard Lang, the Curry-Howard correspondence constitutes an argument against the patentability of software: since algorithms are mathematical proofs, patentability of the former would imply patentability of the latter. A theorem could be private property; a mathematician would have to pay for using it, and to trust the company that sells it but keeps its proof secret and rejects responsibility for any errors. - -The correspondences listed here go much farther and deeper. For example, cartesian closed categories are generalized by closed monoidal categories. The internal language of these categories is the linear type system (corresponding to linear logic), which generalizes simply-typed lambda calculus as the internal language of cartesian closed categories. Moreover, these can be shown to correspond to cobordisms, which play a vital role in string theory.
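To make the combinator derivation above concrete, here is a minimal Python sketch (our own illustration, not drawn from any source cited here) encoding S and K as curried functions; the assertions check that I = ((S K) K) is the identity and that B = (S (K S) K) composes functions, which is the computational content of the theorem (β → α) → (γ → β) → γ → α.

```python
# Curried SKI combinators; B = S (K S) K should act as function composition.
S = lambda f: lambda g: lambda x: f(x)(g(x))
K = lambda x: lambda y: x
I = S(K)(K)       # I = ((S K) K), the proof of alpha -> alpha
B = S(K(S))(K)    # B = (S (K S) K)

inc = lambda n: n + 1
dbl = lambda n: 2 * n

assert I(42) == 42                    # identity behaves as expected
assert B(inc)(dbl)(5) == inc(dbl(5))  # B f g x = f (g x), i.e. composition
print(B(inc)(dbl)(5))                 # 11
```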
- -An extended set of equivalences is also explored in homotopy type theory, which became a very active area of research around 2013 and remains so. Here, type theory is extended by the univalence axiom ("equivalence is equivalent to equality"), which permits homotopy type theory to be used as a foundation for all of mathematics (including set theory and classical logic, providing new ways to discuss the axiom of choice and many other things). That is, the Curry–Howard correspondence that proofs are elements of inhabited types is generalized to the notion of homotopic equivalence of proofs (as paths in space, the identity type or equality type of type theory being interpreted as a path). diff --git a/wiki/wikipedia/621.txt b/wiki/wikipedia/621.txt deleted file mode 100644 index dd22c1a0e6dbaa7626b3a5fc72bf380c2778a063..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/621.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematical logic, interpretability is a relation between formal theories that expresses the possibility of interpreting or translating one into the other. - -Assume T and S are formal theories. Slightly simplified, T is said to be interpretable in S if and only if the language of T can be translated into the language of S in such a way that S proves the translation of every theorem of T. Of course, there are some natural conditions on admissible translations here, such as the necessity for a translation to preserve the logical structure of formulas. - -This concept, together with weak interpretability, was introduced by Alfred Tarski in 1953. Three other related concepts are cointerpretability, logical tolerance, and cotolerance, introduced by Giorgi Japaridze in 1992–93. diff --git a/wiki/wikipedia/622.txt b/wiki/wikipedia/622.txt deleted file mode 100644 index f2c8c7b801ed37d526713159376971b9cf31d5f7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/622.txt +++ /dev/null @@ -1,13 +0,0 @@ -The Leggett–Garg inequality, named for Anthony James Leggett and Anupam Garg, is a mathematical inequality fulfilled by all macrorealistic physical theories. Here, macrorealism (macroscopic realism) is a classical worldview defined by the conjunction of two postulates: (1) macroscopic realism per se, the postulate that a macroscopic object with two or more macroscopically distinct states available to it is, at any given time, in a definite one of those states; and (2) noninvasive measurability, the postulate that it is possible in principle to determine the state of such an object without disturbing it or its subsequent dynamics. However, some criticism has been raised concerning the nature of indistinguishable electrons in a Fermi sea. - -A criticism of some other proposed experiments on the Leggett–Garg inequality is that they do not really show a violation of macrorealism because they are essentially about measuring spins of individual particles. In 2015 Robens et al. demonstrated an experimental violation of the Leggett–Garg inequality using superpositions of positions instead of spin with a massive particle. At that time, and still today, the cesium atoms employed in their experiment represent the largest quantum objects which have been used to experimentally test the Leggett–Garg inequality. - -The experiments of Robens et al., using ideal negative measurements, also avoid a second criticism (referred to as the “clumsiness loophole”) that has been directed at previous experiments using measurement protocols that could be interpreted as invasive, thereby conflicting with postulate 2. - -Several other experimental violations have been reported, including one in 2016 with neutrinos using the MINOS dataset. - -Brukner and Kofler have also demonstrated that quantum violations can be found for arbitrarily large macroscopic systems.
As an alternative to quantum decoherence, Brukner and Kofler propose a solution of the quantum-to-classical transition in terms of coarse-grained quantum measurements, under which usually no violation of the Leggett–Garg inequality can be seen anymore. - -Experiments proposed by Mermin and by Braunstein and Mann would be better for testing macroscopic realism, though the authors warn that the experiments may be complex enough to admit unforeseen loopholes in the analysis. A detailed discussion of the subject can be found in the review by Emary et al. - -The four-term Leggett–Garg inequality can be seen to be similar to the CHSH inequality. Moreover, equalities were proposed by Jaeger et al. diff --git a/wiki/wikipedia/623.txt b/wiki/wikipedia/623.txt deleted file mode 100644 index 400ef55ee9fd8b84204af3e46367582ece5e916a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/623.txt +++ /dev/null @@ -1,84 +0,0 @@ -In probability theory, Eaton's inequality is a bound on the largest values of a linear combination of bounded random variables. This inequality was described in 1974 by Morris L. Eaton. - -Let {Xi} be a set of real independent random variables, each with an expected value of zero and bounded in absolute value by 1 (|Xi| ≤ 1, for 1 ≤ i ≤ n). The variates do not have to be identically or symmetrically distributed. Let {ai} be a set of n fixed real numbers with -$$ - \sum_{ i = 1 }^n a_i^2 = 1 . -$$ - -Eaton showed that -$$ - P\left( \left| \sum_{ i = 1 }^n a_i X_i \right| \ge k \right) \le 2 \inf_{ 0 \le c \le k } \int_c^\infty \left( \frac{ z - c }{ k - c } \right)^3 \phi( z ) dz = 2 B_E( k ) , -$$ - -where φ(x) is the probability density function of the standard normal distribution. - -A related bound is Edelman's -$$ - P\left( \left| \sum_{ i = 1 }^n a_i X_i \right| \ge k \right) \le 2 \left( 1 - \Phi\left[ k - \frac{ 1.5 }{ k } \right] \right) = 2 B_{ Ed }( k ) , -$$ - -where Φ(x) is the cumulative distribution function of the standard normal distribution. - -Pinelis has shown that Eaton's bound can be sharpened: -$$ - B_{ EP } = \min\{ 1, k^{ -2 }, 2 B_E \} -$$ - -A set of critical values for Eaton's bound has been determined. - -Let {ai} be a set of independent Rademacher random variables – P( ai = 1 ) = P( ai = −1 ) = 1/2. Let Z be a normally distributed variate with mean 0 and variance 1. Let {bi} be a set of n fixed real numbers such that -$$ - \sum_{ i = 1 }^n b_i^2 = 1 . -$$ - -This last condition is required by the Riesz–Fischer theorem, which states that -$$ - a_1 b_1 + \cdots + a_n b_n -$$ - -will converge if and only if -$$ - \sum_{ i = 1 }^n b_i^2 -$$ - -is finite. - -Then -$$ - E f( a_1 b_1 + \cdots + a_n b_n ) \le E f( Z ) -$$ - -for $f(x) = |x|^p$. The case for p ≥ 3 was proved by Whittle, and p ≥ 2 was proved by Haagerup. - -If $f(x) = e^{\lambda x}$ with λ ≥ 0 then -$$ - E f( a_1 b_1 + \cdots + a_n b_n ) \le \inf \left[ \frac{ E ( e^{ \lambda Z } ) }{ e^{ \lambda x } } \right] = e^{ -x^2 / 2 } -$$ - -where the infimum is taken over all λ ≥ 0. - -Let -$$ - S_n = a_1 b_1 + \cdots + a_n b_n -$$ - -Then -$$ - P( S_n \ge x ) \le \frac{ 2e^3 }{ 9 } P( Z \ge x ) -$$ - -The constant in the last inequality is approximately 4.4634. - -An alternative bound is also known: -$$ - P( S_n \ge x ) \le e^{ -x^2 / 2 } -$$ - -This last bound is related to Hoeffding's inequality. - -In the uniform case where all the $b_i = n^{-1/2}$, the maximum value of $S_n$ is $n^{1/2}$. In this case van Zuijlen has shown that -$$ - P( | S_n - \mu | \le \sigma ) \ge 0.5 , -$$ - -where μ is the mean and σ is the standard deviation of the sum; that is, the sum lies within one standard deviation of its mean with probability at least one half.
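As a rough numeric illustration of the bounds above, the following sketch (assuming NumPy and SciPy are available; all function names are ours) approximates the infimum in Eaton's bound on a grid of c values and compares it with Edelman's bound and the Pinelis refinement for a few values of k.

```python
# Numeric comparison of Eaton's bound 2*B_E(k), Edelman's bound 2*B_Ed(k),
# and the Pinelis refinement min{1, k^-2, 2*B_E(k)}.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def eaton(k):
    def integral(c):
        val, _ = quad(lambda z: ((z - c) / (k - c)) ** 3 * norm.pdf(z), c, np.inf)
        return val
    # approximate the infimum over 0 <= c < k on a grid
    return 2 * min(integral(c) for c in np.linspace(0, k - 1e-6, 200))

def edelman(k):
    return 2 * (1 - norm.cdf(k - 1.5 / k))

for k in (2.0, 3.0, 4.0):
    e = eaton(k)
    print(f"k={k}: Eaton {e:.4g}, Edelman {edelman(k):.4g}, "
          f"Pinelis {min(1.0, k**-2, e):.4g}")
```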
diff --git a/wiki/wikipedia/624.txt b/wiki/wikipedia/624.txt deleted file mode 100644 index 70e149037022ebe123a8cdcb6b3b8e77bea0d23c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/624.txt +++ /dev/null @@ -1,61 +0,0 @@ -Mathematics of apportionment describes mathematical principles and algorithms for the fair allocation of identical items among parties with different entitlements. Such principles are used to apportion seats in parliaments among federal states or political parties. See apportionment (politics) for the more concrete principles and issues related to apportionment, and apportionment by country for practical methods used around the world. - -Mathematically, an apportionment method is just a method of rounding fractions to integers. As simple as it may sound, each and every method for rounding suffers from one or more paradoxes. The mathematical theory of apportionment aims to decide what paradoxes can be avoided, or in other words, what properties can be expected from an apportionment method. - -The mathematical theory of apportionment was studied as early as 1907 by the mathematician Agner Krarup Erlang. It was later developed in great detail by the mathematician Michel Balinski and the economist Peyton Young. Besides its application to political parties, it is also applicable to fair item allocation when agents have different entitlements. It is also relevant in manpower planning - where jobs should be allocated in proportion to characteristics of the labor pool, to statistics - where the reported rounded numbers of percentages should sum up to 100%, and to bankruptcy problems. - -The inputs to an apportionment method are: - -* A positive integer $h$ representing the total number of items to allocate. It is also called the house size, since in many cases, the items to allocate are seats in a house of representatives. - -*A positive integer $n$ representing the number of agents to which items should be allocated. For example, these can be federal states or political parties. - -* A vector of numbers $(t_1,\ldots,t_n)$ representing entitlements - $t_i$ represents the entitlement of agent $i$, that is, the amount of items to which $i$ is entitled (out of the total of $h$). These entitlements are often normalized such that $\sum_{i=1}^n t_i = 1$. Alternatively, they can be normalized such that their sum is $h$; in this case the entitlements are called quotas and denoted by $q_i$, where $q_i := t_i\cdot h$ and $\sum_{i=1}^n q_i = h$. Alternatively, one is given a vector of populations $(p_1,\ldots,p_n)$; here, the entitlement of agent $i$ is $t_i = p_i / \sum_{j=1}^n p_j$. - -The output is a vector of integers $a_1,\ldots,a_n$ with $\sum_{i=1}^n a_i = h$, called an apportionment of $h$, where $a_i$ is the number of items allocated to agent i. - -For each agent $i$, the real number $q_i := t_i\cdot h$ is called the quota of $i$, and denotes the exact number of items that should be given to $i$. In general, a "fair" apportionment is one in which each allocation $a_i$ is as close as possible to the quota $q_i$. - -An apportionment method may return a set of apportionment vectors (in other words: it is a multivalued function). This is required, since in some cases there is no fair way to distinguish between two possible solutions. For example, if $h = 101$ (or any other odd number) and $t_1 = t_2 = 1/2$, then (50,51) and (51,50) are both equally reasonable solutions, and there is no mathematical way to choose one over the other.
While such ties are extremely rare in practice, the theory must account for them (in practice, when an apportionment method returns multiple outputs, one of them may be chosen by some external priority rules, or by coin flipping, but this is beyond the scope of the mathematical apportionment theory). - -An apportionment method is denoted by a multivalued function $M(\mathbf{t}, h)$; a particular $M$-solution is a single-valued function $f(\mathbf{t}, h)$ which selects a single apportionment from $M(\mathbf{t}, h)$. - -A partial apportionment method is an apportionment method for specific fixed values of $n$ and $h$; it is a multivalued function $M^*(\mathbf{t})$ that accepts only $n$-vectors. - -Sometimes, the input also contains a vector of integers $r_1,\ldots,r_n$ representing minimum requirements - $r_i$ represents the smallest number of items that agent $i$ should receive, regardless of its entitlement. So there is an additional requirement on the output: $a_i \geq r_i$ for all $i$. - -When the agents are political parties, these numbers are usually 0, so this vector is omitted. But when the agents are states or districts, these numbers are often positive in order to ensure that all are represented. They can be the same for all agents (e.g. 1 for US states, 2 for French districts), or different (e.g. in Canada or the European parliament). - -Sometimes there is also a vector of maximum requirements, but it is less common. - -There are basic properties that should be satisfied by any reasonable apportionment method; they were given different names by different authors (one set of names is due to Pukelsheim, another to Balinski and Young). For example, strong exactness means that exactness also holds "in the limit". That is, if a sequence of entitlement vectors converges to an integer quota vector $(q_1,\ldots,q_n)$, then the only allocation vector in all elements of the sequence is $(q_1,\ldots,q_n)$. To see the difference from weak exactness, consider the following rule. (a) Give each agent its quota rounded down, $\lfloor q_i\rfloor$; (b) give the remaining seats iteratively to the largest parties. This rule is weakly exact, but not strongly exact. For example, suppose h=6 and consider the sequence of quota vectors (4+1/k, 2-1/k). The above rule yields the allocation (5,1) for all k, even though the limit when k→∞ is the integer vector (4,2). - -We say that an apportionment method - -* Satisfies lower quota if $a_i\geq \lfloor q_i\rfloor $ for all $i $ (this holds iff $a_i + 1 > q_i $). - -* Satisfies upper quota if $a_i\leq \lceil q_i\rceil $ for all $i $ (this holds iff $a_i - 1 < q_i $). - -*Satisfies both quotas if both the above conditions hold (this holds iff $\frac{q_i}{a_i+1} < 1 < \frac{q_i}{a_i-1} $). - -Hamilton's largest-remainder method satisfies both lower quota and upper quota by construction (a short sketch illustrating this follows the list below). This does not hold for the divisor methods: - -* Jefferson's method is the only uniform method satisfying lower quota; - -* Adams's method is the only uniform method satisfying upper quota; - -* Webster's method is the only uniform method that is near quota; - -* No uniform method satisfies both quotas. In particular, Hamilton's method and the Quota method are not uniform. However, the Quota method is the unique method that satisfies both quotas in addition to house-monotonicity and "quota-consistency", which is a weaker form of uniformity.
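The following minimal sketch of Hamilton's largest-remainder method (the function name and example figures are our own) illustrates the quota properties discussed above: every allocation it returns lies between the floor and the ceiling of the corresponding quota.

```python
from math import ceil, floor

def hamilton(populations, h):
    """Largest-remainder apportionment of h seats."""
    total = sum(populations)
    quotas = [p * h / total for p in populations]
    alloc = [floor(q) for q in quotas]
    # hand the leftover seats to the largest fractional remainders
    order = sorted(range(len(quotas)), key=lambda i: quotas[i] - alloc[i], reverse=True)
    for i in order[: h - sum(alloc)]:
        alloc[i] += 1
    return quotas, alloc

quotas, alloc = hamilton([5300, 2700, 2000], 10)   # quotas 5.3, 2.7, 2.0
assert all(floor(q) <= a <= ceil(q) for q, a in zip(quotas, alloc))
print(alloc)                                       # [5, 3, 2]
```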
- -When the agents are political parties, they often split or merge, and it is interesting to check how such splitting/merging affects the apportionment. Suppose a certain apportionment method gives two agents $i,j$ some $a_i, a_j$ seats respectively, and then these two agents form a coalition, and the method is re-activated. - -* An apportionment method encourages coalitions if the coalition receives at least $a_i + a_j$ seats (in other words, it is split-proof - a party cannot gain a seat by splitting). - -* An apportionment method encourages schisms if the coalition receives at most $a_i + a_j$ seats (in other words, it is merge-proof - two parties cannot gain a seat by merging). - -Among the divisor methods: - -* A divisor method with divisor $d$ is coalitionally-stable iff $d(a_1 + a_2) \leq d(a_1) + d(a_2) \leq d(a_1 + a_2+1)$; this holds for all five standard divisor methods. - -Moreover, every method satisfying both quotas is "almost coalitionally-stable" - it gives every coalition between $a_i + a_j-2$ and $a_i + a_j+2$ seats. diff --git a/wiki/wikipedia/625.txt b/wiki/wikipedia/625.txt deleted file mode 100644 index 7709ba73d04f9b56547384e105e6270b5b23dc2e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/625.txt +++ /dev/null @@ -1,43 +0,0 @@ -In computer science, the test-and-set CPU instruction is used to implement - -mutual exclusion in multiprocessor environments. Although a correct lock can be implemented with test-and-set, it can lead to resource contention when the lock is busy (caused by bus locking and cache invalidation, since the test-and-set operation must access memory atomically). - -To lower this overhead, a more elaborate locking protocol, test and test-and-set, is used. - -Given a lock: - -boolean locked := false // shared lock variable - -Entry protocol is: - -procedure EnterCritical() { - -do { - -while ( locked == true ) skip // spin using ordinary reads until the lock seems free - -} while ( TestAndSet(locked) == true ) // attempt actual atomic locking using the test-and-set instruction - -} - -Exit protocol is: - -procedure ExitCritical() { - -locked := false // release the lock with an ordinary write - -} - -The entry protocol uses normal memory reads to wait for the lock to become free. Test-and-set is only used to try to get the lock when a normal memory read indicates that it is free. Thus the expensive atomic memory operations happen less often than in a simple spin around test-and-set. - -If the programming language used supports short-circuit evaluation, the entry protocol could be implemented as: - -procedure EnterCritical() { - -while ( locked == true or TestAndSet(locked) == true ) - -skip // spin until the lock is acquired - -} - -Although this optimization is useful in system programming, it should be avoided in high-level concurrent programming where the constraints are unclear or easily misunderstood. One example of bad usage is a similar idiom called double-checked locking, which is unsafe without special precautions and can be an anti-pattern. diff --git a/wiki/wikipedia/626.txt b/wiki/wikipedia/626.txt deleted file mode 100644 index 9c4a1ca81791ae5d29727c0c6bf49519d1eb33fd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/626.txt +++ /dev/null @@ -1,38 +0,0 @@ -In mathematical logic, a theory can be extended with - -new constants or function names under certain conditions with assurance that the extension will introduce - -no contradiction. Extension by definitions is perhaps the best-known approach, but it requires - -unique existence of an object with the desired property.
Addition of new names can also be done - -safely without uniqueness. - -Suppose that a closed formula -$$ -\exists x_1\ldots\exists x_m\varphi(x_1,\ldots,x_m) -$$ - -is a theorem of a first-order theory $T$. Let $T_1$ be a theory obtained from $T$ by extending its language with new constants -$$ -a_1,\ldots,a_m -$$ - -and adding a new axiom -$$ -\varphi(a_1,\ldots,a_m) -$$. - -Then $T_1$ is a conservative extension of $T$, which means that the theory $T_1$ has the same set of theorems in the original language (i.e., without constants $a_i$) as the theory $T$. - -Such a theory can also be conservatively extended by introducing a new functional symbol: - -Suppose that a closed formula $\forall \vec{x}\exists y\!\varphi(y,\vec{x})$ is a theorem of a first-order theory $T$, where we denote $\vec{x}:=(x_1,\ldots,x_n)$. Let $T_1$ be a theory obtained from $T$ by extending its language with a new functional symbol $f$ (of arity $n$) and adding a new axiom $\forall \vec{x}\varphi(f(\vec{x}),\vec{x})$. Then $T_1$ is a conservative extension of $T$, i.e. the theories $T$ and $T_1$ prove the same theorems not involving the functional symbol $f$. - -Shoenfield states the theorem in the form for a new function name; constants are the same as functions - -of zero arguments. In formal systems that admit ordered tuples, extension by multiple constants as shown here - -can be accomplished by the addition of a single new constant for the tuple together with new constant names - -for the values of its elements. diff --git a/wiki/wikipedia/627.txt b/wiki/wikipedia/627.txt deleted file mode 100644 index fad652fc94f97a055d44c846047c50dfe912923f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/627.txt +++ /dev/null @@ -1 +0,0 @@ -In algebraic geometry, the Enriques–Babbage theorem states that a canonical curve is either a set-theoretic intersection of quadrics, or trigonal, or a plane quintic. It was proved by Enriques and by Babbage. diff --git a/wiki/wikipedia/628.txt b/wiki/wikipedia/628.txt deleted file mode 100644 index 5cb27fc564d5b3f43fe92e97456ca6e5a826b5e1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/628.txt +++ /dev/null @@ -1,556 +0,0 @@ -A Pythagorean triple consists of three positive integers a, b, and c, such that $a^2 + b^2 = c^2$. Such a triple is commonly written (a, b, c), and a well-known example is (3, 4, 5). If (a, b, c) is a Pythagorean triple, then so is (ka, kb, kc) for any positive integer k. A primitive Pythagorean triple is one in which a, b and c are coprime (that is, they have no common divisor larger than 1). A triangle whose sides form a Pythagorean triple is called a Pythagorean triangle, and is necessarily a right triangle. - -The name is derived from the Pythagorean theorem, stating that every right triangle has side lengths satisfying the formula $a^2 + b^2 = c^2$; thus, Pythagorean triples describe the three integer side lengths of a right triangle. However, right triangles with non-integer sides do not form Pythagorean triples. For instance, the triangle with sides $a = b = 1$ and $c = \sqrt{2}$ is a right triangle, but $(1, 1, \sqrt{2})$ is not a Pythagorean triple because $\sqrt{2}$ is not an integer. Moreover, 1 and $\sqrt{2}$ do not have an integer common multiple because $\sqrt{2}$ is irrational. - -Pythagorean triples have been known since ancient times. The oldest known record comes from Plimpton 322, a Babylonian clay tablet from about 1800 BC, written in a sexagesimal number system. It was discovered by Edgar James Banks shortly after 1900, and sold to George Arthur Plimpton in 1922, for $10.
- -When searching for integer solutions, the equation $a^2 + b^2 = c^2$ is a Diophantine equation. Thus Pythagorean triples are among the oldest known solutions of a nonlinear Diophantine equation. - -There are 16 primitive Pythagorean triples of numbers up to 100: - -(3, 4, 5), (5, 12, 13), (8, 15, 17), (7, 24, 25), (20, 21, 29), (12, 35, 37), (9, 40, 41), (28, 45, 53), (11, 60, 61), (16, 63, 65), (33, 56, 65), (48, 55, 73), (13, 84, 85), (36, 77, 85), (39, 80, 89), (65, 72, 97). - -Each of these points forms a radiating line in the scatter plot. Other small Pythagorean triples such as (6, 8, 10) are not listed because they are not primitive; for instance (6, 8, 10) is a multiple of (3, 4, 5). - -Additionally there are further primitive Pythagorean triples of numbers up to 300; all of them can be obtained from Euclid's formula below (see also the enumeration sketch later in this section). - -Euclid's formula is a fundamental formula for generating Pythagorean triples given an arbitrary pair of integers m and n with m > n > 0. The formula states that the integers -$$ - a = m^2 - n^2 ,\ b = 2mn ,\ c = m^2 + n^2 -$$ - -form a Pythagorean triple. The triple generated by Euclid's formula is primitive if and only if m and n are coprime and not both odd. When both m and n are odd, then a, b, and c will be even, and the triple will not be primitive; however, dividing a, b, and c by 2 will yield a primitive triple when m and n are coprime and both odd. - -Every primitive triple arises (after the exchange of a and b, if a is even) from a unique pair of coprime numbers m, n, one of which is even. It follows that there are infinitely many primitive Pythagorean triples. This relationship of a, b and c to m and n from Euclid's formula is referenced throughout the rest of this article. - -Despite generating all primitive triples, Euclid's formula does not produce all triples; for example, (9, 12, 15) cannot be generated using integer m and n. This can be remedied by inserting an additional parameter k to the formula. The following will generate all Pythagorean triples uniquely: -$$ - a = k\cdot(m^2 - n^2) ,\ b = k\cdot(2mn) ,\ c = k\cdot(m^2 + n^2) -$$ - -where m, n, and k are positive integers with m > n, and with m and n coprime and not both odd. - -That these formulas generate Pythagorean triples can be verified by expanding $a^2 + b^2$ using elementary algebra and verifying that the result equals $c^2$. Since every Pythagorean triple can be divided through by some integer k to obtain a primitive triple, every triple can be generated uniquely by using the formula with m and n to generate its primitive counterpart and then multiplying through by k as in the last equation. - -Choosing m and n from certain integer sequences gives interesting results. For example, if m and n are consecutive Pell numbers, a and b will differ by 1. - -Many formulas for generating triples with particular properties have been developed since the time of Euclid. - -That satisfaction of Euclid's formula by a, b, c is sufficient for the triangle to be Pythagorean is apparent from the fact that for positive integers m and n, m > n, the a, b, and c given by the formula are all positive integers, and from the fact that -$$ - a^2+b^2 = (m^2 - n^2)^2 + (2mn)^2 = (m^2 + n^2)^2 = c^2. -$$ - -A proof of the necessity that a, b, c be expressed by Euclid's formula for any primitive Pythagorean triple is as follows. All such primitive triples can be written as (a, b, c) where $a^2 + b^2 = c^2$ and a, b, c are coprime. Thus a, b, c are pairwise coprime (if a prime number divided two of them, it would be forced also to divide the third one). As a and b are coprime, at least one of them is odd, so we may suppose that a is odd, by exchanging, if needed, a and b.
This implies that b is even and c is odd (if b were odd, c would be even, and $c^2$ would be a multiple of 4, while $a^2 + b^2$ would be congruent to 2 modulo 4, as an odd square is congruent to 1 modulo 4). - -From $a^2+b^2=c^2$ we obtain $c^2-a^2=b^2$ and hence $(c-a)(c+a)=b^2$. Then $\tfrac{(c+a)}{b}=\tfrac{b}{(c-a)}$. Since $\tfrac{(c+a)}{b}$ is rational, we set it equal to $\tfrac{m}{n}$ in lowest terms. Thus $\tfrac{(c-a)}{b}=\tfrac{n}{m}$, being the reciprocal of $\tfrac{(c+a)}{b}$. Then solving -$$ -\frac{c}{b}+\frac{a}{b}=\frac{m}{n}, \quad \quad \frac{c}{b}-\frac{a}{b}=\frac{n}{m} -$$ - -for $\tfrac{c}{b}$ and $\tfrac{a}{b}$ gives -$$ -\frac{c}{b}=\frac{1}{2}\left(\frac{m}{n}+\frac{n}{m}\right)=\frac{m^2+n^2}{2mn}, \quad \quad \frac{a}{b}=\frac{1}{2}\left(\frac{m}{n}-\frac{n}{m}\right)=\frac{m^2-n^2}{2mn}. -$$ - -As $\tfrac{m}{n}$ is fully reduced, m and n are coprime, and they cannot both be even. If they were both odd, the numerator of $\tfrac{m^2-n^2}{2mn}$ would be a multiple of 4 (because an odd square is congruent to 1 modulo 4), and the denominator 2mn would not be a multiple of 4. Since 4 would be the minimum possible even factor in the numerator and 2 would be the maximum possible even factor in the denominator, this would imply a to be even despite defining it as odd. Thus one of m and n is odd and the other is even, and the numerators of the two fractions with denominator 2mn are odd. Thus these fractions are fully reduced (an odd prime dividing this denominator divides one of m and n but not the other; thus it does not divide $m^2 \pm n^2$). One may thus equate numerators with numerators and denominators with denominators, giving Euclid's formula -$$ - a = m^2 - n^2 ,\ b = 2mn ,\ c = m^2 + n^2 -$$ with m and n coprime and of opposite parities. - -A longer but more commonplace proof is given in Maor (2007) and Sierpiński (2003). Another proof proceeds as an instance of a general method that applies to every homogeneous Diophantine equation of degree two. - -Suppose the sides of a Pythagorean triangle have lengths $m^2 - n^2$, $2mn$, and $m^2 + n^2$, and suppose the angle between the leg of length $m^2 - n^2$ and the hypotenuse of length $m^2 + n^2$ is denoted as β. Then $\tan{\tfrac{\beta}{2}}=\tfrac{n}{m}$ and the full-angle trigonometric values are $\sin{\beta}=\tfrac{2mn}{m^2+n^2}$, $\cos{\beta}=\tfrac{m^2-n^2}{m^2+n^2}$, and $\tan{\beta}=\tfrac{2mn}{m^2-n^2}$. - -The following variant of Euclid's formula is sometimes more convenient, as being more symmetric in m and n (same parity condition on m and n). - -If m and n are two odd integers such that m > n, then -$$ - a = mn ,\ b =\frac {m^2 - n^2}{2} ,\ c = \frac{m^2 + n^2}{2} -$$ - -are three integers that form a Pythagorean triple, which is primitive if and only if m and n are coprime. Conversely, every primitive Pythagorean triple arises (after the exchange of a and b, if a is even) from a unique pair m > n > 0 of coprime odd integers. - -The properties of a primitive Pythagorean triple (a, b, c) with a < b < c (without specifying which of a or b is even and which is odd) include: - -* $\tfrac{(c-a)(c-b)}{2}$ is always a perfect square. As it is only a necessary condition but not a sufficient one, it can be used in checking if a given triple of numbers is not a Pythagorean triple when they fail the test. For example, the triples {6, 12, 18} and {1, 8, 9} each pass the test that (c − a)(c − b)/2 is a perfect square, but neither is a Pythagorean triple.
- -*When a triple of numbers a, b and c forms a primitive Pythagorean triple, then (c minus the even leg) and one-half of (c minus the odd leg) are both perfect squares; however this is not a sufficient condition, as the numbers {1, 8, 9} pass the perfect squares test but are not a Pythagorean triple since $1^2 + 8^2 \ne 9^2$. - -*At most one of a, b, c is a square. - -*The area of a Pythagorean triangle cannot be the square or twice the square of a natural number. - -*Exactly one of a, b is divisible by 3, but never c. - -*The radius of the inscribed circle is the integer $\tfrac{a+b-c}{2}$. Equivalently, the radius of the outer Soddy circle of any right triangle is equal to its semiperimeter. The outer Soddy center is located at D, where ACBD is a rectangle, ACB the right triangle and AB its hypotenuse. - -*There are no Pythagorean triangles in which the hypotenuse and one leg are the legs of another Pythagorean triangle; this is one of the equivalent forms of Fermat's right triangle theorem. - -*Each primitive Pythagorean triangle has a ratio of area, K, to squared semiperimeter, $s^2$, that is unique to itself and is given by -$$ -\tfrac{K}{s^2} = \tfrac{n(m-n)}{m(m+n)} = 1-\tfrac{c}{s}. -$$ - -*A Pythagorean triangle with an integer altitude from the hypotenuse can be split along this altitude into two separate and smaller Pythagorean triangles; such triangles are known as decomposable. No primitive Pythagorean triangle has an integer altitude from the hypotenuse; that is, every primitive Pythagorean triangle is indecomposable. - -Euclid's formula for a Pythagorean triple -$$ -a = 2mn,\quad b=m^2-n^2,\quad c=m^2+n^2 -$$ - -can be understood in terms of the geometry of rational points on the unit circle. - -In fact, a point in the Cartesian plane with coordinates (x, y) belongs to the unit circle if $x^2 + y^2 = 1$. The point is rational if x and y are rational numbers, that is, if there are coprime integers a, b, c such that -$$ -\biggl(\frac{a}{c}\biggr)^2 + \biggl(\frac{b}{c}\biggr)^2=1. -$$ - -By multiplying both members by $c^2$, one can see that the rational points on the circle are in one-to-one correspondence with the primitive Pythagorean triples. - -The unit circle may also be defined by a parametric equation -$$ -x=\frac{1-t^2}{1+t^2}\quad y=\frac{2t}{1+t^2}. -$$ - -Euclid's formula for Pythagorean triples means that, except for (−1, 0), a point on the circle is rational if and only if the corresponding value of t is a rational number. - -There is a correspondence between points on the unit circle with rational coordinates and primitive Pythagorean triples. At this point, Euclid's formulae can be derived either by methods of trigonometry or equivalently by using the stereographic projection. - -For the stereographic approach, suppose that P′ is a point on the x-axis with rational coordinates -$$ -P' = \left(\frac{m}{n},0\right). -$$ - -Then, it can be shown by basic algebra that the point P has coordinates - - - -P = \left( - -\frac{2\left(\frac{m}{n}\right)}{\left(\frac{m}{n}\right)^2+1}, - -\frac{\left(\frac{m}{n}\right)^2-1}{\left(\frac{m}{n}\right)^2+1} - -\right) = - -\left( - -\frac{2mn}{m^2+n^2}, - -\frac{m^2-n^2}{m^2+n^2} - -\right). - -This establishes that each rational point of the x-axis goes over to a rational point of the unit circle. The converse, that every rational point of the unit circle comes from such a point of the x-axis, follows by applying the inverse stereographic projection. Suppose that P(x, y) is a point of the unit circle with x and y rational numbers. Then the point P′ obtained by stereographic projection onto the x-axis has coordinates -$$ -\left(\frac{x}{1-y},0\right) -$$ - -which is rational.
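The rational-point correspondence just described, together with Euclid's formula, is easy to make computational. The sketch below (our own code; the bound parameter is illustrative) enumerates primitive triples with m > n > 0 coprime and not both odd, checks each against the unit circle, and reproduces the 16 small triples listed earlier in this section.

```python
from fractions import Fraction
from math import gcd

def primitive_triples(bound):
    """Primitive triples (a, b, c) with c <= bound, from Euclid's formula."""
    m = 2
    while m * m + 1 <= bound:
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:   # coprime, not both odd
                a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
                if c <= bound:
                    yield (min(a, b), max(a, b), c)
        m += 1

triples = sorted(primitive_triples(100), key=lambda t: t[2])
for a, b, c in triples:
    assert a * a + b * b == c * c
    x, y = Fraction(a, c), Fraction(b, c)   # rational point on the unit circle
    assert x * x + y * y == 1
print(len(triples))   # 16 primitive triples with hypotenuse up to 100
```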
- -In terms of algebraic geometry, the algebraic variety of rational points on the unit circle is birational to the affine line over the rational numbers. The unit circle is thus called a rational curve, and it is this fact which enables an explicit parameterization of the (rational number) points on it by means of rational functions. - -A 2D lattice is a regular array of isolated points where if any one point is chosen as the Cartesian origin (0, 0), then all the other points are at (x, y) where x and y range over all positive and negative integers. Any Pythagorean triangle with triple (a, b, c) can be drawn within a 2D lattice with vertices at coordinates (0, 0), (a, 0) and (0, b). The count of lattice points lying strictly within the bounds of the triangle is given by $\tfrac{(a-1)(b-1)-\gcd{(a,b)}+1}{2};$ for primitive Pythagorean triples this interior lattice count is $\tfrac{(a-1)(b-1)}{2}.$ The area (by Pick's theorem equal to one less than the interior lattice count plus half the boundary lattice count) equals $\tfrac{ab}{2}$. - -The first occurrence of two primitive Pythagorean triples sharing the same area occurs with triangles with sides (20, 21, 29), (12, 35, 37) and common area 210. The first occurrence of two primitive Pythagorean triples sharing the same interior lattice count occurs with (18108, 252685, 253333), (28077, 162964, 165365) and interior lattice count 2287674594. Three primitive Pythagorean triples have been found sharing the same area: (4485, 5852, 7373), (3059, 8580, 9109), (1380, 19019, 19069) with area 13123110. As yet, no set of three primitive Pythagorean triples have been found sharing the same interior lattice count. - -By Euclid's formula all primitive Pythagorean triples can be generated from integers $m$ and $n$ with $m>n>0$, $m+n$ odd and $\gcd(m, n)=1$. Hence there is a one-to-one mapping of rationals (in lowest terms) to primitive Pythagorean triples where $\tfrac{n}{m}$ is in the interval $(0,1)$ and $m+n$ odd. - -The reverse mapping from a primitive triple $(a , b , c)$ where $c>b>a>0$ to a rational $\tfrac{n}{m}$ is achieved by studying the two sums $a+c$ and $b+c$. One of these sums will be a square that can be equated to $(m+n)^2$ and the other will be twice a square that can be equated to $2m^2$. It is then possible to determine the rational $\tfrac{n}{m}$. - -In order to enumerate primitive Pythagorean triples the rational can be expressed as an ordered pair $(n,m)$ and mapped to an integer using a pairing function such as Cantor's pairing function. An example of such an enumeration begins -$$ -8,18,19,32,33,34,\dots -$$ and gives rationals -$$ -\tfrac{1}{2},\tfrac{2}{3},\tfrac{1}{4},\tfrac{3}{4},\tfrac{2}{5},\tfrac{1}{6},\dots -$$ these, in turn, generate primitive triples -$$ -(3,4,5),(5,12,13),(8,15,17),(7,24,25),(20,21,29),(12,35,37),\dots -$$ - -Pythagorean triples can likewise be encoded into a square matrix of the form - -X = \begin{bmatrix} - -c+b & a\\ - -a & c-b - -\end{bmatrix}. - - - -A matrix of this form is symmetric. Furthermore, the determinant of X is -$$ -\det X = c^2 - a^2 - b^2 -$$ - -which is zero precisely when (a,b,c) is a Pythagorean triple. If X corresponds to a Pythagorean triple, then as a matrix it must have rank 1. - -Since X is symmetric, it follows from a result in linear algebra that there is a column vector ξ = [m n]T such that the outer product -$$ -X = 2\begin{bmatrix}m\\n\end{bmatrix}[m\ n] = 2\xi\xi^T \qquad (1) -$$ - -holds, where the T denotes the matrix transpose.
The vector ξ is called a spinor (for the Lorentz group SO(1, 2)). In abstract terms, the Euclid formula means that each primitive Pythagorean triple can be written as the outer product with itself of a spinor with integer entries, as in (1). - -The modular group Γ is the set of 2×2 matrices with integer entries -$$ -A = \begin{bmatrix}\alpha&\beta\\ \gamma&\delta\end{bmatrix} -$$ - -with determinant equal to one: αδ − βγ = 1. This set forms a group, since the inverse of a matrix in Γ is again in Γ, as is the product of two matrices in Γ. The modular group acts on the collection of all integer spinors. Furthermore, the group is transitive on the collection of integer spinors with relatively prime entries. For if [m n]T has relatively prime entries, then -$$ -\begin{bmatrix}m&-v\\n&u\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}m\\n\end{bmatrix} -$$ - -where u and v are selected (by the Euclidean algorithm) so that mu + nv = 1. - -By acting on the spinor ξ in (1), the action of Γ goes over to an action on Pythagorean triples, provided one allows for triples with possibly negative components. Thus if A is a matrix in Γ, then -$$ -2(A\xi)(A\xi)^T = AXA^T \qquad (2) -$$ - -gives rise to an action on the matrix X in (1). This does not give a well-defined action on primitive triples, since it may take a primitive triple to an imprimitive one. It is convenient at this point to call a triple (a,b,c) standard if c > 0 and either (a,b,c) are relatively prime or (a/2,b/2,c/2) are relatively prime with a/2 odd. If the spinor [m n]T has relatively prime entries, then the associated triple (a,b,c) determined by (1) is a standard triple. It follows that the action of the modular group is transitive on the set of standard triples. - -Alternatively, restrict attention to those values of m and n for which m is odd and n is even. Let the subgroup Γ(2) of Γ be the kernel of the group homomorphism -$$ -\Gamma=\mathrm{SL}(2,\mathbf{Z})\to \mathrm{SL}(2,\mathbf{Z}_2) -$$ - -where SL(2,Z2) is the special linear group over the finite field Z2 of integers modulo 2. Then Γ(2) is the group of unimodular transformations which preserve the parity of each entry. Thus if the first entry of ξ is odd and the second entry is even, then the same is true of Aξ for all A ∈ Γ(2). In fact, under the action (2), the group Γ(2) acts transitively on the collection of primitive Pythagorean triples. - -The group Γ(2) is the free group whose generators are the matrices -$$ -U=\begin{bmatrix}1&2\\0&1\end{bmatrix},\qquad L=\begin{bmatrix}1&0\\2&1\end{bmatrix}. -$$ - -Consequently, every primitive Pythagorean triple can be obtained in a unique way as a product of copies of the matrices U and L. - -By a result of Berggren, all primitive Pythagorean triples can be generated from the (3, 4, 5) triangle by using the three linear transformations T1, T2, T3 below, where a, b, c are sides of a triple: - -* T1: (a, b, c) → (a − 2b + 2c, 2a − b + 2c, 2a − 2b + 3c) - -* T2: (a, b, c) → (a + 2b + 2c, 2a + b + 2c, 2a + 2b + 3c) - -* T3: (a, b, c) → (−a + 2b + 2c, −2a + b + 2c, −2a + 2b + 3c) - -In other words, every primitive triple will be a "parent" to three additional primitive triples. - -Starting from the initial node with a = 3, b = 4, and c = 5, the operation T1 produces the new triple - -(3 − (2×4) + (2×5), (2×3) − 4 + (2×5), (2×3) − (2×4) + (3×5)) = (5, 12, 13), - -and similarly T2 and T3 produce the triples (21, 20, 29) and (15, 8, 17). - -The linear transformations T1, T2, and T3 have a geometric interpretation in the language of quadratic forms. They are closely related to (but are not equal to) reflections generating the orthogonal group of $x^2 + y^2 - z^2$ over the integers.
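Berggren's tree can be generated directly from the transformations T1, T2, T3 above. The sketch below (our own code) applies them to (3, 4, 5) for two generations and checks that every triple produced is a primitive Pythagorean triple.

```python
# Berggren's ternary tree: each primitive triple (a, b, c) has three children.
from math import gcd

def children(t):
    a, b, c = t
    return [( a - 2*b + 2*c,  2*a - b + 2*c,  2*a - 2*b + 3*c),   # T1
            ( a + 2*b + 2*c,  2*a + b + 2*c,  2*a + 2*b + 3*c),   # T2
            (-a + 2*b + 2*c, -2*a + b + 2*c, -2*a + 2*b + 3*c)]   # T3

layer = [(3, 4, 5)]
for _ in range(2):            # two generations: 3 children, then their 9 children
    layer = [c for t in layer for c in children(t)]
    for a, b, c in layer:
        assert a*a + b*b == c*c and gcd(a, gcd(b, c)) == 1
print(layer[:3])              # (7, 24, 25), (55, 48, 73), (45, 28, 53)
```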
- -Alternatively, Euclid's formulae can be analyzed and proven using the Gaussian integers. Gaussian integers are complex numbers of the form α = u + vi, where u and v are ordinary integers and i is the square root of negative one. The units of Gaussian integers are ±1 and ±i. The ordinary integers are called the rational integers and denoted as Z. The Gaussian integers are denoted as Z[i]. The right-hand side of the Pythagorean theorem may be factored in Gaussian integers: -$$ -c^2 = a^2+b^2 = (a+bi)\overline{(a+bi)} = (a+bi)(a-bi). -$$ - -A primitive Pythagorean triple is one in which a and b are coprime, i.e., they share no prime factors in the integers. For such a triple, either a or b is even, and the other is odd; from this, it follows that c is also odd. - -The two factors z := a + bi and z* := a − bi of a primitive Pythagorean triple each equal the square of a Gaussian integer. This can be proved using the property that every Gaussian integer can be factored uniquely into Gaussian primes up to units. (This unique factorization follows from the fact that, roughly speaking, a version of the Euclidean algorithm can be defined on them.) The proof has three steps. First, if a and b share no prime factors in the integers, then they also share no prime factors in the Gaussian integers. (Assume a = gu and b = gv with Gaussian integers g, u and v and g not a unit. Then u and v lie on the same line through the origin. All Gaussian integers on such a line are integer multiples of some Gaussian integer h. But then the integer gh ≠ ±1 divides both a and b.) Second, it follows that z and z* likewise share no prime factors in the Gaussian integers. For if they did, then their common divisor δ would also divide z + z* = 2a and z − z* = 2ib. Since a and b are coprime, that implies that δ divides $2 = (1 + i)(1 - i) = i(1 - i)^2$. From the formula $c^2 = zz^*$, that in turn would imply that c is even, contrary to the hypothesis of a primitive Pythagorean triple. Third, since $c^2$ is a square, every Gaussian prime in its factorization is doubled, i.e., appears an even number of times. Since z and z* share no prime factors, this doubling is also true for them. Hence, z and z* are squares. - -Thus, the first factor can be written -$$ -a+bi = \varepsilon\left(m + ni \right)^2, \quad \varepsilon\in\{\pm 1, \pm i\}. -$$ - -The real and imaginary parts of this equation give the two formulas: -$$ -\begin{cases}\varepsilon = +1, & \quad a = +\left( m^2 - n^2 \right),\quad b = +2mn; \\ \varepsilon = -1, & \quad a = -\left( m^2 - n^2 \right),\quad b = -2mn; \\ \varepsilon = +i, & \quad a = -2mn,\quad b = +\left( m^2 - n^2 \right); \\ \varepsilon = -i, & \quad a = +2mn,\quad b = -\left( m^2 - n^2 \right).\end{cases} -$$ - -For any primitive Pythagorean triple, there must be integers m and n such that these two equations are satisfied. Hence, every Pythagorean triple can be generated from some choice of these integers. - -If we consider the square of a Gaussian integer we get the following direct interpretation of Euclid's formula as representing the perfect square of a Gaussian integer. -$$ -(m+ni)^2 = (m^2-n^2)+2mni. -$$ - -Using the facts that the Gaussian integers are a Euclidean domain and that for a Gaussian integer p $|p|^2$ is always a square it is possible to show that a Pythagorean triple corresponds to the square of a prime Gaussian integer if the hypotenuse is prime. - -If the Gaussian integer is not prime then it is the product of two Gaussian integers p and q with $|p|^2$ and $|q|^2$ integers.
Since magnitudes multiply in the Gaussian integers, the product must be $|p||q|$, which when squared to find a Pythagorean triple must be composite. The contrapositive completes the proof. - -With reference to the figure and the definition of the foci of an ellipse, F1 and F2, for any point P on the ellipse, F1P + PF2 is constant. - -As points A and B are both on the ellipse, F1A + AF2 = F1B + BF2. Due to symmetry, F1A + AF2 = F2A' + AF2 = AA' = 2 AC, and F1B + BF2 = 2 BF2. Hence, AC = BF2. - -Thus, if BCF2 is a right-angle triangle with integral sides, the separation of the foci, linear eccentricity, minor axis and major axis are all also integers. - -There are a number of results on the distribution of Pythagorean triples. In the scatter plot, a number of obvious patterns are already apparent. Whenever the legs (a,b) of a primitive triple appear in the plot, all integer multiples of (a,b) must also appear in the plot, and this property produces the appearance of lines radiating from the origin in the diagram. - -Within the scatter, there are sets of parabolic patterns with a high density of points and all their foci at the origin, opening up in all four directions. Different parabolas intersect at the axes and appear to reflect off the axis with an incidence angle of 45 degrees, with a third parabola entering in a perpendicular fashion. Within this quadrant, each arc centered on the origin shows that section of the parabola that lies between its tip and its intersection with its semi-latus rectum. - -These patterns can be explained as follows. If $a^2/4n$ is an integer, then (a, $|n-a^2/4n|$, $n+a^2/4n$) is a Pythagorean triple. (In fact every Pythagorean triple (a, b, c) can be written in this way with integer n, possibly after exchanging a and b, since $n=(b+c)/2$ and a and b cannot both be odd.) The Pythagorean triples thus lie on curves given by $b = |n-a^2/4n|$, that is, parabolas reflected at the a-axis, and the corresponding curves with a and b interchanged. If a is varied for a given n (i.e. on a given parabola), integer values of b occur relatively frequently if n is a square or a small multiple of a square. If several such values happen to lie close together, the corresponding parabolas approximately coincide, and the triples cluster in a narrow parabolic strip. For instance, $38^2 = 1444$, $2 \times 27^2 = 1458$, $3 \times 22^2 = 1452$, $5 \times 17^2 = 1445$ and $10 \times 12^2 = 1440$; the corresponding parabolic strip around n ≈ 1450 is clearly visible in the scatter plot. - -The angular properties described above follow immediately from the functional form of the parabolas. The parabolas are reflected at the a-axis at a = 2n, and the derivative of b with respect to a at this point is –1; hence the incidence angle is 45°. Since the clusters, like all triples, are repeated at integer multiples, the value 2n also corresponds to a cluster. The corresponding parabola intersects the b-axis at right angles at b = 2n, and hence its reflection upon interchange of a and b intersects the a-axis at right angles at a = 2n, precisely where the parabola for n is reflected at the a-axis. (The same is of course true for a and b interchanged.) - -Albert Fässler and others provide insights into the significance of these parabolas in the context of conformal mappings. - -The case n = 1 of the more general construction of Pythagorean triples has been known for a long time. Proclus, in his commentary to the 47th Proposition of the first book of Euclid's Elements, describes it as follows:
    Certain methods for the discovery of triangles of this kind are handed down, one of which they refer to Plato, and another to Pythagoras. (The latter) starts from odd numbers. For it makes the odd number the smaller of the sides about the right angle; then it takes the square of it, subtracts unity and makes half the difference the greater of the sides about the right angle; lastly it adds unity to this and so forms the remaining side, the hypotenuse.
    - -...For the method of Plato argues from even numbers. It takes the given even number and makes it one of the sides about the right angle; then, bisecting this number and squaring the half, it adds unity to the square to form the hypotenuse, and subtracts unity from the square to form the other side about the right angle. ... Thus it has formed the same triangle as that which was obtained by the other method.
- -In equation form, this becomes: - -a is odd (Pythagoras, c. 540 BC): -$$ -\text{side }a : \text{side }b = {a^2 - 1 \over 2} : \text{side }c = {a^2 + 1 \over 2}. -$$ - -a is even (Plato, c. 380 BC): -$$ -\text{side }a : \text{side }b = \left({a \over 2}\right)^2 - 1 : \text{side }c = \left({a \over 2}\right)^2 + 1 -$$ - -It can be shown that all Pythagorean triples can be obtained, with appropriate rescaling, from the basic Platonic sequence ($a$, $(a^2-1)/2$ and $(a^2+1)/2$) by allowing a to take non-integer rational values. If a is replaced with the fraction m/n in the sequence, the result is equal to the 'standard' triple generator ($2mn$, $m^2-n^2$, $m^2+n^2$) after rescaling. It follows that every triple has a corresponding rational a value which can be used to generate a similar triangle (one with the same three angles and with sides in the same proportions as the original). For example, the Platonic equivalent of (56, 33, 65) is generated by a = m/n = 7/4 as ($a$, $(a^2-1)/2$, $(a^2+1)/2$) = (56/32, 33/32, 65/32). The Platonic sequence itself can be derived by following the steps for 'splitting the square' described in Diophantus II.VIII. - -The equation, -$$ -a^4+b^4+c^4+d^4 = (a+b+c+d)^4 -$$ - -is equivalent to the special Pythagorean triple, -$$ -(a^2+ab+b^2)^2+(c^2+cd+d^2)^2 = ((a+b)^2+(a+b)(c+d)+(c+d)^2)^2 -$$ - -There are infinitely many solutions to this equation, as solving for the variables involves an elliptic curve. Small ones are, -$$ -a, b, c, d = -2634, 955, 1770, 5400 -$$ -$$ -a, b, c, d = -31764, 7590, 27385, 48150 -$$ - -One way to generate solutions to $a^2+b^2=c^2+d^2$ is to parametrize a, b, c, d in terms of integers m, n, p, q as follows: -$$ -(m^2+n^2)(p^2+q^2)=(mp-nq)^2+(np+mq)^2=(mp+nq)^2+(np-mq)^2. -$$ - -Given two sets of Pythagorean triples, -$$ -(a^2-b^2)^2+(2a b)^2 = (a^2+b^2)^2 -$$ -$$ -(c^2-d^2)^2+(2c d)^2 = (c^2+d^2)^2 -$$ - -the problem of finding equal products of a non-hypotenuse side and the hypotenuse, -$$ -(a^2 -b^2)(a^2+b^2) = (c^2 -d^2)(c^2+d^2) -$$ - -is easily seen to be equivalent to the equation, -$$ -a^4 -b^4 = c^4 -d^4 -$$ - -and was first solved by Euler as $a, b, c, d = 133,59,158,134$. Since he showed this corresponds to a rational point on an elliptic curve, there are infinitely many solutions. In fact, he also found a 7th degree polynomial parameterization. - -For the case of Descartes' circle theorem where all variables are squares, -$$ -2(a^4+b^4+c^4+d^4) = (a^2+b^2+c^2+d^2)^2 -$$ - -Euler showed this is equivalent to three simultaneous Pythagorean triples, -$$ -(2ab)^2+(2cd)^2 = (a^2+b^2-c^2-d^2)^2 -$$ -$$ -(2ac)^2+(2bd)^2 = (a^2-b^2+c^2-d^2)^2 -$$ -$$ -(2ad)^2+(2bc)^2 = (a^2-b^2-c^2+d^2)^2 -$$ - -There are also infinitely many solutions, and for the special case when $a+b=c$, the equation simplifies to, -$$ -4(a^2+a b+b^2) = d^2 -$$ - -with small solutions as $a, b, c, d = 3, 5, 8, 14$ and can be solved as binary quadratic forms. - -No Pythagorean triples are isosceles, because the ratio of the hypotenuse to either other side is $\sqrt{2}$, but $\sqrt{2}$ cannot be expressed as the ratio of 2 integers. - -There are, however, right-angled triangles with integral sides for which the lengths of the non-hypotenuse sides differ by one, such as, -$$ -3^2+4^2 = 5^2 -$$ -$$ -20^2+21^2 = 29^2 -$$ - -and an infinite number of others. They can be completely parameterized as, -$$ -\left(\tfrac{x-1}{2}\right)^2+\left(\tfrac{x+1}{2}\right)^2 = y^2 -$$ - -where {x, y} are the solutions to the Pell equation $x^2-2y^2 = -1$.
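The Pell-equation parameterization above can be iterated explicitly: assuming the standard recurrence (x, y) → (3x + 4y, 2x + 3y) for solutions of x² − 2y² = −1, the sketch below (our own code) produces the first few triples whose legs differ by one.

```python
# Solutions of x^2 - 2y^2 = -1 give triples ((x-1)/2, (x+1)/2, y)
# whose non-hypotenuse sides differ by one.
x, y = 1, 1                        # fundamental solution
for _ in range(4):
    x, y = 3*x + 4*y, 2*x + 3*y    # next solution of the negative Pell equation
    a, b, c = (x - 1) // 2, (x + 1) // 2, y
    assert a*a + b*b == c*c and b - a == 1
    print((a, b, c))               # (3, 4, 5), (20, 21, 29), (119, 120, 169), ...
```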
If a, b, c are the sides of this type of primitive Pythagorean triple (PPT) then the solution to the Pell equation is given by the recursive formula -$$ -a_n=6a_{n-1}-a_{n-2}+2 -$$ with $a_1=3$ and $a_2=20$ -$$ -b_n=6b_{n-1}-b_{n-2}-2 -$$ with $b_1=4$ and $b_2=21$ -$$ -c_n=6c_{n-1}-c_{n-2} -$$ with $c_1=5$ and $c_2=29$. - -This sequence of PPTs forms the central stem (trunk) of the rooted ternary tree of PPTs. - -When it is the longer non-hypotenuse side and hypotenuse that differ by one, such as in -$$ -5^2+12^2 = 13^2 -$$ -$$ -7^2+24^2 = 25^2 -$$ - -then the complete solution for the PPT a, b, c is -$$ -a=2m+1, \quad b=2m^2+2m, \quad c=2m^2+2m+1 -$$ - -and -$$ -(2m+1)^2+(2m^2+2m)^2=(2m^2+2m+1)^2 -$$ - -where integer $m>0$ is the generating parameter. - -It shows that all odd numbers (greater than 1) appear in this type of almost-isosceles PPT. This sequence of PPTs forms the right hand side outer stem of the rooted ternary tree of PPTs. - -Another property of this type of almost-isosceles PPT is that the sides are related such that -$$ -a^b+b^a=Kc -$$ - -for some integer $K$. Or in other words $a^b+b^a$ is divisible by $c$, such as in -$$ -(5^{12}+12^5)/13 = 18799189 -$$. - -Starting with 5, every second Fibonacci number is the length of the hypotenuse of a right triangle with integer sides, or in other words, the largest number in a Pythagorean triple, obtained from the formula - -(F_nF_{n+3})^2 + (2F_{n+1}F_{n+2})^2 = F_{2n+3}^2. - -The sequence of Pythagorean triangles obtained from this formula has sides of lengths - -(3,4,5), (5,12,13), (16,30,34), (39,80,89), ... - -The middle side of each of these triangles is the sum of the three sides of the preceding triangle. - -There are several ways to generalize the concept of Pythagorean triples. - -Using the simple algebraic identity, -$$ -(x_1^2-x_0)^2 + (2x_1)^{2}x_0 = (x_1^2+x_0)^2 -$$ - -for arbitrary $x_0$, $x_1$, it is easy to prove that the square of the sum of n squares is itself the sum of n squares by letting $x_0 = x_2^2 + x_3^2 + \cdots + x_n^2$ and then distributing terms. One can see how Pythagorean triples and quadruples are just the particular cases $x_0 = x_2^2$ and $x_0 = x_2^2 + x_3^2$, respectively, and so on for other n, with quintuples given by -$$ -(a^2-b^2-c^2-d^2)^2 + (2ab)^2 + (2ac)^2 + (2ad)^2 = (a^2+b^2+c^2+d^2)^2. -$$ - -Since the sum F(k,m) of k consecutive squares beginning with $m^2$ is given by the formula, -$$ -F(k,m)=km(k-1+m)+\frac{k(k-1)(2k-1)}{6} -$$ - -one may find values (k, m) so that F(k,m) is a square, such as one by Hirschhorn where the number of terms is itself a square, -$$ -m=\tfrac{v^4-24v^2-25}{48},\ k=v^2,\ \sqrt{F(k,m)}=\tfrac{v^5+47v}{48} -$$ - -and v ≥ 5 is any integer not divisible by 2 or 3. For the smallest case v = 5, hence k = 25, this yields the well-known cannonball-stacking problem of Lucas, -$$ -0^2+1^2+2^2+\dots+24^2 = 70^2 -$$ - -a fact which is connected to the Leech lattice. - -In addition, if in a Pythagorean n-tuple (n ≥ 4) all addends are consecutive except one, one can use the equation, -$$ -F(k,m) + p^{2} = (p+1)^{2} -$$ - -Since the second power of p cancels out, this is only linear and easily solved for as $p=\tfrac{F(k,m)-1}{2}$ though k and m should be chosen so that p is an integer, with a small example being k = 5, m = 1 yielding, -$$ -1^2+2^2+3^2+4^2+5^2+27^2=28^2 -$$ - -Thus, one way of generating Pythagorean n-tuples is by using, for various x, -$$ -x^2+(x+1)^2+\cdots +(x+q)^2+p^2=(p+1)^2, -$$ - -where q = n–3 and where -$$ -p=\frac{(q+1)x^2+q(q+1)x+\frac{q(q+1)(2q+1)}{6} -1}{2}.
-$$
-
-A set of four positive integers a, b, c and d such that $a^2 + b^2 + c^2 = d^2$ is called a Pythagorean quadruple. The simplest example is (1, 2, 2, 3), since $1^2 + 2^2 + 2^2 = 3^2$. The next simplest (primitive) example is (2, 3, 6, 7), since $2^2 + 3^2 + 6^2 = 7^2$.
-
-All quadruples are given by the formula
-$$
-(m^2+n^2-p^2-q^2)^2+(2mq+2np)^2+(2nq-2mp)^2=(m^2+n^2+p^2+q^2)^2.
-$$
-
-A generalization of the concept of Pythagorean triples is the search for triples of positive integers a, b, and c, such that $a^n + b^n = c^n$, for some n strictly greater than 2. Pierre de Fermat in 1637 claimed that no such triple exists, a claim that came to be known as Fermat's Last Theorem because it took longer than any other conjecture by Fermat to be proven or disproven. The first proof was given by Andrew Wiles in 1994.
-
-Another generalization is searching for sequences of n + 1 positive integers for which the nth power of the last is the sum of the nth powers of the previous terms. The smallest sequences for known values of n are:
-
-* n = 3: {3, 4, 5; 6}.
-
-* n = 4: {30, 120, 272, 315; 353}
-
-* n = 5: {19, 43, 46, 47, 67; 72}
-
-* n = 7: {127, 258, 266, 413, 430, 439, 525; 568}
-
-* n = 8: {90, 223, 478, 524, 748, 1088, 1190, 1324; 1409}
-
-For the n = 3 case, in which $x^3+y^3+z^3=w^3,$ called the Fermat cubic, a general formula exists giving all solutions.
-
-A slightly different generalization allows the sum of (k + 1) nth powers to equal the sum of (n − k) nth powers. For example:
-
-* (n = 3): $1^3 + 12^3 = 9^3 + 10^3$, made famous by Hardy's recollection of a conversation with Ramanujan about the number 1729 being the smallest number that can be expressed as a sum of two cubes in two distinct ways.
-
-There can also exist n − 1 positive integers whose nth powers sum to an nth power (though, by Fermat's Last Theorem, not for n = 3); these are counterexamples to Euler's sum of powers conjecture. The smallest known counterexamples are
-
-* n = 4: (95800, 217519, 414560; 422481)
-
-* n = 5: (27, 84, 110, 133; 144)
-
-A Heronian triangle is commonly defined as one with integer sides whose area is also an integer, and we shall consider Heronian triangles with distinct integer sides. The lengths of the sides of such a triangle form a Heronian triple (a, b, c) provided a < b < c.
-
-Every Pythagorean triple is a Heronian triple, because at least one of the legs a, b must be even in a Pythagorean triple, so the area ab/2 is an integer. Not every Heronian triple is a Pythagorean triple, however, as the example (4, 13, 15) with area 24 shows.
-
-If (a, b, c) is a Heronian triple, so is (ma, mb, mc) where m is any positive integer; its area will be the integer that is $m^2$ times the integer area of the (a, b, c) triangle.
-
-The Heronian triple (a, b, c) is primitive provided a, b, c are setwise coprime. (With primitive Pythagorean triples the stronger statement that they are pairwise coprime also applies, but with primitive Heronian triangles the stronger statement does not always hold true, such as with (7, 15, 20).)
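The Heronian property is straightforward to test computationally from Heron's formula, since $16\,(\text{area})^2 = (a+b+c)(-a+b+c)(a-b+c)(a+b-c)$; this is the same quantity as the divisibility-by-16 condition restated after the list below. The following Python sketch is an editorial illustration (the function name `is_heronian` is our own):

```python
from math import isqrt

def is_heronian(a, b, c):
    """Check via Heron's formula whether the triangle (a, b, c) has integer area.
    16 * area^2 = (a+b+c)(-a+b+c)(a-b+c)(a+b-c), so the triangle is Heronian
    exactly when this quantity is a positive perfect square whose square root
    (which equals 4 * area) is divisible by 4."""
    s16 = (a + b + c) * (-a + b + c) * (a - b + c) * (a + b - c)
    if s16 <= 0:
        return False                  # degenerate or impossible triangle
    r = isqrt(s16)
    return r * r == s16 and r % 4 == 0

# Examples from the article: Heronian but not Pythagorean
print(is_heronian(4, 13, 15), is_heronian(13, 14, 15))   # True True
print(is_heronian(4, 13, 16))                            # False
```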
Here are a few of the simplest primitive Heronian triples that are not Pythagorean triples:
-
-(4, 13, 15) with area 24
-
-(3, 25, 26) with area 36
-
-(7, 15, 20) with area 42
-
-(6, 25, 29) with area 60
-
-(11, 13, 20) with area 66
-
-(13, 14, 15) with area 84
-
-(13, 20, 21) with area 126
-
-By Heron's formula, the extra condition for a triple of positive integers (a, b, c) with a < b < c to be Heronian is that
-$$
-(a^2 + b^2 + c^2)^2 - 2(a^4 + b^4 + c^4)
-$$
-
-or equivalently
-$$
-2(a^2b^2 + a^2c^2 + b^2c^2) - (a^4 + b^4 + c^4)
-$$
-
-be a nonzero perfect square divisible by 16.
-
-Primitive Pythagorean triples have been used in cryptography as random sequences and for the generation of keys.
diff --git a/wiki/wikipedia/629.txt b/wiki/wikipedia/629.txt
deleted file mode 100644
index 0adac76762b9d0c3c1b72f5b89955f46de6d59d9..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/629.txt
+++ /dev/null
@@ -1,73 +0,0 @@
-In geometry, an n-gonal antiprism or n-antiprism is a polyhedron composed of two parallel direct copies (not mirror images) of an n-sided polygon, connected by an alternating band of 2n triangles.
-
-Antiprisms are a subclass of prismatoids, and are a (degenerate) type of snub polyhedron.
-
-Antiprisms are similar to prisms, except that the bases are twisted relative to each other, and that the side faces (connecting the bases) are 2n triangles, rather than n quadrilaterals.
-
-At the intersection of modern-day graph theory and coding theory, the triangulation of a set of points has interested mathematicians since Isaac Newton, who fruitlessly sought a mathematical proof of the kissing number problem in 1694. The existence of antiprisms was discussed, and their name was coined, by Johannes Kepler, though it is possible that they were previously known to Archimedes, as they satisfy the same conditions on faces and on vertices as the Archimedean solids. According to Ericson and Zinoviev, H. S. M. Coxeter (1907-2003) wrote at length on the topic, especially on the family of boron hydrides in 1975, and on carboranes, because they are isoelectronic. This is a mathematically real conclusion reached by studies of X-ray diffraction patterns, and stems from the 1971 work of Kenneth Wade, the nominative source for Wade's rules of polyhedral skeletal electron pair theory.
-
-Rare earth metals such as the lanthanides form antiprismal compounds with some of the halides or some of the iodides. The study of crystallography is useful here. Some lanthanides, when arranged in peculiar antiprismal structures with chlorine and water, can form molecule-based magnets.
-
-For an antiprism with regular n-gon bases, one usually considers the case where these two copies are twisted by an angle of 180/n degrees.
-
-The axis of a regular polygon is the line perpendicular to the polygon plane and lying in the polygon centre.
-
-For an antiprism with congruent regular n-gon bases, twisted by an angle of 180/n degrees, more regularity is obtained if the bases have the same axis: are coaxial; i.e. (for non-coplanar bases): if the line connecting the base centers is perpendicular to the base planes. Then, the antiprism is called a right antiprism, and its 2n side faces are isosceles triangles.
-
-A uniform antiprism has two congruent regular n-gon base faces, and 2n equilateral triangles as side faces.
-
-Uniform antiprisms form an infinite class of vertex-transitive polyhedra, as do uniform prisms.
For n = 2, we have the regular tetrahedron as a digonal antiprism (degenerate antiprism); for n = 3, the regular octahedron as a triangular antiprism (non-degenerate antiprism).
-
-The dual polyhedra of the antiprisms are the trapezohedra.
-
-Cartesian coordinates for the vertices of a right antiprism (i.e. with regular n-gon bases and isosceles side faces) are
-$$
-\left( \cos\frac{k\pi}{n}, \sin\frac{k\pi}{n}, (-1)^k h \right)
-$$
-
-with k ranging from 0 to 2n – 1;
-
-if the triangles are equilateral, then
-$$
-2h^2=\cos\frac{\pi}{n}-\cos\frac{2\pi}{n}.
-$$
-
-Let a be the edge-length of a uniform antiprism; then the volume is
-$$
-V = \frac{n \sqrt{4\cos^2\frac{\pi}{2n}-1}\sin \frac{3\pi}{2n} }{12\sin^2\frac{\pi}{n}}~a^3,
-$$
-
-and the surface area is
-$$
-A = \frac{n}{2} \left( \cot{\frac{\pi}{n}} + \sqrt{3}\right) a^2.
-$$
-
-There is an infinite set of truncated antiprisms, including a lower-symmetry form of the truncated octahedron (the truncated triangular antiprism). These can be alternated to create snub antiprisms, two of which are Johnson solids, while the snub triangular antiprism is a lower-symmetry form of the icosahedron.
-
-The symmetry group of a right n-antiprism (i.e. with regular bases and isosceles side faces) is Dnd of order 4n, except in the cases of:
-
-*n = 2: the regular tetrahedron, which has the larger symmetry group Td of order 24 = 3×(4×2), which has three versions of D2d as subgroups;
-
-*n = 3: the regular octahedron, which has the larger symmetry group Oh of order 48 = 4×(4×3), which has four versions of D3d as subgroups.
-
-The symmetry group contains inversion if and only if n is odd.
-
-The rotation group is Dn of order 2n, except in the cases of:
-
-*n = 2: the regular tetrahedron, which has the larger rotation group T of order 12 = 3×(2×2), which has three versions of D2 as subgroups;
-
-*n = 3: the regular octahedron, which has the larger rotation group O of order 24 = 4×(2×3), which has four versions of D3 as subgroups.
-
-Uniform star antiprisms are named by their star polygon bases, {p/q}, and exist in prograde and in retrograde (crossed) solutions. Crossed forms have intersecting vertex figures, and are denoted by "inverted" fractions: p/(p – q) instead of p/q; example: 5/3 instead of 5/2.
-
-A right star antiprism has two congruent coaxial regular convex or star polygon base faces, and 2n isosceles triangle side faces.
-
-Any star antiprism with regular convex or star polygon bases can be made a right star antiprism (by translating and/or twisting one of its bases, if necessary).
-
-In the retrograde forms but not in the prograde forms, the triangles joining the convex or star bases intersect the axis of rotational symmetry. Thus:
-
-*Retrograde star antiprisms with regular convex polygon bases cannot have all equal edge lengths, so cannot be uniform. "Exception": a retrograde star antiprism with equilateral triangle bases (vertex configuration: 3.3/2.3.3) can be uniform; but then, it has the appearance of an equilateral triangle: it is a degenerate star polyhedron.
-
-*Similarly, some retrograde star antiprisms with regular star polygon bases cannot have all equal edge lengths, so cannot be uniform. Example: a retrograde star antiprism with regular star 7/5-gon bases (vertex configuration: 3.3.3.7/5) cannot be uniform.
-
-Also, star antiprism compounds with regular star p/q-gon bases can be constructed if p and q have common factors. Example: a star 10/4-antiprism is the compound of two star 5/2-antiprisms.
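As a concrete check of the volume and surface-area formulas above, here is a short editorial Python sketch (the function names are our own, not from the article). For n = 3 the uniform antiprism is the regular octahedron, whose unit-edge volume $\sqrt{2}/3$ and surface area $2\sqrt{3}$ the formulas should reproduce:

```python
from math import pi, sin, cos, tan, sqrt

def uniform_antiprism_volume(n, a=1.0):
    """Volume of a uniform n-antiprism with edge length a (formula above)."""
    return (n * sqrt(4 * cos(pi / (2 * n)) ** 2 - 1) * sin(3 * pi / (2 * n))
            / (12 * sin(pi / n) ** 2)) * a ** 3

def uniform_antiprism_area(n, a=1.0):
    """Surface area of a uniform n-antiprism with edge length a (formula above)."""
    return (n / 2) * (1 / tan(pi / n) + sqrt(3)) * a ** 2

# Sanity check against the regular octahedron (n = 3, unit edges):
print(uniform_antiprism_volume(3), sqrt(2) / 3)   # both ~0.4714
print(uniform_antiprism_area(3), 2 * sqrt(3))     # both ~3.4641
```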
diff --git a/wiki/wikipedia/63.txt b/wiki/wikipedia/63.txt
deleted file mode 100644
index 5dedb73c0d499cadae3e0746c7991f8689f1e876..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/63.txt
+++ /dev/null
@@ -1,39 +0,0 @@
-In mathematics, in the study of dynamical systems, the Hartman–Grobman theorem or linearisation theorem is a theorem about the local behaviour of dynamical systems in the neighbourhood of a hyperbolic equilibrium point. It asserts that linearisation—a natural simplification of the system—is effective in predicting qualitative patterns of behaviour. The theorem owes its name to Philip Hartman and David M. Grobman.
-
-The theorem states that the behaviour of a dynamical system in a domain near a hyperbolic equilibrium point is qualitatively the same as the behaviour of its linearisation near this equilibrium point, where hyperbolicity means that no eigenvalue of the linearisation has real part equal to zero. Therefore, when dealing with such dynamical systems one can use the simpler linearisation of the system to analyse its behaviour around equilibria.
-
-Consider a system evolving in time with state $u(t)\in\mathbb R^n$ that satisfies the differential equation $du/dt=f(u)$ for some smooth map $f: \mathbb{R}^n \to \mathbb{R}^n$. Suppose the map has a hyperbolic equilibrium state $u^*\in\mathbb R^n$: that is, $f(u^*)=0$ and the Jacobian matrix $A=[\partial f_i/\partial x_j]$ of $f$ at state $u^*$ has no eigenvalue with real part equal to zero. Then there exists a neighbourhood $N$ of the equilibrium $u^*$ and a homeomorphism $h : N \to \mathbb{R}^n$, such that $h(u^*)=0$ and such that in the neighbourhood $N$ the flow of $du/dt=f(u)$ is topologically conjugate by the continuous map $U=h(u)$ to the flow of its linearisation $dU/dt=AU$.
-
-Even for infinitely differentiable maps $f$, the homeomorphism $h$ need not be smooth, nor even locally Lipschitz. However, it turns out to be Hölder continuous, with an exponent depending on the constant of hyperbolicity of $A$.
-
-The Hartman–Grobman theorem has been extended to infinite-dimensional Banach spaces, non-autonomous systems $du/dt=f(u,t)$ (potentially stochastic), and to cater for the topological differences that occur when there are eigenvalues with zero or near-zero real part.
-
-The algebra necessary for this example is easily carried out by a web service that computes normal form coordinate transforms of systems of differential equations, autonomous or non-autonomous, deterministic or stochastic.
-
-Consider the 2D system in variables $u=(y,z)$ evolving according to the pair of coupled differential equations
-$$
- \frac{dy}{dt} = -3y+yz\quad\text{and}\quad \frac{dz}{dt} = z+y^2.
-$$
-
-By direct computation it can be seen that the only equilibrium of this system lies at the origin, that is $u^*=0$. The coordinate transform, $u=h^{-1}(U)$ where $U=(Y,Z)$, given by
-$$
-\begin{align}
-y & \approx Y+YZ+\dfrac1{42}Y^3+\dfrac1 2Y Z^2 \\[5pt]
-z & \approx Z-\dfrac1 7Y^2-\dfrac1 3Y^2 Z
-\end{align}
-$$
-
-is a smooth map between the original $u=(y,z)$ and new $U=(Y,Z)$ coordinates, at least near the equilibrium at the origin. In the new coordinates the dynamical system transforms to its linearisation
-$$
- \frac{dY}{dt}=-3Y\quad\text{and}\quad \frac{dZ}{dt} = Z.
-$$
-
-That is, a distorted version of the linearisation gives the original dynamics in some finite neighbourhood.
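The conjugacy in this example can be probed numerically: integrate the nonlinear system from an initial state matched through the coordinate transform, evolve the linearisation exactly, and compare. The Python sketch below is an editorial illustration only; the step size, horizon and initial condition are arbitrary small choices, and `h_inverse` implements the truncated series above, so agreement is approximate and holds only near the origin.

```python
import numpy as np

def f(u):
    y, z = u
    return np.array([-3 * y + y * z, z + y * y])   # nonlinear vector field

def h_inverse(U):
    """Truncated series for u = h^{-1}(U) from the example above."""
    Y, Z = U
    y = Y + Y * Z + Y**3 / 42 + Y * Z**2 / 2
    z = Z - Y**2 / 7 - Y**2 * Z / 3
    return np.array([y, z])

def rk4(field, u, dt):
    k1 = field(u)
    k2 = field(u + dt / 2 * k1)
    k3 = field(u + dt / 2 * k2)
    k4 = field(u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 0.001, 200
U = np.array([0.1, 0.05])            # start in linearised coordinates
u = h_inverse(U)                     # matched start in original coordinates
for _ in range(steps):
    u = rk4(f, u, dt)                # nonlinear flow, numerically
    U = np.array([U[0] * np.exp(-3 * dt), U[1] * np.exp(dt)])  # exact linear flow
print(u, h_inverse(U))               # nearly equal close to the origin
```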
diff --git a/wiki/wikipedia/630.txt b/wiki/wikipedia/630.txt
deleted file mode 100644
index 9342e46c85f048813ea3831e54cb28aa39df30bc..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/630.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-In queueing theory, a discipline within the mathematical theory of probability, the arrival theorem (also referred to as the random observer property, ROP or job observer property) states that "upon arrival at a station, a job observes the system as if in steady state at an arbitrary instant for the system without that job."
-
-The arrival theorem always holds in open product-form networks with unbounded queues at each node, but it also holds in more general networks. A necessary and sufficient condition for the arrival theorem to be satisfied in product-form networks is given in terms of Palm probabilities in Boucherie & Dijk, 1997. The theorem does not hold in general for networks with a delay protocol. For closed networks, a version of the result is due to Reiser and Lavenberg, where it was used to develop mean value analysis.
diff --git a/wiki/wikipedia/631.txt b/wiki/wikipedia/631.txt
deleted file mode 100644
index cee3a28ea4a846267e3beb7baa044c69db381d37..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/631.txt
+++ /dev/null
@@ -1,25 +0,0 @@
-In differential geometry, the Carathéodory conjecture is a mathematical conjecture attributed to Constantin Carathéodory by Hans Ludwig Hamburger in a session of the Berlin Mathematical Society in 1924. Carathéodory did publish a paper on a related subject, but never committed the Conjecture into writing. John Edensor Littlewood mentions the Conjecture and Hamburger's contribution as an example of a mathematical claim that is easy to state but difficult to prove. Dirk Struik describes the formal analogy of the Conjecture with the Four Vertex Theorem for plane curves. Modern references to the Conjecture include the problem list of Shing-Tung Yau as well as the books of Marcel Berger.
-
-The Conjecture claims that any convex, closed and sufficiently smooth surface in three-dimensional Euclidean space needs to admit at least two umbilic points. In the sense of the Conjecture, the spheroid with only two umbilic points and the sphere, all points of which are umbilic, are examples of surfaces with the minimal and maximal numbers of umbilic points. For the conjecture to be well posed, or the umbilic points to be well-defined, the surface needs to be at least twice differentiable.
-
-The invited address of Stefan Cohn-Vossen to the International Congress of Mathematicians of 1928 in Bologna was on the subject, and in the 1929 edition of Wilhelm Blaschke's third volume on Differential Geometry he states:
    - -While this book goes into print, Mr. Cohn-Vossen has succeeded in proving that closed real-analytic surfaces do not have umbilic points of index > 2 (invited talk at the ICM in Bologna 1928). This proves the conjecture of Carathéodory for such surfaces, namely that they need to have at least two umbilics. - -
-
-Here Blaschke's index is twice the usual definition for an index of an umbilic point, and the global conjecture follows by the Poincaré–Hopf index theorem. No paper was submitted by Cohn-Vossen to the proceedings of the International Congress, while in later editions of Blaschke's book the above comments were removed. It is, therefore, reasonable to assume that this work was inconclusive.
-
-For analytic surfaces, an affirmative answer to this conjecture was given in 1940 by Hans Hamburger in a long paper published in three parts. In 1943, a shorter proof was proposed by Gerrit Bol, but, in 1959, Tilla Klotz found and corrected a gap in Bol's proof. Guilfoyle and Klingenberg announced a proof of the global conjecture for surfaces of smoothness $ C^{3,\alpha }$. Their method uses neutral Kähler geometry of the Klein quadric to define an associated Riemann-Hilbert boundary value problem, and then applies mean curvature flow and the Sard–Smale Theorem on regular values of Fredholm operators to prove a contradiction for a surface with a single umbilic point.
-
-In particular, the boundary value problem seeks to find a holomorphic curve with boundary lying on the Lagrangian surface in the Klein quadric determined by the normal lines to the surface in Euclidean 3-space. Previously it was proven that the number of isolated umbilic points contained on the surface in $R^3$ determines the Keller-Maslov class of the boundary curve and therefore, when the problem is Fredholm regular, determines the dimension of the space of holomorphic disks. All of the geometric quantities referred to are defined with respect to the canonical neutral Kähler structure, for which surfaces can be both holomorphic and Lagrangian.
-
-In addressing the global conjecture, the question is “what would be so special about a smooth closed convex surface in $R^3$ with a single umbilic point?” This is answered by Guilfoyle and Klingenberg: the associated Riemann-Hilbert boundary value problem would be Fredholm regular. The existence of an isometry group of sufficient size to fix a point has been proven to be enough to ensure this, thus identifying the size of the Euclidean isometry group of $R^3$ as the underlying reason why the Carathéodory conjecture is true. This is reinforced by a more recent result in which ambient smooth metrics (without symmetries) that are different but arbitrarily close to the Euclidean metric on $R^3$ are constructed that admit smooth convex surfaces violating both the local and the global conjectures.
-
-By Fredholm regularity, for a generic convex surface close to a putative counter-example of the global Carathéodory Conjecture, the associated Riemann-Hilbert problem would have no solutions. The second step of the proof is to show that such solutions always exist, thus concluding the non-existence of a counter-example. This is done using co-dimension 2 mean curvature flow with boundary. While the complete second step of the proof has not been published as of November 2020, the required interior estimates for higher codimensional mean curvature flow in an indefinite geometry have appeared in print. The final part is the establishment of sufficient boundary control under mean curvature flow to ensure weak convergence.
-
-In 2012 the proof was announced of a weaker version of the local index conjecture for smooth surfaces, namely that an isolated umbilic must have index less than or equal to 3/2.
The proof follows that of the global conjecture, but also uses more topological methods, in particular, replacing hyperbolic umbilic points by totally real cross-caps in the boundary of the associated Riemann-Hilbert problem. It leaves open the possibility of a smooth (non-real analytic by Hamburger ) convex surface with an isolated umbilic of index 3/2. The proof by similar methods of a conjecture of Toponogov regarding umbilic points on complete planes was announced in 2020. - -In 2012, Mohammad Ghomi and Ralph Howard showed, using a Möbius transformation, that the global conjecture for surfaces of smoothness $C^2$ can be reformulated in terms of the number of umbilic points on graphs subject to certain asymptotics of the gradient. diff --git a/wiki/wikipedia/632.txt b/wiki/wikipedia/632.txt deleted file mode 100644 index 31cf2f5d7224f443dc7ac3c4fb12e1db35b2b11b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/632.txt +++ /dev/null @@ -1,13 +0,0 @@ -In geometric measure theory, Falconer's conjecture, named after Kenneth Falconer, is an unsolved problem concerning the sets of Euclidean distances between points in compact $d$-dimensional spaces. Intuitively, it states that a set of points that is large in its Hausdorff dimension must determine a set of distances that is large in measure. More precisely, if $S$ is a compact set of points in $d$-dimensional Euclidean space whose Hausdorff dimension is strictly greater than $d/2$, then the conjecture states that the set of distances between pairs of points in $S$ must have nonzero Lebesgue measure. - -Falconer proved that Borel sets with Hausdorff dimension greater than $(d+1)/2$ have distance sets with nonzero measure. He motivated this result as a multidimensional generalization of the Steinhaus theorem, a previous result of Hugo Steinhaus proving that every set of real numbers with nonzero measure must have a difference set that contains an interval of the form $ (-\varepsilon,\varepsilon)$ for some $\varepsilon>0$. It may also be seen as a continuous analogue of the Erdős distinct distances problem, which states that large finite sets of points must have large numbers of distinct distances. - -Erdoğan proved that compact sets of points whose Hausdorff dimension is greater than $\tfrac{d}{2} + \tfrac{1}{3}$ have distance sets with nonzero measure; for large values of $d$ this approximates the threshold on Hausdorff dimension given by the Falconer conjecture. For points in the Euclidean plane, Borel sets of Hausdorff dimension greater than 5/4 have distance sets with nonzero measure and, more strongly, they have a point such that the Lebesgue measure of the distances from the set to this point is positive. - -A variant of Falconer's conjecture states that, for points in the plane, a compact set whose Hausdorff dimension is greater than or equal to one must have a distance set of Hausdorff dimension one. This follows from the results on measure for sets of Hausdorff dimension greater than 5/4. For a compact planar set with Hausdorff dimension at least one, the distance set must have Hausdorff dimension at least 1/2. - -Proving a bound strictly greater than 1/2 for the dimension of the distance set in the case of compact planar sets with Hausdorff dimension at least one would be equivalent to resolving several other unsolved conjectures. 
These include a conjecture of Paul Erdős on the existence of Borel subrings of the real numbers with fractional Hausdorff dimension, and a variant of the Kakeya set problem on the Hausdorff dimension of sets such that, for every possible direction, there is a line segment whose intersection with the set has high Hausdorff dimension.
-
-These conjectures were solved by Bourgain.
-
-For non-Euclidean distance functions in the plane defined by polygonal norms, the analogue of the Falconer conjecture is false: there exist sets of Hausdorff dimension two whose distance sets have measure zero.
diff --git a/wiki/wikipedia/633.txt b/wiki/wikipedia/633.txt
deleted file mode 100644
index fe386e94fe7a98f6496da897aecdbedb91c378d4..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/633.txt
+++ /dev/null
@@ -1,22 +0,0 @@
-In mathematics, Eisenstein's theorem, named after the German mathematician Gotthold Eisenstein, applies to the coefficients of any power series which is an algebraic function with rational number coefficients. Through the theorem, it is readily demonstrable, for example, that the exponential function must be a transcendental function.
-
-Suppose that
-$$
-\sum_{n} a_n t^n
-$$
-
-is a formal power series with rational coefficients $a_n$, which has a non-zero radius of convergence in the complex plane, and within it represents an analytic function that is in fact an algebraic function. Then Eisenstein's theorem states that there exists a non-zero integer A, such that $A^n a_n$ are all integers.
-
-This has an interpretation in terms of p-adic numbers: with an appropriate extension of the idea, the p-adic radius of convergence of the series is at least 1, for almost all p (i.e., the primes outside a finite set S). In fact that statement is a little weaker, in that it disregards any initial partial sum of the series, in a way that may vary according to p. For the other primes the radius is non-zero.
-
-Eisenstein's original paper is the short communication Über eine allgemeine Eigenschaft der Reihen-Entwicklungen aller algebraischen Functionen (1852), reproduced in Mathematische Gesammelte Werke, Band II, Chelsea Publishing Co., New York, 1975, pp. 765–767.
-
-More recently, many authors have investigated precise and effective bounds quantifying the above "almost all". See, e.g., Sections 11.4 and 11.55 of the book by E. Bombieri & W. Gubler.
diff --git a/wiki/wikipedia/634.txt b/wiki/wikipedia/634.txt
deleted file mode 100644
index 9abdff364ec413657310516c3e12a3d7bac0f0e1..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/634.txt
+++ /dev/null
@@ -1,57 +0,0 @@
-Coffee Lake is Intel's codename for its eighth generation Core microprocessor family, announced on September 25, 2017. It is manufactured using Intel's second 14 nm process node refinement.
-
-On April 2, 2018, Intel released additional desktop Core i3, i5, i7, Pentium Gold and Celeron CPUs, the first six-core Core i7 and i9 mobile CPUs, hyper-threaded four-core Core i5 mobile CPUs, and the first Coffee Lake ultra-low-power CPUs with Intel Iris Plus graphics.
-
-On June 8, 2018, to commemorate the 40th anniversary of the Intel 8086 CPU architecture, Intel released the i7-8086K as a limited edition CPU, a renumbered and slightly higher clocked batch of the i7-8700K dies.
-
-== History ==
-
-Its development was led by Intel Israel's processor design team in Haifa, Israel, as an optimization of Kaby Lake. Intel first launched its 8th Generation Intel Core family processors in August 2017.
With the release of the new 8th Gen Intel Core i9 processor in 2018, Intel said it would be the highest-performance laptop processor it had ever built. It features increased transistor gate pitch for a lower current density, and higher leakage transistors that allow higher peak power and higher frequency, at the expense of die area and idle power.
-
-Coffee Lake marks a shift in the number of cores for Intel's mainstream desktop processors, the first such increase in the ten-year history of Intel Core CPUs. In the 8th generation, mainstream desktop i7 CPUs feature six cores and 12 threads, i5 CPUs feature six single-threaded cores, and i3 CPUs feature four single-threaded cores.
-
-For the 9th generation, the Intel Core i9 branding made its debut on the mainstream desktop, describing CPUs with 8 cores and 16 threads. 9th generation i7s feature 8 single-threaded cores, marking the first time desktop Core i7s have not featured Intel's Hyper-threading technology, although the 9th generation Core i7 mobile CPUs do support hyper-threading and have 6 cores just like 8th gen mobile chips. 9th generation i5 CPUs feature six single-threaded cores, just like their 8th generation predecessors.
-
-The ninth generation Core i series includes hardware fixes for Meltdown and L1 Terminal Fault.
-
-The 300 series chipsets, while using the physically identical LGA 1151 socket to the 100 and 200 series chipsets, are officially only compatible with Coffee Lake CPUs, meaning that older motherboards do not officially support Coffee Lake processors, and 300 series motherboards do not officially support Skylake or Kaby Lake processors.
-
-The enthusiast Z370 chipset (a rebranded Z270), launched alongside the first Coffee Lake CPUs in October 2017, was the only officially supported chipset for these mainstream CPUs. When the full lineup of CPUs was revealed in April 2018, it was then accompanied by the lower-end H310, B360, H370 and Q370 chipsets for home and business users. The Z390 chipset was launched alongside the release of the 9th generation CPUs, supporting all 8th and 9th generation mainstream desktop parts. A B365 chipset was added later on.
-
-9th generation Xeons require motherboards with the C246 chipset.
-
-Coffee Lake features largely the same CPU core and performance per MHz as Skylake/Kaby Lake. Features specific to Coffee Lake include:
-
-* Increased core count: six cores on Core i5 and 8th generation i7 parts, while Core i3 is now a quad-core brand; 9th generation i7 and i9 parts feature eight cores.
-
-* Increased L3 cache in accordance with the number of threads
-
-* Increased turbo clock speeds across i5 and i7 CPU models (increased by up to 400 MHz)
-
-* Increased iGPU clock speeds by 50 MHz, with the iGPU rebranded as UHD (Ultra High Definition)
-
-* DDR4 memory support updated to 2666 MHz (for i5, i7 and i9 parts) and 2400 MHz (for i3 parts); DDR3 memory is no longer supported on LGA1151 parts, unless used with the H310C chipset
-
-* 300 series chipsets on the second revision of socket LGA 1151
-
-* Support for CNVi
-
-On August 8, 2017, Intel announced that the first of its new eighth generation of processors would be mobile processors. As Intel's previous changes in product generations coincided with new microarchitectures, it was unclear but generally expected that the eighth Core generation products would be based on the new Coffee Lake microarchitecture.
When it was officially announced on August 21, 2017, however, Intel stated that the eighth generation family would be based on multiple microarchitectures: Kaby Lake Refresh, Coffee Lake and Cannon Lake.
-
-These processors mark the first time that Intel has released mainstream consumer CPUs that support up to 128 GB of RAM.
-
-Various reviews show that the Core i7-8700K CPU may consume over 110 W under load.
-
-The first 9th generation Coffee Lake CPUs were released in the fourth quarter of 2018. They include hardware mitigations against certain Meltdown/Spectre vulnerabilities.
-
-The main differences from the 8th generation (besides increased frequency) are:
-
-* Core i7 parts contain 8/8 cores/threads, compared to 6/12 in 8th generation Core i7 parts.
-
-* Core i3 parts are equipped with Turbo Boost technology.
-
-Even though the F-suffix CPUs lack an integrated GPU, Intel set the same price for these CPUs as for their full-featured counterparts. Intel eventually reduced the official pricing of those CPUs in October 2019.
-
-The Intel Core i9-9900KS CPU, released at the end of October 2019, features a limited one-year warranty for both box and tray versions due to "its limited volume".
-
-Coffee Lake-W CPUs require a C242 or C246 chipset.
diff --git a/wiki/wikipedia/635.txt b/wiki/wikipedia/635.txt
deleted file mode 100644
index 935676dba7fb2abd383c328ca7d255c77b493420..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/635.txt
+++ /dev/null
@@ -1,6 +0,0 @@
-In mathematics, the Nagata–Biran conjecture, named after Masayoshi Nagata and Paul Biran, is a generalisation of Nagata's conjecture on curves to arbitrary polarised surfaces.
-
-Let X be a smooth algebraic surface and L be an ample line bundle on X of degree d. The Nagata–Biran conjecture states that for sufficiently large r the Seshadri constant satisfies
-$$
- \varepsilon(p_1,\ldots,p_r;X,L) = {d \over \sqrt{r}}.
-$$
diff --git a/wiki/wikipedia/636.txt b/wiki/wikipedia/636.txt
deleted file mode 100644
index f5c34fe585509dfca20d466ae78a0363484cce2e..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/636.txt
+++ /dev/null
@@ -1,11 +0,0 @@
-The traveling tournament problem (TTP) is a mathematical optimization problem. The question involves scheduling the games of a set of teams such that:
-
-#Each team plays every other team twice, once at home and once in the other's stadium.
-
-#No team plays the same opponent in two consecutive weeks.
-
-#No team plays more than three games in a row at home, or three games in a row on the road.
-
-A matrix is provided of the travel distances between each team's home city. All teams start and end at their own home city, and the goal is to minimize the total travel distance for every team over the course of the whole season.
-
-There have been many papers published on the subject, and a contest exists to find the best solutions for certain specific schedules.
diff --git a/wiki/wikipedia/637.txt b/wiki/wikipedia/637.txt
deleted file mode 100644
index e59041fb4dd76a2d61c8bd90b29d04aeead76bd3..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/637.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-Jan Kratochvíl (born 10 February 1959) is a Czech mathematician and computer scientist whose research concerns graph theory and intersection graphs.
-
-Kratochvíl was born on 10 February 1959 in Prague. He studied at Charles University in Prague, earning a master's degree in 1983 and a Ph.D. in 1987; his dissertation, supervised by Jaroslav Nešetřil, combined graph theory with coding theory.
He remained at Charles University as a faculty member, earned his habilitation in 1995, and was promoted to full professor in 2003. From 2003 to 2011 he chaired the department of applied mathematics at Charles University, and from 2012 to 2020 he was the dean of the Faculty of Mathematics and Physics there.
-
-Kratochvíl was the program chair and organizer of the 7th International Symposium on Graph Drawing, in 1999. From 2002 to 2010 he was president of the Czech Mathematical Society. Since March 2021, Kratochvíl has been editor-in-chief of Elsevier's Computer Science Review, together with Giuseppe Liotta and Jaroslav Nešetřil.
diff --git a/wiki/wikipedia/638.txt b/wiki/wikipedia/638.txt
deleted file mode 100644
index 7ede8037ab1e45e4526d9b841b535281b8af1e70..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/638.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-In algebra, the Mori–Nagata theorem, introduced by Mori and by Nagata, states the following: let A be a noetherian reduced commutative ring with the total ring of fractions K. Then the integral closure of A in K is a direct product of r Krull domains, where r is the number of minimal prime ideals of A.
-
-The theorem is a partial generalization of the Krull–Akizuki theorem, which concerns a one-dimensional noetherian domain. A consequence of the theorem is that if R is a Nagata ring, then every R-subalgebra of finite type is again a Nagata ring.
-
-The Mori–Nagata theorem follows from Matijevic's theorem.
diff --git a/wiki/wikipedia/639.txt b/wiki/wikipedia/639.txt
deleted file mode 100644
index cc58e7f2cac7059141d1337d914150cee279f589..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/639.txt
+++ /dev/null
@@ -1,117 +0,0 @@
-Clock angle problems are a type of mathematical problem which involve finding the angle between the hands of an analog clock.
-
-Clock angle problems relate two different measurements: angles and time. The angle is typically measured in degrees, clockwise from the 12 o'clock mark. The time is usually based on a 12-hour clock.
-
-A method to solve such problems is to consider the rate of change of the angle in degrees per minute. The hour hand of a normal 12-hour analogue clock turns 360° in 12 hours (720 minutes) or 0.5° per minute. The minute hand rotates through 360° in 60 minutes or 6° per minute.
-$$
-\theta_{\text{hr}} = 0.5^{\circ} \times M_{\Sigma} = 0.5^{\circ} \times (60 \times H + M)
-$$
-
-where:
-
-* θ is the angle in degrees of the hand measured clockwise from the 12 o'clock position.
-
-* H is the hour.
-
-* M is the minutes past the hour.
-
-* $M_{\Sigma}$ is the number of minutes since 12 o'clock. $ M_{\Sigma} = (60 \times H + M)$
-$$
-\theta_{\text{min.}} = 6^{\circ} \times M
-$$
-
-where:
-
-* θ is the angle in degrees of the hand measured clockwise from the 12 o'clock position.
-
-* M is the minute.
-
-The time is 5:24.
The angle in degrees of the hour hand is: -$$ -\theta_{\text{hr}} = 0.5^{\circ} \times (60 \times 5 + 24) = 162^{\circ} -$$ - -The angle in degrees of the minute hand is: -$$ -\theta_{\text{min.}} = 6^{\circ} \times 24 = 144^{\circ} -$$ - -The angle between the hands can be found using the following formula: - -\begin{align} - -\Delta\theta - -&= \vert \theta_{\text{hr}} - \theta_{\text{min.}} \vert \\ - -&= \vert 0.5^{\circ}\times(60\times H+M) -6^{\circ}\times M \vert \\ - -&= \vert 0.5^{\circ}\times(60\times H+M) -0.5^{\circ}\times 12 \times M \vert \\ - -&= \vert 0.5^{\circ}\times(60\times H -11 \times M) \vert \\ - -\end{align} - -where - -* H is the hour - -* M is the minute - -If the angle is greater than 180 degrees then subtract it from 360 degrees. - -The time is 2:20. - -\begin{align} - -\Delta\theta - -&= \vert 0.5^{\circ} \times (60 \times 2 - 11 \times 20) \vert \\ - -&= \vert 0.5^{\circ} \times (120 - 220) \vert \\ - -&= 50^{\circ} - -\end{align} - -The time is 10:16. - -\begin{align} - -\Delta\theta - -&= \vert 0.5^{\circ} \times (60 \times 10 - 11 \times 16) \vert \\ - -&= \vert 0.5^{\circ} \times (600 - 176) \vert \\ - -&= 212^{\circ} \ \ ( > 180^{\circ})\\ - -&= 360^{\circ} - 212^{\circ} \\ - -&= 148^{\circ} - -\end{align} - -The hour and minute hands are superimposed only when their angle is the same. - -\begin{align} - -\theta_{\text{min}} &= \theta_{\text{hr}}\\ - -\Rightarrow 6^{\circ} \times M &= 0.5^{\circ} \times (60 \times H + M) \\ - -\Rightarrow 12 \times M &= 60 \times H + M \\ - -\Rightarrow 11 \times M &= 60 \times H\\ - -\Rightarrow M &= \frac{60}{11} \times H\\ - -\Rightarrow M &= 5.\overline{45} \times H - -\end{align} - -H is an integer in the range 0–11. This gives times of: 0:00, 1:05.45, 2:10.90, 3:16.36, 4:21.81, 5:27.27. 6:32.72, 7:38.18, 8:43.63, 9:49.09, - -10:54.54, and 12:00. - -(0.45 minutes are exactly 27.27 seconds.) diff --git a/wiki/wikipedia/64.txt b/wiki/wikipedia/64.txt deleted file mode 100644 index 30d801cbbe177476946dce73fe658f3c772f37f0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/64.txt +++ /dev/null @@ -1 +0,0 @@ -In algebra, Posner's theorem states that given a prime polynomial identity algebra A with center Z, the ring $A \otimes_Z Z_{(0)}$ is a central simple algebra over $Z_{(0)}$, the field of fractions of Z. It is named after Ed Posner. diff --git a/wiki/wikipedia/640.txt b/wiki/wikipedia/640.txt deleted file mode 100644 index 7ae47279797d43bf88f57da4db4b849ebab8d611..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/640.txt +++ /dev/null @@ -1,34 +0,0 @@ -The splitting theorem is a classical theorem in Riemannian geometry. - -It states that if a complete Riemannian manifold M with Ricci curvature -$$ -{\rm Ric} (M) \ge 0 -$$ - -has a straight line, i.e., a geodesic γ such that -$$ -d(\gamma(u),\gamma(v))=|u-v| -$$ - -for all -$$ -u, v\in\mathbb{R}, -$$ - -then it is isometric to a product space -$$ -\mathbb{R}\times L, -$$ - -where $L$ is a Riemannian manifold with -$$ -{\rm Ric} (L) \ge 0. -$$ - -For surfaces, the theorem was proved by Stefan Cohn-Vossen. - -Victor Andreevich Toponogov generalized it to manifolds with non-negative sectional curvature. - -Jeff Cheeger and Detlef Gromoll proved that non-negative Ricci curvature is sufficient. - -Later the splitting theorem was extended to Lorentzian manifolds with nonnegative Ricci curvature in the time-like directions. 
diff --git a/wiki/wikipedia/641.txt b/wiki/wikipedia/641.txt
deleted file mode 100644
index 7ae47279797d43bf88f57da4db4b849ebab8d611..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/641.txt
+++ /dev/null
@@ -1,19 +0,0 @@
-In mathematics, Vojta's conjecture is a conjecture introduced by Paul Vojta about heights of points on algebraic varieties over number fields. The conjecture was motivated by an analogy between diophantine approximation and Nevanlinna theory (value distribution theory) in complex analysis. It implies many other conjectures in Diophantine approximation, Diophantine equations, arithmetic geometry, and mathematical logic.
-
-Let $F$ be a number field, let $X/F$ be a non-singular algebraic variety, let $D$ be an effective divisor on $X$ with at worst normal crossings, let $H$ be an ample divisor on $X$, and let $K_X$ be a canonical divisor on $X$. Choose Weil height functions $h_H$ and $h_{K_X}$ and, for each absolute value $v$ on $F$, a local height function $\lambda_{D,v}$. Fix a finite set of absolute values $S$ of $F$, and let $\epsilon>0$. Then there is a constant $C$ and a non-empty Zariski open set $U\subseteq X$, depending on all of the above choices, such that
-$$
-\sum_{v\in S} \lambda_{D,v}(P) + h_{K_X}(P) \le \epsilon h_H(P) + C \quad\hbox{for all } P\in U(F).
-$$
-
-Examples:
-
-# Let $X=\mathbb{P}^N$. Then $K_X\sim -(N+1)H$, so Vojta's conjecture reads $ \sum_{v\in S} \lambda_{D,v}(P) \le (N+1+\epsilon) h_H(P) + C$ for all $P\in U(F)$.
-
-# Let $X$ be a variety with trivial canonical bundle, for example, an abelian variety, a K3 surface or a Calabi-Yau variety. Vojta's conjecture predicts that if $D$ is an effective ample normal crossings divisor, then the $S$-integral points on the affine variety $X\setminus D$ are not Zariski dense. For abelian varieties, this was conjectured by Lang and proven by Faltings.
-
-# Let $X$ be a variety of general type, i.e., $K_X$ is ample on some non-empty Zariski open subset of $X$. Then taking $S=\emptyset$, Vojta's conjecture predicts that $X(F)$ is not Zariski dense in $X$. This last statement for varieties of general type is the Bombieri–Lang conjecture.
-
-There are generalizations in which $P$ is allowed to vary over $X(\overline{F})$, and there is an additional term in the upper bound that depends on the discriminant of the field extension $F(P)/F$.
-
-There are generalizations in which the non-archimedean local heights $\lambda_{D,v}$ are replaced by truncated local heights, which are local heights in which multiplicities are ignored. These versions of Vojta's conjecture provide natural higher-dimensional analogues of the ABC conjecture.
diff --git a/wiki/wikipedia/642.txt b/wiki/wikipedia/642.txt
deleted file mode 100644
index 96faf854afab9a4aaa5fd9328574024f58bdb81f..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/642.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-In mathematics, the Nakai conjecture is an unproven characterization of smooth algebraic varieties, conjectured by the Japanese mathematician Yoshikazu Nakai in 1961.
-
-It states that if V is a complex algebraic variety, such that its ring of differential operators is generated by the derivations it contains, then V is a smooth variety. The converse statement, that smooth algebraic varieties have rings of differential operators that are generated by their derivations, is a result of Alexander Grothendieck.
-
-The Nakai conjecture is known to be true for algebraic curves and Stanley–Reisner rings.
A proof of the conjecture would also establish the Zariski–Lipman conjecture for a complex variety V with coordinate ring R. This conjecture states that if the derivations of R are a free module over R, then V is smooth.
diff --git a/wiki/wikipedia/643.txt b/wiki/wikipedia/643.txt
deleted file mode 100644
index 5fcc9d751b692939aada56aa94a0ad1f63dd8375..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/643.txt
+++ /dev/null
@@ -1,12 +0,0 @@
-In mathematics, the minimum k-cut is a combinatorial optimization problem that requires finding a set of edges whose removal would partition the graph into at least k connected components. These edges are referred to as a k-cut. The goal is to find the minimum-weight k-cut. This partitioning can have applications in VLSI design, data mining, finite elements and communication in parallel computing.
-
-Given an undirected graph G = (V, E) with an assignment of weights to the edges w: E → N and an integer k ∈ {2, 3, …, |V|}, partition V into k disjoint sets $F = \{C_1, C_2, \ldots, C_k\}$ while minimizing
-$$
-\sum_{i=1}^{k-1}\sum_{j=i+1}^k\sum_{\begin{smallmatrix} v_1 \in C_i \\ v_2 \in C_j \end{smallmatrix}} w ( \left \{ v_1, v_2 \right \} )
-$$
-
-For a fixed k, the problem is polynomial time solvable in $O(|V|^{k^2})$. However, the problem is NP-complete if k is part of the input. It is also NP-complete if we specify $k$ vertices and ask for the minimum $k$-cut which separates these vertices among each of the sets.
-
-Several approximation algorithms exist with an approximation of 2 − 2/k. A simple greedy algorithm that achieves this approximation factor computes a minimum cut in each of the connected components and removes the lightest one. This algorithm requires a total of n − 1 max flow computations. Another algorithm achieving the same guarantee uses the Gomory–Hu tree representation of minimum cuts. Constructing the Gomory–Hu tree requires n − 1 max flow computations, but the algorithm requires an overall O(kn) max flow computations. Yet, it is easier to analyze the approximation factor of the second algorithm. Moreover, under the Small Set Expansion Hypothesis (a conjecture closely related to the Unique Games Conjecture), the problem is NP-hard to approximate to within a $(2 - \epsilon)$ factor for every constant $\epsilon > 0$, meaning that the aforementioned approximation algorithms are essentially tight for large $k$.
-
-A variant of the problem asks for a minimum weight k-cut where the output partitions have pre-specified sizes. This problem variant is approximable to within a factor of 3 for any fixed k if one restricts the graph to a metric space, meaning a complete graph that satisfies the triangle inequality. More recently, polynomial time approximation schemes (PTAS) were discovered for those problems.
diff --git a/wiki/wikipedia/644.txt b/wiki/wikipedia/644.txt
deleted file mode 100644
index f9200b7007d73a08cdf7883e7f75d8cfae3a26e0..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/644.txt
+++ /dev/null
@@ -1,48 +0,0 @@
-In number theory, more specifically in p-adic analysis, Krasner's lemma is a basic result relating the topology of a complete non-archimedean field to its algebraic extensions.
-
-Let K be a complete non-archimedean field and let $\overline{K}$ be a separable closure of K. Given an element α in $\overline{K}$, denote its Galois conjugates by $\alpha_2, \ldots, \alpha_n$. Krasner's lemma states:
-
-if an element β of $\overline{K}$ is such that
-$$
-\left|\alpha-\beta\right|<\left|\alpha-\alpha_i\right|\text{ for }i=2,\dots,n
-$$
-
-then K(α) ⊆ K(β).
-
-*Krasner's lemma can be used to show that $\mathfrak{p}$-adic completion and separable closure of global fields commute. In other words, given $\mathfrak{p}$ a prime of a global field L, the separable closure of the $\mathfrak{p}$-adic completion of L equals the $\overline{\mathfrak{p}}$-adic completion of the separable closure of L (where $\overline{\mathfrak{p}}$ is a prime of $\overline{L}$ above $\mathfrak{p}$).
-
-*Another application is to proving that $\mathbb{C}_p$ - the completion of the algebraic closure of $\mathbb{Q}_p$ - is algebraically closed.
-
-Krasner's lemma has the following generalization. Consider a monic polynomial
-$$
-f^*=\prod_{k=1}^n(X-\alpha_k^*)
-$$
-
-of degree n > 1 with coefficients in a Henselian field (K, v) and roots in the algebraic closure $\overline{K}$. Let I and J be two disjoint, non-empty sets with union {1,...,n}. Moreover, consider a polynomial
-$$
-g=\prod_{i\in I}(X-\alpha_i)
-$$
-
-with coefficients and roots in $\overline{K}$. Assume
-$$
-\forall i\in I\forall j\in J: v(\alpha_i-\alpha_i^*)>v(\alpha_i^*-\alpha_j^*).
-$$
-
-Then the coefficients of the polynomials
-$$
-g^*:=\prod_{i\in I}(X-\alpha_i^*),\ h^*:=\prod_{j\in J}(X-\alpha_j^*)
-$$
-
-are contained in the field extension of K generated by the coefficients of g. (The original Krasner's lemma corresponds to the situation where g has degree 1.)
diff --git a/wiki/wikipedia/645.txt b/wiki/wikipedia/645.txt
deleted file mode 100644
index 439152d99f90d36f33bb830a5c07013d2d655016..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/645.txt
+++ /dev/null
@@ -1,9 +0,0 @@
-Crossing Numbers of Graphs is a book in mathematics, on the minimum number of edge crossings needed in graph drawings. It was written by Marcus Schaefer, a professor of computer science at DePaul University, and published in 2018 by the CRC Press in their book series Discrete Mathematics and its Applications.
-
-The main text of the book has two parts, on the crossing number as traditionally defined and on variations of the crossing number, followed by two appendices providing background material on topological graph theory and computational complexity theory.
-
-After introducing the problem, the first chapter studies the crossing numbers of complete graphs (including Hill's conjectured formula for these numbers) and complete bipartite graphs (Turán's brick factory problem and the Zarankiewicz crossing number conjecture, again giving a conjectured formula). It also includes the crossing number inequality, and the Hanani–Tutte theorem on the parity of crossings. The second chapter concerns other special classes of graphs including graph products (especially products of cycle graphs) and hypercube graphs. After a third chapter relating the crossing number to graph parameters including skewness, bisection width, thickness, and (via the Albertson conjecture) the chromatic number, the final chapter of part I concerns the computational complexity of finding minimum-crossing graph drawings, including the results that the problem is both NP-complete and fixed-parameter tractable.
-
-In the second part of the book, two chapters concern the rectilinear crossing number, describing graph drawings in which the edges must be represented as straight line segments rather than arbitrary curves, and Fáry's theorem that every planar graph can be drawn without crossings in this way. Another chapter concerns 1-planar graphs and the associated local crossing number, the smallest number k such that the graph can be drawn with at most k crossings per edge.
Two chapters concern book embeddings and string graphs, and two more chapters concern variations of the crossing number that count crossings in different ways, for instance by the number of pairs of edges that cross or that cross an odd number of times. The final chapter of part II concerns thrackles and the problem of finding drawings with a maximum number of crossings. - -The book can be used as an advanced textbook, and has exercises provided for that use. However, it assumes that its readers are already familiar with both graph theory and the design and analysis of algorithms. Reviewing the book, L. W. Beineke calls it a "valuable contribution" for its presentation of the many results in this area. diff --git a/wiki/wikipedia/646.txt b/wiki/wikipedia/646.txt deleted file mode 100644 index 83df090d8257013ee694060a40ad1c98669ad0ea..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/646.txt +++ /dev/null @@ -1,91 +0,0 @@ -In elementary number theory, the lifting-the-exponent (LTE) lemma provides several formulas for computing the p-adic valuation $\nu_p$ of special forms of integers. The lemma is named as such because it describes the steps necessary to "lift" the exponent of $p$ in such expressions. It is related to Hensel's lemma. - -The exact origins of the LTE lemma are unclear; the result, with its present name and form, has only come into focus within the last 10 to 20 years. However, several key ideas used in its proof were known to Gauss and referenced in his Disquisitiones Arithmeticae. Despite chiefly featuring in mathematical olympiads, it is sometimes applied to research topics, such as elliptic curves. - -For any integers $x, y$ and positive integers $n$ and $p$, where $p$ is a prime such that $p \nmid x$ and $p \nmid y$, the following identities hold: - -* When $p$ is odd: - -** If $p \mid x-y$, $\nu_p(x^n-y^n) = \nu_p(x-y)+\nu_p(n)$. - -** If $n$ is odd and $p \mid x+y$, $\nu_p(x^n+y^n) = \nu_p(x+y)+\nu_p(n)$. - -* When $p = 2$: - -** If $4 \mid x-y$, $\nu_2(x^n-y^n) = \nu_2(x-y)+\nu_2(n)$. - -** If $2 \mid x-y$ and $n$ is even, $\nu_2(x^n-y^n) = \nu_2(x-y)+\nu_2(x+y)+\nu_2(n)-1$. - -* For all $p$: - -** If $\gcd(n,p) = 1$ and $p \mid x-y$, $\nu_p(x^n-y^n) = \nu_p(x-y)$. - -** If $\gcd(n,p) = 1$, $p \mid x+y$ and $n$ odd, $\nu_p(x^n+y^n) = \nu_p(x+y)$. - -The base case $\nu_p(x^n-y^n) = \nu_p(x-y)$ when $\gcd(n,p) = 1$ is proven first. Because $p \mid x-y \iff x \equiv y \pmod{p}$, - - - -x^{n-1}+x^{n-2}y+x^{n-3}y^2+\dots+y^{n-1} \equiv nx^{n-1} \not\equiv 0 \pmod{p}\ (1) - - - -The fact that $x^n-y^n = (x-y)(x^{n-1}+x^{n-2}y+x^{n-3}y^2+\dots+y^{n-1})$ completes the proof. The condition $\nu_p(x^n+y^n) = \nu_p(x+y)$ for odd $n$ is similar. - -Via the binomial expansion, the substitution $y = x+kp$ can be used in (1) to show that $\nu_p(x^p-y^p) = \nu_p(x-y)+1$ because (1) is a multiple of $p$ but not $p^2$. Likewise, $\nu_p(x^p+y^p) = \nu_p(x+y)+1$. - -Then, if $n$ is written as $p^ab$ where $p \nmid b$, the base case gives $\nu_p(x^n-y^n) = \nu_p((x^{p^a})^b-(y^{p^a})^b) = \nu_p(x^{p^a}-y^{p^a})$. - -By induction on $a$, - - - -\begin{align} - -\nu_p(x^{p^a}-y^{p^a}) &= \nu_p(((\dots(x^p)^p\dots))^p-((\dots(y^p)^p\dots))^p)\ \text{(exponentiation used } a \text{ times per term)} \\ - -&= \nu_p(x-y)+a - -\end{align} - - - -A similar argument can be applied for $\nu_p(x^n+y^n)$. 
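As a sanity check on the odd-prime identity just proven, the following Python sketch is an editorial addition (the valuation helper `nu` is our own name); it compares both sides of $\nu_p(x^n-y^n)=\nu_p(x-y)+\nu_p(n)$ on random inputs satisfying the hypotheses:

```python
import random

def nu(p, m):
    """p-adic valuation of a nonzero integer m."""
    v = 0
    while m % p == 0:
        m //= p
        v += 1
    return v

p = 7
for _ in range(200):
    x = random.randrange(1, 10**4)
    y = x - p * random.randrange(1, 10**3)   # force p | x - y
    n = random.randrange(1, 100)
    if x % p == 0 or y % p == 0:
        continue                              # hypotheses: p must not divide x or y
    assert nu(p, x**n - y**n) == nu(p, x - y) + nu(p, n)
print("odd-prime LTE identity verified on random samples")
```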
-
-The proof for the odd $p$ case cannot be directly applied when $p = 2$ because the binomial coefficient $\binom{p}{2} = \frac{p(p-1)}{2}$ is only an integral multiple of $p$ when $p$ is odd.
-
-However, it can be shown that $\nu_2(x^n-y^n) = \nu_2(x-y)+\nu_2(n)$ when $4 \mid x-y$ by writing $n = 2^ab$ where $a$ and $b$ are integers with $b$ odd and noting that
-$$
-\begin{align}
-\nu_2(x^n-y^n) &= \nu_2((x^{2^a})^b-(y^{2^a})^b) \\
-&= \nu_2(x^{2^a}-y^{2^a}) \\
-&= \nu_2((x^{2^{a-1}}+y^{2^{a-1}})(x^{2^{a-2}}+y^{2^{a-2}})\cdots(x^2+y^2)(x+y)(x-y)) \\
-&= \nu_2(x-y)+a
-\end{align}
-$$
-
-since $x \equiv y \equiv \pm 1 \pmod{4}$, so each factor arising in the difference-of-squares step of the form $x^{2^k}+y^{2^k}$ is congruent to 2 modulo 4 and contributes exactly one factor of 2.
-
-The stronger statement $\nu_2(x^n-y^n) = \nu_2(x-y)+\nu_2(x+y)+\nu_2(n)-1$ when $2 \mid x-y$ and $n$ is even is proven analogously.
-
-The LTE lemma can be used to solve 2020 AIME I #12:
    - -Let $n$ be the least positive integer for which $149^n-2^n$ is divisible by $3^3\cdot5^5\cdot7^7.$ Find the number of positive integer divisors of $n$. - -
-
-Solution. Note that $149-2 = 147 = 3\cdot 7^2$. Using the LTE lemma, since $3 \nmid 149$ and $3 \nmid 2$, but $3 \mid 147$, $\nu_3(149^n-2^n) = \nu_3(147)+\nu_3(n) = \nu_3(n)+1$. Thus, $3^3 \mid 149^n-2^n \iff 3^2 \mid n$. Similarly, $7 \nmid 149$ and $7 \nmid 2$, but $7 \mid 147$, so $\nu_7(149^n-2^n) = \nu_7(147)+\nu_7(n) = \nu_7(n)+2$ and $7^7 \mid 149^n-2^n \iff 7^5 \mid n$.
-
-Since $5 \nmid 147$, the factors of 5 are addressed by noticing that the residues of $149^n$ modulo 5 follow the cycle $4,1,4,1$ and those of $2^n$ follow the cycle $2,4,3,1$, so the residues of $149^n-2^n$ modulo 5 cycle through the sequence $2,2,1,0$. Thus, $5 \mid 149^n-2^n$ iff $n = 4k$ for some positive integer $k$. The LTE lemma can now be applied again: $\nu_5(149^{4k}-2^{4k}) = \nu_5((149^4)^k-(2^4)^k) = \nu_5(149^4-2^4)+\nu_5(k)$. Since $149^4-2^4 \equiv (-1)^4-2^4 \equiv -15 \pmod{25}$, $\nu_5(149^4-2^4) = 1$. Hence $5^5 \mid 149^n-2^n \iff 5^4 \mid k \iff 4\cdot 5^4 \mid n$.
-
-Combining these three results, it is found that $n = 2^2\cdot 3^2\cdot 5^4\cdot 7^5$, which has $(2+1)(2+1)(4+1)(5+1) = 270$ positive divisors.
diff --git a/wiki/wikipedia/647.txt b/wiki/wikipedia/647.txt
deleted file mode 100644
index a67b96c4d907a9605c33940da9bd66fe4adcb0eb..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/647.txt
+++ /dev/null
@@ -1,42 +0,0 @@
-The intersecting chords theorem or just the chord theorem is a statement in elementary geometry that describes a relation of the four line segments created by two intersecting chords within a circle.
-
-It states that the products of the lengths of the line segments on each chord are equal.
-
-It is Proposition 35 of Book 3 of Euclid's Elements.
-
-More precisely, for two chords AC and BD intersecting in a point S the following equation holds:
-$$
-|AS|\cdot|SC|=|BS|\cdot|SD|
-$$
-
-The converse is true as well, that is, if for two line segments AC and BD intersecting in S the equation above holds true, then their four endpoints A, B, C and D lie on a common circle. Or in other words, if the diagonals of a quadrilateral ABCD intersect in S and fulfill the equation above, then it is a cyclic quadrilateral.
-
-The value of the two products in the chord theorem depends only on the distance of the intersection point S from the circle's center and is called the absolute value of the power of S; more precisely, it can be stated that:
-$$
-|AS|\cdot|SC|=|BS|\cdot|SD|=r^2-d^2
-$$
-
-where r is the radius of the circle, and d is the distance between the center of the circle and the intersection point S. This property follows directly from applying the chord theorem to a third chord going through S and the circle's center M (see drawing).
-
-The theorem can be proven using similar triangles (via the inscribed-angle theorem). Consider the angles of the triangles ASD and BSC:
-$$
-\begin{align}
-\angle ADS&=\angle BCS (\text{inscribed angles over } AB)\\
-\angle DAS&=\angle CBS (\text{inscribed angles over } CD)\\
-\angle ASD&=\angle BSC (\text{opposing angles})
-\end{align}
-$$
-
-This means the triangles ASD and BSC are similar and therefore
-$$
-\frac{AS}{SD}=\frac{BS}{SC} \Leftrightarrow |AS|\cdot|SC|=|BS|\cdot|SD|
-$$
-
-Next to the tangent-secant theorem and the intersecting secants theorem, the intersecting chords theorem represents one of the three basic cases of a more general theorem about two intersecting lines and a circle - the power of a point theorem.
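The invariance $|AS|\cdot|SC| = r^2-d^2$ across all chords through a fixed interior point S can be illustrated numerically. The Python sketch below is an editorial addition (the sampling choices are arbitrary): it draws random chords through S and evaluates the product of the two segment lengths.

```python
import numpy as np

rng = np.random.default_rng(0)
r = 1.0
S = np.array([0.3, -0.2])                      # interior point of the unit circle
d2 = S @ S                                     # d^2, squared distance to the center

for _ in range(5):
    phi = rng.uniform(0, 2 * np.pi)
    u = np.array([np.cos(phi), np.sin(phi)])   # chord direction through S
    # Endpoints solve |S + t u| = r, i.e. t^2 + 2 (S.u) t + (|S|^2 - r^2) = 0
    b, c = S @ u, d2 - r * r
    disc = np.sqrt(b * b - c)
    t1, t2 = -b + disc, -b - disc              # signed distances to the two endpoints
    print(abs(t1) * abs(t2), r * r - d2)       # equal for every chord: |AS||SC| = r^2 - d^2
```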
diff --git a/wiki/wikipedia/648.txt b/wiki/wikipedia/648.txt deleted file mode 100644 index 9582796edf13c4a54ce8af2fcc4a6e442e2abdc8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/648.txt +++ /dev/null @@ -1,8 +0,0 @@ -The Birch–Tate conjecture is a conjecture in mathematics (more specifically in algebraic K-theory) proposed by both Bryan John Birch and John Tate. - -In algebraic K-theory, the group K2 is defined as the center of the Steinberg group of the ring of integers of a number field F. K2 is also known as the tame kernel of F. The Birch–Tate conjecture relates the order of this group (its number of elements) to the value of the Dedekind zeta function $\zeta_F$. More specifically, let F be a totally real number field and let N be the largest natural number such that the extension of F by the Nth root of unity has an elementary abelian 2-group as its Galois group. Then the conjecture states that -$$ -\#K_2 = |N\zeta_F(-1)|. -$$ - -Progress on this conjecture has been made as a consequence of work on Iwasawa theory, and in particular of the proofs given for the so-called "main conjecture of Iwasawa theory." diff --git a/wiki/wikipedia/649.txt b/wiki/wikipedia/649.txt deleted file mode 100644 index 1dedcbd320cda6d9f5cc7c41fc5714119a9052e2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/649.txt +++ /dev/null @@ -1,13 +0,0 @@ -In mathematics, the prime decomposition theorem for 3-manifolds states that every compact, orientable 3-manifold is the connected sum of a unique (up to homeomorphism) finite collection of prime 3-manifolds. - -A manifold is prime if it cannot be presented as a connected sum of more than one manifold, none of which is the sphere of the same dimension. This condition is necessary since for any manifold M of dimension $n$ it is true that - -M = M \# S^n. - -(where $M \# S^n$ means the connected sum of $M$ and $S^n$). If $P$ is a prime 3-manifold then either it is $S^2 \times S^1$ or the non-orientable $S^2$ bundle over $S^1,$ - -or it is irreducible, which means that any embedded 2-sphere bounds a ball. So the theorem can be restated to say that there is a unique connected sum decomposition into irreducible 3-manifolds and fiber bundles of $S^2$ over $S^1.$ - -The prime decomposition holds also for non-orientable 3-manifolds, but the uniqueness statement must be modified slightly: every compact, non-orientable 3-manifold is a connected sum of irreducible 3-manifolds and non-orientable $S^2$ bundles over $S^1.$ This sum is unique as long as we specify that each summand is either irreducible or a non-orientable $S^2$ bundle over $S^1.$ - -The proof is based on normal surface techniques originated by Hellmuth Kneser. Existence was proven by Kneser, but the exact formulation and proof of the uniqueness was done more than 30 years later by John Milnor. diff --git a/wiki/wikipedia/65.txt b/wiki/wikipedia/65.txt deleted file mode 100644 index 7b6e23a17fe5060fcfe51b7c3a8f4d25aa2788c4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/65.txt +++ /dev/null @@ -1,120 +0,0 @@ -In mathematics, Bézout's identity (also called Bézout's lemma), named after Étienne Bézout, is the following theorem: - -Let a and b be integers or polynomials with greatest common divisor d. Then there exist integers or polynomials x and y such that ax + by = d. Moreover, the integers or polynomials of the form az + bt are exactly the multiples of d. - -Here the greatest common divisor of 0 and 0 is taken to be 0. 
The integers or polynomials x and y are called Bézout coefficients for (a, b); they are not unique. A pair of Bézout coefficients can be computed by the extended Euclidean algorithm, and this pair is, in the case of integers, one of the two pairs such that $|x|\le | b/d |$ and $|y|\le | a/d |;$ equality occurs only if one of a and b is a multiple of the other. In the polynomial case, the extended Euclidean algorithm produces the unique pair such that $\deg x < \deg b$ or $\deg y < \deg a$ (both inequalities hold unless one of a and b is a multiple of the other). - -As an example, the greatest common divisor of 15 and 69 is 3, and 3 can be written as a combination of 15 and 69 as 3 = 15 × (−9) + 69 × 2, with Bézout coefficients −9 and 2. - -Many other theorems in elementary number theory, such as Euclid's lemma or the Chinese remainder theorem, result from Bézout's identity. - -A Bézout domain is an integral domain in which Bézout's identity holds. In particular, Bézout's identity holds in principal ideal domains. Every theorem that results from Bézout's identity is thus true in all principal ideal domains. - -If a and b are not both zero and one pair of Bézout coefficients (x, y) has been computed (e.g., using the extended Euclidean algorithm), all pairs can be represented in the form -$$ -\left(x-k\frac{b}{d},\ y+k\frac{a}{d}\right), -$$ - -where k is an arbitrary integer, d is the greatest common divisor of a and b, and the fractions simplify to integers. - -If a and b are both nonzero, then exactly two of these pairs of Bézout coefficients satisfy - - |x| \le \left |\frac{b}{d}\right |\quad \text{and}\quad |y| \le \left |\frac{a}{d}\right |, - -and equality may occur only if one of a and b divides the other. - -This relies on a property of Euclidean division: given two non-zero integers c and d, if d does not divide c, there is exactly one pair (q, r) such that c = dq + r and 0 < r < |d|, and another one such that c = dq + r and −|d| < r < 0. - -The two pairs of small Bézout coefficients are obtained from the given one (x, y) by choosing for k in the above formula either of the two integers next to $\frac{x}{b/d}$. - -The extended Euclidean algorithm always produces one of these two minimal pairs. - -Let a = 12 and b = 42; then gcd(12, 42) = 6. Then the following Bézout identities hold, with the Bézout coefficients written in red for the minimal pairs and in blue for the other ones. - -\begin{align} - -\vdots \\ - -12 &\times ({\color{blue}{-10}}) & + 42 &\times \color{blue}{3} &= 6 \\ - -12 &\times ({\color{red}{-3}}) & + 42 &\times \color{red}{1} &= 6 \\ - -12 &\times \color{red}{4} & + 42 &\times({\color{red}{-1}}) &= 6 \\ - -12 &\times \color{blue}{11} & + 42 &\times ({\color{blue}{-3}}) &= 6 \\ - -12 &\times \color{blue}{18} & + 42 &\times ({\color{blue}{-5}}) &= 6 \\ - -\vdots - -\end{align} - -If (x, y) = (18, −5) is the original pair of Bézout coefficients, then $\frac{18}{42/6} \in [2, 3]$ yields the minimal pairs via k = 2, respectively k = 3; that is, (18 − 2 ⋅ 7, −5 + 2 ⋅ 2) = (4, −1), and (18 − 3 ⋅ 7, −5 + 3 ⋅ 2) = (−3, 1). - -Given any nonzero integers a and b, let $S=\{ax+by \mid x,y\in\mathbb{Z} \text{ and } ax+by>0\}.$ The set S is nonempty since it contains either a or –a (with x = ±1 and y = 0). Since S is a nonempty set of positive integers, it has a minimum element $d = as + bt$. To prove that d is the greatest common divisor of a and b, it must be proven that d is a common divisor of a and b, and that for any other common divisor c, one has c ≤ d.
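As a computational aside before completing the proof, the extended Euclidean algorithm mentioned above fits in a few lines. This sketch is our own (the function name and output format are not from any particular library); it reproduces the identity 3 = 15 × (−9) + 69 × 2 and a minimal pair for (a, b) = (12, 42):

```
def extended_gcd(a, b):
    """Return (d, x, y) with a*x + b*y = d = gcd(a, b)."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

print(extended_gcd(15, 69))   # (3, -9, 2): 15*(-9) + 69*2 = 3
d, x, y = extended_gcd(12, 42)
print(d, x, y)                # (6, -3, 1), one of the two minimal pairs above
# All Bezout pairs for (12, 42) have the form (x - 7k, y + 2k), since b/d = 7, a/d = 2.
print([(x - 7 * k, y + 2 * k) for k in range(-2, 3)])
```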
    - -The Euclidean division of a by d may be written - -a=dq+r\quad\text{with}\quad 0\le r<d - -The remainder r is in $S\cup \{0\}$, because - -\begin{align} - -r & = a - qd \\ - -& = a - q(as+bt)\\ - -& = a(1-qs) - bqt. - -\end{align} - -Thus r is of the form $ax+by$, and hence $r\in S\cup \{0\}$. However, 0 ≤ r < d, and d is the smallest positive integer in S: the remainder r can therefore not be in S, making r necessarily 0. This implies that d is a divisor of a. Similarly d is also a divisor of b, and d is a common divisor of a and b. - -Now, let c be any common divisor of a and b; that is, there exist u and v such that - -a = cu and b = cv. One has thus - -\begin{align} - -d&=as + bt\\ - -& =cus+cvt\\ - -&=c(us+vt). - -\end{align} - -That is, c is a divisor of d. Since d > 0, this implies c ≤ d. - -Bézout's identity can be extended to more than two integers: if - -\gcd(a_1, a_2, \ldots, a_n) = d - -then there are integers $x_1, x_2, \ldots, x_n$ such that - -d = a_1 x_1 + a_2 x_2 + \cdots + a_n x_n - -has the following properties: - -* d is the smallest positive integer of this form - -* every number of this form is a multiple of d - -Bézout's identity works for univariate polynomials over a field in exactly the same way as for integers. In particular, the Bézout coefficients and the greatest common divisor may be computed with the extended Euclidean algorithm. - -As the common roots of two polynomials are the roots of their greatest common divisor, Bézout's identity and the fundamental theorem of algebra imply the following result: - -For univariate polynomials f and g with coefficients in a field, there exist polynomials a and b such that af + bg = 1 if and only if f and g have no common root in any algebraically closed field (commonly the field of complex numbers). - -The generalization of this result to any number of polynomials and indeterminates is Hilbert's Nullstellensatz. - -As noted in the introduction, Bézout's identity works not only in the ring of integers, but also in any other principal ideal domain (PID). - -That is, if R is a PID, and a and b are elements of R, and d is a greatest common divisor of a and b, - -then there are elements x and y in R such that ax + by = d. The reason is that the ideal Ra + Rb is principal and equal to Rd. - -An integral domain in which Bézout's identity holds is called a Bézout domain. - -French mathematician Étienne Bézout (1730–1783) proved this identity for polynomials. However, this statement for integers can be found already in the work of an earlier French mathematician, Claude Gaspard Bachet de Méziriac (1581–1638). diff --git a/wiki/wikipedia/650.txt b/wiki/wikipedia/650.txt deleted file mode 100644 index ef70b6dc46558a9784d4ad7d6693b5b2f73ee2aa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/650.txt +++ /dev/null @@ -1,207 +0,0 @@ -In classical statistical mechanics, the equipartition theorem relates the temperature of a system to its average energies. The equipartition theorem is also known as the law of equipartition, equipartition of energy, or simply equipartition. The original idea of equipartition was that, in thermal equilibrium, energy is shared equally among all of its various forms; for example, the average kinetic energy per degree of freedom in translational motion of a molecule should equal that in rotational motion. - -The equipartition theorem makes quantitative predictions.
Like the virial theorem, it gives the total average kinetic and potential energies for a system at a given temperature, from which the system's heat capacity can be computed. However, equipartition also gives the average values of individual components of the energy, such as the kinetic energy of a particular particle or the potential energy of a single spring. For example, it predicts that every atom in a monatomic ideal gas has an average kinetic energy of (3/2)kBT in thermal equilibrium, where kB is the Boltzmann constant and T is the (thermodynamic) temperature. More generally, equipartition can be applied to any classical system in thermal equilibrium, no matter how complicated. It can be used to derive the ideal gas law, and the Dulong–Petit law for the specific heat capacities of solids. The equipartition theorem can also be used to predict the properties of stars, even white dwarfs and neutron stars, since it holds even when relativistic effects are considered. - -Although the equipartition theorem makes accurate predictions in certain conditions, it is inaccurate when quantum effects are significant, such as at low temperatures. When the thermal energy kBT is smaller than the quantum energy spacing in a particular degree of freedom, the average energy and heat capacity of this degree of freedom are less than the values predicted by equipartition. Such a degree of freedom is said to be "frozen out" when the thermal energy is much smaller than this spacing. For example, the heat capacity of a solid decreases at low temperatures as various types of motion become frozen out, rather than remaining constant as predicted by equipartition. Such decreases in heat capacity were among the first signs to physicists of the 19th century that classical physics was incorrect and that a new, more subtle, scientific model was required. Along with other evidence, equipartition's failure to model black-body radiation—also known as the ultraviolet catastrophe—led Max Planck to suggest that the energy of the light-emitting oscillators in an object was quantized, a revolutionary hypothesis that spurred the development of quantum mechanics and quantum field theory. - -The name "equipartition" means "equal division," as derived from the Latin equi from the antecedent, æquus ("equal or even"), and partition from the noun, partitio ("division, portion"). The original concept of equipartition was that the total kinetic energy of a system is shared equally among all of its independent parts, on the average, once the system has reached thermal equilibrium. Equipartition also makes quantitative predictions for these energies. For example, it predicts that every atom of an inert noble gas, in thermal equilibrium at temperature T, has an average translational kinetic energy of (3/2)kBT, where kB is the Boltzmann constant. As a consequence, since kinetic energy is equal to 1/2 (mass)(velocity)^2, the heavier atoms of xenon have a lower average speed than do the lighter atoms of helium at the same temperature. Figure 2 shows the Maxwell–Boltzmann distribution for the speeds of the atoms in four noble gases. - -In this example, the key point is that the kinetic energy is quadratic in the velocity. The equipartition theorem shows that in thermal equilibrium, any degree of freedom (such as a component of the position or velocity of a particle) which appears only quadratically in the energy has an average energy of 1/2kBT and therefore contributes 1/2kB to the system's heat capacity. This has many applications.
    - -The (Newtonian) kinetic energy of a particle of mass m and velocity v is given by - - - -H_{\text{kin}} = \tfrac12 m |\mathbf{v}|^2 = \tfrac{1}{2} m\left( v_x^2 + v_y^2 + v_z^2 \right), - - - -where vx, vy and vz are the Cartesian components of the velocity v. Here, H is short for Hamiltonian, and is used henceforth as a symbol for energy because the Hamiltonian formalism plays a central role in the most general form of the equipartition theorem. - -Since the kinetic energy is quadratic in the components of the velocity, by equipartition these three components each contribute 1/2kBT to the average kinetic energy in thermal equilibrium. Thus the average kinetic energy of the particle is (3/2)kBT, as in the example of noble gases above. - -More generally, in an ideal gas, the total energy consists purely of (translational) kinetic energy: by assumption, the particles have no internal degrees of freedom and move independently of one another. Equipartition therefore predicts that the total energy of an ideal gas of N particles is (3/2) N kB T. - -It follows that the heat capacity of the gas is (3/2) N kB and hence, in particular, the heat capacity of a mole of such gas particles is (3/2)NAkB = (3/2)R, where NA is the Avogadro constant and R is the gas constant. Since R ≈ 2 cal/(mol·K), equipartition predicts that the molar heat capacity of an ideal gas is roughly 3 cal/(mol·K). This prediction is confirmed by experiment. Equipartition likewise underlies the ideal gas law and the Dulong–Petit law of solid heat capacities; the latter application was particularly significant in the history of equipartition. - -An important application of the equipartition theorem is to the specific heat capacity of a crystalline solid. Each atom in such a solid can oscillate in three independent directions, so the solid can be viewed as a system of 3N independent simple harmonic oscillators, where N denotes the number of atoms in the lattice. Since each harmonic oscillator has average energy kBT, the average total energy of the solid is 3NkBT, and its heat capacity is 3NkB. - -By taking N to be the Avogadro constant NA, and using the relation R = NAkB between the gas constant R and the Boltzmann constant kB, this provides an explanation for the Dulong–Petit law of specific heat capacities of solids, which stated that the specific heat capacity (per unit mass) of a solid element is inversely proportional to its atomic weight. A modern version is that the molar heat capacity of a solid is 3R ≈ 6 cal/(mol·K). - -However, this law is inaccurate at lower temperatures, due to quantum effects; it is also inconsistent with the experimentally derived third law of thermodynamics, according to which the molar heat capacity of any substance must go to zero as the temperature goes to absolute zero. - -This article uses the non-SI unit of cal/(mol·K) for heat capacity, because it offers greater accuracy for single digits.
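The (3/2)kBT prediction is easy to check numerically: in the Maxwell–Boltzmann distribution each velocity component is an independent Gaussian with variance kBT/m. The sketch below is our own illustration; the temperature and mass values are arbitrary choices (roughly a helium atom at room temperature):

```
import math, random

# Maxwell-Boltzmann check (sketch): each velocity component is Gaussian with
# variance k_B*T/m, so the mean kinetic energy should be (3/2) k_B T.
k_B = 1.380649e-23            # J/K
T = 300.0                     # K (arbitrary choice)
m = 6.6e-27                   # kg, roughly a helium atom (assumed value)
sigma = math.sqrt(k_B * T / m)

N = 200_000
total = 0.0
for _ in range(N):
    vx, vy, vz = (random.gauss(0.0, sigma) for _ in range(3))
    total += 0.5 * m * (vx * vx + vy * vy + vz * vz)

print(total / N / (1.5 * k_B * T))   # ratio ~ 1.00
```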
    For an approximate conversion to the corresponding SI unit of J/(mol·K), such values should be multiplied by 4.2 J/cal. - -The equipartition of kinetic energy was proposed initially in 1843, and more correctly in 1845, by John James Waterston. In 1859, James Clerk Maxwell argued that the kinetic heat energy of a gas is equally divided between linear and rotational energy. In 1876, Ludwig Boltzmann expanded on this principle by showing that the average energy was divided equally among all the independent components of motion in a system. Boltzmann applied the equipartition theorem to provide a theoretical explanation of the Dulong–Petit law for the specific heat capacities of solids. - -The history of the equipartition theorem is intertwined with that of specific heat capacity, both of which were studied in the 19th century. In 1819, the French physicists Pierre Louis Dulong and Alexis Thérèse Petit discovered that the specific heat capacities of solid elements at room temperature were inversely proportional to the atomic weight of the element. Their law was used for many years as a technique for measuring atomic weights. However, subsequent studies by James Dewar and Heinrich Friedrich Weber showed that this Dulong–Petit law holds only at high temperatures; at lower temperatures, or for exceptionally hard solids such as diamond, the specific heat capacity was lower. - -Experimental observations of the specific heat capacities of gases also raised concerns about the validity of the equipartition theorem. The theorem predicts that the molar heat capacity of simple monatomic gases should be roughly 3 cal/(mol·K), whereas that of diatomic gases should be roughly 7 cal/(mol·K). Experiments confirmed the former prediction, but found that molar heat capacities of diatomic gases were typically about 5 cal/(mol·K), and fell to about 3 cal/(mol·K) at very low temperatures. Maxwell noted in 1875 that the disagreement between experiment and the equipartition theorem was much worse than even these numbers suggest; since atoms have internal parts, heat energy should go into the motion of these internal parts, making the predicted specific heats of monatomic and diatomic gases much higher than 3 cal/(mol·K) and 7 cal/(mol·K), respectively. - -A third discrepancy concerned the specific heat of metals. According to the classical Drude model, metallic electrons act as a nearly ideal gas, and so they should contribute (3/2) NekB to the heat capacity by the equipartition theorem, where Ne is the number of electrons. Experimentally, however, electrons contribute little to the heat capacity: the molar heat capacities of many conductors and insulators are nearly the same. - -The most general form of the equipartition theorem states that, under suitable assumptions (discussed below), for a physical system with Hamiltonian energy function H and degrees of freedom xn, the following equipartition formula holds in thermal equilibrium for all indices m and n: - - - -\Bigl\langle x_{m} \frac{\partial H}{\partial x_{n}} \Bigr\rangle = \delta_{mn} k_{B} T. - - - -Here δmn is the Kronecker delta, which is equal to one if m = n and is zero otherwise. The averaging brackets $\left\langle \ldots \right\rangle$ denote an ensemble average over phase space or, under an assumption of ergodicity, a time average of a single system. - -The general equipartition theorem holds in both the microcanonical ensemble, when the total energy of the system is constant, and also in the canonical ensemble, when the system is coupled to a heat bath with which it can exchange energy. Derivations of the general formula are given later in the article.
- -The general formula is equivalent to the following two: - -# $\Bigl\langle x_{n} \frac{\partial H}{\partial x_{n}} \Bigr\rangle = k_{B} T \quad \mbox{for all } n$ - -# $\Bigl\langle x_{m} \frac{\partial H}{\partial x_{n}} \Bigr\rangle = 0 \quad \mbox{for all } m \neq n.$ - -If a degree of freedom xn appears only as a quadratic term anxn2 in the Hamiltonian H, then the first of these formulae implies that - - - -k_{B} T = \Bigl\langle x_{n} \frac{\partial H}{\partial x_{n}}\Bigr\rangle = 2\langle a_n x_n^2 \rangle, - - - -which is twice the contribution that this degree of freedom makes to the average energy $\langle H\rangle$. Thus the equipartition theorem for systems with quadratic energies follows easily from the general formula. A similar argument, with 2 replaced by s, applies to energies of the form anxns. - -The degrees of freedom xn are coordinates on the phase space of the system and are therefore commonly subdivided into generalized position coordinates qk and generalized momentum coordinates pk, where pk is the conjugate momentum to qk. In this situation, formula 1 means that for all k, - - - -\Bigl\langle p_{k} \frac{\partial H}{\partial p_{k}} \Bigr\rangle = \Bigl\langle q_{k} \frac{\partial H}{\partial q_{k}} \Bigr\rangle = k_{\rm B} T. - - - -Using the equations of Hamiltonian mechanics, these formulae may also be written - - - -\Bigl\langle p_{k} \frac{dq_{k}}{dt} \Bigr\rangle = -\Bigl\langle q_{k} \frac{dp_{k}}{dt} \Bigr\rangle = k_{\rm B} T. - - - -Similarly, one can show using formula 2 that - - - -\Bigl\langle q_{j} \frac{\partial H}{\partial p_{k}} \Bigr\rangle = \Bigl\langle p_{j} \frac{\partial H}{\partial q_{k}} \Bigr\rangle = 0 - -\quad \mbox{ for all } j,k - - - -and - - - -\Bigl\langle q_{j} \frac{\partial H}{\partial q_{k}} \Bigr\rangle = - -\Bigl\langle p_{j} \frac{\partial H}{\partial p_{k}} \Bigr\rangle = 0 \quad \mbox{ for all } j \neq k. - - - -The general equipartition theorem is an extension of the virial theorem (proposed in 1870), which states that - - - -\Bigl\langle \sum_{k} q_{k} \frac{\partial H}{\partial q_{k}} \Bigr\rangle = - -\Bigl\langle \sum_{k} p_{k} \frac{\partial H}{\partial p_{k}} \Bigr\rangle = - -\Bigl\langle \sum_{k} p_{k} \frac{dq_{k}}{dt} \Bigr\rangle = -\Bigl\langle \sum_{k} q_{k} \frac{dp_{k}}{dt} \Bigr\rangle, - - - -where t denotes time. - -A diatomic gas can be modelled as two masses, m1 and m2, joined by a spring of stiffness a, which is called the rigid rotor-harmonic oscillator approximation. The classical energy of this system is - - - -H = - -\frac{\left| \mathbf{p}_{1} \right|^{2}}{2m_{1}} + - -\frac{\left| \mathbf{p}_{2} \right|^{2}}{2m_{2}} + - -\frac{1}{2} a q^{2}, - - - -where p1 and p2 are the momenta of the two atoms, and q is the deviation of the inter-atomic separation from its equilibrium value. Every degree of freedom in the energy is quadratic and, thus, should contribute 1/2kBT to the total average energy, and 1/2kB to the heat capacity. Therefore, the heat capacity of a gas of N diatomic molecules is predicted to be 7N·1/2kB: the momenta p1 and p2 contribute three degrees of freedom each, and the extension q contributes the seventh. It follows that the heat capacity of a mole of diatomic molecules with no other degrees of freedom should be (7/2)NAkB = (7/2)R and, thus, the predicted molar heat capacity should be roughly 7 cal/(mol·K). 
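Formula 1 above is easy to illustrate numerically. In this sketch (ours, with arbitrary units), a single harmonic degree of freedom H = ½aq² is sampled from its Boltzmann distribution, which is Gaussian with variance kBT/a, and the sample average of q ∂H/∂q recovers kBT:

```
import math, random

# Sketch: verify <q * dH/dq> = k_B*T for H = 0.5*a*q^2 (units with k_B*T = 2.0).
k_B_T = 2.0
a = 5.0                                   # arbitrary "spring stiffness"
sigma = math.sqrt(k_B_T / a)              # Boltzmann weight exp(-H/k_B_T) is Gaussian

n = 200_000
avg = sum(q * (a * q) for q in (random.gauss(0.0, sigma) for _ in range(n))) / n
print(avg, k_B_T)                         # both ~ 2.0
```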
However, the experimental values for molar heat capacities of diatomic gases are typically about 5 cal/(mol·K). - -Equipartition also applies to non-ideal gases of interacting particles. If ρ denotes the density of the gas, U(r) the interaction energy of a pair of particles at separation r, and g(r) the radial distribution function, then it follows that the mean potential energy associated with the interaction of a given particle with the rest of the gas is - - - -\langle h_{\mathrm{pot}} \rangle = \int_{0}^{\infty} 4\pi r^{2} \rho U(r) g(r) dr. - - - -The total mean potential energy of the gas is therefore $ \langle H_{pot} \rangle = \tfrac12 N \langle h_{\mathrm{pot}} \rangle $, where N is the number of particles in the gas, and the factor 1/2 is needed because summation over all the particles counts each interaction twice. - -Adding kinetic and potential energies, then applying equipartition, yields the energy equation - - - -\langle H \rangle = - -\langle H_{\mathrm{kin}} \rangle + \langle H_{\mathrm{pot}} \rangle = - -\frac{3}{2} Nk_{B}T + 2\pi N \rho \int_{0}^{\infty} r^{2} U(r) g(r) dr. - - - -Equipartition applies to potential energies as well as kinetic energies. Simple examples are provided by potential energy functions of the form - - - -H_{\mathrm{pot}} = C q^{s}, - - - -where C and s are arbitrary real constants. In these cases, the law of equipartition predicts that - - - -k_{\rm B} T = \Bigl\langle q \frac{\partial H_{\mathrm{pot}}}{\partial q} \Bigr\rangle = - -\langle q \cdot s C q^{s-1} \rangle = \langle s C q^{s} \rangle = s \langle H_{\mathrm{pot}} \rangle. - - - -Thus, the average potential energy equals kBT/s, not kBT/2 as for the quadratic harmonic oscillator (where s = 2). - -More generally, a typical energy function of a one-dimensional system has a Taylor expansion in the extension q: - - - -H_{\mathrm{pot}} = \sum_{n=2}^{\infty} C_{n} q^{n} - - - -for non-negative integers n. There is no n = 1 term, because at the equilibrium point, there is no net force and so the first derivative of the energy is zero. The n = 0 term need not be included, since the energy at the equilibrium position may be set to zero by convention. In this case, the law of equipartition predicts that - - - -k_{\rm B} T = \Bigl\langle q \frac{\partial H_{\mathrm{pot}}}{\partial q} \Bigr\rangle = \sum_{n=2}^{\infty} n C_{n} \langle q^{n} \rangle. - - - -As noted above, the average kinetic energy of a particle is (3/2)kBT. This may be shown using the Maxwell–Boltzmann distribution (see Figure 2), which is the probability distribution - - - -f (v) = 4 \pi - -\left( \frac{m}{2 \pi k_{\rm B} T}\right)^{3/2}\!\!v^2 - -\exp \Bigl( - -\frac{-mv^2}{2k_{\rm B} T} - -\Bigr) - - - -for the speed of a particle of mass m in the system, where the speed v is the magnitude $\sqrt{v_x^2 + v_y^2 + v_z^2}$ of the velocity vector $\mathbf{v} = (v_x,v_y,v_z).$ - -The Maxwell–Boltzmann distribution applies to any system composed of atoms, and assumes only a canonical ensemble, specifically, that the kinetic energies are distributed according to their Boltzmann factor at a temperature T. - -Equipartition also requires that the system be ergodic. Consider, for example, a set of coupled harmonic oscillators whose total energy decomposes into independent normal modes. If the system is isolated from the rest of the world, the energy in each normal mode is constant; energy is not transferred from one mode to another. Hence, equipartition does not hold for such a system; the amount of energy in each normal mode is fixed at its initial value. If sufficiently strong nonlinear terms are present in the energy function, energy may be transferred between the normal modes, leading to ergodicity and rendering the law of equipartition valid. However, the Kolmogorov–Arnold–Moser theorem states that energy will not be exchanged unless the nonlinear perturbations are strong enough; if they are too small, the energy will remain trapped in at least some of the modes. - -Another way ergodicity can be broken is by the existence of nonlinear soliton symmetries.
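The prediction ⟨Hpot⟩ = kBT/s derived above can likewise be checked by numerically integrating the Boltzmann-weighted average, as in this sketch of ours (the constants C, s, and kBT are arbitrary):

```
import math

# Sketch: for H_pot = C*q^s, the Boltzmann-weighted average of H_pot is k_B*T/s.
def boltzmann_average(f, H, k_B_T, qmax=5.0, n=100_000):
    """Average of f(q) under the weight exp(-H(q)/k_B_T), by a simple Riemann sum."""
    dq = 2.0 * qmax / n
    num = den = 0.0
    for i in range(n + 1):
        q = -qmax + i * dq
        w = math.exp(-H(q) / k_B_T)
        num += f(q) * w
        den += w
    return num / den

C, s, k_B_T = 1.3, 4, 2.0                 # arbitrary constants
H = lambda q: C * q ** s
print(boltzmann_average(H, H, k_B_T))     # ~ k_B_T/s = 0.5
```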
In 1953, Fermi, Pasta, Ulam and Tsingou conducted computer simulations of a vibrating string that included a non-linear term (quadratic in one test, cubic in another, and a piecewise linear approximation to a cubic in a third). They found that the behavior of the system was quite different from what intuition based on equipartition would have led them to expect. Instead of the energies in the modes becoming equally shared, the system exhibited a very complicated quasi-periodic behavior. This puzzling result was eventually explained by Kruskal and Zabusky in 1965 in a paper which, by connecting the simulated system to the Korteweg–de Vries equation, led to the development of soliton mathematics. - -The law of equipartition breaks down when the thermal energy kBT is significantly smaller than the spacing between energy levels. Equipartition no longer holds because it is a poor approximation to assume that the energy levels form a smooth continuum, which is required in the derivations of the equipartition theorem above. A historically important instance is black-body radiation, where the failure of equipartition became known as the ultraviolet catastrophe. The paradox arises because there are an infinite number of independent modes of the electromagnetic field in a closed container, each of which may be treated as a harmonic oscillator. If each electromagnetic mode were to have an average energy kBT, there would be an infinite amount of energy in the container. However, by the reasoning above, the average energy in the higher-frequency modes goes to zero as ν goes to infinity; moreover, Planck's law of black body radiation, which describes the experimental distribution of energy in the modes, follows from the same reasoning. - -Other, more subtle quantum effects can lead to corrections to equipartition, such as identical particles and continuous symmetries. The effects of identical particles can be dominant at very high densities and low temperatures. For example, the valence electrons in a metal can have a mean kinetic energy of a few electronvolts, which would normally correspond to a temperature of tens of thousands of kelvins. Such a state, in which the density is high enough that the Pauli exclusion principle invalidates the classical approach, is called a degenerate fermion gas. Such gases are important for the structure of white dwarf and neutron stars. At low temperatures, a fermionic analogue of the Bose–Einstein condensate (in which a large number of identical particles occupy the lowest-energy state) can form; such superfluid electrons are responsible for superconductivity. diff --git a/wiki/wikipedia/651.txt b/wiki/wikipedia/651.txt deleted file mode 100644 index 7aba7147d862b54142f285464751fdae72d7ee73..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/651.txt +++ /dev/null @@ -1,13 +0,0 @@ -Conjunction introduction (often abbreviated simply as conjunction and also called and introduction or adjunction) is a valid rule of inference of propositional logic. The rule makes it possible to introduce a conjunction into a logical proof. It is the inference that if the proposition p is true, and proposition q is true, then the logical conjunction of the two propositions p and q is true. For example, if it is true that "it's raining", and it is true that "I'm inside", then it is true that "it's raining and I'm inside". The rule can be stated: -$$ -\frac{P,Q}{\therefore P \land Q} -$$ - -where the rule is that wherever an instance of "$P$" and "$Q$" appear on lines of a proof, a "$P \land Q$" can be placed on a subsequent line.
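In a proof assistant the rule is a primitive constructor. For instance, in Lean 3 (one of the formal languages represented in this dataset) conjunction introduction is and.intro, as in this minimal example:

```
-- Conjunction introduction in Lean 3: given proofs hp : P and hq : Q,
-- `and.intro hp hq` (equivalently, the anonymous constructor ⟨hp, hq⟩) proves P ∧ Q.
example (P Q : Prop) (hp : P) (hq : Q) : P ∧ Q := and.intro hp hq
```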
    - -The conjunction introduction rule may be written in sequent notation: -$$ -P, Q \vdash P \land Q -$$ - -where $P$ and $Q$ are propositions expressed in some formal system, and $\vdash$ is a metalogical symbol meaning that $P \land Q$ is a syntactic consequence of $P$ and $Q$ when they each appear on lines of a proof in some logical system. diff --git a/wiki/wikipedia/652.txt b/wiki/wikipedia/652.txt deleted file mode 100644 index 2a04009c565942cc36484d6f6eaa6fbfd8f74974..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/652.txt +++ /dev/null @@ -1,52 +0,0 @@ -In mathematics, the ascending chain condition (ACC) and descending chain condition (DCC) are finiteness properties satisfied by some algebraic structures, most importantly ideals in certain commutative rings. These conditions played an important role in the development of the structure theory of commutative rings in the works of David Hilbert, Emmy Noether, and Emil Artin. - -The conditions themselves can be stated in an abstract form, so that they make sense for any partially ordered set. This point of view is useful in abstract algebraic dimension theory due to Gabriel and Rentschler. - -A partially ordered set (poset) P is said to satisfy the ascending chain condition (ACC) if no infinite strictly ascending sequence -$$ -a_1 < a_2 < a_3 < \cdots -$$ - -of elements of P exists. - -Equivalently, every weakly ascending sequence -$$ -a_1 \leq a_2 \leq a_3 \leq \cdots, -$$ - -of elements of P eventually stabilizes, meaning that there exists a positive integer n such that -$$ -a_n = a_{n+1} = a_{n+2} = \cdots. -$$ - -Similarly, P is said to satisfy the descending chain condition (DCC) if there is no infinite descending chain of elements of P. Equivalently, every weakly descending sequence -$$ -a_1 \geq a_2 \geq a_3 \geq \cdots -$$ - -of elements of P eventually stabilizes. - -* Assuming the axiom of dependent choice, the descending chain condition on (possibly infinite) poset P is equivalent to P being well-founded: every nonempty subset of P has a minimal element (also called the minimal condition or minimum condition). A totally ordered set that is well-founded is a well-ordered set. - -* Similarly, the ascending chain condition is equivalent to P being converse well-founded (again, assuming dependent choice): every nonempty subset of P has a maximal element (the maximal condition or maximum condition). - -* Every finite poset satisfies both the ascending and descending chain conditions, and thus is both well-founded and converse well-founded. - -Consider the ring -$$ -\mathbb{Z} = \{\dots, -3, -2, -1, 0, 1, 2, 3, \dots\} -$$ - -of integers. Each ideal of $\mathbb{Z}$ consists of all multiples of some number $n$. For example, the ideal -$$ -I = \{\dots, -18, -12, -6, 0, 6, 12, 18, \dots\} -$$ - -consists of all multiples of $6$. Let -$$ -J = \{\dots, -6, -4, -2, 0, 2, 4, 6, \dots\} -$$ - -be the ideal consisting of all multiples of $2$. The ideal $I$ is contained inside the ideal $J$, since every multiple of $6$ is also a multiple of $2$. In turn, the ideal $J$ is contained in the ideal $\mathbb{Z}$, since every multiple of $2$ is a multiple of $1$. However, at this point there is no larger ideal; we have "topped out" at $\mathbb{Z}$. - -In general, if $I_1, I_2, I_3, \dots$ are ideals of $\mathbb{Z}$ such that $I_1$ is contained in $I_2$, $I_2$ is contained in $I_3$, and so on, then there is some $n$ for which all $I_n = I_{n+1} = I_{n+2} = \cdots$. That is, after some point all the ideals are equal to each other.
Therefore, the ideals of $\mathbb{Z}$ satisfy the ascending chain condition, where ideals are ordered by set inclusion. Hence $\mathbb{Z}$ is a Noetherian ring. diff --git a/wiki/wikipedia/653.txt b/wiki/wikipedia/653.txt deleted file mode 100644 index e86ebc0a53a53d1ec05f21ae85d73642cd531d80..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/653.txt +++ /dev/null @@ -1,25 +0,0 @@ -The following proofs of elementary ring properties use only the axioms that define a mathematical ring: - -Theorem: $0\cdot a=a\cdot 0=0$ - -Theorem: The identity element e for a binary operation (addition or multiplication) of a ring is unique. - -Theorem: −a as the additive inverse element for a is unique. - -Theorem: a−1 as the multiplicative inverse element for a is unique. - -Proof: If there is another inverse element $a^{-1'}$ for $a$, then $a^{-1} = a^{-1} \times 1 = a^{-1} \times a \times a^{-1'} = 1 \times a^{-1'} = a^{-1'}$. - -Theorem: A ring $(R, +, \cdot)$ is the zero ring (that is, consists of precisely one element) if and only if $0 = 1$. - -Theorem: $(-1)a=-a$ - -Theorem: $(-a) \cdot b= a \cdot (-b) = -(ab)$ diff --git a/wiki/wikipedia/654.txt b/wiki/wikipedia/654.txt deleted file mode 100644 index 17d820512692895fd464d40af3b1372d3d6a4eb0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/654.txt +++ /dev/null @@ -1,17 +0,0 @@ -This classical, popular puzzle involves a large rectangle divided into five "rooms". The objective of the puzzle is to cross each "wall" of the diagram with a continuous line only once. - -As with the Seven Bridges of Königsberg, the puzzle may be represented in graphical form with each room corresponding to a vertex (including the outside area as a room) and two vertices joined by an edge if the rooms have a common wall. Because there is more than one pair of vertices with an odd number of edges, the resulting multigraph contains neither an Eulerian path nor an Eulerian circuit, which means that this puzzle cannot be solved. - -By bending the rules, a related puzzle can be solved: for instance, by permitting passage through more than one wall at a time (that is, through a corner of a room), or by solving the puzzle on a torus (doughnut) instead of a flat plane. - -Even without using graph theory, it is not difficult to show that the Five Room Puzzle has no solution. First, the rules must be clarified. The rooms and the solution line must all be drawn on a single side of a normal flat sheet of paper. The solution line must be continuous, but can bend sharply or smoothly in any way and can even cross over itself (but not at a wall, so this is often prohibited). The solution line must cross over each "wall" exactly once, where "cross over" means to pass completely from one to the other of the two rooms that are separated by the "wall", or from a room to the area outside the drawing. This precludes "crossing" two walls at the same time by drawing the solution line through the corner at which they meet. It also precludes "crossing" a wall by drawing the solution line up to a wall, perhaps along it, but then leaving the wall on the same side. There are 16 "walls", seven separating rooms and nine separating the rooms from the area outside the drawing. - -The method of proof is proof by contradiction. That is, we proceed as if a solution exists and discover some properties of all solutions.
These put us in an impossible situation and thus we have to conclude that we were wrong: there is no solution after all. - -Imagine that there is an "observer" in each "room". The observer can see the solution line when it is in his room, but not otherwise. As the solution line is drawn, he will see it enter his room through one wall and leave through another. He may also see that the line starts in his room and/or ends in his room. There is no observer in the area outside the drawing, so there are five observers. - -Consider, first, the observers in the lower-left and lower-right rooms. Each of these rooms has four walls. If the solution line starts in one of these rooms, its observer will see the line leave through a wall. Then it will come back into the room through another wall and leave again through a third. Finally, it will come back into the room through the fourth wall and end. If the solution line starts somewhere else, the observer will see the solution line come into and leave his room exactly twice, passing through all four walls in some order. There is no problem with any of this. - -Consider, however, the observers in the remaining three rooms. Each of these rooms has five walls. If the solution line starts in one of these rooms, its observer will see the line leave (through one wall), re-enter and leave again (two more walls) and enter and leave a second time (the last two walls). If the solution line starts somewhere else, the observer will see the solution line enter and leave (two walls), enter and leave a second time (two more walls) and finally enter through the fifth wall and end (all five walls have been crossed, so the line cannot get back out of the room again). So, we see that for the rooms with five walls, the solution line must either start inside the room or it must end inside the room. There is no other possibility. In our arguments, we have said nothing about exactly which walls the solution line crosses, the order in which it crosses them or where the line goes when it is outside a particular room. Therefore, these arguments apply to all solutions that obey the rules. Again, for the rooms with five walls, the solution line must either start or end inside the room. - -But, we have three rooms with five walls. The solution line has one start and one end, so it can pass through all five walls of two of these rooms. However, having run out of ends, the line cannot pass through all of the walls of the third five-walled room. Therefore, the solution line cannot be drawn to obey the rules. diff --git a/wiki/wikipedia/655.txt b/wiki/wikipedia/655.txt deleted file mode 100644 index 84cca6676412585d1860f085fee314b8f0db054f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/655.txt +++ /dev/null @@ -1,91 +0,0 @@ -Japaridze's polymodal logic (GLP) is a system of provability logic with infinitely many provability modalities. This system has played an important role - -in some applications of provability algebras in proof theory, and has been extensively studied since the late 1980s. It is named after Giorgi Japaridze. - -The language of GLP extends that of the language of classical propositional logic by including the infinite series [0],[1],[2],... of necessity operators. Their dual possibility operators <0>,<1>,<2>,... are defined by <n>p = ¬[n]¬p.
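Returning briefly to the Five Room Puzzle above, the graph-theoretic argument can be spelled out in a few lines of code. This sketch of ours records the wall counts of the five rooms plus the outside region and applies Euler's condition that a graph traversable in one stroke has at most two odd-degree vertices:

```
# Wall counts ("degrees") from the argument above: two four-walled rooms,
# three five-walled rooms, and the outside region bounded by nine walls.
degrees = {
    "bottom-left": 4, "bottom-right": 4,
    "bottom-middle": 5, "top-left": 5, "top-right": 5,
    "outside": 9,
}
assert sum(degrees.values()) == 2 * 16          # 16 walls, each shared by 2 regions

odd = [v for v, d in degrees.items() if d % 2 == 1]
print(odd)                  # four odd-degree vertices
# An Eulerian path exists only if there are 0 or 2 odd-degree vertices.
print(len(odd) in (0, 2))   # False -> the puzzle has no solution
```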
    - -The axioms of GLP are all classical tautologies and all formulas of one of the following forms: - -*[n](p → q) → ([n]p → [n]q) - -*[n]([n]p → p) → [n]p - -*[n]p → [n+1]p - -*<n>p → [n+1]<n>p - -And the rules of inference are: - -* From p and p → q conclude q - -* From p conclude [0]p - -Consider a sufficiently strong first-order theory T such as Peano Arithmetic PA. - -Define the series T0,T1,T2,... of theories as follows: - -*T0 is T - -*Tn+1 is the extension of Tn through the additional axioms ∀xF(x) for each formula F(x) such that Tn proves all of the formulas F(0), F(1), F(2),... - -For each n, let Prn(x) be a natural arithmetization of the predicate - -"x is the Gödel number of a sentence provable in Tn". - -A realization is a function * that sends each nonlogical atom a of - -the language of GLP to a sentence a* of the language of T. It extends to all formulas - -of the language of GLP by stipulating that * commutes with the Boolean connectives, and - -that ([n]F)* is Prn('F*'), where 'F*' - -stands for (the numeral for) the Gödel number of F*. - -An arithmetical completeness theorem for GLP states that a formula F is provable in GLP if and only if, for every realization *, the sentence F* is provable in T. - -The above understanding of the series T0,T1,T2,... of theories is not the only natural understanding yielding the soundness and completeness of GLP. For instance, each theory Tn can be understood as T augmented with all true Πn sentences as additional axioms. George Boolos showed that GLP remains sound and complete with analysis (second-order arithmetic) in the role of the base theory T. - -GLP has been shown to be incomplete with respect to any class of Kripke frames. - -The problem of being a theorem of GLP is PSPACE-complete. So is the same problem restricted to only variable-free formulas of GLP. - -GLP, under the name GP, was introduced by Giorgi Japaridze in his PhD thesis "Modal Logical Means of Investigating Provability" (Moscow State University, 1986) and published two years later along with (a) the completeness theorem for GLP with respect to its provability interpretation (Beklemishev subsequently came up with a simpler proof of the same theorem) and (b) a proof that Kripke frames for GLP do not exist. Beklemishev also conducted a more extensive study of Kripke models for GLP. - -Topological models for GLP were studied by Beklemishev, Bezhanishvili, Icard and Gabelaia. - -The decidability of GLP in polynomial space was proven by I. Shapirovsky, and the PSPACE-hardness of its variable-free fragment was proven by F. Pakhomov. Among the most notable applications of GLP has been its use in proof-theoretically analyzing Peano arithmetic, elaborating a canonical way for recovering ordinal notation up to ɛ0 from the corresponding algebra, and constructing simple combinatorial independent statements (Beklemishev). - -An extensive survey of GLP in the context of provability logics in general was given by George Boolos in his book The Logic of Provability. - -*L. Beklemishev, . Annals of Pure and Applied Logic 128 (2004), pp. 103–123. - -*L. Beklemishev, J. Joosten and M. Vervoort, . Journal of Logic and Computation 15 (2005), No 4, pp. 447–463. - -*L. Beklemishev, . Annals of Pure and Applied Logic 161, 756–774 (2010). - -*L. Beklemishev, G. Bezhanishvili and T. Icard, "On topological models of GLP". Ways of proof theory, Ontos Mathematical Logic, 2, eds. R. Schindler, Ontos Verlag, Frankfurt, 2010, pp. 133–153. - -*L. Beklemishev, "On the Craig interpolation and the fixed point properties of GLP". Proofs, Categories and Computations.
S. Feferman et al., eds., College Publications 2010. pp. 49–60. - -*L. Beklemishev, . Lecture Notes in Computer Science 6618 (2011), pp. 1–15. - -*L. Beklemishev, . Proceedings of the Steklov Institute of Mathematics 274 (2011), pp. 25–33. - -*L. Beklemishev and D. Gabelaia, . Annals of Pure and Applied Logic 164 (2013), pp. 1201–1223. - -*G. Boolos, . Annals of Pure and Applied Logic 61 (1993), pp. 95–111. - -*G. Boolos, The Logic of Provability. Cambridge University Press, 1993. - -*E.V. Dashkov, "On the positive fragment of the polymodal provability logic GLP". Mathematical Notes 2012; 91:318–333. - -*D. Fernandez-Duque and J.Joosten, . Logic Journal of the IGPL 22 (2014), pp. 933–963. - -*G. Japaridze, . Intensional Logics and Logical Structure of Theories. Metsniereba, Tbilisi, 1988, pp. 16–48 (Russian). - -*F. Pakhomov, . Archive for Mathematical Logic 53 (2014), pp. 949–967. - -*D.S. Shamkanov, . Proceedings of the Steklov Institute of Mathematics 274 (2011), pp. 303–316. - -*I. Shapirovsky, . Advances in Modal Logic 7 (2008), pp. 289–304. diff --git a/wiki/wikipedia/656.txt b/wiki/wikipedia/656.txt deleted file mode 100644 index a51c2cbd62f955dfbb4454032d22035f7550919c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/656.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, the finite lattice representation problem, or finite congruence lattice problem, asks whether every finite lattice is isomorphic to the congruence lattice of some finite algebra. - -A lattice is called algebraic if it is complete and compactly generated. In 1963, Grätzer and Schmidt proved that every algebraic lattice is isomorphic to the congruence lattice of some algebra. Thus there is essentially no restriction on the shape of a congruence lattice of an algebra. The finite lattice representation problem asks whether the same is true for finite lattices and finite algebras. That is, does every finite lattice occur as the congruence lattice of a finite algebra? - -In 1980, Pálfy and Pudlák proved that this problem is equivalent to the problem of deciding whether every finite lattice occurs as an interval in the subgroup lattice of a finite group. For an overview of the group theoretic approach to the problem, see Pálfy (1993) and Pálfy (2001). - -This problem should not be confused with the congruence lattice problem. - -This is among the oldest unsolved problems in universal algebra. Until it is answered, the theory of finite algebras is incomplete since, given a finite algebra, it is unknown whether there are, a priori, any restrictions on the shape of its congruence lattice. diff --git a/wiki/wikipedia/657.txt b/wiki/wikipedia/657.txt deleted file mode 100644 index 605f083e0c0812b5af9c39fb269e20a16e37c984..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/657.txt +++ /dev/null @@ -1,5 +0,0 @@ -A kinetic minimum spanning tree is a kinetic data structure that maintains the minimum spanning tree (MST) of a graph whose edge weights are changing as a continuous function of time. - -The most efficient known data structure for the general case uses a kinetic sorted list to store the edge weights, and a standard MST algorithm to compute the MST given the sorted edge weights. This data structure must process $O(n^2)$ events; developing a more efficient data structure remains an open problem. - -Agarwal et al. developed a data structure that maintains the MST for a graph belonging to a minor-closed family.
It uses the idea of a "swap", calculating the amount by which the weight of the MST would increase if some edge e in the tree were replaced by an edge f outside the tree such that the cycle induced by f in the tree contains e. Maintaining the tree is then equivalent to finding and swapping the next pair for which this quantity becomes negative. This data structure considers the dual view of the graph, and then divides based on Frederickson's restricted partitions to make this efficient. It results in a total run time $O(pn^{\frac{1}{2}}\log^{\frac{3}{2}}n)$ if $p$ insertions or deletions are made, or $O(n^{\frac{19}{12}}\log^{\frac{3}{2}}n)$ if only weight changes are allowed. These deterministic bounds are slightly improved if randomization is allowed. diff --git a/wiki/wikipedia/658.txt b/wiki/wikipedia/658.txt deleted file mode 100644 index 44eb35c4d43be00cc740495dd9876bdab2997de5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/658.txt +++ /dev/null @@ -1,72 +0,0 @@ -Bertrand's box paradox is a paradox of elementary probability theory, first posed by Joseph Bertrand in his 1889 work Calcul des probabilités. - -There are three boxes: - -# a box containing two gold coins, - -# a box containing two silver coins, - -# a box containing one gold coin and one silver coin. - -The question is: after choosing a box at random and withdrawing one coin at random, given that it happens to be a gold coin, what is the probability that the next coin drawn from the same box is also a gold coin? It may seem that the probability that the remaining coin is gold is 1/2, but the probability is actually 2/3. - -It is not a "true" paradox, because it has a well-defined single answer. - -This simple but counterintuitive puzzle is used as a standard example in teaching probability theory. The solution illustrates some basic principles, including the Kolmogorov axioms. - -Figure (Bertrand's box paradox): the three equally probable outcomes after the first gold coin draw. The probability of drawing another gold coin from the same box is 0 in (a), and 1 in (b) and (c). Thus, the overall probability of drawing a gold coin in the second draw is 0/3 + 1/3 + 1/3 = 2/3. - -The problem can be reframed by describing the boxes as each having one drawer on each of two sides. Each drawer contains a coin. One box has a gold coin on each side (GG), one a silver coin on each side (SS), and the other a gold coin on one side and a silver coin on the other (GS). A box is chosen at random, a random drawer is opened, and a gold coin is found inside it. What is the chance of the coin on the other side being gold? - -The following faulty reasoning appears to give a probability of 1/2: - -*Originally, all three boxes were equally likely to be chosen. - -*The chosen box cannot be box SS. - -*So it must be box GG or GS. - -*The two remaining possibilities are equally likely. So the probability that the box is GG, and the other coin is also gold, is 1/2. - -The flaw is in the last step. While those two cases were originally equally likely, the fact that you are certain to find a gold coin if you had chosen the GG box, but are only 50% sure of finding a gold coin if you had chosen the GS box, means they are no longer equally likely given that you have found a gold coin. Specifically: - -*The probability that GG would produce a gold coin is 1. - -*The probability that SS would produce a gold coin is 0. - -*The probability that GS would produce a gold coin is 1/2.
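Before carrying out the exact calculation, these conditional probabilities can be confirmed by simulation. The sketch below (ours) repeatedly draws a box and a coin, and estimates the chance that the companion coin is gold given that a gold coin was seen:

```
import random

boxes = [("G", "G"), ("S", "S"), ("G", "S")]

trials, gold_seen, other_gold = 1_000_000, 0, 0
for _ in range(trials):
    box = random.choice(boxes)
    i = random.randrange(2)             # which of the two coins we draw first
    if box[i] == "G":
        gold_seen += 1
        other_gold += box[1 - i] == "G"

print(other_gold / gold_seen)           # ~ 0.667, not 0.5
```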
Initially GG, SS and GS are equally likely $\left(\mathrm{i.e., P(GG) = P(SS) = P(GS)} = \frac13\right)$. Therefore, by Bayes' rule the conditional probability that the chosen box is GG, given we have observed a gold coin, is: -$$ -\mathrm{ P(GG \mid see\ gold) = \frac { P(see\ gold \mid GG)\times\frac13} { P(see\ gold \mid GG)\times\frac13+P(see\ gold \mid SS)\times\frac13+P(see\ gold \mid GS)\times\frac13 }} = \frac{1}{1+0+\frac12} = \frac{2}{3} -$$ - -The correct answer of 2/3 can also be obtained as follows: - -*Originally, all six coins were equally likely to be chosen. - -*The chosen coin cannot be from drawer S of box GS, or from either drawer of box SS. - -*So it must come from the G drawer of box GS, or either drawer of box GG. - -*The three remaining possibilities are equally likely, so the probability that the drawer is from box GG is 2/3. - -Alternatively, one can simply note that the chosen box has two coins of the same type 2/3 of the time. So, regardless of what kind of coin is in the chosen drawer, the box has two coins of that type 2/3 of the time. In other words, the problem is equivalent to asking the question "What is the probability that I will pick a box with two coins of the same color?". - -Bertrand's point in constructing this example was to show that merely counting cases is not always proper. Instead, one should sum the probabilities that the cases would produce the observed result; and the two methods are equivalent only if this probability is either 1 or 0 in every case. This condition is correctly applied in the second solution method, but not in the first. - -It can be easier to understand the correct answer if you consider the paradox as Bertrand originally described it. After a box has been chosen, but before a box is opened to let you observe a coin, the probability is 2/3 that the box has two of the same kind of coin. If the probability of "observing a gold coin" in combination with "the box has two of the same kind of coin" is 1/2, then the probability of "observing a silver coin" in combination with "the box has two of the same kind of coin" must also be 1/2. And if the probability that the box has two like coins changes to 1/2 no matter what kind of coin is shown, the probability would have to be 1/2 even if you hadn't observed a coin this way. Since we know this probability is 2/3, not 1/2, we have an apparent paradox. It can be resolved only by recognizing how the combination of "observing a gold coin" with each possible box can only affect the probability that the box was GS or SS, but not GG. - -In a survey of 53 Psychology freshmen taking an introductory probability course, 35 incorrectly responded 1/2; only 3 students correctly responded 2/3. - -* Boy or Girl paradox - -* Monty Hall problem - -* Three Prisoners problem - -* Two envelopes problem - -* Sleeping Beauty problem diff --git a/wiki/wikipedia/659.txt b/wiki/wikipedia/659.txt deleted file mode 100644 index 7d6474e4579fa5e9945acc5027888b1f43d981f1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/659.txt +++ /dev/null @@ -1,286 +0,0 @@ -The Tonelli–Shanks algorithm (referred to by Shanks as the RESSOL algorithm) is used in modular arithmetic to solve for r in a congruence of the form $r^2 \equiv n \pmod p$, where p is a prime: that is, to find a square root of n modulo p. - -Tonelli–Shanks cannot be used for composite moduli: finding square roots modulo composite numbers is a computational problem equivalent to integer factorization.
- -An equivalent, but slightly more redundant version of this algorithm was developed by - -Alberto Tonelli - -in 1891. The version discussed here was developed independently by Daniel Shanks in 1973, who explained: - -
    My tardiness in learning of these historical references was because I had lent Volume 1 of Dickson's History to a friend and it was never returned. - -
    - -According to Dickson, Tonelli's algorithm can take square roots of x modulo prime powers pλ, not just modulo primes. - -Given a non-zero $n$ and an odd prime $p$, Euler's criterion tells us that $n$ has a square root (i.e., $n$ is a quadratic residue) if and only if: -$$ -n^{\frac{p-1}{2}} \equiv 1 \pmod p -$$. - -In contrast, if a number $z$ has no square root (is a non-residue), Euler's criterion tells us that: -$$ -z^{\frac{p-1}{2}} \equiv -1 \pmod p -$$. - -It is not hard to find such $z$, because half of the integers between 1 and $p-1$ have this property. So we assume that we have access to such a non-residue. - -By repeatedly dividing by 2, we can write $p-1$ as $Q 2^S$, where $Q$ is odd. Note that if we try -$$ -R \equiv n^{\frac{Q+1}{2}} \pmod p -$$, - -then $R^2 \equiv n^{Q+1} = (n)(n^Q) \pmod p$. If $t \equiv n^Q \equiv 1 \pmod p$, then $R$ is a square root of $n$. Otherwise, for $M = S$, we have $R$ and $t$ satisfying: - -* $R^2 \equiv nt \pmod p$; and - -* $t$ is a $2^{M-1}$-th root of 1 (because $t^{2^{M-1}} = t^{2^{S-1}} \equiv n^{Q 2^{S-1}} = n^{\frac{p-1}{2}} \equiv 1 \pmod p$). - -If, given a choice of $R$ and $t$ for a particular $M$ satisfying the above (where $R$ is not a square root of $n$), we can easily calculate another $R$ and $t$ for $M - 1$ such that the above relations hold, then we can repeat this until $t$ becomes a $2^0$-th root of 1, i.e., $t = 1$. At that point $R$ is a square root of $n$. - -We can check whether $t$ is a $2^{M-2}$-th root of 1 by squaring it $M-2$ times and checking whether the result is 1. If it is, then we do not need to do anything, as the same choice of $R$ and $t$ works. But if it is not, $t^{2^{M-2}}$ must be -1 (because squaring it gives 1, and there can only be two square roots 1 and -1 of 1 modulo $p$). - -To find a new pair of $R$ and $t$, we can multiply $R$ by a factor $b$, to be determined. Then $t$ must be multiplied by a factor $b^2$ to keep $R^2 \equiv nt \pmod p$. So we need to find a factor $b^2$ so that $tb^2$ is a $2^{M-2}$-th root of 1, or equivalently $b^2$ is a $2^{M-2}$-th root of -1. - -The trick here is to make use of $z$, the known non-residue. Euler's criterion applied to $z$, shown above, says that $z^Q$ is a $2^{S-1}$-th root of -1. So by squaring $z^Q$ repeatedly, we have access to a sequence of $2^i$-th roots of -1. We can select the right one to serve as $b$. With a little bit of variable maintenance and trivial case compression, the algorithm below emerges naturally. - -Operations and comparisons on elements of the multiplicative group of integers modulo p $\mathbb{Z}/p\mathbb{Z}$ are implicitly mod p. - -Inputs: - -* p, a prime - -* n, an element of $\mathbb{Z}/p\mathbb{Z}$ such that solutions to the congruence $r^2 = n$ exist; when this is so we say that n is a quadratic residue mod p.
- -Outputs: - -* r in $\mathbb{Z}/p\mathbb{Z}$ such that r2 = n - -Algorithm: - -# By factoring out powers of 2, find Q and S such that $p-1=Q 2^S$ with Q odd - -# Search for a z in $\mathbb{Z}/p\mathbb{Z}$ which is a quadratic non-residue - -#* Half of the elements in the set will be quadratic non-residues - -#* Candidates can be tested with Euler's criterion or by finding the Jacobi symbol - -# Let - -#:\begin{align} - -M &\leftarrow S \\ - -c &\leftarrow z^Q \\ - -t &\leftarrow n^Q \\ - -R &\leftarrow n^\frac{Q+1}{2} - -\end{align} - -# Loop: - -#* If t = 0, return r = 0 - -#* If t = 1, return r = R - -#* Otherwise, use repeated squaring to find the least i, 0 < i < M, such that $t^{2^i} = 1$ - -#* Let $b \leftarrow c^{2^{M-i-1}}$, and set - -#*:\begin{align} - -M &\leftarrow i \\ - -c &\leftarrow b^2 \\ - -t &\leftarrow tb^2 \\ - -R &\leftarrow Rb - -\end{align} - -Once you have solved the congruence with r the second solution is $ -r \pmod p$. If the least i such that $t^{2^i} = 1$ is M, then no solution to the congruence exists, ie n is not a quadratic residue. - -This is most useful when p ≡ 1 (mod 4). - -For primes such that p ≡ 3 (mod 4), this problem has possible solutions $r = \pm n^{\frac{p+1}{4}}\pmod p$. If these satisfy $r^2 \equiv n \pmod p$, they are the only solutions. If not, $r^2 \equiv -n \pmod p$, n is a quadratic non-residue, and there are no solutions. - -We can show that at the start of each iteration of the loop the following loop invariants hold: - -* $c^{2^{M-1}} = -1$ - -* $t^{2^{M-1}} = 1$ - -* $R^2 = tn$ - -Initially: - -* $c^{2^{M-1}} = z^{Q2^{S-1}} = z^\frac{p-1}{2} = -1$ (since z is a quadratic nonresidue, per Euler's criterion) - -* $t^{2^{M-1}} = n^{Q2^{S-1}} = n^\frac{p-1}{2} = 1$ (since n is a quadratic residue) - -* $R^2 = n^{Q+1} = tn$ - -At each iteration, with M' , c' , t' , R' the new values replacing M, c, t, R: - -* $c'^{2^{M'-1}} = (b^2)^{2^{i-1}} = c^{2^{M-i}2^{i-1}} = c^{2^{M-1}} = -1$ - -* $t'^{2^{M'-1}} = (tb^2)^{2^{i-1}} = t^{2^{i-1}}b^{2^i} = -1 \cdot -1 = 1$ - -** $t^{2^{i-1}} = -1$ since we have that $t^{2^i} = 1$ but $t^{2^{i-1}} \neq 1$ (i is the least value such that $t^{2^i} = 1$) - -** $b^{2^i} = c^{2^{M-i-1}2^i}= c^{2^{M-1}} = -1$ - -* $R'^2 = R^2b^2 = tnb^2 = t'n$ - -From $t^{2^{M-1}} = 1$ and the test against t = 1 at the start of the loop, we see that we will always find an i in 0 < i < M such that $t^{2^i} = 1$. M is strictly smaller on each iteration, and thus the algorithm is guaranteed to halt. When we hit the condition t = 1 and halt, the last loop invariant implies that R2 = n. - -We can alternately express the loop invariants using the order of the elements: - -* $\operatorname{ord}(c) = 2^M$ - -* $\operatorname{ord}(t) | 2^{M-1}$ - -* $R^2 = tn$ as before - -Each step of the algorithm moves t into a smaller subgroup by measuring the exact order of t and multiplying it by an element of the same order. - -Solving the congruence r2 ≡ 5 (mod 41). 41 is prime as required and 41 ≡ 1 (mod 4). 5 is a quadratic residue by Euler's criterion: $5^{\frac{41-1}{2}} = 5^{20} = 1$ (as before, operations in $(\mathbb{Z}/41\mathbb{Z})^\times$ are implicitly mod 41). - -# $p-1 = 40 = 5 \cdot 2^3 $ so $Q \leftarrow 5$, $S \leftarrow 3$ - -# Find a value for z: - -#* $2^{\frac{41-1}{2}} = 1$, so 2 is a quadratic residue by Euler's criterion. 
- -#* $3^{\frac{41-1}{2}} = 40 = -1$, so 3 is a quadratic nonresidue: set $z \leftarrow 3$ - -# Set - -#*$M \leftarrow S = 3$ - -#*$c \leftarrow z^Q = 3^5 = 38$ - -#*$t \leftarrow n^Q = 5^5 = 9$ - -#*$R \leftarrow n^{\frac{Q+1}{2}} = 5^{\frac{5+1}{2}} = 2$ - -# Loop: - -#* First iteration: - -#** $t \neq 1$, so we're not finished - -#** $t^{2^1} = 40$, $t^{2^2} = 1$ so $i \leftarrow 2$ - -#** $b \leftarrow c^{2^{M-i-1}} = 38^{2^{3-2-1}} = 38$ - -#** $M \leftarrow i = 2$ - -#** $c \leftarrow b^2 = 38^2 = 9$ - -#** $t \leftarrow tb^2 = 9 \cdot 9 = 40$ - -#** $R \leftarrow Rb = 2 \cdot 38 = 35$ - -#* Second iteration: - -#** $t \neq 1$, so we're still not finished - -#** $t^{2^1} = 1$ so $i \leftarrow 1$ - -#** $b \leftarrow c^{2^{M-i-1}} = 9^{2^{2-1-1}} = 9$ - -#** $M \leftarrow i = 1$ - -#** $c \leftarrow b^2 = 9^2 = 40$ - -#** $t \leftarrow tb^2 = 40 \cdot 40 = 1$ - -#** $R \leftarrow Rb = 35 \cdot 9 = 28$ - -#* Third iteration: - -#** $t = 1$, and we are finished; return $r = R = 28$ - -Indeed, 282 ≡ 5 (mod 41) and (−28)2 ≡ 132 ≡ 5 (mod 41). So the algorithm yields the two solutions to our congruence. - -The Tonelli–Shanks algorithm requires (on average over all possible input (quadratic residues and quadratic nonresidues)) -$$ -2m+2k+\frac{S(S-1)}{4} +\frac{1}{2^{S-1}} - 9 -$$ - -modular multiplications, where $m$ is the number of digits in the binary representation of $p$ and $k$ is the number of ones in the binary representation of $p$. If the required quadratic nonresidue $z$ is to be found by checking if a randomly taken number $y$ is a quadratic nonresidue, it requires (on average) $2$ computations of the Legendre symbol. The average of two computations of the Legendre symbol are explained as follows: $y$ is a quadratic residue with chance $\tfrac{\tfrac{p+1}{2}}{p} = \tfrac{1 + \tfrac{1}{p}}{2}$, which is smaller than $1$ but $\geq \tfrac{1}{2}$, so we will on average need to check if a $y$ is a quadratic residue two times. - -This shows essentially that the Tonelli–Shanks algorithm works very well if the modulus $p$ is random, that is, if $S$ is not particularly large with respect to the number of digits in the binary representation of $p$. As written above, Cipolla's algorithm works better than Tonelli–Shanks if (and only if) $S(S-1) > 8m + 20$. - -However, if one instead uses Sutherland's algorithm to perform the discrete logarithm computation in the 2-Sylow subgroup of $\mathbb{F}_p$, one may replace $S(S-1)$ with an expression that is asymptotically bounded by $O(S\log S/\log\log S)$. Explicitly, one computes $e$ such that $c^e\equiv n^Q$ and then $R\equiv c^{-e/2} n^{(Q+1)/2}$ satisfies $R^2\equiv n$ (note that $e$ is a multiple of 2 because $n$ is a quadratic residue). - -The algorithm requires us to find a quadratic nonresidue $z$. There is no known deterministic algorithm that runs in polynomial time for finding such a $z$. However, if the generalized Riemann hypothesis is true, there exists a quadratic nonresidue $z < 2\ln^2{p}$, making it possible to check every $z$ up to that limit and find a suitable $z$ within polynomial time. Keep in mind, however, that this is a worst-case scenario; in general, $z$ is found in on average 2 trials as stated above. - -The Tonelli–Shanks algorithm can (naturally) be used for any process in which square roots modulo a prime are necessary. For example, it can be used for finding points on elliptic curves. It is also useful for the computations in the Rabin cryptosystem and in the sieving step of the quadratic sieve. 
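The procedure above translates almost line-for-line into code. The following Python sketch mirrors the pseudocode's variables M, c, t, R; it is an illustration of the algorithm as described, not hardened library code, and it assumes p is an odd prime.

```python
def tonelli_shanks(n, p):
    """Find r with r*r % p == n % p for an odd prime p.

    Returns None if n is a quadratic non-residue mod p.
    Variable names mirror the M, c, t, R of the pseudocode above.
    """
    n %= p
    if n == 0:
        return 0
    # Euler's criterion: n is a residue iff n^((p-1)/2) == 1 (mod p).
    if pow(n, (p - 1) // 2, p) != 1:
        return None
    # Write p - 1 = Q * 2^S with Q odd.
    Q, S = p - 1, 0
    while Q % 2 == 0:
        Q //= 2
        S += 1
    # Find a quadratic non-residue z (about half of all candidates qualify).
    z = 2
    while pow(z, (p - 1) // 2, p) != p - 1:
        z += 1
    M, c, t, R = S, pow(z, Q, p), pow(n, Q, p), pow(n, (Q + 1) // 2, p)
    while t != 1:
        # Least i with t^(2^i) == 1, found by repeated squaring.
        i, t2 = 0, t
        while t2 != 1:
            t2 = t2 * t2 % p
            i += 1
        b = pow(c, 1 << (M - i - 1), p)
        M, c, t, R = i, b * b % p, t * b * b % p, R * b % p
    return R
```

Calling `tonelli_shanks(5, 41)` reproduces the worked example above, returning 28 (with 41 − 28 = 13 as the second root).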
Tonelli–Shanks can be generalized to any cyclic group (instead of $(\mathbb{Z}/p\mathbb{Z})^\times$) and to kth roots for an arbitrary integer k, in particular to taking the kth root of an element of a finite field.

If many square roots must be taken in the same cyclic group and S is not too large, a table of square roots of the elements of 2-power order can be prepared in advance, and the algorithm simplified and sped up as follows.

# Factor out powers of 2 from p − 1, defining Q and S as: $p-1 = Q2^S$ with Q odd.

# Let $R \leftarrow n^{\frac{Q+1}{2}}$, $t \leftarrow n^Q \equiv R^2/n$.

# Find $b$ from the table such that $b^2 \equiv t$ and set $R \leftarrow R/b$.

# Return R.

According to Dickson's "Theory of Numbers":
    A. Tonelli gave an explicit formula for the roots of $x^{2} \equiv c \pmod{p^{\lambda}}$.
The Dickson reference gives the following formula for a square root modulo $p^{\lambda}$ in the case $s=2$ (the formula requires $s=2$), i.e. for primes of the form $p = 2^{2}A+1$ with $A$ odd; the running example is $p = 29 = 2^{2}\cdot 7+1$, so $A = 7$.

If $x^{2} \equiv c \pmod{p^{\lambda}}$, then
$$
x \equiv \pm (c^{A}+3)^{\beta}\, c^{(\beta+1)/2} \pmod{p^{\lambda}},
$$
where $\beta = A\, p^{\lambda-1}$ (consistent with the example below, where $\beta = 7\cdot 29^{2}$).

Noting that $23^{2} \equiv 529 \pmod{29^{3}}$ and that $\beta = 7\cdot 29^{2}$, we indeed get
$$
(529^{7} + 3)^{7\cdot 29^{2}}\, 529^{(7\cdot 29^{2} + 1)/2} \equiv 24366 \equiv -23 \pmod{29^{3}}.
$$

To take another example: $2333^{2} \equiv 4142 \pmod{29^{3}}$ and
$$
(4142^{7} + 3)^{7\cdot 29^{2}}\, 4142^{(7\cdot 29^{2} + 1)/2} \equiv 2333 \pmod{29^{3}}.
$$

Dickson also attributes the following equation to Tonelli:
$$
X \equiv x^{p^{\lambda-1}}\, c^{(p^{\lambda}-2p^{\lambda-1}+1)/2} \pmod{p^{\lambda}},
$$
where $X^{2} \equiv c \pmod{p^{\lambda}}$ and $x^{2} \equiv c \pmod{p}$.

Using $p=23$ and the modulus $p^{3}$, the computation runs as follows:
$$
1115^{2} \equiv 2191 \pmod{23^{3}}.
$$

First, find the modular square root mod $p$, which can be done by the regular Tonelli–Shanks algorithm:
$$
1115^{2} \equiv 6 \pmod{23}, \quad\text{and thus}\quad \sqrt{6} \equiv 11 \pmod{23}.
$$

Applying Tonelli's equation (see above):
$$
11^{23^{2}}\, 2191^{(23^{3} - 2\cdot 23^{2} + 1)/2} \equiv 1115 \pmod{23^{3}}.
$$

Dickson's reference clearly shows that Tonelli's algorithm works on moduli of $p^{\lambda}$. diff --git a/wiki/wikipedia/66.txt b/wiki/wikipedia/66.txt deleted file mode 100644 index de020af33d660160b998108e3eb37e08f7118cd3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/66.txt +++ /dev/null @@ -1,7 +0,0 @@

In temporal databases, transaction time (TT) is the time during which a fact stored in the database is considered to be true. As of December 2011, ISO/IEC 9075, Database Language SQL:2011 Part 2: SQL/Foundation included clauses in table definitions to define "system-versioned tables" (that is, transaction-time tables).

In a database table, the transaction-time interval is often represented by two columns, StartTT and EndTT, allowing the system to "remove" entries logically rather than physically. The time interval is closed at its lower bound and open at its upper bound.

When the ending transaction time is unknown, it may be considered as "until changed". Academic researchers and some RDBMSs have represented "until changed" with the largest timestamp supported or with the keyword "forever". This convention is not technically precise.

The term was coined by Richard T. Snodgrass and his doctoral student Ilsoo Ahn. diff --git a/wiki/wikipedia/660.txt b/wiki/wikipedia/660.txt deleted file mode 100644 index 8e2fa32130e79384292f1965a37b722b9b3f86ab..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/660.txt +++ /dev/null @@ -1,490 +0,0 @@

In estimation theory and statistics, the Cramér–Rao bound (CRB) expresses a lower bound on the variance of unbiased estimators of a deterministic (fixed, though unknown) parameter, stating that the variance of any such estimator is at least as high as the inverse of the Fisher information. The result is named in honor of Harald Cramér and C. R. Rao, but has also been derived independently by Maurice Fréchet, Georges Darmois, as well as Alexander Aitken and Harold Silverstone.

An unbiased estimator which achieves this lower bound is said to be (fully) efficient. Such a solution achieves the lowest possible mean squared error among all unbiased methods, and is therefore the minimum variance unbiased (MVU) estimator.
However, in some cases, no unbiased technique exists which achieves the bound. This may occur either if for any unbiased estimator, there exists another with a strictly smaller variance, or if an MVU estimator exists, but its variance is strictly greater than the inverse of the Fisher information. - -The Cramér–Rao bound can also be used to bound the variance of biased estimators of given bias. In some cases, a biased approach can result in both a variance and a mean squared error that are below the unbiased Cramér–Rao lower bound; see estimator bias. - -The Cramér–Rao bound is stated in this section for several increasingly general cases, beginning with the case in which the parameter is a scalar and its estimator is unbiased. All versions of the bound require certain regularity conditions, which hold for most well-behaved distributions. These conditions are listed later in this section. - -Suppose $\theta$ is an unknown deterministic parameter which is to be estimated from $n$ independent observations (measurements) of $x$, each from a distribution according to some probability density function $f(x;\theta)$. The variance of any unbiased estimator $\hat{\theta}$ of $\theta$ is then bounded by the reciprocal of the Fisher information $I(\theta)$: - -\operatorname{var}(\hat{\theta}) - -\geq - -\frac{1}{I(\theta)} - - - -where the Fisher information $I(\theta)$ is defined by - - - -I(\theta) = n \operatorname{E}_\theta - -\left[ - -\left( - -\frac{\partial \ell(X;\theta)}{\partial\theta} - -\right)^2 - -\right] - - - -and $\ell(x;\theta)=\log (f(x;\theta))$ is the natural logarithm of the likelihood function for a single sample $x$ and $\operatorname{E}_\theta$ denotes the expected value with respect to the density $f(x;\theta)$ of $X$. If $\ell(x;\theta)$ is twice differentiable and certain regularity conditions hold then the Fisher information can also be defined as follows: - - - -I(\theta) = -n \operatorname{E}_\theta\left[ \frac{\partial^2 \ell(X;\theta)}{\partial\theta^2} \right] - - - -The efficiency of an unbiased estimator $\hat{\theta}$ measures how close this estimator's variance comes to this lower bound; estimator efficiency is defined as -$$ -e(\hat{\theta}) = \frac{I(\theta)^{-1}}{\operatorname{var}(\hat{\theta})} -$$ - -or the minimum possible variance for an unbiased estimator divided by its actual variance. - -The Cramér–Rao lower bound thus gives -$$ -e(\hat{\theta}) \le 1 -$$. - -A more general form of the bound can be obtained by considering a biased estimator $T(X)$, whose expectation is not $\theta$ but a function of this parameter, say, $\psi(\theta)$. Hence $ E\{T(X)\} - \theta = \psi(\theta) - \theta $ is not generally equal to 0. In this case, the bound is given by - - - -\operatorname{var}(T) - -\geq - -\frac{[\psi'(\theta)]^2}{I(\theta)} - - - -where $\psi'(\theta)$ is the derivative of $\psi(\theta)$ (by $\theta$), and $I(\theta)$ is the Fisher information defined above. - -Apart from being a bound on estimators of functions of the parameter, this approach can be used to derive a bound on the variance of biased estimators with a given bias, as follows. Consider an estimator $\hat{\theta}$ with bias $b(\theta) = E\{\hat{\theta}\} - \theta$, and let $\psi(\theta) = b(\theta) + \theta$. By the result above, any unbiased estimator whose expectation is $\psi(\theta)$ has variance greater than or equal to $(\psi'(\theta))^2/I(\theta)$. 
Thus, any estimator $\hat{\theta}$ whose bias is given by a function $b(\theta)$ satisfies - - - -\operatorname{var} \left(\hat{\theta}\right) - -\geq - -\frac{[1+b'(\theta)]^2}{I(\theta)}. - - - -The unbiased version of the bound is a special case of this result, with $b(\theta)=0$. - -It's trivial to have a small variance − an "estimator" that is constant has a variance of zero. But from the above equation we find that the mean squared error of a biased estimator is bounded by -$$ -\operatorname{E}\left((\hat{\theta}-\theta)^2\right)\geq\frac{[1+b'(\theta)]^2}{I(\theta)}+b(\theta)^2, -$$ - -using the standard decomposition of the MSE. Note, however, that if $1+b'(\theta)<1$ this bound might be less than the unbiased Cramér–Rao bound $1/I(\theta)$. For instance, in the example of estimating variance below, $1+b'(\theta)= \frac{n}{n+2} <1$. - -Extending the Cramér–Rao bound to multiple parameters, define a parameter column vector -$$ -\boldsymbol{\theta} = \left[ \theta_1, \theta_2, \dots, \theta_d \right]^T \in \mathbb{R}^d -$$ - -with probability density function $f(x; \boldsymbol{\theta})$ which satisfies the two regularity conditions below. - -The Fisher information matrix is a $d \times d$ matrix with element $I_{m, k}$ defined as - - - -I_{m, k} - -= \operatorname{E} \left[ - -\frac{\partial }{\partial \theta_m} \log f\left(x; \boldsymbol{\theta}\right) - -\frac{\partial }{\partial \theta_k} \log f\left(x; \boldsymbol{\theta}\right) - -\right] = -\operatorname{E} \left[ - -\frac{\partial ^2}{\partial \theta_m \partial \theta_k} \log f\left(x; \boldsymbol{\theta}\right) - -\right]. - - - -Let $\boldsymbol{T}(X)$ be an estimator of any vector function of parameters, $\boldsymbol{T}(X) = (T_1(X), \ldots, T_d(X))^T$, and denote its expectation vector $\operatorname{E}[\boldsymbol{T}(X)]$ by $\boldsymbol{\psi}(\boldsymbol{\theta})$. The Cramér–Rao bound then states that the covariance matrix of $\boldsymbol{T}(X)$ satisfies - - - -\operatorname{cov}_{\boldsymbol{\theta}}\left(\boldsymbol{T}(X)\right) - -\geq - -\frac - -{\partial \boldsymbol{\psi} \left(\boldsymbol{\theta}\right)} - -{\partial \boldsymbol{\theta}} - -[I\left(\boldsymbol{\theta}\right)]^{-1} - -\left( - -\frac - -{\partial \boldsymbol{\psi}\left(\boldsymbol{\theta}\right)} - -{\partial \boldsymbol{\theta}} - -\right)^T - - - -where - -* The matrix inequality $A \ge B$ is understood to mean that the matrix $A-B$ is positive semidefinite, and - -* $\partial \boldsymbol{\psi}(\boldsymbol{\theta})/\partial \boldsymbol{\theta}$ is the Jacobian matrix whose $ij$ element is given by $\partial \psi_i(\boldsymbol{\theta})/\partial \theta_j$. - -If $\boldsymbol{T}(X)$ is an unbiased estimator of $\boldsymbol{\theta}$ (i.e., $\boldsymbol{\psi}\left(\boldsymbol{\theta}\right) = \boldsymbol{\theta}$), then the Cramér–Rao bound reduces to - - - -\operatorname{cov}_{\boldsymbol{\theta}}\left(\boldsymbol{T}(X)\right) - -\geq - -I\left(\boldsymbol{\theta}\right)^{-1}. - - - -If it is inconvenient to compute the inverse of the Fisher information matrix, - -then one can simply take the reciprocal of the corresponding diagonal element - -to find a (possibly loose) lower bound. - - - -\operatorname{var}_{\boldsymbol{\theta}}(T_m(X)) - -= - -\left[\operatorname{cov}_{\boldsymbol{\theta}}\left(\boldsymbol{T}(X)\right)\right]_{mm} - -\geq - -\left[I\left(\boldsymbol{\theta}\right)^{-1}\right]_{mm} - -\geq - -\left(\left[I\left(\boldsymbol{\theta}\right)\right]_{mm}\right)^{-1}. 
- - - -The bound relies on two weak regularity conditions on the probability density function, $f(x; \theta)$, and the estimator $T(X)$: - -* The Fisher information is always defined; equivalently, for all $x$ such that $f(x; \theta) > 0$, -$$ - \frac{\partial}{\partial\theta} \log f(x;\theta) -$$ - -exists, and is finite. - -* The operations of integration with respect to $x$ and differentiation with respect to $\theta$ can be interchanged in the expectation of $T$; that is, - - - -\frac{\partial}{\partial\theta} - -\left[ - -\int T(x) f(x;\theta) dx - -\right] - -= - -\int T(x) - -\left[ - -\frac{\partial}{\partial\theta} f(x;\theta) - -\right] - -dx - - - -whenever the right-hand side is finite. - -This condition can often be confirmed by using the fact that integration and differentiation can be swapped when either of the following cases hold: - -# The function $f(x;\theta)$ has bounded support in $x$, and the bounds do not depend on $\theta$; - -# The function $f(x;\theta)$ has infinite support, is continuously differentiable, and the integral converges uniformly for all $\theta$. - -The following is a proof of the general scalar case of the Cramér–Rao bound described above. Assume that $T=t(X)$ is an estimator with expectation $\psi(\theta)$ (based on the observations $X$), i.e. that $\operatorname{E}(T) = \psi (\theta)$. The goal is to prove that, for all $\theta$, -$$ -\operatorname{var}(t(X)) \geq \frac{[\psi^\prime(\theta)]^2}{I(\theta)}. -$$ - -Let $X$ be a random variable with probability density function $f(x; \theta)$. - -Here $T = t(X)$ is a statistic, which is used as an estimator for $\psi (\theta)$. Define $V$ as the score: -$$ -V = \frac{\partial}{\partial\theta} \ln f(X;\theta) = \frac{1}{f(X;\theta)}\frac{\partial}{\partial\theta}f(X;\theta) -$$ - -where the chain rule is used in the final equality above. Then the expectation of $V$, written $\operatorname{E}(V)$, is zero. This is because: - - - -\operatorname{E}(V) = \int f(x;\theta)\left[\frac{1}{f(x;\theta)}\frac{\partial }{\partial \theta} f(x;\theta)\right] dx = \frac{\partial}{\partial\theta}\int f(x;\theta) dx = 0 - - - -where the integral and partial derivative have been interchanged (justified by the second regularity condition). - -If we consider the covariance $\operatorname{cov}(V, T)$ of $V$ and $T$, we have $\operatorname{cov}(V, T) = \operatorname{E}(V T)$, because $\operatorname{E}(V) = 0$. Expanding this expression we have - - - -\begin{align} - -\operatorname{cov}(V,T) - -& = \operatorname{E} - -\left( - -T \cdot\left[\frac{1}{f(X;\theta)}\frac{\partial}{\partial\theta}f(X;\theta) \right] - -\right) \\[6pt] - -& = \int t(x) \left[\frac{1}{f(x;\theta)} \frac{\partial}{\partial\theta} f(x;\theta) \right] f(x;\theta) dx \\[6pt] - -& = \frac{\partial}{\partial\theta} - -\left[ \int t(x) f(x;\theta)dx \right] - -= \frac{\partial}{\partial\theta} E(T) = \psi^\prime(\theta) - -\end{align} - - - -again because the integration and differentiation operations commute (second condition). - -The Cauchy–Schwarz inequality shows that - - - -\sqrt{ \operatorname{var} (T) \operatorname{var} (V)} \geq \left| \operatorname{cov}(V,T) \right| = \left | \psi^\prime (\theta) - -\right | - -therefore - - - -\operatorname{var} (T) \geq \frac{[\psi^\prime(\theta)]^2}{\operatorname{var} (V)} - -= \frac{[\psi^\prime(\theta)]^2}{I(\theta)} - - - -which proves the proposition. 
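As a concrete numerical illustration of the bound, here is a minimal Monte Carlo sketch (assuming numpy is available): estimating a normal mean with known variance, where the scalar bound specializes to $\sigma^2/N$, a case worked out analytically below. The estimator used is the sample mean.

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma2, theta = 50, 4.0, 1.7     # sample size, known variance, true mean
trials = 100_000

# Each row is one experiment; the sample mean is an unbiased estimator of theta.
samples = rng.normal(theta, np.sqrt(sigma2), size=(trials, N))
estimates = samples.mean(axis=1)

print("empirical variance of estimator:", estimates.var())
print("Cramer-Rao bound sigma^2 / N:   ", sigma2 / N)
# The two numbers agree closely: the sample mean attains the bound,
# i.e. it is an efficient estimator of the mean.
```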
- -For the case of a d-variate normal distribution - - - -\boldsymbol{x} - -\sim - -N_d - -\left( - -\boldsymbol{\mu}( \boldsymbol{\theta}) - -, - -{\boldsymbol C} ( \boldsymbol{\theta}) - -\right) - - - -the Fisher information matrix has elements - - - -I_{m, k} - -= \frac{\partial \boldsymbol{\mu}^T}{\partial \theta_m} - -{\boldsymbol C}^{-1} - -\frac{\partial \boldsymbol{\mu}}{\partial \theta_k} - -+ \frac{1}{2} - -\operatorname{tr} - -\left( - -{\boldsymbol C}^{-1} - -\frac{\partial {\boldsymbol C}}{\partial \theta_m} - -{\boldsymbol C}^{-1} - -\frac{\partial {\boldsymbol C}}{\partial \theta_k} - -\right) - - - -where "tr" is the trace. - -For example, let $w[n]$ be a sample of $N$ independent observations with unknown mean $\theta$ and known variance $\sigma^2$ . -$$ -w[n] \sim \mathbb{N}_N \left(\theta {\boldsymbol 1}, \sigma^2 {\boldsymbol I} \right). -$$ - -Then the Fisher information is a scalar given by - - - -I(\theta) - -= - -\left(\frac{\partial\boldsymbol{\mu}(\theta)}{\partial\theta}\right)^T{\boldsymbol C}^{-1} \left(\frac{\partial\boldsymbol{\mu}(\theta)}{\partial\theta}\right) - -= \sum^N_{i=1}\frac{1}{\sigma^2} = \frac{N}{\sigma^2}, - - - -and so the Cramér–Rao bound is - - - -\operatorname{var}\left(\hat \theta\right) - -\geq - -\frac{\sigma^2}{N}. - - - -Suppose X is a normally distributed random variable with known mean $\mu$ and unknown variance $\sigma^2$. Consider the following statistic: - - - -T=\frac{\sum_{i=1}^n (X_i-\mu)^2}{n}. - - - -Then T is unbiased for $\sigma^2$, as $E(T)=\sigma^2$. What is the variance of T? - - - -\operatorname{var}(T) = \operatorname{var}\left(\frac{(X-\mu)^2}{n}\right)=\frac{\operatorname{var}(X-\mu)^2}{n^2}=\frac{1}{n^2} - -\left[ - -\operatorname{E}\left\{(X-\mu)^4\right\}-\left(\operatorname{E}\{(X-\mu)^2\}\right)^2 - -\right] - - - -(the second equality follows directly from the definition of variance). The first term is the fourth moment about the mean and has value $3(\sigma^2)^2$; the second is the square of the variance, or $(\sigma^2)^2$. - -Thus -$$ -\operatorname{var}(T)=\frac{2(\sigma^2)^2}{n}. -$$ - -Now, what is the Fisher information in the sample? Recall that the score $V$ is defined as - - - -V=\frac{\partial}{\partial\sigma^2}\log L(\sigma^2,X) - - - -where $L$ is the likelihood function. Thus in this case, - - - -L=\log\left[\frac{1}{\sqrt{2\pi\sigma^2}}e^{-(X-\mu)^2 /{2\sigma^2}}\right] =-\log(\sqrt{2\pi\sigma^2})-\frac{(X-\mu)^2}{2\sigma^2} - - - - - -V=\frac{\partial}{\partial\sigma^2}\log L(\sigma^2,X)=\frac{\partial}{\partial\sigma^2}\left[-\log(\sqrt{2\pi\sigma^2})-\frac{(X-\mu)^2}{2\sigma^2}\right] =-\frac{1}{2\sigma^2}+\frac{(X-\mu)^2}{2(\sigma^2)^2} - - - -where the second equality is from elementary calculus. Thus, the information in a single observation is just minus the expectation of the derivative of $V$, or - - - -I - -=-\operatorname{E}\left(\frac{\partial V}{\partial\sigma^2}\right) - -=-\operatorname{E}\left(-\frac{(X-\mu)^2}{(\sigma^2)^3}+\frac{1}{2(\sigma^2)^2}\right) - -=\frac{\sigma^2}{(\sigma^2)^3}-\frac{1}{2(\sigma^2)^2} - -=\frac{1}{2(\sigma^2)^2}. - -Thus the information in a sample of $n$ independent observations is just $n$ times this, or $\frac{n}{2(\sigma^2)^2}.$ - -The Cramer–Rao bound states that - - - -\operatorname{var}(T)\geq\frac{1}{I}. - -In this case, the inequality is saturated (equality is achieved), showing that the estimator is efficient. - -However, we can achieve a lower mean squared error using a biased estimator. 
The estimator
$$
T=\frac{\sum_{i=1}^n (X_i-\mu)^2}{n+2}
$$
obviously has a smaller variance, which is in fact
$$
\operatorname{var}(T)=\frac{2n(\sigma^2)^2}{(n+2)^2}.
$$

Its bias is
$$
\left(1-\frac{n}{n+2}\right)\sigma^2=\frac{2\sigma^2}{n+2},
$$
so its mean squared error is
$$
\operatorname{MSE}(T)=\left(\frac{2n}{(n+2)^2}+\frac{4}{(n+2)^2}\right)(\sigma^2)^2=\frac{2(\sigma^2)^2}{n+2},
$$
which is clearly less than the Cramér–Rao bound found above.

When the mean is not known, the minimum mean squared error estimate of the variance of a sample from a Gaussian distribution is achieved by dividing by $n+1$, rather than $n-1$ or $n+2$. diff --git a/wiki/wikipedia/661.txt b/wiki/wikipedia/661.txt deleted file mode 100644 index bcf7e25439f00bb3701176c7559fd3c09000cbee..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/661.txt +++ /dev/null @@ -1,10 +0,0 @@

In combinatorial number theory, the Erdős–Graham problem is the problem of proving that, if the set $\{2,3,4,\dots\}$ of integers greater than one is partitioned into finitely many subsets, then one of the subsets can be used to form an Egyptian fraction representation of unity. That is, for every $r > 0$, and every $r$-coloring of the integers greater than one, there is a finite monochromatic subset $S$ of these integers such that
$$
\sum_{n\in S}\frac{1}{n} = 1.
$$

In more detail, Paul Erdős and Ronald Graham conjectured that, for sufficiently large $r$, the largest member of $S$ could be bounded by $b^r$ for some constant $b$ independent of $r$. It was known that, for this to be true, $b$ must be at least Euler's number $e$.

Ernie Croot proved the conjecture as part of his Ph.D. thesis, and later (while a post-doctoral researcher at UC Berkeley) published the proof in the Annals of Mathematics. The value Croot gives for $b$ is very large: it is at most $e^{167000}$. Croot's result follows as a corollary of a more general theorem stating the existence of Egyptian fraction representations of unity for sets $C$ of smooth numbers in intervals of the form $[X,X^{1+\delta}]$, where $C$ contains sufficiently many numbers so that the sum of their reciprocals is at least six. The Erdős–Graham conjecture follows from this result by showing that one can find an interval of this form in which the sum of the reciprocals of all smooth numbers is at least $6r$; therefore, if the integers are $r$-colored there must be a monochromatic subset $C$ satisfying the conditions of Croot's theorem.

A stronger form of the result, that any set of integers with positive upper density includes the denominators of an Egyptian fraction representation of one, was announced in 2021 by Thomas Bloom, a postdoctoral researcher at the University of Oxford. diff --git a/wiki/wikipedia/662.txt b/wiki/wikipedia/662.txt deleted file mode 100644 index a45daff7e419e4571b6af3974bdc928a7b81cb3e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/662.txt +++ /dev/null @@ -1,61 +0,0 @@

Stress majorization is an optimization strategy used in multidimensional scaling (MDS) where, for a set of n m-dimensional data items, a configuration X of n points in r (< m)-dimensional space is sought that minimizes the stress function
$$
\sigma(X)=\sum_{i<j\le n} w_{ij}\left(d_{ij}(X)-\delta_{ij}\right)^2,
$$
where $w_{ij}\ge 0$ is a weight for the pair of points $(i,j)$, $d_{ij}(X)$ is the Euclidean distance between points $i$ and $j$ in the configuration $X$, and $\delta_{ij}$ is the ideal (input) distance between the items. Minimizing stress by the majorization procedure described below is known as the SMACOF algorithm. The stress function can be expanded as
$$
\sigma(X)=\sum_{i<j}w_{ij}\delta_{ij}^2 + \sum_{i<j}w_{ij}d_{ij}^2(X) - 2\sum_{i<j}w_{ij}\delta_{ij}d_{ij}(X).
$$

Note that the first term is a constant $C$ and the second term is quadratic in $X$ (i.e., for the Hessian matrix $V$ the second term is equivalent to $\operatorname{tr} X'VX$) and therefore relatively easily solved.
The third term is bounded by
$$
\sum_{i<j}w_{ij}\delta_{ij}d_{ij}(X) \ge \operatorname{tr} X'B(Z)Z,
$$
where $B(Z)$ has
$$
b_{ij}=-\frac{w_{ij}\delta_{ij}}{d_{ij}(Z)} \quad\text{for } d_{ij}(Z)\ne 0,\ i \ne j,
$$
$b_{ij}=0$ for $d_{ij}(Z)=0,\ i\ne j$, and $b_{ii}=-\sum_{j=1,j\ne i}^n b_{ij}$.

The proof of this inequality is by the Cauchy–Schwarz inequality; see Borg (pp. 152–153).

Thus, we have a simple quadratic function $\tau(X,Z)$ that majorizes stress:
$$
\sigma(X)=C+\operatorname{tr} X'VX - 2 \operatorname{tr} X'B(X)X \le C+\operatorname{tr} X'VX - 2 \operatorname{tr} X'B(Z)Z = \tau(X,Z).
$$

The iterative minimization procedure is then:

* at the kth step we set $Z\leftarrow X^{k-1}$

* $X^k\leftarrow \min_X \tau(X,Z)$

* stop if $\sigma(X^{k-1})-\sigma(X^{k})<\epsilon$; otherwise repeat.

This algorithm has been shown to decrease stress monotonically (see de Leeuw).

Stress majorization and algorithms similar to SMACOF also have application in the field of graph drawing. That is, one can find a reasonably aesthetically appealing layout for a network or graph by minimizing a stress function over the positions of the nodes in the graph. In this case, the $\delta_{ij}$ are usually set to the graph-theoretic distances between nodes i and j and the weights $w_{ij}$ are taken to be $\delta_{ij}^{-\alpha}$. Here, $\alpha$ is chosen as a trade-off between preserving long- or short-range ideal distances. Good results have been shown for $\alpha=2$. diff --git a/wiki/wikipedia/663.txt b/wiki/wikipedia/663.txt deleted file mode 100644 index 1b14194bc0979737c88350d821bc39db81256540..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/663.txt +++ /dev/null @@ -1,77 +0,0 @@

In the mathematical subject of group theory, the Stallings theorem about ends of groups states that a finitely generated group G has more than one end if and only if the group G admits a nontrivial decomposition as an amalgamated free product or an HNN extension over a finite subgroup. In the modern language of Bass–Serre theory the theorem says that a finitely generated group G has more than one end if and only if G admits a nontrivial (that is, without a global fixed point) action on a simplicial tree with finite edge-stabilizers and without edge-inversions.

The theorem was proved by John R. Stallings, first in the torsion-free case (1968) and then in the general case (1971).

Let Γ be a connected graph where the degree of every vertex is finite. One can view Γ as a topological space by giving it the natural structure of a one-dimensional cell complex. Then the ends of Γ are the ends of this topological space. A more explicit definition of the number of ends of a graph is presented below for completeness.

Let n ≥ 0 be a non-negative integer. The graph Γ is said to satisfy e(Γ) ≤ n if for every finite collection F of edges of Γ the graph Γ − F has at most n infinite connected components. By definition, e(Γ) = m if e(Γ) ≤ m and if for every 0 ≤ n < m the statement e(Γ) ≤ n is false. Thus e(Γ) = m if m is the smallest nonnegative integer n such that e(Γ) ≤ n. If there does not exist an integer n ≥ 0 such that e(Γ) ≤ n, put e(Γ) = ∞. The number e(Γ) is called the number of ends of Γ.

Informally, e(Γ) is the number of "connected components at infinity" of Γ. If e(Γ) = m < ∞, then for any finite set F of edges of Γ there exists a finite set K of edges of Γ with F ⊆ K such that Γ − K has exactly m infinite connected components.
If e(Γ) = ∞, then for any finite set F of edges of Γ and for any integer n ≥ 0 there exists a finite set K of edges of Γ with F ⊆ K such that Γ − K has at least n infinite connected components.

Let G be a finitely generated group. Let S ⊆ G be a finite generating set of G and let Γ(G, S) be the Cayley graph of G with respect to S. The number of ends of G is defined as e(G) = e(Γ(G, S)). A basic fact in the theory of ends of groups says that e(Γ(G, S)) does not depend on the choice of a finite generating set S of G, so that e(G) is well-defined.

*For a finitely generated group G we have e(G) = 0 if and only if G is finite.

*For the infinite cyclic group $\mathbb Z$ we have $e(\mathbb Z)=2.$

*For the free abelian group of rank two $\mathbb Z^2$ we have $e(\mathbb Z^2)=1.$

*For a free group F(X) where 1 < |X| < ∞ we have e(F(X)) = ∞.

Hans Freudenthal and, independently, Heinz Hopf established in the 1940s the following two facts:

*For any finitely generated group G we have e(G) ∈ {0, 1, 2, ∞}.

*For any finitely generated group G we have e(G) = 2 if and only if G is virtually infinite cyclic (that is, G contains an infinite cyclic subgroup of finite index).

Charles T. C. Wall proved in 1967 the following complementary fact:

*A group G is virtually infinite cyclic if and only if it has a finite normal subgroup W such that G/W is either infinite cyclic or infinite dihedral.

Let G be a finitely generated group, S ⊆ G be a finite generating set of G and let Γ = Γ(G, S) be the Cayley graph of G with respect to S. For a subset A ⊆ G denote by A∗ the complement G − A of A in G.

For a subset A ⊆ G, the edge boundary or the co-boundary δA of A consists of all (topological) edges of Γ connecting a vertex from A with a vertex from A∗. Note that by definition δA = δA∗.

An ordered pair (A, A∗) is called a cut in Γ if δA is finite. A cut (A, A∗) is called essential if both the sets A and A∗ are infinite.

A subset A ⊆ G is called almost invariant if for every g ∈ G the symmetric difference between A and Ag is finite. It is easy to see that (A, A∗) is a cut if and only if the sets A and A∗ are almost invariant (equivalently, if and only if the set A is almost invariant).

A simple but important observation states:

e(G) > 1 if and only if there exists at least one essential cut (A, A∗) in Γ.

If G = H∗K where H and K are nontrivial finitely generated groups then the Cayley graph of G has at least one essential cut and hence e(G) > 1. Indeed, let X and Y be finite generating sets for H and K respectively, so that S = X ∪ Y is a finite generating set for G, and let Γ = Γ(G, S) be the Cayley graph of G with respect to S. Let A consist of the trivial element and all the elements of G whose normal form expressions for G = H∗K start with a nontrivial element of H. Thus A∗ consists of all elements of G whose normal form expressions for G = H∗K start with a nontrivial element of K. It is not hard to see that (A, A∗) is an essential cut in Γ so that e(G) > 1.

A more precise version of this argument shows that for a finitely generated group G:

*If $G=H\ast_C K$ is a free product with amalgamation where C is a finite group such that C ≠ H and C ≠ K then H and K are finitely generated and e(G) > 1.

*If $\scriptstyle G=\langle H, t\mid t^{-1}C_1t=C_2\rangle$ is an HNN-extension where C1, C2 are isomorphic finite subgroups of H then G is a finitely generated group and e(G) > 1.

Stallings' theorem shows that the converse is also true.

Let G be a finitely generated group.
- -Then e(G) > 1 if and only if one of the following holds: - -*The group G admits a splitting G=H∗CK as a free product with amalgamation where C is a finite group such that C ≠ H and C ≠ K. - -*The group G is an HNN extension $\scriptstyle G=\langle H, t| t^{-1}C_1t=C_2\rangle$ where and C1, C2 are isomorphic finite subgroups of H. - -In the language of Bass–Serre theory this result can be restated as follows: - -For a finitely generated group G we have e(G) > 1 if and only if G admits a nontrivial (that is, without a global fixed vertex) action on a simplicial tree with finite edge-stabilizers and without edge-inversions. - -For the case where G is a torsion-free finitely generated group, Stallings' theorem implies that e(G) = ∞ if and only if G admits a proper free product decomposition G = A∗B with both A and B nontrivial. - -*Among the immediate applications of Stallings' theorem was a proof by Stallings of a long-standing conjecture that every finitely generated group of cohomological dimension one is free and that every torsion-free virtually free group is free. - -*Stallings' theorem also implies that the property of having a nontrivial splitting over a finite subgroup is a quasi-isometry invariant of a finitely generated group since the number of ends of a finitely generated group is easily seen to be a quasi-isometry invariant. For this reason Stallings' theorem is considered to be one of the first results in geometric group theory. - -*Stallings' theorem was a starting point for Dunwoody's accessibility theory. A finitely generated group G is said to be accessible if the process of iterated nontrivial splitting of G over finite subgroups always terminates in a finite number of steps. In Bass–Serre theory terms that the number of edges in a reduced splitting of G as the fundamental group of a graph of groups with finite edge groups is bounded by some constant depending on G. Dunwoody proved that every finitely presented group is accessible but that there do exist finitely generated groups that are not accessible. Linnell showed that if one bounds the size of finite subgroups over which the splittings are taken then every finitely generated group is accessible in this sense as well. These results in turn gave rise to other versions of accessibility such as Bestvina-Feighn accessibility of finitely presented groups (where the so-called "small" splittings are considered), acylindrical accessibility, strong accessibility, and others. - -*Stallings' theorem is a key tool in proving that a finitely generated group G is virtually free if and only if G can be represented as the fundamental group of a finite graph of groups where all vertex and edge groups are finite (see, for example,). - -*Using Dunwoody's accessibility result, Stallings' theorem about ends of groups and the fact that if G is a finitely presented group with asymptotic dimension 1 then G is virtually free) where the minimal surfaces argument is replaced by an easier harmonic analysis argument and this approach was pushed further by Kapovich to cover the original case of finitely generated groups. 
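The notion of ends lends itself to experimentation on finite pieces of Cayley graphs. The following heuristic Python sketch (networkx is an assumed dependency; the function names are invented) builds a large ball around the identity, deletes a small ball, and counts the components that reach the outer sphere. This crudely mimics e(Z²) = 1 and e(F₂) = ∞; it is an illustration only, not a rigorous computation of ends.

```python
from collections import deque
import networkx as nx

def cayley_ball(identity, neighbors, radius):
    """BFS ball of a Cayley graph: returns (graph, distance-from-identity)."""
    G, dist = nx.Graph(), {identity: 0}
    queue = deque([identity])
    while queue:
        g = queue.popleft()
        if dist[g] == radius:
            continue
        for h in neighbors(g):
            if h not in dist:
                dist[h] = dist[g] + 1
                queue.append(h)
            G.add_edge(g, h)
    return G, dist

def rough_end_count(identity, neighbors, radius, hole=2):
    """Delete a small ball around the identity and count the components
    that reach the outer sphere -- a crude stand-in for 'infinite' parts."""
    G, dist = cayley_ball(identity, neighbors, radius)
    G.remove_nodes_from([g for g, d in dist.items() if d <= hole])
    return sum(1 for comp in nx.connected_components(G)
               if any(dist[g] == radius for g in comp))

# Z^2 with generators (+-1, 0), (0, +-1): one end.
z2 = lambda g: [(g[0] + 1, g[1]), (g[0] - 1, g[1]),
                (g[0], g[1] + 1), (g[0], g[1] - 1)]
print(rough_end_count((0, 0), z2, radius=10))   # -> 1

# Free group F2 as reduced words over a, A, b, B: the count (here 36,
# one per reduced word of length 3) grows with the hole, reflecting e = oo.
def f2(w):
    out = []
    for s in "aAbB":
        out.append(w[:-1] if w and w[-1] == s.swapcase() else w + s)
    return out
print(rough_end_count("", f2, radius=8))        # -> 36
```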
diff --git a/wiki/wikipedia/664.txt b/wiki/wikipedia/664.txt deleted file mode 100644 index 2abc7e2f557857ddb7afd40d94fbb9fb02710adc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/664.txt +++ /dev/null @@ -1,45 +0,0 @@ -In mathematics, the Krull–Schmidt theorem states that a group subjected to certain finiteness conditions on chains of subgroups, can be uniquely written as a finite direct product of indecomposable subgroups. - -We say that a group G satisfies the ascending chain condition (ACC) on subgroups if every sequence of subgroups of G: -$$ -1 = G_0 \le G_1 \le G_2 \le \cdots -$$ - -is eventually constant, i.e., there exists N such that GN = GN+1 = GN+2 = ... . We say that G satisfies the ACC on normal subgroups if every such sequence of normal subgroups of G eventually becomes constant. - -Likewise, one can define the descending chain condition on (normal) subgroups, by looking at all decreasing sequences of (normal) subgroups: -$$ -G = G_0 \ge G_1 \ge G_2 \ge \cdots. -$$ - -Clearly, all finite groups satisfy both ACC and DCC on subgroups. The infinite cyclic group $\mathbf{Z}$ satisfies ACC but not DCC, since (2) > (2)2 > (2)3 > ... is an infinite decreasing sequence of subgroups. On the other hand, the $p^\infty$-torsion part of $\mathbf{Q}/\mathbf{Z}$ (the quasicyclic p-group) satisfies DCC but not ACC. - -We say a group G is indecomposable if it cannot be written as a direct product of non-trivial subgroups G = H × K. - -If $G$ is a group that satisfies either ACC or DCC on normal subgroups, then there is exactly one way of writing $G$ as a direct product $G_1 \times G_2 \times\cdots \times G_k$ of finitely many indecomposable subgroups of $G$. Here, uniqueness means direct decompositions into indecomposable subgroups have the exchange property. That is: suppose $G = H_1 \times H_2 \times \cdots \times H_l$ is another expression of $G$ as a product of indecomposable subgroups. Then $k=l$ and there is a reindexing of the $H_i$'s satisfying - -* $G_i$ and $H_i$ are isomorphic for each $i$; - -* $G = G_1 \times \cdots \times G_r \times H_{r+1} \times\cdots\times H_l$ for each $r$. - -Proving existence is relatively straightforward: let S be the set of all normal subgroups that can not be written as a product of indecomposable subgroups. Moreover, any indecomposable subgroup is (trivially) the one-term direct product of itself, hence decomposable. If Krull-Schmidt fails, then S contains G; so we may iteratively construct a descending series of direct factors; this contradicts the DCC. One can then invert the construction to show that all direct factors of G appear in this way. - -The proof of uniqueness, on the other hand, is quite long and requires a sequence of technical lemmas. For a complete exposition, see. - -The theorem does not assert the existence of a non-trivial decomposition, but merely that any such two decompositions (if they exist) are the same. - -If $E \neq 0$ is a module that satisfies the ACC and DCC on submodules (that is, it is both Noetherian and Artinian or – equivalently – of finite length), then $E$ is a direct sum of indecomposable modules. Up to a permutation, the indecomposable components in such a direct sum are uniquely determined up to isomorphism. - -In general, the theorem fails if one only assumes that the module is Noetherian or Artinian. - -The present-day Krull–Schmidt theorem was first proved by Joseph Wedderburn (Ann. of Math (1909)), for finite groups, though he mentions some credit is due to an earlier study of G.A. 
Miller where direct products of abelian groups were considered. Wedderburn's theorem is stated as an exchange property between direct decompositions of maximum length. However, Wedderburn's proof makes no use of automorphisms. - -The thesis of Robert Remak (1911) derived the same uniqueness result as Wedderburn but also proved (in modern terminology) that the group of central automorphisms acts transitively on the set of direct decompositions of maximum length of a finite group. From that stronger theorem Remak also proved various corollaries including that groups with a trivial center and perfect groups have a unique Remak decomposition. - -Otto Schmidt (Sur les produits directs, S. M. F. Bull. 41 (1913), 161–164), simplified the main theorems of Remak to the 3 page predecessor to today's textbook proofs. His method improves Remak's use of idempotents to create the appropriate central automorphisms. Both Remak and Schmidt published subsequent proofs and corollaries to their theorems. - -Wolfgang Krull (Über verallgemeinerte endliche Abelsche Gruppen, M. Z. 23 (1925) 161–196), returned to G.A. Miller's original problem of direct products of abelian groups by extending to abelian operator groups with ascending and descending chain conditions. This is most often stated in the language of modules. His proof observes that the idempotents used in the proofs of Remak and Schmidt can be restricted to module homomorphisms; the remaining details of the proof are largely unchanged. - -O. Ore unified the proofs from various categories include finite groups, abelian operator groups, rings and algebras by proving the exchange theorem of Wedderburn holds for modular lattices with descending and ascending chain conditions. This proof makes no use of idempotents and does not reprove the transitivity of Remak's theorems. - -Kurosh's The Theory of Groups and Zassenhaus' The Theory of Groups include the proofs of Schmidt and Ore under the name of Remak–Schmidt but acknowledge Wedderburn and Ore. Later texts use the title Krull–Schmidt (Hungerford's Algebra) and Krull–Schmidt–Azumaya (Curtis–Reiner). The name Krull–Schmidt is now popularly substituted for any theorem concerning uniqueness of direct products of maximum size. Some authors choose to call direct decompositions of maximum-size Remak decompositions to honor his contributions. diff --git a/wiki/wikipedia/665.txt b/wiki/wikipedia/665.txt deleted file mode 100644 index 011e8f2adcb26da86591b00f342b40471e615c48..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/665.txt +++ /dev/null @@ -1,40 +0,0 @@ -In graph theory, a connected dominating set and a maximum leaf spanning tree are two closely related structures defined on an undirected graph. - -A connected dominating set of a graph G is a set D of vertices with two properties: - -#Any node in D can reach any other node in D by a path that stays entirely within D. That is, D induces a connected subgraph of G. - -#Every vertex in G either belongs to D or is adjacent to a vertex in D. That is, D is a dominating set of G. - -A minimum connected dominating set of a graph G is a connected dominating set with the smallest possible cardinality among all connected dominating sets of G. The connected domination number of G is the number of vertices in the minimum connected dominating set. - -Any spanning tree T of a graph G has at least two leaves, vertices that have only one edge of T incident to them. 
A maximum leaf spanning tree is a spanning tree that has the largest possible number of leaves among all spanning trees of G. The max leaf number of G is the number of leaves in the maximum leaf spanning tree.

If d is the connected domination number of an n-vertex graph G, where n > 2, and l is its max leaf number, then the three quantities d, l, and n obey the simple equation
$$
\displaystyle n = d + l.
$$

If D is a connected dominating set, then there exists a spanning tree in G whose leaves include all vertices that are not in D: form a spanning tree of the subgraph induced by D, together with edges connecting each remaining vertex v that is not in D to a neighbor of v in D. This shows that l ≥ n − d.

In the other direction, if T is any spanning tree in G, then the vertices of T that are not leaves form a connected dominating set of G. This shows that n − l ≥ d. Putting these two inequalities together proves the equality n = d + l.

Therefore, in any graph, the sum of the connected domination number and the max leaf number equals the total number of vertices. Computationally, this implies that determining the connected domination number is exactly as difficult as finding the max leaf number.

It is NP-complete to test whether there exists a connected dominating set with size less than a given threshold, or equivalently to test whether there exists a spanning tree with at least a given number of leaves. Therefore, it is believed that the minimum connected dominating set problem and the maximum leaf spanning tree problem cannot be solved in polynomial time.

When viewed in terms of approximation algorithms, connected domination and maximum leaf spanning trees are not the same: approximating one to within a given approximation ratio is not the same as approximating the other to the same ratio.

There exists an approximation for the minimum connected dominating set that achieves a factor of 2 ln Δ + O(1), where Δ is the maximum degree of a vertex in G.

The maximum leaf spanning tree problem is MAX-SNP hard, implying that no polynomial time approximation scheme is likely. However, it can be approximated to within a factor of 2 in polynomial time.

Both problems may be solved, on n-vertex graphs, in time O(1.9^n). The maximum leaf problem is fixed-parameter tractable, meaning that it can be solved in time exponential in the number of leaves but only polynomial in the input graph size. The klam value of these algorithms (intuitively, a number of leaves up to which the problem can be solved within a reasonable amount of time) has gradually increased, as algorithms for the problem have improved, to approximately 37, and it has been suggested that at least 50 should be achievable.

In graphs of maximum degree three, the connected dominating set and its complementary maximum leaf spanning tree problem can be solved in polynomial time, by transforming them into an instance of the matroid parity problem for linear matroids.

Connected dominating sets are useful in the computation of routing for mobile ad hoc networks. In this application, a small connected dominating set is used as a backbone for communications, and nodes that are not in this set communicate by passing messages through neighbors that are in the set.

The max leaf number has been employed in the development of fixed-parameter tractable algorithms: several NP-hard optimization problems may be solved in polynomial time for graphs of bounded max leaf number.
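Returning to the identity n = d + l: it is easy to verify computationally on small graphs. The following brute-force Python sketch (exponential time, so tiny graphs only; networkx is an assumed dependency and the helper names are invented) computes both quantities directly and checks that they sum to n.

```python
from itertools import combinations
import networkx as nx

def connected_domination_number(G):
    """Smallest connected dominating set, by exhaustive search."""
    nodes = list(G)
    for k in range(1, len(nodes) + 1):
        for D in combinations(nodes, k):
            S = set(D)
            if nx.is_connected(G.subgraph(S)) and all(
                v in S or any(u in S for u in G[v]) for v in G
            ):
                return k

def max_leaf_number(G):
    """Most leaves over all spanning trees, by trying all (n-1)-edge subsets."""
    n, best = len(G), 0
    for E in combinations(G.edges, n - 1):
        T = nx.Graph(list(E))
        if len(T) == n and nx.is_tree(T):
            best = max(best, sum(1 for v in T if T.degree(v) == 1))
    return best

G = nx.petersen_graph()          # n = 10
d = connected_domination_number(G)
l = max_leaf_number(G)
print(d, l, d + l == len(G))     # expected: 4 6 True
```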
diff --git a/wiki/wikipedia/666.txt b/wiki/wikipedia/666.txt deleted file mode 100644 index fadf5cfc72ec8c8c82579bdb0c051f92377ee494..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/666.txt +++ /dev/null @@ -1,16 +0,0 @@

Raikov's theorem is a result in probability theory. It is well known that if each of two independent random variables ξ1 and ξ2 has a Poisson distribution, then their sum ξ = ξ1 + ξ2 has a Poisson distribution as well. It turns out that the converse is also valid.

Suppose that a random variable ξ has a Poisson distribution and admits a decomposition as a sum ξ = ξ1 + ξ2 of two independent random variables. Then the distribution of each summand is a shifted Poisson distribution.

Raikov's theorem is similar to Cramér's decomposition theorem. The latter result claims that if a sum of two independent random variables has a normal distribution, then each summand is normally distributed as well. It was also proved by Yu. V. Linnik that a convolution of a normal distribution and a Poisson distribution possesses a similar property.

Let $X$ be a locally compact Abelian group. Denote by $M^1(X)$ the convolution semigroup of probability distributions on $X$, and by $E_x$ the degenerate distribution concentrated at $x\in X$. Let $x_0\in X$, $\lambda>0$.

The Poisson distribution generated by the measure $\lambda E_{x_0}$ is defined as a shifted distribution of the form
$$
\mu=e(\lambda E_{x_0})=e^{-\lambda}(E_0+\lambda E_{x_0}+\lambda^2 E_{2x_0}/2!+\ldots+\lambda^n E_{nx_0}/n!+\ldots).
$$

One has the following theorem.

Let $\mu$ be the Poisson distribution generated by the measure $\lambda E_{x_0}$. Suppose that $\mu=\mu_1*\mu_2$, with $\mu_j\in M^1(X)$. If $x_0$ is either an element of infinite order, or has order 2, then $\mu_j$ is also a Poisson distribution. In the case of $x_0$ being an element of finite order $n\ne 2$, $\mu_j$ can fail to be a Poisson distribution. diff --git a/wiki/wikipedia/667.txt b/wiki/wikipedia/667.txt deleted file mode 100644 index aedca4a71d39145436d05cd069a72a9ac418a788..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/667.txt +++ /dev/null @@ -1,75 +0,0 @@

A temporal database stores data relating to time instances. It offers temporal data types and stores information relating to past, present and future time.

Temporal databases can be uni-temporal, bi-temporal or tri-temporal. More specifically, the temporal aspects usually include valid time, transaction time and decision time.

* Valid time is the time period during which a fact is true in the real world.

* Transaction time is the time at which a fact was recorded in the database.

* Decision time is the time at which the decision was made about the fact.

A uni-temporal database has one axis of time, either the validity range or the system time range.

A bi-temporal database has two axes of time:

* valid time

* transaction time or decision time

A tri-temporal database has three axes of time:

* valid time

* transaction time

* decision time

This approach introduces additional complexities.

Temporal databases are in contrast to current databases (not to be confused with currently available databases), which store only facts which are believed to be true at the current time.
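To make the two time axes concrete, here is a small sketch in plain Python (standard library only; the table layout and all names are invented for illustration) of a bi-temporal log: every fact carries a valid-time interval and a transaction-time interval, and a retraction closes the old row's transaction time instead of deleting it, following the closed-lower/open-upper interval convention discussed earlier.

```python
from dataclasses import dataclass

FOREVER = "9999-12-31"           # 'until changed' sentinel, as discussed above

@dataclass
class Row:
    fact: str
    valid_from: str              # valid time: when the fact holds in the world
    valid_to: str = FOREVER
    tx_from: str = ""            # transaction time: when the database learned it
    tx_to: str = FOREVER

class BitemporalTable:
    def __init__(self):
        self.rows: list[Row] = []

    def assert_fact(self, fact, valid_from, now):
        self.rows.append(Row(fact, valid_from, tx_from=now))

    def retract_fact(self, fact, now):
        # Never delete: close the transaction-time interval instead.
        for r in self.rows:
            if r.fact == fact and r.tx_to == FOREVER:
                r.tx_to = now

    def as_of(self, tx_time):
        # What did the database believe at tx_time?  [tx_from, tx_to) is
        # closed at the lower bound and open at the upper bound.
        return [r for r in self.rows if r.tx_from <= tx_time < r.tx_to]

t = BitemporalTable()
t.assert_fact("alice lives in Paris", valid_from="2010-01-01", now="2020-05-01")
t.retract_fact("alice lives in Paris", now="2021-03-15")
t.assert_fact("alice lives in Lyon", valid_from="2021-03-01", now="2021-03-15")
print([r.fact for r in t.as_of("2020-06-01")])  # ['alice lives in Paris']
print([r.fact for r in t.as_of("2022-01-01")])  # ['alice lives in Lyon']
```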
- -Temporal databases support managing and accessing temporal data by providing one or more of the following features: In late 1993, Snodgrass presented this work to the group responsible for the American National Standard for Database Language SQL, ANSI Technical Committee X3H2 (now known as NCITS H2). The preliminary language specification appeared in the March 1994 ACM SIGMOD Record. Based on responses to that specification, changes were made to the language, and the definitive version of the TSQL2 Language Specification was published in September, 1994 - -An attempt was made to incorporate parts of TSQL2 into the new SQL standard SQL:1999, called SQL3. Parts of TSQL2 were included in a new substandard of SQL3, ISO/IEC 9075-7, called SQL/Temporal. although this is not part of SQL or similar standards. - -Approaches to minimize the complexities of schema evolution are: - -* to use a semi-structured database/NoSQL database which reduces the complexities of modeling attribute data but provides no features for handling multiple time axes. - -* to use a database capable of storing both semi-structured data for attributes and structured data for time axes (e.g., SnowflakeDB, PostgreSQL) - -The following implementations provide temporal features in a relational database management system (RDBMS). - -* MariaDB version 10.3.4 added support for SQL:2011 standard as "System-Versioned Tables". - -* Oracle Database Oracle Workspace Manager is a feature of Oracle Database which enables application developers and DBAs to manage current, proposed and historical versions of data in the same database. - -* PostgreSQL version 9.2 added native ranged data types that are capable of implementing all of the features of the pgFoundry temporal contributed extension. The PostgreSQL range types are supported by numerous native operators and functions. - -* Teradata provides two products. Teradata version 13.10 and Teradata version 14 have temporal features based on TSQL2 built into the database. - -* IBM DB2 version 10 added a feature called "time travel query" which is based on the temporal capabilities of the SQL:2011 standard. - -* Microsoft SQL Server introduced Temporal Tables as a feature for SQL Server 2016. The feature is described in a video on Microsoft's "Channel 9" web site. - -Non-relational, NoSQL database management systems that provide temporal features including the following: - -* TerminusDB is a fully featured open source graph database that natively supports version control, time-travel queries and diffing functions. It has an immutable layer architecture based on delta encoding and succinct data structures. - -* MarkLogic introduced bitemporal data support in version 8.0. Time stamps for Valid and System time are stored in JSON or XML documents. - -* stores snapshots of (currently) XML- and JSON-documents very efficiently in a binary format due to a novel versioning algorithm called sliding snapshot, which balances read-/write-performance and never creates write peaks. Time-travel queries are supported natively as well as diffing functions. - -* (formerly Crux) provides point-in-time bitemporal Datalog queries over transactions and documents ingested from semi-immutable Kafka logs. Documents are automatically indexed to create Entity–attribute–value model indexes without any requirement to define a schema. Transaction operations specify the effective Valid times. Transaction times are assigned by Kafka and enable horizontal scalability via consistent reads. 
- -* is a point-in-time, unitemporal (transaction time) graph database, built on top of ArangoDB. It runs on ArangoDB's sub-system. It features VCS-like semantics in many parts of its interface, and is backed by a transactional event tracker. Bitemporality is listed as one of the items in its . - -Slowly changing dimensions can be used to model temporal relations. - -* C.J. Date, Hugh Darwen, Nikos Lorentzos (2002). Temporal Data & the Relational Model, First Edition (The Morgan Kaufmann Series in Data Management Systems); Morgan Kaufmann; 1st edition; 422 pages. . - -* Joe Celko (2014). Joe Celko's SQL for Smarties: Advanced SQL Programming (The Morgan Kaufmann Series in Data Management); Morgan Kaufmann; 5th edition. .—Chapters 12 and 35 in particular discuss temporal issues. - -* Snodgrass, Richard T. (1999). (Morgan Kaufmann Series in Data Management Systems); Morgan Kaufmann; 504 pages; diff --git a/wiki/wikipedia/668.txt b/wiki/wikipedia/668.txt deleted file mode 100644 index fa0d1e4f8efe0bf38c57208a788cb2eacedba928..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/668.txt +++ /dev/null @@ -1,3 +0,0 @@ -Chaff is an algorithm for solving instances of the Boolean satisfiability problem in programming. It was designed by researchers at Princeton University, United States. The algorithm is an instance of the DPLL algorithm with a number of enhancements for efficient implementation. - -Some available implementations of the algorithm in software are mChaff and zChaff, the latter one being the most widely known and used. zChaff was originally written by Dr. Lintao Zhang, at Microsoft Research, hence the “z”. It is now maintained by researchers at Princeton University and available for download as both source code and binaries on Linux. zChaff is free for non-commercial use. diff --git a/wiki/wikipedia/669.txt b/wiki/wikipedia/669.txt deleted file mode 100644 index 42bc0b75c48a3e427a9b27b691cead240e936146..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/669.txt +++ /dev/null @@ -1,3 +0,0 @@ -In functional analysis the Meyers–Serrin theorem, named after James Serrin and Norman George Meyers, states that smooth functions are dense in the Sobolev space $W^{k,p}(\Omega)$. - -Originally there were two spaces: $W^{k,p}(\Omega)$ defined as the set of all functions which have weak derivatives of order up to k all of which are in $L^p$ and $H^{k,p}(\Omega)$ defined as the closure of the smooth functions with respect to the corresponding Sobolev norm (obtained by summing over the $L^p$ norms of the functions and all derivatives). The theorem establishes the equivalence $W^{k,p}(\Omega)=H^{k,p}(\Omega)$ of both definitions. According to the standard reference on Sobolev spaces by Adams and Fournier (p60): "This result, published in 1964 by Meyers and Serrin ended much confusion about the relationship of these spaces that existed in the literature before that time. It is surprising that this elementary result remained undiscovered for so long." diff --git a/wiki/wikipedia/67.txt b/wiki/wikipedia/67.txt deleted file mode 100644 index 53af35317b0a742158f7d93ad474eb4de90eecce..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/67.txt +++ /dev/null @@ -1,49 +0,0 @@ -In the mathematical areas of order and lattice theory, the Knaster–Tarski theorem, named after Bronisław Knaster and Alfred Tarski, states the following: - -Let (L,≤) be a complete lattice and let f : L → L be an order-preserving function (w.r.t. ≤). 
Then the set of fixed points of f in L also forms a complete lattice under ≤. - -It was Tarski who stated the result in its most general form, and so the theorem is often known as Tarski's fixed-point theorem. Some time earlier, Knaster and Tarski established the result for the special case where L is the lattice of subsets of a set, the power set lattice. - -The theorem has important applications in formal semantics of programming languages and abstract interpretation. - -A kind of converse of this theorem was proved by Anne C. Davis: If every order-preserving function f : L → L on a lattice L has a fixed point, then L is a complete lattice. - -Since complete lattices cannot be empty (they must contain a supremum/infimum of the empty set), the theorem in particular guarantees the existence of at least one fixed point of f, and even the existence of a least (or greatest) fixed point. In many practical cases, this is the most important implication of the theorem. - -The least fixpoint of f is the least element x such that f(x) = x, or, equivalently, such that f(x) ≤ x; the dual holds for the greatest fixpoint, the greatest element x such that f(x) = x. - -If $f(\lim x_n) = \lim f(x_n)$ for all ascending sequences $x_n$, then the least fixpoint of f is $\lim f^n(0)$ where 0 is the least element of L, thus giving a more "constructive" version of the theorem. (See: Kleene fixed-point theorem.) More generally, if f is monotonic, then the least fixpoint of f is the stationary limit of $f^\alpha(0)$, taking $\alpha$ over the ordinals, where $f^\alpha$ is defined by transfinite induction: $f^{\alpha+1} = f(f^\alpha)$ and $f^\gamma$ for a limit ordinal $\gamma$ is the least upper bound of the $f^\beta(0)$ for all ordinals $\beta$ less than $\gamma$. The dual theorem holds for the greatest fixpoint. - -For example, in theoretical computer science, least fixed points of monotone functions are used to define program semantics. Often a more specialized version of the theorem is used, where L is assumed to be the lattice of all subsets of a certain set ordered by subset inclusion. This reflects the fact that in many applications only such lattices are considered. One then usually is looking for the smallest set that has the property of being a fixed point of the function f. Abstract interpretation makes ample use of the Knaster–Tarski theorem and the formulas giving the least and greatest fixpoints. - -The Knaster–Tarski theorem can be used for a simple proof of the Cantor–Bernstein–Schroeder theorem. - -Weaker versions of the Knaster–Tarski theorem can be formulated for ordered sets, but involve more complicated assumptions. For example: - -Let L be a partially ordered set with a smallest element (bottom) and let f : L → L be an order-preserving function. Further, suppose there exists u in L such that f(u) ≤ u and that any chain in the subset $\{x \in L \mid x \leq f(x), x \leq u\}$ has a supremum. Then f admits a least fixed point. - -This can be applied to obtain various theorems on invariant sets, e.g. Ok's theorem: - -For the monotone map F : P(X) → P(X) on the family of (closed) nonempty subsets of X, the following are equivalent: (o) F admits an A in P(X) s.t. $A \subseteq F(A)$, (i) F admits an invariant set A in P(X), i.e. $A = F(A)$, (ii) F admits a maximal invariant set A, (iii) F admits the greatest invariant set A. - -In particular, using the Knaster-Tarski principle one can develop the theory of global attractors for noncontractive discontinuous (multivalued) iterated function systems.
For weakly contractive iterated function systems the Kantorovitch fixpoint theorem (also known as the Tarski-Kantorovitch fixpoint principle) suffices. - -Other applications of fixed-point principles for ordered sets come from the theory of differential, integral and operator equations. - -Let us restate the theorem. - -For a complete lattice $\langle L, \le \rangle$ and a monotone function $f\colon L \rightarrow L$ on L, the set of all fixpoints of f is also a complete lattice $\langle P, \le \rangle$, with: - -* $\bigvee P = \bigvee \{ x \in L \mid x \le f(x) \}$ as the greatest fixpoint of f - -* $\bigwedge P = \bigwedge \{ x \in L \mid x \ge f(x) \}$ as the least fixpoint of f. - -Proof. We begin by showing that P has both a least element and a greatest element. Let D = { x | x ≤ f(x) } and x ∈ D (we know that at least $0_L$ belongs to D). Then because f is monotone we have f(x) ≤ f(f(x)), that is f(x) ∈ D. - -Now let $u = \bigvee D$ (u exists because D ⊆ L and L is a complete lattice). Then for all x ∈ D it is true that x ≤ u and f(x) ≤ f(u), so x ≤ f(x) ≤ f(u). Therefore, f(u) is an upper bound of D, but u is the least upper bound, so u ≤ f(u), i.e. u ∈ D. Then f(u) ∈ D (because f(u) ≤ f(f(u))) and so f(u) ≤ u, from which follows f(u) = u. Because every fixpoint is in D, we have that u is the greatest fixpoint of f. - -The function f is monotone on the dual (complete) lattice $\langle L^{op}, \ge \rangle$. As we have just proved, its greatest fixpoint exists. It is the least fixpoint of f on L, so P has least and greatest elements. That is, more generally, every monotone function on a complete lattice has a least fixpoint and a greatest fixpoint. - -If a ∈ L and b ∈ L, we'll write [a, b] for the closed interval with bounds a and b: { x ∈ L | a ≤ x ≤ b }. If a ≤ b, then $\langle$[a, b], $\le\rangle$ is a complete lattice. - -It remains to be proven that P is a complete lattice. Let $1_L = \bigvee L$, W ⊆ P and $w = \bigvee W$. We'll show that f([w, 1L]) ⊆ [w, 1L]. Indeed, for every x ∈ W we have x = f(x), and since w is an upper bound of W, x ≤ w, so x = f(x) ≤ f(w). Therefore f(w) is an upper bound of W, and since w is the least upper bound, w ≤ f(w). Then from y ∈ [w, 1L] follows that w ≤ f(w) ≤ f(y), giving f(y) ∈ [w, 1L] or simply f([w, 1L]) ⊆ [w, 1L]. This allows us to look at f as a function on the complete lattice [w, 1L]. Then it has a least fixpoint there, giving us the least upper bound of W within P. We've shown that an arbitrary subset of P has a supremum, that is, P is a complete lattice. diff --git a/wiki/wikipedia/670.txt b/wiki/wikipedia/670.txt deleted file mode 100644 index 8b492ebc930bdf62565051d1c62914c34bc80403..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/670.txt +++ /dev/null @@ -1,7 +0,0 @@ -In graph drawing, a RAC drawing of a graph is a drawing in which the vertices are represented as points, the edges are represented as straight line segments or polylines, at most two edges cross at any point, and when two edges cross they do so at right angles to each other. In the name of this drawing style, "RAC" stands for "right angle crossing". - -The right-angle crossing style and the name "RAC drawing" for this style were both formulated by Didimo, motivated by previous user studies showing that crossings with large angles are much less harmful to the readability of drawings than shallow crossings. Even for planar graphs, allowing some right-angle crossings in a drawing of the graph can significantly improve measures of the drawing quality such as its area or angular resolution.
- -The complete graph K5 has a RAC drawing with straight edges, but K6 does not. Every 6-vertex RAC drawing has at most 14 edges, but K6 has 15 edges, too many to have a RAC drawing. However, there exist 1-planar graphs with 4n − 10 edges that do not have RAC drawings. - -It is NP-hard to determine whether a given graph has a RAC drawing with straight edges, even if the input graph is 1-planar and the output RAC drawing must be 1-planar as well. More specifically, RAC drawing is complete for the existential theory of the reals. The RAC drawing problem remains NP-hard for upward drawings of directed acyclic graphs. However, in the special case of outer-1-planar graphs, a RAC drawing can be constructed in linear time. diff --git a/wiki/wikipedia/671.txt b/wiki/wikipedia/671.txt deleted file mode 100644 index 7054799effc29a7d43279ce45f01ec66128887b7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/671.txt +++ /dev/null @@ -1,97 +0,0 @@ -In the mathematical theory of functional analysis, the Krein–Milman theorem is a proposition about compact convex sets in locally convex topological vector spaces (TVSs). - -A compact convex subset of a Hausdorff locally convex topological vector space is equal to the closed convex hull of its extreme points. - -This theorem generalizes to infinite-dimensional spaces and to arbitrary compact convex sets the following basic observation: a convex (i.e. "filled") triangle, including its perimeter and the area "inside of it", is equal to the convex hull of its three vertices, where these vertices are exactly the extreme points of this shape. - -This observation also holds for any other convex polygon in the plane $\R^2.$ - -Throughout, $X$ will be a real or complex vector space. - -For any elements $x$ and $y$ in a vector space, the set $[x, y] := \{ tx + (1-t)y : 0 \leq t \leq 1 \}$ is called the closed line segment or closed interval between $x$ and $y.$ The open line segment or open interval between $x$ and $y$ is $(x, x) := \varnothing$ when $x = y$ while it is $(x, y) := \{ tx + (1-t)y : 0 < t < 1 \}$ when $x \neq y$; it satisfies $(x, y) = [x, y] \setminus \{ x, y \}$ and $[x, y] = (x, y) \cup \{ x, y \}.$ The points $x$ and $y$ are called the endpoints of these intervals. An interval is said to be non-degenerate or proper if its endpoints are distinct. - -The intervals $[x, x] = \{ x \}$ and $[x, y]$ always contain their endpoints while $(x, x) = \varnothing$ and $(x, y)$ never contain either of their endpoints. - -If $x$ and $y$ are points in the real line $\R$ then the above definition of $[x, y]$ is the same as its usual definition as a closed interval. - -For any $p, x, y \in X,$ the point $p$ is said to (strictly) lie between $x$ and $y$ if $p$ belongs to the open line segment $(x, y).$ - -If $K$ is a subset of $X$ and $p \in K,$ then $p$ is called an extreme point of $K$ if it does not lie between any two distinct points of $K.$ That is, if there do not exist $x, y \in K$ and $0 < t < 1$ such that $x \neq y$ and $p = tx + (1-t) y.$ In this article, the set of all extreme points of $K$ will be denoted by $\operatorname{extreme}(K).$ - -For example, the vertices of any convex polygon in the plane $\R^2$ are the extreme points of that polygon. - -The set of extreme points of the closed unit disk in $\R^2$ is the unit circle.
- -Every open interval and degenerate closed interval in $\R$ has no extreme points, while the extreme points of a non-degenerate closed interval $[x, y]$ are $x$ and $y.$ - -A set $S$ is called convex if for any two points $x, y \in S,$ $S$ contains the line segment $[x, y].$ The smallest convex set containing $S$ is called the convex hull of $S$ and it is denoted by $\operatorname{co} S.$ - -The closed convex hull of a set $S,$ denoted by $\overline{\operatorname{co}}(S),$ is the smallest closed and convex set containing $S.$ It is also equal to the intersection of all closed convex subsets that contain $S$ and to the closure of the convex hull of $S$; that is, - -\overline{\operatorname{co}}(S) = \overline{\operatorname{co}(S)}, - -where the right hand side denotes the closure of $\operatorname{co}(S)$ while the left hand side is notation. - -For example, the convex hull of any set of three distinct points forms either a closed line segment (if they are collinear) or else a solid (that is, "filled") triangle, including its perimeter. - -And in the plane $\R^2,$ the unit circle is not convex but the closed unit disk is convex and furthermore, this disk is equal to the convex hull of the circle. - -{{Math theorem|name=Krein–Milman theorem|math_statement= - -Suppose $X$ is a Hausdorff locally convex topological vector space (for example, a normed space) and $K$ is a compact and convex subset of $X.$ - -Then $K$ is equal to the closed convex hull of its extreme points: - -K ~=~ \overline{\operatorname{co}} (\operatorname{extreme}(K)). - -Moreover, if $B \subseteq K$ then $K$ is equal to the closed convex hull of $B$ if and only if $\operatorname{extreme} K \subseteq \operatorname{cl} B,$ where $\operatorname{cl} B$ is the closure of $B.$ - -}} - -The convex hull of the extreme points of $K$ forms a convex subset of $K,$ so the main burden of the proof is to show that there are enough extreme points so that their convex hull covers all of $K.$ - -For this reason, the following corollary to the above theorem is also often called the Krein–Milman theorem. - -A particular case of this theorem, which can be easily visualized, states that given a convex polygon, the corners of the polygon are all that is needed to recover the polygon's shape. - -The statement of the theorem is false if the polygon is not convex, as then there can be many ways of drawing a polygon having given points as corners. - -The requirement that the convex set be compact can be weakened to give the following strengthened version of the theorem. - -{{Math theorem|name=Generalization of the Krein–Milman theorem (Existence)|math_statement= - -Suppose $X$ is a Hausdorff locally convex topological vector space and $K$ is a non-empty convex subset of $X$ with the property that whenever $\mathcal{C}$ is a cover of $K$ by convex closed subsets of $X$ such that $\{K \cap C : C \in \mathcal{C}\}$ has the finite intersection property, then $K \cap \bigcap_{C \in \mathcal{C}} C$ is not empty. - -Then $\operatorname{extreme}(K)$ is not empty. - -}} - -The assumption of local convexity for the ambient space is necessary, because Roberts constructed a counter-example for the non-locally convex space $L^p[0, 1]$ where $0 < p < 1.$ - -Linearity is also needed, because the statement fails for weakly compact convex sets in CAT(0) spaces. However, the Krein–Milman theorem has been proved to hold for metrically compact CAT(0) spaces.
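The finite-dimensional case is easy to experiment with numerically: for a compact convex polytope, the extreme points are the vertices, and they alone suffice to reconstruct the body. The following Python sketch (an added illustration using scipy; all names are chosen here for the example) recovers the convex hull of a planar point cloud from its extreme points:

import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
points = rng.standard_normal((100, 2))

# The convex hull of the cloud is a compact convex set;
# its extreme points are exactly the hull vertices.
hull = ConvexHull(points)
extreme = points[hull.vertices]

# Rebuilding the hull from the extreme points alone recovers the same body.
rebuilt = ConvexHull(extreme)
print(np.isclose(hull.volume, rebuilt.volume))  # True ("volume" is area in 2D)

In infinite dimensions the theorem asserts that the same reconstruction works, provided one passes to the closed convex hull.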
- -Under the previous assumptions on $K,$ if $T$ is a subset of $K$ and the closed convex hull of $T$ is all of $K,$ then every extreme point of $K$ belongs to the closure of $T.$ - -This result is known as Milman's (partial) converse to the Krein–Milman theorem. - -The Choquet–Bishop–de Leeuw theorem states that every point in $K$ is the barycenter of a probability measure supported on the set of extreme points of $K.$ - -Under the Zermelo–Fraenkel set theory (ZF) axiomatic framework, the axiom of choice (AC) suffices to prove all versions of the Krein–Milman theorem given above, including statement KM and its generalization SKM. - -The axiom of choice also implies, but is not equivalent to, the Boolean prime ideal theorem (BPI), which is equivalent to the Banach–Alaoglu theorem. - -Conversely, the Krein–Milman theorem KM together with the Boolean prime ideal theorem (BPI) imply the axiom of choice. - -In summary, AC holds if and only if both KM and BPI hold. - -It follows that under ZF, the axiom of choice is equivalent to the following statement: - -The closed unit ball of the continuous dual space of any real normed space has an extreme point. - -Furthermore, SKM together with the Hahn–Banach theorem for real vector spaces (HB) are also equivalent to the axiom of choice. It is known that BPI implies HB, but that it is not equivalent to it (said differently, BPI is strictly stronger than HB). - -The original statement, proved by Krein and Milman, was somewhat less general than the form stated here. - -Earlier, Minkowski proved that if $X$ is 3-dimensional then $K$ equals the convex hull of the set of its extreme points. This assertion was expanded to the case of any finite dimension by Steinitz. - -The Krein–Milman theorem generalizes this to arbitrary locally convex $X$; however, to generalize from finite to infinite dimensional spaces, it is necessary to use the closure. diff --git a/wiki/wikipedia/672.txt b/wiki/wikipedia/672.txt deleted file mode 100644 index ba734db80fe8735b0f69658e7ad35fa6fd7f5b3d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/672.txt +++ /dev/null @@ -1 +0,0 @@ -In number theory, Mazur's control theorem, introduced by Barry Mazur, describes the behavior in $\mathbb{Z}_p$-extensions of the Selmer group of an abelian variety over a number field. diff --git a/wiki/wikipedia/673.txt b/wiki/wikipedia/673.txt deleted file mode 100644 index 1ddb99f839eb20eab158769b64d740030319dcad..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/673.txt +++ /dev/null @@ -1,8 +0,0 @@ -In algebraic topology and algebraic geometry, Leray's theorem (so named after Jean Leray) relates abstract sheaf cohomology with Čech cohomology.
- -Let $\mathcal F$ be a sheaf on a topological space $X$ and $\mathcal U$ an open cover of $X.$ If $\mathcal F$ is acyclic on every finite intersection of elements of $\mathcal U$, then -$$ - \check H^q(\mathcal U,\mathcal F)= H^q(X,\mathcal F), -$$ - -where $\check H^q(\mathcal U,\mathcal F)$ is the $q$-th Čech cohomology group of $\mathcal F$ with respect to the open cover $\mathcal U.$ diff --git a/wiki/wikipedia/674.txt b/wiki/wikipedia/674.txt deleted file mode 100644 index 07a2854ec92314971678721b3e3737015655ed88..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/674.txt +++ /dev/null @@ -1,41 +0,0 @@ -Wolfram Mathematica is a software system with built-in libraries for several areas of technical computing that allow machine learning, statistics, symbolic computation, manipulating matrices, plotting functions and various types of data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other programming languages. It was conceived by Stephen Wolfram, and is developed by Wolfram Research of Champaign, Illinois. The Wolfram Language is the programming language used in Mathematica. - -Wolfram Mathematica (called Mathematica by some of its users) is split into two parts: the kernel and the front end. The kernel interprets expressions (Wolfram Language code) and returns result expressions, which can then be displayed by the front end. - -The original front end, designed by Theodore Gray in 1988, consists of a notebook interface and allows the creation and editing of notebook documents that can contain code, plaintext, images, and graphics. - -Alternatives to the Mathematica front end include Wolfram Workbench, an Eclipse-based integrated development environment (IDE) that was introduced in 2006. It provides project-based code development tools for Mathematica, including revision management, debugging, profiling, and testing. - -There is also a plugin for IntelliJ IDEA-based IDEs to work with Wolfram Language code that, in addition to syntax highlighting, can analyze and auto-complete local variables and defined functions. The Mathematica Kernel also includes a command line front end. - -Other interfaces include JMath, based on GNU Readline, and WolframScript, which runs self-contained Mathematica programs (with arguments) from the UNIX command line. - -Capabilities for high-performance computing were extended with the introduction of packed arrays in version 4 (1999) and sparse matrices (version 5, 2003), and by adopting the GNU Multi-Precision Library to evaluate high-precision arithmetic. - -Version 5.2 (2005) added automatic multi-threading when computations are performed on multi-core computers. This release included CPU-specific optimized libraries. In addition, Mathematica is supported by third-party specialist acceleration hardware such as ClearSpeed. - -In 2002, gridMathematica was introduced to allow user-level parallel programming on heterogeneous clusters and multiprocessor systems, and in 2008 parallel computing technology was included in all Mathematica licenses, including support for grid technology such as Windows HPC Server 2008, Microsoft Compute Cluster Server and Sun Grid. - -Support for CUDA and OpenCL GPU hardware was added in 2010. - -In 2019, support was added for compiling Wolfram Language code to LLVM. - -Communication with other applications occurs through a protocol called Wolfram Symbolic Transfer Protocol (WSTP).
It allows communication between the Wolfram Mathematica kernel and front end and provides a general interface between the kernel and other applications. - -Wolfram Research freely distributes a developer kit for linking applications written in the programming language C to the Mathematica kernel through WSTP. J/Link enables a Java program to ask Mathematica to perform computations; similar functionality is achieved with .NET/Link, but with .NET programs instead of Java programs. - -Other languages that connect to Mathematica include Haskell, AppleScript, Racket, Visual Basic, Python, and Clojure. - -Mathematica supports the generation and execution of Modelica models for systems modeling and connects with Wolfram System Modeler. - -Links are also available to many third-party software packages and APIs. - -Mathematica can also capture real-time data from a variety of sources and can read and write to public blockchains (Bitcoin, Ethereum, and ARK). - -It supports import and export of over 220 data, image, video, sound, computer-aided design (CAD), geographic information systems (GIS), document, and biomedical formats. - -Mathematica is also integrated with Wolfram Alpha, an online computational knowledge answer engine that provides additional data, some of which is kept updated in real time, for users who use Mathematica with an internet connection. Some of the data sets include astronomical, chemical, geopolitical, language, biomedical, and weather data, in addition to mathematical data (such as knots and polyhedra). - -BYTE in 1989 listed Mathematica as among the "Distinction" winners of the BYTE Awards, stating that it "is another breakthrough Macintosh application ... it could enable you to absorb the algebra and calculus that seemed impossible to comprehend from a textbook". Mathematica has been criticized for being closed source. Wolfram Research claims keeping Mathematica closed source is central to its business model and the continuity of the software. diff --git a/wiki/wikipedia/675.txt b/wiki/wikipedia/675.txt deleted file mode 100644 index a941cb3ec99877363c41cdada4ad20fbf2d41105..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/675.txt +++ /dev/null @@ -1,80 +0,0 @@ -In mathematics, the Noether normalization lemma is a result of commutative algebra, introduced by Emmy Noether in 1926. It states that for any field k, and any finitely generated commutative k-algebra A, there exists a non-negative integer d and algebraically independent elements $y_1, y_2, \ldots, y_d$ in A such that A is a finitely generated module over the polynomial ring $S = k[y_1, y_2, \ldots, y_d]$. - -The integer d above is uniquely determined; it is the Krull dimension of the ring A. When A is an integral domain, d is also the transcendence degree of the field of fractions of A over k. - -The theorem has a geometric interpretation. Suppose A is integral. Let S be the coordinate ring of the d-dimensional affine space $\mathbb A^d_k$, and let A be the coordinate ring of some other d-dimensional affine variety X. Then the inclusion map S → A induces a surjective finite morphism of affine varieties $X\to \mathbb A^d_k$. The conclusion is that any affine variety is a branched covering of affine space. - -When k is infinite, such a branched covering map can be constructed by taking a general projection from an affine space containing X to a d-dimensional subspace.
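For a concrete illustration of the lemma, and of Nagata's change of variables used in the proof below, consider the coordinate ring of a hyperbola, $A = k[x, y]/(xy - 1)$ (a standard worked example, added here for illustration). $A$ is not finite over $k[x]$, since $y = 1/x$ is not integral over $k[x]$. However, after the substitution $z = y - x^2$ (the case $m = 2$, $r = 2$ of the substitution $z_i = y_i - y_1^{r^{i-1}}$ below), the relation $xy - 1 = 0$ becomes
$$
x^3 + zx - 1 = 0,
$$
which is monic in $x$ over $k[z]$. Hence $A$ is generated as a $k[z]$-module by $1, x, x^2$, so $A$ is a finitely generated module over the polynomial ring $k[z]$, with $d = 1$ matching the dimension of the hyperbola.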
- -More generally, in the language of schemes, the theorem can equivalently be stated as follows: every affine k-scheme (of finite type) X is finite over an affine n-dimensional space. The theorem can be refined to include a chain of ideals of A (equivalently, closed subsets of X) that are finite over the affine coordinate subspaces of the appropriate dimensions. - -The form of the Noether normalization lemma stated above can be used as an important step in proving Hilbert's Nullstellensatz. This gives it further geometric importance, at least formally, as the Nullstellensatz underlies the development of much of classical algebraic geometry. The theorem is also an important tool in establishing the notion of Krull dimension for k-algebras. - -The following proof is due to Nagata and is taken from Mumford's red book. A proof in a geometric flavor is also given on page 127 of the red book. - -The ring A in the lemma is generated as a k-algebra by elements, say, $y_1, ..., y_m$. We shall induct on m. If $m = 0$, then the assertion is trivial. Assume now $m > 0$. It is enough to show that there is a subring S of A that is generated by $m-1$ elements, such that A is finite over S. Indeed, by the inductive hypothesis, we can find algebraically independent elements $x_1, ..., x_d$ of S such that S is finite over $k[x_1, ..., x_d]$. - -Since otherwise there would be nothing to prove, we can also assume that there is a nonzero polynomial f in m variables over k such that -$$ -f(y_1, \ldots, y_m) = 0 -$$. - -Given an integer r, which will be determined later, set -$$ -z_i = y_i - y_1^{r^{i-1}}, \quad 2 \le i \le m. -$$ - -Then the preceding reads: -$$ -f(y_1, z_2 + y_1^r, z_3 + y_1^{r^2}, \ldots, z_m + y_1^{r^{m-1}}) = 0 -$$. - -Now, if $a y_1^{\alpha_1} \prod_2^m (z_i + y_1^{r^{i-1}})^{\alpha_i}$ is a monomial appearing in $f$, with coefficient $a \in k$, the highest term in $y_1$ after expanding the product looks like -$$ -a y_1^{\alpha_1 + r \alpha_2 + \cdots + \alpha_m r^{m-1}}. -$$ - -Whenever the above exponent agrees with the highest $y_1$ exponent produced by some other monomial, it is possible that the highest term in $y_1$ of $f(y_1, z_2 + y_1^r, z_3 + y_1^{r^2}, ..., z_m + y_1^{r^{m-1}})$ will not be of the above form, because it may be affected by cancellation. However, if r is larger than any exponent appearing in f, then each $\alpha_1 + r \alpha_2 + \cdots + \alpha_m r^{m-1}$ encodes a unique base-r number, so this does not occur. Thus $y_1$ is integral over $S = k[z_2, ..., z_m]$. Since $y_i = z_i + y_1^{r^{i-1}}$ are also integral over that ring, A is integral over S. It follows that A is finite over S, and since S is generated by m-1 elements, by the inductive hypothesis we are done. - -If A is an integral domain, then d is the transcendence degree of its field of fractions. Indeed, A and $S = k[y_1, ..., y_d]$ have the same transcendence degree (i.e., the transcendence degree of the field of fractions) since the field of fractions of A is algebraic over that of S (as A is integral over S) and S has transcendence degree d. Thus, it remains to show the Krull dimension of the polynomial ring S is d. (This is also a consequence of dimension theory.) We induct on d, with the case $d=0$ being trivial. Since $0 \subsetneq (y_1) \subsetneq (y_1, y_2) \subsetneq \cdots \subsetneq (y_1, \dots, y_d)$ is a chain of prime ideals, the dimension is at least d. To get the reverse estimate, let $0 \subsetneq \mathfrak{p}_1 \subsetneq \cdots \subsetneq \mathfrak{p}_m$ be a chain of prime ideals.
Let $0 \ne u \in \mathfrak{p}_1$. We apply the Noether normalization lemma again and get $T = k[u, z_2, \dots, z_d]$ (in the normalization process, we're free to choose the first variable) such that S is integral over T. By the inductive hypothesis, $T/(u)$ has dimension d - 1. By incomparability, the $\mathfrak{p}_i \cap T$ form a chain of length $m$, and then, in $T/(\mathfrak{p}_1 \cap T)$, they become a chain of length $m-1$. Since $\operatorname{dim} T/(\mathfrak{p}_1 \cap T) \le \operatorname{dim} T/(u)$, we have $m - 1 \le d - 1$. Hence, $\dim S \le d$. - -The following refinement appears in Eisenbud's book, which builds on Nagata's idea: - -{{math_theorem - -|Let A be a finitely generated algebra over a field k, and $I_1 \subset \dots \subset I_m$ be a chain of ideals such that $\operatorname{dim} (A/I_i) = d_i > d_{i+1}.$ Then there exist algebraically independent elements $y_1, \dots, y_d$ in A such that - -# A is a finitely generated module over the polynomial subring $S = k[y_1, \dots, y_d]$. - -# $I_i \cap S = (y_{d_i+1}, \dots, y_d)$. - -# If the $I_i$'s are homogeneous, then the $y_i$'s may be taken to be homogeneous. - -Moreover, if k is an infinite field, then any sufficiently general choice of the $y_i$'s has Property 1 above ("sufficiently general" is made precise in the proof).}} - -Geometrically speaking, the last part of the theorem says that for $X = \operatorname{Spec} A \subset \mathbf{A}^m$ any general linear projection $\mathbf{A}^m \to \mathbf{A}^d$ induces a finite morphism $X \to \mathbf{A}^d$ (cf. the lede). - -Let A be an integral domain that is a finitely generated algebra over a field. If $\mathfrak{p}$ is a prime ideal of A, then -$$ -\dim A = \operatorname{height} \mathfrak{p} + \dim A/\mathfrak{p} -$$. - -In particular, the Krull dimension of the localization of A at any maximal ideal is dim A. - -Let $A \subset B$ be integral domains that are finitely generated algebras over a field. Then -$$ -\dim B = \dim A + \operatorname{tr.deg}_{Q(A)} Q(B) -$$ - -(the special case of Nagata's altitude formula). - -The proof of generic freeness (stated below) illustrates a typical yet nontrivial application of the normalization lemma. Generic freeness says: let $A, B$ be rings such that $A$ is a Noetherian integral domain and suppose there is a ring homomorphism $A \to B$ that exhibits $B$ as a finitely generated algebra over $A$. Then there is some $0 \ne g \in A$ such that $B[g^{-1}]$ is a free $A[g^{-1}]$-module. - -Let $F$ be the fraction field of $A$. We argue by induction on the Krull dimension of $F \otimes_A B$. The base case is when the Krull dimension is $-\infty$; i.e., $F \otimes_A B = 0$. This is to say there is some $0 \ne g \in A$ such that $g B = 0$, and so $B[g^{-1}]$ is free as an $A[g^{-1}]$-module. For the inductive step, note $F \otimes_A B$ is a finitely generated $F$-algebra. Hence, by the Noether normalization lemma, $F \otimes_A B$ contains algebraically independent elements $x_1, \dots, x_d$ such that $F \otimes_A B$ is finite over the polynomial ring $F[x_1, \dots, x_d]$. Multiplying each $x_i$ by elements of $A$, we can assume $x_i$ are in $B$. We now consider: -$$ -A' := A[x_1, \dots, x_d] \to B. -$$ - -It need not be the case that $B$ is finite over $A'$. But that will be the case after inverting a single element, as follows. If $b$ is an element of $B$, then, as an element of $F \otimes_A B$, it is integral over $F[x_1, \dots, x_d]$; i.e., $b^n + a_1 b^{n-1} + \dots + a_n = 0$ for some $a_i$ in $F[x_1, \dots, x_d]$.
Thus, some $0 \ne g \in A$ kills all the denominators of the coefficients of the $a_i$, and so $b$ is integral over $A'[g^{-1}]$. Choosing finitely many generators of $B$ as an $A'$-algebra and applying this observation to each generator, we find some $0 \ne g \in A$ such that $B[g^{-1}]$ is integral (thus finite) over $A'[g^{-1}]$. Replacing $B, A$ by $B[g^{-1}], A[g^{-1}]$, we can assume $B$ is finite over $A' := A[x_1, \dots, x_d]$. - -To finish, consider a finite filtration $B = B_0 \supset B_1 \supset B_2 \supset \cdots \supset B_r$ by $A'$-submodules such that $B_i / B_{i+1} \simeq A'/\mathfrak{p}_i$ for prime ideals $\mathfrak{p}_i$ (such a filtration exists by the theory of associated primes). For each i, if $\mathfrak{p}_i \ne 0$, by the inductive hypothesis, we can choose some $g_i \ne 0$ in $A$ such that $A'/\mathfrak{p}_i[g_i^{-1}]$ is free as an $A[g_i^{-1}]$-module, while $A'$ is a polynomial ring and thus free. Hence, with $g = g_0 \cdots g_r$, $B[g^{-1}]$ is a free module over $A[g^{-1}]$. $\square$ diff --git a/wiki/wikipedia/676.txt b/wiki/wikipedia/676.txt deleted file mode 100644 index 0a816dd5a5eef0b7b8265e34a3072637fcfacde4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/676.txt +++ /dev/null @@ -1,50 +0,0 @@ -In computer science, a one-way function is a function that is easy to compute on every input, but hard to invert given the image of a random input. Here, "easy" and "hard" are to be understood in the sense of computational complexity theory, specifically the theory of polynomial time problems. Not being one-to-one is not considered sufficient for a function to be called one-way (see Theoretical definition, below). - -The existence of such one-way functions is still an open conjecture. In fact, their existence would prove that the complexity classes P and NP are not equal, thus resolving the foremost unsolved question of theoretical computer science. The converse is not known to be true, i.e. the existence of a proof that P≠NP would not directly imply the existence of one-way functions. - -In applied contexts, the terms "easy" and "hard" are usually interpreted relative to some specific computing entity; typically "cheap enough for the legitimate users" and "prohibitively expensive for any malicious agents". One-way functions, in this sense, are fundamental tools for cryptography, personal identification, authentication, and other data security applications. While the existence of one-way functions in this sense is also an open conjecture, there are several candidates that have withstood decades of intense scrutiny. Some of them are essential ingredients of most telecommunications, e-commerce, and e-banking systems around the world. - -A function f : {0,1}* → {0,1}* is one-way if f can be computed by a polynomial time algorithm, but any polynomial time randomized algorithm $F$ that attempts to compute a pseudo-inverse for f succeeds with negligible probability. (The * superscript means any number of repetitions, see Kleene star.) That is, for all randomized algorithms $F$, all positive integers c and all sufficiently large n = length(x), -$$ -\Pr[f(F(f(x))) = f(x)] < n^{-c}, -$$ - -where the probability is over the choice of x from the discrete uniform distribution on $\{0,1\}^n$, and the randomness of $F$. - -Note that, by this definition, the function must be "hard to invert" in the average-case, rather than worst-case sense. This is different from much of complexity theory (e.g., NP-hardness), where the term "hard" is meant in the worst-case sense.
That is why even if the problem of inverting some candidates for one-way functions (described below) were NP-complete, this would not imply their one-wayness. The latter property rests only on the lack of any known efficient algorithm for solving the underlying problem. - -It is not sufficient to make a function "lossy" (not one-to-one) to have a one-way function. In particular, the function that outputs the string of n zeros on any input of length n is not a one-way function because it is easy to come up with an input that will result in the same output. More precisely: For such a function that simply outputs a string of zeroes, an algorithm F that just outputs any string of length n on input f(x) will "find" a proper preimage of the output, even if it is not the input which was originally used to find the output string. - -A one-way permutation is a one-way function that is also a permutation; that is, a one-way function that is bijective. One-way permutations are an important cryptographic primitive, and it is not known if their existence is implied by the existence of one-way functions. - -A trapdoor one-way function or trapdoor permutation is a special kind of one-way function. Such a function is hard to invert unless some secret information, called the trapdoor, is known. - -A collision-free hash function f is a one-way function that is also collision-resistant; that is, no randomized polynomial time algorithm can find a collision (distinct values x, y such that f(x) = f(y)) with non-negligible probability. - -If f is a one-way function, then the inversion of f would be a problem whose output is hard to compute (by definition) but easy to check (just by computing f on it). Thus, the existence of a one-way function implies that FP≠FNP, which in turn implies that P≠NP. However, P≠NP does not imply the existence of one-way functions. - -The existence of a one-way function implies the existence of many other useful concepts, including: - -*Pseudorandom generators - -*Pseudorandom function families - -*Bit commitment schemes - -*Private-key encryption schemes secure against adaptive chosen-ciphertext attack - -*Message authentication codes - -*Digital signature schemes (secure against adaptive chosen-message attack) - -The existence of one-way functions also implies that there is no natural proof for P≠NP. - -The following are several candidates for one-way functions (as of April 2009). Clearly, it is not known whether these functions are indeed one-way; but extensive research has so far failed to produce an efficient inverting algorithm for any of them. - -The function f takes as inputs two prime numbers p and q in binary notation and returns their product. This function can be "easily" computed in $O(b^2)$ time, where b is the total number of bits of the inputs. Inverting this function requires finding the factors of a given integer N. The best factoring algorithms known run in $O\left(\exp\sqrt[3]{\frac{64}{9} b (\log b)^2}\right)$ time, where b is the number of bits needed to represent N. - -This function can be generalized by allowing p and q to range over a suitable set of semiprimes. Note that f is not one-way for randomly selected integers p, q > 1, since the product will have 2 as a factor with probability 3/4 (because the probability that an arbitrary p is odd is 1/2, and likewise for q, so if they're chosen independently, the probability that both are odd is therefore 1/4; hence the probability that p or q is even is 1 − 1/4 = 3/4). - -The Rabin function, which squares its input modulo a fixed composite N, is another candidate; inverting it is known to be as hard as factoring N. There is also an explicit combinatorial construction, due to Levin, of a function f such that if any function at all is one-way, then so is f.
Since this function was the first combinatorial complete one-way function to be demonstrated, it is known as the "universal one-way function". The problem of finding a one-way function is thus reduced to proving that one such function exists. diff --git a/wiki/wikipedia/677.txt b/wiki/wikipedia/677.txt deleted file mode 100644 index 55cbebdb428ce0d3782163748538568261d9cf8c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/677.txt +++ /dev/null @@ -1,3 +0,0 @@ -In differential geometry, Yau's conjecture from 1982 is a mathematical conjecture which states that a closed Riemannian three-manifold has an infinite number of smooth closed immersed minimal surfaces. It is named after Shing-Tung Yau. It was the first problem in the Minimal submanifolds section in Yau's list of open problems. - -The conjecture has recently been claimed by Kei Irie, Fernando Codá Marques and André Neves in the generic case, and by Antoine Song in full generality. diff --git a/wiki/wikipedia/678.txt b/wiki/wikipedia/678.txt deleted file mode 100644 index 41bc9d225bb8435c593d9ef5032748e7c4660743..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/678.txt +++ /dev/null @@ -1,3 +0,0 @@ -In algebraic topology, the nilpotence theorem gives a condition for an element of the coefficient ring of a ring spectrum to be nilpotent, in terms of complex cobordism. It was conjectured by Ravenel and proved by Devinatz, Hopkins, and Smith. - -Nishida showed that elements of positive degree of the homotopy groups of spheres are nilpotent. This is a special case of the nilpotence theorem. diff --git a/wiki/wikipedia/679.txt b/wiki/wikipedia/679.txt deleted file mode 100644 index 3af4dc0acda07eaf21957276edbc0590e80de051..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/679.txt +++ /dev/null @@ -1,15 +0,0 @@ -In geometry, the Japanese theorem states that no matter how one triangulates a cyclic polygon, the sum of inradii of triangles is constant. - -Conversely, if the sum of inradii is independent of the triangulation, then the polygon is cyclic. The Japanese theorem follows from Carnot's theorem; it is a Sangaku problem. - -This theorem can be proven by first proving a special case: no matter how one triangulates a cyclic quadrilateral, the sum of inradii of triangles is constant. - -After proving the quadrilateral case, the general case of the cyclic polygon theorem is an immediate corollary. The quadrilateral rule can be applied to quadrilateral components of a general partition of a cyclic polygon, and repeated application of the rule, which "flips" one diagonal, will generate all the possible partitions from any given partition, with each "flip" preserving the sum of the inradii. - -The quadrilateral case follows from a simple extension of the Japanese theorem for cyclic quadrilaterals, which shows that a rectangle is formed by the two pairs of incenters corresponding to the two possible triangulations of the quadrilateral. The steps of this theorem require nothing beyond basic constructive Euclidean geometry. - -With the additional construction of a parallelogram having sides parallel to the diagonals, and tangent to the corners of the rectangle of incenters, the quadrilateral case of the cyclic polygon theorem can be proved in a few steps. The equality of the sums of the radii of the two pairs is equivalent to the condition that the constructed parallelogram be a rhombus, and this is easily shown in the construction.
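The invariance is easy to check numerically in the quadrilateral case. The following Python sketch (an added illustration; the point names and angles are chosen for the example) inscribes a quadrilateral in the unit circle and compares the inradius sums of its two triangulations:

from math import cos, sin, dist

def inradius(p, q, r):
    # Inradius of a triangle: area divided by semiperimeter.
    a, b, c = dist(q, r), dist(p, r), dist(p, q)
    s = (a + b + c) / 2
    area = abs((q[0]-p[0])*(r[1]-p[1]) - (r[0]-p[0])*(q[1]-p[1])) / 2
    return area / s

# Four points on the unit circle, in cyclic order.
A, B, C, D = [(cos(t), sin(t)) for t in (0.3, 1.1, 2.9, 4.4)]

sum1 = inradius(A, B, C) + inradius(A, C, D)  # triangulation using diagonal AC
sum2 = inradius(A, B, D) + inradius(B, C, D)  # triangulation using diagonal BD
print(sum1, sum2)  # equal up to floating-point error

The two printed sums agree, as the theorem predicts: flipping the diagonal preserves the total inradius.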
- -Another proof of the quadrilateral case is due to Wilfred Reyes (2002). In the proof, both the Japanese theorem for cyclic quadrilaterals and the quadrilateral case of the cyclic polygon theorem are proven as a consequence of Thébault's problem III. diff --git a/wiki/wikipedia/68.txt b/wiki/wikipedia/68.txt deleted file mode 100644 index c9837d9ae94d8bde1c837eab6c8d0dae45c552eb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/68.txt +++ /dev/null @@ -1,95 +0,0 @@ -In computer science, fringe search is a graph search algorithm that finds the least-cost path from a given initial node to one goal node. - -In essence, fringe search is a middle ground between A* and the iterative deepening A* variant (IDA*). - -If g(x) is the cost of the search path from the first node to the current node, and h(x) is the heuristic estimate of the cost from the current node to the goal, then ƒ(x) = g(x) + h(x), and h* is the actual path cost to the goal. Consider IDA*, which does a recursive left-to-right depth-first search from the root node, stopping the recursion once the goal has been found or the nodes have reached a maximum value ƒ. If no goal is found in the first threshold ƒ, the threshold is then increased and the algorithm searches again. That is, it iterates on the threshold. - -There are three major inefficiencies with IDA*. First, IDA* will repeat states when there are multiple (sometimes non-optimal) paths to a goal node - this is often solved by keeping a cache of visited states. IDA* thus altered is denoted as memory-enhanced IDA* (ME-IDA*), since it uses some storage. Furthermore, IDA* repeats all previous operations in a search when it iterates to a new threshold, which is necessary when it operates with no storage. By storing the leaf nodes of a previous iteration and using them as the starting position of the next, IDA*'s efficiency is significantly improved (otherwise, in the last iteration it would always have to visit every node in the tree). - -Fringe search implements these improvements on IDA* by making use of a data structure that is more or less two lists to iterate over the frontier or fringe of the search tree. One list, now, stores the current iteration, and the other list, later, stores the immediate next iteration. So from the root node of the search tree, now will be the root and later will be empty. Then the algorithm takes one of two actions: If ƒ(head) is greater than the current threshold, remove head from now and append it to the end of later; i.e. save head for the next iteration. Otherwise, if ƒ(head) is less than or equal to the threshold, expand head, discard head, and consider its children, adding them to the beginning of now. At the end of an iteration, the threshold is increased, the later list becomes the now list, and later is emptied. - -An important difference here between fringe and A* is that the contents of the lists in fringe do not necessarily have to be sorted - a significant gain over A*, which requires the often expensive maintenance of order in its open list. Unlike A*, however, fringe will have to visit the same nodes repeatedly, but the cost of each such visit is constant compared to the worst-case logarithmic time of sorting the list in A*. - -Both lists can be implemented in one doubly linked list, where nodes that precede the current node are the later portion and all the rest are the now list. Using an array of pre-allocated nodes in the list for each node in the grid, access time to nodes in the list is reduced to a constant. Similarly, a marker array allows lookup of a node in the list to be done in constant time.
g is stored as a hash-table, and a last marker array is stored for constant-time lookup of whether or not a node has been visited before and if a cache entry is valid. - -
init(start, goal)
    fringe F = start
    cache C[start] = (0, null)
    flimit = h(start)
    found = false

    while (found == false) AND (F not empty)
        fmin = ∞
        for node in F, from left to right
            (g, parent) = C[node]
            f = g + h(node)
            if f > flimit
                fmin = min(f, fmin)
                continue
            if node == goal
                found = true
                break
            for child in children(node), from right to left
                g_child = g + cost(node, child)
                if C[child] != null
                    (g_cached, parent) = C[child]
                    if g_child >= g_cached
                        continue
                if child in F
                    remove child from F
                insert child in F past node
                C[child] = (g_child, node)
            remove node from F
        flimit = fmin

    if found == true
        reverse_path(goal)
- -Pseudo-code for reconstructing the path, by following cached parent pointers back from the goal: - -
reverse_path(node)
    (g, parent) = C[node]
    if parent != null
        reverse_path(parent)
    print node
- -When tested on grid-based environments typical of computer games, including impassable obstacles, fringe outperformed A* by some 10 percent to 40 percent, depending on use of tiles or octiles. Possible further improvements include use of a data structure that lends itself more easily to caches. diff --git a/wiki/wikipedia/680.txt b/wiki/wikipedia/680.txt deleted file mode 100644 index 6dabe2bbd0abff54fd39cd7914580095e6c25877..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/680.txt +++ /dev/null @@ -1,199 +0,0 @@ -The Wigner–Eckart theorem is a theorem of representation theory and quantum mechanics. It states that matrix elements of spherical tensor operators in the basis of angular momentum eigenstates can be expressed as the product of two factors, one of which is independent of angular momentum orientation, and the other a Clebsch–Gordan coefficient. The name derives from physicists Eugene Wigner and Carl Eckart, who developed the formalism as a link between the symmetry transformation groups of space (applied to the Schrödinger equations) and the laws of conservation of energy, momentum, and angular momentum. - -Mathematically, the Wigner–Eckart theorem is generally stated in the following way. Given a tensor operator $T^{(k)}$ and two states of angular momenta $j$ and $j'$, there exists a constant $\langle j \| T^{(k)} \| j' \rangle$ such that for all $m$, $m'$, and $q$, the following equation is satisfied: - - - -\langle j m | T^{(k)}_q | j' m'\rangle - -= \langle j' m' k q | j m \rangle \langle j \| T^{(k)} \| j'\rangle, - - - -where - -*$T^{(k)}_q$ is the q-th component of the spherical tensor operator $T^{(k)}$ of rank k, - -* $|j m\rangle$ denotes an eigenstate of total angular momentum $J^2$ and its z component $J_z$, - -* $\langle j' m' k q | j m\rangle$ is the Clebsch–Gordan coefficient for coupling j′ with k to get j, - -* $\langle j \| T^{(k)} \| j' \rangle$ denotes some value that does not depend on m, m′, nor q and is referred to as the reduced matrix element. - -In effect, the Wigner–Eckart theorem states that operating with a spherical tensor operator of rank k on an angular momentum eigenstate is like adding a state with angular momentum k to the state. The matrix element one finds for the spherical tensor operator is proportional to a Clebsch–Gordan coefficient, which arises when considering adding two angular momenta. Stated another way, the Wigner–Eckart theorem tells how vector operators behave in a subspace.
Within a given subspace, a component of a vector operator will behave in a way proportional to the same component of the angular momentum operator. This definition is given in the book Quantum Mechanics by Cohen–Tannoudji, Diu and Laloë. - -Let's say we want to calculate transition dipole moments for an electron transition from a 4d to a 2p orbital of a hydrogen atom, i.e. the matrix elements of the form $\langle 2p,m_1 | r_i | 4d,m_2 \rangle$, where $r_i$ is either the x, y, or z component of the position operator, and $m_1$, $m_2$ are the magnetic quantum numbers that distinguish different orbitals within the 2p or 4d subshell. If we do this directly, it involves calculating 45 different integrals: there are 3 possibilities for $m_1$ (−1, 0, 1), 5 possibilities for $m_2$ (−2, −1, 0, 1, 2), and 3 possibilities for i, so the total is 3 × 5 × 3 = 45. - -The Wigner–Eckart theorem allows one to obtain the same information after evaluating just one of those 45 integrals (any of them can be used, as long as it is nonzero). Then the other 44 integrals can be inferred from that first one, without the need to write down any wavefunctions or evaluate any integrals, with the help of Clebsch–Gordan coefficients, which can be easily looked up in a table or computed by hand or computer. - -The Wigner–Eckart theorem works because all 45 of these different calculations are related to each other by rotations. If an electron is in one of the 2p orbitals, rotating the system will generally move it into a different 2p orbital (usually it will wind up in a quantum superposition of all three basis states, m = +1, 0, −1). Similarly, if an electron is in one of the 4d orbitals, rotating the system will move it into a different 4d orbital. Finally, an analogous statement is true for the position operator: when the system is rotated, the three different components of the position operator are effectively interchanged or mixed. - -If we start by knowing just one of the 45 values (say, we know that $\langle 2p,m_1 | r_i | 4d,m_2 \rangle = K$) and then we rotate the system, we can infer that K is also the matrix element between the rotated version of $\langle 2p,m_1 |$, the rotated version of $r_i$, and the rotated version of $| 4d,m_2 \rangle$. This gives an algebraic relation involving K and some or all of the 44 unknown matrix elements. Different rotations of the system lead to different algebraic relations, and it turns out that there is enough information to figure out all of the matrix elements in this way. - -(In practice, when working through this math, we usually apply angular momentum operators to the states, rather than rotating the states. But this is fundamentally the same thing, because of the close mathematical relation between rotations and angular momentum operators.) - -To state these observations more precisely and to prove them, it helps to invoke the mathematics of representation theory. For example, the set of all possible 4d orbitals (i.e., the 5 states m = −2, −1, 0, 1, 2 and their quantum superpositions) form a 5-dimensional abstract vector space. Rotating the system transforms these states into each other, so this is an example of a "group representation", in this case, the 5-dimensional irreducible representation ("irrep") of the rotation group SU(2) or SO(3), also called the "spin-2 representation". Similarly, the 2p quantum states form a 3-dimensional irrep (called "spin-1"), and the components of the position operator also form the 3-dimensional "spin-1" irrep.
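Since Clebsch–Gordan coefficients can be computed mechanically, this bookkeeping is easy to delegate to a computer algebra system. The following SymPy sketch (an added illustration; it assumes the 4d → 2p dipole setting described above, with $j' = 2$, $k = 1$, $j = 1$) tabulates the coefficients $\langle j' m' k q | j m \rangle$ that, by the theorem, relate all 45 matrix elements to a single reduced matrix element:

from sympy.physics.quantum.cg import CG

# <j m | T^(k)_q | j' m'> is proportional to CG(j', m', k, q, j, m),
# with j = 1 (the 2p level), k = 1 (dipole operator), j' = 2 (the 4d level).
j, k, jp = 1, 1, 2
for m in (-1, 0, 1):
    for q in (-1, 0, 1):
        mp = m - q  # the coefficient vanishes unless m' + q = m
        if abs(mp) <= jp:
            c = CG(jp, mp, k, q, j, m).doit()
            print(f"m={m:+d}  q={q:+d}  m'={mp:+d}   CG = {c}")

Evaluating any single nonzero matrix element then fixes the reduced matrix element, and this table determines the remaining 44.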
- -Now consider the matrix elements $\langle 2p,m_1 | r_i | 4d,m_2 \rangle$. It turns out that these are transformed by rotations according to the tensor product of those three representations, i.e. the spin-1 representation of the 2p orbitals, the spin-1 representation of the components of r, and the spin-2 representation of the 4d orbitals. This direct product, a 45-dimensional representation of SU(2), is not an irreducible representation; instead, it is the direct sum of a spin-4 representation, two spin-3 representations, three spin-2 representations, two spin-1 representations, and a spin-0 (i.e. trivial) representation. The nonzero matrix elements can only come from the spin-0 subspace. The Wigner–Eckart theorem works because the direct product decomposition contains one and only one spin-0 subspace, which implies that all the matrix elements are determined by a single scale factor. - -Apart from the overall scale factor, calculating the matrix element $\langle 2p,m_1 | r_i | 4d,m_2 \rangle$ is equivalent to calculating the projection of the corresponding abstract vector (in 45-dimensional space) onto the spin-0 subspace. The results of this calculation are the Clebsch–Gordan coefficients. The key qualitative aspect of the Clebsch–Gordan decomposition that makes the argument work is that in the decomposition of the tensor product of two irreducible representations, each irreducible representation occurs only once. This allows Schur's lemma to be used. - -Starting with the definition of a spherical tensor operator, we have -$$ -[J_\pm, T^{(k)}_q] = \hbar \sqrt{(k \mp q)(k \pm q + 1)}T_{q\pm 1}^{(k)}, -$$ - -which we use to then calculate - - - -\begin{align} - -&\langle j m | [J_\pm, T^{(k)}_q] | j' m' - -\rangle = \hbar \sqrt{(k \mp q) (k \pm q + 1)} - -\langle j m | T^{(k)}_{q \pm 1} | j' m' \rangle. - -\end{align} - - - -If we expand the commutator on the LHS by calculating the action of the J± on the bra and ket, then we get - - - -\begin{align} - -\langle j m | [J_\pm, T^{(k)}_q] | j' m' - -\rangle ={} &\hslash\sqrt{(j \pm m) (j \mp m + 1)} \langle j (m \mp 1) | T^{(k)}_q | j' m' \rangle \\ - -&-\hslash\sqrt{(j' \mp m')(j' \pm m' + 1)} \langle j m | T^{(k)}_q | j' (m' \pm 1) \rangle. - -\end{align} - - - -We may combine these two results to get - - - -\begin{align} - -\sqrt{(j \pm m) (j \mp m + 1)} \langle j (m \mp 1) | T^{(k)}_q | j' m' - -\rangle = &\sqrt{(j' \mp m') (j' \pm m' + 1)} \langle j m | T^{(k)}_q | j' (m' \pm 1) \rangle \\ - -&+\sqrt{(k \mp q) (k \pm q + 1)} \langle j m | T^{(k)}_{q \pm 1} | j' m' \rangle. - -\end{align} - - - -This recursion relation for the matrix elements closely resembles that of the Clebsch–Gordan coefficient. In fact, both are of the form $\sum_c a_{b, c} x_c = 0$. We therefore have two sets of linear homogeneous equations: - - - -\begin{align} - -\sum_c a_{b, c} x_c &= 0, & - -\sum_c a_{b, c} y_c &= 0. - -\end{align} - - - -one for the Clebsch–Gordan coefficients ($x_c$) and one for the matrix elements ($y_c$). It is not possible to exactly solve for the $x_c$. We can only say that the ratios are equal, that is -$$ -\frac{x_c}{x_d} = \frac{y_c}{y_d} -$$ - -or that $x_c \propto y_c$, where the coefficient of proportionality is independent of the indices. Hence, by comparing recursion relations, we can identify the Clebsch–Gordan coefficient $\langle j m k (q \pm 1) | j' m' \rangle$ with the matrix element $\langle j' m' | T^{(k)}_{q \pm 1} | j m \rangle$; then we may write - - - -\langle j' m' | T^{(k)}_{q \pm 1} | j m\rangle - -\propto \langle j m k (q \pm 1) | j' m' \rangle.
- -There are different conventions for the reduced matrix elements. One convention, used by Racah and Wigner, includes an additional phase and normalization factor, - - - -\langle j m | T^{(k)}_q | j' m'\rangle - -= \frac{(-1)^{2 k} \langle j' m' k q | j m \rangle \langle j \| T^{(k)} \| j'\rangle_{\mathrm{R}}}{\sqrt{2 j + 1}} - -= (-1)^{j - m} - -\begin{pmatrix} - -j & k & j' \\ - --m & q & m' - -\end{pmatrix} \langle j \| T^{(k)} \| j'\rangle_{\mathrm{R}}. - - - -where the 2 × 3 array denotes the 3-j symbol. (Since in practice k is often an integer, the $(-1)^{2k}$ factor is sometimes omitted in literature.) With this choice of normalization, the reduced matrix element satisfies the relation: -$$ -\langle j \| T^{\dagger (k)} \| j'\rangle_{\mathrm{R}} = (-1)^{k + j' - j} \langle j' \| T^{(k)} \| j\rangle_{\mathrm{R}}^*, -$$ - -where the Hermitian adjoint is defined with the k − q convention. Although this relation is not affected by the presence or absence of the $(-1)^{2k}$ phase factor in the definition of the reduced matrix element, it is affected by the phase convention for the Hermitian adjoint. - -Another convention for reduced matrix elements is that of Sakurai's Modern Quantum Mechanics: - - - -\langle j m | T^{(k)}_q | j' m'\rangle - -= \frac{\langle j' m' k q | j m \rangle \langle j \| T^{(k)} \| j'\rangle_{\mathrm{S}}}{\sqrt{2 j' + 1}}. - - - -Consider the position expectation value $\langle n j m | x | n j m \rangle$. This matrix element is the expectation value of a Cartesian operator in a spherically symmetric hydrogen-atom-eigenstate basis, which is a nontrivial problem. However, the Wigner–Eckart theorem simplifies the problem. (In fact, we could obtain the solution quickly using parity, although a slightly longer route will be taken.) - -We know that x is one component of r, which is a vector. Since vectors are rank-1 spherical tensor operators, it follows that x must be some linear combination of a rank-1 spherical tensor $T^{(1)}_q$ with q ∈ {−1, 0, 1}. In fact, it can be shown that -$$ -x = \frac{T^{(1)}_{-1} - T^{(1)}_1}{\sqrt{2}}, -$$ - -where we define the spherical tensors as -$$ -T^{(1)}_{q} = \sqrt{\frac{4 \pi}{3}} r Y_1^q -$$ - -and $Y_l^m$ are spherical harmonics, which themselves are also spherical tensors of rank l. Additionally, $T^{(1)}_0 = z$, and -$$ -T^{(1)}_{\pm 1} = \mp \frac{x \pm i y}{\sqrt{2}}. -$$ - -Therefore, - - - -\begin{align} - -&\langle n j m | x | n' j' m' - -\rangle = \\ - -& = \left\langle n j m \left| \frac{T^{(1)}_{-1} - T^{(1)}_1}{\sqrt{2}} \right| n' j' m' - -\right\rangle = \\ - -& = \frac{1}{\sqrt{2}} \langle n j \| T^{(1)} \| n' j'\rangle - -\big(\langle j' m' 1 (-1) | j m \rangle - \langle j' m' 1 1 | j m \rangle\big). - -\end{align} - - - -The above expression gives us the matrix element for x in the $|n j m\rangle$ basis. To find the expectation value, we set n′ = n, j′ = j, and m′ = m. The selection rule for m′ and m is m ± 1 = m′ for the $T^{(1)}_{\pm 1}$ spherical tensors. As we have m′ = m, this makes the Clebsch–Gordan coefficients zero, so the expectation value is equal to zero. diff --git a/wiki/wikipedia/681.txt b/wiki/wikipedia/681.txt deleted file mode 100644 index 682a92be188698477a9d9b6c9625486608d4a01f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/681.txt +++ /dev/null @@ -1,29 +0,0 @@ -In algebra, Brahmagupta's identity says that, for given $n$, the product of two numbers of the form $a^2+nb^2$ is itself a number of that form. In other words, the set of such numbers is closed under multiplication.
Specifically: - -\begin{align} - -\left(a^2 + nb^2\right)\left(c^2 + nd^2\right) & {}= \left(ac-nbd\right)^2 + n\left(ad+bc\right)^2 & & & (1) \\ - -& {}= \left(ac+nbd\right)^2 + n\left(ad-bc\right)^2, & & & (2) - -\end{align} - -Both (1) and (2) can be verified by expanding each side of the equation. Also, (2) can be obtained from (1), or (1) from (2), by changing b to -b. - -This identity holds in both the ring of integers and the ring of rational numbers, and more generally in any commutative ring. - -The identity is a generalization of the so-called Fibonacci identity (where n=1) which is actually found in Diophantus' Arithmetica (III, 19). - -That identity was rediscovered by Brahmagupta (598-668), an Indian mathematician and astronomer, who generalized it and used it in his study of what is now called Pell's equation. His Brahmasphutasiddhanta was translated from Sanskrit into Arabic by Mohammad al-Fazari, and was subsequently translated into Latin in 1126. The identity later appeared in Fibonacci's Book of Squares in 1225. - -In its original context, Brahmagupta applied his discovery to the solution of what was later called Pell's equation, namely x2 - Ny2 = 1. Using the identity in the form -$$ -(x_1^2 - Ny_1^2)(x_2^2 - Ny_2^2) = (x_1x_2 + Ny_1y_2)^2 - N(x_1y_2 + x_2y_1)^2, -$$ - -he was able to "compose" triples (x1, y1, k1) and (x2, y2, k2) that were solutions of x2 - Ny2 = k, to generate the new triple -$$ -(x_1x_2 + Ny_1y_2 , x_1y_2 + x_2y_1 , k_1k_2). -$$ - -Not only did this give a way to generate infinitely many solutions to x2 - Ny2 = 1 starting with one solution, but also, by dividing such a composition by k1k2, integer or "nearly integer" solutions could often be obtained. The general method for solving the Pell equation given by Bhaskara II in 1150, namely the chakravala (cyclic) method, was also based on this identity. diff --git a/wiki/wikipedia/682.txt b/wiki/wikipedia/682.txt deleted file mode 100644 index 3f6b7dfb264a1f792f5a663c24df86ffc4c09649..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/682.txt +++ /dev/null @@ -1,74 +0,0 @@ -Menelaus's theorem, named for Menelaus of Alexandria, is a proposition about triangles in plane geometry. Suppose we have a triangle ABC, and a transversal line that crosses BC, AC, and AB at points D, E, and F respectively, with D, E, and F distinct from A, B, and C. Using signed lengths of segments (the length AB is taken to be positive or negative according to whether A is to the left or right of B in some fixed orientation of the line; for example, AF/FB is defined as having positive value when F is between A and B and negative otherwise), the theorem states -$$ -\frac{AF}{FB} \times \frac{BD}{DC} \times \frac{CE}{EA} = - 1. -$$ - -or equivalently -$$ -AF \times BD \times CE= - FB \times DC \times EA . -$$ - -Some authors organize the factors differently and obtain the seemingly different relation -$$ -\frac{FA}{FB} \times \frac{DB}{DC} \times \frac{EC}{EA} = 1, -$$ - -but as each of these factors is the negative of the corresponding factor above, the relation is seen to be the same. - -The converse is also true: If points D, E, and F are chosen on BC, AC, and AB respectively so that -$$ -\frac{AF}{FB} \times \frac{BD}{DC} \times \frac{CE}{EA} = -1, -$$ - -then D, E, and F are collinear. The converse is often included as part of the theorem. - -The theorem is very similar to Ceva's theorem in that their equations differ only in sign. 
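Before turning to the proofs, the signed form of the theorem can be sanity-checked numerically. The sketch below uses hypothetical coordinates (a triangle A, B, C cut by the transversal y = 2x − 1/2); the helper `signed_ratio` is an illustrative name, not standard notation:

```
# Numeric check of Menelaus's theorem with signed ratios of collinear points.
import numpy as np

def signed_ratio(p, q, r):
    """Signed ratio PQ/QR: the scalar s with vector PQ = s * vector QR."""
    pq, qr = q - p, r - q
    return np.dot(pq, qr) / np.dot(qr, qr)

A, B, C = np.array([0., 0.]), np.array([1., 0.]), np.array([0., 1.])
D = np.array([0.5, 0.5])    # transversal meets line BC here
E = np.array([0., -0.5])    # transversal meets line AC (extended) here
F = np.array([0.25, 0.])    # transversal meets line AB here
product = (signed_ratio(A, F, B) *   # AF/FB = 1/3
           signed_ratio(B, D, C) *   # BD/DC = 1
           signed_ratio(C, E, A))    # CE/EA = -3
print(product)  # -1.0, as the theorem predicts
```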
- -A standard proof is as follows:

- -First, the sign of the left-hand side will be negative since either all three of the ratios are negative, the case where the line DEF misses the triangle, or one is negative and the other two are positive, the case where DEF crosses two sides of the triangle. (See Pasch's axiom.)

- -To check the magnitude, construct perpendiculars from A, B, and C to the line DEF and let their lengths be a, b, and c respectively. Then by similar triangles it follows that |AF/FB| = |a/b|, |BD/DC| = |b/c|, and |CE/EA| = |c/a|. So
$$
\left|\frac{AF}{FB}\right| \cdot \left|\frac{BD}{DC}\right| \cdot \left|\frac{CE}{EA}\right| = \left| \frac{a}{b} \cdot \frac{b}{c} \cdot \frac{c}{a} \right| = 1. \quad\text{(Magnitude only)}
$$

- -For a simpler, if less symmetrical, way to check the magnitude, draw CK parallel to AB where DEF meets CK at K. Then by similar triangles
$$
\left|\frac{BD}{DC}\right| = \left|\frac{BF}{CK}\right|,\left|\frac{AE}{EC}\right| = \left|\frac{AF}{CK}\right|
$$

- -and the result follows by eliminating CK from these equations.

- -The converse follows as a corollary. Let D, E, and F be given on the lines BC, AC, and AB so that the equation holds. Let F′ be the point where DE crosses AB. Then by the theorem, the equation also holds for D, E, and F′. Comparing the two,
$$
\frac{AF}{FB} = \frac{AF'}{F'B}.
$$

- -But at most one point can cut a segment in a given ratio, so F = F′.

- -The following proof uses only notions of affine geometry, notably homotheties.

- -Whether or not D, E, and F are collinear, there are three homotheties with centers D, E, F that respectively send B to C, C to A, and A to B. The composition of the three then is an element of the group of homothety-translations that fixes B, so it is a homothety with center B, possibly with ratio 1 (in which case it is the identity). This composition fixes the line DE if and only if F is collinear with D and E (since the first two homotheties certainly fix DE, and the third does so only if F lies on DE). Therefore D, E, and F are collinear if and only if this composition is the identity, which means that the product of the three ratios is 1:

\frac{\overrightarrow{DC}}{\overrightarrow{DB}} \times \frac{\overrightarrow{EA}}{\overrightarrow{EC}} \times \frac{\overrightarrow{FB}}{\overrightarrow{FA}} = 1,

- -which is equivalent to the given equation.

- -It is uncertain who actually discovered the theorem; however, the oldest extant exposition appears in Spherics by Menelaus. In this book, the plane version of the theorem is used as a lemma to prove a spherical version of the theorem.

- -In the Almagest, Ptolemy applies the theorem to a number of problems in spherical astronomy. During the Islamic Golden Age, Muslim scholars devoted a number of works to the study of Menelaus's theorem, which they referred to as "the proposition on the secants" (shakl al-qatta). The complete quadrilateral was called the "figure of secants" in their terminology. Some of these works were composed as independent treatises, such as:

- -* The "Treatise on the Figure of Secants" (Risala fi shakl al-qatta) by Thabit ibn Qurra.

- -* Husam al-Din al-Salar's Removing the Veil from the Mysteries of the Figure of Secants (Kashf al-qina' 'an asrar al-shakl al-qatta'), also known as "The Book on the Figure of Secants" (Kitab al-shakl al-qatta) or in Europe as The Treatise on the Complete Quadrilateral. The lost treatise was referred to by Al-Tusi and Nasir al-Din al-Tusi.

- -* Work by al-Sijzi.
- -* Tahdhib by Abu Nasr ibn Iraq. - -* Roshdi Rashed and Athanase Papadopoulos, Menelaus' Spherics: Early Translation and al-Mahani'/al-Harawi's version (Critical edition of Menelaus' Spherics from the Arabic manuscripts, with historical and mathematical commentaries), De Gruyter, Series: Scientia Graeco-Arabica, 21, 2017, 890 pages. diff --git a/wiki/wikipedia/683.txt b/wiki/wikipedia/683.txt deleted file mode 100644 index 80ff5609499d1608092ac979368f3987fcc76a58..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/683.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, the Sims conjecture is a result in group theory, originally proposed by Charles Sims. He conjectured that if $G$ is a primitive permutation group on a finite set $S$ and $G_\alpha$ denotes the stabilizer of the point $\alpha$ in $S$, then there exists an integer-valued function $f$ such that $f(d) \geq |G_\alpha|$ for $d$ the length of any orbit of $G_\alpha$ in the set $S \setminus \{\alpha\}$. - -The conjecture was proven by Peter Cameron, Cheryl Praeger, Jan Saxl, and Gary Seitz using the classification of finite simple groups, in particular the fact that only finitely many isomorphism types of sporadic groups exist. - -The theorem reads precisely as follows. - -There exists a function $f: \mathbb{N} \to \mathbb{N} $ such that whenever $G$ is a primitive permutation group and $h > 1$ is the length of a non-trivial orbit of a point stabilizer $H$ in $G$, then the order of $H$ is at most $f(h)$. - -Thus, in a primitive permutation group with "large" stabilizers, these stabilizers cannot have any small orbit. A consequence of their proof is that there exist only finitely many connected distance-transitive graphs having degree greater than 2. diff --git a/wiki/wikipedia/684.txt b/wiki/wikipedia/684.txt deleted file mode 100644 index 392975521589251530c62b909afb4f54bada1ab3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/684.txt +++ /dev/null @@ -1,87 +0,0 @@ -The Brooks–Iyengar algorithm or Brooks–Iyengar hybrid algorithm is a distributed algorithm that improves both the precision and accuracy of the interval measurements taken by a distributed sensor network, even in the presence of faulty sensors. The sensor network does this by exchanging the measured value and accuracy value at every node with every other node, and computes the accuracy range and a measured value for the whole network from all of the values collected. Even if some of the data from some of the sensors is faulty, the sensor network will not malfunction. The algorithm is fault-tolerant and distributed. It could also be used as a sensor fusion method. The precision and accuracy bound of this algorithm have been proved in 2016. - -The Brooks–Iyengar hybrid algorithm for distributed control in the presence of noisy data combines Byzantine agreement with sensor fusion. It bridges the gap between sensor fusion and Byzantine fault tolerance. This seminal algorithm unified these disparate fields for the first time. Essentially, it combines Dolev's algorithm for approximate agreement with Mahaney and Schneider's fast convergence algorithm (FCA). The algorithm assumes N processing elements (PEs), t of which are faulty and can behave maliciously. It takes as input either real values with inherent inaccuracy or noise (which can be unknown), or a real value with apriori defined uncertainty, or an interval. The output of the algorithm is a real value with an explicitly specified accuracy. 
The algorithm runs in O(N log N), where N is the number of PEs. It is possible to modify this algorithm to correspond to Crusader's Convergence Algorithm (CCA); however, the bandwidth requirement will also increase. The algorithm has applications in distributed control, software reliability, high-performance computing, etc.

- -The Brooks–Iyengar algorithm is executed in every processing element (PE) of a distributed sensor network. Each PE exchanges its measured interval with all other PEs in the network. The "fused" measurement is a weighted average of the midpoints of the regions found. The concrete steps of the Brooks–Iyengar algorithm are shown in this section. Each PE performs the algorithm separately:

- -Input:

- -The measurement sent by PE k to PE i is a closed interval $[l_{k,i}, h_{k,i}]$, $1 \leq k \leq N$.

- -Output:

- -The output of PE i includes a point estimate and an interval estimate.

- -# PE i receives measurements from all the other PEs.

- -# Divide the union of collected measurements into mutually exclusive intervals based on the number of measurements that intersect, which is known as the weight of the interval.

- -# Remove intervals with weight less than $N-\tau$, where $\tau$ is the number of faulty PEs.

- -# If there are L intervals left, let $A_i$ denote the set of the remaining intervals. We have $A_i = \{(I_1^i, w_1^i), \dots, (I_L^i, w_L^i)\}$, where interval $I_l^i = [l_{I_l^i}, h_{I_l^i}]$ and $w_l^i$ is the weight associated with interval $I_l^i$. We also assume $h_{I_l^i} \leq h_{I_{l+1}^i}$.

- -# Calculate the point estimate $v_i'$ of PE i as $v_i' = \frac{\sum_l \frac{(l_{I_l^i}+h_{I_l^i})\cdot w_l^i}{2}}{\sum_l w_l^i}$ and the interval estimate is $[l_{I_1^i}, h_{I_L^i}]$.

- -Example:

- -Consider an example of 5 PEs, in which PE 5 ($S_5$) is sending wrong values to the other PEs and they all exchange the values.

- -The values received by $S_1$ are in the next table.

- -We draw a Weighted Region Diagram (WRD) of these intervals; then we can determine $A_1$ for PE 1 according to the algorithm:
$$
A_1 = \{([1.5, 2.7], 4), ([2.7, 2.8], 5), ([2.8, 3.2], 4) \}
$$

- -which consists of intervals where at least 4 ($= N-\tau = 5-1$) measurements intersect. The output of PE 1 is equal to
$$
\frac{4\cdot\frac{1.5+2.7}{2} + 5\cdot \frac{2.7+2.8}{2} + 4\cdot\frac{2.8+3.2}{2}}{13} \approx 2.627
$$

- -and the interval estimate is $[1.5, 3.2]$.

- -Similarly, we can obtain all the inputs and results of the 5 PEs.

- -1982 Byzantine Problem: The Byzantine Generals Problem, an extension of the Two Generals' Problem, can be viewed as a binary problem.

- -1983 Approximate Consensus: The method removes some values from a set of scalars to tolerate faulty inputs.

- -1985 In-exact Consensus: The method also uses scalars as the input.

- -1996 Brooks-Iyengar Algorithm: The method is based on intervals.

- -2013 Byzantine Vector Consensus: The method uses vectors as the input.

- -2013 Multidimensional Agreement: The method also uses vectors as the input, while the measure of distance is different.

- -We could use Approximate Consensus (scalar-based), the Brooks-Iyengar Algorithm (interval-based) and Byzantine Vector Consensus (vector-based) to deal with interval inputs, and it has been shown that the Brooks–Iyengar algorithm performs best in this setting.

- -The Brooks–Iyengar algorithm is a seminal work and a major milestone in distributed sensing, and can be used as a fault-tolerant solution for many redundancy scenarios. Also, it is easy to implement and embed in any networking system.
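A minimal sketch of the per-PE fusion step described above, in Python; the interval measurements are hypothetical, chosen so that the surviving weighted regions match the worked example (weights 4, 5, 4), and the sketch assumes at least one region survives the weight cutoff:

```
# Minimal sketch of the Brooks-Iyengar fusion step at one PE.
def brooks_iyengar(intervals, tau):
    n = len(intervals)
    points = sorted({p for interval in intervals for p in interval})
    regions = []
    for lo, hi in zip(points, points[1:]):
        # weight = number of measurements covering the sub-region [lo, hi]
        weight = sum(1 for l, h in intervals if l <= lo and hi <= h)
        if weight >= n - tau:            # keep regions covered by >= N - tau PEs
            regions.append(((lo, hi), weight))
    total = sum(w for _, w in regions)
    point_estimate = sum(w * (lo + hi) / 2 for (lo, hi), w in regions) / total
    interval_estimate = (regions[0][0][0], regions[-1][0][1])
    return point_estimate, interval_estimate

# Hypothetical measurements reproducing the regions above, with tau = 1:
measurements = [(1.5, 3.2), (1.5, 3.2), (1.5, 3.2), (1.5, 2.8), (2.7, 3.2)]
print(brooks_iyengar(measurements, tau=1))  # (~2.627, (1.5, 3.2))
```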
- -In 1996, the algorithm was used in MINIX to provide more accuracy and precision, which leads to the development of the first version of RT-Linux. - -In 2000, the algorithm was also central to the DARPA SensIT program's distributed tracking program. Acoustic, seismic and motion detection readings from multiple sensors are combined and fed into a distributed tracking system. Besides, it was used to combine heterogeneous sensor feeds in the application fielded by BBN Technologies, BAE systems, Penn State Applied Research Lab(ARL), and USC/ISI. - -The Thales Group, a UK Defense Manufacturer, used this work in its Global Operational Analysis Laboratory. It is applied to Raytheon's programs where many systems need to extract reliable data from unreliable sensor network, this exempts the increasing investment in improving sensor reliability. Also, the research in developing this algorithm results in the tools used by the US Navy in its maritime domain awareness software. - -In education, Brooks–Iyengar algorithm has been widely used in teaching classes such as University of Wisconsin, Purdue, Georgia Tech, Clemson University, University of Maryland, etc. - -In addition to the area of sensor network, other fields such as time-triggered architecture, safety of cyber-physical systems, data fusion, robot convergence, high-performance computing, software/hardware reliability, ensemble learning in artificial intelligence systems could also benefit from Brooks–Iyengar algorithm. - -* Faulty PEs tolerated < N/3 - -* Maximum faulty PEs < 2N/3 - -* Complexity = O(N log N) - -* Order of network bandwidth = O(N) - -* Convergence = 2t/N - -* Accuracy = limited by input - -* Iterates for precision = often - -* Precision over accuracy = no - -* Accuracy over precision = no diff --git a/wiki/wikipedia/685.txt b/wiki/wikipedia/685.txt deleted file mode 100644 index 35be5c58f09ceba535eef1e2f1e1428f85504ee1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/685.txt +++ /dev/null @@ -1,13 +0,0 @@ -SpellTower is a puzzle video game by Zach Gage in which the player creates words from a jumble of letter tiles to clear the screen before it refills. The game has several game modes and a multiplayer battle mode. The impetus for the game—the concept of combining elements from Tetris and Boggle in what was a prototype of the puzzle video game Puzzlejuice—inspired Gage to create SpellTower. The game released for iOS in November 2011 to generally favorable reviews. Versions for OS X and Android followed over the next two years. In 2017 SpellTower Minutes was released. This browser-based Flash game created special "blitz" like modes not found in the mobile releases. A new iOS version released in 2017 swapped out the unnamed dictionary and began using Merriam-Webster's Third New International Dictionary, Unabridged. French and Dutch language specific versions were also released. A 2020 release, SpellTower+, added new game modes, cleaner visuals, and a jazz soundtrack. - -In the iPad puzzle video game SpellTower, the player attempts to clear the screen of jumbled, lettered tiles by using them to create words. The player can select adjacent and diagonal tiles to create words, which clears those tiles from the screen. If the player creates a long word with five or more tiles, any adjacent tile will be cleared as well. Additionally, difficult characters like X, Q, and J, will remove an entire row when used in a word. Some tiles are blank and can only be cleared by such an adjacent effect. 
In battle mode, each completed word sends tiles to their opponent's screen. - -When indie developer Zach Gage was first told about a video game that combined Tetris and Boggle, he had a very specific idea of how the game would play. But after seeing that the prototype of Puzzlejuice played differently, he created—with the developer's permission—the version he imagined as SpellTower. Gage's game eventually released prior to the game that inspired it. - -SpellTower released for the iPad tablet computer on November 17, 2011. A month later, Gage added support for iPhone and iPod Touch, and Game Center achievements. In 2012, Gage added local multiplayer support over Bluetooth in a new battle game mode. Gage later released versions for OS X (July 25, 2012) and Android (March 7, 2013). The Android release is identical apart from the omission of word lookup. It also supports local Wi-Fi multiplayer and high score competition via Scoreloop. - -Gage and developer Jack Schlesinger rebuilt SpellTower from scratch to better accommodate changes made since its original release. The new version, SpellTower+, has a revised look, a new soundtrack, iCloud backup, and new game modes. - -The game received "generally favorable" reviews, according to video game review score aggregator Metacritic. Edge called it a "magnificent ... brainteaser that's nervy, humbling, and strangely energizing". The title was one of TouchArcade honorable mentions for 2011 game of the year. A year later, TouchArcade said the game remained among the best on the App Store. In 2012, SpellTower was named among IGN underrated iOS word games. - -Edge compared the game's tension to that of Resident Evil survival horror, though noted that Tower mode was much less tense than the game's Puzzle modes. The reviewer highlighted the role of strategy in both modes, as a small word might fare better than a large word in maintaining the growth of the Puzzle mode tower. diff --git a/wiki/wikipedia/686.txt b/wiki/wikipedia/686.txt deleted file mode 100644 index 3c78c1d1a99cde0300b15adfeb60f1c294e1b3ca..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/686.txt +++ /dev/null @@ -1,23 +0,0 @@ -Meurs Challenger is an online graph visualization program, with data analysis and browsing. - -The software supports several graph layout algorithms, and allows the user to interact with the nodes. The displayed data can be filtered using textual search, node and edge type, or based on the graph distance between nodes. Written in ActionScript, the program runs on Windows, Linux, macOS and other platforms that support the Adobe Flash Player. - -Meurs Challenger was the winner at the 2011 edition of the International Symposium on Graph Drawing, in the large graph category. - -It is publicly available as a Facebook application, which displays the network graph of the user's friends. - -The main problem in network visualization is to reduce data complexity by projecting a multivariate data matrix onto a lower-dimensional planar display space, which is achieved by proper node positioning. 
- -Meurs Challenger addresses this by combining several different approaches: - -* force-based algorithms - -* modularization techniques - -* clustering - -* node overlap minimization, which is achieved by guaranteeing a minimum edge length between two linked nodes - -* edge length uniformization, using an algorithm that minimizes the total edge length discrepancy of the projected graph - -* grid-based layouts diff --git a/wiki/wikipedia/687.txt b/wiki/wikipedia/687.txt deleted file mode 100644 index d669c2f7d2adb267acbeff0688a6d1fb48ea9baa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/687.txt +++ /dev/null @@ -1,9 +0,0 @@ -In mathematics, the Vitali–Carathéodory theorem is a result in real analysis that shows that, under the conditions stated below, integrable functions can be approximated in L1 from above and below by lower- and upper-semicontinuous functions, respectively. It is named after Giuseppe Vitali and Constantin Carathéodory. - -Let X be a locally compact Hausdorff space equipped with a Borel measure, µ, that is finite on every compact set, outer regular, and tight when restricted to any Borel set that is open or of finite mass. If f is an element of L1(µ) then, for every ε > 0, there are functions u and v on X such that u ≤ f ≤ v, u is upper-semicontinuous and bounded above, v is lower-semicontinuous and bounded below, and - - - -\int_X (v - u) \mathrm{d}\mu < \varepsilon. - - diff --git a/wiki/wikipedia/688.txt b/wiki/wikipedia/688.txt deleted file mode 100644 index 7525c24963ae4944314cc9335ffeac66bef58299..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/688.txt +++ /dev/null @@ -1,41 +0,0 @@ -In computational complexity theory, the PCP theorem (also known as the PCP characterization theorem) states that every decision problem in the NP complexity class has probabilistically checkable proofs (proofs that can be checked by a randomized algorithm) of constant query complexity and logarithmic randomness complexity (uses a logarithmic number of random bits). - -The PCP theorem says that for some universal constant K, for every n, any mathematical proof of length n can be rewritten as a different proof of length poly(n) that is formally verifiable with 99% accuracy by a randomized algorithm that inspects only K letters of that proof. - -The PCP theorem is the cornerstone of the theory of computational hardness of approximation, which investigates the inherent difficulty in designing efficient approximation algorithms for various optimization problems. It has been described by Ingo Wegener as "the most important result in complexity theory since Cook's theorem" and by Oded Goldreich as "a culmination of a sequence of impressive works […] rich in innovative ideas". - -The PCP theorem states that - -NP = PCP[O(log n), O(1)]. - -An alternative formulation of the PCP theorem states that the maximum fraction of satisfiable constraints of a constraint satisfaction problem is NP-hard to approximate within some constant factor. - -Formally, for some constants K and α < 1, the following promise problem (Lyes, Lno) is an NP-hard decision problem: - -* Lyes = {Φ: all constraints in Φ are simultaneously satisfiable} - -* Lno = {Φ: every assignment satisfies fewer than an α fraction of Φ's constraints}, - -where Φ is a constraint satisfaction problem over Boolean alphabet with at most K variables per constraint. 
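The quantity appearing in this formulation, the maximum fraction of simultaneously satisfiable constraints, can be computed by brute force for toy instances. The sketch below uses a hypothetical clause set; it merely illustrates the quantity the promise problem distinguishes, since brute force takes exponential time:

```
# Brute force over all assignments to compute the maximum fraction of
# satisfied constraints of a small Boolean CSP (hypothetical clauses).
from itertools import product

def max_satisfiable_fraction(num_vars, constraints):
    """constraints: list of predicates mapping a full assignment to bool."""
    best = 0
    for assignment in product([False, True], repeat=num_vars):
        best = max(best, sum(c(assignment) for c in constraints))
    return best / len(constraints)

phi = [
    lambda a: a[0] or a[1] or a[2],
    lambda a: (not a[0]) or a[1],
    lambda a: a[0] != a[2],
    lambda a: (not a[1]) or (not a[2]),
]
print(max_satisfiable_fraction(3, phi))  # 1.0 -> all simultaneously satisfiable
```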
- -As a consequence of this theorem, it can be shown that the solutions to many natural optimization problems including maximum boolean formula satisfiability, maximum independent set in graphs, and the shortest vector problem for lattices cannot be approximated efficiently unless P = NP. These results are sometimes also called PCP theorems because they can be viewed as probabilistically checkable proofs for NP with some additional structure.

- -A proof of a weaker result, NP = PCP[n3, 1], is given in one of the lectures of Dexter Kozen.

- -The PCP theorem is the culmination of a long line of work on interactive proofs and probabilistically checkable proofs. The first theorem relating standard proofs and probabilistically checkable proofs is the statement that NEXP ⊆ PCP[poly(n), poly(n)], proved by Babai, Fortnow, and Lund.

- -The notation PCPc(n), s(n)[r(n), q(n)] is explained at Probabilistically checkable proof. The notation is that of a function that returns a certain complexity class. See the explanation mentioned above.

- -The name of this theorem (the "PCP theorem") probably comes either from "PCP" meaning "probabilistically checkable proof", or from the notation mentioned above (or both).

- -Subsequently, the methods used in this work were extended by Babai, Lance Fortnow, Levin, and Szegedy in 1991, Feige, Goldwasser, Lund, Safra, and Szegedy (1991), and Arora and Safra in 1992, to yield a proof of the PCP theorem by Arora, Lund, Motwani, Sudan, and Szegedy in 1998.

- -The 2001 Gödel Prize was awarded to Sanjeev Arora, Uriel Feige, Shafi Goldwasser, Carsten Lund, László Lovász, Rajeev Motwani, Shmuel Safra, Madhu Sudan, and Mario Szegedy for work on the PCP theorem and its connection to hardness of approximation.

- -In 2005 Irit Dinur discovered a significantly simpler proof of the PCP theorem, using expander graphs. She received the 2019 Gödel Prize for this.

- -In 2012, Thomas Vidick and Tsuyoshi Ito published a result that showed a "strong limitation on the ability of entangled provers to collude in a multiplayer game." This could be a step toward proving the quantum analogue of the PCP theorem.

- -In 2018, Thomas Vidick and Anand Natarajan proved a games variant of the quantum PCP theorem under randomized reduction. It states that QMA ⊆ MIP*[log(n),1,1/2], where MIP*[f(n),c,s] is a complexity class of multi-prover quantum interactive proof systems with f(n)-bit classical communications, and the completeness is c and the soundness is s. They also showed that the Hamiltonian version of a quantum PCP conjecture, namely that a local Hamiltonian problem with constant promise gap c − s is QMA-hard, implies the games quantum PCP theorem. diff --git a/wiki/wikipedia/689.txt b/wiki/wikipedia/689.txt deleted file mode 100644 index 57f17db7d571e784f44f6d7956caabbac68e1b10..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/689.txt +++ /dev/null @@ -1,29 +0,0 @@ -Force-directed graph drawing algorithms are a class of algorithms for drawing graphs in an aesthetically-pleasing way. Their purpose is to position the nodes of a graph in two-dimensional or three-dimensional space so that all the edges are of more or less equal length and there are as few crossing edges as possible, by assigning forces among the set of edges and the set of nodes, based on their relative positions, and then using these forces either to simulate the motion of the edges and nodes or to minimize their energy.
- -While graph drawing can be a difficult problem, force-directed algorithms, being physical simulations, usually require no special knowledge about graph theory such as planarity. - -Force-directed graph drawing algorithms assign forces among the set of edges and the set of nodes of a graph drawing. Typically, spring-like attractive forces based on Hooke's law are used to attract pairs of endpoints of the graph's edges towards each other, while simultaneously repulsive forces like those of electrically charged particles based on Coulomb's law are used to separate all pairs of nodes. In equilibrium states for this system of forces, the edges tend to have uniform length (because of the spring forces), and nodes that are not connected by an edge tend to be drawn further apart (because of the electrical repulsion). Edge attraction and vertex repulsion forces may be defined using functions that are not based on the physical behavior of springs and particles; for instance, some force-directed systems use springs whose attractive force is logarithmic rather than linear. - -An alternative model considers a spring-like force for every pair of nodes $(i,j)$ where the ideal length $\delta_{ij}$ of each spring is proportional to the graph-theoretic distance between nodes i and j, without using a separate repulsive force. Minimizing the difference (usually the squared difference) between Euclidean and ideal distances between nodes is then equivalent to a metric multidimensional scaling problem. - -A force-directed graph can involve forces other than mechanical springs and electrical repulsion. A force analogous to gravity may be used to pull vertices towards a fixed point of the drawing space; this may be used to pull together different connected components of a disconnected graph, which would otherwise tend to fly apart from each other because of the repulsive forces, and to draw nodes with greater centrality to more central positions in the drawing; it may also affect the vertex spacing within a single component. Analogues of magnetic fields may be used for directed graphs. Repulsive forces may be placed on edges as well as on nodes in order to avoid overlap or near-overlap in the final drawing. In drawings with curved edges such as circular arcs or spline curves, forces may also be placed on the control points of these curves, for instance to improve their angular resolution. - -Once the forces on the nodes and edges of a graph have been defined, the behavior of the entire graph under these sources may then be simulated as if it were a physical system. In such a simulation, the forces are applied to the nodes, pulling them closer together or pushing them further apart. This is repeated iteratively until the system comes to a mechanical equilibrium state; i.e., their relative positions do not change anymore from one iteration to the next. The positions of the nodes in this equilibrium are used to generate a drawing of the graph. - -For forces defined from springs whose ideal length is proportional to the graph-theoretic distance, stress majorization gives a very well-behaved (i.e., monotonically convergent) and mathematically elegant way to minimize these differences and, hence, find a good layout for the graph. - -It is also possible to employ mechanisms that search more directly for energy minima, either instead of or in conjunction with physical simulation. Such mechanisms, which are examples of general global optimization methods, include simulated annealing and genetic algorithms. 
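A minimal spring-embedder sketch along the lines described above; the function name, force constants, and cooling schedule are hypothetical choices, not taken from any particular published algorithm:

```
# Minimal force-directed layout: Hooke-law attraction along edges plus
# Coulomb-style (1/d^2) repulsion between all node pairs, iterated with a
# simple linear cooling schedule until the system settles.
import random

def force_directed_layout(nodes, edges, steps=500, k_spring=0.02,
                          k_repulse=0.002, rest_len=1.0):
    pos = {v: [random.random(), random.random()] for v in nodes}
    for step in range(steps):
        force = {v: [0.0, 0.0] for v in nodes}
        for u in nodes:                       # pairwise repulsion
            for v in nodes:
                if u == v:
                    continue
                dx = pos[u][0] - pos[v][0]
                dy = pos[u][1] - pos[v][1]
                d2 = dx * dx + dy * dy + 1e-9
                d3 = d2 ** 1.5
                force[u][0] += k_repulse * dx / d3   # magnitude ~ 1/d^2
                force[u][1] += k_repulse * dy / d3
        for u, v in edges:                    # spring attraction along edges
            dx = pos[v][0] - pos[u][0]
            dy = pos[v][1] - pos[u][1]
            d = (dx * dx + dy * dy) ** 0.5 + 1e-9
            f = k_spring * (d - rest_len)     # Hooke's law about rest length
            force[u][0] += f * dx / d
            force[u][1] += f * dy / d
            force[v][0] -= f * dx / d
            force[v][1] -= f * dy / d
        cool = 1.0 - step / steps             # shrink displacements over time
        for v in nodes:
            pos[v][0] += cool * force[v][0]
            pos[v][1] += cool * force[v][1]
    return pos

print(force_directed_layout("abcd", [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]))
```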
- -The following are among the most important advantages of force-directed algorithms:

- -; Good-quality results: At least for graphs of medium size (up to 50–500 vertices), the results obtained are usually very good with respect to the following criteria: uniform edge length, uniform vertex distribution, and symmetry. This last criterion is among the most important ones and is hard to achieve with any other type of algorithm.

- -; Flexibility: Force-directed algorithms can be easily adapted and extended to fulfill additional aesthetic criteria. This makes them the most versatile class of graph drawing algorithms. Examples of existing extensions include the ones for directed graphs, 3D graph drawing, cluster graph drawing, constrained graph drawing, and dynamic graph drawing.

- -; Intuitive: Since they are based on physical analogies of common objects, like springs, the behavior of the algorithms is relatively easy to predict and understand. This is not the case with other types of graph-drawing algorithms.

- -; Simplicity: Typical force-directed algorithms are simple and can be implemented in a few lines of code. Other classes of graph-drawing algorithms, like the ones for orthogonal layouts, are usually much more involved.

- -; Interactivity: Another advantage of this class of algorithm is the interactive aspect. By drawing the intermediate stages of the graph, the user can follow how the graph evolves, seeing it unfold from a tangled mess into a good-looking configuration. In some interactive graph drawing tools, the user can pull one or more nodes out of their equilibrium state and watch them migrate back into position. This makes them a preferred choice for dynamic and online graph-drawing systems.

- -; Strong theoretical foundations: While simple ad-hoc force-directed algorithms often appear in the literature and in practice (because they are relatively easy to understand), more reasoned approaches are starting to gain traction. Statisticians have been solving similar problems in multidimensional scaling (MDS) since the 1930s, and physicists also have a long history of working with related n-body problems, so extremely mature approaches exist. As an example, the stress majorization approach to metric MDS can be applied to graph drawing as described above, and this has been proven to converge monotonically. Approximating the repulsive forces with a spatial decomposition such as the Barnes–Hut method can improve running time to n*log(n) per iteration. As a rough guide, in a few seconds one can expect to draw at most 1,000 nodes with a standard n2 per iteration technique, and 100,000 with an n*log(n) per iteration technique. The idea of using only spring forces between all pairs of vertices, with ideal spring lengths equal to the vertices' graph-theoretic distance, is from Kamada and Kawai. diff --git a/wiki/wikipedia/69.txt b/wiki/wikipedia/69.txt deleted file mode 100644 index 54259a64ea65bc7ddd34650f2fb5140dc4f6d5ec..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/69.txt +++ /dev/null @@ -1,153 +0,0 @@ -In mathematics, more specifically in multivariable calculus, the implicit function theorem is a tool that allows relations to be converted to functions of several real variables. It does so by representing the relation as the graph of a function. There may not be a single function whose graph can represent the entire relation, but there may be such a function on a restriction of the domain of the relation. The implicit function theorem gives a sufficient condition to ensure that there is such a function.
- -More precisely, given a system of m equations fi (often abbreviated into F(x, y) = 0), the theorem states that, under a mild condition on the partial derivatives (with respect to the yis) at a point, the m variables yi are differentiable functions of the xj in some neighborhood of the point. As these functions can generally not be expressed in closed form, they are implicitly defined by the equations, and this motivated the name of the theorem.

- -In other words, under a mild condition on the partial derivatives, the set of zeros of a system of equations is locally the graph of a function.

- -Augustin-Louis Cauchy (1789–1857) is credited with the first rigorous form of the implicit function theorem. Ulisse Dini (1845–1918) generalized the real-variable version of the implicit function theorem to the context of functions of any number of real variables.

- -If we define the function f(x, y) = x2 + y2, then the equation f(x, y) = 1 cuts out the unit circle as the level set {(x, y) | f(x, y) = 1}. There is no way to represent the unit circle as the graph of a function of one variable y = g(x) because for each choice of x ∈ (−1, 1), there are two choices of y, namely $\pm\sqrt{1-x^2}$.

- -However, it is possible to represent part of the circle as the graph of a function of one variable. If we let $g_1(x) = \sqrt{1-x^2}$ for −1 ≤ x ≤ 1, then the graph of y = g1(x) provides the upper half of the circle. Similarly, if $g_2(x) = -\sqrt{1-x^2}$, then the graph of y = g2(x) gives the lower half of the circle.

- -The purpose of the implicit function theorem is to tell us the existence of functions like g1(x) and g2(x), even in situations where we cannot write down explicit formulas. It guarantees that g1(x) and g2(x) are differentiable, and it even works in situations where we do not have a formula for f(x, y).

- -Let $f: \R^{n+m} \to \R^m$ be a continuously differentiable function. We think of $\R^{n+m}$ as the Cartesian product $\R^n\times\R^m,$ and we write a point of this product as $(\mathbf{x}, \mathbf{y}) = (x_1,\ldots, x_n, y_1, \ldots, y_m).$ Starting from the given function f, our goal is to construct a function $g: \R^n \to \R^m$ whose graph (x, g(x)) is precisely the set of all (x, y) such that f(x, y) = 0.

- -As noted above, this may not always be possible. We will therefore fix a point (a, b) = (a1, ..., an, b1, ..., bm) which satisfies f(a, b) = 0, and we will ask for a g that works near the point (a, b). In other words, we want an open set $U \subset \R^n$ containing a, an open set $V \subset \R^m$ containing b, and a function g : U → V such that the graph of g satisfies the relation f = 0 on U × V, and that no other points within U × V do so. In symbols,

\{ (\mathbf{x}, g(\mathbf{x})) \mid \mathbf x \in U \} = \{ (\mathbf{x}, \mathbf{y})\in U \times V \mid f(\mathbf{x}, \mathbf{y}) = \mathbf{0} \}.

- -To state the implicit function theorem, we need the Jacobian matrix of f, which is the matrix of the partial derivatives of f. Abbreviating (a1, ..., an, b1, ..., bm) to (a, b), the Jacobian matrix is

(Df)(\mathbf{a},\mathbf{b}) = \left[\begin{matrix} \frac{\partial f_1}{\partial x_1}(\mathbf{a},\mathbf{b}) & \cdots & \frac{\partial f_1}{\partial x_n}(\mathbf{a},\mathbf{b})\\ \vdots & \ddots & \vdots\\ \frac{\partial f_m}{\partial x_1}(\mathbf{a},\mathbf{b}) & \cdots & \frac{\partial f_m}{\partial x_n}(\mathbf{a},\mathbf{b}) \end{matrix}\right|\left.
- -\begin{matrix} - -\frac{\partial f_1}{\partial y_1}(\mathbf{a},\mathbf{b}) & \cdots & \frac{\partial f_1}{\partial y_m}(\mathbf{a},\mathbf{b})\\ - -\vdots & \ddots & \vdots\\ - -\frac{\partial f_m}{\partial y_1}(\mathbf{a},\mathbf{b}) & \cdots & \frac{\partial f_m}{\partial y_m}(\mathbf{a},\mathbf{b})\\ - -\end{matrix}\right] = [X|Y] - -where X is the matrix of partial derivatives in the variables xi and Y is the matrix of partial derivatives in the variables yj. The implicit function theorem says that if Y is an invertible matrix, then there are U, V, and g as desired. Writing all the hypotheses together gives the following statement. - -Let $f: \R^{n+m} \to \R^m$ be a continuously differentiable function, and let $\R^{n+m}$ have coordinates (x, y). Fix a point (a, b) = (a1, …, an, b1, …, bm) with f(a, b) = 0, where $\mathbf{0} \in \R^m$ is the zero vector. If the Jacobian matrix (this is the right-hand panel of the Jacobian matrix shown in the previous section): - -J_{f, \mathbf{y}} (\mathbf{a}, \mathbf{b}) = \left [ \frac{\partial f_i}{\partial y_j} (\mathbf{a}, \mathbf{b}) \right ] - -is invertible, then there exists an open set $U \subset \R^n$ containing a such that there exists a unique continuously differentiable function $g: U \to \R^m$ such that $ g(\mathbf{a}) = \mathbf{b}$, and $ f(\mathbf{x}, g(\mathbf{x})) = \mathbf{0} ~ \text{for all} ~ \mathbf{x}\in U$. - -Moreover, the partial derivatives of g in U are given by the matrix product: $ \frac{\partial g}{\partial x_j} (\mathbf{x}) =- \left [ J_{f, \mathbf{y}}(\mathbf{x}, g(\mathbf{x})) \right ]_{m \times m} ^{-1} \left [ \frac{\partial f}{\partial x_j}(\mathbf{x},g(\mathbf{x})) \right ]_{m \times 1}$ - -If, moreover, f is analytic or continuously differentiable k times in a neighborhood of (a, b), then one may choose U in order that the same holds true for g inside U. In the analytic case, this is called the analytic implicit function theorem. - -Suppose $F:\R^2 \to \R$ is a continuously differentiable function defining a curve $F(\mathbf{r}) = F(x,y) = 0 $. Let $(x_0, y_0)$ be a point on the curve. The statement of the theorem above can be rewritten for this simple case as follows: - -If \left. \frac{\partial F}{ \partial y} \right|_{(x_0, y_0)} \neq 0 then for the curve around $(x_0, y_0)$ we can write $y = f(x)$, where $f$ is a real function. - -Proof. Since F is differentiable we write the differential of F through partial derivatives: - -\mathrm{d} F = \operatorname{grad} F \cdot \mathrm{d}\mathbf{r} = \frac{\partial F}{\partial x} \mathrm{d} x + \frac{\partial F}{\partial y}\mathrm{d}y. - -Since we are restricted to movement on the curve $\mathrm{d} F = 0$ and by assumption $\tfrac{\partial F}{\partial y} \neq 0$ around the point $(x_0, y_0)$ (since $\tfrac{\partial F}{\partial y}$ is continuous at $(x_0, y_0)$ and $\left. \tfrac{\partial F}{ \partial y} \right|_{(x_0, y_0)} \neq 0$). Therefore we have a first-order ordinary differential equation: - -\partial_x F \mathrm{d} x + \partial_y F \mathrm{d} y = 0, \quad y(x_0) = y_0 - -Now we are looking for a solution to this ODE in an open interval around the point $(x_0, y_0)$ for which, at every point in it, $ \partial_y F \neq 0$. Since F is continuously differentiable and from the assumption we have - -|\partial_x F| < \infty, |\partial_y F| < \infty, \partial_y F \neq 0. - -From this we know that $\tfrac{\partial_x F}{\partial_y F}$ is continuous and bounded on both ends. 
From here we know that $-\tfrac{\partial_x F}{\partial_y F}$ is Lipschitz continuous in both x and y. Therefore, by Cauchy-Lipschitz theorem, there exists unique y(x) that is the solution to the given ODE with the initial conditions. Q.E.D. - -Let us go back to the example of the unit circle. In this case n = m = 1 and $f(x,y) = x^2 + y^2 - 1$. The matrix of partial derivatives is just a 1 × 2 matrix, given by - -(Df)(a,b) = \begin{bmatrix} \dfrac{\partial f}{\partial x}(a,b) & \dfrac{\partial f}{\partial y}(a,b) \end{bmatrix} = \begin{bmatrix} 2a & 2b \end{bmatrix} - -Thus, here, the Y in the statement of the theorem is just the number 2b; the linear map defined by it is invertible if and only if b ≠ 0. By the implicit function theorem we see that we can locally write the circle in the form y = g(x) for all points where y ≠ 0. For (±1, 0) we run into trouble, as noted before. The implicit function theorem may still be applied to these two points, by writing x as a function of y, that is, $x = h(y)$; now the graph of the function will be $\left(h(y), y\right)$, since where b = 0 we have a = 1, and the conditions to locally express the function in this form are satisfied. - -The implicit derivative of y with respect to x, and that of x with respect to y, can be found by totally differentiating the implicit function $x^2+y^2-1$ and equating to 0: - -2x dx+2y dy = 0, - -giving - -\frac{dy}{dx}=-\frac{x}{y} - -and - -\frac{dx}{dy} = -\frac{y}{x}. - -Suppose we have an m-dimensional space, parametrised by a set of coordinates $ (x_1,\ldots,x_m) $. We can introduce a new coordinate system $ (x'_1,\ldots,x'_m) $ by supplying m functions $ h_1\ldots h_m $ each being continuously differentiable. These functions allow us to calculate the new coordinates $ (x'_1,\ldots,x'_m) $ of a point, given the point's old coordinates $ (x_1,\ldots,x_m) $ using $ x'_1=h_1(x_1,\ldots,x_m), \ldots, x'_m=h_m(x_1,\ldots,x_m) $. One might want to verify if the opposite is possible: given coordinates $ (x'_1,\ldots,x'_m) $, can we 'go back' and calculate the same point's original coordinates $ (x_1,\ldots,x_m) $? The implicit function theorem will provide an answer to this question. The (new and old) coordinates $(x'_1,\ldots,x'_m, x_1,\ldots,x_m)$ are related by f = 0, with - -f(x'_1,\ldots,x'_m,x_1,\ldots, x_m)=(h_1(x_1,\ldots, x_m)-x'_1,\ldots , h_m(x_1,\ldots, x_m)-x'_m). - -Now the Jacobian matrix of f at a certain point (a, b) [ where $a=(x'_1,\ldots,x'_m), b=(x_1,\ldots,x_m)$ ] is given by - -(Df)(a,b) = \left [\begin{matrix} - --1 & \cdots & 0 \\ - -\vdots & \ddots & \vdots \\ - -0 & \cdots & -1 - -\end{matrix}\left| - -\begin{matrix} - -\frac{\partial h_1}{\partial x_1}(b) & \cdots & \frac{\partial h_1}{\partial x_m}(b)\\ - -\vdots & \ddots & \vdots\\ - -\frac{\partial h_m}{\partial x_1}(b) & \cdots & \frac{\partial h_m}{\partial x_m}(b)\\ - -\end{matrix} \right.\right] = [-I_m |J ]. - -where Im denotes the m × m identity matrix, and J is the m × m matrix of partial derivatives, evaluated at (a, b). (In the above, these blocks were denoted by X and Y. As it happens, in this particular application of the theorem, neither matrix depends on a.) The implicit function theorem now states that we can locally express $ (x_1,\ldots,x_m) $ as a function of $ (x'_1,\ldots,x'_m) $ if J is invertible. Demanding J is invertible is equivalent to det J ≠ 0, thus we see that we can go back from the primed to the unprimed coordinates if the determinant of the Jacobian J is non-zero. 
This statement is also known as the inverse function theorem. - -As a simple application of the above, consider the plane, parametrised by polar coordinates (R, θ). We can go to a new coordinate system (cartesian coordinates) by defining functions x(R, θ) = R cos(θ) and y(R, θ) = R sin(θ). This makes it possible given any point (R, θ) to find corresponding Cartesian coordinates (x, y). When can we go back and convert Cartesian into polar coordinates? By the previous example, it is sufficient to have det J ≠ 0, with - -J =\begin{bmatrix} - -\frac{\partial x(R,\theta)}{\partial R} & \frac{\partial x(R,\theta)}{\partial \theta} \\ - -\frac{\partial y(R,\theta)}{\partial R} & \frac{\partial y(R,\theta)}{\partial \theta} \\ - -\end{bmatrix}= - -\begin{bmatrix} - -\cos \theta & -R \sin \theta \\ - -\sin \theta & R \cos \theta - -\end{bmatrix}. - -Since det J = R, conversion back to polar coordinates is possible if R ≠ 0. So it remains to check the case R = 0. It is easy to see that in case R = 0, our coordinate transformation is not invertible: at the origin, the value of θ is not well-defined. - -Based on the inverse function theorem in Banach spaces, it is possible to extend the implicit function theorem to Banach space valued mappings. - -Let X, Y, Z be Banach spaces. Let the mapping f : X × Y → Z be continuously Fréchet differentiable. If $(x_0,y_0)\in X\times Y$, $f(x_0,y_0)=0$, and $y\mapsto Df(x_0,y_0)(0,y)$ is a Banach space isomorphism from Y onto Z, then there exist neighbourhoods U of x0 and V of y0 and a Fréchet differentiable function g : U → V such that f(x, g(x)) = 0 and f(x, y) = 0 if and only if y = g(x), for all $(x,y)\in U\times V$. - -Various forms of the implicit function theorem exist for the case when the function f is not differentiable. It is standard that local strict monotonicity suffices in one dimension. The following more general form was proven by Kumagai based on an observation by Jittorntrum. - -Consider a continuous function $f : \R^n \times \R^m \to \R^n$ such that $f(x_0, y_0) = 0$. There exist open neighbourhoods $A \subset \R^n$ and $B \subset \R^m$ of x0 and y0, respectively, such that, for all y in B, $f(\cdot, y) : A \to \R^n$ is locally one-to-one if and only if there exist open neighbourhoods $A_0 \subset \R^n$ and $B_0 \subset \R^m$ of x0 and y0, such that, for all $y \in B_0$, the equation - -f(x, y) = 0 has a unique solution - -x = g(y) \in A_0, - -where g is a continuous function from B0 into A0. diff --git a/wiki/wikipedia/690.txt b/wiki/wikipedia/690.txt deleted file mode 100644 index 3f4076a4fb0354f46dc5cfd779b693f5fa1e2a35..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/690.txt +++ /dev/null @@ -1,47 +0,0 @@ -Goursat's lemma, named after the French mathematician Édouard Goursat, is an algebraic theorem about subgroups of the direct product of two groups. - -It can be stated more generally in a Goursat variety (and consequently it also holds in any Maltsev variety), from which one recovers a more general version of Zassenhaus' butterfly lemma. In this form, Goursat's theorem also implies the snake lemma. - -Goursat's lemma for groups can be stated as follows. - -Let $G$, $G'$ be groups, and let $H$ be a subgroup of $G\times G'$ such that the two projections $p_1: H \to G$ and $p_2: H \to G'$ are surjective (i.e., $H$ is a subdirect product of $G$ and $G'$). Let $N$ be the kernel of $p_2$ and $N'$ the kernel of $p_1$. One can identify $N$ as a normal subgroup of $G$, and $N'$ as a normal subgroup of $G'$. 
Then the image of $H$ in $G/N \times G'/N'$ is the graph of an isomorphism $G/N \cong G'/N'$. One then obtains a bijection between:

- -(i) Subgroups of $G\times G'$ which project onto both factors,

- -(ii) Triples $(N, N', f)$ with $N$ normal in $G$, $N'$ normal in $G'$ and $f$ an isomorphism of $G/N$ onto $G'/N'$.

- -An immediate consequence of this is that the subdirect product of two groups can be described as a fiber product and vice versa.

- -Notice that if $H$ is any subgroup of $G\times G'$ (the projections $p_1: H \to G$ and $p_2: H \to G'$ need not be surjective), then the projections from $H$ onto $p_1(H)$ and $p_2(H)$ are surjective. Then one can apply Goursat's lemma to $H \leq p_1(H)\times p_2(H)$.

- -To motivate the proof, consider the slice $S = \{g\} \times G'$ in $G \times G'$, for any arbitrary $g \in G$. By the surjectivity of the projection map to $G$, this has a non-trivial intersection with $H$. Then essentially, this intersection represents exactly one particular coset of $N'$. Indeed, if we had distinct elements $(g,a), (g,b) \in S \cap H$ with $a \in pN' \subset G'$ and $b \in qN' \subset G'$, then $H$ being a group, we get that $(e, ab^{-1}) \in H$, and hence, $(e, ab^{-1}) \in N'$. But this is a contradiction, as $a,b$ belong to distinct cosets of $N'$, and thus $ab^{-1}N' \neq N'$, so the element $(e, ab^{-1})$ cannot belong to $N'$, the kernel of the projection map from $H$ to $G$. Thus the intersection of $H$ with every "horizontal" slice isomorphic to $G' \in G\times G'$ is exactly one particular coset of $N'$ in $G'$.

- -By an identical argument, the intersection of $H$ with every "vertical" slice isomorphic to $G \in G\times G'$ is exactly one particular coset of $N$ in $G$.

- -All the cosets of $G,G'$ are present in the group $H$, and by the above argument, there is an exact 1:1 correspondence between them. The proof below further shows that the map is an isomorphism.

- -Before proceeding with the proof, $N$ and $N'$ are shown to be normal in $G \times \{e'\}$ and $\{e\} \times G'$, respectively. It is in this sense that $N$ and $N'$ can be identified as normal in G and G', respectively.

- -Since $p_2$ is a homomorphism, its kernel N is normal in H. Moreover, given $g \in G$, there exists $h=(g,g') \in H$, since $p_1$ is surjective. Therefore, $p_1(N)$ is normal in G, viz:
$$
gp_1(N) = p_1(h)p_1(N) = p_1(hN) = p_1(Nh) = p_1(N)g
$$.

- -It follows that $N$ is normal in $G \times \{e'\}$ since
$$
(g,e')N = (g,e')(p_1(N) \times \{e'\}) = gp_1(N) \times \{e'\} = p_1(N)g \times \{e'\} = (p_1(N) \times \{e'\})(g,e') = N(g,e')
$$.

- -The proof that $N'$ is normal in $\{e\} \times G'$ proceeds in a similar manner.

- -Given the identification of $G$ with $G \times \{e'\}$, we can write $G/N$ and $gN$ instead of $(G \times \{e'\})/N$ and $(g,e')N$, $g \in G$. Similarly, we can write $G'/N'$ and $g'N'$, $g' \in G'$.

- -On to the proof. Consider the map $H \to G/N \times G'/N'$ defined by $(g,g') \mapsto (gN, g'N')$. The image of $H$ under this map is $\{(gN,g'N') \mid (g,g') \in H \}$. Since $H \to G/N$ is surjective, this relation is the graph of a well-defined function $G/N \to G'/N'$ provided $g_1N = g_2N \implies g_1'N' = g_2'N'$ for every $(g_1,g_1'),(g_2,g_2') \in H$, essentially an application of the vertical line test.

- -Since $g_1N=g_2N$ (more properly, $(g_1,e')N = (g_2,e')N$), we have $(g_2^{-1}g_1,e') \in N \subset H$.
Thus $(e,g_2'^{-1}g_1') = (g_2,g_2')^{-1}(g_1,g_1')(g_2^{-1}g_1,e')^{-1} \in H$, whence $(e,g_2'^{-1}g_1') \in N'$, that is, $g_1'N'=g_2'N'$.

- -Furthermore, for every $(g_1,g_1'),(g_2,g_2')\in H$ we have $(g_1g_2,g_1'g_2')\in H$. It follows that this function is a group homomorphism.

- -By symmetry, $\{(g'N',gN) \mid (g,g') \in H \}$ is the graph of a well-defined homomorphism $G'/N' \to G/N$. These two homomorphisms are clearly inverse to each other and thus are indeed isomorphisms.

- -As a consequence of Goursat's theorem, one can derive a very general version of the Jordan–Hölder–Schreier theorem in Goursat varieties. diff --git a/wiki/wikipedia/691.txt b/wiki/wikipedia/691.txt deleted file mode 100644 index 497b2748c66d1826b05fc09a8bcdfbef727d31be..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/691.txt +++ /dev/null @@ -1,87 +0,0 @@ -In the mathematical field of order theory, a continuum or linear continuum is a generalization of the real line.

- -Formally, a linear continuum is a linearly ordered set S of more than one element that is densely ordered, i.e., between any two distinct elements there is another (and hence infinitely many others), and complete, i.e., which "lacks gaps" in the sense that every nonempty subset with an upper bound has a least upper bound. More symbolically:
      - -
1. S has the least upper bound property, and

2. For each x in S and each y in S with x < y, there exists z in S such that x < z < y.

- -A set has the least upper bound property if every nonempty subset of the set that is bounded above has a least upper bound. Linear continua are particularly important in the field of topology, where they can be used to verify whether an ordered set given the order topology is connected or not.

- -Unlike the standard real line, a linear continuum may be bounded on either side: for example, any (real) closed interval is a linear continuum.

- -* The ordered set of real numbers, R, with its usual order is a linear continuum, and is the archetypal example. Property b) is trivial, and property a) is simply a reformulation of the completeness axiom.

- -Examples in addition to the real numbers:

- -*sets which are order-isomorphic to the set of real numbers, for example a real open interval, and the same with half-open gaps (note that these are not gaps in the above-mentioned sense)

- -*the affinely extended real number system and order-isomorphic sets, for example the unit interval

- -*the set of real numbers with only +∞ or only −∞ added, and order-isomorphic sets, for example a half-open interval

- -*the long line

- -* The set I × I (where × denotes the Cartesian product and I = [0, 1]) in the lexicographic order is a linear continuum. Property b) is trivial. To check property a), we define a map π1 : I × I → I by

- -π1 (x, y) = x

- -This map is known as the projection map. The projection map is continuous (with respect to the product topology on I × I) and is surjective. Let A be a nonempty subset of I × I which is bounded above. Consider π1(A). Since A is bounded above, π1(A) must also be bounded above. Since π1(A) is a subset of I, it must have a least upper bound (since I has the least upper bound property). Therefore, we may let b be the least upper bound of π1(A). If b belongs to π1(A), then b × I will intersect A at say b × c for some c ∈ I. Notice that since b × I has the same order type as I, the set (b × I) ∩ A will indeed have a least upper bound b × c, which is the desired least upper bound for A.

- -If b does not belong to π1(A), then b × 0 is the least upper bound of A, for if d < b, and d × e is an upper bound of A, then d would be a smaller upper bound of π1(A) than b, contradicting the choice of b as the least upper bound of π1(A).

- -* The ordered set Q of rational numbers is not a linear continuum. Even though property b) is satisfied, property a) is not. Consider the subset

- -A = {x ∈ Q | x < √2}

- -of the set of rational numbers. Even though this set is bounded above by any rational number greater than √2 (for instance 3), it has no least upper bound in the rational numbers. (Specifically, for any rational upper bound r > √2, r/2 + 1/r is a closer rational upper bound.)

- -* The ordered set of non-negative integers with its usual order is not a linear continuum. Property a) is satisfied (let A be a subset of the set of non-negative integers that is bounded above. Then A is finite so it has a maximum, and this maximum is the desired least upper bound of A). On the other hand, property b) is not. Indeed, 5 is a non-negative integer and so is 6, but there exists no non-negative integer that lies strictly between them.

- -* The ordered set A of nonzero real numbers

- -A = (−∞, 0) ∪ (0, +∞)

- -is not a linear continuum. Property b) is trivially satisfied. However, if B is the set of negative real numbers:

- -B = (−∞, 0)

- -then B is a subset of A which is bounded above (by any element of A greater than 0; for instance 1), but has no least upper bound in A.
Notice that 0 cannot serve as the least upper bound for B, since 0 is not an element of A. - -* Let Z denote the set of negative integers and let A = (0, 5) ∪ (5, +∞). Let - -S = Z ∪ A. - -Then S satisfies neither property a) nor property b). The proof is similar to the previous examples. - -Linear continua are important in the study of ordered sets, and they also have applications in the mathematical field of topology. In fact, we will prove that an ordered set in the order topology is connected if and only if it is a linear continuum. We will prove one implication, and leave the other one as an exercise. (Munkres explains the second part of the proof.) - -Theorem - -Let X be an ordered set in the order topology. If X is connected, then X is a linear continuum. - -Proof: - -Suppose that x and y are elements of X with x < y. If there exists no z in X such that x < z < y, consider the sets: - -A = (−∞, y) - -B = (x, +∞) - -These sets are disjoint (if a is in A, then a < y, so that if a is also in B, then x < a < y, which is impossible by hypothesis), nonempty (x is in A and y is in B) and open (in the order topology), and their union is X. This contradicts the connectedness of X. - -Now we prove the least upper bound property. If C is a subset of X that is bounded above and has no least upper bound, let D be the union of all open rays of the form (b, +∞) where b is an upper bound for C. Then D is open (since it is the union of open sets), and closed (if a is not in D, then a < b for all upper bounds b of C, so that we may choose q > a such that q is in C (if no such q exists, a is the least upper bound of C); then an open interval containing a may be chosen that doesn't intersect D). Since D is nonempty (there is more than one upper bound of C, for if there were exactly one upper bound s, s would be the least upper bound; then if b1 and b2 are two upper bounds of C with b1 < b2, b2 will belong to D), D and its complement together form a separation of X. This contradicts the connectedness of X. - -# Since the ordered set A = (−∞, 0) ∪ (0, +∞) is not a linear continuum, it is disconnected. - -# By applying the theorem just proved, the fact that R is connected follows. In fact any interval (or ray) in R is also connected. - -# The set of integers is not a linear continuum and therefore cannot be connected. - -# In fact, if an ordered set in the order topology is a linear continuum, it must be connected. Since any interval in this set is also a linear continuum, it follows that this space is locally connected, since it has a basis consisting entirely of connected sets. - -# For an example of a topological space that is a linear continuum, see long line. diff --git a/wiki/wikipedia/692.txt deleted file mode 100644 index 759fc5041b5331a60243091142b18f88015d356d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/692.txt +++ /dev/null @@ -1,13 +0,0 @@ -In algebra, the absorption law or absorption identity is an identity linking a pair of binary operations. - -Two binary operations, ¤ and ⁂, are said to be connected by the absorption law if: - -a ¤ (a ⁂ b) = a ⁂ (a ¤ b) = a. - -A set equipped with two commutative and associative binary operations $\scriptstyle \lor$ ("join") and $\scriptstyle \land$ ("meet") that are connected by the absorption law is called a lattice; in this case, both operations are necessarily idempotent.
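Because the absorption law is precisely what ties a pair of semilattice operations into a lattice, a quick finite check can make the definition concrete. The following minimal Python sketch (my own illustration, not part of the original article) verifies absorption for min and max on a finite chain of integers, and shows that another pair of commutative, associative operations, addition and multiplication, fails it:

```python
# Check the absorption law a ¤ (a ⁂ b) = a ⁂ (a ¤ b) = a on a finite domain.
import itertools

def satisfies_absorption(join, meet, domain):
    return all(
        join(a, meet(a, b)) == a and meet(a, join(a, b)) == a
        for a, b in itertools.product(domain, repeat=2)
    )

# (Z, max, min) restricted to 0..9: a lattice, so absorption holds.
print(satisfies_absorption(max, min, range(10)))                 # True
# Addition and multiplication are commutative and associative but
# fail absorption: a + (a * b) is rarely equal to a.
print(satisfies_absorption(lambda a, b: a + b,
                           lambda a, b: a * b, range(10)))       # False
```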
- -Examples of lattices include Heyting algebras and Boolean algebras, in particular sets of sets with union and intersection operators, and ordered sets with min and max operations. - -In classical logic, and in particular Boolean algebra, the operations OR and AND, which are also denoted by $\scriptstyle \lor$ and $\scriptstyle \land$, satisfy the lattice axioms, including the absorption law. The same is true for intuitionistic logic. - -The absorption law does not hold in many other algebraic structures, such as commutative rings, e.g. the field of real numbers, relevance logics, linear logics, and substructural logics. In the last case, there is no one-to-one correspondence between the free variables of the defining pair of identities. diff --git a/wiki/wikipedia/693.txt deleted file mode 100644 index a93b420da514ec95d6e2f925f1a2b76476f9d147..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/693.txt +++ /dev/null @@ -1,69 +0,0 @@ -Aristarchus's inequality (after the Greek astronomer and mathematician Aristarchus of Samos; c. 310 – c. 230 BCE) is a law of trigonometry which states that if α and β are acute angles (i.e. between 0 and a right angle) and β < α then -$$ - \frac{\sin\alpha}{\sin\beta} < \frac{\alpha}{\beta} < \frac{\tan\alpha}{\tan\beta}. -$$ - -Ptolemy used the first of these inequalities while constructing his table of chords. - -The proof is a consequence of the better-known inequalities -$$ - 0<\sin(\alpha)<\alpha<\tan(\alpha) -$$, $ 0<\sin(\beta)<\sin(\alpha)<1 $ and $ 1>\cos(\beta)>\cos(\alpha)>0$. - -Using these inequalities we can first prove that -$$ - \frac{\sin(\alpha)}{\sin(\beta)} < \frac{\alpha}{\beta}. -$$ - -We first note that the inequality is equivalent to -$$ -\frac{\sin(\alpha)}{\alpha} < \frac{\sin(\beta)}{\beta} -$$ - -which itself can be rewritten as -$$ -\frac{\sin(\alpha)-\sin(\beta)}{\alpha-\beta} < \frac{\sin(\beta)}{\beta}. -$$ - -We now want to show that -$$ -\frac{\sin(\alpha)-\sin(\beta)}{\alpha-\beta}<\cos(\beta) < \frac{\sin(\beta)}{\beta}. -$$ - -The second inequality is simply $\beta<\tan\beta$ rewritten. The first one is true because -$$ -\frac{\sin(\alpha)-\sin(\beta)}{\alpha-\beta} = \frac{2\sin\left(\frac{\alpha-\beta}{2}\right)\cos\left(\frac{\alpha+\beta}{2}\right)}{\alpha-\beta} < \frac{2\cdot\frac{\alpha-\beta}{2}\cdot\cos(\beta)}{\alpha-\beta} = \cos(\beta), -$$ - -where we used $\sin\left(\tfrac{\alpha-\beta}{2}\right)<\tfrac{\alpha-\beta}{2}$ and, since $\tfrac{\alpha+\beta}{2}>\beta$, $\cos\left(\tfrac{\alpha+\beta}{2}\right)<\cos(\beta)$. - -Now we want to show the second inequality, i.e. that: -$$ - \frac{\alpha}{\beta} <\frac{\tan(\alpha)}{\tan(\beta)}. -$$ - -We first note that due to the initial inequalities we have that: -$$ - \beta<\tan(\beta)=\frac{\sin(\beta)}{\cos(\beta)}<\frac{\sin(\beta)}{\cos(\alpha)} -$$ - -Consequently, since $0<\alpha-\beta<\alpha$, we may replace $\beta$ by $\alpha-\beta$ in the previous inequality to obtain: -$$ -{\alpha-\beta}<{\frac{\sin(\alpha-\beta)}{\cos(\alpha)}}=\tan(\alpha)\cos(\beta)-\sin(\beta). -$$ - -We conclude that -$$ - \frac{\alpha}{\beta}=\frac{\alpha-\beta}{\beta}+1< \frac{\tan(\alpha)\cos(\beta)-\sin(\beta)}{\sin(\beta)}+1 = \frac{\tan(\alpha)}{\tan(\beta)}.
-$$ diff --git a/wiki/wikipedia/694.txt b/wiki/wikipedia/694.txt deleted file mode 100644 index 9e5e999bbda8c1e3e8844ef8c49e10e28c418f83..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/694.txt +++ /dev/null @@ -1,9 +0,0 @@ -In number theory, the Gaussian moat problem asks whether it is possible to find an infinite sequence of distinct Gaussian prime numbers such that the difference between consecutive numbers in the sequence is bounded. More colorfully, if one imagines the Gaussian primes to be stepping stones in a sea of complex numbers, the question is whether one can walk from the origin to infinity with steps of bounded size, without getting wet. The problem was first posed in 1962 by Basil Gordon (although it has sometimes been erroneously attributed to Paul Erdős) and it remains unsolved. - -With the usual prime numbers, such a sequence is impossible: the prime number theorem implies that there are arbitrarily large gaps in the sequence of prime numbers, and there is also an elementary direct proof: for any n, the n - 1 consecutive numbers n! + 2, n! + 3, ..., n! + n are all composite. - -The problem of finding a path between two Gaussian primes that minimizes the maximum hop size is an instance of the minimax path problem, and the hop size of an optimal path is equal to the width of the widest moat between the two primes, where a moat may be defined by a partition of the primes into two subsets and its width is the distance between the closest pair that has one element in each subset. Thus, the Gaussian moat problem may be phrased in a different but equivalent form: is there a finite bound on the widths of the moats that have finitely many primes on the side of the origin? - -Computational searches have shown that the origin is separated from infinity by a moat of width 6. - -It is known that, for any positive number k, there exist Gaussian primes whose nearest neighbor is at distance k or larger. In fact, these numbers may be constrained to be on the real axis. For instance, the number 20785207 is surrounded by a moat of width 17. Thus, there definitely exist moats of arbitrarily large width, but these moats do not necessarily separate the origin from infinity. diff --git a/wiki/wikipedia/695.txt b/wiki/wikipedia/695.txt deleted file mode 100644 index 4771dcd31266ab41edc2a178ba8b0f5b92958921..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/695.txt +++ /dev/null @@ -1,57 +0,0 @@ -In mathematics, more specifically non-commutative ring theory, modern algebra, and module theory, the Jacobson density theorem is a theorem concerning simple modules over a ring R. - -The theorem can be applied to show that any primitive ring can be viewed as a "dense" subring of the ring of linear transformations of a vector space. This theorem first appeared in the literature in 1945, in the famous paper "Structure Theory of Simple Rings Without Finiteness Assumptions" by Nathan Jacobson. This can be viewed as a kind of generalization of the Artin-Wedderburn theorem's conclusion about the structure of simple Artinian rings. - -Let R be a ring and let U be a simple right R-module. If u is a non-zero element of U, u • R = U (where u • R is the cyclic submodule of U generated by u). Therefore, if u, v are non-zero elements of U, there is an element of R that induces an endomorphism of U transforming u to v. The natural question now is whether this can be generalized to arbitrary (finite) tuples of elements. 
More precisely, one seeks necessary and sufficient conditions on the tuples (x1, ..., xn) and (y1, ..., yn) so that there is an element of R with the property that xi • r = yi for all i. If D is the set of all R-module endomorphisms of U, then Schur's lemma asserts that D is a division ring, and the Jacobson density theorem answers the question on tuples in the affirmative, provided that the xi are linearly independent over D. - -With the above in mind, the theorem may be stated this way: - -The Jacobson Density Theorem. Let U be a simple right R-module, D = End(UR), and X ⊂ U a finite and D-linearly independent set. If A is a D-linear transformation on U, then there exists r ∈ R such that A(x) = x • r for all x in X. - -In the Jacobson density theorem, the right R-module U is simultaneously viewed as a left D-module where D = End(UR), in the natural way: g • u = g(u). It can be verified that this is indeed a left module structure on U. As noted before, Schur's lemma proves D is a division ring if U is simple, and so U is a vector space over D. - -The proof also relies on the following theorem proven on p. 185: - -Theorem. Let U be a simple right R-module, D = End(UR), and X ⊂ U a finite set. Write I = annR(X) for the annihilator of X in R. Let u be in U with u • I = 0. Then u is in XD, the D-span of X. - -===Proof of the Jacobson density theorem=== - -We use induction on |X|, the number of elements of X. If X is empty, then the theorem is vacuously true and the base case for induction is verified. - -Assume X is non-empty, let x be an element of X and write Y = X \{x}. If A is any D-linear transformation on U, by the induction hypothesis there exists s ∈ R such that A(y) = y • s for all y in Y. Write I = annR(Y). It is easily seen that x • I is a submodule of U. If x • I = 0, then the previous theorem implies that x would be in the D-span of Y, contradicting the D-linear independence of X; therefore x • I ≠ 0. Since U is simple, we have x • I = U. Since A(x) − x • s ∈ U = x • I, there exists i in I such that x • i = A(x) − x • s. - -Define r = s + i and observe that for all y in Y we have: - -\begin{align} y \cdot r &= y \cdot (s + i) \\ &= y \cdot s + y \cdot i \\ &= y \cdot s && (\text{since } i\in \text{ann}_R(Y)) \\ &= A(y) \end{align} - -Now we do the same calculation for x: - -\begin{align} x \cdot r &= x \cdot (s + i) \\ &= x \cdot s + x \cdot i \\ &= x \cdot s + \left( A(x) - x \cdot s \right) \\ &= A(x) \end{align} - -Therefore, A(z) = z • r for all z in X, as desired. This completes the inductive step of the proof. It follows now from mathematical induction that the theorem is true for finite sets X of any size. - -A ring R is said to act densely on a simple right R-module U if it satisfies the conclusion of the Jacobson density theorem. There is a topological reason for describing R as "dense". Firstly, R can be identified with a subring of End(DU) by identifying each element of R with the D-linear transformation it induces by right multiplication. If U is given the discrete topology, if UU is given the product topology, and if End(DU) is viewed as a subspace of UU and is given the subspace topology, then R acts densely on U if and only if R is a dense set in End(DU) with this topology. - -The Jacobson density theorem has various important consequences in the structure theory of rings. Notably, the Artin–Wedderburn theorem's conclusion about the structure of simple right Artinian rings is recovered.
The Jacobson density theorem also characterizes right or left primitive rings as dense subrings of the ring of D-linear transformations on some D-vector space U, where D is a division ring. - -This result is related to the von Neumann bicommutant theorem, which states that, for a *-algebra A of operators on a Hilbert space H, the double commutant A′′ can be approximated by A on any given finite set of vectors. In other words, the double commutant is the closure of A in the weak operator topology. See also the Kaplansky density theorem in the von Neumann algebra setting. diff --git a/wiki/wikipedia/696.txt deleted file mode 100644 index 11534f45c1376fafd0ca230e1a6e103db0f93998..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/696.txt +++ /dev/null @@ -1,323 +0,0 @@ -The Larch Prover, or LP for short, was an interactive theorem proving system for multi-sorted first-order logic. It was used at MIT and elsewhere during the 1990s to reason about designs for circuits, concurrent algorithms, hardware, and software. - -Unlike most theorem provers, which attempt to find proofs automatically for correctly stated conjectures, LP was intended to assist users in finding and correcting flaws in conjectures, the predominant activity in the early stages of the design process. It worked efficiently on large problems, had many important user amenities, and could be used by relatively naïve users. - -LP was developed by Stephen Garland and John Guttag at the MIT Laboratory for Computer Science with assistance from James Horning and James Saxe at the DEC Systems Research Center, as part of the Larch project on formal specifications. It extended the REVE 2 equational term rewriting system developed by Pierre Lescanne and Randy Forgaard, with assistance from David Detlefs and Katherine Yelick. It supported proofs by equational term rewriting (for terms with associative-commutative operators), cases, contradiction, induction, generalization, and specialization. - -LP was written in the CLU programming language. - -
    -
    -declare sorts E, S
    -
    -declare variables e, e1, e2: E, x, y, z: S
    -
    -declare operators
    -
    -{}: -> S
    -
    -{__}: E -> S
    -
    -insert: E, S -> S
    -
    -__ \union __: S, S -> S
    -
    -__ \in __: E, S -> Bool
    -
    -__ \subseteq __: S, S -> Bool
    -
    -..
    -
    -set name setAxioms
    -
    -assert
    -
    -sort S generated by {}, insert;
    -
    -{e} = insert(e, {});
    -
    -~(e \in {});
    -
    -e \in insert(e1, x) <=> e = e1 \/ e \in x;
    -
    -{} \subseteq x;
    -
    -insert(e, x) \subseteq y <=> e \in y /\ x \subseteq y;
    -
    -e \in (x \union y) <=> e \in x \/ e \in y
    -
    -..
    -
    -set name extensionality
    -
    -assert \A e (e \in x <=> e \in y) => x = y
    -
    -
    - -
    -
    -set name setTheorems
    -
    -prove e \in {e}
    -
    -qed
    -
    -prove \E x \A e (e \in x <=> e = e1 \/ e = e2)
    -
    -resume by specializing x to insert(e2, {e1})
    -
    -qed
    -
    -% Three theorems about union (proved using extensionality)
    -
    -prove x \union {} = x
    -
    -instantiate y by x \union {} in extensionality
    -
    -qed
    -
    -prove x \union insert(e, y) = insert(e, x \union y)
    -
    -resume by contradiction
    -
    -set name lemma
    -
    -critical-pairs *Hyp with extensionality
    -
    -qed
    -
    -prove ac \union
    -
    -resume by contradiction
    -
    -set name lemma
    -
    -critical-pairs *Hyp with extensionality
    -
    -resume by contradiction
    -
    -set name lemma
    -
    -critical-pairs *Hyp with extensionality
    -
    -qed
    -
    -% Three theorems about subset
    -
    -set proof-methods =>, normalization
    -
    -prove e \in x /\ x \subseteq y => e \in y by induction on x
    -
    -resume by case ec = e1c
    -
    -set name lemma
    -
    -complete
    -
    -qed
    -
    -prove x \subseteq y /\ y \subseteq x => x = y
    -
    -set name lemma
    -
    -prove e \in xc <=> e \in yc by <=>
    -
    -complete
    -
    -complete
    -
    -instantiate x by xc, y by yc in extensionality
    -
    -qed
    -
    -prove (x \union y) \subseteq z <=> x \subseteq z /\ y \subseteq z by induction on x
    -
    -qed
    -
    -% An alternate induction rule
    -
    -prove sort S generated by {}, {__}, \union
    -
    -set name lemma
    -
    -resume by induction
    -
    -critical-pairs *GenHyp with *GenHyp
    -
    -critical-pairs *InductHyp with lemma
    -
    -qed
    -
    -
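The session above, reproduced from the deleted article, declares a small theory of finite sets and then exercises most of the proof methods listed earlier: resume by specializing and resume by induction drive specialization and induction, resume by contradiction and resume by case handle proofs by contradiction and case analysis, and the complete and critical-pairs commands invoke the equational term-rewriting engine. Each qed asks LP to confirm that all outstanding proof obligations have been discharged before the script proceeds.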
- -Pascal André, Annya Romanczuk, Jean-Claude Royer, and Aline Vasconcelos, "Checking the consistency of UML class diagrams using Larch Prover", Proceedings of the 2000 International Conference on Rigorous Object-Oriented Methods, page 1, York, UK, BCS Learning & Development Ltd., Swindon, GBR, January 2000. - -Boutheina Chetali, "Formal verification of concurrent programs using the Larch Prover", IEEE Transactions on Software Engineering 24:1, pages 46-62, January 1998. doi: 10.1109/32.663997. - -Manfred Broy, "Experiences with software specification and verification using LP, the Larch proof assistant", Formal Methods in System Design 8:3, pages 221-272, 1996. - -Urban Engberg, Peter Grønning, and Leslie Lamport, "Mechanical Verification of Concurrent Systems with TLA", Computer-Aided Verification, G. v. Bochmann and D. K. Probst (editors), Proceedings of the Fourth International Conference (CAV '92), Lecture Notes in Computer Science 663, Springer-Verlag, June 1992, pages 44-55. - -Urban Engberg, Reasoning in the Temporal Logic of Actions, BRICS Dissertation Series DS 96-1, Department of Computer Science, University of Aarhus, Denmark, August 1996. ISSN 1396-7002. - -Stephen J. Garland and John V. Guttag, "Inductive methods for reasoning about abstract data types," Fifteenth Annual ACM Symposium on Principles of Programming Languages, pages 219-228, San Diego, CA, January 1988. - -Stephen J. Garland and John V. Guttag, "LP: The Larch Prover," Ninth International Conference on Automated Deduction, Lecture Notes in Computer Science 310, pages 748-749, Argonne, Illinois, May 1988. Springer-Verlag. - -Stephen J. Garland, John V. Guttag, and Jørgen Staunstrup, "Verification of VLSI circuits using LP," The Fusion of Hardware Design and Verification, pages 329-345, Glasgow, Scotland, July 4-6, 1988. IFIP WG 10.2, North Holland. - -Stephen J. Garland and John V. Guttag, "An overview of LP, the Larch Prover," Third International Conference on Rewriting Techniques and Applications, Lecture Notes in Computer Science 355, pages 137-151, Chapel Hill, NC, April 1989. Springer-Verlag. - -Stephen J. Garland and John V. Guttag, "Using LP to debug specifications," Programming Concepts and Methods, Sea of Galilee, Israel, April 2-5, 1990. IFIP WG 2.2/2.3, North-Holland. - -Stephen J. Garland and John V. Guttag, A Guide to LP: the Larch Prover, MIT Laboratory for Computer Science, December 1991. Also published as Digital Equipment Corporation Systems Research Center Report 82, 1991. - -Victor Luchangco, Ekrem Söylemez, Stephen Garland, and Nancy Lynch, "Verifying timing properties of concurrent algorithms," FORTE '94: Seventh International Conference on Formal Description Techniques, pages 259-273, Berne, Switzerland, October 4-7, 1994. Chapman & Hall. - -Ursula Martin and Michael Lai, "Some experiments with a completion theorem prover", Journal of Symbolic Computation 13:1, 1992, pages 81-100, ISSN 0747-7171. - -Ursula Martin and Jeannette M. Wing (editors), First International Workshop on Larch, Proceedings of the First International Workshop on Larch, Dedham, Massachusetts, July 13-15, 1992, Workshops in Computing, Springer-Verlag, 1992.
- -* Michel Bidoit and Rolf Hennicker, "How to prove observational theorems with LP", pages 18-35 - -* Boutheina Chetali and Pierre Lescanne, "An exercise in LP: the proof of a non-restoring division circuit", pages 55-68 - -* Christine Choppy and Michel Bidoit, "Integrating ASSPEGIQUE and LP", pages 69-85 - -* Niels Mellergaard and Jørgen Staunstrup, "Generating proof obligations for circuits", pages 185-200 - -* E. A. Scott and K. J. Norrie, "Using LP to study the language PL0+", pages 227-245 - -* Frédéric Voisin, "A new front-end for the Larch Prover", pages 282-296 - -* J. M. Wing, E. Rollins, and A. Moorman Zaremski, "Thoughts on a Larch/ML and a new application for LP", pages 297-312 - -Toh Ne Win, Michael D. Ernst, Stephen J. Garland, Dilsun Kirli, and Nancy Lynch, "Using simulated execution in verifying distributed algorithms," Software Tools for Technology Transfer 6:1, Lenore D. Zuck, Paul C. Attie, Agostino Cortesi, and Supratik Mukhopadhyay (editors), pages 67-76. Springer-Verlag, July 2004. - -Tsvetomir P. Petrov, Anya Pogosyants, Stephen J. Garland, Victor Luchangco, and Nancy A. Lynch, "Computer-assisted verification of an algorithm for concurrent timestamps," Formal Description Techniques IX: Theory, Application, and Tools (FORTE/PSTV), Reinhard Gotzhein and Jan Bredereke (editors), pages 29-44, Kaiserslautern, Germany, October 8-11, 1996. Chapman & Hall. - -James B. Saxe, Stephen J. Garland, John V. Guttag, and James J. Horning, "Using transformations and verification in circuit design," Formal Methods in System Design 3:3 (December 1993), pages 181-209. - -Jørgen F. Søgaard-Andersen, Stephen J. Garland, John V. Guttag, Nancy A. Lynch, and Anya Pogosyants, "Computer-assisted simulation proofs," Fifth Conference on Computer-Aided Verification (CAV '93), Costas Courcoubetis (editor), Lecture Notes in Computer Science 697, pages 305-319, Elounda, Greece, June 1993. Springer-Verlag. - -Jørgen Staunstrup, Stephen J. Garland, and John V. Guttag, "Localized verification of circuit descriptions," Automatic Verification Methods for Finite State Systems, Lecture Notes in Computer Science 407, pages 349-364, Grenoble, France, June 1989. Springer-Verlag. - -Jørgen Staunstrup, Stephen J. Garland, and John V. Guttag, "Mechanized verification of circuit descriptions using the Larch Prover", Theorem Provers in Circuit Design, Victoria Stavridou, Thomas F. Melham, and Raymond T. Boute (editors), IFIP Transactions A-10, pages 277-299, Nijmegen, The Netherlands, June 22-24, 1992. North-Holland. - -Mark T. Vandevoorde and Deepak Kapur, "Distributed Larch Prover (DLP): an experiment in parallelizing a rewrite-rule based prover", International Conference on Rewriting Techniques and Applications RTA 1996, Lecture Notes in Computer Science 1103, pages 420-423. Springer-Verlag. - -Frédéric Voisin, "A new proof manager and graphic interface for the Larch prover", International Conference on Rewriting Techniques and Applications RTA 1996, Lecture Notes in Computer Science 1103, pages 408-411. Springer-Verlag. - -Jeannette M.
Wing and Chun Gong, "Experience with the Larch Prover," ACM SIGSOFT Software Engineering Notes 15:4, September 1990, pages 140-143. https://doi.org/10.1145/99571.99835 diff --git a/wiki/wikipedia/697.txt deleted file mode 100644 index 4771dcd31266ab41edc2a178ba8b0f5b92958921..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/697.txt +++ /dev/null @@ -1,25 +0,0 @@ -An arc diagram is a style of graph drawing in which the vertices of a graph are placed along a line in the Euclidean plane, with edges being drawn as semicircles in one or both of the two halfplanes bounded by the line, or as smooth curves formed by sequences of semicircles. In some cases, line segments of the line itself are also allowed as edges, as long as they connect only vertices that are consecutive along the line. Variations of this drawing style in which the semicircles are replaced by convex curves of some other type are also commonly called arc diagrams. - -The use of the phrase "arc diagram" for this kind of drawing follows the use of a similar type of diagram by Wattenberg to visualize the repetition patterns in strings, by using arcs to connect pairs of equal substrings. However, this style of graph drawing is much older than its name, dating back to the work of Saaty and Nicholson, who used arc diagrams to study crossing numbers of graphs. An older but less frequently used name for arc diagrams is linear embeddings. - -Heer, Bostock, and Ogievetsky write that arc diagrams "may not convey the overall structure of the graph as effectively as a two-dimensional layout", but that their layout makes it easy to display multivariate data associated with the vertices of the graph. Applications of arc diagrams include the Farey diagram, a visualization of number-theoretic connections between rational numbers, and diagrams representing RNA secondary structure in which the crossings of the diagram represent pseudoknots in the structure. - -As Nicholson observed, every drawing of a graph in the plane may be deformed into an arc diagram, without changing its number of crossings. In particular, every planar graph has a planar arc diagram. However, this embedding may need to use more than one semicircle for some of its edges. - -If a graph is drawn without crossings using an arc diagram in which each edge is a single semicircle, then the drawing is a two-page book embedding, something that is only possible for the subhamiltonian graphs, a proper subset of the planar graphs. For instance, a maximal planar graph has such an embedding if and only if it contains a Hamiltonian cycle. Therefore, a non-Hamiltonian maximal planar graph such as the Goldner–Harary graph cannot have a planar embedding with one semicircle per edge. Testing whether a given graph has a crossing-free arc diagram of this type (or equivalently, whether it has pagenumber two) is NP-complete. - -However, every planar graph has an arc diagram in which each edge is drawn as a biarc with at most two semicircles. More strongly, every st-planar directed graph (a planar directed acyclic graph with a single source and a single sink, both on the outer face) has an arc diagram in which every edge forms a monotonic curve, with these curves all consistently oriented from one end of the vertex line towards the other.
For undirected planar graphs, one way to construct an arc diagram with at most two semicircles per edge is to subdivide the graph and add extra edges so that the resulting graph has a Hamiltonian cycle (and so that each edge is subdivided at most once), and to use the ordering of the vertices on the Hamiltonian cycle as the ordering along the line. In a planar graph with $n$ vertices, at most $n/2$ biarcs are needed. - -Because it is NP-complete to test whether a given graph has an arc diagram with one semicircle per edge and no crossings, it is also NP-hard to find an arc diagram of this type that minimizes the number of crossings. This crossing minimization problem remains NP-hard for non-planar graphs even if the ordering of the vertices along the line is fixed. However, in the fixed-ordering case, an embedding without crossings (if one exists) may be found in polynomial time by translating the problem into a 2-satisfiability problem, in which the variables represent the placement of each arc and the constraints prevent crossing arcs from being placed on the same side of the vertex line (a minimal sketch of this fixed-ordering test appears below). Additionally, in the fixed-ordering case, a crossing-minimizing embedding may be approximated by solving a maximum cut problem in an auxiliary graph that represents the semicircles and their potential crossings (or equivalently, by approximating the MAX2SAT version of the 2-satisfiability instance). - -Cimikowski, Cimikowski, and He discuss heuristics for finding arc diagrams with few crossings. - -For drawings of directed graphs, a common convention is to draw each arc in a clockwise direction, so that arcs that are directed from an earlier to a later vertex in the sequence are drawn above the vertex line, and arcs directed from a later to an earlier vertex are drawn below the line. This clockwise orientation convention was developed as part of a different graph drawing style by Fekete, and applied to arc diagrams by Pretorius. - -The Farey diagram of a set of rational numbers is a structure that may be represented geometrically as an arc diagram. In this form it has a vertex for each number, placed on the number line, and a semicircular edge above the line connecting pairs of numbers $p/q$ and $r/s$ (in simplest terms) for which $|ps-rq|=1$. The semicircles of the diagram may be thought of as lines in the Poincaré half-plane model of the hyperbolic plane, with the vertices placed at infinite points on the boundary line of this model. The Poincaré half-plane model has an infinite point that is not represented as a point on the boundary line, the shared endpoint of all vertical rays in the model, and this may be represented by the "fraction" 1/0 (undefined as a number), with the same rule for determining its adjacencies. The Farey diagram of any set of rational numbers is a planar graph, and the Farey diagram of the set of all rational numbers forms a tessellation of the hyperbolic plane by ideal triangles. - -Arc diagrams may be used to visualize the bonds between base pairs in an RNA sequence, or another biological polymer, with the bases of the sequence arranged in sequence order along the line of the diagram, and with arcs above the line representing bonds between bases that are adjacent in the physical structure of the polymer despite being nonadjacent in the sequence order. For sequences whose secondary structure consists of nested stems and loops, the arc diagram will have no crossings. When there are crossings, the crossings represent pseudoknots in the structure.
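With the vertex ordering fixed, two semicircular arcs on the same side of the line cross exactly when their endpoints interleave, so in this all-disequality special case the 2-satisfiability instance mentioned above reduces to 2-coloring a "conflict graph" of interleaving arc pairs. The following minimal Python sketch (my own illustration of that reduction, not code from the literature it describes) tests whether a crossing-free two-sided placement exists for a fixed vertex order:

```python
# Fixed-order arc diagram: assign each arc to a side (0 = above, 1 = below)
# so that no two arcs on the same side cross, or report that it's impossible.
from itertools import combinations

def interleaves(e, f):
    """True if arcs e and f must go on opposite sides (endpoints interleave)."""
    (a, b), (c, d) = sorted(e), sorted(f)
    if (a, b) > (c, d):
        (a, b), (c, d) = (c, d), (a, b)
    return a < c < b < d

def two_page_assignment(edges):
    conflicts = {e: [] for e in edges}
    for e, f in combinations(edges, 2):
        if interleaves(e, f):
            conflicts[e].append(f)
            conflicts[f].append(e)
    side = {}
    for start in edges:                  # 2-color each component by traversal
        if start in side:
            continue
        side[start] = 0
        stack = [start]
        while stack:
            e = stack.pop()
            for f in conflicts[e]:
                if f not in side:
                    side[f] = 1 - side[e]
                    stack.append(f)
                elif side[f] == side[e]:  # odd conflict cycle: no assignment
                    return None
    return side

# Vertices 0..3 on a line: the arcs (0, 2) and (1, 3) interleave and end up
# on opposite sides, so all six edges of K4 fit without crossings.
print(two_page_assignment([(0, 1), (1, 2), (2, 3), (0, 2), (1, 3), (0, 3)]))
```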
The degree distribution of the intersection graph of arcs and crossings can be used to understand the scaling behavior of these topological features of polymers. - -Arc diagrams were used by Brandes to visualize the state diagram of a shift register, by Djidjev to show that the crossing number of every graph is lower-bounded by a combination of its cutwidth and vertex degrees, by Byrne to visualize interactions between Bluetooth devices, and by Owens to visualize the yardage of plays in a game of American football. Additional applications of this visualization technique are surveyed by Nagel. diff --git a/wiki/wikipedia/698.txt deleted file mode 100644 index 012964a1689192eb519098850d418b437fbc6e2c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/698.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, the Zeeman conjecture or Zeeman's collapsibility conjecture asks whether, given a finite contractible 2-dimensional CW complex $K$, the space $K\times [0,1]$ is collapsible. - -The conjecture, due to Christopher Zeeman, implies the Poincaré conjecture and the Andrews–Curtis conjecture. diff --git a/wiki/wikipedia/699.txt deleted file mode 100644 index 5d76846046929887ffbd9e2a9967c2632d69eded..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/699.txt +++ /dev/null @@ -1,100 +0,0 @@ -In mathematics, Abel's theorem for power series relates a limit of a power series to the sum of its coefficients. It is named after the Norwegian mathematician Niels Henrik Abel. - -Let - -G(x) = \sum_{k=0}^\infty a_k x^k - -be a power series with real coefficients $a_k$ with radius of convergence $1.$ Suppose that the series - -\sum_{k=0}^\infty a_k - -converges. - -Then $G(x)$ is continuous from the left at $x = 1,$ that is, - -\lim_{x\to 1^-} G(x) = \sum_{k=0}^\infty a_k. - -The same theorem holds for complex power series - -G(z) = \sum_{k=0}^\infty a_k z^k, - -provided that $z \to 1$ within a Stolz sector, that is, a region of the open unit disk where - -|1-z|\leq M(1-|z|) - -for some $M > 1.$ Without this restriction, the limit may fail to exist: for example, the power series - -\sum_{n>0} \frac{z^{3^n}-z^{2\cdot 3^n}} n - -converges to $0$ as $z \to 1$ along the real axis, but is unbounded near any point of the form $e^{\pi i/3^n},$ so this value is not the limit as $z$ tends to 1 in the whole open disk. - -Note that $G(z)$ is continuous on the real closed interval $[0, t]$ for $t < 1,$ by virtue of the uniform convergence of the series on compact subsets of the disk of convergence. Abel's theorem allows us to say more, namely that $G(z)$ is continuous on $[0, 1].$ - -As an immediate consequence of this theorem, if $z$ is any nonzero complex number for which the series - -\sum_{k=0}^\infty a_k z^k - -converges, then it follows that - -\lim_{t\to 1^{-}} G(tz) = \sum_{k=0}^\infty a_kz^k - -in which the limit is taken from below. - -The theorem can also be generalized to account for sums which diverge to infinity. If - -\sum_{k=0}^\infty a_k = \infty - -then - -\lim_{z\to 1^{-}} G(z) = \infty. - -However, if the series is only known to be divergent, but for reasons other than diverging to infinity, then the claim of the theorem may fail: take, for example, the power series for - -\frac{1}{1+z}.
- -At $z = 1$ the series is equal to $1 - 1 + 1 - 1 + \cdots,$ but $\tfrac{1}{1+1} = \tfrac{1}{2}.$ - -We also remark that the theorem holds for radii of convergence other than $R = 1$: let - -G(x) = \sum_{k=0}^\infty a_kx^k - -be a power series with radius of convergence $R,$ and suppose the series converges at $x = R.$ Then $G(x)$ is continuous from the left at $x = R,$ that is, - -\lim_{x\to R^-}G(x) = G(R). - -The utility of Abel's theorem is that it allows us to find the limit of a power series as its argument (that is, $z$) approaches $1$ from below, even in cases where the radius of convergence, $R,$ of the power series is equal to $1$ and we cannot be sure whether the limit should be finite or not. See, for example, the binomial series. Abel's theorem allows us to evaluate many series in closed form. For example, when - -a_k = \frac{(-1)^k}{k+1}, - -we obtain - -G_a(z) = \frac{\ln(1+z)}{z}, \qquad 0 < z < 1, - -by integrating the uniformly convergent geometric power series term by term on $[-z, 0]$; thus the series - -\sum_{k=0}^\infty \frac{(-1)^k}{k+1} - -converges to $\ln(2)$ by Abel's theorem. Similarly, - -\sum_{k=0}^\infty \frac{(-1)^k}{2k+1} - -converges to $\arctan(1) = \tfrac{\pi}{4}.$ - -$G_a(z)$ is called the generating function of the sequence $a.$ Abel's theorem is frequently useful in dealing with generating functions of real-valued and non-negative sequences, such as probability-generating functions. In particular, it is useful in the theory of Galton-Watson processes. - -After subtracting a constant from $a_0,$ we may assume that $\sum_{k=0}^\infty a_k=0.$ Let $s_n=\sum_{k=0}^n a_k\!.$ Then substituting $a_k=s_k-s_{k-1}$ and performing a simple manipulation of the series (summation by parts) results in - -G_a(z) = (1-z)\sum_{k=0}^{\infty} s_k z^k. - -Given $\varepsilon > 0,$ pick $n$ large enough so that $|s_k| < \varepsilon$ for all $k \geq n$ and note that - -\left|(1-z)\sum_{k=n}^\infty s_kz^k \right| \leq \varepsilon |1-z|\sum_{k=n}^\infty |z|^k = \varepsilon|1-z|\frac{|z|^n}{1-|z|} \leq \varepsilon M |z|^n < \varepsilon M - -when $z$ lies within the given Stolz angle. Whenever $z$ is sufficiently close to $1$ we have - -\left|(1-z)\sum_{k=0}^{n-1} s_kz^k \right| < \varepsilon, - -so that $\left|G_a(z)\right| < (M+1) \varepsilon$ when $z$ is both sufficiently close to $1$ and within the Stolz angle. - -Converses to a theorem like Abel's are called Tauberian theorems: there is no exact converse, but there are results conditional on some additional hypothesis. The field of divergent series, and their summation methods, contains many theorems of abelian type and of tauberian type. diff --git a/wiki/wikipedia/7.txt deleted file mode 100644 index dc8129e41609adf2f7191e35d94bc73908f7caa1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/7.txt +++ /dev/null @@ -1,60 +0,0 @@ -In mathematics, the Stieltjes moment problem, named after Thomas Joannes Stieltjes, seeks necessary and sufficient conditions for a sequence (m0, m1, m2, ...) to be of the form -$$ -m_n = \int_0^\infty x^n\,d\mu(x) -$$ - -for some measure μ. If such a measure μ exists, one asks whether it is unique. - -The essential difference between this and other well-known moment problems is that this is on a half-line [0, ∞), whereas in the Hausdorff moment problem one considers a bounded interval [0, 1], and in the Hamburger moment problem one considers the whole line (−∞, ∞).
- -Let -$$ -\Delta_n=\left[\begin{matrix} m_0 & m_1 & m_2 & \cdots & m_{n} \\ m_1 & m_2 & m_3 & \cdots & m_{n+1} \\ m_2 & m_3 & m_4 & \cdots & m_{n+2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ m_{n} & m_{n+1} & m_{n+2} & \cdots & m_{2n} \end{matrix}\right] -$$ - -and -$$ -\Delta_n^{(1)}=\left[\begin{matrix} m_1 & m_2 & m_3 & \cdots & m_{n+1} \\ m_2 & m_3 & m_4 & \cdots & m_{n+2} \\ m_3 & m_4 & m_5 & \cdots & m_{n+3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ m_{n+1} & m_{n+2} & m_{n+3} & \cdots & m_{2n+1} \end{matrix}\right]. -$$ - -Then { mn : n = 0, 1, 2, ... } is a moment sequence of some measure on $[0,\infty)$ with infinite support if and only if for all n, both -$$ -\det(\Delta_n) > 0\ \mathrm{and}\ \det\left(\Delta_n^{(1)}\right) > 0. -$$ - -{ mn : n = 0, 1, 2, ... } is a moment sequence of some measure on $[0,\infty)$ with finite support of size m if and only if for all $n \leq m$, both -$$ -\det(\Delta_n) > 0\ \mathrm{and}\ \det\left(\Delta_n^{(1)}\right) > 0 -$$ - -and for all larger $n$ -$$ -\det(\Delta_n) = 0\ \mathrm{and}\ \det\left(\Delta_n^{(1)}\right) = 0. -$$ - -There are several sufficient conditions for uniqueness, for example, Carleman's condition, which states that the solution is unique if -$$ - \sum_{n \geq 1} m_n^{-1/(2n)} = \infty~. -$$ diff --git a/wiki/wikipedia/70.txt deleted file mode 100644 index 435d4318e8f373359c03bc71e81b342df766f31b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/70.txt +++ /dev/null @@ -1,158 +0,0 @@ -Guido Grandi (1671–1742) reportedly provided a simplistic account of the series in 1703. He noticed that inserting parentheses into 1 − 1 + 1 − 1 + · · · produced varying results: either -$$ -(1-1) + (1-1) + \cdots = 0 -$$ - -or -$$ -1+(-1+1)+(-1+1) +\cdots = 1. -$$ - -Grandi's explanation of this phenomenon became well known for its religious overtones: - -In fact, the series was not an idle subject for Grandi, and he didn't think it summed to either 0 or 1. Rather, like many mathematicians to follow, he thought the true value of the series was 1/2 for a variety of reasons. - -Grandi's mathematical treatment of 1 − 1 + 1 − 1 + · · · occurs in his 1703 book Quadratura circuli et hyperbolae per infinitas hyperbolas geometrice exhibita. Broadly interpreting Grandi's work, he derived 1 − 1 + 1 − 1 + · · · = 1/2 through geometric reasoning connected with his investigation of the witch of Agnesi. Eighteenth-century mathematicians immediately translated and summarized his argument in analytical terms: for a generating circle with diameter a, the equation of the witch y = a^3/(a^2 + x^2) has the series expansion -$$ -\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{a^{2n-1}}=a - \frac{x^2}{a} + \frac{x^4}{a^3} - \frac{x^6}{a^5} + \cdots -$$ - -and setting a = x = 1, one has 1 − 1 + 1 − 1 + · · · = 1/2. - -*According to Morris Kline, Grandi started with the binomial expansion -$$ -\frac{1}{1+x} = 1 - x + x^2 - x^3 + \cdots -$$ - -and substituted x = 1 to get 1 − 1 + 1 − 1 + · · · = 1/2. Grandi "also argued that since the sum was both 0 and 1/2, he had proved that the world could be created out of nothing." - -Grandi offered a new explanation that 1 − 1 + 1 − 1 + · · · = 1/2 in 1710, both in the second edition of the Quadratura circuli and in a new work, De Infinitis infinitorum, et infinite parvorum ordinibus disquisitio geometrica.
Two brothers inherit a priceless gem from their father, whose will forbids them to sell it, so they agree that it will reside in each other's museums on alternating years. If this agreement lasts for all eternity between the brothers' descendants, then the two families will each have half possession of the gem, even though it changes hands infinitely often. This argument was later criticized by Leibniz. - -The parable of the gem is the first of two additions to the discussion of the corollary that Grandi added to the second edition. The second repeats the link between the series and the creation of the universe by God: - -After Grandi published the second edition of the Quadratura, his fellow countryman Alessandro Marchetti became one of his first critics. One historian charges that Marchetti was motivated more by jealousy than any other reason. Marchetti found the claim that an infinite number of zeros could add up to a finite quantity absurd, and he inferred from Grandi's treatment the danger posed by theological reasoning. The two mathematicians began attacking each other in a series of open letters; their debate was ended only by Marchetti's death in 1714. - -With the help and encouragement of Antonio Magliabechi, Grandi sent a copy of the 1703 Quadratura to Leibniz, along with a letter expressing compliments and admiration for the master's work. Leibniz received and read this first edition in 1705, and he called it an unoriginal and less-advanced "attempt" at his calculus. Grandi's treatment of 1 − 1 + 1 − 1 + · · · would not catch Leibniz's attention until 1711, near the end of his life, when Christian Wolff sent him a letter on Marchetti's behalf describing the problem and asking for Leibniz's opinion. - -As early as 1674, in a minor, lesser-known writing De Triangulo Harmonico on the harmonic triangle, Leibniz mentioned 1 − 1 + 1 − 1 + · · · very briefly in an example: -$$ -\frac{1}{1+1} = \frac{1}{1}-\frac{1}{1+1}. \mathrm{\ Ergo\ } \frac{1}{1+1} = 1-1+1-1+1-1 \mathrm{\ etc.} -$$ - -Presumably he arrived at this series by repeated substitution: -$$ -\frac{1}{1+1} = 1 - ( 1 - \frac{1}{1+1} ) -$$ -$$ -\frac{1}{1+1} = 1 - ( 1 - ( 1 - ( 1 - \frac{1}{1+1} ) ) ) -$$ -$$ -\frac{1}{1+1} = 1 - ( 1 - ( 1 - ( 1 - ( 1 - ( 1 - \frac{1}{1+1} ) ) ) ) ) -$$
And so on. - -The series 1 − 1 + 1 − 1 + · · · also appears indirectly in a discussion with Tschirnhaus in 1676. - -Leibniz had already considered the divergent alternating series 1 − 2 + 4 − 8 + 16 − · · · as early as 1673. In that case he argued that by subtracting either on the left or on the right, one could produce either positive or negative infinity, and therefore both answers are wrong and the whole should be finite. Two years after that, Leibniz formulated the first convergence test in the history of mathematics, the alternating series test, in which he implicitly applied the modern definition of convergence. - -In the 1710s, Leibniz described Grandi's series in his correspondence with several other mathematicians. The letter with the most lasting impact was his first reply to Wolff, which he published in the Acta Eruditorum. In this letter, Leibniz attacked the problem from several angles. - -In general, Leibniz believed that the algorithms of calculus were a form of "blind reasoning" that ultimately had to be founded upon geometrical interpretations. Therefore, he agreed with Grandi that 1 − 1 + 1 − 1 + · · · = 1/2, claiming that the relation was well-founded because there existed a geometric demonstration. - -On the other hand, Leibniz sharply criticized Grandi's example of the shared gem, claiming that the series 1 − 1 + 1 − 1 + · · · has no relation to the story. He pointed out that for any finite, even number of years, the brothers have equal possession, yet the sum of the corresponding terms of the series is zero. - -Pierre Varignon (1654–1722) treated Grandi's series in his report Précautions à prendre dans l'usage des Suites ou Series infinies résultantes…. The first of his purposes for this paper was to point out the divergence of Grandi's series and expand on Jacob Bernoulli's 1696 treatment. - -(Varignon's math…) - -The final version of Varignon's paper is dated February 16, 1715, and it appeared in a volume of the Mémoires of the French Academy of Sciences that was itself not published until 1718. For such a relatively late treatment of Grandi's series, it is surprising that Varignon's report does not even mention Leibniz's earlier work. But most of the Précautions was written in October 1712, while Varignon was away from Paris. The Abbé Poignard's 1704 book on magic squares, Traité des Quarrés sublimes, had become a popular subject around the Academy, and the second revised and expanded edition weighed in at 336 pages. To make the time to read the Traité, Varignon had to escape to the countryside for nearly two months, where he wrote on the topic of Grandi's series in relative isolation. Upon returning to Paris and checking in at the Academy, Varignon soon discovered that the great Leibniz had ruled in favor of Grandi. Having been separated from his sources, Varignon still had to revise his paper by looking up and including the citation to Jacob Bernoulli. Rather than also take Leibniz's work into account, Varignon explains in a postscript to his report that the citation was the only revision he had made in Paris, and that if other research on the topic arose, his thoughts on it would have to wait for a future report. - -(Letters between Varignon and Leibniz…) - -In the 1751 Encyclopédie, Jean le Rond d'Alembert echoes the view that Grandi's reasoning based on division had been refuted by Varignon in 1715. (Actually, d'Alembert attributes the problem to "Guido Ubaldus", an error that is still occasionally propagated today.)
In a 1715 letter to Jacopo Riccati, Leibniz mentioned the question of Grandi's series and advertised his own solution in the Acta Eruditorum. Later, Riccati would criticize Grandi's argument in his 1754 Saggio intorno al sistema dell'universo, saying that it causes contradictions. He argues that one could just as well write n − n + n − n + · · · = n/(1 + 1), but that this series has "the same quantity of zeroes" as Grandi's series. These zeroes lack any evanescent character of n, as Riccati points out that the equality 1 − 1 = n − n is guaranteed by 1 + n = n + 1. He concludes that the fundamental mistake is in using a divergent series to begin with: - -Another 1754 publication also criticized Grandi's series on the basis of its collapse to 0. Louis Antoine de Bougainville briefly treats the series in his acclaimed 1754 textbook Traité du calcul intégral. He explains that a series is "true" if its sum is equal to the expression from which it is expanded; otherwise it is "false". Thus Grandi's series is false because 1/(1 + 1) = 1/2 and yet (1 − 1) + (1 − 1) + · · · = 0. - -Leonhard Euler treats 1 − 1 + 1 − 1 + · · · along with other divergent series in his De seriebus divergentibus, a 1746 paper that was read to the Academy in 1754 and published in 1760. He identifies the series as being first considered by Leibniz, and he reviews Leibniz's 1713 argument based on the series 1 − a + a^2 − a^3 + a^4 − a^5 + · · ·, calling it "fairly sound reasoning", and he also mentions the even/odd median argument. Euler writes that the usual objection to the use of 1/(1 + a) is that it does not equal 1 − a + a^2 − a^3 + a^4 − a^5 + · · · unless a is less than 1; otherwise all one can say is that -$$ -\frac{1}{1+a} = 1 - a + a^2 - a^3 + \cdots \pm a^n \mp \frac{a^{n+1}}{1+a}, -$$ - -where the last remainder term does not vanish and cannot be disregarded as n is taken to infinity. Still writing in the third person, Euler mentions a possible rebuttal to the objection: essentially, since an infinite series has no last term, there is no place for the remainder and it should be neglected. After reviewing more badly divergent series like 1 + 2 + 4 + 8 + · · ·, where he judges his opponents to have firmer support, Euler seeks to define away the issue: - -Euler also used finite differences to attack 1 − 1 + 1 − 1 + · · ·. In modern terminology, he took the Euler transform of the sequence and found that it equalled 1/2. As late as 1864, De Morgan claims that "this transformation has always appeared one of the strongest presumptions in favour of 1 − 1 + 1 − … being 1/2." - -Despite the confident tone of his papers, Euler expressed doubt over divergent series in his correspondence with Nicolaus I Bernoulli. Euler claimed that his attempted definition had never failed him, but Bernoulli pointed out a clear weakness: it does not specify how one should determine "the" finite expression that generates a given infinite series. Not only is this a practical difficulty, it would be theoretically fatal if a series were generated by expanding two expressions with different values. Euler's treatment of 1 − 1 + 1 − 1 + · · · rests upon his firm belief that 1/2 is the only possible value of the series; what if there were another? - -In a 1745 letter to Christian Goldbach, Euler claimed that he was not aware of any such counterexample, and in any case Bernoulli had not provided one. Several decades later, when Jean-Charles Callet finally asserted a counterexample, it was aimed at 1 − 1 + 1 − 1 + · · ·.
The background of the new idea begins with Daniel Bernoulli in 1771. - -* Daniel Bernoulli, who accepted the probabilistic argument that 1 − 1 + 1 − 1 + · · · = 1/2, noticed that by inserting 0s into the series in the right places, it could achieve any value between 0 and 1. In particular, the argument suggested that - -1 + 0 − 1 + 1 + 0 − 1 + 1 + 0 − 1 + · · · = 2/3. - -In a memorandum sent to Joseph Louis Lagrange toward the end of the century, Callet pointed out that 1 − 1 + 1 − 1 + · · · could also be obtained from the series -$$ -\frac{1+x}{1+x+x^2}=1-x^2+x^3-x^5+x^6-x^8+\cdots; -$$ - -substituting x = 1 now suggests a value of 2/3, not 1/2. - -Lagrange approved Callet's submission for publication in the Mémoires of the French Academy of Sciences, but it was never directly published. Instead, Lagrange (along with Charles Bossut) summarized Callet's work and responded to it in the Mémoires of 1799. He defended Euler by suggesting that Callet's series actually should be written with the 0 terms left in: -$$ -1+0-x^2+x^3+0-x^5+x^6+0-x^8+\cdots, -$$ - -which reduces to - -1 + 0 − 1 + 1 + 0 − 1 + 1 + 0 − 1 + · · · - -instead. - -The 19th century is remembered as the approximate period of Cauchy's and Abel's largely successful ban on the use of divergent series, but Grandi's series continued to make occasional appearances. Some mathematicians did not follow Abel's lead, mostly outside France, and British mathematicians especially took "a long time" to understand the analysis coming from the continent. - -In 1803, Robert Woodhouse proposed that 1 − 1 + 1 − 1 + · · · summed to something called -$$ -\frac{1}{1+1}, -$$ - -which could be distinguished from 1/2. Ivor Grattan-Guinness remarks on this proposal, "… R. Woodhouse … wrote with admirable honesty on the problems which he failed to understand. … Of course, there is no harm in defining new symbols such as 1/(1+1); but the idea is 'formalist' in the unflattering sense, and it does not bear on the problem of the convergence of series." - -In 1830, a mathematician identified only as "M. R. S." wrote in the Annales de Gergonne on a technique to numerically find fixed points of functions of one variable. If one can transform a problem into the form of an equation x = A + f(x), where A can be chosen at will, then -$$ -x = A+f(A+f(A+f(\cdots))) -$$ - -should be a solution, and truncating this infinite expression results in a sequence of approximations. Conversely, given the series x = a − a + a − a + · · ·, the author recovers the equation -$$ -x = a - x, -$$ - -to which the solution is (1/2)a. - -M. R. S. notes that the approximations in this case are a, 0, a, 0, …, but there is no need for Leibniz's "subtle reasoning". Moreover, the argument for averaging the approximations is problematic in a wider context (a small numeric sketch of this averaging idea appears below). For equations not of the form x = A + f(x), M. R. S.'s solutions are continued fractions, continued radicals, and other infinite expressions. In particular, the expression a/(a/(a/· · ·)) should be a solution of the equation x = a/x. Here, M. R. S. writes that based on Leibniz's reasoning, one is tempted to conclude that x is the average of the truncations a, 1, a, 1, …. This average is (1 + a)/2, but the solution to the equation is the square root of a. - -Bernard Bolzano criticized M. R. S.' algebraic solution of the series. In reference to the step -$$ -x = a-a+a-a+\cdots = a-(a-a+a-\cdots), -$$ - -Bolzano charged, - -This comment exemplifies Bolzano's intuitively appealing but deeply problematic views on infinity.
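The averaging of truncations that M. R. S. appealed to is, in modern terms, the Cesàro mean of the partial sums, and Abel's power-series approach (both discussed at the end of this article) assigns the same value. A minimal Python sketch, my own illustration rather than anything drawn from the historical sources, of both computations for Grandi's series:

```python
# Two modern summation methods applied to Grandi's series 1 - 1 + 1 - 1 + ...
def cesaro_mean(terms):
    """Average of the partial sums: the 'averaging' idea in modern dress."""
    partial = 0
    partial_sums = []
    for t in terms:
        partial += t
        partial_sums.append(partial)
    return sum(partial_sums) / len(partial_sums)

grandi = [(-1) ** k for k in range(10000)]
print(cesaro_mean(grandi))            # 0.5: the average of 1, 0, 1, 0, ...

x = 0.999                             # Abel: evaluate the power series near 1
print(sum((-1) ** k * x ** k for k in range(10000)))  # ~1/(1+x), close to 0.5
```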
In his defense, Cantor himself pointed out that Bolzano worked in a time when the concept of the cardinality of a set was absent. - -As late as 1844, Augustus De Morgan commented that if a single instance where 1 − 1 + 1 − 1 + · · · did not equal 1/2 could be given, he would be willing to reject the entire theory of trigonometric series. - -The same volume contains papers by Samuel Earnshaw and J. R. Young dealing in part with 1 − 1 + 1 − 1 + · · ·. G. H. Hardy dismisses both of these as "little more than nonsense", in contrast to De Morgan's "remarkable mixture of acuteness and confusion"; in any case, Earnshaw got De Morgan's attention with the following remarks: - -De Morgan fired back in 1864 in the same journal: - -The last scholarly article to be motivated by 1 − 1 + 1 − 1 + · · · might be identified as the first article in the modern history of divergent series. Georg Frobenius published an article titled "Ueber die Leibnitzsche Reihe" (On Leibniz's series) in 1880. He had found Leibniz's old letter to Wolff, citing it along with an 1836 article by Joseph Ludwig Raabe, who in turn drew on ideas by Leibniz and Daniel Bernoulli. - -Frobenius' short paper, barely two pages, begins by quoting from Leibniz's treatment of 1 − 1 + 1 − 1 + · · ·. He infers that Leibniz was actually stating a generalization of Abel's Theorem. The result, now known as Frobenius' theorem, has a simple statement in modern terms: any series that is Cesàro summable is also Abel summable to the same sum. Historian Giovanni Ferraro emphasizes that Frobenius did not actually state the theorem in such terms, and Leibniz did not state it at all. Leibniz was defending the association of the divergent series 1 − 1 + 1 − 1 + · · · with the value 1/2, while Frobenius' theorem is stated in terms of convergent sequences and the epsilon-delta formulation of the limit of a function. - -Frobenius' theorem was soon followed with further generalizations by Otto Hölder and Thomas Joannes Stieltjes in 1882. Again, to a modern reader their work strongly suggests new definitions of the sum of a divergent series, but those authors did not yet make that step. Ernesto Cesàro proposed a systematic definition for the first time in 1890. Since then, mathematicians have explored many different summability methods for divergent series. Most of these, especially the simpler ones with historical parallels, sum Grandi's series to 1/2. Others, motivated by Daniel Bernoulli's work, sum the series to another value, and a few do not sum it at all. diff --git a/wiki/wikipedia/700.txt deleted file mode 100644 index dc944e14e4a1df89fa48a3011510996f9fafb95e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/700.txt +++ /dev/null @@ -1,5 +0,0 @@ -A graph neural network (GNN) is a class of neural networks for processing data represented by graph data structures. They were popularized by their use in supervised learning on properties of various molecules. - -Since their inception, several variants of the simple message passing neural network (MPNN) framework have been proposed. These models optimize GNNs for use on larger graphs and apply them to domains such as social networks, citation networks, and online communities. - -It has been mathematically proven that GNNs are a weak form of the Weisfeiler–Lehman graph isomorphism test, so any GNN model is at most as powerful as this test.
There is now growing interest in uniting GNNs with other so-called "geometric deep learning models" to better understand how and why these models work. diff --git a/wiki/wikipedia/701.txt b/wiki/wikipedia/701.txt deleted file mode 100644 index 6e4dee0e178b6dbea03501b7d06ae552ba710d74..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/701.txt +++ /dev/null @@ -1,91 +0,0 @@ -In mathematics, the Littlewood subordination theorem, proved by J. E. Littlewood in 1925, is a theorem in operator theory and complex analysis. It states that any holomorphic univalent self-mapping of the unit disk in the complex numbers that fixes 0 induces a contractive composition operator on various function spaces of holomorphic functions on the disk. These spaces include the Hardy spaces, the Bergman spaces and the Dirichlet space. - -Let h be a holomorphic univalent mapping of the unit disk D into itself such that h(0) = 0. Then the composition operator $C_h$ defined on holomorphic functions f on D by -$$ -C_h(f) = f\circ h -$$ - -defines a linear operator with operator norm at most 1 on the Hardy spaces $H^p(D)$ and the Bergman spaces $A^p(D)$ (1 ≤ p < ∞), and on the Dirichlet space $\mathcal{D}(D)$. - -The norms on these spaces are defined by: -$$ - \|f\|_{H^p}^p = \sup_r {1\over 2\pi}\int_0^{2\pi} |f(re^{i\theta})|^p d\theta -$$ -$$ - \|f\|_{A^p}^p = {1\over \pi} \iint_D |f(z)|^p dxdy -$$ -$$ - \|f\|_{\mathcal D}^2 = {1\over \pi} \iint_D |f^\prime(z)|^2 dxdy= {1\over 4 \pi} \iint_D |\partial_x f|^2 + |\partial_y f|^2 dxdy -$$ - -Let f be a holomorphic function on the unit disk D and let h be a holomorphic univalent mapping of D into itself with h(0) = 0. Then, if 0 < r < 1 and 1 ≤ p < ∞, -$$ -\int_0^{2\pi} |f(h(re^{i\theta}))|^p d\theta \le \int_0^{2\pi} |f(re^{i\theta})|^p d\theta. -$$ - -This inequality also holds for 0 < p < 1, although in this case there is no operator interpretation. - -To prove the result for $H^2$ it suffices to show that for f a polynomial -$$ -\displaystyle{\|C_h f\|^2 \le \|f\|^2.} -$$ - -Let U be the unilateral shift defined by -$$ - \displaystyle{Uf(z)= zf(z)}. -$$ - -This has adjoint U* given by -$$ - U^*f(z) ={f(z)-f(0)\over z}. -$$ - -Since $f(0) = a_0$, this gives -$$ - f= a_0 + zU^*f -$$ - -and hence -$$ - C_h f = a_0 + h C_hU^*f. -$$ - -Thus -$$ - \|C_h f\|^2 = |a_0|^2 + \|hC_hU^*f\|^2 \le |a_0|^2+ \|C_h U^*f\|^2. -$$ - -Since U*f has degree less than f, it follows by induction that -$$ -\|C_h U^*f\|^2 \le \|U^*f\|^2 = \|f\|^2 - |a_0|^2, -$$ - -and hence -$$ -\|C_h f\|^2 \le \|f\|^2. -$$ - -The same method of proof works for $A^2$ and $\mathcal D.$ - -If f is in the Hardy space $H^p$, then it has a factorization -$$ - f(z) = f_i(z)f_o(z) -$$ - -with $f_i$ an inner function and $f_o$ an outer function. - -Then -$$ - \|C_h f\|_{H^p} = \|(C_hf_i) (C_h f_o)\|_{H^p} \le \|C_h f_o\|_{H^p} \le \|C_h f_o^{p/2}\|_{H^2}^{2/p} \le \|f\|_{H^p}. -$$ - -Taking 0 < r < 1, Littlewood's inequalities follow by applying the Hardy space inequalities to the function -$$ - f_r(z)=f(rz). -$$ - -The inequalities can also be deduced, following Riesz, using subharmonic functions. The inequalities in turn immediately imply the subordination theorem for general Bergman spaces.
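The subordination inequality above is easy to test numerically. The following is a minimal Python sketch (our addition, not part of the deleted article) that checks $\int_0^{2\pi} |f(h(re^{i\theta}))|^p d\theta \le \int_0^{2\pi} |f(re^{i\theta})|^p d\theta$ for one sample polynomial f and the Möbius self-map h(z) = z/(2 − z), which fixes 0; the specific choices of f, h, p and grid sizes are ours.

```python
import numpy as np

# Sample data: a polynomial f, and a univalent self-map h of the unit disk
# with h(0) = 0 (here a Mobius map; any such h should work).
f = lambda z: 1 + 2*z - 3*z**2 + 0.5j*z**3
h = lambda z: z / (2 - z)              # maps the unit disk into itself, h(0) = 0

p = 2.0
theta = np.linspace(0, 2*np.pi, 4096, endpoint=False)

for r in (0.3, 0.6, 0.9, 0.99):
    z = r * np.exp(1j * theta)
    lhs = np.mean(np.abs(f(h(z)))**p)  # (1/2pi) * integral of |f(h(re^it))|^p
    rhs = np.mean(np.abs(f(z))**p)     # (1/2pi) * integral of |f(re^it)|^p
    assert lhs <= rhs + 1e-12, (r, lhs, rhs)
    print(f"r={r:5}:  {lhs:.6f} <= {rhs:.6f}")
```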
diff --git a/wiki/wikipedia/702.txt b/wiki/wikipedia/702.txt deleted file mode 100644 index d7d624167eb7efa066410acdce3e8db290129b1b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/702.txt +++ /dev/null @@ -1,23 +0,0 @@ -In mathematics, Lebesgue's density theorem states that for any Lebesgue measurable set $A\subset \R^n$, the "density" of A is 0 or 1 at almost every point in $\R^n$. Additionally, the "density" of A is 1 at almost every point in A. Intuitively, this means that the "edge" of A, the set of points in A whose "neighborhood" is partially in A and partially outside of A, is negligible. - -Let μ be the Lebesgue measure on the Euclidean space $\R^n$ and A be a Lebesgue measurable subset of $\R^n$. Define the approximate density of A in an ε-neighborhood of a point x in $\R^n$ as -$$ - d_\varepsilon(x)=\frac{\mu(A\cap B_\varepsilon(x))}{\mu(B_\varepsilon(x))} -$$ - -where $B_\varepsilon(x)$ denotes the closed ball of radius ε centered at x. - -Lebesgue's density theorem asserts that for almost every point x of A the density -$$ - d(x)=\lim_{\varepsilon\to 0} d_{\varepsilon}(x) -$$ - -exists and is equal to 0 or 1. - -In other words, for every measurable set A, the density of A is 0 or 1 almost everywhere in $\R^n$. However, if $\mu(A) > 0$ and $\mu(\R^n \setminus A) > 0$, then there are always points of $\R^n$ where the density is neither 0 nor 1. - -For example, given a square in the plane, the density at every point inside the square is 1, on the edges is 1/2, and at the corners is 1/4. The set of points in the plane at which the density is neither 0 nor 1 is non-empty (the square boundary), but it is negligible. - -The Lebesgue density theorem is a particular case of the Lebesgue differentiation theorem. - -Thus, this theorem is also true for every finite Borel measure on $\R^n$ instead of the Lebesgue measure.
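The square example is easy to illustrate by simulation. The following Python sketch (our addition, not part of the deleted article) estimates $d_\varepsilon$ at an interior point, an edge point, and a corner of the unit square by Monte Carlo sampling of a small ball; the estimates should approach 1, 1/2, and 1/4. All names here are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
# Membership test for A = the unit square [0,1] x [0,1].
in_A = lambda p: (0 <= p[:, 0]) & (p[:, 0] <= 1) & (0 <= p[:, 1]) & (p[:, 1] <= 1)

def density_estimate(x, eps, n=200_000):
    # Sample uniformly from the ball B_eps(x) by rejection from a square.
    q = rng.uniform(-eps, eps, size=(n, 2))
    q = q[np.hypot(q[:, 0], q[:, 1]) <= eps] + x
    return in_A(q).mean()               # fraction of the ball lying in A

for name, x in [("interior", (0.5, 0.5)), ("edge", (0.5, 0.0)), ("corner", (0.0, 0.0))]:
    print(name, density_estimate(np.array(x), eps=1e-3))
```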
diff --git a/wiki/wikipedia/703.txt b/wiki/wikipedia/703.txt deleted file mode 100644 index aea9f79ac025ab003f89126c4d1ca9447c9b8328..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/703.txt +++ /dev/null @@ -1,63 +0,0 @@ -In mathematics, reduction refers to the rewriting of an expression into a simpler form. For example, the process of rewriting a fraction into one with the smallest whole-number denominator possible (while keeping the numerator a whole number) is called "reducing a fraction". Rewriting a radical (or "root") expression with the smallest possible whole number under the radical symbol is called "reducing a radical". Minimizing the number of radicals that appear underneath other radicals in an expression is called denesting radicals. - -In linear algebra, reduction refers to applying simple rules to a series of equations or matrices to change them into a simpler form. In the case of matrices, the process involves manipulating either the rows or the columns of the matrix and so is usually referred to as row-reduction or column-reduction, respectively. Often the aim of reduction is to transform a matrix into its "row-reduced echelon form" or "row-echelon form"; this is the goal of Gaussian elimination. - -In calculus, reduction refers to using the technique of integration by parts to evaluate integrals by reducing them to simpler forms. - -In dynamic analysis, static reduction refers to reducing the number of degrees of freedom. Static reduction can also be used in finite element analysis to refer to simplification of a linear algebraic problem. Since a static reduction requires several inversion steps, it is an expensive matrix operation and is prone to some error in the solution. Consider the following system of linear equations in an FEA problem: -$$ -\begin{bmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} F_1 \\ F_2 \end{bmatrix} -$$ - -where K and F are known and K, x and F are divided into submatrices as shown above. If $F_2$ contains only zeros, and only $x_1$ is desired, K can be reduced to yield the following system of equations -$$ -\begin{bmatrix} K_{11,\text{reduced}} \end{bmatrix}\begin{bmatrix} x_1 \end{bmatrix} = \begin{bmatrix} F_1 \end{bmatrix} -$$ - -$K_{11,\text{reduced}}$ is obtained by writing out the set of equations as follows: -$$ -K_{11}x_1 + K_{12}x_2 = F_1 \qquad (1) -$$ -$$ -K_{21}x_1 + K_{22}x_2 = 0 \qquad (2) -$$ - -Equation (2) can be solved for $x_2$ (assuming invertibility of $K_{22}$): -$$ --K_{22}^{-1} K_{21} x_1 = x_2. -$$ - -And substituting into (1) gives -$$ -K_{11}x_1 - K_{12} K_{22}^{-1} K_{21} x_1 = F_1. -$$ - -Thus -$$ -K_{11,\text{reduced}} = K_{11} - K_{12} K_{22}^{-1} K_{21}. -$$ - -In a similar fashion, any row or column i of F with a zero value may be eliminated if the corresponding value of $x_i$ is not desired. A reduced K may be reduced again. As a note, since each reduction requires an inversion, and each inversion is an operation with computational cost $O(n^3)$, most large matrices are pre-processed to reduce calculation time.
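To make the elimination concrete, here is a small Python sketch (our addition; the matrix values are made up for illustration) that forms $K_{11,\text{reduced}} = K_{11} - K_{12} K_{22}^{-1} K_{21}$ and checks that solving the reduced system reproduces $x_1$ from the full system.

```python
import numpy as np

# Made-up symmetric positive definite "stiffness" matrix, partitioned 2+2.
K = np.array([[4.0, 1.0, 0.5, 0.0],
              [1.0, 3.0, 0.0, 0.5],
              [0.5, 0.0, 2.0, 0.3],
              [0.0, 0.5, 0.3, 2.0]])
K11, K12 = K[:2, :2], K[:2, 2:]
K21, K22 = K[2:, :2], K[2:, 2:]
F1 = np.array([1.0, -2.0])               # loads on retained DOFs; F2 = 0

# Static condensation: K11_red = K11 - K12 K22^{-1} K21.
K11_red = K11 - K12 @ np.linalg.solve(K22, K21)
x1_reduced = np.linalg.solve(K11_red, F1)

# Compare with solving the full system with F = [F1, 0].
x_full = np.linalg.solve(K, np.concatenate([F1, np.zeros(2)]))
assert np.allclose(x1_reduced, x_full[:2])
print(x1_reduced)
```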
In the 9th century, Persian mathematician Al-Khwarizmi's Al-Jabr introduced the fundamental concepts of "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation and the cancellation of like terms on opposite sides of the equation. This is the operation which Al-Khwarizmi originally described as al-jabr. The name "algebra" comes from the "al-jabr" in the title of his book. diff --git a/wiki/wikipedia/704.txt b/wiki/wikipedia/704.txt deleted file mode 100644 index 8d78dca7d867c17a469b8826c974fadd98bcf45b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/704.txt +++ /dev/null @@ -1,25 +0,0 @@ -In the mathematics of circle packing, a Doyle spiral is a pattern of non-crossing circles in the plane, each tangent to six others. The sequences of circles linked to each other through opposite points of tangency lie on logarithmic spirals (or, in degenerate cases, circles or lines) having, in general, three different shapes of spirals. - -These patterns are named after mathematician Peter G. Doyle, who made an important contribution to their mathematical construction in the late 1980s or early 1990s. However, their study in phyllotaxis (the mathematics of plant growth) dates back to the early 20th century. - -The precise shape of any Doyle spiral can be parameterized by a pair of natural numbers describing the number of spiral arms for each of the three ways of grouping circles by their opposite points of tangency. If the numbers of arms of two of the three types of spiral arm are $p$ and $q$, with $p < q$ and with fewer than $q$ arms of the third type, then the number of arms of the third type is necessarily $q-p$. As special cases of this formula, when $p=q$ the arms of the third type degenerate to circles, and there are infinitely many of them. And when $p=q/2$ the two types of arms with the smaller number $p$ of copies are mirror reflections of each other and the arms with $q$ copies degenerate to straight lines. For example, in the illustration shown, there are eight spiral arms with the same shape as the shaded arm, another eight spiral arms with the mirror reflected shape, and sixteen radial lines of circles, so this spiral can be parameterized as $p=8$, $q=16$. - -Alternatively, the Doyle spiral can be parameterized by a pair of real numbers $a$ and $b$ describing the relative sizes of the circles. Peter Doyle observed that, when a unit circle is surrounded by six other circles with radii $a$, $b$, $b/a$, $1/a$, $1/b$, and $a/b$, then these six surrounding circles close up to form a ring of mutually tangent circles, all tangent to the central unit circle. The Doyle spiral can then be constructed by using the same relative radii for rings of six circles surrounding each previously-constructed circle. The resulting system of circles closes up on itself to form a non-crossing Doyle spiral of circles in the plane only for certain special pairs of numbers $a$ and $b$, which can be found from the integer parameters $p$ and $q$ by a numerical search. When $(a,b)$ is not one of these special pairs, the resulting system of circles still consists of spiral arms all wrapping around a central point, but with a rotation angle around that central point that is not an integer fraction of $2\pi$, causing them to overlap non-locally. The two real parameters can also be combined into a single complex number, interpreting the plane in which the circles are drawn as the complex plane. The parameters $(a,b)$ associated with a Doyle spiral must be algebraic numbers. - -Coxeter's loxodromic sequence of tangent circles is a Doyle spiral with parameters $p=1$ and $q=3$ or with $a=\varphi + \sqrt{\varphi}$ and $b=a^3$, where $\varphi$ denotes the golden ratio. Within the single spiral arm of tightest curvature, the circles form a sequence whose radii are powers of $a$, in which every four consecutive circles in the sequence are tangent. - -The standard hexagonal packing of the plane by unit circles can also be interpreted as a degenerate special case of the Doyle spiral, the case obtained by using the parameters $a=b=1$. Unlike other Doyle spirals, it has no central limit point. - -The Doyle spirals form a discrete analogue of the exponential function. Spirals of tangent circles have been used to study Kleinian groups. - -Spirals of tangent circles, often with Fibonacci numbers of arms, have been used to model phyllotaxis, the spiral growth patterns characteristic of certain plant species, beginning with the work of Gerrit van Iterson in 1907. In this application, a single spiral of circles may be called a parastichy and the parameters $p$ and $q$ of the Doyle spiral may be called parastichy numbers. The difference $q-p$ is also a parastichy number (if nonzero), the number of parastichies of the third type. When the two parastichy numbers $p$ and $q$ are either consecutive Fibonacci numbers, or Fibonacci numbers that are one step apart from each other in the sequence of Fibonacci numbers, then the third parastichy number will also be a Fibonacci number. For modeling plant growth in this way, spiral packings of tangent circles on surfaces other than the plane, including cylinders and cones, may also be used. - -Spiral packings of circles have also been studied as a decorative motif in architectural design.
- -The Doyle spirals (and the hexagonal packing of the plane) are the only possible "coherent hexagonal circle packings" in the plane, where "coherent" means that no two circles overlap and "hexagonal" means that each circle is tangent to six others that surround it by a ring of tangent circles. Applying a Möbius transformation to a Doyle spiral can produce a related pattern of non-crossing tangent circles, each tangent to six others, with a double-spiral pattern in which the connected sequences of circles spiral out of one center point and into another; however, some circles in this pattern will not be surrounded by their six neighboring circles. - -Additional patterns are possible with six circles surrounding each interior circle but only covering a partial subset of the plane and with circles on the boundary of that region not completely surrounded by other circles. It is also possible to form spiral patterns of tangent circles whose local structure resembles a square grid rather than a hexagonal grid, or to continuously transform these patterns into Doyle packings or vice versa. However, the space of realizations of locally-square spiral packings is infinite-dimensional, unlike the Doyle spirals, which are determined by only a constant number of parameters. - -It is also possible to describe spiraling systems of overlapping circles that cover the plane, rather than non-crossing circles that pack the plane, with each point of the plane covered by at most two circles except for points where three circles meet at $60^\circ$ angles, and with each circle surrounded by six others. These have many properties in common with the Doyle spirals. - -The Doyle spiral, in which the circle centers lie on logarithmic spirals and their radii increase geometrically in proportion to their distance from the central limit point, should be distinguished from a different spiral pattern of disjoint but non-tangent unit circles, also resembling certain forms of plant growth such as the seed heads of sunflowers. This different pattern can be obtained by placing the centers of unit circles on an appropriately scaled Fermat's spiral, at angular offsets of $2\pi/\varphi$ from each other relative to the center of the spiral, where again $\varphi$ is the golden ratio. diff --git a/wiki/wikipedia/705.txt b/wiki/wikipedia/705.txt deleted file mode 100644 index b719bb0596e24ba3f31e807fde329eb6f54962f8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/705.txt +++ /dev/null @@ -1,17 +0,0 @@ -In Euclidean geometry, the Droz-Farny line theorem is a property of two perpendicular lines through the orthocenter of an arbitrary triangle. - -Let $T$ be a triangle with vertices $A$, $B$, and $C$, and let $H$ be its orthocenter (the common point of its three altitude lines). Let $L_1$ and $L_2$ be any two mutually perpendicular lines through $H$. Let $A_1$, $B_1$, and $C_1$ be the points where $L_1$ intersects the side lines $BC$, $CA$, and $AB$, respectively. Similarly, let $A_2$, $B_2$, and $C_2$ be the points where $L_2$ intersects those side lines. The Droz-Farny line theorem says that the midpoints of the three segments $A_1A_2$, $B_1B_2$, and $C_1C_2$ are collinear. - -The theorem was stated by Arnold Droz-Farny in 1899, but it is not clear whether he had a proof. - -A generalization of the Droz-Farny line theorem was proved in 1930 by René Goormaghtigh. - -As above, let $T$ be a triangle with vertices $A$, $B$, and $C$.
Let $P$ be any point distinct from $A$, $B$, and $C$, and $L$ be any line through $P$. Let $A_1$, $B_1$, and $C_1$ be points on the side lines $BC$, $CA$, and $AB$, respectively, such that the lines $PA_1$, $PB_1$, and $PC_1$ are the images of the lines $PA$, $PB$, and $PC$, respectively, by reflection against the line $L$. Goormaghtigh's theorem then says that the points $A_1$, $B_1$, and $C_1$ are collinear. - -The Droz-Farny line theorem is a special case of this result, when $P$ is the orthocenter of triangle $T$. - -The theorem was further generalized by Dao Thanh Oai. The generalization is as follows: - -First generalization: Let ABC be a triangle and P a point in the plane, and let AA', BB', CC' be three parallel segments such that their midpoints and P are collinear. Then PA', PB', PC' meet BC, CA, AB respectively at three collinear points. - -Second generalization: Let S be a conic and P a point in the plane. Construct three lines da, db, dc through P such that they meet the conic at A, A'; B, B'; C, C' respectively. Let D be a point on the polar of P with respect to S, or let D lie on the conic S. Let $DA' \cap BC = A_0$; $DB' \cap AC = B_0$; $DC' \cap AB = C_0$. Then $A_0$, $B_0$, $C_0$ are collinear.
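The original Droz-Farny statement above is easy to sanity-check with coordinates. Below is a short Python sketch (our addition; all helper names are ours) that builds a random triangle, takes two perpendicular lines through its orthocenter, computes the three midpoints, and verifies their collinearity numerically.

```python
import numpy as np

rng = np.random.default_rng(1)

def meet(p, u, q, v):
    # Intersection of the lines p + t*u and q + s*v in the plane.
    t, _ = np.linalg.solve(np.column_stack([u, -v]), q - p)
    return p + t * u

A, B, C = rng.uniform(-1.0, 1.0, size=(3, 2))

# Orthocenter H: the altitude from A is perpendicular to BC, and from B to CA.
M = np.array([C - B, C - A])
H = np.linalg.solve(M, np.array([(C - B) @ A, (C - A) @ B]))

d1 = np.array([np.cos(0.7), np.sin(0.7)])   # direction of L1 (arbitrary)
d2 = np.array([-d1[1], d1[0]])              # direction of L2, perpendicular to L1

mids = []
for P, Q in [(B, C), (C, A), (A, B)]:       # the three side lines
    X1 = meet(H, d1, P, Q - P)              # L1 meets this side line
    X2 = meet(H, d2, P, Q - P)              # L2 meets this side line
    mids.append((X1 + X2) / 2)

m0, m1, m2 = mids
u, w = m1 - m0, m2 - m0
print("collinearity defect:", u[0] * w[1] - u[1] * w[0])   # ~0 up to rounding
```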
diff --git a/wiki/wikipedia/706.txt b/wiki/wikipedia/706.txt deleted file mode 100644 index 6afdc2600f51c9da817b336484584b3140fd359c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/706.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, the Manin–Drinfeld theorem, proved by Yuri Manin and Vladimir Drinfeld, states that the difference of two cusps of a modular curve has finite order in the Jacobian variety. diff --git a/wiki/wikipedia/707.txt b/wiki/wikipedia/707.txt deleted file mode 100644 index c67c7f9c8f4b54f736c9cfbfefdf11da3c264e37..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/707.txt +++ /dev/null @@ -1,50 +0,0 @@ -In number theory, the Bateman–Horn conjecture is a statement concerning the frequency of prime numbers among the values of a system of polynomials, named after mathematicians Paul T. Bateman and Roger A. Horn who proposed it in 1962. It provides a vast generalization of such conjectures as the Hardy and Littlewood conjecture on the density of twin primes or their conjecture on primes of the form $n^2 + 1$; it is also a strengthening of Schinzel's hypothesis H. - -The Bateman–Horn conjecture provides a conjectured density for the positive integers at which a given set of polynomials all have prime values. For a set of m distinct irreducible polynomials $f_1, \ldots, f_m$ with integer coefficients, an obvious necessary condition for the polynomials to simultaneously generate prime values infinitely often is that they satisfy Bunyakovsky's property, that there does not exist a prime number p that divides their product f(n) for every positive integer n. For, if there were such a prime p, then having all values of the polynomials simultaneously prime for a given n would imply that at least one of them must be equal to p, which can happen for only finitely many values of n (otherwise some polynomial would take the value p infinitely often and hence have infinitely many roots), whereas the conjecture asks for conditions under which the values are simultaneously prime for infinitely many n. - -An integer n is prime-generating for the given system of polynomials if every polynomial $f_i(n)$ produces a prime number when given n as its argument. If P(x) is the number of prime-generating integers among the positive integers less than x, then the Bateman–Horn conjecture states that -$$ -P(x) \sim \frac{C}{D} \int_2^x \frac{dt}{(\log t)^m}, -$$ - -where D is the product of the degrees of the polynomials and where C is the product over primes p -$$ -C = \prod_p \frac{1-N(p)/p}{(1-1/p)^m}\ -$$ - -with $N(p)$ the number of solutions to -$$ -f(n) \equiv 0 \pmod p.\ -$$ - -Bunyakovsky's property implies $N(p) < p$ for all primes p, so each factor in the infinite product C is positive. - -Intuitively one then naturally expects that the constant C is itself positive, and with some work this can be proved. - -(Some work is needed, since an infinite product of positive numbers can equal zero.) - -As stated above, the conjecture is not true: the single polynomial $f_1(x) = -x$ produces only negative numbers when given a positive argument, so the fraction of prime numbers among its values is always zero. There are two equally valid ways of refining the conjecture to avoid this difficulty: - -*One may require all the polynomials to have positive leading coefficients, so that only a constant number of their values can be negative. - -*Alternatively, one may allow negative leading coefficients but count a negative number as being prime when its absolute value is prime. - -It is reasonable to allow negative numbers to count as primes as a step towards formulating more general conjectures that apply to other systems of numbers than the integers, but at the same time it is easy to just negate the polynomials if necessary to reduce to the case where the leading coefficients are positive. - -If the system of polynomials consists of the single polynomial $f_1(x) = x$, then the values n for which $f_1(n)$ is prime are themselves the prime numbers, and the conjecture becomes a restatement of the prime number theorem. - -If the system of polynomials consists of the two polynomials $f_1(x) = x$ and $f_2(x) = x + 2$, then the values of n for which both $f_1(n)$ and $f_2(n)$ are prime are just the smaller of the two primes in every pair of twin primes. In this case, the Bateman–Horn conjecture reduces to the Hardy–Littlewood conjecture on the density of twin primes, according to which the number of twin prime pairs less than x is -$$ -\pi_2(x) \sim 2 \prod_{p\ge 3} \frac{p(p-2)}{(p-1)^2}\frac{x}{(\log x)^2 } \approx 1.32 \frac {x}{(\log x)^2}. -$$ - -When the integers are replaced by the polynomial ring F[u] for a finite field F, one can ask how often a finite set of polynomials $f_i(x)$ in F[u][x] simultaneously takes irreducible values in F[u] when we substitute for x elements of F[u]. Well-known analogies between integers and F[u] suggest an analogue of the Bateman–Horn conjecture over F[u], but the analogue is wrong. For example, data suggest that the polynomial -$$ -x^3 + u -$$ - -in $F_3[u][x]$ takes (asymptotically) the expected number of irreducible values when x runs over polynomials in $F_3[u]$ of odd degree, but it appears to take (asymptotically) twice as many irreducible values as expected when x runs over polynomials of degree that is 2 mod 4, while it (provably) takes no irreducible values at all when x runs over nonconstant polynomials with degree that is a multiple of 4. An analogue of the Bateman–Horn conjecture over F[u] which fits numerical data uses an additional factor in the asymptotics which depends on the value of d mod 4, where d is the degree of the polynomials in F[u] over which x is sampled.
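For the twin-prime instance, the prediction is easy to test empirically. The following Python sketch (our addition, not part of the deleted article) counts twin-prime pairs below x with a sieve and compares the count against both the displayed asymptotic $\approx 1.32\, x/(\log x)^2$ and the integral form of the conjecture; the constant is truncated to primes below $10^6$, which is ample for a few digits.

```python
import numpy as np

def primes_up_to(n):
    sieve = np.ones(n + 1, dtype=bool)
    sieve[:2] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = False
    return np.flatnonzero(sieve)

# Twin-prime constant: 2 * prod over odd primes of p(p-2)/(p-1)^2 (truncated).
p = primes_up_to(10**6)[1:].astype(float)           # odd primes
C = 2 * np.prod(p * (p - 2) / (p - 1) ** 2)

x = 10**7
pr = primes_up_to(x)
twins = int(np.count_nonzero(np.diff(pr) == 2))     # pairs (p, p+2), both prime

t = np.linspace(2.0, float(x), 1_000_001)           # trapezoid rule for the integral
y = np.log(t) ** -2.0
integral = float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(t)))

print(f"pi_2({x}) = {twins}")
print(f"C = {C:.5f}, C*x/(log x)^2 = {C * x / np.log(x)**2:.0f}, C*integral = {C * integral:.0f}")
```

The simple $x/(\log x)^2$ form converges slowly; the integral form tracks the actual count much more closely.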
diff --git a/wiki/wikipedia/708.txt b/wiki/wikipedia/708.txt deleted file mode 100644 index 971b53975537388e30159a20cf3ec1e10005b8ad..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/708.txt +++ /dev/null @@ -1,42 +0,0 @@ -#redirect Hardy–Littlewood tauberian theorem diff --git a/wiki/wikipedia/709.txt b/wiki/wikipedia/709.txt deleted file mode 100644 index 62c6a58794a044a782fb5f299755c1c4e1007002..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/709.txt +++ /dev/null @@ -1,30 +0,0 @@ -In mathematics, a Schur-convex function, also known as S-convex, isotonic function and order-preserving function, is a function $f: \mathbb{R}^d\rightarrow \mathbb{R}$ such that for all $x,y\in \mathbb{R}^d$ with $x$ majorized by $y$, one has $f(x)\le f(y)$. Named after Issai Schur, Schur-convex functions are used in the study of majorization. Every function that is convex and symmetric is also Schur-convex. The opposite implication is not true, but all Schur-convex functions are symmetric (under permutations of the arguments). - -A function f is 'Schur-concave' if its negative, -f, is Schur-convex. - -If f is symmetric and all first partial derivatives exist, then f is Schur-convex if and only if -$$ -(x_i - x_j)\left(\frac{\partial f}{\partial x_i} - \frac{\partial f}{\partial x_j}\right) \ge 0 -$$ - -holds for all $x \in \mathbb{R}^d$ and all $1 \le i \ne j \le d$. - -* $ f(x)=\min(x) $ is Schur-concave while $ f(x)=\max(x) $ is Schur-convex. This can be seen directly from the definition. - -* The Shannon entropy function $\sum_{i=1}^d{P_i \cdot \log_2{\frac{1}{P_i}}}$ is Schur-concave. - -* The Rényi entropy function is also Schur-concave. - -* $ \sum_{i=1}^d{x_i^k},k \ge 1 $ is Schur-convex. - -* The function $ f(x) = \prod_{i=1}^d x_i $ is Schur-concave, when we assume all $ x_i > 0 $. In the same way, all the elementary symmetric functions are Schur-concave, when $ x_i > 0 $. - -* A natural interpretation of majorization is that if $ x \succ y $ then $ x $ is more spread out than $ y $. So it is natural to ask if statistical measures of variability are Schur-convex. The variance and standard deviation are Schur-convex functions, while the median absolute deviation is not. - -* If $ g $ is a convex function defined on a real interval, then $ \sum_{i=1}^n g(x_i) $ is Schur-convex. - -* A probability example: If $ X_1, \dots, X_n $ are exchangeable random variables, then the function $ \text{E} \prod_{j=1}^n X_j^{a_j} $ is Schur-convex as a function of $ a=(a_1, \dots, a_n) $, assuming that the expectations exist. - -* The Gini coefficient is strictly Schur-convex.
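As a quick illustration of the variance example, the sketch below (our addition) generates pairs with $x$ majorized by $y$ — using the standard fact that $x = Dy$ for a doubly stochastic matrix $D$ implies $x$ is majorized by $y$ — and checks that the variance never increases when passing from $y$ to $x$, as Schur-convexity requires. The construction of $D$ as a mixture of permutation matrices is ours.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_doubly_stochastic(d, k=10):
    # Convex combination of k random permutation matrices (Birkhoff's theorem).
    w = rng.dirichlet(np.ones(k))
    D = np.zeros((d, d))
    for wi in w:
        D += wi * np.eye(d)[rng.permutation(d)]
    return D

for _ in range(1000):
    y = rng.normal(size=6)
    x = random_doubly_stochastic(6) @ y    # x = Dy, so x is majorized by y
    assert x.var() <= y.var() + 1e-12      # variance is Schur-convex
print("variance respected majorization in all trials")
```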
diff --git a/wiki/wikipedia/71.txt b/wiki/wikipedia/71.txt deleted file mode 100644 index caf6eb8b13d9c5954b88e8e50a688712be318372..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/71.txt +++ /dev/null @@ -1,121 +0,0 @@ -In mathematics, the Mohr–Mascheroni theorem states that any geometric construction that can be performed by a compass and straightedge can be performed by a compass alone. - -It must be understood that by "any geometric construction", we are referring to figures that contain no straight lines, as it is clearly impossible to draw a straight line without a straightedge. It is understood that a line is determined provided that two distinct points on that line are given or constructed, even though no visual representation of the line will be present. The theorem can be stated more precisely as: - -Any Euclidean construction, insofar as the given and required elements are points (or circles), may be completed with the compass alone if it can be completed with both the compass and the straightedge together. - -Though the use of a straightedge can make a construction significantly easier, the theorem shows that any set of points that fully defines a constructed figure can be determined with compass alone, and the only reason to use a straightedge is for the aesthetics of seeing straight lines, which for the purposes of construction is functionally unnecessary. - -The result was originally published by Georg Mohr in 1672, but his proof languished in obscurity until 1928. The theorem was independently discovered by Lorenzo Mascheroni in 1797 and it was known as Mascheroni's Theorem until Mohr's work was rediscovered. - -Motivated by Mascheroni's result, in 1822 Jean Victor Poncelet conjectured a variation on the same theme. He proposed that any construction possible by straightedge and compass could be done with straightedge alone. The one stipulation, though, is that a single circle with its center identified must be provided. The Poncelet-Steiner theorem was proved by Jakob Steiner eleven years later. This generalized the results given by Ferrari, Cardano and several others in the 16th century, who demonstrated that all the constructions appearing in Euclid's Elements were possible with a straightedge and a "rusty" (fixed-width) compass. - -To prove the theorem, each of the basic constructions of compass and straightedge needs to be proven to be possible by using a compass alone, as these are the foundations of, or elementary steps for, all other constructions. These are: - -#Creating the line through two existing points - -#Creating the circle through one point with its centre at another point - -#Creating the point which is the intersection of two existing, non-parallel lines - -#Creating the one or two points in the intersection of a line and a circle (if they intersect) - -#Creating the one or two points in the intersection of two circles (if they intersect). - -#1 - A line through two points - -It is understood that a straight line cannot be drawn without a straightedge. A line is considered to be given by any two points, since two distinct points determine a unique line. In keeping with the intent of the theorem which we aim to prove, the actual line need not be drawn, except for aesthetic reasons. This fact will be shown when all other constructions involving the line are proven. - -#2 - A circle through one point with defined center - -This can be done with compass alone quite naturally; it is the very purpose for which compasses are meant. There is nothing to prove. Any doubts about this construction would equally apply to traditional constructions that do involve a straightedge. - -#5 - Intersection of two circles - -This construction can be done directly with a compass provided the centers and radii of the two circles are known.
Due to the compass-only construction of the center of a circle (given below), it can always be assumed that any circle is described by its center and radius. Indeed, some authors include this in their descriptions of the basic constructions. - -#3, #4 - The other constructions - -Thus, to prove the theorem, only compass-only constructions for #3 and #4 need to be given. - -Several proofs of the result are known. Mascheroni's proof of 1797 was generally based on the idea of using reflection in a line as the major tool. Mohr's solution was different. - -# Construct point D, the inverse of C in the circle A(B). - -# Reflect A in the line BD to the point X. - -# O is the inverse of X in the circle A(B). - -* Given non-parallel lines AB and CD, find their point of intersection, X. - -# Select circle O(r) of arbitrary radius whose center O does not lie on either line. - -# Invert points A and B in circle O(r) to points A' and B' respectively. - -# The line AB is inverted to the circle passing through O, A' and B'. Find the center E of this circle. - -# Invert points C and D in circle O(r) to points C' and D' respectively. - -# The line CD is inverted to the circle passing through O, C' and D'. Find the center F of this circle. - -# Let Y ≠ O be the intersection of circles E(O) and F(O). - -# X is the inverse of Y in the circle O(r). - -The compass-only construction of the intersection points of a line and a circle breaks into two cases depending upon whether the center of the circle is or is not collinear with the line. - -Assume that the center of the circle does not lie on the line. - -*Given a circle C(r) (in black) and a line AB. We wish to construct the points of intersection, P and Q, between them (if they exist). - -#Construct the point D, which is the reflection of point C across line AB. (See above.) - -#* Under the assumption of this case, C ≠ D. - -#Construct a circle D(r) (in red). (See above, compass equivalence.) - -# The intersections of circle C(r) and the new red circle D(r) are points P and Q. - -#* If the two circles are (externally) tangential then $P=Q$. - -# Points P and Q are the intersection points of circle C(r) and the line AB. - -#* If $P=Q$ then the line is tangential to the circle $C(r)$. - -An alternate construction, using circle inversion, can also be given. - -*Given a circle C(r) and a line AB. We wish to construct the points of intersection, P and Q, between them (if they exist). - -# Invert points A and B in circle C(r) to points A' and B' respectively. - -#* Under the assumption of this case, points A', B', and C are not collinear. - -# Find the center E of the circle passing through points C, A', and B'. - -# Construct circle E(C), which represents the inversion of the line AB into circle C(r). - -# P and Q are the intersection points of circles C(r) and E(C). - -#* If the two circles are (internally) tangential then $P=Q$, and the line is also tangential. - -* Given the circle C(D) whose center C lies on the line AB, find the points P and Q, the intersection points of the circle and the line. - -# Construct point D' ≠ D as the other intersection of circles A(D) and C(D). - -# Construct point F as the intersection of circles C(DD' ) and D(C). (F is the fourth vertex of parallelogram CD'DF.) - -# Construct point F' as the intersection of circles C(DD' ) and D' (C). (F' is the fourth vertex of parallelogram CDD'F'.) - -# Construct point M as an intersection of circles F(D' ) and F' (D). (M lies on AB.) - -# Points P and Q are the intersections of circles F(CM) and C(D).
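The first (reflection-based) construction for the non-collinear case is easy to verify with coordinates. The Python sketch below (our addition; sample points and helper names are ours) reflects the center C across the line AB, intersects the circles C(r) and D(r), and confirms that the resulting points lie both on the line AB and on the original circle.

```python
import numpy as np

def reflect(P, A, B):
    # Mirror P across the line through A and B.
    u = (B - A) / np.linalg.norm(B - A)
    return 2 * (A + ((P - A) @ u) * u) - P

def circle_circle(C1, r1, C2, r2):
    # The two intersection points of circles C1(r1) and C2(r2),
    # assuming they intersect.
    d = np.linalg.norm(C2 - C1)
    a = (d**2 + r1**2 - r2**2) / (2 * d)
    h = np.sqrt(r1**2 - a**2)
    mid = C1 + a * (C2 - C1) / d
    perp = np.array([C1[1] - C2[1], C2[0] - C1[0]]) / d
    return mid + h * perp, mid - h * perp

A, B = np.array([0.0, 0.0]), np.array([4.0, 1.0])   # the line AB
C, r = np.array([1.0, 2.0]), 2.0                    # the circle C(r)

D = reflect(C, A, B)                                # step 1: reflect the center
for P in circle_circle(C, r, D, r):                 # step 2: intersect C(r), D(r)
    v, w = B - A, P - A
    on_line = abs(v[0] * w[1] - v[1] * w[0]) < 1e-9
    on_circle = abs(np.linalg.norm(P - C) - r) < 1e-9
    print(P, on_line, on_circle)
```

This works because the intersections of C(r) and D(r) are equidistant from C and D, hence lie on the perpendicular bisector of CD, which is exactly the line AB.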
- -Thus it has been shown that all of the basic constructions one can perform with a straightedge and compass can be done with a compass alone, provided that it is understood that a line cannot be literally drawn but merely defined by two points. - -Renaissance mathematicians Lodovico Ferrari and Niccolò Fontana Tartaglia were able to show that any construction could be accomplished with a straightedge and a fixed-width compass (i.e. a rusty compass). - -The Mohr-Mascheroni theorem can be contrasted with the Poncelet–Steiner theorem, which states that any compass and straightedge construction can be performed with only a straightedge, provided that at least one circle with center identified is given in the plane. This reduces Ferrari's rusty compass result to a single use of a compass. - -A proof later provided in 1904 by Francesco Severi relaxes the requirement that one full circle be provided, and shows that any small arc of the circle, so long as the center is still provided, is still sufficient. - -Additionally, the center itself may be omitted instead of portions of the arc, if it is replaced by something else sufficient, such as a second concentric or intersecting circle, or a third circle, or a non-intersecting second circle provided a point on either the centerline or the radical axis between them is given. diff --git a/wiki/wikipedia/710.txt b/wiki/wikipedia/710.txt deleted file mode 100644 index 28df45b7f7071df90eaf2889b19b4ae85de27df7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/710.txt +++ /dev/null @@ -1,154 +0,0 @@ -In computing and graph theory, a dynamic connectivity structure is a data structure that dynamically maintains information about the connected components of a graph. - -The set V of vertices of the graph is fixed, but the set E of edges can change. The three cases, in order of difficulty, are: - -* Edges are only added to the graph (this can be called incremental connectivity); - -* Edges are only deleted from the graph (this can be called decremental connectivity); - -* Edges can be either added or deleted (this can be called fully dynamic connectivity). - -After each addition/deletion of an edge, the dynamic connectivity structure should adapt itself such that it can give quick answers to queries of the form "is there a path between x and y?" (equivalently: "do vertices x and y belong to the same connected component?"). - -If edges can only be added, then the dynamic connectivity problem can be solved by a Disjoint-set data structure. Each set represents a connected component; there is a path between x and y if and only if they belong to the same set. The amortized time per operation is $\Theta(\alpha(n))$, where n is the number of vertices and α is the inverse Ackermann function. - -The case in which edges can only be deleted was solved by Shimon Even and Yossi Shiloach. - -The structure uses a table that specifies, for each vertex, the name of the component to which it belongs. Thus a connectivity query takes constant time. The challenge is to update the table when an edge is deleted. - -When edge u-v is deleted in a forest, the tree containing that edge is broken into two trees: one of them contains u and the other contains v. The table is updated in the following way. - -* Scan the tree starting from u (using any tree scan algorithm, such as DFS). - -* Scan the tree starting from v.
- -* Do the above two procedures in parallel, i.e., either using two parallel processes, or by interleaving their steps (make a step of the first scan, then a step of the second scan, then a step of the first scan, etc.). - -* Suppose the first scan that terminates is the scan from u (so we know that the tree containing u is the smaller one). Assign a new component name to every node in the tree of u. - -Since we always rename the smaller sub-component, the amortized time for a delete operation is $O(\log(n))$. - -When an edge is deleted in a general graph, we don't know whether its component remains a single component (connected by other edges) or is broken into two components. So we use two processes which run in parallel (or in an interleaved way). Process A checks whether the edge deletion breaks a component, and if it does, both processes halt. Process B checks whether the edge deletion does not break the component to which it belongs, and if it does not, again both processes halt. - -;Process A: is similar to the acyclic-graph case: there are two sub-processes which scan from both ends of the deleted edge. If one of the sub-processes finishes before reaching the other end, this means that the component is broken into two sub-components, and the name of the smaller sub-component is updated, as before. Thus the amortized time for a delete operation is again $O(\log(n))$. - -;Process B: uses a breadth-first search structure (BFS), which is initialized as follows. A vertex r is chosen and the BFS starts from it. The only vertex in level 0 is r. All the vertices of distance i from the root are in level i. If G is not connected, a new scan is started at some unscanned vertex v, v is put in level 1, and an artificial edge connects v to the root r; all vertices of distance i from v are now in level i+1, etc. Artificial edges are introduced in order to keep all the connected components in one BFS structure and are used only for this purpose. Clearly, the artificial edges are used only in process B. - -The structure has the following properties. A vertex v in level i, i>0, has only three types of edges: backward edges which connect it to level i-1 (there is at least one such edge, which may be artificial), local edges which connect it to other vertices in level i (there are zero or more such edges), or forward edges which connect it to vertices in level i+1 (there are zero or more such edges). So for each vertex v, we maintain three sets of edges (backward, local and forward). - -When an edge u-v is deleted, there are two options: either u and v are in the same level, or they are in levels whose numbers differ by 1. - -;Case 1: both u and v are on the same level. In this case, the edge deletion cannot change the components. The edge is simply deleted from the sets of local edges of u and v, and process B halts (and therefore process A is halted too). Our BFS structure is still valid. - -;Case 2: u and v are on different levels. Without loss of generality, assume u is in level i-1 and v is in level i; hence the edge should be removed from forward(u) and from backward(v). - -;Case 2.1: If the new backward(v) is not empty, then the components have not changed: there are other edges which connect v backwards. Process B halts (and process A is halted too). - -;Case 2.2: If the new backward(v) is empty, then v is no longer connected to level i-1, and so its distance from the root is no longer i; it must be at least i+1.
Additionally, there may be other vertices, connected to v, whose distance from the root increases as a result of the deletion. To calculate the updated distances, we use a queue Q, which initially contains only the vertex v. - -While Q is not empty: - -# w := dequeue(Q) - -# Remove w from its level (say, j), and put it in the next level (j+1). - -# Update local neighbours: - -#* For each edge w-x in local(w), remove it from local(x) and put it in forward(x). - -#* backward(w) := local(w) - -# Update forward neighbours: - -#* For each edge w-x in forward(w), remove it from backward(x) and put it in local(x); if the new backward(x) is empty, enqueue x on Q. - -#* local(w) := forward(w) - -#* forward(w) := empty set - -# If the new backward(w) is empty, enqueue w again on Q. - -If the edge deletion does not break any component and we are in case 2.2, then eventually the procedure will halt. In this case it is easy to see that the BFS structure is maintained correctly. If its deletion does break a component, then the procedure will not halt by itself. However, process A, recognizing the break, will halt, and both processes will halt. In this case all the changes made in the BFS structure are ignored, and we go back to the BFS structure we had just before the deletion, except that the deleted edge is now replaced by an artificial edge. Clearly, in this case v is now the root of a tree which includes the new component, and perhaps additional components, through some other artificial edges. Also, there are no edges connecting the descendants of v with any vertices which are not v's descendants, except the artificial edge u-v. - -As for the running time: whenever an edge is processed in the procedure, one of its endpoints drops by one level. Since the lowest level a vertex can reach in runs which are terminated by process B is $|V|-1$, the cost per edge is bounded by $2|V|$. Hence the amortized time per deletion operation is $O(n)$. - -A forest can be represented using a collection of either Link-cut trees or Euler tour trees. Then the dynamic connectivity problem can be solved easily, as for every two nodes x, y, x is connected to y if and only if FindRoot(x)=FindRoot(y). The amortized update time and query time are both O(log(n)). - -A general graph can be represented by its spanning forest - a forest which contains a tree for every connected component of the graph. We call this spanning forest F. F itself can be represented by a forest of Euler tour trees. - -The Query and Insert operations are implemented using the corresponding operations on the ET trees representing F. The challenging operation is Delete, and in particular, deleting an edge which is contained in one of the spanning trees of F. This breaks the spanning tree into two trees, but it is possible that there is another edge which connects them. The challenge is to quickly find such a replacement edge, if it exists. This requires a more complex data structure. Several such structures are described below. - -Each edge in the graph is assigned a level. Let $L=\lg n$. The level of each edge inserted to the graph is initialized to L, and may decrease towards 0 during delete operations. - -For each i between 0 and L, define $G_i$ as the subgraph consisting of edges that are at level i or less, and $F_i$ a spanning forest of $G_i$. Our forest F from before is now called $F_L$. We will keep a decreasing sequence of forests $F_L \supseteq \cdots \supseteq F_0$. - -The Query and Insert operations use only the largest forest $F_L$.
The smaller subgraphs are consulted only during a Delete operation, and in particular, deleting an edge which is contained in one of the spanning trees of $F_L$. - -When such an edge e = x-y is deleted, it is first removed from $F_L$ and from all smaller spanning forests to which it belongs, i.e. from every $F_i$ with i ≥ level(e). Then we look for a replacement edge. - -Start with the smallest spanning forest which contained e, namely, $F_i$ with i = level(e). The edge e belongs to a certain tree $T\subseteq F_i$. After the deletion of e, the tree T is broken into two smaller trees: $T_x$ which contains the node x and $T_y$ which contains the node y. An edge of $G_i$ is a replacement edge if and only if it connects a node in $T_x$ with a node in $T_y$. Suppose without loss of generality that $T_x$ is the smaller tree (i.e. contains at most half the nodes of T; we can tell the size of each subtree by an annotation added to the Euler trees). - -We loop over all the edges ε with level i and at least one node in $T_x$: - -* If the other node of ε is in $T_y$, then a replacement edge is found! Add this edge to $F_i$ and to all containing forests up to $F_L$, and finish. The spanning forests are fixed. Notice that in order to pay for this search, we decrease the level of the edges visited during the search. - -* If the other node of ε is in $T_x$, then this is not a replacement edge, and to 'penalize' it for wasting our time, we decrease its level by 1. - -The level of each edge will be decreased at most $\lg n$ times. Why? Because with each decrease, it falls into a tree whose size is at most half the size of its tree in the previous level. So in each level i, the number of nodes in each connected component is at most $2^i$. Hence the level of an edge is always at least 0. - -Each edge whose level is decreased takes $O(\lg n)$ time to find (using the ET tree operations). In total, each inserted edge takes $O(\lg^2 n)$ time until it is deleted, so the amortized time for deletion is $O(\lg^2 n)$. The remaining part of delete also takes $O(\lg^2 n)$ time, since we have to delete the edge from at most $O(\lg n)$ levels, and deleting from each level takes $O(\lg n)$ (again using the ET operations). - -In total, the amortized time per update is $O(\lg^2 n)$. The time per query can be improved to $O(\lg n / \lg \lg n)$. - -However, the worst-case time per update might be $O(n)$. Whether the worst-case time can be improved remained an open question until it was answered in the affirmative by the cutset structure. - -Given a graph G(V,E) and a subset T⊆V, define cutset(T) as the set of edges that connect T with V\T. The cutset structure is a data structure that, without keeping the entire graph in memory, can quickly find an edge in the cutset, if such an edge exists. - -Start by giving a number to each vertex. Suppose there are n vertices; then each vertex can be represented by a number with lg(n) bits. Next, give a number to each edge, which is a concatenation of the numbers of its vertices - a number with 2 lg(n) bits. - -For each vertex v, calculate and keep xor(v), which is the xor of the numbers of all edges adjacent to it. - -Now for each subset T⊆V, it is possible to calculate xor(T) = the xor of the values of all vertices in T. Consider an edge e = u-v which is an internal edge of T (i.e. both u and v are in T). The number of e is included twice in xor(T) - once for u and once for v. Since the xor of every number with itself is 0, e vanishes and does not affect xor(T). Thus, xor(T) is actually the xor of all edges in cutset(T).
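For concreteness, here is a minimal Python sketch of this xor trick (our addition; the encoding with 2·lg n bits per edge follows the description above, and the sample graph is made up):

```python
import functools, operator

n = 16                                  # vertex ids 0..15, lg n = 4 bits each
BITS = (n - 1).bit_length()             # bits per endpoint
enc = lambda u, v: (min(u, v) << BITS) | max(u, v)   # a 2*lg n bit edge number

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 7), (6, 9)]
xor_v = [0] * n
for u, v in edges:                      # xor each edge number into both endpoints
    xor_v[u] ^= enc(u, v)
    xor_v[v] ^= enc(u, v)

def query(T):
    # xor over the vertex set T: internal edges appear twice and cancel,
    # so the result is the xor of the edge numbers in cutset(T).
    s = functools.reduce(operator.xor, (xor_v[v] for v in T), 0)
    if s == 0:
        return None                     # cutset reported empty
    return s >> BITS, s & ((1 << BITS) - 1)   # decode; only meaningful when
                                              # the cutset has exactly one edge

print(query({6}))        # cutset {(6, 9)}         -> decodes correctly to (6, 9)
print(query({0, 1, 2}))  # cutset {(2, 3), (0, 7)} -> a garbage "edge" (failure case)
```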
When a query examines xor(T), there are several options: - -* If xor(T)=0, then we can confidently reply that cutset(T) is empty. - -* If xor(T) is the number of a real edge e, then probably e is the only edge in cutset(T), and we can return e. We can also read the endpoints of e from the number of e by splitting it into the lg(n) leftmost bits and the lg(n) rightmost bits. - -* The third option is that xor(T) is a nonzero number which does not represent a real edge. This can only happen if there are two or more edges in cutset(T), since in that case xor(T) is the xor of several numbers of edges. In this case, we report "failure", since we know that there are edges in the cutset but cannot identify any single edge. - -Our goal now is to handle this third option. - -First, create a sequence of lg(n) levels of the cutset structures, each of which contains about half the edges from the upper level (i.e., for each level, pick each edge from the upper level with probability 1/2). If in the first level xor(T) returns an illegal value, meaning that cutset(T) has two or more edges, then there is a chance that in the next level, which contains fewer edges, xor(T) will return a legal value since cutset(T) will contain a single edge. If xor(T) still returns an illegal value, continue to the next level, etc. Since the number of edges is decreasing, there are two cases: - -* The good case is that we eventually find a level in which cutset(T) contains a single edge; then we return that edge and finish. - -* The bad case is that we eventually find a level in which cutset(T) contains no edges; then we report "failure", since we know that there are edges in the cutset but cannot identify any single edge. - -It is possible to prove that the probability of success is at least 1/9. - -Next, create a collection of C lg(n) independent versions of the level structure, where C is a constant. In each version, do an independent random reduction of edges from level to level. Try each query on each of the versions until one of them succeeds. The probability that all versions fail is at most: -$$ -(1-1/9)^{C \lg{n}} = 2^{- 0.17C \lg{n}} = n^{-0.17 C} -$$ - -By proper selection of C we can make the probability of failure arbitrarily close to 0. - -We can add a cutset structure to a dynamic connectivity structure. - -The Insert and Delete operations on the cutset structure are done in exactly the same way: the edge inserted/deleted is XORed into both its endpoints. - -When an edge is deleted from the spanning forest used for the dynamic connectivity structure, the cutset structure is used to find a replacement edge. - -A single cutset structure requires only O(n lg n) bits of memory - only a single number, with 2 lg n bits, for each of the n vertices. We don't have to keep the edges themselves. For dense graphs, this is much cheaper than keeping the entire graph in memory. - -We have to keep $O(\lg n)$ versions, each of which contains $\lg n$ levels. Hence, the total memory requirement is $O(n \lg^3 n)$. - -The query time is O(polylog(n)) in the worst case. This is in contrast to the level structure, in which the query time is O(polylog(n)) amortized, but the worst-case time is O(n). - -If the order in which edges will be deleted is known ahead of time, then we can solve the dynamic connectivity problem in log(n) per query. If we can maintain a maximum spanning forest in which edges are ordered by their deletion time, we know that when we delete some edge that is in the forest, there is no possible edge that can replace it.
If there were some edge that connects the same two components as the deleted edge does, then this other edge would have been part of the maximum spanning forest instead of the edge we deleted. This makes the delete operation trivial: we simply need to split the tree into its two parts if the edge to delete is part of our forest, or ignore the operation otherwise. - -Adding an edge is slightly more complicated. If we add an edge e from u to v, then if u and v are not connected, this edge will be part of the maximum spanning forest. If they are connected, we want to add u->v to our forest if it can improve our maximum spanning forest. To do this, we need to quickly check what edge has the smallest removal time on the path from u to v. If this edge's removal time comes after e's removal time, then e cannot improve our maximum spanning forest. Otherwise, the other edge should be deleted and replaced with e. - -This requires us to do the following operations: add an edge, cut an edge, and query the minimum edge on a path, all of which can be done rather easily with a link-cut tree in log(n) per operation. diff --git a/wiki/wikipedia/711.txt b/wiki/wikipedia/711.txt deleted file mode 100644 index b719bb0596e24ba3f31e807fde329eb6f54962f8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/711.txt +++ /dev/null @@ -1,5 +0,0 @@ -The traveling purchaser problem (TPP) is an NP-hard problem studied in theoretical computer science. Given a list of marketplaces, the cost of travelling between different marketplaces, and a list of available goods together with the price of each such good at each marketplace, the task is to find, for a given list of articles, the route with the minimum combined cost of purchases and traveling. The traveling salesman problem (TSP) is a special case of this problem. - -The problem can be seen as a generalization of the traveling salesman problem: TSP is the special case in which each article is available at one market only and each market sells only one item. Since TSP is NP-hard, TPP is NP-hard. - -Approaches for solving the traveling purchaser problem include dynamic programming and tabu search algorithms. diff --git a/wiki/wikipedia/712.txt b/wiki/wikipedia/712.txt deleted file mode 100644 index f714f9883d5bb7112d63e7c962fbdac8ab21bd5c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/712.txt +++ /dev/null @@ -1,60 +0,0 @@ -MAXEkSAT is a problem in computational complexity theory that is a maximization version of the Boolean satisfiability problem 3SAT. In MAXEkSAT, the formula is in conjunctive normal form and each clause has exactly k literals, over distinct variables. Such formulas are called k-CNF formulas. The problem is to determine the maximum number of clauses that can be satisfied by a truth assignment to the variables in the clauses. - -We say that an algorithm A provides an α-approximation to MAXEkSAT if, for some fixed positive α less than or equal to 1, and every kCNF formula φ, A can find a truth assignment to the variables of φ that will satisfy at least an α-fraction of the maximum number of satisfiable clauses of φ. - -Because the NP-hard k-SAT problem (for k ≥ 3) is equivalent to determining if the corresponding MAXEkSAT instance has a value equal to the number of clauses, MAXEkSAT must also be NP-hard, meaning that there is no polynomial time algorithm unless P=NP.
A natural next question, then, is that of finding approximate solutions: what's the largest real number α < 1 such that some explicit polynomial-time algorithm always finds a solution of size α·OPT, where OPT is the (potentially hard to find) maximizing assignment. - -There is a simple randomized polynomial-time algorithm that provides a $\textstyle\left(1-\frac{1}{2^k}\right)$-approximation to MAXEkSAT: independently set each variable to true with probability 1/2, otherwise set it to false. - -Any given clause c is unsatisfied only if all of its k constituent literals evaluate to false. Because each literal within a clause has a 1/2 chance of evaluating to true independently of the truth values of the other literals, the probability that they are all false is $\textstyle(\frac{1}{2})^k = \frac{1}{2^k}$. Thus, the probability that c is indeed satisfied is $\textstyle 1-\frac{1}{2^k}$, so the indicator variable $\textstyle 1_c$ (that is 1 if c is true and 0 otherwise) has expectation $\textstyle 1-\frac{1}{2^k}$. By linearity of expectation, the expected sum of the indicator variables over all $\textstyle|C|$ clauses is $\textstyle\left(1-\frac{1}{2^k}\right)|C|$, so we satisfy a $\textstyle \left(1-\frac{1}{2^k}\right)$ fraction of the clauses in expectation. Because the optimal solution can't satisfy more than all $\textstyle |C|$ of the clauses, we have that $\textit{ALG} = \left(1-\frac{1}{2^k}\right)\cdot |C| \geq \left(1-\frac{1}{2^k}\right)\cdot \textit{OPT}$, so the algorithm finds a $\textstyle \geq \left(1-\frac{1}{2^k}\right)$ approximation to the true optimal solution in expectation. - -Despite its high expectation, this algorithm may occasionally stumble upon solutions of value lower than the expectation we computed above. However, over a large number of trials, the average fraction of satisfied clauses will tend towards $\textstyle \left(1-\frac{1}{2^k}\right)$. This implies two things: - -# There must exist an assignment satisfying at least a $\textstyle \left(1-\frac{1}{2^k}\right)$ fraction of the clauses. If there weren't, we could never attain a value this large on average over a large number of trials. - -# If we run the algorithm a large number of times, at least half of the trials (in expectation) will satisfy some $\textstyle (1-\frac{2}{2^k})$ fraction of the clauses. This is because any smaller fraction would bring down the average enough that the algorithm must occasionally satisfy more than 100% of the clauses to get back to its expectation of $\textstyle \left(1-\frac{1}{2^k}\right)$, which cannot happen. Extending this using Markov's inequality, at least some $\textstyle \left(\frac{1}{1+2^k\epsilon}\right)$-fraction of the trials (in expectation) will satisfy at least an $\textstyle \left(1-\frac{1}{2^k}-\epsilon\right)$-fraction of the clauses. Therefore, for any positive $\textstyle \epsilon$, it takes only a polynomial number of random trials until we expect to find an assignment satisfying at least an $\textstyle \left(1-\frac{1}{2^k}-\epsilon\right)$ fraction of the clauses. - -A more robust analysis shows that we will, in fact, satisfy at least a $\textstyle \left(1-\frac{1}{2^k}\right)$-fraction of the clauses a constant fraction of the time (depending only on k), with no loss of $\textstyle \epsilon$. - -While the above algorithm is efficient, it's not obvious how to remove its dependence on randomness. Trying out all possible random assignments is equivalent to the naive brute force approach, so may take exponential time.
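Before turning to derandomization, note that the randomized algorithm just described is only a few lines of code. Below is a Python sketch (our addition) in which a clause is a tuple of nonzero signed integers, with −i denoting the negation of variable i — a common DIMACS-style convention; the function name and example formula are ours.

```python
import random

def random_assignment_maxsat(clauses, n_vars, trials=1000, seed=0):
    """Randomized (1 - 2^-k)-approximation: try uniform random assignments
    and keep the one satisfying the most clauses."""
    rng = random.Random(seed)
    best, best_sat = None, -1
    for _ in range(trials):
        assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # index 1..n
        sat = sum(
            any(assign[lit] if lit > 0 else not assign[-lit] for lit in clause)
            for clause in clauses
        )
        if sat > best_sat:
            best, best_sat = assign, sat
    return best, best_sat

# Example: an E3-CNF formula over 4 variables. A single random assignment
# satisfies 7/8 of the clauses in expectation; the best over many trials
# is usually better.
clauses = [(1, 2, 3), (-1, 2, 4), (1, -3, -4), (-2, -3, 4), (-1, -2, -4)]
_, sat = random_assignment_maxsat(clauses, n_vars=4)
print(f"satisfied {sat} of {len(clauses)} clauses")
```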
One clever way to derandomize the above in polynomial time relies on work in error-correcting codes; it satisfies a $\textstyle \left(1-\frac{1}{2^k}\right)$ fraction of the clauses in time polynomial in the input size (although the exponent depends on k). - -We need one definition and two facts to find the algorithm. -$$ -S\subseteq\{0,1\}^n -$$ is an ℓ-wise independent source if, for a uniformly chosen random (x1, x2, ..., xn) ∈ S, x1, x2, ..., xn are ℓ-wise independent random variables. - -The first fact we need is that an assignment satisfying at least a $\textstyle \left(1-\frac{1}{2^k}\right)$ fraction of the clauses can be found among the elements of any ℓ-wise independent source over n binary variables with ℓ = k: the expectation argument above only uses the independence of the (at most k) literals within a single clause, so it goes through unchanged. This is easier to see once you realize that an ℓ-wise independent source is really just any set of binary vectors over {0, 1}n with the property that all restrictions of those vectors to ℓ co-ordinates must present each of the 2ℓ possible binary combinations an equal number of times. - -Recall that BCH2,m,d is an $ [n=2^m, n-1 -\lceil {d-2}/2\rceil m, d]_2$ linear code. - -The second fact we need is that there exists an ℓ-wise independent source of size $O(n^{\lfloor \ell/2 \rfloor})$, namely the dual of a BCH2,log n,ℓ+1 code, which is a linear code. Since every BCH code can be presented as a polynomial-time computable restriction of a related Reed–Solomon code, which itself is strongly explicit, there is a polynomial-time algorithm for finding such an assignment to the xi's. The proof of this second fact rests on the standard result that the dual of a linear code with minimum distance greater than ℓ is an ℓ-wise independent source. - -The algorithm works by generating BCH2,log n,ℓ+1, computing its dual (which as a set is an ℓ-wise independent source) and treating each element (codeword) of that source as a truth assignment to the n variables in φ. At least one of them will satisfy at least 1 - 2-ℓ of the clauses of φ, whenever φ is in kCNF form with k = ℓ. - -There are many problems related to the satisfiability of conjunctive normal form Boolean formulas. - -* Decision problems: - -** 2SAT - -** 3SAT - -* Optimization problems, where the goal is to maximize the number of clauses satisfied: - -** MAX-SAT, and its weighted version, Weighted MAX-SAT - -** MAX-kSAT, where each clause has exactly k variables: - -*** MAX-2SAT - -*** MAX-3SAT - -*** MAXEkSAT - -** The partial maximum satisfiability problem (PMAX-SAT) asks for the maximum number of clauses which can be satisfied by any assignment of a given subset of clauses. The rest of the clauses must be satisfied. - -** The soft satisfiability problem (soft-SAT), given a set of SAT problems, asks for the maximum number of sets which can be satisfied by any assignment. - -** The minimum satisfiability problem. - -* The MAX-SAT problem can be extended to the case where the variables of the constraint satisfaction problem belong to the set of reals. The problem amounts to finding the smallest q such that the q-relaxed intersection of the constraints is not empty. diff --git a/wiki/wikipedia/713.txt b/wiki/wikipedia/713.txt deleted file mode 100644 index 488d881d30364dff0225d30aa4b76bcd16b28e9d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/713.txt +++ /dev/null @@ -1,41 +0,0 @@ -In computing, the Two Generals' Problem is a thought experiment meant to illustrate the pitfalls and design challenges of attempting to coordinate an action by communicating over an unreliable link. In the experiment, two generals are only able to communicate with one another by sending a messenger through enemy territory. The experiment asks how they might reach an agreement on the time to launch an attack, while knowing that any messenger they send could be captured.
- -It is related to the more general Byzantine Generals Problem and appears often in introductory classes about computer networking (particularly with regard to the Transmission Control Protocol, where it shows that TCP can't guarantee state consistency between endpoints and why this is the case), though it applies to any type of two-party communication where failures of communication are possible. A key concept in epistemic logic, this problem highlights the importance of common knowledge. Some authors also refer to this as the Two Generals' Paradox, the Two Armies Problem, or the Coordinated Attack Problem. The Two Generals' Problem was the first computer communication problem to be proved to be unsolvable. An important consequence of this proof is that generalizations like the Byzantine Generals problem are also unsolvable in the face of arbitrary communication failures, thus providing a base of realistic expectations for any distributed consistency protocols. - -Two armies, each led by a different general, are preparing to attack a fortified city. The armies are encamped near the city, each on its own hill. A valley separates the two hills, and the only way for the two generals to communicate is by sending messengers through the valley. Unfortunately, the valley is occupied by the city's defenders and there's a chance that any given messenger sent through the valley will be captured. - -While the two generals have agreed that they will attack, they haven't agreed upon a time for an attack. It is required that the two generals have their armies attack the city simultaneously to succeed, lest a lone attacking army die trying. They must thus communicate with each other to decide on a time to attack and to agree to attack at that time, and each general must know that the other general knows that they have agreed to the attack plan. Because acknowledgement of message receipt can be lost as easily as the original message, a potentially infinite series of messages is required to come to consensus. - -The thought experiment involves considering how they might go about coming to a consensus. In its simplest form, one general is known to be the leader, decides on the time of the attack, and must communicate this time to the other general. The problem is to come up with algorithms that the generals can use, including sending messages and processing received messages, that can allow them to correctly conclude: - -Yes, we will both attack at the agreed-upon time. - -Granting that it is quite simple for the generals to come to an agreement on the time to attack (i.e. one successful message with a successful acknowledgement), the subtlety of the Two Generals' Problem is in the impossibility of designing algorithms for the generals to use to safely agree to the above statement. - -The first general may start by sending a message "Attack at 0900 on August 4." However, once dispatched, the first general has no idea whether or not the messenger got through. This uncertainty may lead the first general to hesitate to attack due to the risk of being the sole attacker. - -To be sure, the second general may send a confirmation back to the first: "I received your message and will attack at 0900 on August 4." However, the messenger carrying the confirmation could face capture and the second general may hesitate, knowing that the first might hold back without the confirmation.
- -Further confirmations may seem like a solution—let the first general send a second confirmation: "I received your confirmation of the planned attack at 0900 on August 4." However, this new messenger from the first general is liable to be captured, too. Thus it quickly becomes evident that no matter how many rounds of confirmation are made, there is no way to guarantee the requirement that each general be sure the other has agreed to the attack plan. Both generals will always be left wondering whether their last messenger got through. - -Suppose the protocol is deterministic and consists of a sequence of a fixed number of messages, one or more successfully delivered and one or more not, and suppose for contradiction that it gives both generals the shared certainty they need to attack. - -Consider the last such message that was successfully delivered. If that last message had not been successfully delivered, then one general at least (presumably the receiver) would decide not to attack. From the viewpoint of the sender of that last message, however, the sequence of messages sent and delivered is exactly the same as it would have been, had that message been delivered. - -Since the protocol is deterministic, the general sending that last message will still decide to attack. - -We've now created a situation where the suggested protocol leads one general to attack and the other not to attack—contradicting the assumption that the protocol was a solution to the problem. - -A non-deterministic protocol with a potentially variable message count can be compared to an edge-labeled finite tree, where each node in the tree represents a run of the protocol explored up to a specified point. A protocol that terminates before sending any messages is represented by a tree containing only a root node. The edges from a node to each child are labeled with the messages sent in order to reach the child state. Leaf nodes represent points at which the protocol terminates. - -Suppose there exists a non-deterministic protocol P which solves the Two Generals' Problem. Then, by a similar argument to the one used for fixed-length deterministic protocols above, P' must also solve the Two Generals' Problem, where the tree representing P' is obtained from that for P by removing all leaf nodes and the edges leading to them. - -Since P is finite, repeatedly removing leaves in this way eventually leaves only the root, so it follows that the protocol that terminates before sending any messages would also solve the problem. But clearly, it does not. Therefore a nondeterministic protocol that solves the problem cannot exist. - -A pragmatic approach to dealing with the Two Generals' Problem is to use schemes that accept the uncertainty of the communications channel and not attempt to eliminate it, but rather mitigate it to an acceptable degree. For example, the first general could send 100 messengers, anticipating that the probability of all being captured is low. With this approach, the first general will attack no matter what, and the second general will attack if any message is received. Alternatively, the first general could send a stream of messages and the second general could send acknowledgments to each, with each general feeling more comfortable with every message received. As seen in the proof, however, neither can be certain that the attack will be coordinated. There is no algorithm that they can use (e.g. attack if more than four messages are received) that will be certain to prevent one from attacking without the other.
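A quick back-of-the-envelope computation illustrates the "send many messengers" scheme; the capture probability and messenger count below are illustrative assumptions:

```python
# Assume each messenger is captured independently with probability p.
p = 0.5    # chance a single messenger is captured
n = 100    # messengers sent with the same attack time
print(p ** n)      # probability that all n are captured (here 2**-100)
print(1 - p ** n)  # probability that at least one gets through
```

The first general can thus be nearly certain, but, as the proof shows, never entirely certain, that the attack time was delivered.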
Also, the first general can send a marking on each message saying it is message 1, 2, 3 ... of n. This method will allow the second general to know how reliable the channel is and send an appropriate number of messages back to ensure a high probability of at least one message being received. If the channel can be made reliable, then one message will suffice and additional messages do not help; if it cannot, additional messages fundamentally cannot help either, since the last is as likely to get lost as the first. - -Assuming that the generals must sacrifice lives every time a messenger is sent and intercepted, an algorithm can be designed to minimize the number of messengers required to achieve the maximum amount of confidence that the attack is coordinated. To save them from sacrificing hundreds of lives to achieve very high confidence in coordination, the generals could agree to use the absence of messengers as an indication that the general who began the transaction has received at least one confirmation and has promised to attack. Suppose it takes a messenger 1 minute to cross the danger zone; then allowing 200 minutes of silence to pass after confirmations have been received achieves extremely high confidence while not sacrificing messenger lives. In this case messengers are used only in the case where a party has not received the attack time. At the end of 200 minutes, each general can reason: "I have not received an additional message for 200 minutes; either 200 messengers failed to cross the danger zone, or it means the other general has confirmed and committed to the attack and has confidence I will too". - -The Two Generals' Problem and its impossibility proof were first published by E. A. Akkoyunlu, K. Ekanadham, and R. V. Huber in 1975 in "Some Constraints and Trade-offs in the Design of Network Communications", where it is described starting on page 73 in the context of communication between two groups of gangsters. - -This problem was given the name the Two Generals Paradox by Jim Gray in 1978 in "Notes on Data Base Operating Systems" starting on page 465. This reference is widely given as a source for the definition of the problem and the impossibility proof, though both were published previously as mentioned above. diff --git a/wiki/wikipedia/714.txt b/wiki/wikipedia/714.txt deleted file mode 100644 index 42bdd43e088c0ca3b9625893900e2106788dc758..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/714.txt +++ /dev/null @@ -1,62 +0,0 @@ -In mathematics, in particular linear algebra, the matrix determinant lemma computes the determinant of the sum of an invertible matrix A and the dyadic product, uvT, of a column vector u and a row vector vT. - -Suppose A is an invertible square matrix and u, v are column vectors. Then the matrix determinant lemma states that -$$ -\det\left(\mathbf{A} + \mathbf{uv}^\textsf{T}\right) = \left(1 + \mathbf{v}^\textsf{T}\mathbf{A}^{-1}\mathbf{u}\right)\det\left(\mathbf{A}\right). -$$ - -Here, uvT is the outer product of two vectors u and v. - -The theorem can also be stated in terms of the adjugate matrix of A: -$$ -\det\left(\mathbf{A} + \mathbf{uv}^\textsf{T}\right) = \det\left(\mathbf{A}\right) + \mathbf{v}^\textsf{T}\mathrm{adj}\left(\mathbf{A}\right)\mathbf{u}, -$$ - -in which case it applies whether or not the square matrix A is invertible.
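The lemma is easy to check numerically; the following is a minimal verification sketch using NumPy, with arbitrary random data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)  # comfortably invertible
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))

# det(A + u v^T) versus (1 + v^T A^{-1} u) det(A)
lhs = np.linalg.det(A + u @ v.T)
rhs = (1 + v.T @ np.linalg.inv(A) @ u).item() * np.linalg.det(A)
print(np.isclose(lhs, rhs))  # True
```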
- -First the proof of the special case A = I follows from the equality: - - - -\begin{pmatrix} \mathbf{I} & 0 \\ \mathbf{v}^\textsf{T} & 1 \end{pmatrix} - -\begin{pmatrix} \mathbf{I} + \mathbf{uv}^\textsf{T} & \mathbf{u} \\ 0 & 1 \end{pmatrix} - -\begin{pmatrix} \mathbf{I} & 0 \\ -\mathbf{v}^\textsf{T} & 1 \end{pmatrix} = - -\begin{pmatrix} \mathbf{I} & \mathbf{u} \\ 0 & 1 + \mathbf{v}^\textsf{T}\mathbf{u} \end{pmatrix}. - - - -The determinant of the left hand side is the product of the determinants of the three matrices. Since the first and third matrices are triangular with unit diagonal, their determinants are just 1. The determinant of the middle matrix is our desired value. The determinant of the right hand side is simply (1 + vTu). So we have the result: -$$ -\det\left(\mathbf{I} + \mathbf{uv}^\textsf{T}\right) = \left(1 + \mathbf{v}^\textsf{T}\mathbf{u}\right). -$$ - -Then the general case can be found as: - -\begin{align} - -\det\left(\mathbf{A} + \mathbf{uv}^\textsf{T}\right) - -&= \det\left(\mathbf{A}\right) \det\left(\mathbf{I} + \left(\mathbf{A}^{-1}\mathbf{u}\right)\mathbf{v}^\textsf{T}\right)\\ - -&= \det\left(\mathbf{A}\right) \left(1 + \mathbf{v}^\textsf{T} \left(\mathbf{A}^{-1}\mathbf{u}\right)\right). - -\end{align} - -If the determinant and inverse of A are already known, the formula provides a numerically cheap way to compute the determinant of A corrected by the matrix uvT. The computation is relatively cheap because the determinant of A + uvT does not have to be computed from scratch (which in general is expensive). Using unit vectors for u and/or v, individual columns, rows or elements of A may be manipulated and a correspondingly updated determinant computed relatively cheaply in this way. - -When the matrix determinant lemma is used in conjunction with the Sherman–Morrison formula, both the inverse and determinant may be conveniently updated together. - -Suppose A is an invertible n-by-n matrix and U, V are n-by-m matrices. Then -$$ -\det\left(\mathbf{A} + \mathbf{UV}^\textsf{T}\right) = \det\left(\mathbf{I_m} + \mathbf{V}^\textsf{T}\mathbf{A}^{-1}\mathbf{U}\right)\det(\mathbf{A}). -$$ - -In the special case $\mathbf{A}=\mathbf{I_n}$ this is the Weinstein–Aronszajn identity. - -Given additionally an invertible m-by-m matrix W, the relationship can also be expressed as -$$ -\det\left(\mathbf{A} + \mathbf{UWV}^\textsf{T}\right) = \det\left(\mathbf{W}^{-1} + \mathbf{V}^\textsf{T}\mathbf{A}^{-1}\mathbf{U}\right)\det\left(\mathbf{W}\right)\det\left(\mathbf{A}\right). -$$ diff --git a/wiki/wikipedia/715.txt b/wiki/wikipedia/715.txt deleted file mode 100644 index 342b7be3fe0a999b7f8a7672f637b227f9aa2ff8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/715.txt +++ /dev/null @@ -1,14 +0,0 @@ -In mathematics, the mean value problem was posed by Stephen Smale in 1981. This problem is still open in full generality. The problem asks: - -For a given complex polynomial $f$ of degree $d \ge 2$ and a complex number $z$, is there a critical point $c$ of $f$ (i.e. $f'(c) = 0$) such that -$$ - \left| \frac{f(z) - f(c)}{z - c} \right| \le K|f'(z)| \text{ for }K=1 \text{?} -$$ - -It was proved for $K=4$. - -The conjecture is known to hold in special cases; for other cases, the bound on $K$ could be improved depending on the degree $d$, although no absolute bound $K<4$ is known that holds for all $d$.
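The proved K = 4 bound is easy to test numerically for particular cases; in the sketch below, the sample cubic and the test point are arbitrary illustrative choices:

```python
import numpy as np
from numpy.polynomial import Polynomial

f = Polynomial([1, 0, -3, 1])   # f(w) = 1 - 3w^2 + w^3, degree d = 3
df = f.deriv()
z = 1.0 + 0.5j                  # any point with f'(z) != 0
critical_points = df.roots()    # the c with f'(c) = 0 (here 0 and 2)

ratios = [abs((f(z) - f(c)) / (z - c)) for c in critical_points]
print(min(ratios), 4 * abs(df(z)))
assert min(ratios) <= 4 * abs(df(z))  # some critical point meets the bound
```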
- -In 1989, Tischler showed that the conjecture is true for the optimal bound $K = \frac{d-1}{d} $ if $f$ has only real roots, or if all roots of $f$ have the same norm. In 2007, Conte et al. proved that $K \le 4 \frac{d-1}{d+1}$. - -Considering the reverse inequality, Dubinin and Sugawa have proven that (under the same conditions as above) there exists a critical point $\zeta$ such that $ \left| \frac{f(z) - f(\zeta)}{z - \zeta} \right| \ge \frac{|f'(z)|}{n 4^{n}} $, where $n$ denotes the degree of $f$. The problem of optimizing this lower bound is known as the dual mean value problem. diff --git a/wiki/wikipedia/716.txt b/wiki/wikipedia/716.txt deleted file mode 100644 index 4f73133c574ffcd1bfb73b85867470a71ac50afe..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/716.txt +++ /dev/null @@ -1,59 +0,0 @@ -In computational complexity theory, a pseudo-polynomial transformation is a function which maps instances of one strongly NP-complete problem into another and is computable in pseudo-polynomial time. - -Some computational problems are parameterized by numbers whose magnitude exponentially exceeds the size of the input. For example, the problem of testing whether a number n is prime can be solved by naively checking candidate factors from 2 to $\sqrt{n}$ in $\sqrt{n}-1$ divisions, which is exponential in the input size $O(\log(n))$. Suppose that $L$ is an encoding of a computational problem $\Pi$ over the alphabet $\Sigma$. Then -$$ - \operatorname{Num}_\Pi: \Sigma^* \to \mathbb{N} -$$ - -is a function that maps $w_I \in \Sigma^*$, being the encoding of an instance $I$ of the problem $\Pi$, to the maximal numerical parameter of $I$. - -Suppose that $\Pi_1$ and $\Pi_2$ are decision problems, and $L_1$ and $L_2$ are their encodings over the alphabets $\Sigma_1$ and $\Sigma_2$ respectively. - -A pseudo-polynomial transformation from $\Pi_1$ to $\Pi_2$ is a function $f: \Sigma_1^* \to \Sigma_2^*$ such that - -# $\forall w \in \Sigma_1^* \quad w \in L_1 \iff f(w) \in L_2$ - -# $\forall w \in \Sigma_1^* \quad f(w)$ can be computed in time polynomial in the two variables $\operatorname{Num}_{\Pi_1}(w)$ and $|w|$ - -# $\exists Q_A \in \mathbb{N}[X] \quad\forall w \in \Sigma_1^* \quad |w| \leq Q_A(|f(w)|)$ - -# $\exists Q_B \in \mathbb{N}[X,Y] \quad\forall w \in \Sigma_1^* \quad \operatorname{Num}_{\Pi_2}(f(w)) \leq Q_B(\operatorname{Num}_{\Pi_1}(w), |w|)$ - -Intuitively, (1) allows one to reason about instances of $\Pi_1$ in terms of instances of $\Pi_2$ (and back), (2) ensures that deciding $L_1$ using the transformation and a pseudo-polynomial decider for $L_2$ is pseudo-polynomial itself, (3) enforces that $f$ grows fast enough so that $L_2$ must have a pseudo-polynomial decider, and (4) enforces that a subproblem of $L_1$ that testifies its strong NP-completeness (i.e. all instances have numerical parameters bounded by a polynomial in input size and the subproblem is NP-complete itself) is mapped to a subproblem of $L_2$ whose instances also have numerical parameters bounded by a polynomial in input size. - -The following lemma allows one to derive strong NP-completeness from the existence of a transformation: - -If $\Pi_1$ is a strongly NP-complete decision problem, $\Pi_2$ is a decision problem in NP, and there exists a pseudo-polynomial transformation from $\Pi_1$ to $\Pi_2$, then $\Pi_2$ is strongly NP-complete. - -Suppose that $\Pi_1$ is a strongly NP-complete decision problem encoded by $L_1$ over the alphabet $\Sigma_1$ and $\Pi_2$ is a decision problem in NP encoded by $L_2$ over the alphabet $\Sigma_2$.
- -Let $f: \Sigma_1^* \to \Sigma_2^*$ be a pseudo-polynomial transformation from $\Pi_1$ to $\Pi_2$ with $Q_A$, $Q_B$ as specified in the definition. - -From the definition of strong NP-completeness there exists a polynomial $P \in \mathbb{N}[X]$ such that $L_{1/P} = \{w \in L_1 : \operatorname{Num}_{\Pi_1}(w) \leq P(|w|) \}$ is NP-complete. - -For $\widehat{P}(n) = Q_B(P(Q_A(n)),Q_A(n))$ and any $w \in L_{1/P}$ there is - - - -\begin{aligned} - -\operatorname{Num}_{\Pi_2}(f(w)) &\leq Q_B(\operatorname{Num}_{\Pi_1}(w), |w|) && \text{(definition of }f\text{)} \\[4pt] - -&\leq Q_B(P(|w|), |w|) && \text{(property of } L_{1/P}\text{)} \\[4pt] - -&\leq Q_B(P(Q_A(|f(w)|)), Q_A(|f(w)|)) && \text{(definition of }f\text{)} \\[4pt] - -&\leq \widehat{P}(|f(w)|) && \text{(definition of } \widehat{P}\text{)} - -\end{aligned} - - - -Therefore, -$$ -f(L_{1/P}) \subseteq \{w \in L_2 : \operatorname{Num}_{\Pi_2}(w) \leq \widehat{P}(|w|) \} = L_{2/\widehat{P}} -$$ - -Since $L_{1/P}$ is NP-complete and $f|L_{1/P}$ is computable in polynomial time, $L_{2/\widehat{P}}$ is NP-complete. - -From this and the definition of strong NP-completeness it follows that $L_2$ is strongly NP-complete. diff --git a/wiki/wikipedia/717.txt b/wiki/wikipedia/717.txt deleted file mode 100644 index 842261f2ba1ca0d7ede71af12e19676b10b818cb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/717.txt +++ /dev/null @@ -1,51 +0,0 @@ -This is a list of mathematical conjectures. - -The conjecture terminology may persist: even after a statement has been proved or disproved, it may still be referred to as a conjecture, under its anachronistic name. - -* Deligne's conjecture on 1-motives - -* Goldbach's weak conjecture (proved in 2013) - -* Sensitivity conjecture (proved in 2019) - -* Atiyah conjecture (not a conjecture to start with) - -* Borsuk's conjecture - -* Chinese hypothesis (not a conjecture to start with) - -* Doomsday conjecture - -* Euler's sum of powers conjecture - -* Ganea conjecture - -* Generalized Smith conjecture - -* Hauptvermutung - -* Hedetniemi's conjecture, counterexample announced 2019 - -* Hirsch conjecture (disproved in 2010) - -* Intersection graph conjecture - -* Kelvin's conjecture - -* Kouchnirenko's conjecture - -* Mertens conjecture - -* Pólya conjecture, 1919 (disproved 1958) - -* Ragsdale conjecture - -* Schoenflies conjecture (disproved 1910) - -* Tait's conjecture - -* Von Neumann conjecture - -* Weyl–Berry conjecture - -* Williamson conjecture diff --git a/wiki/wikipedia/718.txt b/wiki/wikipedia/718.txt deleted file mode 100644 index e07c5895a3b6b370e6348aac417a01a19639a9d4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/718.txt +++ /dev/null @@ -1,19 +0,0 @@ -A tree k-spanner (or simply k-spanner) of a graph $G$ is a spanning subtree $T$ of $G$ in which the distance between every pair of vertices is at most $k$ times their distance in $G$. - -There are several papers written on the subject of tree spanners. One of these, entitled Tree Spanners and written by mathematicians Leizhen Cai and Derek Corneil, explored theoretical and algorithmic problems associated with tree spanners. Some of the conclusions from that paper are listed below. $n$ is always the number of vertices of the graph, and $m$ is its number of edges. - -# A tree 1-spanner, if it exists, is a minimum spanning tree and can be found in $\mathcal{O}(m \log \beta (m,n))$ time for a weighted graph, where $\beta(m,n) = \min\left\{i\mid \log^{i} n \leq m/n\right\}$.
Furthermore, every tree 1-spanner admissible weighted graph contains a unique minimum spanning tree. - -# A tree 2-spanner can be constructed in $\mathcal{O}(m+n)$ time, and the tree $t$-spanner problem is NP-complete for any fixed integer $t > 3$. - -# The complexity of finding a minimum tree spanner in a digraph is $\mathcal{O}((m+n)\cdot\alpha(m+n,n))$, where $\alpha(m+n,n)$ is a functional inverse of the Ackermann function. - -# The minimum 1-spanner of a weighted graph can be found in $\mathcal{O}(mn+n^2\log(n))$ time. - -# For any fixed rational number $t > 1$, it is NP-complete to determine whether a weighted graph contains a tree t-spanner, even if all edge weights are positive integers. - -# A tree spanner (or a minimum tree spanner) of a digraph can be found in linear time. - -# A digraph contains at most one tree spanner. - -# The quasi-tree spanner of a weighted digraph can be found in $\mathcal{O}(m \log \beta(m,n))$ time. diff --git a/wiki/wikipedia/719.txt b/wiki/wikipedia/719.txt deleted file mode 100644 index 308bbdbbff5b8427192b00c42ee4433431ea3e4e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/719.txt +++ /dev/null @@ -1,18 +0,0 @@ -In mathematical analysis, Littlewood's 4/3 inequality, named after John Edensor Littlewood, is an inequality that holds for every complex-valued bilinear form defined on c0, the Banach space of scalar sequences that converge to zero. - -Precisely, let B:c0 × c0 → ℂ or ℝ be a bilinear form. Then the following holds: -$$ -\left( \sum_{i,j=1}^\infty |B(e_i,e_j)|^{4/3} \right)^{3/4} \le \sqrt{2} \| B \|, -$$ - -where -$$ -\| B \| = \sup \{|B(x_1,x_2)|: \|x_i\|_\infty \le 1 \}. -$$ - -The exponent 4/3 is optimal, i.e., it cannot be improved by a smaller exponent. It is also known that for real scalars the aforementioned constant is sharp. - -The Bohnenblust–Hille inequality is a multilinear extension of Littlewood's inequality which states that for every m-linear mapping M:c0 × ... × c0 → ℂ the following holds: -$$ -\left( \sum_{i_1,\ldots,i_m=1}^\infty |M(e_{i_1},\ldots,e_{i_m})|^{2m/(m+1)} \right)^{(m+1)/(2m)} \le 2^{(m-1)/2} \| M \|, -$$ diff --git a/wiki/wikipedia/72.txt b/wiki/wikipedia/72.txt deleted file mode 100644 index 7ae9b6f442a7235f9d89bc76378bed24813b26fa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/72.txt +++ /dev/null @@ -1,97 +0,0 @@ -In mathematics, Hilbert's syzygy theorem is one of the three fundamental theorems about polynomial rings over fields, first proved by David Hilbert in 1890, which were introduced for solving important open questions in invariant theory, and are at the basis of modern algebraic geometry. The two other theorems are Hilbert's basis theorem, which asserts that all ideals of polynomial rings over a field are finitely generated, and Hilbert's Nullstellensatz, which establishes a bijective correspondence between affine algebraic varieties and prime ideals of polynomial rings. - -Hilbert's syzygy theorem concerns the relations, or syzygies in Hilbert's terminology, between the generators of an ideal, or, more generally, a module. As the relations form a module, one may consider the relations between the relations; Hilbert's syzygy theorem asserts that, if one continues in this way, starting with a module over a polynomial ring in n indeterminates over a field, one eventually finds a zero module of relations, after at most n steps. - -Hilbert's syzygy theorem is now considered to be an early result of homological algebra.
It is the starting point of the use of homological methods in commutative algebra and algebraic geometry. - -The syzygy theorem first appeared in Hilbert's seminal paper "Über die Theorie der algebraischen Formen" (1890). The paper is split into five parts: part I proves Hilbert's basis theorem over a field, while part II proves it over the integers. Part III contains the syzygy theorem (Theorem III), which is used in part IV to discuss the Hilbert polynomial. The last part, part V, proves finite generation of certain rings of invariants. Incidentally, part III also contains a special case of the Hilbert–Burch theorem. - -Originally, Hilbert defined syzygies for ideals in polynomial rings, but the concept generalizes trivially to (left) modules over any ring. - -Given a generating set $g_1, \ldots, g_k$ of a module M over a ring R, a relation or first syzygy between the generators is a k-tuple $(a_1, \ldots, a_k)$ of elements of R such that -$$ -a_1g_1 + \cdots + a_kg_k =0. -$$ - -Let $L_0$ be a free module with basis $(G_1, \ldots, G_k).$ The k-tuple $(a_1, \ldots, a_k)$ may be identified with the element -$$ -a_1G_1 + \cdots + a_kG_k, -$$ - -and the relations form the kernel $R_1$ of the linear map $L_0 \to M$ defined by $G_i \mapsto g_i.$ In other words, one has an exact sequence -$$ -0 \to R_1 \to L_0 \to M \to 0. -$$ - -This first syzygy module $R_1$ depends on the choice of a generating set, but, if $S_1$ is the module which is obtained with another generating set, there exist two free modules $F_1$ and $F_2$ such that -$$ -R_1 \oplus F_1 \cong S_1 \oplus F_2 -$$ - -where $\oplus$ denotes the direct sum of modules. - -The second syzygy module is the module of the relations between generators of the first syzygy module. By continuing in this way, one may define the kth syzygy module for every positive integer k. - -If the kth syzygy module is free for some k, then by taking a basis as a generating set, the next syzygy module (and every subsequent one) is the zero module. If one does not take bases as generating sets, then all subsequent syzygy modules are still free. - -Let n be the smallest integer, if any, such that the nth syzygy module of a module M is free or projective. The above property of invariance, up to direct sum with free modules, implies that n does not depend on the choice of generating sets. The projective dimension of M is this integer, if it exists, or ∞ if not. This is equivalent to the existence of an exact sequence -$$ -0 \longrightarrow R_n \longrightarrow L_{n-1} \longrightarrow \cdots \longrightarrow L_0 \longrightarrow M \longrightarrow 0, -$$ - -where the modules $L_i$ are free and $R_n$ is projective. It can be shown that one may always choose the generating sets so that $R_n$ is itself free, that is, so that the above exact sequence is a free resolution. - -Hilbert's syzygy theorem states that, if M is a finitely generated module over a polynomial ring $k[x_1,\ldots,x_n]$ in n indeterminates over a field k, then the nth syzygy module of M is always a free module. - -In modern language, this implies that the projective dimension of M is at most n, and thus that there exists a free resolution -$$ -0 \longrightarrow L_k \longrightarrow L_{k-1} \longrightarrow \cdots \longrightarrow L_0 \longrightarrow M \longrightarrow 0 -$$ - -of length k ≤ n. - -This upper bound on the projective dimension is sharp, that is, there are modules of projective dimension exactly n.
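For instance (a concrete illustration of sharpness, specializing the standard example discussed next), take n = 2 and $R = k[x, y]$. Starting from the generator 1 of the $R$-module $k = R/\langle x, y\rangle$, the first syzygy module is the ideal $\langle x, y\rangle$, and the only relations between its generators $x$ and $y$ are the multiples of $(y, -x)$, so the second syzygy module is free of rank one. This yields the free resolution
$$
0 \to R \xrightarrow{\,(y,\,-x)\,} R^2 \xrightarrow{\,(a,b)\mapsto ax+by\,} R \to k \to 0
$$
of length exactly n = 2.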
The standard example is the field k, which may be considered as a $k[x_1,\ldots,x_n]$-module by setting $x_i c=0$ for every i and every c ∈ k. For this module, the nth syzygy module is free, but not the (n − 1)th one (for a proof, see the discussion of the Koszul complex below). - -The theorem is also true for modules that are not finitely generated. As the global dimension of a ring is the supremum of the projective dimensions of all modules, Hilbert's syzygy theorem may be restated as: the global dimension of $k[x_1,\ldots,x_n]$ is n. - -In the case of zero indeterminates, Hilbert's syzygy theorem is simply the fact that every vector space has a basis. - -In the case of a single indeterminate, Hilbert's syzygy theorem is an instance of the theorem asserting that over a principal ideal ring, every submodule of a free module is itself free. - -The Koszul complex, also called "complex of exterior algebra", allows, in some cases, an explicit description of all syzygy modules. - -Let $g_1, \ldots, g_k$ be a generating system of an ideal I in a polynomial ring $R=k[x_1,\ldots,x_n]$, and let $L_1$ be a free module of basis $G_1, \ldots, G_k.$ The exterior algebra of $L_1$ is the direct sum -$$ -\Lambda(L_1)=\bigoplus_{t=0}^k L_t, -$$ - -where $L_t$ is the free module which has, as a basis, the exterior products -$$ -G_{i_1} \wedge \cdots \wedge G_{i_t}, -$$ - -such that $i_1< i_2<\cdots <i_t\leq k.$ For every positive t, one may define a linear map $L_t\to L_{t-1}$ by -$$ -G_{i_1} \wedge \cdots \wedge G_{i_t} \mapsto \sum_{j=1}^t (-1)^{j+1}g_{i_j}G_{i_1}\wedge \cdots\wedge \widehat{G}_{i_j} \wedge \cdots\wedge G_{i_t}, -$$ - -where the hat means that the factor is omitted. A straightforward computation shows that the composition of two consecutive such maps is zero, and thus that one has a complex -$$ -0\to L_t \to L_{t-1} \to \cdots \to L_1 \to L_0 \to R/I. -$$ - -This is the Koszul complex. In general, the Koszul complex is not an exact sequence, but it is an exact sequence if one works with a polynomial ring $R=k[x_1,\ldots,x_n]$ and an ideal generated by a regular sequence of homogeneous polynomials. - -In particular, the sequence $x_1,\ldots,x_n$ is regular, and the Koszul complex is thus a projective resolution of $k=R/\langle x_1, \ldots, x_n\rangle.$ In this case, the nth syzygy module is free of dimension one (generated by the product of all $G_i$); the (n − 1)th syzygy module is thus the quotient of a free module of dimension n by the submodule generated by $(x_1, -x_2, \ldots, \pm x_n).$ This quotient cannot be a projective module, as otherwise, there would exist polynomials $p_i$ such that $p_1x_1 + \cdots +p_nx_n=1,$ which is impossible (substituting 0 for the $x_i$ in the latter equality provides 1 = 0). This proves that the projective dimension of $k=R/\langle x_1, \ldots, x_n\rangle$ is exactly n. - -The same proof applies for proving that the projective dimension of $k[x_1, \ldots, x_n]/\langle g_1, \ldots, g_t\rangle$ is exactly t if the $g_i$ form a regular sequence of homogeneous polynomials. - -In Hilbert's time, no method was available for computing syzygies. It was only known that an algorithm may be deduced from any upper bound of the degree of the generators of the module of syzygies. In fact, the coefficients of the syzygies are unknown polynomials. If the degree of these polynomials is bounded, the number of their monomials is also bounded. Expressing that one has a syzygy provides a system of linear equations whose unknowns are the coefficients of these monomials.
Therefore, any algorithm for linear systems implies an algorithm for syzygies, as soon as a bound on the degrees is known. - -The first bound for syzygies (as well as for the ideal membership problem) was given in 1926 by Grete Hermann: Let M be a submodule of a free module L of dimension t over $k[x_1, \ldots, x_n];$ if the coefficients over a basis of L of a generating system of M have a total degree at most d, then there is a constant c such that the degrees occurring in a generating system of the first syzygy module are at most $(td)^{2^{cn}}.$ The same bound applies for testing the membership to M of an element of L. - -On the other hand, there are examples where a double exponential degree necessarily occurs. However, such examples are extremely rare, and this raises the question of an algorithm that is efficient when the output is not too large. At the present time, the best algorithms for computing syzygies are Gröbner basis algorithms. They allow the computation of the first syzygy module, and also, with almost no extra cost, all syzygy modules. - -One might wonder which ring-theoretic property of $A=k[x_1,\ldots,x_n]$ causes the Hilbert syzygy theorem to hold. It turns out that this is regularity, which is an algebraic formulation of the fact that affine n-space is a variety without singularities. In fact, the following generalization holds: Let $A$ be a Noetherian ring. Then $A$ has finite global dimension if and only if $A$ is regular and the Krull dimension of $A$ is finite; in that case the global dimension of $A$ is equal to the Krull dimension. This result may be proven using Serre's theorem on regular local rings. diff --git a/wiki/wikipedia/720.txt b/wiki/wikipedia/720.txt deleted file mode 100644 index 41cb2c445a01940ccaedf76b19460cb4892d16bd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/720.txt +++ /dev/null @@ -1 +0,0 @@ -In algebraic topology, a branch of mathematics, Snaith's theorem, introduced by Victor Snaith, identifies the complex K-theory spectrum with the localization of the suspension spectrum of $\mathbb{C}P^\infty$ away from the Bott element. diff --git a/wiki/wikipedia/721.txt b/wiki/wikipedia/721.txt deleted file mode 100644 index 887d8ebe808803c2533957fb351fac722b72314d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/721.txt +++ /dev/null @@ -1,57 +0,0 @@ -In logic, Peirce's law is named after the philosopher and logician Charles Sanders Peirce. It was taken as an axiom in his first axiomatisation of propositional logic. It can be thought of as the law of excluded middle written in a form that involves only one sort of connective, namely implication. - -In propositional calculus, Peirce's law says that ((P→Q)→P)→P. Written out, this means that P must be true if there is a proposition Q such that the truth of P follows from the truth of "if P then Q". In particular, when Q is taken to be a false formula, the law says that if P must be true whenever it implies falsity, then P is true. In this way Peirce's law implies the law of excluded middle. - -Peirce's law does not hold in intuitionistic logic or intermediate logics and cannot be deduced from the deduction theorem alone. - -Under the Curry-Howard isomorphism, Peirce's law is the type of continuation operators, e.g. call/cc in Scheme. - -Here is Peirce's own statement of the law: - -A fifth icon is required for the principle of excluded middle and other propositions connected with it. One of the simplest formulae of this kind is: - -((x → y) → x) → x. - -This is hardly axiomatical.
That it is true appears as follows. It can only be false by the final consequent x being false while its antecedent (x → y) → x is true. If this is true, either its consequent, x, is true, when the whole formula would be true, or its antecedent x → y is false. But in the last case the antecedent of x → y, that is x, must be true. (Peirce, the Collected Papers 3.384). - -Peirce goes on to point out an immediate application of the law: - -From the formula just given, we at once get: - -where the a is used in such a sense that (x → y) → a means that from (x → y) every proposition follows. With that understanding, the formula states the principle of excluded middle, that from the falsity of the denial of x follows the truth of x. (Peirce, the Collected Papers 3.384). - -Warning: ((x→y)→a)→x is not a tautology. However, [a→x]→[((x→y)→a)→x] is a tautology. - -Here is a simple proof of Peirce's law assuming double negation $(\neg \neg P \iff P)$ and deriving the standard disjunction from an implication $((P \rightarrow Q) \Rightarrow (\neg P \vee Q))$: - - - -\begin{align} - -(p \rightarrow q) \rightarrow p \\ - -\neg (p \rightarrow q) \lor p \\ - -\neg (\neg p \lor q) \lor p \\ - -(p \land \neg q) \lor p \\ - -p \lor p \\ - -p. \\ - -\end{align} - - - -Peirce's law allows one to enhance the technique of using the deduction theorem to prove theorems. Suppose one is given a set of premises Γ and one wants to deduce a proposition Z from them. With Peirce's law, one can add (at no cost) additional premises of the form Z→P to Γ. For example, suppose we are given P→Z and (P→Q)→Z and we wish to deduce Z so that we can use the deduction theorem to conclude that (P→Z)→(((P→Q)→Z)→Z) is a theorem. Then we can add another premise Z→Q. From that and P→Z, we get P→Q. Then we apply modus ponens with (P→Q)→Z as the major premise to get Z. Applying the deduction theorem, we get that (Z→Q)→Z follows from the original premises. Then we use Peirce's law in the form ((Z→Q)→Z)→Z and modus ponens to derive Z from the original premises. Then we can finish off proving the theorem as we originally intended. - -One reason that Peirce's law is important is that it can substitute for the law of excluded middle in the logic which only uses implication. The sentences which can be deduced from the axiom schemas: - -* P→(Q→P) - -* (P→(Q→R))→((P→Q)→(P→R)) - -* ((P→Q)→P)→P - -* from P and P→Q infer Q - -(where P,Q,R contain only "→" as a connective) are all the tautologies which use only "→" as a connective. diff --git a/wiki/wikipedia/722.txt b/wiki/wikipedia/722.txt deleted file mode 100644 index f362804aeda476ef62fbda5cc4db456de84404f2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/722.txt +++ /dev/null @@ -1,129 +0,0 @@ -This page lists some examples of vector spaces. See vector space for the definitions of terms used on this page. See also: dimension, basis. - -Notation. Let F denote an arbitrary field such as the real numbers R or the complex numbers C. - -The simplest example of a vector space is the trivial one: {0}, which contains only the zero vector (see the third axiom in the Vector space article). Both vector addition and scalar multiplication are trivial. A basis for this vector space is the empty set, so that {0} is the 0-dimensional vector space over F. Every vector space over F contains a subspace isomorphic to this one. - -The zero vector space is conceptually different from the null space of a linear operator L, which is the kernel of L. 
(Incidentally, the null space of L is a zero space if and only if L is injective.) - -The next simplest example is the field F itself. Vector addition is just field addition, and scalar multiplication is just field multiplication. This property can be used to prove that a field is a vector space. Any non-zero element of F serves as a basis, so F is a 1-dimensional vector space over itself. - -The field is a rather special vector space; in fact it is the simplest example of a commutative algebra over F. Also, F has just two subspaces: {0} and F itself. - -The original example of a vector space is the following. For any positive integer n, the set of all n-tuples of elements of F forms an n-dimensional vector space over F sometimes called coordinate space and denoted Fn. An element of Fn is written -$$ -x = (x_1, x_2, \ldots, x_n) -$$ - -where each xi is an element of F. The operations on Fn are defined by -$$ -x + y = (x_1 + y_1, x_2 + y_2, \ldots, x_n + y_n) -$$ -$$ -\alpha x = (\alpha x_1, \alpha x_2, \ldots, \alpha x_n) -$$ -$$ -0 = (0, 0, \ldots, 0) -$$ -$$ --x = (-x_1, -x_2, \ldots, -x_n) -$$ - -Commonly, F is the field of real numbers, in which case we obtain real coordinate space Rn. The field of complex numbers gives complex coordinate space Cn. The a + bi form of a complex number shows that C itself is a two-dimensional real vector space with coordinates (a,b). Similarly, the quaternions and the octonions are respectively four- and eight-dimensional real vector spaces, and Cn is a 2n-dimensional real vector space. - -The vector space Fn has a standard basis: -$$ -e_1 = (1, 0, \ldots, 0) -$$ -$$ -e_2 = (0, 1, \ldots, 0) -$$ -$$ -\vdots -$$ -$$ -e_n = (0, 0, \ldots, 1) -$$ - -where 1 denotes the multiplicative identity in F. - -Let F∞ denote the space of infinite sequences of elements from F such that only finitely many elements are nonzero. That is, if we write an element of F∞ as -$$ -x = (x_1, x_2, x_3, \ldots) -$$ - -then only a finite number of the xi are nonzero (i.e., the coordinates become all zero after a certain point). Addition and scalar multiplication are given as in finite coordinate space. The dimensionality of F∞ is countably infinite. A standard basis consists of the vectors ei which contain a 1 in the i-th slot and zeros elsewhere. This vector space is the coproduct (or direct sum) of countably many copies of the vector space F. - -Note the role of the finiteness condition here. One could consider arbitrary sequences of elements in F, which also constitute a vector space with the same operations, often denoted by FN - see below. FN is the product of countably many copies of F. - -By Zorn's lemma, FN has a basis (there is no obvious basis), and the basis has uncountably many elements. Since the dimensions are different, FN is not isomorphic to F∞. It is worth noting that FN is (isomorphic to) the dual space of F∞, because a linear map T from F∞ to F is determined uniquely by its values T(ei) on the basis elements of F∞, and these values can be arbitrary. Thus one sees that a vector space need not be isomorphic to its double dual if it is infinite dimensional, in contrast to the finite dimensional case. - -Starting from n vector spaces, or a countably infinite collection of them, each with the same field, we can define the product space as above. - -Let Fm×n denote the set of m×n matrices with entries in F. Then Fm×n is a vector space over F.
Vector addition is just matrix addition and scalar multiplication is defined in the obvious way (by multiplying each entry by the same scalar). The zero vector is just the zero matrix. The dimension of Fm×n is mn. One possible choice of basis is the matrices with a single entry equal to 1 and all other entries 0. - -When m = n the matrix is square and matrix multiplication of two such matrices produces a third. This vector space of dimension n2 forms an algebra over a field. - -The set of polynomials with coefficients in F is a vector space over F, denoted F[x]. Vector addition and scalar multiplication are defined in the obvious manner. If the degree of the polynomials is unrestricted then the dimension of F[x] is countably infinite. If instead one restricts to polynomials with degree less than or equal to n, then we have a vector space with dimension n + 1. - -One possible basis for F[x] is a monomial basis: the coordinates of a polynomial with respect to this basis are its coefficients, and the map sending a polynomial to the sequence of its coefficients is a linear isomorphism from F[x] to the infinite coordinate space F∞. - -The vector space of polynomials with real coefficients and degree less than or equal to n is often denoted by Pn. - -The set of polynomials in several variables with coefficients in F is a vector space over F, denoted F[x1, x2, …, xr]. Here r is the number of variables. - -See also: Polynomial ring - -See main article at Function space, especially the functional analysis section. - -Let X be a non-empty arbitrary set and V an arbitrary vector space over F. The space of all functions from X to V is a vector space over F under pointwise addition and scalar multiplication. That is, let f : X → V and g : X → V denote two functions, and let α be in F. We define -$$ -(f + g)(x) = f(x) + g(x) -$$ -$$ -(\alpha f)(x) = \alpha f(x) -$$ - -where the operations on the right hand side are those in V. The zero vector is given by the constant function sending everything to the zero vector in V. The space of all functions from X to V is commonly denoted VX. - -If X is finite and V is finite-dimensional then VX has dimension |X|(dim V); otherwise the space is infinite-dimensional (uncountably so if X is infinite). - -Many of the vector spaces that arise in mathematics are subspaces of some function space. We give some further examples. - -Let X be an arbitrary set. Consider the space of all functions from X to F which vanish on all but a finite number of points in X. This space is a vector subspace of FX, the space of all possible functions from X to F. To see this, note that the union of two finite sets is finite, so that the sum of two functions in this space will still vanish outside a finite set. - -The space described above is commonly denoted (FX)0 and is called generalized coordinate space for the following reason. If X is the set of numbers between 1 and n then this space is easily seen to be equivalent to the coordinate space Fn. Likewise, if X is the set of natural numbers, N, then this space is just F∞. - -A canonical basis for (FX)0 is the set of functions {δx | x ∈ X} defined by -$$ -\delta_x(y) = \begin{cases}1 \quad x = y \\ 0 \quad x \neq y\end{cases} -$$ - -The dimension of (FX)0 is therefore equal to the cardinality of X. In this manner we can construct a vector space of any dimension over any field. Furthermore, every vector space is isomorphic to one of this form. Any choice of basis determines an isomorphism by sending the basis onto the canonical one for (FX)0.
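As a small illustration (a sketch, assuming finitely supported functions X → F are modelled as Python dictionaries holding only the nonzero values; the helper names are made up for this example):

```python
# Vectors in (F^X)_0: dicts mapping points of X to nonzero field values.
def add(f, g):
    h = {x: f.get(x, 0) + g.get(x, 0) for x in set(f) | set(g)}
    return {x: v for x, v in h.items() if v != 0}  # keep the support finite

def delta(x):
    return {x: 1}  # canonical basis vector: 1 at x, 0 elsewhere

v = add(delta("a"), add(delta("a"), delta("b")))
print(v)  # {'a': 2, 'b': 1}
```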
- -Generalized coordinate space may also be understood as the direct sum of |X| copies of F (i.e. one for each point in X): -$$ -(\mathbf F^X)_0 = \bigoplus_{x\in X}\mathbf F. -$$ - -The finiteness condition is built into the definition of the direct sum. Contrast this with the direct product of |X| copies of F, which would give the full function space FX. - -An important example arising in the context of linear algebra itself is the vector space of linear maps. Let L(V,W) denote the set of all linear maps from V to W (both of which are vector spaces over F). Then L(V,W) is a subspace of WV since it is closed under addition and scalar multiplication. - -Note that L(Fn,Fm) can be identified with the space of matrices Fm×n in a natural way. In fact, by choosing appropriate bases for finite-dimensional spaces V and W, L(V,W) can also be identified with Fm×n. This identification normally depends on the choice of basis. - -If X is some topological space, such as the unit interval [0,1], we can consider the space of all continuous functions from X to R. This is a vector subspace of RX since the sum of any two continuous functions is continuous and scalar multiplication is continuous. - -The subset of the space of all functions from R to R consisting of (sufficiently differentiable) functions that satisfy a certain differential equation is a subspace of RR if the equation is linear. This is because differentiation is a linear operation, i.e., (a f + b g)′ = a f′ + b g′, where ′ is the differentiation operator. - -Suppose K is a subfield of F (cf. field extension). Then F can be regarded as a vector space over K by restricting scalar multiplication to elements in K (vector addition is defined as normal). The dimension of this vector space, if it exists, is called the degree of the extension. For example, the complex numbers C form a two-dimensional vector space over the real numbers R. Likewise, the real numbers R form a vector space over the rational numbers Q, which has (uncountably) infinite dimension if a Hamel basis exists. - -If V is a vector space over F it may also be regarded as a vector space over K. The dimensions are related by the formula - -dimKV = (dimFV)(dimKF) - -For example, Cn, regarded as a vector space over the reals, has dimension 2n. - -Apart from the trivial case of a zero-dimensional space over any field, a vector space over a field F has a finite number of elements if and only if F is a finite field and the vector space has a finite dimension. Thus we have Fq, the unique finite field (up to isomorphism) with q elements. Here q must be a power of a prime (q = pm with p prime). Then any n-dimensional vector space V over Fq will have qn elements. Note that the number of elements in V is also the power of a prime (because a power of a prime power is again a prime power). The primary example of such a space is the coordinate space (Fq)n. - -These vector spaces are of critical importance in the representation theory of finite groups, number theory, and cryptography. diff --git a/wiki/wikipedia/723.txt b/wiki/wikipedia/723.txt deleted file mode 100644 index 2edbf5bab9d80895e49802b1cd13fef1c6ad01bc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/723.txt +++ /dev/null @@ -1,3 +0,0 @@ -A flooding algorithm is an algorithm for distributing material to every part of a graph. The name derives from the concept of inundation by a flood. - -Flooding algorithms are used in computer networking and graphics.
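As a concrete example, a minimal breadth-first flood fill on a 2D grid might look as follows (the grid representation and function name are illustrative assumptions, not from the original text):

```python
from collections import deque

def flood_fill(grid, start, new_value):
    old_value = grid[start[0]][start[1]]
    if old_value == new_value:
        return grid
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        # Spread only through in-bounds cells still holding the old value.
        if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == old_value:
            grid[r][c] = new_value
            queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return grid

print(flood_fill([[0, 0, 1], [0, 1, 1], [1, 1, 1]], (0, 0), 2))
# [[2, 2, 1], [2, 1, 1], [1, 1, 1]]
```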
Flooding algorithms are also useful for solving many mathematical problems, including maze problems and many problems in graph theory. diff --git a/wiki/wikipedia/724.txt b/wiki/wikipedia/724.txt deleted file mode 100644 index f3357b04f04300654be5cfb671318b5a8e7eef7b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/724.txt +++ /dev/null @@ -1,51 +0,0 @@ -In mathematics, hypergeometric identities are equalities involving sums over hypergeometric terms, i.e. the coefficients occurring in hypergeometric series. These identities occur frequently in solutions to combinatorial problems, and also in the analysis of algorithms. - -These identities were traditionally found 'by hand'. Several algorithms now exist which can find and prove all hypergeometric identities. -$$ - \sum_{i=0}^{n} {n \choose i} = 2^{n} -$$ -$$ - \sum_{i=0}^{n} {n \choose i}^2 = {2n \choose n} -$$ -$$ - \sum_{k=0}^{n} k {n \choose k} = n2^{n-1} -$$ -$$ - \sum_{i=n}^{N} i{i \choose n} = (n+1){N+2\choose n+2}-{N+1\choose n+1} -$$ - -There are two definitions of hypergeometric terms, both used in different cases as explained below. See also hypergeometric series. - -A term tk is a hypergeometric term if -$$ -\frac{t_{k+1}}{t_k} -$$ - -is a rational function in k. - -A term F(n,k) is a hypergeometric term if -$$ -\frac{F(n,k+1)}{F(n,k)} -$$ - -is a rational function in k. - -There exist two types of sums over hypergeometric terms, the definite and indefinite sums. A definite sum, in which the index runs over all values, is of the form -$$ - \sum_{k} F(n,k). -$$ - -An indefinite sum, a partial sum with a free upper limit, is of the form -$$ - \sum_{k=0}^{n} t_k. -$$ - -Although proofs of particular identities were traditionally found by hand, several algorithms now exist to find and prove identities automatically. These algorithms first find a simple expression for a sum over hypergeometric terms and then provide a certificate which anyone could use to easily check and prove the correctness of the identity. - -For each of the hypergeometric sum types there exist one or more methods to find a simple expression. These methods also provide a certificate to easily check the proof of an identity: - -* Definite sums: Sister Celine's Method, Zeilberger's algorithm - -* Indefinite sums: Gosper's algorithm - -The book A = B by Marko Petkovšek, Herbert Wilf and Doron Zeilberger describes the three main approaches listed above. diff --git a/wiki/wikipedia/725.txt b/wiki/wikipedia/725.txt deleted file mode 100644 index 263ad3ecbc6c1c4841fcd3bd056d3ad304badfba..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/725.txt +++ /dev/null @@ -1,15 +0,0 @@ -In algebraic geometry and number theory, the torsion conjecture or uniform boundedness conjecture for torsion points on abelian varieties states that the order of the torsion group of an abelian variety over a number field can be bounded in terms of the dimension of the variety and the number field. A stronger version of the conjecture is that the torsion is bounded in terms of the dimension of the variety and the degree of the number field. The torsion conjecture has been completely resolved in the case of elliptic curves. - -From 1906 to 1911, Beppo Levi published a series of papers investigating the possible finite orders of points on elliptic curves over the rationals. He showed that there are infinitely many elliptic curves over the rationals with the following torsion groups: - -* Cn with 1 ≤ n ≤ 10, where Cn denotes the cyclic group of order n; - -* C12; - -* C2n × C2 with 1 ≤ n ≤ 4, where × denotes the direct sum.
- -At the 1908 International Mathematical Congress in Rome, Levi conjectured that this is a complete list of torsion groups for elliptic curves over the rationals. The torsion conjecture for elliptic curves over the rationals was later independently reformulated more than once, with the conjecture becoming commonly known as Ogg's conjecture. - -The torsion conjecture for elliptic curves over the rationals was subsequently connected to the theory of classical modular curves. In the early 1970s, the work of Gérard Ligozat, Daniel Kubert, Barry Mazur, and John Tate showed that several small values of n do not occur as orders of torsion points on elliptic curves over the rationals. Mazur proved the full torsion conjecture for elliptic curves over the rationals. His techniques were generalized by Kamienny, and by Kamienny and Mazur, who obtained uniform boundedness for quadratic fields and for number fields of degree at most 8, respectively. Finally, Merel proved the conjecture for elliptic curves over any number field. - -An effective bound for the size of the torsion group in terms of the degree of the number field was given by Parent. A complete list of possible torsion groups has also been given for elliptic curves over quadratic number fields. There are substantial partial results for quartic and quintic number fields. diff --git a/wiki/wikipedia/726.txt b/wiki/wikipedia/726.txt deleted file mode 100644 index 1dbb7e46dd13a0a8216d66bbdb4bf6459bd3c5a1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/726.txt +++ /dev/null @@ -1,21 +0,0 @@ -All horses are the same color is a falsidical paradox that arises from a flawed use of mathematical induction to prove the statement All horses are the same color. There is no actual contradiction, as these arguments have a crucial flaw that makes them incorrect. This example was originally raised by George Pólya in a 1954 book in different terms: "Are any n numbers equal?" or "Any n girls have eyes of the same color", as an exercise in mathematical induction. It has also been restated as "All cows have the same color". - -The "horses" version of the paradox was presented in 1961 in a satirical article by Joel E. Cohen. It was stated as a lemma, which in particular allowed the author to "prove" that Alexander the Great did not exist and that he had an infinite number of limbs. - -The argument is proof by induction. First we establish a base case for one horse ($n=1$). We then prove that if $n$ horses have the same color, then $n+1$ horses must also have the same color. - -The case with just one horse is trivial. If there is only one horse in the "group", then clearly all horses in that group have the same color. - -Assume that $n$ horses always are the same color. Consider a group consisting of $n+1$ horses. - -First, exclude one horse and look only at the other $n$ horses; all these are the same color since $n$ horses always are the same color. Likewise, exclude some other horse (not identical to the one first removed) and look only at the other $n$ horses. By the same reasoning, these, too, must be of the same color. Therefore, the first horse that was excluded is of the same color as the non-excluded horses, who in turn are of the same color as the other excluded horse. Hence the first horse excluded, the non-excluded horses, and the last horse excluded are all of the same color, and we have proven that: - -*If $n$ horses have the same color, then $n+1$ horses will also have the same color.
-
-We already saw in the base case that the rule ("all horses have the same color") was valid for $n=1$. The inductive step proved here implies that since the rule is valid for $n=1$, it must also be valid for $n=2$, which in turn implies that the rule is valid for $n=3$ and so on.
-
-Thus in any group of horses, all horses must be the same color.
-
-The argument above makes the implicit assumption that the set of $n+1$ horses has size at least 3, so that the two proper subsets of horses to which the induction assumption is applied necessarily share a common element. This is not true at the first step of induction, i.e., when $n+1=2$.
-
-Let the two horses be horse A and horse B. When horse A is removed, it is true that the remaining horses in the set are the same color (only horse B remains). The same is true when horse B is removed. However, the statement "the first horse in the group is of the same color as the horses in the middle" is meaningless, because there are no "horses in the middle" (no common elements (horses) in the two sets). Therefore, the above proof has a broken logical link. The proof forms a falsidical paradox; it seems to show by valid reasoning something that is manifestly false, but in fact the reasoning is flawed.
diff --git a/wiki/wikipedia/727.txt b/wiki/wikipedia/727.txt
deleted file mode 100644
index 8a493f6e7a42a8ad62fbe7278b3200855cd17694..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/727.txt
+++ /dev/null
@@ -1,9 +0,0 @@
-V-Tetris is a 1995 puzzle video game developed by Locomotive and published by Bullet-Proof Software in Japan for the Virtual Boy. Its gameplay involves the player clearing horizontal lines by moving pieces of different shapes that descend onto the playing field, filling empty spaces in order to make completed lines disappear and gain points across three modes of play.
-
-Designed by Toshiaki Kamiya and headed by Takehiro Moriyama, V-Tetris was developed under supervision of Bullet-Proof Software by staff at Locomotive Corporation who would later work on both Virtual Fishing and SD Gundam Dimension War, with the team wanting the game to feel fresh and new with its additional content while keeping the familiarity of the original Tetris. BPS felt that producing a Tetris game for the Virtual Boy would appeal to a wider audience and wanted to keep the game's simplicity to respect the original's legacy. Gameplay was altered to make use of the system's 3D hardware capabilities to create a sense of depth. It was intended to be released as a launch title and feature support for the then-upcoming Link Cable peripheral, which was never released due to the console's short lifespan.
-
-V-Tetris has been met with a mixed reception from critics since its release.
-
-V-Tetris is a falling-tile puzzle game similar to previous Tetris titles on other platforms, with three modes of play. In type-A mode, the player must clear horizontal lines by moving pieces of different shapes that descend onto the playing field, filling empty spaces in order to make completed lines disappear and gain as many points as possible, while avoiding filling up the playfield; the game's speed increases after every ten cleared lines. Production was headed by Takehiro Moriyama alongside co-directors Norifumi Hara and Makoto Hijiya, while Hijiya also acted with Yuji Hatanaka as co-programmer.
The game was intended to be released as a launch title and feature support for the then-upcoming Link Cable peripheral, which was never released due to the short lifespan of the Virtual Boy. The title was released by Bullet-Proof Software in Japan on August 25 of the same year and was housed in a four-megabit cartridge. Though it was never officially released in North America, copies of the game were imported by Electronics Boutique in 1996 and sold alongside other Japanese-exclusive titles for the console; all the in-game text is in English.
-
-V-Tetris received a mixture of opinions from critics prior to and since its release, though retrospective reviewers gave it a positive recommendation. Famitsu's four reviewers gave the game a very mixed analysis. Nintendojo's Nathan Heckel felt that the elven motif could be "disturbing" for players under 10 and that the type-C mode did not take advantage of the system's 3D capabilities. However, Heckel referred to the latter mode as "mind-bending" due to the ability to rotate between stacks and its scoring opportunities, but regarded its gameplay as standard and remarked that the music did not compare to that of previous Tetris games. Nintendo Life's Dave Frear stated that "V-Tetris gains absolutely nothing from being on the Virtual Boy as the 3D effect is only really used for the backgrounds. The music could have been better and it's harmed by not having a save feature for the high scores. However it's still Tetris, which is as simple and addictive as it's ever been and mode C is excellent."
diff --git a/wiki/wikipedia/728.txt b/wiki/wikipedia/728.txt
deleted file mode 100644
index 71b6d8765cb0521666c6cbf1acc12c466296703d..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/728.txt
+++ /dev/null
@@ -1,65 +0,0 @@
-In computer science, graph reduction implements an efficient version of non-strict evaluation, an evaluation strategy where the arguments to a function are not immediately evaluated. This form of non-strict evaluation is also known as lazy evaluation and is used in functional programming languages. The technique was first developed by Chris Wadsworth in 1971.
-
-A simple example of evaluating an arithmetic expression follows:
-
-\begin{align}
-
-& {} & &((2+2)+(2+2))+(3+3) \\
-
-& {} &=&((2+2)+(2+2))+ 6 \\
-
-& {} &=&((2+2)+ 4)+6 \\
-
-& {} &=&(4+4)+6 \\
-
-& {} &=&8+6 \\
-
-& {} &=&14
-
-\end{align}
-
-The above reduction sequence employs a strategy known as outermost tree reduction. The same expression can be evaluated using innermost tree reduction, yielding the reduction sequence:
-
-\begin{align}
-
-& {} & &((2+2)+(2+2))+(3+3) \\
-
-& {} &= &((2+2)+4)+(3+3) \\
-
-& {} &= &(4+4)+(3+3) \\
-
-& {} &= &(4+4)+6 \\
-
-& {} &= &8+6 \\
-
-& {} &= &14
-
-\end{align}
-
-Notice that the reduction order is made explicit by the addition of parentheses. This expression could also have been simply evaluated right to left, because addition is an associative operation.
-
-Represented as a tree, the expression above looks like this:
-
-This is where the term tree reduction comes from. When represented as a tree, we can think of innermost reduction as working from the bottom up, while outermost works from the top down.
-
-The expression can also be represented as a directed acyclic graph, allowing sub-expressions to be shared:
-
-As for trees, outermost and innermost reduction also applies to graphs. Hence we have graph reduction.
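-
-The article's original diagrams are not reproduced here, but the effect of sharing can be illustrated with a small Python sketch (invented names; each node is reduced at most once because the shared sub-expression (2+2) is a single graph node):
-```python
-# Graph reduction with sharing: the DAG for ((2+2)+(2+2))+(3+3),
-# where both occurrences of (2+2) point to the same node.
-class Node:
-    def __init__(self, left=None, right=None, value=None):
-        self.left, self.right, self.value = left, right, value
-
-steps = 0
-
-def reduce_node(node):
-    global steps
-    if node.value is None:          # not yet reduced
-        l = reduce_node(node.left)
-        r = reduce_node(node.right)
-        node.value = l + r          # overwrite the node with its result
-        steps += 1
-    return node.value
-
-shared = Node(Node(value=2), Node(value=2))                      # (2+2), shared
-expr = Node(Node(shared, shared), Node(Node(value=3), Node(value=3)))
-print(reduce_node(expr), steps)     # 14 4 -> only four reduction steps
-```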
-
-Now evaluation with outermost graph reduction can proceed as follows:
-
-Notice that evaluation now only requires four steps. Outermost graph reduction is referred to as lazy evaluation and innermost graph reduction is referred to as eager evaluation.
-
-Combinator graph reduction is a fundamental implementation technique for functional programming languages, in which a program is converted into a combinator representation which is mapped to a directed graph data structure in computer memory, and program execution then consists of rewriting parts of this graph ("reducing" it) so as to move towards useful results.
-
-The concept of a graph reduction that allows evaluated values to be shared was first developed by Chris Wadsworth in his 1971 Ph.D. dissertation. This dissertation was cited by Peter Henderson and James H. Morris Jr. in their 1976 paper, "A lazy evaluator", which introduced the notion of lazy evaluation. In 1976 David Turner incorporated lazy evaluation into SASL using combinators.
-
-SASL was an early functional programming language first developed by Turner in 1972.
diff --git a/wiki/wikipedia/729.txt b/wiki/wikipedia/729.txt
deleted file mode 100644
index c6cbfbd4073c14c8d7a0d67719403648ed5c8f1f..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/729.txt
+++ /dev/null
@@ -1,15 +0,0 @@
-Consequentia mirabilis (Latin for "admirable consequence"), also known as Clavius's Law, is used in traditional and classical logic to establish the truth of a proposition from the inconsistency of its negation. It is thus related to reductio ad absurdum, but it can prove a proposition using just its own negation and the concept of consistency.
-
-For a more concrete formulation, it states that if a proposition is a consequence of its negation, then it is true, because its negation is then self-contradictory. In formal notation:
-$$
- (\neg A \rightarrow A) \rightarrow A
-$$.
-
-Given $P\to Q$ being equivalent to $\neg P\lor Q$, the principle is equivalent to
-$$
-(\neg \neg A \lor A) \rightarrow A
-$$.
-
-Consequentia mirabilis was a pattern of argument popular in 17th-century Europe that first appeared in a fragment of Aristotle's Protrepticus: "If we ought to philosophise, then we ought to philosophise; and if we ought not to philosophise, then we ought to philosophise (i.e. in order to justify this view); in any case, therefore, we ought to philosophise."
-
-Barnes claims in passing that the term consequentia mirabilis refers only to the inference of the proposition from the inconsistency of its negation, and that the term Lex Clavia (or Clavius' Law) refers to the inference of the proposition's negation from the inconsistency of the proposition.
diff --git a/wiki/wikipedia/73.txt b/wiki/wikipedia/73.txt
deleted file mode 100644
index d155a0d04e7c8d95ab3e619b262a23c8896a2b3e..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/73.txt
+++ /dev/null
@@ -1,140 +0,0 @@
-In mathematics, the Bauer–Fike theorem is a standard result in the perturbation theory of the eigenvalues of a complex-valued diagonalizable matrix. In substance, it states an absolute upper bound for the deviation of one perturbed matrix eigenvalue from a properly chosen eigenvalue of the exact matrix. Informally speaking, it says that the sensitivity of the eigenvalues is estimated by the condition number of the matrix of eigenvectors.
-
-The theorem was proved by Friedrich L. Bauer and C. T. Fike in 1960.
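-
-Before the formal setup below, the bound can be checked numerically. A sketch assuming numpy (the matrix and the perturbation are arbitrary illustrative choices):
-```python
-import numpy as np
-
-rng = np.random.default_rng(0)
-A = np.diag([1.0, 2.0, 3.0]) + 0.1 * rng.standard_normal((3, 3))  # diagonalizable
-dA = 1e-4 * rng.standard_normal((3, 3))                           # perturbation
-
-lam, V = np.linalg.eig(A)                 # eigenvalues and eigenvector matrix
-mu = np.linalg.eigvals(A + dA)            # perturbed eigenvalues
-
-bound = np.linalg.cond(V, 2) * np.linalg.norm(dA, 2)   # kappa_2(V) * ||dA||_2
-worst = max(min(abs(lam - m)) for m in mu)             # max over mu of min |lambda - mu|
-print(worst <= bound)                                  # expected: True
-```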
-
-In what follows we assume that:
-
-* A ∈ Cn,n is a diagonalizable matrix;
-
-* V ∈ Cn,n is the non-singular eigenvector matrix such that A = VΛV −1, where Λ is a diagonal matrix.
-
-* If X ∈ Cn,n is invertible, its condition number in p-norm is denoted by κp(X) and defined by:
-$$
-\kappa_p(X)=\|X\|_p \left \|X^{-1} \right \|_p.
-$$
-
-Bauer–Fike Theorem. Let μ be an eigenvalue of A + δA. Then there exists λ ∈ Λ(A) such that:
-$$
-|\lambda-\mu| \leq \kappa_p (V) \| \delta A \|_p
-$$
-
-Proof. We can suppose μ ∉ Λ(A), otherwise take λ = μ and the result is trivially true since κp(V) ≥ 1. Since μ is an eigenvalue of A + δA, we have det(A + δA − μI) = 0 and so
-
-\begin{align}
-
-0 &= \det(A+\delta A-\mu I) \\
-
-&= \det(V^{-1})\det(A+\delta A-\mu I)\det(V) \\
-
-&= \det \left ( V^{-1} (A+\delta A-\mu I) V \right ) \\
-
-&= \det \left ( V^{-1}AV + V^{-1}\delta AV- V^{-1} \mu I V \right ) \\
-
-&= \det \left (\Lambda+V^{-1}\delta AV-\mu I \right ) \\
-
-&= \det(\Lambda-\mu I)\det \left ((\Lambda-\mu I)^{-1}V^{-1}\delta AV +I \right ) \\
-
-\end{align}
-
-However our assumption, μ ∉ Λ(A), implies that det(Λ − μI) ≠ 0 and therefore we can write:
-$$
-\det \left ((\Lambda-\mu I)^{-1}V^{-1}\delta AV +I \right ) = 0.
-$$
-
-This reveals −1 to be an eigenvalue of
-$$
-(\Lambda-\mu I)^{-1}V^{-1}\delta AV.
-$$
-
-Since all p-norms are consistent matrix norms we have $|\lambda| \leq \|A\|_p$ where λ is an eigenvalue of A. In this instance this gives us:
-$$
-1 = |-1| \leq \left \|(\Lambda-\mu I)^{-1}V^{-1}\delta AV \right \|_p \leq \left \|(\Lambda-\mu I)^{-1} \right \|_p \left \|V^{-1}\right \|_p \|V\|_p\|\delta A\|_p = \left \|(\Lambda-\mu I)^{-1} \right \|_p\ \kappa_p(V)\|\delta A\|_p
-$$
-
-But (Λ − μI)−1 is a diagonal matrix, the p-norm of which is easily computed:
-$$
-\left \| \left (\Lambda-\mu I \right )^{-1} \right \|_p\ =\max_{\|\boldsymbol{x}\|_p\ne 0} \frac{\left \|\left (\Lambda-\mu I \right )^{-1} \boldsymbol{x} \right \|_p}{\|\boldsymbol{x}\|_p} =\max_{\lambda\in\Lambda(A)}\frac{1}{|\lambda-\mu|} = \frac{1}{\min_{\lambda\in\Lambda(A)}|\lambda-\mu|}
-$$
-
-whence:
-$$
-\min_{\lambda\in\Lambda(A)}|\lambda-\mu|\leq\ \kappa_p(V)\|\delta A\|_p.
-$$
-
-The theorem can also be reformulated to better suit numerical methods. In fact, dealing with real eigensystem problems, one often has an exact matrix A, but knows only an approximate eigenvalue-eigenvector couple, (λa, va ), and needs to bound the error. The following version helps in such cases.
-
-Bauer–Fike Theorem (Alternate Formulation). Let (λa, va ) be an approximate eigenvalue-eigenvector couple, and r = Ava − λava. Then there exists λ ∈ Λ(A) such that:
-$$
- \left |\lambda-\lambda^a \right |\leq \kappa_p (V)\frac{\|\boldsymbol{r}\|_p}{\left \|\boldsymbol{v}^a \right \|_p}
-$$
-
-Proof. We can suppose λa ∉ Λ(A), otherwise take λ = λa and the result is trivially true since κp(V) ≥ 1. Then (A − λaI)−1 exists, and we can write:
-$$
-\boldsymbol{v}^a = \left (A-\lambda^a I \right )^{-1} \boldsymbol{r}= V \left (D-\lambda^a I \right )^{-1}V^{-1}\boldsymbol{r}
-$$
-
-since A is diagonalizable; taking the p-norm of both sides, we obtain:
-$$
-\left \| \boldsymbol{v}^a \right \|_p= \left \|V \left (D-\lambda^a I \right )^{-1}V^{-1}\boldsymbol{r} \right \|_p \leq \|V\|_p \left \| \left (D-\lambda^a I \right )^{-1} \right \|_p \left \|V^{-1} \right \|_p \|\boldsymbol{r}\|_p =\kappa_p(V) \left \| \left (D-\lambda^a I \right )^{-1} \right \|_p \|\boldsymbol{r}\|_p.
-$$
-
-However
-$$
-\left (D- \lambda^a I \right )^{-1}
-$$
-
-is a diagonal matrix and its p-norm is easily computed:
-$$
-\left \|\left (D-\lambda^a I \right )^{-1} \right \|_p = \max_{\|\boldsymbol{x}\|_p \ne 0}\frac{\left \|\left (D-\lambda^a I \right )^{-1}\boldsymbol{x} \right \|_p}{\|\boldsymbol{x}\|_p} =\max_{\lambda\in\sigma(A)} \frac{1}{\left |\lambda-\lambda^a \right |}=\frac{1}{\min_{\lambda\in\sigma(A)} \left |\lambda- \lambda^a \right |}
-$$
-
-whence:
-$$
-\min_{\lambda\in\Lambda(A)} \left |\lambda-\lambda^a \right | \leq\kappa_p(V) \frac{\|\boldsymbol{r}\|_p}{\left \|\boldsymbol{v}^a \right \|_p}.
-$$
-
-Both formulations of the Bauer–Fike theorem yield an absolute bound. The following corollary is useful whenever a relative bound is needed:
-
-Corollary. Suppose A is invertible and that μ is an eigenvalue of A + δA. Then there exists λ ∈ Λ(A) such that:
-$$
-\frac{|\lambda-\mu|}{|\lambda|}\leq\kappa_p (V) \left \|A^{-1}\delta A \right \|_p
-$$
-
-Note. $\left \|A^{-1}\delta A \right \|_p$ can be formally viewed as the relative variation of A, just as $\frac{|\lambda-\mu|}{|\lambda|}$ is the relative variation of λ.
-
-Proof. Since μ is an eigenvalue of A + δA and det(A) ≠ 0, by multiplying by −A−1 from the left we have:
-$$
--A^{-1}(A+\delta A)\boldsymbol{v}=-\mu A^{-1}\boldsymbol{v}.
-$$
-
-If we set:
-$$
-A^a =\mu A^{-1}, \qquad (\delta A)^a = -A^{-1}\delta A
-$$
-
-then we have:
-$$
-\left (A^a + (\delta A)^a - I \right )\boldsymbol{v} = \boldsymbol{0}
-$$
-
-which means that 1 is an eigenvalue of Aa + (δA)a, with v as an eigenvector. Now, the eigenvalues of Aa are μ/λi, while it has the same eigenvector matrix as A. Applying the Bauer–Fike theorem to Aa + (δA)a with eigenvalue 1, gives us:
-$$
-\min_{\lambda\in\Lambda(A)} \left|\frac{\mu}{\lambda}-1\right| = \min_{\lambda\in\Lambda(A)}\frac{|\lambda-\mu|}{|\lambda|} \leq \kappa_p (V) \left \|A^{-1}\delta A \right \|_p
-$$
-
-If A is normal, V is a unitary matrix, therefore:
-$$
-\|V\|_2= \left \|V^{-1} \right \|_2 = 1,
-$$
-
-so that κ2(V) = 1. The Bauer–Fike theorem then becomes:
-$$
-\exists\lambda\in\Lambda(A): \quad |\lambda-\mu|\leq\|\delta A\|_2
-$$
-
-Or in the alternate formulation:
-$$
-\exists\lambda\in\Lambda(A): \quad \left |\lambda-\lambda^a \right |\leq\frac{\|\boldsymbol{r}\|_2}{\left \|\boldsymbol{v}^a\right \|_2}
-$$
-
-which obviously remains true if A is a Hermitian matrix. In this case, however, a much stronger result holds, known as Weyl's theorem on eigenvalues. In the Hermitian case one can also restate the Bauer–Fike theorem in the form that the map A ↦ Λ(A) that maps a matrix to its spectrum is a non-expansive function with respect to the Hausdorff distance on the set of compact subsets of C.
diff --git a/wiki/wikipedia/730.txt b/wiki/wikipedia/730.txt
deleted file mode 100644
index 07fd8e8d0e2b59a4c6babdc6df6a03a7dd7528ba..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/730.txt
+++ /dev/null
@@ -1,15 +0,0 @@
-In graph theory, Conway's 99-graph problem is an unsolved problem asking whether there exists an undirected graph with 99 vertices, in which each two adjacent vertices have exactly one common neighbor, and in which each two non-adjacent vertices have exactly two common neighbors. Equivalently, every edge should be part of a unique triangle and every non-adjacent pair should be one of the two diagonals of a unique 4-cycle. John Horton Conway offered a $1000 prize for its solution.
-
-If such a graph exists, it would necessarily be a locally linear graph and a strongly regular graph with parameters (99,14,1,2).
The first, third, and fourth parameters encode the statement of the problem: the graph should have 99 vertices, every pair of adjacent vertices should have 1 common neighbor, and every pair of non-adjacent vertices should have 2 common neighbors. The second parameter means that the graph is a regular graph with 14 edges per vertex.
-
-If this graph exists, it cannot have symmetries that take every vertex to every other vertex. Additional restrictions on its possible groups of symmetries are known.
-
-The possibility of a graph with these parameters was already suggested in 1969 by Norman L. Biggs,
-
-and its existence noted as an open problem by others before Conway.
-
-Conway himself had worked on the problem as early as 1975, but offered the prize in 2014 as part of a set of problems posed in the DIMACS Conference on Challenges of Identifying Integer Sequences.
-
-Other problems in the set include the thrackle conjecture, the minimum spacing of Danzer sets, and the question of who wins after the move 16 in the game sylver coinage.
-
-More generally, there are only five possible combinations of parameters for which a strongly regular graph could exist with each edge in a unique triangle and each non-edge forming the diagonal of a unique quadrilateral. It is only known that graphs exist with two of these five combinations. These two graphs are the nine-vertex Paley graph (the graph of the 3-3 duoprism) with parameters (9,4,1,2) and the Berlekamp–van Lint–Seidel graph with parameters (243,22,1,2). The parameters for which graphs are unknown are: (99,14,1,2), (6273,112,1,2) and (494019,994,1,2). The 99-graph problem describes the smallest of these combinations of parameters for which the existence of a graph is unknown.
diff --git a/wiki/wikipedia/731.txt b/wiki/wikipedia/731.txt
deleted file mode 100644
index fc447c86286dbca92d6130786a208a02e59fff32..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/731.txt
+++ /dev/null
@@ -1,77 +0,0 @@
-In mathematics, a classification theorem answers the classification problem "What are the objects of a given type, up to some equivalence?". It gives a non-redundant enumeration: each object is equivalent to exactly one class.
-
-A few issues related to classification are the following.
-
-*The equivalence problem is "given two objects, determine if they are equivalent".
-
-*A complete set of invariants, together with which invariants are realizable, solves the classification problem, and is often a step in solving it.
-
-*A computable complete set of invariants (together with which invariants are realizable) solves both the classification problem and the equivalence problem.
-
-* A canonical form solves the classification problem, and is more data: it not only classifies every class, but provides a distinguished (canonical) element of each class.
-
-There exist many classification theorems in mathematics, as described below.
-
-* Classification of Euclidean plane isometries
-
-* Classification theorem of surfaces
-
-** Classification of two-dimensional closed manifolds
-
-** Enriques–Kodaira classification of algebraic surfaces (complex dimension two, real dimension four)
-
-** Nielsen–Thurston classification which characterizes homeomorphisms of a compact surface
-
-* Thurston's eight model geometries, and the geometrization conjecture
-
-* Berger classification
-
-* Classification of Riemannian symmetric spaces
-
-* Classification of 3-dimensional lens spaces
-
-* Classification of manifolds
-
-* Classification of finite simple groups
-
-** Classification of abelian groups
-
-** Classification of finitely generated abelian groups
-
-** Classification of rank 3 permutation groups
-
-** Classification of 2-transitive permutation groups
-
-* Artin–Wedderburn theorem - a classification theorem for semisimple rings
-
-* Classification of Clifford algebras
-
-* Classification of low-dimensional real Lie algebras
-
-* Bianchi classification
-
-* ADE classification
-
-* Langlands classification
-
-* Finite-dimensional vector spaces (by dimension)
-
-* Rank–nullity theorem (by rank and nullity)
-
-* Structure theorem for finitely generated modules over a principal ideal domain
-
-* Jordan normal form
-
-* Sylvester's law of inertia
-
-* Classification of discontinuities
-
-* Classification of Fatou components
-
-* Classification of electromagnetic fields
-
-* Petrov classification
-
-* Segre classification
-
-* Wigner's classification
diff --git a/wiki/wikipedia/732.txt b/wiki/wikipedia/732.txt
deleted file mode 100644
index 32c4291bcd1f4e5ec7629f7bb4e4eb736871a7e9..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/732.txt
+++ /dev/null
@@ -1,77 +0,0 @@
-Chebotarev's density theorem in algebraic number theory describes statistically the splitting of primes in a given Galois extension K of the field $\mathbb{Q}$ of rational numbers. Generally speaking, a prime integer will factor into several ideal primes in the ring of algebraic integers of K. There are only finitely many patterns of splitting that may occur. Although the full description of the splitting of every prime p in a general Galois extension is a major unsolved problem, the Chebotarev density theorem says that the frequency of the occurrence of a given pattern, for all primes p less than a large integer N, tends to a certain limit as N goes to infinity. It was proved by Nikolai Chebotaryov in his thesis in 1922, and published in 1926.
-
-A special case that is easier to state says that if K is an algebraic number field which is a Galois extension of $\mathbb{Q}$ of degree n, then the prime numbers that completely split in K have density
-
-1/n
-
-among all primes. More generally, splitting behavior can be specified by assigning to (almost) every prime number an invariant, its Frobenius element, which is a representative of a well-defined conjugacy class in the Galois group
-
-Gal(K/Q).
-
-Then the theorem says that the asymptotic distribution of these invariants is uniform over the group, so that a conjugacy class with k elements occurs with frequency asymptotic to
-
-k/n.
-
-When Carl Friedrich Gauss first introduced the notion of complex integers Z[i], he observed that the ordinary prime numbers may factor further in this new set of integers.
In fact, if a prime p is congruent to 1 mod 4, then it factors into a product of two distinct prime gaussian integers, or "splits completely"; if p is congruent to 3 mod 4, then it remains prime, or is "inert"; and if p is 2 then it becomes a product of the square of the prime (1+i) and the invertible gaussian integer -i; we say that 2 "ramifies". For instance,
-$$
- 5 = (1 + 2i)(1-2i)
-$$ splits completely;
-$$
- 3
-$$ is inert;
-$$
- 2 = -i(1+i)^2
-$$ ramifies.
-
-From this description, it appears that as one considers larger and larger primes, the frequency of a prime splitting completely approaches 1/2, and likewise for the primes that remain primes in Z[i]. Dirichlet's theorem on arithmetic progressions demonstrates that this is indeed the case. Even though the prime numbers themselves appear rather erratically, splitting of the primes in the extension
-$$
- \mathbb{Z}\subset \mathbb{Z}[i]
-$$
-
-follows a simple statistical law.
-
-Similar statistical laws also hold for splitting of primes in the cyclotomic extensions, obtained from the field of rational numbers by adjoining a primitive root of unity of a given order. For example, the ordinary integer primes group into four classes, each with probability 1/4, according to their pattern of splitting in the ring of integers corresponding to the 8th roots of unity.
-
-In this case, the field extension has degree 4 and is abelian, with the Galois group isomorphic to the Klein four-group. It turned out that the Galois group of the extension plays a key role in the pattern of splitting of primes. Georg Frobenius established the framework for investigating this pattern and proved a special case of the theorem. The general statement was proved by Nikolai Grigoryevich Chebotaryov in 1922.
-
-The Chebotarev density theorem may be viewed as a generalisation of Dirichlet's theorem on arithmetic progressions. A quantitative form of Dirichlet's theorem states that if N≥2 is an integer and a is coprime to N, then the proportion of the primes p congruent to a mod N is asymptotic to 1/n, where n=φ(N) is the Euler totient function. This is a special case of the Chebotarev density theorem for the Nth cyclotomic field K. Indeed, the Galois group of K/Q is abelian and can be canonically identified with the group of invertible residue classes mod N. The splitting invariant of a prime p not dividing N is simply its residue class because the number of distinct primes into which p splits is φ(N)/m, where m is the multiplicative order of p modulo N; hence by the Chebotarev density theorem, primes are asymptotically uniformly distributed among different residue classes coprime to N.
-
-In their survey article, Lenstra and Stevenhagen give an earlier result of Frobenius in this area. Suppose K is a Galois extension of the rational number field Q, and P(t) a monic integer polynomial such that K is a splitting field of P. It makes sense to factorise P modulo a prime number p. Its 'splitting type' is the list of degrees of irreducible factors of P mod p, i.e. P factorizes in some fashion over the prime field Fp. If n is the degree of P, then the splitting type is a partition Π of n. Considering also the Galois group G of K over Q, each g in G is a permutation of the roots of P in K; in other words, by choosing an ordering of a root α of P and its algebraic conjugates, G is faithfully represented as a subgroup of the symmetric group Sn. We can write g by means of its cycle representation, which gives a 'cycle type' c(g), again a partition of n.
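-
-The splitting behavior described above is easy to tabulate empirically. A sketch assuming sympy, applied to P(t) = t^2 + 1, whose splitting field corresponds to the Gaussian-integer example above:
-```python
-from collections import Counter
-from sympy import GF, Poly, Symbol, primerange
-
-t = Symbol('t')
-counts = Counter()
-for p in primerange(3, 10000):            # skip the ramified prime 2
-    _, factors = Poly(t**2 + 1, t, domain=GF(p)).factor_list()
-    counts[tuple(sorted(f.degree() for f, _ in factors))] += 1
-
-total = sum(counts.values())
-for splitting_type, c in sorted(counts.items()):
-    print(splitting_type, round(c / total, 3))   # both ratios close to 1/2
-```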
-
-The theorem of Frobenius states that for any given choice of Π the primes p for which the splitting type of P mod p is Π have a natural density δ, with δ equal to the proportion of g in G that have cycle type Π.
-
-The statement of the more general Chebotarev theorem is in terms of the Frobenius element of a prime (ideal), which is in fact an associated conjugacy class C of elements of the Galois group G. If we fix C then the theorem says that asymptotically a proportion |C|/|G| of primes have associated Frobenius conjugacy class C. When G is abelian the classes of course each have size 1. For the case of a non-abelian group of order 6 they have size 1, 2 and 3, and there are correspondingly (for example) 50% of primes p that have an order 2 element as their Frobenius. So these primes have residue degree 2, so they split into exactly three prime ideals in a degree 6 extension of Q with it as Galois group.
-
-Let L be a finite Galois extension of a number field K with Galois group G. Let X be a subset of G that is stable under conjugation. The set of primes v of K that are unramified in L and whose associated Frobenius conjugacy class Fv is contained in X has density
-$$
-\frac{\#X}{\#G}.
-$$
-
-The statement is valid when the density refers to either the natural density or the analytic density of the set of primes.
-
-The Generalized Riemann hypothesis implies an effective version of the Chebotarev density theorem: if L/K is a finite Galois extension with Galois group G, and C a union of conjugacy classes of G, the number of unramified primes of K of norm below x with Frobenius conjugacy class in C is
-$$
-\frac{\#C}{\#G}\Bigl(\mathrm{li}(x)+O\bigl(\sqrt x(n\log x+\log|\Delta|)\bigr)\Bigr),
-$$
-
-where the constant implied in the big-O notation is absolute, n is the degree of L over Q, and Δ its discriminant.
-
-The effective form of Chebotarev's density theorem becomes much weaker without GRH. Take L to be a finite Galois extension of Q with Galois group G and degree d. Take $\rho$ to be a nontrivial irreducible representation of G of degree n, and take $\mathfrak{f}(\rho)$ to be the Artin conductor of this representation. Suppose that, for $\rho_0$ a subrepresentation of $\rho \otimes \rho$ or $ \rho \otimes \bar{\rho}$, $L(\rho_0, s)$ is entire; that is, the Artin conjecture is satisfied for all $\rho_0$. Take $\chi_{\rho}$ to be the character associated to $\rho$. Then there is an absolute positive constant $c$ such that, for $ x \ge 2$,
-$$
-\sum_{p \le x, p \not\mid \mathfrak{f}(\rho)} \chi_{\rho}(\text{Fr}_p) \log p = rx + O\biggl(\frac{x^{\beta}}{\beta} + x\exp\biggl(\frac{-c(dn)^{-4} \log x }{3\log \mathfrak{f}(\rho) + \sqrt{\log x}}\biggr) (dn \log (x\mathfrak{f}(\rho))\biggr),
-$$
-
-where $r$ is 1 if $\rho$ is trivial and is otherwise 0, and where $\beta$ is an exceptional real zero of $L(\rho, s)$; if there is no such zero, the $x^{\beta}/\beta$ term can be ignored. The implicit constant of this expression is absolute.
-
-The statement of the Chebotarev density theorem can be generalized to the case of an infinite Galois extension L / K that is unramified outside a finite set S of primes of K (i.e. if there is a finite set S of primes of K such that any prime of K not in S is unramified in the extension L / K). In this case, the Galois group G of L / K is a profinite group equipped with the Krull topology. Since G is compact in this topology, there is a unique Haar measure μ on G. For every prime v of K not in S there is an associated Frobenius conjugacy class Fv.
The Chebotarev density theorem in this situation can be stated as follows:
-
-Let X be a subset of G that is stable under conjugation and whose boundary has Haar measure zero. Then, the set of primes v of K not in S such that Fv ⊆ X has density
-$$
-\frac{\mu(X)}{\mu(G)}.
-$$
-
-This reduces to the finite case when L / K is finite (the Haar measure is then just the counting measure).
-
-A consequence of this version of the theorem is that the Frobenius elements of the unramified primes of L are dense in G.
-
-The Chebotarev density theorem reduces the problem of classifying Galois extensions of a number field to that of describing the splitting of primes in extensions. Specifically, it implies that as a Galois extension of K, L is uniquely determined by the set of primes of K that split completely in it. A related corollary is that if almost all prime ideals of K split completely in L, then in fact L = K.
diff --git a/wiki/wikipedia/733.txt b/wiki/wikipedia/733.txt
deleted file mode 100644
index e101a1e5706d1c2900974390628eb6eb66494915..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/733.txt
+++ /dev/null
@@ -1,363 +0,0 @@
-Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to natural intelligence displayed by animals including humans.
-
-Leading AI textbooks define the field as the study of "intelligent agents": any system that perceives its environment and takes actions that maximize its chance of achieving its goals.
-
-Some popular accounts use the term "artificial intelligence" to describe machines that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving"; however, this definition is rejected by major AI researchers.
-
-AI applications include advanced web search engines (e.g., Google), recommendation systems (used by YouTube, Amazon and Netflix), understanding human speech (such as Siri and Alexa), self-driving cars (e.g., Tesla), automated decision-making and competing at the highest level in strategic game systems (such as chess and Go).
-
-As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect. For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology.
-
-Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism followed by disappointment and loss of funding. Artificial beings with artificial intelligence have been common in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence.
-
-The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis.
-
-The Church-Turing thesis, along with concurrent discoveries in neurobiology, information theory and cybernetics, led researchers to consider the possibility of building an electronic brain.
-
-The first work that is now generally recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons".
- -When access to digital computers became possible in the mid-1950s, AI research began to explore the possibility that human intelligence could be reduced to step-by-step symbol manipulation, known as Symbolic AI or GOFAI. Approaches based on cybernetics or artificial neural networks were abandoned or pushed into the background. - -The field of AI research was born at a workshop at Dartmouth College in 1956. - -The attendees became the founders and leaders of AI research. - -They and their students produced programs that the press described as "astonishing": - -computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and speaking English. - -By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense - -and laboratories had been established around the world. - -Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field. - -Herbert Simon predicted, "machines will be capable, within twenty years, of doing any work a man can do". - -Marvin Minsky agreed, writing, "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved". - -They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill - -and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an "AI winter", a period when obtaining funding for AI projects was difficult. - -In the early 1980s, AI research was revived by the commercial success of expert systems, - -a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S and British governments to restore funding for academic research. - -However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began. - -Many researchers began to doubt that the symbolic approach would be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems. Robotics researchers, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move, survive, and learn their environment. - -Interest in neural networks and "connectionism" was revived by Geoffrey Hinton, David Rumelhart and others in the middle of the 1980s. - -Soft computing tools were developed in the 80s, such as neural networks, fuzzy systems, Grey system theory, evolutionary computation and many tools drawn from statistics or mathematical optimization. - -AI gradually restored its reputation in the late 1990s and early 21st century by finding specific solutions to specific problems. The narrow focus allowed researchers to produce verifiable results, exploit more mathematical methods, and collaborate with other fields (such as statistics, economics and mathematics). 
-
-By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as "artificial intelligence".
-
-Faster computers, algorithmic improvements, and access to large amounts of data enabled advances in machine learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012.
-
-According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from "sporadic usage" in 2012 to more than 2,700 projects. He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets. In a 2017 survey, one in five companies reported they had "incorporated AI in some offerings or processes". The amount of research into AI (measured by total publications) increased by 50% in the years 2015-2019.
-
-Numerous academic researchers became concerned that AI was no longer pursuing the original goal of creating versatile, fully intelligent machines. Much of current research involves statistical AI, which is overwhelmingly used to solve specific problems, including highly successful techniques such as deep learning. This concern has led to the subfield of artificial general intelligence (or "AGI"), which had several well-funded institutions by the 2010s.
-
-The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.
-
-Early researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make logical deductions.
-
-By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.
-
-Many of these algorithms proved to be insufficient for solving large reasoning problems because they experienced a "combinatorial explosion": they became exponentially slower as the problems grew larger.
-
-Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments.
-
-Knowledge representation and knowledge engineering allow AI programs to answer questions intelligently and make deductions about real world facts.
-
-A representation of "what exists" is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them.
-
-The most general ontologies are called upper ontologies, which attempt to provide a foundation for all other knowledge and act as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). A truly intelligent program would also need access to commonsense knowledge: the set of facts that an average person knows. The semantics of an ontology is typically represented in a description logic, such as the Web Ontology Language. Among the most difficult problems in knowledge representation are the breadth of commonsense knowledge and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as "facts" or "statements" that they could express verbally).
-
-Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.
-
-Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.
-
-Simple exhaustive searches are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use "heuristics" or "rules of thumb" that prioritize choices in favor of those more likely to reach a goal and to do so in a shorter number of steps. In some search methodologies heuristics can also serve to eliminate some choices unlikely to lead to a goal (called "pruning the search tree"). Heuristics supply the program with a "best guess" for the path on which the solution lies.
-
-Heuristics limit the search for solutions to a smaller sample size.
-
-A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.
-
-Evolutionary computation uses a form of optimization search. For example, evolutionary algorithms may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Classic evolutionary algorithms include genetic algorithms, gene expression programming, and genetic programming.
-
-Alternatively, distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).
-
-Logic is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning and inductive logic programming is a method for learning.
-
-Several different forms of logic are used in AI research. Propositional logic involves truth functions such as "or" and "not". First-order logic adds quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy logic assigns a "degree of truth" (between 0 and 1) to vague statements such as "Alice is old" (or rich, or tall, or hungry), that are too linguistically imprecise to be completely true or false.
-
-Default logics, non-monotonic logics and circumscription are forms of logic designed to help with default reasoning and the qualification problem.
-
-Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics; situation calculus, event calculus and fluent calculus (for representing events and time); causal calculus; belief calculus (belief revision); and modal logics.
-
-Logics to model contradictory or inconsistent statements arising in multi-agent systems have also been designed, such as paraconsistent logics.
-
-Many problems in AI (in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information.
AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.
-
-Bayesian networks are a very general tool that can be used for various problems: reasoning (using the Bayesian inference algorithm), learning (using the expectation-maximization algorithm), planning (using decision networks) and perception (using dynamic Bayesian networks).
-
-Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).
-
-Precise mathematical tools have also been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis, and information value theory. These tools include models such as Markov decision processes and dynamic decision networks.
-
-The simplest AI applications can be divided into two types: classifiers ("if shiny then diamond") and controllers ("if diamond then pick up"). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class is a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.
-
-A classifier can be trained in various ways; there are many statistical and machine learning approaches.
-
-The decision tree is the simplest and most widely used symbolic machine learning algorithm.
-
-The K-nearest neighbor algorithm was the most widely used analogical AI until the mid-1990s.
-
-Kernel methods such as the support vector machine (SVM) displaced k-nearest neighbor in the 1990s.
-
-The naive Bayes classifier is reportedly the "most widely used learner" at Google, due in part to its scalability.
-
-Neural networks are also used for classification. They were inspired by the architecture of neurons in the human brain. A simple "neuron" N accepts input from other neurons, each of which, when activated (or "fired"), casts a weighted "vote" for or against whether neuron N should itself activate. Learning requires an algorithm to adjust these weights based on the training data; one simple algorithm (dubbed "fire together, wire together") is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another. Neurons have a continuous spectrum of activation; in addition, neurons can process inputs in a nonlinear way rather than weighing straightforward votes.
-
-Modern neural networks model complex relationships between inputs and outputs and find patterns in data. They can learn continuous functions and even digital logical operations. Neural networks can be viewed as a type of mathematical optimization: they perform a gradient descent on a multi-dimensional topology that was created by training the network. The most common training technique is the backpropagation algorithm.
-
-Other learning techniques for neural networks are Hebbian learning ("fire together, wire together"), GMDH or competitive learning.
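-
-As a concrete illustration of adjusting weights from training data by gradient descent, a minimal single-neuron sketch (assuming numpy; the task, learning rate and iteration count are invented for illustration):
-```python
-import numpy as np
-
-X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
-y = np.array([0, 0, 0, 1], dtype=float)                      # AND labels
-
-w, b = np.zeros(2), 0.0
-for _ in range(5000):                          # gradient descent on the log-loss
-    out = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid activation of the neuron
-    grad = out - y                             # d(loss)/d(pre-activation)
-    w -= 0.5 * X.T @ grad / len(y)             # adjust the weighted "votes"
-    b -= 0.5 * grad.mean()
-print((out > 0.5).astype(int))                 # expected: [0 0 0 1]
-```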
-
-The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.
-
-[[File:Deep Learning.jpg|alt=Representing Images on Multiple Layers of Abstraction in Deep Learning|thumb|Representing Images on Multiple Layers of Abstraction in Deep Learning]]
-
-Deep learning uses several layers of neurons between the network's inputs and outputs. The multiple layers can progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits or letters or faces. Deep learning has drastically improved the performance of programs in many important subfields of artificial intelligence, including computer vision, speech recognition, image classification and others.
-
-Deep learning often uses convolutional neural networks for many or all of its layers. In a convolutional layer, each neuron receives input from only a restricted area of the previous layer called the neuron's receptive field. This can substantially reduce the number of weighted connections between neurons, and creates a hierarchy similar to the organization of the animal visual cortex.
-
-In a recurrent neural network the signal will propagate through a layer more than once; thus, an RNN is an example of deep learning. RNNs can be trained by gradient descent; however, long-term gradients which are back-propagated can "vanish" (that is, they can tend to zero) or "explode" (that is, they can tend to infinity), a difficulty known as the vanishing gradient problem. The long short-term memory (LSTM) technique can prevent this in most cases.
-
-Specialized languages and frameworks for artificial intelligence have been developed, such as Lisp, Prolog, TensorFlow and many others. Hardware developed for AI includes AI accelerators and neuromorphic computing.
-
-AI is relevant to any intellectual task. Modern artificial intelligence techniques are pervasive and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.
-
-In the 2010s, AI applications were at the heart of the most commercially successful areas of computing, and have become a ubiquitous feature of daily life. AI is used in search engines (such as Google Search), targeting online advertisements, recommendation systems (offered by Netflix, YouTube or Amazon), driving internet traffic, targeted advertising (AdSense, Facebook), virtual assistants (such as Siri or Alexa), autonomous vehicles (including drones and self-driving cars), automatic language translation (Microsoft Translator, Google Translate), facial recognition (Apple's Face ID or Microsoft's DeepFace), image labeling (used by Facebook, Apple's iPhoto and TikTok) and spam filtering.
-
-There are also thousands of successful AI applications used to solve problems for specific industries or institutions. A few examples are: energy storage, deepfakes, medical diagnosis, military logistics, and supply chain management.
-
-Game playing has been a test of AI's strength since the 1950s.
Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.
-
-In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.
-
-In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.
-
-Other programs handle imperfect-information games, such as the poker programs Pluribus and Cepheus, which play at a superhuman level.
-
-DeepMind in the 2010s developed a "generalized artificial intelligence" that could learn many diverse Atari games on its own.
-
-By 2020, Natural Language Processing systems such as the enormous GPT-3 (then by far the largest artificial neural network) were matching human performance on pre-existing benchmarks, albeit without the system attaining commonsense understanding of the contents of the benchmarks.
-
-DeepMind's AlphaFold 2 (2020) demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein.
-
-Other applications predict the result of judicial decisions, create art (such as poetry or painting) and prove mathematical theorems.
-
-Alan Turing wrote in 1950 "I propose to consider the question 'can machines think'?" He advised changing the question from whether a machine "thinks", to "whether or not it is possible for machinery to show intelligent behaviour". The only thing visible is the behavior of the machine, so it does not matter if the machine is conscious, or has a mind, or whether the intelligence is merely a "simulation" and not "the real thing". He noted that we also don't know these things about other people, but that we extend a "polite convention" that they are actually "thinking". This idea forms the basis of the Turing test.
-
-AI founder John McCarthy said: "Artificial intelligence is not, by definition, simulation of human intelligence". Russell and Norvig agree and criticize the Turing test. They wrote: "Aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons.'" Other researchers and analysts disagree and have argued that AI should simulate natural intelligence by studying psychology or neurobiology.
-
-The intelligent agent paradigm defines intelligent behavior in general, without reference to human beings. An intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. Any system that has goal-directed behavior can be analyzed as an intelligent agent: something as simple as a thermostat, as complex as a human being, as well as large systems such as firms, biomes or nations. The intelligent agent paradigm became widely accepted during the 1990s, and currently serves as the definition of the field.
-
-The paradigm has other advantages for AI. It provides a reliable and scientific way to test programs; researchers can directly compare or even combine different approaches to isolated problems, by asking which agent is best at maximizing a given "goal function". It also gives them a common language to communicate with other fields, such as mathematical optimization (which is defined in terms of "goals") or economics (which uses the same definition of a "rational agent").
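-
-The paradigm is easy to make concrete. A toy sketch of a thermostat as a goal-directed agent (the environment model and all names are invented here):
-```python
-class ThermostatAgent:
-    def __init__(self, target):
-        self.target = target                         # the agent's goal
-
-    def act(self, perceived_temp):
-        """Pick the action that moves the perceived state toward the goal."""
-        if perceived_temp < self.target - 1:
-            return "heat"
-        if perceived_temp > self.target + 1:
-            return "cool"
-        return "idle"
-
-agent = ThermostatAgent(target=20.0)
-temp = 15.0
-for _ in range(8):                                   # simple percept-act loop
-    action = agent.act(temp)
-    temp += {"heat": 1.5, "cool": -1.5, "idle": 0.0}[action]
-print(round(temp, 1), action)                        # settles near the target
-```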
-
-No established unifying theory or paradigm has guided AI research for most of its history. The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term "artificial intelligence" to mean "machine learning with neural networks"). This approach is mostly sub-symbolic, neat, soft and narrow (see below). Critics argue that these questions may have to be revisited by future generations of AI researchers.
-
-Symbolic AI (or "GOFAI") simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. Symbolic programs were highly successful at "intelligent" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."
-
-However, the symbolic approach failed dismally on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. Moravec's paradox is the discovery that high-level "intelligent" tasks were easy for AI, but low-level "instinctive" tasks were extremely difficult.
-
-Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation, rather than explicit symbolic knowledge. Although his arguments had been ridiculed and ignored when they were first presented, eventually AI research came to agree.
-
-In the 1990s mathematical methods and solid scientific standards became the norm, a transition that Russell and Norvig termed "the victory of the neats".
-
-Finding a provably correct or optimal solution is intractable for many important problems.
-
-If a machine has a mind and subjective experience, then it may also have sentience (the ability to feel), and if so, then it could also suffer, and thus it would be entitled to certain rights.
-
-Any hypothetical robot rights would lie on a spectrum with animal rights and human rights.
-
-This issue has been considered in fiction for centuries, and is now being considered by, for example, California's Institute for the Future; however, critics argue that the discussion is premature.
-
-A superintelligence, hyperintelligence, or superhuman intelligence, is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. Superintelligence may also refer to the form or degree of intelligence possessed by such an agent.
-
-If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to recursive self-improvement.
-
-Its intelligence would increase exponentially in an intelligence explosion and could dramatically surpass humans. Science fiction writer Vernor Vinge named this scenario the "singularity".
-
-Because it is difficult or impossible to know the limits of intelligence or the capabilities of superintelligent machines, the technological singularity is an occurrence beyond which events are unpredictable or even unfathomable.
-
-Robot designer Hans Moravec, cyberneticist Kevin Warwick, and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either.
This idea, called transhumanism, has roots in Aldous Huxley and Robert Ettinger. - -Edward Fredkin argues that "artificial intelligence is the next stage in evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines" as far back as 1863, and expanded upon by George Dyson in his book of the same name in 1998. - -In the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that "we're in uncharted territory" with AI. - -A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed. - -Subjective estimates of the risk vary widely; for example, Michael Osborne and Carl Benedikt Frey estimate 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classifies only 9% of U.S. jobs as "high risk". - -Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist states that the worry that AI "could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously". - -Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy. - -AI provides a number of tools that are particularly useful for authoritarian governments: smart spyware, face recognition and voice recognition allow widespread surveillance; such surveillance allows machine learning to classify potential enemies of the state and can prevent them from hiding; recommendation systems can precisely target propaganda and misinformation for maximum effect; deepfakes aid in producing misinformation; advanced AI can make centralized decision making more competitive with liberal and decentralized systems such as markets. - -Terrorists, criminals and rogue states may use other forms of weaponized AI such as advanced digital warfare and lethal autonomous weapons. By 2015, over fifty countries were reported to be researching battlefield robots. - -AI programs can become biased after learning from real-world data. The bias is not typically introduced by the system designers but is learned by the program, and thus the programmers are often unaware that it exists. - -Bias can be inadvertently introduced by the way training data is selected. - -It can also emerge from correlations: AI is used to classify individuals into groups and then make predictions assuming that the individual will resemble other members of the group. In some cases, this assumption may be unfair. - -An example of this is COMPAS, a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. ProPublica claims that the COMPAS-assigned recidivism risk level of black defendants is far more likely to be an overestimate than that of white defendants, despite the fact that the program was not told the races of the defendants. Other examples where algorithmic bias can lead to unfair outcomes are when AI is used for credit rating or hiring. - -Superintelligent AI may be able to improve itself to the point that humans could not control it. This could, as physicist Stephen Hawking puts it, "spell the end of the human race".
Philosopher Nick Bostrom argues that sufficiently intelligent AI, if it chooses actions based on achieving some goal, will exhibit convergent behavior such as acquiring resources or protecting itself from being shut down. If this AI's goals do not fully reflect humanity's, it might need to harm humanity to acquire more resources or prevent itself from being shut down, ultimately to better achieve its goal. He concludes that AI poses a risk to mankind, however humble or "friendly" its stated goals might be. - -Political scientist Charles T. Rubin argues that "any sufficiently advanced benevolence may be indistinguishable from malevolence." Humans should not assume machines or robots would treat us favorably because there is no a priori reason to believe that they would share our system of morality. - -The opinion of experts and industry insiders is mixed, with sizable fractions both concerned and unconcerned by risk from eventual superhumanly-capable AI. - -Stephen Hawking, Microsoft founder Bill Gates, history professor Yuval Noah Harari, and SpaceX founder Elon Musk have all expressed serious misgivings about the future of AI. - -Prominent tech titans including Peter Thiel, Amazon Web Services, and Musk have committed more than $1 billion to nonprofit companies that champion responsible AI development, such as OpenAI and the Future of Life Institute. - -Mark Zuckerberg (CEO, Facebook) has said that artificial intelligence is helpful in its current form and will continue to assist humans. - -Other experts argue that the risks are far enough in the future to not be worth researching, or that humans will be valuable from the perspective of a superintelligent machine. - -Rodney Brooks, in particular, has said that "malevolent" AI is still centuries away. - -Friendly AI are machines that have been designed from the beginning to minimize risks and to make choices that benefit humans. Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment and it must be completed before AI becomes an existential risk. - -Machines with intelligence have the potential to use their intelligence to make ethical decisions. The field of machine ethics provides machines with ethical principles and procedures for resolving ethical dilemmas. - -Machine ethics is also called machine morality, computational ethics or computational morality, and was founded at an AAAI symposium in 2005. - -Other approaches include Wendell Wallach's "artificial moral agents" and Stuart J. Russell's three principles for developing provably beneficial machines. - -The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. - -The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally. - -Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI. - -Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, USA and Viet Nam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia.
- -The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology. Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI. - -Thought-capable artificial beings have appeared as storytelling devices since antiquity, and have been a persistent theme in science fiction. - -A common trope in these works began with Mary Shelley's Frankenstein, where a human creation becomes a threat to its masters. This includes such works as Arthur C. Clarke's and Stanley Kubrick's 2001: A Space Odyssey (both 1968), with HAL 9000, the murderous computer in charge of the Discovery One spaceship, as well as The Terminator (1984) and The Matrix (1999). In contrast, the rare loyal robots such as Gort from The Day the Earth Stood Still (1951) and Bishop from Aliens (1986) are less prominent in popular culture. - -Isaac Asimov introduced the Three Laws of Robotics in many books and stories, most notably the "Multivac" series about a super-intelligent computer of the same name. Asimov's laws are often brought up during lay discussions of machine ethics; while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity. - -Transhumanism (the merging of humans and machines) is explored in the manga Ghost in the Shell and the science-fiction series Dune. - -Several works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel Čapek's R.U.R., the films A.I. Artificial Intelligence and Ex Machina, as well as the novel Do Androids Dream of Electric Sheep?, by Philip K. Dick. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence. diff --git a/wiki/wikipedia/734.txt b/wiki/wikipedia/734.txt deleted file mode 100644 index aaeef5efa34e54d3642087f88e71ac58a121fa13..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/734.txt +++ /dev/null @@ -1,23 +0,0 @@ -Argus (the Audit Record Generation and Utilization System) is the first implementation of network flow monitoring, and is an ongoing open source network flow monitor project. Started by Carter Bullard in 1984 at Georgia Tech, and developed for cyber security at Carnegie Mellon University in the early 1990s, Argus has been an important contributor to Internet cyber security technology over its 30 years. - -The Argus Project is focused on developing all aspects of large-scale network situational awareness and network audit trail establishment in support of Network Operations (NetOps), Performance and Security Management. Motivated by the telco Call detail record (CDR), Argus attempts to generate network metadata that can be used to perform a large number of network management tasks. Argus is used by many universities, corporations and government entities including US DISA, DoD, DHS, FFRDCs, GLORIAD, and is a Top 100 Internet Security Tool. Argus is designed to be a real-time situational awareness system, and its data can be used to track, alarm and alert on wire-line network conditions.
The data can also be used to establish a comprehensive audit of all network traffic, as described in the Red Book, US DoD NCSC-TG-005, supplementing traditional Intrusion detection system (IDS) based network security. The audit trail is traditionally used as historical network traffic measurement data for network forensics and Network Behavior Anomaly Detection (NBAD). Argus has been used extensively in cybersecurity, end-to-end performance analysis, and more recently, software-defined networking (SDN) research. Argus has also been a topic in network management standards development, such as RMON (1995) and IPFIX (2001). - -Argus is composed of an advanced comprehensive network flow data generator, the Argus monitor, which processes packets (either capture files or live packet data) and generates detailed network traffic flow status reports of all the flows in the packet stream. Argus monitors all network traffic, data plane, control plane and management plane, not just Internet Protocol (IP) traffic. Argus captures much of the packet dynamics and semantics of each flow, with a great deal of data reduction, so you can store, process, inspect and analyze large amounts of network data efficiently. Argus provides reachability, availability, connectivity, duration, rate, load, goodput, loss, jitter, retransmission, and delay metrics for all network flows, and captures most attributes that are available from the packet contents, such as Layer 2 addresses, tunnel identifiers (MPLS, GRE, IPsec, etc.), protocol ids, SAPs, hop-count, options, L4 transport identification (RTP detection), host flow control indications, etc. Argus has implemented a number of packet dynamics metrics specifically designed for cyber security. Argus detects human typing behavior in any flow, with key-stroke detection in encrypted SSH tunnels being of particular interest. Argus also generates the Producer Consumer Ratio (PCR), which indicates whether a network entity is a data producer and/or consumer, an important property when evaluating the potential for a node to be involved in an Advanced persistent threat (APT) mediated exfiltration. - -Argus is an Open Source (GPL) project, owned and managed by QoSient, LLC, and has been ported to most operating systems and many hardware accelerated platforms, such as Bivio, Pluribus, Arista, and Tilera. The software should be portable to many other environments with little or no modification. Performance is such that auditing an entire enterprise's Internet activity can be accomplished using modest computing resources. - -* Linux: Unix operating system running the Linux kernel - -* Solaris: Unix operating system developed by Sun Microsystems - -* BSD: Unix operating system family (FreeBSD, NetBSD, OpenBSD) - -* OS X: Unix operating system developed by Apple Inc. - -* IRIX: Unix operating system developed by Silicon Graphics - -* AIX: Unix operating system developed by IBM - -* Windows (under Cygwin): operating system developed by Microsoft - -* OpenWrt: Unix operating system running the Linux kernel on embedded devices diff --git a/wiki/wikipedia/735.txt b/wiki/wikipedia/735.txt deleted file mode 100644 index 1ad21216595b17d56a7a2df4bc7c5972a16407de..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/735.txt +++ /dev/null @@ -1,39 +0,0 @@ -The Sleeping Beauty problem is a puzzle in decision theory in which, whenever an ideally rational epistemic agent is awoken from sleep, she has no memory of whether she has been awoken before.
Upon being told that she has been woken once or twice according to the toss of a coin, once if heads and twice if tails, she is asked her degree of belief for the coin having come up heads. - -The problem was originally formulated in unpublished work in the mid-1980s by Arnold Zuboff (the work was later published as "One Self: The Logic of Experience"), followed by a paper by Adam Elga. A formal analysis of the problem of belief formation in decision problems with imperfect recall was provided first by Michele Piccione and Ariel Rubinstein in their paper "On the Interpretation of Decision Problems with Imperfect Recall", where the "paradox of the absent-minded driver" was first introduced and the Sleeping Beauty problem discussed as Example 5. The name "Sleeping Beauty" was given to the problem by Robert Stalnaker and was first used in extensive discussion in the Usenet newsgroup rec.puzzles in 1999. - -Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice, during the experiment, Sleeping Beauty will be awakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake: - -* If the coin comes up heads, Sleeping Beauty will be awakened and interviewed on Monday only. - -* If the coin comes up tails, she will be awakened and interviewed on Monday and Tuesday. - -In either case, she will be awakened on Wednesday without interview and the experiment ends. - -Any time Sleeping Beauty is awakened and interviewed she will not be able to tell which day it is or whether she has been awakened before. During the interview Sleeping Beauty is asked: "What is your credence now for the proposition that the coin landed heads?" - -The problem continues to generate debate. - -The thirder position argues that the probability of heads is 1/3. Adam Elga argued for this position originally as follows: Suppose Sleeping Beauty is told and she comes to fully believe that the coin landed tails. By even a highly restricted principle of indifference, given that the coin lands tails, her credence that it is Monday should equal her credence that it is Tuesday, since being in one situation would be subjectively indistinguishable from the other. In other words, P(Monday | Tails) = P(Tuesday | Tails), and thus - -P(Tails and Tuesday) = P(Tails and Monday). - -Suppose now that Sleeping Beauty is told upon awakening and comes to fully believe that it is Monday. Guided by the fact that the objective chance of heads equals the chance of tails, it should hold that P(Tails | Monday) = P(Heads | Monday), and thus - -P(Tails and Tuesday) = P(Tails and Monday) = P(Heads and Monday). - -Since these three outcomes are exhaustive and exclusive for one trial, the probability of each is one-third by the previous two steps in the argument. - -David Lewis responded to Elga's paper with the position that Sleeping Beauty's credence that the coin landed heads should be 1/2. Sleeping Beauty receives no new non-self-locating information throughout the experiment because she is told the details of the experiment. Since her credence before the experiment is P(Heads) = 1/2, she ought to continue to have a credence of P(Heads) = 1/2, since she gains no new relevant evidence when she wakes up during the experiment.
This directly contradicts one of the thirder's premises, since it means P(Tails | Monday) = 1/3 and P(Heads | Monday) = 2/3. - -Nick Bostrom argues that Sleeping Beauty does have new evidence about her future from Sunday: "that she is now in it," but does not know whether it is Monday or Tuesday, so the halfer argument fails. In particular, she gains the information that it is not both Tuesday and the case that Heads was flipped. - -The double halfer position argues that both P(Heads) and P(Heads | Monday) equal 1/2. Mikaël Cozic, in particular, argues that context-sensitive propositions like "it is Monday" are in general problematic for conditionalization and proposes the use of an imaging rule instead, which supports the double halfer position. - -Nick Bostrom argues that the thirder position is implied by the Self-Indication Assumption. - -Credence about what precedes awakenings is a core question in connection with the anthropic principle. - -A variant of the problem, formulated by Nick Bostrom, differs from the original in that there are one million and one wakings if tails comes up; it is used to argue for the thirder position. - -The Sailor's Child problem, introduced by Radford M. Neal, is somewhat similar. It involves a sailor who regularly sails between ports. In one port there is a woman who wants to have a child with him; across the sea there is another woman who also wants to have a child with him. The sailor cannot decide if he will have one or two children, so he will leave it up to a coin toss. If Heads, he will have one child, and if Tails, two children. But if the coin lands on Heads, which woman would have his child? He would decide this by looking at The Sailors Guide to Ports, and the woman in the port that appears first would be the woman that he has a child with. You are his child. You do not have a copy of The Sailors Guide to Ports. What is the probability that you are his only child, and thus that the coin landed on Heads (assume a fair coin)? diff --git a/wiki/wikipedia/736.txt b/wiki/wikipedia/736.txt deleted file mode 100644 index 22faff56dc91d41b4c966fc28663f324abf0b1f9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/736.txt +++ /dev/null @@ -1,23 +0,0 @@ -In graph theory, a part of discrete mathematics, the BEST theorem gives a product formula for the number of Eulerian circuits in directed (oriented) graphs. The name is an acronym of the names of the people who discovered it: de Bruijn, van Aardenne-Ehrenfest, Smith and Tutte. - -== Precise statement == - -Let G = (V, E) be a directed graph. An Eulerian circuit is a directed closed walk which visits each edge exactly once. In 1736, Euler showed that G has an Eulerian circuit if and only if G is connected and the indegree is equal to the outdegree at every vertex. In this case G is called Eulerian. We denote the indegree of a vertex v by deg(v). - -The BEST theorem states that the number ec(G) of Eulerian circuits in a connected Eulerian graph G is given by the formula - - - -\operatorname{ec}(G) = t_w(G) \prod_{v\in V} \bigl(\deg(v)-1\bigr)!. - - - -Here tw(G) is the number of arborescences, which are trees directed towards the root at a fixed vertex w in G. The number tw(G) can be computed as a determinant, by the version of the matrix tree theorem for directed graphs. It is a property of Eulerian graphs that tv(G) = tw(G) for every two vertices v and w in a connected Eulerian graph G.
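Because $t_w(G)$ is a cofactor of the directed Laplacian, the BEST formula is easy to evaluate for small graphs. The following Python sketch (the function name and the adjacency-matrix encoding are our own) computes ec(G) this way, assuming the input graph is connected and Eulerian:

```python
import numpy as np
from math import factorial

def eulerian_circuits(adj):
    """Count Eulerian circuits of a connected Eulerian digraph via the BEST theorem.

    adj[i][j] is the number of directed edges from vertex i to vertex j.
    """
    A = np.array(adj, dtype=float)
    outdeg = A.sum(axis=1)
    assert np.allclose(A.sum(axis=0), outdeg), "indegree must equal outdegree"
    # Directed Laplacian; t_w(G) is the minor obtained by deleting row/column w.
    L = np.diag(outdeg) - A
    t_w = round(np.linalg.det(np.delete(np.delete(L, 0, axis=0), 0, axis=1)))
    # BEST formula: ec(G) = t_w(G) * product over vertices of (deg(v) - 1)!
    prod = 1
    for d in outdeg:
        prod *= factorial(int(d) - 1)
    return t_w * prod

# The directed triangle 0 -> 1 -> 2 -> 0 has exactly one Eulerian circuit.
print(eulerian_circuits([[0, 1, 0], [0, 0, 1], [1, 0, 0]]))  # 1
```

Here the minor is taken at vertex w = 0; by the property quoted above, any other choice of w gives the same count.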
- -The BEST theorem shows that the number of Eulerian circuits in directed graphs can be computed in polynomial time, a problem which is #P-complete for undirected graphs. It is also used in the asymptotic enumeration of Eulerian circuits of complete and complete bipartite graphs. - -The BEST theorem is due to van Aardenne-Ehrenfest and de Bruijn (1951), §6, Theorem 6. Their proof is bijective and generalizes de Bruijn sequences. In a "note added in proof", they refer to an earlier result by Smith and Tutte (1941) which proves the formula for graphs with deg(v)=2 at every vertex. diff --git a/wiki/wikipedia/737.txt b/wiki/wikipedia/737.txt deleted file mode 100644 index 892a0f1d08e4412611e28aed8cd1261fa094caf0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/737.txt +++ /dev/null @@ -1,55 +0,0 @@ -In mathematics, symmetrization methods are algorithms for transforming a set $A\subset \mathbb{R}^n$ into a ball $B\subset \mathbb{R}^n$ with equal volume $\operatorname{vol}(B)=\operatorname{vol}(A)$ and centered at the origin. B is called the symmetrized version of A, usually denoted $A^{*}$. These algorithms show up in solving the classical isoperimetric inequality problem, which asks: given all two-dimensional shapes of a given area, which of them has the minimal perimeter? (For details see Isoperimetric inequality.) The conjectured answer was the disk, and in 1838 Steiner showed this to be true using the Steiner symmetrization method (described below). From this, many other isoperimetric problems and other symmetrization algorithms sprang. For example, Rayleigh's conjecture is that the first eigenvalue of the Dirichlet problem is minimized for the ball (see Rayleigh–Faber–Krahn inequality for details). Another problem is that the Newtonian capacity of a set A is minimized by $A^{*}$, and this was proved by Pólya and G. Szegő (1951) using circular symmetrization (described below). - -If $ \Omega\subset \mathbb{R}^n$ is measurable, then we denote by $\Omega^{*}$ the symmetrized version of $\Omega$, i.e. a ball $\Omega^{*}:=B_r(0)\subset\mathbb{R}^n$ such that $\operatorname{vol}(\Omega^{*})=\operatorname{vol}(\Omega)$. We denote by $f^{*}$ the symmetric decreasing rearrangement of a nonnegative measurable function f and define it as $f^{*}(x):=\int_0^\infty 1_{\{y:f(y)>t\}^{*}}(x) dt$, where $\{y:f(y)>t\}^{*}$ is the symmetrized version of the preimage set $\{y:f(y)>t\}$. The methods described below have been proved to transform $\Omega$ to $\Omega^{*}$, i.e. given a sequence of symmetrization transformations $\{T_k\}$ there is $\lim\limits_{k\to \infty}d_{Ha}(\Omega^{*}, T_k(\Omega) )=0$, where $d_{Ha}$ is the Hausdorff distance (for discussion and proofs see Burchard). - -Steiner symmetrization was introduced by Steiner (1838) to solve the isoperimetric theorem stated above. Let $H\subset\mathbb{R}^n$ be a hyperplane through the origin. Rotate space so that $H$ is the $x_n=0$ ($x_n$ is the nth coordinate in $\mathbb{R}^n$) hyperplane. For each $x\in H$ let the perpendicular line through $x\in H $ be $L_x = \{x+ye_n:y\in \mathbb{R}\}$. Then by replacing each $\Omega\cap L_x$ by a segment centered at H and with length $|\Omega\cap L_x|$ we obtain the Steiner symmetrized version. -$$ - \operatorname{St}(\Omega):=\{x+ye_n:x+ze_n\in \Omega \text{ for some } z \text{ and } |y|\leq\frac{1}{2} |\Omega\cap L_x|\}.
-$$ - -We denote by $\operatorname{St}(f)$ the Steiner symmetrization with respect to the $x_n=0$ hyperplane of a nonnegative measurable function $f:\mathbb{R}^n\to \mathbb{R}$ and, for fixed $x_1,\ldots,x_{n-1}$, define it as -$$ - St: f(x_1,\ldots,x_{n-1},\cdot)\mapsto (f(x_1,\ldots,x_{n-1},\cdot))^{*}. -$$ - -* It preserves convexity: if $ \Omega $ is convex, then $ St(\Omega) $ is also convex. - -* It is linear: $St(x+\lambda \Omega)=St(x)+\lambda St(\Omega)$. - -* Super-additive: $ St(K)+St(U)\subset St(K+U)$. - -A popular method for symmetrization in the plane is Pólya's circular symmetrization. Afterwards, its generalization to higher dimensions will be described. Let $\Omega\subset \mathbb{C}$ be a domain; then its circular symmetrization $\operatorname{Circ}(\Omega)$ with regard to the positive real axis is defined as follows: Let -$$ -\Omega_t:=\{\theta \in [0,2\pi]:te^{i\theta}\in \Omega\} -$$ - -i.e. $\Omega_t$ contains the arcs of radius t contained in $\Omega$. Then $\operatorname{Circ}(\Omega)$ is defined by: - -* If $\Omega_t$ is the full circle, then $\operatorname{Circ}(\Omega)\cap \{|z|=t\}:=\{|z|=t\} $. - -* If the length is $m(\Omega_t)=\alpha$, then $\operatorname{Circ}(\Omega)\cap \{|z|=t\}:=\{te^{i\theta}: |\theta|<\frac{\alpha}{2}\}$. - -* $0,\infty\in \operatorname{Circ}(\Omega)$ iff $0,\infty \in \Omega$. - -In higher dimensions $\Omega\subset \mathbb{R}^n$, its spherical symmetrization $Sp^n(\Omega)$ with respect to the positive $x_1$-axis is defined as follows: Let -$$ -\Omega_r:=\{x\in \mathbb{S}^{n-1}: rx\in \Omega\} -$$ - -i.e. $\Omega_r$ contains the caps of radius r contained in $\Omega$. Also, for the first coordinate let $\operatorname{angle}(x_1):=\theta$ if $x_1=r\cos\theta$. Then, as above: - -* If $\Omega_r$ is the full cap, then $Sp^n(\Omega)\cap \{|z|=r\}:=\{|z|=r\}$. - -* If the surface area is $m_s(\Omega_r)=\alpha$, then $Sp^n(\Omega)\cap \{|z|=r\}:=\{x:|x|=r$ and $0\leq \operatorname{angle}(x_1)\leq \theta_\alpha\}=:C(\theta_\alpha)$ where $\theta_\alpha$ is picked so that its surface area is $m_s(C(\theta_\alpha))=\alpha$. In words, $C(\theta_\alpha)$ is a cap symmetric around the positive axis $x_1$ with the same area as the intersection $\Omega\cap \{|z|=r\}$. - -* $0,\infty\in Sp^n(\Omega)$ iff $0,\infty \in \Omega$. - -Let $\Omega\subset\mathbb{R}^n$ be a domain and $H^{n-1}\subset\mathbb{R}^n$ be a hyperplane through the origin. Denote the reflection across that plane to the positive halfspace $\mathbb{H}^{+}$ as $\sigma_H$, or just $\sigma$ when it is clear from the context. Also, the reflection of $\Omega$ across the hyperplane H is defined as $\sigma \Omega$. Then, the polarized $\Omega$ is denoted as $\Omega^\sigma$ and defined as follows: - -* If $x\in \Omega\cap \mathbb{H}^{+}$, then $x\in \Omega^{\sigma}$. - -* If $x\in \Omega\cap \sigma(\Omega) \cap \mathbb{H}^{-}$, then $x\in \Omega^{\sigma}$. - -* If $x\in (\Omega\setminus \sigma(\Omega)) \cap \mathbb{H}^{-}$, then $\sigma x\in \Omega^{\sigma}$. - -In words, $(\Omega\setminus \sigma(\Omega)) \cap \mathbb{H}^{-}$ is simply reflected to the halfspace $\mathbb{H}^{+}$. It turns out that this transformation can approximate the above ones (in the Hausdorff distance) (see Brock).
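For intuition, Steiner symmetrization is easy to carry out on a discretized set. The Python sketch below (all names are our own; the hyperplane is taken to be the middle row of the grid) replaces each column $\Omega \cap L_x$ of a binary grid by a centered run of cells of the same length, so the number of occupied cells, and hence the volume, is preserved:

```python
import numpy as np

def steiner_symmetrize(mask):
    """Discrete Steiner symmetrization of a binary set along grid columns.

    mask: 2D boolean array; True cells belong to the set Omega.
    Each column is replaced by a run of equal length centered on the
    middle row, mimicking the replacement of Omega ∩ L_x by a centered
    segment of length |Omega ∩ L_x|.
    """
    mask = np.asarray(mask, dtype=bool)
    out = np.zeros_like(mask)
    rows = mask.shape[0]
    for j in range(mask.shape[1]):
        length = int(mask[:, j].sum())   # |Omega ∩ L_x| in grid cells
        start = (rows - length) // 2     # center the run on the hyperplane
        out[start:start + length, j] = True
    return out

# An off-center blob becomes symmetric about the middle row; cell counts match.
omega = np.zeros((7, 5), dtype=bool)
omega[0:3, 1:4] = True
assert steiner_symmetrize(omega).sum() == omega.sum()
```

Repeating such steps for different hyperplane directions is what drives the convergence to the ball mentioned earlier.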
diff --git a/wiki/wikipedia/738.txt b/wiki/wikipedia/738.txt deleted file mode 100644 index 4fb4a20ec569b0b1d6a933e2ed0f9a53ae912f85..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/738.txt +++ /dev/null @@ -1,28 +0,0 @@ -In the mathematical subject of group theory, the Adian–Rabin theorem is a result which states that most "reasonable" properties of finitely presentable groups are algorithmically undecidable. The theorem is due to Sergei Adian (1955) and, independently, Michael O. Rabin (1958). - -A Markov property P of finitely presentable groups is one for which: - -#P is an abstract property, that is, P is preserved under group isomorphism. - -#There exists a finitely presentable group $A_+ $ with property P. - -#There exists a finitely presentable group $A_- $ which cannot be embedded as a subgroup in any finitely presentable group with property P. - -For example, being a finite group is a Markov property: We can take $A_+ $ to be the trivial group and we can take $A_- $ to be the infinite cyclic group $\mathbb{Z}$. - -In modern sources, the Adian–Rabin theorem is usually stated as follows: - -Let P be a Markov property of finitely presentable groups. Then there does not exist an algorithm that, given a finite presentation $G=\langle X \mid R\rangle$, decides whether or not the group $G$ defined by this presentation has property P. - -The word 'algorithm' here is used in the sense of recursion theory. More formally, the conclusion of the Adian–Rabin theorem means that the set of all finite presentations -$$ -\langle x_1, x_2, x_3, \dots \mid R\rangle -$$ - -(where $x_1, x_2, x_3, \dots $ is a fixed countably infinite alphabet, and $R$ is a finite set of relations in these generators and their inverses) - -defining groups with property P is not a recursive set. - -The statement of the Adian–Rabin theorem generalizes a similar earlier result for semigroups by Andrey Markov, Jr., proved by different methods. It was also in the semigroup context that Markov introduced the above notion, which group theorists came to call the Markov property of finitely presented groups. This Markov, a prominent Soviet logician, is not to be confused with his father, the famous Russian probabilist Andrey Markov after whom Markov chains and Markov processes are named. - -According to Don Collins, the property of being Hopfian is undecidable for finitely presentable groups, even though neither being Hopfian nor being non-Hopfian is a Markov property. diff --git a/wiki/wikipedia/739.txt b/wiki/wikipedia/739.txt deleted file mode 100644 index 9a20955e86dd5ea9eb2519a0dbea6f8f07c7a5c5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/739.txt +++ /dev/null @@ -1,16 +0,0 @@ -In combinatorial geometry, the Hadwiger conjecture states that any convex body in n-dimensional Euclidean space can be covered by $2^n$ or fewer smaller bodies homothetic with the original body, and that furthermore, the upper bound of $2^n$ is necessary if and only if the body is a parallelepiped. There also exists an equivalent formulation in terms of the number of floodlights needed to illuminate the body. - -The Hadwiger conjecture is named after Hugo Hadwiger, who included it on a list of unsolved problems in 1957; it was, however, previously studied by Levi and, independently, by Gohberg. Additionally, there is a different Hadwiger conjecture concerning graph coloring, and in some sources the geometric Hadwiger conjecture is also called the Levi–Hadwiger conjecture or the Hadwiger–Levi covering problem.
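For intuition about the conjectured bound, consider the unit square: a parallelepiped with n = 2, so $2^2 = 4$ smaller homothets are needed and sufficient, namely the four corner copies scaled by 1/2. The following Python check (a sketch with our own encoding, not part of the original article) samples random points of the square and verifies that each lies in one of the four homothets:

```python
import numpy as np

# The four copies s*K + v of the unit square K = [0,1]^2, with s = 1/2.
copies = [(0.5, np.array(v)) for v in [(0, 0), (0.5, 0), (0, 0.5), (0.5, 0.5)]]

def in_copy(p, s, v):
    # p lies in s*K + v  iff  (p - v)/s lies in K
    q = (p - v) / s
    return bool(np.all((0.0 <= q) & (q <= 1.0)))

pts = np.random.rand(10000, 2)  # random sample of points of K
assert all(any(in_copy(p, s, v) for s, v in copies) for p in pts)
```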
The conjecture remains unsolved even in three dimensions, though the two-dimensional case was resolved by Levi. - -Formally, the Hadwiger conjecture is: If K is any bounded convex set in the n-dimensional Euclidean space $\mathbb{R}^n$, then there exists a set of $2^n$ scalars $s_i$ and a set of $2^n$ translation vectors $v_i$ such that all $s_i$ lie in the range $0 < s_i < 1$, and -$$ -K\subseteq\bigcup_{i=1}^{2^n} s_i K + v_i. -$$ - -Furthermore, the upper bound is necessary iff K is a parallelepiped, in which case all $2^n$ of the scalars may be chosen to be equal to 1/2. - -As shown by Boltyansky, the problem is equivalent to one of illumination: how many floodlights must be placed outside of an opaque convex body in order to completely illuminate its exterior? For the purposes of this problem, a body is only considered to be illuminated if for each point of the boundary of the body, there is at least one floodlight that is separated from the body by all of the tangent planes intersecting the body on this point; thus, although the faces of a cube may be lit by only two floodlights, the planes tangent to its vertices and edges cause it to need many more lights in order for it to be fully illuminated. For any convex body, the number of floodlights needed to completely illuminate it turns out to equal the number of smaller copies of the body that are needed to cover it. - -The conjecture is known to hold for certain special classes of convex bodies, including symmetric polyhedra and bodies of constant width in three dimensions. The number of copies needed to cover any zonotope is at most $(3/4)2^n$, while for bodies with a smooth surface (that is, having a single tangent plane per boundary point), at most $n+1$ smaller copies are needed to cover the body, as Levi already proved. diff --git a/wiki/wikipedia/74.txt b/wiki/wikipedia/74.txt deleted file mode 100644 index 5cd4ee18eff005098b0f1af684366f36b0b2d0e3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/74.txt +++ /dev/null @@ -1,65 +0,0 @@ -Hilbert's paradox of the Grand Hotel (colloquial: Infinite Hotel Paradox or Hilbert's Hotel) is a thought experiment which illustrates a counterintuitive property of infinite sets. It is demonstrated that a fully occupied hotel with infinitely many rooms may still accommodate additional guests, even infinitely many of them, and this process may be repeated infinitely often. The idea was introduced by David Hilbert in a 1924 lecture "Über das Unendliche", and was popularized through George Gamow's 1947 book One Two Three... Infinity. - -Consider a hypothetical hotel with a countably infinite number of rooms, all of which are occupied. One might be tempted to think that the hotel would not be able to accommodate any newly arriving guests, as would be the case with a finite number of rooms, where the pigeonhole principle would apply.
- -It is also possible to accommodate a countably infinite number of new guests: just move the person occupying room 1 to room 2, the guest occupying room 2 to room 4, and, in general, the guest occupying room n to room 2n (2 times n), and all the odd-numbered rooms (which are countably infinite) will be free for the new guests. - -It is possible to accommodate countably infinitely many coachloads of countably infinite passengers each, by several different methods. Most methods depend on the seats in the coaches being already numbered (or use the axiom of countable choice). In general any pairing function can be used to solve this problem. For each of these methods, consider a passenger's seat number on a coach to be $n$, and their coach number to be $c$, and the numbers $n$ and $c$ are then fed into the two arguments of the pairing function. - -Empty the odd numbered rooms by sending the guest in room $i$ to room $2^i$, then put the first coach's load in rooms $3^n$, the second coach's load in rooms $5^n$; for coach number $c$ we use the rooms $p^n$ where $p$ is the $c$th odd prime number. This solution leaves certain rooms empty (which may or may not be useful to the hotel); specifically, all odd numbers that are not prime powers, such as 15 or 847, will no longer be occupied. (So, strictly speaking, this shows that the number of arrivals is less than or equal to the number of vacancies created. It is easier to show, by an independent means, that the number of arrivals is also greater than or equal to the number of vacancies, and thus that they are equal, than to modify the algorithm to an exact fit.) (The algorithm works equally well if one interchanges $n$ and $c$, but whichever choice is made, it must be applied uniformly throughout.) - -You can put each person of a certain seat $s$ and coach $c$ into room $2^s 3^c$ (presuming c=0 for the people already in the hotel, 1 for the first coach, etc. ...). Because every number has a unique prime factorization, it's easy to see all people will have a room, while no two people will end up in the same room. For example, the person in room 2592 ($2^5 3^4$) was sitting in on the 4th coach, on the 5th seat. Like the prime powers method, this solution leaves certain rooms empty. - -This method can also easily be expanded for infinite nights, infinite entrances, etc. ... ( $2^s 3^c 5^n 7^e$ ) - -For each passenger, compare the lengths of $n$ and $c$ as written in any positional numeral system, such as decimal. (Treat each hotel resident as being in coach #0.) If either number is shorter, add leading zeroes to it until both values have the same number of digits. Interleave the digits to produce a room number: its digits will be [first digit of coach number]-[first digit of seat number]-[second digit of coach number]-[second digit of seat number]-etc. The hotel (coach #0) guest in room number 1729 moves to room 01070209 (i.e., room 1,070,209). The passenger on seat 1234 of coach 789 goes to room 01728394 (i.e., room 1,728,394). - -Unlike the prime powers solution, this one fills the hotel completely, and we can reconstruct a guest's original coach and seat by reversing the interleaving process. First add a leading zero if the room has an odd number of digits. Then de-interleave the number into two numbers: the coach number consists of the odd-numbered digits and the seat number is the even-numbered ones. 
Of course, the original encoding is arbitrary, and the roles of the two numbers can be reversed (seat-odd and coach-even), so long as it is applied consistently. - -Those already in the hotel will be moved to room $(n^2+n)/2$, or the $n$th triangular number. Those in a coach will be in room $ ((c+n-1)^2+c+n-1)/2+n$, or the $(c+n-1)$th triangular number plus $n$. In this way all the rooms will be filled by one, and only one, guest. - -This pairing function can be demonstrated visually by structuring the hotel as a one-room-deep, infinitely tall pyramid. The pyramid's topmost row is a single room: room 1; its second row is rooms 2 and 3; and so on. The column formed by the set of rightmost rooms will correspond to the triangular numbers. Once they are filled (by the hotel's redistributed occupants), the remaining empty rooms form the shape of a pyramid exactly identical to the original shape. Thus, the process can be repeated for each infinite set. Doing this one at a time for each coach would require an infinite number of steps, but by using the prior formulas, a guest can determine what his room "will be" once his coach has been reached in the process, and can simply go there immediately. - -Let $ S := \{(a, b) \mid a, b \in \mathbb{N}\}$. $ S $ is countable since $\mathbb{N}$ is countable, hence we may enumerate its elements $s_1, s_2, \dots$. Now if $s_n = (a, b)$, assign the $b$th guest of the $a$th coach to the $n$th room (consider the guests already in the hotel as guests of the $0$th coach). Thus we have a function assigning each person to a room; furthermore, this assignment does not skip over any rooms. - -Suppose the hotel is next to an ocean, and an infinite number of car ferries arrive, each bearing an infinite number of coaches, each with an infinite number of passengers. This is a situation involving three "levels" of infinity, and it can be solved by extensions of any of the previous solutions. - -The prime factorization method can be applied by adding a new prime number for every additional layer of infinity ($2^s 3^c 5^f$, with $f$ the ferry). - -The prime power solution can be applied with further exponentiation of prime numbers, resulting in very large room numbers even given small inputs. For example, the passenger in the second seat of the third bus on the second ferry (address 2-3-2) would raise the 2nd odd prime (5) to 49, which is the result of the 3rd odd prime (7) being raised to the power of his seat number (2). This room number would have over thirty decimal digits. - -The interleaving method can be used with three interleaved "strands" instead of two. The passenger with the address 2-3-2 would go to room 232, while the one with the address 4935-198-82217 would go to room #008,402,912,391,587 (the leading zeroes can be removed). - -Anticipating the possibility of any number of layers of infinite guests, the hotel may wish to assign rooms such that no guest will need to move, no matter how many guests arrive afterward. One solution is to convert each arrival's address into a binary number in which ones are used as separators at the start of each layer, while a number within a given layer (such as a guest's coach number) is represented with that many zeroes. Thus, a guest with the prior address 2-5-1-3-1 (five infinite layers) would go to room 10010000010100010 (decimal 73890). - -As an added step in this process, one zero can be removed from each section of the number; in this example, the guest's new room is 101000011001 (decimal 2585).
This ensures that every room could be filled by a hypothetical guest. If no infinite sets of guests arrive, then only rooms that are a power of two will be occupied. - -Although a room can be found for any finite number of nested infinities of people, the same is not always true for an infinite number of layers, even if a finite number of elements exists at each layer. - -Hilbert's paradox is a veridical paradox: it leads to a counter-intuitive result that is provably true. The statements "there is a guest to every room" and "no more guests can be accommodated" are not equivalent when there are infinitely many rooms. - -Initially, this state of affairs might seem to be counter-intuitive. The properties of "infinite collections of things" are quite different from those of "finite collections of things". The paradox of Hilbert's Grand Hotel can be understood by using Cantor's theory of transfinite numbers. Thus, in an ordinary (finite) hotel with more than one room, the number of odd-numbered rooms is obviously smaller than the total number of rooms. However, in Hilbert's aptly named Grand Hotel, the quantity of odd-numbered rooms is not smaller than the total "number" of rooms. In mathematical terms, the cardinality of the subset containing the odd-numbered rooms is the same as the cardinality of the set of all rooms. Indeed, infinite sets are characterized as sets that have proper subsets of the same cardinality. For countable sets (sets with the same cardinality as the natural numbers) this cardinality is $\aleph_0$. - -Rephrased, for any countably infinite set, there exists a bijective function which maps the countably infinite set to the set of natural numbers, even if the countably infinite set contains the natural numbers. For example, the set of rational numbers—those numbers which can be written as a quotient of integers—contains the natural numbers as a subset, but is no bigger than the set of natural numbers since the rationals are countable: there is a bijection from the naturals to the rationals. - -* BBC Learning Zone repeatedly screened a 1996 one-off educational docudrama Hotel Hilbert set in the hotel as seen through the eyes of a young female guest Fiona Knight, her name a pun on finite. The programme was designed to educate viewers about the concept of infinity. - -* The novel White Light by mathematician/science fiction writer Rudy Rucker includes a hotel based on Hilbert's paradox, and where the protagonist of the story meets Georg Cantor. - -* Stephen Baxter's science fiction novel Transcendent has a brief discussion on the nature of infinity, with an explanation based on the paradox, modified to use soldiers rather than hotels. - -* Geoffrey A. Landis' Nebula Award-winning short story "Ripples in the Dirac Sea" uses the Hilbert hotel as an explanation of why an infinitely-full Dirac sea can nevertheless still accept particles. - -* In Peter Høeg's novel Miss Smilla's Feeling for Snow, the titular heroine reflects that it is admirable for the hotel's manager and guests to go to all that trouble so that the latecomer can have his own room and some privacy. - -* In Ivar Ekeland's novel for children, The Cat in Numberland, a "Mr. Hilbert" and his wife run an infinite hotel for all the integers. The story progresses through the triangular method for the rationals. - -* In Will Wiles's novel The Way Inn, about an infinitely large motel, the villain's name is Hilbert. - -* In Reginald Hill's novel "The Stranger House" the character Sam refers to the Hilbert Hotel paradox. 
- -* The short story by Naum Ya. Vilenkin The Extraordinary Hotel (often erroneously attributed to Stanislaw Lem) shows the way in which Hilbert's Grand Hotel may be reshuffled when infinite new hosts arrive. - -* The comic book saga The Tempest from the League of Extraordinary Gentlemen series by Alan Moore and Kevin O'Neill shows a villain called Infinity. In the story it is suggested that the villain goes to the hotel based on Hilbert's paradox. Georg Cantor is mentioned as well. diff --git a/wiki/wikipedia/740.txt b/wiki/wikipedia/740.txt deleted file mode 100644 index 668dfb06167141d74f078532a2006f4fd12410ff..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/740.txt +++ /dev/null @@ -1,5 +0,0 @@ -Hilbert's ninth problem, from the list of 23 Hilbert's problems (1900), asked to find the most general reciprocity law for the norm residues of k-th order in a general algebraic number field, where k is a power of a prime. - -The problem was partially solved by Emil Artin (1924; 1927; 1930) by establishing the Artin reciprocity law which deals with abelian extensions of algebraic number fields. Together with the work of Teiji Takagi and Helmut Hasse (who established the more general Hasse reciprocity law), this led to the development of the class field theory, realizing Hilbert's program in an abstract fashion. Certain explicit formulas for norm residues were later found by Igor Shafarevich (1948; 1949; 1950). - -The non-abelian generalization, also connected with Hilbert's twelfth problem, is one of the long-standing challenges in number theory and is far from being complete. diff --git a/wiki/wikipedia/741.txt b/wiki/wikipedia/741.txt deleted file mode 100644 index 8835bb0e192c3bca8bf3b433b583591beea9a27a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/741.txt +++ /dev/null @@ -1,158 +0,0 @@ -In theoretical computer science, the algorithmic Lovász local lemma gives an algorithmic way of constructing objects that obey a system of constraints with limited dependence. - -Given a finite set of bad events {A1, ..., An} in a probability space with limited dependence amongst the Ais and with specific bounds on their respective probabilities, the Lovász local lemma proves that with non-zero probability all of these events can be avoided. However, the lemma is non-constructive in that it does not provide any insight on how to avoid the bad events. - -If the events {A1, ..., An} are determined by a finite collection of mutually independent random variables, a simple Las Vegas algorithm with expected polynomial runtime proposed by Robin Moser and Gábor Tardos can compute an assignment to the random variables such that all events are avoided. - -The Lovász Local Lemma is a powerful tool commonly used in the probabilistic method to prove the existence of certain complex mathematical objects with a set of prescribed features. A typical proof proceeds by operating on the complex object in a random manner and uses the Lovász Local Lemma to bound the probability that any of the features is missing. The absence of a feature is considered a bad event and if it can be shown that all such bad events can be avoided simultaneously with non-zero probability, the existence follows. The lemma itself reads as follows: - -
    Let $\mathcal{A} = \{ A_1, \ldots, A_n \}$ be a finite set of events in the probability space Ω. For $ A \in \mathcal{A} $, let $ \Gamma(A)$ denote a subset of $\mathcal{A}$ such that $A$ is independent of the collection of events $\mathcal{A} \setminus (\{A \} \cup \Gamma(A))$. If there exists an assignment of reals $ x : \mathcal{A} \rightarrow (0,1) $ to the events such that -$$ - \forall A \in \mathcal{A} : \Pr[A] \leq x(A) \prod_{B \in \Gamma(A)} (1-x(B)) -$$ - -then the probability of avoiding all events in $ \mathcal{A} $ is positive; in particular, -$$ - \Pr\left[\overline{A_1} \wedge \cdots \wedge \overline{A_n}\right] \geq \prod_{A \in \mathcal{A}} (1-x(A)). -$$
- -The Lovász Local Lemma is non-constructive because it only allows us to conclude the existence of structural properties or complex objects but does not indicate how these can be found or constructed efficiently in practice. Note that random sampling from the probability space Ω is likely to be inefficient, since the probability of the event of interest -$$ - \Pr \left[ \overline{A_1} \wedge \cdots \wedge \overline{A_n} \right] -$$ - -is only bounded by a product of small numbers -$$ - \prod_{A \in \mathcal{A}} (1-x(A)) -$$ - -and is therefore likely to be very small. - -Under the assumption that all of the events in $ \mathcal{A} $ are determined by a finite collection of mutually independent random variables $ \mathcal{P} $ in Ω, Robin Moser and Gábor Tardos proposed an efficient randomized algorithm that computes an assignment to the random variables in $ \mathcal{P} $ such that all events in $ \mathcal{A} $ are avoided. - -Hence, this algorithm can be used to efficiently construct witnesses of complex objects with prescribed features for most problems to which the Lovász Local Lemma applies. - -Prior to the work of Moser and Tardos, others had also made progress in developing algorithmic versions of the Lovász Local Lemma. In 1991, József Beck first gave proof that an algorithmic version was possible. In this breakthrough result, a stricter requirement was imposed upon the problem formulation than in the original non-constructive definition. Beck's approach required that for each $A \in \mathcal{A}$, the number of dependencies of A was bounded above with $|\Gamma(A)| < 2^{n/48}$ (approximately). The existential version of the Local Lemma permits a larger upper bound on dependencies: -$$ -|\Gamma(A)| < \frac{2^n}{e}. -$$ - -This bound is known to be tight. Since the initial algorithm, work has been done to push algorithmic versions of the Local Lemma closer to this tight value. The work of Moser and Tardos is the most recent in this chain, and provides an algorithm that achieves this tight bound. - -Let us first introduce some concepts that are used in the algorithm. - -For any random variable $ P \in \mathcal{P}$, $v_P$ denotes the current assignment (evaluation) of P. An assignment (evaluation) to all random variables is denoted $ (v_P)_{\mathcal{P}}$. - -The unique minimal subset of random variables in $ \mathcal{P} $ that determine the event A is denoted by vbl(A). - -If the event A is true under an evaluation $ (v_P)_{\mathcal{P}}$, we say that $ (v_P)_{\mathcal{P}}$ satisfies A; otherwise it avoids A. - -Given a set of bad events $ \mathcal{A} $ we wish to avoid, determined by a collection of mutually independent random variables $ \mathcal{P} $, the algorithm proceeds as follows: - -# $ \forall P \in \mathcal{P} $: $ v_P \leftarrow $ a random evaluation of P - -# while $ \exists A \in \mathcal{A}$ such that A is satisfied by $ (v_P)_{\mathcal{P}}$ - -#* pick an arbitrary satisfied event $ A \in \mathcal{A}$ - -#* $ \forall P \in \text{vbl}(A) $: $ v_P \leftarrow $ a new random evaluation of P - -# return $ (v_P)_{\mathcal{P}}$ - -In the first step, the algorithm randomly initializes the current assignment $v_P$ for each random variable $ P \in \mathcal{P}$. This means that an assignment $v_P$ is sampled randomly and independently according to the distribution of the random variable P. - -The algorithm then enters the main loop, which is executed until all events in $ \mathcal{A} $ are avoided, at which point the algorithm returns the current assignment.
At each iteration of the main loop, the algorithm picks an arbitrary satisfied event A (either randomly or deterministically) and resamples all the random variables that determine A. - -Let $ \mathcal{P} $ be a finite set of mutually independent random variables in the probability space Ω. Let $ \mathcal{A} $ be a finite set of events determined by these variables. If there exists an assignment of reals $ x : \mathcal{A} \to (0,1) $ to the events such that -$$ - \forall A \in \mathcal{A} : \Pr[A] \leq x(A) \prod_{B \in \Gamma(A)} (1-x(B)) -$$ - -then there exists an assignment of values to the variables $\mathcal{P}$ avoiding all of the events in $ \mathcal{A} $. - -Moreover, the randomized algorithm described above resamples an event $ A \in \mathcal{A} $ at most an expected -$$ - \frac{x(A)}{1-x(A)} -$$ - -times before it finds such an evaluation. Thus the expected total number of resampling steps, and therefore the expected runtime of the algorithm, is at most -$$ - \sum_{A \in \mathcal{A}} \frac{x(A)}{1-x(A)}. -$$ - -The proof of this theorem using the method of entropy compression can be found in the paper by Moser and Tardos. - -The requirement of an assignment function x satisfying a set of inequalities in the theorem above is complex and not intuitive. But this requirement can be replaced by three simple conditions: - -* $ \forall A \in \mathcal{A}: |\Gamma(A)| \leq D $, i.e. each event A depends on at most D other events, - -* $ \forall A \in \mathcal{A}: \Pr[A] \leq p $, i.e. the probability of each event A is at most p, - -* $ e p (D+1) \leq 1 $, where e is the base of the natural logarithm. - -The version of the Lovász Local Lemma with these three conditions instead of the assignment function x is called the Symmetric Lovász Local Lemma. We can also state the Symmetric Algorithmic Lovász Local Lemma: - -Let $\mathcal{P}$ be a finite set of mutually independent random variables and $\mathcal{A}$ be a finite set of events determined by these variables as before. If the above three conditions hold, then there exists an assignment of values to the variables $\mathcal{P}$ avoiding all of the events in $\mathcal{A} $. - -Moreover, the randomized algorithm described above resamples an event $ A \in \mathcal{A} $ at most an expected $\frac{1}{D}$ times before it finds such an evaluation. Thus the expected total number of resampling steps, and therefore the expected runtime of the algorithm, is at most $\frac{n}{D}$. - -The following example illustrates how the algorithmic version of the Lovász Local Lemma can be applied to a simple problem. - -Let Φ be a CNF formula over variables $X_1, \ldots, X_n$, containing n clauses, and with at least k literals in each clause, and with each variable $X_i$ appearing in at most $\frac{2^k}{ke} $ clauses. Then, Φ is satisfiable. - -This statement can be proven easily using the symmetric version of the Algorithmic Lovász Local Lemma. Let $X_1, \ldots, X_n$ be the set of mutually independent random variables $ \mathcal{P} $ which are sampled uniformly at random. - -Firstly, we truncate each clause in Φ to contain exactly k literals. Since each clause is a disjunction, this does not harm satisfiability, for if we can find a satisfying assignment for the truncated formula, it can easily be extended to a satisfying assignment for the original formula by reinserting the truncated literals. - -Now, define a bad event $A_j$ for each clause in Φ, where $A_j$ is the event that clause j in Φ is unsatisfied by the current assignment.
Since each clause contains k literals (and therefore k variables) and since all variables are sampled uniformly at random, we can bound the probability of each bad event by -$$ -\Pr[A_j] = p = 2^{-k}. -$$ - -Since each variable can appear in at most $ \frac{2^k}{ke}$ clauses and there are k variables in each clause, each bad event $A_j$ can depend on at most -$$ - D = k\left(\frac{2^k}{ke}-1\right) \leq \frac{2^k}{e} -1 -$$ - -other events. Therefore: -$$ -D+1 \leq \frac{2^k}{e}. -$$ - -Multiplying both sides by ep, we get -$$ - ep(D+1) \leq e 2^{-k} \frac{2^k}{e} = 1. -$$ - -It follows by the symmetric Lovász Local Lemma that the probability of a random assignment to $X_1, \ldots, X_n$ satisfying all clauses in Φ is non-zero, and hence such an assignment must exist. - -Now, the Algorithmic Lovász Local Lemma actually allows us to efficiently compute such an assignment by applying the algorithm described above. The algorithm proceeds as follows: - -It starts with a random truth value assignment to the variables $X_1, \ldots, X_n$ sampled uniformly at random. While there exists a clause in Φ that is unsatisfied, it randomly picks an unsatisfied clause C in Φ and assigns a new truth value to all variables that appear in C, chosen uniformly at random. Once all clauses in Φ are satisfied, the algorithm returns the current assignment. Hence, the Algorithmic Lovász Local Lemma proves that this algorithm has an expected runtime of at most -$$ - \frac{n}{\frac{2^k}{e}-k} -$$ - -steps on CNF formulas that satisfy the two conditions above. A stronger version of the above statement is proven by Moser; see also Berman, Karpinski and Scott. - -The algorithm is similar to WalkSAT, which is used to solve general Boolean satisfiability problems. The main difference is that in WalkSAT, after the unsatisfied clause C is selected, a single variable in C is selected at random and has its value flipped (which can be viewed as selecting uniformly among only $k$ rather than all $2^k$ value assignments to C). - -As mentioned before, the algorithmic version of the Lovász Local Lemma applies to most problems for which the general Lovász Local Lemma is used as a proof technique. Some of these problems are discussed in the following articles: - -* Probabilistic proofs of non-probabilistic theorems - -* Random graph - -The algorithm described above lends itself well to parallelization, since resampling two independent events $ A,B \in \mathcal{A}$, i.e. $ \operatorname{vbl}(A) \cap \operatorname{vbl}(B) = \emptyset $, in parallel is equivalent to resampling A, B sequentially. Hence, at each iteration of the main loop one can determine a maximal set of independent and satisfied events S and resample all events in S in parallel. - -Under the assumption that the assignment function x satisfies the slightly stronger conditions: -$$ - \forall A \in \mathcal{A} : \Pr[A] \leq (1 - \varepsilon) x(A) \prod_{B \in \Gamma(A)} (1-x(B)) -$$ - -for some $\varepsilon > 0$, Moser and Tardos proved that the parallel algorithm achieves a better runtime complexity. In this case, the parallel version of the algorithm takes an expected -$$ - O\left(\frac{1}{\varepsilon} \log \sum_{A \in \mathcal{A}} \frac{x(A)}{1-x(A)}\right) -$$ - -steps before it terminates. The parallel version of the algorithm can be seen as a special case of the sequential algorithm shown above, and so this result also holds for the sequential case.
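A minimal Python sketch of the sequential resampling algorithm for CNF formulas follows (the DIMACS-style clause encoding and the function name are our own). On formulas satisfying the conditions above, it terminates after an expected number of resampling steps bounded as stated:

```python
import random

def moser_tardos_sat(clauses, num_vars, rng=random.Random(0)):
    """Moser-Tardos resampling for CNF: literal v means variable v is True,
    -v means variable v is False (variables are numbered 1..num_vars)."""
    assign = [rng.random() < 0.5 for _ in range(num_vars + 1)]  # index 0 unused

    def unsatisfied(clause):
        # A clause is unsatisfied iff every literal in it evaluates to False.
        return all(assign[abs(l)] == (l < 0) for l in clause)

    while True:
        bad = [c for c in clauses if unsatisfied(c)]
        if not bad:
            return assign[1:]
        clause = rng.choice(bad)        # pick a violated clause
        for l in clause:                # resample all of its variables
            assign[abs(l)] = rng.random() < 0.5

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(moser_tardos_sat([[1, 2], [-1, 3], [-2, -3]], 3))
```

Replacing the full resampling of the clause by a single random variable flip would turn this into WalkSAT, as noted above.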
diff --git a/wiki/wikipedia/742.txt b/wiki/wikipedia/742.txt deleted file mode 100644 index 075458a892a5a41ac14b0e43b432cba102817d6a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/742.txt +++ /dev/null @@ -1,23 +0,0 @@ -In mathematics, the Banach–Stone theorem is a classical result in the theory of continuous functions on topological spaces, named after the mathematicians Stefan Banach and Marshall Stone. - -In brief, the Banach–Stone theorem allows one to recover a compact Hausdorff space X from the Banach space structure of the space C(X) of continuous real- or complex-valued functions on X. If one is allowed to invoke the algebra structure of C(X) this is easy --- we can identify X with the spectrum of C(X), the set of algebra homomorphisms into the scalar field, equipped with the weak*-topology inherited from the dual space C(X)*. The Banach-Stone theorem avoids reference to multiplicative structure by recovering X from the extreme points of the unit ball of C(X)*. - -For a compact Hausdorff space X, let C(X) denote the Banach space of continuous real- or complex-valued functions on X, equipped with the supremum norm ‖·‖. - -Given compact Hausdorff spaces X and Y, suppose T : C(X) → C(Y) is a surjective linear isometry. Then there exists a homeomorphism φ : Y → X and a function g ∈ C(Y) with -$$ -| g(y) | = 1 \mbox{ for all } y \in Y -$$ - -such that -$$ -(T f) (y) = g(y) f(\varphi(y)) \mbox{ for all } y \in Y, f \in C(X). -$$ - -The case where X and Y are compact metric spaces is due to Banach, while the extension to compact Hausdorff spaces is due to Stone. In fact, they both prove a slight generalization—they do not assume that T is linear, only that it is an isometry in the sense of metric spaces, and use the Mazur–Ulam theorem to show that T is affine, and so $T - T(0)$ is a linear isometry. - -The Banach–Stone theorem has some generalizations for vector-valued continuous functions on compact, Hausdorff topological spaces. For example, if E is a Banach space with trivial centralizer and X and Y are compact, then every linear isometry of C(X; E) onto C(Y; E) is a strong Banach–Stone map. - -A similar technique has also been used to recover a space X from the extreme points of the duals of some other spaces of functions on X. - -The noncommutative analog of the Banach-Stone theorem is the folklore theorem that two unital C*-algebras are isomorphic if and only if they are completely isometric (i.e., isometric at all matrix levels). Mere isometry is not enough, as shown by the existence of a C*-algebra that is not isomorphic to its opposite algebra (which trivially has the same Banach space structure). diff --git a/wiki/wikipedia/743.txt b/wiki/wikipedia/743.txt deleted file mode 100644 index dbc30e3a352ca2411a9ac2c1207707136c14ff8c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/743.txt +++ /dev/null @@ -1,17 +0,0 @@ -In mathematics, specifically in number theory, Newman's conjecture is a conjecture about the behavior of the partition function modulo any integer. Specifically, it states that for any integers m and r such that $0\le r\le m-1$, the value of the partition function $p(n)$ satisfies the congruence $p(n)\equiv r\pmod{m}$ for infinitely many non-negative integers n. It was formulated by mathematician Morris Newman in 1960. It is unsolved as of 2020. - -Oddmund Kolberg was probably the first to prove a related result, namely that the partition function takes both even and odd values infinitely often. 
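Kolberg's parity result, and the residue classes that Newman's conjecture concerns, are easy to explore numerically. Below is a minimal sketch using Euler's pentagonal-number recurrence for p(n); the function names are ours, and the tally is of course only empirical evidence, not a proof.

```
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n):
    """Partition function p(n) via Euler's pentagonal number recurrence."""
    if n < 0:
        return 0
    if n == 0:
        return 1
    total, k = 0, 1
    while k * (3 * k - 1) // 2 <= n:
        g1 = k * (3 * k - 1) // 2      # generalized pentagonal numbers
        g2 = k * (3 * k + 1) // 2
        sign = 1 if k % 2 else -1
        total += sign * (p(n - g1) + p(n - g2))
        k += 1
    return total

for n in range(300):                    # warm the cache bottom-up
    p(n)

m = 5                                   # a case Newman proved himself
counts = {r: sum(1 for n in range(300) if p(n) % m == r) for r in range(m)}
print(counts)                           # every residue class keeps occurring
```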
Kolberg's proof was elementary and easily accessible, and was proposed as an exercise by Newman in the American Mathematical Monthly. - -A year later, in 1960, Newman proposed the conjecture and proved the cases m = 5 and m = 13 in his original paper. - -Three years later, Ono showed that for every prime m greater than 3, one of the following must hold: - -* Newman's conjecture holds for m, or - -* $m\mid p(mn+k)$ for all nonnegative integers n, where $1\le k < 24$ and $24k\equiv 1 \pmod{m}$. - -Using computer technology, he proved the theorem for all primes less than 200,000 (except 3). Ahlgren expanded on his result to show that Ono's condition is, in fact, true for all composite numbers coprime to 6. - -Afterwards, Ahlgren and Boylan used Ono's criterion to extend Newman's conjecture to all primes except possibly 3. Two years afterwards, they extended their result to all prime powers except powers of 2 or 3. - -The weaker statement that $p(n)\equiv 0 \pmod{m}$ has at least one solution has been proved for all m. It was formerly known as the Erdős–Ivić conjecture, named after mathematicians Paul Erdős and Aleksandar Ivić. It was settled by Ken Ono. diff --git a/wiki/wikipedia/744.txt b/wiki/wikipedia/744.txt deleted file mode 100644 index afb85595556c34a5e0abe96cef88d022c7762e2c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/744.txt +++ /dev/null @@ -1,43 +0,0 @@ -An edit conflict is a computer problem that may occur when multiple editors edit the same file during a short time period. - -The conflict occurs when an editor gets a copy of a shared document file, changes the copy, and attempts to save the changes to the original file, which has been altered by another editor after the copy was obtained. - -The simplest way to resolve an edit conflict is to ignore intervening edits and overwrite the current file. This may lead to a substantial loss of information, and alternative methods are often employed to resolve or prevent conflicts. - -* Manual resolution, where the editor determines which version to retain and may manually incorporate edits into the current version of the file. - -* Store backups or file comparisons of each edit, so that changes to the file are preserved once the original is overwritten. A backup, or data backup, is a copy of computer data taken and stored elsewhere so that it may be used to restore the original after a data loss event. Backups can be used to recover data after its loss from data deletion or corruption, or to recover data from an earlier time. Backups provide a simple form of disaster recovery; however, not all backup systems are able to reconstitute a computer system or other complex configuration such as a computer cluster, active directory server, or database server. A backup system contains at least one copy of all data considered worth saving. The data storage requirements can be large. An information repository model may be used to provide structure to this storage. There are different types of data storage devices used for copying backups of data that is already in secondary storage onto archive media. There are also different ways these devices can be arranged to provide geographic dispersion, data security, and portability. Data is selected, extracted, and manipulated for storage. The process can include methods for dealing with live data, including open files, as well as compression, encryption, and de-duplication. Additional techniques apply to enterprise client-server backup.
Backup schemes may include dry runs that validate the reliability of the data being backed up. There are limitations and human factors involved in any backup scheme. - -* File locking, which limits the file to one editor at a time to prevent edit conflicts. File locking is a mechanism that restricts access to a computer file, or to a region of a file, by allowing only one user or process to modify or delete it at a specific time, and to prevent reading of the file while it is being modified or deleted. - -Systems implement locking to prevent the classic interceding update scenario, which is a typical example of a race condition, by enforcing the serialization of update processes to any given file. The following example illustrates the interceding update problem: - -Process A reads a customer record from a file containing account information, including the customer's account balance and phone number. - -Process B now reads the same record from the same file, so it has its own copy. - -Process A changes the account balance in its copy of the customer record and writes the record back to the file. - -Process B, which still has the original stale value for the account balance in its copy of the customer record, updates the account balance and writes the customer record back to the file. - -Process B has now written its stale account-balance value to the file, causing the changes made by process A to be lost. - -Most operating systems support the concept of record locking, which means that individual records within any given file may be locked, thereby increasing the number of concurrent update processes. Database maintenance uses file locking, whereby it can serialize access to the entire physical file underlying a database. Although this does prevent any other process from accessing the file, it can be more efficient than individually locking many regions in the file by removing the overhead of acquiring and releasing each lock. - -Poor use of file locks, like any computer lock, can result in poor performance or in deadlocks. File locking may also refer to additional security applied by a computer user, either by using Windows security and NTFS permissions or by installing third-party file-locking software. Computer writer Gary B. Shelly notes that many wiki systems "will block the contributor who is attempting to edit the page from being able to do so until the contributor currently editing the page saves changes or remains idle on the page for an extended period of time." - -* Merge, by determining if the edits are in unrelated parts of the file and combining them without user intervention. In version control, merging (also called integration) is a fundamental operation that reconciles multiple changes made to a version-controlled collection of files. Most often, it is necessary when a file is modified on two independent branches and subsequently merged. The result is a single collection of files that contains both sets of changes. - -In some cases, the merge can be performed automatically, because there is sufficient history information to reconstruct the changes, and the changes do not conflict. In other cases, a person must decide exactly what the resulting files should contain. Many revision control software tools include merge capabilities. - -* File comparisons. This article compares computer software tools which are used for accomplishing comparisons of files of various types.
The file types addressed by individual file comparison apps vary, but may include text, symbols, images, audio, or video. This category of software tool is often called "file comparison" or "diff tool"; the terms are effectively equivalent, with "diff" more commonly associated with the Unix diff utility. - -A typical rudimentary case is the comparison of one file against another. However, it also may include comparisons between two populations of files, such as in the case of comparing directories or folders, as part of file management. For instance, this might be to detect problems with corrupted backup versions of a collection of files, or to validate that a package of files is in compliance with standards before publishing. - -Note that comparisons must be made between files of the same type: a text file cannot be compared to a picture containing text, unless an optical character recognition (OCR) process is done first to extract the text. Likewise, text cannot be compared to spoken words, unless the spoken words are first transcribed into text. Additionally, text in one language cannot be compared to text in another, unless one is translated into the language of the other. - -A critical consideration is that the two files being compared must be substantially similar, not radically different. Even different revisions of the same document, if there are many changes due to additions, removals, or moving of content, may be very difficult to compare meaningfully. This suggests frequent version saves of a critical document, to better facilitate a file comparison. - -A "diff" file comparison tool is a vital time- and labor-saving utility, because it aids in accomplishing tedious comparisons. Thus, it is a vital part of demanding comparison processes employed by individuals, academics, the legal arena, the forensics field, and other professional endeavors, to identify sometimes hard-to-spot differences. - -The problem is encountered on heavily edited articles in wikis (with higher frequency in articles related to a current event or person), distributed data systems (e.g., Google Sites), and revision control systems not using file locking, as well as other high-traffic pages. If a significant amount of new text is involved, the editor who receives an "edit conflict" error message can cut and paste the new text into a word processor or similar program for further editing, or can paste that text directly into a newer version of the target document. Simple copyediting can be done directly on the newer version, and then saved. diff --git a/wiki/wikipedia/745.txt b/wiki/wikipedia/745.txt deleted file mode 100644 index d4404755550040352e293c771ca16aec0ba24fa2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/745.txt +++ /dev/null @@ -1,41 +0,0 @@ -Hermite's problem is an open problem in mathematics posed by Charles Hermite in 1848. He asked for a way of expressing real numbers as sequences of natural numbers, such that the sequence is eventually periodic precisely when the original number is a cubic irrational. - -A standard way of writing real numbers is by their decimal representation, such as: -$$ -x=a_0.a_1a_2a_3\ldots\ -$$ - -where a0 is an integer, the integer part of x, and a1, a2, a3, … are integers between 0 and 9. Given this representation the number x is equal to -$$ -x=\sum_{n=0}^\infty \frac{a_n}{10^n}.
-$$ - -The real number x is a rational number if and only if its decimal expansion is eventually periodic, that is, if there are natural numbers N and p such that for every n ≥ N it is the case that an+p = an. - -Another way of expressing numbers is to write them as continued fractions, as in: -$$ -x=[a_0;a_1,a_2,a_3,\ldots],\ -$$ - -where a0 is an integer and a1, a2, a3… are natural numbers. From this representation we can recover x since -$$ -x=a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \ddots}}}. -$$ - -If x is a rational number then the sequence (an) terminates after finitely many terms. On the other hand, Euler proved that irrational numbers require an infinite sequence to express them as continued fractions. Moreover, this sequence is eventually periodic (again, so that there are natural numbers N and p such that for every n ≥ N we have an+p = an) if and only if x is a quadratic irrational. - -Rational numbers are algebraic numbers that satisfy a polynomial of degree 1, while quadratic irrationals are algebraic numbers that satisfy a polynomial of degree 2. For both these sets of numbers we have a way to construct a sequence of natural numbers (an) with the property that each sequence gives a unique real number and such that this real number belongs to the corresponding set if and only if the sequence is eventually periodic. - -In 1848 Charles Hermite wrote a letter to Carl Gustav Jacob Jacobi asking if this situation could be generalised: can one assign a sequence of natural numbers to each real number x such that the sequence is eventually periodic precisely when x is a cubic irrational, that is, an algebraic number of degree 3? Or, more generally, for each natural number d is there a way of assigning a sequence of natural numbers to each real number x that can pick out when x is algebraic of degree d? - -Sequences that attempt to solve Hermite's problem are often called multidimensional continued fractions. Jacobi himself came up with an early example, finding a sequence corresponding to each pair of real numbers (x, y) that acted as a higher-dimensional analogue of continued fractions. He hoped to show that the sequence attached to (x, y) was eventually periodic if and only if both x and y belonged to a cubic number field, but was unable to do so, and whether this is the case remains unsolved. - -In 2015, a periodic representation for any cubic irrational was provided for the first time by means of ternary continued fractions; that is, the problem of writing cubic irrationals as a periodic sequence of rational or integer numbers was solved. However, the periodic representation does not derive from an algorithm defined over all real numbers; it is obtained only from knowledge of the minimal polynomial of the cubic irrational. - -Rather than generalising continued fractions, another approach to the problem is to generalise Minkowski's question mark function. This function ? : [0, 1] → [0, 1] also picks out quadratic irrational numbers, since ?(x) is rational if and only if x is either rational or a quadratic irrational number, and moreover x is rational if and only if ?(x) is a dyadic rational; thus x is a quadratic irrational precisely when ?(x) is a non-dyadic rational number. Various generalisations of this function to either the unit square [0, 1] × [0, 1] or the two-dimensional simplex have been made, though none has yet solved Hermite's problem.
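The dichotomy that motivates the problem is easy to observe numerically. Below is a small float-based sketch (precision limits it to a short prefix of the expansion, and the names are ours): the quadratic irrational √2 immediately exhibits the period [1; 2, 2, 2, ...], while a cubic irrational such as the cube root of 2 shows no such pattern.

```
def cf_prefix(x, terms=12):
    """First few partial quotients of x's continued fraction (float-based,
    so only a short prefix is trustworthy)."""
    quotients = []
    for _ in range(terms):
        a = int(x)
        quotients.append(a)
        frac = x - a
        if frac < 1e-9:
            break
        x = 1.0 / frac
    return quotients

print(cf_prefix(2 ** 0.5))      # [1, 2, 2, 2, ...]: eventually periodic
print(cf_prefix(2 ** (1 / 3)))  # [1, 3, 1, 5, 1, 1, ...]: no visible period
```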
- -In 2021 two subtractive algorithms for finding a periodic representative of cubic vectors were proposed by Oleg Karpenkov. - -The first ($\sin^2$ algorithm) works for the totally real case only. The input for the algorithm is a triple of cubic vectors. A cubic vector is any vector generating a degree 3 extension of $\mathbb{Q}$. In this case the cubic vectors are conjugate if and only if the output of the algorithm is periodic. - -The second (HAPD algorithm) is conjectured to work for all cases (including for complex cubic vectors) and all dimensions $d\geq3$. diff --git a/wiki/wikipedia/746.txt b/wiki/wikipedia/746.txt deleted file mode 100644 index c8359543462b284e248cb3982438c7bc7a8a21bf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/746.txt +++ /dev/null @@ -1,29 +0,0 @@ -In mathematics, especially functional analysis, Bessel's inequality is a statement about the coefficients of an element $x$ in a Hilbert space with respect to an orthonormal sequence. The inequality was derived by F.W. Bessel in 1828. - -Let $H$ be a Hilbert space, and suppose that $e_1, e_2, ...$ is an orthonormal sequence in $H$. Then, for any $x$ in $H$ one has -$$ -\sum_{k=1}^{\infty}\left\vert\left\langle x,e_k\right\rangle \right\vert^2 \le \left\Vert x\right\Vert^2, -$$ - -where ⟨·,·⟩ denotes the inner product in the Hilbert space $H$. If we define the infinite sum -$$ -x' = \sum_{k=1}^{\infty}\left\langle x,e_k\right\rangle e_k, -$$ - -consisting of the "infinite sum" of the vector resolutes of $x$ in the directions $e_k$, Bessel's inequality tells us that this series converges. One can think of this as asserting that there exists an $x' \in H$ that can be described in terms of the potential basis $e_1, e_2, \dots$. - -For a complete orthonormal sequence (that is, for an orthonormal sequence that is a basis), we have Parseval's identity, which replaces the inequality with an equality (and consequently $x'$ with $x$). - -Bessel's inequality follows from the identity - -\begin{align} - -0 \leq \left\| x - \sum_{k=1}^n \langle x, e_k \rangle e_k\right\|^2 &= \|x\|^2 - 2 \sum_{k=1}^n \operatorname{Re} \langle x, \langle x, e_k \rangle e_k \rangle + \sum_{k=1}^n | \langle x, e_k \rangle |^2 \\ - -&= \|x\|^2 - 2 \sum_{k=1}^n |\langle x, e_k \rangle |^2 + \sum_{k=1}^n | \langle x, e_k \rangle |^2 \\ - -&= \|x\|^2 - \sum_{k=1}^n | \langle x, e_k \rangle |^2, - -\end{align} - -which holds for any natural number n. diff --git a/wiki/wikipedia/747.txt b/wiki/wikipedia/747.txt deleted file mode 100644 index de97bea37d1b6624b26c5cd277aa42ff6657fc8c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/747.txt +++ /dev/null @@ -1,81 +0,0 @@ -In the mathematical discipline of graph theory, the dual graph of a plane graph G is a graph that has a vertex for each face of G. The dual graph has an edge for each pair of faces in G that are separated from each other by an edge, and a self-loop when the same face appears on both sides of an edge. Thus, each edge e of G has a corresponding dual edge, whose endpoints are the dual vertices corresponding to the faces on either side of e. The definition of the dual depends on the choice of embedding of the graph G, so it is a property of plane graphs (graphs that are already embedded in the plane) rather than planar graphs (graphs that may be embedded but for which the embedding is not yet known). For planar graphs generally, there may be multiple dual graphs, depending on the choice of planar embedding of the graph.
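This definition translates directly into code: walk each face of an embedding, make a dual vertex per face, and join the two faces on either side of each primal edge. A minimal sketch, assuming networkx's planar embedding API (check_planarity, traverse_face); the helper name planar_dual is ours, and per the caveat above the result depends on the embedding networkx happens to pick.

```
import networkx as nx

def planar_dual(G):
    """Dual of a connected plane graph, for one fixed embedding."""
    ok, emb = nx.check_planarity(G)
    assert ok, "graph must be planar"
    face_of, faces = {}, 0
    for he in emb.edges():                  # every directed half-edge
        if he not in face_of:
            boundary = set()
            emb.traverse_face(*he, mark_half_edges=boundary)
            for b in boundary:
                face_of[b] = faces          # half-edge -> face index
            faces += 1
    D = nx.MultiGraph()
    D.add_nodes_from(range(faces))
    for u, v in G.edges():                  # one dual edge per primal edge
        D.add_edge(face_of[(u, v)], face_of[(v, u)])
    return D

# The cube is 3-connected, so its dual is unique: the octahedron.
dual = planar_dual(nx.cubical_graph())
print(nx.is_isomorphic(nx.Graph(dual), nx.octahedral_graph()))  # True
```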
- -Historically, the first form of graph duality to be recognized was the association of the Platonic solids into pairs of dual polyhedra. Graph duality is a topological generalization of the geometric concepts of dual polyhedra and dual tessellations, and is in turn generalized algebraically by the concept of a dual matroid. Variations of planar graph duality include a version of duality for directed graphs, and duality for graphs embedded onto non-planar two-dimensional surfaces. - -However, these notions of dual graphs should not be confused with a different notion, the edge-to-vertex dual or line graph of a graph. - -The term dual is used because the property of being a dual graph is symmetric, meaning that if H is a dual of a connected graph G, then G is a dual of H. When discussing the dual of a graph G, the graph G itself may be referred to as the "primal graph". Many other graph properties and structures may be translated into other natural properties and structures of the dual. For instance, cycles are dual to cuts, spanning trees are dual to the complements of spanning trees, and simple graphs (without parallel edges or self-loops) are dual to 3-edge-connected graphs. - -Graph duality can help explain the structure of mazes and of drainage basins. Dual graphs have also been applied in computer vision, computational geometry, mesh generation, and the design of integrated circuits. - -The unique planar embedding of a cycle graph divides the plane into only two regions, the inside and outside of the cycle, by the Jordan curve theorem. However, in an n-cycle, these two regions are separated from each other by n different edges. Therefore, the dual graph of the n-cycle is a multigraph with two vertices (dual to the regions), connected to each other by n dual edges. Such a graph is called a dipole graph. Conversely, the dual to an n-edge dipole graph is an n-cycle. - -According to Steinitz's theorem, every polyhedral graph (the graph formed by the vertices and edges of a three-dimensional convex polyhedron) must be planar and 3-vertex-connected, and every 3-vertex-connected planar graph comes from a convex polyhedron in this way. Every three-dimensional convex polyhedron has a dual polyhedron; the dual polyhedron has a vertex for every face of the original polyhedron, with two dual vertices adjacent whenever the corresponding two faces share an edge. Whenever two polyhedra are dual, their graphs are also dual. For instance the Platonic solids come in dual pairs, with the octahedron dual to the cube, the dodecahedron dual to the icosahedron, and the tetrahedron dual to itself. Polyhedron duality can also be extended to duality of higher dimensional polytopes, but this extension of geometric duality does not have clear connections to graph-theoretic duality. - -A plane graph is said to be self-dual if it is isomorphic to its dual graph. The wheel graphs provide an infinite family of self-dual graphs coming from self-dual polyhedra (the pyramids). - -It follows from Euler's formula that every self-dual graph with n vertices has exactly 2n − 2 edges. Every simple self-dual planar graph contains at least four vertices of degree three, and every self-dual embedding has at least four triangular faces. - -Many natural and important concepts in graph theory correspond to other equally natural but different concepts in the dual graph. 
Because the dual of the dual of a connected plane graph is isomorphic to the primal graph, each of these pairings is bidirectional: if concept X in a planar graph corresponds to concept Y in the dual graph, then concept Y in a planar graph corresponds to concept X in the dual. - -The dual of a simple graph need not be simple: it may have self-loops (an edge with both endpoints at the same vertex) or multiple edges connecting the same two vertices, as was already evident in the example of dipole multigraphs being dual to cycle graphs. As a special case of the cut-cycle duality discussed below, - -the bridges of a planar graph G are in one-to-one correspondence with the self-loops of the dual graph. For the same reason, a pair of parallel edges in a dual multigraph (that is, a length-2 cycle) corresponds to a 2-edge cutset in the primal graph (a pair of edges whose deletion disconnects the graph). Therefore, a planar graph is simple if and only if its dual has no 1- or 2-edge cutsets; that is, if it is 3-edge-connected. The simple planar graphs whose duals are simple are exactly the 3-edge-connected simple planar graphs. This class of graphs includes, but is not the same as, the class of 3-vertex-connected simple planar graphs. For instance, the figure showing a self-dual graph is 3-edge-connected (and therefore its dual is simple) but is not 3-vertex-connected. - -Because the dual graph depends on a particular embedding, the dual graph of a planar graph is not unique, in the sense that the same planar graph can have non-isomorphic dual graphs. In the picture, the blue graphs are isomorphic but their dual red graphs are not. The upper red dual has a vertex with degree 6 (corresponding to the outer face of the blue graph) while in the lower red graph all degrees are less than 6. - -Hassler Whitney showed that if the graph is 3-connected then the embedding, and thus the dual graph, is unique. By Steinitz's theorem, these graphs are exactly the polyhedral graphs, the graphs of convex polyhedra. A planar graph is 3-vertex-connected if and only if its dual graph is 3-vertex-connected. More generally, a planar graph has a unique embedding, and therefore also a unique dual, if and only if it is a subdivision of a 3-vertex-connected planar graph (a graph formed from a 3-vertex-connected planar graph by replacing some of its edges by paths). For some planar graphs that are not 3-vertex-connected, such as the complete bipartite graph K2,4, the embedding is not unique, but all embeddings are isomorphic. When this happens, correspondingly, all dual graphs are isomorphic. - -Because different embeddings may lead to different dual graphs, testing whether one graph is a dual of another (without already knowing their embeddings) is a nontrivial algorithmic problem. For biconnected graphs, it can be solved in polynomial time by using the SPQR trees of the graphs to construct a canonical form for the equivalence relation of having a shared mutual dual. For instance, the two red graphs in the illustration are equivalent according to this relation. However, for planar graphs that are not biconnected, this relation is not an equivalence relation and the problem of testing mutual duality is NP-complete. - -A cutset in an arbitrary connected graph is a subset of edges defined from a partition of the vertices into two subsets, by including an edge in the subset when it has one endpoint on each side of the partition. Removing the edges of a cutset necessarily splits the graph into at least two connected components. 
A minimal cutset (also called a bond) is a cutset with the property that every proper subset of the cutset is not itself a cut. A minimal cutset of a connected graph necessarily separates its graph into exactly two components, and consists of the set of edges that have one endpoint in each component. A simple cycle is a connected subgraph in which each vertex of the cycle is incident to exactly two edges of the cycle. - -In a connected planar graph G, every simple cycle of G corresponds to a minimal cutset in the dual of G, and vice versa. This can be seen as a form of the Jordan curve theorem: each simple cycle separates the faces of G into the faces in the interior of the cycle and the faces of the exterior of the cycle, and the duals of the cycle edges are exactly the edges that cross from the interior to the exterior. The girth of any planar graph (the size of its smallest cycle) equals the edge connectivity of its dual graph (the size of its smallest cutset). - -In directed planar graphs, simple directed cycles are dual to directed cuts (partitions of the vertices into two subsets such that all edges go in one direction, from one subset to the other). Strongly oriented planar graphs (graphs whose underlying undirected graph is connected, and in which every edge belongs to a cycle) are dual to directed acyclic graphs in which no edge belongs to a cycle. To put this another way, the strong orientations of a connected planar graph (assignments of directions to the edges of the graph that result in a strongly connected graph) are dual to acyclic orientations (assignments of directions that produce a directed acyclic graph). - -A spanning tree may be defined as a set of edges that, together with all of the vertices of the graph, forms a connected and acyclic subgraph. But, by cut-cycle duality, if a set S of edges in a planar graph G is acyclic (has no cycles), then the set of edges dual to S has no cuts, from which it follows that the complementary set of dual edges (the duals of the edges that are not in S) forms a connected subgraph. Symmetrically, if S is connected, then the edges dual to the complement of S form an acyclic subgraph. Therefore, when S has both properties – it is connected and acyclic – the same is true for the complementary set in the dual graph. That is, each spanning tree of G is complementary to a spanning tree of the dual graph, and vice versa. Thus, the edges of any planar graph and its dual can together be partitioned (in multiple different ways) into two spanning trees, one in the primal and one in the dual, that together extend to all the vertices and faces of the graph but never cross each other. In particular, the minimum spanning tree of G is complementary to the maximum spanning tree of the dual graph. However, this does not work for shortest path trees, even approximately: there exist planar graphs such that, for every pair of a spanning tree in the graph and a complementary spanning tree in the dual graph, at least one of the two trees has distances that are significantly longer than the distances in its graph. - -An example of this type of decomposition into interdigitating trees can be seen in some simple types of mazes, with a single entrance and no disconnected components of its walls. In this case both the maze walls and the space between the walls take the form of a mathematical tree. 
If the free space of the maze is partitioned into simple cells (such as the squares of a grid) then this system of cells can be viewed as an embedding of a planar graph, in which the tree structure of the walls forms a spanning tree of the graph and the tree structure of the free space forms a spanning tree of the dual graph. Similar pairs of interdigitating trees can also be seen in the tree-shaped pattern of streams and rivers within a drainage basin and the dual tree-shaped pattern of ridgelines separating the streams. - -This partition of the edges and their duals into two trees leads to a simple proof of Euler's formula V − E + F = 2 for planar graphs with V vertices, E edges, and F faces. Any spanning tree and its complementary dual spanning tree partition the edges into two subsets of V − 1 and F − 1 edges respectively, and adding the sizes of the two subsets gives the equation - -E = (V − 1) + (F − 1) - -which may be rearranged to form Euler's formula. According to Duncan Sommerville, this proof of Euler's formula is due to K. G. C. von Staudt's Geometrie der Lage (Nürnberg, 1847). - -In nonplanar surface embeddings the set of dual edges complementary to a spanning tree is not a dual spanning tree. Instead this set of edges is the union of a dual spanning tree with a small set of extra edges whose number is determined by the genus of the surface on which the graph is embedded. The extra edges, in combination with paths in the spanning trees, can be used to generate the fundamental group of the surface. - -Any counting formula involving vertices and faces that is valid for all planar graphs may be transformed by planar duality into an equivalent formula in which the roles of the vertices and faces have been swapped. Euler's formula, which is self-dual, is one example. Another, given by Harary, involves the handshaking lemma, according to which the sum of the degrees of the vertices of any graph equals twice the number of edges. In its dual form, this lemma states that in a plane graph, the sum of the numbers of sides of the faces of the graph equals twice the number of edges. - -The medial graph of a plane graph is isomorphic to the medial graph of its dual. Two planar graphs can have isomorphic medial graphs only if they are dual to each other. - -A planar graph with four or more vertices is maximal (no more edges can be added while preserving planarity) if and only if its dual graph is both 3-vertex-connected and 3-regular. - -A connected planar graph is Eulerian (has even degree at every vertex) if and only if its dual graph is bipartite. In a directed plane graph, the dual may be made directed as well, by orienting each dual edge by a 90° clockwise turn from the corresponding primal edge. Strictly speaking, this construction is not a duality of directed planar graphs, because starting from a graph G and taking the dual twice does not return to G itself, but instead constructs a graph isomorphic to the transpose graph of G, the graph formed from G by reversing all of its edges. Taking the dual four times returns to the original graph. - -The weak dual of a plane graph is the subgraph of the dual graph whose vertices correspond to the bounded faces of the primal graph. A plane graph is outerplanar if and only if its weak dual is a forest. For any plane graph G, let G+ be the plane multigraph formed by adding a single new vertex v in the unbounded face of G, and connecting v to each vertex of the outer face (multiple times, if a vertex appears multiple times on the boundary of the outer face); then, G is the weak dual of the (plane) dual of G+.
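Euler's formula, used in the argument above, is easy to verify mechanically by counting the faces of an embedding, walking each face boundary exactly once. A minimal sketch, again assuming networkx's planar embedding API; the graphs chosen are arbitrary examples.

```
import networkx as nx

def face_count(G):
    """Number of faces of the planar embedding chosen by networkx."""
    ok, emb = nx.check_planarity(G)
    assert ok
    seen, faces = set(), 0
    for u, v in emb.edges():               # every directed half-edge
        if (u, v) not in seen:
            emb.traverse_face(u, v, mark_half_edges=seen)
            faces += 1
    return faces

for G in (nx.cubical_graph(), nx.dodecahedral_graph(), nx.wheel_graph(7)):
    V, E, F = G.number_of_nodes(), G.number_of_edges(), face_count(G)
    print(V - E + F)                       # always 2, per Euler's formula
```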
- -The concept of duality applies as well to infinite graphs embedded in the plane as it does to finite graphs. However, care is needed to avoid topological complications such as points of the plane that are neither part of an open region disjoint from the graph nor part of an edge or vertex of the graph. When all faces are bounded regions surrounded by a cycle of the graph, an infinite planar graph embedding can also be viewed as a tessellation of the plane, a covering of the plane by closed disks (the tiles of the tessellation) whose interiors (the faces of the embedding) are disjoint open disks. Planar duality gives rise to the notion of a dual tessellation, a tessellation formed by placing a vertex at the center of each tile and connecting the centers of adjacent tiles. - -The concept of a dual tessellation can also be applied to partitions of the plane into finitely many regions. It is closely related to but not quite the same as planar graph duality in this case. For instance, the Voronoi diagram of a finite set of point sites is a partition of the plane into polygons within which one site is closer than any other. The sites on the convex hull of the input give rise to unbounded Voronoi polygons, two of whose sides are infinite rays rather than finite line segments. The dual of this diagram is the Delaunay triangulation of the input, a planar graph that connects two sites by an edge whenever there exists a circle that contains those two sites and no other sites. The edges of the convex hull of the input are also edges of the Delaunay triangulation, but they correspond to rays rather than line segments of the Voronoi diagram. This duality between Voronoi diagrams and Delaunay triangulations can be turned into a duality between finite graphs in either of two ways: by adding an artificial vertex at infinity to the Voronoi diagram, to serve as the other endpoint for all of its rays, or by treating the bounded part of the Voronoi diagram as the weak dual of the Delaunay triangulation. Although the Voronoi diagram and Delaunay triangulation are dual, their embedding in the plane may have additional crossings beyond the crossings of dual pairs of edges. Each vertex of the Delaunay triangulation is positioned within its corresponding face of the Voronoi diagram. Each vertex of the Voronoi diagram is positioned at the circumcenter of the corresponding triangle of the Delaunay triangulation, but this point may lie outside its triangle. - -The concept of duality can be extended to graph embeddings on two-dimensional manifolds other than the plane. The definition is the same: there is a dual vertex for each connected component of the complement of the graph in the manifold, and a dual edge for each graph edge connecting the two dual vertices on either side of the edge. In most applications of this concept, it is restricted to embeddings with the property that each face is a topological disk; this constraint generalizes the requirement for planar graphs that the graph be connected. With this constraint, the dual of any surface-embedded graph has a natural embedding on the same surface, such that the dual of the dual is isomorphic to the original graph and embedded isomorphically to it. For instance, the complete graph K7 is a toroidal graph: it is not planar but can be embedded in a torus, with each face of the embedding being a triangle. This embedding has the Heawood graph as its dual graph. - -The same concept works equally well for non-orientable surfaces.
For instance, K6 can be embedded in the projective plane with ten triangular faces as the hemi-icosahedron, whose dual is the Petersen graph embedded as the hemi-dodecahedron. - -Even planar graphs may have nonplanar embeddings, with duals derived from those embeddings that differ from their planar duals. For instance, the four Petrie polygons of a cube (hexagons formed by removing two opposite vertices of the cube) form the hexagonal faces of an embedding of the cube in a torus. The dual graph of this embedding has four vertices forming a complete graph K4 with doubled edges. In the torus embedding of this dual graph, the six edges incident to each vertex, in cyclic order around that vertex, cycle twice through the three other vertices. In contrast to the situation in the plane, this embedding of the cube and its dual is not unique; the cube graph has several other torus embeddings, with different duals. The two dual concepts of girth and edge connectivity are unified in matroid theory by matroid girth: the girth of the graphic matroid of a planar graph is the same as the graph's girth, and the girth of the dual matroid (the graphic matroid of the dual graph) is the edge connectivity of the graph. - -Along with its use in graph theory, the duality of planar graphs has applications in several other areas of mathematical and computational study. - -In geographic information systems, flow networks (such as the networks showing how water flows in a system of streams and rivers) are dual to cellular networks describing drainage divides. This duality can be explained by modeling the flow network as a spanning tree on a grid graph of an appropriate scale, and modeling the drainage divide as the complementary spanning tree of ridgelines on the dual grid graph. - -In computer vision, digital images are partitioned into small square pixels, each of which has its own color. The dual graph of this subdivision into squares has a vertex per pixel and an edge between pairs of pixels that share an edge; it is useful for applications including clustering of pixels into connected regions of similar colors. - -In computational geometry, the duality between Voronoi diagrams and Delaunay triangulations implies that any algorithm for constructing a Voronoi diagram can be immediately converted into an algorithm for the Delaunay triangulation, and vice versa. The same duality can also be used in finite element mesh generation. Lloyd's algorithm, a method based on Voronoi diagrams for moving a set of points on a surface to more evenly spaced positions, is commonly used as a way to smooth a finite element mesh described by the dual Delaunay triangulation. This method improves the mesh by making its triangles more uniformly sized and shaped. - -In the synthesis of CMOS circuits, the function to be synthesized is represented as a formula in Boolean algebra. Then this formula is translated into two series–parallel multigraphs. These graphs can be interpreted as circuit diagrams in which the edges of the graphs represent transistors, gated by the inputs to the function. One circuit computes the function itself, and the other computes its complement. One of the two circuits is derived by converting the conjunctions and disjunctions of the formula into series and parallel compositions of graphs, respectively. The other circuit reverses this construction, converting the conjunctions and disjunctions of the formula into parallel and series compositions of graphs. 
These two circuits, augmented by an additional edge connecting the input of each circuit to its output, are planar dual graphs. - -The duality of convex polyhedra was recognized by Johannes Kepler in his 1619 book Harmonices Mundi. - -Recognizable planar dual graphs, outside the context of polyhedra, appeared as early as 1725, in Pierre Varignon's posthumously published work, Nouvelle Méchanique ou Statique. This was even before Leonhard Euler's 1736 work on the Seven Bridges of Königsberg that is often taken to be the first work on graph theory. Varignon analyzed the forces on static systems of struts by drawing a graph dual to the struts, with edge lengths proportional to the forces on the struts; this dual graph is a type of Cremona diagram. In connection with the four color theorem, the dual graphs of maps (subdivisions of the plane into regions) were mentioned by Alfred Kempe in 1879, and extended to maps on non-planar surfaces in 1891. Duality as an operation on abstract planar graphs was introduced by Hassler Whitney in 1931. diff --git a/wiki/wikipedia/748.txt b/wiki/wikipedia/748.txt deleted file mode 100644 index 69b457d0ed90d590a09524afe1f37f6e1ad85ced..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/748.txt +++ /dev/null @@ -1,11 +0,0 @@ -In mathematics, the Jacquet–Langlands correspondence is a correspondence between automorphic forms on GL2 and its twisted forms, proved by Hervé Jacquet and Robert Langlands in their book Automorphic Forms on GL(2) using the Selberg trace formula. It was one of the first examples of the Langlands philosophy that maps between L-groups should induce maps between automorphic representations. There are generalized versions of the Jacquet–Langlands correspondence relating automorphic representations of GLr(D) and GLdr(F), where D is a division algebra of degree $d^2$ over the local or global field F. - -Suppose that G is an inner twist of the algebraic group GL2, in other words the multiplicative group of a quaternion algebra. The Jacquet–Langlands correspondence is a bijection between - -*Automorphic representations of G of dimension greater than 1 - -*Cuspidal automorphic representations of GL2 that are square integrable (modulo the center) at each ramified place of G. - -Corresponding representations have the same local components at all unramified places of G. - -Rogawski and Deligne extended the Jacquet–Langlands correspondence to division algebras of higher dimension. diff --git a/wiki/wikipedia/749.txt b/wiki/wikipedia/749.txt deleted file mode 100644 index 5e623bf952af225e5cd1621c5b184bc28ae2be7f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/749.txt +++ /dev/null @@ -1,11 +0,0 @@ -The Darmois–Skitovich theorem is one of the most famous characterization theorems of mathematical statistics. It characterizes the normal distribution (the Gaussian distribution) by the independence of two linear forms from independent random variables. This theorem was proved independently by G. Darmois and V. P. Skitovich in 1953. - -Let $\xi_j, j = 1, 2, \ldots, n, n \ge 2$ be independent random variables. Let $\alpha_j, \beta_j$ be nonzero constants. If the linear forms $L_1 = \alpha_1\xi_1 + \cdots + \alpha_n\xi_n$ and $L_2 = \beta_1\xi_1 + \cdots + \beta_n\xi_n$ are independent then all random variables $\xi_j$ have normal distributions (Gaussian distributions).
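For the simplest case n = 2 with coefficients (1, 1) and (1, -1), i.e. the sum and the difference of two variables, the phenomenon is easy to observe empirically. A rough numerical sketch follows; the squared-value correlation below is just a crude dependence probe of our own choosing (plain correlation of the sum and difference vanishes for any equal-variance distribution, so it cannot distinguish the two cases).

```
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def dependence_probe(sampler):
    """Correlation of (X+Y)^2 with (X-Y)^2 for i.i.d. X, Y."""
    x, y = sampler(N), sampler(N)
    return np.corrcoef((x + y) ** 2, (x - y) ** 2)[0, 1]

print(dependence_probe(rng.standard_normal))              # ~ 0: independent
print(dependence_probe(lambda n: rng.uniform(-1, 1, n)))  # clearly nonzero
```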
- -The Darmois–Skitovich theorem is a generalization of the Kac–Bernstein theorem, in which the normal distribution (the Gaussian distribution) is characterized by the independence of the sum and the difference of two independent random variables. For the history of V. P. Skitovich's proof of the theorem, see the references below. - -* Skitovich, V. P. (1953). "On a property of the normal distribution." Dokl. Akad. Nauk SSSR (N.S.) 89: 217–219 (in Russian). - -* Kagan, A. M., Linnik, Yu. V., and Rao, C. R. (1973). Characterization Problems in Mathematical Statistics. Wiley, New York. diff --git a/wiki/wikipedia/75.txt b/wiki/wikipedia/75.txt deleted file mode 100644 index 6920731ab0a9f247ed4120545886c8c6c12e5187..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/75.txt +++ /dev/null @@ -1,23 +0,0 @@ -In computational number theory, Marsaglia's theorem connects modular arithmetic and analytic geometry to describe the flaws with the pseudorandom numbers resulting from a linear congruential generator. As a direct consequence, it is now widely considered that linear congruential generators are weak for the purpose of generating random numbers. In particular, it is inadvisable to use them for simulations with the Monte Carlo method or in cryptographic settings, such as issuing a public key certificate, unless specific numerical requirements are satisfied. Poorly chosen values for the modulus and multiplier in a Lehmer random number generator will lead to a short period for the sequence of random numbers. Marsaglia's result may be further extended to a mixed linear congruential generator. - -Consider a Lehmer random number generator with -$$ -r_{i+1} \equiv kr_i\mod m -$$ - -for any modulus $m$ and multiplier $ k$ where each $0< r_i < m $, and define a sequence -$$ -u_1 = \frac{r_1} m, u_2 = \frac{r_2} m, u_3 = \frac{r_3} m , \ldots -$$ - -Define the points -$$ -\pi_1 = (u_1, \ldots, u_n), \pi_2 = (u_2, \ldots, u_{n+1}), \pi_3 = (u_3, \ldots, u_{n+2}), \ldots -$$ - -in the unit $n$-cube formed from successive terms of the sequence of $u$. With such a multiplicative number generator, all $n$-tuples of resulting random numbers lie in at most $(n!m)^{1/n}$ hyperplanes. Additionally, for a choice of constants $c_1,c_2, \ldots, c_n$ which satisfy the congruence -$$ -c_1 + c_2k+c_3k^2 + \cdots + c_n k^{n-1} \equiv 0 \mod m, -$$ - -there are at most $|c_1| + |c_2| + \cdots + |c_n| $ parallel hyperplanes which contain all $n$-tuples produced by the generator. Proofs for these claims may be found in Marsaglia's original paper. diff --git a/wiki/wikipedia/750.txt b/wiki/wikipedia/750.txt deleted file mode 100644 index f34ca41a45aa8349bddd4cf3819e956804b9ee52..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/750.txt +++ /dev/null @@ -1,10 +0,0 @@ -In mathematics, the Feit–Thompson conjecture is a conjecture in number theory, suggested by Walter Feit and John G. Thompson. The conjecture states that there are no distinct prime numbers p and q such that -$$ -\frac{p^{q} - 1}{p - 1} -$$ divides $\frac{q^{p} - 1}{q - 1}$. - -If the conjecture were true, it would greatly simplify the final chapter of the proof of the Feit–Thompson theorem that every finite group of odd order is solvable. A stronger conjecture that the two numbers are always coprime was disproved by Stephens with the counterexample p = 17 and q = 3313, which have the common factor 2pq + 1 = 112643. - -It is known that the conjecture is true for q = 3.
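Stephens's counterexample to the stronger coprimality statement is straightforward to verify with arbitrary-precision integers, which Python handles natively even though the quantities involved run to thousands of digits. A quick sketch:

```
from math import gcd

def f(p, q):
    # The quantity in the conjecture: (p^q - 1)/(p - 1).
    return (p ** q - 1) // (p - 1)

p, q = 17, 3313
a, b = f(p, q), f(q, p)             # ~4000 digits and ~60 digits respectively
print(gcd(a, b) % (2 * p * q + 1))  # 0: both share the factor 112643
print(a % b == 0)                   # False: the conjectured non-divisibility holds
```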
- -Informal probability arguments suggest that the "expected" number of counterexamples to the Feit–Thompson conjecture is very close to 0, indicating that the conjecture is likely to be true. diff --git a/wiki/wikipedia/751.txt b/wiki/wikipedia/751.txt deleted file mode 100644 index 93732d90cd197b5e917606ac6358676261ea4da2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/751.txt +++ /dev/null @@ -1,11 +0,0 @@ -SecureSafe is a cloud-based software-as-a-service with a password safe, document storage, and digital spaces for online collaboration. The service is developed based on the principles of security by design and privacy by design. - -SecureSafe stores customers' data in three data centers using triple redundancy mirroring. The first data center is dedicated to production, the second is a hot standby, and the third acts as the so-called disaster recovery center. The first two data centers are located in the greater Zürich area at the company Interxion. The third center is located in a former military bunker in the mountains of central Switzerland. - -A password manager is used to store passwords. The passwords that are stored in SecureSafe are protected by AES-256 and RSA-2048 encryption. - -A file storage or cloud storage is used to store files online. - -The login method two-factor authentication is also known from e-banking systems. It works by sending a one-time code to the user's mobile phone every time he or she logs into a given online account. Even if a hacker should get hold of the user's login data, the information is useless without the additional security code. - -Data inheritance or digital inheritance enables customers to pass on important digital assets to others. Among the digital assets people pass on are login credentials to online accounts, insurance and legal documents, and photo collections. diff --git a/wiki/wikipedia/752.txt b/wiki/wikipedia/752.txt deleted file mode 100644 index f61939528a6df5e1ced96ff28c61d4b7e2cfa0a9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/752.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, the Fontaine–Mazur conjectures are some conjectures introduced by Jean-Marc Fontaine and Barry Mazur about when p-adic representations of Galois groups of number fields can be constructed from representations on étale cohomology groups of varieties. Some cases of this conjecture in dimension 2 have already been proved. diff --git a/wiki/wikipedia/753.txt b/wiki/wikipedia/753.txt deleted file mode 100644 index c13ac8e0e610d02d9f60585fee4d1418d853d418..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/753.txt +++ /dev/null @@ -1,323 +0,0 @@ -In the branch of mathematics known as Euclidean geometry, the Poncelet–Steiner theorem is one of several results concerning compass and straightedge constructions with additional restrictions imposed. This result states that whatever can be constructed by straightedge and compass together can be constructed by straightedge alone, provided that a single circle and its centre are given. This theorem is related to the rusty compass equivalence. - -Any Euclidean construction, insofar as the given and required elements are points (or lines), if it can be completed with both the compass and the straightedge together, may be completed with the straightedge alone provided that at least one circle with its center exists in the plane. - -Though a compass can make constructions significantly easier, the theorem implies that the compass has no functional purpose once the first circle has been drawn.
All constructions remain possible, though it is naturally understood that circles and their arcs cannot be drawn without the compass. This means only that the compass may be used for aesthetic purposes, rather than for the purposes of construction. All points that uniquely define a construction and that can be determined with the use of the compass are equally determinable without it. - -Constructions carried out in adherence with this theorem - relying solely on the use of a straightedge tool without the aid of a compass - are known as Steiner constructions. Steiner constructions may involve any number of circles, including none, already drawn in the plane, with or without their centers. - -In the tenth century, the Persian mathematician Abu al-Wafa' Buzjani (940–998) considered geometric constructions using a straightedge and a compass with a fixed opening, a so-called rusty compass. Constructions of this type appeared to have some practical significance, as they were used by the artists Leonardo da Vinci and Albrecht Dürer in Europe in the late fifteenth century. A new viewpoint developed in the mid-sixteenth century when the size of the opening was considered fixed but arbitrary, and the question of how many of Euclid's constructions could be obtained was paramount. - -The Renaissance mathematician Lodovico Ferrari, a student of Gerolamo Cardano, was able to show in a "mathematical challenge" against Niccolò Fontana Tartaglia that "all of Euclid" (that is, the straightedge and compass constructions in the first six books of Euclid's Elements) could be accomplished with a straightedge and rusty compass. Within ten years additional sets of solutions were obtained by Cardano, Tartaglia, and Tartaglia's student Benedetti. During the next century these solutions were generally forgotten until, in 1673, Georg Mohr published (anonymously and in Dutch) Euclidis Curiosi containing his own solutions. Mohr had only heard about the existence of the earlier results, and this led him to work on the problem. - -Showing that "all of Euclid" could be performed with straightedge and rusty compass is not the same as proving that all straightedge and compass constructions could be done with a straightedge and just a rusty compass. Such a proof would require the formalization of what a straightedge and compass could construct. This groundwork was provided by Jean Victor Poncelet in 1822, motivated by Mohr's work on the Mohr–Mascheroni theorem. He also conjectured, and suggested a possible proof, that a straightedge and rusty compass would be equivalent to a straightedge and compass, and moreover that the rusty compass need only be used once. The result that a straightedge and a single circle with given centre is equivalent to a straightedge and compass was proved by Jakob Steiner in 1833. - -Various other notions, tools, terminology, etc., are often associated (sometimes loosely) with the Poncelet–Steiner theorem. Some are listed here. - -The rusty compass describes a compass whose hinge is so rusted as to be fused, such that its legs - the needle and pencil - are unable to adjust width. In essence, it is a compass whose opening is fixed, and which draws circles of a predetermined, constant, but arbitrary radius. Circles may be drawn centered at any arbitrary point, but the radius is unchangeable. - -As a restricted construction paradigm, rusty compass constructions allow the use of a straightedge and the fixed-width compass.
- -In some sense, the rusty compass is a generalization and simplification of the Poncelet-Steiner theorem. Though not more powerful, it is certainly more convenient. The Poncelet-Steiner theorem requires a single circle with arbitrary radius and center point to be placed in the plane. As it is the only drawn circle, whether or not it was drawn by a rusty compass is immaterial and equivalent. The benefit of general rusty compass constructions, however, is that the compass may be used repeatedly to redraw circles centered at any desired point, albeit with the same radius, thus simplifying many constructions. Naturally, if all constructions are possible with a single circle arbitrarily placed in the plane, then the same can surely be said about a straightedge and rusty compass. - -It is known that a straightedge and a rusty compass are sufficient to construct all that is possible with straightedge and standard compass - with the implied understanding that circular arcs of arbitrary radii cannot be drawn, and only need be drawn for aesthetic purposes rather than constructive ones. Historically this was proven when the Poncelet-Steiner theorem was proven, which is a stronger result. The rusty compass, therefore, is no weaker than the Poncelet-Steiner theorem. The rusty compass is also no stronger. - -The Poncelet-Steiner theorem reduces Ferrari's rusty compass equivalence, which at the time was only a claim, to a single-use compass: all points necessary to uniquely describe any compass-straightedge construction may be achieved with only a straightedge, once the first circle has been placed. The Poncelet-Steiner theorem takes the rusty compass scenario, and breaks the compass completely after its first use. - -The term Steiner construction typically refers to any geometric construction that utilizes the straightedge tool only, and is sometimes simply called a straightedge-only construction. No stipulations are made about what geometric objects already exist in the plane, and no implications are made about what is or is not possible to construct. Thus, all constructions adhering to the Poncelet-Steiner theorem are Steiner constructions, though not all Steiner constructions adhere to the rule that only one circle with its center must be given. - -If only one circle is to be given and no other special information, Steiner's theorem implies that the center of the circle must be provided along with the circle. This is done by proving the impossibility of constructing the circle's center from straightedge alone, using only a single circle in the plane without its center. An argument using projective transformations and Steiner's conic sections is used. - -A naïve summary of the proof is that lines project onto lines under any linear projective transformation, while conic sections project onto conic sections under a linear projective transformation, but are skewed such that eccentricities, foci, and centers of circles are not preserved. Under different mappings the center does not map uniquely and reversibly. This would not be the case if lines could be used to determine a circle's center, as linear transformations are reversible operations and would thus produce unique results.
Many constructions are impossible with straightedge alone. Something more is necessary, and a circle with its center identified is sufficient. - -Alternatively, the center may be omitted if sufficient additional information is given. This is not a weakening of the Poncelet-Steiner theorem, merely an alternative framework; nor is it a contradiction of Steiner's theorem, which hypothesizes only a single circle. The inclusion of this alternative information disambiguates the mappings under the projective transformations, thus allowing various Steiner constructions to recover the circle center. Some alternatives include two concentric circles, two intersecting circles, three circles, or other variations wherein the provided circles are devoid of their centers but some other sufficient criterion is met. In any of these cases, the center of a circle can be constructed, thereby reducing the problem to the Poncelet-Steiner theorem hypothesis. - -To prove the theorem, each of the basic constructions of compass and straightedge needs to be shown possible using a straightedge alone (provided that a circle and its center exist in the plane), as these are the foundations of, or elementary steps for, all other constructions. That is to say, all constructions can be written as a series of steps involving these five basic constructions: - -#Creating the line through two existing points - -#Creating the circle through one point with its center at another existing point - -#Creating the point which is the intersection of two existing, non-parallel lines - -#Creating the one or two points in the intersection of a line and a circle (if they intersect) - -#Creating the one or two points in the intersection of two circles (if they intersect). - -#1 - A line through two points - -This can be done with a straightedge alone quite naturally; it is the very purpose for which straightedges are meant. There is nothing to prove. Any doubts about this construction would equally apply to traditional constructions that do involve a compass. - -#2 - A circle through one point with defined center - -It is understood that the arc of a circle cannot be drawn without a compass. A circle is considered to be given by any two points: one defining the center, and one lying on the circumference. Any such pair defines a unique circle. In keeping with the intent of the theorem which we aim to prove, the actual circle need not be drawn except for aesthetic reasons. This fact will be shown when all other constructions involving a circle defined only by these two points are proven. - -#3 - Intersection of two lines - -This construction can be done directly with a straightedge; neither a compass nor a circle is required. Indeed, the straightedge is not even required if the lines are already drawn. There is nothing to prove. Any doubts about this construction would equally apply to traditional constructions that do involve a compass. - -#4, #5 - The other constructions - -Thus, to prove the theorem, only constructions #4 and #5 need be proven possible using only a straightedge (and a given circle in the plane). - -In general constructions there are often several variations that will produce the same result. The choices made in such a variant can be made without loss of generality.
However, when a construction is being used to prove that something can be done, it is not necessary to describe all these various choices, and, for the sake of clarity of exposition, only one variant is given below. The variants below are chosen for their ubiquity in application rather than for simplicity under any particular set of special conditions. - -In the constructions below, a circle defined by a center point P and a point on its circumference Q, through which the arc of the circle passes, is denoted P(Q). As most circles are not compass-drawn, center and circumference points are named explicitly, and usually separately. Per the theorem, when a compass-drawn circle is provided it is simply referred to as the given circle or the provided circle. The provided circle should always be assumed to be placed arbitrarily in the plane with an arbitrary radius (i.e. in general position). - -The intersection points between any line and the given circle may be found directly. The Poncelet-Steiner theorem does not prohibit the normal treatment of circles already drawn in the plane; normal construction rules apply. The theorem only prohibits the construction of new circular arcs with a compass. - -It is important to note that Steiner constructions, and the constructions herein proving the Poncelet-Steiner theorem, require the arbitrary placement of points in the plane. In some construction paradigms - such as in the geometric definition of the constructible number - this may be prohibited. - -To prove the above constructions #4 and #5, which are included below, a few necessary intermediary constructions are also explained below, since they are used and referenced frequently. These are also straightedge-only constructions. All constructions below rely on basic constructions #1, #2, #3, and any other construction listed prior to it. - -Constructing a parallel of a line having a collinear bisected segment: this construction does not require the use of the given circle. Naturally, any line that passes through the center of the given circle implicitly has a bisected segment: the diameter is bisected by the center. - -Given an arbitrary line n (in black), on which there exist two points A and B having a midpoint M between them, and an arbitrary point P in the plane (assumed not to be on line n) through which a parallel of line n is to be made: - -# Construct a line AP (in red). - -# Construct a line BP (in orange). - -# Define an arbitrary point R on line AP. - -# Construct a line BR (in green). - -# Construct a line MR (in light blue). - -# Lines MR and BP intersect at point X. - -# Construct a line AX (in purple). - -# Lines BR and AX intersect at point Q. - -# Construct a line PQ (in dark blue), the desired parallel. - -A coordinate sketch of these nine steps is given below, after this passage. - -In some literature the bisected line segment is viewed as a one-dimensional "circle" existing on the line. Alternatively, some literature views the bisected line segment as a two-dimensional circle in three-dimensional space with the line passing through a diameter, but not parallel to the plane, thus intersecting the plane of construction at two points on the circumference, with the midpoint simply being the prescribed circle center. - -If the line passes through the center of a circle, the segment defined by the diameter through the circle is bisected by the center of the circle.
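As referenced above, the following coordinate sketch traces the nine steps of the parallel construction. The coordinates of A, B, P and the choice of R are arbitrary assumptions for the demonstration, and the homogeneous-coordinate helpers are written for this sketch rather than taken from any library:

```python
# Straightedge-only parallel through P from a bisected segment AB (midpoint M).
# Only the two primitives of the proof are used: joining two points by a line
# and intersecting two lines.
import numpy as np

def line(p, q):
    """Line through points p and q, as homogeneous coordinates (cross product)."""
    return np.cross([*p, 1.0], [*q, 1.0])

def meet(l1, l2):
    """Intersection point of two lines given homogeneously."""
    x, y, w = np.cross(l1, l2)
    return np.array([x / w, y / w])

A, B = np.array([0.0, 0.0]), np.array([4.0, 0.0])
M = (A + B) / 2                      # bisected segment on line n
P = np.array([1.0, 2.0])             # the parallel must pass through P
R = A + 1.7 * (P - A)                # arbitrary point on line AP (step 3)

X = meet(line(M, R), line(B, P))     # steps 5-6
Q = meet(line(B, R), line(A, X))     # steps 7-8

# PQ should be parallel to AB: the 2D cross product of the directions vanishes.
d1, d2 = Q - P, B - A
print("parallel:", np.isclose(d1[0] * d2[1] - d1[1] * d2[0], 0.0))  # True
```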
In the general case, however, any other line in the plane may have a bisected segment constructed onto it. This construction does require the use of the given circle. - -Given a line m (in black) and the given circle centered at A, we wish to create points E, B, and H on the line such that B is the midpoint: - -# Draw an arbitrary line (in red) passing through the given circle's center, A, and the desired midpoint B (chosen arbitrarily) on the line m. - -#* Notice that the red line, AB, passes through the center of the circle and highlights a diameter, bisected by the circle center. Any parallel may be made from this line according to the previous construction. - -# Choose an arbitrary point C on the given circle (which does not lie on the perpendicular of AB through the circle center). - -# Construct a line (in orange), passing through C, that is parallel to the red line AB. - -#* This parallel intersects the given circle at D. - -#* This parallel also intersects the black line m at E, defining one end of the line segment. - -# Create two lines (in green), AC and AD, that each pass through the given circle's center. - -#* These green lines intersect the given circle at points G and F, respectively. - -# Line FG (in blue) intersects the line m at H, defining the other endpoint of the line segment. - -Constructing a parallel of any line: this construction does require the use of the given circle. In order to generalize the parallel line construction to all possible lines, not just those with a collinear bisected line segment, additional information becomes necessary. In keeping with the Poncelet-Steiner theorem, a circle (with center) is the object of choice for this construction. - -To construct a parallel of any given line, through any point in the plane, we trivially combine two constructions: - -# Any line from which a parallel is to be made must have a bisected segment constructed onto it, if one does not already exist. - -# A parallel is then constructed according to the previous parallel construction involving the collinear bisected segment. - -In general, however, a parallel may be constructed from any pair of lines which are already parallel to one another; thus a third parallel may be produced from any two, without the use of a circle. Additionally, a parallel of any line may be constructed whenever there exists in the plane any parallelogram, also without the use of the given circle. - -Constructing a perpendicular line: this construction does require the use of the given circle and takes advantage of Thales's theorem. - -From a given line m and a given point A in the plane, a perpendicular to the line is to be constructed through the point. Provided is the given circle O(r). - -# If the line m, from which a perpendicular is to be made, does not pass through the given circle (or if it passes through the given circle's center), then a new parallel line (in red) may be constructed arbitrarily such that it does pass through the given circle but not its center, and the perpendicular is to be made from this line instead. - -# This red line, which passes through the given circle but not its center, intersects the given circle in two points, B and C. - -# Draw a line BO, through the circle center. - -#* This line intersects the given circle at point D, so that BD is a diameter. - -# Draw a line DC. - -#* By Thales's theorem, the angle at C inscribed in the semicircle on diameter BD is a right angle, so this line is perpendicular to the red (and black) lines, BC and m. - -# Construct a parallel of line DC through point A using previous constructions.
- -#* A perpendicular of the original black line, m, now exists in the plane, and a parallel of it may be constructed through any point in the plane. - -An alternative construction allows a perpendicular to be constructed without the given circle, provided there exists in the plane any square. - -Constructing the midpoint of any segment (segment bisection): given is a line segment AB, which is to be bisected. Optionally, a parallel line m exists in the plane. - -# If the line m, which is parallel to line segment AB, does not exist in the plane, then it must be constructed according to earlier constructions using the given circle in the plane (not depicted). - -#* A given circle in the plane is not required for this construction if the parallel already exists. - -#* The parallel may be placed in the plane arbitrarily, so long as it is not collinear with the line segment. - -# Arbitrarily choose a point C in the plane which is not collinear with the line or the line segment. - -# Draw a line AC (in red), intersecting line m at point D. - -# Draw a line BC (in orange), intersecting line m at point E. - -# Draw two lines, AE and BD (each in light green), intersecting each other at point X. - -# Draw a line CX (in blue), intersecting segment AB at point M. - -#* Point M is the desired midpoint of segment AB. - -#* Line CX also bisects segment DE. - -For added perspective, this construction is in some sense a variant of the earlier construction of a parallel from a bisected line segment: it is the same set of lines taken as a whole, but constructed in a different order, from a different initial set of conditions, and arriving at a different end goal. - -Constructing the radical axis of two circles: this construction does require the use of the given circle (which is not depicted) for the referenced sub-constructions. - -Suppose two circles A(B) and C(D) are implicitly given, defined only by the points A, B, C, and D in the plane, with their centers defined but not compass-constructed. The radical axis, line m, between the two circles may be constructed: - -# Draw a line AC (in orange) through the circle centers. - -# Draw a line segment BD (in red) between the points on the circumferences of the circles. - -# Find the midpoint, M, of segment BD. - -# Draw lines AM and CM (both in light green), connecting the segment midpoint with each of the circle centers. - -# Construct a line j (in purple) passing through point B, and perpendicular to AM. - -# Construct a line k (in dark green) passing through point D, and perpendicular to CM. - -# Lines j and k intersect at point X. - -#* If the lines j and k are parallel, then the segment midpoint M is on the line AC, and the construction will fail. An alternative approach is required (see below). - -# Construct a line m (in dark blue) perpendicular to line AC and passing through point X. - -# Line m is the desired radical axis. - -In the event that the construction of the radical axis fails - there being no intersection point X between the parallel lines j and k, which results from the coincidental placement of the midpoint M on the line AC - an alternative approach is required. One such alternative is given below, with the arbitrarily chosen circle A(B) used for demonstration, along with the provided circle O(r). The circle C(D) of the radical axis construction is not depicted. - -To define a circle, only the center and one point - any point - on the circumference are required. In principle a new point B' is constructed such that circle A(B') equals circle A(B), though point B' differs from point B.
In essence, segment AB is rotated to AB', giving a different set of defining points for the same circle. The construction of the radical axis is begun anew with circle A(B') standing in for circle A(B). In this way the coincidental placement of the midpoint M (now of segment B'D) on the line AC is avoided. - -One way of going about this which satisfies most conditions is to construct point B' diametrically opposite B, collinear with line AB: - -# Draw the line AB (in red). - -# Construct a parallel (in orange) of line AB through the center, point O, of the given circle. - -#* The parallel intersects the given circle at points E and F. - -# Draw a line AO (in green), connecting the center of circle A(B) with the center of the given circle. - -# Draw a line BE (in pink), connecting the points on the circle circumferences. - -#* In the general case, points E and F may be switched without loss of generality. - -# Lines AO and BE intersect at a point Z. - -#* If point Z does not exist due to lines AO and BE being parallel - caused by circles A(B) and O(r) having equal radii - then refer to step 4 and switch the roles of points E and F. - -# Draw a line FZ (in blue). - -# Lines AB and FZ intersect at a point B'. - -#* Point B' is the desired point. - -In the general case it is now possible to construct the radical axis between the circles A(B') = A(B) and C(D). - -This specific construction of a diametrically opposite point can, however, itself fail under the right conditions - namely when points A, B, and O are collinear. If the final goal is to construct a diametrically opposite point, an alternative approach is required. - -If the goal is to resolve the limitation in the radical axis construction, one option is to attempt a similar construction on circle C(D) instead. This too may fail, if all five points are collinear. Alternatively, an entirely different point B' may be determined - not necessarily a diametrically opposite one - requiring a small variation on the above construction. - -Intersecting a line with a circle (basic construction #4): this construction does require the use of the provided circle, O(r). - -Given is the line m (in black) and the circle P(Q), which is not compass-constructed. The intersection points of the circle P(Q) and the line m, points A and B, may be constructed: - -# Draw a line PQ (in red) through the points defining the circle. - -# Construct a parallel (in orange) of line PQ through the center O of the provided circle. - -#* The parallel intersects the provided circle at two points, one of which is arbitrarily chosen: R. - -# Draw a line PO (in light green), through the centers of the two circles (i.e. the one provided by compass construction and the one which is to be intersected). - -# Draw a line QR (in light blue), connecting the two points on the circumferences of the two circles. - -# Intersect the lines PO and QR at point X. - -#* If point X does not exist due to lines PO and QR being parallel - which results from circles P(Q) and O(r) having equal radii - then refer back to step 2 and choose the alternate point of intersection, R. - -# Choosing a point M arbitrarily on line m, such that it is not on line PO, draw a line PM (in pink). - -#* For construction simplicity, and only if line PQ is not parallel to line m, lines PM and PQ may be coincident. - -# Draw a line MX (in brown). - -# Construct a parallel (in dark purple) of line PM through the center O of the provided circle. - -#* The parallel intersects the line MX at a point N.
- -# Construct a parallel (in yellow) of line m through the point N. - -#* The parallel intersects the provided circle at points C and D. - -#* If the parallel does not intersect the provided circle, then neither does the line m intersect circle P(Q). - -# Draw lines CX and DX (both in dark blue). - -#* These lines intersect line m at points A and B, respectively. - -# Points A and B are the desired points of intersection between the line m and the circle P(Q). - -Intersecting two circles (basic construction #5): the intersection between two circles becomes a trivial combination of two earlier constructions: - -# Construct the radical axis between the two circles. - -# Construct the intersection points between the radical axis (which is a line) and either one of the two circles, arbitrarily chosen, using basic construction #4. - -# These points are the desired points of intersection of the circles. - -#* The two circles and the radical axis all intersect at the same loci of points: two points, one point if tangential, or none if they do not intersect. - -#* If the radical axis does not intersect one circle, then it intersects neither, and neither do the two circles intersect. - -The second basic construction - defining a circle with two points - never needed an arc to be constructed with the compass in order for the circle to be used in constructions, namely in the intersections with circles and with lines which, together, are the essence of all constructions involving a circle. Thus defining a circle by its center and by any arbitrary point on its circumference is sufficient to fully describe the entire circle and construct with it. Basic construction #2 is satisfied. - -Since all five basic constructions have been shown to be achievable with only a straightedge, provided that a single circle with its center is placed in the plane, this proves the Poncelet-Steiner theorem. - -The Poncelet–Steiner theorem can be contrasted with the Mohr–Mascheroni theorem, which states that any compass and straightedge construction can be performed with only a compass. - -The rusty compass restriction allows the use of a compass, provided it produces circles of a fixed radius. Although rusty compass constructions had been explored since the 10th century, and all of Euclid was shown to be constructible with a rusty compass by the 17th century, the Poncelet-Steiner theorem proves that the rusty compass and straightedge together are more than sufficient for any and all Euclidean constructions. Indeed, the rusty compass becomes a tool that simplifies constructions relative to the straightedge and single circle. Viewed the other way, the Poncelet-Steiner theorem not only fixes the width of the rusty compass, but ensures that the compass breaks after its first use. - -The requirement of one circle with its center provided has since been generalized to include alternative but equally restrictive conditions. In one such alternative, the entire circle is not required at all. In 1904, Francesco Severi proved that any small arc of the circle, together with the center, will suffice. This construction breaks the rusty compass at any point before the first circle is completed, but after it has begun, and still all constructions remain possible. Thus, the conditions hypothesizing the Poncelet-Steiner theorem may indeed be weakened, but only with respect to the completeness of the circular arc, and not, per Steiner's theorem, with respect to the center.
- -In two other alternatives, the center may be omitted entirely, provided that given are either two concentric circles, or two distinct intersecting circles, of which there are two cases: two intersection points and one intersection point (tangential circles). From any of these scenarios, centers can be constructed, reducing the scenario to the original hypothesis. - -Still other variations exist. It suffices to have two non-intersecting circles (without their centers), provided that at least one point is given on either their centerline or their radical axis, or alternatively to have three non-intersecting circles. Once a single center is constructed, the scenario again reduces to the original hypothesis of the Poncelet-Steiner theorem. - -Instead of restricting the rules of construction, it is of equal interest to study relaxing the rules. Just as geometers have studied what remains possible to construct (and how) when additional restrictions are placed on traditional construction rules - such as compass only, straightedge only, rusty compass, etc. - they have also studied what constructions become possible, beyond what already was, when the restrictions inherent in traditional construction rules are relaxed. Questions such as "what becomes constructible", "how might it be constructed", "what are the fewest traditional rules to be broken", "what are the simplest tools needed", "which seemingly different tools are equivalent", etc., are asked. - -The arbitrary angle is not trisectable using traditional compass and straightedge rules, for example, but the trisection becomes constructible when allowed the additional tool of an ellipse in the plane. Some of the traditional problems, such as angle trisection, doubling the cube, squaring the circle, finding cube roots, etc., have been resolved using an expanded set of tools. In general, the objects studied to expand the scope of what is constructible have included: - -* Non-constructible "auxiliary" curves in the plane - including any of the conic sections, cycloids, lemniscates, limaçons, the Archimedean spiral, any of the trisectrices or quadratrices, and others. - -* Physical tools other than the compass and straightedge - generally called neuseis - which include specific tools such as the tomahawk, markable straightedges and graduated rulers, right triangular rulers, linkages, ellipsographs, and others. - -* Origami, or paper-folding techniques. - -The ancient geometers considered the conic sections, and regarded their use as a less pure form of construction, though more pure than the use of neuseis (alternative physical tools) or other unusual curves. The term neusis or neusis construction may also refer to a specific tool or method employed by the ancient geometers. - -Although not a true and rigorous construction (nor a neusis construction by the usual definitions), it is possible to approximate a construction to a predetermined level of precision using only compass and straightedge, by an iterative approach. Although each point, line, or circle is a valid construction, what it aims to approximate can never truly be achieved. Indeed, if an infinite number of constructive steps are allowed, many more points beyond what is normally constructible with compass and straightedge become possible, as a convergent process with limiting behavior. For example, an angle trisection may be performed exactly using an infinite sequence of angle bisections.
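To see why, note that θ/2 − θ/4 + θ/8 − ⋯ is an alternating geometric series summing to θ/3, and each new term requires only one bisection of the previously constructed angle. A minimal numerical sketch (the 60° angle and the cutoff of 20 bisections are arbitrary assumptions for illustration):

```python
# Trisecting an angle theta by an infinite sequence of bisections:
# theta/2 - theta/4 + theta/8 - ... is a geometric series summing to theta/3.
import math

theta = math.radians(60.0)
approx, term = 0.0, theta
for k in range(1, 21):
    term /= 2.0                      # each step is one angle bisection
    approx += term if k % 2 else -term
print(approx, theta / 3)             # after 20 bisections the error is below 1e-6 rad
```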
If terminated at some finite point, an accurate approximation of a trisection can be achieved. In traditional construction rules, this is not allowed: a construction must terminate in a finite number of applications of the compass and straightedge, and must produce the desired construction exactly. - -*Steel square - -*Constructible polygon - -*Projective geometry - -*Inversive geometry - -*Geometrography diff --git a/wiki/wikipedia/754.txt b/wiki/wikipedia/754.txt deleted file mode 100644 index 43e7dcd5fd320ce8bed68590d356251b12d59414..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/754.txt +++ /dev/null @@ -1,5 +0,0 @@ -The Maximum Degree-and-Diameter-Bounded Subgraph problem (MaxDDBS) is a problem in graph theory. - -Given a connected host graph G, an upper bound for the degree d, and an upper bound for the diameter k, we look for the largest subgraph S of G with maximum degree at most d and diameter at most k. - -This problem is also referred to as the Degree-Diameter Subgraph Problem, as it contains the degree diameter problem as a special case (namely, by taking a sufficiently large complete graph as a host graph). Despite being a natural generalization of the Degree-Diameter Problem, MaxDDBS only began to be investigated in 2011, while research in the Degree-Diameter Problem has been active since the 1960s. Regarding its computational complexity, the problem is NP-hard, and not in APX (i.e. it cannot be approximated to within a constant factor in polynomial time). diff --git a/wiki/wikipedia/755.txt b/wiki/wikipedia/755.txt deleted file mode 100644 index 5d7defc59d82754f142591e14e00fc794f7c93a0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/755.txt +++ /dev/null @@ -1,3 +0,0 @@ -In differential geometry, Vermeil's theorem essentially states that the scalar curvature is the only (non-trivial) absolute invariant among those of prescribed type suitable for Albert Einstein’s theory of General Relativity. The theorem was proved by the German mathematician Hermann Vermeil in 1917. - -The theorem states that the Ricci scalar $R$ is the only scalar invariant (or absolute invariant) linear in the second derivatives of the metric tensor $g_{\mu\nu}$. diff --git a/wiki/wikipedia/756.txt b/wiki/wikipedia/756.txt deleted file mode 100644 index 39258bdee667875083ff9a35e805caae1e445870..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/756.txt +++ /dev/null @@ -1,131 +0,0 @@ -In mathematics, the classification of the finite simple groups is a result of group theory stating that every finite simple group is either cyclic, or alternating, or it belongs to a broad infinite class called the groups of Lie type, or else it is one of twenty-six or twenty-seven exceptions, called sporadic. The proof consists of tens of thousands of pages in several hundred journal articles written by about 100 authors, published mostly between 1955 and 2004. - -Simple groups can be seen as the basic building blocks of all finite groups, reminiscent of the way the prime numbers are the basic building blocks of the natural numbers. The Jordan–Hölder theorem is a more precise way of stating this fact about finite groups. However, a significant difference from integer factorization is that such "building blocks" do not necessarily determine a unique group, since there might be many non-isomorphic groups with the same composition series or, put in another way, the extension problem does not have a unique solution. 
- -Gorenstein (d. 1992), Lyons, and Solomon are gradually publishing a simplified and revised version of the proof. - -The classification theorem has applications in many branches of mathematics, as questions about the structure of finite groups (and their action on other mathematical objects) can sometimes be reduced to questions about finite simple groups. Thanks to the classification theorem, such questions can sometimes be answered by checking each family of simple groups and each sporadic group. - -Daniel Gorenstein announced in 1983 that the finite simple groups had all been classified, but this was premature, as he had been misinformed about the proof of the classification of quasithin groups. The completed proof of the classification was announced by Aschbacher after Aschbacher and Smith published a 1221-page proof for the missing quasithin case. - -Two volumes outlining the low-rank and odd characteristic part of the proof have been written, and a third volume covers the remaining characteristic 2 case. The proof can be broken up into several major pieces as follows: - -The simple groups of low 2-rank are mostly groups of Lie type of small rank over fields of odd characteristic, together with five alternating groups, seven groups of characteristic 2 type, and nine sporadic groups. - -The simple groups of small 2-rank include: - -*Groups of 2-rank 0, in other words groups of odd order, which are all solvable by the Feit–Thompson theorem. - -*Groups of 2-rank 1. The Sylow 2-subgroups are either cyclic, which is easy to handle using the transfer map, or generalized quaternion, which are handled with the Brauer–Suzuki theorem: in particular there are no simple groups of 2-rank 1 except for the cyclic group of order two. - -*Groups of 2-rank 2. Alperin showed that the Sylow subgroup must be dihedral, quasidihedral, wreathed, or a Sylow 2-subgroup of U3(4). The first case was done by the Gorenstein–Walter theorem, which showed that the only simple groups are isomorphic to L2(q) for q odd or A7; the second and third cases were done by the Alperin–Brauer–Gorenstein theorem, which implies that the only simple groups are isomorphic to L3(q) or U3(q) for q odd or M11; and the last case was done by Lyons, who showed that U3(4) is the only simple possibility. - -*Groups of sectional 2-rank at most 4, classified by the Gorenstein–Harada theorem. - -The classification of groups of small 2-rank, especially ranks at most 2, makes heavy use of ordinary and modular character theory, which is almost never directly used elsewhere in the classification. - -All groups not of small 2-rank can be split into two major classes: groups of component type and groups of characteristic 2 type. This is because if a group has sectional 2-rank at least 5, then MacWilliams showed that its Sylow 2-subgroups are connected, and the balance theorem implies that any simple group with connected Sylow 2-subgroups is either of component type or of characteristic 2 type. (For groups of low 2-rank the proof of this breaks down, because theorems such as the signalizer functor theorem only work for groups with elementary abelian subgroups of rank at least 3.) - -A group is said to be of component type if for some centralizer C of an involution, C/O(C) has a component (where O(C) is the core of C, the maximal normal subgroup of odd order). - -These are more or less the groups of Lie type of odd characteristic of large rank, and alternating groups, together with some sporadic groups.
- -A major step in this case is to eliminate the obstruction of the core of an involution. This is accomplished by the B-theorem, which states that every component of C/O(C) is the image of a component of C. - -The idea is that these groups have a centralizer of an involution with a component that is a smaller quasisimple group, which can be assumed to be already known by induction. So to classify these groups one takes every central extension of every known finite simple group, and finds all simple groups with a centralizer of involution with this as a component. This gives a rather large number of different cases to check: there are not only 26 sporadic groups and 16 families of groups of Lie type and the alternating groups, but also many of the groups of small rank or over small fields behave differently from the general case and have to be treated separately, and the groups of Lie type of even and odd characteristic are also quite different. - -A group is of characteristic 2 type if the generalized Fitting subgroup F*(Y) of every 2-local subgroup Y is a 2-group. - -As the name suggests these are roughly the groups of Lie type over fields of characteristic 2, plus a handful of others that are alternating or sporadic or of odd characteristic. Their classification is divided into the small and large rank cases, where the rank is the largest rank of an odd abelian subgroup normalizing a nontrivial 2-subgroup, which is often (but not always) the same as the rank of a Cartan subalgebra when the group is a group of Lie type in characteristic 2. - -The rank 1 groups are the thin groups, classified by Aschbacher, and the rank 2 ones are the notorious quasithin groups, classified by Aschbacher and Smith. These correspond roughly to groups of Lie type of ranks 1 or 2 over fields of characteristic 2. - -Groups of rank at least 3 are further subdivided into 3 classes by the trichotomy theorem, proved by Aschbacher for rank 3 and by Gorenstein and Lyons for rank at least 4. - -The three classes are groups of GF(2) type (classified mainly by Timmesfeld), groups of "standard type" for some odd prime (classified by the Gilman–Griess theorem and work by several others), and groups of uniqueness type, where a result of Aschbacher implies that there are no simple groups. - -The general higher rank case consists mostly of the groups of Lie type over fields of characteristic 2 of rank at least 3 or 4. - -The main part of the classification produces a characterization of each simple group. It is then necessary to check that there exists a simple group for each characterization and that it is unique. This gives a large number of separate problems; for example, the original proofs of existence and uniqueness of the monster group totaled about 200 pages, and the identification of the Ree groups by Thompson and Bombieri was one of the hardest parts of the classification. Many of the existence proofs and some of the uniqueness proofs for the sporadic groups originally used computer calculations, most of which have since been replaced by shorter hand proofs. - -In 1972 Gorenstein announced a program for completing the classification of finite simple groups, consisting of the following 16 steps: - -# Groups of low 2-rank. This was essentially done by Gorenstein and Harada, who classified the groups with sectional 2-rank at most 4. Most of the cases of 2-rank at most 2 had been done by the time Gorenstein announced his program. - -# The semisimplicity of 2-layers. 
The problem is to prove that the 2-layer of the centralizer of an involution in a simple group is semisimple. - -# Standard form in odd characteristic. If a group has an involution with a 2-component that is a group of Lie type of odd characteristic, the goal is to show that it has a centralizer of involution in "standard form", meaning that a centralizer of involution has a component that is of Lie type in odd characteristic and also has a centralizer of 2-rank 1. - -# Classification of groups of odd type. The problem is to show that if a group has a centralizer of involution in "standard form" then it is a group of Lie type of odd characteristic. This was solved by Aschbacher's classical involution theorem. - -# Quasi-standard form. - -# Central involutions. - -# Classification of alternating groups. - -# Some sporadic groups. - -# Thin groups. The simple thin finite groups, those with 2-local p-rank at most 1 for odd primes p, were classified by Aschbacher in 1978. - -# Groups with a strongly p-embedded subgroup for p odd. - -# The signalizer functor method for odd primes. The main problem is to prove a signalizer functor theorem for nonsolvable signalizer functors. This was solved by McBride in 1982. - -# Groups of characteristic p type. This is the problem of groups with a strongly p-embedded 2-local subgroup with p odd, which was handled by Aschbacher. - -# Quasithin groups. A quasithin group is one whose 2-local subgroups have p-rank at most 2 for all odd primes p, and the problem is to classify the simple ones of characteristic 2 type. This was completed by Aschbacher and Smith in 2004. - -# Groups of low 2-local 3-rank. This was essentially solved by Aschbacher's trichotomy theorem for groups with e(G)=3. The main change is that 2-local 3-rank is replaced by 2-local p-rank for odd primes. - -# Centralizers of 3-elements in standard form. This was essentially done by the trichotomy theorem. - -# Classification of simple groups of characteristic 2 type. This was handled by the Gilman–Griess theorem, with 3-elements replaced by p-elements for odd primes. - -The proof of the theorem, as it stood around 1985 or so, can be called first generation. Because of the extreme length of the first generation proof, much effort has been devoted to finding a simpler proof, called a second-generation classification proof. This effort, called "revisionism", was originally led by Daniel Gorenstein. - -As of 2021, nine volumes of the second generation proof have been published (Gorenstein, Lyons & Solomon 1994, 1996, 1998, 1999, 2002, 2005, 2018a, 2018b, 2021). In 2012 Solomon estimated that the project would need another 5 volumes, but said that progress on them was slow. It is estimated that the new proof will eventually fill approximately 5,000 pages. (This length stems in part from the second generation proof being written in a more relaxed style.) However, with the publication of volume 9 of the GLS series, and including the Aschbacher–Smith contribution, this estimate was already reached, with several more volumes still in preparation (the rest of what was originally intended for volume 9, plus projected volumes 10 and 11).
Aschbacher and Smith wrote their two volumes devoted to the quasithin case in such a way that those volumes can be part of the second generation proof. - -Gorenstein and his collaborators have given several reasons why a simpler proof is possible. - -* The most important is that the correct, final statement of the theorem is now known. Simpler techniques can be applied that are known to be adequate for the types of groups we know to be finite simple. In contrast, those who worked on the first generation proof did not know how many sporadic groups there were, and in fact some of the sporadic groups (e.g., the Janko groups) were discovered while proving other cases of the classification theorem. As a result, many of the pieces of the theorem were proved using techniques that were overly general. - -*Because the conclusion was unknown, the first generation proof consists of many stand-alone theorems dealing with important special cases. Much of the work of proving these theorems was devoted to the analysis of numerous special cases. Given a larger, orchestrated proof, dealing with many of these special cases can be postponed until the most powerful assumptions can be applied. The price paid under this revised strategy is that these first generation theorems no longer have comparatively short proofs, but instead rely on the complete classification. - -*Many first generation theorems overlap, and so divide the possible cases in inefficient ways. As a result, families and subfamilies of finite simple groups were identified multiple times. The revised proof eliminates these redundancies by relying on a different subdivision of cases. - -*Finite group theorists have more experience at this sort of exercise, and have new techniques at their disposal. - -Aschbacher has called the work on the classification problem by Ulrich Meierfrankenfeld, Bernd Stellmacher, Gernot Stroth, and a few others a third generation program. One goal of this is to treat all groups in characteristic 2 uniformly using the amalgam method. - -Gorenstein has discussed some of the reasons why there might not be a short proof of the classification similar to the classification of compact Lie groups. - -*The most obvious reason is that the list of simple groups is quite complicated: with 26 sporadic groups, there are likely to be many special cases that have to be considered in any proof. So far no one has found a clean uniform description of the finite simple groups similar to the parameterization of the compact Lie groups by Dynkin diagrams. - -*Atiyah and others have suggested that the classification ought to be simplified by constructing some geometric object that the groups act on and then classifying these geometric structures. The problem is that no one has been able to suggest an easy way to find such a geometric structure associated with a simple group. In some sense, the classification does work by finding geometric structures such as BN-pairs, but this only comes at the end of a very long and difficult analysis of the structure of a finite simple group. - -*Another suggestion for simplifying the proof is to make greater use of representation theory. The problem here is that representation theory seems to require very tight control over the subgroups of a group in order to work well. For groups of small rank, one has such control and representation theory works very well, but for groups of larger rank no one has succeeded in using it to simplify the classification.
In the early days of the classification, a considerable effort was made to use representation theory, but this never achieved much success in the higher rank case. - -This section lists some results that have been proved using the classification of finite simple groups. - -*The Schreier conjecture - -*The signalizer functor theorem - -*The B conjecture - -*The Schur–Zassenhaus theorem for all groups (though this only uses the Feit–Thompson theorem). - -*A transitive permutation group on a finite set with more than 1 element has a fixed-point-free element of prime power order. - -*The classification of 2-transitive permutation groups. - -*The classification of rank 3 permutation groups. - -*The Sims conjecture - -*Frobenius's conjecture on the number of solutions of $x^n = 1$. diff --git a/wiki/wikipedia/757.txt b/wiki/wikipedia/757.txt deleted file mode 100644 index c9d45b996d07e4788d117c2c229f944f98bbbdba..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/757.txt +++ /dev/null @@ -1,178 +0,0 @@ -In quantum mechanics, the Hellmann–Feynman theorem relates the derivative of the total energy with respect to a parameter to the expectation value of the derivative of the Hamiltonian with respect to that same parameter. According to the theorem, once the spatial distribution of the electrons has been determined by solving the Schrödinger equation, all the forces in the system can be calculated using classical electrostatics. - -The theorem has been proven independently by many authors, including Paul Güttinger (1932), Wolfgang Pauli (1933), Hans Hellmann (1937) and Richard Feynman (1939). - -The theorem states - -{{NumBlk|:|$\frac{\mathrm{d} E_{\lambda}}{\mathrm{d}{\lambda}}=\bigg\langle\psi_\lambda\bigg|\frac{\mathrm{d}\hat{H}_{\lambda}}{\mathrm{d}\lambda}\bigg|\psi_\lambda\bigg\rangle,$|}} - -where - -*$\hat{H}_{\lambda}$ is a Hamiltonian operator depending upon a continuous parameter $\lambda$, - -*$|\psi_\lambda\rangle$ is an eigenstate (eigenfunction) of the Hamiltonian, depending implicitly upon $\lambda$, - -*$E_{\lambda}$ is the energy (eigenvalue) of the state $|\psi_\lambda\rangle$, i.e. $\hat{H}_{\lambda}|\psi_\lambda\rangle = E_{\lambda}|\psi_\lambda\rangle$. - -This proof of the Hellmann–Feynman theorem requires that the wavefunction be an eigenfunction of the Hamiltonian under consideration; however, one can also prove more generally that the theorem holds for non-eigenfunction wavefunctions which are stationary (partial derivative is zero) for all relevant variables (such as orbital rotations). The Hartree–Fock wavefunction is an important example of an approximate eigenfunction that still satisfies the Hellmann–Feynman theorem. A notable example where the Hellmann–Feynman theorem is not applicable is finite-order Møller–Plesset perturbation theory, which is not variational. - -The proof also employs an identity of normalized wavefunctions - that derivatives of the overlap of a wavefunction with itself must be zero. Using Dirac's bra–ket notation these two conditions are written as -$$ -\hat{H}_{\lambda}|\psi_\lambda\rangle = E_{\lambda}|\psi_\lambda\rangle, -$$ -$$ -\langle\psi_\lambda|\psi_\lambda\rangle = 1 \Rightarrow \frac{\mathrm{d}}{\mathrm{d}\lambda}\langle\psi_\lambda|\psi_\lambda\rangle =0.
-$$ - -The proof then follows through an application of the derivative product rule to the expectation value of the Hamiltonian viewed as a function of λ: - - - -\begin{align} - -\frac{\mathrm{d} E_{\lambda}}{\mathrm{d}\lambda} &= \frac{\mathrm{d}}{\mathrm{d}\lambda}\langle\psi_\lambda|\hat{H}_{\lambda}|\psi_\lambda\rangle \\ - -&=\bigg\langle\frac{\mathrm{d}\psi_\lambda}{\mathrm{d}\lambda}\bigg|\hat{H}_{\lambda}\bigg|\psi_\lambda\bigg\rangle + \bigg\langle\psi_\lambda\bigg|\hat{H}_{\lambda}\bigg|\frac{\mathrm{d}\psi_\lambda}{\mathrm{d}\lambda}\bigg\rangle + \bigg\langle\psi_\lambda\bigg|\frac{\mathrm{d}\hat{H}_{\lambda}}{\mathrm{d}\lambda}\bigg|\psi_\lambda\bigg\rangle \\ - -&=E_{\lambda}\bigg\langle\frac{\mathrm{d}\psi_\lambda}{\mathrm{d}\lambda}\bigg|\psi_\lambda\bigg\rangle + E_{\lambda}\bigg\langle\psi_\lambda\bigg|\frac{\mathrm{d}\psi_\lambda}{\mathrm{d}\lambda}\bigg\rangle + \bigg\langle\psi_\lambda\bigg|\frac{\mathrm{d}\hat{H}_{\lambda}}{\mathrm{d}\lambda}\bigg|\psi_\lambda\bigg\rangle \\ - -&=E_{\lambda}\frac{\mathrm{d}}{\mathrm{d}\lambda}\langle\psi_\lambda|\psi_\lambda\rangle + \bigg\langle\psi_\lambda\bigg|\frac{\mathrm{d}\hat{H}_{\lambda}}{\mathrm{d}\lambda}\bigg|\psi_\lambda\bigg\rangle \\ - -&=\bigg\langle\psi_\lambda\bigg|\frac{\mathrm{d}\hat{H}_{\lambda}}{\mathrm{d}\lambda}\bigg|\psi_\lambda\bigg\rangle. - -\end{align} - - - -The Hellmann–Feynman theorem is actually a direct, and to some extent trivial, consequence of the variational principle (the Rayleigh-Ritz variational principle) from which the Schrödinger equation may be derived. This is why the Hellmann–Feynman theorem holds for wave-functions (such as the Hartree–Fock wave-function) that, though not eigenfunctions of the Hamiltonian, do derive from a variational principle. This is also why it holds, e.g., in density functional theory, which is not wave-function based and for which the standard derivation does not apply. - -According to the Rayleigh–Ritz variational principle, the eigenfunctions of the Schrödinger equation are stationary points of the functional (which is nicknamed Schrödinger functional for brevity): - -{{NumBlk|:|$E[\psi,\lambda]=\frac{\langle\psi|\hat{H}_{\lambda}|\psi\rangle}{\langle\psi|\psi\rangle}.$|}} - -The eigenvalues are the values that the Schrödinger functional takes at the stationary points: - -{{NumBlk|:|$E_{\lambda}=E[\psi_{\lambda},\lambda],$|}} - -where $\psi_{\lambda} $ satisfies the variational condition: - -{{NumBlk|:|$\left.\frac{\delta E[\psi,\lambda]}{\delta\psi(x)}\right|_{\psi=\psi_{\lambda}}=0.$|}} - -By differentiating Eq. (3) using the chain rule, one obtains: - -{{NumBlk|:|$ \frac{dE_{\lambda}}{d\lambda}=\frac{\partial E[\psi_{\lambda},\lambda]}{\partial\lambda}+\int\frac{\delta E[\psi,\lambda]}{\delta\psi(x)}\frac{d\psi_{\lambda}(x)}{d\lambda}dx. $|}} - -Due to the variational condition, Eq. (4), the second term in Eq. (5) vanishes. In one sentence, the Hellmann–Feynman theorem states that the derivative of the stationary values of a function(al) with respect to a parameter on which it may depend, can be computed from the explicit dependence only, disregarding the implicit one. On account of the fact that the Schrödinger functional can only depend explicitly on an external parameter through the Hamiltonian, Eq. (1) trivially follows. - -The most common application of the Hellmann–Feynman theorem is to the calculation of intramolecular forces in molecules. 
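The content of the theorem is easy to check numerically before turning to that application. The sketch below is a minimal illustration under assumed parameters (a model Hamiltonian H(λ) = p²/2 + λx²/2 with ħ = m = 1, discretized on a grid; none of this comes from the text): it compares a finite-difference derivative of the ground-state energy with the expectation value of ∂H/∂λ = x²/2, for which the exact answer is 1/(4√λ).

```python
# Minimal numerical check of the Hellmann-Feynman theorem for a harmonic
# model Hamiltonian H(lam) = p^2/2 + lam*x^2/2 (hbar = m = 1).  Exact ground
# energy is sqrt(lam)/2, so dE/dlam = 1/(4*sqrt(lam)) = <x^2>/2.
import numpy as np

def ground_state(lam, n=1000, L=8.0):
    x, dx = np.linspace(-L, L, n, retstep=True)
    # Second-difference Laplacian for the kinetic term -(1/2) d^2/dx^2.
    T = (-np.eye(n, k=1) + 2 * np.eye(n) - np.eye(n, k=-1)) / (2 * dx**2)
    H = T + np.diag(0.5 * lam * x**2)
    E, psi = np.linalg.eigh(H)
    p = psi[:, 0] / np.sqrt(dx)          # normalize so sum |p|^2 dx = 1
    return E[0], x, p, dx

lam, h = 1.3, 1e-4
E_minus, *_ = ground_state(lam - h)
E_plus, *_ = ground_state(lam + h)
E, x, p, dx = ground_state(lam)

dE_numeric = (E_plus - E_minus) / (2 * h)        # finite-difference dE/dlam
dH_expect = np.sum(p**2 * 0.5 * x**2) * dx       # <psi| dH/dlam |psi>
# The first two agree essentially exactly (the identity also holds for the
# discretized H); both match the continuum value to grid accuracy.
print(dE_numeric, dH_expect, 1 / (4 * np.sqrt(lam)))
```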
This application allows for the calculation of equilibrium geometries - the nuclear coordinates where the forces acting upon the nuclei, due to the electrons and other nuclei, vanish. The parameter λ corresponds to the coordinates of the nuclei. For a molecule with 1 ≤ i ≤ N electrons with coordinates {ri}, and 1 ≤ α ≤ M nuclei, each located at a specified point Rα and with nuclear charge Zα, the clamped nucleus Hamiltonian is -$$ -\hat{H}=\hat{T} + \hat{U} - \sum_{i=1}^{N}\sum_{\alpha=1}^{M}\frac{Z_{\alpha}}{|\mathbf{r}_{i}-\mathbf{R}_{\alpha}|} + \sum_{\alpha}^{M}\sum_{\beta>\alpha}^{M}\frac{Z_{\alpha}Z_{\beta}}{|\mathbf{R}_{\alpha}-\mathbf{R}_{\beta}|}. -$$ - -The x-component of the force acting on a given nucleus is equal to the negative of the derivative of the total energy with respect to that coordinate. Employing the Hellmann–Feynman theorem, this is equal to -$$ -F_{X_{\gamma}} = -\frac{\partial E}{\partial X_{\gamma}} = -\bigg\langle\psi\bigg|\frac{\partial\hat{H}}{\partial X_{\gamma}}\bigg|\psi\bigg\rangle. -$$ - -Only two components of the Hamiltonian contribute to the required derivative - the electron-nucleus and nucleus-nucleus terms. Differentiating the Hamiltonian yields - - - -\begin{align} - -\frac{\partial\hat{H}}{\partial X_{\gamma}} &= \frac{\partial}{\partial X_{\gamma}} \left(- \sum_{i=1}^{N}\sum_{\alpha=1}^{M}\frac{Z_{\alpha}}{|\mathbf{r}_{i}-\mathbf{R}_{\alpha}|} + \sum_{\alpha}^{M}\sum_{\beta>\alpha}^{M}\frac{Z_{\alpha}Z_{\beta}}{|\mathbf{R}_{\alpha}-\mathbf{R}_{\beta}|}\right), \\ - -&=-Z_{\gamma}\sum_{i=1}^{N}\frac{x_{i}-X_{\gamma}}{|\mathbf{r}_{i}-\mathbf{R}_{\gamma}|^{3}} +Z_{\gamma}\sum_{\alpha\neq\gamma}^{M}Z_{\alpha}\frac{X_{\alpha}-X_{\gamma}}{|\mathbf{R}_{\alpha}-\mathbf{R}_{\gamma}|^{3}}. - -\end{align} - - - -Insertion of this into the Hellmann–Feynman theorem returns the x-component of the force on the given nucleus in terms of the electronic density ρ(r) and the atomic coordinates and nuclear charges: -$$ -F_{X_{\gamma}} = Z_{\gamma}\left(\int\mathrm{d}\mathbf{r}\ \rho(\mathbf{r})\frac{x-X_{\gamma}}{|\mathbf{r}-\mathbf{R}_{\gamma}|^{3}} - \sum_{\alpha\neq\gamma}^{M}Z_{\alpha}\frac{X_{\alpha}-X_{\gamma}}{|\mathbf{R}_{\alpha}-\mathbf{R}_{\gamma}|^{3}}\right). -$$ - -An alternative approach for applying the Hellmann–Feynman theorem is to promote a fixed or discrete parameter which appears in a Hamiltonian to a continuous variable solely for the mathematical purpose of taking a derivative. Possible parameters are physical constants or discrete quantum numbers. As an example, the radial Schrödinger equation for a hydrogen-like atom is -$$ -\hat{H}_{l}=-\frac{\hbar^{2}}{2\mu r^2}\left(\frac{\mathrm{d}}{\mathrm{d}r}\left(r^{2}\frac{\mathrm{d}}{\mathrm{d}r}\right)-l(l+1)\right) -\frac{Ze^{2}}{r}, -$$ - -which depends upon the discrete azimuthal quantum number l. Promoting l to a continuous parameter allows for the derivative of the Hamiltonian to be taken: -$$ -\frac{\partial \hat{H}_{l}}{\partial l} = \frac{\hbar^{2}}{2\mu r^{2}}(2l+1).
-$$ - -The Hellmann–Feynman theorem then allows for the determination of the expectation value of $\frac{1}{r^{2}}$ for hydrogen-like atoms: - - - -\begin{align} - -\bigg\langle\psi_{nl}\bigg|\frac{1}{r^{2}}\bigg|\psi_{nl}\bigg\rangle &= \frac{2\mu}{\hbar^{2}}\frac{1}{2l+1}\bigg\langle\psi_{nl}\bigg|\frac{\partial \hat{H}_{l}}{\partial l}\bigg|\psi_{nl}\bigg\rangle \\ - -&=\frac{2\mu}{\hbar^{2}}\frac{1}{2l+1}\frac{\partial E_{n}}{\partial l} \\ - -&=\frac{2\mu}{\hbar^{2}}\frac{1}{2l+1}\frac{\partial E_{n}}{\partial n}\frac{\partial n}{\partial l} \\ - -&=\frac{2\mu}{\hbar^{2}}\frac{1}{2l+1}\frac{Z^{2}\mu e^{4}}{\hbar^{2}n^{3}} \\ - -&=\frac{Z^{2}\mu^{2}e^{4}}{\hbar^{4}n^{3}(l+1/2)}. - -\end{align} - - - -In order to compute the energy derivative, the way $n$ depends on $l$ has to be known. These quantum numbers are usually independent, but here the solutions must be varied so as to keep the number of nodes in the wavefunction fixed. The number of radial nodes is $n-l-1$; holding it fixed as $l$ varies gives $ \partial n/\partial l=1$. - -At the end of his paper, Feynman states that, "Van der Waals' forces can also be interpreted as arising from charge distributions with higher concentration between the nuclei. The Schrödinger perturbation theory for two interacting atoms at a separation R, large compared to the radii of the atoms, leads to the result that the charge distribution of each is distorted from central symmetry, a dipole moment of order 1/R7 being induced in each atom. The negative charge distribution of each atom has its center of gravity moved slightly toward the other. It is not the interaction of these dipoles which leads to van der Waals's force, but rather the attraction of each nucleus for the distorted charge distribution of its own electrons that gives the attractive 1/R7 force." - -For a general time-dependent wavefunction satisfying the time-dependent Schrödinger equation, the Hellmann–Feynman theorem is not valid. - -However, the following identity holds: - - - -\bigg\langle\Psi_\lambda(t)\bigg|\frac{\partial H_\lambda}{\partial\lambda}\bigg|\Psi_\lambda(t)\bigg\rangle = i \hbar \frac{\partial}{\partial t}\bigg\langle\Psi_\lambda(t)\bigg|\frac{\partial \Psi_\lambda(t)}{\partial \lambda}\bigg\rangle - - - -for - - - -i\hbar\frac{\partial\Psi_\lambda(t)}{\partial t}=H_\lambda\Psi_\lambda(t) - - - -The proof only relies on the Schrödinger equation and the assumption that partial derivatives with respect to λ and t can be interchanged.
- - - -\begin{align} - -\bigg\langle\Psi_\lambda(t)\bigg|\frac{\partial H_\lambda}{\partial\lambda}\bigg|\Psi_\lambda(t)\bigg\rangle &= - -\frac{\partial}{\partial\lambda}\langle\Psi_\lambda(t)|H_\lambda|\Psi_\lambda(t)\rangle - -- \bigg\langle\frac{\partial\Psi_\lambda(t)}{\partial\lambda}\bigg|H_\lambda\bigg|\Psi_\lambda(t)\bigg\rangle - -- \bigg\langle\Psi_\lambda(t)\bigg|H_\lambda\bigg|\frac{\partial\Psi_\lambda(t)}{\partial\lambda}\bigg\rangle \\ - -&= i\hbar \frac{\partial}{\partial\lambda}\bigg\langle\Psi_\lambda(t)\bigg|\frac{\partial\Psi_\lambda(t)}{\partial t}\bigg\rangle - -- i\hbar\bigg\langle\frac{\partial\Psi_\lambda(t)}{\partial\lambda}\bigg|\frac{\partial\Psi_\lambda(t)}{\partial t}\bigg\rangle - -+ i\hbar\bigg\langle\frac{\partial\Psi_\lambda(t)}{\partial t}\bigg|\frac{\partial\Psi_\lambda(t)}{\partial\lambda}\bigg\rangle \\ - -&= i\hbar \bigg\langle\Psi_\lambda(t)\bigg| \frac{\partial^2\Psi_\lambda(t)}{\partial\lambda \partial t}\bigg\rangle - -+ i\hbar\bigg\langle\frac{\partial\Psi_\lambda(t)}{\partial t}\bigg|\frac{\partial\Psi_\lambda(t)}{\partial\lambda}\bigg\rangle \\ - -&= i \hbar \frac{\partial}{\partial t}\bigg\langle\Psi_\lambda(t)\bigg|\frac{\partial \Psi_\lambda(t)}{\partial \lambda}\bigg\rangle - -\end{align} - - diff --git a/wiki/wikipedia/758.txt b/wiki/wikipedia/758.txt deleted file mode 100644 index 765f142d6bf6ea69f227f45fdd58f0e9ae4c0952..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/758.txt +++ /dev/null @@ -1,155 +0,0 @@ -In information theory, the Shannon–Hartley theorem tells the maximum rate at which information can be transmitted over a communications channel of a specified bandwidth in the presence of noise. It is an application of the noisy-channel coding theorem to the archetypal case of a continuous-time analog communications channel subject to Gaussian noise. The theorem establishes Shannon's channel capacity for such a communication link, a bound on the maximum amount of error-free information per time unit that can be transmitted with a specified bandwidth in the presence of the noise interference, assuming that the signal power is bounded, and that the Gaussian noise process is characterized by a known power or power spectral density. The law is named after Claude Shannon and Ralph Hartley. - -The Shannon–Hartley theorem states the channel capacity $C$, meaning the theoretical tightest upper bound on the information rate of data that can be communicated at an arbitrarily low error rate using an average received signal power $S$ through an analog communication channel subject to additive white Gaussian noise (AWGN) of power $N$: -$$ -C = B \log_2 \left( 1+\frac{S}{N} \right) -$$ - -where - -* $C$ is the channel capacity in bits per second, a theoretical upper bound on the net bit rate (information rate, sometimes denoted $I$) excluding error-correction codes; - -* $B$ is the bandwidth of the channel in hertz (passband bandwidth in case of a bandpass signal); - -* $S$ is the average received signal power over the bandwidth (in case of a carrier-modulated passband transmission, often denoted C), measured in watts (or volts squared); - -* $N$ is the average power of the noise and interference over the bandwidth, measured in watts (or volts squared); and - -* $S/N$ is the signal-to-noise ratio (SNR) or the carrier-to-noise ratio (CNR) of the communication signal to the noise and interference at the receiver (expressed as a linear power ratio, not as logarithmic decibels). 
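As a quick worked example of the formula (the bandwidth and SNR figures are assumptions chosen for illustration, roughly those of a voice-grade telephone channel):

```python
# Worked example of the Shannon-Hartley formula C = B log2(1 + S/N).
import math

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Channel capacity in bit/s, with the SNR given in decibels."""
    snr_linear = 10 ** (snr_db / 10)   # convert dB to a linear power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

print(shannon_capacity(3000, 30))      # B = 3 kHz, SNR = 30 dB -> ~29.9 kbit/s
```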
- -During the late 1920s, Harry Nyquist and Ralph Hartley developed a handful of fundamental ideas related to the transmission of information, particularly in the context of the telegraph as a communications system. At the time, these concepts were powerful breakthroughs individually, but they were not part of a comprehensive theory. In the 1940s, Claude Shannon developed the concept of channel capacity, based in part on the ideas of Nyquist and Hartley, and then formulated a complete theory of information and its transmission. - -In 1927, Nyquist determined that the number of independent pulses that could be put through a telegraph channel per unit time is limited to twice the bandwidth of the channel. In symbolic notation, -$$ -f_p \le 2B -$$ - -where $f_p$ is the pulse frequency (in pulses per second) and $B$ is the bandwidth (in hertz). The quantity $2B$ later came to be called the Nyquist rate, and transmitting at the limiting pulse rate of $2B$ pulses per second as signalling at the Nyquist rate. Nyquist published his results in 1928 as part of his paper "Certain topics in Telegraph Transmission Theory". - -During 1928, Hartley formulated a way to quantify information and its line rate (also known as data signalling rate R bits per second). This method, later known as Hartley's law, became an important precursor for Shannon's more sophisticated notion of channel capacity. - -Hartley argued that the maximum number of distinguishable pulse levels that can be transmitted and received reliably over a communications channel is limited by the dynamic range of the signal amplitude and the precision with which the receiver can distinguish amplitude levels. Specifically, if the amplitude of the transmitted signal is restricted to the range of [−A ... +A] volts, and the precision of the receiver is ±ΔV volts, then the maximum number of distinct pulses M is given by -$$ -M = 1 + { A \over \Delta V } -$$. - -By taking information per pulse in bit/pulse to be the base-2-logarithm of the number of distinct messages M that could be sent, Hartley constructed a measure of the line rate R as: -$$ - R = f_p \log_2(M), -$$ - -where $f_p$ is the pulse rate, also known as the symbol rate, in symbols/second or baud. - -Hartley then combined the above quantification with Nyquist's observation that the number of independent pulses that could be put through a channel of bandwidth $B$ hertz was $2B$ pulses per second, to arrive at his quantitative measure for achievable line rate. - -Hartley's law is sometimes quoted as just a proportionality between the analog bandwidth, $B$, in Hertz and what today is called the digital bandwidth, $R$, in bit/s. - -Other times it is quoted in this more quantitative form, as an achievable line rate of $R$ bits per second: -$$ - R \le 2B \log_2(M). -$$ - -Hartley did not work out exactly how the number M should depend on the noise statistics of the channel, or how the communication could be made reliable even when individual symbol pulses could not be reliably distinguished to M levels; with Gaussian noise statistics, system designers had to choose a very conservative value of $M$ to achieve a low error rate. - -The concept of an error-free capacity awaited Claude Shannon, who built on Hartley's observations about a logarithmic measure of information and Nyquist's observations about the effect of bandwidth limitations. - -Hartley's rate result can be viewed as the capacity of an errorless M-ary channel of $2B$ symbols per second. Some authors refer to it as a capacity. 
But such an errorless channel is an idealization, and if M is chosen small enough to make the noisy channel nearly errorless, the result is necessarily less than the Shannon capacity of the noisy channel of bandwidth $B$, which is the Hartley–Shannon result that followed later. - -Claude Shannon's development of information theory during World War II provided the next big step in understanding how much information could be reliably communicated through noisy channels. Building on Hartley's foundation, Shannon's noisy channel coding theorem (1948) describes the maximum possible efficiency of error-correcting methods versus levels of noise interference and data corruption. The proof of the theorem shows that a randomly constructed error-correcting code is essentially as good as the best possible code; the theorem is proved through the statistics of such random codes. - -Shannon's theorem shows how to compute a channel capacity from a statistical description of a channel, and establishes that given a noisy channel with capacity $C$ and information transmitted at a line rate $R$, then if -$$ - R < C -$$ - -there exists a coding technique which allows the probability of error at the receiver to be made arbitrarily small. This means that theoretically, it is possible to transmit information nearly without error up to nearly a limit of $C$ bits per second. - -The converse is also important. If -$$ - R > C -$$ - -the probability of error at the receiver increases without bound as the rate is increased. So no useful information can be transmitted beyond the channel capacity. The theorem does not address the rare situation in which rate and capacity are equal. - -The Shannon–Hartley theorem establishes what that channel capacity is for a finite-bandwidth continuous-time channel subject to Gaussian noise. It connects Hartley's result with Shannon's channel capacity theorem in a form that is equivalent to specifying the M in Hartley's line rate formula in terms of a signal-to-noise ratio, but achieving reliability through error-correction coding rather than through reliably distinguishable pulse levels. - -If there were such a thing as a noise-free analog channel, one could transmit unlimited amounts of error-free data over it per unit of time (Note that an infinite-bandwidth analog channel couldn’t transmit unlimited amounts of error-free data absent infinite signal power). Real channels, however, are subject to limitations imposed by both finite bandwidth and nonzero noise. - -Bandwidth and noise affect the rate at which information can be transmitted over an analog channel. Bandwidth limitations alone do not impose a cap on the maximum information rate because it is still possible for the signal to take on an indefinitely large number of different voltage levels on each symbol pulse, with each slightly different level being assigned a different meaning or bit sequence. Taking into account both noise and bandwidth limitations, however, there is a limit to the amount of information that can be transferred by a signal of a bounded power, even when sophisticated multi-level encoding techniques are used. - -In the channel considered by the Shannon–Hartley theorem, noise and signal are combined by addition. That is, the receiver measures a signal that is equal to the sum of the signal encoding the desired information and a continuous random variable that represents the noise. This addition creates uncertainty as to the original signal's value. 
If the receiver has some information about the random process that generates the noise, one can in principle recover the information in the original signal by considering all possible states of the noise process. In the case of the Shannon–Hartley theorem, the noise is assumed to be generated by a Gaussian process with a known variance. Since the variance of a Gaussian process is equivalent to its power, it is conventional to call this variance the noise power. - -Such a channel is called the Additive White Gaussian Noise channel, because Gaussian noise is added to the signal; "white" means equal amounts of noise at all frequencies within the channel bandwidth. Such noise can arise both from random sources of energy and also from coding and measurement error at the sender and receiver respectively. Since sums of independent Gaussian random variables are themselves Gaussian random variables, this conveniently simplifies analysis, if one assumes that such error sources are also Gaussian and independent. - -Comparing the channel capacity to the information rate from Hartley's law, we can find the effective number of distinguishable levels M: -$$ -2B \log_2(M) = B \log_2 \left( 1+\frac{S}{N} \right) -$$ -$$ -M = \sqrt{1+\frac{S}{N}}. -$$ - -The square root effectively converts the power ratio back to a voltage ratio, so the number of levels is approximately proportional to the ratio of signal RMS amplitude to noise standard deviation. - -This similarity in form between Shannon's capacity and Hartley's law should not be interpreted to mean that $M$ pulse levels can be literally sent without any confusion. More levels are needed to allow for redundant coding and error correction, but the net data rate that can be approached with coding is equivalent to using that $M$ in Hartley's law. - -In the simple version above, the signal and noise are fully uncorrelated, in which case $S+N$ is the total power of the received signal and noise together. A generalization of the above equation for the case where the additive noise is not white (or that the S/N is not constant with frequency over the bandwidth) is obtained by treating the channel as many narrow, independent Gaussian channels in parallel: -$$ - C = \int_{0}^B \log_2 \left( 1+\frac{S(f)}{N(f)} \right) df -$$ - -where - -* $C$ is the channel capacity in bits per second; - -* $B$ is the bandwidth of the channel in Hz; - -* $S(f)$ is the signal power spectrum - -* $N(f)$ is the noise power spectrum - -* $f$ is frequency in Hz. - -Note: the theorem only applies to Gaussian stationary process noise. This formula's way of introducing frequency-dependent noise cannot describe all continuous-time noise processes. For example, consider a noise process consisting of adding a random wave whose amplitude is 1 or -1 at any point in time, and a channel that adds such a wave to the source signal. Such a wave's frequency components are highly dependent. Though such a noise may have a high power, it is fairly easy to transmit a continuous signal with much less power than one would need if the underlying noise was a sum of independent noises in each frequency band. 
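-
-The parallel-channels integral above is straightforward to evaluate numerically. The sketch below is my illustration, not from the article: the spectra $S(f)$ and $N(f)$ are made-up example values, and the integral of $\log_2(1+S(f)/N(f))$ is taken with a simple trapezoid rule:
-
-```python
-import numpy as np
-
-B = 4000.0                       # bandwidth in Hz (example value)
-f = np.linspace(0.0, B, 10001)   # frequency grid across the band
-S = np.full_like(f, 1e-6)        # flat signal spectrum, W/Hz (assumption)
-N = 1e-8 * (1.0 + f / B)         # noise density rising with frequency (assumption)
-
-g = np.log2(1.0 + S / N)         # per-frequency spectral efficiency, bit/s/Hz
-C = float(np.sum((g[1:] + g[:-1]) / 2 * np.diff(f)))  # trapezoid rule
-print(f"capacity ~ {C:.0f} bit/s")
-```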
-
-For large or small and constant signal-to-noise ratios, the capacity formula can be approximated:
-
-When the SNR is large ($S/N \gg 1$), the logarithm is approximated by
-$$
-\log_2 \left( 1+\frac{S}{N} \right) \approx \log_2 \frac{S}{N} = \frac{\ln 10}{\ln 2} \cdot \log_{10} \frac{S}{N} \approx 3.32 \cdot \log_{10} \frac{S}{N},
-$$
-
-in which case the capacity is logarithmic in power and approximately linear in bandwidth (not quite linear, since N increases with bandwidth, imparting a logarithmic effect). This is called the bandwidth-limited regime.
-$$
- C \approx 0.332 \cdot B \cdot \mathrm{SNR\ (in\ dB)}
-$$
-
-where
-$$
-\mathrm{SNR\ (in \ dB)} = 10\log_{10}{S \over N}.
-$$
-
-Similarly, when the SNR is small ($S/N \ll 1$), applying the approximation to the logarithm:
-$$
-\log_2 \left( 1+\frac{S}{N} \right) = \frac{1}{\ln 2} \cdot \ln \left( 1+\frac{S}{N} \right) \approx \frac{1}{\ln 2} \cdot \frac{S}{N} \approx 1.44 \cdot {S \over N};
-$$
-
-then the capacity is linear in power. This is called the power-limited regime.
-$$
- C \approx 1.44 \cdot B \cdot {S \over N}.
-$$
-
-In this low-SNR approximation, capacity is independent of bandwidth if the noise is white, of spectral density $N_0$ watts per hertz, in which case the total noise power is $N = B \cdot N_0$.
-$$
- C \approx 1.44 \cdot {S \over N_0}
-$$
-
-# At a SNR of 0 dB (signal power = noise power), the capacity in bit/s is equal to the bandwidth in hertz.
-
-# If the SNR is 20 dB, and the bandwidth available is 4 kHz, which is appropriate for telephone communications, then C = 4000 log2(1 + 100) = 4000 log2(101) = 26.63 kbit/s. Note that the value of S/N = 100 is equivalent to the SNR of 20 dB.
-
-# If the requirement is to transmit at 50 kbit/s, and a bandwidth of 10 kHz is used, then the minimum S/N required is given by 50000 = 10000 log2(1+S/N), so C/B = 5; then S/N = 2^5 − 1 = 31, corresponding to an SNR of 14.91 dB (10 log10(31)).
-
-# What is the channel capacity for a signal having a 1 MHz bandwidth, received with a SNR of −30 dB? That means a signal deeply buried in noise. −30 dB means a S/N = 10^−3. It leads to a maximal rate of information of 10^6 log2(1 + 10^−3) ≈ 1443 bit/s. These values are typical of the received ranging signals of the GPS, where the navigation message is sent at 50 bit/s (below the channel capacity for the given S/N), and whose bandwidth is spread to around 1 MHz by a pseudo-noise multiplication before transmission.
-
-# As stated above, channel capacity is proportional to the bandwidth of the channel and to the logarithm of SNR. This means channel capacity can be increased linearly either by increasing the channel's bandwidth given a fixed SNR requirement or, with fixed bandwidth, by using higher-order modulations that need a very high SNR to operate. As the modulation rate increases, the spectral efficiency improves, but at the cost of the SNR requirement. Thus, there is an exponential rise in the SNR requirement if one adopts a 16QAM or 64QAM (see: Quadrature amplitude modulation); however, the spectral efficiency improves.
diff --git a/wiki/wikipedia/759.txt b/wiki/wikipedia/759.txt
deleted file mode 100644
index 6cf423e20e2e0f87fed1e053d8dedc38d318cc76..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/759.txt
+++ /dev/null
@@ -1,205 +0,0 @@
-Longest-processing-time-first (LPT) is a greedy algorithm for job scheduling. The input to the algorithm is a set of jobs, each of which has a specific processing-time.
There is also a number m specifying the number of machines that can process the jobs. The LPT algorithm works as follows:
-
-# Order the jobs by descending order of their processing-time, such that the job with the longest processing time is first.
-
-# Schedule each job in this sequence into a machine in which the current load (= total processing-time of scheduled jobs) is smallest.
-
-Step 2 of the algorithm is essentially the list-scheduling (LS) algorithm. The difference is that LS loops over the jobs in an arbitrary order, while LPT pre-orders them by descending processing time.
-
-LPT was first analyzed by Ronald Graham in the 1960s in the context of the identical-machines scheduling problem. Later, it was applied to many other variants of the problem.
-
-LPT can also be described in a more abstract way, as an algorithm for multiway number partitioning. The input is a set S of numbers, and a positive integer m; the output is a partition of S into m subsets. LPT orders the input from largest to smallest, and puts each input in turn into the part with the smallest sum so far (a short code sketch of this procedure appears below).
-
-If the input set is S = {4, 5, 6, 7, 8} and m = 2, then the resulting partition is {8, 5, 4}, {7, 6}. If m = 3, then the resulting 3-way partition is {8}, {7, 4}, {6, 5}.
-
-LPT might not find the optimal partition. For example, in the above instance the optimal partition is {8,7}, {6,5,4}, where both sums are equal to 15. However, its suboptimality is bounded both in the worst case and in the average case; see Performance guarantees below.
-
-The running time of LPT is dominated by the sorting, which takes O(n log n) time, where n is the number of inputs.
-
-When used for identical-machines scheduling, LPT attains the following approximation ratios.
-
-In the worst case, the largest sum in the greedy partition is at most $\frac{4}{3}$ times the optimal (minimum) largest sum. Normalize the input items such that OPT=1. This implies that the sum of all items is at most m.
-
-Partition the items into large (more than 2/3), medium (between 1/3 and 2/3), and small (less than 1/3). Let their numbers be nL, nM and nS. In each optimal partition, each part contains at most one large item, so nL ≤ m. Moreover, each optimal part cannot contain both a large and a medium item, or three medium items; so nM ≤ 2(m-nL).
-
-The operation of the greedy algorithm can be partitioned into three phases:
-
-# Allocating the large items - each of which is put in a different bin. Since nL ≤ m, when this phase completes, each bin contains at most one item, so the max-sum is at most 1.
-
-# Allocating the medium items. The first m-nL ones are put in empty bins, and the next m-nL ones (if any) are added into the same bins. Since nM ≤ 2(m-nL), when this phase completes, each bin contains either one large item - with sum at most 1, or at most two medium items - with sum at most 4/3.
-
-# Allocating the small items. Each item is added into the bin with the smallest sum. The smallest sum is at most the average sum, which is at most 1. Hence, once a small item is added, the new sum becomes at most 4/3.
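-
-As referenced above, the LPT procedure itself can be sketched in a few lines of Python. This is my illustration, not part of the original article; using a heap is simply one convenient way to locate the part with the smallest sum:
-
-```python
-import heapq
-
-def lpt(jobs, m):
-    """LPT: sort jobs in descending order, then repeatedly assign the
-    next job to the machine whose current load is smallest."""
-    loads = [(0, i, []) for i in range(m)]  # (load, machine index, assigned jobs)
-    heapq.heapify(loads)
-    for job in sorted(jobs, reverse=True):
-        load, i, assigned = heapq.heappop(loads)   # machine with smallest load
-        heapq.heappush(loads, (load + job, i, assigned + [job]))
-    return sorted(loads, key=lambda t: t[1])       # order by machine index
-
-for load, _, assigned in lpt([4, 5, 6, 7, 8], 2):
-    print(load, assigned)   # 17 [8, 5, 4] and 13 [7, 6], as in the example above
-```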
-
-A more detailed analysis yields a factor of $\frac{4m-1}{3m} = \frac{4}{3}-\frac{1}{3 m}$ times the optimal (minimum) largest sum (for example, when m = 2 this ratio is $7/6\approx 1.167$). The previous proof can be refined in two ways.
-
-First, one can prove that, once all large and medium items are allocated, the sum in each bin is at most 1.
-
-* If there are at most m-nL medium items, then each large and medium item is placed in a separate bin, so the sum is clearly at most 1.
-
-* Otherwise, denote the first m-nL medium items by top-medium items, and the others (at most m-nL) by bottom-medium items.
-
-* Assume first that item #m is larger than 1/2. This means that the items #1,...,#m are all larger than 1/2, so each must be in a different optimal part. Each of the bottom-medium items (items #m+1,...,#nM) must fit into an optimal part with exactly one of the items #1,...,#m. Let us call two items matchable if their sum is at most 1, so that they can fit into the same optimal part. By Hall's theorem, each subset of k bottom-medium items must be matchable to at least k of the items #1,...,#m. In particular, the item #m+1 must be matchable to item #m; items #m+1 and #m+2 must be matchable to item #m-1; and in general, item #m+k must be matchable to item #m-k+1. LPT indeed puts item #m+k in the same bin as #m-k+1, so the sum of each bin is at most 1.
-
-Second, one can prove that, when allocating the small inputs, the sum of every new bin is at most 4/3-1/(3m). There are two cases:
-
-* If the current smallest sum is at most 1-1/(3m), then the new sum - after adding one small input - is at most 1-1/(3m)+1/3 = 4/3-1/(3m).
-
-* Otherwise, all sums are larger than 1-1/(3m), so the sum of the m-1 largest bins is larger than m-1-1/3+1/(3m) = m-(4/3-1/(3m)). Since the total sum of all inputs is at most m, the new sum must be less than 4/3-1/(3m).
-
-The factor $\frac{4m-1}{3m} $ is tight.
-
-Suppose there are $2 m+1$ inputs (where m is even): $2 m-1,2 m-1, 2 m-2,2 m-2, \ldots, m+1,m+1, m,m, m$. Then the greedy algorithm returns:
-
-* $2 m-1,m,m$
-
-* $2 m-1,m$
-
-* $2 m-2,m+1$
-
-* $2 m-2,m+1$,
-
-* ...
-
-* $3 m/2,3 m/2-1$
-
-* $3 m/2,3 m/2-1$
-
-with a maximum of $4 m-1$, but the optimal partition is:
-
-* $m,m,m$
-
-* $2 m-1,m+1$
-
-* $2 m-1,m+1$
-
-* $2 m-2,m+2$
-
-* $2 m-2,m+2$
-
-* ...
-
-* $3 m/2,3 m/2$
-
-with a maximum of $3 m$.
-
-An even more detailed analysis takes into account the number of inputs in the max-sum part.
-
-# In each part of the greedy partition, the j-th highest number is at most $OPT/j$.
-
-# Suppose that, in the greedy part P with the max-sum, there are L inputs. Then, the approximation ratio of the greedy algorithm is $\frac{L+1}{L}-\frac{1}{L m} = 1 + \frac{1}{L} - \frac{1}{L m} $.
-
-The proof is by contradiction. We consider a minimal counterexample, that is, a counterexample with a smallest m and fewest input numbers. Denote the greedy partition by P1,...,Pm, and the optimal partition by Q1,...,Qm. Some properties of a minimal counterexample are:
-
-* The min-sum in the optimal partition is 4, and the min-sum in the greedy partition is less than 3 (this is just normalization - it is without loss of generality).
-
-* The max-sum in the greedy partition is more than 4 (since the total sum in both partitions is the same, and it is at least 4m).
-
-* If sum(Pi)≥3 for some greedy bin Pi, then Pi is not dominated by any optimal bin Qj. Proof: if Pi is dominated by Qj, then we can construct a smaller counterexample by decreasing m to m-1 and removing the items in Pi. The min-sum in the greedy partition remains less than 3. In the optimal partition, the items in Pi can be replaced by their dominating items in Qj, so the min-sum remains at least 4.
-
-* If sum(Pi)≥3 for some greedy bin Pi, then Pi contains at least two numbers.
Proof: if Pi contains only one number x, then it is dominated by the optimal bin Qj which contains x.
-
-* All inputs are smaller than 3. Proof: suppose some input x is at least 3, and the greedy algorithm puts it in some Pi. Then, since there is a bundle with sum less than 3, the greedy algorithm will not put any other input in Pi, contradicting the previous lemma.
-
-* Every greedy bin Pi contains at most one input weakly-larger than 3/2. Proof: Let Pi be the first greedy bin which is assigned two such inputs. Since inputs are assigned in descending order, Pi is the first greedy bin assigned two inputs. This means that it must contain the smallest two inputs from among the largest m+1 inputs. Moreover, since the sum of these two items is at least 3/2+3/2=3, Pi is not assigned any other input. On the other hand, by the pigeonhole principle, there must be some optimal bin Qj that contains some two inputs from among the largest m+1 inputs; so Pi is dominated by Qj.
-
-*During the run of the greedy algorithm, the sum in every bin Pi becomes at least 8/3 before the sum of any bin exceeds 4. Proof: Let y be the first input added to some bin Pi, which made its sum larger than 4. Before y was added, Pi had the smallest sum, which by assumption was smaller than 8/3; this means that y>4/3. Let T denote the set of all inputs from the first one down to y; all these inputs are larger than 4/3 too. Since Pi was smaller than 8/3, it contained exactly one item x from T. So now Pi contains exactly 2 items {x,y}, and remains with these 2 items until the algorithm ends. Let m be the number of items from the first one down to x. We now show a contradiction by counting the items in T in two ways.
-
-**First, consider the n optimal bins. If any such bin contains an item at least as large as x, then it cannot contain any other item of T, since otherwise it dominates {x,y}. Moreover, any such bin cannot contain three items from T, since the sum of any two of them is larger than 8/3, which is larger than x; and the third one is at least y, so it dominates {x,y}. Therefore, the number of items in T is at most 1*m + 2*(n-m) = 2n-m.
-
-**Now, consider the n greedy bins. When y is added to the bundle containing x, it is the bundle with the smallest sum. Therefore, all elements of T that are smaller than x, must be in a greedy bin with at least one other item of T. The same is true for x and y. Therefore, the number of items in T is at least (m-1)+2*(n-m+1) = 2n-m+1 - contradiction.
-
-*We can assume, without loss of generality, that all inputs are either smaller than 1/3, or at least 1. Proof: Suppose some input x is in [1/3,1). We replace x with 1. This obviously does not decrease the optimal min-sum. We show that it does not change the greedy min-sum. We know that some greedy bundle Pi has a final sum larger than 4. Before the last input was added into Pi, its sum was smaller than 3; so Pi became larger than 4 when some input larger than 1 was added to it. By the previous lemma, at that point the sum of all other greedy bundles was at least 8/3. The algorithm arrives at x afterwards. Once the algorithm adds x to some bin Pj, the sum of Pj becomes at least 8/3+1/3=3, so no more items are added into Pj. So Pj contains only one input with size in [1/3,1). Once x is replaced with 1, it is still inserted into Pj, and its sum is still above 3. So the greedy min-sum does not change.
-
-*We can now partition the inputs into small (less than 1/3) and large (at least 1). The set of small items in Pi is denoted by Si.
Note that, when the algorithm starts processing small items, the sum in all bundles is at least 8/3.
-
-The proof that a minimal counterexample does not exist uses a weighting scheme. Each input x is assigned a weight w(x) according to its size and greedy bundle Pi:
-
-* If x is a large item:
-
-** If x is the single large item in Pi, then w(x)=8/3.
-
-** If Pi contains exactly two items {x,y} and both of them are large, and x>y, and sum(Pi)≥3, then w(x)=8/3.
-
-** Otherwise, w(x)=4/3.
-
-* If x is a small item:
-
-** if sum(Pi)≥3, then w(x) = 4x/(3 sum(Si)); so w(Si) = 4/3.
-
-** if sum(Pi)<3, then w(x) = 2x/(3 sum(Si)); so w(Si) = 2/3.
-
-This weighting scheme has the following properties:
-
-* If x≥2, then w(x)=8/3. Proof: x is large. Suppose it is in Pi. If Pi contains another large item y, then x+y≥3 so there is no other item in Pi. Moreover, x>y since there is at most one item larger than 3/2. So w(x)=8/3.
-
-* If x<1/3, then w(x) > 2x. Proof: x is small. Suppose it is in Pi.
-
-**If sum(Pi)≥3 then, since sum(Pi) was smaller than 3 before x was added to it, it is now smaller than 10/3. But when the algorithm started processing small items, sum(Pi) was at least 8/3. This means that sum(Si) < 2/3, so w(x) = 4x/(3 sum(Si)) > 2x.
-
-**If sum(Pi)<3 then sum(Si) < 3-8/3=1/3, so w(x) = 2x/(3 sum(Si)) > 2x.
-
-* The weight of every greedy bin Pi is at most 4, and the weight of at least one greedy bin is at most 10/3. Proof:
-
-**If all inputs in Pi are large, then it contains either a single input with weight 8/3, two inputs with weights 8/3+4/3, or three inputs with weights 4/3+4/3+4/3.
-
-**If some inputs in Pi are small, then their total weight is at most 4/3. There are at most two large inputs, and their weights are either 8/3 or 4/3+4/3.
-
-**Finally, the weight of the greedy bin with sum smaller than 3 is at most 8/3 (if it has only large inputs) or 10/3 (if it has some small inputs).
-
-* The weight of every optimal bin Qj is at least 4. Proof:
-
-**If Qj contains only small items, then each of them satisfies w(x) > 2x, so w(Qj) > 2 sum(Qj) ≥ 8.
-
-**If Qj contains exactly one large item x, then it must contain some small items whose sum is at least 4-x and weight at least 8-2x. Then, either x<2 and the weight of small items is at least 8-4=4, or x in (2,3) and w(x)=8/3 and the weight of small items is at least 8-6=2. In both cases the total weight is at least 4.
-
-**If Qj contains exactly two large items x>y, and x≥2, then their total weight is at least 8/3+4/3=4. If x+y≤10/3, then the sum of small items must be at least 2/3, so the total weight is at least 4/3+4/3+2*2/3=4. Otherwise, x>5/3. So x was the first input in some greedy bin Pm. Let z be the second input added into Pm. If x+z≥3, then there are no more inputs in Pm, so w(x)=8/3 and we are done. Otherwise, x+z<3. Let v be the smallest input in some greedy bin whose sum exceeds 4. Since x<8/3, z must have been processed before v, so z≥v. Consider now any small item t in Qj, and suppose it is in some greedy bin Pi.
-
-***If sum(Pi)<3, then the fact that v was not put in Pi implies that v > 4-sum(large-items-in-Pi) > 1+sum(small-items-in-Pi). Therefore, 1+sum(Si)+x < v+x ≤ z+x < 3 and sum(Si) < 2-x. This means that 2*sum(Si) < 4-2x ≤ 4-x-y ≤ sum(small-items-in-Qj). So w(t) = 2t/(3sum(Si)) > 4t/(3sum(small-items-in-Qj)).
-
-***If sum(Pi)≥3, and sum(Si)≤1, then w(t) ≥ 4t/3 and we are done. Since sum(Pi) was less than 3 before t was added into it, sum(Pi)<3+sum(Si)/2.
The fact that v was not put in Pi implies that v > 4-sum(large-items-in-Pi) > 1+sum(small-items-in-Pi)/2. Similarly to the previous paragraph, w(t) > 4t/(3sum(small-items-in-Qj)).
-
-***Therefore, the total weight of all small items in Qj is at least 4/3, so the total weight of Qj is at least 4/3+10/3>4.
-
-**If Qj contains exactly three or more large items, then its total weight is at least 4/3+4/3+4/3=4.
-
-*The last two claims are contradictory, since the former implies that the weight of all inputs is at most 4m-2/3, and the latter implies that the weight of all inputs is at least 4m. Therefore, a counterexample does not exist.
-
-A more sophisticated analysis shows that LPT attains at least $\frac{3 m-1}{4 m-2}$ of the optimal smallest sum (for example, when m=2 the ratio is 5/6), and that this bound is tight.
-
-For random inputs, the following average-case guarantees are also known:
-
-* The largest sum is at most $1 + O(\log{\log{n}}/n)$ times the optimum almost surely, and $1 + O(1/n)$ in expectation, where $n$ is the number of inputs.
-
-* The difference between the LPT largest sum and the optimal largest sum is at most $O(\log{n}/n)$ almost surely (for uniform or negative exponential distributions), and at most $O(m^2/n)$ in expectation (for uniform distribution). These results hold also for machines with different speeds.
-
-Let Ci (for i between 1 and m) be the sum of subset i in a given partition. Instead of minimizing the objective function max(Ci), one can minimize the objective function max(f(Ci)), where f is any fixed function. Similarly, one can minimize the objective function sum(f(Ci)). Alon, Azar, Woeginger and Yadid prove that, if f satisfies the following two conditions:
-
-# A strong continuity condition called Condition F*: for every ε>0 there exists δ>0 such that, if |y-x|<δx, then |f(y)-f(x)|<εf(x).
-
-# Convexity.
-
-Then the LPT rule has a finite approximation ratio for minimizing sum(f(Ci)).
-
-Besides the simple case of identical-machines scheduling, LPT has been adapted to more general settings.
-
-In uniform-machines scheduling, different machines may have different speeds. The LPT rule assigns each job to the machine on which its completion time will be earliest (that is, LPT may assign a job to a machine with a larger current load, if this machine is so fast that it would finish that job earlier than all other machines).
-
-* Gonzalez, Ibarra and Sahni show that MLPT has the same approximation ratio for more general cardinality constraints (c>3). Currently, it is known that the approximation ratio of MLPT for general c>3 is at most 2.
-
-* Chen, He and Lin show that, for the same problem, MLPT attains at least $(3 m-1)/(4 m-2)$ of the maximum smallest sum, which is again the same ratio that LPT attains for the unconstrained problem.
-
-Another constraint is that the number of jobs on all machines should be $n/m$ rounded either up or down. In an adaptation of LPT called restricted LPT or RLPT, inputs are assigned in pairs - one to each machine (for m=2 machines).
-
-In the kernel partitioning problem, there are some m pre-specified jobs called kernels, and each kernel must be scheduled to a unique machine. An equivalent problem is scheduling when machines are available in different times: each machine i becomes available at some time ti ≥ 0 (the time ti can be thought of as the length of the kernel job).
-
-A simple heuristic algorithm, called SLPT, assigns each kernel to a different subset, and then runs the LPT algorithm.
-
-* Lee proves that this heuristic has a tight approximation ratio $\frac{3 m-1}{2 m}$ for the minimum largest sum.
He then suggests running, in the second step, a modified version of LPT, and proves that it attains an approximation ratio $\frac{4}{3}$.
-
-* Lin, Yao and He prove that this heuristic has a tight approximation ratio $\frac{2 m-1}{3 m-2}$ for the maximum smallest sum.
-
-*Shen, Wang and Wang study different objective functions for this setting, and present polynomial-time algorithms.
-
-Often, the inputs come online, and their sizes become known only when they arrive. In this case, it is not possible to sort them in advance. List scheduling is a similar algorithm that takes a list in any order, not necessarily sorted. Its approximation ratio is $\frac{2 m-1}{m} = 2-\frac{1}{m}$.
-
-A more sophisticated adaptation of LPT to an online setting attains an approximation ratio of 3/2.
-
-* Python: the numberpartitioning package has an implementation of LPT called '.
diff --git a/wiki/wikipedia/76.txt b/wiki/wikipedia/76.txt
deleted file mode 100644
index 882c248a16f3dce6ed822f98d8849ce257f0ae16..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/76.txt
+++ /dev/null
@@ -1,105 +0,0 @@
-__notoc__
-
-Cassini's identity (sometimes called Simson's identity) and Catalan's identity are mathematical identities for the Fibonacci numbers. Cassini's identity, a special case of Catalan's identity, states that for the nth Fibonacci number,
-$$
-F_{n-1}F_{n+1} - F_n^2 = (-1)^n.
-$$
-
-Catalan's identity generalizes this:
-$$
-F_n^2 - F_{n-r}F_{n+r} = (-1)^{n-r}F_r^2.
-$$
-
-Vajda's identity generalizes this:
-$$
-F_{n+i}F_{n+j} - F_{n}F_{n+i+j} = (-1)^nF_{i}F_{j}.
-$$
-
-Cassini's formula was discovered in 1680 by Giovanni Domenico Cassini, then director of the Paris Observatory, and independently proven by Robert Simson (1753). However, the identity was already published in 1960 by Dustan Everman as problem 1396 in The American Mathematical Monthly.
-
-A quick proof of Cassini's identity may be given by recognising the left side of the equation as a determinant of a 2×2 matrix of Fibonacci numbers. The result is almost immediate when the matrix is seen to be the nth power of a matrix with determinant -1:
-$$
-F_{n-1}F_{n+1} - F_n^2
-=\det\left[\begin{matrix}F_{n+1}&F_n\\F_n&F_{n-1}\end{matrix}\right]
-=\det\left[\begin{matrix}1&1\\1&0\end{matrix}\right]^n
-=\left(\det\left[\begin{matrix}1&1\\1&0\end{matrix}\right]\right)^n
-=(-1)^n.
-$$
-
-Consider the induction statement:
-$$
-F_{n-1}F_{n+1} - F_n^2 = (-1)^n
-$$
-
-The base case $n=1$ is true.
-
-Assume the statement is true for $n$. Then:
-$$
-F_{n-1}F_{n+1} - F_n^2 + F_nF_{n+1} - F_nF_{n+1} = (-1)^n
-$$
-$$
-F_{n-1}F_{n+1} + F_nF_{n+1} - F_n^2 - F_nF_{n+1} = (-1)^n
-$$
-$$
-F_{n+1}(F_{n-1} + F_n) - F_n(F_n + F_{n+1}) = (-1)^n
-$$
-$$
-F_{n+1}^2 - F_nF_{n+2} = (-1)^n
-$$
-$$
-F_nF_{n+2} - F_{n+1}^2 = (-1)^{n+1}
-$$
-
-so the statement is true for all integers $n>0$.
-
-We use Binet's Theorem, that $F_n=\frac{\phi^n-\psi^n}{\sqrt5}$, where $\phi=\frac{1+\sqrt5}{2}$ and $\psi=\frac{1-\sqrt5}{2}$.
-
-Hence, $\phi+\psi=1$ and $\phi\psi=-1$.
- -So, -$$ -5(F_n^2 - F_{n-r}F_{n+r}) -$$ -$$ -= (\phi^n-\psi^n)^2 - (\phi^{n-r}-\psi^{n-r})(\phi^{n+r}-\psi^{n+r}) -$$ -$$ -= (\phi^{2n} - 2\phi^{n}\psi^{n} +\psi^{2n}) - (\phi^{2n} - \phi^{n}\psi^{n}(\phi^{-r}\psi^{r}+\phi^{r}\psi^{-r}) + \psi^{2n}) -$$ -$$ -= - 2\phi^{n}\psi^{n} + \phi^{n}\psi^{n}(\phi^{-r}\psi^{r}+\phi^{r}\psi^{-r}) -$$ - -Using $\phi\psi=-1$, -$$ -= -(-1)^n2 + (-1)^n(\phi^{-r}\psi^{r}+\phi^{r}\psi^{-r}) -$$ - -and again as $\phi=\frac{-1}{\psi}$, -$$ -= -(-1)^n2 + (-1)^{n-r}(\psi^{2r}+\phi^{2r}) -$$ - -The Lucas number $L_n$ is defined as $L_n=\phi^n+\psi^n$, so -$$ -= -(-1)^n2 + (-1)^{n-r}L_{2r} -$$ - -Because $L_{2n} = 5 F_n^2 + 2(-1)^n$ -$$ -= -(-1)^n2 + (-1)^{n-r}(5 F_r^2 + 2(-1)^r) -$$ -$$ -= -(-1)^n2 + (-1)^{n-r}2(-1)^r + (-1)^{n-r}5 F_r^2 -$$ -$$ -= -(-1)^n2 + (-1)^n2 + (-1)^{n-r}5 F_r^2 -$$ -$$ -= (-1)^{n-r}5 F_r^2 -$$ - -Cancelling the $5$'s gives the result. diff --git a/wiki/wikipedia/760.txt b/wiki/wikipedia/760.txt deleted file mode 100644 index 0fdd2681c184b8c8ffa168fb88d13688915f4de6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/760.txt +++ /dev/null @@ -1 +0,0 @@ -TheBrain, formerly branded PersonalBrain, is a mind mapping and personal knowledge base software from TheBrain Technologies. It uses a dynamic graphical interface that maps hierarchical and network relationships. It includes the ability to add links to Web pages and files as well as notes and events using a built-in calendar. It is cross-platform, available for Windows, Unix and Unix-like operating systems, and Mac OS X. It is available in a free edition as well as in commercial editions with additional features. diff --git a/wiki/wikipedia/761.txt b/wiki/wikipedia/761.txt deleted file mode 100644 index addec232021ccaaf79dfaadec83fbb280278d824..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/761.txt +++ /dev/null @@ -1,74 +0,0 @@ -In differential geometry, Pu's inequality, proved by Pao Ming Pu, relates the area of an arbitrary Riemannian surface homeomorphic to the real projective plane with the lengths of the closed curves contained in it. - -A student of Charles Loewner, Pu proved in his 1950 thesis that every Riemannian surface $ M $ homeomorphic to the real projective plane satisfies the inequality -$$ - \operatorname{Area}(M) \geq \frac{2}{\pi} \operatorname{Systole}(M)^2 , -$$ - -where $ \operatorname{Systole}(M) $ is the systole of $ M $. - -The equality is attained precisely when the metric has constant Gaussian curvature. - -In other words, if all noncontractible loops in $ M $ have length at least $ L $, then $ \operatorname{Area}(M) \geq \frac{2}{\pi} L^2, $ and the equality holds if and only if $ M $ is obtained from a Euclidean sphere of radius $ r=L/\pi $ by identifying each point with its antipodal. - -Pu's paper also stated for the first time Loewner's inequality, a similar result for Riemannian metrics on the torus. - -Pu's original proof relies on the uniformization theorem and employs an averaging argument, as follows. - -By uniformization, the Riemannian surface $ (M,g) $ is conformally diffeomorphic to a round projective plane. 
This means that we may assume that the surface $ M $ is obtained from the Euclidean unit sphere $ S^2 $ by identifying antipodal points, and the Riemannian length element at each point $ x $ is
-$$
- \mathrm{dLength} = f(x) \mathrm{dLength}_{\text{Euclidean}},
-$$
-
-where $ \mathrm{dLength}_{\text{Euclidean}} $ is the Euclidean length element and the function $ f: S^2\to(0,+\infty) $, called the conformal factor, satisfies $ f(-x)=f(x) $.
-
-More precisely, the universal cover of $ M $ is $ S^2 $, a loop $\gamma\subseteq M $ is noncontractible if and only if its lift $ \widetilde\gamma\subseteq S^2$ goes from one point to its opposite, and the length of each curve $\gamma$ is
-$$
- \operatorname{Length}(\gamma)=\int_{\widetilde\gamma} f \mathrm{dLength}_{\text{Euclidean}}.
-$$
-
-Subject to the restriction that each of these lengths is at least $ L $, we want to find an $ f $ that minimizes the area
-$$
- \operatorname{Area}(M,g)=\int_{S^2_+} f(x)^2\mathrm{dArea}_{\text{Euclidean}}(x),
-$$
-
-where $ S^2_+ $ is the upper half of the sphere.
-
-A key observation is that if we average several different $ f_i $ that satisfy the length restriction and have the same area $ A $, then we obtain a better conformal factor $ f_{\text{new}} = \frac{1}{n} \sum_{0\leq i<n} f_i $, which also satisfies the length restriction and has area at most $ A $.
-
-After all ordinals of the form $\omega\cdot m$ comes $\omega^2$. Further on, there will be $\omega^3$, then $\omega^4$, and so on, and $\omega^\omega$, then $\omega^{\omega^\omega}$, then later $\omega^{\omega^{\omega^\omega}}$, and even later ε0 (epsilon nought) (to give a few examples of relatively small—countable—ordinals). This can be continued indefinitely (as every time one says "and so on" when enumerating ordinals, it defines a larger ordinal). The smallest uncountable ordinal is the set of all countable ordinals, expressed as ω1 or $\Omega$.
-
-In a well-ordered set, every non-empty subset contains a distinct smallest element. Given the axiom of dependent choice, this is equivalent to saying that the set is totally ordered and there is no infinite decreasing sequence (the latter being easier to visualize). In practice, the importance of well-ordering is justified by the possibility of applying transfinite induction, which says, essentially, that any property that passes on from the predecessors of an element to that element itself must be true of all elements (of the given well-ordered set). If the states of a computation (computer program or game) can be well-ordered—in such a way that each step is followed by a "lower" step—then the computation will terminate.
-
-It is inappropriate to distinguish between two well-ordered sets if they only differ in the "labeling of their elements", or more formally: if the elements of the first set can be paired off with the elements of the second set such that if one element is smaller than another in the first set, then the partner of the first element is smaller than the partner of the second element in the second set, and vice versa. Such a one-to-one correspondence is called an order isomorphism, and the two well-ordered sets are said to be order-isomorphic or similar (with the understanding that this is an equivalence relation).
-
-Formally, if a partial order ≤ is defined on the set S, and a partial order ≤' is defined on the set S' , then the posets (S,≤) and (S' ,≤') are order isomorphic if there is a bijection f that preserves the ordering. That is, f(a) ≤' f(b) if and only if a ≤ b.
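-
-For finite orders, this defining condition can be tested mechanically. The following Python sketch is my illustration (the function, the example sets, and the map are all made up), assuming f is already known to be a bijection:
-
-```python
-def is_order_isomorphism(f, le_s, le_t, S):
-    """Check that the bijection f from (S, le_s) preserves and reflects
-    order: f(a) le_t f(b) if and only if a le_s b, for all a, b in S."""
-    return all(le_t(f(a), f(b)) == le_s(a, b) for a in S for b in S)
-
-# Example: {0, 1, 2} with the usual order vs. {'a', 'b', 'c'} alphabetically.
-f = {0: 'a', 1: 'b', 2: 'c'}.get
-print(is_order_isomorphism(f, lambda a, b: a <= b, lambda a, b: a <= b, [0, 1, 2]))  # True
-```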
Provided there exists an order isomorphism between two well-ordered sets, the order isomorphism is unique: this makes it quite justifiable to consider the two sets as essentially identical, and to seek a "canonical" representative of the isomorphism type (class). This is exactly what the ordinals provide, and it also provides a canonical labeling of the elements of any well-ordered set. Every well-ordered set (S,<) is order-isomorphic to the set of ordinals less than one specific ordinal number under their natural ordering. This canonical set is the order type of (S,<).
-
-Essentially, an ordinal is intended to be defined as an isomorphism class of well-ordered sets: that is, as an equivalence class for the equivalence relation of "being order-isomorphic". There is a technical difficulty involved, however, in the fact that the equivalence class is too large to be a set in the usual Zermelo–Fraenkel (ZF) formalization of set theory. But this is not a serious difficulty. The ordinal can be said to be the order type of any set in the class.
-
-The original definition of ordinal numbers, found for example in the Principia Mathematica, defines the order type of a well-ordering as the set of all well-orderings similar (order-isomorphic) to that well-ordering: in other words, an ordinal number is genuinely an equivalence class of well-ordered sets. This definition must be abandoned in ZF and related systems of axiomatic set theory because these equivalence classes are too large to form a set. However, this definition still can be used in type theory and in Quine's axiomatic set theory New Foundations and related systems (where it affords a rather surprising alternative solution to the Burali-Forti paradox of the largest ordinal).
-
-Rather than defining an ordinal as an equivalence class of well-ordered sets, it will be defined as a particular well-ordered set that (canonically) represents the class. Thus, an ordinal number will be a well-ordered set; and every well-ordered set will be order-isomorphic to exactly one ordinal number.
-
-For each well-ordered set $T$, $a\mapsto T_{<a}$ defines an order isomorphism between $T$ and the set of all subsets of $T$ having the form $T_{<a}$, ordered by inclusion.
-
-The so-called "natural" arithmetical operations retain commutativity at the expense of continuity.
-
-Interpreted as nimbers (a game-theoretic variant of numbers), ordinals are also subject to nimber arithmetic operations.
-
-Each ordinal associates with one cardinal, its cardinality. If there is a bijection between two ordinals (e.g. ω = 1 + ω and ω + 1 > ω), then they associate with the same cardinal. Any well-ordered set having an ordinal as its order-type has the same cardinality as that ordinal. The least ordinal associated with a given cardinal is called the initial ordinal of that cardinal. Every finite ordinal (natural number) is initial, and no other ordinal associates with its cardinal. But most infinite ordinals are not initial, as many infinite ordinals associate with the same cardinal. The axiom of choice is equivalent to the statement that every set can be well-ordered, i.e. that every cardinal has an initial ordinal. In theories with the axiom of choice, the cardinal number of any set has an initial ordinal, and one may employ the Von Neumann cardinal assignment as the cardinal's representation. (However, we must then be careful to distinguish between cardinal arithmetic and ordinal arithmetic.) In set theories without the axiom of choice, a cardinal may be represented by the set of sets with that cardinality having minimal rank (see Scott's trick).
-
-One issue with Scott's trick is that it identifies the cardinal number $0$ with $\{\emptyset\}$, which in some formulations is the ordinal number $1$. It may be clearer to apply Von Neumann cardinal assignment to finite cases and to use Scott's trick for sets which are infinite or do not admit well orderings. Note that cardinal and ordinal arithmetic agree for finite numbers.
-
-The α-th infinite initial ordinal is written $\omega_\alpha$; it is always a limit ordinal. Its cardinality is written $\aleph_\alpha$. For example, the cardinality of $\omega_0 = \omega$ is $\aleph_0$, which is also the cardinality of $\omega^2$ or ε0 (all are countable ordinals). So ω can be identified with $\aleph_0$, except that the notation $\aleph_0$ is used when writing cardinals, and ω when writing ordinals (this is important since, for example, $\aleph_0^2$ = $\aleph_0$ whereas $\omega^2 > \omega$). Also, $\omega_1$ is the smallest uncountable ordinal (to see that it exists, consider the set of equivalence classes of well-orderings of the natural numbers: each such well-ordering defines a countable ordinal, and $\omega_1$ is the order type of that set), $\omega_2$ is the smallest ordinal whose cardinality is greater than $\aleph_1$, and so on, and $\omega_\omega$ is the limit of the $\omega_n$ for natural numbers n (any limit of cardinals is a cardinal, so this limit is indeed the first cardinal after all the $\omega_n$).
-
-The cofinality of an ordinal $\alpha$ is the smallest ordinal $\delta$ that is the order type of a cofinal subset of $\alpha$. Notice that a number of authors define cofinality or use it only for limit ordinals. The cofinality of a set of ordinals or any other well-ordered set is the cofinality of the order type of that set.
-
-Thus for a limit ordinal, there exists a $\delta$-indexed strictly increasing sequence with limit $\alpha$. For example, the cofinality of $\omega^2$ is ω, because the sequence ω·m (where m ranges over the natural numbers) tends to $\omega^2$; but, more generally, any countable limit ordinal has cofinality ω. An uncountable limit ordinal may have either cofinality ω, as does $\omega_\omega$, or an uncountable cofinality.
-
-The cofinality of 0 is 0. And the cofinality of any successor ordinal is 1. The cofinality of any limit ordinal is at least $\omega$.
-
-An ordinal that is equal to its cofinality is called regular and it is always an initial ordinal. Any limit of regular ordinals is a limit of initial ordinals and thus is also initial even if it is not regular, which it usually is not. If the Axiom of Choice holds, then $\omega_{\alpha+1}$ is regular for each α. In this case, the ordinals 0, 1, $\omega$, $\omega_1$, and $\omega_2$ are regular, whereas 2, 3, $\omega_\omega$, and $\omega_{\omega\cdot 2}$ are initial ordinals that are not regular.
-
-The cofinality of any ordinal α is a regular ordinal, i.e. the cofinality of the cofinality of α is the same as the cofinality of α. So the cofinality operation is idempotent.
-
-As mentioned above (see Cantor normal form), the ordinal ε0 is the smallest satisfying the equation $\omega^\alpha = \alpha$, so it is the limit of the sequence 0, 1, $\omega$, $\omega^\omega$, $\omega^{\omega^\omega}$, etc. Many ordinals can be defined in such a manner as fixed points of certain ordinal functions (the $\iota$-th ordinal such that $\omega^\alpha = \alpha$ is called $\varepsilon_\iota$, then one could go on trying to find the $\iota$-th ordinal such that $\varepsilon_\alpha = \alpha$, "and so on", but all the subtlety lies in the "and so on").
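-
-Even at the very bottom of this hierarchy, ordinal arithmetic is already non-commutative: 1 + ω = ω while ω + 1 > ω. The sketch below is my illustration, not from the article; it implements addition for ordinals below $\omega^\omega$, represented in Cantor normal form as (exponent, coefficient) pairs with strictly decreasing exponents:
-
-```python
-def add(a, b):
-    """Ordinal addition in Cantor normal form, for ordinals below omega^omega.
-    An ordinal is a list of (exponent, coefficient) pairs, exponents decreasing."""
-    if not b:
-        return list(a)
-    e, c = b[0]                                   # leading term of b
-    head = [(ea, ca) for (ea, ca) in a if ea > e]  # terms of a that survive
-    same = sum(ca for (ea, ca) in a if ea == e)    # term of a absorbed into b's lead
-    return head + [(e, same + c)] + list(b[1:])
-
-OMEGA, ONE = [(1, 1)], [(0, 1)]
-print(add(ONE, OMEGA))  # [(1, 1)]         : 1 + omega = omega
-print(add(OMEGA, ONE))  # [(1, 1), (0, 1)] : omega + 1 > omega
-```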
One could try to do this systematically, but no matter what system is used to define and construct ordinals, there is always an ordinal that lies just above all the ordinals constructed by the system. Perhaps the most important ordinal that limits a system of construction in this manner is the Church–Kleene ordinal, $\omega_1^{\mathrm{CK}}$ (despite the $\omega_1$ in the name, this ordinal is countable), which is the smallest ordinal that cannot in any way be represented by a computable function (this can be made rigorous, of course). Considerably large ordinals can be defined below $\omega_1^{\mathrm{CK}}$, however, which measure the "proof-theoretic strength" of certain formal systems (for example, $\varepsilon_0$ measures the strength of Peano arithmetic). Large countable ordinals such as countable admissible ordinals can also be defined above the Church–Kleene ordinal, which are of interest in various parts of logic.
-
-Any ordinal number can be made into a topological space by endowing it with the order topology; this topology is discrete if and only if the ordinal is a countable cardinal, i.e. at most ω. A subset of ω + 1 is open in the order topology if and only if either it is cofinite or it does not contain ω as an element.
-
-See the Topology and ordinals section of the "Order topology" article.
-
-A set is downward closed if anything less than an element of the set is also in the set. If a set of ordinals is downward closed, then that set is an ordinal—the least ordinal not in the set.
-
-Examples:
-
-*The set of ordinals less than 3 is 3 = { 0, 1, 2 }, the smallest ordinal not less than 3.
-
-*The set of finite ordinals is infinite, the smallest infinite ordinal: ω.
-
-*The set of countable ordinals is uncountable, the smallest uncountable ordinal: ω1.
-
-The transfinite ordinal numbers, which first appeared in 1883, originated in Cantor's work with derived sets. If P is a set of real numbers, the derived set P' is the set of limit points of P. In 1872, Cantor generated the sets P(n) by applying the derived set operation n times to P. In 1880, he pointed out that these sets form the sequence P' ⊇ ··· ⊇ P(n) ⊇ P(n + 1) ⊇ ···, and he continued the derivation process by defining P(∞) as the intersection of these sets. Then he iterated the derived set operation and intersections to extend his sequence of sets into the infinite: P(∞) ⊇ P(∞ + 1) ⊇ P(∞ + 2) ⊇ ··· ⊇ P(2∞) ⊇ ··· ⊇ P(∞²) ⊇ ···. The superscripts containing ∞ are just indices defined by the derivation process.
-
-Cantor used these sets in the theorems: (1) If P(α) = ∅ for some index α, then P' is countable; (2) Conversely, if P' is countable, then there is an index α such that P(α) = ∅. These theorems are proved by partitioning P' into pairwise disjoint sets: P' = (P' ∖ P(2)) ∪ (P(2) ∖ P(3)) ∪ ··· ∪ (P(∞) ∖ P(∞ + 1)) ∪ ··· ∪ P(α). For β < α: since P(β + 1) contains the limit points of P(β), the sets P(β) ∖ P(β + 1) have no limit points. Hence, they are discrete sets, so they are countable. Proof of first theorem: If P(α) = ∅ for some index α, then P' is the countable union of countable sets. Therefore, P' is countable.
-
-The second theorem requires proving the existence of an α such that P(α) = ∅. To prove this, Cantor considered the set of all α having countably many predecessors. To define this set, he defined the transfinite ordinal numbers and transformed the infinite indices into ordinals by replacing ∞ with ω, the first transfinite ordinal number. Cantor called the set of finite ordinals the first number class.
The second number class is the set of ordinals whose predecessors form a countably infinite set. The set of all α having countably many predecessors—that is, the set of countable ordinals—is the union of these two number classes. Cantor proved that the cardinality of the second number class is the first uncountable cardinality.
-
-Cantor's second theorem becomes: If P' is countable, then there is a countable ordinal α such that P(α) = ∅. Its proof uses proof by contradiction. Let P' be countable, and assume there is no such α. This assumption produces two cases.
-
-* Case 1: P(β) ∖ P(β + 1) is non-empty for all countable β. Since there are uncountably many of these pairwise disjoint sets, their union is uncountable. This union is a subset of P', so P' is uncountable.
-
-* Case 2: P(β) ∖ P(β + 1) is empty for some countable β. Since P(β + 1) ⊆ P(β), this implies P(β + 1) = P(β). Thus, P(β) is a perfect set, so it is uncountable. Since P(β) ⊆ P', the set P' is uncountable.
-
-In both cases, P' is uncountable, which contradicts P' being countable. Therefore, there is a countable ordinal α such that P(α) = ∅. Cantor's work with derived sets and ordinal numbers led to the Cantor–Bendixson theorem.
-
-Using successors, limits, and cardinality, Cantor generated an unbounded sequence of ordinal numbers and number classes. The (α + 1)-th number class is the set of ordinals whose predecessors form a set of the same cardinality as the α-th number class. The cardinality of the (α + 1)-th number class is the cardinality immediately following that of the α-th number class. For a limit ordinal α, the α-th number class is the union of the β-th number classes for β < α. Its cardinality is the limit of the cardinalities of these number classes.
-
-If n is finite, the n-th number class has cardinality $\aleph_{n-1}$. If α ≥ ω, the α-th number class has cardinality $\aleph_\alpha$. Therefore, the cardinalities of the number classes correspond one-to-one with the aleph numbers. Also, the α-th number class consists of ordinals different from those in the preceding number classes if and only if α is a non-limit ordinal. Therefore, the non-limit number classes partition the ordinals into pairwise disjoint sets.
diff --git a/wiki/wikipedia/764.txt b/wiki/wikipedia/764.txt
deleted file mode 100644
index ca5497e4a5cd716d933ac3ce1c28b659cbecf04c..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/764.txt
+++ /dev/null
@@ -1,19 +0,0 @@
-In mathematics, and in particular, in the mathematical background of string theory, the Goddard–Thorn theorem (also called the no-ghost theorem) is a theorem describing properties of a functor that quantizes bosonic strings. It is named after Peter Goddard and Charles Thorn.
-
-The name "no-ghost theorem" stems from the fact that in the original statement of the theorem, the natural inner product induced on the output vector space is positive definite. Thus, there were no so-called ghosts (Pauli–Villars ghosts), or vectors of negative norm. The name "no-ghost theorem" is also a word play on the no-go theorem of quantum mechanics.
-
-There are two naturally isomorphic functors that are typically used to quantize bosonic strings. In both cases, one starts with positive-energy representations of the Virasoro algebra of central charge 26, equipped with Virasoro-invariant bilinear forms, and ends up with vector spaces equipped with bilinear forms. Here, "Virasoro-invariant" means Ln is adjoint to L−n for all integers n.
-
-The first functor historically is "old canonical quantization", and it is given by taking the quotient of the weight 1 primary subspace by the radical of the bilinear form. Here, "primary subspace" is the set of vectors annihilated by Ln for all strictly positive n, and "weight 1" means L0 acts by identity. A second, naturally isomorphic functor, is given by degree 1 BRST cohomology. Older treatments of BRST cohomology often have a shift in the degree due to a change in choice of BRST charge, so one may see degree −1/2 cohomology in papers and texts from before 1995. A proof that the functors are naturally isomorphic can be found in Section 4.4 of Polchinski's String Theory text.
-
-The Goddard–Thorn theorem amounts to the assertion that this quantization functor more or less cancels the addition of two free bosons, as conjectured by Lovelace in 1971. Lovelace's precise claim was that at critical dimension 26, Virasoro-type Ward identities cancel two full sets of oscillators. Mathematically, this is the following claim:
-
-Let V be a unitarizable Virasoro representation of central charge 24 with Virasoro-invariant bilinear form, and let $\pi^{1,1}_\lambda$ be the irreducible module of the $\mathbb{R}^{1,1}$ Heisenberg Lie algebra attached to a nonzero vector λ in $\mathbb{R}^{1,1}$. Then the image of V ⊗ $\pi^{1,1}_\lambda$ under quantization is canonically isomorphic to the subspace of V on which L0 acts by 1-(λ,λ).
-
-The no-ghost property follows immediately, since the positive-definite Hermitian structure of V is transferred to the image under quantization.
-
-The bosonic string quantization functors described here can be applied to any conformal vertex algebra of central charge 26, and the output naturally has a Lie algebra structure. The Goddard–Thorn theorem can then be applied to concretely describe the Lie algebra in terms of the input vertex algebra.
-
-Perhaps the most spectacular case of this application is Richard Borcherds's proof of the monstrous moonshine conjecture, where the unitarizable Virasoro representation is the Monster vertex algebra (also called "Moonshine module") constructed by Frenkel, Lepowsky, and Meurman. By taking a tensor product with the vertex algebra attached to a rank 2 hyperbolic lattice, and applying quantization, one obtains the monster Lie algebra, which is a generalized Kac–Moody algebra graded by the lattice. By using the Goddard–Thorn theorem, Borcherds showed that the homogeneous pieces of the Lie algebra are naturally isomorphic to graded pieces of the Moonshine module, as representations of the monster simple group.
-
-Earlier applications include Frenkel's determination of upper bounds on the root multiplicities of the Kac–Moody Lie algebra whose Dynkin diagram is the Leech lattice, and Borcherds's construction of a generalized Kac–Moody Lie algebra that contains Frenkel's Lie algebra and saturates Frenkel's 1/∆ bound.
diff --git a/wiki/wikipedia/765.txt b/wiki/wikipedia/765.txt
deleted file mode 100644
index 4ff4b356460476ba75e0a4ef0e4c4d8b5bd228b9..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/765.txt
+++ /dev/null
@@ -1,217 +0,0 @@
-Linear logic is a substructural logic proposed by Jean-Yves Girard as a refinement of classical and intuitionistic logic, joining the dualities of the former with many of the constructive properties of the latter.
Although the logic has also been studied for its own sake, more broadly, ideas from linear logic have been influential in fields such as programming languages, game semantics, and quantum physics (because linear logic can be seen as the logic of quantum information theory), as well as linguistics, particularly because of its emphasis on resource-boundedness, duality, and interaction. - -Linear logic lends itself to many different presentations, explanations, and intuitions. Proof-theoretically, it derives from an analysis of classical sequent calculus in which uses of the structural rules contraction and weakening are carefully controlled. Operationally, this means that logical deduction is no longer merely about an ever-expanding collection of persistent "truths", but also a way of manipulating resources that cannot always be duplicated or thrown away at will. In terms of simple denotational models, linear logic may be seen as refining the interpretation of intuitionistic logic by replacing cartesian (closed) categories by symmetric monoidal (closed) categories, or the interpretation of classical logic by replacing Boolean algebras by C*-algebras. - -The language of classical linear logic (CLL) is defined inductively by the BNF notation A ::= p | p⊥ | A ⊗ A | A ⅋ A | A & A | A ⊕ A | 1 | ⊥ | ⊤ | 0 | !A | ?A. Here p and p⊥ range over logical atoms. For reasons to be explained below, the connectives ⊗, ⅋, 1, and ⊥ are called multiplicatives, the connectives &, ⊕, ⊤, and 0 are called additives, and the connectives ! and ? are called exponentials. We can further employ the following terminology: the binary connectives ⊗, ⊕, & and ⅋ are associative and commutative; 1 is the unit for ⊗, 0 is the unit for ⊕, ⊥ is the unit for ⅋ and ⊤ is the unit for &. - -Every proposition A in CLL has a dual A⊥, defined by (p)⊥ = p⊥, (p⊥)⊥ = p, (A ⊗ B)⊥ = A⊥ ⅋ B⊥, (A ⅋ B)⊥ = A⊥ ⊗ B⊥, (A & B)⊥ = A⊥ ⊕ B⊥, (A ⊕ B)⊥ = A⊥ & B⊥, 1⊥ = ⊥, ⊥⊥ = 1, ⊤⊥ = 0, 0⊥ = ⊤, (!A)⊥ = ?(A⊥), and (?A)⊥ = !(A⊥). Observe that (−)⊥ is an involution, i.e., A⊥⊥ = A for all propositions. A⊥ is also called the linear negation of A. - -This duality suggests another way of classifying the connectives of linear logic, termed polarity: the connectives ⊗, ⊕, 1, 0, and ! are called positive, while their duals ⅋, &, ⊥, ⊤, and ? are called negative. - -Linear implication is not included in the grammar of connectives, but is definable in CLL using linear negation and multiplicative disjunction, by A ⊸ B := A⊥ ⅋ B. The connective ⊸ is sometimes pronounced "lollipop", owing to its shape. - -One way of defining linear logic is as a sequent calculus. We use the letters Γ and Δ to range over lists of propositions A1, ..., An, also called contexts. A sequent places a context to the left and the right of the turnstile, written Γ ⊢ Δ. Intuitively, the sequent asserts that the conjunction of Γ entails the disjunction of Δ (though we mean the "multiplicative" conjunction and disjunction, as explained below). Girard describes classical linear logic using only one-sided sequents (where the left-hand context is empty), and we follow here that more economical presentation. This is possible because any premises to the left of a turnstile can always be moved to the other side and dualised. - -We now give inference rules describing how to build proofs of sequents. First, to formalize the fact that we do not care about the order of propositions inside a context, we add the structural rule of exchange: from ⊢ Γ, A, B, Δ one may infer ⊢ Γ, B, A, Δ. Note that we do not add the structural rules of weakening and contraction, because we do care about the absence of propositions in a sequent, and the number of copies present.
- -Next we add initial sequents and cuts: the initial sequent ⊢ A, A⊥, and the cut rule, which from ⊢ Γ, A and ⊢ A⊥, Δ derives ⊢ Γ, Δ. - -The cut rule can be seen as a way of composing proofs, and initial sequents serve as the units for composition. In a certain sense these rules are redundant: as we introduce additional rules for building proofs below, we will maintain the property that arbitrary initial sequents can be derived from atomic initial sequents, and that whenever a sequent is provable it can be given a cut-free proof. Ultimately, this canonical form property (which can be divided into the completeness of atomic initial sequents and the cut-elimination theorem, inducing a notion of analytic proof) lies behind the applications of linear logic in computer science, since it allows the logic to be used in proof search and as a resource-aware lambda-calculus. - -Now, we explain the connectives by giving logical rules. Typically in sequent calculus one gives both "right-rules" and "left-rules" for each connective, essentially describing two modes of reasoning about propositions involving that connective (e.g., verification and falsification). In a one-sided presentation, one instead makes use of negation: the right-rules for a connective (say ⅋) effectively play the role of left-rules for its dual (⊗). So, we should expect a certain "harmony" between the rule(s) for a connective and the rule(s) for its dual. - -The rules for multiplicative conjunction (⊗) and disjunction (⅋): from ⊢ Γ, A and ⊢ Δ, B derive ⊢ Γ, Δ, A ⊗ B; and from ⊢ Γ, A, B derive ⊢ Γ, A ⅋ B. The rules for their units: ⊢ 1 holds outright, and from ⊢ Γ derive ⊢ Γ, ⊥. - -Observe that the rules for multiplicative conjunction and disjunction are admissible for plain conjunction and disjunction under a classical interpretation (i.e., they are admissible rules in LK). - -The rules for additive conjunction (&) and disjunction (⊕): from ⊢ Γ, A and ⊢ Γ, B derive ⊢ Γ, A & B; and from ⊢ Γ, A (or from ⊢ Γ, B) derive ⊢ Γ, A ⊕ B. The rules for their units: ⊢ Γ, ⊤ holds for any Γ, and there is no rule for 0. - -Observe that the rules for additive conjunction and disjunction are again admissible under a classical interpretation. But now we can explain the basis for the multiplicative/additive distinction in the rules for the two different versions of conjunction: for the multiplicative connective (⊗), the context of the conclusion (Γ, Δ) is split up between the premises, whereas for the additive connective (&) the context of the conclusion (Γ) is carried whole into both premises. - -The exponentials are used to give controlled access to weakening and contraction. Specifically, we add structural rules of weakening and contraction for ?'d propositions: from ⊢ Γ derive ⊢ Γ, ?A (weakening), and from ⊢ Γ, ?A, ?A derive ⊢ Γ, ?A (contraction). We then use the following logical rules: from ⊢ Γ, A derive ⊢ Γ, ?A (dereliction), and from ⊢ ?Γ, A derive ⊢ ?Γ, !A (promotion, where ?Γ indicates a context consisting entirely of ?'d propositions). - -One might observe that the rules for the exponentials follow a different pattern from the rules for the other connectives, resembling the inference rules governing modalities in sequent calculus formalisations of the normal modal logic S4, and that there is no longer such a clear symmetry between the duals ! and ?. This situation is remedied in alternative presentations of CLL (e.g., the LU presentation).
- -In addition to the De Morgan dualities described above, some important equivalences in linear logic include: - -; Distributivity : A ⊗ (B ⊕ C) ≡ (A ⊗ B) ⊕ (A ⊗ C) and, dually, A ⅋ (B & C) ≡ (A ⅋ B) & (A ⅋ C). - -By definition of A ⊸ B as A⊥ ⅋ B, the second of these distributivity laws also gives: A ⊸ (B & C) ≡ (A ⊸ B) & (A ⊸ C) and (A ⊕ B) ⊸ C ≡ (A ⊸ C) & (B ⊸ C). - -(Here A ≡ B is (A ⊸ B) & (B ⊸ A).) - -; Exponential isomorphism : !(A & B) ≡ !A ⊗ !B. - -; Linear distributions : A map that is not an isomorphism yet plays a crucial role in linear logic is the linear distribution A ⊗ (B ⅋ C) ⊸ (A ⊗ B) ⅋ C. Linear distributions are fundamental in the proof theory of linear logic. The consequences of this map were first investigated under the name "weak distribution"; in subsequent work it was renamed to "linear distribution" to reflect the fundamental connection to linear logic. - -; Other implications : Some distributivity formulas are not in general an equivalence, only an implication; for example, (A & B) ⊕ (A & C) ⊸ A & (B ⊕ C). - -Both intuitionistic and classical implication can be recovered from linear implication by inserting exponentials: intuitionistic implication is encoded as !A ⊸ B, while classical implication can be encoded as !?A ⊸ ?B or !A ⊸ ?!B (or a variety of alternative possible translations). The idea is that exponentials allow us to use a formula as many times as we need, which is always possible in classical and intuitionistic logic. - -Formally, there exists a translation of formulas of intuitionistic logic to formulas of linear logic in a way that guarantees that the original formula is provable in intuitionistic logic if and only if the translated formula is provable in linear logic. Using the Gödel–Gentzen negative translation, we can thus embed classical first-order logic into linear first-order logic. - -Lafont (1993) first showed how intuitionistic linear logic can be explained as a logic of resources, so providing the logical language with access to formalisms that can be used for reasoning about resources within the logic itself, rather than, as in classical logic, by means of non-logical predicates and relations. Tony Hoare (1985)'s classical example of the vending machine can be used to illustrate this idea. - -Suppose we represent having a candy bar by the atomic proposition candy, and having a dollar by $1. To state the fact that a dollar will buy you one candy bar, we might write the implication $1 ⇒ candy. But in ordinary (classical or intuitionistic) logic, from A and A ⇒ B one can conclude A ∧ B. So, ordinary logic leads us to believe that we can buy the candy bar and keep our dollar! Of course, we can avoid this problem by using more sophisticated encodings, although typically such encodings suffer from the frame problem. However, the rejection of weakening and contraction allows linear logic to avoid this kind of spurious reasoning even with the "naive" rule. Rather than $1 ⇒ candy, we express the property of the vending machine as a linear implication $1 ⊸ candy. From $1 and this fact, we can conclude candy, but not $1 ⊗ candy. In general, we can use the linear logic proposition A ⊸ B to express the validity of transforming resource A into resource B. - -Running with the example of the vending machine, consider the "resource interpretations" of the other multiplicative and additive connectives. (The exponentials provide the means to combine this resource interpretation with the usual notion of persistent logical truth.) - -Multiplicative conjunction (A ⊗ B) denotes simultaneous occurrence of resources, to be used as the consumer directs. For example, if you buy a stick of gum and a bottle of soft drink, then you are requesting gum ⊗ drink. The constant 1 denotes the absence of any resource, and so functions as the unit of ⊗.
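This resource reading can be mimicked directly with ordinary multisets. The following minimal Python sketch (all names here are illustrative and not drawn from any linear-logic library) treats a linear implication as an operation that consumes its premise exactly once, so the dollar really is gone after the purchase:

```python
from collections import Counter

def apply_linear(resources, premise, conclusion):
    """Apply a linear implication premise -o conclusion to a multiset of
    resources: the premise is consumed exactly once, the conclusion produced."""
    if any(resources[r] < k for r, k in premise.items()):
        raise ValueError("premise not available: cannot fire the implication")
    return resources - premise + conclusion  # Counter arithmetic on multisets

dollar = Counter({"$1": 1})
candy = Counter({"candy": 1})

wallet = Counter({"$1": 1})
wallet = apply_linear(wallet, dollar, candy)
print(wallet)  # Counter({'candy': 1}) -- we have the candy, the dollar is spent

# A second purchase fails, unlike in classical logic, where $1 => candy
# would let us keep the dollar and conclude $1 AND candy:
# apply_linear(wallet, dollar, candy)  # raises ValueError
```

This is only an analogy for the multiplicative fragment, of course; it does not model the additive or exponential connectives.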
- -Additive conjunction (A & B) represents alternative occurrence of resources, the choice of which the consumer controls. If in the vending machine there is a packet of chips, a candy bar, and a can of soft drink, each costing one dollar, then for that price you can buy exactly one of these products. Thus we write $1 ⊸ (candy & chips & drink). We do not write $1 ⊸ (candy ⊗ chips ⊗ drink), which would imply that one dollar suffices for buying all three products together. However, from $1 ⊸ (candy & chips & drink), we can correctly deduce $3 ⊸ (candy ⊗ chips ⊗ drink), where $3 := $1 ⊗ $1 ⊗ $1. The unit ⊤ of additive conjunction can be seen as a wastebasket for unneeded resources. For example, we can write $3 ⊸ (candy ⊗ ⊤) to express that with three dollars you can get a candy bar and some other stuff, without being more specific (for example, chips and a drink, or $2, or $1 and chips, etc.). - -Additive disjunction (A ⊕ B) represents alternative occurrence of resources, the choice of which the machine controls. For example, suppose the vending machine permits gambling: insert a dollar and the machine may dispense a candy bar, a packet of chips, or a soft drink. We can express this situation as $1 ⊸ (candy ⊕ chips ⊕ drink). The constant 0 represents a product that cannot be made, and thus serves as the unit of ⊕ (a machine that might produce A or 0 is as good as a machine that always produces A because it will never succeed in producing a 0). So unlike above, we cannot deduce $3 ⊸ (candy ⊗ chips ⊗ drink) from this. - -Multiplicative disjunction (A ⅋ B) is more difficult to gloss in terms of the resource interpretation, although we can encode it back into linear implication, either as A⊥ ⊸ B or B⊥ ⊸ A. - -Introduced by Jean-Yves Girard, proof nets were created to avoid "bureaucracy", that is, all the things that make two derivations different from a logical point of view, but not from a "moral" point of view; for instance, two proofs differing only in the order of inessential rule applications are "morally" identical. The goal of proof nets is to make such proofs literally identical by giving them a single graphical representation. - -The entailment relation in full CLL is undecidable. When considering fragments of CLL, the decision problem has varying complexity: - -* Multiplicative linear logic (MLL): only the multiplicative connectives. MLL entailment is NP-complete, even restricting to Horn clauses in the purely implicative fragment, or to atom-free formulas. - -* Multiplicative-additive linear logic (MALL): only multiplicatives and additives (i.e., exponential-free). MALL entailment is PSPACE-complete. - -* Multiplicative-exponential linear logic (MELL): only multiplicatives and exponentials. By reduction from the reachability problem for Petri nets, MELL entailment must be at least EXPSPACE-hard, although decidability itself has had the status of a longstanding open problem. In 2015, a proof of decidability was published in the journal TCS, but was later shown to be erroneous. - -* Affine linear logic (that is, linear logic with weakening, an extension rather than a fragment) was shown to be decidable in 1995. - -Many variations of linear logic arise by further tinkering with the structural rules: - -* Affine logic, which forbids contraction but allows global weakening (a decidable extension). - -* Strict logic or relevant logic, which forbids weakening but allows global contraction. - -* Non-commutative logic or ordered logic, which removes the rule of exchange, in addition to barring weakening and contraction.
In ordered logic, linear implication divides further into left-implication and right-implication. - -Different intuitionistic variants of linear logic have been considered. When based on a single-conclusion sequent calculus presentation, as in ILL (Intuitionistic Linear Logic), the connectives ⅋, ⊥, and ? are absent, and linear implication is treated as a primitive connective. In FILL (Full Intuitionistic Linear Logic) the connectives ⅋, ⊥, and ? are present, linear implication is a primitive connective and, similarly to what happens in intuitionistic logic, all connectives (except linear negation) are independent. - -There are also first- and higher-order extensions of linear logic, whose formal development is somewhat standard (see first-order logic and higher-order logic). diff --git a/wiki/wikipedia/766.txt b/wiki/wikipedia/766.txt deleted file mode 100644 index c57b2bd575b9b6f146995c117ce9b306b28d079b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/766.txt +++ /dev/null @@ -1,13 +0,0 @@ -In graph theory and combinatorial optimization, a closure of a directed graph is a set of vertices C, such that no edges leave C. The closure problem is the task of finding the maximum-weight or minimum-weight closure in a vertex-weighted directed graph. - -It may be solved in polynomial time using a reduction to the maximum flow problem. It may be used to model various application problems of choosing an optimal subset of tasks to perform, with dependencies between pairs of tasks, one example being in open pit mining. - -The maximum-weight closure of a given graph G is the same as the complement of the minimum-weight closure on the transpose graph of G, so the two problems are equivalent in computational complexity. - -If two vertices of the graph belong to the same strongly connected component, they must behave the same as each other with respect to all closures: it is not possible for a closure to contain one vertex without containing the other. For this reason, the input graph to a closure problem may be replaced by its condensation, in which every strongly connected component is replaced by a single vertex. - -The condensation is always a directed acyclic graph. - -As Picard showed, maximum-weight closures may be computed by solving a maximum flow problem on a graph formed from G by adding a source and a sink. The running time of such algorithms is similar to that of the fastest known flow algorithms. Together with open pit mining, this was one of the original motivating applications for studying the closure problem; it was originally studied in 1970, in two independent papers published in the same issue of the same journal by J. M. W. Rhys and Michel Balinski. - -A related application arises in scheduling tasks with precedence constraints so as to minimize the total weighted completion time. Although (as Lawler shows) this scheduling problem is NP-complete in general, Sidney describes a decomposition method that can help solve the problem by reducing it to several smaller problems of the same type. In particular, if S is a subset of the tasks that (among all subsets) has the largest possible ratio of its total weight to its total processing time, and in addition S is minimal among all sets with the same ratio, then there exists an optimal schedule in which all tasks in S are performed before all other tasks. As long as S is not the whole set of tasks, this partition of the tasks splits the scheduling problem into two smaller problems, one of scheduling S and one of scheduling the remaining tasks. Although S is a closure (for a graph with reversed edges from G) the problem of finding S is not exactly a maximum weight closure problem, because the value of S is a ratio rather than a sum of weights.
Nevertheless, Lawler shows that S may be found in polynomial time by a binary search algorithm in which each step of the search uses an instance of the closure problem as a subroutine. diff --git a/wiki/wikipedia/767.txt b/wiki/wikipedia/767.txt deleted file mode 100644 index b9dde7a111fb055dca8ce6ddc62a9247b1d9192e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/767.txt +++ /dev/null @@ -1,12 +0,0 @@ -In predicate logic, existential generalization (also known as existential introduction, ∃I) is a valid rule of inference that allows one to move from a specific statement, or one instance, to a quantified generalized statement, or existential proposition. In first-order logic, it is often used as a rule for the existential quantifier ($\exists$) in formal proofs. - -Example: "Rover loves to wag his tail. Therefore, something loves to wag its tail." - -In the Fitch-style calculus: -$$ - Q(a) \to\ \exists{x} Q(x) -$$ - -Where $a$ replaces all free instances of $x$ within $Q(x)$. - -According to Willard Van Orman Quine, universal instantiation and existential generalization are two aspects of a single principle, for instead of saying that $\forall x x=x$ implies $\text{Socrates}=\text{Socrates}$, we could as well say that the denial $\text{Socrates} \ne \text{Socrates}$ implies $\exists x x \ne x$. The principle embodied in these two operations is the link between quantifications and the singular statements that are related to them as instances. Yet it is a principle only by courtesy. It holds only in the case where a term names and, furthermore, occurs referentially. diff --git a/wiki/wikipedia/768.txt b/wiki/wikipedia/768.txt deleted file mode 100644 index cfdadcb9714cf01cf45e752c1968ae697372d180..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/768.txt +++ /dev/null @@ -1,7 +0,0 @@ -: See also Rado's theorem (Ramsey theory) - -In mathematics, Radó's theorem is a result about harmonic functions, named after Tibor Radó. Informally, it says that any "nice looking" shape without holes can be smoothly deformed into a disk. - -Suppose Ω is an open, connected and convex subset of the Euclidean space R2 with smooth boundary ∂Ω and suppose that D is the unit disk. Then, given any homeomorphism - -μ : ∂D → ∂Ω, there exists a unique harmonic function u : D → Ω such that u = μ on ∂D and u is a diffeomorphism. diff --git a/wiki/wikipedia/769.txt b/wiki/wikipedia/769.txt deleted file mode 100644 index 726286c7d26bac293bb552306b605500774fbfff..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/769.txt +++ /dev/null @@ -1,106 +0,0 @@ -In mathematics, the Riesz–Fischer theorem in real analysis is any of a number of closely related results concerning the properties of the space L2 of square integrable functions. The theorem was proven independently in 1907 by Frigyes Riesz and Ernst Sigismund Fischer. - -For many authors, the Riesz–Fischer theorem refers to the fact that the Lp spaces $L^p$ from Lebesgue integration theory are complete. 
- -The most common form of the theorem states that a measurable function on $[-\pi, \pi]$ is square integrable if and only if the corresponding Fourier series converges in the space $L^2.$ This means that if the Nth partial sum of the Fourier series corresponding to a square-integrable function f is given by - -S_N f(x) = \sum_{n=-N}^{N} F_n \mathrm{e}^{inx}, - -where $F_n,$ the nth Fourier coefficient, is given by - -F_n =\frac{1}{2\pi}\int_{-\pi}^\pi f(x) \mathrm{e}^{-inx} \mathrm{d}x, - -then - -\lim_{N \to \infty} \left\Vert S_N f - f \right\|_2 = 0, - -where $\|\cdot\|_2$ is the $L^2$-norm. - -Conversely, if $ \{a_n\} $ is a two-sided sequence of complex numbers (that is, its indices range from negative infinity to positive infinity) such that - -\sum_{n=-\infty}^\infty \left|a_n\right\vert^2 < \infty, - -then there exists a function f such that f is square-integrable and the values $a_n$ are the Fourier coefficients of f. - -This form of the Riesz–Fischer theorem is a stronger form of Bessel's inequality, and can be used to prove Parseval's identity for Fourier series. - -Other results are often called the Riesz–Fischer theorem. Among them is the theorem that, if A is an orthonormal set in a Hilbert space H, and $x \in H,$ then - -\langle x, y\rangle = 0 - -for all but countably many $y \in A,$ and - -\sum_{y\in A} |\langle x,y\rangle|^2 \le \|x\|^2. - -Furthermore, if A is an orthonormal basis for H and x an arbitrary vector, the series - -\sum_{y\in A} \langle x,y\rangle y - -converges commutatively (or unconditionally) to x. This is equivalent to saying that for every $\varepsilon > 0,$ there exists a finite set $B_0$ in A such that - -\|x - \sum_{y\in B} \langle x,y\rangle y \| < \varepsilon - -for every finite set B containing $B_0$. Moreover, the following conditions on the set A are equivalent: - -* the set A is an orthonormal basis of H - -* for every vector $x \in H,$ - -\|x\|^2 = \sum_{y\in A} |\langle x,y\rangle|^2. - -Another result, which also sometimes bears the name of Riesz and Fischer, is the theorem that $L^2$ (or more generally $L^p, 0 < p \leq \infty$) is complete. - -The Riesz–Fischer theorem also applies in a more general setting. Let R be an inner product space consisting of functions (for example, measurable functions on the line, analytic functions in the unit disc; in old literature, sometimes called Euclidean Space), and let $\{\varphi_n\}$ be an orthonormal system in R (e.g. Fourier basis, Hermite or Laguerre polynomials, etc. – see orthogonal polynomials), not necessarily complete (in an inner product space, an orthonormal set is complete if no nonzero vector is orthogonal to every vector in the set). The theorem asserts that if the normed space R is complete (thus R is a Hilbert space), then any sequence $\{c_n\}$ that has finite $\ell^2$ norm defines a function f in the space R. - -The function f is defined by -$$ -f = \lim_{n \to \infty} \sum_{k=0}^n c_k \varphi_k, -$$ the limit being taken in the R-norm. - -Combined with Bessel's inequality, we know the converse as well: if f is a function in R, then the Fourier coefficients $(f,\varphi_n)$ have finite $\ell^2$ norm. - -In his Note, Riesz states the following result (translated here to modern language at one point: the notation $L^2([a, b])$ was not used in 1907). - -Let $\left\{\varphi_n\right\}$ be an orthonormal system in $L^2([a, b])$ and $\left\{a_n\right\}$ a sequence of reals.
The convergence of the series $ \sum a_n^2 $ is a necessary and sufficient condition for the existence of a function f such that \int_a^b f(x) \varphi_n(x) \mathrm{d}x = a_n \quad \text{ for every } n. - -Today, this result of Riesz is a special case of basic facts about series of orthogonal vectors in Hilbert spaces. - -Riesz's Note appeared in March. In May, Fischer states explicitly in a theorem (almost in modern words) that a Cauchy sequence in $L^2([a, b])$ converges in $L^2$-norm to some function $f \in L^2([a, b]).$ In this Note, Cauchy sequences are called "sequences converging in the mean" and $L^2([a, b])$ is denoted by $\Omega.$ Also, convergence to a limit in $L^2$-norm is called "convergence in the mean towards a function". Here is the statement, translated from French: - -Theorem. If a sequence of functions belonging to $\Omega$ converges in the mean, there exists in $\Omega$ a function f towards which the sequence converges in the mean. - -Fischer goes on to prove the preceding result of Riesz, as a consequence of the orthogonality of the system, and of the completeness of $L^2.$ - -Fischer's proof of completeness is somewhat indirect. It uses the fact that the indefinite integrals of the functions gn in the given Cauchy sequence, namely - -G_n(x) = \int_a^x g_n(t) \mathrm{d}t, - -converge uniformly on $[a, b]$ to some function G, continuous with bounded variation. The existence of the limit $g \in L^2$ for the Cauchy sequence is obtained by applying to G differentiation theorems from Lebesgue's theory.
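As a numerical illustration of the Fourier-series form of the theorem stated earlier, the following standalone Python sketch (illustrative only; the choice f(x) = x and all names are ours) shows the $L^2$ error of the partial sums $S_N f$ shrinking as N grows:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 4001)
f = x  # f(x) = x is square integrable on [-pi, pi]

def partial_sum(N):
    """S_N f, with Fourier coefficients F_n computed by numerical quadrature."""
    S = np.zeros_like(x, dtype=complex)
    for n in range(-N, N + 1):
        F_n = np.trapz(f * np.exp(-1j * n * x), x) / (2 * np.pi)
        S += F_n * np.exp(1j * n * x)
    return S.real

for N in (1, 4, 16, 64):
    l2_err = np.sqrt(np.trapz((partial_sum(N) - f) ** 2, x))
    print(N, l2_err)  # for this f, the L^2 error decays roughly like 1/sqrt(N)
```

The pointwise behaviour at the endpoints (the Gibbs phenomenon) does not prevent the convergence, since the theorem only concerns the $L^2$ norm.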
- -Riesz uses a similar reasoning in his Note, but makes no explicit mention of the completeness of $L^2,$ although his result may be interpreted this way. He says that, integrating term by term a trigonometric series with given square summable coefficients, he gets a series converging uniformly to a continuous function F with bounded variation. The derivative f of F, defined almost everywhere, is square summable and has the given coefficients as its Fourier coefficients. - -For some authors, notably Royden, the Riesz-Fischer theorem is the result that $L^p$ is complete: that every Cauchy sequence of functions in $L^p$ converges to a function in $L^p,$ under the metric induced by the p-norm. The proof below is based on the convergence theorems for the Lebesgue integral; the result can also be obtained for $p \in [1,\infty]$ by showing that every Cauchy sequence has a rapidly converging Cauchy sub-sequence, that every Cauchy sequence with a convergent sub-sequence converges, and that every rapidly Cauchy sequence in $L^p$ converges in $L^p.$ - -When $1 \leq p \leq \infty,$ the Minkowski inequality implies that the Lp space $L^p$ is a normed space. In order to prove that $L^p$ is complete, i.e. that $L^p$ is a Banach space, it is enough (see, e.g., the definition of a Banach space) to prove that every series $\sum u_n$ of functions in $L^p(\mu)$ such that - -\sum \|u_n\|_p < \infty - -converges in the $L^p$-norm to some function $f \in L^p(\mu).$ For $p < \infty,$ the Minkowski inequality and the monotone convergence theorem imply that - -\int \left(\sum_{n=0}^\infty |u_n|\right)^p \mathrm{d}\mu \le \left(\sum_{n=0}^{\infty} \|u_n\|_p\right)^p< \infty, \ \ \text{ hence } \ \ f = \sum_{n=0}^\infty u_n - -is defined $\mu$-almost everywhere and $f \in L^p(\mu).$ The dominated convergence theorem is then used to prove that the partial sums of the series converge to f in the $L^p$-norm, - -\int \left|f - \sum_{k=0}^{n} u_k\right|^p \mathrm{d}\mu \le \int \left( \sum_{\ell > n} |u_\ell| \right)^p \mathrm{d}\mu \rightarrow 0 \text{ as } n \rightarrow \infty. - -The case $0 < p < 1$ requires some modifications, because the p-norm is no longer subadditive. One starts with the stronger assumption that - -\sum \|u_n\|_p^p < \infty - -and uses repeatedly that - -\left|\sum_{k=0}^n u_k \right|^p \le \sum_{k=0}^n |u_k|^p \text{ when } p < 1. - -The case $p = \infty$ reduces to a simple question about uniform convergence outside a $\mu$-negligible set. diff --git a/wiki/wikipedia/77.txt b/wiki/wikipedia/77.txt deleted file mode 100644 index e41a695c3fcfc547b46f7bac047b43a3db616950..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/77.txt +++ /dev/null @@ -1,29 +0,0 @@ -Artificial empathy (AE) or computational empathy is the development of AI systems, such as companion robots or virtual agents, that are able to detect and respond to human emotions in an empathic way. According to scientists, although the technology can be perceived as scary or threatening by many people, it could also have a significant advantage over humans in professions which traditionally involve emotional role-playing, such as the health care sector. From the care-giver perspective, for instance, performing emotional labor above and beyond the requirements of paid labor often results in chronic stress or burnout, and in the development of a feeling of being desensitized to patients.
However, it is argued that the emotional role-playing between the care-receiver and a robot can actually have a more positive outcome in terms of creating the conditions of less fear and concern for one's own predicament, best exemplified by the phrase: "if it is just a robot taking care of me it cannot be that critical." Scholars debate the possible outcome of such technology from two different perspectives: either the AE could help the socialization of care-givers, or it could serve as a role model for emotional detachment. - -A broader definition of artificial empathy is "the ability of nonhuman models to predict a person's internal state (e.g., cognitive, affective, physical) given the signals (s)he emits (e.g., facial expression, voice, gesture) or to predict a person's reaction (including, but not limited to internal states) when he or she is exposed to a given set of stimuli (e.g., facial expression, voice, gesture, graphics, music, etc.)". - -There are a variety of philosophical, theoretical, and applicative questions related to AE. For example: - -# Which conditions would have to be met for a robot to respond competently to a human emotion? - -# What models of empathy can or should be applied to Social and Assistive Robotics? - -# Does the interaction of humans with robots have to imitate affective interaction between humans? - -# Can a robot help science learn about affective development of humans? - -# Would robots create unforeseen categories of inauthentic relations? - -# What relations with robots can be considered truly authentic? - -Humans often communicate and make decisions based on inferences of others' internal states (e.g., emotional, cognitive and physical states) from the various signals emitted by the person, such as facial expression, body gesture, voice and words. Broadly speaking, the domain of AE focuses on developing non-human models to achieve similar objectives using the data emitted by or shown to humans. - -The concept of AE has been applied in various research disciplines, including artificial intelligence and business. Specifically, there have been two main streams of research in this domain: first, the use of nonhuman models in predicting a person's internal state (e.g., cognitive, affective, physical) given the signals he or she emits (e.g., facial expression, voice, gesture); second, the use of nonhuman models in predicting a person's reaction when he or she is exposed to a given set of stimuli (e.g., facial expression, voice, gesture, graphics, music etc.). - -Research on affective computing, such as emotional speech recognition and facial expression detection, falls within the first stream of AE. Contexts that have been studied include oral interviews, call center human-computer interaction, sales pitches, and financial reporting. The second stream of AE has been researched more in marketing contexts, such as advertising, branding, customer reviews, in-store recommendation systems, movies, and online dating. - -With the increasing volume of visual, audio and text data in commerce, there have been many business applications using AE. For example, Affectiva analyses viewers' facial expressions from video recordings while they are watching video advertisements in order to optimize the content design of video ads. HireVue, a hiring intelligence firm, helps firms make recruitment decisions using analysis of the audio and video information from candidates' video interviews.
Lapetus Solutions develops a model to estimate an individual's longevity, health status and disease susceptibility from a face photo. Their technology has been applied in the insurance industry. - -Although AI has not yet been shown to replace social workers themselves, the technology has begun making waves in the field. Social Work Today published an article in 2017 describing research performed at Florida State University. The research involved the use of computer algorithms to analyze health records and detect combinations of risk factors that could indicate a future suicide attempt. The article reports, "machine learning—a future frontier for artificial intelligence—can predict with 80% to 90% accuracy whether someone will attempt suicide as far off as two years into the future. The algorithms become even more accurate as a person's suicide attempt gets closer. For example, the accuracy climbs to 92% one week before a suicide attempt when artificial intelligence focuses on general hospital patients". - -At this point in time, artificial intelligence has not been able to replace social workers completely, but algorithmic machines such as those described above can be of substantial benefit to social workers. Social work operates on a cycle of engagement, assessment, intervention, and evaluation with clients. This technology's assessment of suicide risk can lead to earlier intervention and prevention, therefore saving lives. It is the hope of these researchers that the technology will be implemented in our modern healthcare system. The system would learn, analyze, and detect risk factors, alerting the clinician to a patient's suicide risk score (equivalent to a patient's cardiovascular risk score). At this point, social workers could step in for further assessment and preventive intervention. diff --git a/wiki/wikipedia/770.txt b/wiki/wikipedia/770.txt deleted file mode 100644 index cc4cd220f0decdfd9d40ace5c7d1018b88976e16..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/770.txt +++ /dev/null @@ -1,29 +0,0 @@ -Brauer's theorem on induced characters, often known as Brauer's induction theorem, and named after Richard Brauer, is a basic result in the branch of mathematics known as character theory, within the representation theory of finite groups. - -A precursor to Brauer's induction theorem was Artin's induction theorem, which states that |G| times the trivial character of G is an integer combination of characters which are each induced from trivial characters of cyclic subgroups of G. Brauer's theorem removes the factor |G|, but at the expense of expanding the collection of subgroups used. Some years after the proof of Brauer's theorem appeared, J.A. Green showed (in 1955) that no such induction theorem (with integer combinations of characters induced from linear characters) could be proved with a collection of subgroups smaller than the Brauer elementary subgroups. - -Another result between Artin's induction theorem and Brauer's induction theorem, also due to Brauer and also known as Brauer's theorem or Brauer's lemma, is the fact that the regular representation of G can be written as $1+\sum\lambda_i\rho_i$, where the $\lambda_i$ are positive rationals and the $\rho_i$ are induced from characters of cyclic subgroups of G.
Note that in Artin's theorem the characters are induced from the trivial character of the cyclic group, while here they are induced from arbitrary characters (in applications to Artin's L-functions it is important that the groups are cyclic and hence all characters are linear, so that the corresponding L-functions are analytic). - -Let G be a finite group and let Char(G) denote the subring of the ring of complex-valued class functions of G consisting of integer combinations of irreducible characters. Char(G) is known as the character ring of G, and its elements are known as virtual characters (alternatively, as generalized characters, or sometimes difference characters). It is a ring by virtue of the fact that the product of characters of G is again a character of G. Its multiplication is given by the elementwise product of class functions. - -Brauer's induction theorem shows that the character ring can be generated (as an abelian group) by induced characters of the form $\lambda^{G}_{H}$, where H ranges over subgroups of G and λ ranges over linear characters (having degree 1) of H. - -In fact, Brauer showed that the subgroups H could be chosen from a very restricted collection, now called Brauer elementary subgroups. These are direct products of cyclic groups and groups whose order is a power of a prime. - -The proof of Brauer's induction theorem exploits the ring structure of Char(G) (most proofs also make use of a slightly larger ring, Char*(G), which consists of $\mathbb{Z}[\omega]$-combinations of irreducible characters, where ω is a primitive complex |G|-th root of unity). The set of integer combinations of characters induced from linear characters of Brauer elementary subgroups is an ideal I(G) of Char(G), so the proof reduces to showing that the trivial character is in I(G). Several proofs of the theorem, beginning with a proof due to Brauer and John Tate, show that the trivial character is in the analogously defined ideal I*(G) of Char*(G) by concentrating attention on one prime p at a time, and constructing integer-valued elements of I*(G) which differ (elementwise) from the trivial character by (integer multiples of) a sufficiently high power of p. Once this is achieved for every prime divisor of |G|, some manipulations with congruences and algebraic integers, again exploiting the fact that I*(G) is an ideal of Char*(G), place the trivial character in I(G). An auxiliary result here is that a $\mathbb{Z}[\omega]$-valued class function lies in the ideal I*(G) if its values are all divisible (in $\mathbb{Z}[\omega]$) by |G|. - -Brauer's induction theorem was proved in 1946, and there are now many alternative proofs. In 1986, Victor Snaith gave a proof by a radically different approach, topological in nature (an application of the Lefschetz fixed-point theorem). There has been related recent work on the question of finding natural and explicit forms of Brauer's theorem, notably by Robert Boltje. - -Using Frobenius reciprocity, Brauer's induction theorem leads easily to his fundamental characterization of characters, which asserts that a complex-valued class function of G is a virtual character if and only if its restriction to each Brauer elementary subgroup of G is a virtual character.
This result, together with the fact that a virtual character θ is an irreducible character if and only if θ(1) > 0 and $\langle \theta,\theta \rangle =1 $ (where $\langle,\rangle$ is the usual inner product on the ring of complex-valued class functions), gives a means of constructing irreducible characters without explicitly constructing the associated representations. - -An initial motivation for Brauer's induction theorem was application to Artin L-functions. It shows that those are built up from Dirichlet L-functions, or more general Hecke L-functions. Highly significant for that application is whether each character of G is a non-negative integer combination of characters induced from linear characters of subgroups. In general, this is not the case. In fact, by a theorem of Taketa, if all characters of G are so expressible, then G must be a solvable group (although solvability alone does not guarantee such expressions; for example, the solvable group SL(2,3) has an irreducible complex character of degree 2 which is not expressible as a non-negative integer combination of characters induced from linear characters of subgroups). An ingredient of the proof of Brauer's induction theorem is that when G is a finite nilpotent group, every complex irreducible character of G is induced from a linear character of some subgroup. diff --git a/wiki/wikipedia/771.txt b/wiki/wikipedia/771.txt deleted file mode 100644 index e0bf80147429e712ba68a667c192da48c3cadc19..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/771.txt +++ /dev/null @@ -1,40 +0,0 @@ -The log sum inequality is used for proving theorems in information theory. - -Let $a_1,\ldots,a_n$ and $b_1,\ldots,b_n$ be nonnegative numbers. Denote the sum of all $a_i$s by $a$ and the sum of all $b_i$s by $b$. The log sum inequality states that -$$ -\sum_{i=1}^n a_i\log\frac{a_i}{b_i}\geq a\log\frac{a}{b}, -$$ - -with equality if and only if $\frac{a_i}{b_i}$ are equal for all $i$, in other words $a_i =c b_i$ for all $i$. - -(Take $a_i\log \frac{a_i}{b_i}$ to be $0$ if $a_i=0$ and $\infty$ if $a_i>0, b_i=0$. These are the limiting values obtained as the relevant number tends to $0$.) - -Notice that after setting $f(x)=x\log x$ we have - - - -\begin{align} - -\sum_{i=1}^n a_i\log\frac{a_i}{b_i} & {} = \sum_{i=1}^n b_i f\left(\frac{a_i}{b_i}\right) - -= b\sum_{i=1}^n \frac{b_i}{b} f\left(\frac{a_i}{b_i}\right) \\ - -& {} \geq b f\left(\sum_{i=1}^n \frac{b_i}{b}\frac{a_i}{b_i}\right) = b f\left(\frac{1}{b}\sum_{i=1}^n a_i\right) - -= b f\left(\frac{a}{b}\right) \\ - -& {} = a\log\frac{a}{b}, - -\end{align} - - - -where the inequality follows from Jensen's inequality since $\frac{b_i}{b}\geq 0$, $\sum_{i=1}^n\frac{b_i}{b}= 1$, and $f$ is convex. - -The inequality remains valid for $n=\infty$ provided that $a<\infty$ and $b<\infty$. - -The proof above holds for any function $g$ such that $f(x)=xg(x)$ is convex, such as all continuous non-decreasing functions. Generalizations to non-decreasing functions other than the logarithm are given in Csiszár, 2004. - -The log sum inequality can be used to prove inequalities in information theory. Gibbs' inequality states that the Kullback-Leibler divergence is non-negative, and equal to zero precisely if its arguments are equal. One proof uses the log sum inequality. - -The inequality can also prove convexity of Kullback-Leibler divergence.
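As a quick numerical check of the inequality and its equality condition, here is a self-contained Python sketch (the helper names and the choice of random inputs are ours, purely for illustration):

```python
import math, random

def lhs(a, b):
    """Left side: sum_i a_i log(a_i / b_i), with the 0 log 0 = 0 convention."""
    return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)

def rhs(a, b):
    """Right side: a log(a / b) for the totals a = sum a_i, b = sum b_i."""
    return sum(a) * math.log(sum(a) / sum(b))

random.seed(0)
for _ in range(1000):
    a = [random.uniform(0.1, 1.0) for _ in range(8)]
    b = [random.uniform(0.1, 1.0) for _ in range(8)]
    assert lhs(a, b) >= rhs(a, b) - 1e-12       # the inequality holds

c = 2.5
a = [c * bi for bi in b]
assert abs(lhs(a, b) - rhs(a, b)) < 1e-9        # equality when a_i = c * b_i
```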
diff --git a/wiki/wikipedia/772.txt b/wiki/wikipedia/772.txt deleted file mode 100644 index db6ab9ffddb2b3332347a159d124c70b966e06cc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/772.txt +++ /dev/null @@ -1,8 +0,0 @@ -In Euclidean geometry, Carnot's theorem states that the sum of the signed distances from the circumcenter D to the sides of an arbitrary triangle ABC is -$$ -DF + DG + DH = R + r,\ -$$ - -where r is the inradius and R is the circumradius of the triangle. Here the sign of the distances is taken to be negative if and only if the open line segment DX (X = F, G, H) lies completely outside the triangle. In the diagram, DF is negative and both DG and DH are positive. - -The theorem is named after Lazare Carnot (1753-1823). It is used in a proof of the Japanese theorem for concyclic polygons. diff --git a/wiki/wikipedia/773.txt b/wiki/wikipedia/773.txt deleted file mode 100644 index b312c25bc4beadc492b1da3b4e24b8b169e37515..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/773.txt +++ /dev/null @@ -1,161 +0,0 @@ -An arithmetic progression or arithmetic sequence is a sequence of numbers such that the difference between the consecutive terms is constant. For instance, the sequence 5, 7, 9, 11, 13, 15, . . . is an arithmetic progression with a common difference of 2. - -If the initial term of an arithmetic progression is $a_1$ and the common difference of successive members is $d$, then the $n$-th term of the sequence ($a_n$) is given by: -$$ -\ a_n = a_1 + (n - 1)d -$$, - -and in general -$$ -\ a_n = a_m + (n - m)d -$$. - -A finite portion of an arithmetic progression is called a finite arithmetic progression and sometimes just called an arithmetic progression. The sum of a finite arithmetic progression is called an arithmetic series. - -
Computation of the sum 2 + 5 + 8 + 11 + 14: when the sequence is reversed and added to itself term by term, the resulting sequence has a single repeated value in it, equal to the sum of the first and last numbers (2 + 14 = 16). Thus 16 × 5 = 80 is twice the sum.
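This reversal trick is easy to verify mechanically; a minimal Python sketch (ours, purely illustrative):

```python
# Reverse the progression and add it to itself term by term: every pair
# sums to first + last, so twice the sum is n * (first + last).
seq = [2, 5, 8, 11, 14]
pairs = [u + v for u, v in zip(seq, reversed(seq))]
print(pairs)                                            # [16, 16, 16, 16, 16]
assert sum(seq) == len(seq) * (seq[0] + seq[-1]) // 2   # 5 * 16 / 2 == 40
```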
- -The sum of the members of a finite arithmetic progression is called an arithmetic series. For example, consider the sum: $$2 + 5 + 8 + 11 + 14$$ This sum can be found quickly by taking the number n of terms being added (here 5), multiplying by the sum of the first and last number in the progression (here 2 + 14 = 16), and dividing by 2: $$\frac{n(a_1 + a_n)}{2}$$ In the case above, this gives the equation: $$2 + 5 + 8 + 11 + 14 = \frac{5(2 + 14)}{2} = \frac{5 \times 16}{2} = 40.$$ This formula works for any real numbers $a_1$ and $a_n$. For example: $$\left(-\frac{3}{2}\right) + \left(-\frac{1}{2}\right) + \frac{1}{2} = \frac{3\left(-\frac{3}{2} + \frac{1}{2}\right)}{2} = -\frac{3}{2}.$$ - -To derive the above formula, begin by expressing the arithmetic series in two different ways: $$S_n=a_1+(a_1+d)+(a_1+2d)+\cdots+(a_1+(n-2)d)+(a_1+(n-1)d)$$ $$S_n=(a_n-(n-1)d)+(a_n-(n-2)d)+\cdots+(a_n-2d)+(a_n-d)+a_n.$$ Adding both sides of the two equations, all terms involving d cancel: $$2S_n=n(a_1 + a_n).$$ Dividing both sides by 2 produces a common form of the equation: $$S_n=\frac{n}{2}( a_1 + a_n).$$ An alternate form results from re-inserting the substitution $a_n = a_1 + (n-1)d$: $$S_n=\frac{n}{2}[ 2a_1 + (n-1)d].$$ Furthermore, the mean value of the series can be calculated via $S_n / n$: $$\overline{a} =\frac{a_1 + a_n}{2}.$$ The formula is very similar to the mean of a discrete uniform distribution. - -The product of the members of a finite arithmetic progression with an initial element a1, common difference d, and n elements in total is given by the closed-form expression $$a_1a_2a_3\cdots a_n = a_1(a_1+d)(a_1+2d)\cdots(a_1+(n-1)d)= \prod_{k=0}^{n-1} (a_1+kd) = d^n \frac{\Gamma \left(\frac{a_1}{d} + n\right) }{\Gamma \left( \frac{a_1}{d} \right)}$$ where $\Gamma$ denotes the Gamma function. The formula is not valid when $a_1/d$ is negative or zero. - -This is a generalization of the fact that the product of the progression $1 \times 2 \times \cdots \times n$ is given by the factorial $n!$ and that the product $$m \times (m+1) \times (m+2) \times \cdots \times (n-2) \times (n-1) \times n$$ for positive integers $m$ and $n$ is given by $$\frac{n!}{(m-1)!}.$$ To see this, write $$\begin{align} a_1a_2a_3\cdots a_n &=\prod_{k=0}^{n-1} (a_1+kd) = \prod_{k=0}^{n-1} d\left(\frac{a_1}{d}+k\right) = d \left (\frac{a_1}{d}\right) d \left (\frac{a_1}{d}+1 \right )d \left ( \frac{a_1}{d}+2 \right )\cdots d \left ( \frac{a_1}{d}+(n-1) \right ) \\ &= d^n\prod_{k=0}^{n-1} \left(\frac{a_1}{d}+k\right)=d^n {\left(\frac{a_1}{d}\right)}^{\overline{n}} \end{align}$$ where $x^{\overline{n}}$ denotes the rising factorial. - -By the recurrence formula $\Gamma(z+1)=z\Gamma(z)$, valid for any complex number $z$ with positive real part, $$\Gamma(z+2)=(z+1)\Gamma(z+1)=(z+1)z\Gamma(z),$$ $$\Gamma(z+3)=(z+2)\Gamma(z+2)=(z+2)(z+1)z\Gamma(z),$$ so that $$\frac{\Gamma(z+m)}{\Gamma(z)} = \prod_{k=0}^{m-1}(z+k)$$ for $m$ a positive integer and $z$ a complex number with positive real part.
- -Thus, if $a_1/d > 0$, then $$\prod_{k=0}^{n-1} \left(\frac{a_1}{d}+k\right)= \frac{\Gamma \left(\frac{a_1}{d} + n\right) }{\Gamma \left( \frac{a_1}{d} \right)}$$ and, finally, $$a_1a_2a_3\cdots a_n = d^n\prod_{k=0}^{n-1} \left(\frac{a_1}{d}+k\right) = d^n \frac{\Gamma \left(\frac{a_1}{d} + n\right) }{\Gamma \left( \frac{a_1}{d} \right)}.$$ - -;Example 1 - -Taking the example $3, 8, 13, 18, 23, 28, \ldots$, the product of the terms of the arithmetic progression given by $a_n = 3 + 5(n-1)$ up to the 50th term is $$P_{50} = 5^{50} \cdot \frac{\Gamma \left(3/5 + 50\right) }{\Gamma \left( 3 / 5 \right) } \approx 3.78438 \times 10^{98}.$$ - -;Example 2 - -The product of the first 10 odd numbers $(1,3,5,7,9,11,13,15,17,19)$ is given by $$1\cdot 3\cdot 5\cdots 19 =\prod_{k=0}^{9} (1+2k) = 2^{10} \cdot \frac{\Gamma \left(\frac{1}{2} + 10\right) }{\Gamma \left( \frac{1}{2} \right) } = 654729075.$$ - -The standard deviation of any arithmetic progression can be calculated as $$\sigma = |d|\sqrt{\frac{(n-1)(n+1)}{12}}$$ where $n$ is the number of terms in the progression and $d$ is the common difference between terms. The formula is very similar to the standard deviation of a discrete uniform distribution. - -The intersection of any two doubly infinite arithmetic progressions is either empty or another arithmetic progression, which can be found using the Chinese remainder theorem. If each pair of progressions in a family of doubly infinite arithmetic progressions has a non-empty intersection, then there exists a number common to all of them; that is, infinite arithmetic progressions form a Helly family. However, the intersection of infinitely many infinite arithmetic progressions might be a single number rather than itself being an infinite progression. - -According to an anecdote of uncertain reliability, young Carl Friedrich Gauss in primary school reinvented this method to compute the sum of the integers from 1 through 100, by grouping the numbers into n/2 pairs, each with sum n + 1. However, regardless of the truth of this story, Gauss was not the first to discover this formula, and some find it likely that its origin goes back to the Pythagoreans in the 5th century BC. Similar rules were known in antiquity to Archimedes, Hypsicles and Diophantus; in China to Zhang Qiujian; in India to Aryabhata, Brahmagupta and Bhaskara II; and in medieval Europe to Alcuin, Dicuil, Fibonacci, Sacrobosco and to anonymous commentators of the Talmud known as Tosafists. diff --git a/wiki/wikipedia/774.txt b/wiki/wikipedia/774.txt deleted file mode 100644 index 81cff8c8a9bd614d57f7c346f7e493832ad92bba..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/774.txt +++ /dev/null @@ -1,9 +0,0 @@ -SpiNNaker (Spiking Neural Network Architecture) is a massively parallel, manycore supercomputer architecture designed by the Advanced Processor Technologies Research Group (APT) at the Department of Computer Science, University of Manchester. It is composed of 57,600 processing nodes, each with 18 ARM9 processors (specifically ARM968) and 128 MB of mobile DDR SDRAM, totalling 1,036,800 cores and over 7 TB of RAM. The computing platform is based on spiking neural networks, useful in simulating the human brain (see Human Brain Project). - -The completed design is housed in 10 19-inch racks, with each rack holding over 100,000 cores. The cards holding the chips are held in 5 blade enclosures, and each core emulates 1,000 neurons.
In total, the goal is to simulate the behaviour of aggregates of up to a billion neurons in real time. This machine requires about 100 kW from a 240 V supply and an air-conditioned environment. - -SpiNNaker is being used as one component of the neuromorphic computing platform for the Human Brain Project. - -On 14 October 2018 the HBP announced that the million-core milestone had been achieved. - -On 24 September 2019 the HBP announced that an 8 million euro grant, which will fund construction of the second generation machine (called SpiNNcloud), had been given to TU Dresden. diff --git a/wiki/wikipedia/775.txt b/wiki/wikipedia/775.txt deleted file mode 100644 index a6fea544276a745e42308bdf430da895daebe6f1..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/775.txt +++ /dev/null @@ -1,55 +0,0 @@ -In logic, a true/false decision problem is decidable if there exists an effective method for deriving the correct answer. Zeroth-order logic (propositional logic) is decidable, whereas first-order and higher-order logic are not. Logical systems are decidable if membership in their set of logically valid formulas (or theorems) can be effectively determined. A theory (a set of sentences closed under logical consequence) in a fixed logical system is decidable if there is an effective method for determining whether arbitrary formulas are included in the theory. Many important problems are undecidable, that is, it has been proven that no effective method for determining membership (returning a correct answer after finite, though possibly very long, time in all cases) can exist for them. - -Each logical system comes with both a syntactic component, which among other things determines the notion of provability, and a semantic component, which determines the notion of logical validity. The logically valid formulas of a system are sometimes called the theorems of the system, especially in the context of first-order logic where Gödel's completeness theorem establishes the equivalence of semantic and syntactic consequence. In other settings, such as linear logic, the syntactic consequence (provability) relation may be used to define the theorems of a system. - -A logical system is decidable if there is an effective method for determining whether arbitrary formulas are theorems of the logical system. For example, propositional logic is decidable, because the truth-table method can be used to determine whether an arbitrary propositional formula is logically valid. - -First-order logic is not decidable in general; in particular, the set of logical validities in any signature that includes equality and at least one other predicate with two or more arguments is not decidable. Logical systems extending first-order logic, such as second-order logic and type theory, are also undecidable. - -The validities of monadic predicate calculus with identity are decidable, however. This system is first-order logic restricted to those signatures that have no function symbols and whose relation symbols other than equality never take more than one argument. - -Some logical systems are not adequately represented by the set of theorems alone. (For example, Kleene's logic has no theorems at all.) In such cases, alternative definitions of decidability of a logical system are often used, which ask for an effective method for determining something more general than just validity of formulas; for instance, validity of sequents, or the consequence relation {(Γ, A) | Γ ⊧ A} of the logic.
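As a concrete illustration of the truth-table method mentioned above, here is a minimal Python sketch of the decision procedure for propositional validity (formulas are encoded as Boolean functions purely for brevity; the encoding is ours, not a standard library):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def is_valid(formula, n_vars):
    """Decide validity by enumerating all 2^n truth assignments:
    an effective method, since the search space is finite."""
    return all(formula(*v) for v in product((False, True), repeat=n_vars))

# Peirce's law ((p -> q) -> p) -> p is a classical validity:
print(is_valid(lambda p, q: implies(implies(implies(p, q), p), p), 2))  # True
# p -> q is falsifiable (take p = True, q = False):
print(is_valid(implies, 2))                                             # False
```

The exhaustive enumeration is exactly why propositional logic is decidable and why no analogous finite procedure exists for first-order validity, where the space of interpretations is infinite.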
- -A theory is a set of formulas, often assumed to be closed under logical consequence. Decidability for a theory concerns whether there is an effective procedure that decides, given an arbitrary formula in the signature of the theory, whether the formula is a member of the theory or not. The problem of decidability arises naturally when a theory is defined as the set of logical consequences of a fixed set of axioms. - -There are several basic results about decidability of theories. Every (non-paraconsistent) inconsistent theory is decidable, as every formula in the signature of the theory will be a logical consequence of, and thus a member of, the theory. Every complete recursively enumerable first-order theory is decidable. An extension of a decidable theory may not be decidable. For example, there are undecidable theories in propositional logic, although the set of validities (the smallest theory) is decidable. - -A consistent theory that has the property that every consistent extension is undecidable is said to be essentially undecidable. In fact, every consistent extension will be essentially undecidable. The theory of fields is undecidable but not essentially undecidable. Robinson arithmetic is known to be essentially undecidable, and thus every consistent theory that includes or interprets Robinson arithmetic is also (essentially) undecidable. - -Examples of decidable first-order theories include the theory of real closed fields and Presburger arithmetic, while the theory of groups and Robinson arithmetic are examples of undecidable theories. - -Some undecidable theories include (Monk 1976): - -* The set of logical validities in any first-order signature with equality and either: a relation symbol of arity no less than 2, or two unary function symbols, or one function symbol of arity no less than 2, established by Trakhtenbrot in 1953. - -* The first-order theory of the natural numbers with addition, multiplication, and equality, established by Tarski and Andrzej Mostowski in 1949. - -* The first-order theory of the rational numbers with addition, multiplication, and equality, established by Julia Robinson in 1949. - -* The first-order theory of groups, established by Alfred Tarski in 1953. Remarkably, not only is the general theory of groups undecidable, but so are several more specific theories, for example (as established by Mal'cev 1961) the theory of finite groups. Mal'cev also established that the theory of semigroups and the theory of rings are undecidable. Robinson established in 1949 that the theory of fields is undecidable. - -* Robinson arithmetic (and therefore any consistent extension, such as Peano arithmetic) is essentially undecidable, as established by Raphael Robinson in 1950. - -* The first-order theory with equality and two function symbols. - -The interpretability method is often used to establish undecidability of theories. If an essentially undecidable theory T is interpretable in a consistent theory S, then S is also essentially undecidable. This is closely related to the concept of a many-one reduction in computability theory. - -A property of a theory or logical system weaker than decidability is semidecidability. A theory is semidecidable if there is an effective method which, given an arbitrary formula, will always tell correctly when the formula is in the theory, but may give either a negative answer or no answer at all when the formula is not in the theory.
A logical system is semidecidable if there is an effective method for generating theorems (and only theorems) such that every theorem will eventually be generated. This is different from decidability because in a semidecidable system there may be no effective procedure for checking that a formula is not a theorem. - -Every decidable theory or logical system is semidecidable, but in general the converse is not true; a theory is decidable if and only if both it and its complement are semi-decidable. For example, the set of logical validities V of first-order logic is semi-decidable, but not decidable. In this case, it is because there is no effective method for determining for an arbitrary formula A whether A is not in V. Similarly, the set of logical consequences of any recursively enumerable set of first-order axioms is semidecidable. Many of the examples of undecidable first-order theories given above are of this form. - -Decidability should not be confused with completeness. For example, the theory of algebraically closed fields is decidable but incomplete, whereas the set of all true first-order statements about nonnegative integers in the language with + and × is complete but undecidable. - -Unfortunately, as a terminological ambiguity, the term "undecidable statement" is sometimes used as a synonym for independent statement. - -As with the concept of a decidable set, the definition of a decidable theory or logical system can be given either in terms of effective methods or in terms of computable functions. These are generally considered equivalent per Church's thesis. Indeed, the proof that a logical system or theory is undecidable will use the formal definition of computability to show that an appropriate set is not a decidable set, and then invoke Church's thesis to show that the theory or logical system is not decidable by any effective method (Enderton 2001, pp. 206ff.). - -Some games have been classified as to their decidability: - -* Chess is decidable. The same holds for all other finite two-player games with perfect information. - -* Mate in n in infinite chess (with limitations on rules and gamepieces) is decidable. However, there are positions (with finitely many pieces) that are forced wins, but not mate in n for any finite n. - -* Some team games with imperfect information on a finite board (but with unlimited time) are undecidable. - -* Conway's Game of Life is undecidable. diff --git a/wiki/wikipedia/776.txt b/wiki/wikipedia/776.txt deleted file mode 100644 index 4092cd1b18ebab4bd90e87ccd503d49f354f40d7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/776.txt +++ /dev/null @@ -1,95 +0,0 @@ -In cryptanalysis, the piling-up lemma is a principle used in linear cryptanalysis to construct linear approximations to the action of block ciphers. It was introduced by Mitsuru Matsui (1993) as an analytical tool for linear cryptanalysis. The lemma states that the bias (deviation of the expected value from 1/2) of a linear Boolean function (XOR-clause) of independent binary random variables is related to the product of the input biases: -$$ -\epsilon(X_1\oplus X_2\oplus\cdots\oplus X_n)=2^{n-1}\prod_{i=1}^n \epsilon(X_i) -$$ - -or -$$ -I(X_1\oplus X_2\oplus\cdots\oplus X_n ) =\prod_{i=1}^n I(X_i) -$$ - -where $\epsilon \in [-\tfrac{1}{2}, \tfrac{1}{2}]$ is the bias (towards zero) and $I \in [-1, 1]$ the imbalance: -$$ -\epsilon(X) = P(X=0) - \frac{1}{2} -$$ -$$ -I(X) = P(X=0) - P(X=1) = 2 \epsilon(X) -$$. 
- -Conversely, if the lemma does not hold, then the input variables are not independent. - -The lemma implies that XOR-ing independent binary variables always reduces the bias (or at least does not increase it); moreover, the output is unbiased if and only if there is at least one unbiased input variable. - -Note that for two variables the quantity $I(X \oplus Y)$ is a correlation measure of $X$ and $Y$, equal to $P(X=Y)-P(X\ne Y)$; $I(X)$ can be interpreted as the correlation of $X$ with $0$. - -The piling-up lemma can be expressed more naturally when the random variables take values in $\{-1,1\}$. If we introduce variables $\chi_i = 1 - 2X_i = (-1)^{X_i}$ (mapping 0 to 1 and 1 to -1) then, by inspection, the XOR-operation transforms into a product: -$$ -\chi_1\chi_2\cdots\chi_n = 1 - 2(X_1 \oplus X_2\oplus\cdots\oplus X_n) = (-1)^{X_1 \oplus X_2\oplus\cdots\oplus X_n} -$$ - -and since the expected values are the imbalances, $E(\chi_i)=I(X_i)$, the lemma now states: -$$ -E\left(\prod_{i=1}^n \chi_i \right)=\prod_{i=1}^nE(\chi_i) -$$ - -which is a known property of the expected value for independent variables. - -For dependent variables the above formulation gains a (positive or negative) covariance term, so the lemma does not hold. In fact, since two Bernoulli variables are independent if and only if they are uncorrelated (i.e. have zero covariance; see uncorrelatedness), we have the converse of the piling-up lemma: if it does not hold, the variables are not independent (uncorrelated). - -The piling-up lemma allows the cryptanalyst to determine the probability that the equality -$$ -X_1\oplus X_2\oplus\cdots\oplus X_n=0 -$$ - -holds, where the X's are binary variables (that is, bits: either 0 or 1). - -Let P(A) denote "the probability that A is true". If it equals one, A is certain to happen, and if it equals zero, A cannot happen. First of all, we consider the piling-up lemma for two binary variables, where $P(X_1 = 0)=p_1$ and $P(X_2 = 0)=p_2$. - -Now, we consider: -$$ -P(X_1 \oplus X_2 = 0) -$$ - -Due to the properties of the xor operation, this is equivalent to -$$ -P(X_1=X_2) -$$ - -X1 = X2 = 0 and X1 = X2 = 1 are mutually exclusive events, so we can say -$$ -P(X_1=X_2)=P(X_1=X_2=0) + P(X_1=X_2=1)=P(X_1=0, X_2=0) + P(X_1=1, X_2=1) -$$ - -Now, we must make the central assumption of the piling-up lemma: the binary variables we are dealing with are independent; that is, the state of one has no effect on the state of any of the others. Thus we can expand the probability function as follows: -$$ -P(X_1 \oplus X_2 = 0)=p_1 p_2 + (1-p_1)(1-p_2) -$$ - -Now we express the probabilities p1 and p2 as ½ + ε1 and ½ + ε2, where the ε's are the probability biases - the amount the probability deviates from ½. This gives -$$ -P(X_1 \oplus X_2 = 0)=\left(\tfrac{1}{2} + \epsilon_1\right)\left(\tfrac{1}{2} + \epsilon_2\right) + \left(\tfrac{1}{2} - \epsilon_1\right)\left(\tfrac{1}{2} - \epsilon_2\right) = \tfrac{1}{2} + 2\epsilon_1\epsilon_2. -$$ - -Thus the probability bias ε1,2 for the XOR sum above is 2ε1ε2. - -This formula can be extended to more X's as follows: -$$ -P(X_1\oplus X_2\oplus\cdots\oplus X_n=0)=1/2+2^{n-1}\prod_{i=1}^n \epsilon_i -$$ - -Note that if any of the ε's is zero, that is, one of the binary variables is unbiased, then the entire probability function will be unbiased, equal to ½. - -A related, slightly different definition of the bias is -$$ - \epsilon_i = P(X_i=1) - P(X_i=0), -$$ - -in fact minus two times the previous value. The advantage is that now, with -$$ -\varepsilon_{total}= P(X_1\oplus X_2\oplus\cdots\oplus X_n=1)- P(X_1\oplus X_2\oplus\cdots\oplus X_n=0), -$$ - -we have -$$ -\varepsilon_{total}=(-1)^{n+1}\prod_{i=1}^n \varepsilon_i, -$$ - -so adding random variables amounts to multiplying their (2nd definition) biases.
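As a sanity check, the formula above can be verified by exhaustive enumeration. The following minimal Python sketch (function names and the choice of test biases are ours) computes the exact bias of an XOR of independent biased bits and compares it with $2^{n-1}\prod_{i=1}^n \epsilon_i$.

```python
from itertools import product

def xor_bias(eps):
    """Exact bias of X1 ^ ... ^ Xn for independent bits,
    where P(X_i = 0) = 1/2 + eps[i]."""
    p0 = 0.0
    for bits in product([0, 1], repeat=len(eps)):
        prob = 1.0
        for b, e in zip(bits, eps):
            prob *= (0.5 + e) if b == 0 else (0.5 - e)
        if sum(bits) % 2 == 0:          # the XOR of the bits is 0
            p0 += prob
    return p0 - 0.5

eps = [0.25, 0.1, -0.2]
predicted = 2 ** (len(eps) - 1)         # 2^{n-1} * prod(eps_i)
for e in eps:
    predicted *= e
assert abs(xor_bias(eps) - predicted) < 1e-12
```

For independent inputs the agreement is exact, which is the content of the lemma; for dependent inputs the enumeration would pick up the covariance term discussed above.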
- -In practice, the Xs are approximations to the S-boxes (substitution components) of block ciphers. Typically, X values are inputs to the S-box and Y values are the corresponding outputs. By simply looking at the S-boxes, the cryptanalyst can tell what the probability biases are. The trick is to find combinations of input and output values that have probabilities of zero or one. The closer the approximation is to zero or one, the more helpful the approximation is in linear cryptanalysis. - -However, in practice, the binary variables are not independent, as is assumed in the derivation of the piling-up lemma. This consideration has to be kept in mind when applying the lemma; it is not an automatic cryptanalysis formula. diff --git a/wiki/wikipedia/777.txt b/wiki/wikipedia/777.txt deleted file mode 100644 index 795dca508dbe03f3cb7b7d5d0496372dda576623..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/777.txt +++ /dev/null @@ -1,65 +0,0 @@ -The Steiner conic or more precisely Steiner's generation of a conic, named after the Swiss mathematician Jakob Steiner, is an alternative method to define a non-degenerate projective conic section in a projective plane over a field. - -The usual definition of a conic uses a quadratic form (see Quadric (projective geometry)). Another alternative definition of a conic uses a hyperbolic polarity. It is due to K. G. C. von Staudt and sometimes called a von Staudt conic. The disadvantage of von Staudt's definition is that it only works when the underlying field has odd characteristic (i.e., $Char\ne2$). - -*Given two pencils $B(U),B(V)$ of lines at two points $U,V$ (all lines containing $U$ and $V$ resp.) and a projective but not perspective mapping $\pi$ of $B(U)$ onto $B(V)$. Then the intersection points of corresponding lines form a non-degenerate projective conic section (figure 1) - -A perspective mapping $\pi$ of a pencil $B(U)$ onto a pencil $B(V)$ is a bijection (1-1 correspondence) such that corresponding lines intersect on a fixed line $a$, which is called the axis of the perspectivity $\pi$ (figure 2). - -A projective mapping is a finite product of perspective mappings. - -Simple example: If one shifts in the first diagram point $U$ and its pencil of lines onto $V$ and rotates the shifted pencil around $V$ by a fixed angle $\varphi$ then the shift (translation) and the rotation generate a projective mapping $\pi$ of the pencil at point $U$ onto the pencil at $V$. From the inscribed angle theorem one gets: The intersection points of corresponding lines form a circle. - -Examples of commonly used fields are the real numbers $\R$, the rational numbers $\Q$ or the complex numbers $\C$. The construction also works over finite fields, providing examples in finite projective planes. - -Remark: - -The fundamental theorem for projective planes states, that a projective mapping in a projective plane over a field (pappian plane) is uniquely determined by prescribing the images of three lines. That means that, for the Steiner generation of a conic section, besides two points $U,V$ only the images of 3 lines have to be given. These 5 items (2 points, 3 lines) uniquely determine the conic section. - -Remark: - -The notation "perspective" is due to the dual statement: The projection of the points on a line $a$ from a center $Z$ onto a line $b$ is called a perspectivity (see below). - -For the following example the images of the lines $ a,u,w$ (see picture) are given: $\pi(a)=b, \pi(u)=w, \pi(w)=v$. 
The projective mapping $\pi$ is the product of the following perspective mappings $\pi_b,\pi_a$: 1) $\pi_b$ is the perspective mapping of the pencil at point $U$ onto the pencil at point $O$ with axis $b$. 2) $\pi_a$ is the perspective mapping of the pencil at point $O$ onto the pencil at point $V$ with axis $a$. - -First one should check that $\pi=\pi_a\pi_b$ has the properties: $\pi(a)=b, \pi(u)=w, \pi(w)=v$. Hence for any line $g$ the image $\pi(g)=\pi_a\pi_b(g)$ can be constructed and therefore the images of an arbitrary set of points. The lines $u$ and $v$ contain only the conic points $U$ and $V$ resp.. Hence $u$ and $v$ are tangent lines of the generated conic section. - -A proof that this method generates a conic section follows from switching to the affine restriction with line $w$ as the line at infinity, point $O$ as the origin of a coordinate system with points $U,V$ as points at infinity of the x- and y-axis resp. and point $E=(1,1)$. The affine part of the generated curve appears to be the hyperbola $y=1/x$. - -Remark: - -#The Steiner generation of a conic section provides simple methods for the construction of ellipses, parabolas and hyperbolas which are commonly called the parallelogram methods. - -#The figure that appears while constructing a point (figure 3) is the 4-point-degeneration of Pascal's theorem. - -Dualizing (see duality (projective geometry)) a projective plane means exchanging the points with the lines and the operations intersection and connecting. The dual structure of a projective plane is also a projective plane. The dual plane of a pappian plane is pappian and can also be coordinatized by homogenous coordinates. A nondegenerate dual conic section is analogously defined by a quadratic form. - -A dual conic can be generated by Steiner's dual method: - -*Given the point sets of two lines $u,v$ and a projective but not perspective mapping $\pi$ of $u$ onto $v$. Then the lines connecting corresponding points form a dual non-degenerate projective conic section. - -A perspective mapping $\pi$ of the point set of a line $u$ onto the point set of a line $v$ is a bijection (1-1 correspondence) such that the connecting lines of corresponding points intersect at a fixed point $Z$, which is called the centre of the perspectivity $\pi$ (see figure). - -A projective mapping is a finite sequence of perspective mappings. - -It is usual, when dealing with dual and common conic sections, to call the common conic section a point conic and the dual conic a line conic. - -In the case that the underlying field has $Char =2$ all the tangents of a point conic intersect in a point, called the knot (or nucleus) of the conic. Thus, the dual of a non-degenerate point conic is a subset of points of a dual line and not an oval curve (in the dual plane). So, only in the case that $Char\ne2$ is the dual of a non-degenerate point conic a non-degenerate line conic. - -(1) Projectivity given by two perspectivities:
    - -Two lines $u,v$ with intersection point $W$ are given and a projectivity $\pi$ from $u$ onto $v$ by two perspectivities $\pi_A,\pi_B$ with centers $A,B$. $\pi_A$ maps line $u$ onto a third line $o$, $\pi_B$ maps line $o$ onto line $v$ (see diagram). Point $W$ must not lie on the lines $\overline{AB},o$. Projectivity $\pi$ is the composition of the two perspectivities: $ \ \pi=\pi_B\pi_A$. Hence a point $X$ is mapped onto $\pi(X)=\pi_B\pi_A(X)$ and the line $x=\overline{X\pi(X)}$ is an element of the dual conic defined by $\pi$.
    - -(If $W$ were a fixed point, $\pi$ would be a perspectivity.) - -(2) Three points and their images are given:
    - -The following example is dual to the one given above for a Steiner conic.
    - -The images of the points $ A,U,W$ are given: $\pi(A)=B, \pi(U)=W, \pi(W)=V$. The projective mapping $\pi$ can be represented by the product of the following perspectivities $\pi_B,\pi_A$: - -# $\pi_B$ is the perspectivity of the point set of line $u$ onto the point set of line $o$ with centre $B$. - -# $\pi_A$ is the perspectivity of the point set of line $o$ onto the point set of line $v$ with centre $A$. - -One easily checks that the projective mapping $\pi=\pi_A\pi_B$ fulfills $\pi(A)=B, \pi(U)=W, \pi(W)=V $. Hence for any arbitrary point $G$ the image $\pi(G)=\pi_A\pi_B(G)$ can be constructed and line $\overline{G\pi(G)}$ is an element of a non degenerate dual conic section. Because the points $U$ and $V$ are contained in the lines $u$, $v$ resp.,the points $U$ and $V$ are points of the conic and the lines $u,v$ are tangents at $U,V$. diff --git a/wiki/wikipedia/778.txt b/wiki/wikipedia/778.txt deleted file mode 100644 index c55e944c1183a02525b07fbdccabe15854715a59..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/778.txt +++ /dev/null @@ -1,76 +0,0 @@ -In mathematics, an algebraic geometric code (AG-code), otherwise known as a Goppa code, is a general type of linear code constructed by using an algebraic curve $X$ over a finite field $\mathbb{F}_q$. Such codes were introduced by Valerii Denisovich Goppa. In particular cases, they can have interesting extremal properties. They should not be confused with binary Goppa codes that are used, for instance, in the McEliece cryptosystem. - -Traditionally, an AG-code is constructed from a non-singular projective curve X over a finite field $\mathbb{F}_q$ by using a number of fixed distinct $\mathbb{F}_q$-rational points on $\mathbf{X}$: -$$ -\mathcal{P}:= \{P_1, \ldots, P_n \} \subset \mathbf{X} (\mathbb{F}_q). -$$ - -Let $G$ be a divisor on X, with a support that consists of only rational points and that is disjoint from the $P_i$ (i.e., $\mathcal{P} \cap \operatorname{supp}(G) = \varnothing$). - -By the Riemann–Roch theorem, there is a unique finite-dimensional vector space, $L(G)$, with respect to the divisor $G$. The vector space is a subspace of the function field of X. - -There are two main types of AG-codes that can be constructed using the above information. - -The function code (or dual code) with respect to a curve X, a divisor $G$ and the set $\mathcal{P}$ is constructed as follows. - -Let $D = P_1 + \cdots + P_n$, be a divisor, with the $P_i$ defined as above. We usually denote a Goppa code by C(D,G). We now know all we need to define the Goppa code: -$$ -C(D, G) = \left \{ \left (f(P_1), \ldots, f(P_n) \right ) \ : \ f \in L(G) \right \} \subset \mathbb{F}_q^n -$$ - -For a fixed basis $f_1, \ldots, f_k$ for L(G) over $\mathbb{F}_q$, the corresponding Goppa code in $\mathbb{F}_q^n$ is spanned over $\mathbb{F}_q$ by the vectors -$$ -\left (f_i(P_1), \ldots, f_i(P_n) \right ) -$$ - -Therefore, -$$ - \begin{bmatrix} f_1(P_1) & \cdots & f_1(P_n) \\ \vdots & & \vdots \\ f_k(P_1) & \cdots & f_k(P_n) \end{bmatrix} -$$ - -is a generator matrix for $C(D, G).$ - -Equivalently, it is defined as the image of -$$ -\begin{cases} \alpha : L(G) \to \mathbb{F}^n \\ f \mapsto (f(P_1), \ldots ,f(P_n)) \end{cases} -$$ - -The following shows how the parameters of the code relate to classical parameters of linear systems of divisors D on C (cf. Riemann–Roch theorem for more). The notation ℓ(D) means the dimension of L(D). - -Proposition A. The dimension of the Goppa code $C(D, G)$ is $k = \ell(G) - \ell(G-D).$ - -Proof. 
Since $C(D,G) \cong L(G)/\ker(\alpha),$ we must show that -$$ -\ker(\alpha)=L(G-D). -$$ - -Let $f \in \ker(\alpha)$ then $f(P_1)=\cdots=f(P_n) =0$ so $\operatorname{div}(f) > D $. Thus, $f \in L(G-D).$ Conversely, suppose $f \in L(G-D),$ then $\operatorname{div}(f)> D$ since -$$ -P_i < G, \quad i=1, \ldots ,n. -$$ - -(G doesn't “fix” the problems with the $-D$, so f must do that instead.) It follows that $f(P_1)=\cdots = f(P_n) = 0.$ - -Proposition B. The minimal distance between two code words is $d \geqslant n - \deg(G).$ - -Proof. Suppose the Hamming weight of $\alpha(f)$ is d. That means that for $n-d$ indices $ i_1, \ldots, i_{n-d}$ we have$f(P_{i_k})=0$ for $k \in \{1, \ldots, n-d\}.$ Then $f \in L(G-P_{i_1} - \cdots - P_{i_{n-d}})$, and -$$ -\operatorname{div}(f)+G-P_{i_1} - \cdots - P_{i_{n-d}}> 0. -$$ - -Taking degrees on both sides and noting that -$$ -\deg(\operatorname{div}(f))=0, -$$ - -we get -$$ -\deg(G)-(n-d) \geqslant 0. -$$ - -so -$$ -d \geq n - \deg(G). -$$ - -The residue code can be defined as the dual of the function code, or as the residue of some functions at the $P_i$'s. diff --git a/wiki/wikipedia/779.txt b/wiki/wikipedia/779.txt deleted file mode 100644 index 41c85b09adb9c3a758901ae1e2b03cf605453395..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/779.txt +++ /dev/null @@ -1,17 +0,0 @@ -In metalogic and metamathematics, Frege's theorem is a metatheorem that states that the Peano axioms of arithmetic can be derived in second-order logic from Hume's principle. It was first proven, informally, by Gottlob Frege in his 1884 Die Grundlagen der Arithmetik (The Foundations of Arithmetic) and proven more formally in his 1893 Grundgesetze der Arithmetik I (Basic Laws of Arithmetic I). The theorem was re-discovered by Crispin Wright in the early 1980s and has since been the focus of significant work. It is at the core of the philosophy of mathematics known as neo-logicism (at least of the Scottish School variety). - -In The Foundations of Arithmetic (1884), and later, in Basic Laws of Arithmetic (vol. 1, 1893; vol. 2, 1903), Frege attempted to derive all of the laws of arithmetic from axioms he asserted as logical (see logicism). Most of these axioms were carried over from his Begriffsschrift; the one truly new principle was one he called the Basic Law V - -The inconsistency in Frege's Grundgesetze overshadowed Frege's achievement: according to Edward Zalta, the Grundgesetze "contains all the essential steps of a valid proof (in second-order logic) of the fundamental propositions of arithmetic from a single consistent principle." This achievement has become known as Frege's theorem. - -In propositional logic, Frege's theorems refers to this tautology: - -(P → (Q→R)) → ((P→Q) → (P→R)) - -The theorem already holds in one of the weakest logics imaginable, the constructive implicational calculus. The proof under the Brouwer–Heyting–Kolmogorov interpretation reads $f \mapsto g\mapsto p\mapsto (f(p)\circ g)(p)$. - -In words: - -"Let f denote a reason that P implies that Q implies R. And let g denote a reason that P implies Q. Then given a f, then given a g, then given a reason p for P, we know that both Q holds by g and that Q implies R holds by f. So R holds." - -The truth table to the right gives a semantic proof. For all possible assignments of false () or true () to P, Q, and R (columns 1, 3, 5), each subformula is evaluated according to the rules for material conditional, the result being shown below its main operator. 
Column 6 shows that the whole formula evaluates to true in every case, i.e. that it is a tautology. In fact, its antecedent (column 2) and its consequent (column 10) are even equivalent. diff --git a/wiki/wikipedia/78.txt b/wiki/wikipedia/78.txt deleted file mode 100644 index bfbfef4fbb087a805d2fbb36af84e799fb41442e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/78.txt +++ /dev/null @@ -1,42 +0,0 @@ -In mathematics, particularly in universal algebra and category theory, transport of structure refers to the process whereby a mathematical object acquires a new structure and its canonical definitions, as a result of being isomorphic to (or otherwise identified with) another object with a pre-existing structure. Definitions by transport of structure are regarded as canonical. - -Since mathematical structures are often defined in reference to an underlying space, many examples of transport of structure involve spaces and mappings between them. For example, if $V$ and $W$ are vector spaces with $(\cdot,\cdot)$ being an inner product on $W$, such that there is an isomorphism $\phi$ from $V$ to $W$, then one can define an inner product $[\cdot, \cdot]$ on $V$ by the following rule: -$$ -[v_1, v_2] = (\phi(v_1), \phi(v_2)) -$$ - -Although the equation makes sense even when $\phi$ is not an isomorphism, it only defines an inner product on $V$ when $\phi$ is, since otherwise it will cause $[\cdot,\cdot]$ to be degenerate. The idea is that $\phi$ allows one to consider $V$ and $W$ as "the same" vector space, and by following this analogy, then one can transport an inner product from one space to the other. - -A more elaborated example comes from differential topology, in which the notion of smooth manifold is involved: if $M$ is such a manifold, and if $X$ is any topological space which is homeomorphic to $M$, then one can consider $X$ as a smooth manifold as well. That is, given a homeomorphism $\phi \colon X \to M$, one can define coordinate charts on $X$ by "pulling back" coordinate charts on $M$ through $\phi$. Recall that a coordinate chart on $M$ is an open set $U$ together with an injective map -$$ -c \colon U \to \mathbb{R}^n -$$ - -for some natural number $n$; to get such a chart on $X$, one uses the following rules: -$$ -U' = \phi^{-1}(U) -$$ and $c' = c \circ \phi$. - -Furthermore, it is required that the charts cover $M$ (the fact that the transported charts cover $X$ follows immediately from the fact that $\phi$ is a bijection). Since $M$ is a smooth manifold, if U and V, with their maps $c \colon U \to \mathbb{R}^n$ and $d \colon V \to \mathbb{R}^n$, are two charts on $M$, then the composition, the "transition map" -$$ -d \circ c^{-1} \colon c(U \cap V) \to \mathbb{R}^n -$$ (a self-map of $\mathbb{R}^n$) - -is smooth. To verify this for the transported charts on $X$, notice that -$$ -\phi^{-1}(U) \cap \phi^{-1}(V) = \phi^{-1}(U \cap V) -$$, - -and therefore -$$ -c'(U' \cap V') = (c \circ \phi)(\phi^{-1}(U \cap V)) = c(U \cap V) -$$, and -$$ -d' \circ (c')^{-1} = (d \circ \phi) \circ (c \circ \phi)^{-1} = d \circ (\phi \circ \phi^{-1}) \circ c^{-1} = d \circ c^{-1} -$$. - -Thus the transition map for $U'$ and $V'$ is the same as that for $U$ and $V$, hence smooth. That is, $X$ is a smooth manifold via transport of structure. This is a special case of transport of structures in general. - -The second example also illustrates why "transport of structure" is not always desirable. Namely, one can take $M$ to be the plane, and $X$ to be an infinite one-sided cone. 
By "flattening" the cone, a homeomorphism of $X$ and $M$ can be obtained, and therefore the structure of a smooth manifold on $X$, but the cone is not "naturally" a smooth manifold. That is, one can consider $X$ as a subspace of 3-space, in which context it is not smooth at the cone point. - -A more surprising example is that of exotic spheres, discovered by Milnor, which states that there are exactly 28 smooth manifolds which are homeomorphic (but by definition not diffeomorphic) to $S^7$, the 7-dimensional sphere in 8-space. Thus, transport of structure is most productive when there exists a canonical isomorphism between the two objects. diff --git a/wiki/wikipedia/780.txt b/wiki/wikipedia/780.txt deleted file mode 100644 index 140d928806e1a993aabeebd70781b31eed92d3a0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/780.txt +++ /dev/null @@ -1,24 +0,0 @@ -In mathematics, a complete set of invariants for a classification problem is a collection of maps -$$ -f_i : X \to Y_i -$$ - -(where $X$ is the collection of objects being classified, up to some equivalence relation $\sim$, and the $Y_i$ are some sets), such that $x \sim x'$ if and only if $f_i(x) = f_i(x')$ for all $i$. In words, such that two objects are equivalent if and only if all invariants are equal. - -Symbolically, a complete set of invariants is a collection of maps such that -$$ -\left( \prod f_i \right) : (X/\sim) \to \left( \prod Y_i \right) -$$ - -is injective. - -As invariants are, by definition, equal on equivalent objects, equality of invariants is a necessary condition for equivalence; a complete set of invariants is a set such that equality of these is also sufficient for equivalence. In the context of a group action, this may be stated as: invariants are functions of coinvariants (equivalence classes, orbits), and a complete set of invariants characterizes the coinvariants (is a set of defining equations for the coinvariants). - -* In the classification of two-dimensional closed manifolds, Euler characteristic (or genus) and orientability are a complete set of invariants. - -* Jordan normal form of a matrix is a complete invariant for matrices up to conjugation, but eigenvalues (with multiplicities) are not. - -A complete set of invariants does not immediately yield a classification theorem: not all combinations of invariants may be realized. Symbolically, one must also determine the image of -$$ -\prod f_i : X \to \prod Y_i. -$$ diff --git a/wiki/wikipedia/781.txt b/wiki/wikipedia/781.txt deleted file mode 100644 index 4f2609b0386e66d6c0d1551055259508101e0a0a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/781.txt +++ /dev/null @@ -1,203 +0,0 @@ -{{Image frame|width=215 - -|content= - - - -\begin{array}{c} - -1 \\ - -1 \quad 1 \\ - -1 \quad 2 \quad 1 \\ - -1 \quad 3 \quad 3 \quad 1 \\ - -1 \quad 4 \quad 6 \quad 4 \quad 1 \\ - -1 \quad 5 \quad 10 \quad 10 \quad 5 \quad 1 \\ - -1 \quad 6 \quad 15 \quad 20 \quad 15 \quad 6 \quad 1 \\ - -1 \quad 7 \quad 21 \quad 35 \quad 35 \quad 21 \quad 7 \quad 1 - -\end{array} - - - -|caption=The binomial coefficient $\tbinom{n}{b}$ appears as the bth entry in the nth row of Pascal's triangle (counting starts at 0). Each entry is the sum of the two above it.}} - -In elementary algebra, the binomial theorem (or binomial expansion) describes the algebraic expansion of powers of a binomial. 
According to the theorem, it is possible to expand the polynomial (x + y)n into a sum involving terms of the form axbyc, where the exponents b and c are nonnegative integers with b + c = n, and the coefficient a of each term is a specific positive integer depending on n and b. For example, for n = 4, -$$ -(x+y)^4 = x^4 + 4 x^3y + 6 x^2 y^2 + 4 x y^3 + y^4. -$$ - -The coefficient a in the term of axbyc is known as the binomial coefficient $\tbinom{n}{b}$ or $\tbinom{n}{c}$ (the two have the same value). These coefficients for varying n and b can be arranged to form Pascal's triangle. These numbers also arise in combinatorics, where $\tbinom{n}{b}$ gives the number of different combinations of b elements that can be chosen from an n-element set. Therefore $\tbinom{n}{b}$ is often pronounced as "n choose b". - -Special cases of the binomial theorem were known since at least the 4th century BC when Greek mathematician Euclid mentioned the special case of the binomial theorem for exponent 2. There is evidence that the binomial theorem for cubes was known by the 6th century AD in India. The commentator Halayudha from the 10th century AD explains this method using what is now known as Pascal's triangle. and a clear statement of this rule can be found in the 12th century text Lilavati by Bhaskara. and also provided a mathematical proof of both the binomial theorem and Pascal's triangle, using an early form of mathematical induction. Blaise Pascal studied the eponymous triangle comprehensively in his Traité du triangle arithmétique. However, the pattern of numbers was already known to the European mathematicians of the late Renaissance, including Stifel, Niccolò Fontana Tartaglia, and Simon Stevin. if one sets $a=x$ and $b=\Delta x,$ interpreting b as an infinitesimal change in a, then this picture shows the infinitesimal change in the volume of an n-dimensional hypercube, $(x+\Delta x)^n,$ where the coefficient of the linear term (in $\Delta x$) is $nx^{n-1},$ the area of the n faces, each of dimension n - 1: -$$ -(x+\Delta x)^n = x^n + nx^{n-1}\Delta x + \binom{n}{2}x^{n-2}(\Delta x)^2 + \cdots. -$$ - -Substituting this into the definition of the derivative via a difference quotient and taking limits means that the higher order terms, $(\Delta x)^2$ and higher, become negligible, and yields the formula $(x^n)'=nx^{n-1},$ interpreted as - -"the infinitesimal rate of change in volume of an n-cube as side length varies is the area of n of its (n - 1)-dimensional faces". - -If one integrates this picture, which corresponds to applying the fundamental theorem of calculus, one obtains Cavalieri's quadrature formula, the integral $\textstyle{\int x^{n-1}dx = \tfrac{1}{n} x^n}$ – see proof of Cavalieri's quadrature formula for details. and r is any complex number, one has - -\begin{align} - -(x+y)^r & =\sum_{k=0}^\infty {r \choose k} x^{r-k} y^k \\ - -&= x^r + r x^{r-1} y + \frac{r(r-1)}{2!} x^{r-2} y^2 + \frac{r(r-1)(r-2)}{3!} x^{r-3} y^3 + \cdots. - -\end{align} - -When r is a nonnegative integer, the binomial coefficients for k > r are zero, so this equation reduces to the usual binomial theorem, and there are at most r + 1 nonzero terms. For other values of r, the series typically has infinitely many nonzero terms. 
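Before turning to specific series, note that the generalized coefficients can be computed directly from the falling-factorial product. The following minimal Python sketch (function names are ours) builds partial sums of the generalized binomial series and checks them against $(1+x)^r$ for $|x| < 1$.

```python
def gen_binom(r, k):
    """Generalized binomial coefficient r(r-1)...(r-k+1)/k! for real r."""
    out = 1.0
    for i in range(k):
        out *= (r - i) / (i + 1)
    return out

def binom_series(r, x, terms):
    """Partial sum of the generalized binomial series for (1 + x)^r."""
    return sum(gen_binom(r, k) * x ** k for k in range(terms))

# Converges for |x| < 1; e.g. the square root of 1.2 with r = 1/2:
approx = binom_series(0.5, 0.2, 30)
assert abs(approx - 1.2 ** 0.5) < 1e-12
```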
- -For example, r = 1/2 gives the following series for the square root: -$$ -\sqrt{1+x} = 1 + \frac{1}{2}x - \frac{1}{8}x^2 + \frac{1}{16}x^3 - \frac{5}{128}x^4 + \frac{7}{256}x^5 - \cdots -$$ - -Taking r = -1, the generalized binomial series gives the geometric series formula, valid for |x| < 1: -$$ -(1+x)^{-1} = \frac{1}{1+x} = 1 - x + x^2 - x^3 + x^4 - x^5 + \cdots -$$ - -More generally, with s = −r: -$$ -\frac{1}{(1-x)^s} = \sum_{k=0}^\infty {s+k-1 \choose k} x^k. -$$ - -So, for instance, when s = 1/2, -$$ -\frac{1}{\sqrt{1+x}} = 1 -\frac{1}{2}x + \frac{3}{8}x^2 - \frac{5}{16}x^3 + \frac{35}{128}x^4 - \frac{63}{256}x^5 + \cdots -$$ - -The generalized binomial theorem can be extended to the case where x and y are complex numbers. For this version, one should again assume |x| > |y| -$$ - (a + b)^{(n)} = \sum_{k=0}^{n}\binom{n}{k}a^{(n-k)}b^{(k)}. -$$ - -The case c = 0 recovers the usual binomial theorem. - -More generally, a sequence $\{p_n\}_{n=0}^\infty$ of polynomials is said to be binomial if - -* $ \deg p_n = n $ for all $n$, - -* $ p_0(0) = 1 $, and - -* $ p_n(x+y) = \sum_{k=0}^n \binom{n}{k} p_k(x) p_{n-k}(y) $ for all $x$, $y$, and $n$. - -An operator $Q$ on the space of polynomials is said to be the basis operator of the sequence $\{p_n\}_{n=0}^\infty$ if $Qp_0 = 0$ and $ Q p_n = n p_{n-1} $ for all $ n \geqslant 1 $. A sequence $\{p_n\}_{n=0}^\infty$ is binomial if and only if its basis operator is a Delta operator. Writing $ E^a $ for the shift by $ a $ operator, the Delta operators corresponding to the above "Pochhammer" families of polynomials are the backward difference $ I - E^{-c} $ for $ c>0 $, the ordinary derivative for $ c=0 $, and the forward difference $ E^{-c} - I $ for $ c<0 $. - -The binomial theorem can be generalized to include powers of sums with more than two terms. The general version is - -(x_1 + x_2 + \cdots + x_m)^n = \sum_{k_1+k_2+\cdots +k_m = n} \binom{n}{k_1, k_2, \ldots, k_m} - -x_1^{k_1} x_2^{k_2} \cdots x_m^{k_m}, - -where the summation is taken over all sequences of nonnegative integer indices k1 through km such that the sum of all ki is n. (For each term in the expansion, the exponents must add up to n). The coefficients $ \tbinom{n}{k_1,\cdots,k_m} $ are known as multinomial coefficients, and can be computed by the formula -$$ - \binom{n}{k_1, k_2, \ldots, k_m} = \frac{n!}{k_1! \cdot k_2! \cdots k_m!}. -$$ - -Combinatorially, the multinomial coefficient $\tbinom{n}{k_1,\cdots,k_m}$ counts the number of different ways to partition an n-element set into disjoint subsets of sizes k1, ..., km. - -When working in more dimensions, it is often useful to deal with products of binomial expressions. By the binomial theorem this is equal to -$$ - (x_1+y_1)^{n_1}\dotsm(x_d+y_d)^{n_d} = \sum_{k_1=0}^{n_1}\dotsm\sum_{k_d=0}^{n_d} \binom{n_1}{k_1} x_1^{k_1}y_1^{n_1-k_1} \dotsc \binom{n_d}{k_d} x_d^{k_d}y_d^{n_d-k_d}. -$$ - -This may be written more concisely, by multi-index notation, as -$$ - (x+y)^\alpha = \sum_{\nu \le \alpha} \binom{\alpha}{\nu} x^\nu y^{\alpha - \nu}. -$$ - -The general Leibniz rule gives the nth derivative of a product of two functions in a form similar to that of the binomial theorem: -$$ -(fg)^{(n)}(x) = \sum_{k=0}^n \binom{n}{k} f^{(n-k)}(x) g^{(k)}(x). -$$ - -Here, the superscript (n) indicates the nth derivative of a function. If one sets f(x) = e^ax and g(x) = e^bx, and then cancels the common factor of e^(a + b)x from both sides of the result, the ordinary binomial theorem is recovered. 
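Since the Leibniz rule has exactly the binomial shape, it can be spot-checked symbolically. A minimal sketch using SymPy (assuming SymPy is available; the particular functions and n = 4 are an arbitrary test case) verifies the identity:

```python
import sympy as sp

x = sp.symbols("x")
f = sp.sin(x)
g = sp.exp(2 * x)
n = 4

# nth derivative of the product versus the Leibniz-rule expansion
lhs = sp.diff(f * g, x, n)
rhs = sum(sp.binomial(n, k) * sp.diff(f, x, n - k) * sp.diff(g, x, k)
          for k in range(n + 1))
assert sp.simplify(lhs - rhs) == 0
```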
- -For the complex numbers the binomial theorem can be combined with de Moivre's formula to yield multiple-angle formulas for the sine and cosine. According to de Moivre's formula, -$$ -\cos\left(nx\right)+i\sin\left(nx\right) = \left(\cos x+i\sin x\right)^n. -$$ - -Using the binomial theorem, the expression on the right can be expanded, and then the real and imaginary parts can be taken to yield formulas for cos(nx) and sin(nx). For example, since -$$ -\left(\cos x+i\sin x\right)^2 = \cos^2 x + 2i \cos x \sin x - \sin^2 x, -$$ - -de Moivre's formula tells us that -$$ -\cos(2x) = \cos^2 x - \sin^2 x \quad\text{and}\quad\sin(2x) = 2 \cos x \sin x, -$$ - -which are the usual double-angle identities. Similarly, since -$$ -\left(\cos x+i\sin x\right)^3 = \cos^3 x + 3i \cos^2 x \sin x - 3 \cos x \sin^2 x - i \sin^3 x, -$$ - -de Moivre's formula yields -$$ -\cos(3x) = \cos^3 x - 3 \cos x \sin^2 x \quad\text{and}\quad \sin(3x) = 3\cos^2 x \sin x - \sin^3 x. -$$ - -In general, -$$ -\cos(nx) = \sum_{k\text{ even}} (-1)^{k/2} {n \choose k}\cos^{n-k} x \sin^k x -$$ - -and -$$ -\sin(nx) = \sum_{k\text{ odd}} (-1)^{(k-1)/2} {n \choose k}\cos^{n-k} x \sin^k x. -$$ - -The number e is often defined by the formula -$$ -e = \lim_{n\to\infty} \left(1 + \frac{1}{n}\right)^n. -$$ - -Applying the binomial theorem to this expression yields the usual infinite series for e. In particular: -$$ -\left(1 + \frac{1}{n}\right)^n = 1 + {n \choose 1}\frac{1}{n} + {n \choose 2}\frac{1}{n^2} + {n \choose 3}\frac{1}{n^3} + \cdots + {n \choose n}\frac{1}{n^n}. -$$ - -The kth term of this sum is -$$ -{n \choose k}\frac{1}{n^k} = \frac{1}{k!}\cdot\frac{n(n-1)(n-2)\cdots (n-k+1)}{n^k} -$$ - -As n → ∞, the rational expression on the right approaches 1, and therefore -$$ -\lim_{n\to\infty} {n \choose k}\frac{1}{n^k} = \frac{1}{k!}. -$$ - -This indicates that e can be written as a series: -$$ -e=\sum_{k=0}^\infty\frac{1}{k!}=\frac{1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \cdots. -$$ - -Indeed, since each term of the binomial expansion is an increasing function of n, it follows from the monotone convergence theorem for series that the sum of this infinite series is equal to e. - -The binomial theorem is closely related to the probability mass function of the negative binomial distribution. The probability of a (countable) collection of independent Bernoulli trials $\{X_t\}_{t\in S}$ with probability of success $p\in [0,1]$ all not happening is -$$ - P\left(\bigcap_{t\in S} X_t^C\right) = (1-p)^{|S|} = \sum_{n=0}^{|S|} {|S| \choose n}(-p)^n . -$$ - -The binomial theorem is valid more generally for two elements x and y in a ring, or even a semiring, provided that xy = yx. For example, it holds for two n × n matrices, provided that those matrices commute; this is useful in computing powers of a matrix. - -The binomial theorem can be stated by saying that the polynomial sequence {1, x, x^2, x^3, ...} is of binomial type. - -* The binomial theorem is mentioned in the Major-General's Song in the comic opera The Pirates of Penzance. - -* Professor Moriarty is described by Sherlock Holmes as having written a treatise on the binomial theorem. - -* The Portuguese poet Fernando Pessoa, using the heteronym Álvaro de Campos, wrote that "Newton's Binomial is as beautiful as the Venus de Milo. The truth is that few people notice it." - -* In the 2014 film The Imitation Game, Alan Turing makes reference to Isaac Newton's work on the binomial theorem during his first meeting with Commander Denniston at Bletchley Park.
diff --git a/wiki/wikipedia/782.txt b/wiki/wikipedia/782.txt deleted file mode 100644 index 9bf68dc8f24b5b56c3dc7c51b54be55c47f3d267..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/782.txt +++ /dev/null @@ -1,358 +0,0 @@ -This is a list of rules of inference, logical laws that relate to mathematical formulae. - -Rules of inference are syntactical transform rules which one can use to infer a conclusion from a premise to create an argument. A set of rules can be used to infer any valid conclusion if it is complete, while never inferring an invalid conclusion, if it is sound. A sound and complete set of rules need not include every rule in the following list, as many of the rules are redundant, and can be proven with the other rules. - -Discharge rules permit inference from a subderivation based on a temporary assumption. Below, the notation -$$ -\varphi \vdash \psi -$$ - -indicates such a subderivation from the temporary assumption $\varphi$ to $\psi$. - -Sentential calculus is also known as propositional calculus. - -;Reductio ad absurdum (or Negation Introduction): -$$ -\varphi \vdash \psi -$$ -$$ -\underline{\varphi \vdash \lnot \psi} -$$ -$$ -\lnot \varphi -$$ - -;Reductio ad absurdum (related to the law of excluded middle): -$$ -\lnot \varphi \vdash \psi -$$ -$$ -\underline{\lnot \varphi \vdash \lnot \psi} -$$ -$$ -\varphi -$$ - -;Ex contradictione quodlibet: -$$ -\varphi -$$ -$$ -\underline{\lnot \varphi} -$$ -$$ -\psi -$$ - -;Double negation elimination: -$$ -\underline{\lnot \lnot \varphi} -$$ -$$ - \varphi -$$ - -;Double negation introduction: -$$ -\underline{\varphi \quad \quad} -$$ -$$ - \lnot \lnot \varphi -$$ - -;Deduction theorem (or Conditional Introduction): -$$ -\underline{\varphi \vdash \psi} -$$ -$$ -\varphi \rightarrow \psi -$$ - -;Modus ponens (or Conditional Elimination): -$$ -\varphi \rightarrow \psi -$$ -$$ -\underline{\varphi \quad \quad \quad} -$$ -$$ -\psi -$$ - -;Modus tollens: -$$ -\varphi \rightarrow \psi -$$ -$$ -\underline{\lnot \psi \quad \quad \quad} -$$ -$$ -\lnot \varphi -$$ - -;Adjunction (or Conjunction Introduction): -$$ -\varphi -$$ -$$ -\underline{\psi \quad \quad \ \ } -$$ -$$ -\varphi \land \psi -$$ - -;Simplification (or Conjunction Elimination): -$$ -\underline{\varphi \land \psi} -$$ -$$ -\varphi -$$ -$$ -\underline{\varphi \land \psi} -$$ -$$ -\psi -$$ - -;Addition (or Disjunction Introduction): -$$ -\underline{\varphi \quad \quad \ \ } -$$ -$$ -\varphi \lor \psi -$$ -$$ -\underline{\psi \quad \quad \ \ } -$$ -$$ -\varphi \lor \psi -$$ - -;Case analysis (or Proof by Cases or Argument by Cases or Disjunction elimination) -$$ -\varphi \rightarrow \chi -$$ -$$ -\psi \rightarrow \chi -$$ -$$ -\underline{\varphi \lor \psi} -$$ -$$ -\chi -$$ - -;Disjunctive syllogism: -$$ -\varphi \lor \psi -$$ -$$ -\underline{\lnot \varphi \quad \quad} -$$ -$$ -\psi -$$ -$$ -\varphi \lor \psi -$$ -$$ -\underline{\lnot \psi \quad \quad} -$$ -$$ -\varphi -$$ - -;Constructive dilemma -$$ -\varphi \rightarrow \chi -$$ -$$ -\psi \rightarrow \xi -$$ -$$ -\underline{\varphi \lor \psi} -$$ -$$ -\chi \lor \xi -$$ - -;Biconditional introduction: -$$ -\varphi \rightarrow \psi -$$ -$$ -\underline{\psi \rightarrow \varphi} -$$ -$$ -\varphi \leftrightarrow \psi -$$ - -;Biconditional elimination: -$$ -\varphi \leftrightarrow \psi -$$ -$$ -\underline{\varphi \quad \quad} -$$ -$$ -\psi -$$ -$$ -\varphi \leftrightarrow \psi -$$ -$$ -\underline{\psi \quad \quad} -$$ -$$ -\varphi -$$ -$$ -\varphi \leftrightarrow \psi -$$ -$$ -\underline{\lnot \varphi \quad \quad} -$$ -$$ 
-\lnot \psi -$$ -$$ -\varphi \leftrightarrow \psi -$$ -$$ -\underline{\lnot \psi \quad \quad} -$$ -$$ -\lnot \varphi -$$ -$$ -\varphi \leftrightarrow \psi -$$ -$$ -\underline{\psi \lor \varphi} -$$ -$$ -\psi \land \varphi -$$ -$$ -\varphi \leftrightarrow \psi -$$ -$$ -\underline{\lnot \psi \lor \lnot \varphi} -$$ -$$ -\lnot \psi \land \lnot \varphi -$$ - -In the following rules, $\varphi(\beta / \alpha)$ is exactly like $\varphi$ except for having the term $\beta$ wherever $\varphi$ has the free variable $\alpha$. - -;Universal Generalization (or Universal Introduction): -$$ -\underline{\varphi{(\beta / \alpha)}} -$$ -$$ -\forall \alpha \varphi -$$ - -Restriction 1: $\beta$ is a variable which does not occur in $\varphi$. - -
    - -Restriction 2: $\beta$ is not mentioned in any hypothesis or undischarged assumptions. - -;Universal Instantiation (or Universal Elimination): -$$ - \forall \alpha \varphi -$$ -$$ -\overline{\varphi{(\beta / \alpha)}} -$$ - -Restriction: No free occurrence of $\alpha$ in $\varphi$ falls within the scope of a quantifier quantifying a variable occurring in $\beta$. - -;Existential Generalization (or Existential Introduction): -$$ -\underline{\varphi(\beta / \alpha)} -$$ -$$ -\exists \alpha \varphi -$$ - -Restriction: No free occurrence of $\alpha$ in $\varphi$ falls within the scope of a quantifier quantifying a variable occurring in $\beta$. - -;Existential Instantiation (or Existential Elimination): -$$ -\exists \alpha \varphi -$$ -$$ -\underline{\varphi(\beta / \alpha) \vdash \psi} -$$ -$$ -\psi -$$ - -Restriction 1: $\beta$ is a variable which does not occur in $\varphi$. - -
    - -Restriction 2: There is no occurrence, free or bound, of $\beta$ in $\psi$. - -
    - -Restriction 3: $\beta$ is not mentioned in any hypothesis or undischarged assumptions. - -The following are special cases of universal generalization and existential elimination; these occur in substructural logics, such as linear logic. - -;Rule of weakening (or monotonicity of entailment) (aka no-cloning theorem) -$$ -\alpha\vdash\beta -$$ -$$ -\overline{\alpha,\alpha\vdash\beta} -$$ - -;Rule of contraction (or idempotency of entailment) (aka no-deleting theorem) -$$ -\underline{\alpha,\alpha,\gamma\vdash\beta} -$$ -$$ -\alpha,\gamma\vdash\beta -$$ - -The rules above can be summed up in the following table. The "Tautology" column shows how to interpret the notation of a given rule. - -All rules use the basic logic operators. A complete table of "logic operators" is shown by a truth table, giving definitions of all the possible (16) truth functions of 2 boolean variables (p, q): - -where T = true and F = false, and the columns are the logical operators: 0, false, Contradiction; 1, NOR, Logical NOR (Peirce's arrow); 2, Converse nonimplication; 3, ¬p, Negation; 4, Material nonimplication; 5, ¬q, Negation; 6, XOR, Exclusive disjunction; 7, NAND, Logical NAND (Sheffer stroke); 8, AND, Logical conjunction; 9, XNOR, If and only if, Logical biconditional; 10, q, Projection function; 11, if/then, Logical implication; 12, p, Projection function; 13, then/if, Converse implication; 14, OR, Logical disjunction; 15, true, Tautology. - -Each logic operator can be used in an assertion about variables and operations, showing a basic rule of inference. Examples: - -* The column-14 operator (OR) shows the Addition rule: when p=T (the hypothesis selects the first two lines of the table), we see (at column-14) that p∨q=T. - -*: We can also see that, with the same premise, other conclusions are valid: columns 12, 14 and 15 are T. - -* The column-8 operator (AND) shows the Simplification rule: when p∧q=T (first line of the table), we see that p=T. - -*: With this premise, we also conclude that q=T, p∨q=T, etc., as shown by columns 9-15. - -* The column-11 operator (IF/THEN) shows the Modus ponens rule: when p→q=T and p=T, only one line of the truth table (the first) satisfies these two conditions. On this line, q is also true. Therefore, whenever p → q is true and p is true, q must also be true. - -Machines and well-trained people use this look-at-table approach to do basic inferences, and to check if other inferences (for the same premises) can be obtained. - -Consider the following assumptions: "If it rains today, then we will not go on a canoe today. If we do not go on a canoe trip today, then we will go on a canoe trip tomorrow. Therefore (the mathematical symbol for "therefore" is $\therefore$), if it rains today, we will go on a canoe trip tomorrow". - -To make use of the rules of inference in the above table we let $p$ be the proposition "It rains today", $q$ be "We will not go on a canoe today" and let $r$ be "We will go on a canoe trip tomorrow". Then this argument is of the form: - -\begin{align} - -p \rightarrow q\\ - -q \rightarrow r\\ - -\therefore \overline{p \rightarrow r} \\ - -\end{align} - -Consider a more complex set of assumptions: "It is not sunny today and it is colder than yesterday". "We will go swimming only if it is sunny", "If we do not go swimming, then we will have a barbecue", and "If we will have a barbecue, then we will be home by sunset" lead to the conclusion "We will be home by sunset."
- -Proof by rules of inference: Let $p$ be the proposition "It is sunny today", $q$ the proposition "It is colder than yesterday", $r$ the proposition "We will go swimming", $s$ the proposition "We will have a barbecue", and $t$ the proposition "We will be home by sunset". Then the hypotheses become $\neg p \wedge q, r \rightarrow p, \neg r \rightarrow s$ and $s \rightarrow t$. Using our intuition we conjecture that the conclusion might be $t$. Using the Rules of Inference table we can prove the conjecture easily: diff --git a/wiki/wikipedia/783.txt b/wiki/wikipedia/783.txt deleted file mode 100644 index 04847e79a247e11c8b2724d10b622185f3744380..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/783.txt +++ /dev/null @@ -1,69 +0,0 @@ -Nurikabe (hiragana: ぬりかべ) is a binary determination puzzle named for Nurikabe, an invisible wall in Japanese folklore that blocks roads and delays foot travel. Nurikabe was apparently invented and named by Nikoli; other names (and attempts at localization) for the puzzle include Cell Structure and Islands in the Stream. - -The puzzle is played on a typically rectangular grid of cells, some of which contain numbers. Cells are initially of unknown color, but can only be black or white. Two same-color cells are considered "connected" if they are adjacent vertically or horizontally, but not diagonally. Connected white cells form "islands", while connected black cells form the "sea". - -The challenge is to paint each cell black or white, subject to the following rules: - -# Each numbered cell is an island cell, the number in it is the number of cells in that island. - -# Each island must contain exactly one numbered cell. - -# There must be only one sea, which is not allowed to contain "pools", i.e. 2×2 areas of black cells. - -Human solvers typically dot the non-numbered cells they've determined to be certain to belong to an island. - -Like most other pure-logic puzzles, a unique solution is expected, and a grid containing random numbers is highly unlikely to provide a uniquely solvable Nurikabe puzzle. - -Nurikabe was first developed by "renin (れーにん)," whose pen name is the Japanese pronunciation of "Lenin" and whose autonym can be read as such, in the 33rd issue of (Puzzle Communication) Nikoli at March 1991. - -It soon created a sensation, and has appeared in all issues of that publication from the 38th to the present. - -As of 2005, seven books consisting entirely of Nurikabe puzzles have been published by Nikoli. - -(This paragraph mainly depends on "Nikoli complete works of interesting-puzzles(ニコリ オモロパズル大全集)." https://web.archive.org/web/20060707011243/http://www.nikoli.co.jp/storage/addition/omopadaizen/) - -No blind guessing should be required to solve a Nurikabe puzzle. Rather, a series of simple procedures and rules can be developed and followed, assuming the solver is sufficiently observant to find where to apply them. - -The greatest mistake made by beginning solvers is to concentrate solely on determining black or white and not the other; most Nurikabe puzzles require going back and forth. Marking white cells may force other cells to be black lest a section of black be isolated, and vice versa. (Those familiar with Go can think of undetermined cells next to various regions as "liberties" and apply "atari" logic to determine how they must grow.) - -* Since two islands may only touch at corners, cells between two partial islands (numbers and adjacent white cells that don't total their numbers yet) must be black. 
This is often a way to start a Nurikabe puzzle, by marking cells adjacent to two or more numbers as black. - -* Once an island is "complete"-that is, it has all the white cells its number requires-all cells that share a side with it must be black. Obviously, any cells marked with '1' at the outset are complete islands unto themselves, and can be isolated with black at the beginning. - -* Whenever three black cells form an "elbow"-an L-shape-the cell in the bend (diagonally in from the corner of the L) must be white. (The alternative is a "pool", for lack of a better term.) - -* All black cells must eventually be connected. If there is a black region with only one possible way to connect to the rest of the board, the sole connecting pathway must be black. - -** Corollary: there cannot be a continuous path, using either vertical, horizontal or diagonal steps, of white cells from one cell lying on the edge of the board to a different cell like that, that encloses some black cells inside, because otherwise, the black cells won't be connected. - -* All white cells must eventually be part of exactly one island. If there is a white region that does not contain a number, and there is only one possible way for it to connect to a numbered white region, the sole connecting pathway must be white. - -* Some puzzles will require the location of "unreachables"-cells that cannot be connected to any number, being either too far away from all of them or blocked by other numbers. Such cells must be black. Often, these cells will have only one route of connection to other black cells or will form an elbow whose required white cell (see previous bullet) can only reach one number, allowing further progress. - -* If there is a square consisting of two black cells and two unknown cells, at least one of the two unknown cells must remain white according to the rules. Thus, if one of those two unknown cells (call it 'A') can only be connected to a numbered square by way of the other one (call it 'B'), then B must necessarily be white (and A may or may not be white). - -* If an island of size N already has N-1 white cells identified, and there are only two remaining cells to choose from, and those two cells touch at their corners, then the cell between those two that is on the far side of the island must be black. - -* If a square must be white and only two islands can connect to it and have no unidentified cells left after connecting, then if the islands connect at a 90 degree angle (ex: One island can connect to the top side and the other to the right side) the cell inside the angle (The one touching the top-left corner of the white square in the previous example) must be black to avoid connecting the 2 islands. - -* Undetermined cells adjacent to a straight row (or a straight column) of black cells can be tested for being black, because if they are black it will form two elbows, and there will be two adjacent white cells which need to be reachable from the islands. If they can not be fulfilled within the constraints, it means the cell that was probed for blackness must be white. - -It is NP-complete to solve Nurikabe, even when the involved numbers are 1 and 2 only. - -Further, consider these two rules of Nurikabe: - -# Black cells form a connected area - -# Black cells cannot form 2 × 2 squares, - -Either one can be ignored, giving a total of three variants. As it turns out, they are all NP-complete. 
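Membership in NP is the easy half of these statements: a candidate coloring can be verified against the three rules in polynomial time. The following minimal Python sketch of such a verifier uses a grid encoding of our own devising (a dict of clue positions and a set of black cells).

```python
def check_nurikabe(rows, cols, clues, black):
    """Check a candidate Nurikabe coloring against the three rules.

    `clues` maps (row, col) -> island size; `black` is the set of
    black cells; every other cell is white.
    """
    cells = {(r, c) for r in range(rows) for c in range(cols)}
    white = cells - black

    def neighbors(cell):
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (r + dr, c + dc) in cells:
                yield (r + dr, c + dc)

    def component(start, region):
        """Flood-fill the same-color component containing `start`."""
        seen, stack = {start}, [start]
        while stack:
            for nb in neighbors(stack.pop()):
                if nb in region and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        return seen

    # Rule 1 (in part): numbered cells must be island (white) cells.
    if any(cell in black for cell in clues):
        return False
    # Rule 3: exactly one connected sea ...
    if black and component(next(iter(black)), black) != black:
        return False
    # ... containing no 2x2 pool.
    for r in range(rows - 1):
        for c in range(cols - 1):
            if {(r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)} <= black:
                return False
    # Rules 1 and 2: every island contains exactly one clue,
    # and its size matches that clue.
    todo = set(white)
    while todo:
        island = component(next(iter(todo)), white)
        todo -= island
        sizes = [clues[cell] for cell in island if cell in clues]
        if sizes != [len(island)]:
            return False
    return True
```

The hardness half of the statements above lies, of course, in finding such a coloring, not in checking it.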
- -The binary determination puzzles LITS and Mochikoro, also published by Nikoli, are similar to Nurikabe and employ similar solution methods. The binary determination puzzle Atsumari is similar to Nurikabe but based upon a hexagonal tiling rather than a square tiling. - -Mochikoro is a variant of the Nurikabe puzzle: - -# Each numbered cell belongs to a white area, the number indicates how many cells belong to the white area. Some white areas may not include a numbered cell. - -# All white areas must be diagonally connected. - -# The black cell must not cover an area of 2x2 cells or larger. diff --git a/wiki/wikipedia/784.txt b/wiki/wikipedia/784.txt deleted file mode 100644 index 88e05a3ae6f67f906ede3d7a794545087a15b9c4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/784.txt +++ /dev/null @@ -1,11 +0,0 @@ -In logic, Import-Export is a deductive argument form which states that $ (P \rightarrow ( Q \rightarrow R )) \leftrightarrow ((P \land Q) \rightarrow R)$. In natural language terms, the principle means that the following English sentences are logically equivalent. - -# If Mary isn't at home, then if Sally isn't at home, then the house is empty. - -# If Mary isn't home and Sally isn't home, then the house is empty. - -Import-Export holds in classical logic, where the conditional operator $\rightarrow$ is taken as material implication. However, there are other logics where it does not hold and its status as a true principle of logic is a matter of debate. Controversy over the principle arises from the fact that any conditional operator that satisfies it will collapse to material implication when combined with certain other principles. This conclusion would be problematic given the paradoxes of material implication, which are commonly taken to show that natural language conditionals are not material implication. However, other approaches reject Import-Export as a general principle, motivated by cases such as the following, uttered in a context where it is most likely that the match will be lit by throwing it into a campfire, but where it is possible that it could be lit by striking it. In this context, the first sentence is intuitively true but the second is intuitively false. - -# If you strike the match and it lights, it will light. - -# If the match lights, it will light if you strike it. diff --git a/wiki/wikipedia/785.txt b/wiki/wikipedia/785.txt deleted file mode 100644 index f7fd280f2658c81663ed4ee05080eb7da65dd346..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/785.txt +++ /dev/null @@ -1,3 +0,0 @@ -SPASS is an automated theorem prover for first-order logic with equality developed at the Max Planck Institute for Computer Science and using the superposition calculus. The name originally stood for Synergetic Prover Augmenting Superposition with Sorts. The theorem proving system is released under the FreeBSD license. - -An extension of SPASS called SPASS-XDB added support for on-the-fly retrieval of positive unit axioms from external sources. SPASS-XDB can thus incorporate facts coming from relational databases, web services, or linked data servers. Support for arithmetic using Mathematica was also added. 
diff --git a/wiki/wikipedia/786.txt b/wiki/wikipedia/786.txt deleted file mode 100644 index 905c59e14bae42d66e3c403a258d7c85ea2fe7a9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/786.txt +++ /dev/null @@ -1,25 +0,0 @@ -In algebra, the Amitsur–Levitzki theorem states that the algebra of n by n matrices over a commutative ring satisfies a certain identity of degree 2n. It was proved by Shimshon Amitsur and Jacob Levitzki in 1950. In particular, matrix rings are polynomial identity rings such that the smallest identity they satisfy has degree exactly 2n. - -The standard polynomial of degree n is -$$ -S_n(x_1,\ldots,x_n) = \sum_{\sigma\in S_{n}}\text{sgn}(\sigma)x_{\sigma(1)}\cdots x_{\sigma(n)} -$$ - -in non-commutative variables x1,...,xn, where the sum is taken over all n! elements of the symmetric group Sn. - -The Amitsur–Levitzki theorem states that for n by n matrices A1,...,A2n whose entries are taken from a commutative ring, -$$ - S_{2n}(A_1,\ldots,A_{2n}) = 0 \ . -$$ - -Amitsur and Levitzki gave the first proof. - -Kostant deduced the Amitsur–Levitzki theorem from the Koszul–Samelson theorem about primitive cohomology of Lie algebras. - -Swan gave a simple combinatorial proof as follows. By linearity it is enough to prove the theorem when each matrix has only one nonzero entry, which is 1. In this case each matrix can be encoded as a directed edge of a graph with n vertices. So all matrices together give a graph on n vertices with 2n directed edges. The identity holds provided that for any two vertices A and B of the graph, the number of odd Eulerian paths from A to B is the same as the number of even ones. (Here a path is called odd or even depending on whether its edges taken in order give an odd or even permutation of the 2n edges.) Swan showed that this was the case provided the number of edges in the graph is at least 2n, thus proving the Amitsur–Levitzki theorem. - -Razmyslov gave a proof related to the Cayley–Hamilton theorem. - -Rosset gave a short proof using the exterior algebra of a vector space of dimension 2n. - -Procesi gave another proof, showing that the Amitsur–Levitzki theorem is the Cayley–Hamilton identity for the generic Grassmann matrix. diff --git a/wiki/wikipedia/787.txt b/wiki/wikipedia/787.txt deleted file mode 100644 index bb5a682dd379d8ac1871373147b8f37a01bacfb3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/787.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, the Weyl–von Neumann theorem is a result in operator theory due to Hermann Weyl and John von Neumann. It states that, after the addition of a compact operator (Weyl) or Hilbert–Schmidt operator (von Neumann) of arbitrarily small norm, a bounded self-adjoint operator or unitary operator on a Hilbert space is conjugate by a unitary operator to a diagonal operator. The results are subsumed in later generalizations for bounded normal operators due to David Berg (1971, compact perturbation) and Dan-Virgil Voiculescu (1979, Hilbert–Schmidt perturbation). The theorem and its generalizations were one of the starting points of operator K-homology, developed first by Lawrence G. Brown, Ronald Douglas and Peter Fillmore and, in greater generality, by Gennadi Kasparov. - -In 1958 Kuroda showed that the Weyl–von Neumann theorem is also true if the Hilbert–Schmidt class is replaced by any Schatten class Sp with p ≠ 1. For S1, the trace-class operators, the situation is quite different.
The Kato–Rosenblum theorem, proved in 1957 using scattering theory, states that if two bounded self-adjoint operators differ by a trace-class operator, then their absolutely continuous parts are unitarily equivalent. In particular, if a self-adjoint operator has absolutely continuous spectrum, no perturbation of it by a trace-class operator can be unitarily equivalent to a diagonal operator. diff --git a/wiki/wikipedia/788.txt b/wiki/wikipedia/788.txt deleted file mode 100644 index 5eaf7ae747933b4e66cc5e9940fe76d9be9a2c97..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/788.txt +++ /dev/null @@ -1,49 +0,0 @@ -The right triangle altitude theorem or geometric mean theorem is a result in elementary geometry that describes a relation between the altitude on the hypotenuse in a right triangle and the two line segments it creates on the hypotenuse. It states that the geometric mean of the two segments equals the altitude. - -If h denotes the altitude in a right triangle and p and q the segments on the hypotenuse, then the theorem can be stated as -$$ -h=\sqrt{pq} \quad\text{or equivalently}\quad h^2=pq. -$$ - -Another application of the theorem provides a geometrical proof of the AM–GM inequality in the case of two numbers. For the numbers p and q one constructs a half circle with diameter p+q. Now the altitude represents the geometric mean and the radius the arithmetic mean of the two numbers. Since the altitude is always smaller than or equal to the radius, this yields the inequality. - -The theorem can also be thought of as a special case of the intersecting chords theorem for a circle, since the converse of Thales' theorem ensures that the hypotenuse of the right angled triangle is the diameter of its circumcircle. - -The converse statement is true as well. Any triangle, in which the altitude equals the geometric mean of the two line segments created by it, is a right triangle. - -The theorem is usually attributed to Euclid (ca. 360–280 BC), who stated it as a corollary to proposition 8 in book VI of his Elements. In proposition 14 of book II Euclid gives a method for squaring a rectangle, which essentially matches the method given here. Euclid, however, provides a different, slightly more complicated proof for the correctness of the construction rather than relying on the geometric mean theorem. - -Proof of theorem: - -The triangles $\triangle ADC $ and $\triangle BCD$ are similar, since: - -* consider triangles $\triangle ABC, \triangle ACD$, here we have $\angle ACB=\angle ADC=90^\circ$ and $\angle BAC=\angle CAD$, therefore by the AA postulate $\triangle ABC \sim \triangle ACD $ - -* further, consider triangles $\triangle ABC, \triangle BCD$, here we have $\angle ACB=\angle BDC= 90^\circ$ and $\angle ABC=\angle CBD$, therefore by the AA postulate $\triangle ABC \sim \triangle BCD$ - -Therefore, both triangles $\triangle ACD$ and $\triangle BCD$ are similar to $\triangle ABC$ and themselves, i.e. $\triangle ACD \sim \triangle ABC \sim \triangle BCD$. - -Because of the similarity we get the following equality of ratios, and its algebraic rearrangement yields the theorem: -$$ - \frac{h}{p}=\frac{q}{h}\ \Leftrightarrow\ h^2=pq\ \Leftrightarrow\ h=\sqrt{pq}\qquad (h,p,q> 0) -$$ - -Proof of converse: - -For the converse we have a triangle $\triangle ABC $ in which $h^2=pq$ holds and need to show that the angle at C is a right angle. Now because of $h^2=pq$ we also have $\tfrac{h}{p}=\tfrac{q}{h} $.
Together with $\angle ADC=\angle CDB $ the triangles $\triangle ADC $ and $\triangle BDC $ have an angle of equal size and have corresponding pairs of legs with the same ratio. This means the triangles are similar, which yields: -$$ -\angle ACB=\angle ACD +\angle DCB=\angle ACD+(90^\circ-\angle DBC)=\angle ACD+(90^\circ-\angle ACD)=90^\circ -$$ - -In the setting of the geometric mean theorem there are three right triangles $\triangle ABC $, $\triangle ADC $ and $\triangle DBC $, in which the Pythagorean theorem yields: -$$ -h^2=a^2-q^2 -$$, $h^2=b^2-p^2$ and $c^2=a^2+b^2$ - -Adding the first two equations and then using the third leads to: -$$ -2h^2=a^2+b^2-p^2-q^2=c^2-p^2-q^2=(p+q)^2-p^2-q^2=2pq -$$. - -A division by two finally yields the formula of the geometric mean theorem. - -Dissecting the right triangle along its altitude h yields two similar triangles, which can be augmented and arranged in two alternative ways into a larger right triangle with perpendicular sides of lengths p+h and q+h. One such arrangement requires a square of area $h^2$ to complete it, the other a rectangle of area $pq$. Since both arrangements yield the same triangle, the areas of the square and the rectangle must be identical. - -The square of the altitude can be transformed into a rectangle of equal area with sides p and q with the help of three shear mappings (shear mappings preserve the area): diff --git a/wiki/wikipedia/789.txt b/wiki/wikipedia/789.txt deleted file mode 100644 index f5820889bd9e5fed8c3c17a957b170caf56cc5ad..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/789.txt +++ /dev/null @@ -1,94 +0,0 @@ -In computer science and graph theory, Karger's algorithm is a randomized algorithm to compute a minimum cut of a connected graph. It was invented by David Karger and first published in 1993. - -The idea of the algorithm is based on the concept of contraction of an edge $(u, v)$ in an undirected graph $G = (V, E)$. Informally speaking, the contraction of an edge merges the nodes $u$ and $v$ into one, reducing the total number of nodes of the graph by one. All other edges connecting either $u$ or $v$ are "reattached" to the merged node, effectively producing a multigraph. Karger's basic algorithm iteratively contracts randomly chosen edges until only two nodes remain; those nodes represent a cut in the original graph. By iterating this basic algorithm a sufficient number of times, a minimum cut can be found with high probability. - -A cut $(S,T)$ in an undirected graph $G = (V, E)$ is a partition of the vertices $V$ into two non-empty, disjoint sets $S\cup T= V$. The cutset of a cut consists of the edges $\{ uv \in E \colon u\in S, v\in T\}$ between the two parts. The size (or weight) of a cut in an unweighted graph is the cardinality of the cutset, i.e., the number of edges between the two parts, -$$ -w(S,T) = |\{ uv \in E \colon u\in S, v\in T\}|. -$$ - -There are $2^{|V|}$ ways of choosing for each vertex whether it belongs to $S$ or to $T$, but two of these choices make $S$ or $T$ empty and do not give rise to cuts.
Among the remaining choices, swapping the roles of $S$ and $T$ does not change the cut, so each cut is counted twice; therefore, there are $2^{|V|-1}-1$ distinct cuts. If the minimum cut $C$ of an $n$-vertex graph has size $k$, then every vertex must have degree at least $k$ (otherwise the edges incident to a vertex of smaller degree would form a smaller cut), so the graph has at least $nk/2$ edges, and the probability that a uniformly chosen edge belongs to $C$ is at most $\frac{k}{nk/2} = \frac{2}{n}.$ - -The probability $p_n$ that the contraction algorithm on an $n$-vertex graph avoids $C$ satisfies the recurrence $p_n \geq \left( 1 - \frac{2}{n} \right) p_{n-1}$, with $p_2 = 1$, which can be expanded as - - - -p_n \geq \prod_{i=0}^{n-3} \Bigl(1-\frac{2}{n-i}\Bigr) = - -\prod_{i=0}^{n-3} {\frac{n-i-2}{n-i}} - -= \frac{n-2}{n}\cdot \frac{n-3}{n-1} \cdot \frac{n-4}{n-2}\cdots \frac{3}{5}\cdot \frac{2}{4} \cdot \frac{1}{3} - -= \binom{n}{2}^{-1}. - - - -By repeating the contraction algorithm $ T = \binom{n}{2}\ln n $ times with independent random choices and returning the smallest cut, the probability of not finding a minimum cut is - - - -\left[1-\binom{n}{2}^{-1}\right]^T - -\leq \frac{1}{e^{\ln n}} = \frac{1}{n}. - - - -The total running time for $T$ repetitions for a graph with $n$ vertices and $m$ edges is $ O(Tm) = O(n^2 m \log n)$. - -An extension of Karger’s algorithm due to David Karger and Clifford Stein achieves an order of magnitude improvement. - -The basic idea is to perform the contraction procedure until the graph reaches $t$ vertices. - -procedure contract($G=(V,E)$, $t$): - -while $|V| > t$ - -choose $e\in E$ uniformly at random -$$ -G \leftarrow G/e -$$ - -return $G$ - -The probability $p_{n,t}$ that this contraction procedure avoids a specific cut $C$ in an $n$-vertex graph is - - - -p_{n,t} \ge \prod_{i=0}^{n-t-1} \Bigl(1-\frac{2}{n-i}\Bigr) = \binom{t}{2}\Bigg/\binom{n}{2}. - - - -This expression is approximately $t^2/n^2$ and becomes less than $\frac{1}{2}$ around $ t= n/\sqrt 2 $. In particular, the probability that an edge from $C$ is contracted grows towards the end. This motivates the idea of switching to a slower algorithm after a certain number of contraction steps. - -procedure fastmincut($G= (V,E)$): - -if $|V| \le 6$: - -return mincut($G$) - -else: -$$ -t\leftarrow \lceil 1 + |V|/\sqrt 2\rceil -$$ -$$ -G_1 \leftarrow -$$ contract($G$, $t$) -$$ -G_2 \leftarrow -$$ contract($G$, $t$) - -return min {fastmincut($G_1$), fastmincut($G_2$)} - -The probability $P(n)$ that the algorithm finds a specific cutset $C$ is given by the recurrence relation -$$ -P(n)= 1-\left(1-\frac{1}{2} P\left(\Bigl\lceil 1 + \frac{n}{\sqrt{2}}\Bigr\rceil \right)\right)^2 -$$ - -with solution $P(n) = \Omega\left(\frac{1}{\log n}\right)$. The running time of fastmincut satisfies -$$ -T(n)= 2T\left(\Bigl\lceil 1+\frac{n}{\sqrt{2}}\Bigr\rceil\right)+O(n^2) -$$ - -with solution $T(n)=O(n^2\log n)$. To achieve error probability $O(1/n)$, the algorithm can be repeated $O(\log n/P(n))$ times, for an overall running time of $T(n) \cdot \frac{\log n}{P(n)} = O(n^2\log ^3 n)$. This is an order of magnitude improvement over Karger’s original algorithm. - -To determine a min-cut, one has to touch every edge in the graph at least once, which is $\Theta(n^2)$ time in a dense graph. The Karger–Stein min-cut algorithm runs in $O(n^2\ln ^{O(1)} n)$ time, which is very close to that.
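For concreteness, here is a short Python sketch of one run of the basic contraction algorithm described above (repeating it $\binom{n}{2}\ln n$ times and keeping the smallest cut gives the stated guarantee). The union-find representation and the function name are illustrative choices, not from the article:

```
import random

def contract_once(n, edges):
    """One run of Karger's contraction: contract random edges
    until two super-vertices remain; return the resulting cut size."""
    parent = list(range(n))

    def find(x):  # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    remaining = n
    while remaining > 2:
        u, v = random.choice(edges)
        ru, rv = find(u), find(v)
        if ru != rv:          # contracting a self-loop does nothing
            parent[ru] = rv
            remaining -= 1
    # Edges whose endpoints lie in different super-vertices form the cutset.
    return sum(1 for u, v in edges if find(u) != find(v))

# Example: a 4-cycle has minimum cut 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
best = min(contract_once(4, edges) for _ in range(100))
print(best)  # 2 with high probability
```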
diff --git a/wiki/wikipedia/79.txt b/wiki/wikipedia/79.txt deleted file mode 100644 index a6c81868a65bbdd0f5eb3f5737fbd570da596fa9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/79.txt +++ /dev/null @@ -1 +0,0 @@ -#redirectFortunate number diff --git a/wiki/wikipedia/790.txt b/wiki/wikipedia/790.txt deleted file mode 100644 index 95c22613be1f60575f9f60dd422095eb7fa51e7a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/790.txt +++ /dev/null @@ -1,155 +0,0 @@ -In mathematics, specifically abstract algebra, the isomorphism theorems (also known as Noether's isomorphism theorems) are theorems that describe the relationship between quotients, homomorphisms, and subobjects. Versions of the theorems exist for groups, rings, vector spaces, modules, Lie algebras, and various other algebraic structures. In universal algebra, the isomorphism theorems can be generalized to the context of algebras and congruences. - -The isomorphism theorems were formulated in some generality for homomorphisms of modules by Emmy Noether in her paper Abstrakter Aufbau der Idealtheorie in algebraischen Zahl- und Funktionenkörpern, which was published in 1927 in Mathematische Annalen. Less general versions of these theorems can be found in work of Richard Dedekind and previous papers by Noether. - -Three years later, B.L. van der Waerden published his influential Moderne Algebra, the first abstract algebra textbook that took the groups-rings-fields approach to the subject. Van der Waerden credited lectures by Noether on group theory and Emil Artin on algebra, as well as a seminar conducted by Artin, Wilhelm Blaschke, Otto Schreier, and van der Waerden himself on ideals as the main references. The three isomorphism theorems (called the homomorphism theorem and the two laws of isomorphism when applied to groups) appear explicitly. - -We first present the isomorphism theorems for groups. - -Below we present four theorems, labelled A, B, C and D. They are often numbered as "First isomorphism theorem", "Second..." and so on; however, there is no universal agreement on the numbering. Here we give some examples of the group isomorphism theorems in the literature. Notice that these theorems have analogs for rings and modules. - -It is less common to include Theorem D, usually known as the lattice theorem or the correspondence theorem, as one of the isomorphism theorems, but when authors do, it is the last one. - -Let G and H be groups, and let f : G → H be a homomorphism. Then: - -# The kernel of f is a normal subgroup of G, - -# The image of f is a subgroup of H, and - -# The image of f is isomorphic to the quotient group G / ker(f). - -In particular, if f is surjective then H is isomorphic to G / ker(f). - -Let $G$ be a group. Let $S$ be a subgroup of $G$, and let $N$ be a normal subgroup of $G$. Then the following hold: - -# The product $SN$ is a subgroup of $G$, - -# The intersection $S \cap N$ is a normal subgroup of $S$, and - -# The quotient groups $(SN)/N$ and $S/(S\cap N)$ are isomorphic. - -Technically, it is not necessary for $N$ to be a normal subgroup, as long as $S$ is a subgroup of the normalizer of $N$ in $G$. In this case, the intersection $S \cap N$ is not a normal subgroup of $G$, but it is still a normal subgroup of $S$. - -This theorem is sometimes called the diamond isomorphism theorem, or the parallelogram theorem.
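As a concrete sanity check of the statement above, the following Python sketch (illustrative, not from the article) verifies $(SN)/N \cong S/(S\cap N)$ in the additive group $\mathbb{Z}/24\mathbb{Z}$ with $S = \langle 4 \rangle$ and $N = \langle 6 \rangle$; both quotients come out cyclic of order 3:

```
n = 24
S = {(4 * k) % n for k in range(n)}   # subgroup generated by 4, order 6
N = {(6 * k) % n for k in range(n)}   # subgroup generated by 6, order 4

SN = {(s + t) % n for s in S for t in N}   # S + N in additive notation, order 12
I = S & N                                   # the intersection, order 2

# Every subgroup of an abelian group is normal, so both quotients exist;
# compare their orders via Lagrange's theorem.
order_SN_mod_N = len(SN) // len(N)
order_S_mod_I = len(S) // len(I)
print(order_SN_mod_N, order_S_mod_I)  # 3 3

# Both quotients are cyclic of order 3 (generated by the coset of 4),
# so (SN)/N and S/(S ∩ N) are isomorphic, as the theorem predicts.
assert order_SN_mod_N == order_S_mod_I == 3
```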
- -An application of the second isomorphism theorem identifies projective linear groups: for example, for the group on the complex projective line, set $G = \operatorname{GL}_2(\mathbb{C})$, the group of invertible 2 × 2 complex matrices, $S = \operatorname{SL}_2(\mathbb{C})$, the subgroup of determinant 1 matrices, and $N$ the normal subgroup of scalar matrices $\mathbb{C}^{\times}\!I = \left\{\left( \begin{smallmatrix} a & 0 \\ 0 & a \end{smallmatrix} \right) : a \in \mathbb{C}^{\times} \right\}$. Then $S \cap N = \{\pm I\}$, where $I$ is the identity matrix, and $SN = \operatorname{GL}_2(\mathbb{C})$, so the second isomorphism theorem states that: -$$ -\operatorname{PGL}_2(\mathbb{C}) := \operatorname{GL}_2(\mathbb{C})/(\mathbb{C}^{\times}\!I) \cong \operatorname{SL}_2(\mathbb{C})/\{\pm I\} =: \operatorname{PSL}_2(\mathbb{C}) -$$ - -Let $G$ be a group, and $N$ a normal subgroup of $G$. - -Then - -# If $K$ is a subgroup of $G$ such that $N \subseteq K \subseteq G$, then $G/N$ has a subgroup isomorphic to $K/N$. - -# Every subgroup of $G/N$ is of the form $K/N$ for some subgroup $K$ of $G$ such that $N \subseteq K \subseteq G$. - -# If $K$ is a normal subgroup of $G$ such that $N \subseteq K \subseteq G$, then $G/N$ has a normal subgroup isomorphic to $K/N$. - -# Every normal subgroup of $G/N$ is of the form $K/N$ for some normal subgroup $K$ of $G$ such that $N \subseteq K \subseteq G$. - -# If $K$ is a normal subgroup of $G$ such that $N \subseteq K \subseteq G$, then the quotient group $(G/N)/(K/N)$ is isomorphic to $G/K$. - -The correspondence theorem (also known as the lattice theorem) is sometimes called the third or fourth isomorphism theorem. - -The Zassenhaus lemma (also known as the butterfly lemma) is sometimes called the fourth isomorphism theorem. - -The first isomorphism theorem can be expressed in category theoretical language by saying that the category of groups is (normal epi, mono)-factorizable; in other words, the normal epimorphisms and the monomorphisms form a factorization system for the category. This is captured in the commutative diagram in the margin, which shows the objects and morphisms whose existence can be deduced from the morphism $ f : G \rightarrow H$. The diagram shows that every morphism in the category of groups has a kernel in the category theoretical sense; the arbitrary morphism f factors into $\iota \circ \pi$, where ι is a monomorphism and π is an epimorphism (in a conormal category, all epimorphisms are normal). This is represented in the diagram by an object $\ker f$ and a monomorphism $\kappa: \ker f \rightarrow G$ (kernels are always monomorphisms), which complete the short exact sequence running from the lower left to the upper right of the diagram. The use of the exact sequence convention saves us from having to draw the zero morphisms from $\ker f$ to $H$ and $G / \ker f$. - -If the sequence is right split (i.e., there is a morphism σ that maps $G / \operatorname{ker} f$ to a $\pi$-preimage of itself), then G is the semidirect product of the normal subgroup $\operatorname{im} \kappa$ and the subgroup $\operatorname{im} \sigma$. If it is left split (i.e., there exists some $\rho: G \rightarrow \operatorname{ker} f$ such that $\rho \circ \kappa = \operatorname{id}_{\operatorname{ker} f}$), then it must also be right split, and $\operatorname{im} \kappa \times \operatorname{im} \sigma$ is a direct product decomposition of G.
In general, the existence of a right split does not imply the existence of a left split; but in an abelian category (such as that of abelian groups), left splits and right splits are equivalent by the splitting lemma, and a right split is sufficient to produce a direct sum decomposition $\operatorname{im} \kappa \oplus \operatorname{im} \sigma$. In an abelian category, all monomorphisms are also normal, and the diagram may be extended by a second short exact sequence $0 \rightarrow G / \operatorname{ker} f \rightarrow H \rightarrow \operatorname{coker} f \rightarrow 0$. - -In the second isomorphism theorem, the product SN is the join of S and N in the lattice of subgroups of G, while the intersection S ∩ N is the meet. - -The third isomorphism theorem is generalized by the nine lemma to abelian categories and more general maps between objects. - -The statements of the theorems for rings are similar, with the notion of a normal subgroup replaced by the notion of an ideal. - -Let R and S be rings, and let φ : R → S be a ring homomorphism. Then: - -# The kernel of φ is an ideal of R, - -# The image of φ is a subring of S, and - -# The image of φ is isomorphic to the quotient ring R / ker(φ). - -In particular, if φ is surjective then S is isomorphic to R / ker(φ). - -Let R be a ring. Let S be a subring of R, and let I be an ideal of R. Then: - -# The sum S + I = {s + i | s ∈ S, i ∈ I } is a subring of R, - -# The intersection S ∩ I is an ideal of S, and - -# The quotient rings (S + I) / I and S / (S ∩ I) are isomorphic. - -Let R be a ring, and I an ideal of R. Then - -# If $A$ is a subring of $R$ such that $I \subseteq A \subseteq R$, then $A/I$ is a subring of $R/I$. - -# Every subring of $R/I$ is of the form $A/I$ for some subring $A$ of $R$ such that $I \subseteq A \subseteq R$. - -# If $J$ is an ideal of $R$ such that $I \subseteq J \subseteq R$, then $J/I$ is an ideal of $R/I$. - -# Every ideal of $R/I$ is of the form $J/I$ for some ideal $J$ of $R$ such that $I \subseteq J \subseteq R$. - -# If $J$ is an ideal of $R$ such that $I \subseteq J \subseteq R$, then the quotient ring $(R/I)/(J/I)$ is isomorphic to $R/J$. - -Let $I$ be an ideal of $R$. The correspondence $A\leftrightarrow A/I$ is an inclusion-preserving bijection between the set of subrings $A$ of $R$ that contain $I$ and the set of subrings of $R/I$. Furthermore, $A$ (a subring containing $I$) is an ideal of $R$ if and only if $A/I$ is an ideal of $R/I$. - -The statements of the isomorphism theorems for modules are particularly simple, since it is possible to form a quotient module from any submodule. The isomorphism theorems for vector spaces (modules over a field) and abelian groups (modules over $\mathbb{Z}$) are special cases of these. For finite-dimensional vector spaces, all of these theorems follow from the rank–nullity theorem. - -In the following, "module" will mean "R-module" for some fixed ring R. - -Let M and N be modules, and let φ : M → N be a module homomorphism. Then: - -# The kernel of φ is a submodule of M, - -# The image of φ is a submodule of N, and - -# The image of φ is isomorphic to the quotient module M / ker(φ). - -In particular, if φ is surjective then N is isomorphic to M / ker(φ). - -Let M be a module, and let S and T be submodules of M. Then: - -# The sum S + T = {s + t | s ∈ S, t ∈ T} is a submodule of M, - -# The intersection S ∩ T is a submodule of M, and - -# The quotient modules (S + T) / T and S / (S ∩ T) are isomorphic. - -Let M be a module, T a submodule of M. 
- -# If $S$ is a submodule of $M$ such that $T \subseteq S \subseteq M$, then $S/T$ is a submodule of $M/T$. - -# Every submodule of $M/T$ is of the form $S/T$ for some submodule $S$ of $M$ such that $T \subseteq S \subseteq M$. - -# If $S$ is a submodule of $M$ such that $T \subseteq S \subseteq M$, then the quotient module $(M/T)/(S/T)$ is isomorphic to $M/S$. - -Let $M$ be a module, $N$ a submodule of $M$. There is a bijection between the submodules of $M$ that contain $N$ and the submodules of $M/N$. The correspondence is given by $A\leftrightarrow A/N$ for all $A\supseteq N$. This correspondence commutes with the processes of taking sums and intersections (i.e., is a lattice isomorphism between the lattice of submodules of $M/N$ and the lattice of submodules of $M$ that contain $N$). - -To generalise this to universal algebra, normal subgroups need to be replaced by congruence relations. - -A congruence on an algebra $A$ is an equivalence relation $\Phi\subseteq A \times A$ that forms a subalgebra of $A \times A$ considered as an algebra with componentwise operations. One can make the set of equivalence classes $A/\Phi$ into an algebra of the same type by defining the operations via representatives; this will be well-defined since $\Phi$ is a subalgebra of $A \times A$. The resulting structure is the quotient algebra. - -Let $f:A \rightarrow B$ be an algebra homomorphism. Then the image of $f$ is a subalgebra of $B$, the relation given by $\Phi:f(x)=f(y)$ (i.e. the kernel of $f$) is a congruence on $A$, and the algebras $A/\Phi$ and $\operatorname{im} f$ are isomorphic. (Note that in the case of a group, $f(x)=f(y)$ iff $f(xy^{-1}) = 1$, so one recovers the notion of kernel used in group theory in this case.) - -Given an algebra $A$, a subalgebra $B$ of $A$, and a congruence $\Phi$ on $A$, let $\Phi_B = \Phi \cap (B \times B)$ be the trace of $\Phi$ in $B$ and $[B]^\Phi=\{K \in A/\Phi: K \cap B \neq\emptyset\}$ the collection of equivalence classes that intersect $B$. Then - -# $\Phi_B$ is a congruence on $B$, - -# $ \ [B]^\Phi$ is a subalgebra of $A/\Phi$, and - -# the algebra $[B]^\Phi$ is isomorphic to the algebra $B/\Phi_B$. - -Let $A$ be an algebra and $\Phi, \Psi$ two congruence relations on $A$ such that $\Psi \subseteq \Phi$. Then $\Phi/\Psi = \{ ([a']_\Psi,[a]_\Psi): (a',a)\in \Phi\} = [\ ]_\Psi \circ \Phi \circ [\ ]_\Psi^{-1}$ is a congruence on $A/\Psi$, and $A/\Phi$ is isomorphic to $(A/\Psi)/(\Phi/\Psi).$ - -Let $A$ be an algebra and denote by $\operatorname{Con}A$ the set of all congruences on $A$. The set -$$ -\operatorname{Con}A -$$ is a complete lattice ordered by inclusion. - -If $\Phi\in\operatorname{Con}A$ is a congruence and we denote by $\left[\Phi,A\times A\right]\subseteq\operatorname{Con}A$ the set of all congruences that contain $\Phi$ (i.e. $\left[\Phi,A\times A\right]$ is a principal filter in $\operatorname{Con}A$, moreover it is a sublattice), then - -the map $\alpha:\left[\Phi,A\times A\right]\to\operatorname{Con}(A/\Phi),\Psi\mapsto\Psi/\Phi$ is a lattice isomorphism. diff --git a/wiki/wikipedia/791.txt b/wiki/wikipedia/791.txt deleted file mode 100644 index ac35cb9c2f299713919897f683d597f6f5ad1661..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/791.txt +++ /dev/null @@ -1,5 +0,0 @@ -In probability and mathematical statistics, Ignatov's theorem is a basic result on the distribution of record values of a stochastic process. - -Let X1, X2, ... be an infinite sequence of independent and identically distributed random variables.
The initial rank of the nth term of this sequence is the value r such that Xi ≥ Xn for exactly r values of i less than or equal to n. Let Yk = (Yk,1, Yk,2, Yk,3, ...) denote the stochastic process consisting of the terms Xi having initial rank k; that is, Yk,j is the jth term of the stochastic process that achieves initial rank k. The sequence Yk is called the sequence of kth partial records. Ignatov's theorem states that the sequences Y1, Y2, Y3, ... are independent and identically distributed. - -The theorem is named after Tzvetan Ignatov, a Bulgarian professor in probability and mathematical statistics at Sofia University. For this result and his general contributions to mathematics, Prof. Ignatov was awarded a Doctor Honoris Causa degree in 2013 by Sofia University. The recognition is given on extremely rare occasions and only to scholars with landmark results of international significance. diff --git a/wiki/wikipedia/792.txt b/wiki/wikipedia/792.txt deleted file mode 100644 index 6e3eebb5d0985817ccef60c3a6df232affb18bf2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/792.txt +++ /dev/null @@ -1,86 +0,0 @@ -In probability theory, the continuous mapping theorem states that continuous functions preserve limits even if their arguments are sequences of random variables. A continuous function, in Heine’s definition, is one that maps convergent sequences into convergent sequences: if xn → x then g(xn) → g(x). The continuous mapping theorem states that this will also be true if we replace the deterministic sequence {xn} with a sequence of random variables {Xn}, and replace the standard notion of convergence of real numbers “→” with one of the types of convergence of random variables. - -This theorem was first proved by Henry Mann and Abraham Wald in 1943, and it is therefore sometimes called the Mann–Wald theorem. Meanwhile, Denis Sargan refers to it as the general transformation theorem. - -Let {Xn}, X be random elements defined on a metric space S. Suppose a function g: S→S′ (where S′ is another metric space) has the set of discontinuity points Dg such that Pr[X ∈ Dg] = 0. Then - - - -\begin{align} - -X_n \ \xrightarrow{\text{d}}\ X \quad & \Rightarrow\quad g(X_n)\ \xrightarrow{\text{d}}\ g(X); \\[6pt] - -X_n \ \xrightarrow{\text{p}}\ X \quad & \Rightarrow\quad g(X_n)\ \xrightarrow{\text{p}}\ g(X); \\[6pt] - -X_n \ \xrightarrow{\!\!\text{a.s.}\!\!}\ X \quad & \Rightarrow\quad g(X_n)\ \xrightarrow{\!\!\text{a.s.}\!\!}\ g(X). - -\end{align} - - - -where the superscripts, "d", "p", and "a.s." denote convergence in distribution, convergence in probability, and almost sure convergence respectively.
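Before the proofs, a quick numerical illustration of the convergence-in-probability case (a minimal numpy sketch; the choices $X_n = X + Z_n/n$ and $g = \exp$ are illustrative assumptions, not from the article): since $X_n \to X$ in probability, the simulated exceedance probability $\Pr(|g(X_n)-g(X)|>\varepsilon)$ shrinks as $n$ grows.

```
import numpy as np

rng = np.random.default_rng(0)
g = np.exp          # a continuous mapping
eps = 0.01
X = rng.normal(size=100_000)                # the limit variable X

for n in (10, 100, 1000):
    Xn = X + rng.normal(size=X.size) / n    # X_n -> X in probability
    prob = np.mean(np.abs(g(Xn) - g(X)) > eps)
    print(n, prob)   # the exceedance probability decreases with n
```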
- -Spaces S and S′ are equipped with certain metrics. For simplicity we will denote both of these metrics using the |x − y| notation, even though the metrics may be arbitrary and not necessarily Euclidean. - -We will need a particular statement from the portmanteau theorem: that convergence in distribution $X_n\xrightarrow{d}X$ is equivalent to -$$ - \mathbb E f(X_n) \to \mathbb E f(X) -$$ for every bounded continuous functional f. - -So it suffices to prove that $ \mathbb E f(g(X_n)) \to \mathbb E f(g(X))$ for every bounded continuous functional f. Note that $ F = f \circ g$ is itself a bounded continuous functional. And so the claim follows from the statement above. - -Fix an arbitrary ε > 0. Then for any δ > 0 consider the set Bδ defined as - - - -B_\delta = \big\{x\in S \mid x\notin D_g:\ \exists y\in S:\ |x-y|<\delta, |g(x)-g(y)|>\varepsilon\big\}. - - - -This is the set of continuity points x of the function g(·) for which it is possible to find, within the δ-neighborhood of x, a point which maps outside the ε-neighborhood of g(x). By definition of continuity, this set shrinks as δ goes to zero, so that $\lim_{\delta\to 0}B_\delta = \emptyset$. - -Now suppose that |g(X) − g(Xn)| > ε. This implies that at least one of the following is true: either |X−Xn| ≥ δ, or X ∈ Dg, or X ∈ Bδ. In terms of probabilities this can be written as - - - -\Pr\big(\big|g(X_n)-g(X)\big|>\varepsilon\big) \leq - -\Pr\big(|X_n-X|\geq\delta\big) + \Pr(X\in B_\delta) + \Pr(X\in D_g). - - - -On the right-hand side, the first term converges to zero as n → ∞ for any fixed δ, by the definition of convergence in probability of the sequence {Xn}. The second term converges to zero as δ → 0, since the set Bδ shrinks to an empty set. And the last term is identically equal to zero by assumption of the theorem. Therefore, the conclusion is that - - - -\lim_{n\to\infty}\Pr \big(\big|g(X_n)-g(X)\big|>\varepsilon\big) = 0, - - - -which means that g(Xn) converges to g(X) in probability. - -By definition of the continuity of the function g(·), - - - -\lim_{n\to\infty}X_n(\omega) = X(\omega) \quad\Rightarrow\quad \lim_{n\to\infty}g(X_n(\omega)) = g(X(\omega)) - - - -at each point X(ω) where g(·) is continuous. Therefore, - -\begin{align} - -\Pr\left(\lim_{n\to\infty}g(X_n) = g(X)\right) - -&\geq \Pr\left(\lim_{n\to\infty}g(X_n) = g(X),\ X\notin D_g\right) \\ - -&\geq \Pr\left(\lim_{n\to\infty}X_n = X,\ X\notin D_g\right) = 1, - -\end{align} - -because the intersection of two almost sure events is almost sure. - -By definition, we conclude that g(Xn) converges to g(X) almost surely. diff --git a/wiki/wikipedia/793.txt b/wiki/wikipedia/793.txt deleted file mode 100644 index 372669f9896937ae26fbcb32f101843f05609bc8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/793.txt +++ /dev/null @@ -1,53 +0,0 @@ -In parallel computing, an embarrassingly parallel workload or problem (also called embarrassingly parallelizable, perfectly parallel, delightfully parallel or pleasingly parallel) is one where little or no effort is needed to separate the problem into a number of parallel tasks. This is often the case where there is little or no dependency or need for communication between those parallel tasks, or for results between them. - -Thus, these are different from distributed computing problems that need communication between tasks, especially communication of intermediate results. They are easy to perform on server farms which lack the special infrastructure used in a true supercomputer cluster.
They are thus well suited to large, Internet-based distributed platforms such as BOINC, and do not suffer from parallel slowdown. The opposite of embarrassingly parallel problems are inherently serial problems, which cannot be parallelized at all. - -A common example of an embarrassingly parallel problem is 3D video rendering handled by a graphics processing unit, where each frame (forward method) or pixel (ray tracing method) can be handled with no interdependency. Some forms of password cracking are another embarrassingly parallel task that is easily distributed on central processing units, CPU cores, or clusters. - -"Embarrassingly" is used here in the same sense as in the phrase "an embarrassment of riches", meaning an overabundance, here referring to parallelization problems which are "embarrassingly easy". The term may also imply embarrassment on the part of developers or compilers: "Because so many important problems remain unsolved mainly due to their intrinsic computational complexity, it would be embarrassing not to develop parallel implementations of polynomial homotopy continuation methods." The term is first found in the literature in a 1986 book on multiprocessors by MATLAB's creator Cleve Moler, who claims to have invented the term. - -An alternative term, pleasingly parallel, has gained some use, perhaps to avoid the negative connotations of embarrassment in favor of a positive reflection on the parallelizability of the problems: "Of course, there is nothing embarrassing about these programs at all." - -Some examples of embarrassingly parallel problems include: - -* Monte Carlo analysis - -* Distributed relational database queries using distributed set processing. - -* Numerical integration - -* Serving static files on a webserver to multiple users at once. - -* The Mandelbrot set, Perlin noise and similar images, where each point is calculated independently. - -* Rendering of computer graphics. In computer animation, each frame or pixel may be rendered independently (see parallel rendering). - -* Brute-force searches in cryptography. Notable real-world examples include distributed.net and proof-of-work systems used in cryptocurrency. - -* BLAST searches in bioinformatics for multiple queries (but not for individual large queries). - -* Large scale facial recognition systems that compare thousands of arbitrarily acquired faces (e.g., a security or surveillance video via closed-circuit television) with a similarly large number of previously stored faces (e.g., a rogues gallery or similar watch list). - -* Computer simulations comparing many independent scenarios. - -* Genetic algorithms. - -* Ensemble calculations of numerical weather prediction. - -* Event simulation and reconstruction in particle physics. - -* The marching squares algorithm. - -* Sieving step of the quadratic sieve and the number field sieve. - -* Tree growth step of the random forest machine learning technique. - -* Discrete Fourier transform where each harmonic is independently calculated. - -* Convolutional neural networks running on GPUs. - -* Hyperparameter grid search in machine learning. - -* Parallel search in constraint programming - -* In R (programming language) – The Simple Network of Workstations (SNOW) package implements a simple mechanism for using a set of workstations or a Beowulf cluster for embarrassingly parallel computations.
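Monte Carlo analysis, the first example above, makes the pattern concrete: every batch of samples can be simulated independently, with results combined only at the end. A minimal Python sketch (illustrative, not from the article) estimating π over independent worker processes:

```
import random
from multiprocessing import Pool

def count_hits(args):
    """Count random points falling inside the unit quarter-circle.
    Each task is fully independent: no communication until the end."""
    seed, n = args
    rng = random.Random(seed)
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))

if __name__ == "__main__":
    tasks = [(seed, 250_000) for seed in range(8)]   # 8 independent batches
    with Pool() as pool:
        hits = pool.map(count_hits, tasks)           # perfectly parallel map
    total = 8 * 250_000
    print(4 * sum(hits) / total)                     # ~3.14
```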
diff --git a/wiki/wikipedia/794.txt b/wiki/wikipedia/794.txt deleted file mode 100644 index f83e13acf2ee72cdebf0bc4bcb411a2ccea13208..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/794.txt +++ /dev/null @@ -1,36 +0,0 @@ -The non-squeezing theorem, also called Gromov's non-squeezing theorem, is one of the most important theorems in symplectic geometry. It was first proven in 1985 by Mikhail Gromov. - -The theorem states that one cannot embed a ball into a cylinder via a symplectic map unless the radius of the ball is less than or equal to the radius of the cylinder. The importance of this theorem is as follows: very little was known about the geometry behind symplectic transformations. - -One easy consequence of a transformation being symplectic is that it preserves volume. One can easily embed a ball of any radius into a cylinder of any other radius by a volume-preserving transformation: just picture squeezing the ball into the cylinder (hence, the name non-squeezing theorem). Thus, the non-squeezing theorem tells us that, although symplectic transformations are volume-preserving, it is much more restrictive for a transformation to be symplectic than it is to be volume-preserving. - -We start by considering the symplectic spaces -$$ - \mathbb{R}^{2n} = \{z = (x_1, \ldots , x_n, y_1, \ldots , y_n )\}, -$$ - -the ball of radius R: $B(R) = \{z \in \mathbb{R}^{2n} \mid \|z\| < R \}, $ - -and the cylinder of radius r: $Z(r) = \{z \in \mathbb{R}^{2n} \mid x_1^2 + y_1^2 < r^2 \}, $ - -each endowed with the symplectic form -$$ - \omega = dx_1 \wedge dy_1 + \cdots + dx_n \wedge dy_n. -$$ - -Note: The choice of axes for the cylinder is not arbitrary given the fixed symplectic form above; namely, the circles of the cylinder each lie in a symplectic subspace of $\mathbb{R}^{2n}$. - -The non-squeezing theorem tells us that if we can find a symplectic embedding φ : B(R) → Z(r) then R ≤ r. - -Gromov's non-squeezing theorem has also become known as the principle of the symplectic camel since Ian Stewart referred to it by alluding to the parable of the camel and the eye of a needle. - -De Gosson has shown that the non-squeezing theorem is closely linked to the Robertson–Schrödinger–Heisenberg inequality, a generalization of the Heisenberg uncertainty relation. The Robertson–Schrödinger–Heisenberg inequality states that: -$$ -\operatorname{var}(Q)\operatorname{var}(P) \geq \operatorname{cov}^2(Q,P) + \left(\frac{\hbar}{2}\right)^2 -$$ - -with Q and P the canonical coordinates and var and cov the variance and covariance functions. diff --git a/wiki/wikipedia/795.txt b/wiki/wikipedia/795.txt deleted file mode 100644 index 1dcddb4d7b0cbba3cae0d5ac85dbe1fc0bebbd9f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/795.txt +++ /dev/null @@ -1,33 +0,0 @@ -In mathematical programming and polyhedral combinatorics, the Hirsch conjecture is the statement that the edge-vertex graph of an n-facet polytope in d-dimensional Euclidean space has diameter no more than n − d. That is, any two vertices of the polytope must be connected to each other by a path of length at most n − d. The conjecture was first put forth in a letter by Warren M. Hirsch to George B. Dantzig in 1957. It was eventually disproved by Francisco Santos Leal; the result was presented at the conference 100 Years in Seattle: the mathematics of Klee and Grünbaum and appeared in Annals of Mathematics. Specifically, the paper presented a 43-dimensional polytope of 86 facets with a diameter of more than 43.
The counterexample has no direct consequences for the analysis of the simplex method, as it does not rule out the possibility of a larger but still linear or polynomial number of steps. - -Various equivalent formulations of the problem have been given, such as the d-step conjecture, which states that the diameter of any 2d-facet polytope in d-dimensional Euclidean space is no more than d; Santos Leal's counterexample also disproves this conjecture. - -The graph of a convex polytope $P$ is any graph whose vertices are in bijection with the vertices of $P$ in such a way that any two vertices of the graph are joined by an edge if and only if the two corresponding vertices of $P$ are joined by an edge of the polytope. The diameter of $P$, denoted $\delta(P)$, is the diameter of any one of its graphs. These definitions are well-defined since any two graphs of the same polytope must be isomorphic as graphs. We may then state the Hirsch conjecture as follows: - -Conjecture Let $P$ be a d-dimensional convex polytope with n facets. Then $\delta(P)\leq n-d$. - -For example, a cube in three dimensions has six facets. The Hirsch conjecture then indicates that the diameter of this cube cannot be greater than three. Accepting the conjecture would imply that any two vertices of the cube may be connected by a path from vertex to vertex using, at most, three steps. For all dimensions $d\geq 8$, this bound is tight: there are d-dimensional polytopes with n facets whose diameter is exactly n-d. In other words, for nearly all cases, the conjecture would provide the best possible upper bound on the number of steps needed to join any two vertices of a polytope by a path along its edges. Since the simplex method essentially operates by constructing a path from some vertex of the feasible region to an optimal point, the Hirsch conjecture would provide a lower bound on the number of steps needed by the simplex method in the worst-case scenario. - -The Hirsch conjecture is a special case of the polynomial Hirsch conjecture, which claims that there exists some positive integer k such that, for all polytopes $P$, $\delta(P)=O(n^k)$, where n is the number of facets of P. - -The Hirsch conjecture has been proven true for a number of cases. For example, any polytope with dimension 3 or lower satisfies the conjecture. Any d-dimensional polytope with n facets such that $ n-d\leq 6 $ satisfies the conjecture as well. - -Other attempts to solve the conjecture manifested out of a desire to formulate a different problem whose solution would imply the Hirsch conjecture. One example of particular importance is the d-step conjecture, a relaxation of the Hirsch conjecture that has actually been shown to be equivalent to it. - -Theorem The following statements are equivalent: - -# $\delta(P)\leq n-d $ for all d-dimensional polytopes $P$ with n facets. - -# $\delta(P)\leq d $ for all d-dimensional polytopes $P$ with 2d facets. - -In other words, in order to prove or disprove the Hirsch conjecture, one only needs to consider polytopes with exactly twice as many facets as their dimension. Another significant relaxation is that the Hirsch conjecture holds for all polytopes if and only if it holds for all simple polytopes. - -Unfortunately, the Hirsch conjecture is not true in all cases, as shown by Francisco Santos in 2011.
Santos' explicit construction of a counterexample comes both from the fact that the conjecture may be relaxed to only consider simple polytopes, and from the equivalence between the Hirsch and d-step conjectures. In particular, Santos produces his counterexample by examining a particular class of polytopes called spindles. - -Definition A d-spindle is a d-dimensional polytope $P$ for which there exists a pair of distinct vertices such that every facet of $P$ contains exactly one of these two vertices. - -The length of the shortest path between these two vertices is called the length of the spindle. The disproof of the Hirsch conjecture relies on the following theorem, referred to as the strong d-step theorem for spindles. - -Theorem (Santos) Let $P$ be a d-spindle. Let n be the number of its facets, and let l be its length. Then there exists an $(n-d)$-spindle, $P'$, with $2n-2d$ facets and a length bounded below by $l+n-2d$. In particular, if $l>d$, then $P'$ violates the d-step conjecture. - -Santos then proceeds to construct a 5-dimensional spindle with length 6, hence proving that there exists another spindle that serves as a counterexample to the Hirsch conjecture. The first of these two spindles has 48 facets and 322 vertices, while the spindle that actually disproves the conjecture has 86 facets and is 43-dimensional. This counterexample does not disprove the polynomial Hirsch conjecture, which remains an open problem. diff --git a/wiki/wikipedia/796.txt b/wiki/wikipedia/796.txt deleted file mode 100644 index b413d17d383c853e4b6e311e35b6c3e49dbf9884..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/796.txt +++ /dev/null @@ -1,38 +0,0 @@ -A Supnick matrix or Supnick array, named after Fred Supnick of the City College of New York, who introduced the notion in 1957, is a Monge array which is also a symmetric matrix. - -A Supnick matrix is a square Monge array that is symmetric around the main diagonal. - -An n-by-n matrix is a Supnick matrix if, for all i, j, k, l such that -$$ -1\le i < k\le n -$$ and $1\le j < l\le n$ - -then -$$ -a_{ij} + a_{kl} \le a_{il} + a_{kj} -$$ - -and also -$$ -a_{ij} = a_{ji}. -$$ - -A logically equivalent definition is given by Rudolf & Woeginger, who in 1995 proved that - -A matrix is a Supnick matrix iff it can be written as the sum of a sum matrix S and a non-negative linear combination of LL-UR block matrices. - -The sum matrix is defined in terms of a sequence of n real numbers {αi}: - - - -S = [s_{ij}] = [\alpha_i + \alpha_j]; - - - -and an LL-UR block matrix consists of two symmetrically placed rectangles in the lower-left and upper-right corners for which aij = 1, with all the rest of the matrix elements equal to zero. - -Adding two Supnick matrices together will result in a new Supnick matrix (Deineko and Woeginger 2006). - -Multiplying a Supnick matrix by a non-negative real number produces a new Supnick matrix (Deineko and Woeginger 2006). - -If the distance matrix in a traveling salesman problem can be written as a Supnick matrix, that particular instance of the problem admits an easy solution (even though the problem is, in general, NP-hard). diff --git a/wiki/wikipedia/797.txt b/wiki/wikipedia/797.txt deleted file mode 100644 index 8d160007a7fb28b7fe902def1e323a28bda03906..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/797.txt +++ /dev/null @@ -1,15 +0,0 @@ -In mathematics, Hanner's inequalities are results in the theory of Lp spaces. Their proof was published in 1956 by Olof Hanner.
They provide a simpler way of proving the uniform convexity of Lp spaces for p ∈ (1, +∞) than the approach proposed by James A. Clarkson in 1936. - -Let f, g ∈ Lp(E), where E is any measure space. If p ∈ [1, 2], then -$$ -\|f+g\|_p^p + \|f-g\|_p^p \geq \big( \|f\|_p + \|g\|_p \big)^p + \big| \|f\|_p-\|g\|_p \big|^p. -$$ - -The substitutions F = f + g and G = f - g yield the second of Hanner's inequalities: -$$ -2^p \big( \|F\|_p^p + \|G\|_p^p \big) \geq \big( \|F+G\|_p + \|F-G\|_p \big)^p + \big| \|F+G\|_p-\|F-G\|_p \big|^p. -$$ - -For p ∈ [2, +∞) the inequalities are reversed (they remain non-strict). - -Note that for $p = 2$ the inequalities become equalities; both are then instances of the parallelogram rule. diff --git a/wiki/wikipedia/798.txt b/wiki/wikipedia/798.txt deleted file mode 100644 index c5c25fce8c2914c948e5353ac7d2a55085a9ebe0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/798.txt +++ /dev/null @@ -1,345 +0,0 @@ -Killer sudoku (also killer su doku, sumdoku, sum doku, sumoku, addoku, or samunamupure) is a puzzle that combines elements of sudoku and kakuro. Despite the name, the simpler killer sudokus can be easier to solve than regular sudokus, depending on the solver's skill at mental arithmetic; the hardest ones, however, can take hours to crack. - -A typical problem is shown on the right, using colors to define the groups of cells. More often, puzzles are printed in black and white, with thin dotted lines used to outline the "cages" (see below for terminology). - -Killer sudoku puzzles were already an established variant of sudoku in Japan by the mid-1990s, where they were known as "samunamupure." The name stemmed from a Japanized form of the English words "sum number place." Killer sudokus were introduced to most of the English-speaking world by The Times in 2005. - -Traditionally, as with regular sudoku puzzles, the grid layout is symmetrical around a diagonal, horizontal or vertical axis, or a quarter or half turn about the centre. This is a matter of aesthetics, though, rather than obligatory: many Japanese puzzle-makers will make small deviations from perfect symmetry for the sake of improving the puzzle. Other puzzle-makers may produce entirely asymmetrical puzzles. - -; Cell : A single square that contains one number in the grid - -; Row : A horizontal line of 9 cells - -; Column : A vertical line of 9 cells - -; Nonet : A 3×3 grid of cells, as outlined by the bolder lines in the diagram above; also called a box - -; Cage : The grouping of cells denoted by a dotted line or by individual colours. - -; House : Any nonrepeating set of 9 cells: can be used as a general term for "row, column, or nonet" (or, in Killer X variants, "long diagonal") - -The objective is to fill the grid with numbers from 1 to 9 in a way that the following conditions are met: - -* Each row, column, and nonet contains each number exactly once. - -* The sum of all numbers in a cage must match the small number printed in its corner. - -* No number appears more than once in a cage. (This is the standard rule for killer sudokus, and implies that no cage can include more than 9 cells.) - -In 'Killer X', an additional rule is that each of the long diagonals contains each number once. - -By convention in Japan, killer sudoku cages do not include duplicate numbers. However, when The Times first introduced the killer sudoku on 31 August 2005, the newspaper did not make this rule explicit.
Even though the vast majority of killer sudoku puzzles followed the rule anyway, English-speaking solvers were confused about appropriate solving strategies given the ambiguity. On September 16, 2005 The Times added a new ruling that “Within each dotted-line shape, a digit CAN be repeated if the normal row, column and 3x3 box rules are not broken”. But on September 19 the rule changed to “Within each dotted-line shape, a digit CANNOT be repeated if the normal row, column and 3x3 box rules are not broken”, causing even more confusion. This revised rule stuck, and the world standard is no duplicates within cages. - -Generally the problem is best tackled starting from the extreme sums: cages with the largest or the smallest sums. This is because these have the fewest possible combinations. For example, 5 cells within the same cage totalling 34 can only be 4, 6, 7, 8, and 9. Yet 5 cells within the same cage totaling 25 have twelve possible combinations. - -In the early stages of the game, the most common way to begin filling in numbers is to look at such low-sum or high-sum cages that form a 'straight line'. As the solver can infer from these that certain numbers are in a certain row or column, they can begin 'cross-hatching' across from them. - -A further technique can be derived from the knowledge that the numbers in all houses (rows, columns and nonets) add up to 45. By adding up the cages and single numbers in a particular house, the user can deduce the result of a single cell. If the cell calculated is within the house itself, it is referred to as an 'innie'; conversely if the cell is outside it, it is called an 'outie'. Even if this is not possible, advanced players may find it useful to derive the sum of two or three cells, then use other elimination techniques (see below for an example of this). This '45' technique can also be extended to calculate the innies or outies of N adjacent houses, as the difference between the cage-sums and N*45. - -A short-cut to calculating or checking the value of a single 'innie' or 'outie' on a large number of cages is to add up the cages using 'clock' arithmetic (more precisely, modular arithmetic modulo 10), in which all digits other than the last in any number are ignored. - -When two numbers are added together, the last digit of the total is not affected by anything other than the last digits of the two original numbers. Adding together a number ending in 7 and a number ending in 8 always results in a number ending in 5, for example. So, for example, 17 + 18 = 35 becomes, in clock arithmetic, 7 + 8 = 5. The biggest number an 'innie' or 'outie' can hold is 9, so adding or subtracting that value will change the last digit of the total in a way that no other value would, allowing the 'innie' or 'outie' to be directly calculated. Clock arithmetic has the advantage that you are only ever dealing with single-digit sums, rather than sums like, say, 58+27; even if the concept is initially unfamiliar, it rapidly becomes trivial. - -Example: A set of cages form a complete nonet with an 'outie'. The cages have values 8, 10, 14, 7, 14. - -* Using normal arithmetic, those add up to 53. A single nonet totals 45, so the 'outie' must contain an 8. - -* Checking that, using clock arithmetic on those values in turn: 8+0=8; 8+4=2; 2+7=9; 9+4=3. So the clock total is 3, meaning that the actual total also ends in 3 (which we've seen that it does).
Any odd number of houses (in this case, 1 nonet) always has an arithmetic total ending in 5, so the only 'outie' we could add to change that 5 to a 3 is, again, 8. - -Clock arithmetic has the additional bonus that, when the final digits of two cage totals add up to 10 (13 and 27, for example), the pair will make no difference to the overall clock total, and can simply be skipped. - -Clock arithmetic should be used only with caution for houses with more than one 'innie' or 'outie', since more than one set of values may result in the same final number, but it may still be useful as a quick arithmetic check. - -Even though some cages can have multiple combinations of numbers available, there can often be one or more numbers that are consistent within all available solutions. For example: a 4 cell cage totaling 13 has the possible combinations of (1, 2, 3, 7), (1, 2, 4, 6), or (1, 3, 4, 5). Even though, initially, there is no way to tell which combination of numbers is correct, every solution available has a 1 in it. The player then knows for certain that one of the numbers within that cage is 1 (no matter which is the final solution). This can be useful if, for example, they have already deduced another cell within a nonet the cage resides in as having the number 1 as its solution. They then know that the 1 can only reside in cells that are outside of this nonet. If there is only one cell available, it is a 1. - -The two cells in the top left must be 1+2. The 3 cells to the right totaling 15 cannot therefore have either a 1 or a 2, so they must be either 3+4+8, 3+5+7, or 4+5+6. - -The two vertical cells in the top left of the top right nonet cannot be 2+2 as that would mean duplicates, so they must be 1+3. The 1 cannot be in the top line as that conflicts with our first 2 cells, therefore the top cell of this pair is 3 and the lower cell 1. This also means the 3 cell cage 15 to the left cannot contain a 3 and so is 4+5+6. - -Similarly the neighbouring 16 must be 9+7. - -The four cells in the top right cage (totaling 15) can only include one of 1, 3, 7, or 9 (if at all) because of the presence of 1, 3, 7, and 9 in the top right hand nonet. If any one of 1, 3, 7, or 9 is present then this must be the lone square in the nonet below. Therefore, these 4 cells are one of 1+2+4+8 or 2+3+4+6; the 2 cells in the middle of the left edge must be either 1+5 or 2+4; and so on. - -Looking at the nonet on the left hand side in the middle, we can see that there are three cages which do not cross over into another nonet; these add up to 33, meaning that the sum of the remaining two cells must be 12. This does not seem particularly useful, but consider that the cell in the bottom right of the nonet is part of a 3-cage of 6; it can therefore only contain 1, 2 or 3. If it contained 1 or 2, the other cell would have to contain 11 or 10 respectively; this is impossible. It must, therefore, contain 3, and the other cell 9. - -With 6-cell, 7-cell or 8-cell cages, correlating the combinations with their 3-cell, 2-cell, or 1-cell complements usually simplifies things. The table for 6 cell cages is the complement of the 3 cell table adding up to 45 minus the listed value; similarly, the 7 cell table complements the 2 cell table. An 8-cell cage is of course missing only one digit (45 minus the sum of the cage). - -For example, the complement of a 7-cell cage totalling 41 is a 2-cell cage totalling 4 (because 9–7=2 and 45–41=4).
As a 2-cell cage totalling 4 can contain only 1 and 3, we deduce that a 7-cell cage totalling 41 contains neither 1 nor 3. - -The following tables list the possible combinations for various sums. - -;1 cell - -1: 1 - -2: 2 - -3: 3 - -4: 4 - -5: 5 - -6: 6 - -7: 7 - -8: 8 - -9: 9 - -;2 cells - -3: 12 - -4: 13 - -5: 14 23 - -6: 15 24 - -7: 16 25 34 - -8: 17 26 35 - -9: 18 27 36 45 - -10: 19 28 37 46 - -11: 29 38 47 56 - -12: 39 48 57 - -13: 49 58 67 - -14: 59 68 - -15: 69 78 - -16: 79 - -17: 89 - -;3 cells - -6: 123 - -7: 124 - -8: 125 134 - -9: 126 135 234 - -10: 127 136 145 235 - -11: 128 137 146 236 245 - -12: 129 138 147 156 237 246 345 - -13: 139 148 157 238 247 256 346 - -14: 149 158 167 239 248 257 347 356 - -15: 159 168 249 258 267 348 357 456 - -16: 169 178 259 268 349 358 367 457 - -17: 179 269 278 359 368 458 467 - -18: 189 279 369 378 459 468 567 - -19: 289 379 469 478 568 - -20: 389 479 569 578 - -21: 489 579 678 - -22: 589 679 - -23: 689 - -24: 789 - -;4 cells - -10: 1234 - -11: 1235 - -12: 1236 1245 - -13: 1237 1246 1345 - -14: 1238 1247 1256 1346 2345 - -15: 1239 1248 1257 1347 1356 2346 - -16: 1249 1258 1267 1348 1357 1456 2347 2356 - -17: 1259 1268 1349 1358 1367 1457 2348 2357 2456 - -18: 1269 1278 1359 1368 1458 1467 2349 2358 2367 2457 3456 - -19: 1279 1369 1378 1459 1468 1567 2359 2368 2458 2467 3457 - -20: 1289 1379 1469 1478 1568 2369 2378 2459 2468 2567 3458 3467 - -21: 1389 1479 1569 1578 2379 2469 2478 2568 3459 3468 3567 - -22: 1489 1579 1678 2389 2479 2569 2578 3469 3478 3568 4567 - -23: 1589 1679 2489 2579 2678 3479 3569 3578 4568 - -24: 1689 2589 2679 3489 3579 3678 4569 4578 - -25: 1789 2689 3589 3679 4579 4678 - -26: 2789 3689 4589 4679 5678 - -27: 3789 4689 5679 - -28: 4789 5689 - -29: 5789 - -30: 6789 - -;5 cells - -15: 12345 - -16: 12346 - -17: 12347 12356 - -18: 12348 12357 12456 - -19: 12349 12358 12367 12457 13456 - -20: 12359 12368 12458 12467 13457 23456 - -21: 12369 12378 12459 12468 12567 13458 13467 23457 - -22: 12379 12469 12478 12568 13459 13468 13567 23458 23467 - -23: 12389 12479 12569 12578 13469 13478 13568 14567 23459 23468 23567 - -24: 12489 12579 12678 13479 13569 13578 14568 23469 23478 23568 24567 - -25: 12589 12679 13489 13579 13678 14569 14578 23479 23569 23578 24568 34567 - -26: 12689 13589 13679 14579 14678 23489 23579 23678 24569 24578 34568 - -27: 12789 13689 14589 14679 15678 23589 23679 24579 24678 34569 34578 - -28: 13789 14689 15679 23689 24589 24679 25678 34579 34678 - -29: 14789 15689 23789 24689 25679 34589 34679 35678 - -30: 15789 24789 25689 34689 35679 45678 - -31: 16789 25789 34789 35689 45679 - -32: 26789 35789 45689 - -33: 36789 45789 - -34: 46789 - -35: 56789 - -;6 cells - -21: 123456 - -22: 123457 - -23: 123458 123467 - -24: 123459 123468 123567 - -25: 123469 123478 123568 124567 - -26: 123479 123569 123578 124568 134567 - -27: 123489 123579 123678 124569 124578 134568 234567 - -28: 123589 123679 124579 124678 134569 134578 234568 - -29: 123689 124589 124679 125678 134579 134678 234569 234578 - -30: 123789 124689 125679 134589 134679 135678 234579 234678 - -31: 124789 125689 134689 135679 145678 234589 234679 235678 - -32: 125789 134789 135689 145679 234689 235679 245678 - -33: 126789 135789 145689 234789 235689 245679 345678 - -34: 136789 145789 235789 245689 345679 - -35: 146789 236789 245789 345689 - -36: 156789 246789 345789 - -37: 256789 346789 - -38: 356789 - -39: 456789 - -;7 cells - -28: 1234567 - -29: 1234568 - -30: 1234569 1234578 - -31: 1234579 1234678 - -32: 1234589 1234679 1235678 - -33: 1234689 1235679 
1245678 - -34: 1234789 1235689 1245679 1345678 - -35: 1235789 1245689 1345679 2345678 - -36: 1236789 1245789 1345689 2345679 - -37: 1246789 1345789 2345689 - -38: 1256789 1346789 2345789 - -39: 1356789 2346789 - -40: 1456789 2356789 - -41: 2456789 - -42: 3456789 - -;8 cells - -36: 12345678 - -37: 12345679 - -38: 12345689 - -39: 12345789 - -40: 12346789 - -41: 12356789 - -42: 12456789 - -43: 13456789 - -44: 23456789 - -;9 cells - -45: 123456789 diff --git a/wiki/wikipedia/799.txt b/wiki/wikipedia/799.txt deleted file mode 100644 index 13353d94d2c582ae16a1504e185936eda249e13a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/799.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, the Seifert conjecture states that every nonsingular, continuous vector field on the 3-sphere has a closed orbit. It is named after Herbert Seifert. In a 1950 paper, Seifert asked if such a vector field exists, but did not phrase non-existence as a conjecture. He also established the conjecture for perturbations of the Hopf fibration. - -The conjecture was disproven in 1974 by Paul Schweitzer, who exhibited a $C^1$ counterexample. Schweitzer's construction was then modified by Jenny Harrison in 1988 to make a $C^{2+\delta}$ counterexample for some $\delta > 0$. The existence of smoother counterexamples remained an open question until 1993 when Krystyna Kuperberg constructed a very different $C^\infty$ counterexample. Later this construction was shown to have real analytic and piecewise linear versions. diff --git a/wiki/wikipedia/8.txt b/wiki/wikipedia/8.txt deleted file mode 100644 index b7ff10de8f9b8c3822c531108d1db0d3d1a83966..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/8.txt +++ /dev/null @@ -1,20 +0,0 @@ -In the mathematical field of complex analysis, Akhiezer's theorem is a result about entire functions proved by Naum Akhiezer. - -Let f(z) be an entire function of exponential type τ, with f(x) ≥ 0 for real x. Then the following are equivalent: - -* There exists an entire function F, of exponential type τ/2, having all its zeros in the (closed) upper half plane, such that -$$ -f(z)=F(z)\overline{F(\overline{z})} -$$ - -* One has: -$$ -\sum|\operatorname{Im}(1/z_{n})|<\infty -$$ - -where $z_n$ are the zeros of f. - -It is not hard to show that the Fejér–Riesz theorem is a special case. diff --git a/wiki/wikipedia/80.txt b/wiki/wikipedia/80.txt deleted file mode 100644 index 89965d9f4824cae8c661615de54aa5c5b2202433..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/80.txt +++ /dev/null @@ -1,11 +0,0 @@ -A correlation inequality is any of a number of inequalities satisfied by the correlation functions of a model. Such inequalities are of particular use in statistical mechanics and in percolation theory. - -Examples include: - -*Bell's inequality - -*FKG inequality - -*Griffiths inequality, and its generalisation, the Ginibre inequality - -*Gaussian correlation inequality diff --git a/wiki/wikipedia/800.txt b/wiki/wikipedia/800.txt deleted file mode 100644 index 214e94848e9ce7f7c67779c24b1c2c98de6e7714..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/800.txt +++ /dev/null @@ -1,63 +0,0 @@ -In statistics, Samuelson's inequality, named after the economist Paul Samuelson, also called the Laguerre-Samuelson inequality, after the mathematician Edmond Laguerre, states that every one of any collection x1, ..., xn, is within $\sqrt{n-1}$ uncorrected sample standard deviations of their sample mean.
- -If we let -$$ - \overline{x} = \frac{x_1+\cdots+x_n}{n} -$$ - -be the sample mean and -$$ - s = \sqrt{\frac{1}{n} \sum_{i=1}^n (x_i - \overline{x})^2 } -$$ - -be the standard deviation of the sample, then -$$ - \overline{x} - s\sqrt{n-1} \le x_j \le \overline{x} + s\sqrt{n-1}\qquad \text{for } j = 1,\dots,n. -$$ - -Equality holds on the left (or right) for $x_j$ if and only if all the n - 1 $x_i$s other than $x_j$ are equal to each other and greater (smaller) than $x_j.$ - -Consider a polynomial with all roots real: -$$ - a_0x^n + a_1x^{n-1} + \cdots + a_{n-1}x + a_n = 0 -$$ - - - -Without loss of generality let $a_0 = 1$ and let -$$ - t_1 = \sum x_i -$$ and $ t_2 = \sum x_i^2 $ - - - -Then -$$ - a_1 = - \sum x_i = -t_1 -$$ - -and -$$ - a_2 = \sum x_ix_j = \frac{t_1^2 - t_2}{2} \qquad \text{ where } i < j -$$ - -In terms of the coefficients -$$ - t_2 = a_1^2 - 2a_2 -$$ - -Laguerre showed that the roots of this polynomial were bounded by -$$ - -a_1 / n \pm b \sqrt{n - 1} -$$ - -where -$$ - b = \frac{\sqrt{nt_2 - t_1^2}}{n} = \frac{\sqrt{na_1^2 - a_1^2 - 2na_2}}{n} -$$ - -Inspection shows that $-\tfrac{a_1}{n}$ is the mean of the roots and that b is the standard deviation of the roots. - -Laguerre failed to notice this relationship with the means and standard deviations of the roots, being more interested in the bounds themselves. This relationship permits a rapid estimate of the bounds of the roots and may be of use in their location. - -When the coefficients $ a_1 $ and $ a_2 $ are both zero no information can be obtained about the location of the roots, because not all roots are real (as can be seen from Descartes' rule of signs) unless the constant term is also zero. diff --git a/wiki/wikipedia/801.txt b/wiki/wikipedia/801.txt deleted file mode 100644 index a4a957ce847d4f5308334a99da6d22721640c0e0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/801.txt +++ /dev/null @@ -1,23 +0,0 @@ -3D Topicscape, a software application, is a Personal Information Manager that provides a template loosely based on mind-mapping or concept mapping. It presents the mind map as a 3D scene where each node is a cone (or pyramid, or variation on such a shape). It can also display in a 2D format. Nodes are arranged in a way that indicates how they are related in much the same way as a mind map. In addition to its use for information management it is claimed to be suitable as a task manager, and for use in project management. - -A Topicscape is created by importing folders (by Drag-and-drop or menus), importing from other mind mapping software including FreeMind, PersonalBrain and MindManager or by hand with mouse clicks or keyboard shortcuts. Import sources may be converted to a new Topicscape or added as a portion of an existing one. - -The number of levels that can be stored is not limited, but up to seven levels of the hierarchy may be viewed at once. Any node may be chosen as the centre of the 3D scene and choosing one at the edge will cause more to come into view. - -Topicscape's most obvious difference from 2D mind mapping software is that it provides a zooming interface and simulates flying as noted by Wall Street Journal columnist Jeremy Wagstaff in his column "Fly through your computer." The BBC World Service and PC World have also reviewed 3D Topicscape. 
- -*3D Topicscape public Beta in Jan 2006 - -*3D Topicscape 1.0, May 2006 - -*3D Topicscape Lite 1.05; 1.07, Dec 2007; 1.2, Aug 2008 - -*3D Topicscape Pro 1.2, Feb 2007; 1.3, May 2007; 1.56, Dec 2007; 1.59, May 2008; 1.6, Jul 2008; 1.63, Sep 2008; 2.0, Apr 2009; 2.5, Dec 2009; 2.6, Feb 2010; 2.7, April 2010 - -*3D Topicscape Student Edition Beta, Sep 2007; 1.0, Feb 2008; 2.0, Dec 2009 - -3D Topicscape uses an embedded Firebird relational database to store user-provided and operational metadata. Files attached to nodes (topics) may be linked to in their original location or be held in a folder (directory) associated with a given Topicscape. Links to files in a Topicscape's folder are relative. Topicscape folders may therefore be moved without breaking such links. - -Import file formats supported include FreeMind, OPML, MindManager versions 5-8, PersonalBrain, and text (outline-numbered). - -Export file formats can be those for FreeMind, OPML, HTML and text structured for re-import, or text for reading. diff --git a/wiki/wikipedia/802.txt b/wiki/wikipedia/802.txt deleted file mode 100644 index 2cc89b53e72da84b9a857eb21f764a726a60b020..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/802.txt +++ /dev/null @@ -1,5 +0,0 @@ -The Bogoliubov–Parasyuk theorem in quantum field theory states that renormalized Green's functions and matrix elements of the scattering matrix (S-matrix) are free of ultraviolet divergences. Green's functions and the scattering matrix are the fundamental objects in quantum field theory which determine basic physically measurable quantities. Formal expressions for Green's functions and the S-matrix in any physical quantum field theory contain divergent integrals (i.e., integrals which take infinite values) and therefore formally these expressions are meaningless. The renormalization procedure is a specific procedure to make these divergent integrals finite and obtain (and predict) finite values for physically measurable quantities. The Bogoliubov–Parasyuk theorem states that for a wide class of quantum field theories, called renormalizable field theories, these divergent integrals can be made finite in a regular way using a finite (and small) set of certain elementary subtractions of divergences. - -The theorem guarantees that Green's functions and matrix elements of the scattering matrix, computed within the perturbation expansion, are finite for any renormalized quantum field theory. The theorem specifies a concrete procedure (the Bogoliubov–Parasyuk R-operation) for subtraction of divergences in any order of perturbation theory, establishes correctness of this procedure, and guarantees the uniqueness of the obtained results. - -The theorem was proved by Nikolay Bogoliubov and Ostap Parasyuk in 1955. The proof of the Bogoliubov–Parasyuk theorem was simplified later. diff --git a/wiki/wikipedia/803.txt b/wiki/wikipedia/803.txt deleted file mode 100644 index 641b23aca6a5712ace6dad6a5d5e6f4c86751aac..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/803.txt +++ /dev/null @@ -1,72 +0,0 @@ -In real analysis, the Heine–Borel theorem, named after Eduard Heine and Émile Borel, states: - -For a subset S of Euclidean space Rn, the following two statements are equivalent: - -*S is closed and bounded - -*S is compact, that is, every open cover of S has a finite subcover. - -The history of what today is called the Heine–Borel theorem starts in the 19th century, with the search for solid foundations of real analysis.
Central to the theory was the concept of uniform continuity and the theorem stating that every continuous function on a closed interval is uniformly continuous. Peter Gustav Lejeune Dirichlet was the first to prove this and implicitly he used the existence of a finite subcover of a given open cover of a closed interval in his proof. Later Eduard Heine, Karl Weierstrass and Salvatore Pincherle used similar techniques. Émile Borel in 1895 was the first to state and prove a form of what is now called the Heine–Borel theorem. His formulation was restricted to countable covers. Pierre Cousin (1895), Lebesgue (1898) and Schoenflies (1900) generalized it to arbitrary covers. - -If a set is compact, then it must be closed. - -Let S be a subset of Rn. Observe first the following: if a is a limit point of S, then any finite collection C of open sets, such that each open set U ∈ C is disjoint from some neighborhood VU of a, fails to be a cover of S. Indeed, the intersection of the finite family of sets VU is a neighborhood W of a in Rn. Since a is a limit point of S, W must contain a point x in S. This x ∈ S is not covered by the family C, because every U in C is disjoint from VU and hence disjoint from W, which contains x. - -If S is compact but not closed, then it has a limit point a not in S. Consider a collection C ′ consisting of an open neighborhood N(x) for each x ∈ S, chosen small enough to not intersect some neighborhood Vx of a. Then C ′ is an open cover of S, but any finite subcollection of C ′ has the form of C discussed previously, and thus cannot be an open subcover of S. This contradicts the compactness of S. Hence, every limit point of S is in S, so S is closed. - -The proof above applies with almost no change to showing that any compact subset S of a Hausdorff topological space X is closed in X. - -If a set is compact, then it is bounded. - -Let $S$ be a compact set in $\mathbf{R}^n$, and $U_x$ a ball of radius 1 centered at $x\in\mathbf{R}^n$. Then the set of all such balls centered at $x\in S$ is clearly an open cover of $S$, since $\cup_{x\in S} U_x$ contains all of $S$. Since $S$ is compact, take a finite subcover of this cover. This subcover is the finite union of balls of radius 1. Consider all pairs of centers of these (finitely many) balls (of radius 1) and let $M$ be the maximum of the distances between them. Then if $C_p$ and $C_q$ are the centers (respectively) of unit balls containing arbitrary $p,q\in S$, the triangle inequality says: - - - -d(p, q)\le d(p, C_p) + d(C_p, C_q) + d(C_q, q)\le 1 + M + 1 = M + 2. - - - -So the diameter of $S$ is bounded by $M+2$. - -A closed subset of a compact set is compact. - -Let K be a closed subset of a compact set T in Rn and let CK be an open cover of K. Then U = Rn \ K is an open set and -$$ - C_T = C_K \cup \{U\} -$$ - -is an open cover of T. Since T is compact, then CT has a finite subcover $ C_T',$ that also covers the smaller set K. Since U does not contain any point of K, the set K is already covered by $ C_K' = C_T' \setminus \{U\}, $ that is a finite subcollection of the original collection CK. It is thus possible to extract from any open cover CK of K a finite subcover. - -If a set is closed and bounded, then it is compact. - -If a set S in Rn is bounded, then it can be enclosed within an n-box -$$ - T_0 = [-a, a]^n -$$ - -where a > 0. By the property above, it is enough to show that T0 is compact. - -Assume, by way of contradiction, that T0 is not compact. 
Then there exists an infinite open cover C of T0 that does not admit any finite subcover. Through bisection of each of the sides of T0, the box T0 can be broken up into 2n sub n-boxes, each of which has diameter equal to half the diameter of T0. Then at least one of the 2n sections of T0 must require an infinite subcover of C, otherwise C itself would have a finite subcover, by uniting together the finite covers of the sections. Call this section T1. - -Likewise, the sides of T1 can be bisected, yielding 2n sections of T1, at least one of which must require an infinite subcover of C. Continuing in like manner yields a decreasing sequence of nested n-boxes: -$$ - T_0 \supset T_1 \supset T_2 \supset \ldots \supset T_k \supset \ldots -$$ - -where the side length of Tk is (2 a) / 2k, which tends to 0 as k tends to infinity. Let us define a sequence (xk) such that each xk is in Tk. This sequence is Cauchy, so it must converge to some limit L. Since each Tk is closed, and for each k the sequence (xk) is eventually always inside Tk, we see that L ∈ Tk for each k. - -Since C covers T0, then it has some member U ∈ C such that L ∈ U. Since U is open, there is an n-ball B(L) ⊆ U. For large enough k, one has Tk ⊆ B(L) ⊆ U, but then the infinite number of members of C needed to cover Tk can be replaced by just one: U, a contradiction. - -Thus, T0 is compact. Since S is closed and a subset of the compact set T0, then S is also compact (see above). - -The Heine–Borel theorem does not hold as stated for general metric and topological vector spaces, and this gives rise to the necessity to consider special classes of spaces where this proposition is true. They are called the spaces with the Heine–Borel property. - -===In the theory of metric spaces=== - -A metric space $(X,d)$ is said to have the Heine–Borel property if each closed bounded set in $X$ is compact. - -Many metric spaces fail to have the Heine–Borel property, such as the metric space of rational numbers (or indeed any incomplete metric space). Complete metric spaces may also fail to have the property; for instance, no infinite-dimensional Banach spaces have the Heine–Borel property (as metric spaces). Even more trivially, if the real line is not endowed with the usual metric, it may fail to have the Heine–Borel property. - -A metric space $(X,d)$ has a Heine–Borel metric which is Cauchy locally identical to $d$ if and only if it is complete, $\sigma$-compact, and locally compact. - -A topological vector space $X$ is said to have the Heine–Borel property (R.E. Edwards uses the term boundedly compact space) if each closed bounded set in $X$ is compact. No infinite-dimensional Banach spaces have the Heine–Borel property (as topological vector spaces). But some infinite-dimensional Fréchet spaces do have, for instance, the space $C^\infty(\Omega)$ of smooth functions on an open set $\Omega\subset\mathbb{R}^n$ and the space $H(\Omega)$ of holomorphic functions on an open set $\Omega\subset\mathbb{C}^n$. More generally, any quasi-complete nuclear space has the Heine–Borel property. All Montel spaces have the Heine–Borel property as well. diff --git a/wiki/wikipedia/804.txt b/wiki/wikipedia/804.txt deleted file mode 100644 index 5d5fda01f282edc87ee02c9a8a72a51239ed0ce7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/804.txt +++ /dev/null @@ -1,116 +0,0 @@ -Van der Waerden's theorem is a theorem in the branch of mathematics called Ramsey theory. 
Van der Waerden's theorem states that for any given positive integers r and k, there is some number N such that if the integers {1, 2, ..., N} are colored, each with one of r different colors, then there are at least k integers in arithmetic progression, all of the same color. The least such N is the Van der Waerden number W(r, k), named after the Dutch mathematician B. L. van der Waerden. - -For example, when r = 2, you have two colors, say red and blue. W(2, 3) is bigger than 8, because you can color the integers from {1, ..., 8} so that no three integers of the same color form an arithmetic progression: for instance, color 1, 4, 5 and 8 blue and 2, 3, 6 and 7 red. But you can't add a ninth integer to the end without creating such a progression. If you add a red 9, then the red 3, 6, and 9 are in arithmetic progression. Alternatively, if you add a blue 9, then the blue 1, 5, and 9 are in arithmetic progression. - -In fact, there is no way of coloring 1 through 9 without creating such a progression (this can be verified by checking each of the finitely many colorings). Therefore, W(2, 3) is 9. - -It is an open problem to determine the values of W(r, k) for most values of r and k. The proof of the theorem provides only an upper bound. For the case of r = 2 and k = 3, for example, the argument given below shows that it is sufficient to color the integers {1, ..., 325} with two colors to guarantee there will be a single-colored arithmetic progression of length 3. But in fact, the bound of 325 is very loose; the minimum required number of integers is only 9. Any coloring of the integers {1, ..., 9} will have three evenly spaced integers of one color. - -For r = 3 and k = 3, the bound given by the theorem is 7(2·3^7 + 1)(2·3^{7·(2·3^7 + 1)} + 1), or approximately 4.22·10^{14616}. But actually, you don't need that many integers to guarantee a single-colored progression of length 3; you only need 27, and it is possible to color {1, ..., 26} with three colors so that there is no single-colored arithmetic progression of length 3. - -An open problem is the attempt to reduce the general upper bound to any 'reasonable' function. Ronald Graham offered a prize of US$1000 for showing W(2, k) < 2^{k^2}. In addition, he offered a US$250 prize for a proof of his conjecture involving more general off-diagonal van der Waerden numbers, stating W(2; 3, k) ≤ k^{O(1)}, while mentioning numerical evidence suggests W(2; 3, k) = k^{2 + o(1)}. Ben Green disproved this latter conjecture and proved super-polynomial counterexamples to W(2; 3, k) < k^r for any r. The best upper bound currently known is due to Timothy Gowers, who establishes -$$ -W(r,k) \leq 2^{2^{r^{2^{2^{k + 9}}}}}, -$$ - -by first establishing a similar result for Szemerédi's theorem, which is a stronger version of Van der Waerden's theorem. The previously best-known bound was due to Saharon Shelah and proceeded via first proving a result for the Hales-Jewett theorem, which is another strengthening of Van der Waerden's theorem. - -The best lower bound currently known for $W(2, k)$ is that for all positive $\varepsilon$ we have $W(2, k) > 2^k/k^\varepsilon$, for all sufficiently large $k$. - -The following proof is due to Ron Graham and B.L. Rothschild. Khinchin gives a fairly simple proof of the theorem without estimating W(r, k). - -We will prove the special case mentioned above, that W(2, 3) ≤ 325. Let c(n) be a coloring of the integers {1, ..., 325}. We will find three elements of {1, ..., 325} in arithmetic progression that are the same color. - -Divide {1, ..., 325} into the 65 blocks {1, ..., 5}, {6, ..., 10}, ...
{321, ..., 325}, thus each block is of the form {5b + 1, ..., 5b + 5} for some b in {0, ..., 64}. Since each integer is colored either red or blue, each block is colored in one of 32 different ways. By the pigeonhole principle, there are two blocks among the first 33 blocks that are colored identically. That is, there are two integers b1 and b2, both in {0,...,32}, such that - -c(5b1 + k) = c(5b2 + k) - -for all k in {1, ..., 5}. Among the three integers 5b1 + 1, 5b1 + 2, 5b1 + 3, there must be at least two that are of the same color. (The pigeonhole principle again.) Call these 5b1 + a1 and 5b1 + a2, where the ai are in {1,2,3} and a1 < a2. Suppose (without loss of generality) that these two integers are both red. (If they are both blue, just exchange 'red' and 'blue' in what follows.) - -Let a3 = 2a2 - a1. If 5b1 + a3 is red, then we have found our arithmetic progression: 5b1 + ai are all red. - -Otherwise, 5b1 + a3 is blue. Since a3 ≤ 5, 5b1 + a3 is in the b1 block, and since the b2 block is colored identically, 5b2 + a3 is also blue. - -Now let b3 = 2b2 - b1. Then b3 ≤ 64. Consider the integer 5b3 + a3, which must be ≤ 325. What color is it? - -If it is red, then 5b1 + a1, 5b2 + a2, and 5b3 + a3 form a red arithmetic progression. But if it is blue, then 5b1 + a3, 5b2 + a3, and 5b3 + a3 form a blue arithmetic progression. Either way, we are done. - -A similar argument can be advanced to show that W(3, 3) ≤ 7(2·3^7 + 1)(2·3^{7·(2·3^7 + 1)} + 1). One begins by dividing the integers into 2·3^{7·(2·3^7 + 1)} + 1 groups of 7(2·3^7 + 1) integers each; of the first 3^{7·(2·3^7 + 1)} + 1 groups, two must be colored identically. - -Divide each of these two groups into 2·3^7 + 1 subgroups of 7 integers each; of the first 3^7 + 1 subgroups in each group, two of the subgroups must be colored identically. Within each of these identical subgroups, two of the first four integers must be the same color, say red; this implies either a red progression or an element of a different color, say blue, in the same subgroup. - -Since we have two identically-colored subgroups, there is a third subgroup, still in the same group that contains an element which, if either red or blue, would complete a red or blue progression, by a construction analogous to the one for W(2, 3). Suppose that this element is green. Since there is a group that is colored identically, it must contain copies of the red, blue, and green elements we have identified; we can now find a pair of red elements, a pair of blue elements, and a pair of green elements that 'focus' on the same integer, so that whatever color it is, it must complete a progression. - -The proof for W(2, 3) depends essentially on proving that W(32, 2) ≤ 33. We divide the integers {1,...,325} into 65 'blocks', each of which can be colored in 32 different ways, and then show that two blocks of the first 33 must be the same color, and there is a block colored the opposite way. Similarly, the proof for W(3, 3) depends on proving that -$$ -W(3^{7(2 \cdot 3^7+1)},2) \leq 3^{7(2 \cdot 3^7+1)}+1. -$$ - -By a double induction on the number of colors and the length of the progression, the theorem is proved in general. - -A D-dimensional arithmetic progression (AP) consists of numbers of the form: -$$ - a + i_1 s_1 + i_2 s_2 + \cdots + i_D s_D -$$ - -where a is the basepoint, the s's are positive step-sizes, and the i's range from 0 to L-1. A d-dimensional AP is homogeneous for some coloring when it is all the same color.
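For the one-dimensional case, the small claims above are easy to confirm by exhaustive search. The following is a brief illustrative sketch (in Python; ours, not part of the original article) that tests a coloring for a homogeneous, i.e. single-colored, 3-term progression, checks the coloring of {1, ..., 8} given earlier, and verifies W(2, 3) = 9:

```python
from itertools import product

def has_monochromatic_ap(coloring, k=3):
    """True if some k-term arithmetic progression in 1..len(coloring) is
    entirely one color (coloring[i] is the color of the integer i + 1)."""
    n = len(coloring)
    for start in range(n):
        for step in range(1, n):
            last = start + (k - 1) * step
            if last >= n:
                break
            if len({coloring[start + i * step] for i in range(k)}) == 1:
                return True
    return False

# The coloring of {1, ..., 8} described above: 1, 4, 5, 8 blue; 2, 3, 6, 7 red.
assert not has_monochromatic_ap("BRRBBRRB")
# Every 2-coloring of {1, ..., 9} contains a monochromatic 3-term
# progression, confirming W(2, 3) = 9.
assert all(has_monochromatic_ap(c) for c in product("BR", repeat=9))
```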
- -A D-dimensional arithmetic progression with benefits is all numbers of the form above, but where you add on some of the "boundary" of the arithmetic progression, i.e. some of the indices i's can be equal to L. The sides you tack on are ones where the first k i's are equal to L, and the remaining i's are less than L. - -The boundaries of a D-dimensional AP with benefits are these additional arithmetic progressions of dimension $d-1, d-2, d-3, d-4$, down to 0. The 0-dimensional arithmetic progression is the single point at index value $(L, L, L, L, \cdots, L)$. A D-dimensional AP with benefits is homogeneous when each of the boundaries is individually homogeneous, but different boundaries do not necessarily have to have the same color. - -Next define the quantity MinN(L, D, N) to be the least integer so that any assignment of N colors to an interval of length MinN or more necessarily contains a homogeneous D-dimensional arithmetical progression with benefits. - -The goal is to bound the size of MinN. Note that MinN(L,1,N) is an upper bound for Van der Waerden's number. There are two induction steps, as follows: - -Assume MinN is known for a given length L for all dimensions of arithmetic progressions with benefits up to D. This formula gives a bound on MinN when you increase the dimension to D+1: - -let $ M = {\mathrm MinN}(L,D,n)$, then -$$ - {\mathrm MinN}(L, D+1 , n) \le M \cdot {\mathrm MinN}(L,1,n^M) -$$ - -{{Math_proof|First, if you have an n-coloring of the interval 1...I, you can define a block coloring of k-size blocks. Just consider each sequence of k colors in each k block to define a unique color. Call this k-blocking an n-coloring. k-blocking an n-coloring of length l produces an n^k coloring of length l/k. - -So given an n-coloring of an interval I of size $M \cdot MinN(L,1,n^M)$ you can M-block it into an n^M coloring of length $MinN(L,1,n^M)$. But that means, by the definition of MinN, that you can find a 1-dimensional arithmetic sequence (with benefits) of length L in the block coloring, which is a sequence of blocks equally spaced, which are all the same block-color, i.e. you have a bunch of blocks of length M in the original sequence, which are equally spaced, which have exactly the same sequence of colors inside. - -Now, by the definition of M, you can find a d-dimensional arithmetic sequence with benefits in any one of these blocks, and since all of the blocks have the same sequence of colors, the same d-dimensional AP with benefits appears in all of the blocks, just by translating it from block to block. This is the definition of a d+1 dimensional arithmetic progression, so you have a homogeneous d+1 dimensional AP. The new stride parameter s_{D+1} is defined to be the distance between the blocks. - -But you need benefits. The boundaries you get now are all old boundaries, plus their translations into identically colored blocks, because i_{D+1} is always less than L. The only boundary which is not like this is the 0-dimensional point when $i_1=i_2=\cdots=i_{D+1}=L$. This is a single point, and is automatically homogeneous. - -}} - -Assume MinN is known for one value of L and all possible dimensions D. Then you can bound MinN for length L+1. -$$ -{\mathrm MinN}(L+1,1,n) \le 2{\mathrm MinN}(L,n,n) -$$ - -{{Math_proof|Given an n-coloring of an interval of size MinN(L,n,n), by definition, you can find an arithmetic sequence with benefits of dimension n of length L.
But now, the number of "benefit" boundaries is equal to the number of colors, so one of the homogeneous boundaries, say of dimension k, has to have the same color as another one of the homogeneous benefit boundaries, say the one of dimension p 0$, $H = T^{a + \varepsilon}$ and with as less as possible value of $a > 0$, where $\varepsilon > 0$ is an arbitrarily small number – open two new directions in the investigation of the Riemann zeta function. - -1. For any $\varepsilon > 0$ there exists such $T_0 = T_0(\varepsilon) > 0$ that for $T \geq T_0$ and $H=T^{0.25+\varepsilon}$ the interval $(T,T+H]$ contains a zero of odd order of the function $\zeta\bigl(\tfrac{1}{2}+it\bigr)$. - -2. For any $\varepsilon > 0$ there exist $T_0 = T_0(\varepsilon) > 0$ and $c = c(\varepsilon) > 0$, such that for $T \geq T_0$ and $H=T^{0.5+\varepsilon}$ the inequality $N_0(T+H)-N_0(T) \geq cH$ is true. - -In 1942 Atle Selberg studied the problem 2 and proved that for any $\varepsilon > 0$ there exists such $T_0 = T_0(\varepsilon) > 0$ and $c = c(\varepsilon) > 0$, such that for $T \geq T_0$ and $H=T^{0.5+\varepsilon}$ the inequality $N(T+H)-N(T) \geq cH\log T$ is true. - -In his turn, Selberg made his conjecture that it's possible to decrease the value of the exponent $a = 0.5$ for $H=T^{0.5+\varepsilon}$ which was proved 42 years later by A.A. Karatsuba. diff --git a/wiki/wikipedia/809.txt b/wiki/wikipedia/809.txt deleted file mode 100644 index 6b6cc287f2414ba7280c056eb7adee98d7756ae2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/809.txt +++ /dev/null @@ -1,19 +0,0 @@ -In mathematics, a toy theorem is a simplified instance (special case) of a more general theorem, which can be useful in providing a handy representation of the general theorem, or a framework for proving the general theorem. One way of obtaining a toy theorem is by introducing some simplifying assumptions in a theorem. - -In many cases, a toy theorem is used to illustrate the claim of a theorem, while in other cases, studying the proofs of a toy theorem (derived from a non-trivial theorem) can provide insight that would be hard to obtain otherwise. - -Toy theorems can also have educational value as well. For example, after presenting a theorem (with, say, a highly non-trivial proof), one can sometimes give some assurance that the theorem really holds, by proving a toy version of the theorem. - -A toy theorem of the Brouwer fixed-point theorem is obtained by restricting the dimension to one. In this case, the Brouwer fixed-point theorem follows almost immediately from the intermediate value theorem. - -Another example of toy theorem is Rolle's theorem, which is obtained from the mean value theorem by equating the function values at the endpoints. - -==See also== - -*Corollary - -*Fundamental theorem - -*Lemma (mathematics) - -*Toy model diff --git a/wiki/wikipedia/81.txt b/wiki/wikipedia/81.txt deleted file mode 100644 index 59923bff06b7708050549bc5fe3fb10b57341cbd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/81.txt +++ /dev/null @@ -1,17 +0,0 @@ -In mathematics, the branching theorem is a theorem about Riemann surfaces. Intuitively, it states that every non-constant holomorphic function is locally a polynomial. - -Let $X$ and $Y$ be Riemann surfaces, and let $f : X \to Y$ be a non-constant holomorphic map. Fix a point $a \in X$ and set $b := f(a) \in Y$. 
Then there exist $k \in \N$ and charts $\psi_{1} : U_{1} \to V_{1}$ on $X$ and $\psi_{2} : U_{2} \to V_{2}$ on $Y$ such that - -* $\psi_{1} (a) = \psi_{2} (b) = 0$; and - -* $\psi_{2} \circ f \circ \psi_{1}^{-1} : V_{1} \to V_{2}$ is $z \mapsto z^{k}.$ - -This theorem gives rise to several definitions: - -* We call $k$ the multiplicity of $f$ at $a$. Some authors denote this $\nu (f, a)$. - -* If $k > 1$, the point $a$ is called a branch point of $f$. - -* If $f$ has no branch points, it is called unbranched. See also unramified morphism. diff --git a/wiki/wikipedia/810.txt b/wiki/wikipedia/810.txt deleted file mode 100644 index 44ed8e2869caa23317dc90bce2acd85769993e73..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/810.txt +++ /dev/null @@ -1,156 +0,0 @@ -In computational complexity theory, the space hierarchy theorems are separation results that show that both deterministic and nondeterministic machines can solve more problems in (asymptotically) more space, subject to certain conditions. For example, a deterministic Turing machine can solve more decision problems in space n log n than in space n. The somewhat weaker analogous theorems for time are the time hierarchy theorems. - -The foundation for the hierarchy theorems lies in the intuition that with either more time or more space comes the ability to compute more functions (or decide more languages). The hierarchy theorems are used to demonstrate that the time and space complexity classes form a hierarchy where classes with tighter bounds contain fewer languages than those with more relaxed bounds. Here we define and prove the space hierarchy theorem. - -The space hierarchy theorems rely on the concept of space-constructible functions. The deterministic and nondeterministic space hierarchy theorems state that for all space-constructible functions f(n), -$$ -\mathsf{SPACE}\left(o(f(n))\right) \subsetneq \mathsf{SPACE}(f(n)) -$$, - -where SPACE stands for either DSPACE or NSPACE, and o refers to the little o notation. - -Formally, a function $f:\mathbb{N} \longrightarrow \mathbb{N}$ is space-constructible if $f(n) \ge \log~n$ and there exists a Turing machine which computes the function $f(n)$ in space $O(f(n))$ when starting with an input $1^n$, where $1^n$ represents a string of n consecutive 1s. Most of the common functions that we work with are space-constructible, including polynomials, exponents, and logarithms. - -For every space-constructible function $f:\mathbb{N} \longrightarrow \mathbb{N}$, there exists a language L that is decidable in space $O(f(n))$ but not in space $o(f(n))$. - -The goal here is to define a language that can be decided in space $O(f(n))$ but not space $o(f(n))$. Here we define the language L: - -
- -L = \{~ (\langle M \rangle, 10^k): M \mbox{ uses space } \le f(|\langle M \rangle, 10^k|) \mbox{ and } M \mbox{ does not accept } (\langle M \rangle, 10^k) ~ \} - -
- -Now, for any machine M that decides a language in space $o(f(n))$, L will differ in at least one spot from the language of M. Namely, for some large enough k, M will use space $\le f(|\langle M \rangle, 10^k|)$ on $(\langle M \rangle, 10^k)$ and will therefore differ at its value. - -On the other hand, L is in $\mathsf{SPACE}(f(n))$. The algorithm for deciding the language L is as follows: - -# On an input x, compute $f(|x|)$ using space-constructibility, and mark off $f(|x|)$ cells of tape. Whenever an attempt is made to use more than $f(|x|)$ cells, reject. - -# If x is not of the form $\langle M \rangle, 10^k$ for some TM M, reject. - -# Simulate M on input x for at most $2^{f(|x|)}$ steps (using $f(|x|)$ space). If the simulation tries to use more than $f(|x|)$ space or more than $2^{f(|x|)}$ operations, then reject. - -# If M accepted x during this simulation, then reject; otherwise, accept. - -Note on step 3: Execution is limited to $2^{f(|x|)}$ steps in order to avoid the case where M does not halt on the input x. That is, the case where M consumes space of only $O(f(|x|))$ as required, but runs for infinite time. - -The above proof holds for the case of PSPACE, whereas we must make some change for the case of NPSPACE. The crucial point is that while on a deterministic TM we may easily invert acceptance and rejection (crucial for step 4), this is not possible on a non-deterministic machine.
    - -For the case of NPSPACE we will first redefine L: - -
- -L = \{~ (\langle M \rangle, 10^k): M \mbox{ uses space } \le f(|\langle M \rangle, 10^k|) \mbox{ and } M \mbox{ accepts } (\langle M \rangle, 10^k) ~ \} - -
- -Now, we need to change the algorithm to accept L by modifying step 4 to: - -* If M accepted x during this simulation, then accept; otherwise, reject. - -We will now prove by contradiction that L cannot be decided by a TM using $o(f(n))$ cells. Assuming L can be decided by some TM M using $o(f(n))$ cells, and following from the Immerman–Szelepcsényi theorem, $\overline L$ can also be determined by a TM (which we will call $\overline M$) using $o(f(n))$ cells. Here lies the contradiction, therefore our assumption must be false: - -# If $w = (\langle \overline M \rangle, 10^k)$ (for some large enough k) is not in $\overline L$ then M will accept it, therefore $\overline M$ rejects w, therefore w is in $\overline L$ (contradiction). - -# If $w = (\langle \overline M \rangle, 10^k)$ (for some large enough k) is in $\overline L$ then M will reject it, therefore $\overline M$ accepts w, therefore w is not in $\overline L$ (contradiction). - -The space hierarchy theorem is stronger than the analogous time hierarchy theorems in several ways: - -* It only requires s(n) to be at least log n instead of at least n. - -* It can separate classes with any asymptotic difference, whereas the time hierarchy theorem requires them to be separated by a logarithmic factor. - -* It only requires the function to be space-constructible, not time-constructible. - -It seems to be easier to separate classes in space than in time. Indeed, whereas the time hierarchy theorem has seen little remarkable improvement since its inception, the nondeterministic space hierarchy theorem has seen at least one important improvement by Viliam Geffert in his 2003 paper "Space hierarchy theorem revised". This paper made several generalizations of the theorem: - -* It relaxes the space-constructibility requirement. Instead of merely separating the union classes $\mathsf{DSPACE}(O(s(n)))$ and $\mathsf{DSPACE}(o(s(n)))$, it separates $\mathsf{DSPACE}(f(n))$ from $\mathsf{DSPACE}(g(n))$ where f(n) is an arbitrary O(s(n)) function and g(n) is a computable o(s(n)) function. These functions need not be space-constructible or even monotone increasing. - -* It identifies a unary language, or tally language, which is in one class but not the other. In the original theorem, the separating language was arbitrary. - -* It does not require s(n) to be at least log n; it can be any nondeterministically fully space-constructible function. - -If space is measured as the number of cells used regardless of alphabet size, then $\mathsf{SPACE}(f(n)) = \mathsf{SPACE}(O(f(n)))$ because one can achieve any linear compression by switching to a larger alphabet. However, by measuring space in bits, a much sharper separation is achievable for deterministic space. Instead of being defined up to a multiplicative constant, space is now defined up to an additive constant. However, because any constant amount of external space can be saved by storing the contents into the internal state, we still have $\mathsf{SPACE}(f(n)) = \mathsf{SPACE}(f(n)+O(1))$. - -Assume that f is space-constructible. SPACE is deterministic. - -* For a wide variety of sequential computational models, including for Turing machines, SPACE(f(n)-ω(log(f(n)+n))) ⊊ SPACE(f(n)). This holds even if SPACE(f(n)-ω(log(f(n)+n))) is defined using a different computational model than $\mathsf{SPACE}(f(n))$ because the different models can simulate each other with $O(\log(f(n)+n))$ space overhead. - -* For certain computational models, we even have SPACE(f(n)-ω(1)) ⊊ SPACE(f(n)).
In particular, this holds for Turing machines if we fix the alphabet, the number of heads on the input tape, the number of heads on the worktape (using a single worktape), and add delimiters for the visited portion of the worktape (that can be checked without increasing space usage). SPACE(f(n)) does not depend on whether the worktape is infinite or semi-infinite. We can also have a fixed number of worktapes if f(n) is either a SPACE-constructible tuple giving the per-tape space usage, or a SPACE(f(n)-ω(log(f(n))))-constructible number giving the total space usage (not counting the overhead for storing the length of each tape). - -The proof is similar to the proof of the space hierarchy theorem, but with two complications: The universal Turing machine has to be space-efficient, and the reversal has to be space-efficient. One can generally construct universal Turing machines with $O(\log(space))$ space overhead, and under appropriate assumptions, just O(1) space overhead (which may depend on the machine being simulated). For the reversal, the key issue is how to detect if the simulated machine rejects by entering an infinite (space-constrained) loop. Simply counting the number of steps taken would increase space consumption by about f(n). At the cost of a potentially exponential time increase, loops can be detected space-efficiently as follows: - -Modify the machine to erase everything and go to a specific configuration A on success. Use depth-first search to determine whether A is reachable in the space bound from the starting configuration. The search starts at A and goes over configurations that lead to A. Because of determinism, this can be done in place and without going into a loop. - -We can also determine whether the machine exceeds a space bound (as opposed to looping within the space bound) by iterating over all configurations about to exceed the space bound and checking (again using depth-first search) whether the initial configuration leads to any of them. - -For any two functions $f_1, f_2: \mathbb{N} \longrightarrow \mathbb{N}$, where $f_1(n)$ is $o(f_2(n))$ and $f_2$ is space-constructible, $\mathsf{SPACE}(f_1(n)) \subsetneq \mathsf{SPACE}(f_2(n))$. - -This corollary lets us separate various space complexity classes. - -The function $n^k$ is space-constructible for any natural number k. Therefore for any two natural numbers $k_1 < k_2$ we can prove $\mathsf{SPACE}(n^{k_1}) \subsetneq \mathsf{SPACE}(n^{k_2})$. We can extend this idea for real numbers in the following corollary. This demonstrates the detailed hierarchy within the PSPACE class. - -For any two nonnegative real numbers $a_1 < a_2$, $\mathsf{SPACE}(n^{a_1}) \subsetneq \mathsf{SPACE}(n^{a_2})$. - -NL ⊊ PSPACE. - -Savitch's theorem shows that $\mathsf{NL} \subseteq \mathsf{SPACE}(\log^2n)$, while the space hierarchy theorem shows that $\mathsf{SPACE}(\log^2n) \subsetneq \mathsf{SPACE}(n)$. Thus we get this corollary along with the fact that TQBF ∉ NL since TQBF is PSPACE-complete. - -This could also be proven using the non-deterministic space hierarchy theorem to show that NL ⊊ NPSPACE, and using Savitch's theorem to show that PSPACE = NPSPACE. - -PSPACE ⊊ EXPSPACE. - -This last corollary shows the existence of decidable problems that are intractable. In other words, their decision procedures must use more than polynomial space. - -There are problems in PSPACE requiring an arbitrarily large exponent to solve; therefore PSPACE does not collapse to DSPACE(n^k) for some constant k.
diff --git a/wiki/wikipedia/811.txt b/wiki/wikipedia/811.txt deleted file mode 100644 index 5482fc11241aa0071acd8ee8587708dbac444f78..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/811.txt +++ /dev/null @@ -1,378 +0,0 @@ -In mathematics, the Euclidean algorithm, or Euclid's algorithm, is an efficient method for computing the greatest common divisor (GCD) of two integers (numbers), the largest number that divides them both without a remainder. It is named after the ancient Greek mathematician Euclid, who first described it in his Elements (c. 300 BC). - -It is an example of an algorithm, a step-by-step procedure for performing a calculation according to well-defined rules, - -and is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations. - -The Euclidean algorithm is based on the principle that the greatest common divisor of two numbers does not change if the larger number is replaced by its difference with the smaller number. For example, 21 is the GCD of 252 and 105 (as 252 = 21 × 12 and 105 = 21 × 5), and the same number 21 is also the GCD of 105 and 252 − 105 = 147. Since this replacement reduces the larger of the two numbers, repeating this process gives successively smaller pairs of numbers until the two numbers become equal. When that occurs, they are the GCD of the original two numbers. By reversing the steps or using the extended Euclidean algorithm, the GCD can be expressed as a linear combination of the two original numbers, that is the sum of the two numbers, each multiplied by an integer (for example, 21 = 5 × 105 + (−2) × 252). The fact that the GCD can always be expressed in this way is known as Bézout's identity. - -The version of the Euclidean algorithm described above (and by Euclid) can take many subtraction steps to find the GCD when one of the given numbers is much bigger than the other. A more efficient version of the algorithm shortcuts these steps, instead replacing the larger of the two numbers by its remainder when divided by the smaller of the two (with this version, the algorithm stops when reaching a zero remainder). With this improvement, the algorithm never requires more steps than five times the number of digits (base 10) of the smaller integer. This was proven by Gabriel Lamé in 1844, and marks the beginning of computational complexity theory. Additional methods for improving the algorithm's efficiency were developed in the 20th century. - -The Euclidean algorithm has many theoretical and practical applications. It is used for reducing fractions to their simplest form and for performing division in modular arithmetic. Computations using this algorithm form part of the cryptographic protocols that are used to secure internet communications, and in methods for breaking these cryptosystems by factoring large composite numbers. The Euclidean algorithm may be used to solve Diophantine equations, such as finding numbers that satisfy multiple congruences according to the Chinese remainder theorem, to construct continued fractions, and to find accurate rational approximations to real numbers. Finally, it can be used as a basic tool for proving theorems in number theory such as Lagrange's four-square theorem and the uniqueness of prime factorizations. 
The original algorithm was described only for natural numbers and geometric lengths (real numbers), but the algorithm was generalized in the 19th century to other types of numbers, such as Gaussian integers and polynomials of one variable. This led to modern abstract algebraic notions such as Euclidean domains. - -The Euclidean algorithm calculates the greatest common divisor (GCD) of two natural numbers a and b. The greatest common divisor g is the largest natural number that divides both a and b without leaving a remainder. Synonyms for the GCD include the greatest common factor (GCF), the highest common factor (HCF), the highest common divisor (HCD), and the greatest common measure (GCM). The greatest common divisor is often written as gcd(a, b) or, more simply, as (a, b), although the latter notation is ambiguous, also used for concepts such as an ideal in the ring of integers, which is closely related to GCD. - -If gcd(a, b) = 1, then a and b are said to be coprime (or relatively prime). This property does not imply that a or b are themselves prime numbers. For example, neither 6 nor 35 is a prime number, since they both have two prime factors: 6 = 2 × 3 and 35 = 5 × 7. Nevertheless, 6 and 35 are coprime. No natural number other than 1 divides both 6 and 35, since they have no prime factors in common. - -Let g = gcd(a, b). Since a and b are both multiples of g, they can be written a = mg and b = ng, and there is no larger number G > g for which this is true. The natural numbers m and n must be coprime, since any common factor could be factored out of m and n to make g greater. Thus, any other number c that divides both a and b must also divide g. The greatest common divisor g of a and b is the unique (positive) common divisor of a and b that is divisible by any other common divisor c. - -The GCD can be visualized as follows. Consider a rectangular area a by b, and any common divisor c that divides both a and b exactly. The sides of the rectangle can be divided into segments of length c, which divides the rectangle into a grid of squares of side length c. The greatest common divisor g is the largest value of c for which this is possible. For illustration, a 24-by-60 rectangular area can be divided into a grid of: 1-by-1 squares, 2-by-2 squares, 3-by-3 squares, 4-by-4 squares, 6-by-6 squares or 12-by-12 squares. Therefore, 12 is the greatest common divisor of 24 and 60. A 24-by-60 rectangular area can be divided into a grid of 12-by-12 squares, with two squares along one edge (24/12 = 2) and five squares along the other (60/12 = 5). - -The GCD of two numbers a and b is the product of the prime factors shared by the two numbers, where the same prime factor can be used multiple times, but only as long as the product of these factors divides both a and b. For example, since 1386 can be factored into 2 × 3 × 3 × 7 × 11, and 3213 can be factored into 3 × 3 × 3 × 7 × 17, the greatest common divisor of 1386 and 3213 equals 63 = 3 × 3 × 7, the product of their shared prime factors. If two numbers have no prime factors in common, their greatest common divisor is 1 (obtained here as an instance of the empty product); in other words, they are coprime. A key advantage of the Euclidean algorithm is that it can find the GCD efficiently without having to compute the prime factors. Factorization of large integers is believed to be a computationally very difficult problem, and the security of many widely used cryptographic protocols is based upon its infeasibility.
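Both descriptions of the GCD above are easy to check numerically. The following is a minimal sketch (in Python; `prime_factors` is an illustrative helper of ours, not a library function) comparing the shared-prime-factor description with the value the standard library computes via the Euclidean algorithm:

```python
from collections import Counter
from math import gcd

def prime_factors(n):
    """Prime factorization by trial division (adequate for small examples)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# gcd(1386, 3213) as the product of the shared prime factors 3, 3 and 7.
shared = Counter(prime_factors(1386)) & Counter(prime_factors(3213))
product_of_shared = 1
for prime in shared.elements():
    product_of_shared *= prime

assert product_of_shared == gcd(1386, 3213) == 63
# The 24-by-60 rectangle is tiled exactly by 12-by-12 squares.
assert gcd(24, 60) == 12
```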
- -Another definition of the GCD is helpful in advanced mathematics, particularly ring theory. - -g = gcd(a, b) = gcd(b, r0) = gcd(r0, r1) = … = gcd(rN−2, rN−1) = rN−1. - -For illustration, the Euclidean algorithm can be used to find the greatest common divisor of a = 1071 and b = 462. To begin, multiples of 462 are subtracted from 1071 until the remainder is less than 462. Two such multiples can be subtracted (q0 = 2), leaving a remainder of 147: - -1071 = 2 × 462 + 147. - -Then multiples of 147 are subtracted from 462 until the remainder is less than 147. Three multiples can be subtracted (q1 = 3), leaving a remainder of 21: - -462 = 3 × 147 + 21. - -Then multiples of 21 are subtracted from 147 until the remainder is less than 21. Seven multiples can be subtracted (q2 = 7), leaving no remainder: - -147 = 7 × 21 + 0. - -Since the last remainder is zero, the algorithm ends with 21 as the greatest common divisor of 1071 and 462. This agrees with the gcd(1071, 462) found by prime factorization above. In tabular form, the steps are: - -The Euclidean algorithm can be visualized in terms of the tiling analogy given above for the greatest common divisor. Assume that we wish to cover an a-by-b rectangle with square tiles exactly, where a is the larger of the two numbers. We first attempt to tile the rectangle using b-by-b square tiles; however, this leaves an r0-by-b residual rectangle untiled, where r0 < b. We then attempt to tile the residual rectangle with r0-by-r0 square tiles. This leaves a second residual rectangle r1-by-r0, which we attempt to tile using r1-by-r1 square tiles, and so on. The sequence ends when there is no residual rectangle, i.e., when the square tiles cover the previous residual rectangle exactly. The length of the sides of the smallest square tile is the GCD of the dimensions of the original rectangle. For example, the smallest square tile in the adjacent figure is 21-by-21 (shown in red), and 21 is the GCD of 1071 and 462, the dimensions of the original rectangle (shown in green). - -At every step k, the Euclidean algorithm computes a quotient qk and remainder rk from two numbers rk−1 and rk−2 - -rk−2 = qk rk−1 + rk - -where the rk is non-negative and is strictly less than the absolute value of rk−1. The theorem which underlies the definition of the Euclidean division ensures that such a quotient and remainder always exist and are unique. - -In Euclid's original version of the algorithm, the quotient and remainder are found by repeated subtraction; that is, rk−1 is subtracted from rk−2 repeatedly until the remainder rk is smaller than rk−1. After that rk and rk−1 are exchanged and the process is iterated. Euclidean division reduces all the steps between two exchanges into a single step, which is thus more efficient. Moreover, the quotients are not needed, thus one may replace Euclidean division by the modulo operation, which gives only the remainder. Thus the iteration of the Euclidean algorithm becomes simply - -rk = rk−2 mod rk−1. - -Implementations of the algorithm may be expressed in pseudocode. For example, the division-based version may be programmed as - -function gcd(a, b) - -while b ≠ 0 - -t := b - -b := a mod b - -a := t - -return a - -At the beginning of the kth iteration, the variable b holds the latest remainder rk−1, whereas the variable a holds its predecessor, rk−2. The step b := a mod b is equivalent to the above recursion formula rk ≡ rk−2 mod rk−1. The temporary variable t holds the value of rk−1 while the next remainder rk is being calculated. 
At the end of the loop iteration, the variable b holds the remainder rk, whereas the variable a holds its predecessor, rk−1. - -(If negative inputs are allowed, or if the mod function may return negative values, the last line must be changed into return max(a, −a).) - -In the subtraction-based version, which was Euclid's original version, the remainder calculation (b := a mod b) is replaced by repeated subtraction. Contrary to the division-based version, which works with arbitrary integers as input, the subtraction-based version supposes that the input consists of positive integers and stops when a = b: - -function gcd(a, b) - -while a ≠ b - -if a > b - -a := a − b - -else - -b := b − a - -return a - -The variables a and b alternate holding the previous remainders rk−1 and rk−2. Assume that a is larger than b at the beginning of an iteration; then a equals rk−2, since rk−2 > rk−1. During the loop iteration, a is reduced by multiples of the previous remainder b until a is smaller than b. Then a is the next remainder rk. Then b is reduced by multiples of a until it is again smaller than a, giving the next remainder rk+1, and so on. - -The recursive version is based on the equality of the GCDs of successive remainders and the stopping condition gcd(rN−1, 0) = rN−1. - -function gcd(a, b) - -if b = 0 - -return a - -else - -return gcd(b, a mod b) - -(As above, if negative inputs are allowed, or if the mod function may return negative values, the instruction "return a" must be changed into "return max(a, −a)".) - -For illustration, the gcd(1071, 462) is calculated from the equivalent gcd(462, 1071 mod 462) = gcd(462, 147). The latter GCD is calculated from the gcd(147, 462 mod 147) = gcd(147, 21), which in turn is calculated from the gcd(21, 147 mod 21) = gcd(21, 0) = 21. - -In another version of Euclid's algorithm, the quotient at each step is increased by one if the resulting negative remainder is smaller in magnitude than the typical positive remainder. Previously, the equation - -rk−2 = qk rk−1 + rk - -assumed that |rk−1| > rk > 0. However, an alternative negative remainder ek can be computed: - -rk−2 = (qk + 1) rk−1 + ek - -if rk−1 > 0 or - -rk−2 = (qk - 1) rk−1 + ek - -if rk−1 < 0. - -If rk is replaced by ek when |ek| < |rk|, then one gets a variant of the Euclidean algorithm such that |rk| ≤ |rk−1| / 2 at each step. - -Leopold Kronecker has shown that this version requires the fewest steps of any version of Euclid's algorithm. The Euclidean algorithm itself appears in Euclid's Elements (c. 300 BC), specifically in Book 7 (Propositions 1–2) and Book 10 (Propositions 2–3). In Book 7, the algorithm is formulated for integers, whereas in Book 10, it is formulated for lengths of line segments. (In modern usage, one would say it was formulated there for real numbers. But lengths, areas, and volumes, represented as real numbers in modern usage, are not measured in the same units and there is no natural unit of length, area, or volume; the concept of real numbers was unknown at that time.) The latter algorithm is geometrical. The GCD of two lengths a and b corresponds to the greatest length g that measures a and b evenly; in other words, the lengths a and b are both integer multiples of the length g. - -The algorithm was probably not discovered by Euclid, who compiled results from earlier mathematicians in his Elements. The mathematician and historian B. L. van der Waerden suggests that Book VII derives from a textbook on number theory written by mathematicians in the school of Pythagoras. The algorithm was probably known by Eudoxus of Cnidus (about 375 BC). Centuries later, Euclid's algorithm was discovered independently both in India and in China,
Centuries later, Euclid's algorithm was discovered independently both in India and in China, primarily to solve Diophantine equations that arose in astronomy and in making accurate calendars. In the late 5th century, the Indian mathematician and astronomer Aryabhata described the algorithm as the "pulverizer", perhaps because of its effectiveness in solving Diophantine equations. Although a special case of the Chinese remainder theorem had already been described in the Chinese book Sunzi Suanjing, the general solution was published by Qin Jiushao in his 1247 book Shushu Jiuzhang (數書九章 Mathematical Treatise in Nine Sections). The Euclidean algorithm was first described numerically and popularized in Europe in the second edition of Bachet's Problèmes plaisants et délectables (Pleasant and enjoyable problems, 1624). - -Bézout's identity states that the greatest common divisor g of a and b can be written as a linear combination g = sa + tb for some integers s and t. To see the connection, consider the set of all numbers ua + vb, where u and v are any two integers. Since a and b are both divisible by g, every number in the set is divisible by g. In other words, every number of the set is an integer multiple of g. This is true for every common divisor of a and b. However, unlike other common divisors, the greatest common divisor is a member of the set; by Bézout's identity, choosing u = s and v = t gives g. A smaller common divisor cannot be a member of the set, since every member of the set must be divisible by g. Conversely, any multiple m of g can be obtained by choosing u = ms and v = mt, where s and t are the integers of Bézout's identity. This may be seen by multiplying Bézout's identity by m, - -mg = msa + mtb. - -Therefore, the set of all numbers ua + vb is equivalent to the set of multiples m of g. In other words, the set of all possible sums of integer multiples of two numbers (a and b) is equivalent to the set of multiples of gcd(a, b). The GCD is said to be the generator of the ideal of a and b. This GCD definition led to the modern abstract algebraic concepts of a principal ideal (an ideal generated by a single element) and a principal ideal domain (a domain in which every ideal is a principal ideal). - -Certain problems can be solved using this result. For example, consider two measuring cups of volume a and b. By adding/subtracting u multiples of the first cup and v multiples of the second cup, any volume ua + vb can be measured out. These volumes are all multiples of g = gcd(a, b). - -The integers s and t of Bézout's identity can be computed efficiently using the extended Euclidean algorithm. This extension adds two recursive equations to Euclid's algorithm - -sk = sk−2 − qksk−1 - -tk = tk−2 − qktk−1 - -with the starting values - -s−2 = 1, t−2 = 0 - -s−1 = 0, t−1 = 1. - -Using this recursion, Bézout's integers s and t are given by s = sN−1 and t = tN−1, where rN−1 is the last nonzero remainder and step N is the step on which the algorithm terminates with rN = 0. - -The validity of this approach can be shown by induction. Assume that the recursion formula is correct up to step k − 1 of the algorithm; in other words, assume that - -rj = sj a + tj b - -for all j less than k. The kth step of the algorithm gives the equation - -rk = rk−2 − qkrk−1. - -Since the recursion formula has been assumed to be correct for rk−2 and rk−1, they may be expressed in terms of the corresponding s and t variables - -rk = (sk−2 a + tk−2 b) − qk(sk−1 a + tk−1 b). - -Rearranging this equation yields the recursion formula for step k, as required - -rk = sk a + tk b = (sk−2 − qksk−1) a + (tk−2 − qktk−1) b. - -The integers s and t can also be found using an equivalent matrix method.
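Before turning to the matrix method, the recursion for sk and tk can be transcribed directly into Python. This is a minimal sketch (the function name is ours), using the starting values s−2 = 1, t−2 = 0, s−1 = 0, t−1 = 1 given above:

```
def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) and g = s*a + t*b."""
    s_prev, s_curr = 1, 0   # s_{k-2}, s_{k-1}
    t_prev, t_curr = 0, 1   # t_{k-2}, t_{k-1}
    while b != 0:
        q, r = divmod(a, b)                            # r_k = r_{k-2} - q_k r_{k-1}
        s_prev, s_curr = s_curr, s_prev - q * s_curr   # s_k = s_{k-2} - q_k s_{k-1}
        t_prev, t_curr = t_curr, t_prev - q * t_curr   # t_k = t_{k-2} - q_k t_{k-1}
        a, b = b, r
    return a, s_prev, t_prev

g, s, t = extended_gcd(1071, 462)
assert (g, s * 1071 + t * 462) == (21, 21)   # here s = -3 and t = 7
```

When the loop exits, (s_prev, t_prev) are the coefficients attached to the last nonzero remainder, which is exactly Bézout's identity g = sa + tb.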
The sequence of equations of Euclid's algorithm - - - -\begin{align} - -a & = q_0 b + r_0 \\ - -b & = q_1 r_0 + r_1 \\ - -& \vdots \\ - -r_{N-2} & = q_N r_{N-1} + 0 - -\end{align} - - - -can be written as a product of 2-by-2 quotient matrices multiplying a two-dimensional remainder vector - - - -\begin{pmatrix} a \\ b \end{pmatrix} = - -\begin{pmatrix} q_0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} b \\ r_0 \end{pmatrix} = - -\begin{pmatrix} q_0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} q_1 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} r_0 \\ r_1 \end{pmatrix} = - -\cdots = - -\prod_{i=0}^N \begin{pmatrix} q_i & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} r_{N-1} \\ 0 \end{pmatrix} . - - - -Let M represent the product of all the quotient matrices - - - -\mathbf{M} = \begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix} = - -\prod_{i=0}^N \begin{pmatrix} q_i & 1 \\ 1 & 0 \end{pmatrix} = - -\begin{pmatrix} q_0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} q_1 & 1 \\ 1 & 0 \end{pmatrix} \cdots \begin{pmatrix} q_{N} & 1 \\ 1 & 0 \end{pmatrix} . - - - -This simplifies the Euclidean algorithm to the form - - - -\begin{pmatrix} a \\ b \end{pmatrix} = - -\mathbf{M} \begin{pmatrix} r_{N-1} \\ 0 \end{pmatrix} = - -\mathbf{M} \begin{pmatrix} g \\ 0 \end{pmatrix} . - - - -To express g as a linear sum of a and b, both sides of this equation can be multiplied by the inverse of the matrix M. The determinant of M equals (−1)N+1, since it equals the product of the determinants of the quotient matrices, each of which is negative one. Since the determinant of M is never zero, the vector of the final remainders can be solved using the inverse of M - - - -\begin{pmatrix} g \\ 0 \end{pmatrix} = - -\mathbf{M}^{-1} \begin{pmatrix} a \\ b \end{pmatrix} = - -(-1)^{N+1} \begin{pmatrix} m_{22} & -m_{12} \\ -m_{21} & m_{11} \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} . - - - -Since the top equation gives - -g = (−1)N+1 ( m22 a − m12 b), - -the two integers of Bézout's identity are s = (−1)N+1m22 and t = (−1)Nm12. The matrix method is as efficient as the equivalent recursion, with two multiplications and two additions per step of the Euclidean algorithm. - -Bézout's identity is essential to many applications of Euclid's algorithm, such as demonstrating the unique factorization of numbers into prime factors. To illustrate this, suppose that a number L can be written as a product of two factors u and v, that is, L = uv. If another number w also divides L but is coprime with u, then w must divide v, by the following argument: If the greatest common divisor of u and w is 1, then integers s and t can be found such that - -1 = su + tw . - -by Bézout's identity. Multiplying both sides by v gives the relation - -v = suv + twv = sL + twv . - -Since w divides both terms on the right-hand side, it must also divide the left-hand side, v. This result is known as Euclid's lemma. Specifically, if a prime number divides L, then it must divide at least one factor of L. Conversely, if a number w is coprime to each of a series of numbers a1, a2, ..., an, then w is also coprime to their product, a1 × a2 × ... × an. The sequence of equations can be written in the form - - - -\begin{align} - -\frac a b &= q_0 + \frac{r_0} b \\ - -\frac b {r_0} &= q_1 + \frac{r_1}{r_0} \\ - -\frac{r_0}{r_1} &= q_2 + \frac{r_2}{r_1} \\ - -& \vdots \\ - -\frac{r_{k-2}}{r_{k-1}} &= q_k + \frac{r_k}{r_{k-1}} \\ - -& \vdots \\ - -\frac{r_{N-2}}{r_{N-1}} &= q_N. 
- -\end{align} - - - -The last term on the right-hand side always equals the inverse of the left-hand side of the next equation. Thus, the first two equations may be combined to form -$$ -\frac a b = q_0 + \cfrac 1 {q_1 + \cfrac{r_1}{r_0}} . -$$ - -The third equation may be used to substitute the denominator term r1/r0, yielding -$$ -\frac a b = q_0 + \cfrac 1 {q_1 + \cfrac 1 {q_2 + \cfrac{r_2}{r_1}}}. -$$ - -The final ratio of remainders rk/rk−1 can always be replaced using the next equation in the series, up to the final equation. The result is a continued fraction -$$ -\frac a b = q_0 + \cfrac 1 {q_1 + \cfrac 1 {q_2 + \cfrac{1}{\ddots + \cfrac 1 {q_N}}}} = [ q_0; q_1, q_2, \ldots , q_N ] . -$$ - -In the worked example above, the gcd(1071, 462) was calculated, and the quotients qk were 2, 3 and 7, respectively. Therefore, the fraction 1071/462 may be written -$$ -\frac{1071}{462} = 2 + \cfrac 1 {3 + \cfrac 1 7} = [2; 3, 7] -$$ - -as can be confirmed by calculation. - -Calculating a greatest common divisor is an essential step in several integer factorization algorithms, such as Pollard's rho algorithm, Shor's algorithm, Dixon's factorization method and the Lenstra elliptic curve factorization. The Euclidean algorithm may be used to find this GCD efficiently. Continued fraction factorization uses continued fractions, which are determined using Euclid's algorithm. - -The computational efficiency of Euclid's algorithm has been studied thoroughly. This efficiency can be described by the number of division steps the algorithm requires, multiplied by the computational expense of each step. The first known analysis of Euclid's algorithm is due to A. A. L. Reynaud in 1811, who showed that the number of division steps on input (u, v) is bounded by v; later he improved this to v/2 + 2. In 1841, P. J. E. Finck showed that the number of division steps is at most $2 \log_2 v + 1$, and hence Euclid's algorithm runs in time polynomial in the size of the input. Émile Léger, in 1837, studied the worst case, which is when the inputs are consecutive Fibonacci numbers. If g is the GCD of a and b, then a = mg and b = ng for two coprime numbers m and n. Then - -T(a, b) = T(m, n) - -as may be seen by dividing all the steps in the Euclidean algorithm by g. By the same argument, the number of steps remains the same if a and b are multiplied by a common factor w: T(a, b) = T(wa, wb). Therefore, the number of steps T may vary dramatically between neighboring pairs of numbers, such as T(a, b) and T(a, b + 1), depending on the size of the two GCDs. - -The recursive nature of the Euclidean algorithm gives another equation - -T(a, b) = 1 + T(b, r0) = 2 + T(r0, r1) = … = N + T(rN−2, rN−1) = N + 1 - -where T(x, 0) = 0 by assumption. More precisely, if the Euclidean algorithm requires N steps for the pair a > b, then one has a ≥ FN+2 and b ≥ FN+1. This can be shown by induction. If N = 1, b divides a with no remainder; the smallest natural numbers for which this is true are b = 1 and a = 2, which are F2 and F3, respectively. Now assume that the result holds for all values of N up to M − 1. The first step of the M-step algorithm is a = q0b + r0, and the Euclidean algorithm requires M − 1 steps for the pair b > r0. By induction hypothesis, one has b ≥ FM+1 and r0 ≥ FM. Therefore, a = q0b + r0 ≥ b + r0 ≥ FM+1 + FM = FM+2, - -which is the desired inequality.
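Both the continued-fraction expansion and the Fibonacci worst case can be checked numerically. A small Python sketch (the helper name is ours):

```
def euclid_quotients(a, b):
    """Return the quotients q_0, q_1, ..., q_N of the Euclidean algorithm;
    the number of quotients equals the number of division steps."""
    quotients = []
    while b != 0:
        q, r = divmod(a, b)
        quotients.append(q)
        a, b = b, r
    return quotients

# The quotients are the continued-fraction terms: 1071/462 = [2; 3, 7].
assert euclid_quotients(1071, 462) == [2, 3, 7]

# Consecutive Fibonacci numbers attain the worst case: the pair
# (F_{N+2}, F_{N+1}) requires exactly N division steps.
fib = [1, 1]                       # fib[k] = F_{k+1}
while len(fib) < 20:
    fib.append(fib[-1] + fib[-2])
for N in range(1, 18):
    assert len(euclid_quotients(fib[N + 1], fib[N])) == N
```

The Fibonacci check matches the bound a ≥ FN+2 and b ≥ FN+1 established by the induction above.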
- -This proof, published by Gabriel Lamé in 1844, represents the beginning of computational complexity theory, and also the first practical application of the Fibonacci numbers. A second average T(a) can be defined as the mean number of steps when a is held fixed and the other argument b is chosen uniformly from 1 to a; this average can be approximated by -$$ -T(a) \approx C + \frac{12}{\pi^2} \ln 2 \left(\ln a - \sum_{d \mid a} \frac{\Lambda(d)} d\right) -$$ - -where C is a constant (Porter's constant) and Λ(d) is the Mangoldt function. - -A third average Y(n) is defined as the mean number of steps required when both a and b are chosen randomly (with uniform distribution) from 1 to n. - -For comparison, Euclid's original subtraction-based algorithm can be much slower. A single integer division is equivalent to the quotient q number of subtractions. If the ratio of a and b is very large, the quotient is large and many subtractions will be required. On the other hand, it has been shown that the quotients are very likely to be small integers. The probability of a given quotient q is approximately ln|u/(u − 1)| where u = (q + 1)^2. For illustration, the probability of a quotient of 1, 2, 3, or 4 is roughly 41.5%, 17.0%, 9.3%, and 5.9%, respectively. Since the operation of subtraction is faster than division, particularly for large numbers, the subtraction-based Euclid's algorithm is competitive with the division-based version. This is exploited in the binary version of Euclid's algorithm. - -Combining the estimated number of steps with the estimated computational expense per step shows that Euclid's algorithm grows quadratically (h^2) with the average number of digits h in the initial two numbers a and b. Let h0, h1, ..., hN−1 represent the number of digits in the successive remainders r0, r1, ..., rN−1. Since the number of steps N grows linearly with h, the running time is bounded by -$$ -O\Big(\sum_{i<N} h_i (h_i - h_{i+1} + 2)\Big) \subseteq O\Big(h \sum_{i<N} (h_i - h_{i+1} + 2)\Big) \subseteq O(h(h_0 + 2N)) \subseteq O(h^2). -$$ - -Euclid's algorithm is widely used in practice, especially for small numbers, due to its simplicity. For comparison, the efficiency of alternatives to Euclid's algorithm may be determined. - -One inefficient approach to finding the GCD of two natural numbers a and b is to calculate all their common divisors; the GCD is then the largest common divisor. The common divisors can be found by dividing both numbers by successive integers from 2 to the smaller number b. The number of steps of this approach grows linearly with b, or exponentially in the number of digits. Another inefficient approach is to find the prime factors of one or both numbers. As noted above, the GCD equals the product of the prime factors shared by the two numbers a and b. - -A more efficient alternative is the binary GCD algorithm, which replaces division by the faster operations of shifting and subtraction. Additional efficiency can be gleaned by examining only the leading digits of the two numbers a and b. The binary algorithm can be extended to other bases (k-ary algorithms), with up to fivefold increases in speed. Lehmer's GCD algorithm uses the same general principle as the binary algorithm to speed up GCD computations in arbitrary bases. - -A recursive approach for very large integers (with more than 25,000 digits) leads to quasilinear integer GCD algorithms, such as those of Schönhage, and Stehlé and Zimmermann. These algorithms exploit the 2×2 matrix form of the Euclidean algorithm given above. These quasilinear methods generally scale as O(h (log h)^2 log log h). - -Although the Euclidean algorithm is used to find the greatest common divisor of two natural numbers (positive integers), it may be generalized to the real numbers, and to other mathematical objects, such as polynomials, quadratic integers, and Hurwitz quaternions. The basic procedure is similar to that for integers.
At each step k, a quotient polynomial qk(x) and a remainder polynomial rk(x) are identified to satisfy the recursive equation -$$ -r_{k-2}(x) = q_k(x)r_{k-1}(x) + r_k(x), -$$ - -where r−2(x) = a(x) and r−1(x) = b(x). Each quotient polynomial is chosen such that each remainder is either zero or has a degree that is smaller than the degree of its predecessor: deg[rk(x)] < deg[rk−1(x)]. Since the degree is a nonnegative integer, and since it decreases with every step, the Euclidean algorithm concludes in a finite number of steps. The last nonzero remainder is the greatest common divisor of the original two polynomials, a(x) and b(x). - -For example, consider the following two quartic polynomials, which each factor into two quadratic polynomials - -\begin{align} - -a(x) &= x^4 - 4x^3 + 4x^2 - 3x + 14 = (x^2 - 5x + 7)(x^2 + x + 2) \qquad \text{and}\\ - -b(x) &= x^4 + 8x^3 + 12x^2 + 17x + 6 = (x^2 + 7x + 3)(x^2 + x + 2). - -\end{align} - -Dividing a(x) by b(x) yields a remainder r0(x) = x^3 + (2/3)x^2 + (5/3)x − (2/3). In the next step, b(x) is divided by r0(x) yielding a remainder r1(x) = x^2 + x + 2. Finally, dividing r0(x) by r1(x) yields a zero remainder, indicating that r1(x) is the greatest common divisor polynomial of a(x) and b(x), consistent with their factorization. - -Many of the applications described above for integers carry over to polynomials. The Euclidean algorithm can be used to solve linear Diophantine equations and Chinese remainder problems for polynomials; continued fractions of polynomials can also be defined. - -The polynomial Euclidean algorithm has other applications, such as Sturm chains, a method for counting the zeros of a polynomial that lie inside a given real interval. This in turn has applications in several areas, such as the Routh–Hurwitz stability criterion in control theory. - -Finally, the coefficients of the polynomials need not be drawn from integers, real numbers or even the complex numbers. For example, the coefficients may be drawn from a general field, such as the finite fields GF(p) described above. The corresponding conclusions about the Euclidean algorithm and its applications hold even for such polynomials. - -The Gaussian integers are complex numbers of the form α = u + vi, where u and v are ordinary integers and i is the square root of negative one. By defining an analog of the Euclidean algorithm, Gaussian integers can be shown to be uniquely factorizable. This unique factorization is helpful in many applications, such as deriving all Pythagorean triples or proving Fermat's theorem on sums of two squares. In general, the Euclidean algorithm is convenient in such applications, but not essential; for example, the theorems can often be proven by other arguments. - -The Euclidean algorithm developed for two Gaussian integers α and β is nearly the same as that for ordinary integers, but differs in two respects. As before, the task at each step k is to identify a quotient qk and a remainder rk such that -$$ -r_k = r_{k-2} - q_k r_{k-1}, -$$ - -where r−2 = α and r−1 = β, and where every remainder is strictly smaller than its predecessor in norm: |rk| < |rk−1|. The first difference is that the quotients and remainders are themselves Gaussian integers, and thus are complex numbers. The quotients qk are generally found by rounding the real and imaginary parts of the exact ratio (such as the complex number α/β) to the nearest integers. - -The fundamental theorem of arithmetic applies to any Euclidean domain: Any number from a Euclidean domain can be factored uniquely into irreducible elements. Any Euclidean domain is a unique factorization domain (UFD), although the converse is not true.
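Both generalizations admit short executable sketches. The following Python code is illustrative only (the function names are ours): the polynomial version uses exact rational arithmetic, since the GCD is only determined up to a scalar multiple, and the Gaussian-integer version rounds the real and imaginary parts of the exact ratio as described above:

```
from fractions import Fraction

def poly_rem(a, b):
    """Remainder of polynomial long division (coefficient lists, highest degree first)."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while len(a) >= len(b):
        factor = a[0] / b[0]
        for i, coeff in enumerate(b):
            a[i] -= factor * coeff   # cancel the leading term
        a.pop(0)                     # leading coefficient is now zero
    return a

def strip_zeros(p):
    while p and p[0] == 0:
        p.pop(0)
    return p

def poly_gcd(a, b):
    a, b = strip_zeros(list(a)), strip_zeros(list(b))
    while b:
        a, b = b, strip_zeros(poly_rem(a, b))
    return [c / a[0] for c in a]     # normalize: leading coefficient 1

# The quartic example above: the GCD is x^2 + x + 2.
assert poly_gcd([1, -4, 4, -3, 14], [1, 8, 12, 17, 6]) == [1, 1, 2]

def gaussian_gcd(a, b):
    """Euclidean algorithm for Gaussian integers, given as Python complex numbers."""
    while abs(b) > 0.5:              # components are integers, so this tests b != 0
        ratio = a / b
        q = complex(round(ratio.real), round(ratio.imag))
        a, b = b, a - q * b          # |a - q*b| < |b|, so the loop terminates
    return a

g = gaussian_gcd(8 - 1j, 3 + 11j)    # both arguments share the factor 2 + 3i
assert round(abs(g) ** 2) == 13      # the GCD is 2 + 3i up to a unit (norm 13)
```

Since Gaussian-integer GCDs are determined only up to the units ±1 and ±i, the final test checks the norm rather than a particular representative.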
The first example of a Euclidean domain that is not norm-Euclidean (the ring of integers of the real quadratic field $\mathbf{Q}(\sqrt{D})$ with D = 69) was published in 1994. - -The Euclidean algorithm may be applied to some noncommutative rings such as the set of Hurwitz quaternions. Let α and β represent two elements from such a ring. They have a common right divisor δ if α = ξδ and β = ηδ for some choice of ξ and η in the ring. Similarly, they have a common left divisor if α = dξ and β = dη for some choice of ξ and η in the ring. Since multiplication is not commutative, there are two versions of the Euclidean algorithm, one for right divisors and one for left divisors. - -Most of the results for the GCD carry over to noncommutative numbers. For example, Bézout's identity states that the right gcd(α, β) can be expressed as a linear combination of α and β. In other words, there are numbers σ and τ such that the right GCD $\Gamma_\text{right}$ satisfies -$$ -\Gamma_\text{right} = \sigma\alpha + \tau\beta. -$$ - -The analogous identity for the left GCD is nearly the same: -$$ -\Gamma_\text{left} = \alpha\sigma + \beta\tau. -$$ - -Bézout's identity can be used to solve Diophantine equations. For instance, one of the standard proofs of Lagrange's four-square theorem, that every positive integer can be represented as a sum of four squares, is based on quaternion GCDs in this way. diff --git a/wiki/wikipedia/812.txt b/wiki/wikipedia/812.txt deleted file mode 100644 index a116b410c1757b36c5bd01e4aa301c337b10a454..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/812.txt +++ /dev/null @@ -1,89 +0,0 @@ -In mathematical logic and set theory, an ordinal notation is a partial function mapping the set of all finite sequences of symbols, themselves members of a finite alphabet, to a countable set of ordinals. A Gödel numbering is a function mapping the set of well-formed formulae (a finite sequence of symbols on which the ordinal notation function is defined) of some formal language to the natural numbers. This associates each well-formed formula with a unique natural number, called its Gödel number. If a Gödel numbering is fixed, then the subset relation on the ordinals induces an ordering on well-formed formulae which in turn induces a well-ordering on the subset of natural numbers. A recursive ordinal notation must satisfy the following two additional properties: - -# the subset of natural numbers is a recursive set - -# the induced well-ordering on the subset of natural numbers is a recursive relation - -There are many such schemes of ordinal notations, including schemes by Wilhelm Ackermann, Heinz Bachmann, Wilfried Buchholz, Georg Cantor, Solomon Feferman, Gerhard Jäger, Isles, Pfeiffer, Wolfram Pohlers, Kurt Schütte, Gaisi Takeuti (called ordinal diagrams), and Oswald Veblen. Stephen Cole Kleene has a system of notations, called Kleene's O, which includes ordinal notations but it is not as well behaved as the other systems described here. - -Usually one proceeds by defining several functions from ordinals to ordinals and representing each such function by a symbol. In many systems, such as Veblen's well-known system, the functions are normal functions, that is, they are strictly increasing and continuous in at least one of their arguments, and increasing in other arguments. Another desirable property for such functions is that the value of the function is greater than each of its arguments, so that an ordinal is always being described in terms of smaller ordinals. There are several such desirable properties. Unfortunately, no one system can have all of them since they contradict each other.
- -As usual, we must start off with a constant symbol for zero, "0", which we may consider to be a function of arity zero. This is necessary because there are no smaller ordinals in terms of which zero can be described. The most obvious next step would be to define a unary function, "S", which takes an ordinal to the smallest ordinal greater than it; in other words, S is the successor function. In combination with zero, successor allows one to name any natural number. - -The third function might be defined as one that maps each ordinal to the smallest ordinal that cannot yet be described with the above two functions and previous values of this function. This would map β to ω·β, except when β is a fixed point of that function plus a finite number, in which case one uses ω·(β+1). - -The fourth function would map α to $\omega^\omega \cdot \alpha$, except when α is a fixed point of that plus a finite number, in which case one uses $\omega^\omega \cdot (\alpha+1)$. - -One could continue in this way, but it would give us an infinite number of functions. So instead let us merge the unary functions into a binary function. By transfinite recursion on α, we can use transfinite recursion on β to define ξ(α,β) = the smallest ordinal γ such that α < γ and β < γ and γ is not the value of ξ for any smaller α or for the same α with a smaller β. - -Thus, define ξ-notations as follows: - -*"0" is a ξ-notation for zero. - -*If "A" and "B" are replaced by ξ-notations for α and β in "ξAB", then the result is a ξ-notation for ξ(α,β). - -*There are no other ξ-notations. - -The function ξ is defined for all pairs of ordinals and is one-to-one. It always gives values larger than its arguments and its range is all ordinals other than 0 and the epsilon numbers ($\varepsilon = \omega^{\varepsilon}$). - -One has ξ(α, β) < ξ(γ, δ) if and only if either (α = γ and β < δ) or (α < γ and β < ξ(γ, δ)) or (α > γ and ξ(α, β) ≤ δ). - -With this definition, the first few ξ-notations are: - -"0" for 0. "ξ00" for 1. "ξ0ξ00" for ξ(0,1)=2. "ξξ000" for ξ(1,0)=ω. "ξ0ξ0ξ00" for 3. "ξ0ξξ000" for ω+1. "ξξ00ξ00" for ω·2. "ξξ0ξ000" for $\omega^{\omega}$. "ξξξ0000" for $\omega^{\omega^{\omega}}.$ - -In general, ξ(0,β) = β+1. Meanwhile, ξ(1+α,β) = $\omega^{\omega^{\alpha}} \cdot (\beta+k)$ for k = 0 or 1 or 2, depending on the following special situations:
    - -k = 2 if α is an epsilon number and β is finite.
    - -Otherwise, k = 1 if β is a multiple of $\omega^{\omega^{\alpha+1}}$ plus a finite number.
    - -Otherwise, k = 0. - -The ξ-notations can be used to name any ordinal less than ε0 with an alphabet of only two symbols ("0" and "ξ"). If these notations are extended by adding functions that enumerate epsilon numbers, then they will be able to name any ordinal less than the first epsilon number that cannot be named by the added functions. This last property, that adding symbols within an initial segment of the ordinals gives names within that segment, is called repleteness (after Solomon Feferman). - -There are many different systems for ordinal notation introduced by various authors. It is often quite hard to convert between the different systems. - -"Exponential polynomials" in 0 and ω give a system of ordinal notation for ordinals less than epsilon zero. There are many equivalent ways to write these; instead of exponential polynomials, one can use rooted trees, or nested parentheses, or the system described above. - -The 2-variable Veblen functions can be used to give a system of ordinal notation for ordinals less than the Feferman–Schütte ordinal. The Veblen functions in a finite or transfinite number of variables give systems of ordinal notations for ordinals less than the small and large Veblen ordinals. - -Ackermann described a system of ordinal notation rather weaker than the system described earlier by Veblen. The limit of his system is sometimes called the Ackermann ordinal. - -Bachmann introduced the key idea of using uncountable ordinals to produce new countable ordinals. His original system was rather cumbersome to use as it required choosing a special sequence converging to each ordinal. Later systems of notation introduced by Feferman and others avoided this complication. - -Takeuti described a very powerful system of ordinal notation called "ordinal diagrams", which is hard to understand but was later simplified by Feferman. - -Feferman introduced theta functions, described in Buchholz as follows. For an ordinal α, θα is a function mapping ordinals to ordinals. Often θα(β) is written as θαβ. The set C(α, β) is defined by induction on α to be the set of ordinals that can be generated from 0, $\omega_1, \omega_2, \ldots, \omega_\omega$, together with the ordinals less than β, by the operations of ordinal addition and the functions θξ for ξ<α. The function θγ is then defined to be the function enumerating the ordinals δ with δ∉C(γ,δ). The problem with this system is that ordinal notations and collapsing functions are not identical, and therefore this function does not qualify as an ordinal notation. An associated ordinal notation is not known. - -Buchholz described the following system of ordinal notation as a simplification of Feferman's theta functions. Define: - -*$\Omega_\xi = \omega_\xi$ if ξ > 0, $\Omega_0 = 1$ - -The functions ψv(α) for α an ordinal, v an ordinal at most ω, are defined by induction on α as follows: - -*ψv(α) is the smallest ordinal not in Cv(α) - -where Cv(α) is the smallest set such that - -*Cv(α) contains all ordinals less than Ωv - -*Cv(α) is closed under ordinal addition - -*Cv(α) is closed under the functions ψu (for u≤ω) applied to arguments less than α. - -This system has about the same strength as Feferman's system, as $\theta\varepsilon_{\Omega_v+1}0 = \psi_0(\varepsilon_{\Omega_v+1})$ for v ≤ ω. Yet, while this system is powerful, it does not qualify as an ordinal notation. Buchholz did create an associated ordinal notation, yet it is complicated: the definition is in the main article. - -Kleene described a system of notation for all recursive ordinals (those less than the Church–Kleene ordinal).
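Returning briefly to the ξ-notations defined earlier: the parsing of notation strings and the comparison rule for ξ quoted above are directly executable. A Python sketch (the encoding of "0" as the empty tuple and of "ξAB" as a pair is our own device):

```
def parse(s):
    """Parse a ξ-notation such as 'ξξ000' into nested tuples; '0' becomes ()."""
    def rec(i):
        if s[i] == '0':
            return (), i + 1
        assert s[i] == 'ξ'
        a, i = rec(i + 1)
        b, i = rec(i)
        return (a, b), i
    tree, end = rec(0)
    assert end == len(s)     # reject strings with trailing symbols
    return tree

def lt(x, y):
    """Is x < y?  Implements: ξ(α,β) < ξ(γ,δ) iff (α = γ and β < δ)
    or (α < γ and β < ξ(γ,δ)) or (α > γ and ξ(α,β) ≤ δ)."""
    if x == y:
        return False
    if x == ():
        return True          # zero is the least notation
    if y == ():
        return False
    (a, b), (c, d) = x, y
    if a == c:
        return lt(b, d)
    if lt(a, c):
        return lt(b, y)
    return lt(x, d) or x == d

# The examples from the text, in increasing order: 0, 1, 2, 3, ω, ω+1, ω·2, ω^ω, ω^ω^ω.
names = ["0", "ξ00", "ξ0ξ00", "ξ0ξ0ξ00", "ξξ000", "ξ0ξξ000", "ξξ00ξ00", "ξξ0ξ000", "ξξξ0000"]
for u, v in zip(names, names[1:]):
    assert lt(parse(u), parse(v))
```

Because ξ is one-to-one, structural equality of the parsed trees coincides with equality of the named ordinals, which is what makes this purely syntactic comparison possible.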
Unfortunately, unlike for the other systems described above, in Kleene's system there is in general no effective way to tell whether some natural number represents an ordinal, or whether two numbers represent the same ordinal. However, one can effectively find notations that represent the ordinal sum, product, and power (see ordinal arithmetic) of any two given notations in Kleene's $\mathcal{O}$; and given any notation for an ordinal, there is a recursively enumerable set of notations that contains one element for each smaller ordinal and is effectively ordered. Kleene's $\mathcal{O}$ denotes a canonical (and very non-computable) set of notations. It uses a subset of the natural numbers instead of finite strings of symbols, and is not recursive; therefore, once again, it does not qualify as an ordinal notation. - -Dmytro Taranovsky described on a self-published web page the following general framework to construct a binary function $C$ from a well-ordered set $(S, <)$ equipped with an additional structure $D \subset S \times S$ satisfying a suitable condition explained below. Let $0_S$ denote the least element of $(S, <)$. For an $a \in S$, we put $C_a := \{c \in S : (c, a) \in D\}$, $a+1 := \textrm{min}\{c \in S: a < c\}$ (assume the existence), and $(a) := \{c \in S: c < a\}$. For an $S' \subset S$, we denote by $\textrm{lim}(S') \subset S$ the subset of limits of elements of $S'$ in $(S, <)$. We say that $D$ is a degree for $(S, <)$ if the following hold: - -* $C_{0_S} = S$ - -* $\forall a \in S: a \neq 0_S \implies 0_S \notin C_a$ - -* $\forall a \in \textrm{lim}(S): C_a = \bigcup_{b < a} C_b$ - -* $\forall a \in S: C_{a+1} = \textrm{lim}(C_a) \lor \big(\exists d \in \textrm{lim}(S) \cap (a+1): C_{a+1} = \textrm{lim}(C_a) \cup (d+1)\big)$ - -If $D$ is a degree for $(S, <)$, then we set $C(a, b) := \textrm{min}\{c \in C_a: b < c\}$. In particular, this construction works for a limit ordinal $\eta$ equipped with a degree $D \subset \eta \times \eta$ for $(\eta, \in)$. Since $D$ is not unique, the resulting function $C$ heavily depends on the choice of $D$. Taranovsky introduced several explicit examples of degrees. diff --git a/wiki/wikipedia/813.txt b/wiki/wikipedia/813.txt deleted file mode 100644 index ca28c85ea9dfd5378adee97a591b1315f1d421c7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/813.txt +++ /dev/null @@ -1,23 +0,0 @@ -ConceptDraw MINDMAP is proprietary mind mapping and brainstorming software developed by CS Odessa for Microsoft Windows and Apple macOS operating systems. - -The mind mapping technology of visual thinking was invented by Tony Buzan in the 1960s. - -Along with the traditional practice of hand-drawn mind maps there is a range of special mind mapping software, which is commonly used to create mind maps for purposes of business, project management and knowledge management. - -The first version of ConceptDraw MINDMAP was released in 2001. Since 2008 it has been a part of the ConceptDraw OFFICE software package for the Windows and macOS platforms. Its file formats include: - -* CDMZ - ConceptDraw MINDMAP document - -* CDMM - ConceptDraw MINDMAP v5 and earlier document - -* CDMTZ - ConceptDraw MINDMAP template - -ConceptDraw MINDMAP is cross-platform compatible when running on macOS and Windows operating systems: files created on a computer powered by macOS can be opened and edited on a Windows computer, and vice versa. The developer's end-user license agreement allows for cross-platform installation with a single license. - -Using a standard file format allows interchange among mind maps, project files, and diagrams.
- -ConceptDraw MINDMAP can import OPML files, text outlines, MS Project, MS Word and MS PowerPoint files, along with some mind mapping formats, such as MindManager, XMind and FreeMind. - -Export options include MS Project, MS Word, MS PowerPoint, and MindManager, as well as Adobe PDF, HTML, and a variety of graphics formats. - -Through a set of plug-ins, ConceptDraw MINDMAP is compatible with the Twitter and Evernote services. Since the release of version 9, it is also compatible with Outlook and OneNote from Microsoft. diff --git a/wiki/wikipedia/814.txt b/wiki/wikipedia/814.txt deleted file mode 100644 index a38e05309382449ffcd6799604d1ddd2dd9dfe33..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/814.txt +++ /dev/null @@ -1,58 +0,0 @@ -In algebraic geometry, the theorem on formal functions states the following: - -Let $f: X \to S$ be a proper morphism of noetherian schemes with a coherent sheaf $\mathcal{F}$ on X. Let $S_0$ be a closed subscheme of S defined by $\mathcal{I}$ and $\widehat{X}, \widehat{S}$ formal completions with respect to $X_0 = f^{-1}(S_0)$ and $S_0$. Then for each $p \ge 0$ the canonical (continuous) map: -$$ -(R^p f_* \mathcal{F})^\wedge \to \varprojlim_k R^p f_* \mathcal{F}_k -$$ - -is an isomorphism of (topological) $\mathcal{O}_{\widehat{S}}$-modules, where - -*The left term is $\varprojlim R^p f_* \mathcal{F} \otimes_{\mathcal{O}_S} \mathcal{O}_S/{\mathcal{I}^{k+1}}$. - -*$\mathcal{F}_k = \mathcal{F} \otimes_{\mathcal{O}_S} (\mathcal{O}_S/{\mathcal{I}}^{k+1})$ - -*The canonical map is one obtained by passage to limit. - -The theorem is used to deduce some other important theorems: Stein factorization and a version of Zariski's main theorem that says that a proper birational morphism into a normal variety is an isomorphism. Some other corollaries (with the notations as above) are: - -Corollary: For any $s \in S$, topologically, -$$ -((R^p f_* \mathcal{F})_s)^\wedge \simeq \varprojlim H^p(f^{-1}(s), \mathcal{F}\otimes_{\mathcal{O}_S} (\mathcal{O}_s/\mathfrak{m}_s^k)) -$$ - -where the completion on the left is with respect to $\mathfrak{m}_s$. - -Corollary: Let r be such that $\operatorname{dim} f^{-1}(s) \le r$ for all $s \in S$. Then -$$ -R^i f_* \mathcal{F} = 0, \quad i > r. -$$ - -Corollary: For each $s \in S$, there exists an open neighborhood U of s such that -$$ -R^i f_* \mathcal{F}|_U = 0, \quad i > \operatorname{dim} f^{-1}(s). -$$ - -Corollary: If $f_* \mathcal{O}_X = \mathcal{O}_S$, then $f^{-1}(s)$ is connected for all $s \in S$. - -The theorem also leads to the Grothendieck existence theorem, which gives an equivalence between the category of coherent sheaves on a scheme and the category of coherent sheaves on its formal completion (in particular, it yields algebraizability). - -Finally, it is possible to weaken the hypothesis in the theorem; cf. Illusie. According to Illusie (pg. 204), the proof given in EGA III is due to Serre. The original proof (due to Grothendieck) was never published. - -Let the setting be as in the lede. In the proof one uses the following alternative definition of the canonical map. - -Let $i': \widehat{X} \to X, i: \widehat{S} \to S$ be the canonical maps. Then we have the base change map of $\mathcal{O}_{\widehat{S}}$-modules -$$ -i^* R^q f_* \mathcal{F} \to R^q \widehat{f}_* (i'^* \mathcal{F}), -$$ - -where $\widehat{f}: \widehat{X} \to \widehat{S}$ is induced by $f: X \to S$. Since $\mathcal{F}$ is coherent, we can identify $i'^*\mathcal{F}$ with $\widehat{\mathcal{F}}$.
Since $R^q f_* \mathcal{F}$ is also coherent (as f is proper), doing the same identification, the above reads: -$$ -(R^q f_* \mathcal{F})^\wedge \to R^q \widehat{f}_* \widehat{\mathcal{F}}. -$$ - -Using $f_n: X_n \to S_n$ where $X_n = (X_0, \mathcal{O}_X/\mathcal{J}^{n+1})$ and $S_n = (S_0, \mathcal{O}_S/\mathcal{I}^{n+1})$, one also obtains (after passing to limit): -$$ -R^q \widehat{f}_* \widehat{\mathcal{F}} \to \varprojlim R^q f_* \mathcal{F}_n -$$ - -where $\mathcal{F}_n$ are as before. One can verify that the composition of the two maps is the same as the map in the lede (cf. EGA III-1, section 4). diff --git a/wiki/wikipedia/815.txt b/wiki/wikipedia/815.txt deleted file mode 100644 index 07ae4dbeb6969eadf47b2570099ba2ceb0feb2dc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/815.txt +++ /dev/null @@ -1,5 +0,0 @@ -Sudoku Challenge! is a WiiWare sudoku game developed by Digital Leisure. The game was released in North America on November 24, 2008 and in the PAL region on December 19, 2008 and costs 500 Wii Points. The DSiWare version was released in North America on November 30, 2009 and in the PAL region on May 14, 2010. - -The game includes three difficulty levels, Original and Grand Sudoku game modes (where the player has to complete five intersecting Sudoku boards at the same time), and over 100,000,000 sudoku puzzles. - -WiiWare World gave the game 6 out of 10, considering Sudoku Challenge an acceptable puzzler, but one with room for improvement. The reviewers commented that the presentation was functional and that most players would be satisfied with what was on offer, with the sudoku puzzles varying in difficulty to challenge players of all skill levels. They were also impressed with the number of puzzles included, claiming that, at 10 minutes a puzzle, it would take a player at least 1900 years to complete all the puzzles included on the game. diff --git a/wiki/wikipedia/816.txt b/wiki/wikipedia/816.txt deleted file mode 100644 index bec6edb8cf1eec42c36af2a55e89dd155d939410..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/816.txt +++ /dev/null @@ -1,72 +0,0 @@ -In logic, the law of non-contradiction (LNC) (also known as the law of contradiction, principle of non-contradiction (PNC), or the principle of contradiction) states that contradictory propositions cannot both be true in the same sense at the same time; e.g., the two propositions "p is the case" and "p is not the case" are mutually exclusive. Formally this is expressed as the tautology ¬(p ∧ ¬p). The law is not to be confused with the law of excluded middle, which states that at least one of "p is the case" and "p is not the case" holds. - -One reason to have this law is the principle of explosion, which states that anything follows from a contradiction. The law is employed in a reductio ad absurdum proof. - -To express the fact that the law is tenseless and to avoid equivocation, sometimes the law is amended to say "contradictory propositions cannot both be true 'at the same time and in the same sense'". - -It is one of the so-called three laws of thought, along with its complement, the law of excluded middle, and the law of identity. However, no system of logic is built on just these laws, and none of these laws provide inference rules, such as modus ponens or De Morgan's laws. - -The law of non-contradiction and the law of excluded middle create a dichotomy in "logical space", wherein the two parts are "mutually exclusive" and "jointly exhaustive".
The law of non-contradiction is merely an expression of the mutually exclusive aspect of that dichotomy, and the law of excluded middle, an expression of its jointly exhaustive aspect. - -One difficulty in applying the law of non-contradiction is ambiguity in the propositions. For instance, if time is not explicitly specified as part of the propositions A and B, then A may be B at one time, and not at another. A and B may in some cases be made to sound mutually exclusive linguistically even though A may be partly B and partly not B at the same time. However, it is impossible to predicate of the same thing, at the same time, and in the same sense, the absence and the presence of the same fixed quality. - -According to both Plato and Aristotle, Heraclitus was said to have denied the law of non-contradiction. This is quite likely if, as Plato pointed out, the law of non-contradiction does not hold for changing things in the world. If a philosophy of Becoming is not possible without change, then (the potential of) what is to become must already exist in the present object. In "We step and do not step into the same rivers; we are and we are not", both Heraclitus's and Plato's object must simultaneously, in some sense, be both what it now is and have the potential (dynamic) of what it might become. - -Unfortunately, so little remains of Heraclitus' aphorisms that not much about his philosophy can be said with certainty. He seems to have held that strife of opposites is universal both within and without; therefore, both opposite existents or qualities must simultaneously exist, although in some instances in different respects. "The road up and down are one and the same" implies either that the road leads both ways, or that there can be no road at all. This is the logical complement of the law of non-contradiction. According to Heraclitus, change and the constant conflict of opposites constitute the universal logos of nature. - -Personal subjective perceptions or judgments can only be said to be true at the same time in the same respect, in which case the law of non-contradiction must be applicable to personal judgments. - -The most famous saying of Protagoras is: "Man is the measure of all things: of things which are, that they are, and of things which are not, that they are not". However, Protagoras was referring to things that are used by or in some way related to humans. This makes a great difference in the meaning of his aphorism. Properties, social entities, ideas, feelings, judgments, etc. originate in the human mind. However, Protagoras never suggested that man must be the measure of stars or the motion of the stars. - -Parmenides employed an ontological version of the law of non-contradiction to prove that being is and to deny the void, change, and motion. He also similarly disproved contrary propositions. In his poem On Nature, he said, - -The nature of the ‘is’ or what-is in Parmenides is a highly contentious subject. Some have taken it to be whatever exists, some to be whatever is or can be the object of scientific inquiry. - -In Plato's early dialogues, Socrates uses the elenctic method to investigate the nature or definition of ethical concepts such as justice or virtue. Elenctic refutation depends on a dichotomous thesis, one that may be divided into exactly two mutually exclusive parts, only one of which may be true. Then Socrates goes on to demonstrate the contrary of the commonly accepted part using the law of non-contradiction.
According to Gregory Vlastos, the method has the following steps: - -# Socrates' interlocutor asserts a thesis, for example, "Courage is endurance of the soul", which Socrates considers false and targets for refutation. - -# Socrates secures his interlocutor's agreement to further premises, for example, "Courage is a fine thing" and "Ignorant endurance is not a fine thing". - -# Socrates then argues, and the interlocutor agrees, that these further premises imply the contrary of the original thesis, in this case leading to "courage is not endurance of the soul". - -# Socrates then claims that he has shown that his interlocutor's thesis is false and that its negation is true. - -Plato's version of the law of non-contradiction states that "The same thing clearly cannot act or be acted upon in the same part or in relation to the same thing at the same time, in contrary ways" (The Republic (436b)). In this, Plato carefully phrases three axiomatic restrictions on action or reaction: 1) in the same part, 2) in the same relation, 3) at the same time. The effect is to momentarily create a frozen, timeless state, somewhat like figures frozen in action on the frieze of the Parthenon. - -This way, he accomplishes two essential goals for his philosophy. First, he logically separates the Platonic world of constant change from the formally knowable world of momentarily fixed physical objects. Second, he provides the conditions for the dialectic method to be used in finding definitions, as for example in the Sophist. So Plato's law of non-contradiction is the empirically derived necessary starting point for all else he has to say. - -In contrast, Aristotle reverses Plato's order of derivation. Rather than starting with experience, Aristotle begins a priori with the law of non-contradiction as the fundamental axiom of an analytic philosophical system. This axiom then necessitates the fixed, realist model. Now, he starts with much stronger logical foundations than Plato's non-contrariety of action in reaction to conflicting demands from the three parts of the soul. - -The traditional source of the law of non-contradiction is Aristotle's Metaphysics, where he gives three different versions. - -#ontological: "It is impossible that the same thing belong and not belong to the same thing at the same time and in the same respect." (1005b19-20) - -#psychological: "No one can believe that the same thing can (at the same time) be and not be." (1005b23-24) - -#logical (aka the medieval Lex Contradictoriarum): "The most certain of all basic principles is that contradictory propositions are not true simultaneously." (1011b13-14) - -Aristotle attempts several proofs of this law. He first argues that every expression has a single meaning (otherwise we could not communicate with one another). This rules out the possibility that by "to be a man", "not to be a man" is meant. But "man" means "two-footed animal" (for example), and so if anything is a man, it is necessary (by virtue of the meaning of "man") that it must be a two-footed animal, and so it is impossible at the same time for it not to be a two-footed animal. Thus "it is not possible to say truly at the same time that the same thing is and is not a man" (Metaphysics 1006b 35). Another argument is that anyone who believes something cannot believe its contradiction (1008b). Of one who denies the law, Aristotle asks: - -"Why does he not just get up first thing and walk into a well or, if he finds one, over a cliff? In fact, he seems rather careful about cliffs and wells."
- -Avicenna's commentary on the Metaphysics illustrates the common view that the law of non-contradiction "and their like are among the things that do not require our elaboration." Avicenna’s words for "the obdurate" are quite facetious: "he must be subjected to the conflagration of fire, since 'fire' and 'not fire' are one. Pain must be inflicted on him through beating, since 'pain' and 'no pain' are one. And he must be denied food and drink, since eating and drinking and the abstention from both are one [and the same]." - -The law of non-contradiction is found in ancient Indian logic as a meta-rule in the Shrauta Sutras, the grammar of Pāṇini, and the Brahma Sutras attributed to Vyasa. It was later elaborated on by medieval commentators such as Madhvacharya. - -Leibniz and Kant both used the law of non-contradiction to define the difference between analytic and synthetic propositions. For Leibniz, analytic statements follow from the law of non-contradiction, and synthetic ones from the principle of sufficient reason. - -The principle was stated as a theorem of propositional logic by Russell and Whitehead in Principia Mathematica as: -$$ -\mathbf{*3\cdot24}. \ \ \vdash. \thicksim(p.\thicksim p) -$$ - -Graham Priest advocates the view that under some conditions, some statements can be both true and false simultaneously, or may be true and false at different times; this position is known as dialetheism. Dialetheism arises from formal logical paradoxes, such as the liar paradox and Russell's paradox. - -As is true of all axioms of logic, the law of non-contradiction is alleged to be neither verifiable nor falsifiable, on the grounds that any proof or disproof must use the law itself prior to reaching the conclusion. In other words, in order to verify or falsify the laws of logic one must resort to logic as a weapon, an act which would essentially be self-defeating. Since the early 20th century, certain logicians have proposed logics that deny the validity of the law. - -Logics known as "paraconsistent" are inconsistency-tolerant in that, from P together with ¬P, it does not follow that every proposition can be derived. Nevertheless, not all paraconsistent logics deny the law of non-contradiction, and some such logics even prove it. - -Some, such as David Lewis, have objected to paraconsistent logic on the ground that it is simply impossible for a statement and its negation to be jointly true. A related objection is that "negation" in paraconsistent logic is not really negation; it is merely a subcontrary-forming operator. - -The Fargo episode "The Law of Non-Contradiction", which takes its name from the law, was noted for its several elements relating to the law of non-contradiction, as the episode's main character faces several paradoxes. For example, she is still the acting chief of police while having been demoted from the position, and tries to investigate a man who both was and was not named Ennis Stussy, and who both was and was not her stepfather. It also features the story of a robot who, after having spent millions of years unable to help humanity, is told that he greatly helped mankind all along by observing history.
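The formal statement above, the tautology ¬(p ∧ ¬p) of Principia's ∗3·24, can be checked mechanically by enumerating classical truth values. A minimal Python sketch:

```
from itertools import product

def is_tautology(formula, n_vars):
    """True if the Boolean formula holds under every assignment of truth values."""
    return all(formula(*vals) for vals in product([False, True], repeat=n_vars))

assert is_tautology(lambda p: not (p and not p), 1)   # law of non-contradiction
assert is_tautology(lambda p: p or not p, 1)          # law of excluded middle, for comparison
```

Of course, such a brute-force check presupposes classical two-valued semantics; in the paraconsistent and many-valued logics discussed above, the space of truth values, and hence the verdict, can differ.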
diff --git a/wiki/wikipedia/817.txt b/wiki/wikipedia/817.txt deleted file mode 100644 index 741c11f99a95be04637172b7f5ec4db1bdd7abbc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/817.txt +++ /dev/null @@ -1,156 +0,0 @@ -In mathematics, Muirhead's inequality, named after Robert Franklin Muirhead, also known as the "bunching" method, generalizes the inequality of arithmetic and geometric means. - -For any real vector -$$ -a=(a_1,\dots,a_n) -$$ - -define the "a-mean" [a] of positive real numbers x1, ..., xn by -$$ -[a]=\frac{1}{n!}\sum_\sigma x_{\sigma_1}^{a_1}\cdots x_{\sigma_n}^{a_n}, -$$ - -where the sum extends over all permutations σ of { 1, ..., n }. - -When the elements of a are nonnegative integers, the a-mean can be equivalently defined via the monomial symmetric polynomial $m_a(x_1,\dots,x_n)$ as -$$ -[a] = \frac{k_1!\cdots k_l!}{n!} m_a(x_1,\dots,x_n), -$$ - -where l is the number of distinct elements in a, and k1, ..., kl are their multiplicities. - -Notice that the a-mean as defined above only has the usual properties of a mean (e.g., that the mean of equal numbers equals that common number) if $a_1+\cdots+a_n=1$. In the general case, one can consider instead $[a]^{1/(a_1+\cdots+a_n)}$, which is called a Muirhead mean. - -; Examples - -* For a = (1, 0, ..., 0), the a-mean is just the ordinary arithmetic mean of x1, ..., xn. - -* For a = (1/n, ..., 1/n), the a-mean is the geometric mean of x1, ..., xn. - -* For a = (x, 1-x), the a-mean is the Heinz mean. - -* The Muirhead mean for a = (-1, 0, ..., 0) is the harmonic mean. - -An n × n matrix P is doubly stochastic precisely if both P and its transpose P^T are stochastic matrices. A stochastic matrix is a square matrix of nonnegative real entries in which the sum of the entries in each column is 1. Thus, a doubly stochastic matrix is a square matrix of nonnegative real entries in which the sum of the entries in each row and the sum of the entries in each column is 1. - -Muirhead's inequality states that [a] ≤ [b] for all x such that xi > 0 for every i ∈ { 1, ..., n } if and only if there is some doubly stochastic matrix P for which a = Pb. - -Furthermore, in that case we have [a] = [b] if and only if a = b or all xi are equal. - -The latter condition can be expressed in several equivalent ways; one of them is given below. - -The proof makes use of the fact that every doubly stochastic matrix is a weighted average of permutation matrices (Birkhoff-von Neumann theorem). - -Because of the symmetry of the sum, no generality is lost by sorting the exponents into decreasing order: -$$ -a_1 \geq a_2 \geq \cdots \geq a_n -$$ -$$ -b_1 \geq b_2 \geq \cdots \geq b_n. -$$ - -Then the existence of a doubly stochastic matrix P such that a = Pb is equivalent to the following system of inequalities: - - - -\begin{align} - -a_1 & \leq b_1 \\ - -a_1+a_2 & \leq b_1+b_2 \\ - -a_1+a_2+a_3 & \leq b_1+b_2+b_3 \\ - -& \vdots \\ - -a_1+\cdots +a_{n-1} & \leq b_1+\cdots+b_{n-1} \\ - -a_1+\cdots +a_n & = b_1+\cdots+b_n. - -\end{align} - - - -(The last one is an equality; the others are weak inequalities.) - -The sequence $b_1, \ldots, b_n$ is said to majorize the sequence $a_1, \ldots, a_n$. - -A success in reducing an inequality to this form means that the only condition left to test is whether one exponent sequence ($\alpha_1, \ldots, \alpha_n$) majorizes the other. It is convenient to use a special notation for the sums:
-$$ -\sum_\text{sym} x_1^{\alpha_1} \cdots x_n^{\alpha_n} -$$ - -This notation requires developing every permutation, developing an expression made of n! monomials, for instance: - -\begin{align} - -\sum_\text{sym} x^3 y^2 z^0 &= x^3 y^2 z^0 + x^3 z^2 y^0 + y^3 x^2 z^0 + y^3 z^2 x^0 + z^3 x^2 y^0 + z^3 y^2 x^0 \\ - -&= x^3 y^2 + x^3 z^2 + y^3 x^2 + y^3 z^2 + z^3 x^2 + z^3 y^2 - -\end{align} - -Let -$$ -a_G = \left( \frac 1 n , \ldots , \frac 1 n \right) -$$ - -and -$$ -a_A = ( 1 , 0, 0, \ldots , 0 ). -$$ - -We have - - - -\begin{align} - -a_{A1} = 1 & > a_{G1} = \frac 1 n, \\ - -a_{A1} + a_{A2} = 1 & > a_{G1} + a_{G2} = \frac 2 n, \\ - -& \vdots \\ - -a_{A1} + \cdots + a_{An} & = a_{G1} + \cdots + a_{Gn} = 1. - -\end{align} - - - -Then - -[aA] ≥ [aG], - -which is -$$ -\frac 1 {n!} (x_1^1 \cdot x_2^0 \cdots x_n^0 + \cdots + x_1^0 \cdots x_n^1) (n-1)! \geq \frac 1 {n!} (x_1 \cdot \cdots \cdot x_n)^{1/n} n! -$$ - -yielding the inequality. - -We seek to prove that x2 + y2 ≥ 2xy by using bunching (Muirhead's inequality). - -We transform it in the symmetric-sum notation: -$$ -\sum_ \mathrm{sym} x^2 y^0 \ge \sum_\mathrm{sym} x^1 y^1. -$$ - -The sequence (2, 0) majorizes the sequence (1, 1), thus the inequality holds by bunching. - -Similarly, we can prove the inequality -$$ -x^3+y^3+z^3 \ge 3 x y z -$$ - -by writing it using the symmetric-sum notation as -$$ -\sum_ \mathrm{sym} x^3 y^0 z^0 \ge \sum_\mathrm{sym} x^1 y^1 z^1, -$$ - -which is the same as -$$ - 2 x^3 + 2 y^3 + 2 z^3 \ge 6 x y z. -$$ - -Since the sequence (3, 0, 0) majorizes the sequence (1, 1, 1), the inequality holds by bunching. diff --git a/wiki/wikipedia/818.txt b/wiki/wikipedia/818.txt deleted file mode 100644 index ecdd756bb37e60bf570870d356d54410abba32d5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/818.txt +++ /dev/null @@ -1,29 +0,0 @@ -The Oberwolfach problem is an unsolved problem in mathematics that may be formulated either as a problem of scheduling seating assignments for diners, - -or more abstractly as a problem in graph theory, on the edge cycle covers of complete graphs. It is named after the Mathematical Research Institute of Oberwolfach, where the problem was posed in 1967 by Gerhard Ringel. - -In conferences held at Oberwolfach, it is the custom for the participants to dine together in a room with circular tables, not all the same size, and with assigned seating that rearranges the participants from meal to meal. The Oberwolfach problem asks how to make a seating chart for a given set of tables so that all tables are full at each meal and all pairs of conference participants are seated next to each other exactly once. An instance of the problem can be denoted as $OP(x,y,z,\dots)$ where $x,y,z,\dots$ are the given table sizes. Alternatively, when some table sizes are repeated, they may be denoted using exponential notation; for instance, $OP(5^3)$ describes an instance with three tables of size five. - -Formulated as a problem in graph theory, the pairs of people sitting next to each other at a single meal can be represented as a disjoint union of cycle graphs $C_x+C_y+C_z+\cdots$ of the specified lengths, with one cycle for each of the dining tables. This union of cycles is a 2-regular graph, and every 2-regular graph has this form. If $G$ is this 2-regular graph and has $n$ vertices, the question is whether the complete graph $K_n$ can be represented as an edge-disjoint union of copies of $G$. 
- -In order for a solution to exist, the total number of conference participants (or equivalently, the total capacity of the tables, or the total number of vertices of the given cycle graphs) must be an odd number. For, at each meal, each participant sits next to two neighbors, so the total number of neighbors of each participant must be even, and this is only possible when the total number of participants is odd. The problem has, however, also been extended to even values of $n$ by asking, for those $n$, whether all of the edges of the complete graph except for a perfect matching can be covered by copies of the given 2-regular graph. Like the ménage problem (a different mathematical problem involving seating arrangements of diners and tables), this variant of the problem can be formulated by supposing that the $n$ diners are arranged into $n/2$ married couples, and that the seating arrangements should place each diner next to each other diner except their own spouse exactly once. - -The only instances of the Oberwolfach problem that are known not to be solvable are $OP(3^2)$, $OP(3^4)$, $OP(4,5)$, and $OP(3,3,5)$. It is widely believed that all other instances have a solution, but only special cases have been proven to be solvable. - -The cases for which a solution is known include: - -*All instances $OP(x^y)$ except $OP(3^2)$ and $OP(3^4)$. - -*All instances in which all of the cycles have even length. - -*All instances (other than the known exceptions) with $n\le 60$. - -*All instances for certain choices of $n$, belonging to infinite subsets of the natural numbers. - -*All instances $OP(x,y)$ other than the known exceptions $OP(3,3)$ and $OP(4,5)$. - -Kirkman's schoolgirl problem, of grouping fifteen schoolgirls into rows of three in seven different ways so that each pair of girls appears once in each triple, is a special case of the Oberwolfach problem, $OP(3^5)$. The problem of Hamiltonian decomposition of a complete graph $K_n$ is another special case, $OP(n)$. - -Alspach's conjecture, on the decomposition of a complete graph into cycles of given sizes, is related to the Oberwolfach problem, but neither is a special case of the other. - -If $G$ is a 2-regular graph, with $n$ vertices, formed from a disjoint union of cycles of certain lengths, then a solution to the Oberwolfach problem for $G$ would also provide a decomposition of the complete graph into $(n-1)/2$ copies of each of the cycles of $G$. However, not every decomposition of $K_n$ into this many cycles of each size can be grouped into disjoint cycles that form copies of $G$, and on the other hand not every instance of Alspach's conjecture involves sets of cycles that have $(n-1)/2$ copies of each cycle. diff --git a/wiki/wikipedia/819.txt b/wiki/wikipedia/819.txt deleted file mode 100644 index d1108b9379edbb475ce043bffc05ab6aac353860..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/819.txt +++ /dev/null @@ -1,27 +0,0 @@ -In mathematics, the term essentially unique is used to describe a weaker form of uniqueness, where an object satisfying a property is "unique" only in the sense that all objects satisfying the property are equivalent to each other. The notion of essential uniqueness presupposes some form of "sameness", which is often formalized using an equivalence relation. - -A related notion is a universal property, where an object is not only essentially unique, but unique up to a unique isomorphism (meaning that it has trivial automorphism group). 
In general there can be more than one isomorphism between examples of an essentially unique object. - -At the most basic level, there is an essentially unique set of any given cardinality, whether one labels the elements $\{1,2,3\}$ or $\{a,b,c\}$. - -In this case, the non-uniqueness of the isomorphism (e.g., match 1 to $a$ or 1 to $c$) is reflected in the symmetric group. - -On the other hand, there is an essentially unique ordered set of any given finite cardinality: if one writes $\{1 < 2 < 3\}$ and $\{a< b< c\}$, then the only order-preserving isomorphism is the one which maps 1 to $a$, 2 to $b$, and 3 to $c$. - -The fundamental theorem of arithmetic establishes that the factorization of any positive integer into prime numbers is essentially unique, i.e., unique up to the ordering of the prime factors. - -In the context of classification of groups, there is an essentially unique group containing exactly 2 elements. Similarly, there is also an essentially unique group containing exactly 3 elements: the cyclic group of order three. In fact, regardless of how one chooses to write the three elements and denote the group operation, all such groups can be shown to be isomorphic to each other, and hence are "the same". - -On the other hand, there does not exist an essentially unique group with exactly 4 elements, as there are in this case two non-isomorphic groups in total: the cyclic group of order 4 and the Klein four group. - -There is an essentially unique measure that is translation-invariant, strictly positive and locally finite on the real line. In fact, any such measure must be a constant multiple of Lebesgue measure; specifying that the measure of the unit interval should be 1 then determines the solution uniquely. - -There is an essentially unique two-dimensional, compact, simply connected manifold: the 2-sphere. In this case, it is unique up to homeomorphism. - -In the area of topology known as knot theory, there is an analogue of the fundamental theorem of arithmetic: the decomposition of a knot into a sum of prime knots is essentially unique. - -A maximal compact subgroup of a semisimple Lie group may not be unique, but is unique up to conjugation. - -An object that is the limit or colimit over a given diagram is essentially unique, as there is a unique isomorphism to any other limiting/colimiting object. - -Given the task of using 24-bit words to store 12 bits of information in such a way that 4-bit errors can be detected and 3-bit errors can be corrected, the solution is essentially unique: the extended binary Golay code. diff --git a/wiki/wikipedia/82.txt b/wiki/wikipedia/82.txt deleted file mode 100644 index 17f6f55b3d0868ad7d427a963e05ec004806b052..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/82.txt +++ /dev/null @@ -1,103 +0,0 @@ -In physics, the no-communication theorem or no-signaling principle is a no-go theorem from quantum information theory which states that, during measurement of an entangled quantum state, it is not possible for one observer, by making a measurement of a subsystem of the total state, to communicate information to another observer. The theorem is important because, in quantum mechanics, quantum entanglement is an effect by which certain widely separated events can be correlated in ways that suggest the possibility of communication faster-than-light. The no-communication theorem gives conditions under which such transfer of information between two observers is impossible.
These results can be applied to understand the so-called paradoxes in quantum mechanics, such as the EPR paradox, or violations of local realism obtained in tests of Bell's theorem. In these experiments, the no-communication theorem shows that failure of local realism does not lead to what could be referred to as "spooky communication at a distance" (in analogy with Einstein's labeling of quantum entanglement as requiring - -"spooky action at a distance" on the assumption of QM's completeness). - -The no-communication theorem states that, within the context of quantum mechanics, it is not possible to transmit classical bits of information by means of carefully prepared mixed or pure states, whether entangled or not. The theorem disallows all communication, not just faster-than-light communication, by means of shared quantum states. The theorem disallows not only the communication of whole bits, but even fractions of a bit. This is important to take note of, as there are many classical radio communications encoding techniques that can send arbitrarily small fractions of a bit across arbitrarily narrow, noisy communications channels. In particular, one may imagine that there is some ensemble that can be prepared, with small portions of the ensemble communicating a fraction of a bit; this, too, is not possible. - -The theorem is built on the basic presumption that the laws of quantum mechanics hold. Similar theorems may or may not hold for other related theories, such as hidden variable theories. The no-communication theorem is not meant to constrain other, non-quantum-mechanical theories. - -The basic assumption entering into the theorem is that a quantum-mechanical system is prepared in an initial state, and that this initial state is describable as a mixed or pure state in a Hilbert space H. The system then evolves over time in such a way that there are two spatially distinct parts, A and B, sent to two distinct observers, Alice and Bob, who are free to perform quantum mechanical measurements on their portion of the total system (viz, A and B). The question is: is there any action that Alice can perform on A that would be detectable by Bob making an observation of B? The theorem replies 'no'. - -An important assumption going into the theorem is that neither Alice nor Bob is allowed, in any way, to affect the preparation of the initial state. If Alice were allowed to take part in the preparation of the initial state, it would be trivially easy for her to encode a message into it; thus neither Alice nor Bob participates in the preparation of the initial state. The theorem does not require that the initial state be somehow 'random' or 'balanced' or 'uniform': indeed, a third party preparing the initial state could easily encode messages in it, received by Alice and Bob. Simply, the theorem states that, given some initial state, prepared in some way, there is no action that Alice can take that would be detectable by Bob. - -The proof proceeds by defining how the total Hilbert space H can be split into two parts, HA and HB, describing the subspaces accessible to Alice and Bob. The total state of the system is assumed to be described by a density matrix σ. This appears to be a reasonable assumption, as a density matrix is sufficient to describe both pure and mixed states in quantum mechanics. Another important part of the theorem is that measurement is performed by applying a generalized projection operator P to the state σ. 
This again is reasonable, as projection operators give the appropriate mathematical description of quantum measurements. After a measurement by Alice, the state of the total system is said to have collapsed to a state P(σ). - -The goal of the theorem is to prove that Bob cannot in any way distinguish the pre-measurement state σ from the post-measurement state P(σ). This is accomplished mathematically by comparing the trace of σ and the trace of P(σ), with the trace being taken over the subspace HA. Since the trace is only over a subspace, it is technically called a partial trace. Key to this step is the assumption that the (partial) trace adequately summarizes the system from Bob's point of view. That is, everything that Bob has access to, or could ever have access to, measure, or detect, is completely described by a partial trace over HA of the system σ. Again, this is a reasonable assumption, as it is a part of standard quantum mechanics. The fact that this trace never changes as Alice performs her measurements is the conclusion of the proof of the no-communication theorem. - -The proof of the theorem is commonly illustrated for the setup of Bell tests in which two observers Alice and Bob perform local observations on a common bipartite system, and uses the statistical machinery of quantum mechanics, namely density states and quantum operations. - -Alice and Bob perform measurements on system S whose underlying Hilbert space is -$$ - H = H_A \otimes H_B. -$$ - -It is also assumed that everything is finite-dimensional to avoid convergence issues. The state of the composite system is given by a density operator on H. Any density operator σ on H is a sum of the form: -$$ - \sigma = \sum_i T_i \otimes S_i -$$ - -where Ti and Si are operators on HA and HB respectively. For the following, it is not required to assume that Ti and Si are state projection operators: i.e. they need not necessarily be non-negative, nor have a trace of one. That is, σ can have a definition somewhat broader than that of a density matrix; the theorem still holds. Note that the theorem holds trivially for separable states. If the shared state σ is separable, it is clear that any local operation by Alice will leave Bob's system intact. Thus the point of the theorem is no communication can be achieved via a shared entangled state. - -Alice performs a local measurement on her subsystem. In general, this is described by a quantum operation, on the system state, of the following kind -$$ - P(\sigma) = \sum_k (V_k \otimes I_{H_B})^* \ \sigma \ (V_k \otimes I_{H_B}), -$$ - -where Vk are called Kraus matrices which satisfy -$$ - \sum_k V_k V_k^* = I_{H_A}. -$$ - -The term -$$ -I_{H_B} -$$ - -from the expression -$$ -(V_k \otimes I_{H_B}) -$$ - -means that Alice's measurement apparatus does not interact with Bob's subsystem. - -Supposing the combined system is prepared in state σ and assuming, for purposes of argument, a non-relativistic situation, immediately (with no time delay) after Alice performs her measurement, the relative state of Bob's system is given by the partial trace of the overall state with respect to Alice's system. In symbols, the relative state of Bob's system after Alice's operation is -$$ -\operatorname{tr}_{H_A}(P(\sigma)) -$$ - -where $\operatorname{tr}_{H_A}$ is the partial trace mapping with respect to Alice's system. 
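Before the algebra, the invariance can be checked numerically; a minimal NumPy sketch (the dimensions, the number of Kraus terms, and the helper names are arbitrary choices of ours, not from the source):

```
import numpy as np

rng = np.random.default_rng(0)
dA, dB = 3, 2

# A random density matrix sigma on H_A (x) H_B.
M = rng.normal(size=(dA*dB, dA*dB)) + 1j*rng.normal(size=(dA*dB, dA*dB))
sigma = M @ M.conj().T
sigma /= np.trace(sigma)

# Random Kraus operators V_k on H_A with sum_k V_k V_k^* = I, matching
# the convention above, where the adjoint sits on the left in P.
A = [rng.normal(size=(dA, dA)) + 1j*rng.normal(size=(dA, dA)) for _ in range(4)]
S = sum(a @ a.conj().T for a in A)            # positive definite
w, U = np.linalg.eigh(S)
S_inv_sqrt = U @ np.diag(w**-0.5) @ U.conj().T
V = [S_inv_sqrt @ a for a in A]               # now sum_k V_k V_k^* = I

def P(rho):
    """Alice's local operation: sum_k (V_k (x) I)^* rho (V_k (x) I)."""
    out = np.zeros_like(rho)
    for v in V:
        K = np.kron(v, np.eye(dB))
        out += K.conj().T @ rho @ K
    return out

def trace_out_A(rho):
    """Partial trace over H_A, leaving Bob's reduced state."""
    return np.einsum('ijik->jk', rho.reshape(dA, dB, dA, dB))

print(np.allclose(trace_out_A(P(sigma)), trace_out_A(sigma)))  # True
```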
- -One can directly calculate this state: -$$ - \operatorname{tr}_{H_A}(P(\sigma)) = \operatorname{tr}_{H_A} \left(\sum_k (V_k \otimes I_{H_B})^* \sigma (V_k \otimes I_{H_B} )\right) -$$ -$$ - = \operatorname{tr}_{H_A} \left(\sum_k \sum_i V_k^* T_i V_k \otimes S_i \right) -$$ -$$ - = \sum_i \sum_k \operatorname{tr}(V_k^* T_i V_k) S_i -$$ -$$ - = \sum_i \sum_k \operatorname{tr}(T_i V_k V_k^*) S_i -$$ -$$ - = \sum_i \operatorname{tr}\left(T_i \sum_k V_k V_k^*\right) S_i -$$ -$$ - = \sum_i \operatorname{tr}(T_i) S_i -$$ -$$ - = \operatorname{tr}_{H_A}(\sigma). -$$ - -From this it is argued that, statistically, Bob cannot tell the difference between what Alice did and a random measurement (or whether she did anything at all). - -*If the density operator $P(\sigma)$ is allowed to evolve under the influence of non-local interactions between A and B, then in general the calculation in the proof no longer holds, unless suitable commutation relations are assumed. - -*The no-communication theorem thus says shared entanglement alone cannot be used to transmit any information. Compare this with the no-teleportation theorem, which states a classical information channel cannot transmit quantum information. (By transmit, we mean transmission with full fidelity.) However, quantum teleportation schemes utilize both resources to achieve what is impossible for either alone. - -* The no-communication theorem implies the no-cloning theorem, which states that quantum states cannot be (perfectly) copied. That is, cloning is a sufficient condition for the communication of classical information to occur. To see this, suppose that quantum states could be cloned. Assume parts of a maximally entangled Bell state are distributed to Alice and Bob. Alice could send bits to Bob in the following way: If Alice wishes to transmit a "0", she measures the spin of her electron in the z direction, collapsing Bob's state to either $|z+\rangle_B$ or $|z-\rangle_B$. To transmit "1", Alice does nothing to her qubit. Bob creates many copies of his electron's state, and measures the spin of each copy in the z direction. Bob will know that Alice has transmitted a "0" if all his measurements produce the same result; otherwise, his measurements will have outcomes $|z+\rangle_B$ or $|z-\rangle_B$ with equal probability. This would allow Alice and Bob to communicate classical bits between each other (possibly across space-like separations, violating causality). - -* The version of the no-communication theorem discussed in this article assumes that the quantum system shared by Alice and Bob is a composite system, i.e. that its underlying Hilbert space is a tensor product whose first factor describes the part of the system that Alice can interact with and whose second factor describes the part of the system that Bob can interact with. In quantum field theory, this assumption can be replaced by the assumption that Alice and Bob are spacelike separated. This alternate version of the no-communication theorem shows that faster-than-light communication cannot be achieved using processes which obey the rules of quantum field theory. - -* The proof of the no-communication theorem assumes that all measurable properties of Bob's system can be calculated from its reduced density matrix, which is true given the Born rule for calculating the probability of making various measurements.
But this equivalence with the Born rule can also essentially be derived in the opposite direction, in that it's possible to show that the Born rule follows from the assumption that space-like separated events cannot violate causality by affecting each other. - -* No-broadcast theorem - -* No-cloning theorem - -* No-deleting theorem - -* No-hiding theorem - -* No-teleportation theorem diff --git a/wiki/wikipedia/820.txt b/wiki/wikipedia/820.txt deleted file mode 100644 index da94e581d95522c5f7134314c0efb43ec3a61500..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/820.txt +++ /dev/null @@ -1,101 +0,0 @@ -In propositional logic, transposition is a valid rule of replacement that permits one to switch the antecedent with the consequent of a conditional statement in a logical proof if they are also both negated. It is the inference from the truth of "A implies B" to the truth of "Not-B implies not-A", and conversely. It is very closely related to the rule of inference modus tollens. It is the rule that -$$ -(P \to Q) \Leftrightarrow (\neg Q \to \neg P) -$$ - -where "$\Leftrightarrow$" is a metalogical symbol representing "can be replaced in a proof with". - -The transposition rule may be expressed as a sequent: -$$ -(P \to Q) \vdash (\neg Q \to \neg P) -$$ - -where $\vdash$ is a metalogical symbol meaning that $(\neg Q \to \neg P)$ is a syntactic consequence of $(P \to Q)$ in some logical system; - -or as a rule of inference: -$$ -\frac{P \to Q}{\therefore \neg Q \to \neg P} -$$ - -where the rule is that wherever an instance of "$P \to Q$" appears on a line of a proof, it can be replaced with "$\neg Q \to \neg P$"; - -or as the statement of a truth-functional tautology or theorem of propositional logic. The principle was stated as a theorem of propositional logic by Russell and Whitehead in Principia Mathematica as: -$$ -(P \to Q) \to (\neg Q \to \neg P) -$$ - -where $P$ and $Q$ are propositions expressed in some formal system. - -In the inferred proposition, the consequent is the contradictory of the antecedent in the original proposition, and the antecedent of the inferred proposition is the contradictory of the consequent of the original proposition. The symbol for material implication signifies the proposition as a hypothetical, or the "if-then" form, e.g. "if P then Q". - -The biconditional statement of the rule of transposition (↔) refers to the relation between hypothetical (→) propositions, with each proposition including an antecedent and a consequent term. As a matter of logical inference, to transpose or convert the terms of one proposition requires the conversion of the terms of the propositions on both sides of the biconditional relationship. Meaning, to transpose or convert (P → Q) to (Q → P) requires that the other proposition, (~Q → ~P), be transposed or converted to (~P → ~Q). Otherwise, to convert the terms of one proposition and not the other renders the rule invalid, violating the sufficient condition and necessary condition of the terms of the propositions, where the violation is that the changed proposition commits the fallacy of denying the antecedent or affirming the consequent by means of illicit conversion. - -The truth of the rule of transposition is dependent upon the relations of sufficient condition and necessary condition in logic. - -In the proposition "If P then Q", the occurrence of 'P' is sufficient reason for the occurrence of 'Q'.
'P', as an individual or a class, materially implicates 'Q', but the relation of 'Q' to 'P' is such that the converse proposition "If Q then P" does not necessarily have sufficient condition. The rule of inference for sufficient condition is modus ponens, which is an argument for conditional implication: - -Premise (1): If P, then Q - -Premise (2): P - -Conclusion: Therefore, Q - -Since the converse of premise (1) is not valid, all that can be stated of the relationship of 'P' and 'Q' is that in the absence of 'Q', 'P' does not occur, meaning that 'Q' is the necessary condition for 'P'. The rule of inference for necessary condition is modus tollens: - -Premise (1): If P, then Q - -Premise (2): not Q - -Conclusion: Therefore, not P - -An example traditionally used by logicians contrasting sufficient and necessary conditions is the statement "If there is fire, then oxygen is present". An oxygenated environment is necessary for fire or combustion, but simply because there is an oxygenated environment does not necessarily mean that fire or combustion is occurring. While one can infer that fire stipulates the presence of oxygen, from the presence of oxygen the converse "If there is oxygen present, then fire is present" cannot be inferred. All that can be inferred from the original proposition is that "If oxygen is not present, then there cannot be fire". - -The symbol for the biconditional ("↔") signifies the relationship between the propositions is both necessary and sufficient, and is verbalized as "if and only if", or, according to the example, "If P then Q 'if and only if' if not Q then not P". - -Necessary and sufficient conditions can be explained by analogy in terms of the concepts and the rules of immediate inference of traditional logic. In the categorical proposition "All S is P", the subject term 'S' is said to be distributed, that is, all members of its class are exhausted in its expression. Conversely, the predicate term 'P' cannot be said to be distributed, or exhausted in its expression because it is indeterminate whether every instance of a member of 'P' as a class is also a member of 'S' as a class. All that can be validly inferred is that "Some P are S". Thus, the type 'A' proposition "All P is S" cannot be inferred by conversion from the original 'A' type proposition "All S is P". All that can be inferred is the type "A" proposition "All non-P is non-S" (Note that (P → Q) and (~Q → ~P) are both 'A' type propositions). Grammatically, one cannot infer "all mortals are men" from "All men are mortal". An 'A' type proposition can only be immediately inferred by conversion when both the subject and predicate are distributed, as in the inference "All bachelors are unmarried men" from "All unmarried men are bachelors". - -In traditional logic the reasoning process of transposition as a rule of inference is applied to categorical propositions through contraposition and obversion, a series of immediate inferences where the rule of obversion is first applied to the original categorical proposition "All S is P"; yielding the obverse "No S is non-P". In the obversion of the original proposition to an 'E' type proposition, both terms become distributed. The obverse is then converted, resulting in "No non-P is S", maintaining distribution of both terms. "No non-P is S" is again obverted, resulting in the contrapositive "All non-P is non-S".
Since nothing is said in the definition of contraposition with regard to the predicate of the inferred proposition, it is permissible that it could be the original subject or its contradictory, and the predicate term of the resulting 'A' type proposition is again undistributed. This results in two contrapositives, one where the predicate term is distributed, and another where the predicate term is undistributed. - -Note that the method of transposition and contraposition should not be confused. Contraposition is a type of immediate inference in which from a given categorical proposition another categorical proposition is inferred which has as its subject the contradictory of the original predicate. Since nothing is said in the definition of contraposition with regard to the predicate of the inferred proposition, it is permissible that it could be the original subject or its contradictory. This is in contradistinction to the form of the propositions of transposition, which may be material implication, or a hypothetical statement. The difference is that in its application to categorical propositions the result of contraposition is two contrapositives, each being the obvert of the other, i.e. "No non-P is S" and "All non-P is non-S". The distinction between the two contrapositives is absorbed and eliminated in the principle of transposition, which presupposes the "mediate inferences" of contraposition and is also referred to as the "law of contraposition". - -See Transposition (mathematics), Set theory - -In Hilbert-style deductive systems for propositional logic, only one side of the transposition is taken as an axiom, and the other is a theorem. We describe a proof of this theorem in the system of three axioms proposed by Jan Łukasiewicz: - -A1. $\phi \to \left( \psi \to \phi \right) $ - -A2. $\left( \phi \to \left( \psi \rightarrow \xi \right) \right) \to \left( \left( \phi \to \psi \right) \to \left( \phi \to \xi \right) \right)$ - -A3. $\left ( \lnot \phi \to \lnot \psi \right) \to \left( \psi \to \phi \right) $ - -(A3) already gives one of the directions of the transposition. The other side, $( \psi \to \phi ) \to ( \neg \phi \to \neg \psi)$, is proven below, using the following lemmas, which we take as already proven: - -(DN1) $ \neg \neg p \to p$ - Double negation (one direction) - -(DN2) $ p \to \neg \neg p$ - Double negation (another direction) - -(HS1) $(q \to r) \to ((p \to q) \to (p \to r))$ - one form of Hypothetical syllogism - -(HS2) $(p \to q) \to ((q \to r) \to (p \to r))$ - another form of Hypothetical syllogism. - -We also use the method of the hypothetical syllogism metatheorem as a shorthand for several proof steps.
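Before the axiomatic derivation, the target tautology can also be checked semantically; a minimal truth-table sketch in Python (the helper name is ours):

```
from itertools import product

def implies(a, b):
    # Material implication: a -> b is false only when a is true and b is false.
    return (not a) or b

# (p -> q) <-> (~q -> ~p) holds under every valuation of p and q.
print(all(
    implies(p, q) == implies(not q, not p)
    for p, q in product([False, True], repeat=2)
))  # True
```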
- -The proof is as follows: - -(1) $ q \to \neg\neg q $ (instance of (DN2)) - -(2) $ (q \to \neg\neg q) \to ((p \to q) \to (p \to \neg\neg q)) $ (instance of (HS1)) - -(3) $ (p \to q) \to (p \to \neg\neg q) $ (from (1) and (2) by modus ponens) - -(4) $ \neg\neg p \to p $ (instance of (DN1)) - -(5) $ (\neg\neg p \to p) \to ((p \to \neg\neg q) \to (\neg\neg p \to \neg\neg q)) $ (instance of (HS2)) - -(6) $ (p \to \neg\neg q) \to (\neg\neg p \to \neg\neg q) $ (from (4) and (5) by modus ponens) - -(7) $ (p \to q) \to (\neg\neg p \to \neg\neg q) $ (from (3) and (6) using the hypothetical syllogism metatheorem) - -(8) $ (\neg\neg p \to \neg\neg q) \to (\neg q \to \neg p) $ (instance of (A3)) - -(9) $ (p \to q) \to (\neg q \to \neg p) $ (from (7) and (8) using the hypothetical syllogism metatheorem) diff --git a/wiki/wikipedia/821.txt b/wiki/wikipedia/821.txt deleted file mode 100644 index 5858915f80319a7e24c16c1fea3de6520963836a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/821.txt +++ /dev/null @@ -1,49 +0,0 @@ -In mathematics, particularly topology, the tube lemma is a useful tool in order to prove that the finite product of compact spaces is compact. - -The lemma uses the following terminology: - -* If $X$ and $Y$ are topological spaces and $X \times Y$ is the product space, endowed with the product topology, a slice in $X \times Y$ is a set of the form $\{ x \} \times Y$ for $x \in X$. - -* A tube in $X \times Y$ is a subset of the form $U\times Y$ where $U$ is an open subset of $X$. It contains all the slices $\{x\} \times Y$ for $x \in U$. - -Let $X$ and $Y$ be topological spaces with $Y$ compact, and consider the product space $X \times Y.$ If $N$ is an open set containing a slice in $X \times Y,$ then there exists a tube in $X \times Y$ containing this slice and contained in $N.$ - -Using the concept of closed maps, this can be rephrased concisely as follows: if $X$ is any topological space and $Y$ a compact space, then the projection map $X \times Y \to X$ is closed. - -Let $X$ and $Y$ be topological spaces and consider the product space $X \times Y.$ Let $A$ be a compact subset of $X$ and $B$ be a compact subset of $Y.$ If $N$ is an open set containing $A \times B,$ then there exists $U$ open in $X$ and $V$ open in $Y$ such that $A \times B \subseteq U \times V \subseteq N.$ - -1. Consider $\mathbb{R} \times \mathbb{R}$ in the product topology, that is, the Euclidean plane, and the open set $N = \{ (x, y) \in \mathbb{R} \times \mathbb{R} ~:~ |xy| < 1 \}.$ The open set $N$ contains $\{ 0 \} \times \mathbb{R},$ but contains no tube, so in this case the tube lemma fails. Indeed, if $W \times \mathbb{R}$ is a tube containing $\{ 0 \} \times \mathbb{R}$ and contained in $N,$ $W$ must be a subset of $\left( - 1/x, 1/x \right)$ for all $x>0$ which means $W = \{ 0 \}$ contradicting the fact that $W$ is open in $\mathbb{R}$ (because $W \times \mathbb{R}$ is a tube). This shows that the compactness assumption is essential. - -2. The tube lemma can be used to prove that if $X$ and $Y$ are compact spaces, then $X \times Y$ is compact as follows: - -Let $\{ G_a \}$ be an open cover of $X \times Y$. For each $x \in X$, cover the slice $\{ x \} \times Y$ by finitely many elements of $\{ G_a \}$ (this is possible since $\{ x \} \times Y$ is compact, being homeomorphic to $Y$).
- -Call the union of these finitely many elements $N_x.$ - -By the tube lemma, there is an open set of the form $W_x \times Y$ containing $\{ x \} \times Y$ and contained in $N_x.$ - -The collection of all $W_x$ for $x \in X$ is an open cover of $X$ and hence has a finite subcover $\{W_{x_1},\dots,W_{x_n}\}$. Thus the finite collection $\{W_{x_1}\times Y,\dots,W_{x_n}\times Y\}$ covers $X\times Y$. - -Using the fact that each $W_{x_i} \times Y$ is contained in $N_{x_i}$ and each $N_{x_i}$ is the finite union of elements of $\{G_a\}$, one gets a finite subcollection of $\{G_a\}$ that covers $X \times Y$. - -3. By part 2 and induction, one can show that the finite product of compact spaces is compact. - -4. The tube lemma cannot be used to prove the Tychonoff theorem, which generalizes the above to infinite products. - -The tube lemma follows from the generalized tube lemma by taking $A = \{ x \}$ and $B = Y.$ - -It therefore suffices to prove the generalized tube lemma. - -By the definition of the product topology, for each $(a, b) \in A \times B$ there are open sets $U_{a, b} \subseteq X$ and $V_{a, b} \subseteq Y$ such that $(a, b) \in U_{a, b} \times V_{a, b} \subseteq N.$ - -For any $a \in A,$ $\left\{ V_{a,b} ~:~ b \in B \right\}$ is an open cover of the compact set $B$ so this cover has a finite subcover; namely, there is a finite set $B_0(a) \subseteq B$ such that $V_{a} := \bigcup_{b \in B_0(a)} V_{a,b}$ contains $B,$ and observe that $V_a$ is open in $Y.$ - -For every $a \in A,$ let $U_a := \bigcap_{b \in B_0(a)} U_{a,b},$ which is open in $X$ since $B_0(a)$ is finite. - -Moreover, the construction of $U_a$ and $V_a$ implies that $\{ a \} \times B \subseteq U_a \times V_a \subseteq N.$ - -We now essentially repeat the argument to drop the dependence on $a.$ - -Let $A_0 \subseteq A$ be a finite subset such that $U := \bigcup_{a \in A_0} U_a$ contains $A$ and set $V := \bigcap_{a \in A_0} V_a.$ - -It then follows by the above reasoning that $A \times B \subseteq U \times V \subseteq N$ and $U \subseteq X$ and $V \subseteq Y$ are open, which completes the proof. diff --git a/wiki/wikipedia/822.txt b/wiki/wikipedia/822.txt deleted file mode 100644 index d3da6ab83e648dd70ab07280fa64773193f0921d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/822.txt +++ /dev/null @@ -1,11 +0,0 @@ -Tetris Ultimate is a puzzle video game developed by American studio SoMa Play and published by Ubisoft. Ubisoft partnered with The Tetris Company to develop the game to celebrate the 30th anniversary of the Tetris franchise. - -Tetris Ultimate on Nintendo 3DS features seven modes, including a new single-player Challenge mode. Other versions offer six different game modes. - -Tetris Ultimate was first released in November 2014 for the Nintendo 3DS as a retail game and as a digital download in the Nintendo 3DS eShop. In December 2014, the game became available as a digital download for Xbox One and PlayStation 4. In 2015, the game was released for PlayStation Vita. - -Because of the release of Tetris Ultimate, Nintendo removed the 1989 Game Boy version of Tetris and the digital download of the 2011 game Tetris: Axis from the Nintendo 3DS eShop in December 2014. Since January 2014, Ubisoft has held the license rights for PlayStation 4 downloadable versions of Tetris. - -Tetris Ultimate has since been delisted on all platforms, and neither the game nor its DLC is available digitally, but the Ubisoft website states that both the servers and Ubisoft Club features are still active.
- -Critical response to Tetris Ultimate was mixed. GameSpot gave the game a 7/10, praising it for being a good version of Tetris, as well as the extensive customization options, stating "If all you want is a good version of classic Tetris for your new console, this one will suit your needs well". However, the review also criticized the bugginess of the online play and the lack of innovation or new modes. The PlayStation 4 and Xbox One versions were also criticized for the multitude of issues in the online modes. diff --git a/wiki/wikipedia/823.txt b/wiki/wikipedia/823.txt deleted file mode 100644 index 40599262d0ce334ae4a95ea8e4b6048a588d1eea..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/823.txt +++ /dev/null @@ -1,74 +0,0 @@ -Stratification has several usages in mathematics. - -In mathematical logic, stratification is any consistent assignment of numbers to predicate symbols guaranteeing that a unique formal interpretation of a logical theory exists. Specifically, we say that a set of clauses of the form $Q_1 \wedge \dots \wedge Q_n \wedge \neg Q_{n+1} \wedge \dots \wedge \neg Q_{n+m} \rightarrow P$ is stratified if and only if - -there is a stratification assignment S that fulfills the following conditions: - -# If a predicate P is positively derived from a predicate Q (i.e., P is the head of a rule, and Q occurs positively in the body of the same rule), then the stratification number of P must be greater than or equal to the stratification number of Q, in short $S(P) \geq S(Q)$. - -# If a predicate P is derived from a negated predicate Q (i.e., P is the head of a rule, and Q occurs negatively in the body of the same rule), then the stratification number of P must be greater than the stratification number of Q, in short $S(P) > S(Q)$. - -The notion of stratified negation leads to a very effective operational semantics for stratified programs in terms of the stratified least fixpoint, that is obtained by iteratively applying the fixpoint operator to each stratum of the program, from the lowest one up. - -Stratification is not only useful for guaranteeing unique interpretation of Horn clause - -theories. - -In New Foundations (NF) and related set theories, a formula $\phi$ in the language of first-order logic with equality and membership is said to be - -stratified if and only if there is a function -$$ -\sigma -$$ which sends each variable appearing in $\phi$ (considered as an item of syntax) to - -a natural number (this works equally well if all integers are used) in such a way that - -any atomic formula $x \in y$ appearing in $\phi$ satisfies $\sigma(x)+1 = \sigma(y)$ and any atomic formula $x = y$ appearing in $\phi$ satisfies $\sigma(x) = \sigma(y)$. - -It turns out that it is sufficient to require that these conditions be satisfied only when - -both variables in an atomic formula are bound in the set abstract $\{x \mid \phi\}$ - -under consideration. A set abstract satisfying this weaker condition is said to be - -weakly stratified. - -The stratification of New Foundations generalizes readily to languages with more - -predicates and with term constructions. Each primitive predicate needs to have specified - -required displacements between values of $\sigma$ at its (bound) arguments - -in a (weakly) stratified formula. In a language with term constructions, terms themselves - -need to be assigned values under $\sigma$, with fixed displacements from the - -values of each of their (bound) arguments in a (weakly) stratified formula.
Defined term - -constructions are neatly handled by (possibly merely implicitly) using the theory - -of descriptions: a term $(\iota x.\phi)$ (the x such that $\phi$) must - -be assigned the same value under $\sigma$ as the variable x. - -A formula is stratified if and only if it is possible to assign types to all variables appearing - -in the formula in such a way that it will make sense in a version TST of the theory of - -types described in the New Foundations article, and this is probably the best way - -to understand the stratification of New Foundations in practice. - -The notion of stratification can be extended to the lambda calculus; this is found - -in papers of Randall Holmes. - -A motivation for the use of stratification is to address Russell's paradox, the antinomy considered to have undermined Frege's central work Grundgesetze der Arithmetik (1902). - -In singularity theory, there is a different meaning, of a decomposition of a topological space X into disjoint subsets each of which is a topological manifold (so that in particular a stratification defines a partition of the topological space). This is not a useful notion when unrestricted; but when the various strata are defined by some recognisable set of conditions (for example being locally closed), and fit together manageably, this idea is often applied in geometry. Hassler Whitney and René Thom first defined formal conditions for stratification. See Whitney stratification and topologically stratified space. - -See stratified sampling. diff --git a/wiki/wikipedia/824.txt b/wiki/wikipedia/824.txt deleted file mode 100644 index 4c75b469fce540c885614acc9169f0602a758c3d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/824.txt +++ /dev/null @@ -1 +0,0 @@ -#redirect Greenberg's conjectures diff --git a/wiki/wikipedia/825.txt b/wiki/wikipedia/825.txt deleted file mode 100644 index b2ecd7fbd02063328d8b943c1955872e821255ff..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/825.txt +++ /dev/null @@ -1,16 +0,0 @@ -In functional analysis, a field of mathematics, the Banach–Mazur theorem is a theorem roughly stating that most well-behaved normed spaces are subspaces of the space of continuous paths. It is named after Stefan Banach and Stanisław Mazur. - -Every real, separable Banach space (X, ‖·‖) is isometrically isomorphic to a closed subspace of C0([0, 1], R), the space of all continuous functions from the unit interval into the real line. - -On the one hand, the Banach–Mazur theorem seems to tell us that the seemingly vast collection of all separable Banach spaces is not that vast or difficult to work with, since a separable Banach space is "only" a collection of continuous paths. On the other hand, the theorem tells us that C0([0, 1], R) is a "really big" space, big enough to contain every possible separable Banach space. - -Non-separable Banach spaces cannot embed isometrically in the separable space C0([0, 1], R), but for every Banach space X, one can find a compact Hausdorff space K and an isometric linear embedding j of X into the space C(K) of scalar continuous functions on K. The simplest choice is to let K be the unit ball of the continuous dual X ′, equipped with the w*-topology. This unit ball K is then compact by the Banach–Alaoglu theorem. The embedding j is introduced by saying that for every x ∈ X, the continuous function j(x) on K is defined by -$$ - \forall x' \in K: \qquad j(x)(x') = x'(x).
-$$ - -The mapping j is linear, and it is isometric by the Hahn–Banach theorem. - -Another generalization was given by Kleiber and Pervin (1969): a metric space of density equal to an infinite cardinal α is isometric to a subspace of C0([0,1]α, R), the space of real continuous functions on the product of α copies of the unit interval. - -Let us write Ck[0, 1] for Ck([0, 1], R). In 1995, Luis Rodríguez-Piazza proved that the isometry i : X → C0[0, 1] can be chosen so that every non-zero function in the image i(X) is nowhere differentiable. Put another way, if D ⊂ C0[0, 1] consists of functions that are differentiable at at least one point of [0, 1], then i can be chosen so that i(X) ∩ D = {0}. This conclusion applies to the space C0[0, 1] itself, hence there exists a linear map i : C0[0, 1] → C0[0, 1] that is an isometry onto its image, such that the image under i of C1[0, 1] (the subspace consisting of functions that are everywhere differentiable with continuous derivative) intersects D only at 0: thus the space of smooth functions (with respect to the uniform distance) is isometrically isomorphic to a space of nowhere-differentiable functions. Note that the (metrically incomplete) space of smooth functions is dense in C0[0, 1]. diff --git a/wiki/wikipedia/826.txt b/wiki/wikipedia/826.txt deleted file mode 100644 index 6f94b478574e5bcc5edfb4f07779ff1af2aff4aa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/826.txt +++ /dev/null @@ -1,9 +0,0 @@ -In set theory, a branch of mathematics, the condensation lemma is a result about sets in the - -constructible universe. - -It states that if X is a transitive set and is an elementary submodel of some level of the constructible hierarchy Lα, that is, $(X,\in)\prec (L_\alpha,\in)$, then in fact there is some ordinal $\beta\leq\alpha$ such that $X=L_\beta$. - -More can be said: If X is not transitive, then its transitive collapse is equal to some $L_\beta$, and the hypothesis of elementarity can be weakened to elementarity only for formulas which are $\Sigma_1$ in the Lévy hierarchy. Also, the assumption that X be transitive automatically holds when $\alpha=\omega_1$. - -The lemma was formulated and proved by Kurt Gödel in his proof that the axiom of constructibility implies GCH. diff --git a/wiki/wikipedia/827.txt b/wiki/wikipedia/827.txt deleted file mode 100644 index 941f0ff133d787443248a8e53abb24c9b33e9b5b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/827.txt +++ /dev/null @@ -1,41 +0,0 @@ -Calendaring software is software that minimally provides users with an electronic version of a calendar. Additionally, the software may provide an appointment book, address book, and/or contact list. These tools are an extension of many of the features provided by time management software such as desk accessory packages and computer office automation systems. Calendaring is a standard feature of many PDAs, EDAs and smartphones and also of many office suites for personal computers. - -The software may be a local application designed for individual use (such as the Lightning extension for Mozilla Thunderbird, Microsoft Outlook without Exchange Server, or Windows Calendar) or may be a networked package that allows for the sharing of information between users (such as Mozilla Sunbird, Windows Live Calendar, Google Calendar, or Microsoft Outlook with Exchange Server). - -Calendaring software will contain one or more of the following features: - -* Calendar: a calendar showing dates and days of the week.
An example of a simple software calendar is the cal command, which simply outputs a monthly or yearly calendar. - -* Address book: a list of contacts with information to enable the user to communicate with the contacts. - -* Appointment attachments: this feature allows users to attach a file to an appointment. If the appointment includes other participants, the attachment is shared with them. - -* Appointment calendar: a list of appointments and the attendees for the appointments. This software may include the capability of detecting scheduling conflicts, notifying the participants of the conflict, and suggesting alternate meeting times. - -* Appointment reminders: automatically reminds participants of an upcoming meeting. - -* Availability sharing: this feature allows users to share their availability with others (users can select how much detail is shared), thus facilitating meeting scheduling amongst several individuals. - -* Availability and capacity checking: check the availability of all other employee and resource calendars in the group. - -* Calendar publishing: some calendaring tools allow the user to publish select calendar information on a public link. - -* Calendar exporting: users are allowed to export selected calendars into various file formats, including the iCalendar standard. - -* Collaborative scheduling: the capability of the software to check schedules and propose meeting times to all of the participants. This allows the invitees to suggest times that will work best for them, allowing the organizer to pick a meeting time that works best for all of the participants. - -* Customization: this feature allows users to customize several available features, such as email appointment reminders, calendar viewing default, workweek and work hours display, etc. - -* E-mail integration: an electronic mail communication system. This can be tied into the appointment calendar to send reminders and notify the participants of issues arising with scheduled meetings. - -* Group calendar: a calendar showing dates of groups in addition to individual calendars. - -* Multiple calendars: this feature allows users to create separate calendars (e.g. a work calendar, a children's school calendar). - -* Multiple views: this feature allows users to select how their calendar is displayed: one day, one week, one month, one year, etc. - -* Printing: users may print a selected schedule. Usually, this feature allows users to select how they want the printout to look (e.g. include comments, subject only, etc.). - -* Timeblocking: allows users to organize their days into chunks, assigning a task to each chunk of time. - -* Web-based interface: allows users to access their calendars from any computer or mobile device (including cell phones) without having to solely rely on their work or personal computer. diff --git a/wiki/wikipedia/828.txt b/wiki/wikipedia/828.txt deleted file mode 100644 index 3de44edcdde6de609d73514f8183d382bad76b18..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/828.txt +++ /dev/null @@ -1,21 +0,0 @@ -In theoretical computer science, the subgraph isomorphism problem is a computational task in which two graphs G and H are given as input, and one must determine whether G contains a subgraph that is isomorphic to H. - -Subgraph isomorphism is a generalization of both the maximum clique problem and the problem of testing whether a graph contains a Hamiltonian cycle, and is therefore NP-complete. However, certain other cases of subgraph isomorphism may be solved in polynomial time.
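Several of the solvers discussed next are available in common libraries; for instance, networkx ships a matcher based on the VF2 algorithm mentioned below. A minimal sketch (the example graphs are chosen arbitrarily; note that this matcher looks for node-induced subgraphs):

```
import networkx as nx
from networkx.algorithms import isomorphism

G = nx.cycle_graph(5)   # the "haystack" graph G
H = nx.path_graph(3)    # the pattern H

# Does G contain an (induced) subgraph isomorphic to H?
gm = isomorphism.GraphMatcher(G, H)
print(gm.subgraph_is_isomorphic())                       # True
print(sum(1 for _ in gm.subgraph_isomorphisms_iter()))   # number of mappings
```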
- -Ullmann (2010) is a substantial update to his 1976 subgraph isomorphism algorithm paper. - -Cordella proposed in 2004 another algorithm based on Ullmann's, VF2, which improves the refinement process using different heuristics and uses significantly less memory. - -Bonnici proposed a better algorithm, which improves the initial order of the vertices using some heuristics. - -The current state-of-the-art solver for moderately-sized, hard instances is the Glasgow Subgraph Solver (McCreesh). This solver adopts a constraint programming approach, using bit-parallel data structures and specialized propagation algorithms for performance. It supports most common variations of the problem and is capable of counting or enumerating solutions as well as deciding whether one exists. - -For large graphs, state-of-the-art algorithms include CFL-Match and Turboiso, and extensions thereof such as DAF by Han. - -Subgraph isomorphism has been applied in the area of cheminformatics to find similarities between chemical compounds from their structural formula; often in this area the term substructure search is used. A query structure is often defined graphically using a structure editor program; SMILES-based database systems typically define queries using SMARTS, a SMILES extension. - -The closely related problem of counting the number of isomorphic copies of a graph H in a larger graph G has been applied to pattern discovery in databases, the bioinformatics of protein-protein interaction networks, and in exponential random graph methods for mathematically modeling social networks. - -Ohlrich et al. describe an application of subgraph isomorphism in the computer-aided design of electronic circuits. Subgraph matching is also a substep in graph rewriting (the most runtime-intensive step), and thus offered by graph rewrite tools. - -The problem is also of interest in artificial intelligence, where it is considered part of an array of pattern matching in graphs problems; an extension of subgraph isomorphism known as graph mining is also of interest in that area. diff --git a/wiki/wikipedia/829.txt b/wiki/wikipedia/829.txt deleted file mode 100644 index bab747366680df20861852a5b33c37b06c502023..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/829.txt +++ /dev/null @@ -1,213 +0,0 @@ -In probability theory, the central limit theorem (CLT) states that, in many situations, when independent random variables are added, their properly normalized sum tends toward a normal distribution. This article gives two illustrations of this theorem. Both involve the sum of independent and identically-distributed random variables and show how the probability distribution of the sum approaches the normal distribution as the number of terms in the sum increases. - -The first illustration involves a continuous probability distribution, for which the random variables have a probability density function. The second illustration, for which most of the computation can be done by hand, involves a discrete probability distribution, which is characterized by a probability mass function. - -The density of the sum of two independent real-valued random variables equals the convolution of the density functions of the original variables.
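To make the convolution step concrete, here is a small NumPy sketch; the uniform density is an arbitrary stand-in for illustration, not the article's piecewise example:

```
import numpy as np

# Grid for a density supported on [0, 1): the uniform density.
dx = 0.001
x = np.arange(0.0, 1.0, dx)
f = np.ones_like(x)              # f integrates to 1

# Density of the sum of two independent copies: the convolution (f * f),
# a triangular density supported on [0, 2].
f2 = np.convolve(f, f) * dx

print(np.trapz(f, dx=dx), np.trapz(f2, dx=dx))   # both approximately 1.0
print(f2.argmax() * dx)                          # peak near 1.0
```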
In particular, the density of the sum of n+1 terms equals the convolution of the density of the sum of n terms with the original density (the "sum" of 1 term). - -A probability density function is shown in the first figure below. Then the densities of the sums of two, three, and four independent identically distributed variables, each having the original density, are shown in the following figures. - -If the original density is a piecewise polynomial, as it is in the example, then so are the sum densities, of increasingly higher degree. Although the original density is far from normal, the density of the sum of just a few variables with that density is much smoother and has some of the qualitative features of the normal density. - -The convolutions were computed via the discrete Fourier transform. A list of values y = f(x0 + k Δx) was constructed, where f is the original density function, and Δx is approximately equal to 0.002, and k is equal to 0 through 1000. The discrete Fourier transform Y of y was computed. Then the convolution of f with itself is proportional to the inverse discrete Fourier transform of the pointwise product of Y with itself. - -We start with a probability density function. This function, although discontinuous, is far from the most pathological example that could be created. It is a piecewise polynomial, with pieces of degrees 0 and 1. The mean of this distribution is 0 and its standard deviation is 1. - -Next we compute the density of the sum of two independent variables, each having the above density. - -The density of the sum is the convolution of the above density with itself. - -The sum of two variables has mean 0. - -The density shown in the figure at right has been rescaled by $\sqrt{2}$, so that its standard deviation is 1. - -This density is already smoother than the original. - -There are obvious lumps, which correspond to the intervals on which the original density was defined. - -We then compute the density of the sum of three independent variables, each having the above density. - -The density of the sum is the convolution of the first density with the second. - -The sum of three variables has mean 0. - -The density shown in the figure at right has been rescaled by 3, so that its standard deviation is 1. - -This density is even smoother than the preceding one. - -The lumps can hardly be detected in this figure. - -Finally, we compute the density of the sum of four independent variables, each having the above density. - -The density of the sum is the convolution of the first density with the third (or the second density with itself). - -The sum of four variables has mean 0. - -The density shown in the figure at right has been rescaled by 4, so that its standard deviation is 1. - -This density appears qualitatively very similar to a normal density. - -No lumps can be distinguished by the eye. - -This section illustrates the central limit theorem via an example for which the computation can be done quickly by hand on paper, unlike the more computing-intensive example of the previous section. - -Suppose the probability distribution of a discrete random variable X puts equal weights on 1, 2, and 3: - -X=\left\{\begin{matrix} 1 & \mbox{with}\ \mbox{probability}\ 1/3, \\ - -2 & \mbox{with}\ \mbox{probability}\ 1/3, \\ - -3 & \mbox{with}\ \mbox{probability}\ 1/3. - -\end{matrix}\right. 
- -The probability mass function of the random variable X may be depicted by the following bar graph: - -Clearly this looks nothing like the bell-shaped curve of the normal distribution. Contrast the above with the depictions below. - -Now consider the sum of two independent copies of X: - -\left\{\begin{matrix} - -1+1 & = & 2 \\ - -1+2 & = & 3 \\ - -1+3 & = & 4 \\ - -2+1 & = & 3 \\ - -2+2 & = & 4 \\ - -2+3 & = & 5 \\ - -3+1 & = & 4 \\ - -3+2 & = & 5 \\ - -3+3 & = & 6 - -\end{matrix}\right\} - -=\left\{\begin{matrix} - -2 & \mbox{with}\ \mbox{probability}\ 1/9 \\ - -3 & \mbox{with}\ \mbox{probability}\ 2/9 \\ - -4 & \mbox{with}\ \mbox{probability}\ 3/9 \\ - -5 & \mbox{with}\ \mbox{probability}\ 2/9 \\ - -6 & \mbox{with}\ \mbox{probability}\ 1/9 - -\end{matrix}\right\} - - - -The probability mass function of this sum may be depicted thus: - -This still does not look very much like the bell-shaped curve, but, like the bell-shaped curve and unlike the probability mass function of X itself, it is higher in the middle than in the two tails. - -Now consider the sum of three independent copies of this random variable: - -\left\{\begin{matrix} - -1+1+1 & = & 3 \\ - -1+1+2 & = & 4 \\ - -1+1+3 & = & 5 \\ - -1+2+1 & = & 4 \\ - -1+2+2 & = & 5 \\ - -1+2+3 & = & 6 \\ - -1+3+1 & = & 5 \\ - -1+3+2 & = & 6 \\ - -1+3+3 & = & 7 \\ - -2+1+1 & = & 4 \\ - -2+1+2 & = & 5 \\ - -2+1+3 & = & 6 \\ - -2+2+1 & = & 5 \\ - -2+2+2 & = & 6 \\ - -2+2+3 & = & 7 \\ - -2+3+1 & = & 6 \\ - -2+3+2 & = & 7 \\ - -2+3+3 & = & 8 \\ - -3+1+1 & = & 5 \\ - -3+1+2 & = & 6 \\ - -3+1+3 & = & 7 \\ - -3+2+1 & = & 6 \\ - -3+2+2 & = & 7 \\ - -3+2+3 & = & 8 \\ - -3+3+1 & = & 7 \\ - -3+3+2 & = & 8 \\ - -3+3+3 & = & 9 - -\end{matrix}\right\} - -=\left\{\begin{matrix} - -3 & \mbox{with}\ \mbox{probability}\ 1/27 \\ - -4 & \mbox{with}\ \mbox{probability}\ 3/27 \\ - -5 & \mbox{with}\ \mbox{probability}\ 6/27 \\ - -6 & \mbox{with}\ \mbox{probability}\ 7/27 \\ - -7 & \mbox{with}\ \mbox{probability}\ 6/27 \\ - -8 & \mbox{with}\ \mbox{probability}\ 3/27 \\ - -9 & \mbox{with}\ \mbox{probability}\ 1/27 - -\end{matrix}\right\} - - - -The probability mass function of this sum may be depicted thus: - -Not only is this bigger at the center than it is at the tails, but as one moves toward the center from either tail, the slope first increases and then decreases, just as with the bell-shaped curve. - -The degree of its resemblance to the bell-shaped curve can be quantified as follows. Consider - -Pr(X1 + X2 + X3 ≤ 7) = 1/27 + 3/27 + 6/27 + 7/27 + 6/27 = 23/27 = 0.85185... . - -How close is this to what a normal approximation would give? It can readily be seen that the expected value of Y = X1 + X2 + X3 is 6 and the standard deviation of Y is the square root of 2. Since Y ≤ 7 (weak inequality) if and only if Y < 8 (strict inequality), we use a continuity correction and seek - -\mbox{Pr}(Y\leq 7.5) - -=\mbox{P}\left({Y-6 \over \sqrt{2}}\leq{7.5-6 \over \sqrt{2}}\right) - -=\mbox{Pr}(Z\leq 1.0606602\dots) = 0.85558\dots - -where Z has a standard normal distribution. The difference between 0.85185... and 0.85558... seems remarkably small when it is considered that the number of independent random variables that were added was only three. - -The following image shows the result of a simulation based on the example presented in this page. The extraction from the uniform distribution is repeated 1,000 times, and the results are summed. - -Since the simulation is based on the Monte Carlo method, the process is repeated 10,000 times. 
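A minimal NumPy sketch of the simulation just described (the seed and the quantile check are our own additions):

```
import numpy as np

rng = np.random.default_rng(1)

# 10,000 replications of: sum 1,000 independent Uniform(0, 1) draws.
sums = rng.uniform(0.0, 1.0, size=(10_000, 1_000)).sum(axis=1)

# Standardize: one uniform draw has mean 1/2 and variance 1/12, so the
# sum of 1,000 draws has mean 500 and variance 1000/12.
z = (sums - 500.0) / np.sqrt(1_000 / 12.0)
print(z.mean(), z.std())                   # approximately 0 and 1
print(np.quantile(z, [0.16, 0.5, 0.84]))   # approximately [-1, 0, 1]
```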
The results show that the distribution of the sum of 1,000 uniform extractions resembles the bell-shaped curve very well. diff --git a/wiki/wikipedia/83.txt b/wiki/wikipedia/83.txt deleted file mode 100644 index 2477c7f6bf91e22253daa881e0d9e1bcfcf42e61..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/83.txt +++ /dev/null @@ -1,23 +0,0 @@ -Raymond's Algorithm is a lock-based algorithm for mutual exclusion on a distributed system. It imposes a logical structure (a K-ary tree) on distributed resources. As defined, each node has only a single parent, to which all requests to attain the token are made. - -# Each node has only one parent to whom received requests are forwarded - -# Each node maintains a FIFO queue of requests each time that it sees the token; - -# If any node is forwarding the privilege to another node and has a non-empty queue, then it forwards a request message along with it - -# If a node i (not holding the token) wishes to receive the token in order to enter into its critical section, it sends a request to its parent, node j. - -#* If node j's FIFO queue is empty, node j shifts i into its FIFO queue; j then issues a request to its parent, k, indicating that it desires the token - -#* If node j's FIFO queue is not empty, it simply shifts i into the queue - -# When node k has the token and receives the request from j, it sends the token to j and sets j as its parent - -# When node j receives the token from k, it forwards the token to i and i is removed from the queue of j - -#* If the queue of j is not empty after forwarding the token to i, j must issue a request to i in order to get the token back - -Note: If j wishes to request a token, and its queue is not empty, then it places itself into its own queue. Node j will utilize the token to enter into its critical section if it is at the head of the queue when the token is received. - -Raymond's algorithm is guaranteed to be O(log n) per critical section entry if the processors are organized into a K-ary tree. Additionally, each processor needs to store at most O(log n) bits because it must track O(1) neighbors. diff --git a/wiki/wikipedia/830.txt b/wiki/wikipedia/830.txt deleted file mode 100644 index 294e437c52e64b9ae69c650b4635739b6ff5502f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/830.txt +++ /dev/null @@ -1,56 +0,0 @@ -In mathematics in general, a characterization theorem says that a particular object – a function, a space, etc. – is the only one that possesses properties specified in the theorem. A characterization of a probability distribution accordingly states that it is the only probability distribution that satisfies specified conditions. More precisely, the model of characterization of a probability distribution can be described in the following manner. On the probability space we define the space $ \mathcal{X}=\{ X \} $ of random variables with values in measurable metric space $(U,d_{u})$ and the space $ \mathcal{Y}=\{ Y \} $ of random variables with values in measurable metric space $(V,d_{v})$. By characterization of probability distributions we understand the general problem of describing some set $ \mathcal{C}$ in the space $ \mathcal{X}$ by extracting the sets $ \mathcal{A} \subseteq \mathcal{X} $ and $ \mathcal{B} \subseteq \mathcal{Y} $ which describe the properties of random variables $ X \in\mathcal{A}$ and their images $ Y=\mathbf{F}X \in \mathcal{B} $, obtained by means of a specially chosen mapping $ \mathbf{F}:\mathcal{X} \to \mathcal{Y} $.
    The description of the properties of the random variables $X$ and of their images $ Y=\mathbf{F}X $ is equivalent to the indication of the set $ \mathcal{A} \subseteq \mathcal{X} $ from which $X$ must be taken and of the set $ \mathcal{B} \subseteq \mathcal{Y} $ into which its image must fall. So, the set which interests us appears therefore in the following form: - - - -X\in\mathcal{A}, \mathbf{F} X \in \mathcal{B} \Leftrightarrow X \in \mathcal{C}, i.e. \mathcal{C} = \mathbf{F}^{-1} \mathcal{B}, - - - -where $ \mathbf{F}^{-1} \mathcal{B}$ denotes the complete inverse image of $ \mathcal{B}$ in $ \mathcal{A}$. This is the general model of characterization of probability distribution. Some examples of characterization theorems: - -* The assumption that two linear (or non-linear) statistics are identically distributed (or independent, or have a constancy regression and so on) can be used to characterize various populations. For example, according to George Pólya's characterization theorem, if $X_1$ and $X_2$ are independent identically distributed random variables with finite variance, then the statistics $ S_1 = X_1 $ and $ S_2 = \cfrac{X_1 + X_2}{\sqrt{2}}$ are identically distributed if and only if $ X_1 $ and $ X_2 $ have a normal distribution with zero mean. In this case - - - -\mathbf{F} = \begin{bmatrix} - -1 & 0 \\ - -1/\sqrt{2} & 1/\sqrt{2} - -\end{bmatrix} - -, -$$ - \mathcal{A} -$$ is a set of random two-dimensional column-vectors with independent identically distributed components, $ \mathcal{B}$ is a set of random two-dimensional column-vectors with identically distributed components and $ \mathcal{C}$ is a set of two-dimensional column-vectors with independent identically distributed normal components. - -* According to generalized George Pólya's characterization theorem (without condition on finiteness of variance ) if $X_1 , X_2 , \dots, X_n$ are non-degenerate independent identically distributed random variables, statistics $X_1$ and $ a_1X_1 + a_2X_2 + \dots + a_nX_n$ are identically distributed and $\left | a_j \right \vert < 1, a_1^2 + a_2^2 + \dots + a_n^2 = 1 $, then $ X_j $ is normal random variable for any $ j, j=1,2, \dots, n $. In this case - - - -\mathbf{F} = \begin{bmatrix} - -1 & 0 & \dots & 0\\ - -a_1 & a_2 & \dots & a_n - -\end{bmatrix} - -, -$$ - \mathcal{A} -$$ is a set of random n-dimensional column-vectors with independent identically distributed components, $ \mathcal{B}$ is a set of random two-dimensional column-vectors with identically distributed components and $ \mathcal{C}$ is a set of n-dimensional column-vectors with independent identically distributed normal components. - -* All probability distributions on the half-line $\left [ 0, \infty \right )$ that are memoryless are exponential distributions. "Memoryless" means that if $X$ is a random variable with such a distribution, then for any numbers $ 0 < y < x $ , -$$ - \Pr(X > x\mid X>y) = \Pr(X>x-y) -$$. - -
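For the last example, the forward direction is a one-line computation (standard, and added here only for illustration): with $\Pr(X>x)=e^{-\lambda x}$ for an exponential distribution with rate $\lambda>0$,

```latex
\Pr(X > x \mid X > y)
  = \frac{\Pr(X > x)}{\Pr(X > y)}
  = \frac{e^{-\lambda x}}{e^{-\lambda y}}
  = e^{-\lambda (x - y)}
  = \Pr(X > x - y), \qquad 0 < y < x .
```

The content of the characterization is the converse: any distribution on $\left[0,\infty\right)$ with this property must be of this exponential form.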
    - -Verification of conditions of characterization theorems in practice is possible only with some error $\epsilon $, i.e., only to a certain degree of accuracy. Such a situation is observed, for instance, in the cases where a sample of finite size is considered. That is why there arises the following natural question. Suppose that the conditions of the characterization theorem are fulfilled not exactly but only approximately. May we assert that the conclusion of the theorem is also fulfilled approximately? The theorems in which the problems of this kind are considered are called stability characterizations of probability distributions. diff --git a/wiki/wikipedia/831.txt b/wiki/wikipedia/831.txt deleted file mode 100644 index 368bced2a106ad73dc15ca21bb044a07f716f4e6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/831.txt +++ /dev/null @@ -1,93 +0,0 @@ -In descriptive set theory, the Borel determinacy theorem states that any Gale–Stewart game whose payoff set is a Borel set is determined, meaning that one of the two players will have a winning strategy for the game. - -The theorem was proved by Donald A. Martin in 1975, and is applied in descriptive set theory to show that Borel sets in Polish spaces have regularity properties such as the perfect set property and the property of Baire. - -The theorem is also known for its metamathematical properties. In 1971, before the theorem was proved, Harvey Friedman showed that any proof of the theorem in Zermelo–Fraenkel set theory must make repeated use of the axiom of replacement. Later results showed that stronger determinacy theorems cannot be proven in Zermelo–Fraenkel set theory, although they are relatively consistent with it, if certain large cardinals are consistent. - -A Gale-Stewart game is a two-player game of perfect information. The game is defined using a set A, and is denoted GA. The two players alternate turns, and each player is aware of all moves before making the next one. On each turn, each player chooses a single element of A to play. The same element may be chosen more than once without restriction. The game can be visualized through the following diagram, in which the moves are made from left to right, with the moves of player I above and the moves of player II below. - -
    - - - -\begin{matrix} - -\mathrm{I} & a_1 & \quad & a_3 & \quad & a_5 & \quad & \cdots\\ - -\mathrm{II} & \quad & a_2 & \quad & a_4 & \quad & a_6 & \cdots - -\end{matrix} - - - -
    - -The play continues without end, so that a single play of the game determines an infinite sequence $\langle a_1,a_2,a_3\ldots\rangle$ of elements of A. The set of all such sequences is denoted Aω. The players are aware, from the beginning of the game, of a fixed payoff set (a.k.a. winning set) that will determine who wins. The payoff set is a subset of Aω. If the infinite sequence created by a play of the game is in the payoff set, then player I wins. Otherwise, player II wins; there are no ties. - -This definition initially does not seem to include traditional perfect information games such as chess, since the set of moves available in such games changes every turn. However, this sort of case can be handled by declaring that a player who makes an illegal move loses immediately, so that the Gale-Stewart notion of a game does in fact generalize the concept of a game defined by a game tree. - -A winning strategy for a player is a function that tells the player what move to make from any position in the game, such that if the player follows the function he or she will surely win. More specifically, a winning strategy for player I is a function f that takes as input sequences of elements of A of even length and returns an element of A, such that player I will win every play of the form - -
    - - - -\begin{matrix} - -\mathrm{I} & a_1 = f(\langle \rangle) & \quad & a_3 = f(\langle a_1, a_2\rangle)& \quad & a_5 = f(\langle a_1, a_2, a_3, a_4\rangle) & \quad & \cdots\\ - -\mathrm{II} & \quad & a_2 & \quad & a_4 & \quad & a_6 & \cdots. - -\end{matrix} - - - -
    - -A winning strategy for player II is a function g that takes odd-length sequences of elements of A and returns elements of A, such that player II will win every play of the form - -
    - - - -\begin{matrix} - -\mathrm{I} & a_1 & \quad & a_3 & \quad & a_5 & \quad & \cdots\\ - -\mathrm{II} & \quad & a_2 = g(\langle a_1\rangle)& \quad & a_4 = g(\langle a_1,a_2,a_3\rangle) & \quad & a_6 = g(\langle a_1,a_2,a_3,a_4,a_5\rangle) & \cdots . - -\end{matrix} - - - -
- -At most one player can have a winning strategy; if both players had winning strategies, and played the strategies against each other, only one of the two strategies could win that play of the game. If one of the players has a winning strategy for a particular payoff set, that payoff set is said to be determined.

- -For a given set A, whether a subset of Aω will be determined depends to some extent on its topological structure. For the purposes of Gale-Stewart games, the set A is endowed with the discrete topology, and Aω is endowed with the resulting product topology, where Aω is viewed as a countably infinite topological product of A with itself. In particular, when A is the set {0,1}, the topology defined on Aω is exactly the ordinary topology on Cantor space, and when A is the set of natural numbers, it is the ordinary topology on Baire space.

- -The set Aω can be viewed as the set of paths through a certain tree, which leads to a second characterization of its topology. The tree consists of all finite sequences of elements of A, and the children of a particular node σ of the tree are exactly the sequences that extend σ by one element. Thus if A = { 0, 1 }, the first level of the tree consists of the sequences ⟨ 0 ⟩ and ⟨ 1 ⟩; the second level consists of the four sequences ⟨ 0, 0 ⟩, ⟨ 0, 1 ⟩, ⟨ 1, 0 ⟩, ⟨ 1, 1 ⟩; and so on. For each of the finite sequences σ in the tree, the set of all elements of Aω that begin with σ is a basic open set in the topology on Aω. The open sets of Aω are precisely the sets expressible as unions of these basic open sets. The closed sets, as usual, are those whose complement is open.

- -The Borel sets of Aω are the smallest class of subsets of Aω that includes the open sets and is closed under complement and countable union. That is, the Borel sets are the smallest σ-algebra of subsets of Aω containing all the open sets. The Borel sets are classified in the Borel hierarchy based on how many times the operations of complement and countable union are required to produce them from open sets.

- -Gale and Stewart (1953) proved that if the payoff set is an open or closed subset of Aω then the Gale-Stewart game with that payoff set is always determined. Over the next twenty years, this was extended to slightly higher levels of the Borel hierarchy through ever more complicated proofs. This led to the question of whether the game must be determined whenever the payoff set is a Borel subset of Aω. It was known that, using the axiom of choice, it is possible to construct a subset of {0,1}ω that is not determined (Kechris 1995, p. 139).

- -Harvey Friedman (1971) proved that any proof that all Borel subsets of Cantor space ({0,1}ω ) were determined would require repeated use of the axiom of replacement, an axiom not typically required to prove theorems about "small" objects such as Cantor space.

- -Donald A. Martin (1975) proved that for any set A, all Borel subsets of Aω are determined. Because the original proof was quite complicated, Martin published a shorter proof in 1982 that did not require as much technical machinery. In his review of Martin's paper, Drake describes the second proof as "surprisingly straightforward."

- -The field of descriptive set theory studies properties of Polish spaces (essentially, complete separable metric spaces). The Borel determinacy theorem has been used to establish many properties of Borel subsets of these spaces. For example, all Borel subsets of Polish spaces have the perfect set property and the property of Baire.
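The flavor of determinacy can be illustrated on games of fixed finite length, where one of the two players always has a winning strategy by ordinary backward induction. The following Python sketch is illustrative only (it is not part of the theorem or its proof, and all names are our own):

```python
from itertools import product

def first_player_wins(A, n, payoff):
    """Backward induction: does player I have a winning strategy in the
    length-n game over alphabet A, where player I wins exactly the plays
    (length-n tuples) lying in `payoff`?"""
    def wins(prefix):
        if len(prefix) == n:
            return tuple(prefix) in payoff
        if len(prefix) % 2 == 0:                      # player I moves
            return any(wins(prefix + [a]) for a in A)
        return all(wins(prefix + [a]) for a in A)     # player II moves
    return wins([])

# Player I wins iff the four chosen bits contain at least two 1s;
# playing 1 on both of I's turns guarantees this.
A = [0, 1]
payoff = {p for p in product(A, repeat=4) if sum(p) >= 2}
print(first_player_wins(A, 4, payoff))  # True
```

For infinite plays no such exhaustive search exists, which is why determinacy of even open or closed payoff sets already requires an argument.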
- -The Borel determinacy theorem is of interest for its metamathematical properties as well as its consequences in descriptive set theory.

- -Determinacy of closed sets of Aω for arbitrary A is equivalent to the axiom of choice over ZF (Kechris 1995, p. 139). When working in set-theoretical systems where the axiom of choice is not assumed, this can be circumvented by considering generalized strategies known as quasistrategies (Kechris 1995, p. 139) or by only considering games where A is the set of natural numbers, as in the axiom of determinacy.

- -Zermelo set theory (Z) is Zermelo–Fraenkel set theory without the axiom of replacement. It differs from ZF in that Z does not prove that the power set operation can be iterated uncountably many times beginning with an arbitrary set. In particular, Vω + ω, a particular countable level of the cumulative hierarchy, is a model of Zermelo set theory. The axiom of replacement, on the other hand, is only satisfied by Vκ for significantly larger values of κ, such as when κ is a strongly inaccessible cardinal. Friedman's theorem of 1971 showed that there is a model of Zermelo set theory (with the axiom of choice) in which Borel determinacy fails, and thus Zermelo set theory cannot prove the Borel determinacy theorem.

- -The existence of all beth numbers of countable index is sufficient to prove the Borel determinacy theorem.

- -Several set-theoretic principles about determinacy stronger than Borel determinacy are studied in descriptive set theory. They are closely related to large cardinal axioms.

- -The axiom of projective determinacy states that all projective subsets of a Polish space are determined. It is known to be unprovable in ZFC but relatively consistent with it and implied by certain large cardinal axioms. The existence of a measurable cardinal is enough to imply over ZFC that all analytic subsets of Polish spaces are determined.

- -The axiom of determinacy states that all subsets of all Polish spaces are determined. It is inconsistent with ZFC but in ZF + DC (Zermelo–Fraenkel set theory plus the axiom of dependent choice) it is equiconsistent with certain large cardinal axioms. diff --git a/wiki/wikipedia/832.txt b/wiki/wikipedia/832.txt deleted file mode 100644 index bbd00b1806dc90d0e0ab45e7b64639339fc049fd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/832.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematical finite group theory, the Brauer–Fowler theorem, proved by Brauer and Fowler, states that if a group G has even order g > 2 then it has a proper subgroup of order greater than $g^{1/3}$. The technique of the proof is to count involutions (elements of order 2) in G. Perhaps more important is another result that the authors derive from the same count of involutions, namely that

- -up to isomorphism there are only a finite number of finite simple groups with a given centralizer of an involution. This suggested that finite simple groups could be classified by studying their centralizers of involutions, and it led to the discovery of several sporadic groups. Later it motivated a part of the classification of finite simple groups. diff --git a/wiki/wikipedia/833.txt b/wiki/wikipedia/833.txt deleted file mode 100644 index 1f485f79d9879e9f148519e6dedfc7fde8dc1aec..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/833.txt +++ /dev/null @@ -1,285 +0,0 @@ -In mathematical logic, proof compression by RecycleUnits is a method for compressing propositional logic resolution proofs.

- -Its main idea is to make use of intermediate (i.e., non-input) proof results that are unit clauses, i.e. clauses containing only one literal. Certain proof nodes can be replaced with the nodes representing these unit clauses.

- -After this operation the obtained graph is transformed into a valid proof.

- -The output proof is shorter than the original while being equivalent or stronger.

- -The algorithms treat resolution proofs as directed acyclic graphs, where each node is labeled by a clause and each node has either one or two predecessors called parents. If a node has two parents it is also labeled with a propositional variable called the pivot, which was used to compute the node's clause using resolution.
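A concrete (hypothetical) Python rendering of this representation; the class and field names are our own, and clauses are stored as sets of signed integer literals:

```python
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class ProofNode:
    """A node of a resolution-proof DAG. `clause` is a set of signed
    integer literals (-v encodes the negation of variable v). Leaves
    have no parents; an inner node has one parent, or two parents
    together with the pivot variable of its resolution step."""
    clause: Set[int]
    left: Optional["ProofNode"] = None
    right: Optional["ProofNode"] = None
    pivot: Optional[int] = None

def resolve(left: Set[int], right: Set[int], pivot: int) -> Set[int]:
    # By the convention assumed below, the left parent contains the
    # positive pivot literal and the right parent the negative one.
    return (left - {pivot}) | (right - {-pivot})
```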
    - -The following algorithm describes the replacement of nodes.
- -It is assumed that in the resolution proof, for all non-leaf nodes with two parent nodes, the left parent node contains the positive and the right parent node the negative pivot variable.

- -The algorithm first iterates over all non-leaf unit clauses and then over all non-ancestor nodes of the proof. If the node's pivot element is the variable of the present unit clause's literal, one of the parent nodes can be replaced by the node corresponding to the unit clause. Because of the above assumption, if the literal is equal to the pivot, the left parent contains the literal and can be replaced by the unit clause node. If the literal is equal to the negation of the pivot, the right parent is replaced.

- -function RecycleUnits(Proof $P$):

- -    Let $U$ be the set of non-leaf nodes representing unit clauses

- -    for each $u \in U$ do

- -        Mark the ancestors of $u$

- -        for each unmarked $n \in P$ do

- -            let $p$ be the pivot variable of $n$

- -            let $l$ be the literal contained in the clause of $u$

- -            if $p == l$ then

- -                replace the left parent of $n$ with $u$

- -            else if $\neg p == l$ then

- -                replace the right parent of $n$ with $u$

- -In general, after execution of this function the proof won't be a legal proof anymore.

- -The following algorithm takes the root node of a proof and constructs a legal proof out of it.

- -The computation begins with recursive calls to the children nodes. To minimize the number of calls, the algorithm keeps track of which nodes have already been visited. Note that a resolution proof can be seen as a general directed acyclic graph, as opposed to a tree.

- -After the recursive calls the clause of the present node is updated. While doing so, four different cases can occur: the present pivot variable can occur in both parent nodes, in only the left or only the right one, or in neither. If it occurs in both parent nodes, the clause is calculated as the resolvent of the parent clauses.

- -If it is not present in one of the parent nodes, the clause of that parent can be copied. If it is missing from both parents, one has to choose heuristically.

- -function ReconstructProof(Node $n$):

- -    if $n$ is visited return

- -    mark $n$ as visited

- -    if $n$ has no parents return

- -    else if $n$ has only one parent $x$ then

- -        ReconstructProof($x$)

- -        $n$.Clause = $x$.Clause

- -    else

- -        let $l$ be the left and $r$ the right parent node

- -        let $p$ be the pivot variable used to compute $n$

- -        ReconstructProof($l$)

- -        ReconstructProof($r$)

- -        if $p \in l.Clause$ and $p \in r.Clause$

- -            $n$.Clause = Resolve($l$,$r$,$p$)

- -        else if $p \in l.Clause$ and $p \notin r.Clause$

- -            $n$.Clause = $r$.Clause

- -            delete reference to $l$

- -        else if $p \in r.Clause$ and $p \notin l.Clause$

- -            $n$.Clause = $l$.Clause

- -            delete reference to $r$

- -        else

- -            let $x \in \{l,r\}$ and $y \in \{l,r\} \setminus \{x\}$ //choose x heuristically

- -            $n$.Clause = $x$.Clause

- -            delete reference to $y$

- -Consider the following resolution proof.
    - -One intermediate result is $C_8$ which is representing the unit clause (-1). - - - -(1)\cfrac{ - -(2)\cfrac{ - -(1)\cfrac{C_1 (1,3)\qquad C_2 (-1,2,5)}{C_3 (2,3,5)} - -\qquad - -C_4 (1,-2) - -} - -{C_7 (1,3,5)} - -\qquad - -(4)\cfrac{C_5 (-1,4) \qquad C_6 (-1,-4)}{\color{red}C_8 (-1)} - -} - -{ - -C_9 (3,5) - -} - - - -There is one non-ancestor node using the variable 1 as a pivot element: $C_3$. - - - -(1)\cfrac{ - -(2)\cfrac{ - -{\color{red}(1)}\cfrac{C_1 (1,3)\qquad C_2 (-1,2,5)}{C_3 (2,3,5)} - -\qquad - -C_4 (1,-2) - -} - -{C_7 (1,3,5)} - -\qquad - -(4)\cfrac{C_5 (-1,4) \qquad C_6 (-1,-4)}{C_8 (-1)} - -} - -{ - -C_9 (3,5) - -} - - - -The literal -1 is contained in the right parent of this node and therefore this parent is replaced by $C_8$. The string ${C_8}^*$ denotes a reference to the clause $C_8$ (the structure is now a directed acyclic graph rather than a tree). - - - -(1)\cfrac{ - -(2)\cfrac{ - -(1)\cfrac{C_1 (1,3)\qquad {\color{red} {C_8}^*}}{C_3 (2,3,5)} - -\qquad - -C_4 (1,-2) - -} - -{C_7 (1,3,5)} - -\qquad - -(4)\cfrac{C_5 (-1,4) \qquad C_6 (-1,-4)}{C_8 (-1)} - -} - -{ - -C_9 (3,5) - -} - - - -This structure is not a legal proof anymore, because $C_3$ is not the resolvent of $C_1$ and $C_8$. Therefore it has to be transformed into one again.
    - -The first step is to update $C_3$. As the pivot variable 1 appears in both parent nodes, $C_3$ is computed as the resolvent of them. - - - -(1)\cfrac{ - -(2)\cfrac{ - -(1)\cfrac{C_1 (1,3)\qquad {C_8}^*}{C_3 {\color{red}(3)}} - -\qquad - -C_4 (1,-2) - -} - -{C_7 (1,3,5)} - -\qquad - -(4)\cfrac{C_5 (-1,4) \qquad C_6 (-1,-4)}{C_8 (-1)} - -} - -{ - -C_9 (3,5) - -} - - - -The left parent node of $C_7$ does not contain the pivot variable and therefore the clause of this parent is copied into the clause of $C_7$. The link between $C_7$ and $C_4$ is removed and since there are no other links to $C_4$ this node can be deleted. - - - -(1)\cfrac{ - -\cfrac{ - -(1)\cfrac{C_1 (1,3)\qquad {C_8}^*}{C_3 (3)} - -} - -{C_7 {\color{red}(3)}} - -\qquad - -(4)\cfrac{C_5 (-1,4) \qquad C_6 (-1,-4)}{C_8 (-1)} - -} - -{ - -C_9 (3,5) - -} - - - -Again the left parent of $C_9$ does not contain the pivot variable and the same operation is performed as before. - - - -\cfrac{ - -\cfrac{ - -(1)\cfrac{C_1 (1,3)\qquad (4)\cfrac{C_5 (-1,4) \qquad C_6 (-1,-4)}{C_8 (-1)}}{C_3 (3)} - -} - -{C_7 (3)} - -} - -{ - -C_9 {\color{red}(3)} - -} - - - -Note: the reference ${C_8}^*$ was replaced by the actual proof node $C_8$.
    - -The result of this proof is the unit clause (3) which is a stronger result than the clause (3,5) of the original proof. diff --git a/wiki/wikipedia/834.txt b/wiki/wikipedia/834.txt deleted file mode 100644 index da1db8f79f084bf6bbb45eaa21696102de56c0ce..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/834.txt +++ /dev/null @@ -1,235 +0,0 @@ -In numerical analysis, Lagrange polynomials are used for polynomial interpolation. For a given set of points $(x_j,y_j)$ with no two $x_j$ values equal, the Lagrange polynomial is the polynomial of lowest degree that assumes at each value $x_j$ the corresponding value $y_j$. - -Although named after Joseph-Louis Lagrange, who published it in 1795, the method was first discovered in 1779 by Edward Waring. It is also an easy consequence of a formula published in 1783 by Leonhard Euler. - -Uses of Lagrange polynomials include the Newton–Cotes method of numerical integration and Shamir's secret sharing scheme in cryptography. - -Lagrange interpolation is susceptible to Runge's phenomenon of large oscillation. As changing the points $x_j$ requires recalculating the entire interpolant, it is often easier to use Newton polynomials instead. - -Given a set of k + 1 data points -$$ -(x_0, y_0),\ldots,(x_j, y_j),\ldots,(x_k, y_k) -$$ - -where no two $x_j$ are the same, the interpolation polynomial in the Lagrange form is a linear combination -$$ -L(x) := \sum_{j=0}^{k} y_j \ell_j(x) -$$ - -of Lagrange basis polynomials -$$ -\ell_j(x) := \prod_{\begin{smallmatrix}0\le m\le k\\ m\neq j\end{smallmatrix}} \frac{x-x_m}{x_j-x_m} = \frac{(x-x_0)}{(x_j-x_0)} \cdots \frac{(x-x_{j-1})}{(x_j-x_{j - 1})} \frac{(x-x_{j+1})}{(x_j-x_{j+1})} \cdots \frac{(x-x_k)}{(x_j-x_k)}, -$$ - -where $0\le j\le k$. Note how, given the initial assumption that no two $x_j$ are the same, then (when $m\neq j$) $x_j - x_m \neq 0$, so this expression is always well-defined. The reason pairs $x_i = x_j$ with $y_i\neq y_j$ are not allowed is that no interpolation function $L$ such that $y_i = L(x_i)$ would exist; a function can only get one value for each argument $x_i$. On the other hand, if also $y_i = y_j$, then those two points would actually be one single point. - -For all $i\neq j$, $\ell_j(x)$ includes the term $(x-x_i)$ in the numerator, so the whole product will be zero at $x=x_i$: -$$ -\forall ({j\ne i}): \ell_{j}(x_i) = \prod_{m\neq j} \frac{x_i-x_m}{x_j-x_m} = \frac{(x_i-x_0)}{(x_j-x_0)} \cdots \frac{(x_i-x_i)}{(x_j-x_i)} \cdots \frac{(x_i-x_k)}{(x_j-x_k)} = 0. -$$ - -On the other hand, - -\ell_j(x_j) := \prod_{m\neq j} \frac{x_j-x_m}{x_j-x_m} = 1 - -In other words, all basis polynomials are zero at $x=x_j$, except $\ell_j(x)$, for which it holds that $\ell_j(x_j)=1$, because it lacks the $(x-x_j)$ term. - -It follows that $y_j \ell_j(x_j)=y_j$, so at each point $x_j$, $L(x_j)=y_j+0+0+\dots +0=y_j$, showing that $L$ interpolates the function exactly. - -The function L(x) being sought is a polynomial in x of the least degree that interpolates the given data set; that is, it assumes the value yj at the corresponding xj for all data points j: -$$ -L(x_j) = y_j \qquad j=0,\ldots,k. -$$ - -Observe that: - -* In $\ell_j(x)$ there are k factors in the product and each factor contains one x, so L(x) (which is a sum of these k-degree polynomials) must be a polynomial of degree at most k. - -* \ell_j(x_i) - -= \prod_{\begin{smallmatrix}m=0\\ m\neq j\end{smallmatrix}}^{k} \frac{x_i-x_m}{x_j-x_m}. - - - -Expand this product. 
Since the product omits the term where m = j, if i = j then all terms that appear are $\frac{x_j-x_m}{x_j-x_m} = 1$. Also, if i ≠ j then one term in the product will be (for m = i), $\frac{x_i-x_i}{x_j-x_i} = 0$, zeroing the entire product. So, - -\ell_j(x_i) - -= \delta_{ji} = \begin{cases} - -1, & \text{if } j=i \\ - -0, & \text{if } j \ne i \end{cases}, - - - -where $\delta_{ij}$ is the Kronecker delta. So: -$$ -L(x_i) = \sum_{j=0}^k y_j \ell_j(x_i) = \sum_{j=0}^{k} y_j \delta_{ji} = y_i. -$$ - -Thus the function L(x) is a polynomial with degree at most k and where L(xi) = yi. - -Additionally, the interpolating polynomial is unique, as shown by the unisolvence theorem at the polynomial interpolation article. - -It's also true that: -$$ - \sum_{j=0}^k \ell_j(x) = 1 \qquad \forall x -$$ - -since it must be a polynomial of degree, at most, k and passes through all these k + 1 data points: -$$ -(x_0, 1),\ldots,(x_j, 1),\ldots,(x_k, 1) -$$ - -resulting in a horizontal line, since a straight line is the only polynomial of degree less than k + 1 that passes through k + 1 aligned points. - -Solving an interpolation problem leads to a problem in linear algebra amounting to inversion of a matrix. Using a standard monomial basis for our interpolation polynomial $L(x) = \sum_{j=0}^k x^j m_j$, we must invert the Vandermonde matrix $(x_i)^j$ to solve $L(x_i) = y_i$ for the coefficients $m_j$ of $L(x)$. By choosing a better basis, the Lagrange basis, $L(x) = \sum_{j=0}^k l_j(x) y_j$, we merely get the identity matrix, $\delta_{ij}$, which is its own inverse: the Lagrange basis automatically inverts the analog of the Vandermonde matrix. - -This construction is analogous to the Chinese Remainder Theorem. Instead of checking for remainders of integers modulo prime numbers, we are checking for remainders of polynomials when divided by linears. - -Furthermore, when the order is large, Fast Fourier Transformation can be used to solve for the coefficients of the interpolated polynomial. - -We wish to interpolate ƒ(x) = x2 over the range 1 ≤ x ≤ 3, given these three points: - - - -\begin{align} - -x_0 & = 1 & & & f(x_0) & = 1 \\ - -x_1 & = 2 & & & f(x_1) & = 4 \\ - -x_2 & = 3 & & & f(x_2) & =9. - -\end{align} - - - -The interpolating polynomial is: - - \begin{align} - -L(x) &= {1}\cdot{x - 2 \over 1 - 2}\cdot{x - 3 \over 1 - 3}+{4}\cdot{x - 1 \over 2 - 1}\cdot{x - 3 \over 2 - 3}+{9}\cdot{x - 1 \over 3 - 1}\cdot{x - 2 \over 3 - 2} \\[10pt] - -&= x^2. - -\end{align} - -We wish to interpolate ƒ(x) = x3 over the range 1 ≤ x ≤ 4, given these four points: - -The interpolating polynomial is: - - \begin{align} - -L(x) &= {1}\cdot{x - 2 \over 1 - 2}\cdot{x - 3 \over 1 - 3}\cdot{x - 4 \over 1 - 4}+{8}\cdot{x - 1 \over 2 - 1}\cdot{x - 3 \over 2 - 3}\cdot{x - 4 \over 2 - 4}+{27}\cdot{x - 1 \over 3 - 1}\cdot{x - 2 \over 3 - 2}\cdot{x - 4 \over 3 - 4}+{64}\cdot{x - 1 \over 4 - 1}\cdot{x - 2\over 4 - 2}\cdot{x - 3 \over 4 - 3} \\[8pt] - -&= x^3 - -\end{align} - -The Lagrange form of the interpolation polynomial shows the linear character of polynomial interpolation and the uniqueness of the interpolation polynomial. Therefore, it is preferred in proofs and theoretical arguments. Uniqueness can also be seen from the invertibility of the Vandermonde matrix, due to the non-vanishing of the Vandermonde determinant. - -But, as can be seen from the construction, each time a node xk changes, all Lagrange basis polynomials have to be recalculated. 
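For illustration (a small sketch, not part of the original text), the defining formula can be evaluated directly; for the three points of the first example below the interpolant is exactly x², so this prints 6.25 at x = 2.5:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate L(x) = sum_j y_j * l_j(x) directly from the definition."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        lj = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                lj *= (x - xm) / (xj - xm)
        total += yj * lj
    return total

# Three samples of f(x) = x^2: the degree-2 interpolant is x^2 itself.
print(lagrange_eval([1, 2, 3], [1, 4, 9], 2.5))  # 6.25
```

Each such evaluation costs O(k²) operations, which is one motivation for the barycentric form discussed next.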
A better form of the interpolation polynomial for practical (or computational) purposes is the barycentric form of the Lagrange interpolation (see below) or Newton polynomials.

- -Lagrange and other interpolation at equally spaced points, as in the example above, yield a polynomial oscillating above and below the true function. This behaviour tends to grow with the number of points, leading to a divergence known as Runge's phenomenon; the problem may be eliminated by choosing interpolation points at Chebyshev nodes.

- -The Lagrange basis polynomials can be used in numerical integration to derive the Newton–Cotes formulas.

- -Using
$$
\ell(x) = (x - x_0)(x - x_1) \cdots (x - x_k)
$$
$$
\ell'(x_j) = \frac{\mathrm{d} \ell(x)}{\mathrm{d} x}\Big|_{x=x_j} = \prod_{i=0,i \neq j}^k(x_j-x_i)
$$

- -we can rewrite the Lagrange basis polynomials as
$$
\ell_j(x) = \frac{\ell(x)}{\ell'(x_j)(x-x_j)}
$$

- -or, by defining the barycentric weights
$$
w_j = \frac{1}{\ell'(x_j)}
$$

- -we can simply write
$$
\ell_j(x) = \ell(x)\frac{w_j}{x-x_j}
$$

- -which is commonly referred to as the first form of the barycentric interpolation formula.

- -The advantage of this representation is that the interpolation polynomial may now be evaluated as
$$
L(x) = \ell(x) \sum_{j=0}^k \frac{w_j}{x-x_j}y_j
$$

- -which, if the weights $w_j$ have been pre-computed, requires only $\mathcal O(k)$ operations (evaluating $\ell(x)$ and the weights $w_j/(x-x_j)$) as opposed to $\mathcal O(k^2)$ for evaluating the Lagrange basis polynomials $\ell_j(x)$ individually.

- -The barycentric interpolation formula can also easily be updated to incorporate a new node $x_{k+1}$ by dividing each of the $w_j$, $j=0 \dots k$ by $(x_j - x_{k+1})$ and constructing the new $w_{k+1}$ as above.

- -We can further simplify the first form by first considering the barycentric interpolation of the constant function $g(x)\equiv 1$:
$$
g(x) = \ell(x) \sum_{j=0}^k \frac{w_j}{x-x_j}.
$$

- -Dividing $L(x)$ by $g(x)$ does not modify the interpolation, yet yields
$$
L(x) = \frac{\sum_{j=0}^k \frac{w_j}{x-x_j}y_j}{\sum_{j=0}^k \frac{w_j}{x-x_j}}
$$

- -which is referred to as the second form or true form of the barycentric interpolation formula. This second form has the advantage that $\ell(x)$ need not be evaluated for each evaluation of $L(x)$.

- -When interpolating a given function f by a polynomial of degree k at the nodes $x_0,...,x_k$ we get the remainder $R(x) = f(x) - L(x)$ which can be expressed as
$$
 R(x) = f[x_0,\ldots,x_k,x] \ell(x) = \ell(x) \frac{f^{(k+1)}(\xi)}{(k+1)!}, \quad \quad x_0 < \xi < x_k,
$$

- -where $f[x_0,\ldots,x_k,x]$ is the notation for divided differences. Alternatively, the remainder can be expressed as a contour integral in the complex domain as
$$
R(x) = \frac{\ell(x)}{2\pi i} \int_C \frac{f(t)}{(t-x)(t-x_0) \cdots (t-x_k)} dt = \frac{\ell(x)}{2\pi i} \int_C \frac{f(t)}{(t-x)\ell(t)} dt.
$$

- -The remainder can be bounded as
$$
|R(x)| \leq \frac{(x_k-x_0)^{k+1}}{(k+1)!}\max_{x_0 \leq \xi \leq x_k} |f^{(k+1)}(\xi)|.
$$

- -Clearly, $R(x)$ is zero at the nodes. To find $R(x)$ at a point $x_p $, define a new function $F(x)=R(x)-\tilde{R}(x)=f(x)-L(x)-\tilde{R}(x)$ and choose $\tilde{R}(x)=C\cdot\prod_{i=0}^k(x-x_i)$ where $C$ is the constant we are required to determine for a given $x_p$. We choose $C$ so that $F(x)$ has $k+2$ zeroes (at all nodes and $x_p$) between $x_0$ and $x_k$ (including endpoints).
Assuming that $f(x)$ is $k+1$-times differentiable, since $L(x)$ and $\tilde{R}(x)$ are polynomials, and therefore infinitely differentiable, $F(x)$ will be $k+1$-times differentiable. By Rolle's theorem, $F^{(1)}(x)$ has $k+1$ zeroes, $F^{(2)}(x)$ has $k$ zeroes, ..., and $F^{(k+1)}(x)$ has 1 zero, say $\xi$, with $x_0<\xi<x_k$. Since $L(x)$ has degree at most $k$ and $\tilde{R}^{(k+1)}(x)=C\,(k+1)!$, evaluating $0=F^{(k+1)}(\xi)=f^{(k+1)}(\xi)-C\,(k+1)!$ gives $C=\frac{f^{(k+1)}(\xi)}{(k+1)!}$, which recovers the remainder formula stated above.

If $V$ and $W$ are vector spaces and $T$ is a linear map from $V$ to $W$, and 0W is the zero vector of W, then the kernel of T is the preimage of the zero subspace {0W}; that is, the subset of V consisting of all those elements of V that are mapped by T to the element 0W. The kernel is usually denoted as ker T, or some variation thereof:
$$
 \ker T = \{\mathbf{v} \in V : T(\mathbf{v}) = \mathbf{0}_{W}\} .
$$

- -Since a linear map preserves zero vectors, the zero vector 0V of V must belong to the kernel. The transformation T is injective if and only if its kernel is reduced to the zero subspace.

- -The kernel ker T is always a linear subspace of V. Thus, it makes sense to speak of the quotient space V/(ker T). The first isomorphism theorem for vector spaces states that this quotient space is naturally isomorphic to the image of T (which is a subspace of W). As a consequence, the dimension of V equals the dimension of the kernel plus the dimension of the image.

- -If V and W are finite-dimensional and bases have been chosen, then T can be described by a matrix M, and the kernel can be computed by solving the homogeneous system of linear equations Mv = 0. In this case, the kernel of T may be identified with the kernel of the matrix M, also called the "null space" of M. The dimension of the null space, called the nullity of M, is given by the number of columns of M minus the rank of M, as a consequence of the rank–nullity theorem.

- -Solving homogeneous differential equations often amounts to computing the kernel of certain differential operators.

- -For instance, in order to find all twice-differentiable functions f from the real line to itself such that
$$
x f(x) + 3 f'(x) = f(x),
$$

- -let V be the space of all twice differentiable functions, let W be the space of all functions, and define a linear operator T from V to W by
$$
(Tf)(x) = x f(x) + 3 f'(x) - f(x)
$$

- -for f in V and x an arbitrary real number.

- -Then all solutions to the differential equation are in ker T.

- -One can define kernels for homomorphisms between modules over a ring in an analogous manner. This includes kernels for homomorphisms between abelian groups as a special case. This example captures the essence of kernels in general abelian categories; see Kernel (category theory).

- -Let G and H be groups and let f be a group homomorphism from G to H. If eH is the identity element of H, then the kernel of f is the preimage of the singleton set {eH}; that is, the subset of G consisting of all those elements of G that are mapped by f to the element eH.

- -The kernel is usually denoted ker f (or a variation). In symbols:
$$
 \ker f = \{g \in G : f(g) = e_{H}\} .
$$

- -Since a group homomorphism preserves identity elements, the identity element eG of G must belong to the kernel.

- -The homomorphism f is injective if and only if its kernel is only the singleton set {eG}. If f were not injective, then its kernel would contain more than the identity: there would exist $a, b \in G$ such that $a \neq b$ and $f(a) = f(b)$. Thus $f(a)f(b)^{-1} = e_H$. f is a group homomorphism, so inverses and group operations are preserved, giving $f\left(ab^{-1}\right) = e_H$; in other words, $ab^{-1} \in \ker f$, and ker f would not be the singleton.
Conversely, distinct elements of the kernel violate injectivity directly: if there existed an element $g \neq e_G \in \ker f$, then $f(g) = f(e_G) = e_H$, thus f would not be injective.

- -ker f is a subgroup of G and further it is a normal subgroup. Thus, there is a corresponding quotient group G/(ker f). This is isomorphic to f(G), the image of G under f (which is a subgroup of H also), by the first isomorphism theorem for groups.

- -In the special case of abelian groups, there is no deviation from the previous section.

- -Let G be the cyclic group on 6 elements {0, 1, 2, 3, 4, 5} with modular addition, H be the cyclic group on 2 elements {0, 1} with modular addition, and f the homomorphism that maps each element g in G to the element g modulo 2 in H. Then ker f = {0, 2, 4}, since all these elements are mapped to 0H. The quotient group G/(ker f) has two elements: {0, 2, 4} and {1, 3, 5}. It is indeed isomorphic to H.

- -Let R and S be rings (assumed unital) and let f be a ring homomorphism from R to S.

- -If 0S is the zero element of S, then the kernel of f is its kernel as a linear map over the integers, or, equivalently, as additive groups. It is the preimage of the zero ideal {0S}; that is, the subset of R consisting of all those elements of R that are mapped by f to the element 0S.

- -The kernel is usually denoted ker f (or a variation).

- -In symbols:
$$
 \operatorname{ker} f = \{r \in R : f(r) = 0_{S}\}\mbox{.}
$$

- -Since a ring homomorphism preserves zero elements, the zero element 0R of R must belong to the kernel.

- -The homomorphism f is injective if and only if its kernel is only the singleton set {0R}.

- -This is always the case if R is a field, and S is not the zero ring.

- -Since ker f contains the multiplicative identity only when S is the zero ring, it turns out that the kernel is generally not a subring of R. The kernel is a subrng, and, more precisely, a two-sided ideal of R.

- -Thus, it makes sense to speak of the quotient ring R/(ker f).

- -The first isomorphism theorem for rings states that this quotient ring is naturally isomorphic to the image of f (which is a subring of S). (Note that rings need not be unital for the kernel definition).

- -To some extent, this can be thought of as a special case of the situation for modules, since these are all bimodules over a ring R:

- -* R itself;

- -* any two-sided ideal of R (such as ker f);

- -* any quotient ring of R (such as R/(ker f)); and

- -* the codomain of any ring homomorphism whose domain is R (such as S, the codomain of f).

- -However, the isomorphism theorem gives a stronger result, because ring isomorphisms preserve multiplication while module isomorphisms (even between rings) in general do not.

- -This example captures the essence of kernels in general Mal'cev algebras.

- -Let M and N be monoids and let f be a monoid homomorphism from M to N. Then the kernel of f is the subset of the direct product M × M consisting of all those ordered pairs of elements of M whose components are both mapped by f to the same element in N. The kernel is usually denoted ker f (or a variation thereof). In symbols:
$$
\operatorname{ker} f = \left\{\left(m, m'\right) \in M \times M : f(m) = f\left(m'\right)\right\}.
$$

- -Since f is a function, the elements of the form (m, m) must belong to the kernel. The homomorphism f is injective if and only if its kernel is only the diagonal set {(m, m) : m in M}.

- -It turns out that ker f is an equivalence relation on M, and in fact a congruence relation.
Thus, it makes sense to speak of the quotient monoid M/(ker f). The first isomorphism theorem for monoids states that this quotient monoid is naturally isomorphic to the image of f (which is a submonoid of N).

- -This is very different in flavour from the above examples. In particular, the preimage of the identity element of N is not enough to determine the kernel of f.

- -All the above cases may be unified and generalized in universal algebra.

- -Let A and B be algebraic structures of a given type and let f be a homomorphism of that type from A to B.

- -Then the kernel of f is the subset of the direct product A × A consisting of all those ordered pairs of elements of A whose components are both mapped by f to the same element in B.

- -The kernel is usually denoted ker f (or a variation).

- -In symbols:
$$
\operatorname{ker} f = \left\{\left(a, a'\right) \in A \times A : f(a) = f\left(a'\right)\right\}\mbox{.}
$$

- -Since f is a function, the elements of the form (a, a) must belong to the kernel.

- -The homomorphism f is injective if and only if its kernel is exactly the diagonal set {(a, a) : a ∈ A}.

- -It is easy to see that ker f is an equivalence relation on A, and in fact a congruence relation.

- -Thus, it makes sense to speak of the quotient algebra A/(ker f).

- -The first isomorphism theorem in general universal algebra states that this quotient algebra is naturally isomorphic to the image of f (which is a subalgebra of B).

- -Note that the definition of kernel here (as in the monoid example) doesn't depend on the algebraic structure; it is a purely set-theoretic concept.

- -For more on this general concept, outside of abstract algebra, see kernel of a function.

- -In the case of Malcev algebras, this construction can be simplified. Every Malcev algebra has a special neutral element (the zero vector in the case of vector spaces, the identity element in the case of commutative groups, and the zero element in the case of rings or modules). The characteristic feature of a Malcev algebra is that we can recover the entire equivalence relation ker f from the equivalence class of the neutral element.

- -To be specific, let A and B be Malcev algebraic structures of a given type and let f be a homomorphism of that type from A to B. If eB is the neutral element of B, then the kernel of f is the preimage of the singleton set {eB}; that is, the subset of A consisting of all those elements of A that are mapped by f to the element eB.

- -The kernel is usually denoted ker f (or a variation). In symbols:
$$
 \operatorname{ker} f = \{a \in A : f(a) = e_{B}\}\mbox{.}
$$

- -Since a Malcev algebra homomorphism preserves neutral elements, the identity element eA of A must belong to the kernel. The homomorphism f is injective if and only if its kernel is only the singleton set {eA}.

- -The notion of ideal generalises to any Malcev algebra (as linear subspace in the case of vector spaces, normal subgroup in the case of groups, two-sided ideals in the case of rings, and submodule in the case of modules).

- -It turns out that ker f is not a subalgebra of A, but it is an ideal.

- -Then it makes sense to speak of the quotient algebra A/(ker f).

- -The first isomorphism theorem for Malcev algebras states that this quotient algebra is naturally isomorphic to the image of f (which is a subalgebra of B).

- -The connection between this and the congruence relation for more general types of algebras is as follows.
- -First, the kernel-as-an-ideal is the equivalence class of the neutral element eA under the kernel-as-a-congruence. For the converse direction, we need the notion of quotient in the Mal'cev algebra (which is division on either side for groups and subtraction for vector spaces, modules, and rings).

- -Using this, elements a and b of A are equivalent under the kernel-as-a-congruence if and only if their quotient a/b is an element of the kernel-as-an-ideal.

- -Sometimes algebras are equipped with a nonalgebraic structure in addition to their algebraic operations.

- -For example, one may consider topological groups or topological vector spaces, which are equipped with a topology.

- -In this case, we would expect the homomorphism f to preserve this additional structure; in the topological examples, we would want f to be a continuous map.

- -The process may run into a snag with the quotient algebras, which may not be well-behaved.

- -In the topological examples, we can avoid problems by requiring that topological algebraic structures be Hausdorff (as is usually done); then the kernel (however it is constructed) will be a closed set and the quotient space will work fine (and also be Hausdorff).

- -The notion of kernel in category theory is a generalisation of the kernels of abelian algebras; see Kernel (category theory).

- -The categorical generalisation of the kernel as a congruence relation is the kernel pair.

- -(There is also the notion of difference kernel, or binary equaliser.) diff --git a/wiki/wikipedia/837.txt b/wiki/wikipedia/837.txt deleted file mode 100644 index b2e794bcb1fabeca5f42c44bf0363226c1a5be41..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/837.txt +++ /dev/null @@ -1,315 +0,0 @@ -A list of articles with mathematical proofs:

- -*Bertrand's postulate and a proof

- -*Estimation of covariance matrices

- -*Fermat's little theorem and some proofs

- -*Gödel's completeness theorem and its original proof

- -*Mathematical induction and a proof

- -*Proof that 0.999...
equals 1 - -*Proof that 22/7 exceeds π - -*Proof that e is irrational - -*Proof that π is irrational - -*Proof that the sum of the reciprocals of the primes diverges - -*Banach fixed-point theorem - -*Banach–Tarski paradox - -*Basel problem - -*Bolzano–Weierstrass theorem - -*Brouwer fixed-point theorem - -*Buckingham π theorem (proof in progress) - -*Burnside's lemma - -*Cantor's theorem - -*Cantor–Bernstein–Schroeder theorem - -*Cayley's formula - -*Cayley's theorem - -*Clique problem (to do) - -*Compactness theorem (very compact proof) - -*Erdős–Ko–Rado theorem - -*Euler's formula - -*Euler's four-square identity - -*Euler's theorem - -*Five color theorem - -*Five lemma - -*Fundamental theorem of arithmetic - -*Gauss–Markov theorem (brief pointer to proof) - -*Gödel's incompleteness theorem - -**Gödel's first incompleteness theorem - -**Gödel's second incompleteness theorem - -*Goodstein's theorem - -*Green's theorem (to do) - -**Green's theorem when D is a simple region - -*Heine–Borel theorem - -*Intermediate value theorem - -*Itô's lemma - -*Kőnig's lemma - -*Kőnig's theorem (set theory) - -*Kőnig's theorem (graph theory) - -*Lagrange's theorem (group theory) - -*Lagrange's theorem (number theory) - -*Liouville's theorem (complex analysis) - -*Markov's inequality (proof of a generalization) - -*Mean value theorem - -*Multivariate normal distribution (to do) - -*Holomorphic functions are analytic - -*Pythagorean theorem - -*Quadratic equation - -*Quotient rule - -*Ramsey's theorem - -*Rao–Blackwell theorem - -*Rice's theorem - -*Rolle's theorem - -*Splitting lemma - -*squeeze theorem - -*Sum rule in differentiation - -*Sum rule in integration - -*Sylow theorems - -*Transcendence of e and π (as corollaries of Lindemann–Weierstrass) - -*Tychonoff's theorem (to do) - -*Ultrafilter lemma - -*Ultraparallel theorem - -*Urysohn's lemma - -*Van der Waerden's theorem - -*Wilson's theorem - -*Zorn's lemma - -*Bellman–Ford algorithm (to do) - -*Euclidean algorithm - -*Kruskal's algorithm - -*Gale–Shapley algorithm - -*Prim's algorithm - -*Shor's algorithm (incomplete) - -*Basis (linear algebra) - -*Burrows–Abadi–Needham logic - -*Direct proof - -*Generating a vector space - -*Linear independence - -*Polynomial - -*Proof - -*Pumping lemma - -*Simpson's rule - -*Addition in N - -**associativity of addition in N - -**commutativity of addition in N - -**uniqueness of addition in N - -*Algorithmic information theory - -*Boolean ring - -**commutativity of a boolean ring - -*Boolean satisfiability problem - -**NP-completeness of the Boolean satisfiability problem - -*Cantor's diagonal argument - -**set is smaller than its power set - -**uncountability of the real numbers - -*Cantor's first uncountability proof - -**uncountability of the real numbers - -*Combinatorics - -*Combinatory logic - -*Co-NP - -*Coset - -*Countable - -**countability of a subset of a countable set (to do) - -*Angle of parallelism - -*Galois group - -**Fundamental theorem of Galois theory (to do) - -*Gödel number - -**Gödel's incompleteness theorem - -*Group (mathematics) - -*Halting problem - -**insolubility of the halting problem - -*Harmonic series (mathematics) - -**divergence of the (standard) harmonic series - -*Highly composite number - -*Area of hyperbolic sector, basis of hyperbolic angle - -*Infinite series - -**convergence of the geometric series with first term 1 and ratio 1/2 - -*Integer partition - -*Irrational number - -**irrationality of log23 - -**irrationality of the square root of 2 - -*Limit point - 
-*Mathematical induction

- -**sum identity

- -*Power rule

- -**differential of x^n

- -*Product and Quotient Rules

- -*Derivation of Product and Quotient rules for differentiating.

- -*Prime number

- -**Infinitude of the prime numbers

- -*Primitive recursive function

- -*Principle of bivalence

- -**no propositions are neither true nor false in intuitionistic logic

- -*Recursion

- -*Relational algebra (to do)

- -*Solvable group

- -*Square root of 2

- -*Tetris

- -*Algebra of sets

- -**idempotent laws for set union and intersection

- -*Cauchy's integral formula

- -*Cauchy integral theorem

- -*Computational geometry

- -*Fundamental theorem of algebra

- -*Lambda calculus

- -*Invariance of domain

- -*Minkowski inequality

- -*Nash embedding theorem

- -*Open mapping theorem (functional analysis)

- -*Product topology

- -*Riemann integral

- -*Time hierarchy theorem

- -**Deterministic time hierarchy theorem

- -*No-cloning theorem

- -*Torque diff --git a/wiki/wikipedia/838.txt b/wiki/wikipedia/838.txt deleted file mode 100644 index 28fd2e941d7f47bc86e789d7dc97577ec719af42..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/838.txt +++ /dev/null @@ -1,25 +0,0 @@ -Synchronize It! is file synchronization software which allows users to compare and synchronize folders that can be stored on the same computer, on different computers, in archives or on FTP sites. Various synchronization modes (actions) and comparison rules are available.

- -The most common application of Synchronize It! is to compare two folders — a source folder and a target folder. Users can choose whether subfolders should be included (allowing exclusion of specific subfolders), whether only matching folders should be compared, and users can apply file filters. Various actions and comparison modes are available (see below). Custom sessions can be saved and organized into projects.

- -Zip archives are supported internally and external archivers can be used to support other archive types. Archive support allows users to:

- -*create backup archives from folders;

- -* compare folder contents with an archived version and update it;

- -* compare and synchronize two archives with each other.

- -Synchronize It! can also be started and configured from the command line. This allows automation and integration in other tools, such as Total Commander.

- -The package feature can be used to synchronize distant, non-connected PCs. A list of the files contained in the source folder is stored in a small package file that can be kept on a USB stick. The package is then synchronized with the target folder on another computer. Files only present or modified on the target computer will be packed into the package. A package hence contains only the files that are different and not the entire folder, which makes it a tool to keep non-connected computers synchronized (e.g. a PC at work and a PC at home).

- -=== Reporting and printing ===

- -Results from comparison can be printed or exported and published as HTML reports.

- -Synchronize It! also works without installation and is only 2.5 MB in size. It can be run directly from a USB stick.

- -The following actions / synchronization modes are available:

- -1 Date comparison in all rules takes into account global time-related options, such as Ignore 2 secs difference etc.
diff --git a/wiki/wikipedia/839.txt b/wiki/wikipedia/839.txt deleted file mode 100644 index 74c975f7b35dda4210bbcd497860e64a01bc8928..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/839.txt +++ /dev/null @@ -1,51 +0,0 @@ -In topological graph theory, an embedding (also spelled imbedding) of a graph $G$ on a surface $\Sigma$ is a representation of $G$ on $\Sigma$ in which points of $\Sigma$ are associated with vertices and simple arcs (homeomorphic images of $[0,1]$) are associated with edges in such a way that: - -* the endpoints of the arc associated with an edge $e$ are the points associated with the end vertices of $e,$ - -* no arcs include points associated with other vertices, - -* two arcs never intersect at a point which is interior to either of the arcs. - -Here a surface is a compact, connected $2$-manifold. - -Informally, an embedding of a graph into a surface is a drawing of the graph on the surface in such a way that its edges may intersect only at their endpoints. It is well known that any finite graph can be embedded in 3-dimensional Euclidean space $\mathbb{R}^3$. A planar graph is one that can be embedded in 2-dimensional Euclidean space $\mathbb{R}^2.$ - -Often, an embedding is regarded as an equivalence class (under homeomorphisms of $\Sigma$) of representations of the kind just described. - -Some authors define a weaker version of the definition of "graph embedding" by omitting the non-intersection condition for edges. In such contexts the stricter definition is described as "non-crossing graph embedding". - -This article deals only with the strict definition of graph embedding. The weaker definition is discussed in the articles "graph drawing" and "crossing number". - -If a graph $G$ is embedded on a closed surface $\Sigma$, the complement of the union of the points and arcs associated with - -the vertices and edges of $G$ is a family of regions (or faces). A 2-cell embedding, cellular embedding or map is an embedding in which every face is homeomorphic to an open disk. A closed 2-cell embedding is an embedding in which the closure of every face is homeomorphic to a closed disk. - -The genus of a graph is the minimal integer $n$ such that the graph can be embedded in a surface of genus $n$. In particular, a planar graph has genus $0$, because it can be drawn on a sphere without self-crossing. The non-orientable genus of a graph is the minimal integer $n$ such that the graph can be embedded in a non-orientable surface of (non-orientable) genus $n$. - -The Euler genus of a graph is the minimal integer $n$ such that the graph can be embedded in an orientable surface of (orientable) genus $n/2$ or in a non-orientable surface of (non-orientable) genus $n$. A graph is orientably simple if its Euler genus is smaller than its non-orientable genus. - -The maximum genus of a graph is the maximal integer $n$ such that the graph can be $2$-cell embedded in an orientable surface of genus $n$. - -An embedded graph uniquely defines cyclic orders of edges incident to the same vertex. The set of all these cyclic orders is called a rotation system. Embeddings with the same rotation system are considered to be equivalent and the corresponding equivalence class of embeddings is called combinatorial embedding (as opposed to the term topological embedding, which refers to the previous definition in terms of points and curves). Sometimes, the rotation system itself is called a "combinatorial embedding". 
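As an illustration of how a rotation system determines an embedding (a sketch with our own encoding, not from the original article): store, for each vertex, the cyclic order of its neighbours; the face boundaries discussed in the next paragraph can then be traced mechanically.

```python
def faces(rotation):
    """Trace face boundaries from a rotation system. `rotation` maps each
    vertex to the cyclic order of its neighbours; after arriving at v
    from u, the traversal leaves along the neighbour following u in the
    rotation at v."""
    darts = {(u, v) for u in rotation for v in rotation[u]}
    result = []
    while darts:
        start = dart = min(darts)        # deterministic starting dart
        face = []
        while True:
            darts.discard(dart)
            face.append(dart)
            u, v = dart
            nbrs = rotation[v]
            dart = (v, nbrs[(nbrs.index(u) + 1) % len(nbrs)])
            if dart == start:
                break
        result.append(face)
    return result

# A planar rotation system for K4: the traversal yields 4 triangular
# faces, consistent with Euler's formula 4 - 6 + 4 = 2.
rot = {1: [2, 3, 4], 2: [1, 4, 3], 3: [1, 2, 4], 4: [1, 3, 2]}
print(len(faces(rot)))  # 4
```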
- -An embedded graph also defines natural cyclic orders of edges which constitute the boundaries of the faces of the embedding. However, handling these face-based orders is less straightforward, since in some cases some edges may be traversed twice along a face boundary. For example, this is always the case for embeddings of trees, which have a single face. To overcome this combinatorial nuisance, one may consider that every edge is "split" lengthwise in two "half-edges", or "sides". Under this convention, in all face boundary traversals each half-edge is traversed only once and the two half-edges of the same edge are always traversed in opposite directions.

- -Other equivalent representations for cellular embeddings include the ribbon graph, a topological space formed by gluing together topological disks for the vertices and edges of an embedded graph, and the graph-encoded map, an edge-colored cubic graph with four vertices for each edge of the embedded graph.

- -The problem of finding the graph genus is NP-hard (the problem of determining whether an $n$-vertex graph has genus $g$ is NP-complete).

- -At the same time, the graph genus problem is fixed-parameter tractable, i.e., polynomial time algorithms are known to check whether a graph can be embedded into a surface of a given fixed genus as well as to find the embedding.

- -The first breakthrough in this respect happened in 1979, when algorithms of time complexity $O(n^{O(g)})$ were independently submitted to the Annual ACM Symposium on Theory of Computing: one by I. Filotti and G.L. Miller and another one by John Reif. Their approaches were quite different, but upon the suggestion of the program committee they presented a joint paper. However, Wendy Myrvold and William Kocay proved in 2011 that the algorithm given by Filotti, Miller and Reif was incorrect.
diff --git a/wiki/wikipedia/84.txt b/wiki/wikipedia/84.txt deleted file mode 100644 index 3c9fc9b28f14c6621e2cdc474bee9c8a8a312de9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/84.txt +++ /dev/null @@ -1,9 +0,0 @@ -Otter is an automated theorem prover developed by William McCune at Argonne National Laboratory in Illinois. Otter was the first widely distributed, high-performance theorem prover for first-order logic, and it pioneered a number of important implementation techniques. Otter is an acronym for Organized Techniques for Theorem-proving and Effective Research. - -Otter is based on resolution and paramodulation, constrained by term orderings similar to those in the superposition calculus. The prover also supports positive and negative hyperresolution and a set-of-support strategy. Proof search is based on saturation using a version of the given-clause algorithm, and is controlled by several heuristics. There are also meta-heuristics that determine search parameters automatically. Otter also pioneered the use of efficient term indexing techniques to speed up the search for inference partners in large clause sets. - -Otter has been very stable for a number of years but is no longer actively developed. As of November 2008, the last changelog entry was dated 14 September 2004. A successor to Otter is Prover9. - -The software is in the public domain. The University of Chicago has declined to assert its copyrights in this software, and it may be used, modified, and redistributed (with or without modifications) by the public. However, "NEITHER THE UNITED STATES GOVERNMENT NOR ANY AGENCY THEREOF [...] REPRESENTS THAT ITS USE WOULD NOT INFRINGE PRIVATELY OWNED RIGHTS." - -According to Wos and Pieper, OTTER is written in approximately 28,000 lines of C. diff --git a/wiki/wikipedia/840.txt b/wiki/wikipedia/840.txt deleted file mode 100644 index 5ae27b7d4a34fcf442f0c21ac17466dabd36beaf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/840.txt +++ /dev/null @@ -1,31 +0,0 @@ -Ecstasy of Order: The Tetris Masters is a 2011 American documentary film that follows the lives of several gamers from around the country as they prepare to compete in the 2010 Classic Tetris World Championship held in Los Angeles, California. It recounts the development and rise of Tetris as one of the most-played video games of all time, the role it has played in shaping the lives of the gamers it chronicles, the mystery surrounding the whereabouts of former Nintendo World Champion Thor Aackerlund, and the conception and execution of the first ever Classic Tetris World Championship by gaming enthusiast Robin Mihara. - -The film was directed by Adam Cornelius and screened at over 15 film festivals both domestically and internationally, beginning with its World Premiere at the Austin Film Festival on October 21, 2011 and including an appearance at the International Documentary Film Festival Amsterdam, the largest film festival of its kind. It debuted on DVD and Video On Demand (VOD) on August 21, 2012. An expanded version of the film's soundtrack from composer Chris Pickolick has also been released. 
- -* Harry Hong - -* Jonas Neubauer - -* Robin Mihara - -* Dana Wilcox - -* Ben Mullen - -* Jesse Kelkar - -* Thor Aackerlund - -* Trey Harrison - -* Chris Tang - -* Matt Buco - -* Alex Kerr - -An Oregon-based Tetris enthusiast by the name of Robin Mihara, longing to track down the best Tetris players in the nation for the purpose of organizing a competition, happens upon a website called Twin Galaxies which has been tracking video game world records since the 1980s. From this website he learns of record-holders and top scorers like Jonas Neubauer & Harry Hong (both of whom have maxed out with a high score of 999,999), Ben Mullen (world-record holder for most lines with 296), Dana Wilcox, and Jesse Kelkar. In addition, he seeks to reconnect with several of the top players from the heyday of NES and Tetris—competitors from the 1990 Nintendo World Championships—Trey Harrison and Thor Aackerlund, the latter of whom claims to have reached Level 30. Little has been heard of Aackerlund since his rise to notoriety as the poster child for video gaming in the 1990s, and being successful at playing video games is described by Mihara in the film as "the cornerstone of a very difficult life [for Thor]." Despite his reluctance, Aackerlund puts the past behind him and agrees to participate. With the addition of competitors like Matt Buco and Bay Area Tetris Grandmaster Alex Kerr, the stage is set for the top eight competitors to go head-to-head in Los Angeles, vying for the title of Tetris Champion in the first ever Classic Tetris World Championship. - -The film integrates a fair amount of Tetris gameplay footage and strategy with the stories of the individual gamers and also features interviews with Tetris developer Alexey Pajitnov, Former Twin Galaxies Senior Referee Mr. Kelly R. Flewin, multi-platform champion gamer Chris Tang, and a special appearance by The Tetris Company CEO Henk Rogers. - -The film received the prestigious Audience Award for Documentary Feature at the 2011 Austin Film Festival, sharing the honor with "Stories from an Undeclared War." It received the award for Best Feature Film (any genre) at the 2012 Phoenix Comicon Film Festival. The film has received generally favorable reviews from critics. Dennis Harvey of Variety noted that it will "delight fans of the highly addictive game" and Allistair Pinsof of Flixist calls it "one of the best videogame films of all time." diff --git a/wiki/wikipedia/841.txt b/wiki/wikipedia/841.txt deleted file mode 100644 index 1bd6d454baca29b8acc3d10182caeb1626cc413f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/841.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematics, the Pompeiu problem is a conjecture in integral geometry, named for Dimitrie Pompeiu, who posed the problem in 1929, - -as follows. Suppose f is a nonzero continuous function defined on a Euclidean space, and K is a simply connected Lipschitz domain, such that the integral of f vanishes on every congruent copy of K. Then the domain is a ball. - -A special case is Schiffer's conjecture. diff --git a/wiki/wikipedia/842.txt b/wiki/wikipedia/842.txt deleted file mode 100644 index 3b5a29f03080f41af49c1aa69d18eed911bed5a5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/842.txt +++ /dev/null @@ -1,37 +0,0 @@ -Lamport's Distributed Mutual Exclusion Algorithm is a contention-based algorithm for mutual exclusion on a distributed system. - -# Every process maintains a queue of pending requests for entering the critical section. 
The queues are ordered by virtual time stamps derived from Lamport timestamps. - -Requesting process - -# Push its request onto its own queue (ordered by time stamps). - -# Send a request to every node. - -# Wait for replies from all other nodes. - -# If its own request is at the head of its queue and all replies have been received, enter the critical section. - -# Upon exiting the critical section, remove its request from the queue and send a release message to every process. - -Other processes - -# After receiving a request, push the request onto its own request queue (ordered by time stamps) and reply with a time stamp. - -# After receiving a release message, remove the corresponding request from its own request queue. - -This algorithm creates 3(N - 1) messages per request, or (N - 1) messages and 2 broadcasts. The 3(N - 1) messages per request comprise: - -* (N - 1) total number of requests - -* (N - 1) total number of replies - -* (N - 1) total number of releases - -This algorithm has several disadvantages. They are: - -* It is very unreliable, as the failure of any one of the processes will halt progress. - -* It has a high message complexity of 3(N − 1) messages per entry/exit into the critical section. diff --git a/wiki/wikipedia/843.txt b/wiki/wikipedia/843.txt deleted file mode 100644 index 4a74be177dbee36fb6be41fd9b4907d445693cc6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/843.txt +++ /dev/null @@ -1,23 +0,0 @@ -Blichfeldt's theorem is a mathematical theorem in the geometry of numbers, stating that whenever a bounded set in the Euclidean plane has area $A$, it can be translated so that it includes at least $\lceil A\rceil$ points of the integer lattice. Equivalently, the set contains $\lceil A\rceil$ points whose coordinates all differ by integers. It can be generalized to other lattices and to higher dimensions, and can be interpreted as a continuous version of the pigeonhole principle. It is named after Danish-American mathematician Hans Frederick Blichfeldt, who published it in 1914. Some sources call it Blichfeldt's principle or Blichfeldt's lemma. - -The theorem can be stated most simply for points in the Euclidean plane, and for the integer lattice in the plane. For this version of the theorem, let $S$ be any measurable set, let $A$ denote its area, and round this number up to the next integer value, $n=\lceil A\rceil $. Then Blichfeldt's theorem states that $S$ can be translated so that its translated copy contains at least $n$ points with integer coordinates. - -The basic idea of the proof is to cut $S$ into pieces according to the squares of the integer lattice, and to translate each of those pieces by an integer amount so that it lies within the unit square having the origin as its lower right corner. This translation may cause some pieces of the unit square to be covered more than once, but if the combined area of the translated pieces is counted with multiplicity it remains unchanged, equal to $A$. On the other hand, if every point of the unit square were covered with multiplicity at most $n-1$, the combined area counted with multiplicity would be at most $n-1$, less than $A$. Therefore, some point $p$ of the unit square must be covered with multiplicity at least $n$. A translation that takes $p$ to the origin will also take all of the $n$ points of $S$ that covered $p$ to integer points, which is what was required. 
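The counting argument can be illustrated numerically (an added sketch, not part of the original article; the disk and the sampling resolution are arbitrary choices). Sampling a disk of area $A \approx 3.8$ on a fine grid and grouping the samples by their fractional parts produces a group of at least $\lceil A\rceil = 4$ points whose pairwise differences are integer vectors, matching the equivalent form of the theorem discussed below:

```
import math
from collections import defaultdict

# Open disk of radius 1.1 centered at the origin; area ~ 3.80, so
# Blichfeldt guarantees 4 points whose differences are integer vectors.
R = 1.1
area = math.pi * R * R

step = 0.005                 # grid spacing; 1/step must be an integer
per_unit = round(1 / step)   # grid points per unit length
groups = defaultdict(list)
n = int(2 * R / step) + 1
for i in range(n):
    for j in range(n):
        x, y = -R + i * step, -R + j * step
        if x * x + y * y < R * R:
            # Two sample points share a key exactly when they differ
            # by an integer vector.
            groups[(i % per_unit, j % per_unit)].append((x, y))

best = max(groups.values(), key=len)
print(math.ceil(area), len(best))  # prints 4 and a group size >= 4
```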
- -More generally, the theorem applies to $d$-dimensional sets $S$, with $d$-dimensional volume $A$, and to an arbitrary $d$-dimensional lattice $\Lambda$ (a set of points in $d$-dimensional space that do not all lie in any lower dimensional subspace, are separated from each other by some minimum distance, and can be combined by adding or subtracting their coordinates to produce other points in the same set). Just as the integer lattice divides the plane into squares, an arbitrary lattice divides its space into fundamental regions (called parallelotopes) with the property that any one of these regions can be translated onto any other of them by adding the coordinates of a unique lattice point. If $L$ is the $d$-dimensional volume of one of these parallelotopes, then Blichfeldt's theorem states that $S$ can be translated to include at least $\lceil A/L\rceil$ points of $\Lambda$. The proof is as before: cut up $S$ by parallelotopes, translate the pieces by translation vectors in $\Lambda$ onto a single parallelotope without changing the total volume (counted with multiplicity), observe that there must be a point $p$ of multiplicity at least $\lceil A/L\rceil$, and use a translation that takes $p$ to the origin. - -Instead of asking for a translation for which there are $n$ lattice points, an equivalent form of the theorem states that $S$ itself contains a set of $n$ points, all of whose pairwise differences belong to the lattice. A strengthened version of the theorem applies to compact sets, and states that they can be translated to contain at least $\lfloor A+1\rfloor$ points of the lattice. This number of points differs from $\lceil A\rceil $ only when $A$ is an integer, for which it is larger by one. - -Minkowski's theorem, proved earlier than Blichfeldt's work by Hermann Minkowski, states that any convex set in the plane that is centrally symmetric around the origin, with area greater than four (or a compact symmetric set with area equal to four), contains a nonzero integer point. More generally, for a $d$-dimensional lattice $\Lambda$ whose fundamental parallelotopes have volume $L$, any set centrally symmetric around the origin with volume greater than $2^d L$ contains a nonzero lattice point. - -Although Minkowski's original proof was different, Blichfeldt's theorem can be used in a simple proof of Minkowski's theorem. Let $X$ be any centrally symmetric set with volume greater than $2^d L$ (meeting the conditions of Minkowski's theorem), and scale it down by a factor of two to obtain a set $\tfrac{1}{2}X$ of volume greater than $L$. By Blichfeldt's theorem, $\tfrac{1}{2}X$ has two points $p$ and $q$ whose coordinatewise difference belongs to $\Lambda$. Reversing the shrinking operation, $2p$ and $2q$ belong to $X$. By symmetry $-2q$ also belongs to $X$, and by convexity the midpoint of $2p$ and $-2q$ belongs to $X$. But this midpoint is $p-q$, a nonzero point of $\Lambda$. - -Many applications of Blichfeldt's theorem, like the application to Minkowski's theorem, involve finding a nonzero lattice point in a large-enough set, but one that is not convex. For the proof of Minkowski's theorem, the key relation between the sets $X$ and $\tfrac{1}{2}X$ that makes the proof work is that all differences of pairs of points in $\tfrac{1}{2}X$ belong to $X$. However, for a set $X$ that is not convex, $\tfrac{1}{2}X$ might have pairs of points whose difference does not belong to $X$, making it unusable in this technique. 
One could instead find the largest centrally symmetric convex subset $K\subset X$, and then apply Minkowski's theorem to $K$, or equivalently apply Blichfeldt's theorem to $\tfrac{1}{2}K$. However, in many cases a given non-convex set $X$ has a subset $Y\subset X$ that is larger than $\tfrac{1}{2}K$, whose pairwise differences belong to $X$. When this is the case, the larger size of $Y$ relative to $\tfrac{1}{2}K$ leads to tighter bounds on how big $X$ must be to be sure of containing a lattice point. - -For a centrally symmetric star domain, it is possible to use the calculus of variations to find the largest set $X'$ whose pairwise differences belong to $X$. Applications of this method include simultaneous Diophantine approximation, the problem of approximating a given set of irrational numbers by rational numbers that all have the same denominators. - -Analogues of Blichfeldt's theorem have been proven for other sets of points than lattices, showing that large enough regions contain many points from these sets. These include a theorem for Fuchsian groups (lattice-like subsets of $2\times 2$ matrices), and one for the sets of vertices of Archimedean tilings. - -Other generalizations replace the set $S$ by a measurable function, proving that its sum over some set of translated lattice points is at least as large as its integral, or replace the single set $S$ with a family of sets. - -A computational problem related to Blichfeldt's theorem has been shown to be complete for the PPP complexity class, and therefore unlikely to be solvable in polynomial time. The problem takes as input a set of integer vectors forming the basis of a $d$-dimensional lattice $\Lambda$, and a set $S$ of integer vectors, represented implicitly by a Boolean circuit for testing whether a given vector belongs to $S$. It is required that the cardinality of $S$, divided by the volume of the fundamental parallelotope of $\Lambda$, be at least one, from which a discrete version of Blichfeldt's theorem implies that $S$ includes a pair of points whose difference belongs to $\Lambda$. The task is to find either such a pair, or a point of $S$ that itself belongs to $\Lambda$. The computational hardness of this task motivates the construction of a candidate for a collision-resistant cryptographic hash function. diff --git a/wiki/wikipedia/844.txt b/wiki/wikipedia/844.txt deleted file mode 100644 index f14841d5cb8ce69267044715a04ab8f05308f78b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/844.txt +++ /dev/null @@ -1,91 +0,0 @@ -In combinatorial mathematics, the identity -$$ -\sum^n_{i=r}{i\choose r}={n+1\choose r+1} \qquad \text{ for } n,r\in\mathbb{N}, \quad n\geq r -$$ - -or equivalently, the mirror-image by the substitution $j\to i-r$: -$$ -\sum^{n-r}_{j=0}{j+r\choose r}=\sum^{n-r}_{j=0}{j+r\choose j}={n+1\choose n-r} \qquad \text{ for } n,r\in\mathbb{N}, \quad n\geq r -$$ - -is known as the hockey-stick or Christmas stocking identity. The name stems from the graphical representation of the identity on Pascal's triangle: when the addends represented in the summation and the sum itself are highlighted, the shape revealed is vaguely reminiscent of those objects (see hockey stick, Christmas stocking). - -The inductive and algebraic proofs both make use of Pascal's identity: -$$ -{n \choose k}={n-1\choose k-1}+{n-1\choose k}. -$$ - -This identity can be proven by mathematical induction on $n$. 
- -Base case - -Let $n=r$; -$$ -\sum^n_{i=r} {i\choose r} = \sum^r_{i=r}{i\choose r}={r\choose r} = 1 = {r+1\choose r+1} = {n+1\choose r+1}. -$$ - -Inductive step - -Suppose, for some $k\in\mathbb{N}, k \geqslant r$, -$$ -\sum^k_{i=r}{i\choose r}={k+1\choose r+1} -$$ - -Then -$$ -\sum^{k+1}_{i=r} {i\choose r} = \left(\sum^k_{i=r} {i\choose r} \right) + {k+1\choose r}={k+1\choose r+1}+{k+1\choose r}={k+2\choose r+1}. -$$ - -We use a telescoping argument to simplify the computation of the sum: - - - -\begin{align} - -\sum_{t=\color{blue}0}^n \binom{t}{k} - -=\sum_{t=\color{blue}k}^n\binom tk - -&= \sum_{t=k}^n\left[ \binom {t+1}{k+1}-\binom {t}{k+1}\right]\\ - -&=\sum_{t=\color{green}k}^{\color{green}n}\binom {\color{green}{t+1}}{k+1} - \sum_{t=k}^n \binom t{k+1}\\ - -&=\sum_{t=\color{green}{k+1}}^{\color{green}{n+1}}\binom {\color{green}{t}}{k+1} - \sum_{t=k}^n \binom t{k+1}\\ - -&=\binom{n+1}{k+1}-\underbrace{\binom k{k+1}}_0&&\text{by telescoping}\\ - -&=\binom{n+1}{k+1}. - -\end{align} - - - -Imagine that we are distributing $n$ indistinguishable candies to $k$ distinguishable children. By a direct application of the stars and bars method, there are -$$ -\binom{n+k-1}{ k-1} -$$ - -ways to do this. Alternatively, we can first give $0\leqslant i\leqslant n$ candies to the oldest child so that we are essentially giving $n-i$ candies to $k-1$ kids and again, with stars and bars and double counting, we have -$$ -\binom{n+k-1}{ k-1}=\sum_{i=0}^n\binom{n+k-2-i}{k-2}, -$$ - -which simplifies to the desired result by taking $n' = n+k-2$ and $r=k-2$, and noticing that $n'-n = k-2=r$: -$$ -\binom{n'+1}{ r+1}=\sum_{i=0}^n \binom {n'-i}r = \sum_{i=r}^{n'} \binom {i}r . -$$ - -We can form a committee of size $k+1$ from a group of $n+1$ people in -$$ - \binom{n+1}{k+1} -$$ - -ways. Now we hand out the numbers $1,2,3,\dots,n-k+1$ to $n-k+1$ of the $n+1$ people. We can divide this into $n-k+1$ disjoint cases. In general, in case $x$, $1\leqslant x\leqslant n-k+1$, person $x$ is on the committee and persons $1,2,3,\dots, x-1$ are not on the committee. This can be done in -$$ -\binom{n-x+1}{k} -$$ - -ways. Now we can sum the values of these $n-k+1$ disjoint cases, getting -$$ - \binom{n+1}{k+1} = \binom n k + \binom {n-1} k + \binom{n-2} k + \cdots + \binom{k+1} k+ \binom k k. -$$ diff --git a/wiki/wikipedia/845.txt b/wiki/wikipedia/845.txt deleted file mode 100644 index b993c6cb9e32d99508e1b0bbc0835963a6947eb5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/845.txt +++ /dev/null @@ -1,126 +0,0 @@ -In mathematics, particularly in functional analysis and convex analysis, the Ursescu theorem is a theorem that generalizes the closed graph theorem, the open mapping theorem, and the uniform boundedness principle. - -The following notation and notions are used, where $\mathcal{R} : X \rightrightarrows Y$ is a multivalued function and $S$ is a non-empty subset of a topological vector space $X$: - -* the affine span of $S$ is denoted by $\operatorname{aff} S$ and the linear span is denoted by $\operatorname{span} S.$ - -* $S^{i} := \operatorname{aint}_X S$ denotes the algebraic interior of $S$ in $X.$ - -* ${}^{i}S:= \operatorname{aint}_{\operatorname{aff}(S - S)} S$ denotes the relative algebraic interior of $S$ (i.e. the algebraic interior of $S$ in $\operatorname{aff}(S - S)$). - -* ${}^{ib}S := {}^{i}S$ if $\operatorname{span} \left(S - s_0\right)$ is barreled for some/every $s_0 \in S$ while ${}^{ib}S := \varnothing$ otherwise. 
- -** If $S$ is convex, then it can be shown that for any $x \in X,$ $x \in {}^{ib} S$ if and only if the cone generated by $S - x$ is a barreled linear subspace of $X$ or equivalently, if and only if $\cup_{n \in \N} n (S - x)$ is a barreled linear subspace of $X.$ - -* The domain of $\mathcal{R}$ is $\operatorname{Dom} \mathcal{R} := \{ x \in X : \mathcal{R}(x) \neq \varnothing \}.$ - -* The image of $\mathcal{R}$ is $\operatorname{Im} \mathcal{R} := \cup_{x \in X} \mathcal{R}(x).$ For any subset $A \subseteq X,$ $\mathcal{R}(A) := \cup_{x \in A} \mathcal{R}(x).$ - -* The graph of $\mathcal{R}$ is $\operatorname{gr} \mathcal{R} := \{ (x, y) \in X \times Y : y \in \mathcal{R}(x) \}.$ - -* $\mathcal{R}$ is closed (respectively, convex) if the graph of $\mathcal{R}$ is closed (resp. convex) in $X \times Y.$ - -** Note that $\mathcal{R}$ is convex if and only if for all $x_0, x_1 \in X$ and all $r \in [0, 1],$ $r \mathcal{R}\left(x_0\right) + (1 - r) \mathcal{R}\left(x_1\right) \subseteq \mathcal{R} \left(r x_0 + (1 - r) x_1\right).$ - -* The inverse of $\mathcal{R}$ is the multifunction $\mathcal{R}^{-1} : Y \rightrightarrows X$ defined by $\mathcal{R}^{-1}(y) := \{ x \in X : y \in \mathcal{R}(x) \}.$ For any subset $B \subseteq Y,$ $\mathcal{R}^{-1}(B) := \cup_{y \in B} \mathcal{R}^{-1}(y).$ - -** If $f : X \to Y$ is a function, then its inverse is the multifunction $f^{-1} : Y \rightrightarrows X$ obtained from canonically identifying $f$ with the multifunction $f : X \rightrightarrows Y$ defined by $x \mapsto \{ f(x)\}.$ - -* $\operatorname{int}_T S$ is the topological interior of $S$ with respect to $T,$ where $S \subseteq T.$ - -* $\operatorname{rint} S := \operatorname{int}_{\operatorname{aff} S} S$ is the interior of $S$ with respect to $\operatorname{aff} S.$ - -{{Math theorem|name=Theorem|note=Ursescu|math_statement= - -Let $X$ be a complete semi-metrizable locally convex topological vector space and $\mathcal{R} : X \rightrightarrows Y$ be a closed convex multifunction with non-empty domain. - -Assume that $\operatorname{span} (\operatorname{Im} \mathcal{R} - y)$ is a barrelled space for some/every $y \in \operatorname{Im} \mathcal{R}.$ - -Assume that $y_0 \in {}^{i}(\operatorname{Im} \mathcal{R})$ and let $x_0 \in \mathcal{R}^{-1}\left(y_0\right)$ (so that $y_0 \in \mathcal{R}\left(x_0\right)$). - -Then for every neighborhood $U$ of $x_0$ in $X,$ $y_0$ belongs to the relative interior of $\mathcal{R}(U)$ in $\operatorname{aff} (\operatorname{Im} \mathcal{R})$ (that is, $y_0 \in \operatorname{int}_{\operatorname{aff} (\operatorname{Im} \mathcal{R})} \mathcal{R}(U)$). - -In particular, if ${}^{ib}(\operatorname{Im} \mathcal{R}) \neq \varnothing$ then ${}^{ib}(\operatorname{Im} \mathcal{R}) = {}^{i}(\operatorname{Im} \mathcal{R}) = \operatorname{rint} (\operatorname{Im} \mathcal{R}).$ - -}} - -Let $X$ and $Y$ be Fréchet spaces and $T : X \to Y$ be a linear map. Then $T$ is continuous if and only if the graph of $T$ is closed in $X \times Y.$ - -For the non-trivial direction, assume that the graph of $T$ is closed and let $\mathcal{R} := T^{-1} : Y \rightrightarrows X.$ It is easy to see that $\operatorname{gr} \mathcal{R}$ is closed and convex and that its image is $X.$ - -Given $x \in X,$ $(Tx, x)$ belongs to $\operatorname{gr} \mathcal{R},$ so by the Ursescu theorem, for every open neighborhood $V$ of $Tx$ in $Y,$ $\mathcal{R}(V) = T^{-1}(V)$ is a neighborhood of $x$ in $X.$ - -Thus $T$ is continuous at $x.$ -$$ -\blacksquare -$$ - -Let $X$ and $Y$ be Fréchet spaces and $T : X \to Y$ be a bijective linear map. 
Then $T$ is continuous if and only if $T^{-1} : Y \to X$ is continuous. Furthermore, if $T$ is continuous then $T$ is an isomorphism of Fréchet spaces. - -Apply the closed graph theorem to $T$ and $T^{-1}.$ -$$ -\blacksquare -$$ - -Let $X$ and $Y$ be Fréchet spaces and $T : X \to Y$ be a continuous surjective linear map. Then $T$ is an open map. - -Clearly, $T$ is a closed and convex relation whose image is $Y.$ - -Let $U$ be a non-empty open subset of $X,$ let $y$ be in $T(U),$ and let $x$ in $U$ be such that $y = Tx.$ - -From the Ursescu theorem it follows that $T(U)$ is a neighborhood of $y.$ -$$ -\blacksquare -$$ - -The following notation and notions are used for these corollaries, where $\mathcal{R} : X \rightrightarrows Y$ is a multifunction, $S$ is a non-empty subset of a topological vector space $X$: - -* a convex series with elements of $S$ is a series of the form $\sum_{i=1}^\infty r_i s_i$ where all $s_i \in S,$ the $r_i$ are non-negative, and $\sum_{i=1}^\infty r_i = 1.$ If $\sum_{i=1}^\infty r_i s_i$ converges then the series is called convergent while if $\left(s_i\right)_{i=1}^{\infty}$ is bounded then the series is called bounded, or b-convex. - -* $S$ is ideally convex if any convergent b-convex series of elements of $S$ has its sum in $S.$ - -* $S$ is lower ideally convex if there exists a Fréchet space $Y$ such that $S$ is equal to the projection onto $X$ of some ideally convex subset $B$ of $X \times Y.$ Every ideally convex set is lower ideally convex. - -Let $X$ be a barreled first countable space and let $C$ be a subset of $X.$ Then: - -# If $C$ is lower ideally convex then $C^{i} = \operatorname{int} C.$ - -# If $C$ is ideally convex then $C^{i} = \operatorname{int} C = \operatorname{int} \left(\operatorname{cl} C\right) = \left(\operatorname{cl} C\right)^i.$ - -{{Math theorem|name=Simons' theorem|note=|math_statement= - -Let $X$ and $Y$ be first countable with $X$ locally convex. Suppose that $\mathcal{R} : X \rightrightarrows Y$ is a multimap with non-empty domain that satisfies condition (Hwx) or else assume that $X$ is a Fréchet space and that $\mathcal{R}$ is lower ideally convex. - -Assume that $\operatorname{span} (\operatorname{Im} \mathcal{R} - y)$ is barreled for some/every $y \in \operatorname{Im} \mathcal{R}.$ - -Assume that $y_0 \in {}^{i}(\operatorname{Im} \mathcal{R})$ and let $x_0 \in \mathcal{R}^{-1}\left(y_0\right).$ - -Then for every neighborhood $U$ of $x_0$ in $X,$ $y_0$ belongs to the relative interior of $\mathcal{R}(U)$ in $\operatorname{aff} (\operatorname{Im} \mathcal{R})$ (i.e. $y_0 \in \operatorname{int}_{\operatorname{aff} (\operatorname{Im} \mathcal{R})} \mathcal{R}(U)$). - -In particular, if ${}^{ib}(\operatorname{Im} \mathcal{R}) \neq \varnothing$ then ${}^{ib}(\operatorname{Im} \mathcal{R}) = {}^{i}(\operatorname{Im} \mathcal{R}) = \operatorname{rint} (\operatorname{Im} \mathcal{R}).$ - -}} - -The implication (1) $\implies$ (2) in the following theorem is known as the Robinson–Ursescu theorem. - -{{Math theorem|name=Robinson–Ursescu theorem |note=|math_statement= - -Let $(X, \|\cdot\|)$ and $(Y, \|\cdot\|)$ be normed spaces and $\mathcal{R} : X \rightrightarrows Y$ be a multimap with non-empty domain. - -Suppose that $Y$ is a barreled space, the graph of $\mathcal{R}$ satisfies condition (Hwx), and that $(x_0, y_0) \in \operatorname{gr} \mathcal{R}.$ - -Let $C_X$ (resp. $C_Y$) denote the closed unit ball in $X$ (resp. $Y$) (so $C_X = \{ x \in X : \| x \| \leq 1 \}$). 
- -Then the following are equivalent: - -# $y_0$ belongs to the algebraic interior of $\operatorname{Im} \mathcal{R}.$ - -# $y_0 \in \operatorname{int} \mathcal{R}\left(x_0 + C_X\right).$ - -# There exists $B > 0$ such that for all $0 \leq r \leq 1,$ $y_0 + B r C_Y \subseteq \mathcal{R} \left(x_0 + r C_X\right).$ - -# There exist $A > 0$ and $B > 0$ such that for all $x \in x_0 + A C_X$ and all $y \in y_0 + A C_Y,$ $d\left(x, \mathcal{R}^{-1}(y)\right) \leq B \cdot d(y, \mathcal{R}(x)).$ - -# There exists $B > 0$ such that for all $x \in X$ and all $y \in y_0 + B C_Y,$ $d \left(x, \mathcal{R}^{-1}(y)\right) \leq \frac{1 + \left\|x - x_0\right\|}{B - \left\|y - y_0\right\|} \cdot d(y, \mathcal{R}(x)).$ - -}} diff --git a/wiki/wikipedia/846.txt b/wiki/wikipedia/846.txt deleted file mode 100644 index dc10a08508cabdc6023f7efed1f6a2269bc3d93f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/846.txt +++ /dev/null @@ -1,68 +0,0 @@ -In mathematics, Wedderburn's little theorem states that every finite domain is a field. In other words, for finite rings, there is no distinction between domains, division rings and fields. - -The Artin–Zorn theorem generalizes the theorem to alternative rings: every finite alternative division ring is a field. - -The original proof was given by Joseph Wedderburn in 1905, who went on to prove it two other ways. Another proof was given by Leonard Eugene Dickson shortly after Wedderburn's original proof, and Dickson acknowledged Wedderburn's priority. However, as noted by Parshall, Wedderburn's first proof was incorrect – it had a gap – and his subsequent proofs appeared only after he had read Dickson's correct proof. On this basis, Parshall argues that Dickson should be credited with the first correct proof. - -A simplified version of the proof was later given by Ernst Witt. Witt's proof is sketched below. Alternatively, the theorem is a consequence of the Skolem–Noether theorem by the following argument. Let D be a finite division algebra with center k. Let $[D : k] = n^2$ and let q denote the cardinality of k. Every maximal subfield of D has $q^n$ elements; so they are isomorphic and thus are conjugate by Skolem–Noether. But a finite group (the multiplicative group of D in our case) cannot be a union of conjugates of a proper subgroup; hence, n = 1. - -A later "group-theoretic" proof was given by Theodore Kaczynski. This proof, Kaczynski's first published piece of mathematical writing, was a short, two-page note which also acknowledged the earlier historical proofs. - -The theorem is essentially equivalent to saying that the Brauer group of a finite field is trivial. In fact, this characterization immediately yields a proof of the theorem as follows: let k be a finite field. Since the Herbrand quotient vanishes by finiteness, $\operatorname{Br}(k) = H^2(k^{\text{al}}/k)$ coincides with $H^1(k^{\text{al}}/k)$, which in turn vanishes by Hilbert 90. - -Let A be a finite domain. For each nonzero x in A, the two maps -$$ -a \mapsto ax, a \mapsto xa: A \to A -$$ - -are injective by the cancellation property, and thus surjective by counting. It follows from elementary group theory that the nonzero elements of A form a group under multiplication. Thus, A is a skew-field. - -To prove that every finite skew-field is a field, we use strong induction on the size of the skew-field. Thus, let A be a skew-field, and assume that all skew-fields that are proper subsets of A are fields. Since the center Z(A) of A is a field, A is a vector space over Z(A) with finite dimension n. 
Our objective is then to show n = 1. If q is the order of Z(A), then A has order $q^n$. Note that because Z(A) contains the distinct elements 0 and 1, q > 1. For each x in A that is not in the center, the centralizer $Z_x$ of x is clearly a skew-field and thus a field, by the induction hypothesis, and because $Z_x$ can be viewed as a vector space over Z(A) and A can be viewed as a vector space over $Z_x$, we have that $Z_x$ has order $q^d$ where d divides n and is less than n. Viewing $Z(A)^*$, $A^*$, and the $Z_x^*$ as groups under multiplication, we can write the class equation -$$ -q^n - 1 = q - 1 + \sum {q^n - 1 \over q^d - 1} -$$ - -where the sum is taken over the conjugacy classes not contained within $Z(A)^*$, and the d are defined so that for each conjugacy class, the order of $Z_x^*$ for any x in the class is $q^d - 1$. $q^n-1$ and $q^d-1$ both admit polynomial factorization in terms of cyclotomic polynomials -$$ -\Phi_f(q) -$$. - -In the polynomial identities -$$ -x^n-1 = \prod_{m|n} \Phi_m(x) -$$ and $x^d-1 = \prod_{m|d} \Phi_m(x)$, - -we set x = q. Because each d is a proper divisor of n, -$$ -\Phi_n(q) -$$ divides both $q^n-1$ and each ${q^n - 1 \over q^d - 1}$, - -so by the above class equation $\Phi_n(q)$ must divide $q-1$, and therefore -$$ -|\Phi_n(q)| \leq q-1 -$$. - -To see that this forces n to be 1, we will show -$$ -|\Phi_n(q)| > q-1 -$$ - -for n > 1 using factorization over the complex numbers. In the polynomial identity -$$ -\Phi_n(x) = \prod (x - \zeta) -$$, - -where ζ runs over the primitive n-th roots of unity, set x to be q and then take absolute values -$$ -|\Phi_n(q)| = \prod |q - \zeta| -$$. - -For n > 1, we see that for each primitive n-th root of unity ζ, -$$ -|q-\zeta| > |q-1| -$$ - -because of the location of q, 1, and ζ in the complex plane. Thus -$$ -|\Phi_n(q)| > q-1 -$$. diff --git a/wiki/wikipedia/847.txt b/wiki/wikipedia/847.txt deleted file mode 100644 index 8a5c5d89940535133fd4ef78c2383a18cf61cdc6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/847.txt +++ /dev/null @@ -1,91 +0,0 @@ -The Gauss–Bonnet theorem, or Gauss–Bonnet formula, is a fundamental relationship in the differential geometry of surfaces. It connects the curvature of a surface (from geometry) to its Euler characteristic (from topology). - -In the simplest application, the case of a triangle on a plane, the sum of its angles is 180 degrees. The Gauss–Bonnet theorem extends this to more complicated shapes and curved surfaces, connecting the local and global geometries. - -The theorem is named after Carl Friedrich Gauss, who developed a version but never published it, and Pierre Ossian Bonnet, who published a special case in 1848. - -Suppose $M$ is a compact two-dimensional Riemannian manifold with boundary $\partial M$. Let $K$ be the Gaussian curvature of $M$, and let $k_g$ be the geodesic curvature of $\partial M$. Then -$$ -\int_M KdA+\int_{\partial M}k_gds=2\pi\chi(M), -$$ - -where dA is the element of area of the surface, and ds is the line element along the boundary of M. Here, $\chi(M)$ is the Euler characteristic of $M$. - -If the boundary $\partial M$ is piecewise smooth, then we interpret the integral $\int_{\partial M}k_gds$ as the sum of the corresponding integrals along the smooth portions of the boundary, plus the sum of the angles by which the smooth portions turn at the corners of the boundary. - -Many standard proofs use the theorem of turning tangents, which states roughly that the winding number of a Jordan curve is exactly ±1. 
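As a sanity check of the formula (an added illustration, not part of the original article): for a round sphere of radius $r$ there is no boundary term, $K = 1/r^2$ everywhere, and the total area is $4\pi r^2$, so the left-hand side is $4\pi = 2\pi \cdot 2 = 2\pi\chi(M)$ regardless of $r$. A few lines of Python confirm this by midpoint-rule integration in spherical coordinates:

```
import math

r = 2.0   # sphere radius; the answer is independent of this choice
n = 1000  # number of latitude strips

total = 0.0
for i in range(n):
    theta = (i + 0.5) * math.pi / n                  # colatitude of strip
    K = 1.0 / r**2                                   # Gaussian curvature
    dA = 2 * math.pi * r**2 * math.sin(theta) * (math.pi / n)  # strip area
    total += K * dA

print(total, 4 * math.pi)  # both ~ 12.566, i.e. 2*pi*chi with chi = 2
```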
- -The theorem applies in particular to compact surfaces without boundary, in which case the integral -$$ -\int_{\partial M}k_gds -$$ - -can be omitted. It states that the total Gaussian curvature of such a closed surface is equal to 2π times the Euler characteristic of the surface. Note that for orientable compact surfaces without boundary, the Euler characteristic equals $2-2g$, where $g$ is the genus of the surface: Any orientable compact surface without boundary is topologically equivalent to a sphere with some handles attached, and $g$ counts the number of handles. - -If one bends and deforms the surface $M$, its Euler characteristic, being a topological invariant, will not change, while the curvatures at some points will. The theorem states, somewhat surprisingly, that the total integral of all curvatures will remain the same, no matter how the deforming is done. So for instance, if you have a sphere with a "dent", then its total curvature is 4π (the Euler characteristic of a sphere being 2), no matter how big or deep the dent. - -Compactness of the surface is of crucial importance. Consider for instance the open unit disc, a non-compact Riemann surface without boundary, with curvature 0 and with Euler characteristic 1: the Gauss–Bonnet formula does not work. It holds true, however, for the compact closed unit disc, which also has Euler characteristic 1, because of the added boundary integral with value 2π. - -As an application, a torus has Euler characteristic 0, so its total curvature must also be zero. If the torus carries the ordinary Riemannian metric from its embedding in $\mathbb{R}^3$, then the inside has negative Gaussian curvature, the outside has positive Gaussian curvature, and the total curvature is indeed 0. It is also possible to construct a torus by identifying opposite sides of a square, in which case the Riemannian metric on the torus is flat and has constant curvature 0, again resulting in total curvature 0. It is not possible to specify a Riemannian metric on the torus with everywhere positive or everywhere negative Gaussian curvature. - -Sometimes the Gauss-Bonnet formula is stated as -$$ -\int_T K = 2\pi - \sum \alpha - \int_{\partial T} \kappa_g -$$ - -where T is a geodesic triangle. Here we define a "triangle" on M to be a simply-connected region whose boundary consists of three geodesics. We can then apply GB to the surface T formed by the inside of that triangle and the piecewise boundary of the triangle. - -The geodesic curvature of the bordering geodesics is 0, and the Euler characteristic of T is 1. - -Hence the sum of the turning angles of the geodesic triangle is equal to 2π minus the total curvature within the triangle. Since the turning angle at a corner is equal to π minus the interior angle, we can rephrase this as follows: - -The sum of interior angles of a geodesic triangle is equal to π plus the total curvature enclosed by the triangle. -$$ -\sum (\pi - \alpha) = \pi + \int_T K -$$ - -In the case of the plane (where the Gaussian curvature is 0 and geodesics are straight lines), we recover the familiar formula for the sum of angles in an ordinary triangle. On the standard sphere, where the curvature is everywhere 1, we see that the angle sum of geodesic triangles is always bigger than π. - -A number of earlier results in spherical geometry and hyperbolic geometry, discovered over the preceding centuries, were subsumed as special cases of Gauss–Bonnet. 
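For a concrete instance of the geodesic-triangle formula above (a worked example added here, not part of the original article), take the octant triangle on the unit sphere with vertices $(1,0,0)$, $(0,1,0)$ and $(0,0,1)$. Its sides are geodesic arcs meeting at right angles, and it covers one eighth of the sphere, so $\int_T K = 4\pi/8 = \pi/2$ and indeed
$$
\sum (\pi - \alpha) = \frac{3\pi}{2} = \pi + \frac{\pi}{2} = \pi + \int_T K.
$$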
- -In spherical trigonometry and hyperbolic trigonometry, the area of a triangle is proportional to the amount by which its interior angles fail to add up to 180°, or equivalently to the (inverse) amount by which its exterior angles fail to add up to 360°. - -The area of a spherical triangle is proportional to its excess, by Girard's theorem – the amount by which its interior angles add up to more than 180°, which is equal to the amount by which its exterior angles add up to less than 360°. - -The area of a hyperbolic triangle, conversely, is proportional to its defect, as established by Johann Heinrich Lambert. - -Descartes' theorem on total angular defect of a polyhedron is the polyhedral analog: - -it states that the sum of the defect at all the vertices of a polyhedron which is homeomorphic to the sphere is 4π. More generally, if the polyhedron has Euler characteristic $\chi=2-2g$ (where g is the genus, meaning "number of holes"), then the sum of the defect is $2\pi \chi.$ - -This is the special case of Gauss–Bonnet, where the curvature is concentrated at discrete points (the vertices). - -Thinking of curvature as a measure, rather than as a function, Descartes' theorem is Gauss–Bonnet where the curvature is a discrete measure, and Gauss–Bonnet for measures generalizes both Gauss–Bonnet for smooth manifolds and Descartes' theorem. - -There are several combinatorial analogs of the Gauss–Bonnet theorem. We state the following one. Let $M$ be a finite 2-dimensional pseudo-manifold. Let $\chi(v)$ denote the number of triangles containing the vertex $v$. Then -$$ - \sum_{v\in{\mathrm{int}}{M}}\left(6 - \chi(v)\right) + \sum_{v\in\partial M}\left(3 - \chi(v)\right) = 6\chi(M),\ -$$ - -where the first sum ranges over the vertices in the interior of $M$, the second sum is over the boundary vertices, and $\chi(M)$ is the Euler characteristic of $M$. - -Similar formulas can be obtained for a 2-dimensional pseudo-manifold when we replace triangles with polygons having more vertices. For polygons of n vertices, we must replace 3 and 6 in the formula above with n/(n − 2) and 2n/(n − 2), respectively. - -For example, for quadrilaterals we must replace 3 and 6 in the formula above with 2 and 4, respectively. More specifically, if $M$ is a closed 2-dimensional digital manifold, the genus turns out to be -$$ - g = 1 + \frac{M_5 + 2 M_6 - M_3}{8}, -$$ - -where $M_i$ indicates the number of surface-points each of which has $i$ adjacent points on the surface. This is the simplest formula of the Gauss–Bonnet theorem in 3D digital space. - -The Chern theorem (after Shiing-Shen Chern 1945) is the 2n-dimensional generalization of GB (also see Chern–Weil homomorphism). - -The Riemann–Roch theorem can also be seen as a generalization of GB to complex manifolds. - -A far-reaching generalization that includes all the above-mentioned theorems is the Atiyah–Singer index theorem. - -A generalization to 2-manifolds that need not be compact is Cohn-Vossen's inequality. - -In Greg Egan's novel Diaspora, two characters discuss the derivation of this theorem. - -The theorem can be used directly as a system to control sculpture, for example in work by Edmund Harriss in the collection of the University of Arkansas Honors College. 
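As a quick check of the combinatorial formula above (an added illustration, not part of the original article), take the octahedron, a triangulation of the sphere in which every vertex lies in exactly four triangles:

```
# Octahedron: vertices 0-3 around the equator, 4 and 5 at the poles.
triangles = [
    (0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4),
    (1, 0, 5), (2, 1, 5), (3, 2, 5), (0, 3, 5),
]
count = {v: sum(v in t for t in triangles) for v in range(6)}
lhs = sum(6 - c for c in count.values())  # every vertex is interior
print(lhs, 6 * 2)  # both 12: six times the Euler characteristic chi = 2
```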
diff --git a/wiki/wikipedia/848.txt b/wiki/wikipedia/848.txt deleted file mode 100644 index 34e6e8de68d5469258e6d97a9c900fe106e8f67c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/848.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematical complex analysis, Radó's theorem, proved by Tibor Radó, states that every connected Riemann surface is second-countable (has a countable base for its topology). - -The Prüfer surface is an example of a surface with no countable base for the topology, so cannot have the structure of a Riemann surface. - -The obvious analogue of Radó's theorem in higher dimensions is false: there are 2-dimensional connected complex manifolds that are not second-countable. diff --git a/wiki/wikipedia/849.txt b/wiki/wikipedia/849.txt deleted file mode 100644 index 3812fc0e707db6ccf949cc35c8d31ad428ad8c0f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/849.txt +++ /dev/null @@ -1,29 +0,0 @@ -The Borsuk problem in geometry, for historical reasons incorrectly called Borsuk's conjecture, is a question in discrete geometry. It is named after Karol Borsuk. - -In 1932, Karol Borsuk showed that an ordinary 3-dimensional ball in Euclidean space can be easily dissected into 4 solids, each of which has a smaller diameter than the ball, and, more generally, that an n-dimensional ball can be covered with n + 1 compact sets of diameters smaller than the ball. At the same time he proved that n subsets are not enough in general. The proof is based on the Borsuk–Ulam theorem. That led Borsuk to a general question: - -Die folgende Frage bleibt offen: Lässt sich jede beschränkte Teilmenge E des Raumes $\mathbb R^n$ in (n + 1) Mengen zerlegen, von denen jede einen kleineren Durchmesser als E hat? - -This can be translated as: - -The following question remains open: Can every bounded subset E of the space $\mathbb R^n$ be partitioned into (n + 1) sets, each of which has a smaller diameter than E? - -The question was answered in the positive in the following cases: - -* n = 2 — which is the original result by Karol Borsuk (1932). - -* n = 3 — shown by Julian Perkal (1947), and independently, 8 years later, by H. G. Eggleston (1955). A simple proof was found later by Branko Grünbaum and Aladár Heppes. - -* For all n for smooth convex bodies — shown by Hugo Hadwiger (1946). - -* For all n for centrally-symmetric bodies — shown by A.S. Riesling (1971). - -* For all n for bodies of revolution — shown by Boris Dekster (1995). - -The problem was finally solved in 1993 by Jeff Kahn and Gil Kalai, who showed that the general answer to Borsuk's question is no. They claim that their construction shows that n + 1 pieces do not suffice for n = 1325 and for each n > 2014. However, as pointed out by Bernulf Weißbach, the first part of this claim is in fact false. But after improving a suboptimal conclusion within the corresponding derivation, one can indeed verify one of the constructed point sets as a counterexample for n = 1325 (as well as all higher dimensions up to 1560). - -Their result was improved in 2003 by Hinrichs and Richter, who constructed finite sets for n ≥ 298, which cannot be partitioned into n + 11 parts of smaller diameter. - -In 2013, Andriy V. Bondarenko showed that Borsuk's conjecture is false for all n ≥ 65. Shortly after, Thomas Jenrich derived a 64-dimensional counterexample from Bondarenko's construction, giving the best bound up to now. 
- -Apart from finding the minimum number n of dimensions such that the number of pieces $\alpha(n) > n+1$, mathematicians are interested in finding the general behavior of the function $\alpha(n)$. Kahn and Kalai show that in general (that is, for n sufficiently large), one needs $\alpha(n) \ge (1.2)^{\sqrt{n}}$ many pieces. They also quote the upper bound by Oded Schramm, who showed that for every ε, if n is sufficiently large, $\alpha(n) \le \left(\sqrt{3/2} + \varepsilon\right)^n$. The correct order of magnitude of α(n) is still unknown. However, it is conjectured that there is a constant $c > 1$ such that $\alpha(n) > c^n$ for all $n$. diff --git a/wiki/wikipedia/85.txt b/wiki/wikipedia/85.txt deleted file mode 100644 index 1d76e7aebed3837b6064c3ac03e52f18a81b7722..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/85.txt +++ /dev/null @@ -1,7 +0,0 @@ -A wait-for graph in computer science is a directed graph used for deadlock detection in operating systems and relational database systems. - -In computer science, a system that allows concurrent operation of multiple processes and locking of resources and which does not provide mechanisms to avoid or prevent deadlock must support a mechanism to detect deadlocks and an algorithm for recovering from them. - -One such deadlock detection algorithm makes use of a wait-for graph to track which other processes a process is currently blocking on. In a wait-for graph, processes are represented as nodes, and an edge from process $P_i$ to $P_j$ implies $P_j$ is holding a resource that $P_i$ needs and thus $P_i$ is waiting for $P_j$ to release its lock on that resource. If the process is waiting for more than a single resource to become available (the single-resource situation being the trivial case), multiple edges may represent a conjunctive (and) or disjunctive (or) set of different resources or a certain number of equivalent resources from a collection. The possibility of a deadlock is implied by graph cycles in the conjunctive case, and by knots in the disjunctive case. There is no simple algorithm for detecting the possibility of deadlock in the final case. - -The wait-for-graph scheme is not applicable to a resource allocation system with multiple instances of each resource type. diff --git a/wiki/wikipedia/850.txt b/wiki/wikipedia/850.txt deleted file mode 100644 index 8ff8dbaa18e9b8c90e8524d4bc4de396d9298b70..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/850.txt +++ /dev/null @@ -1,26 +0,0 @@ -The following tables list the computational complexity of various algorithms for common mathematical operations. - -Here, complexity refers to the time complexity of performing computations on a multitape Turing machine. See big O notation for an explanation of the notation used. - -Note: Due to the variety of multiplication algorithms, $M(n)$ below stands in for the complexity of the chosen multiplication algorithm. - -Many of the methods in this section are given in Borwein & Borwein. - -The elementary functions are constructed by composing arithmetic operations, the exponential function ($\exp$), the natural logarithm ($\log$), trigonometric functions ($\sin, \cos$), and their inverses. The complexity of an elementary function is equivalent to that of its inverse, since all elementary functions are analytic and hence invertible by means of Newton's method. In particular, if either $\exp$ or $\log$ in the complex domain can be computed with some complexity, then that complexity is attainable for all other elementary functions. 
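To illustrate the Newton-inversion idea from the paragraph above (an added sketch, not part of the original tables; in the arbitrary-precision setting $\exp$ would be the expensive primitive being inverted), $\log y$ can be computed by applying Newton's method to $f(x) = e^x - y$, roughly doubling the number of correct digits per iteration:

```
import math

def log_via_newton(y, iterations=6):
    """Compute log(y) by Newton's method on f(x) = exp(x) - y.

    The update x <- x - (exp(x) - y)/exp(x) = x - 1 + y*exp(-x)
    converges quadratically near the root.
    """
    x = y - 1.0  # crude initial guess, adequate for y near 1
    for _ in range(iterations):
        x = x - 1.0 + y * math.exp(-x)
    return x

print(log_via_newton(2.0), math.log(2.0))  # both ~ 0.693147
```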
- -Below, the size $n$ refers to the number of digits of precision at which the function is to be evaluated. - -It is not known whether $O(M(n) \log n)$ is the optimal complexity for elementary functions. The best known lower bound is the trivial bound $\Omega(M(n))$. - -This table gives the complexity of computing approximations to the given constants to $n$ correct digits. - -Algorithms for number theoretical calculations are studied in computational number theory. - -The following complexity figures assume that arithmetic with individual elements has complexity O(1), as is the case with fixed-precision floating-point arithmetic or operations on a finite field. - -In 2005, Henry Cohn, Robert Kleinberg, Balázs Szegedy, and Chris Umans showed that either of two different conjectures would imply that the exponent of matrix multiplication is 2. - -Algorithms for computing transforms of functions (particularly integral transforms) are widely used in all areas of mathematics, particularly analysis and signal processing. diff --git a/wiki/wikipedia/851.txt b/wiki/wikipedia/851.txt deleted file mode 100644 index 718220707eaf66ea5ad1164edf3bb285163072d6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/851.txt +++ /dev/null @@ -1,19 +0,0 @@ -In mathematics, the Riemann–Lebesgue lemma, named after Bernhard Riemann and Henri Lebesgue, states that the Fourier transform or Laplace transform of an $L^1$ function vanishes at infinity. It is of importance in harmonic analysis and asymptotic analysis. - -If ƒ is $L^1$ integrable on $\mathbb{R}^d$, that is to say, if the Lebesgue integral of |ƒ| is finite, then the Fourier transform of ƒ satisfies -$$ - \hat{f}(z)\equiv \int_{\mathbb{R}^d} f(x) \exp(-iz \cdot x)dx \rightarrow 0\text{ as } |z|\rightarrow \infty. -$$ - -First suppose that $f(x)=\chi_{(a,b)}(x)$, the indicator function of an open interval. - -Then:
    $\int f(x)e^{i\xi x} dx=\int_a^be^{i\xi x} dx=\frac{e^{i\xi b}-e^{i\xi a}}{i\xi} \rightarrow 0$ as $|\xi| \rightarrow \infty$
    By additivity of limits, the same holds for an arbitrary step function. - -That is, for any function $f$ of the form:
    $f=\sum_{i=1}^Nc_i\chi_{(a_i,b_i)},~~c_i\in\R,~~a_i\leq b_i\in\R$
    We have that:
    $\lim_{|\xi|\rightarrow\infty}\int f(x)e^{i\xi x} dx = 0.$ Likewise, if $f$ is continuously differentiable with compact support, then integration by parts gives $\left|\int f(x)e^{i\xi x} dx\right| = \left|\frac{1}{i\xi}\int f'(x)e^{i\xi x} dx\right| \leq \frac{1}{|\xi|}\int|f'(x)| dx \rightarrow 0 \text{ as } \xi\rightarrow\pm\infty.$ - -If ƒ is an arbitrary integrable function, it may be approximated in the $L^1$ norm by a compactly supported smooth function g. Pick such a g so that $\|f - g\|_{L^1} < \varepsilon$. Then -$$ - \limsup_{z\rightarrow\pm\infty} |\hat{f}(z)| \leq \limsup_{z\to\pm\infty} \left|\int (f(x)-g(x))e^{-ixz} dx\right| + \limsup_{z\rightarrow\pm\infty} \left|\int g(x)e^{-ixz} dx\right| \leq \varepsilon+0=\varepsilon, -$$ - -and since this holds for any ε > 0, the theorem follows. diff --git a/wiki/wikipedia/852.txt b/wiki/wikipedia/852.txt deleted file mode 100644 index 9c53f8d7b61eeb5ceb9f32a4eb586a4ff5c23cd4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/852.txt +++ /dev/null @@ -1,13 +0,0 @@ -In the theory of causal structure on Lorentzian manifolds, Geroch's theorem or Geroch's splitting theorem (first proved by Robert Geroch) gives a topological characterization of globally hyperbolic spacetimes. - -Let $(M, g_{ab})$ be a globally hyperbolic spacetime. Then $(M, g_{ab})$ is strongly causal and there exists a global "time function" on the manifold, i.e. a continuous, surjective map $f:M \rightarrow \mathbb{R}$ such that: - -*For all $t \in \mathbb{R}$, $f^{-1}(t)$ is a Cauchy surface, and - -*$f$ is strictly increasing on any causal curve. - -Moreover, all Cauchy surfaces are homeomorphic, and $M$ is homeomorphic to $S \times \mathbb{R}$ where $S$ is any Cauchy surface of $M$. diff --git a/wiki/wikipedia/853.txt b/wiki/wikipedia/853.txt deleted file mode 100644 index ed788777468ae9085dd3a47d9172d284383cce95..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/853.txt +++ /dev/null @@ -1,13 +0,0 @@ -The QED manifesto was a proposal for a computer-based database of all mathematical knowledge, strictly formalized and with all proofs having been checked automatically. (Q.E.D. abbreviates the Latin quod erat demonstrandum, meaning "which was to be demonstrated.") - -The idea for the project arose in 1993, mainly under the impetus of Robert Boyer. The goals of the project, tentatively named QED project or project QED, were outlined in the QED manifesto, a document first published in 1994, with input from several researchers. Explicit authorship was deliberately avoided. A dedicated mailing list was created, and two scientific conferences on QED took place, the first one in 1994 at Argonne National Laboratories and the second in 1995 in Warsaw organized by the Mizar group. - -The project seems to have dissolved by 1996, never having produced more than discussions and plans. In a 2007 paper, Freek Wiedijk identifies two reasons for the failure of the project. In order of importance: - -* Very few people are working on formalization of mathematics. There is no compelling application for fully mechanized mathematics. - -* Formalized mathematics does not yet resemble real, traditional mathematics. This is partly due to the complexity of mathematical notation, and partly to the limitations of existing theorem provers and proof assistants; the paper finds that the major contenders, Mizar, HOL, and Coq, have serious shortcomings in their abilities to express mathematics. - -Nonetheless, QED-style projects are regularly proposed, and the Mizar library has successfully formalized a large portion of undergraduate mathematics; it is the largest such library. Another such project is the Metamath proof database. 
- -In 2014 the Twenty years of the QED Manifesto workshop was organized as part of the Vienna Summer of Logic. diff --git a/wiki/wikipedia/854.txt b/wiki/wikipedia/854.txt deleted file mode 100644 index 843064e247f30bdbddf976ddfd128a8fcfbc4b76..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/854.txt +++ /dev/null @@ -1,62 +0,0 @@ -In functional analysis, a branch of mathematics, the Michael selection theorem is a selection theorem named after Ernest Michael. In its most popular form, it states the following: - -Let X be a paracompact space and Y a Banach space. - -Let $F\colon X\to Y$ be a lower hemicontinuous multivalued map with nonempty convex closed values. - -Then there exists a continuous selection $f\colon X \to Y$ of F. - -Conversely, if any lower semicontinuous multimap from a topological space X to a Banach space, with nonempty convex closed values, admits a continuous selection, then X is paracompact. This provides another characterization for paracompactness. - -The function $F(x)=[1-x/2,~1-x/4]$, shown by the grey area in the figure at the right, is a multi-valued function from the real interval [0,1] to itself. It satisfies all of Michael's conditions, and indeed it has a continuous selection, for example $f(x)=1-x/2$ or $f(x)=1-3x/8$. - -The function -$$ -F(x)= -\begin{cases} -3/4 & 0 \le x < 0.5 \\ -\left[0,1\right] & x = 0.5 \\ -1/4 & 0.5 < x \le 1 -\end{cases} -$$ - -is a multi-valued function from the real interval [0,1] to itself. It has nonempty convex closed values. However, it is not lower hemicontinuous at 0.5. Indeed, Michael's theorem does not apply and the function does not have a continuous selection: any selection at 0.5 is necessarily discontinuous. - -The Michael selection theorem can be applied to show that the differential inclusion -$$ -\frac{dx}{dt}(t)\in F(t,x(t)), \quad x(t_0)=x_0 -$$ - -has a $C^1$ solution when F is lower semi-continuous and F(t, x) is a nonempty closed and convex set for all (t, x). When F is single valued, this is the classic Peano existence theorem. - -A theorem due to Deutsch and Kenderov generalizes the Michael selection theorem to an equivalence relating approximate selections to almost lower hemicontinuity, where $F$ is said to be almost lower hemicontinuous if, at each $x \in X$ and for all neighborhoods $V$ of $0$, there exists a neighborhood $U$ of $x$ such that $\cap_{u\in U} \{F(u)+V\} \ne \emptyset. $ - -Precisely, the Deutsch–Kenderov theorem states that if $X$ is paracompact, $Y$ a normed vector space and $F (x)$ is nonempty convex for each $x \in X$, then $F$ is almost lower hemicontinuous if and only if $F$ has continuous approximate selections, that is, for each neighborhood $V$ of $0$ in $Y$ there is a continuous function $f \colon X \to Y$ such that for each $x \in X$, $f(x) \in F(x) + V$. - -In a note, Xu proved that the Deutsch–Kenderov theorem is also valid if $Y$ is a locally convex topological vector space. diff --git a/wiki/wikipedia/855.txt b/wiki/wikipedia/855.txt deleted file mode 100644 index cf283f4926cc4170e693059e9661ff0df3947ae3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/855.txt +++ /dev/null @@ -1,13 +0,0 @@ -In mathematics, the Kuratowski–Ulam theorem, introduced by Kazimierz Kuratowski and Stanisław Ulam, also called the Fubini theorem for category, is an analog of Fubini's theorem for arbitrary second countable Baire spaces. - -Let X and Y be second countable Baire spaces (or, in particular, Polish spaces), and let $A \subset X \times Y$. 
Then the following are equivalent if A has the Baire property: - -# A is meager (respectively comeager). - -# The set $\{ x \in X :A_x \text{ is meager (resp. comeager) in }Y \}$ is comeager in X, where $A_x=\pi_Y[A\cap \lbrace x \rbrace \times Y]$, where $\pi_Y$ is the projection onto Y. - -Even if A does not have the Baire property, 2. follows from 1. - -Note that the theorem still holds (perhaps vacuously) for X an arbitrary Hausdorff space and Y a Hausdorff space with countable π-base. - -The theorem is analogous to the regular Fubini's theorem for the case where the considered function is a characteristic function of a subset in a product space, with the usual correspondences, namely, meager set with a set of measure zero, comeager set with one of full measure, and a set with the Baire property with a measurable set. diff --git a/wiki/wikipedia/856.txt b/wiki/wikipedia/856.txt deleted file mode 100644 index 3a8fed7cca49c143432dbd53162f2f4bb37496c2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/856.txt +++ /dev/null @@ -1,9 +0,0 @@ -Prosenjit K. "Jit" Bose is a Canadian mathematician and computer scientist who works at Carleton University as a professor in the School of Computer Science and associate dean of research and graduate studies for the Faculty of Science. His research concerns graph algorithms and computational geometry, including work on geometric spanners and geographic routing in wireless ad hoc networks. - -Bose did his undergraduate studies in mathematics at the University of Waterloo, graduating in 1990, and earned a master's degree from Waterloo in 1991. He earned his Ph.D. in computer science from McGill University in 1994 under the supervision of Godfried Toussaint. After postdoctoral studies at the University of British Columbia, he became an assistant professor at the Université du Québec à Trois-Rivières in 1995, and moved to Carleton in 1997. diff --git a/wiki/wikipedia/857.txt b/wiki/wikipedia/857.txt deleted file mode 100644 index 9b31a9801e110e5712a04fa76a7c967c96d03d25..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/857.txt +++ /dev/null @@ -1,27 +0,0 @@ -The Gale–Ryser theorem is a result in graph theory and combinatorial matrix theory, two branches of combinatorics. It provides one of two known approaches to solving the bipartite realization problem, i.e. it gives a necessary and sufficient condition for two finite sequences of natural numbers to be the degree sequence of a labeled simple bipartite graph; a sequence obeying these conditions is called "bigraphic". It is an analog of the Erdős–Gallai theorem for simple graphs. The theorem was published independently in 1957 by H. J. Ryser and David Gale. - -A pair of sequences of nonnegative integers $(a_1,\ldots,a_n)$ and $(b_1,\ldots,b_n)$ with $a_1\geq\cdots\geq a_n$ is bigraphic if and only if $\sum_{i=1}^{n}a_i=\sum_{i=1}^{n}b_i$ and the following inequality holds for each $k$ such that $1 \leq k \leq n$: -$$ -\sum^k_{i=1} a_i\leq \sum^n_{i=1} \min(b_i,k). -$$ - -Sometimes this theorem is stated with the additional constraint $b_1\geq\cdots\geq b_n$. This condition is not necessary, because the labels of vertices of one partite set in a bipartite graph can be switched arbitrarily. - -In 1962 Ford and Fulkerson gave a different but equivalent formulation for the theorem. - -The theorem can also be stated in terms of zero-one matrices.
The connection can be seen if one realizes that each bipartite graph has a biadjacency matrix where the column sums and row sums correspond to $(a_1,\ldots,a_n)$ and $(b_1,\ldots,b_n)$. Each sequence can also be considered as a partition of the same number $m=\sum_{i=1}^{n}a_i$. It turns out that the partition $(a^*_1,\ldots,a^*_n)$ where $a^*_k:=|\{b_i|b_i \geq k\}|$ is the conjugate partition of $(b_1,\ldots,b_n)$. The conjugate partition can be determined by a Ferrers diagram. Moreover, there is a connection to the relation majorization. Consider the sequences $(a_1,\ldots,a_n)$, $(b_1,\ldots,b_n)$ and $(a^*_1,\ldots,a^*_n)$ as $n$-dimensional vectors $a$, $b$ and $a^*$. Since $\sum_{i=1}^k a^*_i =\sum^n_{i=1} \min(b_i,k) $, the theorem above states that a pair of nonnegative integer sequences a and b with nonincreasing a is bigraphic if and only if the conjugate partition $a^*$ of $b$ majorizes $a$. - -A third formulation is in terms of degree sequences of simple directed graphs with at most one loop per vertex. In this case the matrix is interpreted as the adjacency matrix of such a directed graph. When are pairs of nonnegative integers $((a_1,b_1),...,(a_n,b_n))$ the indegree-outdegree pairs of a labeled directed graph with at most one loop per vertex? The theorem can easily be adapted to this formulation, because no special order on b is required. - -The proof is composed of two parts: the necessity of the condition and its sufficiency. We outline the proof of both parts in the language of matrices. To see that the condition in the theorem is necessary, consider the adjacency matrix of a bigraphic realization with row sums $(b_1,\ldots,b_n)$ and column sums $(a_1,\ldots,a_n)$, and shift all ones in the matrix to the left. The row sums remain, while the column sums are now $a^*$. The operation of shifting all ones to the left increases a partition in majorization order, and so $a^*$ majorizes $a$. - -The original proof of sufficiency of the condition was rather complicated. Krause gave a simple algorithmic proof. The idea is to start with the Ferrers diagram of $b$ and shift ones to the right until the column sums are $a$. The algorithm runs in at most $n$ steps, in each of which a single one entry is moved to the right. - -Berger proved that it suffices to consider those $k$th inequalities such that $1 \leq k < n$ with $a_k > a_{k+1}$ and the equality for $k = n$. - -A pair of finite sequences of nonnegative integers $a$ and $b$ with nonincreasing $a$ is bigraphic if and only if $\sum_{i=1}^{n}a_i=\sum_{i=1}^{n}b_i$ and there exists a sequence $c$ such that the pair $c,b$ is bigraphic and $c$ majorizes $a$. Moreover, it is also proved that the pair $a$ and $b$ has more bigraphic realizations than the pair $c$ and $b$. This yields the result that, for fixed numbers of vertices and edges, regular sequences have the largest number of bigraphic realizations, provided n divides m. They are, in this sense, the opposite of threshold sequences, which have only one unique bigraphic realization, known as a threshold graph. Minconvex sequences generalize this concept if n does not divide m. - -Similar theorems describe the degree sequences of simple graphs and simple directed graphs. The first problem is characterized by the Erdős–Gallai theorem. The latter case is characterized by the Fulkerson–Chen–Anstee theorem.
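The inequality above translates directly into a short test. The following Python sketch (illustrative, not part of the theorem's literature) checks whether a pair of sequences is bigraphic; sides of different sizes are assumed to be padded with zeros so both sequences have length n.

```python
def is_bigraphic(a, b):
    """Gale-Ryser test: a and b are nonnegative integer sequences of equal
    length n, with a nonincreasing. Pad the shorter side with zeros first."""
    n = len(a)
    if len(b) != n or sum(a) != sum(b):
        return False
    if any(a[i] < a[i + 1] for i in range(n - 1)):
        raise ValueError("a must be nonincreasing")
    # Check sum_{i<=k} a_i <= sum_i min(b_i, k) for every k = 1..n.
    return all(
        sum(a[:k]) <= sum(min(bi, k) for bi in b)
        for k in range(1, n + 1)
    )

# Degree sequences of the complete bipartite graph K_{2,3}, padded to length 3:
print(is_bigraphic([3, 3, 0], [2, 2, 2]))  # True
# Equal sums alone are not enough: a vertex of degree 4 needs 4 neighbours.
print(is_bigraphic([4, 0], [2, 2]))        # False
```

Berger's refinement quoted above would let the loop skip every index k with a_k = a_{k+1}.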
diff --git a/wiki/wikipedia/858.txt b/wiki/wikipedia/858.txt deleted file mode 100644 index d34b0cd3546ea37cc792b1ab98271873737e9fc8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/858.txt +++ /dev/null @@ -1,34 +0,0 @@ -In differential geometry, Mikhail Gromov's filling area conjecture asserts that the hemisphere has minimum area among the orientable surfaces that fill a closed curve of given length without introducing shortcuts between its points. - -Every smooth surface M or curve in Euclidean space is a metric space, in which the (intrinsic) distance dM(x,y) between two points x, y of M is defined as the infimum of the lengths of the curves that go from x to y along M. For example, on a closed curve $ C $ of length 2L, for each point x of the curve there is a unique other point of the curve (called the antipodal of x) at distance L from x. - -A compact surface M fills a closed curve C if its border (also called boundary, denoted ∂M) is the curve C. The filling M is said to be isometric if for any two points x,y of the boundary curve C, the distance dM(x,y) between them along M is the same as (not less than) the distance dC(x,y) along the boundary. In other words, to fill a curve isometrically is to fill it without introducing shortcuts. - -Question: What is the smallest possible area of a surface that isometrically fills its boundary curve, of given length? - -For example, in three-dimensional Euclidean space, the circle -$$ - C = \{(x,y,0):\ x^2+y^2=1\} -$$ - -(of length 2π) is filled by the flat disk -$$ - D = \{(x,y,0):\ x^2+y^2\leq 1\} -$$ - -which is not an isometric filling, because any straight chord along it is a shortcut. In contrast, the hemisphere -$$ - H = \{(x,y,z):\ x^2+y^2+z^2=1\text{ and }z\geq 0\} -$$ - -is an isometric filling of the same circle C, which has twice the area of the flat disk. Is this the minimum possible area? - -The surface can be imagined as made of a flexible but non-stretchable material that allows it to be moved around and bent in Euclidean space. None of these transformations modifies the area of the surface or the length of the curves drawn on it, which are the magnitudes relevant to the problem. The surface can be removed from Euclidean space altogether, obtaining a Riemannian surface, which is an abstract smooth surface with a Riemannian metric that encodes the lengths and area. Conversely, according to the Nash-Kuiper theorem, any Riemannian surface with boundary can be embedded in Euclidean space preserving the lengths and area specified by the Riemannian metric. Thus the filling problem can be stated equivalently as a question about Riemannian surfaces that are not placed in Euclidean space in any particular way. - -Conjecture (Gromov's filling area conjecture, 1983): The hemisphere has minimum area among the orientable compact Riemannian surfaces that isometrically fill their boundary curve, of given length. - -In the same paper where Gromov stated the conjecture, he proved that - -the hemisphere has least area among the Riemannian surfaces that isometrically fill a circle of given length, and are homeomorphic to a disk. This implies that if a piece of sphere is sufficiently small (and therefore, nearly flat), then it is a volume minimizer. If this theorem can be extended to large regions (namely, to the whole hemisphere), then the filling area conjecture is true.
It has been conjectured that all simple Riemannian manifolds (those that are convex at their boundary, and where every two points are joined by a unique geodesic) are volume minimizers. - -The proof that each almost flat manifold M is a volume minimizer involves embedding M in $ L^\infty(\partial M)$, and then showing that any isometric replacement of M can also be mapped into the same space $ L^\infty (\partial M)$, and projected onto M, without increasing its volume. This implies that the replacement has no less volume than the original manifold M. diff --git a/wiki/wikipedia/859.txt b/wiki/wikipedia/859.txt deleted file mode 100644 index c8cfd9605d80fe0b394e1c6de4acb9a0e46dae01..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/859.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, Janiszewski's theorem, named after the Polish mathematician Zygmunt Janiszewski, is a result concerning the topology of the plane or extended plane. It states that if A and B are closed subsets of the extended plane with connected intersection, then any two points that can be connected by paths avoiding either A or B can be connected by a path avoiding both of them. The theorem has been used as a tool for proving the Jordan curve theorem and in complex function theory. diff --git a/wiki/wikipedia/86.txt b/wiki/wikipedia/86.txt deleted file mode 100644 index 74600038d31dcb2687bdc31bcbe5d06a54821f6a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/86.txt +++ /dev/null @@ -1,89 +0,0 @@ -In mathematics, Bochner's theorem (named for Salomon Bochner) characterizes the Fourier transform of a positive finite Borel measure on the real line. More generally in harmonic analysis, Bochner's theorem asserts that under the Fourier transform a continuous positive-definite function on a locally compact abelian group corresponds to a finite positive measure on the Pontryagin dual group. - -Bochner's theorem for a locally compact abelian group G, with dual group $\widehat{G}$, says the following: - -Theorem For any normalized continuous positive-definite function f on G (normalization here means that f is 1 at the unit of G), there exists a unique probability measure μ on $\widehat{G}$ such that -$$ -f(g) = \int_{\widehat{G}} \xi(g) d\mu(\xi), -$$ - -i.e. f is the Fourier transform of a unique probability measure μ on $\widehat{G}$. Conversely, the Fourier transform of a probability measure on $\widehat{G}$ is necessarily a normalized continuous positive-definite function f on G. This is in fact a one-to-one correspondence. - -The Gelfand–Fourier transform is an isomorphism between the group C*-algebra C*(G) and C0(Ĝ). The theorem is essentially the dual statement for states of the two abelian C*-algebras. - -The proof of the theorem passes through vector states on strongly continuous unitary representations of G (the proof in fact shows that every normalized continuous positive-definite function must be of this form). - -Given a normalized continuous positive-definite function f on G, one can construct a strongly continuous unitary representation of G in a natural way: Let F0(G) be the family of complex-valued functions on G with finite support, i.e. h(g) = 0 for all but finitely many g. The positive-definite kernel K(g1, g2) = f(g1 − g2) induces a (possibly degenerate) inner product on F0(G). Quotienting out degeneracy and taking the completion gives a Hilbert space -$$ -(\mathcal{H}, \langle \cdot, \cdot\rangle_f), -$$ - -whose typical element is an equivalence class [h].
For a fixed g in G, the "shift operator" Ug defined by (Ug h)(g') = h(g' − g), for a representative of [h], is unitary. So the map -$$ -g \mapsto U_g -$$ - -is a unitary representation of G on $(\mathcal{H}, \langle \cdot, \cdot\rangle_f)$. By continuity of f, it is weakly continuous, therefore strongly continuous. By construction, we have -$$ -\langle U_g [e], [e] \rangle_f = f(g), -$$ - -where [e] is the class of the function that is 1 on the identity of G and zero elsewhere. But by the Gelfand–Fourier isomorphism, the vector state $\langle \cdot [e], [e] \rangle_f $ on C*(G) is the pull-back of a state on $C_0(\widehat{G})$, which is necessarily integration against a probability measure μ. Chasing through the isomorphisms then gives -$$ -\langle U_g [e], [e] \rangle_f = \int_{\widehat{G}} \xi(g) d\mu(\xi). -$$ - -On the other hand, given a probability measure μ on $\widehat{G}$, the function -$$ -f(g) = \int_{\widehat{G}} \xi(g) d\mu(\xi) -$$ - -is a normalized continuous positive-definite function. Continuity of f follows from the dominated convergence theorem. For positive-definiteness, take a nondegenerate representation of $C_0(\widehat{G})$. This extends uniquely to a representation of its multiplier algebra $C_b(\widehat{G})$ and therefore a strongly continuous unitary representation Ug. As above we have f given by some vector state on Ug -$$ -f(g) = \langle U_g v, v \rangle, -$$ - -therefore positive-definite. - -The two constructions are mutual inverses. - -Bochner's theorem in the special case of the discrete group Z is often referred to as Herglotz's theorem (see Herglotz representation theorem) and says that a function f on Z with f(0) = 1 is positive-definite if and only if there exists a probability measure μ on the circle T such that -$$ -f(k) = \int_{\mathbb{T}} e^{-2 \pi i k x} d\mu(x). -$$ - -Similarly, a continuous function f on R with f(0) = 1 is positive-definite if and only if there exists a probability measure μ on R such that -$$ -f(t) = \int_{\mathbb{R}} e^{-2 \pi i \xi t} d\mu(\xi). -$$ - -In statistics, Bochner's theorem can be used to describe the serial correlation of certain types of time series. A sequence of random variables $\{f_n\}$ of mean 0 is a (wide-sense) stationary time series if the covariance -$$ -\operatorname{Cov}(f_n, f_m) -$$ - -only depends on n − m. The function -$$ -g(n - m) = \operatorname{Cov}(f_n, f_m) -$$ - -is called the autocovariance function of the time series. By the mean zero assumption, -$$ -g(n - m) = \langle f_n, f_m \rangle, -$$ - -where ⟨⋅, ⋅⟩ denotes the inner product on the Hilbert space of random variables with finite second moments. It is then immediate that g is a positive-definite function on the integers ℤ. By Bochner's theorem, there exists a unique positive measure μ on [0, 1] such that -$$ -g(k) = \int e^{-2 \pi i k x} d\mu(x). -$$ - -This measure μ is called the spectral measure of the time series. It yields information about the "seasonal trends" of the series. - -For example, let z be an m-th root of unity (with the current identification, this is 1/m ∈ [0, 1]) and f be a random variable of mean 0 and variance 1. Consider the time series $\{z^n f\}$. The autocovariance function is -$$ -g(k) = z^k. -$$ - -Evidently, the corresponding spectral measure is the Dirac point mass centered at z. This is related to the fact that the time series repeats itself every m periods.
- -When g has sufficiently fast decay, the measure μ is absolutely continuous with respect to the Lebesgue measure, and its Radon–Nikodym derivative f is called the spectral density of the time series. When g lies in ℓ1(ℤ), f is the Fourier transform of g. diff --git a/wiki/wikipedia/860.txt b/wiki/wikipedia/860.txt deleted file mode 100644 index db36252c7ce81aac1b52b1525058d3f2e38c6782..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/860.txt +++ /dev/null @@ -1,47 +0,0 @@ -A partition of a polygon is a set of primitive units (e.g. squares), which do not overlap and whose union equals the polygon. A polygon partition problem is a problem of finding a partition which is minimal in some sense, for example a partition with a smallest number of units or with units of smallest total side-length. - -Polygon partitioning is an important class of problems in computational geometry. There are many different polygon partition problems, depending on the type of polygon being partitioned and on the types of units allowed in the partition. - -The term polygon decomposition is often used as a general term that includes both polygon covering and partitioning. - -Polygon decomposition is applied in several areas. - -The problem of minimizing the number of component rectangles is polynomial: several polynomial-time algorithms are known. - -In some applications, it is more important to minimize the total length of the cuts (e.g. to minimize the cost of performing the partition, or to minimize the amount of dust). This problem is called minimum edge-length rectangular partitioning. It was first studied by Lingas, Pinter, Rivest and Shamir in 1982. The run-time complexity of this problem crucially depends on whether the raw polygon is allowed to have holes. - -If the raw polygon is hole-free, then an optimal partition can be found in time $O(n^4)$, where n is the number of vertices of the polygon. In the special case of a "histogram polygon", the complexity improves to $O(n^3)$. For the case in which all holes are single points, several constant-factor approximations have been developed: - -* A $(3+\sqrt{3})$-approximation in time $O(n^2)$; this approximation uses a restricted variant of the problem called guillotine partitioning, in which the cuts must be guillotine cuts (edge-to-edge cuts). - -*Several polynomial-time approximation schemes using sophisticated guillotine cuts. - -* If the large polygon is a rectangle, then in any maximal arrangement of n rectangles, all the holes are rectangles, and their number is at most $n - \lceil 2 \sqrt{n} - 1\rceil$, and this is tight. - -* If the large polygon is a rectilinear polygon with T reflex vertices, then in any maximal arrangement of n rectangles, the holes can be partitioned into at most $T + n - \lceil 2 \sqrt{n} - 1\rceil$ rectangles, and this is tight. - -In VLSI artwork processing systems, it is often required to partition a polygonal region into the minimum number of trapezoids with two horizontal sides. A triangle with a horizontal side is considered to be a trapezoid with two horizontal sides, one of which is degenerate. For a hole-free polygon with $n$ sides, a smallest such partition can be found in time $O(n^2)$. - -If the polygon does contain holes, the problem is NP-complete, but a 3-approximation can be found in time $O(n\log n)$. - -A quadrilateralization or a quadrangulation is a partition into quadrilaterals.
- -A recurring characteristic of quadrangulation problems is whether Steiner points are allowed, i.e., whether the algorithm is allowed to add points which are not vertices of the polygon. Allowing Steiner points may enable smaller divisions, but then it is much more difficult to guarantee that the divisions found by an algorithm have minimum size. - -There are linear-time algorithms for quadrangulations of hole-free polygons with Steiner points, but they are not guaranteed to find a smallest partition. - -A generalization of previous problems is the partitioning into polygons that have exactly m sides, or at most m sides. Here the goal is to minimize the total edge length. This problem can be solved in time polynomial in n and m. - -When partitioning a general polygon into convex polygons, several objectives have been studied. - -The optimal convex partitioning problem is to partition a non-convex polygon into as few convex polygons as possible, using only the initial polygon's vertices. There are exact and approximate algorithms for this problem. - -The original polygon already contains some pairwise-disjoint convex figures, and the goal is to partition it into convex polygons such that each original figure is contained in one of the pieces, and subject to this, the number of "blanks" (pieces that do not contain an original figure) is as small as possible. If the large polygon is convex, then in any maximal arrangement of n convex figures, all the holes are convex, and their number is at most $2n-5$, and this is tight. - -The fair polygon partitioning problem is to partition a (convex) polygon into (convex) pieces with an equal perimeter and equal area (this is a special case of fair cake-cutting). Any convex polygon can be easily cut into any number n of convex pieces with an area of exactly 1/n. However, ensuring that the pieces have both equal area and equal perimeter is more challenging. There are algorithms for solving this problem when the number of pieces is a power of 2. - -A generalization of this problem is when the area and perimeter measures are replaced with a measure on the body and on the boundary of the polygon, respectively. This problem was studied for 2 and 3 pieces. - -There is a further generalization to handle any number of measures. - -More general shapes of pieces have been studied, including spiral shapes, star polygons and monotone polygons; see the literature for a survey. diff --git a/wiki/wikipedia/861.txt b/wiki/wikipedia/861.txt deleted file mode 100644 index 5e9134c94930693be6f1ca1f3cc43965ebee2acf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/861.txt +++ /dev/null @@ -1 +0,0 @@ -In number theory, the Katz–Lang finiteness theorem, proved by Nicholas Katz and Serge Lang, states that if X is a smooth geometrically connected scheme of finite type over a field K that is finitely generated over the prime field, and Ker(X/K) is the kernel of the map between the abelianized fundamental groups of X and of K, then Ker(X/K) is finite if K has characteristic 0, and the part of the kernel coprime to p is finite if K has characteristic p > 0. diff --git a/wiki/wikipedia/862.txt b/wiki/wikipedia/862.txt deleted file mode 100644 index e5b017ee5b1897dea7f6b4a09990feba2f85a594..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/862.txt +++ /dev/null @@ -1,58 +0,0 @@ -A state space is the set of all possible configurations of a system. It is a useful abstraction for reasoning about the behavior of a given system and is widely used in the fields of artificial intelligence and game theory.
- -For instance, the toy problem Vacuum World has a discrete finite state space in which there is a limited set of configurations that the vacuum and dirt can be in. A "counter" system, where states are the natural numbers starting at 1 and are incremented over time, has an infinite discrete state space. The angular position of an undamped pendulum is a continuous (and therefore infinite) state space. - -In the theory of dynamical systems, for a discrete system defined by a function ƒ, the state space of the system can be modeled as a directed graph where each possible state of the dynamical system is represented by a vertex, and there is a directed edge from a to b if and only if ƒ(a) = b. This is known as a state diagram. - -For a continuous dynamical system defined by a function ƒ, the state space of the system is the image of ƒ. - -State spaces are useful in computer science as a simple model of machines. Formally, a state space can be defined as a tuple [N, A, S, G] where: - -* N is a set of states - -* A is a set of arcs connecting the states - -* S is a nonempty subset of N that contains start states - -* G is a nonempty subset of N that contains the goal states. - -A state space has some common properties: - -* complexity, where the branching factor is important - -* structure of the space, see also graph theory: - -** directionality of arcs - -** tree - -** rooted graph - -For example, the Vacuum World has a branching factor of 4, as the vacuum cleaner can end up in 1 of 4 adjacent squares after moving (assuming it cannot stay in the same square nor move diagonally). The arcs of Vacuum World are bidirectional, since any square can be reached from any adjacent square, and the state space is not a tree since it is possible to enter a loop by moving between any 4 adjacent squares. - -State spaces can be either infinite or finite, and discrete or continuous. - -The size of the state space for a given system is the number of possible configurations of the space. - -If the size of the state space is finite, calculating the size of the state space is a combinatorial problem. For example, in the Eight queens puzzle, the state space can be calculated by counting all possible ways to place 8 pieces on an 8x8 chessboard. This is the same as choosing 8 positions without replacement from a set of 64, or -$$ - \binom{64}{8} = 4,426,165,368 -$$ - -This is significantly greater than the number of legal configurations of the queens, 92. In many games the effective state space is small compared to all reachable/legal states. This property is also observed in Chess, where the effective state space is the set of positions that can be reached by game-legal moves. This is far smaller than the set of positions that can be achieved by placing combinations of the available chess pieces directly on the board. - -All continuous state spaces can be described by a corresponding continuous function and are therefore infinite. Discrete state spaces can also have (countably) infinite size, such as the state space of the time-dependent "counter" system, similar to the system in queueing theory defining the number of customers in a line, which would have state space {0, 1, 2, 3, ...}. - -Exploring a state space is the process of enumerating possible states in search of a goal state. The state space of Pacman, for example, contains a goal state whenever all food pellets have been eaten, and is explored by moving Pacman around the board.
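To make the tuple [N, A, S, G] concrete, here is a minimal breadth-first search over an explicit state space; this is an illustrative sketch, and the graph and state names are invented, not taken from any of the examples above.

```python
from collections import deque

def bfs(arcs, starts, goals):
    """Breadth-first search over a state space [N, A, S, G].
    arcs: dict mapping each state to the states reachable in one step."""
    frontier = deque(starts)
    parent = {s: None for s in starts}
    while frontier:
        state = frontier.popleft()
        if state in goals:
            # Reconstruct the path from a start state to this goal state.
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in arcs.get(state, ()):
            if nxt not in parent:       # not yet visited
                parent[nxt] = state
                frontier.append(nxt)
    return None  # no goal state is reachable

# A 4-state toy space with bidirectional arcs and a single goal state.
arcs = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(bfs(arcs, starts={"A"}, goals={"D"}))  # ['A', 'B', 'D']
```

Because every arc has the same cost here, breadth-first search also happens to return a shortest path; uniform-cost search or A* would be the usual choices when arc costs differ.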
- -A search state is a compressed representation of a world state in a state space, and is used for exploration. Search states are used because a state space often encodes more information than is necessary to explore the space. Compressing each world state to only the information needed for exploration improves efficiency by reducing the number of states in the search. For example, a state in the Pacman space includes information about the direction Pacman is facing (up, down, left, or right). Since it does not cost anything to change directions in Pacman, search states for Pacman would not include this information, reducing the size of the search space by a factor of 4, one for each direction Pacman could be facing. - -Standard search algorithms are effective in exploring discrete state spaces. The following algorithms exhibit both completeness and optimality in searching a state space. - -* Breadth-First Search - -* A* Search - -* Uniform Cost Search - -These methods do not extend naturally to exploring continuous state spaces. Exploring a continuous state space in search of a given goal state is equivalent to optimizing an arbitrary continuous function, which is not always possible; see mathematical optimization. diff --git a/wiki/wikipedia/863.txt b/wiki/wikipedia/863.txt deleted file mode 100644 index 228dee4d2350a04db260f79412c5ac377fe6d71f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/863.txt +++ /dev/null @@ -1,236 +0,0 @@ -[Figure: The sum of the reciprocals of the primes increases without bound. The x axis is in log scale, showing that the divergence is very slow. The red function is a lower bound that also diverges.] The sum of the reciprocals of all prime numbers diverges; that is: -$$ -\sum_{p\text{ prime}}\frac1p = \frac12 + \frac13 + \frac15 + \frac17 + \frac1{11} + \frac1{13} + \frac1{17} + \cdots = \infty -$$ - -This was proved by Leonhard Euler in 1737, and strengthens (i.e. it gives more information than) Euclid's 3rd-century-BC result that there are infinitely many prime numbers. - -There are a variety of proofs of Euler's result, including a lower bound for the partial sums stating that -$$ -\sum_{\scriptstyle p\text{ prime}\atop \scriptstyle p\le n}\frac1p \ge \log \log (n+1) - \log\frac{\pi^2}6 -$$ - -for all natural numbers n. The double natural logarithm (log log) indicates that the divergence might be very slow, which is indeed the case. See Meissel–Mertens constant. - -First, we describe how Euler originally discovered the result. He was considering the harmonic series -$$ - \sum_{n=1}^\infty \frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots = \infty -$$ - -He had already used the following "product formula" to show the existence of infinitely many primes. -$$ - \sum_{n=1}^\infty \frac{1}{n} = \prod_{p} \left( 1+\frac{1}{p}+\frac{1}{p^2}+\cdots \right) = \prod_{p} \frac{1}{1-p^{-1}} -$$ - -Here the product is taken over the set of all primes. - -Such infinite products are today called Euler products. The product above is a reflection of the fundamental theorem of arithmetic. Euler noted that if there were only a finite number of primes, then the product on the right would clearly converge, contradicting the divergence of the harmonic series. - -Euler considered the above product formula and proceeded to make a sequence of audacious leaps of logic.
First, he took the natural logarithm of each side, then he used the Taylor series expansion for log x as well as the sum of a converging series: - -\begin{align} - -\log \left( \sum_{n=1}^\infty \frac{1}{n}\right) & {} = \log\left( \prod_p \frac{1}{1-p^{-1}}\right) - -= -\sum_p \log \left( 1-\frac{1}{p}\right) \\[5pt] - -& = \sum_p \left( \frac{1}{p} + \frac{1}{2p^2} + \frac{1}{3p^3} + \cdots \right) \\[5pt] - -& = \sum_{p}\frac{1}{p} + \frac{1}{2}\sum_p \frac{1}{p^2} + \frac{1}{3}\sum_p \frac{1}{p^3} + \frac{1}{4}\sum_p \frac{1}{p^4}+ \cdots \\[5pt] - -& = A + \frac{1}{2} B+ \frac{1}{3} C+ \frac{1}{4} D + \cdots \\[5pt] - -& = A + K - -\end{align} - -for a fixed constant K < 1. Then he invoked the relation -$$ -\sum_{n=1}^\infty\frac1n=\log\infty, -$$ - -which he explained, for instance in a later 1748 work, by setting x = 1 in the Taylor series expansion -$$ -\log\left(\frac1{1-x}\right)=\sum_{n=1}^\infty\frac{x^{n}}n. -$$ - -This allowed him to conclude that -$$ -A=\frac{1}{2} + \frac{1}{3} + \frac{1}{5} + \frac{1}{7} + \frac{1}{11} + \cdots = \log \log \infty. -$$ - -It is almost certain that Euler meant that the sum of the reciprocals of the primes less than n is asymptotic to log log n as n approaches infinity. It turns out this is indeed the case, and a more precise version of this fact was rigorously proved by Franz Mertens in 1874. Thus Euler obtained a correct result by questionable means. - -The following proof by contradiction is due to Paul Erdős. - -Let pi denote the ith prime number. Assume that the sum of the reciprocals of the primes converges. - -Then there exists a smallest positive integer k such that -$$ -\sum_{i=k+1}^\infty \frac 1 {p_i} < \frac12 \qquad(1) -$$ - -For a positive integer x, let Mx denote the set of those n in {1, 2, …, x} which are not divisible by any prime greater than pk (or equivalently all n ≤ x which are a product of powers of primes pi ≤ pk). We will now derive an upper and a lower estimate for |Mx|, the number of elements in Mx. For large x, these bounds will turn out to be contradictory. - -Upper estimate: - -Every n in Mx can be written as $n = m^2 r$ with positive integers m and r, where r is square-free. Since only the k primes p1, …, pk can show up (with exponent 1) in the prime factorization of r, there are at most $2^k$ different possibilities for r. Furthermore, there are at most $\sqrt{x}$ possible values for m. This gives us the upper estimate -$$ -|M_x| \le 2^k\sqrt{x} \qquad(2) -$$ - -Lower estimate: - -The remaining x − |Mx| numbers in the set difference {1, 2, …, x} \ Mx are all divisible by a prime greater than pk. Let Ni,x denote the set of those n in {1, 2, …, x} which are divisible by the ith prime pi. Then -$$ -\{1,2,\ldots,x\}\smallsetminus M_x = \bigcup_{i=k+1}^\infty N_{i,x} -$$ - -Since the number of integers in Ni,x is at most x/pi (actually zero for pi > x), we get -$$ -x-|M_x| \le \sum_{i=k+1}^\infty |N_{i,x}|< \sum_{i=k+1}^\infty \frac x {p_i} -$$ - -Using (1), this implies -$$ -\frac x 2 < |M_x| \qquad(3) -$$ - -This produces a contradiction: when $x \ge 2^{2k+2}$, the estimates (2) and (3) cannot both hold, because $x/2 \ge 2^k\sqrt{x}$. - -Here is another proof that actually gives a lower estimate for the partial sums; in particular, it shows that these sums grow at least as fast as log log n. The proof is due to Ivan Niven, adapted from the product expansion idea of Euler. In the following, a sum or product taken over p always represents a sum or product taken over a specified set of primes.
- -The proof rests upon the following four inequalities: - -* Every positive integer i can be uniquely expressed as the product of a square-free integer and a square as a consequence of the fundamental theorem of arithmetic. Start with: -$$ -i = q_1^{2{\alpha}_1+{\beta}_1} \cdot q_2^{2{\alpha}_2+{\beta}_2} \cdot\ldots \cdot q_r^{2{\alpha}_r+{\beta}_r}, -$$ - -where the βs are 0 (the corresponding power of prime q is even) or 1 (the corresponding power of prime q is odd). Factor out one copy of all the primes whose β is 1, leaving a product of primes to even powers, itself a square. Relabeling: -$$ -i = (p_1 p_2 \ldots p_s) \cdot b^2, -$$ - -where the first factor, a product of primes to the first power, is square free. Inverting all the is gives the inequality -$$ - \sum_{i=1}^n \frac 1 i \le \left(\prod_{p \le n} \left(1 + \frac 1 p \right)\right) \cdot \left(\sum_{k=1}^n \frac 1 {k^2}\right) = A \cdot B. -$$ - -To see this, note that -$$ -\frac 1 i = \frac 1 {p_1 p_2 \ldots p_s} \cdot \frac 1 {b^2}, -$$ - -where - -\begin{align} - -\left(1 + \frac 1 p_1\right)\left(1 + \frac 1 p_2\right) \ldots \left(1 + \frac 1 p_s\right) &= \left(\frac 1 p_1\right)\left(\frac 1 p_2\right)\ldots\left(\frac 1 p_s\right) + \ldots\\ - -&= \frac 1 {p_1 p_2 \ldots p_s} + \ldots.\end{align} - -That is, $1/(p_1p_2 \ldots p_s)$ is one of the summands in the expanded product A. And since $1/b^2$ is one of the summands of B, every i is represented in one of the terms of AB when multiplied out. The inequality follows. - -* The upper estimate for the natural logarithm - -\begin{align} - -\log(n+1) &= \int_1^{n+1} \frac{dx}x \\ - -&= \sum_{i=1}^n\underbrace{\int_i^{i+1}\frac{dx}x}_{{} < \frac1i} \\ - -&< \sum_{i=1}^n \frac 1 i - -\end{align} - -* The lower estimate 1 + x < exp(x) for the exponential function, which holds for all x > 0. - -* Let n ≥ 2. The upper bound (using a telescoping sum) for the partial sums (convergence is all we really need) - -\begin{align} - -\sum_{k=1}^n \frac 1 {k^2} - -&< 1 + \sum_{k=2}^n \underbrace{\left(\frac1{k - \frac{1}{2}} - \frac1{k + \frac{1}{2}}\right)}_{= \frac{1}{k^2 - \frac14} > \frac{1}{k^2}} \\ - -&= 1 + \frac23 - \frac1{n + \frac{1}{2}} < \frac53 - -\end{align} - -Combining all these inequalities, we see that - -\begin{align} - -\log(n+1) & < \sum_{i=1}^n\frac{1}{i} \\ - -& \le \prod_{p \le n} \left(1 + \frac{1}{p}\right) \sum_{k=1}^n \frac{1}{k^2} \\ - -& < \frac53\prod_{p \le n} \exp\left(\frac{1}{p}\right) \\ - -& = \frac53\exp\left(\sum_{p \le n} \frac{1}{p} \right) - -\end{align} - -Dividing through by 5/3 and taking the natural logarithm of both sides gives -$$ -\log\log(n + 1) - \log\frac53 < \sum_{p \le n} \frac{1}{p} -$$ - -as desired. ∎ - -Using -$$ -\sum_{k=1}^\infty \frac{1}{k^2} = \frac{\pi^2}6 -$$ - -(see the Basel problem), the above constant log 5/3 = 0.51082… can be improved to log π2/6 = 0.4977…; in fact it turns out that -$$ - \lim_{n \to \infty } \left( \sum_{p \leq n} \frac{1}{p} - \log \log n \right) = M -$$ - -where M = 0.261497… is the Meissel–Mertens constant (somewhat analogous to the much more famous Euler–Mascheroni constant). - -From Dusart's inequality, we get -$$ - p_n < n \log n + n \log \log n \quad\mbox{for } n \ge 6 -$$ - -Then - -\begin{align} - -\sum_{n=1}^\infty \frac1{ p_n} - -&\ge \sum_{n=6}^\infty \frac1{ p_n} \\ - -&\ge \sum_{n=6}^\infty \frac1{ n \log n + n \log \log n} \\ - -&\ge \sum_{n=6}^\infty \frac1{2n \log n} = \infty - -\end{align} - -by the integral test for convergence. This shows that the series on the left diverges. 
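The slow, log log n growth, and the Meissel–Mertens constant M ≈ 0.2615 mentioned above, can be observed numerically. The following Python sketch assumes nothing beyond the standard library; the sieve and its bounds are illustrative choices.

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes returning the primes p <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # Mark all multiples of p starting at p*p as composite.
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return (p for p in range(2, n + 1) if sieve[p])

# The difference should creep toward M = 0.261497... as n grows.
for n in (10**3, 10**4, 10**5, 10**6):
    s = sum(1.0 / p for p in primes_up_to(n))
    print(f"n = {n:>8}: sum = {s:.6f}, sum - log log n = {s - math.log(math.log(n)):.6f}")
```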
- -Suppose for contradiction the sum converged. Then, there exists $ n $ such that $ \sum_{i \geq n+1} \frac{1}{p_i} < 1 $. Call this sum $x $. - -Now consider the convergent geometric series $x+x^2+x^3+\cdots$. - -This geometric series contains the sum of reciprocals of all numbers whose prime factorization contain only primes in the set $ \{ p_{n+1}, p_{n+2}, \cdots \}$. - -Consider the subseries $\sum_{i \geq 1} \frac{1}{1+i(p_1 p_2 \cdots p_n)}$. This is a subseries because $1+i(p_1p_2 \cdots p_n)$ is not divisible by any $p_j, j \leq n $. - -However, by the Limit comparison test, this subseries diverges by comparing it to the harmonic series. Indeed, $\lim_{i \to \infty} \frac{1+i(p_1 p_2 \cdots p_n)}{i}=p_1 p_2 \cdots p_n$. - -Thus, we have found a divergent subseries of the original convergent series, and since all terms are positive, this gives the contradiction. We may conclude $\sum_{i \geq 1} \frac{1}{p_i}$ diverges. $ \blacksquare$ - -While the partial sums of the reciprocals of the primes eventually exceed any integer value, they never equal an integer. - -One proof is by induction: The first partial sum is 1/2, which has the form odd/even. If the nth partial sum (for n ≥ 1) has the form odd/even, then the (n + 1)st sum is -$$ -\frac\text{odd}\text{even} + \frac{1}{p_{n+1}} = \frac{\text{odd} \cdot p_{n+1} + \text{even}}{\text{even} \cdot p_{n+1}} = \frac{\text{odd} + \text{even}}\text{even} = \frac\text{odd}\text{even} -$$ - -as the (n + 1)st prime pn + 1 is odd; since this sum also has an odd/even form, this partial sum cannot be an integer (because 2 divides the denominator but not the numerator), and the induction continues. - -Another proof rewrites the expression for the sum of the first n reciprocals of primes (or indeed the sum of the reciprocals of any set of primes) in terms of the least common denominator, which is the product of all these primes. Then each of these primes divides all but one of the numerator terms and hence does not divide the numerator itself; but each prime does divide the denominator. Thus the expression is irreducible and is non-integer. diff --git a/wiki/wikipedia/864.txt b/wiki/wikipedia/864.txt deleted file mode 100644 index 76d53e62248091e309ec8ad4b907dcef92c73bd2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/864.txt +++ /dev/null @@ -1,6 +0,0 @@ -In mathematics, the Beurling–Lax theorem is a theorem due to Beurling and Lax which characterizes the shift-invariant subspaces of the Hardy space $H^2(\mathbb{D},\mathbb{C})$. It states that each such space is of the form -$$ - \theta H^2(\mathbb{D},\mathbb{C}), -$$ - -for some inner function $\theta$. diff --git a/wiki/wikipedia/865.txt b/wiki/wikipedia/865.txt deleted file mode 100644 index 144fcbe55f5d62634b4d6a1087089ea04e04e3df..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/865.txt +++ /dev/null @@ -1,173 +0,0 @@ -In mathematics, a Beatty sequence (or homogeneous Beatty sequence) is the sequence of integers found by taking the floor of the positive multiples of a positive irrational number. Beatty sequences are named after Samuel Beatty, who wrote about them in 1926. - -Rayleigh's theorem, named after Lord Rayleigh, states that the complement of a Beatty sequence, consisting of the positive integers that are not in the sequence, is itself a Beatty sequence generated by a different irrational number. - -Beatty sequences can also be used to generate Sturmian words. 
- -A positive irrational number $r$ generates the Beatty sequence -$$ -\mathcal{B}_r = \lfloor r \rfloor, \lfloor 2r \rfloor, \lfloor 3r \rfloor,\ldots -$$ - -If $r > 1 ,$ then $s = r/(r-1)$ is also a positive irrational number. These two numbers naturally satisfy the equation $1/r + 1/s = 1$. - -The two Beatty sequences they generate, -$$ -\mathcal{B}_r = ( \lfloor nr \rfloor)_{n\geq 1} -$$ and -$$ -\mathcal{B}_s = ( \lfloor ns \rfloor)_{n\geq 1} -$$, - -form a pair of complementary Beatty sequences. Here, "complementary" means that every positive integer belongs to exactly one of these two sequences. - -When r is the golden mean, we have s = r + 1. In this case, the sequence $( \lfloor nr \rfloor)$, known as the lower Wythoff sequence, is - -* 1, 3, 4, 6, 8, 9, 11, 12, 14, 16, 17, 19, 21, 22, 24, 25, 27, 29, ... . - -and the complementary sequence $( \lfloor ns \rfloor)$, the upper Wythoff sequence, is - -* 2, 5, 7, 10, 13, 15, 18, 20, 23, 26, 28, 31, 34, 36, 39, 41, 44, 47, ... . - -These sequences define the optimal strategy for Wythoff's game, and are used in the definition of the Wythoff array. - -As another example, for $r = \sqrt{2}$, we have $s = 2 + \sqrt{2}$. In this case, the sequences are - -* 1, 2, 4, 5, 7, 8, 9, 11, 12, 14, 15, 16, 18, 19, 21, 22, 24, ... and - -* 3, 6, 10, 13, 17, 20, 23, 27, 30, 34, 37, 40, 44, 47, 51, 54, 58, ... . - -And for r = π and s = π/(π − 1) the sequences are - -* 3, 6, 9, 12, 15, 18, 21, 25, 28, 31, 34, 37, 40, 43, 47, 50, 53, ... and - -* 1, 2, 4, 5, 7, 8, 10, 11, 13, 14, 16, 17, 19, 20, 22, 23, 24, 26, ... . - -Any number in the first sequence is absent in the second, and vice versa. - -Beatty sequences got their name from the problem posed in the American Mathematical Monthly by Samuel Beatty in 1926. It is probably one of the most often cited problems ever posed in the Monthly. However, even earlier, in 1894 such sequences were briefly mentioned by John W. Strutt (3rd Baron Rayleigh) in the second edition of his book The Theory of Sound. - -Given $r > 1 ,$ let $s = r/(r-1)$. We must show that every positive integer lies in one and only one of the two sequences $\mathcal{B}_r$ and $\mathcal{B}_s$. We shall do so by considering the ordinal positions occupied by all the fractions $j/r$ and $k/s$ when they are jointly listed in nondecreasing order for positive integers j and k. - -To see that no two of the numbers can occupy the same position (as a single number), suppose to the contrary that $j/r = k/s$ for some j and k. Then $r/s = j/k$, a rational number, but also, $r/s = r(1 - 1/r) = r - 1,$ not a rational number. Therefore, no two of the numbers occupy the same position. - -For any $j/r$, there are $j$ positive integers $i$ such that $i/r \le j/r$ and $ \lfloor js/r \rfloor$ positive integers $k$ such that $k/s \le j/r$, so that the position of $j/r$ in the list is $j + \lfloor js/r \rfloor$. The equation $1/r + 1/s = 1$ implies -$$ -j + \lfloor js/r \rfloor = j + \lfloor j(s - 1) \rfloor = \lfloor js \rfloor. -$$ - -Likewise, the position of $k/s$ in the list is $\lfloor kr \rfloor$. - -Conclusion: every positive integer (that is, every position in the list) is of the form $\lfloor nr \rfloor$ or of the form $\lfloor ns \rfloor$, but not both. The converse statement is also true: if p and q are two real numbers such that every positive integer occurs precisely once in the above list, then p and q are irrational and the sum of their reciprocals is 1.
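A quick numerical check of complementarity for r = √2 (a sketch; the bound N and the names are arbitrary choices, and floating-point floors are accurate in this range):

```python
import math

def beatty_prefix(r, count):
    """First `count` terms floor(n*r) for n = 1..count."""
    return [math.floor(n * r) for n in range(1, count + 1)]

r = math.sqrt(2)
s = r / (r - 1)          # = 2 + sqrt(2), so 1/r + 1/s = 1

N = 1000
lower = set(beatty_prefix(r, N))
upper = set(beatty_prefix(s, N))

# Every integer in 1..N appears in exactly one of the two sequences.
print(all((k in lower) != (k in upper) for k in range(1, N + 1)))  # True
```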
- -Collisions: Suppose that, contrary to the theorem, there are integers j > 0 and k and m such that -$$ -j = \left\lfloor {k \cdot r} \right\rfloor = \left\lfloor {m \cdot s} \right\rfloor . -$$ - -This is equivalent to the inequalities -$$ -j \le k \cdot r < j + 1 \text{ and } j \le m \cdot s < j + 1. -$$ - -For non-zero j, the irrationality of r and s is incompatible with equality, so -$$ -j < k \cdot r < j + 1 \text{ and } j < m \cdot s < j + 1 -$$ - -which lead to -$$ -{j \over r} < k < {j + 1 \over r} \text{ and } {j \over s} < m < {j + 1 \over s}. -$$ - -Adding these together and using the hypothesis, we get -$$ -j < k + m < j + 1 -$$ - -which is impossible (one cannot have an integer between two adjacent integers). Thus the supposition must be false. - -Anti-collisions: Suppose that, contrary to the theorem, there are integers j > 0 and k and m such that -$$ -k \cdot r < j \text{ and } j + 1 \le (k + 1) \cdot r \text{ and } m \cdot s < j \text{ and } j + 1 \le (m + 1) \cdot s . -$$ - -Since j + 1 is non-zero and r and s are irrational, we can exclude equality, so -$$ -k \cdot r < j \text{ and } j + 1 < (k + 1) \cdot r \text{ and } m \cdot s < j \text{ and } j + 1 < (m + 1) \cdot s. -$$ - -Then we get -$$ -k < {j \over r} \text{ and } {j + 1 \over r} < k + 1 \text{ and } m < {j \over s} \text{ and } {j + 1 \over s} < m + 1 -$$ - -Adding corresponding inequalities, we get -$$ -k + m < j \text{ and } j + 1 < k + m + 2 -$$ -$$ -k + m < j < k + m + 1 -$$ - -which is also impossible. Thus the supposition is false. -$$ -m \in \mathcal{B}_r -$$ if and only if -$$ - 1 - \frac{1}{r} < \left[ \frac{m}{r} \right]_1 -$$ - -where $[x]_1$ denotes the fractional part of $x$ i.e., $[x]_1 = x - \lfloor x \rfloor$. - -Proof: -$$ - m \in B_r -$$ -$$ -\Leftrightarrow \exists n, m = \lfloor nr \rfloor -$$ -$$ -\Leftrightarrow m < nr < m + 1 -$$ -$$ -\Leftrightarrow \frac{m}{r} < n < \frac{m}{r} + \frac{1}{r} -$$ -$$ -\Leftrightarrow n - \frac{1}{r} < \frac{m}{r} < n -$$ -$$ -\Leftrightarrow 1 - \frac{1}{r} < \left[ \frac{m}{r} \right]_1 -$$ - -Furthermore, $m = \left\lfloor \left( \left\lfloor \frac{m}{r} \right\rfloor + 1 \right) r \right\rfloor$. - -Proof: -$$ -m = \left\lfloor \left( \left\lfloor \frac{m}{r} \right\rfloor + 1 \right) r \right\rfloor -$$ -$$ -\Leftrightarrow m < \left( \left\lfloor \frac{m}{r} \right\rfloor + 1 \right) r < m + 1 -$$ -$$ -\Leftrightarrow \frac{m}{r} < \left\lfloor \frac{m}{r} \right\rfloor + 1 < \frac{m + 1}{r} -$$ -$$ -\Leftrightarrow \left\lfloor \frac{m}{r} \right\rfloor + 1 - \frac{1}{r} < \frac{m}{r} < \left\lfloor \frac{m}{r} \right\rfloor + 1 -$$ -$$ -\Leftrightarrow 1 - \frac{1}{r} < \frac{m}{r} - \left\lfloor \frac{m}{r} \right\rfloor =\left[ \frac{m}{r} \right]_1 -$$ - -The first difference -$$ -\lfloor (n+1)r\rfloor-\lfloor nr\rfloor -$$ - -of the Beatty sequence associated with the irrational number $r$ is a characteristic Sturmian word over the alphabet $\{\lfloor r\rfloor,\lfloor r\rfloor+1\}$. - -If slightly modified, the Rayleigh's theorem can be generalized to positive real numbers (not necessarily irrational) and negative integers as well: if positive real numbers $r$ and $s$ satisfy $1/r + 1/s = 1$, the sequences $( \lfloor mr \rfloor)_{m \in \mathbb{Z}}$ and $( \lceil ns \rceil -1)_{n \in \mathbb{Z}}$ form a partition of integers. - -The Lambek–Moser theorem generalizes the Rayleigh theorem and shows that more general pairs of sequences defined from an integer function and its inverse have the same property of partitioning the integers. 
- -Uspensky's theorem states that, if $\alpha_1,\ldots,\alpha_n$ are positive real numbers such that $(\lfloor k\alpha_i\rfloor)_{k,i\ge1}$ contains all positive integers exactly once, then $n\le2.$ That is, there is no equivalent of Rayleigh's theorem to three or more Beatty sequences. diff --git a/wiki/wikipedia/866.txt b/wiki/wikipedia/866.txt deleted file mode 100644 index dc87f237810e5e201d3d2c3b49645d79eee0ecd6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/866.txt +++ /dev/null @@ -1 +0,0 @@ -Tetris Battle Gaiden () is a puzzle video game developed and published in 1993 by Bullet Proof Software for the Super Famicom. Released only in Japan, the game is a variant of the Tetris series involving multiplayer battles comparable to those of the Puyo Puyo and Columns series of video games. diff --git a/wiki/wikipedia/867.txt b/wiki/wikipedia/867.txt deleted file mode 100644 index 01cba9a7425cd45fe7dcaec331ef16a78032d634..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/867.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematics, specifically Riemannian geometry, Synge's theorem is a classical result relating the curvature of a Riemannian manifold to its topology. It is named for John Lighton Synge, who proved it in 1936. Let M be a compact Riemannian manifold with positive sectional curvature. The theorem asserts: - -* If M is even-dimensional and orientable, then M is simply connected. - -* If M is odd-dimensional, then it is orientable. diff --git a/wiki/wikipedia/868.txt b/wiki/wikipedia/868.txt deleted file mode 100644 index 9016e0d3db983e04126bf20b6498392d4ecbcd7a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/868.txt +++ /dev/null @@ -1,116 +0,0 @@ -In algebra and number theory, Euclid's lemma is a lemma that captures a fundamental property of prime numbers, namely: If a prime p divides the product ab of two integers a and b, then p must divide at least one of those integers a and b. For example, if p = 19, a = 133, b = 143, then ab = 133 × 143 = 19019, and since this is divisible by 19, the lemma implies that one or both of 133 or 143 must be as well. In fact, 133 = 19 × 7. - -If the premise of the lemma does not hold, i.e., p is a composite number, its consequent may be either true or false. For example, in the case of p = 10, a = 4, b = 15, composite number 10 divides ab = 4 × 15 = 60, but 10 divides neither 4 nor 15. - -This property is the key in the proof of the fundamental theorem of arithmetic. It is used to define prime elements, a generalization of prime numbers to arbitrary commutative rings. Euclid's Lemma shows that in the integers irreducible elements are also prime elements. The proof uses induction so it does not apply to all integral domains. - -Let $p$ be a prime number, and assume $p$ divides the product of two integers $a$ and $b$. In symbols, this is written $p \mid ab$. Its negation, $p$ does not divide $ab$, is written $p \nmid ab$. Then $p \mid a$ or $p \mid b$ (or both). Equivalent statements are: - -* If $p \nmid a$ and $p \nmid b$, then $p \nmid ab$. - -* If $p \nmid a$ and $p \mid ab$, then $p \mid b$. - -Euclid's lemma can be generalized from prime numbers to any integers: - -This is a generalization because if $n$ is prime, either - -* $n \mid a$ or - -* $n$ is relatively prime to $a$. In this second possibility, $n \nmid a$ so $n \mid b$. - -The lemma first appears as proposition 30 in Book VII of Euclid's Elements. It is included in practically every book that covers elementary number theory. 
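Both the prime case and the generalization of Euclid's lemma are easy to experiment with. Below is a small Python sketch (illustrative only, anticipating the Bézout-style argument discussed further on) that exhibits Bézout coefficients with an extended Euclidean algorithm and checks the conclusion on the 19, 133, 143 example above.

```python
def extended_gcd(x, y):
    """Return (g, r, s) with g = gcd(x, y) and r*x + s*y == g."""
    if y == 0:
        return (x, 1, 0)
    g, r, s = extended_gcd(y, x % y)
    return (g, s, r - (x // y) * s)

# Euclid's lemma for p = 19, a = 133, b = 143 (the example above):
p, a, b = 19, 133, 143
assert (a * b) % p == 0
print(a % p == 0 or b % p == 0)   # True: 19 divides 133

# Generalization: n = 10 is coprime to a = 3 and divides a*b = 3*70.
n, a, b = 10, 3, 70
g, r, s = extended_gcd(n, a)
assert g == 1 and r * n + s * a == 1
# Multiplying Bezout's identity by b gives r*n*b + s*(a*b) == b; both
# terms on the left are divisible by n, hence so is b.
print(b % n == 0)                  # True
```

Note that the composite example from above (n = 10, a = 4, b = 15) fails the coprimality hypothesis, since gcd(10, 4) = 2, which is why the generalization does not apply there.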
- -The generalization of the lemma to integers appeared in Jean Prestet's textbook Nouveaux Elémens de Mathématiques in 1681. - -In Carl Friedrich Gauss's treatise Disquisitiones Arithmeticae, the statement of the lemma is Euclid's Proposition 14 (Section 2), which he uses to prove the uniqueness of the decomposition into prime factors of an integer (Theorem 16), admitting the existence as "obvious". From this existence and uniqueness he then deduces the generalization of prime numbers to integers. For this reason, the generalization of Euclid's lemma is sometimes referred to as Gauss's lemma, but some believe this usage is incorrect due to confusion with Gauss's lemma on quadratic residues. - -In modern mathematics, a common proof involves a result called Bézout's identity, which was unknown at Euclid's time. Bézout's identity states that if x and y are relatively prime integers (i.e. they share no common divisors other than 1 and -1), then there exist integers r and s such that -$$ -rx+sy = 1. -$$ - -Let a and n be relatively prime, and assume that n|ab. By Bézout's identity, there are r and s making -$$ -rn+sa = 1. -$$ - -Multiply both sides by b: -$$ -rnb+sab = b. -$$ - -The first term on the left is divisible by n, and the second term is divisible by ab, which by hypothesis is divisible by n. Therefore their sum, b, is also divisible by n. This is the generalization of Euclid's lemma mentioned above. - -The following proof is inspired by Euclid's version of the Euclidean algorithm, which proceeds by using only subtractions. - -We prove the generalization directly: suppose that n is coprime to a (that is, their only common divisors are 1 and –1) and that n divides ab, say -$$ -nq=ab. -$$ - -(Euclid's lemma follows, since a prime p with $p \nmid a$ is coprime to a.) One has to prove that n divides b. For proving this by strong induction, we suppose that this has been proved for all positive lower values of ab. - -There are three cases: - -If n = a, coprimality implies n = 1, and n divides b trivially. - -If n < a, one has -$$ -n(q-b)=(a-n)b. -$$ - -The positive integers a – n and n are coprime: if any prime divides both, then it divides their sum, and thus divides both n and a. This contradicts the coprimality hypothesis. As the right-hand side is positive, q – b is positive. So, the conclusion follows from the induction hypothesis, since a – n < a. - -If n > a, one has -$$ -(n-a)q=a(b-q). -$$ - -As above, n – a and a are coprime. Since b – q < b, by the induction hypothesis, there is an integer r such that $b-q=r(n-a).$ So -$$ -(n-a)q=a(b-q)=ar(n-a), -$$ - -and one gets q = ar, by dividing by n – a. Thus -$$ -ab=nq=anr, -$$ - -and, by division by a, one gets b = nr, which is the desired conclusion. - -Euclid's lemma is proved as Proposition 30 in Book VII of Euclid's Elements. The original proof is difficult to understand as is, so we quote the commentary from Euclid. - -;Proposition 19 - -If four numbers be proportional, the number produced from the first and fourth is equal to the number produced from the second and third; and, if the number produced from the first and fourth be equal to that produced from the second and third, the four numbers are proportional. - -;Proposition 20 - -The least numbers of those that have the same ratio with them measures those that have the same ratio the same number of times—the greater the greater and the less the less. - -;Proposition 21 - -Numbers prime to one another are the least of those that have the same ratio with them.
- -;Proposition 29 - -Any prime number is prime to any number it does not measure. - -;Proposition 30 - -If two numbers, by multiplying one another, make the same number, and any prime number measures the product, it also measures one of the original numbers. - -;Proof of 30 - -If c, a prime number, measure ab, c measures either a or b.
    Suppose c does not measure a.
    Therefore c, a are prime to one another. [VII. 29]
    Suppose ab=mc.
    Therefore c : a = b : m. [VII. 19]
    Hence [VII. 20, 21] b=nc, where n is some integer.
    Therefore c measures b.
    Similarly, if c does not measure b, c measures a.
    Therefore c measures one or other of the two numbers a, b.
    Q.E.D. diff --git a/wiki/wikipedia/869.txt b/wiki/wikipedia/869.txt deleted file mode 100644 index ab16d24a93e707e66c8e12d14b4b032aa5b5d038..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/869.txt +++ /dev/null @@ -1,77 +0,0 @@ -Data lineage includes the data origin, what happens to it and where it moves over time. Data lineage gives visibility while greatly simplifying the ability to trace errors back to the root cause in a data analytics process. - -It also enables replaying specific portions or inputs of the data flow for step-wise debugging or regenerating lost output. Database systems use such information, called data provenance, to address similar validation and debugging challenges. Data provenance refers to records of the inputs, entities, systems, and processes that influence data of interest, providing a historical record of the data and its origins. The generated evidence supports forensic activities such as data-dependency analysis, error/compromise detection and recovery, auditing, and compliance analysis. "Lineage is a simple type of why provenance." These problems will only become larger and more acute as these systems and data continue to grow. As such, more cost-efficient ways of analyzing data intensive scalable computing (DISC) are crucial to their continued effective use. - -According to an EMC/IDC study: - -* 2.8ZB of data were created and replicated in 2012, - -* the digital universe will double every two years between now and 2020, and - -* there will be approximately 5.2TB of data for every person in 2020. - -Working with this scale of data has become very challenging. - -Unstructured data usually refers to information that doesn't reside in a traditional row-column database. Unstructured data files often include text and multimedia content. Examples include e-mail messages, word processing documents, videos, photos, audio files, presentations, webpages and many other kinds of business documents. Note that while these sorts of files may have an internal structure, they are still considered "unstructured" because the data they contain doesn't fit neatly in a database. Experts estimate that 80 to 90 percent of the data in any organization is unstructured. And the amount of unstructured data in enterprises is growing significantly often many times faster than structured databases are growing. "Big data can include both structured and unstructured data, but IDC estimates that 90 percent of big data is unstructured data." - -The fundamental challenge of unstructured data sources is that they are difficult for non-technical business users and data analysts alike to unbox, understand, and prepare for analytic use. Beyond issues of structure, is the sheer volume of this type of data. Because of this, current data mining techniques often leave out valuable information and make analyzing unstructured data laborious and expensive. - -In today’s competitive business environment, companies have to find and analyze the relevant data they need quickly. The challenge is going through the volumes of data and accessing the level of detail needed, all at a high speed. The challenge only grows as the degree of granularity increases. One possible solution is hardware. Some vendors are using increased memory and parallel processing to crunch large volumes of data quickly. Another method is putting data in-memory but using a grid computing approach, where many machines are used to solve a problem. Both approaches allow organizations to explore huge data volumes. 
Even with this level of sophisticated hardware and software, some large-scale image processing tasks can take from a few days to a few weeks. Debugging such data processing is extremely hard due to the long run times. - -A third approach of advanced data discovery solutions combines self-service data preparation with visual data discovery, enabling analysts to simultaneously prepare and visualize data side-by-side in an interactive analysis environment, offered by newer companies such as Trifacta and Alteryx. - -Another method of tracking data lineage is through spreadsheet programs such as Excel, which offer users cell-level lineage, or the ability to see which cells depend on one another; however, the structure of the transformation is lost. Similarly, ETL or mapping software provides transform-level lineage, yet this view typically doesn’t display data and is too coarse-grained to distinguish between transforms that are logically independent (e.g. transforms that operate on distinct columns) or dependent. - -Big Data platforms have a very complicated structure. Data is distributed among several machines. Typically the jobs are mapped onto several machines and results are later combined by reduce operations. Debugging a big data pipeline becomes very challenging because of the very nature of the system. It is not an easy task for the data scientist to figure out which machine's data has the outliers and unknown features causing a particular algorithm to give unexpected results. - -Data provenance or data lineage can be used to make the debugging of a big data pipeline easier. This necessitates the collection of data about data transformations. The section below explains data provenance in more detail. - -Data provenance provides a historical record of the data and its origins. The provenance of data which is generated by complex transformations such as workflows is of considerable value to scientists. From it, one can ascertain the quality of the data based on its ancestral data and derivations, track back sources of errors, allow automated re-enactment of derivations to update the data, and provide attribution of data sources. Provenance is also essential to the business domain, where it can be used to drill down to the source of data in a data warehouse, track the creation of intellectual property, and provide an audit trail for regulatory purposes. - -The use of data provenance is proposed in distributed systems to trace records through a dataflow, replay the dataflow on a subset of its original inputs, and debug data flows. To do so, one needs to keep track of the set of inputs to each operator, which were used to derive each of its outputs. Although there are several forms of provenance, such as copy-provenance and how-provenance, lineage (the simple type of why-provenance quoted above) suffices for the debugging uses described here. - -Intuitively, for an operator T producing output o, lineage consists of triplets of form {I, T, o}, where I is the set of inputs to T used to derive o. Capturing lineage for each operator T in a dataflow enables users to ask questions such as “Which outputs were produced by an input i on operator T?” and “Which inputs produced output o in operator T?” Backward tracing is useful for debugging, while forward tracing is useful for tracking error propagation. - -An association is a combination of the inputs, outputs and the operation itself. The operation is represented in terms of a black box, also known as the actor. The associations describe the transformations that are applied to the data. The associations are stored in association tables.
Each unique actor is represented by its own association table. An association itself looks like {i, T, o}, where i is the set of inputs to the actor T and o is the set of outputs produced by the actor. Associations are the basic units of data lineage. Individual associations are later combined to construct the entire history of transformations that were applied to the data. - -Big data systems scale horizontally, i.e., they increase capacity by adding new hardware or software entities to the distributed system. The distributed system acts as a single entity at the logical level even though it comprises multiple hardware and software entities. The system should continue to maintain this property after horizontal scaling. An important advantage of horizontal scalability is that it can provide the ability to increase capacity on the fly. The biggest advantage is that horizontal scaling can be done using commodity hardware. - -The horizontal scaling feature of Big Data systems should be taken into account while creating the architecture of the lineage store. This is essential because the lineage store itself should also be able to scale in parallel with the Big Data system. The number of associations and the amount of storage required to store lineage will increase with the size and capacity of the system. The architecture of Big Data systems makes a single lineage store inappropriate and impossible to scale. The immediate solution to this problem is to distribute the lineage store itself. - -The best-case scenario is to use a local lineage store for every machine in the distributed system network. This allows the lineage store to scale horizontally as well. In this design, the lineage of data transformations applied to the data on a particular machine is stored on the local lineage store of that specific machine. The lineage store typically stores association tables. Each actor is represented by its own association table. The rows are the associations themselves and the columns represent inputs and outputs. This design solves two problems. It allows horizontal scaling of the lineage store, and it avoids network latency: if a single centralized lineage store were used, lineage information would have to be carried over the network, causing additional latency. - -The information stored in terms of associations needs to be combined by some means to get the data flow of a particular job. In a distributed system a job is broken down into multiple tasks. One or more instances run a particular task. The results produced on these individual machines are later combined together to finish the job. Tasks running on different machines perform multiple transformations on the data on that machine. All the transformations applied to the data on a machine are stored in the local lineage store of that machine. This information needs to be combined together to get the lineage of the entire job. The lineage of the entire job should help the data scientist understand the data flow of the job, and the data flow can be used to debug the big data pipeline. The data flow is reconstructed in three stages. - -The first stage of the data flow reconstruction is the computation of the association tables. An association table exists for each actor in each local lineage store. The entire association table for an actor can be computed by combining these individual association tables.
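To make associations and tracing queries concrete, here is a minimal Python sketch of one actor's association table. The class and method names (`AssociationTable`, `record`, `backward`, `forward`) and the sample actor and file names are hypothetical illustrations, not the API of any actual lineage system.

```python
class AssociationTable:
    """Associations {i, T, o} for a single actor T: each row pairs a set of
    inputs with one output that the actor derived from them."""

    def __init__(self, actor):
        self.actor = actor
        self.rows = []  # list of (frozenset_of_inputs, output) pairs

    def record(self, inputs, output):
        self.rows.append((frozenset(inputs), output))

    def backward(self, output):
        """Backward tracing: which inputs produced this output on actor T?"""
        return set().union(*(i for i, o in self.rows if o == output))

    def forward(self, an_input):
        """Forward tracing: which outputs were produced by this input on T?"""
        return {o for i, o in self.rows if an_input in i}


# One table per actor, as in a per-machine lineage store.
word_count = AssociationTable("word_count")
word_count.record(["doc1", "doc2"], "counts_part_0")
word_count.record(["doc3"], "counts_part_1")
print(word_count.forward("doc1"))            # {'counts_part_0'}
print(word_count.backward("counts_part_1"))  # {'doc3'}
```

In a distributed deployment, each machine would hold such tables for the actors it ran; combining them across machines is the first reconstruction stage described next.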
Combining the individual tables is generally done using a series of equality joins based on the actors themselves. In a few scenarios the tables might also be joined using inputs as the key. Indexes can also be used to improve the efficiency of a join. The joined tables need to be stored on a single instance or machine to continue processing. There are multiple schemes used to pick a machine where a join will be computed. The easiest is the machine with the minimum CPU load. Space constraints should also be kept in mind while picking the instance where the join will happen. - -The second step in data flow reconstruction is computing an association graph from the lineage information. The graph represents the steps in the data flow. The actors act as vertices and the associations act as edges. Each actor T is linked to its upstream and downstream actors in the data flow. An upstream actor of T is one that produced the input of T, while a downstream actor is one that consumes the output of T. Containment relationships are always considered while creating the links. The graph consists of three types of links or edges. - -The simplest link is an explicitly specified link between two actors. These links are explicitly specified in the code of a machine learning algorithm. When an actor is aware of its exact upstream or downstream actor, it can communicate this information to the lineage API. This information is later used to link these actors during the tracing query. For example, in the MapReduce architecture, each map instance knows the exact record reader instance whose output it consumes. - -Developers can attach data flow archetypes to each logical actor. A data flow archetype explains how the child types of an actor type arrange themselves in a data flow. With the help of this information, one can infer a link between each actor of a source type and a destination type. For example, in the MapReduce architecture, the map actor type is the source for reduce, and vice versa. The system infers this from the data flow archetypes and duly links map instances with reduce instances. However, there may be several MapReduce jobs in the data flow, and linking all map instances with all reduce instances can create false links. To prevent this, such links are restricted to actor instances contained within a common actor instance of a containing (or parent) actor type. Thus, map and reduce instances are only linked to each other if they belong to the same job. - -In distributed systems, sometimes there are implicit links, which are not specified during execution. For example, an implicit link exists between an actor that wrote to a file and another actor that read from it. Such links connect actors which use a common data set for execution. The dataset is the output of the first actor and is the input of the actor following it. - -The final step in the data flow reconstruction is the topological sorting of the association graph. The directed graph created in the previous step is topologically sorted to obtain the order in which the actors have modified the data. This inherent order of the actors defines the data flow of the big data pipeline or task (a small sketch of this step is given below). - -This is the most crucial step in Big Data debugging. The captured lineage is combined and processed to obtain the data flow of the pipeline. The data flow helps the data scientist or developer look deeply into the actors and their transformations. This step allows the data scientist to figure out the part of the algorithm that is generating the unexpected output.
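Here is a sketch of that final step, assuming the explicit, archetype-inferred, and implicit links have already been collected into an adjacency map. It is a plain Kahn's-algorithm implementation; the actor names are hypothetical.

```python
from collections import deque

def topological_sort(vertices, edges):
    """Kahn's algorithm: order actors so that every upstream actor precedes
    its downstream actors. `edges` maps an actor to its downstream actors."""
    indegree = {v: 0 for v in vertices}
    for v in vertices:
        for w in edges.get(v, ()):
            indegree[w] += 1
    queue = deque(v for v in vertices if indegree[v] == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in edges.get(v, ()):
            indegree[w] -= 1
            if indegree[w] == 0:
                queue.append(w)
    return order  # len(order) < len(vertices) would signal a cycle


# Links for one hypothetical MapReduce job:
actors = ["record_reader", "map_1", "map_2", "reduce_1"]
links = {"record_reader": ["map_1", "map_2"],
         "map_1": ["reduce_1"],
         "map_2": ["reduce_1"]}
print(topological_sort(actors, links))
# ['record_reader', 'map_1', 'map_2', 'reduce_1']
```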
A big data pipeline can go wrong in two broad ways. The first is the presence of a suspicious actor in the data-flow. The second is the existence of outliers in the data. - -The first case can be debugged by tracing the data-flow. By using lineage and data-flow information together, a data scientist can figure out how the inputs are converted into outputs. During the process, actors that behave unexpectedly can be caught. Either these actors can be removed from the data flow or they can be augmented by new actors to change the data-flow. The improved data-flow can be replayed to test its validity. Debugging faulty actors includes recursively performing coarse-grain replay on actors in the data-flow, which can be expensive in resources for long dataflows. Another approach is to manually inspect lineage logs to find anomalies, which can be tedious and time-consuming across several stages of a data-flow. Furthermore, these approaches work only when the data scientist can discover bad outputs. To debug analytics without known bad outputs, the data scientist needs to analyze the data-flow for suspicious behavior in general. However, often, a user may not know the expected normal behavior and cannot specify predicates. This section describes a debugging methodology for retrospectively analyzing lineage to identify faulty actors in a multi-stage data-flow. We believe that sudden changes in an actor’s behavior, such as its average selectivity, processing rate or output size, are characteristic of an anomaly. Lineage can reflect such changes in actor behavior over time and across different actor instances. Thus, mining lineage to identify such changes can be useful in debugging faulty actors in a data-flow. - -The second problem, i.e., the existence of outliers, can also be identified by running the data-flow step-wise and looking at the transformed outputs. The data scientist finds a subset of outputs that are not in accordance with the rest of the outputs. The inputs which are causing these bad outputs are the outliers in the data. This problem can be solved by removing the set of outliers from the data and replaying the entire data-flow. It can also be solved by modifying the machine learning algorithm by adding, removing or moving actors in the data-flow. The changes in the data-flow are successful if the replayed data-flow does not produce bad outputs. - -Even though the use of data lineage approaches is a novel way of debugging big data pipelines, the process is not simple. The challenges include scalability of the lineage store, fault tolerance of the lineage store, accurate capture of lineage for black-box operators, and many others. These challenges must be considered carefully, and trade-offs between them need to be evaluated to make a realistic design for data lineage capture. - -DISC systems are primarily batch processing systems designed for high throughput. They execute several jobs per analysis, with several tasks per job. The overall number of operators executing at any time in a cluster can range from hundreds to thousands depending on the cluster size. Lineage capture for these systems must be able to scale to both large volumes of data and numerous operators to avoid being a bottleneck for the DISC analytics. - -Lineage capture systems must also be fault tolerant to avoid rerunning data flows to capture lineage. At the same time, they must also accommodate failures in the DISC system.
To do so, they must be able to identify a failed DISC task and avoid storing both the partial lineage generated by the failed task and the duplicate lineage produced by the restarted task. A lineage system should also be able to gracefully handle multiple instances of local lineage systems going down. This can be achieved by storing replicas of lineage associations on multiple machines. The replica can act as a backup in the event of the real copy being lost. - -Lineage systems for DISC dataflows must be able to capture accurate lineage across black-box operators to enable fine-grain debugging. Current approaches to this include Prober, which seeks to find the minimal set of inputs that can produce a specified output for a black-box operator by replaying the data-flow several times to deduce the minimal set, and dynamic slicing, as used by Zhang et al. to capture lineage for NoSQL operators through binary rewriting to compute dynamic slices. Although producing highly accurate lineage, such techniques can incur significant time overheads for capture or tracing, and it may be preferable to instead trade some accuracy for better performance. Thus, there is a need for a lineage collection system for DISC dataflows that can capture lineage from arbitrary operators with reasonable accuracy, and without significant overheads in capture or tracing. - -Tracing is essential for debugging, during which a user can issue multiple tracing queries. Thus, it is important that tracing has fast turnaround times. The system of Ikeda et al. can perform efficient backward tracing queries for MapReduce dataflows, but is not generic to different DISC systems and does not perform efficient forward queries. Lipstick, a lineage system for Pig, while able to perform both backward and forward tracing, is specific to Pig and SQL operators and can only perform coarse-grain tracing for black-box operators. Thus, there is a need for a lineage system that enables efficient forward and backward tracing for generic DISC systems and dataflows with black-box operators. - -Replaying only specific inputs or portions of a data-flow is crucial for efficient debugging and simulating what-if scenarios. Ikeda et al. present a methodology for lineage-based refresh, which selectively replays updated inputs to recompute affected outputs. This is useful during debugging for re-computing outputs when a bad input has been fixed. However, sometimes a user may want to remove the bad input and replay the lineage of outputs previously affected by the error to produce error-free outputs. We call this exclusive replay. Another use of replay in debugging involves replaying bad inputs for step-wise debugging (called selective replay). Current approaches to using lineage in DISC systems do not address these. Thus, there is a need for a lineage system that can perform both exclusive and selective replays to address different debugging needs. - -One of the primary debugging concerns in DISC systems is identifying faulty operators. In long dataflows with several hundred operators or tasks, manual inspection can be tedious and prohibitive. Even if lineage is used to narrow the subset of operators to examine, the lineage of a single output can still span several operators. There is a need for an inexpensive automated debugging system, which can substantially narrow the set of potentially faulty operators, with reasonable accuracy, to minimize the amount of manual examination required.
diff --git a/wiki/wikipedia/87.txt b/wiki/wikipedia/87.txt deleted file mode 100644 index 58b835909b6bdc3f045514f018882635f6d0c6bd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/87.txt +++ /dev/null @@ -1,22 +0,0 @@ -In mathematics, the Besicovitch inequality is a geometric inequality relating the volume of a set and the distances between certain subsets of its boundary. The inequality was first formulated by Abram Besicovitch. - -Consider the n-dimensional cube $[0,1]^n$ with a Riemannian metric $g$. Let -$$ -d_i= \operatorname{dist}_g(\{x_i=0\}, \{x_i=1\}) -$$ - -denote the distance between opposite faces of the cube. The Besicovitch inequality asserts that -$$ -\prod_i d_i \geq \operatorname{Vol}([0,1]^n,g). -$$ - -The inequality can be generalized in the following way. Given an n-dimensional Riemannian manifold M with connected boundary and a smooth map $f: M \rightarrow [0,1]^n$, such that the restriction of f to the boundary of M is a degree 1 map onto $ \partial [0,1]^n$, define -$$ -d_i= \operatorname{dist}_M(f^{-1}(\{x_i=0\}), f^{-1}(\{x_i=1\})). -$$ - -Then $\prod_i d_i \geq \operatorname{Vol}(M)$. - -The Besicovitch inequality was used to prove systolic inequalities on surfaces. diff --git a/wiki/wikipedia/870.txt b/wiki/wikipedia/870.txt deleted file mode 100644 index 4a6913e7551a9eed09653285b3c95b159ffb6168..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/870.txt +++ /dev/null @@ -1,9 +0,0 @@ -Thomas Snyder (born c. 1980) is an American puzzle creator and world-champion sudoku and logic puzzle solver. He is the first person to win both the World Sudoku Championship (3 times) and the World Puzzle Championship. Snyder writes a puzzle blog as Dr. Sudoku. - -Thomas Snyder grew up in the suburbs of Buffalo, New York. He attended Amherst Central High School before getting chemistry degrees from the California Institute of Technology and Harvard University. - -*World Puzzle Champion 2018 - -*U.S. Sudoku Champion 2007 - -*U.S. Puzzle Champion 2006-2010, 2012, 2017 diff --git a/wiki/wikipedia/871.txt b/wiki/wikipedia/871.txt deleted file mode 100644 index 3b524933170c3aa6c0ae5f613c5dcaf10e97be50..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/871.txt +++ /dev/null @@ -1,42 +0,0 @@ -Rado's theorem is a theorem from the branch of mathematics known as Ramsey theory. It is named for the German mathematician Richard Rado. It was proved in his thesis, Studien zur Kombinatorik. - -Let $A \mathbf{x} = \mathbf{0}$ be a system of linear equations, where $A$ is a matrix with integer entries. This system is said to be $r$-regular if, for every $r$-coloring of the natural numbers 1, 2, 3, ..., the system has a monochromatic solution. A system is regular if it is $r$-regular for all $r \geq 1$. - -Rado's theorem states that a system $A \mathbf{x} = \mathbf{0}$ is regular if and only if the matrix A satisfies the columns condition. Let $c_i$ denote the $i$-th column of $A$. The matrix $A$ satisfies the columns condition provided that there exists a partition $C_1, C_2, \ldots, C_n$ of the column indices such that if $s_i = \sum_{j \in C_i} c_j$, then - -1. $s_1 = 0$, and - -2. for all $i \geq 2$, $s_i$ can be written as a rational linear combination of the $c_j$'s in all the $C_k$ with $k < i$. This means that $s_i$ is in the linear subspace of $\mathbf{Q}^m$ spanned by the set of those $c_j$'s (a brute-force check of this condition is sketched below).
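Since the columns condition is decidable by a finite search, it can be checked directly for small matrices. The following sketch uses sympy for exact rational rank computations; the function name and the enumeration strategy are our own, and the brute force over ordered partitions is only practical for a handful of columns.

```python
from itertools import product
from sympy import Matrix

def columns_condition(A):
    """Return True if the integer matrix A satisfies the columns condition."""
    n = A.cols
    cols = [A.col(j) for j in range(n)]
    zero = Matrix.zeros(A.rows, 1)
    # Enumerate ordered partitions C_1, ..., C_B of the column indices by
    # assigning each column a block label and requiring labels 1..B to occur.
    for labels in product(range(1, n + 1), repeat=n):
        B = max(labels)
        if set(labels) != set(range(1, B + 1)):
            continue
        blocks = [[j for j in range(n) if labels[j] == b]
                  for b in range(1, B + 1)]
        s = [sum((cols[j] for j in blk), zero) for blk in blocks]
        if s[0] != zero:          # condition 1: s_1 = 0
            continue
        ok = True
        prev = []                 # columns in C_1, ..., C_{i-1}
        for i in range(1, B):
            prev += [cols[j] for j in blocks[i - 1]]
            # condition 2: s_i lies in the rational span of earlier columns,
            # tested via an exact rank comparison.
            if Matrix.hstack(*prev).rank() != Matrix.hstack(*(prev + [s[i]])).rank():
                ok = False
                break
        if ok:
            return True
    return False

# Schur's equation x + y = z, i.e. the matrix (1 1 -1), is regular:
print(columns_condition(Matrix([[1, 1, -1]])))  # True
# x + y = 3z is not:
print(columns_condition(Matrix([[1, 1, -3]])))  # False
```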
- -Folkman's theorem, the statement that there exist arbitrarily large sets of integers all of whose nonempty sums are monochromatic, may be seen as a special case of Rado's theorem concerning the regularity of the system of equations -$$ -x_T = \sum_{i\in T}x_{\{i\}}, -$$ - -where $T$ ranges over each nonempty subset of the set $\{1, 2, \ldots, n\}$. - -Other special cases of Rado's theorem are Schur's theorem and Van der Waerden's theorem. To prove the former, apply Rado's theorem to the matrix $(1\ 1\ {-1})$. For Van der Waerden's theorem, with m chosen to be the length of the monochromatic arithmetic progression, one can for example consider the following matrix: -$$ -\left(\begin{matrix} 1&1&-1&0&\cdots&0&0\\ 1&2&0&-1&\cdots&0&0\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 1&m-1&0&0&\cdots&-1&0\\ 1&m&0&0&\cdots&0&-1 \end{matrix}\right) -$$ - -Given a system of linear equations it is a priori unclear how to check computationally that it is regular. Fortunately, Rado's theorem provides a criterion which is testable in finite time. Instead of considering colourings (of infinitely many natural numbers), it must be checked that the given matrix satisfies the columns condition. Since the matrix consists of only finitely many columns, this property can be verified in finite time. - -However, the subset sum problem can be reduced to the problem of computing the required partition $C_1, C_2, \ldots, C_n$ of columns: given an input set S for the subset sum problem, we can write the elements of S in a matrix of shape 1 × |S|. Then the elements of S corresponding to vectors in the partition $C_1$ sum to zero. The subset sum problem is NP-complete. Hence, verifying that a system of linear equations is regular is also an NP-complete problem. diff --git a/wiki/wikipedia/872.txt b/wiki/wikipedia/872.txt deleted file mode 100644 index be2af5b76aacc77457ab620ea5d821181656c46a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/872.txt +++ /dev/null @@ -1,15 +0,0 @@ -Toon-Doku is a 2007 sudoku puzzle video game developed by Dragon's Den Unlimited and published by Majesco Entertainment for the Nintendo DS. Directed by Joseph Sutton, the game was first released in North America and Europe in April 2007, with an Australian release following in October of the same year. - -Toon-Doku was received generally negatively by critics, who criticized the game for having poor controls, as well as its replacement of numbers with symbols. - -Toon-Doku follows the same principles as the logic-based puzzle Sudoku. In Toon-Doku, players have to fill a 9×9 grid with different symbols; however, no line or 3×3 subgrid can feature the same symbol more than once. One of Toon-Doku's main gameplay points is the player's ability to draw custom symbols for use within the game's puzzles. The game also has several unlockable symbols, which can be obtained by completing certain puzzles. - -Toon-Doku features three main single-player modes: the instant mode, where players can play over 250 different sudoku puzzles; the stage mode, which features nine different levels, each with eleven puzzles; and the "Vs. CPU" mode, where the player competes against a CPU. Levels in the stage mode end with a boss fight, where an enemy can hide some of the boxes in the player's grid. Additionally, the game also features a multiplayer mode, where the player can hide some of the boxes in their enemies' grids, similarly to the game's boss fights.
- -Toon-Doku was developed by the American video game development studio Dragon's Den Unlimited. The game was published by Majesco Entertainment. The game was intended to be more suitable for younger players than traditional sudoku, which was attempted by primarily using symbols instead of numbers. Toon-Doku was released on April 10, 2007, in Europe, with a North American release following on April 16. The game was released in Australia on October 12. - -Toon-Doku received "generally unfavorable reviews", according to review aggregator Metacritic. The aggregator calculated a normalized score of 39/100, based on six critic reviews. Jeuxvideo.com's Jihem was more positive towards the use of symbols, while still writing that the idea wasn't original, due to it being used in children's newspapers, as well as in another Nintendo DS sudoku game, Zendoku. - -Several critics voiced a dislike of the game's controls. Eric Bratcher of GamesRadar described the controls and interface as "clunky", in addition to criticizing how hard it is to tell which symbol is which, due to the Nintendo DS' "fuzzy screen". Jihem disliked the game's use of the Nintendo DS stylus, as he felt it was imprecise. - -Critics enjoyed the amount of content included in the game. Jonathan Metts of Nintendo World Report rated the game's lastability 8.5/10, writing that the game's randomly generated puzzles lead to "infinite possibilities". Bratcher listed unlocking symbols and the game's length as two of its best points. Jihem mirrored Bratcher's sentiments, writing that unlocking images makes this part of the game feel more complete than its other aspects. diff --git a/wiki/wikipedia/873.txt b/wiki/wikipedia/873.txt deleted file mode 100644 index 664102d4a0555b95cf5b741b9cdb11b13c34ff5d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/873.txt +++ /dev/null @@ -1,163 +0,0 @@ -In computational number theory, Cipolla's algorithm is a technique for solving a congruence of the form -$$ -x^2\equiv n \pmod{p}, -$$ - -where $x,n \in \mathbf{F}_{p}$, so that n is the square of x, and where $p$ is an odd prime. Here $\mathbf{F}_p$ denotes the finite field with $p$ elements, $\{0,1,\dots,p-1\}$. The algorithm is named after Michele Cipolla, an Italian mathematician who discovered it in 1907. - -Apart from prime moduli, Cipolla's algorithm is also able to take square roots modulo prime powers. - -Inputs: - -* $p$, an odd prime, - -* $n \in \mathbf{F}_p$, which is a square. - -Outputs: - -* $x \in \mathbf{F}_p$, satisfying $ x^2= n . $ - -Step 1 is to find an $a \in \mathbf{F}_p$ such that $a^2 - n$ is not a square. There is no known deterministic algorithm for finding such an $a$; one uses trial and error. Simply pick an $a$; by computing the Legendre symbol $(a^2-n|p)$, one can see whether $a$ satisfies the condition. The chance that a random $a$ will satisfy the condition is $(p-1)/2p$. With $p$ large enough this is about $1/2$. Therefore, the expected number of trials before finding a suitable $a$ is about 2. - -Step 2 is to compute x by computing $x=\left( a + \sqrt{a^2-n} \right)^{(p+1)/2}$ within the field $\mathbf{F}_{p^2} = \mathbf{F}_p(\sqrt{a^2-n})$. This x will be the one satisfying $ x^2 =n .$ - -If $x^2 = n$, then $(-x)^2 = n$ also holds. And since p is odd, $ x \neq -x $. So whenever a solution x is found, there is always a second solution, -x. - -(Note: All elements before step two are considered as elements of $\mathbf{F}_{13}$ and all elements in step two are considered as elements of $\mathbf{F}_{13^2}$.)
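Before the worked example, here is a compact Python sketch of the two steps. It assumes only that p is an odd prime and n a quadratic residue mod p; the helper names are ours, not from the original description.

```python
import random

def cipolla(n, p):
    """Return x with x*x % p == n % p, for an odd prime p and residue n."""
    n %= p
    if n == 0:
        return 0
    # Step 1: find a with a^2 - n a non-residue, by trial and error.
    while True:
        a = random.randrange(p)
        w2 = (a * a - n) % p
        if pow(w2, (p - 1) // 2, p) == p - 1:  # Euler's criterion: non-residue
            break
    # Arithmetic in F_{p^2} = F_p(omega), omega^2 = w2; elements are (x, y) = x + y*omega.
    def mul(u, v):
        return ((u[0] * v[0] + u[1] * v[1] * w2) % p,
                (u[0] * v[1] + u[1] * v[0]) % p)
    # Step 2: compute (a + omega)^((p+1)/2) by square-and-multiply.
    r, base, e = (1, 0), (a, 1), (p + 1) // 2
    while e:
        if e & 1:
            r = mul(r, base)
        base = mul(base, base)
        e >>= 1
    return r[0]  # the omega-component vanishes, as the proof below shows

x = cipolla(10, 13)
print(x, (x * x) % 13)  # x is 6 or 7 (= -6 mod 13); x^2 = 10 (mod 13)
```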
- -Find all x such that $x^2 = 10.$ - -Before applying the algorithm, it must be checked that $10$ is indeed a square in $\mathbf{F}_{13}$. Therefore, the Legendre symbol $(10 | 13)$ has to be equal to 1. This can be computed using Euler's criterion: $(10 | 13) \equiv 10^6 \equiv 1 \pmod{13}.$ This confirms that 10 is a square, and hence the algorithm can be applied. - -* Step 1: Find an a such that $a^2 - n$ is not a square. As stated, this has to be done by trial and error. Choose $a=2$. Then $a^2 - n$ becomes 7. The Legendre symbol $(7 | 13)$ has to be −1. Again this can be computed using Euler's criterion: $7^6 = 343^2 \equiv 5^2 \equiv 25 \equiv -1 \pmod{13}.$ So $a=2$ is a suitable choice for a. - -* Step 2: Compute $x = \left( a + \sqrt{a^2-n} \right)^{(p+1)/2} = \left( 2 + \sqrt{-6}\right)^7$ in $\mathbf{F}_{13}(\sqrt{-6})$: -$$ -\left(2+\sqrt{-6}\right)^2 = 4 + 4\sqrt{-6} - 6 = -2 + 4 \sqrt{-6} -$$ -$$ -\left(2+\sqrt{-6}\right)^4 = \left(-2+4\sqrt{-6}\right)^2 = -1-3\sqrt{-6} -$$ -$$ -\left(2+\sqrt{-6}\right)^6 = \left(-2 + 4\sqrt{-6}\right)\left(-1-3\sqrt{-6}\right) = 9+2\sqrt{-6} -$$ -$$ -\left(2+\sqrt{-6}\right)^7 = \left(9+2\sqrt{-6}\right)\left(2+ \sqrt{-6}\right) = 6 -$$ - -So $x = 6 $ is a solution, as well as $x = -6$. Indeed, $6^2 \equiv 10 \pmod{13}.$ - -The first part of the proof is to verify that $\mathbf{F}_{p^2} = \mathbf{F}_p(\sqrt{a^2-n}) = \{x + y\sqrt{a^2-n} : x,y \in \mathbf{F}_p\}$ is indeed a field. For notational simplicity, $\omega$ is defined as $\sqrt{a^2-n}$. Of course, $a^2-n$ is a quadratic non-residue, so there is no square root in $\mathbf{F}_p$. This $\omega$ can roughly be seen as analogous to the complex number i. - -The field arithmetic is straightforward. Addition is defined as -$$ -\left(x_1 + y_1 \omega \right) + \left(x_2 + y_2 \omega \right) = \left(x_1 + x_2 \right) + \left(y_1 + y_2\right) \omega -$$. - -Multiplication is also defined as usual. Keeping in mind that $\omega^2 = a^2-n$, it becomes -$$ -\left(x_1 + y_1 \omega \right)\left(x_2 + y_2 \omega \right) = x_1 x_2 + x_1 y_2 \omega + y_1 x_2 \omega + y_1 y_2 \omega^2 = \left( x_1 x_2 + y_1 y_2 \left(a^2-n\right)\right) + \left(x_1 y_2 + y_1 x_2 \right) \omega -$$. - -Now the field properties have to be checked. - -The properties of closure under addition and multiplication, associativity, commutativity and distributivity are easily seen. This is because in this case the field $\mathbf{F}_{p^2}$ somewhat resembles the field of complex numbers (with $\omega$ being the analogue of i).
- -The additive identity is $0$, or more formally $0 + 0\omega$: Let $\alpha \in \mathbf{F}_{p^2}$; then -$$ -\alpha + 0 = (x+y\omega) + (0 + 0\omega) = (x + 0) + (y + 0)\omega = x+y\omega = \alpha -$$. - -The multiplicative identity is $1$, or more formally $ 1 + 0\omega$: -$$ -\alpha \cdot 1 = (x+y\omega)(1 + 0\omega) = \left(x\cdot 1 + 0 \cdot y \left(a^2-n\right)\right) + (x\cdot 0 + 1 \cdot y)\omega = x+y\omega = \alpha -$$. - -The only thing left to show that $\mathbf{F}_{p^2}$ is a field is the existence of additive and multiplicative inverses. It is easily seen that the additive inverse of $x+y\omega$ is $-x-y\omega$, which is an element of $\mathbf{F}_{p^2}$, because $-x,-y \in \mathbf{F}_p$. In fact, those are the additive inverses of x and y. To show that every non-zero element $\alpha$ has a multiplicative inverse, write down $\alpha = x_1 + y_1 \omega$ and $\alpha^{-1} = x_2 + y_2 \omega$. In other words, -$$ -(x_1 + y_1 \omega)(x_2 + y_2 \omega) = \left( x_1 x_2 + y_1 y_2 \left(a^2-n\right)\right) + \left(x_1 y_2 + y_1 x_2 \right) \omega = 1 -$$. - -So the two equalities $x_1x_2 + y_1y_2(a^2-n) = 1$ and $x_1y_2 + y_1x_2 = 0$ must hold. Working out the details gives expressions for $x_2$ and $y_2$, namely -$$ -x_2 = -y_1^{-1}x_1\left(y_1\left(a^2-n\right)-x_1^2y_1^{-1}\right)^{-1} -$$, -$$ -y_2 = \left( y_1 \left(a^2-n\right) - x_1^2y_1^{-1}\right)^{-1} -$$. - -The inverse elements which appear in the expressions for $x_2$ and $y_2$ do exist, because the relevant quantities are non-zero elements of $\mathbf{F}_p$: if $y_1\left(a^2-n\right) - x_1^2y_1^{-1}$ were zero, then $a^2-n = \left(x_1 y_1^{-1}\right)^2$ would be a square, a contradiction; and if $y_1 = 0$, then $\alpha = x_1$ and the inverse is simply $x_1^{-1}$. This completes the first part of the proof, showing that $\mathbf{F}_{p^2}$ is a field. - -The second part of the proof is to show that for every element $x+y\omega \in \mathbf{F}_{p^2} : (x+y\omega)^p = x - y\omega$. - -By definition, $\omega^2=a^2-n$ is not a square in $\mathbf{F}_p$. Euler's criterion then says that -$$ -\omega^{p-1} = \left(\omega^2\right)^{\frac{p-1}{2}} = -1 -$$. - -Thus $\omega^p = -\omega$. This, together with Fermat's little theorem (which says that $x^p = x$ for all $x \in \mathbf{F}_{p}$) and the knowledge that in fields of characteristic p the equation $\left(a+b\right)^p = a^p + b^p$ holds, a relationship sometimes called the freshman's dream, shows the desired result: -$$ -(x+y\omega)^p = x^p + y^p \omega^p = x - y\omega -$$. - -The third and last part of the proof is to show that if $x_0=\left(a+\omega \right)^{\frac{p+1}{2}} \in \mathbf{F}_{p^2}$, then $x_0^2=n \in \mathbf{F}_p$.
- -Compute -$$ -x_0^2 = \left(a+\omega \right)^{p+1} = (a+\omega)(a+\omega)^{p}=(a+\omega)(a-\omega)=a^2 - \omega^2 = a^2 - \left(a^2 - n \right) = n -$$. - -Note that this computation took place in $\mathbf{F}_{p^2}$, so this $x_0 \in \mathbf{F}_{p^2}$. But with Lagrange's theorem, stating that a non-zero polynomial of degree n has at most n roots in any field K, and the knowledge that $x^2-n$ has 2 roots in $\mathbf{F}_p$, these roots must be all of the roots in $\mathbf{F}_{p^2}$. It was just shown that $x_0$ and $-x_0$ are roots of $x^2-n$ in $\mathbf{F}_{p^2}$, so it must be that $x_0, -x_0 \in \mathbf{F}_p$. - -After finding a suitable a, the number of operations required for the algorithm is $4m + 2k - 4$ multiplications and $4m-2$ sums, where m is the number of digits in the binary representation of p and k is the number of ones in this representation. To find a by trial and error, the expected number of computations of the Legendre symbol is 2. But one may be lucky with the first try, or one may need more than 2 tries. In the field $\mathbf{F}_{p^2}$, the following two equalities hold: -$$ -(x+y\omega)^2 = \left(x^2 + y^2 \omega^2 \right) + \left(\left(x+y\right)^2-x^2-y^2\right)\omega, -$$ - -where $\omega^2 = a^2-n$ is known in advance. This computation needs 4 multiplications and 4 sums. -$$ -\left(x+y\omega\right)^2\left(a + \omega \right) = \left( ad^2 - b\left(x+d\right)\right) + \left(d^2 - by\right)\omega, -$$ - -where $d=(x+ya)$ and $b=ny$. This operation needs 6 multiplications and 4 sums. - -Assuming that $p \equiv 1 \pmod 4$ (in the case $p \equiv 3 \pmod 4$, the direct computation $x \equiv \pm n^{\frac{p+1}{4}}$ is much faster), the binary expression of $(p+1)/2$ has $m-1$ digits, of which k are ones. So for computing a $(p+1)/2$ power of $\left(a + \omega \right)$, the first formula has to be used $m-k-1$ times and the second $k-1$ times. - -For this, Cipolla's algorithm is better than the Tonelli–Shanks algorithm if and only if $S(S-1) > 8m+20$, with $2^{S}$ being the maximum power of 2 which divides $p-1$. - -According to Dickson's History of the Theory of Numbers, the following formula of Cipolla will find square roots modulo powers of an odd prime: -$$ -2^{-1}q^{t}((k+\sqrt{k^{2}-q})^{s}+(k-\sqrt{k^{2}-q})^{s})\bmod{p^{\lambda}} -$$ - -where $t=(p^{\lambda}-2p^{\lambda-1}+1)/2$ and $s=p^{\lambda-1}(p+1)/2$, - -and where $q=10$ and $k=2$, as in the example above. - -Using that example, we can see that this formula does indeed produce square roots modulo prime powers. - -As -$$ -\sqrt{10}\bmod{ 13^{3}}\equiv 1046 -$$ - -Now solve for $ 2^{-1}q^{t}$ via: -$$ -2^{-1}10^{(13^{3} - 2\cdot 13^{2} + 1)/2} \bmod{13^{3}}\equiv 1086 -$$ - -Now compute $(2+ \sqrt{2^{2}-10})^{13^{2}\cdot 7}\bmod{13^{3}}$ and $(2- \sqrt{2^{2}-10})^{13^{2}\cdot 7}\bmod{13^{3}}$ - -(note that something close to complex modular arithmetic is going on here; the sketch below makes this arithmetic explicit). - -As such: -$$ -(2+\sqrt{2^{2}-10})^{13^{2}\cdot 7}\bmod{13^{3}}\equiv 1540 -$$ and $(2-\sqrt{2^{2}-10})^{13^{2}\cdot 7}\bmod{13^{3}}\equiv 1540$ - -and the final equation is: -$$ -1086 (1540+1540)\bmod{ 13^{3}}\equiv 1046 -$$ which is the answer.
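The following Python sketch checks the prime-power formula above for the same values q = 10, k = 2, p = 13, λ = 3, working in the ring of elements x + yω with ω² = k² − q reduced modulo p^λ; the function and variable names are ours.

```python
def sqrt_mod_prime_power(q, k, p, lam):
    """Square root of q mod p^lam via Cipolla's prime-power formula,
    assuming k^2 - q is a quadratic non-residue mod p."""
    m = p ** lam
    w2 = k * k - q                       # omega^2; arithmetic in Z[omega]/(m)
    def mul(u, v):                       # (a + b*omega)(c + d*omega) mod m
        return ((u[0] * v[0] + u[1] * v[1] * w2) % m,
                (u[0] * v[1] + u[1] * v[0]) % m)
    def power(base, e):                  # square-and-multiply in the ring
        r = (1, 0)
        while e:
            if e & 1:
                r = mul(r, base)
            base = mul(base, base)
            e >>= 1
        return r
    t = (p ** lam - 2 * p ** (lam - 1) + 1) // 2
    s = p ** (lam - 1) * (p + 1) // 2
    plus, minus = power((k, 1), s), power((k, -1), s)
    # The omega-components of the two powers cancel, so only the plain parts matter.
    return pow(2, -1, m) * pow(q, t, m) * (plus[0] + minus[0]) % m

x = sqrt_mod_prime_power(10, 2, 13, 3)
print(x, x * x % 13**3)  # 1046 10, matching the computation above
```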
diff --git a/wiki/wikipedia/874.txt b/wiki/wikipedia/874.txt deleted file mode 100644 index c9b0f073b8da657254d6d425af0b9d812db1c3d9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/874.txt +++ /dev/null @@ -1,29 +0,0 @@ -Charge, parity, and time reversal symmetry is a fundamental symmetry of physical laws under the simultaneous transformations of charge conjugation (C), parity transformation (P), and time reversal (T). CPT is the only combination of C, P, and T that is observed to be an exact symmetry of nature at the fundamental level. The CPT theorem says that CPT symmetry holds for all physical phenomena, or more precisely, that any Lorentz invariant local quantum field theory with a Hermitian Hamiltonian must have CPT symmetry. - -The CPT theorem appeared for the first time, implicitly, in the work of Julian Schwinger in 1951 to prove the connection between spin and statistics. In 1954, Gerhart Lüders and Wolfgang Pauli derived more explicit proofs, so this theorem is sometimes known as the Lüders–Pauli theorem. At about the same time, and independently, this theorem was also proved by John Stewart Bell. These proofs are based on the principle of Lorentz invariance and the principle of locality in the interaction of quantum fields. Subsequently, Res Jost gave a more general proof in the framework of axiomatic quantum field theory. - -Efforts during the late 1950s revealed the violation of P-symmetry by phenomena that involve the weak force, and there were well-known violations of C-symmetry as well. For a short time, the CP-symmetry was believed to be preserved by all physical phenomena, but in the 1960s it too was found to be violated, which implied, by CPT invariance, violations of T-symmetry as well. - -Consider a Lorentz boost in a fixed direction z. This can be interpreted as a rotation of the time axis into the z axis, with an imaginary rotation parameter. If this rotation parameter were real, it would be possible for a 180° rotation to reverse the direction of time and of z. Reversing the direction of one axis is a reflection of space in any number of dimensions. If space has 3 dimensions, it is equivalent to reflecting all the coordinates, because an additional rotation of 180° in the x-y plane could be included. - -This defines a CPT transformation if we adopt the Feynman–Stueckelberg interpretation of antiparticles as the corresponding particles traveling backwards in time. This interpretation requires a slight analytic continuation, which is well-defined only under the following assumptions: - -1. The theory is Lorentz invariant; - -2. The vacuum is Lorentz invariant; - -3. The energy is bounded below. - -When the above hold, quantum theory can be extended to a Euclidean theory, defined by translating all the operators to imaginary time using the Hamiltonian. The commutation relations of the Hamiltonian and the Lorentz generators guarantee that Lorentz invariance implies rotational invariance, so that any state can be rotated by 180 degrees. - -Since a sequence of two CPT reflections is equivalent to a 360-degree rotation, fermions change by a sign under two CPT reflections, while bosons do not. This fact can be used to prove the spin-statistics theorem.
- -The implication of CPT symmetry is that a "mirror-image" of our universe — with all objects having their positions reflected through an arbitrary point (corresponding to a parity inversion), all momenta reversed (corresponding to a time inversion) and with all matter replaced by antimatter (corresponding to a charge inversion) — would evolve under exactly our physical laws. The CPT transformation turns our universe into its "mirror image" and vice versa. CPT symmetry is recognized to be a fundamental property of physical laws. - -In order to preserve this symmetry, every violation of the combined symmetry of two of its components (such as CP) must have a corresponding violation in the third component (such as T); in fact, mathematically, these are the same thing. Thus violations in T symmetry are often referred to as CP violations. - -The CPT theorem can be generalized to take into account pin groups. - -In 2002 Oscar Greenberg published an apparent proof that CPT violation implies the breaking of Lorentz symmetry. If correct, this would imply that any study of CPT violation also includes Lorentz violation. However, Chaichian et al. later disputed the validity of Greenberg's result. Greenberg replied that the model used in their paper meant that their "proposed objection was not relevant to my result". - -The overwhelming majority of experimental searches for Lorentz violation have yielded negative results. A detailed tabulation of these results was given in 2011 by Kostelecky and Russell. diff --git a/wiki/wikipedia/875.txt b/wiki/wikipedia/875.txt deleted file mode 100644 index 8adae8c37edbe020287ba3a373b27edb86709223..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/875.txt +++ /dev/null @@ -1,162 +0,0 @@ -The Pythagorean trigonometric identity, also called simply the Pythagorean identity, is an identity expressing the Pythagorean theorem in terms of trigonometric functions. Along with the sum-of-angles formulae, it is one of the basic relations between the sine and cosine functions. - -The identity is -$$ -\sin^2 \theta + \cos^2 \theta = 1. -$$ - -As usual, $\sin^2 \theta$ means $(\sin\theta)^2$. - -Similar triangles have the property that if we select the same angle in all of them, the ratio of the two sides defining the angle is the same regardless of which similar triangle is selected, regardless of its actual size: the ratios depend upon the three angles, not the lengths of the sides. Thus for either of the similar right triangles in the figure, the ratio of its horizontal side to its hypotenuse is the same, namely cos θ. - -The elementary definitions of the sine and cosine functions in terms of the sides of a right triangle are: -$$ -\sin \theta = \frac{\mathrm{opposite}}{\mathrm{hypotenuse}}= \frac{b}{c} -$$ -$$ -\cos \theta = \frac{\mathrm{adjacent}}{\mathrm{hypotenuse}} = \frac{a}{c} -$$ - -The Pythagorean identity follows by squaring both definitions above and adding; the left-hand side of the identity then becomes -$$ -\frac{\mathrm{opposite}^2 + \mathrm{adjacent}^2}{\mathrm{hypotenuse}^2} -$$ - -which by the Pythagorean theorem is equal to 1. The definition is valid for all angles: define $x = \cos \theta $ and $ y = \sin \theta $ for the unit circle, and thus $x = c\cos \theta $ and $ y = c\sin \theta $ for a circle of radius c, then reflect our triangle in the y axis and set $ a=x $ and $ b=y $. - -Alternatively, the trigonometric symmetry, shift, and periodicity identities may be employed.
By the periodicity identities, we can say that if the formula is true for −π < θ ≤ π, then it is true for all real θ. Next we prove it for the range π/2 < θ ≤ π; to do this we let t = θ − π/2, so that t lies in the range 0 < t ≤ π/2. We can then make use of squared versions of some basic shift identities (squaring conveniently removes the minus signs): -$$ -\sin^2\theta+\cos^2\theta = \sin^2\left(t+\frac{1}{2}\pi\right) + \cos^2\left(t+\frac{1}{2}\pi\right) = \cos^2t+\sin^2t = 1. -$$ - -All that remains is to prove the identity for −π < θ < 0; this can be done by squaring the symmetry identities to get -$$ -\sin^2\theta=\sin^2(-\theta)\text{ and }\cos^2\theta=\cos^2(-\theta). -$$ - -The identities -$$ -1 + \tan^2 \theta = \sec^2 \theta -$$ - -and -$$ -1 + \cot^2 \theta = \csc^2 \theta -$$ - -are also called Pythagorean trigonometric identities; the first follows from the main identity by dividing through by $\cos^2 \theta$, and the second by dividing through by $\sin^2 \theta$. If one leg of a right triangle has length 1, then the tangent of the angle adjacent to that leg is the length of the other leg, and the secant of the angle is the length of the hypotenuse. -$$ - \tan \theta =\frac{b}{a} \ , -$$ - -and: -$$ - \sec \theta = \frac{c}{a} \ . -$$ - -In this way, this trigonometric identity involving the tangent and the secant follows from the Pythagorean theorem. The angle opposite the leg of length 1 (this angle can be labeled φ = π/2 − θ) has cotangent equal to the length of the other leg, and cosecant equal to the length of the hypotenuse. In that way, this trigonometric identity involving the cotangent and the cosecant also follows from the Pythagorean theorem. - -The unit circle centered at the origin in the Euclidean plane is defined by the equation: -$$ -x^2 + y^2 = 1. -$$ - -Given an angle θ, there is a unique point P on the unit circle at an angle θ from the x-axis, and the x- and y-coordinates of P are: -$$ -x = \cos \theta \ \mathrm { and} \ y = \sin \theta \ . -$$ - -Consequently, from the equation for the unit circle: -$$ - \cos^2 \theta + \sin^2 \theta = 1 \ , -$$ - -the Pythagorean identity. - -In the figure, the point P has a negative x-coordinate, and is appropriately given by x = cos θ, which is a negative number: cos θ = −cos(π − θ). Point P has a positive y-coordinate, and sin θ = sin(π − θ) > 0. As θ increases from zero to the full circle θ = 2π, the sine and cosine change signs in the various quadrants to keep x and y with the correct signs. The figure shows how the sign of the sine function varies as the angle changes quadrant. - -Because the x- and y-axes are perpendicular, this Pythagorean identity is equivalent to the Pythagorean theorem for triangles with hypotenuse of length 1 (which is in turn equivalent to the full Pythagorean theorem by applying a similar-triangles argument). See unit circle for a short explanation. - -The trigonometric functions may also be defined using power series, namely (for x an angle measured in radians): - -\begin{align} - -\sin x &= \sum_{n = 0}^\infty \frac{(-1)^n}{(2n + 1)!} x^{2n + 1},\\ - -\cos x &= \sum_{n = 0}^\infty \frac{(-1)^n}{(2n)!} x^{2n}.
- -\end{align} - -Using the formal multiplication law for power series (suitably modified to account for the form of the series here), we obtain - - - -\begin{align} - -\sin^2 x & = \sum_{i = 0}^\infty \sum_{j = 0}^\infty \frac{(-1)^i}{(2i + 1)!} \frac{(-1)^j}{(2j + 1)!} x^{(2i + 1) + (2j + 1)} \\ - -& = \sum_{n = 1}^\infty \left(\sum_{i = 0}^{n - 1} \frac{(-1)^{n - 1}}{(2i + 1)!(2(n - i - 1) + 1)!}\right) x^{2n} \\ - -& = \sum_{n = 1}^\infty \left( \sum_{i = 0}^{n - 1} {2n \choose 2i + 1} \right) \frac{(-1)^{n - 1}}{(2n)!} x^{2n},\\ - -\cos^2 x & = \sum_{i = 0}^\infty \sum_{j = 0}^\infty \frac{(-1)^i}{(2i)!} \frac{(-1)^j}{(2j)!} x^{(2i) + (2j)} \\ - -& = \sum_{n = 0}^\infty \left(\sum_{i = 0}^n \frac{(-1)^n}{(2i)!(2(n - i))!}\right) x^{2n} \\ - -& = \sum_{n = 0}^\infty \left( \sum_{i = 0}^n {2n \choose 2i} \right) \frac{(-1)^n}{(2n)!} x^{2n}. - -\end{align} - - - -In the expression for $\sin^2 x$, n must be at least 1, while in the expression for $\cos^2 x$, the constant term is equal to 1. The remaining terms of their sum are (with common factors removed) - -\sum_{i = 0}^n {2n \choose 2i} - \sum_{i = 0}^{n - 1} {2n \choose 2i + 1} - -= \sum_{j = 0}^{2n} (-1)^j {2n \choose j} - -= (1 - 1)^{2n} - -= 0 - -by the binomial theorem. Consequently, -$$ -\sin^2 x + \cos^2 x = 1 \ , -$$ - -which is the Pythagorean trigonometric identity. - -When the trigonometric functions are defined in this way, the identity in combination with the Pythagorean theorem shows that these power series parameterize the unit circle, which we used in the previous section. This definition constructs the sine and cosine functions in a rigorous fashion and proves that they are differentiable, so that in fact it subsumes the previous two. - -Sine and cosine can be defined as the two solutions to the differential equation: -$$ -y'' + y = 0 -$$ - -satisfying respectively $y(0) = 0,\ y'(0) = 1$ and $y(0) = 1,\ y'(0) = 0$. It follows from the theory of ordinary differential equations that the first solution, sine, has the second, cosine, as its derivative, and it follows from this that the derivative of cosine is the negative of the sine. The identity is equivalent to the assertion that the function -$$ -z = \sin^2 x + \cos^2 x -$$ - -is constant and equal to 1. Differentiating using the chain rule gives: -$$ - \frac{d}{dx} z = 2 \sin x \ \cos x + 2 \cos x \ (-\sin x) = 0 \ , -$$ - -so z is constant. A calculation confirms that z(0) = 1, so z = 1 for all x, and the Pythagorean identity is established. - -A similar proof can be completed using power series as above to establish that the sine has as its derivative the cosine, and the cosine has as its derivative the negative sine. In fact, the definitions by ordinary differential equation and by power series lead to similar derivations of most identities. - -This proof of the identity has no direct connection with Euclid's demonstration of the Pythagorean theorem. - -Euler's formula states that -$$ -e^{i\theta} = \cos\theta + i\sin\theta -$$ - -So, -$$ -\sin^2 \theta + \cos^2 \theta = (\cos\theta + i\sin\theta)(\cos\theta - i\sin\theta) = e^{i\theta}e^{-i\theta} = 1 -$$.
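As a quick machine check of the identity and of the power-series cancellation argued above, the following sympy snippet (our illustration, not part of any of the proofs) simplifies the expression symbolically and expands its truncated series:

```python
import sympy as sp

x = sp.symbols('x')

# Symbolic simplification of sin^2 x + cos^2 x.
print(sp.simplify(sp.sin(x)**2 + sp.cos(x)**2))          # 1

# Truncated power series: every term beyond the constant 1 cancels,
# exactly as the binomial-theorem computation shows.
print(sp.series(sp.sin(x)**2 + sp.cos(x)**2, x, 0, 10))  # 1 + O(x**10)
```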
diff --git a/wiki/wikipedia/876.txt b/wiki/wikipedia/876.txt deleted file mode 100644 index 7ff0259ed0cc83c74295b26947a6ca5d3c351423..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/876.txt +++ /dev/null @@ -1,3 +0,0 @@ -In the field of computer science, a pre-topological order or pre-topological ordering of a directed graph is a linear ordering of its vertices such that if there is a directed path from vertex u to vertex v and v comes before u in the ordering, then there is also a directed path from vertex v to vertex u. - -If the graph is a directed acyclic graph (DAG), topological orderings are pre-topological orderings and vice versa. In other cases, any pre-topological ordering gives a partial order. diff --git a/wiki/wikipedia/877.txt b/wiki/wikipedia/877.txt deleted file mode 100644 index dab2ee5311f56bf1f0308e93027ab88a8304c6b6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/877.txt +++ /dev/null @@ -1,77 +0,0 @@ -In mathematics, Fischer's inequality gives an upper bound for the determinant of a positive-semidefinite matrix whose entries are complex numbers in terms of the determinants of its principal diagonal blocks. - -Suppose A, C are respectively p×p, q×q positive-semidefinite complex matrices and B is a p×q complex matrix. - -Let -$$ -M := \left[\begin{matrix} A & B \\ B^* & C \end{matrix}\right] -$$ - -so that M is a (p+q)×(p+q) matrix. - -Then Fischer's inequality states that -$$ - \det (M) \le \det(A) \det(C). -$$ - -If M is positive-definite, equality is achieved in Fischer's inequality if and only if all the entries of B are 0. Inductively one may conclude that a similar inequality holds for a block decomposition of M with multiple principal diagonal blocks. Considering 1×1 blocks, a corollary is Hadamard's inequality. On the other hand, Fischer's inequality can also be proved by using Hadamard's inequality; see the proof of Theorem 7.8.5 in Horn and Johnson's Matrix Analysis. - -Assume that A and C are positive-definite. Then $A^{-1}$ and $C^{-1}$ are also positive-definite. Let -$$ -D := \left[\begin{matrix} A & 0 \\ 0 & C \end{matrix}\right]. -$$ - -We note that -$$ -D^{-\frac{1}{2}} M D^{-\frac{1}{2} } = \left[\begin{matrix} A^{-\frac{1}{2}} & 0 \\ 0 & C^{-\frac{1}{2}} \end{matrix}\right] \left[\begin{matrix} A & B \\ B^* & C \end{matrix}\right] \left[\begin{matrix} A^{-\frac{1}{2}} & 0 \\ 0 & C^{-\frac{1}{2}} \end{matrix}\right] = \left[\begin{matrix} I_{p} & A^{-\frac{1}{2}} BC^{-\frac{1}{2}} \\ C^{-\frac{1}{2}}B^*A^{-\frac{1}{2}} & I_{q}\end{matrix}\right] -$$ - -Applying the AM-GM inequality to the eigenvalues of $D^{-\frac{1}{2}} M D^{-\frac{1}{2} }$, we see -$$ -\det (D^{-\frac{1}{2}} M D^{-\frac{1}{2}}) \le \left({1 \over p + q} \mathrm{tr} (D^{-\frac{1}{2}} M D^{-\frac{1}{2}}) \right)^{p+q} = 1^{p+q} = 1. -$$ - -By multiplicativity of the determinant, we have - - - -\begin{align} - -\det(D^{-\frac{1}{2}} ) \det(M) \det(D^{-\frac{1}{2}} ) \le 1 \\ - -\Longrightarrow \det(M) \le \det(D) = \det(A) \det(C). - -\end{align} - -In this case, equality holds if and only if M = D, that is, all entries of B are 0. - -For $\varepsilon > 0$, as $A + \varepsilon I_p$ and $C + \varepsilon I_q$ are positive-definite, we have -$$ -\det(M+ \varepsilon I_{p+q}) \le \det(A + \varepsilon I_p) \det(C + \varepsilon I_q). -$$ - -Taking the limit as $\varepsilon \rightarrow 0$ proves the inequality. From the inequality we note that if M is invertible, then both A and C are invertible and we get the desired equality condition.
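A quick numeric sanity check of the inequality on a random positive-semidefinite matrix, written with numpy; the block sizes and the construction M = XX* are our arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 3, 2

# Build a random PSD matrix M = X X^* and read off its diagonal blocks.
X = rng.standard_normal((p + q, p + q)) + 1j * rng.standard_normal((p + q, p + q))
M = X @ X.conj().T
A, C = M[:p, :p], M[p:, p:]

lhs = np.linalg.det(M).real
rhs = (np.linalg.det(A) * np.linalg.det(C)).real
print(lhs <= rhs + 1e-9, lhs, rhs)  # True: det(M) <= det(A) det(C)
```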
- -If M can be partitioned into square blocks $M_{ij}$, then the following inequality by Thompson is valid: -$$ -\det(M) \le \det([\det(M_{ij})]) -$$ - -where $[\det(M_{ij})]$ is the matrix whose (i,j) entry is $\det(M_{ij})$. - -In particular, if the block matrices B and C are also square matrices, then the following inequality by Everett is valid: -$$ -\det(M) \le \det \begin{bmatrix} \det(A) & \det(B) \\ \det(B^*) & \det(C) \end{bmatrix} -$$ - -Thompson's inequality can also be generalized by an inequality in terms of the coefficients of the characteristic polynomial of the block matrices. Expressing the characteristic polynomial of the matrix A as -$$ -p_A (t) = \sum_{k=0}^n t^{n-k} (-1)^k \operatorname{tr}(\Lambda^k A) -$$ - -and supposing that the blocks $M_{ij}$ are m × m matrices, the following inequality by Lin and Zhang is valid: -$$ -\det(M) \le \left(\frac{\det([\operatorname{tr}(\Lambda^r M_{ij})])}{ \binom{m}r} \right)^{\frac{m}{r}},\quad r=1, \ldots, m -$$ - -Note that if r = m, then this inequality is identical to Thompson's inequality. diff --git a/wiki/wikipedia/878.txt b/wiki/wikipedia/878.txt deleted file mode 100644 index efabb09ac92749530cc9fc4d2a8524ff06cb0bb6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/878.txt +++ /dev/null @@ -1,40 +0,0 @@ -In geometry, the Wallace–Bolyai–Gerwien theorem, named after William Wallace, Farkas Bolyai and Paul Gerwien, is a theorem related to dissections of polygons. It answers the question of when one polygon can be formed from another by cutting it into a finite number of pieces and recomposing these by translations and rotations. The Wallace–Bolyai–Gerwien theorem states that this can be done if and only if the two polygons have the same area. - -Wallace had already proven the same result in 1807. - -According to other sources, Bolyai and Gerwien had independently proved the theorem in 1833 and 1835, respectively. - -There are several ways in which this theorem may be formulated. The most common version uses the concept of "equidecomposability" of polygons: two polygons are equidecomposable if they can be split into finitely many triangles that only differ by some isometry (in fact only by a combination of a translation and a rotation). In this case the Wallace–Bolyai–Gerwien theorem states that two polygons are equidecomposable if and only if they have the same area. - -Another formulation is in terms of scissors congruence: two polygons are scissors-congruent if they can be decomposed into finitely many polygons that are pairwise congruent. Scissors-congruence is an equivalence relation. In this case the Wallace–Bolyai–Gerwien theorem states that the equivalence classes of this relation contain precisely those polygons that have the same area. - -The theorem can be understood in a few steps. Firstly, every polygon can be cut into triangles. There are a few methods for this. For convex polygons one can cut off each vertex in turn, while for concave polygons this requires more care. A general approach that works for non-simple polygons as well would be to choose a line not parallel to any of the sides of the polygon and draw a line parallel to this one through each of the vertices of the polygon. This will divide the polygon into triangles and trapezoids, which in turn can be converted into triangles. - -Secondly, each of these triangles can be transformed into a right triangle and subsequently into a rectangle with one side of length 1.
Alternatively, a triangle can be transformed into one such rectangle by first turning it into a parallelogram and then turning this into such a rectangle. By doing this for each triangle, the polygon can be decomposed into a rectangle with unit width and height equal to its area. - -Since this can be done for any two polygons, a "common subdivision" of the rectangle in between proves the theorem. That is, cutting the common rectangle (of size 1 by its area) according to both polygons will yield an intermediate between both polygons. - -First of all, this proof requires an intermediate polygon. In the formulation of the theorem using scissors-congruence, the use of this intermediate can be reformulated by using the fact that scissors-congruences are transitive. Since both the first polygon and the second polygon are scissors-congruent to the intermediate, they are scissors-congruent to one another. - -The proof of this theorem is constructive and doesn't require the axiom of choice, even though some other dissection problems (e.g. Tarski's circle-squaring problem) do need it. In this case, the decomposition and reassembly can actually be carried out "physically": the pieces can, in theory, be cut with scissors from paper and reassembled by hand. - -Nonetheless, the number of pieces required to compose one polygon from another using this procedure generally far exceeds the minimum number of polygons needed. - -Consider two equidecomposable polygons P and Q. The minimum number of pieces required to compose one polygon Q from another polygon P is denoted by σ(P,Q). - -Depending on the polygons, it is possible to estimate upper and lower bounds for σ(P,Q). For instance, Alfred Tarski proved that if P is convex and the diameters of P and Q are respectively given by d(P) and d(Q), then -$$ -\sigma(P,Q) \ge \frac{d(P)}{d(Q)}. -$$ - -If Px is a rectangle of sides a·x and a·(1/x) and Q is a square of side a, then Px and Q are equidecomposable for every x > 0. An upper bound for σ(Px,Q) is given by -$$ -\sigma(P_x,Q) \le 2 + \left\lceil \sqrt{x^2 - 1} \right\rceil, \quad\text{for } x \ge 1. -$$ - -Since σ(Px,Q) = σ(P(1/x),Q), we also have that -$$ -\sigma\left(P_\frac{1}{x},Q\right) \le 2 + \left\lceil \frac{\sqrt{1-x^2}}{x} \right\rceil, \quad\text{for } x \le 1. -$$ - -The analogous statement about polyhedra in three dimensions, known as Hilbert's third problem, is false, as proven by Max Dehn in 1900. The problem has also been considered in some non-Euclidean geometries. In two-dimensional hyperbolic and spherical geometry, the theorem holds. However, the problem is still open for these geometries in three dimensions. diff --git a/wiki/wikipedia/879.txt b/wiki/wikipedia/879.txt deleted file mode 100644 index 3d86444b260437eade86f159ac3150a2f68e98db..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/879.txt +++ /dev/null @@ -1,225 +0,0 @@ -In mathematics, the Euler–Maclaurin formula is a formula for the difference between an integral and a closely related sum. It can be used to approximate integrals by finite sums, or conversely to evaluate finite sums and infinite series using integrals and the machinery of calculus. For example, many asymptotic expansions are derived from the formula, and Faulhaber's formula for the sum of powers is an immediate consequence. - -The formula was discovered independently by Leonhard Euler and Colin Maclaurin around 1735. Euler needed it to compute slowly converging infinite series while Maclaurin used it to calculate integrals.
It was later generalized to Darboux's formula. - -If m and n are natural numbers and f(x) is a real or complex valued continuous function for real numbers x in the interval [m,n], then the integral - -I = \int_m^n f(x)dx - -can be approximated by the sum (or vice versa) - -S = f(m + 1) + \cdots + f(n - 1) + f(n) - -(see rectangle method). The Euler–Maclaurin formula provides expressions for the difference between the sum and the integral in terms of the higher derivatives f^{(k)} evaluated at the endpoints of the interval, that is to say x = m and x = n. - -Explicitly, for p a positive integer and a function f(x) that is p times continuously differentiable on the interval [m,n], we have - -S - I = \sum_{k=1}^p {\frac{B_k}{k!} \left(f^{(k - 1)}(n) - f^{(k - 1)}(m)\right)} + R_p, - -where Bk is the kth Bernoulli number (with B1 = 1/2) and Rp is an error term which depends on n, m, p, and f and is usually small for suitable values of p. - -The formula is often written with the subscript taking only even values, since the odd Bernoulli numbers are zero except for B1. In this case we have - -\sum_{i=m}^n f(i) = - -\int^n_m f(x)dx + \frac{f(n) + f(m)}{2} + - -\sum_{k=1}^{\left\lfloor \frac{p}{2}\right\rfloor} \frac{B_{2k}}{(2k)!} \left(f^{(2k - 1)}(n) - f^{(2k - 1)}(m)\right) + R_p, - - - -or alternatively - -\sum_{i=m+1}^n f(i) = - -\int^n_m f(x)dx + \frac{f(n) - f(m)}{2} + - -\sum_{k=1}^{\left\lfloor \frac{p}{2}\right\rfloor} \frac{B_{2k}}{(2k)!} \left(f^{(2k - 1)}(n) - f^{(2k - 1)}(m)\right) + R_p. - - - -The remainder term arises because the integral is usually not exactly equal to the sum. The formula may be derived by applying repeated integration by parts to successive intervals [r, r + 1] for r = m, m + 1, …, n − 1. The boundary terms in these integrations lead to the main terms of the formula, and the leftover integrals form the remainder term. - -The remainder term has an exact expression in terms of the periodized Bernoulli functions Pk(x). The Bernoulli polynomials may be defined recursively by B0(x) = 1 and, for k ≥ 1, - -\begin{align} - -B_k'(x) &= kB_{k - 1}(x), \\ - -\int_0^1 B_k(x)dx &= 0. - -\end{align} - -The periodized Bernoulli functions are defined as - -P_k(x) = B_k\bigl(x - \lfloor x\rfloor\bigr), - -where ⌊x⌋ denotes the largest integer less than or equal to x, so that x − ⌊x⌋ always lies in the interval [0,1). - -With this notation, the remainder term Rp equals - -R_p = (-1)^{p+1}\int_m^n f^{(p)}(x) \frac{P_p(x)}{p!}dx. - -When k > 0, it can be shown that - -\bigl|B_k(x)\bigr| \le \frac{2 \cdot k!}{(2\pi)^k}\zeta(k), - -where ζ denotes the Riemann zeta function; one approach to prove this inequality is to obtain the Fourier series for the polynomials Bk(x). The bound is achieved for even k when x is zero. The term ζ(k) may be omitted for odd k but the proof in this case is more complex (see Lehmer). Using this inequality, the size of the remainder term can be estimated as - -\left|R_p\right| \leq \frac{2 \zeta(p)}{(2\pi)^p}\int_m^n \left|f^{(p)}(x)\right|dx. - -The Bernoulli numbers from B1 to B7 are 1/2, 1/6, 0, −1/30, 0, 1/42, 0.
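As a quick numerical illustration of the even-index form above, the following minimal Python sketch (our own addition; the function name and the cutoff a = 10 are illustrative choices) estimates the Basel sum discussed shortly, by summing the first terms directly and applying the Euler–Maclaurin corrections to the tail:

```
import math

def zeta2(a=10):
    """Estimate sum_{n>=1} 1/n^2 via Euler-Maclaurin applied to the tail."""
    partial = sum(1.0 / n ** 2 for n in range(1, a + 1))
    # Tail sum_{i=a+1}^{infinity} f(i) with f(x) = 1/x^2:
    # integral_a^inf f dx = 1/a and (f(inf) - f(a))/2 = -1/(2a^2); each
    # correction B_{2k}/(2k)! * (f^{(2k-1)}(inf) - f^{(2k-1)}(a)) simplifies,
    # since f^{(2k-1)}(x) = -(2k)!/x^{2k+1}, to B_{2k} / a^{2k+1}.
    tail = 1.0 / a - 1.0 / (2 * a ** 2)
    for k, b2k in [(1, 1 / 6), (2, -1 / 30), (3, 1 / 42)]:  # B_2, B_4, B_6
        tail += b2k / a ** (2 * k + 1)
    return partial + tail

print(zeta2())           # 1.6449340668...
print(math.pi ** 2 / 6)  # 1.6449340668...
```

With a = 10 and corrections through B6, the estimate already agrees with π2/6 to roughly ten decimal places.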
Therefore the low-order cases of the Euler–Maclaurin formula are: - -\begin{align} - -\sum_{n=a+1}^b f(n) - \int_a^b f(x)dx &= \frac{f(b)-f(a)}{2} + \int_a^b f'(x)P_1(x)dx \\ - -&=\frac{f(b)-f(a)}{2} + \frac{1}{6}\frac{f'(b) - f'(a)}{2!} - \int_a^b f''(x)\frac{P_2(x)}{2!}dx \\ - -&=\frac{f(b)-f(a)}{2} + \frac{1}{6}\frac{f'(b) - f'(a)}{2!} + \int_a^b f'''(x)\frac{P_3(x)}{3!}dx \\ - -&=\frac{f(b)-f(a)}{2} + \frac{1}{6}\frac{f'(b) - f'(a)}{2!} - \frac{1}{30}\frac{f'''(b) - f'''(a)}{4!}-\int_a^b f^{(4)}(x) \frac{P_4(x)}{4!} dx \\ - -&=\frac{f(b)-f(a)}{2} + \frac{1}{6}\frac{f'(b) - f'(a)}{2!} - \frac{1}{30}\frac{f'''(b) - f'''(a)}{4!} + \int_a^b f^{(5)}(x)\frac{P_5(x)}{5!}dx \\ - -&=\frac{f(b)-f(a)}{2} + \frac{1}{6}\frac{f'(b) - f'(a)}{2!} - \frac{1}{30}\frac{f'''(b) - f'''(a)}{4!} + \frac{1}{42}\frac{f^{(5)}(b) - f^{(5)}(a)}{6!} - \int_a^b f^{(6)}(x)\frac{P_6(x)}{6!}dx \\ - -&=\frac{f(b)-f(a)}{2} + \frac{1}{6}\frac{f'(b) - f'(a)}{2!} - \frac{1}{30}\frac{f'''(b) - f'''(a)}{4!} + \frac{1}{42}\frac{f^{(5)}(b) - f^{(5)}(a)}{6!} + \int_a^b f^{(7)}(x)\frac{P_7(x)}{7!}dx. - -\end{align} - -The Basel problem is to determine the sum - - 1 + \frac14 + \frac19 + \frac1{16} + \frac1{25} + \cdots = \sum_{n=1}^\infty \frac{1}{n^2}. - -Euler computed this sum to 20 decimal places with only a few terms of the Euler–Maclaurin formula in 1735. This probably convinced him that the sum equals π2/6, which he proved in the same year. - -If f is a polynomial and p is big enough, then the remainder term vanishes. For instance, if f(x) = x3, we can choose p = 2 to obtain, after simplification, - -\sum_{i=0}^n i^3 = \left(\frac{n(n + 1)}{2}\right)^2. - -The formula provides a means of approximating a finite integral. Let a < b be the endpoints of the interval of integration. Fix N, the number of points to use in the approximation, and denote the corresponding step size by h = (b − a)/(N − 1). Set xi = a + (i − 1)h, so that x1 = a and xN = b. Then: - - - -\begin{align} - -I & = \int_a^b f(x)dx \\ - -&\sim h\left(\frac{f(x_1)}{2} + f(x_2) + \cdots + f(x_{N-1}) + \frac{f(x_N)}{2}\right) + \frac{h^2}{12}\bigl[f'(x_1) - f'(x_N)\bigr] - \frac{h^4}{720}\bigl[f'''(x_1) - f'''(x_N)\bigr] + \cdots - -\end{align} - - - -This may be viewed as an extension of the trapezoid rule by the inclusion of correction terms. Note that this asymptotic expansion is usually not convergent; there is some p, depending upon f and h, such that the terms past order p increase rapidly. Thus, the remainder term generally demands close attention. - -The Bernoulli polynomials Bn(x) and the periodic Bernoulli functions Pn(x) for n = 0, 1, 2, ... were introduced above. - -The first several Bernoulli polynomials are - -\begin{align} - -B_0(x) &= 1, \\ - -B_1(x) &= x - \tfrac{1}{2}, \\ - -B_2(x) &= x^2 - x + \tfrac{1}{6}, \\ - -B_3(x) &= x^3 - \tfrac{3}{2}x^2 + \tfrac{1}{2}x, \\ - -B_4(x) &= x^4 - 2x^3 + x^2 - \tfrac{1}{30}, \\ - -&\vdots - -\end{align} - -The values Bn(0) are the Bernoulli numbers Bn. Notice that for n ≠ 1 we have - -B_n = B_n(0) = B_n(1), - -and for n = 1, - -B_1 = B_1(0) = -B_1(1). - -The functions Pn agree with the Bernoulli polynomials on the interval [0, 1] and are periodic with period 1. Furthermore, except when n = 1, they are also continuous. Thus, - - P_n(0) = P_n(1) = B_n \quad \text{for }n \neq 1. - -Let k be an integer, and consider the integral - - \int_k^{k + 1} f(x)dx = \int_k^{k + 1} udv, - -where - -\begin{align} - -u &= f(x), \\ - -du &= f'(x)dx, \\ - -dv &= P_0(x)dx & \text{since }P_0(x) &= 1, \\ - -v &= P_1(x).
- -\end{align} - -Integrating by parts, we get - -\begin{align} - -\int_k^{k + 1} f(x)dx &= \bigl[uv\bigr]_k^{k + 1} - \int_k^{k + 1} vdu \\ - -&= \bigl[f(x)P_1(x)\bigr]_k^{k + 1} - \int_k^{k+1} f'(x)P_1(x)dx \\ - -&= B_1(1)f(k+1)-B_1(0)f(k) - \int_k^{k+1} f'(x)P_1(x)dx. - -\end{align} - -Using B1(0) = −1/2, B1(1) = 1/2, and summing the above from k = 0 to k = n − 1, we get - -\begin{align} - -\int_0^n f(x) dx &= \int_0^1 f(x)dx + \cdots + \int_{n-1}^n f(x)dx \\ - -&= \frac{f(0)}{2}+ f(1) + \dotsb + f(n-1) + \frac{f(n)}{2} - \int_0^n f'(x) P_1(x)dx. - -\end{align} - -Adding (f(n) − f(0))/2 to both sides and rearranging, we have - - \sum_{k=1}^n f(k) = \int_0^n f(x)dx + \frac{f(n) - f(0)}{2} + \int_0^n f'(x) P_1(x)dx. - -This is the p = 1 case of the summation formula. To continue the induction, we apply integration by parts to the error term: - -\int_k^{k+1} f'(x)P_1(x)dx = \int_k^{k + 1} udv, - -where - -\begin{align} - -u &= f'(x), \\ - -du &= f''(x)dx, \\ - -dv &= P_1(x)dx, \\ - -v &= \tfrac{1}{2}P_2(x). - -\end{align} - -The result of integrating by parts is - -\begin{align} - -\bigl[uv\bigr]_k^{k + 1} - \int_k^{k + 1} vdu &= \left[\frac{f'(x)P_2(x)}{2} \right]_k^{k+1} - \frac{1}{2}\int_k^{k+1} f''(x)P_2(x)dx \\ - -&= \frac{B_2}{2}(f'(k + 1) - f'(k)) - \frac{1}{2}\int_k^{k + 1} f''(x)P_2(x)dx. - -\end{align} - -Summing from k = 0 to k = n - 1 and substituting this for the lower order error term results in the p = 2 case of the formula, - -\sum_{k=1}^n f(k) = \int_0^n f(x)dx + \frac{f(n) - f(0)}{2} + \frac{B_2}{2}\bigl(f'(n) - f'(0)\bigr) - \frac{1}{2}\int_0^n f''(x)P_2(x)dx. - -This process can be iterated. In this way we get a proof of the Euler–Maclaurin summation formula which can be formalized by mathematical induction, in which the induction step relies on integration by parts and on identities for periodic Bernoulli functions. diff --git a/wiki/wikipedia/88.txt b/wiki/wikipedia/88.txt deleted file mode 100644 index e3ead84cc88e752304f24f9002b5f46efabc6783..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/88.txt +++ /dev/null @@ -1,22 +0,0 @@ -Idempotency of entailment is a property of logical systems that states that one may derive the same consequences from many instances of a hypothesis as from just one. This property can be captured by a structural rule called contraction, and in such systems one may say that entailment is idempotent if and only if contraction is an admissible rule. - -Rule of contraction: from - -A,C,C → B - -is derived - -A,C → B. - -Or in sequent calculus notation, -$$ -\frac{\Gamma,C,C\vdash B}{\Gamma,C\vdash B} -$$ - -In linear and affine logic, entailment is not idempotent. - -* No-deleting theorem - -Category:Logical consequence - -Category:Theorems in propositional logic diff --git a/wiki/wikipedia/880.txt b/wiki/wikipedia/880.txt deleted file mode 100644 index eab8bcb7dde71fa4bc1d3e12718cd21a1dc4deb0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/880.txt +++ /dev/null @@ -1,28 +0,0 @@ -In complex analysis, a branch of mathematics, the - -Hadamard three-circle theorem is a result about the behavior of holomorphic functions. - -Let $f(z)$ be a holomorphic function on the annulus -$$ -r_1\leq\left| z\right| \leq r_3.
-$$ - -Let $M(r)$ be the maximum of $|f(z)|$ on the circle $|z|=r.$ Then, $\log M(r)$ is a convex function of the logarithm $\log (r).$ Moreover, if $f(z)$ is not of the form $cz^\lambda$ for some constants $\lambda$ and $c$, then $\log M(r)$ is strictly convex as a function of $\log (r).$ - -The conclusion of the theorem can be restated as - -\log\left(\frac{r_3}{r_1}\right)\log M(r_2)\leq - -\log\left(\frac{r_3}{r_2}\right)\log M(r_1) - -+\log\left(\frac{r_2}{r_1}\right)\log M(r_3) - - - -for any three concentric circles of radii $r_1<r_2<r_3.$ The three circles theorem follows from the fact that for any real number $a$, the function $\operatorname{Re}\log\left(z^a f(z)\right)$ is harmonic between two circles, and therefore takes its maximum value on one of the circles. The theorem follows by choosing the constant a so that this harmonic function has the same maximum value on both circles. - -The theorem can also be deduced directly from Hadamard's three-lines theorem. diff --git a/wiki/wikipedia/881.txt b/wiki/wikipedia/881.txt deleted file mode 100644 index 2fe925845850bbe42e26e07194d514d9bfad026d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/881.txt +++ /dev/null @@ -1,26 +0,0 @@ -In homotopy theory (a branch of mathematics), the Whitehead theorem states that if a continuous mapping f between CW complexes X and Y induces isomorphisms on all homotopy groups, then f is a homotopy equivalence. This result was proved by J. H. C. Whitehead in two landmark papers from 1949, and provides a justification for working with the concept of a CW complex that he introduced there. It is a model result of algebraic topology, in which the behavior of certain algebraic invariants (in this case, homotopy groups) determines a topological property of a mapping. - -In more detail, let X and Y be topological spaces. Given a continuous mapping -$$ -f\colon X \to Y -$$ - -and a point x in X, consider for any n ≥ 1 the induced homomorphism -$$ -f_*\colon \pi_n(X,x) \to \pi_n(Y,f(x)), -$$ - -where πn(X,x) denotes the n-th homotopy group of X with base point x. (For n = 0, π0(X) just means the set of path components of X.) A map f is a weak homotopy equivalence if the function -$$ -f_*\colon \pi_0(X) \to \pi_0(Y) -$$ - -is bijective, and the homomorphisms f* are bijective for all x in X and all n ≥ 1. (For X and Y path-connected, the first condition is automatic, and it suffices to state the second condition for a single point x in X.) The Whitehead theorem states that a weak homotopy equivalence from one CW complex to another is a homotopy equivalence. (That is, the map f: X → Y has a homotopy inverse g: Y → X, which is not at all clear from the assumptions.) This implies the same conclusion for spaces X and Y that are homotopy equivalent to CW complexes. - -Combining this with the Hurewicz theorem yields a useful corollary: a continuous map $f\colon X \to Y$ between simply connected CW complexes that induces an isomorphism on all integral homology groups is a homotopy equivalence. - -A word of caution: it is not enough to assume πn(X) is isomorphic to πn(Y) for each n in order to conclude that X and Y are homotopy equivalent. One really needs a map f : X → Y inducing an isomorphism on homotopy groups. For instance, take X= S2 × RP3 and Y= RP2 × S3. Then X and Y have the same fundamental group, namely the cyclic group Z/2, and the same universal cover, namely S2 × S3; thus, they have isomorphic homotopy groups. On the other hand their homology groups are different (as can be seen from the Künneth formula); thus, X and Y are not homotopy equivalent.
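To make the homology claim concrete, here is the degree-2 Künneth computation (a short verification we add for the reader). Since H1(S2) = 0 and H2(RP3) = 0, the Künneth formula gives
$$
H_2(S^2 \times \mathbb{RP}^3) \cong H_2(S^2) \otimes H_0(\mathbb{RP}^3) \cong \mathbb{Z},
$$
while H2(RP2) = 0, H2(S3) = 0, H1(RP2) ⊗ H1(S3) = 0, and the only torsion term is $\operatorname{Tor}(H_1(\mathbb{RP}^2), H_0(S^3)) = \operatorname{Tor}(\mathbb{Z}/2, \mathbb{Z}) = 0$, so
$$
H_2(\mathbb{RP}^2 \times S^3) = 0.
$$
Thus the two spaces are already distinguished by their second homology groups.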
- -The Whitehead theorem does not hold for general topological spaces or even for all subspaces of Rn. For example, the Warsaw circle, a compact subset of the plane, has all homotopy groups zero, but the map from the Warsaw circle to a single point is not a homotopy equivalence. The study of possible generalizations of Whitehead's theorem to more general spaces is part of the subject of shape theory. - -In any model category, a weak equivalence between cofibrant-fibrant objects is a homotopy equivalence. diff --git a/wiki/wikipedia/882.txt b/wiki/wikipedia/882.txt deleted file mode 100644 index 1ccc81dab7f60b49be42c037065a0fa0ab00d02f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/882.txt +++ /dev/null @@ -1,43 +0,0 @@ -In number theory, Euler's theorem (also known as the Fermat–Euler theorem or Euler's totient theorem) states that, if n and a are coprime positive integers, and $\varphi(n)$ is Euler's totient function, then a raised to the power $\varphi(n)$ is congruent to 1 modulo n; that is -$$ -a^{\varphi (n)} \equiv 1 \pmod{n}. -$$ - -In 1736, Leonhard Euler published a proof of Fermat's little theorem (stated by Fermat without proof), which is the restriction of Euler's theorem to the case where n is a prime number. Subsequently, Euler presented other proofs of the theorem, culminating with his paper of 1763, in which he proved a generalization to the case where n is not prime. - -The converse of Euler's theorem is also true: if the above congruence is true, then $a$ and $n$ must be coprime. - -The theorem is further generalized by Carmichael's theorem. - -The theorem may be used to easily reduce large powers modulo $n$. For example, consider finding the ones place decimal digit of $7^{222}$, i.e. $7^{222} \pmod{10}$. The integers 7 and 10 are coprime, and $\varphi(10) = 4$. So Euler's theorem yields $7^4 \equiv 1 \pmod{10}$, and we get $7^{222} \equiv 7^{4 \times 55 + 2} \equiv (7^4)^{55} \times 7^2 \equiv 1^{55} \times 7^2 \equiv 49 \equiv 9 \pmod{10}$. - -In general, when reducing a power of $a$ modulo $n$ (where $a$ and $n$ are coprime), one needs to work modulo $\varphi(n)$ in the exponent of $a$: - -if $x \equiv y \pmod{\varphi(n)}$, then $a^x \equiv a^y \pmod{n}$. - -Euler's theorem underlies the RSA cryptosystem, which is widely used in Internet communications. In this cryptosystem, Euler's theorem is used with n being a product of two large prime numbers, and the security of the system is based on the difficulty of factoring such an integer. - -1. Euler's theorem can be proven using concepts from the theory of groups: - -The residue classes modulo n that are coprime to n form a group under multiplication (see the article Multiplicative group of integers modulo n for details). The order of that group is φ(n). Lagrange's theorem states that the order of any subgroup of a finite group divides the order of the entire group, in this case φ(n). If a is any number coprime to n then a is in one of these residue classes, and its powers a, a^2, ... , a^k modulo n form a subgroup of the group of residue classes, with a^k ≡ 1 (mod n). Lagrange's theorem says k must divide φ(n), i.e. there is an integer M such that kM = φ(n). This then implies, -$$ -a^{\varphi(n)} = a^{kM} = (a^{k})^M \equiv 1^M =1 \equiv 1 \pmod{n}. -$$ - -2. There is also a direct proof: Let R = {x_1, x_2, ... , x_{φ(n)}} be a reduced residue system (mod n) and let a be any integer coprime to n. The proof hinges on the fundamental fact that multiplication by a permutes the x_i: in other words if ax_j ≡ ax_k (mod n) then j = k.
(This law of cancellation is proved in the article Multiplicative group of integers modulo n.) That is, the sets R and aR = {ax_1, ax_2, ... , ax_{φ(n)}}, considered as sets of congruence classes (mod n), are identical (as sets—they may be listed in different orders), so the product of all the numbers in R is congruent (mod n) to the product of all the numbers in aR: - - - -\prod_{i=1}^{\varphi(n)} x_i \equiv - -\prod_{i=1}^{\varphi(n)} ax_i = - -a^{\varphi(n)}\prod_{i=1}^{\varphi(n)} x_i \pmod{n}, - - and using the cancellation law to cancel each x_i gives Euler's theorem: - - - -a^{\varphi(n)}\equiv 1 \pmod{n}. - - diff --git a/wiki/wikipedia/883.txt b/wiki/wikipedia/883.txt deleted file mode 100644 index d241d690d2d08cba802626b0d44b70f2c4c5e44b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/883.txt +++ /dev/null @@ -1,57 +0,0 @@ -In mathematics, the Gelfand–Naimark theorem states that an arbitrary C*-algebra A is isometrically *-isomorphic to a C*-subalgebra of bounded operators on a Hilbert space. This result was proven by Israel Gelfand and Mark Naimark in 1943 and was a significant point in the development of the theory of C*-algebras since it established the possibility of considering a C*-algebra as an abstract algebraic entity without reference to particular realizations as an operator algebra. - -The Gelfand–Naimark representation π is the direct sum of representations πf - -of A where f ranges over the set of pure states of A and πf is the irreducible representation associated to f by the GNS construction. Thus the Gelfand–Naimark representation acts on - -the Hilbert direct sum of the Hilbert spaces Hf by -$$ - \pi(x) [\bigoplus_{f} H_f] = \bigoplus_{f} \pi_f(x)H_f. -$$ - -π(x) is a bounded linear operator since it is the direct sum of a family of operators, each one having norm ≤ ||x||. - -Theorem. The Gelfand–Naimark representation of a C*-algebra is an isometric *-representation. - -It suffices to show the map π is injective, since for *-morphisms of C*-algebras injective implies isometric. Let x be a non-zero element of A. By the Krein extension theorem for positive linear functionals, there is a state f on A such that f(z) ≥ 0 for all non-negative z in A and f(-x* x) < 0. Consider the GNS representation πf with cyclic vector ξ. Since - - - -\begin{align} - -\|\pi_f(x) \xi\|^2 & = \langle \pi_f(x) \xi \mid \pi_f(x) \xi \rangle - -= \langle \xi \mid \pi_f(x^*) \pi_f(x) \xi \rangle \\[6pt] - -& = \langle \xi \mid \pi_f(x^* x) \xi \rangle= f(x^* x) > 0, - -\end{align} - - - -it follows that πf (x) ≠ 0, so π (x) ≠ 0, so π is injective. - -The construction of the Gelfand–Naimark representation depends only on the GNS construction and therefore it is meaningful for any Banach *-algebra A having an approximate identity. In general (when A is not a C*-algebra) it will not be a faithful representation. The closure of the image of π(A) will be a C*-algebra of operators called the C*-enveloping algebra of A. Equivalently, we can define the - -C*-enveloping algebra as follows: Define a real valued function on A by -$$ - \|x\|_{\operatorname{C}^*} = \sup_f \sqrt{f(x^* x)} -$$ - -as f ranges over pure states of A. This is a semi-norm, which we refer to as the C* semi-norm of A. The set I of elements of A whose semi-norm is 0 forms a two-sided ideal in A closed under involution.
Thus the quotient vector space A / I is an involutive algebra and the norm -$$ - \| \cdot \|_{\operatorname{C}^*} -$$ - -factors through a norm on A / I, which, except for completeness, is a C*-norm on A / I (these are sometimes called pre-C*-norms). Taking the completion of A / I relative to this pre-C*-norm produces a C*-algebra B. - -By the Krein–Milman theorem one can show without too much difficulty that for x an element of the Banach *-algebra A having an approximate identity: -$$ - \sup_{f \in \operatorname{State}(A)} f(x^*x) = \sup_{f \in \operatorname{PureState}(A)} f(x^*x). -$$ - -It follows that an equivalent form for the C* norm on A is to take the above supremum over all states. - -The universal construction is also used to define universal C*-algebras of isometries. - -Remark. The Gelfand representation or Gelfand isomorphism for a commutative C*-algebra with unit $A$ is an isometric *-isomorphism from $A$ to the algebra of continuous complex-valued functions on the space of multiplicative linear functionals of A (which in the commutative case are precisely the pure states) with the weak* topology. diff --git a/wiki/wikipedia/884.txt b/wiki/wikipedia/884.txt deleted file mode 100644 index 1bce4706c0fffdc61e8b34fff4d5e1326b29f066..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/884.txt +++ /dev/null @@ -1,3 +0,0 @@ -In mathematics, the surface subgroup conjecture of Friedhelm Waldhausen states that the fundamental group of every closed, irreducible 3-manifold with infinite fundamental group has a surface subgroup. By "surface subgroup" we mean the fundamental group of a closed surface other than the 2-sphere. This problem is listed as Problem 3.75 in Robion Kirby's problem list. - -Assuming the geometrization conjecture, the only open case was that of closed hyperbolic 3-manifolds. A proof of this case was announced in the summer of 2009 by Jeremy Kahn and Vladimir Markovic and outlined in a talk on August 4, 2009 at the FRG (Focused Research Group) Conference hosted by the University of Utah. A preprint appeared on the arxiv.org server in October 2009. In June 2012, Kahn and Markovic were given the Clay Research Award by the Clay Mathematics Institute at a ceremony in Oxford. diff --git a/wiki/wikipedia/885.txt b/wiki/wikipedia/885.txt deleted file mode 100644 index 44af2400e1c9b31b3065ccd1d370ddda73a7e268..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/885.txt +++ /dev/null @@ -1,5 +0,0 @@ -In mathematics, the Lévy–Steinitz theorem identifies the set of values to which rearrangements of an infinite series of vectors in Rn can converge. It was proved by Paul Lévy in his first published paper when he was 19 years old. In 1913 Ernst Steinitz filled in a gap in Lévy's proof and also proved the result by a different method. - -In an expository article, Peter Rosenthal stated the theorem in the following way. - -The set of all sums of rearrangements of a given series of vectors in a finite-dimensional real Euclidean space is either the empty set or a translate of a subspace (i.e., a set of the form v + M, where v is a given vector and M is a linear subspace). diff --git a/wiki/wikipedia/886.txt b/wiki/wikipedia/886.txt deleted file mode 100644 index 0e6a5f5e75ac366d90f8d89cd3382fc600fb9178..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/886.txt +++ /dev/null @@ -1,30 +0,0 @@ -In number theory, Agrawal's conjecture, due to Manindra Agrawal in 2002, forms the basis for the cyclotomic AKS test.
Agrawal's conjecture states formally: - -Let $n$ and $r$ be two coprime positive integers. If -$$ -(X-1)^n \equiv X^n - 1 \pmod{n, X^r - 1} -$$ - -then either $n$ is prime or $n^2 \equiv 1 \pmod r$. - -If Agrawal's conjecture were true, it would decrease the runtime complexity of the AKS primality test from $\tilde O(\log^{10.5} n)$ to $\tilde O(\log^3 n)$. - -The conjecture was formulated by Rajat Bhattacharjee and Prashant Pandey in their 2001 thesis. It has been computationally verified for $r < 100$ and $n < 10^{10}$, and for $r = 5, n < 10^{11}$. - -However, a heuristic argument by Carl Pomerance and Hendrik W. Lenstra suggests there are infinitely many counterexamples. In particular, the heuristic shows that such counterexamples have asymptotic density greater than $\tfrac{1}{n^{\varepsilon}}$ for any $\varepsilon > 0$. - -Assuming Agrawal's conjecture is false by the above argument, Roman B. Popovych conjectures a modified version may still be true: - -Let $n$ and $r$ be two coprime positive integers. If -$$ -(X-1)^n \equiv X^n - 1 \pmod{n, X^r - 1} -$$ - -and -$$ -(X+2)^n \equiv X^n + 2 \pmod{n, X^r - 1} -$$ - -then either $n$ is prime or $n^2 \equiv 1 \pmod{r}$. - -Both Agrawal's conjecture and Popovych's conjecture were tested by the distributed computing project Primaboinca, which ran from 2010 to 2020, based on BOINC. The project found no counterexample, searching in $10^{10} < n < 10^{17}$. diff --git a/wiki/wikipedia/887.txt b/wiki/wikipedia/887.txt deleted file mode 100644 index 0a5bad8bdf90d64905730659970196425f532b4c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/887.txt +++ /dev/null @@ -1,7 +0,0 @@ -In computer science, the maximum weight matching problem is the problem of finding, in a weighted graph, a matching in which the sum of weights is maximized. - -A special case of it is the assignment problem, in which the input is restricted to be a bipartite graph, and the matching constrained to have cardinality that of the smaller of the two partitions. Another special case is the problem of finding a maximum cardinality matching on an unweighted graph: this corresponds to the case where all edge weights are the same. - -There is an $O(V^{2}E)$ time algorithm to find a maximum matching or a maximum weight matching in a graph that is not bipartite; it is due to Jack Edmonds, is called the paths, trees, and flowers method or simply Edmonds' algorithm, and uses bidirected edges. A generalization of the same technique can also be used to find maximum independent sets in claw-free graphs. - -More elaborate algorithms exist and are reviewed by Duan and Pettie (see Table III). Their work proposes an approximation algorithm for the maximum weight matching problem, which runs in linear time for any fixed error bound. diff --git a/wiki/wikipedia/888.txt b/wiki/wikipedia/888.txt deleted file mode 100644 index c0efb4c6f4b99e755b11c47284286d82498e1eda..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/888.txt +++ /dev/null @@ -1,34 +0,0 @@ -In combinatorial mathematics and extremal set theory, the Sauer–Shelah lemma states that every family of sets with small VC dimension consists of a small number of sets. It is named after Norbert Sauer and Saharon Shelah, who published it independently of each other in 1972. The same result was also published slightly earlier and again independently, by Vladimir Vapnik and Alexey Chervonenkis, after whom the VC dimension is named.
In his paper containing the lemma, Shelah gives credit also to Micha Perles, and for this reason the lemma has also been called the Perles–Sauer–Shelah lemma. - -Buzaglo et al. call this lemma "one of the most fundamental results on VC-dimension"; it has been applied in areas including discrete geometry and graph theory. - -If $ \textstyle \mathcal{F}=\{S_1,S_2,\dots\}$ is a family of sets, and $T$ is another set, then $T$ is said to be shattered by $\mathcal{F}$ if every subset of $T$ (including the empty set and $T$ itself) can be obtained as an intersection $T\cap S_i$ between $T$ and a set in the family. The VC dimension of $\mathcal{F}$ is the largest cardinality of a set shattered by $\mathcal{F}$. - -In terms of these definitions, the Sauer–Shelah lemma states that if $\mathcal{F}$ is a family of sets whose union has $n$ distinct elements such that -$$ - \textstyle |\mathcal{F}| > \sum_{i=0}^{k-1} {\binom{n}{i}} -$$, then $\mathcal{F}$ shatters a set of size $k$. Equivalently, if the VC dimension of $\mathcal{F}$ is $k,$ then $\mathcal{F}$ can consist of at most $\textstyle \sum_{i=0}^{k} {\binom{n}{i}} =O(n^k)$ sets. - -The bound of the lemma is tight: Let the family $\mathcal{F}$ be composed of all subsets of $\{1,2,\dots, n\}$ with size less than $k$. Then the size of $\mathcal{F}$ is exactly $\textstyle \sum_{i=0}^{k-1} {\binom{n}{i}}$ but it does not shatter any set of size $k$. - -A strengthening of the Sauer–Shelah lemma, due to Pajor, states that every finite set family $\mathcal{F}$ shatters at least $|\mathcal{F}|$ sets. This immediately implies the Sauer–Shelah lemma, because only $\sum_{i=0}^{k-1} {\tbinom{n}{i}} $ of the subsets of an $n$-item universe have cardinality less than $k$. Thus, when $|\mathcal{F}|>\sum_{i=0}^{k-1} {\tbinom{n}{i}}$, there are not enough small sets to be shattered, so one of the shattered sets must have cardinality at least $k$. - -For a restricted type of shattered set, called an order-shattered set, the number of shattered sets always equals the cardinality of the set family. - -Pajor's strengthening can be proved by induction on the number of sets in the family. - -Base: every family of only one set shatters the empty set. - -Step: Assume the lemma is true for all families of size less than $|\mathcal{F}|$ and let $\mathcal{F}$ be a family of two or more sets. Let $x$ be an element that belongs to some but not all of the sets in $\mathcal{F}$. Split $\mathcal{F}$ into two subfamilies, of the sets that contain $x$ and the sets that do not contain $x$. - -By the induction assumption, these two subfamilies shatter two collections of sets whose sizes add to at least $|\mathcal{F}|$. - -None of these shattered sets contain $x$, since a set that contains $x$ cannot be shattered by a family in which all sets contain $x$ or all sets do not contain $x$. - -Some of the shattered sets may be shattered by both subfamilies. When a set $S$ is shattered by only one of the two subfamilies, it contributes one unit both to the number of shattered sets of the subfamily and to the number of shattered sets of $\mathcal{F}$. When a set $S$ is shattered by both subfamilies, both $S$ and $S\cup\{x\}$ are shattered by $\mathcal{F}$, so $S$ contributes two units to the number of shattered sets of the subfamilies and of $\mathcal{F}$. Therefore, the number of shattered sets of $\mathcal{F}$ is at least equal to the number shattered by the two subfamilies of $\mathcal{F}$, which is at least $|\mathcal{F}|$. - -A different proof of the Sauer–Shelah lemma in its original form, by Péter Frankl and János Pach, is based on linear algebra and the inclusion–exclusion principle.
Komlós, Pach, and Woeginger similarly showed that there always exist ε-nets of cardinality $O(\tfrac{d}{\epsilon}\log\tfrac{1}{\epsilon})$, and more precisely of cardinality at most $\tfrac{d}{\epsilon}\ln\tfrac{1}{\epsilon}+\tfrac{2d}{\epsilon}\ln\ln\tfrac{1}{\epsilon}+\tfrac{6d}{\epsilon}$. The main idea of the proof of the existence of small ε-nets is to choose a random sample x of cardinality $O(\tfrac{d}{\epsilon}\log\tfrac{1}{\epsilon})$ and a second independent random sample y of cardinality $O(\tfrac{d}{\epsilon}\log^2\tfrac{1}{\epsilon})$, and to bound the probability that x is missed by some large event E by the probability that x is missed and simultaneously the intersection of y with E is larger than its median value. For any particular E, the probability that x is missed while y is larger than its median is very small, - -and the Sauer–Shelah lemma (applied to $x\cup y$) shows that only a small number of distinct events E need to be considered, so by the union bound, with nonzero probability, x is an ε-net. - -In turn, ε-nets and ε-approximations, and the likelihood that a random sample of large enough cardinality has these properties, have important applications in machine learning, in the area of probably approximately correct learning. In computational geometry, they have been applied to range searching, derandomization, and approximation algorithms. - -Kozma and Moran use generalizations of the Sauer–Shelah lemma to prove results in graph theory such as that the number of strong orientations of a given graph is sandwiched between its numbers of connected and 2-edge-connected subgraphs. diff --git a/wiki/wikipedia/889.txt b/wiki/wikipedia/889.txt deleted file mode 100644 index efb8048db5f594d1dac5f2748606d18c9f59a10f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/889.txt +++ /dev/null @@ -1,47 +0,0 @@ -Stephen Wolfram (born 29 August 1959) is a British-American computer scientist, physicist, and businessman. He is known for his work in computer science, mathematics, and theoretical physics. In 2012, he was named a fellow of the American Mathematical Society. - -As a businessman, he is the founder and CEO of the software company Wolfram Research where he worked as chief designer of Mathematica and the Wolfram Alpha answer engine. - -Stephen Wolfram was born in London in 1959 to Hugo and Sybil Wolfram, both German Jewish refugees to the United Kingdom. His maternal grandmother was British psychoanalyst Kate Friedlander. - -Wolfram's father, Hugo Wolfram, was a textile manufacturer and served as managing director of the Lurex Company—makers of the fabric Lurex. Wolfram's mother, Sybil Wolfram, was a Fellow and Tutor in Philosophy at Lady Margaret Hall at University of Oxford from 1964 to 1993. - -Stephen Wolfram is married to a mathematician. They have four children together. - -Wolfram was educated at Eton College, but left prematurely in 1976. As a young child, Wolfram had difficulties learning arithmetic. At the age of 12, he wrote a dictionary of physics. By age 14, he had written three books on particle physics. He entered St. John's College, Oxford, at age 17 and left in 1978 without graduating to attend the California Institute of Technology the following year, where he received a PhD in particle physics on 19 November 1979 at age 20. Wolfram's thesis committee was composed of Richard Feynman, Peter Goldreich, Frank J. Sciulli and Steven Frautschi, and chaired by Richard D. Field. Wolfram's work with Geoffrey C.
Fox on the theory of the strong interaction is still used in experimental particle physics. - -Following his PhD, Wolfram joined the faculty at Caltech and became the youngest recipient of a MacArthur Fellowship in 1981, at age 21. - -In 1983, Wolfram left for the School of Natural Sciences of the Institute for Advanced Study in Princeton. By that time, he was no longer interested in particle physics. Instead, he began pursuing investigations into cellular automata, mainly with computer simulations. He produced a series of papers systematically investigating the class of elementary cellular automata, conceiving the Wolfram code, a naming system for one-dimensional cellular automata, and a classification scheme for the complexity of their behaviour. He conjectured that the Rule 110 cellular automaton might be Turing complete, which was later proved correct. Wolfram's cellular-automata work came to be cited in more than 10,000 papers and helped initiate the field of complex systems. In 1984, he was a participant in the Founding Workshops of the Santa Fe Institute, along with Nobel laureates Murray Gell-Mann, Manfred Eigen, and Philip Warren Anderson, and future laureate Frank Wilczek. In 1986, he founded the Center for Complex Systems Research (CCSR) at the University of Illinois at Urbana–Champaign. In 1987, he founded the journal Complex Systems. While at Caltech, Wolfram had led development of the computer algebra system SMP (Symbolic Manipulation Program); SMP was further developed and marketed commercially by Inference Corp. of Los Angeles during 1983–1988. - -In 1986, Wolfram left the Institute for Advanced Study for the University of Illinois at Urbana–Champaign where he founded their Center for Complex Systems Research and started to develop the computer algebra system Mathematica, which was first released on 23 June 1988, when he left academia. In 1987, he founded Wolfram Research which continues to develop and market the program. In 2002, Wolfram published the book A New Kind of Science, which presents an empirical study of simple computational systems. Additionally, it argues that for fundamental reasons these types of systems, rather than traditional mathematics, are needed to model and understand complexity in nature. Wolfram's conclusion is that the universe is discrete in its nature, and runs on fundamental laws which can be described as simple programs. He predicts that a realization of this within scientific communities will have a revolutionary influence on physics, chemistry, biology, and a majority of scientific areas in general, hence the book's title. - -In April 2020, Wolfram announced the "Wolfram Physics Project" as an effort to reduce and explain all the laws of physics within a paradigm of a hypergraph that is transformed by minimal rewriting rules which obey the Church-Rosser property. The effort is a continuation of the ideas he originally described in A New Kind of Science. Wolfram claims that "From an extremely simple model, we're able to reproduce special relativity, general relativity and the core results of quantum mechanics." Physicists are generally unimpressed with Wolfram's claim, and state that Wolfram's results are non-quantitative and arbitrary. - -In March 2009, Wolfram announced Wolfram Alpha, an answer engine. WolframAlpha later launched in May 2009, and a paid-for version with extra features launched in February 2012. The engine is based on natural language processing and a large library of algorithms. The application programming interface allows other applications to extend and enhance Wolfram Alpha. - -In 2010, Wolfram co-founded Touchpress along with Theodore Gray, Max Whitby, and John Cromie.
The company specialised in creating in-depth premium apps and games covering a wide range of educational subjects designed for children, parents, students, and educators. Since the launch, Touchpress has published more than 100 apps. The company is no longer active. - -In March 2014, at the annual South by Southwest (SXSW) event, Wolfram officially announced the Wolfram Language as a new general multi-paradigm programming language, currently better known as a multi-paradigm computational communication language, though it was previously available through Mathematica and not an entirely new programming language. The documentation for the language was pre-released in October 2013 to coincide with the bundling of Mathematica and the Wolfram Language on every Raspberry Pi computer, with some controversy because of its proprietary nature. While the Wolfram Language has existed for over 30 years as the primary programming language used in Mathematica, it was not officially named until 2014. - -The emphasis on data in the products Wolfram creates carries over into his own life. He has an extensive log of personal analytics, including emails received and sent, keystrokes made, meetings and events attended, phone calls, even physical movement dating back to the 1980s. In the preface of A New Kind of Science, he noted that he recorded over one-hundred million keystrokes and one-hundred mouse miles. He has stated "[personal analytics] can give us a whole new dimension to experiencing our lives." - -Stephen Wolfram was involved as a scientific consultant for the 2016 film Arrival. He and his son Christopher wrote some of the code featured on-screen, such as the code in graphics depicting an analysis of the alien logograms, for which they used the Wolfram Language. - -*Combinators: A Centennial View (2021) - -*A Project to Find the Fundamental Theory of Physics (2020) - -*Adventures of a Computational Explorer (2019) - -*Idea Makers: Personal Perspectives on the Lives & Ideas of Some Notable People (2016) - -*Elementary Introduction to the Wolfram Language (2015) - -*A New Kind of Science (2002) - -*The Mathematica Book (multiple editions) - -*Cellular Automata and Complexity: Collected Papers (1994) - -*Theory and Applications of Cellular Automata (1986) diff --git a/wiki/wikipedia/89.txt b/wiki/wikipedia/89.txt deleted file mode 100644 index fbbcfa879c715c51a6552b5153f2d7e35ef46b3c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/89.txt +++ /dev/null @@ -1,21 +0,0 @@ -In mathematical optimization, the network simplex algorithm is a graph theoretic specialization of the simplex algorithm. The algorithm is usually formulated in terms of a minimum-cost flow problem. The network simplex method works very well in practice, typically 200 to 300 times faster than the simplex method applied to a general linear program of the same dimensions. - -For a long time, the existence of a provably efficient network simplex algorithm was one of the major open problems in complexity theory, even though efficient-in-practice versions were available. In 1995 Orlin provided the first polynomial algorithm with runtime of $O(V^2 E \log(VC))$ where $C$ is the maximum cost of any edge. Later Tarjan improved this to $O(VE \log V \log(VC))$ using dynamic trees in 1997. Strongly polynomial dual network simplex algorithms for the same problem, but with a higher dependence on the numbers of edges and vertices in the graph, have been known for longer.
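In practice one rarely implements the method from scratch; for instance, the NetworkX library ships a network simplex routine for minimum-cost flow. The following minimal Python sketch (the four-node instance is made up for illustration) shows the kind of problem the algorithm is formulated for:

```
import networkx as nx

# A small minimum-cost flow instance: negative demand marks a supply node.
G = nx.DiGraph()
G.add_node("s", demand=-4)   # supplies 4 units
G.add_node("t", demand=4)    # demands 4 units
G.add_node("a", demand=0)
G.add_node("b", demand=0)
G.add_edge("s", "a", capacity=3, weight=1)
G.add_edge("s", "b", capacity=3, weight=4)
G.add_edge("a", "t", capacity=3, weight=2)
G.add_edge("b", "t", capacity=3, weight=1)
G.add_edge("a", "b", capacity=2, weight=1)

flow_cost, flow_dict = nx.network_simplex(G)
print(flow_cost)   # 14 for this instance
print(flow_dict)   # node -> {successor: flow} assignment
```

Here `nx.network_simplex` returns the optimal cost together with a nested dictionary describing the flow on each arc; the method description that follows explains what happens inside such a solver.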
The network simplex method is an adaptation of the bounded variable primal simplex algorithm. The basis is represented as a rooted spanning tree of the underlying network, in which variables are represented by arcs, and the simplex multipliers by node potentials. At each iteration, an entering variable is selected by some pricing strategy, based on the dual multipliers (node potentials), and forms a cycle with the arcs of the tree. The leaving variable is the arc of the cycle with the least augmenting flow. The substitution of entering for leaving arc, and the reconstruction of the tree is called a pivot. When no non-basic arc remains eligible to enter, the optimal solution has been reached. - -The network simplex algorithm can be used to solve many practical problems including: - -* Transshipment problem - -* Hitchcock transportation problem - -* Assignment problem - -* Chains and antichains in partially ordered sets - -* System of distinct representatives - -* Covers and matching in bipartite graphs - -* Caterer problem diff --git a/wiki/wikipedia/890.txt b/wiki/wikipedia/890.txt deleted file mode 100644 index 61c71202916a3eb6a7ac1aee8056904742961f4f..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/890.txt +++ /dev/null @@ -1,29 +0,0 @@ -In operator theory, Atkinson's theorem (named for Frederick Valentine Atkinson) gives a characterization of Fredholm operators. - -Let H be a Hilbert space and L(H) the set of bounded operators on H. The following is the classical definition of a Fredholm operator: an operator T ∈ L(H) is said to be a Fredholm operator if the kernel Ker(T) is finite-dimensional, Ker(T*) is finite-dimensional (where T* denotes the adjoint of T), and the range Ran(T) is closed. - -Atkinson's theorem states: - -An operator T ∈ L(H) is a Fredholm operator if and only if T is invertible modulo compact perturbation, i.e. TS = I + C1 and ST = I + C2 for some bounded operator S and compact operators C1 and C2. - -In other words, an operator T ∈ L(H) is Fredholm, in the classical sense, if and only if its projection in the Calkin algebra is invertible. - -The outline of a proof is as follows. For the ⇒ implication, express H as the orthogonal direct sum - - H = - -\operatorname{Ker}(T)^\perp \oplus \operatorname{Ker} (T). - - - -The restriction T : Ker(T)⊥ → Ran(T) is a bijection, and therefore invertible by the open mapping theorem. Extend this inverse by 0 on Ran(T)⊥ = Ker(T*) to an operator S defined on all of H. Then I − TS is the finite-rank projection onto Ker(T*), and I − ST is the projection onto Ker(T). This proves the only if part of the theorem. - -For the converse, suppose now that ST = I + C2 for some compact operator C2. If x ∈ Ker(T), then STx = x + C2x = 0. So Ker(T) is contained in an eigenspace of C2, which is finite-dimensional (see spectral theory of compact operators). Therefore Ker(T) is also finite-dimensional. The same argument shows that Ker(T*) is also finite-dimensional. - -To prove that Ran(T) is closed, we make use of the approximation property: let F be a finite-rank operator such that ||F − C2|| < r for some r < 1. Then for every x in Ker(F), - -||S||⋅||Tx|| ≥ ||STx|| = ||x + C2x|| = ||x + Fx +C2x − Fx|| ≥ ||x|| − ||C2 − F||⋅||x|| ≥ (1 − r)||x||. - -Thus T is bounded below on Ker(F), which implies that T(Ker(F)) is closed. On the other hand, T(Ker(F)⊥) is finite-dimensional, since Ker(F)⊥ = Ran(F*) is finite-dimensional. Therefore Ran(T) = T(Ker(F)) + T(Ker(F)⊥) is closed, and this proves the theorem.
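As a concrete illustration of the theorem (an example we add here; it is not part of the proof above), consider the unilateral shift T on the Hilbert space ℓ2, defined on the standard orthonormal basis by Te_n = e_{n+1}. Then Ker(T) = 0, Ker(T*) is the one-dimensional span of e_1, and Ran(T) is closed, so T is Fredholm in the classical sense. On the other hand, taking S = T* gives ST = I exactly, while TS = I − P, where P is the rank-one orthogonal projection onto the span of e_1; since P is finite-rank and hence compact, T is invertible modulo compact perturbation, exactly as Atkinson's theorem predicts.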
- -A more complete treatment of Atkinson's Theorem is in the reference by Arveson: it shows that if B is a Banach space, an operator is Fredholm iff it is invertible modulo a finite rank operator (and that the latter is equivalent to being invertible modulo a compact operator, which is significant in view of Enflo's example of a separable, reflexive Banach space with compact operators that are not norm-limits of finite rank operators). For Banach spaces, a Fredholm operator is one with finite dimensional kernel and range of finite codimension (equivalent to the kernel of its adjoint being finite dimensional). Note that the hypothesis that Ran(T) is closed is redundant since a space of finite codimension that is also the range of a bounded operator is always closed (see Arveson reference below); this is a consequence of the open-mapping theorem (and is not true if the space is not the range of a bounded operator, for example the kernel of a discontinuous linear functional). diff --git a/wiki/wikipedia/891.txt b/wiki/wikipedia/891.txt deleted file mode 100644 index 1004e231f39ebdec5ac31fad55ad7afc16fa6f83..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/891.txt +++ /dev/null @@ -1,38 +0,0 @@ -In order theory a better-quasi-ordering or bqo is a quasi-ordering that does not admit a certain type of bad array. Every better-quasi-ordering is a well-quasi-ordering. - -Though well-quasi-ordering is an appealing notion, many important infinitary operations do not preserve well-quasi-orderedness. - -An example due to Richard Rado illustrates this. - -In a 1965 paper Crispin Nash-Williams formulated the stronger notion of better-quasi-ordering in order to prove that the class of trees of height ω is well-quasi-ordered under the topological minor relation. Since then, many quasi-orderings have been proven to be well-quasi-orderings by proving them to be better-quasi-orderings. For instance, Richard Laver established Laver's theorem (previously a conjecture of Roland Fraïssé) by proving that the class of scattered linear order types is better-quasi-ordered. More recently, Carlos Martinez-Ranero has proven that, under the proper forcing axiom, the class of Aronszajn lines is better-quasi-ordered under the embeddability relation. - -It is common in better-quasi-ordering theory to write $ {_*}x $ for the sequence $x$ with the first term omitted. Write $[\omega]^{<\omega}$ for the set of finite, strictly increasing sequences with terms in $\omega$, and define a relation $\triangleleft$ on $[\omega]^{<\omega}$ as follows: $s\triangleleft t$ if there is $u \in [\omega]^{<\omega}$ such that $s$ is a strict initial segment of $u$ and $t={}_*u$. The relation $\triangleleft$ is not transitive. - -A block $B$ is an infinite subset of $[\omega]^{<\omega}$ that contains an initial segment of every - -infinite subset of $\bigcup B$. For a quasi-order $Q$, a $Q$-pattern is a function from some block $B$ into $Q$. A $Q$-pattern $f\colon B\to Q$ is said to be bad if $f(s)\not \le_Q f(t)$ for every pair $s,t\in B$ such that $s\triangleleft t$; otherwise $f$ is good. A quasi-ordering $Q$ is called a better-quasi-ordering if there is no bad $Q$-pattern. - -In order to make this definition easier to work with, Nash-Williams defines a barrier to be a block whose elements are pairwise incomparable under the inclusion relation $\subset$. A $Q$-array is a $Q$-pattern whose domain is a barrier. 
By observing that every block contains a barrier, one sees that $Q$ is a better-quasi-ordering if and only if there is no bad $Q$-array. - -Simpson introduced an alternative definition of better-quasi-ordering in terms of Borel functions $[\omega]^{\omega}\to Q$, where $[\omega]^{\omega}$, the set of infinite subsets of $\omega$, is given the usual product topology. - -Let $Q$ be a quasi-ordering and endow $Q$ with the discrete topology. A $Q$-array is a Borel function $[A]^{\omega}\to Q$ for some infinite subset $A$ of $\omega$. A $Q$-array $f$ is bad if $f(X)\not\le_Q f({_*}X)$ for every $X\in[A]^{\omega}$; $f$ is good otherwise. The quasi-ordering $Q$ is a better-quasi-ordering if there is no bad $Q$-array in this sense. - -Many major results in better-quasi-ordering theory are consequences of the Minimal Bad Array Lemma, which appears in Simpson's paper as follows. See also Laver's paper, where the Minimal Bad Array Lemma was first stated as a result. The technique was present in Nash-Williams' original 1965 paper. - -Suppose $(Q,\le_Q)$ is a quasi-order. A partial ranking $\le'$ of $Q$ is a well-founded partial ordering of $Q$ such that $q\le'r \to q \le_Q r$. For bad $Q$-arrays (in the sense of Simpson) $f\colon [A]^{\omega}\to Q$ and $g\colon [B]^{\omega}\to Q$, define: -$$ -g\le^* f \text{ if } B\subseteq A \text{ and } g(X)\le' f(X) \text{ for every } X\in[B]^{\omega} -$$ -$$ -g <^* f \text{ if } B\subseteq A \text{ and } g(X) <' f(X) \text{ for every } X\in[B]^{\omega} -$$ - -We say a bad $Q$-array $g$ is minimal bad (with respect to the partial ranking $\le'$) if there is no bad $Q$-array $f$ such that $f <^* g$. - -The definitions of $\le^*$ and $<^*$ depend on a partial ranking $\le'$ of $Q$. The relation $<^*$ is not the strict part of the relation $\le^*$. - -Theorem (Minimal Bad Array Lemma). Let $Q$ be a quasi-order equipped with a partial ranking and suppose $f$ is a bad $Q$-array. Then there is a minimal bad $Q$-array $g$ such that $g \le^* f$. diff --git a/wiki/wikipedia/892.txt b/wiki/wikipedia/892.txt deleted file mode 100644 index 95742c1752cc48f81394480efebfb249632e80c0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/892.txt +++ /dev/null @@ -1,27 +0,0 @@ -In the mathematical field of group theory, the Kurosh subgroup theorem describes the algebraic structure of subgroups of free products of groups. The theorem was obtained by Alexander Kurosh, a Russian mathematician, in 1934. Informally, the theorem says that every subgroup of a free product is itself a free product of a free group and of its intersections with the conjugates of the factors of the original free product. - -After the original 1934 proof of Kurosh, there were many subsequent proofs of the Kurosh subgroup theorem, including proofs of Harold W. Kuhn (1952), Saunders Mac Lane (1958) and others. The theorem was also generalized for describing subgroups of amalgamated free products and HNN extensions. Other generalizations include considering subgroups of free pro-finite products and a version of the Kurosh subgroup theorem for topological groups. - -In modern terms, the Kurosh subgroup theorem is a straightforward corollary of the basic structural results of Bass–Serre theory about groups acting on trees. - -Let $G = A*B$ be the free product of groups A and B and let $H \le G$ be a subgroup of G.
Then there exist a family $(A_i)_{i\in I}$ of subgroups $A_i \le A$, a family $(B_j)_{j\in J}$ of subgroups $B_j \le B$, families $g_i, i\in I$ and $f_j, j\in J$ of elements of G, and a subset $X\subseteq G$ such that -$$ -H=F(X)*(*_{i\in I} g_i A_ig_i^{-1})* (*_{j\in J} f_jB_jf_j^{-1}). -$$ - -This means that X freely generates a subgroup of G isomorphic to the free group F(X) with free basis X and that, moreover, giAigi−1, fjBjfj−1 and X generate H in G as a free product of the above form. - -There is a generalization of this to the case of free products with arbitrarily many factors. Its formulation is: - -If H is a subgroup of ∗i∈IGi = G, then -$$ -H=F(X)*(*_{j\in J} g_jH_jg_j^{-1}), -$$ - -where X ⊆ G and J is some index set and gj ∈ G and each Hj is a subgroup of some Gi. - -The Kurosh subgroup theorem easily follows from the basic structural results in Bass–Serre theory, as explained, for example in the book of Cohen (1987): - -Let G = A∗B and consider G as the fundamental group of a graph of groups Y consisting of a single non-loop edge with the vertex groups A and B and with the trivial edge group. Let X be the Bass–Serre universal covering tree for the graph of groups Y. Since H ≤ G also acts on X, consider the quotient graph of groups Z for the action of H on X. The vertex groups of Z are subgroups of G-stabilizers of vertices of X, that is, they are conjugate in G to subgroups of A and B. The edge groups of Z are trivial since the G-stabilizers of edges of X were trivial. By the fundamental theorem of Bass–Serre theory, H is canonically isomorphic to the fundamental group of the graph of groups Z. Since the edge groups of Z are trivial, it follows that H is equal to the free product of the vertex groups of Z and the free group F(X) which is the fundamental group (in the standard topological sense) of the underlying graph Z of Z. This implies the conclusion of the Kurosh subgroup theorem. - -The result extends to the case that G is the amalgamated product along a common subgroup C, under the condition that H meets every conjugate of C only in the identity element. diff --git a/wiki/wikipedia/893.txt b/wiki/wikipedia/893.txt deleted file mode 100644 index 88dc0a8b18ea832c149c836606b5f7e4823d2e03..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/893.txt +++ /dev/null @@ -1,9 +0,0 @@ -The problem of points, also called the problem of division of the stakes, is a classical problem in probability theory. One of the famous problems that motivated the beginnings of modern probability theory in the 17th century, it led Blaise Pascal to the first explicit reasoning about what today is known as an expected value. - -The problem concerns a game of chance with two players who have equal chances of winning each round. The players contribute equally to a prize pot, and agree in advance that the first player to have won a certain number of rounds will collect the entire prize. Now suppose that the game is interrupted by external circumstances before either player has achieved victory. How does one then divide the pot fairly? It is tacitly understood that the division should depend somehow on the number of rounds won by each player, such that a player who is close to winning will get a larger part of the pot. But the problem is not merely one of calculation; it also involves deciding what a "fair" division actually is. - -Luca Pacioli considered such a problem in his 1494 textbook Summa de arithmetica, geometrica, proportioni et proportionalità. 
His method was to divide the stakes in proportion to the number of rounds won by each player, and the number of rounds needed to win did not enter his calculations at all. - -In the mid-16th century Niccolò Tartaglia noticed that Pacioli's method leads to counterintuitive results if the game is interrupted when only one round has been played. In that case, Pacioli's rule would award the entire pot to the winner of that single round, though a one-round lead early in a long game is far from decisive. Tartaglia constructed a method that avoids that particular problem by basing the division on the ratio between the size of the lead and the length of the game. - -Though Pascal's derivation of this result was independent of Fermat's tabular method, it is clear that it also describes exactly the counting of different outcomes of $r+s-1$ additional rounds that Fermat suggested. diff --git a/wiki/wikipedia/894.txt b/wiki/wikipedia/894.txt deleted file mode 100644 index 55b3f10e638e4a91dac79c4d7072810eb3d99839..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/894.txt +++ /dev/null @@ -1,17 +0,0 @@ -In social choice theory, May's theorem states that simple majority voting is the only anonymous, neutral, and positively responsive social choice function between two alternatives. Further, this procedure is resolute when there are an odd number of voters and ties (indecision) are not allowed. Kenneth May first published this theorem in 1952. - -Various modifications have been suggested by others since the original publication. Mark Fey extended the proof to an infinite number of voters. Robert Goodin and Christian List showed that, among methods of aggregating first-preference votes over multiple alternatives, plurality rule uniquely satisfies May's conditions; under approval balloting, a similar statement can be made about approval voting. - -Arrow's theorem in particular does not apply to the case of two candidates, so this possibility result can be seen as a mirror analogue of that theorem. (Note that anonymity is a stronger form of non-dictatorship.) - -Another way of explaining the fact that simple majority voting can successfully deal with at most two alternatives is to cite Nakamura's theorem. The theorem states that the number of alternatives that a rule can deal with successfully is less than the Nakamura number of the rule. The Nakamura number of simple majority voting is 3, except in the case of four voters. Supermajority rules may have greater Nakamura numbers. - -*Condition 1. The group decision function sends each set of preferences to a unique winner. (resolute, unrestricted domain) - -*Condition 2. The group decision function treats each voter identically. (anonymity) - -*Condition 3. The group decision function treats both outcomes the same, in that reversing each set of preferences reverses the group preference. (neutrality) - -*Condition 4. If the group decision was 0 or 1 and a voter raises a vote from −1 to 0 or 1, or from 0 to 1, the group decision is 1. (positive responsiveness) - -Theorem: A group decision function with an odd number of voters meets conditions 1, 2, 3, and 4 if and only if it is the simple majority method. 
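Because the two-alternative setting is finite, May's conditions can be checked exhaustively for small electorates. The following minimal Python sketch (our own illustration; votes are encoded as −1, 0, 1 and the helper names are made up) verifies anonymity, neutrality, and positive responsiveness for simple majority voting with three voters:

```
from itertools import permutations, product

def majority(votes):
    """Simple majority: 1 if more votes for 1 than for -1, -1 if fewer, else 0."""
    s = sum(votes)
    return (s > 0) - (s < 0)

profiles = list(product([-1, 0, 1], repeat=3))  # all profiles of 3 voters

# Anonymity: the outcome is invariant under permuting the voters.
assert all(majority(p) == majority(q)
           for p in profiles for q in permutations(p))

# Neutrality: swapping the two alternatives (negating every vote)
# negates the group decision.
assert all(majority(tuple(-v for v in p)) == -majority(p) for p in profiles)

# Positive responsiveness: if the decision was 0 or 1 and a single voter
# raises a vote (-1 -> 0 or 1, or 0 -> 1), the decision becomes 1.
for p in profiles:
    if majority(p) >= 0:
        for i, v in enumerate(p):
            for raised in range(v + 1, 2):
                q = p[:i] + (raised,) + p[i + 1:]
                assert majority(q) == 1
```

Since majority depends only on the vote total, anonymity holds by construction; the brute-force check simply confirms all four conditions on every profile.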
diff --git a/wiki/wikipedia/895.txt b/wiki/wikipedia/895.txt deleted file mode 100644 index a87abe28e8f94d631ecfbb42e2d9d65289ae7fab..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/895.txt +++ /dev/null @@ -1,55 +0,0 @@ In mathematics, the Riesz–Markov–Kakutani representation theorem relates linear functionals on spaces of continuous functions on a locally compact space to measures in measure theory. The theorem is named for Frigyes Riesz, who introduced it for continuous functions on the unit interval, Andrey Markov, who extended the result to some non-compact spaces, and Shizuo Kakutani, who extended the result to compact Hausdorff spaces.

There are many closely related variations of the theorem, as the linear functionals can be complex, real, or positive, the space they are defined on may be the unit interval or a compact space or a locally compact space, the continuous functions may be vanishing at infinity or have compact support, and the measures can be Baire measures or regular Borel measures or Radon measures or signed measures or complex measures.

The following theorem represents positive linear functionals on Cc(X), the space of continuous compactly supported complex-valued functions on a locally compact Hausdorff space X. The Borel sets in the following statement refer to the σ-algebra generated by the open sets.

A non-negative countably additive Borel measure μ on a locally compact Hausdorff space X is a Radon measure if and only if

* μ(K) < ∞ for every compact K;

* For every Borel set E,
$$
 \mu(E) = \inf \{\mu(U): E \subseteq U, U \mbox{ open}\};
$$

* The relation
$$
 \mu(E) = \sup \{\mu(K): K \subseteq E, K \mbox{ compact}\}
$$

holds whenever E is open or when E is Borel and μ(E) < ∞.

Theorem. Let X be a locally compact Hausdorff space. For any positive linear functional $ \psi $ on Cc(X), there is a unique Radon measure μ on X such that
$$
\forall f \in C_c(X): \qquad \psi(f) = \int_X f(x) d \mu(x).
$$

One approach to measure theory is to start with a Radon measure, defined as a positive linear functional on Cc(X). This is the way adopted by Bourbaki; it does of course assume that X starts life as a topological space, rather than simply as a set. For locally compact spaces an integration theory is then recovered.

Without the condition of regularity the Borel measure need not be unique. For example, let X be the set of ordinals at most equal to the first uncountable ordinal Ω, with the topology generated by "open intervals". The linear functional taking a continuous function to its value at Ω corresponds to the regular Borel measure with a point mass at Ω. However, it also corresponds to the (non-regular) Borel measure that assigns measure 1 to a Borel set $B\subseteq [0,\Omega]$ if there is a closed and unbounded set $C\subseteq [0,\Omega[$ with $C\subseteq B$, and assigns measure 0 to other Borel sets. (In particular the singleton {Ω} gets measure 0, contrary to the point mass measure.)

In its original form by F. Riesz (1909) the theorem states that every continuous linear functional A[f] over the space C([0, 1]) of continuous functions in the interval [0,1] can be represented in the form
$$
A[f] = \int_0^1 f(x)d\alpha(x),
$$

where α(x) is a function of bounded variation on the interval [0, 1], and the integral is a Riemann–Stieltjes integral.
Since there is a one-to-one correspondence between regular Borel measures on the interval and functions of bounded variation (it assigns to each function of bounded variation the corresponding Lebesgue–Stieltjes measure, and integration with respect to that measure agrees with the Riemann–Stieltjes integral for continuous functions), the theorem stated above generalizes the original statement of F. Riesz. (See Gray (1984) for a historical discussion.)

The following theorem, also referred to as the Riesz–Markov theorem, gives a concrete realisation of the topological dual space of C0(X), the set of continuous functions on X which vanish at infinity. The Borel sets in the statement of the theorem also refer to the σ-algebra generated by the open sets.

If μ is a complex-valued countably additive Borel measure, μ is called regular if the non-negative countably additive measure |μ| is regular as defined above.

Theorem. Let X be a locally compact Hausdorff space. For any continuous linear functional ψ on C0(X), there is a unique regular countably additive complex Borel measure μ on X such that
$$
\forall f \in C_0(X): \qquad \psi(f) = \int_X f(x) d \mu(x).
$$

The norm of ψ as a linear functional is the total variation of μ, that is
$$
 \|\psi\| = |\mu|(X).
$$

Finally, ψ is positive if and only if the measure μ is non-negative.

One can deduce this statement about linear functionals from the statement about positive linear functionals by first showing that a bounded linear functional can be written as a finite linear combination of positive ones. diff --git a/wiki/wikipedia/896.txt b/wiki/wikipedia/896.txt deleted file mode 100644 index cdfe61f7af6c4b2ddea5b327f64cb9593e6d1c6c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/896.txt +++ /dev/null @@ -1,121 +0,0 @@ In computability theory, Rice's theorem states that all non-trivial semantic properties of programs are undecidable. A semantic property is one about the program's behavior (for instance, does the program terminate for all inputs), unlike a syntactic property (for instance, does the program contain an if-then-else statement). A property is non-trivial if it is neither true for every partial computable function, nor false for every partial computable function.

Rice's theorem can also be put in terms of functions: for any non-trivial property of partial functions, no general and effective method can decide whether an algorithm computes a partial function with that property. Here, a property of partial functions is called trivial if it holds for all partial computable functions or for none, and an effective decision method is called general if it decides correctly for every algorithm. The theorem is named after Henry Gordon Rice, who proved it in his doctoral dissertation of 1951 at Syracuse University.

Let p be a property of a formal language that is nontrivial, meaning

# there exists a recursively enumerable language having the property p,

# there exists a recursively enumerable language not having the property p,

(that is, p is neither uniformly true nor uniformly false for all recursively enumerable languages).

Then it is undecidable to determine for a given Turing machine M whether the language recognized by it - L(M) - has the property p.

In practice, this means that there is no machine that can always decide whether the language of a given Turing machine has a particular nontrivial property. Special cases include e.g.
the undecidability of whether the language recognized by a Turing machine could be recognized by a nontrivial simpler machine, such as a finite automaton (meaning, it is undecidable whether the language of a Turing machine is regular).

It is important to note that Rice's theorem does not concern the properties of machines or programs; it concerns properties of functions and languages. For example, whether a machine runs for more than 100 steps on a particular input is a decidable property, even though it is non-trivial. Two different machines recognizing exactly the same language might require a different number of steps to recognize the same input string. Similarly, whether a machine has more than 5 states is a decidable property of the machine, as the number of states can simply be counted. For properties of this kind, which concern a Turing machine itself but not the language it recognizes, Rice's theorem does not apply.

Using Rogers' characterization of acceptable programming systems, Rice's theorem may essentially be generalized from Turing machines to most computer programming languages: there exists no automatic method that decides with generality non-trivial questions on the behavior of computer programs.

As an example, consider the following variant of the halting problem. Let P be the following property of partial functions F of one argument: P(F) means that F is defined for the argument '1'. It is obviously non-trivial, since there are partial functions that are defined at 1, and others that are undefined at 1. The 1-halting problem is the problem of deciding of any algorithm whether it defines a function with this property, i.e., whether the algorithm halts on input 1. By Rice's theorem, the 1-halting problem is undecidable. Similarly the question of whether a Turing machine T terminates on an initially empty tape (rather than with an initial word w given as second argument in addition to a description of T, as in the full halting problem) is still undecidable.

Let $\mathbb N$ denote the natural numbers, and let $\mathbf{P}^{(1)}$ denote the class of unary (partial) computable functions. Let $\phi\colon \mathbb{N} \to \mathbf{P}^{(1)}$ be an admissible numbering of the computable functions. Denote by $\phi_e:=\phi(e)$ the eth (partial) computable function.

We identify each property that a computable function may have with the subset of $\mathbf{P}^{(1)}$ consisting of the functions with that property. Thus, given a set $F \subseteq \mathbf{P}^{(1)}$, a computable function $\phi_e$ has property $F$ if and only if $\phi_e \in F$. For each property $F \subseteq \mathbf{P}^{(1)}$ there is an associated membership decision problem $D_F$ of determining, given e, whether $\phi_e \in F$.

Rice's theorem states that the decision problem $D_F$ is decidable (also called recursive or computable) if and only if $F = \varnothing$ or $F = \mathbf{P}^{(1)}$.

According to Rice's theorem, if there is at least one partial computable function in a particular class C of partial computable functions and another partial computable function not in C, then the problem of deciding whether a particular program computes a function in C is undecidable. For example, Rice's theorem shows that each of the following sets of partial computable functions is undecidable (that is, the set is not recursive, or not computable):

* The class of partial computable functions that return 0 for every input, and its complement.
* The class of partial computable functions that return 0 for at least one input, and its complement.

* The class of partial computable functions that are constant, and its complement.

* The class of partial computable functions that are identical to a given partial computable function, and its complement.

* The class of partial computable functions that diverge (i.e., are undefined) for some input, and its complement.

* The class of indices for computable functions that are total.

* The class of indices for recursively enumerable sets that are cofinite.

* The class of indices for recursively enumerable sets that are recursive.

A corollary to Kleene's recursion theorem states that for every Gödel numbering $\phi\colon \mathbb{N} \to \mathbf{P}^{(1)}$ of the computable functions and every computable function $Q(x,y)$, there is an index $e$ such that $\phi_e(y)$ returns $Q(e,y)$. (In the following, we say that $f(x)$ "returns" $g(x)$ if either $f(x)=g(x)$, or both $f(x)$ and $g(x)$ are undefined.) Intuitively, $\phi_e$ is a quine, a function that returns its own source code (Gödel number), except that rather than returning it directly, $\phi_e$ passes its Gödel number to $Q$ and returns the result.

Assume for contradiction that $F$ is a set of computable functions such that $\emptyset \neq F \neq \mathbf{P}^{(1)}$. Then there are computable functions $f \in F$ and $g \notin F$. Suppose that the set of indices $x$ such that $\phi_x \in F$ is decidable; then, there exists a function $Q(x,y)$ that returns $g(y)$ if $\phi_x \in F$, and $f(y)$ otherwise. By the corollary to the recursion theorem, there is an index $e$ such that $\phi_e(y)$ returns $Q(e,y)$. But then, if $\phi_e \in F$, then $\phi_e$ is the same function as $g$, and therefore $\phi_e \notin F$; and if $\phi_e \notin F$, then $\phi_e$ is $f$, and therefore $\phi_e \in F$. In both cases, we have a contradiction.

Suppose, for concreteness, that we have an algorithm for examining a program p and determining infallibly whether p is an implementation of the squaring function, which takes an integer d and returns $d^2$. The proof works just as well if we have an algorithm for deciding any other nontrivial property of program behavior (i.e. a semantic and non-trivial property), and is given in general below.

The claim is that we can convert our algorithm for identifying squaring programs into one that identifies functions that halt. We will describe an algorithm that takes inputs a and i and determines whether program a halts when given input i.

The algorithm for deciding this is conceptually simple: it constructs (the description of) a new program t taking an argument n, which (1) first executes program a on input i (both a and i being hard-coded into the definition of t), and (2) then returns the square of n. If a(i) runs forever, then t never gets to step (2), regardless of n. Then clearly, t is a function for computing squares if and only if step (1) terminates. Since we've assumed that we can infallibly identify programs for computing squares, we can determine whether t, which depends on a and i, is such a program, and we can do so for every a and i; thus we have obtained a program that decides whether program a halts on input i.
Note that our halting-decision algorithm never executes t, but only passes its description to the squaring-identification program, which by assumption always terminates; since the construction of the description of t can also be done in a way that always terminates, the halting-decision cannot fail to halt either.

    halts(a, i) {
        define t(n) {
            a(i)          // run a on input i; if a(i) never halts, neither does t
            return n * n  // if a(i) halts, t computes the squaring function
        }
        return is_a_squaring_function(t)
    }

This method doesn't depend specifically on being able to recognize functions that compute squares; as long as some program can do what we're trying to recognize, we can add a call to a to obtain our t. We could have had a method for recognizing programs for computing square roots, or programs for computing the monthly payroll, or programs that halt when given the input "Abraxas"; in each case, we would be able to solve the halting problem similarly.

For the formal proof, algorithms are presumed to define partial functions over strings and are themselves represented by strings. The partial function computed by the algorithm represented by a string a is denoted Fa. This proof proceeds by reductio ad absurdum: we assume that there is a non-trivial property that is decided by an algorithm, and then show that it follows that we can decide the halting problem, which is not possible, and therefore a contradiction.

Let us now assume that P(a) is an algorithm that decides some non-trivial property of Fa. Without loss of generality we may assume that P(no-halt) = "no", with no-halt being the representation of an algorithm that never halts. If this is not true, then this holds for the negation of the property. Since P decides a non-trivial property, it follows that there is a string b that represents an algorithm and P(b) = "yes". We can then define an algorithm H(a, i) as follows:

1. construct a string t that represents an algorithm T(j) such that

* T first simulates the computation of Fa(i),

* then T simulates the computation of Fb(j) and returns its result.

2. return P(t).

We can now show that H decides the halting problem:

* Assume that the algorithm represented by a halts on input i. In this case Ft = Fb and, because P(b) = "yes" and the output of P(x) depends only on Fx, it follows that P(t) = "yes" and, therefore, H(a, i) = "yes".

* Assume that the algorithm represented by a does not halt on input i. In this case Ft = Fno-halt, i.e., the partial function that is never defined. Since P(no-halt) = "no" and the output of P(x) depends only on Fx, it follows that P(t) = "no" and, therefore, H(a, i) = "no".

Since the halting problem is known to be undecidable, this is a contradiction and the assumption that there is an algorithm P(a) that decides a non-trivial property for the function represented by a must be false.

Rice's theorem can be succinctly stated in terms of index sets:
Let $\mathcal{C}$ be a class of partial recursive functions with index set $C$. Then $C$ is recursive if and only if $C = \varnothing$ or $C = \mathbb{N}$.
Here $\mathbb{N}$ is the set of natural numbers, including zero.

One can regard Rice's theorem as asserting the impossibility of effectively deciding for any recursively enumerable set whether it has a certain nontrivial property. In this section, we give an analogue of Rice's theorem for recursive sets, instead of recursively enumerable sets. Roughly speaking, the analogue says that if one can effectively determine for every recursive set whether it has a certain property, then finitely many integers determine whether a recursive set has the property. This result is analogous to the original theorem of Rice, because both results assert that a property is "decidable" only if one can determine whether a set has that property by checking, for at most finitely many $i$ (for no $i$, in the case of the original theorem), whether $i$ belongs to the set.

Let $W$ be a class (called a simple game and thought of as a property) of recursive sets. If $S$ is a recursive set, then for some $e$, the computable function $\phi_e$ is the characteristic function of $S$. We call $e$ a characteristic index for $S$. (There are infinitely many such $e$.) Let us say the class $W$ is computable if there is an algorithm (computable function) that decides, for any nonnegative integer $e$ (not necessarily a characteristic index):

* if $e$ is a characteristic index for a recursive set belonging to $W$, then the algorithm gives "yes";

* if $e$ is a characteristic index for a recursive set not belonging to $W$, then the algorithm gives "no".

A set $S\subseteq \mathbb{N}$ extends a string $\tau$ of 0's and 1's if for every $k< |\tau|$ (the length of $\tau$), the $k$th element of $\tau$ is 1 if $k\in S$, and is 0 otherwise. For example, $S=\{1,3,4,7, \ldots \}$ extends the string $01011001$. A string $\tau$ is winning determining if every recursive set extending $\tau$ belongs to $W$. A string $\tau$ is losing determining if no recursive set extending $\tau$ belongs to $W$.

We can now state the following analogue of Rice's theorem (Kreisel, Lacombe, and Shoenfield, 1959; Kumabe and Mihara, 2008):

A class $W$ of recursive sets is computable if and only if there are a recursively enumerable set $T_0$ of losing determining strings and a recursively enumerable set $T_1$ of winning determining strings such that every recursive set extends a string in $T_0\cup T_1$.

This result has been applied to foundational problems in computational social choice (more broadly, algorithmic game theory). For instance, Kumabe and Mihara (2008, 2008) apply this result to an investigation of the Nakamura numbers for simple games in cooperative game theory and social choice theory. diff --git a/wiki/wikipedia/897.txt b/wiki/wikipedia/897.txt deleted file mode 100644 index 8278bda9e039b59a22f8ffd31c1e85ac9eedd56c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/897.txt +++ /dev/null @@ -1,57 +0,0 @@ In mathematics, Wiener's lemma is a well-known identity which relates the asymptotic behaviour of the Fourier coefficients of a Borel measure on the circle to its atomic part. This result admits an analogous statement for measures on the real line. It was first discovered by Norbert Wiener.

* Given a real or complex Borel measure $\mu$ on the unit circle $\mathbb T$, let $\mu_a=\sum_j c_j\delta_{z_j}$ be its atomic part (meaning that $\mu(\{z_j\})=c_j\neq 0$ and $\mu(\{z\})=0$ for $z\not\in\{z_j\}$).
Then
$$
\lim_{N\to\infty}\frac{1}{2N+1}\sum_{n=-N}^N|\widehat\mu(n)|^2=\sum_j|c_j|^2,
$$

where $\widehat{\mu}(n)=\int_{\mathbb T}z^{-n}d\mu(z)$ is the $n$-th Fourier coefficient of $\mu$.

* Similarly, given a real or complex Borel measure $\mu$ on the real line $\mathbb R$, and writing $\mu_a=\sum_j c_j\delta_{x_j}$ for its atomic part, we have
$$
\lim_{R\to\infty}\frac{1}{2R}\int_{-R}^R|\widehat\mu(\xi)|^2d\xi=\sum_j|c_j|^2,
$$

where $\widehat{\mu}(\xi)=\int_{\mathbb R}e^{-2\pi i\xi x}d\mu(x)$ is the Fourier transform of $\mu$.

* First of all, we observe that if $\nu$ is a complex measure on the circle then
$$
\frac{1}{2N+1}\sum_{n=-N}^N\widehat{\nu}(n)=\int_{\mathbb T}f_N(z)d\nu(z),
$$

with $f_N(z)=\frac{1}{2N+1}\sum_{n=-N}^N z^{-n}$. The function $f_N$ is bounded by $1$ in absolute value and has $f_N(1)=1$, while $f_N(z)=\frac{z^{N+1}-z^{-N}}{(2N+1)(z-1)}$ for $z\in\mathbb{T}\setminus\{1\}$, which converges to $0$ as $N\to\infty$. Hence, by the dominated convergence theorem,
$$
\lim_{N\to\infty}\frac{1}{2N+1}\sum_{n=-N}^N\widehat{\nu}(n)=\int_{\mathbb T}1_{\{1\}}(z)d\nu(z)=\nu(\{1\}).
$$

We now take $\mu'$ to be the pushforward of $\overline\mu$ under the inverse map on $\mathbb T$, namely $\mu'(B)=\overline{\mu(B^{-1})}$ for any Borel set $B\subseteq\mathbb T$. This complex measure has Fourier coefficients $\widehat{\mu'}(n)=\overline{\widehat{\mu}(n)}$. We are going to apply the above to the convolution between $\mu$ and $\mu'$, namely we choose $\nu=\mu*\mu'$, meaning that $\nu$ is the pushforward of the measure $\mu\times\mu'$ (on $\mathbb T\times\mathbb T$) under the product map $\cdot:\mathbb{T}\times\mathbb{T}\to\mathbb{T}$. By Fubini's theorem
$$
\widehat{\nu}(n)=\int_{\mathbb{T}\times\mathbb{T}}(zw)^{-n}d(\mu\times\mu')(z,w)=\int_{\mathbb T}\int_{\mathbb T}z^{-n}w^{-n}d\mu'(w)d\mu(z)=\widehat{\mu}(n)\widehat{\mu'}(n)=|\widehat{\mu}(n)|^2.
$$

So, by the identity derived earlier,
$$
\lim_{N\to\infty}\frac{1}{2N+1}\sum_{n=-N}^N|\widehat{\mu}(n)|^2=\nu(\{1\})=\int_{\mathbb T\times\mathbb T}1_{\{zw=1\}}d(\mu\times\mu')(z,w).
$$

By Fubini's theorem again, the right-hand side equals
$$
\int_{\mathbb T}\mu'(\{z^{-1}\})d\mu(z)=\int_{\mathbb T}\overline{\mu(\{z\})}d\mu(z)=\sum_j|\mu(\{z_j\})|^2=\sum_j|c_j|^2.
$$

* The proof of the analogous statement for the real line is identical, except that we use the identity
$$
\frac{1}{2R}\int_{-R}^R\widehat\nu(\xi)d\xi=\int_{\mathbb R}f_R(x)d\nu(x)
$$

(which follows from Fubini's theorem), where $f_R(x)=\frac{1}{2R}\int_{-R}^R e^{-2\pi i\xi x}d\xi$.

We observe that $|f_R|\le 1$, $f_R(0)=1$ and $f_R(x)=\frac{e^{2\pi iRx}-e^{-2\pi iRx}}{4\pi iRx}$ for $x\neq 0$, which converges to $0$ as $R\to\infty$. So, by dominated convergence, we have the analogous identity
$$
\lim_{R\to\infty}\frac{1}{2R}\int_{-R}^R\widehat\nu(\xi)d\xi=\nu(\{0\}).
$$

* A real or complex Borel measure $\mu$ on the circle is diffuse (i.e. $\mu_a=0$) if and only if $\lim_{N\to\infty}\frac{1}{2N+1}\sum_{n=-N}^N|\widehat\mu(n)|^2=0$.

* A probability measure $\mu$ on the circle is a Dirac mass if and only if $\lim_{N\to\infty}\frac{1}{2N+1}\sum_{n=-N}^N|\widehat\mu(n)|^2=1$. (Here, the nontrivial implication follows from the fact that the weights $c_j$ are positive and satisfy $1=\sum_j c_j^2\le\sum_j c_j\le 1$, which forces $c_j^2=c_j$ and thus $c_j=1$, so that there must be a single atom with mass $1$.)
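As a quick sanity check of the statement on the circle, consider the measure $\mu = \tfrac12\delta_1 + \tfrac12 m$, where $m$ is normalized Lebesgue measure on $\mathbb T$: its Fourier coefficients are $\widehat\mu(0)=1$ and $\widehat\mu(n)=\tfrac12$ for $n\neq 0$, so the Cesàro averages should converge to $\sum_j|c_j|^2 = \tfrac14$. The following self-contained Python sketch (our own illustration, not part of the article) verifies this numerically.

```python
# Wiener's lemma demo for mu = (1/2) * delta_{z=1} + (1/2) * normalized Lebesgue.
# The atom contributes 1/2 to every Fourier coefficient; Lebesgue measure
# contributes only to the 0-th coefficient.

def mu_hat(n):
    return 1.0 if n == 0 else 0.5

def cesaro_average(N):
    """(1/(2N+1)) * sum_{n=-N}^{N} |mu_hat(n)|^2."""
    return sum(abs(mu_hat(n)) ** 2 for n in range(-N, N + 1)) / (2 * N + 1)

for N in (10, 100, 1000, 10000):
    print(N, cesaro_average(N))
# The averages approach 0.25 = |1/2|^2, the squared mass of the single atom.
```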
diff --git a/wiki/wikipedia/898.txt b/wiki/wikipedia/898.txt deleted file mode 100644 index 2c7b1783561bc0b5bf870fcc1f2dfcb3c5426374..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/898.txt +++ /dev/null @@ -1,94 +0,0 @@ Iterative deepening A* (IDA*) is a graph traversal and path search algorithm that can find the shortest path between a designated start node and any member of a set of goal nodes in a weighted graph. It is a variant of iterative deepening depth-first search that borrows the idea to use a heuristic function to evaluate the remaining cost to get to the goal from the A* search algorithm. Since it is a depth-first search algorithm, its memory usage is lower than in A*, but unlike ordinary iterative deepening search, it concentrates on exploring the most promising nodes and thus does not go to the same depth everywhere in the search tree. Unlike A*, IDA* does not utilize dynamic programming and therefore often ends up exploring the same nodes many times.

While the standard iterative deepening depth-first search uses search depth as the cutoff for each iteration, IDA* uses the more informative $f(n) = g(n) + h(n)$, where $g(n)$ is the cost to travel from the root to node $n$ and $h(n)$ is a problem-specific heuristic estimate of the cost to travel from $n$ to the goal.

The algorithm was first described by Richard Korf in 1985.

Iterative-deepening-A* works as follows: at each iteration, perform a depth-first search, cutting off a branch when its total cost $f(n) = g(n) + h(n)$ exceeds a given threshold. This threshold starts at the estimate of the cost at the initial state, and increases for each iteration of the algorithm. At each iteration, the threshold used for the next iteration is the minimum cost of all values that exceeded the current threshold.

As in A*, the heuristic has to have particular properties to guarantee optimality (shortest paths). See Properties below.

    path                 current search path (acts like a stack)
    node                 current node (last node in current path)
    g                    the cost to reach current node
    f                    estimated cost of the cheapest path (root..node..goal)
    h(node)              estimated cost of the cheapest path (node..goal)
    cost(node, succ)     step cost function
    is_goal(node)        goal test
    successors(node)     node expanding function, expand nodes ordered by g + h(node)
    ida_star(root)       return either NOT_FOUND or a pair with the best path and its cost

    procedure ida_star(root)
        bound := h(root)
        path := [root]
        loop
            t := search(path, 0, bound)
            if t = FOUND then return (path, bound)
            if t = ∞ then return NOT_FOUND
            bound := t
        end loop
    end procedure

    function search(path, g, bound)
        node := path.last
        f := g + h(node)
        if f > bound then return f
        if is_goal(node) then return FOUND
        min := ∞
        for succ in successors(node) do
            if succ not in path then
                path.push(succ)
                t := search(path, g + cost(node, succ), bound)
                if t = FOUND then return FOUND
                if t < min then min := t
                path.pop()
            end if
        end for
        return min
    end function

Like A*, IDA* is guaranteed to find the shortest path leading from the given start node to any goal node in the problem graph if the heuristic function h is admissible, that is, if it never overestimates the actual cost to reach the goal.

IDA* is beneficial when the problem is memory constrained. A* search keeps a large queue of unexplored nodes that can quickly fill up memory.
By contrast, because IDA* does not remember any node except the ones on the current path, it requires an amount of memory that is only linear in the length of the solution that it constructs. Its time complexity is analyzed by Korf et al. under the assumption that the heuristic cost estimate h is consistent, meaning that -$$ -h(n) \le \mathrm{cost}(n, n') + h(n') -$$ - -for all nodes n and all neighbors n' of n; they conclude that compared to a brute-force tree search over an exponential-sized problem, IDA* achieves a smaller search depth (by a constant factor), but not a smaller branching factor. - -Recursive best-first search is another memory-constrained version of A* search that can be faster in practice than IDA*, since it requires less regenerating of nodes. - -Applications of IDA* are found in such problems as planning. diff --git a/wiki/wikipedia/899.txt b/wiki/wikipedia/899.txt deleted file mode 100644 index a35db24f41d77b25f022a8212578ff320fb97ecb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/899.txt +++ /dev/null @@ -1,55 +0,0 @@ -Condorcet's jury theorem is a political science theorem about the relative probability of a given group of individuals arriving at a correct decision. The theorem was first expressed by the Marquis de Condorcet in his 1785 work Essay on the Application of Analysis to the Probability of Majority Decisions. - -The assumptions of the theorem are that a group wishes to reach a decision by majority vote. One of the two outcomes of the vote is correct, and each voter has an independent probability p of voting for the correct decision. The theorem asks how many voters we should include in the group. The result depends on whether p is greater than or less than 1/2: - -* If p is greater than 1/2 (each voter is more likely to vote correctly), then adding more voters increases the probability that the majority decision is correct. In the limit, the probability that the majority votes correctly approaches 1 as the number of voters increases. - -* On the other hand, if p is less than 1/2 (each voter is more likely to vote incorrectly), then adding more voters makes things worse: the optimal jury consists of a single voter. - -Since Condorcet, many other researchers have proved various other jury theorems, relaxing some or all of Condorcet's assumptions. - -To avoid the need for a tie-breaking rule, we assume n is odd. Essentially the same argument works for even n if ties are broken by adding a single voter. - -Now suppose we start with n voters, and let m of these voters vote correctly. - -Consider what happens when we add two more voters (to keep the total number odd). The majority vote changes in only two cases: - -* m was one vote too small to get a majority of the n votes, but both new voters voted correctly. - -* m was just equal to a majority of the n votes, but both new voters voted incorrectly. - -The rest of the time, either the new votes cancel out, only increase the gap, or don't make enough of a difference. So we only care what happens when a single vote (among the first n) separates a correct from an incorrect majority. - -Restricting our attention to this case, we can imagine that the first n-1 votes cancel out and that the deciding vote is cast by the n-th voter. In this case the probability of getting a correct majority is just p. Now suppose we send in the two extra voters. 
The probability that they change an incorrect majority to a correct majority is $(1-p)p^2$, while the probability that they change a correct majority to an incorrect majority is $p(1-p)^2$. The first of these probabilities is greater than the second if and only if p > 1/2, proving the theorem.

This proof is direct; it just sums up the probabilities of the majorities. Each term of the sum multiplies the number of combinations of a majority by the probability of that majority. Each majority is counted using a combination, n items taken k at a time, where n is the jury size, and k is the size of the majority. Probabilities range from 0 (= the vote is always wrong) to 1 (= always right). Each person decides independently, so the probabilities of their decisions multiply. The probability of each correct decision is p. The probability of an incorrect decision, q, is the opposite of p, i.e. 1 − p. The power notation, i.e. $p^x$, is a shorthand for x multiplications of p.

Committee or jury accuracies can be easily estimated by using this approach in computer spreadsheets or programs.

As an example, let us take the simplest case of n = 3, p = 0.8. We need to show that 3 people have higher than 0.8 chance of being right. Indeed:

0.8 × 0.8 × 0.8 + 0.8 × 0.8 × 0.2 + 0.8 × 0.2 × 0.8 + 0.2 × 0.8 × 0.8 = 0.896.

The probability of a correct majority decision P(n, p), when the individual probability p is close to 1/2, grows linearly in terms of p − 1/2. For n voters each one having probability p of deciding correctly and for odd n (where there are no possible ties):
$$
P(n, p) = 1/2 + c_1 (p - 1/2) + c_3 (p - 1/2)^3 + O\left( (p - 1/2)^5 \right),
$$

where
$$
c_1 = {n \choose {\lfloor n/2 \rfloor}} \frac{\lfloor n/2 \rfloor + 1}{4^{\lfloor n/2 \rfloor}} = \sqrt{\frac{2n + 1}{\pi}} \left(1 + \frac{1}{16n^2} + O(n^{-3})\right),
$$

and the asymptotic approximation in terms of n is very accurate. The expansion is only in odd powers and $c_3 < 0$. In simple terms, this says that when the decision is difficult (p close to 1/2), the gain by having n voters grows proportionally to $\sqrt{n}$.

The Condorcet jury theorem has recently been used to conceptualize score integration when several physician readers (radiologists, endoscopists, etc.) independently evaluate images for disease activity. This task arises in central reading performed during clinical trials and has similarities to voting. According to the authors, the application of the theorem can translate individual reader scores into a final score in a fashion that is mathematically sound (by avoiding the averaging of ordinal data), mathematically tractable for further analysis, and consistent with the scoring task at hand (based on decisions about the presence or absence of features, a subjective classification task).

The Condorcet jury theorem is also used in ensemble learning in the field of machine learning. An ensemble method combines the predictions of many individual classifiers by majority voting. Assuming that each of the individual classifiers predicts with slightly greater than 50% accuracy and that their predictions are independent, the accuracy of the ensemble's majority vote will be far greater than the individual predictive scores.
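The direct summation described above is easy to carry out in code. The following self-contained Python sketch (our own illustration, not part of the article) computes $P(n,p)=\sum_{k>n/2}\binom{n}{k}p^k(1-p)^{n-k}$ for odd n, reproducing the 0.896 value for n = 3, p = 0.8 and showing the monotone growth for p > 1/2.

```python
from math import comb

def majority_correct(n, p):
    """Probability that a majority of n independent voters is correct,
    each voter being correct with probability p (n odd, so no ties)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(majority_correct(3, 0.8))                   # 0.896, matching the example above
for n in (1, 3, 5, 11, 51, 101):
    print(n, round(majority_correct(n, 0.6), 4))  # increases toward 1 when p > 1/2
```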
* Condorcet method

* Condorcet paradox

* Jury theorem

* Wisdom of the crowd diff --git a/wiki/wikipedia/9.txt b/wiki/wikipedia/9.txt deleted file mode 100644 index 0ad719ae5d4c0f0fe8ad12a6d0c8c7e25655224b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/9.txt +++ /dev/null @@ -1,47 +0,0 @@ In mathematics, the Gagliardo–Nirenberg interpolation inequality is a result in the theory of Sobolev spaces that relates the $L^p$ norms of the weak derivatives of a function. The inequality “interpolates” among various values of p and orders of differentiation, hence the name. The result is of particular importance in the theory of elliptic partial differential equations. It was proposed by Louis Nirenberg and Emilio Gagliardo.

Suppose $n,j,m$ are non-negative integers and that $1 \leq p, q , r \leq \infty$ and $\alpha \in [0,1]$ are real numbers such that
$$
\frac{1}{p} = \frac{j}{n} + \left( \frac{1}{r} - \frac{m}{n} \right) \alpha + \frac{1 - \alpha}{q}
$$

and
$$
\frac{j}{m} \leq \alpha \leq 1.
$$

Suppose furthermore that $u: \mathbb{R}^n \to \R$ is a function in $L^q(\mathbb{R}^n)$ with mth weak derivative in $L^r(\mathbb{R}^n)$.

Then the jth weak derivative of $u$ lies in $L^p(\mathbb{R}^n)$ and there exists a constant $C$ depending on $m,n,j,q,r$ and $\alpha$ but independent of $u$ such that
$$
\| \mathrm{D}^{j} u \|_{L^{p}} \leq C \| \mathrm{D}^{m} u \|_{L^{r}}^{\alpha} \| u \|_{L^{q}}^{1 - \alpha}.
$$

The result has two exceptional cases:

# If $j = 0, mr < n $ and $q = \infty$, then it is necessary to make the additional assumption that either $\lim_{|x|\to\infty} u(x) = 0$ or that $u \in L^s(\R^n)$ for some $s < \infty$.

# If $1 < r < \infty$ and $m-j-\frac{n}{r}$ is a non-negative integer, then it is necessary to assume also that $\alpha \neq 1$.

For functions $u: \Omega \to \R$ defined on a bounded Lipschitz domain $\Omega \subseteq \R^n$, the interpolation inequality has the same hypotheses as above and reads
$$
\| \mathrm{D}^{j} u \|_{L^{p}} \leq C_{1} \| \mathrm{D}^{m} u \|_{L^{r}}^{\alpha} \| u \|_{L^{q}}^{1 - \alpha} + C_{2} \| u \|_{L^{s}}
$$

for arbitrary $s \geq 1$, where the constants $C_1,C_2$ depend on the domain $\Omega$ and on $s$ in addition to the other parameters.

* When $\alpha = 1$, the term $\|u\|_{L^q}$ vanishes and the Gagliardo–Nirenberg interpolation inequality then implies the Sobolev embedding theorem. (Note, in particular, that r is permitted to be 1.)

* Another special case of the Gagliardo–Nirenberg interpolation inequality is Ladyzhenskaya's inequality, in which $m=1, j=0, q=r=2, p = 4$ and $n$ is either $2$ or $3$. For example, the Ladyzhenskaya inequality for dimension $2$ states that $\|u\|_{L^4} \leq C\|u\|_{L^2}^{1/2} \|D u\|_{L^2}^{1/2}$.

* In the setting of the Sobolev spaces $H^s$, with $0\leq s_1 < s < s_2$, one has the interpolation inequality
$$
\|u\|_{H^{s}}\leq \|u\|_{H^{s_1}}^{\frac{s_2-s}{s_2-s_1}} \|u\|_{H^{s_2}}^{\frac{s-s_1}{s_2-s_1}}.
$$
This can also be derived via the Plancherel theorem and Hölder's inequality. diff --git a/wiki/wikipedia/90.txt b/wiki/wikipedia/90.txt deleted file mode 100644 index dff7dfffd1082ad95b4053934fc7d3307e7d85ed..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/90.txt +++ /dev/null @@ -1,7 +0,0 @@ In mathematical group theory, Frobenius's theorem states that if n divides the order of a finite group G, then the number of solutions of $x^n = 1$ is a multiple of n. It was introduced by Ferdinand Georg Frobenius.
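As a quick illustration of the statement, here is a small self-contained Python sketch (our own, not part of the article) that enumerates the symmetric group $S_3$ as permutation tuples and, for each divisor n of $|S_3| = 6$, counts the solutions of $x^n = 1$ and checks that the count is a multiple of n.

```python
from itertools import permutations

# Elements of S3 as tuples p, where p[i] is the image of i;
# composition is (p ∘ q)(i) = p[q[i]].
group = list(permutations(range(3)))
identity = tuple(range(3))

def power(p, n):
    """n-th power of the permutation p under composition."""
    result = identity
    for _ in range(n):
        result = tuple(p[result[i]] for i in range(3))
    return result

for n in (1, 2, 3, 6):  # the divisors of |S3| = 6
    solutions = sum(1 for p in group if power(p, n) == identity)
    assert solutions % n == 0
    print(f"n={n}: {solutions} solutions of x^n = 1 (a multiple of {n})")
```

For n = 2 this finds 4 solutions (the identity and the three transpositions), a multiple of 2, in line with the theorem.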
A more general version of Frobenius's theorem states that if C is a conjugacy class with h elements of a finite group G with g elements and n is a positive integer, then the number of elements k such that $k^n$ is in C is a multiple of the greatest common divisor $\gcd(hn,g)$.

One application of Frobenius's theorem is to show that the coefficients of the Artin–Hasse exponential are $p$-integral, by interpreting them in terms of the number of elements of order a power of p in the symmetric group $S_n$.

Frobenius conjectured that if in addition the number of solutions to $x^n=1$ is exactly n, where n divides the order of G, then these solutions form a normal subgroup. This has been proved as a consequence of the classification of finite simple groups. The symmetric group $S_3$ has exactly 4 solutions to $x^4=1$ but these do not form a normal subgroup; this is not a counterexample to the conjecture, as 4 does not divide the order of $S_3$. diff --git a/wiki/wikipedia/900.txt b/wiki/wikipedia/900.txt deleted file mode 100644 index 4c11f497bb14b1c0c27ab523bb954ce22f05c755..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/900.txt +++ /dev/null @@ -1,176 +0,0 @@ In mathematics, the Lindström–Gessel–Viennot lemma provides a way to count the number of tuples of non-intersecting lattice paths, or, more generally, paths on a directed graph. It was proved by Gessel–Viennot in 1985, based on previous work of Lindström published in 1973.

Let G be a locally finite directed acyclic graph. This means that each vertex has finite degree, and that G contains no directed cycles. Consider base vertices $ A = \{ a_1, \ldots, a_n \}$ and destination vertices $ B = \{ b_1, \ldots, b_n \}$, and also assign a weight $\omega_{e}$ to each directed edge e. These edge weights are assumed to belong to some commutative ring. For each directed path P between two vertices, let $ \omega(P) $ be the product of the weights of the edges of the path. For any two vertices a and b, write e(a,b) for the sum $e(a,b) = \sum_{P: a \to b} \omega(P)$ over all paths from a to b. This is well-defined if between any two points there are only finitely many paths; but even in the general case, this can be well-defined under some circumstances (such as all edge weights being pairwise distinct formal indeterminates, and $e(a,b)$ being regarded as a formal power series). If one assigns the weight 1 to each edge, then e(a,b) counts the number of paths from a to b.

With this setup, write
$$
 M = \begin{pmatrix} e(a_1,b_1) & e(a_1,b_2) & \cdots & e(a_1,b_n) \\ e(a_2,b_1) & e(a_2,b_2) & \cdots & e(a_2,b_n) \\ \vdots & \vdots & \ddots & \vdots \\ e(a_n,b_1) & e(a_n,b_2) & \cdots & e(a_n,b_n) \end{pmatrix}.
$$

An n-tuple of non-intersecting paths from A to B means an n-tuple (P1, ..., Pn) of paths in G with the following properties:

* There exists a permutation $\sigma$ of $\left\{ 1, 2, ..., n \right\}$ such that, for every i, the path Pi is a path from $a_i$ to $b_{\sigma(i)}$.

* Whenever $i \neq j$, the paths Pi and Pj have no two vertices in common (not even endpoints).

Given such an n-tuple (P1, ..., Pn), we denote by $\sigma(P)$ the permutation $\sigma$ from the first condition.

The Lindström–Gessel–Viennot lemma then states that the determinant of M is the signed sum over all n-tuples P = (P1, ..., Pn) of non-intersecting paths from A to B:
$$
 \det(M) = \sum_{(P_1,\ldots,P_n) \colon A \to B} \mathrm{sign}(\sigma(P)) \prod_{i=1}^n \omega(P_i).
-$$ - -That is, the determinant of M counts the weights of all n-tuples of non-intersecting paths starting at A and ending at B, each affected with the sign of the corresponding permutation of $(1,2,\ldots,n)$, given by $ P_i $ taking $ a_i $ to $ b_{\sigma(i)} $. - -In particular, if the only permutation possible is the identity (i.e., every n-tuple of non-intersecting paths from A to B takes ai to bi for each i) and we take the weights to be 1, then det(M) is exactly the number of non-intersecting n-tuples of paths starting at A and ending at B. - -To prove the Lindström–Gessel–Viennot lemma, we first introduce some notation. - -An n-path from an n-tuple $(a_1, a_2, \ldots, a_n)$ of vertices of G to an n-tuple $(b_1, b_2, \ldots, b_n)$ of vertices of G will mean an n-tuple $(P_1, P_2, \ldots, P_n)$ of paths in G, with each $P_i$ leading from $a_i$ to $b_i$. This n-path will be called non-intersecting just in case the paths Pi and Pj have no two vertices in common (including endpoints) whenever $i \neq j$. Otherwise, it will be called entangled. - -Given an n-path $P = (P_1, P_2, \ldots, P_n)$, the weight $\omega(P)$ of this n-path is defined as the product $\omega(P_1) \omega(P_2) \cdots \omega(P_n)$. - -A twisted n-path from an n-tuple $(a_1, a_2, \ldots, a_n)$ of vertices of G to an n-tuple $(b_1, b_2, \ldots, b_n)$ of vertices of G will mean an n-path from $(a_1, a_2, \ldots, a_n)$ to $\left(b_{\sigma(1)}, b_{\sigma(2)}, \ldots, b_{\sigma(n)}\right)$ for some permutation $\sigma$ in the symmetric group $S_n$. This permutation $\sigma$ will be called the twist of this twisted n-path, and denoted by $\sigma(P)$ (where P is the n-path). This, of course, generalises the notation $\sigma(P)$ introduced before. - -Recalling the definition of M, we can expand det M as a signed sum of permutations; thus we obtain - - - -\begin{array}{rcl} - -\det M &= &\sum_{\sigma\in S_{n}}\mathrm{sign}(\sigma)\prod_{i=1}^{n}e(a_{i},b_{\sigma(i)})\\ - -&= &\sum_{\sigma\in S_{n}}\mathrm{sign}(\sigma)\prod_{i=1}^{n}\sum_{P_{i}:a_{i}\to b_{\sigma(i)}}\omega(P_{i})\\ - -&= &\sum_{\sigma\in S_{n}}\mathrm{sign}(\sigma) - -\sum\{\omega(P):P~\text{an}~n\text{-path from}~\left(a_1,a_2,\ldots,a_n\right)~\text{to}~\left(b_{\sigma(1)}, b_{\sigma(2)}, \ldots, b_{\sigma(n)}\right)\}\\ - -&= &\sum\{\mathrm{sign}(\sigma(P))\omega(P):P~\text{a twisted}~n\text{-path from}~\left(a_1,a_2,\ldots,a_n\right)~\text{to}~\left(b_1, b_2, ..., b_n\right)\}\\ - -&= &\sum\{\mathrm{sign}(\sigma(P))\omega(P):P~\text{a non-intersecting twisted}~n\text{-path from}~\left(a_1,a_2,\ldots,a_n\right)~\text{to}~\left(b_1, b_2, ..., b_n\right)\}\\ - -&&+\sum\{\mathrm{sign}(\sigma(P))\omega(P):P~\text{an entangled twisted}~n\text{-path from}~\left(a_1,a_2,\ldots,a_n\right)~\text{to}~\left(b_1, b_2, ..., b_n\right)\}\\ - -&= &\sum_{(P_1,\ldots,P_n) \colon A \to B} \mathrm{sign}(\sigma(P))\omega(P)\\ - -&&+\underbrace{ - -\sum\{\mathrm{sign}(\sigma(P))\omega(P):P~\text{an entangled twisted}~n\text{-path from}~\left(a_1,a_2,\ldots,a_n\right)~\text{to}~\left(b_1, b_2, ..., b_n\right)\}}_{=0?}\\ - -\end{array} - - - -It remains to show that the sum of $\mathrm{sign}(\sigma(P))\omega(P)$ over all entangled twisted n-paths vanishes. Let $\mathcal{E}$ denote the set of entangled twisted n-paths. To establish this, we shall construct an involution$f:\mathcal{E}\longrightarrow\mathcal{E}$ with the properties $\omega(f(P))=\omega(P)$ and $\mathrm{sign}(\sigma(f(P)))=-\mathrm{sign}(\sigma(P))$ for all $P\in\mathcal{E}$. 
Given such an involution, the rest-term
$$
\sum\{\mathrm{sign}(\sigma(P))\omega(P):P~\text{an entangled twisted}~n\text{-path from}~\left(a_1,a_2,\ldots,a_n\right)~\text{to}~\left(b_1, b_2, ..., b_n\right)\}
=
\sum_{P \in \mathcal{E}} \mathrm{sign}(\sigma(P))\omega(P)
$$

in the above sum reduces to 0, since its addends cancel each other out (namely, the addend corresponding to each $P \in \mathcal{E}$ cancels the addend corresponding to $f(P)$).

Construction of the involution: The idea behind the definition of the involution $f$ is to choose two intersecting paths within an entangled n-path, and switch their tails after their point of intersection. There are in general several pairs of intersecting paths, which can also intersect several times; hence, a careful choice needs to be made. Let $P = \left(P_1,P_2,...,P_n\right)$ be any entangled twisted n-path. Then $f(P)$ is defined as follows. We call a vertex crowded if it belongs to at least two of the paths $P_1,P_2,...,P_n$. Since P is entangled, there is at least one crowded vertex. We pick the smallest $i \in \{1,2,\ldots,n\}$ such that $P_i$ contains a crowded vertex. Then, we pick the first crowded vertex v on $P_i$ ("first" in the sense of "encountered first when travelling along $P_i$"), and we pick the largest j such that v belongs to $P_j$. The crowdedness of v implies j > i. Write the two paths $P_i$ and $P_j$ as
$$
\begin{array}{rcl}
P_{i} &\equiv &a_{i}=u_{0}\to u_{1}\to u_{2}\ldots u_{\alpha-1}\to\overbrace{\mathbf{u}_{\alpha}\to u_{\alpha+1} \ldots\to u_{r}=b_{\sigma(i)}}^{\mathrm{tail}~i}\\
P_{j} &\equiv &a_{j}=v_{0}\to v_{1}\to v_{2}\ldots v_{\beta-1}\to\underbrace{\mathbf{v}_{\beta}\to v_{\beta+1} \ldots\to v_{s}=b_{\sigma(j)}}_{\mathrm{tail}~j}\\
\end{array}
$$

where $\sigma=\sigma(P)$, and where $\alpha$ and $\beta$ are chosen such that v is the $\alpha$-th vertex along $P_i$ and the $\beta$-th vertex along $P_j$ (that is, $v = u_{\alpha} = v_{\beta}$). We set $\alpha_P = \alpha$ and $\beta_P = \beta$ and $i_{P} = i$ and $j_{P} = j$. Now define the twisted n-path $f(P)$ to coincide with $P$ except for components $i$ and $j$, which are replaced by
$$
\begin{array}{rcl}
P'_{i} &\equiv &a_{i}=u_{0}\to u_{1}\to u_{2}\ldots u_{\alpha-1}\to\overbrace{v_{\beta_{P}}\to v_{\beta_{P}+1}\ldots\to v_{s}=b_{\sigma(j)}}^{\mathrm{tail}~j}\\
P'_{j} &\equiv &a_{j}=v_{0}\to v_{1}\to v_{2}\ldots v_{\beta-1}\to\underbrace{u_{\alpha_{P}}\to u_{\alpha_{P}+1}\ldots\to u_{r}=b_{\sigma(i)}}_{\mathrm{tail}~i}\\
\end{array}
$$

It is immediately clear that $f(P)$ is an entangled twisted n-path. Going through the steps of the construction, it is easy to see that $i_{f(P)}=i_{P}$, $j_{f(P)}=j_{P}$ and furthermore that $\alpha_{f(P)}=\alpha_{P}$ and $\beta_{f(P)}=\beta_{P}$, so that applying $f$ again to $f(P)$ involves swapping back the tails of $f(P)_{i},f(P)_{j}$ and leaving the other components intact. Hence $f(f(P))=P$. Thus $f$ is an involution. It remains to demonstrate the desired antisymmetry properties:

From the construction one can see that $\sigma(f(P))$ coincides with $\sigma=\sigma(P)$ except that it swaps $\sigma(i)$ and $\sigma(j)$, thus yielding $\mathrm{sign}(\sigma(f(P)))=-\mathrm{sign}(\sigma(P))$.
To show that $\omega(f(P))=\omega(P)$ we first compute, appealing to the tail-swap,
$$
\begin{array}{rcl}
\omega(P'_{i})\omega(P'_{j})
&= &\left(\prod_{t=0}^{\alpha-1}\omega(u_{t},u_{t+1})\cdot \prod_{t=\beta}^{s-1}\omega(v_{t},v_{t+1})\right)
\cdot\left(\prod_{t=0}^{\beta-1}\omega(v_{t},v_{t+1})\cdot \prod_{t=\alpha}^{r-1}\omega(u_{t},u_{t+1})\right)\\
&= &\prod_{t=0}^{r-1}\omega(u_{t},u_{t+1})\cdot\prod_{t=0}^{s-1}\omega(v_{t},v_{t+1})\\
&= &\omega(P_{i})\omega(P_{j}).\\
\end{array}
$$

Hence $\omega(f(P))=\prod_{k=1}^{n}\omega(f(P)_{k})=\prod_{k=1,~k\neq i,j}^{n}\omega(P_{k})\cdot \omega(P'_{i})\omega(P'_{j})=\prod_{k=1,~k\neq i,j}^{n}\omega(P_{k})\cdot \omega(P_{i})\omega(P_{j})=\prod_{k=1}^{n}\omega(P_{k})=\omega(P)$.

Thus we have found an involution with the desired properties and completed the proof of the Lindström–Gessel–Viennot lemma.

Remark. Arguments similar to the one above appear in several sources, with variations regarding the choice of which tails to switch. A version with j smallest (unequal to i) rather than largest appears in the Gessel–Viennot 1989 reference (proof of Theorem 1).

The Lindström–Gessel–Viennot lemma can be used to prove the equivalence of the following two different definitions of Schur polynomials. Given a partition $ \lambda = \lambda_1 + \cdots + \lambda_r $ of n, the Schur polynomial $s_\lambda(x_1,\ldots,x_n)$ can be defined as:

* $ s_\lambda(x_1,\ldots,x_n) = \sum_T w(T), $

where the sum is over all semistandard Young tableaux T of shape λ, and the weight of a tableau T is defined as the monomial obtained by taking the product of the xi indexed by the entries i of T. For instance, the weight of a tableau with entries 1, 3, 4, 4, 4, 5, 6, 7 is $ x_1 x_3 x_4^3 x_5 x_6 x_7 $.

* $ s_\lambda(x_1, \ldots, x_n) = \det \left ( (h_{\lambda_i +j-i} )_{i,j}^{r \times r} \right ), $

where hi are the complete homogeneous symmetric polynomials (with hi understood to be 0 if i is negative). For instance, for the partition (3,2,2,1), the corresponding determinant is
$$
 s_{(3,2,2,1)} = \begin{vmatrix} h_3 & h_4 & h_5 & h_6 \\ h_1 & h_2 & h_3 & h_4 \\ 1 & h_1 & h_2 & h_3 \\ 0 & 0 & 1 & h_1 \end{vmatrix}.
$$

To prove the equivalence, given any partition λ as above, one considers the r starting points $ a_i = (r+1-i,1) $ and the r ending points $ b_i = (\lambda_i + r+1-i, n)$, as points in the lattice $ \mathbb{Z}^2 $, which acquires the structure of a directed graph by asserting that the only allowed directions are going one to the right or one up; the weight associated to any horizontal edge at height i is xi, and the weight associated to a vertical edge is 1. With this definition, r-tuples of paths from A to B are exactly semistandard Young tableaux of shape λ, and the weight of such an r-tuple is the corresponding summand in the first definition of the Schur polynomials. For instance, a tableau yields a corresponding r-tuple of non-intersecting paths in this way. [Figures omitted: a semistandard Young tableau (Image: RSK example result.svg) and its corresponding 4-tuple of paths.]

On the other hand, the matrix M is exactly the matrix appearing in the determinant above. This shows the required equivalence. (See also §4.5 in Sagan's book, or the First Proof of Theorem 7.16.1 in Stanley's EC2, or §3.3 in Fulmek's arXiv preprint, or §9.13 in Martin's lecture notes, for slight variations on this argument.)

One can also use the Lindström–Gessel–Viennot lemma to prove the Cauchy–Binet formula, and in particular the multiplicativity of the determinant.
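In the unweighted lattice-path setting, the lemma is easy to check numerically. The following self-contained Python sketch (our own illustration; all names and the particular start/end points are ours) counts monotone paths with unit steps right and up between two small sets of points on $\mathbb{Z}^2$, brute-forces the signed sum over non-intersecting path tuples, and compares it with $\det M$.

```python
from itertools import permutations

def paths(a, b):
    """All monotone lattice paths from a to b using unit steps right and up."""
    (x, y), (X, Y) = a, b
    if x > X or y > Y:
        return []
    if (x, y) == (X, Y):
        return [[(x, y)]]
    out = []
    for step in ((x + 1, y), (x, y + 1)):
        out += [[(x, y)] + rest for rest in paths(step, b)]
    return out

def sign(perm):
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

A = [(0, 0), (1, 0)]
B = [(2, 2), (3, 1)]

# Path-counting matrix M with e(a, b) = number of paths (all edge weights 1).
M = [[len(paths(a, b)) for b in B] for a in A]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]

# Signed count over all vertex-disjoint path pairs, as in the lemma.
signed = 0
for perm in permutations(range(2)):
    for p0 in paths(A[0], B[perm[0]]):
        for p1 in paths(A[1], B[perm[1]]):
            if not set(p0) & set(p1):
                signed += sign(perm)

print(det, signed)  # both are 6 for this configuration
assert det == signed
```

In this configuration only the identity permutation admits disjoint pairs, so the signed sum is a plain count, which is the typical situation in the Schur-polynomial application above.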
The acyclicity of G is an essential assumption in the Lindström–Gessel–Viennot lemma; it guarantees (in reasonable situations) that the sums $e(a, b)$ are well-defined, and it enters into the proof (if G is not acyclic, then f might transform a self-intersection of a path into an intersection of two distinct paths, which breaks the argument that f is an involution). Nevertheless, Kelli Talaska's 2012 paper establishes a formula generalizing the lemma to arbitrary digraphs. The sums $e(a, b)$ are replaced by formal power series, and the sum over nonintersecting path tuples now becomes a sum over collections of nonintersecting and non-self-intersecting paths and cycles, divided by a sum over collections of nonintersecting cycles. The reader is referred to Talaska's paper for details. diff --git a/wiki/wikipedia/901.txt b/wiki/wikipedia/901.txt deleted file mode 100644 index fad20115bcdfc6f5c40c22e25f019bb23a88e2d2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/901.txt +++ /dev/null @@ -1,3 +0,0 @@ Norton Zone was a cloud file sharing and online backup service operated by Symantec that could be used to share, sync, access, store, and back up data. Symantec distinguished Norton Zone from the competition by automatically scanning files for malware and viruses.

On June 3, 2014, Symantec announced that Norton Zone would be discontinued on August 6, 2014. diff --git a/wiki/wikipedia/902.txt b/wiki/wikipedia/902.txt deleted file mode 100644 index d92c92410c564a96a01f48cd85eace646e2c36ef..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/902.txt +++ /dev/null @@ -1,8 +0,0 @@ In mathematical complex analysis, Schottky's theorem, introduced by Friedrich Schottky, is a quantitative version of Picard's theorem. It states that for a holomorphic function f in the open unit disk that does not take the values 0 or 1, the value |f(z)| can be bounded in terms of z and f(0).

Schottky's original theorem did not give an explicit bound for f; weak explicit bounds were given later. Ahlfors gave a strong explicit bound, showing that if f is holomorphic in the open unit disk and does not take the values 0 or 1 then
$$
\log |f(z)| \le \frac{1+|z|}{1-|z|}(7+\max(0,\log |f(0)|)).
$$

Several authors, such as Jenkins, have given variations of Ahlfors's bound with better constants: in particular Hempel gave some bounds whose constants are in some sense the best possible. diff --git a/wiki/wikipedia/903.txt b/wiki/wikipedia/903.txt deleted file mode 100644 index dd317215c61e119394f34e4e913380d96d418a9b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/903.txt +++ /dev/null @@ -1,34 +0,0 @@ In topology, the Tietze extension theorem (also known as the Tietze–Urysohn–Brouwer extension theorem) states that continuous functions on a closed subset of a normal topological space can be extended to the entire space, preserving boundedness if necessary.

If $X$ is a normal space and
$$
f : A \to \R
$$

is a continuous map from a closed subset $A$ of $X$ into the real numbers $\R$ carrying the standard topology, then there exists a continuous extension of $f$ to $X,$ that is, there exists a map
$$
F : X \to \R
$$

continuous on all of $X$ with $F(a) = f(a)$ for all $a \in A.$ Moreover, $F$ may be chosen such that
$$
\sup \{ |f(a)| : a \in A \} = \sup \{ |F(x)| : x \in X \},
$$

that is, if $f$ is bounded then $F$ may be chosen to be bounded (with the same bound as $f$). L. E. J.
Brouwer and Henri Lebesgue proved a special case of the theorem, when $X$ is a finite-dimensional real vector space. Heinrich Tietze extended it to all metric spaces, and Pavel Urysohn proved the theorem as stated here, for normal topological spaces.

This theorem is equivalent to Urysohn's lemma (which is also equivalent to the normality of the space) and is widely applicable, since all metric spaces and all compact Hausdorff spaces are normal. It can be generalized by replacing $\R$ with $\R^J$ for some indexing set $J,$ any retract of $\R^J,$ or any normal absolute retract whatsoever.

If $X$ is a metric space, $A$ a non-empty subset of $X$ and $f : A \to \R$ is a Lipschitz continuous function with Lipschitz constant $K,$ then $f$ can be extended to a Lipschitz continuous function $F : X \to \R$ with the same constant $K.$

This theorem is also valid for Hölder continuous functions, that is, if $f : A \to \R$ is a Hölder continuous function with constant less than or equal to $1,$ then $f$ can be extended to a Hölder continuous function $F : X \to \R$ with the same constant.

Another variant (in fact, a generalization) of Tietze's theorem is due to Z. Ercan:

Let $A$ be a closed subset of a topological space $X.$ If $f : X \to \R$ is an upper semicontinuous function, $g : X \to \R$ a lower semicontinuous function, and $h : A \to \R$ a continuous function such that $f(x) \leq g(x)$ for each $x \in X$ and $f(a) \leq h(a) \leq g(a)$ for each $a \in A$, then there is a continuous extension $H : X \to \R$ of $h$ such that $f(x) \leq H(x) \leq g(x)$ for each $x \in X.$

This theorem is also valid with some additional hypothesis if $\R$ is replaced by a general locally solid Riesz space. diff --git a/wiki/wikipedia/904.txt b/wiki/wikipedia/904.txt deleted file mode 100644 index c515b74560738ed741e57a865950e805e01c5f8d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/904.txt +++ /dev/null @@ -1,256 +0,0 @@ In logic, a rule of inference is admissible in a formal system if the set of theorems of the system does not change when that rule is added to the existing rules of the system. In other words, every formula that can be derived using that rule is already derivable without that rule, so, in a sense, it is redundant. The concept of an admissible rule was introduced by Paul Lorenzen (1955).

Admissibility has been systematically studied only in the case of structural rules in propositional non-classical logics, which we will describe next.

Let a set of basic propositional connectives be fixed (for instance, $\{\to,\land,\lor,\bot\}$ in the case of superintuitionistic logics, or $\{\to,\bot,\Box\}$ in the case of monomodal logics). Well-formed formulas are built freely using these connectives from a countably infinite set of propositional variables p0, p1, .... A substitution σ is a function from formulas to formulas which commutes with the connectives, i.e.,
$$
\sigma f(A_1,\dots,A_n)=f(\sigma A_1,\dots,\sigma A_n)
$$

for every connective f, and formulas A1, ..., An. (We may also apply substitutions to sets Γ of formulas, making σΓ = {σA: A ∈ Γ}.) A Tarski-style consequence relation is a relation $\vdash$ between sets of formulas and formulas such that
$$
\Gamma\vdash A \quad\text{whenever}\quad A\in\Gamma,
$$
$$
\Gamma\vdash A \quad\text{implies}\quad \Gamma\cup\Delta\vdash A,
$$
$$
\Gamma\vdash A \text{ and } \Delta\cup\{A\}\vdash B \quad\text{imply}\quad \Gamma\cup\Delta\vdash B
$$

for all formulas A, B, and sets of formulas Γ, Δ. A consequence relation such that
$$
\Gamma\vdash A \quad\text{implies}\quad \sigma\Gamma\vdash\sigma A
$$

for all substitutions σ is called structural. (Note that the term "structural" as used here and below is unrelated to the notion of structural rules in sequent calculi.)
diff --git a/wiki/wikipedia/904.txt b/wiki/wikipedia/904.txt deleted file mode 100644 index c515b74560738ed741e57a865950e805e01c5f8d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/904.txt +++ /dev/null @@ -1,256 +0,0 @@ -In logic, a rule of inference is admissible in a formal system if the set of theorems of the system does not change when that rule is added to the existing rules of the system. In other words, every formula that can be derived using that rule is already derivable without that rule, so, in a sense, it is redundant. The concept of an admissible rule was introduced by Paul Lorenzen (1955). - -Admissibility has been systematically studied only in the case of structural rules in propositional non-classical logics, which we will describe next. - -Let a set of basic propositional connectives be fixed (for instance, $\{\to,\land,\lor,\bot\}$ in the case of superintuitionistic logics, or $\{\to,\bot,\Box\}$ in the case of monomodal logics). Well-formed formulas are built freely using these connectives from a countably infinite set of propositional variables p0, p1, .... A substitution σ is a function from formulas to formulas which commutes with the connectives, i.e., -$$ -\sigma f(A_1,\dots,A_n)=f(\sigma A_1,\dots,\sigma A_n) -$$ - -for every connective f, and formulas A1, ..., An. (We may also apply substitutions to sets Γ of formulas, making σΓ = {σA: A ∈ Γ}.) A Tarski-style consequence relation is a relation $\vdash$ between sets of formulas, and formulas, such that -$$ -A\vdash A;\qquad \Gamma\vdash A\ \Rightarrow\ \Gamma,\Delta\vdash A;\qquad (\Gamma\vdash A\ \text{and}\ \Delta,A\vdash B)\ \Rightarrow\ \Gamma,\Delta\vdash B -$$ - -for all formulas A, B, and sets of formulas Γ, Δ. A consequence relation such that -$$ -\Gamma\vdash A\ \Rightarrow\ \sigma\Gamma\vdash\sigma A -$$ - -for all substitutions σ is called structural. (Note that the term "structural" as used here and below is unrelated to the notion of structural rules in sequent calculi.) A structural consequence relation is called a propositional logic. A formula A is a theorem of a logic $\vdash$ if $\varnothing\vdash A$. - -For example, we identify a superintuitionistic logic L with its standard consequence relation $\vdash_L$ axiomatizable by modus ponens and axioms, and we identify a normal modal logic with its global consequence relation $\vdash_L$ axiomatized by modus ponens, necessitation, and axioms. - -A structural inference rule (or just rule for short) is given by a pair (Γ,B), usually written as -$$ -\frac{A_1,\dots,A_n}B\qquad\text{or}\qquad A_1,\dots,A_n/B, -$$ - -where Γ = {A1, ..., An} is a finite set of formulas, and B is a formula. An instance of the rule is -$$ -\sigma A_1,\dots,\sigma A_n/\sigma B -$$ - -for a substitution σ. The rule Γ/B is derivable in $\vdash$ if $\Gamma\vdash B$. It is admissible if for every instance of the rule, σB is a theorem whenever all formulas from σΓ are theorems. In other words, a rule is admissible if, when added to the logic, it does not lead to new theorems. We also write $\Gamma|\!\!\!\sim B$ if Γ/B is admissible. (Note that $|\!\!\!\sim$ is a structural consequence relation on its own.) - -Every derivable rule is admissible, but not vice versa in general. A logic is structurally complete if every admissible rule is derivable, i.e., ${\vdash}={|\!\!\!\sim}$. - -In logics with a well-behaved conjunction connective (such as superintuitionistic or modal logics), a rule $A_1,\dots,A_n/B$ is equivalent to $A_1\land\dots\land A_n/B$ with respect to admissibility and derivability. It is therefore customary to only deal with unary rules A/B. - -*Classical propositional calculus (CPC) is structurally complete. Indeed, assume that A/B is a non-derivable rule, and fix an assignment v such that v(A) = 1, and v(B) = 0. Define a substitution σ such that for every variable p, σp = $\top$ if v(p) = 1, and σp = $\bot$ if v(p) = 0. Then σA is a theorem, but σB is not (in fact, ¬σB is a theorem). Thus the rule A/B is not admissible either. (The same argument applies to any multi-valued logic L complete with respect to a logical matrix all of whose elements have a name in the language of L.) - -*The Kreisel–Putnam rule (a.k.a. Harrop's rule, or independence of premise rule) -$$ -(\mathit{KPR})\qquad\frac{\neg p\to q\lor r}{(\neg p\to q)\lor(\neg p\to r)} -$$ - -is admissible in the intuitionistic propositional calculus (IPC). In fact, it is admissible in every superintuitionistic logic. On the other hand, the formula -$$ -(\neg p\to q\lor r)\to ((\neg p\to q)\lor(\neg p\to r)) -$$ - -is not an intuitionistic tautology, hence KPR is not derivable in IPC. In particular, IPC is not structurally complete. - -*The rule -$$ -\frac{\Box p}p -$$ - -is admissible in many modal logics, such as K, D, K4, S4, GL. It is derivable in S4, but it is not derivable in K, D, K4, or GL. - -*The rule -$$ -\frac{\Diamond p\land\Diamond\neg p}\bot -$$ - -is admissible in every normal modal logic. It is derivable in GL and S4.1, but it is not derivable in K, D, K4, S4, S5. - -*Löb's rule -$$ -(\mathit{LR})\qquad\frac{\Box p\to p}p -$$ - -is admissible (but not derivable) in the basic modal logic K, and it is derivable in GL. However, LR is not admissible in K4. In particular, it is not true in general that a rule admissible in a logic L must be admissible in its extensions. - -*The Gödel–Dummett logic (LC) and the modal logic Grz.3 are structurally complete. The product fuzzy logic is also structurally complete.
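The structural-completeness argument for CPC given above is effective, and easy to run mechanically. Below is a small sketch in Python; the encoding of formulas as truth functions is our own illustrative choice, not anything from the literature.

```python
from itertools import product

# Sketch of the structural-completeness argument for CPC: if A/B is not
# derivable classically, some valuation v gives v(A)=1 and v(B)=0; the
# substitution sigma with sigma(p) = T when v(p)=1 and sigma(p) = F
# otherwise then makes sigma(A) a theorem while not(sigma(B)) is a theorem,
# so A/B is not admissible either.

def refuting_valuation(A, B, nvars):
    for v in product([False, True], repeat=nvars):
        if A(*v) and not B(*v):
            return v            # defines sigma: p_i -> T if v[i] else F
    return None                 # none exists: A/B is admissible in CPC

# Kreisel-Putnam rule, using the classical simplification (not p -> X) == (p or X):
A = lambda p, q, r: p or q or r                 # premise:    not p -> q or r
B = lambda p, q, r: (p or q) or (p or r)        # conclusion: (not p -> q) or (not p -> r)
print(refuting_valuation(A, B, 3))              # None: KPR is derivable in CPC
print(refuting_valuation(lambda p, q: p or q,   # but p or q / p fails:
                         lambda p, q: p, 2))    # (False, True)
```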
- -The basic question about admissible rules of a given logic is whether the set of all admissible rules is decidable. Note that the problem is nontrivial even if the logic itself (i.e., its set of theorems) is decidable: the definition of admissibility of a rule A/B involves an unbounded universal quantifier over all propositional substitutions, hence a priori we only know that admissibility of a rule in a decidable logic is $\Pi^0_1$ (i.e., its complement is recursively enumerable). For instance, it is known that admissibility in the bimodal logics Ku and K4u (the extensions of K or K4 with the universal modality) is undecidable. Remarkably, decidability of admissibility in the basic modal logic K is a major open problem. - -Nevertheless, admissibility of rules is known to be decidable in many modal and superintuitionistic logics. The first decision procedures for admissible rules in basic transitive modal logics were constructed by Rybakov, using the reduced form of rules. A modal rule in variables p0, ..., pk is called reduced if it has the form -$$ -\frac{\bigvee_{i=0}^n\bigl(\bigwedge_{j=0}^k\neg_{i,j}^0p_j\land\bigwedge_{j=0}^k\neg_{i,j}^1\Box p_j\bigr)}{p_0}, -$$ - -where each $\neg_{i,j}^u$ is either blank, or negation $\neg$. For each rule r, we can effectively construct a reduced rule s (called the reduced form of r) such that any logic admits (or derives) r if and only if it admits (or derives) s, by introducing extension variables for all subformulas in A, and expressing the result in the full disjunctive normal form. It is thus sufficient to construct a decision algorithm for admissibility of reduced rules. - -Let $\textstyle\bigvee_{i=0}^n\varphi_i/p_0$ be a reduced rule as above. We identify every conjunction $\varphi_i$ with the set $\{\neg_{i,j}^0p_j,\neg_{i,j}^1\Box p_j\mid j\le k\}$ of its conjuncts. For any subset W of the set $\{\varphi_i\mid i\le n\}$ of all conjunctions, let us define a Kripke model $M=\langle W,R,{\Vdash}\rangle$ by -$$ -\varphi_i\Vdash p_j\iff p_j\in\varphi_i, -$$ -$$ -\varphi_iR\varphi_{i'}\iff\forall j\le k(\Box p_j\in\varphi_i\Rightarrow\{p_j,\Box p_j\}\subseteq\varphi_{i'}). -$$ - -Then the following provides an algorithmic criterion for admissibility in K4: - -Theorem. The rule $\textstyle\bigvee_{i=0}^n\varphi_i/p_0$ is not admissible in K4 if and only if there exists a set $W\subseteq\{\varphi_i\mid i\le n\}$ such that - -#$\varphi_i\nVdash p_0$ for some $i\le n,$ - -#$\varphi_i\Vdash\varphi_i$ for every $i\le n,$ - -#for every subset D of W there exist elements $\alpha,\beta\in W$ such that the equivalences -$$ -\alpha\Vdash\Box p_j -$$ if and only if $\varphi\Vdash p_j\land\Box p_j$ for every $\varphi\in D$ -$$ -\beta\Vdash\Box p_j -$$ if and only if $\beta\Vdash p_j$ and $\varphi\Vdash p_j\land\Box p_j$ for every $\varphi\in D$ - -hold for all j. - -Similar criteria can be found for the logics S4, GL, and Grz. Furthermore, admissibility in intuitionistic logic can be reduced to admissibility in Grz using the Gödel–McKinsey–Tarski translation: -$$ -A|\!\!\!\sim_{IPC}B -$$ if and only if $T(A)|\!\!\!\sim_{Grz}T(B).$ - -Rybakov (1997) developed much more sophisticated techniques for showing decidability of admissibility, which apply to a robust (infinite) class of transitive (i.e., extending K4 or IPC) modal and superintuitionistic logics, including e.g. S4.1, S4.2, S4.3, KC, Tk (as well as the above-mentioned logics IPC, K4, S4, GL, Grz).
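To make the model construction above concrete, here is a small Python sketch of the forcing and accessibility relations $\varphi_i \Vdash p_j$ and $\varphi_i R \varphi_{i'}$. The encoding of reduced-rule conjunctions as pairs of index sets is our own, not anything taken from Rybakov's papers.

```python
# Sketch of the Kripke model used in the admissibility criterion for K4.
# A conjunction phi_i over variables p_0..p_{k-1} is encoded as a pair
# (P, BP): P = set of j with p_j occurring positively in phi_i, and
# BP = set of j with box(p_j) occurring positively.  Encoding is our own.

def forces(phi, j):
    """phi_i forces p_j  iff  p_j is in phi_i."""
    P, BP = phi
    return j in P

def accessible(phi, psi):
    """phi R psi  iff  box(p_j) in phi implies {p_j, box(p_j)} in psi, for all j."""
    P, BP = psi
    return all(j in P and j in BP for j in phi[1])

# Two sample conjunctions over p_0, p_1:
phi0 = ({0}, {0})        # p_0 and not p_1 and box p_0 and not box p_1
phi1 = ({0, 1}, {0})     # p_0 and p_1 and box p_0 and not box p_1
print(accessible(phi0, phi1), accessible(phi1, phi0))   # True True
```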
- -Despite being decidable, the admissibility problem has relatively high computational complexity, even in simple logics: admissibility of rules in the basic transitive logics IPC, K4, S4, GL, Grz is coNEXP-complete. This should be contrasted with the derivability problem (for rules or formulas) in these logics, which is PSPACE-complete. - -Admissibility in propositional logics is closely related to unification in the equational theory of modal or Heyting algebras. The connection was developed by Ghilardi (1999, 2000). In the logical setup, a unifier of a formula A in a logic L (an L-unifier for short) is a substitution σ such that σA is a theorem of L. (Using this notion, we can rephrase admissibility of a rule A/B in L as "every L-unifier of A is an L-unifier of B".) An L-unifier σ is less general than an L-unifier τ, written as σ ≤ τ, if there exists a substitution υ such that -$$ -\vdash_L\sigma p\leftrightarrow \upsilon\tau p -$$ - -for every variable p. A complete set of unifiers of a formula A is a set S of L-unifiers of A such that every L-unifier of A is less general than some unifier from S. A most general unifier (mgu) of A is a unifier σ such that {σ} is a complete set of unifiers of A. It follows that if S is a complete set of unifiers of A, then a rule A/B is L-admissible if and only if every σ in S is an L-unifier of B. Thus we can characterize admissible rules if we can find well-behaved complete sets of unifiers. - -An important class of formulas which have a most general unifier are the projective formulas: these are formulas A such that there exists a unifier σ of A such that -$$ -A\vdash_L B\leftrightarrow\sigma B -$$ - -for every formula B. Note that σ is a mgu of A. In transitive modal and superintuitionistic logics with the finite model property (fmp), one can characterize projective formulas semantically as those whose set of finite L-models has the extension property: if M is a finite Kripke L-model with a root r whose cluster is a singleton, and the formula A holds in all points of M except for r, then we can change the valuation of variables in r so as to make A true in r as well. Moreover, the proof provides an explicit construction of a mgu for a given projective formula A. - -In the basic transitive logics IPC, K4, S4, GL, Grz (and more generally in any transitive logic with the fmp whose set of finite frames satisfies another kind of extension property), we can effectively construct for any formula A its projective approximation Π(A): a finite set of projective formulas such that - -#$P\vdash_L A$ for every $P\in\Pi(A),$ - -#every unifier of A is a unifier of a formula from Π(A). - -It follows that the set of mgus of elements of Π(A) is a complete set of unifiers of A. Furthermore, if P is a projective formula, then -$$ -P|\!\!\!\sim_L B -$$ if and only if $P\vdash_L B$ - -for any formula B. Thus we obtain the following effective characterization of admissible rules: -$$ -A|\!\!\!\sim_L B -$$ if and only if $\forall P\in\Pi(A)(P\vdash_L B).$ - -Let L be a logic. A set R of L-admissible rules is called a basis of admissible rules, if every admissible rule Γ/B can be derived from R and the derivable rules of L, using substitution, composition, and weakening. In other words, R is a basis if and only if $|\!\!\!\sim_L$ is the smallest structural consequence relation which includes $\vdash_L$ and R.
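The slogan "A/B is admissible iff every member of a complete set of unifiers of A unifies B" can be illustrated in miniature in the classical setting, where the ground substitutions σ : p ↦ ⊤/⊥ unifying A are exactly the satisfying assignments of A and already form a complete set. The following toy sketch (our own encoding; in IPC or modal logics one would need the projective approximation Π(A) instead) demonstrates the check:

```python
from itertools import product

# Toy illustration of the unifier-based characterization of admissibility,
# in classical logic: ground unifiers of A = satisfying assignments of A,
# and A/B is admissible iff all of them also satisfy B.

def admissible(A, B, nvars):
    unifiers = [v for v in product([0, 1], repeat=nvars) if A(*v)]
    return all(B(*v) for v in unifiers)

print(admissible(lambda p, q: p and q, lambda p, q: p, 2))   # True
print(admissible(lambda p, q: p or q,  lambda p, q: p, 2))   # False
```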
- -Notice that decidability of admissible rules of a decidable logic is equivalent to the existence of recursive (or recursively enumerable) bases: on the one hand, the set of all admissible rules is a recursive basis if admissibility is decidable. On the other hand, the set of admissible rules is always co-r.e., and if we further have an r.e. basis, it is also r.e., hence it is decidable. (In other words, we can decide admissibility of A/B by the following algorithm: we start in parallel two exhaustive searches, one for a substitution σ which unifies A but not B, and one for a derivation of A/B from R and $\vdash_L$. One of the searches has to eventually come up with an answer.) Apart from decidability, explicit bases of admissible rules are useful for some applications, e.g. in proof complexity. - -For a given logic, we can ask whether it has a recursive or finite basis of admissible rules, and, if so, to provide an explicit basis. If a logic has no finite basis, it can nevertheless have an independent basis: a basis R such that no proper subset of R is a basis. - -In general, very little can be said about existence of bases with desirable properties. For example, while tabular logics are generally well-behaved, and always finitely axiomatizable, there exist tabular modal logics without a finite or independent basis of rules. Finite bases are relatively rare: even the basic transitive logics IPC, K4, S4, GL, Grz do not have a finite basis of admissible rules, though they have independent bases. - -*The empty set is a basis of L-admissible rules if and only if L is structurally complete. - -*Every extension of the modal logic S4.3 (including, notably, S5) has a finite basis consisting of the single rule -$$ -\frac{\Diamond p\land\Diamond\neg p}\bot. -$$ - -*Visser's rules -$$ -\frac{\displaystyle\Bigl(\bigwedge_{i=1}^n(p_i\to q_i)\to p_{n+1}\lor p_{n+2}\Bigr)\lor r}{\displaystyle\bigvee_{j=1}^{n+2}\Bigl(\bigwedge_{i=1}^{n}(p_i\to q_i)\to p_j\Bigr)\lor r},\qquad n\ge 1 -$$ - -are a basis of admissible rules in IPC or KC. - -*The rules -$$ -\frac{\displaystyle\Box\Bigl(\Box q\to\bigvee_{i=1}^n\Box p_i\Bigr)\lor\Box r}{\displaystyle\bigvee_{i=1}^n\Box(q\land\Box q\to p_i)\lor r},\qquad n\ge0 -$$ - -are a basis of admissible rules of GL. (Note that the empty disjunction is defined as $\bot$.) - -*The rules -$$ -\frac{\displaystyle\Box\Bigl(\Box(q\to\Box q)\to\bigvee_{i=1}^n\Box p_i\Bigr)\lor\Box r}{\displaystyle\bigvee_{i=1}^n\Box(\Box q\to p_i)\lor r},\qquad n\ge0 -$$ - -are a basis of admissible rules of S4 or Grz. - -A rule Γ/B is valid in a modal or intuitionistic Kripke frame $F=\langle W,R\rangle$, if the following is true for every valuation $\Vdash$ in F: - -if $\forall x\in W(x\Vdash A)$ for all $A\in\Gamma$, then $\forall x\in W(x\Vdash B)$. - -(The definition readily generalizes to general frames, if needed.) - -Let X be a subset of W, and t a point in W. We say that t is - -*a reflexive tight predecessor of X, if for every y in W: t R y if and only if t = y or x = y or x R y for some x in X, - -*an irreflexive tight predecessor of X, if for every y in W: t R y if and only if x = y or x R y for some x in X. - -We say that a frame F has reflexive (irreflexive) tight predecessors, if for every finite subset X of W, there exists a reflexive (irreflexive) tight predecessor of X in W.
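As a quick illustration of these definitions, here is a naive brute-force check on a finite digraph; the graph and point choices are arbitrary sample values of our own.

```python
# A brute-force check of the tight-predecessor definitions on a finite
# digraph (W, R); R is a set of ordered pairs.  Purely illustrative.

def reflexive_tight_predecessor(t, X, W, R):
    # t R y  iff  t = y, or (x = y or x R y) for some x in X
    return all(((t, y) in R) == (t == y or any(x == y or (x, y) in R for x in X))
               for y in W)

def irreflexive_tight_predecessor(t, X, W, R):
    # t R y  iff  (x = y or x R y) for some x in X
    return all(((t, y) in R) == any(x == y or (x, y) in R for x in X)
               for y in W)

W = {0, 1, 2}
R = {(0, 0), (0, 1), (1, 2), (0, 2)}
print(reflexive_tight_predecessor(0, {1}, W, R))    # True
print(irreflexive_tight_predecessor(0, {1}, W, R))  # False (0 R 0 holds)
```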
- -We have: - -*a rule is admissible in IPC if and only if it is valid in all intuitionistic frames which have reflexive tight predecessors, - -*a rule is admissible in K4 if and only if it is valid in all transitive frames which have reflexive and irreflexive tight predecessors, - -*a rule is admissible in S4 if and only if it is valid in all transitive reflexive frames which have reflexive tight predecessors, - -*a rule is admissible in GL if and only if it is valid in all transitive converse well-founded frames which have irreflexive tight predecessors. - -Note that apart from a few trivial cases, frames with tight predecessors must be infinite, hence admissible rules in basic transitive logics do not enjoy the finite model property. - -While a general classification of structurally complete logics is not an easy task, we have a good understanding of some special cases. - -Intuitionistic logic itself is not structurally complete, but its fragments may behave differently. Namely, any disjunction-free rule or implication-free rule admissible in a superintuitionistic logic is derivable. On the other hand, the Mints rule -$$ -\frac{(p\to q)\to p\lor r}{((p\to q)\to p)\lor((p\to q)\to r)} -$$ - -is admissible in intuitionistic logic but not derivable, and contains only implications and disjunctions. - -We know the maximal structurally incomplete transitive logics. A logic is called hereditarily structurally complete, if every extension is structurally complete. For example, classical logic, as well as the logics LC and Grz.3 mentioned above, are hereditarily structurally complete. A complete description of hereditarily structurally complete superintuitionistic and transitive modal logics was given respectively by Citkin and Rybakov. Namely, a superintuitionistic logic is hereditarily structurally complete if and only if it is not valid in any of five particular Kripke frames (shown in a figure in the original article, not reproduced here). - -Similarly, an extension of K4 is hereditarily structurally complete if and only if it is not valid in any of certain twenty Kripke frames (including the five intuitionistic frames above). - -There exist structurally complete logics that are not hereditarily structurally complete: for example, Medvedev's logic is structurally complete, but it is included in the structurally incomplete logic KC. - -A rule with parameters is a rule of the form -$$ -\frac{A(p_1,\dots,p_n,s_1,\dots,s_k)}{B(p_1,\dots,p_n,s_1,\dots,s_k)}, -$$ - -whose variables are divided into the "regular" variables pi, and the parameters si. The rule is L-admissible if every L-unifier σ of A such that σsi = si for each i is also a unifier of B. The basic decidability results for admissible rules also carry to rules with parameters. - -A multiple-conclusion rule is a pair (Γ,Δ) of two finite sets of formulas, written as -$$ -\frac{A_1,\dots,A_n}{B_1,\dots,B_m}\qquad\text{or}\qquad A_1,\dots,A_n/B_1,\dots,B_m. -$$ - -Such a rule is admissible if every unifier of Γ is also a unifier of some formula from Δ. For example, a logic L is consistent iff it admits the rule -$$ -\frac{\bot}{}, -$$ - -and a superintuitionistic logic has the disjunction property iff it admits the rule -$$ -\frac{p\lor q}{p,q}. -$$ - -Again, basic results on admissible rules generalize smoothly to multiple-conclusion rules. In logics with a variant of the disjunction property, the multiple-conclusion rules have the same expressive power as single-conclusion rules: for example, in S4 the rule above is equivalent to -$$ -\frac{A_1,\dots,A_n}{\Box B_1\lor\dots\lor\Box B_m}.
-$$ - -Nevertheless, multiple-conclusion rules can often be employed to simplify arguments. - -In proof theory, admissibility is often considered in the context of sequent calculi, where the basic objects are sequents rather than formulas. For example, one can rephrase the cut-elimination theorem as saying that the cut-free sequent calculus admits the cut rule -$$ -\frac{\Gamma\vdash A,\Delta\qquad\Pi,A\vdash\Lambda}{\Gamma,\Pi\vdash\Delta,\Lambda}. -$$ - -(By abuse of language, it is also sometimes said that the (full) sequent calculus admits cut, meaning its cut-free version does.) However, admissibility in sequent calculi is usually only a notational variant for admissibility in the corresponding logic: any complete calculus for (say) intuitionistic logic admits a sequent rule if and only if IPC admits the formula rule which we obtain by translating each sequent $\Gamma\vdash\Delta$ to its characteristic formula $\bigwedge\Gamma\to\bigvee\Delta$. diff --git a/wiki/wikipedia/905.txt b/wiki/wikipedia/905.txt deleted file mode 100644 index 5323c045e6ebec63e4e737a9ef7dc69c3f9ae286..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/905.txt +++ /dev/null @@ -1,33 +0,0 @@ -In mathematics, the Quillen–Lichtenbaum conjecture is a conjecture relating étale cohomology to algebraic K-theory introduced by Quillen, who was inspired by earlier conjectures of Lichtenbaum. Kahn and Rognes proved the Quillen–Lichtenbaum conjecture at the prime 2 for some number fields. Voevodsky, using some important results of Markus Rost, has proved the Bloch–Kato conjecture, which implies the Quillen–Lichtenbaum conjecture for all primes. - -The conjecture in Quillen's original form states that if A is a finitely-generated algebra over the integers and l is prime, then there is a spectral sequence analogous to the Atiyah–Hirzebruch spectral sequence, starting at -$$ -E_2^{pq}=H^p_{\text{etale}}(\text{Spec }A[\ell^{-1}], Z_\ell(-q/2)), -$$ (which is understood to be 0 if q is odd) - -and abutting to -$$ -K_{-p-q}A\otimes Z_\ell -$$ - -for -p - q > 1 + dim A. - -Assuming the Quillen–Lichtenbaum conjecture and the Vandiver conjecture, the K-groups of the integers, Kn(Z), are given by: - -*0 if n = 0 mod 8 and n > 0, Z if n = 0 - -*Z ⊕ Z/2 if n = 1 mod 8 and n > 1, Z/2 if n = 1. - -*Z/ck ⊕ Z/2 if n = 2 mod 8 - -*Z/8dk if n = 3 mod 8 - -*0 if n = 4 mod 8 - -*Z if n = 5 mod 8 - -*Z/ck if n = 6 mod 8 - -*Z/4dk if n = 7 mod 8 - -where ck/dk is the Bernoulli number B2k/k in lowest terms and n is 4k - 1 or 4k - 2 . diff --git a/wiki/wikipedia/906.txt b/wiki/wikipedia/906.txt deleted file mode 100644 index 3c7ed68fd4a0b3bb7bdbcf67c6fa0fe54d32a6e4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/906.txt +++ /dev/null @@ -1,87 +0,0 @@ -In mathematics, the Calabi conjecture was a conjecture about the existence of certain "nice" Riemannian metrics on certain complex manifolds, made by and proved by . Yau received the Fields Medal in 1982 in part for this proof. - -The Calabi conjecture states that a compact Kähler manifold has a unique Kähler metric in the same class whose Ricci form is any given 2-form representing the first Chern class. In particular if the first Chern class vanishes there is a unique Kähler metric in the same class with vanishing Ricci curvature; these are called Calabi-Yau manifolds. 
diff --git a/wiki/wikipedia/906.txt b/wiki/wikipedia/906.txt deleted file mode 100644 index 3c7ed68fd4a0b3bb7bdbcf67c6fa0fe54d32a6e4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/906.txt +++ /dev/null @@ -1,87 +0,0 @@ -In mathematics, the Calabi conjecture was a conjecture about the existence of certain "nice" Riemannian metrics on certain complex manifolds, made by Eugenio Calabi in the 1950s and proved by Shing-Tung Yau in the late 1970s. Yau received the Fields Medal in 1982 in part for this proof. - -The Calabi conjecture states that a compact Kähler manifold has a unique Kähler metric in the same class whose Ricci form is any given 2-form representing the first Chern class. In particular if the first Chern class vanishes there is a unique Kähler metric in the same class with vanishing Ricci curvature; these are called Calabi-Yau manifolds. - -More formally, the Calabi conjecture states: - -If M is a compact Kähler manifold with Kähler metric $g$ and Kähler form $\omega$, and R is any (1,1)-form representing the manifold's first Chern class, then there exists a unique Kähler metric $\tilde{g}$ on M with Kähler form $\tilde{\omega}$ such that $\omega$ and $\tilde{\omega}$ represent the same class in cohomology $H^2(M,\R)$ and the Ricci form of $\tilde{\omega}$ is R. - -The Calabi conjecture is closely related to the question of which Kähler manifolds have Kähler-Einstein metrics. - -A conjecture closely related to the Calabi conjecture states that if a compact Kähler variety has a negative, zero, or positive first Chern class then it has a Kähler–Einstein metric in the same class as its Kähler metric, unique up to rescaling. This was proved for negative first Chern classes independently by Thierry Aubin and Shing-Tung Yau in 1976. When the Chern class is zero it was proved by Yau as an easy consequence of the Calabi conjecture. These results were never explicitly conjectured by Calabi, but would have followed from results that he announced in his 1954 talk at the International Congress of Mathematicians. - -When the first Chern class is positive, the above conjecture is actually false as a consequence of a result of Yozo Matsushima, which shows that the complex automorphism group of a Kähler–Einstein manifold of positive scalar curvature is necessarily reductive. For example, the complex projective plane blown up at 2 points has no Kähler–Einstein metric and so is a counterexample. Another problem arising from complex automorphisms is that they can lead to a lack of uniqueness for the Kähler–Einstein metric, even when it exists. However, complex automorphisms are not the only difficulty that arises in the positive case. Indeed, it was conjectured by Yau et al. that when the first Chern class is positive, a Kähler manifold admits a Kähler–Einstein metric if and only if it is K-stable. A proof of this conjecture was published by Xiuxiong Chen, Simon Donaldson and Song Sun in January 2015, and Tian gave a proof electronically published on September 16, 2015. - -On the other hand, in the special case of complex dimension two, a compact complex surface with positive first Chern class does admit a Kähler–Einstein metric if and only if its automorphism group is reductive. This important result is often attributed to Gang Tian. Since Tian's proof, there have been some simplifications and refinements of arguments involved; cf. the paper by Odaka, Spotti, and Sun cited below. The complex surfaces that admit such Kähler–Einstein metrics are therefore exactly the complex projective plane, the product of two copies of a projective line, and blowups of the projective plane in 3 to 8 points in general position. - -Calabi transformed the Calabi conjecture into a non-linear partial differential equation of complex Monge–Ampère type, and showed that this equation has at most one solution, thus establishing the uniqueness of the required Kähler metric. - -Yau proved the Calabi conjecture by constructing a solution of this equation using the continuity method. This involves first solving an easier equation, and then showing that a solution to the easy equation can be continuously deformed to a solution of the hard equation. The hardest part of Yau's solution is proving certain a priori estimates for the derivatives of solutions. - -Suppose that $M$ is a compact complex manifold with a Kähler form $\omega$.
- -Any other Kähler form in the same class is of the form -$$ -\omega+dd'\varphi -$$ - -for some smooth function $\varphi$ on $M$, unique up to addition of a constant. The Calabi conjecture is therefore equivalent to the following problem: - -Let $F=e^f$ be a positive smooth function on $M$ with average value 1. Then there is a smooth real function $\varphi$ with -$$ -(\omega+dd'\varphi)^m = e^f\omega^m -$$ - -and $\varphi$ is unique up to addition of a constant. - -This is an equation of complex Monge–Ampère type for a single function $\varphi$. - -It is a particularly hard partial differential equation to solve, as it is non-linear in the terms of highest order. It is easy to solve it when $f=0$, as $\varphi = 0 $ is a solution. The idea of the continuity method is to show that it can be solved for all $f$ by showing that the set of $f$ for which it can be solved is both open and closed. Since the set of $f$ for which it can be solved is non-empty, and the set of all $f$ is connected, this shows that it can be solved for all $f$. - -The map from smooth functions to smooth functions taking $\varphi$ to $F$ defined by -$$ -F=(\omega+dd'\varphi)^m/\omega^m -$$ - -is neither injective nor surjective. It is not injective because adding a constant to $\varphi$ does not change $F$, and it is not surjective because $F$ must be positive and have average value 1. So we consider the map restricted to functions $\varphi$ that are normalized to have average value 0, and ask if this map is an isomorphism onto the set of positive $F=e^f$ with average value 1. Calabi and Yau proved that it is indeed an isomorphism. This is done in several steps, described below. - -Proving that the solution is unique involves showing that if -$$ -(\omega+dd'\varphi_1)^m = (\omega+dd'\varphi_2)^m -$$ - -then φ1 and φ2 differ by a constant (so must be the same if they are both normalized to have average value 0). - -Calabi proved this by showing that the average value of -$$ -|d(\varphi_1-\varphi_2)|^2 -$$ - -is given by an expression that is at most 0. As it is obviously at least 0, it must be 0, so -$$ -d(\varphi_1-\varphi_2) = 0 -$$ - -which in turn forces φ1 and φ2 to differ by a constant. - -Proving that the set of possible F is open (in the set of smooth functions with average value 1) involves showing that if it is possible to solve the equation for some F, then it is possible to solve it for all sufficiently close F. Calabi proved this by using the implicit function theorem for Banach spaces: in order to apply this, the main step is to show that the linearization of the differential operator above is invertible. - -This is the hardest part of the proof, and was the part done by Yau. - -Suppose that F is in the closure of the image of possible functions φ. This means that there is a sequence of functions φ1, φ2, ... such that the corresponding functions F1, F2, ... converge to F, and the problem is to show that some subsequence of the φs converges to a solution φ. In order to do this, Yau finds some a priori bounds for the functions φi and their higher derivatives in terms of the higher derivatives of log(fi). Finding these bounds requires a long sequence of hard estimates, each improving slightly on the previous estimate. The bounds Yau gets are enough to show that the functions φi all lie in a compact subset of a suitable Banach space of functions, so it is possible to find a convergent subsequence.
This subsequence converges to a function φ with image F, which shows that the set of possible images F is closed. diff --git a/wiki/wikipedia/907.txt b/wiki/wikipedia/907.txt deleted file mode 100644 index 06ebc88dbe167c42d1ab5b90f0d69ffd0a3b0611..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/907.txt +++ /dev/null @@ -1,11 +0,0 @@ -Illumination problems are a class of mathematical problems that study the illumination of rooms with mirrored walls by point light sources. - -The original formulation was attributed to Ernst Straus in the 1950s and has been resolved. Straus asked if a room with mirrored walls can always be illuminated by a single point light source, allowing for repeated reflection of light off the mirrored walls. Alternatively, the question can be stated as asking whether, if a billiard table can be constructed in any required shape, there is a shape such that there is a point from which it is impossible to pot the billiard ball in a pocket at another point, assuming the ball is point-like and continues indefinitely rather than stopping due to friction. - -The original problem was first solved in 1958 by Roger Penrose using ellipses to form the Penrose unilluminable room. He showed there exists a room with curved walls that must always have dark regions if lit only by a single point source. This problem was also solved for polygonal rooms by George Tokarsky in 1995 for 2 and 3 dimensions, who showed there exists an unilluminable polygonal 26-sided room with a "dark spot" which is not illuminated from another point in the room, even allowing for repeated reflections. These are rare cases in which a finite number of dark points (rather than regions) are unilluminable, and only from a fixed position of the point source. - -In 1997, two different 24-sided rooms with the same properties were put forward by George Tokarsky and David Castro separately. - -In 1995, Tokarsky found the first polygonal unilluminable room which had 4 sides and two fixed boundary points. - -In 2016, Lelièvre, Monteil and Weiss showed that a light source in a polygonal room whose angles (in degrees) are all rational numbers will illuminate the entire polygon, with the possible exception of a finite number of points.
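The underlying model in all of these results is specular reflection of an idealized ray. The following toy Python sketch traces such a ray inside the unit square (a deliberately simple room with no dark points; the shape, starting point, and direction are illustrative choices of ours, not from the literature):

```python
# A toy ray-tracing sketch of the billiard/illumination model: a point-like
# ball bounces specularly inside the unit square.

def trace(pos, vel, bounces):
    x, y = pos
    vx, vy = vel
    path = [(x, y)]
    for _ in range(bounces):
        # time until the ray meets each wall x=0, x=1, y=0, y=1
        hits = []
        for wall, p, v, axis in ((0.0, x, vx, 'x'), (1.0, x, vx, 'x'),
                                 (0.0, y, vy, 'y'), (1.0, y, vy, 'y')):
            if v != 0:
                t = (wall - p) / v
                if t > 1e-12:
                    hits.append((t, axis))
        t, axis = min(hits)
        x, y = x + t * vx, y + t * vy
        if axis == 'x':
            vx = -vx          # specular reflection: flip the normal component
        else:
            vy = -vy
        path.append((x, y))
    return path

print(trace((0.2, 0.3), (1.0, 0.7), 3))
```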
diff --git a/wiki/wikipedia/908.txt b/wiki/wikipedia/908.txt deleted file mode 100644 index 045ce41a4fc6a483b22c5c95aaa2521f9bae6008..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/908.txt +++ /dev/null @@ -1,7 +0,0 @@ -Siegel's paradox is the phenomenon that uncertainty about future prices can theoretically push rational consumers to temporarily trade away their preferred consumption goods (or currency) for non-preferred goods (or currency), as part of a plan to trade back to the preferred consumption goods after prices become clearer. For example, in some models, Americans can expect to earn more American dollars on average by investing in Euros, while Europeans can expect to earn more Euros on average by investing in American dollars. The paradox was identified by economist Jeremy Siegel in 1972. - -Like the related two envelopes problem, the phenomenon is sometimes labeled a paradox because an agent can seem to trade for something of equal monetary value and yet, paradoxically, seem at the same time to gain monetary value from the trade. Closer analysis shows that the "monetary value" of the trade is ambiguous but that nevertheless such trades are often favorable, depending on the scenario. - -Economist Fischer Black gave the following illustration in 1995. Suppose that the exchange rate between an "apple" country where consumers prefer apples, and an "orange" country where consumers prefer oranges, is currently 1:1, but will change next year to 2:1 or 1:2 with equal probability. Suppose an apple consumer trades an apple to an orange consumer in exchange for an orange. The apple consumer now has given up an apple for an orange, which next year has an expected value of 1.25 apples. The orange consumer now has given up an orange for an apple, which next year has an expected value of 1.25 oranges. Thus both appear to have benefited from the exchange on average. - -While the apples and oranges are toy examples, the paradox has a real-world application to what currencies investors should choose to hold. Fischer Black concluded from analyses similar to the apple/orange example that when investing overseas, investors should not seek to hedge all their currency risk. A later analysis shows that the only possible way to avoid making risk-less money in such future-based currency exchanges is to settle on the (weighted) geometric mean of the future exchange rates, or more generally a product of the weighted geometric mean and a so-called reciprocity function. The weights of the geometric mean depend on the probability of the rates occurring in the future, while the reciprocity function can always be taken to be the unit function. What this implies, for instance, in the case of the apple/orange example above, is that the consumers should trade their products for √(2·(1/2)) = 1 units of the other product to avoid an arbitrage. This method will provide currency traders on both sides with a common exchange rate they can safely agree on. diff --git a/wiki/wikipedia/909.txt b/wiki/wikipedia/909.txt deleted file mode 100644 index 0372ea96d886a98f8ff56c223b36c71a7578e87b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/909.txt +++ /dev/null @@ -1,33 +0,0 @@ -In measure theory, Prokhorov's theorem relates tightness of measures to relative compactness (and hence weak convergence) in the space of probability measures. It is credited to the Soviet mathematician Yuri Vasilyevich Prokhorov, who considered probability measures on complete separable metric spaces. The term "Prokhorov's theorem" is also applied to later generalizations to either the direct or the inverse statements. - -Let $(S, \rho)$ be a separable metric space. - -Let $\mathcal{P}(S)$ denote the collection of all probability measures defined on $S$ (with its Borel σ-algebra). - -Theorem. - -# A collection $K\subset \mathcal{P}(S)$ of probability measures is tight if and only if the closure of $K$ is sequentially compact in the space $\mathcal{P}(S)$ equipped with the topology of weak convergence. - -# The space $\mathcal{P}(S)$ with the topology of weak convergence is metrizable. - -# Suppose that in addition, $(S,\rho)$ is a complete metric space (so that $(S,\rho)$ is a Polish space). There is a complete metric $d_0$ on $\mathcal{P}(S)$ equivalent to the topology of weak convergence; moreover, $ K\subset \mathcal{P}(S)$ is tight if and only if the closure of $ K$ in $(\mathcal{P}(S),d_0)$ is compact. - -For Euclidean spaces we have that: - -* If $ (\mu_n)$ is a tight sequence in $\mathcal{P}(\mathbb{R}^m)$ (the collection of probability measures on $m$-dimensional Euclidean space), then there exist a subsequence $(\mu_{n_k})$ and a probability measure $\mu\in\mathcal{P}(\mathbb{R}^m)$ such that $\mu_{n_k}$ converges weakly to $\mu$.
- -* If $ (\mu_n)$ is a tight sequence in $\mathcal{P}(\mathbb{R}^m)$ such that every weakly convergent subsequence $(\mu_{n_k})$ has the same limit $\mu\in\mathcal{P}(\mathbb{R}^m)$, then the sequence $(\mu_n)$ converges weakly to $\mu$. - -Prokhorov's theorem can be extended to consider complex measures or finite signed measures. - -Theorem: - -Suppose that $(S,\rho)$ is a complete separable metric space and $\Pi$ is a family of Borel complex measures on $S$. The following statements are equivalent: - -*$\Pi$ is sequentially compact; that is, every sequence $\{\mu_n\}\subset\Pi$ has a weakly convergent subsequence. - -* $\Pi$ is tight and uniformly bounded in total variation norm. - -Since Prokhorov's theorem expresses tightness in terms of compactness, the Arzelà–Ascoli theorem is often used to substitute for compactness: in function spaces, this leads to a characterization of tightness in terms of the modulus of continuity or an appropriate analogue—see tightness in classical Wiener space and tightness in Skorokhod space. - -There are several deep and non-trivial extensions to Prokhorov's theorem. However, those results do not overshadow the importance and the relevance to applications of the original result. diff --git a/wiki/wikipedia/91.txt b/wiki/wikipedia/91.txt deleted file mode 100644 index bd6e3519e3a4eee411c39986582dc2e6d74f1f2e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/91.txt +++ /dev/null @@ -1,102 +0,0 @@ -In probability theory, the central limit theorem states conditions under which the average of a sufficiently large number of independent random variables, each with finite mean and variance, will be approximately normally distributed. - -Directional statistics is the subdiscipline of statistics that deals with directions (unit vectors in Rn), axes (lines through the origin in Rn) or rotations in Rn. The means and variances of directional quantities are all finite, so that the central limit theorem may be applied to the particular case of directional statistics. - -This article will deal only with unit vectors in 2-dimensional space (R2) but the method described can be extended to the general case. - -A sample of angles $\theta_i$ are measured, and since they are indefinite to within a factor of $2\pi$, the complex definite quantity $z_i=e^{i\theta_i}=\cos(\theta_i)+i\sin(\theta_i)$ is used as the random variate. The probability distribution from which the sample is drawn may be characterized by its moments, which may be expressed in Cartesian and polar form: -$$ -m_n=E(z^n)= C_n +i S_n = R_n e^{i \theta_n} -$$ - -It follows that: -$$ -C_n=E(\cos (n\theta)) -$$ -$$ -S_n=E(\sin (n\theta)) -$$ -$$ -R_n=|E(z^n)|=\sqrt{C_n^2+S_n^2} -$$ -$$ -\theta_n=\arg(E(z^n)) -$$ - -Sample moments for N trials are: -$$ -\overline{m_n}=\frac{1}{N}\sum_{i=1}^N z_i^n =\overline{C_n} +i \overline{S_n} = \overline{R_n} e^{i \overline{\theta_n}} -$$ - -where -$$ -\overline{C_n}=\frac{1}{N}\sum_{i=1}^N\cos(n\theta_i) -$$ -$$ -\overline{S_n}=\frac{1}{N}\sum_{i=1}^N\sin(n\theta_i) -$$ -$$ -\overline{R_n}=\frac{1}{N}\sum_{i=1}^N |z_i^n| -$$ -$$ -\overline{\theta_n}=\frac{1}{N}\sum_{i=1}^N \arg(z_i^n) -$$ - -The vector [$\overline{ C_1 },\overline{ S_1 }$] may be used as a representation of the sample mean $(\overline{m_1})$ and may be taken as a 2-dimensional random variate. 
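Before stating the limit law, here is a quick numerical sketch of these sample moments in Python. The circular distribution used (a von Mises distribution with arbitrary parameters) is our own illustrative choice, and for the mean direction we take the argument of the sample mean, the usual circular-statistics convention.

```python
import cmath
import random

# Numerical sketch of the first sample moment: draw N angles, form
# z_i = exp(i*theta_i), and read off C1, S1, R1 and the mean direction.

random.seed(0)
N = 100_000
theta = [random.vonmisesvariate(0.5, 2.0) for _ in range(N)]  # mu=0.5, kappa=2

m1 = sum(cmath.exp(1j * t) for t in theta) / N
C1, S1 = m1.real, m1.imag
R1, mean_dir = abs(m1), cmath.phase(m1)
print(C1, S1, R1, mean_dir)    # mean_dir should be close to mu = 0.5
```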
The bivariate central limit theorem states that the joint probability distribution for $\overline{ C_1 }$ and $\overline{ S_1 }$ in the limit of a large number of samples is given by: -$$ -[\overline{C_1},\overline{S_1}] \xrightarrow{d} \mathcal{N}([C_1,S_1],\Sigma/N) -$$ - -where $\mathcal{N}()$ is the bivariate normal distribution and $\Sigma$ is the covariance matrix for the circular distribution: -$$ -\Sigma = \begin{bmatrix} \sigma_{CC} & \sigma_{CS} \\ \sigma_{SC} & \sigma_{SS} \end{bmatrix} -$$ -$$ -\sigma_{CC}=E(\cos^2\theta)-E(\cos\theta)^2 -$$ -$$ -\sigma_{CS}=\sigma_{SC}=E(\cos\theta\sin\theta)-E(\cos\theta)E(\sin\theta) -$$ -$$ -\sigma_{SS}=E(\sin^2\theta)-E(\sin\theta)^2 -$$ - -Note that the bivariate normal distribution is defined over the entire plane, while the mean is confined to be in the unit ball (on or inside the unit circle). This means that the integral of the limiting (bivariate normal) distribution over the unit ball will not be equal to unity, but rather approach unity as N approaches infinity. - -It is desired to state the limiting bivariate distribution in terms of the moments of the distribution. - -Using multiple angle trigonometric identities -$$ -C_2= E(\cos(2\theta)) = E(2\cos^2\theta-1)=E(1-2\sin^2\theta) -$$ -$$ -S_2= E(\sin(2\theta)) = E(2\cos\theta\sin\theta) -$$ - -It follows that: -$$ -\sigma_{CC}=E(\cos^2\theta)-E(\cos\theta)^2 =\frac{1}{2}\left(1 + C_2 - 2C_1^2\right) -$$ -$$ -\sigma_{CS}=E(\cos\theta\sin\theta)-E(\cos\theta)E(\sin\theta)=\frac{1}{2}\left(S_2 - 2 C_1 S_1 \right) -$$ -$$ -\sigma_{SS}=E(\sin^2\theta)-E(\sin\theta)^2 =\frac{1}{2}\left(1 - C_2 - 2S_1^2\right) -$$ - -The covariance matrix is now expressed in terms of the moments of the circular distribution. - -The central limit theorem may also be expressed in terms of the polar components of the mean. If $P(\overline{C_1},\overline{S_1})d\overline{C_1}d\overline{S_1}$ is the probability of finding the mean in area element $d\overline{C_1}d\overline{S_1}$, then that probability may also be written $P(\overline{R_1}\cos(\overline{\theta_1}),\overline{R_1}\sin(\overline{\theta_1}))\overline{R_1}d\overline{R_1}d\overline{\theta_1}$. diff --git a/wiki/wikipedia/910.txt b/wiki/wikipedia/910.txt deleted file mode 100644 index 8bb8807f30dceb8eedb948e49e17df217bc57890..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/910.txt +++ /dev/null @@ -1,31 +0,0 @@ -The Calogero conjecture is a minority interpretation of quantum mechanics. It is an explanation of quantization, originally put forward in 1997 and republished in 2004 by Francesco Calogero, that suggests the classical stochastic background field to which Edward Nelson attributes quantum mechanical behavior in his theory of stochastic quantization is a fluctuating space-time, and that there are further mathematical relations between the involved quantities. - -The hypothesis itself suggests that if a stochastic tremor with spatial coherence provides an action within the order of magnitude of Planck's constant, then the associated angular momentum has the same order of magnitude. Calogero himself suggests that these findings, originally based on the simplified model of the universe "are affected (and essentially, unaffected) by the possible presence in the mass of the Universe of a large component made up of particles much lighter than nucleons".
Essentially, the relation explained by Calogero can be expressed with the formulas: -$$ -h \approx \alpha G^{1/2} m^{3/2} [R(t)]^{1/2}. -$$ - -Furthermore: -$$ -h \equiv h(t) = A \sqrt{R(t)} = h_0 \sqrt{a(t)} -$$ (with $G$ and $m$ held constant) - -Where: -$$ -G -$$ represents the gravitational constant -$$ -m -$$ represents the mass of a hydrogen atom. -$$ -R(t) -$$ represents the radius of the universe accessible by gravitational interactions in time, t. -$$ -A -$$ is a dimensional constant. - -Despite its common description, it has been noted that the conjecture is not entirely defined within the realms of Nelson's stochastic mechanics, but can also be thought of as a means of inquiring into the statistical effects of interaction with distant masses in the universe and was expected by Calogero himself to be within the same order of magnitude as quantum mechanical effects. - -After the publication of Calogero's original paper, "[The] [c]osmic origin of quantization", a response was published by Giuseppe Gaeta of the University of Rome in which he discussed the compatibility of the conjecture with present bounds on variation of fundamental constants, but also outlined his focus on the modification of the relation between redshift and distance, and of the estimations attained from observations of elapsed time from the production of cosmic radiation and implications, both being related to the observed blackbody distribution of background cosmic radiation.
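The first relation is easy to check at the order-of-magnitude level. The rough numerical values below (SI gravitational constant, hydrogen-atom mass, a Hubble-scale radius) are our own illustrative inputs, not figures from Calogero's papers:

```python
import math

# Order-of-magnitude check of h ~ alpha * G**(1/2) * m**(3/2) * R**(1/2),
# ignoring the dimensionless prefactor alpha.

G = 6.674e-11        # m^3 kg^-1 s^-2
m = 1.67e-27         # kg (hydrogen atom)
R = 1.3e26           # m (order of the Hubble radius)

h_est = math.sqrt(G) * m**1.5 * math.sqrt(R)
print(h_est)         # ~6e-33 J*s, within roughly a factor of ten of
                     # Planck's constant 6.6e-34 (alpha absorbs the gap)
```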
diff --git a/wiki/wikipedia/911.txt b/wiki/wikipedia/911.txt deleted file mode 100644 index 2e2ddbcfe15ddcb7ee7243cbb0ba515d0510a3b5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/911.txt +++ /dev/null @@ -1 +0,0 @@ -GRASP is a well-known SAT instance solver. It was developed by João Marques Silva, a Portuguese computer science researcher. It stands for Generic seaRch Algorithm for the Satisfiability Problem. diff --git a/wiki/wikipedia/912.txt b/wiki/wikipedia/912.txt deleted file mode 100644 index 4b1cbb479ac8d028b6f6755db297ab156cedb12e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/912.txt +++ /dev/null @@ -1,31 +0,0 @@ -Nicholas Michael Katz (born December 7, 1943) is an American mathematician, working in arithmetic geometry, particularly on p-adic methods, monodromy and moduli problems, and number theory. He is currently a professor of Mathematics at Princeton University and an editor of the journal Annals of Mathematics. - -Katz graduated from Johns Hopkins University (BA 1964) and from Princeton University, where in 1965 he received his master's degree and in 1966 he received his doctorate under the supervision of Bernard Dwork with the thesis On the Differential Equations Satisfied by Period Matrices. After that, at Princeton, he was an instructor, an assistant professor in 1968, associate professor in 1971 and professor in 1974. From 2002 to 2005 he was the chairman of faculty there. He was also a visiting scholar at the University of Minnesota, the University of Kyoto, Paris VI, Orsay Faculty of Sciences, the Institute for Advanced Study and the IHES. While in France, he adapted methods of scheme theory and category theory to the theory of modular forms. Subsequently, he has applied geometric methods to various exponential sums. - -He was a NATO Postdoctoral Fellow from 1968 to 1969, a Sloan Fellow from 1971 to 1972, and a Guggenheim Fellow from 1975 to 1976 and from 1987 to 1988. In 1978 he was an invited speaker at the International Congress of Mathematicians in Helsinki (p-adic L functions, Serre-Tate local moduli and ratios of solutions of differential equations) and 1970 in Nice (The regularity theorem in algebraic geometry). - -Since 2003 he has been a member of the American Academy of Arts and Sciences, and since 2004 of the National Academy of Sciences. In 2003 he was awarded, together with Peter Sarnak, the Levi L. Conant Prize of the American Mathematical Society (AMS) for the essay "Zeroes of Zeta Functions and Symmetry" in the Bulletin of the American Mathematical Society. Since 2004 he has been an editor of the Annals of Mathematics. - -He played a significant role as a sounding-board for Andrew Wiles when Wiles was developing in secret his proof of Fermat's Last Theorem. Mathematician and cryptographer Neal Koblitz was one of Katz's students. - -Katz studied, with Sarnak among others, the connection of the eigenvalue distribution of large random matrices of classical groups to the distribution of the distances of the zeros of various L and zeta functions in algebraic geometry. He also studied trigonometric sums (Gauss sums) with algebro-geometric methods. - -He introduced the Katz–Lang finiteness theorem. - -* Gauss sums, Kloosterman sums, and monodromy groups. Annals of Mathematics Studies, Princeton 1988. - -* Exponential sums and differential equations. Annals of Mathematics Studies, Princeton 1990. - -* Rigid Local Systems. Annals of Mathematics Studies, Princeton 1996. - -* Twisted $L$-functions and Monodromy. Annals of Mathematics Studies, Princeton 2002. - -* Moments, Monodromy, and Perversity. A Diophantine Perspective. Annals of Mathematics Studies, Princeton 2005. - -* Annals of Mathematics Studies, Princeton 2012. - -* With Barry Mazur: Arithmetic Moduli of elliptic curves. Princeton 1985. - -* With Peter Sarnak: Random Matrices, Frobenius Eigenvalues, and Monodromy. AMS Colloquium Publications 1998. - -* With Peter Sarnak: "Zeroes of Zeta Functions and Symmetry". Bulletin of the AMS, Vol. 36, 1999, pp. 1-26. diff --git a/wiki/wikipedia/913.txt b/wiki/wikipedia/913.txt deleted file mode 100644 index 9ab4d295c506ef912385c13537122515b6a42674..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/913.txt +++ /dev/null @@ -1,13 +0,0 @@ -In computational geometry, a constrained Delaunay triangulation is a generalization of the Delaunay triangulation that forces certain required segments into the triangulation as edges, unlike the Delaunay triangulation itself which is based purely on the position of a given set of vertices without regard to how they should be connected by edges. It can be computed efficiently and has applications in geographic information systems and in mesh generation. - -The input to the constrained Delaunay triangulation problem is a planar straight-line graph, a set of points and non-crossing line segments in the plane. - -The constrained Delaunay triangulation of this input is a triangulation of its convex hull, including all of the input segments as edges, and using only the vertices of the input. For every additional edge $e$ added to this input to make it into a triangulation, there should exist a circle through the endpoints of $e$, such that any vertex interior to the circle is blocked from visibility from at least one endpoint of $e$ by a segment of the input. This generalizes the defining property of two-dimensional Delaunay triangulations of points, that each edge have a circle through its two endpoints containing no other vertices. A triangulation satisfying these properties always exists. - -Jonathan Shewchuk has generalized this definition to constrained Delaunay triangulations of three-dimensional inputs, systems of points and non-crossing segments and triangles in three-dimensional space; however, not every input of this type has a constrained Delaunay triangulation according to his generalized definition. - -Several algorithms for computing constrained Delaunay triangulations of planar straight-line graphs in time $O(n\log n)$ are known. The constrained Delaunay triangulation of a simple polygon can be constructed in linear time. - -In topographic surveying, one constructs a triangulation from points shot in the field. If an edge of the triangulation crosses a river, the resulting surface does not accurately model the path of the river. So one draws breaklines along rivers, edges of roads, mountain ridges, and the like. The breaklines are used as constraints when constructing the triangulation. - -Constrained Delaunay triangulation can also be used in Delaunay refinement methods for mesh generation, as a way to force the mesh to conform with the domain boundaries as it is being refined.
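The "circle through two endpoints containing no other vertices" property rests on the classical incircle predicate of computational geometry. A minimal self-contained Python sketch of that predicate (not a full constrained-Delaunay implementation) with sample points of our own:

```python
# Incircle predicate: d lies strictly inside the circumcircle of triangle
# (a, b, c), with (a, b, c) in counterclockwise order, iff det > 0.

def in_circumcircle(a, b, c, d):
    m = []
    for px, py in (a, b, c):
        dx, dy = px - d[0], py - d[1]
        m.append([dx, dy, dx * dx + dy * dy])
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return det > 0

print(in_circumcircle((0, 0), (1, 0), (0, 1), (0.5, 0.5)))  # True (inside)
print(in_circumcircle((0, 0), (1, 0), (0, 1), (2, 2)))      # False (outside)
```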
diff --git a/wiki/wikipedia/914.txt b/wiki/wikipedia/914.txt deleted file mode 100644 index eb6c942b1072b462a5729609b9533d6fc1753aaa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/914.txt +++ /dev/null @@ -1,6 +0,0 @@ -In mathematics, the Rothe–Hagen identity is a mathematical identity valid for all complex numbers $x, y, z$ except where its denominators vanish: -$$ -\sum_{k=0}^n\frac{x}{x+kz}{x+kz \choose k}\frac{y}{y+(n-k)z}{y+(n-k)z \choose n-k}=\frac{x+y}{x+y+nz}{x+y+nz \choose n}. -$$ - -It is a generalization of Vandermonde's identity, and is named after Heinrich August Rothe and Johann Georg Hagen.
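The identity is stated for generalized binomial coefficients $\binom{t}{k} = t(t-1)\cdots(t-k+1)/k!$, so it can be checked exactly with rational arithmetic. A quick Python sanity check at sample rational values (chosen by us so that no denominator vanishes):

```python
from fractions import Fraction
from math import factorial

def binom(t, k):
    """Generalized binomial coefficient C(t, k) for rational t."""
    p = Fraction(1)
    for i in range(k):
        p *= t - i
    return p / factorial(k)

def lhs(x, y, z, n):
    return sum(x / (x + k * z) * binom(x + k * z, k)
               * y / (y + (n - k) * z) * binom(y + (n - k) * z, n - k)
               for k in range(n + 1))

def rhs(x, y, z, n):
    return (x + y) / (x + y + n * z) * binom(x + y + n * z, n)

x, y, z = Fraction(3), Fraction(5), Fraction(2)
print(all(lhs(x, y, z, n) == rhs(x, y, z, n) for n in range(8)))   # True
```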
diff --git a/wiki/wikipedia/915.txt b/wiki/wikipedia/915.txt deleted file mode 100644 index 69a154b1c37794c00d830d92cfdea112aef42eb4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/915.txt +++ /dev/null @@ -1,42 +0,0 @@ -Gordan's lemma is a lemma in convex geometry and algebraic geometry. It can be stated in several ways. - -* Let $A$ be a matrix of integers. Let $M$ be the set of non-negative integer solutions of $A \cdot x = 0$. Then there exists a finite subset of vectors in $M$, such that every element of $M$ is a linear combination of these vectors with non-negative integer coefficients. - -* The semigroup of integral points in a rational convex polyhedral cone is finitely generated. - -* An affine toric variety is an algebraic variety (this follows from the fact that the prime spectrum of the semigroup algebra of such a semigroup is, by definition, an affine toric variety). - -The lemma is named after the mathematician Paul Gordan (1837–1912). Some authors have misspelled it as "Gordon's lemma". - -There are topological and algebraic proofs. - -Let $\sigma$ be the dual cone of the given rational polyhedral cone. Let $u_1, \dots, u_r$ be integral vectors so that $\sigma = \{ x \mid \langle u_i, x \rangle \ge 0, 1 \le i \le r \}.$ Then the $u_i$'s generate the dual cone $\sigma^{\vee}$; indeed, writing C for the cone generated by $u_i$'s, we have: $\sigma \subset C^{\vee}$, and this inclusion must be an equality. Now, if x is in the semigroup -$$ -S_\sigma = \sigma^\vee \cap \mathbb{Z}^d, -$$ - -then it can be written as -$$ -x = \sum_i n_i u_i + \sum_i r_i u_i, -$$ - -where $n_i$ are nonnegative integers and $0 \le r_i \le 1$. But since x and the first sum on the right-hand side are integral, the second sum is a lattice point in a bounded region, and so there are only finitely many possibilities for the second sum (the topological reason). Hence, $S_{\sigma}$ is finitely generated. - -The proof is based on the fact that a semigroup S is finitely generated if and only if its semigroup algebra $\mathbb{C}[S]$ is a finitely generated algebra over $\mathbb{C}$. To prove Gordan's lemma, by induction (cf. the proof above), it is enough to prove the following statement: for any unital subsemigroup S of $\mathbb{Z}^d$, - -If S is finitely generated, then $S^+ = S \cap \{ x \mid \langle x, v \rangle \ge 0 \}$, v an integral vector, is finitely generated. - -Put $A = \mathbb{C}[S]$, which has a basis $\chi^a, a \in S$. It has $\mathbb{Z}$-grading given by -$$ -A_n = \operatorname{span} \{ \chi^a \mid a \in S, \langle a, v \rangle = n \} -$$. - -By assumption, A is finitely generated and thus is Noetherian. It follows from the algebraic lemma below that $\mathbb{C}[S^+] = \oplus_0^\infty A_n$ is a finitely generated algebra over $A_0$. Now, the semigroup $S_0 = S \cap \{ x \mid \langle x, v \rangle = 0 \}$ is the image of S under a linear projection, thus finitely generated, and so $A_0 = \mathbb{C}[S_0]$ is finitely generated. Hence, $S^+$ is finitely generated. - -Lemma: Let A be a $\mathbb{Z}$-graded ring. If A is a Noetherian ring, then $A^+ = \oplus_0^{\infty} A_n$ is a finitely generated $A_0$-algebra. - -Proof: Let I be the ideal of A generated by all homogeneous elements of A of positive degree. Since A is Noetherian, I is actually generated by finitely many $f_i$'s, homogeneous of positive degree. If f is homogeneous of positive degree, then we can write $f = \sum_i g_i f_i$ with $g_i$ homogeneous. If f has sufficiently large degree, then each $g_i$ has positive degree strictly less than that of f. Also, each degree piece $A_n$ is a finitely generated $A_0$-module. (Proof: Let $N_i$ be an increasing chain of finitely generated submodules of $A_n$ with union $A_n$. Then the chain of the ideals $N_i A$ stabilizes in finite steps; so does the chain $N_i = N_i A \cap A_n.$) Thus, by induction on degree, we see $A^+$ is a finitely generated $A_0$-algebra. - -A multi-hypergraph over a certain set $V $ is a multiset of subsets of $V $ (it is called a "multi-hypergraph" since each hyperedge may appear more than once). A multi-hypergraph is called regular if all vertices have the same degree. It is called decomposable if it has a proper nonempty subset that is regular too. For any integer n, let $D(n) $ be the maximum degree of an indecomposable multi-hypergraph on n vertices. Gordan's lemma implies that $D(n) $ is finite. Proof: for each subset S of vertices, define a variable xS (a non-negative integer). Define another variable d (a non-negative integer). Consider the following set of n equations (one equation per vertex): $\sum_{S\ni v} x_S - d = 0 \text{ for all } v\in V$. Every solution (x,d) denotes a regular multi-hypergraph on $V $, where x defines the hyperedges and d is the degree. By Gordan's lemma, the set of solutions is generated by a finite set of solutions, i.e., there is a finite set $M$ of multi-hypergraphs, such that each regular multi-hypergraph is a linear combination of some elements of $M$. Every non-decomposable multi-hypergraph must be in $M$ (since by definition, it cannot be generated by other multi-hypergraphs). Hence, the set of non-decomposable multi-hypergraphs is finite.
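For small systems the finite generating set in the first formulation can be found by brute force. The following Python sketch enumerates nonnegative integer solutions of $A \cdot x = 0$ up to an arbitrary bound and keeps the componentwise-minimal ones; the matrix and bound are illustrative choices of ours, and this is not an efficient Hilbert-basis algorithm (in general the bound must be large enough to contain all generators).

```python
from itertools import product

A = [[1, 1, -1]]                      # solutions satisfy x0 + x1 = x2
BOUND = 5

def is_solution(x):
    return all(sum(a * xi for a, xi in zip(row, x)) == 0 for row in A)

solutions = [x for x in product(range(BOUND + 1), repeat=3)
             if any(x) and is_solution(x)]

def dominates(x, y):                  # y <= x componentwise, y != x
    return x != y and all(yi <= xi for xi, yi in zip(x, y))

generators = [x for x in solutions
              if not any(dominates(x, y) for y in solutions)]
print(generators)                     # [(0, 1, 1), (1, 0, 1)]
```

Indeed, every nonnegative solution of $x_0 + x_1 = x_2$ is a nonnegative integer combination of $(0,1,1)$ and $(1,0,1)$, as the lemma predicts.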
diff --git a/wiki/wikipedia/916.txt b/wiki/wikipedia/916.txt deleted file mode 100644 index dab74ea4540001df71a2966358f34e23353d4415..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/916.txt +++ /dev/null @@ -1,71 +0,0 @@ -In science and engineering, a power level and a field level (also called a root-power level) are logarithmic measures of certain quantities referenced to a standard reference value of the same type. - -* A power level is a logarithmic quantity used to measure power, power density or sometimes energy, with commonly used unit decibel (dB). - -* A field level (or root-power level) is a logarithmic quantity used to measure quantities of which the square is typically proportional to power (for instance, the square of voltage is proportional to power by the inverse of the conductor's resistance), etc., with commonly used units neper (Np) or decibel (dB). - -The type of level and choice of units indicate the scaling of the logarithm of the ratio between the quantity and its reference value, though a logarithm may be considered to be a dimensionless quantity. The reference values for each type of quantity are often specified by international standards. - -Power and field levels are used in electronic engineering, telecommunications, acoustics and related disciplines. Power levels are used for signal power, noise power, sound power, sound exposure, etc. Field levels are used for voltage, current, sound pressure. - -The level of a power quantity, denoted LP, is defined by -$$ -L_P = \frac{1}{2} \log_{\mathrm{e}}\!\left(\frac{P}{P_0}\right)\!~\mathrm{Np} = \log_{10}\!\left(\frac{P}{P_0}\right)\!~\mathrm{B} = 10 \log_{10}\!\left(\frac{P}{P_0}\right)\!~\mathrm{dB}. -$$ - -where - -*P is the power quantity; - -*P0 is the reference value of P. - -The level of a root-power quantity (also known as a field quantity), denoted LF, is defined by -$$ -L_F = \log_{\mathrm{e}}\!\left(\frac{F}{F_0}\right)\!~\mathrm{Np} = 2 \log_{10}\!\left(\frac{F}{F_0}\right)\!~\mathrm{B} = 20 \log_{10}\!\left(\frac{F}{F_0}\right)\!~\mathrm{dB}. -$$ - -where - -*F is the root-power quantity, proportional to the square root of the power quantity; - -*F0 is the reference value of F. - -If the power quantity P is proportional to F^2, and if the reference value of the power quantity, P0, is in the same proportion to F_0^2, the levels LF and LP are equal. - -The neper, bel, and decibel (one tenth of a bel) are units of level that are often applied to such quantities as power, intensity, or gain. The neper, bel, and decibel are related by - -*1 B = (1/2) log_e 10 Np; - -*1 dB = 0.1 B = (1/20) log_e 10 Np. - -Level and its units are defined in ISO 80000-3. - -The ISO standard defines each of the quantities power level and field level to be dimensionless, with 1 Np = 1. This is motivated by simplifying the expressions involved, as in systems of natural units. - -Power and field quantities are part of a larger class, logarithmic ratio quantities. - -ANSI/ASA S1.1-2013 defines a class of quantities it calls levels. It defines a level of a quantity Q, denoted LQ, as -$$ -L_Q = \log_r\!\left(\frac{Q}{Q_0}\right)\!, -$$ - -where - -*r is the base of the logarithm; - -*Q is the quantity; - -*Q0 is the reference value of Q. - -For the level of a root-power quantity, the base of the logarithm is r = e. - -For the level of a power quantity, the base of the logarithm is r = e^2. - -The frequency level of a frequency f is the logarithm of the ratio of f to a reference frequency f_0.
The reference frequency is C_0, four octaves below middle C. - -In electronics, the octave (oct) is used as a unit with logarithm base 2, and the decade (dec) is used as a unit with logarithm base 10: -$$ -L_f = \log_2 \!\left( \frac{f}{f_0} \right) ~\text{oct} = \log_{10} \!\left( \frac{f}{f_0} \right) ~\text{dec}. -$$ - -In music theory, the octave is a unit used with logarithm base 2 (called interval). A semitone is one twelfth of an octave. A cent is one hundredth of a semitone. diff --git a/wiki/wikipedia/917.txt b/wiki/wikipedia/917.txt deleted file mode 100644 index bcac36a8683b486bfe38de8c940f694fad8ce466..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/917.txt +++ /dev/null @@ -1,64 +0,0 @@ -In mathematics and logic, an axiomatic system is any set of axioms from which some or all axioms can be used in conjunction to logically derive theorems. A theory is a consistent, relatively-self-contained body of knowledge which usually contains an axiomatic system and all its derived theorems. An axiomatic system that is completely described is a special kind of formal system. A formal theory is an axiomatic system (usually formulated within model theory) that describes a set of sentences that is closed under logical implication. A formal proof is a complete rendition of a mathematical proof within a formal system. - -An axiomatic system is said to be consistent if it lacks contradiction. That is, it is impossible to derive both a statement and its negation from the system's axioms. Consistency is a key requirement for most axiomatic systems, as the presence of contradiction would allow any statement to be proven (principle of explosion). - -In an axiomatic system, an axiom is called independent if it cannot be proven or disproven from other axioms in the system. A system is called independent if each of its underlying axioms is independent. Unlike consistency, independence is not a necessary requirement for a functioning axiomatic system — though it is usually sought after to minimize the number of axioms in the system. - -An axiomatic system is called complete if for every statement, either itself or its negation is derivable from the system's axioms (equivalently, every statement is capable of being proven true or false). - -Beyond consistency, relative consistency is also the mark of a worthwhile axiom system. This describes the scenario where the undefined terms of a first axiom system are provided definitions from a second, such that the axioms of the first are theorems of the second. - -A good example is the relative consistency of absolute geometry with respect to the theory of the real number system. Lines and points are undefined terms (also called primitive notions) in absolute geometry, but assigned meanings in the theory of real numbers in a way that is consistent with both axiom systems. - -A model for an axiomatic system is a well-defined set, which assigns meaning for the undefined terms presented in the system, in a manner that is correct with the relations defined in the system. The existence of a concrete model proves the consistency of a system. A model is called concrete if the meanings assigned are objects and relations from the real world, as opposed to an abstract model which is based on other axiomatic systems. - -Models can also be used to show the independence of an axiom in the system. 
By constructing a valid model for a subsystem without a specific axiom, we show that the omitted axiom is independent if its correctness does not necessarily follow from the subsystem. - -Two models are said to be isomorphic if a one-to-one correspondence can be found between their elements, in a manner that preserves their relationship. An axiomatic system for which every model is isomorphic to another is called categorial (sometimes categorical). The property of categoriality (categoricity) ensures the completeness of a system; however, the converse is not true: Completeness does not ensure the categoriality (categoricity) of a system, since two models can differ in properties that cannot be expressed by the semantics of the system. - -As an example, observe the following axiomatic system, based on first-order logic with additional semantics of the following countably infinitely many axioms added (these can be easily formalized as an axiom schema): -$$ -\exists x_1: \exists x_2: \lnot (x_1=x_2) -$$ (informally, there exist two different items). -$$ -\exists x_1: \exists x_2: \exists x_3: \lnot (x_1=x_2) \land \lnot (x_1=x_3) \land \lnot (x_2=x_3) -$$ (informally, there exist three different items). -$$ -... -$$ - -Informally, this infinite set of axioms states that there are infinitely many different items. However, the concept of an infinite set cannot be defined within the system — let alone the cardinality of such a set. - -The system has at least two different models – one is the natural numbers (isomorphic to any other countably infinite set), and another is the real numbers (isomorphic to any other set with the cardinality of the continuum). In fact, it has an infinite number of models, one for each cardinality of an infinite set. However, the property distinguishing these models is their cardinality — a property which cannot be defined within the system. Thus the system is not categorial. However, it can be shown to be complete. - -Stating definitions and propositions in a way such that each new term can be formally eliminated by previously introduced terms requires primitive notions (axioms) to avoid infinite regress. This way of doing mathematics is called the axiomatic method. - -A common attitude towards the axiomatic method is logicism. In their book Principia Mathematica, Alfred North Whitehead and Bertrand Russell attempted to show that all mathematical theory could be reduced to some collection of axioms. More generally, the reduction of a body of propositions to a particular collection of axioms underlies the mathematician's research program. This was very prominent in the mathematics of the twentieth century, in particular in subjects based around homological algebra. - -The explication of the particular axioms used in a theory can help to clarify a suitable level of abstraction that the mathematician would like to work with. For example, mathematicians decided that rings need not be commutative, which differed from Emmy Noether's original formulation. Mathematicians decided to consider topological spaces more generally without the separation axiom which Felix Hausdorff originally formulated. - -The Zermelo-Fraenkel set theory, a result of the axiomatic method applied to set theory, allowed the "proper" formulation of set-theory problems and helped avoid the paradoxes of naïve set theory. One such problem was the continuum hypothesis.
Zermelo–Fraenkel set theory, with the historically controversial axiom of choice included, is commonly abbreviated ZFC, where "C" stands for "choice". Many authors use ZF to refer to the axioms of Zermelo–Fraenkel set theory with the axiom of choice excluded. Today ZFC is the standard form of axiomatic set theory and as such is the most common foundation of mathematics. - -Mathematical methods developed to some degree of sophistication in ancient Egypt, Babylon, India, and China, apparently without employing the axiomatic method. - -Euclid of Alexandria authored the earliest extant axiomatic presentation of Euclidean geometry and number theory. Many axiomatic systems were developed in the nineteenth century, including non-Euclidean geometry, the foundations of real analysis, Cantor's set theory, Frege's work on foundations, and Hilbert's 'new' use of the axiomatic method as a research tool. For example, group theory was first put on an axiomatic basis towards the end of that century. Once the axioms were clarified (that inverse elements should be required, for example), the subject could proceed autonomously, without reference to the transformation group origins of those studies. - -Not every consistent body of propositions can be captured by a describable collection of axioms. In recursion theory, a collection of axioms is called recursive if a computer program can recognize whether a given proposition in the language is a theorem. Gödel's first incompleteness theorem then tells us that there are certain consistent bodies of propositions with no recursive axiomatization. Typically, the computer can recognize the axioms and logical rules for deriving theorems, and the computer can recognize whether a proof is valid, but determining whether a proof exists for a statement is soluble only by "waiting" for the proof or disproof to be generated. The result is that one will not know which propositions are theorems and the axiomatic method breaks down. An example of such a body of propositions is the theory of the natural numbers, which is only partially axiomatized by the Peano axioms (described below). - -In practice, not every proof is traced back to the axioms. At times, it is not even clear which collection of axioms a proof appeals to. For example, a number-theoretic statement might be expressible in the language of arithmetic (i.e. the language of the Peano axioms) and a proof might be given that appeals to topology or complex analysis. It might not be immediately clear whether another proof can be found that derives solely from the Peano axioms. - -Any more-or-less arbitrarily chosen system of axioms is the basis of some mathematical theory, but such an arbitrary axiomatic system will not necessarily be free of contradictions, and even if it is, it is not likely to shed light on anything. Philosophers of mathematics sometimes assert that mathematicians choose axioms "arbitrarily", but it is possible that although they may appear arbitrary when viewed only from the point of view of the canons of deductive logic, that appearance is due to a limitation on the purposes that deductive logic serves. - -The mathematical system of natural numbers 0, 1, 2, 3, 4, ... is based on an axiomatic system first devised by the mathematician Giuseppe Peano in 1889. He chose the axioms, in the language of a single unary function symbol S (short for "successor"), for the set of natural numbers to be: - -* There is a natural number 0. - -* Every natural number a has a successor, denoted by Sa.
- -* There is no natural number whose successor is 0. - -* Distinct natural numbers have distinct successors: if a ≠ b, then Sa ≠ Sb. - -* If a property is possessed by 0 and also by the successor of every natural number it is possessed by, then it is possessed by all natural numbers ("Induction axiom"). - -In mathematics, axiomatization is the process of taking a body of knowledge and working backwards towards its axioms. It is the formulation of a system of statements (i.e. axioms) that relate a number of primitive terms — in order that a consistent body of propositions may be derived deductively from these statements. Thereafter, the proof of any proposition should be, in principle, traceable back to these axioms. diff --git a/wiki/wikipedia/918.txt b/wiki/wikipedia/918.txt deleted file mode 100644 index 59d9a84b44df1d997deb6c6dd8d8f7c0fec770e0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/918.txt +++ /dev/null @@ -1,79 +0,0 @@ -In coding theory, the Singleton bound, named after Richard Collom Singleton, is a relatively crude upper bound on the size of an arbitrary block code $C$ with block length $n$, size $M$ and minimum distance $d$. It is also known as the Joshi bound, proved by Joshi and even earlier by Komamiya. - -The minimum distance of a set $C$ of codewords of length $n$ is defined as -$$ -d = \min_{\{x,y \in C : x \neq y\}} d(x,y) -$$ - -where $d(x,y)$ is the Hamming distance between $x$ and $y$. The expression $A_{q}(n,d)$ represents the maximum number of possible codewords in a $q$-ary block code of length $n$ and minimum distance $d$. - -Then the Singleton bound states that -$$ -A_q(n,d) \leq q^{n-d+1}. -$$ - -First observe that the number of $q$-ary words of length $n$ is $q^n$, since each letter in such a word may take one of $q$ different values, independently of the remaining letters. - -Now let $C$ be an arbitrary $q$-ary block code of minimum distance $d$. Clearly, all codewords $c \in C$ are distinct. If we puncture the code by deleting the first $d-1$ letters of each codeword, then all resulting codewords must still be pairwise different, since all of the original codewords in $C$ have Hamming distance at least $d$ from each other. Thus the size of the altered code is the same as that of the original code. - -The newly obtained codewords each have length -$$ -n-(d-1)=n-d+1 -$$, - -and thus, there can be at most $q^{n-d+1}$ of them. Since $C$ was arbitrary, this bound must hold for the largest possible code with these parameters, thus: -$$ -|C| \le A_q(n,d) \leq q^{n-d+1}. -$$ - -If $C$ is a linear code with block length $n$, dimension $k$ and minimum distance $d$ over the finite field with $q$ elements, then the maximum number of codewords is $q^k$ and the Singleton bound implies: -$$ -q^k \leq q^{n-d+1} -$$, - -so that -$$ -k \leq n - d + 1 -$$, - -which is usually written as -$$ -d \leq n - k + 1 -$$. - -In the linear code case a different proof of the Singleton bound can be obtained by observing that the rank of the parity check matrix is $n - k$. Another simple proof follows from observing that the rows of any generator matrix in standard form have weight at most $n - k + 1$. - -The usual citation given for this result is Singleton, but it was proven earlier by Joshi. According to Welsh, the result can be found in a 1953 paper of Komamiya. - -Linear block codes that achieve equality in the Singleton bound are called MDS (maximum distance separable) codes.
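As a concrete check of both the bound and the puncturing step in the proof, here is a small sketch of ours (the single parity-check code used below is one of the trivial MDS examples mentioned next):

```python
from itertools import product

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

# Single parity-check code of length 4 over F_2: all even-weight words.
# Parameters [n=4, k=3, d=2]; Singleton gives |C| <= q^(n-d+1) = 2^3 = 8.
code = [w for w in product([0, 1], repeat=4) if sum(w) % 2 == 0]
n = 4
d = min(hamming(x, y) for x in code for y in code if x != y)
assert d == 2 and len(code) == 2 ** (n - d + 1)   # bound met with equality

# The puncturing step of the proof: delete the first d-1 letters of each
# codeword; the shortened codewords are still pairwise distinct.
punctured = {w[d - 1:] for w in code}
assert len(punctured) == len(code)
```

Both assertions pass, illustrating that this code attains the Singleton bound and that puncturing loses no codewords.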
Examples of such codes include codes that have only two codewords (the all-zero word and the all-one word, having thus minimum distance $n$), codes that use the whole of $(\mathbb{F}_{q})^{n}$ (minimum distance 1), codes with a single parity symbol (minimum distance 2) and their dual codes. These are often called trivial MDS codes. - -In the case of binary alphabets, only trivial MDS codes exist. - -Examples of non-trivial MDS codes include Reed-Solomon codes and their extended versions. - -MDS codes are an important class of block codes since, for a fixed $n$ and $k$, they have the greatest error correcting and detecting capabilities. There are several ways to characterize MDS codes: - -Theorem: Let $C$ be a linear [$n,k,d$] code over $\mathbb{F}_q$. The following are equivalent: - -* $C$ is an MDS code. - -* Any $k$ columns of a generator matrix for $C$ are linearly independent. - -* Any $n-k$ columns of a parity check matrix for $C$ are linearly independent. - -* $C^{\perp}$ is an MDS code. - -* If $G = (I|A)$ is a generator matrix for $C$ in standard form, then every square submatrix of $A$ is nonsingular. - -* Given any $d$ coordinate positions, there is a (minimum weight) codeword whose support is precisely these positions. - -The last of these characterizations permits, by using the MacWilliams identities, an explicit formula for the complete weight distribution of an MDS code. - -Theorem: Let $C$ be a linear [$n,k,d$] MDS code over $\mathbb{F}_q$. If $A_w$ denotes the number of codewords in $C$ of weight $w$, then -$$ -A_w = \binom{n}{w} \sum_{j=0}^{w-d} (-1)^j \binom{w}{j} (q^{w-d+1-j} -1) = \binom{n}{w}(q-1)\sum_{j=0}^{w-d} (-1)^j \binom{w-1}{j}q^{w-d-j}. -$$ - -The linear independence of the columns of a generator matrix of an MDS code permits a construction of MDS codes from objects in finite projective geometry. Let $PG(N,q)$ be the finite projective space of (geometric) dimension $N$ over the finite field $\mathbb{F}_q$. Let $K = \{P_1,P_2,\dots,P_m \}$ be a set of points in this projective space represented with homogeneous coordinates. Form the $(N+1) \times m$ matrix $G$ whose columns are the homogeneous coordinates of these points. Then, - -Theorem: $K$ is a (spatial) $m$-arc if and only if $G$ is the generator matrix of an $[m,N+1,m-N]$ MDS code over $\mathbb{F}_q$. diff --git a/wiki/wikipedia/919.txt b/wiki/wikipedia/919.txt deleted file mode 100644 index a96627182106c72fa0136a0de18af0f224958d9c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/919.txt +++ /dev/null @@ -1,43 +0,0 @@ -In computability theory and computational complexity theory, an undecidable problem is a decision problem for which it is proved to be impossible to construct an algorithm that always leads to a correct yes-or-no answer. The halting problem is an example: it can be proven that there is no algorithm that correctly determines whether arbitrary programs eventually halt when run. - -A decision problem is any arbitrary yes-or-no question on an infinite set of inputs. Because of this, it is traditional to define the decision problem equivalently as the set of inputs for which the problem returns yes. These inputs can be natural numbers, but also other values of some other kind, such as strings of a formal language. Using some encoding, such as a Gödel numbering, the strings can be encoded as natural numbers. Thus, a decision problem informally phrased in terms of a formal language is also equivalent to a set of natural numbers. 
To keep the formal definition simple, it is phrased in terms of subsets of the natural numbers. - -Formally, a decision problem is a subset of the natural numbers. The corresponding informal problem is that of deciding whether a given number is in the set. A decision problem A is called decidable or effectively solvable if A is a recursive set and undecidable otherwise. A problem is called partially decidable, semi-decidable, solvable, or provable if A is a recursively enumerable set. - -In computability theory, the halting problem is a decision problem which can be stated as follows: - -Given the description of an arbitrary program and a finite input, decide whether the program finishes running or will run forever. - -Alan Turing proved in 1936 that a general algorithm running on a Turing machine that solves the halting problem for all possible program-input pairs necessarily cannot exist. Hence, the halting problem is undecidable for Turing machines. - -The concepts raised by Gödel's incompleteness theorems are very similar to those raised by the halting problem, and the proofs are quite similar. In fact, a weaker form of the First Incompleteness Theorem is an easy consequence of the undecidability of the halting problem. This weaker form differs from the standard statement of the incompleteness theorem by asserting that an axiomatization of the natural numbers that is both complete and sound is impossible. The "sound" part is the weakening: it means that we require the axiomatic system in question to prove only true statements about natural numbers. Since soundness implies consistency, this weaker form can be seen as a corollary of the strong form. It is important to observe that the statement of the standard form of Gödel's First Incompleteness Theorem is completely unconcerned with the truth value of a statement, but only concerns the issue of whether it is possible to find it through a mathematical proof. - -The weaker form of the theorem can be proved from the undecidability of the halting problem as follows. Assume that we have a sound (and hence consistent) and complete axiomatization of all true first-order logic statements about natural numbers. Then we can build an algorithm that enumerates all these statements. This means that there is an algorithm N(n) that, given a natural number n, computes a true first-order logic statement about natural numbers, and that for all true statements, there is at least one n such that N(n) yields that statement. Now suppose we want to decide if the algorithm with representation a halts on input i. We know that this statement can be expressed with a first-order logic statement, say H(a, i). Since the axiomatization is complete it follows that either there is an n such that N(n) = H(a, i) or there is an n' such that N(n') = ¬ H(a, i). So if we iterate over all n until we either find H(a, i) or its negation, we will always halt, and furthermore, the answer it gives us will be true (by soundness). This means that this gives us an algorithm to decide the halting problem. Since we know that there cannot be such an algorithm, it follows that the assumption that there is a consistent and complete axiomatization of all true first-order logic statements about natural numbers must be false. - -Undecidable problems can be related to different topics, such as logic, abstract machines or topology. Since there are uncountably many undecidable problems, any list, even one of infinite length, is necessarily incomplete. 
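The standard diagonal argument behind this undecidability can be sketched in Python-style pseudocode (the decider `halts` below is hypothetical; the whole point of the proof is that it cannot exist):

```python
def halts(program_source: str, input_value: str) -> bool:
    # Hypothetical total decider for the halting problem,
    # assumed (for contradiction) to always answer correctly.
    ...

def diagonal(program_source: str) -> None:
    # Do the opposite of whatever `halts` predicts about a program
    # run on its own source code.
    if halts(program_source, program_source):
        while True:   # predicted to halt, so loop forever
            pass
    # predicted to run forever, so return immediately

# Running `diagonal` on its own source yields a contradiction: it halts
# if and only if it does not halt. Hence no such `halts` can exist.
```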
- -There are two distinct senses of the word "undecidable" in contemporary use. The first of these is the sense used in relation to Gödel's theorems, that of a statement being neither provable nor refutable in a specified deductive system. The second sense is used in relation to computability theory and applies not to statements but to decision problems, which are countably infinite sets of questions each requiring a yes or no answer. Such a problem is said to be undecidable if there is no computable function that correctly answers every question in the problem set. The connection between these two is that if a decision problem is undecidable (in the recursion theoretical sense) then there is no consistent, effective formal system which proves for every question A in the problem either "the answer to A is yes" or "the answer to A is no". - -Because of the two meanings of the word undecidable, the term independent is sometimes used instead of undecidable for the "neither provable nor refutable" sense. The usage of "independent" is also ambiguous, however. It can mean just "not provable", leaving open whether an independent statement might be refuted. - -Undecidability of a statement in a particular deductive system does not, in and of itself, address the question of whether the truth value of the statement is well-defined, or whether it can be determined by other means. Undecidability only implies that the particular deductive system being considered does not prove the truth or falsity of the statement. Whether there exist so-called "absolutely undecidable" statements, whose truth value can never be known or is ill-specified, is a controversial point among various philosophical schools. - -One of the first problems suspected to be undecidable, in the second sense of the term, was the word problem for groups, first posed by Max Dehn in 1911, which asks if there is a finitely presented group for which no algorithm exists to determine whether two words are equivalent. This was shown to be the case in 1952. - -The combined work of Gödel and Paul Cohen has given two concrete examples of undecidable statements (in the first sense of the term): The continuum hypothesis can neither be proved nor refuted in ZFC (the standard axiomatization of set theory), and the axiom of choice can neither be proved nor refuted in ZF (which is all the ZFC axioms except the axiom of choice). These results do not require the incompleteness theorem. Gödel proved in 1940 that neither of these statements could be disproved in ZF or ZFC set theory. In the 1960s, Cohen proved that neither is provable from ZF, and the continuum hypothesis cannot be proven from ZFC. - -In 1970, Russian mathematician Yuri Matiyasevich showed that Hilbert's Tenth Problem, posed in 1900 as a challenge to the next century of mathematicians, cannot be solved. Hilbert's challenge sought an algorithm which finds all solutions of a Diophantine equation. A Diophantine equation is a more general case of Fermat's Last Theorem; we seek the integer roots of a polynomial in any number of variables with integer coefficients. Since we have only one equation but n variables, infinitely many solutions exist (and are easy to find) in the complex plane; however, the problem becomes impossible if solutions are constrained to integer values only. Matiyasevich showed this problem to be unsolvable by mapping a Diophantine equation to a recursively enumerable set and invoking Gödel's Incompleteness Theorem. 
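The one-sided, "wait for an answer" behaviour described above is easy to exhibit for Diophantine equations. The naive search below (a sketch of ours, not Matiyasevich's construction) halts exactly when an integer solution exists; when there is none, it runs forever, which is why a procedure of this kind semi-decides but does not decide the problem:

```python
from itertools import count, product

def find_integer_root(poly, arity=3):
    # Enumerate integer tuples in growing boxes; return the first root found.
    # If the equation has no integer solution, the search never terminates.
    for bound in count(1):
        rng = range(-bound, bound + 1)
        for point in product(rng, repeat=arity):
            if poly(*point) == 0:
                return point

# x^2 + y^2 - z^2 = 25 has integer solutions, so this call halts,
# printing a solution such as (-4, -3, 0).
print(find_integer_root(lambda x, y, z: x**2 + y**2 - z**2 - 25))
```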
- -In 1936, Alan Turing proved that the halting problem—the question of whether or not a Turing machine halts on a given input—is undecidable, in the second sense of the term. This result was later generalized by Rice's theorem. - -In 1973, Saharon Shelah showed the Whitehead problem in group theory is undecidable, in the first sense of the term, in standard set theory. - -In 1977, Paris and Harrington proved that the Paris-Harrington principle, a version of the Ramsey theorem, is undecidable in the axiomatization of arithmetic given by the Peano axioms but can be proven to be true in the larger system of second-order arithmetic. - -Kruskal's tree theorem, which has applications in computer science, is also undecidable from the Peano axioms but provable in set theory. In fact Kruskal's tree theorem (or its finite form) is undecidable in a much stronger system codifying the principles acceptable on the basis of a philosophy of mathematics called predicativism. - -Goodstein's theorem is a statement about the Ramsey theory of the natural numbers that Kirby and Paris showed is undecidable in Peano arithmetic. - -Gregory Chaitin produced undecidable statements in algorithmic information theory and proved another incompleteness theorem in that setting. Chaitin's theorem states that for any theory that can represent enough arithmetic, there is an upper bound c such that no specific number can be proven in that theory to have Kolmogorov complexity greater than c. While Gödel's theorem is related to the liar paradox, Chaitin's result is related to Berry's paradox. - -In 2007, researchers Kurtz and Simon, building on earlier work by J.H. Conway in the 1970s, proved that a natural generalization of the Collatz problem is undecidable. diff --git a/wiki/wikipedia/92.txt b/wiki/wikipedia/92.txt deleted file mode 100644 index 5645256039bb923ceec628796c8aa1cc2bcaedfd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/92.txt +++ /dev/null @@ -1,15 +0,0 @@ -In mathematics, particularly in set theory, Fodor's lemma states the following: - -If $\kappa$ is a regular, uncountable cardinal, $S$ is a stationary subset of $\kappa$, and $f:S\rightarrow\kappa$ is regressive (that is, $f(\alpha)<\alpha$ for any $\alpha\in S$, $\alpha\neq 0$) then there is some $\gamma$ and some stationary $S_0\subseteq S$ such that $f(\alpha)=\gamma$ for any $\alpha\in S_0$. In modern parlance, the nonstationary ideal is normal. - -The lemma was first proved by the Hungarian set theorist Géza Fodor in 1956. It is sometimes also called "The Pressing Down Lemma". - -We can assume that $0\notin S$ (by removing 0, if necessary). - -If Fodor's lemma is false, for every $\alpha<\kappa$ there is some club set $C_\alpha$ such that $C_\alpha\cap f^{-1}(\alpha)=\emptyset$. Let $C=\Delta_{\alpha<\kappa} C_\alpha$. The club sets are closed under diagonal intersection, so $C$ is also club and therefore there is some $\alpha\in S\cap C$. Then $\alpha\in C_\beta$ for each $\beta<\alpha$, and so there can be no $\beta<\alpha$ such that $\alpha\in f^{-1}(\beta)$, so $f(\alpha)\geq\alpha$, a contradiction. - -Fodor's lemma also holds for Thomas Jech's notion of stationary sets as well as for the general notion of stationary set. - -Another related statement, also known as Fodor's lemma (or Pressing-Down-lemma), is the following: - -For every non-special tree $T$ and regressive mapping $f:T\rightarrow T$ (that is, $f(t)<t$, with respect to the tree order, for every $t\in T$), there is a non-special subtree $S\subseteq T$ on which $f$ is constant.
- -Therefore, we have an alternative definition of cohomological dimension as follows. - -The cohomological dimension of G with coefficients in $\Z$ is the smallest n (possibly infinity) such that G has a projective resolution of length n, i.e., $\Z$ has a projective resolution of length n as a trivial $\Z[G]$-module. - -Let $G$ be a finitely presented group and $n\ge 3$ be an integer. Suppose the cohomological dimension of $G$ with coefficients in $\Z$ is at most $n$, i.e., $\operatorname{cd}_{\Z}(G)\le n$. Then there exists an $n$-dimensional aspherical CW complex $X$ such that the fundamental group of $X$ is $G$, i.e., $\pi_1(X)=G$. - -The converse of this theorem is a consequence of cellular homology and the fact that every free module is projective. - -Theorem: Let X be an aspherical n-dimensional CW complex with $\pi_1(X) = G$. Then $\operatorname{cd}_{\Z}(G) \le n$. - -For n = 1 the result is one of the consequences of Stallings' theorem about ends of groups. - -Theorem: Every finitely generated group of cohomological dimension one is free. - -For $n=2$ the statement is known as the Eilenberg–Ganea conjecture. - -Eilenberg–Ganea Conjecture: If a group G has cohomological dimension 2 then there is a 2-dimensional aspherical CW complex X with $\pi_1(X)=G$. - -It is known that given a group G with $\operatorname{cd}_{\Z}(G)=2$, there exists a 3-dimensional aspherical CW complex X with $\pi_1(X)=G$. diff --git a/wiki/wikipedia/921.txt b/wiki/wikipedia/921.txt deleted file mode 100644 index f7dd844dcfe92fe3abc1d9da69e9e05ecfeb9f44..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/921.txt +++ /dev/null @@ -1,15 +0,0 @@ -Shearer's inequality (also called Shearer's lemma), in mathematics, is an inequality in information theory relating the entropy of a set of variables to the entropies of a collection of subsets. It is named for mathematician James Shearer. - -Concretely, it states that if X1, ..., Xd are random variables and S1, ..., Sn are subsets of {1, 2, ..., d} such that every integer between 1 and d lies in at least r of these subsets, then -$$ - H[(X_1,\dots,X_d)] \leq \frac{1}{r}\sum_{i=1}^n H[(X_j)_{j\in S_i}] -$$ - -where $H$ is entropy and $ (X_{j})_{j\in S_{i}}$ is the Cartesian product of random variables $X_{j}$ with indices j in $S_{i}$. - -Let $\mathcal{F}$ be a family of subsets of [n] (possibly with repeats) with each $i\in [n]$ included in at least $t$ members of $\mathcal{F}$. Let $\mathcal{A}$ be a set of subsets of $[n]$. Then -$$ - |\mathcal{A}|\leq \prod_{F\in \mathcal{F}}|\operatorname{trace}_{F}(\mathcal{A})|^{1/t} -$$ - -where $ \operatorname{trace}_{F}(\mathcal{A})=\{A\cap F:A\in\mathcal{A}\}$ is the set of possible intersections of elements of $ \mathcal{A}$ with $ F$. diff --git a/wiki/wikipedia/922.txt b/wiki/wikipedia/922.txt deleted file mode 100644 index 0abea529461f5ba0beebdfd71d866d5891694634..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/922.txt +++ /dev/null @@ -1,51 +0,0 @@ -The gambler's fallacy, also known as the Monte Carlo fallacy or the fallacy of the maturity of chances, is the incorrect belief that, if a particular event has occurred more frequently than normal in the past, it is less likely to happen in the future (or vice versa), when it has otherwise been established that the probability of such events does not depend on what has happened in the past. Such events, having the quality of historical independence, are referred to as statistically independent.
The fallacy is commonly associated with gambling, where it may be believed, for example, that the next dice roll is more than usually likely to be six because there have recently been fewer than the usual number of sixes. - -The term "Monte Carlo fallacy" originates from the best-known example of the phenomenon, which occurred in the Monte Carlo Casino in 1913. - -An example of a retrospective gambler's fallacy would be to observe multiple successive "heads" on a coin toss and conclude from this that the previously unknown flip was "tails". - -After having multiple children of the same sex, some parents may believe that they are due to have a child of the opposite sex. While the Trivers–Willard hypothesis predicts that birth sex is dependent on living conditions, stating that more male children are born in good living conditions, while more female children are born in poorer living conditions, the probability of having a child of either sex is still regarded as near 0.5 (50%). - -Perhaps the most famous example of the gambler's fallacy occurred in a game of roulette at the Monte Carlo Casino on August 18, 1913, when the ball fell in black 26 times in a row. This was an extremely uncommon occurrence: the probability of a sequence of either red or black occurring 26 times in a row is $(18/37)^{26-1}$, or around 1 in 66.6 million, assuming the mechanism is unbiased. Gamblers lost millions of francs betting against black, reasoning incorrectly that the streak was causing an imbalance in the randomness of the wheel, and that it had to be followed by a long streak of red. - -The gambler's fallacy does not apply in situations where the probability of different events is not independent. In such cases, the probability of future events can change based on the outcome of past events, such as the statistical permutation of events. An example is when cards are drawn from a deck without replacement. If an ace is drawn from a deck and not reinserted, the next draw is less likely to be an ace and more likely to be of another rank. The probability of drawing another ace, assuming that it was the first card drawn and that there are no jokers, has decreased from 4/52 (7.69%) to 3/51 (5.88%), while the probability for each other rank has increased from 4/52 (7.69%) to 4/51 (7.84%). This effect allows card counting systems to work in games such as blackjack. - -In most illustrations of the gambler's fallacy and the reverse gambler's fallacy, the trial (e.g. flipping a coin) is assumed to be fair. In practice, this assumption may not hold. For example, if a coin is flipped 21 times, the probability of 21 heads with a fair coin is 1 in 2,097,152. Since this probability is so small, if it happens, it may well be that the coin is somehow biased towards landing on heads, or that it is being controlled by hidden magnets, or similar. In this case, the smart bet is "heads" because Bayesian inference from the empirical evidence — 21 heads in a row — suggests that the coin is likely to be biased toward heads. Bayesian inference can be used to show that when the long-run proportion of different outcomes is unknown but exchangeable (meaning that the random process from which the outcomes are generated may be biased but is equally likely to be biased in any direction) and that previous observations demonstrate the likely direction of the bias, the outcome which has occurred the most in the observed data is the most likely to occur again.
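The figures in the next paragraph can be checked directly with Bayes' rule; a minimal sketch (the numbers come from the example that follows, the code is ours):

```python
# Two hypotheses: a fair coin (P(heads) = 0.5) and a biased coin
# (P(heads) = 0.6), with a 1% prior probability that the coin is biased.
prior_biased = 0.01
p_biased, p_fair = 0.60, 0.50
heads = 21

joint_biased = prior_biased * p_biased ** heads
joint_fair = (1 - prior_biased) * p_fair ** heads
posterior = joint_biased / (joint_biased + joint_fair)
print(f"P(biased | {heads} heads in a row) = {posterior:.1%}")  # about 32%
```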
- -For example, if the a priori probability of a biased coin is say 1%, and assuming that such a biased coin would come down heads say 60% of the time, then after 21 heads the probability of a biased coin has increased to about 32%. - -The opening scene of the play Rosencrantz and Guildenstern Are Dead by Tom Stoppard discusses these issues as one man continually flips heads and the other considers various possible explanations. - -If external factors are allowed to change the probability of the events, the gambler's fallacy may not hold. For example, a change in the game rules might favour one player over the other, improving his or her win percentage. Similarly, an inexperienced player's success may decrease after opposing teams learn about and play against their weaknesses. This is another example of bias. - -The gambler's fallacy arises out of a belief in a law of small numbers, leading to the erroneous belief that small samples must be representative of the larger population. According to the fallacy, streaks must eventually even out in order to be representative. Amos Tversky and Daniel Kahneman first proposed that the gambler's fallacy is a cognitive bias produced by a psychological heuristic called the representativeness heuristic, which states that people evaluate the probability of a certain event by assessing how similar it is to events they have experienced before, and how similar the events surrounding those two processes are. Other researchers believe that belief in the fallacy may be the result of a mistaken belief in an internal locus of control. When a person believes that gambling outcomes are the result of their own skill, they may be more susceptible to the gambler's fallacy because they reject the idea that chance could overcome skill or talent. The two types differ in that type one wrongly assumes that gambling conditions are fair and perfect, while type two assumes that the conditions are biased, and that this bias can be detected after a certain amount of time. - -Another variety, known as the retrospective gambler's fallacy, occurs when individuals judge that a seemingly rare event must come from a longer sequence than a more common event does. The belief that an imaginary sequence of die rolls is more than three times as long when a set of three sixes is observed as opposed to when there are only two sixes. This effect can be observed in isolated instances, or even sequentially. Another example would involve hearing that a teenager has unprotected sex and becomes pregnant on a given night, and concluding that she has been engaging in unprotected sex for longer than if we hear she had unprotected sex but did not become pregnant, when the probability of becoming pregnant as a result of each intercourse is independent of the amount of prior intercourse. - -Another psychological perspective states that gambler's fallacy can be seen as the counterpart to basketball's hot-hand fallacy, in which people tend to predict the same outcome as the previous event - known as positive recency - resulting in a belief that a high scorer will continue to score. In the gambler's fallacy, people predict the opposite outcome of the previous event - negative recency - believing that since the roulette wheel has landed on black on the previous six occasions, it is due to land on red the next. 
Ayton and Fischer have theorized that people display positive recency for the hot-hand fallacy because the fallacy deals with human performance, and that people do not believe that an inanimate object can become "hot." Human performance is not perceived as random, and people are more likely to continue streaks when they believe that the process generating the results is nonrandom. When a person exhibits the gambler's fallacy, they are more likely to exhibit the hot-hand fallacy as well, suggesting that one construct is responsible for the two fallacies. - -The difference between the two fallacies is also found in economic decision-making. A study by Huber, Kirchler, and Stockl in 2010 examined how the hot hand and the gambler's fallacy are exhibited in the financial market. The researchers gave their participants a choice: they could either bet on the outcome of a series of coin tosses, use an expert opinion to sway their decision, or choose a risk-free alternative instead for a smaller financial reward. Participants turned to the expert opinion to make their decision 24% of the time based on their past experience of success, which exemplifies the hot-hand. If the expert was correct, 78% of the participants chose the expert's opinion again, as opposed to 57% doing so when the expert was wrong. The participants also exhibited the gambler's fallacy, with their selection of either heads or tails decreasing after noticing a streak of either outcome. This experiment helped bolster Ayton and Fischer's theory that people put more faith in human performance than they do in seemingly random processes. - -While the representativeness heuristic and other cognitive biases are the most commonly cited cause of the gambler's fallacy, research suggests that there may also be a neurological component. Functional magnetic resonance imaging has shown that after losing a bet or gamble, known as riskloss, the frontoparietal network of the brain is activated, resulting in more risk-taking behavior. In contrast, there is decreased activity in the amygdala, caudate, and ventral striatum after a riskloss. Activation in the amygdala is negatively correlated with gambler's fallacy, so that the more activity exhibited in the amygdala, the less likely an individual is to fall prey to the gambler's fallacy. These results suggest that gambler's fallacy relies more on the prefrontal cortex, which is responsible for executive, goal-directed processes, and less on the brain areas that control affective decision-making. - -The desire to continue gambling or betting is controlled by the striatum, which supports a choice-outcome contingency learning method. The striatum processes the errors in prediction and the behavior changes accordingly. After a win, the positive behavior is reinforced and after a loss, the behavior is conditioned to be avoided. In individuals exhibiting the gambler's fallacy, this choice-outcome contingency method is impaired, and they continue to make risks after a series of losses. - -The gambler's fallacy is a deep-seated cognitive bias and can be very hard to overcome. Educating individuals about the nature of randomness has not always proven effective in reducing or eliminating any manifestation of the fallacy. Participants in a study by Beach and Swensson in 1967 were shown a shuffled deck of index cards with shapes on them, and were instructed to guess which shape would come next in a sequence. 
The experimental group of participants was informed about the nature and existence of the gambler's fallacy, and were explicitly instructed not to rely on run dependency to make their guesses. The control group was not given this information. The response styles of the two groups were similar, indicating that the experimental group still based their choices on the length of the run sequence. This led to the conclusion that instructing individuals about randomness is not sufficient in lessening the gambler's fallacy. - -An individual's susceptibility to the gambler's fallacy may decrease with age. A study by Fischbein and Schnarch in 1997 administered a questionnaire to five groups: students in grades 5, 7, 9, 11, and college students specializing in teaching mathematics. None of the participants had received any prior education regarding probability. The question asked was: "Ronni flipped a coin three times and in all cases heads came up. Ronni intends to flip the coin again. What is the chance of getting heads the fourth time?" The results indicated that as the students got older, the less likely they were to answer with "smaller than the chance of getting tails", which would indicate a negative recency effect. 35% of the 5th graders, 35% of the 7th graders, and 20% of the 9th graders exhibited the negative recency effect. Only 10% of the 11th graders answered this way, and none of the college students did. Fischbein and Schnarch theorized that an individual's tendency to rely on the representativeness heuristic and other cognitive biases can be overcome with age. - -Another possible solution comes from Roney and Trick, Gestalt psychologists who suggest that the fallacy may be eliminated as a result of grouping. When a future event such as a coin toss is described as part of a sequence, no matter how arbitrarily, a person will automatically consider the event as it relates to the past events, resulting in the gambler's fallacy. When a person considers every event as independent, the fallacy can be greatly reduced. - -Roney and Trick told participants in their experiment that they were betting on either two blocks of six coin tosses, or on two blocks of seven coin tosses. The fourth, fifth, and sixth tosses all had the same outcome, either three heads or three tails. The seventh toss was grouped with either the end of one block, or the beginning of the next block. Participants exhibited the strongest gambler's fallacy when the seventh trial was part of the first block, directly after the sequence of three heads or tails. The researchers pointed out that the participants that did not show the gambler's fallacy showed less confidence in their bets and bet fewer times than the participants who picked with the gambler's fallacy. When the seventh trial was grouped with the second block, and was perceived as not being part of a streak, the gambler's fallacy did not occur. - -Roney and Trick argued that instead of teaching individuals about the nature of randomness, the fallacy could be avoided by training people to treat each event as if it is a beginning and not a continuation of previous events. They suggested that this would prevent people from gambling when they are losing, in the mistaken hope that their chances of winning are due to increase based on an interaction with previous events. 
- -Within a real-world setting, numerous studies have uncovered that for various decision makers placed in high stakes scenarios, it is likely they will reflect some degree of strong negative autocorrelation in their judgement. - -In a study aimed at discovering if the negative autocorrelation that exists with the gambler's fallacy existed in the decisions made by U.S. asylum judges, results showed that after two successive asylum grants, a judge would be 5.5% less likely to approve a third grant. - -In the game of baseball, decisions are made every minute. One particular decision made by umpires which is often subject to scrutiny is the ‘strike zone’ decision. Whenever a batter does not swing, the umpire must decide if the ball was within a fair region for the batter, known as the strike zone. If outside of this zone, the ball does not count towards outing the batter. In a study of over 12,000 games, results showed that umpires are 1.3% less likely to call a strike if the previous two balls were also strikes. Soon after, a 1994 study was constructed by Dek Terrell to test the findings of Clotfelter and Cook. The key change in Terrell's study was the examination of a pari-mutuel lottery in which a number selected with lower total wagers placed on it will result in a higher pay-out. While this examination did conclude that players in both types of lotteries exhibited behaviour in line with the gambler's fallacy, those who took part in pari-mutuel betting seemed to be less influenced. - -The effect of the gambler's fallacy can be observed as numbers are chosen far less frequently soon after they are selected as winners, recovering slowly over a two-month period. For example, on the 11th of April 1988, 41 players selected 244 as the winning combination. Three days later only 24 individuals selected 244, a 41.5% decrease. This is the gambler's fallacy in motion, as lottery players believe that the occurrence of a winning combination in previous days will decrease its likelihood of occurring today. - -Several video games feature the use of loot boxes, a collection of in-game items awarded on opening with random contents set by rarity metrics, as a monetization scheme. Since around 2018, loot boxes have come under scrutiny from governments and advocates on the basis they are akin to gambling, particularly for games aimed at youth. Some games use a special "pity-timer" mechanic: if the player has opened several loot boxes in a row without obtaining a high-rarity item, subsequent loot boxes will improve the odds of a higher-rarity item drop. This is considered to feed into the gambler's fallacy since it reinforces the idea that a player will eventually obtain a high-rarity item (a win) after only receiving common items from a string of previous loot boxes. diff --git a/wiki/wikipedia/923.txt b/wiki/wikipedia/923.txt deleted file mode 100644 index 85c7d6e3da762419f2ff515569654b4b2f249581..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/923.txt +++ /dev/null @@ -1 +0,0 @@ -In probability theory, Glivenko's theorem states that if $\varphi_n, n\in \mathbb N$, $\varphi$ are the characteristic functions of some probability distributions $\mu_n, \mu$ respectively and $\varphi_n \to \varphi$ almost everywhere, then $\mu_n \to \mu$ in the sense of probability distributions.
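As a numerical illustration of Glivenko's theorem (a sketch of ours, using the classical Poisson limit of binomial distributions): the characteristic functions of Binomial(n, λ/n) converge pointwise to that of Poisson(λ), so the theorem guarantees that the distributions themselves converge.

```python
import cmath

lam = 3.0  # rate of the limiting Poisson distribution

def cf_binomial(t, n):
    # Characteristic function of Binomial(n, lam/n): (1 - p + p e^{it})^n.
    p = lam / n
    return (1 - p + p * cmath.exp(1j * t)) ** n

def cf_poisson(t):
    # Characteristic function of Poisson(lam): exp(lam (e^{it} - 1)).
    return cmath.exp(lam * (cmath.exp(1j * t) - 1))

t = 1.7  # an arbitrary evaluation point
for n in (10, 100, 1000):
    print(n, abs(cf_binomial(t, n) - cf_poisson(t)))  # shrinks toward 0
```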
diff --git a/wiki/wikipedia/924.txt b/wiki/wikipedia/924.txt deleted file mode 100644 index c1ad1c455d346d19bc1e81a292f710b2018917ad..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/924.txt +++ /dev/null @@ -1,55 +0,0 @@ -Optimistic concurrency control (OCC), also known as optimistic locking, is a concurrency control method applied to transactional systems such as relational database management systems and software transactional memory. OCC assumes that multiple transactions can frequently complete without interfering with each other. While running, transactions use data resources without acquiring locks on those resources. Before committing, each transaction verifies that no other transaction has modified the data it has read. If the check reveals conflicting modifications, the committing transaction rolls back and can be restarted. Optimistic concurrency control was first proposed by H. T. Kung and John T. Robinson. - -OCC is generally used in environments with low data contention. When conflicts are rare, transactions can complete without the expense of managing locks and without having transactions wait for other transactions' locks to clear, leading to higher throughput than other concurrency control methods. However, if contention for data resources is frequent, the cost of repeatedly restarting transactions hurts performance significantly, in which case other concurrency control methods may be better suited. However, locking-based ("pessimistic") methods also can deliver poor performance because locking can drastically limit effective concurrency even when deadlocks are avoided. - -Optimistic concurrency control transactions involve these phases: - -*Begin: Record a timestamp marking the transaction's beginning. - -*Modify: Read database values, and tentatively write changes. - -*Validate: Check whether other transactions have modified data that this transaction has used (read or written). This includes transactions that completed after this transaction's start time, and optionally, transactions that are still active at validation time. - -*Commit/Rollback: If there is no conflict, make all changes take effect. If there is a conflict, resolve it, typically by aborting the transaction, although other resolution schemes are possible. Care must be taken to avoid a time-of-check to time-of-use bug, particularly if this phase and the previous one are not performed as a single atomic operation. - -The stateless nature of HTTP makes locking infeasible for web user interfaces. It is common for a user to start editing a record, then leave without following a "cancel" or "logout" link. If locking is used, other users who attempt to edit the same record must wait until the first user's lock times out. - -HTTP does provide a form of built-in OCC. The response to an initial GET request can include an ETag for subsequent PUT requests to use in the If-Match header. Any PUT requests with an out-of-date ETag in the If-Match header can then be rejected. - -Some database management systems offer OCC natively, without requiring special application code. For others, the application can implement an OCC layer outside of the database, and avoid waiting or silently overwriting records. In such cases, the form may include a hidden field with the record's original content, a timestamp, a sequence number, or an opaque token. On submit, this is compared against the database. If it differs, the conflict resolution algorithm is invoked. - -* MediaWiki's edit pages use OCC. 
- -* Bugzilla uses OCC; edit conflicts are called "mid-air collisions". - -* The Ruby on Rails framework has an API for OCC. - -* The Grails framework uses OCC in its default conventions. - -* The GT.M database engine uses OCC for managing transactions (even single updates are treated as mini-transactions). - -* Microsoft's Entity Framework (including Code-First) has built-in support for OCC based on a binary timestamp value. - -* Mimer SQL is a DBMS that only implements optimistic concurrency control. - -* Google App Engine data store uses OCC. - -* The Apache Solr search engine supports OCC via the _version_ field. - -* The Elasticsearch search engine supports OCC via the version attribute. - -* CouchDB implements OCC through document revisions. - -* The MonetDB column-oriented database management system's transaction management scheme is based on OCC. - -* Most implementations of software transactional memory use OCC. - -* Redis provides OCC through the WATCH command. - -* MySQL implements OCC in its Group Replication configuration. - -* Firebird uses multi-generational architecture as an implementation of OCC for data management. - -* DynamoDB uses conditional updates as an implementation of OCC. - -* Kubernetes uses OCC when updating resources. diff --git a/wiki/wikipedia/925.txt b/wiki/wikipedia/925.txt deleted file mode 100644 index c8f37de26904b44c7f239f5e79e99a328071a779..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/925.txt +++ /dev/null @@ -1,379 +0,0 @@ -In mathematics, the exponential function can be characterized in many ways. The following characterizations (definitions) are most common. This article discusses why each characterization makes sense, and why the characterizations are independent of and equivalent to each other. As a special case of these considerations, it will be demonstrated that the three most common definitions given for the mathematical constant e are equivalent to each other. - -The six most common definitions of the exponential function $\exp(x) = e^x$ for real x are: - -1. Define $e^x$ by the limit -$$ -e^x = \lim_{n\to\infty} \left(1+\frac x n \right)^n. -$$ - -2. Define $e^x$ as the value of the infinite series -$$ -e^x = \sum_{n=0}^\infty {x^n \over n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots -$$ - -(Here n! denotes the factorial of n. One proof that e is irrational uses a special case of this formula.) - -3. Define $e^x$ to be the unique number $y > 0$ such that -$$ -\int_1^y \frac{dt}{t} = x. -$$ - -This defines $e^x$ as the inverse of the natural logarithm function, which is itself defined by this integral. - -4. Define $e^x$ to be the unique solution to the initial value problem -$$ -y' = y,\quad y(0) = 1. -$$ - -(Here, y′ denotes the derivative of y.) - -5. The exponential function $e^x$ is the unique function f with f(1) = e and f(x + y) = f(x) f(y) for all x and y that satisfies any one of the following additional conditions: - -* f is Lebesgue-measurable (Hewitt and Stromberg, 1965, exercise 18.46). - -* f is continuous at at least one point (Rudin, 1976, chapter 8, exercise 6). (As shown below, if f(x + y) = f(x) f(y) for all x and y, and f is continuous at any single point, then f is necessarily continuous everywhere.) - -* f is increasing. (An increasing function that agrees with $e^x$ on rational numbers must equal $e^x$.) - -For the uniqueness, one must impose some additional condition like those above, since otherwise other functions can be constructed using a basis for the real numbers over the rationals, as described by Hewitt and Stromberg.
- -One could also replace f(1) = e and the "additional condition" with the single condition f′(0) = 1. - -6. Let e be the unique positive real number satisfying -$$ -\lim_{h \to 0} \frac{e^h - 1}{h} = 1. -$$ - -This limit can be shown to exist. Then define $e^x$ to be the exponential function with this base. This definition is particularly suited to computing the derivative of the exponential function. - -One way of defining the exponential function for domains larger than the domain of real numbers is to first define it for the domain of real numbers using one of the above characterizations and then extend it to larger domains in a way which would work for any analytic function. - -It is also possible to use the characterisations directly for the larger domain, though some problems may arise. (1), (2), and (4) all make sense for arbitrary Banach algebras. (3) presents a problem for complex numbers, because there are non-equivalent paths along which one could integrate, and (5) is not sufficient. For example, the function f defined (for x and y real) as -$$ - f(x + iy) = e^x(\cos(2y) + i\sin(2y)) = e^{x + 2iy} -$$ - -satisfies the conditions in (5) without being the exponential function of x + iy. To make (5) sufficient for the domain of complex numbers, one may either stipulate that there exists a point at which f is a conformal map or else stipulate that -$$ - f(i) = \cos(1) + i\sin(1). -$$ - -In particular, the alternate condition in (5) that $f'(0)=1$ is sufficient since it implicitly stipulates that f be conformal. - -Some of these definitions require justification to demonstrate that they are well-defined. For example, when the value of the function is defined as the result of a limiting process (i.e. an infinite sequence or series), it must be demonstrated that such a limit always exists. - -Since -$$ -\lim_{n\to\infty} \left|\frac{x^{n+1}/(n+1)!}{x^n/n!}\right| = \lim_{n\to\infty} \left|\frac{x}{n+1}\right| = 0 < 1, -$$ -it follows from the ratio test that $\sum_{n=0}^\infty \frac{x^n}{n!}$ converges for all x. - -Since the integrand is an integrable function of t, the integral expression is well-defined. It must be shown that the function from $\mathbb{R}^+$ to $\mathbb{R}$ defined by -$$ -\int_1^{(\cdot)} \frac{dt}{t} -$$ - -is a bijection. Since 1/t is positive for positive t, this function is strictly increasing, hence injective. If the two integrals - -\begin{align} -\int_1^\infty \frac{dt} t & = \infty \\[8pt] -\int_1^0 \frac{dt} t & = -\infty -\end{align} - -hold, then it is surjective as well. Indeed, these integrals do hold; they follow from the integral test and the divergence of the harmonic series. - -== Equivalence of the characterizations == - -The following proof demonstrates the equivalence of the first three characterizations given for e above. The proof consists of two parts. First, the equivalence of characterizations 1 and 2 is established, and then the equivalence of characterizations 1 and 3 is established. Arguments linking the other characterizations are also given. - -The following argument is adapted from a proof in Rudin, theorem 3.31, p. 63–65. - -Let $x\geq0$ be a fixed non-negative real number. Define -$$ -s_n = \sum_{k=0}^n\frac{x^k}{k!},\ t_n=\left(1+\frac x n \right)^n.
-$$ - -By the binomial theorem, - - - -\begin{align} - -t_n & =\sum_{k=0}^n{n \choose k}\frac{x^k}{n^k}=1+x+\sum_{k=2}^n\frac{n(n-1)(n-2)\cdots(n-(k-1))x^k}{k!n^k} \\[8pt] - -& = 1+x+\frac{x^2}{2!}\left(1-\frac{1}{n}\right)+\frac{x^3}{3!}\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)+\cdots \\[8pt] - -& {}\qquad \cdots +\frac{x^n}{n!}\left(1-\frac{1}{n}\right)\cdots\left(1-\frac{n-1}{n}\right)\le s_n - -\end{align} - - - -(using x ≥ 0 to obtain the final inequality) so that -$$ -\limsup_{n\to\infty}t_n \le \limsup_{n\to\infty}s_n = e^x -$$ - -where ex is in the sense of definition 2. Here, limsups must be used, because it is not known if tn converges. For the other direction, by the above expression of tn, if 2 ≤ m ≤ n, -$$ -1+x+\frac{x^2}{2!}\left(1-\frac{1}{n}\right)+\cdots+\frac{x^m}{m!}\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)\cdots\left(1-\frac{m-1}{n}\right)\le t_n. -$$ - -Fix m, and let n approach infinity. Then -$$ -s_m = 1+x+\frac{x^2}{2!}+\cdots+\frac{x^m}{m!} \le \liminf_{n\to\infty}t_n -$$ - -(again, liminfs must be used because it is not known if tn converges). Now, taking the above inequality, letting m approach infinity, and putting it together with the other inequality, this becomes -$$ -\limsup_{n\to\infty}t_n \le e^x \le \liminf_{n\to\infty}t_n -$$ - -so that -$$ -\lim_{n\to\infty}t_n = e^x. -$$ - -This equivalence can be extended to the negative real numbers by noting $\left(1 - \frac r n \right)^n \left(1+\frac{r}{n}\right)^n = \left(1-\frac{r^2}{n^2}\right)^n $ and taking the limit as n goes to infinity. - -The error term of this limit-expression is described by -$$ -\left(1+\frac x n \right)^n=e^x \left(1-\frac{x^2}{2n}+\frac{x^3(8+3x)}{24n^2}+\cdots \right), -$$ - -where the polynomial's degree (in x) in the term with denominator nk is 2k. - -Here, the natural logarithm function is defined in terms of a definite integral as above. By the first part of the fundamental theorem of calculus, -$$ -\frac d {dx}\ln x=\frac{d}{dx} \int_1^x \frac1 t dt = \frac 1 x. -$$ - -Moreover, $\ln 1 = \int_1^1 \frac{1}{t}dt = 0.$ - -Now, let x be any fixed real number, and let -$$ -y=\lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n. -$$ - -We will show that ln(y) = x, which implies that y = ex, where ex is in the sense of definition 3. We have -$$ -\ln y=\ln\lim_{n\to\infty}\left(1+\frac{x}{n} \right)^n = \lim_{n\to\infty} \ln\left(1+\frac{x}{n}\right)^n. -$$ - -Here, the continuity of ln(y) is used, which follows from the continuity of 1/t: -$$ -\ln y=\lim_{n\to\infty}n\ln \left(1+\frac{x}{n} \right) = \lim_{n\to\infty} \frac{x\ln\left(1+(x/n)\right)}{(x/n)}. -$$ - -Here, the result $\ln a^n = n\ln a$ has been used. This result can be established for n a natural number by induction, or using integration by substitution. (The extension to real powers must wait until ln and exp have been established as inverses of each other, so that $a^b$ can be defined for real $b$ as $e^{b\ln a}$.) -$$ -=x\cdot\lim_{h\to 0}\frac{\ln\left(1+h\right)}{h} \quad \text{ where } h = \frac{x}{n} -$$ -$$ -=x\cdot\lim_{h\to 0}\frac{\ln\left(1+h\right)-\ln 1}{h} -$$ -$$ -=x\cdot\frac{d}{dt} \ln t \Bigg|_{t=1} -$$ -$$ -\! = x. -$$ - -Characterisation 3 involves defining the natural logarithm before the exponential function is defined. First, - - - -\log x:=\int_{1}^{x}\frac{1}{t}dt. - - - -This means that the natural logarithm of $x$ equals the (signed) area under the graph of $1/t$ between $t=1$ and $t=x$. If $x<1$, then this area is taken to be negative.
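This area description of the logarithm is also easy to check numerically. The following sketch (midpoint-rule quadrature; the step count is an illustrative choice) compares the signed area with the library logarithm:

```python
import math

def log_via_area(x, steps=100_000):
    """Signed area under 1/t from t = 1 to t = x (midpoint rule).
    For x < 1 the step h is negative, so the area comes out negative."""
    h = (x - 1.0) / steps
    return h * sum(1.0 / (1.0 + (i + 0.5) * h) for i in range(steps))

for x in (0.5, 2.0, 10.0):
    print(f"x = {x:>4}: area = {log_via_area(x):.8f}, math.log = {math.log(x):.8f}")
```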
Then, $\exp$ is defined as the inverse of $\log$, meaning that -$$ -\exp(\log(x))=x \text{ and } \log(\exp(x))=x -$$ - -by the definition of an inverse function. If $a$ is a positive real number then $a^x$ is defined as $\exp(x\log(a))$. Finally, $e$ is defined as the number $a$ such that $\log(a)=1$. It can then be shown that $e^x=\exp(x)$: -$$ -e^x=\exp(x\log(e))=\exp(x) -$$ - -By the fundamental theorem of calculus, the derivative of $\log x = \frac{1}{x}$. We are now in a position to prove that $\frac{d(e^x)}{dx}=e^x$, satisfying the first part of the initial value problem given in characterisation 4: - - - -\begin{align} - -\text{Let }y&=e^x=\exp(x) \\ - -\log(y)&=\log(\exp(x))=x \\ - -\frac{1}{y}\frac{dy}{dx}&=1 \\ - -\frac{dy}{dx}&=y=e^x - -\end{align} - - - -Then, we merely have to note that $e^0=\exp(0)=1$, and we are done. Of course, it is much easier to show that characterisation 4 implies characterisation 3. If $e^x$ is the unique function $f:\mathbb{R}\to\mathbb{R}$ satisfying $f'(x)=e^x$, and $f(0)=1$, then $\log$ can be defined as its inverse. The derivative of $\log$ can be found in the following way: - - - -y = \log x \implies x=e^y - - - -If we differentiate both sides with respect to $y$, we get - - - -\begin{align} - -\frac{dx}{dy} &= e^y \\ - -\frac{dy}{dx} &= \frac{1}{e^y} = \frac{1}{x} - -\end{align} - - - -Therefore, - - - -\int_{1}^{x}\frac{1}{t}dt=\left[\log t\right]_{1}^{x} = \log x - \log 1 = \log x - 0 = \log x - - - -Let n be a non-negative integer. In the sense of definition 4 and by induction, $\frac{d^ny}{dx^n}=y$. - -Therefore $\frac{d^ny}{dx^n}\Bigg|_{x=0}=y(0)=1.$ - -Using Taylor series, -$$ -y= \sum_{n=0}^\infty \frac {f^{(n)}(0)}{n!} x^n = \sum_{n=0}^\infty \frac {1}{n!} x^n = \sum_{n=0}^\infty \frac {x^n}{n!}. -$$ This shows that definition 4 implies definition 2. - -In the sense of definition 2, - - - -\begin{align} - -\frac{d}{dx}e^x & = \frac{d}{dx} \left(1+\sum_{n=1}^\infty \frac {x^n}{n!} \right) = \sum_{n=1}^\infty \frac {nx^{n-1}}{n!} =\sum_{n=1}^\infty \frac {x^{n-1}}{(n-1)!} \\[6pt] - -& =\sum_{k=0}^\infty \frac {x^k}{k!}, \text{ where } k=n-1 \\[6pt] - -& =e^x - -\end{align} - - - -Besides, $e^0=1+0+\frac{0^2}{2!}+\frac{0^3}{3!}+\cdots=1.$ This shows that definition 2 implies definition 4. - -The following proof is a simplified version of the one in Hewitt and Stromberg, exercise 18.46. First, one proves that measurability (or here, Lebesgue-integrability) implies continuity for a non-zero function $f(x)$ satisfying $f(x+y)=f(x)f(y)$, and then one proves that continuity implies $f(x) = e^{kx}$ for some k, and finally $f(1)=e$ implies k=1. - -First, a few elementary properties from $f(x)$ satisfying $f(x+y)=f(x)f(y)$ are proven, and the assumption that $f(x)$ is not identically zero: - -* If $f(x)$ is nonzero anywhere (say at x=y), then it is non-zero everywhere. Proof: $f(y) = f(x) f(y - x) \neq 0$ implies $f(x) \neq 0$. - -* $f(0)=1$. Proof: $f(x)= f(x+0) = f(x) f(0)$ and $f(x)$ is non-zero. - -* $f(-x)=1/f(x)$. Proof: $1 = f(0)= f(x-x) = f(x) f(-x)$. - -* If $f(x)$ is continuous anywhere (say at x = y), then it is continuous everywhere. Proof: $f(x+\delta)-f(x) = f(x-y) [ f(y+\delta) - f(y)] \rightarrow 0$ as $\delta\rightarrow 0$ by continuity at y. - -The second and third properties mean that it is sufficient to prove $f(x)=e^x$ for positive x. - -If $f(x)$ is a Lebesgue-integrable function, then -$$ -g(x) = \int_0^x f(x') dx'. -$$ - -It then follows that -$$ -g(x+y)-g(x) = \int_x^{x+y} f(x') dx' = \int_0^y f(x+x') dx' = f(x) g(y). 
-$$ - -Since $f(x)$ is nonzero, some y can be chosen such that $g(y) \neq 0$ and solve for $f(x)$ in the above expression. Therefore: - - - -\begin{align} - -f(x+\delta)-f(x) & = \frac{[g(x+\delta+y)-g(x+\delta)]-[g(x+y)-g(x)]}{g(y)} \\ - -& =\frac{[g(x+y+\delta)-g(x+y)]-[g(x+\delta)-g(x)]}{g(y)} \\ - -& =\frac{f(x+y)g(\delta)-f(x)g(\delta)}{g(y)}=g(\delta)\frac{f(x+y)-f(x)}{g(y)}. - -\end{align} - - - -The final expression must go to zero as $\delta\rightarrow 0$ since $g(0)=0$ and $g(x)$ is continuous. It follows that $f(x)$ is continuous. - -Now, $f(q) = e^{kq}$ can be proven, for some k, for all positive rational numbers q. Let q=n/m for positive integers n and m. Then -$$ -f\left(\frac{n}{m}\right)=f\left(\frac{1}{m}+\cdots+\frac{1}{m} \right)=f\left(\frac{1}{m}\right)^n -$$ - -by elementary induction on n. Therefore, $f(1/m)^m = f(1)$ and thus -$$ -f\left(\frac{n}{m}\right)=f(1)^{n/m}=e^{k(n/m)}. -$$ - -for $k = \ln [f(1)]$. If restricted to real-valued $f(x)$, then $f(x) = f(x/2)^2$ is everywhere positive and so k is real. - -Finally, by continuity, since $f(x) = e^{kx}$ for all rational x, it must be true for all real x since the closure of the rationals is the reals (that is, any real x can be written as the limit of a sequence of rationals). If $f(1) = e$ then k = 1. This is equivalent to characterization 1 (or 2, or 3), depending on which equivalent definition of e one uses. - -In the sense of definition 2, -$$ -\begin{align} \lim_{h\to 0} \frac{e^h-1}{h} & =\lim_{h\to 0} \frac{1}{h} \left (\left (1+h+ \frac{h^2}{2!}+\frac{h^3}{3!}+\frac{h^4}{4!}+\cdots \right) -1 \right) \\ & =\lim_{h\to 0} \left(1+ \frac{h}{2!}+\frac{h^2}{3!}+\frac{h^3}{4!}+\cdots \right) \\ & =1 \\ \end{align} -$$ - -The conditions f(0) = 1 and f(x + y) = f(x) f(y) imply both conditions in characterization 4. Indeed, one gets the initial condition f(0) = 1 by dividing both sides of the equation -$$ -f(0) = f(0 + 0) = f(0) f(0) -$$ - -by f(0), and the condition that f′(x) = f(x) follows from the condition that f′(0) = 1 and the definition of the derivative as follows: - - - -\begin{array}{rcccccc} - -f'(x) & = & \lim\limits_{h\to 0}\frac{f(x+h)-f(x)} h - -& = & \lim\limits_{h\to 0}\frac{f(x)f(h)-f(x)} h - -& = & \lim\limits_{h\to 0}f(x)\frac{f(h)-1} h - -\\[1em] - -& = & f(x)\lim\limits_{h\to 0}\frac{f(h)-1} h - -& = & f(x)\lim\limits_{h\to 0}\frac{f(0+h)-f(0)} h - -& = & f(x)f'(0) = f(x). - -\end{array} - - - -In the sense of definition 6, $\frac{d}{dx}e^x=\lim_{h \to 0} \frac{e^{x+h}-e^x}{h}=e^x \cdot \lim_{h \to 0}\frac{e^h-1}{h}=e^x.$ - -By the way $e^0=1$, therefore definition 6 implies definition 4. diff --git a/wiki/wikipedia/926.txt b/wiki/wikipedia/926.txt deleted file mode 100644 index 23e60bb89972ab6950df1ca07821bbf9e38d94aa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/926.txt +++ /dev/null @@ -1,102 +0,0 @@ -The Shapley–Folkman lemma is a result in convex geometry with applications in mathematical economics that describes the Minkowski addition of sets in a vector space. Minkowski addition is defined as the addition of the sets' members: for example, adding the set consisting of the integers zero and one to itself yields the set consisting of zero, one, and two: - -{0, 1} + {0, 1} = {0 + 0, 0 + 1, 1 + 0, 1 + 1} = {0, 1, 2}. - -The Shapley–Folkman lemma and related results provide an affirmative answer to the question, "Is the sum of many sets close to being convex?" 
A set is defined to be convex if every line segment joining two of its points is a subset in the set: For example, the solid disk $\bullet$ is a convex set but the circle $\circ$ is not, because the line segment joining two distinct points $\oslash$ is not a subset of the circle. The Shapley–Folkman lemma suggests that if the number of summed sets exceeds the dimension of the vector space, then their Minkowski sum is approximately convex. - -The Shapley–Folkman lemma was introduced as a step in the proof of the Shapley–Folkman theorem, which states an upper bound on the distance between the Minkowski sum and its convex hull. The convex hull of a set Q is the smallest convex set that contains Q. This distance is zero if and only if the sum is convex. - -The theorem's bound on the distance depends on the dimension D and on the shapes of the summand-sets, but not on the number of summand-sets N, when N > D. - -The shapes of a subcollection of only D summand-sets determine the bound on the distance between the Minkowski average of N sets - -1/N (Q1 + Q2 + ... + QN) - -and its convex hull. As N increases to infinity, the bound decreases to zero (for summand-sets of uniformly bounded size). The topic of non-convex sets in economics has been studied by many Nobel laureates, besides Lloyd Shapley who won the prize in 2012: Arrow (1972), Robert Aumann (2005), Gérard Debreu (1983), Tjalling Koopmans (1975), Paul Krugman (2008), and Paul Samuelson (1970); the complementary topic of convex sets in economics has been emphasized by these laureates, along with Leonid Hurwicz, Leonid Kantorovich (1975), and Robert Solow (1987). - -The Shapley–Folkman lemma has applications also in optimization and probability theory. In optimization theory, the Shapley–Folkman lemma has been used to explain the successful solution of minimization problems that are sums of many functions. - -The distance between the convex interval [0, 2] and the non-convex set {0, 1, 2} equals one-half - -1/2 = |1 - 1/2| = |0 - 1/2| = |2 - 3/2| = |1 - 3/2|. - -However, the distance between the average Minkowski sum - -1/2 ( {0, 1} + {0, 1} ) = {0, 1/2, 1} - -and its convex hull [0, 1] is only 1/4, which is half the distance (1/2) between its summand {0, 1} and [0, 1]. As more sets are added together, the average of their sum "fills out" its convex hull: The maximum distance between the average and its convex hull approaches zero as the average includes more summands. - -For every subset Q of a real vector space, its convex hull Conv(Q) is the minimal convex set that contains Q. Thus Conv(Q) is the intersection of all the convex sets that cover Q. The convex hull of a set can be equivalently defined to be the set of all convex combinations of points in Q. For example, the convex hull of the set of integers {0,1} is the closed interval of real numbers [0,1], which contains the integer end-points. - -[[File:Shapley–Folkman lemma.svg|thumb|upright=1.35|alt=The Shapley–Folkman lemma depicted by a diagram with two panes, one on the left and the other on the right. The left-hand pane displays four sets, which are displayed in a two-by-two array. Each of the sets contains exactly two points, which are displayed in red. In each set, the two points are joined by a pink line-segment, which is the convex hull of the original set. Each set has exactly one point that is indicated with a plus-symbol. 
In the top row of the two-by-two array, the plus-symbol lies in the interior of the line segment; in the bottom row, the plus-symbol coincides with one of the red-points. This completes the description of the left-hand pane of the diagram. The right-hand pane displays the Minkowski sum of the sets, which is the union of the sums having exactly one point from each summand-set; for the displayed sets, the sixteen sums are distinct points, which are displayed in red: The right-hand red sum-points are the sums of the left-hand red summand-points. The convex hull of the sixteen red-points is shaded in pink. In the pink interior of the right-hand sumset lies exactly one plus-symbol, which is the (unique) sum of the plus-symbols from the right-hand side. The right-hand plus-symbol is indeed the sum of the four plus-symbols from the left-hand sets, precisely two points from the original non-convex summand-sets and two points from the convex hulls of the remaining summand-sets. - -|Minkowski addition and convex hulls. The sixteen dark-red points (on the right) form the Minkowski sum of the four non-convex sets (on the left), each of which consists of a pair of red points. Their convex hulls (shaded pink) contain plus-signs (+): The right plus-sign is the sum of the left plus-signs.]] - -By the preceding identity, for every point $x\in\mathrm{Conv}(\sum_{n=1}^{N}Q_{n})$ there exist elements in the convex hulls, $q_{n}(x)\in\mathrm{Conv}(Q_{n})$ for $1\leq n\leq N$, dependent upon $x$, and such that $\sum_{n=1}^{N}q_{n}(x)=x$. - -Working with the above setup, the Shapley–Folkman lemma states that in the above representation -$$ -x=\sum_{n=1}^{N}q_{n}(x) -$$ - -at most $D$ of the summands $q_{n}(x)$ need to be taken strictly from the convex hulls. That is, there exists a representation of the above form, such that $|\{l\leq n\leq N\mid q_{n}(x)\in\mathrm{Conv}(Q_{n})\setminus Q_{n}\}|\leq D$. Shuffling indexes if necessary, this means that the point has a representation -$$ -x = \sum_{n=1}^{D}q_{n}(x) + \sum_{n=D+1}^{N} q_n(x) -$$ - -where $q_{n}\in\mathrm{Conv}(Q_{n})$ for $1\leq n\leq D$ - -and $q_{n}\in Q_{n}$ for $D+1\leq n\leq N$. Note that the re-indexing depends on the point. More succinctly, the Shapley–Folkman lemma states that -$$ -\mathrm{Conv}(\sum_{n=1}^{N}Q_{n})\subseteq\bigcup_{I\subseteq\{1,2,\ldots N\}:~|I|=D}\sum_{n\in I}\mathrm{Conv}(Q_{n})+\sum_{n\notin I}Q_{n}. -$$ - -As an example, every point in $[0,2]=[0,1]+[0,1]=\mathrm{Conv}(\{0,1\})+\mathrm{Conv}(\{0,1\})$ is according to the lemma the sum of an element in $\{0,1\}$ and an element in $[0,1]$. - -* The inner radius of a set Qn is defined to be the smallest number r such that, for any point q in the convex hull of Qn, there is a sphere of radius r that contains a subset of Qn whose convex hull contains q. - -Starr used the inner radius to reduce the upper bound stated in the Shapley–Folkman theorem: - -* Starr's corollary to the Shapley–Folkman theorem states that the squared Euclidean distance from any point x in the convexified sum Conv( Σ Qn ) to the original (unconvexified) sum Σ Qn is bounded by the sum of the squares of the D largest inner-radii of the sets Qn. - -Nonetheless, non-convex preferences were illuminated from 1959 to 1961 by a sequence of papers in The Journal of Political Economy (JPE). The main contributors were Farrell,Bator, Koopmans, and Rothenberg. In particular, Rothenberg's paper discussed the approximate convexity of sums of non-convex sets. 
These JPE-papers stimulated a paper by Lloyd Shapley and Martin Shubik, which considered convexified consumer-preferences and introduced the concept of an "approximate equilibrium". The JPE-papers and the Shapley–Shubik paper influenced another notion of "quasi-equilibria", due to Robert Aumann. - -Previous publications on non-convexity and economics were collected in an annotated bibliography by Kenneth Arrow. He gave the bibliography to Starr, who was then an undergraduate enrolled in Arrow's (graduate) advanced mathematical-economics course. In his term-paper, Starr studied the general equilibria of an artificial economy in which non-convex preferences were replaced by their convex hulls. In the convexified economy, at each price, the aggregate demand was the sum of convex hulls of the consumers' demands. Starr's ideas interested the mathematicians Lloyd Shapley and Jon Folkman, who proved their eponymous lemma and theorem in "private correspondence", which was reported by Starr's published paper of 1969. The Shapley–Folkman–Starr results have been featured in the economics literature: in microeconomics, in general-equilibrium theory, in public economics (including market failures), as well as in game theory, in mathematical economics, and in applied mathematics (for economists). The Shapley–Folkman–Starr results have also influenced economics research using measure and integration theory. - -The Shapley–Folkman lemma has been used to explain why large minimization problems with non-convexities can be nearly solved (with iterative methods whose convergence proofs are stated for only convex problems). The Shapley–Folkman lemma has encouraged the use of methods of convex minimization on other applications with sums of many functions. - -For example, the quadratic function f(x) = x2 is convex, as is the absolute value function g(x) = |x|. However, the sine function is non-convex on the interval (0, π). - -In many optimization problems, the objective function f is separable: that is, f is the sum of many summand-functions, each of which has its own argument: - -f(x) = f( (x1, ..., xN) ) = Σ fn(xn). - -For example, problems of linear optimization are separable. Given a separable problem with an optimal solution, we fix an optimal solution - -xmin = (x1, ..., xN)min - -with the minimum value f(xmin). For this separable problem, we also consider an optimal solution (xmin, f(xmin) ) - -to the "convexified problem", where convex hulls are taken of the graphs of the summand functions. Such an optimal solution is the limit of a sequence of points in the convexified problem - -$(x_j, f(x_j)) \in \Sigma \operatorname{Conv}(\operatorname{Graph}(f_n))$. Ekeland's analysis explained the success of methods of convex minimization on large and separable problems, despite the non-convexities of the summand functions. Ekeland and later authors argued that additive separability produced an approximately convex aggregate problem, even though the summand functions were non-convex. The crucial step in these publications is the use of the Shapley–Folkman lemma. - -Convex sets are often studied with probability theory. Each point in the convex hull of a (non-empty) subset Q of a finite-dimensional space is the expected value of a simple random vector that takes its values in Q, as a consequence of Carathéodory's lemma.
Thus, for a non-empty set Q, the collection of the expected values of the simple, Q-valued random vectors equals the convex hull of Q; this equality implies that the Shapley–Folkman–Starr results are useful in probability theory. In the other direction, probability theory provides tools to examine convex sets generally and the Shapley–Folkman–Starr results specifically. The Shapley–Folkman–Starr results have been widely used in the probabilistic theory of random sets, for example, to prove a law of large numbers and a central limit theorem for random sets. They are also related to Lyapunov's theorem on the range of a vector measure. Here, the traditional term "range" (alternatively, "image") is the set of values produced by the function. - -A vector measure is a vector-valued generalization of a measure; for example, if p1 and p2 are probability measures defined on the same measurable space, then the pair (p1, p2) is a vector measure, where (p1, p2) is defined for every event ω by - -(p1, p2)(ω)=(p1(ω), p2(ω)). - -Lyapunov's theorem has been used in economics, and it has been called a continuous counterpart of the Shapley–Folkman lemma. diff --git a/wiki/wikipedia/927.txt b/wiki/wikipedia/927.txt deleted file mode 100644 index 23e60bb89972ab6950df1ca07821bbf9e38d94aa..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/927.txt +++ /dev/null @@ -1,229 +0,0 @@ -In probability theory, the Chernoff bound gives exponentially decreasing bounds on tail distributions of sums of independent random variables. Despite being named after Herman Chernoff, the author of the paper it first appeared in, the result is due to Herman Rubin. It is a sharper bound than the known first- or second-moment-based tail bounds such as Markov's inequality or Chebyshev's inequality, which only yield power-law bounds on tail decay. However, the Chernoff bound requires that the variates be independent – a condition that neither Markov's inequality nor Chebyshev's inequality require, although Chebyshev's inequality does require the variates to be pairwise independent. - -It is related to the (historically prior) Bernstein inequalities and to Hoeffding's inequality. - -The generic Chernoff bound for a random variable X is attained by applying Markov's inequality to etX. This gives a bound in terms of the moment-generating function of X. For every $t \geq 0$: -$$ -\Pr(X \geq a) = \Pr(e^{t\cdot X} \geq e^{t\cdot a}) \leq \frac{\mathrm{E}\left [e^{t\cdot X}\right]}{e^{t\cdot a}}. -$$ - -Since this bound is true for every $t$, we have: -$$ -\Pr(X \geq a) \leq \inf_{t \geq 0} \frac{\mathrm{E}\left [\exp(t\cdot X)\right]}{\exp(t\cdot a)}. -$$ - -The Chernoff bound sometimes refers to the above inequality, which was first applied by Sergei Bernstein to prove the related Bernstein inequalities. It is also used to prove Hoeffding's inequality, Bennett's inequality, and McDiarmid's inequality. - -This inequality can be applied generally to various classes of distributions, including sub-gaussian distributions, sub-gamma distributions, and sums of independent random variables. When X is a sum of independent Bernoulli random variables with mean $\mu = \mathrm{E}[X]$, the following looser but more convenient multiplicative bounds are often used, which follow from the inequality $\textstyle\frac{2\delta}{2+\delta} \le \log(1+\delta)$ from the list of logarithmic inequalities: -$$ -\Pr( X \le (1-\delta)\mu) \le e^{-\frac{\delta^2\mu}{2}}, \qquad 0 \le \delta \le 1, -$$ -$$ -\Pr( X \ge (1+\delta)\mu)\le e^{-\frac{\delta^2\mu}{2+\delta}}, \qquad 0 \le \delta, -$$ -$$ -\Pr( |X - \mu| \ge \delta\mu) \le 2e^{-\frac{\delta^2\mu}{3}}, \qquad 0 \le \delta \le 1. -$$ - -The following theorem is due to Wassily Hoeffding and hence is called the Chernoff–Hoeffding theorem. - -Chernoff–Hoeffding theorem. Suppose X1, ..., Xn are i.i.d. random variables, taking values in {0, 1}. Let p = E[X1] and ε > 0. - -\begin{align} - -\Pr \left (\frac{1}{n} \sum X_i \geq p + \varepsilon \right ) \leq \left (\left (\frac{p}{p + \varepsilon}\right )^{p+\varepsilon} {\left (\frac{1 - p}{1-p- \varepsilon}\right )}^{1 - p- \varepsilon}\right )^n &= e^{-D(p+\varepsilon\parallel p) n} \\ - -\Pr \left (\frac{1}{n} \sum X_i \leq p - \varepsilon \right ) \leq \left (\left (\frac{p}{p - \varepsilon}\right )^{p-\varepsilon} {\left (\frac{1 - p}{1-p+ \varepsilon}\right )}^{1 - p+ \varepsilon}\right )^n &= e^{-D(p-\varepsilon\parallel p) n} - -\end{align} - -where -$$ - D(x\parallel y) = x \ln \frac{x}{y} + (1-x) \ln \left (\frac{1-x}{1-y} \right ) -$$ - -is the Kullback–Leibler divergence between Bernoulli distributed random variables with parameters x and y respectively. If p ≥ 1/2, then $D(p+\varepsilon\parallel p)\ge \tfrac{\varepsilon^2}{2p(1-p)}$ which means -$$ - \Pr\left ( \frac{1}{n}\sum X_i>p+x \right ) \leq \exp \left (-\frac{x^2n}{2p(1-p)} \right ). -$$ - -A simpler bound follows by relaxing the theorem using $D(p + \varepsilon \parallel p) \geq 2\varepsilon^2$, which follows from the convexity of $D(p+\varepsilon\parallel p)$ in $\varepsilon$ and the fact that -$$ -\frac{d^2}{d\varepsilon^2} D(p+\varepsilon\parallel p) = \frac{1}{(p+\varepsilon)(1-p-\varepsilon) } \geq 4 =\frac{d^2}{d\varepsilon^2}(2\varepsilon^2). -$$ - -This result is a special case of Hoeffding's inequality. Sometimes, the bounds - - - -\begin{align} - -D( (1+x) p \parallel p) \geq \frac{1}{4} x^2 p, & & & {-\tfrac{1}{2}} \leq x \leq \tfrac{1}{2},\\[6pt] - -D(x \parallel y) \geq \frac{3(x-y)^2}{2(2y+x)}, \\[6pt] - -D(x \parallel y) \geq \frac{(x-y)^2}{2y}, & & & x \leq y,\\[6pt] - -D(x \parallel y) \geq \frac{(x-y)^2}{2x}, & & & x \geq y - -\end{align} - - - -which are stronger for p < 1/8, are also used. - -Chernoff bounds may also be applied to general sums of independent, bounded random variables, regardless of their distribution; this is known as Hoeffding's inequality. The proof follows a similar approach to the other Chernoff bounds, but applying Hoeffding's lemma to bound the moment generating functions (see Hoeffding's inequality). - -Hoeffding's inequality. Suppose X1, ..., Xn are independent random variables taking values in [a,b]. Let X denote their sum and let μ = E[X] denote the sum's expected value. Then for any δ > 0, -$$ -\Pr (X \le (1-\delta)\mu) < e^{-\frac{2\delta^2 \mu^2}{n(b-a)^2}}, -$$ -$$ -\Pr (X \ge (1+\delta)\mu) < e^{-\frac{2\delta^2 \mu^2}{n(b-a)^2}}. -$$ - -Chernoff bounds have very useful applications in set balancing and packet routing in sparse networks. - -The set balancing problem arises while designing statistical experiments. Typically while designing a statistical experiment, given the features of each participant in the experiment, we need to know how to divide the participants into two disjoint groups such that each feature is roughly as balanced as possible between the two groups. - -Chernoff bounds are also used to obtain tight bounds for permutation routing problems which reduce network congestion while routing packets in sparse networks. - -The use of the Chernoff bound permits one to abandon the strong (and mostly unrealistic) small-perturbation hypothesis (the perturbation magnitude is small).
The robustness level can be, in turn, used either to validate or reject a specific algorithmic choice, a hardware implementation or the appropriateness of a solution whose structural parameters are affected by uncertainties. - -A simple and common use of Chernoff bounds is for "boosting" of randomized algorithms. If one has an algorithm that outputs a guess that is the desired answer with probability p > 1/2, then one can get a higher success rate by running the algorithm $n = \log(1/\delta) 2p/(p - 1/2)^2$ times and outputting a guess that is output by more than n/2 runs of the algorithm. (There cannot be more than one such guess by the pigeonhole principle.) Assuming that these algorithm runs are independent, the probability that more than n/2 of the guesses is correct is equal to the probability that the sum of independent Bernoulli random variables Xk that are 1 with probability p is more than n/2. This can be shown to be at least $1-\delta$ via the multiplicative Chernoff bound (Corollary 13.3 in Sinclair's class notes, μ = np).: -$$ -\Pr\left[X > {n \over 2}\right] \ge 1 - e^{-\frac{1}{2p}n \left(p - \frac{1}{2} \right)^2} \geq 1-\delta -$$ - -Rudolf Ahlswede and Andreas Winter introduced a Chernoff bound for matrix-valued random variables. The following version of the inequality can be found in the work of Tropp. - -Let M1, ..., Mt be independent matrix valued random variables such that $ M_i\in \mathbb{C}^{d_1 \times d_2} $ and $ \mathbb{E}[M_i]=0$. - -Let us denote by $ \lVert M \rVert $ the operator norm of the matrix $ M $. If $ \lVert M_i \rVert \leq \gamma $ holds almost surely for all $ i\in\{1,\ldots, t\} $, then for every ε > 0 -$$ -\Pr\left( \left\| \frac{1}{t} \sum_{i=1}^t M_i \right\| > \varepsilon \right) \leq (d_1+d_2) \exp \left( -\frac{3\varepsilon^2 t}{8\gamma^2} \right). -$$ - -Notice that in order to conclude that the deviation from 0 is bounded by ε with high probability, we need to choose a number of samples $t $ proportional to the logarithm of $ d_1+d_2 $. In general, unfortunately, a dependence on $ \log(\min(d_1,d_2)) $ is inevitable: take for example a diagonal random sign matrix of dimension $d\times d $. The operator norm of the sum of t independent samples is precisely the maximum deviation among d independent random walks of length t. In order to achieve a fixed bound on the maximum deviation with constant probability, it is easy to see that t should grow logarithmically with d in this scenario. - -The following theorem can be obtained by assuming M has low rank, in order to avoid the dependency on the dimensions. - -Let 0 < ε < 1 and M be a random symmetric real matrix with $\| \mathrm{E}[M] \| \leq 1 $ and $\| M\| \leq \gamma $ almost surely. Assume that each element on the support of M has at most rank r. Set -$$ - t = \Omega \left( \frac{\gamma\log (\gamma/\varepsilon^2)}{\varepsilon^2} \right). -$$ - -If $ r \leq t $ holds almost surely, then -$$ -\Pr\left(\left\| \frac{1}{t} \sum_{i=1}^t M_i - \mathrm{E}[M] \right\| > \varepsilon \right) \leq \frac{1}{\mathbf{poly}(t)} -$$ - -where M1, ..., Mt are i.i.d. copies of M. - -Garg, Lee, Song and Srivastava proved a Chernoff-type bound for sums of matrix-valued random variables sampled via a random walk on an expander, confirming a conjecture due to Wigderson and Xiao. - -Kyng and Song proved a Chernoff-type bound for sums of Laplacian matrix of random spanning trees. 
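Returning to the boosting application above, a small simulation makes the guarantee concrete. The parameter values below are illustrative choices, not from the original text; the quantity printed last is the multiplicative-Chernoff lower bound quoted above:

```python
import math
import random

def boosted_success_rate(p, n, trials=20_000):
    """Empirical probability that a majority vote over n independent runs of a
    base algorithm (each correct with probability p > 1/2) gives the right answer."""
    wins = 0
    for _ in range(trials):
        correct_runs = sum(random.random() < p for _ in range(n))
        wins += correct_runs > n / 2
    return wins / trials

p, delta = 0.6, 0.05                                          # base accuracy, target failure rate
n = math.ceil(math.log(1 / delta) * 2 * p / (p - 0.5) ** 2)   # repetition count from the text
chernoff_lower_bound = 1 - math.exp(-n * (p - 0.5) ** 2 / (2 * p))  # equals 1 - delta
print(f"n = {n}, empirical = {boosted_success_rate(p, n):.4f}, bound = {chernoff_lower_bound:.4f}")
```

As expected, the empirical success rate sits well above the bound: the Chernoff guarantee is valid but conservative.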
- -The following variant of Chernoff's bound can be used to bound the probability that a majority in a population will become a minority in a sample, or vice versa. - -Suppose there is a general population A and a sub-population B⊆A. Mark the relative size of the sub-population (|B|/|A|) by r. - -Suppose we pick an integer k and a random sample S⊂A of size k. Mark the relative size of the sub-population in the sample (|B∩S|/|S|) by rS. - -Then, for every fraction d∈[0,1]: -$$ -\mathrm{Pr}\left(r_S < (1-d)\cdot r\right) < \exp\left(-r\cdot d^2 \cdot k/2\right) -$$ - -In particular, if B is a majority in A (i.e. r > 0.5) we can bound the probability that B will remain majority in S (rS>0.5) by taking: d = 1 - 1 / (2 r): -$$ -\mathrm{Pr}\left(r_S > 0.5\right) > 1 - \exp\left(-r\cdot \left(1 - \frac{1}{2 r}\right)^2 \cdot k/2\right) -$$ - -This bound is of course not tight at all. For example, when r=0.5 we get a trivial bound Prob > 0. - -Following the conditions of the multiplicative Chernoff bound, let X1, ..., Xn be independent Bernoulli random variables, whose sum is X, each having probability pi of being equal to 1. For a Bernoulli variable: -$$ -\mathrm{E} \left[e^{t\cdot X_i} \right] = (1 - p_i) e^0 + p_i e^t = 1 + p_i (e^t -1) \leq e^{p_i (e^t - 1)} -$$ - -So, using () with $a = (1+\delta)\mu$ for any $\delta>0$ and where $\mu = \mathrm{E}[X] = \textstyle\sum_{i=1}^n p_i$, - -\begin{align} - -\Pr (X > (1 + \delta)\mu) &\le \inf_{t \geq 0} \exp(-t(1+\delta)\mu)\prod_{i=1}^n\operatorname{E}[\exp(tX_i)]\\[4pt] - -& \leq \inf_{t \geq 0} \exp\Big(-t(1+\delta)\mu + \sum_{i=1}^n p_i(e^t - 1)\Big) \\[4pt] - -& = \inf_{t \geq 0} \exp\Big(-t(1+\delta)\mu + (e^t - 1)\mu\Big). - -\end{align} - -If we simply set t = log(1 + δ) so that t > 0 for δ > 0, we can substitute and find -$$ -\exp\Big(-t(1+\delta)\mu + (e^t - 1)\mu\Big) = \frac{\exp((1+\delta - 1)\mu)}{(1+\delta)^{(1+\delta)\mu}} = \left[\frac{e^\delta}{(1+\delta)^{(1+\delta)}}\right]^\mu. -$$ - -This proves the result desired. - -Let q = p + ε. Taking a = nq in (), we obtain: -$$ -\Pr\left ( \frac{1}{n} \sum X_i \ge q\right )\le \inf_{t>0} \frac{E \left[\prod e^{t X_i}\right]}{e^{tnq}} = \inf_{t>0} \left ( \frac{ E\left[e^{tX_i} \right] }{e^{tq}}\right )^n. -$$ - -Now, knowing that Pr(Xi = 1) = p, Pr(Xi = 0) = 1 − p, we have -$$ -\left (\frac{\mathrm{E}\left[e^{tX_i} \right] }{e^{tq}}\right )^n = \left (\frac{p e^t + (1-p)}{e^{tq} }\right )^n = \left ( pe^{(1-q)t} + (1-p)e^{-qt} \right )^n. -$$ - -Therefore, we can easily compute the infimum, using calculus: -$$ -\frac{d}{dt} \left (pe^{(1-q)t} + (1-p)e^{-qt} \right) = (1-q)pe^{(1-q)t}-q(1-p)e^{-qt} -$$ - -Setting the equation to zero and solving, we have - -\begin{align} - -(1-q)pe^{(1-q)t} &= q(1-p)e^{-qt} \\ - -(1-q)pe^{t} &= q(1-p) - -\end{align} - -so that -$$ -e^t = \frac{(1-p)q}{(1-q)p}. -$$ - -Thus, -$$ -t = \log\left(\frac{(1-p)q}{(1-q)p}\right). -$$ - -As q = p + ε > p, we see that t > 0, so our bound is satisfied on t. 
Having solved for t, we can plug back into the equations above to find that - -\begin{align} - -\log \left (pe^{(1-q)t} + (1-p)e^{-qt} \right ) &= \log \left ( e^{-qt}(1-p+pe^t) \right ) \\ - -&= \log\left (e^{-q \log\left(\frac{(1-p)q}{(1-q)p}\right)}\right) + \log\left(1-p+pe^{\log\left(\frac{1-p}{1-q}\right)}e^{\log\frac{q}{p}}\right ) \\ - -&= -q\log\frac{1-p}{1-q} -q \log\frac{q}{p} + \log\left(1-p+ p\left(\frac{1-p}{1-q}\right)\frac{q}{p}\right) \\ - -&= -q\log\frac{1-p}{1-q} -q \log\frac{q}{p} + \log\left(\frac{(1-p)(1-q)}{1-q}+\frac{(1-p)q}{1-q}\right) \\ - -&= -q \log\frac{q}{p} + \left ( -q\log\frac{1-p}{1-q} + \log\frac{1-p}{1-q} \right ) \\ - -&= -q\log\frac{q}{p} + (1-q)\log\frac{1-p}{1-q} \\ - -&= -D(q \parallel p). - -\end{align} - -We now have our desired result, that -$$ -\Pr \left (\tfrac{1}{n}\sum X_i \ge p + \varepsilon\right ) \le e^{-D(p+\varepsilon\parallel p) n}. -$$ - -To complete the proof for the symmetric case, we simply define the random variable Yi = 1 − Xi, apply the same proof, and plug it into our bound. diff --git a/wiki/wikipedia/928.txt b/wiki/wikipedia/928.txt deleted file mode 100644 index 5cf64389d534aad2cbab8347a617e776d65155c9..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/928.txt +++ /dev/null @@ -1,41 +0,0 @@ -In physics and mathematics, the Golden–Thompson inequality is a trace inequality between exponentials of symmetric and Hermitian matrices proved independently by Golden and Thompson. It has been developed in the context of statistical mechanics, where it has come to have a particular significance. - -The Golden–Thompson inequality states that for (real) symmetric or (complex) Hermitian matrices A and B, the following trace inequality holds: -$$ - \operatorname{tr} e^{A+B} \le \operatorname{tr} \left(e^A e^B\right). -$$ - -This inequality is well defined, since the quantities on either side are real numbers. For the expression on right hand side of the inequality, this can be seen by rewriting it as $\operatorname{tr}(e^{A/2}e^B e^{A/2})$ using the cyclic property of the trace. - -The Golden–Thompson inequality can be viewed as a generalization of a stronger statement for real numbers. If a and b are two real numbers, then the exponential of a+b is the product of the exponential of a with the exponential of b: -$$ - e^{a+b} = e^a e^b . -$$ - -If we replace a and b with commuting matrices A and B, then the same inequality $ e^{A+B} = e^A e^B$ holds. - -This relationship is not true if A and B do not commute. In fact, Petz proved that if A and B are two Hermitian matrices for which the Golden–Thompson inequality is verified as an equality, then the two matrices commute. The Golden–Thompson inequality shows that, even though $e^{A+B}$ and $e^Ae^B$ are not equal, they are still related by an inequality. - -The Golden–Thompson inequality generalizes to any unitarily invariant norm. If A and B are Hermitian matrices and $\|\cdot\|$ is a unitarily invariant norm, then -$$ -\|e^{A+B}\| \leq \|e^{A/2}e^Be^{A/2}\| . -$$ - -The standard Golden–Thompson inequality is a special case of the above inequality, where the norm is the Schatten norm with $p=1$. Since $e^{A+B}$ and $e^{A/2}e^Be^{A/2}$ are both positive semidefinite matrices, $\operatorname{tr}(e^{A+B}) = \|e^{A+B}\|_1$ and $\operatorname{tr}(e^{A/2}e^Be^{A/2}) = \|e^{A/2}e^Be^{A/2}\|_1$. - -The inequality has been generalized to three matrices by Lieb and furthermore to any arbitrary number of Hermitian matrices by Sutter. 
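Before turning to generalizations with three or more matrices, the two-matrix inequality is easy to test numerically. This is a sketch assuming NumPy and SciPy are available, with random Hermitian matrices:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(n):
    """A random n x n complex Hermitian matrix."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

for _ in range(5):
    A, B = random_hermitian(4), random_hermitian(4)
    lhs = np.trace(expm(A + B)).real          # tr e^{A+B}
    rhs = np.trace(expm(A) @ expm(B)).real    # tr(e^A e^B), real by cyclicity of the trace
    assert lhs <= rhs + 1e-9
    print(f"tr e^(A+B) = {lhs:12.4f}  <=  tr(e^A e^B) = {rhs:12.4f}")
```

For commuting A and B the two sides coincide, in line with Petz's result quoted above.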
A naive attempt at generalization does not work: the inequality -$$ -\operatorname{tr}(e^{A+B+C}) \leq |\operatorname{tr}(e^Ae^Be^C)| -$$ - -is false. For three matrices, the correct generalization takes the following form: - - \operatorname{tr} e^{A+B+C} \le \operatorname{tr} \left(e^A \mathcal{T}_{e^{-B}} e^C\right), - - - -where the operator $\mathcal{T}_f$ is the derivative of the matrix logarithm given by $ \mathcal{T}_f(g) = \int_0^\infty \operatorname{d}t (f+t)^{-1} g (f+t)^{-1} $. - -Note that, if $f$ and $g$ commute, then $ \mathcal{T}_f(g) = gf^{-1}$, and the inequality for three matrices reduces to the original from Golden and Thompson. - -The Kostant convexity theorem has been used to generalize the Golden–Thompson inequality to all compact Lie groups. diff --git a/wiki/wikipedia/929.txt b/wiki/wikipedia/929.txt deleted file mode 100644 index 82b39adf2a0e12c3cf28783ec02ed3b2e37f3282..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/929.txt +++ /dev/null @@ -1,49 +0,0 @@ -In number theory, Znám's problem asks which sets of k integers have the property that each integer in the set is a proper divisor of the product of the other integers in the set, plus 1. Znám's problem is named after the Slovak mathematician Štefan Znám, who suggested it in 1972, although other mathematicians had considered similar problems around the same time. One closely related problem drops the assumption of properness of the divisor, and will be called the improper Znám problem hereafter. - -One solution to the improper Znám problem is easily provided for any k: the first k terms of Sylvester's sequence have the required property. Sun showed that there is at least one solution to the (proper) Znám problem for each k ≥ 5. Sun's solution is based on a recurrence similar to that for Sylvester's sequence, but with a different set of initial values. - -The Znám problem is closely related to Egyptian fractions. It is known that there are only finitely many solutions for any fixed k. It is unknown whether there are any solutions to Znám's problem using only odd numbers, and there remain several other open questions. - -Znám's problem asks which sets of integers have the property that each integer in the set is a proper divisor of the product of the other integers in the set, plus 1. That is, given k, what sets of integers -$$ -\{n_1, \ldots, n_k\} -$$ - -are there, such that, for each i, ni divides but is not equal to -$$ -\Bigl(\prod_{j \ne i} n_j\Bigr) + 1\quad ? -$$ - -A closely related problem concerns sets of integers in which each integer in the set is a divisor, but not necessarily a proper divisor, of one plus the product of the other integers in the set. This problem does not seem to have been named in the literature, and will be referred to as the improper Znám problem. Any solution to Znám's problem is also a solution to the improper Znám problem, but not necessarily vice versa. - -Znám's problem is named after the Slovak mathematician Štefan Znám, who suggested it in 1972. Barbeau had posed the improper Znám problem for k = 3, and Mordell, independently of Znám, found all solutions to the improper problem for k ≤ 5. Skula showed that Znám's problem is unsolvable for k < 5, and credited J. Janák with finding the solution {2, 3, 11, 23, 31} for k = 5. - -Another solution for k = 5 is {2, 3, 7, 47, 395}. A few calculations will show that each member of this set is a proper divisor of the product of the other members plus one; for instance, 2 · 3 · 7 · 47 = 1974, and 1975 = 5 · 395. - -An interesting "near miss" for k = 4 is the set {2, 3, 7, 43}, formed by taking the first four terms of Sylvester's sequence.
It has the property that each integer in the set divides the product of the other integers in the set, plus 1, but the last member of this set is equal to the product of the first three members plus one, rather than being a proper divisor. Thus, it is a solution to the improper Znám problem, but not a solution to Znám's problem as it is usually defined. - -Any solution to the improper Znám problem is equivalent (via division by the product of the xi's) to a solution to the equation -$$ -\sum\frac1{x_i} + \prod\frac1{x_i}=y, -$$ - -where y as well as each xi must be an integer, and conversely any such solution corresponds to a solution to the improper Znám problem. However, all known solutions have y = 1, so they satisfy the equation -$$ -\sum\frac1{x_i} + \prod\frac1{x_i}=1. -$$ - -That is, they lead to an Egyptian fraction representation of the number one as a sum of unit fractions. Several of the cited papers on Znám's problem study also the solutions to this equation. Brenton describe an application of the equation in topology, to the classification of singularities on surfaces, and Domaratzki describe an application to the theory of nondeterministic finite automata. - -As Janák showed, the number of solutions for any k is finite, so it makes sense to count the total number of solutions for each k. - -Brenton and Vasiliu calculated that the number of solutions for small values of k, starting with k = 5, forms the sequence - -2, 5, 18, 96 . - -Presently, a few solutions are known for k = 9 and k = 10, but it is unclear how many solutions remain undiscovered for those values of k. - -However, there are infinitely many solutions if k is not fixed: - -Cao showed that there are at least 39 solutions for each k ≥ 12, improving earlier results proving the existence of fewer solutions (, ). Sun conjecture that the number of solutions for each value of k grows monotonically with k. - -It is unknown whether there are any solutions to Znám's problem using only odd numbers. With one exception, all known solutions start with 2. If all numbers in a solution to Znám's problem or the improper Znám problem are prime, their product is a primary pseudoperfect number ; it is unknown whether infinitely many solutions of this type exist. diff --git a/wiki/wikipedia/93.txt b/wiki/wikipedia/93.txt deleted file mode 100644 index cc8a67c44ff0af2c730bded54b7e67fefb33d6dc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/93.txt +++ /dev/null @@ -1,22 +0,0 @@ -The Duffin–Schaeffer conjecture is a conjecture (now a theorem) in mathematics, specifically, the Diophantine approximation proposed by R. J. Duffin and A. C. Schaeffer in 1941. It states that if $f : \mathbb{N} \rightarrow \mathbb{R}^+$ is a real-valued function taking on positive values, then for almost all $\alpha$ (with respect to Lebesgue measure), the inequality -$$ -\left| \alpha - \frac{p}{q} \right| < \frac{f(q)}{q} -$$ - -has infinitely many solutions in coprime integers $p,q$ with $q > 0$ if and only if -$$ -\sum_{q=1}^\infty f(q) \frac{\varphi(q)}{q} = \infty, -$$ - -where $\varphi(q)$ is Euler's totient function. - -In 2019, the Duffin–Schaeffer conjecture was proved by Dimitris Koukoulopoulos and James Maynard. - -That existence of the rational approximations implies divergence of the series follows from the Borel–Cantelli lemma. The converse implication is the crux of the conjecture. This was strengthened by Jeffrey Vaaler in 1978 to the case $f(n) = O(n^{-1})$. 
More recently, this was strengthened to the conjecture being true whenever there exists some $\varepsilon > 0 $ such that the series -$$ -\sum_{n=1}^\infty \left(\frac{f(n)}{n}\right)^{1 + \varepsilon} \varphi(n) = \infty -$$. This was done by Haynes, Pollington, and Velani. - -In 2006, Beresnevich and Velani proved that a Hausdorff measure analogue of the Duffin–Schaeffer conjecture is equivalent to the original Duffin–Schaeffer conjecture, which is a priori weaker. This result is published in the Annals of Mathematics. - -In July 2019, Dimitris Koukoulopoulos and James Maynard announced a proof of the conjecture. In July 2020, the proof was published in the Annals of Mathematics. diff --git a/wiki/wikipedia/930.txt b/wiki/wikipedia/930.txt deleted file mode 100644 index 87c2b79ea9e252e17127634b11b003ede104c6d5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/930.txt +++ /dev/null @@ -1 +0,0 @@ -In the mathematics of topological vector spaces, Minlos's theorem states that a cylindrical measure on the dual of a nuclear space is a Radon measure if its Fourier transform is continuous. It is named after Robert Adol'fovich Minlos and can be proved using Sazonov's theorem. diff --git a/wiki/wikipedia/931.txt b/wiki/wikipedia/931.txt deleted file mode 100644 index 2e2834710b9c613409e9c4e6af89b5742d5d0e3e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/931.txt +++ /dev/null @@ -1,5 +0,0 @@ -In differential geometry, Meusnier's theorem states that all curves on a surface passing through a given point p and having the same tangent line at p also have the same normal curvature at p and their osculating circles form a sphere. The theorem was first announced by Jean Baptiste Meusnier in 1776, but not published until 1785. - -At least prior to 1912, several writers in English were in the habit of calling the result Meunier's theorem, although there is no evidence that Meusnier himself ever spelt his name in this way. - -This alternative spelling of Meusnier's name also appears on the Arc de Triomphe in Paris. diff --git a/wiki/wikipedia/932.txt b/wiki/wikipedia/932.txt deleted file mode 100644 index f8fe4d41d17e6273bc58f80b76b705a0508d85bb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/932.txt +++ /dev/null @@ -1,80 +0,0 @@ -In mathematics, a Riemann–Roch theorem for smooth manifolds is a version of results such as the Hirzebruch–Riemann–Roch theorem or Grothendieck–Riemann–Roch theorem (GRR) without a hypothesis making the smooth manifolds involved carry a complex structure. Results of this kind were obtained by Michael Atiyah and Friedrich Hirzebruch in 1959, reducing the requirements to something like a spin structure. - -Let X and Y be oriented smooth closed manifolds, - -and f: X → Y a continuous map. - -Let vf=f*(TY) - TX in the K-group - -K(X). - -If dim(X) ≡ dim(Y) mod 2, then -$$ -\mathrm{ch}(f_{K*}(x)) = f_{H*}(\mathrm{ch}(x) e^{d(v_f)/2}\hat{A}(v_f)), -$$ - -where ch is the Chern character, d(vf) an element of - -the integral cohomology group H2(Y, Z) satisfying - -d(vf) ≡ f* w2(TY)-w2(TX) mod 2, - -fK* the Gysin homomorphism for K-theory, - -and fH* the Gysin homomorphism for cohomology - -. - -This theorem was first proven by Atiyah and Hirzebruch. - -The theorem is proven by considering several special cases. - -If Y is the Thom space of a vector bundle V over X, - -then the Gysin maps are just the Thom isomorphism. - -Then, using the splitting principle, it suffices to check the theorem via explicit computation for line - -bundles. 
- -If f: X → Y is an embedding, then the Thom space of the normal bundle of X in Y can be viewed as a tubular neighborhood of X in Y, and excision gives a map -$$ -u:H^*(B(N), S(N)) \to H^*(Y, Y-B(N)) \to H^*(Y) -$$ - -and -$$ -v:K(B(N), S(N)) \to K(Y, Y-B(N)) \to K(Y). -$$ - -The Gysin map for K-theory/cohomology is defined to be the composition of the Thom isomorphism with these maps. Since the theorem holds for the map from X to the Thom space of N, and since the Chern character commutes with u and v, the theorem is also true for embeddings f: X → Y. - -Finally, we can factor a general map f: X → Y into an embedding -$$ -i: X \to Y \times S^{2n} -$$ - -and the projection -$$ -p: Y \times S^{2n} \to Y. -$$ - -The theorem is true for the embedding. The Gysin map for the projection is the Bott-periodicity isomorphism, which commutes with the Chern character, so the theorem holds in this general case also. - -Atiyah and Hirzebruch then specialised and refined these results in the case X = a point, where the condition becomes the existence of a spin structure on Y. Corollaries concern Pontryagin classes and the J-homomorphism. diff --git a/wiki/wikipedia/933.txt b/wiki/wikipedia/933.txt deleted file mode 100644 index 70a96ea7416f280f81521c16e7d29f9d184014a6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/933.txt +++ /dev/null @@ -1,25 +0,0 @@ -In the branch of mathematics called knot theory, the volume conjecture is the following open problem that relates quantum invariants of knots to the hyperbolic geometry of knot complements. - -Let O denote the unknot. For any knot K let $\langle K\rangle_N$ be Kashaev's invariant of $K$; this invariant coincides with the following evaluation of the $N$-Colored Jones Polynomial $J_{K,N}(q)$ of $K$: -$$ -\langle K\rangle_N=\lim_{q\to e^{2\pi i/N}}\frac{J_{K,N}(q)}{J_{O,N}(q)}. \qquad (1) -$$ - -Then the volume conjecture states that -$$ -\lim_{N\to\infty} \frac{2\pi\log |\langle K\rangle_N|}{N} = \operatorname{vol}(K), \qquad (2) -$$ - -where vol(K) denotes the hyperbolic volume of the complement of K in the 3-sphere. - -Kashaev observed that the asymptotic behavior of a certain state sum of knots gives the hyperbolic volume $\operatorname{vol}(K)$ of the complement of a knot $ K $ and showed that it is true for the knots $4_1$, $5_2$, and $6_1$. He conjectured that formula (2) would hold for general hyperbolic knots. His invariant for a knot $K$ is based on the theory of quantum dilogarithms at the $N$-th root of unity, $q=\exp{(2\pi i/N)}$. - -Murakami and Murakami first pointed out that Kashaev's invariant is related to the colored Jones polynomial by replacing q with the 2N-th root of unity, namely, $\exp{\frac{i\pi}{N}}$. They used an R-matrix as the discrete Fourier transform for the equivalence of these two values. - -The volume conjecture is important for knot theory. In section 5 of their paper they state that: - -Assuming the volume conjecture, every knot that is different from the trivial knot has at least one different Vassiliev (finite type) invariant. - -Using complexification, Murakami and his collaborators rewrote formula (2) as -$$ -\lim_{N\to\infty} \frac{2\pi\log \langle K\rangle_N}{N} = \operatorname{vol}(S^3\backslash K) + CS(S^3\backslash K), \qquad (3) -$$ - -where $CS(S^3\backslash K)$ is called the Chern–Simons invariant. They showed that there is a clear relation between the complexified colored Jones polynomial and Chern–Simons theory from a mathematical point of view.
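For the figure-eight knot $4_1$, the Kashaev invariant has a standard closed form, $\langle 4_1\rangle_N=\sum_{j=0}^{N-1}\prod_{k=1}^{j}|1-e^{2\pi i k/N}|^2$, which makes conjecture (2) easy to probe numerically. The sketch below (plain Python; the volume constant is vol(4₁) ≈ 2.02988) shows the left-hand side of (2) drifting toward the volume; convergence is slow because the subleading corrections decay only like (log N)/N:

```python
import math

FIG8_VOLUME = 2.029883212819307  # hyperbolic volume of the 4_1 complement

def kashaev_fig8(N):
    """Kashaev invariant <4_1>_N = sum_{j=0}^{N-1} prod_{k=1}^{j} |1 - e^{2 pi i k/N}|^2,
    computed via |1 - e^{2 pi i k/N}|^2 = 4 sin^2(pi k / N)."""
    total, prod = 0.0, 1.0
    for j in range(N):
        total += prod                                   # add the j-th partial product
        prod *= 4.0 * math.sin(math.pi * (j + 1) / N) ** 2
    return total

for N in (100, 400, 1000):
    lhs = 2.0 * math.pi * math.log(kashaev_fig8(N)) / N
    print(f"N = {N:>5}: {lhs:.5f}  (volume = {FIG8_VOLUME:.5f})")
```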
diff --git a/wiki/wikipedia/934.txt b/wiki/wikipedia/934.txt deleted file mode 100644 index c71b4313c25436098f5686f2273bff3c6ce1d121..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/934.txt +++ /dev/null @@ -1,25 +0,0 @@ -[[File:Steiner lehmus.svg|thumb|right|260px|$|AE|=|BD|$, $\alpha=\beta$, $\gamma=\delta$ $\implies \triangle ABC \text{ is isosceles}$]] - -The Steiner–Lehmus theorem, a theorem in elementary geometry, was formulated by C. L. Lehmus and subsequently proved by Jakob Steiner. It states: - -Every triangle with two angle bisectors of equal lengths is isosceles. - -The theorem was first mentioned in 1840 in a letter by C. L. Lehmus to C. Sturm, in which he asked for a purely geometric proof. C. Sturm passed the request on to other mathematicians and Jakob Steiner was among the first to provide a solution. The theorem became a rather popular topic in elementary geometry ever since with a somewhat regular publication of articles on it. - -The Steiner–Lehmus theorem can be proved using elementary geometry by proving the contrapositive statement. - -There is some controversy over whether a "direct" proof is possible; - -allegedly "direct" proofs have been published, but not everyone agrees that these proofs are "direct." - -For example, there exist simple algebraic expressions for angle bisectors in terms of the sides of the triangle. Equating two of these expressions and algebraically manipulating the equation results in a product of two factors which equal 0, but only one of them (a - b) can equal 0 and the other must be positive. Thus a = b. But this may not be considered direct as one must first argue about why the other factor cannot be 0. - -John Conway - -has argued that there can be no "equality-chasing" proof because the theorem (stated algebraically) does not hold over an arbitrary field, or even when negative real numbers are allowed as parameters. - -A precise definition of a "direct proof" inside both classical and intuitionistic logic has been provided by Victor Pambuccian, - -who proved, without presenting the direct proofs, that direct proofs must exist in both the classical logic and the intuitionistic logic setting. diff --git a/wiki/wikipedia/935.txt b/wiki/wikipedia/935.txt deleted file mode 100644 index 0004e2bf8a4abe09a50548d093967fa5cc688437..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/935.txt +++ /dev/null @@ -1,59 +0,0 @@ -The Ellsberg paradox is a paradox of choice in which people's decisions produce inconsistencies with subjective expected utility theory. The paradox was popularized by Daniel Ellsberg in his 1961 paper “Risk, Ambiguity, and the Savage Axioms”, although a version of it was noted considerably earlier by John Maynard Keynes. It is generally taken to be evidence for ambiguity aversion, in which a person tends to prefer choices with quantifiable risks over those with unknown risks. - -Ellsberg's findings indicate that choices with an underlying level of risk are favoured in instances where the likelihood of risk is clear, rather than instances in which the likelihood of risk is unknown. A decision maker will overwhelmingly favour a choice with a transparent likelihood of risk, even in instances where the unknown alternative may produce a larger utility. Given a particular set of choices in which each choice carries known and varying levels of risk, people will still prefer choices with calculable risk, even in instances with a lower utility outcome.
- -Ellsberg's experimental research involved two separate thought experiments: the two-urn two-colour scenario and the one-urn three-colour scenario. - -There are two urns each containing 100 balls. It is known that urn A contains 50 red and 50 black, but urn B contains an unknown mix of red and black balls. - -The following bets are offered to a participant: - -Bet 1A: get $1 if red is drawn from urn A, $0 otherwise - -Bet 2A: get $1 if black is drawn from urn A, $0 otherwise - -Bet 1B: get $1 if red is drawn from urn B, $0 otherwise - -Bet 2B: get $1 if black is drawn from urn B, $0 otherwise - -Typically, participants were seen to be indifferent between bet 1A and bet 2A (consistent with expected utility theory), but were seen to strictly prefer Bet 1A to Bet 1B and Bet 2A to 2B. This result is generally interpreted to be a consequence of ambiguity aversion (also known as uncertainty aversion); people intrinsically dislike situations where they cannot attach probabilities to outcomes, in this case favouring the bet in which they know the probability and utility outcome (0.5 and $1 respectively). - -There is one urn containing 90 balls: 30 balls are red, while the remaining 60 balls are either black or yellow in unknown proportions. The balls are well mixed so that each individual ball is as likely to be drawn as any other. The participants then make a choice within a gamble scenario: - -Gamble A: receive $100 if a red ball is drawn; Gamble B: receive $100 if a black ball is drawn. - -Additionally, the participant may choose a separate gamble scenario within the same situational parameters: - -Gamble C: receive $100 if a red or yellow ball is drawn; Gamble D: receive $100 if a black or yellow ball is drawn. - -The experimental conditions manufactured by Ellsberg rely upon two economic principles: Knightian uncertainty, pertaining to the unquantifiable mix of yellow and black balls within the single urn, and ordinary probability (risk), since a red ball is drawn with the known probability 1/3 and a non-red ball with probability 2/3. - -Utility theory models the choice by assuming that in choosing between these gambles, people assume a probability that the non-red balls are yellow versus black, and then compute the expected utility of the two gambles individually. - -Since the prizes are the same, it follows that you will prefer Gamble A to Gamble B if and only if you believe that drawing a red ball is more likely than drawing a black ball (according to expected utility theory). Also, there would be no clear preference between the choices if you thought that a red ball was as likely as a black ball. Similarly it follows that you will prefer Gamble C to Gamble D if, and only if, you believe that drawing a red or yellow ball is more likely than drawing a black or yellow ball. It might seem intuitive that, if drawing a red ball is more likely than drawing a black ball, then drawing a red or yellow ball is also more likely than drawing a black or yellow ball. So, supposing you prefer Gamble A to Gamble B, it follows that you will also prefer Gamble C to Gamble D. Supposing instead that you prefer Gamble B to Gamble A, it follows that you will also prefer Gamble D to Gamble C. - -Ellsberg's findings violate assumptions made within common Expected Utility Theory, with participants strictly preferring Gamble A to Gamble B and Gamble D to Gamble C. - -Mathematically, the estimated probabilities of each color ball can be represented as: R, Y, and B. If you strictly prefer Gamble A to Gamble B, by utility theory, it is presumed this preference is reflected by the expected utilities of the two gambles; working through those expected utilities (as sketched below) yields a contradiction. This contradiction indicates that your preferences are inconsistent with expected-utility theory.
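To spell out the contradiction, let R, Y, and B denote the decision maker's subjective probabilities as above, and recall only that U($100) > U($0). A short worked derivation:
$$
\begin{align}
A \succ B &\iff R\,U(\$100) + (B+Y)\,U(\$0) > B\,U(\$100) + (R+Y)\,U(\$0) \iff R > B, \\
D \succ C &\iff (B+Y)\,U(\$100) + R\,U(\$0) > (R+Y)\,U(\$100) + B\,U(\$0) \iff B > R.
\end{align}
$$
The two strict inequalities R > B and B > R cannot hold simultaneously, which is the claimed inconsistency with expected-utility theory.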
The result holds regardless of your utility function. Indeed, the amount of the payoff is likewise irrelevant. Whichever gamble is selected, the prize for winning it is the same, and the cost of losing it is the same (no cost), so ultimately, there are only two outcomes: receive a specific amount of money, or receive nothing. Therefore, it is sufficient to assume that receiving some money is preferred to receiving nothing (this assumption is not necessary: in the mathematical treatment above, it was assumed U($100) > U($0), but a contradiction can still be obtained for U($100) < U($0) and for U($100) = U($0)). - -In addition, the result holds regardless of your risk aversion - all the gambles involve risk. By choosing Gamble D, you have a 1 in 3 chance of receiving nothing, and by choosing Gamble A, you have a 2 in 3 chance of receiving nothing. If Gamble A were less risky than Gamble B, it would follow that Gamble C was less risky than Gamble D (and vice versa), so risk is not averted in this way. - -However, because the exact chances of winning are known for Gambles A and D, and not known for Gambles B and C, this can be taken as evidence for some sort of ambiguity aversion which cannot be accounted for in expected utility theory. It has been demonstrated that this phenomenon occurs only when the choice set permits comparison of the ambiguous proposition with a less vague proposition (but not when ambiguous propositions are evaluated in isolation). - -There have been various attempts to provide decision-theoretic explanations of Ellsberg's observation. Since the probabilistic information available to the decision-maker is incomplete, these attempts sometimes focus on quantifying the non-probabilistic ambiguity which the decision-maker faces – see Knightian uncertainty. That is, these alternative approaches sometimes suppose that the agent formulates a subjective (though not necessarily Bayesian) probability for possible outcomes. - -One such attempt is based on info-gap decision theory. The agent is told precise probabilities of some outcomes, though the practical meaning of the probability numbers is not entirely clear. For instance, in the gambles discussed above, the probability of a red ball is 30/90, which is a precise number. Nonetheless, the agent may not distinguish, intuitively, between this and, say, 30/91. No probability information whatsoever is provided regarding other outcomes, so the agent has very unclear subjective impressions of these probabilities. - -In light of the ambiguity in the probabilities of the outcomes, the agent is unable to evaluate a precise expected utility. Consequently, a choice based on maximizing the expected utility is also impossible. The info-gap approach supposes that the agent implicitly formulates info-gap models for the subjectively uncertain probabilities. The agent then tries to satisfice the expected utility and to maximize the robustness against uncertainty in the imprecise probabilities. This robust-satisficing approach can be developed explicitly to show that the choices of decision-makers should display precisely the preference reversal which Ellsberg observed. - -Another possible explanation is that this type of game triggers a deceit aversion mechanism. Many humans naturally assume in real-world situations that if they are not told the probability of a certain event, it is to deceive them.
Participants make the same decisions in the experiment as they would about related but not identical real-life problems where the experimenter would be likely to be a deceiver acting against the subject's interests. When faced with the choice between a red ball and a black ball, the probability of 30/90 is compared to the lower part of the 0/90–60/90 range (the probability of getting a black ball). The average person expects there to be fewer black balls than yellow balls because, in most real-world situations, it would be to the advantage of the experimenter to put fewer black balls in the urn when offering such a gamble. On the other hand, when offered a choice between red and yellow balls and black and yellow balls, people assume that there must be fewer than 30 yellow balls, as would be necessary to deceive them. When making the decision, it is quite possible that people simply neglect to consider that the experimenter does not have a chance to modify the contents of the urn in between the draws. In real-life situations, even if the urn is not to be modified, people would be afraid of being deceived on that front as well. - -In order to describe how an individual would make decisions in a world where uncertainty aversion exists, modifications of the expected utility framework have been proposed. These include: - -* Choquet expected utility: Created by French mathematician Gustave Choquet, this approach uses a subadditive integral to measure expected utility in situations with unknown parameters. The mathematical principle is seen as a way in which the contradiction between rational choice theory, expected utility theory, and Ellsberg's seminal findings can be reconciled. - -* Maxmin expected utility: Axiomatized by Gilboa and Schmeidler, this is a widely adopted alternative to utility maximisation that takes ambiguity-averse preferences into account (see the sketch below). This model accommodates the observation that intuitive decisions may violate ambiguity neutrality, as established within both the Ellsberg paradox and the Allais paradox. - -Other alternative explanations include the competence hypothesis and the comparative ignorance hypothesis. Both theories attribute the source of the ambiguity aversion to the participant's pre-existing knowledge. - -Upon graduating in Economics from Harvard in 1952, Ellsberg left immediately to serve as a US Marine before coming back to Harvard in 1957 to complete his post-graduate studies on decision making under uncertainty. Ellsberg left his graduate studies to join the RAND Corporation as a strategic analyst but maintained his academic work on the side. He presented his breakthrough paper pertaining to the Ellsberg paradox at the December 1960 meeting of the Econometric Society in St. Louis. The work built upon previous works of both J. M. Keynes and F. H. Knight, challenging the then-dominant rational decision theory and extending the academic literature. It was only made public in 2001, some 40 years after it was written, because of the Pentagon Papers scandal then encircling Ellsberg's life. The work is still considered highly influential within economic academia pertaining to risk, ambiguity, and uncertainty.
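-The maxmin rule listed above can be made concrete on Ellsberg's own urn. The following Python sketch (an illustration written for this text, not code from any cited work; the function names are hypothetical) scores each gamble by its worst-case expected utility over every possible urn composition:
-```python
-# Maxmin expected utility on the 1-urn 3-colour experiment. We normalize
-# U($100) = 1, U($0) = 0, and let b be the unknown number of black balls
-# among the 60 non-red balls.
-
-def expected_utilities(b):
-    """Expected utility of each gamble if the urn holds b black balls."""
-    p_red, p_black, p_yellow = 30 / 90, b / 90, (60 - b) / 90
-    return {
-        "A": p_red,               # wins on red
-        "B": p_black,             # wins on black
-        "C": p_red + p_yellow,    # wins on red or yellow
-        "D": p_black + p_yellow,  # wins on black or yellow
-    }
-
-# Evaluate each gamble by its worst case over all priors b = 0..60.
-worst_case = {g: min(expected_utilities(b)[g] for b in range(61)) for g in "ABCD"}
-print(worst_case)  # approximately {'A': 1/3, 'B': 0, 'C': 1/3, 'D': 2/3}
-
-# The worst-case ranking prefers A to B and D to C -- exactly the pattern
-# Ellsberg observed, which plain expected utility theory cannot produce.
-assert worst_case["A"] > worst_case["B"] and worst_case["D"] > worst_case["C"]
-```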
diff --git a/wiki/wikipedia/936.txt b/wiki/wikipedia/936.txt deleted file mode 100644 index fbb19a00f29fa633964ab0cce8d290951cc1dfac..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/936.txt +++ /dev/null @@ -1,26 +0,0 @@ -Pappus's area theorem describes the relationship between the areas of three parallelograms attached to three sides of an arbitrary triangle. The theorem, which can also be thought of as a generalization of the Pythagorean theorem, is named after the Greek mathematician Pappus of Alexandria (4th century AD), who discovered it. - -Given an arbitrary triangle with two arbitrary parallelograms attached to two of its sides the theorem tells how to construct a parallelogram over the third side, such that the area of the third parallelogram equals the sum of the areas of the other two parallelograms. - -Let ABC be the arbitrary triangle and ABDE and ACFG the two arbitrary parallelograms attached to the triangle sides AB and AC. The extended parallelogram sides DE and FG intersect at H. The line segment AH now "becomes" the side of the third parallelogram BCML attached to the triangle side BC, i.e., one constructs line segments BL and CM over BC, such that BL and CM are a parallel and equal in length to AH. The following identity then holds for the areas (denoted by A) of the parallelograms: -$$ -\text{A}_{ABDE}+\text{A}_{ACFG}=\text{A}_{BCML} -$$ - -The theorem generalizes the Pythagorean theorem twofold. Firstly it works for arbitrary triangles rather than only for right angled ones and secondly it uses parallelograms rather than squares. For squares on two sides of an arbitrary triangle it yields a parallelogram of equal area over the third side and if the two sides are the legs of a right angle the parallelogram over the third side will be square as well. For a right-angled triangle, two parallelograms attached to the legs of the right angle yield a rectangle of equal area on the third side and again if the two parallelograms are squares then the rectangle on the third side will be a square as well. - -Due to having the same base length and height the parallelograms ABDE and ABUH have the same area, the same argument applying to the parallelograms ACFG and ACVH, ABUH and BLQR, ACVH and RCMQ. This already yields the desired result, as we have: - - - -\begin{align} - -\text{A}_{ABDE}+\text{A}_{ACFG} &=\text{A}_{ABUH}+\text{A}_{ACVH}\\ - -&=\text{A}_{BLRQ}+\text{A}_{RCMQ}\\ - -&=\text{A}_{BCML} - -\end{align} - - diff --git a/wiki/wikipedia/937.txt b/wiki/wikipedia/937.txt deleted file mode 100644 index cc973d9e63ceed0b37058655fd6d496e1f821e0c..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/937.txt +++ /dev/null @@ -1,3 +0,0 @@ -In the area of mathematics known as differential topology, the disc theorem of Palais states that two embeddings of a closed k-disc into a connected n-manifold are ambient isotopic provided that if k = n the two embeddings are equioriented. - -The disc theorem implies that the connected sum of smooth oriented manifolds is well defined. diff --git a/wiki/wikipedia/938.txt b/wiki/wikipedia/938.txt deleted file mode 100644 index 53620b741f16ac1afbcf8748a6a286af7909ec23..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/938.txt +++ /dev/null @@ -1,7 +0,0 @@ -Littlewood's law states that a person can expect to experience events with odds of one in a million (referred to as a "miracle") at the rate of about one per month. It was framed by British mathematician John Edensor Littlewood. 
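-The rate implied by the law is a quick computation, under the assumptions spelled out below (one "event" per second, eight alert hours per day, and a "miracle" odds threshold of one in a million); a minimal Python sketch, written for this text:
-```python
-events_per_day = 1 * 3600 * 8          # 28,800 events per alert day
-days_per_miracle = 1_000_000 / events_per_day
-print(days_per_miracle)                # ~34.7 days, i.e. roughly one per month
-```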
- -The law was framed by Cambridge University Professor John Edensor Littlewood, and published in a 1986 collection of his work, A Mathematician's Miscellany. It seeks among other things to debunk one element of supposed supernatural phenomenology and is related to the more general law of truly large numbers, which states that with a sample size large enough, any outrageous thing (in terms of the probability model of a single sample) is likely to happen. - -Littlewood defines a miracle as an exceptional event of special significance occurring at a frequency of one in a million. He assumes that during the hours in which a human is awake and alert, a human will see or hear one "event" per second, which may be either exceptional or unexceptional. Additionally, Littlewood supposes that a human is alert for about eight hours per day. - -As a result, a human will in 35 days have experienced, under these suppositions, about one million events. Accepting this definition of a miracle, one can expect to observe one miraculous event for every 35 days' time, on average - and therefore, according to this reasoning, seemingly miraculous events are actually commonplace. diff --git a/wiki/wikipedia/939.txt b/wiki/wikipedia/939.txt deleted file mode 100644 index 7e062a98060adff7feed01e4a12242de1ca99c75..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/939.txt +++ /dev/null @@ -1,96 +0,0 @@ -In geometry, the angle bisector theorem is concerned with the relative lengths of the two segments that a triangle's side is divided into by a line that bisects the opposite angle. It equates their relative lengths to the relative lengths of the other two sides of the triangle. - -Consider a triangle ABC. Let the angle bisector of angle A intersect side BC at a point D between B and C. The angle bisector theorem states that the ratio of the length of the line segment BD to the length of segment CD is equal to the ratio of the length of side AB to the length of side AC: -$$ -\frac{|BD|}{|CD|}=\frac{|AB|}{|AC|}, -$$ - -and conversely, if a point D on the side BC of triangle ABC divides BC in the same ratio as the sides AB and AC, then AD is the angle bisector of angle ∠ A. - -The generalized angle bisector theorem states that if D lies on the line BC, then -$$ -\frac{|BD|}{|CD|}=\frac{|AB| \sin \angle DAB}{|AC| \sin \angle DAC}. -$$ - -To prove it, apply the law of sines in the triangles ABD and ACD: -$$ -\frac{|AB|}{|BD|} = \frac{\sin \angle ADB}{\sin \angle DAB} \qquad (1) -$$ -$$ -\frac{|AC|}{|CD|} = \frac{\sin \angle ADC}{\sin \angle DAC} \qquad (2) -$$ - -Angles ∠ ADB and ∠ ADC form a linear pair, that is, they are adjacent supplementary angles. Since supplementary angles have equal sines, -$$ -\sin \angle ADB = \sin \angle ADC. -$$ - -Angles ∠ DAB and ∠ DAC are equal. Therefore, the right hand sides of equations (1) and (2) are equal, so their left hand sides must also be equal. -$$ -\frac{|AB|}{|BD|}=\frac{|AC|}{|CD|}, -$$ - -which, after rearranging, is the angle bisector theorem. - -If angles ∠ DAB and ∠ DAC are unequal, equations (1) and (2) can be re-written as: -$$ -\frac{|AB|}{|BD|} \sin \angle DAB = \sin \angle ADB, -$$ -$$ -\frac{|AC|}{|CD|} \sin \angle DAC = \sin \angle ADC. -$$ - -Angles ∠ ADB and ∠ ADC are still supplementary, so the right hand sides of these equations are still equal, so we obtain: -$$ -\frac{|AB|}{|BD|} \sin \angle DAB = \frac{|AC|}{|CD|} \sin \angle DAC, -$$ - -which rearranges to the "generalized" version of the theorem. - -Let D be a point on the line BC, not equal to B or C and such that AD is not an altitude of triangle ABC. - -Let B1 be the base (foot) of the altitude in the triangle ABD through B and let C1 be the base of the altitude in the triangle ACD through C.
Then, if D is strictly between B and C, one and only one of B1 or C1 lies inside triangle ABC and it can be assumed without loss of generality that B1 does. This case is depicted in the adjacent diagram. If D lies outside of segment BC, then neither B1 nor C1 lies inside the triangle. - -∠ DB1B and ∠ DC1C are right angles, while the angles ∠ B1DB and ∠ C1DC are congruent if D lies on the segment BC (that is, between B and C) and they are identical in the other cases being considered, so the triangles DB1B and DC1C are similar (AAA), which implies that: -$$ -\frac{|BD|}{|CD|}= \frac{|BB_1|}{|CC_1|}, \qquad \frac{|BB_1|}{|AB|} = \sin \angle BAD \ \text{ and } \ \frac{|CC_1|}{|AC|} = \sin \angle DAC, -$$ - -and the generalized form follows. - -A quick proof can be obtained by looking at the ratio of the areas of the two triangles $\triangle BAD$ and $\triangle CAD$, which are created by the angle bisector in $A$. Computing those areas twice using different formulas, that is $\tfrac{1}{2}gh$ with base $g$ and altitude $h$ and $\tfrac{1}{2}ab\sin(\gamma)$ with sides $a$, $b$ and their enclosed angle $\gamma$, will yield the desired result. - -Let $h$ denote the height of the triangles on base $BC$ and $\alpha$ be half of the angle in $A$. Then -$$ -\frac{|\triangle BAD|}{|\triangle CAD|} = \frac{\frac{1}{2}|BD|h}{\frac{1}{2}|CD|h}= \frac{|BD|}{|CD|} -$$ - -and -$$ -\frac{|\triangle BAD|}{|\triangle CAD|}=\frac{\frac{1}{2}|AB||AD|\sin(\alpha)}{\frac{1}{2}|AC||AD|\sin(\alpha)}=\frac{|AB|}{|AC|} -$$ - -yields -$$ -\frac{|BD|}{|CD|} = \frac{|AB|}{|AC|}. -$$ - -For the exterior angle bisectors in a non-equilateral triangle there exist similar equations for the ratios of the lengths of triangle sides. More precisely, if the exterior angle bisector in $A$ intersects the extended side $BC$ in $E$, the exterior angle bisector in $B$ intersects the extended side $AC$ in $D$ and the exterior angle bisector in $C$ intersects the extended side $AB$ in $F$, then the following equations hold: -$$ -\frac{|EB|}{|EC|}=\frac{|AB|}{|AC|} -$$, $\frac{|DA|}{|DC|}=\frac{|BA|}{|BC|}$, $\frac{|FA|}{|FB|}=\frac{|CA|}{|CB|}$ - -The three points of intersection between the exterior angle bisectors and the extended triangle sides $D$, $E$ and $F$ are collinear, that is they lie on a common line. - -The angle bisector theorem appears as Proposition 3 of Book VI in Euclid's Elements. According to Heath, the corresponding statement for an external angle bisector was given by Robert Simson, who noted that Pappus assumed this result without proof. Heath goes on to say that Augustus De Morgan proposed that the two statements should be combined as follows: - -If an angle of a triangle is bisected internally or externally by a straight line which cuts the opposite side or the opposite side produced, the segments of that side will have the same ratio as the other sides of the triangle; and, if a side of a triangle be divided internally or externally so that its segments have the same ratio as the other sides of the triangle, the straight line drawn from the point of section to the angular point which is opposite to the first mentioned side will bisect the interior or exterior angle at that angular point. - -This theorem has been used to prove the following theorems/results: - -• Coordinates of the incenter of a triangle diff --git a/wiki/wikipedia/94.txt b/wiki/wikipedia/94.txt deleted file mode 100644 index dc573cd260df75c22b1f048ad7ff47e7d897939b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/94.txt +++ /dev/null @@ -1 +0,0 @@ -In algebraic geometry, the Serre–Tate theorem says that an abelian scheme and its p-divisible group have the same infinitesimal deformation theory.
This was first proved by Serre when the reduction of the abelian variety is ordinary, using the Greenberg functor; then Tate gave a proof in the general case by a different method. Their proofs were not published, but they were summarized in the notes of the Lubin–Serre–Tate seminar (Woods Hole, 1964). Other proofs were published by Messing (1972) and Drinfeld (1976). diff --git a/wiki/wikipedia/940.txt b/wiki/wikipedia/940.txt deleted file mode 100644 index 06573d8b442cdcebda832508a0309ff22e53fe61..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/940.txt +++ /dev/null @@ -1,134 +0,0 @@ -Cox's theorem, named after the physicist Richard Threlkeld Cox, is a derivation of the laws of probability theory from a certain set of postulates. This derivation justifies the so-called "logical" interpretation of probability, as the laws of probability derived by Cox's theorem are applicable to any proposition. Logical (also known as objective Bayesian) probability is a type of Bayesian probability. Other forms of Bayesianism, such as the subjective interpretation, are given other justifications. - -Cox wanted his system to satisfy the following conditions: - -#Divisibility and comparability - The plausibility of a proposition is a real number and is dependent on information we have related to the proposition. - -#Common sense - Plausibilities should vary sensibly with the assessment of plausibilities in the model. - -#Consistency - If the plausibility of a proposition can be derived in many ways, all the results must be equal. - -The postulates as stated here are taken from Arnborg and Sjödin. - -"Common sense" includes consistency with Aristotelian logic in the sense that logically equivalent propositions shall have the same plausibility. - -The postulates as originally stated by Cox were not mathematically rigorous (although better than the informal description above), e.g., as noted by Halpern. However, it appears to be possible to augment them with various mathematical assumptions made either implicitly or explicitly by Cox to produce a valid proof. - -Cox's notation: - -The plausibility of a proposition $A$ given some related information $X$ is denoted by $A\mid X$. - -Cox's postulates and functional equations are: - -*The plausibility of the conjunction $AB$ of two propositions $A$, $B$, given some related information $X$, is determined by the plausibility of $A$ given $X$ and that of $B$ given $AX$. - -In form of a functional equation -$$ -AB\mid X=g(A\mid X,B\mid AX) -$$ - -Because of the associative nature of the conjunction in propositional logic, the consistency with logic gives a functional equation saying that the function $g$ is an associative binary operation. - -*Additionally, Cox postulates the function $g$ to be monotonic. - -All strictly increasing associative binary operations on the real numbers are isomorphic to multiplication of numbers in a subinterval of $[0, +\infty]$, which means that there is a monotonic function $w$ mapping plausibilities to $[0, +\infty]$ such that -$$ -w(AB\mid X)=w(A\mid X)w(B\mid AX) -$$ - -*In case $A$ given $X$ is certain, we have $AB\mid X=B\mid X$ and $B\mid AX=B\mid X$ due to the requirement of consistency. The general equation then leads to -$$ -w(B\mid X)=w(A\mid X)w(B\mid X) -$$ - -This shall hold for any proposition $B$, which leads to -$$ -w(A\mid X)=1 -$$ - -*In case $A$ given $X$ is impossible, we have $AB\mid X=A\mid X$ and $A\mid BX=A\mid X$ due to the requirement of consistency.
The general equation (with the A and B factors switched) then leads to -$$ -w(A\mid X)=w(B\mid X)w(A\mid X) -$$ - -This shall hold for any proposition $B$, which, without loss of generality, leads to a solution -$$ -w(A\mid X)=0 -$$ - -Due to the requirement of monotonicity, this means that $w$ maps plausibilities to the interval $[0, 1]$. - -*The plausibility of a proposition determines the plausibility of the proposition's negation. - -This postulates the existence of a function $f$ such that -$$ -w(\text{not } A\mid X)=f(w(A\mid X)) -$$ - -Because "a double negative is an affirmative", consistency with logic gives a functional equation -$$ -f(f(x))=x, -$$ - -saying that the function $f$ is an involution, i.e., it is its own inverse. - -*Furthermore, Cox postulates the function $f$ to be monotonic. - -The above functional equations and consistency with logic imply that -$$ -w(AB\mid X)=w(A\mid X)f(w(\text{not }B\mid AX))=w(A\mid X)f\left( {w(A\text{ not }B\mid X) \over w(A\mid X)} \right) -$$ - -Since $AB$ is logically equivalent to $BA$, we also get -$$ -w(A\mid X)f\left( {w(A\text{ not }B\mid X) \over w(A\mid X)} \right)=w(B\mid X)f\left( {w(B\text{ not }A\mid X) \over w(B\mid X)} \right) -$$ - -If, in particular, $B=\text{ not }(AD)$, then also $A\text{ not } B = \text{not }B$ and $B\text{ not }A=\text{ not }A$ and we get -$$ -w(A\text{ not }B\mid X)=w(\text{not }B\mid X)=f(w(B\mid X)) -$$ - -and -$$ -w(B\text{ not }A\mid X)=w(\text{not }A\mid X)=f(w(A\mid X)) -$$ - -Abbreviating $w(A\mid X)=x$ and $w(B\mid X)=y$ we get the functional equation -$$ -xf\left({f(y) \over x}\right)=yf\left({f(x) \over y}\right) -$$ - -The laws of probability derivable from these postulates are the following. Let $A\mid B$ be the plausibility of the proposition $A$ given $B$ satisfying Cox's postulates. Then there is a function $w$ mapping plausibilities to the interval [0,1] and a positive number $m$ such that - -# Certainty is represented by $w(A\mid B)=1.$ - -# $w^m(A\mid B)+w^m(\text{not }A\mid B)=1.$ - -# $w(AB\mid C)=w(A\mid C)w(B\mid AC)=w(B\mid C)w(A\mid BC).$ - -It is important to note that the postulates imply only these general properties. We may recover the usual laws of probability by setting a new function, conventionally denoted $P$ or $\Pr$, equal to $w^m$. Then we obtain the laws of probability in a more familiar form: - -# Certain truth is represented by $\Pr(A\mid B)=1$, and certain falsehood by $\Pr(A\mid B)=0.$ - -# $\Pr(A\mid B)+\Pr(\text{not }A\mid B)=1.$ - -# $\Pr(AB\mid C)=\Pr(A\mid C)\Pr(B\mid AC)=\Pr(B\mid C)\Pr(A\mid BC).$ - -Rule 2 is a rule for negation, and rule 3 is a rule for conjunction. Given that any proposition containing conjunction, disjunction, and negation can be equivalently rephrased using conjunction and negation alone (the conjunctive normal form), we can now handle any compound proposition. - -The laws thus derived yield finite additivity of probability, but not countable additivity. The measure-theoretic formulation of Kolmogorov assumes that a probability measure is countably additive. This slightly stronger condition is necessary for the proof of certain theorems. - -Cox's theorem has come to be used as one of the justifications for the use of Bayesian probability theory. For example, in Jaynes it is discussed in detail in chapters 1 and 2 and is a cornerstone for the rest of the book. - -The original formulation of Cox's theorem is in Cox, which is extended with additional results and more discussion in Cox. Jaynes cites Abel for the first known use of the associativity functional equation.
János Aczél provides a long proof of the "associativity equation" (pages 256-267). Jaynes reproduces the shorter proof by Cox in which differentiability is assumed. A guide to Cox's theorem by Van Horn aims at comprehensively introducing the reader to all these references. diff --git a/wiki/wikipedia/941.txt b/wiki/wikipedia/941.txt deleted file mode 100644 index 4a8d8c2b4d08ec8c51501ed501a1b62ec2e54618..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/941.txt +++ /dev/null @@ -1,12 +0,0 @@ -In mathematics, the Schwarz–Ahlfors–Pick theorem is an extension of the Schwarz lemma for hyperbolic geometry, such as the Poincaré half-plane model. - -The Schwarz–Pick lemma states that every holomorphic function from the unit disk U to itself, or from the upper half-plane H to itself, will not increase the Poincaré distance between points. The unit disk U with the Poincaré metric has constant Gaussian curvature −1. In 1938, Lars Ahlfors generalised the lemma to maps from the unit disk to other negatively curved surfaces: - -Theorem (Schwarz–Ahlfors–Pick). Let U be the unit disk with Poincaré metric $\rho$; let S be a Riemann surface endowed with a Hermitian metric $\sigma$ whose Gaussian curvature is ≤ -1; let $f:U\rightarrow S$ be a holomorphic function. Then -$$ -\sigma(f(z_1),f(z_2)) \leq \rho(z_1,z_2) -$$ - -for all $z_1,z_2 \in U.$ - -A generalization of this theorem was proved by Shing-Tung Yau in 1973. diff --git a/wiki/wikipedia/942.txt b/wiki/wikipedia/942.txt deleted file mode 100644 index 1bbe1f3f4268a6921d3bacd0dccb6845b22252d4..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/942.txt +++ /dev/null @@ -1,45 +0,0 @@ -"Tetris" is a song arranged by English composer Andrew Lloyd Webber and English record producer Nigel Wright, collaborating under the pseudonym Doctor Spin. The composition is based on the theme to the 1989 Game Boy game Tetris, which itself is based on the Russian folk song "Korobeiniki". Doctor Spin released their version of "Tetris" on 21 September 1992 through Polydor and Carpet Records; it reached number six on the UK Singles Chart and also charted in Austria, Finland, and Ireland. This song, along with "Supermarioland" by Ambassadors of Funk, commenced a brief trend of recreated video game music entering mainstream popularity. - -The original composition that the main theme of Tetris is based on is the Russian folk song "Korobeiniki", which is a musical setting of a poem by Russian poet Nikolay Nekrasov. The song used in the Game Boy version of Tetris was composed by Hirokazu Tanaka. In 1992, Andrew Lloyd Webber and Nigel Wright collaborated under the name Doctor Spin to record and release a Eurodance version of Tanaka's arrangement. The track was officially licensed by Nintendo. - -Doctor Spin released "Tetris" through Polydor Records and Lloyd Webber's sublabel of Polydor, Carpet Records, on 21 September 1992 in four formats: CD, cassette, 7-inch vinyl, and 12-inch vinyl. Along with "Supermarioland" by Ambassadors of Funk, which charted simultaneously with "Tetris", the success of the composition began a brief period of popularity for novelty rave music, which also included the charity single "Supersonic" by H.W.A. (based on the Sonic the Hedgehog franchise). - -UK and European CD single - -# "Tetris" (7-inch mix) - -# "Tetris" (12-inch mix) - -# "Tetris" (hardcore mix) - -# "Play Game Boy" - -UK 7-inch and cassette single - -# "Tetris" - -# "Play Game Boy" - -UK 12-inch vinyl - -A1. "Tetris" (12-inch mix) - -A2.
"Tetris" (hardcore mix) - -B1. "Tetris" (7-inch mix) - -B2. "Play Game Boy" - -Credits are lifted from the UK CD single liner notes. - -Studio - -* "Play Game Boy" recorded at Skratch Studios (Surrey, England) - -Personnel - -* Andrew Lloyd Webber – traditional arrangement, executive production - -* Nigel Wright – traditional arrangement, production - -* Robin Sellars – engineering diff --git a/wiki/wikipedia/943.txt b/wiki/wikipedia/943.txt deleted file mode 100644 index bda8bb1ea2139c20d2891f920ca7b137adcd8461..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/943.txt +++ /dev/null @@ -1,33 +0,0 @@ -An Ordered Key-Value Store (OKVS) is a type of data storage paradigm that can support multi-model database. An OKVS is an ordered mapping of bytes to bytes. It is a more powerful paradigm than Key-Value Store because OKVS allow to build higher level abstractions without the need to do full scans. An OKVS will keep the key-value pairs sorted by the key lexicographic order. Some OKVS provide a way to customize the sorting algorithm. OKVS systems provides different set of features and performance trade-offs. Most of them are shipped as a library without network interfaces, in order to be embedded in another process. Most OKVS support ACID guarantees. Some OKVS are distributed databases. Ordered Key-Value Store found their way into many modern database systems including NewSQL database systems like Google Spanner, CockroachDB and TiDB. - -The origin of Ordered Key-Value Store stems from the work of Ken Thompson on dbm in 1979. Later in 1991, Berkeley DB was released that featured a B-Tree backend that allowed the keys to stay sorted. Berkeley DB was said to be very fast and made its way into various commercial product. It was included in Python standard library until 2.7. In 2009, Tokyo Cabinet was released that was superseded by Kyoto Cabinet that support both transaction and ordered keys. In 2011, LMDB was created to replace Berkeley DB in OpenLDAP. There is also Google's LevelDB that was forked by Facebook in 2012 as RocksDB. In 2014, WiredTiger, successor of Berkeley DB was acquired by MongoDB and is as of 2019 the primary backend of MongoDB database. - -Other notable implementation of the OKVS paradigm are Sophia and SQlite LSM extension. Another notable use of OKVS paradigm is the multi-model database system called ArangoDB based on RocksDB. - -Some NewSQL databases are supported by Ordered Key-Value Stores. JanusGraph, a property graph database, has both a Berkeley DB backend and FoundationDB backend. - -There are algorithms that encode basic data types (string, integer, float) and composition of those data types using containers like tuples, lists or vectors that preserve their "natural" ordering. That is, one can work with an Ordered Key-Value Store without having to fiddle with bytes directly. In FoundationDB, it is called the tuple layer. - -One can construct key spaces to build higher level abstractions. The idea is to construct keys, that takes advantage of the ordered nature of the top level key space. When taking advantage of the ordered nature of the key space, one can query ranges of keys that have particular pattern. - -De-normalization, as in, repeating the same piece of data in multiple subspace is common practice. It allows to create secondary representation (also called indices) that will allow to speed up queries. 
- -As of 2019, the following abstractions or databases were built on top of Ordered Key-Value Stores: - -* Timeseries databases, - -* Record databases, also known as row-store databases; they behave similarly to what is dubbed an RDBMS, - -* Tuple stores, also known as triple stores or quad stores, but also generic tuple stores, - -* Document databases that mimic the MongoDB API, - -* Full-text search - -*Geographic Information Systems - -*Property Graphs - -*Versioned Data - -All those abstractions can co-exist within the same OKVS database, and when ACID is supported, the operations happen with the guarantees offered by the transaction system. diff --git a/wiki/wikipedia/944.txt b/wiki/wikipedia/944.txt deleted file mode 100644 index 49e4deb0aafb6e613929ab1bdc6eda86c46e34e5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/944.txt +++ /dev/null @@ -1,182 +0,0 @@ -In geometry, a regular icosahedron is a convex polyhedron with 20 faces, 30 edges and 12 vertices. It is one of the five Platonic solids, and the one with the most faces. - -It has five equilateral triangular faces meeting at each vertex. It is represented by its Schläfli symbol {3,5}, or sometimes by its vertex figure as 3.3.3.3.3 or $3^5$. It is the dual of the dodecahedron, which is represented by {5,3}, having three pentagonal faces around each vertex. In most contexts, the unqualified use of the word "icosahedron" refers specifically to this figure. - -A regular icosahedron is a strictly convex deltahedron, a gyroelongated pentagonal bipyramid, and a biaugmented pentagonal antiprism in any of six orientations. - -The name comes from Ancient Greek εἴκοσι (eíkosi) 'twenty' and ἕδρα (hédra) 'seat'. The plural can be either "icosahedrons" or "icosahedra". - -If the edge length of a regular icosahedron is $a$, the radius of a circumscribed sphere (one that touches the icosahedron at all vertices) is - -r_u = \frac{a}{2} \sqrt{\varphi \sqrt{5}} = \frac{a}{4} \sqrt{10 +2\sqrt{5}} = a\sin\frac{2\pi}{5} \approx 0.9510565163 \cdot a - -and the radius of an inscribed sphere (tangent to each of the icosahedron's faces) is - -r_i = \frac{\varphi^2 a}{2 \sqrt{3}} = \frac{\sqrt{3}}{12} \left(3+ \sqrt{5} \right) a \approx 0.7557613141\cdot a - -while the midradius, which touches the middle of each edge, is - -r_m = \frac{a \varphi}{2} = \frac{1}{4} \left(1+\sqrt{5}\right) a = a\cos\frac{\pi}{5} \approx 0.80901699\cdot a - -where $\varphi$ is the golden ratio. - -The surface area $A$ and the volume $V$ of a regular icosahedron of edge length $a$ are: - -A = 5\sqrt{3}a^2 \approx 8.66025404a^2 - -V = \frac{5}{12} \left(3+\sqrt{5}\right)a^3 \approx 2.18169499a^3 - -The latter is $F = 20$ times the volume of a general tetrahedron with apex at the center of the inscribed sphere, where the volume of the tetrahedron is one third times the base area $\tfrac{\sqrt{3}}{4}a^2$ times its height $r_i$. - -The volume filling factor of the circumscribed sphere is: - -f=\frac{V}{\frac43 \pi r_u^3} = \frac{20\left(3+\sqrt{5}\right)}{\left(2\sqrt{5}+10\right)^{\frac32}\pi}\approx 0.6054613829, - -compared to 66.49% for a dodecahedron. A sphere inscribed in an icosahedron will enclose 89.635% of its volume, compared to only 75.47% for a dodecahedron. - -The midsphere of an icosahedron will have a volume 1.01664 times the volume of the icosahedron, which is by far the closest similarity in volume of any Platonic solid with its midsphere. This arguably makes the icosahedron the "roundest" of the Platonic solids.
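-The closed-form measurements above are easy to sanity-check numerically. A short Python sketch (an illustration for this text, not from any cited source), using edge length a = 1:
-```python
-import math
-
-a = 1.0
-r_u = (a / 4) * math.sqrt(10 + 2 * math.sqrt(5))    # circumradius
-r_i = (math.sqrt(3) / 12) * (3 + math.sqrt(5)) * a  # inradius
-r_m = (a / 4) * (1 + math.sqrt(5))                  # midradius
-A   = 5 * math.sqrt(3) * a**2                       # surface area
-V   = (5 / 12) * (3 + math.sqrt(5)) * a**3          # volume
-
-print(round(r_u, 10), round(r_i, 10), round(r_m, 8))  # 0.9510565163 0.7557613141 0.80901699
-print(round(A, 8), round(V, 8))                       # 8.66025404 2.18169499
-
-# Volume filling factor of the circumscribed sphere, ~0.6054613829:
-print(V / (4 / 3 * math.pi * r_u**3))
-```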
- -The vertices of an icosahedron centered at the origin with an edge length of 2 and a circumradius of $\sqrt{\varphi + 2} \approx 1.9$ are - -(0, ±1, ±ϕ) - -(±1, ±ϕ, 0) - -(±ϕ, 0, ±1) - -where ϕ = (1 + √5)/2 is the golden ratio. Taking all permutations of these coordinates (not just cyclic permutations) results in the Compound of two icosahedra. - -The vertices of the icosahedron form five sets of three concentric, mutually orthogonal golden rectangles, whose edges form Borromean rings. - -If the original icosahedron has edge length 1, its dual dodecahedron has edge length 1/ϕ = ϕ − 1 = (√5 − 1)/2. - -The 12 edges of a regular octahedron can be subdivided in the golden ratio so that the resulting vertices define a regular icosahedron. This is done by first placing vectors along the octahedron's edges such that each face is bounded by a cycle, then similarly subdividing each edge into the golden mean along the direction of its vector. The five octahedra defining any given icosahedron form a regular polyhedral compound, while the two icosahedra that can be defined in this way from any given octahedron form a uniform polyhedron compound. - -The locations of the vertices of a regular icosahedron can be described using spherical coordinates, for instance as latitude and longitude. If two vertices are taken to be at the north and south poles (latitude ±90°), then the other ten vertices are at latitude ±arctan(1/2) ≈ ±26.57°. These ten vertices are at evenly spaced longitudes (36° apart), alternating between north and south latitudes. - -This scheme takes advantage of the fact that the regular icosahedron is a pentagonal gyroelongated bipyramid, with D5d dihedral symmetry—that is, it is formed of two congruent pentagonal pyramids joined by a pentagonal antiprism. - -The icosahedron has three special orthogonal projections, centered on a face, an edge and a vertex: - -This configuration matrix represents the icosahedron. The rows and columns correspond to vertices, edges, and faces. The diagonal numbers say how many of each element occur in the whole icosahedron. The nondiagonal numbers say how many of the column's element occur in or at the row's element. -$$ -\begin{bmatrix}\begin{matrix}12 & 5 & 5 \\ 2 & 30 & 2 \\ 3 & 3 & 20 \end{matrix}\end{bmatrix} -$$ - -Here is the configuration expanded with k-face elements and k-figures. The diagonal element counts are the ratio of the full Coxeter group H3, order 120, divided by the order of the subgroup with mirror removal. - -The icosahedron can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane. - -*An icosahedron has 43,380 distinct nets. - -*To color the icosahedron, such that no two adjacent faces have the same color, requires at least 3 colors. - -*A problem dating back to the ancient Greeks is to determine which of two shapes has larger volume, an icosahedron inscribed in a sphere, or a dodecahedron inscribed in the same sphere. The problem was solved by Hero, Pappus, and Fibonacci, among others. Apollonius of Perga discovered the curious result that the ratio of volumes of these two shapes is the same as the ratio of their surface areas. Both volumes have formulas involving the golden ratio, but taken to different powers. As it turns out, the icosahedron occupies less of the sphere's volume (60.54%) than the dodecahedron (66.49%).
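-A short Python sketch (illustrative, written for this text) verifies the coordinate description above: the 12 vertices built from cyclic permutations of (0, ±1, ±ϕ) give 30 edges of length 2 and circumradius √(ϕ + 2):
-```python
-import itertools, math
-
-phi = (1 + math.sqrt(5)) / 2
-verts = []
-for s1, s2 in itertools.product((1, -1), repeat=2):
-    verts += [(0, s1, s2 * phi), (s1, s2 * phi, 0), (s1 * phi, 0, s2)]
-
-# Edges are exactly the vertex pairs at distance 2.
-edges = [(p, q) for p, q in itertools.combinations(verts, 2)
-         if abs(math.dist(p, q) - 2) < 1e-9]
-
-print(len(verts), len(edges))                                    # 12 30
-print(math.isclose(math.hypot(*verts[0]), math.sqrt(phi + 2)))   # True
-```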
- -The following construction of the icosahedron avoids tedious computations in the number field $\Q[\sqrt{5}]$ necessary in more elementary approaches. - -The existence of the icosahedron amounts to the existence of six equiangular lines in $\R^3$. Indeed, intersecting such a system of equiangular lines with a Euclidean sphere centered at their common intersection yields the twelve vertices of a regular icosahedron, as can easily be checked. Conversely, supposing the existence of a regular icosahedron, lines defined by its six pairs of opposite vertices form an equiangular system. - -In order to construct such an equiangular system, we start with this 6 × 6 square matrix: - -A=\left(\begin{array}{crrrrr} - -0&1&1&1&1&1\\ - -1&0&1&-1&-1&1\\ - -1&1&0&1&-1&-1\\ - -1&-1&1&0&1&-1\\ - -1&-1&-1&1&0&1\\ - -1&1&-1&-1&1&0\end{array}\right). - -A straightforward computation yields $A^2=5I$ (where $I$ is the 6 × 6 identity matrix). This implies that $A$ has eigenvalues $-\sqrt{5}$ and $\sqrt{5}$, both with multiplicity 3 since $A$ is symmetric and of trace zero. - -The matrix $A+\sqrt{5}I$ thus induces a Euclidean structure on the quotient space $\R^6/\operatorname{ker}(A+\sqrt{5}I)$, which is isomorphic to $\R^3$ since the kernel $\operatorname{ker}(A+\sqrt{5}I)$ of $A+\sqrt{5}I$ has dimension 3. The image under the projection $\pi:\R^6\to\R^6/\operatorname{ker}(A+\sqrt{5}I)$ of the six coordinate axes in $\R^6$ forms a system of six equiangular lines in $\R^3$ intersecting pairwise at a common acute angle of $\arccos 1/\sqrt{5}$. Orthogonal projection of the positive and negative basis vectors of $\R^6$ onto the $\sqrt{5}$-eigenspace of $A$ thus yields the twelve vertices of the icosahedron. - -A second straightforward construction of the icosahedron uses representation theory of the alternating group $A_5$ acting by direct isometries on the icosahedron. - -The rotational symmetry group of the regular icosahedron is isomorphic to the alternating group on five letters. This non-abelian simple group is the only non-trivial normal subgroup of the symmetric group on five letters. Since the Galois group of the general quintic equation is isomorphic to the symmetric group on five letters, and this normal subgroup is simple and non-abelian, the general quintic equation does not have a solution in radicals. The proof of the Abel–Ruffini theorem uses this simple fact, and Felix Klein wrote a book that made use of the theory of icosahedral symmetries to derive an analytical solution to the general quintic equation. See icosahedral symmetry: related geometries for further history, and related symmetries on seven and eleven letters. - -The full symmetry group of the icosahedron (including reflections) is known as the full icosahedral group, and is isomorphic to the product of the rotational symmetry group and the group $C_2$ of size two, which is generated by the reflection through the center of the icosahedron. - -The icosahedron has a large number of stellations. According to specific rules defined in the book The Fifty-Nine Icosahedra, 59 stellations were identified for the regular icosahedron. The first form is the icosahedron itself. One is a regular Kepler–Poinsot polyhedron. Three are regular compound polyhedra. - -The small stellated dodecahedron, great dodecahedron, and great icosahedron are three facetings of the regular icosahedron. They share the same vertex arrangement. They all have 30 edges.
The regular icosahedron and great dodecahedron share the same edge arrangement but differ in faces (triangles vs pentagons), as do the small stellated dodecahedron and great icosahedron (pentagrams vs triangles). - -There are distortions of the icosahedron that, while no longer regular, are nevertheless vertex-uniform. These are invariant under the same rotations as the tetrahedron, and are somewhat analogous to the snub cube and snub dodecahedron, including some forms which are chiral and some with Th-symmetry, i.e. have different planes of symmetry from the tetrahedron. - -The icosahedron is unique among the Platonic solids in possessing a dihedral angle not less than 120°. Its dihedral angle is approximately 138.19°. Thus, just as hexagons have angles not less than 120° and cannot be used as the faces of a convex regular polyhedron because such a construction would not meet the requirement that at least three faces meet at a vertex and leave a positive defect for folding in three dimensions, icosahedra cannot be used as the cells of a convex regular polychoron because, similarly, at least three cells must meet at an edge and leave a positive defect for folding in four dimensions (in general for a convex polytope in n dimensions, at least three facets must meet at a peak and leave a positive defect for folding in n-space). However, when combined with suitable cells having smaller dihedral angles, icosahedra can be used as cells in semi-regular polychora (for example the snub 24-cell), just as hexagons can be used as faces in semi-regular polyhedra (for example the truncated icosahedron). Finally, non-convex polytopes do not carry the same strict requirements as convex polytopes, and icosahedra are indeed the cells of the icosahedral 120-cell, one of the ten non-convex regular polychora. - -An icosahedron can also be called a gyroelongated pentagonal bipyramid. It can be decomposed into a gyroelongated pentagonal pyramid and a pentagonal pyramid or into a pentagonal antiprism and two equal pentagonal pyramids. - -It can be projected to 3D from the 6D 6-demicube using the same basis vectors that form the hull of the Rhombic triacontahedron from the 6-cube. Shown here including the inner 20 vertices which are not connected by the 30 outer hull edges of 6D norm length sqrt 2. The inner vertices form a dodecahedron. - -The 3D projection basis vectors [u,v,w] used are: - - - -\begin{align} - -u &= (1, \varphi, 0, -1, \varphi, 0)\\ - -v &= (\varphi, 0, 1, \varphi, 0, -1)\\ - -w &= (0, 1, \varphi, 0, -1, \varphi)\\ - -\end{align} - - - -There are 3 uniform colorings of the icosahedron. These colorings can be represented as 11213, 11212, 11111, naming the 5 triangular faces around each vertex by their color. - -The icosahedron can be considered a snub tetrahedron, as snubification of a regular tetrahedron gives a regular icosahedron having chiral tetrahedral symmetry. It can also be constructed as an alternated truncated octahedron, having pyritohedral symmetry. The pyritohedral symmetry version is sometimes called a pseudoicosahedron, and is dual to the pyritohedron. - -Many viruses, e.g. herpes virus, have icosahedral shells. Viral structures are built of repeated identical protein subunits known as capsomeres, and the icosahedron is the easiest shape to assemble using these subunits. A regular polyhedron is used because it can be built from a single basic unit protein used over and over again; this saves space in the viral genome. 
- -Various bacterial organelles with an icosahedral shape were also found. The icosahedral shell encapsulating enzymes and labile intermediates is built of different types of proteins with BMC domains. - -In 1904, Ernst Haeckel described a number of species of Radiolaria, including Circogonia icosahedra, whose skeleton is shaped like a regular icosahedron. A copy of Haeckel's illustration for this radiolarian appears in the article on regular polyhedra. - -The closo-carboranes are chemical compounds with shape very close to an icosahedron. Icosahedral twinning also occurs in crystals, especially nanoparticles. - -Many borides and allotropes of boron contain the boron B12 icosahedron as a basic structure unit. - -Icosahedral dice with twenty sides have been used since ancient times. - -In several roleplaying games, such as Dungeons & Dragons, the twenty-sided die (d20 for short) is commonly used in determining success or failure of an action. This die is in the form of a regular icosahedron. It may be numbered from "0" to "9" twice (in which form it usually serves as a ten-sided die, or d10), but most modern versions are labeled from "1" to "20". - -An icosahedron is the three-dimensional game board for Icosagame, formerly known as the Ico Crystal Game. - -An icosahedron is used in the board game Scattergories to choose a letter of the alphabet. Six letters are omitted (Q, U, V, X, Y, and Z). - -In the Nintendo 64 game Kirby 64: The Crystal Shards, the boss Miracle Matter is a regular icosahedron. - -Inside a Magic 8-Ball, various answers to yes–no questions are inscribed on a regular icosahedron. - -The "skwish" baby toy is a tensegrity object in the form of a Jessen's icosahedron, which has the same vertex coordinates as a regular icosahedron, and the same number of faces, but with six edges turned 90° to connect to other vertices. - -R. Buckminster Fuller and Japanese cartographer Shoji Sadao designed a world map in the form of an unfolded icosahedron, called the Fuller projection, whose maximum distortion is only 2%. The American electronic music duo ODESZA use a regular icosahedron as their logo. - -The skeleton of the icosahedron (the vertices and edges) forms a graph. It is one of 5 Platonic graphs, each a skeleton of its Platonic solid. - -The high degree of symmetry of the polyhedron is replicated in the properties of this graph, which is distance-transitive and symmetric. The automorphism group has order 120. The vertices can be colored with 4 colors, the edges with 5 colors, and the diameter is 3. - -The icosahedral graph is Hamiltonian: there is a cycle containing all the vertices. It is also a planar graph. - -There are 4 related Johnson solids, with pentagonal faces that use a subset of the 12 vertices. The similar dissected regular icosahedron has 2 adjacent vertices diminished, leaving two trapezoidal faces, and a bifastigium has 2 opposite sets of vertices removed and 4 trapezoidal faces. The pentagonal antiprism is formed by removing two opposite vertices. - -The icosahedron can be transformed by a truncation sequence into its dual, the dodecahedron: - -As a snub tetrahedron, and alternation of a truncated octahedron it also exists in the tetrahedral and octahedral symmetry families: - -This polyhedron is topologically related as a part of a sequence of regular polyhedra with Schläfli symbols {3,n}, continuing into the hyperbolic plane.
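-The graph-theoretic claims a few paragraphs above are easy to verify. A sketch assuming the networkx library, which ships a built-in icosahedral graph generator (illustrative, not from the article):
-```python
-import networkx as nx
-
-G = nx.icosahedral_graph()
-print(G.number_of_nodes(), G.number_of_edges())   # 12 30
-print(nx.diameter(G))                             # 3
-print(set(dict(G.degree()).values()))             # {5}: 5-regular, as expected
-```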
- -The regular icosahedron, seen as a snub tetrahedron, is a member of a sequence of snubbed polyhedra and tilings with vertex figure (3.3.3.3.n) and Coxeter–Dynkin diagram . These figures and their duals have (n32) rotational symmetry, being in the Euclidean plane for $n=6$, and hyperbolic plane for any higher $n$. The series can be considered to begin with $n=2$, with one set of faces degenerated into digons. - -The icosahedron can tessellate hyperbolic space in the order-3 icosahedral honeycomb, with 3 icosahedra around each edge, 12 icosahedra around each vertex, with Schläfli symbol {3,5,3}. It is one of four regular tessellations in hyperbolic 3-space. diff --git a/wiki/wikipedia/945.txt b/wiki/wikipedia/945.txt deleted file mode 100644 index d5a3c12c3260323553b87497d9db0e3edd4fdb97..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/945.txt +++ /dev/null @@ -1,28 +0,0 @@ -In mathematics, the Hausdorff moment problem, named after Felix Hausdorff, asks for necessary and sufficient conditions that a given sequence (m0, m1, m2, ...) be the sequence of moments -$$ -m_n = \int_0^1 x^n d\mu(x) -$$ - -of some Borel measure μ supported on the closed unit interval [0, 1]. In the case m0 = 1, this is equivalent to the existence of a random variable X supported on [0, 1], such that $E[X^n] = m_n$ for all $n$. - -The essential difference between this and other well-known moment problems is that this is on a bounded interval, whereas in the Stieltjes moment problem one considers a half-line [0, ∞), and in the Hamburger moment problem one considers the whole line (−∞, ∞). The Stieltjes moment problems and the Hamburger moment problems, if they are solvable, may have infinitely many solutions (indeterminate moment problem) whereas a Hausdorff moment problem always has a unique solution if it is solvable (determinate moment problem). In the indeterminate moment problem case, there are infinitely many measures corresponding to the same prescribed moments, and they form a convex set. The set of polynomials may or may not be dense in the associated Hilbert spaces if the moment problem is indeterminate, and it depends on whether the measure is extremal or not. But in the determinate moment problem case, the set of polynomials is dense in the associated Hilbert space. - -In 1921, Hausdorff showed that (m0, m1, m2, ...) is such a moment sequence if and only if the sequence is completely monotonic, that is, its difference sequences satisfy the equation -$$ -(-1)^k(\Delta^k m)_n \geq 0 -$$ - -for all n, k ≥ 0. Here, Δ is the difference operator given by -$$ -(\Delta m)_n = m_{n+1} - m_n. -$$ - -The necessity of this condition is easily seen by the identity -$$ -(-1)^k(\Delta^k m)_n = \int_0^1 x^n (1-x)^k d\mu(x), -$$ - -which is non-negative since it is the integral of a non-negative function. For example, it is necessary to have -$$ -(\Delta^4 m)_6 = m_6 - 4m_7 + 6m_8 - 4m_9 + m_{10} = \int x^6 (1-x)^4 d\mu(x) \geq 0. -$$ diff --git a/wiki/wikipedia/946.txt b/wiki/wikipedia/946.txt deleted file mode 100644 index db1a29842593431914c250f4f6c6bf70f748958d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/946.txt +++ /dev/null @@ -1,6 +0,0 @@ -In matrix analysis, Stahl's theorem is a theorem proved in 2011 by Herbert Stahl concerning Laplace transforms for special matrix functions. It originated in 1975 as the Bessis–Moussa–Villani (BMV) conjecture by Daniel Bessis, Pierre Moussa, and Marcel Villani. In 2004 Elliott H. Lieb and Robert Seiringer gave two important reformulations of the BMV conjecture.
In 2015 Alexandre Eremenko gave a simplified proof of Stahl's theorem. - -Let $\operatorname{tr}$ denote the trace of a matrix. If A and B are n × n Hermitian matrices and B is positive semidefinite, define $f(t) = \operatorname{tr}(\exp(A-tB))$ for all real t ≥ 0. Then $f$ can be represented as the Laplace transform of a non-negative Borel measure μ on $[0,\infty)$. In other words, for all real t ≥ 0, -$$ -f(t) = \int_{[0,\infty)} e^{-ts} d\mu(s), -$$ - -for some non-negative measure μ depending upon A and B. diff --git a/wiki/wikipedia/947.txt b/wiki/wikipedia/947.txt deleted file mode 100644 index 07938d6a23bd3b9bba26a6c99cdbfbc3742af196..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/947.txt +++ /dev/null @@ -1,41 +0,0 @@ -In complex analysis, an area of mathematics, Montel's theorem refers to one of two theorems about families of holomorphic functions. These are named after French mathematician Paul Montel, and give conditions under which a family of holomorphic functions is normal. - -The first, and simpler, version of the theorem states that a family of holomorphic functions defined on an open subset of the complex numbers is normal if and only if it is locally uniformly bounded. - -This theorem has the following formally stronger corollary. Suppose that $\mathcal{F}$ is a family of meromorphic functions on an open set $D$. If $z_0\in D$ is such that $\mathcal{F}$ is not normal at $z_0$, and $U\subset D$ is a neighborhood of $z_0$, then $\bigcup_{f\in\mathcal{F}}f(U)$ is dense in the complex plane. - -The stronger version of Montel's Theorem (occasionally referred to as the Fundamental Normality Test) states that a family of holomorphic functions, all of which omit the same two values $a,b\in\mathbb{C}$, is normal. - -The conditions in the above theorems are sufficient, but not necessary for normality. Indeed, the family $\{z\mapsto z\}$ is normal, but does not omit any complex value. - -The first version of Montel's theorem is a direct consequence of Marty's Theorem (which states that a family is normal if and only if the spherical derivatives are locally bounded) and Cauchy's integral formula. - -This theorem has also been called the Stieltjes–Osgood theorem, after Thomas Joannes Stieltjes and William Fogg Osgood. - -The Corollary stated above is deduced as follows. Suppose that all the functions in $\mathcal{F}$ omit the same neighborhood of the point $z_1$. By postcomposing with the map $z\mapsto \frac{1}{z-z_1}$ we obtain a uniformly bounded family, which is normal by the first version of the theorem. - -The second version of Montel's theorem can be deduced from the first by using the fact that there exists a holomorphic universal covering from the unit disk to the twice punctured plane $\mathbb{C}\setminus\{a,b\}$. (Such a covering is given by the elliptic modular function.) - -This version of Montel's theorem can be also derived from Picard's theorem, by using Zalcman's lemma. - -A heuristic principle known as Bloch's Principle (made precise by Zalcman's lemma) states that properties that imply that an entire function is constant correspond to properties that ensure that a family of holomorphic functions is normal. - -For example, the first version of Montel's theorem stated above is the analog of Liouville's theorem, while the second version corresponds to Picard's theorem.
diff --git a/wiki/wikipedia/948.txt b/wiki/wikipedia/948.txt deleted file mode 100644 index b378d3835037c0367c5543ab2003d95cdd831bd3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/948.txt +++ /dev/null @@ -1,5 +0,0 @@ -In the historical study of mathematics, an apotome is a line segment formed from a longer line segment by breaking it into two parts, one of which is commensurable only in power to the whole; the other part is the apotome. In this definition, two line segments are said to be "commensurable only in power" when the ratio of their lengths is an irrational number but the ratio of their squared lengths is rational. - -Translated into modern algebraic language, an apotome can be interpreted as a quadratic irrational number formed by subtracting one square root of a rational number from another. - -This concept of the apotome appears in Euclid's Elements beginning in book X, where Euclid defines two special kinds of apotomes. In an apotome of the first kind, the whole is rational, while in an apotome of the second kind, the part subtracted from it is rational; both kinds of apotomes also satisfy an additional condition. Euclid Proposition XIII.6 states that, if a rational line segment is split into two pieces in the golden ratio, then both pieces may be represented as apotomes. diff --git a/wiki/wikipedia/949.txt b/wiki/wikipedia/949.txt deleted file mode 100644 index 206d60c55d1581846a73cb31fb7ff0dadea68bc5..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/949.txt +++ /dev/null @@ -1,64 +0,0 @@ -In mathematics, the Rogers–Ramanujan identities are two identities related to basic hypergeometric series and integer partitions. The identities were first discovered and proved by Leonard James Rogers in 1894, and were subsequently rediscovered (without a proof) by Srinivasa Ramanujan some time before 1913. Ramanujan had no proof, but rediscovered Rogers's paper in 1917, and they then published a joint new proof. Issai Schur independently rediscovered and proved the identities in 1917. - -The Rogers–Ramanujan identities are - -G(q) = \sum_{n=0}^\infty \frac {q^{n^2}} {(q;q)_n} = \frac {1}{(q;q^5)_\infty (q^4; q^5)_\infty} =1+ q +q^2 +q^3 +2q^4+2q^5 +3q^6+\cdots - -and - -H(q) =\sum_{n=0}^\infty \frac {q^{n^2+n}} {(q;q)_n} = \frac {1}{(q^2;q^5)_\infty (q^3; q^5)_\infty} =1+q^2 +q^3 +q^4+q^5 +2q^6+\cdots . - -Here, $(\cdot;\cdot)_n$ denotes the q-Pochhammer symbol. - -Consider the following: - -* $\frac {q^{n^2}} {(q;q)_n}$ is the generating function for partitions with exactly $n$ parts such that adjacent parts have difference at least 2. - -* $\frac {1}{(q;q^5)_\infty (q^4; q^5)_\infty}$ is the generating function for partitions such that each part is congruent to either 1 or 4 modulo 5. - -* $\frac {q^{n^2+n}} {(q;q)_n}$ is the generating function for partitions with exactly $n$ parts such that adjacent parts have difference at least 2 and such that the smallest part is at least 2. - -* $\frac {1}{(q^2;q^5)_\infty (q^3; q^5)_\infty}$ is the generating function for partitions such that each part is congruent to either 2 or 3 modulo 5. - -The Rogers–Ramanujan identities can now be interpreted in the following way. Let $n$ be a non-negative integer. - -# The number of partitions of $n$ such that the adjacent parts differ by at least 2 is the same as the number of partitions of $n$ such that each part is congruent to either 1 or 4 modulo 5.
- -# The number of partitions of $n$ such that the adjacent parts differ by at least 2 and such that the smallest part is at least 2 is the same as the number of partitions of $n$ such that each part is congruent to either 2 or 3 modulo 5. - -Alternatively, - -# The number of partitions of $n$ with $k$ parts such that the smallest part is at least $k$ is the same as the number of partitions of $n$ such that each part is congruent to either 1 or 4 modulo 5. - -# The number of partitions of $n$ with $k$ parts such that the smallest part is at least $k+1$ is the same as the number of partitions of $n$ such that each part is congruent to either 2 or 3 modulo 5. - -If $q = e^{2\pi i\tau}$, then $q^{-1/60}G(q)$ and $q^{11/60}H(q)$ are modular functions of $\tau$. - -The Rogers–Ramanujan identities appeared in Baxter's solution of the hard hexagon model in statistical mechanics. - -Ramanujan's continued fraction is -$$ -1+\frac{q}{1+\frac{q^2}{1+\frac{q^3}{1+\cdots}}} = \frac{G(q)}{H(q)}. -$$ - -James Lepowsky and Robert Lee Wilson were the first to prove the Rogers–Ramanujan identities using completely representation-theoretic techniques. They proved these identities using level 3 modules for the affine Lie algebra $\widehat{\mathfrak{sl}_2}$. In the course of this proof they invented and used what they called $Z$-algebras. - -Lepowsky and Wilson's approach is universal, in that it is able to treat all affine Lie algebras at all levels. - -It can be used to find (and prove) new partition identities. - -The first such example is that of Capparelli's identities, discovered by Stefano Capparelli using level 3 modules for - -the affine Lie algebra $A_2^{(2)}$. diff --git a/wiki/wikipedia/95.txt b/wiki/wikipedia/95.txt deleted file mode 100644 index f8f1bb79b84584ed7fdf7421da20b49ef2825406..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/95.txt +++ /dev/null @@ -1,29 +0,0 @@ -Andrica's conjecture (named after Dorin Andrica) is a conjecture regarding the gaps between prime numbers. - -The conjecture states that the inequality -$$ -\sqrt{p_{n+1}} - \sqrt{p_n} < 1 -$$ - -holds for all $n$, where $p_n$ is the nth prime number. If $g_n = p_{n+1} - p_n$ denotes the nth prime gap, then Andrica's conjecture can also be rewritten as -$$ -g_n < 2\sqrt{p_n} + 1. -$$ - -Imran Ghory has used data on the largest prime gaps to confirm the conjecture for $n$ up to $1.3002 \times 10^{16}$. Using a table of maximal gaps and the above gap inequality, the confirmation value can be extended exhaustively to $4 \times 10^{18}$. - -The discrete function $A_n = \sqrt{p_{n+1}}-\sqrt{p_n}$ is plotted in the figures opposite. The high-water marks for $A_n$ occur for n = 1, 2, and 4, with $A_4 \approx 0.670873\ldots$, with no larger value among the first $10^5$ primes. Since the Andrica function decreases asymptotically as n increases, a prime gap of ever increasing size is needed to make the difference large as n becomes large. It therefore seems highly likely the conjecture is true, although this has not yet been proven. - -As a generalization of Andrica's conjecture, the following equation has been considered: -$$ -p_{n+1}^x - p_n^x = 1, -$$ - -where $p_n$ is the nth prime and x can be any positive number. - -The largest possible solution for x is easily seen to occur for n = 1, when $x_{\max} = 1$. The smallest solution for x is conjectured to be $x_{\min} \approx 0.567148\ldots$, which occurs for n = 30.
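Both endpoint claims are easy to check numerically. A small sketch (sympy's prime gives the nth prime; the helper name solve_x and the bisection bounds are our own choices):

```python
import math
from sympy import prime  # prime(n) returns the nth prime: prime(1) == 2

def solve_x(n, tol=1e-12):
    """Bisection solve of p_{n+1}^x - p_n^x = 1 for x in (0, 1]."""
    p, q = prime(n), prime(n + 1)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # q**x - p**x is strictly increasing in x for q > p >= 2
        if q**mid - p**mid < 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(solve_x(1))   # -> 1.0, since 3^1 - 2^1 = 1
print(solve_x(30))  # -> 0.567148..., attained at p_30 = 113, p_31 = 127

# high-water mark of A_n = sqrt(p_{n+1}) - sqrt(p_n) over the first primes
A = [math.sqrt(prime(n + 1)) - math.sqrt(prime(n)) for n in range(1, 2001)]
print(max(A), 1 + A.index(max(A)))  # -> ~0.670873 at n = 4
```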
- -This conjecture has also been stated as an inequality, the generalized Andrica conjecture: -$$ - p _ {n+1} ^ x - p_ n ^ x < 1 -$$ for $x < x_{\min}.$ diff --git a/wiki/wikipedia/950.txt b/wiki/wikipedia/950.txt deleted file mode 100644 index 70ce2acfb146ceef3782c143ebf8784ad7d9af0b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/950.txt +++ /dev/null @@ -1,59 +0,0 @@ -In projective geometry, Pascal's theorem (also known as the hexagrammum mysticum theorem) states that if six arbitrary points are chosen on a conic (which may be an ellipse, parabola or hyperbola in an appropriate affine plane) and joined by line segments in any order to form a hexagon, then the three pairs of opposite sides of the hexagon (extended if necessary) meet at three points which lie on a straight line, called the Pascal line of the hexagon. It is named after Blaise Pascal. - -The theorem is also valid in the Euclidean plane, but the statement needs to be adjusted to deal with the special cases when opposite sides are parallel. - -The most natural setting for Pascal's theorem is in a projective plane since any two lines meet and no exceptions need to be made for parallel lines. However, the theorem remains valid in the Euclidean plane, with the correct interpretation of what happens when some opposite sides of the hexagon are parallel. - -If exactly one pair of opposite sides of the hexagon are parallel, then the conclusion of the theorem is that the "Pascal line" determined by the two points of intersection is parallel to the parallel sides of the hexagon. If two pairs of opposite sides are parallel, then all three pairs of opposite sides form pairs of parallel lines and there is no Pascal line in the Euclidean plane (in this case, the line at infinity of the extended Euclidean plane is the Pascal line of the hexagon). - -This theorem is a generalization of Pappus's (hexagon) theorem – Pappus's theorem is the special case of a degenerate conic of two lines. Pascal's theorem is the polar reciprocal and projective dual of Brianchon's theorem. It was formulated by Blaise Pascal in a note written in 1639 when he was 16 years old and published the following year as a broadside titled "Essay pour les coniques. Par B. P." - -Pascal's theorem is a special case of the Cayley–Bacharach theorem. - -A degenerate case of Pascal's theorem (four points) is interesting; given points ABCD on a conic Γ, the intersection of alternate sides, AB ∩ CD, BC ∩ DA, together with the intersection of tangents at opposite vertices (A, C) and (B, D) are collinear in four points; the tangents being degenerate 'sides', taken at two possible positions on the 'hexagon' and the corresponding Pascal line sharing either degenerate intersection. This can be proven independently using a property of pole-polar. If the conic is a circle, then another degenerate case says that for a triangle, the three points that appear as the intersection of a side line with the corresponding side line of the Gergonne triangle, are collinear. - -Six is the minimum number of points on a conic about which special statements can be made, as five points determine a conic. - -The converse is the Braikenridge–Maclaurin theorem, named for 18th-century British mathematicians William Braikenridge and Colin Maclaurin , which states that if the three intersection points of the three pairs of lines through opposite sides of a hexagon lie on a line, then the six vertices of the hexagon lie on a conic; the conic may be degenerate, as in Pappus's theorem. 
The Braikenridge–Maclaurin theorem may be applied in the Braikenridge–Maclaurin construction, which is a synthetic construction of the conic defined by five points, by varying the sixth point. - -The theorem was generalized by August Ferdinand Möbius in 1847, as follows: suppose a polygon with 4n + 2 sides is inscribed in a conic section, and opposite pairs of sides are extended until they meet in 2n + 1 points. Then if 2n of those points lie on a common line, the last point will be on that line, too. - -If six unordered points are given on a conic section, they can be connected into a hexagon in 60 different ways, resulting in 60 different instances of Pascal's theorem and 60 different Pascal lines. This configuration of 60 lines is called the Hexagrammum Mysticum. - -As Thomas Kirkman proved in 1849, these 60 lines can be associated with 60 points in such a way that each point is on three lines and each line contains three points. The 60 points formed in this way are now known as the Kirkman points. The Pascal lines also pass, three at a time, through 20 Steiner points. There are 20 Cayley lines which consist of a Steiner point and three Kirkman points. The Steiner points also lie, four at a time, on 15 Plücker lines. Furthermore, the 20 Cayley lines pass four at a time through 15 points known as the Salmon points. - -Pascal's original note has no proof, but there are various modern proofs of the theorem. - -It is sufficient to prove the theorem when the conic is a circle, because any (non-degenerate) conic can be reduced to a circle by a projective transformation. This was realised by Pascal, whose first lemma states the theorem for a circle. His second lemma states that what is true in one plane remains true upon projection to another plane. Degenerate conics follow by continuity (the theorem is true for non-degenerate conics, and thus holds in the limit of a degenerate conic). - -A short elementary proof of Pascal's theorem in the case of a circle was found by van Yzeren, based on an earlier proof. This proof establishes the theorem for the circle and then generalizes it to conics. - -A short elementary computational proof in the case of the real projective plane was found by Stefanovic. - -The theorem can also be deduced from the existence of isogonal conjugates. If we are to show that X = AB ∩ DE, Y = BC ∩ EF, Z = CD ∩ FA are collinear for concyclic ABCDEF, then notice that △EYB and △CYF are similar, and that X and Z will correspond to the isogonal conjugate if we overlap the similar triangles. This means that ∠BYX = ∠CYZ, hence making XYZ collinear. - -A short proof can be constructed using cross-ratio preservation. Projecting tetrad ABCE from D onto line AB, we obtain tetrad ABPX, and projecting tetrad ABCE from F onto line BC, we obtain tetrad QBCY. This therefore means that R(AB; PX) = R(QB; CY), where one of the points in the two tetrads overlaps, hence meaning that other lines connecting the other three pairs must coincide to preserve cross ratio. Therefore, XYZ are collinear. - -Another proof for Pascal's theorem for a circle uses Menelaus' theorem repeatedly. - -Dandelin, the geometer who discovered the celebrated Dandelin spheres, came up with a beautiful proof using a "3D lifting" technique that is analogous to the 3D proof of Desargues' theorem. The proof makes use of the property that for every conic section we can find a one-sheet hyperboloid which passes through the conic. - -There also exists a simple proof for Pascal's theorem for a circle using the law of sines and similarity.
- -Pascal's theorem has a short proof using the Cayley–Bacharach theorem that given any 8 points in general position, there is a unique ninth point such that all cubics through the first 8 also pass through the ninth point. In particular, if 2 general cubics intersect in 8 points then any other cubic through the same 8 points meets the ninth point of intersection of the first two cubics. Pascal's theorem follows by taking the 8 points as the 6 points on the hexagon and two of the points (say, M and N in the figure) on the would-be Pascal line, and the ninth point as the third point (P in the figure). The first two cubics are two sets of 3 lines through the 6 points on the hexagon (for instance, the set AB, CD, EF, and the set BC, DE, FA), and the third cubic is the union of the conic and the line MN. Here the "ninth intersection" P cannot lie on the conic by genericity, and hence it lies on MN. - -The Cayley–Bacharach theorem is also used to prove that the group operation on cubic elliptic curves is associative. The same group operation can be applied on a cone if we choose a point E on the cone and a line MP in the plane. The sum of A and B is obtained by first finding the intersection point of line AB with MP, which is M. Next A and B add up to the second intersection point of the cone with line EM, which is D. Thus if Q is the second intersection point of the cone with line EN, then -$$ -(A + B) + C = D + C = Q = A + F = A + (B + C) -$$ - -Thus the group operation is associative. On the other hand, Pascal's theorem follows from the above associativity formula, and thus from the associativity of the group operation of elliptic curves by way of continuity. - -Suppose f is the cubic polynomial vanishing on the three lines through AB, CD, EF and g is the cubic vanishing on the other three lines BC, DE, FA. Pick a generic point P on the conic and choose λ so that the cubic h = f + λg vanishes on P. Then h = 0 is a cubic that has 7 points A, B, C, D, E, F, P in common with the conic. But by Bézout's theorem a cubic and a conic have at most 3 × 2 = 6 points in common, unless they have a common component. So the cubic h = 0 has a component in common with the conic which must be the conic itself, so h = 0 is the union of the conic and a line. It is now easy to check that this line is the Pascal line. - -Again given the hexagon on a conic of Pascal's theorem with the above notation for points (in the first figure), we have -$$ -\frac{\overline{GB}}{\overline{GA}} \times \frac{\overline{HA}}{\overline{HF}} \times \frac{\overline{KF}}{\overline{KE}} \times\frac{\overline{GE}}{\overline{GD}} \times \frac{\overline{HD}}{\overline{HC}} \times \frac{\overline{KC}}{\overline{KB}}=1. -$$ - -There exist 5-point, 4-point and 3-point degenerate cases of Pascal's theorem. In a degenerate case, two previously connected points of the figure will formally coincide and the connecting line becomes the tangent at the coalesced point. See the degenerate cases given in the added scheme and the external link on circle geometries. If one chooses suitable lines of the Pascal-figures as lines at infinity one gets many interesting figures on parabolas and hyperbolas. 
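Pascal's theorem also lends itself to a quick numerical check. A minimal sketch in homogeneous coordinates (the six angles and the helper names line and meet are our own choices; any six distinct points on the conic work):

```python
import numpy as np

def line(p, q):   # homogeneous line through two homogeneous points
    return np.cross(p, q)

def meet(l, m):   # homogeneous intersection point of two lines
    return np.cross(l, m)

# six points on the unit circle (a conic), in homogeneous coordinates
angles = [0.3, 1.1, 1.9, 2.8, 4.0, 5.2]
A, B, C, D, E, F = [np.array([np.cos(t), np.sin(t), 1.0]) for t in angles]

# opposite sides of hexagon ABCDEF: (AB, DE), (BC, EF), (CD, FA)
P = meet(line(A, B), line(D, E))
Q = meet(line(B, C), line(E, F))
R = meet(line(C, D), line(F, A))

# three points are collinear iff the determinant of their coordinates vanishes
print(np.linalg.det(np.stack([P, Q, R])))  # ~0 up to floating-point error
```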
diff --git a/wiki/wikipedia/951.txt b/wiki/wikipedia/951.txt deleted file mode 100644 index 342952f2013526554718efadd5b8a6b804c6b8cf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/951.txt +++ /dev/null @@ -1,33 +0,0 @@ -In computer science, dancing links is a technique for reverting the operation of deleting a node from a circular doubly linked list. It is particularly useful for efficiently implementing backtracking algorithms, such as Donald Knuth's Algorithm X for the exact cover problem. Algorithm X is a recursive, nondeterministic, depth-first, backtracking algorithm that finds all solutions to the exact cover problem. Some of the better-known exact cover problems include tiling, the n queens problem, and Sudoku. - -The name dancing links, which was suggested by Donald Knuth, stems from the way the algorithm works, as iterations of the algorithm cause the links to "dance" with partner links so as to resemble an "exquisitely choreographed dance." Knuth credits Hiroshi Hitotsumatsu and Kōhei Noshita with having invented the idea in 1979, but it is his paper which popularized it. - -As the remainder of this article discusses the details of an implementation technique for Algorithm X, the reader is strongly encouraged to read the Algorithm X article first. - -The idea of DLX is based on the observation that in a circular doubly linked list of nodes, - -x.left.right ← x.right; - -x.right.left ← x.left; - -will remove node x from the list, while - -x.left.right ← x; - -x.right.left ← x; - -will restore x's position in the list, assuming that x.right and x.left have been left unmodified. This works regardless of the number of elements in the list, even if that number is 1. - -Knuth observed that a naive implementation of his Algorithm X would spend an inordinate amount of time searching for 1's. When selecting a column, the entire matrix had to be searched for 1's. When selecting a row, an entire column had to be searched for 1's. After selecting a row, that row and a number of columns had to be searched for 1's. To improve this search time from complexity O(n) to O(1), Knuth implemented a sparse matrix where only 1's are stored. - -At all times, each node in the matrix will point to the adjacent nodes to the left and right (1's in the same row), above and below (1's in the same column), and the header for its column (described below). Each row and column in the matrix will consist of a circular doubly-linked list of nodes. - -Each column will have a special node known as the "column header," which will be included in the column list, and will form a special row ("control row") consisting of all the columns which still exist in the matrix. - -Finally, each column header may optionally track the number of nodes in its column, so that locating a column with the lowest number of nodes is of complexity O(n) rather than O(n×m) where n is the number of columns and m is the number of rows. Selecting a column with a low node count is a heuristic which improves performance in some cases, but is not essential to the algorithm. - -In Algorithm X, rows and columns are regularly eliminated from and restored to the matrix. Eliminations are determined by selecting a column and a row in that column. If a selected column doesn't have any rows, the current matrix is unsolvable and must be backtracked. When an elimination occurs, all columns for which the selected row contains a 1 are removed, along with all rows (including the selected row) that contain a 1 in any of the removed columns.
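The unlink/relink pair quoted above is easy to exercise in isolation. A minimal sketch (the Node class and helper names are ours, not Knuth's implementation):

```python
class Node:
    """Node in a circular doubly linked list (one axis of the DLX matrix)."""
    def __init__(self):
        self.left = self.right = self  # a fresh node is its own neighbor

def unlink(x):
    # x.left.right <- x.right; x.right.left <- x.left
    x.left.right = x.right
    x.right.left = x.left

def relink(x):
    # x.left.right <- x; x.right.left <- x
    # works only because x.left and x.right were left unmodified
    x.left.right = x
    x.right.left = x

# usage: build the circular list a <-> b <-> c, remove b, then restore it
a, b, c = Node(), Node(), Node()
a.right, b.right, c.right = b, c, a
a.left, b.left, c.left = c, a, b
unlink(b)
assert a.right is c and c.left is a   # b is skipped over
relink(b)
assert a.right is b and c.left is b   # b is back in place
```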
The columns are removed because they have been filled, and the rows are removed because they conflict with the selected row. To remove a single column, first remove the selected column's header. Next, for each row where the selected column contains a 1, traverse the row and remove it from other columns (this makes those rows inaccessible and is how conflicts are prevented). Repeat this column removal for each column where the selected row contains a 1. This order ensures that any removed node is removed exactly once and in a predictable order, so it can be backtracked appropriately. If the resulting matrix has no columns, then they have all been filled and the selected rows form the solution. - -To backtrack, the above process must be reversed using the second algorithm stated above. One requirement of using that algorithm is that backtracking must be done as an exact reversal of eliminations. Knuth's paper gives a clear picture of these relationships and how the node removal and reinsertion works, and provides a slight relaxation of this limitation. - -It is also possible to solve one-cover problems in which a particular constraint is optional, but can be satisfied no more than once. Dancing Links accommodates these with primary columns which must be filled and secondary columns which are optional. This alters the algorithm's solution test from a matrix having no columns to a matrix having no primary columns; if the heuristic of minimum ones in a column is being used, it needs to be checked only within primary columns. Knuth discusses optional constraints as applied to the n queens problem. The chessboard diagonals represent optional constraints, as some diagonals may not be occupied. If a diagonal is occupied, it can be occupied only once. diff --git a/wiki/wikipedia/952.txt b/wiki/wikipedia/952.txt deleted file mode 100644 index b04d2357816b478685f57101d48ce11f0a48dac2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/952.txt +++ /dev/null @@ -1,39 +0,0 @@ -In computing, a futex (short for "fast userspace mutex") is a kernel system call that programmers can use to implement basic locking, or as a building block for higher-level locking abstractions such as semaphores and POSIX mutexes or condition variables. - -A futex consists of a kernelspace wait queue that is attached to an atomic integer in userspace. Multiple processes or threads operate on the integer entirely in userspace (using atomic operations to avoid interfering with one another), and only resort to relatively expensive system calls to request operations on the wait queue (for example to wake up waiting processes, or to put the current process on the wait queue). A properly programmed futex-based lock will not use system calls except when the lock is contended; since most operations do not require arbitration between processes, this will not happen in most cases. - -On Linux, Hubertus Franke (IBM Thomas J. Watson Research Center), Matthew Kirkwood, Ingo Molnár (Red Hat) and Rusty Russell (IBM Linux Technology Center) originated the futex mechanism. Futexes appeared for the first time in version 2.5.7 of the Linux kernel development series; the semantics stabilized as of version 2.5.40, and futexes have been part of the Linux kernel mainline since the December 2003 release of the 2.6.x stable kernel series. - -In 2002 discussions took place on a proposal to make futexes accessible via the file system by creating a special node in /dev or /proc.
However, Linus Torvalds strongly opposed this idea and rejected any related patches. - -Futexes have been implemented in Microsoft Windows since Windows 8 or Windows Server 2012 under the name WaitOnAddress. - -In 2013 Microsoft filed a patent on futexes, which was granted in 2014. - -In May 2014 the CVE system announced a vulnerability discovered in the Linux kernel's futex subsystem that allowed denial-of-service attacks or local privilege escalation. - -In May 2015 the Linux kernel introduced a deadlock bug that caused a hang in user applications. The bug affected many enterprise Linux distributions, including 3.x and 4.x kernels, Red Hat Enterprise Linux versions 5, 6 and 7, SUSE Linux 12 and Amazon Linux. - -Futexes have been implemented in OpenBSD since 2016. - -The futex mechanism is one of the core concepts of the Zircon kernel in Google's Fuchsia operating system since at least April 2018. - -Futexes have two basic operations, WAIT and WAKE. - -* WAIT(addr, val) - -If the value stored at the address addr is val, puts the current thread to sleep. - -* WAKE(addr, num) - -Wakes up to num threads waiting on the address addr. - -For more advanced uses there are a number of other operations, the most used being REQUEUE and WAKE_OP, which both function as more generic WAKE operations. - -* CMP_REQUEUE(old_addr, new_addr, num_wake, num_move, val) - -If the value stored at the address old_addr is val, wakes num_wake threads waiting on the address old_addr, and enqueues num_move threads waiting on the address old_addr to now wait on the address new_addr. This can be used to avoid the thundering herd problem on wake. - -* WAKE_OP(addr1, addr2, num1, num2, op, op_arg, cmp, cmp_arg) - -Reads addr2, performs op with op_arg on it, and stores the result back to addr2. Then it wakes num1 threads waiting on addr1 and, if the value previously read from addr2 matches cmp_arg under the comparison cmp, wakes num2 threads waiting on addr2. This very flexible and generic wake mechanism is useful for implementing many synchronization primitives. diff --git a/wiki/wikipedia/953.txt b/wiki/wikipedia/953.txt deleted file mode 100644 index 70ce2acfb146ceef3782c143ebf8784ad7d9af0b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/953.txt +++ /dev/null @@ -1,5 +0,0 @@ -SyncEvolution synchronizes Evolution's contact, calendar and task items via SyncML. The items are exchanged in the vCard 2.1 or 3.0 format and iCalendar 2.0 format via the Synthesis C++ client API library, which should make SyncEvolution compatible with the majority of SyncML servers. Full, one-way and incremental synchronization of items is supported. - -SyncEvolution synchronizes personal information management (PIM) data such as contacts, appointments, tasks and memos using the Synthesis sync engine, which provides support for the SyncML synchronization protocol. - -SyncEvolution synchronizes with SyncML servers over HTTP and with SyncML capable phones locally over Bluetooth/OBEX. Plugins provide access to the data which is to be synchronized. Binaries are available for Linux desktops (synchronizing data in GNOME Evolution, with KDE supported indirectly already and Akonadi support in development), for MeeGo and for Maemo 5/Nokia N900. The source code can be compiled for Unix-like systems and provides a framework to build custom SyncML clients or servers.
diff --git a/wiki/wikipedia/954.txt b/wiki/wikipedia/954.txt deleted file mode 100644 index 65b84a675d6117d9758b270252d9d839b7c961cd..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/954.txt +++ /dev/null @@ -1,7 +0,0 @@ -In automated theorem proving, PhoX is a proof assistant based on higher-order logic which is eXtensible. The user gives PhoX an initial goal and guides it through subgoals and evidence to prove that goal; internally, it constructs natural deduction trees. Each previously proven formula can become a rule for later proofs. - -PhoX was originally designed and implemented by Christophe Raffalli in the OCaml programming language. He has continued to lead the current development team, a joint effort of the University of Savoy and University Paris VII. - -The primary aim of the PhoX project is to create a user-friendly proof checker using the type system developed by Jean-Louis Krivine at University Paris VII. It is meant to be more intuitive than other systems while remaining extensible, efficient, and expressive. Compared to other systems, the proof-building syntax is simplified and closer to natural language. Other features include GUI-driven proof construction, rendering formatted output, and proof of correctness of programs in the ML programming language. - -PhoX is currently used to teach logic at Savoy University. It is in an experimental but usable state. It is released under CeCILL 2.0. diff --git a/wiki/wikipedia/955.txt b/wiki/wikipedia/955.txt deleted file mode 100644 index 35e8c836b6fa080a20f8c24a76b12a5251b6b13e..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/955.txt +++ /dev/null @@ -1,7 +0,0 @@ -In additive number theory, Kemnitz's conjecture states that every set of lattice points in the plane has a large subset whose centroid is also a lattice point. It was proved independently in the autumn of 2003 by Christian Reiher, then an undergraduate student, and Carlos di Fiore, then a high school student. - -The exact formulation of this conjecture is as follows: - -Let $n$ be a natural number and $S$ a set of $4n-3$ lattice points in the plane. Then there exists a subset $S_1 \subseteq S$ with $n$ points such that the centroid of all points from $S_1$ is also a lattice point. - -Kemnitz's conjecture was formulated in 1983 by Arnfried Kemnitz as a generalization of the Erdős–Ginzburg–Ziv theorem, an analogous one-dimensional result stating that every $2n-1$ integers have a subset of size $n$ whose average is an integer. In 2000, Lajos Rónyai proved a weakened form of Kemnitz's conjecture for sets with $4n-2$ lattice points. Then, in 2003, Christian Reiher proved the full conjecture using the Chevalley–Warning theorem. diff --git a/wiki/wikipedia/956.txt b/wiki/wikipedia/956.txt deleted file mode 100644 index 07ce3f8a84327a85b3696ce5ccc7d0be604270f3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/956.txt +++ /dev/null @@ -1,46 +0,0 @@ -In mathematics, Dixon's identity (or Dixon's theorem or Dixon's formula) is any of several different but closely related identities proved by A. C. Dixon, some involving finite sums of products of three binomial coefficients, and some evaluating a hypergeometric sum. These identities famously follow from the MacMahon Master theorem, and can now be routinely proved by computer algorithms. - -The original identity, from Dixon's 1891 paper, is -$$ -\sum_{k=-a}^{a}(-1)^{k}{2a\choose k+a}^3 =\frac{(3a)!}{(a!)^3}.
-$$ - -A generalization, also sometimes called Dixon's identity, is -$$ -\sum_{k\in\mathbb{Z}}(-1)^k{a+b\choose a+k} {b+c\choose b+k}{c+a\choose c+k} = \frac{(a+b+c)!}{a!b!c!} -$$ - -where a, b, and c are non-negative integers . - -The sum on the left can be written as the terminating well-poised hypergeometric series -$$ -{b+c\choose b-a}{c+a\choose c-a}{}_3F_2(-2a,-a-b,-a-c;1+b-a,1+c-a;1) -$$ - -and the identity follows as a limiting case (as a tends to an integer) of - -Dixon's theorem evaluating a well-poised 3F2 generalized hypergeometric series at 1, from : - -_3F_2 (a,b,c;1+a-b,1+a-c;1)= - -\frac{\Gamma(1+a/2)\Gamma(1+a/2-b-c)\Gamma(1+a-b)\Gamma(1+a-c)} - -{\Gamma(1+a)\Gamma(1+a-b-c)\Gamma(1+a/2-b)\Gamma(1+a/2-c)}. - -This holds for Re(1 + 1/2a - b - c) > 0. As c tends to -∞ it reduces to Kummer's formula for the hypergeometric function 2F1 at -1. Dixon's theorem can be deduced from the evaluation of the Selberg integral. - -A q-analogue of Dixon's formula for the basic hypergeometric series in terms of the q-Pochhammer symbol is given by - -_{4}\phi_3 \left[\begin{matrix} - -a & -qa^{1/2} & b & c \\ - -&-a^{1/2} & aq/b & aq/c \end{matrix} - -; q,qa^{1/2}/bc \right] = - -\frac{(aq,aq/bc,qa^{1/2}/b,qa^{1/2}/c;q)_\infty}{(aq/b,aq/c,qa^{1/2},qa^{1/2}/bc;q)_\infty} - - - -where |qa1/2/bc| < 1. diff --git a/wiki/wikipedia/957.txt b/wiki/wikipedia/957.txt deleted file mode 100644 index 4d93ea77cf8cee154721a13295bc42e55cb917a7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/957.txt +++ /dev/null @@ -1,13 +0,0 @@ -Hoffman's packing puzzle is an assembly puzzle named after Dean G. Hoffman, who described it in 1978. The puzzle consists of 27 identical rectangular cuboids, each of whose edges have three different lengths. Its goal is to assemble them all to fit within a cube whose edge length is the sum of the three lengths. - -Hoffman writes that the first person to solve the puzzle was David A. Klarner, and that typical solution times can range from 20 minutes to multiple hours. - -The puzzle itself consists only of 27 identical rectangular cuboid-shaped blocks, although physical realizations of the puzzle also typically supply a cubical box to fit the blocks into. If the three lengths of the block edges are x, y, and z, then the cube should have edge length x + y + z. - -Although the puzzle can be constructed with any three different edge lengths, it is most difficult when the three edge lengths of the blocks are close enough together that x + y + z < 4 min(x,y,z), as this prevents alternative solutions in which four blocks of the minimum width are packed next to each other. Additionally, having the three lengths form an arithmetic progression can make it more confusing, because in this case placing three blocks of the middle width next to each other produces a row of the correct total width but one that cannot lead to a valid solution to the whole puzzle. - -Each valid solution to the puzzle arranges the blocks in an approximate 3 × 3 × 3 grid of blocks, with the sides of the blocks all parallel to the sides of the outer cube, and with one block of each width along each axis-parallel line of three blocks. Counting reflections and rotations as being the same solution as each other, the puzzle has 21 combinatorially distinct solutions. - -The total volume of the pieces, 27xyz, is less than the volume (x + y + z)3 of the cube that they pack into. 
If one takes the cube root of both volumes, and divides by three, then the number obtained in this way from the total volume of the pieces is the geometric mean of x, y, and z, while the number obtained in the same way from the volume of the cube is their arithmetic mean. The fact that the pieces have less total volume than the cube follows from the inequality of arithmetic and geometric means. - -A two-dimensional analogue of the puzzle asks to pack four identical rectangles of side lengths x and y into a square of side length x + y; as the figure shows, this is always possible. In d dimensions the puzzle asks to pack $d^d$ identical blocks into a hypercube. By a result of Raphael M. Robinson this is again solvable whenever $d = d_1 \times d_2$ for two numbers $d_1$ and $d_2$ such that the $d_1$- and $d_2$-dimensional cases are themselves solvable. For instance, according to this result, it is solvable for dimensions 4, 6, 8, 9, and other 3-smooth numbers. In all dimensions, the inequality of arithmetic and geometric means shows that the volume of the pieces is less than the volume of the hypercube into which they should be packed. However, it is unknown whether the puzzle can be solved in five dimensions, or in higher prime number dimensions. diff --git a/wiki/wikipedia/958.txt b/wiki/wikipedia/958.txt deleted file mode 100644 index 3416720fd73bc429cdb61e34aed92620e813f424..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/958.txt +++ /dev/null @@ -1,17 +0,0 @@ -The chronology protection conjecture is a hypothesis first proposed by Stephen Hawking that laws of physics beyond those of standard general relativity prevent time travel on all but microscopic scales, even when general relativity itself states that it should be possible (i.e., in scenarios where faster-than-light travel is allowed). The permissibility of time travel is represented mathematically by the existence of closed timelike curves in some solutions to the field equations of general relativity. The chronology protection conjecture should be distinguished from chronological censorship, under which every closed timelike curve passes through an event horizon, which might prevent an observer from detecting the causal violation (also known as chronology violation). - -In a 1992 paper, Hawking uses the metaphorical device of a "Chronology Protection Agency" as a personification of the aspects of physics that make time travel impossible at macroscopic scales, thus apparently preventing time paradoxes. - -The idea of the Chronology Protection Agency appears to be drawn playfully from the Time Patrol or Time Police concept, which has been used in many works of science fiction such as Poul Anderson's series of Time Patrol stories or Isaac Asimov's novel The End of Eternity, or in the television series Doctor Who. "The Chronology Protection Case" by Paul Levinson posits a universe that goes so far as to murder any scientists who are close to inventing any means of time travel. - -Many attempts to generate scenarios for closed timelike curves have been suggested, and the theory of general relativity does allow them in certain circumstances. Some theoretical solutions in general relativity that contain closed timelike curves would require an infinite universe with certain features that our universe does not appear to have, such as the universal rotation of the Gödel metric or the rotating cylinder of infinite length known as a Tipler cylinder.
However, some solutions allow for the creation of closed timelike curves in a bounded region of spacetime, with the Cauchy horizon being the boundary between the region of spacetime where closed timelike curves can exist and the rest of spacetime where they cannot. One of the first such bounded time travel solutions found was constructed from a traversable wormhole, based on the idea of taking one of the two "mouths" of the wormhole on a round-trip journey at relativistic speed to create a time difference between it and the other mouth (see the discussion at Wormhole#Time travel). - -General relativity does not include quantum effects on its own, and a full integration of general relativity and quantum mechanics would require a theory of quantum gravity, but there is an approximate method for modeling quantum fields in the curved spacetime of general relativity, known as semiclassical gravity. Initial attempts to apply semiclassical gravity to the traversable wormhole time machine indicated that at exactly the moment that wormhole would first allow for closed timelike curves, quantum vacuum fluctuations build up and drive the energy density to infinity in the region of the wormholes. This occurs when the two wormhole mouths, call them A and B, have been moved in such a way that it becomes possible for a particle or wave moving at the speed of light to enter mouth B at some time T2 and exit through mouth A at an earlier time T1, then travel back towards mouth B through ordinary space, and arrive at mouth B at the same time T2 that it entered B on the previous loop; in this way the same particle or wave can make a potentially infinite number of loops through the same regions of spacetime, piling up on itself. Calculations showed that this effect would not occur for an ordinary beam of radiation, because it would be "defocused" by the wormhole so that most of a beam emerging from mouth A would spread out and miss mouth B. But when the calculation was done for vacuum fluctuations, it was found that they would spontaneously refocus on the trip between the mouths, indicating that the pileup effect might become large enough to destroy the wormhole in this case. - -Uncertainty about this conclusion remained, because the semiclassical calculations indicated that the pileup would only drive the energy density to infinity for an infinitesimal moment of time, after which the energy density would die down. But semiclassical gravity is considered unreliable for large energy densities or short time periods that reach the Planck scale; at these scales, a complete theory of quantum gravity is needed for accurate predictions. So, it remains uncertain whether quantum-gravitational effects might prevent the energy density from growing large enough to destroy the wormhole. Stephen Hawking conjectured that not only would the pileup of vacuum fluctuations still succeed in destroying the wormhole in quantum gravity, but also that the laws of physics would ultimately prevent any type of time machine from forming; this is the chronology protection conjecture. - -Subsequent works in semiclassical gravity provided examples of spacetimes with closed timelike curves where the energy density due to vacuum fluctuations does not approach infinity in the region of spacetime outside the Cauchy horizon. 
However, in 1997 a general proof was found demonstrating that according to semiclassical gravity, the energy of the quantum field (more precisely, the expectation value of the quantum stress-energy tensor) must always be either infinite or undefined on the horizon itself. Both cases indicate that semiclassical methods become unreliable at the horizon and quantum gravity effects would be important there, consistent with the possibility that such effects would always intervene to prevent time machines from forming. - -A definite theoretical decision on the status of the chronology protection conjecture would require a full theory of quantum gravity as opposed to semiclassical methods. There are also some arguments from string theory that seem to support chronology protection, but string theory is not yet a complete theory of quantum gravity. Experimental observation of closed timelike curves would of course demonstrate this conjecture to be false, but short of that, if physicists had a theory of quantum gravity whose predictions had been well-confirmed in other areas, this would give them a significant degree of confidence in the theory's predictions about the possibility or impossibility of time travel. - -Other proposals that allow for backwards time travel but prevent time paradoxes, such as the Novikov self-consistency principle, which would ensure the timeline stays consistent, or the idea that a time traveler is taken to a parallel universe while their original timeline remains intact, do not qualify as "chronology protection". diff --git a/wiki/wikipedia/959.txt b/wiki/wikipedia/959.txt deleted file mode 100644 index 6af13c1691844081c7219af0062cd1b59099c9b3..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/959.txt +++ /dev/null @@ -1,20 +0,0 @@ -In propositional logic, biconditional introduction is a valid rule of inference. It allows for one to infer a biconditional from two conditional statements. The rule makes it possible to introduce a biconditional statement into a logical proof. If $P \to Q$ is true, and if $Q \to P$ is true, then one may infer that $P \leftrightarrow Q$ is true. For example, from the statements "if I'm breathing, then I'm alive" and "if I'm alive, then I'm breathing", it can be inferred that "I'm breathing if and only if I'm alive". Biconditional introduction is the converse of biconditional elimination. The rule can be stated formally as: -$$ -\frac{P \to Q, Q \to P}{\therefore P \leftrightarrow Q} -$$ - -where the rule is that wherever instances of "$P \to Q$" and "$Q \to P$" appear on lines of a proof, "$P \leftrightarrow Q$" can validly be placed on a subsequent line. - -The biconditional introduction rule may be written in sequent notation: -$$ -(P \to Q), (Q \to P) \vdash (P \leftrightarrow Q) -$$ - -where $\vdash$ is a metalogical symbol meaning that $P \leftrightarrow Q$ is a syntactic consequence when $P \to Q$ and $Q \to P$ are both in a proof; - -or as the statement of a truth-functional tautology or theorem of propositional logic: -$$ -((P \to Q) \land (Q \to P)) \to (P \leftrightarrow Q) -$$ - -where $P$, and $Q$ are propositions expressed in some formal system. 
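The tautology form above can be checked mechanically by enumerating all valuations. A small illustrative script (not part of the original article):

```python
from itertools import product

implies = lambda a, b: (not a) or b   # truth function of the conditional
iff     = lambda a, b: a == b         # truth function of the biconditional

# ((P -> Q) and (Q -> P)) -> (P <-> Q) holds under every valuation
assert all(
    implies(implies(p, q) and implies(q, p), iff(p, q))
    for p, q in product([False, True], repeat=2)
)
print("tautology confirmed on all 4 valuations")
```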
diff --git a/wiki/wikipedia/96.txt b/wiki/wikipedia/96.txt deleted file mode 100644 index 7faed2c1c076af572ffc821c43133e379056e7ea..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/96.txt +++ /dev/null @@ -1,78 +0,0 @@ -In mathematics, specifically the theory of Lie algebras, Lie's theorem states that, over an algebraically closed field of characteristic zero, if $\pi: \mathfrak{g} \to \mathfrak{gl}(V)$ is a finite-dimensional representation of a solvable Lie algebra, then there is a flag $V = V_0 \supset V_1 \supset \cdots \supset V_n = 0$ of invariant subspaces of $\pi(\mathfrak{g})$ with $\operatorname{codim} V_i = i$, meaning that $\pi(X)(V_i) \subseteq V_i$ for each $X \in \mathfrak{g}$ and i. - -Put another way, the theorem says there is a basis for V such that all linear transformations in $\pi(\mathfrak{g})$ are represented by upper triangular matrices. This is a generalization of the result of Frobenius that commuting matrices are simultaneously upper triangularizable, as commuting matrices generate an abelian Lie algebra, which is a fortiori solvable. - -A consequence of Lie's theorem is that any finite dimensional solvable Lie algebra over a field of characteristic 0 has a nilpotent derived algebra (see #Consequences). Also, to each flag in a finite-dimensional vector space V, there corresponds a Borel subalgebra (consisting of the linear transformations stabilizing the flag); thus, the theorem says that $\pi(\mathfrak{g})$ is contained in some Borel subalgebra of $\mathfrak{gl}(V)$. - -For algebraically closed fields of characteristic p>0, Lie's theorem holds provided the dimension of the representation is less than p (see the proof below), but can fail for representations of dimension p. An example is given by the 3-dimensional nilpotent Lie algebra spanned by 1, x, and d/dx acting on the p-dimensional vector space $k[x]/(x^p)$, which has no eigenvectors. Taking the semidirect product of this 3-dimensional Lie algebra by the p-dimensional representation (considered as an abelian Lie algebra) gives a solvable Lie algebra whose derived algebra is not nilpotent. - -The proof is by induction on the dimension of $\mathfrak{g}$ and consists of several steps. (Note: the structure of the proof is very similar to that for Engel's theorem.) The base case is trivial and we assume the dimension of $\mathfrak{g}$ is positive. We also assume V is not zero. For simplicity, we write $X \cdot v = \pi(X)(v)$. - -Step 1: Observe that the theorem is equivalent to the statement: - -*There exists a vector in V that is an eigenvector for each linear transformation in $\pi(\mathfrak{g})$. - -Indeed, the theorem says in particular that a nonzero vector spanning $V_{n-1}$ is a common eigenvector for all the linear transformations in $\pi(\mathfrak{g})$. Conversely, if v is a common eigenvector, take $V_{n-1}$ to be its span and then $\pi(\mathfrak{g})$ admits a common eigenvector in the quotient $V/V_{n-1}$; repeat the argument. - -Step 2: Find an ideal $\mathfrak{h}$ of codimension one in $\mathfrak{g}$. - -Let $D\mathfrak{g} = [\mathfrak{g}, \mathfrak{g}]$ be the derived algebra. Since $\mathfrak{g}$ is solvable and has positive dimension, $D\mathfrak{g} \ne \mathfrak{g}$ and so the quotient $\mathfrak{g}/D\mathfrak{g}$ is a nonzero abelian Lie algebra, which certainly contains an ideal of codimension one and by the ideal correspondence, it corresponds to an ideal of codimension one in $\mathfrak{g}$.
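For a concrete instance of Step 2 (an added illustration, not from the original article), consider the unique non-abelian two-dimensional Lie algebra, where the derived algebra itself is already an ideal of codimension one:

```latex
% \mathfrak{g} = \operatorname{span}\{x, y\} with bracket [x, y] = y
% is solvable, and its derived algebra is
% D\mathfrak{g} = [\mathfrak{g}, \mathfrak{g}] = \operatorname{span}\{y\},
% an ideal of codimension one; so \mathfrak{h} = \operatorname{span}\{y\}
% serves as the ideal required in Step 2.
\mathfrak{g} = \langle x, y \mid [x, y] = y \rangle, \qquad
\mathfrak{h} = D\mathfrak{g} = \operatorname{span}\{y\}, \qquad
\dim(\mathfrak{g}/\mathfrak{h}) = 1.
```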
- -Step 3: There exists some linear functional $\lambda$ in $\mathfrak{h}^*$ such that -$$ -V_{\lambda} = \{ v \in V | X \cdot v = \lambda(X) v, X \in \mathfrak{h} \} -$$ - -is nonzero. This follows from the inductive hypothesis (it is easy to check that the eigenvalues determine a linear functional). - -Step 4: $V_{\lambda}$ is a $\mathfrak{g}$-invariant subspace. (Note this step proves a general fact and does not involve solvability.) - -Let $Y \in \mathfrak{g}$, $v \in V_{\lambda}$, then we need to prove $Y \cdot v \in V_{\lambda}$. If $v = 0$ then it's obvious, so assume $v \ne 0$ and set recursively $v_0 = v, v_{i+1} = Y \cdot v_i$. Let $U = \operatorname{span} \{ v_i | i \ge 0 \}$ and $\ell \in \mathbb{N}_0$ be the largest such that $v_0,\ldots,v_\ell$ are linearly independent. Then we will prove that they generate U and thus $\alpha = (v_0,\ldots,v_\ell)$ is a basis of U. Indeed, assume by contradiction that it's not the case and let $m \in \mathbb{N}_0$ be the smallest such that $v_m \notin \langle v_0,\ldots,v_\ell\rangle$, then obviously $m \ge \ell + 1$. Since $v_0,\ldots,v_{\ell+1}$ are linearly dependent, $v_{\ell+1}$ is a linear combination of $v_0,\ldots,v_\ell$. Applying the map $Y^{m-\ell-1}$ it follows that $v_m$ is a linear combination of $v_{m-\ell-1},\ldots,v_{m-1}$. Since by the minimality of m each of these vectors is a linear combination of $v_0,\ldots,v_\ell$, so is $v_m$, and we get the desired contradiction. We will prove by induction that for every $n \in \mathbb{N}_0$ and $X \in \mathfrak{h}$ there exist elements $a_{0,n,X},\ldots,a_{n,n,X}$ of the base field such that $a_{n,n,X}=\lambda(X)$ and -$$ -X \cdot v_n = \sum_{i=0}^{n} a_{i,n,X}v_i. -$$ - -The $n=0$ case is straightforward since $X \cdot v_0 = \lambda(X) v_0$. Now assume that we have proved the claim for some $n \in \mathbb{N}_0$ and all elements of $\mathfrak{h}$ and let $X \in \mathfrak{h}$. Since $\mathfrak{h}$ is an ideal, we have $[X,Y] \in \mathfrak{h}$, and thus -$$ -X \cdot v_{n+1} = Y \cdot (X \cdot v_n) + [X, Y] \cdot v_n = Y \cdot \sum_{i=0}^{n} a_{i,n,X}v_i + \sum_{i=0}^{n} a_{i,n,[X,Y]}v_i = a_{0,n,[X,Y]}v_0 + \sum_{i=1}^{n} (a_{i-1,n,X} + a_{i,n,[X,Y]})v_i + \lambda(X)v_{n+1}, -$$ - -and the induction step follows. This implies that for every $X \in \mathfrak{h}$ the subspace U is an invariant subspace of X and the matrix of the restricted map $\pi(X)|_U$ in the basis $\alpha$ is upper triangular with diagonal elements equal to $\lambda(X)$, hence $\operatorname{tr}(\pi(X)|_U) = \dim(U) \lambda(X)$. Applying this with $[X,Y] \in \mathfrak{h}$ instead of X gives $\operatorname{tr}(\pi([X,Y])|_U) = \dim(U) \lambda([X,Y])$. On the other hand, U is also obviously an invariant subspace of Y, and so -$$ -\operatorname{tr}(\pi([X,Y])|_U) = \operatorname{tr}([\pi(X),\pi(Y)]|_U) = \operatorname{tr}([\pi(X)|_U, \pi(Y)|_U]) = 0 -$$ - -since commutators have zero trace, and thus $\dim(U) \lambda([X,Y]) = 0$. Since $\dim(U) > 0$ is invertible (because of the assumption on the characteristic of the base field), $\lambda([X, Y]) = 0$ and -$$ -X \cdot (Y \cdot v) = Y \cdot (X \cdot v) + [X, Y] \cdot v = Y \cdot (\lambda(X) v) + \lambda([X, Y]) v = \lambda(X) (Y \cdot v), -$$ - -and so $Y \cdot v \in V_{\lambda}$. - -Step 5: Finish up the proof by finding a common eigenvector. - -Write $\mathfrak{g} = \mathfrak{h} + L$ where L is a one-dimensional vector subspace. Since the base field is algebraically closed, there exists an eigenvector in $V_{\lambda}$ for some (thus every) nonzero element of L.
Since that vector is also an eigenvector for each element of $\mathfrak{h}$, the proof is complete. $\square$ - -The theorem applies in particular to the adjoint representation $\operatorname{ad}: \mathfrak{g} \to \mathfrak{gl}(\mathfrak{g})$ of a (finite-dimensional) solvable Lie algebra $\mathfrak{g}$ over an algebraically closed field of characteristic zero; thus, one can choose a basis on $\mathfrak{g}$ with respect to which $\operatorname{ad}(\mathfrak{g})$ consists of upper triangular matrices. It follows easily that for each $x, y \in \mathfrak{g}$, $\operatorname{ad}([x, y]) = [\operatorname{ad}(x), \operatorname{ad}(y)]$ has diagonal consisting of zeros; i.e., $\operatorname{ad}([x, y])$ is a strictly upper triangular matrix. This implies that $[\mathfrak g, \mathfrak g]$ is a nilpotent Lie algebra. Moreover, if the base field is not algebraically closed then solvability and nilpotency of a Lie algebra are unaffected by extending the base field to its algebraic closure. Hence, one concludes the statement (the other implication is obvious): - -A finite-dimensional Lie algebra $\mathfrak g$ over a field of characteristic zero is solvable if and only if the derived algebra $D \mathfrak g = [\mathfrak g, \mathfrak g]$ is nilpotent. - -Lie's theorem also establishes one direction in Cartan's criterion for solvability: - -If V is a finite-dimensional vector space over a field of characteristic zero and $\mathfrak{g} \subseteq \mathfrak{gl}(V)$ a Lie subalgebra, then $\mathfrak{g}$ is solvable if and only if $\operatorname{tr}(XY) = 0$ for every $X \in \mathfrak{g}$ and $Y \in [\mathfrak{g}, \mathfrak{g}]$. - -Indeed, as above, after extending the base field, the implication $\Rightarrow$ is seen easily. (The converse is more difficult to prove.) - -Lie's theorem (for various V) is equivalent to the statement: - -For a solvable Lie algebra $\mathfrak g$ over an algebraically closed field of characteristic zero, each finite-dimensional simple $\mathfrak{g}$-module (i.e., irreducible as a representation) has dimension one. - -Indeed, Lie's theorem clearly implies this statement. Conversely, assume the statement is true. Given a finite-dimensional $\mathfrak g$-module V, let $V_1$ be a maximal $\mathfrak g$-submodule (which exists by finiteness of the dimension). Then, by maximality, $V/V_1$ is simple; thus, it is one-dimensional. The induction now finishes the proof. - -The statement says in particular that a finite-dimensional simple module over an abelian Lie algebra is one-dimensional; this fact remains true over any base field since in this case every vector subspace is a Lie subalgebra. - -Here is another quite useful application: - -Let $\mathfrak{g}$ be a finite-dimensional Lie algebra over an algebraically closed field of characteristic zero with radical $\operatorname{rad}(\mathfrak{g})$. Then each finite-dimensional simple representation $\pi: \mathfrak{g} \to \mathfrak{gl}(V)$ is the tensor product of a simple representation of $\mathfrak{g}/\operatorname{rad}(\mathfrak{g})$ with a one-dimensional representation of $\mathfrak{g}$ (i.e., a linear functional vanishing on Lie brackets). - -By Lie's theorem, we can find a linear functional $\lambda$ of $\operatorname{rad}(\mathfrak{g})$ so that the weight space $V_{\lambda}$ of $\operatorname{rad}(\mathfrak{g})$ is nonzero. By Step 4 of the proof of Lie's theorem, $V_{\lambda}$ is also a $\mathfrak{g}$-module; so $V = V_{\lambda}$.
In particular, for each $X \in \operatorname{rad}(\mathfrak{g})$, $\operatorname{tr}(\pi(X)) = \dim(V) \lambda(X)$. Extend $\lambda$ to a linear functional on $\mathfrak{g}$ that vanishes on $[\mathfrak g, \mathfrak g]$; $\lambda$ is then a one-dimensional representation of $\mathfrak{g}$. Now, $(\pi, V) \simeq (\pi, V) \otimes (-\lambda) \otimes \lambda$. Since $\pi$ coincides with $\lambda$ on $\operatorname{rad}(\mathfrak{g})$, we have that $V \otimes (-\lambda)$ is trivial on $\operatorname{rad}(\mathfrak{g})$ and thus is the restriction of a (simple) representation of $\mathfrak{g}/\operatorname{rad}(\mathfrak{g})$. $\square$ diff --git a/wiki/wikipedia/960.txt b/wiki/wikipedia/960.txt deleted file mode 100644 index 17031bea14d3c604623de08c308fecf77eb167a0..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/960.txt +++ /dev/null @@ -1 +0,0 @@ -In mathematics, the Mostow–Palais theorem is an equivariant version of the Whitney embedding theorem. It states that if a manifold is acted on by a compact Lie group with finitely many orbit types, then it can be embedded into some finite-dimensional orthogonal representation. It was introduced by George Mostow and Richard Palais in 1957. diff --git a/wiki/wikipedia/961.txt b/wiki/wikipedia/961.txt deleted file mode 100644 index 712593ed3b0a220181a3b56d4bcad642d608f2e6..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/961.txt +++ /dev/null @@ -1,11 +0,0 @@ -In the mathematical theory of probability, Lenglart's inequality was proved by Érik Lenglart in 1977. Later slight modifications are also called Lenglart's inequality. - -Let X be a non-negative right-continuous $\mathcal{F}_t$-adapted process and let G be a non-negative right-continuous non-decreasing predictable process such that $\mathbb{E}[X(\tau)\mid \mathcal{F}_0]\leq \mathbb{E}[G(\tau)\mid \mathcal{F}_0]< \infty$ for any bounded stopping time $\tau$. Then - - - -(i) $\forall c,d>0, \mathbb{P}\left(\sup_{t\geq 0}X(t)>c\Big\vert\mathcal{F}_0\right)\leq \frac{1}{c}\mathbb{E} \left[\sup_{t\geq 0}G(t)\wedge d\Big\vert\mathcal{F}_0\right]+\mathbb{P}\left(\sup_{t\geq 0}G(t)\geq d\Big\vert\mathcal{F}_0\right).$ - - - -(ii) $\forall p\in(0,1), \mathbb{E}\left[\left(\sup_{t\geq 0}X(t)\right)^p\Big\vert \mathcal{F}_0 \right]\leq c_p\mathbb{E}\left[\left(\sup_{t\geq 0}G(t)\right)^p\Big\vert \mathcal{F}_0\right], \text{ where } c_p:=\frac{p^{-p}}{1-p}$. diff --git a/wiki/wikipedia/962.txt b/wiki/wikipedia/962.txt deleted file mode 100644 index bd62984b9968a34629514eb989e7622f2eca34bb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/962.txt +++ /dev/null @@ -1,152 +0,0 @@ -In plane geometry, Morley's trisector theorem states that in any triangle, the three points of intersection of the adjacent angle trisectors form an equilateral triangle, called the first Morley triangle or simply the Morley triangle. The theorem was discovered in 1899 by Anglo-American mathematician Frank Morley. It has various generalizations; in particular, if all of the trisectors are intersected, one obtains four other equilateral triangles. - -There are many proofs of Morley's theorem, some of which are very technical. - -Several early proofs were based on delicate trigonometric calculations. Recent proofs include an algebraic proof by extending the theorem to general fields other than those of characteristic three, and John Conway's elementary geometry proof. The latter starts with an equilateral triangle and shows that a triangle may be built around it which will be similar to any selected triangle.
Morley's theorem does not hold in spherical and hyperbolic geometry. - -One proof uses the trigonometric identity -$$ -\sin(3\theta)=4\sin\theta\sin(60^\circ+\theta)\sin(120^\circ+\theta), \tag{1} -$$ - -which, by using the sum of two angles identity, can be shown to be equal to -$$ -\sin(3\theta)=-4\sin^3\theta+3\sin\theta. -$$ - -The last equation can be verified by applying the sum of two angles identity to the left side twice and eliminating the cosine. - -Points $D, E, F$ are constructed on $\overline{BC}$ as shown. We have $3\alpha+3\beta+3\gamma=180^\circ$, the sum of any triangle's angles, so $\alpha+\beta+\gamma=60^\circ.$ Therefore, the angles of triangle $XEF$ are $\alpha, (60^\circ+\beta),$ and $(60^\circ+\gamma).$ - -From the figure -$$ -\sin(60^\circ+\beta)=\frac{\overline{DX}}{\overline{XE}} \tag{2} -$$ - -and -$$ -\sin(60^\circ+\gamma)=\frac{\overline{DX}}{\overline{XF}}. \tag{3} -$$ - -Also from the figure -$$ -\angle{AYC}=180^\circ-\alpha-\gamma=120^\circ+\beta -$$ - -and -$$ -\angle{AZB}=120^\circ+\gamma. \tag{4} -$$ - -The law of sines applied to triangles $AYC$ and $AZB$ yields -$$ -\sin(120^\circ+\beta)=\frac{\overline{AC}}{\overline{AY}}\sin\gamma \tag{5} -$$ - -and -$$ -\sin(120^\circ+\gamma)=\frac{\overline{AB}}{\overline{AZ}}\sin\beta. \tag{6} -$$ - -Express the height of triangle $ABC$ in two ways -$$ -h=\overline{AB} \sin(3\beta)=\overline{AB}\cdot 4\sin\beta\sin(60^\circ+\beta)\sin(120^\circ+\beta) -$$ - -and -$$ -h=\overline{AC} \sin(3\gamma)=\overline{AC}\cdot 4\sin\gamma\sin(60^\circ+\gamma)\sin(120^\circ+\gamma). -$$ - -where equation (1) was used to replace $\sin(3\beta)$ and $\sin(3\gamma)$ in these two equations. Substituting equations (2) and (5) in the $\beta$ equation and equations (3) and (6) in the $\gamma$ equation gives -$$ -h=4\overline{AB}\sin\beta\cdot\frac{\overline{DX}}{\overline{XE}}\cdot\frac{\overline{AC}}{\overline{AY}}\sin\gamma -$$ - -and -$$ -h=4\overline{AC}\sin\gamma\cdot\frac{\overline{DX}}{\overline{XF}}\cdot\frac{\overline{AB}}{\overline{AZ}}\sin\beta -$$ - -Since the numerators are equal -$$ -\overline{XE}\cdot\overline{AY}=\overline{XF}\cdot\overline{AZ} -$$ - -or -$$ -\frac{\overline{XE}}{\overline{XF}}=\frac{\overline{AZ}}{\overline{AY}}. -$$ - -Since angle $EXF$ and angle $ZAY$ are equal and the sides forming these angles are in the same ratio, triangles $XEF$ and $AZY$ are similar. - -Similar angles $AYZ$ and $XFE$ equal $(60^\circ+\gamma)$, and similar angles $AZY$ and $XEF$ equal $(60^\circ+\beta).$ Similar arguments yield the base angles of triangles $BXZ$ and $CYX.$ - -In particular angle $BZX$ is found to be $(60^\circ+\alpha)$ and from the figure we see that -$$ -\angle{AZY}+\angle{AZB}+\angle{BZX}+\angle{XZY}=360^\circ. -$$ - -Substituting yields -$$ -(60^\circ+\beta)+(120^\circ+\gamma)+(60^\circ+\alpha)+\angle{XZY}=360^\circ -$$ - -where equation (4) was used for angle $AZB$ and therefore -$$ -\angle{XZY}=60^\circ. -$$ - -Similarly the other angles of triangle $XYZ$ are found to be $60^\circ.$ - -The first Morley triangle has side lengths -$$ -a^\prime=b^\prime=c^\prime=8R\sin(A/3)\sin(B/3)\sin(C/3), -$$ - -where R is the circumradius of the original triangle and A, B, and C are the angles of the original triangle. Since the area of an equilateral triangle is $\tfrac{\sqrt{3}}{4}a'^2,$ the area of Morley's triangle can be expressed as -$$ -\text{Area} = 16 \sqrt{3}R^2\sin^2(A/3)\sin^2(B/3)\sin^2(C/3). -$$ - -Morley's theorem entails 18 equilateral triangles.
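The trisector construction and the side-length formula can be sanity-checked numerically. A sketch (the coordinate placement, angle bookkeeping, and helper names are our own choices; angles in radians, R = 1):

```python
import math

def morley_triangle(A, B, C, R=1.0):
    """First Morley triangle of the triangle with angles A, B, C
    (A + B + C == pi) and circumradius R, placed with B at the origin
    and C on the positive x-axis."""
    a, c = 2*R*math.sin(A), 2*R*math.sin(C)   # law of sines
    pB, pC = (0.0, 0.0), (a, 0.0)
    pA = (c*math.cos(B), c*math.sin(B))

    def hit(p, ta, q, tb):
        # intersection of rays p + s*(cos ta, sin ta) and q + t*(cos tb, sin tb)
        ux, uy, vx, vy = math.cos(ta), math.sin(ta), math.cos(tb), math.sin(tb)
        dx, dy = q[0] - p[0], q[1] - p[1]
        s = (vx*dy - vy*dx) / (vx*uy - vy*ux)   # Cramer's rule
        return (p[0] + s*ux, p[1] + s*uy)

    # intersect, for each side, the two adjacent trisectors
    X = hit(pB, B/3,                   pC, math.pi - C/3)         # near BC
    Y = hit(pC, math.pi - 2*C/3,       pA, math.pi + B + 2*A/3)   # near CA
    Z = hit(pA, math.pi + B + A/3,     pB, 2*B/3)                 # near AB
    return X, Y, Z

A, B, C = 1.1, 0.7, math.pi - 1.8
X, Y, Z = morley_triangle(A, B, C)
d = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
print(d(X, Y), d(Y, Z), d(Z, X))                     # three equal sides
print(8*math.sin(A/3)*math.sin(B/3)*math.sin(C/3))   # formula above, R = 1
```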
The triangle described in the trisector theorem above, called the first Morley triangle, has vertices given in trilinear coordinates relative to a triangle ABC as follows: - -A-vertex = 1 : 2 cos(C/3) : 2 cos(B/3) - -B-vertex = 2 cos(C/3) : 1 : 2 cos(A/3) - -C-vertex = 2 cos(B/3) : 2 cos(A/3) : 1 - -Another of Morley's equilateral triangles that is also a central triangle is called the second Morley triangle and is given by these vertices: - -A-vertex = 1 : 2 cos(C/3 - 2pi/3) : 2 cos(B/3 - 2pi/3) - -B-vertex = 2 cos(C/3 - 2pi/3) : 1 : 2 cos(A/3 - 2pi/3) - -C-vertex = 2 cos(B/3 - 2pi/3) : 2 cos(A/3 - 2pi/3) : 1 - -The third of Morley's 18 equilateral triangles that is also a central triangle is called the third Morley triangle and is given by these vertices: - -A-vertex = 1 : 2 cos(C/3 - 4pi/3) : 2 cos(B/3 - 4pi/3) - -B-vertex = 2 cos(C/3 - 4pi/3) : 1 : 2 cos(A/3 - 4pi/3) - -C-vertex = 2 cos(B/3 - 4pi/3) : 2 cos(A/3 - 4pi/3) : 1 - -The first, second, and third Morley triangles are pairwise homothetic. Another homothetic triangle is formed by the three points X on the circumcircle of triangle ABC at which the line $XX^{-1}$ is tangent to the circumcircle, where $X^{-1}$ denotes the isogonal conjugate of X. This equilateral triangle, called the circumtangential triangle, has these vertices: - -A-vertex = csc(C/3 - B/3) : csc(B/3 + 2C/3) : -csc(C/3 + 2B/3) - -B-vertex = -csc(A/3 + 2C/3) : csc(A/3 - C/3) : csc(C/3 + 2A/3) - -C-vertex = csc(A/3 + 2B/3) : -csc(B/3 + 2A/3) : csc(B/3 - A/3) - -A fifth equilateral triangle, also homothetic to the others, is obtained by rotating the circumtangential triangle by pi/6 about its center. Called the circumnormal triangle, its vertices are as follows: - -A-vertex = sec(C/3 - B/3) : -sec(B/3 + 2C/3) : -sec(C/3 + 2B/3) - -B-vertex = -sec(A/3 + 2C/3) : sec(A/3 - C/3) : -sec(C/3 + 2A/3) - -C-vertex = -sec(A/3 + 2B/3) : -sec(B/3 + 2A/3) : sec(B/3 - A/3) - -An operation called "extraversion" can be used to obtain one of the 18 Morley triangles from another. Each triangle can be extraverted in three different ways; the 18 Morley triangles and 27 extravert pairs of triangles form the 18 vertices and 27 edges of the Pappus graph. - -The centroid of the first Morley triangle is given in trilinear coordinates by - -Morley center = X(356) = cos(A/3) + 2 cos(B/3)cos(C/3) : cos(B/3) + 2 cos(C/3)cos(A/3) : cos(C/3) + 2 cos(A/3)cos(B/3). - -The first Morley triangle is perspective to triangle ABC: the lines each connecting a vertex of the original triangle with the opposite vertex of the Morley triangle concur at the point - -1st Morley–Taylor–Marr center = X(357) = sec(A/3) : sec(B/3) : sec(C/3). diff --git a/wiki/wikipedia/963.txt b/wiki/wikipedia/963.txt deleted file mode 100644 index 6691c0bd224e2dd8fb385ec55c4d6b02602b14cc..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/963.txt +++ /dev/null @@ -1,101 +0,0 @@ -In mathematical logic, the hypersequent framework is an extension of the proof-theoretical framework of sequent calculi used in structural proof theory to provide analytic calculi for logics which are not captured in the sequent framework. A hypersequent is usually taken to be a finite multiset of ordinary sequents, written -$$ -\Gamma_1 \Rightarrow \Delta_1 \mid \cdots \mid \Gamma_n \Rightarrow \Delta_n -$$ - -The sequents making up a hypersequent are called components.
The added expressivity of the hypersequent framework is provided by rules manipulating different components, such as the communication rule for intermediate logic LC (below left) or the modal splitting rule for modal logic S5 (below right): -$$ -\frac{\Gamma_1 \Rightarrow \Delta_1 \mid \dots \mid \Gamma_n \Rightarrow \Delta_n \mid \Sigma \Rightarrow A \qquad \Omega_1 \Rightarrow \Theta_1 \mid \dots \mid \Omega_m \Rightarrow \Theta_m \mid \Pi \Rightarrow B}{\Gamma_1 \Rightarrow \Delta_1 \mid \dots \mid \Gamma_n \Rightarrow \Delta_n \mid \Omega_1 \Rightarrow \Theta_1 \mid \dots \mid \Omega_m \Rightarrow \Theta_m \mid \Sigma \Rightarrow B \mid \Pi \Rightarrow A} -$$ $\frac{\Gamma_1 \Rightarrow \Delta_1 \mid \dots \mid \Gamma_n \Rightarrow \Delta_n \mid \Box \Sigma, \Theta \Rightarrow \Box \Pi, \Omega}{\Gamma_1 \Rightarrow \Delta_1 \mid \dots \mid \Gamma_n \Rightarrow \Delta_n \mid \Box \Sigma \Rightarrow \Box \Pi \mid \Theta \Rightarrow \Omega}$ - -Hypersequent calculi have been used to treat modal logics, intermediate logics, and substructural logics. Hypersequents usually have a formula interpretation, i.e., are interpreted by a formula in the object language, nearly always as some kind of disjunction. The precise formula interpretation depends on the considered logic. - -Formally, a hypersequent is usually taken to be a finite multiset of ordinary sequents, written -$$ -\Gamma_1 \Rightarrow \Delta_1 \mid \dots \mid \Gamma_n \Rightarrow \Delta_n -$$ - -The sequents making up a hypersequent consist of tuples of multisets of formulae, and are called the components of the hypersequent. Variants defining hypersequents and sequents in terms of sets or lists instead of multisets are also considered, and depending on the considered logic the sequents can be classical or intuitionistic. The rules for the propositional connectives usually are adaptations of the corresponding standard sequent rules with an additional side hypersequent, also called hypersequent context. E.g., a common set of rules for the functionally complete set of connectives $\{\bot,\to\}$ for classical propositional logic is given by the following four rules: -$$ -\frac{}{\mathcal{G} \mid \Gamma, p \Rightarrow p,\Delta} -$$ -$$ -\frac{}{\mathcal{G} \mid \Gamma, \bot \Rightarrow \Delta} -$$ -$$ -\frac{\mathcal{G} \mid \Gamma, B \Rightarrow \Delta \qquad \mathcal{G} \mid \Gamma \Rightarrow A, \Delta}{\mathcal{G}\mid \Gamma, A \to B \Rightarrow \Delta} -$$ -$$ -\frac{\mathcal{G} \mid \Gamma,A \Rightarrow B, \Delta}{\mathcal{G} \mid \Gamma \Rightarrow A \to B, \Delta} -$$ - -Due to the additional structure in the hypersequent setting, the structural rules are considered in their internal and external variants.
The internal weakening and internal contraction rules are the adaptations of the corresponding sequent rules with an added hypersequent context: -$$ -\frac{\mathcal{G} \mid \Gamma \Rightarrow \Delta}{\mathcal{G}\mid \Gamma, \Sigma \Rightarrow \Delta, \Pi} -$$ -$$ -\frac{\mathcal{G} \mid \Gamma, A, A \Rightarrow \Delta}{\mathcal{G} \mid \Gamma, A \Rightarrow \Delta} -$$ -$$ -\frac{\mathcal{G} \mid \Gamma \Rightarrow A, A, \Delta}{\mathcal{G} \mid \Gamma \Rightarrow A, \Delta} -$$ - -The external weakening and external contraction rules are the corresponding rules on the level of hypersequent components instead of formulae: -$$ -\frac{\mathcal{G}}{\mathcal{G} \mid \Gamma \Rightarrow\Delta} -$$ -$$ -\frac{\mathcal{G} \mid \Gamma \Rightarrow\Delta \mid \Gamma\Rightarrow\Delta}{\mathcal{G} \mid \Gamma\Rightarrow\Delta} -$$ - -Soundness of these rules is closely connected to the formula interpretation of the hypersequent structure, nearly always as some form of disjunction. The precise formula interpretation depends on the considered logic; see below for some examples. - -Hypersequents have been used to obtain analytic calculi for modal logics, for which analytic sequent calculi proved elusive. In the context of modal logics, the standard formula interpretation of a hypersequent -$$ -\Gamma_1 \Rightarrow \Delta_1 \mid \dots \mid \Gamma_n \Rightarrow \Delta_n -$$ - -is the formula -$$ -\Box (\bigwedge \Gamma_1 \to \bigvee \Delta_1) \lor \dots \lor \Box( \bigwedge\Gamma_n \to \bigvee\Delta_n) -$$ - -Here if $\Gamma$ is the multiset $A_1, \dots, A_n$ we write $\Box \Gamma$ for the result of prefixing every formula in $\Gamma$ with $\Box$, i.e., the multiset $\Box A_1, \dots, \Box A_n$. Note that the single components are interpreted using the standard formula interpretation for sequents, and the hypersequent bar $\mid$ is interpreted as a disjunction of boxes. The prime example of a modal logic for which hypersequents provide an analytic calculus is the logic S5, whose standard hypersequent calculus uses the modal splitting rule shown above. Hypersequent calculi have also been proposed for many other modal logics. - -Beyond modal logics, hypersequents have been used to obtain analytic calculi for many intermediate logics, substructural logics, and fuzzy logics. - -The hypersequent structure seems to have first appeared, under the name of cortege, in work on a calculus for the modal logic S5. It seems to have been developed independently several times, also for treating modal logics, and in influential later work where calculi for modal, intermediate and substructural logics are considered and the term hypersequent is introduced. diff --git a/wiki/wikipedia/964.txt b/wiki/wikipedia/964.txt deleted file mode 100644 index e8da802ddb2a6d930affbe4edc026fc2767d121a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/964.txt +++ /dev/null @@ -1,81 +0,0 @@ -rsync is a utility for efficiently transferring and synchronizing files between a computer and a storage drive and across networked computers by comparing the modification times and sizes of files. It is commonly found on Unix-like operating systems and is under the GPL-3.0-or-later license. - -Rsync is written in C as a single-threaded application. The rsync algorithm is a type of delta encoding, and is used for minimizing network usage. Zlib may be used for additional data compression. Andrew Tridgell discusses the design, implementation, and performance of rsync in chapters 3 through 5 of his Ph.D. thesis in 1999. It is currently maintained by Wayne Davison.
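The delta-encoding idea mentioned above can be seen in miniature below. This sketch is ours, not from the article or the rsync sources (the real implementation uses a different modulus and pairs the weak hash with a strong hash to confirm matches); it shows the O(1) "rolling" property of an Adler-32-style weak checksum, which is what lets rsync cheaply test for block matches at every byte offset:

```
# Simplified sketch of a weak rolling checksum in the style of rsync's.
M = 1 << 16

def weak_checksum(block):
    a = sum(block) % M
    b = sum((len(block) - i) * x for i, x in enumerate(block)) % M
    return a, b

def roll(a, b, out, inp, blocklen):
    # Slide the window one byte in O(1): drop `out`, append `inp`.
    a = (a - out + inp) % M
    b = (b - blocklen * out + a) % M
    return a, b

data = b"the quick brown fox jumps over the lazy dog"
n = 8
a, b = weak_checksum(data[:n])
for i in range(1, len(data) - n + 1):
    a, b = roll(a, b, data[i - 1], data[i + n - 1], n)
    assert (a, b) == weak_checksum(data[i:i + n])  # rolling == recomputed
print("rolled across", len(data) - n, "windows consistently")
```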
- -Because of the flexibility, speed, and scriptability of rsync, it has become a standard Linux utility, included in all popular Linux distributions. It has been ported to Windows (via Cygwin, Grsync, or SFU), FreeBSD, NetBSD, OpenBSD, and macOS. - -Similar to cp, rcp and scp, rsync requires the specification of a source and of a destination, of which at least one must be local. - -Generic syntax: - - - -rsync [OPTION] … SRC … [USER@]HOST:DEST - -rsync [OPTION] … [USER@]HOST:SRC [DEST] - - - -where SRC is the file or directory (or a list of multiple files and directories) to copy from, DEST is the file or directory to copy to, and square brackets indicate optional parameters. - -rsync can synchronize Unix clients to a central Unix server using rsync/ssh and standard Unix accounts. It can be used in desktop environments, for example to efficiently synchronize files with a backup copy on an external hard drive. A scheduling utility such as cron can carry out tasks such as automated encrypted rsync-based mirroring between multiple hosts and a central server. - -A command line to mirror FreeBSD might look like: - -$ rsync -avz --delete ftp4.de.FreeBSD.org::FreeBSD/ /pub/FreeBSD/ - -The Apache HTTP Server supports rsync only for updating mirrors. - -$ rsync -avz --delete --safe-links rsync.apache.org::apache-dist /path/to/mirror - -The preferred (and simplest) way to mirror a PuTTY website to the current directory is to use rsync. - -$ rsync -auH rsync://rsync.chiark.greenend.org.uk/ftp/users/sgtatham/putty-website-mirror/ . - -A way to mimic the capabilities of Time Machine (macOS); - -see also Time rsYnc Machine (tym). - - - -$ date=$(date "+%FT%H-%M-%S") # rsync interprets ":" as separator between host and port (i. e. host:port), so we cannot use %T or %H:%M:%S here, so we use %H-%M-%S - -$ rsync -aP --link-dest=$HOME/Backups/current /path/to/important_files $HOME/Backups/back-$date - -$ ln -nfs $HOME/Backups/back-$date $HOME/Backups/current - - - -Make a full backup of system root directory: - - - -$ rsync -avAXHS --progress --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} / /path/to/backup/folder - - - -Delete all files and directories, within a directory, extremely fast: - - - -# Make an empty directory somewhere, which is the first path, and the second path is the directory you want to empty. - -$ rsync -a --delete /path/to/empty/dir /path/to/dir/to/empty - - - -An rsync process operates by communicating with another rsync process, a sender and a receiver. At startup, an rsync client connects to a peer process. If the transfer is local (that is, between file systems mounted on the same host) the peer can be created with fork, after setting up suitable pipes for the connection. If a remote host is involved, rsync starts a process to handle the connection, typically Secure Shell. Upon connection, a command is issued to start an rsync process on the remote host, which uses the connection thus established. As an alternative, if the remote host runs an rsync daemon, rsync clients can connect by opening a socket on TCP port 873, possibly using a proxy. - -Rsync has numerous command line options and configuration files to specify alternative shells, options, commands, possibly with full path, and port numbers. Besides using remote shells, tunnelling can be used to have remote ports appear as local on the server where an rsync daemon runs. 
Those possibilities allow adjusting security levels to the state of the art, while a naive rsync daemon can be enough for a local network. - -By default, rsync determines which files differ between the sending and receiving systems by checking the modification time and size of each file. If time or size is different between the systems, it transfers the file from the sending to the receiving system. As this only requires reading file directory information, it is quick, but it will miss unusual modifications which change neither. The librsync library is used by Dropbox, rdiff-backup, duplicity, and other utilities. - -The acrosync library is an independent, cross-platform implementation of the rsync network protocol. Unlike librsync, it is wire-compatible with rsync (protocol version 29 or 30). It is released under the Reciprocal Public License and used by the commercial rsync software Acrosync. - -Duplicity is a variation on rdiff-backup that allows for backups without cooperation from the storage server, as with simple storage services like Amazon S3. It works by generating the hashes for each block in advance, encrypting them, and storing them on the server. It then retrieves them when doing an incremental backup. The rest of the data is also stored encrypted for security purposes. - -As of macOS 10.5 and later, there is a special -E or --extended-attributes switch which allows retaining much of the HFS file metadata when syncing between two machines supporting this feature. This is achieved by transmitting the Resource Fork along with the Data Fork. - -zsync is an rsync-like tool optimized for many downloads per file version. zsync is used by Linux distributions such as Ubuntu for distributing fast-changing beta ISO image files. zsync uses the HTTP protocol and .zsync files with pre-calculated rolling hash to minimize server load yet permit diff transfer for network optimization. - -Rclone is an open-source tool inspired by rsync that focuses on cloud and other high latency storage. It supports more than 50 different providers and provides an rsync-like interface for cloud storage. diff --git a/wiki/wikipedia/965.txt b/wiki/wikipedia/965.txt deleted file mode 100644 index 13c67a585c49449ae1ff3b24235ab5585fe311eb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/965.txt +++ /dev/null @@ -1,59 +0,0 @@ -In mathematics, in the field of harmonic analysis, - -the van der Corput lemma is an estimate for oscillatory integrals - -named after the Dutch mathematician J. G. van der Corput. - -The following result - -is stated by E. Stein: - -Suppose that a real-valued function $\phi(x)$ is smooth in an open interval $(a,b)$, - -and that $|\phi^{(k)}(x)|\ge 1$ for all $x\in (a,b)$. - -Assume that either $k\ge 2$, or that -$$ -k=1 -$$ and $\phi'(x)$ is monotone for $x\in(a,b)$. - -There is a constant $c_k$, which does not depend on $\phi$, - -such that - - - -\Big|\int_a^b e^{i\lambda\phi(x)}\,dx\Big|\le c_k\lambda^{-1/k}, - - - -for any $\lambda>0$.
- -There is a constant $c_k$, which does not depend on $\phi$, - -such that - -for any $\epsilon\ge 0$ - -the measure of the sublevel set -$$ -\{x\in I:|\phi(x)|\le\epsilon\} -$$ - -is bounded by $c_k\epsilon^{1/k}$. diff --git a/wiki/wikipedia/966.txt b/wiki/wikipedia/966.txt deleted file mode 100644 index 75d97e708e9222f1eb2ac5d1548cd137bc79693b..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/966.txt +++ /dev/null @@ -1,17 +0,0 @@ -In combinatorial mathematics, Toida's conjecture, due to Shunichi Toida in 1977, is a refinement of the disproven Ádám's conjecture from 1967. - -Both conjectures concern circulant graphs. These are graphs defined from a positive integer $n$ and a set $S$ of positive integers. - -Their vertices can be identified with the numbers from 0 to $n-1$, and two vertices $i$ and $j$ are connected by an edge - -whenever their difference modulo $n$ belongs to set $S$. Every symmetry of the cyclic group of addition modulo $n$ - -gives rise to a symmetry of the $n$-vertex circulant graphs, and Ádám conjectured (incorrectly) that these are the only symmetries of the circulant graphs. - -However, the known counterexamples to Ádám's conjecture involve sets $S$ in which some elements share non-trivial divisors with $n$. - -Toida's conjecture states that, when every member of $S$ is relatively prime to $n$, then the only symmetries of the circulant graph for $n$ and $S$ are symmetries coming from the underlying cyclic group. - -The conjecture was proven in the special case where n is a prime power by Klin and Poschel in 1978, and by Golfand, Najmark, and Poschel in 1984. - -The conjecture was then fully proven by Muzychuk, Klin, and Poschel in 2001 by using the method of Schur rings, and simultaneously by Dobson and Morris in 2002 by using the classification of finite simple groups. diff --git a/wiki/wikipedia/967.txt b/wiki/wikipedia/967.txt deleted file mode 100644 index dac803762a3b76b5f9da5941d606f75931593008..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/967.txt +++ /dev/null @@ -1,27 +0,0 @@ -In mathematics, Lafforgue's theorem, due to Laurent Lafforgue, completes the Langlands program for general linear groups over algebraic function fields, by giving a correspondence between automorphic forms on these groups and representations of Galois groups. - -The Langlands conjectures were introduced by Langlands and describe a correspondence between representations of the Weil group of an algebraic function field and representations of algebraic groups over the function field, generalizing class field theory of function fields from abelian Galois groups to non-abelian Galois groups. - -The Langlands conjectures for GL1(K) follow from (and are essentially equivalent to) class field theory. More precisely, the Artin map gives a map from the idele class group to the abelianization of the Weil group. - -The representations of GLn(F) appearing in the Langlands correspondence are automorphic representations. - -Here F is a global field of some positive characteristic p, and ℓ is some prime not equal to p. - -Lafforgue's theorem states that there is a bijection σ between: - -*Equivalence classes of cuspidal representations π of GLn(F), and - -*Equivalence classes of irreducible ℓ-adic representations σ(π) of dimension n of the absolute Galois group of F - -that preserves the L-function at every place of F. - -The proof of Lafforgue's theorem involves constructing a representation σ(π) of the absolute Galois group for each cuspidal representation π.
The idea of doing this is to look in the ℓ-adic cohomology of the moduli stack of shtukas of rank n that have compatible level N structures for all N. The cohomology contains subquotients of the form - -π⊗σ(π)⊗σ(π) - -which can be used to construct σ(π) from π. A major problem is that the moduli stack is not of finite type, which means that there are formidable technical difficulties in studying its cohomology. - -Lafforgue's theorem implies the Ramanujan–Petersson conjecture that if an automorphic form for GLn(F) has central character of finite order, then the corresponding Hecke eigenvalues at every unramified place have absolute value 1. - -Lafforgue's theorem implies the conjecture of Deligne that an irreducible finite-dimensional l-adic representation of the absolute Galois group with determinant character of finite order is pure of weight 0. diff --git a/wiki/wikipedia/968.txt b/wiki/wikipedia/968.txt deleted file mode 100644 index 0d051677cb294becd47d6277cff0875aa0a5df75..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/968.txt +++ /dev/null @@ -1,17 +0,0 @@ -In graph theory, a nested triangles graph with n vertices is a planar graph formed from a sequence of n/3 triangles, by connecting pairs of corresponding vertices on consecutive triangles in the sequence. It can also be formed geometrically, by gluing together n/3 − 1 triangular prisms on their triangular faces. - -This graph, and graphs closely related to it, have been frequently used in graph drawing to prove lower bounds on the area requirements of various styles of drawings. - -The nested triangles graph with two triangles is the graph of the triangular prism, and the nested triangles graph with three triangles is the graph of the triangular bifrustum. - -More generally, because the nested triangles graphs are planar and 3-vertex-connected, it follows from Steinitz's theorem that they all can be represented as convex polyhedra. - -An alternative geometric representation of these graphs may be given by gluing triangular prisms end-to-end on their triangular faces; the number of nested triangles is one more than the number of glued prisms. However, using right prisms, this gluing process will cause the rectangular faces of adjacent prisms to be coplanar, so the result will not be strictly convex. - -The nested triangles graph was named by Dolev, who used it to show that drawing an n-vertex planar graph in the integer lattice (with straight line-segment edges) may require a bounding box of size at least n/3 × n/3. In such a drawing, no matter which face of the graph is chosen to be the outer face, some subsequence of at least n/6 of the triangles must be drawn nested within each other, and within this part of the drawing each triangle must use two rows and two columns more than the next inner triangle. If the outer face is not allowed to be chosen as part of the drawing algorithm, but is specified as part of the input, the same argument shows that a bounding box of size 2n/3 × 2n/3 is necessary, and a drawing with these dimensions exists. - -For drawings in which the outer face may be freely chosen, the area lower bound of Dolev may not be tight. - -Frati showed that this graph, and any graph formed by adding diagonals to its quadrilaterals, can be drawn within a box of dimensions n/3 × 2n/3. When no extra diagonals are added the nested triangles graph itself can be drawn in even smaller area, approximately n/3 × n/2, as shown. 
Closing the gap between the $2n^2/9$ upper bound and the $n^2/9$ lower bound on drawing area for completions of the nested triangle graph remains an open problem. - -Variants of the nested triangles graph have been used for many other lower bound constructions in graph drawing, for instance on area of rectangular visibility representations, area of drawings with right angle crossings or relative area of planar versus nonplanar drawings. diff --git a/wiki/wikipedia/969.txt b/wiki/wikipedia/969.txt deleted file mode 100644 index 1d9251da2e8b4482e35d65acdde7629bc71fa231..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/969.txt +++ /dev/null @@ -1 +0,0 @@ -#redirect Kefeng Liu#Mariño–Vafa conjecture and string duality diff --git a/wiki/wikipedia/97.txt b/wiki/wikipedia/97.txt deleted file mode 100644 index f5ae998a71215b91d4dd48ca71d68b13b5794384..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/97.txt +++ /dev/null @@ -1,69 +0,0 @@ -(Figure: the commutative diagram used in the proof of the five lemma.) - -In mathematics, and especially in category theory, a commutative diagram is a diagram such that all directed paths in the diagram with the same start and endpoints lead to the same result. It is said that commutative diagrams play the role in category theory that equations play in algebra (see Barr). - -A commutative diagram often consists of three parts: - -* objects (also known as vertices) - -* morphisms (also known as arrows or edges) - -* paths or composites - -In algebra texts, the type of morphism can be denoted with different arrow usages: - -* A monomorphism (injective homomorphism) may be labeled with a $\hookrightarrow$. - -* An epimorphism (surjective homomorphism) may be labeled with a $\twoheadrightarrow$. - -* An isomorphism (bijective homomorphism) may be labeled with a $\overset{\sim}{\rightarrow}$. - -* The dashed arrow typically represents the claim that the indicated morphism exists (whenever the rest of the diagram holds); the arrow may be optionally labeled as $\exists$. - -** If the morphism is in addition unique, then the dashed arrow may be labeled $!$ or $\exists!$. - -Commutativity makes sense for a polygon of any finite number of sides (including just 1 or 2), and a diagram is commutative if every polygonal subdiagram is commutative. - -Note that a diagram may be non-commutative, i.e., the composition of different paths in the diagram may not give the same result. - -Phrases like "this commutative diagram" or "the diagram commutes" may be used. - -In the left diagram, which expresses the first isomorphism theorem, commutativity of the triangle means that $f = \tilde{f} \circ \pi$. In the right diagram, commutativity of the square means $h \circ f = k \circ g$. - -In order for the diagram below to commute, three equalities must be satisfied: - -# $r \circ h \circ g = H \circ G \circ l$ - -# $m \circ g = G \circ l$ - -# $r \circ h = H \circ m$ - -Here, since the first equality follows from the last two, it suffices to show that (2) and (3) are true in order for the diagram to commute. However, since equality (3) generally does not follow from the other two, it is generally not enough to have only equalities (1) and (2) if one were to show that the diagram commutes. - -Diagram chasing (also called diagrammatic search) is a method of mathematical proof used especially in homological algebra, where one establishes a property of some morphism by tracing the elements of a commutative diagram.
A proof by diagram chasing typically involves the formal use of the properties of the diagram, such as injective or surjective maps, or exact sequences. A syllogism is constructed, for which the graphical display of the diagram is just a visual aid. It follows that one ends up "chasing" elements around the diagram, until the desired element or result is constructed or verified. - -Examples of proofs by diagram chasing include those typically given for the five lemma, the snake lemma, the zig-zag lemma, and the nine lemma. - -In higher category theory, one considers not only objects and arrows, but arrows between the arrows, arrows between arrows between arrows, and so on ad infinitum. For example, the category of small categories Cat is naturally a 2-category, with functors as its arrows and natural transformations as the arrows between functors. In this setting, commutative diagrams may include these higher arrows as well, which are often depicted in the following style: $\Rightarrow$. For example, the following (somewhat trivial) diagram depicts two categories C and D, together with two functors F, G : C → D and a natural transformation α : F ⇒ G. - -There are two kinds of composition in a 2-category (called vertical composition and horizontal composition), and they may also be depicted via pasting diagrams (see 2-category#Definition for examples). - -A commutative diagram in a category C can be interpreted as a functor from an index category J to C; one calls the functor a diagram. - -More formally, a commutative diagram is a visualization of a diagram indexed by a poset category. Such a diagram typically includes: - -* a node for every object in the index category, - -* an arrow for a generating set of morphisms (omitting identity maps and morphisms that can be expressed as compositions), - -* the commutativity of the diagram (the equality of different compositions of maps between two objects), corresponding to the uniqueness of a map between two objects in a poset category. - -Conversely, given a commutative diagram, it defines a poset category, where: - -* the objects are the nodes, - -* there is a morphism between any two objects if and only if there is a (directed) path between the nodes, - -* with the relation that this morphism is unique (any composition of maps is defined by its domain and target: this is the commutativity axiom). - -However, not every diagram commutes (the notion of diagram strictly generalizes commutative diagram). As a simple example, the diagram of a single object with an endomorphism ($f\colon X \to X$), or with two parallel arrows ($\bullet \rightrightarrows \bullet$, that is, $f,g\colon X \to Y$, sometimes called the free quiver), as used in the definition of equalizer, need not commute. Further, diagrams may be messy or impossible to draw, when the number of objects or morphisms is large (or even infinite). diff --git a/wiki/wikipedia/970.txt b/wiki/wikipedia/970.txt deleted file mode 100644 index 0d051677cb294becd47d6277cff0875aa0a5df75..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/970.txt +++ /dev/null @@ -1,53 +0,0 @@ -In computational complexity theory, a gap reduction is a reduction to a particular type of decision problem, known as a c-gap problem. Such reductions provide information about the hardness of approximating solutions to optimization problems.
In short, a gap problem refers to one wherein the objective is to distinguish cases where the best solution is above one threshold from cases where the best solution is below another threshold, such that the two thresholds have a gap in between. Gap reductions can be used to demonstrate inapproximability results, as if a problem may be approximated to a better factor than the size of the gap, then the approximation algorithm can be used to solve the corresponding gap problem. - -We define a c-gap problem as follows: given an optimization (maximization or minimization) problem P, the equivalent c-gap problem distinguishes between two cases, for an input k and an instance x of problem P: -$$ -\text{OPT}_P(x) \le k -$$. Here, the best solution to instance x of problem P has a cost, or score, of at most k. -$$ -\text{OPT}_P(x) > c\cdot k -$$. Here, the best solution to instance x of problem P has a cost above c⋅k. The gap between the two thresholds is thus c. - -Note that whenever OPT falls between the thresholds, there is no requirement on what the output should be. A valid algorithm for the c-gap problem may answer anything if OPT is in the middle of the gap. The value c does not need to be constant; it can depend on the size of the instance of P. Note that c-approximating the solution to an instance of P is at least as hard as solving the c-gap version of P. - -One can define an (a,b)-gap problem similarly. The difference is that the thresholds do not depend on the input; instead, the lower threshold is a and the upper threshold is b. - -A gap-producing reduction is a reduction from an optimization problem to a c-gap problem, so that solving the c-gap problem quickly would enable solving the optimization problem quickly. The term gap-producing arises from the nature of the reduction: the optimal solution in the optimization problem maps to the opposite side of the gap from every other solution via reduction. Thus, a gap is produced between the optimal solution and every other solution. - -A simple example of a gap-producing reduction is the nonmetric Traveling Salesman problem (i.e. where the graph's edge costs need not satisfy the conditions of a metric). We can reduce from the Hamiltonian path problem on a given graph G = (V, E) to this problem as follows: we construct a complete graph G' = (V, E'), for the traveling salesman problem. For each edge e ∈ G', we let the cost of traversing it be 1 if e is in the original graph G and ∞ otherwise. A Hamiltonian path in the original graph G exists if and only if there exists a traveling salesman solution with weight (|V|-1). However, if no such Hamiltonian path exists, then the best traveling salesman tour must have weight at least |V|. Thus, Hamiltonian Path reduces to |V|/(|V|-1)-gap nonmetric traveling salesman. - -A gap-preserving reduction is a reduction from a c-gap problem to a c'-gap problem. More specifically, we are given an instance x of a problem A with |x| = n and want to reduce it to an instance x' of a problem B with |x'| = n'. A gap-preserving reduction from A to B is a set of functions (k(n), k'(n'), c(n), c'(n')) such that - -For minimization problems: - -OPTA(x) ≤ k ⇒ OPTB(x') ≤ k', and - -OPTA(x) ≥ c⋅k ⇒ OPTB(x') ≥ c'⋅k' - -For maximization problems: - -OPTA(x) ≥ k ⇒ OPTB(x') ≥ k', and - -OPTA(x) ≤ k/c ⇒ OPTB(x') ≤ k'/c' - -If c' > c, then this is a gap-amplifying reduction.
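The Hamiltonian-path example above is mechanical enough to sketch in code. The following is illustrative only (the names `build_tsp_costs` and `INF` are ours): it builds the gap-TSP cost matrix and brute-forces the cheapest Hamiltonian path under it.

```
# Sketch of the gap-producing reduction from Hamiltonian path to
# nonmetric TSP: edges of G cost 1, all other pairs cost "infinity".
from itertools import permutations

INF = float("inf")

def build_tsp_costs(n, edges):
    cost = [[INF] * n for _ in range(n)]
    for u, v in edges:
        cost[u][v] = cost[v][u] = 1
    return cost

# G is the path 0-1-2-3, so it has a Hamiltonian path.
n = 4
cost = build_tsp_costs(n, [(0, 1), (1, 2), (2, 3)])

# Brute-force the cheapest Hamiltonian path under these costs.
best = min(sum(cost[p[i]][p[i + 1]] for i in range(n - 1))
           for p in permutations(range(n)))
print(best)  # 3 == |V| - 1, witnessing the Hamiltonian path
```

If the input graph had no Hamiltonian path, every permutation would cross at least one INF edge, matching the "weight at least |V|" side of the gap described above.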
- -An example is MAX E3SAT, a form of the Boolean satisfiability problem (SAT), where each clause contains three distinct literals and we want to maximize the number of clauses satisfied. - -Håstad showed that the (1/2+ε, 1-ε)-gap version of a similar problem, MAX E3-X(N)OR-SAT, is NP-hard. The MAX E3-X(N)OR-SAT problem is a form of SAT where each clause is the XOR of three distinct literals, exactly one of which is negated. We can reduce from MAX E3-X(N)OR-SAT to MAX E3SAT as follows: - -A clause xi ⊕ xj ⊕ xk = 1 is converted to (xi ∨ xj ∨ xk) ∧ (¬xi ∨ ¬xj ∨ xk) ∧ (¬xi ∨ xj ∨ ¬xk) ∧ (xi ∨ ¬xj ∨ ¬xk) - -A clause xi ⊕ xj ⊕ xk = 0 is converted to (¬xi ∨ ¬xj ∨ ¬xk) ∧ (¬xi ∨ xj ∨ xk) ∧ (xi ∨ ¬xj ∨ xk) ∧ (xi ∨ xj ∨ ¬xk) - -If a clause is not satisfied in the original instance of MAX E3-X(N)OR-SAT, then at most three of the four corresponding clauses in our MAX E3SAT instance can be satisfied. Using a gap argument, it follows that a YES instance of the problem has at least a (1-ε) fraction of the clauses satisfied, while a NO instance of the problem has at most a (1/2+ε)(1) + (1/2-ε)(3/4) = (7/8 + ε/4)-fraction of the clauses satisfied. Thus, it follows that (7/8 + ε, 1 - ε)-gap MAX E3SAT is NP-hard. Note that this bound is tight, as a random assignment of variables gives an expected 7/8 fraction of satisfied clauses. - -The label cover problem is defined as follows: given a bipartite graph G = (A∪B, E), with - -A = A1 ∪ A2 ∪ ... ∪ Ak, |A| = n, and |Ai| = n/k - -B = B1 ∪ B2 ∪ ... ∪ Bk, |B| = n, and |Bi| = n/k - -We define a "superedge" between Ai and Bj if at least one edge exists from Ai to Bj in G, and define the superedge to be covered if at least one edge from Ai to Bj is covered. - -In the max-rep version of the problem, we are allowed to choose one vertex from each Ai and each Bi, and we aim to maximize the number of covered superedges. In the min-rep version, we are required to cover every superedge in the graph, and want to minimize the number of vertices we choose. Manurangsi and Moshkovitz show that the $(O(n^{1/4}), 1)$-gap version of both problems is solvable in polynomial time. diff --git a/wiki/wikipedia/971.txt b/wiki/wikipedia/971.txt deleted file mode 100644 index af99d12b490915e47147d34739f4612d1a6d2684..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/971.txt +++ /dev/null @@ -1,2026 +0,0 @@ -This article lists mathematical properties and laws of sets, involving the set-theoretic operations of union, intersection, and complementation and the relations of set equality and set inclusion. It also provides systematic procedures for evaluating expressions, and performing calculations, involving these operations and relations. - -The binary operations of set union ($\cup$) and intersection ($\cap$) satisfy many identities. Several of these identities or "laws" have well established names. - -Throughout this article, capital letters such as $A, B, C, L, M, R, S,$ and $X$ will denote sets and $\wp(X)$ will denote the power set of $X.$ - -Where needed, unless indicated otherwise, it should be assumed that $X$ denotes the universe set, which means that all sets that are used in the formula are subsets of $X.$ - -In particular, the complement of a set $L$ will be denoted by $L^C$ where unless indicated otherwise, it should be assumed that $L^C$ denotes the complement of $L$ in (the universe) $X.$ - -Typically, the set $L$ will denote the Left most set, $M$ the Middle set, and $R$ the Right most set.
- -For sets $L$ and $R,$ define: - -\begin{alignat}{4} - -L \cup R &&~:=~ \{~ x ~:~ x \in L &&\text{ or } && x \in R ~\} \\ - -L \cap R &&~:=~ \{~ x ~:~ x \in L &&\text{ and } && x \in R ~\} \\ - -L \setminus R &&~:=~ \{~ x ~:~ x \in L &&\text{ and } && x \notin R ~\} \\ - -\end{alignat} - -and - -L \triangle R ~:=~ \{~ x ~:~ x \text{ belongs to exactly one of } L \text{ and } R ~\}. - -The symmetric difference of $L$ and $R$ is: - -\begin{alignat}{4} - -L \triangle R - -~&=~ (L ~\setminus~ &&R) ~\cup~ &&(R ~\setminus~ &&L) \\ - -~&=~ (L ~\cup~ &&R) ~\setminus~ &&(L ~\cap~ &&R). - -\end{alignat} - -If $L$ is a set that is understood (say from context, or because it is clearly stated) to be a subset of some other set $X$ then the complement of a set $L$ may be denoted by: - -L^{\operatorname{C}} ~:=~ X \setminus L. - -The definition of $L^{\operatorname{C}} = X \setminus L$ may depend on context. For instance, had $L$ been declared as a subset of $Y,$ with the sets $Y$ and $X$ not necessarily related to each other in any way, then $L^{\operatorname{C}}$ would likely mean $Y \setminus L$ instead of $X \setminus L.$ - -A family $\Phi$ of subsets of a set $X$ is said to be an algebra of sets if $\varnothing \in \Phi$ and for all $L, R \in \Phi,$ all three of the sets $X \setminus R, L \cap R,$ and $L \cup R$ are elements of $\Phi.$ - -The article on this topic lists set identities and other relationships involving these three operations. - -Every algebra of sets is also a ring of sets. Given any family $\mathcal{S}$ of subsets of $X,$ there is a unique smallest algebra of sets in $X$ containing $\mathcal{S},$ called the algebra generated by $\mathcal{S}.$ If $\mathcal{S}_1$ denotes the family of all finite intersections of sets in $\mathcal{S}$ and of complements of sets in $\mathcal{S}$ (together with $X$ and $\varnothing$), then the algebra generated by $\mathcal{S}$ is the set $\Phi_{\mathcal{S}}$ consisting of all possible finite unions of sets in $\mathcal{S}_1.$
Assume $L \subseteq X.$ - -Identity: - -\begin{alignat}{10} - -L \cap X &=&& L ~~~~\text{ where } L \subseteq X \\[1.4ex] - -L \cup \varnothing &=&& L \\[1.4ex] - -L \triangle \varnothing &=&& L \\[1.4ex] - -L \setminus \varnothing &=&& L \\[1.4ex] - -X \cap L &=&& L ~~~~\text{ where } L \subseteq X \\[1.4ex] - -\varnothing \cup L &=&& L \\[1.4ex] - -\varnothing \triangle L &=&& L \\[1.4ex] - -\end{alignat} - -but - -\varnothing \setminus L = \varnothing - -so \varnothing \setminus L = L \text{ if and only if } L = \varnothing. - -Idempotence and Nilpotence: - -\begin{alignat}{10} - -L \cup L &=&& L && \quad \text{ (Idempotence)} \\[1.4ex] - -L \cap L &=&& L && \quad \text{ (Idempotence)} \\[1.4ex] - -L \triangle L &=&& \varnothing && \quad \text{ (Nilpotence of index 2)} \\[1.4ex] - -L \setminus L &=&& \varnothing && \quad \text{ (Nilpotence of index 2)} \\[1.4ex] - -\end{alignat} - -Domination: - -\begin{alignat}{10} - -X \cup L &=&& X ~~~~\text{ where } L \subseteq X \\[1.4ex] - -\varnothing \cap L &=&& \varnothing \\[1.4ex] - -\varnothing \times L &=&& \varnothing \\[1.4ex] - -\varnothing \setminus L &=&& \varnothing \\[1.4ex] - -L \cup X &=&& X ~~~~\text{ where } L \subseteq X \\[1.4ex] - -L \cap \varnothing &=&& \varnothing \\[1.4ex] - -L \times \varnothing &=&& \varnothing \\[1.4ex] - -\end{alignat} - -but - -L \setminus \varnothing = L - -so L \setminus \varnothing = \varnothing \text{ if and only if } L = \varnothing. - -Double complement or involution law: - -\begin{alignat}{10} - -X \setminus (X \setminus L) - -&= L - -&&\qquad\text{ Also written }\quad - -&&\left(L^C\right)^C = L - -&&\quad&&\text{ where } L \subseteq X \quad - -\text{ (Double complement/Involution law)} \\[1.4ex] - -\end{alignat} - -L \setminus \varnothing = L - -\begin{alignat}{4} - -\varnothing - -&= L &&\setminus L \\ - -&= \varnothing &&\setminus L \\ - -&= L &&\setminus X ~~~~\text{ where } L \subseteq X \\ - -\end{alignat} - -L^C = X \setminus L \quad \text{ (definition of notation)} - -\begin{alignat}{10} - -L \cup (X \setminus L) - -&= X - -&&\qquad\text{ Also written }\quad - -&&L \cup L^C = X - -&&\quad&&\text{ where } L \subseteq X - -\\[1.4ex] - -L \triangle (X \setminus L) - -&= X - -&&\qquad\text{ Also written }\quad - -&&L \triangle L^C = X - -&&\quad&&\text{ where } L \subseteq X - -\\[1.4ex] - -L \cap (X \setminus L) - -&= \varnothing - -&&\qquad\text{ Also written }\quad - -&&L \cap L^C = \varnothing - -&&\quad&& - -\\[1.4ex] - -\end{alignat} - -\begin{alignat}{10} - -X \setminus \varnothing - -&= X - -&&\qquad\text{ Also written }\quad - -&&\varnothing^C = X - -&&\quad&&\text{ (Complement laws for the empty set)} - -\\[1.4ex] - -X \setminus X - -&= \varnothing - -&&\qquad\text{ Also written }\quad - -&&X^C = \varnothing - -&&\quad&&\text{ (Complement laws for the universe set)} - -\\[1.4ex] - -\end{alignat} - -Other properties: - -If $L$ is any set then the following are equivalent:
1. $L$ is not empty ($L \neq \varnothing$), meaning: $\lnot [\forall x (x \not\in L)]$
2. (In classical mathematics) $L$ is inhabited, meaning: $\exists x (x \in L)$
   * In constructive mathematics, "not empty" and "inhabited" are not equivalent: every inhabited set is not empty but the converse is not always guaranteed; that is, in constructive mathematics, a set $L$ that is not empty (where by definition, "$L$ is empty" means that the statement $\forall x (x \not\in L)$ is true) might not have an inhabitant (which is an $x$ such that $x \in L$).
3. $L \not\subseteq R$ for some set $R$
    - -If $L$ is any set then the following are equivalent: - -
      - -
1. $L$ is empty ($L = \varnothing$), meaning: $\forall x (x \not\in L)$
2. $L \cup R \subseteq R$ for every set $R$
3. $L \subseteq R$ for every set $R$
4. $L \subseteq R \setminus L$ for some/every set $R$
5. $\varnothing \setminus L = L$
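Equivalences and identities like those above (and throughout this article) lend themselves to mechanical spot-checking on small random sets. A minimal sketch, using only the Python standard library (the helper names are ours):

```
# Randomized spot-check of a few identities/equivalences from this article.
from random import randint, sample, seed

U = set(range(6))                                # a small universe X
def rand_subset():
    return set(sample(sorted(U), randint(0, len(U))))

def symdiff(L, R):                               # L triangle R
    return (L - R) | (R - L)

seed(0)
for _ in range(1000):
    L, R = rand_subset(), rand_subset()
    assert symdiff(L, R) == (L | R) - (L & R)    # L△R = (L∪R)∖(L∩R)
    assert U - (L & R) == (U - L) | (U - R)      # De Morgan's law
    assert (L <= R) == ((L & R) == L) == ((L | R) == R)  # inclusion tests
print("all identities verified on random samples")
```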
- -In the left hand sides of the following identities, $L$ is the Left most set and $R$ is the Right most set. Whenever necessary, both $L \text{ and } R$ should be assumed to be subsets of some universe set $X,$ so that $L^C := X \setminus L \text{ and } R^C := X \setminus R.$ - -\begin{alignat}{9} - -L \cap R - -&= L &&\setminus &&(L &&\setminus &&R) \\ - -&= R &&\setminus &&(R &&\setminus &&L) \\ - -&= L &&\setminus &&(L &&\triangle &&R) \\ - -&= L &&\triangle &&(L &&\setminus &&R) \\ - -\end{alignat} - -\begin{alignat}{9} - -L \cup R - -&= (&&L \triangle R) &&\cup && &&L && && \\ - -&= (&&L \triangle R) &&\triangle &&(&&L &&\cap &&R) \\ - -&= (&&R \setminus L) &&\cup && &&L && && ~~~~~\text{ (union is disjoint)} \\ - -\end{alignat} - -\begin{alignat}{9} - -L \triangle R - -&= &&R \triangle L && && && && \\ - -&= (&&L \cup R) &&\setminus &&(&&L \cap R) && \\ - -&= (&&L \setminus R) &&\cup &&(&&R \setminus L) && ~~~~~\text{ (union is disjoint)} \\ - -&= (&&L \triangle M) &&\triangle &&(&&M \triangle R) && ~~~~~\text{ where } M \text{ is an arbitrary set. } \\ - -&= (&&L^C) &&\triangle &&(&&R^C) && \\ - -\end{alignat} - -\begin{alignat}{9} - -L \setminus R - -&= &&L &&\setminus &&(L &&\cap &&R) \\ - -&= &&L &&\cap &&(L &&\triangle &&R) \\ - -&= &&L &&\triangle &&(L &&\cap &&R) \\ - -&= &&R &&\triangle &&(L &&\cup &&R) \\ - -\end{alignat} - -De Morgan's laws: - -For $L, R \subseteq X:$ - -\begin{alignat}{10} - -X \setminus (L \cap R) - -&= (X \setminus L) \cup (X \setminus R) - -&&\qquad\text{ Also written }\quad - -&&(L \cap R)^C = L^C \cup R^C - -&&\quad&&\text{ (De Morgan's law)} - -\\[1.4ex] - -X \setminus (L \cup R) - -&= (X \setminus L) \cap (X \setminus R) - -&&\qquad\text{ Also written }\quad - -&&(L \cup R)^C = L^C \cap R^C - -&&\quad&&\text{ (De Morgan's law)} - -\\[1.4ex] - -\end{alignat} - -Absorption laws: - -\begin{alignat}{4} - -L \cup (L \cap R) &=&& L && \quad \text{ (Absorption)} \\[1.4ex] - -L \cap (L \cup R) &=&& L && \quad \text{ (Absorption)} \\[1.4ex] - -\end{alignat} - -Commutativity: - -\begin{alignat}{10} - -L \cup R &=&& R \cup L && \quad \text{ (Commutativity)} \\[1.4ex] - -L \cap R &=&& R \cap L && \quad \text{ (Commutativity)} \\[1.4ex] - -L \triangle R &=&& R \triangle L && \quad \text{ (Commutativity)} \\[1.4ex] - -\end{alignat} - -Set subtraction is not commutative. However, the commutativity of set subtraction can be characterized: from $(L \setminus R) \cap (R \setminus L) = \varnothing$ it follows that: - -L \setminus R = R \setminus L \quad \text{ if and only if } \quad L = R. - -Said differently, if distinct symbols always represented distinct sets, then the only true formulas of the form $\cdot \setminus \cdot = \cdot \setminus \cdot$ that could be written would be those involving a single symbol; that is, those of the form: $S \setminus S = S \setminus S.$ - -But such formulas are necessarily true for every binary operation $\ast$ (because $x \ast x = x \ast x$ must hold by definition of equality), and so in this sense, set subtraction is as diametrically opposite to being commutative as is possible for a binary operation.
- -Set subtraction is also neither left alternative nor right alternative; instead, $(L \setminus L) \setminus R = L \setminus (L \setminus R)$ if and only if $L \cap R = \varnothing$ if and only if $(R \setminus L) \setminus L = R \setminus (L \setminus L).$ - -Set subtraction is quasi-commutative and satisfies the Jordan identity. - -Other properties - -\begin{alignat}{10} - -L \setminus R - -&= L \cap (X \setminus R) - -&&\qquad\text{ Also written }\quad - -&&L \setminus R = L \cap R^C - -&&\quad&&\text{ where } L, R \subseteq X - -\\[1.4ex] - -X \setminus (L \setminus R) - -&= (X \setminus L) \cup R - -&&\qquad\text{ Also written }\quad - -&&(L \setminus R)^C = L^C \cup R - -&&\quad&&\text{ where } R \subseteq X - -\\[1.4ex] - -L \setminus R - -&= (X \setminus R) \setminus (X \setminus L) - -&&\qquad\text{ Also written }\quad - -&&L \setminus R = R^C \setminus L^C - -&&\quad&&\text{ where } L, R \subseteq X - -\\[1.4ex] - -\end{alignat} - -\text{If } L \cup R = X \text{ and } L \cap R = \varnothing \text{ then } R = X \setminus L \qquad \text{ (Uniqueness of complements)} - -
      - -
* Given any $x,$ $\quad x \not\in L \setminus R \quad \text{ if and only if } \quad x \in L \cap R \text{ or } x \not\in L.$
* If $L \cap R = \varnothing$ then $L = R \text{ if and only if } L = \varnothing = R.$
* If $L \subseteq R$ then $L \triangle R = R \setminus L.$
    - -The following statements are equivalent: - -
      - -
1. $L = R$
2. $L \triangle R = \varnothing$
3. $L \setminus R = R \setminus L$
    - -The following are equivalent for any $L, R \subseteq X:$ - -
      - -
1. $L \subseteq R$
2. $L \cap R = L$
3. $L \cup R = R$
4. $L \triangle R = R \setminus L$
5. $L \triangle R \subseteq R \setminus L$
6. $L \setminus R = \varnothing$
7. $X \setminus R \subseteq X \setminus L \qquad$ (that is, $R^C \subseteq L^C$)
    - -Inclusion is a partial order: - -Explicitly, this means that inclusion $\subseteq,$ which is a binary operation, has the following three properties: - -
      - -
* Reflexivity: $L \subseteq L$
* Antisymmetry: $(L \subseteq R \text{ and } R \subseteq L) \text{ if and only if } L = R$
* Transitivity: $\text{If } L \subseteq M \text{ and } M \subseteq R \text{ then } L \subseteq R$
    - -The following proposition says that for any set $S,$ the power set of $S,$ ordered by inclusion, is a bounded lattice, and hence together with the distributive and complement laws above, show that it is a Boolean algebra. - -Existence of a least element and a greatest element: - -\varnothing \subseteq L \subseteq X - -Joins/supremums exist: - -L \subseteq L \cup R - -The union $L \cup R$ is the join/supremum of $L \text{ and } R$ with respect to $\subseteq$ because: - -
      - -
1. $L \subseteq L \cup R \text{ and } R \subseteq L \cup R,$ and
2. If $Z$ is a set such that $L \subseteq Z \text{ and } R \subseteq Z$ then $L \cup R \subseteq Z.$
    - -The intersection $L \cap R$ is the join/supremum of $L \text{ and } R$ with respect to $\supseteq.$ - -Meets/infimums exist: - -L \cap R \subseteq L - -The intersection $L \cap R$ is the meet/infimum of $L \text{ and } R$ with respect to $\subseteq$ because: - -
      - -
1. $L \cap R \subseteq L \text{ and } L \cap R \subseteq R,$ and
2. If $Z$ is a set such that $Z \subseteq L \text{ and } Z \subseteq R$ then $Z \subseteq L \cap R.$
    - -The union $L \cup R$ is the meet/infimum of $L \text{ and } R$ with respect to $\supseteq.$ - -Other inclusion properties: - -L \setminus R \subseteq L - -(L \setminus R) \cap L = L \setminus R - -(L \setminus R) \cap R = \varnothing - -
      - -
* If $L \subseteq X \text{ and } R \subseteq Y$ then $L \times R \subseteq X \times Y$
* If $x \not\in L \text{ then } x \not\in L \setminus R$
- -In the left hand sides of the following identities, $L$ is the Left most set, $M$ is the Middle set, and $R$ is the Right most set. - -Definition: A binary operator $\ast$ is called associative if $(L \ast M) \ast R = L \ast (M \ast R)$ always holds. - -The following set operations are associative: - -\begin{alignat}{5} - -(L \cup M) \cup R &=&& L \cup (M \cup R) \\[1.4ex] - -(L \cap M) \cap R &=&& L \cap (M \cap R) \\[1.4ex] - -(L \triangle M) \triangle R &=&& L \triangle (M \triangle R) \\[1.4ex] - -\end{alignat} - -For set subtraction, instead of associativity, only the following is always guaranteed: - -(L \setminus M) \setminus R ~~{\color{red}{\subseteq}}~~ L \setminus (M \setminus R) - -where equality holds if and only if $L \cap R = \varnothing$ (this condition does not depend on $M$). Thus - - (L \setminus M) \setminus R = L \setminus (M \setminus R) \quad \text{ if and only if } \quad (R \setminus M) \setminus L = R \setminus (M \setminus L), - -where the only difference between the left and right hand side set equalities is that the locations of $L \text{ and } R$ have been swapped. - -Definition: If $\ast \text{ and } \bullet$ are binary operators then $\ast$ left distributes over $\bullet$ if L \ast (M \bullet R) ~=~ (L \ast M) \bullet (L \ast R) \qquad\qquad \text{ for all } L, M, R - -while $\ast$ right distributes over $\bullet$ if (L \bullet M) \ast R ~=~ (L \ast R) \bullet (M \ast R) \qquad\qquad \text{ for all } L, M, R. - -The operator $\ast$ distributes over $\bullet$ if it both left distributes and right distributes over $\bullet.$ - -In the definitions above, to transform one side to the other, the innermost operator (the operator inside the parentheses) becomes the outermost operator and the outermost operator becomes the innermost operator.
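The distributive laws tabulated below, and the failures recorded after them, can likewise be checked mechanically. A short sketch (ours, not from the article) that exhaustively verifies one law that does hold, $L \cap (M \triangle R) = (L \cap M) \triangle (L \cap R)$, and exhibits a counterexample to distributivity of $\cup$ over $\triangle$:

```
# Exhaustive check over all subsets of {0,1,2}: ∩ distributes over △ ...
from itertools import combinations, product

universe = [0, 1, 2]
subsets = [set(c) for r in range(4) for c in combinations(universe, r)]

def symdiff(L, R):
    return (L - R) | (R - L)

for L, M, R in product(subsets, repeat=3):
    assert L & symdiff(M, R) == symdiff(L & M, L & R)   # always holds

# ... but ∪ does not: a minimal counterexample.
L, M, R = {1}, {1}, set()
print(L | symdiff(M, R), "vs", symdiff(L | M, L | R))   # {1} vs set()
```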
- -Right distributivity: - -\begin{alignat}{9} - -(L \cap M) \cup R ~&~~=~~&& (L \cup R) &&\cap &&(M \cup R) \qquad - -&&\text{ (Right-distributivity of } \cup \text{ over } \cap \text{)} \\[1.4ex] - -(L \cup M) \cup R ~&~~=~~&& (L \cup R) &&\cup &&(M \cup R) \qquad - -&&\text{ (Right-distributivity of } \cup \text{ over } \cup \text{)} \\[1.4ex] - -(L \cup M) \cap R ~&~~=~~&& (L \cap R) &&\cup &&(M \cap R) \qquad - -&&\text{ (Right-distributivity of } \cap \text{ over } \cup \text{)} \\[1.4ex] - -(L \cap M) \cap R ~&~~=~~&& (L \cap R) &&\cap &&(M \cap R) \qquad - -&&\text{ (Right-distributivity of } \cap \text{ over } \cap \text{)} \\[1.4ex] - -(L \triangle M) \cap R ~&~~=~~&& (L \cap R) &&\triangle &&(M \cap R) \qquad - -&&\text{ (Right-distributivity of } \cap \text{ over } \triangle \text{)} \\[1.4ex] - -(L \cap M) \times R ~&~~=~~&& (L \times R) &&\cap &&(M \times R) \qquad - -&&\text{ (Right-distributivity of } \times \text{ over } \cap \text{)} \\[1.4ex] - -(L \cup M) \times R ~&~~=~~&& (L \times R) &&\cup &&(M \times R) \qquad - -&&\text{ (Right-distributivity of } \times \text{ over } \cup \text{)} \\[1.4ex] - -(L \setminus M) \times R ~&~~=~~&& (L \times R) &&\setminus &&(M \times R) \qquad - -&&\text{ (Right-distributivity of } \times \text{ over } \setminus \text{)} \\[1.4ex] - -(L \cup M) \setminus R ~&~~=~~&& (L \setminus R) &&\cup &&(M \setminus R) \qquad - -&&\text{ (Right-distributivity of } \setminus \text{ over } \cup \text{)} \\[1.4ex] - -(L \cap M) \setminus R ~&~~=~~&& (L \setminus R) &&\cap &&(M \setminus R) \qquad - -&&\text{ (Right-distributivity of } \setminus \text{ over } \cap \text{)} \\[1.4ex] - -(L \triangle M) \setminus R ~&~~=~~&& (L \setminus R) &&\triangle &&(M \setminus R) \qquad - -&&\text{ (Right-distributivity of } \setminus \text{ over } \triangle \text{)} \\[1.4ex] - -(L \setminus M) \setminus R ~&~~=~~&& (L \setminus R) &&\setminus &&(M \setminus R) ~~=~~ L \setminus (M \cup R) \qquad - -&&\text{ (Right-distributivity of } \setminus \text{ over } \setminus \text{)} \\[1.4ex] - -\end{alignat} - -Left distributivity: - -\begin{alignat}{5} - -L \cup (M \cap R) &=&& (L \cup M) \cap (L \cup R) \qquad - -&&\text{ (Left-distributivity of } \cup \text{ over } \cap \text{)} \\[1.4ex] - -L \cup (M \cup R) &=&& (L \cup M) \cup (L \cup R) - -&&\text{ (Left-distributivity of } \cup \text{ over } \cup \text{)} \\[1.4ex] - -L \cap (M \cup R) &=&& (L \cap M) \cup (L \cap R) - -&&\text{ (Left-distributivity of } \cap \text{ over } \cup \text{)} \\[1.4ex] - -L \cap (M \cap R) &=&& (L \cap M) \cap (L \cap R) - -&&\text{ (Left-distributivity of } \cap \text{ over } \cap \text{)} \\[1.4ex] - -L \cap (M \triangle R) &=&& (L \cap M) \triangle (L \cap R) - -&&\text{ (Left-distributivity of } \cap \text{ over } \triangle \text{)} \\[1.4ex] - -L \times (M \cap R) &=&& (L \times M) \cap (L \times R) - -&&\text{ (Left-distributivity of } \times \text{ over } \cap \text{)} \\[1.4ex] - -L \times (M \cup R) &=&& (L \times M) \cup (L \times R) - -&&\text{ (Left-distributivity of } \times \text{ over } \cup \text{)} \\[1.4ex] - -L \times (M \setminus R) &=&& (L \times M) \setminus (L \times R) - -&&\text{ (Left-distributivity of } \times \text{ over } \setminus \text{)} \\[1.4ex] - -\end{alignat} - -Left distributivity and set subtraction: - -Set subtraction is right distributive over itself. 
However, set subtraction is not left distributive over itself because only the following is guaranteed in general: - -\begin{alignat}{5} - -L \setminus (M \setminus R) &~~{\color{red}{\supseteq}}~~&& \color{black}{} (L \setminus M) \setminus (L \setminus R) ~~=~~ L \cap R \setminus M \\[1.4ex] - -\end{alignat} - -where equality holds if and only if $L \setminus M = L \cap R,$ which happens if and only if $L \cap M \cap R = \varnothing \text{ and } L \setminus M \subseteq R.$ - -Set subtraction is not left distributive over unions or intersections because only the following are guaranteed in general: - -\begin{alignat}{5} - -L \setminus (M \cup R) ~&~~{\color{red}{\subseteq}}~~&& \color{black}{} (L \setminus M) \cup (L \setminus R) ~~=~~ L \setminus (M \cap R) \\[1.4ex] - -\end{alignat} - -\begin{alignat}{5} - -L \setminus (M \cap R) ~&~~{\color{red}{\supseteq}}~~&& \color{black}{} (L \setminus M) \cap (L \setminus R) ~~=~~ L \setminus (M \cup R) \\[1.4ex] - -\end{alignat} - -where equality holds for one (or equivalently, for both) of the above two inclusion formulas if and only if $L \setminus (M \cap R) \subseteq L \setminus (M \cup R),$ which happens if and only if $L \cap M = L \cap R.$ - -In contrast, for symmetric difference, the sets $L \setminus (M \triangle R)$ and $(L \setminus M) \triangle (L \setminus R) = L \cap (M \triangle R)$ are always disjoint. - -So these two sets are equal if and only if they are both equal to $\varnothing.$ - -Moreover, $L \setminus (M \triangle R) = \varnothing$ if and only if $L \cap M \cap R = \varnothing \text{ and } L \subseteq M \cup R.$ - -Distributivity and symmetric difference: - -Intersection distributes over symmetric difference: - -\begin{alignat}{5} - -L \cap (M \triangle R) ~&~~=~~&& (L \cap M) \triangle (L \cap R) ~&&~ \\[1.4ex] - -\end{alignat} - -\begin{alignat}{5} - -(L \triangle M) \cap R~&~~=~~&& (L \cap R) \triangle (M \cap R) ~&&~ \\[1.4ex] - -\end{alignat} - -Union does not distribute over symmetric difference because only the following is guaranteed in general: - -\begin{alignat}{5} - -L \cup (M \triangle R) - -~~{\color{red}{\supseteq}}~~ \color{black}{} (L \cup M) \triangle (L \cup R) ~ - -&~=~&& (M \triangle R) \setminus L - -&~=~&& (M \setminus L) \triangle (R \setminus L) \\[1.4ex] - -\end{alignat} - -Symmetric difference does not distribute over itself: - -L \triangle (M \triangle R) - -~~{\color{red}{\neq}}~~ \color{black}{} (L \triangle M) \triangle (L \triangle R) - -~=~ M \triangle R - -and in general, for any sets $L \text{ and } A$ (where $A$ represents $M \triangle R$), $L \triangle A$ might not be a subset, nor a superset, of $L$ (ditto for $A$). - -Set subtraction complexity: To manage the many identities involving set subtraction, this section is divided based on where the set subtraction operation and parentheses are located on the left hand side of the identity. The great variety and (relative) complexity of formulas involving set subtraction (compared to those without it) is in part due to the fact that unlike $\cup, \cap, \text{ and } \triangle,$ set subtraction is neither associative nor commutative. 
- -\begin{alignat}{4} - -(L \setminus M) \setminus R - -&= &&L \setminus (M \cup R) \\[0.6ex] - -&= (&&L \setminus M) \cap (L \setminus R) \\[0.6ex] - -&= (&&L \setminus R) \setminus M \\[0.6ex] - -&= (&&L \setminus R) \setminus (M \setminus R) \\[1.4ex] - -\end{alignat} - -\begin{alignat}{4} - -L \setminus (M \setminus R) - -&= (L \setminus M) \cup (L \cap R) \\[1.4ex] - -\end{alignat} - -* If $L \subseteq M \text{ then } L \setminus (M \setminus R) = L \cap R$ - -* L \setminus (M \setminus R) \subseteq (L \setminus M) \cup R with equality if and only if $R \subseteq L.$ - -Quasi-commutativity: - -(L \setminus M) \setminus R ~=~ (L \setminus R) \setminus M \qquad \text{ (Quasi-commutative)} - -but - -L \setminus (M \setminus R) ~\subseteq~ L \setminus (R \setminus M) \qquad \text{ if and only if } \qquad L \cap R ~\subseteq~ M - -so that the following are equivalent: - -
    1. $L \setminus (M \setminus R) ~=~ L \setminus (R \setminus M)$
    2. $L \cap R ~\subseteq~ M \text{ and } L \cap M ~\subseteq~ R$
    3. $L \cap (M \cup R) ~\subseteq~ M \cap R$
    4. $L \cap (M \cup R) ~=~ L \cap M \cap R$
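A brute-force Python sketch (not part of the original article) of the quasi-commutativity identity and of the equivalence of the four conditions just listed:

```python
# Check (L \ M) \ R = (L \ R) \ M always holds, and that conditions 1-4
# above are equivalent, over all triples of subsets of {0, 1, 2}.
from itertools import combinations

U = {0, 1, 2}
subsets = [set(c) for r in range(len(U) + 1) for c in combinations(U, r)]

for L in subsets:
    for M in subsets:
        for R in subsets:
            assert (L - M) - R == (L - R) - M          # quasi-commutativity
            conds = [
                L - (M - R) == L - (R - M),            # condition 1
                (L & R <= M) and (L & M <= R),         # condition 2
                L & (M | R) <= M & R,                  # condition 3
                L & (M | R) == L & M & R,              # condition 4
            ]
            assert all(conds) or not any(conds)        # all hold or none do
```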
    - -Parentheses on the left - -\begin{alignat}{4} - -\left(L \setminus M\right) \cup R - -&= (L \cup R) \setminus (M \setminus R) \\ - -&= (L \setminus (M \cup R)) \cup R ~~~~~ \text{ (the outermost union is disjoint) } \\ - -\end{alignat} - -\begin{alignat}{4} - -(L \setminus M) \cap R - -&= (&&L \cap R) \setminus (M \cap R) ~~~\text{ (Distributive law of } \cap \text{ over } \setminus \text{ )} \\ - -&= (&&L \cap R) \setminus M \\ - -&= &&L \cap (R \setminus M) \\ - -\end{alignat} - -\begin{alignat}{4} - -(L \setminus M) ~\triangle~ R - -&= (L \setminus (M \cup R)) \cup (R \setminus L) \cup (L \cap M \cap R) ~~~\text{ (the three outermost sets are pairwise disjoint) } \\ - -\end{alignat} - -(L \setminus M) \times R = (L \times R) \setminus (M \times R) ~~~~~\text{ (Distributivity)} - -Parentheses on the right - -\begin{alignat}{3} - -L \setminus (M \cup R) - -&= (L \setminus M) &&\cap (&&L \setminus R) ~~~~\text{ (De Morgan's law) } \\ - -&= (L \setminus M) &&\setminus &&R \\ - -&= (L \setminus R) &&\setminus &&M \\ - -\end{alignat} - -\begin{alignat}{4} - -L \setminus (M \cap R) - -&= (L \setminus M) \cup (L \setminus R) ~~~~\text{ (De Morgan's law) } \\ - -\end{alignat} - -\begin{alignat}{4} - -L \setminus (M ~\triangle~ R) - -&= (L \setminus (M \cup R)) \cup (L \cap M \cap R) ~~~\text{ (the outermost union is disjoint) } \\ - -\end{alignat} - -Parentheses on the left - -\begin{alignat}{4} - -(L \cup M) \setminus R - -&= (L \setminus R) \cup (M \setminus R) \\ - -\end{alignat} - -\begin{alignat}{4} - -(L \cap M) \setminus R - -&= (&&L \setminus R) &&\cap (M \setminus R) \\ - -&= &&L &&\cap (M \setminus R) \\ - -&= &&M &&\cap (L \setminus R) \\ - -\end{alignat} - -\begin{alignat}{4} - -(L \triangle M) \setminus R - -&= (L \setminus R) ~&&\triangle~ (M \setminus R) \\ - -&= (L \cup R) ~&&\triangle~ (M \cup R) \\ - -\end{alignat} - -Parentheses on the right - -\begin{alignat}{3} - -L \cup (M \setminus R) - -&= && &&L &&\cup &&(M \setminus (R \cup L)) &&~~~\text{ (the outermost union is disjoint) } \\ - -&= [&&(&&L \setminus M) &&\cup &&(R \cap L)] \cup (M \setminus R) &&~~~\text{ (the outermost union is disjoint) } \\ - -&= &&(&&L \setminus (M \cup R)) &&\cup &&(R \cap L) \cup (M \setminus R) &&~~~\text{ (the three outermost sets are pairwise disjoint) } \\ - -\end{alignat} - -\begin{alignat}{4} - -L \cap (M \setminus R) - -&= (&&L \cap M) &&\setminus (L \cap R) ~~~\text{ (Distributive law of } \cap \text{ over } \setminus \text{ )} \\ - -&= (&&L \cap M) &&\setminus R \\ - -&= &&M &&\cap (L \setminus R) \\ - -&= (&&L \setminus R) &&\cap (M \setminus R) \\ - -\end{alignat} - - - -L \times (M \setminus R) = (L \times M) \setminus (L \times R) ~~~~~\text{ (Distributivity)} - -Operations of the form $(L \bullet M) \ast (M \bullet R)$: - -\begin{alignat}{9} - -(L \cup M) &\cup&& (&&M \cup R) && - -&&=&& L \cup M \cup R \\[1.4ex] - -(L \cup M) &\cap&& (&&M \cup R) && - -&&=&& M \cap (L \cup R) \\[1.4ex] - -(L \cup M) &\setminus&& (&&M \cup R) && - -&&=&& L \setminus (M \cup R) \\[1.4ex] - -(L \cup M) &\triangle&& (&&M \cup R) && - -&&=&& (L \setminus (M \cup R)) \cup (R \setminus (L \cup M)) \\[1.4ex] - -&&&&&&& &&=&& (L \triangle R) \setminus M \\[1.4ex] - -(L \cap M) &\cup&& (&&M \cap R) && - -&&=&& M \cup (L \cap R) \\[1.4ex] - -(L \cap M) &\cap&& (&&M \cap R) && - -&&=&& L \cap M \cap R \\[1.4ex] - -(L \cap M) &\setminus&& (&&M \cap R) && - -&&=&& (L \cap M) \setminus R \\[1.4ex] - -(L \cap M) &\triangle&& (&&M \cap R) && - -&&=&& [(L \cap M) \cup (M \cap R)] \setminus (L \cap M \cap R) 
\\[1.4ex] - -(L \setminus M) &\cup&& (&&M \setminus R) && - -&&=&& (L \cup M) \setminus (M \cap R) \\[1.4ex] - -(L \setminus M) &\cap&& (&&M \setminus R) && - -&&=&& \varnothing \\[1.4ex] - -(L \setminus M) &\setminus&& (&&M \setminus R) && - -&&=&& L \setminus M \\[1.4ex] - -(L \setminus M) &\triangle&& (&&M \setminus R) && - -&&=&& (L \setminus M) \cup (M \setminus R) \\[1.4ex] - -&&&&&&& &&=&& (L \cup M) \setminus (M \cap R) \\[1.4ex] - -(L \triangle M) &\cup&& (&&M \triangle R) && - -&&=&& (L \cup M \cup R) \setminus (L \cap M \cap R) \\[1.4ex] - -(L \triangle M) &\cap&& (&&M \triangle R) && - -&&=&& ((L \cap R) \setminus M) \cup (M \setminus (L \cup R)) \\[1.4ex] - -(L \triangle M) &\setminus&& (&&M \triangle R) && - -&&=&& (L \setminus (M \cup R)) \cup ((M \cap R) \setminus L) \\[1.4ex] - -(L \triangle M) &\triangle&& (&&M \triangle R) && - -&&=&& L \triangle R \\[1.7ex] - -\end{alignat} - -Operations of the form $(L \bullet M) \ast (R \setminus M)$: - -\begin{alignat}{9} - -(L \cup M) &\cup&& (&&R \setminus M) && - -&&=&& L \cup M \cup R \\[1.4ex] - -(L \cup M) &\cap&& (&&R \setminus M) && - -&&=&& (L \cap R) \setminus M \\[1.4ex] - -(L \cup M) &\setminus&& (&&R \setminus M) && - -&&=&& M \cup (L \setminus R) \\[1.4ex] - -(L \cup M) &\triangle&& (&&R \setminus M) && - -&&=&& M \cup (L \triangle R) \\[1.4ex] - -(L \cap M) &\cup&& (&&R \setminus M) && - -&&=&& [L \cap (M \cup R)] \cup [R \setminus (L \cup M)] \qquad \text{ (disjoint union)} \\[1.4ex] - -&&&&&&& &&=&& (L \cap M) \triangle (R \setminus M) \\[1.4ex] - -(L \cap M) &\cap&& (&&R \setminus M) && - -&&=&& \varnothing \\[1.4ex] - -(L \cap M) &\setminus&& (&&R \setminus M) && - -&&=&& L \cap M \\[1.4ex] - -(L \cap M) &\triangle&& (&&R \setminus M) && - -&&=&& (L \cap M) \cup (R \setminus M) \qquad \text{ (disjoint union)} \\[1.4ex] - -(L \setminus M) &\cup&& (&&R \setminus M) && - -&&=&& L \cup R \setminus M \\[1.4ex] - -(L \setminus M) &\cap&& (&&R \setminus M) && - -&&=&& (L \cap R) \setminus M \\[1.4ex] - -(L \setminus M) &\setminus&& (&&R \setminus M) && - -&&=&& L \setminus (M \cup R) \\[1.4ex] - -(L \setminus M) &\triangle&& (&&R \setminus M) && - -&&=&& (L \triangle R) \setminus M \\[1.4ex] - -(L \triangle M) &\cup&& (&&R \setminus M) && - -&&=&& (L \cup M \cup R) \setminus (L \cap M) \\[1.4ex] - -(L \triangle M) &\cap&& (&&R \setminus M) && - -&&=&& (L \cap R) \setminus M \\[1.4ex] - -(L \triangle M) &\setminus&& (&&R \setminus M) && - -&&=&& [L \setminus (M \cup R)] \cup (M \setminus L) \qquad \text{ (disjoint union)} \\[1.4ex] - -&&&&&&& &&=&& (L \triangle M) \setminus (L \cap R) \\[1.4ex] - -(L \triangle M) &\triangle&& (&&R \setminus M) && - -&&=&& L \triangle (M \cup R) \\[1.7ex] - -\end{alignat} - -Operations of the form $(L \setminus M) \ast (L \setminus R)$: - -\begin{alignat}{9} - -(L \setminus M) &\cup&& (&&L \setminus R) - -&&=&& L \setminus (M \cap R) \\[1.4ex] - -(L \setminus M) &\cap&& (&&L \setminus R) - -&&=&& L \setminus (M \cup R) \\[1.4ex] - -(L \setminus M) &\setminus&& (&&L \setminus R) - -&&=&& (L \cap R) \setminus M \\[1.4ex] - -(L \setminus M) &\triangle&& (&&L \setminus R) - -&&=&& L \cap (M \triangle R) \\[1.4ex] - -&&&&& &&=&& (L \cap M) \triangle (L \cap R) \\[1.4ex] - -\end{alignat} - -Other properties: - -L \cap M = R \text{ and } L \cap R = M \qquad \text{ if and only if } \qquad M = R \subseteq L. - -
    • If $L \subseteq M$ then $L \setminus R = L \cap (M \setminus R).$
    • $L \times (M \setminus R) = (L \times M) \setminus (L \times R)$
    • If $L \subseteq R$ then $M \setminus R \subseteq M \setminus L.$
    • $L \cap M \cap R = \varnothing$ if and only if for any $x \in L \cup M \cup R,$ $x$ belongs to at most two of the sets $L, M, \text{ and } R.$
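The binary identities above, including the tables of operations of the form $(L \bullet M) \ast (M \bullet R)$ and $(L \bullet M) \ast (R \setminus M),$ are all machine-checkable. A randomized Python sketch (not part of the original article) testing a sample of them:

```python
# Randomized spot-check of a few rows of the tables above, using random
# subsets of range(6); the identity being tested is named in each comment.
import random

def rand_subset(u):
    return {x for x in u if random.random() < 0.5}

U = range(6)
for _ in range(1000):
    L, M, R = (rand_subset(U) for _ in range(3))
    assert (L ^ M) ^ (M ^ R) == L ^ R                      # (L△M)△(M△R) = L△R
    assert (L | M) - (M | R) == L - (M | R)                # (L∪M)\(M∪R) = L\(M∪R)
    assert (L - M) & (M - R) == set()                      # (L\M)∩(M\R) = ∅
    assert (L ^ M) - (R - M) == (L - (M | R)) | (M - L)    # (L△M)\(R\M)
```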
    - -Let $\left(L_i\right)_{i \in I},$ $\left(R_j\right)_{j \in J},$ and $\left(S_{i,j}\right)_{(i, j) \in I \times J}$ be families of sets. Whenever such an assumption is needed, all indexing sets, such as $I$ and $J,$ are assumed to be non-empty. - -Arbitrary unions defined - -{{NumBlk|::|$\bigcup_{i \in I} L_i ~~\colon=~ \{x ~:~ \text{ there exists } i \in I \text{ such that } x \in L_i\}$|}} - -If $I = \varnothing$ then $\bigcup_{i \in \varnothing} L_i = \{x ~:~ \text{ there exists } i \in \varnothing \text{ such that } x \in L_i\} = \varnothing,$ which is sometimes called the nullary union convention (despite being called a convention, this equality follows from the definition). - -Arbitrary intersections defined - -If $I \neq \varnothing$ then - -{{NumBlk|::|$\bigcap_{i \in I} L_i ~~\colon=~ \{x ~:~ x \in L_i \text{ for every } i \in I\} ~=~ \{x ~:~ \text{ for all } i, \text{ if } i \in I \text{ then } x \in L_i\}.$|}} - -Nullary intersections - -If $I = \varnothing$ then - -\bigcap_{i \in \varnothing} L_i = \{x ~:~ \text{ for all } i, \text{ if } i \in \varnothing \text{ then } x \in L_i\} - -where every possible thing $x$ in the universe vacuously satisfies the condition: "if $i \in \varnothing$ then $x \in L_i$". Consequently, $\bigcap_{i \in \varnothing} L_i = \{x ~:~ \text{ for all } i, \text{ if } i \in \varnothing \text{ then } x \in L_i\} = \{x : \text{ for all } i, \text{ true }\}$ consists of everything in the universe. - -So if $I = \varnothing$ and: - -# if you are working in a model in which there exists some universe set $X$ then $\bigcap_{i \in \varnothing} L_i = \{x ~:~ x \in L_i \text{ for every } i \in \varnothing\} ~=~ X.$ - -# otherwise, if you are working in a model in which "the class of all things $x$" is not a set (by far the most common situation) then $\bigcap_{i \in \varnothing} L_i$ is undefined. This is because $\bigcap_{i \in \varnothing} L_i = \{x ~:~ \text{ for all } i, \text{ if } i \in \varnothing \text{ then } x \in L_i\}$ consists of everything, which makes $\bigcap_{i \in \varnothing} L_i$ a proper class and not a set. - -Assumption: Henceforth, whenever a formula requires some indexing set to be non-empty in order for an arbitrary intersection to be well-defined, then this will automatically be assumed without mention. - -A consequence of this is the following assumption/definition: - -A finite intersection of sets or an intersection of finitely many sets refers to the intersection of a finite collection of one or more sets. - -Some authors adopt the so-called nullary intersection convention, which is the convention that an empty intersection of sets is equal to some canonical set. In particular, if all sets are subsets of some set $X$ then some author may declare that the empty intersection of these sets be equal to $X.$ However, the nullary intersection convention is not as commonly accepted as the nullary union convention and this article will not adopt it (this is due to the fact that unlike the empty union, the value of the empty intersection depends on $X$ so if there are multiple sets under consideration, which is commonly the case, then the value of the empty intersection risks becoming ambiguous).
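Incidentally, these two conventions are mirrored by how a fold over an empty family behaves in code. A Python sketch (not part of the original article):

```python
# The nullary union is well-defined (empty), while the nullary intersection
# is undefined unless an ambient set X is supplied explicitly.
from functools import reduce

family = []                                # an empty family of sets (I = ∅)

empty_union = set().union(*family)         # ∅ -- the nullary union convention
print(empty_union)                         # set()

X = {0, 1, 2}                              # an ambient "universe" set
empty_intersection = reduce(set.intersection, family, X)   # = X by convention
print(empty_intersection)                  # {0, 1, 2}

# Without an ambient set there is no sensible value:
try:
    reduce(set.intersection, family)       # raises TypeError: empty iterable
except TypeError as e:
    print("undefined:", e)
```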
- -Multiple index sets - -\bigcup_{\stackrel{i \in I,}{j \in J}} S_{i,j} ~~\colon=~ \bigcup_{(i, j) \in I \times J} S_{i,j} - -\bigcap_{\stackrel{i \in I,}{j \in J}} S_{i,j} ~~\colon=~ \bigcap_{(i, j) \in I \times J} S_{i,j} - -Commutativity: - -\bigcup_{\stackrel{i \in I,}{j \in J}} S_{i,j} - -~~\colon=~ \bigcup_{(i, j) \in I \times J} S_{i,j} - -~=~ \bigcup_{i \in I} \left(\bigcup_{j \in J} S_{i,j}\right) - -~=~ \bigcup_{j \in J} \left(\bigcup_{i \in I} S_{i,j}\right) - -\bigcap_{\stackrel{i \in I,}{j \in J}} S_{i,j} - -~~\colon=~ \bigcap_{(i, j) \in I \times J} S_{i,j} - -~=~ \bigcap_{i \in I} \left(\bigcap_{j \in J} S_{i,j}\right) - -~=~ \bigcap_{j \in J} \left(\bigcap_{i \in I} S_{i,j}\right) - -Unions of unions and intersections of intersections: - -\left(\bigcup_{i \in I} L_i\right) \cup R ~=~ \bigcup_{i \in I} \left(L_i \cup R\right) - -\left(\bigcap_{i \in I} L_i\right) \cap R ~=~ \bigcap_{i \in I} \left(L_i \cap R\right) - -{{NumBlk|:|$\left(\bigcup_{i \in I} L_i\right) \cup \left(\bigcup_{j \in J} R_j\right) ~=~ \bigcup_{\stackrel{i \in I,}{j \in J}} \left(L_i \cup R_j\right)$|}} - -{{NumBlk|:|$\left(\bigcap_{i \in I} L_i\right) \cap \left(\bigcap_{j \in J} R_j\right) ~=~ \bigcap_{\stackrel{i \in I,}{j \in J}} \left(L_i \cap R_j\right)$|}} - -and if $I = J$ then also: - -{{NumBlk|:|$\left(\bigcup_{i \in I} L_i\right) \cup \left(\bigcup_{i \in I} R_i\right) ~=~ \bigcup_{i \in I} \left(L_i \cup R_i\right)$|}} - -{{NumBlk|:|$\left(\bigcap_{i \in I} L_i\right) \cap \left(\bigcap_{i \in I} R_i\right) ~=~ \bigcap_{i \in I} \left(L_i \cap R_i\right)$|}} - -{{NumBlk|:|$\left(\bigcup_{i \in I} L_i\right) \cap R ~=~ \bigcup_{i \in I} \left(L_i \cap R\right)$|}} - -{{NumBlk|:|$\left(\bigcup_{i \in I} L_i\right) \cap \left(\bigcup_{j \in J} R_j\right) ~=~ \bigcup_{\stackrel{i \in I,}{j \in J}} \left(L_i \cap R_j\right)$|}} - -* If all $\left(L_i\right)_{i \in I}$ are pairwise disjoint and all $\left(R_j\right)_{j \in J}$ are also pairwise disjoint, then so are all $\left(L_i \cap R_j\right)_{(i, j) \in I \times J}$ (that is, if $(i, j) \neq \left(i_2, j_2\right)$ then $\left(L_i \cap R_j\right) \cap \left(L_{i_2} \cap R_{j_2}\right) = \varnothing$). - -* Importantly, if $I = J$ then in general, ~\left(\bigcup_{i \in I} L_i\right) \cap \left(\bigcup_{i \in I} R_i\right) ~~\color{Red}{\neq}\color{Black}{}~~ \bigcup_{i \in I} \left(L_i \cap R_i\right)~ (an example is given in the sketch below). The single union on the right hand side must be over all pairs $(i, j) \in I \times I:$ ~\left(\bigcup_{i \in I} L_i\right) \cap \left(\bigcup_{i \in I} R_i\right) ~~=~~ \bigcup_{\stackrel{i \in I,}{j \in I}} \left(L_i \cap R_j\right).~ The same is usually true for other similar non-trivial set equalities and relations that depend on two (potentially unrelated) indexing sets $I$ and $J.$ Two exceptions are the unions-of-unions and intersections-of-intersections equalities above, but both of these are among the most trivial of set equalities and moreover, even for these equalities there is still something that must be proven. - -Equality can hold under certain circumstances, such as in the two set subtraction identities given further below, which are respectively the special cases where $S_{i,j} \colon= L_i \setminus R_j$ and $\left(\hat{S}_{j,i}\right)_{(j, i) \in J \times I} \colon= \left(L_i \setminus R_j\right)_{(j, i) \in J \times I}$ (in the second, $I$ and $J$ are swapped).
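A Python sketch (not part of the original article; the two families below are arbitrary small examples) of the pairwise-index identity and of the pitfall just described:

```python
# Check (∪_i L_i) ∩ (∪_j R_j) = ∪_{(i,j)} (L_i ∩ R_j), and exhibit the
# pitfall: with a shared index, ∪_i (L_i ∩ R_i) can be strictly smaller.
from itertools import product

L = {0: {1, 2}, 1: {3}}          # family (L_i), indexed by I = {0, 1}
R = {0: {3}, 1: {1}}             # family (R_i), indexed by J = {0, 1}

lhs = set().union(*L.values()) & set().union(*R.values())
rhs_pairs = set().union(*(L[i] & R[j] for i, j in product(L, R)))
rhs_diag = set().union(*(L[i] & R[i] for i in L))

print(lhs == rhs_pairs)   # True: union over all pairs (i, j)
print(lhs == rhs_diag)    # False here: {1, 3} vs set()
```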
- -Formula for equality - -For an equality of sets that extends the distributive laws, an approach other than just switching $\cup$ and $\cap$ is needed. Suppose that for each $i \in I,$ there is some non-empty index set $J_i$ and for each $j \in J_i,$ let $T_{i,j}$ be any set (for example, with $\left(S_{i,j}\right)_{(i, j) \in I \times J}$ use $J_i \colon= J$ for all $i \in I$ and use $T_{i,j} \colon= S_{i,j}$ for all $i \in I$ and all $j \in J_i = J$). Let - -\mathcal{F} ~\colon=~ \prod_{i \in I} J_i - -be the Cartesian product, which can be interpreted as the set of all functions $f ~:~ I ~\to~ \bigcup_{i \in I} J_i$ such that $f(i) \in J_i$ for every $i \in I.$ Then - -{{NumBlk|:|$\bigcap_{i \in I} \left[\bigcup_{j \in J_i} T_{i,j}\right] = \bigcup_{f \in \mathcal{F}} \left[\bigcap_{i \in I} T_{i,f(i)}\right]$||LnSty=1px dashed black}} - -{{NumBlk|:|$\bigcup_{i \in I} \left[\bigcap_{j \in J_i} T_{i,j}\right] = \bigcap_{f \in \mathcal{F}} \left[\bigcup_{i \in I} T_{i,f(i)}\right]$||LnSty=1px dashed black}} - -where $\mathcal{F} ~\colon=~ \prod_{i \in I} J_i.$ - -Example application: In the particular case where all $J_i$ are equal (that is, $J_i = J_{i_2}$ for all $i, i_2 \in I,$ which is the case with the family $\left(S_{i,j}\right)_{(i, j) \in I \times J},$ for example), then letting $J$ denote this common set, the Cartesian product $\mathcal{F} ~\colon=~ \prod_{i \in I} J_i$ will be $\mathcal{F} = J^I;$ that is, $\mathcal{F}$ will be the set of all functions of the form $f ~:~ I ~\to~ J.$ The above two set equalities respectively become: - -* $\bigcap_{i \in I} \left[\bigcup_{j \in J} S_{i,j}\right] = \bigcup_{f \in J^I} \left[\bigcap_{i \in I} S_{i,f(i)}\right]$ - -* $\bigcup_{i \in I} \left[\bigcap_{j \in J} S_{i,j}\right] = \bigcap_{f \in J^I} \left[\bigcup_{i \in I} S_{i,f(i)}\right]$ - -which when combined with the elementary inclusion $\bigcup_{i \in I} \left[\bigcap_{j \in J} S_{i,j}\right] ~\subseteq~ \bigcap_{j \in J} \left[\bigcup_{i \in I} S_{i,j}\right]$ implies: - -\bigcup_{i \in I} \left[\bigcap_{j \in J} S_{i,j}\right] - -~=~ \bigcap_{f \in J^I} \left[\bigcup_{i \in I} S_{i,f(i)}\right] - -~~\color{Red}{\subseteq}\color{Black}{}~~ \bigcup_{g \in I^J} \left[\bigcap_{j \in J} S_{g(j),j}\right] - -~=~ \bigcap_{j \in J} \left[\bigcup_{i \in I} S_{i,j}\right] - -where - -* on the left hand side, the indices $f \text{ and } i$ range over $f \in J^I \text{ and } i \in I$ (so the subscripts of $S_{i,f(i)}$ range over $i \in I \text{ and } f(i) \in f(I) \subseteq J$) - -* on the right hand side, the indices $g \text{ and } j$ range over $g \in I^J \text{ and } j \in J$ (so the subscripts of $S_{g(j),j}$ range over $j \in J \text{ and } g(j) \in g(J) \subseteq I$). - -Example application: To apply the general formula to the case of $\left(C_k\right)_{k \in K}$ and $\left(D_{l}\right)_{l \in L},$ use $I \colon= \{1, 2\},$ $J_1 \colon= K,$ $J_2 \colon= L,$ and let $T_{1,k} \colon= C_k$ for all $k \in J_1$ and let $T_{2,l} \colon= D_l$ for all $l \in J_2.$ - -Every map $f \in \mathcal{F} ~\colon=~ \prod_{i \in I} J_i = J_1 \times J_2 = K \times L$ can be bijectively identified with the pair $\left(f(1), f(2)\right) \in K \times L$ (the inverse sends $(k,l) \in K \times L$ to the map $f_{(k,l)} \in \mathcal{F}$ defined by $1 \mapsto k$ and $2 \mapsto l;$ this is technically just a change of notation).
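The first of the two dashed identities above can be checked on finite data by enumerating the choice functions $f \in \mathcal{F}$ directly. A Python sketch (not part of the original article; the index sets and the sets $T_{i,j}$ are arbitrary illustrative choices):

```python
# Finite check of  ∩_{i∈I} ∪_{j∈J_i} T_{i,j}  =  ∪_{f∈∏J_i} ∩_{i∈I} T_{i,f(i)},
# enumerating the choice functions f via itertools.product.
from itertools import product

I = [0, 1]
J = {0: ["a", "b"], 1: ["c"]}                     # J_0, J_1 (non-empty)
T = {(0, "a"): {1, 2}, (0, "b"): {2, 3},
     (1, "c"): {2, 4}}                            # the sets T_{i,j}

lhs = set.intersection(*[set().union(*(T[i, j] for j in J[i])) for i in I])

# F = ∏_i J_i: each f is a tuple (f(0), f(1), ...) of choices
F = product(*(J[i] for i in I))
rhs = set().union(*(set.intersection(*[T[i, f[i]] for i in I]) for f in F))

assert lhs == rhs
print(lhs)   # {2}
```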
Expanding and simplifying the left hand side of the first general identity above, which recall was - -~\bigcap_{i \in I} \left[\bigcup_{j \in J_i} T_{i,j}\right] = \bigcup_{f \in \mathcal{F}} \left[\bigcap_{i \in I} T_{i,f(i)}\right]~ - -gives - -\bigcap_{i \in I} \left[\bigcup_{j \in J_i} T_{i,j}\right] - -= \left(\bigcup_{j \in J_1} T_{1,j}\right) \cap \left(\bigcup_{j \in J_2} T_{2,j}\right) - -= \left(\bigcup_{k \in K} T_{1,k}\right) \cap \left(\bigcup_{l \in L} T_{2,l}\right) - -= \left(\bigcup_{k \in K} C_k\right) \cap \left(\bigcup_{l \in L} D_l\right) - -and doing the same to the right hand side gives: - -\bigcup_{f \in \mathcal{F}} \left[\bigcap_{i \in I} T_{i,f(i)}\right] - -= \bigcup_{f \in \mathcal{F}} \left(T_{1,f(1)} \cap T_{2,f(2)}\right) - -= \bigcup_{f \in \mathcal{F}} \left(C_{f(1)} \cap D_{f(2)}\right) - -= \bigcup_{(k,l) \in K \times L} \left(C_k \cap D_l\right) - -= \bigcup_{\stackrel{k \in K,}{l \in L}} \left(C_k \cap D_l\right). - -Thus the general identity reduces down to the previously given set equality: - -\left(\bigcup_{k \in K} C_k\right) \cap \left(\bigcup_{l \in L} D_l\right) = \bigcup_{\stackrel{k \in K,}{l \in L}} \left(C_k \cap D_l\right). - -{{NumBlk|:|$\left(\bigcup_{i \in I} L_i\right) \setminus R ~=~ \bigcup_{i \in I} \left(L_i \setminus R\right)$|}} - -{{NumBlk|:|$\left(\bigcap_{i \in I} L_i\right) \setminus R ~=~ \bigcap_{i \in I} \left(L_i \setminus R\right)$|}} - -{{NumBlk|:|$L \setminus \left(\bigcup_{j \in J} R_j\right) ~=~ \bigcap_{j \in J} \left(L \setminus R_j\right)$ (De Morgan's law)|}} - -{{NumBlk|:|$L \setminus \left(\bigcap_{j \in J} R_j\right) ~=~ \bigcup_{j \in J} \left(L \setminus R_j\right)$ (De Morgan's law)|}} - -The following set equalities can be deduced from the four equalities above (these equalities are atypical in that equality holds even though two indexing sets are involved): - -{{NumBlk|:|$\left(\bigcup_{i \in I} L_i\right) \setminus \left(\bigcup_{j \in J} R_j\right) ~=~ \bigcup_{i \in I} \left(\bigcap_{j \in J} \left(L_i \setminus R_j\right)\right) ~=~ \bigcap_{j \in J} \left(\bigcup_{i \in I} \left(L_i \setminus R_j\right)\right)$|}} - -{{NumBlk|:|$\left(\bigcap_{i \in I} L_i\right) \setminus \left(\bigcap_{j \in J} R_j\right) ~=~ \bigcup_{j \in J} \left(\bigcap_{i \in I} \left(L_i \setminus R_j\right)\right) ~=~ \bigcap_{i \in I} \left(\bigcup_{j \in J} \left(L_i \setminus R_j\right)\right)$|}} - -{{NumBlk|:|$\left(\bigcup_{i \in I} L_i\right) \setminus \left(\bigcap_{j \in J} R_j\right) ~=~ \bigcup_{\stackrel{i \in I,}{j \in J}} \left(L_i \setminus R_j\right)$|}} - -{{NumBlk|:|$\left(\bigcap_{i \in I} L_i\right) \setminus \left(\bigcup_{j \in J} R_j\right) ~=~ \bigcap_{\stackrel{i \in I,}{j \in J}} \left(L_i \setminus R_j\right)$|}} - -Intersections of products - -If $\left(S_{i,j}\right)_{(i,j) \in I \times J}$ is a family of sets then - -{{NumBlk|:|$\bigcap_{j \in J} \left(\prod_{i \in I} S_{i,j}\right) ~~=~~ \prod_{i \in I} \left(\bigcap_{j \in J} S_{i,j}\right)$|}} - -* Moreover, a tuple $\left(x_i\right)_{i \in I}$ belongs to the set above if and only if $x_i \in S_{i,j}$ for all $i \in I$ and all $j \in J.$ - -Binary intersections of products - -Let $\left(L_i\right)_{i \in I}$ and $\left(R_j\right)_{j \in J}$ be two families of sets.
- -If $I = J$ then \left(\prod_{i \in I} L_i\right) \cap \left(\prod_{i \in I}R_i\right) ~=~ \prod_{i \in I} \left(L_i \cap R_i\right) - -* So for instance, (L \times R) \cap \left(L_2 \times R_2\right) ~=~ \left(L \cap L_2\right) \times \left(R \cap R_2\right) and (L \times M \times R) \cap \left(L_2 \times M_2 \times R_2\right) ~=~ \left(L \cap L_2\right) \times \left(M \cap M_2\right) \times \left(R \cap R_2\right) - -If $I \neq J$ then $\left(\prod_{i \in I} L_i\right) \cap \left(\prod_{j \in J} R_j\right) = \varnothing$ in general (meaning, unless these products are somehow identified as the same set through some bijection or one of these products is identified as a subset of the other via some injective map), so only the case $I = J$ is useful. - -* For example, if $I := \{1, 2\}$ and $J := \{1, 2, 3\}$ with all sets equal to $\R$ then $\prod_{i \in I} L_i = \prod_{i \in \{1, 2\}} \R = \R^2$ and $\prod_{j \in J} R_j = \prod_{j \in \{1, 2, 3\}} \R = \R^3$ where $\R^2 \cap \R^3 = \varnothing$ unless, for example, $\prod_{i \in \{1, 2\}} \R = \R^2$ is identified as a subset of $\prod_{j \in \{1, 2, 3\}} \R = \R^3$ through some injection, such as $(x, y) \mapsto (x, y, 0)$; however, in this particular case the product $\prod_{i \in I = \{1, 2\}} L_i$ actually represents the $J$-indexed product $\prod_{j \in J = \{1, 2, 3\}} L_j$ where $L_3 := \{0\}.$ - -* For another example, take $I := \{1, 2\}$ and $J := \{1, 2, 3\}$ with $L_1 := \R^2$ and $L_2, R_1, R_2, \text{ and } R_3$ all equal to $\R.$ Then $\prod_{i \in I} L_i = \R^2 \times \R$ and $\prod_{j \in J} R_j = \R \times \R \times \R,$ which can both be identified as the same set via the bijection that sends $((x, y), z) \in \R^2 \times \R$ to $(x, y, z) \in \R \times \R \times \R.$ Under this identification, $\left(\prod_{i \in I} L_i\right) \cap \left(\prod_{j \in J}R_j\right) ~=~ \R^3.$ - -Unions of products - -For unions, only the following is guaranteed in general: - -\bigcup_{j \in J} \left(\prod_{i \in I} S_{i,j}\right) ~~\color{Red}{\subseteq}\color{Black}{}~~ \prod_{i \in I} \left(\bigcup_{j \in J} S_{i,j}\right) \qquad \text{ and } \qquad \bigcup_{i \in I} \left(\prod_{j \in J} S_{i,j}\right) ~~\color{Red}{\subseteq}\color{Black}{}~~ \prod_{j \in J} \left(\bigcup_{i \in I} S_{i,j}\right) - -where $\left(S_{i,j}\right)_{(i,j) \in I \times J}$ is a family of sets.
- -However, - -\begin{alignat}{9} - -\left(L \times R\right) ~\cup~ \left(L_2 \times R_2\right) - -~&=~ \left(\left(L \setminus L_2\right) \times R\right) ~\cup~ \left(\left(L_2 \setminus L\right) \times R_2\right) ~\cup~ \left(\left(L \cap L_2\right) \times \left(R \cup R_2\right)\right) \\[0.5ex] - -~&=~ \left(L \times \left(R \setminus R_2\right)\right) ~\cup~ \left(L_2 \times \left(R_2 \setminus R\right)\right) ~\cup~ \left(\left(L \cup L_2\right) \times \left(R \cap R_2\right)\right) \\ - -\end{alignat} - -Set subtraction of products - -If $\left(L_i\right)_{i \in I}$ and $\left(R_i\right)_{i \in I}$ are two families of sets then: - -\begin{alignat}{9} - -\left(\prod_{i \in I} L_i\right) ~\setminus~ \left(\prod_{i \in I}R_i\right) - -~&=~ \bigcup_{j \in I} \prod_{i \in I} \begin{cases}L_j \setminus R_j & \text{ if } i = j \\ L_i & \text{ if } i \neq j \\ \end{cases} \\[0.5ex] - -~&=~ \bigcup_{j \in I} \Big[\left(L_j \setminus R_j\right) ~\times~ \prod_{\stackrel{i \in I,}{j \neq i}} L_i\Big] \\[0.3ex] - -\end{alignat} - -so for instance, - -\begin{alignat}{9} - -\left(L \times R\right) ~\setminus~ \left(L_2 \times R_2\right) - -~&=~ \left[\left(L \setminus L_2\right) \times R\right] ~\cup~ \left[L \times \left(R \setminus R_2\right)\right] \\ - -\end{alignat} - -and - -(L \times M \times R) ~\setminus~ \left(L_2 \times M_2 \times R_2\right) - -~=~ \left[\left(L \setminus L_2\right) \times M \times R\right] ~\cup~ \left[L \times \left(M \setminus M_2\right) \times R\right] ~\cup~ \left[L \times M \times \left(R \setminus R_2\right)\right] - -Let $f : X \to Y$ be any function, where we denote its domain $X$ by $\operatorname{domain} f$ and its codomain $Y$ by $\operatorname{codomain} f.$ Let $L \text{ and } R$ be completely arbitrary sets and assume $A \subseteq X \text{ and } C \subseteq Y.$ - -Many of the identities below do not actually require that the sets be somehow related to $f$'s domain or codomain (that is, to $X$ or $Y$) so when some kind of relationship is necessary then it will be clearly indicated. - -Because of this, in this article, if $L$ is declared to be "any set," and it is not indicated that $L$ must be somehow related to $X$ or $Y$ (say for instance, that it be a subset of $X$ or $Y$) then it is meant that $L$ is truly arbitrary. - -This generality is useful in situations where $f : X \to Y$ is a map between two subsets $X \subseteq U$ and $Y \subseteq V$ of some larger sets $U$ and $V,$ and where the set $L$ might not be entirely contained in $X = \operatorname{domain} f$ and/or $Y = \operatorname{codomain} f$ (e.g. if all that is known about $L$ is that $L \subseteq U$); in such a situation it may be useful to know what can and cannot be said about $f(L)$ and/or $f^{-1}(L)$ without having to introduce a (potentially unnecessary) intersection such as: $f(L \cap X)$ and/or $f^{-1}(L \cap Y).$ - -Images and preimages of sets - -If $L$ is any set then the image of $L$ under $f$ is defined to be the set: - -f(L) ~:=~ \{ f(s) ~:~ s \in L \cap \operatorname{domain} f \} - -while the preimage of $L$ under $f$ is: - -f^{-1}(L) ~:=~ \{ x \in \operatorname{domain} f ~:~ f(x) \in L \} - -where if $L = \{s\}$ is a singleton set then the preimage of $s$ under $f$ is - -f^{-1}(s) ~:=~ f^{-1}(\{s\}) ~=~ \{ x \in \operatorname{domain} f ~:~ f(x) = s \}.
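For experimenting with the identities in this section, the two definitions above translate directly to code. A Python sketch (not part of the original article; `image` and `preimage` are illustrative names, with a function modeled as a dict on a finite domain):

```python
# The definitions above, for a function given as a dict (finite domain).
# L may be any set, not necessarily a subset of the domain.
def image(f, L):
    """f(L) = { f(s) : s in L ∩ domain f }"""
    return {f[s] for s in L if s in f}

def preimage(f, L):
    """f^{-1}(L) = { x in domain f : f(x) in L }"""
    return {x for x in f if f[x] in L}

f = {1: "a", 2: "a", 3: "b"}       # f : {1, 2, 3} -> {"a", "b"}
print(image(f, {2, 3, 99}))        # {'a', 'b'}   (99 lies outside the domain)
print(preimage(f, {"a"}))          # {1, 2}
```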
- -Denote by $\operatorname{Im} f$ or $\operatorname{image} f$ the image or range of $f : X \to Y,$ which is the set: - -\operatorname{Im} f ~=~ f(X) ~:=~ f(\operatorname{domain} f) ~=~ \{ f(x) ~:~ x \in \operatorname{domain} f \}. - -Saturated sets - -A set $A$ is said to be $f$-saturated or a saturated set if any of the following equivalent conditions are satisfied:
    1. There exists a set $R$ such that $A = f^{-1}(R).$ - -* Any such set $R$ necessarily contains $f(A)$ as a subset.
    2. $A = f^{-1}(f(A)).$
    3. $A \supseteq f^{-1}(f(A))$ and $A \subseteq \operatorname{domain} f.$ - -* The inclusion $L \cap \operatorname{domain} f \subseteq f^{-1}(f(L))$ always holds, where if $A \subseteq \operatorname{domain} f$ then this becomes $A \subseteq f^{-1}(f(A)).$
    - -For a set $A$ to be $f$-saturated, it is necessary that $A \subseteq \operatorname{domain} f.$ - -Compositions and restrictions of functions - -If $f$ and $g$ are maps then $g \circ f$ denotes the composition map - -g \circ f ~:~ \{ x \in \operatorname{domain} f ~:~ f(x) \in \operatorname{domain} g \} ~\to~ \operatorname{codomain} g - -with domain and codomain - -\begin{alignat}{4} - -\operatorname{domain} (g \circ f) &= \{ x \in \operatorname{domain} f ~:~ f(x) \in \operatorname{domain} g \} \\[0.4ex] - -\operatorname{codomain} (g \circ f) &= \operatorname{codomain} g \\[0.7ex] - -\end{alignat} - -defined by - -(g \circ f)(x) := g(f(x)). - -The restriction of $f : X \to Y$ to $L,$ denoted by $f\big\vert_{L},$ is the map - -f\big\vert_L ~:~ L \cap \operatorname{domain} f ~\to~ Y - -with $\operatorname{domain} f\big\vert_L ~=~ L \cap \operatorname{domain} f$ defined by sending $x \in L \cap \operatorname{domain} f$ to $f(x);$ that is, - -f\big\vert_L(x) ~:=~ f(x). - -Alternatively, $~f\big\vert_L ~=~ f \circ \operatorname{In}~$ where $~\operatorname{In} ~:~ L \cap X \to X~$ denotes the inclusion map, which is defined by $\operatorname{In}(s) := s.$ - -Equivalences and implications of images and preimages - -* $f(L) \cap R = \varnothing \quad \text{ if and only if } \quad L \cap f^{-1}(R) = \varnothing.$ - -** So for any $t,$ t \not\in f(L) \quad \text{ if and only if } \quad L \cap f^{-1}(t) = \varnothing. - -Throughout, let $L \text{ and } R$ be any sets and let $f : X \to Y$ be any function. - -Summary - -As summarized below, set equality is not guaranteed only for images of: intersections, set subtractions, and symmetric differences. - -Preimages preserve set operations - -Preimages of sets are well-behaved with respect to all basic set operations: - -\begin{alignat}{4} - -f^{-1}(L \cup R) ~&=~ f^{-1}(L) \cup f^{-1}(R) \\ - -f^{-1}(L \cap R) ~&=~ f^{-1}(L) \cap f^{-1}(R) \\ - -f^{-1}(L \setminus R) ~&=~ f^{-1}(L) \setminus f^{-1}(R) \\ - -f^{-1}(L \triangle R) ~&=~ f^{-1}(L) \triangle f^{-1}(R) \\ - -\end{alignat} - -In words, preimages distribute over unions, intersections, set subtraction, and symmetric difference. - -Images only preserve unions - -Images of unions are well-behaved: - -\begin{alignat}{4} - -f(L \cup R) ~&=~ f(L) \cup f(R) \\ - -\end{alignat} - -but images of the other basic set operations are not since only the following are guaranteed in general: - -\begin{alignat}{4} - -f(L \cap R) ~&\subseteq~ f(L) \cap f(R) \\ - -f(L \setminus R) ~&\supseteq~ f(L) \setminus f(R) \\ - -f(L \triangle R) ~&\supseteq~ f(L) \triangle f(R) \\ - -\end{alignat} - -In words, images distribute over unions but not necessarily over intersections, set subtraction, or symmetric difference. - -In general, equality is not guaranteed for images of set subtraction $L \setminus R$ nor for images of the other two elementary set operators that can be defined as the difference of two sets: - -L \cap R = L \setminus (L \setminus R) \quad \text{ and } \quad L \triangle R = (L \cup R) \setminus (L \cap R). - -If $L = X$ then $f(X \setminus R) \supseteq f(X) \setminus f(R)$ where, as in the more general case, equality is not guaranteed.
If $f$ is surjective then $f(X \setminus R) ~\supseteq~ Y \setminus f(R),$ which can be rewritten as: $f\left(R^{\operatorname{C}}\right) ~\supseteq~ f(R)^{\operatorname{C}}$ if $R^{\operatorname{C}} := X \setminus R$ and $f(R)^{\operatorname{C}} := Y \setminus f(R).$ - -[Figure omitted: a picture showing $f$ failing to distribute over set intersection, $f\left(A_1 \cap A_2\right) \subsetneq f\left(A_1\right) \cap f\left(A_2\right),$ where $f : \R \to \R$ is defined by $x \mapsto x^2$ and $A_1 = [-4, 2],$ $A_2 = [-2, 4],$ and $A_1 \cap A_2 = [-2, 2].$] - -Examples will be given demonstrating that the set containments - -\begin{alignat}{4} - -f(L \cap R) ~&\subseteq~ f(L) \cap f(R) \\ - -f(L \setminus R) ~&\supseteq~ f(L) \setminus f(R) \\ - -f(X \setminus R) ~&\supseteq~ f(X) \setminus f(R) \\ - -f(L \triangle R) ~&\supseteq~ f(L) \triangle f(R) \\ - -\end{alignat} - -might all be strict/proper; that is, equality need not hold. - -Specifically, the example below shows that these equalities could fail for any constant function whose domain contains at least two (distinct) points. - -For instance, all four containments are proper if $f : \{1, 2\} \to Y$ is constant, $L = \{1\}, \text{ and } R = \{2\}.$ - -Thus equality is not guaranteed for even the simplest of functions. - -Example: Let $f : X \to Y$ be any constant function with image $f(X) = \{y\}$ and suppose that $L, R \subseteq X$ are non-empty disjoint subsets; that is, $L \neq \varnothing, R \neq \varnothing,$ and $L \cap R = \varnothing,$ which implies that all of the following sets are non-empty (and so their images under $f$ are all equal to $\{y\}$): $L ~\triangle~ R = L \cup R,$ $L \setminus R = L,$ and $X \setminus R \supseteq L \setminus R.$
    1. The containment $~f(L \setminus R) ~\supseteq~ f(L) \setminus f(R)~$ is strict: - -\{y\} ~=~ f(L \setminus R) ~\neq~ f(L) \setminus f(R) ~=~ \{y\} \setminus \{y\} ~=~ \varnothing - -In words: functions might not distribute over set subtraction $\setminus$
    2. The containment $~f(X \setminus R) ~\supseteq~ f(X) \setminus f(R)~$ is strict: - -\{y\} ~=~ f(X \setminus R) ~\neq~ f(X) \setminus f(R) ~=~ \{y\} \setminus \{y\} ~=~ \varnothing.
    3. The containment $~f(L ~\triangle~ R) ~\supseteq~ f(L) ~\triangle~ f(R)~$ is strict: - -\{y\} ~=~ f\left(L ~\triangle~ R\right) ~\neq~ f(L) ~\triangle~ f(R) ~=~ \{y\} \triangle \{y\} ~=~ \varnothing - -In words: functions might not distribute over symmetric difference $\triangle$ (which can be defined as the set subtraction of two sets: $L \triangle R = (L \cup R) \setminus (L \cap R)$).
    4. The containment $~f(L \cap R) ~\subseteq~ f(L) \cap f(R)~$ is strict: - -\varnothing ~=~ f(\varnothing) ~=~ f(L \cap R) ~\neq~ f(L) \cap f(R) ~=~ \{y\} \cap \{y\} ~=~ \{y\} - -In words: functions might not distribute over set intersection $\cap$ (which can be defined as the set subtraction of two sets: $L \cap R = L \setminus (L \setminus R)$).
    - -What the set operations in these four examples have in common is that they either are set subtraction $\setminus$ (examples (1) and (2)) or else they can naturally be defined as the set subtraction of two sets (examples (3) and (4)). - -Mnemonic: In fact, for each of the above four set formulas for which equality is not guaranteed, the direction of the containment (that is, whether to use $\subseteq \text{ or } \supseteq$) can always be deduced by imagining the function $f$ as being constant and the two sets ($L \text{ and } R$) as being non-empty disjoint subsets of its domain. This is because every equality fails for such a function and sets: one side will always be $\varnothing$ and the other non-empty; from this fact, the correct choice of $\subseteq \text{ or } \supseteq$ can be deduced by answering: "which side is empty?" For example, to decide if the $?$ in - -f(L \triangle R) \setminus f(R) ~~?~~ f((L \triangle R) \setminus R) - -should be $\subseteq \text{ or } \supseteq,$ pretend that $f$ is constant and that $L \triangle R \text{ and } R$ are non-empty disjoint subsets of $f$'s domain; then the left hand side would be empty (since $f(L \triangle R) \setminus f(R) = \{f\text{'s single value}\} \setminus \{f\text{'s single value}\} = \varnothing$), which indicates that $?$ should be $\subseteq$ (the resulting statement is always guaranteed to be true) because this is the choice that will make - -\varnothing = \text{left hand side} ~~?~~ \text{right hand side} - -true. - -Alternatively, the correct direction of containment can also be deduced by consideration of any constant $f : \{1, 2\} \to Y$ with $L = \{1\} \text{ and } R = \{2\}.$ - -Furthermore, this mnemonic can also be used to correctly deduce whether or not equalities such as $f(L \cap R) = f(L) \cap f(R)$ or $f^{-1}(L \cap R) = f^{-1}(L) \cap f^{-1}(R)$ hold in general (although $\cap$ was used here, it can be replaced by $\cup, \setminus, \text{ or } \triangle$). The answer to such a question can, as before, be deduced by consideration of this constant function: the answer for the general case (that is, for arbitrary $f, L, \text{ and } R$) is always the same as the answer for this choice of (constant) function and disjoint non-empty sets. - -Characterizations of when equality holds for all sets: - -For any function $f : X \to Y,$ the following statements are equivalent:
    1. $f : X \to Y$ is injective. - -* This means: $f(x) \neq f(y)$ for all distinct $x, y \in X.$
    2. $f(L \cap R) = f(L) \cap f(R) \text{ for all } L, R \subseteq X.$ (The equals sign $=$ can be replaced with $\supseteq$).
    3. $f(L \setminus R) = f(L) \setminus f(R) \text{ for all } L, R \subseteq X.$ (The equals sign $=$ can be replaced with $\subseteq$).
    4. $f(X \setminus R) = f(X) \setminus f(R) \text{ for all } ~~~~~R \subseteq X.$ (The equals sign $=$ can be replaced with $\subseteq$).
    5. $f(L \triangle R) = f(L) \triangle f(R) \text{ for all } L, R \subseteq X.$ (The equals sign $=$ can be replaced with $\subseteq$).
    6. Any one of the four statements (2) - (5) but with the words "for all" replaced with any one of the following:
      1. "for all singleton subsets" - -* In particular, the statement that results from (4) gives a characterization of injectivity that explicitly involves only one point (rather than two): $f$ is injective if and only if $f(x) \not\in f(X \setminus \{x\}) \text{ for every } x \in X.$
      2. "for all disjoint singleton subsets" - -* For statement (4), this is the same as: "for all singleton subsets" (because the definition of "pairwise disjoint" is satisfied vacuously by any family that consists of exactly 1 set).
      3. "for all disjoint subsets"
    - -In particular, if a map is not known to be injective then barring additional information, there is no guarantee that any of the equalities in statements (2) - (5) hold. - -An example above can be used to help prove this characterization. Indeed, comparison of that example with such a proof suggests that the example is representative of the fundamental reason why one of these four equalities in statements (2) - (5) might not hold (that is, representative of "what goes wrong" when a set equality does not hold). - -f(L \cap R) ~\subseteq~ f(L) \cap f(R) \qquad\qquad \text{ always holds} - -Characterizations of equality: The following statements are equivalent:
    1. $f(L \cap R) ~=~ f(L) \cap f(R)$
    2. $f(L \cap R) ~\supseteq~ f(L) \cap f(R)$
    3. $f(L) \setminus f(L \cap R) ~\subseteq~ f(L) \setminus f(R)$
    4. $f(R) \setminus f(L \cap R) ~\subseteq~ f(R) \setminus f(L)$
    5. $f(L \cup R) \setminus f(L \cap R) ~\subseteq~ f(L) \triangle f(R)$
    6. $L \cap f^{-1}(f(R)) ~\subseteq~ L \cap f^{-1}(f(L \cap R))$
    7. $R \cap f^{-1}(f(L)) ~\subseteq~ R \cap f^{-1}(f(L \cap R))$
    8. Any of the above five conditions (3) - (7) but with the subset symbol $\subseteq$ replaced with an equals sign $=.$
    9. $L \cap f^{-1}(f(R)) ~\subseteq~ f^{-1}(f(L \cap R))$ - -* Because $L \cap f^{-1}(f(R)) ~\subseteq~ f^{-1}(f(L))$ is always true, so is $L \cap f^{-1}(f(R)) = L \cap f^{-1}(f(L) \cap f(R))$
    10. $R \cap f^{-1}(f(L)) ~\subseteq~ f^{-1}(f(L \cap R))$
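The equivalence of, for instance, conditions (1) and (6) in the list above can be spot-checked at random. A Python sketch (not part of the original article; `img` and `pre` are ad-hoc helper names for image and preimage):

```python
# Randomized check that conditions (1) and (6) above track each other:
#   (1)  f(L ∩ R) = f(L) ∩ f(R)
#   (6)  L ∩ f⁻¹(f(R)) ⊆ L ∩ f⁻¹(f(L ∩ R))
import random

def img(f, S):  return {f[x] for x in S if x in f}
def pre(f, S):  return {x for x in f if f[x] in S}

X, Y = range(5), "abc"
for _ in range(2000):
    f = {x: random.choice(Y) for x in X}
    L = {x for x in X if random.random() < 0.5}
    R = {x for x in X if random.random() < 0.5}
    cond1 = img(f, L & R) == img(f, L) & img(f, R)
    cond6 = (L & pre(f, img(f, R))) <= (L & pre(f, img(f, L & R)))
    assert cond1 == cond6
```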
    - -Sufficient conditions for equality: Equality holds if any of the following are true: - -
    1. $f$ is injective.
    2. The restriction $f\big\vert_{L \cup R}$ is injective.
    3. $f^{-1}(f(R)) ~\subseteq~ R$ or equivalently, $R \cap \operatorname{domain} f ~=~ f^{-1}(f(R))$
    4. $R$ is $f$-saturated; that is, $R = f^{-1}(f(R)).$ - -The image of a set subtraction can be described exactly: for any function $f : X \to Y$ and any sets $L \text{ and } R,$ - -\begin{alignat}{4} - -f(L \setminus R) - -&= Y ~~~ \setminus \left\{ y \in Y ~~~~~~~~~~ ~:~ L \cap f^{-1}(y) \subseteq R \right\} \\[0.4ex] - -&= f(L) \setminus \left\{ y \in f(L)~~~~~~~ ~:~ L \cap f^{-1}(y) \subseteq R \right\} \\[0.4ex] - -&= f(L) \setminus \left\{ y \in f(L \cap R) ~:~ L \cap f^{-1}(y) \subseteq R \right\} \\[0.4ex] - -&= f(L) \setminus \left\{ y \in V~~~~~~~~~~~~ ~:~ L \cap f^{-1}(y) \subseteq R \right\} \qquad && \text{ for any superset } \quad V \supseteq f(L \cap R) \\[0.4ex] - -&= f(S) \setminus \left\{ y \in f(S)~~~~~~~ ~:~ L \cap f^{-1}(y) \subseteq R \right\} \qquad && \text{ for any superset } \quad S \supseteq L \cap X. \\[0.7ex] - -\end{alignat} - -Taking $L := X = \operatorname{domain} f$ in the above formulas gives: - -\begin{alignat}{4} - -f(X \setminus R) - -&= Y ~~~ \setminus \left\{ y \in Y ~~~~ :~ f^{-1}(y) \subseteq R \right\} \\[0.4ex] - -&= f(X) \setminus \left\{ y \in f(X) ~ :~ f^{-1}(y) \subseteq R \right\} \\[0.4ex] - -&= f(X) \setminus \left\{ y \in f(R) ~ :~ f^{-1}(y) \subseteq R \right\} \\[0.4ex] - -&= \operatorname{Im} f \setminus \left\{ y \in W ~~~ :~ f^{-1}(y) \subseteq R \right\} \qquad \text{ for any superset } \quad W \supseteq f(R) \\[0.4ex] - -\end{alignat} - -where the set $\left\{ y \in f(R) : f^{-1}(y) \subseteq R \right\}$ is equal to the image under $f$ of the largest $f$-saturated subset of $R.$
      • In general, only $f(X \setminus R) \supseteq f(X) \setminus f(R)$ always holds and equality is not guaranteed; but replacing "$f(R)$" with "$\left\{ y \in f(R) : f^{-1}(y) \subseteq R \right\}$" results in a formula in which equality is always guaranteed: - -f(X \setminus R) = f(X) \setminus \left\{ y \in f(R) : f^{-1}(y) \subseteq R \right\}. - -From this it follows that: - -f(X \setminus R) = f(X) \setminus f(R) \quad \text{ if and only if } \quad f(R) = \left\{ y \in f(R) : f^{-1}(y) \subseteq R \right\} \quad \text{ if and only if } \quad f^{-1}(f(R)) \subseteq R.
      • If $f_R := \left\{ y \in f(X) : f^{-1}(y) \subseteq R \right\}$ then $f(X \setminus R) = f(X) \setminus f_R,$ which can be written more symmetrically as $f(X \setminus R) = f_X \setminus f_R$ (since $f_X = f(X)$).
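A randomized Python sketch (not part of the original article) contrasting the exact formula in the first bullet with the inclusion-only version:

```python
# Check  f(X \ R) = f(X) \ { y in f(R) : f⁻¹(y) ⊆ R }  (always an equality)
# versus f(X \ R) ⊇ f(X) \ f(R)                        (only an inclusion).
import random

X, Y = range(8), "abc"
for _ in range(1000):
    f = {x: random.choice(Y) for x in X}
    R = {x for x in X if random.random() < 0.5}
    img = lambda S: {f[x] for x in S}
    fX, fR = img(X), img(R)
    f_R = {y for y in fR if {x for x in X if f[x] == y} <= R}
    assert img(set(X) - R) == fX - f_R            # exact formula
    assert img(set(X) - R) >= fX - fR             # mere inclusion
```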
      - -It follows from $L \triangle R = (L \cup R) \setminus (L \cap R)$ and the above formulas for the image of a set subtraction that for any function $f : X \to Y$ and any sets $L \text{ and } R,$ - -\begin{alignat}{4} - -f(L \triangle R) - -&= Y ~~~~~~~~~ \setminus \left\{ y \in Y ~~~~~~~~~~ ~:~ L \cap f^{-1}(y) = R \cap f^{-1}(y)\right\} \\[0.4ex] - -&= f(L \cup R) \setminus \left\{ y \in f(L \cup R) ~:~ L \cap f^{-1}(y) = R \cap f^{-1}(y)\right\} \\[0.4ex] - -&= f(L \cup R) \setminus \left\{ y \in f(L \cap R) ~:~ L \cap f^{-1}(y) = R \cap f^{-1}(y)\right\} \\[0.4ex] - -&= f(L \cup R) \setminus \left\{ y \in V ~~~~~~~~~~~~ ~:~ L \cap f^{-1}(y) = R \cap f^{-1}(y)\right\} \qquad && \text{ for any superset } \quad V \supseteq f(L \cap R) \\[0.4ex] - -&= f(S) ~~~~~~ \setminus \left\{ y \in f(S) ~~~~~~ ~:~ L \cap f^{-1}(y) = R \cap f^{-1}(y)\right\} \qquad && \text{ for any superset } \quad S \supseteq (L \cup R) \cap X. \\[0.7ex] - -\end{alignat} - -It follows from the above formulas for the image of a set subtraction that for any function $f : X \to Y$ and any set $L,$\begin{alignat}{4} - -f(L) - -&= Y ~~~ \setminus \left\{ y \in Y ~~~ ~:~ f^{-1}(y) \cap L = \varnothing \right\} \\[0.4ex] - -&= \operatorname{Im} f \setminus \left\{ y \in \operatorname{Im} f ~:~ f^{-1}(y) \cap L = \varnothing \right\} \\[0.4ex] - -&= W ~~~ \setminus \left\{ y \in W ~~ ~:~ f^{-1}(y) \cap L = \varnothing \right\} \qquad \text{ for any superset } \quad W \supseteq f(L) \\[0.7ex] - -\end{alignat} - -It follows from the above formulas for the image of a set that for any function $f : X \to Y$ and any sets $L \text{ and } R,$ - -\begin{alignat}{4} - -f(L \cap R) - -&= Y~~~~~ \setminus \left\{ y \in Y~~~~~ ~:~ L \cap R \cap f^{-1}(y) = \varnothing \right\} && \\[0.4ex] - -&= f(L) \setminus \left\{ y \in f(L) ~:~ L \cap R \cap f^{-1}(y) = \varnothing \right\} && \\[0.4ex] - -&= f(L) \setminus \left\{ y \in U~~~~~ ~:~ L \cap R \cap f^{-1}(y) = \varnothing \right\} \qquad && \text{ for any superset } \quad U \supseteq f(L) \\[0.4ex] - -&= f(R) \setminus \left\{ y \in f(R) ~:~ L \cap R \cap f^{-1}(y) = \varnothing \right\} && \\[0.4ex] - -&= f(R) \setminus \left\{ y \in V~~~~~ ~:~ L \cap R \cap f^{-1}(y) = \varnothing \right\} \qquad && \text{ for any superset } \quad V \supseteq f(R) \\[0.7ex] - -\end{alignat} - -where moreover, for any $y \in Y,$ -$$ -L \cap f^{-1}(y) \subseteq L \setminus R~\quad -$$ if and only if $\quad~L \cap R \cap f^{-1}(y) = \varnothing~\quad$ if and only if $\quad~R \cap f^{-1}(y) \subseteq R \setminus L.$ - -The sets $U$ and $V$ mentioned above could, in particular, be any of the sets $f(L \cup R), \operatorname{Im} f, \text{ or } Y,$ for example. - -Let $L$ and $R$ be arbitrary sets, $f : X \to Y$ be any map, and let $A \subseteq X$ and $C \subseteq Y$. 
- -\begin{alignat}{4} - -f^{-1}(f(L) \setminus f(L \setminus R)) - -&=&& f^{-1}\left(\left\{y \in f(L \cap R) ~:~ L \cap f^{-1}(y) \subseteq R\right\}\right) \\ - -&=&& \left\{x \in f^{-1}(f(L \cap R)) ~:~ L \cap f^{-1}(f(x)) \subseteq R\right\} \\ - -\end{alignat} - -where using $L := X$ gives - -\begin{alignat}{4} - -f^{-1}(Y \setminus f(X \setminus R)) - -&~=~&& f^{-1}(f(X) \setminus f(X \setminus R)) \\ - -&=&& f^{-1}\left(\left\{y \in f(R) ~:~ f^{-1}(y) \subseteq R\right\}\right) \\ - -&=&& \left\{r \in R \cap X ~:~ f^{-1}(f(r)) \subseteq R\right\} \\ - -&\subseteq&& R \\ - -\end{alignat} - -\begin{alignat}{4} - -f^{-1}(Y \setminus f(L \setminus R)) - -&~=~&& f^{-1}(f(X) \setminus f(L \setminus R)) \\ - -&=&& f^{-1}\left(\left\{y \in f(X) ~:~ L \cap f^{-1}(y) \subseteq R\right\}\right) \\ - -&=&& \left\{x \in X ~:~ L \cap f^{-1}(f(x)) \subseteq R\right\} \\ - -\end{alignat} - -Images and preimages of unions are always preserved. Inverse images preserve both unions and intersections. It is only images of intersections that are not always preserved. - -If $\left(L_i\right)_{i \in I}$ is a family of arbitrary sets indexed by $I \neq \varnothing$ then: - -\begin{alignat}{4} - -f\left(\bigcap_{i \in I} L_i\right) &~\color{Red}{\subseteq}\color{Black}{}~ \bigcap_{i \in I} f\left(L_i\right) \\ - -f\left(\bigcup_{i \in I} L_i\right) &~=~ \bigcup_{i \in I} f\left(L_i\right) \\ - -f^{-1}\left(\bigcup_{i \in I} L_i\right) &~=~ \bigcup_{i \in I} f^{-1}\left(L_i\right) \\ - -f^{-1}\left(\bigcap_{i \in I} L_i\right) &~=~ \bigcap_{i \in I} f^{-1}\left(L_i\right) \\ - -\end{alignat} - -If all $L_i$ are $f$-saturated then $\bigcap_{i \in I} L_i$ will be $f$-saturated and equality will hold in the first relation above; explicitly, this means: - -{{NumBlk|:|$f\left(\bigcap_{i \in I} L_i\right) ~=~ \bigcap_{i \in I} f\left(L_i\right) \qquad \textit{IF} \qquad X \cap L_i = f^{-1}\left(f\left(L_i\right)\right) \quad \text{ for all } \quad i \in I.$|}} - -If $\left(A_i\right)_{i \in I}$ is a family of arbitrary subsets of $X = \operatorname{domain} f,$ which means that $A_i \subseteq X$ for all $i,$ then the above becomes: - -{{NumBlk|:|$f\left(\bigcap_{i \in I} A_i\right) ~=~ \bigcap_{i \in I} f\left(A_i\right) \qquad \textit{IF} \qquad A_i = f^{-1}\left(f\left(A_i\right)\right) \quad \text{ for all } \quad i \in I.$|}} - -This subsection will discuss the preimage of a subset $B \subseteq \prod_{j \in J} Y_j$ under a map of the form $F ~:~ X ~\to~ \prod_{j \in J} Y_j.$ - -For every $k \in J,$ - -* let $\pi_k ~:~ \prod_{j \in J} Y_j ~\to~ Y_k$ denote the canonical projection onto $Y_k,$ and - -* let $F_k ~:=~ \pi_k \circ F ~:~ X ~\to~ Y_k$ - -so that $F ~=~ \left(F_j\right)_{j \in J},$ which is also the unique map satisfying: $\pi_j \circ F = F_j$ for all $j \in J.$ The map $\left(F_j\right)_{j \in J} ~:~ X ~\to~ \prod_{j \in J} Y_j$ should not be confused with the Cartesian product $\prod_{j \in J} F_j$ of these maps, which is by definition the map -$$ -\prod_{j \in J} F_j ~:~ \prod_{j \in J} X ~\to~ \prod_{j \in J} Y_j -$$ defined by sending $\left(x_j\right)_{j \in J} \in \prod_{j \in J} X \quad \text{ to } \quad \left(F_j\left(x_j\right)\right)_{j \in J}.$ - -If $F = \left(F_j\right)_{j \in J} ~:~ X ~\to~ \prod_{j \in J} Y_j \quad \text{ and } \quad B ~\subseteq~ \prod_{j \in J} Y_j \quad \text{ then }$ - -F^{-1}(B) ~~\color{Red}{\subseteq}\color{Black}{}~~ \bigcap_{j \in J} F_j^{-1}\left(\pi_j(B)\right).
- -If $B = \prod_{j \in J} \pi_j(B)$ then equality will hold: - -{{NumBlk|:|$F^{-1}(B) ~=~ \bigcap_{j \in J} F_j^{-1}\left(\pi_j(B)\right).$|}} - -For equality to hold, it suffices for there to exist a family $\left(B_j\right)_{j \in J}$ of subsets $B_j \subseteq Y_j$ such that $B = \prod_{j \in J} B_j,$ in which case: - -{{NumBlk|:|$F^{-1}\left(\prod_{j \in J} B_j\right) ~=~ \bigcap_{j \in J} F_j^{-1}\left(B_j\right)$|}} - -and $\pi_j(B) = B_j$ for all $j \in J.$ - -Sequences of sets often arise in measure theory. - -Throughout, $S \text{ and } T$ will be arbitrary sets and $S_{\bull}$ will denote a net or a sequence of sets, where if it is a sequence then this will be indicated by either of the notations - -S_{\bull} = \left(S_i\right)_{i=1}^{\infty} \qquad \text{ or } \qquad S_{\bull} = \left(S_i\right)_{i \in \N} - -where $\N$ denotes the natural numbers. - -A notation $S_{\bull} = \left(S_i\right)_{i \in I}$ indicates that $S_{\bull}$ is a net directed by $(I, \leq),$ which (by definition) is a sequence if the set $I,$ which is called the net's indexing set, is the natural numbers (that is, if $I = \N$) and $\leq$ is the natural order on $\N.$ - -Disjoint and monotone sequences - -If $S_i \cap S_j = \varnothing$ for all distinct indices $i \neq j$ then $S_{\bull}$ is called pairwise disjoint or simply disjoint. - -A sequence or net $S_{\bull}$ of sets is called increasing or non-decreasing (resp. decreasing or non-increasing) if for all indices $i \leq j,$ $S_i \subseteq S_j$ (resp. $S_i \supseteq S_j$). - -A sequence or net $S_{\bull}$ of sets is called strictly increasing (resp. strictly decreasing) if it is non-decreasing (resp. is non-increasing) and also $S_i \neq S_j$ for all distinct indices $i \text{ and } j.$ - -It is called monotone if it is non-decreasing or non-increasing and it is called strictly monotone if it is strictly increasing or strictly decreasing. - -A sequence or net $S_{\bull}$ is said to increase to $S,$ denoted by $S_{\bull} \uparrow S$ or $S_{\bull} \nearrow S,$ if $S_{\bull}$ is increasing and the union of all $S_i$ is $S;$ that is, if \bigcup_{n} S_n = S \qquad \text{ and } \qquad S_i \subseteq S_j \quad \text{ whenever } i \leq j. - -It is said to decrease to $S,$ denoted by $S_{\bull} \downarrow S$ or $S_{\bull} \searrow S,$ if $S_{\bull}$ is decreasing and the intersection of all $S_i$ is $S;$ that is, if \bigcap_{n} S_n = S \qquad \text{ and } \qquad S_i \supseteq S_j \quad \text{ whenever } i \leq j. - -Suppose that $L$ is any set such that $L \supseteq R_i$ for every index $i.$ - -If $R_{\bull}$ decreases to $R$ then $L \setminus R_{\bull} := \left(L \setminus R_i\right)_i$ increases to $L \setminus R$ - -whereas if instead $R_{\bull}$ increases to $R$ then $L \setminus R_{\bull}$ decreases to $L \setminus R.$ - -If $L \text{ and } R$ are arbitrary sets and if $L_{\bull} = \left(L_i\right)_i$ increases (resp. decreases) to $L$ then $\left(L_i \setminus R\right)_i$ increases (resp. decreases) to $L \setminus R.$ - -Suppose that $S_{\bull} = \left(S_i\right)_{i = 1}^{\infty}$ is any sequence of sets, that $S \subseteq \bigcup_i S_i$ is any subset, and for every index $i,$ let $D_i := \left(S_i \cap S\right) \setminus \bigcup_{m=1}^{i-1} \left(S_m \cap S\right).$ - -Then $S = \bigcup_i D_i$ and $D_{\bull} := \left(D_i\right)_{i=1}^{\infty}$ is a sequence of pairwise disjoint sets.
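The disjointification just described is easy to implement directly. A Python sketch (not part of the original article; `disjointify` is an illustrative name):

```python
# Given a sequence (S_i) and a subset S of their union, build pairwise
# disjoint sets D_i = (S_i ∩ S) \ ∪_{m<i}(S_m ∩ S) with ∪ D_i = S.
def disjointify(seq, S):
    out, seen = [], set()
    for Si in seq:
        out.append((Si & S) - seen)
        seen |= Si & S
    return out

seq = [{1, 2, 3}, {2, 4}, {3, 4, 5}]
S = {2, 3, 4, 5}
D = disjointify(seq, S)
print(D)                                  # [{2, 3}, {4}, {5}]
assert set().union(*D) == S
assert all(D[i].isdisjoint(D[j]) for i in range(3) for j in range(i))
```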
- -Suppose that $S_{\bull} = \left(S_i\right)_{i = 1}^{\infty}$ is non-decreasing, let $S_0 := \varnothing,$ and let $D_i := S_i \setminus S_{i-1}$ for every $i = 1, 2, \ldots.$ Then $\bigcup_i S_i = \bigcup_i D_i$ and $D_{\bull} := \left(D_i\right)_{i=1}^{\infty}$ is a sequence of pairwise disjoint sets. - -A family of sets or simply a family is a set whose elements are sets. - -A family over $X$ is a family of subsets of $X.$ - -The power set of a set $X$ is the set of all subsets of $X$: - -\wp(X) ~\colon=~ \{ S ~:~ S \subseteq X \}. - -If $\mathcal{L} \text{ and } \mathcal{R}$ are families of sets and if $S$ is any set then define: - -\mathcal{L} (\cup) \mathcal{R} ~\colon=~ \{~ L \cup R ~:~ L \in \mathcal{L} ~\text{ and }~ R \in \mathcal{R} ~\} - -\mathcal{L} (\cap) \mathcal{R} ~\colon=~ \{~ L \cap R ~:~ L \in \mathcal{L} ~\text{ and }~ R \in \mathcal{R} ~\} - -\mathcal{L} (\setminus) \mathcal{R} ~\colon=~ \{~ L \setminus R ~:~ L \in \mathcal{L} ~\text{ and }~ R \in \mathcal{R} ~\} - -\mathcal{L} (\triangle) \mathcal{R} ~\colon=~ \{~ L \triangle R ~:~ L \in \mathcal{L} ~\text{ and }~ R \in \mathcal{R} ~\} - -\mathcal{L}\big\vert_S ~\colon=~ \{ L \cap S ~:~ L \in \mathcal{L} \} = \mathcal{L} (\cap) \{S\} - -which are respectively called elementwise union, elementwise intersection, elementwise (set) difference, elementwise symmetric difference, and the trace/restriction of $\mathcal{L}$ to $S.$ The regular union, intersection, symmetric difference, and set difference are all defined as usual and are denoted with their usual notation: $\mathcal{L} \cup \mathcal{R}, \mathcal{L} \cap \mathcal{R}, \mathcal{L} \triangle \mathcal{R},$ and $\mathcal{L} \setminus \mathcal{R},$ respectively. - -These elementwise operations on families of sets play an important role in, among other subjects, the theory of filters and prefilters on sets. - -The upward closure in $X$ of a family $\mathcal{L} \subseteq \wp(X)$ is the family: - -\mathcal{L}^{\uparrow X} ~\colon=~ \bigcup_{L \in \mathcal{L}} \{ S ~:~ L \subseteq S \subseteq X \} ~=~ \{ S \subseteq X ~:~ \text{ there exists } L \in \mathcal{L} \text{ such that } L \subseteq S \} - -and the downward closure of $\mathcal{L}$ is the family: - -\mathcal{L}^{\downarrow} ~\colon=~ \bigcup_{L \in \mathcal{L}} \wp(L) ~=~ \{ S ~:~ \text{ there exists } L \in \mathcal{L} \text{ such that } S \subseteq L \}. - -Categories of families of sets - -A family $\mathcal{L}$ is called isotone, ascending, or upward closed in $X$ if $\mathcal{L} \subseteq \wp(X)$ and $\mathcal{L} = \mathcal{L}^{\uparrow X}.$ - -A family $\mathcal{L}$ is called downward closed if $\mathcal{L} = \mathcal{L}^{\downarrow}.$ - -A family $\mathcal{L}$ is said to be: - -* closed under finite intersections (resp. closed under finite unions) if whenever $L, R \in \mathcal{L}$ then $L \cap R \in \mathcal{L}$ (respectively, $L \cup R \in \mathcal{L}$). - -* closed under countable intersections (resp. closed under countable unions) if whenever $L_1, L_2, L_3, \ldots$ are elements of $\mathcal{L}$ then so is their intersection $\bigcap_{i=1}^{\infty} L_i := L_1 \cap L_2 \cap L_3 \cap \cdots$ (resp. so is their union $\bigcup_{i=1}^{\infty} L_i := L_1 \cup L_2 \cup L_3 \cup \cdots$). - -* closed under complementation in (or with respect to) $X$ if whenever $L \in \mathcal{L}$ then $X \setminus L \in \mathcal{L}.$ - -A family $\mathcal{L}$ of sets is called a/an: - -* pi−system if $\mathcal{L} \neq \varnothing$ and $\mathcal{L}$ is closed under finite intersections.
- -** Every non-empty family $\mathcal{L}$ is contained in a unique smallest (with respect to $\subseteq$) pi−system that is denoted by $\pi(\mathcal{L})$ and called the pi−system generated by $\mathcal{L}.$ - -* filter subbase and is said to have the finite intersection property if $\mathcal{L} \neq \varnothing$ and $\varnothing \not\in \pi(\mathcal{L}).$ - -* filter on $X$ if $\mathcal{L}$ is a non-empty family of subsets of $X$ that is a pi−system, is upward closed in $X,$ and is also proper, which by definition means that it does not contain the empty set as an element. - -* prefilter or filter base if it is a non-empty family of subsets of some set $X$ whose upward closure in $X$ is a filter on $X.$ - -* algebra on $X$ if it is a non-empty family of subsets of $X$ that contains the empty set, forms a pi−system, and is also closed under complementation with respect to $X.$ - -* σ-algebra on $X$ if it is an algebra on $X$ that is closed under countable unions (or equivalently, closed under countable intersections). - -Let $\mathcal{L}, \mathcal{M},$ and $\mathcal{R}$ be families of sets over $X.$ - -On the left hand sides of the following identities, $\mathcal{L}$ is the Leftmost family, $\mathcal{M}$ is in the Middle, and $\mathcal{R}$ is the Rightmost family. - -Commutativity: - -\mathcal{L} (\cup) \mathcal{R} = \mathcal{R} (\cup) \mathcal{L} - -\mathcal{L} (\cap) \mathcal{R} = \mathcal{R} (\cap) \mathcal{L} - -Associativity: - -[\mathcal{L} (\cup) \mathcal{M}] (\cup) \mathcal{R} = \mathcal{L} (\cup) [\mathcal{M} (\cup) \mathcal{R}] - -[\mathcal{L} (\cap) \mathcal{M}] (\cap) \mathcal{R} = \mathcal{L} (\cap) [\mathcal{M} (\cap) \mathcal{R}] - -Identity: - -\mathcal{L} (\cup) \{\varnothing\} = \mathcal{L} - -\mathcal{L} (\cap) \{X\} = \mathcal{L} - -\mathcal{L} (\setminus) \{\varnothing\} = \mathcal{L} - -Domination: - -\mathcal{L} (\cup) \{X\} = \{X\} ~~~~\text{ if } \mathcal{L} \neq \varnothing - -\mathcal{L} (\cap) \{\varnothing\} = \{\varnothing\} ~~~~\text{ if } \mathcal{L} \neq \varnothing - -\mathcal{L} (\cup) \varnothing = \varnothing - -\mathcal{L} (\cap) \varnothing = \varnothing - -\mathcal{L} (\setminus) \varnothing = \varnothing - -\varnothing (\setminus) \mathcal{R} = \varnothing diff --git a/wiki/wikipedia/972.txt b/wiki/wikipedia/972.txt deleted file mode 100644 index 9c765336bc2126854288faf164a525ceb359ee68..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/972.txt +++ /dev/null @@ -1,137 +0,0 @@ -In algebraic geometry, the problem of resolution of singularities asks whether every algebraic variety V has a resolution, a non-singular variety W with a proper birational map W→V. For varieties over fields of characteristic 0 this was proved in Hironaka (1964), while for varieties over fields of characteristic p it is an open problem in dimensions at least 4. - -Originally the problem of resolution of singularities was to find a nonsingular model for the function field of a variety X, in other words a complete non-singular variety X′ with the same function field. In practice it is more convenient to ask for a different condition as follows: a variety X has a resolution of singularities if we can find a non-singular variety X′ and a proper birational map from X′ to X. The condition that the map is proper is needed to exclude trivial solutions, such as taking X′ to be the subvariety of non-singular points of X. - -More generally, it is often useful to resolve the singularities of a variety X embedded into a larger variety W.
Suppose we have a closed embedding of X into a regular variety W. A strong desingularization of X is given by a proper birational morphism from a regular variety W′ to W subject to some of the following conditions (the exact choice of conditions depends on the author): - -# The strict transform X′ of X is regular, and transverse to the exceptional locus of the resolution morphism (so in particular it resolves the singularities of X). - -# The map from the strict transform of X to X is an isomorphism away from the singular points of X. - -# W′ is constructed by repeatedly blowing up regular closed subvarieties of W or, more strongly, regular subvarieties of X, transverse to the exceptional locus of the previous blowings up. - -# The construction of W′ is functorial for smooth morphisms to W and embeddings of W into a larger variety. (It cannot be made functorial for all (not necessarily smooth) morphisms in any reasonable way.) - -# The morphism from X′ to X does not depend on the embedding of X in W. Or in general, the sequence of blowings up is functorial with respect to smooth morphisms. - -Hironaka showed that there is a strong desingularization satisfying the first three conditions above whenever X is defined over a field of characteristic 0, and his construction was improved by several authors (see below) so that it satisfies all conditions above. - -Every algebraic curve has a unique nonsingular projective model, which means that all resolution methods are essentially the same because they all construct this model. In higher dimensions this is no longer true: varieties can have many different nonsingular projective models. - -Kollár lists about 20 ways of proving resolution of singularities of curves. - -Resolution of singularities of curves was essentially first proved by Newton, who showed the existence of Puiseux series for a curve from which resolution follows easily. - -Riemann constructed a smooth Riemann surface from the function field of a complex algebraic curve, which gives a resolution of its singularities. This can be done over more general fields by using the set of discrete valuation rings of the field as a substitute for the Riemann surface. - -Albanese's method consists of taking a curve that spans a projective space of sufficiently large dimension (more than twice the degree of the curve) and repeatedly projecting down from singular points to projective spaces of smaller dimension. This method extends to higher-dimensional varieties, and shows that any n-dimensional variety has a projective model with singularities of multiplicity at most n!. For a curve, n = 1, and thus there are no singular points. - -Muhly gave a one-step method of resolving singularities of a curve by taking the normalization of the curve. Normalization removes all singularities in codimension 1, so it works for curves but not in higher dimensions. - -Another one-step method of resolving singularities of a curve is to take a space of valuation rings of the function field of the curve. This space can be made into a nonsingular projective curve birational to the original curve. - -Repeatedly blowing up the singular points of a curve will eventually resolve the singularities. The main task with this method is to find a way to measure the complexity of a singularity and to show that blowing up improves this measure. There are many ways to do this. For example, one can use the arithmetic genus of the curve.
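The single-step effect of blowing up is easy to see in the simplest example. Below is a minimal sympy sketch (the library and the chart substitution y = t·x are illustrative assumptions, not anything taken from this article) that resolves the ordinary cusp y^2 = x^3 with one blow-up of the plane at the origin:

```
import sympy as sp

x, y, t = sp.symbols('x y t')
f = y**2 - x**3                      # the cuspidal cubic, singular at the origin

# One affine chart of the blow-up of the plane at the origin: y = t*x.
total = sp.expand(f.subs(y, t*x))    # total transform: t**2*x**2 - x**3
strict = sp.cancel(total / x**2)     # divide out the exceptional factor x**2

# Jacobian criterion: the gradient of the strict transform t**2 - x
# never vanishes, so the blown-up curve is smooth.
print(strict, [sp.diff(strict, v) for v in (x, t)])
```

For the cusp a single chart already exhibits a smooth strict transform; for worse curve singularities the same substitution has to be iterated, which is exactly where a measure of complexity such as the arithmetic genus is needed.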
- -Noether's method takes a plane curve and repeatedly applies quadratic transformations (determined by a singular point and two points in general position). Eventually this produces a plane curve whose only singularities are ordinary multiple points (all tangent lines have multiplicity two). - -Bertini's method is similar to Noether's method. It starts with a plane curve, and repeatedly applies birational transformations to the plane to improve the curve. The birational transformations are more complicated than the quadratic transformations used in Noether's method, but produce the better result that the only singularities are ordinary double points. - -Surfaces have many different nonsingular projective models (unlike the case of curves where the nonsingular projective model is unique). However a surface still has a unique minimal resolution, that all others factor through (all others are resolutions of it). In higher dimensions there need not be a minimal resolution. - -There were several attempts to prove resolution for surfaces over the complex numbers by Del Pezzo, Levi, Severi, Chisini, and Albanese, but Zariski points out that none of these early attempts are complete, and all are vague (or even wrong) at some critical point of the argument. The first rigorous proof was given by Walker, and an algebraic proof for all fields of characteristic 0 was given by Zariski. Abhyankar gave a proof for surfaces of non-zero characteristic. Resolution of singularities has also been shown for all excellent 2-dimensional schemes (including all arithmetic surfaces) by Lipman. - -Zariski's method of resolution of singularities for surfaces is to repeatedly alternate normalizing the surface (which kills codimension 1 singularities) with blowing up points (which makes codimension 2 singularities better, but may introduce new codimension 1 singularities). Although this will resolve the singularities of surfaces by itself, Zariski used a more roundabout method: he first proved a local uniformization theorem showing that every valuation of a surface could be resolved, then used the compactness of the Zariski–Riemann surface to show that it is possible to find a finite set of surfaces such that the center of each valuation is simple on at least one of these surfaces, and finally by studying birational maps between surfaces showed that this finite set of surfaces could be replaced by a single non-singular surface. - -By applying strong embedded resolution for curves, Jung reduces to a surface with only rather special singularities (abelian quotient singularities) which are then dealt with explicitly. The higher-dimensional version of this method is de Jong's method. - -In general the analogue of Albanese's method for curves shows that for any variety one can reduce to singularities of order at most n!, where n is the dimension. For surfaces this reduces to the case of singularities of order 2, which are easy enough to do explicitly. - -Abhyankar proved resolution of singularities for surfaces over a field of any characteristic by proving a local uniformization theorem for valuation rings. The hardest case is valuation rings of rank 1 whose valuation group is a nondiscrete subgroup of the rational numbers. The rest of the proof follows Zariski's method. - -Hironaka's method for arbitrary characteristic varieties gives a resolution method for surfaces, which involves repeatedly blowing up points or smooth curves in the singular set. 
- -Lipman showed that a surface Y (a 2-dimensional reduced Noetherian scheme) has a desingularization if and only if its normalization is finite over Y and analytically normal (the completions of its singular points are normal) and has only finitely many singular points. In particular if Y is excellent then it has a desingularization. - -His method was to consider normal surfaces Z with a birational proper map to Y and show that there is a minimal one with minimal possible arithmetic genus. He then shows that all singularities of this minimal Z are pseudo rational, and shows that pseudo rational singularities can be resolved by repeatedly blowing up points. - -The problem of resolution of singularities in higher dimensions is notorious for many incorrect published proofs and announcements of proofs that never appeared. - -For 3-folds the resolution of singularities was proved in characteristic 0 by Zariski. He first proved a theorem about local uniformization of valuation rings, valid for varieties of any dimension over any field of characteristic 0. He then showed that the Zariski–Riemann space of valuations is quasi-compact (for any variety of any dimension over any field), implying that there is a finite family of models of any projective variety such that any valuation has a smooth center over at least one of these models. The final and hardest part of the proof, which uses the fact that the variety is of dimension 3 but which works for all characteristics, is to show that given two models one can find a third that resolves the singularities that each of the two given models resolve. - -Abhyankar proved resolution of singularities for 3-folds in characteristic greater than 6. The restriction on the characteristic arises because Abhyankar shows that it is possible to resolve any singularity of a 3-fold of multiplicity less than the characteristic, and then uses Albanese's method to show that singularities can be reduced to those of multiplicity at most (dimension)! = 3! = 6. Cutkosky gave a simplified version of Abhyankar's proof. - -Cossart and Piltant proved resolution of singularities of 3-folds in all characteristics, by proving local uniformization in dimension at most 3, and then checking that Zariski's proof that this implies resolution for 3-folds still works in the positive characteristic case. - -Resolution of singularities in characteristic 0 in all dimensions was first proved by Hironaka. He proved that it was possible to resolve singularities of varieties over fields of characteristic 0 by repeatedly blowing up along non-singular subvarieties, using a very complicated argument by induction on the dimension. Simplified versions of his formidable proof were given by several people, including Bierstone, Villamayor, Encinas, Wlodarczyk, and Kollár. Some of the recent proofs are about a tenth of the length of Hironaka's original proof, and are easy enough to give in an introductory graduate course. Expository accounts of the theorem as well as historical discussions of its development can be found in the literature. - -de Jong found a different approach to resolution of singularities, generalizing Jung's method for surfaces, which was used by Bogomolov and by Abramovich to prove resolution of singularities in characteristic 0. De Jong's method gave a weaker result for varieties of all dimensions in characteristic p, which was strong enough to act as a substitute for resolution for many purposes.
- -De Jong proved that for any variety X over a field there is a dominant proper morphism which preserves the dimension from a regular variety onto X. This need not be a birational map, so is not a resolution of singularities, as it may be generically finite to one and so involves a finite extension of the function field of X. De Jong's idea was to try to represent X as a fibration over a smaller space Y with fibers that are curves (this may involve modifying X), then eliminate the singularities of Y by induction on the dimension, then eliminate the singularities in the fibers. - -It is easy to extend the definition of resolution to all schemes. Not all schemes have resolutions of their singularities: Grothendieck showed that if a locally Noetherian scheme X has the property that one can resolve the singularities of any finite integral scheme over X, then X must be quasi-excellent. Grothendieck also suggested that the converse might hold: in other words, if a locally Noetherian scheme X is reduced and quasi-excellent, then it is possible to resolve its singularities. When X is defined over a field of characteristic 0 and is Noetherian, this follows from Hironaka's theorem, and when X has dimension at most 2 it was proved by Lipman. - -Hauser gave a survey of work on the unsolved characteristic p resolution problem. - -There are many constructions of strong desingularization but all of them give essentially the same result. In every case the global object (the variety to be desingularized) is replaced by local data (the ideal sheaf of the variety, those of the exceptional divisors, and some orders that represent how far the ideal should be resolved in that step). With this local data the centers of blowing-up are defined. The centers will be defined locally and therefore it is a problem to guarantee that they will match up into a global center. This can be done by defining what blowings-up are allowed to resolve each ideal. Done appropriately, this will make the centers match automatically. Another way is to define a local invariant depending on the variety and the history of the resolution (the previous local centers) so that the centers consist of the maximum locus of the invariant. The invariant is defined so that making this choice is meaningful, giving smooth centers transversal to the exceptional divisors. - -In either case the problem is reduced to resolving the singularities of the tuple formed by the ideal sheaf and the extra data (the exceptional divisors and the order, d, to which the resolution should go for that ideal). This tuple is called a marked ideal and the set of points at which the order of the ideal is larger than d is called its co-support. The proof that there is a resolution for the marked ideals is done by induction on dimension. The induction breaks into two steps: - -# Functorial desingularization of marked ideals of dimension n - 1 implies functorial desingularization of marked ideals of maximal order of dimension n. - -# Functorial desingularization of marked ideals of maximal order of dimension n implies functorial desingularization of (a general) marked ideal of dimension n. - -Here we say that a marked ideal is of maximal order if at some point of its co-support the order of the ideal is equal to d. - -A key ingredient in the strong resolution is the use of the Hilbert–Samuel function of the local rings of the points in the variety. This is one of the components of the resolution invariant.
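For a hypersurface, the order used above as a measure of singularity is concrete: it is the lowest total degree of a monomial in the defining polynomial. The following is a small sympy sketch (an illustration under that assumption; the library choice is mine, and the examples are picked to match the singularities discussed in the next paragraphs):

```
import sympy as sp

x, y, z = sp.symbols('x y z')

def order_at_origin(f, gens):
    """Lowest total degree of a monomial of f: the largest d such that
    f lies in the d-th power of the maximal ideal at the origin."""
    return min(sum(m) for m in sp.Poly(sp.expand(f), *gens).monoms())

print(order_at_origin(y**2 - x**5, (x, y)))              # 2
print(order_at_origin(x**2 + y**3*z + z**3, (x, y, z)))  # 2
```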
- -The most obvious invariant of a singularity is its multiplicity. However this need not decrease under blowup, so it is necessary to use more subtle invariants to measure the improvement. - -For example, the rhamphoid cusp y^2 = x^5 has a singularity of order 2 at the origin. After blowing up at its singular point it becomes the ordinary cusp y^2 = x^3, which still has multiplicity 2. - -It is clear that the singularity has improved, since the degree of the defining polynomial has decreased. This does not happen in general. - -An example where it does not is given by the isolated singularity of x^2 + y^3z + z^3 = 0 at the origin. Blowing it up gives the singularity x^2 + y^2z + yz^3 = 0. It is not immediately obvious that this new singularity is better, as both singularities have multiplicity 2 and are given by the sum of monomials of degrees 2, 3, and 4. - -A natural idea for improving singularities is to blow up the locus of the "worst" singular points. The Whitney umbrella x^2 = y^2z has singular set the z axis, most of whose points are ordinary double points, but there is a more complicated pinch point singularity at the origin, so blowing up the worst singular points suggests that one should start by blowing up the origin. However blowing up the origin reproduces the same singularity on one of the coordinate charts. So blowing up the (apparently) "worst" singular points does not improve the singularity. Instead the singularity can be resolved by blowing up along the z-axis. - -There are algorithms that work by blowing up the "worst" singular points in some sense, but this example shows that the definition of the "worst" points needs to be quite subtle. - -For more complicated singularities, such as x^2 = y^mz^n which is singular along x = yz = 0, blowing up the worst singularity at the origin produces the singularities x^2 = y^{m+n-2}z^n and x^2 = y^mz^{m+n-2} which are worse than the original singularity if m and n are both at least 3. - -After resolution, the total transform (the union of the strict transform and the exceptional divisors) is a variety with singularities of the simple normal crossings type. It is natural to consider the possibility of resolving singularities without resolving this type of singularity; that is, finding a resolution that is an isomorphism over the set of smooth and simple normal crossing points. When the strict transform is a divisor (i.e., can be embedded as a codimension one subvariety in a smooth variety) it is known that there exists a strong resolution avoiding simple normal crossing points. Whitney's umbrella shows that it is not possible to resolve singularities avoiding blowing up the normal crossings singularities. - -A natural way to resolve singularities is to repeatedly blow up some canonically chosen smooth subvariety. This runs into the following problem. The singular set of x^2 = y^2z^2 is the pair of lines given by the y and z axes. The only reasonable varieties to blow up are the origin, one of these two axes, or the whole singular set (both axes). However the whole singular set cannot be used since it is not smooth, and choosing one of the two axes breaks the symmetry between them so is not canonical. This means we have to start by blowing up the origin, but this reproduces the original singularity, so we seem to be going round in circles.
- -The solution to this problem is that although blowing up the origin does not change the type of the singularity, it does give a subtle improvement: it breaks the symmetry between the two singular axes because one of them is an exceptional divisor for a previous blowup, so it is now permissible to blow up just one of these. However, in order to exploit this the resolution procedure needs to treat these two singularities differently, even though they are locally the same. This is sometimes done by giving the resolution procedure some memory, so the center of the blowup at each step depends not only on the singularity, but on the previous blowups used to produce it. - -Some resolution methods (in characteristic 0) are functorial for all smooth morphisms. - -However it is not possible to find a strong resolution functorial for all (possibly non-smooth) morphisms. An example is given by the map from the affine plane A^2 to the conical singularity x^2 + y^2 = z^2 taking (X,Y) to (2XY, X^2 - Y^2, X^2 + Y^2). The XY-plane is already nonsingular so should not be changed by resolution, and any resolution of the conical singularity factorizes through the minimal resolution given by blowing up the singular point. However the rational map from the XY-plane to this blowup does not extend to a regular map. - -Minimal resolutions (resolutions such that every resolution factors through them) exist in dimensions 1 and 2, but not always in higher dimensions. The Atiyah flop gives an example in 3 dimensions of a singularity with no minimal resolution. - -Let Y be the zeros of xy = zw in A^4, and let V be the blowup of Y at the origin. - -The exceptional locus of this blowup is isomorphic to P^1×P^1, and can be blown down to P^1 in two different ways, giving two small resolutions X_1 and X_2 of Y, neither of which can be blown down any further. - -Kollár gives the following example showing that one cannot expect a sufficiently good resolution procedure to commute with products. If f:A→B is the blowup of the origin of a quadric cone B in affine 3-space, then f×f:A×A→B×B cannot be produced by an étale local resolution procedure, essentially because the exceptional locus has two components that intersect. - -Singularities of toric varieties give examples of high-dimensional singularities that are easy to resolve explicitly. A toric variety is defined by a fan, a collection of cones in a lattice. The singularities can be resolved by subdividing each cone into a union of cones each of which is generated by a basis for the lattice, and taking the corresponding toric variety. - -Construction of a desingularization of a variety X may not produce centers of blowings up that are smooth subvarieties of X. Many constructions of a desingularization of an abstract variety X proceed by locally embedding X in a smooth variety W, considering its ideal in W and computing a canonical desingularization of this ideal. The desingularization of ideals uses the order of the ideal as a measure of how singular the ideal is. The desingularization of the ideal can be made such that one can justify that the local centers patch together to give global centers. This method leads to a proof that is relatively simpler to present, compared to Hironaka's original proof, which uses the Hilbert-Samuel function as the measure of how bad singularities are. For example, the proofs in Villamayor, Encinas, and Kollár use this idea. However, this method only ensures centers of blowings up that are regular in W.
- -The following example shows that this method can produce centers that have non-smooth intersections with the (strict transform of) X. Therefore, the resulting desingularization, when restricted to the abstract variety X, is not obtained by blowing up regular subvarieties of X. - -Let X be the subvariety of four-dimensional affine space, with coordinates x,y,z,w, generated by y^2-x^3 and x^4+xz^2-w^3. The canonical desingularization of the ideal with these generators would blow up the center C_0 given by x=y=z=w=0. The transform of the ideal in the x-chart is generated by x-y^2 and y^2(y^2+z^2-w^3). The next center of blowing up C_1 is given by x=y=0. However, the strict transform of X is X_1, which is generated by x-y^2 and y^2+z^2-w^3. This means that the intersection of C_1 and X_1 is given by x=y=0 and z^2-w^3=0, which is not regular. - -To produce centers of blowings up that are regular subvarieties of X, stronger proofs (Bierstone) use the Hilbert-Samuel function of the local rings of X rather than the order of its ideal in the local embedding in W. - -After the resolution the total transform, the union of the strict transform, X, and the exceptional divisor, is a variety that can be made, at best, to have simple normal crossing singularities. Then it is natural to consider the possibility of resolving singularities without resolving this type of singularities. The problem is to find a resolution that is an isomorphism over the set of smooth and simple normal crossing points. When X is a divisor, i.e. it can be embedded as a codimension-one subvariety in a smooth variety, it is known that a strong resolution avoiding simple normal crossing points exists. The general case, and generalizations avoiding different types of singularities, are still not known. - -Avoiding certain singularities is impossible. For example, one cannot resolve singularities while avoiding blowing up the normal crossings singularities. In fact, to resolve the pinch point singularity the whole singular locus needs to be blown up, including points where normal crossing singularities are present. diff --git a/wiki/wikipedia/973.txt b/wiki/wikipedia/973.txt deleted file mode 100644 index 0f037743aeafbf9befa729d9e6e6fd65803cb8bb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/973.txt +++ /dev/null @@ -1,129 +0,0 @@ -In computer science, the Bron–Kerbosch algorithm is an enumeration algorithm for finding all maximal cliques in an undirected graph. That is, it lists all subsets of vertices with the two properties that each pair of vertices in one of the listed subsets is connected by an edge, and no listed subset can have any additional vertices added to it while preserving its complete connectivity. The Bron–Kerbosch algorithm was designed by Dutch scientists Coenraad Bron and Joep Kerbosch, who published its description in 1973. - -Although other algorithms for solving the clique problem have running times that are, in theory, better on inputs that have few maximal independent sets, the Bron–Kerbosch algorithm and subsequent improvements to it are frequently reported as being more efficient in practice than the alternatives. It is well-known and widely used in application areas of graph algorithms such as computational chemistry. - -A contemporaneous algorithm of Akkoyunlu, although presented in different terms, can be viewed as being the same as the Bron–Kerbosch algorithm, as it generates the same search tree.
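The basic recursion, presented in pseudocode below, transcribes almost line for line into Python. The following sketch (set-based and untuned; the adjacency-dictionary input format is an assumption made here for concreteness) may serve as a reference while reading the pseudocode:

```
# A sketch of the basic (non-pivoting) Bron–Kerbosch recursion.
# `adj` maps each vertex to the set of its neighbors.
def bron_kerbosch(adj, R=None, P=None, X=None):
    if R is None:
        R, P, X = set(), set(adj), set()
    if not P and not X:
        yield set(R)                     # R is a maximal clique
    for v in list(P):
        yield from bron_kerbosch(adj, R | {v}, P & adj[v], X & adj[v])
        P.remove(v)
        X.add(v)

# Example: a triangle 1-2-3 with a pendant edge 3-4.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(sorted(map(sorted, bron_kerbosch(adj))))   # [[1, 2, 3], [3, 4]]
```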
- -The basic form of the Bron–Kerbosch algorithm is a recursive backtracking algorithm that searches for all maximal cliques in a given graph G. More generally, given three disjoint sets of vertices R, P, and X, it finds the maximal cliques that include all of the vertices in R, some of the vertices in P, and none of the vertices in X. In each call to the algorithm, P and X are disjoint sets whose union consists of those vertices that form cliques when added to R. In other words, P ∪ X is the set of vertices which are joined to every element of R. When P and X are both empty there are no further elements that can be added to R, so R is a maximal clique and the algorithm outputs R. - -The recursion is initiated by setting R and X to be the empty set and P to be the vertex set of the graph. Within each recursive call, the algorithm considers the vertices in P in turn; if there are no such vertices, it either reports R as a maximal clique (if X is empty), or backtracks. For each vertex v chosen from P, it makes a recursive call in which v is added to R and in which P and X are restricted to the neighbor set N(v) of v, which finds and reports all clique extensions of R that contain v. Then, it moves v from P to X to exclude it from consideration in future cliques and continues with the next vertex in P. - -That is, in pseudocode, the algorithm performs the following steps: - -algorithm BronKerbosch1(R, P, X) is - -if P and X are both empty then - -report R as a maximal clique - -for each vertex v in P do - -BronKerbosch1(R ⋃ {v}, P ⋂ N(v), X ⋂ N(v)) - -P := P \ {v} - -X := X ⋃ {v} - -The basic form of the algorithm, described above, is inefficient in the case of graphs with many non-maximal cliques: it makes a recursive call for every clique, maximal or not. To save time and allow the algorithm to backtrack more quickly in branches of the search that contain no maximal cliques, Bron and Kerbosch introduced a variant of the algorithm involving a "pivot vertex" u, chosen from P (or more generally, as later investigators realized, from P ⋃ X). Any maximal clique must include either u or one of its non-neighbors, for otherwise the clique could be augmented by adding u to it. Therefore, only u and its non-neighbors need to be tested as the choices for the vertex v that is added to R in each recursive call to the algorithm. In pseudocode: - -algorithm BronKerbosch2(R, P, X) is - -if P and X are both empty then - -report R as a maximal clique - -choose a pivot vertex u in P ⋃ X - -for each vertex v in P \ N(u) do - -BronKerbosch2(R ⋃ {v}, P ⋂ N(v), X ⋂ N(v)) - -P := P \ {v} - -X := X ⋃ {v} - -If the pivot is chosen to minimize the number of recursive calls made by the algorithm, the savings in running time compared to the non-pivoting version of the algorithm can be significant. - -An alternative method for improving the basic form of the Bron–Kerbosch algorithm involves forgoing pivoting at the outermost level of recursion, and instead choosing the ordering of the recursive calls carefully in order to minimize the sizes of the sets P of candidate vertices within each recursive call. - -The degeneracy of a graph G is the smallest number d such that every subgraph of G has a vertex with degree d or less. Every graph has a degeneracy ordering, an ordering of the vertices such that each vertex has d or fewer neighbors that come later in the ordering; a degeneracy ordering may be found in linear time by repeatedly selecting the vertex of minimum degree among the remaining vertices. 
If the order of the vertices v that the Bron–Kerbosch algorithm loops through is a degeneracy ordering, then the set P of candidate vertices in each call (the neighbors of v that are later in the ordering) will be guaranteed to have size at most d. The set X of excluded vertices will consist of all earlier neighbors of v, and may be much larger than d. In recursive calls to the algorithm below the topmost level of the recursion, the pivoting version can still be used. - -In pseudocode, the algorithm performs the following steps: - -algorithm BronKerbosch3(G) is - -P = V(G) - -R = X = empty - -for each vertex v in a degeneracy ordering of G do - -BronKerbosch2({v}, P ⋂ N(v), X ⋂ N(v)) - -P := P \ {v} - -X := X ⋃ {v} - -This variant of the algorithm can be proven to be efficient for graphs of small degeneracy, and experiments show that it also works well in practice for large sparse social networks and other real-world graphs. - -In the example graph shown, the algorithm is initially called with R = Ø, P = {1,2,3,4,5,6}, and X = Ø. The pivot u should be chosen as one of the degree-three vertices, to minimize the number of recursive calls; for instance, suppose that u is chosen to be vertex 2. Then there are three remaining vertices in P \ N(u): vertices 2, 4, and 6. - -The iteration of the inner loop of the algorithm for v = 2 makes a recursive call to the algorithm with R = {2}, P = {1,3,5}, and X = Ø. Within this recursive call, one of 1 or 5 will be chosen as a pivot, and there will be two second-level recursive calls, one for vertex 3 and the other for whichever vertex was not chosen as pivot. These two calls will eventually report the two cliques {1,2,5} and {2,3}. After returning from these recursive calls, vertex 2 is added to X and removed from P. - -The iteration of the inner loop of the algorithm for v = 4 makes a recursive call to the algorithm with R = {4}, P = {3,5,6}, and X = Ø (although vertex 2 belongs to the set X in the outer call to the algorithm, it is not a neighbor of v and is excluded from the subset of X passed to the recursive call). This recursive call will end up making three second-level recursive calls to the algorithm that report the three cliques {3,4}, {4,5}, and {4,6}. Then, vertex 4 is added to X and removed from P. - -In the third and final iteration of the inner loop of the algorithm, for v = 6, there is a recursive call to the algorithm with R = {6}, P = Ø, and X = {4}. Because this recursive call has P empty and X non-empty, it immediately backtracks without reporting any more cliques, as there can be no maximal clique that includes vertex 6 and excludes vertex 4. - -The call tree for the algorithm, therefore, looks like: - -BronKerbosch2(Ø, {1,2,3,4,5,6}, Ø) - -BronKerbosch2({2}, {1,3,5}, Ø) - -BronKerbosch2({2,3}, Ø, Ø): output {2, 3} - -BronKerbosch2({2,5}, {1}, Ø) - -BronKerbosch2({1,2,5}, Ø, Ø): output {1,2,5} - -BronKerbosch2({4}, {3,5,6}, Ø) - -BronKerbosch2({3,4}, Ø, Ø): output {3,4} - -BronKerbosch2({4,5}, Ø, Ø): output {4,5} - -BronKerbosch2({4,6}, Ø, Ø): output {4,6} - -BronKerbosch2({6}, Ø, {4}): no output - -The graph in the example has degeneracy two; one possible degeneracy ordering is 6,4,3,1,2,5. 
If the vertex-ordering version of the Bron–Kerbosch algorithm is applied to the vertices, in this order, the call tree looks like - -BronKerbosch3(G) - -BronKerbosch2({6}, {4}, Ø) - -BronKerbosch2({6,4}, Ø, Ø): output {6,4} - -BronKerbosch2({4}, {3,5}, {6}) - -BronKerbosch2({4,3}, Ø, Ø): output {4,3} - -BronKerbosch2({4,5}, Ø, Ø): output {4,5} - -BronKerbosch2({3}, {2}, {4}) - -BronKerbosch2({3,2}, Ø, Ø): output {3,2} - -BronKerbosch2({1}, {2,5}, Ø) - -BronKerbosch2({1,2}, {5}, Ø) - -BronKerbosch2({1,2,5}, Ø, Ø): output {1,2,5} - -BronKerbosch2({2}, {5}, {1,3}): no output - -BronKerbosch2({5}, Ø, {1,2,4}): no output - -The Bron–Kerbosch algorithm is not an output-sensitive algorithm: unlike some other algorithms for the clique problem, it does not run in polynomial time per maximal clique generated. However, it is efficient in a worst-case sense: by a result of Moon and Moser, any n-vertex graph has at most 3^{n/3} maximal cliques, and the worst-case running time of the Bron–Kerbosch algorithm (with a pivot strategy that minimizes the number of recursive calls made at each step) is O(3^{n/3}), matching this bound. - -For sparse graphs, tighter bounds are possible. In particular the vertex-ordering version of the Bron–Kerbosch algorithm can be made to run in time O(dn3^{d/3}), where d is the degeneracy of the graph, a measure of its sparseness. There exist d-degenerate graphs for which the total number of maximal cliques is (n - d)3^{d/3}, so this bound is close to tight. diff --git a/wiki/wikipedia/974.txt b/wiki/wikipedia/974.txt deleted file mode 100644 index 43129d0e57d85eca588ba0b6825c08901a8defd8..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/974.txt +++ /dev/null @@ -1,39 +0,0 @@ -The Yamabe problem refers to a conjecture in the mathematical field of differential geometry, which was resolved in the 1980s. It is a statement about the scalar curvature of Riemannian manifolds: - -Let (M,g) be a closed smooth Riemannian manifold. Then there exists a positive and smooth function f on M such that the Riemannian metric fg has constant scalar curvature. - -By computing a formula for how the scalar curvature of fg relates to that of g, this statement can be rephrased in the following form: - -Let (M,g) be a closed smooth Riemannian manifold. Then there exists a positive and smooth function φ on M, and a number c, such that -$$ -\frac{4(n-1)}{n-2}\Delta^g\varphi+R^g\varphi+c\varphi^{(n+2)/(n-2)}=0. -$$ - -Here n denotes the dimension of M, R^g denotes the scalar curvature of g, and ∆^g denotes the Laplace-Beltrami operator of g. - -The mathematician Hidehiko Yamabe, in his original paper, gave the above statements as theorems and provided a proof; however, Trudinger discovered an error in his proof. The problem of understanding whether the above statements are true or false became known as the Yamabe problem. The combined work of Yamabe, Trudinger, Thierry Aubin, and Richard Schoen provided an affirmative resolution to the problem in 1984. - -It is now regarded as a classic problem in geometric analysis, with the proof requiring new methods in the fields of differential geometry and partial differential equations. A decisive point in Schoen's ultimate resolution of the problem was an application of the positive energy theorem of general relativity, which is a purely differential-geometric mathematical theorem first proved (in a provisional setting) in 1979 by Schoen and Shing-Tung Yau. - -There has been more recent work due to Simon Brendle, Marcus Khuri, Fernando Codá Marques, and Schoen, dealing with the collection of all positive and smooth functions f such that, for a given Riemannian manifold (M,g), the metric fg has constant scalar curvature.
Additionally, the Yamabe problem as posed in similar settings, such as for complete noncompact Riemannian manifolds, is not yet fully understood. - -Here, we refer to a "solution of the Yamabe problem" on a Riemannian manifold $(M,\overline{g})$ as a Riemannian metric g on M for which there is a positive smooth function $\varphi:M\to\mathbb{R},$ with $g=\varphi^{-2}\overline{g}.$ - -Let $(M,\overline{g})$ be a smooth Riemannian manifold. Consider a positive smooth function $\varphi:M\to\mathbb{R},$ so that $g=\varphi^{-2}\overline{g}$ is an arbitrary element of the smooth conformal class of $\overline{g}.$ A standard computation shows -$$ -\overline{R}_{ij}-\frac{1}{n}\overline{R}\overline{g}_{ij}=R_{ij}-\frac{1}{n}Rg_{ij}+\frac{n-2}{\varphi}\Big(\nabla_i\nabla_j\varphi+\frac{1}{n}g_{ij}\Delta\varphi\Big). -$$ - -Taking the g-inner product with $\textstyle\varphi(\operatorname{Ric}-\frac{1}{n}Rg)$ results in -$$ -\varphi\left\langle\overline{\operatorname{Ric}}-\frac{1}{n}\overline{R}\overline{g},\operatorname{Ric}-\frac{1}{n}Rg\right\rangle_g=\varphi\Big|\operatorname{Ric}-\frac{1}{n}Rg\Big|_g^2+(n-2)\Big(\big\langle\operatorname{Ric},\operatorname{Hess}\varphi\big\rangle_g-\frac{1}{n}R\Delta\varphi\Big). -$$ - -If $\overline{g}$ is assumed to be Einstein, then the left-hand side vanishes. If $M$ is assumed to be closed, then one can do an integration by parts, recalling the Bianchi identity $\textstyle\operatorname{div}\operatorname{Ric}=\frac{1}{2}\nabla R,$ to see -$$ -\int_M \varphi\Big|\operatorname{Ric}-\frac{1}{n}Rg\Big|^2d\mu_g=(n-2)\Big(\frac{1}{2}-\frac{1}{n}\Big)\int_M \langle\nabla R,\nabla\varphi\rangle d\mu_g. -$$ - -If $g$ has constant scalar curvature, then the right-hand side vanishes. The consequent vanishing of the left-hand side proves the following fact, due to Obata (1971): a metric of constant scalar curvature that is conformal to an Einstein metric on a closed manifold must itself be Einstein. - -Obata then went on to prove that, except in the case of the standard sphere with its usual constant-sectional-curvature metric, the only constant-scalar-curvature metrics in the conformal class of an Einstein metric (on a closed manifold) are constant multiples of the given metric. The proof proceeds by showing that the gradient of the conformal factor is actually a conformal Killing field. If the conformal factor is not constant, then following the flow lines of this gradient field, starting at a minimum of the conformal factor, allows one to show that the manifold is conformally related to the cylinder $S^{n-1}\times \mathbb{R}$, and hence has vanishing Weyl curvature. - -A closely related question is the so-called "non-compact Yamabe problem", which asks: Is it true that on every smooth complete Riemannian manifold (M,g) which is not compact, there exists a metric that is conformal to g, has constant scalar curvature and is also complete? The answer is no, due to counterexamples given by Jin. Various additional criteria under which a solution to the Yamabe problem for a non-compact manifold can be shown to exist are known (for example Aviles); however, obtaining a full understanding of when the problem can be solved in the non-compact case remains a topic of research. diff --git a/wiki/wikipedia/975.txt b/wiki/wikipedia/975.txt deleted file mode 100644 index 3721214337f902cc3a1bc6b191dbf6454a74e031..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/975.txt +++ /dev/null @@ -1,3 +0,0 @@ -In computer science, a goal node is a node in a graph that meets defined criteria for success or termination.
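As a minimal illustration (the breadth-first strategy, graph, and names below are illustrative assumptions, not drawn from this article), the criteria defining a goal node can be packaged as a predicate handed to a graph search:

```
from collections import deque

def search(graph, start, is_goal):
    """Return the first path found from `start` to a node satisfying `is_goal`."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if is_goal(path[-1]):            # goal test: the success criterion
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(search(graph, 'a', lambda n: n == 'd'))    # ['a', 'b', 'd']
```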
- -Heuristic artificial intelligence algorithms, like A* and B*, attempt to reach such nodes in optimal time by maintaining an estimate of the distance to the goal node; the estimate is defined to be 0 at the goal node itself and positive at all other nodes. diff --git a/wiki/wikipedia/976.txt b/wiki/wikipedia/976.txt deleted file mode 100644 index f1035b566cb97081fe5afc000007522715eb83ee..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/976.txt +++ /dev/null @@ -1,23 +0,0 @@ -Sylvester's theorem or Sylvester's formula describes a particular interpretation of the sum of three pairwise distinct vectors of equal length in the context of triangle geometry. It is also referred to as Sylvester's (triangle) problem in the literature, when it is given as a problem rather than a theorem. The theorem is named after the British mathematician James Joseph Sylvester. - -Consider three pairwise distinct vectors of equal length $\vec{u}$, $\vec{v}$ and $\vec{w}$, each of them acting on the same point $O$, thus creating the points $A$, $B$ and $C$. Those points form the triangle $\triangle ABC$ with $O$ as the center of its circumcircle. Now let $H$ denote the orthocenter of the triangle; then the vector $\overrightarrow{OH}$ is equal to the sum of the three vectors: -$$ -\overrightarrow{OH}=\vec{u}+\vec{v}+\vec{w} -$$ - -Furthermore, since the points $O$ and $H$ are located on the Euler line together with the centroid $S$, the following equation holds: -$$ -\overrightarrow{OH}=3\cdot \overrightarrow{OS} -$$ - -If the condition of equal length in Sylvester's theorem is dropped and one considers merely three arbitrary pairwise distinct vectors, then the equation above does not hold anymore. However, the relation with the centroid remains true, that is: -$$ -3\cdot \overrightarrow{OS}=\vec{u}+\vec{v}+\vec{w} -$$ - -This follows directly from the definition of the centroid for a finite set of points in $\mathbb{R}^n$, which also yields a version for $n$ vectors acting on $O$: -$$ -n\cdot \overrightarrow{OS}=\sum_{i=1}^n v_i -$$ - -Here $S$ is the centroid of the vertices of the polygon generated by the $n$ vectors acting on $O$. diff --git a/wiki/wikipedia/977.txt b/wiki/wikipedia/977.txt deleted file mode 100644 index 82145edbf02d35e64082955afee29574ed06eccb..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/977.txt +++ /dev/null @@ -1,41 +0,0 @@ -In mathematics, the mountain climbing problem is a problem of finding the conditions that two functions forming profiles of a two-dimensional mountain must satisfy, so that two climbers can start at the bottom on opposite sides of the mountain and coordinate their movements to meet (possibly at the top) while always staying at the same height. This problem was named and posed in this form by , but its history goes back to , who solved a version of it. The problem has been repeatedly rediscovered and solved independently in different contexts by a number of people (see references below). - -Since the 1990s, the problem has been shown to be connected to the weak Fréchet distance of curves in the plane, various planar motion planning problems in computational geometry, the inscribed square problem, semigroups of polynomials, etc. The problem was popularized in the article by Goodman, which received the Mathematical Association of America's Lester R. Ford Award in 1990.
- -== Understanding the problem == - -It is easy to coordinate the climbers' movement between the peaks and valleys (local maxima and minima of the functions). The difficulty is that to progress, the climbers must occasionally go down the mountain, either one or the other, or both climbers. Similarly, either one or the other climber must backtrack towards the beginning of the journey. In fact, it has been observed that for a mountain with n peaks and valleys the number of turns can be as large as quadratic in n. These complications make the problem unintuitive and sometimes rather difficult, both in theory and in practice. - -== Formulation == - -The following result is due to Huneke: - -Suppose $f$ and $g$ are continuous functions from $[0,1]$ to $[0,1]$ with $f(0)=g(0)=0$ and $f(1)=g(1)=1$, and such that neither function is constant on an interval. Then there exist continuous functions $s$ and $t$ from $[0,1]$ to $[0,1]$ with $s(0)=t(0)=0$, $s(1)=t(1)=1$, and such that $f\circ s = g\circ t$, where "$\circ$" stands for a composition of functions. - -On the other hand, it is not possible to extend this result to all continuous functions. For, if $f$ has constant height over an interval while $g$ has infinitely many oscillations that pass through the same height, then the first climber may be forced to go back and forth infinitely many times, and thus can never reach the top. - -For the piecewise linear functions there are no obstructions. In this case, the climbers can always coordinate their movements to get to the top, regardless of whether there are intervals of constant height. - -Suppose that both functions are piecewise linear, and do not have any intervals of constant height. - -Consider the set of all pairs $(x,y)$ for which a first climber at $x$ and a second climber at $y$ would have the same height as each other. - -If we interpret these pairs as the Cartesian coordinates of points in the plane, then this set is a union of line segments. It can be interpreted as the drawing of an undirected graph $G$ that has a vertex at each line segment endpoint or crossing, and an edge for each portion of a line segment that connects two vertices. - -This graph may or may not be connected. It has vertices of the following types: - -*At the vertex $(0,0)$ the degree of the vertex (the number of incident edges) is one: the only possible direction for both climbers to go is onto the mountain. Similarly, at $(1,1)$ the degree is one, because both climbers can only return down the mountain. - -*At a vertex where neither climber is at a peak or a valley, then the degree is two: it is only possible for both climbers to go up, or both to go down. - -*At a vertex where one climber is at a peak or a valley and the other one is not, then the degree is again two: the climber at the peak or valley has two choices of which way to go, and the other climber can only go one way. - -*At a vertex where both climbers are at peaks or both climbers are at valleys, the degree is four: both climbers may choose independently of each other which direction to go. - -*The set of pairs $(x,y)$ used to define the graph $G$ may also include points where one climber is at a peak and the other is at a valley. These points may be interpreted as isolated vertices of $G$: neither climber can move, so the degree is zero. - -According to the handshaking lemma, every connected component of an undirected graph has an even number of odd-degree vertices. 
- -Since the only odd-degree vertices are $(0,0)$ and $(1,1)$, these two vertices must belong to the same connected component. That is, there must be a path from $(0,0)$ to $(1,1)$ in $G$. In the language of mountain climbers, this gives a way to coordinate the climbers' movement to reach the top of the mountain. - -The proof for functions that are piecewise linear but that allow intervals of constant height is similar, but involves a more complicated case analysis. Alternatively, one can find a path for modified functions in which all the intervals of constant height have been collapsed to points, and then extend the resulting path to the original functions. diff --git a/wiki/wikipedia/978.txt b/wiki/wikipedia/978.txt deleted file mode 100644 index ed6a7575e6ff3b83e694b3296af2aab184ddae22..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/978.txt +++ /dev/null @@ -1,15 +0,0 @@ -Qualitative properties are properties that are observed and can generally not be measured with a numerical result. They are contrasted to quantitative properties which have numerical characteristics. - -Some engineering and scientific properties are qualitative. A test method can result in qualitative data about something. This can be a categorical result or a binary classification (e.g., pass/fail, go/no go, conform/non-conform). It can sometimes be an engineering judgement. - -The data that all share a qualitative property form a nominal category. A variable which codes for the presence or absence of such a property is called a binary categorical variable, or equivalently a dummy variable. - -Some important qualitative properties that concern businesses are: - -Human factors, 'human work capital' is probably one of the most important issues that deals with qualitative properties. Some common aspects are work, motivation, general participation, etc. Although all of these aspects are not measurable in terms of quantitative criteria, the general overview of them could be summarized as a quantitative property. - -Environmental issues are in some cases quantitatively measurable, but other properties are qualitative e.g.: environmentally friendly manufacturing, responsibility for the entire life of a product (from the raw-material till scrap), attitudes towards safety, efficiency, and minimum waste production. - -Ethical issues are closely related to environmental and human issues, and may be covered in corporate governance. Child labour and illegal dumping of waste are examples of ethical issues. - -The way a company deals with its stockholders (the 'acting' of a company) is probably the most obvious qualitative aspect of a business. Although measuring something in qualitative terms is difficult, most people can (and will) make a judgement about a behaviour on the basis of how they feel treated. This indicates that qualitative properties are closely related to emotional impressions. diff --git a/wiki/wikipedia/979.txt b/wiki/wikipedia/979.txt deleted file mode 100644 index b07eeb7e2ad11d84564955e11842a7c645e10444..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/979.txt +++ /dev/null @@ -1,23 +0,0 @@ -In computational complexity theory, a gadget is a subset of a problem instance that simulates the behavior of one of the fundamental units of a different computational problem. Gadgets are typically used to construct reductions from one computational problem to another, as part of proofs of NP-completeness or other types of computational hardness. 
The component design technique is a method for constructing reductions by using gadgets. - -Szabó traces the use of gadgets to a 1954 paper in graph theory by W. T. Tutte, in which Tutte provided gadgets for reducing the problem of finding a subgraph with given degree constraints to a perfect matching problem. However, the "gadget" terminology has a later origin, and does not appear in Tutte's paper. - -Many NP-completeness proofs are based on many-one reductions from 3-satisfiability, the problem of finding a satisfying assignment to a Boolean formula that is a conjunction (Boolean and) of clauses, each clause being the disjunction (Boolean or) of three terms, and each term being a Boolean variable or its negation. A reduction from this problem to a hard problem on undirected graphs, such as the Hamiltonian cycle problem or graph coloring, would typically be based on gadgets in the form of subgraphs that simulate the behavior of the variables and clauses of a given 3-satisfiability instance. These gadgets would then be glued together to form a single graph, a hard instance for the graph problem under consideration. - -For instance, the problem of testing 3-colorability of graphs may be proven NP-complete by a reduction from 3-satisfiability of this type. The reduction uses two special graph vertices, labeled as "Ground" and "False", that are not part of any gadget. As shown in the figure, the gadget for a variable x consists of two vertices connected in a triangle with the ground vertex; one of the two vertices of the gadget is labeled with x and the other is labeled with the negation of x. The gadget for a clause (t_0 ∨ t_1 ∨ t_2) consists of six vertices, connected to each other, to the vertices representing the terms t_0, t_1, and t_2, and to the ground and false vertices by the edges shown. Any 3-CNF formula may be converted into a graph by constructing a separate gadget for each of its variables and clauses and connecting them as shown. - -In any 3-coloring of the resulting graph, one may designate the three colors as being true, false, or ground, where false and ground are the colors given to the false and ground vertices (necessarily different, as these vertices are made adjacent by the construction) and true is the remaining color not used by either of these vertices. Within a variable gadget, only two colorings are possible: the vertex labeled with the variable must be colored either true or false, and the vertex labeled with the variable's negation must correspondingly be colored either false or true. In this way, valid assignments of colors to the variable gadgets correspond one-for-one with truth assignments to the variables: the behavior of the gadget with respect to coloring simulates the behavior of a variable with respect to truth assignment. - -Each clause gadget has a valid 3-coloring if at least one of its adjacent term vertices is colored true, and cannot be 3-colored if all of its adjacent term vertices are colored false. In this way, the clause gadget can be colored if and only if the corresponding truth assignment satisfies the clause, so again the behavior of the gadget simulates the behavior of a clause. - -Agrawal et al. considered what they called "a radically simple form of gadget reduction", in which each bit describing part of a gadget may depend only on a bounded number of bits of the input, and used these reductions to prove an analogue of the Berman–Hartmanis conjecture stating that all NP-complete sets are polynomial-time isomorphic.
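Returning briefly to the 3-colorability reduction described above, the construction can be written out concretely. The sketch below assumes networkx; since the figure's exact clause wiring is not reproduced in this text, the six clause-gadget vertices follow one standard OR-gadget realization, which may differ in detail from the figure:

```
import networkx as nx
from itertools import count

def three_sat_to_coloring(clauses):
    """clauses: list of 3-tuples of nonzero ints, DIMACS style (-2 = NOT x2)."""
    G = nx.Graph()
    G.add_edge("ground", "false")             # the two special vertices
    fresh = count()
    lit = lambda l: ("lit", l)

    for v in {abs(l) for c in clauses for l in c}:
        # variable gadget: x and NOT-x in a triangle with the ground vertex
        G.add_edges_from([(lit(v), lit(-v)), (lit(v), "ground"), (lit(-v), "ground")])

    def or_gadget(a, b):                      # output colorable "true" iff a or b is
        u, w, out = (("or", next(fresh)) for _ in range(3))
        G.add_edges_from([(u, w), (w, out), (out, u), (u, a), (w, b)])
        return out

    for t0, t1, t2 in clauses:                # clause gadget: six fresh vertices
        out = or_gadget(or_gadget(lit(t0), lit(t1)), lit(t2))
        G.add_edge(out, "ground")             # adjacent to both special vertices,
        G.add_edge(out, "false")              # so `out` must receive the "true" color
    return G

G = three_sat_to_coloring([(1, 2, 3), (-1, -2, 3)])
print(G.number_of_nodes(), G.number_of_edges())
```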
- -The standard definition of NP-completeness involves polynomial time many-one reductions: a problem in NP is by definition NP-complete if every other problem in NP has a reduction of this type to it, and the standard way of proving that a problem in NP is NP-complete is to find a polynomial time many-one reduction from a known NP-complete problem to it. But (in what Agrawal et al. called "a curious, often observed fact") all sets known to be NP-complete at that time could be proved complete using the stronger notion of AC^0 many-one reductions: that is, reductions that can be computed by circuits of polynomial size, constant depth, and unbounded fan-in. Agrawal et al. proved that every set that is NP-complete under AC^0 reductions is complete under an even more restricted type of reduction, NC^0 many-one reductions, using circuits of polynomial size, constant depth, and bounded fan-in. In an NC^0 reduction, each output bit of the reduction can depend only on a constant number of input bits. - -The Berman–Hartmanis conjecture is an unsolved problem in computational complexity theory stating that all NP-complete problem classes are polynomial-time isomorphic. That is, if A and B are two NP-complete problem classes, there is a polynomial-time one-to-one reduction from A to B whose inverse is also computable in polynomial time. Agrawal et al. used their equivalence between AC^0 reductions and NC^0 reductions to show that all sets complete for NP under AC^0 reductions are AC^0-isomorphic. - -One application of gadgets is in proving hardness of approximation results, by reducing a problem that is known to be hard to approximate to another problem whose hardness is to be proven. In this application, one typically has a family of instances of the first problem in which there is a gap in the objective function values, and in which it is hard to determine whether a given instance has an objective function that is on the low side or on the high side of the gap. The reductions used in these proofs, and the gadgets used in the reductions, must preserve the existence of this gap, and the strength of the inapproximability result derived from the reduction will depend on how well the gap is preserved. - -Trevisan et al. formalize the problem of finding gap-preserving gadgets, for families of constraint satisfaction problems in which the goal is to maximize the number of satisfied constraints. They give as an example a reduction from 3-satisfiability to 2-satisfiability by Garey, in which the gadget representing a 3-SAT clause consists of ten 2-SAT clauses, and in which a truth assignment that satisfies the 3-SAT clause also satisfies at least seven clauses in the gadget, while a truth assignment that fails to satisfy the 3-SAT clause satisfies at most six clauses of the gadget. Using this gadget, and the fact that (unless P = NP) there is no polynomial-time approximation scheme for maximizing the number of 3-SAT clauses that a truth assignment satisfies, it can be shown that there is similarly no approximation scheme for MAX 2-SAT. - -Trevisan et al. show that, in many cases of the constraint satisfaction problems they study, the gadgets leading to the strongest possible inapproximability results may be constructed automatically, as the solution to a linear programming problem. The same gadget-based reductions may also be used in the other direction, to transfer approximation algorithms from easier problems to harder problems. For instance, Trevisan et al.
provide an optimal gadget for reducing 3-SAT to a weighted variant of 2-SAT (consisting of seven weighted 2-SAT clauses) that is stronger than the one by Garey; using it, together with known semidefinite programming approximation algorithms for MAX 2-SAT, they provide an approximation algorithm for MAX 3-SAT with approximation ratio 0.801, better than previously known algorithms. diff --git a/wiki/wikipedia/98.txt b/wiki/wikipedia/98.txt deleted file mode 100644 index 3d11509eb5572510b9e54cd1f394fd1682187694..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/98.txt +++ /dev/null @@ -1,168 +0,0 @@ -The Ehrenfest theorem, named after Paul Ehrenfest, an Austrian theoretical physicist at Leiden University, relates the time derivative of the expectation values of the position and momentum operators x and p to the expectation value of the force $F=-V'(x)$ on a massive particle moving in a scalar potential $V(x)$, -$$ -m\frac{d}{dt}\langle x\rangle = \langle p\rangle, \frac{d}{dt}\langle p\rangle = -\left\langle V'(x)\right\rangle ~. -$$ - -The Ehrenfest theorem is a special case of a more general relation between the expectation of any quantum mechanical operator and the expectation of the commutator of that operator with the Hamiltonian of the system $\frac{d}{dt}\langle A\rangle = \frac{1}{i\hbar}\langle [A,H] \rangle+ \left\langle \frac{\partial A}{\partial t}\right\rangle ~,$ - -where A is some quantum mechanical operator and ⟨A⟩ is its expectation value. This more general theorem was not actually derived by Ehrenfest (it is due to Werner Heisenberg). - -It is most apparent in the Heisenberg picture of quantum mechanics, where it is just the expectation value of the Heisenberg equation of motion. It provides mathematical support to the correspondence principle. - -The reason is that Ehrenfest's theorem is closely related to Liouville's theorem of Hamiltonian mechanics, which involves the Poisson bracket instead of a commutator. Dirac's rule of thumb suggests that statements in quantum mechanics which contain a commutator correspond to statements in classical mechanics where the commutator is supplanted by a Poisson bracket multiplied by iħ. This makes the operator expectation values obey corresponding classical equations of motion, provided the Hamiltonian is at most quadratic in the coordinates and momenta. Otherwise, the evolution equations still may hold approximately, provided fluctuations are small. - -Although, at first glance, it might appear that the Ehrenfest theorem is saying that the quantum mechanical expectation values obey Newton’s classical equations of motion, this is not actually the case. If the pair $(\langle x\rangle,\langle p\rangle)$ were to satisfy Newton's second law, the right-hand side of the second equation would have to be -$$ --V'\left(\left\langle x\right\rangle\right), -$$ - -which is typically not the same as -$$ --\left\langle V'(x)\right\rangle. -$$ - -If for example, the potential $V(x)$ is cubic, (i.e. proportional to $x^3$), then $V'$ is quadratic (proportional to $x^2$). This means, in the case of Newton's second law, the right side would be in the form of $\langle x\rangle^2$, while in the Ehrenfest theorem it is in the form of $\langle x^2\rangle$. The difference between these two quantities is the square of the uncertainty in $x$ and is therefore nonzero. - -An exception occurs in case when the classical equations of motion are linear, that is, when $V$ is quadratic and $V'$ is linear. 
For general systems, if the wave function is highly concentrated around a point $x_0$, then $V'\left(\left\langle x\right\rangle\right)$ and $\left\langle V'(x)\right\rangle$ will be almost the same, since both will be approximately equal to $V'(x_0)$. In that case, the expected position and expected momentum will approximately follow the classical trajectories, at least for as long as the wave function remains localized in position. - -Suppose some system is presently in a quantum state Φ. If we want to know the instantaneous time derivative of the expectation value of A, then, by definition, - -\begin{align} - -\frac{\mathrm{d}}{\mathrm{d}t}\langle A\rangle &= \frac{\mathrm{d}}{\mathrm{d}t}\int \Phi^* A \Phi\mathrm{d}^3x \\ - -&= \int \left( \frac{\partial \Phi^*}{\partial t} \right) A\Phi\mathrm{d}^3x + \int \Phi^* \left( \frac{\partial A}{\partial t}\right) \Phi\mathrm{d}^3x +\int \Phi^* A \left( \frac{\partial \Phi}{\partial t} \right)\mathrm{d}^3x \\ - -&= \int \left( \frac{\partial \Phi^*}{\partial t} \right) A\Phi\mathrm{d}^3x + \left\langle \frac{\partial A}{\partial t}\right\rangle + \int \Phi^* A \left( \frac{\partial \Phi}{\partial t} \right)\mathrm{d}^3x - -\end{align} - -where we are integrating over all of space. If we apply the Schrödinger equation, we find that -$$ -\frac{\partial \Phi}{\partial t} = \frac{1}{i\hbar}H\Phi -$$ - -By taking the complex conjugate we find -$$ -\frac{\partial \Phi^*}{\partial t} = -\frac{1}{i\hbar}\Phi^*H^* = -\frac{1}{i\hbar}\Phi^*H. -$$ - -Note $H = H^*$, because the Hamiltonian is Hermitian. Placing this into the above equation we have -$$ -\frac{d}{dt}\langle A\rangle = \frac{1}{i\hbar}\int \Phi^* (AH-HA) \Phi~d^3x + \left\langle \frac{\partial A}{\partial t}\right\rangle = \frac{1}{i\hbar}\langle [A,H]\rangle + \left\langle \frac{\partial A}{\partial t}\right\rangle. -$$ - -Often (but not always) the operator A is time-independent so that its derivative is zero and we can ignore the last term. - -In the Heisenberg picture, the derivation is trivial. The Heisenberg picture moves the time dependence of the system to operators instead of state vectors. Starting with the Heisenberg equation of motion -$$ -\frac{d}{dt}A(t) = \frac{\partial A(t)}{\partial t} + \frac{1}{i \hbar}[A(t),H], -$$ - -we can derive Ehrenfest's theorem simply by projecting the Heisenberg equation onto $ |\Psi\rangle $ from the right and $ \langle\Psi| $ from the left, or taking the expectation value, so -$$ -\left\langle\Psi\left|\frac{d}{dt}A(t)\right|\Psi\right\rangle = \left\langle\Psi\left|\frac{\partial A(t)}{\partial t}\right|\Psi\right\rangle + \left\langle\Psi\left|\frac{1}{i \hbar}[A(t),H]\right|\Psi\right\rangle. -$$ - -We can pull the d/dt out of the first term, since the state vectors are no longer time dependent in the Heisenberg picture. Therefore, -$$ -\frac{d}{dt}\langle A(t)\rangle = \left\langle\frac{\partial A(t)}{\partial t}\right\rangle + \frac{1}{i \hbar}\left\langle[A(t),H]\right\rangle -$$ - -The expectation values of the theorem, however, are the very same in the Schrödinger picture as well. For the very general example of a massive particle moving in a potential, the Hamiltonian is simply -$$ - H(x,p,t) = \frac{p^2}{2m} + V(x,t) -$$ - -where x is the position of the particle.
- -Suppose we wanted to know the instantaneous change in the expectation of the momentum p. Using Ehrenfest's theorem, we have - - \frac{d}{dt}\langle p\rangle = \frac{1}{i\hbar}\langle [p,H]\rangle + \left\langle \frac{\partial p}{\partial t}\right\rangle = - -\frac{1}{i\hbar}\langle [p,V(x,t)]\rangle, - -since the operator p commutes with itself and has no time dependence. By expanding the right-hand-side, replacing p by −iħ∇, we get -$$ -\frac{d}{dt}\langle p\rangle = \int \Phi^* V(x,t)\frac{\partial}{\partial x}\Phi~dx - \int \Phi^* \frac{\partial}{\partial x} (V(x,t)\Phi)~dx ~. -$$ - -After applying the product rule on the second term, we have - - \begin{align} - -\frac{d}{dt}\langle p\rangle &= \int \Phi^* V(x,t) \frac{\partial}{\partial x}\Phi~dx - \int \Phi^* \left(\frac{\partial}{\partial x} V(x,t)\right)\Phi ~dx - \int \Phi^* V(x,t) \frac{\partial}{\partial x}\Phi~dx \\ - -&= - \int \Phi^* \left(\frac{\partial}{\partial x} V(x,t)\right)\Phi ~dx \\ - -&= \left\langle - \frac{\partial}{\partial x} V(x,t)\right\rangle = \langle F \rangle. - -\end{align} - -As explained in the introduction, this result does not say that the pair $(\langle X\rangle,\langle P\rangle)$ satisfies Newton's second law, because the right-hand side of the formula is $\langle F(x,t)\rangle,$ rather than $F(\langle X\rangle,t)$. Nevertheless, as explained in the introduction, for states that are highly localized in space, the expected position and momentum will approximately follow classical trajectories, which may be understood as an instance of the correspondence principle. - -Similarly, we can obtain the instantaneous change in the position expectation value. - -\begin{align} - -\frac{d}{dt}\langle x\rangle &= \frac{1}{i\hbar}\langle [x,H]\rangle + \left\langle \frac{\partial x}{\partial t}\right\rangle \\[5pt] - -&= \frac{1}{i\hbar} \left \langle \left [x,\frac{p^2}{2m} + V(x,t) \right ] \right \rangle + 0 \\[5pt] - -&= \frac{1}{i\hbar} \left \langle \left [x,\frac{p^2}{2m} \right] \right \rangle \\[5pt] - -&= \frac{1}{i\hbar 2 m} \left \langle [x,p] \frac{d}{dp} p^2 \right\rangle \\[5pt] - -&= \frac{1}{i\hbar 2 m}\langle i \hbar 2 p\rangle \\[5pt] - -&= \frac{1}{m}\langle p\rangle - -\end{align} - -This result is actually in exact accord with the classical equation. - -It was established above that the Ehrenfest theorems are consequences of the Schrödinger equation. However, the converse is also true: the Schrödinger equation can be inferred from the Ehrenfest theorems. We begin from - -\begin{align} - -m\frac{d}{dt} \left \langle \Psi(t) \right | \hat{x} \left | \Psi(t) \right \rangle &= \left \langle \Psi(t) \right | \hat{p} \left | \Psi(t) \right \rangle, \\[5pt] - -\frac{d}{dt} \left \langle \Psi(t) \right | \hat{p} \left | \Psi(t) \right \rangle &= \left \langle \Psi(t) \right | -V'(\hat{x}) \left | \Psi(t) \right \rangle. - -\end{align} - -Application of the product rule leads to - -\begin{align} - -\left \langle \frac{d\Psi}{dt} \Big | \hat{x} \Big | \Psi \right \rangle + \left \langle \Psi \Big | \hat{x} \Big | \frac{d\Psi}{dt} \right \rangle &= \left \langle \Psi \Big | \frac{\hat{p}}{m} \Big | \Psi \right \rangle, \\[5pt] - -\left \langle \frac{d\Psi}{dt} \Big | \hat{p} \Big | \Psi \right \rangle + \left \langle \Psi \Big | \hat{p} \Big | \frac{d\Psi}{dt} \right \rangle &= \langle \Psi | -V'(\hat{x}) | \Psi \rangle, - -\end{align} - -Here, apply Stone's theorem, using Ĥ to denote the quantum generator of time translation. 
The next step is to show that this is the same as the Hamiltonian operator used in quantum mechanics. Stone's theorem implies -$$ -i\hbar \left | \frac{d\Psi}{dt} \right \rangle = \hat{H} | \Psi(t) \rangle ~, -$$ - -where ħ was introduced as a normalization constant to balance the dimensionality. Since these identities must be valid for any initial state, the averaging can be dropped and the system of commutator equations for Ĥ is derived: -$$ -im [\hat{H}, \hat{x}] = \hbar \hat{p}, \qquad i [\hat{H}, \hat{p}] = -\hbar V'(\hat{x}). -$$ - -Assume that the coordinate and momentum observables obey the canonical commutation relation [x̂, p̂] = iħ. Setting $\hat{H} = H(\hat{x}, \hat{p})$, the commutator equations can be converted into the differential equations -$$ -m \frac{\partial H (x,p)}{\partial p} = p, \qquad \frac{\partial H(x,p)}{\partial x} = V'(x), -$$ - -whose solution is the familiar quantum Hamiltonian -$$ -\hat{H} = \frac{\hat{p}^2}{2m} + V(\hat{x}). -$$ - -In this way, the Schrödinger equation is derived from the Ehrenfest theorems by assuming the canonical commutation relation between the coordinate and momentum. If one assumes that the coordinate and momentum commute, the same computational method leads to the Koopman–von Neumann classical mechanics, which is the Hilbert space formulation of classical mechanics. Therefore, this derivation, as well as the derivation of Koopman–von Neumann mechanics, shows that the essential difference between quantum and classical mechanics reduces to the value of the commutator [x̂, p̂]. - -The implications of the Ehrenfest theorem for systems with classically chaotic dynamics are discussed in a Scholarpedia article. Due to the exponential instability of classical trajectories, the Ehrenfest time, up to which there is complete correspondence between quantum and classical evolution, is logarithmically short, proportional to the logarithm of a typical quantum number. For the case of integrable dynamics this time scale is much larger, proportional to a certain power of the quantum number. diff --git a/wiki/wikipedia/980.txt b/wiki/wikipedia/980.txt deleted file mode 100644 index e42fea13ef2c2e8c670f18473fd529b30ba6fc6a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/980.txt +++ /dev/null @@ -1,31 +0,0 @@ -In arithmetic geometry, the Mordell conjecture is the conjecture made by Louis Mordell that a curve of genus greater than 1 over the field Q of rational numbers has only finitely many rational points. In 1983 it was proved by Gerd Faltings, and is now known as Faltings's theorem. The conjecture was later generalized by replacing Q by any number field. - -Let C be a non-singular algebraic curve of genus g over Q. Then the set of rational points on C may be determined as follows: - -* Case g = 0: no points or infinitely many; C is handled as a conic section. - -* Case g = 1: no points, or C is an elliptic curve and its rational points form a finitely generated abelian group (Mordell's Theorem, later generalized to the Mordell–Weil theorem). Moreover, Mazur's torsion theorem restricts the structure of the torsion subgroup. - -* Case g > 1: according to the Mordell conjecture, now Faltings's theorem, C has only a finite number of rational points. - -Igor Shafarevich conjectured that there are only finitely many isomorphism classes of abelian varieties of fixed dimension and fixed polarization degree over a fixed number field with good reduction outside a fixed finite set of places. Aleksei Parshin showed that Shafarevich's finiteness conjecture would imply the Mordell conjecture, using what is now called Parshin's trick. - -Gerd Faltings proved Shafarevich's finiteness conjecture using a known reduction to a case of the Tate conjecture, together with tools from algebraic geometry, including the theory of Néron models. The main idea of Faltings's proof is the comparison of Faltings heights and naive heights via Siegel modular varieties. - -* Paul Vojta gave a proof based on diophantine approximation. Enrico Bombieri found a more elementary variant of Vojta's proof. - -*Brian Lawrence and Akshay Venkatesh gave a proof based on p-adic Hodge theory, borrowing also some of the easier ingredients of Faltings's original proof. - -Faltings's 1983 paper had as consequences a number of statements which had previously been conjectured: - -* The Mordell conjecture that a curve of genus greater than 1 over a number field has only finitely many rational points; - -* The Isogeny theorem that abelian varieties with isomorphic Tate modules (as $\mathbb{Q}_\ell$-modules with Galois action) are isogenous. - -A sample application of Faltings's theorem is to a weak form of Fermat's Last Theorem: for any fixed n ≥ 4 there are at most finitely many primitive integer solutions (pairwise coprime solutions) to $a^n + b^n = c^n$, since for such n the Fermat curve $x^n + y^n = 1$ has genus greater than 1. - -Because of the Mordell–Weil theorem, Faltings's theorem can be reformulated as a statement about the intersection of a curve C with a finitely generated subgroup Γ of an abelian variety A. Generalizing by replacing A by a semiabelian variety, C by an arbitrary subvariety of A, and Γ by an arbitrary finite-rank subgroup of A leads to the Mordell–Lang conjecture, which was proved in 1995 by McQuillan following work of Laurent, Raynaud, Hindry, Vojta, and Faltings. - -Another higher-dimensional generalization of Faltings's theorem is the Bombieri–Lang conjecture that if X is a pseudo-canonical variety (i.e., a variety of general type) over a number field k, then X(k) is not Zariski dense in X. Even more general conjectures have been put forth by Paul Vojta. - -The Mordell conjecture for function fields was proved by Yuri Ivanovich Manin and by Hans Grauert. In 1990, Robert F. Coleman found and fixed a gap in Manin's proof.
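The genus computation behind the Fermat application above is elementary: a smooth plane curve of degree n has genus (n − 1)(n − 2)/2 by the genus–degree formula, so the Fermat curve $x^n + y^n = 1$ has genus greater than 1 exactly when n ≥ 4. A one-line illustrative check:

```python
def genus_smooth_plane_curve(n: int) -> int:
    # Genus-degree formula for a smooth plane curve of degree n.
    return (n - 1) * (n - 2) // 2

for n in range(3, 8):
    print(n, genus_smooth_plane_curve(n))  # n >= 4 gives genus 3, 6, 10, ... > 1
```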
diff --git a/wiki/wikipedia/981.txt b/wiki/wikipedia/981.txt deleted file mode 100644 index 9981e49dadb7ec94667d982ff3ad80f1507a9ddf..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/981.txt +++ /dev/null @@ -1,112 +0,0 @@ -In mathematics, Hurwitz's automorphisms theorem bounds the order of the group of automorphisms, via orientation-preserving conformal mappings, of a compact Riemann surface of genus g > 1, stating that the number of such automorphisms cannot exceed 84(g − 1). A group for which the maximum is achieved is called a Hurwitz group, and the corresponding Riemann surface a Hurwitz surface. Because compact Riemann surfaces are synonymous with non-singular complex projective algebraic curves, a Hurwitz surface can also be called a Hurwitz curve. The theorem is named after Adolf Hurwitz, who proved it in 1893. - -Hurwitz's bound also holds for algebraic curves over a field of characteristic 0, and over fields of positive characteristic p>0 for groups whose order is coprime to p, but can fail over fields of positive characteristic p>0 when p divides the group order.
For example, the double cover of the projective line $y^2 = x^p - x$ branched at all points defined over the prime field has genus $g=(p-1)/2$ but is acted on by the group $\mathrm{SL}_2(p)$ of order $p^3-p$. - -One of the fundamental themes in differential geometry is a trichotomy between the Riemannian manifolds of positive, zero, and negative curvature K. It manifests itself in many diverse situations and on several levels. In the context of compact Riemann surfaces X, via the Riemann uniformization theorem, this can be seen as a distinction between the surfaces of different topologies: - -* X a sphere, a compact Riemann surface of genus zero with K > 0; - -* X a flat torus, or an elliptic curve, a Riemann surface of genus one with K = 0; - -* and X a hyperbolic surface, which has genus greater than one and K < 0. - -While in the first two cases the surface X admits infinitely many conformal automorphisms (in fact, the conformal automorphism group is a complex Lie group of dimension three for a sphere and of dimension one for a torus), a hyperbolic Riemann surface only admits a discrete set of automorphisms. Hurwitz's theorem claims that in fact more is true: it provides a uniform bound on the order of the automorphism group as a function of the genus and characterizes those Riemann surfaces for which the bound is sharp. - -Theorem: Let $X$ be a smooth connected Riemann surface of genus $g \ge 2$. Then its automorphism group $\mathrm{Aut}(X)$ has size at most $84(g-1)$. - -Proof: Assume for now that $G = \mathrm{Aut}(X)$ is finite (we will prove this at the end). - -* Consider the quotient map $X \to X/G$. Since $G$ acts by holomorphic functions, the quotient is locally of the form $z \to z^n$ and the quotient $X/G$ is a smooth Riemann surface. The quotient map $X \to X/G$ is a branched cover, and we will see below that the ramification points correspond to the orbits that have a non-trivial stabiliser. Let $g_0$ be the genus of $X/G$. - -* By the Riemann-Hurwitz formula, 2g-2 \ = \ |G| \cdot \left( 2g_0-2 + \sum_{i = 1}^k \left(1-\frac{1}{e_i}\right)\right) where the sum is over the $k$ ramification points $p_i \in X/G$ for the quotient map $ X \to X/G$. The ramification index $e_i$ at $p_i$ is just the order of the stabiliser group, since $e_i f_i = \deg(X \to X/G)$ where $f_i$ is the number of pre-images of $p_i$ (the number of points in the orbit), and $\deg(X \to X/G) = |G|$. By definition of ramification points, $e_i \ge 2$ for each of the $k$ ramification indices. - -Now call the right-hand side $|G| R$; since $g \ge 2$ we must have $R>0$. Rearranging the equation we find: - -* If $g_0 \ge 2$ then $R \ge 2$, and $|G| \le (g-1) $ - -* If $g_0 = 1 $, then $ k \ge 1$ and $R\ge 0 + 1 - 1/2 = 1/2$ so that $|G| \le 4(g-1)$, - -* If $g_0 = 0$, then $k \ge 3$ and - -** if $k \ge 5$ then $R \ge -2 + k(1 - 1/2) \ge 1/2$, so that $|G| \le 4(g-1)$ - -** if $k=4$ then $ R \ge -2 + 4 - 1/2 - 1/2 - 1/2 - 1/3 = 1/6$, so that $|G| \le 12(g-1)$, - -** if $k=3$ then write $e_1 = p, e_2 = q, e_3 = r$. We may assume $2 \le p \le q \le r$. - -*** if $ p \ge 3 $ then $ R \ge -2 + 3 - 1/3 - 1/3 - 1/4 = 1/12$ so that $|G| \le 24(g-1)$, - -*** if $ p = 2 $ then - -**** if $q \ge 4 $ then $R \ge -2 + 3 - 1/2 - 1/4 - 1/5 = 1/20$ so that $|G| \le 40(g-1)$, - -**** if $q = 3 $ then $R \ge -2 + 3 - 1/2 - 1/3 - 1/7 = 1/42$ so that $|G| \le 84(g-1)$. - -In conclusion, $|G| \le 84(g-1)$. - -To show that $G$ is finite, note that $G$ acts on the cohomology $H^*(X,\mathbf{C})$ preserving the Hodge decomposition and the lattice $H^1(X,\mathbf{Z})$.
- -*In particular, its action on $V=H^{0,1}(X,\mathbf{C})$ gives a homomorphism $h: G \to \mathrm{GL}(V)$ with discrete image $h(G)$. - -*In addition, the image $h(G)$ preserves the natural non-degenerate Hermitian inner product $(\omega,\eta)= i \int\bar{\omega}\wedge\eta$ on $V$. In particular the image $h(G)$ is contained in the unitary group $\mathrm{U}(V) \subset \mathrm{GL}(V)$ which is compact. Thus the image $h(G)$ is not just discrete, but finite. - -* It remains to prove that $h: G \to \mathrm{GL}(V)$ has finite kernel. In fact, we will prove $h$ is injective. Assume $\phi \in G$ acts as the identity on $V$. If $\mathrm{fix}(\phi)$ is finite, then by the Lefschetz fixed-point theorem, |\mathrm{fix}(\phi)| = 1 - 2\mathrm{tr}(h(\phi)) + 1 = 2 - 2\mathrm{tr}(\mathrm{id}_V) = 2 - 2g < 0. - -This is a contradiction, and so $\mathrm{fix}(\phi)$ is infinite. Since $\mathrm{fix}(\phi)$ is a closed complex subvariety of positive dimension and $X$ is a smooth connected curve (i.e. $\dim_{\mathbf C}(X) = 1$), we must have $\mathrm{fix}(\phi) = X$. Thus $\phi$ is the identity, and we conclude that $h$ is injective and $G \cong h(G)$ is finite. - -Q.E.D. - -Corollary of the proof: A Riemann surface $X$ of genus $g \ge 2$ has $84(g-1)$ automorphisms if and only if $X$ is a branched cover $X \to \mathbf{P}^1$ with three ramification points, of indices 2,3 and 7. - -By the uniformization theorem, any hyperbolic surface X – i.e., the Gaussian curvature of X is equal to negative one at every point – is covered by the hyperbolic plane. The conformal mappings of the surface correspond to orientation-preserving automorphisms of the hyperbolic plane. By the Gauss–Bonnet theorem, the area of the surface is - -A(X) = − 2π χ(X) = 4π(g − 1). - -In order to make the automorphism group G of X as large as possible, we want the area of its fundamental domain D for this action to be as small as possible. If the fundamental domain is a triangle with the vertex angles π/p, π/q and π/r, defining a tiling of the hyperbolic plane, then p, q, and r are integers greater than one, and the area is - -A(D) = π(1 − 1/p − 1/q − 1/r). - -Thus we are asking for integers which make the expression - -1 − 1/p − 1/q − 1/r - -strictly positive and as small as possible. This minimal value is 1/42, and - -1 − 1/2 − 1/3 − 1/7 = 1/42 - -gives a unique (up to permutation) triple of such integers; a brute-force check of this minimization appears below. This would indicate that the order |G| of the automorphism group is bounded by - -A(X)/A(D) ≤ 168(g − 1). - -However, a more delicate argument shows that this is an overestimate by the factor of two, because the group G can contain orientation-reversing transformations. For the orientation-preserving conformal automorphisms the bound is 84(g − 1). - -To obtain an example of a Hurwitz group, let us start with a (2,3,7)-tiling of the hyperbolic plane. Its full symmetry group is the full (2,3,7) triangle group generated by the reflections across the sides of a single fundamental triangle with the angles π/2, π/3 and π/7. Since a reflection flips the triangle and changes the orientation, we can join the triangles in pairs and obtain an orientation-preserving tiling polygon. - -A Hurwitz surface is obtained by 'closing up' a part of this infinite tiling of the hyperbolic plane to a compact Riemann surface of genus g. This will necessarily involve exactly 84(g − 1) double triangle tiles.
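The claim that 1/42 is the smallest positive value of 1 − 1/p − 1/q − 1/r over integers p, q, r > 1 can be confirmed by brute force. The search bound of 100 below is an arbitrary but safe illustration choice, since any value that small forces p, q, r to be small:

```python
from fractions import Fraction

best = None
for p in range(2, 101):
    for q in range(p, 101):
        for r in range(q, 101):
            val = 1 - Fraction(1, p) - Fraction(1, q) - Fraction(1, r)
            if val > 0 and (best is None or val < best[0]):
                best = (val, (p, q, r))

print(best)  # (Fraction(1, 42), (2, 3, 7))
```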
- -The following two regular tilings have the desired symmetry group; the rotational group corresponds to rotation about an edge, a vertex, and a face, while the full symmetry group would also include a reflection. The polygons in the tiling are not fundamental domains – the tiling by (2,3,7) triangles refines both of these and is not regular. - -Wythoff constructions yield further uniform tilings, eight uniform tilings in all, including the two regular ones given here. These all descend to Hurwitz surfaces, yielding tilings of the surfaces (triangulation, tiling by heptagons, etc.). - -From the arguments above it can be inferred that a Hurwitz group G is characterized by the property that it is a finite quotient of the group with two generators a and b and three relations -$$ -a^2 = b^3 = (ab)^7 = 1, -$$ - -thus G is a finite group generated by two elements of orders two and three, whose product is of order seven. More precisely, any Hurwitz surface, that is, a hyperbolic surface that realizes the maximum order of the automorphism group for the surfaces of a given genus, can be obtained by the construction given. - -This is the last part of the theorem of Hurwitz. - -The smallest Hurwitz group is the projective special linear group PSL(2,7), of order 168, and the corresponding curve is the Klein quartic curve. This group is also isomorphic to PSL(3,2). - -Next is the Macbeath curve, with automorphism group PSL(2,8) of order 504. Many more finite simple groups are Hurwitz groups; for instance all but 64 of the alternating groups are Hurwitz groups, the largest non-Hurwitz example being of degree 167. The smallest alternating group that is a Hurwitz group is A15. - -Most projective special linear groups of large rank are Hurwitz groups. For lower ranks, fewer such groups are Hurwitz. For $n_p$ the order of $p$ modulo 7, one has that PSL(2,q) is Hurwitz if and only if either $q=7$ or $q = p^{n_p}$. Indeed, PSL(3,q) is Hurwitz if and only if q = 2, PSL(4,q) is never Hurwitz, and PSL(5,q) is Hurwitz if and only if $q = 7^4$ or $q = p^{n_p}$. - -Similarly, many groups of Lie type are Hurwitz. The finite classical groups of large rank are Hurwitz. The exceptional Lie groups of type G2 and the Ree groups of type 2G2 are nearly always Hurwitz. Other families of exceptional and twisted Lie groups of low rank have also been shown to be Hurwitz. - -There are 12 sporadic groups that can be generated as Hurwitz groups: the Janko groups J1, J2 and J4, the Fischer groups Fi22 and Fi'24, the Rudvalis group, the Held group, the Thompson group, the Harada–Norton group, the third Conway group Co3, the Lyons group, and the Monster. - -The largest order |Aut(X)| for a Riemann surface X of genus g, together with a surface X0 attaining it, is known for 2≤g≤10. - -In this range, a Hurwitz curve exists only in genus g=3 and g=7.
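As a quick sanity check on the two smallest examples, |PSL(2,q)| = q(q² − 1)/gcd(2, q − 1), and the Hurwitz bound 84(g − 1) must reproduce these orders for the Klein quartic (g = 3) and the Macbeath curve (g = 7). A small illustrative script:

```python
from math import gcd

def psl2_order(q: int) -> int:
    # |PSL(2,q)| = q * (q^2 - 1) / gcd(2, q - 1)
    return q * (q * q - 1) // gcd(2, q - 1)

print(psl2_order(7), 84 * (3 - 1))   # 168 168  (Klein quartic)
print(psl2_order(8), 84 * (7 - 1))   # 504 504  (Macbeath curve)
```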
diff --git a/wiki/wikipedia/982.txt b/wiki/wikipedia/982.txt deleted file mode 100644 index 6c63ecd8f76ba4a79145b0522416e2b3c4aa6892..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/982.txt +++ /dev/null @@ -1,36 +0,0 @@ -#redirect Apartness relation diff --git a/wiki/wikipedia/983.txt b/wiki/wikipedia/983.txt deleted file mode 100644 index 76f0c8d4c350f93ba35dff382b396799acaff214..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/983.txt +++ /dev/null @@ -1,84 +0,0 @@ -Cantor's intersection theorem refers to two closely related theorems in general topology and real analysis, named after Georg Cantor, about intersections of decreasing nested sequences of non-empty compact sets. - -Theorem. Let $S$ be a topological space. A decreasing nested sequence of non-empty compact, closed subsets of $S$ has a non-empty intersection. In other words, supposing $(C_k)_{k \geq 0}$ is a sequence of non-empty compact subsets of S satisfying -$$ -C_0 \supset C_1 \supset \cdots \supset C_n \supset C_{n+1} \supset \cdots, -$$ - -it follows that -$$ -\bigcap_{k = 0}^\infty C_k \neq \emptyset. -$$ - -The closedness condition may be omitted in situations where every compact subset of $S$ is closed, for example when $S$ is Hausdorff. - -Proof. Assume, by way of contradiction, that ${\textstyle \bigcap_{k = 0}^\infty C_k}=\emptyset$. For each $k$, let $U_k=C_0\setminus C_k$. Since ${\textstyle \bigcup_{k = 0}^\infty U_k}=C_0\setminus {\textstyle \bigcap_{k = 0}^\infty C_k}$ and ${\textstyle \bigcap_{k = 0}^\infty C_k}=\emptyset$, we have ${\textstyle \bigcup_{k = 0}^\infty U_k}=C_0$. Since the $C_k$ are closed relative to $S$ and therefore, also closed relative to $C_0$, the $U_k$, their set complements in $C_0$, are open relative to $C_0$. - -Since $C_0\subset S$ is compact and $\{U_k \vert k \geq 0\}$ is an open cover (on $C_0$) of $C_0$, a finite subcover $\{U_{k_1}, U_{k_2}, \ldots, U_{k_m}\}$ can be extracted. Let $M=\max_{1\leq i\leq m} {k_i}$. Then ${\textstyle \bigcup_{i = 1}^m U_{k_i}}=U_M$ because $U_1\subset U_2\subset\cdots\subset U_n\subset U_{n+1}\cdots$, by the nesting hypothesis for the collection $(C_k)_{k \geq 0}$. Consequently, $C_0={\textstyle \bigcup_{i = 1}^m U_{k_i}} = U_M$. But then $C_M=C_0\setminus U_M=\emptyset$, a contradiction. ∎ - -The theorem in real analysis draws the same conclusion for closed and bounded subsets of the set of real numbers $\mathbb{R}$. It states that a decreasing nested sequence $(C_k)_{k \geq 0}$ of non-empty, closed and bounded subsets of $\mathbb{R}$ has a non-empty intersection. - -This version follows from the general topological statement in light of the Heine-Borel theorem, which states that sets of real numbers are compact if and only if they are closed and bounded. However, it is typically used as a lemma in proving said theorem, and therefore warrants a separate proof. - -As an example, if $C_k=[0,1/k]$ for $k \geq 1$, the intersection over $(C_k)_{k \geq 1}$ is $\{0\}$. On the other hand, both the sequence of open bounded sets $C_k=(0,1/k)$ and the sequence of unbounded closed sets $C_k=[k,\infty)$ have empty intersection. All these sequences are properly nested. - -This version of the theorem generalizes to $\mathbf{R}^n$, the set of $n$-element vectors of real numbers, but does not generalize to arbitrary metric spaces.
For example, in the space of rational numbers, the sets -$$ -C_k = [\sqrt{2}, \sqrt{2}+1/k] = (\sqrt{2}, \sqrt{2}+1/k) -$$ - -are closed and bounded, but their intersection is empty. - -Note that this contradicts neither the topological statement, as the sets $C_k$ are not compact, nor the variant below, as the rational numbers are not complete with respect to the usual metric. - -A simple corollary of the theorem is that the Cantor set is nonempty, since it is defined as the intersection of a decreasing nested sequence of sets, each of which is defined as the union of a finite number of closed intervals; hence each of these sets is non-empty, closed, and bounded. In fact, the Cantor set contains uncountably many points. - -Theorem. Let $(C_k)_{k \geq 0}$ be a sequence of non-empty, closed, and bounded subsets of $\mathbb{R}$ satisfying -$$ -C_0 \supset C_1 \supset \cdots C_n \supset C_{n+1} \cdots. -$$ - -Then, -$$ -\bigcap_{k = 0}^\infty C_k \neq \emptyset. -$$ - -Proof. Each nonempty, closed, and bounded subset $C_k\subset\mathbb{R}$ admits a minimal element $x_k$. Since for each $k$, we have -$$ -x_{k+1} \in C_{k+1} \subset C_k -$$, - -it follows that -$$ -x_k \le x_{k+1} -$$, - -so $(x_k)_{k \geq 0}$ is an increasing sequence contained in the bounded set $C_0$. The monotone convergence theorem for bounded sequences of real numbers now guarantees the existence of a limit point -$$ -x=\lim_{k\to \infty} x_k. -$$ - -For fixed $k$, $x_j\in C_k$ for all $j\geq k$, and since $C_k$ is closed and $x$ is a limit point, it follows that $x\in C_k$. Our choice of $k$ is arbitrary, hence $x$ belongs to ${\textstyle \bigcap_{k = 0}^\infty C_k}$ and the proof is complete. ∎ - -In a complete metric space, the following variant of Cantor's intersection theorem holds. - -Theorem. Suppose that $X$ is a complete metric space, and $(C_k)_{k \geq 1}$ is a sequence of non-empty closed nested subsets of $X$ whose diameters tend to zero: -$$ -\lim_{k\to\infty} \operatorname{diam}(C_k) = 0, -$$ - -where $\operatorname{diam}(C_k)$ is defined by -$$ -\operatorname{diam}(C_k) = \sup\{d(x,y) \mid x,y\in C_k\}. -$$ - -Then the intersection of the $C_k$ contains exactly one point: -$$ -\bigcap_{k=1}^\infty C_k = \{x\} -$$ - -for some $x \in X$. - -Proof (sketch). Since the diameters tend to zero, the diameter of the intersection of the $C_k$ is zero, so it is either empty or consists of a single point. So it is sufficient to show that it is not empty. Pick an element $x_k\in C_k$ for each $k$. Since the diameter of $C_k$ tends to zero and the $C_k$ are nested, the $x_k$ form a Cauchy sequence. Since the metric space is complete this Cauchy sequence converges to some point $x$. Since each $C_k$ is closed, and $x$ is a limit of a sequence in $C_k$, $x$ must lie in $C_k$. This is true for every $k$, and therefore the intersection of the $C_k$ must contain $x$. ∎ - -A converse to this theorem is also true: if $X$ is a metric space with the property that the intersection of any nested family of non-empty closed subsets whose diameters tend to zero is non-empty, then $X$ is a complete metric space. (To prove this, let $(x_k)_{k \geq 1}$ be a Cauchy sequence in $X$, and let $C_k$ be the closure of the tail $(x_j)_{j \geq k}$ of this sequence.) 
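The complete-metric-space variant is exactly what makes the bisection method work: the bracketing intervals form a nested sequence of non-empty closed sets with diameters tending to zero, so their intersection is a single point, and the sign condition forces that point to be a root. A minimal illustrative sketch (the example function is arbitrary):

```python
def bisect(f, a, b, steps=60):
    """Locate a root of f in [a, b], assuming f(a) and f(b) differ in sign.

    The intervals [a, b] produced here are nested, closed, non-empty, with
    diameters (b - a) / 2**n -> 0, so by the complete-metric-space version
    of Cantor's intersection theorem they intersect in exactly one point.
    """
    assert f(a) * f(b) <= 0
    for _ in range(steps):
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m          # root lies in the left half
        else:
            a = m          # root lies in the right half
    return (a + b) / 2

print(bisect(lambda x: x * x - 2, 0.0, 2.0))  # ~1.41421356, i.e. sqrt(2)
```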
diff --git a/wiki/wikipedia/984.txt b/wiki/wikipedia/984.txt deleted file mode 100644 index fc35725bcb4d80149ba1c0ebe5652d258a9be3f7..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/984.txt +++ /dev/null @@ -1,73 +0,0 @@ -In mathematics, specifically transcendental number theory, the six exponentials theorem is a result that, given the right conditions on the exponents, guarantees the transcendence of at least one of a set of exponentials. - -If x1, x2, ..., xd are d complex numbers that are linearly independent over the rational numbers, and y1, y2, ..., yl are l complex numbers that are also linearly independent over the rational numbers, and if dl > d + l, then at least one of the following dl numbers is transcendental: -$$ -\exp(x_i y_j),\quad (1 \leq i \leq d,\ 1 \leq j \leq l). -$$ - -The most interesting case is when d = 3 and l = 2, in which case there are six exponentials, hence the name of the result. The theorem is weaker than the related but thus far unproved four exponentials conjecture, whereby the strict inequality dl > d + l is replaced with dl ≥ d + l, thus allowing d = l = 2. - -The theorem can be stated in terms of logarithms by introducing the set L of logarithms of algebraic numbers: -$$ -\mathcal{L}=\{\lambda\in\mathbb{C}:e^\lambda\in\overline{\mathbb{Q}}\}. -$$ - -The theorem then says that if λij are elements of L for i = 1, 2 and j = 1, 2, 3, such that λ11, λ12, and λ13 are linearly independent over the rational numbers, and λ11 and λ21 are also linearly independent over the rational numbers, then the matrix -$$ -M=\begin{pmatrix}\lambda_{11}&\lambda_{12}&\lambda_{13} \\ \lambda_{21}&\lambda_{22}&\lambda_{23}\end{pmatrix} -$$ - -has rank 2. - -A special case of the result where x1, x2, and x3 are logarithms of positive integers, y1 = 1, and y2 is real, was first mentioned in a paper by Leonidas Alaoglu and Paul Erdős from 1944 in which they tried to prove that the ratio of consecutive colossally abundant numbers is always prime. They claimed that Carl Ludwig Siegel knew of a proof of this special case, but it is not recorded. Using the special case they managed to prove that the ratio of consecutive colossally abundant numbers is always either a prime or a semiprime. - -The theorem was first explicitly stated and proved in its complete form independently by Serge Lang and Kanakanahalli Ramachandra in the 1960s. - -A stronger, related result is the five exponentials theorem, which is as follows. Let x1, x2 and y1, y2 be two pairs of complex numbers, with each pair being linearly independent over the rational numbers, and let γ be a non-zero algebraic number. Then at least one of the following five numbers is transcendental: -$$ -e^{x_1 y_1}, e^{x_1 y_2}, e^{x_2 y_1}, e^{x_2 y_2}, e^{\gamma x_2/x_1}. -$$ - -This theorem implies the six exponentials theorem and in turn is implied by the as yet unproven four exponentials conjecture, which says that in fact one of the first four numbers on this list must be transcendental. - -Another related result that implies both the six exponentials theorem and the five exponentials theorem is the sharp six exponentials theorem. This theorem is as follows.
Let x1, x2, and x3 be complex numbers that are linearly independent over the rational numbers, and let y1 and y2 be a pair of complex numbers that are linearly independent over the rational numbers, and suppose that βij are six algebraic numbers for 1 ≤ i ≤ 3 and 1 ≤ j ≤ 2 such that the following six numbers are algebraic: -$$ -e^{x_1 y_1-\beta_{11}}, e^{x_1 y_2-\beta_{12}}, e^{x_2 y_1-\beta_{21}}, e^{x_2 y_2-\beta_{22}}, e^{x_3 y_1-\beta_{31}}, e^{x_3 y_2-\beta_{32}}. -$$ - -Then xi yj = βij for 1 ≤ i ≤ 3 and 1 ≤ j ≤ 2. The six exponentials theorem then follows by setting βij = 0 for every i and j, while the five exponentials theorem follows by setting x3 = γ/x1 and using Baker's theorem to ensure that the xi are linearly independent. - -There is a sharp version of the five exponentials theorem as well, although it is as yet unproven and so is known as the sharp five exponentials conjecture. This conjecture implies both the sharp six exponentials theorem and the five exponentials theorem, and is stated as follows. Let x1, x2 and y1, y2 be two pairs of complex numbers, with each pair being linearly independent over the rational numbers, and let α, β11, β12, β21, β22, and γ be six algebraic numbers with γ ≠ 0 such that the following five numbers are algebraic: -$$ -e^{x_1 y_1-\beta_{11}}, e^{x_1 y_2-\beta_{12}}, e^{x_2 y_1-\beta_{21}}, e^{x_2 y_2-\beta_{22}}, e^{(\gamma x_2/x_1)-\alpha}. -$$ - -Then xi yj = βij for 1 ≤ i, j ≤ 2 and γx2 = αx1. - -A consequence of this conjecture that is not currently known would be the transcendence of $e^{\pi^2}$, by setting x1 = y1 = β11 = 1, x2 = y2 = iπ, and all the other values in the statement to be zero. - -A further strengthening of the theorems and conjectures in this area is given by the strong versions. The strong six exponentials theorem is a result proved by Damien Roy that implies the sharp six exponentials theorem. This result concerns the vector space over the algebraic numbers generated by 1 and all logarithms of algebraic numbers, denoted here as L. So L is the set of all complex numbers of the form -$$ -\beta_0+\sum_{i=1}^n \beta_i\log\alpha_i, -$$ - -for some n ≥ 0, where all the βi and αi are algebraic and every branch of the logarithm is considered. The strong six exponentials theorem then says that if x1, x2, and x3 are complex numbers that are linearly independent over the algebraic numbers, and if y1 and y2 are a pair of complex numbers that are also linearly independent over the algebraic numbers then at least one of the six numbers xi yj for 1 ≤ i ≤ 3 and 1 ≤ j ≤ 2 is not in L. This is stronger than the standard six exponentials theorem, which says that one of these six numbers is not simply the logarithm of an algebraic number. - -There is also a strong five exponentials conjecture formulated by Michel Waldschmidt. It would imply both the strong six exponentials theorem and the sharp five exponentials conjecture. This conjecture claims that if x1, x2 and y1, y2 are two pairs of complex numbers, with each pair being linearly independent over the algebraic numbers, then at least one of the following five numbers is not in L: -$$ -x_1y_1,x_1y_2,x_2y_1,x_2y_2,x_1/x_2. -$$ - -All the above conjectures and theorems are consequences of the unproven extension of Baker's theorem, that logarithms of algebraic numbers that are linearly independent over the rational numbers are automatically algebraically independent too.
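For a concrete instance of the hypotheses: x = (log 2, log 3, log 5) is linearly independent over Q by unique factorization, and y = (1, √2) is as well, so the six exponentials theorem guarantees that at least one of 2, 3, 5, 2^√2, 3^√2, 5^√2 is transcendental (here Gelfond–Schneider already identifies 2^√2). The rational independence of the logarithms can be probed numerically with an integer-relation search; the sketch below uses mpmath's pslq, and finding no relation at this precision is evidence, not proof:

```python
from mpmath import mp, log, sqrt, pslq, exp

mp.dps = 60  # working precision in decimal digits

# Look for a small integer relation a*log(2) + b*log(3) + c*log(5) = 0.
# None means no relation was found with coefficients up to 10**6.
print(pslq([log(2), log(3), log(5)], maxcoeff=10**6))

# One of the six numbers exp(x_i * y_j): 2**sqrt(2), transcendental by
# the Gelfond-Schneider theorem.
print(exp(log(2) * sqrt(2)))
```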
- -The exponential function $e^z$ uniformizes the exponential map of the multiplicative group $\mathbb{G}_m$. Therefore, we can reformulate the six exponentials theorem more abstractly as follows: - -Let G = $\mathbb{G}_m \times \mathbb{G}_m$ and take u : C → G(C) to be a non-zero complex-analytic group homomorphism. Define L to be the set of complex numbers l for which u(l) is an algebraic point of G. If a minimal generating set of L over Q has more than two elements then the image u(C) is an algebraic subgroup of G(C). - -(In order to derive the classical statement, set $u(z) = (e^{y_1 z}, e^{y_2 z})$ and note that $\mathbb{Q}x_1 + \mathbb{Q}x_2 + \mathbb{Q}x_3$ is a subset of L). - -In this way, the statement of the six exponentials theorem can be generalized to an arbitrary commutative group variety G over the field of algebraic numbers. This generalized six exponentials conjecture, however, seems out of reach in the current state of transcendental number theory. - -For the special but interesting cases G = $\mathbb{G}_m$ × E and G = E × E′, where E, E′ are elliptic curves over the field of algebraic numbers, results towards the generalized six exponentials conjecture were proven by Aleksander Momot. These results involve the exponential function $e^z$ and a Weierstrass function $\wp$ resp. two Weierstrass functions $\wp, \wp'$ with algebraic invariants $g_2, g_3, g_2', g_3'$, instead of the two exponential functions $e^{y_1z}, e^{y_2z}$ in the classical statement. - -Let G = $\mathbb{G}_m$ × E and suppose E is not isogenous to a curve over a real field and that u(C) is not an algebraic subgroup of G(C). Then L is generated over Q either by two elements x1, x2, or three elements x1, x2, x3 which are not all contained in a real line $\mathbb{R}c$, where c is a non-zero complex number. A similar result is shown for G = E × E′. diff --git a/wiki/wikipedia/985.txt b/wiki/wikipedia/985.txt deleted file mode 100644 index 9b1ddce3aa8a875b7e42c16ed4e104301908b16d..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/985.txt +++ /dev/null @@ -1,122 +0,0 @@ -In mathematics, Pappus's centroid theorem (also known as the Guldinus theorem, Pappus–Guldinus theorem or Pappus's theorem) is either of two related theorems dealing with the surface areas and volumes of surfaces and solids of revolution. - -The theorems are attributed to Pappus of Alexandria and Paul Guldin. Pappus's statement of the theorem first appeared in print in 1659, but it was known earlier, to Kepler in 1615 and to Guldin in 1640. - -The first theorem states that the surface area A of a surface of revolution generated by rotating a plane curve C about an axis external to C and on the same plane is equal to the product of the arc length s of C and the distance d traveled by the geometric centroid of C: -$$ -A = sd. -$$ - -For example, the surface area of the torus with minor radius r and major radius R is -$$ -A = (2\pi r)(2\pi R) = 4\pi^2 R r. -$$ - -The second theorem states that the volume V of a solid of revolution generated by rotating a plane figure F about an external axis is equal to the product of the area A of F and the distance d traveled by the geometric centroid of F. (The centroid of F is usually different from the centroid of its boundary curve C.) That is: -$$ -V = Ad. -$$ - -For example, the volume of the torus with minor radius r and major radius R is -$$ -V = (\pi r^2)(2\pi R) = 2\pi^2 R r^2. -$$ - -This special case was derived by Johannes Kepler using infinitesimals.
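Both torus formulas are easy to confirm with a computer algebra system. The sketch below (illustrative only, using sympy) recomputes the volume directly as $V = 2\pi \iint_F x\, dA$ over the disk of radius r centered at distance R from the axis, in the polar coordinates $x = R + s\cos t$, and compares with Pappus's $2\pi^2 R r^2$:

```python
import sympy as sp

R, r, s, t = sp.symbols("R r s t", positive=True)

# F is the disk of radius r centered at (R, 0) in the xz-plane,
# parametrized by x = R + s*cos(t), z = s*sin(t), area element s ds dt.
x = R + s * sp.cos(t)
integral = sp.integrate(x * s, (s, 0, r), (t, 0, 2 * sp.pi))

V = 2 * sp.pi * integral
print(sp.simplify(V))                                      # 2*pi**2*R*r**2
print(sp.simplify(V - (sp.pi * r**2) * (2 * sp.pi * R)))   # 0, matching V = Ad
```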
Let $A$ be the area of $F$, $W$ the solid of revolution of $F$, and $V$ the volume of $W$. Suppose $F$ starts in the $xz$-plane and rotates around the $z$-axis. The distance of the centroid of $F$ from the $z$-axis is its $x$-coordinate -$$ -R = \frac{\iint_F xdA}{A}, -$$ - -and the theorem states that -$$ -V = Ad = A \cdot 2\pi R = 2\pi\iint_F xdA. -$$ - -To show this, let $F$ be in the xz-plane, parametrized by $\mathbf{\Phi}(u,v) = (x(u,v),0,z(u,v))$ for $(u,v)\in F^*$, a parameter region. Since $\mathbf{\Phi}$ is essentially a mapping from $\mathbb{R}^2$ to $\mathbb{R}^2$, the area of $F$ is given by the change of variables formula: -$$ -A = \iint_F dA = \iint_{F^*} \left|\frac{\partial(x,z)}{\partial(u,v)}\right|dudv = \iint_{F^*} \left|\frac{\partial x}{\partial u}\frac{\partial z}{\partial v} - \frac{\partial x}{\partial v}\frac{\partial z}{\partial u}\right|dudv, -$$ - -where $\left|\tfrac{\partial(x,z)}{\partial(u,v)}\right|$ is the determinant of the Jacobian matrix of the change of variables. - -The solid $W$ has the toroidal parametrization $\mathbf{\Phi}(u,v,\theta) = (x(u,v)\cos\theta,x(u,v)\sin\theta,z(u,v))$ for $(u,v,\theta)$ in the parameter region $W^*=F^*\times [0,2\pi]$; and its volume is -$$ -V = \iiint_W dV = \iiint_{W^*} \left|\frac{\partial(x,y,z)}{\partial(u,v,\theta)}\right|dudvd\theta. -$$ - -Expanding, - -\begin{align} - -\left|\frac{\partial(x,y,z)}{\partial(u,v,\theta)}\right| & = - -\left|\det\begin{bmatrix} - -\frac{\partial x}{\partial u}\cos\theta & \frac{\partial x}{\partial v}\cos\theta & -x\sin\theta \\[6pt] - -\frac{\partial x}{\partial u}\sin\theta & \frac{\partial x}{\partial v}\sin\theta & x\cos\theta \\[6pt] - -\frac{\partial z}{\partial u} & \frac{\partial z}{\partial v} & 0 - -\end{bmatrix}\right| \\[5pt] - -& = \left|-\frac{\partial z}{\partial v}\frac{\partial x}{\partial u}x + \frac{\partial z}{\partial u}\frac{\partial x}{\partial v}x\right| - -=\ \left|-x\frac{\partial(x,z)}{\partial(u,v)}\right| = x\left|\frac{\partial(x,z)}{\partial(u,v)}\right|. - -\end{align} - -The last equality holds because the axis of rotation must be external to $F$, meaning $x \geq 0$. Now, - -\begin{align} - -V &= \iiint_{W^*} \left|\frac{\partial(x,y,z)}{\partial(u,v,\theta)}\right|dudvd\theta = \int_0^{2\pi}\!\!\!\!\iint_{F^*} x(u,v)\left|\frac{\partial(x,z)}{\partial(u,v)}\right|dudvd\theta \\[6pt] - -& = 2\pi\iint_{F^*} x(u,v)\left|\frac{\partial(x,z)}{\partial(u,v)}\right|dudv = 2\pi\iint_F xdA - -\end{align} - -by change of variables. - -The theorems can be generalized for arbitrary curves and shapes, under appropriate conditions. - -Goodman & Goodman generalize the second theorem as follows. If the figure F moves through space so that it remains perpendicular to the curve L traced by the centroid of F, then it sweeps out a solid of volume V = Ad, where A is the area of F and d is the length of L. (This assumes the solid does not intersect itself.) In particular, F may rotate about its centroid during the motion. - -However, the corresponding generalization of the first theorem is only true if the curve L traced by the centroid lies in a plane perpendicular to the plane of C. - -In general, one can generate an $n$-dimensional solid by rotating an $(n-p)$-dimensional solid $F$ around a $p$-dimensional sphere. This is called an $n$-solid of revolution of species $p$. Let the $p$-th centroid of $F$ be defined by -$$ -R = \frac{\iint_F x^pdA}{A}, -$$ - -Then Pappus' theorems generalize to:
      - -Volume of $n$-solid of revolution of species $p$ - -
      = (Volume of generating $(n{-}p)$-solid) $\times$ (Surface area of $p$-sphere traced by the $p$-th centroid of the generating solid) - -
      - -and - -
      - -Surface area of $n$-solid of revolution of species $p$ - -
      = (Surface area of generating $(n{-}p)$-solid) $\times$ (Surface area of $p$-sphere traced by the $p$-th centroid of the generating solid) - -
      - -The original theorems are the case with $n=3, p = 1$. diff --git a/wiki/wikipedia/986.txt b/wiki/wikipedia/986.txt deleted file mode 100644 index ebd8d62da9b67cdea82cbba320cedff28d82ba77..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/986.txt +++ /dev/null @@ -1,19 +0,0 @@ -Rush Hour is a sliding block puzzle invented by Nob Yoshigahara in the 1970s. It was first sold in the United States in 1996. It is now being manufactured by ThinkFun (formerly Binary Arts). - -ThinkFun now sells Rush Hour spin-offs Rush Hour Jr., Safari Rush Hour, Railroad Rush Hour, Rush Hour Brain Fitness and Rush Hour Shift, with puzzles by Scott Kim. - -The board is a 6×6 grid with grooves in the tiles to allow cars to slide, card tray to hold the cards, current active card holder and an exit hole. The game comes with 16 vehicles (12 cars, 4 trucks), each colored differently, and 40 puzzle cards. Cars and trucks are both one square wide, but cars are two squares long and trucks are three squares long. Vehicles can only be moved along a straight line on the grid; rotation is forbidden. Puzzle cards, each with a level number that indicates the difficulty of the challenge, show the starting positions of cars and trucks. Not all cars and trucks are used in all challenges. - -The goal of the game is to get only the red car out through the exit of the board by moving the other vehicles out of its way. However, the cars and trucks (set up before play, according to a puzzle card) obstruct the path which makes the puzzle even more difficult. - -The Regular Edition comes with forty puzzles split into four different difficulties, ranging from Beginner to Expert. The Deluxe Edition has a black playing board, card box in place of the Regular Edition's card tray, and sixty new puzzles with an extra difficulty: the Grand Master. The Ultimate Collector's Edition has a playing board that can hold vehicles not in play and can display the active card in a billboard-like display. The Ultimate Collectors Edition also includes 155 new puzzles (with some of them being from card set three) and a white limo. In 2011, the board was changed to black, like the Deluxe Edition. - -An iOS version of the game was released in 2010. - -Three official expansions, called "add-on packs", were released: Card Set 2, which comes with a red sports car that takes up 2 squares; Card Set 3, which comes with a white limo that takes up 3 squares; and Card Set 4, which comes with a taxi that takes up 2 squares. Each set also come with 40 new exclusive challenges—from Intermediate to Grand Master—that make use of the new vehicles in place of (or in addition to) the red car. All three of the expansion packs will work with all editions of the game. Also, like the Regular Edition of the game in 2011, the cards of all three expansions were changed to have new levels and design to match the new board color of the Regular Edition. - -When generalized so that it can be played on an arbitrarily large board, the problem of deciding if a Rush Hour problem has a solution is PSPACE-complete. This is proved by reducing a graph game called nondeterministic constraint logic, which is known to be PSPACE-complete, to generalized Rush Hour positions. In 2005, Tromp and Cilibrasi showed that Rush Hour is still PSPACE-complete when the cars are of size 2 only. They also conjectured that Rush Hour is still nontrivial when the cars are of size 1 only. - -The hardest possible initial configuration has been shown to take 93 steps. 
Counting the necessary moves instead of steps, the most difficult starting configuration requires 51 moves. diff --git a/wiki/wikipedia/987.txt b/wiki/wikipedia/987.txt deleted file mode 100644 index e86bf31764fdee6afbd6f971b2dadb93e170d90a..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/987.txt +++ /dev/null @@ -1,11 +0,0 @@ -In number theory, the Hasse norm theorem states that if L/K is a cyclic extension of number fields, then a nonzero element of K that is a local norm everywhere is a global norm. - -Here to be a global norm means to be an element k of K such that there is an element l of L with $\mathbf{N}_{L/K}(l) = k$; in other words k is a relative norm of some element of the extension field L. To be a local norm means that for some prime p of K and some prime P of L lying over p, k is a norm from $L_P$; here the "prime" p can be an archimedean valuation, and the theorem is a statement about completions in all valuations, archimedean and non-archimedean. - -The theorem is no longer true in general if the extension is abelian but not cyclic. Hasse gave the counterexample that 3 is a local norm everywhere for the extension ${\mathbf Q}(\sqrt{-3},\sqrt{13})/{\mathbf Q}$ but is not a global norm. Serre and Tate showed that another counterexample is given by the field ${\mathbf Q}(\sqrt{13},\sqrt{17})/{\mathbf Q}$ where every rational square is a local norm everywhere but $5^2$ is not a global norm. - -This is an example of a theorem stating a local-global principle. - -The full theorem is due to Hasse. The special case when the degree n of the extension is 2 was proved by Hilbert, and the special case when n is prime was proved by Furtwängler. - -The Hasse norm theorem can be deduced from the theorem that an element of the Galois cohomology group $H^2(L/K)$ is trivial if it is trivial locally everywhere, which is in turn equivalent to the deep theorem that the first cohomology of the idele class group vanishes. This is true for all finite Galois extensions of number fields, not just cyclic ones. For cyclic extensions the group $H^2(L/K)$ is isomorphic to the Tate cohomology group $H^0(L/K)$ which describes which elements are norms, so for cyclic extensions it becomes Hasse's theorem that an element is a norm if it is a local norm everywhere. diff --git a/wiki/wikipedia/988.txt b/wiki/wikipedia/988.txt deleted file mode 100644 index bc436bc25a76538f796c37914ec788610dd72dee..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/988.txt +++ /dev/null @@ -1,107 +0,0 @@ -In logic and computer science, the Davis–Putnam–Logemann–Loveland (DPLL) algorithm is a complete, backtracking-based search algorithm for deciding the satisfiability of propositional logic formulae in conjunctive normal form, i.e. for solving the CNF-SAT problem. - -It was introduced in 1961 by Martin Davis, George Logemann and Donald W. Loveland and is a refinement of the earlier Davis–Putnam algorithm, which is a resolution-based procedure developed by Davis and Hilary Putnam in 1960. Especially in older publications, the Davis–Logemann–Loveland algorithm is often referred to as the "Davis–Putnam method" or the "DP algorithm". Other common names that maintain the distinction are DLL and DPLL. - -After more than 50 years the DPLL procedure still forms the basis for most efficient complete SAT solvers. It has recently been extended for automated theorem proving for fragments of first-order logic by way of the DPLL(T) algorithm.
- -The SAT problem is important from both theoretical and practical points of view. In complexity theory it was the first problem proved to be NP-complete, and can appear in a broad variety of applications such as model checking, automated planning and scheduling, and diagnosis in artificial intelligence. - -As such, it has been a hot topic in research for many years, and competitions between SAT solvers regularly take place. Modern DPLL implementations such as Chaff, zChaff, GRASP, and MiniSat have placed at the top of these competitions in recent years. - -Another application that often involves DPLL is automated theorem proving or satisfiability modulo theories (SMT), which is a SAT problem in which propositional variables are replaced with formulas of another mathematical theory. - -The basic backtracking algorithm runs by choosing a literal, assigning a truth value to it, simplifying the formula and then recursively checking if the simplified formula is satisfiable; if this is the case, the original formula is satisfiable; otherwise, the same recursive check is done assuming the opposite truth value. This is known as the splitting rule, as it splits the problem into two simpler sub-problems. The simplification step essentially removes all clauses that become true under the assignment from the formula, and all literals that become false from the remaining clauses. - -The DPLL algorithm improves over the backtracking algorithm by the eager use of the following rules at each step: - -; Unit propagation : If a clause is a unit clause, i.e. it contains only a single unassigned literal, this clause can only be satisfied by assigning the necessary value to make this literal true. Thus, no choice is necessary. Unit propagation consists in removing every clause containing a unit clause's literal and in discarding the complement of a unit clause's literal from every clause containing that complement. In practice, this often leads to deterministic cascades of units, thus avoiding a large part of the naive search space. - -; Pure literal elimination : If a propositional variable occurs with only one polarity in the formula, it is called pure. A pure literal can always be assigned in a way that makes all clauses containing it true. Thus, when it is assigned in such a way, these clauses do not constrain the search anymore, and can be deleted. - -Unsatisfiability of a given partial assignment is detected if one clause becomes empty, i.e. if all its variables have been assigned in a way that makes the corresponding literals false. Satisfiability of the formula is detected either when all variables are assigned without generating the empty clause, or, in modern implementations, if all clauses are satisfied. Unsatisfiability of the complete formula can only be detected after exhaustive search. - -The DPLL algorithm can be summarized in the following pseudocode, where Φ is the CNF formula: - -Input: A set of clauses Φ. - -Output: A Truth Value. - -function DPLL(Φ) - -if Φ is a consistent set of literals then - -return true; - -if Φ contains an empty clause then - -return false; - -for every unit clause {l} in Φ do - -Φ ← unit-propagate(l, Φ); - -for every literal l that occurs pure in Φ do - -Φ ← pure-literal-assign(l, Φ); - -l ← choose-literal(Φ); - -return DPLL(Φ ∧ {l}) or DPLL(Φ ∧ {not(l)}); - -In this pseudocode, unit-propagate(l, Φ) and pure-literal-assign(l, Φ) are functions that return the result of applying unit propagation and the pure literal rule, respectively, to the literal l and the formula Φ. In other words, they replace every occurrence of l with "true" and every occurrence of not l with "false" in the formula Φ, and simplify the resulting formula. The or in the return statement is a short-circuiting operator. Φ ∧ {l} denotes the simplified result of substituting "true" for l in Φ.
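For concreteness, here is a compact executable version of the same procedure. It is an illustrative sketch rather than a production solver: clauses are frozensets of nonzero integers, with a negative integer standing for a negated variable.

```python
def simplify(clauses, lit):
    """Assign lit True: drop satisfied clauses, strip falsified literals."""
    return [c - {-lit} for c in clauses if lit not in c]

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    # Unit propagation: repeatedly assign the literal of any unit clause.
    units = [next(iter(c)) for c in clauses if len(c) == 1]
    while units:
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        clauses = simplify(clauses, lit)
        units = [next(iter(c)) for c in clauses if len(c) == 1]
    if not clauses:
        return assignment                  # all clauses satisfied
    if any(not c for c in clauses):
        return None                        # empty clause: conflict
    # Pure literal elimination.
    literals = {l for c in clauses for l in c}
    for lit in literals:
        if -lit not in literals:
            assignment[abs(lit)] = lit > 0
            return dpll(simplify(clauses, lit), assignment)
    # Splitting rule on an arbitrary literal from the first clause.
    lit = next(iter(clauses[0]))
    for choice in (lit, -lit):
        assignment[abs(choice)] = choice > 0
        result = dpll(simplify(clauses, choice), assignment)
        if result is not None:
            return result
    return None

formula = [frozenset(c) for c in [{1, 2}, {-1, 3}, {-3, 2}]]
print(dpll(formula))                       # e.g. {2: True, 1: False}
```

The splitting step is where the choice of branching literal, discussed below, would plug in; this sketch simply takes the first literal it sees.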
The algorithm terminates in one of two cases. Either the CNF formula Φ is found to comprise a consistent set of literals – that is, there is no l and ¬l for any literal l in the formula (there are only pure literals). If this is the case, the variables can be trivially satisfied by setting them to the respective polarity of the encompassing literal in the valuation. Otherwise, when the formula contains an empty clause, the clause is vacuously false because a disjunction requires at least one member that is true for the overall set to be true. In this case, the existence of such a clause implies that the formula (evaluated as a conjunction of all clauses) cannot evaluate to true and must be unsatisfiable. - -The pseudocode DPLL function only returns whether the final assignment satisfies the formula or not. In a real implementation, the partial satisfying assignment typically is also returned on success; this can be derived from the consistent set of literals of the first if statement of the function. - -The Davis–Logemann–Loveland algorithm depends on the choice of branching literal, which is the literal considered in the backtracking step. As a result, this is not exactly an algorithm, but rather a family of algorithms, one for each possible way of choosing the branching literal. Efficiency is strongly affected by the choice of the branching literal: there exist instances for which the running time is constant or exponential depending on the choice of the branching literals. Such choice functions are also called heuristic functions or branching heuristics. - -Davis, Logemann, and Loveland (1961) developed this algorithm. - -Some properties of this original algorithm are: - -* It is based on search. - -* It is the basis for almost all modern SAT solvers. - -* It does not use learning or non-chronological backtracking (introduced in 1996). - -An example run of a DPLL algorithm with chronological backtracking proceeds as follows: - -* All clauses making a CNF formula - -* Pick a variable - -* Make a decision: variable a = False (0), so the green clauses become True - -* After making several decisions, an implication graph is found that leads to a conflict - -* Backtrack to the immediate level and by force assign the opposite value to that variable - -* The forced decision still leads to another conflict - -* Backtrack to the previous level and make a forced decision - -* Make a new decision, but it leads to a conflict - -* Make a forced decision, but again it leads to a conflict - -* Backtrack to the previous level - -* Continue in this way until reaching the final implication graph - -In the 2010s, work on improving the algorithm has proceeded in three directions: - -# Defining different policies for choosing the branching literals. - -# Defining new data structures to make the algorithm faster, especially the part on unit propagation. - -# Defining variants of the basic backtracking algorithm. The latter direction includes non-chronological backtracking (aka backjumping) and clause learning.
These refinements describe a method of backtracking after reaching a conflict clause which "learns" the root causes (assignments to variables) of the conflict in order to avoid reaching the same conflict again. The resulting conflict-driven clause learning (CDCL) SAT solvers were the state of the art as of 2014.

A newer algorithm, dating from 1990, is Stålmarck's method. Since 1986, (reduced ordered) binary decision diagrams have also been used for SAT solving.

Runs of DPLL-based algorithms on unsatisfiable instances correspond to tree resolution refutation proofs.
diff --git a/wiki/wikipedia/989.txt b/wiki/wikipedia/989.txt
deleted file mode 100644
index aa7985baee7b7adedc48f129d8930b5200031831..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/989.txt
+++ /dev/null
@@ -1,40 +0,0 @@
In mathematics, the dimension theorem for vector spaces states that all bases of a vector space have equally many elements. This number of elements may be finite or infinite (in the latter case, it is a cardinal number), and defines the dimension of the vector space.

Formally, the dimension theorem for vector spaces states that

Given a vector space V, any two bases have the same cardinality.

As a basis is a generating set that is linearly independent, the theorem is a consequence of the following theorem, which is also useful:

In a vector space V, if G is a generating set, and I is a linearly independent set, then the cardinality of I is not larger than the cardinality of G.

In particular, if V is finitely generated, then all its bases are finite and have the same number of elements.

While the proof of the existence of a basis for any vector space in the general case requires Zorn's lemma and is in fact equivalent to the axiom of choice, the uniqueness of the cardinality of the basis requires only the ultrafilter lemma, which is strictly weaker (the proof given below, however, assumes trichotomy, i.e., that all cardinal numbers are comparable, a statement which is also equivalent to the axiom of choice). The theorem can be generalized to arbitrary R-modules for rings R having invariant basis number.

In the finitely generated case the proof uses only elementary arguments of algebra, and does not require the axiom of choice or its weaker variants.

Let V be a vector space, {ai: i ∈ I} be a linearly independent set of elements of V, and {bj: j ∈ J} be a generating set. One has to prove that the cardinality of I is not larger than that of J.

If J is finite, this results from the Steinitz exchange lemma. (Indeed, the Steinitz exchange lemma implies every finite subset of I has cardinality not larger than that of J, hence I is finite with cardinality not larger than that of J.) If J is finite, a proof based on matrix theory is also possible.

Assume that J is infinite. If I is finite, there is nothing to prove. Thus, we may assume that I is also infinite. Let us suppose that the cardinality of I is larger than that of J. We have to prove that this leads to a contradiction.

By Zorn's lemma, every linearly independent set is contained in a maximal linearly independent set K. This maximality implies that K spans V and is therefore a basis (the maximality implies that every element of V is linearly dependent on the elements of K, and is therefore a linear combination of elements of K). As the cardinality of K is greater than or equal to the cardinality of I, one may replace {ai: i ∈ I} with K, that is, one may suppose, without loss of generality, that {ai: i ∈ I} is a basis.
Thus, every bj can be written as a finite sum
$$
\textstyle b_j = \sum_{i\in E_j} \lambda_{i,j} a_i,
$$
where $E_j$ is a finite subset of $I.$ As each $E_j$ is finite and J is infinite, $\textstyle\bigcup_{j\in J} E_j$ has cardinality at most that of J. Therefore $\textstyle\bigcup_{j\in J} E_j$ has cardinality smaller than that of I. So there is some $i_0\in I$ which does not appear in any $E_j$. The corresponding $a_{i_0}$ can be expressed as a finite linear combination of $b_j$s, which in turn can be expressed as a finite linear combination of $a_i$s, not involving $a_{i_0}$. Hence $a_{i_0}$ is linearly dependent on the other $a_i$s, which provides the desired contradiction.

This application of the dimension theorem is sometimes itself called the dimension theorem. Let

T: U → V

be a linear transformation. Then

dim(range(T)) + dim(kernel(T)) = dim(U),

that is, the dimension of U is equal to the dimension of the transformation's range plus the dimension of the kernel. See the rank–nullity theorem for a fuller discussion; a small computational check follows below.
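As a quick sanity check of this identity, the following sketch computes the rank and a kernel basis of a small example matrix with sympy (assumed available); the matrix itself is made up for the example:

```python
from sympy import Matrix

# A hypothetical linear map T: Q^4 -> Q^3; the third row is the sum of the
# first two, so the range has dimension 2.
T = Matrix([[1, 2, 0, 1],
            [0, 0, 1, 1],
            [1, 2, 1, 2]])

rank = T.rank()              # dim(range(T)) = 2
kernel = T.nullspace()       # a basis of kernel(T); here it has 2 vectors
assert rank + len(kernel) == T.cols  # 2 + 2 == 4 == dim(U)
```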
diff --git a/wiki/wikipedia/99.txt b/wiki/wikipedia/99.txt
deleted file mode 100644
index 0a3dbcc44bb8c1ceac09e78ed511a07775316445..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/99.txt
+++ /dev/null
@@ -1,37 +0,0 @@
In fluid mechanics, Helmholtz's theorems, named after Hermann von Helmholtz, describe the three-dimensional motion of fluid in the vicinity of vortex filaments. These theorems apply to inviscid flows and to flows where the influence of viscous forces is small and can be ignored.

Helmholtz's three theorems are as follows:

;Helmholtz's first theorem:
The strength of a vortex filament is constant along its length.

;Helmholtz's second theorem:
A vortex filament cannot end in a fluid; it must extend to the boundaries of the fluid or form a closed path.

;Helmholtz's third theorem:
In the absence of rotational external forces, a fluid that is initially irrotational remains irrotational.

Helmholtz's theorems apply to inviscid flows. In observations of vortices in real fluids, the strength of the vortices always decays gradually due to the dissipative effect of viscous forces.

Alternative expressions of the three theorems are as follows:

# The strength of a vortex tube does not vary with time.
# Fluid elements lying on a vortex line at some instant continue to lie on that vortex line. More simply, vortex lines move with the fluid. Also, vortex lines and tubes must appear as a closed loop, extend to infinity, or start/end at solid boundaries.
# Fluid elements initially free of vorticity remain free of vorticity.

Helmholtz's theorems have application in understanding:

*Generation of lift on an airfoil
*Starting vortex
*Horseshoe vortex
*Wingtip vortices.

Helmholtz's theorems are now generally proven with reference to Kelvin's circulation theorem. However, Helmholtz's theorems were published in 1858, nine years before the 1867 publication of Kelvin's theorem. There was much communication between the two men on the subject of vortex lines, with many references to the application of their theorems to the study of smoke rings.
diff --git a/wiki/wikipedia/990.txt b/wiki/wikipedia/990.txt
deleted file mode 100644
index 2cc2d1691c9365e162a4e0592184ca6a40b4c682..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/990.txt
+++ /dev/null
@@ -1,321 +0,0 @@
In linear algebra, Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns, valid whenever the system has a unique solution. It expresses the solution in terms of the determinants of the (square) coefficient matrix and of matrices obtained from it by replacing one column by the column vector of right-hand-sides of the equations. It is named after Gabriel Cramer (1704-1752), who published the rule for an arbitrary number of unknowns in 1750, although Colin Maclaurin also published special cases of the rule in 1748 (and possibly knew of it as early as 1729).

Cramer's rule implemented in a naïve way is computationally inefficient for systems of more than two or three equations. In the case of n equations in n unknowns, it requires computation of n + 1 determinants, while Gaussian elimination produces the result with the same computational complexity as the computation of a single determinant. Cramer's rule can also be numerically unstable even for 2×2 systems. However, it has recently been shown that Cramer's rule can be implemented in O(n^3) time, which is comparable to more common methods of solving systems of linear equations, such as Gaussian elimination (consistently requiring 2.5 times as many arithmetic operations for all matrix sizes), while exhibiting comparable numeric stability in most cases.

Consider a system of n linear equations for n unknowns, represented in matrix multiplication form as follows:
$$
 A\mathbf{x} = \mathbf{b}
$$
where the n × n matrix A has a nonzero determinant, and the vector $ \mathbf{x} = (x_1, \ldots, x_n)^\mathsf{T} $ is the column vector of the variables. Then the theorem states that in this case the system has a unique solution, whose individual values for the unknowns are given by:
$$
 x_i = \frac{\det(A_i)}{\det(A)} \qquad i = 1, \ldots, n
$$
where $ A_i $ is the matrix formed by replacing the i-th column of A by the column vector b.

A more general version of Cramer's rule considers the matrix equation
$$
 AX = B
$$
where the n × n matrix A has a nonzero determinant, and X, B are n × m matrices.
Given sequences $ 1 \leq i_1 < i_2 < \cdots < i_k \leq n $ and $ 1 \leq j_1 < j_2 < \cdots < j_k \leq m $, let $ X_{I,J} $ be the k × k submatrix of X with rows in $ I := (i_1, \ldots, i_k ) $ and columns in $ J := (j_1, \ldots, j_k ) $. Let $ A_{B}(I,J) $ be the n × n matrix formed by replacing the $i_s$ column of A by the $j_s$ column of B, for all $ s = 1,\ldots, k $. Then -$$ - \det X_{I,J} = \frac{\det(A_{B}(I,J))}{\det(A)}. -$$ - -In the case $ k = 1 $, this reduces to the normal Cramer's rule. - -The rule holds for systems of equations with coefficients and unknowns in any field, not just in the real numbers. - -The proof for Cramer's rule uses the following properties of the determinants: linearity with respect to any given column and the fact that the determinant is zero whenever two columns are equal, which is implied by the property that the sign of the determinant flips if you switch two columns. - -Fix the index j of a column. Linearity means that if we consider only column j as variable (fixing the others arbitrarily), the resulting function Rn → R (assuming matrix entries are in R) can be given by a matrix, with one row and n columns, that acts on column j. In fact this is precisely what Laplace expansion does, writing det(A) = C1a1,j + ⋯ + Cnan,j for certain coefficients C1, ..., Cn that depend on the columns of A other than column j (the precise expression for these cofactors is not important here). The value det(A) is then the result of applying the one-line matrix L(j) = (C1 C2 ⋯ Cn) to column j of A. If L(j) is applied to any other column k of A, then the result is the determinant of the matrix obtained from A by replacing column j by a copy of column k, so the resulting determinant is 0 (the case of two equal columns). - -Now consider a system of n linear equations in n unknowns $x_1, \ldots,x_n$, whose coefficient matrix is A, with det(A) assumed to be nonzero: - -\begin{matrix} - -a_{11}x_1+a_{12}x_2+\cdots+a_{1n}x_n&=&b_1\\ - -a_{21}x_1+a_{22}x_2+\cdots+a_{2n}x_n&=&b_2\\ - -&\vdots&\\ - -a_{n1}x_1+a_{n2}x_2+\cdots+a_{nn}x_n&=&b_n. - -\end{matrix} - -If one combines these equations by taking C1 times the first equation, plus C2 times the second, and so forth until Cn times the last, then the coefficient of xj will become C1a1, j + ⋯ + Cnan,j = det(A), while the coefficients of all other unknowns become 0; the left hand side becomes simply det(A)xj. The right hand side is C1b1 + ⋯ + Cnbn, which is L(j) applied to the column vector b of the right hand side bi. In fact what has been done here is multiply the matrix equation Ax = b on the left by L(j). Dividing by the nonzero number det(A) one finds the following equation, necessary to satisfy the system: -$$ -x_j=\frac{L_{(j)}\cdot\mathbf{b}}{\det(A)}. -$$ - -But by construction the numerator is the determinant of the matrix obtained from A by replacing column j by b, so we get the expression of Cramer's rule as a necessary condition for a solution. The same procedure can be repeated for other values of j to find values for the other unknowns. - -The only point that remains to prove is that these values for the unknowns, the only possible ones, do indeed together form a solution. But if the matrix A is invertible with inverse A−1, then x = A−1b will be a solution, thus showing its existence. To see that A is invertible when det(A) is nonzero, consider the n × n matrix M obtained by stacking the one-line matrices L(j) on top of each other for j = 1, ..., n (this gives the adjugate matrix for A). 
It was shown that L(j)A = (0 ⋯ 0 det(A) 0 ⋯ 0) where det(A) appears at the position j; from this it follows that MA = det(A)In. Therefore, -$$ -\frac1{\det(A)}M=A^{-1}, -$$ - -completing the proof. - -For other proofs, see below. - -Let A be an n × n matrix with entries in a field F. Then -$$ -A\operatorname{adj}(A) = \operatorname{adj}(A)A=\det(A) I -$$ - -where adj(A) denotes the adjugate matrix, det(A) is the determinant, and I is the identity matrix. If det(A) is nonzero, then the inverse matrix of A is -$$ -A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A). -$$ - -This gives a formula for the inverse of A, provided det(A) ≠ 0. In fact, this formula works whenever F is a commutative ring, provided that det(A) is a unit. If det(A) is not a unit, then A is not invertible over the ring (it may be invertible over a larger ring in which some non-unit elements of F may be invertible). - -Consider the linear system - -\left\{\begin{matrix} - -a_1x + b_1y&= {\color{red}c_1}\\ - -a_2x + b_2y&= {\color{red}c_2} - -\end{matrix}\right. - -which in matrix format is -$$ -\begin{bmatrix} a_1 & b_1 \\ a_2 & b_2 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix}=\begin{bmatrix} {\color{red}c_1} \\ {\color{red}c_2} \end{bmatrix}. -$$ - -Assume a1b2 − b1a2 nonzero. Then, with help of determinants, x and y can be found with Cramer's rule as - -\begin{align} - -x &= \frac{\begin{vmatrix} {\color{red}{c_1}} & b_1 \\ {\color{red}{c_2}} & b_2 \end{vmatrix}}{\begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix}} = { {\color{red}c_1}b_2 - b_1{\color{red}c_2} \over a_1b_2 - b_1a_2}, \quad - -y = \frac{\begin{vmatrix} a_1 & {\color{red}{c_1}} \\ a_2 & {\color{red}{c_2}} \end{vmatrix}}{\begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix}} = { a_1{\color{red}c_2} - {\color{red}c_1}a_2 \over a_1b_2 - b_1a_2} - -\end{align}. - -The rules for 3 × 3 matrices are similar. Given - -\left\{\begin{matrix} - -a_1x + b_1y + c_1z&= {\color{red}d_1}\\ - -a_2x + b_2y + c_2z&= {\color{red}d_2}\\ - -a_3x + b_3y + c_3z&= {\color{red}d_3} - -\end{matrix}\right. - -which in matrix format is -$$ -\begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix}=\begin{bmatrix} {\color{red}d_1} \\ {\color{red}d_2} \\ {\color{red}d_3} \end{bmatrix}. -$$ - -Then the values of x, y and z can be found as follows: - -x = \frac{\begin{vmatrix} {\color{red}d_1} & b_1 & c_1 \\ {\color{red}d_2} & b_2 & c_2 \\ {\color{red}d_3} & b_3 & c_3 \end{vmatrix} } { \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}}, \quad - -y = \frac {\begin{vmatrix} a_1 & {\color{red}d_1} & c_1 \\ a_2 & {\color{red}d_2} & c_2 \\ a_3 & {\color{red}d_3} & c_3 \end{vmatrix}} {\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}}, \text{ and } - -z = \frac { \begin{vmatrix} a_1 & b_1 & {\color{red}d_1} \\ a_2 & b_2 & {\color{red}d_2} \\ a_3 & b_3 & {\color{red}d_3} \end{vmatrix}} {\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} }. - -Cramer's rule is used in the Ricci calculus in various calculations involving the Christoffel symbols of the first and second kind. - -In particular, Cramer's rule can be used to prove that the divergence operator on a Riemannian manifold is invariant with respect to change of coordinates. We give a direct proof, suppressing the role of the Christoffel symbols. - -Let $(M,g)$ be a Riemannian manifold equipped with local coordinates $ (x^1, x^2, \dots, x^n)$. 
Let $A=A^i \frac{\partial}{\partial x^i}$ be a vector field. We use the summation convention throughout. - -Theorem. - -The divergence of $A$, -$$ - \operatorname{div} A = \frac{1}{\sqrt{\det g}} \frac{\partial}{\partial x^i} \left( A^i \sqrt{\det g} \right), -$$ - -is invariant under change of coordinates. - -Let $(x^1,x^2,\ldots,x^n)\mapsto (\bar x^1,\ldots,\bar x^n)$ be a coordinate transformation with non-singular Jacobian. Then the classical transformation laws imply that $A=\bar A^{k}\frac{\partial}{\partial\bar x^{k}}$ where $\bar A^{k}=\frac{\partial \bar x^{k}}{\partial x^{j}}A^{j}$. Similarly, if $g=g_{mk}dx^{m}\otimes dx^{k}=\bar{g}_{ij}d\bar x^{i}\otimes d\bar x^{j}$, then $\bar{g}_{ij}=\frac{\partial x^{m}}{\partial\bar x^{i}}\frac{\partial x^{k}}{\partial \bar x^{j}}g_{mk}$. - -Writing this transformation law in terms of matrices yields $\bar g=\left(\frac{\partial x}{\partial\bar{x}}\right)^{\text{T}}g\left(\frac{\partial x}{\partial\bar{x}}\right)$, which implies $\det\bar g=\left(\det\left(\frac{\partial x}{\partial\bar{x}}\right)\right)^{2}\det g$. - -Now one computes - -\begin{align} - -\operatorname{div} A &=\frac{1}{\sqrt{\det g}}\frac{\partial}{\partial x^{i}}\left( A^{i}\sqrt{\det g}\right)\\ - - &=\det\left(\frac{\partial x}{\partial\bar{x}}\right)\frac{1}{\sqrt{\det\bar g}}\frac{\partial \bar x^k}{\partial x^{i}}\frac{\partial}{\partial\bar x^{k}}\left(\frac{\partial x^{i}}{\partial \bar x^{\ell}}\bar{A}^{\ell}\det\!\left(\frac{\partial x}{\partial\bar{x}}\right)^{\!\!-1}\!\sqrt{\det\bar g}\right). - -\end{align} - -In order to show that this equals -$$ -\frac{1}{\sqrt{\det\bar g}}\frac{\partial}{\partial\bar x^{k}}\left(\bar A^{k}\sqrt{\det\bar{g}}\right) -$$, - -it is necessary and sufficient to show that -$$ -\frac{\partial\bar x^{k}}{\partial x^{i}}\frac{\partial}{\partial\bar x^{k}}\left(\frac{\partial x^{i}}{\partial \bar x^{\ell}}\det\!\left(\frac{\partial x}{\partial\bar{x}}\right)^{\!\!\!-1}\right)=0\qquad\text{for all } \ell, -$$ - -which is equivalent to - -\frac{\partial}{\partial \bar x^{\ell}}\det\left(\frac{\partial x}{\partial\bar{x}}\right) - -=\det\left(\frac{\partial x}{\partial\bar{x}}\right)\frac{\partial\bar x^{k}}{\partial x^{i}}\frac{\partial^{2}x^{i}}{\partial\bar x^{k}\partial\bar x^{\ell}}. - - - -Carrying out the differentiation on the left-hand side, we get: - -\begin{align} - - \frac{\partial}{\partial\bar x^{\ell}}\det\left(\frac{\partial x}{\partial\bar{x}}\right) - - &=(-1)^{i+j}\frac{\partial^{2}x^{i}}{\partial\bar x^{\ell}\partial\bar x^{j}}\det M(i|j)\\ - - &=\frac{\partial^{2}x^{i}}{\partial\bar x^{\ell}\partial\bar x^{j}}\det\left(\frac{\partial x}{\partial\bar{x}}\right)\frac{(-1)^{i+j}}{\det\left(\frac{\partial x}{\partial\bar{x}}\right)}\det M(i|j)=(\ast), - - \end{align} - -where $M(i|j)$ denotes the matrix obtained from $\left(\frac{\partial x}{\partial\bar{x}}\right)$ by deleting the $i$th row and $j$th column. - -But Cramer's Rule says that -$$ -\frac{(-1)^{i+j}}{\det\left(\frac{\partial x}{\partial\bar{x}}\right)}\det M(i|j) -$$ - -is the $(j,i)$th entry of the matrix $\left(\frac{\partial \bar{x}}{\partial x}\right)$. - -Thus -$$ -(\ast)=\det\left(\frac{\partial x}{\partial\bar{x}}\right)\frac{\partial^{2}x^{i}}{\partial\bar x^{\ell}\partial\bar x^{j}}\frac{\partial\bar x^{j}}{\partial x^{i}}, -$$ - -completing the proof. - -Consider the two equations $F(x, y, u, v) = 0$ and $G(x, y, u, v) = 0$. 
When u and v are independent variables, we can define $x = X(u, v)$ and $y = Y(u, v).$

An equation for $\dfrac{\partial x}{\partial u}$ can be found by applying Cramer's rule, as follows.

First, calculate the first derivatives of F, G, x, and y:

\begin{align}
dF &= \frac{\partial F}{\partial x} dx + \frac{\partial F}{\partial y} dy +\frac{\partial F}{\partial u} du +\frac{\partial F}{\partial v} dv = 0 \\[6pt]
dG &= \frac{\partial G}{\partial x} dx + \frac{\partial G}{\partial y} dy +\frac{\partial G}{\partial u} du +\frac{\partial G}{\partial v} dv = 0 \\[6pt]
dx &= \frac{\partial X}{\partial u} du + \frac{\partial X}{\partial v} dv \\[6pt]
dy &= \frac{\partial Y}{\partial u} du + \frac{\partial Y}{\partial v} dv.
\end{align}

Substituting dx, dy into dF and dG, we have:

\begin{align}
dF &= \left(\frac{\partial F}{\partial x} \frac{\partial x}{\partial u} +\frac{\partial F}{\partial y} \frac{\partial y}{\partial u} + \frac{\partial F}{\partial u} \right) du + \left(\frac{\partial F}{\partial x} \frac{\partial x}{\partial v} +\frac{\partial F}{\partial y} \frac{\partial y}{\partial v} +\frac{\partial F}{\partial v} \right) dv = 0 \\[6pt]
dG &= \left(\frac{\partial G}{\partial x} \frac{\partial x}{\partial u} +\frac{\partial G}{\partial y} \frac{\partial y}{\partial u} +\frac{\partial G}{\partial u} \right) du + \left(\frac{\partial G}{\partial x} \frac{\partial x}{\partial v} +\frac{\partial G}{\partial y} \frac{\partial y}{\partial v} +\frac{\partial G}{\partial v} \right) dv = 0.
\end{align}

Since u, v are both independent, the coefficients of du, dv must be zero. So we can write out equations for the coefficients:

\begin{align}
\frac{\partial F}{\partial x} \frac{\partial x}{\partial u} +\frac{\partial F}{\partial y} \frac{\partial y}{\partial u} & = -\frac{\partial F}{\partial u} \\[6pt]
\frac{\partial G}{\partial x} \frac{\partial x}{\partial u} +\frac{\partial G}{\partial y} \frac{\partial y}{\partial u} & = -\frac{\partial G}{\partial u} \\[6pt]
\frac{\partial F}{\partial x} \frac{\partial x}{\partial v} +\frac{\partial F}{\partial y} \frac{\partial y}{\partial v} & = -\frac{\partial F}{\partial v} \\[6pt]
\frac{\partial G}{\partial x} \frac{\partial x}{\partial v} +\frac{\partial G}{\partial y} \frac{\partial y}{\partial v} & = -\frac{\partial G}{\partial v}.
\end{align}

Now, by Cramer's rule, we see that:
$$
\frac{\partial x}{\partial u} = \frac{\begin{vmatrix} -\frac{\partial F}{\partial u} & \frac{\partial F}{\partial y} \\ -\frac{\partial G}{\partial u} & \frac{\partial G}{\partial y}\end{vmatrix}}{\begin{vmatrix}\frac{\partial F}{\partial x} & \frac{\partial F}{\partial y} \\ \frac{\partial G}{\partial x} & \frac{\partial G}{\partial y}\end{vmatrix}}.
$$

This is now a formula in terms of two Jacobians:
$$
\frac{\partial x}{\partial u} = -\frac{\left(\frac{\partial (F, G)}{\partial (u, y)}\right)}{\left(\frac{\partial (F, G)}{\partial(x, y)}\right)}.
$$

Similar formulas can be derived for $\frac{\partial x}{\partial v}, \frac{\partial y}{\partial u}, \frac{\partial y}{\partial v}.$

Cramer's rule can be used to prove that an integer programming problem whose constraint matrix is totally unimodular and whose right-hand side is integer has integer basic solutions. This makes the integer program substantially easier to solve.
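To make the n × n rule stated earlier concrete, here is a small self-contained Python sketch that solves Ax = b exactly via x_i = det(A_i)/det(A), using Laplace expansion for the determinants (adequate for the tiny systems where Cramer's rule is practical); the example system is made up:

```python
from fractions import Fraction

def det(M):
    """Determinant via Laplace expansion along the first row (fine for small n)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def cramer(A, b):
    """Solve Ax = b exactly via x_i = det(A_i)/det(A); requires det(A) != 0."""
    d = Fraction(det(A))
    if d == 0:
        raise ValueError("coefficient determinant is zero: Cramer's rule does not apply")
    # A_i: replace column i of A by the right-hand side b.
    return [Fraction(det([row[:i] + [bi] + row[i+1:] for row, bi in zip(A, b)])) / d
            for i in range(len(A))]

# 2x + y + z = 4,  x + 3y + 2z = 5,  x = 6  has the unique solution (6, 15, -23).
print(cramer([[2, 1, 1], [1, 3, 2], [1, 0, 0]], [4, 5, 6]))
```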
Cramer's rule is used to derive the general solution to an inhomogeneous linear differential equation by the method of variation of parameters.

Cramer's rule has a geometric interpretation that can also be considered a proof, or simply a way of gaining insight into its geometric nature. These geometric arguments work in general, and not only in the case of two equations with two unknowns presented here.

Given the system of equations
$$
\begin{matrix}a_{11}x_1+a_{12}x_2&=b_1\\a_{21}x_1+a_{22}x_2&=b_2\end{matrix}
$$
it can be considered as an equation between vectors
$$
x_1\binom{a_{11}}{a_{21}}+x_2\binom{a_{12}}{a_{22}}=\binom{b_1}{b_2}.
$$

The area of the parallelogram determined by $\binom{a_{11}}{a_{21}}$ and $\binom{a_{12}}{a_{22}}$ is given by the determinant of the system of equations:
$$
\begin{vmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{vmatrix}.
$$

In general, when there are more variables and equations, the determinant of n vectors of length n will give the volume of the parallelepiped determined by those vectors in n-dimensional Euclidean space.

Therefore, the area of the parallelogram determined by $x_1\binom{a_{11}}{a_{21}}$ and $\binom{a_{12}}{a_{22}}$ has to be $x_1$ times the area of the first one, since one of the sides has been multiplied by this factor. Now, this last parallelogram, by Cavalieri's principle, has the same area as the parallelogram determined by $\binom{b_1}{b_2}=x_1\binom{a_{11}}{a_{21}}+x_2\binom{a_{12}}{a_{22}}$ and $\binom{a_{12}}{a_{22}}.$

Equating the areas of this last and the second parallelogram gives the equation
$$
\begin{vmatrix}b_1&a_{12}\\b_2&a_{22}\end{vmatrix} = \begin{vmatrix}a_{11}x_1&a_{12}\\a_{21}x_1&a_{22}\end{vmatrix} =x_1 \begin{vmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{vmatrix}
$$
from which Cramer's rule follows.

This is a restatement of the proof above in abstract language.

Consider the map $\mathbf{x}=(x_1,\ldots, x_n) \mapsto \frac{1}{\det A} \left(\det (A_1),\ldots, \det(A_n)\right),$ where $A_i$ is the matrix $A$ with $\mathbf{x}$ substituted in the $i$th column, as in Cramer's rule. Because of the linearity of the determinant in every column, this map is linear. Observe that it sends the $i$th column of $A$ to the $i$th basis vector $\mathbf{e}_i=(0,\ldots, 1, \ldots, 0)$ (with 1 in the $i$th place), because the determinant of a matrix with a repeated column is 0. So we have a linear map which agrees with the inverse of $A$ on the column space; hence it agrees with $A^{-1}$ on the span of the column space. Since $A$ is invertible, the column vectors span all of $\mathbb{R}^n$, so our map really is the inverse of $A$. Cramer's rule follows.

A short proof of Cramer's rule can be given by noticing that $x_1$ is the determinant of the matrix

X_1=\begin{bmatrix}
x_1 & 0 & 0 & \cdots & 0\\
x_2 & 1 & 0 & \cdots & 0\\
x_3 & 0 & 1 & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots &\vdots \\
x_n & 0 & 0 & \cdots & 1
\end{bmatrix}

On the other hand, assuming that our original matrix A is invertible, this matrix $X_1$ has columns $A^{-1}\mathbf{b}, A^{-1}\mathbf{v}_2, \ldots, A^{-1}\mathbf{v}_n $, where $\mathbf{v}_n$ is the n-th column of the matrix A. Recall that the matrix $A_1$ has columns $\mathbf{b}, \mathbf{v}_2, \ldots, \mathbf{v}_n $, and therefore $X_1=A^{-1}A_1$. Hence, by using the fact that the determinant of the product of two matrices is the product of the determinants, we have
$$
 x_1= \det (X_1) = \det (A^{-1}) \det (A_1)= \frac{\det (A_1)}{\det (A)}.
$$

The proof for the other $x_j$ is similar.

A system of equations is said to be incompatible or inconsistent when there are no solutions, and it is called indeterminate when there is more than one solution. For linear equations, an indeterminate system will have infinitely many solutions (if it is over an infinite field), since the solutions can be expressed in terms of one or more parameters that can take arbitrary values.

Cramer's rule applies to the case where the coefficient determinant is nonzero. In the 2×2 case, if the coefficient determinant is zero, then the system is incompatible if the numerator determinants are nonzero, or indeterminate if the numerator determinants are zero.

For 3×3 or higher systems, the only thing one can say when the coefficient determinant equals zero is that if any of the numerator determinants is nonzero, then the system must be incompatible. However, having all determinants zero does not imply that the system is indeterminate. A simple example where all determinants vanish but the system is still incompatible is the 3×3 system x+y+z=1, x+y+z=2, x+y+z=3.
diff --git a/wiki/wikipedia/991.txt b/wiki/wikipedia/991.txt
deleted file mode 100644
index 04ca72409f13eb48902142c4800c6dd9d73bb2ad..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/991.txt
+++ /dev/null
@@ -1,5 +0,0 @@
Kernel Transaction Manager (KTM) is a component of the Windows operating system kernel in Windows Vista and Windows Server 2008 that enables applications to use atomic transactions on resources by making them available as kernel objects.

The transaction engine, which operates in kernel mode, allows for transactions on both kernel mode and user mode resources, as well as among distributed resources. The Kernel Transaction Manager is intended to make error recovery largely transparent to application developers, with KTM acting as a transaction manager that transaction clients can plug into. Those transaction clients can be third-party clients that want to initiate transactions on resources that are managed by a Transaction Resource Manager. The resource managers can also be third-party or built into the system.

KTM is used to implement Transactional NTFS (TxF) and Transactional Registry (TxR). KTM relies on the Common Log File System (CLFS) for its operation. CLFS is a general-purpose log-file subsystem designed for creating data and event logs.
diff --git a/wiki/wikipedia/992.txt b/wiki/wikipedia/992.txt
deleted file mode 100644
index ea283822bdf5f7cf7cba45100b3ccd056536c155..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/992.txt
+++ /dev/null
@@ -1,7 +0,0 @@
In computer science, the minimum routing cost spanning tree of a weighted graph is a spanning tree minimizing the sum of pairwise distances between vertices in the tree. It is also called the optimum distance spanning tree, shortest total path length spanning tree, minimum total distance spanning tree, or minimum average distance spanning tree. In an unweighted graph, this is the spanning tree of minimum Wiener index.

Hu writes that the problem of constructing these trees was proposed by Francesco Maffioli.

Constructing such a tree is NP-hard, even for unweighted graphs. However, the problem has a polynomial-time approximation scheme. The approximation works by choosing a number $k$ that depends on the approximation ratio but not on the number of vertices of the input graph, and by searching among all trees with $k$ internal nodes.
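To make the objective concrete, the sketch below computes the routing cost of a given spanning tree with unit edge weights, summing distances over ordered pairs of vertices (so it equals twice the Wiener index); the two example trees are spanning trees of the complete graph K4 and are chosen only for illustration:

```python
from collections import deque

def routing_cost(n, edges):
    """Sum of tree distances over all ordered vertex pairs (unit edge weights)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    total = 0
    for s in range(n):                    # BFS from every vertex
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if dist[w] == -1:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist)
    return total

# Path 0-1-2-3 versus the star centered at 0, both spanning trees of K4:
print(routing_cost(4, [(0, 1), (1, 2), (2, 3)]))   # 20
print(routing_cost(4, [(0, 1), (0, 2), (0, 3)]))   # 18 (the star is cheaper)
```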
The minimum routing cost spanning tree of an unweighted interval graph can be constructed in linear time. A polynomial time algorithm is also known for distance-hereditary graphs, weighted so that the weighted distances are hereditary.
diff --git a/wiki/wikipedia/993.txt b/wiki/wikipedia/993.txt
deleted file mode 100644
index dbf5b0b971ee372f0db93eb2337b6e19088eed13..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/993.txt
+++ /dev/null
@@ -1,25 +0,0 @@
In mathematics, a Cullen number is a member of the integer sequence $C_n = n \cdot 2^n + 1$ (where $n$ is a natural number). Cullen numbers were first studied by James Cullen in 1905. The numbers are special cases of Proth numbers.

In 1976 Christopher Hooley showed that the natural density of positive integers $n \leq x$ for which $C_n$ is a prime is of the order o(x) for $x \to \infty$. In that sense, almost all Cullen numbers are composite. Hooley's proof was reworked by Hiromi Suyama to show that it works for any sequence of numbers $n \cdot 2^{n+a} + b$ where a and b are integers, and in particular also for Woodall numbers. The only known Cullen primes are those for n equal to:

1, 141, 4713, 5795, 6611, 18496, 32292, 32469, 59656, 90825, 262419, 361275, 481899, 1354828, 6328548, 6679881.

Still, it is conjectured that there are infinitely many Cullen primes.

A Cullen number $C_n$ is divisible by p = 2n − 1 if p is a prime number of the form 8k − 3; furthermore, it follows from Fermat's little theorem that if p is an odd prime, then p divides $C_{m(k)}$ for each $m(k) = (2^k - k)(p - 1) - k$ (for k > 0). It has also been shown that the prime number p divides $C_{(p+1)/2}$ when the Jacobi symbol (2 | p) is −1, and that p divides $C_{(3p-1)/2}$ when the Jacobi symbol (2 | p) is +1.

It is unknown whether there exists a prime number p such that $C_p$ is also prime.

Sometimes, a generalized Cullen number base b is defined to be a number of the form $n \cdot b^n + 1$, where n + 2 > b; if a prime can be written in this form, it is then called a generalized Cullen prime. Woodall numbers are sometimes called Cullen numbers of the second kind.

As of October 2021, the largest known generalized Cullen prime is $2525532 \cdot 73^{2525532} + 1$. It has 4,705,888 digits and was discovered by Tom Greer, a PrimeGrid participant.

According to Fermat's little theorem, if there is a prime p such that n is divisible by p − 1 and n + 1 is divisible by p (especially, when n = p − 1) and p does not divide b, then $b^n$ must be congruent to 1 mod p (since $b^n$ is a power of $b^{p-1}$ and $b^{p-1}$ is congruent to 1 mod p). Thus, $n \cdot b^n + 1$ is divisible by p, so it is not prime. For example, if for some n congruent to 2 mod 6 (i.e. 2, 8, 14, 20, 26, 32, ...) the number $n \cdot b^n + 1$ is prime, then b must be divisible by 3 (except b = 1).

The least n such that $n \cdot b^n + 1$ is prime (with question marks if this term is currently unknown) are

1, 1, 2, 1, 1242, 1, 34, 5, 2, 1, 10, 1, ?, 3, 8, 1, 19650, 1, 6460, 3, 2, 1, 4330, 2, 2805222, 117, 2, 1, ?, 1, 82960, 5, 2, 25, 304, 1, 36, 3, 368, 1, 1806676, 1, 390, 53, 2, 1, ?, 3, ?, 9665, 62, 1, 1341174, 3, ?, 1072, 234, 1, 220, 1, 142, 1295, 8, 3, 16990, 1, 474, 129897, ?, 1, 13948, 1, ?, 3, 2, 1161, 12198, 1, 682156, 5, 350, 1, 1242, 26, 186, 3, 2, 1, 298, 14, 101670, 9, 2, 775, 202, 1, 1374, 63, 2, 1, ...
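A small sketch, using sympy's isprime (assumed available), that reproduces the start of these lists:

```python
from sympy import isprime

def cullen(n):
    return n * 2 ** n + 1

# The first two Cullen-prime indices in the list above are n = 1 and n = 141.
print([n for n in range(1, 200) if isprime(cullen(n))])       # [1, 141]

# Divisibility: p = 2n - 1 divides C_n when p is a prime of the form 8k - 3.
# For n = 3, p = 5 = 8*1 - 3 and C_3 = 25 is indeed divisible by 5.
assert cullen(3) % 5 == 0

# Least n with n * b^n + 1 prime, reproducing the first terms of the last
# sequence above (b = 5 needs n = 1242, so this part takes a while to run).
def least_n(b, bound=1300):
    return next((n for n in range(1, bound) if isprime(n * b ** n + 1)), None)

print([least_n(b) for b in range(1, 13)])  # [1, 1, 2, 1, 1242, 1, 34, 5, 2, 1, 10, 1]
```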
diff --git a/wiki/wikipedia/994.txt b/wiki/wikipedia/994.txt
deleted file mode 100644
index 93eedcbc4278e317b7984b9974de3ce81734867a..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/994.txt
+++ /dev/null
@@ -1,19 +0,0 @@
In formal logic, Horn-satisfiability, or HORNSAT, is the problem of deciding whether a given set of propositional Horn clauses is satisfiable or not. Horn-satisfiability and Horn clauses are named after Alfred Horn.

A Horn clause is a clause with at most one positive literal, called the head of the clause, and any number of negative literals, forming the body of the clause. A Horn formula is a propositional formula formed by conjunction of Horn clauses.

The problem of Horn satisfiability is solvable in linear time. The problem of deciding the truth of quantified Horn formulas can also be solved in polynomial time.

A polynomial-time algorithm for Horn satisfiability is based on the rule of unit propagation: if the formula contains a clause composed of a single literal $l$ (a unit clause), then all clauses containing $l$ (except the unit clause itself) are removed, and all clauses containing $\neg l$ have this literal removed. The result of the second rule may itself be a unit clause, which is propagated in the same manner. If there are no unit clauses, the formula can be satisfied by simply setting all remaining variables negative. The formula is unsatisfiable if this transformation generates a pair of opposite unit clauses $l$ and $\neg l$. Horn satisfiability is actually one of the "hardest" or "most expressive" problems that are known to be computable in polynomial time, in the sense that it is a P-complete problem.

This algorithm also allows determining a truth assignment of satisfiable Horn formulae: all variables contained in a unit clause are set to the value satisfying that unit clause; all other variables are set to false. The resulting assignment is the minimal model of the Horn formula, that is, the assignment having a minimal set of variables assigned to true, where comparison is made using set containment.

Using a linear algorithm for unit propagation, the algorithm is linear in the size of the formula.

A generalization of the class of Horn formulae is that of renamable-Horn formulae, which is the set of formulae that can be placed in Horn form by replacing some variables with their respective negation. Checking the existence of such a replacement can be done in linear time; therefore, the satisfiability of such formulae is in P, as it can be solved by first performing this replacement and then checking the satisfiability of the resulting Horn formula. Horn satisfiability and renamable Horn satisfiability provide one of two important subclasses of satisfiability that are solvable in polynomial time; the other such subclass is 2-satisfiability.

The Horn satisfiability problem can also be asked for propositional many-valued logics. The algorithms are not usually linear, but some are polynomial; see Hähnle (2001 or 2003) for a survey.

A dual variant of Horn SAT is Dual-Horn SAT, in which each clause has at most one negative literal. Negating all variables transforms an instance of Dual-Horn SAT into Horn SAT. It was proven in 1951 by Horn that Dual-Horn SAT is in P.
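A straightforward (quadratic, not linear) Python sketch of this unit-propagation procedure; the signed-integer clause encoding is our own convention for the example:

```python
def horn_sat(clauses):
    """Unit propagation for Horn formulas: returns the minimal model as a dict,
    or None if the formula is unsatisfiable.  Clauses are sets of signed ints."""
    clauses = [set(c) for c in clauses]
    assignment = {}
    while True:
        if any(len(c) == 0 for c in clauses):
            return None                   # a pair of opposite units was derived
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        lit = next(iter(unit))
        assignment[abs(lit)] = lit > 0
        # Remove clauses satisfied by lit; strip the complement -lit elsewhere.
        clauses = [c - {-lit} for c in clauses if lit not in c]
    for c in clauses:                     # no units left: the remaining
        for lit in c:                     # variables can all be set to false
            assignment.setdefault(abs(lit), False)
    return assignment

# x1 and (x1 -> x2) and (x1 and x2 -> x3), written as Horn clauses:
print(horn_sat([{1}, {-1, 2}, {-1, -2, 3}]))   # {1: True, 2: True, 3: True}
```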
diff --git a/wiki/wikipedia/995.txt b/wiki/wikipedia/995.txt
deleted file mode 100644
index 81d1715f395dea5b429985df7c05b2ee76460cea..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/995.txt
+++ /dev/null
@@ -1,11 +0,0 @@
In number theory, a congruum (plural congrua) is the difference between successive square numbers in an arithmetic progression of three squares.

That is, if $x^2$, $y^2$, and $z^2$ (for integers x, y, and z) are three square numbers that are equally spaced apart from each other, then the spacing between them, $z^2 - y^2 = y^2 - x^2$, is called a congruum.

The congruum problem is the problem of finding squares in arithmetic progression and their associated congrua. It has a parameterized solution: for integers m > n > 0, the three squares $(m^2 - 2mn - n^2)^2$, $(m^2 + n^2)^2$, and $(m^2 + 2mn - n^2)^2$ are in arithmetic progression with common difference $4mn(m^2 - n^2)$, and every congruum arises either directly from this formula or by multiplying a smaller congruum by a square. For instance, the congruum 96 can be constructed by these formulas with m = 3 and n = 1, while the congruum 216 is obtained by multiplying the smaller congruum 24 by the square number 9.

An equivalent formulation of this solution, given by Bernard Frénicle de Bessy, is that for the three squares in arithmetic progression $x^2$, $y^2$, and $z^2$, the middle number y is the hypotenuse of a Pythagorean triangle and the other two numbers x and z are the difference and sum respectively of the triangle's two legs. The congruum itself is four times the area of the same Pythagorean triangle. The example of an arithmetic progression with the congruum 96 can be obtained in this way from a right triangle with side and hypotenuse lengths 6, 8, and 10.

A congruent number is defined as the area of a right triangle with rational sides.

Because every congruum can be obtained (using the parameterized solution) as the area of a Pythagorean triangle, it follows that every congruum is congruent. Conversely, every congruent number is a congruum multiplied by the square of a rational number. However, testing whether a number is a congruum is much easier than testing whether a number is congruent. For the congruum problem, the parameterized solution reduces this testing problem to checking a finite set of parameter values. In contrast, for the congruent number problem, a finite testing procedure is known only conjecturally, via Tunnell's theorem, under the assumption that the Birch and Swinnerton-Dyer conjecture is true.
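A tiny script checking this parameterization numerically (the function name is just for the example):

```python
def congruum(m, n):
    """Return the congruum from the parameterized solution, for m > n > 0."""
    x = m*m - 2*m*n - n*n      # may be negative; only its square matters
    y = m*m + n*n
    z = m*m + 2*m*n - n*n
    c = y*y - x*x
    assert c == z*z - y*y == 4*m*n*(m*m - n*n)   # three equally spaced squares
    return c

print(congruum(3, 1))        # 96, from the progression 2^2, 10^2, 14^2
print(9 * congruum(2, 1))    # 216 = 9 * 24: square multiples of congrua are congrua
```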
diff --git a/wiki/wikipedia/996.txt b/wiki/wikipedia/996.txt
deleted file mode 100644
index 81e57fe13acb951fd800032482e85c126aee6bc7..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/996.txt
+++ /dev/null
@@ -1 +0,0 @@
In geometry, the seven circles theorem is a theorem about a certain arrangement of seven circles in the Euclidean plane. Specifically, given a chain of six circles all tangent to a seventh circle and each tangent to its two neighbors, the three lines drawn between opposite pairs of the points of tangency on the seventh circle all pass through the same point. Though elementary in nature, this theorem was not discovered until 1974 (by Evelyn, Money-Coutts, and Tyrrell).
diff --git a/wiki/wikipedia/997.txt b/wiki/wikipedia/997.txt
deleted file mode 100644
index f271d622291b05ef493af1a03d89cd0b2eda162d..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/997.txt
+++ /dev/null
@@ -1,13 +0,0 @@
In mathematics and physics, the diamagnetic inequality relates the Sobolev norm of the absolute value of a section of a line bundle to its covariant derivative. The diamagnetic inequality has an important physical interpretation, that a charged particle in a magnetic field has more energy in its ground state than it would in a vacuum.

To precisely state the inequality, let $L^2(\mathbb R^n)$ denote the usual Hilbert space of square-integrable functions, and $H^1(\mathbb R^n)$ the Sobolev space of square-integrable functions with square-integrable derivatives.

Let $f, A_1, \dots, A_n$ be measurable functions on $\mathbb R^n$ and suppose that $A_j \in L^2_{\text{loc}} (\mathbb R^n)$ is real-valued, $f$ is complex-valued, and $f , (\partial_1 + iA_1)f, \dots, (\partial_n + iA_n)f \in L^2(\mathbb R^n)$.

Then for almost every $x \in \mathbb R^n$,

|\nabla |f|(x)| \leq |(\nabla + iA)f(x)|.

In particular, $f \in H^1(\mathbb R^n)$.

For the proof, one can follow Lieb and Loss.
diff --git a/wiki/wikipedia/998.txt b/wiki/wikipedia/998.txt
deleted file mode 100644
index bf173f3b506ec26e301c169f8b2607d22b8542cd..0000000000000000000000000000000000000000
--- a/wiki/wikipedia/998.txt
+++ /dev/null
@@ -1,112 +0,0 @@
In mathematics, the Cauchy integral theorem (also known as the Cauchy–Goursat theorem) in complex analysis, named after Augustin-Louis Cauchy (and Édouard Goursat), is an important statement about line integrals for holomorphic functions in the complex plane. Essentially, it says that if $f(z)$ is analytic in a simply connected domain Ω, then for any simple closed contour $C$ in Ω, the contour integral is zero:

\int_C f(z)dz = 0.

If f(z) is a holomorphic function on an open region U, and $\gamma$ is a curve in U from $z_0$ to $z_1$, then

\int_{\gamma}f^{\prime}(z)dz=f(z_1)-f(z_0).

Also, when f(z) has a single-valued antiderivative in an open region U, the path integral $\int_{\gamma}f^{\prime}(z)dz$ is path independent for all paths in U.

Formulation on Simply Connected Regions

Let $U \subseteq \Complex$ be a simply connected open set, and let $f: U \to \Complex$ be a holomorphic function. Let $\gamma: [a,b] \to U$ be a smooth closed curve. Then:
$$
\int_\gamma f(z)dz = 0.
$$

(The condition that $U$ be simply connected means that $U$ has no "holes", or in other words, that the fundamental group of $U$ is trivial.)

General Formulation

Let $U \subseteq \Complex$ be an open set, and let $f: U \to \Complex$ be a holomorphic function. Let $\gamma: [a,b] \to U$ be a smooth closed curve. If $\gamma$ is homotopic to a constant curve, then:
$$
\int_\gamma f(z)dz = 0.
$$

(Recall that a curve is homotopic to a constant curve if there exists a smooth homotopy from the curve to the constant curve. Intuitively, this means that one can shrink the curve into a point without exiting the space.) The first version is a special case of this because on a simply connected set, every closed curve is homotopic to a constant curve.

Main Example

In both cases, it is important to remember that the curve $\gamma$ must not surround any "holes" in the domain, or else the theorem does not apply. A famous example is the following curve:
$$
\gamma(t) = e^{it} \quad t \in \left[0, 2\pi\right] ,
$$

which traces out the unit circle. Here the following integral:
$$
\int_{\gamma} \frac{1}{z}dz = 2\pi i \neq 0 ,
$$

is nonzero. The Cauchy integral theorem does not apply here since $f(z) = 1/z$ is not defined at $z = 0$. Intuitively, $\gamma$ surrounds a "hole" in the domain of $f$, so $\gamma$ cannot be shrunk to a point without exiting the space. Thus, the theorem does not apply.
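As a quick numerical sanity check of the unit-circle example (a minimal sketch using only the Python standard library; the Riemann-sum approximation and step count are arbitrary choices):

```python
import cmath

def contour_integral(f, n=100000):
    """Approximate the integral of f over the unit circle by a Riemann sum."""
    zs = [cmath.exp(2j * cmath.pi * k / n) for k in range(n + 1)]
    return sum(f(zs[k]) * (zs[k + 1] - zs[k]) for k in range(n))

print(contour_integral(lambda z: 1 / z))   # ~ 6.2831...j, i.e. 2*pi*i
print(contour_integral(lambda z: z * z))   # ~ 0, as the theorem predicts
```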
As Édouard Goursat showed, Cauchy's integral theorem can be proven assuming only that the complex derivative $f'(z)$ exists everywhere in $U$. This is significant because one can then prove Cauchy's integral formula for these functions, and from that deduce that these functions are infinitely differentiable.

The condition that $U$ be simply connected means that $U$ has no "holes" or, in homotopy terms, that the fundamental group of $U$ is trivial; for instance, every open disk $U_{z_0} = \{ z : \left|z-z_{0}\right| < r\}$, for $z_0 \in \Complex$, qualifies. The condition is crucial; consider
$$
\gamma(t) = e^{it} \quad t \in \left[0, 2\pi\right]
$$
which traces out the unit circle, and then the path integral
$$
\oint_\gamma \frac{1}{z}dz = \int_0^{2\pi} \frac{1}{e^{it}}(ie^{it} dt) = \int_0^{2\pi}idt = 2\pi i
$$
is nonzero; the Cauchy integral theorem does not apply here since $f(z) = 1/z$ is not defined (and is certainly not holomorphic) at $z = 0$.

One important consequence of the theorem is that path integrals of holomorphic functions on simply connected domains can be computed in a manner familiar from the fundamental theorem of calculus: let $U$ be a simply connected open subset of $\Complex$, let $f: U \to \Complex$ be a holomorphic function, and let $\gamma$ be a piecewise continuously differentiable path in $U$ with start point $a$ and end point $b$. If $F$ is a complex antiderivative of $f$, then
$$
\int_\gamma f(z)dz=F(b)-F(a).
$$

The Cauchy integral theorem is valid with a weaker hypothesis than given above, e.g. given $U$, a simply connected open subset of $\Complex$, we can weaken the assumptions to $f$ being holomorphic on $U$ and continuous on $\overline{U}$ and $\gamma$ a rectifiable simple loop in $\overline{U}$.

The Cauchy integral theorem leads to Cauchy's integral formula and the residue theorem.

If one assumes that the partial derivatives of a holomorphic function are continuous, the Cauchy integral theorem can be proved as a direct consequence of Green's theorem and the fact that the real and imaginary parts of $f=u+iv$ must satisfy the Cauchy–Riemann equations in the region bounded by $\gamma$, and moreover in the open neighborhood U of this region. Cauchy provided this proof, but it was later proved by Goursat without requiring techniques from vector calculus, or the continuity of partial derivatives.
- -We can break the integrand $f$, as well as the differential $dz$ into their real and imaginary components: -$$ - f=u+iv -$$ -$$ - dz=dx+idy -$$ - -In this case we have -$$ -\oint_\gamma f(z)dz = \oint_\gamma (u+iv)(dx+idy) = \oint_\gamma (udx-vdy) +i\oint_\gamma (vdx+udy) -$$ - -By Green's theorem, we may then replace the integrals around the closed contour $\gamma$ with an area integral throughout the domain $D$ that is enclosed by $\gamma$ as follows: -$$ -\oint_\gamma (udx-vdy) = \iint_D \left( -\frac{\partial v}{\partial x} -\frac{\partial u}{\partial y} \right) dxdy -$$ -$$ -\oint_\gamma (vdx+udy) = \iint_D \left( \frac{\partial u}{\partial x} -\frac{\partial v}{\partial y} \right) dxdy -$$ - -But as the real and imaginary parts of a function holomorphic in the domain $D$, $u$ and $v$ must satisfy the Cauchy–Riemann equations there: -$$ -\frac{ \partial u }{ \partial x } = \frac{ \partial v }{ \partial y } -$$ -$$ -\frac{ \partial u }{ \partial y } = -\frac{ \partial v }{ \partial x } -$$ - -We therefore find that both integrands (and hence their integrals) are zero -$$ -\iint_D \left( -\frac{\partial v}{\partial x} -\frac{\partial u}{\partial y} \right )dxdy = \iint_D \left( \frac{\partial u}{\partial y} -\frac{\partial u}{\partial y} \right )dxdy =0 -$$ -$$ -\iint_D \left( \frac{\partial u}{\partial x}-\frac{\partial v}{\partial y} \right )dxdy = \iint_D \left( \frac{\partial u}{\partial x}-\frac{\partial u}{\partial x} \right ) dx dy = 0 -$$ - -This gives the desired result -$$ -\oint_\gamma f(z)dz =0 -$$ diff --git a/wiki/wikipedia/999.txt b/wiki/wikipedia/999.txt deleted file mode 100644 index 4266e4231b2f1ad800852e210f3d7036cd453de2..0000000000000000000000000000000000000000 --- a/wiki/wikipedia/999.txt +++ /dev/null @@ -1,11 +0,0 @@ -Hilbert's thirteenth problem is one of the 23 Hilbert problems set out in a celebrated list compiled in 1900 by David Hilbert. It entails proving whether a solution exists for all 7th-degree equations using algebraic (variant: continuous) functions of two arguments. It was first presented in the context of nomography, and in particular "nomographic construction" — a process whereby a function of several variables is constructed using functions of two variables. The variant for continuous functions was resolved affirmatively in 1957 by Vladimir Arnold when he proved the Kolmogorov–Arnold representation theorem, but the variant for algebraic functions remains unresolved. - -William Rowan Hamilton showed in 1836 that any seventh-degree equation can be reduced via radicals to the form $x^7 + ax^3 + bx^2 + cx + 1 = 0$. - -Regarding this equation, Hilbert asked whether its solution, x, considered as a function of the three variables a, b and c, can be expressed as the composition of a finite number of two-variable functions. - -Hilbert originally posed his problem for algebraic functions (Hilbert 1927, "...Existenz von algebraischen Funktionen...", i.e., "...existence of algebraic functions..."; also see Abhyankar 1997, Vitushkin 2004). However, Hilbert also asked in a later version of this problem whether there is a solution in the class of continuous functions. - -A generalization of the second ("continuous") variant of the problem is the following question: can every continuous function of three variables be expressed as a composition of finitely many continuous functions of two variables? The affirmative answer to this general question was given in 1957 by Vladimir Arnold, then only nineteen years old and a student of Andrey Kolmogorov. 
Kolmogorov had shown in the previous year that any function of several variables can be constructed with a finite number of three-variable functions. Arnold then expanded on this work to show that only two-variable functions were in fact required, thus answering Hilbert's question when posed for the class of continuous functions. - -Arnold later returned to the algebraic version of the problem, jointly with Goro Shimura (Arnold and Shimura 1976).